data type. This patch takes the approach of allowing an optional hyphen after
each group of four hex digits.
Author: Robert Haas <robertmhaas@gmail.com>
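For illustration, both of the following inputs should now be accepted for the uuid
type (the literal values here are arbitrary):
    -- standard 8-4-4-4-12 grouping, accepted as before
    SELECT 'a0eebc99-9c0b-4ef8-bb6d-6bb9bd380a11'::uuid;
    -- relaxed form: an optional hyphen after each group of four hex digits
    SELECT 'a0ee-bc99-9c0b-4ef8-bb6d-6bb9-bd38-0a11'::uuid;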
upon requests from backends, rather than on a fixed 500msec cycle. (There's
still throttling logic to ensure it writes no more often than once per
500msec, though.) This should result in a significant reduction in stats file
write traffic in typical scenarios where the stats are demanded only
infrequently.
This approach also means that the former difficulty with changing
stats_temp_directory on-the-fly has gone away, so remove the caution about
that as well as the thrashing we did to minimize the trouble window.
In passing, also fix pgstat_report_stat() so that we will send a stats
message if we have function call stats but not table stats to report;
this fixes a bug in the recent patch to support function-call stats.
Martin Pihlak
RETURNING clause, not just a SELECT as formerly.
A side effect of this patch is that when a set-returning SQL function is used
in a FROM clause, performance is improved because the output is collected into
a tuplestore within the function, rather than using the less efficient
value-per-call mechanism.
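As a minimal sketch (the items table here is hypothetical), the final statement of a
SQL function can now be a data-modifying statement with RETURNING, and its output
becomes the function's result:
    -- hypothetical table, for illustration only
    CREATE TABLE items (id serial PRIMARY KEY, name text);
    CREATE FUNCTION add_item(item_name text) RETURNS integer AS $$
        INSERT INTO items (name) VALUES ($1) RETURNING id;
    $$ LANGUAGE sql;
    SELECT add_item('widget');   -- returns the newly assigned id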
BSD sed. So write it in Perl, which is more portable and a bit faster, too.
We already use Perl for standard documentation builds, so this imposes no
additional requirement.
via a tuplestore instead of value-per-call. Refactor a few things to reduce
ensuing code duplication with nodeFunctionscan.c. This represents the
reasonably noncontroversial part of my proposed patch to switch SQL functions
over to returning tuplestores. For the moment, SQL functions still do things
the old way. However, this change enables PL SRFs to be called in targetlists
(observe changes in plperl regression results).
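For example (assuming PL/Perl is installed; the function name is made up), a
set-returning PL function can now be called in a targetlist as well as in FROM:
    CREATE FUNCTION perl_vals() RETURNS SETOF integer AS $$
        return [1, 2, 3];      -- PL/Perl SRFs may return an array reference
    $$ LANGUAGE plperl;
    SELECT perl_vals();        -- SRF in the targetlist, now supported
    SELECT * FROM perl_vals(); -- SRF in FROM, as before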
recursion when we are unable to convert a localized error message to the
client's encoding. We've been over this ground before, but as reported by
Ibrar Ahmed, it still didn't work in the case of conversion failures for
the conversion-failure message itself :-(. Fix by installing a "circuit
breaker" that disables attempts to localize this message once we get into
recursion trouble.
Patch all supported branches, because it is in fact broken in all of them;
though I had to add some missing translations to the older branches in
order to expose the failure in the particular test case I was using.
after each other (since we already add a newline on each, this makes them
multiline).
Previously a new error would just overwrite the old one, so for example any
error caused when trying to connect with SSL enabled would be overwritten
by the error message from the non-SSL connection when using sslmode=prefer.
* make LDAP use this instead of the hacky previous method to specify
the DN to bind as
* make all auth options behave the same when they are not compiled
into the server
* rename "ident maps" to "user name maps", and support them for all
auth methods that provide an external username
This makes a backwards-incompatible change in the format of pg_hba.conf
for the ident, PAM, and LDAP authentication methods.
scanning; GiST and GIN do not, and it seems like too much trouble to make
them do so. By teaching ExecSupportsBackwardScan() about this restriction,
we ensure that the planner will protect a scroll cursor from the problem
by adding a Materialize node.
In passing, fix another longstanding bug in the same area: backwards scan of
a plan with set-returning functions in the targetlist did not work either,
since the TupFromTlist expansion code pays no attention to direction (and
has no way to run a SRF backwards anyway). Again the fix is to make
ExecSupportsBackwardScan check this restriction.
Also adjust the index AM API specification to note that mark/restore support
is unnecessary if the AM can't produce ordered output.
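The user-visible case is a scroll cursor; a quick illustration of a plan that cannot
natively run backwards (an SRF in the targetlist) now working under FETCH BACKWARD:
    BEGIN;
    -- the planner adds a Materialize node to protect the scroll cursor
    DECLARE c SCROLL CURSOR FOR SELECT generate_series(1, 10);
    FETCH FORWARD 5 FROM c;
    FETCH BACKWARD 2 FROM c;
    CLOSE c;
    COMMIT;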
the timestamp types. Turns out this doesn't even reduce the available
range of dates, since the restriction to dates that work for Julian-date
arithmetic is much tighter than the int32 range anyway. Per a longstanding
TODO item.
depth-first search order. Upon close reading of SQL:2008, it seems that the
spec's SEARCH DEPTH FIRST and SEARCH BREADTH FIRST options do not actually
guarantee any particular result order: what they do is provide a constructed
column that the user can then sort on in the outer query. So this is actually
just as much functionality ...
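In that spirit, a depth-first presentation order can be had by carrying a path column
and sorting on it in the outer query; a sketch, using a made-up table tree(id, parent_id):
    -- hypothetical table: tree(id int, parent_id int)
    WITH RECURSIVE subtree(id, path) AS (
        SELECT t.id, ARRAY[t.id] FROM tree t WHERE t.parent_id IS NULL
      UNION ALL
        SELECT t.id, s.path || t.id
        FROM tree t JOIN subtree s ON t.parent_id = s.id
    )
    SELECT id, path FROM subtree
    ORDER BY path;   -- sorting on the constructed column gives depth-first order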
pseudo-type record[] to represent arrays of possibly-anonymous composite
types. Since composite datums carry their own type identification, no
extra knowledge is needed at the array level.
The main reason for doing this right now is that it is necessary to support
the general case of detection of cycles in recursive queries: if you need to
compare more than one column to detect a cycle, you need to compare a ROW()
to an array built from ROW()s, at least if you want to do it as the spec
suggests. Add some documentation and regression tests concerning the cycle
detection issue.
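For example (a sketch along the lines the spec suggests, with a made-up table
graph(id, link)): accumulate an array of ROW()s and test each new row against it:
    -- hypothetical table: graph(id int, link int), possibly containing cycles;
    -- two key columns are used purely to show the ROW()/record[] comparison
    WITH RECURSIVE search_graph(id, link, path, is_cycle) AS (
        SELECT g.id, g.link, ARRAY[ROW(g.id, g.link)], false
        FROM graph g
      UNION ALL
        SELECT g.id, g.link,
               sg.path || ROW(g.id, g.link),
               ROW(g.id, g.link) = ANY(sg.path)   -- record compared to record[]
        FROM graph g JOIN search_graph sg ON g.id = sg.link
        WHERE NOT sg.is_cycle
    )
    SELECT * FROM search_graph;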
implementation uses an in-memory hash table, so it will poop out for very
large recursive results ... but the performance characteristics of a
sort-based implementation would be pretty unpleasant too.
relation forks. While the file names are not visible to users, for those
that do peek into the data directory, it's nice to have more descriptive
names. Per Greg Stark's suggestion.
There are some unimplemented aspects: recursive queries must use UNION ALL
(should allow UNION too), and we don't have SEARCH or CYCLE clauses.
These might or might not get done for 8.4, but even without them it's a
pretty useful feature.
There are also a couple of small loose ends and definitional quibbles,
which I'll send a memo about to pgsql-hackers shortly. But let's land
the patch now so we can get on with other development.
Yoshiyuki Asaba, with lots of help from Tatsuo Ishii and Tom Lane
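A minimal example of what is supported now (note the recursive term must use UNION ALL):
    WITH RECURSIVE t(n) AS (
        VALUES (1)
      UNION ALL
        SELECT n + 1 FROM t WHERE n < 5
    )
    SELECT n FROM t;   -- returns 1 through 5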
name of a fork ('main' or 'fsm', at the moment) to pg_relation_size() to
get the size of a specific fork. Defaults to 'main', if none given.
While we're at it, modify pg_relation_size to take a regclass as argument,
instead of separate variants taking oid and name. This change is
transparent to typical use where the table name is passed as a string
literal, like pg_relation_size('table'), but will break queries like
pg_relation_size(namecol), where namecol is of type name. text-type input
still works, and using a non-schema-qualified table name is not very
reliable anyway, so this is unlikely to break anyone's queries in practice.
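Illustrative calls (the table name is made up); the new second argument names the fork:
    SELECT pg_relation_size('mytable');           -- defaults to the 'main' fork
    SELECT pg_relation_size('mytable', 'fsm');    -- size of the free space map fork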
large enough for block numbers higher than 2^31. The old pre-FSM-rewrite
pg_freespacemap implementation got this right. While we're at it, remove
some unnecessary #includes.
free space information is stored in a dedicated FSM relation fork, one for each
relation (except hash indexes, which don't use the FSM).
This eliminates the max_fsm_relations and max_fsm_pages GUC options; all traces of
them have been removed from the backend, initdb, and the documentation.
Rewrite contrib/pg_freespacemap to match the new FSM implementation. Also
introduce a new variant of the get_raw_page(regclass, int4, int4) function in
contrib/pageinspect that lets you return pages from any relation fork, and
a new fsm_page_contents() function to inspect the new FSM pages.
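A rough usage sketch of the new inspection functions; this follows the form documented
for released pageinspect versions, where the fork is named by a text argument, and the
table name is made up:
    -- fetch page 0 of a table's FSM fork and decode it
    SELECT fsm_page_contents(get_raw_page('mytable', 'fsm', 0));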
ctype are now more like encoding, stored in new datcollate and datctype
columns in pg_database.
This is a stripped-down version of Radek Strnad's patch, with further
changes by me.
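The per-database settings can simply be inspected from the catalog, e.g.:
    SELECT datname, datcollate, datctype FROM pg_database;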
that presence of the password in the conninfo string must be checked *before*
risking a connection attempt, there is no point in checking it afterwards.
This makes the specification of PQconnectionUsedPassword() a bit simpler
and perhaps more generally useful, too.
conninfo string *before* trying to connect to the remote server, not after.
As pointed out by Marko Kreen, in certain not-very-plausible situations
this could result in sending a password from the postgres user's .pgpass file,
or other places that non-superusers shouldn't have access to, to an
untrustworthy remote server. The cleanest fix seems to be to expose libpq's
conninfo-string-parsing code so that dblink can check for a password option
without duplicating the parsing logic.
Joe Conway, with a little cleanup by Tom Lane
sequence of operations that libpq goes through while creating a PGresult.
Also, remove ill-considered "const" decoration on parameters passed to
event procedures.
guarantees about whether event procedures will receive DESTROY events.
They no longer need to defend themselves against getting a DESTROY
without a successful prior CREATE.
Andrew Chernow
value. This means that hash index lookups are always lossy and have to be
rechecked when the heap is visited; however, the gain in index compactness
outweighs this when the indexed values are wide. Also, we only need to
perform datatype comparisons when the hash codes match exactly, rather than
for every entry in the hash bucket; so it could also win for datatypes that
have expensive comparison functions. A small additional win is gained by
keeping hash index pages sorted by hash code and using binary search to reduce
the number of index tuples we have to look at.
Xiao Meng
This commit also incorporates Zdenek Kotala's patch to isolate hash metapages
and hash bitmaps a bit better from the page header data structures.
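For illustration (table and column names are made up), the win comes on wide indexed
values, since only the 32-bit hash code is stored and candidate matches are rechecked
against the heap:
    CREATE TABLE documents (url text, body text);   -- hypothetical table
    -- the index stores hash codes only, not the (possibly long) url values
    CREATE INDEX documents_url_hash ON documents USING hash (url);
    SELECT * FROM documents WHERE url = 'http://example.com/some/long/path';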
each connection. This makes it possible to catch errors in the pg_hba
file when it's being reloaded, instead of silently reloading a broken
file and failing only when a user tries to connect.
This patch also makes the "sameuser" argument to ident authentication
optional.
and the literal syntax INTERVAL 'string' ... SECOND(n), as required by the
SQL standard. Our old syntax put (n) directly after INTERVAL, which was
a mistake, but will still be accepted for backward compatibility as well
as symmetry with the TIMESTAMP cases.
Change intervaltypmodout to show it in the spec's way, too. (This could
potentially affect clients, if there are any that analyze the typmod of an
INTERVAL in any detail.)
Also fix interval input to handle 'min:sec.frac' properly; I had overlooked
this case in my previous patch.
Document the use of the interval fields qualifier, which up to now we had
never mentioned in the docs. (I think the omission was intentional because
it didn't work per spec; but it does now, or at least close enough to be
credible.)
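For example, the spec's placement of the precision is now accepted in both column
definitions and literals (the table here is made up; the pre-existing interval(n)
column spelling also remains accepted):
    CREATE TABLE events (elapsed interval HOUR TO SECOND(3));   -- hypothetical table
    SELECT INTERVAL '10.123456' SECOND(2);              -- rounds to 10.12 seconds
    SELECT INTERVAL '2 12:30:45.6789' DAY TO SECOND(1);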
for editing if no function name is specified. This seems a much cleaner way
to offer that functionality than the original patch had. In passing,
de-clutter the error displays that are given for a bogus function-name
argument, and standardize on "$function$" as the default delimiter for the
function body. (The original coding would use the shortest possible
dollar-quote delimiter, which seems to create unnecessarily high risk of
later conflicts with the user-modified function body.)
In support of that, create a backend function pg_get_functiondef().
The psql command is functional but maybe a bit rough around the edges...
Abhijit Menon-Sen
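Usage sketch (the function name is made up); the backend function can also be called
directly:
    -- reconstruct a CREATE OR REPLACE FUNCTION command for an existing function
    SELECT pg_get_functiondef('add_item(text)'::regprocedure);
    -- in psql, \ef add_item(text) opens the same text in an editor,
    -- and plain \ef opens a blank CREATE FUNCTION template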
debug_print_plan to appear at LOG message level, not DEBUG1 as historically.
Make debug_pretty_print default to on. Also, cause plans generated via
EXPLAIN to be subject to debug_print_plan. This is all to make
debug_print_plan a reasonably comfortable substitute for the former behavior
of EXPLAIN VERBOSE.
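To see that output in a client session, something like the following should suffice:
    SET debug_print_plan = on;
    SET debug_pretty_print = on;     -- now the default anyway
    SET client_min_messages = log;   -- let the LOG-level output reach the client
    SELECT 1 + 1;                    -- the full plan tree is emitted as a LOG message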
While at it, mark a couple of items completed in 8.4:
! o -Prevent long-lived temporary tables from causing frozen-xid
advancement starvation
! * -Improve performance of shared invalidation queue for multiple CPUs
Also remove a couple of obsolete assignments.
>
> * Fix all set-returning system functions so they support a wildcard
> target list
>
> SELECT * FROM pg_get_keywords() works but SELECT * FROM
> pg_show_all_settings() does not.