ANALYZE now accepts foreign tables and allows the table's FDW to control
how the sample rows are collected. (But only manual ANALYZEs will touch
foreign tables, for the moment, since among other things it's not very
clear how to handle remote permissions checks in an auto-analyze.)
contrib/file_fdw is extended to support this.
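For illustration (file, server, and table names are hypothetical), a
manual ANALYZE of a file_fdw table now lets the FDW supply the sample
rows:

    CREATE EXTENSION file_fdw;
    CREATE SERVER csv_server FOREIGN DATA WRAPPER file_fdw;
    CREATE FOREIGN TABLE words (word text)
        SERVER csv_server
        OPTIONS (filename '/tmp/words.csv', format 'csv');
    ANALYZE words;    -- file_fdw collects the sample rows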
Etsuro Fujita, reviewed by Shigeru Hanada, some further tweaking by me.
Ants Aasma's original patch to add timing information for buffer I/O
requests exposed this data at the relation level, which was judged too
costly. I've here exposed it at the database level instead.
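Assuming the database-level counters surface as the blk_read_time and
blk_write_time columns of pg_stat_database (reported in milliseconds),
they can be inspected with:

    SELECT datname, blk_read_time, blk_write_time
    FROM pg_stat_database;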
This patch provides a test case for libpq's row processor API.
contrib/dblink can deal with very large result sets by dumping them into
a tuplestore (which can spill to disk) --- but until now, the intermediate
storage of the query result in a PGresult meant memory bloat for any large
result. Now we use a row processor to convert the data to tuple form and
dump it directly into the tuplestore.
A limitation is that this only works for plain dblink() queries, not
dblink_send_query() followed by dblink_get_result(). In the latter
case we don't know the desired tuple rowtype soon enough. While hack
solutions to that are possible, a different user-level API would
probably be a better answer.
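For example, a plain dblink() call such as the following (connection
string and query are placeholders) now streams rows through the row
processor into the tuplestore instead of materializing a full PGresult:

    SELECT *
    FROM dblink('dbname=postgres',
                'SELECT generate_series(1, 10000000)')
           AS t(x int);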
Kyotaro Horiguchi, reviewed by Marko Kreen and Tom Lane
Traditionally libpq has collected an entire query result before passing
it back to the application. That provides a simple and transactional API,
but it's pretty inefficient for large result sets. This patch allows the
application to process each row on-the-fly instead of accumulating the
rows into the PGresult. Error recovery becomes a bit more complex, but
often that tradeoff is well worth making.
Kyotaro Horiguchi, reviewed by Marko Kreen and Tom Lane
pg_stat_statements now hashes selected fields of the analyzed parse tree
to assign a "fingerprint" to each query, and groups all queries with the
same fingerprint into a single entry in the pg_stat_statements view.
In practice it is expected that queries with the same fingerprint will be
equivalent except for values of literal constants. To make the display
more useful, such constants are replaced by "?" in the displayed query
strings.
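For example, with the extension installed (output is illustrative),
two queries differing only in a literal constant now collapse into a
single entry:

    SELECT * FROM accounts WHERE id = 42;
    SELECT * FROM accounts WHERE id = 43;

    SELECT query, calls FROM pg_stat_statements;
    --                query                 | calls
    -- -------------------------------------+-------
    --  SELECT * FROM accounts WHERE id = ? |     2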
This mechanism currently supports only optimizable queries (SELECT,
INSERT, UPDATE, DELETE). Utility commands are still matched on the
basis of their literal query strings.
There remain some open questions about how to deal with utility statements
that contain optimizable queries (such as EXPLAIN and SELECT INTO) and how
to deal with expiring speculative hashtable entries that are made to save
the normalized form of a query string. However, fixing these issues should
require only localized changes, and since there are other open patches
involving contrib/pg_stat_statements, it seems best to go ahead and commit
what we've got.
Peter Geoghegan, reviewed by Daniel Farina
This adds instrumentation to time buffer I/O requests. Currently, the
only way to see the numbers it gathers is via
EXPLAIN (ANALYZE, BUFFERS), but the plan is to add visibility through
the stats collector and pg_stat_statements in subsequent patches.
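A sketch of the usage, assuming the GUC name track_io_timing and an
illustrative table name:

    SET track_io_timing = on;   -- timing collection is off by default
    EXPLAIN (ANALYZE, BUFFERS) SELECT count(*) FROM big_table;
    -- plan nodes that touch disk now report a line like:
    --   I/O Timings: read=123.456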
Ants Aasma, reviewed by Greg Smith, with some further changes by me.
Fix loss of previous expression-simplification work when a transform
function fires: we must not simply revert to untransformed input tree.
Instead build a dummy FuncExpr node to pass to the transform function.
This has the additional advantage of providing a simpler, more uniform
API for transform functions.
Move documentation to a somewhat less buried spot, relocate some
poorly-placed code, be more wary of null constants and invalid typmod
values, add an opr_sanity check on protransform function signatures,
and some other minor cosmetic adjustments.
Note: although this patch touches pg_proc.h, no need for catversion
bump, because the changes are cosmetic and don't actually change the
intended catalog contents.
Per a suggestion from Euler Taveira, it seems like a good idea to
include the role's password expiration time in \du (and \dg) output.
This costs nothing for people
who are not using the VALID UNTIL feature, while for those who are, it's
rather critical information.
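For example (role name is illustrative):

    CREATE ROLE alice LOGIN PASSWORD 'secret' VALID UNTIL '2013-01-01';

    -- in psql; the expiration date now appears in the listing:
    \du alice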
Fabrízio de Royes Mello
PGAC_PATH_COLLATEINDEX supposed that it could use AC_PATH_PROGS to search
for collateindex.pl, but that macro will only accept files that are marked
executable, and at least some DocBook installations don't mark the script
executable (a case the docs Makefile was already prepared for). Accept the
script if it's present and readable in $DOCBOOKSTYLE/bin, and otherwise
search the PATH as before.
Having fixed that up, we don't need the fallback case that was in the docs
Makefile, and instead can throw an understandable error if configure didn't
find the script. Per recent trouble report from John Lumby.
Document that routine vacuuming is now also important for the purpose
of index-only scans; and mention in the section that describes the
visibility map that it is used to implement index-only scans.
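A minimal sketch (table and index names are illustrative) of why the
visibility map matters here:

    VACUUM ANALYZE mytable;   -- keeps the visibility map current
    EXPLAIN SELECT id FROM mytable WHERE id < 100;
    -- with an up-to-date visibility map, this can produce:
    --   Index Only Scan using mytable_pkey on mytable ...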
Marti Raudsepp, with some changes by me.
Instead of just stopping after removing an arbitrary subset of orphaned
large objects, commit and start a new transaction after each -l objects.
This is just as effective as the original patch at limiting the number of
locks used, and it doesn't require doing the OID collection process
repeatedly to get everything. Since the option no longer changes the
fundamental behavior of vacuumlo, and it avoids a known server-side
limitation, enable it by default (with a default limit of 1000 LOs per
transaction).
In passing, be more careful about properly quoting the names of tables
and fields, and do some other cosmetic cleanup.
This is intended as infrastructure to allow sepgsql to cooperate with
connection pooling software, by allowing the effective security label
to be set for each new connection.
KaiGai Kohei, reviewed by Yeb Havinga.
add ability to control permissions of created files
have psql echo its queries for easier debugging
output four separate log files, and delete them on success
add -r/--retain option to keep log files after success
make log files append-only
remove -g/-G/-l logging options
suggest tailing the appropriate log file on failure
enhance -v/--verbose behavior
This patch fixes the other major compatibility-breaking limitation of
SPGiST, that it didn't store anything for null values of the indexed
column, and so could not support whole-index scans or "x IS NULL"
tests. The approach is to create a wholly separate search tree for
the null entries, and use fixed "allTheSame" insertion and search
rules when processing this tree, instead of calling the index opclass
methods. This way the opclass methods do not need to worry about
dealing with nulls.
Catversion bump is for pg_am updates as well as the change in on-disk
format of SPGiST indexes; there are some tweaks in SPGiST WAL records
as well.
Heavily rewritten version of a patch by Oleg Bartunov and Teodor Sigaev.
(The original also stored nulls separately, but it reused GIN code to do
so; which required undesirable compromises in the on-disk format, and
would likely lead to bugs due to the GIN code being required to work in
two very different contexts.)
The original API definition was incapable of supporting whole-index scans
because there was no way to invoke leaf-value reconstruction without
checking any qual conditions. Also, it was inefficient for
multiple-qual-condition scans because value reconstruction got done over
again for each qual condition, and because other internal work in the
consistent functions likewise had to be done for each qual. To fix these
issues, pass the whole scankey array to the opclass consistent functions,
instead of only letting them see one item at a time. (Essentially, the
loop over scankey entries is now inside the consistent functions not
outside them. This makes the consistent functions a bit more complicated,
but not unreasonably so.)
In itself this commit does nothing except save a few cycles in
multiple-qual-condition index scans, since we can't support whole-index
scans on SPGiST indexes until nulls are included in the index. However,
I consider this a must-fix for 9.2 because once we release, it will get
very much harder to change the opclass API definition.
Further reflection shows that a single callback isn't very workable if we
desire to let FDWs generate multiple Paths, because that forces the FDW to
do all work necessary to generate a valid Plan node for each Path. Instead
split the former PlanForeignScan API into three steps: GetForeignRelSize,
GetForeignPaths, GetForeignPlan. We had already bit the bullet of breaking
the 9.1 FDW API for 9.2, so this shouldn't cause very much additional pain,
and it's substantially more flexible for complex FDWs.
Add an fdw_private field to RelOptInfo so that the new functions can save
state there rather than possibly having to recalculate information two or
three times.
In addition, we'd not thought through what would be needed to allow an FDW
to set up subexpressions of its choice for runtime execution. We could
treat ForeignScan.fdw_private as an executable expression but that seems
likely to break existing FDWs unnecessarily (in particular, it would
restrict the set of node types allowable in fdw_private to those supported
by expression_tree_walker). Instead, invent a separate field fdw_exprs
which will receive the postprocessing appropriate for expression trees.
(One field is enough since it can be a list of expressions; also, we assume
the corresponding expression state tree(s) will be held within fdw_state,
so we don't need to add anything to ForeignScanState.)
Per review of Hanada Shigeru's pgsql_fdw patch. We may need to tweak this
further as we continue to work on that patch, but to me it feels a lot
closer to being right now.
GetForeignColumnOptions provides some abstraction for accessing
column-specific FDW options, on a par with the access functions that were
already provided here for other FDW-related information.
Adjust file_fdw.c to use GetForeignColumnOptions instead of equivalent
hand-rolled code.
In addition, add some SGML documentation for the functions exported by
foreign.c that are meant for use by FDW authors.
(This is the fdw_helper portion of the proposed pgsql_fdw patch.)
Hanada Shigeru, reviewed by KaiGai Kohei
The original API specification only allowed an FDW to create a single
access path, which doesn't seem like a terribly good idea in hindsight.
Instead, move the responsibility for building the Path node and calling
add_path() into the FDW's PlanForeignScan function. Now, it can do that
more than once if appropriate. There is no longer any need for the
transient FdwPlan struct, so get rid of that.
Etsuro Fujita, Shigeru Hanada, Tom Lane
In backup.sgml, point out that you need to be using the logging collector
if you want to log messages from a failing archive_command script. (This
is an oversimplification, in that it will work without the collector as
long as you're not sending postmaster stderr to /dev/null; but it seems
like a good idea to encourage use of the collector to avoid problems
with multiple processes concurrently scribbling on one file.)
In config.sgml, do some wordsmithing of logging_collector discussion.
Per bug #6518 from Janning Vygen
Comparing two xlog locations is useful, for example, when calculating
replication lag.
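Assuming the new function is exposed as pg_xlog_location_diff(), lag
in bytes can be computed like this (pg_stat_replication supplies
replay_location on the primary):

    SELECT client_addr,
           pg_xlog_location_diff(pg_current_xlog_location(),
                                 replay_location) AS lag_bytes
    FROM pg_stat_replication;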
Euler Taveira de Oliveira, reviewed by Fujii Masao, and some cleanups
from me
This patch improves selectivity estimation for the array <@, &&, and @>
(containment and overlaps) operators. It enables collection of statistics
about individual array element values by ANALYZE, and introduces
operator-specific estimators that use these stats. In addition,
ScalarArrayOpExpr constructs of the forms "const = ANY/ALL (array_column)"
and "const <> ANY/ALL (array_column)" are estimated by treating them as
variants of the containment operators.
Since we still collect scalar-style stats about the array values as a
whole, the pg_stats view is expanded to show both these stats and the
array-style stats in separate columns. This creates an incompatible change
in how stats for tsvector columns are displayed in pg_stats: the stats
about lexemes are now displayed in the array-related columns instead of the
original scalar-related columns.
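Assuming the new pg_stats columns are named most_common_elems,
most_common_elem_freqs, and elem_count_histogram, the element-level
stats can be examined like this (table name illustrative):

    CREATE TABLE docs (tags text[]);
    -- ... load data, then: ANALYZE docs;
    SELECT most_common_elems, most_common_elem_freqs
    FROM pg_stats
    WHERE tablename = 'docs' AND attname = 'tags';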
There are a few loose ends here, notably that it'd be nice to be able to
suppress either the scalar-style stats or the array-element stats for
columns for which they're not useful. But the patch is in good enough
shape to commit for wider testing.
Alexander Korotkov, reviewed by Noah Misch and Nathan Boley
The only reason this didn't work before was that parserOpenTable()
rejects composite types. So use relation_openrv() directly and
manually do the errposition() setup that parserOpenTable() does.
For those of us who prefer the formatting of the docs using the
website stylesheets. Use "make STYLE=website draft" (for example) to use.
The stylesheet itself is referenced directly to the website, so there
is currently no copy of it stored in the source repository. Thus, docs
built with it will only look correct if the browser can access the website
when viewing them.
The <literal> markup is not visible as distinct on man pages, which
creates a bit of confusion when looking at the documentation of the
pg_basebackup -l option. Rather than reinventing the entire font
system for man pages to remedy this, just put some quotes around this
particular case, which should also help in other output formats.
Several places were still written as though standard_conforming_strings
didn't exist, much less be the default. Now that it is on by default,
we can simplify the text and just insert occasional notes suggesting that
you might have to think harder if it's turned off. Per discussion of a
suggestion from Hannes Frederic Sowa.
Back-patch to 9.1 where standard_conforming_strings was made the default.
Most people won't read them individually anyway; it's an easy way to find
them, and it's a lot of duplicated information if they are kept in two
different places.
This makes it easier to match a column name with the description of it,
and makes it possible to add more detailed documentation in the future.
This patch does not add that extra documentation at this point, only
the structure required for it.
Modeled on the changes already done to pg_stat_activity.
This check was overlooked when we added function execute permissions to the
system years ago. For an ordinary trigger function it's not a big deal,
since trigger functions execute with the permissions of the table owner,
so they couldn't do anything the user issuing the CREATE TRIGGER couldn't
have done anyway. However, if a trigger function is SECURITY DEFINER,
that is not the case. The lack of checking would allow another user to
install it on his own table and then invoke it with, essentially, forged
input data; which the trigger function is unlikely to realize, so it might
do something undesirable, for instance insert false entries in an audit log
table.
Reported by Dinesh Kumar, patch by Robert Haas
Security: CVE-2012-0866
This allows changing the location of the files that were previously
hard-coded to server.crt, server.key, root.crt, root.crl.
server.crt and server.key continue to be the default settings and are
thus required to be present by default if SSL is enabled. But the
settings for the server-side CA and CRL are now empty by default, and
if they are set, the files are required to be present. This replaces
the previous behavior of ignoring the functionality if the files were
not found.
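For example, in postgresql.conf (paths are illustrative):

    ssl_cert_file = 'server.crt'
    ssl_key_file  = 'server.key'
    ssl_ca_file   = 'root.crt'
    ssl_crl_file  = ''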
Some line feeds are added to target lists and from lists to make
them more readable. By default they wrap at 80 columns if possible,
but the wrap column is also selectable; if 0, it wraps after every
item.
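Assuming the wrap column is exposed as a second argument to
pg_get_viewdef(), usage looks like this (view name illustrative):

    SELECT pg_get_viewdef('my_view'::regclass, 80);  -- wrap at 80
    SELECT pg_get_viewdef('my_view'::regclass, 0);   -- one item per line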
Andrew Dunstan, reviewed by Hitoshi Harada.
We don't normally allow quals to be pushed down into a view created
with the security_barrier option, but functions without side effects
are an exception: they're OK. This allows much better performance in
common cases, such as when using an equality operator (that might
even be indexable).
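A sketch (table and column names are illustrative) of the case this
speeds up:

    CREATE VIEW visible_rows WITH (security_barrier) AS
        SELECT * FROM people WHERE NOT hidden;

    -- the integer equality function has no side effects, so the qual
    -- can be pushed below the barrier and use an index on people.id:
    SELECT * FROM visible_rows WHERE id = 42;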
There is an outstanding issue here with the CREATE FUNCTION / ALTER
FUNCTION syntax: there's no way to use ALTER FUNCTION to unset the
leakproof flag. But I'm committing this as-is so that it doesn't
have to be rebased again; we can fix up the grammar in a future
commit.
KaiGai Kohei, with some wordsmithing by me.
Add new psql settings and command-line options to support setting the
field and record separators for unaligned output to a zero byte, for
easier interfacing with other shell tools.
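Assuming the corresponding \pset names are fieldsep_zero and
recordsep_zero, a psql session can emit NUL-delimited output like this:

    \pset format unaligned
    \pset fieldsep_zero on
    \pset recordsep_zero on
    SELECT datname FROM pg_database;

The output can then be consumed safely with, e.g., xargs -0.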
reviewed by Abhijit Menon-Sen
Sometimes it may be useful to get actual row counts out of EXPLAIN
(ANALYZE) without paying the cost of timing every node entry/exit.
With this patch, you can say EXPLAIN (ANALYZE, TIMING OFF) to get that.
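For example (table name illustrative):

    EXPLAIN (ANALYZE, TIMING OFF) SELECT count(*) FROM big_table;
    -- actual row and loop counts are still reported, but the
    -- per-node timing calls (and their overhead) are skipped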
Tomas Vondra, reviewed by Eric Theise, with minor doc changes by me.
Do not prompt when options were not specified. Assume --no-createdb,
--no-createrole, --no-superuser by default.
Also disable prompting for user name in dropdb, unless --interactive
was specified.
reviewed by Josh Kupershmidt
In dry-run mode, just the name of the file to be removed is printed to
stdout; this is so the user can easily plug it into another program
through a pipe. If debug mode is also specified, a more verbose message
is printed to stderr.
Author: Gabriele Bartolini
Reviewer: Josh Kupershmidt
Like the XML data type, we simply store JSON data as text, after checking
that it is valid. More complex operations such as canonicalization and
comparison may come later, but this is enough for now.
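For example:

    SELECT '{"key": "value", "xs": [1, 2, 3]}'::json;  -- accepted
    SELECT '{"key": }'::json;   -- rejected: invalid input syntax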
There are a few open issues here, such as whether we should attempt to
detect UTF-16 surrogate pairs represented as \uXXXX\uYYYY, but this gets
the basic framework in place.
The sequence USAGE privilege is sufficiently similar to the SQL
standard that it seems reasonable to show in the information schema.
Also add some compatibility notes about it on the GRANT reference
page.
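Assuming the privilege is exposed through
information_schema.usage_privileges, it can be viewed like this
(sequence and role names illustrative):

    GRANT USAGE ON SEQUENCE order_id_seq TO app_user;
    SELECT grantee, object_name, privilege_type
    FROM information_schema.usage_privileges
    WHERE object_type = 'SEQUENCE';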
Add result object functions .colnames, .coltypes, .coltypmods to
obtain information about the result column names and types, which was
previously not possible in the PL/Python SPI interface.
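For example:

    CREATE FUNCTION column_names() RETURNS text
    LANGUAGE plpythonu AS $$
    rv = plpy.execute("SELECT 1 AS a, 'x' AS b")
    return ",".join(rv.colnames())
    $$;

    SELECT column_names();   -- returns 'a,b'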
reviewed by Abhijit Menon-Sen
This patch fixes the planner so that it can generate nestloop-with-
inner-indexscan plans even with one or more levels of joining between
the indexscan and the nestloop join that is supplying the parameter.
The executor was fixed to handle such cases some time ago, but the
planner was not ready. This should improve our plans in many situations
where join ordering restrictions formerly forced complete table scans.
There is probably a fair amount of tuning work yet to be done, because
of various heuristics that have been added to limit the number of
parameterized paths considered. However, we are not going to find out
what needs to be adjusted until the code gets some real-world use, so
it's time to get it in there where it can be tested easily.
Note API change for index AM amcostestimate functions. I'm not aware of
any non-core index AMs, but if there are any, they will need minor
adjustments.
Add counters for number and size of temporary files used
for spill-to-disk queries for each database to the
pg_stat_database view.
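Assuming the new columns are named temp_files and temp_bytes, the
counters can be read with:

    SELECT datname, temp_files, temp_bytes
    FROM pg_stat_database
    ORDER BY temp_bytes DESC;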
Tomas Vondra, review by Magnus Hagander
Allow base backups to be taken from a standby server. The backup
follows the recommended procedure, plus goes to great lengths to
ensure that partial page writes are avoided.
Jun Ishizuka and Fujii Masao, with minor modifications
Add a pg_trigger_depth() function. It reports the depth level of
triggers currently in execution, or zero if not called from inside
a trigger.
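For example, inside a trigger function this can be used to stop
recursive firing (a sketch):

    SELECT pg_trigger_depth();   -- 0 when called from a plain session
    -- in plpgsql: IF pg_trigger_depth() > 1 THEN RETURN NEW; END IF;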
No catversion bump in this patch, but you have to initdb if you want
access to the new function.
Author: Kevin Grittner
Replication occurs only to memory on the standby, not to disk, so
this provides additional performance if the user wishes to reduce
the durability level slightly. Adds the concept of multiple
independent sync rep queues.
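Assuming this mode is selected with synchronous_commit = remote_write,
usage looks like:

    -- per session, or cluster-wide in postgresql.conf:
    SET synchronous_commit = remote_write;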
Fujii Masao and Simon Riggs
This separates the state (running/idle/idle in transaction, etc.) into
its own field ("state"), and leaves the query field containing just the
query text.
The query text will now mean "current query" when a query is running
and "last query" in other states. Accordingly, the field has been
renamed from current_query to query.
Since backwards compatibility was broken anyway by that change, the
procpid field has also been renamed to pid, along with the same field
in pg_stat_replication for consistency.
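Under the new column names, a monitoring query looks like this:

    SELECT pid, state, query
    FROM pg_stat_activity
    WHERE state <> 'idle';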
Scott Mead and Magnus Hagander, review work from Greg Smith
Have pg_relation_size() and friends return NULL rather than an error
if the object does not exist. That avoids errors when the functions
are used in queries like "SELECT pg_relation_size(oid) FROM pg_class",
and a table is dropped concurrently.
Phil Sorber
When creating a child table, or when attaching an existing table as
child of another, we must not allow inheritable constraints to be
merged with non-inheritable ones, because then grandchildren would not
properly get the constraint. This would violate the grandparent's
expectations.
Bugs noted by Robert Haas.
Author: Nikhil Sontakke
Allows a user to use pg_cancel_backend() to cancel queries in
other backends if they are running under the same role.
pg_terminate_backend() still requires superuser permissions.
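For example, to cancel all of the current role's other sessions
(a sketch; relies on the renamed pg_stat_activity columns):

    SELECT pg_cancel_backend(pid)
    FROM pg_stat_activity
    WHERE usename = current_user
      AND pid <> pg_backend_pid();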
Short patch, many authors working on the bikeshed: Magnus Hagander,
Josh Kupershmidt, Edward Muller, Greg Smith.
A privilege that a superuser doesn't have doesn't make much sense, as
a superuser can do whatever he wants through other means, anyway. So
instead of granting the replication privilege to superusers at CREATE
USER time by default, allow replication connections from superusers
whether or not they have the
replication privilege.
Patch by Noah Misch, per discussion on bug report #6264
Point out in the compatibility section that granting grant options to
PUBLIC is not supported by PostgreSQL. This is already mentioned
earlier, but since it concerns the information schema, it might be
worth pointing out explicitly as a compatibility issue.
The original implementation of typed tables (CREATE TABLE ... OF)
interpreted them as a kind of "inheritance" facility and named all
the internal structures
accordingly. This turned out to be very confusing, because it has
nothing to do with the INHERITS feature. So rename all the internal
parser infrastructure, update the comments, adjust the error messages,
and split up the regression tests.
ALTER DOMAIN / DROP CONSTRAINT on a nonexistent constraint name did
not report any error. Now it reports an error. The IF EXISTS option
was added to get the usual behavior of ignoring nonexistent objects to
drop.
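For example (domain and constraint names illustrative):

    CREATE DOMAIN posint AS int CHECK (VALUE > 0);
    ALTER DOMAIN posint DROP CONSTRAINT no_such;            -- now errors
    ALTER DOMAIN posint DROP CONSTRAINT IF EXISTS no_such;  -- notice only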
In commit e2c2c2e8b1 I made use of nested
list structures to show which clauses went with which index columns, but
on reflection that's a data structure that only an old-line Lisp hacker
could love. Worse, it adds unnecessary complication to the many places
that don't much care which clauses go with which index columns. Revert
to the previous arrangement of flat lists of clauses, and instead add a
parallel integer list of column numbers. The places that care about the
pairing can chase both lists with forboth(), while the places that don't
care just examine one list the same as before.
The only real downside to this is that there are now two more lists that
need to be passed to amcostestimate functions in case they care about
column matching (which btcostestimate does, so not passing the info is not
an option). Rather than deal with 11-argument amcostestimate functions,
pass just the IndexPath and expect the functions to extract fields from it.
That gets us down to 7 arguments which is better than 11, and it seems
more future-proof against likely additions to the information we keep
about an index path.
When a view is marked as a security barrier, it will not be pulled up
into the containing query, and no quals will be pushed down into it,
so that no function or operator chosen by the user can be applied to
rows not exposed by the view. Views not configured with this
option cannot provide robust row-level security, but will perform far
better.
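The syntax, with illustrative names:

    CREATE VIEW my_rows WITH (security_barrier) AS
        SELECT * FROM accounts WHERE owner = current_user;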
Patch by KaiGai Kohei; original problem report by Heikki Linnakangas
(in October 2009!). Review (in earlier versions) by Noah Misch and
others. Design advice by Tom Lane and myself. Further review and
cleanup by me.
You could already rename domains using ALTER TYPE, but with this new
command it is more consistent with how other commands treat domains as
a subcategory of types.
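For example (domain name illustrative):

    ALTER DOMAIN posint RENAME TO positive_int;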
This adds support for the more or less SQL-conforming USAGE privilege
on types and domains. The intent is to be able to restrict which users
can create dependencies on types, which restricts the way in which
owners can alter types.
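A sketch with illustrative names:

    REVOKE USAGE ON TYPE mytype FROM PUBLIC;
    GRANT USAGE ON TYPE mytype TO trusted_role;
    -- only trusted_role may now create columns, domains, etc.
    -- that depend on mytype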
reviewed by Yeb Havinga
This makes them enforceable only on the parent table, not on child
tables. This is useful in various situations, per discussion involving
people bitten by the restrictive behavior introduced in 8.4.
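For example (a sketch using the ALTER TABLE ONLY spelling):

    CREATE TABLE parent (x int);
    CREATE TABLE child () INHERITS (parent);
    -- enforced on parent alone; child rows need not satisfy it:
    ALTER TABLE ONLY parent ADD CONSTRAINT x_positive CHECK (x > 0);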
Message-Id: 8762mp93iw.fsf@comcast.net
Message-Id: CAFaPBrSMMpubkGf4zcRL_YL-AERUbYF_-ZNNYfb3CVwwEqc9TQ@mail.gmail.com
Authors: Nikhil Sontakke, Alex Hunsaker
Reviewed by Robert Haas and myself
Operator classes can specify whether or not they support this; this
preserves the flexibility to use lossy representations within an index.
In passing, move constant data about a given index into the rd_amcache
cache area, instead of doing fresh lookups each time we start an index
operation. This is mainly to try to make sure that spgcanreturn() has
insignificant cost; I still don't have any proof that it matters for
actual index accesses. Also, get rid of useless copying of FmgrInfo
pointers; we can perfectly well use the relcache's versions in-place.
The need for this was debated when we put in the index-only-scan feature,
but at the time we had no near-term expectation of having AMs that could
support such scans for only some indexes; so we kept it simple. However,
the SP-GiST AM forces the issue, so let's fix it.
This patch only installs the new API; no behavior actually changes.
SP-GiST is comparable to GiST in flexibility, but supports non-balanced
partitioned search structures rather than balanced trees. As described at
PGCon 2011, this new indexing structure can beat GiST in both index build
time and query speed for search problems that it is well matched to.
There are a number of areas that could still use improvement, but at this
point the code seems committable.
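For example, assuming the point operator class shipped with this
commit (names illustrative):

    CREATE TABLE pts (p point);
    CREATE INDEX pts_spgist_idx ON pts USING spgist (p);
    SELECT * FROM pts WHERE p << point '(100, 0)';  -- strictly left of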
Teodor Sigaev and Oleg Bartunov, with considerable revisions by Tom Lane
Valid values are pre-data, data and post-data. The option can be
given more than once. --schema-only is equivalent to
--section=pre-data --section=post-data. --data-only is equivalent
to --section=data.
Andrew Dunstan, reviewed by Joachim Wieland and Josh Berkus.
This works the same as include, except that an error is not thrown
if the file is missing. Instead the fact that it's missing is
logged.
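For example, in postgresql.conf (file name illustrative):

    include_if_exists = 'local_overrides.conf'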
Greg Smith, reviewed by Euler Taveira de Oliveira.
This allows dumping of a table definition but not its data, on a
per-table basis. Table name patterns are supported just as for
--exclude-table.
Andrew Dunstan, reviewed by Josh Berkus, Robert Haas and Peter Geoghegan.
The spclocation field is removed from pg_tablespace. Instead, add a
function pg_tablespace_location(oid) used to return the same
information, and do this by reading the symbolic link.
Doing it this way makes it possible to relocate a tablespace when the
database is down by simply changing the symbolic link.
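For example:

    SELECT spcname, pg_tablespace_location(oid) FROM pg_tablespace;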
This patch creates an API whereby a btree index opclass can optionally
provide non-SQL-callable support functions for sorting. In the initial
patch, we only use this to provide a directly-callable comparator function,
which can be invoked with a bit less overhead than the traditional
SQL-callable comparator. While that should be of value in itself, the real
reason for doing this is to provide a datatype-extensible framework for
more aggressive optimizations, as in Peter Geoghegan's recent work.
Robert Haas and Tom Lane
If unable to connect to "postgres", try "template1". This allows things to
work more smoothly in the case where the postgres database has been
dropped. And just in case that's not good enough, also allow the user to
specify a maintenance database to be used for the initial connection, to
cover the case where neither postgres nor template1 is suitable.
Add a function plpy.cursor that is similar to plpy.execute but uses an
SPI cursor to avoid fetching the entire result set into memory.
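A sketch (table and column names are illustrative):

    CREATE FUNCTION sum_x() RETURNS bigint
    LANGUAGE plpythonu AS $$
    total = 0
    cur = plpy.cursor("SELECT x FROM big_table")
    while True:
        rows = cur.fetch(1000)   # pull 1000 rows at a time
        if not rows:
            break
        for row in rows:
            total += row["x"]
    return total
    $$;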
Jan Urbański, reviewed by Steve Singer
This can be used to set (or unset) environment variables that will
affect programs called by psql (such as the PAGER), probably most
usefully in a .psqlrc file.
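For example, in ~/.psqlrc (values illustrative); omitting the value
unsets the variable:

    \setenv PAGER less
    \setenv LESS -imx4F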
Andrew Dunstan, reviewed by Josh Kupershmidt.
PGresults used to be read-only from the application's viewpoint, but now
that we've exposed various functions that allow modification of a PGresult,
that sweeping statement is no longer accurate. Noted by Dmitriy Igrishin.
The correct information appears in the text, so just remove the statement
in the table, where it did not fit nicely anyway. (Curiously, the correct
info has been there much longer than the erroneous table entry.)
Resolves problem noted by Daniele Varrazzo.
In HEAD and 9.1, also do a bit of wordsmithing on other text on the page.
The WITH [NO] DATA option was not supported, nor the ability to specify
replacement column names; the former limitation wasn't even documented, as
per recent complaint from Naoya Anzai. Fix by moving the responsibility
for supporting these options into the executor. It actually takes less
code this way ...
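For example (table names illustrative):

    CREATE TABLE orders_copy AS
        SELECT * FROM orders WITH NO DATA;    -- structure, no rows
    CREATE TABLE totals (order_id, total) AS
        SELECT id, sum(amount) FROM order_lines GROUP BY id;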
catversion bump due to change in representation of IntoClause, which might
affect stored rules.
It's not clear that a per-datatype typanalyze function would be any more
useful than a generic typanalyze for ranges. What *is* clear is that
letting unprivileged users select typanalyze functions is a crash risk or
worse. So remove the option from CREATE TYPE AS RANGE, and instead put in
a generic typanalyze function for ranges. The generic function does
nothing as yet, but hopefully we'll improve that before 9.2 release.
Per discussion, the zero-argument forms aren't really worth the catalog
space (just write 'empty' instead). The one-argument forms have some use,
but they also have a serious problem with looking too much like functional
cast notation; to the point where in many real use-cases, the parser would
misinterpret what was wanted.
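For example:

    SELECT 'empty'::int4range;   -- instead of the removed int4range()
    SELECT int4range(1, 5);      -- two-argument forms remain
    -- the removed one-argument form int4range(4) read too much
    -- like a cast of 4 to int4range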
Committing this as a separate patch, with the thought that we might want
to revert part or all of it if we can think of some way around the cast
ambiguity.
For a very long time, one of the parser's heuristics for resolving
ambiguous operator calls has been to assume that unknown-type literals are
of the same type as the other input (if it's known). However, this was
only used in the first step of quickly checking for an exact-types match,
and thus did not help in resolving matches that require coercion, such as
matches to polymorphic operators. As we add more polymorphic operators,
this becomes more of a problem. This patch adds another use of the same
heuristic as a last-ditch check before failing to resolve an ambiguous
operator or function call. In particular this will let us define the range
inclusion operator in a less limited way (to come in a follow-on patch).
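For example, the unknown literal is now assumed to be of the same type
as the other input, so this resolves to the range-vs-range containment
operator instead of failing as ambiguous:

    SELECT int4range(1, 4) @> '[2,3]';   -- '[2,3]' taken as int4range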
A very long time ago, language names were specified as literals rather
than identifiers, so this code was added to do case-folding. But that
style has been deprecated for many years, so this isn't needed any more.
Language names will still be downcased when specified as unquoted
identifiers, but quoted identifiers or the old style using string
literals will be left as-is.
Change range_lower and range_upper to return NULL rather than throwing an
error when the input range is empty or the relevant bound is infinite. Per
discussion, throwing an error seems likely to be unduly hard to work with.
Also, this is more consistent with the behavior of the constructors, which
treat NULL as meaning an infinite bound.
Change range_before, range_after, range_adjacent to return false rather
than throwing an error when one or both input ranges are empty.
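For example:

    SELECT lower('empty'::int4range);    -- now NULL, formerly an error
    SELECT lower(numrange(NULL, 10));    -- NULL: the bound is infinite
    SELECT 'empty'::int4range << int4range(1, 2);   -- now false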
The original definition is unnecessarily difficult to use, and also can
result in undesirable planner failures since the planner could try to
compare an empty range to something else while deriving statistical
estimates. (This was, in fact, the cause of repeatable regression test
failures on buildfarm member jaguar, as well as intermittent failures
elsewhere.)
Also tweak rangetypes regression test to not drop all the objects it
creates, so that the final state of the regression database contains
some rangetype objects for pg_dump testing.
This adds the "auto" option to the \x command, which switches to the
expanded mode when the normal output would be wider than the screen.
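For example:

    \x auto
    SELECT * FROM pg_stat_activity;   -- expanded only if too wide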
reviewed by Noah Misch
This reverts commit 0180bd6180.
contrib/userlock is gone, but user-level locking still exists,
and is exposed via the pg_advisory* family of functions.
Since PostgreSQL 9.0, we've emitted a warning message when an operator
named => is created, because the SQL standard now reserves that token
for another use. But we've also shipped such an operator with hstore.
Use of the function hstore(text, text) has been recommended in
preference to =>(text, text). Per discussion, it's now time to take
the next step and stop shipping the operator. This will allow us to
prohibit the use of => as an operator name in a future release if and
when we wish to support the SQL standard use of this token.
The release notes should mention this incompatibility.
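For example:

    SELECT hstore('a', '1');   -- recommended spelling
    SELECT 'a' => '1';         -- no longer available as an operator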
Patch by me, reviewed by David Wheeler, Dimitri Fontaine and Tom Lane.
Add option for parallel streaming of the transaction log while a
base backup is running, to get the logfiles before the server has
removed them.
Also add a tool called pg_receivexlog, which streams the transaction
log into files, creating a log archive without having to wait for
segments to complete, thus decreasing the window of data loss without
having to waste space using archive_timeout. This works best in
combination with archive_command; suggested usage docs etc. are coming later.
This allows different instances to use the eventlog with different
identifiers, by setting the event_source GUC, similar to how
syslog_ident works.
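For example, in postgresql.conf (identifier illustrative):

    event_source = 'PostgreSQL-cluster-A'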
Original patch by MauMau, heavily modified by Magnus Hagander
A transaction can export a snapshot with pg_export_snapshot(), and then
others can import it with SET TRANSACTION SNAPSHOT. The data does not
leave the server, so there are no security issues. A snapshot can only
be imported while the exporting transaction is still running, and there
are some other restrictions.
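A sketch (the snapshot identifier shown is illustrative):

    -- session 1:
    BEGIN TRANSACTION ISOLATION LEVEL REPEATABLE READ;
    SELECT pg_export_snapshot();   -- returns e.g. '000003F1-1'

    -- session 2, while session 1's transaction is still open:
    BEGIN TRANSACTION ISOLATION LEVEL REPEATABLE READ;
    SET TRANSACTION SNAPSHOT '000003F1-1';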
I'm not totally convinced that we've covered all the bases for SSI (true
serializable) mode, but it works fine for lesser isolation modes.
Joachim Wieland, reviewed by Marko Tiikkaja, and rather heavily modified
by Tom Lane
In general the data returned by an index-only scan should have the
datatypes originally computed by FormIndexDatum. If the index opclasses
use "storage" datatypes different from their input datatypes, the scan
tuple will not have the same rowtype attributed to the index; but we had
a hard-wired assumption that that was true in nodeIndexonlyscan.c. We'd
already hacked around the issue for the one case where the types are
different in btree indexes (btree name_ops), but this would definitely
come back to bite us if we ever implement index-only scans in GiST.
To fix, require the index AM to explicitly provide the tupdesc for the
tuple it is returning. btree can just pass back the index's tupdesc, but
GiST will have to work harder when and if it supports index-only scans.
I had previously proposed fixing this by allowing the index AM to fill the
scan tuple slot directly; but on reflection that seemed like a module
layering violation, since TupleTableSlots are creatures of the executor.
At least in the btree case, it would also be less efficient, since the
tuple deconstruction work would occur even for rows later found to be
invisible to the scan's snapshot.
Rearrange text to improve clarity, and add an example of implicit reference
to a plpgsql variable in a bound cursor's query. Byproduct of some work
I'd done on the "named cursor parameters" patch before giving up on it.