We copy all the matched tuples off the page during _bt_readpage, instead of
expensively re-locking the page during each subsequent tuple fetch. This
costs a bit more local storage, but not more than 2*BLCKSZ worth, and the
reduction in LWLock traffic is certainly worth that. What's more, this
lets us get rid of the API wart in the original patch that said an index AM
could randomly decline to supply an index tuple despite having asserted
pg_am.amcanreturn. That will be important for future improvements in the
index-only-scan feature, since the executor will now be able to rely on
having the index data available.
When a btree index contains all columns required by the query, and the
visibility map shows that all tuples on a target heap page are
visible-to-all, we don't need to fetch that heap page. This patch depends
on the previous patches that made the visibility map reliable.
There's a fair amount left to do here, notably trying to figure out a less
chintzy way of estimating the cost of an index-only scan, but the core
functionality seems ready to commit.
Robert Haas and Ibrar Ahmed, with some previous work by Heikki Linnakangas.
This variable provides only marginal error-prevention capability (since
it can only check the prefix of a qualified GUC name), and the consensus
is that that isn't worth the amount of hassle that maintaining the setting
creates for DBAs. So, let's just remove it.
With this commit, the system will silently accept a value for any qualified
GUC name at all, whether it has anything to do with any known extension or
not. (Unqualified names still have to match known built-in settings,
though; and you will get a WARNING at extension load time if there's an
unrecognized setting with that extension's prefix.)
There's still some discussion ongoing about whether to tighten that up and
if so how; but if we do come up with a solution, it's not likely to look
anything like custom_variable_classes.
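To illustrate the extension-side mechanism (a minimal sketch; the prefix "myext" and the setting name are hypothetical, and this assumes the usual loadable-module GUC registration pattern), an extension reserves its prefix and triggers the load-time WARNING for unrecognized settings roughly like this:

    #include "postgres.h"
    #include "fmgr.h"
    #include "utils/guc.h"

    PG_MODULE_MAGIC;

    static bool myext_enable_foo = false;   /* hypothetical setting */

    void _PG_init(void);

    void
    _PG_init(void)
    {
        /* Register "myext.enable_foo" as a real setting */
        DefineCustomBoolVariable("myext.enable_foo",
                                 "Enables hypothetical foo behavior.",
                                 NULL,
                                 &myext_enable_foo,
                                 false,
                                 PGC_USERSET,
                                 0,
                                 NULL, NULL, NULL);

        /* WARN about any other "myext.*" placeholders already set */
        EmitWarningsOnPlaceholders("myext");
    }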
This patch has two distinct purposes: to report multiple problems in
postgresql.conf rather than always bailing out after the first one,
and to change the policy for whether changes are applied when there are
unrelated errors in postgresql.conf.
Formerly the policy was to apply no changes if any errors could be
detected, but that had a significant consistency problem, because in some
cases specific values might be seen as valid by some processes but invalid
by others. This meant that the latter processes would fail to adopt
changes in other parameters even though the former processes had done so.
The new policy is that during SIGHUP, the file is rejected as a whole
if there are any errors in the "name = value" syntax, or if any lines
attempt to set nonexistent built-in parameters, or if any lines attempt
to set custom parameters whose prefix is not listed in (the new value of)
custom_variable_classes. These tests should always give the same results
in all processes, and provide what seems a reasonably robust defense
against loading values from badly corrupted config files. If these tests
pass, all processes will apply all settings that they individually see as
good, ignoring (but logging) any they don't.
In addition, the postmaster does not abandon reading a configuration file
after the first syntax error, but continues to read the file and report
syntax errors (up to a maximum of 100 syntax errors per file).
The postmaster will still refuse to start up if the configuration file
contains any errors at startup time, but these changes allow multiple
errors to be detected and reported before quitting.
Alexey Klyukin, reviewed by Andy Colson and av (Alexander ?)
with some additional hacking by Tom Lane
We'll now use "exists" for EXISTS(SELECT ...), "array" for ARRAY(SELECT
...), or the sub-select's own result column name for a simple expression
sub-select. Previously, you usually got "?column?" in such cases.
Marti Raudsepp, reviewed by Kyotaro Horiguchi
pg_trgm was already doing this unofficially, but the implementation hadn't
been thought through very well and leaked memory. Restructure the core
GiST code so that it actually works, and document it. Ordinarily this
would have required an extra memory context creation/destruction for each
GiST index search, but I was able to avoid that in the normal case of a
non-rescanned search by finessing the handling of the RBTree. It used to
have its own context always, but now shares a context with the
scan-lifespan data structures, unless there is more than one rescan call.
This should make the added overhead unnoticeable in typical cases.
I've made a significant effort at filling in the "Using EXPLAIN" section
to be reasonably complete about mentioning everything that EXPLAIN can
output, including the "Rows Removed" outputs that were added by Marko
Tiikkaja's recent documentation-free patch. I also updated the examples to
be consistent with current behavior; several of them were not close to what
the current code will do. No doubt there's more that can be done here, but
I'm out of patience for today.
Because these tests require root privileges, not to mention invasive
changes to the security configuration of the host system, it's not
reasonable for them to be invoked by a regular "make check" or "make
installcheck". Instead, dike out the Makefile's knowledge of the tests,
and change chkselinuxenv (now renamed "test_sepgsql") into a script that
verifies the environment is workable and then runs the tests. It's
expected that test_sepgsql will only be run manually.
While at it, do some cleanup in the error checking in the script, and
do some wordsmithing in the documentation.
The keywords and values arguments of these functions are more properly
declared "const char * const *" than just "const char **".
Lionel Elie Mamane, reviewed by Craig Ringer
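For reference, with the const-corrected declarations a caller can pass fully-const arrays without casts; a minimal sketch (the connection parameters are illustrative):

    #include <stddef.h>
    #include <libpq-fe.h>

    /* PQconnectdbParams(const char *const *keywords,
     *                   const char *const *values, int expand_dbname) */
    static PGconn *
    connect_example(void)
    {
        const char *const keywords[] = {"host", "dbname", NULL};
        const char *const values[]   = {"localhost", "postgres", NULL};

        return PQconnectdbParams(keywords, values, 0);
    }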
This mode still exists for backwards compatibility, making
sslmode=require the same as sslmode=verify-ca when the file is present,
but not causing an error when it isn't.
Per bug 6189, reported by Srinivas Aji
This is implemented as a per-column boolean option, rather than trying
to match COPY's convention of a single option listing the column names.
Shigeru Hanada, reviewed by KaiGai Kohei
Rewrite plancache.c so that a "cached plan" (which is rather a misnomer
at this point) can support generation of custom, parameter-value-dependent
plans, and can make an intelligent choice between using custom plans and
the traditional generic-plan approach. The specific choice algorithm
implemented here can probably be improved in future, but this commit is
all about getting the mechanism in place, not the policy.
In addition, restructure the API to greatly reduce the amount of extraneous
data copying needed. The main compromise needed to make that possible was
to split the initial creation of a CachedPlanSource into two steps. It's
worth noting in particular that SPI_saveplan is now deprecated in favor of
SPI_keepplan, which accomplishes the same end result with zero data
copying, and no need to then spend even more cycles throwing away the
original SPIPlan. The risk of long-term memory leaks while manipulating
SPIPlans has also been greatly reduced. Most of this improvement is based
on use of the recently-added MemoryContextSetParent primitive.
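As a rough sketch of the new SPI usage (the query and the surrounding function are illustrative, not from the patch):

    #include "postgres.h"
    #include "catalog/pg_type.h"
    #include "executor/spi.h"

    static SPIPlanPtr saved_plan = NULL;

    static void
    prepare_once(void)
    {
        SPI_connect();

        if (saved_plan == NULL)
        {
            Oid         argtypes[1] = {INT4OID};
            SPIPlanPtr  plan;

            plan = SPI_prepare("SELECT count(*) FROM pg_class WHERE relpages > $1",
                               1, argtypes);

            /*
             * Formerly: saved_plan = SPI_saveplan(plan), which copied the
             * plan and left the original to be freed.  Now deprecated in
             * favor of:
             */
            SPI_keepplan(plan);     /* reparents the plan; no copying */
            saved_plan = plan;
        }

        /* ... later calls use SPI_execute_plan(saved_plan, ...) ... */

        SPI_finish();
    }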
Remove double-quoting of index/table names in reindexdb. BACKWARD
COMPATIBILITY BREAKAGE.
Document that user/database names are preserved with double-quoting by
command-line tools like vacuumdb.
We were doing some amazingly complicated things in order to avoid running
the very expensive identify_system_timezone() procedure during GUC
initialization. But there is an obvious fix for that, which is to do it
once during initdb and have initdb install the system-specific default into
postgresql.conf, as it already does for most other GUC variables that need
system-environment-dependent defaults. This means that the timezone (and
log_timezone) settings no longer have any magic behavior in the server.
Per discussion.
As per my recent proposal, this refactors things so that these typedefs and
macros are available in a header that can be included in frontend-ish code.
I also changed various headers that were undesirably including
utils/timestamp.h to include datatype/timestamp.h instead. Unsurprisingly,
this showed that half the system was getting utils/timestamp.h by way of
xlog.h.
No actual code changes here, just header refactoring.
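Concretely (an illustrative fragment, not from the patch), code that only needs the typedefs and macros can now include the lighter header:

    /* typedefs (Timestamp, TimestampTz, ...) and macros, safe for
     * frontend-ish code: */
    #include "datatype/timestamp.h"

    /* backend function declarations still require the full header,
     * which now includes the one above: */
    /* #include "utils/timestamp.h" */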
When building a GiST index that doesn't fit in cache, buffers are attached
to some internal nodes in the index. This speeds up the build by avoiding
random I/O that would otherwise be needed to traverse all the way down the
tree to find the right leaf page for each tuple.
Alexander Korotkov
We've now seen more than one gripe from somebody who didn't get the memo
about how to install contrib modules in 9.1. Try to make it a little more
prominent that you aren't supposed to call the scripts directly anymore.
This example wasn't updated when we changed the behavior of bpcharlen()
in 8.0, nor when we changed the number of parameters taken by the bpchar()
cast function in 7.3. Per report from lsliang.
These changes allow backtick command evaluation and psql variable
interpolation to happen on substrings of a single meta-command argument.
Formerly, no such evaluations happened at all if the backtick or colon
wasn't the first character of the argument, and we considered an argument
completed as soon as we'd processed one backtick, variable reference, or
quoted substring. A string like 'FOO'BAR was thus taken as two arguments,
not one, which is hardly what the user would expect. In the new coding, an argument
is considered terminated only by unquoted whitespace or backslash.
Also, clean up a bunch of omissions, infelicities and outright errors in
the psql documentation of variables and metacommand argument syntax.
The previous coding resulted in contrib modules unintentionally overriding
the use of CONTRIB_TESTDB. There seems no particularly good reason to
allow that (after all, the makefile can set CONTRIB_TESTDB if that's really
what it intends).
In passing, document REGRESS_OPTS where the other pgxs.mk options are
documented.
Back-patch to 9.1 --- in prior versions, there were no cases of contrib
modules setting REGRESS_OPTS without including the --dbname switch, so
while the coding was fragile there was no actual bug.
Instead of displaying comments on an arbitrary subset of the object
types which support them, make \dd display comments on exactly those
object types which don't have their own backslash commands. We now
regard the display of comments as properly the job of the relevant
backslash command (though many of them do so only in verbose mode)
rather than something that \dd should be responsible for. However,
a handful of object types have no backslash command, so make \dd
give information about those.
Josh Kupershmidt
The latch infrastructure is now capable of detecting all cases where the
walsender loop needs to wake up, so there is no reason to have an arbitrary
timeout.
Also, modify the walsender loop logic to follow the standard pattern of
ResetLatch, test for work to do, WaitLatch. The previous coding was both
hard to follow and buggy: it would sometimes busy-loop despite having
nothing available to do, eg between receipt of a signal and the next time
it was caught up with new WAL, and it also had interesting choices like
deciding to update to WALSNDSTATE_STREAMING on the strength of information
known to be obsolete.
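In outline, the standard pattern is the following (a simplified sketch: the work-checking helpers are hypothetical, and the WaitLatch() call uses the 9.2-era three-argument form):

    #include "postgres.h"
    #include "storage/latch.h"

    /* illustrative stand-ins for "is there work?" / "do the work" */
    static bool work_is_pending(void);
    static void do_pending_work(void);

    static void
    latch_loop(Latch *my_latch)
    {
        for (;;)
        {
            /* Clear the latch first, so a wakeup arriving after the check
             * below cannot be lost. */
            ResetLatch(my_latch);

            /* Check for, and perform, whatever work is available. */
            if (work_is_pending())
                do_pending_work();

            /* Only then sleep until the latch is set again. */
            WaitLatch(my_latch, WL_LATCH_SET | WL_POSTMASTER_DEATH, -1L);
        }
    }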
The relevant backslash commands already exist, so we're just adding an
additional column. With this commit, all objects that have psql backslash
commands and accept comments should now display those comments at least
in verbose mode.
Josh Kupershmidt, with doc additions by me.
\dc and \dD now accept a "+" option, which will cause the comments to
be displayed. Along the way, correct a few oversights in the previous
commit in this area, 3b17efdfdd - namely,
(1) when \dL+ is used, make description still be the last column, for
consistency with what we've done elsewhere; and (2) document the
difference between \dC and \dC+.
Josh Kupershmidt, with a couple of doc changes by me.
Also, handle failure better: don't just blindly keep trying to delete
stuff after the transaction has already failed.
Tim Lewis, reviewed by Josh Kupershmidt, with further hacking by me.
There is what may actually be a mistake in our markup. The problem is that
in a situation like
<para>
<command>FOO</command> is ...
there is strictly speaking a line break before "FOO". In the HTML
output, this does not appear to be a problem, but in the man page
output, this shows up, so you get double blank lines at odd places.
So far, we have attempted to work around this with an XSL hack, but
that causes other problems, such as creating run-ins in places like
<acronym>SQL</acronym> <command>COPY</command>
So fix the problem properly by removing the extra whitespace. I only
fixed the problems that affect the man page output, not all the
places.
Somebody added a cross-reference to shared_preload_libraries, but wrote the
wrong variable name when they did it (and didn't bother to make it a link
either).
Spotted by Christoph Anton Mitterer.
The output of \dL (list languages) is fairly narrow, so we just always
display the comment. \dC (list casts) can get fairly wide, so we only
display comments if the new \dC+ option is specified.
Josh Kupershmidt
Also change "switch" to "arg" because "switch" is a bit of a sloppy
term. So the environment variable is called
PSQL_EDITOR_LINENUMBER_ARG. Set "+" as hardcoded default value on
Unix (since "vi" is the hardcoded default editor), so many users won't
have to configure this at all. Move the documentation around a bit to
centralize the editor configuration under environment variables,
rather than repeating bits of it under every backslash command that
invokes an editor.
Previously, xpath() simply returned an empty array if the expression did
not yield a node set. This is useless for expressions that return scalars,
such as one with name() at the top level. Arrange to return the scalar
value as a single-element xml array, instead. (String values will be
suitably escaped.)
This change will also cause xpath_exists() to return true, not false,
for such expressions.
Florian Pflug, reviewed by Radoslaw Smogura
This requires a new shared catalog, pg_shseclabel.
Along the way, fix the security_label regression tests so that they
don't monkey with the labels of any pre-existing objects. This is
unlikely to matter in practice, since only the label for the "dummy"
provider was being manipulated. But this way still seems cleaner.
KaiGai Kohei, with fairly extensive hacking by me.
Standby servers can now have WALSender processes, which can work with
either WALReceiver or archive_commands to pass data. Fully updated
docs, including new conceptual terms of sending server, upstream and
downstream servers. WALSenders are terminated when the standby is promoted to master.
Fujii Masao, review, rework and doc rewrite by Simon Riggs
This is more SQL-spec-compliant, more easily extensible, and better
performing than the old method of inventing special variables.
Pavel Stehule, reviewed by Shigeru Hanada and David Wheeler
When an AccessShareLock, RowShareLock, or RowExclusiveLock is requested
on an unshared database relation, and we can verify that no conflicting
locks can possibly be present, record the lock in a per-backend queue,
stored within the PGPROC, rather than in the primary lock table. This
eliminates a great deal of contention on the lock manager LWLocks.
This patch also refactors the interface between GetLockStatusData() and
pg_lock_status() to be a bit more abstract, so that we don't rely so
heavily on the lock manager's internal representation details. The new
fast path lock structures don't have a LOCK or PROCLOCK structure to
return, so we mustn't depend on that for purposes of listing outstanding
locks.
Review by Jeff Davis.
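A rough conceptual sketch of the per-backend bookkeeping (names and sizes here are illustrative only; the real definitions live in src/include/storage/proc.h and the lock manager code):

    #include "postgres.h"

    #define FP_SLOTS_PER_BACKEND 16     /* illustrative slot count */

    typedef struct FastPathLocksSketch
    {
        uint64  lockBits;                    /* lock-mode bits, per slot */
        Oid     relid[FP_SLOTS_PER_BACKEND]; /* InvalidOid = slot free */
    } FastPathLocksSketch;

    /*
     * Weak locks (AccessShareLock, RowShareLock, RowExclusiveLock) on
     * unshared relations are recorded in these per-backend slots instead of
     * the shared lock table.  A backend that later needs a conflicting
     * strong lock first transfers any matching fast-path entries into the
     * main lock table, so conflicts are still detected.
     */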