Resetting hot_standby_feedback via a reload worked correctly, but if the
server was restarted rather than reloaded, the xmin was not reset.
Force reset always if hot_standby_feedback is enabled at startup.
Ants Aasma, Craig Ringer
Reported-by: Ants Aasma
!foo means "the tsvector does not contain foo", and therefore it should
match an empty tsvector. ts_match_vq() overenthusiastically supposed
that an empty tsvector could never match any query, so it forcibly
returned FALSE, the wrong answer. Remove the premature optimization.
Our behavior on this point was inconsistent, because while seqscans and
GiST index searches both failed to match empty tsvectors, GIN index
searches would find them, since GIN scans don't rely on ts_match_vq().
That makes this certainly a bug, not a debatable definition disagreement,
so back-patch to all supported branches.
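For illustration, the expected behavior after this fix is as follows
(hypothetical queries, not the committed test cases):
    SELECT ''::tsvector @@ '!foo'::tsquery;     -- true: vector does not contain foo
    SELECT 'bar'::tsvector @@ '!foo'::tsquery;  -- true
    SELECT 'foo'::tsvector @@ '!foo'::tsquery;  -- false
All three answers now come out the same whether the match is evaluated by
a seqscan, a GiST scan, or a GIN scan.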
Report and diagnosis by Tom Dunstan (bug #14515); added test cases by me.
Discussion: https://postgr.es/m/20170126025524.1434.97828@wrigleys.postgresql.org
This improves readability a bit and may make future improvements easier.
In passing, make sure that the JB_ROOT_IS_XXX macros deliver boolean (0/1)
results; the previous coding was a bug hazard, though no actual bugs are
known.
Nikita Glukhov, extended a bit by me
Discussion: https://postgr.es/m/9e21a39c-c1d7-b9b5-44a0-c5345a5029f6@postgrespro.ru
There's really no situation where we don't want these unknown-to-text
conversions to happen. The alternative is failure anyway, and the one
caller that was passing "false" did so only because it expected the
case could not arise. Might as well simplify the code.
Discussion: https://postgr.es/m/CAH2L28uwwbL9HUM-WR=hromW1Cvamkn7O-g8fPY2m=_7muJ0oA@mail.gmail.com
Previously, type "unknown" was labeled as a base type in pg_type, which
perhaps had some sense to it because you were allowed to create tables with
unknown-type columns. But now that we don't allow that, it makes more
sense to label it a pseudo-type. This has the additional effects of
forbidding use of "unknown" as a domain base type, cast source or target
type, PL function argument or result type, or plpgsql local variable type;
all of which seem like good holes to plug.
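For example, declarations along these lines are now rejected (hypothetical
statements, not the committed tests):
    CREATE DOMAIN unknown_dom AS unknown;          -- ERROR
    CREATE FUNCTION f(x unknown) RETURNS int
      LANGUAGE plpgsql AS 'begin return 1; end';   -- ERROR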
Discussion: https://postgr.es/m/CAH2L28uwwbL9HUM-WR=hromW1Cvamkn7O-g8fPY2m=_7muJ0oA@mail.gmail.com
Previously, we left such literals alone if the query or subquery had
no properties forcing a type decision to be made (such as an ORDER BY or
DISTINCT clause using that output column). This meant that "unknown" could
be an exposed output column type, which has never been a great idea because
it could result in strange failures later on. For example, an outer query
that tried to do any operations on an unknown-type subquery output would
generally fail with some weird error like "failed to find conversion
function from unknown to text" or "could not determine which collation to
use for string comparison". Also, if the case occurred in a CREATE VIEW's
query then the view would have an unknown-type column, causing similar
failures in queries trying to use the view.
To fix, at the tail end of parse analysis of a query, forcibly convert any
remaining "unknown" literals in its SELECT or RETURNING list to type text.
However, provide a switch to suppress that, and use it in the cases of
SELECT inside a set operation or INSERT command. In those cases we already
had type resolution rules that make use of context information from outside
the subquery proper, and we don't want to change that behavior.
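For illustration (a hypothetical example, not the committed tests):
    CREATE VIEW v AS SELECT 'foo' AS f;
    SELECT f || 'bar' FROM v;
The view column f is now resolved as text, so the concatenation succeeds
instead of failing with "failed to find conversion function from unknown
to text".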
Also, change creation of an unknown-type column in a relation from a
warning to a hard error. The error should be unreachable now in CREATE
VIEW or CREATE MATVIEW, but it's still possible to explicitly say "unknown"
in CREATE TABLE or CREATE (composite) TYPE. We want to forbid that because
it's nothing but a foot-gun.
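For example, a declaration such as
    CREATE TABLE t (c unknown);
now fails with a hard error rather than creating the table with a warning
(hypothetical statement, for illustration only).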
This change creates a pg_upgrade failure case: a matview that contains an
unknown-type column can't be pg_upgraded, because reparsing the matview's
defining query will now decide that the column is of type text, which
doesn't match the cstring-like storage that the old materialized column
would actually have. Add a checking pass to detect that. While at it,
we can detect tables or composite types that would fail, essentially
for free. Those would fail safely anyway later on, but we might as
well fail earlier.
This patch is by me, but it owes something to previous investigations
by Rahila Syed. Also thanks to Ashutosh Bapat and Michael Paquier for
review.
Discussion: https://postgr.es/m/CAH2L28uwwbL9HUM-WR=hromW1Cvamkn7O-g8fPY2m=_7muJ0oA@mail.gmail.com
Commit 7012b132d added some tests that consumed an excessive amount of
time, more than tripling the time needed for "make installcheck" for this
module. Add filter conditions to reduce the number of rows scanned,
bringing the runtime down to within hailing distance of what it was before.
Jeevan Chalke and Ashutosh Bapat, per a gripe from me
Discussion: https://postgr.es/m/16565.1478104765@sss.pgh.pa.us
According to the comments in tupconvert.c, it's necessary to perform
tuple conversion when either table has OIDs, and this was previously
checked by ensuring that the tdtypeid value matched between the tables
in question. However, that's overly stringent: we have access to
tdhasoid and can test directly whether OIDs are present, which lets us
avoid conversion in cases where the type OIDs are different but the
tuple descriptors are entirely the same (and neither has OIDs). This
is useful to the partitioning code, which can thereby avoid converting
tuples when inserting into a partition whose columns appear in the
same order as the parent columns, the normal case. It's possible
for the tuple routing code to avoid some additional overhead in this
case as well, so do that, too.
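As a hypothetical illustration of the case that now skips conversion:
    CREATE TABLE parent (a int, b text) PARTITION BY RANGE (a);
    CREATE TABLE child PARTITION OF parent FOR VALUES FROM (1) TO (100);
    INSERT INTO parent VALUES (42, 'x');  -- routed to child, no conversion map
Since child's columns appear in the same order as parent's and neither
table has OIDs, only the tdtypeid differs, so no conversion is needed.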
It's not clear whether it would be OK to skip this when both tables
have OIDs: do callers count on this to build a new tuple (losing the
previous OID) in such instances? Until we figure it out, leave the
behavior in that case alone.
Amit Langote, reviewed by me.
Commit 587cda35c added a test to updatable_views.sql that created
tables named the same as tables used by the concurrent inherit.sql
script. Unsurprisingly, this results in random failures.
Pick different names.
Per buildfarm.
In the new code for selecting sequence data from pg_sequence, set the
schema to pg_catalog instead of the sequence's own schema, and refer to
the sequence by OID instead of name, which was missing a schema
qualification.
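The corrected query is now shaped roughly like this (illustrative only,
not the exact text used in the code):
    SELECT seqstart, seqincrement, seqmax, seqmin, seqcache, seqcycle
    FROM pg_catalog.pg_sequence
    WHERE seqrelid = 16384;  -- the sequence's OID rather than its name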
Reported-by: Stephen Frost <sfrost@snowman.net>
Formerly an alternate password file could only be selected via the
environment variable PGPASSFILE; now it can also be selected via a
new connection parameter "passfile", corresponding to the conventions
for most other connection parameters. There was some concern about
this creating a security weakness, but it was agreed that that argument
was pretty thin, and there are clear use-cases for handling password
files this way.
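For example (hypothetical host and path), the file location can now be
given directly in the conninfo:
    host=db.example.com dbname=appdb user=app passfile=/srv/app/pgpass
which behaves the same as setting PGPASSFILE=/srv/app/pgpass in the
environment.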
Julian Markwort, reviewed by Fabien Coelho, some adjustments by me
Discussion: https://postgr.es/m/a4b4f4f1-7b58-a0e8-5268-5f7db8e8ccaa@uni-muenster.de
This is useful infrastructure for an upcoming proposed patch to
allow the WAL segment size to be changed at initdb time; tools like
pg_basebackup need the ability to interrogate the server setting.
But it also doesn't seem like a bad thing to have independently of
that; it may find other uses in the future.
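For instance, after opening a replication connection (e.g. with
"replication=database" in the conninfo), a client can now issue
    SHOW wal_segment_size;
and read the value back as an ordinary single-row result (a hypothetical
illustration of the intended use).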
Robert Haas and Beena Emerson. (The original patch here was by
Beena, but I rewrote it to such a degree that most of the code
being committed here is mine.)
Discussion: http://postgr.es/m/CA+TgmobNo4qz06wHEmy9DszAre3dYx-WNhHSCbU9SAwf+9Ft6g@mail.gmail.com
If you create a DestReceiver of type DestRemote and try to use it from
a replication connection that is not bound to a specific database, or
any other hypothetical type of backend that is not bound to a specific
database, it will fail because it doesn't have a pg_proc catalog to
look up properties of the types being printed. In general, that's
an unavoidable problem, but we can hardwire the properties of a few
builtin types in order to support utility commands. This new
DestReceiver of type DestRemoteSimple does just that.
Patch by me, reviewed by Michael Paquier.
Discussion: http://postgr.es/m/CA+TgmobNo4qz06wHEmy9DszAre3dYx-WNhHSCbU9SAwf+9Ft6g@mail.gmail.com
This patch doesn't actually make any index AM parallel-aware, but it
provides the necessary functions at the AM layer to do so.
Rahila Syed, Amit Kapila, Robert Haas
Previously, ExecInitModifyTable was missing handling for WITH CHECK
OPTION, and view_query_is_auto_updatable was missing handling for
RELKIND_PARTITIONED_TABLE.
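For illustration (hypothetical example, not the committed tests), an
auto-updatable view with a check option over a partitioned table now
behaves as expected:
    CREATE TABLE measurement (logdate date, peaktemp int)
      PARTITION BY RANGE (logdate);
    CREATE TABLE measurement_2017 PARTITION OF measurement
      FOR VALUES FROM ('2017-01-01') TO ('2018-01-01');
    CREATE VIEW cool_days AS SELECT * FROM measurement
      WHERE peaktemp < 100 WITH CHECK OPTION;
    INSERT INTO cool_days VALUES ('2017-06-01', 30);   -- ok
    INSERT INTO cool_days VALUES ('2017-06-02', 200);  -- rejected by CHECK OPTION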
Amit Langote, reviewed by me.
In 2ac3ef7a01, we changed things so that
it's possible for a different TupleTableSlot to be used for partitioned
tables at successively lower levels. If we do end up changing the slot
from the original, we must update ecxt_scantuple to point to the new one
for the partition key of the tuple to be computed correctly.
Reported by Rajkumar Raghuwanshi. Patch by Amit Langote.
Discussion: http://postgr.es/m/CAKcux6%3Dm1qyqB2k6cjniuMMrYXb75O-MB4qGQMu8zg-iGGLjDw%40mail.gmail.com
We've accumulated quite a bit of stuff with which pgindent is not
quite happy in this code; clean it up to provide a less-annoying base
for future pgindent runs.
The code here previously tried to call the partitioning operator, but
really the right thing to do (and the safe thing to do) is use
datumIsEqual().
Amit Langote, but I expanded the comment and fixed a compiler warning.
For some reason that is lost in history, a descending sequence would
default its minimum value to -2^63+1 (-PG_INT64_MAX) instead of
-2^63 (PG_INT64_MIN), even though explicitly specifying a minimum value
of -2^63 would work. Fix this inconsistency by using the full range by
default.
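For example (illustrative, not the committed tests):
    CREATE SEQUENCE descending_seq INCREMENT BY -1;
    SELECT seqmin FROM pg_catalog.pg_sequence
    WHERE seqrelid = 'descending_seq'::regclass;
    -- now -9223372036854775808 rather than -9223372036854775807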
Reported-by: Daniel Verite <daniel@manitou-mail.org>
Reviewed-by: Michael Paquier <michael.paquier@gmail.com>
initdb used to warn about that, but it was changed to an error in
pg_import_system_locales, and some buildfarm members failed because of
that. Change it back to a warning.
Vacuum's truncation scan can be sped up on rotating media by prefetching
blocks in the forward direction. That way, the blocks are already present
in memory by the time they are needed, while also letting OS read-ahead
kick in.
The truncation scan has been measured to be five times faster than without
this patch (that was on a slow disk, but it shouldn't hurt on fast
disks).
Author: Álvaro Herrera, loosely based on a submission by Claudio Freire
Discussion: https://postgr.es/m/CAGTBQpa6NFGO_6g_y_7zQx8L9GcHDSQKYdo1tGuh791z6PYgEg@mail.gmail.com
Joining three tables only takes two join nodes. I think when I (tgl)
wrote this, I was envisioning possible additional joins; but since the
example doesn't show any fourth table, it's just confusing to write
a third join node.
Etsuro Fujita
Discussion: https://postgr.es/m/e6cfbaa3-af02-1abc-c25e-8fa5c6bc4e21@lab.ntt.co.jp
This appears to be necessary to fix a failure seen on buildfarm member
sittella. It shouldn't be necessary according to the letter of the C
standard, because we don't change the values of these variables within
the PG_TRY blocks; but somehow gcc 4.7.2 is dropping the ball.
Discussion: https://postgr.es/m/17555.1485179975@sss.pgh.pa.us
Missing a destroyPQExpBuffer() in the early exit branch. The early
exits aren't really necessary. Most similar functions just proceed
running the rest of the code zero times and clean up at the end.
Project style is to put things in this order, for the good and sufficient
reason that you often need the typedefs in the function declarations.
There already was one function declaration that needed a typedef, which
was randomly placed away from all the other static function declarations
in consequence. And the submitted patch for better json_populate_record
functionality jumped through even more hoops in order to preserve this
bad idea.
This patch only moves lines from point A to point B, no other changes.
Coverity complained quite properly that commit ea15e1867 had introduced
unreachable code into ExecGather(); to wit, it was no longer possible to
iterate the final for-loop more or less than once. So remove the for().
In passing, clean up a couple of comments, and make better use of a local
variable.
Turns out this has been broken for years and we'd not noticed. The one
case that was getting exercised in the buildfarm, or probably anywhere
else, was postgres_fdw.sl's reference to libpq.sl; and it turns out that
that was always going to libpq.sl in the actual installation directory,
not the temporary install. We'd not noticed because the buildfarm script
does "make install" before it tests contrib. However, the recent addition
of a logical-replication test to the core regression scripts resulted in
trying to use libpqwalreceiver.sl before "make install" happens, and that
failed for lack of finding libpq.sl, as shown by failures on buildfarm
members gaur and pademelon.
There are two changes needed to fix it: the magic environment variable to
specify shlib search path at runtime is SHLIB_PATH not LD_LIBRARY_PATH,
and the shlib link command needs to specify the +s switch else the library
will not honor SHLIB_PATH.
I'm not quite sure why buildfarm members anole and gharial (HPUX 11) didn't
show the same failure. Consulting man pages on the web says that HPUX 11
honors both LD_LIBRARY_PATH and SHLIB_PATH, which would explain half of it,
and the rather confusing wording I've been able to find suggests that +s
might effectively be the default in HPUX 11. But it seems at least as
likely that there's just a libpq.so installed in /usr/lib on that machine;
as long as it's not too ancient, that would satisfy the test. In any case
I do not think this patch will break HPUX 11.
At the moment I don't see a need to back-patch this, since it only matters
for testing purposes, not to mention that HPUX 10 is probably dead in the
real world anyway.