This extends the fixes made in commit 085b6b667 to other SRFs with the
same bug, namely pg_logdir_ls(), pgrowlocks(), pg_timezone_names(),
pg_ls_dir(), and pg_tablespace_databases().
Also adjust various comments and documentation to warn against
expecting to clean up resources during a ValuePerCall SRF's final
call.
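As illustration, a minimal sketch of the hazard with a hypothetical
ValuePerCall SRF (all names illustrative, not the committed code): if
the caller stops fetching early, e.g. because of LIMIT, the final call
never happens, so cleanup placed there is unreliable.
    #include "postgres.h"
    #include "funcapi.h"

    PG_FUNCTION_INFO_V1(my_value_per_call_srf);

    Datum
    my_value_per_call_srf(PG_FUNCTION_ARGS)
    {
        FuncCallContext *funcctx;

        if (SRF_IS_FIRSTCALL())
        {
            funcctx = SRF_FIRSTCALL_INIT();
            /*
             * Collect all results here (or register a context cleanup
             * callback); do not keep an open fd or similar resource to
             * be released at SRF_RETURN_DONE() time.
             */
        }
        funcctx = SRF_PERCALL_SETUP();

        if (funcctx->call_cntr < 10)    /* pretend we have 10 rows */
            SRF_RETURN_NEXT(funcctx, Int64GetDatum(funcctx->call_cntr));

        /* never reached if the executor stops early, e.g. LIMIT 1 */
        SRF_RETURN_DONE(funcctx);
    }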
Back-patch to all supported branches, since these functions were
all born broken.
Justin Pryzby, with cosmetic tweaks by me
Discussion: https://postgr.es/m/20200308173103.GC1357@telsasoft.com
The data types that contrib/pageinspect's bt_metap() function was
declared to return as OUT arguments were wrong in some cases. In
particular, the oldest_xact column (a TransactionId/xid field) was
declared integer/int4 within the pageinspect extension's sql file. This
led to errors when an oldest_xact value that exceeded 2^31-1 was
encountered.
We cannot fix the declaration on Postgres 11 or 12. All we can do is
ameliorate the problem. Use "%d" instead of "%u" to format the output
of the oldest_xact value. This makes the C code match the declaration,
suppressing unhelpful error messages that might otherwise make
bt_metap() totally unusable. A bogus negative oldest_xact value will be
displayed instead of raising an error.
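As a standalone illustration of the formatting issue (not the
pageinspect code itself), formatting an out-of-range xid with "%d"
yields a bogus negative number, while "%u" yields a value that cannot
be read back as int4:
    #include <stdio.h>

    int
    main(void)
    {
        unsigned int oldest_xact = 3000000000u; /* xid beyond 2^31-1 */

        printf("%u\n", oldest_xact);    /* 3000000000: too big for int4 */
        /* formally undefined, but in practice the two's-complement
         * reinterpretation: -1294967296 */
        printf("%d\n", oldest_xact);
        return 0;
    }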
This commit addresses the same issue as master branch commit 691e8b2e18,
which actually fixed the problem. Backpatch to the 11 and 12 branches
only, since they are the only branches (other than master) that have
oldest_xact. All of the other problematic columns already display bogus
output for out of range values.
Reported-By: Victor Yegorov
Bug: #16285
Discussion: https://postgr.es/m/20200309223557.aip5n6ewln4ixbbi@alap3.anarazel.de
Backpatch: 11 and 12 only
Manifested as
    ERROR: subtransaction logged without previous top-level txn record
this check forbids legitimate behaviours like:
- The first xl_xact_assignment record is beyond reading, i.e. earlier
than restart_lsn.
- After restart_lsn there is some change record of a subxact.
- After that, there is a second xl_xact_assignment (for another subxact)
revealing the relationship between the top and first subxact.
Such a transaction won't be streamed anyway because we hadn't seen it in
full. Determining for sure whether the transaction of some record
encountered after the snapshot was deserialized can be streamed requires
knowing whether it wrote anything before the deserialization point; if
it did, it hasn't been seen in full and can't be decoded. The snapshot
doesn't carry such information, so there is no easy way to relax the
check.
Reported-by: Hsu, John
Diagnosed-by: Arseny Sher
Author: Arseny Sher, Amit Kapila
Reviewed-by: Amit Kapila, Dilip Kumar
Backpatch-through: 9.5
Discussion: https://postgr.es/m/AB5978B2-1772-4FEE-A245-74C91704ECB0@amazon.com
Very recent gcc complains that PLyObject_ToJsonbValue could return
a pointer to a local variable. I think it's wrong; but the coding
is fragile enough, and the savings of one palloc() minimal enough,
that it seems better to just do a palloc() all the time. (My other
idea of tweaking the if-condition doesn't suppress the warning.)
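In sketch form, the resulting pattern (illustrative, not the exact
committed code):
    /* Always return palloc'd storage, never the address of a local. */
    JsonbValue *out = (JsonbValue *) palloc(sizeof(JsonbValue));

    out->type = jbvNull;        /* filled in properly by the real code */
    return out;                 /* lives as long as its memory context */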
Back-patch to v11 where this code was introduced.
Discussion: https://postgr.es/m/21547.1580170366@sss.pgh.pa.us
In a statement like "UPDATE remote_tab SET (x,y) = (SELECT ...)",
we'd conclude that the statement could be directly executed remotely,
because the sub-SELECT is in a resjunk tlist item that's not examined
for shippability. Currently that ends up crashing if the sub-SELECT
contains any remote Vars. Prevent the crash by deeming MULTIEXPR
Params to be unshippable.
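A sketch of what that check looks like in the deparser's expression
walker (details illustrative):
    case T_Param:
        {
            Param *p = (Param *) node;

            /*
             * MULTIEXPR Params are computed by a local sub-SELECT, so
             * the remote server can never supply their values: deem
             * them unshippable.
             */
            if (p->paramkind == PARAM_MULTIEXPR)
                return false;
            /* ... remaining Param handling unchanged ... */
        }
        break;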
This is a bit of a brute-force solution, since if the sub-SELECT
*doesn't* contain any remote Vars, the current execution technology
would work; but that's not a terribly common use-case for this syntax,
I think. In any case, we generally don't try to ship sub-SELECTs, so
it won't surprise anybody that this doesn't end up as a remote direct
update. I'd be inclined to see if that general limitation can be fixed
before worrying about this case further.
Per report from Lukáš Sobotka.
Back-patch to 9.6. 9.5 had MULTIEXPR, but we didn't try to perform
remote direct updates then, so the case didn't arise anyway.
Discussion: https://postgr.es/m/CAJif3k+iA_ekBB5Zw2hDBaE1wtiQa4LH4_JUXrrMGwTrH0J01Q@mail.gmail.com
While building a hash map of categories in load_categories_hash,
resulting category names have not thus far been checked to ensure
they are not null. Prior to pg12, null category names worked to the
extent that they did not crash on some platforms, because those
platforms' system snprintf can cope with being passed a null pointer
for a string argument. But even there, null categories did nothing
useful, and on other platforms they crashed. As of pg12, our own
version of snprintf gets called instead; it does not deal with null
pointer arguments at all, and crashes consistently.
Fix that by disallowing null categories. They never worked usefully,
and no one has ever asked for them to work previously. Back-patch to
all supported branches.
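A sketch of the check added while building the hash (variable name,
errcode and message wording illustrative):
    if (catname == NULL)
        ereport(ERROR,
                (errcode(ERRCODE_NULL_VALUE_NOT_ALLOWED),
                 errmsg("provided \"categories\" SQL must not return NULL values")));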
Reported-By: Ireneusz Pluta
Discussion: https://postgr.es/m/16176-7489719b05e4303c@postgresql.org
We realized years ago that it's better for libpq to accept all
connection parameters syntactically, even if some are ignored or
restricted due to lack of the feature in a particular build.
However, that lesson from the SSL support was for some reason never
applied to the GSSAPI support. This is causing various buildfarm
members to have problems with a test case added by commit 6136e94dc,
and it's just a bad idea from a user-experience standpoint anyway,
so fix it.
While at it, fix some places where parameter-related infrastructure
was added with the aid of a dartboard, or perhaps with the aid of
the anti-pattern "add new stuff at the end". It should be safe
to rearrange the contents of struct pg_conn even in released
branches, since that's private to libpq (and we'd have to move
some fields in some builds to fix this, anyway).
Back-patch to all supported branches.
Discussion: https://postgr.es/m/11297.1576868677@sss.pgh.pa.us
The dict_int text search dictionary template accepts a maxlen parameter,
which is then used to cap the length of input strings. The value was
not properly checked, and the code simply does
    txt[d->maxlen] = '\0';
to insert a terminator, leading to segfaults with negative values.
This commit simply rejects values less than 1. The issue has been there
since dict_int was introduced in 9.3, so backpatch all the way back to
9.4, which is the oldest supported version.
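A sketch of the added validation (message wording illustrative):
    if (d->maxlen < 1)
        ereport(ERROR,
                (errcode(ERRCODE_INVALID_PARAMETER_VALUE),
                 errmsg("maxlen value has to be >= 1")));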
Reported-by: cili
Discussion: https://postgr.es/m/16144-a36a5bef7657047d@postgresql.org
Backpatch-through: 9.4
Currently, postgres_fdw does not support preparing a remote transaction
for two-phase commit even when the remote transaction is read-only, but
the old error message appeared to imply that this was unsupported only
if the remote transaction had modified remote tables. Change the
message so as to cover the read-only case as well.
Also fix a comment above the message, and add a note about the lack of
PREPARE TRANSACTION support to the postgres_fdw documentation.
Reported-by: Gilles Darold
Author: Gilles Darold and Etsuro Fujita
Reviewed-by: Michael Paquier and Kyotaro Horiguchi
Backpatch-through: 9.4
Discussion: https://postgr.es/m/08600ed3-3084-be70-65ba-279ab19618a5%40darold.net
In-core relation options have been able to use a custom lock mode since
47167b7, which lowered the lock level required for some autovacuum
parameters. However, that change forgot to consider custom relation
options. This causes failures with ALTER TABLE SET when changing a
custom relation option, as its lock mode is not defined. The existing
APIs to define a custom reloption do not allow a custom lock mode to be
specified, so enforce its initialization to AccessExclusiveLock, which
should be safe enough in all cases. An upcoming patch will extend the
existing APIs to allow a custom lock mode to be defined.
The problem can be reproduced with bloom indexes, so add a test there.
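In sketch form, the enforced initialization (field and surrounding
code approximate), applied where the add_*_reloption paths allocate the
new option:
    /* no API to specify a lock mode, so use the strictest one */
    newoption->lockmode = AccessExclusiveLock;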
Reported-by: Nikolay Sharplov
Analyzed-by: Thomas Munro, Michael Paquier
Author: Michael Paquier
Reviewed-by: Kuntal Ghosh
Discussion: https://postgr.es/m/20190920013831.GD1844@paquier.xyz
Backpatch-through: 9.6
contrib/amcheck failed to consider the possibility that unlogged
relations will not have any main relation fork files when running in hot
standby mode. This led to low-level "can't happen" errors that complain
about the absence of a relfilenode file.
To fix, simply skip verification of unlogged index relations during
recovery. In passing, add a direct check for the presence of a main
fork just before verification proper begins, so that the absence of the
main relation fork file is reported cleanly.
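A sketch of the skip (error level and message wording illustrative):
    if (rel->rd_rel->relpersistence == RELPERSISTENCE_UNLOGGED &&
        RecoveryInProgress())
    {
        ereport(NOTICE,
                (errcode(ERRCODE_READ_ONLY_SQL_TRANSACTION),
                 errmsg("cannot verify unlogged index \"%s\" during recovery, skipping",
                        RelationGetRelationName(rel))));
        return;     /* no main fork to read; nothing to verify */
    }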
Author: Andrey Borodin, Peter Geoghegan
Reported-By: Andrey Borodin
Diagnosed-By: Andrey Borodin
Discussion: https://postgr.es/m/DA9B33AC-53CB-4643-96D4-7A0BBC037FA1@yandex-team.ru
Backpatch: 10-, where amcheck was introduced.
contrib/intarray considers "arraycol <@ constant-array" to be indexable,
but its GiST opclass code fails to reliably find index entries for empty
array values (which of course should trivially match such queries).
This is because the test condition to see whether we should descend
through a non-leaf node is wrong.
Unfortunately, empty array entries could be anywhere in the index,
as these index opclasses are currently designed. So there's no way
to fix this except by lobotomizing <@ indexscans to scan the whole
index ... which is what this patch does. That's pretty unfortunate:
the performance is now actually worse than a seqscan, in most cases.
We'd be better off to remove <@ from the GiST opclasses entirely,
and perhaps a future non-back-patchable patch will do so.
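A sketch of the change in the opclass consistent function (helper
names illustrative):
    case RTContainedByStrategyNumber:
        if (GIST_LEAF(entry))
            retval = inner_int_contains(query,
                                        (ArrayType *) DatumGetPointer(entry->key));
        else
        {
            /*
             * Empty arrays can be anywhere in the index, so we must
             * descend through every internal page.
             */
            retval = true;
        }
        break;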
In the meantime, applications whose performance is adversely impacted
have a couple of options. They could switch to a GIN index, which
doesn't have this bug, or they could replace "arraycol <@ constant-array"
with "arraycol <@ constant-array AND arraycol && constant-array".
That will provide about the same performance as before, and it will find
all non-empty subsets of the given constant-array, which is all that
could reliably be expected of the query before.
While at it, add some more regression test cases to improve code
coverage of contrib/intarray.
In passing, adjust resize_intArrayType so that when it's returning an
empty array, it uses construct_empty_array for that rather than
cowboy hacking on the input array. While the hack produces an array
that looks valid for most purposes, it isn't bitwise equal to empty
arrays produced by other code paths, which could have subtle odd
effects. I don't think this code path is performance-critical
enough to justify such shortcuts. (Back-patch this part only as far
as v11; before commit 01783ac36 we were not careful about this in
other intarray code paths either.)
Back-patch the <@ fixes to all supported versions, since this was
broken from day one.
Patch by me; thanks to Alexander Korotkov for review.
Discussion: https://postgr.es/m/458.1565114141@sss.pgh.pa.us
Perl has multiple internal representations of "undef", and just
testing for SvTYPE(x) == SVt_NULL doesn't recognize all of them,
leading to "cannot transform this Perl type to jsonb" errors.
Use the approved test SvOK() instead.
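In sketch form, the old and new tests:
    /* before: recognizes only one internal flavor of undef */
    if (SvTYPE(in) == SVt_NULL)
        out.type = jbvNull;

    /* after: SvOK() is false for every representation of undef */
    if (!SvOK(in))
        out.type = jbvNull;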
Report and patch by Ivan Panchenko. Back-patch to v11 where
this module was added.
Discussion: https://postgr.es/m/1564783533.324795401@f193.i.mail.ru
When piling up loading of modules using check_password_hook_type,
loading passwordcheck would remove any trace of a previously-loaded
hook. Unloading the module would also cause previous hooks to be
entirely gone.
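A sketch of the usual hook-chaining pattern the module now follows
(hook signature as in recent branches; older branches differ):
    static check_password_hook_type prev_check_password_hook = NULL;

    static void
    check_password(const char *username, const char *shadow_pass,
                   PasswordType password_type, Datum validuntil_time,
                   bool validuntil_null)
    {
        /* run any previously-installed hook first instead of losing it */
        if (prev_check_password_hook)
            prev_check_password_hook(username, shadow_pass, password_type,
                                     validuntil_time, validuntil_null);
        /* ... this module's own checks ... */
    }

    void
    _PG_init(void)
    {
        prev_check_password_hook = check_password_hook;
        check_password_hook = check_password;
    }

    void
    _PG_fini(void)
    {
        /* restore, rather than clear, the previous hook on unload */
        check_password_hook = prev_check_password_hook;
    }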
Reported-by: Rafael Castro
Author: Michael Paquier
Reviewed-by: Daniel Gustafsson
Discussion: https://postgr.es/m/15932-78f48f9ef166778c@postgresql.org
Backpatch-through: 9.4
As of Fedora 30, it seems that the system-provided macros for setting
up user privileges in SELinux policies don't grant the ability to read
/etc/passwd, as they formerly did. This restriction breaks psql
(which tries to use getpwuid() to obtain the user name it's running
under) and thereby the contrib/sepgsql regression test. Add explicit
specifications that we need the right to read /etc/passwd.
Mike Palmiotto, per a report from me. Back-patch to all supported
branches.
Discussion: https://postgr.es/m/23856.1563381159@sss.pgh.pa.us
Solely relying on a relation's reltuples/relpages estimate to size the
Bloom filters used by amcheck verification makes verification less
effective when the estimates are very stale. In extreme cases,
verification options that use Bloom filters internally could be totally
ineffective, without users receiving any clear indication that certain
types of corruption might easily be missed.
To fix, use RelationGetNumberOfBlocks() instead of relpages to size the
downlink block Bloom filter. Use the same RelationGetNumberOfBlocks()
value to derive a minimum size for the heapallindexed Bloom filter,
rather than completely trusting reltuples. Verification will still be
reasonably effective when the projected/estimated number of Bloom filter
elements is at least 1/5 of the final number of elements, which is
assured by the new sizing logic.
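In sketch form (variable names and the exact floor are illustrative):
    BlockNumber blocks = RelationGetNumberOfBlocks(indrel);

    /* downlink filter: at most one downlink per block, and the block
     * count is authoritative, unlike relpages */
    filter = bloom_create((int64) blocks, maintenance_work_mem, seed);

    /* heapallindexed filter: don't trust reltuples alone; derive a
     * floor from the true relation size */
    total_elems = Max((int64) heaprel->rd_rel->reltuples,
                      (int64) blocks * 10);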
Reported-By: Alexander Korotkov
Discussion: https://postgr.es/m/CAH2-Wzk0ke2J42KrNYBKu0Xovjy-sU5ub7PWjgpbsKdAQcL4OA@mail.gmail.com
Backpatch: 11-, where downlink/heapallindexed verification were added.
Previously, in postgresPlanForeignModify, we planned an UPDATE operation
on a foreign table so that we transmit only columns that were explicitly
targets of the UPDATE, so as to avoid unnecessary data transmission. But
if there were BEFORE ROW UPDATE triggers on the foreign table, those
triggers might change values for non-target columns, in which case we
would miss sending changed values for those columns. So disable the
optimization, and transmit all columns, if there are BEFORE ROW UPDATE
triggers on the foreign table.
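A sketch of the test (roughly as in postgresPlanForeignModify();
names illustrative):
    if (operation == CMD_UPDATE &&
        rel->trigdesc && rel->trigdesc->trig_update_before_row)
    {
        /* a BEFORE ROW UPDATE trigger may change any column: send all */
        transmit_all_columns = true;
    }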
This is an oversight in commit 7cbe57c34 which added triggers on foreign
tables, so apply the patch all the way back to 9.4 where that came in.
Author: Shohei Mochizuki
Reviewed-by: Amit Langote
Discussion: https://postgr.es/m/201905270152.x4R1q3qi014550@toshiba.co.jp
Descriptions of the pg_trgm GUC options had % replaced with %% as if
they were printf-like format strings. But that's not needed since they
are just plain strings. This commit fixes that. Backpatch to the
oldest supported version, since this error has been present from the
beginning.
Reported-by: Masahiko Sawada
Discussion: https://postgr.es/m/CAD21AoAgPKODUsu9gqUFiNqEOAqedStxJ-a0sapsJXWWAVp%3Dxg%40mail.gmail.com
Backpatch-through: 9.4
It's harmless to call PQfreemem() with a NULL argument, so the only
consequence was that if allocating 'schema' failed, but allocating 'table'
or 'field' succeeded, we would leak a bit of memory. That's highly
unlikely to happen, so this is just academic, but let's get it right.
Per bug #15838 from Timur Birsh. Backpatch back to 9.5, where the
PQfreemem() calls were introduced.
Discussion: https://www.postgresql.org/message-id/15838-3221652c72c5e69d@postgresql.org
A parallel worker process should not be making any decisions of its
own about whether to auto-explain. If the parent session process
passed down flags asking for instrumentation data, do that, otherwise
not. Trying to enable instrumentation anyway leads to bugs like the
"could not find key N in shm TOC" failure reported in bug #15821
from Christian Hofstaedtler.
We can implement this cheaply by piggybacking on the existing logic
for not doing anything when we've chosen not to sample a statement.
While at it, clean up some tin-eared coding related to the sampling
feature, including an off-by-one error that meant that asking for 1.0
sampling rate didn't actually result in sampling every statement.
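In sketch form, the corrected sampling decision (close to the
committed logic, but treat details as illustrative):
    if (nesting_level == 0)
    {
        if (auto_explain_log_min_duration >= 0 && !IsParallelWorker())
            current_query_sampled = (random() < auto_explain_sample_rate *
                                     ((double) MAX_RANDOM_VALUE + 1));
        else
            current_query_sampled = false;  /* workers follow the leader */
    }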
Although the specific case reported here only manifested in >= v11,
I believe that related misbehaviors can be demonstrated in any version
that has parallel query; and the off-by-one error is certainly there
back to 9.6 where that feature was added. So back-patch to 9.6.
Discussion: https://postgr.es/m/15821-5eb422e980594075@postgresql.org
When reindexing individual indexes on pg_class it was possible to
trigger an assertion failure:
    TRAP: FailedAssertion("!(!ReindexIsProcessingIndex(((index)->rd_id)))
That's because reindex_index() called SetReindexProcessing() - which
enables an assertion ensuring no index insertions happen into the index
- before calling RelationSetNewRelfilenode(). That is not correct for
indexes on pg_class, because RelationSetNewRelfilenode() updates the
relevant pg_class row, which needs to update the indexes.
There are two reasons this wasn't noticed earlier. Firstly, the bug
doesn't trigger when reindexing all of pg_class, as reindex_relation
has code "hiding" all yet-to-be-reindexed indexes. Secondly, the bug
only triggers when the update to pg_class doesn't turn out to be a
HOT update - otherwise there's no index insertion to trigger the
bug. Most of the time there's enough space, making this bug hard to
trigger.
To fix, move RelationSetNewRelfilenode() to before the
SetReindexProcessing() (and, together with some other code, to outside
of the PG_TRY()).
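In sketch form (argument lists abbreviated), the ordering in
reindex_index() becomes:
    /* update pg_class first - this may insert into pg_class's indexes */
    RelationSetNewRelfilenode(iRel, persistence);

    /* only now forbid insertions into the index being rebuilt */
    SetReindexProcessing(heapId, indexId);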
To make sure the error checking intended by SetReindexProcessing() is
more robust, modify CatalogIndexInsert() to check
ReindexIsProcessingIndex() even when the update is a HOT update.
Also add a few regression tests for REINDEXing of system catalogs.
The last two improvements would have prevented some of the issues
fixed in 5c1560606dc4c from being introduced in the first place.
Reported-By: Michael Paquier
Diagnosed-By: Tom Lane and Andres Freund
Author: Andres Freund
Reviewed-By: Tom Lane
Discussion: https://postgr.es/m/20190418011430.GA19133@paquier.xyz
Backpatch: 9.4-, the bug is present in all branches
foreign_grouping_ok() is willing to put fairly arbitrary expressions into
the targetlist of a remote SELECT that's doing grouping or aggregation on
the remote side, including expressions that have no foreign component to
them at all. This is possibly a bit dubious from an efficiency standpoint;
but it rises to the level of a crash-causing bug if the expression is just
a Param or non-foreign Var. In that case, the expression will necessarily
also appear in the fdw_exprs list of values we need to send to the remote
server, and then setrefs.c's set_foreignscan_references will mistakenly
replace the fdw_exprs entry with a Var referencing the targetlist result.
The root cause of this problem is bad design in commit e7cb7ee14: it put
logic into set_foreignscan_references that IMV is postgres_fdw-specific,
and yet this bug shows that it isn't postgres_fdw-specific enough. The
transformation being done on fdw_exprs assumes that fdw_exprs is to be
evaluated with the fdw_scan_tlist as input, which is not how postgres_fdw
uses it; yet it could be the right thing for some other FDW. (In the
bigger picture, setrefs.c has no business assuming this for the other
expression fields of a ForeignScan either.)
The right fix therefore would be to expand the FDW API so that the
FDW could inform setrefs.c how it intends to evaluate these various
expressions. We can't change that in the back branches though, and we
also can't just summarily change setrefs.c's behavior there, or we're
likely to break external FDWs.
As a stopgap, therefore, hack up postgres_fdw so that it won't attempt
to send targetlist entries that look exactly like the fdw_exprs entries
they'd produce. In most cases this actually produces a superior plan,
IMO, with less data needing to be transmitted and returned; so we probably
ought to think harder about whether we should ship tlist expressions at
all when they don't contain any foreign Vars or Aggs. But that's an
optimization not a bug fix so I left it for later. One case where this
produces an inferior plan is where the expression in question is actually
a GROUP BY expression: then the restriction prevents us from using remote
grouping. It might be possible to work around that (since that would
reduce to group-by-a-constant on the remote side); but it seems like a
pretty unlikely corner case, so I'm not sure it's worth expending effort
solely to improve that. In any case the right long-term answer is to fix
the API as sketched above, and then revert this hack.
Per bug #15781 from Sean Johnston. Back-patch to v10 where the problem
was introduced.
Discussion: https://postgr.es/m/15781-2601b1002bad087c@postgresql.org
Commit 3d956d9562 added support for update row movement in postgres_fdw.
This patch fixes the following issues introduced by that commit:
* When a remote partition chosen to insert routed rows into was also an
UPDATE subplan target rel that would be updated later, the UPDATE that
used a direct modification plan modified those routed rows incorrectly
because those routed rows were visible to the later UPDATE command.
The right fix for this would be to have some way in postgres_fdw in
which the later UPDATE command ignores those routed rows, but it seems
hard to do so with the current infrastructure. For now throw an error
in that case.
* When a remote partition chosen to insert routed rows into was also an
UPDATE subplan target rel, fmstate created for the UPDATE that used a
non-direct modification plan was mistakenly overridden by another
fmstate created for inserting those routed rows into the partition.
This caused 1) server crash when the partition would be updated later,
and 2) resource leak when the partition had been already updated. To
avoid that, adjust the treatment of the fmstate for the insertion. As
for #1, since we would also have the incorrectness issue as mentioned
above, error out in that case as well.
Update the docs to mention that postgres_fdw currently does not handle
the case where a remote partition chosen to insert a routed row into is
also an UPDATE subplan target rel that will be updated later.
Author: Amit Langote and Etsuro Fujita
Reviewed-by: Amit Langote
Backpatch-through: 11 where row movement in postgres_fdw was added
Discussion: https://postgr.es/m/21e7eaa4-0d4d-20c2-a1f7-c7e96f4ce440@lab.ntt.co.jp
The code would do "PQclear(res)" twice if lo_unlink failed, evidently
due to careless thinking about how far out a "break" would break.
Remove the extra PQclear and adjust the loop logic so that we'll fall
out of both levels of loop after an error, as was clearly the intent.
Spotted by Coverity. I have no idea why it took this long to notice,
since the bug has been there since commit 67ccbb080. Accordingly,
back-patch to all supported branches.
We should try to prewarm each database only once. Otherwise, if
prewarming fails for some reason, it will just keep retrying in an
infinite loop. This can happen if, for example, the database has been
dropped. The existing code was intended to implement the try-once
behavior, but failed to do so because it neglected to set
worker.bgw_restart_time to BGW_NEVER_RESTART.
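A sketch of the registration fix (other fields elided):
    BackgroundWorker worker;

    memset(&worker, 0, sizeof(worker));
    /* ... bgw_name, entry point, etc. ... */
    worker.bgw_restart_time = BGW_NEVER_RESTART;    /* try once only */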
Mithun Cy, per a report from Hans Buschmann
Discussion: http://postgr.es/m/CA+hUKGKpQJCWcgyy3QTC9vdn6uKAR_8r__A-MMm2GYfj45caag@mail.gmail.com
Ever since its birth, ReorderBufferBuildTupleCidHash() has contained an
assertion that a catalog tuple cannot change Cmax after acquiring one. But
that's wrong: if a subtransaction executes DDL that affects that catalog
tuple, and later aborts and another DDL affects the same tuple, it will
change Cmax. Relax the assertion to merely verify that the Cmax remains
valid and monotonically increasing, instead.
Add a test that tickles the relevant code.
Diagnosed by, and initial patch submitted by: Arseny Sher
Co-authored-by: Arseny Sher
Discussion: https://postgr.es/m/874l9p8hyw.fsf@ars-thinkpad
As reported in bug #15613 from Srinivasan S A, file_fdw and postgres_fdw
neglected to mark plain baserel foreign paths as parameterized when the
relation has lateral_relids. Other FDWs have surely copied this mistake,
so rather than just patching those two modules, install a band-aid fix
in create_foreignscan_path to rectify the mistake centrally.
Although the band-aid is enough to fix the visible symptom, correct
the calls in file_fdw and postgres_fdw anyway, so that they are valid
examples for external FDWs.
Also, since the band-aid isn't enough to make this work for parameterized
foreign joins, throw an elog(ERROR) if such a case is passed to
create_foreignscan_path. This shouldn't pose much of a problem for
existing external FDWs, since it's likely they aren't trying to make such
paths anyway (though some of them may need a defense against joins with
lateral_relids, similar to the one this patch installs into postgres_fdw).
Add some assertions in relnode.c to catch future occurrences of the same
error --- in particular, as backstop against core-code mistakes like the
one fixed by commit bdd9a99aa.
Discussion: https://postgr.es/m/15613-092be1be9576c728@postgresql.org
The previous tacit assumption that index_form_tuple() hides differences
in the TOAST state of its input datums was wrong. Normalize input
varlena datums by decompressing compressed values, and forming a new
index tuple for fingerprinting using uncompressed inputs. The final
normalized representation may actually be compressed once again within
index_form_tuple(), though that shouldn't matter. When the original
tuple is found to have no datums that are compressed inline, fingerprint
the original tuple directly.
Normalization avoids false positive reports of corruption in certain
cases. For example, the executor can apply toasting with some inline
compression to an entire heap tuple because its input has a single
external TOAST pointer. Varlena datums for other attributes that are
not particularly good candidates for inline compression can be
compressed in the heap tuple in passing, without the representation of
the same values in index tuples ever receiving concomitant inline
compression.
Add a test case to recreate the issue in a simpler though less realistic
way: by exploiting differences in pg_attribute.attstorage between heap
and index relations.
This bug was discovered by me during testing of an upcoming set of nbtree
enhancements. It was also independently reported by Andreas Kunert, as
bug #15597. His test case was rather more realistic than the one I
ended up using.
Bug: #15597
Discussion: https://postgr.es/m/CAH2-WznrVd9ie+TTJ45nDT+v2nUt6YJwQrT9SebCdQKtAvfPZw@mail.gmail.com
Discussion: https://postgr.es/m/15597-294e5d3e7f01c407@postgresql.org
Backpatch: 11-, where heapallindexed verification was introduced.
Some makefiles were trying to do this:
    temp-install: EXTRA_INSTALL=contrib/test_decoding
but that no longer works as of commit aa019da52: the macro is now
consulted by the checkprep target, one level down, and apparently
gmake doesn't propagate such macro settings recursively.
The problem is masked since 42e61c774 because pgxs.mk also sets up
EXTRA_INSTALL, and correctly applies it to the checkprep target.
Unfortunately I'd not risked back-patching that to before v11.
Since aa019da52 was pushed back to v10, it broke test_decoding
there (the only module for which this actually makes a difference
at present).
Hence, back-patch 42e61c774 to v10. Also, remove some demonstrably
useless settings of EXTRA_INSTALL in v10 and v11 (they'd already
been cleaned up in HEAD).
Per buildfarm.
Discussion: https://postgr.es/m/CAEepm=1pEJdwv6DSGmOfpX0EaX7L7sT28c1nXpqvQvmLfEWb1g@mail.gmail.com
fe0a0b5, which added a stronger random source in Postgres, introduced
a thinko when creating a padding message which gets encrypted for
Elgamal. The padding message must not contain zero bytes, so zeros are
replaced with random bytes. However, if pg_strong_random() failed, the
message could end up being considered correctly shaped for encryption
while still containing zeros.
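A sketch of the intended loop (error code illustrative):
    /* padding bytes must be nonzero; fail hard if the PRNG fails */
    for (i = 0; i < pad_len; i++)
    {
        uint8 b;

        do
        {
            if (!pg_strong_random(&b, 1))
                return PXE_NO_RANDOM;   /* never fall through with zeros */
        } while (b == 0);
        p[i] = b;
    }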
Author: Tom Lane
Reviewed-by: Michael Paquier
Discussion: https://postgr.es/m/20186.1546188423@sss.pgh.pa.us
Backpatch-through: 10
COPY can skip writing WAL when loading data into a table which has been
created in the same transaction as the one loading the data. However,
this cannot work for views or foreign tables, as it would result in
trying to flush relation files which do not exist. So disable the
optimization, so that the command works the same way with any
configuration of wal_level.
Tests are added to cover the different cases; they need wal_level set
to minimal to let the problem show up, and that is not the default
configuration.
Reported-by: Luis M. Carril, Etsuro Fujita
Author: Amit Langote, Michael Paquier
Reviewed-by: Etsuro Fujita
Discussion: https://postgr.es/m/15552-c64aa14c5c22f63c@postgresql.org
Backpatch-through: 10, where support for COPY on views has been added,
while v11 has added support for COPY on foreign tables.
postgres_fdw's postgresGetForeignPlan() assumes without checking that the
outer_plan it's given for a join relation must have a NestLoop, MergeJoin,
or HashJoin node at the top. That's been wrong at least since commit
4bbf6edfb (which could cause insertion of a Sort node on top) and it seems
like a pretty unsafe thing to Just Assume even without that.
Through blind good fortune, this doesn't seem to have any worse
consequences today than strange EXPLAIN output, but it's clearly trouble
waiting to happen.
To fix, test the node type explicitly before touching Join-specific
fields, and avoid jamming the new tlist into a node type that can't
do projection. Export a new support function from createplan.c
to avoid building low-level knowledge about the latter into FDWs.
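In sketch form (comments stand in for the details):
    if (IsA(outer_plan, NestLoop) ||
        IsA(outer_plan, MergeJoin) ||
        IsA(outer_plan, HashJoin))
    {
        /* safe to touch join fields and install the tlist directly */
    }
    else
    {
        /*
         * E.g. a Sort stacked on top by the core planner; use the new
         * createplan.c support function, which adds a Result node if
         * the given node cannot project.
         */
    }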
Back-patch to 9.6 where the faulty coding was added. Note that the
associated regression test cases don't show any changes before v11,
apparently because the tests back-patched with 4bbf6edfb don't actually
exercise the problem case before then (there's no top-level Sort
in those plans).
Discussion: https://postgr.es/m/8946.1544644803@sss.pgh.pa.us
Hstore data generated on pg 8.4 and pg_upgraded to current versions
remains in its original on-disk format unless modified. The same goes
for values generated by the addon hstore-new module on pre-9.0
versions. (The hstoreUpgrade function converts old values on the fly
when read in, but the on-disk value is not modified by this.)
Since old-format empty hstores (and hstore-new hstores) have
representations compatible with the new format, hstoreUpgrade thought
it could get away without modifying such values; but this breaks
hstore_hash (and the new hstore_hash_extended) which assumes
bit-perfect matching between semantically identical hstore values.
Only one bit actually differs (the "new version" flag in the count
field) but that of course is enough to break the hash.
Fix by making hstoreUpgrade unconditionally convert all old values to
new format.
Backpatch all the way, even though this changes a hash value in some
cases, because in those cases the hash value is already failing - for
example, a hash join between old- and new-format empty hstores will be
failing to match, or a hash index on an hstore column containing an
old-format empty value will be failing to find the value since it will
be searching for a hash derived from a new-format datum. (There are no
known field reports of this happening, probably because hashing of
hstores has only been useful in limited circumstances and there
probably isn't much upgraded data being used this way.)
Per concerns arising from discussion of commit eb6f29141be. Original
bug is my fault.
Discussion: https://postgr.es/m/60b1fd3b-7332-40f0-7e7f-f2f04f777747%402ndquadrant.com
1. Integer overflow in internal_size could result in memory corruption
in decompression since a zero-length array would be allocated and then
written to. This leads to crashes or corruption when traversing an
index which has been populated with sufficiently sparse values. Fix by
using int64 for computations and checking for overflow; see the sketch
after this list.
2. Integer overflow in g_int_compress could cause pessimal merge
choices, resulting in unnecessarily large ranges (which would in turn
trigger issue 1 above). Fix by using int64 again.
3. Even without overflow, array sizes could become large enough to
cause unexplained memory allocation errors. Fix by capping the sizes
to a safe limit and reporting actual errors pointing at gist__intbig_ops
as needed.
4. Large inputs to the compression function always consist of large
runs of consecutive integers, and the compression loop was processing
these one at a time in an O(N^2) manner with a lot of overhead. The
expected runtime of this function could easily exceed 6 months for a
single call as a result. Fix by performing a linear-time first pass,
which reduces the worst case to something on the order of seconds.
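A sketch of the overflow-safe size computation for issue 1 (shape
only; the real function differs in detail):
    static int
    internal_size(int *a, int len)
    {
        int     i;
        int64   size = 0;

        for (i = 0; i < len; i += 2)
            size += (int64) a[i + 1] - (int64) a[i] + 1;

        /* flag anything that no longer fits in int so callers can bail */
        if (size > (int64) INT_MAX || size < 0)
            return -1;
        return (int) size;
    }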
Backpatch all the way, since this has been wrong forever.
Per bug #15518, reported by irc user "dymk"; analysis and patch by me.
Discussion: https://postgr.es/m/15518-799e426c3b4f8358@postgresql.org
To avoid the sorts of problems complained of by Jakob Egger, it'd be
best if configure didn't emit any references to the sysroot path at all.
In the case of PL/Tcl, we can do that just by keeping our hands off the
TCL_INCLUDE_SPEC string altogether. In the case of PL/Perl, we need to
substitute -iwithsysroot for -I in the compile commands, which is easily
handled if we change to using a configure output variable that includes
the switch, not only the directory name. Since PL/Tcl and PL/Python
already do it like that, this seems like good consistency cleanup anyway.
Hence, this replaces the advice given to Perl-related extensions in commit
5e2217131; instead of writing "-I$(perl_archlibexp)/CORE", they should
just write "$(perl_includespec)". (The old way continues to work, but not
on recent macOS.)
It's still the case that configure needs to be aware of the sysroot
path internally, but that's cleaner than what we had before.
As before, back-patch to all supported versions.
Discussion: https://postgr.es/m/20840.1537850987@sss.pgh.pa.us
Commit b5febc1d1 added a contrib/btree_gist test case that has been
observed to fail in the buildfarm as a result of background auto-analyze
updating stats and changing the selected plan. Forestall that by
forcibly analyzing in foreground, instead. The new plan choice is
just as good for our purposes, since we really only care that an
index-only plan does not get selected.
Back-patch to 9.5, like the previous patch.
Discussion: https://postgr.es/m/14643.1539629304@sss.pgh.pa.us
Repeatedly rewriting a mapped catalog table with VACUUM FULL or
CLUSTER could cause logical decoding to fail with:
ERROR, "could not map filenode \"%s\" to relation OID"
To trigger the problem the rewritten catalog had to have live tuples
with toasted columns.
The problem was that, during catalog table rewrites, the heap_insert()
check that prevents logical decoding information from being emitted for
system catalogs failed to treat the new heap's toast table as a system
catalog (because the new heap is not recognized as a catalog table via
RelationIsLogicallyLogged()). The relmapper, in contrast to the normal
catalog contents, does not contain historical information. After a
single rewrite of a mapped table the new relation is known to the
relmapper, but if the table is rewritten twice before logical decoding
occurs, the relfilenode cannot be mapped to a relation anymore, which
then leads us to error out. This only happens for toast tables, because
the main table contents aren't re-inserted with heap_insert().
The fix is simple, add a new heap_insert() flag that prevents logical
decoding information from being emitted, and accept during decoding
that there might not be tuple data for toast tables.
Unfortunately that does not fix pre-existing logical decoding
errors. Doing so would require not throwing an error when a filenode
cannot be mapped to a relation during decoding, and that seems too
likely to hide bugs. If it's crucial to fix decoding for an existing
slot, temporarily changing the ERROR in ReorderBufferCommit() to a
WARNING appears to be the best fix.
Author: Andres Freund
Discussion: https://postgr.es/m/20180914021046.oi7dm4ra3ot2g2kt@alap3.anarazel.de
Backpatch: 9.4-, where logical decoding was introduced
Historically we forbade datatype-specific comparison functions from
returning INT_MIN, so that it would be safe to invert the sort order
just by negating the comparison result. However, this was never
really safe for comparison functions that directly return the result
of memcmp(), strcmp(), etc, as POSIX doesn't place any such restriction
on those library functions. Buildfarm results show that at least on
recent Linux on s390x, memcmp() actually does return INT_MIN sometimes,
causing sort failures.
The agreed-on answer is to remove this restriction and fix relevant
call sites to not make such an assumption; code such as "res = -res"
should be replaced by "INVERT_COMPARE_RESULT(res)". The same is needed
in a few places that just directly negated the result of memcmp or
strcmp.
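A macro with the needed property - any negative result, including
INT_MIN, maps to a positive one - looks like this:
    #define INVERT_COMPARE_RESULT(var) \
        ((var) = ((var) < 0) ? 1 : -(var))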
To help find places having this problem, I've also added a compile option
to nbtcompare.c that causes some of the commonly used comparators to
return INT_MIN/INT_MAX instead of their usual -1/+1. It'd likely be
a good idea to have at least one buildfarm member running with
"-DSTRESS_SORT_INT_MIN". That's far from a complete test of course,
but it should help to prevent fresh introductions of such bugs.
This is a longstanding portability hazard, so back-patch to all supported
branches.
Discussion: https://postgr.es/m/20180928185215.ffoq2xrq5d3pafna@alap3.anarazel.de
I (Andres) was more than a bit hasty in committing 33001fd7a7072d48327
after last minute changes, leading to a number of problems (jit output
was only shown for JIT in parallel workers, and just EXPLAIN without
ANALYZE didn't work). Lukas luckily found these issues quickly.
Instead of combining instrumentation in standard_ExecutorEnd(), do
so on demand in the new ExplainPrintJITSummary().
Also update a documentation example of the JIT output, changed in
52050ad8ebec8d831.
Author: Lukas Fittl, with minor changes by me
Discussion: https://postgr.es/m/CAP53PkxmgJht69pabxBXJBM+0oc6kf3KHMborLP7H2ouJ0CCtQ@mail.gmail.com
Backpatch: 11, where JIT compilation was introduced
contrib/pageinspect's tuple_data_split() function thought it could get
away with opening the referenced relation with NoLock. In practice
there's no guarantee that the current session holds any lock on that
rel (even if we just read a page from it), so that this is unsafe.
Switch to using AccessShareLock. Also, postpone closing the relation,
so that we needn't copy its tupdesc. Also, fix unsafe use of
att_isnull() for attributes past the end of the tuple.
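A sketch of the safer pattern:
    Relation    rel = relation_open(relid, AccessShareLock);
    TupleDesc   tupdesc = RelationGetDescr(rel);

    /* ... use tupdesc to split the tuple data ... */

    relation_close(rel, AccessShareLock);   /* only after we're done */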
Per testing with a patch that complains if we open a relation without
holding any lock on it. I don't plan to back-patch that patch, but we
should close the holes it identifies in all supported branches.
Discussion: https://postgr.es/m/2038.1538335244@sss.pgh.pa.us
Previously, when using parallel query, EXPLAIN (ANALYZE)'s JIT
compilation timings did not include the overhead from doing so on the
workers. Fix that.
We do so by simply aggregating the cost of doing JIT compilation on
workers and the leader together. Arguably that's not quite accurate,
because the total time spent doing so is spent in parallel - but it's
hard to do much better. For additional detail, when VERBOSE is
specified, the stats for workers are displayed separately.
Author: Amit Khandekar and Andres Freund
Discussion: https://postgr.es/m/CAJ3gD9eLrz51RK_gTkod+71iDcjpB_N8eC6vU2AW-VicsAERpQ@mail.gmail.com
Backpatch: 11-
Apple's latest rearrangements of the system-supplied headers have broken
building of PL/Perl and PL/Tcl. The only practical way to fix PL/Tcl is to
start using the "-isysroot" compiler flag to point to SDK-supplied headers,
as Apple expects. We must also start distinguishing where to find Perl's
headers from where to find its shared library; but that seems like good
cleanup anyway.
Extensions that formerly did something like -I$(perl_archlibexp)/CORE
should now do -I$(perl_includedir)/CORE instead. perl_archlibexp
is still the place to look for libperl.so, though.
If for some reason you don't like the default -isysroot setting, you can
override that by setting PG_SYSROOT in configure's arguments. I don't
currently think people would need to do so, unless maybe for cross-version
build purposes.
In addition, teach configure where to find tclConfig.sh. Our traditional
method of searching $auto_path hasn't worked for the last couple of macOS
releases, and it now seems clear that Apple's not going to change that.
The workaround of manually specifying --with-tclconfig was annoying
already, but Mojave's made it a lot more so because the sysroot path now
has to be included as well. Let's just wire the knowledge into configure
instead. To avoid breaking builds against non-default Tcl installations
(e.g. MacPorts) wherein the $auto_path method probably still works,
arrange to try the additional case only after all else has failed.
Back-patch to all supported versions, since at least the buildfarm
cares about that. The changes are set up to not do anything on macOS
releases that are old enough to not have functional sysroot trees.
Commit 25fff40 granted execute permission on the function
pg_stat_statements_reset() to the default role "pg_read_all_stats", but
this role is meant to read statistics, not to reset them. Revoke the
permissions on this function from "pg_read_all_stats", and bump the
version of pg_stat_statements in consequence.
Author: Haribabu Kommi
Reviewed-by: Michael Paquier, Amit Kapila
Discussion: https://postgr.es/m/CAJrrPGf5fCnKqXObpwGN9nMyD--tzOf-7LFCJiz59Z1wJ5qj9A@mail.gmail.com