Specifically, this blocks DECLARE ... WITH HOLD and firing of deferred
triggers within index expressions and materialized view queries. An
attacker having permission to create non-temp objects in at least one
schema could execute arbitrary SQL functions under the identity of the
bootstrap superuser. One can work around the vulnerability by disabling
autovacuum and not manually running ANALYZE, CLUSTER, REINDEX, CREATE
INDEX, VACUUM FULL, or REFRESH MATERIALIZED VIEW. (Don't restore from
pg_dump, since it runs some of those commands.) Plain VACUUM (without
FULL) is safe, and all commands are fine when a trusted user owns the
target object. Performance may degrade quickly under this workaround,
however. Back-patch to 9.5 (all supported versions).
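For illustration, the workaround amounts to something like this (a
sketch only, assuming superuser access for the setting change):
ALTER SYSTEM SET autovacuum = off;
SELECT pg_reload_conf();
-- then avoid ANALYZE, CLUSTER, REINDEX, CREATE INDEX, VACUUM FULL,
-- and REFRESH MATERIALIZED VIEW until the fix is installed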
Reviewed by Robert Haas. Reported by Etienne Stalmans.
Security: CVE-2020-25695
This back-patches commit 20d3fe900 into the v12 and v13 branches.
At the time I thought that commit was not fixing any observable
bug, but Bertrand Drouvot showed otherwise: adding a dropped column
to the previously-considered scenario crashes v12 and v13, unless the
dropped column happens to be an integer. That is, of course, because
the tupdesc we derive from the plan output tlist fails to describe
the dropped column accurately, so that we'll do the wrong thing with
a tuple in which that column isn't NULL.
There is no bug in pre-v12 branches because they already did use
the table's real tuple descriptor for any trigger-returned tuple.
It seems that this set of bugs can be blamed on the changes that
removed es_trig_tuple_slot, though I've not attempted to pin that
down precisely.
Although there's no code change needed in HEAD, update the test case
to include a dropped column there too.
Discussion: https://postgr.es/m/db5d97c8-f48a-51e2-7b08-b73d5434d425@amazon.com
Discussion: https://postgr.es/m/16644-5da7ef98a7ac4545@postgresql.org
* Avoid pointlessly highlighting that an index vacuum was executed by a
parallel worker; user doesn't care.
* Don't give the impression that a non-concurrent reindex of an invalid
index on a TOAST table would work, because it wouldn't.
* Add a "translator:" comment for a mysterious message.
Discussion: https://postgr.es/m/20201107034943.GA16596@alvherre.pgsql
Reviewed-by: Michael Paquier <michael@paquier.xyz>
The buildfarm thinks this leads to memory stomps, though annoyingly
I can't duplicate that here. The existing code in strings.pgc is
doing something that doesn't seem to be sanctioned at all really
by the documentation, but I'm disinclined to try to make that nicer
right now. Let's just declare some more output variables in hopes
of working around it.
Add items committed in the last 24 hours. Also correct my failure
to credit Nikita Glukhov for 52ad1e659, which did all the heavy
lifting for 3db322eaa.
These were broken in multiple ways:
* The xbstart and xhstart lexer actions neglected to set
"state_before_str_start" before transitioning to the xb/xh states,
thus possibly resulting in "internal error: unreachable state" later.
* The test for valid string contents at the end of xb state was flat out
wrong, as it accounted incorrectly for the "b" prefix that the xbstart
action had injected. Meanwhile, the xh state had no such check at all.
* The generated literal value failed to include any quote marks.
* The grammar did the wrong thing anyway, typically ignoring the
literal value and emitting something else, since BCONST and XCONST
tokens were handled randomly differently from SCONST tokens.
The first of these problems is evidently an oversight in commit
7f380c59f, but the others seem to be very ancient. The lack of
complaints shows that ECPG users aren't using these syntaxes much
(although I do vaguely remember one previous complaint).
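For reference, the syntaxes in question are ECPG's bit-string and hex
literals, along these lines (illustrative statement; the table name is
made up):
EXEC SQL INSERT INTO tbl VALUES (B'0110'), (X'3f');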
As written, this patch is dependent on 7f380c59f, so it can't go
back further than v13. Given the shortage of complaints, I'm not
excited about adapting the patch to prior branches.
Report and patch by Shenhao Wang (test case adjusted by me)
Discussion: https://postgr.es/m/d6402f1bacb74ecba22ef715dbba17fd@G08CNEXMBPEKD06.g08.fujitsu.local
The list of indexes was being leaked when the function was asked for
an index that doesn't have an index partition in the table partition.
Not a common
case admittedly --and in most cases where it occurs, caller throws an
error anyway-- but worth fixing for cleanliness and in case any
third-party code is calling this function.
While at it, remove use of lfirst_oid() to obtain a value we already
have.
Author: Justin Pryzby <pryzby@telsasoft.com>
Reviewed-by: Michael Paquier <michael@paquier.xyz>
Discussion: https://postgr.es/m/20201105203606.GF22691@telsasoft.com
brin_form_tuple failed to consider that the values may be toasted, inserting
the toast pointer into the index. This may easily result in index
corruption, as the toast data may be deleted and cleaned up by vacuum.
The cleanup however does not care about indexes, leaving invalid toast
pointers behind, which triggers errors like this:
ERROR: missing chunk number 0 for toast value 16433 in pg_toast_16426
A less severe consequence is inconsistent failures due to the index row
being too large, depending on whether brin_form_tuple operated on the
plain or toasted version of the row. For example,
CREATE TABLE t (val TEXT);
INSERT INTO t VALUES ('... long value ...');
CREATE INDEX idx ON t USING brin (val);
would likely succeed, as the row would likely include a toast pointer.
Switching the order of INSERT and CREATE INDEX would likely fail:
ERROR: index row size 8712 exceeds maximum 8152 for index "idx"
because this happens before the row values are toasted.
The bug exists since PostgreSQL 9.5 where BRIN indexes were introduced.
So backpatch all the way back.
Author: Tomas Vondra
Reviewed-by: Alvaro Herrera
Backpatch-through: 9.5
Discussion: https://postgr.es/m/20201001184133.oq5uq75sb45pu3aw@development
Discussion: https://postgr.es/m/20201104010544.zexj52mlldagzowv%40development
As usual, the release notes for other branches will be made by cutting
these down, but put them up for community review first.
Also as usual for a .1 release, there are some entries here that
are not really relevant for v13 because they already appeared in 13.0.
Those'll be removed later.
Revert 59ab4ac32, as well as the followup fix 33862cb9c, in all
branches. We need to think a bit harder about what the behavior
of LOCK TABLE on views should be, and there's no time for that
before next week's releases. We'll take another crack at this
later.
Discussion: https://postgr.es/m/16703-e348f58aab3cf6cc@postgresql.org
Revert 403a3d91c, as well as the followup fix 7f4235032, in all
branches. We need to think a bit harder about what the behavior
of LOCK TABLE on views should be, and there's no time for that
before next week's releases. We'll take another crack at this
later.
Discussion: https://postgr.es/m/16703-e348f58aab3cf6cc@postgresql.org
LOCK TABLE has complained about "infinite recursion" when applied
to a self-referential view, ever since we made it recurse into views
in v11. However, that breaks pg_dump's new assumption that it's
okay to lock every relation. There doesn't seem to be any good
reason to throw an error: if we just abandon the recursion, we've
still satisfied the requirement of locking every referenced relation.
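For instance, a self-referential view can be made with CREATE OR
REPLACE VIEW (illustrative sketch, not necessarily the reporter's
exact case):
CREATE VIEW v AS SELECT 1 AS a;
CREATE OR REPLACE VIEW v AS SELECT a FROM v;
LOCK TABLE v;  -- formerly failed with the infinite-recursion complaint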
Per bug #16703 from Andrew Bille (via Alexander Lakhin).
Discussion: https://postgr.es/m/16703-e348f58aab3cf6cc@postgresql.org
Logic for counting heap TIDs from posting list tuples (added by commit
0d861bbb) was faulty. It didn't count any TIDs/index tuples in the
event of no callback being set. This meant that we incorrectly counted
no index tuples in cleanup-only VACUUMs, which could lead to
pg_class.reltuples being spuriously set to 0 in affected indexes.
To fix, go back to counting items from the page in cases where there is
no callback. This approach isn't very accurate, but it works well
enough in practice while avoiding the expense of accessing every index
tuple during cleanup-only VACUUMs.
Author: Peter Geoghegan <pg@bowt.ie>
Reported-By: Jehan-Guillaume de Rorthais <jgdr@dalibo.com>
Discussion: https://postgr.es/m/20201023174451.69e358f1@firost
Backpatch: 13-, where nbtree deduplication was introduced
hash_array_extended() needs to pass PG_GET_COLLATION() to the hash
function of the element type. Otherwise, the hash function of a
collation-aware data type such as text will error out, since the
introduction of nondeterministic collation made hash functions require
a collation, too.
The consequence of this is that, before this change, hash partitioning
using an array over text in the partition key would not work.
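For instance, a sketch of the previously-failing case (names made up):
CREATE TABLE ht (k text[]) PARTITION BY HASH (k);
CREATE TABLE ht0 PARTITION OF ht FOR VALUES WITH (MODULUS 2, REMAINDER 0);
CREATE TABLE ht1 PARTITION OF ht FOR VALUES WITH (MODULUS 2, REMAINDER 1);
INSERT INTO ht VALUES (ARRAY['abc']);  -- previously failed with a collation error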
Reviewed-by: Heikki Linnakangas <hlinnaka@iki.fi>
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Reviewed-by: Michael Paquier <michael@paquier.xyz>
Discussion: https://www.postgresql.org/message-id/flat/32c1fdae-95c6-5dc6-058a-a90330a3b621%40enterprisedb.com
Commit 6ee3b5fb99 cleaned up most of the long/int64 confusion related to
incremental sort, but the sort debug messages were still using %ld for
int64 variables. So fix that.
Author: Haiying Tang
Backpatch-through: 13, where the incremental sort code was added
Discussion: https://postgr.es/m/4250be9d350c4992abb722a76e288aef%40G08CNEXMBPEKD05.g08.fujitsu.local
When considering Incremental Sort below a Gather Merge, we need to be
a bit more careful when matching pathkeys to EC members. It's not enough
to find a member whose Vars are all in the current relation's target;
volatile expressions in particular need to be contained in the target,
otherwise it's too early to use the pathkey.
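To illustrate, the kind of pathkey at issue contains a volatile
expression, as in (hypothetical query):
SELECT * FROM tab ORDER BY a, random();
The volatile expression has to be computed in the relation's target
before a sort on it can be placed below the Gather Merge.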
Reported-by: Jaime Casanova
Author: James Coleman
Reviewed-by: Tomas Vondra
Backpatch-through: 13, where the incremental sort code was added
Discussion: https://postgr.es/m/CAJGNTeNaxpXgBVcRhJX%2B2vSbq%2BF2kJqGBcvompmpvXb7pq%2BoFA%40mail.gmail.com
If the planner erroneously puts a non-parallel-safe SubPlan into
a parallelized portion of the query tree, nodeSubplan.c will fail
in the worker processes because it finds a null in es_subplanstates,
which it's unable to cope with. It seems worth a test-and-elog to
make that an error case rather than a core dump case.
This probably should have been included in commit 16ebab688, which
was responsible for allowing nulls to appear in es_subplanstates
to begin with. So, back-patch to v10 where that came in.
Discussion: https://postgr.es/m/924226.1604422326@sss.pgh.pa.us
The intention in commit 491c029db was to require superuserness to
change the BYPASSRLS property, but the actual effect of the coding
in AlterRole() was to require superuserness to change anything at all
about a BYPASSRLS role. Other properties of a BYPASSRLS role should
be changeable under the same rules as for a normal role, though.
Fix that, and also take care of some documentation omissions related
to BYPASSRLS and REPLICATION role properties.
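For example (role name made up), with this fix a CREATEROLE role can
once again run
ALTER ROLE some_bypassrls_role VALID UNTIL 'infinity';
without being a superuser, while toggling BYPASSRLS itself still
requires superuser.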
Tom Lane and Stephen Frost, per bug report from Wolfgang Walther.
Back-patch to all supported branches.
Discussion: https://postgr.es/m/a5548a9f-89ee-3167-129d-162b5985fcf8@technowledgy.de
The current implementation cannot handle ALTER TABLE ONLY ... DROP
EXPRESSION correctly, so just forbid it for now.
GENERATED clauses must be attached to the column definition and cannot
be added later like DEFAULT, so if a child table has a generation
expression that the parent does not have, the child column will
necessarily be an attislocal column. So to implement ALTER TABLE ONLY /
DROP EXPRESSION, we'd need extra code to update attislocal of the
direct child tables, somewhat similar to how DROP COLUMN does it, so
that the resulting state can be properly dumped and restored.
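Illustratively, the now-forbidden form is
ALTER TABLE ONLY parent_tbl ALTER COLUMN b DROP EXPRESSION;
where parent_tbl has children that inherit the generated column (the
names here are made up).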
Discussion: https://www.postgresql.org/message-id/flat/15830.1575468847%40sss.pgh.pa.us
fill_hba_line() thought it could get away with passing sizeof(struct
sockaddr_storage) rather than the actual addrlen previously returned
by getaddrinfo(). While that appears to work on many platforms,
it does not work on FreeBSD 11: you get back a failure, which leads
to the view showing NULL for the address and netmask columns in all
rows. The POSIX spec for getnameinfo() is pretty clearly on
FreeBSD's side here: you should pass the actual address length.
So it seems plausible that there are other platforms where this
coding also fails, and we just hadn't noticed.
Also, IMO the fact that getnameinfo() failure leads to a NULL output
is pretty bogus in itself. Our pg_getnameinfo_all() wrapper is
careful to emit "???" on failure, and we should use that in such
cases. NULL should only be emitted in rows that don't have IP
addresses.
Per bug #16695 from Peter Vandivier. Back-patch to v10 where this
code was added.
Discussion: https://postgr.es/m/16695-a665558e2f630be7@postgresql.org
On detecting a corrupted match tag, pglz_decompress() should just
summarily return -1. Breaking out of the loop, as I did in dfc797730,
doesn't quite guarantee that will happen. Also, we can use
unlikely() on that check, just in case it helps.
Backpatch to v13, like the previous patch.
This is useful for checking relation pages without having to load them
into shared buffers, and two cases can make use of that: page
verification in base backups and the online, lock-safe flavor.
Compatibility with past versions is kept by providing a routine that
calls the new extended routine with the set of options matching the
original behavior. Unlike in d401c576, a macro cannot be used here, as
there may be external code relying on the presence of the original
routine.
This is applied down to 11, where this will be used by a follow-up
commit addressing a set of issues with page verification in base
backups.
Extracted from a larger patch by the same author.
Author: Anastasia Lubennikova
Reviewed-by: Michael Paquier, Julien Rouhaud
Discussion: https://postgr.es/m/608f3476-0598-2514-2c03-e05c7d2b0cbd@postgrespro.ru
Backpatch-through: 11
pglz_maximum_compressed_size() potentially underestimated the amount
of compressed data required to produce N bytes of decompressed data;
this is a fault in commit 11a078cf8.
Separately from that, pglz_decompress() failed to protect itself
against corrupt compressed data, particularly off == 0 in a match
tag. Commit c60e520f6 turned such a situation into an infinite loop,
where before it'd just have resulted in garbage output.
The combination of these two bugs seems like it may explain bug #16694
from Tom Vijlbrief, though it's impossible to be quite sure without
direct inspection of the failing session. (One needs to assume that
the pglz_maximum_compressed_size() bug caused us to fail to fetch the
second byte of a match tag, and what happened to be there instead was
a zero. The reported infinite loop is hard to explain without off == 0,
though.)
Aside from fixing the bugs, rewrite associated comments for more
clarity.
Back-patch to v13 where both these commits landed.
Discussion: https://postgr.es/m/16694-f107871e499ec114@postgresql.org
Although error results received from the backend should always have
a SQLSTATE field, ones generated by libpq won't, making this code
vulnerable to a crash after, say, untimely loss of connection.
Noted by Coverity.
Oversight in commit 403a3d91c. Back-patch to 9.5, as that was.
Statistics associated with an index got lost after running REINDEX
CONCURRENTLY, while the non-concurrent case preserves them correctly.
The concurrent and non-concurrent operations need to be consistent for
the end user, and missing statistics would force one to wait for a new
ANALYZE to happen, which could take some time depending on the activity
of the existing autovacuum workers. This issue is fixed by copying any
existing entries in pg_statistic associated with the old index to the
new one. Note that the index's data in the stats collector is already
copied over in the same way.
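For instance (an illustrative sketch; names made up), statistics on
an expression index were affected:
CREATE INDEX expr_idx ON tab (lower(col));
ANALYZE tab;
REINDEX INDEX CONCURRENTLY expr_idx;
-- before this fix, the pg_statistic entries for expr_idx were not
-- carried over to the rebuilt index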
Reported-by: Fabrízio de Royes Mello
Author: Michael Paquier, Fabrízio de Royes Mello
Reviewed-by: Justin Pryzby
Discussion: https://postgr.es/m/CAFcNs+qpFPmiHd1oTXvcPdvAHicJDA9qBUSujgAhUMJyUMb+SA@mail.gmail.com
Backpatch-through: 12
Certain background workers initiate parallel queries while
debug_query_string==NULL, at which point they attempted strlen(NULL)
and died with SIGSEGV. Older debug_query_string observers allow NULL,
so do
likewise in these newer ones. Back-patch to v11, where commit
7de4a1bcc56f494acbd0d6e70781df877dc8ecb5 introduced the first of these.
Discussion: https://postgr.es/m/20201014022636.GA1962668@rfd.leadboat.com
The timetz test cases I added in commit a9632830b were unintentionally
sensitive to whether or not DST is active in the PST8PDT time zone.
Thus, they'll start failing this coming weekend, as reported by
Bernhard M. Wiedemann in bug #16689. Fortunately, DST-awareness is
not significant to the purpose of these test cases, so we can just
force them all to PDT (DST hours) to preserve stability of the
results.
Back-patch to v10, as the prior patch was.
Discussion: https://postgr.es/m/16689-57701daa23b377bf@postgresql.org
In almost all other places, we use plain "r" or "w" mode in popen()
calls (the exceptions being for COPY data). This one has been
overlooked (possibly because it's buried in a ".l" flex file?),
but it's using PG_BINARY_R.
Kensuke Okamura complained in bug #16688 that we fail to strip \r
when stripping the trailing newline from a backtick result string.
That's true enough, but we'd also fail to convert embedded \r\n
cleanly, which also seems undesirable. Fixing the popen() mode
seems like the best way to deal with this.
It's been like this for a long time, so back-patch to all supported
branches.
Discussion: https://postgr.es/m/16688-c649c7b69cd7e6f8@postgresql.org
It's unsafe to compute extraUpdatedCols at parse time because addition
of generated
columns to a table would not invalidate stored rules containing
UPDATEs on the table ... but there might now be dependent generated
columns that were not there when the rule was made. This also fixes
an oversight that rewriteTargetView failed to update extraUpdatedCols
when transforming an UPDATE on an updatable view. (Since the new
calculation is downstream of that, rewriteTargetView doesn't actually
need to do anything; but before, there was a demonstrable bug there.)
In v13 and HEAD, this leads to easily-visible bugs because (since
commit c6679e4fc) we won't recalculate generated columns that aren't
listed in extraUpdatedCols. In v12 this bitmap is mostly just used
for trigger-firing decisions, so you'd only notice a problem if a
trigger cared whether a generated column had been updated.
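As an illustrative sketch of the v13-visible problem (names made up,
not the exact case from the bug report):
CREATE TABLE base (a int, b int);
CREATE TABLE feed (a int);
CREATE RULE r AS ON INSERT TO feed DO ALSO
  UPDATE base SET b = new.a WHERE base.a = new.a;
ALTER TABLE base ADD COLUMN c int GENERATED ALWAYS AS (b * 2) STORED;
-- the rule's stored UPDATE predates column c, so its extraUpdatedCols
-- doesn't list c and c could be left stale when the rule fires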
I'd complained about this back in May, but then forgot about it
until bug #16671 from Michael Paul Killian revived the issue.
Back-patch to v12 where this field was introduced. If existing
stored rules contain any extraUpdatedCols values, they'll be
ignored because the rewriter will overwrite them, so the bug will
be fixed even for existing rules. (But note that if someone were
to update to 13.1 or 12.5, store some rules with UPDATEs on tables
having generated columns, and then downgrade to a prior minor version,
they might observe issues similar to what this patch fixes. That
seems unlikely enough to not be worth going to a lot of effort to fix.)
Discussion: https://postgr.es/m/10206.1588964727@sss.pgh.pa.us
Discussion: https://postgr.es/m/16671-2fa55851859fb166@postgresql.org
Now that LOCK TABLE can take any relation type, acquire lock on all
relations that are to be dumped. This prevents schema changes or
deadlock errors that could cause a dump to fail after expending much
effort. The server is tested for the capability, and the feature is
disabled if the server lacks it, so that a patched pg_dump doesn't fail
when connecting to an unpatched server.
Backpatch to 9.5.
Author: Álvaro Herrera <alvherre@alvh.no-ip.org>
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Reported-by: Wells Oliver <wells.oliver@gmail.com>
Discussion: https://postgr.es/m/20201021200659.GA32358@alvherre.pgsql
The restriction that only tables and views can be locked by LOCK TABLE
is quite arbitrary, since the underlying mechanism can lock any relation
type. Drop the restriction so that programs such as pg_dump can lock
all relations they're interested in, preventing schema changes that
could cause a dump to fail after expending much effort.
Backpatch to 9.5.
Author: Álvaro Herrera <alvherre@alvh.no-ip.org>
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Reported-by: Wells Oliver <wells.oliver@gmail.com>
Discussion: https://postgr.es/m/20201021200659.GA32358@alvherre.pgsql
If the old row has any "missing" attributes that are supposed to
be retrieved from an associated tuple descriptor, the wrong things
happened because the trigger result is shoved directly into an
executor slot that lacks the missing-attribute data. Notably,
CHECK-constraint verification would incorrectly see those columns
as NULL, and so would RETURNING-list evaluation.
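For context, "missing" attributes arise from fast (non-rewriting)
column additions, e.g. (illustrative):
ALTER TABLE t ADD COLUMN b int DEFAULT 42;
Rows written before the ALTER don't physically store b; its value is
supposed to come from the missing-attribute data in the tuple
descriptor.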
Band-aid around this by forcibly expanding the tuple before passing
it to the trigger function. (IMO it was a fundamental misdesign to
put the missing-attribute data into tuple constraints, which so
much of the system considers to be optional. But we're probably
stuck with that now, and will have to continue to apply band-aids
as we find other places with similar issues.)
Back-patch to v12. v11 would also have the issue, except that
commit 920311ab1 already applied a similar band-aid. That forced
expansion in more cases than seem really necessary, though, so
this isn't a directly equivalent fix.
Amit Langote, with some cosmetic changes by me
Discussion: https://postgr.es/m/16644-5da7ef98a7ac4545@postgresql.org
We must not set the "done" flag until after we've executed the
initialization function. Otherwise, other threads can fall through
the initial unlocked test before initialization is really complete.
This has been seen to cause rare failures of ecpg's thread/descriptor
test, and it could presumably cause other sorts of misbehavior in
threaded ECPG-using applications, since ecpglib relies on
pthread_once() in several places.
Diagnosis and patch by me, based on investigation by Alexander Lakhin.
Back-patch to all supported branches (the bug dates to 2007).
Discussion: https://postgr.es/m/16685-d6cd241872c101d3@postgresql.org
The ExplainCloseGroup arguments for incremental sort usage data
didn't match the corresponding ExplainOpenGroup. This only matters
for XML-format output, which is probably why we'd not noticed.
Daniel Gustafsson, per bug #16683 from Frits Jalvingh
Discussion: https://postgr.es/m/16683-8005033324ad34e9@postgresql.org
There's no functional change at all here, but I'm curious to see
whether this change successfully shuts up Coverity's warning about
a useless strcmp(), which appeared with the previous update.
Discussion: http://mm.icann.org/pipermail/tz/2020-October/029370.html
psql's \connect claims to be able to re-use previous connection
parameters, but in fact it only re-uses the database name, user name,
host name (and possibly hostaddr, depending on version), and port.
This is problematic for assorted use cases. Notably, pg_dump[all]
emits "\connect databasename" commands which we would like to have
re-use all other parameters. If such a script is loaded in a psql run
that initially had "-d connstring" with some non-default parameters,
those other parameters would be lost, potentially causing connection
failure. (Thus, this is the same kind of bug addressed in commits
a45bc8a4f and 8e5793ab6, although the details are much different.)
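For instance (illustrative; the options are made up), in a session
started with the connection string "host=db1 dbname=postgres
sslmode=require", running
\connect otherdb
previously dropped sslmode=require, because only the database, user,
host and port were carried over; with this change all parameters of
the old connection are re-used unless explicitly overridden.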
To fix, redesign do_connect() so that it pulls out all properties
of the old PGconn using PQconninfo(), and then replaces individual
properties in that array. In the case where we don't wish to re-use
anything, get libpq's default settings using PQconndefaults() and
replace entries in that, so that we don't need different code paths
for the two cases.
This does result in an additional behavioral change for cases where
the original connection parameters allowed multiple hosts, say
"psql -h host1,host2", and the \connect request allows re-use of the
host setting. Because the previous coding relied on PQhost(), it
would only permit reconnection to the same host originally selected.
Although one can think of scenarios where that's a good thing, there
are others where it is not. Moreover, that behavior doesn't seem to
meet the principle of least surprise, nor was it documented; nor is
it even clear it was intended, since that coding long pre-dates the
addition of multi-host support to libpq. Hence, this patch is content
to drop it and re-use the host list as given.
Per Peter Eisentraut's comments on bug #16604. Back-patch to all
supported branches.
Discussion: https://postgr.es/m/16604-933f4b8791227b15@postgresql.org