Most parts of the parser can expect that the stack overflow check
in transformExprRecurse() will trigger before things get desperate.
However, transformFromClauseItem() can recurse directly into itself
without having analyzed any expressions, so it's possible to drive
it to a stack-overrun crash. Add a check to prevent that.
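For reference, a minimal sketch of the standard guard (check_stack_depth()
is the existing backend helper; the function shown here is illustrative,
not the patched code):

    #include "postgres.h"
    #include "miscadmin.h"          /* check_stack_depth() */
    #include "nodes/nodes.h"

    /*
     * Illustrative recursive transform function: check the stack depth
     * before doing anything else, since this path can recurse directly
     * into itself without passing through transformExprRecurse().
     */
    static Node *
    transform_clause_item(Node *item)
    {
        /* errors out with "stack depth limit exceeded" when too deep */
        check_stack_depth();

        /* ... analyze the item, possibly recursing here ... */
        return item;
    }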
Per bug #17583 from Egor Chindyaskin. Back-patch to all supported
branches.
Richard Guo
Discussion: https://postgr.es/m/17583-33be55b9f981f75c@postgresql.org
As of 897795240cfaaed724af2f53ed2c50c9862f951f, check constraints can
be declared invalid. But that patch didn't update _outConstraint() to
also show the relevant struct fields (which were only applicable to
foreign keys before that). This currently affects only debugging
output, so it has no impact in practice.
96ef3b8ff1cf1950e897fd2f766d4bd9ef0d5d56 accidentally copied an
inapplicable comment from the float8_pass_by_value code to the
data_checksums code. Remove it.
87d3b35a1ca31a9d947a8f919a6006679216dff0 changed pg_upgrade to
check the checksum version rather than just the Boolean presence of
checksums, but didn't change the corresponding field in its ControlData
struct from bool. So this would not work correctly if there were ever
a checksum version larger than 1.
The set of fields printed by _outConstraint() in the CONSTR_IDENTITY
case didn't match the set of fields actually used in that case. (The
code was probably copied carelessly from the CONSTR_DEFAULT case.)
Fix that by using the right set of fields. Since there is no read
support for this node type, this is really just for debugging output
right now, so it doesn't affect anything important.
Previously, we relied on HEAP2_NEW_CID records and XACT_INVALIDATION
records to know whether a transaction had modified the catalog, and that
information is not serialized to the snapshot. Therefore, after a restart,
if logical decoding decodes only the commit record of a transaction
that actually modified a catalog, we will miss adding its XID to the
snapshot, and thus end up looking at catalogs with the wrong
snapshot.
To fix this problem, this changes the snapshot builder so that it
remembers the last-running-xacts list of the decoded RUNNING_XACTS record
after restoring the previously serialized snapshot. Then, we mark the
transaction as containing catalog changes if it's in the list of initial
running transactions and its commit record has XACT_XINFO_HAS_INVALS. To
avoid ABI breakage, we store the array of the initial running transactions
in the static variables InitialRunningXacts and NInitialRunningXacts,
instead of storing those in SnapBuild or ReorderBuffer.
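A hedged sketch of that check as it might look at commit-record decoding
time (the helper and the "builder"/"parsed" context are illustrative;
ReorderBufferXidSetCatalogChanges() and XACT_XINFO_HAS_INVALS are
existing symbols):

    /* Sketch only, not the committed code. */
    static TransactionId *InitialRunningXacts = NULL;
    static int  NInitialRunningXacts = 0;

    static bool
    xid_in_initial_running(TransactionId xid)
    {
        for (int i = 0; i < NInitialRunningXacts; i++)
        {
            if (TransactionIdEquals(InitialRunningXacts[i], xid))
                return true;
        }
        return false;
    }

    /*
     * While decoding a commit record: if the transaction was running
     * when the serialized snapshot was restored and its commit record
     * carries invalidation messages, mark it as having catalog changes.
     */
    if (NInitialRunningXacts > 0 &&
        xid_in_initial_running(xid) &&
        (parsed->xinfo & XACT_XINFO_HAS_INVALS) != 0)
        ReorderBufferXidSetCatalogChanges(builder->reorder, xid, lsn);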
This approach can produce false positives: we could end up adding a
transaction that didn't change the catalog to the snapshot, since we
cannot distinguish whether a transaction has catalog changes by checking
only its commit record. The commit record carries no information about
which (sub)transaction made catalog changes, and XACT_XINFO_HAS_INVALS
doesn't necessarily indicate that the transaction made any. But that
won't be a problem, since we use the snapshot built during decoding only
to read system catalogs.
On the master branch, we took a more future-proof approach by writing
catalog-modifying transactions to the serialized snapshot, which avoids
the above false positives. But we cannot back-patch that because it
changes the SnapBuild struct.
Reported-by: Mike Oh
Author: Masahiko Sawada
Reviewed-by: Amit Kapila, Shi yu, Takamichi Osumi, Kyotaro Horiguchi, Bertrand Drouvot, Ahsan Hadi
Backpatch-through: 10
Discussion: https://postgr.es/m/81D0D8B0-E7C4-4999-B616-1E5004DBDCD2%40amazon.com
fmgr_sql must make expanded-datum arguments read-only, because
it's possible that the function body will pass the argument to
more than one callee function. If one of those functions takes
the datum's R/W property as license to scribble on it, then later
callees will see an unexpected value, leading to wrong answers.
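A hedged sketch of the fix's shape (MakeExpandedObjectReadOnly() is the
existing helper; the loop context and the argtypes array are
illustrative):

    /*
     * Before executing the function body, flip any read-write expanded
     * datums among the call arguments to read-only, so that no callee
     * can scribble on a value that later callees will see again.
     */
    for (int i = 0; i < fcinfo->nargs; i++)
    {
        fcinfo->args[i].value =
            MakeExpandedObjectReadOnly(fcinfo->args[i].value,
                                       fcinfo->args[i].isnull,
                                       get_typlen(argtypes[i]));
    }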
From a performance standpoint, it'd be nice to skip this in the
common case that the argument value is passed to only one callee.
However, detecting that seems fairly hard, and certainly not
something that I care to attempt in a back-patched bug fix.
Per report from Adam Mackler. This has been broken since we
invented expanded datums, so back-patch to all supported branches.
Discussion: https://postgr.es/m/WScDU5qfoZ7PB2gXwNqwGGgDPmWzz08VdydcPFLhOwUKZcdWbblbo-0Lku-qhuEiZoXJ82jpiQU4hOjOcrevYEDeoAvz6nR0IU4IHhXnaCA=@mackler.email
Discussion: https://postgr.es/m/187436.1660143060@sss.pgh.pa.us
Per buildfarm, the output order of \dx+ isn't consistent across
locales. Apply NO_LOCALE to force C locale. There might be a
more localized way, but I'm not seeing it offhand, and anyway
there is nothing in this test module that particularly cares
about locales.
Security: CVE-2022-2625
Previously, if an extension script did CREATE OR REPLACE and there was
an existing object not belonging to the extension, it would overwrite
the object and adopt it into the extension. This is problematic, first
because the overwrite is probably unintentional, and second because we
didn't change the object's ownership. Thus a hostile user could create
an object in advance of an expected CREATE EXTENSION command, and would
then have ownership rights on an extension object, which could be
modified for trojan-horse-type attacks.
Hence, forbid CREATE OR REPLACE of an existing object unless it already
belongs to the extension. (Note that we've always forbidden replacing
an object that belongs to some other extension; only the behavior for
previously-free-standing objects changes here.)
For the same reason, also fail CREATE IF NOT EXISTS when there is
an existing object that doesn't belong to the extension.
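A hedged sketch of the new rule (getExtensionOfObject(),
creating_extension, and CurrentExtensionObject are existing backend
facilities; the function and error wording are illustrative):

    #include "postgres.h"
    #include "catalog/dependency.h"  /* getExtensionOfObject() */
    #include "commands/extension.h"  /* creating_extension, etc. */

    /*
     * Illustrative: while an extension script runs, an existing object
     * may be replaced (or skipped via IF NOT EXISTS) only if it is
     * already a member of the extension being created or updated.
     */
    static void
    require_membership_in_current_extension(Oid classId, Oid objectId)
    {
        if (creating_extension &&
            getExtensionOfObject(classId, objectId) != CurrentExtensionObject)
            ereport(ERROR,
                    (errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
                     errmsg("existing object is not a member of the extension being created")));
    }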
Our thanks to Sven Klemm for reporting this problem.
Security: CVE-2022-2625
Commit 59be1c942a47 already tried to make
src/test/recovery/t/033_replay_tsp_drops more reliable, but it wasn't
enough. Try to improve on that by making this script's use of a
replication slot more like that of other tests. Also, don't drop the slot.
Make a few other stylistic changes while at it. It's still quite slow,
which is another thing that we need to fix in this script.
Backpatch to all supported branches.
Discussion: https://postgr.es/m/349302.1659191875@sss.pgh.pa.us
As usual, the release notes for older branches will be made by cutting
these down, but put them up for community review first.
Due to the out-of-cycle release of 14.4, there are a number of commits
that appeared in 14.4 that are not yet shipped in the earlier branches.
This draft repeats those release note entries for convenience in
preparing the older-branch notes later. They'll be stripped out of
the 14.5 section after that's done.
On closer inspection, mcv.c isn't as broken for ScalarArrayOpExpr
as I thought. The Var-on-right issue is real enough, but actually
it does cope fine with a NULL array constant --- I was misled by
an XXX comment suggesting it didn't. Undo that part of the code
change, and replace the XXX comment with something less misleading.
Since v14, the extended stats machinery will try to estimate for
otherwise-unsupported boolean expressions if they match an expression
available from an extended stats object. mcv.c did not get the memo
about this, and would spit up with "unknown clause type". Fortunately
the case is easy to handle, since we can expect that the expression
yields boolean.
While here, replace some not-terribly-on-point assertions with
simpler runtime tests for lookup failure. That seems appropriate
so that we get an elog not a crash if we somehow get to the new
it-should-be-a-bool-expression code with a subexpression that
doesn't match any stats column.
Per report from Danny Shemesh. Thanks to Justin Pryzby for
preliminary investigation.
Discussion: https://postgr.es/m/CAFZC=QqD6=27wQPOW1pbRa98KPyuyn+7cL_Ay_Ck-roZV84vHg@mail.gmail.com
statext_is_compatible_clause_internal() checked that the arguments
of a ScalarArrayOpExpr are one Var and one Const, but it would allow
cases where the Const was on the left. Subsequent uses of the clause
are not expecting that and would suffer assertion failures or core
dumps. mcv.c also had not bothered to cope with the case of a NULL
array constant, which seems really unacceptably sloppy of somebody.
(Although our tools failed us there too, since AFAIK neither Coverity
nor any compiler warned of the obvious use-of-uninitialized-variable
condition.) It seems best to handle that by having
statext_is_compatible_clause_internal() reject it.
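A hedged sketch of the shape test this implies (the function name and
structure are illustrative):

    /*
     * Illustrative: accept a ScalarArrayOpExpr only in the shape
     * "Var op ARRAY-constant", with the Var on the left and a
     * non-null array Const on the right.
     */
    static bool
    saop_shape_is_compatible(ScalarArrayOpExpr *expr)
    {
        Node   *leftop;
        Node   *rightop;

        if (list_length(expr->args) != 2)
            return false;
        leftop = (Node *) linitial(expr->args);
        rightop = (Node *) lsecond(expr->args);

        /* strip any binary-compatible relabeling of the left input */
        if (IsA(leftop, RelabelType))
            leftop = (Node *) ((RelabelType *) leftop)->arg;

        if (!IsA(leftop, Var))
            return false;       /* rejects Const-on-the-left cases */
        if (!IsA(rightop, Const))
            return false;
        if (((Const *) rightop)->constisnull)
            return false;       /* reject a NULL array constant */

        return true;
    }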
Noted while fixing bug #17570. Back-patch to v13 where the
extended stats code grew some awareness of ScalarArrayOpExpr.
This makes it more convenient for git config to contain the
blame.ignoreRevsFile setting; otherwise current git versions complain if
the file is not present.
I constructed the file for each branch by scraping the file in branch
master for commits that appear in that branch. Because a few additional
pgindent commits have been added to the list in master since the list
was first created, this also propagates those to branches 14 and 15
where the file already existed. Also, some entries appear to have been
made using author-date rather than committer-date in the format string,
so some timestamps are changed. Also remove bogus whitespace in the
suggested `git log` format string.
Backpatch to all supported branches.
Discussion: https://postgr.es/m/20220711163138.o72evdeus5f5yy5z@alvherre.pgsql
Commit a4d75c86b improved the extended-stats logic to allow extended
stats to be collected on expressions not just bare Vars. To apply
such stats, we first verify that the user has permissions to read all
columns used in the stats. (If not, the query will likely fail at
runtime, but the planner ought not do so.) That had to get extended
to check permissions of columns appearing within such expressions,
but the code for that was completely wrong: it applied pull_varattnos
to the wrong pointer, leading to "unrecognized node type" failures.
Furthermore, although you couldn't get to this because of that bug,
it failed to account for the attnum offset applied by pull_varattnos.
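A hedged sketch of the corrected usage (pull_varattnos() and
FirstLowInvalidHeapAttributeNumber are existing facilities; "expr" and
"relid" stand for the stats expression and its range-table index):

    Bitmapset  *attnums = NULL;
    int         attidx = -1;

    /* collect attnums from the expression itself, not some other node */
    pull_varattnos((Node *) expr, relid, &attnums);

    while ((attidx = bms_next_member(attnums, attidx)) >= 0)
    {
        /* undo the offset pull_varattnos applies for system columns */
        AttrNumber  attnum = attidx + FirstLowInvalidHeapAttributeNumber;

        /* ... check column-level SELECT privilege on attnum ... */
    }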
This has escaped recognition so far because the code in question is not
reached when the user has whole-table SELECT privilege (which is the
common case), and because only subexpressions not specially handled
by statext_is_compatible_clause_internal() are at risk.
I think a large part of the reason for this bug is under-documentation
of what statext_is_compatible_clause() is doing and what its arguments
are, so do some work on the comments to try to improve that.
Per bug #17570 from Alexander Kozhemyakin. Patch by Richard Guo;
comments and other cosmetic improvements by me. (Thanks also to
Japin Li for diagnosis.) Back-patch to v14 where the bug came in.
Discussion: https://postgr.es/m/17570-f2f2e0f4bccf0965@postgresql.org
The five commits ending at cc2c7d65fc27e877c9f407587b0b92d46cd6dd16
closed this race condition for v15+. For v14 through v10, add a HINT to
discourage studying the cosmetic problem.
Reviewed by Kyotaro Horiguchi and David Steele.
Discussion: https://postgr.es/m/20220731061747.GA3692882@rfd.leadboat.com
Having additional triggers in a test table made the ORDER BY clauses in
old queries underspecified. Add another column there for stability.
Per sporadic buildfarm pink.
When inserting into a view referencing a foreign table that has WITH
CHECK OPTION constraints, in single-insert mode postgres_fdw retrieves
the data that was actually inserted on the remote side, so that the
WITH CHECK OPTION constraints are enforced against that data locally.
In batch-insert mode, however, it cannot currently retrieve that data
(except for the row first inserted through the view), so the WITH CHECK
OPTION constraints are enforced against the data passed from the core
(except for the first-inserted row). This led to incorrect results when
inserting into a view referencing a foreign table on which a remote
BEFORE ROW INSERT trigger changes the rows inserted through the view so
that they violate the view's WITH CHECK OPTION constraint. Also, such
an insertion into the view caused an assertion failure in assert-enabled
builds.
Fix these by disabling batch insertion when inserting into such a view.
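A hedged sketch of where that decision lands (ri_WithCheckOptions is the
existing ResultRelInfo field; the surrounding callback context is
illustrative):

    /*
     * Illustrative, from the GetForeignModifyBatchSize callback: fall
     * back to single-row inserts whenever WITH CHECK OPTION constraints
     * must be re-checked locally, since batching cannot return the rows
     * actually stored on the remote side.
     */
    if (resultRelInfo->ri_WithCheckOptions != NIL)
        return 1;               /* batching disabled */

    return batch_size;          /* otherwise, the configured batch size */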
Back-patch to v14 where batch insertion was added.
Discussion: https://postgr.es/m/CAPmGK17LpbTZs4m4a_6THP54UBeK9fHvX8aVVA%2BC6yEZDZwQcg%40mail.gmail.com
Using ATSimpleRecursion() in ATPrepCmd() to handle the recursion, as
bbb927b4db9b did, is not correct, because ATPrepCmd() can't distinguish
between triggers that may be cloned and those that may not, and so it
would wrongly try to recurse for the latter category of triggers.
So this commit restores the code in EnableDisableTrigger() that
86f575948c77 had added to do the recursion, which would do it only for
triggers that may be cloned, that is, row-level triggers. This also
changes tablecmds.c so that ATExecCmd() can pass the value of the
ONLY flag down to EnableDisableTrigger() using its new 'recurse'
parameter.
This also fixes what seems like an oversight of 86f575948c77 that the
recursion to partition triggers would only occur if EnableDisableTrigger()
had actually changed the trigger. It is more apt to recurse to inspect
partition triggers even if the parent's trigger didn't need to be
changed: only then can we be certain that all descendants share the same
state afterwards.
Backpatch all the way back to 11, like bbb927b4db9b. Care is taken not
to break ABI compatibility (and no catversion bump is needed).
Co-authored-by: Amit Langote <amitlangote09@gmail.com>
Reviewed-by: Dmitry Koval <d.koval@postgrespro.ru>
Discussion: https://postgr.es/m/CA+HiwqG-cZT3XzGAnEgZQLoQbyfJApVwOTQaCaas1mhpf+4V5A@mail.gmail.com
Ordinarily the functions called in this loop ought to have plenty
of CFIs themselves; but we've now seen a case where no such CFI is
reached, making the loop uninterruptible. Even though that's from
a recently-introduced bug, it seems prudent to install a CFI at
the loop level in all branches.
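A minimal sketch of the pattern (CHECK_FOR_INTERRUPTS() is the standard
macro; the loop body is illustrative):

    for (;;)
    {
        /*
         * Keep the loop interruptible even if nothing called below
         * reaches a CHECK_FOR_INTERRUPTS() of its own.
         */
        CHECK_FOR_INTERRUPTS();

        if (!process_one_item())    /* illustrative body */
            break;
    }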
Per discussion of bug #17558 from Andrew Kesper (an actual fix for
that bug will follow).
Discussion: https://postgr.es/m/17558-3f6599ffcf52fd4a@postgresql.org
Remove the test case added by commit fac1b470a, which never actually
worked to expose the problem it claimed to test. Replace it with
a case that does expose the problem, and also covers the SRF-not-
at-the-top deficiency repaired in 1aa8dad41.
Richard Guo, with some editorialization by me
Discussion: https://postgr.es/m/17564-c7472c2f90ef2da3@postgresql.org
Commit fac1b470a thought we could check for set-returning functions
by testing only the top-level node in an expression tree. This is
wrong in itself, and to make matters worse it encouraged others
to make the same mistake, by exporting tlist.c's special-purpose
IS_SRF_CALL() as a widely-visible macro. I can't find any evidence
that anyone's taken the bait, but it was only a matter of time.
Use expression_returns_set() instead, and stuff the IS_SRF_CALL()
genie back in its bottle, this time with a warning label. I also
added a couple of cross-reference comments.
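A minimal sketch of the preferred test (expression_returns_set() is the
existing nodeFuncs.c helper; the wrapper is illustrative):

    #include "nodes/nodeFuncs.h"    /* expression_returns_set() */

    /*
     * Illustrative: unlike a top-node-only test such as IS_SRF_CALL(),
     * expression_returns_set() walks the whole expression tree, so it
     * also finds SRFs buried below the top level.
     */
    static bool
    tlist_item_returns_set(TargetEntry *tle)
    {
        return expression_returns_set((Node *) tle->expr);
    }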
After a fair amount of fooling around, I've despaired of making
a robust test case that exposes the bug reliably, so no test case
here. (Note that the test case added by fac1b470a is itself
broken, in that it doesn't notice if you remove the code change.
The repro given by the bug submitter currently doesn't fail either
in v15 or HEAD, though I suspect that may indicate an unrelated bug.)
Per bug #17564 from Martijn van Oosterhout. Back-patch to v13,
as the faulty patch was.
Discussion: https://postgr.es/m/17564-c7472c2f90ef2da3@postgresql.org
The sto_using_cursor and sto_using_select tests were coded to exercise
every permutation of their test steps, but AFAICS there is no value in
exercising more than one. This matters because each permutation costs
about six seconds, thanks to the "pg_sleep(6)". Perhaps we could
reduce that, but the useless permutations seem worth getting rid of
in any case. (Note that sto_using_hash_index got it right already.)
While here, clean up some other sloppiness such as an unused table.
This doesn't make too much difference in interactive testing, since the
wasted time is typically masked by parallelization with other tests.
However, the buildfarm runs this as a serial step, which means we can
expect to shave ~40 seconds from every buildfarm run. That makes it
worth back-patching.
Discussion: https://postgr.es/m/2515192.1659454702@sss.pgh.pa.us
We've heard a couple of reports of people having trouble with
multi-gigabyte-sized query-texts files. It occurred to me that on
32-bit platforms, there could be an issue with integer overflow
of calculations associated with the total query text size.
Address that with several changes:
1. Limit pg_stat_statements.max to INT_MAX / 2 not INT_MAX.
The hashtable code will bound it to that anyway unless "long"
is 64 bits. We still need overflow guards on its use, but
this helps.
2. Add a check to prevent extending the query-texts file to
more than MaxAllocHugeSize. If it got that big, qtext_load_file
would certainly fail, so there's not much point in allowing it.
Without this, we'd need to consider whether extent, query_offset,
and related variables shouldn't be off_t not size_t.
3. Adjust the comparisons in need_gc_qtexts() to be done in 64-bit
arithmetic on all platforms. It appears possible that under duress
those multiplications could overflow 32 bits, yielding a false
conclusion that we need to garbage-collect the texts file, which
could lead to repeatedly garbage-collecting after every hash table
insertion.
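A hedged sketch of point 3 (the actual test in need_gc_qtexts() differs
in detail; this just shows the 64-bit promotion):

    /*
     * Promote to 64-bit arithmetic before multiplying, so the product
     * cannot wrap on platforms where size_t is 32 bits.
     */
    static bool
    texts_file_is_bloated(uint64 extent, uint64 mean_query_len, int pgss_max)
    {
        return extent > mean_query_len * (uint64) pgss_max * 2;
    }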
Per report from Bruno da Silva. I'm not convinced that these
issues fully explain his problem; there may be some other bug that's
contributing to the query-texts file becoming so large in the first
place. But it did get that big, so #2 is a reasonable defense,
and #3 could explain the reported performance difficulties.
(See also commit 8bbe4cbd9, which addressed some related bugs.
The second Discussion: link is the thread that led up to that.)
This issue is old, and is primarily a problem for old platforms,
so back-patch.
Discussion: https://postgr.es/m/CAB+Nuk93fL1Q9eLOCotvLP07g7RAv4vbdrkm0cVQohDVMpAb9A@mail.gmail.com
Discussion: https://postgr.es/m/5601D354.5000703@BlueTreble.com
I thought commit fd96d14d9 had plugged all the holes of this sort,
but no, function RTEs could produce oversize tuples too, either
via long coldeflists or just from multiple functions in one RTE.
(I'm pretty sure the other variants of base RTEs aren't a problem,
because they ultimately refer to either a table or a sub-SELECT,
whose widths are enforced elsewhere. But we explicitly allow join
RTEs to be overwidth, as long as you don't try to form their
tuple result.)
Per further discussion of bug #17561. As before, patch all branches.
Discussion: https://postgr.es/m/17561-80350151b9ad2ad4@postgresql.org
errno was not reported correctly after attempting to clone a file,
leading to incorrect error reports. While scanning through the code, I
have not noticed any similar mistakes.
Error introduced in 3a769d8.
Author: Justin Pryzby
Discussion: https://postgr.es/m/20220731134135.GY15006@telsasoft.com
Backpatch-through: 12
The code tried to access ARR_DIMS(v)[0] and ARR_LBOUND(v)[0]
whether or not those values exist. This made the range check
on the "n" argument unstable --- it might or might not fail, and
if it did, it would report garbage for the allowed upper limit.
These bogus accesses would probably annoy Valgrind, and if you
were very unlucky even lead to SIGSEGV.
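A hedged sketch of the guard (ARR_NDIM/ARR_DIMS are the existing array
accessors; the error wording is illustrative):

    /* compute the array length only if the dimension data exists */
    int     arrlen = (ARR_NDIM(v) > 0) ? ARR_DIMS(v)[0] : 0;

    if (n < 0 || n > arrlen)
        ereport(ERROR,
                (errcode(ERRCODE_ARRAY_ELEMENT_ERROR),
                 errmsg("number of elements to trim must be between 0 and %d",
                        arrlen)));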
Report and fix by Martin Kalcher. Back-patch to v14 where this
function was added.
Discussion: https://postgr.es/m/baaeb413-b8a8-4656-5757-ef347e5ec11f@aboutsource.net
The new test is from commit 9e4f914b5e.
With this setting, messages include SQL error numbers, so the pattern
being matched needs to allow for that.
Backpatch to all live branches, like the original.
A RowExpr with more than MaxTupleAttributeNumber columns would fail at
execution anyway, since we cannot form a tuple datum with more than that
many columns. While heap_form_tuple() has a check for too many columns,
it emerges that there are some intermediate bits of code that don't
check and can be driven to failure with sufficiently many columns.
Checking this at parse time seems like the most appropriate place to
install a defense, since we already check SELECT list length there.
While at it, make the SELECT-list-length error use the same errcode
(TOO_MANY_COLUMNS) as heap_form_tuple does, rather than the generic
PROGRAM_LIMIT_EXCEEDED.
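A hedged sketch of the parse-time defense (MaxTupleAttributeNumber and
ERRCODE_TOO_MANY_COLUMNS are existing symbols; placement and wording are
illustrative):

    /* while transforming a RowExpr's argument list */
    if (list_length(args) > MaxTupleAttributeNumber)
        ereport(ERROR,
                (errcode(ERRCODE_TOO_MANY_COLUMNS),
                 errmsg("ROW expressions can have at most %d entries",
                        MaxTupleAttributeNumber)));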
Per bug #17561 from Egor Chindyaskin. The given test case crashes
in all supported branches (and probably a lot further back),
so patch all.
Discussion: https://postgr.es/m/17561-80350151b9ad2ad4@postgresql.org
On FreeBSD, the new test fails due to a WAL file being removed before
the standby has had the chance to copy it. Fix by adding a replication
slot to prevent the removal until after the standby has connected.
Author: Kyotaro Horiguchi <horikyota.ntt@gmail.com>
Reported-by: Matthias van de Meent <boekewurm+postgres@gmail.com>
Discussion: https://postgr.es/m/CAEze2Wj5nau_qpjbwihvmXLfkAWOZ5TKdbnqOc6nKSiRJEoPyQ@mail.gmail.com
Crash recovery on standby may encounter missing directories
when replaying database-creation WAL records. Prior to this
patch, the standby would fail to recover in such a case;
however, the directories could be legitimately missing.
Consider the following sequence of commands:
CREATE DATABASE
DROP DATABASE
DROP TABLESPACE
If, after replaying the last WAL record and removing the
tablespace directory, the standby crashes and has to replay the
create database record again, crash recovery must be able to continue.
A fix for this problem was already attempted in 49d9cfc68bf4, but it
was reverted because of design issues. This new version is based
on Robert Haas' proposal: any missing tablespaces are created
during recovery before reaching consistency. Tablespaces
are created as real directories, and should be deleted
by later replay. CheckRecoveryConsistency ensures
they have disappeared.
The problems detected by this new code are reported as PANIC,
except when allow_in_place_tablespaces is set to ON, in which
case they are reported as WARNING. Apart from making tests possible, this
gives users an escape hatch in case things don't go as planned.
Author: Kyotaro Horiguchi <horikyota.ntt@gmail.com>
Author: Asim R Praveen <apraveen@pivotal.io>
Author: Paul Guo <paulguo@gmail.com>
Reviewed-by: Anastasia Lubennikova <lubennikovaav@gmail.com> (older versions)
Reviewed-by: Fujii Masao <masao.fujii@oss.nttdata.com> (older versions)
Reviewed-by: Michaël Paquier <michael@paquier.xyz>
Diagnosed-by: Paul Guo <paulguo@gmail.com>
Discussion: https://postgr.es/m/CAEET0ZGx9AvioViLf7nbR_8tH9-=27DN5xWJ2P9-ROH16e4JUA@mail.gmail.com
On Windows with MSVC, get_dirent_type() was recently made to return
DT_LNK for junction points by commit 9d3444dc, which fixed some
defective dirent.c code.
On Windows with Cygwin, get_dirent_type() already worked for symlinks,
as it does on POSIX systems, because Cygwin has its own fake symlinks
that behave like POSIX (on closer inspection, Cygwin's dirent has the
BSD d_type extension but it's probably always DT_UNKNOWN, so we fall
back to lstat(), which understands Cygwin symlinks with S_ISLNK()).
On Windows with MinGW/MSYS, we need extra code, because the MinGW
runtime has its own readdir() without d_type, and the lstat()-based
fallback has no knowledge of our convention for treating junctions as
symlinks.
Back-patch to 14, where get_dirent_type() landed.
Reported-by: Andrew Dunstan <andrew@dunslane.net>
Discussion: https://postgr.es/m/b9ddf605-6b36-f90d-7c30-7b3e95c46276%40dunslane.net
This is a backpatch to branches 10-14 of the following commits:
7170f2159fb2 Allow "in place" tablespaces.
c6f2f01611d4 Fix pg_basebackup with in-place tablespaces.
f6f0db4d6240 Fix pg_tablespace_location() with in-place tablespaces
7a7cd84893e0 doc: Remove mention to in-place tablespaces for pg_tablespace_location()
5344723755bd Remove unnecessary Windows-specific basebackup code.
In-place tablespaces were introduced as a testing helper mechanism, but
they are going to be used for a bugfix in WAL replay to be backpatched
to all stable branches.
I (Álvaro) had to adjust some code to account for lack of
get_dirent_type() in branches prior to 14.
Author: Thomas Munro <thomas.munro@gmail.com>
Author: Michaël Paquier <michael@paquier.xyz>
Author: Álvaro Herrera <alvherre@alvh.no-ip.org>
Discussion: https://postgr.es/m/20220722081858.omhn2in5zt3g4nek@alvherre.pgsql