After commit 5a2fed911a85ed6d8a015a6bafe3a0d9a69334ae, the catalog state
resulting from these commands ceased to affect sessions. Restore the
longstanding behavior, which is like beginning the session with a SET
ROLE command. If cherry-picking the CVE-2024-10978 fixes, default to
including this, too. (This fixes an unintended side effect of fixing
CVE-2024-10978.) Back-patch to v12, like that commit. The release team
decided to include v12, despite the original intent to halt v12 commits
earlier this week.
Tom Lane and Noah Misch. Reported by Etienne LAFARGE.
Discussion: https://postgr.es/m/CADOZwSb0UsEr4_UTFXC5k7=fyyK8uKXekucd+-uuGjJsGBfxgw@mail.gmail.com
Previously, LogicalIncreaseRestartDecodingForSlot() accidentally
accepted any LSN as the candidate_lsn (and set candidate_valid) even
after the restart_lsn of the replication slot was updated, so it
potentially caused the restart_lsn to move backwards.
A scenario where this could happen in logical replication is: after a
logical replication restart, based on previous candidate_lsn and
candidate_valid values in memory, the restart_lsn advances upon
receiving a subscriber acknowledgment. Then, logical decoding restarts
from an older point, setting candidate_lsn and candidate_valid based
on an old RUNNING_XACTS record. Subsequent subscriber acknowledgments
then update the restart_lsn to an LSN older than the current value.
In the reported case, after WAL files were removed by a checkpoint,
the retreated restart_lsn prevented logical replication from
restarting due to missing WAL segments.
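As a quick way to observe the symptom, the slot's restart_lsn can be
watched across subscriber acknowledgments (the slot name below is
hypothetical; this is just a monitoring sketch):
SELECT slot_name, restart_lsn, confirmed_flush_lsn
FROM pg_replication_slots WHERE slot_name = 'my_sub_slot';
-- with the fix, the restart_lsn reported here should never move backwards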
This change essentially turns the 'if' condition into an 'else if'
condition within the function. The previous code had an asymmetry in
this regard compared to LogicalIncreaseXminForSlot(), which does
almost the same thing for different fields.
The WAL removal issue was reported by Hubert Depesz Lubaczewski.
Backpatch to all supported versions, since the bug has existed since
9.4, where logical decoding was introduced.
Reviewed-by: Tomas Vondra, Ashutosh Bapat, Amit Kapila
Discussion: https://postgr.es/m/Yz2hivgyjS1RfMKs%40depesz.com
Discussion: https://postgr.es/m/85fff40e-148b-4e86-b921-b4b846289132%40vondra.me
Backpatch-through: 13
In commit 08c0d6ad6 which introduced "rainbow" arcs in regex NFAs,
I didn't think terribly hard about what to do when creating the color
complement of a rainbow arc. Clearly, the complement cannot match any
characters, and I took the easy way out by just not building any arcs
at all in the complement arc set. That mostly works, but Nikolay
Shaplov found a case where it doesn't: if we decide to delete that
sub-NFA later because it's inside a "{0}" quantifier, delsub()
suffers an assertion failure. That's because delsub() relies on
the target sub-NFA being fully connected. That was always true
before, and the best fix seems to be to restore that property.
Hence, invent a new arc type CANTMATCH that can be generated in
place of an empty color complement, and drop it again later when we
start NFA optimization. (At that point we don't need to do delsub()
any more, and besides there are other cases where NFA optimization can
lead to disconnected subgraphs.)
It appears that this bug has no consequences in a non-assert-enabled
build: there will be some transiently leaked NFA states/arcs, but
they'll get cleaned up eventually. Still, we don't like assertion
failures, so back-patch to v14 where rainbow arcs were introduced.
Per bug #18708 from Nikolay Shaplov.
Discussion: https://postgr.es/m/18708-f94f2599c9d2c005@postgresql.org
Allowing foreign keys where the referenced and the referencing columns
have collations with different notions of equality is problematic.
This can only happen when using nondeterministic collations, for
example, if the referencing column is case-insensitive and the
referenced column is not, or vice versa. It does not happen if both
collations are deterministic.
To show one example:
CREATE COLLATION case_insensitive (provider = icu, deterministic = false, locale = 'und-u-ks-level2');
CREATE TABLE pktable (x text COLLATE "C" PRIMARY KEY);
CREATE TABLE fktable (x text COLLATE case_insensitive REFERENCES pktable ON UPDATE CASCADE ON DELETE CASCADE);
INSERT INTO pktable VALUES ('A'), ('a');
INSERT INTO fktable VALUES ('A');
BEGIN; DELETE FROM pktable WHERE x = 'a'; TABLE fktable; ROLLBACK;
BEGIN; DELETE FROM pktable WHERE x = 'A'; TABLE fktable; ROLLBACK;
Both of these DELETE statements delete the one row from fktable. So
this means that one row from fktable references two rows in pktable,
which should not happen. (That's why a primary key or unique
constraint is required on pktable.)
When nondeterministic collations were implemented, the SQL standard
available to yours truly said that referential integrity checks should
be performed with the collation of the referenced column, and so
that's how we implemented it. But this turned out to be a mistake in
the SQL standard, for the same reasons as above, and it was later fixed
(in SQL:2016) to require both collations to be the same. So that's
what we are aiming for here.
We don't have to be quite so strict. We can allow different
collations if they are both deterministic. This is also good for
backward compatibility.
So the new rule is that the collations either have to be the same or
both deterministic. Or in other words, if one of them is
nondeterministic, then both have to be the same.
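To illustrate the new rule with the tables above (a sketch; the exact
error wording aside, and with new table names to avoid clashing with
the earlier example):
-- rejected now: nondeterministic referencing collation differing from
-- the referenced column's collation
CREATE TABLE fktable_ci (x text COLLATE case_insensitive REFERENCES pktable);
-- still allowed: collations differ, but both are deterministic
CREATE TABLE fktable_det (x text COLLATE "POSIX" REFERENCES pktable);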
Users upgrading from earlier versions whose setups are affected will
need to change their schemas (i.e., change one or both collations in
affected foreign-key relationships) before the upgrade will succeed.
Some of the nice test cases for the previous situation in
collate.icu.utf8.sql are now obsolete. They are changed to just check
the error checking of the new rule. Note that collate.sql already
contained a test for foreign keys with different deterministic
collations.
A bunch of code in ri_triggers.c that added a COLLATE clause to
enforce the referenced column's collation can be removed, because both
columns now have to have the same notion of equality, so it doesn't
matter which one is used.
Reported-by: Paul Jungwirth <pj@illuminatedcomputing.com>
Reviewed-by: Jian He <jian.universality@gmail.com>
Discussion: https://www.postgresql.org/message-id/flat/78d824e0-b21e-480d-a252-e4b84bc2c24b@illuminatedcomputing.com
Previously, in unlucky cases, it was possible for pg_rewind to remove
certain WAL segments from the rewound demoted primary. In particular
this happens if those files have been marked for archival (i.e., their
.ready files were created) but not yet archived; the newly promoted node
no longer has such files because they have been recycled, but they
are likely critical for recovery in the demoted node. If pg_rewind
removes them, recovery is not possible anymore.
Fix this by maintaining a hash table of such files, built during the
scan that looks for a checkpoint, which the decide_file_actions phase
can consult so that it knows to preserve them.
Backpatch to 14. The problem also exists in 13, but that branch was not
blessed with commit eb00f1d4bf96, so this patch is difficult to apply
there. Users of older releases will just have to continue to be extra
careful when rewinding.
Co-authored-by: Полина Бунгина (Polina Bungina) <bungina@gmail.com>
Co-authored-by: Alexander Kukushkin <cyberdemn@gmail.com>
Reviewed-by: Kyotaro Horiguchi <horikyota.ntt@gmail.com>
Reviewed-by: Atsushi Torikoshi <torikoshia@oss.nttdata.com>
Discussion: https://postgr.es/m/CAAtGL4AhzmBRsEsaDdz7065T+k+BscNadfTqP1NcPmsqwA5HBw@mail.gmail.com
Refactor objectNamesToOids() to use get_object_address() internally if
possible. Not only does this save a lot of code, it also allows us to
use the object locking provided by get_object_address() for
GRANT/REVOKE. There was previously a code comment that complained
about the lack of locking in objectNamesToOids(), which is now fixed.
The check in ExecGrant_Type_check() is obsolete because
get_object_address_type() already does the same check.
Reviewed-by: Bertrand Drouvot <bertranddrouvot.pg@gmail.com>
Discussion: https://www.postgresql.org/message-id/flat/bf72b82c-124d-4efa-a484-bb928e9494e4@eisentraut.org
In EEOP_BOOL_AND_STEP* and EEOP_BOOL_OR_STEP*, we emitted pointless
instructions to store resnull/resvalue values that had just been
loaded from the same fields by the previous instructions. They will
surely get optimized away by LLVM if any optimizations are enabled,
but it's better to not emit them in the first place. In
EEOP_BOOL_NOT_STEP, similar story with resnull.
In EEOP_NULLIF, when it returns NULL, there was also a redundant store
to resvalue just after storing a 0 to it. The value of resvalue
doesn't matter when resnull is set, so in fact even storing the 0 is
unnecessary, but I kept that because we tend to do that for general
tidiness.
Author: Xing Guo <higuoxing@gmail.com>
Reviewed-by: Andreas Karlsson <andreas@proxel.se>
Discussion: https://www.postgresql.org/message-id/CACpMh%2BC%3Dg13WdvzLRSponsVWGgxwDSMzQWM4Gz0heOyaA0-N6g@mail.gmail.com
Some places declared a Relation before calling get_object_address()
only to assert that the relation is NULL after the call.
The new assertion allows passing NULL as the relation argument at
those places, making the code cleaner and easier to understand.
Author: Bertrand Drouvot <bertranddrouvot.pg@gmail.com>
Discussion: https://www.postgresql.org/message-id/ZzG34eNrT83W/Orz@ip-10-97-1-34.eu-west-3.compute.internal
This fixes a set of race conditions with cumulative statistics where a
shared stats entry could be dropped while it should still be valid, in
the event that it is reused: an entry may refer to a different object
but require the same hash key. This can happen with various stats
kinds, like:
- Replication slots, which internally compute an index number that
different slot names can end up sharing.
- Stats kinds that use an OID in the object key, where an OID
wraparound can cause the same key to be reused for a different object.
- As of PostgreSQL 18, custom pgstats kinds could also be an issue,
depending on their implementation.
This issue is fixed by introducing a counter called "generation" in the
shared entries via PgStatShared_HashEntry, initialized at 0 when an
entry is created and incremented when the same entry is reused, to avoid
concurrent issues on drop because of other backends still holding a
reference to it. This "generation" is copied to the local copy that a
backend holds when looking at an object, then cross-checked with the
shared entry to make sure that the entry is not dropped once it has
been reused, even if its "refcount" would otherwise justify doing so.
This problem could show up when a backend shuts down and needs to
discard any entries it still holds, causing statistics to be removed
when they should not be, or even an assertion failure. Another report
involved a failure in a standby after an OID wraparound, where the
startup process would FATAL on a "can only drop stats once", stopping
recovery abruptly. The buildfarm has been sporadically complaining
about the problem, as well, but the window is hard to reach with the
in-core tests.
Note that the issue can be reproduced easily by adding a sleep before
dshash_find() in pgstat_release_entry_ref() to enlarge the problematic
window while repeating test_decoding's isolation test oldest_xmin a
couple of times, for example, as pointed out by Alexander Lakhin.
Reported-by: Alexander Lakhin, Peter Smith
Author: Kyotaro Horiguchi, Michael Paquier
Reviewed-by: Bertrand Drouvot
Discussion: https://postgr.es/m/CAA4eK1KxuMVyAryz_Vk5yq3ejgKYcL6F45Hj9ZnMNBS-g+PuZg@mail.gmail.com
Discussion: https://postgr.es/m/17947-b9554521ad963c9c@postgresql.org
Backpatch-through: 15
Previously, only backends, autovacuum workers, and background workers
had an entry in the PMChildFlags array. With this commit, all
postmaster child processes, including all the aux processes, have an
entry. Dead-end backends still don't get an entry, though, and other
processes that don't touch shared memory will never mark their
PMChildFlags entry as active.
We now maintain separate freelists for different kinds of child
processes. That ensures that there are always slots available for
autovacuum and background workers. Previously, pre-authentication
backends could prevent autovacuum or background workers from starting
up, by using up all the slots.
The code to manage the slots in the postmaster process is in a new
pmchild.c source file, because postmaster.c is just so large.
Assigning pmsignal slot numbers is now pmchild.c's responsibility.
This replaces the PMChildInUse array in pmsignal.c.
Some of the comments in postmaster.c still talked about the "stats
process", but that was removed in commit 5891c7a8ed. Fix those while
we're at it.
Reviewed-by: Andres Freund <andres@anarazel.de>
Discussion: https://www.postgresql.org/message-id/a102f15f-eac4-4ff2-af02-f9ff209ec66f@iki.fi
Previously, the postmaster would never try to kill dead-end child
processes, even if there were no other processes left. A dead-end
backend will eventually exit, when authentication_timeout expires, but
if a dead-end backend is the only thing that's preventing the server
from shutting down, it seems better to kill it immediately. That's
particularly important if there is a bug in the early startup code
that prevents a dead-end child from timing out and exiting normally.
Includes a test for that case where a dead-end backend previously
prevented the server from shutting down.
Reviewed-by: Andres Freund <andres@anarazel.de>
Discussion: https://www.postgresql.org/message-id/a102f15f-eac4-4ff2-af02-f9ff209ec66f@iki.fi
These two functions with their different argument lists have never been
tested in this module, so let's add something.
Author: Ronan Dunklau
Discussion: https://postgr.es/m/ZzMSJkiNZhimjXWx@paquier.xyz
This makes it easier to add precondition assertions. We now assert that
the last call to _bt_readpage succeeded, and that the current item index
is within the bounds of the currPos items array.
Author: Peter Geoghegan <pg@bowt.ie>
Reviewed-By: Masahiro Ikeda <ikedamsh@oss.nttdata.com>
Discussion: https://postgr.es/m/CAH2-WznFkEs9K1PtNruti5JjawY-dwj+gkaEh_k1ZE+1xLLGkA@mail.gmail.com
Because in the 18 cycle we turned checksums on by default with commit
04bec894a04c, and pg_upgrade fails if the setting doesn't match in the
old and new clusters, the built-in cross-version pg_upgrade test fails
if the old version is older than 18. Fix the script so that it creates
the old cluster with checksums enabled (-k) in cross-version scenarios.
This went unnoticed because the buildfarm doesn't use the same test code
for cross-version testing.
Reviewed-by: Peter Eisentraut <peter@eisentraut.org>
Discussion: https://postgr.es/m/202411071838.7fgkb7uvavvz@alvherre.pgsql
InjectionPointEntry->name was described as a hash key, which was fine
when introduced in d86d20f0ba79, but it no longer is.
Oversight in 86db52a5062a, which changed the way injection points are
stored in shared memory from a hash table to an array.
Backpatch-through: 17
Maintain the pg_stat_user_indexes.idx_scan pgstat counter during
contrib/Bloom index scans.
Oversight in commit 9ee014fc, which added the Bloom index contrib
module.
Author: Masahiro Ikeda <ikedamsh@oss.nttdata.com>
Reviewed-By: Peter Geoghegan <pg@bowt.ie>
Discussion: https://postgr.es/m/c48839d881388ee401a01807c686004d@oss.nttdata.com
Backpatch: 13- (all supported branches).
Multiple buildfarm animals warn that a newly added Assert() can never
fail; remove it to avoid the noise. While at it, use
direct assignment to obtain the value we need, avoiding an unnecessary
memcpy().
(I decided to remove the "pfree" call for the detoasted short-datum;
because this is only used for DDL, it's not problematic to leak such a
small allocation.)
Noted by Tom Lane about 14e87ffa5c54.
Discussion: https://postgr.es/m/3649828.1731083171@sss.pgh.pa.us
72a5b1fc8804 was the missing piece for the conversion of this module.
pg_freespacemap is bumped to 1.3, with its function pg_freespace(regclass)
converted to this new style.
There are other modules in the tree that need a similar treatment; these
will be handled later.
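Usage is unchanged; only the SQL-level definition of the function is
converted (the relation queried here is just an example):
CREATE EXTENSION pg_freespacemap;
SELECT * FROM pg_freespace('pg_class') LIMIT 5;
-- existing installations can move to the new definition with
ALTER EXTENSION pg_freespacemap UPDATE TO '1.3';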
Author: Tom Lane
Reviewed-by: Ronan Dunklau
Discussion: https://postgr.es/m/3395418.1618352794@sss.pgh.pa.us
The current code calls array_eq() without providing an FmgrInfo. This commit
initializes the FmgrInfo and uses the C collation as the safe option
for text comparison, because we don't know anything about the semantics of
opclass options.
Backpatch to 13, where opclass options were introduced.
Reported-by: Nicolas Maus
Discussion: https://postgr.es/m/18692-72ea398df3ec6712%40postgresql.org
Backpatch-through: 13
Commit 5a2fed911 had an unexpected side-effect: the parallel worker
launched for the new test case would fail if it couldn't use a
superuser-reserved connection slot. The reason that test failed
while all our pre-existing ones worked is that the connection
privilege tests in InitPostgres had been based on the superuserness
of the leader's AuthenticatedUserId, but after the rearrangements
of 5a2fed911 we were testing the superuserness of CurrentUserId,
which the new test case deliberately made to be a non-superuser.
This all seems very accidental and probably not the behavior we really
want, but a security patch is no time to be redesigning things.
Pending some discussion about desirable semantics, hack it so that
InitPostgres continues to pay attention to the superuserness of
AuthenticatedUserId when starting a parallel worker.
Nathan Bossart and Tom Lane, per buildfarm member sawshark.
Security: CVE-2024-10978
TestUpgradeXversion knows how to make the main regression database's
references to pg_regress.so be version-independent. But it doesn't
do that for plperl's database, so that the C function added by
commit b7e3a52a8 is causing cross-version upgrade test failures.
Path of least resistance is to just drop the function at the end
of the new test.
In <= v14, also take the opportunity to clean up the generated
test files.
Security: CVE-2024-10979
meson makes the backslashes in text2macro.pl's --strip argument
into forward slashes, effectively disabling comment stripping.
That hasn't caused us issues before, but it breaks the test case
for b7e3a52a8. We don't really need the pattern to be adjustable,
so just hard-wire it into the script instead.
Context: https://github.com/mesonbuild/meson/issues/1564
Security: CVE-2024-10979
The SQL spec mandates that SET SESSION AUTHORIZATION implies
SET ROLE NONE. We tried to implement that within the lowest-level
functions that manipulate these settings, but that was a bad idea.
In particular, guc.c assumes that it doesn't matter in what order
it applies GUC variable updates, but that was not the case for these
two variables. This problem, compounded by some hackish attempts to
work around it, led to some security-grade issues:
* Rolling back a transaction that had done SET SESSION AUTHORIZATION
would revert to SET ROLE NONE, even if that had not been the previous
state, so that the effective user ID might now be different from what
it had been.
* The same for SET SESSION AUTHORIZATION in a function SET clause.
* If a parallel worker inspected current_setting('role'), it saw
"none" even when it should see something else.
Also, although the parallel worker startup code intended to cope
with the current role's pg_authid row having disappeared, its
implementation of that was incomplete, so it would still fail.
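To illustrate the spec requirement and the first problem above (role
and user names are hypothetical; this is only a sketch):
SET ROLE some_role;
BEGIN;
SET SESSION AUTHORIZATION other_user;  -- per spec, also implies SET ROLE NONE
ROLLBACK;
SELECT current_setting('session_authorization'), current_setting('role');
-- with the fix, both settings revert to their pre-transaction values;
-- previously "role" could be left at "none" after the rollback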
Fix by fully separating the miscinit.c functions that assign
session_authorization from those that assign role. To implement the
spec's requirement, teach set_config_option itself to perform "SET
ROLE NONE" when it sets session_authorization. (This is undoubtedly
ugly, but the alternatives seem worse. In particular, there's no way
to do it within assign_session_authorization without incompatible
changes in the API for GUC assign hooks.) Also, improve
ParallelWorkerMain to directly set all the relevant user-ID variables
instead of relying on some of them to get set indirectly. That
allows us to survive not finding the pg_authid row during worker
startup.
In v16 and earlier, this includes back-patching 9987a7bf3 which
fixed a violation of GUC coding rules: SetSessionAuthorization
is not an appropriate place to be throwing errors from.
Security: CVE-2024-10978
If a CTE, subquery, sublink, security invoker view, or coercion
projection references a table with row-level security policies, we
neglected to mark the plan as potentially dependent on which role
is executing it. This could lead to later executions in the same
session returning or hiding rows that should have been hidden or
returned instead.
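A sketch of an affected pattern (roles, table, and policy are all
hypothetical):
CREATE ROLE alice; CREATE ROLE bob;
CREATE TABLE t (owner name, secret text);
INSERT INTO t VALUES ('alice', 'a1'), ('bob', 'b1');
ALTER TABLE t ENABLE ROW LEVEL SECURITY;
CREATE POLICY p ON t USING (owner = current_user);
CREATE VIEW v WITH (security_invoker = true) AS SELECT * FROM t;
GRANT SELECT ON t, v TO alice, bob;
SET ROLE alice;
PREPARE q AS SELECT * FROM v;
EXECUTE q;   -- alice's row
SET ROLE bob;
EXECUTE q;   -- must show bob's row, not rows from a plan cached for alice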
Reported-by: Wolfgang Walther
Reviewed-by: Noah Misch
Security: CVE-2024-10976
Backpatch-through: 12
Many process environment variables (e.g. PATH) bypass the containment
expected of a trusted PL. Hence, trusted PLs must not offer features
that achieve setenv(). Otherwise, an attacker having USAGE privilege on
the language often can achieve arbitrary code execution, even if the
attacker lacks a database server operating system user.
To fix PL/Perl, replace trusted PL/Perl %ENV with a tied hash that just
replaces each modification attempt with a warning. Sites that reach
these warnings should evaluate the application-specific implications of
proceeding without the environment modification:
Can the application reasonably proceed without the modification?
If no, switch to plperlu or another approach.
If yes, the application should change the code to stop attempting
environment modifications. If that's too difficult, add "untie
%main::ENV" in any code executed before the warning. For example,
one might add it to the start of the affected function or even to
the plperl.on_plperl_init setting.
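A sketch of what the new behavior looks like (the function shown is
hypothetical):
CREATE FUNCTION try_setenv() RETURNS void LANGUAGE plperl AS $$
  $ENV{PATH} = '/tmp';  # now only raises a warning; the environment is unchanged
$$;
SELECT try_setenv();
-- if the modification is genuinely needed, use plperlu instead, or run
-- "untie %main::ENV" beforehand (e.g. from plperl.on_plperl_init)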
In passing, link to Perl's guidance about the Perl features behind the
security posture of PL/Perl.
Back-patch to v12 (all supported versions).
Andrew Dunstan and Noah Misch
Security: CVE-2024-10979
The GUC max_replication_slots has a different meaning for sending servers
and subscribers. Add cross-links in each section for easy reference.
Author: Tristan Partin
Discussion: https://postgr.es/m/D5FNEPMMFHFX.1OQBCML0TU5AH@partin.io
Two attributes are added to pg_stat_database:
* parallel_workers_to_launch, counting the total number of parallel
workers that were planned to be launched.
* parallel_workers_launched, counting the total number of parallel
workers actually launched.
The ratio of these two fields can provide a hint that there are not
enough slots available when launching parallel workers; this is also
useful when pg_stat_statements is not deployed on an instance (see
cf54a2c00254).
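For example, a hypothetical monitoring query comparing the two counters:
SELECT datname, parallel_workers_to_launch, parallel_workers_launched
FROM pg_stat_database WHERE datname = current_database();
-- a launched count persistently below the to-launch count suggests that
-- max_parallel_workers or max_worker_processes may be set too low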
This commit relies on de3a2ea3b264, which added two fields to EState
that get incremented when executing Gather or GatherMerge nodes.
A test is added in select_parallel, where parallel workers are spawned.
Bump catalog version.
Author: Benoit Lobréau
Discussion: https://postgr.es/m/783bc7f7-659a-42fa-99dd-ee0565644e25@dalibo.com
This commit changes libpq so that errors reported by the backend during
the protocol negotiation for SSL and GSS are discarded by the client, as
these may include arbitrary bytes that, if consumed by the client, could
be written to the client's terminal.
A failure during SSL negotiation now leads to an immediately reported
error, without retrying any other allowed method, such as falling back
to a plaintext connection.
A failure with GSS discards the error message received, and we allow a
fallback, as the error may be caused by a connection attempt against a
pre-12 server, GSS encryption having been introduced in v12. This was a
problem only with v17 and newer versions; older versions already discard
the error message in this case, assuming a
failure caused by a lack of support for GSS encryption.
Author: Jacob Champion
Reviewed-by: Peter Eisentraut, Heikki Linnakangas, Michael Paquier
Security: CVE-2024-10977
Backpatch-through: 12
A single PGSS entry's spinlock is used to be able to modify "counters"
without holding pgss->lock exclusively, as mentioned at the top of
pg_stat_statements.c and within pgssEntry.
Within a single pgssEntry, stats_since and minmax_stats_since are never
modified without holding pgss->lock exclusively, so there is no need to
hold an entry's spinlock when reading stats_since and
minmax_stats_since, as was previously done when scanning all the PGSS
entries during function calls of pg_stat_statements().
This also restores the consistency between the code and the comments
about the entry's spinlock usage. This change is a performance
improvement (it can be argued that this is a logic bug), so there is no
need for a backpatch. This also avoids reading two fields while holding
an entry's spinlock.
Author: Karina Litskevich
Reviewed-by: Michael Paquier, wenhui qiu
Discussion: https://postgr.es/m/CACiT8ibhCmzbcOxM0v4pRLH3abk-95LPkt7_uC2JMP+miPjxsg@mail.gmail.com
The code in question (pg_preadv() and pg_pwritev()) has been around
for a while, but commit 15c9ac3629 moved it to a header file. If
third-party code that includes this header file is built with
-Wsign-compare on a system without preadv() or pwritev(), warnings
ensue. This commit fixes said warnings by casting the result of
pg_pread()/pg_pwrite() to size_t, which should be safe because we
will have already checked for a negative value.
Author: Wolfgang Walther
Discussion: https://postgr.es/m/16989737-1aa8-48fd-8dfe-b7ada06509ab%40technowledgy.de
Backpatch-through: 17
When _bt_readnextpage is called with our nbtree parallel scan already
seized (i.e. when it is directly called by _bt_first), we never expect a
prior call to _bt_readpage for lastcurrblkno to already indicate that
the scan should end -- the _bt_first caller's blkno must always be read.
After all, the "prior" _bt_readpage call (the call for lastcurrblkno)
probably took place in some other backend (and it might not even have
finished by the time our backend reaches _bt_first/_bt_readnextpage).
Add a documenting assertion to the path where _bt_readnextpage ends the
parallel scan based on information about lastcurrblkno from so->currPos.
Assert that the most recent _bt_readpage call that set so->currPos is in
fact lastcurrblkno's _bt_readpage call.
Follow-up to bugfix commit b5ee4e52.
Unlike TRY_POPCNT_FAST, which is defined in pg_bitutils.h, this
macro is defined in c.h (via pg_config.h), so we can check for it
earlier and avoid some unnecessary #includes on systems that lack
AVX-512 support.
Oversight in commit f78667bd91.
Discussion: https://postgr.es/m/Zy5K5Qmlb3Z4dsd4%40nathan
Commit ac04aa84a put the shutoff for this into the planner, which is
not ideal because it doesn't prevent us from re-using a previously
made parallel plan. Revert the planner change and instead put the
shutoff into InitializeParallelDSM, modeling it on the existing code
there for recovering from failure to allocate a DSM segment.
However, that code path is mostly untested, and testing a bit harder
showed there's at least one bug: ExecHashJoinReInitializeDSM is not
prepared for us to have skipped doing parallel DSM setup. I also
thought the Assert in ReinitializeParallelWorkers is pretty
ill-advised, and replaced it with a silent Min() operation.
The existing test case added by ac04aa84a serves fine to test this
version of the fix, so no change needed there.
Patch by me, but thanks to Noah Misch for the core idea that we
could shut off worker creation when !INTERRUPTS_CAN_BE_PROCESSED.
Back-patch to v12, as ac04aa84a was.
Discussion: https://postgr.es/m/CAC-SaSzHUKT=vZJ8MPxYdC_URPfax+yoA1hKTcF4ROz_Q6z0_Q@mail.gmail.com
Commit 1bd4bc85, which refactored nbtree sibling link traversal, made
_bt_parallel_seize reset the scan's currPos so that things were
consistent with the state of a serial backend moving between pages.
This overlooked the fact that _bt_readnextpage relied on the existing
currPos state to decide when to end the scan -- even though it came from
before the scan was seized. As a result of all this, parallel nbtree
scans could needlessly behave like full index scans.
To fix, teach _bt_readnextpage to explicitly allow the use of an already
read page's so->currPos when deciding whether to end the scan -- even
during parallel index scans (allow it consistently now). This requires
moving _bt_readnextpage's seizure of the scan to earlier in its loop.
That way _bt_readnextpage either deals with the true so->currPos state,
or an initialized-by-_bt_parallel_seize currPos state set from when the
scan was seized. Now _bt_steppage (the most important _bt_readnextpage
caller) takes the same uniform approach to setting up its call using
details taken from so->currPos -- regardless of whether the scan happens
to be parallel or serial.
The new loop structure in _bt_readnextpage is prone to getting confused
by P_NONE blknos set when the rightmost or leftmost page was reached.
We could avoid that by adding an explicit check, but that would be ugly.
Avoid this problem by teaching _bt_parallel_seize to end the parallel
scan instead of returning a P_NONE next block/blkno. Doing things this
way was arguably a missed opportunity for commit 1bd4bc85. It allows us
to remove a similar "blkno == P_NONE" check from _bt_first.
Oversight in commit 1bd4bc85, which refactored sibling link traversal
(as part of optimizing nbtree backward scan locking).
Author: Peter Geoghegan <pg@bowt.ie>
Reported-By: Masahiro Ikeda <ikedamsh@oss.nttdata.com>
Diagnosed-By: Masahiro Ikeda <ikedamsh@oss.nttdata.com>
Reviewed-By: Masahiro Ikeda <ikedamsh@oss.nttdata.com>
Discussion: https://postgr.es/m/f8efb9c0f8d1a71b44fd7f8e42e49c25@oss.nttdata.com
We now create contype='n' pg_constraint rows for not-null constraints on
user tables. Only one such constraint is allowed for a column.
We propagate these constraints to other tables during operations such as
adding inheritance relationships, creating and attaching partitions and
creating tables LIKE other tables. These related constraints mostly
follow the well-known rules of conislocal and coninhcount that we have
for CHECK constraints, with some adaptations: for example, as opposed to
CHECK constraints, we don't match not-null ones by name when descending
a hierarchy to alter or remove them, instead matching by the name of the
column that they apply to. This means we don't require the constraint
names to be identical across a hierarchy.
The inheritance status of these constraints can be controlled: now we
can be sure that if a parent table has one, then all children will have
it as well. They can optionally be marked NO INHERIT, and then children
are free not to have one. (There's currently no support for altering a
NO INHERIT constraint into inheriting down the hierarchy, but that's a
desirable future feature.)
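For instance, the catalog rows and the NO INHERIT option can be seen
like this (a sketch; table names are just examples):
CREATE TABLE parent (a int NOT NULL);
CREATE TABLE child () INHERITS (parent);
SELECT conrelid::regclass, conname, contype, conislocal, coninhcount
FROM pg_constraint
WHERE contype = 'n' AND conrelid IN ('parent'::regclass, 'child'::regclass);
-- one local constraint on parent, one inherited constraint on child
CREATE TABLE standalone (b int NOT NULL NO INHERIT);
-- children of such a table would be free not to have the constraint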
This also opens the door for having these constraints be marked NOT
VALID, as well as allowing UNIQUE+NOT NULL to be used for functional
dependency determination, as envisioned by commit e49ae8d3bc58. It's
likely possible to allow DEFERRABLE constraints as followup work, as
well.
psql shows these constraints in \d+, though we may want to reconsider if
this turns out to be too noisy. Earlier versions of this patch hid
constraints that were on the same columns as the primary key, but I'm
not sure that that's very useful. If clutter is a problem, we might be
better off inventing a new \d++ command and not showing the constraints
in \d+.
For now, we omit these constraints on system catalog columns, because
they're unlikely to achieve anything.
The main difference to the previous attempt at this (b0e96f311985) is
that we now require that such a constraint always exists when a primary
key is on the column; we didn't require this previously, which had a
number of unpalatable consequences. With this requirement, the code is
easier to reason about. For example:
- We no longer have "throwaway constraints" during pg_dump. We needed
those for the case where a table had a PK without a not-null
underneath, to prevent a slow scan of the data during restore of the
PK creation, which was particularly problematic for pg_upgrade.
- We no longer have to cope with attnotnull being set spuriously in
case a primary key is dropped indirectly (e.g., via DROP COLUMN).
Some bits of code in this patch were authored by Jian He.
Author: Álvaro Herrera <alvherre@alvh.no-ip.org>
Author: Bernd Helmle <mailings@oopsware.de>
Reviewed-by: 何建 (jian he) <jian.universality@gmail.com>
Reviewed-by: 王刚 (Tender Wang) <tndrwang@gmail.com>
Reviewed-by: Justin Pryzby <pryzby@telsasoft.com>
Reviewed-by: Peter Eisentraut <peter.eisentraut@enterprisedb.com>
Reviewed-by: Dean Rasheed <dean.a.rasheed@gmail.com>
Discussion: https://postgr.es/m/202408310358.sdhumtyuy2ht@alvherre.pgsql
If the collation of any join key column doesn't match the collation of
the corresponding partition key, partitionwise joins can yield incorrect
results. For example, rows that would match under the join key collation
might be located in different partitions due to the partitioning
collation. In such cases, a partitionwise join would yield different
results from a non-partitionwise join, so disallow it.
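A sketch of a setup that is now excluded (collation and table names are
illustrative; case_insensitive is the nondeterministic ICU collation
from the earlier collation example):
CREATE TABLE p (x text COLLATE "C") PARTITION BY HASH (x);
CREATE TABLE p0 PARTITION OF p FOR VALUES WITH (MODULUS 2, REMAINDER 0);
CREATE TABLE p1 PARTITION OF p FOR VALUES WITH (MODULUS 2, REMAINDER 1);
SET enable_partitionwise_join = on;
-- the join key collation differs from the partition key collation ("C"),
-- so the planner no longer considers a partitionwise join here
EXPLAIN (COSTS OFF)
SELECT * FROM p a JOIN p b ON a.x = b.x COLLATE case_insensitive;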
Reported-by: Tender Wang <tndrwang@gmail.com>
Author: Jian He <jian.universality@gmail.com>
Reviewed-by: Tender Wang <tndrwang@gmail.com>
Reviewed-by: Junwang Zhao <zhjwpku@gmail.com>
Discussion: https://postgr.es/m/CAHewXNno_HKiQ6PqyLYfuqDtwp7KKHZiH1J7Pqyz0nr+PS2Dwg@mail.gmail.com
Backpatch-through: 12