The previous system had configure put the value into the makefiles and
then had the makefiles pass it to the build of pg_config. That was
put in place when pg_config was a shell script. We can simplify that
by having configure put the value into pg_config.h directly. This
also makes the standard build system match how the MSVC build system
already does it.
Discussion: https://www.postgresql.org/message-id/flat/6e457870-cef5-5f1d-b57c-fc89cfb8a788%402ndquadrant.com
Commit 4eaea3db introduced TupleHashTableHash(), but the signature
didn't match the other exposed functions. Separate it into internal
and external versions. The external version hides the details behind
an API more consistent with the other external functions, and the
internal version is still suitable for simplehash.
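For illustration, the split looks roughly like this (a sketch; the
actual code carries more setup than shown):

    /* external: same shape as the other TupleHashTable entry points */
    uint32
    TupleHashTableHash(TupleHashTable hashtable, TupleTableSlot *slot)
    {
        hashtable->inputslot = slot;
        return TupleHashTableHash_internal(hashtable->hashtab, NULL);
    }

    /* internal: signature shaped to what simplehash expects */
    static uint32
    TupleHashTableHash_internal(struct tuplehash_hash *tb,
                                const MinimalTuple tuple);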
Marking an object as dependent on an extension did not have any
privilege check whatsoever; this allowed any user to mark objects as
droppable by anyone able to DROP EXTENSION, which could be used to cause
system-wide havoc. Disallow by checking that the calling user owns the
mentioned object.
(No constraints are placed on the extension.)
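The fix is essentially an ownership check on the object before the
dependency is recorded; roughly (a sketch, details elided):

    /* in ExecAlterObjectDependsStmt(), after resolving the object */
    address = get_object_address(stmt->objectType, stmt->object,
                                 &rel, AccessExclusiveLock, false);

    /* new: require that the calling user owns the object */
    check_object_ownership(GetUserId(), stmt->objectType, address,
                           stmt->object, rel);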
Security: CVE-2020-1720
Reported-by: Tom Lane
Discussion: https://postgr.es/m/31605.1566429043@sss.pgh.pa.us
The previous coding forgot to apply shell quoting to the socket
directory and the data folder, leading to failures when running
pg_upgrade. This refactors the code that generates the pg_ctl command
used to start clusters so that proper shell quoting is applied. Failures are
easier to trigger in 12 and newer versions by using a value of
--socketdir that includes quotes, but it is also possible to cause
failures with quotes included in the default socket directory used by
pg_upgrade or the data folders of the clusters involved in the
upgrade.
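fe_utils' appendShellString() is the helper for this kind of quoting;
a minimal hedged example (variable names illustrative):

    #include "fe_utils/string_utils.h"

    PQExpBufferData cmd;

    initPQExpBuffer(&cmd);
    appendPQExpBufferStr(&cmd, "pg_ctl -w -D ");
    appendShellString(&cmd, datadir);  /* safe even if datadir has quotes */
    appendPQExpBufferStr(&cmd, " start");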
As 9.4 is going to be EOL'd with the next minor release, nobody is
likely to upgrade to it now, so this branch is not included in the set
of branches fixed.
Author: Michael Paquier
Reviewed-by: Álvaro Herrera, Noah Misch
Backpatch-through: 9.5
Commit b10714080 moved the GinPageSetDeleteXid() call to a spot where
the "page" variable was pointing to the wrong page, causing the XID
to be inserted on a page that's not being deleted, thus allowing later
GinPageIsRecyclable tests to recycle the deleted page too soon.
It might be a good idea to stop using the single "page" variable for
multiple purposes in this function. But for the moment I just moved
the GinPageSetDeleteXid() call down beside the GinPageSetDeleted()
call, which seems like a more logical place for it anyway.
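The corrected ordering, roughly (a sketch of ginDeletePage(); the
surrounding logic is elided):

    page = BufferGetPage(dBuffer);   /* the page actually being deleted */
    ...
    /* mark deleted and stamp the XID on the same page */
    GinPageSetDeleted(page);
    GinPageSetDeleteXid(page, ReadNewTransactionId());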
Back-patch to v11, as the faulty patch was. (Fortunately, the bug
hasn't made it into any release yet.)
Discussion: https://postgr.es/m/21620.1581098806@sss.pgh.pa.us
On a multi-level partitioned table, when adding a partition that is not
directly connected to the root table, foreign key constraints
referencing the root were not cloned to the new partition, possibly
allowing the FK to be violated later on without detection.
This was caused by fuzzy thinking in CloneFkReferenced (commit
f56f8f8da6): it was skipping constraints marked as having parents on
the theory that cloning those would create duplicates; but that's only
correct for the top level of the partitioning hierarchy. For levels
below that one, such constraints must still be considered and only
skipped if later on we see that we'd create duplicates. Apparently, I
(Álvaro) wrote the comments right but the code implemented something
slightly different.
Author: Jehan-Guillaume de Rorthais
Discussion: https://postgr.es/m/20200206004948.238352db@firost
When running TRUNCATE CASCADE on a child of a partitioned table
referenced by another partitioned table, the truncate was not applied to
partitions of the referencing table; this could leave rows violating the
constraint in the referencing partitioned table. Repair by walking the
pg_constraint chain all the way up to the topmost referencing table.
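A sketch of the walk (syscache-release bookkeeping elided, shape of the
surrounding code assumed):

    Form_pg_constraint con = (Form_pg_constraint) GETSTRUCT(tuple);

    /* follow conparentid up to the topmost referencing constraint */
    while (OidIsValid(con->conparentid))
    {
        tuple = SearchSysCache1(CONSTROID,
                                ObjectIdGetDatum(con->conparentid));
        con = (Form_pg_constraint) GETSTRUCT(tuple);
    }
    /* collect the topmost referencing relation instead of the child */
    result = lappend_oid(result, con->conrelid);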
Note: any partitioned tables containing FKs that reference other
partitioned tables should be checked for possible violating rows, if
TRUNCATE has occurred in partitions of the referenced table.
Reported-by: Christophe Courtois
Author: Jehan-Guillaume de Rorthais
Discussion: https://postgr.es/m/20200204183906.115f693e@firost
Commit 147e3722f7 changed Tid scan so that it calls table_beginscan()
and uses the scan option for seq scan. This change caused two issues.
(1) The change caused Tid scan to take a predicate lock on the entire
relation in a serializable transaction even when a relation-level
lock is not necessary. This could lead to an unexpected
serialization error.
(2) The change caused Tid scan to increment the number of seq_scan
in pg_stat_*_tables views even though it's not a seq scan. This
could confuse users.
This commit adds the scan option for Tid scan and makes Tid scan
use it, to avoid those issues.
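Roughly, the new scan option and its wrapper (a sketch of the tableam.h
change, not the committed code):

    /* new flag in ScanOptions, alongside SO_TYPE_SEQSCAN etc. */
    SO_TYPE_TIDSCAN = 1 << 3,

    static inline TableScanDesc
    table_beginscan_tid(Relation rel, Snapshot snapshot)
    {
        uint32      flags = SO_TYPE_TIDSCAN;    /* not a seq scan */

        return rel->rd_tableam->scan_begin(rel, snapshot, 0, NULL,
                                           NULL, flags);
    }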
Back-patch to v12, where the bug was introduced.
Author: Tatsuhito Kasahara
Reviewed-by: Kyotaro Horiguchi, Masahiko Sawada, Fujii Masao
Discussion: https://postgr.es/m/CAP0=ZVKy+gTbFmB6X_UW0pP3WaeJ-fkUWHoD-pExS=at3CY76g@mail.gmail.com
The main benefit of doing so is that this allows LLVM to ensure that
types match - previously that'd only be detected by a crash within the
called function. There were a number of cases where we passed a
superfluous parameter...
To avoid needing to add all the functions to llvmjit.{c,h}, instead
get them from the LLVM module for llvmjit_types.c. Also use that for
the functions from llvmjit_types already in llvmjit.h.
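Conceptually, via the LLVM C API (a sketch, not the actual helper;
llvm_types_module stands for the module built from llvmjit_types.c):

    /* look up the function in the types module, then declare it with
     * exactly that type in the module currently being built */
    LLVMValueRef v_tmpl = LLVMGetNamedFunction(llvm_types_module, funcname);
    LLVMTypeRef  fn_type = LLVMGetElementType(LLVMTypeOf(v_tmpl));
    LLVMValueRef v_fn = LLVMAddFunction(mod, funcname, fn_type);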
Author: Soumyadeep Chakraborty and Andres Freund
Discussion: https://postgr.es/m/CADwEdooww3wZv-sXSfatzFRwMuwa186LyTwkBfwEW6NjtooBPA@mail.gmail.com
Expose two new entry points: one for only calculating the hash value
of a tuple, and another for looking up a hash entry when the hash
value is already known. This will be useful for disk-based Hash
Aggregation to avoid recomputing the hash value for the same tuple
after saving and restoring it from disk.
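The two entry points and the intended usage pattern, roughly (sketch):

    uint32      hash;
    bool        isnew;
    TupleHashEntry entry;

    /* hash once, before writing the tuple out */
    hash = TupleHashTableHash(hashtable, slot);
    ...
    /* after reading it back, look up without rehashing */
    entry = LookupTupleHashEntryHash(hashtable, slot, &isnew, hash);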
Discussion: https://postgr.es/m/37091115219dd522fd9ed67333ee8ed1b7e09443.camel%40j-davis.com
This merges the code emission for a number of opcodes by handling the
behavioural difference more locally. This reduces code, and also
improves the generated code a bit in some cases, by removing redundant
constants.
Author: Andres Freund
Discussion: https://postgr.es/m/20191023163849.sosqbfs5yenocez3@alap3.anarazel.de
It's already tracked via ExprState->parent, so we don't need to also
include it in ExprEvalStep. When that code was originally written
ExprState->parent didn't exist, but it has since been introduced in
6719b238e8.
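Illustratively (field names assumed, not verbatim):

    /* before: each step carried its own copy of the parent node */
    aggstate = op->d.agg_trans.aggstate;

    /* after: fetch it from the ExprState instead */
    aggstate = castNode(AggState, state->parent);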
Author: Andres Freund
Discussion: https://postgr.es/m/20191023163849.sosqbfs5yenocez3@alap3.anarazel.de
This mostly consists of using C99 style for loops, moving variables
into narrower scopes, and a smattering of other minor improvements.
Done separately to make it easier to review patches with actual
functional changes.
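For example, the loop-variable change looks like this (names
illustrative):

    /* before: C89 style, variable visible outside the loop */
    int         i;

    for (i = 0; i < nkeys; i++)
        process_key(i);

    /* after: C99 style, scope narrowed to the loop itself */
    for (int i = 0; i < nkeys; i++)
        process_key(i);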
Author: Andres Freund
Discussion: https://postgr.es/m/20191023163849.sosqbfs5yenocez3@alap3.anarazel.de
This reverts commit 7bae0ad, as this approach is not ideal with the tar
format. We may instead want to explore options similar to tar's
--owner and --group, but for pg_basebackup.
Per complaints from Magnus Hagander and Stephen Frost.
Discussion: https://postgr.es/m/20200205172259.GW3195@tamriel.snowman.net
This reverts commit 41aadee, as the GUC checks could run against the old
value of one parameter while the new value of the other was already in
use, resulting in incorrect errors if both parameters are changed at the
same time.
Per complaint from Tom Lane.
Discussion: https://postgr.es/m/27574.1581015893@sss.pgh.pa.us
Backpatch-through: 12
In certain transient states, it's possible that a table has attributes
with attgenerated set but no default expressions in pg_attrdef yet.
In that case, the old code path would not set
relation->rd_att->constr->has_generated_stored, unless
relation->rd_att->constr was also populated for some other reason.
There was probably no practical impact, but it's better to keep this
consistent.
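The shape of the fix, roughly (a sketch of the relcache path):

    /* set the flag from the attribute itself, not only when pg_attrdef
     * entries force constr to be built */
    if (attp->attgenerated == ATTRIBUTE_GENERATED_STORED)
        constr->has_generated_stored = true;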
Reported-by: Andres Freund <andres@anarazel.de>
Discussion: https://www.postgresql.org/message-id/flat/20200115181105.ad6ab6dlgyww3lb6%40alap3.anarazel.de
Previously PostgreSQL built with -DLWLOCK_STATS could report
more than one LWLock statistics entry for the same backend
process and the same LWLock. This is wrong; only one entry
should be reported in that case.
The cause of this issue is that the key variable used for the
LWLock stats hash table was not fully initialized. The key
consists of two fields, and those were initialized. But
the following 4 bytes allocated in the key variable for
alignment padding were not. So even if the same key
was specified, hash_search(HASH_ENTER) could not find
the existing entry for that key and created a new one.
This commit fixes this issue by initializing the whole key
variable with zeros. As a side effect, this commit can reduce
the volume of LWLock statistics output considerably.
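That is, roughly (a sketch of get_lwlock_stats()):

    lwlock_stats_key key;

    /* zero the whole struct so alignment padding can't differ
     * between otherwise-equal keys */
    MemSet(&key, 0, sizeof(key));
    key.tranche = lock->tranche;
    key.instance = lock;
    lwstats = hash_search(lwlock_stats_htab, &key, HASH_ENTER, &found);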
Back-patch to v10, where commit 3761fe3c20 introduced the issue.
Author: Fujii Masao
Reviewed-by: Julien Rouhaud, Kyotaro Horiguchi
Discussion: https://postgr.es/m/26359edb-798a-568f-d93a-6aafac49752d@oss.nttdata.com
This new field tracks the PID of the group leader used with parallel
query. For parallel workers and the leader, the value is set to the
PID of the group leader. So, for the group leader, the value is the
same as its own PID. Note that this reflects what PGPROC stores in
shared memory, so leader_pid is NULL if a backend has never been
involved in parallel query. If the backend is using parallel query or
has used it at least once, the value is set until the backend exits.
Author: Julien Rouhaud
Reviewed-by: Sergei Kornilov, Guillaume Lelarge, Michael Paquier, Tomas
Vondra
Discussion: https://postgr.es/m/CAOBaU_Yy5bt0vTPZ2_LUM6cUcGeqmYNoJ8-Rgto+c2+w3defYA@mail.gmail.com
Tuple conversion incorrectly concluded that no conversion was needed
as long as all the attributes lined up. But if the source tuple has a
missing attribute (from addition of a column with default), then the
destination tupdesc might not reflect the same default. The typical
symptom was that the affected columns would be unexpectedly NULL.
Repair by always forcing conversion if the source has missing
attributes, which will be filled in by the deform operation. (In
theory we could optimize for when the destination has the same
default, but that seemed overkill.)
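The gist of the fix (a sketch, not the committed code):

    /* the "no conversion needed" shortcut must not fire when the
     * source tupdesc carries missing-attribute defaults: deforming
     * is what fills them in */
    if (indesc->constr && indesc->constr->missing)
        same = false;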
Backpatch to 11 where missing attributes were added.
Per bug #16242.
Vik Fearing (discovery, code, testing) and me (analysis, testcase).
Discussion: https://postgr.es/m/16242-d1c9fca28445966b@postgresql.org
Using 32-bit counters means they can now realistically wrap around when
vacuuming extremely large tables. Because they're signed integers,
the stats printed by vacuum look very odd when they do.
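Illustratively (variable name made up):

    /* before: wraps past ~2.1 billion and then prints negative values */
    int         tuples_deleted;

    /* after */
    int64       tuples_deleted;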
We'd love to backpatch this, but refrain because the variables are
exported and could cause third-party code to break.
Reviewed-by: Julien Rouhaud, Tom Lane, Michael Paquier
Discussion: https://postgr.es/m/20200131205926.GA16367@alvherre.pgsql
The docs are ambiguous as to which tables would be copied over when the
copy_data parameter is true in ALTER SUBSCRIPTION ... REFRESH PUBLICATION.
Make it clear that it only applies to tables which are new in the
publication.
Author: David Christensen (reword by Álvaro Herrera)
Discussion: https://postgr.es/m/95339420-7F09-4F8C-ACC0-8F1CFAAD9CD7@endpoint.com
PostgresNode already retained base directories in such cases. Stop
using $SIG{__DIE__}, which is redundant with the exit status check,
instead of proliferating it to TestLib. Back-patch to 9.6, where
commit 88802e0680 introduced retention on failure.
Reviewed by Daniel Gustafsson.
Discussion: https://postgr.es/m/20200202170155.GA3264196@rfd.leadboat.com
Use kevent(2) to wait for events on the BSD family of operating
systems and macOS. This is similar to the epoll(2) support added
for Linux by commit 98a64d0bd.
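For readers more familiar with epoll, the rough correspondence (a
minimal sketch; WaitEventSet's bookkeeping is more involved, and sock
stands for any file descriptor of interest):

    #include <sys/types.h>
    #include <sys/event.h>

    int         kq = kqueue();          /* ~ epoll_create1() */
    struct kevent ev;

    /* register interest in readability, ~ epoll_ctl(EPOLL_CTL_ADD) */
    EV_SET(&ev, sock, EVFILT_READ, EV_ADD, 0, 0, NULL);
    kevent(kq, &ev, 1, NULL, 0, NULL);

    /* wait for one event, ~ epoll_wait() */
    struct kevent occurred;
    int         nevents = kevent(kq, NULL, 0, &occurred, 1, NULL);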
Author: Thomas Munro
Reviewed-by: Andres Freund, Marko Tiikkaja, Tom Lane
Tested-by: Mateusz Guzik, Matteo Beccati, Keith Fiske, Heikki Linnakangas, Michael Paquier, Peter Eisentraut, Rui DeSousa, Tom Lane, Mark Wong
Discussion: https://postgr.es/m/CAEepm%3D37oF84-iXDTQ9MrGjENwVGds%2B5zTr38ca73kWR7ez_tA%40mail.gmail.com
Commit 499be013d added this field in a rather poorly-thought-through
manner, with the result being that rather than being a field of the
Append or MergeAppend plan node as intended (and as it seems to be,
in text format), it was actually an element of the "Plans" subgroup.
At least in JSON format, that's flat out invalid syntax, because
"Plans" is an array not an object.
While it's not hard to move the generation of the field so that it
appears where it's supposed to, this does result in a visible change
in field order in text format, in cases where an Append or MergeAppend
plan node has any InitPlans attached. That's slightly annoying to
do in stable branches; but the alternative of continuing to emit
broken non-text formats seems worse.
Also, since the set of fields emitted is not supposed to be
data-dependent in non-text formats, make sure that "Subplans Removed"
appears in Append and MergeAppend nodes even when it's zero, in those
formats. (The previous coding made it look like it could appear in
some other node types such as BitmapAnd, but we don't actually support
runtime pruning there, so don't emit it in those cases.)
Per bug #16171 from Mahadevan Ramachandran. Fix by Daniel Gustafsson
and Tom Lane, reviewed by Hamid Akhtar. Back-patch to v11 where this
code came in.
Discussion: https://postgr.es/m/16171-b72259ab75505fa2@postgresql.org
First, this code did not bother checking for a failure when calling
dup(). Then, per zlib, gzerror() returns NULL for a NULL input, which
can happen when passing an invalid file descriptor down to gzdopen()
or when there was an allocation failure.
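The hardened pattern, roughly (error reporting shown with a
hypothetical helper):

    gzFile      gz;
    int         fd = dup(fileno(fp));

    if (fd < 0)
        report_error("could not duplicate file descriptor");  /* hypothetical */
    gz = gzdopen(fd, "rb");
    if (gz == NULL)
        /* don't consult gzerror(NULL, ...): it would also return NULL */
        report_error("could not initialize compression library");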
No back-patch is done as this is unlikely to be a problem in the field.
Per Coverity.
Reported-by: Tom Lane
When replica identity is FULL (an admittedly unusual case), the loop
that searches for tuples in execReplication.c didn't stop scanning the
table once a matching tuple was found. Add the missing 'break'.
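That is (a sketch of the scan loop; helper names approximate):

    while (table_scan_getnextslot(scan, ForwardScanDirection, scanslot))
    {
        if (tuples_equal(scanslot, searchslot))
        {
            found = true;
            break;      /* previously missing; the scan ran to the end */
        }
    }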
Note slight behavior change: we now return the first matching tuple
rather than the last one. They are supposed to be indistinguishable
anyway, so this shouldn't matter.
Author: Konstantin Knizhnik
Discussion: https://postgr.es/m/379743f6-ae91-b866-f7a2-5624e6d2b0a4@postgrespro.ru
These new assertions can be used at file scope, outside of any
function, for compile-time checks. This commit provides implementations
for C and C++, and fallback implementations.
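A typical use at file scope might look like this (hedged example; the
array and the StaticAssertDecl spelling are borrowed from existing
code, not from this commit):

    StaticAssertDecl(lengthof(LockTagTypeNames) == LOCKTAG_LAST_TYPE + 1,
                     "array length mismatch");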
Author: Peter Smith
Reviewed-by: Andres Freund, Kyotaro Horiguchi, Dagfinn Ilmari Mannsåker,
Michael Paquier
Discussion: https://postgr.es/m/201DD0641B056142AC8C6645EC1B5F62014B8E8030@SYD1217
The error exits added to initialize_SSL() failed to clean up the
partially-built SSL_context, and some of them also leaked the
result of SSLerrmessage(). Make them match other error-handling
cases in that function.
The error exits added to connectOptions2() failed to set conn->status
like every other error exit in that function.
In passing, make the SSL_get_peer_certificate() error exit look more
like all the other calls of SSLerrmessage().
Oversights in commit ff8ca5fad. Coverity whined about leakage of the
SSLerrmessage() results; I noted the rest in manual code review.
Using a lookup table of digit pairs reduces the number of divisions
needed, and calculating the length upfront saves some work; these
ideas are taken from the code previously committed for floats.
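The core of the technique (a sketch; table rows abbreviated):

    static const char DIGIT_TABLE[200] =
        "00010203040506070809"
        "10111213141516171819"
        /* ... seven more rows (20..89) ... */
        "90919293949596979899";

    /* emit two digits per division instead of one */
    while (value >= 100)
    {
        const int   i = (value % 100) * 2;

        value /= 100;
        *--pos = DIGIT_TABLE[i + 1];
        *--pos = DIGIT_TABLE[i];
    }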
David Fetter, reviewed by Kyotaro Horiguchi, Tels, and me.
Discussion: https://postgr.es/m/20190924052620.GP31596%40fetter.org
Similarly to pg_upgrade, pg_ctl and initdb, a root user is able to use
--version and --help, but cannot execute the actual operation, to avoid
creating files with permissions incompatible with the postmaster.
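The usual check, as the other tools do it (a sketch):

    #ifndef WIN32
        if (geteuid() == 0)         /* root? */
        {
            fprintf(stderr, "%s: cannot be run as root\n", progname);
            exit(1);
        }
    #endif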
This is a behavior change, so no back-patch is done.
Author: Ian Barwick
Discussion: https://postgr.es/m/CABvVfJVqOdD2neLkYdygdOHvbWz_5K_iWiqY+psMfA=FeAa3qQ@mail.gmail.com
If we attempt to create a DSM segment when no slots are available,
we should return the memory to the operating system. Previously
we did that if the DSM_CREATE_NULL_IF_MAXSEGMENTS flag was
passed in, but we didn't do it if an error was raised. Repair.
Back-patch to 9.4, where DSM segments arrived.
Author: Thomas Munro
Reviewed-by: Robert Haas
Reported-by: Julian Backes
Discussion: https://postgr.es/m/CA%2BhUKGKAAoEw-R4om0d2YM4eqT1eGEi6%3DQot-3ceDR-SLiWVDw%40mail.gmail.com
This code would accept "strinX", where X is any 1-byte character,
as meaning "string". Clearly it wasn't meant to do that.
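Illustrative of the bug class only (not the actual code):

    /* buggy: compares one byte short, so "strinX" also matches */
    if (pg_strncasecmp(token, "string", 5) == 0)

    /* fixed */
    if (pg_strncasecmp(token, "string", 6) == 0)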
No back-patch, since this doesn't affect correct queries and
there's some tiny chance we'd break somebody's incorrect query
in a minor release.
Report and patch by Dominik Czarnota.
Discussion: https://postgr.es/m/CABEVAa1dU0mDCAfaT8WF2adVXTDsLVJy_izotg6ze_hh-cn8qQ@mail.gmail.com
Commit fc7695891 changed CheckAttributeType to recurse into ranges,
but made it pass down the wrong collation (always InvalidOid, since
ranges as such have no collation). This would result in guaranteed
failure when considering a range type whose subtype is collatable.
Embarrassingly, we lack any regression tests that would expose such
a problem (but fortunately, somebody noticed before we shipped this
bug in any release).
Fix it to pass down the range's subtype collation property instead,
and add some regression test cases to exercise collatable-subtype
ranges a bit more. Back-patch to all supported branches, as the
previous patch was.
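The fix, roughly (a sketch; the argument list is abbreviated):

    else if (att_typtype == TYPTYPE_RANGE)
        CheckAttributeType(attname,
                           get_range_subtype(atttypid),
                           get_range_collation(atttypid), /* was InvalidOid */
                           containing_rowtypes, flags);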
Report and patch by Julien Rouhaud, test cases tweaked by me
Discussion: https://postgr.es/m/CAOBaU_aBWqNweiGUFX0guzBKkcfJ8mnnyyGC_KBQmO12Mj5f_A@mail.gmail.com
If we failed to fork a worker process, or create a communication pipe
for one, WaitForTerminatingWorkers would suffer an assertion failure
if assert-enabled, otherwise crash or go into an infinite loop. This
was a consequence of not accounting for the startup condition where
we've not yet forked all the workers.
The original bug was that ParallelBackupStart would set workerStatus to
WRKR_IDLE before it had successfully forked a worker. I made things
worse in commit b7b8cc0cf by not understanding the undocumented fact
that the WRKR_TERMINATED state was also meant to represent the case
where a worker hadn't been started yet: I changed enum T_WorkerStatus
so that *all* the worker slots were initially in WRKR_IDLE state. But
this wasn't any more broken in practice, since even one slot in the
wrong state would keep WaitForTerminatingWorkers from terminating.
In v10 and later, introduce an explicit T_WorkerStatus value for
worker-not-started, in hopes of preventing future oversights of the
same ilk. Before that, just document that WRKR_TERMINATED is supposed
to cover that case (partly because it wasn't actively broken, and
partly because the enum is exposed outside parallel.c in those branches,
so there's microscopically more risk involved in changing it).
In all branches, introduce a WORKER_IS_RUNNING status test macro
to hide which T_WorkerStatus values mean that, and be more careful
not to access ParallelSlot fields till we're sure they're valid.
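So, in v10 and later, the enum and macro end up looking roughly like
this (sketch; the not-started value's exact name is assumed):

    typedef enum
    {
        WRKR_NOT_STARTED = 0,       /* the new explicit state */
        WRKR_IDLE,
        WRKR_WORKING,
        WRKR_TERMINATED
    } T_WorkerStatus;

    #define WORKER_IS_RUNNING(workerStatus) \
        ((workerStatus) == WRKR_IDLE || (workerStatus) == WRKR_WORKING)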
Per report from Vignesh C, though this is my patch not his.
Back-patch to all supported branches.
Discussion: https://postgr.es/m/CALDaNm1Luv-E3sarR+-unz-BjchquHHyfP+YC+2FS2pt_J+wxg@mail.gmail.com
We have code paths for Unix socket support and no Unix socket support.
Now add a third variant: Unix socket support is compiled in, but no
Unix socket is used by default in the client or the server, only if you
explicitly specify one. This will be useful when we enable Unix socket
support on Windows.
To implement this, tweak things so that setting DEFAULT_PGSOCKET_DIR
to "" has the desired effect. This mostly already worked like that;
only a few places needed to be adjusted. Notably, the reference to
DEFAULT_PGSOCKET_DIR in UNIXSOCK_PATH() could be removed because all
callers already resolve an empty socket directory setting with a
default if appropriate.
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://www.postgresql.org/message-id/75f72249-8ae6-322a-63df-4fe03eeccb9f@2ndquadrant.com