Prior to commit 0709b7ee72, which changed the spinlock primitives
to function as compiler barriers, access to variables within a
spinlock-protected section required using a volatile pointer, but
that is no longer necessary.
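For illustration, a minimal sketch (hypothetical struct and field names)
of an access pattern that no longer needs the volatile qualifier:

    #include "postgres.h"
    #include "storage/spin.h"

    typedef struct MySharedState
    {
        slock_t     mutex;
        uint64      counter;
    } MySharedState;

    static void
    increment_counter(MySharedState *state)    /* plain pointer is enough */
    {
        SpinLockAcquire(&state->mutex);
        /* acquire/release are compiler barriers, so this access cannot be
         * reordered outside the locked section */
        state->counter++;
        SpinLockRelease(&state->mutex);
    }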
Reviewed-by: Bertrand Drouvot, Michael Paquier
Discussion: https://postgr.es/m/Zqkv9iK7MkNS0KaN%40nathan
Previously, passing NULL for pg_locale_t meant "use the libc provider
and the server environment". Now that the database collation is
represented as a proper pg_locale_t (not dependent on setlocale()),
remove special cases for NULL.
Leave wchar2char() and char2wchar() unchanged for now, because the
callers don't always have a libc-based pg_locale_t available.
Discussion: https://postgr.es/m/cfd9eb85-c52a-4ec9-a90e-a5e4de56e57d@eisentraut.org
Reviewed-by: Peter Eisentraut, Andreas Karlsson
Unlike the rest of the astreamer (formerly bbstreamer) infrastructure
which is reusable by other tools, astreamer_inject.c seems extremely
specific to pg_basebackup. Hence, move the corresponding declarations
to a separate header file, so that we can move the rest of the code
without moving this.
Amul Sul, reviewed by Sravan Kumar and by me.
Discussion: http://postgr.es/m/CAAJ_b94StvLWrc_p4q-f7n3OPfr6GhL8_XuAg2aAaYZp1tF-nw@mail.gmail.com
I (rhaas) intended "bbstreamer" to stand for "base backup streamer,"
but that implies that this infrastructure can only ever be used by
pg_basebackup. In fact, it is a generally useful way of streaming
data from a tar or compressed tar file, and it could be extended to
work with other archive formats as well if we ever wanted to do that.
Hence, rename it to "astreamer" (archive streamer) in preparation for
reusing the infrastructure from pg_verifybackup (and perhaps
eventually also other utilities, such as pg_combinebackup or
pg_waldump).
This is purely a renaming commit. Comment adjustments and relocation
of the actual code to someplace from which it can be reused are left
to future commits.
Amul Sul, reviewed by Sravan Kumar and by me.
Discussion: http://postgr.es/m/CAAJ_b94StvLWrc_p4q-f7n3OPfr6GhL8_XuAg2aAaYZp1tF-nw@mail.gmail.com
When pg_dump retrieves the list of database objects and performs the
data dump, there was a window in which objects could be replaced by
others of the same name, such as views, causing pg_dump to access the
replacement objects. This vulnerability could result in code execution
with superuser privileges during the pg_dump process.
This issue can arise when dumping the data of sequences, foreign
tables (on version 13 or later only), or tables registered with a
WHERE clause in the extension configuration table.
To address this, pg_dump now utilizes the newly introduced
restrict_nonsystem_relation_kind GUC parameter to restrict access
to non-system views and foreign tables during the dump process. This
new GUC parameter is added to the back branches too, but these changes
do not require cluster recreation.
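For illustration, a sketch of the kind of statement pg_dump now issues
on its server connection before dumping (the version gate shown is an
assumption; ExecuteSqlStatement() is pg_dump's helper for running a
statement):

    /*
     * Restrict non-system views and foreign tables for the rest of the
     * session, so that objects swapped in mid-dump cannot run code.
     */
    if (fout->remoteVersion >= 120000)    /* assumed cutoff */
        ExecuteSqlStatement(fout,
                            "SET restrict_nonsystem_relation_kind = "
                            "'view, foreign-table'");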
Back-patch to all supported branches.
Reviewed-by: Noah Misch
Security: CVE-2024-7348
Backpatch-through: 12
Here we adjust escape_json_with_len() to make use of SIMD, allowing it
to process up to 16 bytes at a time rather than a single byte at a
time. This has been shown to speed up escaping of JSON strings
significantly.
Escaping is required both for JSON string properties and for the
property names themselves, so this should also improve the speed of
converting JSON into text for JSON objects that have property names 16
or more bytes long.
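As a rough sketch of the technique, using the portable helpers from
src/include/port/simd.h (the committed code differs in detail):

    #include "port/simd.h"

    /*
     * Return true if any of the next sizeof(Vector8) bytes needs JSON
     * escaping: a double quote, a backslash, or a control byte (<= 0x1F).
     * When this returns false, the whole chunk can be copied verbatim.
     */
    static inline bool
    chunk_needs_escape(const char *s)
    {
        Vector8     chunk;

        vector8_load(&chunk, (const uint8 *) s);
        return vector8_has(chunk, '"') ||
               vector8_has(chunk, '\\') ||
               vector8_has_le(chunk, 0x1F);
    }

In builds without SIMD support, Vector8 falls back to a plain uint64, so
the same code still checks 8 bytes at a time.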
Escaping JSON strings was often a significant bottleneck for longer
strings. With these changes, some benchmarking has shown a query
performing nearly 4 times faster when escaping a JSON object with a 1MB
text property. Tests with shorter text properties saw smaller but still
significant performance improvements. For example, a test outputting 1024
JSON strings with a text property length ranging from 1 char to 1024 chars
became around 2 times faster.
Author: David Rowley
Reviewed-by: Melih Mutlu
Discussion: https://postgr.es/m/CAApHDvpLXwMZvbCKcdGfU9XQjGCDm7tFpRdTXuB9PVgpNUYfEQ@mail.gmail.com
Like 75534436a4, this acts mainly as a template to show what can be
achieved with fixed-numbered stats (like WAL, bgwriter, etc.) with the
pluggable cumulative statistics APIs introduced in 7949d95945.
Fixed-numbered stats are defined in their own file, named
injection_stats_fixed.c, separated entirely from the variable-numbered
case in injection_stats.c. This is mainly for clarity as having both
examples in the same file would be confusing.
Note that this commit uses the helper routines added in 2eff9e678d.
The stats track, globally, the number of times injection points have
been attached, detached, or run. Two more fields should be added later
for the number of times a point has been cached or loaded, but what's
here is enough as a template.
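Condensed, such a fixed-numbered kind definition looks roughly like the
sketch below; the struct, field, and callback names are illustrative,
not necessarily the module's exact ones:

    typedef struct PgStat_FixedCounters
    {
        PgStat_Counter numattach;
        PgStat_Counter numdetach;
        PgStat_Counter numrun;
    } PgStat_FixedCounters;

    typedef struct PgStatShared_Fixed
    {
        LWLock      lock;           /* protects counters and resets */
        uint32      changecount;
        PgStat_FixedCounters stats;
    } PgStatShared_Fixed;

    static const PgStat_KindInfo fixed_stats_kind = {
        .name = "injection_points_fixed",
        .fixed_amount = true,
        .shared_size = sizeof(PgStatShared_Fixed),
        .shared_data_off = offsetof(PgStatShared_Fixed, stats),
        .shared_data_len = sizeof(((PgStatShared_Fixed *) 0)->stats),
        .init_shmem_cb = fixed_stats_init_shmem_cb,  /* set up shmem area */
        .reset_all_cb = fixed_stats_reset_all_cb,    /* handle stats resets */
        .snapshot_cb = fixed_stats_snapshot_cb,      /* shmem -> snapshot */
    };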
More TAP tests are added, providing coverage for fixed-numbered custom
stats.
Author: Michael Paquier
Reviewed-by: Dmitry Dolgov, Bertrand Drouvot
Discussion: https://postgr.es/m/Zmqm9j5EO0I4W8dx@paquier.xyz
This acts as a template of what can be achieved with the pluggable
cumulative stats APIs introduced in 7949d95945 for the
variable-numbered case, where stats entries are stored in the pgstats
dshash. It is also potentially useful on its own for injection points,
say to add starting and/or stopping conditions based on the statistics
(to trigger a callback after N calls, for example).
Currently, the only data gathered is the number of times an injection
point is run. More fields can always be added as required. All the
routines related to the stats are located in their own file, called
injection_stats.c in the test module injection_points, for clarity.
The stats can be used only if the test module is loaded through
shared_preload_libraries. The key of the dshash uses InvalidOid for the
database, and an int4 hash of the injection point name as object ID.
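Condensed, the variable-numbered kind definition looks roughly like
this (names illustrative); note the pending-entry/flush pair replacing
the fixed shmem area, and the dshash key convention just described:

    static const PgStat_KindInfo injection_stats_kind = {
        .name = "injection_points",
        .fixed_amount = false,      /* one dshash entry per injection point */
        .shared_size = sizeof(PgStatShared_InjectionPoint),
        .shared_data_off = offsetof(PgStatShared_InjectionPoint, stats),
        .shared_data_len = sizeof(((PgStatShared_InjectionPoint *) 0)->stats),
        .pending_size = sizeof(PgStat_InjectionPointCounters),
        .flush_pending_cb = injection_stats_flush_cb,  /* merge pending counts */
    };

    /* dshash key: database InvalidOid, object ID hashed from the name */
    uint32      objid = hash_bytes((const unsigned char *) name, strlen(name));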
A TAP test is added to provide coverage for the new custom cumulative
stats APIs, showing the persistency of the data across restarts, for
example.
Author: Michael Paquier
Reviewed-by: Dmitry Dolgov, Bertrand Drouvot
Discussion: https://postgr.es/m/Zmqm9j5EO0I4W8dx@paquier.xyz
This is useful for extensions to get snapshot and shmem data for custom
cumulative statistics when these have a fixed number of objects, so
that they do not need to know about the snapshot internals, aka
pgStatLocal.
An upcoming commit introducing an example template for custom cumulative
stats with fixed-numbered objects will make use of these. I noticed
that this is useful for extension developers while hacking on my own
example.
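A sketch of the intended usage, assuming the helpers are
pgstat_get_custom_shmem_data() and pgstat_get_custom_snapshot_data()
and a fixed-numbered kind registered as PGSTAT_KIND_EXPERIMENTAL:

    /* live shared data for the custom kind, for updates */
    PgStatShared_Fixed *shstats = (PgStatShared_Fixed *)
        pgstat_get_custom_shmem_data(PGSTAT_KIND_EXPERIMENTAL);

    /* snapshot copy, following the usual stats snapshot rules */
    PgStat_FixedCounters *snap = (PgStat_FixedCounters *)
        pgstat_get_custom_snapshot_data(PGSTAT_KIND_EXPERIMENTAL);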
Author: Michael Paquier
Reviewed-by: Dmitry Dolgov, Bertrand Drouvot
Discussion: https://postgr.es/m/Zmqm9j5EO0I4W8dx@paquier.xyz
This commit adds support in the backend for $subject, allowing
out-of-core extensions to plug in their own custom kinds of cumulative
statistics. This feature has come up a few times on the lists; the
original suggestion came from Andres Freund, who proposed that
pg_stat_statements use the cumulative statistics APIs in shared
memory rather than its own less efficient internals. The advantage of
this implementation is that it can be extended to any kind of
statistics.
The stats kinds are divided into two parts:
- The in-core "builtin" stats kinds, with designated initializers, able
to use IDs up to 128.
- The "custom" stats kinds, able to use a range of IDs from 128 to 256
(128 slots available as of this patch), with information saved in
TopMemoryContext. This can be made larger, if necessary.
There are two types of cumulative statistics in the backend:
- For fixed-numbered objects (like WAL, archiver, etc.). These are
attached to the snapshot and pgstats shmem control structures for
efficiency, and built-in stats kinds still do that to avoid any
redirection penalty. The data of custom kinds is stored in a first
array in the snapshot structure and a second array in the shmem control
structure, both indexed by their ID, acting as an equivalent of the
builtin stats.
- For variable-numbered objects (like tables, functions, etc.). These
are stored in a dshash using the stats kind ID in the hash lookup key.
Internally, the handling of the builtin stats is unchanged, and both
fixed and variable-numbered objects are supported. Structure
definitions for builtin stats kinds are renamed to better reflect the
differences from custom kinds.
Like custom RMGRs, custom cumulative statistics can only be loaded with
shared_preload_libraries at startup, and each must claim a unique ID,
shared across the whole PostgreSQL extension ecosystem, on the
following wiki page to avoid conflicts:
https://wiki.postgresql.org/wiki/CustomCumulativeStats
This makes the detection of the stats kinds and their handling when
reading and writing stats much easier than, say, allocating IDs for
stats kinds from a shared memory counter, which could change the ID
used by a stats kind across restarts. When under development,
extensions can use PGSTAT_KIND_EXPERIMENTAL.
Two examples that can be used as templates for fixed-numbered and
variable-numbered stats kinds will be added in some follow-up commits,
with tests to provide coverage.
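For example, registration from an extension's _PG_init() would look
roughly like this (a sketch: my_stats_kind_info stands in for a kind
definition such as the follow-up templates provide):

    void
    _PG_init(void)
    {
        /* custom stats kinds can only come from shared_preload_libraries */
        if (!process_shared_preload_libraries_in_progress)
            return;

        /*
         * PGSTAT_KIND_EXPERIMENTAL is reserved for development; a released
         * extension would claim its own ID on the wiki page above.
         */
        pgstat_register_kind(PGSTAT_KIND_EXPERIMENTAL, &my_stats_kind_info);
    }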
Some documentation is added to explain how to use this plugin facility.
Author: Michael Paquier
Reviewed-by: Dmitry Dolgov, Bertrand Drouvot
Discussion: https://postgr.es/m/Zmqm9j5EO0I4W8dx@paquier.xyz
Otherwise, this would break when using C and C++ compilers from
different families that understand different options. The right flags
were already used for compiling; this change is only about linking.
Also, the meson setup already did this correctly.
Reported-by: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://www.postgresql.org/message-id/228700.1722717983@sss.pgh.pa.us
These should have been switched from %d to %u in 3188a4582a in the
debugging elogs added in ca1ba50fcb. PgStat_Kind should never be
higher than INT32_MAX, but let's be clean.
Issue noticed while hacking more on this area.
This warning flag detects global variables not declared in header
files. This is similar to what -Wmissing-prototypes does for
functions. (More correctly, it is similar to what
-Wmissing-declarations does for functions, but -Wmissing-prototypes is
a superset of that in C.)
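For example (generic C, not taken from any particular commit):

    /* foo.c */
    int         my_counter = 0;     /* warns: no previous declaration */

    /* The warning is silenced by a declaration in a header foo.c includes: */
    /* foo.h */
    extern int  my_counter;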
This flag is new in GCC 14. Clang has supported it for a while.
Several recent commits have cleaned up warnings triggered by this, so
it should now be clean.
Reviewed-by: Andres Freund <andres@anarazel.de>
Discussion: https://www.postgresql.org/message-id/flat/e0a62134-83da-4ba4-8cdb-ceb0111c95ce@eisentraut.org
Since commit 4b74ebf726, the refresh logic is used to populate
materialized views, so we can simplify the error message in
ExecCreateTableAs().
Also, RefreshMatViewByOid() is moved to just after the
create_ctas_nodata() call to improve code readability.
Author: Yugo Nagata
Discussion: https://postgr.es/m/20240802161301.d975daca9ba7a706fa05ecd7@sraoss.co.jp
pg_wal_replay_wait() is to be used on a standby and waits until the
given WAL location has been replayed. This is useful when the user
makes data changes on the primary and needs a guarantee that these
changes are visible on the standby.
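An illustrative libpq sketch of that usage (connection setup and error
handling elided):

    #include <stdio.h>
    #include <libpq-fe.h>

    static void
    wait_for_my_changes(PGconn *primary, PGconn *standby)
    {
        PGresult   *res;
        char        sql[64];

        /* after committing changes on the primary, capture its insert LSN */
        res = PQexec(primary, "SELECT pg_current_wal_insert_lsn()");
        snprintf(sql, sizeof(sql), "CALL pg_wal_replay_wait('%s')",
                 PQgetvalue(res, 0, 0));
        PQclear(res);

        /* blocks until the standby has replayed up to that LSN */
        PQclear(PQexec(standby, sql));
    }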
The queue of waiters is stored in shared memory as an LSN-ordered
pairing heap, where the waiter with the nearest LSN stays at the top.
During WAL replay, waiters whose LSNs have already been replayed are
deleted from the shared pairing heap and woken up by setting their
latches.
pg_wal_replay_wait() needs to wait without any snapshot held. Otherwise,
the snapshot could prevent the replay of WAL records, implying a kind of
self-deadlock. This is why it is only possible to implement
pg_wal_replay_wait() as a procedure working without an active snapshot,
not a function.
Catversion is bumped.
Discussion: https://postgr.es/m/eb12f9b03851bb2583adab5df9579b4b%40postgrespro.ru
Author: Kartyshov Ivan, Alexander Korotkov
Reviewed-by: Michael Paquier, Peter Eisentraut, Dilip Kumar, Amit Kapila
Reviewed-by: Alexander Lakhin, Bharath Rupireddy, Euler Taveira
Reviewed-by: Heikki Linnakangas, Kyotaro Horiguchi
Before Bison 3.4, the generated parser implementation files run afoul
of -Wmissing-variable-declarations (in spite of commit ab61c40bfa)
because declarations for yylval and possibly yylloc are missing. The
generated header files contain an extern declaration, but the
implementation files don't include the header files. Since Bison 3.4,
the generated implementation files automatically include the generated
header files, so then it works.
To make this work with older Bison versions as well, include the
generated header file from the .y file.
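Schematically, in each affected grammar file (the header name here is
made up):

    %{
    /*
     * Include the header Bison generates from this file, so that with
     * pre-3.4 Bison the implementation file also sees the extern
     * declarations for yylval (and yylloc, if used).
     */
    #include "example_gram.h"
    %}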
(With older Bison versions, the generated implementation file contains
effectively a copy of the header file pasted in, so including the
header file is redundant. But we know this works anyway because the
core grammar uses this arrangement already.)
Discussion: https://www.postgresql.org/message-id/flat/e0a62134-83da-4ba4-8cdb-ceb0111c95ce@eisentraut.org
A follow-up patch is planned to make cumulative statistics pluggable.
Using a plain type is useful in the internal routines of pgstats
because, once stats are pluggable, PgStat_Kind may take values that
were not in the enum removed here.
While on it, this commit switches pgstat_is_kind_valid() to use
PgStat_Kind rather than an int, to be more consistent with its existing
callers. Some loops based on the stats kind IDs are switched to use
PgStat_Kind rather than int, for consistency with the new type.
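After the switch, PgStat_Kind is a bare unsigned integer type, roughly
as below (the loop-bound macro names are illustrative):

    /* was an enum; now a plain type, leaving room for future pluggable IDs */
    typedef uint32 PgStat_Kind;

    for (PgStat_Kind kind = PGSTAT_KIND_FIRST_VALID; kind <= PGSTAT_KIND_LAST; kind++)
    {
        /* ... per-kind work ... */
    }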
Author: Michael Paquier
Reviewed-by: Dmitry Dolgov, Bertrand Drouvot
Discussion: https://postgr.es/m/Zmqm9j5EO0I4W8dx@paquier.xyz
This is used in the startup process to check that the pgstats file we
are reading includes the redo LSN of the shutdown checkpoint at which
it was written. The redo LSN in the pgstats file needs to match what
the control file has.
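Schematically, the startup check looks like this (variable names
assumed):

    /*
     * Discard the stats file unless the redo LSN it records matches the
     * shutdown checkpoint we are starting from.
     */
    if (file_redo != ControlFile->checkPointCopy.redo)
    {
        elog(WARNING, "found incorrect redo LSN %X/%X (expected %X/%X)",
             LSN_FORMAT_ARGS(file_redo),
             LSN_FORMAT_ARGS(ControlFile->checkPointCopy.redo));
        goto error;        /* ignore the stats file and start from scratch */
    }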
This is intended to be used for an upcoming change that will extend the
write of the stats file to happen during checkpoints, rather than only
shutdown sequences.
Bump PGSTAT_FILE_FORMAT_ID.
Reviewed-by: Bertrand Drouvot
Discussion: https://postgr.es/m/Zp8o6_cl0KSgsnvS@paquier.xyz
This converts
COPY_PARSE_PLAN_TREES
WRITE_READ_PARSE_PLAN_TREES
RAW_EXPRESSION_COVERAGE_TEST
into run-time parameters
debug_copy_parse_plan_trees
debug_write_read_parse_plan_trees
debug_raw_expression_coverage_test
They can be activated for tests using PG_TEST_INITDB_EXTRA_OPTS.
The compile-time symbols are kept for build farm compatibility, but
they now just determine the default value of the run-time settings.
Furthermore, support for these settings is not compiled in at all
unless assertions are enabled, or the new symbol
DEBUG_NODE_TESTS_ENABLED is defined at compile time, or any of the
legacy compile-time setting symbols are defined. So there is no
run-time overhead in production builds. (This is similar to the
handling of DISCARD_CACHES_ENABLED.)
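Schematically, the gating works as follows (macro logic assumed from
the description above; C variable names assumed):

    /* legacy symbols, or assert-enabled builds, compile the support in */
    #if (defined(COPY_PARSE_PLAN_TREES) || \
         defined(WRITE_READ_PARSE_PLAN_TREES) || \
         defined(RAW_EXPRESSION_COVERAGE_TEST) || \
         defined(USE_ASSERT_CHECKING)) && !defined(DEBUG_NODE_TESTS_ENABLED)
    #define DEBUG_NODE_TESTS_ENABLED
    #endif

    #ifdef DEBUG_NODE_TESTS_ENABLED
    /* run-time settings; the legacy symbols only set their defaults */
    bool        Debug_copy_parse_plan_trees;
    bool        Debug_write_read_parse_plan_trees;
    bool        Debug_raw_expression_coverage_test;
    #endif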
Discussion: https://www.postgresql.org/message-id/flat/30747bd8-f51e-4e0c-a310-a6e2c37ec8aa%40eisentraut.org
Until now we generated an ExprState for each parameter to a SubPlan and
evaluated them one-by-one in ExecScanSubPlan. That's sub-optimal as
creating lots of small ExprStates
a) makes JIT compilation more expensive
b) wastes memory
c) is a bit slower to execute
This commit arranges to evaluate parameters to a SubPlan as part of the
ExprState referencing a SubPlan, using the new EEOP_PARAM_SET expression
step. We emit one EEOP_PARAM_SET for each argument to a subplan, just before
the EEOP_SUBPLAN step.
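Schematically, the steps of the ExprState referencing a SubPlan with
two arguments now look like this (illustrative):

    /*
     * ... steps evaluating the first argument expression ...
     * EEOP_PARAM_SET   -- store it into its PARAM_EXEC slot
     * ... steps evaluating the second argument expression ...
     * EEOP_PARAM_SET   -- store it into its PARAM_EXEC slot
     * EEOP_SUBPLAN     -- run the subplan, which reads those params
     */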
It likely is worth using EEOP_PARAM_SET in other places as well, e.g. for
SubPlan outputs, nestloop parameters and - more ambitiously - to get rid of
ExprContext->domainValue/caseValue/ecxt_agg*. But that's for later.
Author: Andres Freund <andres@anarazel.de>
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Reviewed-by: Alena Rybakina <lena.ribackina@yandex.ru>
Discussion: https://postgr.es/m/20230225214401.346ancgjqc3zmvek@awork3.anarazel.de
This reverts commit f5f30c22ed.
Some buildfarm animals are failing with "cannot change
"client_encoding" during a parallel operation". It looks like
assign_client_encoding is unhappy at being asked to roll back a
client_encoding setting after a parallel worker encounters a
failure. There must be more to it though: why didn't I see this
during local testing? In any case, it's clear that moving the
RestoreGUCState() call is not as side-effect-free as I thought.
Given that the bug f5f30c22e intended to fix has gone unreported
for years, it's not something that's urgent to fix; I'm not
willing to risk messing with it further with only days to our
next release wrap.
RefreshMatViewByOid() is used for both REFRESH and CREATE MATERIALIZED
VIEW. The new flag indicating which of the two operations is being
performed is currently just used for handling internal error messages,
but it is also aimed at improving code readability.
Author: Yugo Nagata
Discussion: https://postgr.es/m/20240726122630.70e889f63a4d7e26f8549de8@sraoss.co.jp
Parallel workers failed after a sequence like
BEGIN;
CREATE USER foo;
SET SESSION AUTHORIZATION foo;
because check_session_authorization could not see the uncommitted
pg_authid row for "foo". This is because we ran RestoreGUCState()
in a separate transaction using an ordinary just-created snapshot.
The same disease afflicts any other GUC that requires catalog lookups
and isn't forgiving about the lookups failing.
To fix, postpone RestoreGUCState() into the worker's main transaction
after we've set up a snapshot duplicating the leader's. This affects
check_transaction_isolation and check_transaction_deferrable, which
think they should only run during transaction start. Make them
act like check_transaction_read_only, which already knows it should
silently accept the value when InitializingParallelWorker.
Per bug #18545 from Andrey Rachitskiy. Back-patch to all
supported branches, because this has been wrong for a while.
Discussion: https://postgr.es/m/18545-feba138862f19aaa@postgresql.org
As one might guess, dumpSequenceData() dumps the sequence data. It is
called once per sequence, and each such call executes a query to
retrieve the relevant data for a single sequence. This can cause
pg_dump to take significantly longer, especially when there are
many sequences.
This commit improves the performance of this function by gathering
all the sequence data with a single query at the beginning of
pg_dump. This information is stored in a sorted array that
dumpSequenceData() can bsearch() for what it needs. This follows a
similar approach as previous commits that introduced sorted arrays
for role information, pg_class information, and sequence metadata.
As with those commits, this patch will cause pg_dump to use more
memory, but that isn't expected to be too egregious.
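The lookup pattern is roughly the following (struct and field names
assumed):

    #include <stdlib.h>             /* bsearch */

    typedef struct SequenceItem
    {
        Oid         oid;
        int64       last_value;
        bool        is_called;
    } SequenceItem;

    static int
    SequenceItemCmp(const void *p1, const void *p2)
    {
        Oid         o1 = ((const SequenceItem *) p1)->oid;
        Oid         o2 = ((const SequenceItem *) p2)->oid;

        return (o1 > o2) - (o1 < o2);
    }

    /* in dumpSequenceData(), instead of issuing a per-sequence query: */
    SequenceItem key = {.oid = tbinfo->dobj.catId.oid};
    SequenceItem *entry = bsearch(&key, sequences, nsequences,
                                  sizeof(SequenceItem), SequenceItemCmp);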
Note that we use the brand new function pg_sequence_read_tuple() in
the query that gathers all sequence data, so we must continue to
use the preexisting query-per-sequence approach for versions older
than 18.
Reviewed-by: Euler Taveira, Michael Paquier, Tom Lane
Discussion: https://postgr.es/m/20240503025140.GA1227404%40nathanxps13
This new function returns the data for the given sequence, i.e.,
the values within the sequence tuple. Since this function is a
substitute for SELECT from the sequence, the SELECT privilege is
required on the sequence in question. It returns all NULLs for
sequences for which we lack privileges, other sessions' temporary
sequences, and unlogged sequences on standbys.
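For example, a client-side sketch (column names assumed from the
sequence tuple layout, i.e. last_value, log_cnt, is_called):

    PGresult   *res = PQexec(conn,
                             "SELECT last_value, is_called "
                             "FROM pg_sequence_read_tuple('myseq'::regclass)");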
This function is primarily intended for use by pg_dump in a
follow-up commit that will use it to optimize dumpSequenceData().
Like pg_sequence_last_value(), which is a support function for the
pg_sequences system view, pg_sequence_read_tuple() is left
undocumented.
Bumps catversion.
Reviewed-by: Michael Paquier, Tom Lane
Discussion: https://postgr.es/m/20240503025140.GA1227404%40nathanxps13
dumpSequence() dumps the sequence definitions. It is called once
per sequence, and each such call executes a query to retrieve the
metadata for a single sequence. This can cause pg_dump to take
significantly longer, especially when there are many sequences.
This commit improves the performance of this function by gathering
all the sequence metadata with a single query at the beginning of
pg_dump. This information is stored in a sorted array that
dumpSequence() can bsearch() for what it needs. This follows a
similar approach as commits d5e8930f50 and 2329cad1b9, which
introduced sorted arrays for role information and pg_class
information, respectively. As with those commits, this patch will
cause pg_dump to use more memory, but that isn't expected to be too
egregious.
Note that before version 10, the sequence metadata was stored in
the sequence relation itself, which makes it difficult to gather
all the sequence metadata with a single query. For those older
versions, we continue to use the preexisting query-per-sequence
approach.
Reviewed-by: Euler Taveira
Discussion: https://postgr.es/m/20240503025140.GA1227404%40nathanxps13
This commit modifies dumpSequence() to parse all the sequence
metadata into the appropriate types instead of carting around
string pointers to the PGresult data. Besides allowing us to free
the PGresult storage earlier in the function, this eliminates the
need to compare min_value and max_value to their respective
defaults as strings.
This is preparatory work for a follow-up commit that will improve
the performance of dumpSequence() in a similar manner to how commit
2329cad1b9 optimized binary_upgrade_set_pg_class_oids().
Reviewed-by: Euler Taveira
Discussion: https://postgr.es/m/20240503025140.GA1227404%40nathanxps13
Prior to this commit, the docs for enable_partitionwise_aggregate and
enable_partitionwise_join mentioned the additional overhead that
enabling them causes for the query planner, but they mentioned nothing
about the possible surge in work_mem-consuming executor nodes that
could end up in the final plan. Dimitrios reported that the OOM killer
intervened on his query as a result of using
enable_partitionwise_aggregate=on.
Here we adjust the docs to mention the possible increase in the number of
work_mem-consuming executor nodes that can appear in the final plan as a
result of enabling these GUCs.
Reported-by: Dimitrios Apostolou
Reviewed-by: Ashutosh Bapat
Discussion: https://postgr.es/m/3603c380-d094-136e-e333-610914fb3e80%40gmx.net
Discussion: https://postgr.es/m/CAApHDvoZ0_yqwPFEpb6h261L76BUpmh5GxBQq0LeRzQ5Jh3zzg@mail.gmail.com
Backpatch-through: 12, oldest supported version
Some users inadvertently rely on pg_dump as their primary backup tool,
when better solutions exist. The pg_dump man page is arguably
misleading in that it starts with
"pg_dump is a utility for backing up a PostgreSQL database."
This commit tones that down a little bit, by replacing most uses of
"backup" with "export" and adding a short note that pg_dump is not a
general-purpose backup tool.
Discussion: https://www.postgresql.org/message-id/flat/70b48475-7706-4268-990d-fd522b038d96%40eisentraut.org
After disabling the subscription, the failed test was changing the
two_phase option for the subscription. We cannot change the two_phase
option for a subscription while the corresponding apply worker is still
active. The check to ensure that the replication apply worker has
exited was incorrect.
Author: Vignesh C
Discussion: https://postgr.es/m/CALDaNm3YY+bzj+JWJbY+DsUgJ2mPk8OR1ttjVX2cywKr4BUgxw@mail.gmail.com