This will eventually be replaced once some translations exist
for pg_combinebackup's messages. In the meantime, we need
something here to prevent "make" from complaining in
builds with --enable-nls. Per advice added in 88dad06b4.
Self-join removal appears to be safe to apply with placeholder variables
as long as we handle PlaceHolderVar in replace_varno_walker() and replace
relid in phinfo->ph_lateral.
Discussion: https://postgr.es/m/18187-831da249cbd2ff8e%40postgresql.org
Author: Richard Guo
Reviewed-by: Andrei Lepikhov
This commit also retires sje_walker. This increases the generality of
replacing varno in the parse tree and simplifies the code.
Discussion: https://postgr.es/m/18187-831da249cbd2ff8e%40postgresql.org
Author: Richard Guo
Reviewed-by: Andrei Lepikhov
This commit introduces enhancements to the pg_stat_checkpointer view by adding
three new columns: restartpoints_timed, restartpoints_req, and
restartpoints_done. These additions aim to improve the visibility and
monitoring of restartpoint processes on replicas.
Previously, it was challenging to differentiate between successful and failed
restartpoint requests, because restartpoints on replicas depend on checkpoint
records from the primary and cannot occur more frequently than those
checkpoints.
The new columns allow for clear distinction and tracking of restartpoint
requests, their triggers, and successful completions. This enhancement aids
database administrators and developers in better understanding and diagnosing
issues related to restartpoint behavior, particularly in scenarios where
restartpoint requests may fail.
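As a minimal libpq sketch of how the new counters might be read (connection
string and output formatting are illustrative only):

#include <stdio.h>
#include <libpq-fe.h>

int
main(void)
{
    PGconn     *conn = PQconnectdb("dbname=postgres");
    PGresult   *res;

    if (PQstatus(conn) != CONNECTION_OK)
    {
        fprintf(stderr, "connection failed: %s", PQerrorMessage(conn));
        PQfinish(conn);
        return 1;
    }

    res = PQexec(conn,
                 "SELECT restartpoints_timed, restartpoints_req, "
                 "restartpoints_done FROM pg_stat_checkpointer");
    if (PQresultStatus(res) == PGRES_TUPLES_OK && PQntuples(res) == 1)
        printf("timed=%s requested=%s completed=%s\n",
               PQgetvalue(res, 0, 0),
               PQgetvalue(res, 0, 1),
               PQgetvalue(res, 0, 2));

    PQclear(res);
    PQfinish(conn);
    return 0;
}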
System catalog is changed. Catversion is bumped.
Discussion: https://postgr.es/m/99b2ccd1-a77a-962a-0837-191cdf56c2b9%40inbox.ru
Author: Anton A. Melnikov
Reviewed-by: Kyotaro Horiguchi, Alexander Korotkov
Using a scale factor large enough that the number of rows to insert
exceeds INT32_MAX would cause an infinite loop in initPopulateTable(),
preventing pgbench from finishing its initialization.
Oversight in e35cc3b3f2d0 that has refactored the data generation logic.
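A standalone sketch of the failure mode, assuming the row counter was
tracked in 32-bit arithmetic (illustrative code, not pgbench's actual
logic):

#include <stdint.h>
#include <stdio.h>

int
main(void)
{
    /* With a huge scale factor, the total row count exceeds INT32_MAX. */
    int64_t total_rows = (int64_t) INT32_MAX + 1000;
    int64_t done = 0;

    /*
     * Buggy pattern (sketch): tracking progress in a 32-bit counter
     * overflows before it can reach total_rows, so the loop never
     * terminates.
     */

    /* Fixed pattern: track progress in 64-bit arithmetic. */
    while (done < total_rows)
        done += 1000000;        /* one batch of generated rows */

    printf("generated about %lld rows\n", (long long) done);
    return 0;
}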
Author: John Hsu
Reviewed-by: Tatsuo Ishii, Japin Li
Discussion: https://postgr.es/m/CA+-JvFvHsOafjHcuFPfkyouHNZvbOXhBNhwZxKm3WNgYz9bwzA@mail.gmail.com
Commit 664d75753 pulled 010_tab_completion.pl's infrastructure for
invoking an interactive psql session out into a generally-useful test
function, but it didn't move enough stuff. We need to set up various
environment variables that readline will look at, both to ensure
stability of test results and to prevent test actions from cluttering
the calling user's ~/.psql_history. Expecting calling scripts to
remember to do that is too failure-prone: the other existing caller
001_password.pl did not do it. Hence, remove those initialization
steps from 010_tab_completion.pl and put them into interactive_psql().
Since interactive_psql was already making a local ENV hash, this has
no effect on calling scripts.
Discussion: https://postgr.es/m/794610.1703182896@sss.pgh.pa.us
Previously, when a column was dropped, the attacl, attoptions, and
attfdwoptions fields were kept unchanged. This is probably harmless, but it
seems wasteful, and leaves potentially dangling data lying around (for
example, attacl could contain references to users that are later also
dropped).
Change this to set those fields to null when a column is marked as
dropped.
Reviewed-by: Alvaro Herrera <alvherre@alvh.no-ip.org>
Discussion: https://www.postgresql.org/message-id/flat/249d819d-1763-4580-8110-0bf91a0f08b7@eisentraut.org
Up to now, our distribution tarballs have included a plain-text form
of the installation.sgml chapter. The rationale for that was that a
recipient might not have either ready internet access or HTML-viewing
tools; a theory that seems downright quaint today. Maintaining the
ability to generate this file is not without cost, because it puts
special requirements on installation.sgml that are often overlooked.
Moreover, we are moving in the direction of making our distribution
tarballs be pure git snapshots for traceability/reproducibility
reasons; including generated files doesn't fit into that plan.
Hence, let's just drop INSTALL and remove the infrastructure for
generating it. The top-level README will now recommend visiting
our website to see the installation instructions. As a useful
side-effect, we can get rid of README.git which has provoked
confusion.
Discussion: https://postgr.es/m/20231220114927.faccqqprmuyrzdip@alap3.anarazel.de
Discussion: https://postgr.es/m/e07408d9-e5f2-d9fd-5672-f53354e9305e@eisentraut.org
Commit 1301c80b21 removed some infrastructure needed to check
windows-oriented perl scripts. It also removed most such scripts, but
this one was left over. We repair the damage by making Win32::Registry a
conditional requirement that is only loaded on Windows. With this change
`perl -cw win32tzlist.pl` once again passes on non-Windows machines.
Discussion: https://postgr.es/m/a2bd77fd-61b8-4c2b-b12e-3e22ae260f82@eisentraut.org
Another oversight in dc2123400, visible when building/testing
in the source directory. (There's a lot of stuff we could
simplify if we stop supporting that case, but for now it's
still mainstream.)
This is necessary when spgcanreturn() is invoked on a partitioned
index, and the failure might be reachable in other scenarios as
well. The rest of what spgGetCache() does is perfectly sensible
for a partitioned index, so we should allow it to go through.
I think the main takeaway from this is that we lack sufficient test
coverage for non-btree partitioned indexes. Therefore, I added
simple test cases for brin and gin as well as spgist (hash and
gist AMs were covered already in indexing.sql).
Per bug #18256 from Alexander Lakhin. Although the known test case
only fails since v16 (3c569049b), I've got no faith at all that there
aren't other ways to reach this problem; so back-patch to all
supported branches.
Discussion: https://postgr.es/m/18256-0b0e1b6e4a620f1b@postgresql.org
Fix a bug during MERGE if a cross-partition update is attempted on a
partitioned table with a BEFORE DELETE ROW trigger that returns NULL,
to prevent the update. This would cause an error to be thrown, or an
assert failure in an assert-enabled build.
This was an oversight in 9321c79c86, which failed to properly
distinguish a DELETE prevented by a trigger from one prevented by a
concurrent update. Fix by having ExecDelete() return the TM_Result
status to ExecCrossPartitionUpdate(), so that it can distinguish the
two cases, and make ExecCrossPartitionUpdate() return the TM_Result
status to ExecUpdateAct(), so that it can return the correct status
from a concurrent update.
In addition, ensure that the command tag is correctly updated by
having ExecMergeMatched() pass canSetTag to ExecUpdateAct(), rather
than passing false, so that it updates the command tag if it does a
cross-partition update, making this code path in ExecMergeMatched()
consistent with ExecUpdate().
Per bug #18238 from Alexander Lakhin. Back-patch to v15, where MERGE
was introduced.
Dean Rasheed, reviewed by Richard Guo and Jian He.
Discussion: https://postgr.es/m/18238-2f2bdc7f720180b9%40postgresql.org
sed is used only if dtrace or selinux are enabled. Those options are
only used on Unix platforms, which should have sed. But we don't want
to make sed a hard requirement on Windows, which was the case in meson
until now.
This just changes sed to be not-required by meson. If you happen to
use a system with, say, dtrace but without sed, you might get a
slightly complicated error from meson during the build, but that seems
better than making sed's required status a complicated conditional that
will need to be maintained.
Discussion: https://www.postgresql.org/message-id/flat/ZQzp_VMJcerM1Cs_%40paquier.xyz
This is a function that makes a node jump by N WAL segments, which is
something a couple of tests have been relying on for some cases related
to streaming, replication slot limits and logical decoding on standbys.
Hence, this centralizes the logic, while making it cheaper by relying on
pg_logical_emit_message() to emit WAL records before switching to a new
segment.
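As a rough C/libpq illustration of the idea (the real helper lives in the
test infrastructure; the use of pg_switch_wal(), the connection string,
and the segment count are assumptions made for this sketch):

#include <libpq-fe.h>

int
main(void)
{
    PGconn *conn = PQconnectdb("dbname=postgres");

    if (PQstatus(conn) != CONNECTION_OK)
        return 1;

    for (int i = 0; i < 5; i++)     /* jump ahead by 5 WAL segments */
    {
        /* Emit a tiny WAL record so each switch really starts a new segment. */
        PQclear(PQexec(conn,
            "SELECT pg_logical_emit_message(false, 'advance', '')"));
        PQclear(PQexec(conn, "SELECT pg_switch_wal()"));
    }

    PQfinish(conn);
    return 0;
}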
Author: Bharath Rupireddy
Reviewed-by: Kyotaro Horiguchi, Euler Taveira
Discussion: https://postgr.es/m/CALj2ACU3R8QFCvDewHCMKjgb2w_-CMCyd6DAK=Jb-af14da5eg@mail.gmail.com
Commit 6af179395 added an isCatalogRel field to some WAL record types,
but this field was not shown in the rmgr descriptions. This commit
updates the relevant rmgr descriptions to display the isCatalogRel
field.
Author: Bertrand Drouvot
Reviewed-by: Michael Paquier, Masahiko Sawada
Discussion: https://postgr.es/m/957dc8f9-2a02-4640-9c01-9dcbf97c4187%40gmail.com
--show-diff becomes --diff, and --silent-diff becomes --check. These
options may now be given together. Without --check, --diff will exit
with a zero status even if diffs are found. With --check, it will now
exit with a non-zero status in that case.
Author: Tristan Partin
Reviewed-by: Daniel Gustafsson, Jelte Fennema-Nio
Discussion: https://postgr.es/m/CXLX2XYTH9S6.140SC6Y61VD88@neon.tech
The pg_dump compression code was using strdup() instead of pg_strdup()
and failed to check the returned pointer for out-of-memory before
dereferencing it. Fix by using pg_strdup() instead, which was probably
the intention in the original patch.
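As an illustration of the distinction, a simplified stand-in modeled on
the frontend helper, which exits on allocation failure instead of
returning NULL (sketch only, not the real fe_memutils.c code):

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Illustrative stand-in for the frontend pg_strdup(): never returns NULL. */
static char *
pg_strdup_sketch(const char *in)
{
    char *out = strdup(in);

    if (out == NULL)
    {
        fprintf(stderr, "out of memory\n");
        exit(1);
    }
    return out;
}

int
main(void)
{
    /* Safe to dereference without a NULL check, unlike plain strdup(). */
    char *spec = pg_strdup_sketch("zstd:level=9");

    printf("compression spec: %s\n", spec);
    free(spec);
    return 0;
}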
Backpatch to v16 where pg_dump compression was introduced.
Reviewed-by: Tristan Partin <tristan@neon.tech>
Reviewed-by: Nathan Bossart <nathandbossart@gmail.com>
Discussion: https://postgr.es/m/CC661D60-6F4C-474D-B9CF-E789ACA3CEFC@yesql.se
Backpatch-through: 16
To take an incremental backup, you use the new replication command
UPLOAD_MANIFEST to upload the manifest for the prior backup. This
prior backup could either be a full backup or another incremental
backup. You then use BASE_BACKUP with the INCREMENTAL option to take
the backup. pg_basebackup now has an --incremental=PATH_TO_MANIFEST
option to trigger this behavior.
An incremental backup is like a regular full backup except that
some relation files are replaced with files with names like
INCREMENTAL.${ORIGINAL_NAME}, and the backup_label file contains
additional lines identifying it as an incremental backup. The new
pg_combinebackup tool can be used to reconstruct a data directory
from a full backup and a series of incremental backups.
Patch by me. Reviewed by Matthias van de Meent, Dilip Kumar, Jakub
Wartak, Peter Eisentraut, and Álvaro Herrera. Thanks especially to
Jakub for incredibly helpful and extensive testing.
Discussion: http://postgr.es/m/CA+TgmoYOYZfMCyOXFyC-P+-mdrZqm5pP2N7S-r0z3_402h9rsA@mail.gmail.com
When active, this process writes WAL summary files to
$PGDATA/pg_wal/summaries. Each summary file contains information for a
certain range of LSNs on a certain TLI. For each relation, it stores a
"limit block" which is 0 if a relation is created or destroyed within
a certain range of WAL records, or otherwise the shortest length to
which the relation was truncated during that range of WAL records, or
otherwise InvalidBlockNumber. In addition, it stores a list of blocks
which have been modified during that range of WAL records, but
excluding blocks which were removed by truncation after they were
modified and never subsequently modified again.
In other words, it tells us which blocks need to be copied in case of an
incremental backup covering that range of WAL records. But this
doesn't yet add the capability to actually perform an incremental
backup; the next patch will do that.
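A hedged sketch of the per-relation "limit block" rule described above
(names and types invented for illustration; not the summarizer's real
data structures):

#include <stdbool.h>
#include <stdint.h>

#define InvalidBlockNumber ((uint32_t) 0xFFFFFFFF)

typedef struct
{
    bool        created_or_destroyed;   /* created or dropped in this WAL range? */
    bool        truncated;               /* truncated at least once in this range? */
    uint32_t    shortest_truncation;     /* smallest length it was truncated to */
} rel_wal_summary;

static uint32_t
limit_block(const rel_wal_summary *rs)
{
    if (rs->created_or_destroyed)
        return 0;                        /* nothing from before this range can be reused */
    if (rs->truncated)
        return rs->shortest_truncation;  /* blocks at or beyond this were discarded */
    return InvalidBlockNumber;           /* no limit; the modified-block list suffices */
}

int
main(void)
{
    rel_wal_summary rs = { .created_or_destroyed = false,
                           .truncated = true,
                           .shortest_truncation = 128 };

    return limit_block(&rs) == 128 ? 0 : 1;
}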
A new parameter summarize_wal enables or disables this new background
process. The background process also automatically deletes summary
files that are older than wal_summarize_keep_time, if that parameter
has a non-zero value and the summarizer is configured to run.
Patch by me, with some design help from Dilip Kumar and Andres Freund.
Reviewed by Matthias van de Meent, Dilip Kumar, Jakub Wartak, Peter
Eisentraut, and Álvaro Herrera.
Discussion: http://postgr.es/m/CA+TgmoYOYZfMCyOXFyC-P+-mdrZqm5pP2N7S-r0z3_402h9rsA@mail.gmail.com
First, mark the xlblocks member with InvalidXLogRecPtr, then issue a
write barrier, then initialize it. That ensures that the xlblocks
member doesn't appear valid while the contents are being initialized.
In preparation for reading WAL buffer contents without a lock.
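A standalone analog of this ordering, using C11 atomics in place of
PostgreSQL's pg_atomic_write_u64()/pg_write_barrier(); the struct layout
and page size here are illustrative assumptions:

#include <stdatomic.h>
#include <stdint.h>
#include <string.h>

#define InvalidXLogRecPtr ((uint64_t) 0)

typedef struct
{
    _Atomic uint64_t xlblocks;      /* end LSN of the page held in buf */
    char             buf[8192];     /* the WAL buffer page itself */
} wal_buffer;

static void
reinit_wal_buffer(wal_buffer *b, uint64_t new_end_lsn)
{
    /* 1. Mark the slot invalid so lock-free readers will not trust it. */
    atomic_store_explicit(&b->xlblocks, InvalidXLogRecPtr,
                          memory_order_relaxed);

    /* 2. Full fence standing in for pg_write_barrier(): the store above
     *    must be visible before we start overwriting the contents. */
    atomic_thread_fence(memory_order_seq_cst);

    /* 3. Initialize the page contents. */
    memset(b->buf, 0, sizeof(b->buf));

    /* 4. Publish the real value; release ordering keeps the contents
     *    from being reordered after this store. */
    atomic_store_explicit(&b->xlblocks, new_end_lsn, memory_order_release);
}

int
main(void)
{
    static wal_buffer b;

    reinit_wal_buffer(&b, 0x1000000);
    return 0;
}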
Author: Bharath Rupireddy
Discussion: https://postgr.es/m/CALj2ACVfFMfqD5oLzZSQQZWfXiJqd-NdX0_317veP6FuB31QWA@mail.gmail.com
Reviewed-by: Andres Freund
This commit removes all the scripts located in src/tools/msvc/ to build
PostgreSQL with Visual Studio on Windows; meson is now the recommended
way to achieve that. The scripts held some information that is still
relevant with meson; that information has been kept and moved to better
locations.
Comments that referred directly to the scripts are removed.
All the still-relevant documentation from install-windows.sgml has been
moved to installation.sgml under a new subsection for Visual Studio.
All the content specific to the scripts is removed. Some adjustments
for the documentation are planned in a follow-up set of changes.
Author: Michael Paquier
Reviewed-by: Peter Eisentraut, Andres Freund
Discussion: https://postgr.es/m/ZQzp_VMJcerM1Cs_@paquier.xyz
This makes it possible for the code to be easily reused by other
client-side tools, and/or by the server.
Patch by me. Review of this patch in particular by at least Peter
Eisentraut; reviewers for the patch series in general include Dilip
Kumar, Andres Freund, David Steele, Álvaro Herrera, and Jakub Wartak.
Discussion: http://postgr.es/m/CA+TgmoZ6UGZVnSy5iak6s6+AXu_DewXovDjhLs3-su6nmU_x_g@mail.gmail.com
It's at least theoretically possible to overflow int32 when adding up
column width estimates to make a row width estimate. (The bug example
isn't terribly convincing as a real use-case, but perhaps wide joins
would provide a more plausible route to trouble.) This'd lead to
assertion failures or silly planner behavior. To forestall it, make
the relevant functions compute their running sums in int64 arithmetic
and then clamp to int32 range at the end. We can reasonably assume
that MaxAllocSize is a hard limit on actual tuple width, so clamping
to that is simply a correction for dubious input values, and there's
no need to go as far as widening width variables to int64 everywhere.
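A minimal standalone sketch of the approach (the MaxAllocSize value is
reproduced here for illustration; the actual planner helpers differ):

#include <stdint.h>
#include <stdio.h>

#define MaxAllocSize ((int64_t) 0x3fffffff)     /* 1 GB - 1, as in memutils.h */

/* Accumulate per-column width estimates in int64, clamp to int32 at the end. */
static int32_t
clamp_width_estimate(int64_t sum)
{
    if (sum > MaxAllocSize)
        return (int32_t) MaxAllocSize;
    if (sum < 0)
        return 0;
    return (int32_t) sum;
}

int
main(void)
{
    int64_t total = 0;

    for (int i = 0; i < 10000; i++)
        total += 1000000;       /* dubious, very wide column estimates */

    /* This sum would overflow if accumulated in 32-bit arithmetic. */
    printf("clamped row width estimate: %d\n", clamp_width_estimate(total));
    return 0;
}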
Per bug #18247 from RekGRpth. There've been no reports of this issue
arising in practical cases, so I feel no need to back-patch.
Richard Guo and Tom Lane
Discussion: https://postgr.es/m/18247-11ac477f02954422@postgresql.org
The example for dropping an option was incorrectly quoting the
option key, thus making it a value and turning the command into an
unqualified ADD operation. As a result, dropping an option instead
added a new key/value pair:
d=# alter foreign data wrapper f options (drop 'b');
ALTER FOREIGN DATA WRAPPER
d=# select fdwoptions from pg_foreign_data_wrapper where fdwname='f';
fdwoptions
------------
{drop=b}
(1 row)
This has been incorrect for a long time so backpatch to all
supported branches.
Author: Tim <tim.needham2@gmail.com>
Discussion: https://postgr.es/m/170292280173.1876505.5204623074024041738@wrigleys.postgresql.org
- Remove MemoryContextAllocZeroAligned(). It was supposed to be a
faster version of MemoryContextAllocZero(), but modern compilers turn
the MemSetLoop() into a call to memset() anyway, making it more or
less identical to MemoryContextAllocZero(). That was the only user of
MemSetTest and MemSetLoop, so remove those too, as well as palloc0fast().
- Convert newNode() to a static inline function. When this was
originally written, it was done as a macro because testing showed that
gcc didn't inline the size check as we intended. Modern compiler
versions do, and now that it just calls palloc0() there is no size
check to inline anyway.
One nice effect is that palloc0() takes one less argument than
MemoryContextAllocZeroAligned(), which saves a few instructions in the
callers of newNode().
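A self-contained sketch of the shape of the change (simplified types,
not the actual nodes.h definitions), showing a static inline that
zero-allocates and tags a node:

#include <stddef.h>
#include <stdlib.h>

typedef enum NodeTag { T_Invalid = 0, T_ExampleNode } NodeTag;

typedef struct Node { NodeTag type; } Node;

typedef struct ExampleNode { NodeTag type; int payload; } ExampleNode;

/* Formerly macro-based; as a static inline the size argument is still
 * folded at compile time by modern compilers.  calloc() stands in for
 * palloc0() here. */
static inline Node *
new_node(size_t size, NodeTag tag)
{
    Node *result = (Node *) calloc(1, size);

    if (result == NULL)
        abort();                /* palloc0() would ereport(ERROR) instead */
    result->type = tag;
    return result;
}

#define makeExampleNode() ((ExampleNode *) new_node(sizeof(ExampleNode), T_ExampleNode))

int
main(void)
{
    ExampleNode *n = makeExampleNode();

    n->payload = 42;
    free(n);
    return 0;
}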
Reviewed-by: Peter Eisentraut, Tom Lane, John Naylor, Thomas Munro
Discussion: https://www.postgresql.org/message-id/b51f1fa7-7e6a-4ecc-936d-90a8a1659e7c@iki.fi
This function reads a page directly from a relation, relying on
index_open() to open the index to read from. Unfortunately, this would
crash when using partitioned indexes, as these can be opened with
index_open() but they have no physical pages.
Alexander has fixed the module, while I have written the test.
Author: Alexander Lakhin, Michael Paquier
Discussion: https://postgr.es/m/18246-f4d9ff7cb3af77e6@postgresql.org
Backpatch-through: 12
As coded, the function relied on index_open() when opening an index
relation, allowing partitioned indexes to be processed by
pgstathashindex(). This was leading to a "could not open file" error
because partitioned indexes have no physical files, or to a crash with
an assertion failure (like on HEAD).
This issue is fixed by applying the same checks as the other index
stat functions, verifying both RELKIND_INDEX and the expected index
AM.
Author: Alexander Lakhin
Discussion: https://postgr.es/m/18246-f4d9ff7cb3af77e6@postgresql.org
Backpatch-through: 12
Currently, we give a misleading error if the user omits the required
parameter 'proto_version' in the SQL API pg_logical_slot_get_changes()
or in the START_REPLICATION protocol message. The error raised, shown
below, incorrectly suggests that the wrong version was passed:
ERROR: client sent proto_version=0 but server only supports protocol 1 or higher
Author: Emre Hasegeli
Reviewed-by: Peter Smith, Amit Kapila
Discussion: https://postgr.es/m/CAE2gYzwdwtUbs-tPSV-QBwgTubiyGD2ZGsSnAVsDfAGGLDrGOA@mail.gmail.com
The value was double in the original implementation of this logic.
Commit da08a6598 pulled it out into a subroutine, but carelessly
declared the parameter as int when it should have been double.
On most platforms, the only ill effect would be to clamp the value
to be not more than INT_MAX, which would seldom be exceeded and
probably wouldn't change the estimates too much anyway. Nonetheless,
it's wrong and can cause complaints from ubsan.
While here, improve the comments and parameter names.
This is an ABI change in a globally exposed subroutine, so
back-patching would create some risk of breaking extensions.
The value of the fix doesn't seem high enough to warrant taking
that risk, so fix in HEAD only.
Per report from Alexander Lakhin.
Discussion: https://postgr.es/m/f5e15fe1-202d-1936-f47c-f0c69a936b72@gmail.com
Presently, all platforms implement atomic exchanges by performing
an atomic compare-and-swap in a loop until it succeeds. This can
be especially expensive when there is contention on the atomic
variable. This commit optimizes atomic exchanges on many platforms
by using compiler intrinsics, which should compile into something
much less expensive than a compare-and-swap loop. Since these
intrinsics have been available for some time, the inline assembly
implementations are omitted.
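A standalone comparison of the two shapes, using GCC/Clang __atomic
builtins directly rather than the pg_atomic_* wrappers:

#include <stdint.h>
#include <stdio.h>

/* Old shape: emulate exchange with a compare-and-swap loop. */
static uint32_t
exchange_via_cas(volatile uint32_t *ptr, uint32_t newval)
{
    uint32_t old;

    do
    {
        old = *ptr;
    } while (!__atomic_compare_exchange_n(ptr, &old, newval, false,
                                          __ATOMIC_SEQ_CST, __ATOMIC_SEQ_CST));
    return old;
}

/* New shape: a single exchange intrinsic, typically one instruction
 * (e.g. XCHG on x86), with no retry loop under contention. */
static uint32_t
exchange_via_intrinsic(volatile uint32_t *ptr, uint32_t newval)
{
    return __atomic_exchange_n(ptr, newval, __ATOMIC_SEQ_CST);
}

int
main(void)
{
    volatile uint32_t v = 1;

    printf("old=%u\n", (unsigned) exchange_via_cas(&v, 2));
    printf("old=%u\n", (unsigned) exchange_via_intrinsic(&v, 3));
    printf("final=%u\n", (unsigned) v);
    return 0;
}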
Suggested-by: Andres Freund
Reviewed-by: Andres Freund
Discussion: https://postgr.es/m/20231129212905.GA1258737%40nathanxps13
Commit dc3f9bc549 mainly targeted the JSONTYPE_NUMERIC code path.
This commit applies similar optimizations (e.g., removing
unnecessary runtime calls to strlen() and palloc()) to nearby code.
Reviewed-by: Tom Lane
Discussion: https://postgr.es/m/20231208203708.GA4126315%40nathanxps13