As a demonstration of the sort of thing that can be done by adding a
custom backup target, this defines a 'shell' target which executes a
command defined by the system administrator. The command is executed
once for each tar archive generated by the backup and once for the
backup manifest, if any. Each time the command is executed, it
receives the contents of the file for which it is executed via standard
input.
The configured command can use %f to refer to the name of the archive
(e.g. base.tar, $TABLESPACE_OID.tar, backup_manifest) and %d to refer
to the target detail (pg_basebackup --target shell:DETAIL). A target
detail is required if %d appears in the configured command and
forbidden if it does not.
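As an illustration, assuming the contrib module is installed as
basebackup_to_shell and exposes the command through a GUC named
basebackup_to_shell.command (names not spelled out above, so treat them
as hypothetical), a setup could look like:

    # postgresql.conf
    basebackup_to_shell.command = 'cat > /mnt/server/backups/%d/%f'

    $ pg_basebackup -X fetch --target shell:mybackup

Here %d expands to the target detail ("mybackup") and %f to each archive
or manifest file name.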
Patch by me, reviewed by Abhijit Menon-Sen.
Discussion: http://postgr.es/m/CA+TgmoaqvdT-u3nt+_kkZ7bgDAyqDB0i-+XOMmr5JN2Rd37hxw@mail.gmail.com
Logical replication apply workers for a subscription can easily get stuck
in an infinite loop of attempting to apply a change, triggering an error
(such as a constraint violation), exiting with the error written to the
subscription server log, and restarting.
To partially remedy the situation, this patch adds a new subscription
option named 'disable_on_error'. To be consistent with old behavior, this
option defaults to false. When true, both the tablesync worker and apply
worker catch any errors thrown and disable the subscription in order to
break the loop. The error is still written to the server log as well.
Once the subscription is disabled, users can either manually resolve the
conflict/error or skip the conflicting transaction by using the
pg_replication_origin_advance() function. After resolving the conflict,
users need to enable the subscription to allow the apply process to proceed.
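For example (subscription name and connection string are illustrative):

    CREATE SUBSCRIPTION mysub
        CONNECTION 'host=primary dbname=postgres'
        PUBLICATION mypub
        WITH (disable_on_error = true);

    -- after resolving the conflict or skipping the transaction:
    ALTER SUBSCRIPTION mysub ENABLE;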
Author: Osumi Takamichi and Mark Dilger
Reviewed-by: Greg Nancarrow, Vignesh C, Amit Kapila, Wang wei, Tang Haiying, Peter Smith, Masahiko Sawada, Shi Yu
Discussion : https://postgr.es/m/DB35438F-9356-4841-89A0-412709EBD3AB%40enterprisedb.com
"Hot Standby" was capitalized in a couple of places in the docs, as the
style primarily used when it was introduced, but this has not been much
respected across the years. Per discussion, it is more natural for the
reader to use "hot standby" (aka lower-case only) when in the middle of
a sentence, and "Hot standby" (aka capitalized) in a title. This commit
adjusts all the places in the docs to be consistent with this choice,
rather than applying one style or the other midway.
Author: Daniel Westermann
Reviewed-by: Kyotaro Horiguchi, Aleksander Alekseev, Robert Treat
Discussion: https://postgr.es/m/GVAP278MB093160025A779A1A5788D0EAD2039@GVAP278MB0931.CHEP278.PROD.OUTLOOK.COM
wal_compression gains a new value, "zstd", to allow the compression of
full-page images using the compression method of the same name.
Compression is done using the default level recommended by the library,
that is, ZSTD_CLEVEL_DEFAULT = 3. Some benchmarking has shown that it
could make sense to use a lower level for FPI compression, like 1 or
2, as the compression rate did not change much while a bit less CPU was
consumed, but any tests done would only cover a few scenarios, so it is
hard to come to a clear conclusion. In any case, there is no reason not
to use the default level, which is the level recommended by the
library and should be fine for most cases.
zstd easily outclasses pglz, and is better than LZ4 when one wants more
compression at the cost of extra CPU, but both are good enough in their
own scenarios, so the choice between them comes down mainly to studying
the workload patterns and the schema involved.
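To enable it on a server built with zstd support, set for example:

    # postgresql.conf
    wal_compression = zstd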
This commit relies heavily on 4035cd5, which reshaped the code creating
and restoring full-page writes to be aware of the compression type,
making this integration straightforward.
This patch borrows some early work from Andrey Borodin, though the patch
got a complete rewrite.
Author: Justin Pryzby
Discussion: https://postgr.es/m/20220222231948.GJ9008@telsasoft.com
ALTER ROUTINE triggers the events ddl_command_start and ddl_command_end,
and DROP ROUTINE triggers sql_drop, ddl_command_start and
ddl_command_end, but this was not mentioned in the matrix table.
Reported-by: Leslie Lemaire
Discussion: https://postgr.es/m/164647533363.646.5802968483136493025@wrigleys.postgresql.org
Backpatch-through: 11
Both client-side compression and server-side compression are now
supported for zstd. In addition, a backup compressed by the server
using zstd can now be decompressed by the client in order to
accommodate the use of -Fp.
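For example (spellings follow the pattern used by the other compression
methods; the optional level detail is illustrative):

    $ pg_basebackup -Ft -D backup --compress server-zstd
    $ pg_basebackup -Ft -D backup --compress client-zstd:5
    $ pg_basebackup -Fp -D backup --compress server-zstd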
Jeevan Ladhe, with some edits by me.
Discussion: http://postgr.es/m/CA+Tgmobyzfbz=gyze2_LL1ZumZunmaEKbHQxjrFkOR7APZGu-g@mail.gmail.com
This commit adds both the finish LSN (commit_lsn in case the transaction
got committed, prepare_lsn in case of a prepared transaction, etc.) and
the replication origin name to the existing error context message.
This will help users specify the origin name and transaction finish
LSN to the pg_replication_origin_advance() SQL function to skip a particular
transaction.
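For example, with the origin name and finish LSN taken from the error
context message (values here are illustrative):

    SELECT pg_replication_origin_advance('pg_16395', '0/14C0378');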
Author: Masahiko Sawada
Reviewed-by: Takamichi Osumi, Euler Taveira, and Amit Kapila
Discussion: https://postgr.es/m/CAD21AoBarBf2oTF71ig2g_o=3Z_Dt6_sOpMQma1kFgbnA5OZ_w@mail.gmail.com
It was decided (refer to the Discussion link below) that the stats
collector is not an appropriate place to store the error information of
subscription workers.
This patch changes the pg_stat_subscription_workers view (introduced by
commit 8d74fc96db) so that it stores only statistics counters:
apply_error_count and sync_error_count, and has one entry for
each subscription. The removed error information, such as the error XID
and the error message, will be stored in another way in the future that
is more reliable and persistent.
After removing these error details, there is no longer any relation
information, so the subscription statistics are now cluster-wide
statistics.
The patch also changes the view name to pg_stat_subscription_stats, since
the word "worker" refers to an implementation detail, namely that we use
one worker for each tablesync and one worker for apply.
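For example, the counters can now be checked with a query like:

    SELECT subname, apply_error_count, sync_error_count
    FROM pg_stat_subscription_stats;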
Author: Masahiko Sawada, based on suggestions by Andres Freund
Reviewed-by: Peter Smith, Haiying Tang, Takamichi Osumi, Amit Kapila
Discussion: https://postgr.es/m/20220125063131.4cmvsxbz2tdg6g65@alap3.anarazel.de
SPI_commit previously left it up to the caller to recover from any error
occurring during commit. Since that's complicated and requires use of
low-level xact.c facilities, it's not too surprising that no caller got
it right. Let's move the responsibility for cleanup into spi.c. Doing
that requires redefining SPI_commit as starting a new transaction, so
that it becomes equivalent to SPI_commit_and_chain except that you get
default transaction characteristics instead of preserving the prior
transaction's characteristics. We can make this pretty transparent
API-wise by redefining SPI_start_transaction() as a no-op. Callers
that expect to do something in between might be surprised, but
available evidence is that no callers do so.
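A caller's commit sequence thus reduces to something like this sketch
(the explicit restart is no longer the caller's job):

    /* previously: SPI_commit(); SPI_start_transaction(); */
    SPI_commit();   /* commits, then starts a fresh transaction internally */
    /* ... keep issuing SPI calls; SPI_start_transaction() is now a no-op */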
Having made that API redefinition, we can fix this mess by having
SPI_commit[_and_chain] trap errors and start a new, clean transaction
before re-throwing the error. Likewise for SPI_rollback[_and_chain].
Some cleanup is also needed in AtEOXact_SPI, which was nowhere near
smart enough to deal with SPI contexts nested inside a committing
context.
While plperl and pltcl need no changes beyond removing their now-useless
SPI_start_transaction() calls, plpython needs some more work because it
hadn't gotten the memo about catching commit/rollback errors in the
first place. Such an error resulted in longjmp'ing out of the Python
interpreter, which leaks Python stack entries at present and is reported
to crash Python 3.11 altogether. Add the missing logic to catch such
errors and convert them into Python exceptions.
We are probably going to have to back-patch this once Python 3.11 ships,
but it's a sufficiently basic change that I'm a bit nervous about doing
so immediately. Let's let it bake awhile in HEAD first.
Peter Eisentraut and Tom Lane
Discussion: https://postgr.es/m/3375ffd8-d71c-2565-e348-a597d6e739e3@enterprisedb.com
Discussion: https://postgr.es/m/17416-ed8fe5d7213d6c25@postgresql.org
postgres_fdw commits remote (sub)transactions opened on remote server(s)
in a local (sub)transaction one by one when the local (sub)transaction
commits. This patch allows it to commit the remote (sub)transactions in
parallel to improve performance. This is enabled by the server option
"parallel_commit". The default is false.
Etsuro Fujita, reviewed by Fujii Masao and David Zhang.
Discussion: http://postgr.es/m/CAPmGK17dAZCXvwnfpr1eTfknTGdt%3DhYTV9405Gt5SqPOX8K84w%40mail.gmail.com
This feature adds row filtering for publication tables. When a publication
is defined or modified, an optional WHERE clause can be specified. Rows
that don't satisfy this WHERE clause will be filtered out. This allows a
set of tables to be partially replicated. The row filter is per table. A
new row filter can be added simply by specifying a WHERE clause after the
table name. The WHERE clause must be enclosed by parentheses.
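For example (publication, table and condition are illustrative):

    CREATE PUBLICATION active_departments
        FOR TABLE departments WHERE (active IS TRUE);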
The row filter WHERE clause for a table added to a publication that
publishes UPDATE and/or DELETE operations must contain only columns that
are covered by REPLICA IDENTITY. The row filter WHERE clause for a table
added to a publication that publishes INSERT can use any column. If the
row filter evaluates to NULL, it is regarded as "false". The WHERE clause
only allows simple expressions that don't have user-defined functions,
user-defined operators, user-defined types, user-defined collations,
non-immutable built-in functions, or references to system columns. These
restrictions could be addressed in the future.
If you choose to do the initial table synchronization, only data that
satisfies the row filters is copied to the subscriber. If the subscription
has several publications in which a table has been published with
different WHERE clauses, rows that satisfy ANY of the expressions will be
copied. If a subscriber is a pre-15 version, the initial table
synchronization won't use row filters even if they are defined in the
publisher.
The row filters are applied before publishing the changes. If the
subscription has several publications in which the same table has been
published with different filters (for the same publish operation), those
expressions get OR'ed together so that rows satisfying any of the
expressions will be replicated.
This means all the other filters become redundant if (a) one of the
publications has no filter at all, (b) one of the publications was
created using FOR ALL TABLES, or (c) one of the publications was created
using FOR ALL TABLES IN SCHEMA and the table belongs to that same schema.
If your publication contains a partitioned table, the publication
parameter publish_via_partition_root determines if it uses the partition's
row filter (if the parameter is false, the default) or the root
partitioned table's row filter.
The psql commands \dRp+ and \d <table-name> will display any row filters.
Author: Hou Zhijie, Euler Taveira, Peter Smith, Ajin Cherian
Reviewed-by: Greg Nancarrow, Haiying Tang, Amit Kapila, Tomas Vondra, Dilip Kumar, Vignesh C, Alvaro Herrera, Andres Freund, Wei Wang
Discussion: https://www.postgresql.org/message-id/flat/CAHE3wggb715X%2BmK_DitLXF25B%3DjE6xyNCH4YOwM860JR7HarGQ%40mail.gmail.com
"regress" is a new mode added to compute_query_id aimed at facilitating
regression testing when a module computing query IDs is loaded into the
backend, like pg_stat_statements. It works the same way as "auto",
meaning that query IDs are computed if a module enables it, except that
query IDs are hidden in EXPLAIN outputs to ensure regression output
stability.
Like other GUCs of this kind (force_parallel_mode, etc.), this new
configuration value can be added to an instance's postgresql.conf, or
just passed down with PGOPTIONS on the command line. compute_query_id uses an
enum for its set of option values, meaning that this addition ensures
ABI compatibility.
Using this new configuration mode allows installcheck-world to pass when
running the tests on an instance with pg_stat_statements enabled,
stabilizing the test output while checking the paths doing query ID
computations.
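For example, either of these (illustrative) forms enables the mode:

    PGOPTIONS='-c compute_query_id=regress' make installcheck-world

    # or in the instance's postgresql.conf
    shared_preload_libraries = 'pg_stat_statements'
    compute_query_id = regress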
Reported-by: Anton Melnikov
Reviewed-by: Julien Rouhaud
Discussion: https://postgr.es/m/1634283396.372373993@f75.i.mail.ru
Discussion: https://postgr.es/m/YgHlxgc/OimuPYhH@paquier.xyz
Backpatch-through: 14
6c417bb has added the build infrastructure to support ZSTD, but forgot
to update this section of the docs to mention the variable ZSTD, as per
the change done in vcregress.pl.
While at it, reword this section of the docs to describe what happens in
the default case, as per a suggestion from Robert Haas.
Discussion: https://postgr.es/m/YhCL0fKnDv/Zvtuo@paquier.xyz
As currently implemented, failure of a PGEVT_CONNRESET callback
forces the PGconn into the CONNECTION_BAD state (without closing
the socket, which is inconsistent with other failure paths), and
prevents later callbacks from being called. This seems highly
questionable, and indeed is questioned by comments in the source.
Instead, let's just ignore the result value of PGEVT_CONNRESET
calls. Like the preceding commit, this converts event callbacks
into "pure observers" that cannot affect libpq's processing logic.
Discussion: https://postgr.es/m/3185105.1644960083@sss.pgh.pa.us
As currently implemented, failure of a PGEVT_RESULTCREATE callback
causes the PGresult to be converted to an error result. This is
intellectually inconsistent (shouldn't a failing callback likewise
prevent creation of the error result? what about side-effects on the
behavior seen by other event procs? why does PQfireResultCreateEvents
act differently from PQgetResult?), but more importantly it destroys
any promises we might wish to make about the behavior of libpq in
nontrivial operating modes, such as pipeline mode. For example,
it's not possible to promise that PGRES_PIPELINE_SYNC results will
be returned if an event callback fails on those. With this
definition, expecting applications to behave sanely in the face of
possibly-failing callbacks seems like a very big lift.
Hence, redefine the result of a callback failure as being simply
that that event procedure won't be called any more for this PGresult
(which was true already). Event procedures can still signal failure
back to the application through out-of-band mechanisms, for example
via their passthrough arguments.
Similarly, don't let failure of a PGEVT_RESULTCOPY callback prevent
PQcopyResult from succeeding. That definition allowed a misbehaving
event proc to break single-row mode (our sole internal use of
PQcopyResult), and it probably had equally deleterious effects for
outside uses.
Discussion: https://postgr.es/m/3185105.1644960083@sss.pgh.pa.us
Commit 6e0cb3dec1 allowed postgres_fdw.application_name to include
escape sequences %a (application name), %d (database name), %u (user name)
and %p (pid). In addition to them, this commit makes it support
the escape sequences for session ID (%c) and cluster name (%C).
These are helpful for investigating where each remote transaction comes from.
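For example (the format string is illustrative):

    SET postgres_fdw.application_name TO
        'app=%a db=%d user=%u pid=%p sid=%c cluster=%C';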
Author: Fujii Masao
Reviewed-by: Ryohei Takahashi, Kyotaro Horiguchi
Discussion: https://postgr.es/m/1041dc9a-c976-049f-9f14-e7d94c29c4b2@oss.nttdata.com
Python 2.7 went EOL on 2020-01-01, and supporting Python 2 requires a fair
bit of infrastructure. Therefore we are removing Python 2 support in plpython.
This patch just rejects Python 2 during configure / mkvcbuild.pl. Future
commits will remove the code and infrastructure for Python 2 support and
adjust more of the documentation. This way we can see the buildfarm state
after the removal sooner and we can be sure that failures are due to
desupporting Python 2, rather than caused by infrastructure cleanup.
Reviewed-By: Peter Eisentraut <peter@eisentraut.org>
Discussion: https://postgr.es/m/20211031184548.g4sxfe47n2kyi55r@alap3.anarazel.de
Double the default setting for hash_mem_multiplier, from 1.0 to 2.0.
This setting makes hash-based executor nodes use twice the usual
work_mem limit.
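For example, with the default work_mem of 4MB, hash-based nodes may now
use up to 8MB (4MB * 2.0), whereas previously they were limited to the
same 4MB as sort-based nodes.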
The PostgreSQL 15 release notes should have a compatibility note about
this change.
Author: Peter Geoghegan <pg@bowt.ie>
Discussion: https://postgr.es/m/CAH2-Wzndc_ROk6CY-bC6p9O53q974Y0Ey4WX8jcPbuTZYM4Q3A@mail.gmail.com
The 028_pitr_timelines.pl test would sometimes hang, waiting for a WAL
segment that was just filled up to be archived. It was because the
test used 'pg_stat_archiver.last_archived_wal' to check if a file was
archived, but the order that WAL files are archived when a standby is
promoted is not fully deterministic, and 'last_archived_wal' tracks
the last segment that was archived, not the highest-numbered WAL
segment. Because of that, if the archiver archived segment 3, and then
2, 'last_archived_wal' says 2, and the test query would think that 3
has not been archived yet.
Normally, WAL files are marked ready for archival in order, and the
archiver process will process them in order, so that issue doesn't
arise. We have used the same query on 'last_archived_wal' in a few
other tests with no problem. But when a standby is promoted, things
are a bit chaotic. After promotion, the server will try to archive all
the WAL segments from the old timeline that are in pg_wal, as well as
the history file and any new WAL segments on the new timeline. The
end-of-recovery checkpoint will create the .ready files for all the
WAL files on the old timeline, but at the same time, the new timeline
is opened up for business. A file from the new timeline can therefore
be archived before the files from the old timeline have been marked as
ready for archival.
It turns out that we don't really need to wait for the archival in
this particular test, because the standby server is about to be
stopped, and stopping a server will wait for the end-of-recovery
checkpoint and all WAL archivals to finish, anyway. So we can just
remove it from the test.
Add a note to the docs on 'pg_stat_archiver' view that files can be
archived out of order.
Reviewed-by: Tom Lane
Discussion: https://www.postgresql.org/message-id/3186114.1644960507@sss.pgh.pa.us
docbook-xsl's index generation stylesheet (autoidx.xsl) has a small bug: It
doesn't include xlink in exclude-result-prefixes. Normally this just leads to
a single xmlns:xlink in the <div> containing the index, but because our
customization emits that, xmlns:xlink instead gets added to every element
output by autoidx.xsl below the <div>, totalling around 100kB.
Adding the spurious xmlns:xlink to the <div> ourselves isn't great, but avoids
the duplication.
Reviewed-By: Peter Eisentraut <peter@eisentraut.org>
Discussion: https://postgr.es/m/20220213201618.qz6p6noon3wagr3f%40alap3.anarazel.de
The XSL stylesheets used a mix of incorrect or outdated namespace
declarations for XHTML, probably based on ancient advice and examples.
Clean all this up.
Besides improving correctness (although probably no impact in
practice, other than possible validation failures), this removes a
bunch of useless namespace declarations in the HTML output.
Reported-by: Andres Freund <andres@anarazel.de>
Discussion: https://www.postgresql.org/message-id/flat/20220213201618.qz6p6noon3wagr3f%40alap3.anarazel.de
This adds to database objects the same version tracking that collation
objects have. There is a new pg_database column datcollversion that
stores the version, a new function
pg_database_collation_actual_version() to get the version from the
operating system, and a new subcommand ALTER DATABASE ... REFRESH
COLLATION VERSION.
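For example (the database name is illustrative):

    SELECT datcollversion,
           pg_database_collation_actual_version(oid) AS os_version
    FROM pg_database WHERE datname = 'mydb';

    ALTER DATABASE mydb REFRESH COLLATION VERSION;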
This was not originally added together with pg_collation.collversion,
since originally version tracking was only supported for ICU, and ICU
at the database level is not currently supported. But we now have
version tracking for glibc (since PG13), FreeBSD (since PG14), and
Windows (since PG13), so this is useful to have now.
Reviewed-by: Julien Rouhaud <rjuju123@gmail.com>
Discussion: https://www.postgresql.org/message-id/flat/f0ff3190-29a3-5b39-a179-fa32eee57db6%40enterprisedb.com
Previously we used poll() directly to check for a POLLRDHUP event.
Instead, use the WaitEventSet API to poll the socket for
WL_SOCKET_CLOSED, which knows how to detect this condition on many more
operating systems.
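A rough sketch of the new approach (not the exact backend code; sock and
closed are illustrative variables):

    WaitEvent     ev;
    WaitEventSet *wes = CreateWaitEventSet(CurrentMemoryContext, 1);
    bool          closed = false;

    AddWaitEventToSet(wes, WL_SOCKET_CLOSED, sock, NULL, NULL);
    /* a timeout of 0 makes this a non-blocking poll of the socket */
    if (WaitEventSetWait(wes, 0, &ev, 1, 0) > 0 &&
        (ev.events & WL_SOCKET_CLOSED))
        closed = true;
    FreeWaitEventSet(wes);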
Reviewed-by: Zhihong Yu <zyu@yugabyte.com>
Reviewed-by: Maksim Milyutin <milyutinma@gmail.com>
Reviewed-by: Andres Freund <andres@anarazel.de>
Discussion: https://postgr.es/m/77def86b27e41f0efcba411460e929ae%40postgrespro.ru
Currently, during UPDATE, the unchanged replica identity key attributes
are not logged separately because they are getting logged as part of the
new tuple. But if they are stored externally then the untoasted values are
not getting logged as part of the new tuple and logical replication won't
be able to replicate such UPDATEs. So we need to log such attributes as
part of the old_key_tuple during UPDATE.
Reported-by: Haiying Tang
Author: Dilip Kumar and Amit Kapila
Reviewed-by: Alvaro Herrera, Haiying Tang, Andres Freund
Backpatch-through: 10
Discussion: https://postgr.es/m/OS0PR01MB611342D0A92D4F4BF26C0F47FB229@OS0PR01MB6113.jpnprd01.prod.outlook.com
LZ4 compression can now be performed on the client using
pg_basebackup -Ft --compress client-lz4, and LZ4 decompression of
a backup compressed on the server can be performed on the client
using pg_basebackup -Fp --compress server-lz4.
Dipesh Pandit, reviewed and tested by Jeevan Ladhe and Tushar Ahuja,
with a few corrections - and some documentation - by me.
Discussion: http://postgr.es/m/CAN1g5_FeDmiA9D8wdG2W6Lkq5CpubxOAqTmd2et9hsinTJtsMQ@mail.gmail.com
LZ4 compression can be a lot faster than gzip compression, so users
may prefer it even if the compression ratio is not as good. We will
want pg_basebackup to support LZ4 compression and decompression on the
client side as well, and there is a pending patch for that, but it's
by a different author, so I am committing this part separately for
that reason.
Jeevan Ladhe, reviewed by Tushar Ahuja and by me.
Discussion: http://postgr.es/m/CANm22Cg9cArXEaYgHVZhCnzPLfqXCZLAzjwTq7Fc0quXRPfbxA@mail.gmail.com
This extends logical decoding to also decode sequence increments.
We differentiate between sequences created in the current (in-progress)
transaction, and sequences created earlier. This mixed behavior is
necessary because while sequences are not transactional (increments are
not subject to ROLLBACK), relfilenode changes are. So we do this:
* Changes for sequences created in the same top-level transaction are
treated as transactional, i.e. just like any other change from that
transaction, and discarded in case of a rollback.
* Changes for sequences created earlier are applied immediately, as if
performed outside any transaction. This applies also after ALTER
SEQUENCE, which may create a new relfilenode.
Moreover, if we ever get support for DDL replication, the sequence
won't exist until the transaction gets applied.
Sequences created in the current transaction are tracked in a simple
hash table, identified by a relfilenode. That means a sequence may
already exist, but if a transaction does ALTER SEQUENCE then the
increments for the new relfilenode will be treated as transactional.
For each relfilenode we track the XID of (sub)transaction that created
it, which is needed for cleanup at transaction end. We don't need to
check the XID to decide if an increment is transactional - if we find a
match in the hash table, it has to be the same transaction.
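An illustrative sketch of that decision (names here are hypothetical,
not the actual decoding routines):

    bool transactional = (hash_search(created_sequences, &rnode,
                                      HASH_FIND, NULL) != NULL);

    if (transactional)
        queue_sequence_change(txn, change);   /* discarded on rollback */
    else
        apply_sequence_change(change);        /* applied immediately */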
This requires two minor changes to WAL-logging. Firstly, we need to
ensure the sequence record has a valid XID - until now the increment
might have XID 0 if it was the first change in a subxact. But the
sequence might have been created in the same top-level transaction. So
we ensure the XID is assigned when WAL-logging increments.
The other change is the addition of a "created" flag, marking increments for
newly created relfilenodes. This makes it easier to maintain the hash
table of sequences that need transactional handling.
Note: This is needed because of subxacts. An increment with XID 0 might
still be for a sequence created in a different subxact of the same
top-level xact.
This does not include any changes to test_decoding and/or the built-in
replication - those will be committed in separate patches.
A patch adding decoding of sequences was originally submitted by Cary
Huang. This commit reworks various important aspects (e.g. the WAL
logging and transactional/non-transactional handling). However, the
original patch and reviews were very useful.
Author: Tomas Vondra, Cary Huang
Reviewed-by: Peter Eisentraut, Hannu Krosing, Andres Freund
Discussion: https://postgr.es/m/d045f3c2-6cfb-06d3-5540-e63c320df8bc@enterprisedb.com
Discussion: https://postgr.es/m/1710ed7e13b.cd7177461430746.3372264562543607781@highgo.ca
Commit cc333f32336f5146b75190f57ef587dff225f565 added a new COPY
sub-protocol for taking base backups, but retained support for the
previous protocol. For the same reasons articulated in the message
for commit 9cd28c2e5f11dfeef64a14035b82e70acead65fd, remove support
for the previous protocol from the server.
Discussion: http://postgr.es/m/CA+TgmoazKcKUWtqVa0xZqSzbKgTH+X-aw4V7GyLD68EpDLMh8A@mail.gmail.com
Commit 0ba281cb4bf9f5f65529dfa4c8282abb734dd454 added a new syntax
for the BASE_BACKUP command, with extensible options, but maintained
support for the legacy syntax. This isn't important for PostgreSQL,
where pg_basebackup works with older server versions but not newer
ones, but it could in theory matter for out-of-core users of the
replication protocol.
Discussion on pgsql-hackers, however, suggests that no one is aware
of any out-of-core use of the BASE_BACKUP command, and the consensus
is in favor of removing support for the old syntax to simplify the
code, so do that.
Discussion: http://postgr.es/m/CA+TgmoazKcKUWtqVa0xZqSzbKgTH+X-aw4V7GyLD68EpDLMh8A@mail.gmail.com
Historically, the location of any files generated by pg_upgrade, such as
the per-database logs and internal dumps, has been the current working
directory, leaving all those files behind when using --retain or on a
failure.
Putting all those contents in a targeted subdirectory makes the whole
thing easier to debug, and simplifies the code in charge of cleaning up the
logs. Note that another reason is that this facilitates the move of
pg_upgrade to TAP with a fixed location for all the logs to grab if the
test fails repeatedly.
Initially, we thought about being able to specify the output directory
with a new option, but we have settled on using a subdirectory located
at the root of the new cluster's data folder, "pg_upgrade_output.d",
instead, as at the end the new data directory is the location of all the
data generated by pg_upgrade. There is a catch with group permissions
here though: if the new data folder has been initialized with this
option, we need to create all the files and paths with the correct
permissions or a base backup taken after a pg_upgrade --retain would
fail, meaning that GetDataDirectoryCreatePerm() has to be called before
creating the log paths, before a couple of sanity checks on the clusters
and before getting the socket directory for the cluster's host settings.
The idea of the new location is based on a suggestion from Peter
Eisentraut.
Also thanks to Andrew Dunstan, Peter Eisentraut, Daniel Gustafsson, Tom
Lane and Bruce Momjian for the discussion (in alphabetical order).
Author: Justin Pryzby
Discussion: https://postgr.es/m/20211212025017.GN17618@telsasoft.com
A very well-informed user might deduce this from what we said already,
but I'd bet against it. Lay it out explicitly.
While here, rewrite the comment about tuple routing to be more
intelligible to an average SQL user.
Per bug #17395 from Alexander Lakhin. Back-patch to v11. (The text
in this area is different in v10 and I'm not sufficiently excited
about this point to adapt the patch.)
Discussion: https://postgr.es/m/17395-8c326292078d1a57@postgresql.org
Running a shell command for each file to be archived has a lot of
overhead and may not offer as much error checking as you want, or the
exact semantics that you want. So, offer the option to call a loadable
module for each file to be archived, rather than running a shell command.
Also, add a 'basic_archive' contrib module as an example implementation
that archives to a local directory.
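For example, to use the new module (the directory is illustrative):

    # postgresql.conf
    archive_mode = on
    archive_library = 'basic_archive'
    basic_archive.archive_directory = '/mnt/server/archivedir'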
Nathan Bossart, with a little bit of kibitzing by me.
Discussion: http://postgr.es/m/20220202224433.GA1036711@nathanxps13
The SQL standard has been ambiguous about whether null values in
unique constraints should be considered equal or not. Different
implementations have different behaviors. In the SQL:202x draft, this
has been formalized by making this implementation-defined and adding
an option on unique constraint definitions UNIQUE [ NULLS [NOT]
DISTINCT ] to choose a behavior explicitly.
This patch adds this option to PostgreSQL. The default behavior
remains UNIQUE NULLS DISTINCT. Making this happen in the btree code
is pretty easy; most of the patch is just to carry the flag around to
all the places that need it.
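For example (table and index names are illustrative):

    CREATE TABLE t (a int UNIQUE NULLS NOT DISTINCT);

    -- or via the index syntax extension:
    CREATE UNIQUE INDEX t_a_idx ON t (a) NULLS NOT DISTINCT;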
The CREATE UNIQUE INDEX syntax extension is not from the standard,
it's my own invention.
I named all the internal flags, catalog columns, etc. in the negative
("nulls not distinct") so that the default PostgreSQL behavior is the
default if the flag is false.
Reviewed-by: Maxim Orlov <orlovmg@gmail.com>
Reviewed-by: Pavel Borisov <pashkin.elfe@gmail.com>
Discussion: https://www.postgresql.org/message-id/flat/84e5ee1b-387e-9a54-c326-9082674bde78@enterprisedb.com
Some cleanup for commit 54637508f87bd5f07fb9406bac6b08240283be3b:
Reformat pg_database.dat to reflect the new field order. Also update
the corresponding example in bki.sgml. Reorder the way the fields are
filled in dbcommands.c to correspond to the new order.