It wasn't all that clear which lock levels, if any, would be held on the
DEFAULT partition during an ATTACH PARTITION operation.
Also, clarify which locks will be taken if the DEFAULT partition or the
table being attached are themselves partitioned tables.
Here I'm only backpatching to v12, as before that version we obtained an
ACCESS EXCLUSIVE lock on the partitioned table. It seems much less
relevant to mention which locks are taken on other tables when the
partitioned table itself is locked with ACCESS EXCLUSIVE.
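As a hedged SQL sketch (table names invented for illustration), the case this
documentation change covers looks like this:

    -- The DEFAULT partition must also be locked during ATTACH PARTITION,
    -- since it has to be scanned to verify that it contains no rows that
    -- would belong to the incoming partition.
    CREATE TABLE measurement (logdate date) PARTITION BY RANGE (logdate);
    CREATE TABLE measurement_default PARTITION OF measurement DEFAULT;
    CREATE TABLE measurement_y2021 (LIKE measurement);

    ALTER TABLE measurement ATTACH PARTITION measurement_y2021
        FOR VALUES FROM ('2021-01-01') TO ('2022-01-01');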
Author: Matthias van de Meent, David Rowley
Discussion: https://postgr.es/m/CAEze2WiTB6iwrV8W_J=fnrnZ7fowW3qu-8iQ8zCHP3FiQ6+o-A@mail.gmail.com
Backpatch-through: 12
The logic used to support a change of access method for a table is
similar to that used for changing the tablespace or relation persistence,
requiring a table rewrite with an exclusive lock on the relation being
changed. Table rewrites done in ALTER TABLE already go through the table
AM layer when scanning tuples from the old relation and inserting them
into the new one, making this implementation straightforward.
Note that partitioned tables are not supported as these have no access
methods defined.
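A minimal SQL sketch of the new command; the "heap2" access method is a
hypothetical stand-in for any installed table AM:

    CREATE TABLE tab_am (a int) USING heap;
    ALTER TABLE tab_am SET ACCESS METHOD heap2;  -- rewrites the table under an exclusive lock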
Author: Justin Pryzby, Jeff Davis
Reviewed-by: Michael Paquier, Vignesh C
Discussion: https://postgr.es/m/20210228222530.GD20769@telsasoft.com
The error messages using the word "non-negative" are confusing
because it is ambiguous whether zero is accepted or not.
This commit improves those error messages by replacing that word with
less ambiguous wording such as "greater than zero" or
"greater than or equal to zero".
This commit also adds a note about the word "non-negative" to the
error message style guide, to help with writing new error messages.
Previously, when the postgres_fdw option fetch_size was set to zero, the
error message "fetch_size requires a non-negative integer value" was
reported, which is outright wrong since zero is non-negative. Therefore,
back-patch to all supported versions where such a buggy error message
could be thrown.
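A hedged example of the problematic case (the foreign server name is made up);
zero is a non-negative integer, so the old message contradicted itself:

    ALTER SERVER myserver OPTIONS (fetch_size '0');
    -- before: ERROR:  fetch_size requires a non-negative integer value
    -- now the wording follows the style guide, e.g. "greater than zero"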
Reported-by: Hou Zhijie
Author: Bharath Rupireddy
Reviewed-by: Kyotaro Horiguchi, Fujii Masao
Discussion: https://postgr.es/m/OS0PR01MB5716415335A06B489F1B3A8194569@OS0PR01MB5716.jpnprd01.prod.outlook.com
Add pg_resetxlog -u option to set the oldest xid in pg_control.
Previously, -x set this value to be 2 billion less than the -x value.
However, this causes the server to immediately scan all relations'
relfrozenxid so it can advance pg_control's oldest xid to be inside the
autovacuum_freeze_max_age range, which is inefficient and might disrupt
diagnostic recovery. pg_upgrade will use this option so the new cluster
better matches the old cluster.
Reported-by: Jason Harvey, Floris Van Nee
Discussion: https://postgr.es/m/20190615183759.GB239428@rfd.leadboat.com, 87da83168c644fd9aae38f546cc70295@opammb0562.comp.optiver.com
Author: Bertrand Drouvot
Backpatch-through: 9.6
Formerly, when specifying NUMERIC(precision, scale), the scale had to
be in the range [0, precision], which was per SQL spec. This commit
extends the range of allowed scales to [-1000, 1000], independent of
the precision (whose valid range remains [1, 1000]).
A negative scale implies rounding before the decimal point. For
example, a column might be declared with a scale of -3 to round values
to the nearest thousand. Note that the display scale remains
non-negative, so in this case the display scale will be zero, and all
digits before the decimal point will be displayed.
A scale greater than the precision supports fractional values with
zeros immediately after the decimal point.
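Illustrative examples (not taken from the commit) of the extended scale range:

    -- A negative scale rounds before the decimal point; display scale is zero.
    SELECT 1234567::numeric(5,-3);    -- 1235000
    -- A scale greater than the precision allows small fractional values.
    SELECT 0.0012345::numeric(3,5);   -- 0.00123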
Take the opportunity to tidy up the code that packs, unpacks and
validates the contents of a typmod integer, encapsulating it in a
small set of new inline functions.
Bump the catversion because the allowed contents of atttypmod have
changed for numeric columns. This isn't a change that requires a
re-initdb, but negative scale values in the typmod would confuse old
backends.
Dean Rasheed, with additional improvements by Tom Lane. Reviewed by
Tom Lane.
Discussion: https://postgr.es/m/CAEZATCWdNLgpKihmURF8nfofP0RFtAKJ7ktY6GcZOPnMfUoRqA@mail.gmail.com
The documentation mentioned the use of log_checkpoints, which cannot be
used in this context. This commit replaces log_checkpoints with
force_parallel_mode, a developer option useful for performing checks
related to parallelism.
Oversight in 854434c.
Author: Haiying Tang
Discussion: https://postgr.es/m/OS0PR01MB6113954B883ACEB2DDC973F2FBE59@OS0PR01MB6113.jpnprd01.prod.outlook.com
Backpatch-through: 14
Renaming triggers on partitioned tables had two problems: first,
it did not recurse to renaming the triggers on the partitions; and
second, it failed to prohibit renaming clone triggers. Having triggers
with different names in partitions is pointless, and furthermore pg_dump
would not preserve names for partitions anyway.
Not backpatched -- making the ALTER TRIGGER throw an error in stable
versions might cause problems for existing scripts.
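A hedged illustration (object names made up) of the now-recursive behavior:

    -- Renaming the trigger on the partitioned parent also renames the
    -- corresponding cloned triggers on its partitions.
    ALTER TRIGGER trg_audit ON parent_tbl RENAME TO trg_audit_new;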
Co-authored-by: Arne Roland <A.Roland@index.de>
Co-authored-by: Álvaro Herrera <alvherre@alvh.no-ip.org>
Reviewed-by: Zhihong Yu <zyu@yugabyte.com>
Discussion: https://postgr.es/m/d0fd7040c2fb4de1a111b9d9ccc456b8@index.de
Dept. of second thoughts: the new verbiage added in commit aaec237b1a2f
is targeted at the wrong audience. Remove the bits about git and talk
about how to get tarballs only. People looking for the git repo can
look in the appendix. That'll need to be expanded, but this commit
doesn't do that.
In passing, fix a couple of typos that snuck in with the previous
commit.
Discussion: https://postgr.es/m/713760.1626891263@sss.pgh.pa.us
It has been spotted that multiranges lack the ability to be decomposed
into individual ranges. Subscripting and a proper expanded object
representation require substantial work, and it's too late for v14. This
commit provides an implementation of unnest(multirange), which is quite
trivial. unnest(multirange) is defined as a polymorphic function.
Catversion is bumped.
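Example usage (illustrative):

    SELECT unnest('{[1,3), [7,10)}'::int4multirange);
    --  unnest
    -- --------
    --  [1,3)
    --  [7,10)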
Reported-by: Jonathan S. Katz
Discussion: https://postgr.es/m/flat/60258efe-bd7e-4886-82e1-196e0cac5433%40postgresql.org
Author: Alexander Korotkov
Reviewed-by: Justin Pryzby, Jonathan S. Katz, Zhihong Yu, Tom Lane
Reviewed-by: Alvaro Herrera
We had documentation of default_transaction_isolation et al,
but for some reason not of transaction_isolation et al.
AFAICS this is just an ancient oversight, so repair.
Per bug #17077 from Yanliang Lei.
Discussion: https://postgr.es/m/17077-ade8e166a01e1374@postgresql.org
Properly mark up function parameters in text with <parameter>, avoid
using <acronym> for terms which aren't acronyms, and properly place
the ", and" in a value list. The acronym removal is a follow-up to commit
fb72a7b8c3, which removed it for minmax-multi. In passing, also fix an
incorrectly cased word.
Author: Ekaterina Kiryanova <e.kiryanova@postgrespro.ru>
Reviewed-by: Laurenz Albe <laurenz.albe@cybertec.at>
Discussion: https://postgr.es/m/c050ecbc-80b2-b360-3c1d-9fe6a6a11bb5@postgrespro.ru
Backpatch-through: v14
As of v14, pg_depend contains almost 7000 "pin" entries recording
the OIDs of built-in objects. This is a fair amount of bloat for
every database, and it adds time to pg_depend lookups as well as
initdb. We can get rid of all of those entries in favor of an OID
range check, i.e. "OIDs below FirstUnpinnedObjectId are pinned".
(template1 and the public schema are exceptions. Those exceptions
are now wired into IsPinnedObject() instead of initdb's code for
filling pg_depend, but it's the same amount of cruft either way.)
The contents of pg_shdepend are modified likewise.
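As a hedged sanity check of the effect, no explicit pin rows remain in either
catalog after this change:

    SELECT count(*) FROM pg_depend   WHERE deptype = 'p';   -- 0
    SELECT count(*) FROM pg_shdepend WHERE deptype = 'p';   -- 0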
Discussion: https://postgr.es/m/3737988.1618451008@sss.pgh.pa.us
To add support for streaming transactions at prepare time into the
built-in logical replication, we need to do the following things:
* Modify the output plugin (pgoutput) to implement the new two-phase API
callbacks, by leveraging the extended replication protocol.
* Modify the replication apply worker, to properly handle two-phase
transactions by replaying them on prepare.
* Add a new SUBSCRIPTION option "two_phase" to allow users to enable
two-phase transactions. two_phase is enabled only once the initial data
sync is over.
However, we must explicitly disable replication of two-phase transactions
during replication slot creation, even if the plugin supports it. We
don't need to replicate the changes accumulated during this phase,
and moreover, we don't have a replication connection open, so we don't
know where to send the data anyway.
The streaming option is not yet allowed together with this new two_phase
option; that can be done as a separate patch.
We don't allow toggling the two_phase option of a subscription because it
can lead to an inconsistent replica. For the same reason, we don't allow
refreshing the publication once two_phase is enabled for a subscription,
unless the copy_data option is false.
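A hedged example of enabling the new option (connection string and object
names invented):

    CREATE SUBSCRIPTION mysub
        CONNECTION 'host=publisher port=5432 dbname=postgres'
        PUBLICATION mypub
        WITH (two_phase = true);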
Author: Peter Smith, Ajin Cherian and Amit Kapila based on previous work by Nikhil Sontakke and Stas Kelvich
Reviewed-by: Amit Kapila, Sawada Masahiko, Vignesh C, Dilip Kumar, Takamichi Osumi, Greg Nancarrow
Tested-By: Haiying Tang
Discussion: https://postgr.es/m/02DA5F5E-CECE-4D9C-8B4B-418077E2C010@postgrespro.ru
Discussion: https://postgr.es/m/CAA4eK1+opiV4aFTmWWUF9h_32=HfPOW9vZASHarT0UA5oBrtGw@mail.gmail.com
"Result Cache" was never a great name for this node, but nobody managed
to come up with another name that anyone liked enough. That was until
David Johnston mentioned "Node Memoization", which Tom Lane revised to
just "Memoize". People seem to like "Memoize", so let's do the rename.
Reviewed-by: Justin Pryzby
Discussion: https://postgr.es/m/20210708165145.GG1176@momjian.us
Backpatch-through: 14, where Result Cache was introduced
The name introduced by commit 4656e3d66 was agreed to be unreasonably
long. To match this change, rename initdb's recently-added
--clobber-cache option to --discard-caches.
Discussion: https://postgr.es/m/1374320.1625430433@sss.pgh.pa.us
Allow a pager to be used by the \watch command. This works but isn't
very useful with traditional pagers like "less", so a separate
environment variable, PSQL_WATCH_PAGER, is used rather than the regular
pager setting. The popular open source tool "pspg" (also by Pavel) knows
how to display the output if you set PSQL_WATCH_PAGER="pspg --stream".
To make \watch react quickly when the user quits the pager or presses
^C, and also to increase the accuracy of its timing and decrease the
rate of useless context switches, change the main loop of the \watch
command to use sigwait() rather than a sleeping/polling loop, on Unix.
Supported on Unix only for now (like pspg).
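A minimal sketch of how this might be used, assuming pspg is installed; the
environment variable can equally be exported from the shell before starting
psql:

    \setenv PSQL_WATCH_PAGER 'pspg --stream'
    SELECT now(), count(*) FROM pg_stat_activity;
    \watch 2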
Author: Pavel Stehule <pavel.stehule@gmail.com>
Author: Thomas Munro <thomas.munro@gmail.com>
Discussion: https://postgr.es/m/CAFj8pRBfzUUPz-3gN5oAzto9SDuRSq-TQPfXU_P6h0L7hO%2BEhg%40mail.gmail.com
When sending queries in pipeline mode, we were careless about leaving
the connection in the right state so that PQgetResult would behave
correctly; trying to read further results after sending a query, after
having read a result with an error, would sometimes hang. Fix by
ensuring that internal libpq state is changed properly. All the state
changes were being done by the callers of pqAppendCmdQueueEntry(); it
would have become too repetitious to have this logic in each of them, so
instead put it all in that function and relieve callers of the
responsibility.
Add a test to verify this case. Without the code fix, this new test
hangs sometimes.
Also, document that PQisBusy() returns false when no query is awaiting a
result. This is not intuitively obvious, since calling PQgetResult() at
that point returns NULL, which can be confusing.
Wording by Boris Kolpackov.
In passing, fix bogus use of "false" to mean "0", per Ranier Vilela.
Backpatch to 14.
Author: Álvaro Herrera <alvherre@alvh.no-ip.org>
Reported-by: Boris Kolpackov <boris@codesynthesis.com>
Discussion: https://postgr.es/m/boris.20210624103805@codesynthesis.com
This commit fixes wrong wording like "a fewer kinds"
in the description of the track_planning option.
Back-patch to v13 where pg_stat_statements.track_planning was added.
Author: Justin Pryzby
Reviewed-by: Julien Rouhaud, Fujii Masao
Discussion: https://postgr.es/m/20210418233615.GB7256@telsasoft.com
Previously, values such as '100$%$#$#' or '9,223,372,' were accepted and
treated as valid integers for the postgres_fdw options batch_size and
fetch_size, whereas the fdw_startup_cost and fdw_tuple_cost options throw
an error for such values. This was because endptr was not checked when
converting the strings to integers using strtol.
This commit changes the logic to use the parse_int function instead of
strtol, as it serves the purpose by returning false if it is unable to
convert the string to an integer. Note that this function also rounds
off values such as '100.456' to 100 and '100.567' or '100.678' to 101.
While at it, use parse_real for the fdw_startup_cost and fdw_tuple_cost
options.
Since parse_int and parse_real are already used for reloptions and GUCs,
it is more appropriate to use them in postgres_fdw as well, rather than
calling strtol and strtod directly.
Back-patch to v14.
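Hedged illustrations of the new behavior (the foreign server name is made up,
and each statement stands on its own):

    -- Trailing garbage is now rejected instead of being silently ignored.
    ALTER SERVER loopback OPTIONS (ADD fetch_size '100$%$#$#');  -- now errors out
    -- Fractional values are rounded the same way parse_int rounds GUC values.
    ALTER SERVER loopback OPTIONS (ADD fetch_size '100.456');    -- treated as 100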
Author: Bharath Rupireddy
Reviewed-by: Ashutosh Bapat, Tom Lane, Kyotaro Horiguchi, Fujii Masao
Discussion: https://postgr.es/m/CALj2ACVMO6wY5Pc4oe1OCgUOAtdjHuFsBDw8R5uoYR86eWFQDA@mail.gmail.com
Previously, all CustomScan providers had to support projections,
but there may be cases where this is inconvenient. Add a flag
bit to say if it's supported.
Important item for the release notes: this is non-backwards-compatible
since the default is now to assume that CustomScan providers can't
project, instead of assuming that they can. It's fail-soft, but could
result in visible performance penalties due to adding unnecessary
Result nodes.
Sven Klemm, reviewed by Aleksander Alekseev; some cosmetic fiddling
by me.
Discussion: https://postgr.es/m/CAMCrgp1kyakOz6c8aKhNDJXjhQ1dEjEnp+6KNT3KxPrjNtsrDg@mail.gmail.com
This has the advantage of making a process more responsive when the
postmaster dies, even if the wait time was rather limited, as there was
only a 50ms timeout here. Another advantage of this change is for
monitoring, as we gain a new wait event for the end-of-vacuum
truncation.
Author: Bharath Rupireddy
Reviewed-by: Aleksander Alekseev, Thomas Munro, Michael Paquier
Discussion: https://postgr.es/m/CALj2ACU4AdPCq6NLfcA-ZGwX7pPCK5FgEj-CAU0xCKzkASSy_A@mail.gmail.com
These are the same as world and install-world respectively, but without
building or installing the documentation. There are many reasons for
wanting to be able to do this, including speed, lack of documentation
building tools, and wanting to build other formats of the documentation.
Plans for simplifying the buildfarm client code include using these
targets.
Backpatch to all live branches.
Discussion: https://postgr.es/m/6a421136-d462-b043-a8eb-e75b2861f3df@dunslane.net
Commit 4656e3d66 replaced the "#define CLOBBER_CACHE_ALWAYS"
testing mechanism with a GUC, which has been a great help for
doing cache-clobber testing in more efficient ways; but there
is a gap in the implementation. The only way to do cache-clobber
testing during an initdb run is to use the old method with #define,
because one can't set the GUC from outside. Improve this by
adding a switch to initdb for the purpose.
(Perhaps someday we should let initdb pass through arbitrary
"-c NAME=VALUE" switches. Quoting difficulties dissuaded me
from attempting that right now, though.)
Back-patch to v14 where 4656e3d66 came in.
Discussion: https://postgr.es/m/1582507.1624227029@sss.pgh.pa.us
Previously the descriptions of the tup_returned and tup_fetched columns
in the pg_stat_database view were confusing. This commit improves them
so that they more accurately reflect the following formulas for those
columns.
* pg_stat_database.tup_returned
    = sum(pg_stat_all_tables.seq_tup_read)
      + sum(pg_stat_all_indexes.idx_tup_read)
* pg_stat_database.tup_fetched
    = sum(pg_stat_all_tables.idx_tup_fetch)
In these formulas, note that the counters for shared system catalogs
like pg_database, which are shared across all databases of a cluster,
are excluded from the sums.
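As a hedged illustration, the documented formulas can be approximated for the
current database with queries like the following (the match will not be exact,
notably because of the shared catalogs mentioned above):

    SELECT tup_returned, tup_fetched
    FROM pg_stat_database
    WHERE datname = current_database();

    SELECT (SELECT sum(seq_tup_read) FROM pg_stat_all_tables)
         + (SELECT sum(idx_tup_read) FROM pg_stat_all_indexes) AS approx_tup_returned,
           (SELECT sum(idx_tup_fetch) FROM pg_stat_all_tables) AS approx_tup_fetched;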
Author: Masahiro Ikeda
Reviewed-by: Fujii Masao
Discussion: https://postgr.es/m/9eeeccdb-5dd7-90f9-2807-a4b5d2b76ca3@oss.nttdata.com
Extend the replication command CREATE_REPLICATION_SLOT to support the
TWO_PHASE option. This will allow decoding commands like PREPARE
TRANSACTION, COMMIT PREPARED and ROLLBACK PREPARED for slots created with
this option. The decoding of such transactions happens at prepare time.
This patch also adds support for two-phase decoding in pg_recvlogical
via a new option, --two-phase.
This option will also be used by future patches that allow streaming of
transactions at prepare time for built-in logical replication. With this,
out-of-core logical replication solutions can enable replication of
two-phase transactions via the replication protocol.
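A hedged sketch of the extended command as it might be issued on a logical
replication connection (slot name and output plugin chosen for illustration):

    -- On a walsender connection ("replication=database" in the connection string):
    CREATE_REPLICATION_SLOT myslot LOGICAL test_decoding TWO_PHASE;
    -- Roughly equivalent via the client tool extended here:
    --   pg_recvlogical -d postgres --slot=myslot --create-slot --two-phase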
Author: Ajin Cherian
Reviewed-By: Jeff Davis, Vignesh C, Amit Kapila
Discussion: https://postgr.es/m/02DA5F5E-CECE-4D9C-8B4B-418077E2C010@postgrespro.ru
Discussion: https://postgr.es/m/64b9f783c6e125f18f88fbc0c0234e34e71d8639.camel@j-davis.com
This commit prevents pg_checksums from rewriting a block when it has no
need to, in the case where the computed checksum matches what is already
stored in the block read. This is helpful to accelerate successive runs
of the tool when previous ones got interrupted, for example.
The number of blocks and files written is tracked and reported by the
tool once finished. Note that the final flush of the data folder
happens even if no blocks are written, as it could be possible that a
previous interrupted run got stopped while doing a flush.
Author: Greg Sabino Mullane
Reviewed-by: Michael Paquier, Julien Rouhaud
Discussion: https://postgr.es/m/CAKAnmmL+k6goxmVzQJB+0bAR0PN1sgo6GDUXJhyhUmVMze1QAw@mail.gmail.com
This new libpq function allows the application to send an 'H' message,
which instructs the server to flush its outgoing buffer.
This hasn't been needed so far because the Sync message already requests
a flush; and I failed to realize that this was needed in pipeline mode
because PQpipelineSync also causes the buffer to be flushed. However,
sometimes it is useful to request a flush without establishing a
synchronization point.
Backpatch to 14, where pipeline mode was introduced in libpq.
Reported-by: Boris Kolpackov <boris@codesynthesis.com>
Author: Álvaro Herrera <alvherre@alvh.no-ip.org>
Discussion: https://postgr.es/m/202106252350.t76x73nt643j@alvherre.pgsql
The logic is implemented so that there can be a choice of compression
used when building a WAL record, and an extra per-record bit is used to
track whether a block is compressed with PGLZ, LZ4 or nothing.
wal_compression, the existing parameter, is changed to an enum with
support for the following backward-compatible values:
- "off", the default, to not use compression.
- "pglz" or "on", to compress FPWs with PGLZ.
- "lz4", the new mode, to compress FPWs with LZ4.
Benchmarking has shown that LZ4 easily outclasses PGLZ. ZSTD would also
be an interesting choice, but going just with LZ4 for now keeps the
patch minimal, as TOAST compression is already able to use LZ4, so there
is no need to worry about any build-related requirements for this
implementation.
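A minimal configuration sketch; this assumes the server was built with LZ4
support (e.g. --with-lz4):

    ALTER SYSTEM SET wal_compression = 'lz4';
    SELECT pg_reload_conf();
    SHOW wal_compression;   -- lz4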
Author: Andrey Borodin, Justin Pryzby
Reviewed-by: Dilip Kumar, Michael Paquier
Discussion: https://postgr.es/m/3037310D-ECB7-4BF1-AF20-01C10BB33A33@yandex-team.ru