gist.sgml and xindex.sgml hadn't been fully updated for the
addition of a sortsupport support function (commit 16fa9b2b3).
xindex.sgml also missed that the compress and decompress support
functions are optional, an apparently far older oversight.
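For reference, a minimal sketch of the material the corrected xindex.sgml
covers; my_type and the my_* functions are placeholders, not objects that
actually exist:

    -- GiST support functions 3 (compress), 4 (decompress) and
    -- 11 (sortsupport) are optional; this registers only the required
    -- ones plus the optional sortsupport function.
    CREATE OPERATOR CLASS my_gist_ops
        DEFAULT FOR TYPE my_type USING gist AS
            OPERATOR  1  &&,
            FUNCTION  1  my_consistent(internal, my_type, smallint, oid, internal),
            FUNCTION  2  my_union(internal, internal),
            FUNCTION  5  my_penalty(internal, internal, internal),
            FUNCTION  6  my_picksplit(internal, internal),
            FUNCTION  7  my_same(my_type, my_type, internal),
            FUNCTION 11  my_sortsupport(internal);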
In passing, fix gratuitous inconsistencies in wording and
capitalization.
Noted by E. Rogov. Back-patch to v14; the residual issues
before that aren't significant enough to bother with.
Discussion: https://postgr.es/m/163335322905.12519.5711557029494638051@wrigleys.postgresql.org
Prior to v14, we insisted that the query in RETURN QUERY be of a type
that returns tuples. (For instance, INSERT RETURNING was allowed,
but not plain INSERT.) That happened indirectly because we opened a
cursor for the query, so spi.c checked SPI_is_cursor_plan(). As a
consequence, the error message wasn't terribly on-point, but at least
it was there.
Commit 2f48ede08 lost this detail. Instead, plain RETURN QUERY
insisted that the query be a SELECT (by checking for SPI_OK_SELECT)
while RETURN QUERY EXECUTE failed to check the query type at all.
Neither of these changes was intended.
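For illustration, a minimal PL/pgSQL sketch of the intended rule (tab and
f are hypothetical names):

    CREATE TABLE tab (id int);
    CREATE FUNCTION f() RETURNS SETOF int
    LANGUAGE plpgsql AS $$
    BEGIN
        -- allowed: the query returns tuples
        RETURN QUERY INSERT INTO tab VALUES (1) RETURNING id;
        -- should be rejected: a plain INSERT returns no tuples
        -- RETURN QUERY INSERT INTO tab VALUES (2);
    END;
    $$;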
The only convenient place to check this in the EXECUTE case is inside
_SPI_execute_plan, because we haven't done parse analysis until then.
So we need to pass down a flag saying whether to enforce that the
query returns tuples. Fortunately, we can squeeze another boolean
into struct SPIExecuteOptions without an ABI break, since there's
padding space there. (It's unlikely that any extensions would
already be using this new struct, but preserving ABI in v14 seems
like a smart idea anyway.)
Within spi.c, it seemed like _SPI_execute_plan's parameter list
was already ridiculously long, and I didn't want to make it longer.
So I chose to pass SPIExecuteOptions down as-is, allowing that
parameter list to become much shorter. This makes the patch a bit
more invasive than it might otherwise be, but it's all internal to
spi.c, so that seems fine.
Per report from Marc Bachmann. Back-patch to v14 where the
faulty code came in.
Discussion: https://postgr.es/m/1F2F75F0-27DF-406F-848D-8B50C7EEF06A@gmail.com
Commit 9868167500 added the pg_stat_replication_slots view to monitor
ReorderBuffer stats but mistakenly documented it under the
"Dynamic Statistics Views" section whereas it belongs in the
"Collected Statistics Views" section.
Author: Amit Kapila
Reviewed-by: Masahiko Sawada
Backpatch-through: 14, where it was introduced
Discussion: https://postgr.es/m/CAA4eK1Kb5ur=OC-G4cAsqPOjoVe+S8LNw1WmUY8Owasjk8o5WQ@mail.gmail.com
Be a little more vocal about the risks of remote collations not
matching local ones. Actually fixing these risks seems hard,
and I've given up on the idea that it might be back-patchable.
So the best we can do for the back branches is add documentation.
Per discussion of bug #16583 from Jiří Fejfar.
Discussion: https://postgr.es/m/2438715.1632510693@sss.pgh.pa.us
OpenSSL 3 introduced the concept of providers to support modularization,
and moved the outdated ciphers to the new legacy provider. In case it's
not loaded in the user's openssl.cnf file there will be a lot of
regression test failures, so add alternative outputs covering those.
Also document the need to load the legacy provider in order to use older
ciphers with OpenSSL-enabled pgcrypto.
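For example (a sketch only; the exact error text may vary), an older
cipher such as Blowfish can fail in pgcrypto unless the legacy provider
is activated in openssl.cnf:

    CREATE EXTENSION IF NOT EXISTS pgcrypto;
    -- 'bf' (Blowfish) is among the ciphers OpenSSL 3 moved to the
    -- legacy provider; without it loaded, this errors out at cipher
    -- initialization.
    SELECT encrypt('secret'::bytea, 'key'::bytea, 'bf');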
This will be backpatched to all supported versions once there is
sufficient testing in the buildfarm of OpenSSL 3.
Reviewed-by: Michael Paquier
Discussion: https://postgr.es/m/FEF81714-D479-4512-839B-C769D2605F8A@yesql.se
Index vacuums may happen multiple times depending on the number of dead
tuples stored, which is bounded by maintenance_work_mem for a manual
VACUUM. For autovacuum, this is controlled by autovacuum_work_mem
instead, if set. The documentation mentioned the former, but not the
latter, in the context of autovacuum.
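For instance (arbitrary values, shown only to illustrate how the two
settings relate):

    -- maintenance_work_mem bounds dead-tuple storage for manual VACUUM;
    -- autovacuum_work_mem, when not -1, bounds it for autovacuum workers.
    ALTER SYSTEM SET maintenance_work_mem = '512MB';
    ALTER SYSTEM SET autovacuum_work_mem = '256MB';
    SELECT pg_reload_conf();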
Reported-by: Nikolai Berkoff
Author: Laurenz Albe, Euler Taveira
Discussion: https://postgr.es/m/161545365522.10134.12195402324485546870@wrigleys.postgresql.org
Backpatch-through: 9.6
Improve various item descriptions. Rearrange some things into
(IMO) more logical order. Fix missing markup and dubious
choices of link destinations. Drop a couple of items that
were later back-patched or otherwise don't seem to need
to be documented here.
The available refresh options are specified as refresh_options under
REFRESH PUBLICATION, and DROP PUBLICATION itself has an option named
refresh. Clarify what we mean by refresh options to avoid confusion.
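For instance (hypothetical subscription and publication names):

    -- "refresh" is an option of the ADD/DROP PUBLICATION command itself ...
    ALTER SUBSCRIPTION mysub DROP PUBLICATION mypub WITH (refresh = false);
    -- ... while refresh_options such as copy_data belong to the refresh
    -- step, as described under REFRESH PUBLICATION.
    ALTER SUBSCRIPTION mysub REFRESH PUBLICATION WITH (copy_data = false);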
Backpatch through v14 where ALTER SUBSCRIPTION ... DROP PUBLICATION
was introduced.
Author: Masahiko Sawada <sawada.mshk@gmail.com>
Reviewed-by: Amit Kapila <amit.kapila16@gmail.com>
Reviewed-by: Peter Eisentraut <peter.eisentraut@enterprisedb.com>
Reviewed-by: Peter Smith <smithpb2250@gmail.com>
Discussion: https://postgr.es/m/CAD21AoCm1wJ3A8Q9EmBjRbShYkJ+o+Oa_z9O0hvwhvhUa2BSyg@mail.gmail.com
Backpatch-through: 14
Formerly, we sent signals for outgoing NOTIFY messages within
ProcessCompletedNotifies, which was also responsible for sending
relevant ones of those messages to our connected client. It therefore
had to run during the main-loop processing that occurs just before
going idle. This arrangement had two big disadvantages:
* Now that procedures allow intra-command COMMITs, it would be
useful to send NOTIFYs to other sessions immediately at COMMIT
(though, for reasons of wire-protocol stability, we still shouldn't
forward them to our client until end of command).
* Background processes such as replication workers would not send
NOTIFYs at all, since they never execute the client communication
loop. We've had requests to allow triggers running in replication
workers to send NOTIFYs, so that's a problem.
To fix these things, move transmission of outgoing NOTIFY signals
into AtCommit_Notify, where it will happen during CommitTransaction.
Also move the possible call of asyncQueueAdvanceTail there, to
ensure we don't bloat the async SLRU if a background worker sends
many NOTIFYs with no one listening.
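As a sketch of the user-visible effect (hypothetical channel and
procedure names), a session that has done LISTEN my_channel is now
signaled at each intra-procedure COMMIT rather than when the calling
session next goes idle:

    CREATE PROCEDURE notify_in_steps()
    LANGUAGE plpgsql AS $$
    BEGIN
        NOTIFY my_channel, 'step 1';
        COMMIT;   -- listeners are signaled here
        NOTIFY my_channel, 'step 2';
        COMMIT;
    END;
    $$;
    CALL notify_in_steps();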
We can also drop the call of asyncQueueReadAllNotifications,
allowing ProcessCompletedNotifies to go away entirely. That's
because commit 790026972 added a call of ProcessNotifyInterrupt
adjacent to PostgresMain's call of ProcessCompletedNotifies,
and that does its own call of asyncQueueReadAllNotifications,
meaning that we were uselessly doing two such calls (inside two
separate transactions) whenever inbound notify signals coincided
with an outbound notify. We need only set notifyInterruptPending
to ensure that ProcessNotifyInterrupt runs, and we're done.
The existing documentation suggests that custom background workers
should call ProcessCompletedNotifies if they want to send NOTIFY
messages. To avoid an ABI break in the back branches, reduce it
to an empty routine rather than removing it entirely. Removal
will occur in v15.
Although the problems mentioned above have existed for a while,
I don't feel comfortable back-patching this any further than v13.
There was quite a bit of churn in adjacent code between 12 and 13.
At minimum we'd have to also backpatch 51004c717, and a good deal
of other adjustment would also be needed, so the benefit-to-risk
ratio doesn't look attractive.
Per bug #15293 from Michael Powers (and similar gripes from others).
Artur Zakirov and Tom Lane
Discussion: https://postgr.es/m/153243441449.1404.2274116228506175596@wrigleys.postgresql.org
This contains all individuals mentioned in the commit messages during
PostgreSQL 14 development.
Current through ed740b06b18e1a23becd54c97ff229aba4c94349
Commit 6c3ffd6 added a couple new predefined roles but didn't properly
wrap the SQL commands mentioned in the description of those roles with
command tags, so add them now.
Backpatch-through: 14
Reported-by: Michael Banck
Discussion: https://postgr.es/m/606d8b1c.1c69fb81.3df04.1a99@mx.google.com
The current refresh behavior tries to just refresh added/dropped
publications, but that leads to removing the wrong tables from the
subscription. We can't refresh just the dropped publication because it
is quite possible that some of its tables were removed from the
publication by that time, and those would then remain part of the
subscription. Also, there is a chance that the tables that were part of
the publication being dropped are also part of another publication, so
we can't remove those.
So, we decided that by default, add/drop commands will also act like
REFRESH PUBLICATION, which means they will refresh all the publications.
We could keep the old behavior for "add publication", but it is better
to be consistent with "drop publication".
Author: Hou Zhijie
Reviewed-by: Masahiko Sawada, Amit Kapila
Backpatch-through: 14, where it was introduced
Discussion: https://postgr.es/m/OS0PR01MB5716935D4C2CC85A6143073F94EF9@OS0PR01MB5716.jpnprd01.prod.outlook.com
Using --quiet in combination with --no-strict-names didn't work as
documented; a warning message was still emitted. Since the --quiet
flag worked in an unconventional way compared to other utilities, fix
by removing the functionality instead.
Backpatch through 14 where pg_amcheck was introduced.
Bug: 17148
Reported-by: Chen Jiaoqian <chenjq.jy@fujitsu.com>
Reviewed-by: Julien Rouhaud <rjuju123@gmail.com>
Discussion: https://postgr.es/m/17148-b5087318e2b04fc6@postgresql.org
Backpatch-through: 14
This reverts the following commits:
1b5617eb844cd2470a334c1d2eec66cf9b39c41a Describe (auto-)analyze behavior for partitioned tables
0e69f705cc1a3df273b38c9883fb5765991e04fe Set pg_class.reltuples for partitioned tables
41badeaba8beee7648ebe7923a41c04f1f3cb302 Document ANALYZE storage parameters for partitioned tables
0827e8af70f4653ba17ed773f123a60eadd9f9c9 autovacuum: handle analyze for partitioned tables
There are efficiency issues in this code when handling databases with
large numbers of partitions, and it doesn't look like there is any
trivial way to handle those. There are some other issues as well. It's
now too late in the cycle for nontrivial fixes, so we'll have to let
Postgres 14 users continue to ANALYZE their partitioned tables manually,
and hopefully we can fix the issues for Postgres 15.
I kept [most of] be280cdad298 ("Don't reset relhasindex for partitioned
tables on ANALYZE") because while we added it due to 0827e8af70f4, it is
a good bugfix in its own right, since it affects manual analyze as well
as autovacuum-induced analyze, and there's no reason to revert it.
I retained the addition of relkind 'p' to tables included by
pg_stat_user_tables, because reverting that would require a catversion
bump.
Also, in pg14 only, I keep a struct member that was added to
PgStat_TabStatEntry to avoid breaking compatibility with existing stat
files.
Backpatch to 14.
Discussion: https://postgr.es/m/20210722205458.f2bug3z6qzxzpx2s@alap3.anarazel.de
The check for sslsni only checked for the existence of the parameter
but not for its actual value. This meant that the SNI extension was
always turned on. Fix by inspecting the value of sslsni and only
activating the SNI extension iff sslsni has been enabled. Also update
the docs to be more in line with how other boolean params are
documented.
Backpatch to 14 where sslsni was first implemented.
Reviewed-by: Tom Lane
Backpatch-through: 14, where sslsni was added
In ec34040af I added a mention that there was no point in setting
maintenance_work_mem to anything higher than 1GB for vacuum, but that
was incorrect as ginInsertCleanup() also looks at what
maintenance_work_mem is set to during VACUUM and that's not limited to
1GB.
Here I attempt to make it more clear that the limitation is only around
the number of dead tuple identifiers that we can collect during VACUUM.
I've also added a note to autovacuum_work_mem to mention this limitation.
I didn't do that in ec34040af as I'd had some wrong-headed ideas about
just limiting the maximum value for that GUC to 1GB.
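For example (a sketch; my_table is hypothetical), a setting above 1GB is
not wasted during VACUUM:

    -- Only the dead tuple identifier storage collected by VACUUM is
    -- capped at 1GB; GIN pending-list cleanup can use the full amount.
    SET maintenance_work_mem = '4GB';
    VACUUM my_table;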
Author: David Rowley
Discussion: https://postgr.es/m/CAApHDvpGwOAvunp-E-bN_rbAs3hmxMoasm5pzkYDbf36h73s7w@mail.gmail.com
Backpatch-through: 9.6, same as ec34040af
Since commit e462856a7a, pg_upgrade automatically creates a script to
update extensions, so mention that instead of ALTER EXTENSION.
Backpatch-through: 9.6
Copy-and-pasteo in 665c5855e, evidently. The 9.6 docs toolchain
whined about duplicate index entries, though our modern toolchain
doesn't. In any case, these GUCs surely are not about the
default settings of these values.
postgres_fdw imported generated columns from the remote tables as plain
columns, and caused failures like "ERROR: cannot insert a non-DEFAULT
value into column "foo"" when inserting into the foreign tables, as it
tried to insert values into the generated columns. To fix, we do the
following under the assumption that generated columns in a postgres_fdw
foreign table are defined so that they represent generated columns in
the underlying remote table:
* Send DEFAULT for the generated columns to the foreign server on insert
or update, not generated column values computed on the local server.
* Add to postgresImportForeignSchema() an option "import_generated" to
include column generated expressions in the definitions of foreign
tables imported from a foreign server. The option is true by default.
The assumption seems reasonable, because that would make a query of the
postgres_fdw foreign table return values for the generated columns that
are consistent with the generated expression.
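A sketch of both pieces, with hypothetical server, schema, and table
names:

    -- The local generated expression is assumed to match the remote one.
    CREATE FOREIGN TABLE ft (
        a int,
        b int GENERATED ALWAYS AS (a * 2) STORED
    ) SERVER remote_srv OPTIONS (table_name 'rt');
    -- With the fix, DEFAULT is sent for b instead of a locally computed value.
    INSERT INTO ft (a) VALUES (1);
    -- import_generated (default true) controls whether generated
    -- expressions are included in imported definitions.
    IMPORT FOREIGN SCHEMA public FROM SERVER remote_srv INTO local_schema
        OPTIONS (import_generated 'false');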
While here, fix another issue in postgresImportForeignSchema(): it tried
to include column generated expressions as column default expressions in
the foreign table definitions when the import_default option was enabled.
Per bug #16631 from Daniel Cherniy. Back-patch to v12 where generated
columns were added.
Discussion: https://postgr.es/m/16631-e929fe9db0ffc7cf%40postgresql.org