adf97c156 added hashing support to ExprState and changed Hash Join to
use that instead of manually extracting Datums from tuples and hashing
them one column at a time.
When hashing multiple columns or expressions, the code added in that
commit stored the intermediate hash value in the ExprState's resvalue
field. That was a mistake as steps may be injected into the ExprState
between each hashing step that look at or overwrite the stored
intermediate hash value. EEOP_PARAM_SET is an example of such a step.
Here we fix this by adding a new dedicated field for storing
intermediate hash values and adjusting the code so that all hashing
steps apart from the final one store their result in the intermediate
field.
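To illustrate, the fix amounts to something like this (an illustrative
sketch only; the field and step names here are not the committed ones):

    /* illustrative sketch, not the committed code */
    uint32      hash_intermediate;  /* new dedicated ExprState field */

    /* each non-final hashing step accumulates here ... */
    state->hash_intermediate = hash_combine(state->hash_intermediate,
                                            datum_hash);

    /* ... and only the final step writes to resvalue */
    state->resvalue = UInt32GetDatum(state->hash_intermediate);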
In passing, rename a variable so that it's more aligned with the
surrounding code, and so that a few lines stay within the 80-char
margin.
Reported-by: Andres Freund
Reviewed-by: Alena Rybakina <a.rybakina@postgrespro.ru>
Discussion: https://postgr.es/m/CAApHDvqo9eenEFXND5zZ9JxO_k4eTA4jKMGxSyjdTrsmYvnmZw@mail.gmail.com
This commit adds missing checks for COPY FORCE_NOT_NULL and FORCE_NULL
when applied to all columns via "*". These options now correctly
require CSV mode and are disallowed in COPY TO, making their behavior
consistent with FORCE_QUOTE.
Some regression tests are added to verify the correct behavior for the
all-columns case, including FORCE_QUOTE, which was not tested.
Backpatch down to 17, where support for the all-column grammar with
FORCE_NOT_NULL and FORCE_NULL was added.
Author: Joel Jacobson
Reviewed-by: Zhang Mingli
Discussion: https://postgr.es/m/65030d1d-5f90-4fa4-92eb-f5f50389858e@app.fastmail.com
Backpatch-through: 17
Some queries in copy2 exist to check various option combinations, and
used "stdin" or "stdout" in ways incompatible with the COPY TO or FROM
clauses they were combined with, which was confusing. This commit
rewrites these queries to use a compatible grammar.
The coverage of the tests is unchanged. Like the original commit
451d1164b9d0, backpatch down to 16, where these queries were
introduced. A follow-up commit will rely on this area of the tests for
a bug fix.
Author: Joel Jacobson
Reviewed-by: Zhang Mingli
Discussion: https://postgr.es/m/65030d1d-5f90-4fa4-92eb-f5f50389858e@app.fastmail.com
Backpatch-through: 16
Commit 2dc1deaea turns out to have been still a brick shy of a load,
because CALL statements executing within a plpgsql exception block
could still pass the wrong snapshot to stable functions within the
CALL's argument list. That happened because standard_ProcessUtility
forces isAtomicContext to true if IsTransactionBlock is true, which
it always will be inside a subtransaction. Then ExecuteCallStmt
would think it does not need to push a new snapshot --- but
_SPI_execute_plan didn't do so either, since it thought it was in
nonatomic mode.
The best fix for this seems to be for _SPI_execute_plan to operate
in atomic execution mode if IsSubTransaction() is true, even when the
SPI context as a whole is non-atomic. This makes _SPI_execute_plan
have the same rules about when non-atomic execution is allowed as
_SPI_commit/_SPI_rollback have about when COMMIT/ROLLBACK are allowed,
which seems appropriately symmetric. (If anyone ever tries to allow
COMMIT/ROLLBACK inside a subtransaction, this would all need to be
rethought ... but I'm unconvinced that such a thing could be logically
consistent at all.)
For further consistency, also check IsSubTransaction() in
SPI_inside_nonatomic_context. That does not matter for its
one present-day caller StartTransaction, which can't be reached
inside a subtransaction. But if any other callers ever arise,
they'd presumably want this definition.
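In short, the rule now looks something like this (a simplified sketch,
not the exact committed diff; the relevant knob in recent branches is
SPIExecuteOptions' allow_nonatomic flag):

    /* in _SPI_execute_plan: never run non-atomically inside a subxact */
    bool    allow_nonatomic = options->allow_nonatomic &&
                              !IsSubTransaction();

    /* in SPI_inside_nonatomic_context, for consistency */
    if (IsSubTransaction())
        return false;           /* treat the context as atomic */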
Per bug #18656 from Alexander Alehin. Back-patch to all
supported branches, like previous fixes in this area.
Discussion: https://postgr.es/m/18656-cade1780866ef66c@postgresql.org
Commit a4ccc1cef introduced the Generation Context and modified the
logical decoding process to use a Generation Context with a fixed
block size of 8MB for storing tuple data decoded during logical
decoding (i.e., rb->tup_context). Several reports have indicated that
the logical decoding process can be terminated due to
out-of-memory (OOM) situations caused by excessive memory usage in
rb->tup_context.
This issue can occur when decoding a workload involving several
concurrent transactions, including a long-running transaction that
modifies tuples. By design, the Generation Context does not free a
memory block until all chunks within that block are
released. Consequently, if tuples modified by the long-running
transaction are stored across multiple memory blocks, these blocks
remain allocated until the long-running transaction completes, leading
to substantial memory fragmentation. The memory usage during logical
decoding, tracked by rb->size, does not account for memory
fragmentation, resulting in potentially much higher memory consumption
than the value of the logical_decoding_work_mem parameter.
Various improvement strategies were discussed in the relevant
thread. This change reduces the block size of the Generation Context
used in rb->tup_context from 8MB to 8kB. This modification
significantly decreases the likelihood of substantial memory
fragmentation occurring and is relatively straightforward to
backport. Performance testing across multiple platforms has confirmed
that this change will not introduce any performance degradation that
would impact actual operation.
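Concretely, the change amounts to something like the following (a
sketch; the exact GenerationContextCreate arguments vary across
branches):

    /* was SLAB_LARGE_BLOCK_SIZE (8MB); now SLAB_DEFAULT_BLOCK_SIZE (8kB) */
    rb->tup_context = GenerationContextCreate(new_ctx, "Tuples",
                                              SLAB_DEFAULT_BLOCK_SIZE);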
Backport to all supported branches.
Reported-by: Alex Richman, Michael Guissine, Avi Weinberg
Reviewed-by: Amit Kapila, Fujii Masao, David Rowley
Tested-by: Hayato Kuroda, Shlok Kyal
Discussion: https://postgr.es/m/CAD21AoBTY1LATZUmvSXEssvq07qDZufV4AF-OHh9VD2pC0VY2A%40mail.gmail.com
Backpatch-through: 12
Avoid null-pointer crash when considering a cursor declaration
that's outside any C function (a case which is useless anyway).
Ensure a cursor for a prepared statement is marked as initially
not open. At worst, if we chanced to get not-already-zeroed memory
from malloc(), this oversight would result in failing to issue a
"cursor "foo" has been declared but not opened" warning that would
have been appropriate.
Avoid running off the end of the buffer when there are mismatched
square brackets following a variable name. This could lead to
SIGSEGV after reaching the end of memory.
Given the lack of field complaints, none of these seem to be worth
back-patching, but let's clean them up in HEAD.
Per valgrind testing by Alexander Lakhin.
Discussion: https://postgr.es/m/5f5bcecd-d7ec-b8c0-6c92-d1a7c6e0f639@gmail.com
Commit 5bf748b8 taught nbtree ScalarArrayOp index scans to decide when
and how to start the next primitive index scan based on physical index
characteristics. This included rules for deciding whether to start a
new primitive index scan (or whether to move onto the right sibling leaf
page instead) that specifically consider truncated lower-order columns
(-inf columns) from leaf page high keys.
These omitted columns were treated as satisfying the scan's required
scan keys, though only for scan keys marked required in the current scan
direction (forward). Scan keys that didn't get this behavior (those
marked required in the backwards direction only) usually didn't give the
scan reasonable cause to reposition itself to a later leaf page (via
another descent of the index in _bt_first), but _bt_advance_array_keys
would nevertheless always give up by forcing another call to _bt_first.
_bt_advance_array_keys was unwilling to allow the scan to continue onto
the next leaf page, to reconsider whether we really should start another
primitive scan based on the details of the sibling page's tuples. This
didn't match its behavior with similar cases involving keys required in
the current scan direction (forward), which seems unprincipled. It led
to an excessive number of primitive scans/index descents for queries
with a higher-order = array scan key (with dense, contiguous values)
mixed with a lower-order required > or >= scan key.
Bring > and >= strategy scan keys in line with other required scan key
types: treat truncated -inf scan keys as having satisfied scan keys
required in either scan direction (forwards and backwards alike) during
array advancement. That way affected scans can continue to the right
sibling leaf page. Advancement must now schedule an explicit recheck of
the right sibling page's high key in cases involving > or >= scan keys.
The recheck gives the scan a way to back out and start another primitive
index scan (we can't just rely on _bt_checkkeys with > or >= scan keys).
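Schematically, the new rule during array advancement looks something
like this (illustrative pseudocode only, with invented names; the
committed logic lives in _bt_advance_array_keys):

    /* illustrative pseudocode, not the committed code */
    if (highkey_attr_is_truncated)      /* -inf column in the high key */
    {
        /*
         * Treat the attribute as satisfying required scan keys in
         * either direction.  Previously only keys required in the
         * current (forward) direction were treated this way, so a
         * lower-order > or >= key forced a new descent via _bt_first.
         */
        continue_to_right_sibling = true;
        if (has_required_gt_or_ge_key)
            schedule_highkey_recheck = true;    /* may still back out */
    }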
This work can be considered a standalone optimization on top of the
work from commit 5bf748b8. But it was written in preparation for an
upcoming patch that will add skip scan to nbtree. In practice scans
that use "skip arrays" will tend to be much more sensitive to any
implementation deficiencies in this area.
Author: Peter Geoghegan <pg@bowt.ie>
Reviewed-By: Tomas Vondra <tomas@vondra.me>
Discussion: https://postgr.es/m/CAH2-Wz=9A_UtM7HzUThSkQ+BcrQsQZuNhWOvQWK06PRkEp=SKQ@mail.gmail.com
Generally, we don't want any overriding xreflabels in the options
list, so that we can link to options and the link renders as the
option name. The -g option did this differently and config.sgml made
use of that for a link. The new --no-data-checksums option (commit
983a588e0b8) apparently copied this pattern, but that seems like the
wrong direction, as a future patch revealed.
To fix, remove the two xreflabels and rewrite the link in config.sgml
with an explicit link text.
Two near-identical copies of clause_sides_match_join() existed in
joinpath.c and analyzejoins.c. Deduplicate this by moving the function
into restrictinfo.h.
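For reference, the helper is a small inline function along these lines
(a sketch based on the two pre-existing copies):

    static inline bool
    clause_sides_match_join(RestrictInfo *rinfo, Relids outerrelids,
                            Relids innerrelids)
    {
        if (bms_is_subset(rinfo->left_relids, outerrelids) &&
            bms_is_subset(rinfo->right_relids, innerrelids))
        {
            rinfo->outer_is_left = true;    /* lefthand side is outer */
            return true;
        }
        else if (bms_is_subset(rinfo->right_relids, outerrelids) &&
                 bms_is_subset(rinfo->left_relids, innerrelids))
        {
            rinfo->outer_is_left = false;   /* righthand side is outer */
            return true;
        }
        return false;           /* no good for these input relations */
    }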
It isn't quite clear that keeping the inline property of this function
is worthwhile, but this commit is just an exercise in code
deduplication. More effort would be required to determine if the inline
property is worth keeping.
Author: James Hunter <james.hunter.pg@gmail.com>
Discussion: https://postgr.es/m/CAJVSvF7Nm_9kgMLOch4c-5fbh3MYg%3D9BdnDx3Dv7Fcb64zr64Q%40mail.gmail.com
This module provides SQL functions that allow inspecting logical
decoding components.
It currently allows inspecting the contents of serialized logical
snapshots of a running database cluster, which is useful for debugging
or educational purposes.
Author: Bertrand Drouvot
Reviewed-by: Amit Kapila, Shveta Malik, Peter Smith, Peter Eisentraut
Reviewed-by: David G. Johnston
Discussion: https://postgr.es/m/ZscuZ92uGh3wm4tW%40ip-10-97-1-34.eu-west-3.compute.internal
This commit moves the definitions of the SnapBuild and SnapBuildOnDisk
structs, related to logical snapshots, to the snapshot_internal.h
file. This change allows external tools, such as
pg_logicalinspect (with an upcoming patch), to access and utilize the
contents of logical snapshots.
Author: Bertrand Drouvot
Reviewed-by: Amit Kapila, Shveta Malik, Peter Smith
Discussion: https://postgr.es/m/ZscuZ92uGh3wm4tW%40ip-10-97-1-34.eu-west-3.compute.internal
Put the rule type at the start not the end, and put spaces
between the constituent token names instead of smashing them
into an illegible mess. This has no functional impact but
I think it makes the rules a great deal more readable.
Discussion: https://postgr.es/m/1185216.1724001216@sss.pgh.pa.us
parse.pl contains several constant tables that describe tweaks
to be made to the backend grammar. In the same spirit as
00b0e7204, add cross-checks that each table entry is used at
least once (or exactly once if that's appropriate). This should
help catch cases where adjustments to the backend grammar cause
a table entry not to match as expected.
Per suggestion from Michael Paquier.
Discussion: https://postgr.es/m/ZsLVbjsc5x5Saesg@paquier.xyz
Careless string hacking caused parse.pl to transform gram.y's
declaration
%nonassoc IDENT PARTITION RANGE ROWS ...
into
%nonassoc IDENT
%nonassoc CSTRING PARTITION RANGE ROWS ...
It turns out that this has no semantic impact, because the
generated preproc.c is exactly the same either way (if you
inject a blank line to keep line numbers the same).
Nonetheless, given the great emphasis that the commentary in
gram.y places on keeping those other keywords at the same
precedence level as IDENT, this seems like foolishly risking ecpg
behaving differently from the core parser. Adjust the code so
that CSTRING is added to the precedence line without breaking it
into two lines.
Discussion: https://postgr.es/m/2157151.1713540065@sss.pgh.pa.us
Invent a notion of "local" storage that will automatically be
reclaimed at the end of each statement. Use this for location
strings as well as other visibly short-lived data within the parser.
Also, make cat_str and make_str return local storage and not free
their inputs, which allows dispensing with a whole lot of retail
mm_strdup calls. We do have to add some new ones in places where
a local-lifetime string needs to be added to a longer-lived data
structure, but on balance there are far fewer mm_strdup calls than
before.
In hopes of flushing out places where changes were necessary,
I changed YYLTYPE from "char *" to "const char *", which forced
const-ification of various function arguments that probably
should've been like that all along.
This still leaks somewhat more memory than v17, but that will be
cleaned up in future commits.
Discussion: https://postgr.es/m/2011420.1713493114@sss.pgh.pa.us
mm_alloc and mm_strdup were in type.c, which seems a completely
random choice. No doubt the original author thought two small
functions didn't deserve their own file. But I'm about to add
some more memory-management stuff beside them, so let's put them
in a less surprising place. This seems like a better home for
mmerror, mmfatal, and the cat_str/make_str family, too.
Discussion: https://postgr.es/m/2011420.1713493114@sss.pgh.pa.us
Most productions in the preprocessor grammar construct strings
representing SQL or C statements or fragments thereof. Instead
of returning these as <str> results of the productions, return
them as "location" values, taking advantage of Bison's flexibility
about what a location is. We aren't really giving up anything
thereby, since ecpg's error reports have always just given line
numbers, and that's tracked separately. The advantage of this
is that a single instance of the YYLLOC_DEFAULT macro can
perform all the work needed by the vast majority of productions,
including all the ones made automatically by parse.pl. This
avoids having large numbers of effectively-identical productions,
which tickles an optimization inefficiency in recent versions of
clang. (This patch reduces the compilation time for preproc.o
by more than 100-fold with clang 16, and is visibly helpful with
gcc too.) The compiled parser is noticeably smaller as well.
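Schematically, the default action can now do all the string assembly
(a simplified illustration, not the actual macro; cat_rhs_strings is a
hypothetical helper that concatenates the string "locations" of all N
RHS symbols):

    /* simplified illustration, not the actual macro */
    #define YYLLOC_DEFAULT(Cur, Rhs, N) \
        ((Cur) = cat_rhs_strings((Rhs), (N)))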
A disadvantage of this approach is that YYLLOC_DEFAULT is applied
before running the production's semantic action (if any). This
means it cannot use the method favored by cat_str() of free'ing
all the input strings; if the action needs to look at the input
strings, it'd be looking at dangling storage. As this stands,
therefore, it leaks memory like a sieve. This is already a big
patch though, and fixing the memory management seems like a
separable problem, so let's leave that for the next step.
(This does remove some free() calls that I'd have had to touch
anyway, in the expectation that the next step will manage
memory reclamation quite differently.)
Most of the changes here are mindless substitution of "@N" for
"$N" in grammar rules; see the changes to README.parser for
an explanation.
Discussion: https://postgr.es/m/2011420.1713493114@sss.pgh.pa.us
Remove a lot of cruft, clean up and document what's left.
This produces the same preproc.y output as before, except for
fewer blank lines. (It's not like we're making any attempt to
match the layout of gram.y, so I removed the one bit of logic
that seemed to have that in mind.)
Discussion: https://postgr.es/m/2011420.1713493114@sss.pgh.pa.us
As noted in the previous commit, check_rules.pl is now entirely
redundant with checks made by parse.pl, or would be if it weren't
for the places where it's wrong. It's a waste of build cycles
and maintenance effort, so remove it.
Discussion: https://postgr.es/m/2011420.1713493114@sss.pgh.pa.us
README.parser is the user's manual, such as it is, for parse.pl.
It's rather poorly written if you ask me, so try to improve it.
(More could be written here, but this at least covers the same
info in a more organized fashion.)
Also, the single solitary line of usage info in parse.pl itself
was a lie. Replace.
Add some error checks that the ecpg.addons entries meet the syntax
rules set forth in README.parser. One of them didn't, but
accidentally worked anyway because the logic in include_addon is
such that 'block' is the default behavior.
Also add a cross-check that each ecpg.addons entry is matched exactly
once in the backend grammar. This exposed that there are two dead
entries there --- they are dead because the %replace_types table in
parse.pl causes their nonterminals to be ignored altogether.
Removing them doesn't change the generated preproc.y file.
(This implies that check_rules.pl is completely worthless and should
be nuked: it adds build cycles and maintenance effort while failing
to reliably accomplish its one job of detecting dead rules. I'll
do that separately.)
Discussion: https://postgr.es/m/2011420.1713493114@sss.pgh.pa.us
Commit 9fab40ad32e changed ReorderBuffer to use Slab Context for
allocating ReorderBufferTXN entries instead of using a caching
mechanism. The txn->node field is no longer used as an element of the
list of preallocated ReorderBufferTXNs.
Reviewed-by: Amit Kapila
Discussion: https://postgr.es/m/CAD21AoB1CTnX66Ji3zTCnjoPVC9OzYe0B6LygUHcxEB2RV-hFw%40mail.gmail.com
The MergeJoin struct was tracking "mergeStrategies", which were an
array of btree strategy numbers, purely for the purpose of comparing
it later against btree strategies to determine if the scan direction
was forward or reverse. Change that. Instead, track
"mergeReversals", an array of bool, to indicate the same without an
unfortunate assumption that a strategy number refers specifically to a
btree strategy.
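Schematically (a simplified sketch of the struct change):

    /* in struct MergeJoin, one flag per merge key (sketch) */
    bool       *mergeReversals;     /* true if this merge key's sort
                                     * order is reversed; replaces the
                                     * btree-specific mergeStrategies
                                     * array */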
Author: Mark Dilger <mark.dilger@enterprisedb.com>
Discussion: https://www.postgresql.org/message-id/flat/E72EAA49-354D-4C2E-8EB9-255197F55330@enterprisedb.com
Functions make_pathkey_from_sortop() and transformWindowDefinitions(),
which receive a SortGroupClause, were determining the sort order
(ascending vs. descending) by comparing that structure's operator
strategy to BTLessStrategyNumber, but could just as easily have gotten
it from the SortGroupClause object, if it had such a field, so add
one. This reduces the number of places that hardcode the assumption
that the strategy refers specifically to a btree strategy, rather than
some other index AM's operators.
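Schematically (a sketch; field name as best understood from this patch
series):

    typedef struct SortGroupClause
    {
        /* ... existing fields ... */
        bool        reverse_sort;   /* new: true if sort order is
                                     * descending */
    } SortGroupClause;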
Author: Mark Dilger <mark.dilger@enterprisedb.com>
Discussion: https://www.postgresql.org/message-id/flat/E72EAA49-354D-4C2E-8EB9-255197F55330@enterprisedb.com
TAP tests can write
$node->init(no_data_checksums => 1);
to initialize a cluster explicitly without checksums. Currently, this
is the default, but this change allows running all tests with
checksums enabled, like
PG_TEST_INITDB_EXTRA_OPTS=--data-checksums meson test ...
And this also prepares the tests for when we switch the default to
checksums enabled.
The pg_checksums tests need to disable checksums so they can test the
tool's own functionality of enabling checksums. The amcheck/pg_amcheck tests
need to disable checksums because they manually introduce corruption
that they want to detect, but with checksums enabled, the checksum
verification will fail before they even get to their work.
Author: Greg Sabino Mullane <greg@turnstep.com>
Reviewed-by: Nathan Bossart <nathandbossart@gmail.com>
Reviewed-by: Peter Eisentraut <peter@eisentraut.org>
Discussion: https://www.postgresql.org/message-id/flat/CAKAnmmKwiMHik5AHmBEdf5vqzbOBbcwEPHo4-PioWeAbzwcTOQ@mail.gmail.com
When answering support questions online it's helpful to be able to
refer to the specific format by using an anchored link.
Author: Dagfinn Ilmari Mannsåker <ilmari@ilmari.org>
Discussion: https://postgr.es/m/87edatit3t.fsf@wibble.ilmari.org
Commit 15abc7788e6 tolerated namespace pollution from BeOS system
headers. Commit 44f902122 de-supported BeOS. Since that stuff didn't
make it into the Meson build system, synchronize by removing it from
configure.
Author: Thomas Munro <thomas.munro@gmail.com>
Reviewed-by: Peter Eisentraut <peter@eisentraut.org>
Reviewed-by: Heikki Linnakangas <hlinnaka@iki.fi>
Reviewed-by: Japin Li <japinli@hotmail.com>
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us> (the idea, not the patch)
Discussion: https://postgr.es/m/ME3P282MB3166F9D1F71F787929C0C7E7B6312%40ME3P282MB3166.AUSP282.PROD.OUTLOOK.COM
Attempting to use an interval of time less than 1ms would cause \watch
to hang. This was confusing, so let's change the logic so that an
interval lower than 1ms behaves the same as 0.
Comments are added to mention that the internals of do_watch() had
better rely on "sleep_ms", the interval value in milliseconds. While
at it, this commit adds a test to check the behavior of interval values
less than 1ms.
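The resulting logic is roughly (a simplified sketch of do_watch();
wait_until is a hypothetical helper):

    /* the internals rely on the interval converted to milliseconds */
    long    sleep_ms = (long) (sleep * 1000);

    /* an interval below 1ms now behaves the same as an interval of 0 */
    if (sleep_ms > 0)
        wait_until(start_time + sleep_ms);  /* hypothetical helper */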
\watch hanging for interval values less than 1ms existed before
6f9ee74d45aa, which changed the code to support an interval value of
0.
Reported-by: Heikki Linnakangas
Author: Andrey M. Borodin, Michael Paquier
Discussion: https://postgr.es/m/88445e0e-3156-4b9d-afae-9a1a7b1631f6@iki.fi
Backpatch-through: 16
max_parallel_maintenance_workers was introduced in 9da0cc35284b, and
used a hardcoded upper limit of 1024 rather than
MAX_PARALLEL_WORKER_LIMIT. max_parallel_workers and
max_parallel_workers_per_gather have already used
MAX_PARALLEL_WORKER_LIMIT (1024) as their upper bound since
6599c9ac3340.
Author: Matthias van de Meent
Reviewed-by: Zhang Mingli
Discussion: https://postgr.es/m/CAEze2WiCiJD+8Wig_wGPyn4vgdPjbnYXy2Rw+9KYi6izTMuP=w@mail.gmail.com
find_computable_ec_member() had the wrong mental model of what
its primary caller prepare_sort_from_pathkeys() would do with
the selected EquivalenceClass member expression. We will not
compute the EC expression in a plan node atop the one returning
the passed-in targetlist; rather, the EC expression will be
computed as an additional column of that targetlist. So any
Var or quasi-Var used in the given tlist is also available to the
EC expression. In simple cases this makes no difference because
the given tlist is just a list of Vars or quasi-Vars --- but if
we are considering an appendrel member produced by flattening
a UNION ALL, the tlist may contain expressions, resulting in
failure to match and a "could not find pathkey item to sort"
error.
To fix, we can flatten both the tlist and the EC members with
pull_var_clause(), and then just check for subset-ness, so
that the code is actually shorter than before.
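The check is now roughly (a simplified sketch; variable names are
approximate):

    /* flatten both sides down to Vars and PlaceHolderVars ... */
    List   *tlist_vars = pull_var_clause((Node *) tlist_exprs,
                                         PVC_INCLUDE_PLACEHOLDERS);
    List   *expr_vars  = pull_var_clause((Node *) em->em_expr,
                                         PVC_INCLUDE_PLACEHOLDERS);

    /* ... then the EC member is computable if its Vars are a subset */
    bool    computable = (list_difference(expr_vars, tlist_vars) == NIL);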
While this bug is quite old, the present patch only works back to
v13. We could possibly make it work in v12 by back-patching parts
of 375398244. On the whole though I don't like the risk/reward
ratio of that idea. v12's final release is next month, meaning
there would be no chance to correct matters if the patch causes a
regression. Since this failure has escaped notice for 14 years,
it's likely nobody will hit it in the field with v12.
Per bug #18652 from Alexander Lakhin.
Andrei Lepikhov and Tom Lane
Discussion: https://postgr.es/m/18652-deaa782ebcca85d1@postgresql.org
A missed check for the builtin collation provider could result in
falling through to call isalpha().
This does not appear to have practical consequences because it only
happens for characters in the ASCII range. Regardless, the builtin
provider should not be calling libc functions, so backpatch.
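The fix is essentially (a simplified sketch, not the committed hunk):

    if (locale->provider == COLLPROVIDER_BUILTIN)
        return pg_u_isalpha(code);              /* builtin path, no libc */
    else if (locale->provider == COLLPROVIDER_ICU)
        return u_isalpha(code);                 /* ICU path */
    else
        return isalpha((unsigned char) code);   /* libc path */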
Discussion: https://postgr.es/m/1bd5a0a5192f82c22ee7527e825b18ab0028b2c7.camel@j-davis.com
Backpatch-through: 17
PostgreSQL has for a long time mixed two BIO implementations, which can
lead to subtle bugs and inconsistencies. This cleans up our BIO by just
setting up the methods we need. This patch does not introduce any
functional changes.
The following methods are no longer defined due to not being needed:
- gets: Not used by libssl
- puts: Not used by libssl
- create: Sets up state not used by libpq
- destroy: Not used since libpq uses BIO_NOCLOSE; if it were used, it
would close the socket from underneath libpq
- callback_ctrl: Not implemented by sockets
The following methods are defined for our BIO:
- read: Used for reading arbitrary length data from the BIO. No change
in functionality from the previous implementation.
- write: Used for writing arbitrary length data to the BIO. No change
in functionality from the previous implementation.
- ctrl: Used for processing ctrl messages in the BIO (similar to ioctl).
The only ctrl message which matters is BIO_CTRL_FLUSH used for
writing out buffered data (or signal EOF and that no more data
will be written). BIO_CTRL_FLUSH is mandatory to implement and
is implemented as a no-op since there is no intermediate buffer
to flush.
BIO_CTRL_EOF is the out-of-band method for signalling EOF to
read_ex-based BIOs. Our BIO is not read_ex based, but someone
could accidentally call BIO_CTRL_EOF on us, so implement it mainly
for completeness' sake.
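As a rough illustration of how such a method table is wired up with
OpenSSL's public API (the callback names here are illustrative, not
the actual ones):

    static BIO_METHOD *
    pgconn_bio_method(void)
    {
        /* illustrative wiring; callback names are placeholders */
        BIO_METHOD *method = BIO_meth_new(BIO_TYPE_SOURCE_SINK, "pgconn");

        if (method != NULL)
        {
            BIO_meth_set_read(method, pgconn_read);
            BIO_meth_set_write(method, pgconn_write);
            BIO_meth_set_ctrl(method, pgconn_ctrl);
            /* no gets/puts/create/destroy/callback_ctrl: not needed */
        }
        return method;
    }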
As the implementation is no longer related to BIO_s_socket or calling
SSL_set_fd, methods have been renamed to reference the PGconn and Port
types instead.
This also reverts to using BIO_set_data (with our fallback) as a small
optimization, as BIO_set_app_data requires the ex_data mechanism in
OpenSSL.
Author: David Benjamin <davidben@google.com>
Reviewed-by: Andres Freund <andres@anarazel.de>
Reviewed-by: Daniel Gustafsson <daniel@yesql.se>
Discussion: https://postgr.es/m/CAF8qwaCZ97AZWXtg_y359SpOHe+HdJ+p0poLCpJYSUxL-8Eo8A@mail.gmail.com
This function returns the name, size, and last modification time of
each regular file in pg_wal/summaries. This allows administrators
to grant privileges to view the contents of this directory without
granting privileges on pg_ls_dir(), which allows listing the
contents of many other directories. This commit also gives the
pg_monitor predefined role EXECUTE privileges on the new
pg_ls_summariesdir() function.
Bumps catversion.
Author: Yushi Ogiwara
Reviewed-by: Michael Paquier, Fujii Masao
Discussion: https://postgr.es/m/a0a3af15a9b9daa107739eb45aa9a9bc%40oss.nttdata.com
Both functions advance the transaction ID, which modifies the system
state. Thus, they should be marked as VOLATILE.
Additionally, they call the AssignTransactionId function, which cannot
be invoked in parallel mode, so they should be marked as PARALLEL
UNSAFE.
Author: Yushi Ogiwara <btogiwarayuushi@oss.nttdata.com>
Discussion: https://www.postgresql.org/message-id/18f01e4fd46448f88c7a1363050a9955@oss.nttdata.com
Previously, per-script statistics were never output when all
transactions failed due to serialization or deadlock errors. However,
it is reasonable to report such information even when there are no
successful transactions, since these failed transactions are now
objects to be reported.
Meanwhile, if the total number of successful, skipped, and failed
transactions is zero, we don't have to report the number of failed
transactions, similar to the number of skipped transactions; this
avoids printing "NaN%" in the lines of failed transaction reports.
Also, the number of transactions in per-script results now includes
skipped and failed transactions. This prevents printing "total of
NaN%" when no transactions are successfully processed. The number of
transactions actually processed per script, and the TPS based on it,
are now output explicitly on a separate line.
Author: Yugo Nagata
Reviewed-by: Tatsuo Ishii
Discussion: https://postgr.es/m/20240921003544.2436ef8da9c5c8cb963c651b%40sraoss.co.jp
c01743aa4 added EXPLAIN output to display the plan node's
disabled_nodes count whenever that count is above 0. Seemingly, not
many people liked that output, as each parent of a disabled node would
also get a "Disabled Nodes" line due to the way disabled_nodes is
accumulated towards the root plan node. It was often hard and sometimes
impossible to figure out which nodes were disabled from looking at
EXPLAIN. You might think it would be possible to manually add up the
numbers from the "Disabled Nodes" output of a given node's children to
figure out if that node has a higher disabled_nodes count than its
children, but that wouldn't have worked for Append and Merge Append nodes
if some disabled child nodes were run-time pruned during init plan. Those
children are not displayed in EXPLAIN.
Here we attempt to improve this output by showing "Disabled: true"
against only the nodes which are themselves explicitly disabled. That
seems to be the output that's desired by the most people who voiced
their opinion. This is done by summing up the disabled_nodes of the
given node's children and checking if that number is less than the
disabled_nodes of the current node.
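Roughly (a simplified sketch; Append and Merge Append children need
extra handling beyond the outer/inner subplans shown here):

    /* sum the direct children's counts ... */
    int     child_disabled = 0;

    if (outerPlan(plan))
        child_disabled += outerPlan(plan)->disabled_nodes;
    if (innerPlan(plan))
        child_disabled += innerPlan(plan)->disabled_nodes;

    /* ... the node itself is disabled if its own count is higher */
    if (plan->disabled_nodes > child_disabled)
        ExplainPropertyBool("Disabled", true, es);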
This commit also fixes a bug in make_sort() which was neglecting to set
the Sort's disabled_nodes field. This should have copied what was done
in cost_sort(), but it hadn't been updated. With the new output, the
choice to not maintain that field properly was clearly wrong as the
disabled-ness of the node was attributed to the Sort's parent instead.
Reviewed-by: Laurenz Albe, Alena Rybakina
Discussion: https://postgr.es/m/9e4ad616bebb103ec2084bf6f724cfc739e7fabb.camel@cybertec.at