Commit 28e626bde0 introduced the infrastructure for tracking more detailed IO
statistics. This commit adds the actual collection of the new IO statistics
for relations and temporary relations. See aforementioned commit for goals and
high-level design.
The changes in this commit are fairly straightforward. The bulk of the change
is in passing sufficient information to the callsites of pgstat_count_io_op().
A somewhat unsightly detail is that it currently is hard to find a better
place to count fsyncs than in md.c, whereas the other pgstat_count_io_op()
calls are in bufmgr.c/localbuf.c. As the number of fsyncs is tied to md.c
implementation details, it's not obvious there is a better answer.
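For illustration, a write counted from bufmgr.c ends up looking roughly like
this (a sketch only; argument order and enum spellings follow the patch series
and may differ in detail):

    /* after flushing a shared buffer of a plain relation in a normal context */
    pgstat_count_io_op(IOOBJECT_RELATION, IOCONTEXT_NORMAL, IOOP_WRITE);

whereas localbuf.c passes IOOBJECT_TEMP_RELATION and md.c counts IOOP_FSYNC.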
Author: Melanie Plageman <melanieplageman@gmail.com>
Reviewed-by: Andres Freund <andres@anarazel.de>
Discussion: https://postgr.es/m/20200124195226.lth52iydq2n2uilq@alap3.anarazel.de
This commit adds the infrastructure for more detailed IO statistics. The calls
to actually count IOs, a system view to access the new statistics,
documentation and tests will be added in subsequent commits, to make review
easier.
While we already had some IO statistics, e.g. in pg_stat_bgwriter and
pg_stat_database, they did not provide sufficient detail to understand what
the main sources of IO are, or whether configuration changes could avoid
IO. E.g., pg_stat_bgwriter.buffers_backend does contain the number of buffers
written out by a backend, but as that includes extending relations (always
done by backends) and writes triggered by the use of buffer access strategies,
it cannot easily be used to tune background writer or checkpointer. Similarly,
pg_stat_database.blks_read cannot easily be used to tune shared_buffers /
compute a cache hit ratio, as the use of buffer access strategies will often
prevent a large fraction of the read blocks from ending up in shared_buffers.
The new IO statistics count IO operations (evict, extend, fsync, read, reuse,
and write), and are aggregated for each combination of backend type (backend,
autovacuum worker, bgwriter, etc), target object of the IO (relations, temp
relations) and context of the IO (normal, vacuum, bulkread, bulkwrite).
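Roughly, the counters are kept per backend type and indexed by the other two
dimensions plus the operation; an illustrative sketch (the real struct in
pgstat.h differs in detail):

    typedef struct PgStat_BktypeIO
    {
        PgStat_Counter counts[IOOBJECT_NUM_TYPES][IOCONTEXT_NUM_TYPES][IOOP_NUM_TYPES];
    } PgStat_BktypeIO;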
What is tracked in this series of patches is sufficient to perform the
aforementioned analyses. Further details, e.g. tracking the number of buffer
hits, would make that even easier, but were left out for now to keep the scope
of the already large patchset manageable.
Bumps PGSTAT_FILE_FORMAT_ID.
Author: Melanie Plageman <melanieplageman@gmail.com>
Reviewed-by: Andres Freund <andres@anarazel.de>
Reviewed-by: Justin Pryzby <pryzby@telsasoft.com>
Reviewed-by: Kyotaro Horiguchi <horikyota.ntt@gmail.com>
Discussion: https://postgr.es/m/20200124195226.lth52iydq2n2uilq@alap3.anarazel.de
While removing the use of SHM_QUEUE from predicate.c, in 9600371764, I made
two mistakes in GetSafeSnapshotBlockingPids():
- Removed the check for output_size
- Previously, when the first loop didn't find a matching proc, sxact would be
NULL. But with naive use of dlist_foreach() it ends up as the value of the
last iteration.
The second issue is the cause of occasional failures in the deadlock-hard and
deadlock-soft isolation tests that we have been observing on CI. The issue was
very hard to reproduce, as it requires the transactions.sql regression test to
run at the same time as the deadlock-{hard,soft} isolation test.
I did not find other similar mistakes in 9600371764.
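To illustrate the second mistake: with dlist_foreach(), the loop variable keeps
the last element's value when the loop runs to completion, so the code now
tracks explicitly whether a match was found. A simplified sketch (member names
approximate, not the exact committed code):

    SERIALIZABLEXACT *sxact = NULL;
    dlist_iter  iter;
    bool        found = false;

    dlist_foreach(iter, &PredXact->activeList)
    {
        sxact = dlist_container(SERIALIZABLEXACT, xactLink, iter.cur);
        if (sxact->pid == blocked_pid)
        {
            found = true;
            break;
        }
    }
    if (!found)
        sxact = NULL;   /* otherwise sxact points at the last list element */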
Discussion: https://postgr.es/m/20230208221145.bwzhancellclrgia@awork3.anarazel.de
It appears no longer possible to build the SGML docs without a local
installation of the DocBook DTD, because sourceforge.net now only
permits HTTPS access, and no common version of xsltproc supports that.
Hence, remove the bits of our documentation suggesting that that's
possible or useful.
In fact, we might as well add the --nonet option to the build recipes
automatically, for a bit of extra security.
Also fix our documentation-tool-installation recipes for macOS to
ensure that xmllint and xsltproc are pulled in from MacPorts or
Homebrew. The previous recipes assumed you could use the
Apple-supplied versions of these tools, which still works, except that
you'd need to set an environment variable to ensure that they would
find DTD files provided by those package managers. Simpler and easier
to just recommend pulling in the additional packages.
In HEAD, also document how to build docs using Meson, and adjust
"ninja docs" to just build the HTML docs, for consistency with the
default behavior of doc/src/sgml/Makefile.
In a fit of neatnik-ism, I also made the ordering of the package
lists match the order in which the tools are described at the head
of the appendix.
Aleksander Alekseev, Peter Eisentraut, Tom Lane
Discussion: https://postgr.es/m/CAJ7c6TO8Aro2nxg=EQsVGiSDe-TstP4EsSvDHd7DSRsP40PgGA@mail.gmail.com
A new --commit option will add all the files in a commit to the file
list. The option can be specified more than once.
Also, if a directory is given on the command line, all the files in that
directory tree will be added to the file list.
Per suggestions from Robert Haas
Reviewed by Jelte Fennema
Discussion: https://postgr.es/m/CA+TgmoY59Ksso81RNLArNxj0a7xaqV_F_u7gSMHbgdc2kG5Vpw@mail.gmail.com
The code I added in fee7b77b9 could misbehave if commute_above_r
contains multiple relids. While adding too many relids here is
probably harmless (pre-fee7b77b9, we did it all the time), it's
not very expensive to be accurate: we just have to intersect
commute_above_r with the join's relids.
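In a sketch (variable names illustrative), that amounts to something like:

    /* keep only the commuting relids that are actually part of this join */
    commute_relids = bms_intersect(sjinfo->commute_above_r, joinrelids);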
Discussion: https://postgr.es/m/17781-c0405c8b3cd5e072@postgresql.org
No GUCs that use NO_SHOW_ALL are reported in pg_show_all_settings(),
hence trying to check combinations of flags related to it is pointless.
These queries were introduced by d10e41d, so backpatch down to 15
to keep all the branches consistent. Equivalent checks based on
NO_SHOW_ALL could be added in check_GUC_init() when a GUC is initially
loaded, but this can be done only on HEAD.
Author: Nitin Jadhav
Discussion: https://postgr.es/m/CAMm1aWaYe0muu3ABo7iSAgK+OWDS9yNe8GGRYnCyeEpScYKa+g@mail.gmail.com
Backpatch-through: 15
DDLs such as REFRESH MATERIALIZED VIEW that generate lots of temporary data
due to rewrite rules may not be processed by output plugins (for example
pgoutput). So, we won't send keep-alive messages for a long time while
processing such commands, which can cause the subscriber side to time out. We
previously fixed a similar case for large transactions in commit f95d53eded,
where the output plugin filters out all or most of the changes, but that fix
missed handling such DDLs.
We decided not to backpatch this, as it adds a new callback to the existing
exposed structure; moreover, users can increase wal_sender_timeout and
wal_receiver_timeout to avoid this problem.
Author: Wang wei, Hou Zhijie
Reviewed-by: Peter Smith, Ashutosh Bapat, Shi yu, Amit Kapila
Discussion: https://postgr.es/m/OS3PR01MB6275478E5D29E4A563302D3D9E2B9@OS3PR01MB6275.jpnprd01.prod.outlook.com
Discussion: https://postgr.es/m/CAA5-nLARN7-3SLU_QUxfy510pmrYK6JJb=bk3hcgemAM_pAv+w@mail.gmail.com
The logic for when to add the current outer join's own relid
to the nullingrels sets of output Vars and PHVs was overly
complicated and underly correct. Not sure why I didn't think
of this before, but since what we want is marking per the
syntactic structure, we can just consult our records about
the syntactic structure, ie syn_righthand/syn_lefthand.
Also, tighten the rule about when to add the commute_above_r
bits, in hopes of eliminating some squishy reasoning. I do not
know of a reason to think that that's broken as-is, but this way
seems better.
Per bug #17781 from Robins Tharakan.
Discussion: https://postgr.es/m/17781-c0405c8b3cd5e072@postgresql.org
Commit c3382a3c3, which moved the implementation of PG_TEST_EXTRA
from src/test/Makefile into individual test scripts, broke the
directions given in the subdirectory README files about how to run
these tests by hand. Update. Also mention wal_consistency_checking
in recovery/README --- that omission isn't the fault of c3382a3c3,
but it's still an omission.
Currently, we reuse WAIT_EVENT_LOGICAL_PARALLEL_APPLY_STATE_CHANGE in the
apply worker while sending data to the parallel apply worker via a shared
memory queue. This is not appropriate, as one won't be able to distinguish
whether the worker is waiting to send data or for the state change.
The patch instead uses the wait event WAIT_EVENT_MQ_SEND, which is already
used while sending data via a shared memory queue in blocking mode.
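For reference, the retry loop in the leader now waits roughly like this
(timeout constant hypothetical, shown only to illustrate the wait event):

    rc = WaitLatch(MyLatch,
                   WL_LATCH_SET | WL_TIMEOUT | WL_EXIT_ON_PM_DEATH,
                   SHM_SEND_RETRY_INTERVAL_MS,  /* hypothetical constant */
                   WAIT_EVENT_MQ_SEND);
    if (rc & WL_LATCH_SET)
        ResetLatch(MyLatch);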
Author: Hou Zhijie
Reviewed-by: Kuroda Hayato, Amit Kapila
Discussion: https://postgr.es/m/OS0PR01MB57161C680B22E4C591628EE994DA9@OS0PR01MB5716.jpnprd01.prod.outlook.com
Here we further simplify the code in heapgettup() and
heapgettup_pagemode() to make better use of the helper functions added in
the previous recent refactors in this area.
In passing, remove an unneeded cast added in 8ca6d49f6.
Author: Melanie Plageman
Reviewed-by: Andres Freund, David Rowley
Discussion: https://postgr.es/m/CAAKRu_YSOnhKsDyFcqJsKtBSrd32DP-jjXmv7hL0BPD-z0TGXQ@mail.gmail.com
Like the implementation for node copy, write and read, this node
requires a custom implementation so that the query jumbling is able to
consider the correct value assigned to it, depending on its type (int,
float, bool, string, bitstring).
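A simplified sketch of the custom jumbling (helper macro and field names
approximate; see queryjumblefuncs.c for the real code):

    A_Const    *expr = (A_Const *) node;

    switch (nodeTag(&expr->val))
    {
        case T_Integer:
            JUMBLE_FIELD(val.ival.ival);
            break;
        case T_Float:
            JUMBLE_STRING(val.fval.fval);
            break;
        case T_Boolean:
            JUMBLE_FIELD(val.boolval.boolval);
            break;
        case T_String:
            JUMBLE_STRING(val.sval.sval);
            break;
        case T_BitString:
            JUMBLE_STRING(val.bsval.bsval);
            break;
        default:
            elog(ERROR, "unrecognized node type: %d", (int) nodeTag(&expr->val));
            break;
    }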
Based on a dump of pg_stat_statements from the regression database, this
would confuse the query jumbling of the following queries:
- SET.
- COPY TO with SELECT queries.
- START TRANSACTION with different isolation levels.
- ALTER TABLE with default expressions.
- CREATE TABLE with partition bounds.
Note that there may be a long-term argument for tracking the location of
such nodes so that query strings holding them could be normalized,
but this is left as a separate discussion.
Oversight in 3db72eb.
Discussion: https://postgr.es/m/Y9+HuYslMAP6yyPb@paquier.xyz
The existing comments in load_relcache_init_file() were not flexible
when new entries were added at the end, so they ended up a bit wrong.
Simplify the comments to avoid this issue.
The portion of join_is_removable() that checks PlaceHolderVars
can be made a little more accurate and intelligible than it was.
The key point is that we can allow join removal even if a PHV
mentions the target rel in ph_eval_at, if that mention was only
added as a consequence of forcing the PHV up to a join level
that's at/above the outer join we're trying to get rid of.
We can check that by testing for the OJ's relid appearing in
ph_eval_at, indicating that it's supposed to be evaluated after
the outer join, plus the existing test that the contained
expression doesn't actually mention the target rel.
While here, add an explicit check that there'll be something left
in ph_eval_at after we remove the target rel and OJ relid. There
is an Assert later on about that, and I'm not too sure that the
case could happen for a PHV satisfying the other constraints,
but let's just check. (There was previously a bms_is_subset test
that meant to cover this risk, but it's broken now because it
doesn't account for the fact that we'll also remove the OJ relid.)
The real reason for revisiting this code though is that the
Assert I left behind in 8538519db turns out to be easily
reachable, because if a PHV of this sort appears in an upper-level
qual clause then that clause's clause_relids will include the
PHV's ph_eval_at relids. This is a mirage though: we have or soon
will remove these relids from the PHV's ph_eval_at, and therefore
they no longer belong in qual clauses' clause_relids either.
Remove that Assert in join_is_removable, and replace the similar
one in remove_rel_from_query with code to remove the deleted relids
from clause_relids.
Per bug #17773 from Robins Tharakan.
Discussion: https://postgr.es/m/17773-a592e6cedbc7bac5@postgresql.org
In standby mode, we don't actually report progress of recovery,
but up until now, startup_progress_timeout_handler() nevertheless
got called every log_startup_progress_interval seconds. That's
an unnecessary expense, so avoid it.
Report by Thomas Munro. Patch by Bharath Rupireddy, reviewed by
Simon Riggs, Thomas Munro, and me. Back-patch to v15, where
the problem was introduced.
Discussion: https://www.postgresql.org/message-id/CA%2BhUKGKCHSffAj8zZJKJvNX7ygnQFxVD6wm1d-2j3fVw%2BMafPQ%40mail.gmail.com
The splitting into parts actually started earlier than the text had
claimed, but that is ancient history anyway by now and does not need
to be mentioned. Update that and tweak the text a bit.
This extends the work begun by a73952b, with the addition of a GUC check
for flag combinations in check_GUC_init(), making sure, as a first step, that
anything defined with GUC_NO_SHOW_ALL also includes GUC_NOT_IN_SAMPLE.
There have never been any GUCs of this kind in the core code, and this
combination makes little sense, as a parameter marked as not fit for SHOW ALL
should not show up in postgresql.conf.sample either.
Note that GUCs marked with GUC_NO_SHOW_ALL are not listed under
pg_settings or SHOW ALL (though they can still be queried individually),
making them unfit for checks via SQL queries in the regression tests that do
a full scan of the available parameters. The SQL tests are still a bit
incorrect about that, and will be cleaned up in a separate commit. We have
also discussed the possibility of extending the SQL functions for GUCs so
that they could show more information about parameters defined with
GUC_NO_SHOW_ALL, though it has been concluded that this is not worth the
extra complication in the long run, an enforced policy at initialization
time being enough to do the same job.
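The shape of the new check in check_GUC_init() is roughly this (simplified
sketch; the real function performs other sanity checks as well):

    if ((gconf->flags & GUC_NO_SHOW_ALL) &&
        !(gconf->flags & GUC_NOT_IN_SAMPLE))
    {
        elog(LOG, "GUC %s flags: NO_SHOW_ALL without NOT_IN_SAMPLE",
             gconf->name);
        return false;
    }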
Per discussion with Nitin Jadhav and Tom Lane.
Discussion: https://postgr.es/m/CAMm1aWaYe0muu3ABo7iSAgK+OWDS9yNe8GGRYnCyeEpScYKa+g@mail.gmail.com
This adds a new option to pg_verifybackup called -P/--progress, showing,
every second, information about the progress of the checksum verification
based on the data of a backup manifest.
Similarly to what is done for pg_rewind and pg_basebackup, the
information printed in the progress report consists of the current
amount of data computed and the total amount of data that will be
computed. Note that files found with an incorrect size do not have
their checksum verified, hence their size is not appended to the total
amount of data estimated during the first scan of the manifest data
(such incorrect sizes could be overly high, for one, falsifying the
progress report).
Author: Masahiko Sawada
Discussion: https://postgr.es/m/CAD21AoC5+JOgMd4o3z_oxw0f8JDSsCYY7zSbhe-O9x7f33rw_A@mail.gmail.com
pqsecure_open_gss() includes a code path handling error messages with
v2-style protocol messages coming from the server. The client-side
buffer holding the error message does not force NUL-termination before
the data from the server is copied to the connection's errorMessage.
Hence, it would be possible for a server to send an unterminated string
and have arbitrary bytes copied into the buffer receiving the error
message in the client, opening the door to a crash or even data
exposure.
As the exchange has not been completed yet at this stage of the
authentication process, this could be abused by an attacker without
Kerberos credentials. Clients that have a valid Kerberos cache are
vulnerable, as libpq opportunistically requests GSSAPI encryption unless
gssencmode is disabled.
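A minimal sketch of the kind of defensive fix involved (buffer and length
names illustrative, not the exact committed code):

    /* ensure the server-provided bytes are NUL-terminated before use */
    if (msglen > sizeof(buf) - 1)
        msglen = sizeof(buf) - 1;
    buf[msglen] = '\0';
    appendPQExpBufferStr(&conn->errorMessage, buf);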
Author: Jacob Champion
Backpatch-through: 12
Security: CVE-2022-41862
This reverts commits 24c35ec and 57169ad. PreRestoreCommand() and
PostRestoreCommand() need to be put closer to the system() call that runs
restore_command, as they enable in_restore_command for the startup
process, which would in turn trigger an immediate proc_exit() in the
SIGTERM handler. Perhaps we could get rid of this behavior entirely,
but 24c35ec has made the window where the flag is enabled much larger
than it was, and any Postgres-like actions (palloc, etc.) taken by code
paths while the flag is enabled could lead to more severe issues in the
shutdown processing.
Note that curculio has shown that there are many more problems in this
area, actually unrelated to this change, hence those issues had better be
addressed first. Keeping the code of HEAD in line with the stable
branches should make that a bit easier.
Per discussion with Andres Freund and Nathan Bossart.
Discussion: https://postgr.es/m/Y979NR3U5VnWrTwB@paquier.xyz
make_outerjoininfo was set up to update SpecialJoinInfo's
commute_below, commute_above_l, commute_above_r fields as soon as
it found a pair of outer joins that look like they can commute.
However, this decision could be negated later in the same loop due
to finding an intermediate outer join that prevents commutation.
That left us with commute_xxx fields that were contradictory to the
join order restrictions expressed in min_lefthand/min_righthand.
The latter fields would keep us from actually choosing a bad join
order; but the inconsistent commute_xxx fields could bollix details
such as the varnullingrels values created for intermediate join
relation targetlists, ending in an assertion failure in setrefs.c.
To fix, wait till the end of make_outerjoininfo where we have
accurate values for min_lefthand/min_righthand, and then insert
only relids not present in those sets into the commute_xxx fields.
Per SQLSmith testing by Robins Tharakan. Note that while Robins
bisected the failure to commit b448f1c8d, it's really the fault of
2489d76c4. The outerjoin_delayed logic removed in the later commit
was keeping us from deciding that troublesome join pairs commute,
at least in the specific example seen here.
Discussion: https://postgr.es/m/CAEP4nAyAORgE8K_RHSmvWbE9UaChhjbEL1RrDU3neePwwRUB=A@mail.gmail.com
deconstruct_distribute tweaks the outer join scope (ojscope)
it passes to distribute_qual_to_rels when considering an outer
join qual that's above potentially-commutable outer joins.
However, if the current join is *not* potentially commutable,
we shouldn't do that. The argument that distribute_qual_to_rels
will not do something wrong with the bogus ojscope falls flat
if we don't pass it non-null postponed_oj_qual_list. Moreover,
there's no need to play games in this case since we aren't going
to commute anything.
Per SQLSmith testing by Robins Tharakan.
Discussion: https://postgr.es/m/CAEP4nAw74k4b-=93gmfCNX3MOY3y4uPxqbk_MnCVEpdsqHJVsg@mail.gmail.com
If we have a RestrictInfo that mentions both the removal-candidate
relation and the outer join's relid, then that is a pushed-down
condition not a join condition, so it should be grounds for deciding
that we can't remove the outer join. In commit 2489d76c4, I'd blindly
included the OJ's relid into "joinrelids" as per the new standard
convention, but the checks of attr_needed and ph_needed should only
allow the join's input rels to be mentioned.
Having done that, the check for references in pushed-down quals
a few lines further down should be redundant. I left it in place
as an Assert, though.
While researching this I happened across a couple of comments that
worried about the effects of update_placeholder_eval_levels.
That's gone as of b448f1c8d, so we can remove some worry.
Per bug #17769 from Robins Tharakan. The submitted test case
triggers this more or less accidentally because we flatten out
a LATERAL sub-select after we've done join strength reduction;
if we did that in the other order, this problem would be masked
because the outer join would get simplified to an inner join.
To ensure that the committed test case will continue to test what it is
meant to even if we make that happen someday, use a
test clause involving COALESCE(), which will prevent us from
using it to do join strength reduction.
Patch by me, but thanks to Richard Guo for initial investigation.
Discussion: https://postgr.es/m/17769-e4f7a5c9d84a80a7@postgresql.org
After pulling up LATERAL subqueries, we may have qual clauses that
refer to relations outside their syntactic scope. Before doing any
such pullup, prepjointree.c checks to make sure that it wouldn't
create a semantically-invalid situation; but we leave it to
deconstruct_jointree() to actually move these quals up the join
tree to a place where they can be evaluated. In commit 2489d76c4,
I (tgl) refactored deconstruct_jointree() in a way that caused
assertion failures while moving such quals, because the new logic
failed to distinguish "this jointree node is a parent of the source
one" from "this jointree node is processed after the source
one in depth-first order".
Fix this, and at the same time reduce the overhead a bit, by
getting rid of the common PostponedQual list and instead making each
JoinTreeItem contain a list of quals that needed to be postponed to
its level. We can help distribute_qual_to_rels find the appropriate
JoinTreeItem efficiently by adding parent-item links to the
JoinTreeItem data structure. This ends up being the same number
of relid subset checks as the original (pre-bug) logic, but less
list manipulation is required during multi-level postponements.
Richard Guo and Tom Lane, per bug #17768 from Robins Tharakan.
Discussion: https://postgr.es/m/17768-5ac8730ece54478f@postgresql.org
This allows underscores to be used in integer and numeric literals,
and their corresponding type input functions, for visual grouping.
For example:
1_500_000_000
3.14159_26535_89793
0xffff_ffff
0b_1001_0001
A single underscore is allowed between any 2 digits, or immediately
after the base prefix indicator of non-decimal integers, per SQL:202x
draft.
Peter Eisentraut and Dean Rasheed
Discussion: https://postgr.es/m/84aae844-dc55-a4be-86d9-4f0fa405cc97%40enterprisedb.com
The prior coding of int64_div_fast_to_numeric() had a number of bugs
that would cause it to fail under different circumstances, such as
with log10val2 <= 0, or log10val2 a multiple of 4, or in the "slow"
numeric path with log10val2 >= 10.
None of those could be triggered by any of our current code, which
only uses log10val2 = 3 or 6. However, they made it a hazard for any
future code that might use it. Also, since this is exported by
numeric.c, users writing their own C code might choose to use it.
Therefore fix, and back-patch to v14, where it was introduced.
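For example (a sketch of typical usage), dividing a microsecond count by 10^6
to get seconds as a numeric:

    Numeric secs = int64_div_fast_to_numeric(usecs, 6);    /* usecs / 1000000 */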
Dean Rasheed, reviewed by Tom Lane.
Discussion: https://postgr.es/m/CAEZATCW8gXgW0tgPxPgHDPhVX71%2BSWFRkhnXy%2BTfGDsKLepu2g%40mail.gmail.com
Breaking <phrase> over two lines is not handled by psql's
create_help.pl. (It creates faulty \help output.)
Undo the formatting change introduced by
9bdad1b515 to fix this for now.
So far we have used containers for testing Windows on cirrus-ci. Unfortunately
they come with substantial overhead: First, the container images are pulled
onto the host on-demand. Due to the large size of Windows containers, that
ends up taking nearly 4 minutes. Secondly, IO is slow, leading to CI runs
taking a long time.
Thus switch to Windows VMs, improving Windows CI times by well over 2x.
Author: Nazir Bilal Yavuz <byavuz81@gmail.com>
Discussion: https://postgr.es/m/211afb88-6df6-b74d-f1b7-84b5f21ad875@gmail.com
Backpatch: 15-, where CI was added
To interlock against concurrent drops, we used to hold ExclusiveLock on
pg_replication_origin till xact commit. This blocks even concurrent drops
of different origins by tablesync workers. So, instead, lock the specific
origin to interlock against concurrent drops.
This reduces the test time variability in src/test/subscription where
multiple tables are being synced.
Author: Vignesh C
Reviewed-by: Hou Zhijie, Amit Kapila
Discussion: https://postgr.es/m/1412708.1674417574@sss.pgh.pa.us
Backward and forward scans share much of the same page acquisition code.
Here we consolidate that code to reduce some duplication.
Additionally, add a new rs_coffset field to HeapScanDescData to track the
offset of the current tuple. The new field fits nicely into the padding
between a bool and BlockNumber field and saves having to look at the last
returned tuple to figure out which offset we should be looking at for the
current tuple.
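The relevant excerpt of HeapScanDescData now looks roughly like this
(surrounding fields elided; see heapam.h for the real definition):

    bool         rs_inited;     /* false = scan not init'd yet */
    OffsetNumber rs_coffset;    /* current offset # in scan (new field) */
    BlockNumber  rs_cblock;     /* current block # in scan */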
Author: Melanie Plageman
Reviewed-by: David Rowley
Discussion: https://postgr.es/m/CAAKRu_bvkhka0CZQun28KTqhuUh5ZqY=_T8QEqZqOL02rpi2bw@mail.gmail.com
In the 90s we needed to deal with computers that still had the
pre-standard signal masking APIs. That hasn't been relevant for a very
long time on Unix systems, and c94ae9d8 got rid of a remaining
dependency in our Windows porting code. PG_SETMASK didn't expose
save/restore functionality, so we'd already started using sigprocmask()
directly in places, creating the visual distraction of having two ways
to spell it. It's not part of the API that extensions are expected to
be using (but if they are, the change will be trivial). It seems like a
good time to drop the old macro and just call the standard POSIX
function.
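The mechanical replacement looks like this:

    /* before */
    PG_SETMASK(&BlockSig);
    /* after */
    sigprocmask(SIG_SETMASK, &BlockSig, NULL);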
Reviewed-by: Nathan Bossart <nathandbossart@gmail.com>
Discussion: https://postgr.es/m/CA%2BhUKG%2BKfQgrhHP2DLTohX1WwubaCBHmTzGnAEDPZ-Gug-Xskg%40mail.gmail.com
Installing with plain cpan failed for me recently, as the archive it
searched has been purged of old releases. However, you can give cpanm a
complete URL to the exact version you want to install, so document using
that.
Improve the comment explaining the choice of rscale in numeric_sqrt(),
and ensure that the code works consistently when other values of
NBASE/DEC_DIGITS are used.
Note that, in practice, we always expect DEC_DIGITS == 4, and this
does not change the computation in that case.
Joel Jacobson and Dean Rasheed
Discussion: https://postgr.es/m/06712c29-98e9-43b3-98da-f234d81c6e49%40app.fastmail.com