The fields of NLSVERSIONINFOEX are of type DWORD, which is unsigned
long, so the results of the computations being printed are also of
type unsigned long.
The sslcertmode option controls whether the server is allowed and/or
required to request a certificate from the client. There are three
modes:
- "allow" is the default and follows the current behavior, where a
configured client certificate is sent if the server requests one
(via one of its default locations or sslcert). With the current
implementation, this will happen whenever TLS is negotiated.
- "disable" causes the client to refuse to send a client certificate
even if sslcert is configured or if a client certificate is available in
one of its default locations.
- "require" causes the client to fail if a client certificate is never
sent and the server opens a connection anyway. This doesn't add any
additional security, since there is no guarantee that the server is
validating the certificate correctly, but it may be helpful when
troubleshooting more complicated TLS setups.
sslcertmode=require requires SSL_CTX_set_cert_cb(), available since
OpenSSL 1.0.2. Note that LibreSSL does not include it.
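As a usage sketch (host name and file paths invented), a libpq client
that insists on certificate exchange could look like this:

    #include <stdio.h>
    #include <libpq-fe.h>

    int
    main(void)
    {
        /* sslcertmode=require makes the connection fail unless the
         * server requested a client certificate and libpq sent one */
        PGconn *conn = PQconnectdb(
            "host=db.example.com dbname=postgres "
            "sslmode=verify-full sslcertmode=require "
            "sslcert=/path/to/client.crt sslkey=/path/to/client.key");

        if (PQstatus(conn) != CONNECTION_OK)
            fprintf(stderr, "%s", PQerrorMessage(conn));
        PQfinish(conn);
        return 0;
    }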
Using a connection parameter separate from require_auth emerged as the
simplest design, because certificate authentication does not rely
directly on any of the AUTH_REQ_* codes, and one may want to require a
certificate to be sent in combination with a given authentication
method, like SCRAM-SHA-256.
TAP tests are added in src/test/ssl/, some of them relying on sslinfo to
check if a certificate has been set. These are compatible across all
the versions of OpenSSL supported on HEAD (currently down to 1.0.1).
Author: Jacob Champion
Reviewed-by: Aleksander Alekseev, Peter Eisentraut, David G. Johnston,
Michael Paquier
Discussion: https://postgr.es/m/9e5a8ccddb8355ea9fa4b75a1e3a9edc88a70cd3.camel@vmware.com
The same error message will be used for a different option, to be
introduced in a separate patch. Reshaping the error message as done
here saves some translation work.
Extracted from a larger patch by the same author.
Author: Jacob Champion
Discussion: https://postgr.es/m/9e5a8ccddb8355ea9fa4b75a1e3a9edc88a70cd3.camel@vmware.com
This commit renames the members of a few pgstat structures related to
functions and relations, respectively removing their "f_" and "t_"
prefixes. The statistics for functions and relations are handled in
their own files, and pgstatfuncs.c accesses each field through a
structure variable named after the object type handled, so no
information is lost with this rename.
This will help with some of the refactoring aimed at pgstatfuncs.c, as
it makes the field names more consistent with the SQL functions
retrieving them.
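For example (an illustrative subset, not the full list of members):

    PgStat_FunctionCounts.f_numcalls  ->  numcalls
    PgStat_TableCounts.t_numscans     ->  numscans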
Author: Bertrand Drouvot
Reviewed-by: Michael Paquier, Melanie Plageman
Discussion: https://postgr.es/m/9142f62a-a422-145c-bde0-b5bc498a4ada@gmail.com
Replace the symlink-chasing logic in find_my_exec with realpath(3),
which has been required by POSIX since SUSv2. (Windows lacks
realpath(), but there we can use _fullpath() which is functionally
equivalent.) The main benefit of this is that -- on all modern
platforms at least -- realpath() avoids the chdir() shenanigans
we used to perform while interpreting symlinks. That had various
corner-case failure modes so it's good to get rid of it.
There is still ongoing discussion about whether we could skip the
replacement of symlinks in some cases, but that's really a matter
for a separate patch. Meanwhile I want to push this before we get
too close to feature freeze, so that we can find out if there are
showstopper portability issues.
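A minimal sketch of the portability wrapper idea (function name
hypothetical; the real code also deals with buffer management and
error reporting):

    #include <stdlib.h>

    /* resolve symlinks without the old chdir() dance */
    static char *
    resolve_exec_path(const char *raw)
    {
    #ifdef WIN32
        return _fullpath(NULL, raw, 0);    /* functional equivalent */
    #else
        return realpath(raw, NULL);        /* POSIX realpath(3) */
    #endif
    }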
Discussion: https://postgr.es/m/797232.1662075573@sss.pgh.pa.us
Add pgstat counter to track row updates that result in the successor
version going to a new heap page, leaving behind an original version
whose t_ctid points to the new version. The current count is shown by
the n_tup_newpage_upd column of each of the pg_stat_*_tables views.
The new n_tup_newpage_upd column complements the existing n_tup_hot_upd
and n_tup_upd columns. Tables that have high n_tup_newpage_upd values
(relative to n_tup_upd) are good candidates for tuning heap fillfactor.
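Assuming the counter is bumped alongside the existing ones, the
accounting in heap_update() boils down to a single call of roughly
this shape:

    /* report whether the update was HOT, and whether it crossed pages */
    pgstat_count_heap_update(relation, use_hot_update, newbuf != buffer);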
Corey Huinker, with small tweaks by me.
Author: Corey Huinker <corey.huinker@gmail.com>
Reviewed-By: Peter Geoghegan <pg@bowt.ie>
Reviewed-By: Andres Freund <andres@anarazel.de>
Discussion: https://postgr.es/m/CADkLM=ded21M9iZ36hHm-vj2rE2d=zcKpUQMds__Xm2pxLfHKA@mail.gmail.com
The "und" locale is an alternative spelling of the root locale, but it
was not recognized until ICU 55. To maintain common behavior across
all supported ICU versions, check for "und" and replace with "root"
before opening.
Previously, the lack of support for "und" was dangerous, because
versions 54 and older fall back to the environment when a locale is
not found. If the user specified "und" for the language (which is
expected and documented), it could not only resolve to the wrong
collator, but it could unexpectedly change (which could lead to
corrupt indexes).
This effectively reverts commit d72900bded, which worked around the
problem for the built-in "unicode" collation, and is no longer
necessary.
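A self-contained sketch of the substitution (helper name hypothetical):

    #include <stdio.h>
    #include <string.h>

    /* ICU versions before 55 don't recognize "und"; hand them the
     * equivalent "root" locale instead */
    static const char *
    locale_for_icu_open(const char *loc, char *buf, size_t buflen)
    {
        if (strncmp(loc, "und", 3) == 0 &&
            (loc[3] == '\0' || loc[3] == '-' ||
             loc[3] == '_' || loc[3] == '@'))
        {
            snprintf(buf, buflen, "root%s", loc + 3);
            return buf;
        }
        return loc;
    }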
Discussion: https://postgr.es/m/60da0cecfb512a78b8666b31631a636215d8ce73.camel@j-davis.com
Discussion: https://postgr.es/m/0c6fa66f2753217d2a40480a96bd2ccf023536a1.camel@j-davis.com
Reviewed-by: Peter Eisentraut
We shouldn't set successor[whatever] to an offset number that is less
than FirstOffsetNumber or more than maxoff. We already avoided that
for redirects, but not for CTID links. Allowing bad offset numbers
into the successor[] array causes core dumps.
We shouldn't use HeapTupleHeaderIsHotUpdated() because it checks
stuff other than the status of the infomask2 bit HEAP_HOT_UPDATED.
We only care about the status of that bit, not the other stuff
that HeapTupleHeaderIsHotUpdated() checks. This mistake can cause
verify_heapam() to report corruption when none is present.
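The distinction, roughly (see htup_details.h):

    /* the full macro: also requires xmin/xmax validity */
    HeapTupleHeaderIsHotUpdated(tuphdr)

    /* what verify_heapam() should test: just the raw infomask2 bit */
    ((tuphdr)->t_infomask2 & HEAP_HOT_UPDATED) != 0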
The first hunk of this patch was written by me. The other two were
written by Andres Freund. This could probably do with more review
before commit, but I'd like to try to get the buildfarm green again
sooner rather than later.
Discussion: http://postgr.es/m/20230322204552.s6cv3ybqkklhhybb@awork3.anarazel.de
This is evidently not required by most compilers, but buildfarm
member fairywren is unhappy without it. It looks like the meson
infrastructure has this right already.
Prior to the introduction of the compression API in e9960732a9, pg_dump
used ZLIB_IN_SIZE/ZLIB_OUT_SIZE to size its input/output buffers.
Commit 0da243fed0 introduced similar constants for LZ4, but while gzip
defined both buffers to be 4kB, LZ4 used 4kB and 16kB without any clear
reason why that's desirable.
Furthermore, parts of the code unaware of which compression is used
(e.g. pg_backup_directory.c) continued to use ZLIB_OUT_SIZE directly.
Simplify by replacing the various constants with DEFAULT_IO_BUFFER_SIZE,
set to 4kB. The compression implementations still have an option to use
a custom value, but considering 4kB was fine for 20+ years, that seems
unlikely to be needed (and we'd probably just increase the default
buffer size instead).
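That is, a single shared constant along these lines:

    /* default buffer size for compression streams' input and output */
    #define DEFAULT_IO_BUFFER_SIZE 4096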
Author: Georgios Kokolatos
Reviewed-by: Tomas Vondra, Justin Pryzby
Discussion: https://postgr.es/m/33496f7c-3449-1426-d568-63f6bca2ac1f@gmail.com
After 0da243fed0 got committed, we received a report about a compiler
warning related to the new LZ4File_gets() function:
compress_lz4.c: In function 'LZ4File_gets':
compress_lz4.c:492:19: warning: comparison of unsigned expression in
'< 0' is always false [-Wtype-limits]
492 | if (dsize < 0)
The reason is very simple - dsize is declared as size_t, which is an
unsigned integer, and thus the check is pointless and we might fail to
notice an error in some cases (or fail in a strange way a bit later).
The warning could have been silenced by simply changing the type, but we
realized the API mostly assumes all the libraries use the same types and
report errors the same way (e.g. by returning 0 and/or negative value).
But we can't make this assumption - the gzip/lz4 libraries already
disagree on some of this, and even if they agreed, a library added in
the future might not.
The right solution is to define what the API does, and translate the
library-specific behavior in a consistent way (so that the internal
errors are not exposed to users of our compression API). So this
adjusts the data types in a couple of places, so that we don't miss
library errors, and simplifies and unifies the error reporting to
simply return true/false (instead of e.g. size_t).
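On the LZ4 side, the error check then looks roughly like this (LZ4F
signals errors via LZ4F_isError(), not negative values; the names here
are illustrative):

    size_t  status = LZ4F_decompress(dctx, outbuf, &outlen,
                                     inbuf, &inlen, NULL);

    if (LZ4F_isError(status))   /* not "status < 0": size_t is unsigned */
        pg_fatal("could not decompress data: %s",
                 LZ4F_getErrorName(status));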
While at it, make sure LZ4File_open_write() does not clobber errno in
case open_func() fails.
Author: Georgios Kokolatos
Reported-by: Alexander Lakhin
Reviewed-by: Tomas Vondra, Justin Pryzby
Discussion: https://postgr.es/m/33496f7c-3449-1426-d568-63f6bca2ac1f@gmail.com
Hide details of supporting older ICU versions in a wrapper
function. The current code only needs to handle
icu_set_collation_attributes(), but a subsequent commit will add
additional version-specific code.
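A sketch of the wrapper shape (the real code reports an error instead
of returning NULL; the version cutoff is kept in one place):

    #include <unicode/ucol.h>

    static UCollator *
    pg_ucol_open_sketch(const char *loc_str)
    {
        UErrorCode  status = U_ZERO_ERROR;
        UCollator  *collator = ucol_open(loc_str, &status);

        if (U_FAILURE(status))
            return NULL;

    #if U_ICU_VERSION_MAJOR_NUM < 54
        /* older ICU needs collation attributes applied by hand */
        icu_set_collation_attributes(collator, loc_str);
    #endif
        return collator;
    }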
Discussion: https://postgr.es/m/7ee414ad-deb5-1144-8a0e-b34ae3b71cd5@enterprisedb.com
Reviewed-by: Peter Eisentraut
We failed to apply updates and deletes when REPLICA IDENTITY FULL was
used for a table having generated columns, because we did not ignore
generated columns while comparing tuples from the publisher and
subscriber during the apply of updates and deletes.
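The fix amounts to skipping such columns in the comparison loop,
something along these lines (a sketch of tuples_equal() in
execReplication.c):

    Form_pg_attribute att = TupleDescAttr(desc, attrnum);

    /* ignore dropped and generated columns for the comparison */
    if (att->attisdropped || att->attgenerated)
        continue;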
Author: Onder Kalaci
Reviewed-by: Shi yu, Amit Kapila
Backpatch-through: 12
Discussion: https://postgr.es/m/CACawEhVQC9WoofunvXg12aXtbqKnEgWxoRx3+v8q32AWYsdpGg@mail.gmail.com
This patch allows copying tables in the binary format during table
synchronization when the binary option for a subscription is enabled.
Previously, tables were copied in text format even if the subscription
was created with the binary option enabled. Copying tables in binary
format may reduce the time spent, depending on column types.
A binary copy for initial table synchronization is supported only when
both publisher and subscriber are v16 or later.
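A self-contained sketch of the decision (helper name and version check
invented for illustration):

    #include <stdio.h>

    /* request binary COPY only when the subscription asks for it and
     * the publisher is new enough to support it for table sync */
    static void
    build_copy_command(char *buf, size_t len, const char *qualified_rel,
                       int publisher_version, int binary)
    {
        snprintf(buf, len, "COPY %s TO STDOUT%s", qualified_rel,
                 (binary && publisher_version >= 160000)
                 ? " WITH (FORMAT binary)" : "");
    }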
Author: Melih Mutlu
Reviewed-by: Peter Smith, Shi yu, Euler Taveira, Vignesh C, Kuroda Hayato, Osumi Takamichi, Bharath Rupireddy, Hou Zhijie
Discussion: https://postgr.es/m/CAGPVpCQvAziCLknEnygY0v1-KBtg%2BOm-9JHJYZOnNPKFJPompw%40mail.gmail.com
This commit improves the tests of pg_walinspect on a few things:
- Remove aggregates from queries that should fail. Written this way,
the queries would give more information if the code is ever reworked in
a way that changes their behavior.
- Expect at least one record to be reported by the valid queries that
scan across ranges, rather than zero records.
- Adjust a few comments, for consistency.
Author: Bharath Rupireddy
Discussion: https://postgr.es/m/CALj2ACVaoXW3nJD9zq8E66BEf-phgJfFcKRVJq9GXkuX0b3ULQ@mail.gmail.com
* Commit 3048898e dropped -ING from PHJ wait event names. Update the
corresponding barrier phase names to match.
* Rename the "DONE" phases to "FREE". That's symmetrical with
"ALLOCATE", and names the activity that actually happens in that phase
(as we do for the other phases) rather than a state. The bug fixed by
commit 8d578b9b might have been more obvious with this name.
* Rename the batch/bucket growth barriers' "ALLOCATE" phases to
"REALLOCATE", a better description of what they do.
* Update the high-level comments about phases, marking with an asterisk
the phases that are executed by a single process (mostly memory
management phases).
No behavior change, as this is just improving internal identifiers. The
only user-visible sign of this is that a couple of wait events' display
names change from "...Allocate" to "...Reallocate" in pg_stat_activity,
to stay in sync with the internal names.
Reviewed-by: Melanie Plageman <melanieplageman@gmail.com>
Discussion: https://postgr.es/m/CA%2BhUKG%2BMDpwF2Eo2LAvzd%3DpOh81wUTsrwU1uAwR-v6OGBB6%2B7g%40mail.gmail.com
Currently, in read committed transaction isolation mode (the default),
we have the following sequence of actions when tuple_update()/
tuple_delete() finds that the tuple has been updated by a concurrent
transaction.
1. Attempt to update/delete tuple with tuple_update()/tuple_delete(), which
returns TM_Updated.
2. Lock tuple with tuple_lock().
3. Re-evaluate plan qual (recheck if we still need to update/delete and
calculate the new tuple for update).
4. Second attempt to update/delete tuple with tuple_update()/tuple_delete().
This attempt should be successful, since the tuple was previously locked.
This patch eliminates step 2 by taking the lock during the first
tuple_update()/tuple_delete() call. The heap table access method saves
some effort by checking the updated tuple once instead of twice. Future
undo-based table access methods, which will start from the latest row
version, can immediately place a lock there.
The code in nodeModifyTable.c is simplified by removing the nested switch/case.
Discussion: https://postgr.es/m/CAPpHfdua-YFw3XTprfutzGp28xXLigFtzNbuFY8yPhqeq6X5kg%40mail.gmail.com
Reviewed-by: Aleksander Alekseev, Pavel Borisov, Vignesh C, Mason Sharp
Reviewed-by: Andres Freund, Chris Travers
When we lock a tuple using table_tuple_lock(), we at the same time
fetch the locked tuple into the slot. In this case we can skip the
extra table_tuple_fetch_row_version(), since we've already fetched the
'old' tuple and nobody can change it concurrently while it's locked.
Discussion: https://postgr.es/m/CAPpHfdua-YFw3XTprfutzGp28xXLigFtzNbuFY8yPhqeq6X5kg%40mail.gmail.com
Reviewed-by: Aleksander Alekseev, Pavel Borisov, Vignesh C, Mason Sharp
Reviewed-by: Andres Freund, Chris Travers
Use of pack("L") gets around the basic endian problem, but it doesn't
deal with the fact that the order of the bitfields within the struct
may differ. This patch fixes it to work with gcc on NetBSD/macppc,
but I wonder whether that will be enough --- in principle, there
could be four different combinations of bitpatterns needed here.
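A quick standalone illustration of why pack("L") alone can't cut it
(the struct mirrors ItemIdData):

    #include <stdio.h>
    #include <string.h>

    /* both the byte order of the 32-bit unit and the allocation order
     * of the bitfields within it are implementation-defined */
    struct lp { unsigned lp_off:15, lp_flags:2, lp_len:15; };

    int
    main(void)
    {
        struct lp ip = { .lp_off = 3, .lp_flags = 1, .lp_len = 7 };
        unsigned char raw[sizeof ip];
        size_t i;

        memcpy(raw, &ip, sizeof ip);
        for (i = 0; i < sizeof ip; i++)
            printf("%02x ", raw[i]);
        putchar('\n');
        return 0;
    }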
Discussion: https://postgr.es/m/1650745.1679513221@sss.pgh.pa.us
In commit 3e51b278d, I misinterpreted the coding in setup_config()
as setting min_wal_size and max_wal_size to compile-time-constant
values. But it's not: there's a hidden dependency on --wal-segsize.
Therefore leaving these variables commented out is the wrong thing.
Per report from Andres Freund.
Discussion: https://postgr.es/m/20230322200751.jvfvsuuhd3hgm6vv@awork3.anarazel.de
While testing commit 3e51b278d, I noted that initdb leaks about a
megabyte worth of data due to the sloppy bookkeeping in its
string-manipulating code. That's not a huge amount on modern machines,
but it's still kind of annoying, and it's easy to fix by recognizing
that we might as well treat these arrays of strings as
modifiable-in-place. There's no caller that cares about preserving
the old state of the array after replace_token or replace_guc_value.
With this fix, valgrind sees only a few hundred bytes leaked during
an initdb run.
Discussion: https://postgr.es/m/2844176.1674681919@sss.pgh.pa.us
This option, or its long form --set, sets the GUC "name" to "value".
The setting applies in the bootstrap and standalone servers run by
initdb, and is also written into the generated postgresql.conf.
This can save an extra editing step when creating a new cluster,
but the real use-case is for coping with situations where the
bootstrap server fails to start due to environmental issues;
for example, if it's necessary to force huge_pages to off.
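For instance, something like:

    initdb -D /path/to/data -c huge_pages=off

would run the bootstrap with huge_pages forced off and also write the
setting into the generated postgresql.conf.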
Discussion: https://postgr.es/m/2844176.1674681919@sss.pgh.pa.us
RelationCopyStorageUsingBuffer() did not free the strategies used to access
the source / target relation. The memory was released at the end of the
transaction, but when using a template database with a lot of relations,
the temporary leak can become prohibitively big.
RelationCopyStorageUsingBuffer() acquired the buffer for the target relation
with RBM_NORMAL, therefore requiring a read of a block guaranteed to be
zero. Use RBM_ZERO_AND_LOCK instead.
Reviewed-by: Robert Haas <robertmhaas@gmail.com>
Discussion: https://postgr.es/m/20230321070113.o2vqqxogjykwgfrr@awork3.anarazel.de
Backpatch: 15-, where STRATEGY WAL_LOG was introduced
Prior to this commit, we only considered each tuple or line pointer
on the page in isolation, but now we can do some validation of a line
pointer against its successor. For example, a redirect line pointer
shouldn't point to another redirect line pointer, and if a tuple
is HOT-updated, the result should be a heap-only tuple.
Himanshu Upadhyaya and Robert Haas, reviewed by Aleksander Alekseev,
Andres Freund, and Peter Geoghegan.
This commit adds some documentation about two monitoring functions:
- pg_stat_get_xact_blocks_fetched()
- pg_stat_get_xact_blocks_hit()
The descriptions of these functions were removed in ddfc2d9, and the
surrounding docs were later simplified by 5f2b089, assuming that all
the functions whose descriptions were removed are used in system views.
Unfortunately, some of them are not used in any system view, so they
lacked documentation.
This gap has existed in the docs for a long time, so backpatch all the
way down.
Reported-by: Michael Paquier
Author: Bertrand Drouvot
Reviewed-by: Kyotaro Horiguchi
Discussion: https://postgr.es/m/ZBeeH5UoNkTPrwHO@paquier.xyz
Backpatch-through: 11
The files generated by gen_node_support.pl for query jumbling
(queryjumblefuncs.funcs.c and queryjumblefuncs.switch.c) were not being
removed on make maintainer-clean (they need to remain around after a
simple "clean"). This commit makes the operation consistent with the
copy, equal, out and read files.
While at it, update a comment in the nodes' README where a reference to
queryjumblefuncs.funcs.c was missing.
Reported-by: Nathan Bossart
Reviewed-by: Richard Guo, Daniel Gustafsson
Discussion: https://postgr.es/m/ZBgAfTHcL6W7zGdW@paquier.xyz
As demonstrated by David Johnston, the Memoize cache hit ratio calculation
wasn't quite correct.
This change only affects the estimated hit ratio when the number of
entries to cache is estimated not to fit inside the cache. For example,
suppose we expect 2000 distinct cache key values and 10000 calls, but
only expect to be able to cache 1000 entries at once due to memory
constraints. If we could store all 2000 entries, the hit ratio would be
80%, to account for the first lookup of each of the 2000 distinct
values being a cache miss. Since we can only store 1000 of the 2000
distinct values at once, that 80% should be halved, making the final
estimate 40%. Previously, the calculation would have produced an
estimated hit ratio of 30%, which wasn't correct.
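Roughly, in terms of the estimate's arithmetic (formulas paraphrased
from cost_memoize_rescan()):

    new:  (calls - ndistinct) / calls * (est_cache_entries / ndistinct)
        = ((10000 - 2000) / 10000) * (1000 / 2000)
        = 0.8 * 0.5 = 40%

    old:  (est_cache_entries / ndistinct) - (ndistinct / calls)
        = (1000 / 2000) - (2000 / 10000)
        = 0.5 - 0.2 = 30%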
Apply to master only so as not to destabilize plans in the back branches.
Reported-by: David G. Johnston
Discussion: https://postgr.es/m/CAKFQuwZEmcNk3YQo2Xj4EDUOdY6qakad31rOD1Vc4q1_s68-Ew@mail.gmail.com
Discussion: https://postgr.es/m/CAApHDvrV44LwiF4W_qf_RpbGYWSgp1kF=cZr+kTRRaALUfmXqw@mail.gmail.com
These are set after a \! command or a backtick substitution.
SHELL_ERROR is just "true" for error (nonzero exit status) or "false"
for success, while SHELL_EXIT_CODE records the actual exit status
following standard shell/system(3) conventions.
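An illustrative psql session:

    postgres=# \! cat /nonexistent
    cat: /nonexistent: No such file or directory
    postgres=# \echo :SHELL_ERROR :SHELL_EXIT_CODE
    true 1
    postgres=# \! true
    postgres=# \echo :SHELL_ERROR :SHELL_EXIT_CODE
    false 0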
Corey Huinker, reviewed by Maxim Orlov and myself
Discussion: https://postgr.es/m/CADkLM=cWao2x2f+UDw15W1JkVFr_bsxfstw=NGea7r9m4j-7rQ@mail.gmail.com
The check for the number of roles in the target cluster for an upgrade
selects the existing roles and performs a COUNT(*) over the result. A
value of one is the expected query result value indicating that only
the install user is present in the new cluster. The result was converted
with the function for converting a string containing an Oid into a numeric,
which avoids potential overflow but makes the code less readable since
it's not actually an Oid at all.
Discussion: https://postgr.es/m/41AB5F1F-4389-4B25-9668-5C430375836C@yesql.se
This makes it easier to specify values taken directly from WAL file
names.
The option parsing is arranged in the style of option_parse_int() (but
we need to parse unsigned int), to allow future refactoring in the
same manner.
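A sketch of what that could look like (name and messages illustrative,
following the option_parse_int() pattern):

    #include <errno.h>
    #include <stdbool.h>
    #include <stdio.h>
    #include <stdlib.h>

    static bool
    option_parse_uint(const char *optarg, const char *optname,
                      unsigned int min, unsigned int max,
                      unsigned int *result)
    {
        char *endptr;
        unsigned long val;

        errno = 0;
        val = strtoul(optarg, &endptr, 0);
        if (*endptr != '\0' || errno == ERANGE || val < min || val > max)
        {
            fprintf(stderr, "invalid value \"%s\" for option %s\n",
                    optarg, optname);
            return false;
        }
        *result = (unsigned int) val;
        return true;
    }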
Reviewed-by: Sébastien Lardière <sebastien@lardiere.net>
Discussion: https://www.postgresql.org/message-id/flat/8fef346e-2541-76c3-d768-6536ae052993@lardiere.net
We failed to apply updates and deletes when REPLICA IDENTITY FULL was
used for a table having dropped columns, because we did not ignore
dropped columns while comparing tuples from the publisher and
subscriber during the apply of updates and deletes.
Author: Onder Kalaci, Shi yu
Reviewed-by: Amit Kapila
Discussion: https://postgr.es/m/CACawEhVQC9WoofunvXg12aXtbqKnEgWxoRx3+v8q32AWYsdpGg@mail.gmail.com
With unlucky timing and parallel_leader_participation=off (not the
default), PHJ could attempt to access per-batch shared state just as it
was being freed. There was code intended to prevent that by checking
for a cleared pointer, but it was racy. Fix, by introducing an extra
barrier phase. The new phase PHJ_BUILD_RUNNING means that it's safe to
access the per-batch state to find a batch to help with, and
PHJ_BUILD_DONE means that it is too late. The last to detach will free
the array of per-batch state as before, but now it will also atomically
advance the phase, so that late attachers can avoid the hazard. This
mirrors the way per-batch hash tables are freed (see phases
PHJ_BATCH_PROBING and PHJ_BATCH_DONE).
An earlier attempt to fix this (commit 3b8981b6, later reverted) missed
one special case. When the inner side is empty (the "empty inner"
optimization), the build barrier would only make it to
PHJ_BUILD_HASHING_INNER phase before workers attempted to detach from
the hashtable. In that case, fast-forward the build barrier to
PHJ_BUILD_RUNNING before proceeding, so that our later assertions hold
and we can still negotiate who is cleaning up.
Revealed by build farm failures, where BarrierAttach() failed a sanity
check assertion, because the memory had been clobbered by dsa_free().
In non-assert builds, the result could be a segmentation fault.
Back-patch to all supported releases.
Author: Thomas Munro <thomas.munro@gmail.com>
Author: Melanie Plageman <melanieplageman@gmail.com>
Reported-by: Michael Paquier <michael@paquier.xyz>
Reported-by: David Geier <geidav.pg@gmail.com>
Tested-by: David Geier <geidav.pg@gmail.com>
Discussion: https://postgr.es/m/20200929061142.GA29096%40paquier.xyz
Counting writes only for io_context = 'normal' is unreliable, as backends
using a buffer access strategy could flush all of the dirty buffers out from
under the other backends and checkpointer. Change the test to count writes in
any context. This achieves roughly the same coverage anyway.
Reported-by: Justin Pryzby <pryzby@telsasoft.com>
Author: Melanie Plageman <melanieplageman@gmail.com>
Discussion: https://www.postgresql.org/message-id/ZAnWU8WbXEDjrfUE%40telsasoft.com
@extschema:name@ extends the existing @extschema@ feature so that
we can also insert the schema name of some required extension,
thus making cross-extension references robust even if they are in
different schemas.
However, this has the same hazard as @extschema@: if the schema
name is embedded literally in an installed object, rather than being
looked up once during extension script execution, then it's no longer
safe to relocate the other extension to another schema. To deal with
that without restricting things unnecessarily, add a "no_relocate"
option to extension control files. This allows an extension to
specify that it cannot handle relocation of some of its required
extensions, even if in themselves those extensions are relocatable.
We detect "no_relocate" requests of dependent extensions during
ALTER EXTENSION SET SCHEMA.
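As a hypothetical illustration, the script of an extension requiring
"extfoo" could write:

    SELECT @extschema:extfoo@.do_something();

and, if the resolved schema name ends up stored in one of its installed
objects, its control file would declare:

    no_relocate = 'extfoo'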
Regina Obe, reviewed by Sandro Santilli and myself
Discussion: https://postgr.es/m/003001d8f4ae$402282c0$c0678840$@pcorp.us
When determining whether an index update may be skipped by using HOT,
we can ignore attributes indexed by block summarizing indexes, which
have no references to the individual tuples that need to be cleaned up.
A new type TU_UpdateIndexes provides a signal to the executor to
determine which indexes to update - no indexes, all indexes, or only the
summarizing indexes.
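The signal is a simple enum, roughly:

    /* which categories of indexes need updating after a heap update */
    typedef enum TU_UpdateIndexes
    {
        TU_None,        /* no indexed columns changed (HOT) */
        TU_All,         /* update all indexes */
        TU_Summarizing  /* update only summarizing indexes */
    } TU_UpdateIndexes;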
This also removes the rd_indexattr list, replacing it with an
rd_attrsvalid flag. The list was not used anywhere, and a simple flag
is sufficient.
This was originally committed as 5753d4ee32, but then got reverted by
e3fcca0d0d because of correctness issues.
Original patch by Josef Simanek, various fixes and improvements by Tomas
Vondra and me.
Authors: Matthias van de Meent, Josef Simanek, Tomas Vondra
Reviewed-by: Tomas Vondra, Alvaro Herrera
Discussion: https://postgr.es/m/05ebcb44-f383-86e3-4f31-0a97a55634cf@enterprisedb.com
Discussion: https://postgr.es/m/CAFp7QwpMRGcDAQumN7onN9HjrJ3u4X3ZRXdGFT0K5G2JWvnbWg%40mail.gmail.com
When calculating distance in brin_minmax_multi_distance_inet(), the
netmask was applied incorrectly. This results in (seemingly) incorrect
ordering of values, triggering an assert.
For builds without asserts this is mostly harmless - we may merge other
ranges, possibly resulting in a slightly less efficient index. But it's
still correct, and the greedy algorithm doesn't guarantee optimality
anyway.
Backpatch to 14, where minmax-multi indexes were introduced.
Reported by Dmitry Dolgov, investigation and fix by me.
Reported-by: Dmitry Dolgov
Backpatch-through: 14
Discussion: https://postgr.es/m/17774-c6f3e36dd4471e67@postgresql.org