This adds the ability to pretty-print XML documents ... according to
libxml's somewhat idiosyncratic notions of what's pretty, anyway.
One notable divergence from a strict reading of the spec is that
libxml is willing to collapse empty nodes "<node></node>" to just
"<node/>", whereas SQL and the underlying XML spec say that this
option should only result in whitespace tweaks. Nonetheless,
it seems close enough to justify using the SQL-standard syntax.
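As a rough illustration (assuming the feature is exposed through the
SQL-standard INDENT clause of XMLSERIALIZE; the input value is illustrative):

    -- pretty-print a document; '<node></node>' may come back as '<node/>'
    SELECT xmlserialize(DOCUMENT '<root><node></node></root>'::xml
                        AS text INDENT);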
Jim Jones, reviewed by Peter Smith and myself
Discussion: https://postgr.es/m/2f5df461-dad8-6d7d-4568-08e10608a69b@uni-muenster.de
The hook can be installed by a shared_preload library.
A similar mechanism could be used for radius passwords, for example, and
the type name auth_password_hook_typ has been chosen with that in mind.
John Naylor and Andrew Dunstan
Discussion: https://postgr.es/m/469b06ed-69de-ba59-c13a-91d2372e52a9@dunslane.net
preprocess_targetlist thought PHVs couldn't appear here.
It was mistaken, as per report from Önder Kalacı.
Surveying other pull_var_clause calls, I noted no similar errors,
but I did notice that qual_is_pushdown_safe's assertion about
!contain_window_function was pointless, because the following
pull_var_clause call would complain about them anyway. In HEAD
only, remove the redundant Assert and improve the commentary.
Discussion: https://postgr.es/m/CACawEhUuum-gC_2S3sXLTcsk7bUSPSHOD+g1ZpfKaDK-KKPPWA@mail.gmail.com
The previous coding based on select() had commentary about historical
portability concerns. Use POSIX nanosleep() instead.
This has independently been suggested a couple of times before, but
never managed to stick. Since recent and proposed work removes other
uses of select(), and associated code and comments relating to its
non-portable interaction with signals, it seems like a good time to tidy
up this case, too.
Also modernize the explanation of why WaitLatch() is a better way to
wait.
Reviewed-by: Nathan Bossart <nathandbossart@gmail.com>
Suggested-by: Paul Guo <paulguo@gmail.com>
Suggested-by: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://postgr.es/m/CAAKRu_b-q0hXCBUCAATh0Z4Zi6UkiC0k2DFgoD3nC-r3SkR3tg%40mail.gmail.com
Discussion: https://postgr.es/m/CABQrizfxpBLZT5mZeE0js5oCh1tqEWvcGF3vMRCv5P-RwUY5dQ@mail.gmail.com
Discussion: https://postgr.es/m/4902.1552349020@sss.pgh.pa.us
There are still some systems that use traditional tick-based sleep
timing, but many, including Linux, FreeBSD and macOS, started using
high-resolution timer hardware more directly a decade or two ago. Update our
comment about that. Also highlight that Windows is like the older
Unixen in that respect.
Author: Nathan Bossart <nathandbossart@gmail.com>
Discussion: https://postgr.es/m/CA%2BhUKG%2BogAon8_V223Ldv6taPR2uKH3X_UJ_A7LJAf3-VRARPA%40mail.gmail.com
The current implementation of _pgfstat64() is ineffective in detecting a
terminal handle or an anonymous named pipe. This commit improves our
port of fstat() to detect such cases more efficiently by relying on
GetFileType(), and to return more correct data when the type found is
either FILE_TYPE_PIPE (_S_IFIFO) or FILE_TYPE_CHAR (_S_IFCHR).
This is part of a more global fix to address failures when feeding the
output generated by pg_dump to pg_restore through a pipe, for example,
but not all of it. We are also going to need to do something about
fseek() and ftello(), which are not reliable on WIN32 for the same cases
where fstat() was incorrect. Fixing fstat() is independent of the rest,
though, which is why both fixes are handled separately, and this is the
first part of it.
Reported-by: Daniel Watzinger
Author: Daniel Watzinger, Juan José Santamaría Flecha
Discussion: https://postgr.es/m/b1448cd7-871e-20e3-8398-895e2d1d3bf9@gmail.com
Backpatch-through: 14
Using REPLICA IDENTITY FULL on the publisher can lead to a full table scan
per tuple change on the subscription when REPLICA IDENTITY or PK index is
not available. This makes REPLICA IDENTITY FULL impractical to use except
in a small number of use cases.
This patch allows using indexes other than PRIMARY KEY or REPLICA
IDENTITY on the subscriber during apply of update/delete. The index that
can be used must be a btree index, not a partial index, and it must have
at least one column reference (i.e. cannot consist of only expressions).
We can lift these restrictions in the future. There is no smart
mechanism to pick the index. If there is more than one index that
satisfies these requirements, we just pick the first one. We discussed
using some of the optimizer's low-level APIs for this but ruled it out
as that can be a maintenance burden in the long run.
This patch improves the performance in the vast majority of cases and the
improvement is proportional to the amount of data in the table. However,
there could be some regression in a small number of cases where the indexes
have a lot of duplicate and dead rows. It was discussed that those are
mostly impractical cases but we can provide a table or subscription level
option to disable this feature if required.
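For instance (a hypothetical sketch, with illustrative table and index names):

    -- Publisher: no usable PK or replica identity index, so fall back to FULL.
    ALTER TABLE orders REPLICA IDENTITY FULL;

    -- Subscriber: a plain btree index (not partial, with at least one column
    -- reference) can now be picked to locate rows while applying
    -- UPDATE/DELETE, instead of scanning the whole table per change.
    CREATE INDEX orders_customer_id_idx ON orders (customer_id);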
Author: Onder Kalaci, Amit Kapila
Reviewed-by: Peter Smith, Shi yu, Hou Zhijie, Vignesh C, Kuroda Hayato, Amit Kapila
Discussion: https://postgr.es/m/CACawEhVLqmAAyPXdHEPv1ssU2c=dqOniiGz7G73HfyS7+nGV4w@mail.gmail.com
Commit 4753ef37 changed vacuum_delay_point() to use the WaitLatch() API,
to fix the problem that vacuum could keep running for a very long time
after the postmaster died.
Unfortunately, that broke commit caf626b2's support for fractional
vacuum_cost_delay, which shipped in PostgreSQL 12. WaitLatch() works in
whole milliseconds.
For now, revert the change from commit 4753ef37, but add an explicit
check for postmaster death. That's an extra system call on systems
other than Linux and FreeBSD, but that overhead doesn't matter much
considering that we willingly went to sleep and woke up again. (In
later work, we might add higher resolution timeouts to the latch API so
that we could do this with our standard programming pattern, but that
wouldn't be back-patched.)
Back-patch to 14, where commit 4753ef37 arrived.
Reported-by: Melanie Plageman <melanieplageman@gmail.com>
Discussion: https://postgr.es/m/CAAKRu_b-q0hXCBUCAATh0Z4Zi6UkiC0k2DFgoD3nC-r3SkR3tg%40mail.gmail.com
Our waitpid() emulation didn't prevent a PID from being recycled by the
OS before the call to waitpid(). The postmaster could end up
tracking more than one child process with the same PID, and confuse
them.
Fix, by moving the guts of pgwin32_deadchild_callback() into waitpid(),
so that resources are released synchronously. The process and PID
continue to exist until we close the process handle, which only happens
once we're ready to adjust our book-keeping of running children.
This seems to explain a couple of failures on CI. It had never been
reported before, despite the code being as old as the Windows port.
Perhaps Windows started recycling PIDs more rapidly, or perhaps timing
changes due to commit 7389aad6 made it more likely to break.
Thanks to Alexander Lakhin for analysis and Andres Freund for tracking
down the root cause.
Back-patch to all supported branches.
Reported-by: Andres Freund <andres@anarazel.de>
Discussion: https://postgr.es/m/20230208012852.bvkn2am4h4iqjogq%40awork3.anarazel.de
The band-aid applied in commit f0bedf3e4 turns out to still need
some work: it made sure we didn't set Np->last_relevant too small
(to the left of the decimal point), but it didn't prevent setting
it too large (off the end of the partially-converted string).
This could result in fetching data beyond the end of the allocated
space, which with very bad luck could cause a SIGSEGV, though
I don't see any hazard of interesting memory disclosure.
Per bug #17839 from Thiago Nunes. The bug's pretty ancient,
so back-patch to all supported versions.
Discussion: https://postgr.es/m/17839-aada50db24d7b0da@postgresql.org
This patch adds new pg_dump switches
    --table-and-children=pattern
    --exclude-table-and-children=pattern
    --exclude-table-data-and-children=pattern
which act the same as the existing --table, --exclude-table, and
--exclude-table-data switches, except that any partitions or
inheritance child tables of the table(s) matching the pattern
are also included or excluded.
Gilles Darold, reviewed by Stéphane Tachoires
Discussion: https://postgr.es/m/5aa393b5-5f67-8447-b83e-544516990ee2@migops.com
Scanning the expression for compatible Vars isn't really necessary,
because the subsequent match against StatisticExtInfo entries will
eliminate expressions containing other Vars just fine. Moreover,
this code hadn't stopped to think about what to do with
PlaceHolderVars or Aggrefs in the clause; and at least for the PHV
case, that demonstrably leads to failures. Rather than work out
whether it's reasonable to ignore those, let's just remove the
whole stanza.
Per report from Richard Guo. Back-patch to v14 where this code
was added.
Discussion: https://postgr.es/m/CAMbWs48Mmvm-acGevXuwpB=g5JMqVSL6i9z5UaJyLGJqa-XPAA@mail.gmail.com
Use PostgreSQL consistently when referring to the product name rather
than Postgres. This also adds <productname> markup.
Reviewed-by: Alvaro Herrera <alvherre@alvh.no-ip.org>
Reviewed-by: "Jonathan S. Katz" <jkatz@postgresql.org>
Discussion: https://postgr.es/m/9C019644-9EA4-4B79-A52C-5C47A5B6B2DF@yesql.se
This commit reworks the set-returning functions of pg_walinspect a bit,
making them more flexible regarding their end LSN:
- pg_get_wal_records_info()
- pg_get_wal_stats()
- pg_get_wal_block_info()
The end LSNs given to these functions are now handled so that a value
higher than the current LSN of the cluster (insert LSN for a primary, or
replay LSN for a standby) does not raise an error, giving more
flexibility to monitoring queries. Instead, the functions return
results up to the current LSN, as found at the beginning of each
function call.
As a result, pg_get_wal_records_info_till_end_of_wal() and
pg_get_wal_stats_till_end_of_wal() are now removed from 1.1, as the
existing, equivalent functions offer the same capabilities.
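For example (a sketch; the LSN values are placeholders), an end LSN beyond
the current insert or replay LSN no longer errors out:

    -- Returns information only up to the current LSN of the cluster, even
    -- though the end LSN requested here is far beyond it.
    SELECT count(*)
    FROM pg_get_wal_records_info('0/1000000', 'FFFFFFFF/FFFFFFFF');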
Author: Bharath Rupireddy
Discussion: https://postgr.es/m/CALj2ACU0_q-o4DSweyaW9NO1KBx-QkN6G_OzYQvpjf3CZVASkg@mail.gmail.com
Expose the standard error functions as SQL-callable functions. These
are expected to be useful to people working with normal distributions,
and we use them here to test the distribution from random_normal().
Since these functions are defined in the POSIX and C99 standards, they
should in theory be available on all supported platforms. If that
turns out not to be the case, more work will be needed.
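A quick sketch, assuming the new functions are exposed as erf() and erfc():

    -- erfc(x) = 1 - erf(x); erf(1/sqrt(2)) is the fraction of a normal
    -- distribution within one standard deviation of the mean (about 0.6827).
    SELECT erf(1 / sqrt(2)) AS within_one_sigma,
           erfc(1 / sqrt(2)) AS outside_one_sigma;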
On all platforms tested so far, using extra_float_digits = -1 in the
regression tests is sufficient to allow for variations between
implementations. However, past experience has shown that there are
almost certainly going to be additional unexpected portability issues,
so these tests may well need further adjustments, based on the
buildfarm results.
Dean Rasheed, reviewed by Nathan Bossart and Thomas Munro.
Discussion: https://postgr.es/m/CAEZATCXv5fi7+Vu-POiyai+ucF95+YMcCMafxV+eZuN1B-=MkQ@mail.gmail.com
The new connection parameter require_auth allows a libpq client to
define a list of comma-separated acceptable authentication types for use
with the server. There is no negotiation: if the server does not
present one of the allowed authentication requests, the connection
attempt done by the client fails.
The following keywords can be defined in the list:
- password, for AUTH_REQ_PASSWORD.
- md5, for AUTH_REQ_MD5.
- gss, for AUTH_REQ_GSS[_CONT].
- sspi, for AUTH_REQ_SSPI and AUTH_REQ_GSS_CONT.
- scram-sha-256, for AUTH_REQ_SASL[_CONT|_FIN].
- creds, for AUTH_REQ_SCM_CREDS (perhaps this should be removed entirely
now).
- none, to control unauthenticated connections.
All the methods that can be defined in the list can be negated, like
"!password", in which case the server must NOT use the listed
authentication type. The special method "none" allows/disallows the use
of unauthenticated connections (but it does not govern transport-level
authentication via TLS or GSSAPI).
Internally, the patch logic is tied to check_expected_areq(), which was
used for channel_binding, ensuring that an incoming request is
compatible with conn->require_auth. It also introduces a new flag,
conn->client_finished_auth, which is set by various authentication
routines when the client side of the handshake is finished. This
signals to check_expected_areq() that an AUTH_REQ_OK from the server is
expected, and allows the client to complain if the server bypasses
authentication entirely, for example by sending an AUTH_REQ_OK message
too early.
Regression tests are added in authentication TAP tests for all the
keywords supported (except "creds", because it is around only for
compatibility reasons). A new TAP script has been added for SSPI, as
there was no script dedicated to it yet. It relies on SSPI being the
default authentication method on Windows, as set by pg_regress.
Author: Jacob Champion
Reviewed-by: Peter Eisentraut, David G. Johnston, Michael Paquier
Discussion: https://postgr.es/m/9e5a8ccddb8355ea9fa4b75a1e3a9edc88a70cd3.camel@vmware.com
When not detecting openssl via pkg-config, we'd error out if the headers
weren't found, even with -Dssl=auto. When detecting via pkg-config but the
headers could not be found, we'd error out because the ssl_int variable would
not exist.
Reported-by: Nathan Bossart <nathandbossart@gmail.com>
Discussion: https://postgr.es/m/20230313180432.GA246741@nathanxps13
The majority of error exit cases in json_lex_string() failed to
set lex->token_terminator, causing problems for the error context
reporting code: it would see token_terminator less than token_start
and do something more or less nuts. In v14 and up the end result
could be as bad as a crash in report_json_context(). Older
versions accidentally avoided that fate; but all versions produce
error context lines that are far less useful than intended,
because they'd stop at the end of the prior token instead of
continuing to where the actually-bad input is.
To fix, invent some macros that make it less notationally painful
to do the right thing. Also add documentation about what the
function is actually required to do; and in >= v14, add an assertion
in report_json_context about token_terminator being sufficiently
far advanced.
Per report from Nikolay Shaplov. Back-patch to all supported
versions.
Discussion: https://postgr.es/m/7332649.x5DLKWyVIX@thinkpad-pgpro
check_agg_arguments_walker() supposed that it needn't descend into
the arguments of a lower-level aggregate function, but this is
just wrong in the presence of multiple levels of sub-select. The
oversight would lead to executor failures on queries that should
be rejected. (Prior to v11, they actually were rejected, thanks
to a "redundant" execution-time check.)
Per bug #17835 from Anban Company. Back-patch to all supported
branches.
Discussion: https://postgr.es/m/17835-4f29f3098b2d0ba4@postgresql.org
This allows specifying a string which, if an input field matches it,
causes the column's default value to be inserted. The advantage of this
is that the default can be inserted in some rows and not others, for
which non-default data is available.
The file_fdw extension is also modified to allow use of this option.
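A sketch, assuming the option is spelled DEFAULT in the COPY option list
(the table and the marker string are illustrative):

    CREATE TABLE t (id int, created date DEFAULT current_date);
    -- Rows whose input field is exactly '\D' get the column default;
    -- other rows keep the value supplied in the file.
    COPY t FROM '/tmp/t.csv' WITH (FORMAT csv, DEFAULT '\D');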
Israel Barth Rubio
Discussion: https://postgr.es/m/CAO_rXXAcqesk6DsvioOZ5zmeEmpUN5ktZf-9=9yu+DTr0Xr8Uw@mail.gmail.com
This ensures that the row count in the command tag for a MERGE is
correctly computed in the case where UPDATEs or DELETEs are skipped
due to a BEFORE ROW trigger returning NULL (the INSERT case was
already handled correctly by ExecMergeNotMatched() calling
ExecInsert()).
Back-patch to v15, where MERGE was introduced.
Discussion: https://postgr.es/m/CAEZATCU8XEmR0JWKDtyb7iZ%3DqCffxS9uyJt0iOZ4TV4RT%2Bow1w%40mail.gmail.com
If MERGE attempts an UPDATE or DELETE on a table with BEFORE ROW
triggers, or a cross-partition UPDATE (with or without triggers), and
a concurrent UPDATE or DELETE happens, the merge code would fail.
In some cases this would lead to a crash, while in others it would
cause the wrong merge action to be executed, or no action at all. The
immediate cause of the crash was the trigger code calling
ExecGetUpdateNewTuple() as part of the EPQ mechanism, which fails
because during a merge ri_projectNew is NULL, since merge has its own
per-action projection information, which ExecGetUpdateNewTuple() knows
nothing about.
Fix by arranging for the trigger code to exit early, returning the
TM_Result and TM_FailureData information, if a concurrent modification
is detected, allowing the merge code to do the necessary EPQ handling
in its own way. Similarly, prevent the cross-partition update code
from doing any EPQ processing for a merge, allowing the merge code to
work out what it needs to do.
This leads to a number of simplifications in nodeModifyTable.c. Most
notably, the ModifyTableContext->GetUpdateNewTuple() callback is no
longer needed, and mergeGetUpdateNewTuple() can be deleted, since
there is no longer any requirement for get-update-new-tuple during a
merge. Similarly, ModifyTableContext->cpUpdateRetrySlot is no longer
needed. Thus ExecGetUpdateNewTuple() and the retry_slot handling of
ExecCrossPartitionUpdate() can be restored to how they were in v14,
before the merge code was added, and ExecMergeMatched() no longer
needs any special-case handling for cross-partition updates.
While at it, tidy up ExecUpdateEpilogue() a bit, making it handle
recheckIndexes locally, rather than passing it in as a parameter,
ensuring that it is freed properly. This dates back to when it was
split off from ExecUpdate() to support merge.
Per bug #17809 from Alexander Lakhin, and follow-up investigation of
bug #17792, also from Alexander Lakhin.
Back-patch to v15, where MERGE was introduced, taking care to preserve
backwards-compatibility of the trigger API in v15 for any extensions
that might use it.
Discussion: https://postgr.es/m/17809-9e6650bef133f0fe%40postgresql.org
Discussion: https://postgr.es/m/17792-0f89452029662c36%40postgresql.org
Most of these md5() calls were to generate some random data. These can be
replaced by appropriately adapted sha256() calls. To keep the diff
smaller, we wrap this into a helper function that produces the same
output format and length as the md5() call.
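A sketch of such a helper (the name and exact definition are illustrative):

    -- Produces a 32-character hex string, matching the format and length
    -- of md5() output, but backed by sha256().
    CREATE FUNCTION fipshash(t text) RETURNS text
      LANGUAGE sql STRICT
      AS $$ SELECT substr(encode(sha256(convert_to(t, 'UTF8')), 'hex'), 1, 32) $$;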
This will eventually allow these tests to pass in OpenSSL FIPS mode
(which does not allow MD5 use).
Similar work for other test suites will follow later.
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://www.postgresql.org/message-id/flat/dbbd927f-ef1f-c9a1-4ec6-c759778ac852@enterprisedb.com
The recently added standard collation UNICODE (0d21d4b9bc) doesn't
give consistent results on some build farm members with old ICU
versions. Apparently, the ICU locale specification 'und' (language
tag style) misbehaves on some older ICU versions. Replacing it with
'' (ICU locale ID style) fixes it at least on some OS versions. Let's
see what the build farm says.
The error cases for TLS and GSS encryption were inconsistent. After TLS
fails, the connection is marked as dead and follow-up calls of
PQconnectPoll() would return immediately, but GSS encryption was not
doing that, so the connection would still have been allowed to enter the
GSS handling code. This was handled incorrectly when gssencmode was set
to "require". "prefer" was working correctly, and this could not happen
under "disable" as GSS encryption would not be attempted.
This commit makes the error handling of GSS encryption on par with the
TLS portion, fixing the case of gssencmode=require.
Reported-by: Jacob Champion
Author: Michael Paquier
Reviewed-by: Jacob Champion, Stephen Frost
Discussion: https://postgr.es/m/23787477-5fe1-a161-6d2a-e459f74c4713@timescale.com
Backpatch-through: 12
The 'ssl' option is of type 'combo', but we add a choice 'auto' that
simulates the behavior of a feature option. This way, openssl is used
automatically by default if present, but we retain the ability to
potentially select another ssl library.
Author: Nazir Bilal Yavuz <byavuz81@gmail.com>
Discussion: https://www.postgresql.org/message-id/flat/ad65ffd1-a9a7-fda1-59c6-f7dc763c3051%40enterprisedb.com
The regression tests of pg_walinspect are reworked on a few aspects:
- Reorganization of the validation checks done for the start and end
LSNs on the six SQL functions currently available in 1.1.
- Addition of a few patterns doing bound checks for invalid start LSN,
invalid end LSN, and failures in reading LSN positions, for anything
that's been missing.
- Use of a consistent style across the whole test suite, limiting blank lines
across the queries.
- Addition of a new test script for upgrades. For the time being, this
is straightforward, with a check that the upgrade from 1.0 works
correctly. This will be made more complicated once the interface of
this extension is reworked in 1.1 with a follow-up patch.
Most of the contents of this commit are extracted from a larger patch by
the same author, largely reorganized by me to minimize the differences
with the upcoming work aimed at lifting the bound checks on the input LSNs
used by the functions of this extension.
Author: Bharath Rupireddy, Michael Paquier
Discussion: https://postgr.es/m/CALj2ACU0_q-o4DSweyaW9NO1KBx-QkN6G_OzYQvpjf3CZVASkg@mail.gmail.com
adjust_appendrel_attrs can't transfer nullingrel labeling to a non-Var
translation expression (mainly because it's too late to wrap such an
expression in a PlaceHolderVar). I'd supposed in commit 2489d76c4
that that restriction was unreachable because we'd not attempt to push
problematic clauses down to an appendrel child relation. I forgot that
set_append_rel_size blindly converts all the parent rel's joininfo
clauses to child clauses, and that list could well contain clauses
from above a nulling outer join.
We might eventually have to devise a direct fix for this implementation
restriction, but for now it seems enough to filter out troublesome
clauses while constructing the child's joininfo list. Such clauses
are certainly not useful while constructing paths for the child rel;
they'll have to be applied later when we join the completed appendrel
to something else. So we don't need them here, and omitting them from
the list should save a few cycles while processing the child rel.
Per bug #17832 from Marko Tiikkaja.
Discussion: https://postgr.es/m/17832-d0a8106cdf1b722e@postgresql.org
This was an omission in the original creation of the module.
Also slightly adjust some wording to avoid a double "is".
Backpatch the non-meson piece of this to release 12, where the module
was introduced.
Discussion: https://postgr.es/m/be869e1c-8e3f-4cde-8609-212c899cccf9@dunslane.net
Freezing the relation N times and fetching the tuples one-by-one isn't that
cheap. On my machine this reduces test times by a bit less than one second;
on Windows CI it's a few seconds.
Reviewed-by: Mark Dilger <mark.dilger@enterprisedb.com>
Discussion: https://postgr.es/m/20230309001558.b7shzvio645ebdta@awork3.anarazel.de
64bit xids can't represent xids before epoch 0 (see also be504a3e97). When
FullTransactionIdFromXidAndCtx() was passed such an xid, it'd create a 64bit
xid far into the future. Noticed while adding assertions in the course of
investigating be504a3e97, as amcheck's tests create such xids.
To fix the issue, just return FirstNormalFullTransactionId in this case. A
freshly initdb'd cluster already has a newer horizon. The most minimal version
of this fix would make the messages for some detected corruptions inaccurate
in a different way. To make those cases accurate, switch
FullTransactionIdFromXidAndCtx() to use the 32bit modulo difference between
xid and nextxid to compute the 64bit xid, yielding sensible "in the future" /
"in the past" answers.
Reviewed-by: Mark Dilger <mark.dilger@enterprisedb.com>
Discussion: https://postgr.es/m/20230108002923.cyoser3ttmt63bfn@awork3.anarazel.de
Backpatch: 14-, where heapam verification was introduced
The initialization order in update_cached_xid_range() was wrong, calling
FullTransactionIdFromXidAndCtx() before setting
->next_xid. FullTransactionIdFromXidAndCtx() uses ->next_xid.
In most situations this will not cause visible issues, because the next call
to update_cached_xid_range() will use a less wrong ->next_xid. It's rare that
xids advance fast enough for this to be a problem.
Found while adding more asserts to the 64bit xid infrastructure.
Reviewed-by: Mark Dilger <mark.dilger@enterprisedb.com>
Discussion: https://postgr.es/m/20230108002923.cyoser3ttmt63bfn@awork3.anarazel.de
Backpatch: 14-, where heapam verification was introduced
The ossp-uuid pkg-config file includes the necessary -I to include the header
as uuid.h. Previously this would only work if ossp-uuid had its headers in
ossp/ in an already searched path.
Reported-by: Andrew Dunstan <andrew@dunslane.net>
Discussion: https://postgr.es/m/7bb8b8a5-5297-ab3c-3412-466fd0124d00@dunslane.net
If the regex compiler can see that a regex is unsatisfiable
(for example, '$foo') then it may emit an NFA having no arcs.
pg_trgm's packGraph function did the wrong thing in this case;
it would access off the end of a work array, and with bad luck
could produce a corrupted output data structure causing more
problems later. This could end with wrong answers or crashes
in queries using a pg_trgm GIN or GiST index with such a regex.
Fix by not trying to de-duplicate if there aren't at least 2 arcs.
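For illustration (a sketch; the table, column, and index are hypothetical),
this is the kind of query that could misbehave:

    CREATE EXTENSION IF NOT EXISTS pg_trgm;
    CREATE TABLE docs (body text);
    CREATE INDEX docs_body_trgm ON docs USING gin (body gin_trgm_ops);

    -- '$foo' can never match, so the compiled NFA has no arcs; previously
    -- packGraph() could read off the end of its work array here.
    SELECT * FROM docs WHERE body ~ '$foo';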
Per bug #17830 from Alexander Lakhin. Back-patch to all supported
branches.
Discussion: https://postgr.es/m/17830-57ff5f89bdb02b09@postgresql.org
The COPY documentation is quite clear that "COPY relation TO" copies
rows from only the named table, not any inheritance children it may
have. However, if you enabled row-level security on the table then
this stopped being true, because the code forgot to apply the ONLY
modifier in the "SELECT ... FROM relation" query that it constructs
in order to allow RLS predicates to be attached. Fix that.
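In other words (a sketch with illustrative table names):

    CREATE TABLE parent (id int);
    CREATE TABLE child () INHERITS (parent);
    ALTER TABLE parent ENABLE ROW LEVEL SECURITY;

    -- Copies rows from only the named table, not from 'child'; with RLS
    -- applied, the internally constructed query now reads
    -- 'SELECT ... FROM ONLY parent'.
    COPY parent TO STDOUT;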
Report and patch by Antonin Houska (comment adjustments and test case
by me). Back-patch to all supported branches.
Discussion: https://postgr.es/m/3472.1675251957@antos
Previously, the default encoding was derived from the locale when
using libc; while the default was always UTF-8 when using ICU. That
would throw an error when the locale was not compatible with UTF-8.
This commit causes initdb to derive the default encoding from the
locale for both providers. If --no-locale is specified (or if the
locale is C or POSIX), the default encoding will be UTF-8 for ICU
(because ICU does not support SQL_ASCII) and SQL_ASCII for libc.
Per buildfarm failure on system "hoverfly" related to commit
27b62377b4.
Discussion: https://postgr.es/m/d191d5841347301a8f1238f609471ddd957fc47e.camel%40j-davis.com
This commit reworks pg_get_wal_fpi_info() to become aware of all the
block information that can be attached to a record rather than just its
full-page writes:
- Addition of the block id as assigned by XLogRegisterBuffer(),
XLogRegisterBlock() or XLogRegisterBufData().
- Addition of the block data, as bytea, or NULL if none. The length of
the block data can be guessed with length(), so there is no need to
store its length in a separate field.
- Addition of the full-page image length, as stored (not counting its
hole, and after compression if any was applied).
- Modification of the handling of the full-page image data. This is
still a bytea, but it could become NULL if none is assigned to a block.
- Addition of the full-page image flags, tracking if a page is stored
with a hole, if it needs to be applied and the type of compression
applied to it, as of all the BKPIMAGE_* values in xlogrecord.h.
The information of each block is returned as a single record, with the
record's ReadRecPtr included so that the block information can be joined
with the existing pg_get_wal_records_info(). Note that it is perfectly
possible for a block to hold both data and a full-page image.
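A sketch of how this might be queried (the column names here are assumptions
based on the description above; the LSN range is a placeholder):

    -- One row per block attached to a record in the given LSN range,
    -- carrying the block id, the block data (if any), and full-page image
    -- details, joinable back to pg_get_wal_records_info() via the record's
    -- LSN.
    SELECT lsn, block_id, length(block_data) AS block_data_len, fpi_length
    FROM pg_get_wal_fpi_info('0/1000000', '0/2000000');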
Thanks also to Kyotaro Horiguchi and Matthias van de Meent for the
discussion.
This commit uses some of the work proposed by Melanie, though it has
been largely redesigned and rewritten by me. Bharath helped refine the
whole thing a bit.
Reported-by: Melanie Plageman
Author: Michael Paquier, Melanie Plageman, Bharath Rupireddy
Discussion: https://postgr.es/m/CAAKRu_bORebdZmcV8V4cZBzU8M_C6tDDdbiPhCZ6i-iuSXW9TA@mail.gmail.com
Datetime input formerly accepted combinations such as
'1995-08-06 infinity', but this seems like a clear error.
Reject any combination of regular y/m/d/h/m/s fields with
these special tokens.
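For example, input like the following now raises an error instead of being
silently accepted:

    -- formerly accepted and treated as 'infinity'; now rejected
    SELECT '1995-08-06 infinity'::timestamptz;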
Joseph Koshakow, reviewed by Keisuke Kuroda and myself
Discussion: https://postgr.es/m/CAAvxfHdm8wwXwG_FFRaJ1nTHiMWb7YXS2YKCzCt8Q0a2ZoMcHg@mail.gmail.com