Commits cf112c12 and a0dc8271 were a little too hasty in getting rid of
the pg_ prefixes where we use pread(), pwrite() and vectored variants.
We dropped support for ancient Unixes where we needed to use lseek() to
implement replacements for those, but it turns out that Windows also
changes the current position even when you pass in an offset to
ReadFile() and WriteFile() if the file handle is synchronous, despite
its documentation saying otherwise.
Switching to asynchronous file handles would fix that, but would have other
complications. For now let's just put back the pg_ prefix and add some
comments to highlight the non-standard side-effect, which we can now
describe as Windows-only.
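For reference, a sketch of the restored shape on non-Windows systems (the
exact comment wording in the tree may differ):

    /*
     * On POSIX systems the wrappers are trivial, but the pg_ prefix
     * flags the hazard that on Windows these calls also advance the
     * current file position.
     */
    #define pg_pread(fd, buf, nbytes, offset)   pread(fd, buf, nbytes, offset)
    #define pg_pwrite(fd, buf, nbytes, offset)  pwrite(fd, buf, nbytes, offset)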
Reported-by: Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>
Reviewed-by: Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>
Discussion: https://postgr.es/m/20220923202439.GA1156054%40nathanxps13
There are still some alignment-related failures in the buildfarm,
which may or may not be quickly fixable, but I've also
just realized that it increased the size of many WAL records by 4 bytes
because a block reference contains a RelFileLocator. The effect of that
hasn't been studied or discussed, so revert for now.
The previous macro implementations just cast the argument to a target
type but did not check whether the input type was appropriate. The
function implementation can do better type checking of the input type.
For the *GetDatumFast() macros, converting to an inline function
doesn't work in the !USE_FLOAT8_BYVAL case, but we can use
AssertVariableIsOfTypeMacro() to get a similar level of type checking.
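For illustration, a sketch that is close to (but not necessarily identical
with) the committed code:

    /* before: no type checking at all */
    #define Int32GetDatum(X) ((Datum) (X))

    /* after: the compiler checks that X is (convertible to) int32 */
    static inline Datum
    Int32GetDatum(int32 X)
    {
        return (Datum) X;
    }

    /* *GetDatumFast() in the !USE_FLOAT8_BYVAL case: still a macro,
     * but with a compile-time type assertion bolted on */
    #define Float8GetDatumFast(X) \
        (AssertVariableIsOfTypeMacro(X, double), PointerGetDatum(&(X)))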
Reviewed-by: Aleksander Alekseev <aleksander@timescale.com>
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://www.postgresql.org/message-id/flat/8528fb7e-0aa2-6b54-85fb-0c0886dbd6ed%40enterprisedb.com
RelFileNumbers are now assigned using a separate counter, instead of
being assigned from the OID counter. This counter never wraps around:
if all 2^56 possible RelFileNumbers are used, an internal error
occurs. As the cluster is limited to 2^64 total bytes of WAL, this
limitation should not cause a problem in practice.
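A sketch of the kind of guard this implies (the constant, variable, and
message names here are illustrative, not the committed ones):

    #define MAX_RELFILENUMBER   ((UINT64CONST(1) << 56) - 1)

    /* illustrative: error out rather than wrap around */
    if (nextRelFileNumber > MAX_RELFILENUMBER)
        elog(ERROR, "relfilenumber is out of bounds");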
If the counter were 64 bits wide rather than 56 bits wide, we would
need to increase the width of the BufferTag, which might adversely
impact buffer lookup performance. Also, this lets us use bigint for
pg_class.relfilenode and other places where these values are exposed
at the SQL level without worrying about overflow.
This should remove the need to keep "tombstone" files around until
the next checkpoint when relations are removed. We do that to keep
RelFileNumbers from being recycled, but now that won't happen
anyway. However, this patch doesn't actually change anything in
this area; it just makes it possible for a future patch to do so.
Dilip Kumar, based on an idea from Andres Freund, who also reviewed
some earlier versions of the patch. Further review and some
wordsmithing by me. Also reviewed at various points by Ashutosh
Sharma, Vignesh C, Amul Sul, Álvaro Herrera, and Tom Lane.
Discussion: http://postgr.es/m/CA+Tgmobp7+7kmi4gkq7Y+4AM9fTvL+O1oQ4-5gFTT+6Ng-dQ=g@mail.gmail.com
We weren't jumbling the merge action list, so wildly different commands
would be considered to use the same query ID. Add that, mention it in
the docs, and add some test lines.
Backpatch to 15.
Author: Tatsu <bt22nakamorit@oss.nttdata.com>
Reviewed-by: Julien Rouhaud <rjuju123@gmail.com>
Discussion: https://postgr.es/m/d87e391694db75a038abc3b2597828e8@oss.nttdata.com
Make sure that function declarations use names that exactly match the
corresponding names from function definitions in contrib code.
Like other recent commits that cleaned up function parameter names, this
commit was written with help from clang-tidy.
Author: Peter Geoghegan <pg@bowt.ie>
Discussion: https://postgr.es/m/CAH2-WznJt9CMM9KJTMjJh_zbL5hD9oX44qdJ4aqZtjFi-zA3Tg@mail.gmail.com
Autoconf is showing its age; fewer and fewer contributors know how to wrangle
it. Recursive make has a lot of hard-to-resolve dependency issues and slow
incremental rebuilds. Our home-grown MSVC build system is hard to maintain for
developers not using Windows and runs tests serially. While these and other
issues could individually be addressed with incremental improvements, together
they seem best addressed by moving to a more modern build system.
After evaluating different build system choices, we chose to use meson, to a
good degree based on the adoption by other open source projects.
We decided that it's more realistic to commit a relatively early version of
the new build system and mature it in tree.
This commit adds an initial version of a meson based build system. It supports
building postgres on at least AIX, FreeBSD, Linux, macOS, NetBSD, OpenBSD,
Solaris and Windows (however, only gcc is supported on AIX and Solaris). For
Windows/MSVC, postgres can now be built with ninja (faster, particularly for
incremental builds) and msbuild (supporting the Visual Studio GUI, but
building more slowly).
Several aspects (e.g. Windows rc file generation, PGXS compatibility, LLVM
bitcode generation, documentation adjustments) are done in subsequent commits
requiring further review. Other aspects (e.g. not installing test-only
extensions) are not yet addressed.
When building on Windows with msbuild, builds are slower when using a Visual
Studio version older than 2019, because those versions do not support
MultiToolTask, required by meson for intra-target parallelism.
The plan is to remove the MSVC specific build system in src/tools/msvc soon
after reaching feature parity. However, we're not planning to remove the
autoconf/make build system in the near future. Likely we're going to keep at
least the parts required for PGXS to keep working until all supported
versions build with meson.
Some initial help for postgres developers is at
https://wiki.postgresql.org/wiki/Meson
With contributions from Thomas Munro, John Naylor, Stone Tickle and others.
Author: Andres Freund <andres@anarazel.de>
Author: Nazir Bilal Yavuz <byavuz81@gmail.com>
Author: Peter Eisentraut <peter@eisentraut.org>
Reviewed-By: Peter Eisentraut <peter.eisentraut@enterprisedb.com>
Discussion: https://postgr.es/m/20211012083721.hvixq4pnh2pixr3j@alap3.anarazel.de
The tlist of the EvalPlanQual outer plan for a ForeignScan node is
adjusted to produce a tuple whose descriptor matches the scan tuple slot
for the ForeignScan node. But in the case where the outer plan contains
an extra Sort node, if the new tlist contained columns required only for
evaluating PlaceHolderVars or columns required only for evaluating local
conditions, this would cause setrefs.c to fail with an error.
The cause of this is that when creating the outer plan by injecting the
Sort node into an alternative local join plan that could emit such extra
columns as well, we fail to arrange for the outer plan to propagate them
up through the Sort node, causing setrefs.c to fail to match them up in
the new tlist to what is available from the outer plan. Repair.
Per report from Alexander Pyhalov.
Richard Guo and Etsuro Fujita, reviewed by Alexander Pyhalov and Tom Lane.
Backpatch to all supported versions.
Discussion: http://postgr.es/m/cfb17bf6dfdf876467bd5ef533852d18%40postgrespro.ru
guc.c has grown to be one of our largest .c files, making it
a bottleneck for compilation. It's also acquired a bunch of
knowledge that'd be better kept elsewhere, because of our not
very good habit of putting variable-specific check hooks here.
Hence, split it up along these lines:
* guc.c itself retains just the core GUC housekeeping mechanisms.
* New file guc_funcs.c contains the SET/SHOW interfaces and some
SQL-accessible functions for GUC manipulation.
* New file guc_tables.c contains the data arrays that define the
built-in GUC variables, along with some already-exported constant
tables.
* GUC check/assign/show hook functions are moved to the variable's
home module, whenever that's clearly identifiable. A few hard-
to-classify hooks ended up in commands/variable.c, which was
already a home for miscellaneous GUC hook functions.
To avoid cluttering a lot more header files with #include "guc.h",
I also invented a new header file utils/guc_hooks.h and put all
the GUC hook functions' declarations there, regardless of their
originating module. That allowed removal of #include "guc.h"
from some existing headers. The fallout from that (hopefully
all caught here) demonstrates clearly why such inclusions are
best minimized: there are a lot of files that, for example,
were getting array.h at two or more levels of remove, despite
not having any connection at all to GUCs in themselves.
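For example, a check hook is now declared in the new header regardless of
where it is implemented (sketch, using application_name's check hook; the
signature is the standard one for GUC string check hooks):

    /* in utils/guc_hooks.h */
    extern bool check_application_name(char **newval, void **extra,
                                       GucSource source);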
There is some very minor code beautification here, such as
renaming a couple of inconsistently-named hook functions
and improving some comments. But mostly this just moves
code from point A to point B and deals with the ensuing
needs for #include adjustments and exporting a few functions
that previously weren't exported.
Patch by me, per a suggestion from Andres Freund; thanks also
to Michael Paquier for the idea to invent guc_funcs.c.
Discussion: https://postgr.es/m/587607.1662836699@sss.pgh.pa.us
pgstattuple uses data type double for other percentage calculations
and exposes those values to the users via the float8 data type.
However, scanned_percent in struct output_type is of type uint64, even
though it is later returned via Float8GetDatum(). Change it to use
double to be in line with other percentages.
Author: Bharath Rupireddy
Reviewed-by: Peter Eisentraut
Discussion: https://www.postgresql.org/message-id/36ee692b-232f-0484-ce94-dc39d82021ad%40enterprisedb.com
pg_walinspect uses data type double (double precision floating point
number) for WAL stats percentile calculations but exposes the results via
float4 (single precision floating point number), which is an unnecessary
loss of precision and confusing. Even though it's harmless that way,
let's use float8 (double precision floating point number) so that what
pg_walinspect exposes to the users is in sync with what it computes
internally. This also matches the pattern used elsewhere in the code.
Reported-by: Peter Eisentraut
Author: Bharath Rupireddy
Reviewed-by: Peter Eisentraut
Discussion: https://www.postgresql.org/message-id/36ee692b-232f-0484-ce94-dc39d82021ad%40enterprisedb.com
When using the BSD UUID functions, contrib/uuid-ossp expects
uuid_create() to produce a version-1 UUID. FreeBSD still does so,
but in recent NetBSD releases that function produces a version-4
(random) UUID instead. That's not acceptable for our purposes:
if the user wanted v4 she would have asked for v4, not v1.
Hence, check the version digit and complain if it's not '1'.
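The check itself can be quite simple; a sketch (not the committed hunk):

    #include <string.h>
    #include <stdbool.h>

    /*
     * In the text form "xxxxxxxx-xxxx-Vxxx-xxxx-xxxxxxxxxxxx", the
     * version digit V is the character at offset 14; for a
     * version-1 UUID it must be '1'.
     */
    static bool
    uuid_is_version_1(const char *str)
    {
        return strlen(str) == 36 && str[14] == '1';
    }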
Also drop the documentation's claim that the NetBSD implementation
is usable. It might be, depending on which OS version you're using,
but we're not going to get into that kind of detail.
(Maybe someday we should ditch all these external libraries
and just write our own UUID code, but today is not that day.)
Nazir Bilal Yavuz, with cosmetic adjustments and docs by me.
Backpatch to all supported versions.
Discussion: https://postgr.es/m/3848059.1661038772@sss.pgh.pa.us
Discussion: https://postgr.es/m/17358-89806e7420797025@postgresql.org
Since the retirement of some older buildfarm members, the oldest Bison
that gets regular testing is 2.3. macOS ships that version, and will
continue doing so for the foreseeable future because of Apple's policy
regarding GPLv3. While Mac users could use a package manager to install
a newer version, there is no compelling reason to force them to do so at
this time.
Reviewed by Andres Freund
Discussion: https://www.postgresql.org/message-id/1097762.1662145681@sss.pgh.pa.us
In a similar effort to f736e188c and 110d81728, fix up various usages of
string functions where a more appropriate function is available and more
fit for purpose.
These changes include:
1. Use cstring_to_text_with_len() instead of cstring_to_text() when
working with a StringInfoData and the length can easily be obtained.
2. Use appendStringInfoString() instead of appendStringInfo() when no
formatting is required.
3. Use pstrdup(...) instead of psprintf("%s", ...)
4. Use pstrdup(...) instead of psprintf(...) (with no formatting)
5. Use appendPQExpBufferChar() instead of appendPQExpBufferStr() when the
length of the string being appended is 1.
6. Use appendStringInfoChar() instead of appendStringInfo() when no
formatting is required and the string is 1 char long.
7. Use appendPQExpBufferStr(b, .) instead of appendPQExpBuffer(b, "%s", .)
8. Don't use pstrdup when it's fine to just point to the string constant.
I (David) did find other cases of #8 but opted to use #4 instead as I
wasn't certain enough that applying #8 was OK (e.g. in hba.c).
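As an illustration of item 1 (a sketch, not a verbatim hunk from the patch):

    static text *
    build_text(void)
    {
        StringInfoData buf;

        initStringInfo(&buf);
        appendStringInfoString(&buf, "some text");

        /* before: cstring_to_text() recomputes the length via strlen():
         *     return cstring_to_text(buf.data);
         * after: the StringInfo already tracks the length */
        return cstring_to_text_with_len(buf.data, buf.len);
    }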
Author: Ranier Vilela, David Rowley
Discussion: https://postgr.es/m/CAApHDvo2j2+RJBGhNtUz6BxabWWh2Jx16wMUMWKUjv70Ver1vg@mail.gmail.com
Since these macros just cast whatever you give them to the designated
output type, and many normal uses also cast the output type further, a
number of incorrect uses go undiscovered. The fixes in this patch
have been discovered by changing these macros to inline functions,
which is the subject of a future patch.
Reviewed-by: Aleksander Alekseev <aleksander@timescale.com>
Discussion: https://www.postgresql.org/message-id/flat/8528fb7e-0aa2-6b54-85fb-0c0886dbd6ed%40enterprisedb.com
Calculating similarity between large strings can be time-consuming
and overrun configured statement timeouts. Check for interrupts in
the main loop to ensure query cancellation can be performed.
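A sketch of the pattern ("ntrigrams" and the loop body stand in for the
real similarity computation):

    static void
    similarity_loop(int ntrigrams)
    {
        for (int i = 0; i < ntrigrams; i++)
        {
            /* lets statement_timeout and query cancel take effect */
            CHECK_FOR_INTERRUPTS();

            /* per-trigram similarity work here */
        }
    }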
Author: Robins Tharakan <tharakan@gmail.com>
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://postgr.es/m/CAEP4nAxvmfc_XWTz73bqXRhgjONi=1HaX4_NhsopA3L6UvnN1g@mail.gmail.com
The planner has to special-case indexes on boolean columns, because
what we need for an indexscan on such a column is a qual of the shape
of "boolvar = pseudoconstant". For plain bool constants, previous
simplification will have reduced this to "boolvar" or "NOT boolvar",
and we have to reverse that if we want to make an indexqual. There is
existing code to do so, but it only fires when the index's opfamily
is BOOL_BTREE_FAM_OID or BOOL_HASH_FAM_OID. Thus extension AMs, or
extension opclasses such as contrib/btree_gin, are out in the cold.
The reason for hard-wiring the set of relevant opfamilies was mostly
to avoid a catalog lookup in a hot code path. We can improve matters
while not taking much of a performance hit by relying on the
hard-wired set when the opfamily OID is visibly built-in, and only
checking the catalogs when dealing with an extension opfamily.
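A sketch of the resulting two-level test (the committed code may differ in
detail; op_in_opfamily() and BooleanEqualOperator do exist in the tree):

    static bool
    is_boolean_opfamily(Oid opfamily)
    {
        /* fast path: the hard-wired built-in boolean opfamilies */
        if (opfamily == BOOL_BTREE_FAM_OID || opfamily == BOOL_HASH_FAM_OID)
            return true;

        /* extension opfamilies have OIDs above FirstNormalObjectId,
         * and only for those do we pay for a catalog lookup */
        if (opfamily >= FirstNormalObjectId)
            return op_in_opfamily(BooleanEqualOperator, opfamily);

        return false;
    }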
While here, rename IsBooleanOpfamily to IsBuiltinBooleanOpfamily
to remind future users of that macro of its limitations. At some
point we might want to make indxpath.c's improved version of the
test globally accessible, but it's not presently needed elsewhere.
Zongliang Quan and Tom Lane
Discussion: https://postgr.es/m/f293b91d-1d46-d386-b6bb-4b06ff5c667b@yeah.net
Oversight in commit 418ec3207: it's better to do it like this,
else you have to drop and recreate the extension for each
permutation. tcn.spec only has one permutation at present,
so this doesn't speed it up any, but it's still a bad example.
It has been incorrectly assumed in commit 7f13ac8123 that we can either
purge all or none of the xids in the catalog-modifying xids list retrieved from a
serialized snapshot. It is quite possible that some of the xids in that
array are old enough to be pruned but not others.
As per buildfarm
Author: Amit Kapila and Masahiko Sawada
Reviewed-by: Masahiko Sawada
Discussion: https://postgr.es/m/CAA4eK1LBtv6ayE+TvCcPmC-xse=DVg=SmbyQD1nv_AaqcpUJEg@mail.gmail.com
quote_identifier's API is designed on the assumption that it's
not worth worrying about a short-term memory leak when we have
to produce a quoted version of the given identifier. Whoever wrote
quote_object_name took it on themselves to override that judgment,
but the only way to do so is to cast away const someplace. We can
avoid that and substantially shorten the function by going along
with quote_identifier's opinion. AFAICS quote_object_name is not
used in any way where this would be unsustainable.
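In effect, the shortened function can append the quoted name directly
(sketch only):

    static void
    append_quoted_name(StringInfo buf, const char *name)
    {
        /* quote_identifier() returns either its argument or a freshly
         * palloc'd quoted copy; either way, use the result as-is and
         * let the short-lived memory context absorb any leak */
        appendStringInfoString(buf, quote_identifier(name));
    }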
Per discussion of commit 45987aae2, which exposed that we had
a casting-away-const situation here.
Discussion: https://postgr.es/m/20220827112304.GL2342@telsasoft.com
Commit 121d2d3d70 included simd.h into pg_wchar.h. This caused a problem
on Windows, since Perl has "#define free" (referring to globals), which
breaks the Windows header. To fix, move the static inline function
definitions from plperl_helpers.h, into plperl.h, where we already
document the necessary inclusion order. Since those functions were the
only reason for the existence of plperl_helpers.h, remove it.
First reported by Justin Pryzby
Diagnosis and review by Andres Freund, patch by myself per suggestion
from Tom Lane
Discussion: https://www.postgresql.org/message-id/20220826115546.GE2342%40telsasoft.com
This is preparatory work for a project to increase the number of bits
in a RelFileNumber from 32 to 56.
Along the way, introduce static inline accessor functions for a couple
of BufferTag fields.
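The accessors have this general shape (a sketch; the actual accessor and
field names may differ):

    static inline ForkNumber
    BufTagGetForkNum(const BufferTag *tag)
    {
        /* callers no longer touch the struct layout directly, so a
         * later change to BufferTag stays localized */
        return tag->forkNum;
    }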
Dilip Kumar, reviewed by me. The overall patch series has also had
review at various times from Andres Freund, Ashutosh Sharma, Hannu
Krosing, Vignesh C, Álvaro Herrera, and Tom Lane.
Discussion: http://postgr.es/m/CAFiTN-trubju5YbWAq-BSpZ90-Z6xCVBQE8BVqXqANOZAF1Znw@mail.gmail.com
Usage of ReadNextXLogRecord()'s first_record parameter for error
reporting isn't always correct. For instance, in GetWALRecordsInfo()
and GetWalStats(), we're reading multiple records, and first_record
is always passed as the LSN of the first record, which is then used
for error reporting for later WAL record read failures. This isn't
correct.
The correct parameter to use for error reports in case of WAL
reading failures is xlogreader->EndRecPtr. This change fixes it.
While at it, remove an unnecessary Assert in the pg_walinspect code.
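A sketch of the corrected reporting pattern (close in spirit, not a
verbatim hunk):

    XLogRecord *record;
    char       *errormsg;

    record = XLogReadRecord(xlogreader, &errormsg);
    if (record == NULL)
        ereport(ERROR,
                (errcode_for_file_access(),
                 errmsg("could not read WAL at %X/%X: %s",
                        LSN_FORMAT_ARGS(xlogreader->EndRecPtr),
                        errormsg ? errormsg : "unknown error")));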
Reported-by: Robert Haas
Author: Bharath Rupireddy
Reviewed-by: Robert Haas
Discussion: https://www.postgresql.org/message-id/CA%2BTgmoZAOGzPUifrcZRjFZ2vbtcw3mp-mN6UgEoEcQg6bY3OVg%40mail.gmail.com
Backpatch-through: 15
Previously, we relied on HEAP2_NEW_CID records and XACT_INVALIDATION
records to know if a transaction has modified the catalog, and that
information is not serialized to the snapshot. Therefore, after a restart,
if logical decoding decodes only the commit record of a transaction
that has actually modified a catalog, we will miss adding its XID to the
snapshot. Thus, we will end up looking at catalogs with the wrong
snapshot.
To fix this problem, this change adds the list of transaction IDs and
sub-transaction IDs that have modified catalogs and are running during
snapshot serialization to the serialized snapshot. After a restart or
otherwise, when we restore from such a serialized snapshot, the
corresponding list is restored in memory. Now, when decoding a COMMIT
record, we check both the list and the ReorderBuffer to see if the
transaction has modified catalogs.
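A sketch of that commit-time test (the list and function names here are
illustrative; ReorderBufferXidHasCatalogChanges() is the existing API):

    static bool
    xact_modified_catalog(TransactionId xid,
                          TransactionId *catchange_xids, int ncatchange,
                          ReorderBuffer *rb)
    {
        /* the list restored from the serialized snapshot ... */
        for (int i = 0; i < ncatchange; i++)
        {
            if (TransactionIdEquals(xid, catchange_xids[i]))
                return true;
        }

        /* ... plus what we tracked while decoding */
        return ReorderBufferXidHasCatalogChanges(rb, xid);
    }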
Since this adds additional information to the serialized snapshot, we
cannot backpatch it. For back branches, we took another approach.
We remember the last-running-xacts list of the decoded RUNNING_XACTS
record after restoring the previously serialized snapshot. Then, we mark
the transaction as containing catalog changes if it's in the list of
initial running transactions and its commit record has
XACT_XINFO_HAS_INVALS. This doesn't require any file format changes, but
the transaction will end up being added to the snapshot even if it has
only relcache invalidations. That won't be a problem, though, since we use
the snapshot built during decoding only to read system catalogs.
This commit bumps SNAPBUILD_VERSION because of a change in SnapBuild.
Reported-by: Mike Oh
Author: Masahiko Sawada
Reviewed-by: Amit Kapila, Shi yu, Takamichi Osumi, Kyotaro Horiguchi, Bertrand Drouvot, Ahsan Hadi
Backpatch-through: 10
Discussion: https://postgr.es/m/81D0D8B0-E7C4-4999-B616-1E5004DBDCD2%40amazon.com
When inserting into a view referencing a foreign table that has WITH
CHECK OPTION constraints, in single-insert mode postgres_fdw retrieves
the data that was actually inserted on the remote side, so the WITH
CHECK OPTION constraints are enforced locally with that data. In
batch-insert mode, however, it cannot currently retrieve the data
(except for the row first inserted through the view), so the WITH CHECK
OPTION constraints are enforced with the data passed from the core
instead. This led to incorrect results when a remote BEFORE ROW INSERT
trigger changed the rows inserted through the view so that they violated
the view's WITH CHECK OPTION constraint. Also, the query inserting into
the view caused an assertion failure in assert-enabled builds.
Fix these by disabling batch insertion when inserting into such a view.
Back-patch to v14 where batch insertion was added.
Discussion: https://postgr.es/m/CAPmGK17LpbTZs4m4a_6THP54UBeK9fHvX8aVVA%2BC6yEZDZwQcg%40mail.gmail.com
pread() and pwrite() are in SUSv2, and all targeted Unix systems have
them.
Previously, we defined pg_pread and pg_pwrite to emulate these functions
with lseek() on old Unixen. The names with a pg_ prefix were a reminder
of a portability hazard: they might change the current file position.
That hazard is gone, so we can drop the prefixes.
Since the remaining replacement code is Windows-only, move it into
src/port/win32p{read,write}.c, and move the declarations into
src/include/port/win32_port.h.
No need for vestigial HAVE_PREAD, HAVE_PWRITE macros as they were only
used for declarations in port.h which have now moved into win32_port.h.
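The moved declarations look roughly like this (sketch):

    /* src/include/port/win32_port.h */
    extern ssize_t pread(int fd, void *buf, size_t size, off_t offset);
    extern ssize_t pwrite(int fd, const void *buf, size_t size, off_t offset);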
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Reviewed-by: Greg Stark <stark@mit.edu>
Reviewed-by: Robert Haas <robertmhaas@gmail.com>
Reviewed-by: Andres Freund <andres@anarazel.de>
Discussion: https://postgr.es/m/CA+hUKGJ3LHeP9w5Fgzdr4G8AnEtJ=z=p6hGDEm4qYGEUX5B6fQ@mail.gmail.com
We've heard a couple of reports of people having trouble with
multi-gigabyte-sized query-texts files. It occurred to me that on
32-bit platforms, there could be an issue with integer overflow
of calculations associated with the total query text size.
Address that with several changes:
1. Limit pg_stat_statements.max to INT_MAX / 2 not INT_MAX.
The hashtable code will bound it to that anyway unless "long"
is 64 bits. We still need overflow guards on its use, but
this helps.
2. Add a check to prevent extending the query-texts file to
more than MaxAllocHugeSize. If it got that big, qtext_load_file
would certainly fail, so there's not much point in allowing it.
Without this, we'd need to consider whether extent, query_offset,
and related variables shouldn't be off_t not size_t.
3. Adjust the comparisons in need_gc_qtexts() to be done in 64-bit
arithmetic on all platforms (see the sketch after this list). It appears
possible that under duress those multiplications could overflow 32 bits,
yielding a false conclusion that we need to garbage-collect the texts
file, which could lead to repeatedly garbage-collecting after every hash
table insertion.
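The sketch referred to in item 3 (variable names are made up for
illustration):

    /* do the product in 64-bit math so it cannot wrap on 32-bit
     * platforms; a wrapped 32-bit product could falsely signal that
     * garbage collection is needed */
    if ((int64) extent > (int64) nentries * (int64) mean_query_len * 2)
        return true;        /* texts file is bloated; gc it */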
Per report from Bruno da Silva. I'm not convinced that these
issues fully explain his problem; there may be some other bug that's
contributing to the query-texts file becoming so large in the first
place. But it did get that big, so #2 is a reasonable defense,
and #3 could explain the reported performance difficulties.
(See also commit 8bbe4cbd9, which addressed some related bugs.
The second Discussion: link is the thread that led up to that.)
This issue is old, and is primarily a problem for old platforms,
so back-patch.
Discussion: https://postgr.es/m/CAB+Nuk93fL1Q9eLOCotvLP07g7RAv4vbdrkm0cVQohDVMpAb9A@mail.gmail.com
Discussion: https://postgr.es/m/5601D354.5000703@BlueTreble.com
ORDER BY / DISTINCT aggregates have, since they were implemented in
Postgres, been executed by always performing a sort in nodeAgg.c to sort
the tuples in the current group into the correct order before calling the
transition function on the sorted tuples. This was not great, as often
there might be
an index that could have provided pre-sorted input and allowed the
transition functions to be called as the rows come in, rather than having
to store them in a tuplestore in order to sort them once all the tuples
for the group have arrived.
Here we change the planner so it requests a path with a sort order which
suits the largest number of ORDER BY / DISTINCT aggregate functions and
add new code to the executor to allow it to support the processing of
ORDER BY / DISTINCT aggregates where the tuples are already sorted in the
correct order.
Since there can be many ORDER BY / DISTINCT aggregates in any given query
level, it's very possible that we can't find an order that suits all of
these aggregates. The sort order that the planner chooses is simply the
one that suits the most aggregate functions. We take the most strictly
sorted variation of each order and see how many aggregate functions can
use that, then we try again with the order of the remaining aggregates to
see if another order would suit more aggregate functions. For example:
SELECT agg(a ORDER BY a),agg2(a ORDER BY a,b) ...
would request the sort order to be {a, b} because {a} is a subset of the
sort order of {a,b}, but:
SELECT agg(a ORDER BY a),agg2(a ORDER BY c) ...
would just pick a plan ordered by {a} (we give precedence to aggregates
which are earlier in the targetlist).
SELECT agg(a ORDER BY a),agg2(a ORDER BY b),agg3(a ORDER BY b) ...
would choose to order by {b} since two aggregates suit that vs just one
that requires input ordered by {a}.
Author: David Rowley
Reviewed-by: Ronan Dunklau, James Coleman, Ranier Vilela, Richard Guo, Tom Lane
Discussion: https://postgr.es/m/CAApHDvpHzfo92%3DR4W0%2BxVua3BUYCKMckWAmo-2t_KiXN-wYH%3Dw%40mail.gmail.com
We can't check the output of this view very closely without
creating portability headaches, but we can make sure that
the number of rows is as-expected. In any case, this is
sufficient to exercise all the C code within, which is a
lot better than the 0% coverage we had before.
DongWook Lee
Discussion: https://postgr.es/m/CAAcByaLCHGJB7qAENEcx9D09UL=w4ma+yijwF_-1MSqQZ9wK6Q@mail.gmail.com
In commit 7c34555f8, I overlooked the need to configure SSPI
on Windows to allow login as the non-superuser role.
Fix that by adding auth_extra/--create-role incantation
(which, oddly enough, doesn't actually create the role).
Per buildfarm.
While here, upgrade the mechanism for temporarily setting
$ENV{PGUSER}, as per recommendation from ilmari.
Discussion: https://postgr.es/m/87edy7j1zz.fsf@wibble.ilmari.org
After commit 089480c07, it's necessary for background worker entry
points to be marked PGDLLEXPORT, else they aren't findable by
LookupBackgroundWorkerFunction(). Since pg_prewarm lacks any
regression tests, it's not surprising its worker entry points were
overlooked. (A quick search turned up no other such oversights.)
I added some documentation pointing out the need for this, too.
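The fix itself is one line per entry point; a sketch using one of
pg_prewarm's workers:

    /* without PGDLLEXPORT, LookupBackgroundWorkerFunction() cannot
     * resolve the symbol on Windows builds */
    PGDLLEXPORT void autoprewarm_main(Datum main_arg);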
Robins Tharakan and Tom Lane
Discussion: https://postgr.es/m/CAEP4nAzndnQv3-1QKb=D-hLoK3Rko12HHMFHHtdj2GQAUXO3gw@mail.gmail.com
We weren't exercising the session_preload_libraries option in any
meaningful way. auto_explain is a good testbed for doing so, since
it's one of the primary use-cases for session_preload_libraries.
Hence, adjust its TAP test to load the library via
session_preload_libraries not shared_preload_libraries. While at it,
feed test-specific settings to the backend via PGOPTIONS rather than
tediously rewriting postgresql.conf.
Also, since auto_explain has some PGC_SUSET parameters, we can use it
to provide a test case for the permissions-checking bug just fixed
by commit b35617de3.
Back-patch to v15 so that we have coverage for the permissions issue
in that branch too. To do that, I back-patched the refactoring
recently done by commit 550bc0a6c.
Dagfinn Ilmari Mannsåker and Tom Lane
Discussion: https://postgr.es/m/CABwTF4VEpwTHhRQ+q5MiC5ucngN-whN-PdcKeufX7eLSoAfbZA@mail.gmail.com
When postgres_fdw begins an asynchronous data fetch, it submits a FETCH
query by using PQsendQuery(). If PQsendQuery() fails and returns 0,
postgres_fdw should report an error. But, previously, postgres_fdw
reported an error only when the return value was less than 0, even though
PQsendQuery() never returns values other than 0 and 1. Therefore
postgres_fdw could not handle the failure to send a FETCH query in an
asynchronous data fetch.
This commit fixes postgres_fdw so that it reports an error
when PQsendQuery() returns 0.
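A sketch of the corrected check (pgfdw_report_error() is postgres_fdw's
existing error-reporting helper):

    /* PQsendQuery() returns 1 on success and 0 on failure, so test
     * for 0 rather than for a negative value, which can never occur */
    if (PQsendQuery(conn, sql) == 0)
        pgfdw_report_error(ERROR, NULL, conn, false, sql);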
Back-patch to v14 where asynchronous execution was supported in postgres_fdw.
Author: Fujii Masao
Reviewed-by: Japin Li, Tom Lane
Discussion: https://postgr.es/m/b187a7cf-d4e3-5a32-4d01-8383677797f3@oss.nttdata.com