1349d2790 added code to adjust the PlannerInfo.group_pathkeys so that
ORDER BY / DISTINCT aggregate functions could obtain pre-sorted inputs
to allow faster execution. That commit forgot to adjust the pathkeys in
create_agg_path(). Some code in there assumed that it was always fine
to make the AggPath's pathkeys the same as its subpath's. That seems to
have been ok up until 1349d2790, but since that commit adds pathkeys for
columns which are within the aggregate function, those columns won't be
available above the aggregate node. This can result in "could not find
pathkey item to sort" during create_plan().
The fix here is to strip off the additional pathkeys added by
adjust_group_pathkeys_for_groupagg(). It seems that the pathkeys here
will only ever be group_pathkeys, so all we need to do is check
whether the pathkey list is longer than num_groupby_pathkeys and, if
so, strip off the extra ones.
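As a rough sketch (not necessarily the exact committed code), the
adjustment in create_agg_path() amounts to:

    if (list_length(subpath->pathkeys) > root->num_groupby_pathkeys)
        pathnode->path.pathkeys =
            list_copy_head(subpath->pathkeys, root->num_groupby_pathkeys);
    else
        pathnode->path.pathkeys = subpath->pathkeys;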
Reported-by: Justin Pryzby
Reviewed-by: Richard Guo
Discussion: https://postgr.es/m/ZQhYYRhUxpW3PSf9%40telsasoft.com
Backpatch-through: 16, where 1349d2790 was introduced
Going by c4a1933b4, b33ef397a and 05893712c (to name just a few), it seems
that maintaining debug_print_rel() is often forgotten. In the case of
c4a1933b4, it was several years before anyone noticed that a path type
was not handled by debug_print_rel(). (debug_print_rel() is only
compiled when building with OPTIMIZER_DEBUG).
After a quick survey on the pgsql-hackers mailing list, nobody came
forward to admit that they use OPTIMIZER_DEBUG. So to prevent any future
maintenance neglect, let's just remove debug_print_rel() and have
OPTIMIZER_DEBUG make use of pprint() instead (as suggested by Tom Lane).
If anyone wants to come forward to claim they make use of
OPTIMIZER_DEBUG in a way that they need debug_print_rel() then they have
around 10 months remaining in the v17 cycle where we could revert this.
If nobody comes forward in that time, then we can likely safely declare
debug_print_rel() as not worth keeping.
Discussion: https://postgr.es/m/CAApHDvoCdjo8Cu2zEZF4-AxWG-90S+pYXAnoDDa9J3xH-OrczQ@mail.gmail.com
Back in the 8.3 era we discovered that it was problematic if
libpq.so had encoding ID assignments different from the backend,
which is possible because on some platforms libpq.so might be
of a different major version from the calling programs.
psql should use libpq's assignments, but initdb has to use the
backend's, else it will put wrong values into pg_database.
The solution devised in commit 8468146b0 relied on giving initdb
its own copy of encnames.c rather than relying on the functions
exported by libpq. Later, that metamorphosed into ensuring that
libpgcommon got linked before libpq -- which made things OK for
initdb but broke psql. We didn't notice for lack of any changes
in enum pg_enc since then. Commit 06843df4a reversed that, fixing
the latent bug in psql but adding one in initdb. The meson build
infrastructure is also not being sufficiently careful about link
order, and trying to make it so would be equally fragile.
Hence, let's use a new scheme based on giving the libpq-exported
symbols different real names than the same functions exported from
libpgcommon.a or libpgcommon_srv.a. (We could distinguish those
two cases as well, but there seems no need to.) libpq gets the
official names to avoid an ABI break for libpq clients, while the
other cases use #define's to make the real names "xxx_private"
rather than "xxx". By controlling where the #define's are
applied, we can force any particular client program to use one
set or the other of the encnames.c functions.
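As an illustration, the non-libpq builds of encnames.c see something
along these lines (a sketch; pg_char_to_encoding and friends are among
the affected functions, and the exact macro spelling may differ):

    #define pg_char_to_encoding      pg_char_to_encoding_private
    #define pg_encoding_to_char      pg_encoding_to_char_private
    #define pg_valid_server_encoding pg_valid_server_encoding_private

libpq itself is built without these #define's, so it keeps exporting
the official names.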
We cannot back-patch this, since it'd be an ABI break for backend
loadable modules, but there seems little need to. We're just
trying to ensure that the world is safe for hypothetical future
additions to enum pg_enc.
In passing, this should fix "duplicate symbol" linker warnings
that we've been seeing on AIX buildfarm members since commit
06843df4a. It's not very clear why that linker is complaining
now, when there were strictly *more* duplicates visible before,
but in any case this should remove the reason for complaint.
Patch by me; thanks to Andres Freund for review.
Discussion: https://postgr.es/m/2385119.1696354473@sss.pgh.pa.us
There was some discussion about what the line length should be. Most output
currently clearly targets around 80 columns, but the maximum in use
currently is 95, so we set that as the current maximum. This just
ensures that there is some guidance and there are no wild deviations.
based on patch by Atsushi Torikoshi <torikoshia@oss.nttdata.com>
Discussion: https://www.postgresql.org/message-id/flat/50ca8ff35a8dd8f9ec89963b503571a7@oss.nttdata.com
This seemed inconsistent with the --help output of other tools.
Depending on the values, it can cause ugly formatting. Also, we're
not getting the defaults from libpq; we're just emulating the methods
libpq uses to derive these values, so they might not be 100% correct.
Author: Atsushi Torikoshi <torikoshia@oss.nttdata.com>
Discussion: https://www.postgresql.org/message-id/flat/50ca8ff35a8dd8f9ec89963b503571a7@oss.nttdata.com
Currently, B-tree code matches every scan key to every item on the page.
Imagine an ordered B-tree scan for a query like this:
SELECT * FROM tbl WHERE col > 'a' AND col < 'b' ORDER BY col;
The (col > 'a') scan key will always be matched once we find the location to
start the scan. The (col < 'b') scan key will match every item on the page
as long as it matches the last item on the page.
This patch implements a precheck of the scan keys required for the scan
direction at the beginning of each page scan. If the precheck succeeds, we
can skip checking those scan keys for the remaining items on the page. That
can lead to a significant speedup, especially if the comparison operator is
expensive.
Idea from patch by Konstantin Knizhnik.
Discussion: https://postgr.es/m/079c3f8e-3371-abe2-e93c-fc8a0ae3f571%40garret.ru
Reviewed-by: Peter Geoghegan, Pavel Borisov
A bgworker can spawn parallel workers of its own when executing queries,
and if the worker uses BGWORKER_BYPASS_ALLOWCONN while the database it
is connected to does not allow connections, a parallel worker would fail
to start up. In the case of this module, the step that checks for the
presence of the schema to create was spawning a parallel worker, causing
the last test introduced by 991bb0f965 to fail.
This issue could be reproduced with debug_parallel_query = 'regress',
for example.
Per buildfarm member crake.
BuildDescForRelation()'s main job is to convert ColumnDef lists to
pg_attribute/tuple descriptor arrays, which makes it really mostly an
internal subroutine of DefineRelation() and some related functions.
That is more the remit of tablecmds.c and doesn't have much to do
with the basic tuple descriptor interfaces in tupdesc.c. This is also
supported by observing the header includes we can remove from tupdesc.c.
By moving it over, we can also (in the future) make
BuildDescForRelation() use more internals of tablecmds.c that are not
sensible to be exposed in tupdesc.c.
Discussion: https://www.postgresql.org/message-id/flat/52a125e4-ff9a-95f5-9f61-b87cf447e4da@eisentraut.org
Keep track of the used size of the array. That avoids looping through
the whole array in a few places. It doesn't matter from a performance
point of view since the array is small anyway, but this feels less
surprising and is a little less code. Now that we have an explicit
NumListenSockets variable that is statically initialized to 0, we
don't need the loop to initialize the array.
Allocate the array in PostmasterContext. The array isn't needed in
child processes, so this allows reusing that memory. We could easily
make the array resizable now, but we haven't heard any complaints
about the current 64 sockets limit.
Discussion: https://www.postgresql.org/message-id/7bb7ad65-a018-2419-742f-fa5fd877d338@iki.fi
Previously, the JSON code didn't have to worry too much about freeing
JsonLexContext, because it was never too long-lived. With new features
being added for SQL/JSON this is no longer the case. Add a routine
that knows how to free this struct and apply that to a few places, to
prevent this from becoming problematic.
At the same time, we change the API of makeJsonLexContextCstringLen to
make it receive a pointer to JsonLexContext for callers that want it to
be stack-allocated; it can also be passed as NULL to get the original
behavior of a palloc'ed one.
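For example, callers can now do either of the following (a sketch of
the post-patch API; the encoding and need_escapes arguments are shown
only for illustration):

    /* stack-allocated struct: pass a pointer to it */
    JsonLexContext lex;

    makeJsonLexContextCstringLen(&lex, str, len, GetDatabaseEncoding(), true);
    /* ... parse ... */
    freeJsonLexContext(&lex);

    /* or pass NULL to keep the original palloc'ed behavior */
    JsonLexContext *lexp;

    lexp = makeJsonLexContextCstringLen(NULL, str, len,
                                        GetDatabaseEncoding(), true);
    /* ... parse ... */
    freeJsonLexContext(lexp);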
This also causes an ABI break due to the addition of flags to
JsonLexContext, so we can't easily backpatch it. AFAICS that's not much
of a problem; apparently some leaks might exist in JSON usage of
text-search, for example via json_to_tsvector, but I haven't seen any
complaints about that.
Per Coverity complaint about datum_to_jsonb_internal().
Discussion: https://postgr.es/m/20230808174110.oq3iymllsv6amkih@alvherre.pgsql
6b94e7a6d did this for ordered append paths to allow fast-startup
MergeAppends; however, nothing was done for the plain Append case.
Here we adjust add_paths_to_append_rel() to have it build an AppendPath
containing the cheapest startup paths from each of the child relations
when the append rel has "consider_startup" set.
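Conceptually, the new logic does something like this for each child
relation (a simplified sketch, not the committed code):

    if (rel->consider_startup && childrel->cheapest_startup_path != NULL)
        startup_subpaths = lappend(startup_subpaths,
                                   childrel->cheapest_startup_path);
    /* ... and later builds an AppendPath over startup_subpaths ... */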
Author: Andy Fan, David Rowley
Discussion: https://www.postgresql.org/message-id/CAKU4AWrXSkUV=Pt-gRxQT7EbfUeNssprGyNsB=5mJibFZ6S3ww@mail.gmail.com
Ensure we switch to the per-tuple memory context to prevent any memory
leaks of detoasted Datums in MemoizeHash_hash() and MemoizeHash_equal().
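The pattern is roughly the following (a simplified sketch; econtext
stands for the Memoize node's expression context):

    MemoryContext oldcontext;

    oldcontext = MemoryContextSwitchTo(econtext->ecxt_per_tuple_memory);
    /* hash or compare the cache key here; any Datums detoasted along
     * the way are released when the per-tuple context is next reset */
    MemoryContextSwitchTo(oldcontext);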
Reported-by: Orlov Aleksej
Author: Orlov Aleksej, David Rowley
Discussion: https://postgr.es/m/83281eed63c74e4f940317186372abfd%40cft.ru
Backpatch-through: 14, where Memoize was added
The comment
/* On some platforms, readline is declared as readline(char *) */
is obsolete. The casting away of const can be removed.
The const in the readline() prototype was added in GNU readline 4.2,
released in 2001. BSD libedit has also had const in the prototype
since at least 2001.
(The commit that introduced this comment (187e865174) talked about
FreeBSD 4.8, which didn't have readline compatibility in libedit yet,
so it must have been talking about GNU readline in the base system.
This checks out, but already FreeBSD 5 had an updated GNU readline
with const.)
Reviewed-by: Aleksander Alekseev <aleksander@timescale.com>
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://www.postgresql.org/message-id/flat/862fc1d4-9a0c-d2b6-5451-ee3dc750bcab%40eisentraut.org
A couple of new options are added to this module to provide more control
over the way bgworkers are started:
- A new GUC called worker_spi.role to control which role to use by
default when starting a worker.
- worker_spi_launch() gains three arguments: a role OID, a database OID
and flags (currently only BGWORKER_BYPASS_ALLOWCONN). By default, the
role OID and the database OID are InvalidOid, in which case the worker
would use the related GUCs.
Workers loaded by shared_preload_libraries use the default values
provided by the GUCs, with flags at 0. The options are given to the
main bgworker routine through bgw_extra. A test case is tweaked to
start two dynamic workers with databases and roles defined by the caller
of worker_spi_launch().
These additions will have the advantage of expanding the tests for
bgworkers, for at least two cases:
- BGWORKER_BYPASS_ALLOWCONN has no coverage in the core tree.
- A new bgworker flag is under discussion, and this eases the
integration of new tests.
Reviewed-by: Bertrand Drouvot
Discussion: https://postgr.es/m/bcc36259-7850-4882-97ef-d6b905d2fc51@gmail.com
Two custom wait events are added here:
- "DblinkConnect", when waiting to establish a connection to a remote
server.
- "DblinkGetConnect", when waiting to establish a connection to a remote
server but it could not be found in the list of already-opened ones.
Author: Masahiro Ikeda
Discussion: https://postgr.es/m/197bce267fa691a0ac62c86c4ab904c4@oss.nttdata.com
Three custom wait events are added here:
- "PostgresFdwCleanupResult", waiting while cleaning up PQgetResult() on
transaction abort.
- "PostgresFdwConnect", waiting to establish a connection to a remote
server.
- "PostgresFdwGetResult", waiting to receive a result from a remote
server.
Author: Masahiro Ikeda
Discussion: https://postgr.es/m/197bce267fa691a0ac62c86c4ab904c4@oss.nttdata.com
Two custom wait events are added here:
- "TestShmMqBgWorkerStartup", when setting up a set of bgworkers in
wait_for_workers_to_become_ready().
- "TestShmMqMessageQueue", when waiting for a queued message in
test_shm_mq_pipelined().
Author: Masahiro Ikeda
Discussion: https://postgr.es/m/197bce267fa691a0ac62c86c4ab904c4@oss.nttdata.com
This naming is more consistent with all the other user-facing wait event
strings. Other in-core modules will use the same naming convention, so
let's be consistent here as well.
Extracted from a larger patch by the same author.
Author: Masahiro Ikeda
Discussion: https://postgr.es/m/197bce267fa691a0ac62c86c4ab904c4@oss.nttdata.com
Allow a line break in example output, as we have done elsewhere.
Overlength output was added in commit 1e68e43d3.
While here, adjust some shaky grammar in an adjacent note
(from a different commit, c9af05465).
Per buildfarm.
It is unnecessary to include this field in IndexInfo. It is only used
by DDL code, not during execution. It is really only used to pass
local information around between functions in index.c and indexcmds.c,
for which it is clearer to use local variables, like in similar cases.
Discussion: https://www.postgresql.org/message-id/flat/f84640e3-00d3-5abd-3f41-e6a19d33c40b@eisentraut.org
In hopes of putting these where any would-be implementer is sure to
find them, make a placeholder grammar production for ALTER DROP VALUE
and put them there. This is really just a docs patch, though.
Vik Fearing, with a bit more wordsmithing by me
Discussion: https://postgr.es/m/9fffd149-da0f-0c9c-6745-731fb688642a@postgresfriends.org
This further reduces the length and complexity of sendFile(),
hopefully making it easier to understand and modify. In addition
to moving some logic into a new function, I took this opportunity
to make a few slight adjustments to sendFile() itself, including
renaming the 'len' variable to 'bytes_done', since we use it to represent
the number of bytes we've handled so far, not the total
length of the file.
Patch by me, reviewed by David Steele.
Discussion: http://postgr.es/m/CA+TgmoYt5jXH4U6cu1dm9Oe2FTn1aae6hBNhZzJJjyjbE_zYig@mail.gmail.com
If checksum verification fails for a particular page, we reread the
page and try one more time. The code that does this is somewhat complex
and difficult to follow. Move some of the logic into a new function
and rearrange the code a bit to try to make it clearer. This way,
we don't need the block_retry Boolean, a couple of other variables
move from sendFile() into the new function, and some code is now less
deeply indented.
Patch by me, reviewed by David Steele.
Discussion: http://postgr.es/m/CA+TgmoYt5jXH4U6cu1dm9Oe2FTn1aae6hBNhZzJJjyjbE_zYig@mail.gmail.com
The code in charge of copying the contents of PgBackendStatus to local
memory could fail on memory allocation because of an overflow in the
amount of memory to use. The overflow can happen when combining a high
value of track_activity_query_size (max at 1MB) with a large
max_connections, where their product exceeds INT32_MAX because both
parameters are treated as signed integers. This could for example
trigger with the following functions, all calling
pgstat_read_current_status():
- pg_stat_get_backend_subxact()
- pg_stat_get_backend_idset()
- pg_stat_get_progress_info()
- pg_stat_get_activity()
- pg_stat_get_db_numbackends()
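For instance (illustrative numbers), combining the maximum
track_activity_query_size of 1MB with roughly 2048 connections is
already enough for the product alone to overflow a signed 32-bit
integer:

    1048576 * 2048 = 2147483648 > INT32_MAX (2147483647)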
The change to use MemoryContextAllocHuge() has been introduced in
8d0ddccec6, so backpatch down to 12.
Author: Jakub Wartak
Discussion: https://postgr.es/m/CAKZiRmw8QSNVw2qNK-dznsatQqz+9DkCquxP0GHbbv1jMkGHMA@mail.gmail.com
Backpatch-through: 12
Make a few newish calls to appendStringInfo() which have no special
formatting use appendStringInfoString() instead. Also, adjust usages of
appendStringInfoString() which only append a string containing a single
character to make use of appendStringInfoChar() instead.
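For example (illustrative, not actual call sites):

    appendStringInfo(&buf, "Options:");        /* no conversion specifiers */
    appendStringInfoString(&buf, "Options:");  /* equivalent and cheaper */

    appendStringInfoString(&buf, ",");         /* single-character string */
    appendStringInfoChar(&buf, ',');           /* lighter-weight equivalent */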
This makes the code marginally faster, but primarily this change is so
we use the StringInfo type as it was intended to be used.
Discussion: https://postgr.es/m/CAApHDvpXKQmL+r=VDNS98upqhr9yGBhv2Jw3GBFFk_wKHcB39A@mail.gmail.com
This commit changes the WAL reader routines so that a FATAL for the
backend or exit(FAILURE) for the frontend is triggered if an allocation
for a WAL record decode fails in xlogreader.c, rather than treating this
case as bogus data, which would be equivalent to the end of WAL. The
key is to avoid palloc_extended(MCXT_ALLOC_NO_OOM) in xlogreader.c,
relying on plain palloc() calls instead.
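In essence (a simplified sketch of the change, not a literal diff):

    /* before: an OOM here looked like bogus data, i.e. end of WAL */
    ptr = palloc_extended(newSize, MCXT_ALLOC_NO_OOM);
    if (ptr == NULL)
        return false;

    /* after: a failed allocation raises FATAL (backend) or exits (frontend) */
    ptr = palloc(newSize);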
The previous behavior could make WAL replay finish earlier than it
should. For example, crash recovery finishing earlier may corrupt
clusters because not all the WAL available locally was replayed to
ensure a consistent state. Out-of-memory failures would show up
randomly depending on the memory pressure on the host, but one simple
case would be to generate a large record, then replay this record after
downsizing a host, as Ethan Mertz originally reported.
This relies on bae868caf2, as the WAL reader routines now do the
memory allocation required for a record only once its header has been
fully read and validated, making xl_tot_len trustable. Making the WAL
reader react differently on out-of-memory or bogus record data would
require ABI changes, so this is the safest choice for stable branches.
Also, it is worth noting that 3f1ce97346 has been using a plain
palloc() in this code for some time now.
Thanks to Noah Misch and Thomas Munro for the discussion.
Like the other commit, backpatch down to 12, leaving out v11, which will
be EOL'd soon. The behavior of considering a failed allocation as bogus
data comes originally from 0ffe11abd3, where the record length
retrieved from its header was not entirely trustable.
Reported-by: Ethan Mertz
Discussion: https://postgr.es/m/ZRKKdI5-RRlta3aF@paquier.xyz
Backpatch-through: 12
The list form of stat() is an inelegant API, as it relies on the position
of the file size in the list it returns. Like in other places of the
tree, replace that with the -s switch instead.
Another suggestion from Dagfinn is File::stat, which we've already been
using for some other fields. It really comes down to a matter of taste
to choose that over -s, and the latter is more used in the tree.
Author: Bertrand Drouvot
Reviewed-by: Dagfinn Ilmari Mannsåker
Discussion: https://postgr.es/m/b2020df7-d0fc-4ea5-b2a9-7efc6d36b2ac@gmail.com
In a selective restore, ACLs for a table should be dumped if the
table is selected to be dumped. However, if the table has both
table-level and column-level ACLs, only the table-level ACL was
restored. This happened because _tocEntryRequired assumed that
an ACL could have only one dependency (the one on its table),
and punted if there was more than one. But since commit ea9125304,
column-level ACLs also depend on the table-level ACL if any, to
ensure correct ordering in parallel restores. To fix, adjust the
logic in _tocEntryRequired to ignore dependencies on ACLs.
I extended a test case in 002_pg_dump.pl so that it purports to
test for this; but in fact the test passes even without the fix.
That's because this bug only manifests during a selective restore,
while the scenarios 002_pg_dump.pl tests include only selective dumps.
Perhaps somebody would like to extend the script so that it can test
scenarios including selective restore, but I'm not touching that.
Euler Taveira and Tom Lane, per report from Kong Man.
Back-patch to all supported branches.
Discussion: https://postgr.es/m/DM4PR11MB73976902DBBA10B1D652F9498B06A@DM4PR11MB7397.namprd11.prod.outlook.com
The retry loop is needed because heap_page_prune() calls
HeapTupleSatisfiesVacuum() and then lazy_scan_prune() does the same
thing again, and they might get different answers due to concurrent
clog updates. But this patch makes heap_page_prune() return the
HeapTupleSatisfiesVacuum() results that it computed back to the
caller, which allows lazy_scan_prune() to avoid needing to recompute
those values in the first place. That's nice both because it eliminates
the need for a retry loop and also because it's cheaper.
Melanie Plageman, reviewed by David Geier, Andres Freund, and me.
Discussion: https://postgr.es/m/CAAKRu_br124qsGJieuYA0nGjywEukhK1dKBfRdby_4yY3E9SXA%40mail.gmail.com
This adjusts the expression evaluation code for CoerceViaIO and
CoerceToDomain to handle errors softly if needed.
For CoerceViaIO, this means using InputFunctionCallSafe(), which
provides the option to handle errors softly, instead of calling the
type input function directly.
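Roughly (a sketch; flinfo, typioparam, typmod, escontext and result
stand in for the coercion step's fields):

    /* returns false on soft error, with details saved in escontext */
    if (!InputFunctionCallSafe(&flinfo, str, typioparam, typmod,
                               (Node *) escontext, &result))
        return;   /* error was recorded softly, no ereport() */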
For CoerceToDomain, this simply entails replacing the ereport() in
ExecEvalConstraintCheck() by errsave().
In both cases, the ErrorSaveContext to be used when evaluating the
expression is stored by ExecInitExprRec() in the expression's struct
within its ExprEvalStep. The ErrorSaveContext is passed by
setting ExprState.escontext to point to it when calling
ExecInitExprRec() on the expression whose errors are to be handled
softly.
Note that no call site of ExecInitExprRec() has been changed in this
commit, so there's no functional change. This is intended for
implementing new SQL/JSON expression nodes in future commits that
will use it to suppress errors that may occur during type coercions.
Reviewed-by: Álvaro Herrera
Discussion: https://postgr.es/m/CA+HiwqE4XTdfb1nW=Ojoy_tQSRhYt-q_kb6i5d4xcKyrLC1Nbg@mail.gmail.com
SetResultVariables() was not getting called when "printing" a result
that failed (see around PrintQueryResult), which would cause some
variables to not be set, like ROW_COUNT, SQLSTATE or ERROR. This can be
confusing as a previous result would be retained.
This state could be reached when failing to process tuples in a few
commands, like \gset when it returns no tuples, or \crosstabview. A
test is added, based on \gset.
This is arguably a bug fix, but no backpatch is done as there is a risk
of breaking scripts that rely on the previous behavior, even if they do
so accidentally.
Reported-by: amutu
Author: Japin Li
Reviewed-by: Tom Lane, Michael Paquier
Discussion: https://postgr.es/m/18134-87126d90cb4dd049@postgresql.org
After receiving position data for a lexeme, tsvectorrecv()
advanced its "datalen" value by (npos+1)*sizeof(WordEntry)
where the correct calculation is (npos+1)*sizeof(WordEntryPos).
This accidentally failed to render the constructed tsvector
invalid, but it did result in leaving some wasted space
approximately equal to the space consumed by the position data.
That could have several bad effects:
* Disk space is wasted if the received tsvector is stored into a
table as-is.
* A legal tsvector could get rejected with "maximum total lexeme
length exceeded" if the extra space pushes it over the MAXSTRPOS
limit.
* In edge cases, the finished tsvector could be assigned a length
larger than the allocated size of its palloc chunk, conceivably
leading to SIGSEGV when the tsvector gets copied somewhere else.
The odds of a field failure of this sort seem low, though valgrind
testing could probably have found this.
While we're here, let's express the calculation as
"sizeof(uint16) + npos * sizeof(WordEntryPos)" to avoid the type
pun implicit in the "npos + 1" formulation. It's not wrong
given that WordEntryPos had better be 2 bytes to avoid padding
problems, but it seems clearer this way.
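In short, the corrected bookkeeping is (sketch):

    /* before: wrong element type, wasting about as much space again as
     * the position data itself */
    datalen += (npos + 1) * sizeof(WordEntry);

    /* after */
    datalen += sizeof(uint16) + npos * sizeof(WordEntryPos);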
Report and patch by Denis Erokhin. Back-patch to all supported
versions.
Discussion: https://postgr.es/m/009801d9f2d9$f29730c0$d7c59240$@datagile.ru