Since commit 599b33b94, examine_simple_variable() assumed that every RTE_RELATION
RangeTblEntry would have an associated RelOptInfo. But that's not so:
we only build RelOptInfos for relations that are scanned by the query.
In particular the target of an INSERT won't have one, so that Vars
appearing in an INSERT ... RETURNING list will not have an associated
RelOptInfo. This apparently wasn't a problem before commit f7816aec2
taught examine_simple_variable() to drill down into CTEs containing
INSERT RETURNING, but it is now.
To fix, add a fallback code path that gets the userid to use directly
from the RTEPermissionInfo associated with the RTE. (Sadly, we must
have two code paths, because not every RTE has an RTEPermissionInfo
either.)
Per report from Alexander Lakhin. No back-patch, since the case is
apparently unreachable before f7816aec2.
Discussion: https://postgr.es/m/608a4886-6c60-0f9e-97d5-591256bd4150@gmail.com
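
A minimal sketch of the shape of the fix, using hypothetical variable
names (rel may be NULL here; the committed code differs in detail):

    Oid         userid;

    if (rel != NULL)
        userid = rel->userid;
    else if (rte->perminfoindex != 0)
    {
        /* fallback: consult the RTE's RTEPermissionInfo directly */
        RTEPermissionInfo *perminfo;

        perminfo = getRTEPermissionInfo(root->parse->rteperminfos, rte);
        userid = perminfo->checkAsUser;
    }
    else
        userid = InvalidOid;

    /* InvalidOid means "check as the current user" */
    if (!OidIsValid(userid))
        userid = GetUserId();
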
An array element equal to INT_MAX gave this code indigestion,
causing an infinite loop that surely ended in SIGSEGV. We fixed
some nearby problems awhile ago (cf 757c5182f) but missed this.
Report and diagnosis by Alexander Lakhin (bug #18273); patch by me
Discussion: https://postgr.es/m/18273-9a832d1da122600c@postgresql.org
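
A simplified sketch of the failure mode, with hypothetical names (the
real code iterates over array elements):

    /*
     * If upper_bound == INT_MAX, "val <= upper_bound" is always true:
     * incrementing past INT_MAX overflows instead of failing the test,
     * so the loop never terminates.
     */
    for (int val = lower_bound; val <= upper_bound; val++)
        process(val);

    /* A safe formulation tests before incrementing. */
    for (int val = lower_bound;; val++)
    {
        process(val);
        if (val >= upper_bound)
            break;
    }
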
When calculating the maximum number of buckets, take into account that
we later round that value up to the next power of 2.
Reported-by: Karen Talarico
Bug: #16925
Discussion: https://postgr.es/m/16925-ec96d83529d0d629%40postgresql.org
Author: Thomas Munro, Andrei Lepikhov, Alexander Korotkov
Reviewed-by: Alena Rybakina
Backpatch-through: 12
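
A sketch of the idea, assuming the pg_nextpower2_32/pg_prevpower2_32
helpers from port/pg_bitutils.h and hypothetical variable names:

    /*
     * Clamping before rounding can overshoot: the next power of 2 may
     * exceed the intended maximum again.
     */
    nbuckets = Min(nbuckets, max_buckets);
    nbuckets = pg_nextpower2_32(nbuckets);  /* may exceed max_buckets */

    /*
     * Shrink the cap to a power of 2 first, so the later rounding
     * cannot push the result past it.
     */
    nbuckets = Min(nbuckets, pg_prevpower2_32(max_buckets));
    nbuckets = pg_nextpower2_32(nbuckets);  /* now <= max_buckets */
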
When the self-join elimination (SJE) code handles the transfer of qual
clauses from the removed
relation to the remaining one, it replaces the Vars of the removed
relation with the Vars of the remaining relation for each clause, and
then reintegrates these clauses into the appropriate restriction or join
clause lists, while attempting to avoid duplicates.
However, the code compares RestrictInfo->clause to determine if two
clauses are duplicates. This is just flat wrong. Two RestrictInfos
with the same clause can have different required_relids,
incompatible_relids, is_pushed_down, and so on. This can cause qual
clauses to be mistakenly omitted, leading to wrong results.
This patch fixes it by comparing the entire RestrictInfos, not just
their clauses, while ignoring the 'rinfo_serial' field (otherwise
almost all RestrictInfos would compare as distinct). Making
'rinfo_serial' equal_ignore would break other code, which is why this
commit implements its own comparison function for checking the
equality of RestrictInfos.
Reported-by: Zuming Jiang
Bug: #18261
Discussion: https://postgr.es/m/18261-2a75d748c928609b%40postgresql.org
Author: Richard Guo
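
A minimal sketch of one way to write such a comparison (hypothetical
helper; the committed function may differ):

    /*
     * Compare two RestrictInfos for logical equality, ignoring
     * rinfo_serial, which is unique per RestrictInfo by design.
     * Temporarily neutralizing the field keeps the sketch short.
     */
    static bool
    restrict_infos_equal(RestrictInfo *a, RestrictInfo *b)
    {
        int     save_serial = a->rinfo_serial;
        bool    result;

        a->rinfo_serial = b->rinfo_serial;
        result = equal(a, b);
        a->rinfo_serial = save_serial;

        return result;
    }
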
Support referencing a composite-type variable in %TYPE.
Remove the undocumented, untested, and pretty useless ability
to have the subject of %TYPE be an (unqualified) type name.
You get the same result by just not writing %TYPE.
Add or adjust some test cases to improve code coverage here.
Discussion: https://postgr.es/m/716852.1704402127@sss.pgh.pa.us

Commit 31966b151e6a introduced a typo in the code that updates the
state of a local buffer when a temporary relation is extended, in the
case where a block within the extended range is already present in the
hash table holding the local buffers. In this case BM_VALID should be
cleared, but the buffer state was changed such that BM_VALID remained
set while the other flags were cleared.
As reported on the thread, it was possible to corrupt the state of the
local buffers on ENOSPC, but the state would be corrupted on any kind
of ERROR during the relation extension (like a partial write or some
other errno).
Reported-by: Alexander Lakhin
Author: Tender Wang
Reviewed-by: Richard Guo, Alexander Lakhin, Michael Paquier
Discussion: https://postgr.es/m/18259-6e256429825dd435@postgresql.org
Backpatch-through: 16
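
A sketch of the kind of typo involved, with a simplified flags
expression (not the committed diff):

    buf_state &= BM_VALID;      /* wrong: keeps only BM_VALID */
    buf_state &= ~BM_VALID;     /* intended: clears BM_VALID */
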
If we have DECHIST statistics about the argument expression, use
the average number of distinct elements as the array length estimate.
(It'd be better to use the average total number of elements, but
that is not currently calculated by compute_array_stats(), and
it's unclear that it'd be worth extra effort to get.)
To do this, we have to change the signature of estimate_array_length
to pass the "root" pointer. While at it, also change its result
type to "double". That's probably not really necessary, but it
avoids any risk of overflow of the value extracted from DECHIST.
All existing callers are going to use the result in a "double"
calculation anyway.
Paul Jungwirth, reviewed by Jian He and myself
Discussion: https://postgr.es/m/CA+renyUnM2d+SmrxKpDuAdpiq6FOM=FByvi6aS6yi__qyf6j9A@mail.gmail.com
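
The resulting declaration, as described above (a sketch; the old
signature shown in the comment is an assumption):

    /* previously: int estimate_array_length(Node *arrayexpr); */
    double estimate_array_length(PlannerInfo *root, Node *arrayexpr);
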
Many foreach loops only use the ListCell pointer to retrieve the
content of the cell, like so:

    ListCell   *lc;

    foreach(lc, mylist)
    {
        int     myint = lfirst_int(lc);

        ...
    }

This commit adds a few convenience macros that automatically
declare the loop variable and retrieve the current cell's contents.
This allows us to rewrite the previous loop like this:

    foreach_int(myint, mylist)
    {
        ...
    }

This commit also adjusts a few existing loops in order to add
coverage for the new/adjusted macros. There is presently no plan
to bulk update all foreach loops, as that could introduce a
significant amount of back-patching pain. Instead, these macros
are primarily intended for use in new code.
Author: Jelte Fennema-Nio
Reviewed-by: David Rowley, Alvaro Herrera, Vignesh C, Tom Lane
Discussion: https://postgr.es/m/CAGECzQSwXKnxGwW1_Q5JE%2B8Ja20kyAbhBHO04vVrQsLcDciwXA%40mail.gmail.com
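
A usage sketch of the pointer-based variant, assuming it takes the
element type as its first argument (names are illustrative):

    /* The macro declares "rv" itself; no explicit ListCell is needed. */
    foreach_ptr(RangeVar, rv, rangevar_list)
    {
        process_rangevar(rv);
    }
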
This adds a new ALTER TABLE subcommand ALTER COLUMN ... SET EXPRESSION
that changes the generation expression of a generated column.
The syntax is not standard but was adapted from other SQL
implementations.
This command causes a table rewrite, using the usual ALTER TABLE
mechanisms. The implementation is similar to and makes use of some of
the infrastructure of the SET DATA TYPE subcommand (for example,
rebuilding constraints and indexes afterwards). The new command
requires a new pass in AlterTablePass, and the ADD COLUMN pass had to
be moved earlier so that combinations of ADD COLUMN and SET EXPRESSION
can work.
Author: Amul Sul <sulamul@gmail.com>
Discussion: https://www.postgresql.org/message-id/flat/CAAJ_b94yyJeGA-5M951_Lr+KfZokOp-2kXicpmEhi5FXhBeTog@mail.gmail.com

1349d2790 added code to allow DISTINCT and ORDER BY aggregates to work
more efficiently by using presorted input. That commit added some code
that made use of the AggState's tmpcontext and adjusted the
ecxt_outertuple and ecxt_innertuple slots before checking if the current
row is distinct from the previously seen row. That code forgot to set the
TupleTableSlots back to what they were originally, which could result in
errors such as:
    ERROR: attribute 1 of type record has wrong type
This only affects aggregate functions which have multiple arguments when
DISTINCT is used. For example: string_agg(DISTINCT col, ', ')
Thanks to Tom Lane for identifying the breaking commit.
Bug: #18264
Reported-by: Vojtěch Beneš
Discussion: https://postgr.es/m/18264-e363593d7e9feb7d@postgresql.org
Backpatch-through: 16, where 1349d2790 was added
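
A sketch of the shape of the fix, with hypothetical slot names (see
nodeAgg.c for the real code):

    ExprContext    *tmpcontext = aggstate->tmpcontext;
    TupleTableSlot *save_outer = tmpcontext->ecxt_outertuple;
    TupleTableSlot *save_inner = tmpcontext->ecxt_innertuple;
    bool            isdistinct;

    /* Point the expression context at the rows being compared. */
    tmpcontext->ecxt_outertuple = newslot;
    tmpcontext->ecxt_innertuple = lastslot;
    isdistinct = !ExecQual(pertrans->equalfnMulti, tmpcontext);

    /* Restore the slots so later evaluation sees the original tuples. */
    tmpcontext->ecxt_outertuple = save_outer;
    tmpcontext->ecxt_innertuple = save_inner;
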
This patch changes the existing 'conflicting' field to 'conflict_reason'
in pg_replication_slots. This new field indicates the reason for the
logical slot's conflict with recovery. It is always NULL for physical
slots, as well as for logical slots which are not invalidated. The
non-NULL values indicate that the slot is marked as invalidated. Possible
values are:
wal_removed = required WAL has been removed.
rows_removed = required rows have been removed.
wal_level_insufficient = the primary doesn't have a wal_level sufficient
to perform logical decoding.
The existing users of 'conflicting' column can get the same answer by
using 'conflict_reason' IS NOT NULL.
Author: Shveta Malik
Reviewed-by: Amit Kapila, Bertrand Drouvot, Michael Paquier
Discussion: https://postgr.es/m/ZYOE8IguqTbp-seF@paquier.xyz
The "vertexes" spelling is also valid, but we consistently use
"vertices" elsewhere.
Author: Dagfinn Ilmari Mannsåker
Reviewed-by: Shubham Khanna
Discussion: https://postgr.es/m/87le9fmi01.fsf@wibble.ilmari.org

CheckPWChallengeAuth() would return STATUS_ERROR if the user does not
exist or has no password assigned, even if the client disconnected
without responding to the password challenge (as libpq often will,
for example). We should return STATUS_EOF in that case, and the
lower-level functions do, but this code level got it wrong since the
refactoring done in 7ac955b34. This breaks the intent of not logging
anything for EOF cases (cf. comments in auth_failed()) and might
also confuse users of ClientAuthentication_hook.
Per report from Liu Lang. Back-patch to all supported versions.
Discussion: https://postgr.es/m/b725238c-539d-cb09-2bff-b5e6cb2c069c@esgyn.cn
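
A sketch of the intended mapping (hypothetical structure; see auth.c
for the actual logic):

    switch (auth_result)
    {
        case STATUS_OK:
            break;              /* authentication succeeded */
        case STATUS_EOF:
            return STATUS_EOF;  /* client hung up: stay silent */
        default:
            return STATUS_ERROR;    /* genuine authentication failure */
    }
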
Second attempt at 283a95da923. Since we can't reorder the enum values
of JsonPathItemType, instead reorder the switch cases where they are
used to generally follow the order of the enum values, for better
maintainability.

This reverts commit 283a95da923605c1cc148155db2d865d0801b419.
The reordering of JsonPathItemType affects the binary on-disk
compatibility of the jsonpath type, so we must not change it. Revert
for now and consider a different approach.

Swap the arguments to TimestampDifferenceMilliseconds so that we get
a positive answer instead of zero.
Then use the result of that computation instead of ignoring it.
Per reports from Alexander Lakhin.
Discussion: http://postgr.es/m/8b686764-7ac1-74c3-70f9-b64685a2535f@gmail.com
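
A sketch of the pitfall at a hypothetical call site:

    /*
     * TimestampDifferenceMilliseconds(start_time, stop_time) clamps
     * negative differences to zero, so reversed arguments silently
     * yield 0 rather than an error.
     */
    long        timeout_ms;

    timeout_ms = TimestampDifferenceMilliseconds(start_time, stop_time);
    /* ... and actually use timeout_ms instead of discarding it */
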
New TAP tests added by commits 9a17be1e and 4710b67d neglected to
convert warnings to FATAL. This commit fixes that.
Author: Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>

Some of the TAP tests have been historically setting the environment
variable PGDATABASE to 'postgres', which is not needed because
PostgreSQL::Test::Cluster already sets it when initialized. This commit
removes these explicit setups.
Note that the dependency of clusterdb -a on PGDATABASE (from
1caef31d9e55) is still documented.
Author: Bharath Rupireddy
Discussion: https://postgr.es/m/CALj2ACXLAz5dW3ZP+Fec8g6jQMMmDyCVT+qdbye2h7QJJmhsdw@mail.gmail.com

Avoid leaving a dangling pointer in the unlikely event that
nsphash_create fails. Improve comments, and fix formatting by
adding typedefs.list entries.
Discussion: https://postgr.es/m/3972900.1704145107@sss.pgh.pa.us
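
A sketch of the pattern that avoids the dangling pointer, with
hypothetical names for the cache pointer, its memory context, and the
sizing constant:

    /*
     * Clear the cached pointer before rebuilding, so that if creation
     * errors out partway through, nothing is left pointing at freed
     * memory.
     */
    SearchPathCache = NULL;
    MemoryContextReset(SearchPathCacheContext);
    SearchPathCache = nsphash_create(SearchPathCacheContext,
                                     SPCACHE_RESET_THRESHOLD,
                                     NULL);
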
This feature (preserving the full state of subscriptions across an
upgrade) will allow us to replicate the changes on subscriber nodes
after the upgrade.
Previously, only the subscription metadata information was preserved.
Without the list of relations and their state, it's not possible to
re-enable the subscriptions without missing some records as the list of
relations can only be refreshed after enabling the subscription (and
therefore starting the apply worker). Even if we added a way to refresh
the subscription while enabling a publication, we still wouldn't know
which relations are new on the publication side, and therefore should be
fully synced, and which shouldn't.
To preserve the subscription relations, this patch teaches pg_dump to
restore the content of pg_subscription_rel from the old cluster by using the
binary_upgrade_add_sub_rel_state SQL function. This is supported only
in binary upgrade mode.
The subscription's replication origin is needed to ensure that we don't
replicate anything twice.
To preserve the replication origins, this patch teaches pg_dump to update
the replication origin along with creating a subscription by using the
binary_upgrade_replorigin_advance SQL function to restore the
underlying replication origin remote LSN. This is supported only in
binary upgrade mode.
pg_upgrade will check that all the subscription relations are in 'i'
(init) or in 'r' (ready) state and will error out if that's not the case,
logging the reason for the failure. This helps to avoid the risk of any
dangling slot or origin after the upgrade.
Author: Vignesh C, Julien Rouhaud, Shlok Kyal
Reviewed-by: Peter Smith, Masahiko Sawada, Michael Paquier, Amit Kapila, Hayato Kuroda
Discussion: https://postgr.es/m/20230217075433.u5mjly4d5cr4hcfe@jrouhaud

The brinbuildCallbackParallel callback used by parallel BRIN builds did
not consider that parallel table scans may be synchronized, starting
from an arbitrary block and then wrapping around.
If this happened and the scan actually did wrap around, tuples from the
beginning of the table were added to the last range produced by the same
worker. The index would be missing a range at the beginning of the table,
while the last range would be too wide. This would not produce incorrect
query results, but it'd be less efficient.
Fixed by checking for both past and future ranges in the callback. The
worker may produce multiple summaries for the same page range, but the
leader will merge them as if the summaries came from different workers.
Discussion: https://postgr.es/m/c2ee7d69-ce17-43f2-d1a0-9811edbda6e6%40enterprisedb.com
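
A sketch of the added check, assuming the page-range fields kept in
BrinBuildState:

    /*
     * With a synchronized scan, the next block may lie past the current
     * page range or before it (after the scan wraps around).  Either
     * way, the current range is complete and must be summarized before
     * accumulating the new one.
     */
    if (thisblock < state->bs_currRangeStart ||
        thisblock > state->bs_currRangeStart + state->bs_pagesPerRange - 1)
    {
        /* finish the current range, then start one containing thisblock */
    }
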
Commit b437571714 added support for parallel builds for BRIN indexes,
using code similar to BTREE parallel builds, and also a new tuplesort
variant. This commit simplifies the new code in two ways:
* The "spool" grouping tuplesort and the heap/index is not necessary.
The heap/index are available as separate arguments, causing confusion.
So remove the spool, and use the tuplesort directly.
* The new tuplesort variant does not need the heap/index, as it sorts
simply by the range block number, without accessing the tuple data.
So simplify that too.
Initial report and patch by Ranier Vilela, further cleanup by me.
Author: Ranier Vilela
Discussion: https://postgr.es/m/CAEudQAqD7f2i4iyEaAz-5o-bf6zXVX-AkNUBm-YjUXEemaEh6A%40mail.gmail.com

Commit 16671ba6e7 moved the code that sends "sorry, too many clients
already" and other such messages, but it had the effect that we would
send that error even if the startup packet processing failed, e.g.
because the client sent an invalid startup packet. That was not
intentional.
intentional.
Spotted while reading the code again.

When enabled (default off), the new backtrace_on_internal_error GUC
logs a backtrace anytime elog() or an equivalent ereport() for
internal errors is called.
This is not well covered by the existing backtrace_functions, because
there are many equally-worded low-level errors in many functions. And
if you find out where the error is, then you need to manually rewrite
the elog() to ereport() to attach the errbacktrace(), which is
annoying. Having a backtrace automatically on every elog() call could
be very helpful during development for various kinds of common errors
from palloc, syscache, node support, etc.
Discussion: https://www.postgresql.org/message-id/flat/ba76c6bc-f03f-4285-bf16-47759cfcab9e@eisentraut.org

There are a lot of Perl scripts in the tree, mostly code generation
and TAP tests. Occasionally, these scripts produce warnings. These
are probably always mistakes on the developer side (true positives).
Typical examples are warnings from genbki.pl or related when you make
a mess in the catalog files during development, or warnings from tests
when they massage a config file that looks different on different
hosts, or mistakes during merges (e.g., duplicate subroutine
definitions), or just mistakes that weren't noticed because there is a
lot of output in a verbose build.
This changes all warnings into fatal errors, by replacing

    use warnings;

by

    use warnings FATAL => 'all';

in all Perl files.
Discussion: https://www.postgresql.org/message-id/flat/06f899fd-1826-05ab-42d6-adeb1fd5e200%40eisentraut.org