The implementation uses smaller code when the "expected" operand is a
small constant, but the implementation needlessly defined the set of
acceptable constants more narrowly than the ABI does. Core PostgreSQL
and PGXN don't use the constant path at all, so this is future-proofing.
Back-patch to v13, where commit 30ee5d17c20dbb282a9952b3048d6ad52d56c371
introduced this code.
Reviewed by Tom Lane. Reported by Christoph Berg.
Discussion: https://postgr.es/m/20201009092825.GD889580@msg.df7cb.de
While xlc defines __64BIT__, gcc does not. Due to this oversight in
commit 30ee5d17c20dbb282a9952b3048d6ad52d56c371, gcc builds continued
implementing 64-bit atomics by way of intrinsics. Back-patch to v13,
where that commit first appeared.
Reviewed by Tom Lane.
Discussion: https://postgr.es/m/20201011051043.GA1724101@rfd.leadboat.com
It appears that commit cf63c641c, which intended to prevent
misoptimization of the result-building step in makeOrderedSetArgs,
didn't go far enough: buildfarm member hornet's version of xlc
is now optimizing back to the old, broken behavior in which
list_length(directargs) is fetched only after list_concat() has
changed that value. I'm not entirely convinced whether that's
an undeniable compiler bug or whether it can be justified by a
sufficiently aggressive interpretation of C sequence points.
So let's just change the code to make it harder to misinterpret.
Back-patch to all supported versions, just in case.
Discussion: https://postgr.es/m/1830491.1601944935@sss.pgh.pa.us
The date-vs-timestamp, date-vs-timestamptz, and timestamp-vs-timestamptz
comparators all worked by promoting the first type to the second and
then doing a simple same-type comparison. This works fine, except
when the conversion result is out of range, in which case we throw an
entirely avoidable error. The sources of such failures are
(a) type date can represent dates much farther in the future than
the timestamp types can;
(b) timezone rotation might cause a just-in-range timestamp value to
become a just-out-of-range timestamptz value.
Up to now we just ignored these corner-case issues, but now we have
an actual user complaint (bug #16657 from Huss EL-Sheikh), so let's
do something about it.
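For example (an illustrative sketch; the exact boundary values depend
on the build's datetime range limits), a comparison like this formerly
failed outright:
    SELECT date '5874897-12-31' > timestamp '2000-01-01';
    -- before: ERROR:  date out of range for timestamp
    -- after:  t, since the date lies beyond any representable timestamp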
It turns out that commit 52ad1e659 already built all the necessary
infrastructure to support error-free comparisons, but neglected to
actually use it in the main-line code paths. Fix that, do a little
bit of code style review, and remove the now-duplicate logic in
jsonpath_exec.c.
Back-patch to v13 where 52ad1e659 came in. We could take this back
further by back-patching said infrastructure, but given the small
number of complaints so far, I don't feel a great need to.
Discussion: https://postgr.es/m/16657-cde2f876d8cc7971@postgresql.org
Commit 3eb3d3e78 was a few bricks shy of a load: while it correctly
set the table's "interesting" flag when deciding to dump the data of
an extension config table, it was not correct to clear that flag
if we concluded we shouldn't dump the data. This led to the crash
reported in bug #16655, because in fact we'll traverse dumpTableSchema
anyway for all extension tables (to see if they have user-added
seclabels or RLS policies).
The right thing to do is to force "interesting" true in makeTableDataInfo,
and otherwise leave the flag alone. (Doing it there is more future-proof
in case additional calls are added, and it also avoids setting the flag
unnecessarily if that function decides the table is non-dumpable.)
This investigation also showed that while only the --inserts code path
had an obvious failure in the case considered by 3eb3d3e78, the COPY
code path also has a problem with not having loaded table subsidiary
data. That causes fmtCopyColumnList to silently return an empty string
instead of the correct column list. That accidentally mostly works,
which perhaps is why we didn't notice this before. It would only fail
if the restore column order is different from the dump column order,
which only happens in weird inheritance cases, so it's not surprising
nobody had hit the case with an extension config table. Nonetheless,
it's a bug, and it goes a long way back, not just to v12 where the
--inserts code path started to have a problem with this.
In hopes of catching such cases a bit sooner in future, add some
Asserts that "interesting" has been set in both dumpTableData and
dumpTableSchema. Adjust the test case added by 3eb3d3e78 so that it
checks the COPY rather than INSERT form of that bug, allowing it to
detect the longer-standing symptom.
Per bug #16655 from Cameron Daniel. Back-patch to all supported
branches.
Discussion: https://postgr.es/m/16655-5c92d6b3a9438137@postgresql.org
Discussion: https://postgr.es/m/18048b44-3414-b983-8c7c-9165b177900d@2ndQuadrant.com
This patch prevents crashes or wrong plans when partition-wise joins
are considered during GEQO planning, as a consequence of the
EquivalenceClass data structures becoming corrupt after a GEQO
context reset.
A remaining problem is that successive GEQO cycles will make multiple
copies of the required EC members, since add_child_join_rel_equivalences
has no idea that such members might exist already. For now we'll just
live with that. The lack of field complaints of crashes suggests that
this is a mighty little-used situation.
Back-patch to v12 where this code was introduced.
Discussion: https://postgr.es/m/1683100.1601860653@sss.pgh.pa.us
get_eclass_for_sort_expr() computes expr_relids and nullable_relids
early on, even though they won't be needed unless we make a new
EquivalenceClass, which we often don't. Aside from the probably-minor
inefficiency, there's a memory management problem: these bitmapsets will
be built in the caller's context, leading to dangling pointers if that
is shorter-lived than root->planner_cxt. This would be a live bug if
get_eclass_for_sort_expr() could be called with create_it = true during
GEQO join planning. So far as I can find, the core code never does
that, but it's hard to be sure that no extensions do, especially since
the comments make it clear that that's supposed to be a supported case.
Fix by not computing these values until we've switched into planner_cxt
to build the new EquivalenceClass.
generate_join_implied_equalities() uses inner_rel->relids to look up
relevant eclasses, but it ought to be using nominal_inner_relids.
This is presently harmless because a child RelOptInfo will always have
exactly the same eclass_indexes as its topmost parent; but that might
not be true forever, and anyway it makes the code confusing.
The first of these is old (introduced by me in f3b3b8d5b), so back-patch
to all supported branches. The second only dates to v13, but we might
as well back-patch it to keep the code looking similar across branches.
Discussion: https://postgr.es/m/1508010.1601832581@sss.pgh.pa.us
The descriptions of make_interval() and pg_options_to_table()
were randomly different from the reality embedded in pg_proc.
(These are not all the discrepancies I found in a quick search,
but the others perhaps require more discussion, since there's
at least a case to be made for changing pg_proc not the docs.)
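For reference, a call spelled with the named parameters as pg_proc
declares them (illustration only, not part of this patch):
    SELECT make_interval(years => 1, months => 2, days => 3);
    -- 1 year 2 mons 3 days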
make_interval issue noted by Thomas Kellerer.
Discussion: https://postgr.es/m/7b154ef0-9f22-90b9-7734-4bf23686695b@gmx.net
I noticed while trying to run the regression tests under a low
geqo_threshold that one query on information_schema.columns had
unstable (as in, variable from one run to the next) output order.
This is pretty unsurprising given the complexity of the underlying
plan. Interestingly, of this test's three nigh-identical queries on
information_schema.columns, the other two already had ORDER BY clauses
guaranteeing stable output. Let's make this one look the same.
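The added clause has this general shape (a hypothetical sketch, not
the test's literal query):
    SELECT table_name, column_name, ordinal_position
    FROM information_schema.columns
    WHERE table_schema = 'public'
    ORDER BY table_name, ordinal_position;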
Back-patch to v10 where this test was added. We've not heard field
reports of the test failing, but this experience shows that it can
happen when testing under even slightly unusual conditions.
Commit 151c0c5f7 neglected the possibility that a TEMP_CONFIG file
would explicitly set max_wal_senders=0; as indeed buildfarm member
thorntail does, so that it can test wal_level=minimal in other test
suites. Hence, rather than assuming that max_wal_senders=10 will
prevail if we say nothing, set it explicitly.
Set max_replication_slots=10 explicitly too, just to be safe.
Back-patch to v10, like the previous patch.
Discussion: https://postgr.es/m/723911.1601417626@sss.pgh.pa.us
The error message about columns in the primary key not including all of
the partition key was unclear; reword it.
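For context, the message fires in cases like this (a sketch; the
precise wording is what this patch adjusts):
    CREATE TABLE t (a int, b int, PRIMARY KEY (a))
        PARTITION BY RANGE (b);
    ERROR:  unique constraint on partitioned table must include all
    partitioning columns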
Backpatch all the way to pg11, where it appeared.
Reported-by: Nagaraj Raj <nagaraj.sf@yahoo.com>
Discussion: https://postgr.es/m/64062533.78364.1601415362244@mail.yahoo.com
Previously, a conversion such as
to_date('-44-02-01','YYYY-MM-DD')
would result in '0045-02-01 BC', as the code attempted to interpret
the negative year as BC, but failed to apply the correction needed
for our internal handling of BC years. Fix the off-by-one problem.
Also, arrange for the combination of a negative year and an
explicit "BC" marker to cancel out and produce AD. This is how
the negative-century case works, so it seems sane to do likewise.
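Illustrating both rules on a patched build (sketch):
    SELECT to_date('-44-02-01','YYYY-MM-DD');
    -- 0044-02-01 BC   (formerly 0045-02-01 BC)
    SELECT to_date('-44-02-01 BC','YYYY-MM-DD BC');
    -- 0044-02-01      (negative year and "BC" cancel to AD)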
Continue to read "year 0000" as 1 BC. Oracle would throw an error,
but we've accepted that case for a long time so I'm hesitant to
change it in a back-patch.
Per bug #16419 from Saeed Hubaishan. Back-patch to all supported
branches.
Dar Alathar-Yemen and Tom Lane
Discussion: https://postgr.es/m/16419-d8d9db0a7553f01b@postgresql.org
PostgresNode.pm set "max_wal_senders = 5" for replication testing,
but this seems to be slightly too low for our current test suite.
Slower buildfarm members frequently report "number of requested standby
connections exceeds max_wal_senders" failures, due to old walsenders
not exiting instantaneously. Usually, the test does not fail overall
because of automatic walreceiver restart, but sometimes the failure
becomes visible; and in any case such retries slow down the test.
That value came in with commit 89ac7004d, but was soon obsoleted by
f6d6d2920, which raised the built-in default from zero to 10; so that
PostgresNode.pm is actually setting it to less than the conservative
built-in default. That seems pretty pointless, so let's remove the
special setting and let the default prevail, in hopes of making
the TAP tests more robust.
Likewise, the setting "max_replication_slots = 5" is obsolete and
can be removed.
While here, reverse-engineer a comment about why we're choosing
less-than-default values for some other settings.
(Note: before v12, max_wal_senders counted against max_connections,
so in those branches the latter setting needs some fiddling as well.)
Back-patch to v10 where the subscription tests were added.
It's likely that the older branches aren't pushing the boundaries
of max_wal_senders, but I'm disinclined to spend time trying to
figure out exactly when it started to be a problem.
Discussion: https://postgr.es/m/723911.1601417626@sss.pgh.pa.us
Explicitly mention that primary key constraints are also included in the
limitation that the constraint columns must be a superset of the partition key
columns.
Wording suggestion from Tom Lane.
Discussion: https://postgr.es/m/64062533.78364.1601415362244@mail.yahoo.com
Backpatch-through: 11, where unique constraints on partitioned tables were added
When executing a CALL or DO in a non-atomic context (i.e., not inside
a function or query), plpgsql creates a new plan each time through,
as a rather hacky solution to some resource management issues. But
it failed to free this plan until exit of the current procedure or DO
block, resulting in serious memory bloat in procedures that called
other procedures many times. Fix by remembering to free the plan,
and by being more honest about restoring the previous state (otherwise,
recursive procedure calls have a problem).
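A minimal reproducer for the bloat (a sketch; the procedure body is
immaterial):
    CREATE PROCEDURE p() LANGUAGE plpgsql AS $$ BEGIN NULL; END $$;

    DO $$
    BEGIN
        FOR i IN 1 .. 1000000 LOOP
            CALL p();  -- formerly leaked the one-shot plan each time
        END LOOP;
    END $$;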
There was also a smaller leak associated with recalculation of the
"target" list of output variables. Fix that by using the statement-
lifespan context to hold non-permanent values.
Back-patch to v11 where procedures were introduced.
Pavel Stehule and Tom Lane
Discussion: https://postgr.es/m/CAFj8pRDiiU1dqym+_P4_GuTWm76knJu7z9opWayBJTC0nQGUUA@mail.gmail.com
The SQL standard doesn't require the jsonpath .datetime() method to
support the ISO 8601 format.  But our to_json[b]() functions convert
timestamps to text in the ISO 8601 format for the sake of
compatibility with JavaScript.  So, we add support for ISO 8601 to
the jsonpath .datetime() method, for the sake of compatibility with
to_json[b]().
The standard mode of datetime parsing currently supports just
template patterns and separators in the format string.  In order to
implement ISO 8601, we have to add support for double-quoted string
literals in format strings to the standard parsing mode.
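With the fix, a timestamp in the ISO 8601 form that to_json[b]()
emits is accepted (sketch run against a patched server):
    SELECT jsonb_path_query('"2020-10-09T12:34:56+01:00"',
                            '$.datetime()');
    -- before: ERROR:  datetime format is not recognized
    -- after:  "2020-10-09T12:34:56+01:00"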
Discussion: https://postgr.es/m/94321be0-cc96-1a81-b6df-796f437f7c66%40postgrespro.ru
Author: Nikita Glukhov, revised by me
Backpatch-through: 13
bffe1bd684 introduced the jsonpath .datetime() method, but the
default formats for time and timestamp contained an excess space
between the time and the timezone.  This commit removes that excess
space, making the behavior of the .datetime() method
standard-compliant.
Discussion: https://postgr.es/m/94321be0-cc96-1a81-b6df-796f437f7c66%40postgrespro.ru
Author: Nikita Glukhov
Backpatch-through: 13
Previously the standby server didn't archive timeline history files
streamed from the primary even when archive_mode was set to "always",
although it did archive the streamed WAL files.  This could cause
PITR to fail because the required timeline history file was missing
from the archive.  The cause of this issue was that walreceiver
didn't mark those files as ready for archiving.
This commit makes walreceiver mark those streamed timeline history
files as ready for archiving if archive_mode=always. Then the archiver
process archives the marked timeline history files.
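For reference, the standby-side settings involved (the
archive_command shown is a site-specific placeholder):
    ALTER SYSTEM SET archive_mode = 'always';
    ALTER SYSTEM SET archive_command = '/usr/local/bin/archive %p %f';
    -- changing archive_mode requires a server restart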
Back-patch to all supported versions.
Reported-by: Grigory Smolkin
Author: Grigory Smolkin, Fujii Masao
Reviewed-by: David Zhang, Anastasia Lubennikova
Discussion: https://postgr.es/m/54b059d4-2b48-13a4-6f43-95a087c92367@postgrespro.ru
This addresses a couple of issues with progress reporting for
REINDEX CONCURRENTLY (a sample monitoring query follows the list):
- Report the correct parent relation with the index actually being
rebuilt or validated.  Previously, the progress of the index build
and validation remained attributed to the last index created, which
would be incorrect when working on a table that has more than one
index.
- Use the correct phase when waiting before the drop of the old
indexes. Previously, this was reported with the same status as when
waiting before the old indexes are marked as dead.
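Progress can be monitored with something like:
    SELECT pid, index_relid::regclass AS index_name, phase
    FROM pg_stat_progress_create_index;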
Author: Matthias van de Meent, Michael Paquier
Discussion: https://postgr.es/m/CAEze2WhqFgcwe1_tv=sFYhLWV2AdpfukumotJ6JNcAOQs3jufg@mail.gmail.com
Backpatch-through: 12
We have a dozen or so places that need to iterate over all but the
first cell of a List. Prior to v13 this was typically written as
for_each_cell(lc, lnext(list_head(list)))
Commit 1cff1b95a changed these to
for_each_cell(lc, list, list_second_cell(list))
This patch introduces a new macro for_each_from() which expresses
the start point as a list index, allowing these to be written as
for_each_from(lc, list, 1)
This is marginally more efficient, since ForEachState.i can be
initialized directly instead of backing into it from a ListCell
address. It also seems clearer and less typo-prone.
Some of the remaining uses of for_each_cell() look like they could
profitably be changed to for_each_from(), but here I confined myself
to changing uses of list_second_cell().
Also, fix for_each_cell_setup() and for_both_cell_setup() to
const-ify their arguments; that's a simple oversight in 1cff1b95a.
Back-patch into v13, on the grounds that (1) the const-ification
is a minor bug fix, and (2) it's better for back-patching purposes
if we only have two ways to write these loops rather than three.
In HEAD, also remove list_third_cell() and list_fourth_cell(),
which were also introduced in 1cff1b95a, and are unused as of
cc99baa43. It seems unlikely that any third-party code would
have started to use them already; anyone who has can be directed
to list_nth_cell instead.
Discussion: https://postgr.es/m/CAApHDvpo1zj9KhEpU2cCRZfSM3Q6XGdhzuAS2v79PH7WJBkYVA@mail.gmail.com
This function leaked some memory while loading qual clauses for
an RLS policy. While ordinarily negligible, that could build up
in some repeated-reload cases, as reported by Konstantin Knizhnik.
We can improve matters by borrowing the coding long used in
RelationBuildRuleLock: build stringToNode's result directly in
the target context, and remember to explicitly pfree the
input string.
This patch by no means completely guarantees zero leaks within
this function, since we have no real guarantee that the catalog-
reading subroutines it calls don't leak anything. However,
practical tests suggest that this is enough to resolve the issue.
In any case, any remaining leaks are similar to those risked by
RelationBuildRuleLock and other relcache-loading subroutines.
If we need to fix them, we should adopt a more global approach
such as that used by the RECOVER_RELATION_BUILD_MEMORY hack.
While here, let's remove the need for an expensive PG_TRY block by
using MemoryContextSetParent to reparent an initially-short-lived
context for the RLS data.
Back-patch to all supported branches.
Discussion: https://postgr.es/m/21356c12-8917-8249-b35f-1c447231922b@postgrespro.ru
Parallel pg_dump failed if its -d parameter was a connection string
containing any essential information other than host, port, or username.
The same was true for pg_restore with --create.
The reason is that these scenarios failed to preserve the connection
string from the command line; the code felt free to replace that with
just the database name when reconnecting from a pg_dump parallel worker
or after creating the target database. By chance, parallel pg_restore
did not suffer this defect, as long as you didn't say --create.
In practice it seems that the error would be obvious only if the
connstring included essential, non-default SSL or GSS parameters.
This may explain why it took us so long to notice. (It also makes
it very difficult to craft a regression test case illustrating the
problem, since the test would fail in builds without those options.)
Fix by refactoring so that ConnectDatabase always receives all the
relevant options directly from the command line, rather than
reconstructed values. Inject a different database name, when necessary,
by relying on libpq's rules for handling multiple "dbname" parameters.
While here, let's get rid of the essentially duplicate _connectDB
function, as well as some obsolete nearby cruft.
Per bug #16604 from Zsolt Ero. Back-patch to all supported branches.
Discussion: https://postgr.es/m/16604-933f4b8791227b15@postgresql.org
Harmonize behavior by moving responsibility for fsyncing directories down
into slru.c. In 10 and later, only the multixact directories were
missed (see commit 1b02be21), and in older branches all SLRUs were
missed.
Back-patch to all supported releases.
Reviewed-by: Andres Freund <andres@anarazel.de>
Reviewed-by: Michael Paquier <michael@paquier.xyz>
Discussion: https://postgr.es/m/CA%2BhUKGLtsTUOScnNoSMZ-2ZLv%2BwGh01J6kAo_DM8mTRq1sKdSQ%40mail.gmail.com
tsearch_readline() saves the string pointer it returns to the caller
for possible use in the associated error context callback. However,
the caller will usually pfree that string sometime before it next
calls tsearch_readline(), so that there is a window where an ereport
will try to print an already-freed string.
The built-in users of tsearch_readline() happen to all do that pfree
at the bottoms of their loops, so that the window is effectively
empty for them. However, this is not documented as a requirement,
and contrib/dict_xsyn doesn't do it like that, so it seems likely
that third-party dictionaries might have live bugs here.
The practical consequences of this seem pretty limited in any case,
since production builds wouldn't clobber the freed string immediately,
besides which you'd not expect syntax errors in dictionary files
being used in production. Still, it's clearly a bug waiting to bite
somebody.
Fix by pstrdup'ing the string to be saved for the error callback,
and then pfree'ing it next time through. It's been like this for
a long time, so back-patch to all supported branches.
Discussion: https://postgr.es/m/48A4FA71-524E-41B9-953A-FD04EF36E2E7@yesql.se
These two SQL functions are aliases for the same C function, so this
change has no semantic effect. However, because we dropped the
numeric_fac alias in HEAD (commit 76f412ab3), operator definitions
based on that one don't port forward, causing problems for cross-version
upgrade tests based on the regression database.
Patch all active back branches to dodge the problem.
Discussion: https://postgr.es/m/449144.1600439950@sss.pgh.pa.us
After commits 85f6b49c2c and 3ba59ccc89, parallel group members no
longer conflict over relation extension and page locks, so we can
allow parallel inserts, which was not possible earlier.  In those
commits, we forgot to update comments in a few places.
Author: Amit Kapila
Reviewed-by: Robert Haas and Dilip Kumar
Backpatch-through: 13
Discussion: https://postgr.es/m/CAFiTN-tMrQh5FFMPx5aWJ+1gi1H6JxktEhq5mDwCHgnEO5oBkA@mail.gmail.com
For parallel btree scans to work with an array of scan keys, the scan
should reach BTPARALLEL_DONE state once for every distinct combination
of array keys.  This is required to ensure that the parallel workers
don't try to seize blocks at the same time for different scan keys.
We failed to update this state when we discovered that the scan keys
can't be satisfied.
Author: James Hunter
Reviewed-by: Amit Kapila
Tested-by: Justin Pryzby
Backpatch-through: 10, where it was introduced
Discussion: https://postgr.es/m/4248CABC-25E3-4809-B4D0-128E1BAABC3C@amazon.com
Since commit fd5942c18f97 (2012, 9.3-era), walsender has been sending
completion tags for certain replication commands twice -- and they're
not even consistent. Apparently neither libpq nor JDBC have a problem
with it, but it's not kosher.  Fix by removing the EndCommand() call
in the common code path for them all, and inserting calls to
EndReplicationCommand() specifically in those places where it's
needed.
EndReplicationCommand() is a new simple function to send the completion
tag for replication commands. Do this instead of sending a generic
SELECT completion tag for them all, which was also pretty bogus (if
innocuous). While at it, change StartReplication() to use
EndReplicationCommand() instead of pg_puttextmessage().
In commit 2f9661311b83, I failed to realize that replication commands
are not close-enough kin of regular SQL commands, so the
DROP_REPLICATION_SLOT tag I added is undeserved and a type pun. Take it
out.
Backpatch to 13, where the latter commit appeared. The duplicate tag
has been sent since 9.3, but since nothing is broken, it doesn't seem
worth fixing.
Per complaints from Tom Lane.
Discussion: https://postgr.es/m/1347966.1600195735@sss.pgh.pa.us
Commit d114cc53 overlooked the fact that pg_upgrade'd B-Tree indexes
have leaf page high keys whose offset numbers do not match the one from
the copy of the tuple one level up (the copy stored with a downlink for
leaf page's right sibling page). This led to false positive reports of
corruption from bt_index_parent_check() when it was called to verify a
pg_upgrade'd index.
To fix, skip comparing the offset number on pg_upgrade'd B-Tree indexes.
Author: Anastasia Lubennikova <a.lubennikova@postgrespro.ru>
Author: Peter Geoghegan <pg@bowt.ie>
Reported-By: Andrew Bille <andrewbille@gmail.com>
Diagnosed-By: Anastasia Lubennikova <a.lubennikova@postgrespro.ru>
Bug: #16619
Discussion: https://postgr.es/m/16619-aaba10f83fdc1c3c@postgresql.org
Backpatch: 13-, where child check was enhanced.
If a partitioned table's column is already marked NOT NULL, there is
no need to examine its partitions, because we can rely on previous
DDL to have enforced that the child columns are NOT NULL as well.
(Unfortunately, the same cannot be said for traditional inheritance,
so for now we have to restrict the optimization to partitioned tables.)
Hence, we may skip recursing to child tables in this situation.
The reason this case is worth worrying about is that when pg_dump dumps
a partitioned table having a primary key, it will include the requisite
NOT NULL markings in the CREATE TABLE commands, and then add the
primary key as a separate step. The primary key addition generates a
SET NOT NULL as a subcommand, just to be sure. So the situation where
a SET NOT NULL is redundant does arise in the real world.
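In outline, a parallel restore replays something like this (a trimmed
sketch of pg_dump output):
    CREATE TABLE parent (a int NOT NULL, b text)
        PARTITION BY RANGE (a);
    -- ... partitions created, table data restored in parallel ...
    ALTER TABLE ONLY parent
        ADD CONSTRAINT parent_pkey PRIMARY KEY (a);
    -- the implied SET NOT NULL on "a" now skips the partitions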
Skipping the recursion does more than just save a few cycles: it means
that a command such as "ALTER TABLE ONLY partition_parent ADD PRIMARY
KEY" will take locks only on the partition parent table, not on the
partitions. It turns out that parallel pg_restore is effectively
assuming that that's true, and has little choice but to do so because
the dependencies listed for such a TOC entry don't include the
partitions. pg_restore could thus issue this ALTER while data restores
on the partitions are still in progress. Taking unnecessary locks on
the partitions not only hurts concurrency, but can lead to actual
deadlock failures, as reported by Domagoj Smoljanovic.
(A contributing factor in the deadlock is that TRUNCATE on a child
partition wants a non-exclusive lock on the parent. This seems
likewise unnecessary, but the fix for it is more invasive so we
won't consider back-patching it. Fortunately, getting rid of one
of these two poor behaviors is enough to remove the deadlock.)
Although support for partitioned primary keys came in with v11,
this patch is dependent on the SET NOT NULL refactoring done by
commit f4a3fdfbd, so we can only patch back to v12.
Patch by me; thanks to Alvaro Herrera and Amit Langote for review.
Discussion: https://postgr.es/m/VI1PR03MB31670CA1BD9625C3A8C5DD05EB230@VI1PR03MB3167.eurprd03.prod.outlook.com
The code recorded cache invalidation events by zeroing the "localreloid"
field of affected cache entries. However, it's possible for an inval
event to occur even while we have the entry open and locked. So an
ill-timed inval could result in "cache lookup failed for relation 0"
errors, if the worker's code tried to use the cleared field. We can
fix that by creating a separate bool field to record whether the entry
needs to be revalidated. (In the back branches, cram the bool into
what had been padding space, to avoid an ABI break in the somewhat
unlikely event that any extension is looking at this struct.)
Also, rearrange the logic in logicalrep_rel_open so that it
does the right thing in cases where table_open would fail.
We should retry the lookup by name in that case, but we didn't.
The real-world impact of this is probably small. In the first place,
the error conditions are very low probability, and in the second place,
the worker would just exit and get restarted. We only noticed because
in a CLOBBER_CACHE_ALWAYS build, the failure can occur repeatedly,
preventing the worker from making progress. Nonetheless, it's clearly
a bug, and it impedes a useful type of testing; so back-patch to v10
where this code was introduced.
Discussion: https://postgr.es/m/1032727.1600096803@sss.pgh.pa.us
Previously, the tape set's reported block count was based on
nBlocksAllocated, to account for tapes with open write buffers that
may not have made it to the BufFile yet.
That was unnecessary, because callers do not need to get the number of
blocks while a tape has an open write buffer; and it also conflicted
with the preallocation logic added for HashAgg.
Reviewed-by: Peter Geoghegan
Discussion: https://postgr.es/m/ce5af05900fdbd0e9185747825a7423c48501964.camel@j-davis.com
Backpatch-through: 13
ALTER TABLE commands in an extension script are added to an event
trigger command list; but starting with commit b5810de3f4 they do so in
a memory context that's too short-lived, so when execution ends and time
comes to use the entries, they've already been freed.
(This would also be a problem with ALTER TABLE commands in a
multi-command query string, but these serendipitously end up in
PortalContext -- which probably explains why it took so long for this to
be reported.)
Fix by using the memory context specifically set for that, instead.
Backpatch to 13, where the aforementioned commit appeared.
Reported-by: Philippe Beaudoin
Author: Jehan-Guillaume de Rorthais <jgdr@dalibo.com>
Discussion: https://postgr.es/m/20200902193715.6e0269d4@firost