When a partitioned table has an index that doesn't support a constraint,
but a partition has an equivalent index that does, then a DETACH
operation would misbehave: a crash in assertion-enabled systems (because
we fail to find the constraint in the parent that we expect to), or a
broken coninhcount value (-1) in production systems (because we blindly
believe that we've successfully detached the parent).
While we should reject an ATTACH of a partition with such an index, we
have failed to do so in existing releases, so adding an error in stable
releases might break the (unlikely) existing applications that rely on
this behavior. At this point I don't even want to reject them in
master, because it'd break pg_upgrade if such databases exist, and there
would be no easy way to fix existing databases without expensive index
rebuilds.
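For illustration, the problematic shape can be reached via ATTACH along
these lines (a sketch with invented names, not the committed test case):

    CREATE TABLE parted (a int) PARTITION BY LIST (a);
    CREATE UNIQUE INDEX ON parted (a);         -- backs no constraint
    CREATE TABLE part1 (a int NOT NULL);
    ALTER TABLE part1 ADD CONSTRAINT part1_a_key UNIQUE (a);
    ALTER TABLE parted ATTACH PARTITION part1 FOR VALUES IN (1);
    ALTER TABLE parted DETACH PARTITION part1; -- previously misbehaved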
(Later on we could add ALTER TABLE ... ADD CONSTRAINT USING INDEX to
partitioned tables, which would allow the user to fix such patterns. At
that point we could add more restrictions to prevent the problem from
its root.)
Also, add a test case that leaves one table in this condition, so that
we can verify that pg_upgrade continues to work if we later decide to
change the policy on the master branch.
Backpatch to all supported branches.
Co-authored-by: Tender Wang <tndrwang@gmail.com>
Reported-by: Alexander Lakhin <exclusion@gmail.com>
Reviewed-by: Tender Wang <tndrwang@gmail.com>
Reviewed-by: Michael Paquier <michael@paquier.xyz>
Discussion: https://postgr.es/m/18500-62948b6fe5522f56@postgresql.org
This routine can currently only be called from the postmaster in
single-user mode or the checkpointer, but there was no sanity check to
make sure that this was always the case.
This has proved to be useful when hacking the zone (at least to me), to
make sure that the write of the pgstats file happens at shutdown, as
wanted by design, in the correct process context.
Discussion: https://postgr.es/m/ZnEiqAITL-VgZDoY@paquier.xyz
The slot synchronization failed because the catalog_xmin of the local
slot (created during slot synchronization) on the standby was ahead of
the remote slot's.
This happens because the INSERT before slot synchronization results in the
generation of a new xid that could be replicated to the standby. Now
before the xmin of the physical slot on the primary catches up via
hot_standby_feedback, the test has created a logical slot that got some
prior value of catalog_xmin.
To fix this we could try to ensure that the physical slot's catalog_xmin
is caught up to the latest value before creating a logical slot, but we
took a simpler path: move the INSERT to after the logical slot has been
synchronized.
Reported-by: Alexander Lakhin as per buildfarm
Diagnosed-by: Amit Kapila, Hou Zhijie, Alexander Lakhin
Author: Hou Zhijie
Backpatch-through: 17
Discussion: https://postgr.es/m/bde6ac67-69cc-c104-5ab6-dd4f5deadf24@gmail.com
When generating non-parallel nestloop paths for each available outer
path, we always consider materializing the cheapest inner path if
feasible. Similarly, in this patch, we also consider materializing
the cheapest inner path when building partial nestloop paths. This
approach potentially reduces the need to rescan the inner side of a
partial nestloop path for each outer tuple.
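For example (a sketch; the actual plan choice depends on costs and
statistics), a parallel plan can now come out shaped like:

    Gather
      ->  Nested Loop
            ->  Parallel Seq Scan on a
            ->  Materialize
                  ->  Seq Scan on b

where previously the partial nestloop path would rescan b directly for
each outer tuple.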
Author: Tender Wang
Reviewed-by: Richard Guo, Robert Haas, David Rowley, Alena Rybakina
Reviewed-by: Tomasz Rybak, Paul Jungwirth, Yuki Fujii
Discussion: https://postgr.es/m/CAHewXNkPmtEXNfVQMou_7NqQmFABca9f4etjBtdbbm0ZKDmWvw@mail.gmail.com
The comment at the top of pgstat_read_statsfile() mentioned that the
stats are read from the on-disk file into the pgstats dshash. This is
incorrect for fix-numbered stats as these are loaded directly into
shared memory. This commit simplifies the comment to be more general.
Author: Bertrand Drouvot
Discussion: https://postgr.es/m/Zo/eJIHUcqKxeSgv@ip-10-97-1-34.eu-west-3.compute.internal
When creating and initializing a logical slot, the restart_lsn is set
to the latest WAL insertion point (or the latest replay point on
standbys). Subsequently, WAL records are decoded from that point to
find the start point for extracting changes in the
DecodingContextFindStartpoint() function. Since the initial
restart_lsn could be in the middle of a transaction, the start point
must be a consistent point where we won't see the data for partial
transactions.
Previously, when not building a full snapshot, serialized snapshots
were restored, and the SnapBuild jumped to the consistent state even
while finding the start point. Consequently, the slot's restart_lsn
and confirmed_flush could be set to the middle of a transaction. This
could lead to various unexpected consequences. Specifically, there
were reports of logical decoding emitting partial transactions, and
assertion failures occurred because only subtransactions were decoded
without decoding their top-level transaction until decoding the commit
record.
To resolve this issue, the changes prevent restoring the serialized
snapshot and jumping to the consistent state while finding the start
point.
On v17 and HEAD, a flag indicating whether snapshot restores should be
skipped has been added to the SnapBuild struct, and SNAPBUILD_VERSION
has been bumped.
On backbranches, the flag is stored in the LogicalDecodingContext
instead, preserving on-disk compatibility.
Backpatch to all supported versions.
Reported-by: Drew Callahan
Reviewed-by: Amit Kapila, Hayato Kuroda
Discussion: https://postgr.es/m/2444AA15-D21B-4CCE-8052-52C7C2DAFE5C%40amazon.com
Backpatch-through: 12
This partially reverts commit 628c1d1f2c.
It appears that there are differences other than line endings in some
regression tests on Windows. To keep the buildfarm and CI clients
happy, change
this back for now, pending further investigation.
Per reports from Tatsuo Ishii and Nazir Bilal Yavuz.
This new entry type is used for all the fixed-numbered statistics,
making it possible to support custom pluggable stats. In short, we need
to be able to detect more easily if a stats kind exists or not when
reading back its data from the pgstats file without a dependency on the
order of the entries read. The kind ID of the stats is added to the
data written.
The data is written in the same fashion as previously, with the
fixed-numbered stats first and the dshash entries next. The read part
becomes more flexible, loading fixed-numbered stats into shared memory
based on the new entry type found.
Bump PGSTAT_FILE_FORMAT_ID.
Reviewed-by: Bertrand Drouvot
Discussion: https://postgr.es/m/Zot5bxoPYdS7yaoy@paquier.xyz
This new callback gives fixed-numbered stats the possibility to take
actions based on the area of shared memory allocated for them.
This removes from pgstat_shmem.c any knowledge specific to the types
of fixed-numbered stats, and the initializations happen in their own
files. Like b68b29bc8fec, this change is useful to make this area of
the code more pluggable, so that custom fixed-numbered stats can take
actions after their shared memory area is initialized.
Reviewed-by: Bertrand Drouvot
Discussion: https://postgr.es/m/Zot5bxoPYdS7yaoy@paquier.xyz
Presently, the page for predefined roles contains a table with
brief descriptions of what each role allows. Below the table,
there is a separate section with more detailed information about
some of the roles. As the set of predefined roles has grown over
the years, this page has (IMHO) become less readable.
This commit attempts to improve the predefined roles documentation
by abandoning the table in favor of listing each role with its own
complete description, similar to how we document GUCs. Besides
merging the information that was split between the table and the
section below it, this commit also alphabetizes the roles. The
alphabetization is imperfect because some of the roles are grouped
(e.g., pg_read_all_data and pg_write_all_data), and we order such
groups by the first role mentioned, but that seemed like a better
choice than breaking the groups apart. Finally, this commit makes
some stylistic adjustments to the text.
Reviewed-by: David G. Johnston, Robert Haas
Discussion: https://postgr.es/m/ZmtM-4-eRtq8DRf6%40nathan
Formerly, the computation of the bucket index involved calling
div_var() with a scale determined by select_div_scale(), and then
taking the floor of the result. That involved computing anything from
16 to 1000 digits after the decimal point, only for floor_var() to
throw them away. In addition, the quotient was computed with rounding
in the final digit, which meant that in rare cases the whole result
could round up to the wrong bucket, and could exceed count. Thus it
was also necessary to clamp the result to the range [1, count], though
that didn't prevent the result being in the wrong internal bucket.
Instead, compute the quotient using floor division, which guarantees
the correct result, as specified by the SQL spec, and doesn't need to
be clamped. This is both much simpler and more efficient, since it no
longer computes any quotient digits after the decimal point.
In addition, it is not necessary to have separate code to handle
reversed bounds, since the signs cancel out when dividing.
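For example (the formula shown is a paraphrase of the new logic; the
query is the documentation's example):

    -- bucket = floor((operand - low) * count / (high - low)) + 1
    SELECT width_bucket(5.35, 0.024, 10.06, 5);  -- returns 3

with operands below the lower bound still mapped to 0, and those at or
above the upper bound to count + 1.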
As with b0e9e4d76c and a2a0c7c29e, no back-patch.
Dean Rasheed, reviewed by Joel Jacobson.
Discussion: https://postgr.es/m/CAEZATCVbJH%2BLE9EXW8Rk3AxLe%3DjbOk2yrT_AUJGGh5Rah6zoeg%40mail.gmail.com
Test result files might be checked out using Unix or Windows style line
endings, depending on git flags, so on Windows we use the
--strip-trailing-cr flag to tell diff to ignore line-ending
differences.
The flag is added to the diff invocation for the test_json_parser module
tests and the pg_bsd_indent tests. In pg_regress.c we replace the
current use of the "-w" flag, which ignores all whitespace differences,
with this one, which ignores only line-ending differences.
Discussion: https://postgr.es/m/20240707052030.r77hbdkid3mwksop@awork3.anarazel.de
The I/O timing information collected when track_io_timing is
enabled is now documented to appear in the pg_stat_io view,
which was previously not mentioned.
This commit also enhances the description of track_io_timing
to clarify that it monitors not only block read and write
but also block extend and fsync operations. Additionally,
the description of track_wal_io_timing has been improved
to mention both WAL write and WAL fsync monitoring.
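For example, with track_io_timing enabled, the per-operation timings
can be inspected with something like:

    SELECT backend_type, object, context,
           read_time, write_time, extend_time, fsync_time
      FROM pg_stat_io;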
Backpatch to v16 where pg_stat_io was added.
Author: Hajime Matsunaga
Reviewed-by: Melanie Plageman, Nazir Bilal Yavuz, Fujii Masao
Discussion: https://postgr.es/m/TYWPR01MB10742EE4A6F34C33061429D38A4D52@TYWPR01MB10742.jpnprd01.prod.outlook.com
This patch modifies the pg_get_acl() function to accept a third argument
called "objsubid", bringing it on par with similar functions in this
area like pg_describe_object(). This enables the retrieval of ACLs for
relation attributes when scanning dependencies.
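For example (names invented), a column's ACL can now be retrieved by
passing the attribute number as objsubid:

    GRANT SELECT (a) ON tab TO alice;
    SELECT pg_get_acl('pg_class'::regclass, 'tab'::regclass::oid, 1);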
Bump catalog version.
Author: Joel Jacobson
Discussion: https://postgr.es/m/f2539bff-64be-47f0-9f0b-df85d3cc0432@app.fastmail.com
libxml2 2.13 has an entirely different rule than earlier versions
about when to emit "chunk is not well balanced" errors. This
causes regression test output discrepancies for three test cases
that formerly provoked that error (along with others) and now don't.
Closer inspection shows that at least in 2.13, this error is pretty
useless because it can only be emitted after some other more-relevant
error. So let's get rid of the cross-version discrepancy by just
suppressing it. In case some older libxml2 version is capable of
emitting this error by itself, suppress only when some other error
has already been captured.
Like 066e8ac6e and 6082b3d5d, this will need to be back-patched,
but let's check the results in HEAD first. (The patch for xml_2.out,
in particular, is blind since I can't test it here.)
Erik Wienhold and Tom Lane, per report from Frank Streitzig.
Discussion: https://postgr.es/m/trinity-b0161630-d230-4598-9ebc-7a23acdb37cb-1720186432160@3c-app-gmx-bap25
Discussion: https://postgr.es/m/trinity-361ba18b-541a-4fe7-bc63-655ae3a7d599-1720259822452@3c-app-gmx-bs01
Since commit 3a9b18b309, roles with privileges of pg_signal_backend
cannot signal autovacuum workers. Many users treated the ability
to signal autovacuum workers as a feature instead of a bug, so we
are reintroducing it via a new predefined role. Having privileges
of this new role, named pg_signal_autovacuum_worker, only permits
signaling autovacuum workers. It does not permit signaling other
types of superuser backends.
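For example (role name invented):

    GRANT pg_signal_autovacuum_worker TO alice;
    -- alice may now cancel or terminate autovacuum workers, e.g.:
    SELECT pg_terminate_backend(pid)
      FROM pg_stat_activity
     WHERE backend_type = 'autovacuum worker';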
Bumps catversion.
Author: Kirill Reshke
Reviewed-by: Anthony Leung, Michael Paquier, Andrey Borodin
Discussion: https://postgr.es/m/CALdSSPhC4GGmbnugHfB9G0%3DfAxjCSug_-rmL9oUh0LTxsyBfsg%40mail.gmail.com
Previously, the comment incorrectly stated that libpqrcv_check_conninfo()
returns true or false based on the connection string check.
However, this function actually has a void return type and
raises an error if the check fails.
Author: Rintaro Ikeda
Reviewed-by: Jelte Fennema-Nio, Fujii Masao
Discussion: https://postgr.es/m/6a1ca81b27fec4da0ccdfaaaec787982@oss.nttdata.com
When either input has a small number of digits, and the exact product
is requested, the speed of numeric multiplication can be increased
significantly by using a faster direct multiplication algorithm. This
works by fully computing each result digit in turn, starting with the
least significant, and propagating the carry up. This saves cycles by
not requiring a temporary buffer to store digit products, not making
multiple passes over the digits of the longer input, and not requiring
separate carry-propagation passes.
For now, this is used when the shorter input has 1-4 NBASE digits (up
to 13-16 decimal digits), and the longer input is of any size, which
covers a lot of common real-world cases. Also, the relative benefit
increases as the size of the longer input increases.
Possible future work would be to try extending the technique to larger
numbers of digits in the shorter input.
Joel Jacobson and Dean Rasheed.
Discussion: https://postgr.es/m/44d2ffca-d560-4919-b85a-4d07060946aa@app.fastmail.com
1. Remove the keyword SELECT from the examples to be consistent
with the examples of other JSON-related functions listed on the
same page.
2. Add <synopsis> tags around the functions' syntax definition
3. Capitalize function names in the syntax synopsis and the examples
4. Use <itemizedlist> lists for dividing the descriptions of
individual functions into bullet points
5. Significantly rewrite the description of wrapper clauses of
JSON_QUERY
6. Significantly rewrite the descriptions of ON ERROR / EMPTY
clauses of JSON_QUERY() and JSON_VALUE() functions
7. Add a note about how JSON_VALUE() and JSON_QUERY() differ when
returning a JSON null result (illustrated in the example below)
8. Move the description of the PASSING clause from the descriptions
of individual functions into the top paragraph
And other miscellaneous text improvements, typo fixes.
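To illustrate item 7 above (a sketch):

    SELECT JSON_QUERY(jsonb '{"a": null}', '$.a');  -- JSON null ('null')
    SELECT JSON_VALUE(jsonb '{"a": null}', '$.a');  -- SQL NULL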
Suggested-by: Thom Brown <thom@linux.com>
Suggested-by: David G. Johnston <david.g.johnston@gmail.com>
Reviewed-by: Jian He <jian.universality@gmail.com>
Reviewed-by: Erik Rijkers <er@xs4all.nl>
Discussion: https://postgr.es/m/CAA-aLv7Dfy9BMrhUZ1skcg=OdqysWKzObS7XiDXdotJNF0E44Q@mail.gmail.com
Discussion: https://postgr.es/m/CAKFQuwZNxNHuPk44zDF7z8qZec1Aof10aA9tWvBU5CMhEKEd8A@mail.gmail.com
Commit 0fdab27ad6 changed the code to wait for WAL to be available before
determining the timeline but forgot to move the failure check.
This change is to make the related code easier to understand and
enhance; otherwise, there is no bug in the current code.
In passing, improve the nearby comments to explain why we determine
am_cascading_walsender after waiting for the required WAL.
Author: Peter Smith
Reviewed-by: Bertrand Drouvot, Amit Kapila
Discussion: https://postgr.es/m/CAHut+PvqX49fusLyXspV1Mmd_EekPtXG0oT146vZjcb9XDvNgw@mail.gmail.com
This is similar to 9004abf6206e, but this time for the write part of the
stats file. The code is changed so that, rather than referring to
individual members of PgStat_Snapshot in an order based on their
PgStat_Kind value, a loop based on pgstat_kind_infos is used to retrieve
the contents to write from the snapshot structure, for a size of
PgStat_KindInfo's shared_data_len.
This requires the addition to PgStat_KindInfo of an offset to track the
location of each fixed-numbered stats in PgStat_Snapshot. This change
is useful to make this area of the code more easily pluggable, and
reduces the knowledge of specific fixed-numbered kinds in pgstat.c.
Reviewed-by: Bertrand Drouvot
Discussion: https://postgr.es/m/Zot5bxoPYdS7yaoy@paquier.xyz
036bdcec9 added code to perform some verification on portions of the
planner costs in EXPLAIN ANALYZE but failed to consider that some
buildfarm animals such as bushmaster and taipan are running very low jit
thresholds. This caused these animals to fail as they were outputting
JIT-related details in EXPLAIN ANALYZE for the newly added tests.
Here we avoid that by disabling JIT for the plans in question.
Discussion: https://postgr.es/m/CAApHDvpxV4rrO3XUCgGS5N9Wg6f2r0ojJPD2tX2FRV-o9sRTJA@mail.gmail.com
Previously, pg_wal_summary_contents() had two issues,
causing discrepancies between pg_wal_summary_contents()
and the pg_walsummary command on the same WAL summary file:
(1) It did not emit the limit block when that's the only data for
a particular relation fork.
(2) It emitted the same limit block multiple times if the list of
block numbers was long enough.
This commit fixes these issues.
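With the fix, both interfaces should agree on the same summary files;
the SQL side of such a comparison could look like (a sketch):

    SELECT c.*
      FROM pg_available_wal_summaries() s,
           LATERAL pg_wal_summary_contents(s.tli, s.start_lsn,
                                           s.end_lsn) c;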
Backpatch to v17 where pg_wal_summary_contents() was added.
Author: Fujii Masao
Reviewed-by: Robert Haas
Discussion: https://postgr.es/m/90980ee6-2da6-42f6-a7b0-b7bae62ae279@oss.nttdata.com
Nodes like Memoize report the cache stats for each parallel worker, so it
makes sense to show the exact and lossy pages in Parallel Bitmap Heap Scan
in a similar way. Likewise, Sort shows the method and memory used for
each worker.
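For example, EXPLAIN ANALYZE output can now include per-worker lines
shaped like (illustrative and abridged; the numbers are invented):

    Parallel Bitmap Heap Scan on t
      Heap Blocks: exact=2443
      Worker 0:  Heap Blocks: exact=2342
      Worker 1:  Heap Blocks: exact=2413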
There was some discussion on whether the leader stats should include the
totals for each parallel worker or not. I did some analysis on this to
see what other parallel node types do and it seems only Parallel Hash does
anything like this. All the rest, per what's supported by
ExecParallelRetrieveInstrumentation(), are consistent with each other.
Author: David Geier <geidav.pg@gmail.com>
Author: Heikki Linnakangas <hlinnaka@iki.fi>
Author: Donghang Lin <donghanglin@gmail.com>
Author: Alena Rybakina <lena.ribackina@yandex.ru>
Author: David Rowley <dgrowleyml@gmail.com>
Reviewed-by: Dmitry Dolgov <9erthalion6@gmail.com>
Reviewed-by: Michael Christofides <michael@pgmustard.com>
Reviewed-by: Robert Haas <robertmhaas@gmail.com>
Reviewed-by: Dilip Kumar <dilipbalaut@gmail.com>
Reviewed-by: Tomas Vondra <tomas.vondra@enterprisedb.com>
Reviewed-by: Melanie Plageman <melanieplageman@gmail.com>
Reviewed-by: Donghang Lin <donghanglin@gmail.com>
Reviewed-by: Masahiro Ikeda <Masahiro.Ikeda@nttdata.com>
Discussion: https://postgr.es/m/b3d80961-c2e5-38cc-6a32-61886cdf766d%40gmail.com
This provides the planner with row estimates for
generate_series(TIMESTAMP, TIMESTAMP, INTERVAL),
generate_series(TIMESTAMPTZ, TIMESTAMPTZ, INTERVAL) and
generate_series(TIMESTAMPTZ, TIMESTAMPTZ, INTERVAL, TEXT) when the input
parameter values can be estimated during planning.
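For example, the row count is now estimated from the arguments when
they are known at plan time (previously a default estimate was used):

    EXPLAIN SELECT * FROM generate_series('2024-01-01'::timestamptz,
                                          '2024-01-02'::timestamptz,
                                          '1 hour');
    -- Function Scan ... rows=25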
Author: David Rowley
Reviewed-by: jian he <jian.universality@gmail.com>
Discussion: https://postgr.es/m/CAApHDvrBE%3D%2BASo_sGYmQJ3GvO8GPvX5yxXhRS%3Dt_ybd4odFkhQ%40mail.gmail.com
This reverts commit e9f15bc9. Instead of a hacky solution that didn't
work on Windows, we avoid trying to move the directory possibly across
drives, and instead remove it and recreate it in the new location.
Discussion: https://postgr.es/m/20240707070243.sb77kp4ubowauctz@awork3.anarazel.de
Backpatch to release 14 like the previous patch.
While this strategy is ordinarily quite costly because it requires
performing two checkpoints, testing shows that it tends to be a
faster choice than WAL_LOG during pg_upgrade, presumably because
fsync is turned off. Furthermore, we can skip the checkpoints
altogether because the problems they are intended to prevent don't
apply to pg_upgrade. Instead, we just need to CHECKPOINT once in
the new cluster after making any changes to template0 and before
restoring the rest of the databases. This ensures that said
template0 changes are written out to disk prior to creating the
databases via FILE_COPY.
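The strategy itself is the one exposed at the SQL level, e.g.:

    CREATE DATABASE mydb TEMPLATE = template0 STRATEGY = FILE_COPY;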
Co-authored-by: Matthias van de Meent
Reviewed-by: Ranier Vilela, Dilip Kumar, Robert Haas, Michael Paquier
Discussion: https://postgr.es/m/Zl9ta3FtgdjizkJ5%40nathan
If we choose ports in the range typically used for ephemeral ports
there is a danger of encountering a port conflict due to a race
condition between the time we choose the port and when we start the
server. Therefore, select ports in a range below that typically used
to allocate ephemeral ports, but higher than the range typically used
by well known services.
Author: Jelte Fennema-Nio, with some editing by me.
Discussion: https://postgr.es/m/d6ee8761-39d1-0033-1afb-d5a57ee056f2@gmail.com
Backpatch to all live branches (12 and up)
Currently they are started in unix socket mode in most cases, and then
converted to run in TCP mode. This can result in port collisions, and
there is no virtue in starting in unix socket mode, so start as we mean
to go on.
Discussion: https://postgr.es/m/d6ee8761-39d1-0033-1afb-d5a57ee056f2@gmail.com
Backpatch to all live branches (12 and up).
xmlParseInNodeContext has basically the same functionality with
a different API: we have to supply an xmlNode that's attached to a
document rather than just the document. That's not hard though.
The benefits are two:
* Early 2.13.x releases of libxml2 contain a bug that causes
xmlParseBalancedChunkMemory to return the wrong status value in some
cases. This breaks our regression tests. While that bug is now fixed
upstream and will probably never be seen in any production-oriented
distro, it is currently a problem on some more-bleeding-edge-friendly
platforms.
* xmlParseBalancedChunkMemory is considered to depend on libxml2's
semi-deprecated SAX1 APIs, and will go away when and if they do.
There may already be libxml2 builds out there that lack this function.
So there are both short- and long-term reasons to make this change.
While here, avoid allocating an xmlParserCtxt in DOCUMENT parse mode,
since that code path is not going to use it.
Like 066e8ac6e, this will need to be back-patched. This is just a
trial commit to see if the buildfarm agrees that we can use
xmlParseInNodeContext unconditionally.
Erik Wienhold and Tom Lane, per report from Frank Streitzig.
Discussion: https://postgr.es/m/trinity-b0161630-d230-4598-9ebc-7a23acdb37cb-1720186432160@3c-app-gmx-bap25
Discussion: https://postgr.es/m/trinity-361ba18b-541a-4fe7-bc63-655ae3a7d599-1720259822452@3c-app-gmx-bs01
The numeric round() and trunc() functions clamp the scale argument to
the range between +/- NUMERIC_MAX_RESULT_SCALE (2000), which is much
smaller than the actual allowed range of type numeric. As a result,
they return incorrect results when asked to round/truncate more than
2000 digits before or after the decimal point.
Fix by using the correct upper and lower scale limits based on the
actual allowed (and documented) range of type numeric.
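An illustrative case (previously the scale was clamped to 2000, so the
value was rounded away):

    SELECT round(1e-2005::numeric, 2005);  -- now 1e-2005, was 0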
While at it, use the new NUMERIC_WEIGHT_MAX constant instead of
SHRT_MAX in all other overflow checks, and fix a comment thinko in
power_var() introduced by e54a758d24 -- the minimum value of
ln_dweight is -NUMERIC_DSCALE_MAX (-16383), not -SHRT_MAX, though this
doesn't affect the point being made in the comment, that the resulting
local_rscale value may exceed NUMERIC_MAX_DISPLAY_SCALE (1000).
Back-patch to all supported branches.
Dean Rasheed, reviewed by Joel Jacobson.
Discussion: https://postgr.es/m/CAEZATCXB%2BrDTuMjhK5ZxcouufigSc-X4tGJCBTMpZ3n%3DxxQuhg%40mail.gmail.com
a6417078c414 has introduced as project policy that new features
committed during the development cycle should use new OIDs in the
[8000,9999] range.
4564f1cebd43 did not respect that rule, so let's renumber pg_get_acl()
to use an OID in the correct range.
Bump catalog version.
Both of these counters were using the "long" data type. On MSVC that's
a 32-bit type. On modern hardware, I was able to demonstrate that we can
wrap those counters with a query that only takes 15 minutes to run.
This issue may manifest itself either by not showing the values of the
counters because they've wrapped and are less than zero, resulting in
them being filtered by the > 0 checks in show_tidbitmap_info(), or by
bogus numbers being displayed, i.e. the actual number modulo 2^32.
Widen these counters to uint64.
Discussion: https://postgr.es/m/CAApHDvpS_97TU+jWPc=T83WPp7vJa1dTw3mojEtAVEZOWh9bjQ@mail.gmail.com
For an inner_unique join, we always assume that the executor will stop
scanning for matches after the first match. Therefore, for a mergejoin
that is inner_unique and whose mergeclauses are sufficient to identify a
match, we set the skip_mark_restore flag to true, indicating that the
executor need not do mark/restore calls. However, merge-right-anti-join
did not get this memo and continued scanning the inner side for matches
after the first match. If there are duplicates in the outer scan, we
may incorrectly skip matching some inner tuples, which can lead to wrong
results.
Here we fix this issue by ensuring that merge-right-anti-join also
advances to the next outer tuple after the first match in inner_unique
cases. This also saves cycles by avoiding unnecessary scanning of inner
tuples after the first match.
Although hash-right-anti-join does not suffer from this wrong results
issue, we apply the same change to it as well, to help save cycles for
the same reason.
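An illustrative reproducer (a sketch; obtaining a Merge Right Anti Join
plan may require realistic table sizes and disabling other join
methods):

    CREATE TABLE t1 (a int UNIQUE);
    CREATE TABLE t2 (a int);
    INSERT INTO t1 VALUES (1), (2);
    INSERT INTO t2 VALUES (1), (1);  -- duplicates on the outer side
    SELECT * FROM t1
     WHERE NOT EXISTS (SELECT 1 FROM t2 WHERE t2.a = t1.a);
    -- correct result: only (2); with the bug, inner tuples whose
    -- matches were skipped could wrongly be returned as unmatched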
Per bug #18522 from Antti Lampinen, and bug #18526 from Feliphe Pozzer.
Back-patch to v16 where right-anti-join was introduced.
Author: Richard Guo
Discussion: https://postgr.es/m/18522-c7a8956126afdfd0@postgresql.org
This acts as a revert of b83747a8a65b and 9886744a361b. As pointed out
by Noah, HEAD and REL_17_STABLE are in a weird state where the code
paths adding /D would limit the spawn of child processes, but we still
have code paths where the spawn of more than one child process(es) would
be possible.
Let's remove these /D switches for now, to bring back the code into a
state consistent with how autorun is configured on a Windows host.
Reported-by: Noah Misch
Discussion: https://postgr.es/m/20240630021211.f3.nmisch@google.com
Backpatch-through: 17
590b045c3 made it so tuplestore.c would store tuples inside a
generation.c memory context. After fixing a bug report in 97651b013,
it seems best not to make BufFile related allocations in that context.
Let's keep it just for tuple data.
This adjusts the code to switch to the Tuplestorestate.context's parent,
which is the MemoryContext that tuplestore_begin_common() was called in.
It does not seem worth adding a new field in Tuplestorestate to store
this when we can access it by looking at the Tuplestorestate's
context's parent.
Discussion: https://postgr.es/m/CAApHDvqFt_CdJtSr+E9YLZb7jZAyRCy3hjQ+ktM+dcOFVq-xkg@mail.gmail.com
This only affects MEMORY_CONTEXT_CHECKING builds.
This fixes an off-by-one issue in GenerationRealloc(), in the
fast-path code that tries to reuse the existing allocation if the
existing chunk is >= the new requested size. The code there thought it
was always ok to use the existing chunk, but when oldsize == size there
isn't enough space to store the sentinel byte. If both sizes matched
exactly, set_sentinel() would overwrite the first byte beyond the chunk
and subsequent GenerationRealloc() calls could then fail the
Assert(chunk->requested_size < oldsize) check which is trying to ensure
the chunk is large enough to store the sentinel.
The same issue does not exist in aset.c as the sentinel checking code
only adds a sentinel byte if there's enough space in the chunk.
Reported-by: Alexander Lakhin <exclusion@gmail.com>
Discussion: https://postgr.es/m/49275921-7b39-41af-5eb8-97b50ce3312e@gmail.com
Backpatch-through: 16, where the problem was introduced by 0e480385e