Until now, REINDEX was not able to work with partitioned tables and
indexes, forcing users to reindex partitions one by one. This extends
REINDEX INDEX and REINDEX TABLE so that they accept respectively a
partitioned index and a partitioned table, and reindex all the
partitions attached to them that have physical storage (foreign
tables, partitioned tables and partitioned indexes are skipped).
This shares some logic with schema and database REINDEX, as each
partition gets processed in its own transaction after building the
list of relations to work on. This choice has the advantage of
limiting the number of invalid indexes to a single partition with
REINDEX CONCURRENTLY in the event of a cancellation or in-flight
failure, as the only indexes handled at once in a single REINDEX
CONCURRENTLY loop are the ones of the partition being worked on.
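For illustration, the per-partition processing amounts to roughly the
following (a simplified sketch of the logic in
src/backend/commands/indexcmds.c; the helper name is made up, and
permission checks, progress reporting and error handling are omitted):

    /*
     * Simplified sketch: reindex each leaf partition in its own
     * transaction, as described above.  The list of partitions was
     * built by the caller in a first transaction.
     */
    static void
    reindex_partitions_sketch(List *partitions, int options, bool concurrent)
    {
        ListCell   *lc;

        /* finish the transaction that built the list of partitions */
        PopActiveSnapshot();
        CommitTransactionCommand();

        foreach(lc, partitions)
        {
            Oid         partoid = lfirst_oid(lc);

            StartTransactionCommand();
            /* functions in index expressions may want a snapshot set */
            PushActiveSnapshot(GetTransactionSnapshot());

            if (concurrent)
                ReindexRelationConcurrently(partoid, options);
            else
                reindex_relation(partoid, REINDEX_REL_PROCESS_TOAST, options);

            PopActiveSnapshot();
            CommitTransactionCommand();
        }

        /* return to the caller inside a transaction, as it expects */
        StartTransactionCommand();
    }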
Isolation tests are added to emulate some cases I bumped into while
developing this feature, particularly around the concurrent drop of a
leaf partition being reindexed. However, the coverage is rather
limited, as LOCK would cause REINDEX to block in the first transaction
that builds the list of partitions.
Because of its multi-transaction nature, this new flavor cannot run in
a transaction block, similarly to REINDEX SCHEMA, SYSTEM and DATABASE.
Author: Justin Pryzby, Michael Paquier
Reviewed-by: Anastasia Lubennikova
Discussion: https://postgr.es/m/db12e897-73ff-467e-94cb-4af03705435f.adger.lj@alibaba-inc.com
Tomas Vondra observed that the IO behavior for HashAgg tends to be
worse than for Sort. Penalize HashAgg IO costs accordingly.
Also, account for the CPU effort of spilling the tuples and reading
them back.
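Schematically, the adjustment is along these lines (an illustrative
sketch only; the names and constants below are placeholders, not the
exact formula committed to costsize.c):

    #include "postgres.h"
    #include "optimizer/cost.h"

    /*
     * Illustrative only: charge HashAgg's spill I/O at a premium over
     * the sequential rate assumed for Sort, and add per-tuple CPU cost
     * for writing spilled tuples out and reading them back.
     */
    static double
    hashagg_spill_cost_sketch(double pages_written, double pages_read,
                              double tuples_spilled)
    {
        double      io_cost;
        double      cpu_cost;

        /* penalize HashAgg I/O relative to Sort's sequential pattern */
        io_cost = (pages_written + pages_read) * seq_page_cost * 2.0;

        /* CPU effort of spilling the tuples and reading them back */
        cpu_cost = tuples_spilled * 2.0 * cpu_tuple_cost;

        return io_cost + cpu_cost;
    }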
Discussion: https://postgr.es/m/20200906212112.nzoy5ytrzjjodpfh@development
Reviewed-by: Tomas Vondra
Backpatch-through: 13
Experimentation shows that clang will auto-vectorize the critical
multiplication loop if the termination condition is written "i2 < limit"
rather than "i2 <= limit". This seems unbelievably stupid, but I've
reproduced it on both clang 9.0.1 (RHEL8) and 11.0.3 (macOS Catalina).
gcc doesn't care, so tweak the code to do it that way.
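For reference, the shape that matters looks about like this (a
stand-in for the mul_var() inner loop, not the actual numeric.c code):

    #include <stdint.h>

    /* clang auto-vectorizes this form ... */
    void
    mul_step_lt(int32_t *dig, const int16_t *digits,
                int16_t multiplier, int limit)
    {
        for (int i2 = 0; i2 < limit; i2++)
            dig[i2] += (int32_t) digits[i2] * multiplier;
    }

    /* ... but not this equivalent one, while gcc vectorizes both */
    void
    mul_step_le(int32_t *dig, const int16_t *digits,
                int16_t multiplier, int limit)
    {
        for (int i2 = 0; i2 <= limit - 1; i2++)
            dig[i2] += (int32_t) digits[i2] * multiplier;
    }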
Discussion: https://postgr.es/m/CAJ3gD9evtA_vBo+WMYMyT-u=keHX7-r8p2w7OSRfXf42LTwCZQ@mail.gmail.com
This allows us to skip some stat calls, by extending commit 861c6e7c to
cover Windows systems.
Author: Juan José Santamaría Flecha <juanjo.santamaria@gmail.com>
Reviewed-by: Alvaro Herrera <alvherre@2ndquadrant.com>
Reviewed-by: Andres Freund <andres@anarazel.de>
Reviewed-by: Magnus Hagander <magnus@hagander.net>
Reviewed-by: Thomas Munro <thomas.munro@gmail.com>
Discussion: https://postgr.es/m/CA%2BhUKG%2BFzxupGGN4GpUdbzZN%2Btn6FQPHo8w0Q%2BAPH5Wz8RG%2Bww%40mail.gmail.com
Fix one instance of a table cell overflow by adding a zero-width
space. The visual impact of this is minimal, but since this is
currently the only such case reported by FOP ("contents of ... exceed
the available area"), it seems worth getting rid of.
Some kernels can tell us the type of a "dirent", so we can avoid a call
to stat() or lstat() in many cases. Define a new function
get_dirent_type() to contain that logic, for use by the backend and
frontend versions of walkdir(), and perhaps other callers in future.
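A minimal sketch of the idea (a hypothetical, simplified version; the
real get_dirent_type() handles more cases, such as symlink chasing and
the Windows port):

    #include <dirent.h>
    #include <sys/stat.h>

    typedef enum
    {
        ENTRY_UNKNOWN,
        ENTRY_REG,
        ENTRY_DIR,
        ENTRY_LNK
    } entry_type;

    /* Trust d_type when the kernel provides it; fall back to lstat(). */
    static entry_type
    entry_type_sketch(const char *path, const struct dirent *de)
    {
        struct stat st;

    #if defined(DT_REG) && defined(DT_DIR) && defined(DT_LNK)
        switch (de->d_type)
        {
            case DT_REG:
                return ENTRY_REG;
            case DT_DIR:
                return ENTRY_DIR;
            case DT_LNK:
                return ENTRY_LNK;
            default:
                break;          /* DT_UNKNOWN etc: ask lstat() instead */
        }
    #endif

        if (lstat(path, &st) < 0)
            return ENTRY_UNKNOWN;
        if (S_ISREG(st.st_mode))
            return ENTRY_REG;
        if (S_ISDIR(st.st_mode))
            return ENTRY_DIR;
        if (S_ISLNK(st.st_mode))
            return ENTRY_LNK;
        return ENTRY_UNKNOWN;
    }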
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Reviewed-by: Juan José Santamaría Flecha <juanjo.santamaria@gmail.com>
Discussion: https://postgr.es/m/CA%2BhUKG%2BFzxupGGN4GpUdbzZN%2Btn6FQPHo8w0Q%2BAPH5Wz8RG%2Bww%40mail.gmail.com
The previous version of the docs said that files are rewritten,
implying that a second copy of each file gets created; in fact, each
file is updated in place.
Author: Michael Banck
Reviewed-by: Daniel Gustafsson, Michael Paquier
Discussion: https://postgr.es/m/858086b6a42fb7d17995b6175856f7e7ec44d0a2.camel@credativ.de
Backpatch-through: 12
This adds functionality tests for streaming of in-progress
subtransactions, streaming of transactions containing rollbacks to
savepoints, and streaming of transactions containing DDL.
Author: Tomas Vondra, Amit Kapila and Dilip Kumar
Reviewed-by: Dilip Kumar
Discussion: https://postgr.es/m/688b0b7f-2f6c-d827-c27b-216a8e3ea700@2ndquadrant.com
Compile numeric.c with -ftree-vectorize where available, and adjust
the innermost loop of mul_var() so that it is amenable to being
auto-vectorized. (Mainly, that involves making it process the arrays
left-to-right not right-to-left.)
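Concretely, the restructuring is along these lines (an illustrative
stand-in, not the exact numeric.c code):

    #include <stdint.h>

    /* Before: walk the digit arrays right-to-left; correct, but the
     * accumulation is not auto-vectorized. */
    static void
    accumulate_rtl(int32_t *dig, const int16_t *var2digits,
                   int var2ndigits, int16_t var1digit)
    {
        for (int i2 = var2ndigits - 1; i2 >= 0; i2--)
            dig[i2] += (int32_t) var1digit * var2digits[i2];
    }

    /* After: the same arithmetic left-to-right, which -ftree-vectorize
     * can turn into SIMD code. */
    static void
    accumulate_ltr(int32_t *dig, const int16_t *var2digits,
                   int var2ndigits, int16_t var1digit)
    {
        for (int i2 = 0; i2 < var2ndigits; i2++)
            dig[i2] += (int32_t) var1digit * var2digits[i2];
    }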
Applying -ftree-vectorize actually makes numeric.o smaller, at least
with my compiler (gcc 8.3.1 on x86_64), and it's a little faster too.
Independently of that, fixing the inner loop to be vectorizable also
makes things a bit faster. But doing both is a huge win for
multiplications with lots of digits. For me, the numeric regression
test is the same speed to within measurement noise, but numeric_big
is a full 45% faster.
We also looked into applying -funroll-loops, but that makes numeric.o
bloat quite a bit, and the additional speed improvement is very
marginal.
Amit Khandekar, reviewed and edited a little by me
Discussion: https://postgr.es/m/CAJ3gD9evtA_vBo+WMYMyT-u=keHX7-r8p2w7OSRfXf42LTwCZQ@mail.gmail.com
Replace CFLAGS_VECTOR with CFLAGS_UNROLL_LOOPS and CFLAGS_VECTORIZE,
allowing us to distinguish whether we want to apply -funroll-loops,
-ftree-vectorize, or both to a particular source file.  Up to now
the only consumer of the symbol has been checksum.c, which wants
both, so there was no need to distinguish; but that's about to
change.
Amit Khandekar, reviewed and edited a little by me
Discussion: https://postgr.es/m/CAJ3gD9evtA_vBo+WMYMyT-u=keHX7-r8p2w7OSRfXf42LTwCZQ@mail.gmail.com
Refactor replace_string() to use a StringInfo for the modifiable
string argument. This allows the string to be of indefinite size
initially and/or grow substantially during replacement. The previous
logic in convert_sourcefiles_in() had a hard-wired limit of 1024
bytes on any line in input/*.sql or output/*.out files.  While we've
not had reports of trouble yet, it'd surely have bitten us someday.
This also fixes replace_string() so it won't get into an infinite
loop if the string-to-be-replaced is a substring of the replacement.
That's unlikely to happen in current usage, but the function surely
shouldn't depend on it.
Also fix ecpg_filter() to use a StringInfo and thereby remove its
hard limit of 300 bytes on the length of an ecpg source line.
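The loop-proofing works by resuming the search just past the
substituted text; roughly like this (a simplified sketch, not the
exact pg_regress.c code):

    #include "postgres_fe.h"
    #include "lib/stringinfo.h"

    static void
    replace_string_sketch(StringInfo string, const char *replace,
                          const char *replacement)
    {
        int         pos = 0;
        char       *ptr;
        StringInfoData suffix;

        initStringInfo(&suffix);

        while ((ptr = strstr(string->data + pos, replace)) != NULL)
        {
            /* save everything after the match */
            resetStringInfo(&suffix);
            appendStringInfoString(&suffix, ptr + strlen(replace));

            /* truncate at the match and splice in the replacement */
            string->len = ptr - string->data;
            string->data[string->len] = '\0';
            appendStringInfoString(string, replacement);

            /* resume searching after the replacement, then re-add the tail */
            pos = string->len;
            appendStringInfoString(string, suffix.data);
        }

        pfree(suffix.data);
    }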
Asim Rama Praveen and Georgios Kokolatos,
reviewed by Alvaro Herrera and myself
Discussion: https://postgr.es/m/y9Dlk2QhiZ39DhaB1QE9mgZ95HcOQKZCNtGwN7XCRKMdBRBnX_0woaRUtTjloEp4PKA6ERmcUcfq3lPGfKPOJ5xX2TV-5WoRYyySeNHRzdw=@protonmail.com
Letting the caller provide a StringInfo to read into is helpful when
the caller needs to merge lines or otherwise modify the data after
it's been read. Notably, now the code added by commit 8f8154a50
can use pg_get_line_append() instead of having its own copy of that
logic. A follow-on commit will also make use of this.
Also, since StringInfo buffers are a minimum of 1KB long, blindly
using pg_get_line() in a loop can eat a lot more memory than one would
expect. I discovered for instance that commit e0f05cd5b caused initdb
to consume circa 10MB to read postgres.bki, even though that's under
1MB worth of data. A less memory-hungry alternative is to re-use the
same StringInfo for all lines and pg_strdup the results.
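The resulting usage pattern looks about like this (a sketch; fp and
process_line() are assumed, and this presumes the two-argument
pg_get_line_append() form):

    StringInfoData buf;

    initStringInfo(&buf);
    while (pg_get_line_append(fp, &buf))
    {
        process_line(buf.data);   /* or pg_strdup(buf.data) to keep a copy */
        resetStringInfo(&buf);    /* reuse the same allocation for the next line */
    }
    pfree(buf.data);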
Discussion: https://postgr.es/m/1315832.1599345736@sss.pgh.pa.us
We reported the wrong types when complaining that an aggregate's
moving-aggregate implementation is inconsistent with its regular
implementation.
This has been wrong since the feature was introduced, so back-patch
to all supported branches.
Jeff Janes
Discussion: https://postgr.es/m/CAMkU=1x808LH=LPhZp9mNSP0Xd1xDqEd+XeGcvEe48dfE6xV=A@mail.gmail.com
This is duplicative of an lstat that was just done by the calling
function (traverse_datadir), besides which we weren't really doing
anything with the results. There's not much point in checking to
see if someone removed the file since the previous lstat, since the
FILE_ACTION_REMOVE code would have to deal with missing-file cases
anyway. Moreover, the "exists = false" assignment was a dead store;
nothing was done with that value later.
A syscall saved is a syscall earned, so back-patch to 9.5
where this code was introduced.
Discussion: https://postgr.es/m/1221796.1599329320@sss.pgh.pa.us
At some point back in the last century, somebody felt that reading
all of pg_type twice was cheaper, or at least easier, than using
repalloc() to resize the Typ[] array dynamically. That seems like an
entirely wacko proposition, so rewrite the code to do it the other
way. (To add insult to injury, there were two not-quite-identical
copies of said code.)
initdb.c's readfile() function had the same disease of preferring
to do double the I/O to avoid resizing its output array. Here,
we can make things easier by using the just-invented pg_get_line()
function to handle reading individual lines without a predetermined
notion of how long they are.
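The single-pass shape of the readfile() rewrite is roughly this (a
simplified sketch, not the actual initdb.c code; error handling is
omitted):

    #include "postgres_fe.h"
    #include "common/string.h"      /* for pg_get_line() */

    static char **
    readfile_sketch(const char *path)
    {
        FILE       *fp = fopen(path, "r");
        int         maxlines = 32;
        int         nlines = 0;
        char      **result = pg_malloc(maxlines * sizeof(char *));
        char       *line;

        /* grow the array as we go instead of reading the file twice */
        while ((line = pg_get_line(fp)) != NULL)
        {
            if (nlines + 1 >= maxlines)
            {
                maxlines *= 2;
                result = pg_realloc(result, maxlines * sizeof(char *));
            }
            result[nlines++] = line;
        }
        fclose(fp);
        result[nlines] = NULL;      /* NULL-terminate, as readfile() does */
        return result;
    }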
On my machine, it's difficult to detect any net change in the
overall runtime of initdb from these changes; but they should
help on slower buildfarm machines (especially since a buildfarm
cycle involves a lot of initdb runs these days).
My attention was drawn to these places by scan-build complaints,
but on inspection they needed a lot more work than just suppressing
dead stores :-(
This commit improves the dependency registrations by taking advantage
of the preliminary work done in 63110c62, grouping insertions of
dependencies of the same type into pg_depend.  With the current layer
of routines available, and as only dependencies of the same type can
be grouped, some code paths still do more than one multi-insert when
they need to register dependencies of multiple types (constraint and
index creation are two such cases).
While at it, this refactors some of the code to use ObjectAddressSet()
when manipulating object addresses.
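As an example of the grouped style (schematic only; indexoid,
tableoid, natts and attnums[] are assumed variables, not code from
this commit):

    ObjectAddress   myself;
    ObjectAddress   referenced[INDEX_MAX_KEYS];

    ObjectAddressSet(myself, RelationRelationId, indexoid);

    /* one address per referenced column of the table */
    for (int i = 0; i < natts; i++)
        ObjectAddressSubSet(referenced[i], RelationRelationId,
                            tableoid, attnums[i]);

    /* a single grouped insertion into pg_depend, instead of natts ones */
    recordMultipleDependencies(&myself, referenced, natts, DEPENDENCY_AUTO);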
Author: Daniel Gustafsson, Michael Paquier
Reviewed-by: Andres Freund, Álvaro Herrera
Discussion: https://postgr.es/m/20200807061619.GA23955@paquier.xyz
The basic tests that defined SQL functions didn't actually run the
functions to see if they worked. Add that, and also fix a minor
mistake in a function that was revealed by this. (This is not a
question of test coverage, since there are other places where SQL
functions are run, but it is a bit of a silly test design.)
Discussion: https://www.postgresql.org/message-id/flat/1c11f1eb-f00c-43b7-799d-2d44132c02d7@2ndquadrant.com
This is a follow-up of the work done in e3931d01.  This case is a bit
different from pg_attribute and pg_shdepend: the maximum number of
items to insert is known in advance, but there is no need to handle
pinned dependencies.  Hence, the base allocation of slots is done
based on the number of items and the maximum allowed, with a cap at
64kB.  Slots are initialized only once they are used, to minimize the
overhead of the operation.
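The allocation pattern is roughly as follows (a schematic sketch; the
cap macro name and the surrounding variables such as depRel, indstate
and nitems are illustrative):

    #define MULTI_INSERT_CAP_BYTES  65535   /* the ~64kB cap described above */

    int         max_slots = Min(nitems,
                                MULTI_INSERT_CAP_BYTES / sizeof(FormData_pg_depend));
    TupleTableSlot **slots = palloc(max_slots * sizeof(TupleTableSlot *));
    int         slot_init = 0;      /* number of slots created so far */
    int         slotno = 0;

    for (int i = 0; i < nitems; i++)
    {
        /* initialize a slot only the first time it is needed */
        if (slotno >= slot_init)
            slots[slot_init++] =
                MakeSingleTupleTableSlot(RelationGetDescr(depRel),
                                         &TTSOpsHeapTuple);

        /* ... fill slots[slotno] from the dependency being registered ... */

        if (++slotno == max_slots)
        {
            CatalogTuplesMultiInsertWithInfo(depRel, slots, slotno, indstate);
            slotno = 0;
        }
    }

    /* flush any leftovers, then drop the slots that were created */
    if (slotno > 0)
        CatalogTuplesMultiInsertWithInfo(depRel, slots, slotno, indstate);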
Insertions can only be grouped for dependencies of the same type.
More could be done by grouping the insertion of multiple dependency
types in a single batch, but this is left as future work.
Some of the multi-insert logic is also simplified for pg_shdepend, as
per the feedback discussed for this specific patch. This also moves to
indexing.h the variable capping the maximum amount of data that can be
used at once for a multi-insert, instead of having separate definitions
for pg_attribute, pg_depend and pg_shdepend.
Author: Daniel Gustafsson, Michael Paquier
Reviewed-by: Andres Freund, Álvaro Herrera
Discussion: https://postgr.es/m/20200807061619.GA23955@paquier.xyz
I happened to notice that the new test case I added in b55b4dad9
falls over if one runs "make check" repeatedly, though not in branches
after v10.  That's because it was assuming that tmp_check/pgpass
wouldn't exist already. However, it's only been since v11 that the
Makefiles forcibly remove all of tmp_check/ before starting a TAP run.
This fix to unlink the file is therefore strictly necessary only in
v10 ... but it seems wisest to do it across the board, rather than
let the test rely on external logic to get the conditions right.
Commit 3f60f690f only partially fixed the broken-status-tracking
issue in LogicalRepApplyLoop: we need ping_sent to have the same
lifetime as last_recv_timestamp. The effects are much less serious
than what that commit fixed, though. AFAICS this would just lead to
extra ping requests being sent, once per second until the sender
responds. Still, it's a bug, so backpatch to v10 as before.
Discussion: https://postgr.es/m/959627.1599248476@sss.pgh.pa.us
When reading a WAL record fails to find continuation record(s) of the
proper length, report what length was expected, for clarity.
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://postgr.es/m/20200903212152.GA15319@alvherre.pgsql
If this data is not collected, pg_dump segfaults if asked for column
inserts.
Fix by Fabrízio de Royes Mello
Backpatch to release 12 where the bug was introduced.
The "DROP ACCESS METHOD gist2" test will require locking the index
to be dropped and then its table, while most ordinary operations
lock a table first and then its index.  While no concurrent test scripts
should be touching fast_emp4000, autovacuum might chance to be
processing that table when the DROP runs, resulting in a deadlock
failure. This is pretty rare but we see it in the buildfarm from
time to time.
To fix, acquire a lock on fast_emp4000 before issuing the DROP.
Since the point of the exercise is mostly to prevent buildfarm
failures, back-patch to 9.6 where this test was introduced.
Discussion: https://postgr.es/m/839004.1599185607@sss.pgh.pa.us
This node already handles multiple options using a bitmask, so having
a separate boolean flag is not necessary.  Replacing the boolean with
an equivalent bitmask value simplifies the code a bit, with fewer
arguments to pass to the reindex routines.
Reviewed-by: Julien Rouhaud
Discussion: https://postgr.es/m/20200902110326.GA14963@paquier.xyz
This patch started out with the goal of harmonizing various arbitrary
limits on password length, but after a while a better idea emerged:
let's just get rid of those fixed limits.
recv_password_packet() has an arbitrary limit on the packet size,
which we don't really need, so just drop it. (Note that this doesn't
really affect anything for MD5 or SCRAM password verification, since
those will hash the user's password to something shorter anyway.
It does matter for auth methods that require a cleartext password.)
Likewise remove the arbitrary error condition in pg_saslprep().
The remaining limits are mostly in client-side code that prompts
for passwords. To improve those, refactor simple_prompt() so that
it allocates its own result buffer that can be made as big as
necessary. Actually, it proves best to make a separate routine
pg_get_line() that has essentially the semantics of fgets(), except
that it allocates a suitable result buffer and hence will never
return a truncated line. (pg_get_line has a lot of potential
applications to replace randomly-sized fgets buffers elsewhere,
but I'll leave that for another patch.)
I built pg_get_line() atop stringinfo.c, which requires moving
that code to src/common/; but that seems fine since it was a poor
fit for src/port/ anyway.
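To picture the semantics, a standalone stand-in might look like this
(this is not the actual src/common implementation, just an
illustration of the described behavior):

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    /*
     * Like fgets(), but the result buffer is allocated and enlarged as
     * needed, so the line is never truncated.  Returns NULL at EOF;
     * the caller must free() the result.
     */
    static char *
    get_line_alloc(FILE *stream)
    {
        size_t      size = 128;
        size_t      used = 0;
        char       *buf = malloc(size);

        if (buf == NULL)
            return NULL;

        while (fgets(buf + used, (int) (size - used), stream) != NULL)
        {
            used += strlen(buf + used);
            if (used > 0 && buf[used - 1] == '\n')
                return buf;         /* got a complete line */

            /* line longer than the buffer: double it and keep reading */
            size *= 2;
            buf = realloc(buf, size);
            if (buf == NULL)
                return NULL;
        }

        if (used > 0)
            return buf;             /* last line had no trailing newline */
        free(buf);
        return NULL;                /* EOF (or error) with no data read */
    }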
This patch is mostly mine, but it owes a good deal to Nathan Bossart
who pressed for a solution to the password length problem and
created a predecessor patch. Also thanks to Peter Eisentraut and
Stephen Frost for ideas and discussion.
Discussion: https://postgr.es/m/09512C4F-8CB9-4021-B455-EF4C4F0D55A0@amazon.com
Because sigsetjmp() will restore the initial state with signals blocked,
the code path in bgworker.c for reporting an error and exiting would
execute that way. Usually this is fairly harmless; but if a parallel
worker had an error message exceeding the shared-memory communication
buffer size (16K) it would lock up, because it would wait for a
resume-sending signal from its parallel leader which it would never
detect.
To fix, just unblock signals at the appropriate point.
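A minimal standalone illustration of the trap and of the fix (generic
C, not the actual bgworker.c code):

    #include <setjmp.h>
    #include <signal.h>
    #include <stddef.h>

    static sigjmp_buf errjmp;       /* error handling siglongjmp()s here */

    static void
    worker_main(void)
    {
        sigset_t    blockall,
                    unblockall;

        sigfillset(&blockall);
        sigemptyset(&unblockall);

        /* signals are still blocked while the error handler is set up */
        sigprocmask(SIG_SETMASK, &blockall, NULL);

        if (sigsetjmp(errjmp, 1) != 0)
        {
            /*
             * siglongjmp() restored the mask saved above, i.e. everything
             * blocked.  Without unblocking here, waiting for the leader's
             * "resume sending" signal would hang forever.
             */
            sigprocmask(SIG_SETMASK, &unblockall, NULL);
            /* ... report the error and exit ... */
            return;
        }

        /* normal path: unblock signals and do the work */
        sigprocmask(SIG_SETMASK, &unblockall, NULL);
        /* ... on error, the handler does siglongjmp(errjmp, 1) ... */
    }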
This can be shown to fail back to 9.6. The lack of parallel query
infrastructure makes it difficult to provide a simple test case for
9.5; but I'm pretty sure the issue exists in some form there as well,
so apply the code change there too.
Vignesh C, reviewed by Bharath Rupireddy, Robert Haas, and myself
Discussion: https://postgr.es/m/CALDaNm1d1hHPZUg3xU4XjtWBOLCrA+-2cJcLpw-cePZ=GgDVfA@mail.gmail.com
A backslash at the end of a line now causes the next line to be appended
to the current one (effectively, the backslash and newline are discarded).
This allows long HBA entries to be created without legibility problems.
While we're here, get rid of the former hard-wired length limit on
pg_hba.conf lines, by using an expansible StringInfo buffer instead
of a fixed-size local variable.
Since the same code is used to read the ident map file, these changes
apply there as well.
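The joining logic amounts to roughly the following (a simplified
sketch of the idea, not the exact hba.c code; "file" and
parse_one_line() are assumed):

    StringInfoData line;

    initStringInfo(&line);
    while (pg_get_line_append(file, &line))
    {
        /* strip the trailing newline, if any */
        while (line.len > 0 &&
               (line.data[line.len - 1] == '\n' ||
                line.data[line.len - 1] == '\r'))
            line.data[--line.len] = '\0';

        /* backslash at end of line: drop it and append the next line */
        if (line.len > 0 && line.data[line.len - 1] == '\\')
        {
            line.data[--line.len] = '\0';
            continue;
        }

        parse_one_line(line.data);
        resetStringInfo(&line);
    }
    pfree(line.data);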
Fabien Coelho, reviewed by Justin Pryzby and David Zhang
Discussion: https://postgr.es/m/alpine.DEB.2.21.2003251906140.15243@pseudo
The majority of our audience is probably using a pre-packaged Postgres
build rather than raw sources. For them, much of runtime.sgml is not
too relevant, and they should be reading the packager's docs instead.
Add some notes pointing that way in appropriate places.
Text by me; thanks to Daniel Gustafsson for review and discussion,
and to Laurenz Albe for an earlier version.
Discussion: https://postgr.es/m/159430831443.16535.11360317280100947016@wrigleys.postgresql.org
This turns the first mention of a system catalog or view in each
paragraph of the system catalog and view documentation pages into a
hyperlink, for easier navigation.
Also linkify the first mention of pg_hba.conf in pg_hba_file_rules, as
that's more specific and easier to spot than the link to the client
authentication chapter.
Author: Dagfinn Ilmari Mannsåker <ilmari@ilmari.org>
Discussion: https://www.postgresql.org/message-id/flat/87mu5xqc11.fsf@wibble.ilmari.org
To add support for streaming of in-progress transactions into the
built-in logical replication, we need to do three things:
* Extend the logical replication protocol to identify in-progress
transactions, and to allow adding additional bits of information (e.g.
XID of subtransactions).
* Modify the output plugin (pgoutput) to implement the new stream
API callbacks, by leveraging the extended replication protocol (a
schematic example of such callbacks follows this list).
* Modify the replication apply worker to properly handle streamed
in-progress transactions by spilling the data to disk and then
replaying them on commit.
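Schematically, the second point looks like this from an output
plugin's perspective (a sketch based on the stream API in
replication/logical.h; the my_* names are made up, and pgoutput wires
up its own equivalents internally):

    #include "postgres.h"
    #include "replication/logical.h"
    #include "replication/output_plugin.h"
    #include "replication/reorderbuffer.h"

    static void
    my_stream_start(LogicalDecodingContext *ctx, ReorderBufferTXN *txn)
    {
        /* open a streamed block for txn on the wire */
    }

    static void
    my_stream_stop(LogicalDecodingContext *ctx, ReorderBufferTXN *txn)
    {
        /* close the current streamed block */
    }

    static void
    my_stream_abort(LogicalDecodingContext *ctx, ReorderBufferTXN *txn,
                    XLogRecPtr abort_lsn)
    {
        /* tell the subscriber to discard the streamed (sub)transaction */
    }

    static void
    my_stream_commit(LogicalDecodingContext *ctx, ReorderBufferTXN *txn,
                     XLogRecPtr commit_lsn)
    {
        /* tell the subscriber to replay what it spilled for this txn */
    }

    static void
    my_stream_change(LogicalDecodingContext *ctx, ReorderBufferTXN *txn,
                     Relation relation, ReorderBufferChange *change)
    {
        /* send the individual change, tagged with the streamed XID */
    }

    void
    _PG_output_plugin_init(OutputPluginCallbacks *cb)
    {
        /* ... the usual begin/change/commit callbacks go here too ... */
        cb->stream_start_cb = my_stream_start;
        cb->stream_stop_cb = my_stream_stop;
        cb->stream_abort_cb = my_stream_abort;
        cb->stream_commit_cb = my_stream_commit;
        cb->stream_change_cb = my_stream_change;
    }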
We must, however, explicitly disable streaming replication during
replication slot creation, even if the plugin supports it.  We
don't need to replicate the changes accumulated during this phase,
and moreover we don't have a replication connection open, so we
have nowhere to send the data anyway.
Author: Tomas Vondra, Dilip Kumar and Amit Kapila
Reviewed-by: Amit Kapila, Kuntal Ghosh and Ajin Cherian
Tested-by: Neha Sharma, Mahendra Singh Thalor and Ajin Cherian
Discussion: https://postgr.es/m/688b0b7f-2f6c-d827-c27b-216a8e3ea700@2ndquadrant.com
This splits a string at occurrences of a delimiter. It is exactly like
string_to_array() except for producing a set of values instead of an
array of values. Thus, the relationship of these two functions is
the same as between regexp_split_to_table() and regexp_split_to_array().
Although the same results could be had from unnest(string_to_array()),
this is somewhat faster than that, and anyway it seems reasonable to
have it for symmetry with the regexp functions.
Pavel Stehule, reviewed by Peter Smith
Discussion: https://postgr.es/m/CAFj8pRD8HOpjq2TqeTBhSo_QkzjLOhXzGCpKJ4nCs7Y9SQkuPw@mail.gmail.com