Commit b0b0d84b3d663a148022e900ebfc164284a95f55 purported to make it
possible to relaunch workers using the same parallel context, but it had
an unpleasant race condition: we might reinitialize after the workers
have sent their last control message but before they have detached the
DSM, leading to crashes. Repair by introducing a new ParallelContext
operation, ReinitializeParallelDSM.
Adjust execParallel.c to use this new support, so that we can rescan a
Gather node by relaunching workers but without needing to recreate the
DSM.
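As a sketch, the relaunch pattern this enables looks roughly like the
following (error handling and worker setup elided; pcxt is an
already-initialized ParallelContext):

/* Wait for the previous scan's workers to send their final messages. */
WaitForParallelWorkersToFinish(pcxt);
/* Reset the existing DSM's state in place instead of rebuilding it. */
ReinitializeParallelDSM(pcxt);
/* Launch a fresh set of workers for the rescan. */
LaunchParallelWorkers(pcxt);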
Amit Kapila, with some adjustments by me. Extracted from latest parallel
sequential scan patch.
Show how this can be used in practice to make queries simpler and more
flexible. Also, draw an explicit contrast to the existence operator,
which doesn't work that way.
Peter Geoghegan and Tom Lane
The original Gather code failed to mark a Gather node as not able to
do projection, but in fact it couldn't project, even though it did
initialize its projection info via ExecAssignProjectionInfo. There
doesn't seem to
be any good reason for this node not to have projection capability,
so clean things up so that it does. Without this, plans using Gather
nodes might need to carry extra Result nodes to do projection.
A bug in the original free space computation made it possible to
return a page which wasn't actually able to fit the item. Since the
insertion code isn't prepared to deal with PageAddItem failing, a PANIC
resulted ("failed to add BRIN tuple [to new page]"). Add a macro to
encapsulate the correct computation, and use it in
brin_getinsertbuffer's callers before calling that routine, to raise an
early error.
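The computation needs to be along these lines (a simplified sketch,
not necessarily the committed macro, using the usual page-layout
macros):

/* Largest item an empty BRIN page can hold: the block size minus the
 * page header, one line pointer for the new item, and the BRIN
 * special space, rounded down to alignment. */
#define BrinMaxItemSize \
    MAXALIGN_DOWN(BLCKSZ - \
                  (MAXALIGN(SizeOfPageHeaderData + sizeof(ItemIdData)) + \
                   MAXALIGN(sizeof(BrinSpecialSpace))))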
I became aware of the possibility of a problem in this area while working
on ccc4c074994d734. There's no archived discussion about it, but it's
easy to reproduce a problem in the unpatched code with something like
CREATE TABLE t (a text);
CREATE INDEX ti ON t USING brin (a) WITH (pages_per_range=1);
for length in `seq 8000 8196`
do
psql -f - <<EOF
TRUNCATE TABLE t;
INSERT INTO t VALUES ('z'), (repeat('a', $length));
EOF
done
Backpatch to 9.5, where BRIN was introduced.
Further tweak commit_ts.c so that on a standby the state is completely
consistent with that in the master, rather than behaving
differently in cases where the settings differ. Now in standby and
master the module should always be active or inactive in lockstep.
Author: Petr Jelínek, with some further tweaks by Álvaro Herrera.
Backpatch to 9.5, where commit timestamps were introduced.
Discussion: http://www.postgresql.org/message-id/5622BF9D.2010409@2ndquadrant.com
Bernd Helmle complained that CreateReplicationSlot() was assigning the
same value to the same variable twice, so we could remove one of them.
Code inspection reveals that we can actually remove both assignments:
according to the author, the assignment was there only to make the
strlen line prettier; another possible fix is to put the strlen call
on its own line, so do that.
To be consistent within the file, refactor all duplicated strlen()
calls, which is what we do elsewhere in the backend anyway. In
basebackup.c, snprintf already returns the right length; no need for
strlen afterwards.
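The pattern in question, as a sketch (illustrative, not the committed
code; 'value' is a placeholder):

unsigned    value = 42;     /* placeholder */
char        buf[64];
int         len;

/* snprintf already reports the formatted length ... */
len = snprintf(buf, sizeof(buf), "%u", value);
/* ... so a follow-up strlen(buf) on the same buffer is redundant. */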
Backpatch to 9.4, where replication slots were introduced, to keep code
identical. Some of this is older, but the patch doesn't apply cleanly
and it's only of cosmetic value anyway.
Discussion: http://www.postgresql.org/message-id/BE2FD71DEA35A2287EA5F018@eje.credativ.lan
If the counterparty writes some data into the queue and then detaches,
it's wrong to return SHM_MQ_DETACHED right away. If we do that, we
fail to read whatever was written.
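As a sketch, the contract a nowait reader should see is something like
this (names per shm_mq.h; mqh is an attached shm_mq_handle, and the
handling shown is illustrative):

shm_mq_result res;
Size          nbytes;
void         *data;

res = shm_mq_receive(mqh, &nbytes, &data, true);
if (res == SHM_MQ_SUCCESS)
    ;   /* a message was queued; the sender may already be gone */
else if (res == SHM_MQ_DETACHED)
    ;   /* queue drained AND counterparty gone: nothing more will come */
else
    ;   /* SHM_MQ_WOULD_BLOCK: no data yet, try again later */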
The shm_mq mechanism was intended to optionally notice when the process
on the other end of the queue fails to attach to the queue. It does
this by allowing the user to pass a BackgroundWorkerHandle; if the
background worker in question is launched and dies without attaching
to the queue, then we know it never will. This logic works OK in
blocking mode, but when called with nowait = true we fail to notice
that this has happened due to an asymmetry in the logic. Repair.
Reported off-list by Rushabh Lathia. Patch by me.
Once upon a time we did not have a separate CREATEROLE privilege, and
CREATEUSER effectively meant SUPERUSER. When we invented CREATEROLE
(in 8.1) we also added SUPERUSER so as to have a less confusing keyword
for this role property. However, we left CREATEUSER in place as a
deprecated synonym for SUPERUSER, because of backwards-compatibility
concerns. It's still there and is still confusing people, as for example
in bug #13694 from Justin Catterson. 9.6 will be ten years or so later,
which surely ought to be long enough to end the deprecation and just
remove these old keywords. Hence, do so.
Commit 816e336f12ecabdc834d4cc31bcf966b2dd323dc added the wrong error
check to async.c; sending notifications is restricted to the leader,
not altogether unsafe.
Commit 3bd909b220930f21d6e15833a17947be749e7fde added ExecShutdownNode
to traverse the planstate tree and call shutdown functions, but made
a Gather node, the only node that actually has such a function, abort
the tree traversal, which is wrong.
Commit bda76c1c8cfb1d11751ba6be88f0242850481733 caused both plus and
minus infinity to be rendered as "infinity", which is not only wrong
but inconsistent with the pre-9.4 behavior of to_json(). Fix that by
duplicating the coding in date_out/timestamp_out/timestamptz_out more
closely. Per bug #13687 from Stepan Perlov. Back-patch to 9.4, like
the previous commit.
In passing, also re-pgindent json.c, since it had gotten a bit messed up by
recent patches (and I was already annoyed by indentation-related problems
in back-patching this fix ...)
Per a report from Shay Rojansky, Npgsql sends ssl_renegotiation_limit=0
in the startup packet because it does not support renegotiation; other
clients which have not attempted to support renegotiation might well
behave similarly. The recent removal of this parameter forces them to
break compatibility with either current PostgreSQL versions or
previous ones. Per discussion, the best solution is to accept the
parameter but only allow a value of 0.
Shay Rojansky, edited a little by me.
Commit 0e57b4d8bd9674adaf5747421b3255b85e385534 contained some clever
logic that attempted to make sure that we couldn't get confused about
whether the last thing we cached was a strcoll() result or a strxfrm()
result, but it wasn't quite clever enough, because we can perform
further abbreviations after having already performed some comparisons.
Introduce an explicit flag in the hopes of making this watertight.
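Roughly, the idea is the following (a sketch; the real struct and
field names may differ):

typedef struct
{
    char   *cached_str;     /* last string copied into the cache */
    bool    cache_blob;     /* true if the cache holds a strxfrm() blob,
                             * false if it holds input for strcoll() */
} AbbrevCacheSketch;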
Peter Geoghegan, reviewed by me.
Instead, use transaction abort. Given an unlucky bout of latency, the
timeout would cancel the RESET itself. Buildfarm members gharial,
lapwing, mereswine, shearwater, and sungazer witness that. Back-patch
to 9.1 (all supported versions). The query_canceled test could still
time out before entering its subtransaction; for whatever reason, that
has yet to happen on the buildfarm.
pg_regprefix was doing nothing with lookahead constraints, which would
be fine if it were the right kind of nothing, but it isn't: we have to
terminate our search for a fixed prefix, not just pretend the LACON arc
isn't there. Otherwise, if the current state has both a LACON outarc and a
single plain-color outarc, we'd falsely conclude that the color represents
an addition to the fixed prefix, and generate an extracted index condition
that restricts the indexscan too much. (See added regression test case.)
Terminating the search is conservative: we could traverse the LACON arc
(thus assuming that the constraint can be satisfied at runtime) and then
examine the outarcs of the linked-to state. But that would be a lot more
work than it seems worth, because writing a LACON followed by a single
plain character is a pretty silly thing to do.
This makes a difference only in rather contrived cases, but it's a bug,
so back-patch to all supported branches.
Using this API, one backend can set up a ParallelHeapScanDesc to
which multiple backends can then attach. Each tuple in the relation
will be returned to exactly one of the scanning backends. Only
forward scans are supported, and rescans must be carefully
coordinated.
This is not exposed to the planner or executor yet.
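In outline, usage looks like this (a sketch; rel and snapshot are
assumed to be in scope, and shm_area_alloc is a hypothetical
placeholder for however the caller carves out shared memory):

/* Leader: size, allocate, and initialize the shared scan descriptor. */
Size        size = heap_parallelscan_estimate(snapshot);
ParallelHeapScanDesc pscan = shm_area_alloc(size);      /* hypothetical */

heap_parallelscan_initialize(pscan, rel, snapshot);

/* Each cooperating backend, having attached to the shared memory: */
HeapScanDesc scan = heap_beginscan_parallel(rel, pscan);
/* heap_getnext(scan, dir) now hands each tuple to exactly one backend. */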
The original version of this code was written by me. Amit Kapila
reviewed it, tested it, and improved it, including adding support for
synchronized scans, per review comments from Jeff Davis. Extensive
testing of this and related patches was performed by Haribabu Kommi.
Final cleanup of this patch by me.
This may allow some callers to avoid the overhead involved in tearing
down a parallel context and then setting up a new one, which means
releasing the DSM and then allocating and populating a new one. I
suspect we'll want to revise the Gather node to make use of this new
capability, but even if not it may be useful elsewhere and requires
very little additional code.
Revert our previous addition of "all" flags to copyins() and copyouts();
they're no longer needed, and were never anything but an unsightly hack.
Fix a couple of infelicities in the REG_DEBUG code for dumping
the NFA data structure, including adding code to count the total
number of states and arcs.
Add a couple of missed error checks.
Add some more documentation in the README file, and some regression tests
illustrating cases that exceeded the state-count limit and/or took
unreasonable amounts of time before this set of patches.
Back-patch to all supported branches.
This code previously counted the number of NFA states it created, and
complained if a limit was exceeded, so as to prevent bizarre regex patterns
from consuming unreasonable time or memory. That's fine as far as it went,
but the code paid no attention to how many arcs linked those states. Since
regexes can be contrived that have O(N) states but will need O(N^2) arcs
after fixempties() processing, it was still possible to blow out memory,
and take a long time doing it too. To fix, modify the bookkeeping to count
space used by both states and arcs.
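As a sketch, the bookkeeping amounts to something like this (variable
names are illustrative, not the regex code's; nstates and narcs are
the running counts):

size_t  spaceused;

/* Charge one byte budget for both states and arcs, rather than
 * counting states alone. */
spaceused = nstates * sizeof(struct state)
          + narcs * sizeof(struct arc);
if (spaceused > MAX_COMPILE_SPACE)      /* ~150MB on 64-bit, per below */
    return REG_ETOOBIG;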
I did not bother with including the "color map" in the accounting; it
can only grow to a few megabytes, which is not a lot in comparison to
what we're allowing for states+arcs (about 150MB on 64-bit machines
or half that on 32-bit machines).
Looking at some of the larger real-world regexes captured in the Tcl
regression test suite suggests that the most that is likely to be needed
for regexes found in the wild is under 10MB, so I believe that the current
limit has enough headroom to make it okay to keep it as a hard-wired limit.
In connection with this, redefine REG_ETOOBIG as meaning "regular
expression is too complex"; the previous wording of "nfa has too many
states" was already somewhat inapropos because of the error code's use
for stack depth overrun, and it was not very user-friendly either.
Back-patch to all supported branches.
The previous coding would create a new intermediate state every time it
wanted to interchange the ordering of two constraint arcs. Certain regex
features such as \Y can generate large numbers of parallel constraint arcs,
and if we needed to reorder the results of that, we created unreasonable
numbers of intermediate states. To improve matters, keep a list of
already-created intermediate states associated with the state currently
being considered by the outer loop; we can re-use such states to place all
the new arcs leading to the same destination or source.
I also took the trouble to redefine push() and pull() to have a less risky
API: they no longer delete any state or arc that the caller might possibly
have a pointer to, except for the specifically-passed constraint arc.
This reduces the risk of re-introducing the same type of error seen in
the failed patch for CVE-2007-4772.
Back-patch to all supported branches.
The previous coding took something like O(N^4) time to fully process a
chain of N EMPTY arcs. We can't really do much better than O(N^2) because
we have to insert about that many arcs, but we can do lots better than
what's there now. The win comes partly from using mergeins() to amortize
de-duplication of arcs across multiple source states, and partly from
exploiting knowledge of the ordering of arcs for each state to avoid
looking at arcs we don't need to consider during the scan. We do have
to be a bit careful of the possible reordering of arcs introduced by
the sort-merge coding of the previous commit, but that's not hard to
deal with.
Back-patch to all supported branches.
Change the singly-linked in-arc and out-arc lists to be doubly-linked,
so that arc deletion is constant time rather than having worst-case time
proportional to the number of other arcs on the connected states.
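A sketch of why that helps (the back-link field name here is an
assumption, not regguts.h's):

/* With a back-link, unlinking an arc from its source state's out-list
 * is O(1) instead of a walk of the whole list. */
static void
unlink_out_arc(struct arc *a)
{
    if (a->outchainprev != NULL)
        a->outchainprev->outchain = a->outchain;
    else
        a->from->outs = a->outchain;    /* 'a' headed the list */
    if (a->outchain != NULL)
        a->outchain->outchainprev = a->outchainprev;
}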
Modify the bulk arc transfer operations copyins(), copyouts(), moveins(),
moveouts() so that they use a sort-and-merge algorithm whenever there's
more than a small number of arcs to be copied or moved. The previous
method is O(N^2) in the number of arcs involved, because it performs
duplicate checking independently for each copied arc. The new method may
change the ordering of existing arcs for the destination state, but nothing
really cares about that.
Provide another bulk arc copying method mergeins(), which is unused as
of this commit but is needed for the next one. It basically is like
copyins(), but the source arcs might not all come from the same state.
Replace the O(N^2) bubble-sort algorithm used in carcsort() with a qsort()
call.
These changes greatly improve the performance of regex compilation for
large or complex regexes, at the cost of extra space for arc storage during
compilation. The original tradeoff was probably fine when it was made, but
now we care more about speed and less about memory consumption.
Back-patch to all supported branches.
It's possible to construct regular expressions that contain loops of
constraint arcs (that is, ^ $ AHEAD BEHIND or LACON arcs). There's no use
in fully traversing such a loop at execution, since you'd just end up in
the same NFA state without having consumed any input. Worse, such a loop
leads to infinite looping in the pullback/pushfwd stage of compilation,
because we keep pushing or pulling the same constraints around the loop
in a vain attempt to move them to the pre or post state. Such looping was
previously recognized in CVE-2007-4772; but the fix only handled the case
of trivial single-state loops (that is, a constraint arc leading back to
its source state) ... and not only that, it was incorrect even for that
case, because it broke the admittedly-not-very-clearly-stated API contract
of the pull() and push() subroutines. The first two regression test cases
added by this commit exhibit patterns that result in assertion failures
because of that (though there seem to be no ill effects in non-assert
builds). The other new test cases exhibit multi-state constraint loops;
in an unpatched build they will run until the NFA state-count limit is
exceeded.
To fix, remove the code added for CVE-2007-4772, and instead create a
general-purpose constraint-loop-breaking phase of regex compilation that
executes before we do pullback/pushfwd. Since we never need to traverse
a constraint loop fully, we can just break the loop at any chosen spot,
if we add clone states that can replicate any sequence of arc transitions
that would've traversed just part of the loop.
Also add some commentary clarifying why we have to have all these
machinations in the first place.
This class of problems has been known for some time --- we had a report
from Marc Mamin about two years ago, for example, and there are related
complaints in the Tcl bug tracker. I had discussed a fix of this kind
off-list with Henry Spencer, but didn't get around to doing something
about it until the issue was rediscovered by Greg Stark recently.
Back-patch to all supported branches.
Prior to commit 0709b7ee72e4bc71ad07b7120acd117265ab51d0, access to
variables within a spinlock-protected critical section had to be done
through a volatile pointer, but that should no longer be necessary.
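For example (SomeShared and its fields are hypothetical; both forms
are sketches):

/* Before: a volatile-qualified alias forced loads and stores to
 * actually happen inside the critical section. */
volatile SomeShared *vshared = shared;

SpinLockAcquire(&shared->mutex);
vshared->counter++;
SpinLockRelease(&shared->mutex);

/* After: the spinlock operations are compiler barriers, so plain
 * access through the ordinary pointer is safe. */
SpinLockAcquire(&shared->mutex);
shared->counter++;
SpinLockRelease(&shared->mutex);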
Michael Paquier
Prior to commit 0709b7ee72e4bc71ad07b7120acd117265ab51d0, access to
variables within a spinlock-protected critical section had to be done
through a volatile pointer, but that should no longer be necessary.
Thomas Munro
In order for this to be safe, the code which handles true serializability
will need to be taught that the SIRead locks taken by a parallel worker
pertain to the same transaction as those taken by the parallel leader.
Some further changes may be needed as well. Until the necessary
adaptations are made, don't generate parallel plans in serializable
mode, and if a previously-generated parallel plan is used after
serializable mode has been activated, run it serially.
This fixes a bug in commit 7aea8e4f2daa4b39ca9d1309a0c4aadb0f7ed81b.
In the previous coding, before returning from ExecutorRun, we'd shut
down all parallel workers. This was dead wrong if ExecutorRun was
called with a non-zero tuple count; it had the effect of truncating
the query output. To fix, give ExecutePlan control over whether to
enter parallel mode, and have it refuse to do so if the tuple count
is non-zero. Rewrite the Gather logic so that it can cope with being
called outside parallel mode.
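In outline, the guard amounts to this sketch (simplified, not the
verbatim ExecutePlan code; estate and numberTuples are in scope there):

/* Only enter parallel mode when the caller will run the plan to
 * completion; a bounded fetch would shut workers down early and
 * truncate later output. */
bool    use_parallel_mode = estate->es_plannedstmt->parallelModeNeeded;

if (numberTuples != 0)
    use_parallel_mode = false;
if (use_parallel_mode)
    EnterParallelMode();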
Commit 7aea8e4f2daa4b39ca9d1309a0c4aadb0f7ed81b is largely to blame
for this problem, though this patch modifies some subsequently-committed
code which relied on the guarantees it purported to make.
Commit 7aea8e4f2daa4b39ca9d1309a0c4aadb0f7ed81b was overoptimistic
about the degree of safety associated with running various functions
in parallel mode. Functions that take a table name or OID as an
argument are at least parallel-restricted, because the table might be
temporary, and we currently don't allow parallel workers to touch
temporary tables. Functions that take a query as an argument are
outright unsafe, because the query could be anything, including a
parallel-unsafe query.
Also, the queue of pending notifications is backend-private, so adding
to it from a worker doesn't behave correctly. We could fix this by
transferring the worker's queue of pending notifications to the master
during worker cleanup, but that seems like more trouble than it's
worth for now. In addition to adjusting the pg_proc.h markings, also
add an explicit check for this in async.c.
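The async.c check is of this general shape (the exact condition and
message are assumptions, not the committed text):

if (IsParallelWorker())
    ereport(ERROR,
            (errcode(ERRCODE_INVALID_TRANSACTION_STATE),
             errmsg("cannot send notifications from a parallel worker")));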
check_role() tries to verify that the user has permission to become the
requested role, but this is inappropriate in a parallel worker, which
needs to exactly recreate the master's authorization settings. So skip
the check in that case.
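As a sketch (assuming the InitializingParallelWorker flag is what
distinguishes this case):

/* In check_role(): */
if (!InitializingParallelWorker)
{
    /* normal path: verify the session user may SET ROLE to this role */
}
/* else: restoring the leader's state verbatim, so skip the check */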
This fixes a bug in commit 924bcf4f16d54c55310b28f77686608684734f42.
Starting a parallel worker transaction changes our notion of which XIDs
are in-progress or committed, and our notion of the current command
counter ID. Therefore, our view of these caches prior to starting
this transaction may no longer be valid. Defend against that by clearing
them.
This fixes a bug in commit 924bcf4f16d54c55310b28f77686608684734f42.