transaction.
The transaction initiated by a parallel worker to cooperate
with the actual transaction started by the main backend to complete the
query execution should not be counted as a separate transaction. The
other internal transactions started and committed by the parallel worker
are still counted as separate transactions, as that is what we do in
other places like autovacuum.
This will partially fix the bloat in transaction stats due to additional
transactions performed by parallel workers. For a complete fix, we need to
decide how we want to show all the transactions that are started internally
for various operations, and that is a matter for a separate patch.
Reported-by: Haribabu Kommi
Author: Haribabu Kommi
Reviewed-by: Amit Kapila, Jamison Kirk and Rahila Syed
Backpatch-through: 9.6
Discussion: https://postgr.es/m/CAJrrPGc9=jKXuScvNyQ+VNhO0FZk7LLAShAJRyZjnedd2D61EQ@mail.gmail.com
This has to be prevented because inlining would result in multiple
self-references, which we don't support (and in fact that's disallowed
by the SQL spec, see statements about linearly vs. nonlinearly
recursive queries). Bug fix for commit 608b167f9.
Per report from Yaroslav Schekin (via Andrew Gierth)
Discussion: https://postgr.es/m/87wolmg60q.fsf@news-spur.riddles.org.uk
Commit 25ee70511ec2 introduced a memory leak in pgbench: some PGresult
structs were not being freed during error bailout, because we're now
doing more PQgetResult() calls than previously. Since there's more
cleanup code outside the discard_response() routine than in it, refactor
the cleanup code, removing the routine.
This has little effect currently, since we abandon processing after
hitting errors, but if we ever get further pgbench features (such as
testing for serializable transactions), it'll matter.
Per Coverity.
Reviewed-by: Michaël Paquier
We weren't testing anything involving EPQ on UPDATEs that move tuples
into different partitions. Depending on the implementation,
it might be that these cases aren't actually very interesting ...
but given our thin coverage of EPQ in general, I think it's a good
idea to have a test case.
Amit Langote, minor tweak by me
Discussion: https://postgr.es/m/7889df35-ad1a-691a-00e3-4d4b18f364e3@lab.ntt.co.jp
We've long had reports of intermittent "could not reattach to shared
memory" errors on Windows. Buildfarm member dory fails that way when
PGSharedMemoryReAttach() execution overlaps with creation of a thread
for the process's "default thread pool". Fix that by providing a second
region to receive asynchronous allocations that would otherwise intrude
into UsedShmemSegAddr. In pgwin32_ReserveSharedMemoryRegion(), stop
trying to free reservations landing at incorrect addresses; the caller's
next step has been to terminate the affected process. Back-patch to 9.4
(all supported versions).
Reviewed by Tom Lane. He also did much of the prerequisite research;
see commit bcbf2346d69f6006f126044864dd9383d50d87b4.
Discussion: https://postgr.es/m/20190402135442.GA1173872@rfd.leadboat.com
join_is_legal() needs to reject forming certain outer joins in cases
where that would lead the planner down a blind alley. However, it
mistakenly supposed that the way to handle full joins was to treat them
as applying the same constraints as for left joins, only to both sides.
That doesn't work, as shown in bug #15741 from Anthony Skorski: given
a lateral reference out of a join that's fully enclosed by a full join,
the code would fail to believe that any join ordering is legal, resulting
in errors like "failed to build any N-way joins".
However, we don't really need to consider full joins at all for this
purpose, because we effectively force them to be evaluated in syntactic
order, and that order is always legal for lateral references. Hence,
get rid of this broken logic for full joins and just ignore them instead.
This seems to have been an oversight in commit 7e19db0c0.
Back-patch to all supported branches, as that was.
Discussion: https://postgr.es/m/15741-276f1f464b3f40eb@postgresql.org
Explain that it is not enforced that querying a generated column
returns data that is consistent with the data that was stored. This
is similar to the note about constraints nearby.
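For instance (an illustrative sketch, not part of the documentation
change; my_func is a made-up function):

    CREATE TABLE t (
        a int,
        b int GENERATED ALWAYS AS (my_func(a)) STORED
    );

If my_func() is declared immutable but in fact changes behavior,
previously stored values of b may no longer match what the expression
would compute now, and nothing re-verifies them when b is queried.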
Reported-by: Amit Langote <amitlangote09@gmail.com>
vacuum_truncate controls whether vacuum tries to truncate off
any empty pages at the end of the table. Previously vacuum always
tried to do the truncation. However, the truncation could cause
some problems; for example, an ACCESS EXCLUSIVE lock needs to
be taken on the table during the truncation, which can cause
query cancellations on the standby even if hot_standby_feedback
is true. Setting this reloption to false can be helpful to avoid
such problems.
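For example (illustrative):

    ALTER TABLE t SET (vacuum_truncate = off);
    ALTER TABLE t RESET (vacuum_truncate);   -- back to the default behavior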
Author: Tsunakawa Takayuki
Reviewed-By: Julien Rouhaud, Masahiko Sawada, Michael Paquier, Kirk Jamison and Fujii Masao
Discussion: https://postgr.es/m/CAHGQGwE5UqFqSq1=kV3QtTUtXphTdyHA-8rAj4A=Y+e4kyp3BQ@mail.gmail.com
This commit fixes three, unfortunately related, issues:
1) Since 5db6df0c01, the introduction of DML via tableam, it was
possible to trigger "ERROR: unexpected table_lock_tuple status: 1"
when updating a row that was previously updated in the same
transaction - but only when that row had, before that, been
updated by a concurrent transaction (and READ COMMITTED was
used). The reason was simply that that case wasn't expected.
Fixing that led to:
2) Even before the above commit, there were error checks (introduced
in 6868ed7491b7) preventing a row being updated by different
commands within the same statement (say in a function called by an
UPDATE) - but that check wasn't performed when the row was first
updated in a concurrent transaction - instead the second update was
silently skipped in that case. After this change we throw the same
error as we would without the concurrent transaction.
3) The error checks (introduced in 6868ed7491b7) preventing such
   updates emitted the same error message for both DELETE and
   UPDATE ("tuple to be updated was already modified by an operation
   triggered by the current command"). While that could be changed
   separately, it made it hard to write tests that verify the correct
   behavior of the code.
This commit changes heap's implementation of table_lock_tuple() to
return TM_SelfModified instead of TM_Invisible (previously loosely
modeled after EvalPlanQualFetch), and teaches nodeModifyTable.c to
handle that in response to table_lock_tuple() and not just in response
to table_(delete|update).
Additionally it fixes the wrong error message (see 3 above). The
comment for table_lock_tuple() is also adjusted to state that
TM_Deleted won't return information in TM_FailureData - it'll not
always be available.
This also adds tests to ensure that DELETE/UPDATE correctly error out
when affecting a row that concurrently was modified by another
transaction.
Author: Andres Freund
Reported-By: Tom Lane, when investigating a bug fix to another bug
by Amit Langote
Discussion: https://postgr.es/m/19321.1554567786@sss.pgh.pa.us
As bug #15733 has proved, we are lacking coverage for partition tuple
routing with dropped attributes when involving three levels of
partitioning or more. The only active bug in this area was in v11,
and HEAD proves to handle those scenarios fine; still, it lacked
coverage for the previous problem.
Author: Amit Langote, Michael Paquier
Discussion: https://postgr.es/m/15733-7692379e310b80ec@postgresql.org
pg_get_indexdef_worker carelessly fetched indoption entries even for
non-key index columns that don't have one. 99.999% of the time this
would be harmless, since the code wouldn't examine the value ... but
some fine day this will be a fetch off the end of memory, resulting
in SIGSEGV.
Detected through valgrind testing. Odd that the buildfarm's valgrind
critters haven't noticed.
The new command lists partitioned relations (tables and/or indexes),
possibly with their sizes, possibly including partitioned partitions;
their parents (if not top-level); for indexes, the tables they belong
to; and their descriptions.
While there are various possible improvements to this, having it in this
form is already a great improvement over not having any way to obtain
this report.
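The command is not named above; assuming it is psql's \dP family,
usage looks like:

    \dP           list partitioned tables and indexes
    \dP+ meas*    include sizes, restricted to a name pattern
    \dPtn         partitioned tables only, including partitioned partitions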
Author: Pavel Stěhule, with help from Mathias Brossard, Amit Langote and
Justin Pryzby.
Reviewed-by: Amit Langote, Mathias Brossard, Melanie Plageman,
Michaël Paquier, Álvaro Herrera
Before those commits, partitioning-related code in the executor could
assume that ModifyTableState.resultRelInfo[] contains only leaf partitions.
However, now a fully-pruned update results in a dummy ModifyTable that
references the root partitioned table, and that breaks some stuff.
In v11, this led to an assertion or core dump in the tuple routing code.
Fix by disabling tuple routing, since we don't need that anyway.
(I chose to do that in HEAD as well for safety, even though the problem
doesn't manifest in HEAD as it stands.)
In v10, this confused ExecInitModifyTable's decision about whether it
needed to close the root table. But we can get rid of that altogether
by being smarter about where to find the root table.
Note that since the referenced commits haven't shipped yet, this
isn't fixing any bug the field has seen.
Amit Langote, per a report from me
Discussion: https://postgr.es/m/20710.1554582479@sss.pgh.pa.us
This uses the same infrastructure that the CREATE INDEX progress
reporting uses. Add a column to pg_stat_progress_create_index to
report the OID of the index being worked on. This was not necessary
for CREATE INDEX, but it's useful for REINDEX.
Also edit the phase descriptions a bit to be more consistent with the
source code comments.
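For example, progress of a REINDEX (or CREATE INDEX) can be watched
with something like this, index_relid being the newly added column:

    SELECT pid, relid::regclass AS table_name,
           index_relid::regclass AS index_name, phase
    FROM pg_stat_progress_create_index;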
Discussion: https://www.postgresql.org/message-id/ef6a6757-c36a-9e81-123f-13b19e36b7d7%402ndquadrant.com
Fix some places where we might fail to do Py_DECREF() on a Python
object (thereby leaking it for the rest of the session). Almost
all of the risks were in error-recovery paths, which we don't really
expect to hit anyway. Hence, while this is definitely a bug fix,
it doesn't quite seem worth back-patching.
Nikita Glukhov, Michael Paquier, Tom Lane
Discussion: https://postgr.es/m/28053a7d-10d8-fc23-b05c-b4749c873f63@postgrespro.ru
The foreign-key-checking loop in ATRewriteTables failed to ignore
relations without storage (e.g., partitioned tables), unlike the
initial loop. This accidentally worked as long as RI_Initial_Check
succeeded, which it does in most practical cases (including all the
ones exercised in the existing regression tests :-(). However, if
that failed, as for instance when there are permissions issues,
then we entered the slow fire-the-trigger-on-each-tuple path.
And that would try to read from the referencing relation, and fail
if it lacks storage.
A second problem, recently introduced in HEAD, was that this loop
had been broken by sloppy refactoring for the tableam API changes.
Repair both issues, and add a regression test case so we have some
coverage on this code path. Back-patch as needed to v11.
(It looks like this code could do with additional bulletproofing,
but let's get a working test case in place first.)
Hadi Moshayedi, Tom Lane, Andres Freund
Discussion: https://postgr.es/m/CAK=1=WrnNmBbe5D9sm3t0a6dnAq3cdbF1vXY816j1wsMqzC8bw@mail.gmail.com
Discussion: https://postgr.es/m/19030.1554574075@sss.pgh.pa.us
Discussion: https://postgr.es/m/20190325180405.jytoehuzkeozggxx%40alap3.anarazel.de
Similarly to the set of parameters for keepalives, a connection parameter
for libpq is added as well as a backend GUC, called tcp_user_timeout.
Increasing the TCP user timeout is useful to allow a connection to
survive extended periods without end-to-end connectivity, and decreasing
it allows applications to fail faster. By default, the parameter is 0,
which makes the connection use the system default; its handling follows
a logic close to that of the keepalive parameters. When connecting
through a Unix-domain socket, the parameters have no effect.
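For example (illustrative values; the unit is milliseconds):

    SET tcp_user_timeout = 10000;   -- backend GUC; 0 means the system default
    -- equivalent libpq connection parameter:
    --   host=db.example.com tcp_user_timeout=10000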
Author: Ryohei Nagaura
Reviewed-by: Fabien Coelho, Robert Haas, Kyotaro Horiguchi, Kirk
Jamison, Mikalai Keida, Takayuki Tsunakawa, Andrei Yahorau
Discussion: https://postgr.es/m/EDA4195584F5064680D8130B1CA91C45367328@G01JPEXMBYT04
If we need ordered output from a scan of a partitioned table, but
the ordering matches the partition ordering, then we don't need to
use a MergeAppend to combine the pre-ordered per-partition scan
results: a plain Append will produce the same results. This
both saves useless comparison work inside the MergeAppend proper,
and allows us to start returning tuples after starting up just
the first child node, not all of them.
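A hypothetical illustration (table and column names invented here):
with parts range-partitioned on ts and a btree index on ts in each
partition,

    SELECT * FROM parts ORDER BY ts LIMIT 10;

can now use a plain Append over the pre-ordered per-partition index
scans instead of a MergeAppend.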
However, all is not peaches and cream, because if some of the
child nodes have high startup costs then there will be big
discontinuities in the tuples-returned-versus-elapsed-time curve.
The planner's cost model cannot handle that (yet, anyway).
If we model the Append's startup cost as being just the first
child's startup cost, we may drastically underestimate the cost
of fetching slightly more tuples than are available from the first
child. Since we've had bad experiences with over-optimistic choices
of "fast start" plans for ORDER BY LIMIT queries, that seems scary.
As a klugy workaround, set the startup cost estimate for an ordered
Append to be the sum of its children's startup costs (as MergeAppend
would). This doesn't really describe reality, but it's less likely
to cause a bad plan choice than an underestimated startup cost would.
In practice, the cases where we really care about this optimization
will have child plans that are IndexScans with zero startup cost,
so that the overly conservative estimate is still just zero.
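(An illustrative example: if an ordered Append has three children with
startup costs 0, 10, and 100, its startup cost is now estimated as 110,
the sum, rather than 0, the first child's.)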
David Rowley, reviewed by Julien Rouhaud and Antonin Houska
Discussion: https://postgr.es/m/CAKJS1f-hAqhPLRk_RaSFTgYxd=Tz5hA7kQ2h4-DhJufQk8TGuw@mail.gmail.com
This allows the user to create duplicates of existing replication slots,
either logical or physical, and even to change properties such as whether
they are temporary or which output plugin is used.
There are multiple uses for this, such as initializing multiple replicas
using the slot for one base backup, investigating logical replication
issues, and selecting a different output plugin.
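The functions are not named above; assuming they are
pg_copy_physical_replication_slot() and pg_copy_logical_replication_slot(),
usage looks like:

    -- copy a logical slot, non-temporary, switching the output plugin
    SELECT pg_copy_logical_replication_slot('orig_slot', 'new_slot',
                                            false, 'test_decoding');
    -- copy a physical slot as a temporary slot
    SELECT pg_copy_physical_replication_slot('phys_slot', 'phys_copy', true);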
Author: Masahiko Sawada
Reviewed-by: Michael Paquier, Andres Freund, Petr Jelinek
Discussion: https://postgr.es/m/CAD21AoAm7XX8y_tOPP6j4Nzzch12FvA1wPqiO690RCk+uYVstg@mail.gmail.com
Prior to v12, if you used a collation-sensitive regex feature in a
pattern handled by processSQLNamePattern() (for instance, \d '\\w+'
in psql), the behavior you got matched the database's default collation.
Since commit 586b98fdf you'd usually get C-collation behavior, because
the catalog "name"-type columns are now marked as COLLATE "C". Add
explicit COLLATE specifications to restore the prior behavior.
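For example, the pattern-match clauses that psql generates now look
roughly like this (a sketch; \set ECHO_HIDDEN in psql shows the exact
queries):

    ... WHERE c.relname OPERATOR(pg_catalog.~) '^(\w+)$' COLLATE pg_catalog.default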
(Note for whoever writes the v12 release notes: the need for this shows
that while 586b98fdf preserved pre-v12 behavior of "name" columns for
simple comparison operators, it changed the behavior of regex operators
on those columns. Although this patch fixes it for pattern matches
generated by our own tools, user-written queries will still be affected.
So we'd better mention this issue as a compatibility item.)
Daniel Vérité
Discussion: https://postgr.es/m/701e51f0-0ec0-4e70-a365-1958d66dd8d2@manitou-mail.org
The limitation that a foreign table cannot be created or attached as
a partition of an indexed partitioned table was not documented.
Reported-By: Stepan Yankevych
Author: Etsuro Fujita
Reviewed-By: Amit Langote
Backpatch-through: 11 where partitioned index was introduced
Discussion: https://postgr.es/m/1553869152.858391073.5f8m3n0x@frv53.fwdcdn.com
Rewrite get_attgenerated() to avoid compiler warning if the compiler
does not recognize that elog(ERROR) does not return.
Reported-by: David Rowley <david.rowley@2ndquadrant.com>
This reverts commits 2f932f71d9f2963bbd201129d7b971c8f5f077fd,
16ee6eaf80a40007a138b60bb5661660058d0422 and
6f0e190056fe441f7cf788ff19b62b13c94f68f3. The buildfarm has revealed
several bugs. Back-patch like the original commits.
Discussion: https://postgr.es/m/20190404145319.GA1720877@rfd.leadboat.com
Commit 3eb77eba moved a _mdfd_getseg() call from mdsync() into a new
callback function mdsyncfiletag(), but didn't get the arguments quite
right. Without the EXTENSION_DONT_CHECK_SIZE flag we fail to open a
segment if lower-numbered segments have been truncated, and it wants
a block number rather than a segment number.
While comparing with the older coding, also remove an unnecessary
clobbering of errno, and adjust the code in mdunlinkfiletag() to
resemble the original code from mdpostckpt() more closely instead
of using an unnecessary call to smgropen().
Author: Thomas Munro
Discussion: https://postgr.es/m/CA%2BhUKGL%2BYLUOA0eYiBXBfwW%2BbH5kFgh94%3DgQH0jHEJ-t5Y91wQ%40mail.gmail.com
There was some confusion over the format of the error message returned
from the server during GSSAPI startup; specifically, it was expected
that a length would be returned when, in reality, at this early stage in
the startup sequence, no length is returned from the server as part of
an error message.
Correct the client-side code for dealing with error messages sent by the
server during startup by simply reading what's available into our
buffer, after we've discovered it's an error message, and then reporting
back what was returned.
In passing, also add documentation of the environment variable
PGGSSENCMODE, which was missed previously, and adjust the code to look
for the PGGSSENCMODE variable (the environment variable change was
missed in the prior GSSMODE -> GSSENCMODE commit).
Error-handling issue discovered by Peter Eisentraut; the rest were items
discovered during testing of the error handling.
Since 11, it is possible to use a non-superuser role when using an
online source cluster with pg_rewind, as long as the role has proper
permissions to execute on the source all the functions used by
pg_rewind, yet the documentation stated that a superuser is necessary.
Let's add at the same time all the details needed to create such a
role.
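A sketch of such a role, per the details added to the documentation
(the function signatures here are the ones pg_rewind uses):

    CREATE USER rewind_user LOGIN;
    GRANT EXECUTE ON FUNCTION pg_catalog.pg_ls_dir(text, boolean, boolean)
      TO rewind_user;
    GRANT EXECUTE ON FUNCTION pg_catalog.pg_stat_file(text, boolean)
      TO rewind_user;
    GRANT EXECUTE ON FUNCTION pg_catalog.pg_read_binary_file(text)
      TO rewind_user;
    GRANT EXECUTE ON FUNCTION pg_catalog.pg_read_binary_file(text, bigint, bigint, boolean)
      TO rewind_user;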
A second confusion that comes up a lot from users is that it is necessary
to issue a checkpoint on a freshly-promoted standby so that its control
file has up-to-date timeline information, which is used by pg_rewind to
validate the operation. Let's document that properly. This is
back-patched down to 9.5, where pg_rewind was introduced.
Author: Michael Paquier
Reviewed-by: Magnus Hagander
Discussion: https://postgr.es/m/CABUevEz5bpvbwVsYCaSMV80CBZ5-82nkMzbb+Bu=h1m=rLdn=g@mail.gmail.com
Backpatch-through: 9.5
Previously it was allowed to set default_table_access_method to an
empty string. That makes sense for default_tablespace, from which this
was copied, as an empty string signals falling back to the database's
default tablespace. As there is no equivalent for table AMs, forbid that.
Also make sure to throw a usable error when creating a table using an
index AM, by using get_am_type_oid() to implement get_table_am_oid()
instead of a separate copy. Previously we'd error out only later, in
GetTableAmRoutine().
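For example (illustrative):

    SET default_table_access_method = '';     -- now rejected
    SET default_table_access_method = heap;   -- must name an actual table AM
    CREATE TABLE t (a int) USING btree;       -- now fails with a usable error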
Thirdly remove GetTableAmRoutineByAmId() - it was only used in an
earlier version of 8586bf7ed8.
Add tests for the above (some for index AMs as well).
Commit c1afd175, which added support for rootdescend verification to
amcheck, added only minimal regression test coverage. Address this by
making sure that rootdescend verification is run on a multi-level index.
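For instance, the tests now run something like this (using amcheck's
bt_index_parent_check(); the index name is invented):

    SELECT bt_index_parent_check('multilevel_idx',
                                 heapallindexed => true,
                                 rootdescend => true);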
In passing, simplify some of the regression tests that exercise
multi-level nbtree page deletion.
Both issues spotted while rereviewing coverage of the nbtree patch
series using gcov.
This adds table_multi_insert(), and converts COPY FROM, the only user
of heap_multi_insert, to it.
A simple conversion of COPY FROM to use slots would have yielded a
slowdown when inserting into a partitioned table for some
workloads. Different partitions might need different slots (both slot
types and their descriptors), and dropping / creating slots when
there are constant partition changes is measurable.
Thus instead revamp the COPY FROM buffering for partitioned tables to
allow buffering inserts into multiple tables, flushing only when
limits are reached across all partition buffers. By only dropping
slots when there have been inserts into too many different partitions,
the aforementioned overhead is gone. By allowing larger batches, even
when there are frequent partition changes, we actually speed such cases
up significantly.
By using slots, COPY of very narrow rows into unlogged / temporary
tables might slow down very slightly (due to the indirect function calls).
Author: David Rowley, Andres Freund, Haribabu Kommi
Discussion: https://postgr.es/m/20180703070645.wchpu5muyto5n647@alap3.anarazel.de
Discussion: https://postgr.es/m/20190327054923.t3epfuewxfqdt22e@alap3.anarazel.de
This is intended for use mostly in test scripts for external tools,
which could do without cross-PG-version variations in error message
wording. Of course, the SQLSTATE isn't guaranteed stable either, but
it should be more so than the error message text.
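In psql, the new verbosity can presumably be selected with:

    \set VERBOSITY sqlstate

after which a failing query reports just the code, e.g. "ERROR:  42P01".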
Note: there's a bit of an ABI change for libpq here, but it seems
OK because if somebody compiles against a newer version of libpq-fe.h,
and then tries to pass PQERRORS_SQLSTATE to PQsetErrorVerbosity()
of an older libpq library, it will be accepted and then act like
PQERRORS_DEFAULT, thanks to the way the tests in pqBuildErrorMessage3
have historically been phrased. That seems acceptable.
Didier Gautheron, reviewed by Dagfinn Ilmari Mannsåker
Discussion: https://postgr.es/m/CAJRYxuKyj4zA+JGVrtx8OWAuBfE-_wN4sUMK4H49EuPed=mOBw@mail.gmail.com
The previous convention that stdout was selected by default when nothing
was specified was just too error-prone.
After a suggestion from Andrew Gierth.
Author: Euler Taveira
Reviewed-by: Yoshikazu Imai, José Arthur Benetasso Villanova
Discussion: https://postgr.es/m/87sgwrmhdv.fsf@news-spur.riddles.org.uk
The assertions added by commit b04aeb0a0 exposed that there are some
code paths wherein the executor will try to open an index without
holding any lock on it. We do have some lock on the index's table,
so it seems likely that there's no fatal problem with this (for
instance, the index couldn't get dropped from under us). Still,
it's bad practice and we should fix it.
To do so, remove the optimizations in ExecInitIndexScan and friends
that tried to avoid taking a lock on an index belonging to a target
relation, and just take the lock always. In non-bug cases, this
will result in no additional shared-memory access, since we'll find
in the local lock table that we already have a lock of the desired
type; hence, no significant performance degradation should occur.
Also, adjust the planner and executor so that the type of lock taken
on an index is always identical to the type of lock taken for its table,
by relying on the recently added RangeTblEntry.rellockmode field.
This avoids some corner cases where that might not have been true
before (possibly resulting in extra locking overhead), and prevents
future maintenance issues from having multiple bits of logic that
all needed to be in sync. In addition, this change removes all core
calls to ExecRelationIsTargetRelation, which avoids a possible O(N^2)
startup penalty for queries with large numbers of target relations.
(We'd probably remove that function altogether, were it not that we
advertise it as something that FDWs might want to use.)
Also adjust some places in selfuncs.c to not take any lock on indexes
they are transiently opening, since we can assume that plancat.c
did that already.
In passing, change gin_clean_pending_list() to take RowExclusiveLock
not AccessShareLock on its target index. Although it's not clear that
that's actually a bug, it seemed very strange for a function that's
explicitly going to modify the index to use only AccessShareLock.
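(For reference, that function is called as, e.g.:

    SELECT gin_clean_pending_list('some_gin_idx');

which merges the index's pending-list entries into the main GIN
structure - clearly a modification.)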
David Rowley, reviewed by Julien Rouhaud and Amit Langote,
a bit of further tweaking by me
Discussion: https://postgr.es/m/19465.1541636036@sss.pgh.pa.us