Both INDEX_CLEANUP and TRUNCATE have been available since v12, and are
enabled by default unless the reloptions vacuum_index_cleanup and
vacuum_truncate, respectively, are disabled for a given relation. This
change adds
support for disabling these options from vacuumdb.
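For illustration, here is a rough sketch of how such options could end up in
the generated command; the function and flag names are made up for this
example and are not the actual vacuumdb code:

    #include <stdbool.h>
    #include "pqexpbuffer.h"

    /* Hypothetical flags mirroring the new vacuumdb switches. */
    static void
    build_vacuum_command(PQExpBuffer sql, const char *table,
                         bool no_index_cleanup, bool no_truncate)
    {
        appendPQExpBufferStr(sql, "VACUUM ");
        if (no_index_cleanup || no_truncate)
        {
            appendPQExpBufferChar(sql, '(');
            if (no_index_cleanup)
                appendPQExpBufferStr(sql, "INDEX_CLEANUP FALSE");
            if (no_truncate)
            {
                if (no_index_cleanup)
                    appendPQExpBufferStr(sql, ", ");
                appendPQExpBufferStr(sql, "TRUNCATE FALSE");
            }
            appendPQExpBufferStr(sql, ") ");
        }
        appendPQExpBufferStr(sql, table);
        appendPQExpBufferChar(sql, ';');
    }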
Author: Nathan Bossart
Reviewed-by: Michael Paquier, Masahiko Sawada
Discussion: https://postgr.es/m/6F7F17EF-B1F2-4681-8D03-BA96365717C0@amazon.com
As it stands, CancelRequested is only set when we've successfully sent a
cancel request, not if we get SIGINT and then fail to send a cancel.
However, for almost all callers, that's the Wrong Thing: we'd prefer
to abort processing after control-C even if no cancel could be sent.
As an example, since commit 1d468b9ad "pgbench -i" fails to give up
sending COPY data even after control-C, if the postmaster has been
stopped, which is clearly not what the code intends and not what anyone
would want. (The fact that it keeps going at all is the fault of a
separate bug in libpq, but not letting CancelRequested become set is
clearly not what we want here.)
The sole exception, as far as I can find, is that scripts_parallel.c's
ParallelSlotsGetIdle tries to consume a query result after issuing a
cancel, which of course might not terminate quickly if no cancel
happened. But that behavior was poorly thought out too. No user of
ParallelSlotsGetIdle tries to continue processing after a cancel,
so there is really no point in trying to clear the connection's state.
Moreover this has the same defect as for other users of cancel.c,
that if the cancel request fails for some reason then we end up with
control-C being completely ignored. (On top of that, select_loop failed
to distinguish clearly between SIGINT and other reasons for select(2)
failing, which means that it's possible that the existing code would
think that a cancel has been sent when it hasn't.)
Hence, redefine CancelRequested as simply meaning that SIGINT was
received. We could add a second flag with the other meaning, but
in the absence of any compelling argument why such a flag is needed,
I think it would just offer an opportunity for future callers to
get it wrong. Also remove the consumeQueryResult call in
ParallelSlotsGetIdle's failure exit. In passing, simplify the
API of select_loop.
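A minimal sketch of the new meaning, assuming a plain signal handler
(simplified compared to the actual fe_utils code):

    #include <signal.h>

    /* Sketch only: the flag now simply records that SIGINT was received. */
    static volatile sig_atomic_t CancelRequested = 0;

    static void
    handle_sigint(int signum)
    {
        CancelRequested = 1;
        /*
         * A cancel request may also be sent from here, but whether or not
         * that succeeds, the flag is set so that callers abort processing.
         */
    }

    void
    register_sigint_handler(void)
    {
        signal(SIGINT, handle_sigint);
    }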
It would now be possible to re-unify psql's cancel_pressed with
CancelRequested, partly undoing 5d43c3c54. But I'm not really
convinced that that's worth the trouble, so I left psql alone,
other than fixing a misleading comment.
This code is new in v13 (cf a4fd3aa71), so no need for back-patch.
Per investigation of a complaint from Andres Freund.
Discussion: https://postgr.es/m/20200603201242.ofvm4jztpqytwfye@alap3.anarazel.de
Includes some manual cleanup of places that pgindent messed up,
most of which weren't per project style anyway.
Notably, it seems some people didn't absorb the style rules of
commit c9d297751, because there were a bunch of new occurrences
of function calls with a newline just after the left paren, all
with faulty expectations about how the rest of the call would get
indented.
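For reference, the discouraged pattern looks roughly like this (a made-up
example, not code from the tree):

    extern int  some_function(int a, int b);

    int
    style_example(void)
    {
        int         result;

        /*
         * Discouraged: newline immediately after the left paren; pgindent
         * will not line up the rest of the call the way the author hoped.
         */
        result = some_function(
                               1, 2);

        /*
         * Preferred: keep the first argument on the same line and align
         * continuation lines with the opening parenthesis.
         */
        result = some_function(1,
                               2);

        return result;
    }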
OpenBSD falls back to "C" when an incorrect locale name is passed to
setlocale() with LC_CTYPE, causing this test, introduced by 008cf04, to
fail. This
removes the culprit test to avoid the portability issue.
Per report from Robert Haas, via buildfarm member curculio.
Discussion: https://postgr.es/m/CA+TgmoZ6ddh3mHD9gU8DvNYoFmuJaYYn1+4AvZNp25vTdRwCAQ@mail.gmail.com
Backpatch-through: 11
The original coding failed to properly quote those arguments, leading to
failures when the values used contain quotes. As the quoting can be
encoding-sensitive, the connection to the backend needs to be established
before applying the correct quoting.
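As a simplified sketch of the idea, with quoting done only once a connection
is available (the helper name here is illustrative, not the exact function
used):

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <libpq-fe.h>

    /*
     * Quote a user-supplied identifier according to the connection's
     * encoding, which requires the connection to be established first.
     */
    static char *
    quote_identifier_for_conn(PGconn *conn, const char *value)
    {
        char       *quoted = PQescapeIdentifier(conn, value, strlen(value));

        if (quoted == NULL)
        {
            fprintf(stderr, "quoting failed: %s", PQerrorMessage(conn));
            exit(1);
        }
        return quoted;          /* caller frees with PQfreemem() */
    }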
Author: Michael Paquier
Reviewed-by: Daniel Gustafsson
Discussion: https://postgr.es/m/20200214041004.GB1998@paquier.xyz
Backpatch-through: 9.5
Commit 40d964ec99 allowed the VACUUM command to leverage multiple CPUs by
invoking parallel workers to process indexes. This commit provides a
'--parallel' option to specify the parallel degree used by the VACUUM
command.
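Roughly, the generated query looks like the following sketch (simplified
compared to the real option handling in vacuumdb). With --parallel=4, for
example, the utility would issue VACUUM (PARALLEL 4) for each relation
processed.

    #include <stdio.h>

    /* Sketch: emit the VACUUM command for a given parallel degree. */
    static void
    print_vacuum_command(const char *table, int parallel_workers)
    {
        if (parallel_workers > 0)
            printf("VACUUM (PARALLEL %d) %s;\n", parallel_workers, table);
        else
            printf("VACUUM %s;\n", table);
    }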
Author: Masahiko Sawada, with a few modifications by me
Reviewed-by: Mahendra Singh and Amit Kapila
Discussion: https://postgr.es/m/CAD21AoDTPMgzSkV4E3SFo1CH_x50bf5PqZFQf4jmqjk-C03BWg@mail.gmail.com
This variable is now part of the refactored code for query cancellation
in fe_utils. This fixes an oversight in commit a4fd3aa. While at it,
improve some header includes in bin/scripts/.
Author: Michael Paquier
Reviewed-by: Fabien Coelho
Discussion: https://postgr.es/m/20191203101625.GF1634@paquier.xyz
Originally, this code was duplicated in src/bin/psql/ and
src/bin/scripts/, but it can be useful for other frontend applications,
like pgbench. This refactoring offers the possibility to set up a custom
callback which gets called from the signal handler for SIGINT, or when
console interruption events happen on Windows.
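A usage sketch of the callback mechanism; the registration function name and
signature below are assumptions made for illustration, not necessarily the
exact fe_utils API:

    /* Assumed shape of the registration routine described above. */
    extern void setup_cancel_handler(void (*callback)(void));

    static void
    my_cancel_callback(void)
    {
        /*
         * Called from the SIGINT handler (or the console event handler on
         * Windows), so keep this signal-safe: set flags, avoid stdio.
         */
    }

    void
    init_interrupt_handling(void)
    {
        setup_cancel_handler(my_cancel_callback);
    }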
Author: Fabien Coelho, with contributions from Michael Paquier
Reviewed-by: Álvaro Herrera, Ibrar Ahmed
Discussion: https://postgr.es/m/alpine.DEB.2.21.1910311939430.27369@lancre
This commit reverts the commits that added a test case for the 'force'
option when there is an active backend connected to the database being
dropped.
This feature internally sends SIGTERM to all the backends connected to the
database being dropped and then the same is reported to the client. We
found that on Windows, the client end of the socket is not able to read
the data once we close the socket on the server side, leading to the loss
of the error message, which is not what we expect. We also observed similar
behavior in other cases like pg_terminate_backend(),
pg_ctl kill TERM <pid>. There are probably a few others like that. The
fix for this requires further study.
Discussion: https://postgr.es/m/E1iaD8h-0004us-K9@gemulon.postgresql.org
Specifying '-f' will add the 'force' option to the DROP DATABASE command
sent to the server. This will try to terminate all existing connections
to the target database before dropping it.
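Sketch of the command generation, reflecting the DROP DATABASE syntax used
here (simplified compared to dropdb itself):

    #include <stdbool.h>
    #include <stdio.h>

    /* Sketch: build the DROP DATABASE command issued by dropdb. */
    static void
    print_drop_command(const char *dbname, bool force, bool if_exists)
    {
        printf("DROP DATABASE %s%s%s;\n",
               if_exists ? "IF EXISTS " : "",
               dbname,
               force ? " WITH (FORCE)" : "");
    }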
Author: Pavel Stehule
Reviewed-by: Vignesh C and Amit Kapila
Discussion: https://postgr.es/m/CAP_rwwmLJJbn70vLOZFpxGw3XD7nLB_7+NKz46H5EOO2k5H7OQ@mail.gmail.com
8ae0d47 marked those options as obsolete back in 2005, with the options
removed from the documentation. This removes the last references to
both options in the code, which had been kept around for compatibility
with past commands.
Author: Alexander Lakhin
Discussion: https://postgr.es/m/5da284a2-62d9-e338-88d1-26ee5009d93e@gmail.com
FD_SETSIZE needs to be defined before including winsock2.h, or it is
possible to run into buffer overflow issues when using --jobs. This is
similar to the solution used for pgbench in a23c641.
The issue has been introduced by 71d84ef; older versions have been using
the default value of FD_SETSIZE, which is defined as 64.
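The pattern looks roughly like this; the value of 1024 is illustrative, not
necessarily the one used in the tree:

    /*
     * On Windows, FD_SETSIZE must be defined before winsock2.h is included,
     * otherwise the default of 64 applies and fd_set can overflow when many
     * sockets are added to it.
     */
    #ifdef WIN32
    #define FD_SETSIZE 1024
    #include <winsock2.h>
    #endif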
Per buildfarm member jacana, but this impacts all Windows animals
running the TAP tests. I have reproduced the failure locally to check
the patch.
Author: Michael Paquier
Reviewed-by: Andrew Dunstan
Discussion: https://postgr.es/m/20190826054000.GE7005@paquier.xyz
Backpatch-through: 9.5
When trying to use a high number of jobs, vacuumdb (and more recently
reindexdb) only checked against a maximum number of jobs, causing
confusing failures when running out of file descriptors once the jobs
opened connections to Postgres. This commit changes the error handling
so that we no longer check the option value against FD_SETSIZE when
parsing it, but instead check whether a file descriptor is within the
supported range when opening the connections for the jobs, so that the
problem is detected as early as possible.
Also, improve the error message to give a hint about the recommended
number of jobs, using a wording suggested by the reviewers of the patch.
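A simplified sketch of the new check, assuming a POSIX environment (the real
code also has to cope with Windows):

    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/select.h>     /* FD_SETSIZE */
    #include <libpq-fe.h>

    /*
     * Instead of capping --jobs at option-parsing time, verify that each
     * new connection's socket still fits in an fd_set, and fail with a
     * hint about reducing the number of jobs.
     */
    static void
    check_connection_fd(PGconn *conn)
    {
        int         sock = PQsocket(conn);

        if (sock < 0 || sock >= FD_SETSIZE)
        {
            fprintf(stderr, "too many connections for this platform: "
                    "try reducing the -j/--jobs value\n");
            exit(1);
        }
    }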
Reported-by: Andres Freund
Author: Michael Paquier
Reviewed-by: Andres Freund, Álvaro Herrera, Tom Lane
Discussion: https://postgr.es/m/20190818001858.ho3ev4z57fqhs7a5@alap3.anarazel.de
Backpatch-through: 9.5
FD_SETSIZE is defined in sys/select.h per POSIX, and this header
inclusion has been moved to scripts_parallel.c as of 5f38403 without
moving the variable, causing a compilation failure on recent versions of
OpenBSD (6.6 was the version used in the report).
In order to take care of the failure, move FD_SETSIZE directly to
scripts_parallel.c with a wrapper controlling the maximum number of
parallel slots supported, based on a suggestion by Andres Freund.
While at it, reduce the maximum number to be less than FD_SETSIZE,
leaving some room for stdin, stdout and the like, as they consume some
file descriptors.
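A sketch of the wrapper, assuming a POSIX environment; the margin reserved
for the standard streams is illustrative:

    #include <sys/select.h>     /* FD_SETSIZE, per POSIX */

    /*
     * Keep the number of parallel slots safely below FD_SETSIZE, reserving
     * a few descriptors for stdin, stdout and friends.
     */
    #define PARALLEL_SLOT_MARGIN    3
    #define MAX_PARALLEL_SLOTS      (FD_SETSIZE - PARALLEL_SLOT_MARGIN)

    static int
    clamp_parallel_slots(int requested)
    {
        return (requested > MAX_PARALLEL_SLOTS) ? MAX_PARALLEL_SLOTS : requested;
    }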
The buildfarm did not complain about that, as it happens to only be
an issue on recent versions of OpenBSD and there is no coverage in this
area. 51c3e9f fixed a similar set of issues.
Bug: #15964
Reported-by: Sean Farrell
Discussion: https://postgr.es/m/15964-c1753bdfed722e04@postgresql.org
When building a list of relations for a parallel processing of a schema
or a database (or just a single-entry list for the non-parallel case
with the database name), the list is allocated and built on-the-fly for
each database processed, but never freed once a database-level reindex
is done. The leaks accumulate when processing all databases, and could
become a visible issue with thousands of relations.
This is fixed by introducing a new routine in simple_list.c to free all
the elements in a simple list made of strings or OIDs. The list header
may be either a local variable or an allocated pointer, so, to keep the
interface simple, no routine is provided to free that part.
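A sketch of such a routine, using a simplified stand-in for the list
structure (not the exact simple_list.c definitions):

    #include <stdlib.h>

    /* Simplified stand-in for a simple string list. */
    typedef struct ListCell
    {
        struct ListCell *next;
        char       *val;
    } ListCell;

    typedef struct StringList
    {
        ListCell   *head;
        ListCell   *tail;
    } StringList;

    /*
     * Free all cells of the list, but not the list header itself, which
     * may be a plain local variable in the caller.
     */
    static void
    string_list_destroy(StringList *list)
    {
        ListCell   *cell = list->head;

        while (cell != NULL)
        {
            ListCell   *next = cell->next;

            free(cell->val);
            free(cell);
            cell = next;
        }
        list->head = list->tail = NULL;
    }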
Per report from coverity for an issue introduced by 5ab892c, and
valgrind complains about the leak as well. The idea to introduce a new
routine in simple_list.c is from Tom Lane.
Author: Michael Paquier
Reviewed-by: Tom Lane
When doing a schema-level or a database-level operation, a list of
relations to reindex is built, which then gets processed in parallel using
multiple connections, based on the recent refactoring for parallel slots
in src/bin/scripts/. System catalogs are processed first in a
serialized fashion to prevent deadlocks, followed by the rest done in
parallel.
This new option is not compatible with --system, as reindexing system
catalogs in parallel can lead to deadlocks, nor with --index, as there is
no conflict handling for indexes rebuilt in parallel that depend on the
same relation.
Author: Julien Rouhaud
Reviewed-by: Sergei Kornilov, Michael Paquier
Discussion: https://postgr.es/m/CAOBaU_YrnH_Jqo46NhaJ7uRBiWWEcS40VNRQxgFbqYo9kApUsg@mail.gmail.com
The existing facility of vacuumdb for handling parallel connections into
a given database with a given authentication setup is moved to a common
file in src/bin/scripts/, named scripts_parallel.c. This introduces a
set of routines to initialize, wait on and terminate a set of
connections, simplifying the code of vacuumdb a bit along the way. More
routines related to result handling and database connections are moved
to common.c.
The initial plan is to use that for reindexdb, but it could be applied
to other tools like clusterdb.
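Conceptually, a parallel slot is little more than a connection plus a busy
flag; the structure and function below are a simplified sketch of that shape,
not the exact scripts_parallel.c definitions:

    #include <stdbool.h>
    #include <stddef.h>
    #include <libpq-fe.h>

    /* Simplified sketch of a parallel slot. */
    typedef struct ParallelSlot
    {
        PGconn     *connection;     /* one connection per slot */
        bool        isFree;         /* is the slot idle or busy with a query? */
    } ParallelSlot;

    /* Close all connections once the work is done. */
    static void
    terminate_slots(ParallelSlot *slots, int numslots)
    {
        for (int i = 0; i < numslots; i++)
        {
            if (slots[i].connection != NULL)
                PQfinish(slots[i].connection);
            slots[i].connection = NULL;
            slots[i].isFree = true;
        }
    }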
While at it, clean up the set of "progname" variables that were passed
as routine arguments for error messages. Since most of the callers have
switched to pg_log_error() and the like, there is no need for this
variable anymore.
Author: Julien Rouhaud
Reviewed-by: Michael Paquier, Álvaro Herrera
Discussion: https://postgr.es/m/CAOBaU_YrnH_Jqo46NhaJ7uRBiWWEcS40VNRQxgFbqYo9kApUsg@mail.gmail.com
This changes various places where appendPQExpBuffer was used in places
where it was possible to use appendPQExpBufferStr, and likewise for
appendStringInfo and appendStringInfoString. This is really just a
stylistic improvement, but there are also small performance gains to be
had from doing this.
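For example (the relation name is only a placeholder):

    #include "pqexpbuffer.h"

    static void
    append_example(PQExpBuffer buf, const char *relname)
    {
        /* Before: goes through format processing with nothing to format */
        appendPQExpBuffer(buf, "VACUUM ");

        /* After: the constant string is appended as-is, which is cheaper */
        appendPQExpBufferStr(buf, "VACUUM ");

        /* appendPQExpBuffer() remains the right call when formatting is needed */
        appendPQExpBuffer(buf, "%s;", relname);
    }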
Discussion: http://postgr.es/m/CAKJS1f9P=M-3ULmPvr8iCno8yvfDViHibJjpriHU8+SXUgeZ=w@mail.gmail.com
This merges the portion related to REINDEX SYSTEM into the routine
already available for all the other reindex types, making the query
generation cleaner. While at it, switch the handling of the reindex
types to an enum, which makes it possible to get rid of the hardcoded
strings previously used directly in the query generation for the same
purpose ("TABLE", "DATABASE", etc.).
Per discussion with Julien Rouhaud, Tom Lane, Alvaro Herrera and me.
Author: Julien Rouhaud
Discussion: https://postgr.es/m/CAOBaU_bSmSik_WRK9niDnm-3NkNZky6+uKxkmQwvthZvMWpS5A@mail.gmail.com
The original placement of this module in src/fe_utils/ is ill-considered,
because several src/common/ modules have dependencies on it, meaning that
libpgcommon and libpgfeutils now have mutual dependencies. That makes it
pointless to have distinct libraries at all. The intended design is that
libpgcommon is lower-level than libpgfeutils, so only dependencies from
the latter to the former are acceptable.
We already have the precedent that fe_memutils and a couple of other
modules in src/common/ are frontend-only, so it's not stretching anything
out of whack to treat logging.c as a frontend-only module in src/common/.
To the extent that such modules help provide a common frontend/backend
environment for the rest of common/ to use, it's a reasonable design.
(logging.c does not yet provide an ereport() emulation, but one can
dream.)
Hence, move these files over, and revert basically all of the build-system
changes made by commit cc8d41511. There are no places that need to grow
new dependencies on libpgcommon, further reinforcing the idea that this
is the right solution.
Discussion: https://postgr.es/m/a912ffff-f6e4-778a-c86a-cf5c47a12933@2ndquadrant.com
The function had been interpreting SQL_ASCII messages as UTF8, throwing
an error when they were invalid UTF8. The new behavior is consistent
with pg_do_encoding_conversion(). This affects LOG_DESTINATION_STDERR
and LOG_DESTINATION_EVENTLOG, which will send untranslated bytes to
write() and ReportEventA(). On buildfarm member bowerbird, enabling
log_connections caused an error whenever the role name was not valid
UTF8. Back-patch to 9.4 (all supported versions).
Discussion: https://postgr.es/m/20190512015615.GD1124997@rfd.leadboat.com
When failing to reindex a table or an index, reindexdb would generate an
extra error message related to a database failure, which is misleading.
Backpatch all the way down, as this has been introduced by 85e9a5a0.
Discussion: https://postgr.es/m/CAOBaU_Yo61RwNO3cW6WVYWwH7EYMPuexhKqufb2nFGOdunbcHw@mail.gmail.com
Author: Julien Rouhaud
Reviewed-by: Daniel Gustafsson, Álvaro Herrera, Tom Lane, Michael Paquier
Backpatch-through: 9.4
When running a batch of VACUUM or ANALYZE commands on a given database,
there were cases where vacuumdb would not report an error when it
actually should, leading to incorrect status results.
Author: Julien Rouhaud
Reviewed-by: Amit Kapila, Michael Paquier
Discussion: https://postgr.es/m/CAOBaU_ZuTwz7CtqLYJ1Ouuh272bTQPLN8b1bAPk0bCBm4PDMTQ@mail.gmail.com
Backpatch-through: 9.5
This unifies the various ad hoc logging (message printing, error
printing) systems used throughout the command-line programs.
Features:
- Program name is automatically prefixed.
- Message string does not end with newline. This removes a common
source of inconsistencies and omissions.
- Additionally, a final newline is automatically stripped, simplifying
use of PQerrorMessage() etc., another common source of mistakes.
- I converted error message strings to use %m where possible.
- As a result of the above several points, more translatable message
strings can be shared between different components and between
frontend and backend, without gratuitous punctuation or whitespace
differences.
- There is support for setting a "log level". This is not meant to be
user-facing, but can be used internally to implement debug or
verbose modes.
- Lazy argument evaluation, so no significant overhead if logging at
some level is disabled.
- Some color in the messages, similar to gcc and clang. Set
PG_COLOR=auto to try it out. Some colors are predefined, but can be
customized by setting PG_COLORS.
- Common files (common/, fe_utils/, etc.) can handle logging much more
simply by just using one API without worrying too much about the
context of the calling program, requiring callbacks, or having to
pass "progname" around everywhere.
- Some programs called setvbuf() to make sure that stderr is
unbuffered, even on Windows. But not all programs did that. This
is now done centrally.
Soft goals:
- Reduces vertical space use and visual complexity of error reporting
in the source code.
- Encourages more deliberate classification of messages. For example,
in some cases it wasn't clear without analyzing the surrounding code
whether a message was meant as an error or just an info.
- Concepts and terms are vaguely aligned with popular logging
frameworks such as log4j and Python logging.
This is all just about printing stuff out. Nothing affects program
flow (e.g., fatal exits). The uses are just too varied to do that.
Some existing code had wrappers that do some kind of print-and-exit,
and I adapted those.
I tried to keep the output mostly the same, but there is a lot of
historical baggage to unwind and special cases to consider, and I
might not always have succeeded. One significant change is that
pg_rewind used to write all error messages to stdout. That is now
changed to stderr.
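A small usage sketch of the API from a frontend program (the file name in the
message is just a placeholder):

    #include "common/logging.h"

    int
    main(int argc, char *argv[])
    {
        /* The program name is registered once and prefixed automatically. */
        pg_logging_init(argv[0]);

        /* No trailing newline; %m expands to the current errno's message. */
        pg_log_error("could not open file \"%s\": %m", "some_file");
        pg_log_warning("this is only a warning");
        pg_log_info("connecting to database \"%s\"", "postgres");

        return 0;
    }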
Reviewed-by: Donald Dong <xdong@csumb.edu>
Reviewed-by: Arthur Zakirov <a.zakirov@postgrespro.ru>
Discussion: https://www.postgresql.org/message-id/flat/6a609b43-4f57-7348-6480-bd022f924310@2ndquadrant.com
This adds the CONCURRENTLY option to the REINDEX command. A REINDEX
CONCURRENTLY on a specific index creates a new index (like CREATE
INDEX CONCURRENTLY), then renames the old index away and the new index
in place and adjusts the dependencies, and then drops the old
index (like DROP INDEX CONCURRENTLY). The REINDEX command also has
the capability to run its other variants (TABLE, DATABASE) with the
CONCURRENTLY option (but not SYSTEM).
The reindexdb command gets the --concurrently option.
Author: Michael Paquier, Andreas Karlsson, Peter Eisentraut
Reviewed-by: Andres Freund, Fujii Masao, Jim Nasby, Sergei Kornilov
Discussion: https://www.postgresql.org/message-id/flat/60052986-956b-4478-45ed-8bd119e9b9cf%402ndquadrant.com#74948a1044c56c5e817a5050f554ddee
These two new options can be used to further improve the selection of
relations to vacuum or analyze based on the age of, respectively, their
transaction ID or their multixact ID, making it possible to prioritize
tables so as to prevent wraparound of one or the other.
Combined with --table, it is possible to further narrow down the set of
tables chosen as potential processing targets.
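As a sketch of the kind of filter this adds to the catalog query used to
build the list of relations (simplified, and only shown for the transaction
ID case; the exact clause in vacuumdb is more complete):

    #include "pqexpbuffer.h"

    /*
     * Restrict the relations considered to those whose transaction ID age
     * exceeds the requested threshold.
     */
    static void
    append_min_xid_age_filter(PQExpBuffer catalog_query, int min_xid_age)
    {
        if (min_xid_age > 0)
            appendPQExpBuffer(catalog_query,
                              " AND pg_catalog.age(c.relfrozenxid)"
                              " OPERATOR(pg_catalog.>=) %d",
                              min_xid_age);
    }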
Author: Nathan Bossart
Reviewed-by: Michael Paquier, Masahiko Sawada
Discussion: https://postgr.es/m/FFE5373C-E26A-495B-B5C8-911EC4A41C5E@amazon.com
vacuumdb would use a catalog query only when the command caller did not
define a list of tables. Switching to a catalog query in all cases
brings two advantages:
- Relation existence checks can happen before running any VACUUM or
ANALYZE query. Before this change, if multiple relations were defined
using --table, the utility would fail only after processing the ones
defined first, which may take a long time depending on the size of the
relations. This adds checks for the relation names, and does nothing,
at least yet, for the attribute names.
- More filtering options can become available for the utility user.
These options, which may be introduced later on, are based on the
relation size or the relation age, and need to be made available even if
the user does not list any specific table with --table.
Author: Nathan Bossart
Reviewed-by: Michael Paquier, Masahiko Sawada
Discussion: https://postgr.es/m/FFE5373C-E26A-495B-B5C8-911EC4A41C5E@amazon.com
vacuumdb generates SQL queries by itself to run ANALYZE or VACUUM on the
backend, but we never actually checked the query patterns generated when
column lists are defined.
Author: Michael Paquier
Reviewed-by: Nathan Bossart
Discussion: https://postgr.es/m/FFE5373C-E26A-495B-B5C8-911EC4A41C5E@amazon.com