Very recent gcc complains that PLyObject_ToJsonbValue could return
a pointer to a local variable. I think it's wrong; but the coding
is fragile enough, and the savings of one palloc() minimal enough,
that it seems better to just do a palloc() all the time. (My other
idea of tweaking the if-condition doesn't suppress the warning.)
Back-patch to v11 where this code was introduced.
Discussion: https://postgr.es/m/21547.1580170366@sss.pgh.pa.us
We used to strategically place newlines after some function call left
parentheses to make pgindent move the argument list a few chars to the
left, so that the whole line would fit under 80 chars. However,
pgindent no longer does that, so the newlines just made the code
vertically longer for no reason. Remove those newlines, and reflow some
of those lines so they read more naturally.
Reviewed-by: Michael Paquier, Tom Lane
Discussion: https://postgr.es/m/20200129200401.GA6303@alvherre.pgsql
To make this work, (1) makeJsonLexContextCstringLen now takes the
encoding to be used as an argument; (2) check_stack_depth() is made to
do nothing in frontend code; and (3) elog(ERROR, ...) is changed to
pg_log_fatal + exit in frontend code.
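A minimal sketch of a call site after this change (the header paths, the
exact position of the new encoding argument, and the PG_UTF8 choice are
assumptions for illustration):

#include "postgres.h"

#include "mb/pg_wchar.h"
#include "utils/jsonapi.h"

/*
 * Illustrative only: parse a NUL-terminated JSON string, with the caller
 * passing the encoding explicitly instead of the lexer assuming the
 * database encoding.
 */
static void
parse_json_string(char *json)
{
    JsonLexContext *lex;

    lex = makeJsonLexContextCstringLen(json, strlen(json), PG_UTF8, true);
    pg_parse_json(lex, &nullSemAction);
}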
Mark Dilger, reviewed and slightly revised by me.
Discussion: http://postgr.es/m/CA+TgmoYfOXhd27MUDGioVh6QtpD0C1K-f6ObSA10AWiHBAL5bA@mail.gmail.com
These two new parameters, named sslminprotocolversion and
sslmaxprotocolversion, respectively control the minimum and the
maximum version of the SSL protocol used for the SSL connection attempt.
The default setting is to allow any version for both the minimum and the
maximum bounds, causing libpq to rely on the bounds set by the backend
when negotiating the protocol to use for an SSL connection. The bounds
are checked at the earliest stage possible, when the values are set,
which makes the checks independent of any SSL implementation.
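For instance, a libpq client can require TLSv1.2 or newer like this (a
sketch; the parameter name is the one introduced here, while the value
spelling "TLSv1.2" and the host name are assumptions):

#include <stdio.h>
#include "libpq-fe.h"

int
main(void)
{
    /* Put a floor on the TLS version, but no ceiling. */
    PGconn *conn = PQconnectdb("host=example.org dbname=postgres "
                               "sslmode=require "
                               "sslminprotocolversion=TLSv1.2");

    if (PQstatus(conn) != CONNECTION_OK)
        fprintf(stderr, "connection failed: %s", PQerrorMessage(conn));
    PQfinish(conn);
    return 0;
}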
Author: Daniel Gustafsson
Reviewed-by: Michael Paquier, Cary Huang
Discussion: https://postgr.es/m/4F246AE3-A7AE-471E-BD3D-C799D3748E03@yesql.se
In a statement like "UPDATE remote_tab SET (x,y) = (SELECT ...)",
we'd conclude that the statement could be directly executed remotely,
because the sub-SELECT is in a resjunk tlist item that's not examined
for shippability. Currently that ends up crashing if the sub-SELECT
contains any remote Vars. Prevent the crash by deeming MULTIEXPR
Params to be unshippable.
This is a bit of a brute-force solution, since if the sub-SELECT
*doesn't* contain any remote Vars, the current execution technology
would work; but that's not a terribly common use-case for this syntax,
I think. In any case, we generally don't try to ship sub-SELECTs, so
it won't surprise anybody that this doesn't end up as a remote direct
update. I'd be inclined to see if that general limitation can be fixed
before worrying about this case further.
Per report from Lukáš Sobotka.
Back-patch to 9.6. 9.5 had MULTIEXPR, but we didn't try to perform
remote direct updates then, so the case didn't arise anyway.
Discussion: https://postgr.es/m/CAJif3k+iA_ekBB5Zw2hDBaE1wtiQa4LH4_JUXrrMGwTrH0J01Q@mail.gmail.com
This function allows us to fsync the specified file or directory.
It's useful, for example, when we want to sync the file that
pg_file_write() writes out or that COPY TO exports the data into,
for durability.
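Assuming this refers to adminpack's pg_file_sync(), a client could pair
it with pg_file_write() roughly like this (a libpq sketch; the file name
and data are made up, and error handling is omitted for brevity):

#include "libpq-fe.h"

/* Write a file on the server, then make sure it reaches disk. */
static void
write_and_sync(PGconn *conn)
{
    PGresult *res;

    res = PQexec(conn, "SELECT pg_file_write('report.txt', 'data', false)");
    PQclear(res);

    res = PQexec(conn, "SELECT pg_file_sync('report.txt')");
    PQclear(res);
}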
Author: Fujii Masao
Reviewed-By: Julien Rouhaud, Arthur Zakirov, Michael Paquier, Atsushi Torikoshi
Discussion: https://www.postgresql.org/message-id/CAHGQGwGY8uzZ_k8dHRoW1zDcy1Z7=5GQ+So4ZkVy2u=nLsk=hA@mail.gmail.com
The strategy of a GIN index scan is driven by the opclass-specific
extract_query method. The search mode of interest here is
GIN_SEARCH_MODE_ALL, which means that a matching tuple may contain none
of the extracted entries. A simple example is the '!term' tsquery,
which doesn't need any term to exist in the matching tsvector.
In order to handle such a scan key, GIN calculates a virtual entry
containing all TIDs of all entries of the attribute. In effect this is
a full scan of the index attribute. That is typically very slow, but it
allows GIN to handle some queries correctly. However, the current
algorithm calculates such a virtual entry for each GIN_SEARCH_MODE_ALL
scan key, even when there are several of them for the same attribute.
This is clearly not optimal.
This commit improves the situation by introducing "exclude only" scan
keys. Such scan keys cannot return a set of matching TIDs on their own;
they can only filter the TIDs produced by normal scan keys. Therefore,
each attribute must have at least one normal scan key, while the rest
may be "exclude only" if their search mode is GIN_SEARCH_MODE_ALL.
The same optimization might be applied to the whole scan rather than
per-attribute, but that leads to the problem of eliminating NULL values.
There are trade-offs between multiple possible ways to do this; we
probably want to do it later using some cost-based decision algorithm.
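Conceptually, the idea looks like the following standalone sketch (this
is illustrative code, not GIN internals): candidates come only from the
normal scan keys, and exclude-only keys merely veto them.

#include <stdbool.h>
#include <stddef.h>

typedef bool (*exclude_key_check) (int tid);

/*
 * Keep only the candidate TIDs accepted by every exclude-only key.
 * The candidates are produced by the normal (entry-returning) scan keys;
 * exclude-only keys are never used to enumerate TIDs themselves.
 */
static size_t
filter_by_exclude_keys(const int *candidates, size_t ncand,
                       exclude_key_check *keys, size_t nkeys,
                       int *result)
{
    size_t nresult = 0;

    for (size_t i = 0; i < ncand; i++)
    {
        bool keep = true;

        for (size_t k = 0; k < nkeys; k++)
        {
            if (!keys[k](candidates[i]))
            {
                keep = false;
                break;
            }
        }
        if (keep)
            result[nresult++] = candidates[i];
    }
    return nresult;
}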
Discussion: https://postgr.es/m/CAOBaU_YGP5-BEt5Cc0%3DzMve92vocPzD%2BXiZgiZs1kjY0cj%3DXBg%40mail.gmail.com
Author: Nikita Glukhov, Alexander Korotkov, Tom Lane, Julien Rouhaud
Reviewed-by: Julien Rouhaud, Tomas Vondra, Tom Lane
Introduce new fields amusemaintenanceworkmem and amparallelvacuumoptions
in IndexAmRoutine for parallel vacuum. The amusemaintenanceworkmem field tells
whether a particular IndexAM uses maintenance_work_mem or not. This will
help in controlling the memory used by individual workers as otherwise,
each worker can consume memory equal to maintenance_work_mem. The
amparallelvacuumoptions field tells whether a particular IndexAM participates
in a parallel vacuum and if so in which phase (bulkdelete, vacuumcleanup) of
vacuum.
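A hedged sketch of what an index AM handler might now set; the handler
name, header list and VACUUM_OPTION_PARALLEL_BULKDEL flag are
illustrative assumptions, while the two field names are the ones
introduced here:

#include "postgres.h"

#include "access/amapi.h"
#include "commands/vacuum.h"
#include "fmgr.h"

PG_FUNCTION_INFO_V1(my_index_handler);

Datum
my_index_handler(PG_FUNCTION_ARGS)
{
    IndexAmRoutine *amroutine = makeNode(IndexAmRoutine);

    /* This AM's workers do not each need a full maintenance_work_mem. */
    amroutine->amusemaintenanceworkmem = false;
    /* Participate in parallel vacuum for the bulkdelete phase only. */
    amroutine->amparallelvacuumoptions = VACUUM_OPTION_PARALLEL_BULKDEL;

    /* ... the rest of the IndexAmRoutine fields are filled in as usual ... */

    PG_RETURN_POINTER(amroutine);
}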
Author: Masahiko Sawada and Amit Kapila
Reviewed-by: Dilip Kumar, Amit Kapila, Tomas Vondra and Robert Haas
Discussion: https://postgr.es/m/CAD21AoDTPMgzSkV4E3SFo1CH_x50bf5PqZFQf4jmqjk-C03BWg@mail.gmail.com
Discussion: https://postgr.es/m/CAA4eK1LmcD5aPogzwim5Nn58Ki+74a6Edghx4Wd8hAskvHaq5A@mail.gmail.com
We currently have several sets of files generated from data provided
by Unicode. These all have ad hoc rules and instructions for updating
when new Unicode versions appear, and it's not done consistently.
This patch centralizes and automates the process and makes it part of
the release checklist. The Unicode and CLDR versions are specified in
Makefile.global.in. There is a new make target "update-unicode" that
downloads all the relevant files and runs the generation script.
There is also a new script for generating the table of combining
characters for ucs_wcwidth(). That table is now in a separate include
file rather than hardcoded into the middle of other code. This is
based on the script that was used for generating
d8594d123c, but the script itself wasn't
committed at that time.
Reviewed-by: John Naylor <john.naylor@2ndquadrant.com>
Discussion: https://www.postgresql.org/message-id/flat/c8d05f42-443e-6c23-819b-05b31759a37c@2ndquadrant.com
Failures in allocations could lead to crashes with NULL pointer
dereferences. TopMemoryContext is used instead to keep the plans
allocated in the session alive. A more specific context could be used
here, but this is left for later.
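The pattern is roughly the following generic sketch (not the exact code
touched here), which keeps an allocation alive for the rest of the
session:

#include "postgres.h"

#include "utils/memutils.h"

/* Allocate an object in TopMemoryContext so it survives until session end. */
static void *
alloc_for_session(Size size)
{
    MemoryContext oldcxt = MemoryContextSwitchTo(TopMemoryContext);
    void       *obj = palloc0(size);

    MemoryContextSwitchTo(oldcxt);
    return obj;
}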
Reported-by: Jian Zhang
Author: Michael Paquier
Reviewed-by: Tom Lane, Andres Freund
Discussion: https://postgr.es/m/16190-70181c803641c3dc@postgresql.org
This follows multiple complaints from Peter Geoghegan, Andres Freund and
Alvaro Herrera that this issue ought to be dug into more before actually
happening, if it happens.
Discussion: https://postgr.es/m/20191226144606.GA5659@alvherre.pgsql
The following renaming is done so that source files related to index
access methods are more consistent with table access methods (the
original names used for index AMs were too generic, and could be
confused as including features related to table AMs):
- amapi.h -> indexam.h.
- amapi.c -> indexamapi.c. Here we have an equivalent with
backend/access/table/tableamapi.c.
- amvalidate.c -> indexamvalidate.c.
- amvalidate.h -> indexamvalidate.h.
- genam.c -> indexgenam.c.
- genam.h -> indexgenam.h.
This has been discussed during the development of v12 when table AM was
worked on, but the renaming never happened.
Author: Michael Paquier
Reviewed-by: Fabien Coelho, Julien Rouhaud
Discussion: https://postgr.es/m/20191223053434.GF34339@paquier.xyz
Using \ is unnecessary and ugly, so remove that. While at it, stitch
the literals back into a single line: we've long discouraged splitting
error message literals even when they go past the 80 chars line limit,
to improve greppability.
Leave contrib/tablefunc alone.
Discussion: https://postgr.es/m/20191223195156.GA12271@alvherre.pgsql
While building a hash map of categories in load_categories_hash,
resulting category names have not thus far been checked to ensure
they are not null. Prior to pg12 null category names worked to the
extent that they did not crash on some platforms. This is because
those platforms' system snprintf can deal with being passed a null
pointer for a string argument. But even in those cases null categories
did nothing useful, and on other platforms the code crashed.
As of pg12, our own version of snprintf gets called, and it does
not deal with null pointer arguments at all, and crashes consistently.
Fix that by disallowing null categories. They never worked usefully,
and no one has previously asked for them to work. Back-patch to
all supported branches.
Reported-By: Ireneusz Pluta
Discussion: https://postgr.es/m/16176-7489719b05e4303c@postgresql.org
Per project policy, transient roles created by regression test cases
should be named "regress_something", to reduce the risks of running
such cases against installed servers. And no such role should ever
be left behind after running a test.
Discussion: https://postgr.es/m/11297.1576868677@sss.pgh.pa.us
We realized years ago that it's better for libpq to accept all
connection parameters syntactically, even if some are ignored or
restricted due to lack of the feature in a particular build.
However, that lesson from the SSL support was for some reason never
applied to the GSSAPI support. This is causing various buildfarm
members to have problems with a test case added by commit 6136e94dc,
and it's just a bad idea from a user-experience standpoint anyway,
so fix it.
While at it, fix some places where parameter-related infrastructure
was added with the aid of a dartboard, or perhaps with the aid of
the anti-pattern "add new stuff at the end". It should be safe
to rearrange the contents of struct pg_conn even in released
branches, since that's private to libpq (and we'd have to move
some fields in some builds to fix this, anyway).
Back-patch to all supported branches.
Discussion: https://postgr.es/m/11297.1576868677@sss.pgh.pa.us
Currently postgres_fdw doesn't permit a non-superuser to connect to a
foreign server without specifying a password, or to use an
authentication mechanism that doesn't use the password. This is to avoid
using the settings and identity of the user running Postgres.
However, this doesn't make sense for all authentication methods. We
therefore allow a superuser to set "password_required 'false'" for user
mappings for the postgres_fdw. The superuser must ensure that the
foreign server won't try to rely solely on the server identity (e.g.
trust, peer, ident) or use an authentication mechanism that relies on the
password settings (e.g. md5, scram-sha-256).
This feature is a prelude to better support for sslcert and sslkey
settings in user mappings.
Author: Craig Ringer.
Discussion: https://postgr.es/m/075135da-545c-f958-fed0-5dcb462d6dae@2ndQuadrant.com
This patch causes EXPLAIN to always assign a separate table alias to the
parent RTE of an append relation (inheritance set); before, such RTEs
were ignored if not actually scanned by the plan. Since the child RTEs
now always have that same alias to start with (cf. commit 55a1954da),
the net effect is that the parent RTE usually gets the alias used or
implied by the query text, and the children all get that alias with "_N"
appended. (The exception to "usually" is if there are duplicate aliases
in different subtrees of the original query; then some of those original
RTEs will also have "_N" appended.)
This results in more uniform output for partitioned-table plans than
we had before: the partitioned table itself gets the original alias,
and all child tables have aliases with "_N", rather than the previous
behavior where one of the children would get an alias without "_N".
The reason for giving the parent RTE an alias, even if it isn't scanned
by the plan, is that we now use the parent's alias to qualify Vars that
refer to an appendrel output column and appear above the Append or
MergeAppend that computes the appendrel. But below the append, Vars
refer to some one of the child relations, and are displayed that way.
This seems clearer than the old behavior where a Var that could carry
values from any child relation was displayed as if it referred to only
one of them.
While at it, change ruleutils.c so that the code paths used by EXPLAIN
deal in Plan trees not PlanState trees. This effectively reverts a
decision made in commit 1cc29fe7c, which seemed like a good idea at
the time to make ruleutils.c consistent with explain.c. However,
it's problematic because we'd really like to allow executor startup
pruning to remove all the children of an append node when possible,
leaving no child PlanState to resolve Vars against. (That's not done
here, but will be in the next patch.) This requires different handling
of subplans and initplans than before, but is otherwise a pretty
straightforward change.
Discussion: https://postgr.es/m/001001d4f44b$2a2cca50$7e865ef0$@lab.ntt.co.jp
AfterTriggerExecute() retrieves a fresh tuple or pair of tuples from a
tuplestore and then stores the tuple(s) in the passed-in slot(s) if
AFTER_TRIGGER_FDW_FETCH, while it uses the most-recently-retrieved
tuple(s) stored in the slot(s) if AFTER_TRIGGER_FDW_REUSE. This was
done correctly before 12, but commit ff11e7f4b broke it by mistakenly
clearing the tuple(s) stored in the slot(s) in that function, leading to
an assertion failure as reported in bug #16139 from Alexander Lakhin.
Also, fix some other issues with the aforementioned commit in passing:
* For tg_newslot, which is a slot added to the TriggerData struct by the
commit to store new updated tuples, it didn't ensure the slot was NULL
if there was no such tuple.
* The commit failed to update the documentation about the trigger
interface.
Author: Etsuro Fujita
Backpatch-through: 12
Discussion: https://postgr.es/m/16139-94f9ccf0db6119ec%40postgresql.org
The dict_int text search dictionary template accepts a maxlen parameter,
which is then used to cap the length of input strings. The value was
not properly checked, and the code simply does
txt[d->maxlen] = '\0';
to insert a terminator, leading to segfaults with negative values.
This commit simply rejects values less than 1. The issue was there since
dict_int was introduced in 8.3, so backpatch all the way back to 9.4,
which is the oldest supported version.
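The fix amounts to a guard of roughly this shape in the dictionary's
init function (illustrative; the real error wording may differ):

if (d->maxlen < 1)
    ereport(ERROR,
            (errcode(ERRCODE_INVALID_PARAMETER_VALUE),
             errmsg("maxlen value has to be at least 1")));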
Reported-by: cili
Discussion: https://postgr.es/m/16144-a36a5bef7657047d@postgresql.org
Backpatch-through: 9.4
EXPLAIN generally only adds schema qualifications to table names when
VERBOSE is specified. In postgres_fdw's "Relations" output, table
names were always so qualified, but that was an implementation
restriction: in the original coding, we didn't have access to the
verbose flag at the time the string was generated. After the code
rearrangement of commit 4526951d5, we do have that info available
at the right time, so make this output follow the normal rule.
Discussion: https://postgr.es/m/12424.1575168015@sss.pgh.pa.us
If an inheritance/partitioning parent table is assigned some column
alias names in the query, EXPLAIN mapped those aliases onto the
child tables' columns by physical position, resulting in bogus output
if a child table's columns aren't one-for-one with the parent's.
To fix, make expand_single_inheritance_child() generate a correctly
re-mapped column alias list, rather than just copying the parent
RTE's alias node. (We have to fill the alias field, not just
adjust the eref field, because ruleutils.c will ignore eref in
favor of looking at the real column names.)
This means that child tables will now always have alias fields in
plan rtables, where before they might not have. That results in
a rather substantial set of regression test output changes:
EXPLAIN will now always show child tables with aliases that match
the parent table (usually with "_N" appended for uniqueness).
But that seems like a net positive for understandability, since
the parent alias corresponds to something that actually appeared
in the original query, while the child table names didn't.
(Note that this does not change anything for cases where an explicit
table alias was written in the query for the parent table; it
just makes cases without such aliases behave similarly to that.)
Hence, while we could avoid these subsidiary changes if we made
inherit.c more complicated, we choose not to.
Discussion: https://postgr.es/m/12424.1575168015@sss.pgh.pa.us
The relation aliases shown in the "Relations" line for a foreign scan
didn't always agree with those used in the rest of EXPLAIN's output.
The regression test result changes appearing here provide examples.
It's really impossible for postgres_fdw to duplicate EXPLAIN's alias
assignment logic during postgresGetForeignRelSize(), because of the
de-duplication that EXPLAIN does on a global basis --- and anyway,
trying to duplicate that would be unmaintainable. Instead, just put
numeric rangetable indexes into the string, and convert those to
table names/aliases in postgresExplainForeignScan, which does have
access to the results of ruleutils.c's alias assignment logic.
Aside from being more reliable, this shifts some work from planning
to EXPLAIN, which is a good tradeoff for performance. (I also
changed from using StringInfo to using psprintf, which makes the
code slightly simpler and reduces its memory consumption.)
A kluge required by this solution is that we have to reverse-engineer
the rtoffset applied by setrefs.c. If that logic ever fails
(presumably because the member tables of a join got offset by
different amounts), we'll need some more cooperation with setrefs.c
to keep things straight. But for now, there's no need for that.
Arguably this is a back-patchable bug fix, but since this is a mostly
cosmetic issue and there have been no field complaints, I'll refrain
for now.
Discussion: https://postgr.es/m/12424.1575168015@sss.pgh.pa.us
This patch provides support for password-protected SSL client keys
in libpq, and for DER format keys, both encrypted and unencrypted.
There is a new connection parameter sslpassword, which is supplied to
the OpenSSL libraries via a callback function. The callback function can
also be set by an application by calling PQgetSSLKeyPassHook(). There is
also a function to retrieve the connection setting, PQsslpassword().
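For example (a sketch using only the sslpassword parameter and
PQsslpassword() mentioned above; the host and file names are made up):

#include <stdio.h>
#include "libpq-fe.h"

int
main(void)
{
    /* The client key passphrase can now be given as a connection option. */
    PGconn *conn = PQconnectdb("host=example.org dbname=postgres "
                               "sslmode=require sslcert=client.crt "
                               "sslkey=client.key sslpassword=secret");

    if (PQstatus(conn) == CONNECTION_OK)
        printf("sslpassword = %s\n", PQsslpassword(conn));
    PQfinish(conn);
    return 0;
}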
Craig Ringer and Andrew Dunstan
Reviewed by: Greg Nancarrow
Discussion: https://postgr.es/m/f7ee88ed-95c4-95c1-d4bf-7b415363ab62@2ndQuadrant.com
Use SELinux "db_table: { truncate }" to check if permission is granted to
TRUNCATE. Update example SELinux policy to grant needed permission for
TRUNCATE. Add a new regression test to demonstrate positive and negative
cases. The test will only be run if the loaded SELinux policy has the
"db_table: { truncate }" permission. Makes use of recent commit which added
object TRUNCATE hook. Patch by Yuli Khodorkovskiy with minor
editorialization by me. Not back-patched because the object TRUNCATE hook
was not.
Author: Yuli Khodorkovskiy
Reviewed-by: Joe Conway
Discussion: https://postgr.es/m/CAFL5wJcomybj1Xdw7qWmPJRpGuFukKgNrDb6uVBaCMgYS9dkaA%40mail.gmail.com
Instead of deciding to serialize a transaction merely based on the
number of changes in that xact (toplevel or subxact), this makes
the decision based on the amount of memory consumed by the changes.
The memory limit is defined by a new logical_decoding_work_mem GUC,
so for example we can do this
SET logical_decoding_work_mem = '128kB'
to reduce the memory usage of walsenders, or set a higher value to
reduce disk writes. The minimum value is 64kB.
When adding a change to a transaction, we account for the size in
two places. Firstly, in the ReorderBuffer, which is then used to
decide if we reached the total memory limit. And secondly in the
transaction the change belongs to, so that we can pick the largest
transaction to evict (and serialize to disk).
We still use max_changes_in_memory when loading changes serialized
to disk. The trouble is that we can't use the memory limit directly,
as there might be multiple serialized subxacts; we need to read all of
them, but we don't know how many there are (or which subxact to read
first).
We do not serialize the ReorderBufferTXN entries, so if there is a
transaction with many subxacts, most of the memory may be in objects of
this type, and those records are not included in the memory accounting.
We also do not account for INTERNAL_TUPLECID changes, which are
kept in a separate list and not evicted from memory. Transactions
with many CTID changes may consume significant amounts of memory,
but we can't really do much about that.
The current eviction algorithm is very simple - the transaction is
picked merely by size, while it might be useful to also consider age
(LSN) of the changes for example. With the new Generational memory
allocator, evicting the oldest changes would make it more likely
the memory gets actually pfreed.
The logical_decoding_work_mem can be set in postgresql.conf, in which
case it serves as the default for all publishers on that instance.
Author: Tomas Vondra, with changes by Dilip Kumar and Amit Kapila
Reviewed-by: Dilip Kumar and Amit Kapila
Tested-By: Vignesh C
Discussion: https://postgr.es/m/688b0b7f-2f6c-d827-c27b-216a8e3ea700@2ndquadrant.com
postgres_fdw does not support two-phase transactions, so let's add a
small negative test case to check for that. Note that this is checked
using an end-of-xact callback to ensure proper connection cleanup with
the foreign server; that callback is called before checking whether the
server is able to handle 2PC with max_prepared_xacts, so this test does
not need an alternate output file.
Author: Gilles Darold
Discussion: https://postgr.es/m/20191108090507.GC1768@paquier.xyz
Currently, postgres_fdw does not support preparing a remote transaction
for two-phase commit even in the case where the remote transaction is
read-only, but the old error message appeared to imply that that was not
supported only if the remote transaction modified remote tables. Change
the message so as to include the case where the remote transaction is
read-only.
Also fix a comment above the message.
Also add a note about the lack of supporting PREPARE TRANSACTION to the
postgres_fdw documentation.
Reported-by: Gilles Darold
Author: Gilles Darold and Etsuro Fujita
Reviewed-by: Michael Paquier and Kyotaro Horiguchi
Backpatch-through: 9.4
Discussion: https://postgr.es/m/08600ed3-3084-be70-65ba-279ab19618a5%40darold.net
When maintaining or merging patches, one of the most common sources
of conflicts is the list of objects in makefiles. Especially when
the split across lines has been changed on both sides, which is
somewhat common due to attempting to stay below 80 columns, those
conflicts are unnecessarily laborious to resolve.
By splitting, and alphabetically sorting, OBJS style lines into one
object per line, conflicts should be less frequent, and easier to
resolve when they still occur.
Author: Andres Freund
Discussion: https://postgr.es/m/20191029200901.vww4idgcxv74cwes@alap3.anarazel.de
Historically, the code to build relation options has been shaped the
same way across multiple code paths: a set of input datums is parsed
against a static table of options, which is then filled with the option
values. This introduces a new common routine in reloptions.c to do most
of the legwork for the in-core code paths.
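As a sketch of the resulting call pattern, assuming the new routine is
reloptions.c's build_reloptions() (the struct and option table below are
made up for illustration):

#include "postgres.h"

#include "access/reloptions.h"

typedef struct MyOptions
{
    int32       vl_len_;        /* varlena header (do not touch directly) */
    int         fillfactor;
} MyOptions;

static const relopt_parse_elt my_relopt_tab[] = {
    {"fillfactor", RELOPT_TYPE_INT, offsetof(MyOptions, fillfactor)},
};

/* amoptions-style callback: parse and validate in one common call. */
bytea *
my_options(Datum reloptions, bool validate)
{
    return (bytea *) build_reloptions(reloptions, validate,
                                      RELOPT_KIND_HEAP,
                                      sizeof(MyOptions),
                                      my_relopt_tab,
                                      lengthof(my_relopt_tab));
}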
Author: Amit Langote
Reviewed-by: Michael Paquier
Discussion: https://postgr.es/m/CA+HiwqGsoSn_uTPPYT19WrtR7oYpYtv4CdS0xuedTKiHHWuk_g@mail.gmail.com
DatumGetPointer() takes a Datum argument, not a pointer.
This is cosmetic given the current definitions of the
underlying macros, but it's still formally a type violation.
Bug was introduced in commit 389bb2818, but there seems
no need to back-patch.
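That is, the conversion macros pair up like this (a trivial illustration,
not the code fixed here):

#include "postgres.h"

static void
roundtrip_example(void)
{
    char   *ptr = palloc(16);
    Datum   value = PointerGetDatum(ptr);   /* pointer -> Datum */

    Assert(DatumGetPointer(value) == ptr);  /* Datum -> pointer */
}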
Dagfinn Ilmari Mannsåker
Discussion: https://postgr.es/m/d8jlfsxq3a0.fsf@dalvik.ping.uio.no
This gives an alternative way of catching exceptions, for the common
case where the cleanup code is the same in the error and non-error
cases. So instead of
PG_TRY();
{
    ... code that might throw ereport(ERROR) ...
}
PG_CATCH();
{
    cleanup();
    PG_RE_THROW();
}
PG_END_TRY();
cleanup();
one can write
PG_TRY();
{
    ... code that might throw ereport(ERROR) ...
}
PG_FINALLY();
{
    cleanup();
}
PG_END_TRY();
Discussion: https://www.postgresql.org/message-id/flat/95a822c3-728b-af0e-d7e5-71890507ae0c%402ndquadrant.com
The basic rule we follow here is to always first include 'postgres.h' or
'postgres_fe.h' whichever is applicable, then system header includes and
then Postgres header includes. Within the Postgres header includes, we
also keep them ordered by their ASCII value. We generally follow these
rules, but the code has deviated in many places. This commit makes it
consistent just for contrib modules. Later commits will enforce similar
rules in other parts of the code.
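For example, a typical contrib source file would order its includes like
this (the specific headers are illustrative):

#include "postgres.h"

#include <sys/stat.h>
#include <unistd.h>

#include "access/htup_details.h"
#include "utils/builtins.h"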
Author: Vignesh C
Reviewed-by: Amit Kapila
Discussion: https://postgr.es/m/CALDaNm2Sznv8RR6Ex-iJO6xAdsxgWhCoETkaYX=+9DW3q0QCfA@mail.gmail.com
1df5875 has changed the way dependencies are dropped for this command
with inheritance trees, which impacts sepgsql. This just updates the
regression test output to take care of the failures and adapt to the new
code.
Reported by buildfarm member rhinoceros.
Author: Michael Paquier
Reviewed-by: Tom Lane
Discussion: https://postgr.es/m/20191013101331.GC1434@paquier.xyz
Backpatch-through: 12
Relation options can define a lock mode other than AccessExclusiveLock
since 47167b7, but modules defining custom relation options did not
really have a way to enforce that. Correct that by extending the
current API set so that modules can define a custom lock mode.
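A sketch of the extended API as seen from a module's _PG_init(); the
exact signature, with the trailing lock-mode argument, is an assumption
based on the description above, and the option itself is made up:

#include "postgres.h"

#include "access/reloptions.h"
#include "fmgr.h"
#include "storage/lockdefs.h"

PG_MODULE_MAGIC;

static relopt_kind my_relopt_kind;

void
_PG_init(void)
{
    my_relopt_kind = add_reloption_kind();

    /*
     * The new trailing argument lets the module choose the lock that
     * ALTER TABLE SET will take for this option.
     */
    add_int_reloption(my_relopt_kind, "my_option",
                      "an example module-specific option",
                      10, 1, 100,
                      ShareUpdateExclusiveLock);
}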
Author: Michael Paquier
Reviewed-by: Kuntal Ghosh
Discussion: https://postgr.es/m/20190920013831.GD1844@paquier.xyz
In-core relation options can use a custom lock mode since 47167b7, which
lowered the lock level used for some autovacuum parameters. However,
that change forgot to consider custom relation options. This causes
failures with ALTER TABLE SET when changing a custom relation option,
as its lock is not defined. The existing APIs to define a custom
reloption do not allow specifying a custom lock mode, so enforce its
initialization to AccessExclusiveLock, which should be safe enough in
all cases. An
upcoming patch will extend the existing APIs to allow a custom lock mode
to be defined.
The problem can be reproduced with bloom indexes, so add a test there.
Reported-by: Nikolay Sharplov
Analyzed-by: Thomas Munro, Michael Paquier
Author: Michael Paquier
Reviewed-by: Kuntal Ghosh
Discussion: https://postgr.es/m/20190920013831.GD1844@paquier.xyz
Backpatch-through: 9.6
When a relation is truncated, shared_buffers needs to be scanned
so that any buffers for the relation forks are invalidated in it.
Previously, shared_buffers was scanned for each relation fork, i.e.,
MAIN, FSM and VM, when VACUUM truncated off any empty pages
at the end of a relation or TRUNCATE truncated the relation in place.
Since shared_buffers needed to be scanned multiple times,
it could take a long time to finish those commands especially
when shared_buffers was large.
This commit changes the logic so that shared_buffers is scanned only
once for those three relation forks.
Author: Kirk Jamison
Reviewed-by: Masahiko Sawada, Thomas Munro, Alvaro Herrera, Takayuki Tsunakawa and Fujii Masao
Discussion: https://postgr.es/m/D09B13F772D2274BB348A310EE3027C64E2067@g01jpexmbkw24