The planner can sometimes compute very large values for numGroups, and in
cases where we have no alternative to building a hashtable, such a value
will get fed directly to BuildTupleHashTable as its nbuckets parameter.
There were two ways in which that could go bad. First, BuildTupleHashTable
declared the parameter as "int" but most callers were passing "long"s,
so on 64-bit machines undetected overflow could occur leading to a bogus
negative value. The obvious fix for that is to change the parameter to
"long", which is what I've done in HEAD. In the back branches that seems a
bit risky, though, since third-party code might be calling this function.
So for them, just put in a kluge to treat negative inputs as INT_MAX.
Second, hash_create can go nuts with extremely large requested table sizes
(notably, my_log2 becomes an infinite loop for inputs larger than
LONG_MAX/2). What seems most appropriate to avoid that is to bound the
initial table size request to work_mem.
This fixes bug #6035 reported by Daniel Schreiber. Although the reported
case only occurs back to 8.4 since it involves WITH RECURSIVE, I think
it's a good idea to install the defenses in all supported branches.
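As a rough illustration of the overflow hazard and the back-branch kluge, here
is a standalone sketch (sanitize_nbuckets is a hypothetical helper, not the
actual BuildTupleHashTable interface):

    #include <limits.h>
    #include <stdio.h>

    /*
     * Sketch of the back-branch kluge, not the actual BuildTupleHashTable
     * code: a planner estimate carried as "long" can wrap to a bogus
     * negative value when narrowed to an "int" parameter on 64-bit
     * machines, so negative inputs are simply treated as INT_MAX.
     */
    static int
    sanitize_nbuckets(long num_groups)
    {
        int nbuckets = (int) num_groups;    /* possible undetected overflow */

        if (nbuckets <= 0)
            nbuckets = INT_MAX;     /* kluge: treat negative inputs as INT_MAX */
        return nbuckets;
    }

    int
    main(void)
    {
        /* 3000000000 doesn't fit in a 32-bit int; the raw cast goes negative */
        printf("%d\n", sanitize_nbuckets(3000000000L));
        return 0;
    }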
The code to assemble ldap_get_values_len's output into a single string
wrote the terminating null one byte past where it belonged. Fix that,
and make some other cosmetic adjustments to make the code a trifle more
readable and more in line with usual Postgres coding style.
Also, free the "result" string when done with it, to avoid a permanent
memory leak.
Bug report and patch by Albe Laurenz, cosmetic adjustments by me.
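A standalone sketch of the off-by-one class involved, using hypothetical names
rather than the actual LDAP auth code; the point is that the terminator belongs
at result[total], immediately after the last copied byte:

    #include <stdlib.h>
    #include <string.h>

    /*
     * Assemble several counted values into one string: after copying
     * "total" bytes, the terminating null goes at result[total], not one
     * byte later.
     */
    char *
    join_values(const char *const *vals, const size_t *lens, int nvals)
    {
        size_t  total = 0;
        size_t  pos = 0;
        char   *result;

        for (int i = 0; i < nvals; i++)
            total += lens[i];

        result = malloc(total + 1);         /* +1 for the terminator itself */
        if (result == NULL)
            return NULL;

        for (int i = 0; i < nvals; i++)
        {
            memcpy(result + pos, vals[i], lens[i]);
            pos += lens[i];
        }
        result[pos] = '\0';     /* result[pos + 1] would be past the buffer */
        return result;          /* caller must free() it, per the leak fix above */
    }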
After finding an EXISTS or ANY sub-select that can be converted to a
semi-join or anti-join, we should recurse into the body of the sub-select.
This allows cases such as EXISTS-within-EXISTS to be optimized properly.
The original coding would leave the lower sub-select as a SubLink, which
is no better and often worse than what we can do with a join. Per example
from Wayne Conrad.
Back-patch to 8.4. There is a related issue in older versions' handling
of pull_up_IN_clauses, but they're lame enough anyway about the whole area
that it seems not worth the extra work to try to fix it.
We must lock out autovacuuming of the old toast table before computing the
OldestXmin horizon we will use. Otherwise, autovacuum could start on the
toast table later, compute a later OldestXmin horizon, and remove as DEAD
toast tuples that we still need (because we think their parent tuples are
only RECENTLY_DEAD). Per further thought about bug #5998.
VACUUM was willing to remove a committed-dead tuple immediately if it was
deleted by the same transaction that inserted it. The idea is that such a
tuple could never have been visible to any other transaction, so we don't
need to keep it around to satisfy MVCC snapshots. However, there was
already an exception for tuples that are part of an update chain, and this
exception created a problem: we might remove TOAST tuples (which are never
part of an update chain) while their parent tuple stayed around (if it was
part of an update chain). This didn't pose a problem for most things,
since the parent tuple is indeed dead: no snapshot will ever consider it
visible. But MVCC-safe CLUSTER had a problem, since it will try to copy
RECENTLY_DEAD tuples to the new table. It then has to copy their TOAST
data too, and would fail if VACUUM had already removed the toast tuples.
Easiest fix is to get rid of the special case for xmin == xmax. This may
delay reclaiming dead space for a little bit in some cases, but it's by far
the most reliable way to fix the issue.
Per bug #5998 from Mark Reid. Back-patch to 8.3, which is the oldest
version with MVCC-safe CLUSTER.
Convert it to use successive shifts right instead of increasing a divisor.
This is probably a tad more efficient than the original coding, and it's
nicer-looking than the previous patch because we don't need a special case
to avoid overflow in the last branch. But the real reason to do it is to
avoid a Solaris compiler bug, as per results from buildfarm member moa.
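A standalone sketch of the successive-right-shift scheme (size_pretty here is a
simplified, hypothetical helper, not the actual function): shifting the value
itself down 10 bits per unit means no divisor ever has to grow, so the last
branch needs no overflow special case.

    #include <stdint.h>
    #include <stdio.h>

    static void
    size_pretty(int64_t size, char *buf, size_t buflen)
    {
        const int64_t limit = 10 * 1024;    /* stay in a unit up to ~10240 of it */

        if (size < limit)
        {
            snprintf(buf, buflen, "%lld bytes", (long long) size);
            return;
        }
        size >>= 9;                         /* to half-kilobytes, for rounding */
        if (size < limit * 2)
            snprintf(buf, buflen, "%lld kB", (long long) ((size + 1) / 2));
        else
        {
            size >>= 10;                    /* to half-megabytes */
            if (size < limit * 2)
                snprintf(buf, buflen, "%lld MB", (long long) ((size + 1) / 2));
            else
            {
                size >>= 10;                /* to half-gigabytes */
                if (size < limit * 2)
                    snprintf(buf, buflen, "%lld GB", (long long) ((size + 1) / 2));
                else
                {
                    size >>= 10;            /* to half-terabytes */
                    snprintf(buf, buflen, "%lld TB", (long long) ((size + 1) / 2));
                }
            }
        }
    }

    int
    main(void)
    {
        char    buf[32];

        size_pretty((int64_t) 1536 * 1024 * 1024, buf, sizeof(buf));
        printf("%s\n", buf);                /* prints "1536 MB" */
        return 0;
    }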
Per recent discussion, it's important for all computed datums (not only the
results of input functions) to not contain any ill-defined (uninitialized)
bits. Failing to ensure that can result in equal() reporting that
semantically indistinguishable Consts are not equal, which in turn leads to
bizarre and undesirable planner behavior, such as in a recent example from
David Johnston. We might eventually try to fix this in a general manner by
allowing datatypes to define identity-testing functions, but for now the
path of least resistance is to expect datatypes to force all unused bits
into consistent states.
Per some testing by Noah Misch, array and path functions seem to be the
only ones presenting risks at the moment, so I looked through all the
functions in adt/array*.c and geo_ops.c and fixed them as necessary. In
the array functions, the easiest/safest fix is to allocate result arrays
with palloc0 instead of palloc. Possibly in future someone will want to
look into whether we can just zero the padding bytes, but that looks too
complex for a back-patchable fix. In the path functions, we already had a
precedent in path_in for just zeroing the one known pad field, so duplicate
that code as needed.
Back-patch to all supported branches.
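A standalone illustration (not PostgreSQL code) of why uninitialized pad bytes
defeat bitwise equality tests such as equal() on Consts, and why zeroing the
whole allocation palloc0-style is the easy fix:

    #include <stdio.h>
    #include <string.h>
    #include <stdlib.h>

    /*
     * Structs with identical field values can still differ in their pad
     * bytes, so a bitwise comparison may call them unequal unless the
     * allocations were zeroed first.
     */
    typedef struct
    {
        char    flag;           /* typically followed by 3 pad bytes */
        int     value;
    } PaddedDatum;

    static PaddedDatum *
    make_datum(char flag, int value, int zero_first)
    {
        PaddedDatum *d = malloc(sizeof(PaddedDatum));

        if (zero_first)
            memset(d, 0, sizeof(PaddedDatum));  /* the palloc0-style fix */
        d->flag = flag;
        d->value = value;
        return d;
    }

    int
    main(void)
    {
        PaddedDatum *a = make_datum('x', 42, 1);
        PaddedDatum *b = make_datum('x', 42, 1);

        /* With zero_first = 0, this can report "no" despite identical fields. */
        printf("bitwise equal: %s\n",
               memcmp(a, b, sizeof(PaddedDatum)) == 0 ? "yes" : "no");
        free(a);
        free(b);
        return 0;
    }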
Most commenters agreed that this is more friendly than silently failing
to match the line during actual connection attempts. Also, this will
prevent corner cases that might arise when trying to handle such a line
when the SSL code isn't turned on. An example is that specifying
clientcert=1 in such a line would formerly result in a completely
misleading complaint that root.crt wasn't present, as seen in a recent
report from Marc-Andre Laverdiere. While we could have instead fixed
that specific behavior, it seems likely that we'd have a continuing stream
of such bizarre behaviors if we keep on allowing hostssl lines when SSL is
disabled.
Back-patch to 8.4, where clientcert was introduced. Earlier versions don't
have this specific issue, and the code is enough different to make this
patch not applicable without more work than it seems worth.
The expression that tried to round the value to the nearest TB could
overflow, leading to bogus output as reported in bug #5993 from Nicola
Cossu. This isn't likely to ever happen in the intended usage of the
function (if it could, we'd be needing to use a wider datatype instead);
but it's not hard to give the expected output, so let's do so.
If we find a DELETE_IN_PROGRESS HOT-updated tuple, it is impossible to know
whether to index it or not except by waiting to see if the deleting
transaction commits. If it doesn't, the tuple might again be LIVE, meaning
we have to index it. So wait and recheck in that case.
Also, we must not rely on ii_BrokenHotChain to decide that it's possible to
omit tuples from the index. That could result in omitting tuples that we
need, particularly in view of yesterday's fixes to not necessarily set
indcheckxmin (but it's broken even without that, as per my analysis today).
Since this is just an extremely marginal performance optimization, dropping
the test shouldn't hurt.
These cases are only expected to happen in system catalogs (they're
possible there due to early release of RowExclusiveLock in most
catalog-update code paths). Since reindexing of a system catalog isn't a
particularly performance-critical operation anyway, there's no real need to
be concerned about possible performance degradation from these changes.
The worst aspects of this bug were introduced in 9.0 --- 8.x will always
wait out a DELETE_IN_PROGRESS tuple. But I think dropping index entries
on the strength of ii_BrokenHotChain is dangerous even without that, so
back-patch removal of that optimization to 8.3 and 8.4.
Per comment from Greg Stark, it's less clear that HOT chains don't conflict
with the index than it would be for a valid index. So, let's preserve the
former behavior that indcheckxmin does get set when there are
potentially-broken HOT chains in this case. This change does not cause any
pg_index update that wouldn't have happened anyway, so we're not
re-introducing the previous bug with pg_index updates, and surely the case
is not significant from a performance standpoint; so let's be as
conservative as possible.
"People's Republic of China" locale on Windows was causing initdb to fail.
This fixes bug #5818 reported by yulei. On master, this makes the mapping
of "People's Republic of China" to just "China" obsolete. In 9.0 and 8.4,
just fix the escaping. Earlier versions didn't have locale names in bki
file.
There can never be a need to push the indcheckxmin horizon forward, since
any HOT chains that are actually broken with respect to the index must
pre-date its original creation. So we can just avoid changing pg_index
altogether during a REINDEX operation.
This offers a cleaner solution than my previous patch for the problem
found a few days ago that we mustn't try to update pg_index while we are
reindexing it. System catalog indexes will always be created with
indcheckxmin = false during initdb, and with this modified code we should
never try to change their pg_index entries. This avoids special-casing
system catalogs as the former patch did, and should provide a performance
benefit for many cases where REINDEX formerly caused an index to be
considered unusable for a short time.
Back-patch to 8.3 to cover all versions containing HOT. Note that this
patch changes the API for index_build(), but I believe it is unlikely that
any add-on code is calling that directly.
The places that attempt to change pg_index.indcheckxmin during a reindexing
operation cannot be executed safely if pg_index itself is the subject of
the operation. This is the explanation for a couple of recent reports of
VACUUM FULL failing with
ERROR: duplicate key value violates unique constraint "pg_index_indexrelid_index"
DETAIL: Key (indexrelid)=(2678) already exists.
However, there isn't any real need to update indcheckxmin in such a
situation, if we assume that pg_index can never contain a truly broken HOT
chain. This assumption holds if new indexes are never created on it during
concurrent operations, which is something we don't consider safe for any
system catalog, not just pg_index. Accordingly, modify the code to not
manipulate indcheckxmin when reindexing any system catalog.
Back-patch to 8.3, where HOT was introduced. The known failure scenarios
involve 9.0-style VACUUM FULL, so there might not be any real risk before
9.0, but let's not assume that.
When we are doing GEQO join planning, the current memory context is a
short-lived context that will be reset at the end of geqo_eval(). However,
the RelOptInfos for base relations are set up before that and then re-used
across many GEQO cycles. Hence, any code that modifies a baserel during
join planning has to be careful not to put pointers to the short-lived
context into the baserel struct. mark_dummy_rel got this wrong, leading to
easy-to-reproduce-once-you-know-how crashes in 8.4, as reported off-list by
Leo Carson of SDSC. Some improvements made in 9.0 make it difficult to
demonstrate the crash in 9.0 or HEAD; but there's no doubt that there's
still a risk factor here, so patch all branches that have the function.
(Note: 8.3 has a similar function, but it's only applied to joinrels and
thus is not a hazard.)
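A standalone sketch of this class of lifetime bug, with hypothetical names
rather than the planner code: a struct that survives across cycles ends up
holding a pointer into per-cycle memory that gets reset and reused.

    #include <stdio.h>
    #include <string.h>

    static char cycle_arena[64];            /* stands in for the short-lived context */

    struct baserel
    {
        const char *note;                   /* stands in for a RelOptInfo field */
    };

    static const char *
    arena_strdup(const char *s)
    {
        strcpy(cycle_arena, s);             /* "allocate" in the short-lived arena */
        return cycle_arena;
    }

    int
    main(void)
    {
        struct baserel rel;

        rel.note = arena_strdup("dummy rel");           /* cycle 1: looks fine */
        memset(cycle_arena, 0, sizeof(cycle_arena));    /* context reset at end of cycle */
        strcpy(cycle_arena, "unrelated data");          /* cycle 2 reuses the memory */

        printf("%s\n", rel.note);   /* prints "unrelated data", not "dummy rel" */
        return 0;
    }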
In particular, if we don't have real ndistinct estimates for both sides,
fall back to assuming that half of the left-hand rows have join partners.
This is what was done in 8.2 and 8.3 (cf nulltestsel() in those versions).
It's pretty stupid but it won't lead us to think that an antijoin produces
no rows out, as seen in recent example from Uwe Schroeder.
Remove the hard-wired assumption that __mips__ (and only __mips__) lacks
dlopen in FreeBSD and OpenBSD. This assumption is outdated at least for
OpenBSD, as per report from an anonymous 9.1 tester. We can perfectly well
use HAVE_DLOPEN instead to decide which code to use.
Some other cosmetic adjustments to make freebsd.c, netbsd.c, and openbsd.c
exactly alike.
This is a back-patch of commit dcb2bda9b7042dbf43f876c94ebf35d951de10e9 of
Aug 6 2009, which fixed assorted cases in which plpgsql would fail to cope
with composite types that contain any dropped columns. Per discussion,
this fix has been out in 9.0 for long enough to make it improbable that it
creates any new bugs, so this is a low-risk fix. To make it even lower
risk, I did not back-patch the changes in execQual.c, but just accepted
the duplication of code between there and tupconvert.c. The added files
tupconvert.h and tupconvert.c match their current states in HEAD.
The original coding supposed that a dump TOC file could never contain lines
longer than 1K. The folly of that was exposed by a recent report from
Per-Olov Esgard. We only really need to see the first dozen or two bytes
of each line, since we're just trying to read off the numeric ID at the
start of the line; so there's no need for a particularly huge buffer.
What is needed is logic to avoid processing the continuation bufferloads of an
over-length line as though they were new lines.
Back-patch to all supported branches, since it's always been like this.
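A sketch of the buffering idiom, with a hypothetical read_toc_line() rather
than pg_restore's actual code: only the start of each line is interesting, so
use a modest buffer, but if fgets() didn't reach the newline, drain the rest of
the over-length line so its continuation isn't mistaken for a new entry.

    #include <stdio.h>
    #include <string.h>

    int
    read_toc_line(FILE *fh, char *buf, size_t buflen)
    {
        if (fgets(buf, (int) buflen, fh) == NULL)
            return 0;                       /* EOF */

        if (strchr(buf, '\n') == NULL)
        {
            int     c;

            /* line was longer than the buffer: skip the continuation */
            while ((c = fgetc(fh)) != EOF && c != '\n')
                ;
        }
        return 1;
    }

    int
    main(void)
    {
        char    line[64];

        while (read_toc_line(stdin, line, sizeof(line)))
            printf("entry starts with: %.20s\n", line);
        return 0;
    }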
Eventually we might be able to allow that, but it's not clear how many
places need to be fixed to prevent infinite recursion when there's a direct
or indirect inclusion of a rowtype in itself. One such place is
CheckAttributeType(), which will recurse to stack overflow in cases such as
those exhibited in bug #5950 from Alex Perepelica. If we were sure it was
the only such place, we could easily modify the code added by this patch to
stop the recursion without a complaint ... but it probably isn't the only
such place. Hence, throw error until such time as someone is excited
enough about this type of usage to put work into making it safe.
Back-patch as far as 8.3. 8.2 doesn't have the recursive call in
CheckAttributeType in the first place, so I see no need to add code there
in the absence of clear evidence of a problem elsewhere.
On closer inspection, that two-element initcond value seems to have been
a little white lie to avoid explaining the full behavior of float8_accum.
But if people are going to expect the examples to be exactly correct,
I suppose we'd better explain. Per comment from Thom Brown.
Opening a catcache's index could require reading from that cache's own
catalog, which of course would acquire AccessShareLock on the catalog.
So the original coding here risks locking index before heap, which could
deadlock against another backend trying to get exclusive locks in the
normal order. Because InitCatCachePhase2 is only called when a backend
has to start up without a relcache init file, the deadlock was seldom seen
in the field. (And by the same token, there's no need to worry about any
performance disadvantage; so not much point in trying to distinguish
exactly which catalogs have the risk.)
Bug report, diagnosis, and patch by Nikhil Sontakke. Additional commentary
by me. Back-patch to all supported branches.
This has been broken for years, and I'm not sure why it has not been
noticed before, but now a very modern Cygwin breaks on it, and the fix
is clearly correct. Backpatching to all live branches.
Add dummy returns before every potential division-by-zero in int8.c,
because apparently further "improvements" in gcc's optimizer have
enabled it to break functions that weren't broken before.
Aurelien Jarno, via Martin Pitt
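A standalone sketch of the dummy-return pattern, with hypothetical names;
report_error() stands in for ereport(ERROR), which the optimizer in the real
code could not tell was noreturn. The extra return keeps control provably away
from the division when the divisor is zero.

    #include <stdio.h>
    #include <stdlib.h>

    static void
    report_error(const char *msg)
    {
        fprintf(stderr, "ERROR: %s\n", msg);
        exit(1);
    }

    long long
    int8_div(long long arg1, long long arg2)
    {
        if (arg2 == 0)
        {
            report_error("division by zero");
            return 0;           /* dummy return, never actually reached */
        }
        return arg1 / arg2;
    }

    int
    main(void)
    {
        printf("%lld\n", int8_div(10, 3));
        return 0;
    }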
A page's PD_ALL_VISIBLE flag can legitimately be set even though the page
contains newly-inserted tuples that according to our OldestXmin are not yet
visible to everyone. The value returned by GetOldestXmin() is conservative,
and it can move backwards on repeated calls, so if we see that contradiction
between the PD_ALL_VISIBLE flag and the status of tuples on the page, we have
to assume it's because an earlier vacuum calculated a higher OldestXmin value,
and all the tuples really are visible to everyone.
We have received several reports of this bug, with the "PD_ALL_VISIBLE flag
was incorrectly set in relation ..." warning appearing in logs. We were
finally able to hunt it down thanks to David Gould's help in running extra
diagnostics in an environment where this happened frequently.
Also reword the warning, per Robert Haas' suggestion, to not imply that the
PD_ALL_VISIBLE flag is necessarily at fault, as it might also be a symptom
of corruption on a tuple header.
Backpatch to 8.4, where the PD_ALL_VISIBLE flag was introduced.
ExecUpdate checked for whether ExecBRUpdateTriggers had returned a new
tuple value by seeing if the returned tuple was pointer-equal to the old
one. But the "old one" was in estate->es_junkFilter's result slot, which
would be scribbled on if we had done an EvalPlanQual update in response to
a concurrent update of the target tuple; therefore we were comparing a
dangling pointer to a live one. Given the right set of circumstances we
could get a false match, resulting in not forcing the tuple to be stored in
the slot we thought it was stored in. In the case reported by Maxim Boguk
in bug #5798, this led to "cannot extract system attribute from virtual
tuple" failures when trying to do "RETURNING ctid". I believe there is a
very-low-probability chance of more serious errors, such as generating
incorrect index entries based on the original rather than the
trigger-modified version of the row.
In HEAD, change all of ExecBRInsertTriggers, ExecIRInsertTriggers,
ExecBRUpdateTriggers, and ExecIRUpdateTriggers so that they continue to
have similar APIs. In the back branches I just changed
ExecBRUpdateTriggers, since there is no bug in the ExecBRInsertTriggers
case.
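A standalone sketch of the failure class, not executor code: pointer identity
is an unreliable test for "did the callee hand back a new value?" once the old
value's storage can be reused or scribbled on, because the new value may land
at the very same address.

    #include <stdio.h>
    #include <string.h>

    static char slot[16];                   /* storage that gets recycled */

    static const char *
    run_trigger(const char *oldtup)
    {
        (void) oldtup;                      /* trigger sees the old tuple ... */
        strcpy(slot, "modified");           /* ... and builds its result in reused storage */
        return slot;
    }

    int
    main(void)
    {
        const char *oldtup = slot;          /* "old" tuple lives in the slot */
        const char *newtup;

        strcpy(slot, "original");
        newtup = run_trigger(oldtup);

        /* Pointer equality says "unchanged", yet the contents are new. */
        printf("pointers equal: %s, contents: %s\n",
               newtup == oldtup ? "yes" : "no", newtup);
        return 0;
    }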
ts_typanalyze.c computes MCE statistics as fractions of the non-null rows,
which seems fairly reasonable, and anyway changing it in released versions
wouldn't be a good idea. But then ts_selfuncs.c has to account for that.
Failure to do so results in overestimates in columns with a significant
fraction of null documents. Back-patch to 8.4 where this stuff was
introduced.
Jesper Krogh
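A sketch of the adjustment with illustrative numbers, not the actual
ts_selfuncs code: MCE frequencies are fractions of the non-null rows, so
converting one to a selectivity over all rows means scaling by (1 - nullfrac);
skipping that step overestimates badly when many documents are NULL.

    #include <stdio.h>

    int
    main(void)
    {
        double  mce_freq = 0.10;    /* lexeme appears in 10% of non-null documents */
        double  nullfrac = 0.80;    /* 80% of the column is NULL */

        double  wrong = mce_freq;                       /* old estimate: 10% of all rows */
        double  right = mce_freq * (1.0 - nullfrac);    /* corrected: 2% of all rows */

        printf("overestimate: %.2f  corrected: %.2f\n", wrong, right);
        return 0;
    }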
Recent releases had a check on rel->rd_refcnt in heap_drop_with_catalog,
but failed to cover the possibility of pending trigger events at DROP time.
(Before 8.4 we didn't even check the refcnt.) When the trigger events were
eventually fired, you'd get "could not open relation with OID nnn" errors,
as in recent report from strk. Better to throw a suitable error when the
DROP is attempted.
Also add a similar check in DROP INDEX.
Back-patch to all supported branches.
New versions of libintl redefine setlocale() as a macro, which causes
problems when the backend and libintl are linked against different
versions of the runtime, as is often the case in MSVC builds.
Hiroshi Inoue, slightly updated comment by me
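A simulated illustration of the mechanism, with entirely hypothetical names
(the real players are setlocale() and libintl.h): once a header redefines a
libc function as a macro, every later call is silently rerouted into the other
library's runtime, which may not be the one the backend was linked with.

    #include <stdio.h>

    static const char *
    libc_set_locale(const char *loc)        /* stands in for the backend's runtime */
    {
        printf("backend runtime got: %s\n", loc);
        return loc;
    }

    static const char *
    intl_set_locale(const char *loc)        /* stands in for libintl's copy */
    {
        printf("libintl runtime got: %s\n", loc);
        return loc;
    }

    /* what the third-party header effectively does: */
    #define libc_set_locale(loc) intl_set_locale(loc)

    int
    main(void)
    {
        libc_set_locale("C");       /* rerouted into the other runtime by the macro */
        (libc_set_locale)("C");     /* parenthesized call still reaches the real one */
        return 0;
    }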
reduce_outer_joins() mistakenly treated a semijoin like a left join for
purposes of deciding whether not-null constraints created by the join's
quals could be passed down into the join's left-hand side (possibly
resulting in outer-join simplification there). Actually, semijoin works
like inner join for this purpose, ie, we do not need to see any rows that
can't possibly satisfy the quals. Hence, two-line fix to treat semi and
inner joins alike. Per observation by Andres Freund about a performance
gripe from Yazan Suleiman.
Back-patch to 8.4, since this oversight has been there since the current
handling of semijoins was implemented.