mirror of https://github.com/postgres/postgres
Simplify lazy_scan_heap's handling of scanned pages.
Redefine a scanned page as any heap page that actually gets pinned by VACUUM's first pass over the heap, regardless of whether or not the page was cleanup locked. Although it's fundamentally impossible to prune a heap page without a cleanup lock (since we cannot safely defragment the page), we can do just about everything else. The only notable further exception is freezing tuples, though even that is arguably a consequence of not being able to prune (not a separate issue).

VACUUM now does as much of the same processing as possible for pages that could not be cleanup locked. Any failure to do specific required processing is treated as a special case exception, which will be rare in practice. We now collect any preexisting LP_DEAD items (left behind by earlier opportunistic pruning) in the dead_items array for these heap pages, and count their tuples in the usual way. Steps used to decide if we'll attempt relation truncation are performed in the usual way for no-cleanup-lock scanned pages, too.

Although eliminating these special cases is intrinsically useful, it's even more useful as an enabler of further simplifications. The only essential difference between aggressive and non-aggressive is that only aggressive is _guaranteed_ to be able to advance relfrozenxid up to FreezeLimit.

Advancing relfrozenxid is always useful, but before now non-aggressive VACUUMs threw away the opportunity to do so whenever a cleanup lock could not be acquired on any page, no matter what the details were. This was very pessimistic. It isn't actually necessary to "behave aggressively" to maintain the ability to advance relfrozenxid when a cleanup lock isn't immediately available (most of the time). The non-aggressive case will now determine whether it is safe to advance relfrozenxid (without waiting for a cleanup lock) using only a share lock.
It will usually notice that there are no tuples that need to be frozen anyway, just like in the aggressive case -- and so it no longer wastes an opportunity to advance relfrozenxid over nothing. (The non-aggressive case still won't wait for a cleanup lock when there really are tuples on the page that need to be frozen, since that really would amount to "behaving aggressively".)

VACUUM currently has a tendency to set heap pages to all-visible in the visibility map before it freezes all of the tuples on the page. Only a subsequent aggressive VACUUM will visit these pages to freeze their tuples, usually only when the tuple XIDs are much older than the vacuum_freeze_min_age GUC (FreezeLimit cutoff) is supposed to allow. And so non-aggressive VACUUMs are still far less likely to be able to advance relfrozenxid in practice, even with the enhancements from this commit. This remaining issue will be addressed by future work that overhauls the criteria for freezing tuples. Once that's in place, almost every VACUUM operation will be able to advance relfrozenxid in practice.

Author: Peter Geoghegan <pg@bowt.ie>
Reviewed-By: Andres Freund <andres@anarazel.de>
Reviewed-By: Masahiko Sawada <sawada.mshk@gmail.com>
Discussion: https://postgr.es/m/CAH2-Wznp=c=Opj8Z7RMR3G=ec3_JfGYMN_YvmCEjoPCHzWbx0g@mail.gmail.com
This commit is contained in:
parent
4eb2176318
commit
44fa84881f
@@ -45,7 +45,7 @@ step stats:
 relpages|reltuples
 --------+---------
-       1|       20
+       1|       21
 (1 row)
@@ -2,9 +2,10 @@
 # to page pins. We absolutely need to avoid setting reltuples=0 in
 # such cases, since that interferes badly with planning.
 #
-# Expected result in second permutation is 20 tuples rather than 21 as
-# for the others, because vacuum should leave the previous result
-# (from before the insert) in place.
+# Expected result for all three permutations is 21 tuples, including
+# the second permutation. VACUUM is able to count the concurrently
+# inserted tuple in its final reltuples, even when a cleanup lock
+# cannot be acquired on the affected heap page.
 
 setup {
 create table smalltbl