author     Peter Geoghegan    2021-03-11 00:27:01 +0000
committer  Peter Geoghegan    2021-03-11 00:27:01 +0000
commit     9f3665fbfc34b963933e51778c7feaa8134ac885 (patch)
tree       f06201f72fc31718beb0b17f48facb270c28df26 /src/include/access/nbtree.h
parent     845ac7f847a25505e91f30dca4e0330b25785ee0 (diff)

Don't consider newly inserted tuples in nbtree VACUUM.

Remove the entire idea of "stale stats" within nbtree VACUUM (stop
caring about stats involving the number of inserted tuples). Also
remove the vacuum_cleanup_index_scale_factor GUC and reloption on the
master branch (though just disable them on Postgres 13).

The vacuum_cleanup_index_scale_factor/stats interface made the nbtree AM
partially responsible for deciding when pg_class.reltuples stats needed
to be updated. This seems contrary to the spirit of the index AM API,
though -- it is not actually necessary for an index AM's bulk delete and
cleanup callbacks to provide accurate stats when it happens to be
inconvenient. The core code owns that. (Index AMs have the authority
to perform or not perform certain kinds of deferred cleanup based on
their own considerations, such as page deletion and recycling, but that
has little to do with pg_class.reltuples/num_index_tuples.)
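
For reference, that division of labor is visible in IndexBulkDeleteResult
(access/genam.h).  The following is a paraphrased sketch, not a verbatim
excerpt: the struct layout shifts between versions, and the caller-side
check is a simplified stand-in for the actual vacuumlazy.c code.

/* Paraphrased from src/include/access/genam.h (Postgres 13-era layout) */
typedef struct IndexBulkDeleteResult
{
    BlockNumber num_pages;          /* pages remaining in index */
    bool        estimated_count;    /* num_index_tuples is an estimate */
    double      num_index_tuples;   /* tuples remaining */
    double      tuples_removed;     /* # removed during vacuum operation */
    BlockNumber pages_deleted;      /* # unused pages in index */
    BlockNumber pages_free;         /* # pages available for reuse */
} IndexBulkDeleteResult;

/*
 * Simplified caller-side check: the core code, not the index AM,
 * decides whether the reported stats are trustworthy enough to be
 * stored in pg_class.
 */
if (istat != NULL && !istat->estimated_count)
    vac_update_relstats(indrel,
                        istat->num_pages, istat->num_index_tuples,
                        0,      /* num_all_visible_pages */
                        false,  /* hasindex */
                        InvalidTransactionId, InvalidMultiXactId,
                        false); /* in_outer_xact */
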
This issue was fairly harmless until the introduction of the
autovacuum_vacuum_insert_threshold feature by commit b07642db, which had
an undesirable interaction with the vacuum_cleanup_index_scale_factor
mechanism: it made insert-driven autovacuums perform full index scans,
even though there is no real benefit to doing so. This has been tied to
a regression with an append-only insert benchmark [1].

Also have remaining cases that perform a full scan of an index during a
cleanup-only nbtree VACUUM indicate that the final tuple count is only
an estimate. This prevents vacuumlazy.c from setting the index's
pg_class.reltuples in those cases (it will now only update pg_class when
vacuumlazy.c had TIDs for nbtree to bulk delete). This arguably fixes
an oversight in deduplication-related bugfix commit 48e12913.
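
Concretely, the cleanup-only path now works roughly like this (a
simplified sketch of the btvacuumcleanup() logic, not a verbatim
excerpt):

/* Simplified sketch: cleanup-only path in btvacuumcleanup() */
if (stats == NULL)
{
    /* No ambulkdelete call happened; skip the scan when possible */
    if (!_bt_vacuum_needs_cleanup(info->index))
        return NULL;

    stats = (IndexBulkDeleteResult *) palloc0(sizeof(IndexBulkDeleteResult));
    btvacuumscan(info, stats, NULL, NULL, 0);

    /*
     * A tuple count taken during a cleanup-only scan is inherently
     * approximate, so flag it as such; vacuumlazy.c will then leave
     * pg_class.reltuples alone.
     */
    stats->estimated_count = true;
}
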
[1] https://smalldatum.blogspot.com/2021/01/insert-benchmark-postgres-is-still.html

Author: Peter Geoghegan <[email protected]>
Reviewed-By: Masahiko Sawada <[email protected]>
Discussion: https://postgr.es/m/CAD21AoA4WHthN5uU6+WScZ7+J_RcEjmcuH94qcoUPuB42ShXzg@mail.gmail.com
Backpatch: 13-, where autovacuum_vacuum_insert_threshold was added.

Diffstat (limited to 'src/include/access/nbtree.h')
-rw-r--r--  src/include/access/nbtree.h | 7 ++-----
1 file changed, 2 insertions(+), 5 deletions(-)

diff --git a/src/include/access/nbtree.h b/src/include/access/nbtree.h
index b56b7b7868e..5c66d1f366e 100644
--- a/src/include/access/nbtree.h
+++ b/src/include/access/nbtree.h
@@ -110,7 +110,7 @@ typedef struct BTMetaPageData
 	/* number of deleted, non-recyclable pages during last cleanup */
 	uint32		btm_last_cleanup_num_delpages;
-	/* number of heap tuples during last cleanup */
+	/* number of heap tuples during last cleanup (deprecated) */
 	float8		btm_last_cleanup_num_heap_tuples;
 
 	bool		btm_allequalimage;	/* are all columns "equalimage"? */
@@ -1067,8 +1067,6 @@ typedef struct BTOptions
 {
 	int32		varlena_header_;	/* varlena header (do not touch directly!) */
 	int			fillfactor;		/* page fill factor in percent (0..100) */
-	/* fraction of newly inserted tuples needed to trigger index cleanup */
-	float8		vacuum_cleanup_index_scale_factor;
 	bool		deduplicate_items;	/* Try to deduplicate items? */
 } BTOptions;
@@ -1171,8 +1169,7 @@ extern OffsetNumber _bt_findsplitloc(Relation rel, Page origpage,
  */
 extern void _bt_initmetapage(Page page, BlockNumber rootbknum, uint32 level,
 							 bool allequalimage);
-extern void _bt_set_cleanup_info(Relation rel, BlockNumber num_delpages,
-								 float8 num_heap_tuples);
+extern void _bt_set_cleanup_info(Relation rel, BlockNumber num_delpages);
 extern void _bt_upgrademetapage(Page page);
 extern Buffer _bt_getroot(Relation rel, int access);
 extern Buffer _bt_gettrueroot(Relation rel);
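
On the caller side, the narrowed _bt_set_cleanup_info() signature ends up
being used roughly as follows at the end of the VACUUM scan (a simplified
sketch; num_delpages is derived from the stats fields shown earlier):

/* Simplified sketch: end of btvacuumscan() after this change */
BlockNumber num_delpages;

/* deleted pages that could not yet be recycled via the FSM */
num_delpages = stats->pages_deleted - stats->pages_free;
_bt_set_cleanup_info(rel, num_delpages);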