author     Peter Geoghegan    2021-01-13 17:21:32 +0000
committer  Peter Geoghegan    2021-01-13 17:21:32 +0000
commit     d168b666823b6e0bcf60ed19ce24fb5fb91b8ccf (patch)
tree       3a1faeb512413b47f56619453c8c609403eec5f7 /src/backend/access/index
parent     9dc718bdf2b1a574481a45624d42b674332e2903 (diff)
Enhance nbtree index tuple deletion.
Teach nbtree and heapam to cooperate in order to eagerly remove
duplicate tuples representing dead MVCC versions. This is "bottom-up
deletion". Each bottom-up deletion pass is triggered lazily in response
to a flood of versions on an nbtree leaf page. This usually involves a
"logically unchanged index" hint (these are produced by the executor
mechanism added by commit 9dc718bd).
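For illustration only, here is a rough sketch of that trigger condition, as it
might look when a leaf page is about to overflow. Every helper name below is a
placeholder (only the ordering of steps follows this commit message); this is
not the committed nbtree code:

    /* Sketch only -- placeholder helpers, not the committed nbtree code */
    static bool
    leaf_page_cleanup_avoided_split(Relation rel, Buffer buf,
                                    bool indexUnchanged, Size newitemsz)
    {
        /* First try deleting items whose LP_DEAD bits are already set */
        if (page_has_lp_dead_items(buf) &&
            try_simple_deletion(rel, buf, newitemsz))
            return true;

        /*
         * Bottom-up deletion is attempted only when the executor hinted
         * that the incoming duplicate comes from an UPDATE that did not
         * logically change any indexed column (the commit 9dc718bd
         * mechanism).
         */
        if (indexUnchanged &&
            try_bottomup_deletion(rel, buf, newitemsz))
            return true;

        return false;       /* caller falls back on dedup or a page split */
    }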
The immediate goal of bottom-up index deletion is to avoid "unnecessary"
page splits caused entirely by version duplicates. It naturally has an
even more useful effect, though: it acts as a backstop against
accumulating an excessive number of index tuple versions for any given
_logical row_. Bottom-up index deletion complements what we might now
call "top-down index deletion": index vacuuming performed by VACUUM.
Bottom-up index deletion responds to the immediate local needs of
queries, while leaving it up to autovacuum to perform infrequent clean
sweeps of the index. The overall effect is to avoid certain
pathological performance issues related to "version churn" from UPDATEs.
The previous tableam interface used by index AMs to perform tuple
deletion (the table_compute_xid_horizon_for_tuples() function) has been
replaced with a new interface that supports certain new requirements.
Many (perhaps all) of the capabilities added to nbtree by this commit
could also be extended to other index AMs. That is left as work for a
later commit.
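For reference, this is the general shape of the new interface as it is used by
the genam.c changes shown at the bottom of this page. The struct and field
names follow that diff; the exact types and comments shown here are
approximate, not copied from tableam.h:

    /* One candidate table TID that the index AM wants considered */
    typedef struct TM_IndexDelete
    {
        ItemPointerData tid;        /* table TID taken from the index tuple */
        int16       id;             /* back-link into the status array */
    } TM_IndexDelete;

    /* Per-candidate state, filled in by the index AM and the tableam */
    typedef struct TM_IndexStatus
    {
        OffsetNumber idxoffnum;     /* index page offset number */
        bool        knowndeletable; /* caller already knows TID is dead */
        bool        promising;      /* worth visiting even if not known dead? */
        int16       freespace;      /* leaf space freed if tuple is deleted */
    } TM_IndexStatus;

    /* One deletion operation, passed to the tableam as a whole */
    typedef struct TM_IndexDeleteOp
    {
        bool        bottomup;           /* bottom-up (vs. simple) deletion? */
        int         bottomupfreespace;  /* leaf free-space target, in bytes */
        int         ndeltids;
        TM_IndexDelete *deltids;
        TM_IndexStatus *status;
    } TM_IndexDeleteOp;

    /*
     * Replaces table_compute_xid_horizon_for_tuples(): the tableam decides
     * which candidate TIDs are actually deletable and returns the xid
     * horizon needed by the index AM's deletion WAL record.
     */
    TransactionId table_index_delete_tuples(Relation rel,
                                            TM_IndexDeleteOp *delstate);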
Extend deletion of LP_DEAD-marked index tuples in nbtree by adding logic
to consider extra index tuples (that are not LP_DEAD-marked) for
deletion in passing. This increases the number of index tuples deleted
significantly in many cases. The LP_DEAD deletion process (which is now
called "simple deletion" to clearly distinguish it from bottom-up
deletion) won't usually need to visit any extra table blocks to check
these extra tuples. We have to visit the same table blocks anyway to
generate a latestRemovedXid value (at least in the common case where the
index deletion operation's WAL record needs such a value).
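As a hedged sketch of that idea (the helpers and variables named here are
placeholders supplied by an imagined caller, not functions added by this
commit), the candidate-gathering loop might look roughly like this:

    /*
     * Sketch: besides the LP_DEAD-marked items, also hand the tableam any
     * index tuple whose table block will be visited anyway, and let it
     * decide whether that tuple is deletable.  'page', 'opaque', 'maxoff',
     * 'deadblocks', and 'delstate' come from the (hypothetical) caller.
     */
    for (OffsetNumber off = P_FIRSTDATAKEY(opaque); off <= maxoff; off++)
    {
        ItemId      itemid = PageGetItemId(page, off);
        IndexTuple  itup = (IndexTuple) PageGetItem(page, itemid);
        BlockNumber tblblk = ItemPointerGetBlockNumber(&itup->t_tid);

        if (ItemIdIsDead(itemid))
            add_candidate(delstate, itup, off, true);   /* known deletable */
        else if (block_already_scheduled(deadblocks, tblblk))
            add_candidate(delstate, itup, off, false);  /* checked in passing */
    }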
Testing has shown that the "extra tuples" simple deletion enhancement
increases the number of index tuples deleted with almost any workload
that has LP_DEAD bits set in leaf pages. That is, it almost never fails
to delete at least a few extra index tuples. It helps most of all in
cases that happen to naturally have a lot of delete-safe tuples. It's
not uncommon for an individual deletion operation to end up deleting an
order of magnitude more index tuples compared to the old naive approach
(e.g., custom instrumentation of the patch shows that this happens
fairly often when the regression tests are run).
Add a further enhancement that augments simple deletion and bottom-up
deletion in indexes that make use of deduplication: Teach nbtree's
_bt_delitems_delete() function to support granular TID deletion in
posting list tuples. It is now possible to delete individual TIDs from
posting list tuples provided the TIDs have a tableam block number of a
table block that gets visited as part of the deletion process (visiting
the table block can be triggered directly or indirectly). Setting the
LP_DEAD bit of a posting list tuple is still an all-or-nothing thing,
but that matters much less now that deletion only needs to start out
with the right _general_ idea about which index tuples are deletable.
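A hedged sketch of the granular posting list case follows; the 'remaining'
array and delstate_tid_is_deletable() are placeholders standing in for what
the committed _bt_delitems_delete() path actually does:

    /* Keep only the posting list TIDs the tableam did NOT report as dead */
    int         nremaining = 0;

    for (int i = 0; i < BTreeTupleGetNPosting(origtuple); i++)
    {
        ItemPointer tid = BTreeTupleGetPostingN(origtuple, i);

        if (!delstate_tid_is_deletable(delstate, tid))
            remaining[nremaining++] = *tid;     /* surviving TID */
    }

    if (nremaining == 0)
    {
        /* every TID was reported dead: delete the whole posting list tuple */
    }
    else if (nremaining < BTreeTupleGetNPosting(origtuple))
    {
        /* rewrite the tuple with only the surviving TIDs (granular deletion) */
    }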
Bump XLOG_PAGE_MAGIC because xl_btree_delete changed.
No bump in BTREE_VERSION, since there are no changes to the on-disk
representation of nbtree indexes. Indexes built on PostgreSQL 12 or
PostgreSQL 13 will automatically benefit from bottom-up index deletion
(i.e. no reindexing required) following a pg_upgrade. The enhancement
to simple deletion is available with all B-Tree indexes following a
pg_upgrade, no matter what PostgreSQL version the user upgrades from.
Author: Peter Geoghegan <[email protected]>
Reviewed-By: Heikki Linnakangas <[email protected]>
Reviewed-By: Victor Yegorov <[email protected]>
Discussion: https://postgr.es/m/CAH2-Wzm+maE3apHB8NOtmM=p-DO65j2V5GzAWCOEEuy3JZgb2g@mail.gmail.com
Diffstat (limited to 'src/backend/access/index')
-rw-r--r-- | src/backend/access/index/genam.c | 46 |
1 file changed, 35 insertions(+), 11 deletions(-)
diff --git a/src/backend/access/index/genam.c b/src/backend/access/index/genam.c
index e9877906e56..c911c705ba6 100644
--- a/src/backend/access/index/genam.c
+++ b/src/backend/access/index/genam.c
@@ -276,11 +276,18 @@ BuildIndexValueDescription(Relation indexRelation,

 /*
  * Get the latestRemovedXid from the table entries pointed at by the index
- * tuples being deleted.
- *
- * Note: index access methods that don't consistently use the standard
- * IndexTuple + heap TID item pointer representation will need to provide
- * their own version of this function.
+ * tuples being deleted using an AM-generic approach.
+ *
+ * This is a table_index_delete_tuples() shim used by index AMs that have
+ * simple requirements.  These callers only need to consult the tableam to get
+ * a latestRemovedXid value, and only expect to delete tuples that are already
+ * known deletable.  When a latestRemovedXid value isn't needed in index AM's
+ * deletion WAL record, it is safe for it to skip calling here entirely.
+ *
+ * We assume that caller index AM uses the standard IndexTuple representation,
+ * with table TIDs stored in the t_tid field.  We also expect (and assert)
+ * that the line pointers on page for 'itemnos' offsets are already marked
+ * LP_DEAD.
  */
 TransactionId
 index_compute_xid_horizon_for_tuples(Relation irel,
@@ -289,12 +296,17 @@ index_compute_xid_horizon_for_tuples(Relation irel,
                                      OffsetNumber *itemnos,
                                      int nitems)
 {
-    ItemPointerData *ttids =
-    (ItemPointerData *) palloc(sizeof(ItemPointerData) * nitems);
+    TM_IndexDeleteOp delstate;
     TransactionId latestRemovedXid = InvalidTransactionId;
     Page        ipage = BufferGetPage(ibuf);
     IndexTuple  itup;

+    delstate.bottomup = false;
+    delstate.bottomupfreespace = 0;
+    delstate.ndeltids = 0;
+    delstate.deltids = palloc(nitems * sizeof(TM_IndexDelete));
+    delstate.status = palloc(nitems * sizeof(TM_IndexStatus));
+
     /* identify what the index tuples about to be deleted point to */
     for (int i = 0; i < nitems; i++)
     {
@@ -303,14 +315,26 @@ index_compute_xid_horizon_for_tuples(Relation irel,
         iitemid = PageGetItemId(ipage, itemnos[i]);
         itup = (IndexTuple) PageGetItem(ipage, iitemid);

-        ItemPointerCopy(&itup->t_tid, &ttids[i]);
+        Assert(ItemIdIsDead(iitemid));
+
+        ItemPointerCopy(&itup->t_tid, &delstate.deltids[i].tid);
+        delstate.deltids[i].id = delstate.ndeltids;
+        delstate.status[i].idxoffnum = InvalidOffsetNumber; /* unused */
+        delstate.status[i].knowndeletable = true;   /* LP_DEAD-marked */
+        delstate.status[i].promising = false;       /* unused */
+        delstate.status[i].freespace = 0;           /* unused */
+
+        delstate.ndeltids++;
     }

     /* determine the actual xid horizon */
-    latestRemovedXid =
-        table_compute_xid_horizon_for_tuples(hrel, ttids, nitems);
+    latestRemovedXid = table_index_delete_tuples(hrel, &delstate);
+
+    /* assert tableam agrees that all items are deletable */
+    Assert(delstate.ndeltids == nitems);

-    pfree(ttids);
+    pfree(delstate.deltids);
+    pfree(delstate.status);

     return latestRemovedXid;
 }
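As a usage sketch (not code from this commit), an index AM with such simple
requirements might call the shim above along these lines, with buffer locking
and the surrounding deletion logic elided; 'deletable' holds the offsets of
the LP_DEAD items already collected by the caller:

    /* Only compute an xid horizon when the WAL record will actually need it */
    TransactionId latestRemovedXid = InvalidTransactionId;

    if (XLogStandbyInfoActive() && RelationNeedsWAL(indexrel))
        latestRemovedXid =
            index_compute_xid_horizon_for_tuples(indexrel, heaprel, buf,
                                                 deletable, ndeletable);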