pgsql: Avoid duplicate XIDs at recovery when building initial snapshot

From: Michael Paquier <michael(at)paquier(dot)xyz>
To: pgsql-committers(at)lists(dot)postgresql(dot)org
Subject: pgsql: Avoid duplicate XIDs at recovery when building initial snapshot
Date: 2018-10-14 13:26:24
Message-ID: [email protected]
Lists: pgsql-committers pgsql-hackers

Avoid duplicate XIDs at recovery when building initial snapshot

On a primary, sets of XLOG_RUNNING_XACTS records are generated on a
periodic basis to allow recovery to build the initial state of
transactions for a hot standby. The set of transaction IDs is created
by scanning all the entries in ProcArray. However, its logic never
accounted for the fact that two-phase transactions finishing their
prepare phase can put ProcArray in a state where there are two entries
with the same transaction ID: one for the initial transaction, which
gets cleared once prepare finishes, and a second, dummy, entry to track
that the transaction is still running after prepare finishes. This
ensures a continuous presence of the transaction, so that callers of,
for example, TransactionIdIsInProgress() always see it as alive.

So, if an XLOG_RUNNING_XACTS record takes a standby snapshot while a
two-phase transaction finishes preparing, the record can end up with
duplicated XIDs, which is a state expected by design. If such a record
gets applied on a standby to initialize its recovery state, recovery
would simply fail; as the record is only used for that initialization,
the odds of facing this failure are very low in practice. It would be
tempting to change the generation of XLOG_RUNNING_XACTS so that
duplicates are removed at the source, but this requires holding
ProcArrayLock for longer, which would impact all workloads,
particularly those making heavy use of two-phase transactions.

XLOG_RUNNING_XACTS is only used to initialize the standby state at
recovery, so the solution taken here is instead to discard duplicates
when applying the initial snapshot.

Diagnosed-by: Konstantin Knizhnik
Author: Michael Paquier
Discussion: https://postgr.es/m/[email protected]
Backpatch-through: 9.3

Branch
------
master

Details
-------
https://git.postgresql.org/pg/commitdiff/1df21ddb19c6e764fc9378c900515a5d642ad820

Modified Files
--------------
src/backend/storage/ipc/procarray.c | 21 +++++++++++++++++++--
1 file changed, 19 insertions(+), 2 deletions(-)
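
Conceptually, the change in procarray.c comes down to sorting the XID
array taken from the XLOG_RUNNING_XACTS record and dropping adjacent
duplicates before the standby's initial transaction state is installed.
The following is only a minimal sketch of that idea, not the committed
code; it uses a plain uint32 stand-in for TransactionId and the standard
qsort():

#include <stdint.h>
#include <stdlib.h>

typedef uint32_t TransactionId;     /* simplified stand-in */

static int
xid_cmp(const void *a, const void *b)
{
    TransactionId xa = *(const TransactionId *) a;
    TransactionId xb = *(const TransactionId *) b;

    return (xa > xb) - (xa < xb);
}

/* Sort the XIDs and strip duplicates; returns the new element count. */
static int
dedup_xids(TransactionId *xids, int nxids)
{
    int     i,
            n;

    if (nxids <= 1)
        return nxids;

    qsort(xids, nxids, sizeof(TransactionId), xid_cmp);

    n = 1;
    for (i = 1; i < nxids; i++)
    {
        if (xids[i] != xids[n - 1])
            xids[n++] = xids[i];
    }
    return n;
}

Since the XIDs already get sorted when the standby snapshot is
initialized at recovery, the duplicate-removal pass can piggyback on
that sort.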


From: Andres Freund <andres(at)anarazel(dot)de>
To: Michael Paquier <michael(at)paquier(dot)xyz>
Cc: pgsql-committers(at)lists(dot)postgresql(dot)org
Subject: Re: pgsql: Avoid duplicate XIDs at recovery when building initial snapshot
Date: 2018-10-14 17:42:40
Message-ID: [email protected]
Lists: pgsql-committers pgsql-hackers

On 2018-10-14 13:26:24 +0000, Michael Paquier wrote:
> Avoid duplicate XIDs at recovery when building initial snapshot
>
> On a primary, sets of XLOG_RUNNING_XACTS records are generated on a
> periodic basis to allow recovery to build the initial state of
> transactions for a hot standby. The set of transaction IDs is created
> by scanning all the entries in ProcArray. However, its logic never
> accounted for the fact that two-phase transactions finishing their
> prepare phase can put ProcArray in a state where there are two entries
> with the same transaction ID: one for the initial transaction, which
> gets cleared once prepare finishes, and a second, dummy, entry to track
> that the transaction is still running after prepare finishes. This
> ensures a continuous presence of the transaction, so that callers of,
> for example, TransactionIdIsInProgress() always see it as alive.
>
> So, if an XLOG_RUNNING_XACTS record takes a standby snapshot while a
> two-phase transaction finishes preparing, the record can end up with
> duplicated XIDs, which is a state expected by design. If such a record
> gets applied on a standby to initialize its recovery state, recovery
> would simply fail; as the record is only used for that initialization,
> the odds of facing this failure are very low in practice. It would be
> tempting to change the generation of XLOG_RUNNING_XACTS so that
> duplicates are removed at the source, but this requires holding
> ProcArrayLock for longer, which would impact all workloads,
> particularly those making heavy use of two-phase transactions.
>
> XLOG_RUNNING_XACTS is only used to initialize the standby state at
> recovery, so the solution taken here is instead to discard duplicates
> when applying the initial snapshot.
>
> Diagnosed-by: Konstantin Knizhnik
> Author: Michael Paquier
> Discussion: https://postgr.es/m/[email protected]
> Backpatch-through: 9.3

I'm unhappy this approach was taken over objections. Without a real
warning. Even leaving the crummyness aside, did you check other users
of XLOG_RUNNING_XACTS, e.g. logical decoding?

Greetings,

Andres Freund


From: Michael Paquier <michael(at)paquier(dot)xyz>
To: Andres Freund <andres(at)anarazel(dot)de>
Cc: Postgres hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: pgsql: Avoid duplicate XIDs at recovery when building initial snapshot
Date: 2018-10-22 03:03:26
Message-ID: [email protected]
Lists: pgsql-committers pgsql-hackers

(moving to -hackers)

On Sun, Oct 14, 2018 at 10:42:40AM -0700, Andres Freund wrote:
> I'm unhappy this approach was taken over objections. Without a real
> warning.

Oops, that was not clear to me. Sorry about that! I did not see you
objecting again after the last arguments I raised. The end of PREPARE
TRANSACTION has been designed like that since it was introduced, so
changing the way the dummy GXACT is inserted before the main entry is
cleared of its XID does not sound wise to me. The current design also
exists for a couple of reasons; please see this thread:
https://www.postgresql.org/message-id/[email protected]
This has resulted in e26b0abd.

Among the options I considered are:
- Clearing the XID at the same time the dummy entry is added, which
means holding ProcArrayLock longer while doing more work at the end of
prepare. I don't think this can be done cleanly without endangering
transaction visibility for other backends, and syncrep may cause the
window to get wider.
- Changing GetRunningTransactionData() so that duplicates are removed
at that stage. However, this also requires holding ProcArrayLock for
longer. For most deployments, if no dummy entries from 2PC transactions
are present, the checks removing duplicated entries could be bypassed,
but if at least one dummy entry is found the whole ProcArray would need
to be scanned again, which would most likely happen at each checkpoint
with workloads like the one Konstantin mentioned in the original
report. And ProcArrayLock is already a point of contention for many
OLTP workloads with small transactions, so the performance argument
worries me.

Speaking of which, I have looked at the performance of qsort, and for a
couple of thousand entries we may not see any impact. But I am not
confident enough to say that it would be OK on all platforms to sort
4-byte elements each time a standby snapshot is taken, so the
performance argument from Konstantin seems quite sensible to me (see
the quickly-hacked qsort_perf.c attached).
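
The attachment is not reproduced in this archive; purely for
illustration (this is not the actual qsort_perf.c), a quick harness
timing qsort() over a few thousand 4-byte values could look like this:

#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

static int
cmp_u32(const void *a, const void *b)
{
    uint32_t xa = *(const uint32_t *) a;
    uint32_t xb = *(const uint32_t *) b;

    return (xa > xb) - (xa < xb);
}

int
main(void)
{
    int         n = 4096;          /* "a couple of thousand entries" */
    uint32_t   *xids = malloc(n * sizeof(uint32_t));
    clock_t     start,
                end;
    int         i;

    /* Fill with pseudo-random XID-like values. */
    for (i = 0; i < n; i++)
        xids[i] = (uint32_t) rand();

    start = clock();
    qsort(xids, n, sizeof(uint32_t), cmp_u32);
    end = clock();

    printf("qsort of %d xids: %f ms\n", n,
           1000.0 * (end - start) / CLOCKS_PER_SEC);
    free(xids);
    return 0;
}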

> Even leaving the crummyness aside, did you check other users of
> XLOG_RUNNING_XACTS, e.g. logical decoding?

I actually spent some time checking that, so it is not innocent.
SnapBuildWaitSnapshot() waits for transactions to commit or abort based
on the XIDs in the record. That is the only place where those XIDs are
used, so it does not matter if we wait twice for the same transaction
to finish. The error callback would be used only once.
--
Michael

Attachment Content-Type Size
qsort_perf.c text/x-csrc 812 bytes

From: Alvaro Herrera <alvherre(at)2ndquadrant(dot)com>
To: Andres Freund <andres(at)anarazel(dot)de>
Cc: Michael Paquier <michael(at)paquier(dot)xyz>, pgsql-hackers(at)lists(dot)postgresql(dot)org
Subject: Re: pgsql: Avoid duplicate XIDs at recovery when building initial snapshot
Date: 2018-10-22 15:36:25
Message-ID: [email protected]
Lists: pgsql-committers pgsql-hackers

On 2018-Oct-14, Andres Freund wrote:

> On 2018-10-14 13:26:24 +0000, Michael Paquier wrote:
> > Avoid duplicate XIDs at recovery when building initial snapshot

> I'm unhappy this approach was taken over objections. Without a real
> warning. Even leaving the crummyness aside, did you check other users
> of XLOG_RUNNING_XACTS, e.g. logical decoding?

Mumble. Is there a real problem here -- I mean, did this break logical
decoding? Maybe we need some more tests to ensure this stuff works
sanely for logical decoding.

FWIW and not directly related, I recently became aware (because of a
customer question) that txid_current_snapshot() is oblivious of
XLOG_RUNNING_XACTS in a standby. AFAICS it's not a really serious
concern, but it was surprising.

--
Álvaro Herrera https://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services


From: Andres Freund <andres(at)anarazel(dot)de>
To: Alvaro Herrera <alvherre(at)2ndquadrant(dot)com>
Cc: Michael Paquier <michael(at)paquier(dot)xyz>, pgsql-hackers(at)lists(dot)postgresql(dot)org
Subject: Re: pgsql: Avoid duplicate XIDs at recovery when building initial snapshot
Date: 2018-10-22 17:41:55
Message-ID: [email protected]
Lists: pgsql-committers pgsql-hackers

Hi,

On 2018-10-22 12:36:25 -0300, Alvaro Herrera wrote:
> On 2018-Oct-14, Andres Freund wrote:
>
> > On 2018-10-14 13:26:24 +0000, Michael Paquier wrote:
> > > Avoid duplicate XIDs at recovery when building initial snapshot
>
> > I'm unhappy this approach was taken over objections. Without a real
> > warning. Even leaving the crummyness aside, did you check other users
> > of XLOG_RUNNING_XACTS, e.g. logical decoding?
>
> Mumble. Is there a real problem here -- I mean, did this break logical
> decoding? Maybe we need some more tests to ensure this stuff works
> sanely for logical decoding.

Hm? My point is that this fix just puts a band-aid onto *one* of the
places that read an XLOG_RUNNING_XACTS record, which still leaves the
contents of the WAL record corrupted. There's not even a note at the
WAL record's definition or its logging denoting that the contents are
not what you'd expect. I don't mean that the fix would break logical
decoding, but that it's possible that an equivalent of the problem
affecting hot standby also affects logical decoding. And even leaving
those two users aside, it's possible that there will be further
vulnerable internal users or extensions parsing the WAL.

> FWIW and not directly related, I recently became aware (because of a
> customer question) that txid_current_snapshot() is oblivious of
> XLOG_RUNNING_XACTS in a standby. AFAICS it's not a really serious
> concern, but it was surprising.

That's more fundamental than just XLOG_RUNNING_XACTS though, no? The
whole way running transactions (i.e. including those that are just
detected by looking at their xid) are handled in the known xids struct
and in snapshots seems incompatible with that, no?

Greetings,

Andres Freund


From: Andres Freund <andres(at)anarazel(dot)de>
To: Michael Paquier <michael(at)paquier(dot)xyz>
Cc: Postgres hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: pgsql: Avoid duplicate XIDs at recovery when building initial snapshot
Date: 2018-10-22 18:04:35
Message-ID: [email protected]
Lists: pgsql-committers pgsql-hackers

Hi,

On 2018-10-22 12:03:26 +0900, Michael Paquier wrote:
> (moving to -hackers)
>
> On Sun, Oct 14, 2018 at 10:42:40AM -0700, Andres Freund wrote:
> > I'm unhappy this approach was taken over objections. Without a real
> > warning.
>
> Oops, that was not clear to me. Sorry about that! I did not see you
> objecting again after the last arguments I raised. The end of PREPARE
> TRANSACTION has been designed like that since it was introduced, so
> changing the way the dummy GXACT is inserted before the main entry is
> cleared of its XID does not sound wise to me. The current design also
> exists for a couple of reasons; please see this thread:
> https://www.postgresql.org/message-id/[email protected]
> This has resulted in e26b0abd.

None of them explains why having "corrupt" WAL that's later fixed up is
a good idea.

> Among the options I considered are:
> - Clearing the XID at the same time the dummy entry is added, which
> means holding ProcArrayLock longer while doing more work at the end of
> prepare. I don't think this can be done cleanly without endangering
> transaction visibility for other backends, and syncrep may cause the
> window to get wider.

> - Changing GetRunningTransactionData() so that duplicates are removed
> at that stage. However, this also requires holding ProcArrayLock for
> longer.

That's *MUCH* better than what we have right
now. GetRunningTransactionData() isn't called all that often, for one,
and for another the WAL is then correct.

> > Even leaving the crummyness aside, did you check other users of
> > XLOG_RUNNING_XACTS, e.g. logical decoding?
>
> I actually spent some time checking that, so it is not innocent.

"innocent"?

> SnapBuildWaitSnapshot() waits for transactions to commit or abort based
> on the XIDs in the record. That is the only place where those XIDs are
> used, so it does not matter if we wait twice for the same transaction
> to finish. The error callback would be used only once.

Right. We used to use it more (and it'd probably caused problems), but
since 955a684e0401954a58e956535107bc4b7136d952 it should be ok...

Greetings,

Andres Freund


From: Alvaro Herrera <alvherre(at)2ndquadrant(dot)com>
To: Andres Freund <andres(at)anarazel(dot)de>
Cc: Michael Paquier <michael(at)paquier(dot)xyz>, pgsql-hackers(at)lists(dot)postgresql(dot)org
Subject: Re: pgsql: Avoid duplicate XIDs at recovery when building initial snapshot
Date: 2018-10-22 22:15:38
Message-ID: [email protected]
Lists: pgsql-committers pgsql-hackers

On 2018-Oct-22, Andres Freund wrote:

> Hi,
>
> On 2018-10-22 12:36:25 -0300, Alvaro Herrera wrote:
> > On 2018-Oct-14, Andres Freund wrote:
> >
> > > On 2018-10-14 13:26:24 +0000, Michael Paquier wrote:
> > > > Avoid duplicate XIDs at recovery when building initial snapshot
> >
> > > I'm unhappy this approach was taken over objections. Without a real
> > > warning. Even leaving the crummyness aside, did you check other users
> > > of XLOG_RUNNING_XACTS, e.g. logical decoding?
> >
> > Mumble. Is there a real problem here -- I mean, did this break logical
> > decoding? Maybe we need some more tests to ensure this stuff works
> > sanely for logical decoding.
>
> Hm? My point is that this fix just puts a band-aid onto *one* of the
> places that read an XLOG_RUNNING_XACTS record, which still leaves the
> contents of the WAL record corrupted. There's not even a note at the
> WAL record's definition or its logging denoting that the contents are
> not what you'd expect. I don't mean that the fix would break logical
> decoding, but that it's possible that an equivalent of the problem
> affecting hot standby also affects logical decoding. And even leaving
> those two users aside, it's possible that there will be further
> vulnerable internal users or extensions parsing the WAL.

Ah! I misinterpreted what you were saying. I agree we shouldn't let
the WAL message have wrong data. Of course we shouldn't just add a
code comment stating that the data is wrong :-)

> > FWIW and not directly related, I recently became aware (because of a
> > customer question) that txid_current_snapshot() is oblivious of
> > XLOG_RUNNING_XACTS in a standby. AFAICS it's not a really serious
> > concern, but it was surprising.
>
> That's more fundamental than just XLOG_RUNNING_XACTS though, no? The
> whole way running transactions (i.e. including those that are just
> detected by looking at their xid) are handled in the known xids struct
> and in snapshots seems incompatible with that, no?

hmm ... as I recall, txid_current_snapshot also only considers xacts by
xid, so read-only xacts are not considered -- seems to me that if you
think of snapshots in a general way, you're right, but for whatever you
want txid_current_snapshot for (and I don't quite remember what that
is), it's not really important.

--
Álvaro Herrera https://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services


From: Michael Paquier <michael(at)paquier(dot)xyz>
To: Alvaro Herrera <alvherre(at)2ndquadrant(dot)com>
Cc: Andres Freund <andres(at)anarazel(dot)de>, pgsql-hackers(at)lists(dot)postgresql(dot)org
Subject: Re: pgsql: Avoid duplicate XIDs at recovery when building initial snapshot
Date: 2018-10-23 01:43:38
Message-ID: [email protected]
Lists: pgsql-committers pgsql-hackers

On Mon, Oct 22, 2018 at 07:15:38PM -0300, Alvaro Herrera wrote:
> On 2018-Oct-22, Andres Freund wrote:
>> Hm? My point is that this fix just puts a band-aid onto *one* of the
>> places that read an XLOG_RUNNING_XACTS record, which still leaves the
>> contents of the WAL record corrupted. There's not even a note at the
>> WAL record's definition or its logging denoting that the contents are
>> not what you'd expect. I don't mean that the fix would break logical
>> decoding, but that it's possible that an equivalent of the problem
>> affecting hot standby also affects logical decoding. And even leaving
>> those two users aside, it's possible that there will be further
>> vulnerable internal users or extensions parsing the WAL.
>
> Ah! I misinterpreted what you were saying. I agree we shouldn't let
> the WAL message have wrong data. Of course we shouldn't just add a
> code comment stating that the data is wrong :-)

Well, following the same line of thought, txid_current_snapshot() uses
sort_snapshot() to remove all the duplicates after fetching its data
from GetSnapshotData(), so wouldn't we want to remove duplicates when
dummy PGXACT entries are found while scanning the ProcArray in that
case as well? What I think we should do is patch not only
GetRunningTransactionData() but also GetSnapshotData() so that neither
produces duplicates, doing things in such a way that both code paths
use the same logic and sort_snapshot() is not needed anymore. That
would be more costly though...
--
Michael


From: Michael Paquier <michael(at)paquier(dot)xyz>
To: Alvaro Herrera <alvherre(at)2ndquadrant(dot)com>
Cc: Andres Freund <andres(at)anarazel(dot)de>, pgsql-hackers(at)lists(dot)postgresql(dot)org
Subject: Re: pgsql: Avoid duplicate XIDs at recovery when building initial snapshot
Date: 2018-11-01 06:09:11
Message-ID: [email protected]
Lists: pgsql-committers pgsql-hackers

On Tue, Oct 23, 2018 at 10:43:38AM +0900, Michael Paquier wrote:
> Well, following the same line of thought, txid_current_snapshot() uses
> sort_snapshot() to remove all the duplicates after fetching its data
> from GetSnapshotData(), so wouldn't we want to remove duplicates when
> dummy PGXACT entries are found while scanning the ProcArray in that
> case as well? What I think we should do is patch not only
> GetRunningTransactionData() but also GetSnapshotData() so that neither
> produces duplicates, doing things in such a way that both code paths
> use the same logic and sort_snapshot() is not needed anymore. That
> would be more costly though...

My apologies, it took a bit longer than I thought. I have had a patch
sitting on my desk for a couple of days, and finally took the time to
finish something which addresses the concerns raised here. As long as
we don't reach more than hundreds of thousands of entries, there is not
going to be any performance impact. So what I do in the attached is to
revert 1df21ddb, and then have GetRunningTransactionData() sort the
XIDs in the snapshot and remove duplicates only if at least one dummy
proc entry is found while scanning, for both xids and subxids. This
way, there is no need to impact most instance deployments with the
extra sort/removal phase, as most don't use two-phase transactions. The
sorting done at recovery when initializing the standby snapshot still
needs to happen, of course.

The patch is added to the upcoming CF for review.
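
For illustration only (this is not the attached patch, and the names
below are simplified stand-ins), the guard described above amounts to
something like the following: the ProcArray scan notes whether a dummy
two-phase entry was seen, and only then pays for the sort and duplicate
removal:

#include <stdbool.h>
#include <stdint.h>
#include <stdlib.h>

typedef uint32_t TransactionId;     /* simplified stand-in */

static int
xid_cmp(const void *a, const void *b)
{
    TransactionId xa = *(const TransactionId *) a;
    TransactionId xb = *(const TransactionId *) b;

    return (xa > xb) - (xa < xb);
}

/*
 * 'saw_dummy_entry' would be set during the ProcArray scan whenever a
 * placeholder entry for a prepared transaction is found.  Deployments
 * without two-phase transactions never set it and skip the extra work.
 */
static int
maybe_dedup_xids(TransactionId *xids, int nxids, bool saw_dummy_entry)
{
    int     i,
            n;

    if (!saw_dummy_entry || nxids <= 1)
        return nxids;

    qsort(xids, nxids, sizeof(TransactionId), xid_cmp);

    n = 1;
    for (i = 1; i < nxids; i++)
        if (xids[i] != xids[n - 1])
            xids[n++] = xids[i];
    return n;
}

The same treatment would apply to the subxid array.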

Thanks,
--
Michael

Attachment Content-Type Size
duplicate-xid-snapshot-v1.patch text/x-diff 4.3 KB

From: Dmitry Dolgov <9erthalion6(at)gmail(dot)com>
To: Michael Paquier <michael(at)paquier(dot)xyz>
Cc: Alvaro Herrera <alvherre(at)2ndquadrant(dot)com>, Andres Freund <andres(at)anarazel(dot)de>, PostgreSQL Developers <pgsql-hackers(at)lists(dot)postgresql(dot)org>
Subject: Re: pgsql: Avoid duplicate XIDs at recovery when building initial snapshot
Date: 2018-11-30 13:54:04
Message-ID: CA+q6zcXzUV3V9O5XQrATL9=W6SnjRvaX4yGd=syMUdyjOhnDEw@mail.gmail.com
Lists: pgsql-committers pgsql-hackers

> On Thu, Nov 1, 2018 at 7:09 AM Michael Paquier <michael(at)paquier(dot)xyz> wrote:
>
> On Tue, Oct 23, 2018 at 10:43:38AM +0900, Michael Paquier wrote:
> > Well, following the same line of thought, txid_current_snapshot() uses
> > sort_snapshot() to remove all the duplicates after fetching its data
> > from GetSnapshotData(), so wouldn't we want to remove duplicates when
> > dummy PGXACT entries are found while scanning the ProcArray in that
> > case as well? What I think we should do is patch not only
> > GetRunningTransactionData() but also GetSnapshotData() so that neither
> > produces duplicates, doing things in such a way that both code paths
> > use the same logic and sort_snapshot() is not needed anymore. That
> > would be more costly though...
>
> My apologies, it took a bit longer than I thought. I have had a patch
> sitting on my desk for a couple of days, and finally took the time to
> finish something which addresses the concerns raised here. As long as
> we don't reach more than hundreds of thousands of entries, there is not
> going to be any performance impact. So what I do in the attached is to
> revert 1df21ddb, and then have GetRunningTransactionData() sort the
> XIDs in the snapshot and remove duplicates only if at least one dummy
> proc entry is found while scanning, for both xids and subxids. This
> way, there is no need to impact most instance deployments with the
> extra sort/removal phase, as most don't use two-phase transactions. The
> sorting done at recovery when initializing the standby snapshot still
> needs to happen, of course.
>
> The patch is added to the upcoming CF for review.

Unfortunately, the patch has some conflicts with current master. Could
you please post a rebased version?


From: Simon Riggs <simon(at)2ndquadrant(dot)com>
To: Michael Paquier <michael(at)paquier(dot)xyz>
Cc: Alvaro Herrera <alvherre(at)2ndquadrant(dot)com>, Andres Freund <andres(at)anarazel(dot)de>, PostgreSQL Hackers <pgsql-hackers(at)lists(dot)postgresql(dot)org>
Subject: Re: pgsql: Avoid duplicate XIDs at recovery when building initial snapshot
Date: 2018-11-30 14:55:47
Message-ID: CANP8+jKCbqqMSBeJFZWM0PT8Zf=ex13oYsN-Bcfze+_dTz-R9w@mail.gmail.com
Lists: pgsql-committers pgsql-hackers

On Thu, 1 Nov 2018 at 06:09, Michael Paquier <michael(at)paquier(dot)xyz> wrote:

> On Tue, Oct 23, 2018 at 10:43:38AM +0900, Michael Paquier wrote:
> > Well, following the same line of thought, txid_current_snapshot() uses
> > sort_snapshot() to remove all the duplicates after fetching its data
> > from GetSnapshotData(), so wouldn't we want to remove duplicates when
> > dummy PGXACT entries are found while scanning the ProcArray in that
> > case as well? What I think we should do is patch not only
> > GetRunningTransactionData() but also GetSnapshotData() so that neither
> > produces duplicates, doing things in such a way that both code paths
> > use the same logic and sort_snapshot() is not needed anymore. That
> > would be more costly though...
>
> My apologies, it took a bit longer than I thought. I have had a patch
> sitting on my desk for a couple of days, and finally took the time to
> finish something which addresses the concerns raised here. As long as
> we don't reach more than hundreds of thousands of entries, there is not
> going to be any performance impact. So what I do in the attached is to
> revert 1df21ddb, and then have GetRunningTransactionData() sort the
> XIDs in the snapshot and remove duplicates only if at least one dummy
> proc entry is found while scanning, for both xids and subxids. This
> way, there is no need to impact most instance deployments with the
> extra sort/removal phase, as most don't use two-phase transactions. The
> sorting done at recovery when initializing the standby snapshot still
> needs to happen, of course.
>
> The patch is added to the upcoming CF for review.
>

1df21ddb looks OK to me and was simple enough to backpatch safely.

Seems excessive to say that the WAL record is corrupt; it just contains
duplicates, just as exported snapshots do. There are a few other imprecise
things around in WAL; that is why we need the RunningXact data in the first
place. So we have a choice of whether to remove the duplicates eagerly or
lazily.

For GetRunningTransactionData(), we can do eager or lazy, since it's not a
foreground process. I don't object to changing it to be eager in this path,
but this patch is more complex than 1df21ddb and I don't think we should
backpatch this change, assuming it is acceptable.

This patch doesn't do it, but the suggestion that we touch
GetSnapshotData() in the same way so we de-duplicate eagerly is a different
matter and would need careful performance testing to ensure we don't slow
down 2PC users.

--
Simon Riggs http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services


From: Michael Paquier <michael(at)paquier(dot)xyz>
To: Simon Riggs <simon(at)2ndquadrant(dot)com>
Cc: Alvaro Herrera <alvherre(at)2ndquadrant(dot)com>, Andres Freund <andres(at)anarazel(dot)de>, PostgreSQL Hackers <pgsql-hackers(at)lists(dot)postgresql(dot)org>
Subject: Re: pgsql: Avoid duplicate XIDs at recovery when building initial snapshot
Date: 2018-11-30 23:08:04
Message-ID: [email protected]
Lists: pgsql-committers pgsql-hackers

On Fri, Nov 30, 2018 at 02:55:47PM +0000, Simon Riggs wrote:
> 1df21ddb looks OK to me and was simple enough to backpatch safely.

Thanks for the feedback!

> Seems excessive to say that the WAL record is corrupt; it just contains
> duplicates, just as exported snapshots do. There are a few other imprecise
> things around in WAL; that is why we need the RunningXact data in the first
> place. So we have a choice of whether to remove the duplicates eagerly or
> lazily.
>
> For GetRunningTransactionData(), we can do eager or lazy, since it's not a
> foreground process. I don't object to changing it to be eager in this path,
> but this patch is more complex than 1df21ddb and I don't think we should
> backpatch this change, assuming it is acceptable.

Yes, I would avoid a backpatch for this more complicated one, and
we need a solution for already-generated WAL. It is not complicated to
handle duplicates for xacts and subxacts; however, holding ProcArrayLock
for a longer time stresses me, as it is already a bottleneck.

> This patch doesn't do it, but the suggestion that we touch
> GetSnapshotData() in the same way so we de-duplicate eagerly is a different
> matter and would need careful performance testing to ensure we don't slow
> down 2PC users.

Definitely. That's a much hotter code path. I am not completely sure
if that's an effort worth pursuing either...
--
Michael


From: Simon Riggs <simon(at)2ndquadrant(dot)com>
To: Michael Paquier <michael(at)paquier(dot)xyz>
Cc: Alvaro Herrera <alvherre(at)2ndquadrant(dot)com>, Andres Freund <andres(at)anarazel(dot)de>, PostgreSQL Hackers <pgsql-hackers(at)lists(dot)postgresql(dot)org>
Subject: Re: pgsql: Avoid duplicate XIDs at recovery when building initial snapshot
Date: 2018-12-01 10:51:10
Message-ID: CANP8+jKdrQpDU6pe=de9gRz6zR0nJBXM5qU=Tq0Jiz5TinZ5=Q@mail.gmail.com
Lists: pgsql-committers pgsql-hackers

On Fri, 30 Nov 2018 at 23:08, Michael Paquier <michael(at)paquier(dot)xyz> wrote:

> On Fri, Nov 30, 2018 at 02:55:47PM +0000, Simon Riggs wrote:
> > 1df21ddb looks OK to me and was simple enough to backpatch safely.
>
> Thanks for the feedback!
>
> > Seems excessive to say that the WAL record is corrupt; it just contains
> > duplicates, just as exported snapshots do. There are a few other imprecise
> > things around in WAL; that is why we need the RunningXact data in the first
> > place. So we have a choice of whether to remove the duplicates eagerly or
> > lazily.
> >
> > For GetRunningTransactionData(), we can do eager or lazy, since it's not a
> > foreground process. I don't object to changing it to be eager in this path,
> > but this patch is more complex than 1df21ddb and I don't think we should
> > backpatch this change, assuming it is acceptable.
>
> Yes, I would avoid a backpatch for this more complicated one, and
> we need a solution for already-generated WAL.

Yes, that is an important reason not to backpatch.

> It is not complicated to
> handle duplicates for xacts and subxacts; however, holding ProcArrayLock
> for a longer time stresses me, as it is already a bottleneck.
>

I hadn't realised this patch holds ProcArrayLock while removing duplicates.
Now that I know, I vote against applying this patch unless someone can show
that the performance effects of doing so are negligible, which I doubt.

--
Simon Riggs http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services


From: Michael Paquier <michael(at)paquier(dot)xyz>
To: Simon Riggs <simon(at)2ndquadrant(dot)com>
Cc: Alvaro Herrera <alvherre(at)2ndquadrant(dot)com>, Andres Freund <andres(at)anarazel(dot)de>, PostgreSQL Hackers <pgsql-hackers(at)lists(dot)postgresql(dot)org>
Subject: Re: pgsql: Avoid duplicate XIDs at recovery when building initial snapshot
Date: 2018-12-03 06:43:58
Message-ID: [email protected]
Lists: pgsql-committers pgsql-hackers

On Sat, Dec 01, 2018 at 10:51:10AM +0000, Simon Riggs wrote:
> On Fri, 30 Nov 2018 at 23:08, Michael Paquier <michael(at)paquier(dot)xyz> wrote:
>> It is not complicated to
>> handle duplicates for xacts and subxacts; however, holding ProcArrayLock
>> for a longer time stresses me, as it is already a bottleneck.
>
> I hadn't realised this patch holds ProcArrayLock while removing duplicates.
> Now that I know, I vote against applying this patch unless someone can show
> that the performance effects of doing so are negligible, which I doubt.

Me too, after more thought on that. Please note that I have marked the
patch as returned with feedback for now.
--
Michael