From: Amit K. <ami...@en...> - 2012-02-08 05:05:57
On 8 February 2012 06:14, Michael Paquier <mic...@gm...> wrote:
> Hi,
>
> I had a look at the patch, and I wonder why you use the SessionLock and DontWait enums coupled with booleans. For DontWait I don't really believe it is required, as you simply do a check on a boolean condition. For simplicity, why not use the variable name isWait instead of dontWait?
>
> Then, regarding SessionLock: for the time being we might have only TRANSACTION_LOCK and SESSION_LOCK, but why condition them with boolean values? In the future we might support extra lock levels, so isn't a simple enum fit for the task? The checks on sessionLock could be done simply on the values of the enum. I feel it is kind of weird to use such enums, which future applications may need to extend.

Hi Michael,

The only reason I defined these additional enums for true/false was to make the calls to pgxc_advisory_lock() readable. Look at the following diff:

    -    pgxc_advisory_lock(key, 0, 0, true, ExclusiveLock, true, false);
    +    pgxc_advisory_lock(key, 0, 0, true, ExclusiveLock, SESSION_LOCK, WAIT);

The second call immediately shows whether we have taken a session lock and whether we want to wait. In the first call that is not at all obvious from "true" and "false"; one has to remember the last two arguments of the function definition. And there are many such calls all over the file.

> About the lock content: if I understood well, you simply spread the lock with pgxc_advisory_xact_lock to all the Coordinators, so it might work correctly.
> [... Compile ...]
> And indeed it works.
>
> Regards,
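[As an aside, the readability argument can be seen in a self-contained sketch. Only the names SESSION_LOCK/WAIT and the call shape come from the diff above; the enum definitions, stub types, and output below are hypothetical stand-ins, not the patch's actual code.]

    #include <stdbool.h>
    #include <stdio.h>

    /* Hypothetical stand-ins; only SESSION_LOCK/WAIT appear in the diff above. */
    typedef enum { TRANSACTION_LOCK, SESSION_LOCK } AdvisoryLockLevel;
    typedef enum { DONT_WAIT, WAIT } AdvisoryLockWait;
    typedef int LOCKMODE;                      /* stand-in for PostgreSQL's LOCKMODE */
    static const LOCKMODE ExclusiveLock = 7;   /* placeholder value */

    /* Stub with the argument shape from the diff; the real function
     * propagates the lock across the cluster. */
    static void
    pgxc_advisory_lock(long key, int key2, int key3, bool is_int_key,
                       LOCKMODE mode, AdvisoryLockLevel level, AdvisoryLockWait wait)
    {
        (void) key2; (void) key3; (void) is_int_key; (void) mode;
        printf("advisory lock %ld: %s lock, %s\n", key,
               level == SESSION_LOCK ? "session" : "transaction",
               wait == WAIT ? "wait" : "no-wait");
    }

    int main(void)
    {
        /* With bare booleans the reader must recall the trailing parameters:
         *     pgxc_advisory_lock(100, 0, 0, true, ExclusiveLock, true, false);
         * With the enums the intent is readable at the call site: */
        pgxc_advisory_lock(100, 0, 0, true, ExclusiveLock, SESSION_LOCK, WAIT);
        return 0;
    }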
> On Tue, Feb 7, 2012 at 11:30 AM, Amit Khandekar <ami...@en...> wrote:
>> Attached is the patch that implements transaction level lock functions. As mentioned, keeping these functions at the coordinator level sufficed; pushing them further to the data nodes was not required, because the coordinators are aware of the transactions.
>>
>> On 1 February 2012 17:30, Amit Khandekar <ami...@en...> wrote:
>>> Now that the session level locks implementation is checked in, here are some notes on the implementation of transaction level locks (just archiving my thoughts here).
>>>
>>> Transaction level locks do not have an explicit way to unlock. They are released once the transaction ends.
>>>
>>> Similar to session level locks, we need to propagate the pg_advisory_xact_lock function to other nodes to make them cluster-aware. But should we propagate to data nodes or coordinators?
>>>
>>> I was earlier of the opinion that we should propagate to the data nodes, but now I feel the propagation can be kept to the coordinators, because they are aware of the transactions. I realized that besides the datanodes, even the coordinator keeps a local transaction active until the user ends the transaction.
>>>
>>> Is it necessary to propagate to the data nodes as well, besides the coordinators? I don't think so, because all advisory lock commands have to go through coordinators, and these locks do not share the same space with the implicit database object locks held on the datanodes when a table is referenced in a query.
>>>
>>> The LOCK <table> command is implemented by propagating to both coordinators and data nodes, which is necessary because the lock held on a table using the LOCK command shares the same space with the implicit table locks held when a query accesses tables. So, when a query like "select * from tab1" is executed in a transaction on a data node, it should wait for a lock held on the table using LOCK tab1. For that, this explicit lock has to be propagated to the data nodes; otherwise the query won't see it.
>>>
>>> Will the coordinator-to-coordinator connection pooling have the same unlocking issues as session level locks? No, because transaction locks are never unlocked explicitly.
>>>
>>> Will the coord-to-coord pooler connection keep the lock active even after the user ends the transaction? No, because when a user transaction ends, the pooler connection also ends its transaction, although it does not end the connection. When there are two simultaneous transactions from the same coordinator, the coordinator creates two pooler connections to the remote coordinator. The advisory locks of the two transactions will be held on these two pooler connections until the transactions end.
>>>
>>> So effectively, there does not look to be much more work for transaction level locks, except that they keep all the coord-coord locks active: unlike session-level locks, they don't unlock the remote locks. The attached patch does this for a sample function. All synchronization issues considered for session level locks apply here as well.
>>>
>>> Will send the complete patch after more testing.
>>>
>>> On 30 January 2012 17:25, Amit Khandekar <ami...@en...> wrote:
>>>> On 20 January 2012 17:50, Amit Khandekar <ami...@en...> wrote:
>>>>> Before going into the implementation issues that I thought of on advisory locks, here's a PG doc snippet briefly explaining what they are:
>>>>>
>>>>> "There are two different types of advisory locks in PostgreSQL: session level and transaction level. Once acquired, a session level advisory lock is held until explicitly released or the session ends. Unlike standard locks, session level advisory locks do not honor transaction semantics: a lock acquired during a transaction that is later rolled back will still be held following the rollback, and likewise an unlock is effective even if the calling transaction fails later. The same session level lock can be acquired multiple times by its owning process: for each lock request there must be a corresponding unlock request before the lock is actually released. (If a session already holds a given lock, additional requests will always succeed, even if other sessions are awaiting the lock.) Transaction level locks on the other hand behave more like regular locks; they are automatically released at the end of the transaction, and can not be explicitly unlocked."
>>>>>
>>>>> Implementation notes for session level locks
>>>>> ----------------------------
>>>>>
>>>>> In a nutshell, this is how they work:
>>>>>
>>>>> Application 1 calls select pg_advisory_lock(100).
>>>>> Application 2 calls select pg_advisory_lock(100) and waits for app 1 to call select pg_advisory_unlock(100) or to end the session.
>>>>>
>>>>> The goal is to make sure that when a client from coordinator C1 locks on a key, another client from coordinator C2 waits on the lock. The advisory lock calls from a session get stacked, meaning the resource won't get released until it is unlocked as many times as it has been locked. So pg_advisory_lock() three times must be followed by pg_advisory_unlock() three times for the resource to be freed for other sessions.
>>>>>
>>>>> One simple way is to just send a parallel 'exec direct $$select pg_advisory_lock()$$' to all coordinators.
>>>>>
>>>>> Sequence of actions:
>>>>>
>>>>> 1. Client1 from C1 calls: select pg_advisory_lock(100)
>>>>> This call is propagated to all coordinators (C1 and C2), so locks are held on both C1 and C2. The C1 lock is a native lock; the lock on C2 is held through C1's pooler connection to C2.
>>>>>
>>>>> 2. Next, client2 from C2 calls: select pg_advisory_lock(100)
>>>>> This again calls pg_advisory_lock() on C1 and C2. Both of these calls would wait because client1 holds the locks, so effectively client2 waits on this call.
>>>>>
>>>>> 3a. Next, client1 calls pg_advisory_unlock().
>>>>> This calls pg_advisory_unlock() on C1 and C2, unlocking the earlier held locks, which in turn makes client2 return from its pg_advisory_lock() call.
>>>>> OR
>>>>> 3b. Client1 exits without calling pg_advisory_unlock().
>>>>> This automatically releases the lock held on the native coordinator C1. But the one held on C2 is not released, because that lock is held on the pooler session to C2. So here we are in trouble: the C2 lock will be held permanently until the pooler exits or someone explicitly calls pg_advisory_unlock, and client2, waiting on this lock, will hang indefinitely.
>>>>>
>>>>> To handle this issue with the pooler connection, the pg_advisory_lock() definition should immediately unlock all locks except the native coordinator lock. These lock-unlock calls on the remote coordinators are still necessary, because the call should get a chance to wait for the resource in case someone else has already grabbed it from another coordinator.
>>>>>
>>>>> pg_advisory_lock()
>>>>> {
>>>>>     /*
>>>>>      * Go on locking on each coordinator. Keep unlocking the previous one
>>>>>      * each time a new lock is held. Don't unlock the native coordinator.
>>>>>      * After finishing all coordinators, ultimately only the native
>>>>>      * coordinator lock would be held, but we will still have scanned all
>>>>>      * coordinators to make sure no one else has already grabbed the lock.
>>>>>      */
>>>>>     for (i = 0; i < last_coordinator_index; i++)
>>>>>     {
>>>>>         Call pg_advisory_lock() on coordinator[i];
>>>>>         if (i > 0 && !is_native(coordinator[i-1]))
>>>>>             Call pg_advisory_unlock() on coordinator[i-1];
>>>>>     }
>>>>> }
>>>>>
>>>>> Note that the order of locking all coordinators must strictly be common to all. If client1 from C1 locks C1 first and then C2, and in parallel client2 from C2 locks C2 first and then C1, the typical synchronisation issues would arise. For example: client1 and client2 simultaneously call pg_advisory_lock() on C1 and C2 respectively. They won't wait, because they are on different coordinators. Next, client1 and client2 simultaneously try to lock C2 and C1 respectively, and both wait on each other. This deadlock does not arise if both lock in a common order, say C1 then C2.
>>>>>
>>>>> And that's the reason the lock function cannot be propagated to all coordinators in parallel: that way we cannot guarantee the order of locking.
>>>>
>>>> Based on the implementation notes, attached is a patch that covers session level advisory locks. A common function pgxc_advisory_locks() implements the algorithm shared by all variants of the pg_advisory_lock functions. The algorithm is designed keeping in mind the issue discussed above.
>>>>
>>>> Additionally, I had to tweak the existing LockAcquire() function in locks.c a bit. When a user from the same session calls advisory lock a second time, the call should not be propagated to all coordinators; rather, the lock reference should just be incremented locally. So it was necessary to check whether the lock already exists in the same session and, if so, increment it, else just return without locking. This specific behaviour was not present in the existing LockAcquire() function, so I made it support the same.
>>>>
>>>> I also tweaked the pgxc_execute_on_nodes() function, originally written for the object size functions, so that it can be reused for the advisory lock functions.
>>>>
>>>> I was planning to include both session level and transaction level locking functions, but it looks like the transaction level functions have to be handled differently, so I am working on them and will send them in a separate patch.
>>>>
>>>>> Implementation notes for transaction level locks
>>>>> --------------------------------------------------------
>>>>>
>>>>> Yet to think on this, but for these I think we need to propagate the calls to the datanodes rather than the coordinators, because the datanodes are the ones who handle transactions. These locks cannot be unlocked; they only unlock at the end of the transaction.
>
> --
> Michael Paquier
> http://michael.otacoo.com
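[The ordered-locking idea in the quoted pseudocode can be sketched as a tiny self-contained C program. The helpers remote_advisory_lock(), remote_advisory_unlock(), and is_native() below are hypothetical stand-ins for the patch's node-execution machinery (pgxc_execute_on_nodes()); the point is only the common global order plus the release of every non-native lock.]

    #include <stdbool.h>
    #include <stdio.h>

    #define NUM_COORDINATORS 2                 /* assumption: two coordinators */

    static const char *coordinator[NUM_COORDINATORS] = { "C1", "C2" };
    static const int native_index = 0;         /* coordinator this session runs on */

    /* Hypothetical stand-ins for shipping the call to a remote coordinator. */
    static void remote_advisory_lock(const char *coord, long key)
    {
        printf("LOCK   key %ld on %s\n", key, coord);  /* blocks if held elsewhere */
    }
    static void remote_advisory_unlock(const char *coord, long key)
    {
        printf("UNLOCK key %ld on %s\n", key, coord);
    }
    static bool is_native(int i) { return i == native_index; }

    /*
     * Lock the coordinators in one global order (array order here), releasing
     * each previous non-native lock once the next one is acquired. The common
     * order prevents the C1-then-C2 vs C2-then-C1 deadlock; the releases keep
     * locks from being stranded on pooler connections (case 3b above).
     */
    static void pgxc_advisory_session_lock(long key)
    {
        for (int i = 0; i < NUM_COORDINATORS; i++)
        {
            remote_advisory_lock(coordinator[i], key);
            if (i > 0 && !is_native(i - 1))
                remote_advisory_unlock(coordinator[i - 1], key);
        }
        /* Only the native coordinator's lock should survive the walk. */
        if (!is_native(NUM_COORDINATORS - 1))
            remote_advisory_unlock(coordinator[NUM_COORDINATORS - 1], key);
    }

    int main(void)
    {
        pgxc_advisory_session_lock(100);
        return 0;
    }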
From: Michael P. <mic...@gm...> - 2012-02-08 00:44:17
Hi,

I had a look at the patch, and I wonder why you use the SessionLock and DontWait enums coupled with booleans. For DontWait I don't really believe it is required, as you simply do a check on a boolean condition. For simplicity, why not use the variable name isWait instead of dontWait?

Then, regarding SessionLock: for the time being we might have only TRANSACTION_LOCK and SESSION_LOCK, but why condition them with boolean values? In the future we might support extra lock levels, so isn't a simple enum fit for the task? The checks on sessionLock could be done simply on the values of the enum. I feel it is kind of weird to use such enums, which future applications may need to extend.

About the lock content: if I understood well, you simply spread the lock with pgxc_advisory_xact_lock to all the Coordinators, so it might work correctly.
[... Compile ...]
And indeed it works.

Regards,

On Tue, Feb 7, 2012 at 11:30 AM, Amit Khandekar <ami...@en...> wrote:
> [...]

--
Michael Paquier
http://michael.otacoo.com
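[For contrast with the session-level walk sketched earlier, here is a hypothetical sketch of the transaction-level variant the review refers to. It reuses the coordinator[] and NUM_COORDINATORS stubs from that sketch; remote_advisory_xact_lock() is likewise a made-up stand-in, not the patch's API.]

    /* Hypothetical stand-in for shipping pg_advisory_xact_lock to a remote
     * coordinator. */
    static void remote_advisory_xact_lock(const char *coord, long key)
    {
        printf("XACT LOCK key %ld on %s\n", key, coord);
    }

    static void pgxc_advisory_xact_lock_sketch(long key)
    {
        /* Same global coordinator order as the session-level walk. */
        for (int i = 0; i < NUM_COORDINATORS; i++)
            remote_advisory_xact_lock(coordinator[i], key);

        /*
         * Deliberately no unlock step: transaction-level advisory locks
         * cannot be released explicitly. When the user's transaction ends,
         * each remote coordinator's pooler connection ends its own
         * transaction too, dropping the lock, so the stranded-lock problem
         * of session-level locks (case 3b) does not arise here.
         */
    }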
From: Michael P. <mic...@gm...> - 2012-02-08 00:21:00
On Mon, Feb 6, 2012 at 1:44 PM, Ashutosh Bapat <ash...@en...> wrote:
> The line dividing the use of an assertion vs. an error is very fine. The question I ask when adding an assertion or elog is: "What do I want to happen if this assumption is broken?" What should happen if such a case occurs in a production environment? If an assertion trips, the backend in which it trips aborts, and the connection is closed. Assertions are available only when the cassert flag is used, and for production servers people do not use that flag. Whereas when elog is called, the connection remains active. If you are sure that the assumption is bound to be true, it's good to have an assertion, but we are exposed to a crash in production. So I usually prefer calling elog over an assertion, especially in cases where I am not sure about the assumptions. In this case, the arguments scan_relid and tlist come in separately, and looking at the code I couldn't convince myself that the target list has Vars from the same relation. At the same time, since this is a scan node on a relation, its target list ought to have Vars from a single relation, and at that, the one for which the scan is called.

My mistake; I read the patch once again and got it. The error message is adapted, and the patch looks fine. Please go ahead.

--
Michael Paquier
http://michael.otacoo.com
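[For context on the trade-off discussed above: in PostgreSQL code, Assert() is compiled in only for --enable-cassert builds, and a failure aborts the whole backend process, while elog(ERROR, ...) is always compiled in and aborts only the current query, leaving the connection usable. A purely illustrative fragment in backend style; the varno/scan_relid check is a plausible stand-in, not the actual code from the patch.]

    /* Developer-only sanity check: compiled in only with --enable-cassert;
     * if it trips, the backend process aborts and the connection is lost. */
    Assert(var->varno == scan_relid);

    /* Always-on check: if it trips, only the current query errors out and
     * the session survives, which is safer when the assumption is unproven. */
    if (var->varno != scan_relid)
        elog(ERROR, "target list contains Var of relation %u, expected %u",
             var->varno, scan_relid);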