From: Ahsan H. <ahs...@en...> - 2013-04-18 11:35:16
Hi Amit,

Can you take some time out to look at this please? I know you are super
busy with the trigger work.

-- Ahsan

On Thu, Apr 18, 2013 at 2:13 AM, Abbas Butt <abb...@en...> wrote:
> While doing some testing I found that just checking
>     res == LOCKACQUIRE_OK
> is not enough; the function should succeed if the lock is already held
> too, hence the condition should be
>     (res == LOCKACQUIRE_OK || res == LOCKACQUIRE_ALREADY_HELD)
> The attached patch corrects this problem.
>
> On Mon, Apr 8, 2013 at 11:23 AM, Abbas Butt <abb...@en...> wrote:
>> Thanks. I will commit it later today.
>>
>> On Mon, Apr 8, 2013 at 9:52 AM, Amit Khandekar <ami...@en...> wrote:
>>> Hi Abbas,
>>>
>>> The patch looks good to go.
>>>
>>> -Amit
>>>
>>> On 6 April 2013 01:02, Abbas Butt <abb...@en...> wrote:
>>>> Hi,
>>>>
>>>> Consider this test case when run on a single-coordinator cluster.
>>>>
>>>> From one session, acquire a lock:
>>>>
>>>> edb@edb-virtual-machine:/usr/local/pgsql/bin$ ./psql postgres
>>>> psql (PGXC 1.1devel, based on PG 9.2beta2)
>>>> Type "help" for help.
>>>>
>>>> postgres=# select pg_try_advisory_lock(1234,5678);
>>>>  pg_try_advisory_lock
>>>> ----------------------
>>>>  t
>>>> (1 row)
>>>>
>>>> and from another terminal try to acquire the same lock:
>>>>
>>>> edb@edb-virtual-machine:/usr/local/pgsql/bin$ ./psql postgres
>>>> psql (PGXC 1.1devel, based on PG 9.2beta2)
>>>> Type "help" for help.
>>>>
>>>> postgres=# select pg_try_advisory_lock(1234,5678);
>>>>  pg_try_advisory_lock
>>>> ----------------------
>>>>  t
>>>> (1 row)
>>>>
>>>> Note that the second request succeeds even though the lock is
>>>> already held by the first session.
>>>>
>>>> The problem is that pgxc_advisory_lock ignores the return value of
>>>> the LockAcquire function in the case of a single coordinator. The
>>>> attached patch corrects the problem.
>>>>
>>>> Comments are welcome.
>>>>
>>>> -- Abbas
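For reference, the corrected check Abbas describes would look roughly
like the sketch below. LockAcquire and the LOCKACQUIRE_* result codes
are the real PostgreSQL lock-manager API; the surrounding variable
names (locktag, lockmode, sessionLock, dontWait) are placeholders for
whatever pgxc_advisory_lock actually passes, since the patch itself is
not shown in the thread:

    /*
     * Sketch of the corrected single-coordinator path. A try-style
     * advisory lock must report success both when the lock was just
     * granted (LOCKACQUIRE_OK) and when this session already held it
     * (LOCKACQUIRE_ALREADY_HELD); ignoring the result entirely is
     * what produced the false "t" in the second session above.
     */
    LockAcquireResult res;

    res = LockAcquire(&locktag, lockmode, sessionLock, dontWait);

    return (res == LOCKACQUIRE_OK || res == LOCKACQUIRE_ALREADY_HELD);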
From: Abbas B. <abb...@en...> - 2013-04-18 11:08:18
On Thu, Apr 18, 2013 at 8:43 AM, Ashutosh Bapat <ash...@en...> wrote:
> Did you measure the performance?

I tried, but I was getting very strange numbers. It took some hours,
yet it reported Time: 365649.353 ms, which comes out to about 6
minutes; I am not sure why.

> On Thu, Apr 18, 2013 at 9:02 AM, Abbas Butt <abb...@en...> wrote:
>> On Thu, Apr 18, 2013 at 1:07 AM, Abbas Butt <abb...@en...> wrote:
>>> Hi,
>>> Here is the review of the patch.
>>>
>>> Overall the patch is good to go. I have reviewed the code and found
>>> some minor errors, which I corrected; the revised patch is attached
>>> to this mail.
>>>
>>> I have tested both the case where the sort happens in memory and
>>> the case where it happens using disk, and found both working.
>>>
>>> I agree that the approach used in the patch is cleaner and has a
>>> smaller footprint.
>>>
>>> I have corrected some whitespace errors and an unintentional change
>>> in the function set_dbcleanup_callback:
>>>
>>> git apply /home/edb/Desktop/MergeSort/xc_sort.patch
>>> /home/edb/Desktop/MergeSort/xc_sort.patch:539: trailing whitespace.
>>> void *fparams;
>>> /home/edb/Desktop/MergeSort/xc_sort.patch:1012: trailing whitespace.
>>> /home/edb/Desktop/MergeSort/xc_sort.patch:1018: trailing whitespace.
>>> /home/edb/Desktop/MergeSort/xc_sort.patch:1087: trailing whitespace.
>>> /*
>>> /home/edb/Desktop/MergeSort/xc_sort.patch:1228: trailing whitespace.
>>> size_t len, Oid msgnode_oid,
>>> warning: 5 lines add whitespace errors.
>>>
>>> I am leaving a query running for tonight which will sort 10M rows
>>> of a distributed table and return the top 100 of them. I will
>>> report its outcome tomorrow morning.
>>
>> It worked. Here is the test case:
>>
>> 1. create table test1 (id integer primary key, padding text);
>> 2. Load 10M rows.
>> 3. select id from test1 order by 1 limit 100;
>>
>>> On Mon, Apr 1, 2013 at 11:02 AM, Koichi Suzuki <koi...@gm...> wrote:
>>>> Thanks. Then 90% improvement means about 53% of the duration,
>>>> while 50% means 67% of it. The number of queries in a given
>>>> duration is 190 vs. 150; the difference is 40.
>>>>
>>>> Considering the needed resources, it may be okay to begin with
>>>> materialization.
>>>>
>>>> Any other inputs?
>>>>
>>>> 2013/4/1 Ashutosh Bapat <ash...@en...>
>>>>> On Mon, Apr 1, 2013 at 10:59 AM, Koichi Suzuki <koi...@gm...> wrote:
>>>>>> I understand that materializing everything makes the code
>>>>>> clearer and the implementation simpler and better structured.
>>>>>>
>>>>>> What do you mean by x% improvement? Does 90% improvement mean
>>>>>> the total duration is 10% of the original?
>>>>>
>>>>> x% improvement means the duration reduces to 100/(100+x) of the
>>>>> non-pushdown scenario. Or, in simpler words, we see (100+x)
>>>>> queries completed by the pushdown approach in the same time in
>>>>> which the non-pushdown approach completes 100 queries.
>>>>>
>>>>>> 2013/3/29 Ashutosh Bapat <ash...@en...>
>>>>>>> Hi All,
>>>>>>> I measured the scale-up for both approaches: (a) using datanode
>>>>>>> connections as tapes (the existing one); (b) materialising the
>>>>>>> result on tapes before merging (the approach I proposed). For
>>>>>>> 1M rows and 5 coordinators, I found that approach (a) gives 90%
>>>>>>> improvement whereas approach (b) gives 50% improvement.
>>>>>>> Although the difference is significant, I feel that approach
>>>>>>> (b) is much cleaner than approach (a), does not have a large
>>>>>>> footprint compared to the PG code, and takes care of all the
>>>>>>> cases: 1. it materialises the sorted result; 2. it handles any
>>>>>>> number of datanode connections without memory overrun. It's
>>>>>>> possible to improve it further if we avoid materialisation of
>>>>>>> the datanode result in a tuplestore.
>>>>>>>
>>>>>>> Patch attached for reference.
>>>>>>>
>>>>>>> On Tue, Mar 26, 2013 at 10:38 AM, Ashutosh Bapat wrote:
>>>>>>>> On Tue, Mar 26, 2013 at 10:19 AM, Koichi Suzuki wrote:
>>>>>>>>> One thing we should consider for option 1 is:
>>>>>>>>>
>>>>>>>>> when the result is huge, applications have to wait a long
>>>>>>>>> time until they get the first row. Because this option may
>>>>>>>>> need disk writes, total resource consumption will be larger.
>>>>>>>>
>>>>>>>> Yes, I am aware of this fact. Please read the next paragraph
>>>>>>>> and you will see that the current situation is no better.
>>>>>>>>
>>>>>>>>> I'm wondering if we can use a "cursor" at the database so
>>>>>>>>> that we can read each tape more simply; I mean, leave each
>>>>>>>>> query node open and read the next row from any query node.
>>>>>>>>
>>>>>>>> We do that right now. But because of such a simulated cursor
>>>>>>>> (it's not a cursor per se; we just fetch the required result
>>>>>>>> from the connection as the demand arises while merging runs),
>>>>>>>> we observe the following.
>>>>>>>>
>>>>>>>> If the plan has multiple RemoteQuery nodes (as there will be
>>>>>>>> in the case of a merge join), we assign the same connection to
>>>>>>>> these nodes. Before this assignment, the result from the
>>>>>>>> previous connection is materialised at the coordinator. This
>>>>>>>> means that when we get a huge result from the datanode, it
>>>>>>>> will be materialised, at a higher cost than materialising it
>>>>>>>> on tape, because this materialisation happens in a linked
>>>>>>>> list, which is not optimized. We need to share a connection
>>>>>>>> between more than one RemoteQuery node because the same
>>>>>>>> transaction cannot work on two connections to the same server.
>>>>>>>> It is not only performance: the code has become ugly because
>>>>>>>> of this approach. At various places in the executor we have
>>>>>>>> special handling for sorting, which needs to be maintained.
>>>>>>>>
>>>>>>>> Instead, if we materialise all the results on tape and then
>>>>>>>> proceed with step D5 in Knuth's algorithm for polyphase merge
>>>>>>>> sort, the code will be much simpler and we won't lose much
>>>>>>>> performance. In fact, we might be able to leverage fetching
>>>>>>>> bulk data on the connection, which can be materialised on tape
>>>>>>>> in bulk.
>>>>>>>>
>>>>>>>>> 2013/3/25 Ashutosh Bapat <ash...@en...>:
>>>>>>>>>> Hi All,
>>>>>>>>>> I am working on using remote sorting for merge joins. The
>>>>>>>>>> idea is, while using a merge join at the coordinator, to get
>>>>>>>>>> the data sorted from the datanodes: for replicated relations
>>>>>>>>>> we can get all the rows sorted, and for distributed tables
>>>>>>>>>> we have to get sorted runs which can be merged at the
>>>>>>>>>> coordinator. For a merge join, the sorted inner relation
>>>>>>>>>> needs to be randomly accessible. For replicated relations
>>>>>>>>>> this can be achieved by materialising the result. But for
>>>>>>>>>> distributed relations we do not materialise the sorted
>>>>>>>>>> result at the coordinator; we compute it by merging the
>>>>>>>>>> sorted results from the individual nodes on the fly. For
>>>>>>>>>> distributed relations, the connections to the datanodes
>>>>>>>>>> themselves are used as logical tapes (which provide the
>>>>>>>>>> sorted runs). The final result is computed on the fly by
>>>>>>>>>> choosing the smallest or greatest row (as required) from the
>>>>>>>>>> connections.
>>>>>>>>>>
>>>>>>>>>> For a Sort node, the materialised result can reside in
>>>>>>>>>> memory (if it fits there) or on one of the logical tapes
>>>>>>>>>> used for the merge sort. So, in order to provide random
>>>>>>>>>> access to the sorted result, we need to materialise the
>>>>>>>>>> result either in memory or on a logical tape. In-memory
>>>>>>>>>> materialisation is not easily possible, since we have
>>>>>>>>>> already resorted to tape-based sort in the case of
>>>>>>>>>> distributed relations, and to materialise the result on tape
>>>>>>>>>> there is no logical tape available in the current algorithm.
>>>>>>>>>> To make it work, there are the following possible ways:
>>>>>>>>>>
>>>>>>>>>> 1. When random access is required, materialise the sorted
>>>>>>>>>> runs from individual nodes onto tapes (one tape per node)
>>>>>>>>>> and then merge them onto one extra tape, which can be used
>>>>>>>>>> for materialisation.
>>>>>>>>>> 2. Use a mix of connections and logical tapes in the same
>>>>>>>>>> tape set: merge the sorted runs from connections onto a
>>>>>>>>>> logical tape in the same logical tape set.
>>>>>>>>>>
>>>>>>>>>> While the second one looks attractive from a performance
>>>>>>>>>> perspective (it saves writing to and reading from the tape),
>>>>>>>>>> it would make the merge code ugly by using mixed tapes. The
>>>>>>>>>> read calls for a connection and a logical tape are
>>>>>>>>>> different, and we would need both on the logical tape where
>>>>>>>>>> the final result is materialized. So I am thinking of going
>>>>>>>>>> with 1; in fact, to have the same code handle remote sort,
>>>>>>>>>> of using 1 in all cases (whether or not materialization is
>>>>>>>>>> required).
>>>>>>>>>>
>>>>>>>>>> Had the original authors of the remote sort code thought
>>>>>>>>>> about this materialization? Anything they can share on this
>>>>>>>>>> topic? Any comments?
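As a concrete illustration of the approach being reviewed (materialise
each datanode's sorted run, then merge by repeatedly taking the
smallest head element), here is a minimal, self-contained C sketch.
Plain in-memory arrays stand in for the logical tapes; this is the
shape of the merge step only, not the actual tuplesort/logical-tape
code from the patch:

    #include <stdio.h>

    #define NRUNS 3                 /* one sorted run per datanode */

    int main(void)
    {
        /* Each "tape" holds one node's already-sorted run. */
        int runs[NRUNS][4] = {
            { 1, 5,  9, 13 },
            { 2, 6, 10, 14 },
            { 3, 7, 11, 15 },
        };
        int len[NRUNS] = { 4, 4, 4 };
        int pos[NRUNS] = { 0, 0, 0 };

        /* Merge: each step emits the smallest current head element. */
        for (;;)
        {
            int best = -1;

            for (int i = 0; i < NRUNS; i++)
            {
                if (pos[i] < len[i] &&
                    (best < 0 || runs[i][pos[i]] < runs[best][pos[best]]))
                    best = i;
            }
            if (best < 0)
                break;              /* all runs exhausted */
            printf("%d\n", runs[best][pos[best]++]);
        }
        return 0;
    }

Run as-is, this prints 1 through 15 in order, which is exactly the
"choose the smallest row from the connections" behaviour described in
the quoted proposal, just with the runs materialised first.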
From: Abbas B. <abb...@en...> - 2013-04-18 11:06:30
On Thu, Apr 18, 2013 at 10:39 AM, Ashutosh Bapat <ash...@en...> wrote:
> Hi Abbas,
> Thanks for the corrections. I will use those and commit the patch.
>
> Regarding in-memory sort: unfortunately, all the sorting here is
> tape-based sorting, so there is no real in-memory sorting. Well, as
> long as the rows fit in a cache it's still in-memory, but with some
> overhead. The next steps here could be to use statistics (so we have
> to make them available) and, if the estimated number of rows is
> small, start with an in-memory sort. But that will need some more
> work.

Understood.
From: Ashutosh B. <ash...@en...> - 2013-04-18 05:39:29
Hi Abbas,

Thanks for the corrections. I will use those and commit the patch.

Regarding in-memory sort: unfortunately, all the sorting here is
tape-based sorting, so there is no real in-memory sorting. Well, as
long as the rows fit in a cache it's still in-memory, but with some
overhead. The next steps here could be to use statistics (so we have
to make them available) and, if the estimated number of rows is small,
start with an in-memory sort. But that will need some more work.

On Thu, Apr 18, 2013 at 1:37 AM, Abbas Butt <abb...@en...> wrote:
> Hi,
> Here is the review of the patch.
>
> Overall the patch is good to go. I have reviewed the code and found
> some minor errors, which I corrected; the revised patch is attached.
> [...]
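The statistics-driven choice Ashutosh sketches above could, in
principle, reduce to a comparison like the one below. Every name here
(choose_sort_strategy, est_rows, avg_width, work_mem_bytes) is
hypothetical; nothing like this exists in the XC source yet. It is
just the shape of the heuristic:

    typedef enum
    {
        SORT_IN_MEMORY,         /* estimated result fits in work_mem */
        SORT_ON_TAPES           /* fall back to tape-based merge sort */
    } SortStrategy;

    /*
     * est_rows and avg_width would come from planner statistics once
     * they are made available at this layer; work_mem_bytes from the
     * work_mem setting. All names are illustrative.
     */
    static SortStrategy
    choose_sort_strategy(double est_rows, int avg_width,
                         long work_mem_bytes)
    {
        double est_bytes = est_rows * (double) avg_width;

        return (est_bytes <= (double) work_mem_bytes)
               ? SORT_IN_MEMORY
               : SORT_ON_TAPES;
    }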
From: Ashutosh B. <ash...@en...> - 2013-04-18 03:43:38
Did you measure the performance? On Thu, Apr 18, 2013 at 9:02 AM, Abbas Butt <abb...@en...>wrote: > > > On Thu, Apr 18, 2013 at 1:07 AM, Abbas Butt <abb...@en...>wrote: > >> Hi, >> Here is the review of the patch. >> >> Overall the patch is good to go. I have reviewed the code and found some >> minor errors, which I corrected and have attached the revised patch with >> the mail. >> >> I have tested both the cases when the sort happens in memory and when it >> happens using disk and found both working. >> >> I agree that the approach used in the patch is cleaner and has smaller >> footprint. >> >> I have corrected some white space errors and an unintentional change in >> function set_dbcleanup_callback >> git apply /home/edb/Desktop/MergeSort/xc_sort.patch >> /home/edb/Desktop/MergeSort/xc_sort.patch:539: trailing whitespace. >> void *fparams; >> /home/edb/Desktop/MergeSort/xc_sort.patch:1012: trailing whitespace. >> >> /home/edb/Desktop/MergeSort/xc_sort.patch:1018: trailing whitespace. >> >> /home/edb/Desktop/MergeSort/xc_sort.patch:1087: trailing whitespace. >> /* >> /home/edb/Desktop/MergeSort/xc_sort.patch:1228: trailing whitespace. >> size_t len, Oid msgnode_oid, >> warning: 5 lines add whitespace errors. >> >> I am leaving a query running for tonight which would sort 10M rows of a >> distributed table and would return top 100 of them. I would report its >> outcome tomorrow morning. >> > > It worked, here is the test case > > 1. create table test1 (id integer primary key , padding text); > 2. Load 10M rows > 3. select id from test1 order by 1 limit 100 > > > > >> >> Best Regards >> >> >> On Mon, Apr 1, 2013 at 11:02 AM, Koichi Suzuki <koi...@gm... >> > wrote: >> >>> Thanks. Then 90% improvement means about 53% of the duration, while 50% >>> means 67% of it. Number of queries in a given duration is 190 vs. 150, >>> difference is 40. >>> >>> Considering the needed resource, it may be okay to begin with >>> materialization. >>> >>> Any other inputs? >>> ---------- >>> Koichi Suzuki >>> >>> >>> 2013/4/1 Ashutosh Bapat <ash...@en...> >>> >>>> >>>> >>>> On Mon, Apr 1, 2013 at 10:59 AM, Koichi Suzuki < >>>> koi...@gm...> wrote: >>>> >>>>> I understand materialize everything makes code clearer and >>>>> implementation becomes simpler and better structured. >>>>> >>>>> What do you mean by x% improvement? Does 90% improvement mean the >>>>> total duration is 10% of the original? >>>>> >>>> x% improvement means, duration reduces to 100/(100+x) as compared to >>>> the non-pushdown scenario. Or in simpler words, we see (100+x) queries >>>> being completed by pushdown approach in the same time in which nonpushdown >>>> approach completes 100 queries. >>>> >>>>> ---------- >>>>> Koichi Suzuki >>>>> >>>>> >>>>> 2013/3/29 Ashutosh Bapat <ash...@en...> >>>>> >>>>>> Hi All, >>>>>> I measured the scale up for both approaches - a. using datanode >>>>>> connections as tapes (existing one) b. materialising result on tapes before >>>>>> merging (the approach I proposed). For 1M rows, 5 coordinators I have found >>>>>> that approach (a) gives 90% improvement whereas approach (b) gives 50% >>>>>> improvement. Although the difference is significant, I feel that approach >>>>>> (b) is much cleaner than approach (a) and doesn't have large footprint >>>>>> compared to PG code and it takes care of all the cases like 1. >>>>>> materialising sorted result, 2. takes care of any number of datanode >>>>>> connections without memory overrun. 
It's possible to improve it further if >>>>>> we avoid materialisation of datanode result in tuplestore. >>>>>> >>>>>> Patch attached for reference. >>>>>> >>>>>> On Tue, Mar 26, 2013 at 10:38 AM, Ashutosh Bapat < >>>>>> ash...@en...> wrote: >>>>>> >>>>>>> >>>>>>> >>>>>>> On Tue, Mar 26, 2013 at 10:19 AM, Koichi Suzuki < >>>>>>> koi...@gm...> wrote: >>>>>>> >>>>>>>> On thing we should think for option 1 is: >>>>>>>> >>>>>>>> When a number of the result is huge, applications has to wait long >>>>>>>> time until they get the first row. Because this option may need >>>>>>>> disk >>>>>>>> write, total resource consumption will be larger. >>>>>>>> >>>>>>>> >>>>>>> Yes, I am aware of this fact. Please read the next paragraph and you >>>>>>> will see that the current situation is no better. >>>>>>> >>>>>>> >>>>>>>> I'm wondering if we can use "cursor" at database so that we can read >>>>>>>> each tape more simply, I mean, to leave each query node open and >>>>>>>> read >>>>>>>> next row from any query node. >>>>>>>> >>>>>>>> >>>>>>> We do that right now. But because of such a simulated cursor (it's >>>>>>> not cursor per say, but we just fetch the required result from connection >>>>>>> as the demand arises in merging runs), we observer following things >>>>>>> >>>>>>> If the plan has multiple remote query nodes (as there will be in >>>>>>> case of merge join), we assign the same connection to these nodes. Before >>>>>>> this assignment, the result from the previous connection is materialised at >>>>>>> the coordinator. This means that, when we will get huge result from the >>>>>>> datanode, it will be materialised (which will have the more cost as >>>>>>> materialising it on tape, as this materialisation happens in a linked list, >>>>>>> which is not optimized). We need to share connection between more than one >>>>>>> RemoteQuery node because same transaction can not work on two connections >>>>>>> to same server. Not only performance, but the code has become ugly because >>>>>>> of this approach. At various places in executor, we have special handling >>>>>>> for sorting, which needs to be maintained. >>>>>>> >>>>>>> Instead if we materialise all the result on tape and then proceed >>>>>>> with step D5 in Knuth's algorithm for polyphase merge sort, the code will >>>>>>> be much simpler and we won't loose much performance. In fact, we might be >>>>>>> able to leverage fetching bulk data on connection which can be materialised >>>>>>> on tape in bulk. >>>>>>> >>>>>>> >>>>>>>> Regards; >>>>>>>> ---------- >>>>>>>> Koichi Suzuki >>>>>>>> >>>>>>>> >>>>>>>> 2013/3/25 Ashutosh Bapat <ash...@en...>: >>>>>>>> > Hi All, >>>>>>>> > I am working on using remote sorting for merge joins. The idea is >>>>>>>> while >>>>>>>> > using merge join at the coordinator, get the data sorted from the >>>>>>>> datanodes; >>>>>>>> > for replicated relations, we can get all the rows sorted and for >>>>>>>> distributed >>>>>>>> > tables we have to get sorted runs which can be merged at the >>>>>>>> coordinator. >>>>>>>> > For merge join the sorted inner relation needs to be randomly >>>>>>>> accessible. >>>>>>>> > For replicated relations this can be achieved by materialising >>>>>>>> the result. >>>>>>>> > But for distributed relations, we do not materialise the sorted >>>>>>>> result at >>>>>>>> > coordinator but compute the sorted result by merging the sorted >>>>>>>> results from >>>>>>>> > individual nodes on the fly. 
For distributed relations, the >>>>>>>> connection to >>>>>>>> > the datanodes themselves are used as logical tapes (which provide >>>>>>>> the sorted >>>>>>>> > runs). The final result is computed on the fly by choosing the >>>>>>>> smallest or >>>>>>>> > greatest row (as required) from the connections. >>>>>>>> > >>>>>>>> > For a Sort node the materialised result can reside in memory (if >>>>>>>> it fits >>>>>>>> > there) or on one of the logical tapes used for merge sort. So, in >>>>>>>> order to >>>>>>>> > provide random access to the sorted result, we need to >>>>>>>> materialise the >>>>>>>> > result either in the memory or on the logical tape. In-memory >>>>>>>> > materialisation is not easily possible since we have already >>>>>>>> resorted for >>>>>>>> > tape based sort, in case of distributed relations and to >>>>>>>> materialise the >>>>>>>> > result on tape, there is no logical tape available in current >>>>>>>> algorithm. To >>>>>>>> > make it work, there are following possible ways >>>>>>>> > >>>>>>>> > 1. When random access is required, materialise the sorted runs >>>>>>>> from >>>>>>>> > individual nodes onto tapes (one tape for each node) and then >>>>>>>> merge them on >>>>>>>> > one extra tape, which can be used for materialisation. >>>>>>>> > 2. Use a mix of connections and logical tape in the same tape >>>>>>>> set. Merge the >>>>>>>> > sorted runs from connections on a logical tape in the same >>>>>>>> logical tape set. >>>>>>>> > >>>>>>>> > While the second one looks attractive from performance >>>>>>>> perspective (it saves >>>>>>>> > writing and reading from the tape), it would make the merge code >>>>>>>> ugly by >>>>>>>> > using mixed tapes. The read calls for connection and logical tape >>>>>>>> are >>>>>>>> > different and we will need both on the logical tape where the >>>>>>>> final result >>>>>>>> > is materialized. So, I am thinking of going with 1, in fact, to >>>>>>>> have same >>>>>>>> > code to handle remote sort, use 1 in all cases (whether or not >>>>>>>> > materialization is required). >>>>>>>> > >>>>>>>> > Had original authors of remote sort code thought about this >>>>>>>> materialization? >>>>>>>> > Anything they can share on this topic? >>>>>>>> > Any comment? >>>>>>>> > -- >>>>>>>> > Best Wishes, >>>>>>>> > Ashutosh Bapat >>>>>>>> > EntepriseDB Corporation >>>>>>>> > The Enterprise Postgres Company >>>>>>>> > >>>>>>>> > >>>>>>>> ------------------------------------------------------------------------------ >>>>>>>> > Everyone hates slow websites. So do we. >>>>>>>> > Make your web apps faster with AppDynamics >>>>>>>> > Download AppDynamics Lite for free today: >>>>>>>> > http://p.sf.net/sfu/appdyn_d2d_mar >>>>>>>> > _______________________________________________ >>>>>>>> > Postgres-xc-developers mailing list >>>>>>>> > Pos...@li... 
>>>>>>>> > >>>>>>>> https://lists.sourceforge.net/lists/listinfo/postgres-xc-developers >>>>>>>> > >>>>>>>> >>>>>>> >>>>>>> >>>>>>> >>>>>>> -- >>>>>>> Best Wishes, >>>>>>> Ashutosh Bapat >>>>>>> EntepriseDB Corporation >>>>>>> The Enterprise Postgres Company >>>>>>> >>>>>> >>>>>> >>>>>> >>>>>> -- >>>>>> Best Wishes, >>>>>> Ashutosh Bapat >>>>>> EntepriseDB Corporation >>>>>> The Enterprise Postgres Company >>>>>> >>>>> >>>>> >>>> >>>> >>>> -- >>>> Best Wishes, >>>> Ashutosh Bapat >>>> EntepriseDB Corporation >>>> The Enterprise Postgres Company >>>> >>> >>> >>> >>> ------------------------------------------------------------------------------ >>> Own the Future-Intel® Level Up Game Demo Contest 2013 >>> Rise to greatness in Intel's independent game demo contest. >>> Compete for recognition, cash, and the chance to get your game >>> on Steam. $5K grand prize plus 10 genre and skill prizes. >>> Submit your demo by 6/6/13. http://p.sf.net/sfu/intel_levelupd2d >>> _______________________________________________ >>> Postgres-xc-developers mailing list >>> Pos...@li... >>> https://lists.sourceforge.net/lists/listinfo/postgres-xc-developers >>> >>> >> >> >> -- >> -- >> Abbas >> Architect >> EnterpriseDB Corporation >> The Enterprise PostgreSQL Company >> >> Phone: 92-334-5100153 >> >> Website: www.enterprisedb.com >> EnterpriseDB Blog: http://blogs.enterprisedb.com/ >> Follow us on Twitter: http://www.twitter.com/enterprisedb >> >> This e-mail message (and any attachment) is intended for the use of >> the individual or entity to whom it is addressed. This message >> contains information from EnterpriseDB Corporation that may be >> privileged, confidential, or exempt from disclosure under applicable >> law. If you are not the intended recipient or authorized to receive >> this for the intended recipient, any use, dissemination, distribution, >> retention, archiving, or copying of this communication is strictly >> prohibited. If you have received this e-mail in error, please notify >> the sender immediately by reply e-mail and delete this message. >> > > > > -- > -- > Abbas > Architect > EnterpriseDB Corporation > The Enterprise PostgreSQL Company > > Phone: 92-334-5100153 > > Website: www.enterprisedb.com > EnterpriseDB Blog: http://blogs.enterprisedb.com/ > Follow us on Twitter: http://www.twitter.com/enterprisedb > > This e-mail message (and any attachment) is intended for the use of > the individual or entity to whom it is addressed. This message > contains information from EnterpriseDB Corporation that may be > privileged, confidential, or exempt from disclosure under applicable > law. If you are not the intended recipient or authorized to receive > this for the intended recipient, any use, dissemination, distribution, > retention, archiving, or copying of this communication is strictly > prohibited. If you have received this e-mail in error, please notify > the sender immediately by reply e-mail and delete this message. > -- Best Wishes, Ashutosh Bapat EntepriseDB Corporation The Enterprise Postgres Company |
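To make the discussion above concrete, here is a minimal sketch of the kind of query the thread is about. The table name and distribution are hypothetical; the point is that each datanode can return its share of the rows already ordered, so the coordinator only has to merge sorted runs (from the connections today, or from tapes under option 1):

CREATE TABLE sorted_src (id int, val text) DISTRIBUTE BY HASH(id);
-- Each datanode returns its rows ORDER BY id (a sorted run); the
-- coordinator picks the smallest row across the runs to build the result.
SELECT id, val FROM sorted_src ORDER BY id;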
From: Abbas B. <abb...@en...> - 2013-04-18 03:32:39
|
On Thu, Apr 18, 2013 at 1:07 AM, Abbas Butt <abb...@en...>wrote: > Hi, > Here is the review of the patch. > > Overall the patch is good to go. I have reviewed the code and found some > minor errors, which I corrected and have attached the revised patch with > the mail. > > I have tested both the cases when the sort happens in memory and when it > happens using disk and found both working. > > I agree that the approach used in the patch is cleaner and has smaller > footprint. > > I have corrected some white space errors and an unintentional change in > function set_dbcleanup_callback > git apply /home/edb/Desktop/MergeSort/xc_sort.patch > /home/edb/Desktop/MergeSort/xc_sort.patch:539: trailing whitespace. > void *fparams; > /home/edb/Desktop/MergeSort/xc_sort.patch:1012: trailing whitespace. > > /home/edb/Desktop/MergeSort/xc_sort.patch:1018: trailing whitespace. > > /home/edb/Desktop/MergeSort/xc_sort.patch:1087: trailing whitespace. > /* > /home/edb/Desktop/MergeSort/xc_sort.patch:1228: trailing whitespace. > size_t len, Oid msgnode_oid, > warning: 5 lines add whitespace errors. > > I am leaving a query running for tonight which would sort 10M rows of a > distributed table and would return top 100 of them. I would report its > outcome tomorrow morning. > It worked, here is the test case 1. create table test1 (id integer primary key , padding text); 2. Load 10M rows 3. select id from test1 order by 1 limit 100 > > Best Regards > > > On Mon, Apr 1, 2013 at 11:02 AM, Koichi Suzuki <koi...@gm...>wrote: > >> Thanks. Then 90% improvement means about 53% of the duration, while 50% >> means 67% of it. Number of queries in a given duration is 190 vs. 150, >> difference is 40. >> >> Considering the needed resource, it may be okay to begin with >> materialization. >> >> Any other inputs? >> ---------- >> Koichi Suzuki >> >> >> 2013/4/1 Ashutosh Bapat <ash...@en...> >> >>> >>> >>> On Mon, Apr 1, 2013 at 10:59 AM, Koichi Suzuki < >>> koi...@gm...> wrote: >>> >>>> I understand materialize everything makes code clearer and >>>> implementation becomes simpler and better structured. >>>> >>>> What do you mean by x% improvement? Does 90% improvement mean the >>>> total duration is 10% of the original? >>>> >>> x% improvement means, duration reduces to 100/(100+x) as compared to the >>> non-pushdown scenario. Or in simpler words, we see (100+x) queries being >>> completed by pushdown approach in the same time in which nonpushdown >>> approach completes 100 queries. >>> >>>> ---------- >>>> Koichi Suzuki >>>> >>>> >>>> 2013/3/29 Ashutosh Bapat <ash...@en...> >>>> >>>>> Hi All, >>>>> I measured the scale up for both approaches - a. using datanode >>>>> connections as tapes (existing one) b. materialising result on tapes before >>>>> merging (the approach I proposed). For 1M rows, 5 coordinators I have found >>>>> that approach (a) gives 90% improvement whereas approach (b) gives 50% >>>>> improvement. Although the difference is significant, I feel that approach >>>>> (b) is much cleaner than approach (a) and doesn't have large footprint >>>>> compared to PG code and it takes care of all the cases like 1. >>>>> materialising sorted result, 2. takes care of any number of datanode >>>>> connections without memory overrun. It's possible to improve it further if >>>>> we avoid materialisation of datanode result in tuplestore. >>>>> >>>>> Patch attached for reference. 
>>>>> >>>>> On Tue, Mar 26, 2013 at 10:38 AM, Ashutosh Bapat < >>>>> ash...@en...> wrote: >>>>> >>>>>> >>>>>> >>>>>> On Tue, Mar 26, 2013 at 10:19 AM, Koichi Suzuki < >>>>>> koi...@gm...> wrote: >>>>>> >>>>>>> On thing we should think for option 1 is: >>>>>>> >>>>>>> When a number of the result is huge, applications has to wait long >>>>>>> time until they get the first row. Because this option may need disk >>>>>>> write, total resource consumption will be larger. >>>>>>> >>>>>>> >>>>>> Yes, I am aware of this fact. Please read the next paragraph and you >>>>>> will see that the current situation is no better. >>>>>> >>>>>> >>>>>>> I'm wondering if we can use "cursor" at database so that we can read >>>>>>> each tape more simply, I mean, to leave each query node open and read >>>>>>> next row from any query node. >>>>>>> >>>>>>> >>>>>> We do that right now. But because of such a simulated cursor (it's >>>>>> not cursor per say, but we just fetch the required result from connection >>>>>> as the demand arises in merging runs), we observer following things >>>>>> >>>>>> If the plan has multiple remote query nodes (as there will be in case >>>>>> of merge join), we assign the same connection to these nodes. Before this >>>>>> assignment, the result from the previous connection is materialised at the >>>>>> coordinator. This means that, when we will get huge result from the >>>>>> datanode, it will be materialised (which will have the more cost as >>>>>> materialising it on tape, as this materialisation happens in a linked list, >>>>>> which is not optimized). We need to share connection between more than one >>>>>> RemoteQuery node because same transaction can not work on two connections >>>>>> to same server. Not only performance, but the code has become ugly because >>>>>> of this approach. At various places in executor, we have special handling >>>>>> for sorting, which needs to be maintained. >>>>>> >>>>>> Instead if we materialise all the result on tape and then proceed >>>>>> with step D5 in Knuth's algorithm for polyphase merge sort, the code will >>>>>> be much simpler and we won't loose much performance. In fact, we might be >>>>>> able to leverage fetching bulk data on connection which can be materialised >>>>>> on tape in bulk. >>>>>> >>>>>> >>>>>>> Regards; >>>>>>> ---------- >>>>>>> Koichi Suzuki >>>>>>> >>>>>>> >>>>>>> 2013/3/25 Ashutosh Bapat <ash...@en...>: >>>>>>> > Hi All, >>>>>>> > I am working on using remote sorting for merge joins. The idea is >>>>>>> while >>>>>>> > using merge join at the coordinator, get the data sorted from the >>>>>>> datanodes; >>>>>>> > for replicated relations, we can get all the rows sorted and for >>>>>>> distributed >>>>>>> > tables we have to get sorted runs which can be merged at the >>>>>>> coordinator. >>>>>>> > For merge join the sorted inner relation needs to be randomly >>>>>>> accessible. >>>>>>> > For replicated relations this can be achieved by materialising the >>>>>>> result. >>>>>>> > But for distributed relations, we do not materialise the sorted >>>>>>> result at >>>>>>> > coordinator but compute the sorted result by merging the sorted >>>>>>> results from >>>>>>> > individual nodes on the fly. For distributed relations, the >>>>>>> connection to >>>>>>> > the datanodes themselves are used as logical tapes (which provide >>>>>>> the sorted >>>>>>> > runs). The final result is computed on the fly by choosing the >>>>>>> smallest or >>>>>>> > greatest row (as required) from the connections. 
>>>>>>> > >>>>>>> > For a Sort node the materialised result can reside in memory (if >>>>>>> it fits >>>>>>> > there) or on one of the logical tapes used for merge sort. So, in >>>>>>> order to >>>>>>> > provide random access to the sorted result, we need to materialise >>>>>>> the >>>>>>> > result either in the memory or on the logical tape. In-memory >>>>>>> > materialisation is not easily possible since we have already >>>>>>> resorted for >>>>>>> > tape based sort, in case of distributed relations and to >>>>>>> materialise the >>>>>>> > result on tape, there is no logical tape available in current >>>>>>> algorithm. To >>>>>>> > make it work, there are following possible ways >>>>>>> > >>>>>>> > 1. When random access is required, materialise the sorted runs from >>>>>>> > individual nodes onto tapes (one tape for each node) and then >>>>>>> merge them on >>>>>>> > one extra tape, which can be used for materialisation. >>>>>>> > 2. Use a mix of connections and logical tape in the same tape set. >>>>>>> Merge the >>>>>>> > sorted runs from connections on a logical tape in the same logical >>>>>>> tape set. >>>>>>> > >>>>>>> > While the second one looks attractive from performance perspective >>>>>>> (it saves >>>>>>> > writing and reading from the tape), it would make the merge code >>>>>>> ugly by >>>>>>> > using mixed tapes. The read calls for connection and logical tape >>>>>>> are >>>>>>> > different and we will need both on the logical tape where the >>>>>>> final result >>>>>>> > is materialized. So, I am thinking of going with 1, in fact, to >>>>>>> have same >>>>>>> > code to handle remote sort, use 1 in all cases (whether or not >>>>>>> > materialization is required). >>>>>>> > >>>>>>> > Had original authors of remote sort code thought about this >>>>>>> materialization? >>>>>>> > Anything they can share on this topic? >>>>>>> > Any comment? >>>>>>> > -- >>>>>>> > Best Wishes, >>>>>>> > Ashutosh Bapat >>>>>>> > EntepriseDB Corporation >>>>>>> > The Enterprise Postgres Company >>>>>>> > >>>>>>> > >>>>>>> ------------------------------------------------------------------------------ >>>>>>> > Everyone hates slow websites. So do we. >>>>>>> > Make your web apps faster with AppDynamics >>>>>>> > Download AppDynamics Lite for free today: >>>>>>> > http://p.sf.net/sfu/appdyn_d2d_mar >>>>>>> > _______________________________________________ >>>>>>> > Postgres-xc-developers mailing list >>>>>>> > Pos...@li... >>>>>>> > >>>>>>> https://lists.sourceforge.net/lists/listinfo/postgres-xc-developers >>>>>>> > >>>>>>> >>>>>> >>>>>> >>>>>> >>>>>> -- >>>>>> Best Wishes, >>>>>> Ashutosh Bapat >>>>>> EntepriseDB Corporation >>>>>> The Enterprise Postgres Company >>>>>> >>>>> >>>>> >>>>> >>>>> -- >>>>> Best Wishes, >>>>> Ashutosh Bapat >>>>> EntepriseDB Corporation >>>>> The Enterprise Postgres Company >>>>> >>>> >>>> >>> >>> >>> -- >>> Best Wishes, >>> Ashutosh Bapat >>> EntepriseDB Corporation >>> The Enterprise Postgres Company >>> >> >> >> >> ------------------------------------------------------------------------------ >> Own the Future-Intel® Level Up Game Demo Contest 2013 >> Rise to greatness in Intel's independent game demo contest. >> Compete for recognition, cash, and the chance to get your game >> on Steam. $5K grand prize plus 10 genre and skill prizes. >> Submit your demo by 6/6/13. 
http://p.sf.net/sfu/intel_levelupd2d >> _______________________________________________ >> Postgres-xc-developers mailing list >> Pos...@li... >> https://lists.sourceforge.net/lists/listinfo/postgres-xc-developers >> >> > > > -- > -- > Abbas > Architect > EnterpriseDB Corporation > The Enterprise PostgreSQL Company > > Phone: 92-334-5100153 > > Website: www.enterprisedb.com > EnterpriseDB Blog: http://blogs.enterprisedb.com/ > Follow us on Twitter: http://www.twitter.com/enterprisedb > > This e-mail message (and any attachment) is intended for the use of > the individual or entity to whom it is addressed. This message > contains information from EnterpriseDB Corporation that may be > privileged, confidential, or exempt from disclosure under applicable > law. If you are not the intended recipient or authorized to receive > this for the intended recipient, any use, dissemination, distribution, > retention, archiving, or copying of this communication is strictly > prohibited. If you have received this e-mail in error, please notify > the sender immediately by reply e-mail and delete this message. > -- -- Abbas Architect EnterpriseDB Corporation The Enterprise PostgreSQL Company Phone: 92-334-5100153 Website: www.enterprisedb.com EnterpriseDB Blog: http://blogs.enterprisedb.com/ Follow us on Twitter: http://www.twitter.com/enterprisedb This e-mail message (and any attachment) is intended for the use of the individual or entity to whom it is addressed. This message contains information from EnterpriseDB Corporation that may be privileged, confidential, or exempt from disclosure under applicable law. If you are not the intended recipient or authorized to receive this for the intended recipient, any use, dissemination, distribution, retention, archiving, or copying of this communication is strictly prohibited. If you have received this e-mail in error, please notify the sender immediately by reply e-mail and delete this message. |
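A sketch of the test reported above. The CREATE TABLE and the final query are taken from the message; the INSERT used to load the 10M rows is an assumption (the message only says the rows were loaded), and the table relies on XC's default distribution:

CREATE TABLE test1 (id integer PRIMARY KEY, padding text);
-- Hypothetical bulk load of the 10M rows:
INSERT INTO test1 SELECT i, repeat('x', 100) FROM generate_series(1, 10000000) i;
-- The overnight query: sorts the distributed rows and returns the top 100.
SELECT id FROM test1 ORDER BY 1 LIMIT 100;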
From: Abbas B. <abb...@en...> - 2013-04-17 21:13:29
|
While doing some testing I found that just checking res == LOCKACQUIRE_OK is not enough; the function should succeed if the lock is already held too, hence the condition should be (res == LOCKACQUIRE_OK || res == LOCKACQUIRE_ALREADY_HELD). The attached patch corrects this problem. On Mon, Apr 8, 2013 at 11:23 AM, Abbas Butt <abb...@en...>wrote: > Thanks. I will commit it later today. > > > On Mon, Apr 8, 2013 at 9:52 AM, Amit Khandekar < > ami...@en...> wrote: > >> Hi Abbas, >> >> The patch looks good to go. >> >> -Amit >> >> >> On 6 April 2013 01:02, Abbas Butt <abb...@en...> wrote: >> >>> Hi, >>> >>> Consider this test case when run on a single coordinator cluster. >>> >>> From one session acquire a lock >>> >>> edb@edb-virtual-machine:/usr/local/pgsql/bin$ ./psql postgres >>> psql (PGXC 1.1devel, based on PG 9.2beta2) >>> Type "help" for help. >>> >>> postgres=# select pg_try_advisory_lock(1234,5678); >>> pg_try_advisory_lock >>> ---------------------- >>> t >>> (1 row) >>> >>> >>> and from another terminal try to acquire the same lock >>> >>> edb@edb-virtual-machine:/usr/local/pgsql/bin$ ./psql postgres >>> psql (PGXC 1.1devel, based on PG 9.2beta2) >>> Type "help" for help. >>> >>> postgres=# select pg_try_advisory_lock(1234,5678); >>> pg_try_advisory_lock >>> ---------------------- >>> t >>> (1 row) >>> >>> Note that the second request succeeds even though the lock is already held >>> by the first session. >>> >>> The problem is that pgxc_advisory_lock ignores the return value of the >>> LockAcquire function in the case of a single coordinator. >>> The attached patch corrects the problem. >>> >>> Comments are welcome. >>> >>> >>> -- >>> Abbas >>> Architect >>> EnterpriseDB Corporation >>> The Enterprise PostgreSQL Company >>> >>> Phone: 92-334-5100153 >>> >>> Website: www.enterprisedb.com >>> EnterpriseDB Blog: http://blogs.enterprisedb.com/ >>> Follow us on Twitter: http://www.twitter.com/enterprisedb >>> >>> This e-mail message (and any attachment) is intended for the use of >>> the individual or entity to whom it is addressed. This message >>> contains information from EnterpriseDB Corporation that may be >>> privileged, confidential, or exempt from disclosure under applicable >>> law. If you are not the intended recipient or authorized to receive >>> this for the intended recipient, any use, dissemination, distribution, >>> retention, archiving, or copying of this communication is strictly >>> prohibited. If you have received this e-mail in error, please notify >>> the sender immediately by reply e-mail and delete this message. >>> >>> ------------------------------------------------------------------------------ >>> Minimize network downtime and maximize team effectiveness. >>> Reduce network management and security costs. Learn how to hire >>> the most talented Cisco Certified professionals. Visit the >>> Employer Resources Portal >>> http://www.cisco.com/web/learning/employer_resources/index.html >>> _______________________________________________ >>> Postgres-xc-developers mailing list >>> Pos...@li... 
>>> https://lists.sourceforge.net/lists/listinfo/postgres-xc-developers >>> >>> >> > > > -- > -- > Abbas > Architect > EnterpriseDB Corporation > The Enterprise PostgreSQL Company > > Phone: 92-334-5100153 > > Website: www.enterprisedb.com > EnterpriseDB Blog: http://blogs.enterprisedb.com/ > Follow us on Twitter: http://www.twitter.com/enterprisedb > > This e-mail message (and any attachment) is intended for the use of > the individual or entity to whom it is addressed. This message > contains information from EnterpriseDB Corporation that may be > privileged, confidential, or exempt from disclosure under applicable > law. If you are not the intended recipient or authorized to receive > this for the intended recipient, any use, dissemination, distribution, > retention, archiving, or copying of this communication is strictly > prohibited. If you have received this e-mail in error, please notify > the sender immediately by reply e-mail and delete this message. > -- -- Abbas Architect EnterpriseDB Corporation The Enterprise PostgreSQL Company Phone: 92-334-5100153 Website: www.enterprisedb.com EnterpriseDB Blog: http://blogs.enterprisedb.com/ Follow us on Twitter: http://www.twitter.com/enterprisedb This e-mail message (and any attachment) is intended for the use of the individual or entity to whom it is addressed. This message contains information from EnterpriseDB Corporation that may be privileged, confidential, or exempt from disclosure under applicable law. If you are not the intended recipient or authorized to receive this for the intended recipient, any use, dissemination, distribution, retention, archiving, or copying of this communication is strictly prohibited. If you have received this e-mail in error, please notify the sender immediately by reply e-mail and delete this message. |
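For reference, this is the behavior the fix should produce. The second session's result below is the expected outcome once the patch is applied, not an observed run:

-- Session 1:
SELECT pg_try_advisory_lock(1234, 5678);  -- t: lock acquired
-- Session 2, with the corrected check in pgxc_advisory_lock:
SELECT pg_try_advisory_lock(1234, 5678);  -- should now return f: held by session 1
-- Session 1, when done:
SELECT pg_advisory_unlock(1234, 5678);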
From: Abbas B. <abb...@en...> - 2013-04-17 20:08:01
|
Hi, Here is the review of the patch. Overall the patch is good to go. I have reviewed the code and found some minor errors, which I corrected and have attached the revised patch with the mail. I have tested both the cases when the sort happens in memory and when it happens using disk and found both working. I agree that the approach used in the patch is cleaner and has smaller footprint. I have corrected some white space errors and an unintentional change in function set_dbcleanup_callback git apply /home/edb/Desktop/MergeSort/xc_sort.patch /home/edb/Desktop/MergeSort/xc_sort.patch:539: trailing whitespace. void *fparams; /home/edb/Desktop/MergeSort/xc_sort.patch:1012: trailing whitespace. /home/edb/Desktop/MergeSort/xc_sort.patch:1018: trailing whitespace. /home/edb/Desktop/MergeSort/xc_sort.patch:1087: trailing whitespace. /* /home/edb/Desktop/MergeSort/xc_sort.patch:1228: trailing whitespace. size_t len, Oid msgnode_oid, warning: 5 lines add whitespace errors. I am leaving a query running for tonight which would sort 10M rows of a distributed table and would return top 100 of them. I would report its outcome tomorrow morning. Best Regards On Mon, Apr 1, 2013 at 11:02 AM, Koichi Suzuki <koi...@gm...>wrote: > Thanks. Then 90% improvement means about 53% of the duration, while 50% > means 67% of it. Number of queries in a given duration is 190 vs. 150, > difference is 40. > > Considering the needed resource, it may be okay to begin with > materialization. > > Any other inputs? > ---------- > Koichi Suzuki > > > 2013/4/1 Ashutosh Bapat <ash...@en...> > >> >> >> On Mon, Apr 1, 2013 at 10:59 AM, Koichi Suzuki <koi...@gm... >> > wrote: >> >>> I understand materialize everything makes code clearer and >>> implementation becomes simpler and better structured. >>> >>> What do you mean by x% improvement? Does 90% improvement mean the >>> total duration is 10% of the original? >>> >> x% improvement means, duration reduces to 100/(100+x) as compared to the >> non-pushdown scenario. Or in simpler words, we see (100+x) queries being >> completed by pushdown approach in the same time in which nonpushdown >> approach completes 100 queries. >> >>> ---------- >>> Koichi Suzuki >>> >>> >>> 2013/3/29 Ashutosh Bapat <ash...@en...> >>> >>>> Hi All, >>>> I measured the scale up for both approaches - a. using datanode >>>> connections as tapes (existing one) b. materialising result on tapes before >>>> merging (the approach I proposed). For 1M rows, 5 coordinators I have found >>>> that approach (a) gives 90% improvement whereas approach (b) gives 50% >>>> improvement. Although the difference is significant, I feel that approach >>>> (b) is much cleaner than approach (a) and doesn't have large footprint >>>> compared to PG code and it takes care of all the cases like 1. >>>> materialising sorted result, 2. takes care of any number of datanode >>>> connections without memory overrun. It's possible to improve it further if >>>> we avoid materialisation of datanode result in tuplestore. >>>> >>>> Patch attached for reference. >>>> >>>> On Tue, Mar 26, 2013 at 10:38 AM, Ashutosh Bapat < >>>> ash...@en...> wrote: >>>> >>>>> >>>>> >>>>> On Tue, Mar 26, 2013 at 10:19 AM, Koichi Suzuki < >>>>> koi...@gm...> wrote: >>>>> >>>>>> On thing we should think for option 1 is: >>>>>> >>>>>> When a number of the result is huge, applications has to wait long >>>>>> time until they get the first row. Because this option may need disk >>>>>> write, total resource consumption will be larger. 
>>>>>> >>>>>> >>>>> Yes, I am aware of this fact. Please read the next paragraph and you >>>>> will see that the current situation is no better. >>>>> >>>>> >>>>>> I'm wondering if we can use "cursor" at database so that we can read >>>>>> each tape more simply, I mean, to leave each query node open and read >>>>>> next row from any query node. >>>>>> >>>>>> >>>>> We do that right now. But because of such a simulated cursor (it's not >>>>> cursor per say, but we just fetch the required result from connection as >>>>> the demand arises in merging runs), we observer following things >>>>> >>>>> If the plan has multiple remote query nodes (as there will be in case >>>>> of merge join), we assign the same connection to these nodes. Before this >>>>> assignment, the result from the previous connection is materialised at the >>>>> coordinator. This means that, when we will get huge result from the >>>>> datanode, it will be materialised (which will have the more cost as >>>>> materialising it on tape, as this materialisation happens in a linked list, >>>>> which is not optimized). We need to share connection between more than one >>>>> RemoteQuery node because same transaction can not work on two connections >>>>> to same server. Not only performance, but the code has become ugly because >>>>> of this approach. At various places in executor, we have special handling >>>>> for sorting, which needs to be maintained. >>>>> >>>>> Instead if we materialise all the result on tape and then proceed with >>>>> step D5 in Knuth's algorithm for polyphase merge sort, the code will be >>>>> much simpler and we won't loose much performance. In fact, we might be able >>>>> to leverage fetching bulk data on connection which can be materialised on >>>>> tape in bulk. >>>>> >>>>> >>>>>> Regards; >>>>>> ---------- >>>>>> Koichi Suzuki >>>>>> >>>>>> >>>>>> 2013/3/25 Ashutosh Bapat <ash...@en...>: >>>>>> > Hi All, >>>>>> > I am working on using remote sorting for merge joins. The idea is >>>>>> while >>>>>> > using merge join at the coordinator, get the data sorted from the >>>>>> datanodes; >>>>>> > for replicated relations, we can get all the rows sorted and for >>>>>> distributed >>>>>> > tables we have to get sorted runs which can be merged at the >>>>>> coordinator. >>>>>> > For merge join the sorted inner relation needs to be randomly >>>>>> accessible. >>>>>> > For replicated relations this can be achieved by materialising the >>>>>> result. >>>>>> > But for distributed relations, we do not materialise the sorted >>>>>> result at >>>>>> > coordinator but compute the sorted result by merging the sorted >>>>>> results from >>>>>> > individual nodes on the fly. For distributed relations, the >>>>>> connection to >>>>>> > the datanodes themselves are used as logical tapes (which provide >>>>>> the sorted >>>>>> > runs). The final result is computed on the fly by choosing the >>>>>> smallest or >>>>>> > greatest row (as required) from the connections. >>>>>> > >>>>>> > For a Sort node the materialised result can reside in memory (if it >>>>>> fits >>>>>> > there) or on one of the logical tapes used for merge sort. So, in >>>>>> order to >>>>>> > provide random access to the sorted result, we need to materialise >>>>>> the >>>>>> > result either in the memory or on the logical tape. 
In-memory >>>>>> > materialisation is not easily possible since we have already >>>>>> resorted for >>>>>> > tape based sort, in case of distributed relations and to >>>>>> materialise the >>>>>> > result on tape, there is no logical tape available in current >>>>>> algorithm. To >>>>>> > make it work, there are following possible ways >>>>>> > >>>>>> > 1. When random access is required, materialise the sorted runs from >>>>>> > individual nodes onto tapes (one tape for each node) and then merge >>>>>> them on >>>>>> > one extra tape, which can be used for materialisation. >>>>>> > 2. Use a mix of connections and logical tape in the same tape set. >>>>>> Merge the >>>>>> > sorted runs from connections on a logical tape in the same logical >>>>>> tape set. >>>>>> > >>>>>> > While the second one looks attractive from performance perspective >>>>>> (it saves >>>>>> > writing and reading from the tape), it would make the merge code >>>>>> ugly by >>>>>> > using mixed tapes. The read calls for connection and logical tape >>>>>> are >>>>>> > different and we will need both on the logical tape where the final >>>>>> result >>>>>> > is materialized. So, I am thinking of going with 1, in fact, to >>>>>> have same >>>>>> > code to handle remote sort, use 1 in all cases (whether or not >>>>>> > materialization is required). >>>>>> > >>>>>> > Had original authors of remote sort code thought about this >>>>>> materialization? >>>>>> > Anything they can share on this topic? >>>>>> > Any comment? >>>>>> > -- >>>>>> > Best Wishes, >>>>>> > Ashutosh Bapat >>>>>> > EntepriseDB Corporation >>>>>> > The Enterprise Postgres Company >>>>>> > >>>>>> > >>>>>> ------------------------------------------------------------------------------ >>>>>> > Everyone hates slow websites. So do we. >>>>>> > Make your web apps faster with AppDynamics >>>>>> > Download AppDynamics Lite for free today: >>>>>> > http://p.sf.net/sfu/appdyn_d2d_mar >>>>>> > _______________________________________________ >>>>>> > Postgres-xc-developers mailing list >>>>>> > Pos...@li... >>>>>> > https://lists.sourceforge.net/lists/listinfo/postgres-xc-developers >>>>>> > >>>>>> >>>>> >>>>> >>>>> >>>>> -- >>>>> Best Wishes, >>>>> Ashutosh Bapat >>>>> EntepriseDB Corporation >>>>> The Enterprise Postgres Company >>>>> >>>> >>>> >>>> >>>> -- >>>> Best Wishes, >>>> Ashutosh Bapat >>>> EntepriseDB Corporation >>>> The Enterprise Postgres Company >>>> >>> >>> >> >> >> -- >> Best Wishes, >> Ashutosh Bapat >> EntepriseDB Corporation >> The Enterprise Postgres Company >> > > > > ------------------------------------------------------------------------------ > Own the Future-Intel® Level Up Game Demo Contest 2013 > Rise to greatness in Intel's independent game demo contest. > Compete for recognition, cash, and the chance to get your game > on Steam. $5K grand prize plus 10 genre and skill prizes. > Submit your demo by 6/6/13. http://p.sf.net/sfu/intel_levelupd2d > _______________________________________________ > Postgres-xc-developers mailing list > Pos...@li... 
> https://lists.sourceforge.net/lists/listinfo/postgres-xc-developers > > -- -- Abbas Architect EnterpriseDB Corporation The Enterprise PostgreSQL Company Phone: 92-334-5100153 Website: www.enterprisedb.com EnterpriseDB Blog: http://blogs.enterprisedb.com/ Follow us on Twitter: http://www.twitter.com/enterprisedb This e-mail message (and any attachment) is intended for the use of the individual or entity to whom it is addressed. This message contains information from EnterpriseDB Corporation that may be privileged, confidential, or exempt from disclosure under applicable law. If you are not the intended recipient or authorized to receive this for the intended recipient, any use, dissemination, distribution, retention, archiving, or copying of this communication is strictly prohibited. If you have received this e-mail in error, please notify the sender immediately by reply e-mail and delete this message. |
From: Ashutosh B. <ash...@en...> - 2013-04-17 11:16:57
|
Hi Amit, Thanks for completing this tedious work. It's pretty complicated. If I understand it right, the patch deals with the following things. For after row triggers, PG stores the fireable triggers as events, with the ctid of the row as a pointer to the row on which the event should be carried out. For INSERT and DELETE there is only one ctid, viz. new or old respectively. For UPDATE, the ctids of both the new and the old row are stored. For some reason (I am not clear about the reasons) after row triggers are fired after queueing the events, and thus we need some storage on the coordinator to store the affected tuples, so that they can be retrieved while firing the triggers. We do not save the entire tuple in the trigger event, to save memory if there are multiple events that need the same tuple. In PG, the ctid of the row suffices to fetch the row from the heap, which acts as the storage itself. In XC, however, we need some storage to store the tuples to be fed to the trigger events, and a pointer for each row stored. This pointer will be saved in the trigger event and will be used to fetch the row. Your patch uses two tuplestores to store the old and new rows respectively. For UPDATE we will use both the tuplestores, but for INSERT we will use only one of them. Here are my comments 1. As I understand it, the tuplestore has a different kind of pointer than ctid, and thus you have created a union in the trigger event structure. Can we use hash-based storage instead of a tuplestore? Hash-based storage would have advantages like a. the existing (ctid, nodeoid) combination can be used as the key in the hash store, thus not requiring any union (but we will need to add an OID member). The same ItemPointer structure can then be used, instead of creating prototypes for XC. b. A hash is ideally random-access storage, unlike a tuplestore, which needs some kind of sequential access. c. At places there is code to first get a pointer into the tuplestore before actually adding the row, which complicates the code. Hash storage will not have this problem since the key is independent of the position in the hash storage. 2. Using two separate tuplestores for new and old tuples is a waste of memory. A tuplestore allocates 1K of memory by default, so having two tuplestores requires double the amount of memory. If, in the worst case, the tuplestores overflow to disk, we will have two files created on the file system, causing two interleaved streams of sequential writes on disk, which will affect performance. Merging them will mean that the same row pointer cannot be used for OLD and NEW, but that should be fine, as PG itself doesn't need that condition. 3. The tuplestore management code is too tightly tied to the static structures in trigger.c. We need to isolate this code in a separate file, so that this approach can be used for other features like constraints if required. Please separate this code into a separate file with well-defined interfaces like functions to add a row to the storage, get its pointer, fetch the row from the storage, delete the row from the storage (?), destroy the storage etc., and use them for the trigger functionality. In the same file, we need a prologue describing the need for these interfaces and a description of the interfaces themselves. In fact, if this infrastructure is also needed in PG, we should put it in PG. 4. While using two tuplestores we have hardcoded the tuplestore indices as 0 and 1. Instead of that, can we use some macros or, even better, different variables for both of them? The same goes for all the 2-sized arrays that are defined for the same purpose. 5. 
Please look at the current trigger regression tests. If they do not cover all the possible test scenarios please add them in the regression. Testing all the scenarios (various combinations of type triggers, DMLs) is critical here. If you find that the current implementation is working fine, all the above points can be taken up later after the 1.1 release. The testing can be take up between beta and GA, and others can be taken up in next release. But it's important to at least study these approaches. Some specific comments 1. In function pgxc_ar_init_rowstore(), we have used palloc0 + memcpy + pfree() instead of repalloc + memzero new entries. Repalloc allows to extend the existing memory allocation without moving the contents (if possible) and has advantage that it wouldn't fail if sum of allocated memory and required memory is greater than available memory but required memory is less than the available memory. So, it's always advantageous to use Repalloc. Why haven't we used repalloc here? 2. Can we extend pgxc_ar_goto_end(), to be goto_anywhere function, where end is a special position. E.g pgxc_ar_goto(ARTupInfo, Pos), where Pos can be a valid index OR a special position END. pgxc_ar_goto(ARTupInfo, END) would act as pgxc_ar_goto_end() and pgxc_ar_goto(ARTupInfo. Pos != END) would replace the tuplestore advance loop in pgxc_ar_dofetch(). The function may accept a flag backwards if that is required. Huh, done at last! On Mon, Apr 15, 2013 at 9:54 AM, Amit Khandekar < ami...@en...> wrote: > >> On Fri, Apr 5, 2013 at 2:38 PM, Amit Khandekar < >> ami...@en...> wrote: >> >>> FYI .. I will use the following document to keep updating the >>> implementation details for "Saving AR trigger rows in tuplestore" : >>> >>> >>> https://docs.google.com/document/d/158IPS9npmfNsOWPN6ZYgPy91aowTUNP7L7Fl9zBBGqs/edit?usp=sharing >>> >> > Attached is the patch to support after-row triggers. The above doc is > updated. Yet to analyse the regression tests. The attached test.sql is the > one I used for unit testing, it is not yet ready to be inserted into > regression suite. I will be working next on the regression and Ashutosh's > comments on before-row triggers > > Also I haven't yet rebased the rowtriggers branch over the new > merge-related changes in the master branch. This patch is over the > rowtriggers branch; I did not push this patch onto the rowtriggers branch > as well, although I intended to do it, but suspected of some possible > issues if I push the rowtriggers branch after the recent merge-related > changes going on in the repository. First I will rebase all the rowtriggers > branch changes onto the new master branch. > > > > > > >>> >>> >>> >>> >>> ------------------------------------------------------------------------------ >>> Minimize network downtime and maximize team effectiveness. >>> Reduce network management and security costs.Learn how to hire >>> the most talented Cisco Certified professionals. Visit the >>> Employer Resources Portal >>> http://www.cisco.com/web/learning/employer_resources/index.html >>> _______________________________________________ >>> Postgres-xc-developers mailing list >>> Pos...@li... 
>>> https://lists.sourceforge.net/lists/listinfo/postgres-xc-developers >>> >>> >> >> >> -- >> Pavan Deolasee >> http://www.linkedin.com/in/pavandeolasee >> > > > > ------------------------------------------------------------------------------ > Precog is a next-generation analytics platform capable of advanced > analytics on semi-structured data. The platform includes APIs for building > apps and a phenomenal toolset for data science. Developers can use > our toolset for easy data analysis & visualization. Get a free account! > http://www2.precog.com/precogplatform/slashdotnewsletter > _______________________________________________ > Postgres-xc-developers mailing list > Pos...@li... > https://lists.sourceforge.net/lists/listinfo/postgres-xc-developers > > -- Best Wishes, Ashutosh Bapat EntepriseDB Corporation The Enterprise Postgres Company |
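A minimal sketch of the case being reviewed: a non-shippable AFTER ROW UPDATE trigger on a distributed table, where the coordinator must hold both the OLD and the NEW row between queueing the event and firing it. All names here are hypothetical:

CREATE TABLE emp (id int, salary numeric) DISTRIBUTE BY HASH(id);

CREATE FUNCTION log_salary_change() RETURNS trigger AS $$
BEGIN
    -- Both OLD and NEW are referenced, so both rows must be kept at the
    -- coordinator (in the tuplestores, or a hash store as suggested above).
    RAISE NOTICE 'salary of % changed from % to %', OLD.id, OLD.salary, NEW.salary;
    RETURN NULL;  -- the return value is ignored for AFTER ROW triggers
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER emp_salary_ar AFTER UPDATE ON emp
    FOR EACH ROW EXECUTE PROCEDURE log_salary_change();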
From: Michael P. <mic...@gm...> - 2013-04-17 04:12:07
|
On Wed, Apr 17, 2013 at 1:03 PM, Amit Khandekar < ami...@en...> wrote: > I mean it gives a misleading message by showing commits that were not > actually the ones for which the mail was sent; the actual ones were > different. > Just to answer you: this is not a SourceForge issue, only an issue with the post-commit hook added in XC's git repo to trigger the emails. If you can find a better hook script than the one already in place (there should be some findable by googling), it would fix those email problems. -- Michael |
From: Ashutosh B. <ash...@en...> - 2013-04-17 04:10:33
|
On Wed, Apr 17, 2013 at 8:59 AM, Amit Khandekar < ami...@en...> wrote: > > > On 10 April 2013 16:00, Ashutosh Bapat <ash...@en...>wrote: > >> Hi Amit, >> In function pgxc_dml_add_qual_to_query(), INT4OID is used as default >> data-type of system column. Although it serves the purpose for now, if one >> wants to use this function for some other purpose, INT4OID would not >> suffice. Either please update the prologue of function to mention that >> system columns of INT4 type or accept type of system column as parameter. >> Also, the name of the function doesn't reflect the restricted purpose the >> function serves. It is too generic. >> > > This function is neither created by the trigger changes nor is affected by > them. So this change should not be done here. > Can you please open a bug and assign it to Abbas. It looks to be coming from Abbas's work. > > >> >> >> In this commit, I see following comment, >> 191 + * Beware, the ordering of ctid and node_id is important ! >> ctid should >> 192 + * be followed by node_id, not vice-versa, so as to be >> consistent with >> 193 + * the data row to be generated while binding the parameters >> for the >> 194 + * update statement. >> >> Is there a way, throught code, to link the parameters added to query and >> those bound at the time of execution? If it's not possible right now, can >> we log a bug/feature and get to it later? >> > > I had first thought that this can be considered. But now I think there is > no easy way to do this. But still I have opened 3611078 just in case. > > >> Otherwise the patch looks good. >> >> -- >> Best Wishes, >> Ashutosh Bapat >> EntepriseDB Corporation >> The Enterprise Postgres Company >> > > -- Best Wishes, Ashutosh Bapat EntepriseDB Corporation The Enterprise Postgres Company |
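As I read the thread, the quals under discussion pin a row to one tuple on one datanode. XC exposes xc_node_id as an int4 system column, which is presumably why INT4OID appears. A hypothetical sketch of the statement shape, with ctid followed by xc_node_id as the ordering note above requires; the table and column names are made up:

CREATE TABLE some_table (k int, col text) DISTRIBUTE BY HASH(k);
-- The system columns the quals are built from can be inspected directly:
SELECT xc_node_id, ctid, * FROM some_table;
-- Hypothetical shape of the generated remote UPDATE:
PREPARE remote_upd(text, tid, int4) AS
    UPDATE some_table SET col = $1 WHERE ctid = $2 AND xc_node_id = $3;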
From: Amit K. <ami...@en...> - 2013-04-17 03:53:06
|
Commit id 9b6d7fa8859f03334192b13736deaf335fd8ceed addresses the review comments. Details below. On 12 April 2013 16:15, Ashutosh Bapat <ash...@en...>wrote: > Hi Amit, > Here are comments for commit > commit badf4c31edfd3d72a38a348523f5e05730374700 > Author: Amit Khandekar <ami...@en...> > Date: Thu Apr 4 12:17:09 2013 +0545 > > The core BEFORE ROW trigger-support related changes. > > Because the tupleId parameter of the trigger functions is not > valid, > we need to accept the OLD row through the oldtuple parameter of the > trigger functions. > We then craft the NEW row using the modified values slot plus the OLD > values in the OLD tuple. > > Comments > --------------- > In ExecBRDeleteTriggers() and all the functions executing before row > triggers, I see this code > 2210 #ifdef PGXC > 2211 if ((IS_PGXC_COORDINATOR && !IsConnFromCoord())) > 2212 trigtuple = pgxc_get_trigger_tuple(datanode_tuphead); > 2213 else /* On datanode, do the usual way */ > 2214 { > > This code will be applicable to all the deletes/modifications at the > coordinator. The updates to catalogs happen locally to the coordinator and > thus should call GetTupleForTrigger instead of pgxc_get_trigger_tuple(). I > am not sure if there can be triggers on catalog tables (mostly not), but > it will be better to check whether the tuple is local or global. > Corrected the IS_PGXC_COORDINATOR condition check. Used RelationGetLocInfo(). > Right now we are passing both the old tuple and the tupleid to the > ExecBRDeleteTriggers() function. Depending upon the tuplestore > implementation we do for storing the old tuples, we may be able to retrieve > the old tuple from the tuplestore given some index into it. Can we use the > ItemPointer to store this index? > I have given quite a bit of thought to re-using the tupleid parameter for a dual purpose. We could use the same argument to pass either the tupleid or the tuple header, but only because we know that both ItemPointer and HeapTupleHeader are defined as pointers, and we should not assume their definition. We could maybe define a compile-time check: #if sizeof(HeapTupleHeader) > sizeof(ItemPointer) /* (can we even do this ?) */ #error #endif so that in the future, if we cannot accommodate HeapTupleHeader values in an ItemPointer, it will not build, and at that point we would change the prototypes of all the Exec[AB]R*Trigger() functions. Still, I think this is not the right thing to do. We should keep the code in its current condition, and consider the above method only if we start getting too many PG merge issues. > I am coming back to the functions pgxc_form_trigger_tuple() and > pgxc_get_trigger_tuple(). There is nothing XC-specific in these functions, > so my first thought was whether there are already functions in the PG code > that would serve the functionality these two functions serve. I tried > to search but without success. Can you find any PG function which has this > functionality? Should these functions be in heaptuple.c or some such file > instead of trigger.c? Also, there is nothing specific to triggers in these > functions, so it would be better if their names did not contain trigger. > Removed pgxc_form_trigger_tuple(), but kept pgxc_get_trigger_tuple() because it might be needed to serve the purpose of GetTupleForTrigger() later when we fix the concurrent update issue. I have added a PGXCTODO for the same. 
> > Regarding trigger firing, I am assuming that all the BR triggers are > fired on a single node: either the coordinator (if there is at least one > nonshippable trigger) or the datanode (if all the triggers are shippable). We > cannot fire some at the coordinator and some at the datanode because that > might change the order of firing the triggers. PG has documented that the order > of firing triggers is alphabetical. With this context, > in function pgxc_is_trigger_firable(), what happens if a trigger is > shippable but needs to be fired on the coordinator? From the code it looks like > we won't fire the trigger. At line 5037, you have used IsConnFromCoord(), > which is true even for a coordinator-to-coordinator connection. Do you want > to specifically check for a datanode here? It will be helpful if the function > prologue contains a truth table covering shippability, the node where the firing > will happen, and the connection origination. This is not your change, but > somebody who implemented triggers last time has left a serious bug here. > I had initially implemented support to ensure that all triggers are fired either on the coordinator or on the datanode, but as I mentioned in the initial patch email, I removed that part when I sent the BR trigger patch because this change is better done after we have both BR and AR triggers. The reason is that we need to selectively ship both AR and BR triggers, or ship only AR but not BR triggers, depending on some specific shippability conditions. This logic will be handled in a different commit. I have added a PGXCTODO in pgxc_is_trigger_firable(). > > Everywhere I see that we have used checks for the coordinator to execute things the XC way or the PG way. I think we had better have the PG way for local modifications and the XC way for remote modifications. > Corrected the conditions. I guess this is the same as issue #1 you mentioned above? > > The declaration of the function fill_slot_with_oldvals() should have the > return type, function name, etc. all on the same line, as per PG standards. > Corrected. > > The function fill_slots_with_oldvals() is being called before the actual > update, or in fact before any of the trigger processing. What's the purpose of this > function? The comment there tells what the function does, but not WHY. > For remote tables, the plan slot does not have all the NEW tuple values. (Right now it does, but after the fetch-only-reqd-columns patch it won't.) If oldtuple is supplied, we would also need a complete NEW tuple. Currently, for remote tables, triggers are the only case where oldtuple is passed. So we should craft the NEW tuple whenever we have the OLD tuple. I have updated the above comments in the code. > > I think I will need to revisit this commit again at the time of reviewing the > after row trigger implementation, since this commit has modified some of > that area as well. > -- > Best Wishes, > Ashutosh Bapat > EntepriseDB Corporation > The Enterprise Postgres Company > |
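A small sketch of why a complete NEW tuple has to be crafted: a BEFORE ROW trigger typically changes only some columns, and the remaining ones must come from the OLD row (the fill_slots_with_oldvals step above). The table and function names are hypothetical:

CREATE TABLE doc (id int, body text, updated_at timestamptz) DISTRIBUTE BY HASH(id);

CREATE FUNCTION touch_row() RETURNS trigger AS $$
BEGIN
    -- Only one column is assigned here; at the SQL level NEW already looks
    -- complete, but for remote tables XC must fill the untouched columns
    -- (id, body) into the slot from the OLD tuple before applying the update.
    NEW.updated_at := now();
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER doc_br BEFORE UPDATE ON doc
    FOR EACH ROW EXECUTE PROCEDURE touch_row();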
From: Amit K. <ami...@en...> - 2013-04-17 03:35:27
|
On 10 April 2013 16:36, Ashutosh Bapat <ash...@en...>wrote: > Hi Amit, > Sorry, there is one more comment, > I see that the code to add the wholerow attribute has been duplicated, once > for views and once for before row triggers. Is it possible not to duplicate > this code? > > Yes, the wholerow attribute code has been duplicated, but I think it is simpler to keep it like that. I had attempted to merge it into the PG code, but the code becomes complicated and it is difficult to understand exactly which junk attributes are added, for which node, and for which type of table or view. > > On Wed, Apr 10, 2013 at 4:33 PM, Ashutosh Bapat < > ash...@en...> wrote: > >> Hi Amit >> Not your change, but we need to update the prologue of >> rewriteTargetListUD with the XC-specific changes. >> >> Again not your change, but there's a comment in the function >> 1256 /* >> 1257 * In Postgres-XC, we need to evaluate quals of the parse tree >> and determine >> 1258 * if they are Coordinator quals. If they are, their attribute >> need to be >> 1259 * added to target list for evaluation. In case some are found, >> add them as >> 1260 * junks in the target list. The junk status will be used by >> remote UPDATE >> 1261 * planning to associate correct element to a clause. >> 1262 * For DELETE, having such columns in target list helps to >> evaluate Quals >> 1263 * correctly on Coordinator. >> 1264 * PGXCTODO: This list could be reduced to keep only in target >> list the >> 1265 * vars using Coordinator Quals. >> 1266 */ >> 1267 if (IS_PGXC_COORDINATOR && parsetree->jointree) >> 1268 var_list = pull_qual_vars((Node *) parsetree->jointree, >> parsetree->resultRelation); >> >> I think we need to do this only when these attributes are not in the >> targetlist already. My guess is we don't need this code at all. Can we >> check this? >> The pull_qual_vars() code is not related to the trigger changes. I have anyway removed this code in the patch that I submitted for fetching only the required columns for the DML source data query. >> >> Please use RelationGetLocInfo to get the location info given a Relation in >> 1336 if >> (!IsLocatorReplicated(GetLocatorType(RelationGetRelid(target_relation)))) >> Used RelationGetLocInfo() instead of GetLocatorType(RelationGetRelid... although this was existing code, I anyway corrected it because it was quite close to the trigger changes. The commit id for the above changes is: 44d07c5c1a405e5b896403741f8f2aea288d0313 (rowtriggers_new) >> The rest of the changes look fine. >> -- >> Best Wishes, >> Ashutosh Bapat >> EntepriseDB Corporation >> The Enterprise Postgres Company >> > > > -- > Best Wishes, > Ashutosh Bapat > EntepriseDB Corporation > The Enterprise Postgres Company > |
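The junk attribute in question is essentially a whole-row reference; the SQL-level analogue, shown on a hypothetical table, is a whole-row Var that carries every column of the row in one value:

CREATE TABLE t_wr (a int, b text) DISTRIBUTE BY HASH(a);
-- Each output value is the entire row as a composite; the rewriter adds a
-- similar wholerow junk attribute internally so the full OLD row can be
-- rebuilt for views and BR triggers (my reading of the thread above).
SELECT t_wr FROM t_wr;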
From: Amit K. <ami...@en...> - 2013-04-17 03:29:55
|
On 10 April 2013 16:00, Ashutosh Bapat <ash...@en...>wrote: > Hi Amit, > In function pgxc_dml_add_qual_to_query(), INT4OID is used as default > data-type of system column. Although it serves the purpose for now, if one > wants to use this function for some other purpose, INT4OID would not > suffice. Either please update the prologue of function to mention that > system columns of INT4 type or accept type of system column as parameter. > Also, the name of the function doesn't reflect the restricted purpose the > function serves. It is too generic. > This function is neither created by the trigger changes nor is affected by them. So this change should not be done here. > > > In this commit, I see following comment, > 191 + * Beware, the ordering of ctid and node_id is important ! > ctid should > 192 + * be followed by node_id, not vice-versa, so as to be > consistent with > 193 + * the data row to be generated while binding the parameters > for the > 194 + * update statement. > > Is there a way, throught code, to link the parameters added to query and > those bound at the time of execution? If it's not possible right now, can > we log a bug/feature and get to it later? > I had first thought that this can be considered. But now I think there is no easy way to do this. But still I have opened 3611078 just in case. > Otherwise the patch looks good. > > -- > Best Wishes, > Ashutosh Bapat > EntepriseDB Corporation > The Enterprise Postgres Company > |
From: Abbas B. <abb...@en...> - 2013-04-16 06:24:16
|
I found that "Add regression tests for pg_dump/restore" is currently listed in the PG TODO list http://wiki.postgresql.org/wiki/Todo and I have found that Tom Lane has replied to these threads saying that it would be nice to have regression tests for pg_dump. http://www.postgresql.org/message-id/200...@ss... http://www.postgresql.org/message-id/224...@ss... http://www.postgresql.org/message-id/455...@ss... So I guess it makes sense to propose an infrastructure to test pg_dump on the pgsql-hackers mailing list. Regards Abbas On Mon, Apr 15, 2013 at 11:04 AM, Ashutosh Bapat < ash...@en...> wrote: > It seems that you wrote an infrastructure to test dump in general. I > thought PG already has tests for the pg_dump and pg_restore functionality. > I don't think we should add infrastructure to test the dump functionality. > It has to be done in PG and then incorporated in XC. > > > On Sun, Apr 14, 2013 at 7:15 AM, Abbas Butt <abb...@en...>wrote: > >> Hi, >> Attached please find a patch that adds support to test this feature. >> To test this feature we had to add support for shell scripts in >> regression tests. >> >> For this purpose a new keyword "script" is added in schedule files. >> The folder "scripts_input" is supposed to contain shell scripts in which >> the following placeholders can be used (in addition to the ones already >> there). >> >> @abs_bin_dir@, which is replaced by the installation bin directory. >> @database_name@, which is replaced by the database name used to run >> regression tests. >> The schedule file can have a command like this for running a shell >> script >> script: script_name.sh >> >> In order to test the TO NODE clause in CREATE TABLE statements, the >> following scheme is used. >> >> 1. Run test xc_1_to_node.sql >> It creates some tables in a certain schema on some specific nodes >> >> 2. Create a script xc_to_node.source with placeholders in the folder >> "scripts_input". This script contains a command like this: >> >> @abs_bin_dir@/pg_dump -c --include-nodes --schema=test_dump_restore >> -s >> @database_name@ --file=@abs_srcdir@/sql/xc_2_to_node.sql >> >> 3. The function convert_sourcefiles is changed to accommodate script >> files. >> After the placeholders are replaced, the source file in folder >> "scripts_input" gets copied into folder "scripts", >> with the name xc_to_node.sh >> >> /usr/local/pgsql/bin/pg_dump -c --include-nodes >> --schema=test_dump_restore -s >> regression >> --file=/home/edb/pgxc/postgres-xc/src/test/regress/sql/xc_2_to_node.sql >> >> 4. In the schedule file the regression system is asked to run this >> script using the script keyword >> >> script: xc_to_node.sh >> >> 5. The function run_schedule is changed to accommodate the script >> keyword. >> It does not support running more than one script in parallel. >> When the function encounters a script keyword, it calls a function >> shell_command, passing it the name of the test, which is the script name >> itself. The function shell_command is new, added to run a shell >> script. >> >> 6. When the shell script runs it generates a dump, which gets copied into >> the sql folder as xc_2_to_node.sql. >> It will be run next to restore the dump. >> >> 7. Run xc_2_to_node.sql to restore the dump; use the ignore keyword because >> the order of objects might change in the dump. >> >> 8. 
>> [...] |
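The "script" keyword that the quoted proposal adds to schedule files needs a small runner inside pg_regress. What follows is only a sketch of what such a shell_command() hook could look like — the function name and the "scripts" folder come from the proposal, but the body is an assumption, not the actual patch; it presumes it lives in pg_regress.c, which already provides the inputdir global, MAXPGPATH, and the _() macro:

    /* Hypothetical pg_regress helper: run one converted shell script and
     * treat a non-zero exit status as a hard failure, like other setup steps. */
    static void
    shell_command(const char *testname)
    {
        char        cmd[MAXPGPATH * 2];

        /* "scripts" holds the converted copies of "scripts_input" */
        snprintf(cmd, sizeof(cmd), "sh %s/scripts/%s", inputdir, testname);

        if (system(cmd) != 0)
        {
            fprintf(stderr, _("shell script \"%s\" failed\n"), testname);
            exit(2);
        }
    }

run_schedule() would then call shell_command(test) whenever a schedule line starts with "script:", mirroring how "test:" lines are dispatched today.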
From: Ashutosh B. <ash...@en...> - 2013-04-15 08:24:06
|
Hi Amit, Till now we have developed a lot of functions that are specific to XC. Can you please separate that code into a separate file, so as to reduce the chances of conflicts?

On Mon, Apr 15, 2013 at 9:54 AM, Amit Khandekar < ami...@en...> wrote: > >> On Fri, Apr 5, 2013 at 2:38 PM, Amit Khandekar < >> ami...@en...> wrote: >>> FYI .. I will use the following document to keep updating the >>> implementation details for "Saving AR trigger rows in tuplestore" : >>> >>> https://docs.google.com/document/d/158IPS9npmfNsOWPN6ZYgPy91aowTUNP7L7Fl9zBBGqs/edit?usp=sharing >>> > Attached is the patch to support after-row triggers. The above doc is > updated. I am yet to analyse the regression tests. The attached test.sql is > the one I used for unit testing; it is not yet ready to be inserted into the > regression suite. I will next work on the regression tests and on Ashutosh's > comments on before-row triggers. > > Also, I haven't yet rebased the rowtriggers branch over the new > merge-related changes in the master branch. This patch is over the > rowtriggers branch; I intended to push it onto the rowtriggers branch as > well, but suspected some possible issues if I push that branch after the > recent merge-related changes going on in the repository. First I will > rebase all the rowtriggers branch changes onto the new master branch. > [...] -- Best Wishes, Ashutosh Bapat EnterpriseDB Corporation The Enterprise Postgres Company |
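The "Saving AR trigger rows in tuplestore" approach referenced above maps directly onto the backend's existing tuplestore API. Below is a minimal sketch, assuming the 9.2-era tuplestore interface — the three helper names are hypothetical, not taken from Amit's patch — of how OLD/NEW rows could be queued while the statement runs and replayed when the after-row triggers fire:

    /* Hypothetical helpers; work_mem is the standard GUC global. */
    static Tuplestorestate *
    ar_trigger_rows_create(void)
    {
        /* randomAccess = true so the rows can be rescanned;
         * spills to disk once work_mem is exceeded */
        return tuplestore_begin_heap(true, false, work_mem);
    }

    static void
    ar_trigger_rows_save(Tuplestorestate *ts, TupleTableSlot *slot)
    {
        tuplestore_puttupleslot(ts, slot);
    }

    static bool
    ar_trigger_rows_next(Tuplestorestate *ts, TupleTableSlot *slot)
    {
        /* forward scan; copy = false leaves the tuple owned by the store */
        return tuplestore_gettupleslot(ts, true, false, slot);
    }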
From: Ashutosh B. <ash...@en...> - 2013-04-15 06:04:36
|
It seems that you wrote an infrastructure to test dumps in general. I thought PG already had tests for pg_dump and pg_restore functionality. I don't think we should add infrastructure to test the dump functionality; it has to be done in PG and then incorporated into XC.

On Sun, Apr 14, 2013 at 7:15 AM, Abbas Butt <abb...@en...>wrote: > Hi, > Attached please find a patch that adds support to test this feature.
> [...] -- Best Wishes, Ashutosh Bapat EnterpriseDB Corporation The Enterprise Postgres Company |
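Step 3 of the scheme (quoted in full in the Apr 14 mail below) hooks into pg_regress's convert_sourcefiles machinery, which rewrites placeholders line by line through its replace_string() helper. Supporting the two new placeholders would plausibly be a two-line addition to that loop — replace_string(), bindir, and dblist are pg_regress.c's existing helper and globals, but the exact integration shown here is a guess, not the actual patch:

    /* inside the per-line loop of convert_sourcefiles_in(), next to the
     * existing @abs_srcdir@ / @testtablespace@ substitutions */
    replace_string(line, "@abs_bin_dir@", bindir);
    replace_string(line, "@database_name@", dblist->str);  /* first database in the run */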
From: Abbas B. <abb...@en...> - 2013-04-14 01:45:48
|
Hi, Attached please find a patch that adds support to test this feature. To test this feature we had to add Support For Shell Scripts In Regression Tests.

For this purpose a new keyword "script" is added in schedule files. The folder "scripts_input" is supposed to contain shell scripts in which the following placeholders can be used (in addition to the ones already there).

@abs_bin_dir@ which is replaced by the installation bin directory. @database_name@ which is replaced by the database name used to run regression tests. The schedule file can have a command like this for running a shell script script: script_name.sh

In order to test the TO NODE clause in CREATE TABLE statements the following scheme is used.

1. Run test xc_1_to_node.sql It creates some tables in a certain schema on some specific nodes

2. Create a script xc_to_node.source with placeholders in the folder "scripts_input". This script contains a command like this:

@abs_bin_dir@/pg_dump -c --include-nodes --schema=test_dump_restore -s @database_name@ --file=@abs_srcdir@/sql/xc_2_to_node.sql

3. The function convert_sourcefiles is changed to accommodate script files. After the placeholders are replaced, the source file in folder "scripts_input" gets copied into folder "scripts", with the name xc_to_node.sh

/usr/local/pgsql/bin/pg_dump -c --include-nodes --schema=test_dump_restore -s regression --file=/home/edb/pgxc/postgres-xc/src/test/regress/sql/xc_2_to_node.sql

4. In the schedule file the regression system is asked to run this script using the script keyword

script: xc_to_node.sh

5. The function run_schedule is changed to accommodate the script keyword. It does not support running more than one script in parallel. When the function encounters a script keyword, it calls a function shell_command, passing it the name of the test, which is the script name itself. The function shell_command is new, added to run a shell script.

6. When the shell script runs it generates a dump which gets copied into the sql folder as xc_2_to_node.sql. It will be run next to restore the dump.

7. Run xc_2_to_node.sql to restore the dump; use the ignore keyword because the order of objects might change in the dump.

8. Run xc_3_to_node.sql, which tests that the tables got created on the required nodes.

Comments are welcome.

On Sat, Mar 30, 2013 at 11:49 PM, Abbas Butt <abb...@en...>wrote: > Hi, > Attached please find a revised patch that provides the --include-nodes > option in both pg_dump and pg_dumpall. Please note that this patch applies > on top of the one sent for feature ID 3608376. > > On Wed, Mar 27, 2013 at 5:05 PM, Abbas Butt <abb...@en...>wrote: > >> Feature ID 3608375 >> >> On Tue, Mar 5, 2013 at 1:45 PM, Abbas Butt <abb...@en...>wrote: >> >>> The attached patch changes the name of the option to --include-nodes. >>> >>> On Mon, Mar 4, 2013 at 2:41 PM, Abbas Butt <abb...@en...>wrote: >>>> >>>> On Mon, Mar 4, 2013 at 2:09 PM, Ashutosh Bapat < >>>> ash...@en...> wrote: >>>>> >>>>> On Mon, Mar 4, 2013 at 1:51 PM, Abbas Butt < >>>>> abb...@en...> wrote: >>>>>> What I had in mind was to have pg_dump, when run with include-node, >>>>>> emit CREATE NODE/ CREATE NODE GROUP commands only and nothing else. Those >>>>>> commands will be used to create existing nodes/groups on the new >>>>>> coordinator to be added. So it does make sense to use this option >>>>>> independently; in fact it is supposed to be used independently. >>>>> Ok, got it. But then include-node is really a misnomer.
We should use >>>>> --dump-nodes or something like that. >>>> In that case we can use include-nodes here. >>>>>> On Mon, Mar 4, 2013 at 11:21 AM, Ashutosh Bapat < >>>>>> ash...@en...> wrote: >>>>>>> Dumping the TO NODE clause only makes sense if we dump CREATE NODE/ >>>>>>> CREATE NODE GROUP. Dumping CREATE NODE/CREATE NODE GROUP may make sense >>>>>>> independently, but might be useless without dumping the TO NODE clause. >>>>>>> BTW, OTOH, dumping the CREATE NODE/CREATE NODE GROUP clause wouldn't >>>>>>> create the nodes on all the coordinators, >>>>>> All the coordinators already have the nodes information. >>>>>>> but only on the coordinator where the dump will be restored. That's >>>>>>> another thing you will need to consider, OR are you going to fix that as >>>>>>> well? >>>>>> As a first step I am only listing the manual steps required to add >>>>>> a new node; that might say run this command on all the existing >>>>>> coordinators by connecting to them one by one manually. We can decide to >>>>>> automate these steps later. >>>>> ok >>>>>>>> On Mon, Mar 4, 2013 at 11:41 AM, Abbas Butt < >>>>>>>> abb...@en...> wrote: >>>>>>>>> I was thinking of using include-nodes to dump CREATE NODE / >>>>>>>>> CREATE NODE GROUP, which is required as one of the missing links in adding >>>>>>>>> a new node. What do you think about that? >>>>>>>>> On Mon, Mar 4, 2013 at 9:02 AM, Ashutosh Bapat < >>>>>>>>> ash...@en...> wrote: >>>>>>>>>> Hi Abbas, >>>>>>>>>> Please take a look at >>>>>>>>>> http://www.postgresql.org/docs/9.2/static/app-pgdump.html, >>>>>>>>>> which gives all the command line options for pg_dump. Instead of >>>>>>>>>> include-to-node-clause, just include-nodes would suffice, I guess. >>>>>>>>>> On Fri, Mar 1, 2013 at 8:36 PM, Abbas Butt < >>>>>>>>>> abb...@en...> wrote: >>>>>>>>>>> PFA an updated patch that provides a command line argument >>>>>>>>>>> called --include-to-node-clause to let pg_dump know that the created dump >>>>>>>>>>> is supposed to emit the TO NODE clause in the CREATE TABLE command. >>>>>>>>>>> If the argument is provided while taking the dump from a >>>>>>>>>>> datanode, it does not show the TO NODE clause in the dump since the catalog >>>>>>>>>>> table is empty in this case. >>>>>>>>>>> The documentation of pg_dump is updated accordingly. >>>>>>>>>>> The rest of the functionality stays the same as before. >>>>>>>>>>> On Mon, Feb 25, 2013 at 10:29 AM, Ashutosh Bapat < >>>>>>>>>>> ash...@en...> wrote: >>>>>>>>>>>> I think we should always dump DISTRIBUTE BY. >>>>>>>>>>>> PG does not stop dumping (or provide an option to do so) newer >>>>>>>>>>>> syntax so that the dump will work on older versions. On similar lines, an >>>>>>>>>>>> XC dump cannot be used against PG without modification (removing >>>>>>>>>>>> DISTRIBUTE BY). There can be more serious problems, like exceeding table >>>>>>>>>>>> size limits, if an XC dump is restored in PG. >>>>>>>>>>>> As to the TO NODE clause, I agree that one can restore the dump >>>>>>>>>>>> on a cluster with a different configuration, so giving an option to dump the TO NODE >>>>>>>>>>>> clause will help.
>>>>>>>>>>> On Mon, Feb 25, 2013 at 6:42 AM, Michael Paquier < >>>>>>>>>>> mic...@gm...> wrote: >>>>>>>>>>>> On Mon, Feb 25, 2013 at 4:17 AM, Abbas Butt < >>>>>>>>>>>> abb...@en...> wrote: >>>>>>>>>>>>> On Sun, Feb 24, 2013 at 5:33 PM, Michael Paquier < >>>>>>>>>>>>> mic...@gm...> wrote: >>>>>>>>>>>>>> On Sun, Feb 24, 2013 at 7:04 PM, Abbas Butt < >>>>>>>>>>>>>> abb...@en...> wrote: >>>>>>>>>>>>>>> On Sun, Feb 24, 2013 at 1:44 PM, Michael Paquier < >>>>>>>>>>>>>>> mic...@gm...> wrote: >>>>>>>>>>>>>>>> On Sun, Feb 24, 2013 at 3:51 PM, Abbas Butt < >>>>>>>>>>>>>>>> abb...@en...> wrote: >>>>>>>>>>>>>>>>> Hi, >>>>>>>>>>>>>>>>> PFA a patch to fix pg_dump to generate the TO NODE clause >>>>>>>>>>>>>>>>> in the dump. >>>>>>>>>>>>>>>>> This is required because otherwise all tables get >>>>>>>>>>>>>>>>> created on all nodes after a dump-restore cycle. >>>>>>>>>>>>>>>> Not sure this is good if you take a dump of an XC cluster >>>>>>>>>>>>>>>> to restore it to a vanilla Postgres cluster. >>>>>>>>>>>>>>>> Why not add a new option that would control the >>>>>>>>>>>>>>>> generation of this clause instead of forcing it? >>>>>>>>>>>>>>> I think you can use the pg_dump that comes with vanilla PG >>>>>>>>>>>>>>> to do that, can't you? But I am open to adding a control option if >>>>>>>>>>>>>>> everybody thinks so. >>>>>>>>>>>>>> Sure you can; this is just to simplify users' lives as much as >>>>>>>>>>>>>> possible by not having multiple pg_dump binaries on their servers. >>>>>>>>>>>>>> That said, I think that there is no option to choose whether >>>>>>>>>>>>>> DISTRIBUTE BY is printed in the dump or not... >>>>>>>>>>>>> Yes, if we choose to have an option we will put both >>>>>>>>>>>>> DISTRIBUTE BY and TO NODE under it. >>>>>>>>>>>> Why not an option for DISTRIBUTE BY, and another for TO NODE? >>>>>>>>>>>> This would bring more flexibility to the way dumps are >>>>>>>>>>>> generated. >>>>>>>>>>>> -- >>>>>>>>>>>> Michael
[...] -- Abbas Architect EnterpriseDB Corporation The Enterprise PostgreSQL Company |
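For context on the --include-nodes discussion in the quoted thread: emitting the clause is a small conditional addition to the CREATE TABLE text that pg_dump builds in dumpTableSchema(). The sketch below only illustrates the shape of such a change — include_nodes stands in for the new command-line flag and tbinfo->pgxcnodes for a hypothetical field holding the node list fetched from pgxc_class; neither name is taken from the actual patch, while appendPQExpBuffer() and the q buffer follow pg_dump.c's conventions:

    /* after the column list and any DISTRIBUTE BY clause of the CREATE TABLE */
    if (include_nodes && tbinfo->pgxcnodes != NULL && tbinfo->pgxcnodes[0] != '\0')
        appendPQExpBuffer(q, "\nTO NODE (%s)", tbinfo->pgxcnodes);

Guarding on the field being empty also covers the datanode case mentioned above, where the catalog holds no node list and the clause must be omitted.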
From: Ashutosh B. <ash...@en...> - 2013-04-12 10:45:58
|
Hi Amit,

Here are comments for commit badf4c31edfd3d72a38a348523f5e05730374700

    Author: Amit Khandekar <ami...@en...>
    Date:   Thu Apr 4 12:17:09 2013 +0545

    The core BEFORE ROW trigger-support related changes. Because the
    tupleId parameter passed to the trigger functions is not valid, we
    need to accept the OLD row through the oldtuple parameter of the
    trigger functions, and then craft the NEW row using the modified
    values slot plus the OLD values from the OLD tuple.

Comments
---------------

In ExecBRDeleteTriggers() and all the functions executing before row
triggers, I see this code:

    2210 #ifdef PGXC
    2211     if ((IS_PGXC_COORDINATOR && !IsConnFromCoord()))
    2212         trigtuple = pgxc_get_trigger_tuple(datanode_tuphead);
    2213     else /* On datanode, do the usual way */
    2214     {

This code will be applicable to all the deletes/modifications at the
coordinator. The updates to catalogs happen locally on the coordinator
and thus should call GetTupleForTrigger() instead of
pgxc_get_trigger_tuple(). I am not sure whether there can be triggers
on catalog tables (mostly not), but it would be better to check whether
the tuple is local or global.

Right now we are passing both the old tuple and the tupleid to the
ExecBRDeleteTriggers() function. Depending upon the tuplestore
implementation we choose for storing the old tuples, we may be able to
retrieve the old tuple from the tuplestore given some index into it.
Can we use the ItemPointer to store this index?

Coming back to the functions pgxc_form_trigger_tuple() and
pgxc_get_trigger_tuple(): there is nothing XC-specific in them, so my
first thought was to look for existing functions in the PG code that
already provide the functionality these two functions serve. I
searched but without success. Can you find any PG function which has
this functionality? Should these functions live in heaptuple.c or a
similar file instead of trigger.c? Also, since there is nothing
trigger-specific in these functions, their names had better not
contain "trigger".

Regarding trigger firing, I am assuming that all the BR triggers are
fired on a single node: either the coordinator (if there is at least
one non-shippable trigger) or the datanode (if all the triggers are
shippable). We cannot fire some at the coordinator and some at the
datanode because that might change the order of firing; PG documents
that triggers fire in alphabetical order. With this context, in the
function pgxc_is_trigger_firable(), what happens if a trigger is
shippable but needs to be fired on the coordinator? From the code it
looks like we won't fire the trigger. At line 5037, you have used
IsConnFromCoord(), which is true even for a coordinator-to-coordinator
connection. Do you want to specifically check for a datanode here? It
would be helpful if the function prologue contained a truth table
covering shippability, the node where the firing will happen, and the
connection origin.
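To make the intended rule concrete, here is a rough sketch of the
firing-node decision described above. This is illustrative only, not
code from the patch: trigger_is_shippable() is a made-up helper
standing in for whatever shippability test the patch actually uses.

    /*
     * Sketch: decide where the whole set of BR triggers must fire.
     * Returns true if they must all run on the coordinator.
     */
    static bool
    fire_triggers_on_coordinator(TriggerDesc *trigdesc)
    {
        int         i;

        for (i = 0; i < trigdesc->numtriggers; i++)
        {
            /*
             * One non-shippable trigger forces the entire set to the
             * coordinator, so the alphabetical firing order is kept
             * intact across all triggers.
             */
            if (!trigger_is_shippable(&trigdesc->triggers[i]))
                return true;
        }
        return false;       /* all shippable: fire them on the datanode */
    }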
This is not your change, but whoever implemented triggers last time has
left a serious bug here. Everywhere I see that we have used coordinator
checks to decide whether to execute things the XC way or the PG way. I
think we had better use the PG way for local modifications and the XC
way for remote modifications.

The declaration of the function fill_slot_with_oldvals() should have
the return type and function name on the same line, as per PG
standards.

The function fill_slots_with_oldvals() is being called before the
actual update, in fact before any of the trigger processing. What is
the purpose of this function? The comment there tells what the function
does, but not WHY.

I think I will need to revisit this commit again when reviewing the
after row trigger implementation, since this commit has modified some
of that area as well.

--
Best Wishes,
Ashutosh Bapat
EntepriseDB Corporation
The Enterprise Postgres Company |
From: Venky K. <ve...@ad...> - 2013-04-12 06:48:07
|
Ashutosh,

All tables are replicated on all the data nodes (we have six datanodes
in the production cluster). Happy to report that, since the time we
turned on a primary node, we have not had any deadlocks. Things are
humming smoothly.

________________________________________
Venky Kandaswamy
Principal Engineer, Adchemy Inc.
925-200-7124

________________________________
From: Ashutosh Bapat [ash...@en...]
Sent: Thursday, April 11, 2013 8:25 PM
To: Venky Kandaswamy
Cc: pos...@li...
Subject: Re: [Postgres-xc-developers] PGXC hangs when run with concurrent inserts

Are the FK table and referring table on the same node? Foreign key
constraints cannot be implemented if they involve rows across nodes. In
fact, global constraints (constraints that involve data across various
nodes) are not supported by 1.0. Neither will they be supported in the
next version.

On Thu, Apr 11, 2013 at 6:51 AM, Venky Kandaswamy <ve...@ad...> wrote:

We are processing inserts/updates using multiple threads. Here is the
trace log of the actual statements that are hung. It shows the
statements on the coordinator and 2 datanodes; the picture is similar
across all the datanodes. The same data updates did not cause Postgres
9.1.2 to hang. This could be related to an application problem,
although we could not reproduce it on Postgres 9.1.2.

At a high level, there is an update on the 'feature' table that is
holding an exclusive lock on the row. The inserts are inserting into
another table that has a foreign key referencing the row being locked
by the update. Pids 7174 and 7179 are waiting to complete, and they
are also similar inserts. The only thing in common seems to be that
the update is locking the feature row that is referenced by a foreign
key in the other inserts. This should not cause a deadlock, I believe.

The question in my mind is whether pids 7181 and 7186 should have been
granted exclusive access to a tuple while others were granted share
access. This might cause a race condition.

This causes PGXC to hang. Obviously, the update is in turn waiting for
something (which we cannot figure out from the logs) and therefore not
committing the update.
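For reference, the blocking (though not deadlocking) pattern described
above is reproducible on stock PostgreSQL of this vintage (9.1/9.2)
with a minimal sketch; the toy tables below are illustrative, not the
production schema. On these versions the RI check behind an INSERT
effectively runs SELECT ... FOR SHARE on the referenced row, which
conflicts with the exclusive tuple lock held by an open UPDATE of that
row:

    -- setup (toy tables)
    CREATE TABLE feature (feature_id int PRIMARY KEY, feature_name text);
    CREATE TABLE product_feature (
        prd_feature_id int PRIMARY KEY,
        feature_id     int REFERENCES feature (feature_id)
    );
    INSERT INTO feature VALUES (42318, 'description');

    -- session 1: update the referenced row and keep the transaction open
    BEGIN;
    UPDATE feature SET feature_name = 'description' WHERE feature_id = 42318;

    -- session 2: the FK check needs a share lock on the same tuple, so
    -- this INSERT waits until session 1 commits or rolls back
    BEGIN;
    INSERT INTO product_feature VALUES (46455, 42318);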
[postgres@sv4-pgxc-db01 pgxc]$ ps -ef | grep adchemy1234

<COORDINATOR>
postgres  7169  7113  0 16:41 ?  00:00:02 postgres: adchemy adchemy1234 192.168.51.73(49186) INSERT
postgres  7170  7113  0 16:41 ?  00:00:02 postgres: adchemy adchemy1234 192.168.51.73(49187) INSERT
postgres  7171  7113  0 16:41 ?  00:00:02 postgres: adchemy adchemy1234 192.168.51.73(49188) UPDATE
postgres  7172  7113  0 16:41 ?  00:00:02 postgres: adchemy adchemy1234 192.168.51.73(49189) INSERT
postgres  7173  7113  0 16:41 ?  00:00:02 postgres: adchemy adchemy1234 192.168.51.73(49190) INSERT
<COORDINATOR>

<DATANODE1>
postgres  7174  7127  0 16:41 ?  00:00:01 postgres: adchemy adchemy1234 172.17.28.61(51909) idle in transaction
postgres  7175  7127  0 16:41 ?  00:00:01 postgres: adchemy adchemy1234 172.17.28.61(51910) INSERT waiting
postgres  7181  7127  0 16:41 ?  00:00:01 postgres: adchemy adchemy1234 172.17.28.61(51924) UPDATE waiting
postgres  7182  7127  0 16:41 ?  00:00:01 postgres: adchemy adchemy1234 172.17.28.61(51925) INSERT waiting
postgres  7183  7127  0 16:41 ?  00:00:01 postgres: adchemy adchemy1234 172.17.28.61(51926) INSERT waiting
<DATANODE1>

<DATANODE2>
postgres  7179  7140  0 16:41 ?  00:00:00 postgres: adchemy adchemy1234 172.17.28.61(48957) idle in transaction
postgres  7180  7140  0 16:41 ?  00:00:00 postgres: adchemy adchemy1234 172.17.28.61(48962) INSERT waiting
postgres  7184  7140  0 16:41 ?  00:00:00 postgres: adchemy adchemy1234 172.17.28.61(48970) INSERT waiting
postgres  7185  7140  0 16:41 ?  00:00:00 postgres: adchemy adchemy1234 172.17.28.61(48975) INSERT waiting
postgres  7186  7140  0 16:41 ?  00:00:00 postgres: adchemy adchemy1234 172.17.28.61(48980) UPDATE waiting
<DATANODE2>

-----LOGS----- formatted %t %u %p

2013-04-10 16:42:16 PDT adchemy 7169 LOG: execute S_1: BEGIN
2013-04-10 16:42:16 PDT adchemy 7169 LOG: execute <unnamed>: select nextval ('hibernate_sequence')
2013-04-10 16:42:16 PDT adchemy 7169 LOG: execute <unnamed>: insert into biods.product_feature (category_id, category_semid, created_ts, feature_id, feature_semid, feature_value_id, feature_value_semid, modified_by, prd_id, prd_semid, updated_ts, prd_feature_id) values ($1, $2, $3, $4, $5, $6, $7, $8, $9, $10, $11, $12)
2013-04-10 16:42:16 PDT adchemy 7169 DETAIL: parameters: $1 = '42302', $2 = 'Handbags', $3 = '2013-04-10 15:02:42.343-07', $4 = '42318', $5 = 'description', $6 = '46105', $7 = 'description,Give your riches the designer treatment with Mcms leather heritage wallet. The logo-stamped little number stores your essentials in luxe vintage style.', $8 = NULL, $9 = '46449', $10 = '7630015470685', $11 = '2013-04-10 15:02:42.343-07', $12 = '46455'

2013-04-10 16:42:16 PDT adchemy 7170 LOG: execute S_1: BEGIN
2013-04-10 16:42:16 PDT adchemy 7170 LOG: execute <unnamed>: select nextval ('hibernate_sequence')
2013-04-10 16:42:16 PDT adchemy 7170 LOG: execute <unnamed>: insert into biods.product_feature (category_id, category_semid, created_ts, feature_id, feature_semid, feature_value_id, feature_value_semid, modified_by, prd_id, prd_semid, updated_ts, prd_feature_id) values ($1, $2, $3, $4, $5, $6, $7, $8, $9, $10, $11, $12)
2013-04-10 16:42:16 PDT adchemy 7170 DETAIL: parameters: $1 = '42302', $2 = 'Handbags', $3 = '2013-04-10 15:02:43.413-07', $4 = '42318', $5 = 'description', $6 = '46326', $7 = 'description,Rich leather is dressed up with a bold logo-stamped plaque in this utility chic wallet from Marc By Marc Jacobs.', $8 = NULL, $9 = '46438', $10 = '883936992041', $11 = '2013-04-10 15:02:43.413-07', $12 = '46445'

2013-04-10 16:42:15 PDT adchemy 7171 LOG: execute S_1: BEGIN
2013-04-10 16:42:15 PDT adchemy 7171 LOG: execute <unnamed>: select feature0_.feature_id as feature1_8_1_, feature0_.created_ts as created2_8_1_, feature0_.feature_name as feature3_8_1_, feature0_.feature_semid as feature4_8_1_, feature0_.modified_by as modified5_8_1_, feature0_.source_msg_ts as source6_8_1_, feature0_.updated_ts as updated7_8_1_, featureval1_.feature_id as feature9_8_3_, featureval1_.feature_value_id as feature1_14_3_, featureval1_.feature_value_id as feature1_14_0_, featureval1_.created_ts as created2_14_0_, featureval1_.feature_id as feature9_14_0_, featureval1_.feature_semid as feature3_14_0_, featureval1_.feature_value as feature4_14_0_, featureval1_.feature_value_semid as feature5_14_0_, featureval1_.modified_by as modified6_14_0_, featureval1_.source_msg_ts as source7_14_0_, featureval1_.updated_ts as updated8_14_0_ from biods.feature feature0_ left outer join biods.feature_value featureval1_ on feature0_.feature_id=featureval1_.feature_id where feature0_.feature_id=$1
2013-04-10 16:42:15 PDT adchemy 7171 DETAIL: parameters: $1 = '42318'
2013-04-10 16:42:15 PDT adchemy 7171 LOG: execute <unnamed>: update biods.feature set created_ts=$1, feature_name=$2, feature_semid=$3, modified_by=$4, source_msg_ts=$5, updated_ts=$6 where feature_id=$7
2013-04-10 16:42:15 PDT adchemy 7171 DETAIL: parameters: $1 = '2013-04-10 15:02:34.706-07', $2 = 'description', $3 = 'description', $4 = NULL, $5 = '2013-04-10 15:02:43.576-07', $6 = '2013-04-10 15:02:43.573-07', $7 = '42318'

2013-04-10 16:42:17 PDT adchemy 7172 LOG: execute S_1: BEGIN
2013-04-10 16:42:17 PDT adchemy 7172 LOG: execute <unnamed>: select nextval ('hibernate_sequence')
2013-04-10 16:42:17 PDT adchemy 7172 LOG: execute <unnamed>: insert into biods.product_feature (category_id, category_semid, created_ts, feature_id, feature_semid, feature_value_id, feature_value_semid, modified_by, prd_id, prd_semid, updated_ts, prd_feature_id) values ($1, $2, $3, $4, $5, $6, $7, $8, $9, $10, $11, $12)
2013-04-10 16:42:17 PDT adchemy 7172 DETAIL: parameters: $1 = '42302', $2 = 'Handbags', $3 = '2013-04-10 15:02:42.003-07', $4 = '42318', $5 = 'description', $6 = '44831', $7 = 'description,A chic logo-detailed cosmetic case for the contemporary girl from Tory Burch. Exclusive to Bloomingdales.', $8 = NULL, $9 = '46453', $10 = '885427179580', $11 = '2013-04-10 15:02:42.003-07', $12 = '46460'

2013-04-10 16:42:15 PDT adchemy 7173 LOG: execute S_1: BEGIN
2013-04-10 16:42:15 PDT adchemy 7173 LOG: execute <unnamed>: select nextval ('hibernate_sequence')
2013-04-10 16:42:15 PDT adchemy 7173 LOG: execute <unnamed>: insert into biods.product_feature (category_id, category_semid, created_ts, feature_id, feature_semid, feature_value_id, feature_value_semid, modified_by, prd_id, prd_semid, updated_ts, prd_feature_id) values ($1, $2, $3, $4, $5, $6, $7, $8, $9, $10, $11, $12)
2013-04-10 16:42:15 PDT adchemy 7173 DETAIL: parameters: $1 = '42302', $2 = 'Handbags', $3 = '2013-04-10 15:02:42.674-07', $4 = '42318', $5 = 'description', $6 = '46154', $7 = 'description,Keep the essentials close with LeSportsacs crossbody bag in matte black nylon practical interior zip compartments make those daily errands a little bit easier.', $8 = NULL, $9 = '46425', $10 = '883681258669', $11 = '2013-04-10 15:02:42.674-07', $12 = '46435'

2013-04-10 16:42:15 PDT adchemy 7174 LOG: statement: START TRANSACTION ISOLATION LEVEL read committed READ WRITE
2013-04-10 16:42:15 PDT adchemy 7174 LOG: execute <unnamed>: INSERT INTO biods.product_feature (prd_feature_id, category_id, prd_id, feature_id, feature_value_id, category_semid, prd_semid, feature_semid, feature_value_semid, created_ts, updated_ts, modified_by) VALUES ($12, $1, $9, $4, $6, $2, $10, $5, $7, $3, $11, $8)
2013-04-10 16:42:15 PDT adchemy 7174 DETAIL: parameters: $1 = '42302', $2 = 'Handbags', $3 = '2013-04-10 15:02:42.674-07', $4 = '42318', $5 = 'description', $6 = '46154', $7 = 'description,Keep the essentials close with LeSportsacs crossbody bag in matte black nylon practical interior zip compartments make those daily errands a little bit easier.', $8 = NULL, $9 = '46425', $10 = '883681258669', $11 = '2013-04-10 15:02:42.674-07', $12 = '46435'

2013-04-10 16:42:16 PDT adchemy 7175 LOG: statement: START TRANSACTION ISOLATION LEVEL read committed READ WRITE
2013-04-10 16:42:16 PDT adchemy 7175 LOG: execute <unnamed>: INSERT INTO biods.product_feature (prd_feature_id, category_id, prd_id, feature_id, feature_value_id, category_semid, prd_semid, feature_semid, feature_value_semid, created_ts, updated_ts, modified_by) VALUES ($12, $1, $9, $4, $6, $2, $10, $5, $7, $3, $11, $8)
2013-04-10 16:42:16 PDT adchemy 7175 DETAIL: parameters: $1 = '42302', $2 = 'Handbags', $3 = '2013-04-10 15:02:42.343-07', $4 = '42318', $5 = 'description', $6 = '46105', $7 = 'description,Give your riches the designer treatment with Mcms leather heritage wallet. The logo-stamped little number stores your essentials in luxe vintage style.', $8 = NULL, $9 = '46449', $10 = '7630015470685', $11 = '2013-04-10 15:02:42.343-07', $12 = '46455'

2013-04-10 16:42:15 PDT adchemy 7179 LOG: statement: START TRANSACTION ISOLATION LEVEL read committed READ WRITE
2013-04-10 16:42:15 PDT adchemy 7179 LOG: execute <unnamed>: INSERT INTO biods.product_feature (prd_feature_id, category_id, prd_id, feature_id, feature_value_id, category_semid, prd_semid, feature_semid, feature_value_semid, created_ts, updated_ts, modified_by) VALUES ($12, $1, $9, $4, $6, $2, $10, $5, $7, $3, $11, $8)
2013-04-10 16:42:15 PDT adchemy 7179 DETAIL: parameters: $1 = '42302', $2 = 'Handbags', $3 = '2013-04-10 15:02:42.674-07', $4 = '42318', $5 = 'description', $6 = '46154', $7 = 'description,Keep the essentials close with LeSportsacs crossbody bag in matte black nylon practical interior zip compartments make those daily errands a little bit easier.', $8 = NULL, $9 = '46425', $10 = '883681258669', $11 = '2013-04-10 15:02:42.674-07', $12 = '46435'

2013-04-10 16:42:16 PDT adchemy 7180 LOG: statement: START TRANSACTION ISOLATION LEVEL read committed READ WRITE
2013-04-10 16:42:16 PDT adchemy 7180 LOG: execute <unnamed>: INSERT INTO biods.product_feature (prd_feature_id, category_id, prd_id, feature_id, feature_value_id, category_semid, prd_semid, feature_semid, feature_value_semid, created_ts, updated_ts, modified_by) VALUES ($12, $1, $9, $4, $6, $2, $10, $5, $7, $3, $11, $8)
2013-04-10 16:42:16 PDT adchemy 7180 DETAIL: parameters: $1 = '42302', $2 = 'Handbags', $3 = '2013-04-10 15:02:42.343-07', $4 = '42318', $5 = 'description', $6 = '46105', $7 = 'description,Give your riches the designer treatment with Mcms leather heritage wallet. The logo-stamped little number stores your essentials in luxe vintage style.', $8 = NULL, $9 = '46449', $10 = '7630015470685', $11 = '2013-04-10 15:02:42.343-07', $12 = '46455'

2013-04-10 16:42:15 PDT adchemy 7181 LOG: statement: START TRANSACTION ISOLATION LEVEL read committed READ WRITE
2013-04-10 16:42:15 PDT adchemy 7181 LOG: execute <unnamed>: UPDATE biods.feature SET feature_semid = $3, feature_name = $2, created_ts = $1, updated_ts = $6, source_msg_ts = $5, modified_by = $4 WHERE (feature_id = $7)
2013-04-10 16:42:15 PDT adchemy 7181 DETAIL: parameters: $1 = '2013-04-10 15:02:34.706-07', $2 = 'description', $3 = 'description', $4 = NULL, $5 = '2013-04-10 15:02:43.576-07', $6 = '2013-04-10 15:02:43.573-07', $7 = '42318'

2013-04-10 16:42:17 PDT adchemy 7182 LOG: statement: START TRANSACTION ISOLATION LEVEL read committed READ WRITE
2013-04-10 16:42:17 PDT adchemy 7182 LOG: execute <unnamed>: INSERT INTO biods.product_feature (prd_feature_id, category_id, prd_id, feature_id, feature_value_id, category_semid, prd_semid, feature_semid, feature_value_semid, created_ts, updated_ts, modified_by) VALUES ($12, $1, $9, $4, $6, $2, $10, $5, $7, $3, $11, $8)
2013-04-10 16:42:17 PDT adchemy 7182 DETAIL: parameters: $1 = '42302', $2 = 'Handbags', $3 = '2013-04-10 15:02:42.003-07', $4 = '42318', $5 = 'description', $6 = '44831', $7 = 'description,A chic logo-detailed cosmetic case for the contemporary girl from Tory Burch. Exclusive to Bloomingdales.', $8 = NULL, $9 = '46453', $10 = '885427179580', $11 = '2013-04-10 15:02:42.003-07', $12 = '46460'

2013-04-10 16:42:16 PDT adchemy 7183 LOG: statement: START TRANSACTION ISOLATION LEVEL read committed READ WRITE
2013-04-10 16:42:16 PDT adchemy 7183 LOG: execute <unnamed>: INSERT INTO biods.product_feature (prd_feature_id, category_id, prd_id, feature_id, feature_value_id, category_semid, prd_semid, feature_semid, feature_value_semid, created_ts, updated_ts, modified_by) VALUES ($12, $1, $9, $4, $6, $2, $10, $5, $7, $3, $11, $8)
2013-04-10 16:42:16 PDT adchemy 7183 DETAIL: parameters: $1 = '42302', $2 = 'Handbags', $3 = '2013-04-10 15:02:43.413-07', $4 = '42318', $5 = 'description', $6 = '46326', $7 = 'description,Rich leather is dressed up with a bold logo-stamped plaque in this utility chic wallet from Marc By Marc Jacobs.', $8 = NULL, $9 = '46438', $10 = '883936992041', $11 = '2013-04-10 15:02:43.413-07', $12 = '46445'

2013-04-10 16:42:17 PDT adchemy 7184 LOG: statement: START TRANSACTION ISOLATION LEVEL read committed READ WRITE
2013-04-10 16:42:17 PDT adchemy 7184 LOG: execute <unnamed>: INSERT INTO biods.product_feature (prd_feature_id, category_id, prd_id, feature_id, feature_value_id, category_semid, prd_semid, feature_semid, feature_value_semid, created_ts, updated_ts, modified_by) VALUES ($12, $1, $9, $4, $6, $2, $10, $5, $7, $3, $11, $8)
2013-04-10 16:42:17 PDT adchemy 7184 DETAIL: parameters: $1 = '42302', $2 = 'Handbags', $3 = '2013-04-10 15:02:42.003-07', $4 = '42318', $5 = 'description', $6 = '44831', $7 = 'description,A chic logo-detailed cosmetic case for the contemporary girl from Tory Burch. Exclusive to Bloomingdales.', $8 = NULL, $9 = '46453', $10 = '885427179580', $11 = '2013-04-10 15:02:42.003-07', $12 = '46460'

2013-04-10 16:42:16 PDT adchemy 7185 LOG: statement: START TRANSACTION ISOLATION LEVEL read committed READ WRITE
2013-04-10 16:42:16 PDT adchemy 7185 LOG: execute <unnamed>: INSERT INTO biods.product_feature (prd_feature_id, category_id, prd_id, feature_id, feature_value_id, category_semid, prd_semid, feature_semid, feature_value_semid, created_ts, updated_ts, modified_by) VALUES ($12, $1, $9, $4, $6, $2, $10, $5, $7, $3, $11, $8)
2013-04-10 16:42:16 PDT adchemy 7185 DETAIL: parameters: $1 = '42302', $2 = 'Handbags', $3 = '2013-04-10 15:02:43.413-07', $4 = '42318', $5 = 'description', $6 = '46326', $7 = 'description,Rich leather is dressed up with a bold logo-stamped plaque in this utility chic wallet from Marc By Marc Jacobs.', $8 = NULL, $9 = '46438', $10 = '883936992041', $11 = '2013-04-10 15:02:43.413-07', $12 = '46445'

2013-04-10 16:42:15 PDT adchemy 7186 LOG: statement: START TRANSACTION ISOLATION LEVEL read committed READ WRITE
2013-04-10 16:42:15 PDT adchemy 7186 LOG: execute <unnamed>: UPDATE biods.feature SET feature_semid = $3, feature_name = $2, created_ts = $1, updated_ts = $6, source_msg_ts = $5, modified_by = $4 WHERE (feature_id = $7)
2013-04-10 16:42:15 PDT adchemy 7186 DETAIL: parameters: $1 = '2013-04-10 15:02:34.706-07', $2 = 'description', $3 = 'description', $4 = NULL, $5 = '2013-04-10 15:02:43.576-07', $6 = '2013-04-10 15:02:43.573-07', $7 = '42318'

LOCKS ON COORDINATOR:

[venky@sv4-pgxc-db01 ~]$ /usr/local/pgsql/bin/psql -p 5432 -U postgres -d adchemy1234 -c "SELECT pid, relname, locktype, mode, granted from pg_locks, pg_class where relation=oid and relname not like 'pg_%' order by mode;"
 pid  |      relname       | locktype |       mode       | granted
------+--------------------+----------+------------------+---------
 7169 | hibernate_sequence | relation | AccessShareLock  | t
 7173 | hibernate_sequence | relation | AccessShareLock  | t
 7172 | hibernate_sequence | relation | AccessShareLock  | t
 7171 | feature_value      | relation | AccessShareLock  | t
 7171 | feature            | relation | AccessShareLock  | t
 7170 | hibernate_sequence | relation | AccessShareLock  | t
 7171 | feature            | relation | RowExclusiveLock | t
 7172 | product_feature    | relation | RowExclusiveLock | t
 7170 | product_feature    | relation | RowExclusiveLock | t
 7173 | product_feature    | relation | RowExclusiveLock | t
 7169 | product_feature    | relation | RowExclusiveLock | t
(11 rows)

LOCKS ON DATANODE1:

[venky@sv4-pgxc-db01 ~]$ /usr/local/pgsql/bin/psql -p 5433 -U postgres -d adchemy1234 -c "SELECT pid, relname, locktype, mode, granted from pg_locks, pg_class where relation=oid and relname not like 'pg_%' order by mode;"
 pid  |        relname        | locktype |           mode           | granted
------+-----------------------+----------+--------------------------+---------
 7174 | prd_id                | relation | AccessShareLock          | t
 7182 | feature_id            | relation | AccessShareLock          | t
 7174 | feature_value_id      | relation | AccessShareLock          | t
 7183 | feature_id            | relation | AccessShareLock          | t
 7174 | feature_id            | relation | AccessShareLock          | t
 7175 | feature_id            | relation | AccessShareLock          | t
 7181 | feature               | tuple    | ExclusiveLock            | t
 7181 | feature_semid         | relation | RowExclusiveLock         | t
 7181 | feature_id            | relation | RowExclusiveLock         | t
 7181 | feature               | relation | RowExclusiveLock         | t
 7175 | cat_prd_feature_semid | relation | RowExclusiveLock         | t
 7183 | cat_prd_feature_semid | relation | RowExclusiveLock         | t
 7183 | prd_feature_id        | relation | RowExclusiveLock         | t
 7183 | product_feature       | relation | RowExclusiveLock         | t
 7182 | cat_prd_feature_semid | relation | RowExclusiveLock         | t
 7182 | prd_feature_id        | relation | RowExclusiveLock         | t
 7182 | product_feature       | relation | RowExclusiveLock         | t
 7175 | prd_feature_id        | relation | RowExclusiveLock         | t
 7175 | product_feature       | relation | RowExclusiveLock         | t
 7174 | product_feature       | relation | RowExclusiveLock         | t
 7206 | feature_semid         | relation | RowExclusiveLock         | t
 7206 | feature_id            | relation | RowExclusiveLock         | t
 7174 | product               | relation | RowShareLock             | t
 7182 | feature               | relation | RowShareLock             | t
 7174 | feature_value         | relation | RowShareLock             | t
 7183 | category              | relation | RowShareLock             | t
 7174 | feature               | relation | RowShareLock             | t
 7174 | category              | relation | RowShareLock             | t
 7175 | category              | relation | RowShareLock             | t
 7175 | feature               | relation | RowShareLock             | t
 7183 | feature               | relation | RowShareLock             | t
 7182 | category              | relation | RowShareLock             | t
 7182 | feature               | tuple    | ShareLock                | f
 7175 | feature               | tuple    | ShareLock                | f
 7183 | feature               | tuple    | ShareLock                | f
 7206 | feature               | relation | ShareUpdateExclusiveLock | t

LOCKS ON DATANODE2:

[venky@sv4-pgxc-db01 ~]$ /usr/local/pgsql/bin/psql -p 5434 -U postgres -d adchemy1234 -c "SELECT pid, relname, locktype, mode, granted from pg_locks, pg_class where relation=oid and relname not like 'pg_%' order by mode;"
 pid  |        relname        | locktype |           mode           | granted
------+-----------------------+----------+--------------------------+---------
 7185 | feature_id            | relation | AccessShareLock          | t
 7179 | feature_value_id      | relation | AccessShareLock          | t
 7179 | prd_id                | relation | AccessShareLock          | t
 7184 | feature_id            | relation | AccessShareLock          | t
 7180 | feature_id            | relation | AccessShareLock          | t
 7179 | feature_id            | relation | AccessShareLock          | t
 7186 | feature               | tuple    | ExclusiveLock            | t
 7184 | prd_feature_id        | relation | RowExclusiveLock         | t
 7184 | product_feature       | relation | RowExclusiveLock         | t
 7186 | feature_semid         | relation | RowExclusiveLock         | t
 7186 | feature_id            | relation | RowExclusiveLock         | t
 7186 | feature               | relation | RowExclusiveLock         | t
 7185 | cat_prd_feature_semid | relation | RowExclusiveLock         | t
 7185 | prd_feature_id        | relation | RowExclusiveLock         | t
 7185 | product_feature       | relation | RowExclusiveLock         | t
 7184 | cat_prd_feature_semid | relation | RowExclusiveLock         | t
 7180 | cat_prd_feature_semid | relation | RowExclusiveLock         | t
 7180 | prd_feature_id        | relation | RowExclusiveLock         | t
 7180 | product_feature       | relation | RowExclusiveLock         | t
 7179 | product_feature       | relation | RowExclusiveLock         | t
 7202 | feature_semid         | relation | RowExclusiveLock         | t
 7202 | feature_id            | relation | RowExclusiveLock         | t
 7179 | product               | relation | RowShareLock             | t
 7184 | feature               | relation | RowShareLock             | t
 7179 | feature_value         | relation | RowShareLock             | t
 7185 | category              | relation | RowShareLock             | t
 7179 | feature               | relation | RowShareLock             | t
 7179 | category              | relation | RowShareLock             | t
 7180 | feature               | relation | RowShareLock             | t
 7180 | category              | relation | RowShareLock             | t
 7185 | feature               | relation | RowShareLock             | t
 7184 | category              | relation | RowShareLock             | t
 7185 | feature               | tuple    | ShareLock                | f
 7180 | feature               | tuple    | ShareLock                | f
 7184 | feature               | tuple    | ShareLock                | f
 7202 | feature               | relation | ShareUpdateExclusiveLock | t
(36 rows)

________________________________________
Venky Kandaswamy
Principal Engineer, Adchemy Inc.
925-200-7124

________________________________
From: Koichi Suzuki [koi...@gm...]
Sent: Monday, April 08, 2013 10:41 PM
To: Amit Khandekar
Cc: Venky Kandaswamy; pos...@li...
Subject: Re: [Postgres-xc-developers] PGXC hangs when run with concurrent inserts

Because insert is being done in parallel, I'm afraid there could be a
possibility that we have internal lock conflicts, which should not
happen.

Regards;
----------
Koichi Suzuki

2013/4/9 Amit Khandekar <ami...@en...>

On 9 April 2013 06:46, Venky Kandaswamy <ve...@ad...> wrote:

All, We have been running into a hang issue on our app that appears to
be related to PGXC. Our app processes messages from RabbitMQ and
inserts/updates tables. We run 5 concurrent threads. The incoming
queues are replicated, one feeding Postgres 9.1 and the other feeding
PGXC (current git master). PGXC is hanging on inserts after processing
a few transactions. It does not appear to be related to the actual data
itself. It looks like all the sessions are waiting for something. There
is no information on locks available from pg_locks.

Since most of the operations are inserts, it does not look like it is
due to locks, unless something has acquired table locks. But just to
rule out that possibility, it would be better if you check pg_locks on
the datanodes, if you have checked it only on the coordinator so far.

An strace simply says recfrom(10. There are no errors in the logs from
gtm, coordinator or datanodes. The tables have referential integrity
and use a shared sequence to get the next id. Is it possible that
something is going on with the logic to retrieve sequence numbers? The
tables are all replicated. Unfortunately, we have not been able to
reproduce a reliable test case.
[postgres@gnode0 pgxc]$ /usr/local/pgsql/bin/psql -p 5433 -U postgres -d postgres -c 'select * from pg_catalog.pg_stat_activity;'

datid | datname | pid | usesysid | usename | application_name | client_addr | client_hostname | client_port | backend_start | xact_start | query_start | state_change | waiting | state | query
------+---------+-----+----------+---------+------------------+-------------+-----------------+-------------+---------------+------------+-------------+--------------+---------+-------+------
12893 | postgres | 22330 | 10 | postgres | pgxc | 192.168.53.109 | | 47025 | 2013-03-31 21:42:16.724845-07 | | 2013-04-08 15:43:52.313325-07 | 2013-04-08 15:26:11.444754-07 | f | idle | COMMIT PREPARED 'T1 32273'
16393 | master | 4267 | 16392 | xcadmin | pgxc | 192.168.53.109 | | 54961 | 2013-04-08 15:24:28.668023-07 | | 2013-04-08 15:33:17.586836-07 | 2013-04-08 15:33:17.587942-07 | f | idle | SELECT count(*) FROM ONLY bicommon.account_datasource WHERE true
16395 | adchemy10013 | 4363 | 16392 | xcadmin | pgxc | 192.168.53.109 | | 55084 | 2013-04-08 15:28:48.822939-07 | | 2013-04-08 15:50:21.650727-07 | 2013-04-08 15:50:07.916753-07 | f | idle | SELECT prd_id, prd_semid, prd_name, prd_line, prd_model, prd_brand, prd_image_url, prd_dest_url, created_ts, updated_ts, source_msg_ts, modified_by FROM biods.product
16393 | master | 4486 | 16392 | xcadmin | pgxc | 192.168.53.109 | | 55246 | 2013-04-08 15:33:21.019388-07 | | 2013-04-08 15:43:51.321376-07 | 2013-04-08 15:43:51.322675-07 | f | idle | SET SESSION AUTHORIZATION DEFAULT;RESET ALL;
16393 | master | 4781 | 16392 | xcadmin | pgxc | 192.168.53.109 | | 55515 | 2013-04-08 15:42:42.122785-07 | | 2013-04-08 17:02:21.023713-07 | 2013-04-08 17:02:20.804751-07 | f | idle | SET SESSION AUTHORIZATION DEFAULT;RESET ALL;
16393 | master | 4787 | 16392 | xcadmin | pgxc | 192.168.53.109 | | 55521 | 2013-04-08 15:42:42.142662-07 | | 2013-04-08 16:17:19.26364-07 | 2013-04-08 16:17:19.126163-07 | f | idle | SET SESSION AUTHORIZATION DEFAULT;RESET ALL;
16393 | master | 4792 | 16392 | xcadmin | pgxc | 192.168.53.109 | | 55526 | 2013-04-08 15:42:42.159009-07 | | 2013-04-08 15:45:11.915026-07 | 2013-04-08 15:45:11.886392-07 | f | idle | SET SESSION AUTHORIZATION DEFAULT;RESET ALL;
16393 | master | 4799 | 16392 | xcadmin | pgxc | 192.168.53.109 | | 55533 | 2013-04-08 15:42:42.678387-07 | | 2013-04-08 17:02:21.195332-07 | 2013-04-08 17:02:20.805074-07 | f | idle | SET SESSION AUTHORIZATION DEFAULT;RESET ALL;
16393 | master | 4804 | 16392 | xcadmin | pgxc | 192.168.53.109 | | 55538 | 2013-04-08 15:42:42.694802-07 | | 2013-04-08 15:45:11.904619-07 | 2013-04-08 15:45:11.888493-07 | f | idle | SET SESSION AUTHORIZATION DEFAULT;RESET ALL;
16395 | adchemy10013 | 4977 | 17361 | adchemy | pgxc | 192.168.53.109 | | 55732 | 2013-04-08 15:47:34.901175-07 | 2013-04-08 15:48:08.345331-07 | 2013-04-08 15:48:08.528818-07 | 2013-04-08 15:48:08.410815-07 | f | idle in transaction | INSERT INTO biods.product_feature (prd_feature_id, category_id, prd_id, feature_id, feature_value_id, category_semid, prd_semid, feature_semid, feature_value_semid, created_ts, updated_ts, modified_by) VALUES ($12, $1, $9, $4, $6, $2, $10, $5, $7, $3, $11, $8)
16395 | adchemy10013 | 4979 | 17361 | adchemy | pgxc | 192.168.53.109 | | 55734 | 2013-04-08 15:47:35.042778-07 | 2013-04-08 15:48:16.384763-07 | 2013-04-08 15:48:16.506899-07 | 2013-04-08 15:48:16.388503-07 | t | active | INSERT INTO biods.product_feature (prd_feature_id, category_id, prd_id, feature_id, feature_value_id, category_semid, prd_semid, feature_semid, feature_value_semid, created_ts, updated_ts, modified_by) VALUES ($12, $1, $9, $4, $6, $2, $10, $5, $7, $3, $11, $8)
16395 | adchemy10013 | 4985 | 17361 | adchemy | pgxc | 192.168.53.109 | | 55740 | 2013-04-08 15:47:35.235945-07 | 2013-04-08 15:48:14.38895-07 | 2013-04-08 15:48:14.445351-07 | 2013-04-08 15:48:14.446752-07 | t | active | UPDATE biods.feature SET feature_semid = $3, feature_name = $2, created_ts = $1, updated_ts = $6, source_msg_ts = $5, modified_by = $4 WHERE (feature_id = $7)
16395 | adchemy10013 | 4986 | 17361 | adchemy | pgxc | 192.168.53.109 | | 55741 | 2013-04-08 15:47:35.238843-07 | 2013-04-08 15:48:18.201043-07 | 2013-04-08 15:48:18.273204-07 | 2013-04-08 15:48:18.205647-07 | t | active | INSERT INTO biods.product_feature (prd_feature_id, category_id, prd_id, feature_id, feature_value_id, category_semid, prd_semid, feature_semid, feature_value_semid, created_ts, updated_ts, modified_by) VALUES ($12, $1, $9, $4, $6, $2, $10, $5, $7, $3, $11, $8)
16395 | adchemy10013 | 4998 | 17361 | adchemy | pgxc | 192.168.53.109 | | 55753 | 2013-04-08 15:47:35.910309-07 | 2013-04-08 15:48:08.412038-07 | 2013-04-08 15:48:08.566945-07 | 2013-04-08 15:48:08.415026-07 | t | active | INSERT INTO biods.product_feature (prd_feature_id, category_id, prd_id, feature_id, feature_value_id, category_semid, prd_semid, feature_semid, feature_value_semid, created_ts, updated_ts, modified_by) VALUES ($12, $1, $9, $4, $6, $2, $10, $5, $7, $3, $11, $8)
16395 | adchemy10013 | 6340 | 17361 | adchemy | pgxc | 192.168.53.109 | | 57002 | 2013-04-08 16:31:44.414804-07 | 2013-04-08 16:31:50.293828-07 | 2013-04-08 16:31:50.433988-07 | 2013-04-08 16:31:50.297752-07 | t | active | INSERT INTO biods.product_feature (prd_feature_id, category_id, prd_id, feature_id, feature_value_id, category_semid, prd_semid, feature_semid, feature_value_semid, created_ts, updated_ts, modified_by) VALUES ($12, $1, $9, $4, $6, $2, $10, $5, $7, $3, $11, $8)
16395 | adchemy10013 | 6341 | 17361 | adchemy | pgxc | 192.168.53.109 | | 57003 | 2013-04-08 16:31:44.418356-07 | 2013-04-08 16:31:49.450704-07 | 2013-04-08 16:31:49.599946-07 | 2013-04-08 16:31:49.45562-07 | t | active | INSERT INTO biods.product_feature (prd_feature_id, category_id, prd_id, feature_id, feature_value_id, category_semid, prd_semid, feature_semid, feature_value_semid, created_ts, updated_ts, modified_by) VALUES ($12, $1, $9, $4, $6, $2, $10, $5, $7, $3, $11, $8)
16395 | adchemy10013 | 6348 | 17361 | adchemy | pgxc | 192.168.53.109 | | 57010 | 2013-04-08 16:31:45.065767-07 | 2013-04-08 16:31:50.699979-07 | 2013-04-08 16:31:50.817425-07 | 2013-04-08 16:31:50.704669-07 | t | active | INSERT INTO biods.product_feature (prd_feature_id, category_id, prd_id, feature_id, feature_value_id, category_semid, prd_semid, feature_semid, feature_value_semid, created_ts, updated_ts, modified_by) VALUES ($12, $1, $9, $4, $6, $2, $10, $5, $7, $3, $11, $8)
16395 | adchemy10013 | 6349 | 17361 | adchemy | pgxc | 192.168.53.109 | | 57011 | 2013-04-08 16:31:45.06926-07 | 2013-04-08 16:31:51.528207-07 | 2013-04-08 16:31:51.582036-07 | 2013-04-08 16:31:51.532618-07 | t | active | INSERT INTO biods.product_feature (prd_feature_id, category_id, prd_id, feature_id, feature_value_id, category_semid, prd_semid, feature_semid, feature_value_semid, created_ts, updated_ts, modified_by) VALUES ($12, $1, $9, $4, $6, $2, $10, $5, $7, $3, $11, $8)
16395 | adchemy10013 | 6350 | 17361 | adchemy | pgxc | 192.168.53.109 | | 57012 | 2013-04-08 16:31:45.072711-07 | 2013-04-08 16:31:50.085336-07 | 2013-04-08 16:31:50.223221-07 | 2013-04-08 16:31:50.088908-07 | t | active | INSERT INTO biods.product_feature (prd_feature_id, category_id, prd_id, feature_id, feature_value_id, category_semid, prd_semid, feature_semid, feature_value_semid, created_ts, updated_ts, modified_by) VALUES ($12, $1, $9, $4, $6, $2, $10, $5, $7, $3, $11, $8)
16395 | adchemy10013 | 7269 | 17361 | adchemy | pgxc | 192.168.53.109 | | 57774 | 2013-04-08 16:57:15.563006-07 | 2013-04-08 16:57:21.849156-07 | 2013-04-08 16:57:21.978984-07 | 2013-04-08 16:57:21.853289-07 | t | active | INSERT INTO biods.product_feature (prd_feature_id, category_id, prd_id, feature_id, feature_value_id, category_semid, prd_semid, feature_semid, feature_value_semid, created_ts, updated_ts, modified_by) VALUES ($12, $1, $9, $4, $6, $2, $10, $5, $7, $3, $11, $8)
16395 | adchemy10013 | 7271 | 17361 | adchemy | pgxc | 192.168.53.109 | | 57776 | 2013-04-08 16:57:15.63199-07 | 2013-04-08 16:57:16.575535-07 | 2013-04-08 16:57:17.00605-07 | 2013-04-08 16:57:17.007747-07 | t | active | INSERT INTO biods.feature_value (feature_value_id, feature_value_semid, feature_value, feature_semid, feature_id, created_ts, updated_ts, source_msg_ts, modified_by) VALUES ($9, $5, $4, $3, $2, $1, $8, $7, $6)
16395 | adchemy10013 | 7283 | 17361 | adchemy | pgxc | 192.168.53.109 | | 57788 | 2013-04-08 16:57:16.292702-07 | 2013-04-08 16:57:21.849125-07 | 2013-04-08 16:57:21.978824-07 | 2013-04-08 16:57:21.853251-07 | t | active | INSERT INTO biods.product_feature (prd_feature_id, category_id, prd_id, feature_id, feature_value_id, category_semid, prd_semid, feature_semid, feature_value_semid, created_ts, updated_ts, modified_by) VALUES ($12, $1, $9, $4, $6, $2, $10, $5, $7, $3, $11, $8)
16395 | adchemy10013 | 7284 | 17361 | adchemy | pgxc | 192.168.53.109 | | 57789 | 2013-04-08 16:57:16.295879-07 | 2013-04-08 16:57:24.233166-07 | 2013-04-08 16:57:24.321938-07 | 2013-04-08 16:57:24.237514-07 | t | active | INSERT INTO biods.product_feature (prd_feature_id, category_id, prd_id, feature_id, feature_value_id, category_semid, prd_semid, feature_semid, feature_value_semid, created_ts, updated_ts, modified_by) VALUES ($12, $1, $9, $4, $6, $2, $10, $5, $7, $3, $11, $8)
16395 | adchemy10013 | 7285 | 17361 | adchemy | pgxc | 192.168.53.109 | | 57790 | 2013-04-08 16:57:16.299271-07 | 2013-04-08 16:57:22.119868-07 | 2013-04-08 16:57:22.197213-07 | 2013-04-08 16:57:22.128357-07 | t | active | INSERT INTO biods.product_feature (prd_feature_id, category_id, prd_id, feature_id, feature_value_id, category_semid, prd_semid, feature_semid, feature_value_semid, created_ts, updated_ts, modified_by) VALUES ($12, $1, $9, $4, $6, $2, $10, $5, $7, $3, $11, $8)
16395 | adchemy10013 | 7465 | 17361 | adchemy | pgxc | 192.168.53.109 | | 57954 | 2013-04-08 17:01:54.750113-07 | 2013-04-08 17:02:00.17336-07 | 2013-04-08 17:02:00.320469-07 | 2013-04-08 17:02:00.177758-07 | t | active | INSERT INTO biods.product_feature (prd_feature_id, category_id, prd_id, feature_id, feature_value_id, category_semid, prd_semid, feature_semid, feature_value_semid, created_ts, updated_ts, modified_by) VALUES ($12, $1, $9, $4, $6, $2, $10, $5, $7, $3, $11, $8)
16395 | adchemy10013 | 7466 | 17361 | adchemy | pgxc | 192.168.53.109 | | 57955 | 2013-04-08 17:01:54.753559-07 | 2013-04-08 17:01:59.49003-07 | 2013-04-08 17:01:59.602925-07 | 2013-04-08 17:01:59.493732-07 | t | active | INSERT INTO biods.product_feature (prd_feature_id, category_id, prd_id, feature_id, feature_value_id, category_semid, prd_semid, feature_semid, feature_value_semid, created_ts, updated_ts, modified_by) VALUES ($12, $1, $9, $4, $6, $2, $10, $5, $7, $3, $11, $8)
16395 | adchemy10013 | 7467 | 17361 | adchemy | pgxc | 192.168.53.109 | | 57956 | 2013-04-08 17:01:54.75699-07 | 2013-04-08 17:01:58.262083-07 | 2013-04-08 17:01:58.349452-07 | 2013-04-08 17:01:58.350822-07 | t | active | INSERT INTO biods.feature_value (feature_value_id, feature_value_semid, feature_value, feature_semid, feature_id, created_ts, updated_ts, source_msg_ts, modified_by) VALUES ($9, $5, $4, $3, $2, $1, $8, $7, $6)
16395 | adchemy10013 | 7473 | 17361 | adchemy | pgxc | 192.168.53.109 | | 57963 | 2013-04-08 17:01:55.49134-07 | 2013-04-08 17:02:00.313138-07 | 2013-04-08 17:02:00.420405-07 | 2013-04-08 17:02:00.318887-07 | t | active | INSERT INTO biods.product_feature (prd_feature_id, category_id, prd_id, feature_id, feature_value_id, category_semid, prd_semid, feature_semid, feature_value_semid, created_ts, updated_ts, modified_by) VALUES ($12, $1, $9, $4, $6, $2, $10, $5, $7, $3, $11, $8)
16395 | adchemy10013 | 7474 | 17361 | adchemy | pgxc | 192.168.53.109 | | 57964 | 2013-04-08 17:01:55.494777-07 | 2013-04-08 17:02:00.514142-07 | 2013-04-08 17:02:00.577239-07 | 2013-04-08 17:02:00.519572-07 | t | active | INSERT INTO biods.product_feature (prd_feature_id, category_id, prd_id, feature_id, feature_value_id, category_semid, prd_semid, feature_semid, feature_value_semid, created_ts, updated_ts, modified_by) VALUES ($12, $1, $9, $4, $6, $2, $10, $5, $7, $3, $11, $8)
12893 | postgres | 8517 | 10 | postgres | psql | | | -1 | 2013-04-08 17:35:28.217934-07 | | 2013-04-08 17:35:28.220366-07 | 2013-04-08 17:35:28.220369-07 | f | active | select * from pg_catalog.pg_stat_activity;
(30 rows)

________________________________________
Venky Kandaswamy
Principal Engineer, Adchemy Inc.
925-200-7124

------------------------------------------------------------------------------
Precog is a next-generation analytics platform capable of advanced
analytics on semi-structured data. The platform includes APIs for building
apps and a phenomenal toolset for data science. Developers can use
our toolset for easy data analysis & visualization. Get a free account!
http://www2.precog.com/precogplatform/slashdotnewsletter
_______________________________________________
Postgres-xc-developers mailing list
Pos...@li...
https://lists.sourceforge.net/lists/listinfo/postgres-xc-developers
--
Best Wishes,
Ashutosh Bapat
EntepriseDB Corporation
The Enterprise Postgres Company |
From: Ashutosh B. <ash...@en...> - 2013-04-12 03:37:40
|
Yes, it is important that the upgrade doesn't happen at an unexpected
time, i.e., while we are in the midst of releasing the next release. It
will be great if they can schedule our upgrade after June.

On Fri, Apr 12, 2013 at 8:24 AM, Koichi Suzuki <ko...@in...> wrote:

> As committers may have noticed, the sourceforge team is upgrading all
> the projects, beginning April 22nd. They say that they're not sure
> when specific projects are upgraded and how long each upgrade takes.
>
> Here's an announcement: http://sourceforge.net/blog/upgrades-april22/
> Again, details of the upgrade will be found at:
> https://sourceforge.net/p/upgrade/
>
> The important point is the change in URLs for the code repositories. I
> do hope that the URL of the project page is not changed.
>
> I tested an upgrade with one of my projects, "pglesslog", which is now
> inactive and does not have any major impact. The result is:
>
> 1. Repo URL changes. Wow, we can use https to handle git!
> 2. Web site URL seems not to change,
> 3. Project URL seems not to change,
> 4. Appearance of the administration page is completely new.
> 5. SSH interface will be provided, as well as SCP, RSYNC and others.
>
> So the impact seems to be minor, but I'd like to write to sourceforge
> to delay the upgrade until the end of June for the 1.1 release just in
> case, or upgrade "now" to avoid any problems after the feature freeze.
>
> Any inputs?
> ---
> Koichi Suzuki
>
> ------------------------------------------------------------------------------
> Precog is a next-generation analytics platform capable of advanced
> analytics on semi-structured data. The platform includes APIs for building
> apps and a phenomenal toolset for data science. Developers can use
> our toolset for easy data analysis & visualization. Get a free account!
> http://www2.precog.com/precogplatform/slashdotnewsletter
> _______________________________________________
> Postgres-xc-core mailing list
> Pos...@li...
> https://lists.sourceforge.net/lists/listinfo/postgres-xc-core
>

--
Best Wishes,
Ashutosh Bapat
EntepriseDB Corporation
The Enterprise Postgres Company |
From: Ashutosh B. <ash...@en...> - 2013-04-12 03:25:55
|
Are the FK table and referring table on the same node? Foreign key
constraints cannot be implemented if they involve rows across nodes. In
fact, global constraints (constraints that involve data across various
nodes) are not supported by 1.0. Neither will they be supported in the
next version.

On Thu, Apr 11, 2013 at 6:51 AM, Venky Kandaswamy <ve...@ad...> wrote:

> [...]
pid | relname | locktype | mode | granted > ------+--------------------+----------+------------------+--------- > 7169 | hibernate_sequence | relation | AccessShareLock | t > 7173 | hibernate_sequence | relation | AccessShareLock | t > 7172 | hibernate_sequence | relation | AccessShareLock | t > 7171 | feature_value | relation | AccessShareLock | t > 7171 | feature | relation | AccessShareLock | t > 7170 | hibernate_sequence | relation | AccessShareLock | t > 7171 | feature | relation | RowExclusiveLock | t > 7172 | product_feature | relation | RowExclusiveLock | t > 7170 | product_feature | relation | RowExclusiveLock | t > 7173 | product_feature | relation | RowExclusiveLock | t > 7169 | product_feature | relation | RowExclusiveLock | t > (11 rows) > > LOCKS ON DATANODE1: > > [venky@sv4-pgxc-db01 ~]$ /usr/local/pgsql/bin/psql -p 5433 -U postgres -d > adchemy1234 -c "SELECT pid, relname, locktype, mode, granted from pg_locks, > pg_class where relation=oid and relname not like 'pg_%' order by mode;" > pid | relname | locktype | mode | > granted > > ------+-----------------------+----------+--------------------------+--------- > 7174 | prd_id | relation | AccessShareLock | t > 7182 | feature_id | relation | AccessShareLock | t > 7174 | feature_value_id | relation | AccessShareLock | t > 7183 | feature_id | relation | AccessShareLock | t > 7174 | feature_id | relation | AccessShareLock | t > 7175 | feature_id | relation | AccessShareLock | t > 7181 | feature | tuple | ExclusiveLock | t > 7181 | feature_semid | relation | RowExclusiveLock | t > 7181 | feature_id | relation | RowExclusiveLock | t > 7181 | feature | relation | RowExclusiveLock | t > 7175 | cat_prd_feature_semid | relation | RowExclusiveLock | t > 7183 | cat_prd_feature_semid | relation | RowExclusiveLock | t > 7183 | prd_feature_id | relation | RowExclusiveLock | t > 7183 | product_feature | relation | RowExclusiveLock | t > 7182 | cat_prd_feature_semid | relation | RowExclusiveLock | t > 7182 | prd_feature_id | relation | RowExclusiveLock | t > 7182 | product_feature | relation | RowExclusiveLock | t > 7175 | prd_feature_id | relation | RowExclusiveLock | t > 7175 | product_feature | relation | RowExclusiveLock | t > 7174 | product_feature | relation | RowExclusiveLock | t > 7206 | feature_semid | relation | RowExclusiveLock | t > 7206 | feature_id | relation | RowExclusiveLock | t > 7174 | product | relation | RowShareLock | t > 7182 | feature | relation | RowShareLock | t > 7174 | feature_value | relation | RowShareLock | t > 7183 | category | relation | RowShareLock | t > 7174 | feature | relation | RowShareLock | t > 7174 | category | relation | RowShareLock | t > 7175 | category | relation | RowShareLock | t > 7175 | feature | relation | RowShareLock | t > 7183 | feature | relation | RowShareLock | t > 7182 | category | relation | RowShareLock | t > 7182 | feature | tuple | ShareLock | f > 7175 | feature | tuple | ShareLock | f > 7183 | feature | tuple | ShareLock | f > 7206 | feature | relation | ShareUpdateExclusiveLock | t > > LOCKS ON DATANODE2: > > [venky@sv4-pgxc-db01 ~]$ /usr/local/pgsql/bin/psql -p 5434 -U postgres -d > adchemy1234 -c "SELECT pid, relname, locktype, mode, granted from pg_locks, > pg_class where relation=oid and relname not like 'pg_%' order by mode;" > pid | relname | locktype | mode | > granted > > ------+-----------------------+----------+--------------------------+--------- > 7185 | feature_id | relation | AccessShareLock | t > 7179 | feature_value_id | relation | AccessShareLock | t > 7179 | 
prd_id | relation | AccessShareLock | t > 7184 | feature_id | relation | AccessShareLock | t > 7180 | feature_id | relation | AccessShareLock | t > 7179 | feature_id | relation | AccessShareLock | t > 7186 | feature | tuple | ExclusiveLock | t > 7184 | prd_feature_id | relation | RowExclusiveLock | t > 7184 | product_feature | relation | RowExclusiveLock | t > 7186 | feature_semid | relation | RowExclusiveLock | t > 7186 | feature_id | relation | RowExclusiveLock | t > 7186 | feature | relation | RowExclusiveLock | t > 7185 | cat_prd_feature_semid | relation | RowExclusiveLock | t > 7185 | prd_feature_id | relation | RowExclusiveLock | t > 7185 | product_feature | relation | RowExclusiveLock | t > 7184 | cat_prd_feature_semid | relation | RowExclusiveLock | t > 7180 | cat_prd_feature_semid | relation | RowExclusiveLock | t > 7180 | prd_feature_id | relation | RowExclusiveLock | t > 7180 | product_feature | relation | RowExclusiveLock | t > 7179 | product_feature | relation | RowExclusiveLock | t > 7202 | feature_semid | relation | RowExclusiveLock | t > 7202 | feature_id | relation | RowExclusiveLock | t > 7179 | product | relation | RowShareLock | t > 7184 | feature | relation | RowShareLock | t > 7179 | feature_value | relation | RowShareLock | t > 7185 | category | relation | RowShareLock | t > 7179 | feature | relation | RowShareLock | t > 7179 | category | relation | RowShareLock | t > 7180 | feature | relation | RowShareLock | t > 7180 | category | relation | RowShareLock | t > 7185 | feature | relation | RowShareLock | t > 7184 | category | relation | RowShareLock | t > 7185 | feature | tuple | ShareLock | f > 7180 | feature | tuple | ShareLock | f > 7184 | feature | tuple | ShareLock | f > 7202 | feature | relation | ShareUpdateExclusiveLock | t > (36 rows) > > > > ________________________________________ > > Venky Kandaswamy > > Principal Engineer, Adchemy Inc. > > 925-200-7124 > ------------------------------ > *From:* Koichi Suzuki [koi...@gm...] > *Sent:* Monday, April 08, 2013 10:41 PM > *To:* Amit Khandekar > *Cc:* Venky Kandaswamy; pos...@li... > *Subject:* Re: [Postgres-xc-developers] PGXC hangs when run with > concurrent inserts > > Because inserts are being done in parallel, I'm afraid there could be a > possibility that we have internal lock conflicts, which should not happen. > > Regards; > ---------- > Koichi Suzuki > > > 2013/4/9 Amit Khandekar <ami...@en...> > >> >> >> >> On 9 April 2013 06:46, Venky Kandaswamy <ve...@ad...> wrote: >> >>> All, >>> We have been running into a hang issue on our app that appears to be >>> related to PGXC. Our app processes messages from RabbitMQ and >>> inserts/updates tables. We run 5 concurrent threads. The incoming queues >>> are replicated, one feeding Postgres 9.1 and the other feeding PGXC >>> (current git master). PGXC is hanging on inserts after processing a few >>> transactions. It does not appear to be related to the actual data itself. >>> It looks like all the sessions are waiting for something. There is no >>> information on locks available from pg_locks. >>> >> >> Since most of the operations are inserts, it does not look like it is >> due to locks, unless something has acquired table locks. But just to rule >> out that possibility, it would be better if you check pg_locks on the >> datanodes, if you have checked it only on the coordinator so far. >> >> >>> >>> An strace simply says recvfrom(10. >>> >>> There are no errors in the logs from gtm, coordinator or datanodes. 
>>> >>> The tables have referential integrity and use a shared sequence to >>> get the next id. Is it possible that something is going on with the logic >>> to retrieve sequence numbers? The tables are all replicated. >>> >>> Unfortunately, we have not been able to reproduce a reliable test >>> case. >>> >>> [postgres@gnode0 pgxc]$ /usr/local/pgsql/bin/psql -p 5433 -U >>> postgres -d postgres -c 'select * from pg_catalog.pg_stat_activity;' >>> datid | datname | pid | usesysid | usename | application_name >>> | client_addr | client_hostname | client_port | >>> backend_start | xact_start | >>> query_start | state_change | waiting | >>> state | >>> >>> query >>> >>> >>> -------+--------------+-------+----------+----------+------------------+----------------+-----------------+-------------+-------------------------------+-------------------------------+-------------------------------+-------------------------------+---------+---------------------+-------------------- >>> >>> -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- >>> 12893 | postgres | 22330 | 10 | postgres | pgxc | >>> 192.168.53.109 | | 47025 | 2013-03-31 >>> 21:42:16.724845-07 | | 2013-04-08 >>> 15:43:52.313325-07 | 2013-04-08 15:26:11.444754-07 | f | >>> idle | COMMIT PREPARED 'T1 >>> 32273' >>> 16393 | master | 4267 | 16392 | xcadmin | pgxc | >>> 192.168.53.109 | | 54961 | 2013-04-08 >>> 15:24:28.668023-07 | | 2013-04-08 >>> 15:33:17.586836-07 | 2013-04-08 15:33:17.587942-07 | f | >>> idle | SELECT count(*) FRO >>> M ONLY bicommon.account_datasource WHERE true >>> 16395 | adchemy10013 | 4363 | 16392 | xcadmin | pgxc | >>> 192.168.53.109 | | 55084 | 2013-04-08 >>> 15:28:48.822939-07 | | 2013-04-08 >>> 15:50:21.650727-07 | 2013-04-08 15:50:07.916753-07 | f | >>> idle | SELECT prd_id, prd_ >>> semid, prd_name, prd_line, prd_model, prd_brand, prd_image_url, >>> prd_dest_url, created_ts, updated_ts, source_msg_ts, modified_by FROM >>> biods.product >>> 16393 | master | 4486 | 16392 | xcadmin | pgxc | >>> 192.168.53.109 | | 55246 | 2013-04-08 >>> 15:33:21.019388-07 | | 2013-04-08 >>> 15:43:51.321376-07 | 2013-04-08 15:43:51.322675-07 | f | >>> idle | SET SESSION AUTHORI >>> ZATION DEFAULT;RESET ALL; >>> 16393 | master | 4781 | 16392 | xcadmin | pgxc | >>> 192.168.53.109 | | 55515 | 2013-04-08 >>> 15:42:42.122785-07 | | 2013-04-08 >>> 17:02:21.023713-07 | 2013-04-08 17:02:20.804751-07 | f | >>> idle | SET SESSION AUTHORI >>> ZATION DEFAULT;RESET ALL; >>> 16393 | master | 4787 | 16392 | xcadmin | pgxc | >>> 192.168.53.109 | | 55521 | 2013-04-08 >>> 15:42:42.142662-07 | | 2013-04-08 >>> 16:17:19.26364-07 | 2013-04-08 16:17:19.126163-07 | f | >>> idle | SET SESSION AUTHORI >>> ZATION DEFAULT;RESET ALL; >>> 16393 | master | 4792 | 16392 | xcadmin | pgxc | >>> 192.168.53.109 | | 55526 | 2013-04-08 >>> 15:42:42.159009-07 | | 2013-04-08 >>> 15:45:11.915026-07 | 2013-04-08 15:45:11.886392-07 | f | >>> idle | SET SESSION AUTHORI >>> ZATION DEFAULT;RESET ALL; >>> 16393 | master | 4799 | 16392 | xcadmin | pgxc | >>> 192.168.53.109 | | 55533 | 2013-04-08 >>> 15:42:42.678387-07 | | 2013-04-08 >>> 17:02:21.195332-07 | 2013-04-08 17:02:20.805074-07 | f | >>> idle | SET SESSION AUTHORI >>> ZATION DEFAULT;RESET ALL; >>> 16393 | master | 4804 | 16392 | xcadmin | pgxc | >>> 192.168.53.109 | | 55538 | 2013-04-08 >>> 15:42:42.694802-07 | | 2013-04-08 >>> 
15:45:11.904619-07 | 2013-04-08 15:45:11.888493-07 | f | >>> idle | SET SESSION AUTHORI >>> ZATION DEFAULT;RESET ALL; >>> 16395 | adchemy10013 | 4977 | 17361 | adchemy | pgxc | >>> 192.168.53.109 | | 55732 | 2013-04-08 >>> 15:47:34.901175-07 | 2013-04-08 15:48:08.345331-07 | 2013-04-08 >>> 15:48:08.528818-07 | 2013-04-08 15:48:08.410815-07 | f | idle in >>> transaction | INSERT INTO biods.p >>> roduct_feature (prd_feature_id, category_id, prd_id, feature_id, >>> feature_value_id, category_semid, prd_semid, feature_semid, >>> feature_value_semid, created_ts, updated_ts, modified_by) VALUES ($12, $1, >>> $9, $4, $6, $2, $10, $5, $7, $3, $11, $8) >>> 16395 | adchemy10013 | 4979 | 17361 | adchemy | pgxc | >>> 192.168.53.109 | | 55734 | 2013-04-08 >>> 15:47:35.042778-07 | 2013-04-08 15:48:16.384763-07 | 2013-04-08 >>> 15:48:16.506899-07 | 2013-04-08 15:48:16.388503-07 | t | >>> active | INSERT INTO biods.p >>> roduct_feature (prd_feature_id, category_id, prd_id, feature_id, >>> feature_value_id, category_semid, prd_semid, feature_semid, >>> feature_value_semid, created_ts, updated_ts, modified_by) VALUES ($12, $1, >>> $9, $4, $6, $2, $10, $5, $7, $3, $11, $8) >>> 16395 | adchemy10013 | 4985 | 17361 | adchemy | pgxc | >>> 192.168.53.109 | | 55740 | 2013-04-08 15:47: >>> 35.235945-07 | 2013-04-08 15:48:14.38895-07 | 2013-04-08 >>> 15:48:14.445351-07 | 2013-04-08 15:48:14.446752-07 | t | >>> active | UPDATE biods.featur >>> e SET feature_semid = $3, feature_name = $2, created_ts = $1, updated_ts >>> = $6, source_msg_ts = $5, modified_by = $4 WHERE (feature_id = $7) >>> 16395 | adchemy10013 | 4986 | 17361 | adchemy | pgxc | >>> 192.168.53.109 | | 55741 | 2013-04-08 15:47: >>> 35.238843-07 | 2013-04-08 15:48:18.201043-07 | 2013-04-08 >>> 15:48:18.273204-07 | 2013-04-08 15:48:18.205647-07 | t | >>> active | INSERT INTO biods.p >>> roduct_feature (prd_feature_id, category_id, prd_id, feature_id, >>> feature_value_id, category_semid, prd_semid, feature_semid, >>> feature_value_semid, created_ts, updated_ts, modified_by) VALUES ($12, $1, >>> $9, $4, $6, $2, $10, $5, $7, $3, $11, $8) >>> 16395 | adchemy10013 | 4998 | 17361 | adchemy | pgxc | >>> 192.168.53.109 | | 55753 | 2013-04-08 >>> 15:47:35.910309-07 | 2013-04-08 15:48:08.412038-07 | 2013-04-08 >>> 15:48:08.566945-07 | 2013-04-08 15:48:08.415026-07 | t | >>> active | INSERT INTO biods.p >>> roduct_feature (prd_feature_id, category_id, prd_id, feature_id, >>> feature_value_id, category_semid, prd_semid, feature_semid, >>> feature_value_semid, created_ts, updated_ts, modified_by) VALUES ($12, $1, >>> $9, $4, $6, $2, $10, $5, $7, $3, $11, $8) >>> 16395 | adchemy10013 | 6340 | 17361 | adchemy | pgxc | >>> 192.168.53.109 | | 57002 | 2013-04-08 >>> 16:31:44.414804-07 | 2013-04-08 16:31:50.293828-07 | 2013-04-08 >>> 16:31:50.433988-07 | 2013-04-08 16:31:50.297752-07 | t | >>> active | INSERT INTO biods.p >>> roduct_feature (prd_feature_id, category_id, prd_id, feature_id, >>> feature_value_id, category_semid, prd_semid, feature_semid, >>> feature_value_semid, created_ts, updated_ts, modified_by) VALUES ($12, $1, >>> $9, $4, $6, $2, $10, $5, $7, $3, $11, $8) >>> 16395 | adchemy10013 | 6341 | 17361 | adchemy | pgxc | >>> 192.168.53.109 | | 57003 | 2013-04-08 >>> 16:31:44.418356-07 | 2013-04-08 16:31:49.450704-07 | 2013-04-08 >>> 16:31:49.599946-07 | 2013-04-08 16:31:49.45562-07 | t | >>> active | INSERT INTO biods.p >>> roduct_feature (prd_feature_id, category_id, prd_id, feature_id, >>> feature_value_id, category_semid, prd_semid, feature_semid, >>> 
feature_value_semid, created_ts, updated_ts, modified_by) VALUES ($12, $1, >>> $9, $4, $6, $2, $10, $5, $7, $3, $11, $8) >>> 16395 | adchemy10013 | 6348 | 17361 | adchemy | pgxc | >>> 192.168.53.109 | | 57010 | 2013-04-08 16:31: >>> 45.065767-07 | 2013-04-08 16:31:50.699979-07 | 2013-04-08 >>> 16:31:50.817425-07 | 2013-04-08 16:31:50.704669-07 | t | >>> active | INSERT INTO biods.p >>> roduct_feature (prd_feature_id, category_id, prd_id, feature_id, >>> feature_value_id, category_semid, prd_semid, feature_semid, >>> feature_value_semid, created_ts, updated_ts, modified_by) VALUES ($12, $1, >>> $9, $4, $6, $2, $10, $5, $7, $3, $11, $8) >>> 16395 | adchemy10013 | 6349 | 17361 | adchemy | pgxc | >>> 192.168.53.109 | | 57011 | 2013-04-08 >>> 16:31:45.06926-07 | 2013-04-08 16:31:51.528207-07 | 2013-04-08 >>> 16:31:51.582036-07 | 2013-04-08 16:31:51.532618-07 | t | >>> active | INSERT INTO biods.p >>> roduct_feature (prd_feature_id, category_id, prd_id, feature_id, >>> feature_value_id, category_semid, prd_semid, feature_semid, >>> feature_value_semid, created_ts, updated_ts, modified_by) VALUES ($12, $1, >>> $9, $4, $6, $2, $10, $5, $7, $3, $11, $8) >>> 16395 | adchemy10013 | 6350 | 17361 | adchemy | pgxc | >>> 192.168.53.109 | | 57012 | 2013-04-08 16:31: >>> 45.072711-07 | 2013-04-08 16:31:50.085336-07 | 2013-04-08 >>> 16:31:50.223221-07 | 2013-04-08 16:31:50.088908-07 | t | >>> active | INSERT INTO biods.p >>> roduct_feature (prd_feature_id, category_id, prd_id, feature_id, >>> feature_value_id, category_semid, prd_semid, feature_semid, >>> feature_value_semid, created_ts, updated_ts, modified_by) VALUES ($12, $1, >>> $9, $4, $6, $2, $10, $5, $7, $3, $11, $8) >>> 16395 | adchemy10013 | 7269 | 17361 | adchemy | pgxc | >>> 192.168.53.109 | | 57774 | 2013-04-08 >>> 16:57:15.563006-07 | 2013-04-08 16:57:21.849156-07 | 2013-04-08 >>> 16:57:21.978984-07 | 2013-04-08 16:57:21.853289-07 | t | >>> active | INSERT INTO biods.p >>> roduct_feature (prd_feature_id, category_id, prd_id, feature_id, >>> feature_value_id, category_semid, prd_semid, feature_semid, >>> feature_value_semid, created_ts, updated_ts, modified_by) VALUES ($12, $1, >>> $9, $4, $6, $2, $10, $5, $7, $3, $11, $8) >>> 16395 | adchemy10013 | 7271 | 17361 | adchemy | pgxc | >>> 192.168.53.109 | | 57776 | 2013-04-08 >>> 16:57:15.63199-07 | 2013-04-08 16:57:16.575535-07 | 2013-04-08 >>> 16:57:17.00605-07 | 2013-04-08 16:57:17.007747-07 | t | >>> active | INSERT INTO biods.f >>> eature_value (feature_value_id, feature_value_semid, feature_value, >>> feature_semid, feature_id, created_ts, updated_ts, source_msg_ts, >>> modified_by) VALUES ($9, $5, $4, $3, $2, $1, $8, $7, $6) >>> 16395 | adchemy10013 | 7283 | 17361 | adchemy | pgxc | >>> 192.168.53.109 | | 57788 | 2013-04-08 >>> 16:57:16.292702-07 | 2013-04-08 16:57:21.849125-07 | 2013-04-08 >>> 16:57:21.978824-07 | 2013-04-08 16:57:21.853251-07 | t | >>> active | INSERT INTO biods.p >>> roduct_feature (prd_feature_id, category_id, prd_id, feature_id, >>> feature_value_id, category_semid, prd_semid, feature_semid, >>> feature_value_semid, created_ts, updated_ts, modified_by) VALUES ($12, $1, >>> $9, $4, $6, $2, $10, $5, $7, $3, $11, $8) >>> 16395 | adchemy10013 | 7284 | 17361 | adchemy | pgxc | >>> 192.168.53.109 | | 57789 | 2013-04-08 >>> 16:57:16.295879-07 | 2013-04-08 16:57:24.233166-07 | 2013-04-08 >>> 16:57:24.321938-07 | 2013-04-08 16:57:24.237514-07 | t | >>> active | INSERT INTO biods.p >>> roduct_feature (prd_feature_id, category_id, prd_id, feature_id, >>> feature_value_id, 
category_semid, prd_semid, feature_semid, >>> feature_value_semid, created_ts, updated_ts, modified_by) VALUES ($12, $1, >>> $9, $4, $6, $2, $10, $5, $7, $3, $11, $8) >>> 16395 | adchemy10013 | 7285 | 17361 | adchemy | pgxc | >>> 192.168.53.109 | | 57790 | 2013-04-08 >>> 16:57:16.299271-07 | 2013-04-08 16:57:22.119868-07 | 2013-04-08 >>> 16:57:22.197213-07 | 2013-04-08 16:57:22.128357-07 | t | >>> active | INSERT INTO biods.p >>> roduct_feature (prd_feature_id, category_id, prd_id, feature_id, >>> feature_value_id, category_semid, prd_semid, feature_semid, >>> feature_value_semid, created_ts, updated_ts, modified_by) VALUES ($12, $1, >>> $9, $4, $6, $2, $10, $5, $7, $3, $11, $8) >>> 16395 | adchemy10013 | 7465 | 17361 | adchemy | pgxc | >>> 192.168.53.109 | | 57954 | 2013-04-08 >>> 17:01:54.750113-07 | 2013-04-08 17:02:00.17336-07 | 2013-04-08 >>> 17:02:00.320469-07 | 2013-04-08 17:02:00.177758-07 | t | >>> active | INSERT INTO biods.p >>> roduct_feature (prd_feature_id, category_id, prd_id, feature_id, >>> feature_value_id, category_semid, prd_semid, feature_semid, >>> feature_value_semid, created_ts, updated_ts, modified_by) VALUES ($12, $1, >>> $9, $4, $6, $2, $10, $5, $7, $3, $11, $8) >>> 16395 | adchemy10013 | 7466 | 17361 | adchemy | pgxc | >>> 192.168.53.109 | | 57955 | 2013-04-08 >>> 17:01:54.753559-07 | 2013-04-08 17:01:59.49003-07 | 2013-04-08 >>> 17:01:59.602925-07 | 2013-04-08 17:01:59.493732-07 | t | >>> active | INSERT INTO biods.p >>> roduct_feature (prd_feature_id, category_id, prd_id, feature_id, >>> feature_value_id, category_semid, prd_semid, feature_semid, >>> feature_value_semid, created_ts, updated_ts, modified_by) VALUES ($12, $1, >>> $9, $4, $6, $2, $10, $5, $7, $3, $11, $8) >>> 16395 | adchemy10013 | 7467 | 17361 | adchemy | pgxc | >>> 192.168.53.109 | | 57956 | 2013-04-08 >>> 17:01:54.75699-07 | 2013-04-08 17:01:58.262083-07 | 2013-04-08 >>> 17:01:58.349452-07 | 2013-04-08 17:01:58.350822-07 | t | >>> active | INSERT INTO biods.f >>> eature_value (feature_value_id, feature_value_semid, feature_value, >>> feature_semid, feature_id, created_ts, updated_ts, source_msg_ts, >>> modified_by) VALUES ($9, $5, $4, $3, $2, $1, $8, $7, $6) >>> 16395 | adchemy10013 | 7473 | 17361 | adchemy | pgxc | >>> 192.168.53.109 | | 57963 | 2013-04-08 >>> 17:01:55.49134-07 | 2013-04-08 17:02:00.313138-07 | 2013-04-08 >>> 17:02:00.420405-07 | 2013-04-08 17:02:00.318887-07 | t | >>> active | INSERT INTO biods.p >>> roduct_feature (prd_feature_id, category_id, prd_id, feature_id, >>> feature_value_id, category_semid, prd_semid, feature_semid, >>> feature_value_semid, created_ts, updated_ts, modified_by) VALUES ($12, $1, >>> $9, $4, $6, $2, $10, $5, $7, $3, $11, $8) >>> 16395 | adchemy10013 | 7474 | 17361 | adchemy | pgxc | >>> 192.168.53.109 | | 57964 | 2013-04-08 >>> 17:01:55.494777-07 | 2013-04-08 17:02:00.514142-07 | 2013-04-08 >>> 17:02:00.577239-07 | 2013-04-08 17:02:00.519572-07 | t | >>> active | INSERT INTO biods.p >>> roduct_feature (prd_feature_id, category_id, prd_id, feature_id, >>> feature_value_id, category_semid, prd_semid, feature_semid, >>> feature_value_semid, created_ts, updated_ts, modified_by) VALUES ($12, $1, >>> $9, $4, $6, $2, $10, $5, $7, $3, $11, $8) >>> 12893 | postgres | 8517 | 10 | postgres | psql >>> | | | -1 | 2013-04-08 >>> 17:35:28.217934-07 | | 2013-04-08 >>> 17:35:28.220366-07 | 2013-04-08 17:35:28.220369-07 | f | >>> active | select * from pg_ca >>> talog.pg_stat_activity; >>> (30 rows) >>> >>> >>> ________________________________________ >>> >>> Venky 
Kandaswamy >>> >>> Principal Engineer, Adchemy Inc. >>> >>> 925-200-7124 -- Best Wishes, Ashutosh Bapat EnterpriseDB Corporation The Enterprise Postgres Company |
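The ungranted tuple-level ShareLocks in the listings above can be traced to the session holding the conflicting lock with a pg_locks self-join on each datanode. A minimal sketch, reusing the connection settings from the commands above (simplified to the lock fields that matter here, and written for the 9.2-based pg_locks layout):

/usr/local/pgsql/bin/psql -p 5433 -U postgres -d adchemy1234 -c "
SELECT w.pid AS waiting_pid, w.mode AS waiting_mode,
       h.pid AS holding_pid, h.mode AS holding_mode
FROM pg_locks w
JOIN pg_locks h
  ON h.pid <> w.pid
 AND h.granted
 AND h.locktype = w.locktype
 AND h.database IS NOT DISTINCT FROM w.database
 AND h.relation IS NOT DISTINCT FROM w.relation
 AND h.page IS NOT DISTINCT FROM w.page
 AND h.tuple IS NOT DISTINCT FROM w.tuple
 AND h.transactionid IS NOT DISTINCT FROM w.transactionid
WHERE NOT w.granted;"

Against the datanode1 listing above, this should report pids 7175, 7182 and 7183 waiting behind the tuple lock held by pid 7181 on feature.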
From: Michael P. <mic...@gm...> - 2013-04-12 02:58:33
|
On Fri, Apr 12, 2013 at 11:54 AM, Koichi Suzuki <ko...@in...> wrote: > As committers may have noticed, the SourceForge team is upgrading all the > projects, beginning on April 22nd. They say that they're not sure when > specific projects will be upgraded and how long each upgrade will take. > > Here's an announcement: http://sourceforge.net/blog/upgrades-april22/ > Again, details of the upgrade can be found at: > https://sourceforge.net/p/upgrade/ > > The important point is the change in URLs for the code repositories. I do > hope that the URL of the project page is not changed. > > I tested an upgrade with one of my projects, "pglesslog", which is now > inactive, so the upgrade did not have any major impact. The result is: > > 1. The repo URL changes. Wow, we can use https to handle git! > This is cool. They lacked this support for years. Even with that, I don't think that their services will gain users from the github community, though ;) -- Michael |
From: Koichi S. <ko...@in...> - 2013-04-12 02:53:05
|
As committers may have noticed, the SourceForge team is upgrading all the projects, beginning on April 22nd. They say that they're not sure when specific projects will be upgraded and how long each upgrade will take. Here's an announcement: http://sourceforge.net/blog/upgrades-april22/ Again, details of the upgrade can be found at: https://sourceforge.net/p/upgrade/ The important point is the change in URLs for the code repositories. I do hope that the URL of the project page is not changed. I tested an upgrade with one of my projects, "pglesslog", which is now inactive, so the upgrade did not have any major impact. The result is: 1. The repo URL changes. Wow, we can use https to handle git! 2. The web site URL seems not to change. 3. The project URL seems not to change. 4. The appearance of the administration page is completely new. 5. An SSH interface will be provided, as well as SCP, RSYNC and others. So the impact seems to be minor, but I'd like to write to SourceForge to delay the upgrade until the end of June for the 1.1 release just in case, or to upgrade "now" to avoid any problems after the feature freeze. Any input? --- Koichi Suzuki |
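If only the repository URL changes, existing clones should not need to be re-cloned; the remote can be re-pointed once the new address is published. A rough sketch (the https URL below is only a guess at the post-upgrade format, not the confirmed address):

git remote -v    # show the current origin URL
git remote set-url origin https://git.code.sf.net/p/postgres-xc/postgres-xc
git fetch origin    # confirm the new URL is reachable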
From: Venky K. <ve...@ad...> - 2013-04-11 19:49:58
|
Amit/Andrei, Setting a primary node (the same node on all coordinators) seems to fix the problem. We have not had a hang so far. Appears to be humming smoothly now. ________________________________________ Venky Kandaswamy Principal Engineer, Adchemy Inc. 925-200-7124 ________________________________ From: Venky Kandaswamy Sent: Thursday, April 11, 2013 8:09 AM To: pos...@li... Cc: pos...@li... Subject: RE: [Postgres-xc-developers] PGXC hangs when run with concurrent inserts Thanks Amit & Andrei. There is only one update and 4 inserts - a total of 5 threads which are waiting on each other. Not sure if this is a typical deadlock situation. These are all replicated tables and none of the datanodes are marked as primary. I will mark one as primary and see if the problem happens again. Thanks for your insight. ________________________________________ Venky Kandaswamy Principal Engineer, Adchemy Inc. 925-200-7124 ________________________________ From: Andrei Martsinchyk [and...@gm...] Sent: Thursday, April 11, 2013 2:01 AM To: Amit Khandekar Cc: Venky Kandaswamy; pos...@li... Subject: Re: [Postgres-xc-developers] PGXC hangs when run with concurrent inserts 2013/4/11 Amit Khandekar <ami...@en...> On 11 April 2013 13:35, Andrei Martsinchyk <and...@gm...> wrote: I see the not-granted tuple-level locks on the datanodes; they are requested by the "INSERT waiting" processes. I guess they are updating indexes. It seems like these locks are not granted because of the exclusive lock held by "UPDATE waiting". But it is not clear what the update is waiting for. I think there are multiple updates. That's why I think they might be waiting for each other, causing a deadlock, possibly because there is no primary node. I see only one update in the ps outputs. 2013/4/11 Amit Khandekar <ami...@en...> Hi Venky, Thanks for the details. Have you defined one of the datanodes as a primary node? If not, we need to define one, because replicated table updates need that in order to avoid deadlocks. If you have already marked a node as a primary node, is the primary node one of the nodes on which the feature table is replicated? If not, you may have hit this bug: http://sourceforge.net/tracker/index.php?func=detail&aid=3547808&group_id=311227&atid=1310232 Currently we hit this bug because the primary node is not table-specific; it should be made table-specific. For now you need to make sure one of the nodes on which the table is replicated is defined as a primary node. On 11 April 2013 06:51, Venky Kandaswamy <ve...@ad...> wrote: We are processing inserts/updates using multiple threads. Here is the trace log of the actual statements that are hung. The scenario shows the statements on the coordinator and 2 datanodes. The scenario is similar across all the datanodes. The same data updates did not cause Postgres 9.1.2 to hang. This could be related to an application problem, although we could not reproduce it on Postgres 9.1.2. At a high level, there is an update on the 'feature' table that is holding an exclusive lock on the row. The inserts are inserting into another table that has a foreign key that references the row being locked by the update. Pids 7174 and 7179 are waiting to complete and they are also similar inserts. The only thing in common seems to be that the update is locking the feature row that is referenced in a foreign key in the other inserts. This should not cause a deadlock, I believe. 
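For reference, marking a primary datanode is a catalog change that has to be made on every coordinator, followed by a pool reload. A minimal sketch, assuming the chosen datanode is registered as dn1 (check SELECT * FROM pgxc_node for the real node name) and using the coordinator port from the earlier commands:

/usr/local/pgsql/bin/psql -p 5432 -U postgres -d adchemy1234 -c "ALTER NODE dn1 WITH (PRIMARY)"
/usr/local/pgsql/bin/psql -p 5432 -U postgres -d adchemy1234 -c "SELECT pgxc_pool_reload()"

Repeat against each coordinator so that all of them agree on the same primary node.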
The question in my mind is whether pids 7181 and 7186 should have been granted exclusive access to a tuple while others were granted share access. This might cause a race condition. This causes PGXC to hang. Obviously, the update is in turn waiting for something (which we cannot figure out from the logs) and therefore not committing the update. [postgres@sv4-pgxc-db01 pgxc]$ ps -ef | grep adchemy1234 <COORDINATOR> postgres 7169 7113 0 16:41 ? 00:00:02 postgres: adchemy adchemy1234 192.168.51.73(49186) INSERT postgres 7170 7113 0 16:41 ? 00:00:02 postgres: adchemy adchemy1234 192.168.51.73(49187) INSERT postgres 7171 7113 0 16:41 ? 00:00:02 postgres: adchemy adchemy1234 192.168.51.73(49188) UPDATE postgres 7172 7113 0 16:41 ? 00:00:02 postgres: adchemy adchemy1234 192.168.51.73(49189) INSERT postgres 7173 7113 0 16:41 ? 00:00:02 postgres: adchemy adchemy1234 192.168.51.73(49190) INSERT <COORDINATOR> <DATANODE1> postgres 7174 7127 0 16:41 ? 00:00:01 postgres: adchemy adchemy1234 172.17.28.61(51909) idle in transaction postgres 7175 7127 0 16:41 ? 00:00:01 postgres: adchemy adchemy1234 172.17.28.61(51910) INSERT waiting postgres 7181 7127 0 16:41 ? 00:00:01 postgres: adchemy adchemy1234 172.17.28.61(51924) UPDATE waiting postgres 7182 7127 0 16:41 ? 00:00:01 postgres: adchemy adchemy1234 172.17.28.61(51925) INSERT waiting postgres 7183 7127 0 16:41 ? 00:00:01 postgres: adchemy adchemy1234 172.17.28.61(51926) INSERT waiting <DATANODE1> <DATANODE2> postgres 7179 7140 0 16:41 ? 00:00:00 postgres: adchemy adchemy1234 172.17.28.61(48957) idle in transaction postgres 7180 7140 0 16:41 ? 00:00:00 postgres: adchemy adchemy1234 172.17.28.61(48962) INSERT waiting postgres 7184 7140 0 16:41 ? 00:00:00 postgres: adchemy adchemy1234 172.17.28.61(48970) INSERT waiting postgres 7185 7140 0 16:41 ? 00:00:00 postgres: adchemy adchemy1234 172.17.28.61(48975) INSERT waiting postgres 7186 7140 0 16:41 ? 00:00:00 postgres: adchemy adchemy1234 172.17.28.61(48980) UPDATE waiting <DATANODE2> -----LOGS----- formatted %t %u %p 2013-04-10 16:42:16 PDT adchemy 7169 LOG: execute S_1: BEGIN 2013-04-10 16:42:16 PDT adchemy 7169 LOG: execute <unnamed>: select nextval ('hibernate_sequence') 2013-04-10 16:42:16 PDT adchemy 7169 LOG: execute <unnamed>: insert into biods.product_feature (category_id, category_semid, created_ts, feature_id, feature_semid, feature_value_id, feature_value_semid, modified_by, prd_id, prd_semid, updated_ts, prd_feature_id) values ($1, $2, $3, $4, $5, $6, $7, $8, $9, $10, $11, $12) 2013-04-10 16:42:16 PDT adchemy 7169 DETAIL: parameters: $1 = '42302', $2 = 'Handbags', $3 = '2013-04-10 15:02:42.343-07', $4 = '42318', $5 = 'description', $6 = '46105', $7 = 'description,Give your riches the designer treatment with Mcms leather heritage wallet. 
The logo-stamped little number stores your essentials in luxe vintage style.', $8 = NULL, $9 = '46449', $10 = '7630015470685', $11 = '2013-04-10 15:02:42.343-07', $12 = '46455' 2013-04-10 16:42:16 PDT adchemy 7170 LOG: execute S_1: BEGIN 2013-04-10 16:42:16 PDT adchemy 7170 LOG: execute <unnamed>: select nextval ('hibernate_sequence') 2013-04-10 16:42:16 PDT adchemy 7170 LOG: execute <unnamed>: insert into biods.product_feature (category_id, category_semid, created_ts, feature_id, feature_semid, feature_value_id, feature_value_semid, modified_by, prd_id, prd_semid, updated_ts, prd_feature_id) values ($1, $2, $3, $4, $5, $6, $7, $8, $9, $10, $11, $12) 2013-04-10 16:42:16 PDT adchemy 7170 DETAIL: parameters: $1 = '42302', $2 = 'Handbags', $3 = '2013-04-10 15:02:43.413-07', $4 = '42318', $5 = 'description', $6 = '46326', $7 = 'description,Rich leather is dressed up with a bold logo-stamped plaque in this utility chic wallet from Marc By Marc Jacobs.', $8 = NULL, $9 = '46438', $10 = '883936992041', $11 = '2013-04-10 15:02:43.413-07', $12 = '46445' 2013-04-10 16:42:15 PDT adchemy 7171 LOG: execute S_1: BEGIN 2013-04-10 16:42:15 PDT adchemy 7171 LOG: execute <unnamed>: select feature0_.feature_id as feature1_8_1_, feature0_.created_ts as created2_8_1_, feature0_.feature_name as feature3_8_1_, feature0_.feature_semid as feature4_8_1_, feature0_.modified_by as modified5_8_1_, feature0_.source_msg_ts as source6_8_1_, feature0_.updated_ts as updated7_8_1_, featureval1_.feature_id as feature9_8_3_, featureval1_.feature_value_id as feature1_14_3_, featureval1_.feature_value_id as feature1_14_0_, featureval1_.created_ts as created2_14_0_, featureval1_.feature_id as feature9_14_0_, featureval1_.feature_semid as feature3_14_0_, featureval1_.feature_value as feature4_14_0_, featureval1_.feature_value_semid as feature5_14_0_, featureval1_.modified_by as modified6_14_0_, featureval1_.source_msg_ts as source7_14_0_, featureval1_.updated_ts as updated8_14_0_ from biods.feature feature0_ left outer join biods.feature_value featureval1_ on feature0_.feature_id=featureval1_.feature_id where feature0_.feature_id=$1 2013-04-10 16:42:15 PDT adchemy 7171 DETAIL: parameters: $1 = '42318' 2013-04-10 16:42:15 PDT adchemy 7171 LOG: execute <unnamed>: update biods.feature set created_ts=$1, feature_name=$2, feature_semid=$3, modified_by=$4, source_msg_ts=$5, updated_ts=$6 where feature_id=$7 2013-04-10 16:42:15 PDT adchemy 7171 DETAIL: parameters: $1 = '2013-04-10 15:02:34.706-07', $2 = 'description', $3 = 'description', $4 = NULL, $5 = '2013-04-10 15:02:43.576-07', $6 = '2013-04-10 15:02:43.573-07', $7 = '42318' 2013-04-10 16:42:17 PDT adchemy 7172 LOG: execute S_1: BEGIN 2013-04-10 16:42:17 PDT adchemy 7172 LOG: execute <unnamed>: select nextval ('hibernate_sequence') 2013-04-10 16:42:17 PDT adchemy 7172 LOG: execute <unnamed>: insert into biods.product_feature (category_id, category_semid, created_ts, feature_id, feature_semid, feature_value_id, feature_value_semid, modified_by, prd_id, prd_semid, updated_ts, prd_feature_id) values ($1, $2, $3, $4, $5, $6, $7, $8, $9, $10, $11, $12) 2013-04-10 16:42:17 PDT adchemy 7172 DETAIL: parameters: $1 = '42302', $2 = 'Handbags', $3 = '2013-04-10 15:02:42.003-07', $4 = '42318', $5 = 'description', $6 = '44831', $7 = 'description,A chic logo-detailed cosmetic case for the contemporary girl from Tory Burch. 
Exclusive to Bloomingdales.', $8 = NULL, $9 = '46453', $10 = '885427179580', $11 = '2013-04-10 15:02:42.003-07', $12 = '46460' 2013-04-10 16:42:15 PDT adchemy 7173 LOG: execute S_1: BEGIN 2013-04-10 16:42:15 PDT adchemy 7173 LOG: execute <unnamed>: select nextval ('hibernate_sequence') 2013-04-10 16:42:15 PDT adchemy 7173 LOG: execute <unnamed>: insert into biods.product_feature (category_id, category_semid, created_ts, feature_id, feature_semid, feature_value_id, feature_value_semid, modified_by, prd_id, prd_semid, updated_ts, prd_feature_id) values ($1, $2, $3, $4, $5, $6, $7, $8, $9, $10, $11, $12) 2013-04-10 16:42:15 PDT adchemy 7173 DETAIL: parameters: $1 = '42302', $2 = 'Handbags', $3 = '2013-04-10 15:02:42.674-07', $4 = '42318', $5 = 'description', $6 = '46154', $7 = 'description,Keep the essentials close with LeSportsacs crossbody bag in matte black nylon practical interior zip compartments make those daily errands a little bit easier.', $8 = NULL, $9 = '46425', $10 = '883681258669', $11 = '2013-04-10 15:02:42.674-07', $12 = '46435' 2013-04-10 16:42:15 PDT adchemy 7174 LOG: statement: START TRANSACTION ISOLATION LEVEL read committed READ WRITE 2013-04-10 16:42:15 PDT adchemy 7174 LOG: execute <unnamed>: INSERT INTO biods.product_feature (prd_feature_id, category_id, prd_id, feature_id, feature_value_id, category_semid, prd_semid, feature_semid, feature_value_semid, created_ts, updated_ts, modified_by) VALUES ($12, $1, $9, $4, $6, $2, $10, $5, $7, $3, $11, $8) 2013-04-10 16:42:15 PDT adchemy 7174 DETAIL: parameters: $1 = '42302', $2 = 'Handbags', $3 = '2013-04-10 15:02:42.674-07', $4 = '42318', $5 = 'description', $6 = '46154', $7 = 'description,Keep the essentials close with LeSportsacs crossbody bag in matte black nylon practical interior zip compartments make those daily errands a little bit easier.', $8 = NULL, $9 = '46425', $10 = '883681258669', $11 = '2013-04-10 15:02:42.674-07', $12 = '46435' 2013-04-10 16:42:16 PDT adchemy 7175 LOG: statement: START TRANSACTION ISOLATION LEVEL read committed READ WRITE 2013-04-10 16:42:16 PDT adchemy 7175 LOG: execute <unnamed>: INSERT INTO biods.product_feature (prd_feature_id, category_id, prd_id, feature_id, feature_value_id, category_semid, prd_semid, feature_semid, feature_value_semid, created_ts, updated_ts, modified_by) VALUES ($12, $1, $9, $4, $6, $2, $10, $5, $7, $3, $11, $8) 2013-04-10 16:42:16 PDT adchemy 7175 DETAIL: parameters: $1 = '42302', $2 = 'Handbags', $3 = '2013-04-10 15:02:42.343-07', $4 = '42318', $5 = 'description', $6 = '46105', $7 = 'description,Give your riches the designer treatment with Mcms leather heritage wallet. 
The logo-stamped little number stores your essentials in luxe vintage style.', $8 = NULL, $9 = '46449', $10 = '7630015470685', $11 = '2013-04-10 15:02:42.343-07', $12 = '46455' 2013-04-10 16:42:15 PDT adchemy 7179 LOG: statement: START TRANSACTION ISOLATION LEVEL read committed READ WRITE 2013-04-10 16:42:15 PDT adchemy 7179 LOG: execute <unnamed>: INSERT INTO biods.product_feature (prd_feature_id, category_id, prd_id, feature_id, feature_value_id, category_semid, prd_semid, feature_semid, feature_value_semid, created_ts, updated_ts, modified_by) VALUES ($12, $1, $9, $4, $6, $2, $10, $5, $7, $3, $11, $8) 2013-04-10 16:42:15 PDT adchemy 7179 DETAIL: parameters: $1 = '42302', $2 = 'Handbags', $3 = '2013-04-10 15:02:42.674-07', $4 = '42318', $5 = 'description', $6 = '46154', $7 = 'description,Keep the essentials close with LeSportsacs crossbody bag in matte black nylon practical interior zip compartments make those daily errands a little bit easier.', $8 = NULL, $9 = '46425', $10 = '883681258669', $11 = '2013-04-10 15:02:42.674-07', $12 = '46435' 2013-04-10 16:42:16 PDT adchemy 7180 LOG: statement: START TRANSACTION ISOLATION LEVEL read committed READ WRITE 2013-04-10 16:42:16 PDT adchemy 7180 LOG: execute <unnamed>: INSERT INTO biods.product_feature (prd_feature_id, category_id, prd_id, feature_id, feature_value_id, category_semid, prd_semid, feature_semid, feature_value_semid, created_ts, updated_ts, modified_by) VALUES ($12, $1, $9, $4, $6, $2, $10, $5, $7, $3, $11, $8) 2013-04-10 16:42:16 PDT adchemy 7180 DETAIL: parameters: $1 = '42302', $2 = 'Handbags', $3 = '2013-04-10 15:02:42.343-07', $4 = '42318', $5 = 'description', $6 = '46105', $7 = 'description,Give your riches the designer treatment with Mcms leather heritage wallet. The logo-stamped little number stores your essentials in luxe vintage style.', $8 = NULL, $9 = '46449', $10 = '7630015470685', $11 = '2013-04-10 15:02:42.343-07', $12 = '46455' 2013-04-10 16:42:15 PDT adchemy 7181 LOG: statement: START TRANSACTION ISOLATION LEVEL read committed READ WRITE 2013-04-10 16:42:15 PDT adchemy 7181 LOG: execute <unnamed>: UPDATE biods.feature SET feature_semid = $3, feature_name = $2, created_ts = $1, updated_ts = $6, source_msg_ts = $5, modified_by = $4 WHERE (feature_id = $7) 2013-04-10 16:42:15 PDT adchemy 7181 DETAIL: parameters: $1 = '2013-04-10 15:02:34.706-07', $2 = 'description', $3 = 'description', $4 = NULL, $5 = '2013-04-10 15:02:43.576-07', $6 = '2013-04-10 15:02:43.573-07', $7 = '42318' 2013-04-10 16:42:17 PDT adchemy 7182 LOG: statement: START TRANSACTION ISOLATION LEVEL read committed READ WRITE 2013-04-10 16:42:17 PDT adchemy 7182 LOG: execute <unnamed>: INSERT INTO biods.product_feature (prd_feature_id, category_id, prd_id, feature_id, feature_value_id, category_semid, prd_semid, feature_semid, feature_value_semid, created_ts, updated_ts, modified_by) VALUES ($12, $1, $9, $4, $6, $2, $10, $5, $7, $3, $11, $8) 2013-04-10 16:42:17 PDT adchemy 7182 DETAIL: parameters: $1 = '42302', $2 = 'Handbags', $3 = '2013-04-10 15:02:42.003-07', $4 = '42318', $5 = 'description', $6 = '44831', $7 = 'description,A chic logo-detailed cosmetic case for the contemporary girl from Tory Burch. 
Exclusive to Bloomingdales.', $8 = NULL, $9 = '46453', $10 = '885427179580', $11 = '2013-04-10 15:02:42.003-07', $12 = '46460' 2013-04-10 16:42:16 PDT adchemy 7183 LOG: statement: START TRANSACTION ISOLATION LEVEL read committed READ WRITE 2013-04-10 16:42:16 PDT adchemy 7183 LOG: execute <unnamed>: INSERT INTO biods.product_feature (prd_feature_id, category_id, prd_id, feature_id, feature_value_id, category_semid, prd_semid, feature_semid, feature_value_semid, created_ts, updated_ts, modified_by) VALUES ($12, $1, $9, $4, $6, $2, $10, $5, $7, $3, $11, $8) 2013-04-10 16:42:16 PDT adchemy 7183 DETAIL: parameters: $1 = '42302', $2 = 'Handbags', $3 = '2013-04-10 15:02:43.413-07', $4 = '42318', $5 = 'description', $6 = '46326', $7 = 'description,Rich leather is dressed up with a bold logo-stamped plaque in this utility chic wallet from Marc By Marc Jacobs.', $8 = NULL, $9 = '46438', $10 = '883936992041', $11 = '2013-04-10 15:02:43.413-07', $12 = '46445' 2013-04-10 16:42:17 PDT adchemy 7184 LOG: statement: START TRANSACTION ISOLATION LEVEL read committed READ WRITE 2013-04-10 16:42:17 PDT adchemy 7184 LOG: execute <unnamed>: INSERT INTO biods.product_feature (prd_feature_id, category_id, prd_id, feature_id, feature_value_id, category_semid, prd_semid, feature_semid, feature_value_semid, created_ts, updated_ts, modified_by) VALUES ($12, $1, $9, $4, $6, $2, $10, $5, $7, $3, $11, $8) 2013-04-10 16:42:17 PDT adchemy 7184 DETAIL: parameters: $1 = '42302', $2 = 'Handbags', $3 = '2013-04-10 15:02:42.003-07', $4 = '42318', $5 = 'description', $6 = '44831', $7 = 'description,A chic logo-detailed cosmetic case for the contemporary girl from Tory Burch. Exclusive to Bloomingdales.', $8 = NULL, $9 = '46453', $10 = '885427179580', $11 = '2013-04-10 15:02:42.003-07', $12 = '46460' 2013-04-10 16:42:16 PDT adchemy 7185 LOG: statement: START TRANSACTION ISOLATION LEVEL read committed READ WRITE 2013-04-10 16:42:16 PDT adchemy 7185 LOG: execute <unnamed>: INSERT INTO biods.product_feature (prd_feature_id, category_id, prd_id, feature_id, feature_value_id, category_semid, prd_semid, feature_semid, feature_value_semid, created_ts, updated_ts, modified_by) VALUES ($12, $1, $9, $4, $6, $2, $10, $5, $7, $3, $11, $8) 2013-04-10 16:42:16 PDT adchemy 7185 DETAIL: parameters: $1 = '42302', $2 = 'Handbags', $3 = '2013-04-10 15:02:43.413-07', $4 = '42318', $5 = 'description', $6 = '46326', $7 = 'description,Rich leather is dressed up with a bold logo-stamped plaque in this utility chic wallet from Marc By Marc Jacobs.', $8 = NULL, $9 = '46438', $10 = '883936992041', $11 = '2013-04-10 15:02:43.413-07', $12 = '46445' 2013-04-10 16:42:15 PDT adchemy 7186 LOG: statement: START TRANSACTION ISOLATION LEVEL read committed READ WRITE 2013-04-10 16:42:15 PDT adchemy 7186 LOG: execute <unnamed>: UPDATE biods.feature SET feature_semid = $3, feature_name = $2, created_ts = $1, updated_ts = $6, source_msg_ts = $5, modified_by = $4 WHERE (feature_id = $7) 2013-04-10 16:42:15 PDT adchemy 7186 DETAIL: parameters: $1 = '2013-04-10 15:02:34.706-07', $2 = 'description', $3 = 'description', $4 = NULL, $5 = '2013-04-10 15:02:43.576-07', $6 = '2013-04-10 15:02:43.573-07', $7 = '42318' LOCKS ON COORDINATOR: [venky@sv4-pgxc-db01 ~]$ /usr/local/pgsql/bin/psql -p 5432 -U postgres -d adchemy1234 -c "SELECT pid, relname, locktype, mode, granted from pg_locks, pg_class where relation=oid and relname not like 'pg_%' order by mode;" pid | relname | locktype | mode | granted ------+--------------------+----------+------------------+--------- 7169 | 
hibernate_sequence | relation | AccessShareLock | t 7173 | hibernate_sequence | relation | AccessShareLock | t 7172 | hibernate_sequence | relation | AccessShareLock | t 7171 | feature_value | relation | AccessShareLock | t 7171 | feature | relation | AccessShareLock | t 7170 | hibernate_sequence | relation | AccessShareLock | t 7171 | feature | relation | RowExclusiveLock | t 7172 | product_feature | relation | RowExclusiveLock | t 7170 | product_feature | relation | RowExclusiveLock | t 7173 | product_feature | relation | RowExclusiveLock | t 7169 | product_feature | relation | RowExclusiveLock | t (11 rows) LOCKS ON DATANODE1: [venky@sv4-pgxc-db01 ~]$ /usr/local/pgsql/bin/psql -p 5433 -U postgres -d adchemy1234 -c "SELECT pid, relname, locktype, mode, granted from pg_locks, pg_class where relation=oid and relname not like 'pg_%' order by mode;" pid | relname | locktype | mode | granted ------+-----------------------+----------+--------------------------+--------- 7174 | prd_id | relation | AccessShareLock | t 7182 | feature_id | relation | AccessShareLock | t 7174 | feature_value_id | relation | AccessShareLock | t 7183 | feature_id | relation | AccessShareLock | t 7174 | feature_id | relation | AccessShareLock | t 7175 | feature_id | relation | AccessShareLock | t 7181 | feature | tuple | ExclusiveLock | t 7181 | feature_semid | relation | RowExclusiveLock | t 7181 | feature_id | relation | RowExclusiveLock | t 7181 | feature | relation | RowExclusiveLock | t 7175 | cat_prd_feature_semid | relation | RowExclusiveLock | t 7183 | cat_prd_feature_semid | relation | RowExclusiveLock | t 7183 | prd_feature_id | relation | RowExclusiveLock | t 7183 | product_feature | relation | RowExclusiveLock | t 7182 | cat_prd_feature_semid | relation | RowExclusiveLock | t 7182 | prd_feature_id | relation | RowExclusiveLock | t 7182 | product_feature | relation | RowExclusiveLock | t 7175 | prd_feature_id | relation | RowExclusiveLock | t 7175 | product_feature | relation | RowExclusiveLock | t 7174 | product_feature | relation | RowExclusiveLock | t 7206 | feature_semid | relation | RowExclusiveLock | t 7206 | feature_id | relation | RowExclusiveLock | t 7174 | product | relation | RowShareLock | t 7182 | feature | relation | RowShareLock | t 7174 | feature_value | relation | RowShareLock | t 7183 | category | relation | RowShareLock | t 7174 | feature | relation | RowShareLock | t 7174 | category | relation | RowShareLock | t 7175 | category | relation | RowShareLock | t 7175 | feature | relation | RowShareLock | t 7183 | feature | relation | RowShareLock | t 7182 | category | relation | RowShareLock | t 7182 | feature | tuple | ShareLock | f 7175 | feature | tuple | ShareLock | f 7183 | feature | tuple | ShareLock | f 7206 | feature | relation | ShareUpdateExclusiveLock | t LOCKS ON DATANODE2: [venky@sv4-pgxc-db01 ~]$ /usr/local/pgsql/bin/psql -p 5434 -U postgres -d adchemy1234 -c "SELECT pid, relname, locktype, mode, granted from pg_locks, pg_class where relation=oid and relname not like 'pg_%' order by mode;" pid | relname | locktype | mode | granted ------+-----------------------+----------+--------------------------+--------- 7185 | feature_id | relation | AccessShareLock | t 7179 | feature_value_id | relation | AccessShareLock | t 7179 | prd_id | relation | AccessShareLock | t 7184 | feature_id | relation | AccessShareLock | t 7180 | feature_id | relation | AccessShareLock | t 7179 | feature_id | relation | AccessShareLock | t 7186 | feature | tuple | ExclusiveLock | t 7184 | prd_feature_id | 
relation | RowExclusiveLock | t 7184 | product_feature | relation | RowExclusiveLock | t 7186 | feature_semid | relation | RowExclusiveLock | t 7186 | feature_id | relation | RowExclusiveLock | t 7186 | feature | relation | RowExclusiveLock | t 7185 | cat_prd_feature_semid | relation | RowExclusiveLock | t 7185 | prd_feature_id | relation | RowExclusiveLock | t 7185 | product_feature | relation | RowExclusiveLock | t 7184 | cat_prd_feature_semid | relation | RowExclusiveLock | t 7180 | cat_prd_feature_semid | relation | RowExclusiveLock | t 7180 | prd_feature_id | relation | RowExclusiveLock | t 7180 | product_feature | relation | RowExclusiveLock | t 7179 | product_feature | relation | RowExclusiveLock | t 7202 | feature_semid | relation | RowExclusiveLock | t 7202 | feature_id | relation | RowExclusiveLock | t 7179 | product | relation | RowShareLock | t 7184 | feature | relation | RowShareLock | t 7179 | feature_value | relation | RowShareLock | t 7185 | category | relation | RowShareLock | t 7179 | feature | relation | RowShareLock | t 7179 | category | relation | RowShareLock | t 7180 | feature | relation | RowShareLock | t 7180 | category | relation | RowShareLock | t 7185 | feature | relation | RowShareLock | t 7184 | category | relation | RowShareLock | t 7185 | feature | tuple | ShareLock | f 7180 | feature | tuple | ShareLock | f 7184 | feature | tuple | ShareLock | f 7202 | feature | relation | ShareUpdateExclusiveLock | t (36 rows) ________________________________________ Venky Kandaswamy Principal Engineer, Adchemy Inc. 925-200-7124 ________________________________ From: Koichi Suzuki [koi...@gm...] Sent: Monday, April 08, 2013 10:41 PM To: Amit Khandekar Cc: Venky Kandaswamy; pos...@li... Subject: Re: [Postgres-xc-developers] PGXC hangs when run with concurrent inserts Because inserts are being done in parallel, I'm afraid there could be a possibility that we have internal lock conflicts, which should not happen. Regards; ---------- Koichi Suzuki 2013/4/9 Amit Khandekar <ami...@en...> On 9 April 2013 06:46, Venky Kandaswamy <ve...@ad...> wrote: All, We have been running into a hang issue on our app that appears to be related to PGXC. Our app processes messages from RabbitMQ and inserts/updates tables. We run 5 concurrent threads. The incoming queues are replicated, one feeding Postgres 9.1 and the other feeding PGXC (current git master). PGXC is hanging on inserts after processing a few transactions. It does not appear to be related to the actual data itself. It looks like all the sessions are waiting for something. There is no information on locks available from pg_locks. Since most of the operations are inserts, it does not look like it is due to locks, unless something has acquired table locks. But just to rule out that possibility, it would be better if you check pg_locks on the datanodes, if you have checked it only on the coordinator so far. An strace simply says recvfrom(10. There are no errors in the logs from gtm, coordinator or datanodes. The tables have referential integrity and use a shared sequence to get the next id. Is it possible that something is going on with the logic to retrieve sequence numbers? The tables are all replicated. Unfortunately, we have not been able to reproduce a reliable test case. 
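The waits themselves are consistent with ordinary foreign-key locking in the underlying 9.2 code: an insert into a referencing table runs SELECT ... FOR SHARE on the referenced row, which blocks behind any uncommitted update of that row. A stripped-down illustration against plain PostgreSQL (the database name demo and the parent/child tables are made up for the demo):

/usr/local/pgsql/bin/psql -d demo -c "CREATE TABLE parent(id int PRIMARY KEY); CREATE TABLE child(id int REFERENCES parent(id)); INSERT INTO parent VALUES (1);"
# session 1: hold an uncommitted update on the parent row for 30 seconds
/usr/local/pgsql/bin/psql -d demo -c "BEGIN; UPDATE parent SET id = id WHERE id = 1; SELECT pg_sleep(30); COMMIT;" &
sleep 1
# session 2: the FK check makes this insert wait until session 1 commits
/usr/local/pgsql/bin/psql -d demo -c "INSERT INTO child VALUES (1);"
wait

A wait like this only lasts as long as the updating transaction stays open; a permanent hang means something upstream is keeping that transaction from committing.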
[postgres@gnode0 pgxc]$ /usr/local/pgsql/bin/psql -p 5433 -U postgres -d postgres -c 'select * from pg_catalog.pg_stat_activity;'

datid | datname | pid | usesysid | usename | application_name | client_addr | client_hostname | client_port | backend_start | xact_start | query_start | state_change | waiting | state | query
12893 | postgres | 22330 | 10 | postgres | pgxc | 192.168.53.109 | | 47025 | 2013-03-31 21:42:16.724845-07 | | 2013-04-08 15:43:52.313325-07 | 2013-04-08 15:26:11.444754-07 | f | idle | COMMIT PREPARED 'T1 32273'
16393 | master | 4267 | 16392 | xcadmin | pgxc | 192.168.53.109 | | 54961 | 2013-04-08 15:24:28.668023-07 | | 2013-04-08 15:33:17.586836-07 | 2013-04-08 15:33:17.587942-07 | f | idle | SELECT count(*) FROM ONLY bicommon.account_datasource WHERE true
16395 | adchemy10013 | 4363 | 16392 | xcadmin | pgxc | 192.168.53.109 | | 55084 | 2013-04-08 15:28:48.822939-07 | | 2013-04-08 15:50:21.650727-07 | 2013-04-08 15:50:07.916753-07 | f | idle | SELECT prd_id, prd_semid, prd_name, prd_line, prd_model, prd_brand, prd_image_url, prd_dest_url, created_ts, updated_ts, source_msg_ts, modified_by FROM biods.product
16393 | master | 4486 | 16392 | xcadmin | pgxc | 192.168.53.109 | | 55246 | 2013-04-08 15:33:21.019388-07 | | 2013-04-08 15:43:51.321376-07 | 2013-04-08 15:43:51.322675-07 | f | idle | SET SESSION AUTHORIZATION DEFAULT;RESET ALL;
16393 | master | 4781 | 16392 | xcadmin | pgxc | 192.168.53.109 | | 55515 | 2013-04-08 15:42:42.122785-07 | | 2013-04-08 17:02:21.023713-07 | 2013-04-08 17:02:20.804751-07 | f | idle | SET SESSION AUTHORIZATION DEFAULT;RESET ALL;
16393 | master | 4787 | 16392 | xcadmin | pgxc | 192.168.53.109 | | 55521 | 2013-04-08 15:42:42.142662-07 | | 2013-04-08 16:17:19.26364-07 | 2013-04-08 16:17:19.126163-07 | f | idle | SET SESSION AUTHORIZATION DEFAULT;RESET ALL;
16393 | master | 4792 | 16392 | xcadmin | pgxc | 192.168.53.109 | | 55526 | 2013-04-08 15:42:42.159009-07 | | 2013-04-08 15:45:11.915026-07 | 2013-04-08 15:45:11.886392-07 | f | idle | SET SESSION AUTHORIZATION DEFAULT;RESET ALL;
16393 | master | 4799 | 16392 | xcadmin | pgxc | 192.168.53.109 | | 55533 | 2013-04-08 15:42:42.678387-07 | | 2013-04-08 17:02:21.195332-07 | 2013-04-08 17:02:20.805074-07 | f | idle | SET SESSION AUTHORIZATION DEFAULT;RESET ALL;
16393 | master | 4804 | 16392 | xcadmin | pgxc | 192.168.53.109 | | 55538 | 2013-04-08 15:42:42.694802-07 | | 2013-04-08 15:45:11.904619-07 | 2013-04-08 15:45:11.888493-07 | f | idle | SET SESSION AUTHORIZATION DEFAULT;RESET ALL;
16395 | adchemy10013 | 4977 | 17361 | adchemy | pgxc | 192.168.53.109 | | 55732 | 2013-04-08 15:47:34.901175-07 | 2013-04-08 15:48:08.345331-07 | 2013-04-08 15:48:08.528818-07 | 2013-04-08 15:48:08.410815-07 | f | idle in transaction | INSERT INTO biods.product_feature (prd_feature_id, category_id, prd_id, feature_id, feature_value_id, category_semid, prd_semid, feature_semid, feature_value_semid, created_ts, updated_ts, modified_by) VALUES ($12, $1, $9, $4, $6, $2, $10, $5, $7, $3, $11, $8)
16395 | adchemy10013 | 4979 | 17361 | adchemy | pgxc | 192.168.53.109 | | 55734 | 2013-04-08 15:47:35.042778-07 | 2013-04-08 15:48:16.384763-07 | 2013-04-08 15:48:16.506899-07 | 2013-04-08 15:48:16.388503-07 | t | active | INSERT INTO biods.product_feature (prd_feature_id, category_id, prd_id, feature_id, feature_value_id, category_semid, prd_semid, feature_semid, feature_value_semid, created_ts, updated_ts, modified_by) VALUES ($12, $1, $9, $4, $6, $2, $10, $5, $7, $3, $11, $8)
16395 | adchemy10013 | 4985 | 17361 | adchemy | pgxc | 192.168.53.109 | | 55740 | 2013-04-08 15:47:35.235945-07 | 2013-04-08 15:48:14.38895-07 | 2013-04-08 15:48:14.445351-07 | 2013-04-08 15:48:14.446752-07 | t | active | UPDATE biods.feature SET feature_semid = $3, feature_name = $2, created_ts = $1, updated_ts = $6, source_msg_ts = $5, modified_by = $4 WHERE (feature_id = $7)
16395 | adchemy10013 | 4986 | 17361 | adchemy | pgxc | 192.168.53.109 | | 55741 | 2013-04-08 15:47:35.238843-07 | 2013-04-08 15:48:18.201043-07 | 2013-04-08 15:48:18.273204-07 | 2013-04-08 15:48:18.205647-07 | t | active | INSERT INTO biods.product_feature (prd_feature_id, category_id, prd_id, feature_id, feature_value_id, category_semid, prd_semid, feature_semid, feature_value_semid, created_ts, updated_ts, modified_by) VALUES ($12, $1, $9, $4, $6, $2, $10, $5, $7, $3, $11, $8)
16395 | adchemy10013 | 4998 | 17361 | adchemy | pgxc | 192.168.53.109 | | 55753 | 2013-04-08 15:47:35.910309-07 | 2013-04-08 15:48:08.412038-07 | 2013-04-08 15:48:08.566945-07 | 2013-04-08 15:48:08.415026-07 | t | active | INSERT INTO biods.product_feature (prd_feature_id, category_id, prd_id, feature_id, feature_value_id, category_semid, prd_semid, feature_semid, feature_value_semid, created_ts, updated_ts, modified_by) VALUES ($12, $1, $9, $4, $6, $2, $10, $5, $7, $3, $11, $8)
16395 | adchemy10013 | 6340 | 17361 | adchemy | pgxc | 192.168.53.109 | | 57002 | 2013-04-08 16:31:44.414804-07 | 2013-04-08 16:31:50.293828-07 | 2013-04-08 16:31:50.433988-07 | 2013-04-08 16:31:50.297752-07 | t | active | INSERT INTO biods.product_feature (prd_feature_id, category_id, prd_id, feature_id, feature_value_id, category_semid, prd_semid, feature_semid, feature_value_semid, created_ts, updated_ts, modified_by) VALUES ($12, $1, $9, $4, $6, $2, $10, $5, $7, $3, $11, $8)
16395 | adchemy10013 | 6341 | 17361 | adchemy | pgxc | 192.168.53.109 | | 57003 | 2013-04-08 16:31:44.418356-07 | 2013-04-08 16:31:49.450704-07 | 2013-04-08 16:31:49.599946-07 | 2013-04-08 16:31:49.45562-07 | t | active | INSERT INTO biods.product_feature (prd_feature_id, category_id, prd_id, feature_id, feature_value_id, category_semid, prd_semid, feature_semid, feature_value_semid, created_ts, updated_ts, modified_by) VALUES ($12, $1, $9, $4, $6, $2, $10, $5, $7, $3, $11, $8)
16395 | adchemy10013 | 6348 | 17361 | adchemy | pgxc | 192.168.53.109 | | 57010 | 2013-04-08 16:31:45.065767-07 | 2013-04-08 16:31:50.699979-07 | 2013-04-08 16:31:50.817425-07 | 2013-04-08 16:31:50.704669-07 | t | active | INSERT INTO biods.product_feature (prd_feature_id, category_id, prd_id, feature_id, feature_value_id, category_semid, prd_semid, feature_semid, feature_value_semid, created_ts, updated_ts, modified_by) VALUES ($12, $1, $9, $4, $6, $2, $10, $5, $7, $3, $11, $8)
16395 | adchemy10013 | 6349 | 17361 | adchemy | pgxc | 192.168.53.109 | | 57011 | 2013-04-08 16:31:45.06926-07 | 2013-04-08 16:31:51.528207-07 | 2013-04-08 16:31:51.582036-07 | 2013-04-08 16:31:51.532618-07 | t | active | INSERT INTO biods.product_feature (prd_feature_id, category_id, prd_id, feature_id, feature_value_id, category_semid, prd_semid, feature_semid, feature_value_semid, created_ts, updated_ts, modified_by) VALUES ($12, $1, $9, $4, $6, $2, $10, $5, $7, $3, $11, $8)
16395 | adchemy10013 | 6350 | 17361 | adchemy | pgxc | 192.168.53.109 | | 57012 | 2013-04-08 16:31:45.072711-07 | 2013-04-08 16:31:50.085336-07 | 2013-04-08 16:31:50.223221-07 | 2013-04-08 16:31:50.088908-07 | t | active | INSERT INTO biods.product_feature (prd_feature_id, category_id, prd_id, feature_id, feature_value_id, category_semid, prd_semid, feature_semid, feature_value_semid, created_ts, updated_ts, modified_by) VALUES ($12, $1, $9, $4, $6, $2, $10, $5, $7, $3, $11, $8)
16395 | adchemy10013 | 7269 | 17361 | adchemy | pgxc | 192.168.53.109 | | 57774 | 2013-04-08 16:57:15.563006-07 | 2013-04-08 16:57:21.849156-07 | 2013-04-08 16:57:21.978984-07 | 2013-04-08 16:57:21.853289-07 | t | active | INSERT INTO biods.product_feature (prd_feature_id, category_id, prd_id, feature_id, feature_value_id, category_semid, prd_semid, feature_semid, feature_value_semid, created_ts, updated_ts, modified_by) VALUES ($12, $1, $9, $4, $6, $2, $10, $5, $7, $3, $11, $8)
16395 | adchemy10013 | 7271 | 17361 | adchemy | pgxc | 192.168.53.109 | | 57776 | 2013-04-08 16:57:15.63199-07 | 2013-04-08 16:57:16.575535-07 | 2013-04-08 16:57:17.00605-07 | 2013-04-08 16:57:17.007747-07 | t | active | INSERT INTO biods.feature_value (feature_value_id, feature_value_semid, feature_value, feature_semid, feature_id, created_ts, updated_ts, source_msg_ts, modified_by) VALUES ($9, $5, $4, $3, $2, $1, $8, $7, $6)
16395 | adchemy10013 | 7283 | 17361 | adchemy | pgxc | 192.168.53.109 | | 57788 | 2013-04-08 16:57:16.292702-07 | 2013-04-08 16:57:21.849125-07 | 2013-04-08 16:57:21.978824-07 | 2013-04-08 16:57:21.853251-07 | t | active | INSERT INTO biods.product_feature (prd_feature_id, category_id, prd_id, feature_id, feature_value_id, category_semid, prd_semid, feature_semid, feature_value_semid, created_ts, updated_ts, modified_by) VALUES ($12, $1, $9, $4, $6, $2, $10, $5, $7, $3, $11, $8)
16395 | adchemy10013 | 7284 | 17361 | adchemy | pgxc | 192.168.53.109 | | 57789 | 2013-04-08 16:57:16.295879-07 | 2013-04-08 16:57:24.233166-07 | 2013-04-08 16:57:24.321938-07 | 2013-04-08 16:57:24.237514-07 | t | active | INSERT INTO biods.product_feature (prd_feature_id, category_id, prd_id, feature_id, feature_value_id, category_semid, prd_semid, feature_semid, feature_value_semid, created_ts, updated_ts, modified_by) VALUES ($12, $1, $9, $4, $6, $2, $10, $5, $7, $3, $11, $8)
16395 | adchemy10013 | 7285 | 17361 | adchemy | pgxc | 192.168.53.109 | | 57790 | 2013-04-08 16:57:16.299271-07 | 2013-04-08 16:57:22.119868-07 | 2013-04-08 16:57:22.197213-07 | 2013-04-08 16:57:22.128357-07 | t | active | INSERT INTO biods.product_feature (prd_feature_id, category_id, prd_id, feature_id, feature_value_id, category_semid, prd_semid, feature_semid, feature_value_semid, created_ts, updated_ts, modified_by) VALUES ($12, $1, $9, $4, $6, $2, $10, $5, $7, $3, $11, $8)
16395 | adchemy10013 | 7465 | 17361 | adchemy | pgxc | 192.168.53.109 | | 57954 | 2013-04-08 17:01:54.750113-07 | 2013-04-08 17:02:00.17336-07 | 2013-04-08 17:02:00.320469-07 | 2013-04-08 17:02:00.177758-07 | t | active | INSERT INTO biods.product_feature (prd_feature_id, category_id, prd_id, feature_id, feature_value_id, category_semid, prd_semid, feature_semid, feature_value_semid, created_ts, updated_ts, modified_by) VALUES ($12, $1, $9, $4, $6, $2, $10, $5, $7, $3, $11, $8)
16395 | adchemy10013 | 7466 | 17361 | adchemy | pgxc | 192.168.53.109 | | 57955 | 2013-04-08 17:01:54.753559-07 | 2013-04-08 17:01:59.49003-07 | 2013-04-08 17:01:59.602925-07 | 2013-04-08 17:01:59.493732-07 | t | active | INSERT INTO biods.product_feature (prd_feature_id, category_id, prd_id, feature_id, feature_value_id, category_semid, prd_semid, feature_semid, feature_value_semid, created_ts, updated_ts, modified_by) VALUES ($12, $1, $9, $4, $6, $2, $10, $5, $7, $3, $11, $8)
16395 | adchemy10013 | 7467 | 17361 | adchemy | pgxc | 192.168.53.109 | | 57956 | 2013-04-08 17:01:54.75699-07 | 2013-04-08 17:01:58.262083-07 | 2013-04-08 17:01:58.349452-07 | 2013-04-08 17:01:58.350822-07 | t | active | INSERT INTO biods.feature_value (feature_value_id, feature_value_semid, feature_value, feature_semid, feature_id, created_ts, updated_ts, source_msg_ts, modified_by) VALUES ($9, $5, $4, $3, $2, $1, $8, $7, $6)
16395 | adchemy10013 | 7473 | 17361 | adchemy | pgxc | 192.168.53.109 | | 57963 | 2013-04-08 17:01:55.49134-07 | 2013-04-08 17:02:00.313138-07 | 2013-04-08 17:02:00.420405-07 | 2013-04-08 17:02:00.318887-07 | t | active | INSERT INTO biods.product_feature (prd_feature_id, category_id, prd_id, feature_id, feature_value_id, category_semid, prd_semid, feature_semid, feature_value_semid, created_ts, updated_ts, modified_by) VALUES ($12, $1, $9, $4, $6, $2, $10, $5, $7, $3, $11, $8)
16395 | adchemy10013 | 7474 | 17361 | adchemy | pgxc | 192.168.53.109 | | 57964 | 2013-04-08 17:01:55.494777-07 | 2013-04-08 17:02:00.514142-07 | 2013-04-08 17:02:00.577239-07 | 2013-04-08 17:02:00.519572-07 | t | active | INSERT INTO biods.product_feature (prd_feature_id, category_id, prd_id, feature_id, feature_value_id, category_semid, prd_semid, feature_semid, feature_value_semid, created_ts, updated_ts, modified_by) VALUES ($12, $1, $9, $4, $6, $2, $10, $5, $7, $3, $11, $8)
12893 | postgres | 8517 | 10 | postgres | psql | | | -1 | 2013-04-08 17:35:28.217934-07 | | 2013-04-08 17:35:28.220366-07 | 2013-04-08 17:35:28.220369-07 | f | active | select * from pg_catalog.pg_stat_activity;
(30 rows)

________________________________________
Venky Kandaswamy
Principal Engineer, Adchemy Inc.
925-200-7124
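A quick way to reduce a listing like the one above to just the problem
sessions — a sketch against this 9.2-based catalog, where pg_stat_activity
still exposes waiting as a boolean column:

postgres=# SELECT pid, datname, waiting, state, query_start, query
postgres-#   FROM pg_catalog.pg_stat_activity
postgres-#  WHERE waiting OR state = 'idle in transaction'   -- stuck or holding a transaction open
postgres-#  ORDER BY query_start;

Here that cuts the 30 rows down to the 19 waiting INSERT/UPDATE backends plus
the single idle-in-transaction session (pid 4977).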
_______________________________________________
Postgres-xc-developers mailing list
Pos...@li...
https://lists.sourceforge.net/lists/listinfo/postgres-xc-developers

--
Andrei Martsinchyk
StormDB - http://www.stormdb.com
The Database Cloud
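On the shared-sequence question raised earlier in the thread, one cheap check
while the system is wedged is to call nextval directly from a fresh session.
This is a sketch only — biods.prd_id_seq is a made-up name; substitute the
actual shared sequence:

postgres=# SELECT nextval('biods.prd_id_seq');  -- hypothetical sequence name

If this returns promptly in several concurrent sessions while the inserts are
hung, sequence retrieval from GTM is probably not the blocker.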