From: Ashutosh B. <ash...@en...> - 2012-06-28 06:35:15
Thanks Michael.

On Thu, Jun 28, 2012 at 12:03 PM, Michael Paquier <mic...@gm...> wrote:
> Just to give a little bit more justification for the unreadable email you received, you can refer here:
> https://github.com/postgres-xc/postgres-xc/commit/a871778ca44886721b05a64982b5e5d81c7590a3
>
> While working on the planner improvements for remote query path determination, Ashutosh noticed that he needed functionality already implemented in Postgres master (pull_var_clause filtering aggregate Vars), which was just a little bit ahead of XC master. So the decision was taken to merge the XC code up to commit c1d9579, which is in the middle of Postgres 9.2 development.
> The XC code will be merged up to the intersection of Postgres master and the 9.2 stable branch in a couple of weeks (for easier backporting from the 9.2 stable branch or Postgres master). We are not planning any stable release until then, so this merge has been made to facilitate development of the new XC features.
>
> Thanks,
>
> On Thu, Jun 28, 2012 at 3:22 PM, Michael Paquier <mic...@us...> wrote:
>> Project "Postgres-XC".
>>
>> The branch, master has been updated
>>      via 2a32c0ae0e2d01f3cc82384b24f610bd11a23755 (commit)
>>      via 67ab404afa3ac68f58f586ce889f116b8ff65e3b (commit)
>>      via 6ba0c48349fd21904822b43a2ea3241a6d0968a9 (commit)
>>      via a871778ca44886721b05a64982b5e5d81c7590a3 (commit)
>>      via c1d9579dd8bf3c921ca6bc2b62c40da6d25372e5 (commit)
>>      via 846af54dd5a77dc02feeb5e34283608012cfb217 (commit)
>>      via fd6913a18955b0f89ca994b5036c103bcea23f28 (commit)
>>      via 912bc4f038b3daaea4477c4b4e79fbd8c15e67a0 (commit)
>>      via afc9635c600ace716294a12d78abd37f65abd0ea (commit)
>>      via 3315020a091f64c8d08c3b32a2abd46431dcf857 (commit)
>>      via 75726307e6164673c48d6ce1d143a075b8ce18fa (commit)
>>      via 4240e429d0c2d889d0cda23c618f94e12c13ade7 (commit)
>>      via 9d522cb35d8b4f266abadd0d019f68eb8802ae05 (commit)
>>      via 89fd72cbf26f5d2e3d86ab19c1ead73ab8fac0fe (commit)
>>      via 9598afa3b0f7a7fdcf3740173346950b2bd5942c (commit)
>
> --
> Michael Paquier
> http://michael.otacoo.com

--
Best Wishes,
Ashutosh Bapat
EnterpriseDB Corporation
The Enterprise Postgres Company
From: Michael P. <mic...@gm...> - 2012-06-28 06:33:44
Just to give a little bit more justification for the unreadable email you received, you can refer here:
https://github.com/postgres-xc/postgres-xc/commit/a871778ca44886721b05a64982b5e5d81c7590a3

While working on the planner improvements for remote query path determination, Ashutosh noticed that he needed functionality already implemented in Postgres master (pull_var_clause filtering aggregate Vars), which was just a little bit ahead of XC master. So the decision was taken to merge the XC code up to commit c1d9579, which is in the middle of Postgres 9.2 development.
The XC code will be merged up to the intersection of Postgres master and the 9.2 stable branch in a couple of weeks (for easier backporting from the 9.2 stable branch or Postgres master). We are not planning any stable release until then, so this merge has been made to facilitate development of the new XC features.

Thanks,

On Thu, Jun 28, 2012 at 3:22 PM, Michael Paquier <mic...@us...> wrote:
> Project "Postgres-XC".
>
> The branch, master has been updated
>      via 2a32c0ae0e2d01f3cc82384b24f610bd11a23755 (commit)
>      via 67ab404afa3ac68f58f586ce889f116b8ff65e3b (commit)
>      via 6ba0c48349fd21904822b43a2ea3241a6d0968a9 (commit)
>      via a871778ca44886721b05a64982b5e5d81c7590a3 (commit)
>      via c1d9579dd8bf3c921ca6bc2b62c40da6d25372e5 (commit)
>      via 846af54dd5a77dc02feeb5e34283608012cfb217 (commit)
>      via fd6913a18955b0f89ca994b5036c103bcea23f28 (commit)
>      via 912bc4f038b3daaea4477c4b4e79fbd8c15e67a0 (commit)
>      via afc9635c600ace716294a12d78abd37f65abd0ea (commit)
>      via 3315020a091f64c8d08c3b32a2abd46431dcf857 (commit)
>      via 75726307e6164673c48d6ce1d143a075b8ce18fa (commit)
>      via 4240e429d0c2d889d0cda23c618f94e12c13ade7 (commit)
>      via 9d522cb35d8b4f266abadd0d019f68eb8802ae05 (commit)
>      via 89fd72cbf26f5d2e3d86ab19c1ead73ab8fac0fe (commit)
>      via 9598afa3b0f7a7fdcf3740173346950b2bd5942c (commit)

--
Michael Paquier
http://michael.otacoo.com
From: Michael P. <mic...@gm...> - 2012-06-28 06:16:46
On Thu, Jun 28, 2012 at 3:03 PM, Ashutosh Bapat <ash...@en...> wrote:
> Hi Michael,
> You need to take care of visibility of rows when you use the COPY mechanism.
>
> While you do all the stuff to copy from the old relation to the new relation, you need to simulate the behaviour of AtRewriteTable(). In this function, we take the latest snapshot and then copy the rows over to the new relation storage. You will need to simulate the same behaviour here.

The first version of the patch already does that. When running COPY FROM, TRUNCATE and COPY TO, a new snapshot is automatically taken at each step. You can refer to distrib.c in the latest version of the patch.

> On Thu, Jun 28, 2012 at 10:56 AM, Michael Paquier <mic...@gm...> wrote:
>>> The COPY TO results from the datanode are already in the required format for COPY FROM, so the data is ready to be sent back to the datanode as-is. So if possible, we should avoid any input-output conversion when storing in the tuplestore.
>>
>> Do you mean that we can store the results from COPY TO as-is in the tuplestore, meaning that we can use a tuplestore as-is? Or do you mean that we shouldn't use a tuplestore?
>>
>>> Also, please check if we can avoid storing the complete data in the tuplestore; instead we should transfer data from COPY TO to COPY FROM in chunks.
>>
>> Would be nice indeed.
>>
>>> Also I am not sure if we can truncate immediately after COPY TO is fired. Will that affect the data that is being fetched from COPY?
>>
>> Yes it will. We need COPY TO to finish before launching the TRUNCATE on the Datanodes, or we won't be able to fetch all the data. Hence it looks necessary to store all the data at some point in a tuplestore on the Coordinator, and using chunks is complicated with this way of doing things.
>>
>> --
>> Michael Paquier
>> http://michael.otacoo.com
>
> --
> Best Wishes,
> Ashutosh Bapat
> EnterpriseDB Corporation
> The Enterprise Postgres Company

--
Michael Paquier
http://michael.otacoo.com
From: Ashutosh B. <ash...@en...> - 2012-06-28 06:03:43
Hi Michael,
You need to take care of visibility of rows when you use the COPY mechanism.

While you do all the stuff to copy from the old relation to the new relation, you need to simulate the behaviour of AtRewriteTable(). In this function, we take the latest snapshot and then copy the rows over to the new relation storage. You will need to simulate the same behaviour here.

On Thu, Jun 28, 2012 at 10:56 AM, Michael Paquier <mic...@gm...> wrote:
>> The COPY TO results from the datanode are already in the required format for COPY FROM, so the data is ready to be sent back to the datanode as-is. So if possible, we should avoid any input-output conversion when storing in the tuplestore.
>
> Do you mean that we can store the results from COPY TO as-is in the tuplestore, meaning that we can use a tuplestore as-is? Or do you mean that we shouldn't use a tuplestore?
>
>> Also, please check if we can avoid storing the complete data in the tuplestore; instead we should transfer data from COPY TO to COPY FROM in chunks.
>
> Would be nice indeed.
>
>> Also I am not sure if we can truncate immediately after COPY TO is fired. Will that affect the data that is being fetched from COPY?
>
> Yes it will. We need COPY TO to finish before launching the TRUNCATE on the Datanodes, or we won't be able to fetch all the data. Hence it looks necessary to store all the data at some point in a tuplestore on the Coordinator, and using chunks is complicated with this way of doing things.
>
> --
> Michael Paquier
> http://michael.otacoo.com

--
Best Wishes,
Ashutosh Bapat
EnterpriseDB Corporation
The Enterprise Postgres Company
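[A minimal sketch of the snapshot discipline discussed above, modeled on the rewrite behaviour Ashutosh describes: take a fresh snapshot before each step so the rows being moved are seen with up-to-date visibility. The three redistrib_* step functions are hypothetical placeholders, not the actual distrib.c code; only the snapshot-manager and command-counter calls are real PostgreSQL APIs.]

#include "postgres.h"

#include "access/xact.h"
#include "utils/snapmgr.h"

/* Hypothetical placeholders for the three redistribution steps. */
extern void redistrib_copy_to(Oid relid);
extern void redistrib_truncate(Oid relid);
extern void redistrib_copy_from(Oid relid);

static void
redistrib_with_snapshots(Oid relid)
{
    /* COPY TO: read the existing rows under a fresh snapshot. */
    PushActiveSnapshot(GetTransactionSnapshot());
    redistrib_copy_to(relid);
    PopActiveSnapshot();
    CommandCounterIncrement();

    /* TRUNCATE the relation on the Datanodes. */
    PushActiveSnapshot(GetTransactionSnapshot());
    redistrib_truncate(relid);
    PopActiveSnapshot();
    CommandCounterIncrement();

    /* COPY FROM: reload the rows under the new distribution. */
    PushActiveSnapshot(GetTransactionSnapshot());
    redistrib_copy_from(relid);
    PopActiveSnapshot();
}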
From: Michael P. <mic...@gm...> - 2012-06-28 05:26:37
> The COPY TO results from the datanode are already in the required format for COPY FROM, so the data is ready to be sent back to the datanode as-is. So if possible, we should avoid any input-output conversion when storing in the tuplestore.

Do you mean that we can store the results from COPY TO as-is in the tuplestore, meaning that we can use a tuplestore as-is? Or do you mean that we shouldn't use a tuplestore?

> Also, please check if we can avoid storing the complete data in the tuplestore; instead we should transfer data from COPY TO to COPY FROM in chunks.

Would be nice indeed.

> Also I am not sure if we can truncate immediately after COPY TO is fired. Will that affect the data that is being fetched from COPY?

Yes it will. We need COPY TO to finish before launching the TRUNCATE on the Datanodes, or we won't be able to fetch all the data. Hence it looks necessary to store all the data at some point in a tuplestore on the Coordinator, and using chunks is complicated with this way of doing things.

--
Michael Paquier
http://michael.otacoo.com
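[Given the approach described above (store everything on the Coordinator first, then relaunch COPY FROM), the read-back side could look roughly like the sketch below. It assumes each COPY line is kept as a single text attribute in the tuplestore and uses the 9.2-era executor API; send_copy_line_to_datanodes() is a hypothetical stand-in for the execRemote.c path mentioned in the thread, not the actual patch.]

#include "postgres.h"

#include "executor/tuptable.h"
#include "utils/builtins.h"
#include "utils/tuplestore.h"

/* Hypothetical placeholder for routing one raw COPY line to the Datanodes. */
extern void send_copy_line_to_datanodes(const char *line);

static void
replay_copy_from(Tuplestorestate *store, TupleDesc tupdesc)
{
    TupleTableSlot *slot = MakeSingleTupleTableSlot(tupdesc);

    /* Read forward through every stored row once COPY TO has finished. */
    while (tuplestore_gettupleslot(store, true, false, slot))
    {
        bool    isnull;
        Datum   line = slot_getattr(slot, 1, &isnull);

        if (!isnull)
            send_copy_line_to_datanodes(TextDatumGetCString(line));

        ExecClearTuple(slot);
    }

    ExecDropSingleTupleTableSlot(slot);
    tuplestore_end(store);
}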
From: Amit K. <ami...@en...> - 2012-06-28 05:21:09
On 28 June 2012 09:05, Michael Paquier <mic...@gm...> wrote:
>> Something like this:
>> 1. Launch copy to stdout
>> 2. Launch truncate
>> 3. Get the results of step 1 and use copy APIs to redirect rows to the correct nodes.
>> Now the copy to data is put into a file, a tuple store if you want.
>
> Let me bring more details here. I had a closer look at postgres functionalities and there are several possibilities to send a copy output; the one I would like to use instead of the file currently being used is DestTuplestore. By using that, it would be possible to store all the tuples being redistributed without having to use an intermediate file, and postgres would do all the storage work.
> So, assuming that a tuplestore is used, here is how the redistribution steps would work by default:
> 1. launch COPY TO and output the result to a tuplestore
> 2. launch TRUNCATE
> 3. update the catalogs
> 4. use the tuplestore data and relaunch a COPY FROM with the execRemote.c APIs.

The COPY TO results from the datanode are already in the required format for COPY FROM, so the data is ready to be sent back to the datanode as-is. So if possible, we should avoid any input-output conversion when storing in the tuplestore.

Also, please check if we can avoid storing the complete data in the tuplestore; instead we should transfer data from COPY TO to COPY FROM in chunks.

Also I am not sure if we can truncate immediately after COPY TO is fired. Will that affect the data that is being fetched from COPY?

> --
> Michael Paquier
> http://michael.otacoo.com
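[Amit's point about avoiding input/output conversion could be illustrated by storing each raw COPY line unchanged as a single text attribute. The sketch below is only an illustration of that idea using the 9.2-era tuple-descriptor and tuplestore APIs; both functions are hypothetical and not code from the patch.]

#include "postgres.h"

#include "access/tupdesc.h"
#include "catalog/pg_type.h"
#include "miscadmin.h"
#include "utils/builtins.h"
#include "utils/tuplestore.h"

static Tuplestorestate *
begin_raw_line_store(TupleDesc *tupdesc_out)
{
    /* One text column holding a full COPY line, untouched. */
    TupleDesc   tupdesc = CreateTemplateTupleDesc(1, false);

    TupleDescInitEntry(tupdesc, (AttrNumber) 1, "copy_line", TEXTOID, -1, 0);
    *tupdesc_out = tupdesc;

    /* Spills to disk once work_mem is exceeded; no random access needed. */
    return tuplestore_begin_heap(false, false, work_mem);
}

static void
store_raw_line(Tuplestorestate *store, TupleDesc tupdesc, const char *line)
{
    Datum   values[1];
    bool    nulls[1] = {false};

    /* Keep the COPY line as-is; no per-column input/output conversion. */
    values[0] = CStringGetTextDatum(line);
    tuplestore_putvalues(store, tupdesc, values, nulls);
}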
From: Michael P. <mic...@gm...> - 2012-06-28 03:35:19
> Something like this:
> 1. Launch copy to stdout
> 2. Launch truncate
> 3. Get the results of step 1 and use copy APIs to redirect rows to the correct nodes.
> Now the copy to data is put into a file, a tuple store if you want.

Let me bring more details here. I had a closer look at postgres functionalities and there are several possibilities to send a copy output; the one I would like to use instead of the file currently being used is DestTuplestore. By using that, it would be possible to store all the tuples being redistributed without having to use an intermediate file, and postgres would do all the storage work.
So, assuming that a tuplestore is used, here is how the redistribution steps would work by default:
1. launch COPY TO and output the result to a tuplestore
2. launch TRUNCATE
3. update the catalogs
4. use the tuplestore data and relaunch a COPY FROM with the execRemote.c APIs.

--
Michael Paquier
http://michael.otacoo.com
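[Step 1 of that sequence could plausibly be wired up as below. This is a hedged sketch using only the 9.2-era DestReceiver and tuplestore APIs; run_copy_to() is a hypothetical placeholder for driving COPY TO through the receiver, and the real patch may differ.]

#include "postgres.h"

#include "executor/tstoreReceiver.h"
#include "miscadmin.h"
#include "tcop/dest.h"
#include "utils/tuplestore.h"

/* Hypothetical placeholder: run COPY <rel> TO, routing its rows into 'dest'. */
extern void run_copy_to(Oid relid, DestReceiver *dest);

static Tuplestorestate *
capture_copy_to(Oid relid)
{
    Tuplestorestate *store;
    DestReceiver    *dest;

    /* Buffer in memory up to work_mem, then spill to disk. */
    store = tuplestore_begin_heap(false, false, work_mem);

    /* DestTuplestore receiver: every tuple it receives goes into 'store'. */
    dest = CreateDestReceiver(DestTuplestore);
    SetTuplestoreDestReceiverParams(dest, store, CurrentMemoryContext, false);

    /* Step 1: COPY TO writes its rows into the tuplestore via 'dest'. */
    run_copy_to(relid, dest);

    return store;   /* steps 2-4: TRUNCATE, catalog update, COPY FROM */
}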