From: Abbas B. <abb...@en...> - 2013-03-27 12:06:03
Feature ID 3608375

On Sun, Feb 24, 2013 at 3:51 PM, Abbas Butt <abb...@en...> wrote:
> Hi,
> PFA a patch to fix pg_dump to generate the TO NODE clause in the dump. This is
> required because otherwise all tables get created on all nodes after a
> dump-restore cycle.

On Sun, Feb 24, 2013 at 1:44 PM, Michael Paquier <mic...@gm...> wrote:
> Not sure this is good if you take a dump of an XC cluster to restore it to a
> vanilla Postgres cluster. Why not add a new option that controls the generation
> of this clause instead of forcing it?

On Sun, Feb 24, 2013 at 7:04 PM, Abbas Butt wrote:
> I think you can use the pg_dump that comes with vanilla PG to do that, can't
> you? But I am open to adding a control option if everybody thinks so.

On Sun, Feb 24, 2013 at 5:33 PM, Michael Paquier wrote:
> Sure you can; this is just to simplify the life of users as much as possible by
> not having multiple pg_dump binaries on their servers. Saying that, I think
> there is no option to choose whether DISTRIBUTE BY is printed in the dump or
> not...

On Mon, Feb 25, 2013 at 4:17 AM, Abbas Butt wrote:
> Yes, if we choose to have an option we will put both DISTRIBUTE BY and TO NODE
> under it.

On Mon, Feb 25, 2013 at 6:42 AM, Michael Paquier wrote:
> Why not an option for DISTRIBUTE BY, and another for TO NODE? This would bring
> more flexibility to the way dumps are generated.

On Mon, Feb 25, 2013 at 10:29 AM, Ashutosh Bapat <ash...@en...> wrote:
> I think we should always dump DISTRIBUTE BY. PG does not stop dumping (or
> provide an option to do so) newer syntax so that the dump will work on older
> versions. On similar lines, an XC dump cannot be used against PG without
> modification (removing DISTRIBUTE BY). There can be more serious problems, like
> exceeding table size limits, if an XC dump is restored in PG.
>
> As to the TO NODE clause, I agree that one can restore the dump on a cluster
> with a different configuration, so giving an option to dump the TO NODE clause
> will help.

On Fri, Mar 1, 2013 at 8:36 PM, Abbas Butt wrote:
> PFA an updated patch that provides a command line argument called
> --include-to-node-clause to let pg_dump know that the created dump is supposed
> to emit the TO NODE clause in the CREATE TABLE command. If the argument is
> provided while taking the dump from a datanode, it does not show the TO NODE
> clause in the dump, since the catalog table is empty in this case. The
> documentation of pg_dump is updated accordingly. The rest of the functionality
> stays the same as before.

On Mon, Mar 4, 2013 at 9:02 AM, Ashutosh Bapat wrote:
> Hi Abbas,
> Please take a look at http://www.postgresql.org/docs/9.2/static/app-pgdump.html,
> which gives all the command line options for pg_dump. Instead of
> include-to-node-clause, just include-nodes would suffice, I guess.

On Mon, Mar 4, 2013 at 11:41 AM, Abbas Butt wrote:
> I was thinking of using include-nodes to dump CREATE NODE / CREATE NODE GROUP,
> which is required as one of the missing links in adding a new node. What do you
> think about that?

On Mon, Mar 4, 2013 at 11:21 AM, Ashutosh Bapat wrote:
> Dumping the TO NODE clause only makes sense if we dump CREATE NODE / CREATE
> NODE GROUP. Dumping CREATE NODE / CREATE NODE GROUP may make sense
> independently, but might be useless without dumping the TO NODE clause.
>
> BTW, OTOH, dumping CREATE NODE / CREATE NODE GROUP wouldn't create the nodes on
> all the coordinators, but only on the coordinator where the dump will be
> restored. That's another thing you will need to consider, or are you going to
> fix that as well?

On Mon, Mar 4, 2013 at 1:51 PM, Abbas Butt wrote:
> What I had in mind was to have pg_dump, when run with include-nodes, emit
> CREATE NODE / CREATE NODE GROUP commands only and nothing else. Those commands
> will be used to create the existing nodes/groups on the new coordinator to be
> added. So it does make sense to use this option independently; in fact it is
> supposed to be used independently.
>
> All the coordinators already have the nodes information.
>
> As a first step I am only listing the manual steps required to add a new node;
> they might say run this command on all the existing coordinators by connecting
> to them one by one manually. We can decide to automate these steps later.

On Mon, Mar 4, 2013 at 2:09 PM, Ashutosh Bapat wrote:
> Ok, got it. But then include-node is really a misnomer. We should use
> --dump-nodes or something like that.

On Mon, Mar 4, 2013 at 2:41 PM, Abbas Butt wrote:
> In that case we can use include-nodes here.

On Tue, Mar 5, 2013 at 1:45 PM, Abbas Butt wrote:
> The attached patch changes the name of the option to --include-nodes.
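For illustration, the difference the option makes to a dumped table definition would look roughly like the sketch below; the table name and emitted text are assumptions based on the discussion, not verified output of the patch:

    ./pg_dump -p 5432 -s --include-to-node-clause test

    -- with the option, the emitted DDL keeps the node placement:
    CREATE TABLE t1 (a int, b int)
        DISTRIBUTE BY HASH (a)
        TO NODE (data_node_1, data_node_2);

    -- without it, DISTRIBUTE BY is still dumped (per the discussion above),
    -- but the TO NODE clause is omitted:
    CREATE TABLE t1 (a int, b int)
        DISTRIBUTE BY HASH (a);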
From: Abbas B. <abb...@en...> - 2013-03-27 12:05:03
Feature ID 3608376

On Sun, Mar 10, 2013 at 7:59 PM, Abbas Butt <abb...@en...> wrote:
> Hi,
> Attached please find a patch that adds support in pg_dump to dump nodes and
> node groups. This is required while adding a new node to the cluster.
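Taken together with the option discussed in the previous thread, adding the node metadata to a new coordinator could then be scripted along these lines; the option name follows that discussion, and the exact statements emitted are an assumption rather than confirmed output of the patch:

    # dump only the node definitions from an existing coordinator
    ./pg_dump -p 5432 --include-nodes postgres > nodes.sql

    # nodes.sql would be expected to contain only statements such as:
    #   CREATE NODE data_node_1 WITH (HOST = 'localhost', type = 'datanode', PORT = 15432, PRIMARY);
    #   CREATE NODE coord_1 WITH (HOST = 'localhost', type = 'coordinator', PORT = 5432);

    # replay them on the new coordinator and refresh the pooler
    ./psql postgres -p 5455 -f nodes.sql
    ./psql postgres -p 5455 -c "SELECT pgxc_pool_reload();"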
From: Abbas B. <abb...@en...> - 2013-03-27 12:02:46
Feature ID 3608379

On 28 February 2013 10:23, Abbas Butt <abb...@en...> wrote:
> Hi All,
> Attached please find a patch that provides a new command line argument for
> postgres called --restoremode.
>
> While adding a new node to the cluster we need to restore the schema of the
> existing database to the new node. If the new node is a datanode and we connect
> directly to it, it does not allow DDL, because it is in read-only mode; and if
> the new node is a coordinator, it will send DDLs to all the other coordinators,
> which we do not want it to do.
>
> To provide the ability to restore on the new node, a new command line argument
> is provided. It is to be given in place of --coordinator or --datanode. In
> restore mode both coordinator and datanode are internally treated as a
> datanode. For more details see the patch comments.
>
> After this patch one can add a new node to the cluster.
>
> Here are the steps to add a new coordinator:
>
>  1) Initdb the new coordinator
>         /usr/local/pgsql/bin/initdb -D /usr/local/pgsql/data_cord3 --nodename coord_3
>  2) Make the necessary changes in its postgresql.conf, in particular specify
>     the new coordinator name and pooler port
>  3) Connect to any of the existing coordinators and lock the cluster for backup
>         ./psql postgres -p 5432
>         SET xc_lock_for_backup=yes;
>         \q
>  4) Connect to any of the existing coordinators and take a backup of the database
>         ./pg_dump -p 5432 -C -s --file=/home/edb/Desktop/NodeAddition/dumps/101_all_objects_coord.sql test
>  5) Start the new coordinator, specifying --restoremode
>         ./postgres --restoremode -D ../data_cord3 -p 5455
>  6) Connect to the new coordinator directly
>         ./psql postgres -p 5455
>  7) Create all the datanodes and the rest of the coordinators on the new
>     coordinator and reload the configuration
>         CREATE NODE DATA_NODE_1 WITH (HOST = 'localhost', type = 'datanode', PORT = 15432, PRIMARY);
>         CREATE NODE DATA_NODE_2 WITH (HOST = 'localhost', type = 'datanode', PORT = 25432);
>         CREATE NODE COORD_1 WITH (HOST = 'localhost', type = 'coordinator', PORT = 5432);
>         CREATE NODE COORD_2 WITH (HOST = 'localhost', type = 'coordinator', PORT = 5433);
>         SELECT pgxc_pool_reload();
>  8) Quit psql
>  9) Create the new database on the new coordinator
>         ./createdb test -p 5455
> 10) Create the roles and tablespaces manually; the dump does not contain roles
>     or tablespaces
>         ./psql test -p 5455
>         CREATE ROLE admin WITH LOGIN CREATEDB CREATEROLE;
>         CREATE TABLESPACE my_space LOCATION '/usr/local/pgsql/my_space_location';
>         \q
> 11) Restore the backup that was taken from an existing coordinator by
>     connecting to the new coordinator directly
>         ./psql -d test -f /home/edb/Desktop/NodeAddition/dumps/101_all_objects_coord.sql -p 5455
> 12) Quit the new coordinator
> 13) Connect to any of the existing coordinators and unlock the cluster
>         ./psql postgres -p 5432
>         SET xc_lock_for_backup=no;
>         \q
> 14) Start the new coordinator, this time specifying --coordinator
>         ./postgres --coordinator -D ../data_cord3 -p 5455
> 15) Create the new coordinator on the rest of the coordinators and reload the
>     configuration
>         CREATE NODE COORD_3 WITH (HOST = 'localhost', type = 'coordinator', PORT = 5455);
>         SELECT pgxc_pool_reload();
> 16) The new coordinator is now ready
>         ./psql test -p 5455
>         create table test_new_coord(a int, b int);
>         \q
>         ./psql test -p 5432
>         select * from test_new_coord;
>
> Here are the steps to add a new datanode:
>
>  1) Initdb the new datanode
>         /usr/local/pgsql/bin/initdb -D /usr/local/pgsql/data3 --nodename data_node_3
>  2) Make the necessary changes in its postgresql.conf, in particular specify
>     the new datanode name
>  3) Connect to any of the existing coordinators and lock the cluster for backup
>         ./psql postgres -p 5432
>         SET xc_lock_for_backup=yes;
>         \q
>  4) Connect to any of the existing datanodes and take a backup of the database
>         ./pg_dump -p 15432 -C -s --file=/home/edb/Desktop/NodeAddition/dumps/102_all_objects_dn1.sql test
>  5) Start the new datanode, specifying --restoremode
>         ./postgres --restoremode -D ../data3 -p 35432
>  6) Create the new database on the new datanode
>         ./createdb test -p 35432
>  7) Create the roles and tablespaces manually; the dump does not contain roles
>     or tablespaces
>         ./psql test -p 35432
>         CREATE ROLE admin WITH LOGIN CREATEDB CREATEROLE;
>         CREATE TABLESPACE my_space LOCATION '/usr/local/pgsql/my_space_location';
>         \q
>  8) Restore the backup that was taken from an existing datanode by connecting
>     to the new datanode directly
>         ./psql -d test -f /home/edb/Desktop/NodeAddition/dumps/102_all_objects_dn1.sql -p 35432
>  9) Quit the new datanode
> 10) Connect to any of the existing coordinators and unlock the cluster
>         ./psql postgres -p 5432
>         SET xc_lock_for_backup=no;
>         \q
> 11) Start the new datanode as a datanode by specifying --datanode
>         ./postgres --datanode -D ../data3 -p 35432
> 12) Create the new datanode on all the coordinators and reload the configuration
>         CREATE NODE DATA_NODE_3 WITH (HOST = 'localhost', type = 'datanode', PORT = 35432);
>         SELECT pgxc_pool_reload();
> 13) Redistribute data by using ALTER TABLE REDISTRIBUTE
> 14) The new datanode is now ready
>         ./psql test
>         create table test_new_dn(a int, b int) distribute by replication;
>         insert into test_new_dn values(1,2);
>         EXECUTE DIRECT ON (data_node_1) 'SELECT * from test_new_dn';
>         EXECUTE DIRECT ON (data_node_2) 'SELECT * from test_new_dn';
>         EXECUTE DIRECT ON (data_node_3) 'SELECT * from test_new_dn';
>
> Please note that the steps assume that the patch sent earlier,
> 1_lock_cluster.patch, in the mail with subject [Patch to lock cluster], is
> applied.
>
> I have also attached test database scripts that would help in patch review.
>
> Comments are welcome.

On Thu, Feb 28, 2013 at 12:44 PM, Amit Khandekar <ami...@en...> wrote:
> What if we allow writes in standalone mode, so that we would initialize the new
> node using standalone mode instead of --restoremode?
>
> I haven't given a thought to the earlier patch you sent for the cluster lock
> implementation; maybe we can discuss this on that thread, but just a quick
> question: does the cluster-lock command wait for the ongoing DDL commands to
> finish? If not, we have problems. The subsequent pg_dump would not contain the
> objects created by these particular DDLs.
>
> Will pg_dumpall help? It dumps roles also.
>
> Unlocking the cluster has to be done *after* the node is added into the cluster.

On 1 March 2013 01:30, Abbas Butt wrote:
> Please take a look at the patch; I am using --restoremode in place of
> --coordinator and --datanode. I am not sure how standalone mode would fit in
> here.
>
> Suppose you have a two-coordinator cluster with one client connected to each.
> Suppose one client issues a lock cluster command and the other issues a DDL. Is
> this what you mean by an ongoing DDL? If true, then the answer to your question
> is Yes.
>
> Suppose you have a prepared transaction that has a DDL in it; again, if this
> can be considered an ongoing DDL, then again the answer to your question is Yes.
>
> Suppose you have a two-coordinator cluster with one client connected to each.
> One client starts a transaction and issues a DDL, the second client issues a
> lock cluster command, and the first commits the transaction. If this is an
> ongoing DDL, then the answer to your question is No. But it is a matter of
> deciding which camp we are going to put COMMIT in, the allow camp or the deny
> camp. I decided to put it in the allow camp, because I have not yet written any
> code to detect whether a transaction being committed has a DDL in it or not,
> and stopping all transactions from committing looks too restrictive to me.
> Do you have some other meaning of an ongoing DDL? I agree that we should have
> discussed this on the right thread; let's continue this discussion on that
> thread.
>
> Yes, pg_dumpall would dump the roles too, but I am giving an example of
> pg_dump, so this step has to be there.
>
> Very true. I stand corrected: unlocking has to happen after the node is added,
> which means CREATE NODE has to be allowed when xc_lock_for_backup is set.

On Fri, Mar 1, 2013 at 5:48 PM, Amit Khandekar wrote:
> I was trying to see if we can avoid adding a new mode and instead use
> standalone mode for all the purposes for which restoremode is used. Actually, I
> checked the documentation; it says this mode is used only for debugging or
> recovery purposes, so now I myself am a bit hesitant about using this mode for
> the purpose of restoring.
>
> Continued on the other thread.
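To make the locking question above concrete, the intended behaviour can be pictured as two sessions; only SET xc_lock_for_backup and the CREATE NODE exception come from the discussion above, while the idea that a blocked DDL is simply rejected is an assumption about the lock patch, not its verified behaviour:

    -- session 1, on an existing coordinator: lock the cluster before pg_dump
    ./psql postgres -p 5432
    SET xc_lock_for_backup=yes;

    -- session 2, on another coordinator, while the lock is held
    ./psql postgres -p 5433
    CREATE TABLE t_blocked(a int);    -- expected to be rejected until the cluster is unlocked
    CREATE NODE COORD_3 WITH (HOST = 'localhost', type = 'coordinator', PORT = 5455);
                                      -- per the discussion, this must still be allowed

    -- session 1, once the new node has been created everywhere
    SET xc_lock_for_backup=no;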
> >>> > >>> -- > >>> Abbas > >>> Architect > >>> EnterpriseDB Corporation > >>> The Enterprise PostgreSQL Company > >>> > >>> Phone: 92-334-5100153 > >>> > >>> Website: www.enterprisedb.com > >>> EnterpriseDB Blog: http://blogs.enterprisedb.com/ > >>> Follow us on Twitter: http://www.twitter.com/enterprisedb > >>> > >>> This e-mail message (and any attachment) is intended for the use of > >>> the individual or entity to whom it is addressed. This message > >>> contains information from EnterpriseDB Corporation that may be > >>> privileged, confidential, or exempt from disclosure under applicable > >>> law. If you are not the intended recipient or authorized to receive > >>> this for the intended recipient, any use, dissemination, distribution, > >>> retention, archiving, or copying of this communication is strictly > >>> prohibited. If you have received this e-mail in error, please notify > >>> the sender immediately by reply e-mail and delete this message. > >>> > >>> > ------------------------------------------------------------------------------ > >>> Everyone hates slow websites. So do we. > >>> Make your web apps faster with AppDynamics > >>> Download AppDynamics Lite for free today: > >>> http://p.sf.net/sfu/appdyn_d2d_feb > >>> _______________________________________________ > >>> Postgres-xc-developers mailing list > >>> Pos...@li... > >>> https://lists.sourceforge.net/lists/listinfo/postgres-xc-developers > >>> > >> > > > > > > > > -- > > -- > > Abbas > > Architect > > EnterpriseDB Corporation > > The Enterprise PostgreSQL Company > > > > Phone: 92-334-5100153 > > > > Website: www.enterprisedb.com > > EnterpriseDB Blog: http://blogs.enterprisedb.com/ > > Follow us on Twitter: http://www.twitter.com/enterprisedb > > > > This e-mail message (and any attachment) is intended for the use of > > the individual or entity to whom it is addressed. This message > > contains information from EnterpriseDB Corporation that may be > > privileged, confidential, or exempt from disclosure under applicable > > law. If you are not the intended recipient or authorized to receive > > this for the intended recipient, any use, dissemination, distribution, > > retention, archiving, or copying of this communication is strictly > > prohibited. If you have received this e-mail in error, please notify > > the sender immediately by reply e-mail and delete this message. > -- -- Abbas Architect EnterpriseDB Corporation The Enterprise PostgreSQL Company Phone: 92-334-5100153 Website: www.enterprisedb.com EnterpriseDB Blog: http://blogs.enterprisedb.com/ Follow us on Twitter: http://www.twitter.com/enterprisedb This e-mail message (and any attachment) is intended for the use of the individual or entity to whom it is addressed. This message contains information from EnterpriseDB Corporation that may be privileged, confidential, or exempt from disclosure under applicable law. If you are not the intended recipient or authorized to receive this for the intended recipient, any use, dissemination, distribution, retention, archiving, or copying of this communication is strictly prohibited. If you have received this e-mail in error, please notify the sender immediately by reply e-mail and delete this message. |
From: Abbas B. <abb...@en...> - 2013-03-27 11:55:19
Bug ID 3608374

On Sat, Feb 23, 2013 at 2:59 PM, Abbas Butt <abb...@en...> wrote:
> Hi,
> PFA a patch to fix a crash when COPY TO is used on a replicated table.
>
> This test case produces a crash:
>
>     create table tab_rep(a int, b int) distribute by replication;
>     insert into tab_rep values(1,2), (3,4), (5,6), (7,8);
>     COPY tab_rep (a, b) TO stdout;
>
> Here is a description of the problem and the fix. In case of a read from a
> replicated table, GetRelationNodes() returns all nodes and expects that the
> planner can choose one depending on the rest of the join tree. In case of COPY
> TO we should choose the first one in the node list. This fixes a system crash
> and makes pg_dump work fine.

On Mon, Feb 25, 2013 at 10:18 AM, Ashutosh Bapat <ash...@en...> wrote:
> Thanks a lot Abbas for this quick fix.
>
> I am sorry, it's caused by my refactoring of GetRelationNodes().
>
> If possible, can you please examine the other callers of GetRelationNodes()
> which would face the problem, especially the ones for DML and utilities? This
> is another instance where deciding the nodes to execute on at execution time
> will help.
>
> About the fix: can you please use GetPreferredReplicationNode() instead of
> list_truncate()? It will pick the preferred node instead of the first one. If
> you find more places where we need this fix, it might be better to create a
> wrapper function and use it at those places.

On Fri, Mar 8, 2013 at 12:25 PM, Abbas Butt wrote:
> Attached please find a revised patch that provides the following in addition to
> what it did earlier:
>
> 1. Uses GetPreferredReplicationNode() instead of list_truncate().
> 2. Adds test cases to xc_alter_table and xc_copy.
>
> I tested the following in reasonable detail to find whether any other caller of
> GetRelationNodes() needs fixing, and found that none of the other callers needs
> any more fixing. I tested a) copy, b) alter table redistribute, c) utilities,
> d) DMLs, etc.
>
> However, while testing ALTER TABLE, I found that replicated-to-hash
> redistribution is not working correctly. This test case fails, since only SIX
> rows are expected in the final result:
>
>     test=# create table t_r_n12(a int, b int) distribute by replication to node (DATA_NODE_1, DATA_NODE_2);
>     CREATE TABLE
>     test=# insert into t_r_n12 values(1,777),(3,4),(5,6),(20,30),(NULL,999),(NULL,999);
>     INSERT 0 6
>     test=# -- rep to hash
>     test=# ALTER TABLE t_r_n12 distribute by hash(a);
>     ALTER TABLE
>     test=# SELECT * FROM t_r_n12 order by 1;
>       a |   b
>     ----+-----
>       1 | 777
>       3 |   4
>       5 |   6
>      20 |  30
>         | 999
>         | 999
>         | 999
>         | 999
>     (8 rows)
>
>     test=# drop table t_r_n12;
>     DROP TABLE
>
> I have added a SourceForge bug tracker id to this case (Artifact 3607290,
> https://sourceforge.net/tracker/?func=detail&aid=3607290&group_id=311227&atid=1310232).
> The reason for this error is that the function distrib_delete_hash does not
> take into account that the distribution column can be null. I will provide a
> separate fix for that one.
>
> Regression shows no extra failure, except that test case xc_alter_table would
> fail until 3607290 is fixed.
>
> Regards
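A quick sanity check after such a fix is to compare an ordinary read of the replicated table with per-node reads; this is only an illustrative verification using the node names from the other threads, not part of the patch itself:

    -- every datanode holds all four rows of the replicated table,
    -- but COPY TO should now read them from a single (preferred) node
    COPY tab_rep (a, b) TO stdout;
    EXECUTE DIRECT ON (data_node_1) 'SELECT count(*) FROM tab_rep';   -- expect 4
    EXECUTE DIRECT ON (data_node_2) 'SELECT count(*) FROM tab_rep';   -- expect 4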
From: Ashutosh B. <ash...@en...> - 2013-03-27 06:51:39
Another problem we will encounter is: what if the memory is not enough to merge the runs from all the nodes? We are already seeing 20-node configurations, and that number will grow, I guess. In such situations we need to start with these as initial runs input to the "polyphase sorting" algorithm by Knuth.

On Mon, Mar 25, 2013 at 4:43 PM, Ashutosh Bapat <ash...@en...> wrote:
> Hi All,
> I am working on using remote sorting for merge joins. The idea is that while
> using a merge join at the coordinator, we get the data sorted from the
> datanodes; for replicated relations we can get all the rows sorted, and for
> distributed tables we have to get sorted runs which can be merged at the
> coordinator. For a merge join the sorted inner relation needs to be randomly
> accessible. For replicated relations this can be achieved by materialising the
> result. But for distributed relations we do not materialise the sorted result
> at the coordinator; we compute the sorted result by merging the sorted results
> from the individual nodes on the fly. For distributed relations, the
> connections to the datanodes themselves are used as logical tapes (which
> provide the sorted runs). The final result is computed on the fly by choosing
> the smallest or greatest row (as required) from the connections.
>
> For a Sort node the materialised result can reside in memory (if it fits there)
> or on one of the logical tapes used for the merge sort. So, in order to provide
> random access to the sorted result, we need to materialise the result either in
> memory or on a logical tape. In-memory materialisation is not easily possible,
> since we have already resorted to tape-based sort in the case of distributed
> relations, and to materialise the result on tape there is no logical tape
> available in the current algorithm. To make it work, there are the following
> possible ways:
>
> 1. When random access is required, materialise the sorted runs from the
>    individual nodes onto tapes (one tape for each node) and then merge them on
>    one extra tape, which can be used for materialisation.
> 2. Use a mix of connections and logical tapes in the same tape set. Merge the
>    sorted runs from the connections onto a logical tape in the same logical
>    tape set.
>
> While the second one looks attractive from a performance perspective (it saves
> writing to and reading from the tape), it would make the merge code ugly by
> using mixed tapes. The read calls for a connection and a logical tape are
> different, and we will need both on the logical tape where the final result is
> materialized. So I am thinking of going with 1; in fact, to have the same code
> handle remote sort, use 1 in all cases (whether or not materialization is
> required).
>
> Had the original authors of the remote sort code thought about this
> materialization? Anything they can share on this topic? Any comments?
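For context, the kind of query this machinery targets is one where the coordinator itself has to join distributed tables and may pick a merge join; the schema below is made up, and whether the planner actually chooses a merge join depends on costing:

    create table t1 (a int, b int) distribute by hash(a);
    create table t2 (a int, c int) distribute by hash(a);

    -- a join on non-distribution columns generally cannot be pushed down, so the
    -- coordinator joins the relations itself; with remote sorting, each datanode
    -- returns its rows already ordered by the join key and the coordinator only
    -- merges the per-node sorted streams (per option 1 above, spilling them to
    -- tapes first when random access is needed or memory runs short).
    select * from t1 join t2 on t1.b = t2.c;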