From: Michael P. <mic...@gm...> - 2012-07-08 23:36:40
Just to give some precision here: in XC, we use an internal 2PC process at transaction commit if the transaction involves a write operation (DML or DDL) on more than one node. So for each connection of your application at a Coordinator, you might finish with a 2PC transaction being run. Hence, the maximum value of max_prepared_transactions with which you can be sure this error will never come up is the sum of the max_connections of all the Coordinators of your cluster. However, you can be more or less sure that a 2PC will not be occurring on a Datanode at the same time for all the Coordinator backends, so max_prepared_transactions can usually be set safely at 30%~40% of the sum of the Coordinators' max_connections. Check with your application.

On Sat, Jul 7, 2012 at 2:27 PM, Nikhil Sontakke <ni...@st...> wrote:
> > Any explanation on the above issue is much appreciated. I will try the next
> > run with a higher value set for max_prepared_transactions. Any
> > recommendations for a good value on this front?
>
> How many clients do you want to run with this eventually? That will
> determine a decent value for max_prepared_transactions. Note that
> max_prepared_transactions takes a wee bit more shared memory per
> prepared transaction, but it's OK to set it high in proportion to the
> max_connections value.
>
> Regards,
> Nikhils
>
> > thanks,
> > Shankar
> >
> > ________________________________
> > From: Shankar Hariharan <har...@ya...>
> > To: Ashutosh Bapat <ash...@en...>
> > Cc: "pos...@li..." <pos...@li...>
> > Sent: Friday, July 6, 2012 8:22 AM
> > Subject: Re: [Postgres-xc-developers] Question on gtm-proxy
> >
> > Hi Ashutosh,
> > I was trying to size the load on a server and was wondering if a GTM could
> > be shared w/o much performance overhead between a small number of datanodes
> > and coordinators. I will post my findings here.
> > thanks,
> > Shankar
> >
> > ________________________________
> > From: Ashutosh Bapat <ash...@en...>
> > To: Shankar Hariharan <har...@ya...>
> > Cc: "pos...@li..." <pos...@li...>
> > Sent: Friday, July 6, 2012 12:25 AM
> > Subject: Re: [Postgres-xc-developers] Question on gtm-proxy
> >
> > Hi Shankar,
> > Running gtm-proxy has been shown to improve performance, because it lessens
> > the load on the GTM by serving requests locally. Why do you want the
> > coordinators to connect directly to the GTM? Are you seeing any performance
> > improvement from doing that?
> >
> > On Fri, Jul 6, 2012 at 10:08 AM, Shankar Hariharan <har...@ya...> wrote:
> >
> > Follow-up to the earlier email. In the setup described below, can I avoid
> > using a gtm-proxy? That is, can I simply point the coordinators to the one
> > gtm running on node 3?
> > My initial plan was to just run the gtm on node 3; then I thought I could
> > try a datanode without a local coordinator, which was why I put these two
> > together on node 3.
> > thanks,
> > Shankar
> >
> > ________________________________
> > From: Shankar Hariharan <har...@ya...>
> > To: "pos...@li..." <pos...@li...>
> > Sent: Thursday, July 5, 2012 11:35 PM
> > Subject: Question on multiple coordinators
> >
> > Hello,
> >
> > I am trying out XC 1.0 in the following configuration.
> > Node 1 - Coord1, Datanode1, gtm-proxy1
> > Node 2 - Coord2, Datanode2, gtm-proxy2
> > Node 3 - Datanode3, gtm
> >
> > I set up all the nodes but forgot to add Coord1 to Coord2 and vice versa.
> > In addition, I missed the pg_hba edit as well. So the first table T1 that
> > I created for distribution from Coord1 was not "visible" from Coord2, but
> > it was on all the data nodes.
> > I tried to get Coord2 back into business in various ways, but the first
> > table I created refused to show up on Coord2:
> > - edit pg_hba and add the node on both coord1 and coord2, then run
> >   select pgxc_pool_reload();
> > - restart coord1 and coord2
> > - drop node c2 from c1 and c1 from c2 and add them back, followed by
> >   select pgxc_pool_reload();
> >
> > So I tried to create the same table T1 from Coord2 to observe the behavior,
> > and it clearly did not like it, as all the nodes it "wrote" to reported
> > that the table already existed, which was good. At this point I could see
> > that Coord2 and Coord1 are not talking properly, so I created a new table
> > from coord1 with replication. This table was now visible from both.
> >
> > The question is: should I expect to see the first table, let me call it
> > T1, after a while from Coord2 also?
> >
> > thanks,
> > Shankar
> >
> > --
> > Best Wishes,
> > Ashutosh Bapat
> > EnterpriseDB Corporation
> > The Enterprise Postgres Company
>
> --
> StormDB - http://www.stormdb.com
> The Database Cloud

--
Michael Paquier
http://michael.otacoo.com
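As a rough worked example of the 30%~40% rule of thumb above (the two-Coordinator cluster and its settings are hypothetical, not taken from the thread):

    -- Suppose 2 Coordinators, each with max_connections = 100.
    -- Worst case: every Coordinator backend runs a 2PC against the same
    -- Datanode at once, so the guaranteed-safe ceiling is 100 + 100 = 200.
    -- Per the 30%~40% rule, a practical starting point on each Datanode is:
    --   max_prepared_transactions = 60 to 80   (in postgresql.conf; needs a restart)
    -- Check the value currently in effect:
    SHOW max_prepared_transactions;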
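For the node-registration steps Shankar describes ("drop node ... and add them back"), a minimal sketch of the commands involved; the node name, host, and port here are hypothetical:

    -- On Coord1, register the missing Coord2 (and do the mirror image on Coord2):
    CREATE NODE coord2 WITH (TYPE = 'coordinator', HOST = 'node2', PORT = 5432);
    -- Make the connection pooler pick up the changed node catalog:
    SELECT pgxc_pool_reload();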
From: Mason S. <ma...@st...> - 2012-07-08 16:09:33
On Sat, Jul 7, 2012 at 7:52 PM, Shankar Hariharan <har...@ya...> wrote:
> Thanks Nikhil. I have set both to 100 for my next run. I have another
> question: if I create a table w/o specifying the distribution strategy, I
> still see that the data is distributed across the nodes. What is the
> default distribution strategy?

It tries to use the first column of a primary key or unique index, if specified in the CREATE TABLE statement, or the first column of a foreign key. If none of those is available, it uses the first column with a reasonable data type (i.e., not BYTEA, not BOOLEAN).

> I did run some tests across 3 nodes and noticed that the data is not
> distributed equally at all times. For instance, when I first inserted 10
> records (all integer values), I noticed that datanode 1 got just one record
> while the other two nodes were almost equal. However, after 2 more inserts
> of 10 records each, all 3 nodes were at almost the same load level (w.r.t.
> number of records).

Yes, I think there were just too few rows in your sample data set. As it gets bigger, it will even out.

--
Mason Sharp
StormDB - http://www.stormdb.com
The Database Cloud
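If you would rather not rely on the default column choice Mason describes, XC lets you state the strategy explicitly at table creation; the table and column names below are made up for illustration:

    -- Hash-distribute rows on an explicit column:
    CREATE TABLE t1 (id integer PRIMARY KEY, payload text)
        DISTRIBUTE BY HASH (id);

    -- Or keep a full copy of a small lookup table on every Datanode:
    CREATE TABLE codes (code integer PRIMARY KEY, label text)
        DISTRIBUTE BY REPLICATION;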