From: Mason S. <mas...@en...> - 2010-11-30 22:46:24

On 11/29/10 9:10 PM, xiong wang wrote:
> Dears,
>
>> On 11/29/10 4:32 AM, xiong wang wrote:
>>> Dears,
>>> step as follows:
>>> ./psql -Upostgres -p 5432 -c 'create table tt(a int);'
>>> CREATE TABLE
>>> ./psql -Upostgres -c 'begin;select * from tt;';
>>> WARNING: Consuming data node messages after error.
>
>> We send a begin down to the data nodes implicitly. It looks like if the
>> first statement is a BEGIN we should suppress that (but send a valid
>> response).
>
> I should describe the bug a little more. It seems that Postgres-XC can't
> process a statement composed of multiple statements separated by
> semicolons, for example:
>
> 1.
> ./psql -Upostgres -d template1 -p 5432 -c 'checkpoint;select * from t'
> WARNING: Consuming data node messages after error.
> (note: the logs of all nodes, coordinator and datanodes, are similar for this bug)
>
> 2.
> template1=# \d t
>       Table "public.t"
>  Column |  Type   | Modifiers
> --------+---------+-----------
>  a      | integer |
> template1=# \d t1
>       Table "public.t1"
>  Column |  Type   | Modifiers
> --------+---------+-----------
>  a      | integer |
>  b      | integer |
> template1=# select * from t;
>  a
> ---
>  1
> (1 row)
> template1=# select * from t1;
>  a | b
> ---+---
>  1 | 1
> (1 row)
> ./psql -Upostgres -d template1 -c 'select * from t;select * from t1;'
> unexpected field count in "D" message
> ERROR: Tuple does not match the descriptor
>
> 3.
> template1=# create table t2(a int);
> CREATE TABLE
> template1=# insert into t2 values(2);
> INSERT 0 1
> template1=# \q
> ./psql -Upostgres -d template1 -p 5432 -c 'select * from t2;select * from t'
>  a
> ---
>  2
>  1
> (2 rows)

Thanks, I updated the case. Feel free also to just add comments directly there in the future.

Thanks,

Mason

--
Mason Sharp
EnterpriseDB Corporation
The Enterprise Postgres Company

This e-mail message (and any attachment) is intended for the use of the individual or entity to whom it is addressed. This message contains information from EnterpriseDB Corporation that may be privileged, confidential, or exempt from disclosure under applicable law. If you are not the intended recipient or authorized to receive this for the intended recipient, any use, dissemination, distribution, retention, archiving, or copying of this communication is strictly prohibited. If you have received this e-mail in error, please notify the sender immediately by reply e-mail and delete this message.
From: Mason S. <mas...@en...> - 2010-11-30 22:34:44

>> For statement_timestamp() do we perform a separate operation through
>> GTM, or do we do a delta similar to clock_timestamp()?
>
> statement_timestamp and transaction_timestamp base their calculations
> on a delta calculated with GTM.
> clock_timestamp does not do anything with GTM, it just uses the local
> node timestamp.

transaction_timestamp should just use the value piggybacked from GTM with the XID, and not a delta, though. The same value should be used on all nodes involved in the transaction, right?

Thanks,

Mason

> --
> Michael Paquier
> http://michaelpq.users.sourceforge.net

--
Mason Sharp
EnterpriseDB Corporation
The Enterprise Postgres Company
From: Michael P. <mic...@gm...> - 2010-11-30 08:27:19

Hi all,

Please see attached a patch that corrects 2PC (two-phase commit) in the case of an implicit 2PC.

In the current HEAD, when a transaction involving several nodes in a write operation commits, it commits in the following order:
1) Prepare on datanodes
2) Commit on datanodes
3) Commit on Coordinator
4) Commit on GTM

The problem is that the commit at the Coordinator has to be done first to protect data consistency. With the attached patch, a commit is done in the following order:
1) Prepare on Coordinator (flush a 2PC file if DDL is involved)
2) Prepare on datanodes involved in a write operation
3) Commit the prepared transaction on Coordinator
4) Commit the prepared transaction on Datanodes
5) Commit on GTM

In case of a problem at the Coordinator, the transaction can be rolled back on the other nodes, protecting data visibility and consistency.

There is also a small improvement: in the current HEAD it is necessary to go to GTM twice to globally commit the transaction ID (GXID) used for Prepare and the GXID used for Commit. With this patch, GTM is contacted only once and commits both GXIDs at the same time.

Regards,

--
Michael Paquier
http://michaelpq.users.sourceforge.net
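[Editor's note] The corrected ordering in the patch above can be sketched as a toy model. This is illustrative only, not Postgres-XC source; `Node`, `implicit_2pc` and all names here are hypothetical. It just records the order of operations so the corrected sequence is easy to see: prepare the Coordinator first, then the datanodes, commit in the same order, and report to GTM once at the very end.

```python
# Toy model of the corrected implicit-2PC commit ordering (illustrative;
# not Postgres-XC code). Each operation is appended to a shared log.

class Node:
    def __init__(self, name, log):
        self.name = name
        self.log = log

    def prepare(self):
        self.log.append(("prepare", self.name))

    def commit_prepared(self):
        self.log.append(("commit", self.name))

def implicit_2pc(coordinator, datanodes, log):
    coordinator.prepare()          # 1) flushes a 2PC file if DDL is involved
    for dn in datanodes:           # 2) only datanodes touched by a write
        dn.prepare()
    coordinator.commit_prepared()  # 3) coordinator commits first
    for dn in datanodes:           # 4) then the datanodes
        dn.commit_prepared()
    log.append(("commit", "gtm"))  # 5) single GTM call commits both GXIDs

log = []
coordinator = Node("coordinator", log)
datanodes = [Node("datanode1", log), Node("datanode2", log)]
implicit_2pc(coordinator, datanodes, log)
print(log)
```

If anything fails before step 3, every node still holds only a prepared transaction, so the whole thing can be rolled back consistently.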
From: Andrei.Martsinchyk <and...@en...> - 2010-11-30 06:41:17

Attached is a patch to support prepared statements with parameters. Only one-step queries are supported for now. It also enables SQL functions with parameters.

Feel free to review/comment.

--
Andrei Martsinchyk
EnterpriseDB Corporation
The Enterprise Postgres Company
Website: www.enterprisedb.com
EnterpriseDB Blog: http://blogs.enterprisedb.com/
Follow us on Twitter: http://www.twitter.com/enterprisedb
From: xiong w. <wan...@gm...> - 2010-11-30 02:11:03

Dears,

>> On 11/29/10 4:32 AM, xiong wang wrote:
>> Dears,
>> step as follows:
>> ./psql -Upostgres -p 5432 -c 'create table tt(a int);'
>> CREATE TABLE
>> ./psql -Upostgres -c 'begin;select * from tt;';
>> WARNING: Consuming data node messages after error.

> We send a begin down to the data nodes implicitly. It looks like if the
> first statement is a BEGIN we should suppress that (but send a valid
> response).

I should describe the bug a little more. It seems that Postgres-XC can't process a statement composed of multiple statements separated by semicolons, for example:

1.
./psql -Upostgres -d template1 -p 5432 -c 'checkpoint;select * from t'
WARNING: Consuming data node messages after error.
(note: the logs of all nodes, coordinator and datanodes, are similar for this bug)

2.
template1=# \d t
      Table "public.t"
 Column |  Type   | Modifiers
--------+---------+-----------
 a      | integer |
template1=# \d t1
      Table "public.t1"
 Column |  Type   | Modifiers
--------+---------+-----------
 a      | integer |
 b      | integer |
template1=# select * from t;
 a
---
 1
(1 row)
template1=# select * from t1;
 a | b
---+---
 1 | 1
(1 row)
./psql -Upostgres -d template1 -c 'select * from t;select * from t1;'
unexpected field count in "D" message
ERROR: Tuple does not match the descriptor

3.
template1=# create table t2(a int);
CREATE TABLE
template1=# insert into t2 values(2);
INSERT 0 1
template1=# \q
./psql -Upostgres -d template1 -p 5432 -c 'select * from t2;select * from t'
 a
---
 2
 1
(2 rows)

Regards,

Benny

> I will add it to the SF bug tracker.
> Thanks,
> Mason
>
> _______________________________________________
> Postgres-xc-developers mailing list
> Pos...@li...
> https://lists.sourceforge.net/lists/listinfo/postgres-xc-developers
From: Michael P. <mic...@gm...> - 2010-11-30 01:18:40

>> Can you remind me regarding statement_timestamp()? From the commit log:
>>
>> This commit supports global timestamp values for now(),
>> statement_timestamp, transaction_timestamp, current_date, current_time,
>> current_timestamp, localtime, local_timestamp and now().
>>
>> clock_timestamp and timeofday make their calculation based on the local
>> server clock so they get their results from the local node where it is
>> run. Their use could lead to inconsistencies if used in a transaction
>> involving several Datanodes.
>>
>> For statement_timestamp() do we perform a separate operation through
>> GTM, or do we do a delta similar to clock_timestamp()?

statement_timestamp and transaction_timestamp base their calculations on a delta calculated with GTM. clock_timestamp does not do anything with GTM, it just uses the local node timestamp.

--
Michael Paquier
http://michaelpq.users.sourceforge.net
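[Editor's note] The delta scheme Michael describes can be sketched with a toy model. This is illustrative only, not Postgres-XC source; `NodeClock` and all names here are hypothetical. The idea: a node records the offset between the GTM clock and its own clock once, and GTM-based timestamps apply that offset to the local clock, while clock_timestamp() ignores it, so two nodes with skewed local clocks still agree on GTM-aligned values.

```python
# Toy model of GTM-delta timestamps (illustrative; not Postgres-XC code).
# Times are plain floats (seconds) for clarity.

class NodeClock:
    def __init__(self, gtm_now, local_now):
        # Delta captured once, when a timestamp arrives from GTM.
        self.delta = gtm_now - local_now

    def statement_timestamp(self, local_now):
        # GTM-aligned: consistent across all nodes in the transaction.
        return local_now + self.delta

    def clock_timestamp(self, local_now):
        # Purely local: may differ between nodes.
        return local_now

# Two nodes whose local clocks disagree with GTM:
node_a = NodeClock(gtm_now=1000.0, local_now=998.0)   # local clock 2s behind
node_b = NodeClock(gtm_now=1000.0, local_now=1003.0)  # local clock 3s ahead

# Ten seconds later on each node's own clock:
print(node_a.statement_timestamp(1008.0))  # 1010.0
print(node_b.statement_timestamp(1013.0))  # 1010.0 — same GTM-aligned value
print(node_a.clock_timestamp(1008.0))      # 1008.0 — local, inconsistent
```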
From: Mason S. <mas...@en...> - 2010-11-29 19:30:56

Michael,

Can you remind me regarding statement_timestamp()? From the commit log:

    This commit supports global timestamp values for now(), statement_timestamp,
    transaction_timestamp, current_date, current_time, current_timestamp,
    localtime, local_timestamp and now().

    clock_timestamp and timeofday make their calculation based on the local
    server clock so they get their results from the local node where it is run.
    Their use could lead to inconsistencies if used in a transaction involving
    several Datanodes.

For statement_timestamp() do we perform a separate operation through GTM, or do we do a delta similar to clock_timestamp()?

Thanks,

Mason

--
Mason Sharp
EnterpriseDB Corporation
The Enterprise Postgres Company
From: Mason S. <mas...@en...> - 2010-11-29 15:03:05

On 11/29/10 4:32 AM, xiong wang wrote:
> Dears,
> step as follows:
> ./psql -Upostgres -p 5432 -c 'create table tt(a int);'
> CREATE TABLE
> ./psql -Upostgres -c 'begin;select * from tt;';
> WARNING: Consuming data node messages after error.
> ........(holding)
>
> log on the postgres-xc coordinator:
> LOG: statement: begin;select * from tt;
> ERROR: Unexpected response from the data nodes for 'T' message,
> current request type 1
> STATEMENT: begin;select * from tt;
> WARNING: Consuming data node messages after error.
>
> log on the postgres-xc datanode:
> LOG: statement: BEGIN
> LOG: statement: begin;select * from tt;
> WARNING: there is already a transaction in progress

We send a begin down to the data nodes implicitly. It looks like if the first statement is a BEGIN we should suppress that (but send a valid response).

I will add it to the SF bug tracker.

Thanks,

Mason

> I verified the statement on PostgreSQL. It's ok.
> Regards,
> Benny

--
Mason Sharp
EnterpriseDB Corporation
The Enterprise Postgres Company
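[Editor's note] The fix Mason suggests can be sketched as follows. This is a hypothetical illustration, not the actual coordinator code; `statements_to_forward` and the naive semicolon split are assumptions for clarity only (the real server would use its parser). The point: since an implicit BEGIN is already sent to the data nodes, a leading BEGIN in the client's multi-statement string should be suppressed before forwarding, while still returning a valid response to the client.

```python
# Hypothetical sketch of suppressing a duplicate leading BEGIN
# (illustrative only; not Postgres-XC source).

def statements_to_forward(query):
    # Naive split on semicolons; a real server parses properly.
    stmts = [s.strip() for s in query.split(";") if s.strip()]
    if stmts and stmts[0].upper() == "BEGIN":
        # The implicit BEGIN was already sent down; drop the duplicate,
        # but the client would still receive a CommandComplete for it.
        return stmts[1:]
    return stmts

print(statements_to_forward("begin;select * from tt;"))
# ['select * from tt']
```

Without the suppression, the datanode sees BEGIN twice, which matches the "there is already a transaction in progress" warning in the datanode log above.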
From: xiong w. <wan...@gm...> - 2010-11-29 09:32:12

Dears,

step as follows:
./psql -Upostgres -p 5432 -c 'create table tt(a int);'
CREATE TABLE
./psql -Upostgres -c 'begin;select * from tt;';
WARNING: Consuming data node messages after error.
........(holding)

log on the postgres-xc coordinator:
LOG: statement: begin;select * from tt;
ERROR: Unexpected response from the data nodes for 'T' message, current request type 1
STATEMENT: begin;select * from tt;
WARNING: Consuming data node messages after error.

log on the postgres-xc datanode:
LOG: statement: BEGIN
LOG: statement: begin;select * from tt;
WARNING: there is already a transaction in progress

I verified the statement on PostgreSQL. It's ok.

Regards,

Benny
From: Mason S. <mas...@en...> - 2010-11-28 03:17:22

I am sending an updated patch. The previous version mistakenly treated some variations as a single-step statement when it should not have.

Mason

On 11/23/10 3:15 PM, Mason Sharp wrote:
> I am sending this out to allow for feedback.
>
> This patch adds support for INSERT SELECT (including "multi-step" queries).
>
> This is done by utilizing COPY. We query the data nodes with the tuples
> being sent to the Coordinator. These are then converted into COPY lines
> and sent back down to the appropriate nodes. (Long term, when we add
> data node to data node communication, these can be sent directly.)
>
> We also optimize for the case when the SELECT is single-step, the
> destination table is partitioned, the input column value comes from
> the partition column of the source, and there are no limit or offset
> clauses. In this case, we just pass down the INSERT SELECT to the data
> nodes.
>
> There is one kluge here in that I added a static variable for the
> copy state. I did this to avoid having to move the CopyState definition
> to a header file (for usage in execMain.c), in order to make merging
> with vanilla PostgreSQL easier.

--
Mason Sharp
EnterpriseDB Corporation
The Enterprise Postgres Company
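[Editor's note] The push-down condition in the quoted description can be sketched as a small predicate. This is an illustration, not the patch code; `can_push_down` and its parameters are hypothetical names. It captures the four conditions under which the whole INSERT SELECT can go straight to the data nodes instead of taking the COPY-based path: the SELECT is single-step, the destination is partitioned, the value inserted into the destination's partition column comes from the source's partition column, and there is no LIMIT/OFFSET.

```python
# Hypothetical sketch of the INSERT SELECT push-down test
# (illustrative only; not the actual patch code).

def can_push_down(select_is_single_step, dest_is_partitioned,
                  source_partition_col, dest_partition_input_col,
                  has_limit_or_offset):
    # All four conditions must hold to skip the COPY path.
    return (select_is_single_step
            and dest_is_partitioned
            and source_partition_col == dest_partition_input_col
            and not has_limit_or_offset)

# e.g. INSERT INTO t2 SELECT a, b FROM t1, both tables partitioned on "a":
print(can_push_down(True, True, "a", "a", False))  # True  -> push down
# With a LIMIT clause, or a mismatched partition column:
print(can_push_down(True, True, "a", "b", True))   # False -> COPY path
```

A mismatched partition column means rows could land on the wrong node if pushed down directly, which is why the COPY path (routing each tuple through the Coordinator) is the safe fallback.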
From: Mason S. <mas...@en...> - 2010-11-26 14:59:13

We should output an error message as well if the drop failed. In addition, since it cannot be wrapped in 2PC, we may have to consider a utility to clean up partially dropped databases; but execute direct could do that, we just need to document it, and fix execute direct.

Sent from my iPhone

On Nov 26, 2010, at 5:53 AM, Michael Paquier <mic...@gm...> wrote:
> This seems to be linked with the pooler connections.
> Drop database does not work correctly if you don't empty the pool before dropping it.
>
> You should do something like:
> \c postgres
> clean connection to all for database dbt1;
> drop database dbt1;
> create database dbt1;
>
> Have a look at commit 54a648a3ffe7f8fc0273295ee17e91cebcb34948 in the git repository for more details.
>
> I admit that we need to automatically call pool cleaning when dropping a database.
> This is not done yet. We should commit a fix soon to support that.
>
> Regards,
>
> --
> Michael Paquier
> http://michaelpq.users.sourceforge.net
From: Michael P. <mic...@gm...> - 2010-11-26 10:53:57

This seems to be linked with the pooler connections. Drop database does not work correctly if you don't empty the pool before dropping it.

You should do something like:

\c postgres
clean connection to all for database dbt1;
drop database dbt1;
create database dbt1;

Have a look at commit 54a648a3ffe7f8fc0273295ee17e91cebcb34948 in the git repository for more details.

I admit that we need to automatically call pool cleaning when dropping a database. This is not done yet. We should commit a fix soon to support that.

Regards,

--
Michael Paquier
http://michaelpq.users.sourceforge.net
From: xiong w. <wan...@gm...> - 2010-11-26 07:05:23

Dears,

If I drop a database, it seems that the content of the database isn't really deleted. The steps are as follows:

1. postgres=# drop database dbt1;
   DROP DATABASE
2. postgres=# create database dbt1;
   CREATE DATABASE
3. postgres=# \c dbt1
   psql (8.4.3)
   You are now connected to database "dbt1".
4. dbt1=# create table t(a int);
   CREATE TABLE
5. dbt1=# insert into t values(1);
   INSERT 0 1
6. dbt1=# \c postgres
   psql (8.4.3)
   You are now connected to database "postgres".
7. postgres=# drop database dbt1;
   DROP DATABASE
8. postgres=# create database dbt1;
   CREATE DATABASE
9. postgres=# \c dbt1
   psql (8.4.3)
   You are now connected to database "dbt1".
10. dbt1=# \d
         List of relations
     Schema | Name | Type  |  Owner
    --------+------+-------+----------
     public | t    | table | postgres
    (1 row)
11. dbt1=# select * from t;
     a
    ---
     1
    (1 row)

Regards,

Benny
From: Michael P. <mic...@gm...> - 2010-11-25 08:15:16

>> I can also see in your results that method 1 is using in total 2
>> connections between the loaders and coordinators.
>> Method 2 is using 4 connections.
>> I think you did so but... Did you include for method 2 the results of
>> loader1-coordinator1 + loader1-coordinator2 for loader1 results?
>> Same for loader2?
>
> As I know, one loader can get one DBT1 result. As you told me, a
> coordinator should get one DBT1 result because you said I should use
> loader1-coordinator1 + loader1-coordinator2 as the result of method 2.
> I don't know how I can get DBT1 results per coordinator.

If you use one DBT-1 folder for two instances to coordinators from a loader, both instances try to write at the same time to the same mix.log, which could result in a loss of data.

Why not separate the instances by launching each loaderX-coordinatorX instance in a different DBT-1 folder? In the case of method 2, create 2 DBT-1 folders on each loader server, for a total of 4 folders. You will have 4 mix.log files for the output of:
- loader1/coordinator1
- loader1/coordinator2
- loader2/coordinator1
- loader2/coordinator2

If you do that, you just have to combine the mix.log files of loader2/coordinator1 and loader2/coordinator2 to get the output of loader 2, for instance. It may also be interesting to get the output from Coordinators 1 and 2, by combining for instance the log files of loader1/coordinatorX and loader2/coordinatorX.

--
Michael Paquier
http://michaelpq.users.sourceforge.net
From: xiong w. <wan...@gm...> - 2010-11-25 07:54:17

Hi, Michael

Thank you for your advice.

2010/11/25 Michael Paquier <mic...@gm...>
> I have a comment about some of the parameters you are using with DBT-1.
>
> #eu
> 1500
> #eu/min
> 1000
>
> The current algorithm of DBT-1 only recognizes multiples when calculating
> the ramp-up value. For example, if you set eu at 1500, you have to set
> eu/min at 500, 750 or 1500 to have a ramp-up time of 3, 2 or 1 minutes.
> If you don't do that, ramp-up is set at 0, which might give you wrong
> results.
>
> Then, you still seem to have a lot of idle CPU.
> You should increase eu by 15%~20% to saturate your CPU.
>
> I would recommend setting eu at 2000~2200 for an eu/min of 500~550.

I will test it again later.

> I can also see in your results that method 1 is using in total 2
> connections between the loaders and coordinators.
> Method 2 is using 4 connections.
> I think you did so but... Did you include for method 2 the results of
> loader1-coordinator1 + loader1-coordinator2 for loader1 results?
> Same for loader2?

As I know, one loader can get one DBT1 result. As you told me, a coordinator should get one DBT1 result because you said I should use loader1-coordinator1 + loader1-coordinator2 as the result of method 2. I don't know how I can get DBT1 results per coordinator.

Thanks again.

Regards,

Benny

> --
> Michael Paquier
> http://michaelpq.users.sourceforge.net
From: Koichi S. <ko...@in...> - 2010-11-25 07:07:32

Hi,

I compared your result with ours, and I found your environment consumes more system CPU (in our case, with a 90% workload, system CPU is around 18 to 19%) and your idle % (14% and 17%) is a bit higher than ours. In our case, user CPU is around 70%, system around 19% and idle around 10% (we put on a workload so that 90% of the server CPU resource is used). We can reduce the idle to a few % by charging much more workload.

The idle time shows you have not put on a transaction workload that fully uses the coordinator/datanode CPUs. Please take a look at the loaders' situation (number of transactions per second, etc). I suppose there could be another thing to do to distribute DBT-1 transactions equally to all the coordinators.

Kind Regards;
-----------
Koichi Suzuki

(Nov 25, 2010, 14:25), xiong wang wrote:
> Dears,
> I really appreciate your response.
>
> Here is the basic information about my test.
> My configuration for DBT1 is as follows:
> [appServer]
> #dbconnection-connection from 1 dbdriver to 1 backend, with 4 loaders,
> each of the 5 coords receives 40 connections
> 10
> #transaction_queue_size
> 1500
> #transaction_array_size
> 1500
>
> [dbdriver]
> #items
> 1000
> #customers
> 28800
> #eu
> 1500
> #eu/min
> 1000
> #mean think_time
> 0.1
> #run_duration in seconds
> 1200
> #access mode access with access_direct or access_appServer
> access_appServer
> #access clean of order table with access_clean if cleanup, by default
> disactivated if let empty
> access_clean
>
> The results by sar are as follows:
>
> Method 1:
> coordinator/datanode 1:
> CPU      %user  %nice  %system  %iowait  %steal  %idle
> Average: 58.90  0.00   23.61    0.81     0.00    16.69
> coordinator/datanode 2:
> CPU      %user  %nice  %system  %iowait  %steal  %idle
> Average: 64.19  0.00   28.67    0.55     0.00    6.58
>
> Method 2:
> coordinator/datanode 1:
> CPU      %user  %nice  %system  %iowait  %steal  %idle
> Average: 59.35  0.00   25.39    0.47     0.00    14.79
> coordinator/datanode 2:
> CPU      %user  %nice  %system  %iowait  %steal  %idle
> Average: 57.71  0.00   24.56    0.55     0.00    17.18
>
> The average results by DBT1 are as follows:
>
> Method 1:
> loader 1: 1356.6 bogotransactions per second
> loader 2: 1843.7 bogotransactions per second
>
> Method 2:
> loader 1: 757.4 bogotransactions per second
> loader 2: 779.3 bogotransactions per second
>
> It's obvious that there is a big difference between Method 1 and Method 2.
> I am curious about why.
>
> Thanks again.
>
> Regards,
> Benny
>
> 2010/11/24 Koichi Suzuki <ko...@in...>
>> Hi, Xiong;
>>
>> Could you tell me the CPU and I/O usage you can measure with sar? I'm
>> afraid the load balance is not good in Method 2. How many backends did
>> you use in each coordinator? Did you have any warning that connections
>> overflowed in the data nodes?
>>
>> Also, how long a warm-up did you have?
>>
>> I'll let you know our configuration (sorry, please give me a bit of time).
>>
>> Regards;
>> ---
>> Koichi Suzuki
>>
>> (Nov 24, 2010, 14:40), xiong wang wrote:
>>> Hi Mason,
>>> I tested it with 5 PCs. The environment is as follows:
>>> 2 PCs, with one datanode and one coordinator together on each of them,
>>> GTM is on another PC,
>>> 2 loaders are on the other 2 PCs.
>>> Network: 1G.
>>> I tested Postgres-XC in two methods as follows:
>>> Method 1.
>>> loader -------- coordinator & datanode
>>>                  \
>>>                   GTM
>>>                  /
>>> loader -------- coordinator & datanode
>>> Method 2.
>>> loader -------- coordinator & datanode
>>>        \       /               \
>>>         \     /                 \
>>>          \   /                   GTM
>>>           \ /                   /
>>>           / \                  /
>>>          /   \                /
>>> loader -------- coordinator & datanode
>>> The DBT1 test results in these two methods are very different. Method 1
>>> is much better than Method 2. I don't know why.
>>> If I test Postgres-XC in Method 1, the DBT1 performance is close to what
>>> the document declares. If I test it in Method 2, the result is much
>>> worse than what the document writes. Could you tell me why the two
>>> methods have so much effect on the DBT1 performance?
>>> Thanks,
>>> Regards,
>>> Benny
>>>
>>>> How much worse?
>>>> How many physical servers are in each configuration? How is each server
>>>> configured in each, with how many data nodes? What kind of network?
>>>> Gigabit?
>>>> Or was everything on one system? With virtual machines or without and
>>>> just using different ports?
>>>> Are there errors in the log file (connection limits hit)?
>>>> Regards,
>>>> Mason
From: Michael P. <mic...@gm...> - 2010-11-25 06:53:34

I have a comment about some of the parameters you are using with DBT-1.

#eu
1500
#eu/min
1000

The current algorithm of DBT-1 only recognizes multiples when calculating the ramp-up value. For example, if you set eu at 1500, you have to set eu/min at 500, 750 or 1500 to have a ramp-up time of 3, 2 or 1 minutes. If you don't do that, ramp-up is set at 0, which might give you wrong results.

Then, you still seem to have a lot of idle CPU. You should increase eu by 15%~20% to saturate your CPU. I would recommend setting eu at 2000~2200 for an eu/min of 500~550.

I can also see in your results that method 1 is using in total 2 connections between the loaders and coordinators. Method 2 is using 4 connections. I think you did so but... Did you include for method 2 the results of loader1-coordinator1 + loader1-coordinator2 for loader1 results? Same for loader2?

--
Michael Paquier
http://michaelpq.users.sourceforge.net
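[Editor's note] The ramp-up rule Michael describes can be sketched as a small calculation. This is a toy version, not the DBT-1 source; `ramp_up_minutes` is a hypothetical name. The rule: the ramp-up time is only non-zero when eu/min divides eu exactly, in which case it is eu / (eu/min) minutes; otherwise ramp-up silently drops to zero.

```python
# Toy version of the DBT-1 ramp-up rule described above
# (illustrative; not the DBT-1 source).

def ramp_up_minutes(eu, eu_per_min):
    if eu % eu_per_min != 0:
        return 0  # misconfigured: ramp-up silently falls back to zero
    return eu // eu_per_min

print(ramp_up_minutes(1500, 500))   # 3 minutes
print(ramp_up_minutes(1500, 750))   # 2 minutes
print(ramp_up_minutes(1500, 1500))  # 1 minute
print(ramp_up_minutes(1500, 1000))  # 0 -- the configuration reported in this thread
```

With eu=1500 and eu/min=1000, as in the reported configuration, 1000 does not divide 1500, so there is effectively no ramp-up at all, which can distort the results.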
From: xiong w. <wan...@gm...> - 2010-11-25 05:25:16
|
Dears,

I really appreciate your response. Here is the basic information about my test.

My DBT-1 configuration is as follows:

[appServer]
#dbconnection - connection from 1 dbdriver to 1 backend, with 4 loaders, each of the 5 coords receives 40 connections
10
#transaction_queue_size
1500
#transaction_array_size
1500
[dbdriver]
#items
1000
#customers
28800
#eu
1500
#eu/min
1000
#mean think_time
0.1
#run_duration in seconds
1200
#access mode with access_direct or access_appServer
access_appServer
#access clean of order table with access_clean if cleanup, deactivated by default if left empty
access_clean

The results by sar are as follows:

Method 1:
coordinator/datanode 1:
CPU           %user  %nice  %system  %iowait  %steal  %idle
Average: all  58.90   0.00    23.61     0.81    0.00  16.69
coordinator/datanode 2:
CPU           %user  %nice  %system  %iowait  %steal  %idle
Average: all  64.19   0.00    28.67     0.55    0.00   6.58

Method 2:
coordinator/datanode 1:
CPU           %user  %nice  %system  %iowait  %steal  %idle
Average: all  59.35   0.00    25.39     0.47    0.00  14.79
coordinator/datanode 2:
CPU           %user  %nice  %system  %iowait  %steal  %idle
Average: all  57.71   0.00    24.56     0.55    0.00  17.18

The average results by DBT1 are as follows:

Method 1:
loader 1: 1356.6 bogotransactions per second
loader 2: 1843.7 bogotransactions per second

Method 2:
loader 1: 757.4 bogotransactions per second
loader 2: 779.3 bogotransactions per second

It's obvious that there is a big difference between Method 1 and Method 2. I am curious about why.

Thanks again.

Regards,
Benny

2010/11/24 Koichi Suzuki <ko...@in...>

> Hi, Xiong;
>
> Could you tell me the CPU and I/O usage you can measure by sar? I'm afraid
> the load balance is not good in Method 2. How many backends did you use in
> each coordinator? Did you have any warning that connections overflowed in
> the data nodes? Also, how long a warm-up did you have?
>
> I'll let you know our configuration (sorry, please let me have a bit).
>
> Regards;
> ---
> Koichi Suzuki
>
> (2010年11月24日 14:40), xiong wang wrote:
>> Hi Mason,
>> I tested it with 5 PCs.
>> The environment is as follows:
>> 2 PCs, one datanode and one coordinator together on each of them,
>> GTM is on another PC,
>> 2 loaders are on the other 2 PCs.
>> Network: 1G.
>> I tested Postgres-XC in two methods as follows:
>> Method 1.
>> loader -------- coordinator & datanode
>>                                 \
>>                                  GTM
>>                                 /
>> loader -------- coordinator & datanode
>> Method 2.
>> loader -------- coordinator & datanode
>>        \      /                 \
>>         \    /                   \
>>          /\                       GTM
>>         /  \                     /
>>        /    \                   /
>> loader -------- coordinator & datanode
>> The DBT1 test results in these two methods are very different. Method 1
>> is much better than Method 2. I don't know why.
>> If I test Postgres-XC in Method 1, the DBT1 performance is close to what
>> the document declares. If I test it in Method 2, the result is much
>> worse than what the document writes. Could you tell me why the two
>> methods have so much effect on the DBT1 performance?
>> Thanks,
>> Regards,
>> Benny
>>
>> >How much worse?
>> >How many physical servers are in each configuration? How is each server
>> >configured in each, with how many data nodes? What kind of network?
>> >Gigabit?
>> >Or was everything on one system? With virtual machines or without and
>> >just using different ports?
>> >Are there errors in the log file (connection limits hit)?
>> >Regards,
>> >Mason
>>
>> ------------------------------------------------------------------------------
>> Increase Visibility of Your 3D Game App & Earn a Chance To Win $500!
>> Tap into the largest installed PC base & get more eyes on your game by
>> optimizing for Intel(R) Graphics Technology. Get started today with the
>> Intel(R) Software Partner Program. Five $500 cash prizes are up for grabs.
>> http://p.sf.net/sfu/intelisp-dev2dev
>>
>> _______________________________________________
>> Postgres-xc-developers mailing list
>> Pos...@li...
>> https://lists.sourceforge.net/lists/listinfo/postgres-xc-developers
|
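[Editor's note] The aggregation question Michael raises in his reply (whether each loader's Method 2 total sums its two coordinator connections) can be made concrete with a small sketch. The per-connection figures below are hypothetical placeholders; only the two reported totals (757.4 and 779.3 bogotransactions per second) come from the thread.

```python
# Illustration of aggregating Method 2 results: each loader drives two
# coordinators, so a loader's total throughput is the sum over both of
# its connections. The per-connection splits are HYPOTHETICAL; only the
# reported loader totals are real numbers from the benchmark report.

method2_per_connection = {
    ("loader1", "coordinator1"): 380.0,  # hypothetical split
    ("loader1", "coordinator2"): 377.4,  # hypothetical split
    ("loader2", "coordinator1"): 390.0,  # hypothetical split
    ("loader2", "coordinator2"): 389.3,  # hypothetical split
}

def loader_total(results, loader):
    """Sum bogotransactions/s over every coordinator a loader talks to."""
    return sum(v for (ld, _), v in results.items() if ld == loader)

print(round(loader_total(method2_per_connection, "loader1"), 1))  # 757.4
print(round(loader_total(method2_per_connection, "loader2"), 1))  # 779.3
```

If the reported totals covered only one connection each, the true Method 2 throughput would be roughly twice as high, which is why the question matters for the comparison.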
From: Michael P. <mic...@gm...> - 2010-11-25 00:27:54
|
Hi,

I have additional questions:
What are the parameters you used for the measurement?
How many emulated users? What is the ramp-up time?
Note: ramp-up time can be decided with the parameter called emulated users/min.
What is the average response time of DBT-1 transactions?
What is the think time, i.e. the time a DBT-1 backend waits between receiving a response and sending its next request?

Based on those parameters, how much output did you get, in terms of transactions per second? If not the raw output, could you give us a ratio of how much worse it became?
--
Michael Paquier
http://michaelpq.users.sourceforge.net
|
From: Koichi S. <ko...@in...> - 2010-11-24 09:48:24
|
Hi, Xiong;

Could you tell me the CPU and I/O usage you can measure by sar? I'm afraid the load balance is not good in Method 2. How many backends did you use in each coordinator? Did you have any warning that connections overflowed in the data nodes? Also, how long a warm-up did you have?

I'll let you know our configuration (sorry, please let me have a bit).

Regards;
---
Koichi Suzuki

(2010年11月24日 14:40), xiong wang wrote:
> Hi Mason,
> I tested it with 5 PCs.
> The environment is as follows:
> 2 PCs, one datanode and one coordinator together on each of them,
> GTM is on another PC,
> 2 loaders are on the other 2 PCs.
> Network: 1G.
> I tested Postgres-XC in two methods as follows:
> Method 1.
> loader -------- coordinator & datanode
>                                 \
>                                  GTM
>                                 /
> loader -------- coordinator & datanode
> Method 2.
> loader -------- coordinator & datanode
>        \      /                 \
>         \    /                   \
>          /\                       GTM
>         /  \                     /
>        /    \                   /
> loader -------- coordinator & datanode
> The DBT1 test results in these two methods are very different. Method 1
> is much better than Method 2. I don't know why.
> If I test Postgres-XC in Method 1, the DBT1 performance is close to what
> the document declares. If I test it in Method 2, the result is much
> worse than what the document writes. Could you tell me why the two
> methods have so much effect on the DBT1 performance?
> Thanks,
> Regards,
> Benny
>
> >How much worse?
> >How many physical servers are in each configuration? How is each server
> >configured in each, with how many data nodes? What kind of network?
> >Gigabit?
> >Or was everything on one system? With virtual machines or without and
> >just using different ports?
> >Are there errors in the log file (connection limits hit)?
> >Regards,
> >Mason
>
> _______________________________________________
> Postgres-xc-developers mailing list
> Pos...@li...
> https://lists.sourceforge.net/lists/listinfo/postgres-xc-developers
|
From: xiong w. <wan...@gm...> - 2010-11-24 05:40:19
|
Hi Mason,

I tested it with 5 PCs. The environment is as follows:
2 PCs, one datanode and one coordinator together on each of them,
GTM is on another PC,
2 loaders are on the other 2 PCs.
Network: 1G.

I tested Postgres-XC in two methods as follows:

Method 1.
loader -------- coordinator & datanode
                                \
                                 GTM
                                /
loader -------- coordinator & datanode

Method 2.
loader -------- coordinator & datanode
       \      /                 \
        \    /                   \
         /\                       GTM
        /  \                     /
       /    \                   /
loader -------- coordinator & datanode

The DBT1 test results in these two methods are very different. Method 1 is much better than Method 2. I don't know why. If I test Postgres-XC in Method 1, the DBT1 performance is close to what the document declares. If I test it in Method 2, the result is much worse than what the document writes. Could you tell me why the two methods have so much effect on the DBT1 performance?

Thanks,

Regards,
Benny

>How much worse?
>How many physical servers are in each configuration? How is each server
>configured in each, with how many data nodes? What kind of network?
>Gigabit?
>Or was everything on one system? With virtual machines or without and
>just using different ports?
>Are there errors in the log file (connection limits hit)?
>Regards,
>Mason
|
From: Mason S. <mas...@en...> - 2010-11-23 20:16:27
|
I am sending this out to allow for feedback.

This patch adds support for INSERT SELECT (including "multi-step" queries). This is done by utilizing COPY. We query the data nodes, with the tuples being sent to the Coordinator. These are then converted into COPY lines and sent back down to the appropriate nodes. (Long term, when we add data node to data node communication, these can be sent directly.)

We also optimize for the case when the SELECT is single-step, the destination table is partitioned, the input column value comes from the partition column of the source, and there are no limit or offset clauses. In this case, we just pass down the INSERT SELECT to the data nodes.

There is one kluge here in that I added a static variable for the copy state. I did this to avoid having to move the CopyState definition to a header file (for usage in execMain.c), in order to make merging with vanilla PostgreSQL easier.

--
Mason Sharp
EnterpriseDB Corporation
The Enterprise Postgres Company

This e-mail message (and any attachment) is intended for the use of the individual or entity to whom it is addressed. This message contains information from EnterpriseDB Corporation that may be privileged, confidential, or exempt from disclosure under applicable law. If you are not the intended recipient or authorized to receive this for the intended recipient, any use, dissemination, distribution, retention, archiving, or copying of this communication is strictly prohibited. If you have received this e-mail in error, please notify the sender immediately by reply e-mail and delete this message.
|
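[Editor's note] The planning decision Mason describes can be sketched schematically as below. The real logic lives in the Postgres-XC planner in C; all names here are hypothetical, and the conditions are exactly the four listed in the message.

```python
# Schematic sketch of the INSERT SELECT planning decision described
# above (hypothetical names; not the actual Postgres-XC C code).
# Fast path: if the SELECT is single-step, the destination table is
# partitioned, the inserted value for the partition column comes from
# the source's partition column, and there is no LIMIT/OFFSET, push
# the whole INSERT SELECT down to the data nodes. Otherwise, pull rows
# up to the Coordinator and redistribute them as COPY lines.

def plan_insert_select(single_step, target_partitioned,
                       partition_cols_match, has_limit_or_offset):
    if (single_step and target_partitioned
            and partition_cols_match and not has_limit_or_offset):
        return "push-down"          # run INSERT SELECT on the data nodes
    return "copy-redistribute"      # Coordinator converts tuples to COPY

print(plan_insert_select(True, True, True, False))   # push-down
print(plan_insert_select(False, True, True, False))  # copy-redistribute
print(plan_insert_select(True, True, True, True))    # copy-redistribute
```

The push-down path avoids routing every tuple through the Coordinator, which is why it is worth special-casing even though the COPY path handles the general case.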
From: Koichi S. <koi...@gm...> - 2010-11-23 14:28:20
|
In the document, we installed one coordinator and one data node on each physical server to make the best use of both coordinators and data nodes, independent of how the workload is shared between coordinators and data nodes. It also keeps the parameters simple when claiming scalability against the number of servers involved.

I don't think it's a good idea to install a loader and a coordinator on the same server. The loader's work is purely application-oriented and should not be counted as part of database performance.

As Mason mentioned, I'm very interested in the spec of the servers, the number of servers used, and configuration information about which components (GTM, coordinators and data nodes) you installed on which servers, as well as the network connections. As you might have seen, we need gigabit network links between GTM, coordinators and data nodes. I strongly recommend using a good L2 switch to reduce the network workload too.

Kind Regards;
----------
Koichi Suzuki

2010/11/23 Mason Sharp <mas...@en...>:
> On 11/23/10 3:27 AM, xiong wang wrote:
>> Dears,
>>
>> I tested the postgres-xc DBT1 performance according to the published
>> document. But the result is worse than what the document declares. One
>> loader with one coordinator is much better. One loader with two
>> coordinators is much worse than the former. I don't know which way is
>> right. And I don't know the reason why the latter method is so much
>> worse than the former.
>>
> How much worse?
>
> How many physical servers are in each configuration? How is each server
> configured in each, with how many data nodes? What kind of network?
> Gigabit?
>
> Or was everything on one system? With virtual machines or without and
> just using different ports?
>
> Are there errors in the log file (connection limits hit)?
>
> Regards,
>
> Mason
>
>> Your reply will be appreciated.
>>
>> Thanks.
>>
>> Best regards,
>>
>> Benny
>
> --
> Mason Sharp
> EnterpriseDB Corporation
> The Enterprise Postgres Company
>
> _______________________________________________
> Postgres-xc-developers mailing list
> Pos...@li...
> https://lists.sourceforge.net/lists/listinfo/postgres-xc-developers
|
From: Mason S. <mas...@en...> - 2010-11-23 14:15:14
|
On 11/23/10 3:27 AM, xiong wang wrote:
> Dears,
>
> I tested the postgres-xc DBT1 performance according to the published
> document. But the result is worse than what the document declares. One
> loader with one coordinator is much better. One loader with two
> coordinators is much worse than the former. I don't know which way is
> right. And I don't know the reason why the latter method is so much
> worse than the former.
>

How much worse?

How many physical servers are in each configuration? How is each server configured in each, with how many data nodes? What kind of network? Gigabit?

Or was everything on one system? With virtual machines or without and just using different ports?

Are there errors in the log file (connection limits hit)?

Regards,

Mason

> Your reply will be appreciated.
>
> Thanks.
>
> Best regards,
>
> Benny

--
Mason Sharp
EnterpriseDB Corporation
The Enterprise Postgres Company
|
From: xiong w. <wan...@gm...> - 2010-11-23 08:27:32
|
Dears,

I tested the postgres-xc DBT1 performance according to the published document. But the result is worse than what the document declares. One loader with one coordinator is much better; one loader with two coordinators is much worse than the former. I don't know which way is right, and I don't know why the latter method is so much worse than the former.

Your reply will be appreciated.

Thanks.

Best regards,

Benny
|