From: Ashutosh B. <ash...@en...> - 2013-05-17 11:46:34
On Fri, May 17, 2013 at 4:15 PM, Amit Khandekar <ami...@en...> wrote:

> On 15 May 2013 12:16, Ashutosh Bapat <ash...@en...> wrote:
>
>> Hi Amit,
>> Here are comments on the trig_shippability patch.
>>
>> 1. The function pgxc_trigevent_quickfind() needs a better name, like pgxc_has_trigger_for_event(), to convey the functionality clearly.
>>
>> 2. The prologue of the function pgxc_should_exec_triggers() has all the necessary content, but it needs to be written in a better order. The prologue talks about a single trigger (to be executed), but in reality the function checks whether all the triggers matching the given criteria and belonging to a given relation are firable or not. The prologue needs to be corrected. Also, first specify why such a check is needed (trigger order being alphabetical makes it necessary that all or none of the triggers execute on the same node).
>>
>> 3. pgxc_is_inttrigger_firable(): the function name needs to change a bit. The "int" in inttrigger can easily be associated with integer instead of internal. Can you please change the name to use internal_ or intern_ instead of int? If the name becomes too long, please use a suitable prefix.
>
> Done all of the above.
>
>> 4. RemoteQueryNext() is called multiple times; the after-statement triggers too will be called that many times. Right now this does not happen, since there is no way the function will be called multiple times for a DML. But in future we will start supporting RETURNING with FQS, in which case RemoteQueryNext() will be called multiple times.
>
> True. Right now it works, but once RemoteQueryNext() is called multiple times for FQS, this won't work. I have added a check of TupIsNull() before firing the AS trigger. The BS trigger was ok.

This looks fine.

> Also, I have removed the relation_access_type field check for statement triggers. Instead, for FQS I updated the remote_query field with the Query structure, and used its command tag.

Good.

> There was one more redundancy discovered. When we have non-shippable AR triggers but BR triggers don't even exist, we currently use all columns in the SET clause of the remote update statement. This is not necessary. The function should_exec_br_triggers() is used to determine whether we should do this, but it returns true even if there are no BR triggers and one or more non-shippable AR triggers exist. In the updated patch I have added a check: if the BR triggers don't exist in the first place, return false.

Ok, thanks for noticing this one.

> Before checking in the patch, I want to check one more thing. The with.sql test shows a different output for statements like:
>
>     WITH wcte AS ( INSERT INTO child1 VALUES ( 42, 'new' ) RETURNING id AS newid )
>
> Although the current results also fail for the same statements, my patch results in further diffs for them, so I am not sure what's going on. When Abbas checks in with.sql, I will re-run the regression. Abbas, do you have any rough patch ready so that I can test that my patch does not create any new with.sql failures? Just one that passes with.sql should suffice.

Once you check this, it's fine to commit the patch.

>> test xc_trigship
>>
>> 1. We are using the same name, xc_auditlog, for a table and a function. Please use different names.
>>
>> 2. For every object (table/function etc.) that the test creates, please mention the purpose of that object. That helps in validating the object definition then and there.
>
> Done.
>
>> 3. I couldn't understand how the test checks where the trigger is being fired.
>
> The last column of xc_auditlog is the node type.
>
>> On Mon, May 13, 2013 at 2:36 PM, Ashutosh Bapat <ash...@en...> wrote:
>>
>>> Hi Amit,
>>> We now have a Query structure in RemoteQuery. This corresponds to the query being fired on the datanodes. The commandType here should tell you whether it's a DELETE or not.
>>>
>>> On Mon, May 13, 2013 at 1:14 PM, Amit Khandekar <ami...@en...> wrote:
>>>
>>>> In case of multiple triggers of the same type for a given table, the requirement is that they should be fired in alphabetical order. To do so, we need to fire either all of them on the coordinator, or all of them on the datanode. The main changes in the attached patch are related to this requirement.
>>>>
>>>> All of the Exec*Trigger() functions now execute should_exec_trigger*() functions that return true if the node on which they run is the right node to execute those types of triggers.
>>>>
>>>> Also, for BR triggers, the additional requirement is that they should be run on the coordinator if the AR triggers are not shippable, regardless of whether the BR triggers themselves are shippable or not. This is because, if we fire BR triggers on the datanode, they might change the final row updated, and so we would need to fetch the new row back to the coordinator again. Instead, if we fire them on the coordinator, we already know the final row. We would also have needed additional changes to add RETURNING to the remote query to fetch the final updated row.
>>>>
>>>> Constraint triggers are an exception; we need to fire them always on the datanode. Once we support global constraints, these need not be specially handled.
>>>>
>>>> I was trying to avoid the should_exec_trigger*() calls in each of the Exec*Trigger() functions by doing this in TriggerEnabled(), because it is a common function called for all triggers. But the issue is, for AFTER triggers it is called with a different set of event type values than the usual TRIGGER_TYPE_* values: for AFTER triggers, the TRIGGER_EVENT_* values are used. Anyway, TriggerEnabled() is called for each of the triggers.
>>>>
>>>> The trigger shippability helper functions are now completely changed. pgxc_find_nonshippable_row_trig() is the key function. The comments in these functions should give a fair idea of their functionality.
>>>>
>>>> For statement shippability, the trigger functions are now explicitly called if it's not an internally generated query. The rq_internal_params field is used to know that this DML is a user-supplied query (that is, it would be FQS). The functions are called in RemoteQueryNext().
>>>>
>>>> There is another patch (relaccess_type.patch) that you need to apply first, before the main patch (trig_shippability.patch). I had to add another RELATION_ACCESS_DELETE in ExecNodes. To handle the statement triggers, I had to know whether the statement is a DELETE, INSERT or UPDATE.
>>>>
>>>> The new test xc_trigship is added.
>>>>
>>>> _______________________________________________
>>>> Postgres-xc-developers mailing list
>>>> Pos...@li...
>>>> https://lists.sourceforge.net/lists/listinfo/postgres-xc-developers

--
Best Wishes,
Ashutosh Bapat
EnterpriseDB Corporation
The Postgres Database Company
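The all-or-none rule discussed above (same-event triggers fire in alphabetical order, so either every one runs on the coordinator or every one runs on the datanode) can be sketched in miniature. This is a hypothetical illustration, not the actual Postgres-XC code: the Trig struct and should_exec_triggers() here are made up, the real pgxc_should_exec_triggers() works on catalog data, and the constraint-trigger exception (always fired on the datanode) is left out.

```c
#include <stdbool.h>

/* Toy stand-in for a trigger descriptor. */
typedef struct
{
    const char *name;       /* same-event triggers fire in name order */
    bool        shippable;  /* can this trigger run on a datanode?    */
} Trig;

/*
 * Decide whether the current node should fire the given set of
 * same-event triggers.  Because they must fire in alphabetical
 * order, they run all-or-none on one node: if every trigger is
 * shippable they all run on the datanode, otherwise they all run
 * on the coordinator.
 */
static bool
should_exec_triggers(const Trig *trigs, int ntrigs, bool is_coordinator)
{
    bool all_shippable = true;
    int  i;

    for (i = 0; i < ntrigs; i++)
        if (!trigs[i].shippable)
            all_shippable = false;

    /* datanode fires them only when every trigger is shippable */
    return is_coordinator ? !all_shippable : all_shippable;
}
```

With one non-shippable trigger in the set, the coordinator fires all of them and the datanode fires none, which preserves the alphabetical firing order across the set.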
From: Abbas B. <abb...@en...> - 2013-05-17 10:56:30
Yah, it was failing for me.

On Fri, May 17, 2013 at 2:18 PM, Ashutosh Bapat <ash...@en...> wrote:

> Is this failing for you?
>
> I do not see it failing on my machine, nor on the build-farm.
>
> On Fri, May 17, 2013 at 2:01 PM, Abbas Butt <abb...@en...> wrote:
>
>> Hi,
>> Attached please find a patch to add some missing ORDER BY clauses in the truncate test case.

--
*Abbas*
Architect
Ph: 92.334.5100153
Skype ID: gabbasb
www.enterprisedb.com <http://www.enterprisedb.com/>
*Follow us on Twitter*
@EnterpriseDB
Visit EnterpriseDB for tutorials, webinars, whitepapers <http://www.enterprisedb.com/resources-community> and more
From: Ashutosh B. <ash...@en...> - 2013-05-17 09:26:57
This looks good. Are there other ways we can have an UPDATE statement somewhere in the query tree list? Do we need to worry about such cases?

On Fri, May 17, 2013 at 2:22 PM, Abbas Butt <abb...@en...> wrote:

> On Thu, May 16, 2013 at 2:25 PM, Ashutosh Bapat <ash...@en...> wrote:
>
>> Hi Abbas,
>> Instead of fixing the first issue in pgxc_build_dml_statement(), is it possible to traverse the Query in validate_part_col_updatable() recursively to find UPDATE statements and apply the partition column check?
>
> Yes. I have attached that patch for your feedback. If you think it's ok, I can send the updated patch including the rest of the changes.
>
>> That would cover all the possibilities, I guess. That also saves us much effort in case we come to support distribution column updation.
>>
>> I think we need a generic solution to solve this command id issue, e.g. punching the command id always and efficiently. But for now this suffices. Please log a bug/feature and put it in the 1.2 bucket.
>
> Done. (Artifact 3613498 <https://sourceforge.net/tracker/?func=detail&aid=3613498&group_id=311227&atid=1310235>)
>
>> On Wed, May 15, 2013 at 5:31 AM, Abbas Butt <abb...@en...> wrote:
>>
>>> Adding developers mailing list.
>>>
>>> On Wed, May 15, 2013 at 4:57 AM, Abbas Butt <abb...@en...> wrote:
>>>
>>>> Hi,
>>>> Attached please find a patch to fix the "with" test case. There were two issues making the test fail.
>>>>
>>>> 1. Updates to the partition column were possible using syntax like
>>>>
>>>>     WITH t AS (UPDATE y SET a=a+1 RETURNING *) SELECT * FROM t
>>>>
>>>> The patch blocks this syntax.
>>>>
>>>> 2. For a WITH query that updates a table in the main query and inserts a row into the same table in the WITH query, we need to use command ID communication to the remote nodes in order to maintain global data visibility. For example:
>>>>
>>>>     CREATE TEMP TABLE tab (id int, val text) DISTRIBUTE BY REPLICATION;
>>>>     INSERT INTO tab VALUES (1,'p1');
>>>>     WITH wcte AS (INSERT INTO tab VALUES(42,'new') RETURNING id AS newid)
>>>>     UPDATE tab SET id = id + newid FROM wcte;
>>>>
>>>> The last query gets translated into the following multi-statement transaction on the primary datanode:
>>>>
>>>>     (a) START TRANSACTION ISOLATION LEVEL read committed READ WRITE
>>>>     (b) INSERT INTO tab (id, val) VALUES ($1, $2) RETURNING id  -- (42,'new')
>>>>     (c) SELECT id, val, ctid FROM ONLY tab WHERE true
>>>>     (d) UPDATE ONLY tab tab SET id = $1 WHERE (tab.ctid = $3)   -- (43,(0,1))
>>>>     (e) COMMIT TRANSACTION
>>>>
>>>> The command id of the SELECT in step (c) should be such that it does not see the INSERT of step (b).
>>>>
>>>> Comments are welcome.
>>>>
>>>> Regards

--
Best Wishes,
Ashutosh Bapat
EnterpriseDB Corporation
The Postgres Database Company
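The visibility requirement in step (c) above is the standard within-transaction MVCC rule: a scan sees a tuple inserted by its own transaction only if the tuple's inserting command id (cmin) is earlier than the scan's current command id. A toy sketch of that rule, with a made-up name (visible_to_command() is not a real PostgreSQL function; the real check lives in the backend's tuple visibility routines):

```c
#include <stdbool.h>
#include <stdint.h>

typedef uint32_t CommandId;

/*
 * Within a single transaction, a scan running at command id `curcid`
 * sees a row inserted by the same transaction only if the row was
 * inserted by an *earlier* command (tuple_cmin < curcid).  This is
 * why the coordinator must punch the right command id into step (c):
 * if the SELECT runs with curcid equal to the INSERT's cmin, it must
 * not see the freshly inserted (42,'new') row.
 */
static bool
visible_to_command(CommandId tuple_cmin, CommandId curcid)
{
    return tuple_cmin < curcid;
}
```

If the datanode assigned the SELECT a later command id on its own, the row from step (b) would become visible and the UPDATE of step (d) would wrongly touch it; shipping the command id from the coordinator keeps the two nodes' notions of "earlier command" in sync.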
From: Ashutosh B. <ash...@en...> - 2013-05-17 09:18:23
Is this failing for you?

I do not see it failing on my machine, nor on the build-farm.

On Fri, May 17, 2013 at 2:01 PM, Abbas Butt <abb...@en...> wrote:

> Hi,
> Attached please find a patch to add some missing ORDER BY clauses in the truncate test case.

--
Best Wishes,
Ashutosh Bapat
EnterpriseDB Corporation
The Postgres Database Company
From: Ashutosh B. <ash...@en...> - 2013-05-17 09:14:35
Hi Abbas,

The changes you have done affect the general query deparsing logic, which is used for dumping views. I don't think we should affect that. So we should devise a way to qualify objects only when it's being done for RemoteQuery. This might involve a lot of changes, esp. in function definitions.

But, diving deeper into the reasons, we have the following two problems which might be causing this issue (I haven't tested these myself, hence the uncertainty; otherwise I would be 100% sure).

One reason this problem occurs is that we prepare the statements at the datanodes during the first EXECUTE command. So one way to completely solve this problem is to prepare the statements at the datanodes at the time of preparing the statement. This is possible if the target datanodes are known at the time of planning.

The second reason we see this problem is bug *3607975*. Solving this bug would solve the regression diff. Can you please attempt it?

Solving reason 2 would be enough to silence the diffs, I guess. Can you please check?

On Fri, May 17, 2013 at 1:59 PM, Abbas Butt <abb...@en...> wrote:

> Hi,
> Attached please find a fix for the plancache test case. The test was failing because of the following issue:
>
>     create schema s1 create table abc (f1 int) distribute by replication;
>     create schema s2 create table abc (f1 int) distribute by replication;
>     insert into s1.abc values(123);
>     insert into s2.abc values(456);
>     set search_path = s1;
>     prepare p1 as select f1 from abc;
>     set search_path = s2;
>     execute p1;
>
> The last EXECUTE must send "select f1 from s1.abc" to the datanode, despite the fact that the current schema has been set to s2.
>
> The solution was to schema-qualify remote queries, and for that the function generate_relation_name is modified to make sure that relations are schema qualified independent of the current search path.
>
> Expected outputs of many test cases are changed, which makes the size and footprint of this patch large.

--
Best Wishes,
Ashutosh Bapat
EnterpriseDB Corporation
The Postgres Database Company
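The fix Abbas describes, always schema-qualifying relation names in queries shipped to datanodes so they are immune to later search_path changes, can be illustrated with a toy deparser helper. remote_relation_name() is invented for this sketch; the actual change is in generate_relation_name():

```c
#include <stdio.h>

/*
 * Toy version of the fix: when deparsing for a remote node, always
 * emit "schema.relation" rather than consulting search_path, so a
 * statement prepared as "select f1 from abc" under search_path = s1
 * still resolves to s1.abc after "set search_path = s2" on the
 * coordinator.
 */
static void
remote_relation_name(char *buf, size_t buflen,
                     const char *nspname, const char *relname)
{
    snprintf(buf, buflen, "%s.%s", nspname, relname);
}
```

As Ashutosh points out above, the subtlety is scoping: unconditional qualification also changes view dumps, so the real patch has to apply this only on the RemoteQuery deparsing path.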
From: Ashutosh B. <ash...@en...> - 2013-05-17 08:58:25
On Fri, May 17, 2013 at 2:23 PM, Abbas Butt <abb...@en...> wrote:

> On Thu, May 16, 2013 at 3:13 PM, Ashutosh Bapat <ash...@en...> wrote:
>
>> Hi Abbas,
>> I am also seeing a lot of changes in the expected output where the rows output have changed. What are these changes?
>
> These changes are a result of blocking partition column updates

Are those in sync with the PG expected output? Why did we change the original expected output in the first place?

> and changing the distribution of tables to replication.

That's acceptable.

>> [...]

--
Best Wishes,
Ashutosh Bapat
EnterpriseDB Corporation
The Postgres Database Company
From: Abbas B. <abb...@en...> - 2013-05-17 08:53:46
On Thu, May 16, 2013 at 3:13 PM, Ashutosh Bapat <ash...@en...> wrote:

> Hi Abbas,
> I am also seeing a lot of changes in the expected output where the rows output have changed. What are these changes?

These changes are a result of blocking partition column updates and changing the distribution of tables to replication.

> On Thu, May 16, 2013 at 2:55 PM, Ashutosh Bapat <ash...@en...> wrote:
>
>> Hi Abbas,
>> Instead of fixing the first issue in pgxc_build_dml_statement(), is it possible to traverse the Query in validate_part_col_updatable() recursively to find UPDATE statements and apply the partition column check? That would cover all the possibilities, I guess. That also saves us much effort in case we come to support distribution column updation.
>>
>> I think we need a generic solution to solve this command id issue, e.g. punching the command id always and efficiently. But for now this suffices. Please log a bug/feature and put it in the 1.2 bucket.
>>
>> [...]

--
*Abbas*
Architect
Ph: 92.334.5100153
Skype ID: gabbasb
www.enterprisedb.com <http://www.enterprisedb.com/>
*Follow us on Twitter*
@EnterpriseDB
Visit EnterpriseDB for tutorials, webinars, whitepapers <http://www.enterprisedb.com/resources-community> and more
From: Abbas B. <abb...@en...> - 2013-05-17 08:52:09
|
On Thu, May 16, 2013 at 2:25 PM, Ashutosh Bapat < ash...@en...> wrote: > Hi Abbas, > Instead of fixing the first issue in pgxc_build_dml_statement(), is it > possible to traverse the Query in validate_part_col_updatable() recursively > to find UPDATE statements and apply partition column check? > Yes. I have attached that patch for your feedback. If you think its ok I can send the updated patch including the rest of the changes. > That would cover all the possibilities, I guess. That also saves us much > effort in case we come to support distribution column updation. > > I think, we need a generic solution to solve this command id issue, e.g. > punching command id always and efficiently. But for now this suffices. > Please log a bug/feature and put it in 1.2 bucket. > Done. (Artifact 3613498<https://sourceforge.net/tracker/?func=detail&aid=3613498&group_id=311227&atid=1310235> ) > > > > > On Wed, May 15, 2013 at 5:31 AM, Abbas Butt <abb...@en...>wrote: > >> Adding developers mailing list. >> >> >> On Wed, May 15, 2013 at 4:57 AM, Abbas Butt <abb...@en...>wrote: >> >>> Hi, >>> Attached please find a patch to fix test case with. >>> There were two issues making the test to fail. >>> 1. Updates to partition column were possible using syntax like >>> WITH t AS (UPDATE y SET a=a+1 RETURNING *) SELECT * FROM t >>> The patch blocks this syntax. >>> >>> 2. For a WITH query that updates a table in the main query and >>> inserts a row in the same table in the WITH query we need to use >>> command ID communication to remote nodes in order to >>> maintain global data visibility. 
>>> For example
>>> CREATE TEMP TABLE tab (id int,val text) DISTRIBUTE BY REPLICATION;
>>> INSERT INTO tab VALUES (1,'p1');
>>> WITH wcte AS (INSERT INTO tab VALUES(42,'new') RETURNING id AS newid)
>>> UPDATE tab SET id = id + newid FROM wcte;
>>> The last query gets translated into the following multi-statement
>>> transaction on the primary datanode
>>> (a) START TRANSACTION ISOLATION LEVEL read committed READ WRITE
>>> (b) INSERT INTO tab (id, val) VALUES ($1, $2) RETURNING id -- (42,'new')
>>> (c) SELECT id, val, ctid FROM ONLY tab WHERE true
>>> (d) UPDATE ONLY tab tab SET id = $1 WHERE (tab.ctid = $3) -- (43,(0,1))
>>> (e) COMMIT TRANSACTION
>>> The command id of the select in step (c) should be such that
>>> it does not see the insert of step (b)
>>>
>>> Comments are welcome.
>>>
>>> Regards
>>>
>>> --
>>> *Abbas*
>>> Architect
>>> Ph: 92.334.5100153
>>> Skype ID: gabbasb
>>> www.enterprisedb.com <http://www.enterprisedb.com/>
>>> *Follow us on Twitter*
>>> @EnterpriseDB
>>> Visit EnterpriseDB for tutorials, webinars, whitepapers <http://www.enterprisedb.com/resources-community> and more <http://www.enterprisedb.com/resources-community>
>>
>> --
>> *Abbas*
>> Architect
>> Ph: 92.334.5100153
>> Skype ID: gabbasb
>> www.enterprisedb.com <http://www.enterprisedb.com/>
>> *Follow us on Twitter*
>> @EnterpriseDB
>> Visit EnterpriseDB for tutorials, webinars, whitepapers <http://www.enterprisedb.com/resources-community> and more <http://www.enterprisedb.com/resources-community>
------------------------------------------------------------------------------ >> AlienVault Unified Security Management (USM) platform delivers complete >> security visibility with the essential security capabilities. Easily and >> efficiently configure, manage, and operate all of your security controls >> from a single console and one unified framework. Download a free trial. >> http://p.sf.net/sfu/alienvault_d2d >> _______________________________________________ >> Postgres-xc-core mailing list >> Pos...@li... >> https://lists.sourceforge.net/lists/listinfo/postgres-xc-core >> >> > > > -- > Best Wishes, > Ashutosh Bapat > EntepriseDB Corporation > The Postgres Database Company > -- -- *Abbas* Architect Ph: 92.334.5100153 Skype ID: gabbasb www.enterprisedb.co <http://www.enterprisedb.com/>m<http://www.enterprisedb.com/> * Follow us on Twitter* @EnterpriseDB Visit EnterpriseDB for tutorials, webinars, whitepapers<http://www.enterprisedb.com/resources-community>and more<http://www.enterprisedb.com/resources-community> |
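[Editor's note] The command-id requirement described in the thread above can be modeled in a few lines of Python. This is a hypothetical sketch with invented names (`Tuple`, `visible`), not XC source code; it only illustrates why the SELECT in step (c) must carry a command id no greater than that of the INSERT in step (b).

```python
# Hypothetical model (not XC source): within one transaction, a scan sees
# only tuples whose inserting command id is strictly lower than the scan's
# current command id.

class Tuple:
    def __init__(self, values, cid):
        self.values = values
        self.cid = cid  # command id of the statement that inserted this tuple

def visible(tup, scan_cid):
    return tup.cid < scan_cid

table = [Tuple((1, 'p1'), cid=0)]        # pre-existing row
table.append(Tuple((42, 'new'), cid=1))  # inserted by the WITH clause (step b)

# If the coordinator punches too high a command id (2) into the SELECT of
# step (c), the scan wrongly sees the fresh insert:
assert [t.values for t in table if visible(t, 2)] == [(1, 'p1'), (42, 'new')]

# With the correct command id (1), only the pre-existing row is scanned:
assert [t.values for t in table if visible(t, 1)] == [(1, 'p1')]
```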
From: Abbas B. <abb...@en...> - 2013-05-17 08:35:55
|
Hi, Attached please find a patch to change the with.sql test case because of schema qualification in remote queries. -- *Abbas* Architect Ph: 92.334.5100153 Skype ID: gabbasb www.enterprisedb.com <http://www.enterprisedb.com/> *Follow us on Twitter* @EnterpriseDB Visit EnterpriseDB for tutorials, webinars, whitepapers <http://www.enterprisedb.com/resources-community> and more <http://www.enterprisedb.com/resources-community> |
From: Abbas B. <abb...@en...> - 2013-05-17 08:31:07
|
Hi, Attached please find a patch to add some missing ORDER BY clauses in the truncate test case. -- *Abbas* Architect Ph: 92.334.5100153 Skype ID: gabbasb www.enterprisedb.com <http://www.enterprisedb.com/> *Follow us on Twitter* @EnterpriseDB Visit EnterpriseDB for tutorials, webinars, whitepapers <http://www.enterprisedb.com/resources-community> and more <http://www.enterprisedb.com/resources-community> |
From: Abbas B. <abb...@en...> - 2013-05-17 08:29:14
|
Hi, Attached please find a fix for the plancache test case. The test was failing because of the following issue:

create schema s1 create table abc (f1 int) distribute by replication;
create schema s2 create table abc (f1 int) distribute by replication;
insert into s1.abc values(123);
insert into s2.abc values(456);
set search_path = s1;
prepare p1 as select f1 from abc;
set search_path = s2;
execute p1;

The last execute must send "select f1 from s1.abc" to the datanode, despite the fact that the current schema has been set to s2. The solution was to schema-qualify remote queries, and for that the function generate_relation_name is modified to make sure that relations are schema qualified independent of the current search path. Expected outputs of many test cases are changed, which makes the size and footprint of this patch large. -- *Abbas* Architect Ph: 92.334.5100153 Skype ID: gabbasb www.enterprisedb.com <http://www.enterprisedb.com/> *Follow us on Twitter* @EnterpriseDB Visit EnterpriseDB for tutorials, webinars, whitepapers <http://www.enterprisedb.com/resources-community> and more <http://www.enterprisedb.com/resources-community> |
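[Editor's note] The effect of the fix described above can be sketched in a few lines of Python. This is a hypothetical model (the function below is invented for illustration and does not mirror the C implementation of generate_relation_name): the point is that the remote query stored for a prepared statement must name the relation independently of the current search_path.

```python
# Hypothetical sketch: before the fix, the schema prefix was dropped when
# the relation's schema was first on the search path, baking the
# *prepare-time* search_path into the stored remote query.

def generate_relation_name(schema, relname, search_path, force_qualify=True):
    if not force_qualify and search_path and search_path[0] == schema:
        return relname            # old behavior: unqualified name
    return f"{schema}.{relname}"  # fixed behavior: always schema-qualified

# prepare p1 as select f1 from abc;  -- issued with search_path = s1
old = generate_relation_name("s1", "abc", ["s1"], force_qualify=False)
new = generate_relation_name("s1", "abc", ["s1"])

assert old == "abc"      # re-resolves to s2.abc after SET search_path = s2
assert new == "s1.abc"   # always refers to the table seen at prepare time
```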
From: 鈴木 幸市 <ko...@in...> - 2013-05-17 06:37:50
|
I understood. Maybe we must revise COPY manual about the restriction from user's point of view. Regards; --- Koichi Suzuki On 2013/05/17, at 14:44, Amit Khandekar <ami...@en...> wrote: > > > On 17 May 2013 10:14, 鈴木 幸市 <ko...@in...> wrote: > The background of the restriction is apparently PG restriction. My point is does this issue will not happen in PG. > > In PG, when queries are executed through triggers, those are initiated by the backend, there is no client in the picture. So there is no need of any client-server message exchange for executing triggers while the COPY protocol is in progress. > > For XC, the triggers are executed from the client (i.e. coordinator) so there are client-server messages to be exchanged while the COPY is in progress. > > In XC, we catch such scenario in the coordinator itself ; we block any messages to be sent to datanode while the COPY is in progress. > > I am working on allowing statement triggers to be executed from coordinator. That should be possible. > > Regards; > --- > Koichi Suzuki > > > > On 2013/05/17, at 13:36, Ashutosh Bapat <ash...@en...> wrote: > >> Ok, in that case, I don't think we have any other way but to convert COPY into INSERT between coordinator and datanode when the triggers are not shippable. I think this restriction applies only to the row triggers; statement triggers should be fine. >> >> >> On Fri, May 17, 2013 at 10:01 AM, Amit Khandekar <ami...@en...> wrote: >> >> >> On 17 May 2013 09:36, Ashutosh Bapat <ash...@en...> wrote: >> >> >> >> On Fri, May 17, 2013 at 9:27 AM, Amit Khandekar <ami...@en...> wrote: >> >> >> On 15 May 2013 12:53, Amit Khandekar <ami...@en...> wrote: >> In XC, the way COPY is implemented is that for each record, we read the whole line into memory, and then pass it to the datanode as-is. If there are non-shippable default column expressions, we evaluate the default values , convert them into output form, and append them to the data row. 
>> >> In presence of BR triggers, currently the ExecBRInsertTriggers() do not get called because of the way we skip the whole PG code block; instead we just send the data row as-is, optionally appending default values into the data row. >> >> What we need to do is; convert the tuple returned by ExecBRTriggers into text data row, but the text data should be in COPY format. This is because we need to send the data row to the datanode using COPY command, so it requires correct COPY format, such as escape sequences. >> >> For this, we need to call the function CopyOneRowTo() that is being used by COPY TO. This will make sure it will emit the data row in the COPY format. But we need to create a temporary CopyState because CopyOneRowTo() needs it. We can derive it from the current CopyState that is already created for COPY FROM. Most of the fields remain the same, except we need to re-assign CopyState->line_buf, and CopyState->rowcontext. >> >> This will save us from writing code to make sure the new output data row generated by BR triggers complies with COPY data format. >> >> I had already done similar thing for appending default values into the data row. We call functions like CopyAttributeOutCSV(), CopyInt32() to append the values to the data row in COPY format. There, we did not require CopyOneRow() because we did not require the complete row, we needed to append only a subset of columns to the existing data row. >> >> Comments/suggestions welcome. >> >> I have hit a dead end in the way I am allowing the BR triggers to execute during COPY. >> >> It is not possible to send any non-COPY messages to the backend when the client-server protocol is in COPY mode. Which means, it is not possible to send any commands to the datanode when connection is in DN_CONNECTION_STATE_COPY_IN state. When the trigger function executes an SQL query, that query can't be executed because it's not possible to exchange any non-copy messages, let alone sending a query to the backend (i.e. 
datanode). >> >> Is this an XC restriction or PG restriction? >> >> The above is a PG restriction. >> >> Not accpepting any other client messages during a COPY protocol is a PG backend requirement. Not accepting trigger queries from coordinator has become an XC restriction as a result of the above PG protocol restriction. >> >> >> >> This naturally happens only for non-shippable triggers. If triggers are executed on datanode, then this issue does not arise. >> >> We need to device some other means to support non-shippable triggers for COPY. May be we would end up sending INSERT commands on the datanode instead of COPY command, if there are non-shippable triggers. Each of the data row will be sent as parameters to the insert query. This operation would be slow, but possible. >> >> >> ------------------------------------------------------------------------------ >> AlienVault Unified Security Management (USM) platform delivers complete >> security visibility with the essential security capabilities. Easily and >> efficiently configure, manage, and operate all of your security controls >> from a single console and one unified framework. Download a free trial. >> http://p.sf.net/sfu/alienvault_d2d >> _______________________________________________ >> Postgres-xc-developers mailing list >> Pos...@li... >> https://lists.sourceforge.net/lists/listinfo/postgres-xc-developers >> >> >> >> >> -- >> Best Wishes, >> Ashutosh Bapat >> EntepriseDB Corporation >> The Postgres Database Company >> >> >> >> >> -- >> Best Wishes, >> Ashutosh Bapat >> EntepriseDB Corporation >> The Postgres Database Company >> ------------------------------------------------------------------------------ >> AlienVault Unified Security Management (USM) platform delivers complete >> security visibility with the essential security capabilities. 
Easily and >> efficiently configure, manage, and operate all of your security controls >> from a single console and one unified framework. Download a free trial. >> http://p.sf.net/sfu/alienvault_d2d_______________________________________________ >> Postgres-xc-developers mailing list >> Pos...@li... >> https://lists.sourceforge.net/lists/listinfo/postgres-xc-developers > > |
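[Editor's note] The COPY-format requirement discussed in this thread (escape sequences, NULL markers) can be illustrated with a small stand-alone sketch. The helpers below are invented names modeling only a subset of PostgreSQL's COPY text-format rules (tab delimiter, \N for NULL, backslash escapes); they are not the actual CopyOneRowTo() implementation.

```python
# Hypothetical model of the escaping a trigger-modified value needs before
# it can be appended to a COPY data row: without these escapes the datanode
# would misparse a value containing a delimiter or newline.

_ESCAPES = {'\\': '\\\\', '\t': '\\t', '\n': '\\n', '\r': '\\r'}

def copy_text_field(value):
    if value is None:
        return r'\N'  # COPY text format's NULL marker
    return ''.join(_ESCAPES.get(ch, ch) for ch in str(value))

def copy_text_row(values):
    # Fields of one data row are joined by the default tab delimiter.
    return '\t'.join(copy_text_field(v) for v in values)

# A trigger-produced value containing a tab survives the round trip intact:
row = copy_text_row([42, 'new\tvalue', None])
assert row == '42\tnew\\tvalue\t\\N'
```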
From: Amit K. <ami...@en...> - 2013-05-17 05:44:49
|
On 17 May 2013 10:14, 鈴木 幸市 <ko...@in...> wrote: > The background of the restriction is apparently PG restriction. My point > is does this issue will not happen in PG. > In PG, when queries are executed through triggers, they are initiated by the backend; there is no client in the picture. So there is no need for any client-server message exchange for executing triggers while the COPY protocol is in progress. For XC, the triggers are executed from the client (i.e. the coordinator), so there are client-server messages to be exchanged while the COPY is in progress. In XC, we catch such a scenario in the coordinator itself; we block any messages from being sent to the datanode while the COPY is in progress. I am working on allowing statement triggers to be executed from the coordinator. That should be possible. > > Regards; > --- > Koichi Suzuki > > > > On 2013/05/17, at 13:36, Ashutosh Bapat <ash...@en...> > wrote: > > Ok, in that case, I don't think we have any other way but to convert COPY > into INSERT between coordinator and datanode when the triggers are not > shippable. I think this restriction applies only to the row triggers; > statement triggers should be fine. > > > On Fri, May 17, 2013 at 10:01 AM, Amit Khandekar < > ami...@en...> wrote: > >> >> >> On 17 May 2013 09:36, Ashutosh Bapat <ash...@en...>wrote: >> >>> >>> >>> >>> On Fri, May 17, 2013 at 9:27 AM, Amit Khandekar < >>> ami...@en...> wrote: >>> >>>> >>>> >>>> On 15 May 2013 12:53, Amit Khandekar <ami...@en...>wrote: >>>> >>>>> In XC, the way COPY is implemented is that for each record, we read >>>>> the whole line into memory, and then pass it to the datanode as-is. If >>>>> there are non-shippable default column expressions, we evaluate the default >>>>> values, convert them into output form, and append them to the data row. 
>>>>> >>>>> In presence of BR triggers, currently the ExecBRInsertTriggers() do >>>>> not get called because of the way we skip the whole PG code block; instead >>>>> we just send the data row as-is, optionally appending default values into >>>>> the data row. >>>>> >>>>> What we need to do is; convert the tuple returned by ExecBRTriggers >>>>> into text data row, but the text data should be in COPY format. This is >>>>> because we need to send the data row to the datanode using COPY command, so >>>>> it requires correct COPY format, such as escape sequences. >>>>> >>>>> For this, we need to call the function CopyOneRowTo() that is being >>>>> used by COPY TO. This will make sure it will emit the data row in the COPY >>>>> format. But we need to create a temporary CopyState because CopyOneRowTo() >>>>> needs it. We can derive it from the current CopyState that is already >>>>> created for COPY FROM. Most of the fields remain the same, except we need >>>>> to re-assign CopyState->line_buf, and CopyState->rowcontext. >>>>> >>>>> This will save us from writing code to make sure the new output data >>>>> row generated by BR triggers complies with COPY data format. >>>>> >>>>> I had already done similar thing for appending default values into the >>>>> data row. We call functions like CopyAttributeOutCSV(), CopyInt32() to >>>>> append the values to the data row in COPY format. There, we did not require >>>>> CopyOneRow() because we did not require the complete row, we needed to >>>>> append only a subset of columns to the existing data row. >>>>> >>>>> Comments/suggestions welcome. >>>>> >>>> >>>> I have hit a dead end in the way I am allowing the BR triggers to >>>> execute during COPY. >>>> >>>> It is not possible to send any non-COPY messages to the backend when >>>> the client-server protocol is in COPY mode. Which means, it is not possible >>>> to send any commands to the datanode when connection is in >>>> DN_CONNECTION_STATE_COPY_IN state. 
When the trigger function executes an >>>> SQL query, that query can't be executed because it's not possible to >>>> exchange any non-copy messages, let alone sending a query to the backend >>>> (i.e. datanode). >>>> >>> >>> Is this an XC restriction or PG restriction? >>> >> >> The above is a PG restriction. >> >> Not accpepting any other client messages during a COPY protocol is a PG >> backend requirement. Not accepting trigger queries from coordinator has >> become an XC restriction as a result of the above PG protocol restriction. >> >> >>> >>>> >>>> This naturally happens only for non-shippable triggers. If triggers are >>>> executed on datanode, then this issue does not arise. >>>> >>>> We need to device some other means to support non-shippable triggers >>>> for COPY. May be we would end up sending INSERT commands on the datanode >>>> instead of COPY command, if there are non-shippable triggers. Each of the >>>> data row will be sent as parameters to the insert query. This operation >>>> would be slow, but possible. >>>> >>>> >>>> >>>> ------------------------------------------------------------------------------ >>>> AlienVault Unified Security Management (USM) platform delivers complete >>>> security visibility with the essential security capabilities. Easily and >>>> efficiently configure, manage, and operate all of your security controls >>>> from a single console and one unified framework. Download a free trial. >>>> http://p.sf.net/sfu/alienvault_d2d >>>> _______________________________________________ >>>> Postgres-xc-developers mailing list >>>> Pos...@li... 
>>>> https://lists.sourceforge.net/lists/listinfo/postgres-xc-developers >>>> >>>> >>> >>> >>> -- >>> Best Wishes, >>> Ashutosh Bapat >>> EntepriseDB Corporation >>> The Postgres Database Company >>> >> >> > > > -- > Best Wishes, > Ashutosh Bapat > EntepriseDB Corporation > The Postgres Database Company > ------------------------------------------------------------------------------ > AlienVault Unified Security Management (USM) platform delivers complete > security visibility with the essential security capabilities. Easily and > efficiently configure, manage, and operate all of your security controls > from a single console and one unified framework. Download a free trial. > > http://p.sf.net/sfu/alienvault_d2d_______________________________________________ > Postgres-xc-developers mailing list > Pos...@li... > https://lists.sourceforge.net/lists/listinfo/postgres-xc-developers > > > |
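[Editor's note] The coordinator-side guard Amit describes (blocking messages to the datanode while COPY is in progress) can be sketched as a tiny state machine. The class and names below are invented for illustration and do not mirror XC's actual connection-handling code; only the state name echoes the DN_CONNECTION_STATE_COPY_IN constant mentioned in the thread.

```python
# Hypothetical model: while a datanode connection is in COPY-IN state, any
# attempt to send a regular query (e.g. one issued by a non-shippable
# trigger) must be rejected, because the wire protocol admits only
# copy-data messages at that point.

IDLE, COPY_IN = 'IDLE', 'DN_CONNECTION_STATE_COPY_IN'

class DatanodeConnection:
    def __init__(self):
        self.state = IDLE

    def begin_copy(self):
        self.state = COPY_IN

    def send_query(self, sql):
        if self.state == COPY_IN:
            raise RuntimeError('cannot send a query while COPY is in progress')
        return f'sent: {sql}'

conn = DatanodeConnection()
conn.begin_copy()
try:
    conn.send_query('SELECT trigger_func()')
    blocked = False
except RuntimeError:
    blocked = True
assert blocked  # the trigger's query is rejected mid-COPY
```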
From: 鈴木 幸市 <ko...@in...> - 2013-05-17 04:44:22
|
The background of the restriction is apparently a PG restriction. My point is: will this issue not happen in PG? Regards; --- Koichi Suzuki On 2013/05/17, at 13:36, Ashutosh Bapat <ash...@en...> wrote: > Ok, in that case, I don't think we have any other way but to convert COPY into INSERT between coordinator and datanode when the triggers are not shippable. I think this restriction applies only to the row triggers; statement triggers should be fine. > > > On Fri, May 17, 2013 at 10:01 AM, Amit Khandekar <ami...@en...> wrote: > > > On 17 May 2013 09:36, Ashutosh Bapat <ash...@en...> wrote: > > > > On Fri, May 17, 2013 at 9:27 AM, Amit Khandekar <ami...@en...> wrote: > > > On 15 May 2013 12:53, Amit Khandekar <ami...@en...> wrote: > In XC, the way COPY is implemented is that for each record, we read the whole line into memory, and then pass it to the datanode as-is. If there are non-shippable default column expressions, we evaluate the default values, convert them into output form, and append them to the data row. > > In presence of BR triggers, currently the ExecBRInsertTriggers() do not get called because of the way we skip the whole PG code block; instead we just send the data row as-is, optionally appending default values into the data row. > > What we need to do is: convert the tuple returned by ExecBRTriggers into a text data row, but the text data should be in COPY format. This is because we need to send the data row to the datanode using the COPY command, so it requires correct COPY format, such as escape sequences. > > For this, we need to call the function CopyOneRowTo() that is being used by COPY TO. This will make sure it will emit the data row in the COPY format. But we need to create a temporary CopyState because CopyOneRowTo() needs it. We can derive it from the current CopyState that is already created for COPY FROM. Most of the fields remain the same, except we need to re-assign CopyState->line_buf, and CopyState->rowcontext. 
> > This will save us from writing code to make sure the new output data row generated by BR triggers complies with COPY data format. > > I had already done similar thing for appending default values into the data row. We call functions like CopyAttributeOutCSV(), CopyInt32() to append the values to the data row in COPY format. There, we did not require CopyOneRow() because we did not require the complete row, we needed to append only a subset of columns to the existing data row. > > Comments/suggestions welcome. > > I have hit a dead end in the way I am allowing the BR triggers to execute during COPY. > > It is not possible to send any non-COPY messages to the backend when the client-server protocol is in COPY mode. Which means, it is not possible to send any commands to the datanode when connection is in DN_CONNECTION_STATE_COPY_IN state. When the trigger function executes an SQL query, that query can't be executed because it's not possible to exchange any non-copy messages, let alone sending a query to the backend (i.e. datanode). > > Is this an XC restriction or PG restriction? > > The above is a PG restriction. > > Not accpepting any other client messages during a COPY protocol is a PG backend requirement. Not accepting trigger queries from coordinator has become an XC restriction as a result of the above PG protocol restriction. > > > > This naturally happens only for non-shippable triggers. If triggers are executed on datanode, then this issue does not arise. > > We need to device some other means to support non-shippable triggers for COPY. May be we would end up sending INSERT commands on the datanode instead of COPY command, if there are non-shippable triggers. Each of the data row will be sent as parameters to the insert query. This operation would be slow, but possible. 
> > > ------------------------------------------------------------------------------ > AlienVault Unified Security Management (USM) platform delivers complete > security visibility with the essential security capabilities. Easily and > efficiently configure, manage, and operate all of your security controls > from a single console and one unified framework. Download a free trial. > http://p.sf.net/sfu/alienvault_d2d > _______________________________________________ > Postgres-xc-developers mailing list > Pos...@li... > https://lists.sourceforge.net/lists/listinfo/postgres-xc-developers > > > > > -- > Best Wishes, > Ashutosh Bapat > EntepriseDB Corporation > The Postgres Database Company > > > > > -- > Best Wishes, > Ashutosh Bapat > EntepriseDB Corporation > The Postgres Database Company > ------------------------------------------------------------------------------ > AlienVault Unified Security Management (USM) platform delivers complete > security visibility with the essential security capabilities. Easily and > efficiently configure, manage, and operate all of your security controls > from a single console and one unified framework. Download a free trial. > http://p.sf.net/sfu/alienvault_d2d_______________________________________________ > Postgres-xc-developers mailing list > Pos...@li... > https://lists.sourceforge.net/lists/listinfo/postgres-xc-developers |
From: Ashutosh B. <ash...@en...> - 2013-05-17 04:36:30
|
Ok, in that case, I don't think we have any other way but to convert COPY into INSERT between coordinator and datanode when the triggers are not shippable. I think this restriction applies only to the row triggers; statement triggers should be fine. On Fri, May 17, 2013 at 10:01 AM, Amit Khandekar < ami...@en...> wrote: > > > On 17 May 2013 09:36, Ashutosh Bapat <ash...@en...>wrote: > >> >> >> >> On Fri, May 17, 2013 at 9:27 AM, Amit Khandekar < >> ami...@en...> wrote: >> >>> >>> >>> On 15 May 2013 12:53, Amit Khandekar <ami...@en...>wrote: >>> >>>> In XC, the way COPY is implemented is that for each record, we read the >>>> whole line into memory, and then pass it to the datanode as-is. If there >>>> are non-shippable default column expressions, we evaluate the default >>>> values , convert them into output form, and append them to the data row. >>>> >>>> In presence of BR triggers, currently the ExecBRInsertTriggers() do not >>>> get called because of the way we skip the whole PG code block; instead we >>>> just send the data row as-is, optionally appending default values into the >>>> data row. >>>> >>>> What we need to do is; convert the tuple returned by ExecBRTriggers >>>> into text data row, but the text data should be in COPY format. This is >>>> because we need to send the data row to the datanode using COPY command, so >>>> it requires correct COPY format, such as escape sequences. >>>> >>>> For this, we need to call the function CopyOneRowTo() that is being >>>> used by COPY TO. This will make sure it will emit the data row in the COPY >>>> format. But we need to create a temporary CopyState because CopyOneRowTo() >>>> needs it. We can derive it from the current CopyState that is already >>>> created for COPY FROM. Most of the fields remain the same, except we need >>>> to re-assign CopyState->line_buf, and CopyState->rowcontext. 
>>>> >>>> This will save us from writing code to make sure the new output data >>>> row generated by BR triggers complies with COPY data format. >>>> >>>> I had already done similar thing for appending default values into the >>>> data row. We call functions like CopyAttributeOutCSV(), CopyInt32() to >>>> append the values to the data row in COPY format. There, we did not require >>>> CopyOneRow() because we did not require the complete row, we needed to >>>> append only a subset of columns to the existing data row. >>>> >>>> Comments/suggestions welcome. >>>> >>> >>> I have hit a dead end in the way I am allowing the BR triggers to >>> execute during COPY. >>> >>> It is not possible to send any non-COPY messages to the backend when the >>> client-server protocol is in COPY mode. Which means, it is not possible to >>> send any commands to the datanode when connection is in >>> DN_CONNECTION_STATE_COPY_IN state. When the trigger function executes an >>> SQL query, that query can't be executed because it's not possible to >>> exchange any non-copy messages, let alone sending a query to the backend >>> (i.e. datanode). >>> >> >> Is this an XC restriction or PG restriction? >> > > The above is a PG restriction. > > Not accpepting any other client messages during a COPY protocol is a PG > backend requirement. Not accepting trigger queries from coordinator has > become an XC restriction as a result of the above PG protocol restriction. > > >> >>> >>> This naturally happens only for non-shippable triggers. If triggers are >>> executed on datanode, then this issue does not arise. >>> >>> We need to device some other means to support non-shippable triggers for >>> COPY. May be we would end up sending INSERT commands on the datanode >>> instead of COPY command, if there are non-shippable triggers. Each of the >>> data row will be sent as parameters to the insert query. This operation >>> would be slow, but possible. 
>>> >>> >>> >>> ------------------------------------------------------------------------------ >>> AlienVault Unified Security Management (USM) platform delivers complete >>> security visibility with the essential security capabilities. Easily and >>> efficiently configure, manage, and operate all of your security controls >>> from a single console and one unified framework. Download a free trial. >>> http://p.sf.net/sfu/alienvault_d2d >>> _______________________________________________ >>> Postgres-xc-developers mailing list >>> Pos...@li... >>> https://lists.sourceforge.net/lists/listinfo/postgres-xc-developers >>> >>> >> >> >> -- >> Best Wishes, >> Ashutosh Bapat >> EntepriseDB Corporation >> The Postgres Database Company >> > > -- Best Wishes, Ashutosh Bapat EntepriseDB Corporation The Postgres Database Company |
From: Amit K. <ami...@en...> - 2013-05-17 04:32:25
|
On 17 May 2013 09:36, Ashutosh Bapat <ash...@en...> wrote:

>> On 15 May 2013 12:53, Amit Khandekar <ami...@en...> wrote:
>>> [...]
>>
>> I have hit a dead end in the way I am allowing the BR triggers to execute
>> during COPY.
>>
>> It is not possible to send any non-COPY messages to the backend when the
>> client-server protocol is in COPY mode. Which means it is not possible to
>> send any commands to the datanode when the connection is in
>> DN_CONNECTION_STATE_COPY_IN state. When the trigger function executes an
>> SQL query, that query can't be executed, because it's not possible to
>> exchange any non-COPY messages, let alone send a query to the backend
>> (i.e. the datanode).
>
> Is this an XC restriction or a PG restriction?

The above is a PG restriction. Not accepting any other client messages
during a COPY operation is a PG backend protocol requirement. Not accepting
trigger queries from the coordinator has become an XC restriction as a
result of that PG protocol restriction.

>> This naturally happens only for non-shippable triggers. If triggers are
>> executed on the datanode, then this issue does not arise.
>>
>> We need to devise some other means to support non-shippable triggers for
>> COPY. Maybe we would end up sending INSERT commands to the datanode
>> instead of a COPY command if there are non-shippable triggers. Each data
>> row would be sent as parameters to the insert query. This operation would
>> be slow, but possible.
|
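The fallback Amit proposes above (one parameterized INSERT per data row, so
the connection never enters COPY-IN state and trigger queries remain
possible) could be sketched roughly as follows. This is an illustrative
sketch only, not XC code; `execute_on_datanode` is a hypothetical stand-in
for XC's remote-execution machinery.

```python
def send_rows(table, columns, rows, execute_on_datanode):
    """Push rows to a datanode as parameterized INSERTs instead of COPY.

    Slower than COPY (one round trip per row), but the connection stays in
    normal query mode, so a non-shippable trigger fired for each row can
    still issue its own SQL in between.
    """
    # Build "$1, $2, ..." placeholders, one per column.
    placeholders = ", ".join("$%d" % (i + 1) for i in range(len(columns)))
    query = "INSERT INTO %s (%s) VALUES (%s)" % (
        table, ", ".join(columns), placeholders)
    for row in rows:
        # Each data row is sent as parameters to the same prepared query.
        execute_on_datanode(query, row)
    return query
```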
From: 鈴木 幸市 <ko...@in...> - 2013-05-17 04:08:50
|
I'm afraid it's an XC restriction.

Regards;
---
Koichi Suzuki

On 2013/05/17, at 13:06, Ashutosh Bapat <ash...@en...> wrote:

> On Fri, May 17, 2013 at 9:27 AM, Amit Khandekar <ami...@en...> wrote:
>> [...]
>>
>> When the trigger function executes an SQL query, that query can't be
>> executed because it's not possible to exchange any non-COPY messages,
>> let alone send a query to the backend (i.e. datanode).
>
> Is this an XC restriction or PG restriction?
|
From: Ashutosh B. <ash...@en...> - 2013-05-17 04:06:47
|
On Fri, May 17, 2013 at 9:27 AM, Amit Khandekar <ami...@en...> wrote:

> On 15 May 2013 12:53, Amit Khandekar <ami...@en...> wrote:
>> [...]
>
> I have hit a dead end in the way I am allowing the BR triggers to execute
> during COPY.
>
> It is not possible to send any non-COPY messages to the backend when the
> client-server protocol is in COPY mode. Which means it is not possible to
> send any commands to the datanode when the connection is in
> DN_CONNECTION_STATE_COPY_IN state. When the trigger function executes an
> SQL query, that query can't be executed, because it's not possible to
> exchange any non-COPY messages, let alone send a query to the backend
> (i.e. datanode).

Is this an XC restriction or PG restriction?

> This naturally happens only for non-shippable triggers. If triggers are
> executed on the datanode, then this issue does not arise.
>
> We need to devise some other means to support non-shippable triggers for
> COPY. Maybe we would end up sending INSERT commands to the datanode
> instead of a COPY command if there are non-shippable triggers. Each data
> row would be sent as parameters to the insert query. This operation would
> be slow, but possible.

--
Best Wishes,
Ashutosh Bapat
EnterpriseDB Corporation
The Postgres Database Company |
From: Amit K. <ami...@en...> - 2013-05-17 03:57:51
|
On 15 May 2013 12:53, Amit Khandekar <ami...@en...> wrote:

> In XC, the way COPY is implemented is that for each record, we read the
> whole line into memory and then pass it to the datanode as-is. If there
> are non-shippable default column expressions, we evaluate the default
> values, convert them into output form, and append them to the data row.
>
> In the presence of BR triggers, ExecBRInsertTriggers() does not currently
> get called, because of the way we skip the whole PG code block; instead we
> just send the data row as-is, optionally appending default values to it.
>
> What we need to do is convert the tuple returned by the BR triggers into
> a text data row, but the text should be in COPY format. This is because we
> need to send the data row to the datanode using a COPY command, so it
> requires correct COPY formatting, such as escape sequences.
>
> For this, we need to call the function CopyOneRowTo() that is used by
> COPY TO. This will make sure it emits the data row in the COPY format. But
> we need to create a temporary CopyState, because CopyOneRowTo() needs one.
> We can derive it from the CopyState that is already created for COPY FROM.
> Most of the fields remain the same, except that we need to re-assign
> CopyState->line_buf and CopyState->rowcontext.
>
> This will save us from writing code to make sure the new output data row
> generated by BR triggers complies with the COPY data format.
>
> I had already done a similar thing for appending default values to the
> data row: we call functions like CopyAttributeOutCSV() and CopyInt32() to
> append the values to the data row in COPY format. There, we did not
> require CopyOneRowTo(), because we did not need the complete row; we
> needed to append only a subset of columns to the existing data row.
>
> Comments/suggestions welcome.

I have hit a dead end in the way I am allowing the BR triggers to execute
during COPY.

It is not possible to send any non-COPY messages to the backend when the
client-server protocol is in COPY mode. Which means it is not possible to
send any commands to the datanode when the connection is in
DN_CONNECTION_STATE_COPY_IN state. When the trigger function executes an
SQL query, that query can't be executed, because it's not possible to
exchange any non-COPY messages, let alone send a query to the backend
(i.e. the datanode).

This naturally happens only for non-shippable triggers. If triggers are
executed on the datanode, then this issue does not arise.

We need to devise some other means to support non-shippable triggers for
COPY. Maybe we would end up sending INSERT commands to the datanode instead
of a COPY command if there are non-shippable triggers. Each data row would
be sent as parameters to the insert query. This operation would be slow,
but possible.
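The escaping requirement that motivates reusing CopyOneRowTo() can be
illustrated with a small self-contained sketch. This mimics the documented
COPY TEXT format rules in Python; it is not the actual C code, where
CopyOneRowTo() and a derived CopyState do this work.

```python
# COPY TEXT format: tab is the field delimiter and backslash introduces
# escape sequences, so these characters inside a value must be escaped
# before the row is forwarded to the datanode over COPY.
COPY_ESCAPES = {"\\": "\\\\", "\t": "\\t", "\n": "\\n", "\r": "\\r"}


def copy_text_escape(value):
    """Serialize one attribute value the way COPY TEXT output does."""
    if value is None:
        return "\\N"  # the COPY TEXT null marker
    return "".join(COPY_ESCAPES.get(ch, ch) for ch in str(value))


def copy_text_row(values):
    """Join escaped attributes with tabs into one COPY data line."""
    return "\t".join(copy_text_escape(v) for v in values)
```

A tuple rewritten by a BR trigger may contain tabs, newlines, or NULLs that
the raw input line never had, which is why the modified tuple cannot simply
be forwarded as-is but has to be re-serialized under these rules.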