From: Koichi S. <koi...@gm...> - 2013-03-08 09:01:38
Thanks Abbas for the fix. ---------- Koichi Suzuki 2013/3/8 Abbas Butt <abb...@en...>: > Attached please find patch to fix 3607290. > > Regression shows no extra failure. > > Test cases for this have already been submitted in email subject [Patch to > fix a crash in COPY TO from a replicated table] > > -- > -- > Abbas > Architect > EnterpriseDB Corporation > The Enterprise PostgreSQL Company > > Phone: 92-334-5100153 > > Website: www.enterprisedb.com > EnterpriseDB Blog: http://blogs.enterprisedb.com/ > Follow us on Twitter: http://www.twitter.com/enterprisedb > > This e-mail message (and any attachment) is intended for the use of > the individual or entity to whom it is addressed. This message > contains information from EnterpriseDB Corporation that may be > privileged, confidential, or exempt from disclosure under applicable > law. If you are not the intended recipient or authorized to receive > this for the intended recipient, any use, dissemination, distribution, > retention, archiving, or copying of this communication is strictly > prohibited. If you have received this e-mail in error, please notify > the sender immediately by reply e-mail and delete this message. > ------------------------------------------------------------------------------ > Symantec Endpoint Protection 12 positioned as A LEADER in The Forrester > Wave(TM): Endpoint Security, Q1 2013 and "remains a good choice" in the > endpoint security space. For insight on selecting the right partner to > tackle endpoint security challenges, access the full report. > http://p.sf.net/sfu/symantec-dev2dev > _______________________________________________ > Postgres-xc-developers mailing list > Pos...@li... > https://lists.sourceforge.net/lists/listinfo/postgres-xc-developers > |
From: Koichi S. <koi...@gm...> - 2013-03-08 08:40:39
I found that the documentation does not reflect these changes. I visited the code and found they're implemented. Could you take a look at gram.y? We need to revise the document to include all these changes. Regards; ---------- Koichi Suzuki 2013/3/8 Abbas Butt <abb...@en...>: > Hi, > ALTER TABLE REDISTRIBUTE does not support TO NODE clause: > How would we redistribute data after e.g. adding a node? > OR > How would we redistribute the data before removing a node? > > I think this functionality will have to be added in the system to complete > the whole picture. > > -- > Abbas > Architect > EnterpriseDB Corporation > The Enterprise PostgreSQL Company > > Phone: 92-334-5100153 > > Website: www.enterprisedb.com > EnterpriseDB Blog: http://blogs.enterprisedb.com/ > Follow us on Twitter: http://www.twitter.com/enterprisedb > > This e-mail message (and any attachment) is intended for the use of > the individual or entity to whom it is addressed. This message > contains information from EnterpriseDB Corporation that may be > privileged, confidential, or exempt from disclosure under applicable > law. If you are not the intended recipient or authorized to receive > this for the intended recipient, any use, dissemination, distribution, > retention, archiving, or copying of this communication is strictly > prohibited. If you have received this e-mail in error, please notify > the sender immediately by reply e-mail and delete this message. > ------------------------------------------------------------------------------ > Symantec Endpoint Protection 12 positioned as A LEADER in The Forrester > Wave(TM): Endpoint Security, Q1 2013 and "remains a good choice" in the > endpoint security space. For insight on selecting the right partner to > tackle endpoint security challenges, access the full report. > http://p.sf.net/sfu/symantec-dev2dev > _______________________________________________ > Postgres-xc-developers mailing list > Pos...@li... > https://lists.sourceforge.net/lists/listinfo/postgres-xc-developers >
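For readers of this thread, the clauses in question look roughly like the following. This is a hedged sketch pieced together from the discussion above (the grammar lives in gram.y and is implemented but undocumented per this thread); node names such as dn1 are made up, and whether the clauses can be combined in a single statement is not settled here.

    -- Sketch of the redistribution clauses discussed above; node names are examples.
    ALTER TABLE t DISTRIBUTE BY HASH(a);   -- change the distribution strategy
    ALTER TABLE t ADD NODE (dn_new);       -- redistribute onto a newly added node
    ALTER TABLE t DELETE NODE (dn_old);    -- drain a node before removing it
    ALTER TABLE t TO NODE (dn1, dn2);      -- pin the table to an explicit node list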
From: Amit K. <ami...@en...> - 2013-03-08 08:38:37
On 6 March 2013 14:16, Ashutosh Bapat <ash...@en...> wrote: > Hi Amit, > The patch looks good and is not the track for parameter handling. I see that > we are relying more on the data produced by PG and standard planner rather > than infering ourselves in XC. So, this looks good improvement. > > Here are my comments > Tests > ----- > > 1. It seems testing the parameter handling for queries arising from plpgsql > functions. The function prm_func() seems to be doing that. Can you please > add > some comments in this function specifying what is being tested in various > sets > of statements in function. > 2. Also, it seems to be using two tables prm_emp1 and prm_emp2. The first > one is > being used to populate the other one and a variable inside the function. > Later only the other is being used. Can we just use a single table > instead of > two? > 3. Can we use an existing volatile function instead of a new one like > prm_volfunc()? Done these changes. also added DELETE scenario. > > Code > ---- > 1. Please use prefixes rq_ and rqs_ for the newly added members of > RemoteQuery > and RemoteQueryState structures resp. This allows to locate the usage of > these members easily through cscope/tag etc. As a general rule, we should > always add a prefix for members of commonly used structures or members > which > use very common variable names. rq_, rqs_, en_ are being used for > RemoteQuery, RemoteQueryState and ExecNodes resp. Done. > 2. Is it possible to infer value of has_internal_params from rest of the > members > of RemoteQuery structure? If so, can we drop this member and use > inference logic? Could not find any information that we can safely infer the param types from. > 3. Following code needs more commenting in DMLParamListToDataRow() > 5027 /* Set the remote param types if they are not already set */ > The code below, this comments seems to execute only the first time the > RemoteQueryState is used. Please elaborate this in the comment, lest the > reader is confused as to when this case can happen. Done. > 4. In code below > 5098 /* copy data to the buffer */ > 5099 *datarow = palloc(buf.len); > 5100 memcpy(*datarow, buf.data, buf.len); > 5101 rq_state->paramval_len = buf.len; > 5102 pfree(buf.data); > Can we use datarow = buf.data. The memory context in both the cases will > have > same life. We will save calls to palloc, pfree and memcpy. You can add > comments about why this assignment is safe. We do this type of assignment > at > other places too. See pgxc_rqplan_build_statement. Similar change is > needed > in ExternParamListToDataRow(). Right. Done. > 5. More elaboration needed in prologue of DMLParamListToDataRow(). See some > hints below. We need to elaborate on the purpose of such conversion. Name > of the > function is misleading, there is not ParamList involved to convert from. > We are > converting from TupleSlot. > 5011 /* -------------------------------- > 5012 * DMLParamListToDataRow > 5013 * Obtain a copy of <given> slot's data row <in what form?>, and copy > it into > 5014 * <passed in/given> RemoteQueryState.paramval_data. Also set > remote_param_types <to what?> > 5015 * The slot itself is undisturbed. > 5016 * -------------------------------- Done. Also changed the names of the both internal and extern param functions. > 6. Variable declarations in DMLParamListToDataRow() need to aligned. We > align > the start of declaration and the variable names themselves. Done. This was existing code. But corrected it. > 7. 
In create_remotedml_plan(), we were using SetRemoteStatementName to have > all > the parameter setting in one place. But you have instead set them > explicitly > in the function itself. Can you please revert back the change? The > remote_param_types set here are being over-written in > DMLParamListToDataRow(). What if the param types/param numbers obtained > in both these > functions are different? Can we add some asserts to check this? The remote_param_types set in create_remotedml_plan() belong to RemoteQuery, whereas those that are set in DMLParamListToDataRow() belong to RemoteQueryState, so they are not overwritten. But the remote param types that are set in create_remotedml_plan are not required. I have realized, that part is redundant, and I have removed it. The internal params are inferred in DMLParamListToDataRow(). > > > > On Tue, Feb 26, 2013 at 9:51 AM, Amit Khandekar > <ami...@en...> wrote: >> >> There has been errors like : >> "Cannot find parameter $4" or >> "Bind supplies 4 parameters while Prepare needs 8 parameters" that we >> have been getting for specific scenarios. These scenarios come up in >> plpgsql functions. This is the root cause: >> >> If PLpgSQL_datum.dtype is not a simple type (PLPGSQL_DTYPE_VAR), the >> parameter types (ParamExternData.ptype) for such plpgsql functions are >> not set until when the values are actually populated. Example of such >> variables is record variable without %rowtype specification. The >> ParamListInfo.paramFetch hook function is called when needed to fetch >> the such parameter types. In the XC function >> pgxc_set_remote_parameters(), we do not consider this, and we check >> only the ParamExternData.ptype to see if parameters are present, and >> end up with lesser parameters than the actual parameters, sometimes >> even ending up with 0 parameter types. >> >> During trigger support implementation, it was discovered that due to >> this issue, >> the NEW.field or OLD.field cannot be used directly in SQL statements. >> >> Actually we don't even need parameter types to be set at plan time in >> XC. We only need them at the BIND message. There, we can anyway infer >> the types from the tuple descriptor. So the attached patch removes all >> the places where parameter types are set, and derives them when the >> BIND data row is built. >> >> I have not touched the SetRemoteStatementName function in this patch. >> There can be scenarios where user calls PREPARE using parameter types, >> and in such cases it is better to use these parameters in >> SetRemoteStatementName() being called from BuildCachedPlan with >> non-NULL boundParams. Actually use of parameter types during PREPARE >> and rebuilding cached plans etc will be dealt further after this one. >> So, I haven't removed param types altogether. >> >> We also need to know whether the parameters are supplied through >> source data plan (DMLs) or they are external. So added a field >> has_internal_params in RemoteQuery to make this difference explicit. >> Data row and parameters types are built in a different manner for DMLs >> and non-DMLs. >> >> Moved the datarow generation function from execTuples.c to execRemote.c . >> >> Regressions >> ----------------- >> >> There is a parameter related error in plpgsql.sql test, which does not >> occur now, so corrected the expected output. It still does not show >> the exact output because of absence of trigger support. >> >> Added new test xc_params.sql which would be further extended later. 
>> >> >> ------------------------------------------------------------------------------ >> Everyone hates slow websites. So do we. >> Make your web apps faster with AppDynamics >> Download AppDynamics Lite for free today: >> http://p.sf.net/sfu/appdyn_d2d_feb >> _______________________________________________ >> Postgres-xc-developers mailing list >> Pos...@li... >> https://lists.sourceforge.net/lists/listinfo/postgres-xc-developers >> > > > > -- > Best Wishes, > Ashutosh Bapat > EntepriseDB Corporation > The Enterprise Postgres Company |
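One concrete point from the review above, sketched in C: when the StringInfo buffer and the datarow share the same memory context, the buffer can be handed over instead of copied. The names (datarow, rq_state, buf) follow the thread; the snippet is illustrative, not the actual patch.

    /* Before: copy the serialized parameters into a fresh allocation. */
    *datarow = palloc(buf.len);
    memcpy(*datarow, buf.data, buf.len);
    rq_state->paramval_len = buf.len;
    pfree(buf.data);

    /* After: hand the StringInfo buffer over directly. This is safe because
     * both allocations live in the same memory context, so their lifetimes
     * match; it saves a palloc, a memcpy and a pfree per data row. */
    *datarow = buf.data;
    rq_state->paramval_len = buf.len;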
From: Koichi S. <koi...@gm...> - 2013-03-08 08:35:11
Does it work correctly if gtm/gtm_proxy is not running? I found PQping is lighter and easier to use; it is a dedicated API to check if the server is running. It is independent of users/databases and does not require any password. It just checks whether the target is working. I think this is more flexible to be used in various setups. Regards; ---------- Koichi Suzuki 2013/3/8 Nikhil Sontakke <ni...@st...>: > I use a simple 'psql -c "\x"' query to monitor coordinator/datanodes. > The psql call ensures that the connection protocol is followed and > accepted by that node. It then does an innocuous activity on the psql > side before exiting. Works well for me. > > Regards, > Nikhils > > On Fri, Mar 8, 2013 at 12:48 PM, Koichi Suzuki > <koi...@gm...> wrote: >> Okay, here's a patch which uses PQping. This is new to 9.1 and is >> extremely simple and matches my needs. >> >> Regards; >> ---------- >> Koichi Suzuki >> >> >> 2013/3/8 Michael Paquier <mic...@gm...>: >>> >>> >>> On Fri, Mar 8, 2013 at 12:13 PM, Koichi Suzuki <koi...@gm...> >>> wrote: >>>> >>>> Because 9.3 merge will not be done in 1.1, I don't think it's feasible >>>> at present. Second means will be to use PQ* functions. Anyway, >>>> this will be provided by pgxc_monitor. May be a good idea to use >>>> custom background, but this could be too much because the requirement >>>> is very small. >>> >>> In this case use something like PQPing or similar, but simply do not involve >>> core. There would be underlying performance impact for sure. >>> -- >>> Michael >> >> ------------------------------------------------------------------------------ >> Symantec Endpoint Protection 12 positioned as A LEADER in The Forrester >> Wave(TM): Endpoint Security, Q1 2013 and "remains a good choice" in the >> endpoint security space. For insight on selecting the right partner to >> tackle endpoint security challenges, access the full report. >> http://p.sf.net/sfu/symantec-dev2dev >> _______________________________________________ >> Postgres-xc-developers mailing list >> Pos...@li... >> https://lists.sourceforge.net/lists/listinfo/postgres-xc-developers >> > > > > -- > StormDB - http://www.stormdb.com > The Database Cloud > Postgres-XC Support and Service
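A minimal, self-contained sketch of the PQping-based probe discussed here. PQping is a real libpq call (new in PostgreSQL 9.1); the connection string below is a placeholder for the node being monitored.

    #include <stdio.h>
    #include <libpq-fe.h>

    /* Probe a node with PQping: no credentials, no database access, just
     * "is the server at this address accepting connections?" */
    int
    main(void)
    {
        switch (PQping("host=localhost port=5432"))   /* placeholder target */
        {
            case PQPING_OK:
                printf("node is up and accepting connections\n");
                return 0;
            case PQPING_REJECT:
                printf("node is up but rejecting connections (e.g. still starting)\n");
                return 1;
            default:    /* PQPING_NO_RESPONSE, PQPING_NO_ATTEMPT */
                printf("node unreachable or conninfo invalid\n");
                return 2;
        }
    }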
From: Pavan D. <pav...@gm...> - 2013-03-08 08:21:17
On Fri, Mar 8, 2013 at 12:46 PM, Koichi Suzuki <koi...@gm...> wrote: > Thank you Pavan. > > I think I added the lines in question. Because you are the original > author of gtm, it would be wonderful if you could take a look at it. > Looks good to me. The list in question is global to the process and hence the list cells must be allocated in the process-level top context, i.e. TopMostMemoryContext. I checked, and there are other places in the code where it's explained why it's necessary to allocate them in the said context. Thanks, Pavan -- Pavan Deolasee http://www.linkedin.com/in/pavandeolasee
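A sketch of the allocation pattern being described. The names gtm_lappend, gt_open_transactions and TopMostMemoryContext come from this thread; the surrounding code is simplified for illustration.

    /* gt_open_transactions is global to the GTM process, so its list cells
     * must outlive the thread that appends to them: allocate in the
     * process-level context, not the thread-local "TopMemoryContext". */
    MemoryContext oldContext;

    oldContext = MemoryContextSwitchTo(TopMostMemoryContext);
    GTMTransactions.gt_open_transactions =
        gtm_lappend(GTMTransactions.gt_open_transactions, gtm_txninfo);
    MemoryContextSwitchTo(oldContext);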
From: Abbas B. <abb...@en...> - 2013-03-08 08:13:31
Hi, ALTER TABLE REDISTRIBUTE does not support TO NODE clause: How would we redistribute data after e.g. adding a node? OR How would we redistribute the data before removing a node? I think this functionality will have to be added in the system to complete the whole picture. -- Abbas Architect EnterpriseDB Corporation The Enterprise PostgreSQL Company Phone: 92-334-5100153 Website: www.enterprisedb.com EnterpriseDB Blog: http://blogs.enterprisedb.com/ Follow us on Twitter: http://www.twitter.com/enterprisedb This e-mail message (and any attachment) is intended for the use of the individual or entity to whom it is addressed. This message contains information from EnterpriseDB Corporation that may be privileged, confidential, or exempt from disclosure under applicable law. If you are not the intended recipient or authorized to receive this for the intended recipient, any use, dissemination, distribution, retention, archiving, or copying of this communication is strictly prohibited. If you have received this e-mail in error, please notify the sender immediately by reply e-mail and delete this message. |
From: Abbas B. <abb...@en...> - 2013-03-08 08:10:13
Attached please find patch to fix 3607290. Regression shows no extra failure. Test cases for this have already been submitted in email subject [Patch to fix a crash in COPY TO from a replicated table] -- -- Abbas Architect EnterpriseDB Corporation The Enterprise PostgreSQL Company Phone: 92-334-5100153 Website: www.enterprisedb.com EnterpriseDB Blog: http://blogs.enterprisedb.com/ Follow us on Twitter: http://www.twitter.com/enterprisedb This e-mail message (and any attachment) is intended for the use of the individual or entity to whom it is addressed. This message contains information from EnterpriseDB Corporation that may be privileged, confidential, or exempt from disclosure under applicable law. If you are not the intended recipient or authorized to receive this for the intended recipient, any use, dissemination, distribution, retention, archiving, or copying of this communication is strictly prohibited. If you have received this e-mail in error, please notify the sender immediately by reply e-mail and delete this message. |
From: Nikhil S. <ni...@st...> - 2013-03-08 08:09:35
I use a simple 'psql -c "\x"' query to monitor coordinator/datanodes. The psql call ensures that the connection protocol is followed and accepted by that node. It then does an innocuous activity on the psql side before exiting. Works well for me. Regards, Nikhils On Fri, Mar 8, 2013 at 12:48 PM, Koichi Suzuki <koi...@gm...> wrote: > Okay, here's a patch which uses PQping. This is new to 9.1 and is > extremely simple and matches my needs. > > Regards; > ---------- > Koichi Suzuki > > > 2013/3/8 Michael Paquier <mic...@gm...>: >> >> >> On Fri, Mar 8, 2013 at 12:13 PM, Koichi Suzuki <koi...@gm...> >> wrote: >>> >>> Because 9.3 merge will not be done in 1.1, I don't think it's feasible >>> at present. Second means will be to use PQ* functions. Anyway, >>> this will be provided by pgxc_monitor. May be a good idea to use >>> custom background, but this could be too much because the requirement >>> is very small. >> >> In this case use something like PQPing or similar, but simply do not involve >> core. There would be underlying performance impact for sure. >> -- >> Michael > > ------------------------------------------------------------------------------ > Symantec Endpoint Protection 12 positioned as A LEADER in The Forrester > Wave(TM): Endpoint Security, Q1 2013 and "remains a good choice" in the > endpoint security space. For insight on selecting the right partner to > tackle endpoint security challenges, access the full report. > http://p.sf.net/sfu/symantec-dev2dev > _______________________________________________ > Postgres-xc-developers mailing list > Pos...@li... > https://lists.sourceforge.net/lists/listinfo/postgres-xc-developers > -- StormDB - http://www.stormdb.com The Database Cloud Postgres-XC Support and Service |
From: Abbas B. <abb...@en...> - 2013-03-08 07:25:54
Attached please find revised patch that provides the following in addition to what it did earlier. 1. Uses GetPreferredReplicationNode() instead of list_truncate() 2. Adds test cases to xc_alter_table and xc_copy. I tested the following in reasonable detail to find whether any other caller of GetRelationNodes() needs some fixing or not and found that none of the other callers needs any more fixing. I tested a) copy b) alter table redistribute c) utilities d) dmls etc However while testing ALTER TABLE, I found that replicated to hash is not working correctly. This test case fails, since only SIX rows are expected in the final result. test=# create table t_r_n12(a int, b int) distribute by replication to node (DATA_NODE_1, DATA_NODE_2); CREATE TABLE test=# insert into t_r_n12 values(1,777),(3,4),(5,6),(20,30),(NULL,999), (NULL, 999); INSERT 0 6 test=# -- rep to hash test=# ALTER TABLE t_r_n12 distribute by hash(a); ALTER TABLE test=# SELECT * FROM t_r_n12 order by 1; a | b ----+----- 1 | 777 3 | 4 5 | 6 20 | 30 | 999 | 999 | 999 | 999 (8 rows) test=# drop table t_r_n12; DROP TABLE I have added a source forge bug tracker id to this case (Artifact 3607290<https://sourceforge.net/tracker/?func=detail&aid=3607290&group_id=311227&atid=1310232>). The reason for this error is that the function distrib_delete_hash does not take into account that the distribution column can be null. I will provide a separate fix for that one. Regression shows no extra failure except that test case xc_alter_table would fail until 3607290 is fixed. Regards On Mon, Feb 25, 2013 at 10:18 AM, Ashutosh Bapat < ash...@en...> wrote: > Thanks a lot Abbas for this quick fix. > > I am sorry, it's caused by my refactoring of GetRelationNodes(). > > If possible, can you please examine the other callers of > GetRelationNodes() which would face the problems, esp. the ones for DML and > utilities. This is other instance, where deciding the nodes to execute on > at the time of execution will help. > > About the fix > Can you please use GetPreferredReplicationNode() instead of > list_truncate()? It will pick the preferred node instead of first one. If > you find more places where we need this fix, it might be better to create a > wrapper function and use it at those places. > > On Sat, Feb 23, 2013 at 2:59 PM, Abbas Butt <abb...@en...>wrote: > >> Hi, >> PFA a patch to fix a crash when COPY TO is used on a replicated table. >> >> This test case produces a crash >> >> create table tab_rep(a int, b int) distribute by replication; >> insert into tab_rep values(1,2), (3,4), (5,6), (7,8); >> COPY tab_rep (a, b) TO stdout; >> >> Here is a description of the problem and the fix >> In case of a read from a replicated table GetRelationNodes() >> returns all nodes and expects that the planner can choose >> one depending on the rest of the join tree. >> In case of COPY TO we should choose the first one in the node list >> This fixes a system crash and makes pg_dump work fine. >> >> -- >> Abbas >> Architect >> EnterpriseDB Corporation >> The Enterprise PostgreSQL Company >> >> Phone: 92-334-5100153 >> >> Website: www.enterprisedb.com >> EnterpriseDB Blog: http://blogs.enterprisedb.com/ >> Follow us on Twitter: http://www.twitter.com/enterprisedb >> >> This e-mail message (and any attachment) is intended for the use of >> the individual or entity to whom it is addressed. 
This message >> contains information from EnterpriseDB Corporation that may be >> privileged, confidential, or exempt from disclosure under applicable >> law. If you are not the intended recipient or authorized to receive >> this for the intended recipient, any use, dissemination, distribution, >> retention, archiving, or copying of this communication is strictly >> prohibited. If you have received this e-mail in error, please notify >> the sender immediately by reply e-mail and delete this message. >> >> ------------------------------------------------------------------------------ >> Everyone hates slow websites. So do we. >> Make your web apps faster with AppDynamics >> Download AppDynamics Lite for free today: >> http://p.sf.net/sfu/appdyn_d2d_feb >> _______________________________________________ >> Postgres-xc-developers mailing list >> Pos...@li... >> https://lists.sourceforge.net/lists/listinfo/postgres-xc-developers >> >> > > > -- > Best Wishes, > Ashutosh Bapat > EntepriseDB Corporation > The Enterprise Postgres Company > -- -- Abbas Architect EnterpriseDB Corporation The Enterprise PostgreSQL Company Phone: 92-334-5100153 Website: www.enterprisedb.com EnterpriseDB Blog: http://blogs.enterprisedb.com/ Follow us on Twitter: http://www.twitter.com/enterprisedb This e-mail message (and any attachment) is intended for the use of the individual or entity to whom it is addressed. This message contains information from EnterpriseDB Corporation that may be privileged, confidential, or exempt from disclosure under applicable law. If you are not the intended recipient or authorized to receive this for the intended recipient, any use, dissemination, distribution, retention, archiving, or copying of this communication is strictly prohibited. If you have received this e-mail in error, please notify the sender immediately by reply e-mail and delete this message. |
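A sketch of the read-path fix under review: for a replicated table, GetRelationNodes() returns every replica, so a single-node consumer such as COPY TO must reduce the list to one entry. GetRelationNodes() and GetPreferredReplicationNode() are named in this thread; the surrounding control flow (including the IsRelationReplicated check) is an illustrative assumption, not the submitted patch.

    /* Illustrative only: reduce a replicated table's node list to the
     * preferred replica instead of truncating to the first element. */
    if (IsRelationReplicated(rel_loc_info))
        exec_nodes->nodeList =
            GetPreferredReplicationNode(exec_nodes->nodeList);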
From: Koichi S. <koi...@gm...> - 2013-03-08 07:18:55
Okay, here's a patch which uses PQping. This is new to 9.1 and is extremely simple and matches my needs. Regards; ---------- Koichi Suzuki 2013/3/8 Michael Paquier <mic...@gm...>: > > > On Fri, Mar 8, 2013 at 12:13 PM, Koichi Suzuki <koi...@gm...> > wrote: >> >> Because 9.3 merge will not be done in 1.1, I don't think it's feasible >> at present. Second means will be to use PQ* functions. Anyway, >> this will be provided by pgxc_monitor. May be a good idea to use >> custom background, but this could be too much because the requirement >> is very small. > > In this case use something like PQPing or similar, but simply do not involve > core. There would be underlying performance impact for sure. > -- > Michael |
From: Koichi S. <koi...@gm...> - 2013-03-08 07:16:54
Thank you Pavan. I think I added the lines in question. Because you are the original author of gtm, it would be wonderful if you could take a look at it. Regards; ---------- Koichi Suzuki 2013/3/8 Pavan Deolasee <pav...@gm...>: > On Thu, Mar 7, 2013 at 5:57 PM, Nikhil Sontakke <ni...@st...> wrote: >> Hi, >> >> PFA, patch which fixes an obnoxious crash in GTM Standby. This one was >> a tough nut to crack down. The crash is as below >> >> Program terminated with signal 11, Segmentation fault. >> #0 0x00000000004253c9 in gtm_lappend () >> Missing separate debuginfos, use: debuginfo-install >> glibc-2.12-1.80.el6_3.6.x86_64 libgcc-4.4.6-4.el6.x86_64 >> (gdb) bt >> #0 0x00000000004253c9 in gtm_lappend () >> #1 0x000000000040ad77 in GTM_BkupBeginTransactionGetGXIDMulti.clone.0 () >> #2 0x000000000040aedb in ProcessBkupBeginTransactionGetGXIDCommand () >> #3 0x000000000040417c in GTM_ThreadMain () >> >> >> >> IMHO, using TopMemoryContext to mean the top context of each thread is >> pretty confusing. Bad choice of name for the memory context according >> to me. Maybe we could have avoided this crash if we had used a >> different name for the context. >> >> This "TopMemoryContext" goes away when that thread goes away. So ain't >> nothing TOP about it. > > Well, let me at least try and defend because that's my baby :-) I > think I chose name TopTransactionContext because I wanted to give a > thread in GTM as much the same treatment as a process gets in > Postgres. So I stick to the same names, but invented TopMost to mean > the context which is global to the GTM process. My idea was and still > is that we should avoid using TopMost as much as we can because that > memory leaks will be hard to plug-in. Remember, we expect GTM to run > as long as any one component of the cluster is running.. which pretty > much means forever because while any one component of the cluster can > go down, but not the entire cluster. > > But if its causing confusion, I won't mind adding a code commentary to > explain the difference. Clearly my fault. > > Thanks, > Pavan > > -- > Pavan Deolasee > http://www.linkedin.com/in/pavandeolasee > > ------------------------------------------------------------------------------ > Symantec Endpoint Protection 12 positioned as A LEADER in The Forrester > Wave(TM): Endpoint Security, Q1 2013 and "remains a good choice" in the > endpoint security space. For insight on selecting the right partner to > tackle endpoint security challenges, access the full report. > http://p.sf.net/sfu/symantec-dev2dev > _______________________________________________ > Postgres-xc-developers mailing list > Pos...@li... > https://lists.sourceforge.net/lists/listinfo/postgres-xc-developers
From: Nikhil S. <ni...@st...> - 2013-03-08 06:39:21
>> IMHO, using TopMemoryContext to mean the top context of each thread is >> pretty confusing. Bad choice of name for the memory context according >> to me. Maybe we could have avoided this crash if we had used a >> different name for the context. >> >> This "TopMemoryContext" goes away when that thread goes away. So ain't >> nothing TOP about it. > > Well, let me at least try and defend because that's my baby :-) I > think I chose name TopTransactionContext because I wanted to give a > thread in GTM as much the same treatment as a process gets in > Postgres. So I stick to the same names, but invented TopMost to mean > the context which is global to the GTM process. My idea was and still > is that we should avoid using TopMost as much as we can because that > memory leaks will be hard to plug-in. Remember, we expect GTM to run > as long as any one component of the cluster is running.. which pretty > much means forever because while any one component of the cluster can > go down, but not the entire cluster. > > But if its causing confusion, I won't mind adding a code commentary to > explain the difference. Clearly my fault. > Thanks for the explanation Pavan. I come from the Postgres source code background and when I started looking at this problem I looked at the memory context and thought everything is fine because it said "TopMemoryContext". That confused me a little bit. Had it said "ThreadTopContext" it would have been much more readable IMHO. Regards, Nikhils -- StormDB - http://www.stormdb.com The Database Cloud Postgres-XC Support and Service |
From: Pavan D. <pav...@gm...> - 2013-03-08 04:46:34
On Thu, Mar 7, 2013 at 5:57 PM, Nikhil Sontakke <ni...@st...> wrote: > Hi, > > PFA, patch which fixes an obnoxious crash in GTM Standby. This one was > a tough nut to crack down. The crash is as below > > Program terminated with signal 11, Segmentation fault. > #0 0x00000000004253c9 in gtm_lappend () > Missing separate debuginfos, use: debuginfo-install > glibc-2.12-1.80.el6_3.6.x86_64 libgcc-4.4.6-4.el6.x86_64 > (gdb) bt > #0 0x00000000004253c9 in gtm_lappend () > #1 0x000000000040ad77 in GTM_BkupBeginTransactionGetGXIDMulti.clone.0 () > #2 0x000000000040aedb in ProcessBkupBeginTransactionGetGXIDCommand () > #3 0x000000000040417c in GTM_ThreadMain () > > > > IMHO, using TopMemoryContext to mean the top context of each thread is > pretty confusing. Bad choice of name for the memory context according > to me. Maybe we could have avoided this crash if we had used a > different name for the context. > > This "TopMemoryContext" goes away when that thread goes away. So ain't > nothing TOP about it. Well, let me at least try and defend because that's my baby :-) I think I chose name TopTransactionContext because I wanted to give a thread in GTM as much the same treatment as a process gets in Postgres. So I stick to the same names, but invented TopMost to mean the context which is global to the GTM process. My idea was and still is that we should avoid using TopMost as much as we can because that memory leaks will be hard to plug-in. Remember, we expect GTM to run as long as any one component of the cluster is running.. which pretty much means forever because while any one component of the cluster can go down, but not the entire cluster. But if its causing confusion, I won't mind adding a code commentary to explain the difference. Clearly my fault. Thanks, Pavan -- Pavan Deolasee http://www.linkedin.com/in/pavandeolasee |
From: Michael P. <mic...@gm...> - 2013-03-08 03:32:18
On Fri, Mar 8, 2013 at 12:13 PM, Koichi Suzuki <koi...@gm...>wrote: > Because 9.3 merge will not be done in 1.1, I don't think it's feasible > at present. Second means will be to use PQ* functions. Anyway, > this will be provided by pgxc_monitor. May be a good idea to use > custom background, but this could be too much because the requirement > is very small. > In this case use something like PQPing or similar, but simply do not involve core. There would be underlying performance impact for sure. -- Michael |
From: Koichi S. <koi...@gm...> - 2013-03-08 03:13:21
Because 9.3 merge will not be done in 1.1, I don't think it's feasible at present. Second means will be to use PQ* functions. Anyway, this will be provided by pgxc_monitor. May be a good idea to use custom background, but this could be too much because the requirement is very small. Regards; ---------- Koichi Suzuki 2013/3/8 Michael Paquier <mic...@gm...>: > > > On Fri, Mar 8, 2013 at 11:51 AM, Koichi Suzuki <koi...@gm...> > wrote: >> >> I didn't have reactions to this. Again, we need to detect if >> coordinator/datanode is running even when gtm is down. Select 1 or >> select now does not for this purpose (it works for log shipping slave >> though). >> >> I'd like to start with the watchdog patch I submitted last July, >> attached just in case. This includes watchdog for gtm/gtmproxies. >> This may not be needed so far. >> >> An alternative is just to test if connection with one of PQ* functions >> succeeds. A bit of handling at the server is involved in this >> function and it could be used to detect if the server accepts >> connections. >> >> Please understand this is specific to XC, not to PG. > > Watchdog processes have no place inside the core code. I think that merge > with 9.3 will be done in a close future, so why not using an extension based > on the facility for custom background workers introduced in 9.3. This could > even be used with Postgres itself if it is nicely implemented, you know? > -- > Michael |
From: Michael P. <mic...@gm...> - 2013-03-08 03:03:00
On Fri, Mar 8, 2013 at 11:51 AM, Koichi Suzuki <koi...@gm...>wrote: > I didn't have reactions to this. Again, we need to detect if > coordinator/datanode is running even when gtm is down. Select 1 or > select now does not for this purpose (it works for log shipping slave > though). > > I'd like to start with the watchdog patch I submitted last July, > attached just in case. This includes watchdog for gtm/gtmproxies. > This may not be needed so far. > > An alternative is just to test if connection with one of PQ* functions > succeeds. A bit of handling at the server is involved in this > function and it could be used to detect if the server accepts > connections. > > Please understand this is specific to XC, not to PG. > Watchdog processes have no place inside the core code. I think that merge with 9.3 will be done in a close future, so why not using an extension based on the facility for custom background workers introduced in 9.3. This could even be used with Postgres itself if it is nicely implemented, you know? -- Michael |
From: Koichi S. <koi...@gm...> - 2013-03-08 02:52:03
I didn't get any reactions to this. Again, we need to detect if a coordinator/datanode is running even when gtm is down. Select 1 or select now() does not work for this purpose (it works for a log-shipping slave though). I'd like to start with the watchdog patch I submitted last July, attached just in case. This includes a watchdog for gtm/gtm_proxies. This may not be needed so far. An alternative is just to test if a connection with one of the PQ* functions succeeds. A bit of handling at the server is involved in this function and it could be used to detect if the server accepts connections. Please understand this is specific to XC, not to PG. Any input is welcome. Regards; ---------- Koichi Suzuki 2013/2/21 Koichi Suzuki <koi...@gm...>: > Hello, > > I found that "select 1" does not work to detect datanode/coordinator > crash correctly when gtm/gtm_proxy crashes. When gtm/gtm_proxy > crashes, "select 1" returns an error and the monitoring program (HA > middleware or other operation support program) determines that the > coordinator/datanode crashed, which is wrong. > > So we need another means to detect that a coordinator/datanode is running but > gtm/gtm_proxy crashed. One solution will be to make "select 1" not > return an error. In this case, we may need another means to detect if > the coordinator/datanode crashes. It could be very complicated and could > leave a very inconsistent view visible. I think a cleaner solution is > to provide a "watchdog" to tell that the server loop is running and is ready > to accept connections. I understand this is a duplicate implementation > in the case of PostgreSQL itself but is needed for XC. I also > understand that this could conflict when PG itself implements a similar > feature. This kind of risk is found in many other places in XC and I > believe a watchdog timer is a good solution for monitoring > coordinator/datanode independent of gtm status. > > Any feedback? > ---------- > Koichi Suzuki
From: Koichi S. <koi...@gm...> - 2013-03-08 02:31:46
Yes, memory context usage of this part is not correct and it leaves garbage. I will commit it if no further input is given. Regards; ---------- Koichi Suzuki 2013/3/7 Nikhil Sontakke <ni...@st...>: > Hi, > > PFA, patch which fixes an obnoxious crash in GTM Standby. This one was > a tough nut to crack down. The crash is as below > > Program terminated with signal 11, Segmentation fault. > #0 0x00000000004253c9 in gtm_lappend () > Missing separate debuginfos, use: debuginfo-install > glibc-2.12-1.80.el6_3.6.x86_64 libgcc-4.4.6-4.el6.x86_64 > (gdb) bt > #0 0x00000000004253c9 in gtm_lappend () > #1 0x000000000040ad77 in GTM_BkupBeginTransactionGetGXIDMulti.clone.0 () > #2 0x000000000040aedb in ProcessBkupBeginTransactionGetGXIDCommand () > #3 0x000000000040417c in GTM_ThreadMain () > > > > IMHO, using TopMemoryContext to mean the top context of each thread is > pretty confusing. Bad choice of name for the memory context according > to me. Maybe we could have avoided this crash if we had used a > different name for the context. > > This "TopMemoryContext" goes away when that thread goes away. So ain't > nothing TOP about it. The GTMTransactions.gt_open_transactions list > was being appended to using this memory context. So later if another > thread came in (and the earlier appending thread had been cleaned up), > it will find garbage in this list and this was causing the crash. > > I always saw a couple of threads being cleaned up in the gtm standby > logs just prior to the crash. The fix is to use TopMostMemoryContext. > If it were to me I would re-haul this TopMemoryContext naming business > in GTM. Am sure people will get confused in the future too when they > write code.. > > Regards, > Nikhils > -- > StormDB - http://www.stormdb.com > The Database Cloud > Postgres-XC Support and Service > > ------------------------------------------------------------------------------ > Symantec Endpoint Protection 12 positioned as A LEADER in The Forrester > Wave(TM): Endpoint Security, Q1 2013 and "remains a good choice" in the > endpoint security space. For insight on selecting the right partner to > tackle endpoint security challenges, access the full report. > http://p.sf.net/sfu/symantec-dev2dev > _______________________________________________ > Postgres-xc-developers mailing list > Pos...@li... > https://lists.sourceforge.net/lists/listinfo/postgres-xc-developers > |
From: Nikhil S. <ni...@st...> - 2013-03-07 13:03:10
> So I think it's better to begin at contrib. It can be moved to bin > when most people think it's natural. > +1 Regards, Nikhils > Regards; > ---------- > Koichi Suzuki > > > 2013/3/7 Koichi Suzuki <koi...@gm...>: >> Hello, >> >> I'm pleased to publish C version of pgxc_ctl in progress in >> https://github.com/postgres-xc/postgres-xc/tree/pgxc_ctl/contrib/pgxc_ctl >> >> This needs much more improvement but it just works fine for basic >> configuration, without slave, so far. Good features to this version >> are: >> >> 1. Maintains configuration file in bash. >> 2. More flexible environment, including log. >> 3. If possible, shell scripts including scp and ssh will be done in >> parallel. C version is quite faster than bash version mainly for >> this reason. >> 4. Log is improved much. Now more than one pgxc_ctl session can >> write to the same log and the log will be controlled by advisory lock >> so that log lines will not be messed up by another pgxc_ctl session. >> 5. Utilities to clean up gtm (unregister failed node) is now >> integrated into pgxc_ctl. pgxc_monitor was also integrate so that >> pgxc_ctl can run without any other modules in contrib. >> >> Because original bash version was more than 5k lines, C version is >> even bigger. I think I should maintain bash version as well for >> people to learn what to do in various XC cluster operation. Further, >> now Abbas is implementing node addition/removal which should >> be supported by pgxc_ctl as well. >> >> Sorry, no document is available so far. It is very similar to bash >> version but be a bit different. >> >> After I do necessary work, I'd like to add this to contrib. >> >> Regards; >> ---------- >> Koichi Suzuki > > ------------------------------------------------------------------------------ > Symantec Endpoint Protection 12 positioned as A LEADER in The Forrester > Wave(TM): Endpoint Security, Q1 2013 and "remains a good choice" in the > endpoint security space. For insight on selecting the right partner to > tackle endpoint security challenges, access the full report. > http://p.sf.net/sfu/symantec-dev2dev > _______________________________________________ > Postgres-xc-developers mailing list > Pos...@li... > https://lists.sourceforge.net/lists/listinfo/postgres-xc-developers -- StormDB - http://www.stormdb.com The Database Cloud Postgres-XC Support and Service |
From: Koichi S. <koi...@gm...> - 2013-03-07 12:54:03
The main reason to put it in contrib is that current pgxc_ctl does not support all the variety of Postgres-XC configuration. For example, owner (Linux user) of each node can be different. It does not support multiple slaves for datanode/coordinator. So I think it's better to begin at contrib. It can be moved to bin when most people think it's natural. Regards; ---------- Koichi Suzuki 2013/3/7 Koichi Suzuki <koi...@gm...>: > Hello, > > I'm pleased to publish C version of pgxc_ctl in progress in > https://github.com/postgres-xc/postgres-xc/tree/pgxc_ctl/contrib/pgxc_ctl > > This needs much more improvement but it just works fine for basic > configuration, without slave, so far. Good features to this version > are: > > 1. Maintains configuration file in bash. > 2. More flexible environment, including log. > 3. If possible, shell scripts including scp and ssh will be done in > parallel. C version is quite faster than bash version mainly for > this reason. > 4. Log is improved much. Now more than one pgxc_ctl session can > write to the same log and the log will be controlled by advisory lock > so that log lines will not be messed up by another pgxc_ctl session. > 5. Utilities to clean up gtm (unregister failed node) is now > integrated into pgxc_ctl. pgxc_monitor was also integrate so that > pgxc_ctl can run without any other modules in contrib. > > Because original bash version was more than 5k lines, C version is > even bigger. I think I should maintain bash version as well for > people to learn what to do in various XC cluster operation. Further, > now Abbas is implementing node addition/removal which should > be supported by pgxc_ctl as well. > > Sorry, no document is available so far. It is very similar to bash > version but be a bit different. > > After I do necessary work, I'd like to add this to contrib. > > Regards; > ---------- > Koichi Suzuki |
From: Nikhil S. <ni...@st...> - 2013-03-07 12:34:43
Hi, PFA, patch which fixes an obnoxious crash in GTM Standby. This one was a tough nut to crack down. The crash is as below Program terminated with signal 11, Segmentation fault. #0 0x00000000004253c9 in gtm_lappend () Missing separate debuginfos, use: debuginfo-install glibc-2.12-1.80.el6_3.6.x86_64 libgcc-4.4.6-4.el6.x86_64 (gdb) bt #0 0x00000000004253c9 in gtm_lappend () #1 0x000000000040ad77 in GTM_BkupBeginTransactionGetGXIDMulti.clone.0 () #2 0x000000000040aedb in ProcessBkupBeginTransactionGetGXIDCommand () #3 0x000000000040417c in GTM_ThreadMain () IMHO, using TopMemoryContext to mean the top context of each thread is pretty confusing. Bad choice of name for the memory context according to me. Maybe we could have avoided this crash if we had used a different name for the context. This "TopMemoryContext" goes away when that thread goes away. So ain't nothing TOP about it. The GTMTransactions.gt_open_transactions list was being appended to using this memory context. So later if another thread came in (and the earlier appending thread had been cleaned up), it will find garbage in this list and this was causing the crash. I always saw a couple of threads being cleaned up in the gtm standby logs just prior to the crash. The fix is to use TopMostMemoryContext. If it were to me I would re-haul this TopMemoryContext naming business in GTM. Am sure people will get confused in the future too when they write code.. Regards, Nikhils -- StormDB - http://www.stormdb.com The Database Cloud Postgres-XC Support and Service |
From: Ashutosh B. <ash...@en...> - 2013-03-07 11:48:43
A quick comment, we can have it at src/bin/pgxc_ctl like src/bin/pg_ctl. On Thu, Mar 7, 2013 at 5:15 PM, Koichi Suzuki <koi...@gm...>wrote: > Hello, > > I'm pleased to publish C version of pgxc_ctl in progress in > https://github.com/postgres-xc/postgres-xc/tree/pgxc_ctl/contrib/pgxc_ctl > > This needs much more improvement but it just works fine for basic > configuration, without slave, so far. Good features to this version > are: > > 1. Maintains configuration file in bash. > 2. More flexible environment, including log. > 3. If possible, shell scripts including scp and ssh will be done in > parallel. C version is quite faster than bash version mainly for > this reason. > 4. Log is improved much. Now more than one pgxc_ctl session can > write to the same log and the log will be controlled by advisory lock > so that log lines will not be messed up by another pgxc_ctl session. > 5. Utilities to clean up gtm (unregister failed node) is now > integrated into pgxc_ctl. pgxc_monitor was also integrate so that > pgxc_ctl can run without any other modules in contrib. > > Because original bash version was more than 5k lines, C version is > even bigger. I think I should maintain bash version as well for > people to learn what to do in various XC cluster operation. Further, > now Abbas is implementing node addition/removal which should > be supported by pgxc_ctl as well. > > Sorry, no document is available so far. It is very similar to bash > version but be a bit different. > > After I do necessary work, I'd like to add this to contrib. > > Regards; > ---------- > Koichi Suzuki > > > ------------------------------------------------------------------------------ > Symantec Endpoint Protection 12 positioned as A LEADER in The Forrester > Wave(TM): Endpoint Security, Q1 2013 and "remains a good choice" in the > endpoint security space. For insight on selecting the right partner to > tackle endpoint security challenges, access the full report. > http://p.sf.net/sfu/symantec-dev2dev > _______________________________________________ > Postgres-xc-developers mailing list > Pos...@li... > https://lists.sourceforge.net/lists/listinfo/postgres-xc-developers > -- Best Wishes, Ashutosh Bapat EntepriseDB Corporation The Enterprise Postgres Company |
From: Koichi S. <koi...@gm...> - 2013-03-07 11:45:35
Hello, I'm pleased to publish the C version of pgxc_ctl, a work in progress, at https://github.com/postgres-xc/postgres-xc/tree/pgxc_ctl/contrib/pgxc_ctl This needs much more improvement but it works fine for a basic configuration, without slaves, so far. Good features of this version are: 1. Maintains the configuration file in bash. 2. More flexible environment, including logging. 3. Where possible, shell scripts including scp and ssh are run in parallel; the C version is considerably faster than the bash version mainly for this reason. 4. Logging is much improved. More than one pgxc_ctl session can now write to the same log, and the log is protected by an advisory lock so that log lines will not be interleaved by another pgxc_ctl session. 5. Utilities to clean up gtm (unregister a failed node) are now integrated into pgxc_ctl. pgxc_monitor was also integrated so that pgxc_ctl can run without any other modules in contrib. Because the original bash version was more than 5k lines, the C version is even bigger. I think I should maintain the bash version as well, for people to learn what to do in various XC cluster operations. Further, Abbas is now implementing node addition/removal, which should be supported by pgxc_ctl as well. Sorry, no documentation is available so far. It is very similar to the bash version but is a bit different. After I do the necessary work, I'd like to add this to contrib. Regards; ---------- Koichi Suzuki
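Feature 4 above (advisory-lock-protected shared logging) can be pictured with a small POSIX sketch. Whether pgxc_ctl uses flock() or fcntl() locks is not stated in this message, so treat this as an assumption about the mechanism rather than the actual code.

    #include <stdio.h>
    #include <sys/file.h>

    /* Serialize writes to a log shared by several pgxc_ctl sessions: take an
     * exclusive advisory lock around each line so output never interleaves. */
    static void
    write_log_line(FILE *logf, const char *line)
    {
        int fd = fileno(logf);

        flock(fd, LOCK_EX);   /* blocks until no other session holds the log */
        fputs(line, logf);
        fflush(logf);         /* flush while still holding the lock */
        flock(fd, LOCK_UN);
    }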
From: Abbas B. <abb...@en...> - 2013-03-06 11:03:54
This revised patch adds a line in the --help of the added command line option. Also corrects a small mistake in managing the new command line option. The rest of the functionality stays the same. On Wed, Mar 6, 2013 at 3:06 PM, Abbas Butt <abb...@en...>wrote: > > > On Mon, Mar 4, 2013 at 2:58 PM, Amit Khandekar < > ami...@en...> wrote: > >> On 4 March 2013 14:44, Abbas Butt <abb...@en...> wrote: >> > >> > >> > On Mon, Mar 4, 2013 at 2:00 PM, Amit Khandekar >> > <ami...@en...> wrote: >> >> >> >> On 1 March 2013 18:45, Abbas Butt <abb...@en...> wrote: >> >> > >> >> > >> >> > On Fri, Mar 1, 2013 at 5:48 PM, Amit Khandekar >> >> > <ami...@en...> wrote: >> >> >> >> >> >> On 19 February 2013 12:37, Abbas Butt <abb...@en...> >> >> >> wrote: >> >> >> > >> >> >> > Hi, >> >> >> > Attached please find a patch that locks the cluster so that dump >> can >> >> >> > be >> >> >> > taken to be restored on the new node to be added. >> >> >> > >> >> >> > To lock the cluster the patch adds a new GUC parameter called >> >> >> > xc_lock_for_backup, however its status is maintained by the >> pooler. >> >> >> > The >> >> >> > reason is that the default behavior of XC is to release >> connections >> >> >> > as >> >> >> > soon >> >> >> > as a command is done and it uses PersistentConnections GUC to >> control >> >> >> > the >> >> >> > behavior. We in this case however need a status that is >> independent >> >> >> > of >> >> >> > the >> >> >> > setting of PersistentConnections. >> >> >> > >> >> >> > Assume we have two coordinator cluster, the patch provides this >> >> >> > behavior: >> >> >> > >> >> >> > Case 1: set and show >> >> >> > ==================== >> >> >> > psql test -p 5432 >> >> >> > set xc_lock_for_backup=yes; >> >> >> > show xc_lock_for_backup; >> >> >> > xc_lock_for_backup >> >> >> > -------------------- >> >> >> > yes >> >> >> > (1 row) >> >> >> > >> >> >> > Case 2: set from one client show from other >> >> >> > ================================== >> >> >> > psql test -p 5432 >> >> >> > set xc_lock_for_backup=yes; >> >> >> > (From another tab) >> >> >> > psql test -p 5432 >> >> >> > show xc_lock_for_backup; >> >> >> > xc_lock_for_backup >> >> >> > -------------------- >> >> >> > yes >> >> >> > (1 row) >> >> >> > >> >> >> > Case 3: set from one, quit it, run again and show >> >> >> > ====================================== >> >> >> > psql test -p 5432 >> >> >> > set xc_lock_for_backup=yes; >> >> >> > \q >> >> >> > psql test -p 5432 >> >> >> > show xc_lock_for_backup; >> >> >> > xc_lock_for_backup >> >> >> > -------------------- >> >> >> > yes >> >> >> > (1 row) >> >> >> > >> >> >> > Case 4: set on one coordinator, show from other >> >> >> > ===================================== >> >> >> > psql test -p 5432 >> >> >> > set xc_lock_for_backup=yes; >> >> >> > (From another tab) >> >> >> > psql test -p 5433 >> >> >> > show xc_lock_for_backup; >> >> >> > xc_lock_for_backup >> >> >> > -------------------- >> >> >> > yes >> >> >> > (1 row) >> >> >> > >> >> >> > pg_dump and pg_dumpall seem to work fine after locking the cluster >> >> >> > for >> >> >> > backup but I would test these utilities in detail next. >> >> >> > >> >> >> > Also I have yet to look in detail that standard_ProcessUtility is >> the >> >> >> > only >> >> >> > place that updates the portion of catalog that is dumped. There >> may >> >> >> > be >> >> >> > some >> >> >> > other places too that need to be blocked for catalog updates. >> >> >> > >> >> >> > The patch adds no extra warnings and regression shows no extra >> >> >> > failure. 
>> >> >> > >> >> >> > Comments are welcome. >> >> >> >> >> >> Abbas wrote on another thread: >> >> >> >> >> >> > Amit wrote on another thread: >> >> >> >> I haven't given a thought on the earlier patch you sent for >> cluster >> >> >> >> lock >> >> >> >> implementation; may be we can discuss this on that thread, but >> just >> >> >> >> a >> >> >> >> quick >> >> >> >> question: >> >> >> >> >> >> >> >> Does the cluster-lock command wait for the ongoing DDL commands >> to >> >> >> >> finish >> >> >> >> ? If not, we have problems. The subsequent pg_dump would not >> contain >> >> >> >> objects >> >> >> >> created by these particular DDLs. >> >> >> > >> >> >> > >> >> >> > Suppose you have a two coordinator cluster. Assume one client >> >> >> > connected >> >> >> > to >> >> >> > each. Suppose one client issues a lock cluster command and the >> other >> >> >> > issues >> >> >> > a DDL. Is this what you mean by an ongoing DDL? If true then >> answer >> >> >> > to >> >> >> > your >> >> >> > question is Yes. >> >> >> > >> >> >> > Suppose you have a prepared transaction that has a DDL in it, >> again >> >> >> > if >> >> >> > this >> >> >> > can be considered an on going DDL, then again answer to your >> question >> >> >> > is >> >> >> > Yes. >> >> >> > >> >> >> > Suppose you have a two coordinator cluster. Assume one client >> >> >> > connected >> >> >> > to >> >> >> > each. One client starts a transaction and issues a DDL, the second >> >> >> > client >> >> >> > issues a lock cluster command, the first commits the transaction. >> If >> >> >> > this is >> >> >> > an ongoing DDL, then the answer to your question is No. >> >> >> >> >> >> Yes this last scenario is what I meant: A DDL has been executed on >> >> >> nodes, >> >> >> but >> >> >> not committed, when the cluster lock command is run and then >> pg_dump >> >> >> immediately >> >> >> starts its transaction before the DDL is committed. Here pg_dump >> does >> >> >> not see the new objects that would be created. >> >> >> >> -- >> >> Come to think of it, there would always be a small interval where the >> >> concurrency issue would remain. >> > >> > >> >> > Can you please give an example to clarify. >> >> -- >> >> > >> >> >> >> If we were to totally get rid of this >> >> concurrency issue, we need to have some kind of lock. For e.g. the >> >> object access hook function will have shared acces lock on this object >> >> (may be on pg_depend because it is always used for objcet >> >> creation/drop ??) and the lock-cluster command will try to get >> >> exclusive lock on the same. This of course should be done after we are >> >> sure object access hook is called on all types of objects. >> >> For e.g. Suppose we come up with a solution where just before >> transaction commit (i.e. in transaction callback) we check if the >> cluster is locked and there are objects created/dropped in the current >> transaction, and then commit if the cluster is not locked. But betwen >> the instance where we do the lock check and the instance where we >> actually commit, during this time gap, there can be cluster lock >> issued followed immediately by pg_dump. For pg_dump the new objects >> created in that transaction will not be visible. So by doing the >> cluster-lock check at transaction callback, we have reduced the time >> gap significantly although it is not completely gone. 
>> But if the lock-cluster command and the object creation functions
>> (whether via the object access hook or standard_ProcessUtility) take a
>> lock on a common object, this concurrency issue might be solved. As of
>> now, I see pg_depend as one common object which is *always* accessed for
>> object creation/drop.
>
> The current locking mechanism works in two ways: at session level or at
> transaction level. The session-level locks can stay for as long as the
> session, but what we want is a lock that stays irrespective of the
> session. We would like to do a set xc_lock_for_backup=on and then quit
> that terminal without having to keep it open for as long as we want the
> lock to be held. So locking pg_depend using the existing locking
> mechanism would work only if we impose the restriction that the terminal
> that did set xc_lock_for_backup=on cannot be closed, otherwise some
> objects might be missed from the dump. BTW the window that we are talking
> about is significantly small, and DDLs are not very common, so we might
> be all good here.
>
>> >> >> I myself am not sure how we would prevent this from happening.
>> >> >> There are two callback hooks that might be worth considering
>> >> >> though:
>> >> >> 1. Transaction end callback (CallXactCallbacks)
>> >> >> 2. Object creation/drop hook (InvokeObjectAccessHook)
>> >> >>
>> >> >> Suppose we create an object creation/drop hook function that would:
>> >> >> 1. store the current transaction id in a global objects_created
>> >> >> list if the cluster is not locked,
>> >> >> 2. or else, if the cluster is locked, ereport() saying "cannot
>> >> >> create catalog objects in this mode".
>> >> >>
>> >> >> And then during transaction commit, a new transaction callback hook
>> >> >> will:
>> >> >> 1. Check the above objects_created list to see if the current
>> >> >> transaction has any objects created/dropped.
>> >> >> 2. If found, and if the cluster lock is on, it will again ereport()
>> >> >> saying "cannot create catalog objects in this mode".
>> >> >>
>> >> >> Thinking more on the object creation hook, we can even consider it
>> >> >> as a substitute for checking the cluster-lock status in
>> >> >> standard_ProcessUtility(). But I am not sure whether this hook gets
>> >> >> called for each of the catalog objects. At least the code comments
>> >> >> say it does.
>> >> >
>> >> > These are very good ideas. Thanks, I will work along those lines and
>> >> > will report back.
>> >> >
>> >> >> > But it's a matter of deciding which camp we are going to put
>> >> >> > COMMIT in: the allow camp or the deny camp. I decided to put it
>> >> >> > in the allow camp, because I have not yet written any code to
>> >> >> > detect whether a transaction being committed has a DDL in it or
>> >> >> > not, and stopping all transactions from committing looks too
>> >> >> > restrictive to me.
>> >> >> >
>> >> >> > Do you have some other meaning of an ongoing DDL?
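The hook-based scheme proposed above could look roughly like the sketch
below. Only object_access_hook and RegisterXactCallback are real PostgreSQL
extension points here; cluster_is_locked() is a hypothetical stand-in for
the pooler lookup behind xc_lock_for_backup, and the proposal's global
objects_created list is simplified to a per-backend flag. This is a sketch
of the idea, not the patch's code, and raising an error from a commit-time
callback carries hazards of its own.

/* Minimal sketch; cluster_is_locked() is hypothetical. */
#include "postgres.h"
#include "access/xact.h"
#include "catalog/objectaccess.h"

extern bool cluster_is_locked(void);    /* hypothetical pooler lookup */

static bool xact_created_objects = false;
static object_access_hook_type prev_object_access_hook = NULL;

static void
lock_aware_object_access(ObjectAccessType access, Oid classId,
                         Oid objectId, int subId, void *arg)
{
    if (prev_object_access_hook)
        prev_object_access_hook(access, classId, objectId, subId, arg);

    if (access == OAT_POST_CREATE)      /* drops need a newer hook type */
    {
        if (cluster_is_locked())
            ereport(ERROR,
                    (errmsg("cannot create catalog objects in this mode")));
        xact_created_objects = true;    /* remember for the commit-time check */
    }
}

static void
lock_aware_xact_callback(XactEvent event, void *arg)
{
    /* Re-check at commit; this narrows, but does not close, the window. */
    if (event == XACT_EVENT_COMMIT && xact_created_objects &&
        cluster_is_locked())
        ereport(ERROR,
                (errmsg("cannot create catalog objects in this mode")));

    if (event == XACT_EVENT_COMMIT || event == XACT_EVENT_ABORT)
        xact_created_objects = false;   /* reset for the next transaction */
}

void
install_cluster_lock_hooks(void)
{
    prev_object_access_hook = object_access_hook;
    object_access_hook = lock_aware_object_access;
    RegisterXactCallback(lock_aware_xact_callback, NULL);
}

Installed from _PG_init() in a preloaded module, the commit-time re-check
would shrink the race discussed above without eliminating it, which is why
the thread goes on to look for a shared lock object instead.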
--
Abbas
Architect
EnterpriseDB Corporation
The Enterprise PostgreSQL Company

Phone: 92-334-5100153

Website: www.enterprisedb.com
EnterpriseDB Blog: http://blogs.enterprisedb.com/
Follow us on Twitter: http://www.twitter.com/enterprisedb

This e-mail message (and any attachment) is intended for the use of
the individual or entity to whom it is addressed. This message
contains information from EnterpriseDB Corporation that may be
privileged, confidential, or exempt from disclosure under applicable
law. If you are not the intended recipient or authorized to receive
this for the intended recipient, any use, dissemination, distribution,
retention, archiving, or copying of this communication is strictly
prohibited. If you have received this e-mail in error, please notify
the sender immediately by reply e-mail and delete this message.
|
From: Abbas B. <abb...@en...> - 2013-03-06 10:06:58
|
On Mon, Mar 4, 2013 at 2:58 PM, Amit Khandekar
<ami...@en...> wrote:
> On 4 March 2013 14:44, Abbas Butt <abb...@en...> wrote:
> >
> > On Mon, Mar 4, 2013 at 2:00 PM, Amit Khandekar
> > <ami...@en...> wrote:
> >>
> >> On 1 March 2013 18:45, Abbas Butt <abb...@en...> wrote:
> >> >
> >> > On Fri, Mar 1, 2013 at 5:48 PM, Amit Khandekar
> >> > <ami...@en...> wrote:
> >> >>
> >> >> On 19 February 2013 12:37, Abbas Butt <abb...@en...>
> >> >> wrote:
> >> >> >
> >> >> > Hi,
> >> >> > Attached please find a patch that locks the cluster so that a dump
> >> >> > can be taken to be restored on the new node to be added.
> >> >> >
> >> >> > To lock the cluster, the patch adds a new GUC parameter called
> >> >> > xc_lock_for_backup; however, its status is maintained by the
> >> >> > pooler. The reason is that the default behavior of XC is to
> >> >> > release connections as soon as a command is done, and it uses the
> >> >> > PersistentConnections GUC to control that behavior. In this case,
> >> >> > however, we need a status that is independent of the setting of
> >> >> > PersistentConnections.
> >> >> >
> >> >> > Assume we have a two-coordinator cluster; the patch provides this
> >> >> > behavior:
> >> >> >
> >> >> > Case 1: set and show
> >> >> > ====================
> >> >> > psql test -p 5432
> >> >> > set xc_lock_for_backup=yes;
> >> >> > show xc_lock_for_backup;
> >> >> >  xc_lock_for_backup
> >> >> > --------------------
> >> >> >  yes
> >> >> > (1 row)
> >> >> >
> >> >> > Case 2: set from one client, show from the other
> >> >> > ================================================
> >> >> > psql test -p 5432
> >> >> > set xc_lock_for_backup=yes;
> >> >> > (From another tab)
> >> >> > psql test -p 5432
> >> >> > show xc_lock_for_backup;
> >> >> >  xc_lock_for_backup
> >> >> > --------------------
> >> >> >  yes
> >> >> > (1 row)
> >> >> >
> >> >> > Case 3: set from one, quit it, run again and show
> >> >> > =================================================
> >> >> > psql test -p 5432
> >> >> > set xc_lock_for_backup=yes;
> >> >> > \q
> >> >> > psql test -p 5432
> >> >> > show xc_lock_for_backup;
> >> >> >  xc_lock_for_backup
> >> >> > --------------------
> >> >> >  yes
> >> >> > (1 row)
> >> >> >
> >> >> > Case 4: set on one coordinator, show from the other
> >> >> > ===================================================
> >> >> > psql test -p 5432
> >> >> > set xc_lock_for_backup=yes;
> >> >> > (From another tab)
> >> >> > psql test -p 5433
> >> >> > show xc_lock_for_backup;
> >> >> >  xc_lock_for_backup
> >> >> > --------------------
> >> >> >  yes
> >> >> > (1 row)
> >> >> >
> >> >> > pg_dump and pg_dumpall seem to work fine after locking the cluster
> >> >> > for backup, but I will test these utilities in detail next.
> >> >> >
> >> >> > Also, I have yet to confirm that standard_ProcessUtility is the
> >> >> > only place that updates the portion of the catalog that is dumped.
> >> >> > There may be other places too that need to be blocked from making
> >> >> > catalog updates.
> >> >> >
> >> >> > The patch adds no extra warnings, and regression shows no extra
> >> >> > failures.
> >> >> >
> >> >> > Comments are welcome.
> >> >>
> >> >> Abbas wrote on another thread:
> >> >>
> >> >> > Amit wrote on another thread:
> >> >> >> I haven't given a thought to the earlier patch you sent for the
> >> >> >> cluster lock implementation; maybe we can discuss this on that
> >> >> >> thread, but just a quick question:
> >> >> >>
> >> >> >> Does the cluster-lock command wait for the ongoing DDL commands
> >> >> >> to finish? If not, we have problems. The subsequent pg_dump would
> >> >> >> not contain objects created by these particular DDLs.
> >> >> >
> >> >> > Suppose you have a two-coordinator cluster. Assume one client is
> >> >> > connected to each. Suppose one client issues a lock cluster
> >> >> > command and the other issues a DDL. Is this what you mean by an
> >> >> > ongoing DDL? If true, then the answer to your question is yes.
> >> >> >
> >> >> > Suppose you have a prepared transaction that has a DDL in it;
> >> >> > again, if this can be considered an ongoing DDL, then again the
> >> >> > answer to your question is yes.
> >> >> >
> >> >> > Suppose you have a two-coordinator cluster. Assume one client is
> >> >> > connected to each. One client starts a transaction and issues a
> >> >> > DDL, the second client issues a lock cluster command, and the
> >> >> > first commits the transaction. If this is an ongoing DDL, then
> >> >> > the answer to your question is no.
> >> >>
> >> >> Yes, this last scenario is what I meant: a DDL has been executed on
> >> >> the nodes, but not committed, when the cluster lock command is run,
> >> >> and then pg_dump immediately starts its transaction before the DDL
> >> >> is committed. Here pg_dump does not see the new objects that would
> >> >> be created.
>
> Come to think of it, there would always be a small interval where the
> concurrency issue would remain.
>
> > Can you please give an example to clarify.
>
> >> If we were to totally get rid of this concurrency issue, we need to
> >> have some kind of lock. For example, the object access hook function
> >> will take a shared lock on some object (maybe on pg_depend, because it
> >> is always used for object creation/drop??), and the lock-cluster
> >> command will try to get an exclusive lock on the same. This of course
> >> should be done after we are sure the object access hook is called for
> >> all types of objects.
>
> For example, suppose we come up with a solution where just before
> transaction commit (i.e. in a transaction callback) we check whether the
> cluster is locked and whether objects were created/dropped in the current
> transaction, and commit only if the cluster is not locked. Between the
> instant where we do the lock check and the instant where we actually
> commit, a cluster lock can be issued, followed immediately by pg_dump.
> For pg_dump, the new objects created in that transaction will not be
> visible. So by doing the cluster-lock check in the transaction callback,
> we have reduced the time gap significantly, although it is not completely
> gone.
>
> But if the lock-cluster command and the object creation functions
> (whether via the object access hook or standard_ProcessUtility) take a
> lock on a common object, this concurrency issue might be solved. As of
> now, I see pg_depend as one common object which is *always* accessed for
> object creation/drop.
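The pg_depend idea above might reduce to something like the sketch below.
LockRelationOid and DependRelationId are real PostgreSQL primitives; the two
wrapper functions and their placement are assumptions. As the reply that
follows points out, a lock taken this way lives only as long as the
transaction (or at most the session), which is exactly where the idea runs
into trouble.

#include "postgres.h"
#include "catalog/pg_depend.h"   /* DependRelationId */
#include "storage/lmgr.h"

/*
 * Hypothetical guard for every object creation/drop path: the share
 * lock on pg_depend blocks while lock-cluster holds it exclusively.
 */
static void
guard_catalog_change(void)
{
    LockRelationOid(DependRelationId, AccessShareLock);
}

/*
 * Hypothetical lock-cluster command body: the exclusive lock waits for
 * in-flight DDL (share lockers) to finish and then blocks new DDL.
 * Note: this lock is released at transaction end, so it would not
 * survive closing the terminal, as the reply below objects.
 */
static void
lock_cluster_for_backup(void)
{
    LockRelationOid(DependRelationId, AccessExclusiveLock);
}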
The current locking mechanism works in two ways: at session level or at
transaction level. The session-level locks can stay for as long as the
session, but what we want is a lock that stays irrespective of the session.
We would like to do a set xc_lock_for_backup=on and then quit that terminal
without having to keep it open for as long as we want the lock to be held.
So locking pg_depend using the existing locking mechanism would work only
if we impose the restriction that the terminal that did set
xc_lock_for_backup=on cannot be closed, otherwise some objects might be
missed from the dump. BTW the window that we are talking about is
significantly small, and DDLs are not very common, so we might be all good
here.

> >> >> I myself am not sure how we would prevent this from happening.
> >> >> There are two callback hooks that might be worth considering
> >> >> though:
> >> >> 1. Transaction end callback (CallXactCallbacks)
> >> >> 2. Object creation/drop hook (InvokeObjectAccessHook)
> >> >>
> >> >> Suppose we create an object creation/drop hook function that would:
> >> >> 1. store the current transaction id in a global objects_created
> >> >> list if the cluster is not locked,
> >> >> 2. or else, if the cluster is locked, ereport() saying "cannot
> >> >> create catalog objects in this mode".
> >> >>
> >> >> And then during transaction commit, a new transaction callback hook
> >> >> will:
> >> >> 1. Check the above objects_created list to see if the current
> >> >> transaction has any objects created/dropped.
> >> >> 2. If found, and if the cluster lock is on, it will again ereport()
> >> >> saying "cannot create catalog objects in this mode".
> >> >>
> >> >> Thinking more on the object creation hook, we can even consider it
> >> >> as a substitute for checking the cluster-lock status in
> >> >> standard_ProcessUtility(). But I am not sure whether this hook gets
> >> >> called for each of the catalog objects. At least the code comments
> >> >> say it does.
> >> >
> >> > These are very good ideas. Thanks, I will work along those lines
> >> > and will report back.
> >> >
> >> >> > But it's a matter of deciding which camp we are going to put
> >> >> > COMMIT in: the allow camp or the deny camp. I decided to put it
> >> >> > in the allow camp, because I have not yet written any code to
> >> >> > detect whether a transaction being committed has a DDL in it or
> >> >> > not, and stopping all transactions from committing looks too
> >> >> > restrictive to me.
> >> >> >
> >> >> > Do you have some other meaning of an ongoing DDL?
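For comparison, the cluster-lock check that the patch description says is
placed in standard_ProcessUtility presumably reduces to a guard of roughly
this shape. IsXCLockedForBackup() is a hypothetical name for the pooler
round-trip behind xc_lock_for_backup, and the statement list shown is
illustrative, not exhaustive; IsA and CreateCommandTag are the real
PostgreSQL facilities.

#include "postgres.h"
#include "nodes/parsenodes.h"
#include "tcop/utility.h"

extern bool IsXCLockedForBackup(void);  /* hypothetical pooler lookup */

/* Called before executing a utility statement on the coordinator */
static void
reject_ddl_if_cluster_locked(Node *parsetree)
{
    /* Only statements that change the dumped catalog need blocking. */
    if (IsA(parsetree, CreateStmt) ||
        IsA(parsetree, DropStmt) ||
        IsA(parsetree, AlterTableStmt))     /* etc. */
    {
        if (IsXCLockedForBackup())
            ereport(ERROR,
                    (errmsg("cannot execute %s while the cluster is locked for backup",
                            CreateCommandTag(parsetree))));
    }
}

The hook-based variant discussed earlier in the thread would replace this
per-statement list with a check at the point where catalog objects are
actually created or dropped.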
--
Abbas
Architect
EnterpriseDB Corporation
The Enterprise PostgreSQL Company

Phone: 92-334-5100153

Website: www.enterprisedb.com
EnterpriseDB Blog: http://blogs.enterprisedb.com/
Follow us on Twitter: http://www.twitter.com/enterprisedb

This e-mail message (and any attachment) is intended for the use of
the individual or entity to whom it is addressed. This message
contains information from EnterpriseDB Corporation that may be
privileged, confidential, or exempt from disclosure under applicable
law. If you are not the intended recipient or authorized to receive
this for the intended recipient, any use, dissemination, distribution,
retention, archiving, or copying of this communication is strictly
prohibited. If you have received this e-mail in error, please notify
the sender immediately by reply e-mail and delete this message.
|