From: Michael P. <mic...@gm...> - 2011-09-29 23:29:09

On Thu, Sep 29, 2011 at 2:31 PM, Ashutosh Bapat <ash...@en...> wrote:
> If we kind of know the area where the problems are, it will help to fix the
> bug, so that regressions are crash free. I will need to depend upon the
> regression a lot for the cleanup. Is it possible to fix the problem soon?

To be honest, I am not sure. I would first need to find the origin of the
problem, and I am not really sure it is that easy. Let me have a shot at it
though.
--
Michael Paquier
http://michael.otacoo.com

From: Ashutosh B. <ash...@en...> - 2011-09-29 05:31:32

If we kind of know the area where the problems are, it will help to fix the
bug, so that regressions are crash free. I will need to depend upon the
regression a lot for the cleanup. Is it possible to fix the problem soon?

On Thu, Sep 29, 2011 at 9:00 AM, Michael Paquier <mic...@gm...> wrote:
> On Thu, Sep 29, 2011 at 12:22 PM, Pavan Deolasee <pav...@en...> wrote:
>> On Thu, Sep 29, 2011 at 8:00 AM, Michael Paquier <mic...@gm...> wrote:
>>> On Thu, Sep 29, 2011 at 11:25 AM, Pavan Deolasee <pav...@en...> wrote:
>>>> Could this be because of the way we save and restore the GTM info? I have
>>>> seen issues because of that, especially if we fail to shut down everything
>>>> properly.
>>> This is indeed possible. Now snapshot data from GTM is saved with malloc
>>> on Datanodes, and we do not use any *safe* palloc mechanism.
>> No, you got me wrong. I was talking about the mechanism to save the GTM
>> state in a file when GTM is shut down. We then restore from the saved
>> information at restart. That sometimes causes problems, especially if we
>> have reinitialized the cluster. But I don't think make installcheck does
>> that, so maybe this is not the issue.
> OK, there may be issues related to that. But I am also able to reproduce the
> problem with the 1st regression on a clean cluster from time to time.
>
> Michael

--
Best Wishes,
Ashutosh Bapat
EnterpriseDB Corporation
The Enterprise Postgres Company

From: Michael P. <mic...@gm...> - 2011-09-29 04:32:07

Hi all,

I am preparing a sub-release based on branch 0.9.5 stable. Compared to 0.9.5,
this release contains some fixes regarding performance, and it includes all
the commits done in PostgreSQL 9.0 stable up to now. Regressions and
performance are not impacted at all, so I will commit that in the 0.9.5 stable
branch if there are no objections.

Regards,
--
Michael Paquier
http://michael.otacoo.com

From: Michael P. <mic...@gm...> - 2011-09-29 03:31:07

On Thu, Sep 29, 2011 at 12:22 PM, Pavan Deolasee <pav...@en...> wrote:
> On Thu, Sep 29, 2011 at 8:00 AM, Michael Paquier <mic...@gm...> wrote:
>> On Thu, Sep 29, 2011 at 11:25 AM, Pavan Deolasee <pav...@en...> wrote:
>>> Could this be because of the way we save and restore the GTM info? I have
>>> seen issues because of that, especially if we fail to shut down everything
>>> properly.
>> This is indeed possible. Now snapshot data from GTM is saved with malloc
>> on Datanodes, and we do not use any *safe* palloc mechanism.
> No, you got me wrong. I was talking about the mechanism to save the GTM
> state in a file when GTM is shut down. We then restore from the saved
> information at restart. That sometimes causes problems, especially if we
> have reinitialized the cluster. But I don't think make installcheck does
> that, so maybe this is not the issue.

OK, there may be issues related to that. But I am also able to reproduce the
problem with the 1st regression on a clean cluster from time to time.

Michael

From: Pavan D. <pav...@en...> - 2011-09-29 03:22:45

On Thu, Sep 29, 2011 at 8:00 AM, Michael Paquier <mic...@gm...> wrote:
> On Thu, Sep 29, 2011 at 11:25 AM, Pavan Deolasee <pav...@en...> wrote:
>> Could this be because of the way we save and restore the GTM info? I have
>> seen issues because of that, especially if we fail to shut down everything
>> properly.
> This is indeed possible. Now snapshot data from GTM is saved with malloc on
> Datanodes, and we do not use any *safe* palloc mechanism.

No, you got me wrong. I was talking about the mechanism to save the GTM state
in a file when GTM is shut down. We then restore from the saved information at
restart. That sometimes causes problems, especially if we have reinitialized
the cluster. But I don't think make installcheck does that, so maybe this is
not the issue.

Thanks,
Pavan

--
Pavan Deolasee
EnterpriseDB    http://www.enterprisedb.com

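[Editor's note: for readers unfamiliar with the save/restore mechanism Pavan
describes, the sketch below is a simplified, hypothetical illustration of the
failure mode, not the actual GTM code. The file name, function names, and the
starting GXID value are invented for the demo: a daemon that persists its
"next transaction id" to a control file at shutdown and blindly restores it at
startup will hand back stale state if the cluster has since been
reinitialized but the old control file was left behind.]

/* Hypothetical sketch of a GTM-like daemon persisting its next id to a
 * control file at shutdown and restoring it at startup. */
#include <stdio.h>
#include <stdlib.h>

#define CONTROL_FILE "gtm.control.demo"   /* made-up name for this demo */

static unsigned long next_gxid = 3;       /* made-up first id after init */

static void save_state(void)
{
    FILE *f = fopen(CONTROL_FILE, "w");
    if (f) {
        fprintf(f, "%lu\n", next_gxid);
        fclose(f);
    }
}

static void restore_state(void)
{
    FILE *f = fopen(CONTROL_FILE, "r");
    if (f) {
        /* Trusts whatever the file says, even if the cluster was rebuilt,
         * which is how a freshly initialized cluster can end up with a
         * transaction id state that does not match its contents. */
        if (fscanf(f, "%lu", &next_gxid) != 1)
            next_gxid = 3;
        fclose(f);
    }
}

int main(void)
{
    restore_state();                      /* may pick up stale state      */
    printf("handing out gxid %lu\n", next_gxid++);
    save_state();                         /* persisted for the next start */
    return 0;
}
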
From: Pavan D. <pav...@en...> - 2011-09-29 02:33:37

Could this be because of the way we save and restore the GTM info? I have seen
issues because of that, especially if we fail to shut down everything
properly.

Thanks,
Pavan

On Thu, Sep 29, 2011 at 5:26 AM, Michael Paquier <mic...@gm...> wrote:
> Like in bug 3412062, there is a portion of memory that is reacting really
> weirdly.
> https://sourceforge.net/tracker/?func=detail&aid=3412062&group_id=311227&atid=1310232
> I suppose that those problems are not directly related, but the origin
> (memory management) may be the same.
>
> On Thu, Sep 29, 2011 at 8:45 AM, Michael Paquier <mic...@gm...> wrote:
>> I am able to reproduce this issue, but I am not sure what it is related to,
>> as it happens randomly.
>> As you say, having a tuple concurrently updated would mean a lock or a
>> snapshot problem.
>> GTM has always worked correctly, so locks?
>>
>> On Wed, Sep 28, 2011 at 8:16 PM, Ashutosh Bapat <ash...@en...> wrote:
>>> Here's the assertion that's failing:
>>> 72 FATAL: tuple concurrently updated
>>> 73 TRAP: FailedAssertion("!(curval == 0 || (curval == 0x03 && status !=
>>> 0x00) || curval == status)", File: "clog.c", Line: 358)
>>> 74 LOG: server process (PID 32506) was terminated by signal 6: Aborted
>>> 75 LOG: terminating any other active server processes
>
> --
> Michael Paquier
> http://michael.otacoo.com

--
Pavan Deolasee
EnterpriseDB    http://www.enterprisedb.com

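[Editor's note: the assertion quoted above comes from the clog (commit log)
code, which keeps a 2-bit status per transaction. Read back, the condition
only permits overwriting a slot that is in progress (0x00), moving a
sub-committed slot (0x03) to a final status, or rewriting the same value; the
trap means a backend attempted some other transition, consistent with two
backends disagreeing about a transaction's fate. The snippet below is a
standalone restatement of that condition, not the actual clog.c code; the
macro names and values follow PostgreSQL's standard clog status encoding.]

/* Standalone restatement of the failing assertion from clog.c:
 *   curval == 0 || (curval == 0x03 && status != 0x00) || curval == status */
#include <stdio.h>
#include <stdbool.h>

#define TRANSACTION_STATUS_IN_PROGRESS    0x00
#define TRANSACTION_STATUS_COMMITTED      0x01
#define TRANSACTION_STATUS_ABORTED        0x02
#define TRANSACTION_STATUS_SUB_COMMITTED  0x03

/* Returns true when overwriting the current 2-bit clog status with
 * "status" is a transition the assertion tolerates. */
static bool clog_transition_ok(int curval, int status)
{
    return curval == TRANSACTION_STATUS_IN_PROGRESS
        || (curval == TRANSACTION_STATUS_SUB_COMMITTED
            && status != TRANSACTION_STATUS_IN_PROGRESS)
        || curval == status;
}

int main(void)
{
    /* Normal case: in-progress -> committed is allowed. */
    printf("%d\n", clog_transition_ok(TRANSACTION_STATUS_IN_PROGRESS,
                                      TRANSACTION_STATUS_COMMITTED));   /* 1 */
    /* The crash case: a committed slot being rewritten as aborted. */
    printf("%d\n", clog_transition_ok(TRANSACTION_STATUS_COMMITTED,
                                      TRANSACTION_STATUS_ABORTED));     /* 0 */
    return 0;
}
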
From: Michael P. <mic...@gm...> - 2011-09-29 02:30:32

On Thu, Sep 29, 2011 at 11:25 AM, Pavan Deolasee <pav...@en...> wrote:
> Could this be because of the way we save and restore the GTM info? I have
> seen issues because of that, especially if we fail to shut down everything
> properly.

This is indeed possible. Now snapshot data from GTM is saved with malloc on
Datanodes, and we do not use any *safe* palloc mechanism.

I saw this assertion crash only on remote nodes, both Coordinator and
Datanodes, so this may be related to the way data is received on a remote node
from a Coordinator.

My question is: why do we use malloc to store snapshot info received on a
remote node? Is it related to restrictions on sessions?
--
Michael Paquier
http://michael.otacoo.com

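[Editor's note: on the malloc-versus-palloc point raised above, palloc
allocates inside a memory context that the backend resets or deletes
wholesale, typically at the end of a query or transaction, so allocations
cannot outlive their context; malloc'd snapshot data lives until someone
remembers to free it and can be freed or reused at the wrong moment. The
sketch below is a simplified stand-in for a memory context written for this
note, not PostgreSQL's actual palloc implementation; names such as pool_alloc
are invented for the illustration.]

/* Simplified stand-in for a PostgreSQL-style memory context: every
 * allocation is tracked by the pool and released in one shot when the
 * pool is reset, unlike raw malloc where each chunk must be freed by hand. */
#include <stdio.h>
#include <stdlib.h>

#define POOL_MAX 128

typedef struct {
    void *chunks[POOL_MAX];
    int   nchunks;
} Pool;

static void *pool_alloc(Pool *p, size_t size)
{
    void *chunk = malloc(size);
    if (chunk && p->nchunks < POOL_MAX)
        p->chunks[p->nchunks++] = chunk;   /* remember it for bulk release */
    return chunk;
}

static void pool_reset(Pool *p)
{
    for (int i = 0; i < p->nchunks; i++)
        free(p->chunks[i]);
    p->nchunks = 0;                        /* nothing can leak past here */
}

int main(void)
{
    Pool txn_pool = { .nchunks = 0 };

    /* "Snapshot" data tied to the pool: released automatically below. */
    int *snapshot_xids = pool_alloc(&txn_pool, 16 * sizeof(int));
    snapshot_xids[0] = 42;
    printf("first xid in snapshot: %d\n", snapshot_xids[0]);

    /* "End of transaction": one call frees everything the pool handed out,
     * the safety property that raw malloc'd snapshot storage lacks. */
    pool_reset(&txn_pool);
    return 0;
}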