A RAC Node Restart Caused by Abnormal Host Resource Usage

This article analyzes a RAC database restart event in detail, from SWAP paging through the restart of the GRID software, covering the database instance anomaly, the cluster's response mechanisms, and the impact at the operating-system level.


Background and incident timeline:

A colleague sent over a customer's RAC restart case and asked for help with the analysis; the reply below is shared for reference.

Timeline of the database instance and GRID cluster restart:
1. Instance 2 ALERT log: SWAP paging appears at 16:39:11, and core process LMS1 (ospid: 8389020) has not called a wait for 235 secs.
The ALERT log also shows that instance 2 had been running LOGMNR operations continuously before the problem occurred.

2. Database instance 1 alert log: the remote instance eviction of instance 2 (member kill) is issued:
Fri May 17 16:46:56 2019 Remote instance kill

3. Node 1 CSSD log: after the member kill fails it is escalated to a node kill, i.e. a GRID software restart.
At 16:46:56 the member kill request is issued with a timeout of 30500 ms.
2019-05-17 16:47:26.889: [    CSSD][4891]clssgmMbrKillEsc: Escalating node 2 Member request 0x00000002 Member success 0x00000000 Member failure 0x00000000 Number left to kill 1   ====>>>> the member kill did not succeed and is escalated to a node kill, i.e. a GRID software restart


4. Node 2 CSSD log:
2019-05-17 16:47:29.480: [    CSSD][2320]clssscExit: CSSD aborting from thread clssnmvKillBlockThread
The GRID stack starts shutting down here, but no normal shutdown is seen afterwards.
The log jumps from 16:47:54 to 16:54:25, seven minutes later, when Starting CSS daemon appears.
Under normal circumstances the CRSD resources should be shut down cleanly before the restart,
i.e. the CRSD cleanup begins: 2019-05-17 16:47:29.480: [    CSSD][2320]clssscExit: Starting CRSD cleanup
and ends with "CRSD cleanup successfully" on success, or "clssscExit: CRSD cleanup failed" on failure. Neither end-of-cleanup message appears here.
The OS boot records need to be checked to determine whether the host itself had already rebooted at this point.

Summary and next steps:

From the logs we can conclude that database instance 2 ran short of memory and heavy SWAP paging set in; after that the database, and possibly the host, hung. The cluster's normal instance kill could no longer complete and was escalated to a node kill, and from the logs GRID on node 2 does not appear to have shut down cleanly either; at 16:54:25 GRID on node 2 started again. Whether the host rebooted in this window, or whether processes were killed manually, should be confirmed from information on the host itself.
The next step is to use the SWAP paging seen in the instance 2 ALERT log at 16:39:11 as the entry point and continue the investigation from the database AWR reports, the OS logs, and the host and database memory parameter settings.
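For the memory-parameter part of that check, a quick first pass is to add up what all instances on the node are allowed to use and compare it with physical RAM. The sketch below uses purely hypothetical sizes (none of these figures come from this system); the real values would come from sga_max_size / pga_aggregate_target on each instance and from the OS:

# Hypothetical memory over-commit check for a RAC node; the sizes below are
# placeholders, NOT this customer's settings.
GB = 1024 ** 3
physical_ram    = 256 * GB   # host physical memory (assumed)
os_grid_reserve =  16 * GB   # rough reserve for AIX, GRID/ASM, filesystem cache (assumed)
instances = {                # per-instance memory targets (assumed)
    "lis2":    {"sga": 120 * GB, "pga": 60 * GB},
    "otherdb": {"sga":  64 * GB, "pga": 32 * GB},
}
total = os_grid_reserve + sum(i["sga"] + i["pga"] for i in instances.values())
print(f"configured total : {total / GB:.0f} GB")
print(f"physical memory  : {physical_ram / GB:.0f} GB")
if total > physical_ram:
    print("over-committed: swapping is expected under load")
elif total > 0.9 * physical_ram:
    print("very tight: a PGA spike (e.g. from LOGMNR) can push the host into swap")
else:
    print("memory budget looks reasonable")

Note that pga_aggregate_target is a target rather than a hard limit in 11.2, so a LOGMNR-heavy workload can temporarily exceed it, which is why some slack against physical memory is needed.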
 

The relevant log analysis follows:

1. Instance 2 ALERT log:
The ALERT log shows that instance 2 had been running LOGMNR operations continuously before the problem occurred:
Fri May 17 16:27:39 2019
LOGMINER: End   mining logfile for session 1 thread 1 sequence 12904, +DATA/lis/onlinelog/group_1.257.931716735
LOGMINER: Begin mining logfile for session 1 thread 1 sequence 12905, +DATA/lis/onlinelog/group_2.258.931716735
Fri May 17 16:31:39 2019
LOGMINER: End   mining logfile for session 1 thread 1 sequence 12905, +DATA/lis/onlinelog/group_2.258.931716735
LOGMINER: Begin mining logfile for session 1 thread 1 sequence 12906, +DATA/lis/onlinelog/group_5.259.931716737
#### Problem onset:
SWAP paging appears at 16:39:11: pct of memory swapped in [0.05%] pct of memory swapped out [6.51%].
After that, memory-pressure messages keep appearing, for example:
Please make sure there is no memory pressure and the SGA and PGA 
LMS1 (ospid: 8389020) has not called a wait for 235 secs.WARNING: Heavy swapping observed on system in last 5 mins.
By this point the related processes were already hung, for example:
Fri May 17 16:41:49 2019
Errors in file /oracle/app/oracle/diag/rdbms/lis/lis2/trace/lis2_lmhb_7275340.trc  (incident=352150):
ORA-29770: global enqueue process LMS1 (OSID 8389020) is hung for more than 70 seconds
Fri May 17 16:45:00 2019
LMS0 (ospid: 7471320) has not called a wait for 129 secs.
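To see the full sequence of these warnings, the instance 2 alert log can be scanned for the swap and hang messages quoted above. A minimal sketch (the alert log path is an assumption based on the trace path shown earlier):

import re

# Assumed alert log location, derived from the lis2 trace directory above.
ALERT_LOG = "/oracle/app/oracle/diag/rdbms/lis/lis2/trace/alert_lis2.log"

# Messages seen in this incident.
KEYWORDS = ("swapped out", "Heavy swapping observed",
            "has not called a wait", "ORA-29770")

# Alert log timestamp lines look like: Fri May 17 16:39:11 2019
TS_RE = re.compile(r"^\w{3} \w{3}\s+\d{1,2} \d{2}:\d{2}:\d{2} \d{4}$")

last_ts = None
with open(ALERT_LOG, errors="replace") as f:
    for raw in f:
        line = raw.rstrip("\n")
        if TS_RE.match(line):
            last_ts = line                      # remember the latest timestamp header
        elif any(k in line for k in KEYWORDS):
            print(f"{last_ts} | {line}")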

#############################
Database instance 1 alert log:
Instance reconfiguration starts at 16:41:55:
Fri May 17 16:31:39 2019
Archived Log entry 24755 added for thread 1 sequence 12905 ID 0xffffffffa8e92f3c dest 1:
Fri May 17 16:41:55 2019
Reconfiguration started (old inc 100, new inc 102)
List of instances:
 1 2 (myinst: 1) 
 Global Resource Directory frozen

This instance reconfiguration did not complete normally,
and a remote instance eviction (member kill) is then issued:

Fri May 17 16:46:56 2019
Remote instance kill is issued with system inc 102
Remote instance kill map (size 1) : 2 
LMON received an instance eviction notification from instance 1
The instance eviction reason is 0x2
The instance eviction map is 2 
Fri May 17 16:47:34 2019
Reconfiguration started (old inc 102, new inc 104)
List of instances:
 1 (myinst: 1) 

#############################
Database instance 2 log: the instance is evicted at 16:47:35:
Fri May 17 16:47:35 2019
LMD0 (ospid: 7536934) received an instance eviction notification from instance 1 [2]
==================
==================>>>>>>> Instance 2 is evicted by instance 1; in the instance 1 log the reconfiguration had already started at 16:41:55 2019
==================
Fri May 17 16:47:35 2019
LOGMINER: session#=1 (OGG$CAP_EXT01), preparer MS03 pid=198 OS id=21496192 sid=2894 stopped
Fri May 17 16:47:39 2019
NOTE: deferred map free for map id 803613
PMON (ospid: 4850028): terminating the instance due to error 481

==== Cluster (CSSD) logs:
Node 2 CSSD log:
-- the member kill request for the instance is received
2019-05-17 16:46:52.924: [    CSSD][3862]clssnmSendingThread: sent 4 status msgs to all nodes
2019-05-17 16:46:56.389: [    CSSD][3348]clssgmpProcessRequestMsg: Member kill request from remote node
2019-05-17 16:46:56.389: [    CSSD][4894]clssgmmkLocalKillThread: Local kill requested: id 1 mbr map 0x00000002 Group name DBLIS flags 0x00000000 st time -1986516821 end time -1986486321 time out 30500 req node 1
2019-05-17 16:46:56.389: [    CSSD][4894]clssgmmkLocalKillThread: Kill requested for member 1 group (1121dc3d0/DBLIS)
2019-05-17 16:46:56.925: [    CSSD][3862]clssnmSendingThread: sending status msg to all nodes
2019-05-17 16:46:56.925: [    CSSD][3862]clssnmSendingThread: sent 4 status msgs to all nodes
========>>>>>>> At 16:46:56 the "clssgmpProcessRequestMsg: Member kill request" is received

#########################
Node 1 CSSD log: after the member kill fails it is escalated to a node kill, i.e. a GRID software restart
2019-05-17 16:46:55.419: [    CSSD][3862]clssnmSendingThread: sent 4 status msgs to all nodes
2019-05-17 16:46:56.383: [    CSSD][1029]clssgmExecuteClientRequest: Member kill request from client (113547310)
2019-05-17 16:46:56.383: [    CSSD][1029](:CSSGM00044:)clssgmReqMemberKill: Kill requested map 0x00000002 flags 0x2 escalate 0xffffffff
2019-05-17 16:46:56.388: [    CSSD][4891]clssgmMbrKillThread: Kill requested map 0x00000002 id 1 Group name DBLIS flags 0x00000001 start time 0x89916239 end time 0x8991d95d time out 30500 req node 1 
=======>>>> At 16:46:56 the member kill request is issued with a timeout of 30500 ms
……………………
2019-05-17 16:47:26.889: [    CSSD][4891]clssgmMbrKillThread: Member kill request complete.
2019-05-17 16:47:26.889: [    CSSD][4891]clssgmMbrKillSendEvent: Missing answers or immediate escalation: Req member 0 Req node 1 Number of answers expected 0 Number of answers outstanding 1 
=======>>>> Thirty seconds later, at 16:47:26, "Missing answers or immediate escalation": the member kill request for node 2 has reached its timeout with no answer received (see the check after this excerpt).
2019-05-17 16:47:26.889: [    CSSD][4891]clssgmQueueGrockEvent: groupName(DBLIS) count(2) master(1) event(11), incarn 0, mbrc 0, to member 0, events 0x68, state 0x0
2019-05-17 16:47:26.889: [    CSSD][4891]clssgmMbrKillEsc: Escalating node 2 Member request 0x00000002 Member success 0x00000000 Member failure 0x00000000 Number left to kill 1   ====>>>> the member kill did not succeed and is escalated to a node kill, i.e. a GRID software restart
2019-05-17 16:47:26.889: [    CSSD][4891]clssnmMarkNodeForRemoval: node 2, s824prod02 marked for removal
2019-05-17 16:47:26.889: [    CSSD][4891]clssnmKillNode: node 2 (s824prod02) kill initiated
2019-05-17 16:47:26.889: [    CSSD][4891]clssgmMbrKillThread: Exiting
2019-05-17 16:47:26.889: [    CSSD][3605]clssnmDiscHelper: s824prod02, node(2) connection failed, endp (8a7), probe(0), ninf->endp 8a7
2019-05-17 16:47:26.889: [    CSSD][3605]clssnmDiscHelper: node 2 clean up, endp (8a7), init state 5, cur state 5
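The 30500 ms window is consistent on both sides: in node 2's clssgmmkLocalKillThread entry the end time minus the start time equals the timeout, and on node 1 the escalation is logged almost exactly 30.5 s after the request was issued. A quick check of the numbers quoted above (all values are taken from the logs):

from datetime import datetime, timedelta

# Node 2 kill-thread bookkeeping (clssgmmkLocalKillThread above):
start, end, timeout_ms = -1986516821, -1986486321, 30500
assert end - start == timeout_ms             # the 30500 ms kill window

# Node 1 wall clock: request issued vs. escalation logged.
requested = datetime(2019, 5, 17, 16, 46, 56, 388000)
escalated = datetime(2019, 5, 17, 16, 47, 26, 889000)
print(escalated - requested)                           # 0:00:30.501000
print(requested + timedelta(milliseconds=timeout_ms))  # 2019-05-17 16:47:26.888000, matching clssgmMbrKillEsc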

#########################
Node 2 CSSD log: the GRID software restart begins
2019-05-17 16:47:27.605: [GIPCXCPT][4376] gipcDissociateF [clssnmDiscHelper : clssnm.c : 3505]: EXCEPTION[ ret gipcretFail (1) ]  failed to dissociate obj 1121c1c70 [0000000004961b74] { gipcEndpoint : localAddr 'gipcha://s824prod02:nm2_sp-cluster/78f3-0ba9-b2d9-e51', remoteAddr 'gipcha://s824prod01:2103-9b7d-3293-1e4', numPend 0, numReady 0, numDone 0, numDead 0, numTransfer 0, objFlags 0x0, pidPeer 0, readyRef 0, ready 1, wobj 1118e9e30, sendp 0flags 0x13860e, usrFlags 0x0 }, flags 0x0
2019-05-17 16:47:27.605: [    CSSD][4376]clssnmDiscEndp: gipcDestroy 4961b74 
2019-05-17 16:47:27.863: [    CSSD][2063]clssnmvDiskPing: Writing with status 0x3, timestamp 1558082847/2308481948
2019-05-17 16:47:28.366: [    CSSD][2063]clssnmvDiskPing: Writing with status 0x3, timestamp 1558082848/2308482451
2019-05-17 16:47:28.867: [    CSSD][2063]clssnmvDiskPing: Writing with status 0x3, timestamp 1558082848/2308482952
2019-05-17 16:47:28.953: [    CSSD][3862]clssnmSendingThread: sending status msg to all nodes
2019-05-17 16:47:28.953: [    CSSD][3862]clssnmSendingThread: sent 4 status msgs to all nodes
2019-05-17 16:47:29.368: [    CSSD][2063]clssnmvDiskPing: Writing with status 0x3, timestamp 1558082849/2308483452
2019-05-17 16:47:29.480: [    CSSD][2320](:CSSNM00005:)clssnmvDiskKillCheck: Aborting, evicted by node s824prod01, number 1, sync 403954700, stamp 2308036957
2019-05-17 16:47:29.480: [    CSSD][2320]clssnmRemoveNodeInTerm: node 2, s824prod02 terminated due to Normal Shutdown. Removing from member and connected bitmaps
2019-05-17 16:47:29.480: [    CSSD][2320]###################################
2019-05-17 16:47:29.480: [    CSSD][2320]clssscExit: CSSD aborting from thread clssnmvKillBlockThread
2019-05-17 16:47:29.480: [    CSSD][2320]###################################
2019-05-17 16:47:29.480: [    CSSD][2320](:CSSSC00012:)clssscExit: A fatal error occurred and the CSS daemon is terminating abnormally
2019-05-17 16:47:29.480: [    CSSD][2320]clssnmSendMeltdownStatus: node s824prod02, number 2, has experienced a failure in thread number 10 and is shutting down
2019-05-17 16:47:29.480: [    CSSD][2320]clssscExit: Starting CRSD cleanup

#########################
Node 1 CSSD log:
The GRID restart on node 2 is not going smoothly either; the node 1 CSSD log shows "Node kill could not beperformed" more than once.
2019-05-17 16:47:59.315: [    CSSD][4119]clssnmWaitOnEviction: node(2) exceeded graceful shutdown period, IPMI-kill allowed if configured
2019-05-17 16:47:59.315: [    CSSD][4119]clssnmWaitOnEviction: Node kill could not beperformed. Admin or connection validation failed

----->>>> Because no more DHBs (disk heartbeats) are seen, node 2 is judged to be DOWN at this point
2019-05-17 16:47:59.323: [    CSSD][2063]clssnmvDiskPing: Writing with status 0x3, timestamp 1558082879/2308069389
2019-05-17 16:47:59.419: [    CSSD][4119]clssnmWaitOnEvictions: node 2, undead 1, EXADATA fence handle 0 kill reqest id 0, last DHB (1558082849, 2308483452, 24304202)
2019-05-17 16:47:59.419: [    CSSD][4119]clssnmCheckKillStatus: Node 2, s824prod02, down, due to observed lack of DHBs, LATS 2308039472, current time 2308069485
2019-05-17 16:47:59.419: [    CSSD][4119]clssnmCheckKillStatus: Node 2, s824prod02, down, LATS(2308039472),timeout(30013)
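The numbers here line up with the 30 s disk-heartbeat timeout: the gap between node 1's current time and the LATS (the last time node 2 was seen alive) equals the reported timeout, which is why node 2 is declared down for lack of DHBs. A quick check, where the UTC+8 timezone is an assumption:

from datetime import datetime, timedelta, timezone

# Values from clssnmWaitOnEvictions / clssnmCheckKillStatus above (node 1 millisecond counters).
lats, now, reported_timeout = 2308039472, 2308069485, 30013
print(now - lats)                        # 30013 ms with no disk heartbeat from node 2
assert now - lats == reported_timeout

# Node 2's last disk heartbeat carried epoch second 1558082849; assuming the hosts
# run at UTC+8, this converts to 16:47:29 local time, the moment its CSSD aborted.
print(datetime.fromtimestamp(1558082849, tz=timezone(timedelta(hours=8))))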

#########################
Node 2 CSSD log:
The log jumps from 16:47:54 to 16:54:25, seven minutes later, when Starting CSS daemon appears.
Under normal circumstances the CRSD resources should be shut down cleanly before the restart,
i.e. the CRSD cleanup begins: 2019-05-17 16:47:29.480: [    CSSD][2320]clssscExit: Starting CRSD cleanup
and ends with "CRSD cleanup successfully" on success, or "clssscExit: CRSD cleanup failed" on failure. Neither end-of-cleanup message is found here.
The OS boot records need to be checked to determine whether the host itself had already rebooted at this point; a sketch for gathering that evidence follows the log excerpt below.
2019-05-17 16:47:54.991: [    CSSD][1029]clssgmProcClientReqs: Checking dead client Q
2019-05-17 16:47:54.991: [    CSSD][1029]clssgmProcClientReqs: Checking dead proc Q

[    CSSD][1]clsugetconf : Configuration type [4]. 
2019-05-17 16:54:25.615: [    CSSD][1]clssscmain: Starting CSS daemon, version 11.2.0.4.0, in (clustered) mode with uniqueness value 1558083265
2019-05-17 16:54:25.618: [    CSSD][1]clssscmain: Environment is production
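To settle whether the host itself rebooted during the 16:47-16:54 gap, the boot records on node 2 need to be checked. A minimal sketch, assuming an AIX host (the s824prod names suggest IBM Power) where uptime, who -b, last and errpt are available; adjust or drop any command that is not present:

import subprocess

# Commands commonly used on AIX to reconstruct the reboot history (assumed available).
CHECKS = [
    ["uptime"],           # how long the node has been up since the last boot
    ["who", "-b"],        # time of the last system boot
    ["last", "reboot"],   # reboot records from the wtmp accounting file
    ["errpt"],            # AIX error report: look for entries around 16:47-16:54 on 2019-05-17
]

for cmd in CHECKS:
    print("===", " ".join(cmd))
    try:
        result = subprocess.run(cmd, capture_output=True, text=True, timeout=30)
        print(result.stdout or result.stderr)
    except FileNotFoundError:
        print("command not found on this host")

If the boot time falls inside the gap, the 16:54:25 "Starting CSS daemon" is simply the stack coming up with the host; if not, the GRID stack was restarted without a host reboot (for example by ohasd, or manually), which is exactly what the summary above suggests confirming on the host.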

 
