Troubleshooting Periodic Dismounts of the OCRDG Disk Group in Oracle RAC ASM

Background

An Oracle RAC cluster that had been running for several years recently started hitting repeated node startup failures. Initial investigation showed that the OCRDG disk group in ASM had been dismounted; remounting it restored service, but after a while the disk group would drop out again.

Troubleshooting

Checking the crsd log

Searching crsd.log shows the cluster ran into trouble at 02:09: OCRDG became inaccessible and the clusterware forcibly dismounted the disk group.

2019-08-13 02:09:05.173: [UiServer][1386170112]{1:42831:329} Sending message to PE. ctx= 0x7f29e4009010, Client PID: 12737
2019-08-13 02:09:05.173: [   CRSPE][1388271360]{1:42831:329} Cmd : 0x7f29e0009580 : flags: EVENT_TAG | FORCE_TAG | QUEUE_TAG
2019-08-13 02:09:05.173: [   CRSPE][1388271360]{1:42831:329} Processing PE command id=381. Description: [Stop Resource : 0x7f29e0009580]
2019-08-13 02:09:05.174: [   CRSPE][1388271360]{1:42831:329} Expression Filter : (((NAME == ora.OCRDG.dg) AND (LAST_SERVER == db1)) AND (STATE != OFFLINE))
2019-08-13 02:09:05.174: [   CRSPE][1388271360]{1:42831:329} Expression Filter : (((NAME == ora.OCRDG.dg) AND (LAST_SERVER == db1)) AND (STATE != OFFLINE))
2019-08-13 02:09:05.175: [   CRSPE][1388271360]{1:42831:329} Attribute overrides for the command: USR_ORA_OPI = true;
2019-08-13 02:09:05.175: [   CRSPE][1388271360]{1:42831:329} Filtering duplicate ops: server [] state [OFFLINE]
2019-08-13 02:09:05.176: [   CRSPE][1388271360]{1:42831:329} Op 0x7f29e00e9410 has 5 WOs
2019-08-13 02:09:05.176: [   CRSPE][1388271360]{1:42831:329} RI [ora.OCRDG.dg db1 1] new target state: [OFFLINE] old value: [ONLINE]   'OCRDG target state changes from ONLINE to OFFLINE'
2019-08-13 02:09:05.176: [   CRSPE][1388271360]{1:42831:329} RI [ora.OCRDG.dg db1 1] new internal state: [STOPPING] old value: [STABLE] 'OCRDG internal state changes from STABLE to STOPPING'
2019-08-13 02:09:05.176: [  CRSOCR][1396676352]{1:42831:329} Multi Write Batch processing...
2019-08-13 02:09:05.176: [   CRSPE][1388271360]{1:42831:329} Sending message to agfw: id = 20443
2019-08-13 02:09:05.177: [   CRSPE][1388271360]{1:42831:329} CRS-2673: Attempting to stop 'ora.OCRDG.dg' on 'db1'
'attempting to stop ora.OCRDG.dg'
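
For reference, in 11gR2 the crsd log lives under the Grid home rather than the ADR, so the stop events above can be pulled out with a plain grep. A minimal sketch, assuming a standard Grid installation with $GRID_HOME set and db1 as the local hostname:

# crsd log location in 11gR2; the node name directory matches the local hostname
grep -n "ora.OCRDG.dg" $GRID_HOME/log/db1/crsd/crsd.log | tail -20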

Checking the ASM log

The crsd log above points at ASM, so next we check the ASM alert log and find the following: the ASM PST (Partnership and Status Table) heartbeat check against the disks in this disk group was delayed, and because the default timeout is only 15 seconds the check failed. The ASM instance then forcibly dismounted the disk group, and crsd, unable to read OCRDG, went down.

WARNING: Waited 15 secs for write IO to PST disk 0 in group 2.   "PST heartbeat timed out after 15 seconds"
WARNING: Waited 15 secs for write IO to PST disk 1 in group 2.
WARNING: Waited 15 secs for write IO to PST disk 0 in group 2.
WARNING: Waited 15 secs for write IO to PST disk 1 in group 2.
NOTE: process _b000_+asm1 (15894) initiating offline of disk 0.3916004527 (OCR1) with mask 0x7e in group 2
NOTE: process _b000_+asm1 (15894) initiating offline of disk 1.3916004528 (OCR2) with mask 0x7e in group 2
NOTE: checking PST: grp = 2
GMON checking disk modes for group 2 at 22 for pid 31, osid 15894
ERROR: no read quorum in group: required 2, found 0 disks  "error: quorum requires 2 disks, 0 found"
NOTE: checking PST for grp 2 done.
NOTE: initiating PST update: grp = 2, dsk = 0/0xe9697caf, mask = 0x6a, op = clear
NOTE: initiating PST update: grp = 2, dsk = 1/0xe9697cb0, mask = 0x6a, op = clear
GMON updating disk modes for group 2 at 23 for pid 31, osid 15894
ERROR: no read quorum in group: required 2, found 0 disks
Tue Aug 13 02:08:34 2019
NOTE: cache dismounting (not clean) group 2/0xE3398C73 (OCRDG)  
WARNING: Offline for disk OCR1 in mode 0x7f failed.
WARNING: Offline for disk OCR2 in mode 0x7f failed.
NOTE: messaging CKPT to quiesce pins Unix process pid: 15896, image: oracle@db1 (B001)
Tue Aug 13 02:08:34 2019
NOTE: halting all I/Os to diskgroup 2 (OCRDG)
Tue Aug 13 02:08:34 2019
NOTE: LGWR doing non-clean dismount of group 2 (OCRDG)
NOTE: LGWR sync ABA=18.64 last written ABA 18.64
Tue Aug 13 02:08:34 2019
kjbdomdet send to inst 2
detach from dom 2, sending detach message to inst 2
Tue Aug 13 02:08:35 2019
List of instances:
 1 2
Dirty detach reconfiguration started (new ddet inc 2, cluster inc 44)
 Global Resource Directory partially frozen for dirty detach
* dirty detach - domain 2 invalid = TRUE 
 15 GCS resources traversed, 0 cancelled
Dirty Detach Reconfiguration complete
Tue Aug 13 02:08:35 2019
WARNING: dirty detached from domain 2
NOTE: cache dismounted group 2/0xE3398C73 (OCRDG) 
SQL> alter diskgroup OCRDG dismount force /* ASM SERVER:3812199539 */   "forced dismount of the OCRDG disk group"
Tue Aug 13 02:08:35 2019
NOTE: cache deleting context for group OCRDG 2/0xe3398c73
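
The ASM alert log itself sits in the ADR trace directory of the ASM instance. A minimal sketch for finding the timeout warnings above, assuming instance +ASM1 and a default $ORACLE_BASE diagnostic layout:

# ASM alert log under the ADR home; adjust +ASM1 for your node
grep -n "Waited 15 secs" $ORACLE_BASE/diag/asm/+asm/+ASM1/trace/alert_+ASM1.log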

Attempted fix

Searching online turned up two approaches:
1. Confirm whether there are stalls between the operating system and the shared storage, and whether response times stay under 15 seconds.
2. If that cannot be guaranteed, set the following parameter in the ASM instance (on all RAC nodes):

_asm_hbeatiowait = 120 seconds
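
Before changing it, it is worth checking the current value on the ASM instance. Hidden parameters are not visible in v$parameter, but the standard x$ksppi/x$ksppcv query works (run as SYSDBA/SYSASM on the ASM instance; the query is generic for underscore parameters, not specific to this issue):

SQL> select x.ksppinm name, y.ksppstvl value
       from x$ksppi x, x$ksppcv y
      where x.indx = y.indx
        and x.ksppinm = '_asm_hbeatiowait';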

As per internal bug 17274537, based on internal testing the value should be increased to 120 secs, which is fixed in 12.1.0.2

This parameter was introduced in Oracle 11.2.0.4 with a default of 15 seconds; Oracle raised the default to 120 seconds in 12.1.0.2.

How to change it (run against the ASM instance):

sqlplus / as sysdba
SQL> alter system set "_asm_hbeatiowait"=120 scope=spfile sid='*';

After the change, restart crsd or the ASM instance for the new value to take effect.
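
Because OCRDG holds the OCR and voting files, the least disruptive way to restart ASM is to bounce the whole clusterware stack one node at a time. A sketch of a rolling restart (run as root on each node in turn, waiting for the stack to come back before moving on):

# as root, on each node in turn
$GRID_HOME/bin/crsctl stop crs
$GRID_HOME/bin/crsctl start crs
$GRID_HOME/bin/crsctl stat res -t    # confirm ora.OCRDG.dg shows ONLINE again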

Postscript

Ultimately, the root cause is still the excessive response time between the node and the shared storage; raising the timeout only widens the tolerance window, and only by tracking down that latency can the cluster be made truly healthy.
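
One simple way to watch for that is to track per-device I/O latency on the ASM disks during the problem window, for example with iostat; sustained await values anywhere near the timeout point at the storage path rather than at ASM:

# extended device statistics every 5 seconds; watch the await column
# on the LUNs backing OCR1/OCR2
iostat -xm 5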
