Summary:
A customer hit an ASM disk header corruption that left a 4 TB database unable to start. There was no backup, and rebuilding the data from scratch would have taken far too long to be acceptable. The fault was eventually resolved by repairing the disk header with the kfed tool; this article records the whole process in detail.
Symptoms:
Tue Jun 25 10:14:59 2013
SQL> alter diskgroup HXCXDATA mount force
NOTE: cache registered group HXCXDATA number=12 incarn=0x0d73f50d
NOTE: cache began mount (first) of group HXCXDATA number=12 incarn=0x0d73f50d
NOTE: Assigning number (12,1) to disk (/dev/sddlmbb)
NOTE: Assigning number (12,2) to disk (/dev/sddlmbc)
NOTE: Assigning number (12,9) to disk (/dev/sddlmbj)
NOTE: Assigning number (12,8) to disk (/dev/sddlmbi)
NOTE: Assigning number (12,7) to disk (/dev/sddlmbh)
NOTE: Assigning number (12,10) to disk (/dev/sddlmbk)
NOTE: Assigning number (12,6) to disk (/dev/sddlmbg)
NOTE: Assigning number (12,5) to disk (/dev/sddlmbf)
Tue Jun 25 10:15:00 2013
ERROR: no read quorum in group: required 1, found 0 disks
NOTE: cache dismounting (not clean) group 12/0x0D73F50D (HXCXDATA)
NOTE: messaging CKPT to quiesce pins Unix process pid: 53483, image: oracle@zjsgdbnfwbsk01 (TNS V1-V3)
NOTE: dbwr not being msg'd to dismount
NOTE: lgwr not being msg'd to dismount
NOTE: cache dismounted group 12/0x0D73F50D (HXCXDATA)
NOTE: cache ending mount (fail) of group HXCXDATA number=12 incarn=0x0d73f50d
NOTE: cache deleting context for group HXCXDATA 12/0x0d73f50d
GMON dismounting group 12 at 54 for pid 42, osid 53483
NOTE: Disk in mode 0x8 marked for de-assignment
NOTE: Disk in mode 0x8 marked for de-assignment
NOTE: Disk in mode 0x8 marked for de-assignment
NOTE: Disk in mode 0x8 marked for de-assignment
NOTE: Disk in mode 0x8 marked for de-assignment
NOTE: Disk in mode 0x8 marked for de-assignment
NOTE: Disk in mode 0x8 marked for de-assignment
NOTE: Disk in mode 0x8 marked for de-assignment
ERROR: diskgroup HXCXDATA was not mounted
ORA-15032: not all alterations performed
ORA-15017: diskgroup "HXCXDATA" cannot be mounted
ORA-15063: ASM discovered an insufficient number of disks for diskgroup "HXCXDATA"
ERROR: alter diskgroup HXCXDATA mount force
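For reference, the same error stack can be reproduced by issuing the mount by hand against the ASM instance. A minimal sketch, assuming the Grid Infrastructure environment is set and the local ASM SID is +ASM1 (the SID is an assumption for this environment):

# Reproduce the failing mount from the ASM instance (assumed SID +ASM1).
export ORACLE_SID=+ASM1
sqlplus / as sysasm <<EOF
alter diskgroup HXCXDATA mount force;
EOF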
Fault analysis:
Check the cluster resource state:
zjsgdbnfwbsk01:/home/grid$crsctl stat res -t
--------------------------------------------------------------------------------
NAME           TARGET  STATE        SERVER                   STATE_DETAILS
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.HXCXDATA.dg
               ONLINE  OFFLINE      zjsgdbnfwbsk01
               ONLINE  OFFLINE      zjsgdbnfwbsk02
ora.HXCXFRA.dg
               ONLINE  ONLINE       zjsgdbnfwbsk01
               ONLINE  ONLINE       zjsgdbnfwbsk02
ora.LISTENER.lsnr
               ONLINE  ONLINE       zjsgdbnfwbsk01
               ONLINE  ONLINE       zjsgdbnfwbsk02
ora.NFJQDATA.dg
               ONLINE  ONLINE       zjsgdbnfwbsk01
               ONLINE  ONLINE       zjsgdbnfwbsk02
ora.NFJQFRA.dg
               ONLINE  ONLINE       zjsgdbnfwbsk01
               ONLINE  ONLINE       zjsgdbnfwbsk02
ora.NFSCDATA.dg
               ONLINE  ONLINE       zjsgdbnfwbsk01
               ONLINE  ONLINE       zjsgdbnfwbsk02
ora.NFSCFRA.dg
               ONLINE  ONLINE       zjsgdbnfwbsk01
               ONLINE  ONLINE       zjsgdbnfwbsk02
ora.OCR_VOTE.dg
               ONLINE  ONLINE       zjsgdbnfwbsk01
               ONLINE  ONLINE       zjsgdbnfwbsk02
ora.SKYDATA.dg
               ONLINE  ONLINE       zjsgdbnfwbsk01
               ONLINE  ONLINE       zjsgdbnfwbsk02
ora.WBDATA.dg
               ONLINE  ONLINE       zjsgdbnfwbsk01
               ONLINE  ONLINE       zjsgdbnfwbsk02
ora.WBDSDATA.dg
               ONLINE  ONLINE       zjsgdbnfwbsk01
               ONLINE  ONLINE       zjsgdbnfwbsk02
ora.WBDSFRA.dg
               ONLINE  ONLINE       zjsgdbnfwbsk01
               ONLINE  ONLINE       zjsgdbnfwbsk02
ora.WBFRA.dg
               ONLINE  ONLINE       zjsgdbnfwbsk01
               ONLINE  ONLINE       zjsgdbnfwbsk02
ora.asm
               ONLINE  ONLINE       zjsgdbnfwbsk01           Started
               ONLINE  ONLINE       zjsgdbnfwbsk02           Started
ora.gsd
               OFFLINE OFFLINE      zjsgdbnfwbsk01
               OFFLINE OFFLINE      zjsgdbnfwbsk02
ora.net1.network
               ONLINE  ONLINE       zjsgdbnfwbsk01
               ONLINE  ONLINE       zjsgdbnfwbsk02
ora.ons
               ONLINE  ONLINE       zjsgdbnfwbsk01
               ONLINE  ONLINE       zjsgdbnfwbsk02
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
      1        ONLINE  ONLINE       zjsgdbnfwbsk02
ora.bjsgsky.db
      1        ONLINE  ONLINE       zjsgdbnfwbsk01           Open
      2        ONLINE  ONLINE       zjsgdbnfwbsk02           Open
ora.cvu
      1        ONLINE  ONLINE       zjsgdbnfwbsk02
ora.oc4j
      1        ONLINE  ONLINE       zjsgdbnfwbsk02
ora.scan1.vip
      1        ONLINE  ONLINE       zjsgdbnfwbsk02
ora.zjsgdbnfwbsk01.vip
      1        ONLINE  ONLINE       zjsgdbnfwbsk01
ora.zjsgdbnfwbsk02.vip
      1        ONLINE  ONLINE       zjsgdbnfwbsk02
ora.zjsghxjq.db
      1        ONLINE  OFFLINE                               Instance Shutdown
      2        ONLINE  OFFLINE                               Instance Shutdown
ora.zjsgnfzc.db
      1        ONLINE  ONLINE       zjsgdbnfwbsk01           Open
      2        ONLINE  ONLINE       zjsgdbnfwbsk02           Open
ora.zjsgnzjq.db
      1        ONLINE  ONLINE       zjsgdbnfwbsk01           Open
      2        ONLINE  ONLINE       zjsgdbnfwbsk02           Open
ora.zjsgwbds.db
      1        ONLINE  ONLINE       zjsgdbnfwbsk01           Open
      2        ONLINE  ONLINE       zjsgdbnfwbsk02           Open
ora.zjsgwbjh.db
      1        ONLINE  ONLINE       zjsgdbnfwbsk01           Open
      2        ONLINE  ONLINE       zjsgdbnfwbsk02           Open
From this output we can see that the database cannot start because the HXCXDATA disk group is not mounted, and a manual mount attempt fails with an "insufficient number of disks" error. The next step is to find out which disks this disk group is supposed to contain. Searching the ASM alert log revealed the following:
SQL> CREATE DISKGROUP HXCXDATA EXTERNAL REDUNDANCY
     DISK '/dev/sddlmba' SIZE 542720M,
          '/dev/sddlmbb' SIZE 542720M,
          '/dev/sddlmbc' SIZE 542720M,
          '/dev/sddlmbd' SIZE 5120M,
          '/dev/sddlmbe' SIZE 5120M,
          '/dev/sddlmbf' SIZE 542720M,
          '/dev/sddlmbg' SIZE 542720M,
          '/dev/sddlmbh' SIZE 542720M,
          '/dev/sddlmbi' SIZE 542720M,
          '/dev/sddlmbj' SIZE 542720M,
          '/dev/sddlmbk' SIZE 542720M,
          '/dev/sddlmbl' SIZE 542720M,
          '/dev/sddlmbm' SIZE 542720M,
          '/dev/sddlmbn' SIZE 542720M
     ATTRIBUTE 'compatible.asm'='11.2.0.0.0', 'au_size'='4M' /* ASMCA */

SQL> ALTER DISKGROUP HXCXDATA DROP DISK 'HXCXDATA_0011','HXCXDATA_0012','HXCXDATA_0013' /* ASMCA */
SUCCESS: ALTER DISKGROUP HXCXDATA DROP DISK 'HXCXDATA_0011','HXCXDATA_0012','HXCXDATA_0013' /* ASMCA */
SQL> ALTER DISKGROUP HXCXDATA DROP DISK 'HXCXDATA_0003','HXCXDATA_0004' /* ASMCA */
SUCCESS: ALTER DISKGROUP HXCXDATA DROP DISK 'HXCXDATA_0003','HXCXDATA_0004' /* ASMCA */
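The creation and drop history above can be recovered by searching the ASM alert log. A minimal sketch, assuming the default 11.2 diag destination for the first ASM instance (the exact path depends on ORACLE_BASE and the ASM SID):

# Locate the CREATE/ALTER DISKGROUP statements for HXCXDATA in the ASM alert log.
grep -n "DISKGROUP HXCXDATA" $ORACLE_BASE/diag/asm/+asm/+ASM1/trace/alert_+ASM1.log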
From these statements we can see that disk group HXCXDATA was created with 14 disks and that 5 of them were later dropped, so 9 disks should remain. Verifying against the view v$asm_disk, however, only 8 disks show up as MEMBER. The query result is as follows:
SQL> select path,name,header_status from v$asm_disk;

PATH                           NAME                 HEADER_STATUS
------------------------------ -------------------- --------------------
/dev/sddlmbb                                        MEMBER
/dev/sddlmbc                                        MEMBER
/dev/sddlmbo                                        FORMER
/dev/sddlmbd                                        FORMER
/dev/sddlmbf                                        MEMBER
/dev/sddlmbg                                        MEMBER
/dev/sddlmbj                                        MEMBER
/dev/sddlmbi                                        MEMBER
/dev/sddlmbh                                        MEMBER
/dev/sddlmba1                                       CANDIDATE
/dev/sddlmba                                        CANDIDATE   <<<<< this disk is abnormal

PATH                           NAME                 HEADER_STATUS
------------------------------ -------------------- --------------------
/dev/sddlmap                                        FORMER
/dev/sddlmbk                                        MEMBER
/dev/sddlmac                   WBDATA_0000          MEMBER
/dev/sddlmab                   NFSCFRA_0000         MEMBER
/dev/sddlmaa                   NFJQDATA_0000        MEMBER
/dev/sddlmao                   HXCXFRA_0000         MEMBER
/dev/sddlmam                   WBFRA_0000           MEMBER
/dev/sddlmal                   NFJQFRA_0000         MEMBER
/dev/sddlmak                   NFSCDATA_0001        MEMBER
/dev/sddlmaj                   NFSCDATA_0000        MEMBER
/dev/sddlmai                   WBDATA_0002          MEMBER

PATH                           NAME                 HEADER_STATUS
------------------------------ -------------------- --------------------
/dev/sddlmah                   WBDATA_0001          MEMBER
/dev/sddlmag                   SKYDATA_0000         MEMBER
/dev/sddlmaf                   OCR_VOTE_0002        MEMBER
/dev/sddlmae                   OCR_VOTE_0001        MEMBER
/dev/sddlmad                   OCR_VOTE_0000        MEMBER
/dev/sddlmbm                   WBDSDATA_0001        MEMBER
/dev/sddlmbn                   WBDSDATA_0000        MEMBER
/dev/sddlmbl                   WBDSFRA_0001         MEMBER
/dev/sddlmbe                   WBDSFRA_0000         MEMBER

31 rows selected.
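To confirm which disk group each surviving MEMBER disk belongs to, the relevant header fields can be dumped with kfed. A minimal sketch, looping over the eight member devices listed above (run as the Grid software owner, with kfed on the PATH):

# Print group name, disk name, failgroup, disk number and header status for each disk.
for d in /dev/sddlmbb /dev/sddlmbc /dev/sddlmbf /dev/sddlmbg \
         /dev/sddlmbh /dev/sddlmbi /dev/sddlmbj /dev/sddlmbk
do
    echo "==== $d ===="
    kfed read $d | egrep "grpname|dskname|fgname|dsknum|hdrsts"
done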
Reading the headers of these 8 disks confirms that they all belong to disk group HXCXDATA, so the group is indeed one disk short of the expected 9. Comparing against the creation script, the missing disk is clearly /dev/sddlmba, which is why the disk group cannot mount. Examining /dev/sddlmba with kfed returns the following errors:
zjsgdbnfwbsk01:/home/grid$kfed read /dev/sddlmba text=sddlmba.txt
KFED-00322: Invalid content encountered during block traversal: [kfbtTraverseBlock][Invalid OSM block type][][0]
zjsgdbnfwbsk01:/home/grid$
zjsgdbnfwbsk01:/home/grid$kfed repair /dev/sddlmba
KFED-00320: Invalid block num1 = [3], num2 = [1], error = [type_kfbh]
So the disk header of /dev/sddlmba is definitely corrupted, and kfed's repair option cannot fix it automatically.
Solution:
The fix is to rebuild the damaged disk header by hand, using the header of another disk in the same disk group as a reference.
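Before making any change, it is prudent to save a copy of the current (corrupted) header area of /dev/sddlmba so the modification can be rolled back if necessary. A minimal sketch using dd; the 1 MB backup size is an arbitrary safety margin (the disk header itself sits in the first 4 KB block):

# Save the first 1 MB of the damaged disk before overwriting anything on it.
dd if=/dev/sddlmba of=/tmp/sddlmba.header.bak bs=4096 count=256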
Read the header of a healthy disk in the group into a text file:
zjsgdbnfwbsk01:/home/grid$kfed read /dev/sddlmbb text=sddlmba.txt
zjsgdbnfwbsk01:/home/grid$
Then edit sddlmba.txt so that the values taken from /dev/sddlmbb are replaced with those that belong to /dev/sddlmba (the fields marked with <<<<<< are the ones that were modified):
kfbh.endian:                  1 ; 0x000: 0x01
kfbh.hard:                  130 ; 0x001: 0x82
kfbh.type:                    1 ; 0x002: KFBTYP_DISKHEAD
kfbh.datfmt:                  1 ; 0x003: 0x01
kfbh.block.blk:               0 ; 0x004: blk=0
kfbh.block.obj:      2147483648 ; 0x008: disk=0                             <<<<<<
kfbh.check:          1084272197 ; 0x00c: 0x40a0ae45
kfbh.fcn.base:                0 ; 0x010: 0x00000000
kfbh.fcn.wrap:                0 ; 0x014: 0x00000000
kfbh.spare1:                  0 ; 0x018: 0x00000000
kfbh.spare2:                  0 ; 0x01c: 0x00000000
kfdhdb.driver.provstr: ORCLDISK ; 0x000: length=8
kfdhdb.driver.reserved[0]:    0 ; 0x008: 0x00000000
kfdhdb.driver.reserved[1]:    0 ; 0x00c: 0x00000000
kfdhdb.driver.reserved[2]:    0 ; 0x010: 0x00000000
kfdhdb.driver.reserved[3]:    0 ; 0x014: 0x00000000
kfdhdb.driver.reserved[4]:    0 ; 0x018: 0x00000000
kfdhdb.driver.reserved[5]:    0 ; 0x01c: 0x00000000
kfdhdb.compat:        186646528 ; 0x020: 0x0b200000
kfdhdb.dsknum:                0 ; 0x024: 0x0000
kfdhdb.grptyp:                1 ; 0x026: KFDGTP_EXTERNAL
kfdhdb.hdrsts:                3 ; 0x027: KFDHDR_MEMBER
kfdhdb.dskname:   HXCXDATA_0000 ; 0x028: length=13                          <<<<<<
kfdhdb.grpname:        HXCXDATA ; 0x048: length=8                           <<<<<<
kfdhdb.fgname:    HXCXDATA_0000 ; 0x068: length=13                          <<<<<<
kfdhdb.capname:                 ; 0x088: length=0
kfdhdb.crestmp.hi:     32986850 ; 0x0a8: HOUR=0x2 DAYS=0x17 MNTH=0x5 YEAR=0x7dd   <<<<<<
kfdhdb.crestmp.lo:   3357803520 ; 0x0ac: USEC=0x0 MSEC=0x101 SECS=0x2 MINS=0x32   <<<<<<
kfdhdb.mntstmp.hi:     32987330 ; 0x0b0: HOUR=0x2 DAYS=0x6 MNTH=0x6 YEAR=0x7dd    <<<<<<
kfdhdb.mntstmp.lo:   3423425536 ; 0x0b4: USEC=0x0 MSEC=0x355 SECS=0x0 MINS=0x33   <<<<<<
kfdhdb.secsize:             512 ; 0x0b8: 0x0200
kfdhdb.blksize:            4096 ; 0x0ba: 0x1000
kfdhdb.ausize:          4194304 ; 0x0bc: 0x00400000
kfdhdb.mfact:            454272 ; 0x0c0: 0x0006ee80
kfdhdb.dsksize:          135680 ; 0x0c4: 0x00021200
kfdhdb.pmcnt:                 2 ; 0x0c8: 0x00000002
kfdhdb.fstlocn:               1 ; 0x0cc: 0x00000001
kfdhdb.altlocn:               2 ; 0x0d0: 0x00000002
kfdhdb.f1b1locn:              2 ; 0x0d4: 0x00000002                         <<<<<<
kfdhdb.redomirrors[0]:        0 ; 0x0d8: 0x0000
kfdhdb.redomirrors[1]:        0 ; 0x0da: 0x0000
kfdhdb.redomirrors[2]:        0 ; 0x0dc: 0x0000
kfdhdb.redomirrors[3]:        0 ; 0x0de: 0x0000
kfdhdb.dbcompat:      168820736 ; 0x0e0: 0x0a100000
kfdhdb.grpstmp.hi:     32986850 ; 0x0e4: HOUR=0x2 DAYS=0x17 MNTH=0x5 YEAR=0x7dd
kfdhdb.grpstmp.lo:   3356350464 ; 0x0e8: USEC=0x0 MSEC=0x376 SECS=0x0 MINS=0x32
kfdhdb.vfstart:               0 ; 0x0ec: 0x00000000
kfdhdb.vfend:                 0 ; 0x0f0: 0x00000000
kfdhdb.spfile:               18 ; 0x0f4: 0x00000012                         <<<<<<
kfdhdb.spfflg:                1 ; 0x0f8: 0x00000001                         <<<<<<
kfdhdb.ub4spare[0]:           0 ; 0x0fc: 0x00000000
kfdhdb.ub4spare[1]:           0 ; 0x100: 0x00000000
kfdhdb.ub4spare[2]:           0 ; 0x104: 0x00000000
kfdhdb.ub4spare[3]:           0 ; 0x108: 0x00000000
kfdhdb.ub4spare[4]:           0 ; 0x10c: 0x00000000
kfdhdb.ub4spare[5]:           0 ; 0x110: 0x00000000
kfdhdb.ub4spare[6]:           0 ; 0x114: 0x00000000
kfdhdb.ub4spare[7]:           0 ; 0x118: 0x00000000
kfdhdb.ub4spare[8]:           0 ; 0x11c: 0x00000000
kfdhdb.ub4spare[9]:           0 ; 0x120: 0x00000000
kfdhdb.ub4spare[10]:          0 ; 0x124: 0x00000000
kfdhdb.ub4spare[11]:          0 ; 0x128: 0x00000000
kfdhdb.ub4spare[12]:          0 ; 0x12c: 0x00000000
kfdhdb.ub4spare[13]:          0 ; 0x130: 0x00000000
kfdhdb.ub4spare[14]:          0 ; 0x134: 0x00000000
kfdhdb.ub4spare[15]:          0 ; 0x138: 0x00000000
kfdhdb.ub4spare[16]:          0 ; 0x13c: 0x00000000
kfdhdb.ub4spare[17]:          0 ; 0x140: 0x00000000
kfdhdb.ub4spare[18]:          0 ; 0x144: 0x00000000
kfdhdb.ub4spare[19]:          0 ; 0x148: 0x00000000
kfdhdb.ub4spare[20]:          0 ; 0x14c: 0x00000000
kfdhdb.ub4spare[21]:          0 ; 0x150: 0x00000000
kfdhdb.ub4spare[22]:          0 ; 0x154: 0x00000000
kfdhdb.ub4spare[23]:          0 ; 0x158: 0x00000000
kfdhdb.ub4spare[24]:          0 ; 0x15c: 0x00000000
kfdhdb.ub4spare[25]:          0 ; 0x160: 0x00000000
kfdhdb.ub4spare[26]:          0 ; 0x164: 0x00000000
kfdhdb.ub4spare[27]:          0 ; 0x168: 0x00000000
kfdhdb.ub4spare[28]:          0 ; 0x16c: 0x00000000
kfdhdb.ub4spare[29]:          0 ; 0x170: 0x00000000
kfdhdb.ub4spare[30]:          0 ; 0x174: 0x00000000
kfdhdb.ub4spare[31]:          0 ; 0x178: 0x00000000
kfdhdb.ub4spare[32]:          0 ; 0x17c: 0x00000000
kfdhdb.ub4spare[33]:          0 ; 0x180: 0x00000000
kfdhdb.ub4spare[34]:          0 ; 0x184: 0x00000000
kfdhdb.ub4spare[35]:          0 ; 0x188: 0x00000000
kfdhdb.ub4spare[36]:          0 ; 0x18c: 0x00000000
kfdhdb.ub4spare[37]:          0 ; 0x190: 0x00000000
kfdhdb.ub4spare[38]:          0 ; 0x194: 0x00000000
kfdhdb.ub4spare[39]:          0 ; 0x198: 0x00000000
kfdhdb.ub4spare[40]:          0 ; 0x19c: 0x00000000
kfdhdb.ub4spare[41]:          0 ; 0x1a0: 0x00000000
kfdhdb.ub4spare[42]:          0 ; 0x1a4: 0x00000000
kfdhdb.ub4spare[43]:          0 ; 0x1a8: 0x00000000
kfdhdb.ub4spare[44]:          0 ; 0x1ac: 0x00000000
kfdhdb.ub4spare[45]:          0 ; 0x1b0: 0x00000000
kfdhdb.ub4spare[46]:          0 ; 0x1b4: 0x00000000
kfdhdb.ub4spare[47]:          0 ; 0x1b8: 0x00000000
kfdhdb.ub4spare[48]:          0 ; 0x1bc: 0x00000000
kfdhdb.ub4spare[49]:          0 ; 0x1c0: 0x00000000
kfdhdb.ub4spare[50]:          0 ; 0x1c4: 0x00000000
kfdhdb.ub4spare[51]:          0 ; 0x1c8: 0x00000000
kfdhdb.ub4spare[52]:          0 ; 0x1cc: 0x00000000
kfdhdb.ub4spare[53]:          0 ; 0x1d0: 0x00000000
kfdhdb.acdb.aba.seq:          0 ; 0x1d4: 0x00000000
kfdhdb.acdb.aba.blk:          0 ; 0x1d8: 0x00000000
kfdhdb.acdb.ents:             0 ; 0x1dc: 0x0000
kfdhdb.acdb.ub2spare:         0 ; 0x1de: 0x0000
Write the edited header back to disk /dev/sddlmba:
$kfed merge /dev/sddlmba text=sddlmba.txt
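After the merge, it is worth re-reading the header to confirm that kfed now parses it cleanly (no KFED-00322) and that the key fields look right before trying to mount the disk group. A minimal sketch:

# The repaired header should now report KFBTYP_DISKHEAD / KFDHDR_MEMBER
# with dskname HXCXDATA_0000 and grpname HXCXDATA.
kfed read /dev/sddlmba | egrep "kfbh.type|dskname|grpname|fgname|dsknum|hdrsts"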
Bring the disk group back online:
zjsgdbnfwbsk01:/home/grid$crsctl stat res -t
--------------------------------------------------------------------------------
NAME           TARGET  STATE        SERVER                   STATE_DETAILS
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.HXCXDATA.dg
               ONLINE  OFFLINE      zjsgdbnfwbsk01
               ONLINE  OFFLINE      zjsgdbnfwbsk02
ora.HXCXFRA.dg
               ONLINE  ONLINE       zjsgdbnfwbsk01
               ONLINE  ONLINE       zjsgdbnfwbsk02
ora.LISTENER.lsnr
               ONLINE  ONLINE       zjsgdbnfwbsk01
               ONLINE  ONLINE       zjsgdbnfwbsk02
ora.NFJQDATA.dg
               ONLINE  ONLINE       zjsgdbnfwbsk01
               ONLINE  ONLINE       zjsgdbnfwbsk02
ora.NFJQFRA.dg
               ONLINE  ONLINE       zjsgdbnfwbsk01
               ONLINE  ONLINE       zjsgdbnfwbsk02
ora.NFSCDATA.dg
               ONLINE  ONLINE       zjsgdbnfwbsk01
               ONLINE  ONLINE       zjsgdbnfwbsk02
ora.NFSCFRA.dg
               ONLINE  ONLINE       zjsgdbnfwbsk01
               ONLINE  ONLINE       zjsgdbnfwbsk02
ora.OCR_VOTE.dg
               ONLINE  ONLINE       zjsgdbnfwbsk01
               ONLINE  ONLINE       zjsgdbnfwbsk02
ora.SKYDATA.dg
               ONLINE  ONLINE       zjsgdbnfwbsk01
               ONLINE  ONLINE       zjsgdbnfwbsk02
ora.WBDATA.dg
               ONLINE  ONLINE       zjsgdbnfwbsk01
               ONLINE  ONLINE       zjsgdbnfwbsk02
ora.WBDSDATA.dg
               ONLINE  ONLINE       zjsgdbnfwbsk01
               ONLINE  ONLINE       zjsgdbnfwbsk02
ora.WBDSFRA.dg
               ONLINE  ONLINE       zjsgdbnfwbsk01
               ONLINE  ONLINE       zjsgdbnfwbsk02
ora.WBFRA.dg
               ONLINE  ONLINE       zjsgdbnfwbsk01
               ONLINE  ONLINE       zjsgdbnfwbsk02
ora.asm
               ONLINE  ONLINE       zjsgdbnfwbsk01           Started
               ONLINE  ONLINE       zjsgdbnfwbsk02           Started
ora.gsd
               OFFLINE OFFLINE      zjsgdbnfwbsk01
               OFFLINE OFFLINE      zjsgdbnfwbsk02
ora.net1.network
               ONLINE  ONLINE       zjsgdbnfwbsk01
               ONLINE  ONLINE       zjsgdbnfwbsk02
ora.ons
               ONLINE  ONLINE       zjsgdbnfwbsk01
               ONLINE  ONLINE       zjsgdbnfwbsk02
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
      1        ONLINE  ONLINE       zjsgdbnfwbsk02
ora.bjsgsky.db
      1        ONLINE  ONLINE       zjsgdbnfwbsk01           Open
      2        ONLINE  ONLINE       zjsgdbnfwbsk02           Open
ora.cvu
      1        ONLINE  ONLINE       zjsgdbnfwbsk02
ora.oc4j
      1        ONLINE  ONLINE       zjsgdbnfwbsk02
ora.scan1.vip
      1        ONLINE  ONLINE       zjsgdbnfwbsk02
ora.zjsgdbnfwbsk01.vip
      1        ONLINE  ONLINE       zjsgdbnfwbsk01
ora.zjsgdbnfwbsk02.vip
      1        ONLINE  ONLINE       zjsgdbnfwbsk02
ora.zjsghxjq.db
      1        ONLINE  OFFLINE                               Instance Shutdown
      2        ONLINE  OFFLINE                               Instance Shutdown
ora.zjsgnfzc.db
      1        ONLINE  ONLINE       zjsgdbnfwbsk01           Open
      2        ONLINE  ONLINE       zjsgdbnfwbsk02           Open
ora.zjsgnzjq.db
      1        ONLINE  ONLINE       zjsgdbnfwbsk01           Open
      2        ONLINE  ONLINE       zjsgdbnfwbsk02           Open
ora.zjsgwbds.db
      1        ONLINE  ONLINE       zjsgdbnfwbsk01           Open
      2        ONLINE  ONLINE       zjsgdbnfwbsk02           Open
ora.zjsgwbjh.db
      1        ONLINE  ONLINE       zjsgdbnfwbsk01           Open
      2        ONLINE  ONLINE       zjsgdbnfwbsk02           Open

zjsgdbnfwbsk01:/home/grid$crsctl start res ora.HXCXDATA.dg
CRS-2672: Attempting to start 'ora.HXCXDATA.dg' on 'zjsgdbnfwbsk02'
CRS-2672: Attempting to start 'ora.HXCXDATA.dg' on 'zjsgdbnfwbsk01'
CRS-2676: Start of 'ora.HXCXDATA.dg' on 'zjsgdbnfwbsk02' succeeded
CRS-2672: Attempting to start 'ora.zjsghxjq.db' on 'zjsgdbnfwbsk02'
CRS-2676: Start of 'ora.HXCXDATA.dg' on 'zjsgdbnfwbsk01' succeeded
CRS-2672: Attempting to start 'ora.zjsghxjq.db' on 'zjsgdbnfwbsk01'
CRS-2676: Start of 'ora.zjsghxjq.db' on 'zjsgdbnfwbsk02' succeeded
CRS-2676: Start of 'ora.zjsghxjq.db' on 'zjsgdbnfwbsk01' succeeded
zjsgdbnfwbsk01:/home/grid$

The disk group started successfully, and the database came up as well.
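The state of the database can then be double-checked with srvctl. A minimal sketch; the database name zjsghxjq is taken from the resource name ora.zjsghxjq.db above:

# Confirm that both instances of the recovered database are running.
srvctl status database -d zjsghxjq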
Summary and recommendations:
The importance of backups cannot be overstated: back up the database, and back up the ASM disk header information as well. Although patching the disk header got the database open again, there is no way to know whether the data area of that disk was also damaged; that can only be verified through the application. It is therefore recommended to restore the database from a backup afterwards to remove any remaining doubt about data consistency.
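As a concrete precaution, the header of every ASM disk can be exported regularly with kfed (and optionally copied raw with dd) so that a known-good reference is always on hand. A minimal sketch; the backup directory and the device name pattern are illustrative assumptions for this environment:

#!/bin/sh
# Dump each ASM disk header to a dated directory, both as a kfed text dump
# and as a raw copy of block 0.
BACKUP_DIR=/backup/asm_headers/$(date +%Y%m%d)
mkdir -p $BACKUP_DIR
for d in /dev/sddlma? /dev/sddlmb?
do
    name=$(basename $d)
    kfed read $d text=$BACKUP_DIR/${name}.txt
    dd if=$d of=$BACKUP_DIR/${name}.blk0 bs=4096 count=1 2>/dev/null
done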