CRS-4639: Could not contact Oracle High Availability Services
Cause: CRS is not running.
Method 1. This is caused by a known Oracle bug.
Before starting CRS, run:
/bin/dd if=/var/tmp/.oracle/npohasd of=/dev/null bs=1024 count=1 (to have this run automatically after a reboot, add it to rc.local on both nodes; see the sketch below)
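A minimal sketch of the rc.local entry (assuming the default /etc/rc.d/rc.local location and the pipe path above; adjust for your environment). The loop waits for ohasd to create the pipe before reading from it:

# /etc/rc.d/rc.local -- sketch; add the same on both nodes
(
  # wait for ohasd to create the named pipe, then drain it so startup can proceed
  while [ ! -e /var/tmp/.oracle/npohasd ]; do
    sleep 5
  done
  /bin/dd if=/var/tmp/.oracle/npohasd of=/dev/null bs=1024 count=1
) &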
Then run:
[root@rac2 bin]# ./crsctl start crs
If Method 1 does not work, try Method 2: deconfigure the stack and reconfigure it.
[root@rac2 install]# /u01/app/11.2.0/grid/crs/install/roothas.pl -deconfig -force -verbose
Then reconfigure by rerunning root.sh:
[root@rac2 grid]# ./root.sh
Running Oracle 11g root.sh script...
The following environment variables are set as:
ORACLE_OWNER= oracle
ORACLE_HOME= /u01/app/11.2.0/grid
...
That completes Method 2.
On CentOS 6 and later you also need to run:
[root@rac2 ~]# /bin/dd if=/var/tmp/.oracle/npohasd of=/dev/null bs=1024 count=1
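The reason CentOS/RHEL 6 needs extra handling is that Upstart replaced SysV init there, so the respawn entry that root.sh adds for init.ohasd in /etc/inittab is not honored. A commonly cited alternative to the dd workaround is an Upstart job; this sketch is based on that workaround rather than on the original note, so verify it against your platform documentation:

# /etc/init/oracle-ohasd.conf -- sketch for RHEL/CentOS 6 (Upstart)
start on runlevel [35]
stop on runlevel [!35]
respawn
exec /etc/init.d/init.ohasd run >/dev/null 2>&1 </dev/null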
Check the result:
[root@rac2 bin]# ./crsctl check crs
CRS-4638: Oracle High Availability Services is online
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
Also note: if CRS still fails to start after a server reboot, run the following command before starting CRS (on both nodes):
/bin/dd if=/var/tmp/.oracle/npohasd of=/dev/null bs=1024 count=1
The root cause is an Oracle bug.
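The mechanism, as this bug is usually described for 11.2.0.1, is that /var/tmp/.oracle/npohasd is a named pipe (FIFO): ohasd blocks on it during startup until something reads from the other end, and the dd above is that reader. You can confirm the file type on your own system (a leading p in the mode bits marks a FIFO):

# file type check -- output omitted here; look for 'p' at the start of the mode
ls -l /var/tmp/.oracle/npohasd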
[root@rac2 bin]# ./crsctl check crs
CRS-4638: Oracle High Availability Services is online
CRS-4535: Cannot communicate with Cluster Ready Services
CRS-4530: Communications failure contacting Cluster Synchronization Services daemon
CRS-4534: Cannot communicate with Event Manager
This problem can have several causes:
1. The check was run immediately after ./crsctl start crs; the daemons need time to come up, so wait and retry (see the sketch after the listing below).
2. Check node connectivity (public and private networks) and the ownership and permissions of the ASM disks:
[root@rac2 disks]# ll
total 0
brw-rw---- 1 oracle dba 8, 17 Feb 10 23:47 DISK5
brw-rw---- 1 oracle dba 8, 33 Feb 10 23:47 DISK6
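For cause 1, a retry loop beats guessing how long to wait; for cause 2, the listing above shows the expected ownership (oracle:dba in this install; grid:asmadmin is also common). A hedged sketch -- the device path and disk names are taken from the listing and may differ on your system:

# cause 1: poll until crsctl stops reporting communication failures
# (assumes OHAS itself is already online, as in the output above)
while ./crsctl check crs | grep -q "Cannot communicate"; do
  sleep 5
done

# cause 2: restore disk ownership/permissions if a reboot reset them
chown oracle:dba /dev/oracleasm/disks/DISK[56]   # path assumed (ASMLib default)
chmod 660 /dev/oracleasm/disks/DISK[56]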
Starting the ASM instance raises the following error:
[grid@b1 ~]$ sqlplus / as sysasm
SQL*Plus: Release 11.2.0.1.0 Production on Thu Sep 12 18:14:13 2013
Copyright (c) 1982, 2009, Oracle. All rights reserved.
Connected to an idle instance.
SQL> startup
ORA-01078: failure in processing system parameters
ORA-29701: unable to connect to Cluster Synchronization Service
Checking with crsctl check css then reports:
[grid@b1 ~]$ crsctl check css
CRS-4639: Could not contact Oracle High Availability Services
CRS-4000: Command Check failed, or completed with errors.
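Before resorting to deconfiguration, it can be worth confirming whether ohasd is running at all and whether its respawn entry is present (the inittab check applies only to SysV-init systems, not Upstart-based ones):

# is ohasd (or its init.ohasd wrapper) running?
ps -ef | grep -v grep | grep ohasd
# is the respawn entry still in place? (SysV init only)
grep ohasd /etc/inittab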
The procedure used to resolve CRS-4639: Could not contact Oracle High Availability Services was as follows:
[root@b1 grid]# cd /u01/app/11.2.0/grid/crs/install
[root@b1 install]# ./roothas.pl -deconfig -force -verbose
2013-09-12 19:25:05: Checking for super user privileges
2013-09-12 19:25:05: User has super user privileges
2013-09-12 19:25:05: Parsing the host name
Using configuration parameter file: ./crsconfig_params
CRS-4639: Could not contact Oracle High Availability Services
CRS-4000: Command Stop failed, or completed with errors.
CRS-4639: Could not contact Oracle High Availability Services
CRS-4000: Command Delete failed, or completed with errors.
Failure at scls_scr_getval with code 1
Internal Error Information:
Category: -2
Operation: opendir
Location: scrsearch1
Other: cant open scr home dir scls_scr_getval
System Dependent Information: 2
CRS-4544: Unable to connect to OHAS
CRS-4000: Command Stop failed, or completed with errors.
ACFS-9200: Supported
Successfully deconfigured Oracle Restart stack
[root@b1 install]# cd /u01/app/11.2.0/grid/
[root@b1 grid]# ./root.sh
Running Oracle 11g root.sh script...
The following environment variables are set as:
ORACLE_OWNER= grid
ORACLE_HOME= /u01/app/11.2.0/grid
Enter the full pathname of the local bin directory: [/usr/local/bin]:
The file "dbhome" already exists in /usr/local/bin. Overwrite it? (y/n)
[n]: y
Copying dbhome to /usr/local/bin ...
The file "oraenv" already exists in /usr/local/bin. Overwrite it? (y/n)
[n]: y
Copying oraenv to /usr/local/bin ...
The file "coraenv" already exists in /usr/local/bin. Overwrite it? (y/n)
[n]: y
Copying coraenv to /usr/local/bin ...
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root.sh script.
Now product-specific root actions will be performed.
2013-09-12 19:27:31: Checking for super user privileges
2013-09-12 19:27:31: User has super user privileges
2013-09-12 19:27:31: Parsing the host name
Using configuration parameter file: /u01/app/11.2.0/grid/crs/install/crsconfig_params
LOCAL ADD MODE
Creating OCR keys for user 'grid', privgrp 'oinstall'..
Operation successful.
CRS-4664: Node b1 successfully pinned.
Adding daemon to inittab
CRS-4123: Oracle High Availability Services has been started.
ohasd is starting
b1 2013/09/12 19:29:12 /u01/app/11.2.0/grid/cdata/b1/backup_20130912_192912.olr
Successfully configured Oracle Grid Infrastructure for a Standalone Server
Updating inventory properties for clusterware
Starting Oracle Universal Installer...
Checking swap space: must be greater than 500 MB. Actual 4094 MB Passed
The inventory pointer is located at /etc/oraInst.loc
The inventory is located at /u01/app/oraInventory
'UpdateNodeList' was successful.
[grid@b1 ~]$ crs_stat -t
Name Type Target State Host
------------------------------------------------------------
ora.cssd ora.cssd.type OFFLINE OFFLINE
ora.diskmon ora....on.type OFFLINE OFFLINE
[grid@b1 ~]$ crs_start -all
Attempting to start `ora.diskmon` on member `b1`
Attempting to start `ora.cssd` on member `b1`
Start of `ora.diskmon` on member `b1` succeeded.
Start of `ora.cssd` on member `b1` succeeded.
[grid@b1 ~]$ sqlplus / as sysasm
SQL*Plus: Release 11.2.0.1.0 Production on Thu Sep 12 19:34:50 2013
Copyright (c) 1982, 2009, Oracle. All rights reserved.
Connected to an idle instance.
SQL> startup
ASM instance started
Total System Global Area 283930624 bytes
Fixed Size 2212656 bytes
Variable Size 256552144 bytes
ASM Cache 25165824 bytes
ASM diskgroups mounted
ASM diskgroups volume enabled
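As an optional final check, confirm that the restart stack and its resources are healthy (output omitted):

[grid@b1 ~]$ crsctl check has
[grid@b1 ~]$ crs_stat -t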