In this Document
  Purpose
  Troubleshooting Steps
    1. Check OS statistics from the evicted node from the time of the eviction
    2. Validate the interconnect network setup
    3. Check that the OS network settings are correct by running the ORAchk tool
    4. Check communication over the private network
    5. Platform specific checks
    6. Known issues which can cause NHB node eviction
    7. For more information
  References
Purpose

Missed Network Heartbeat (NHB) evictions happen when the Oracle Cluster Synchronization Services Daemon (ocssd) on a surviving node loses contact with the evicted node over the private network (interconnect). Cluster nodes must be able to communicate over the interconnect to avoid a "split brain" situation; in some cases a node will abort itself to prevent "split brain" when communication over the interconnect is compromised.

The most common (though not the only) cause of missed network heartbeats is a problem communicating over the private interconnect.

The purpose of this document is to provide steps for checking the network after a missed NHB eviction.
Troubleshooting Steps

1. Check OS statistics from the evicted node from the time of the eviction.

CHM (Cluster Health Monitor) is installed with 11.2 Grid Infrastructure on several platforms and versions, and gathers OS statistics in the background.
* Currently CHM is integrated with Grid Infrastructure for Linux and Solaris on 11.2.0.2+, and for AIX and Windows in 11.2.0.3+.
* If CHM is available on your platform, gather and review the CHM data from the time of the eviction as soon as possible after the eviction to prevent the data from aging out.
Reference Document 1328466.1: Cluster Health Monitor (CHM) FAQ
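As a sketch of gathering the CHM data promptly, the node views for the hour around the eviction can be dumped with oclumon. The Grid home path and the `-last` interval below are assumptions; adjust them for your installation.

```shell
# Dump CHM node views for the last hour so the data around the eviction
# is preserved before it ages out. GI_HOME and the "-last" interval are
# assumptions; point GI_HOME at your actual Grid Infrastructure home.
GI_HOME=${GI_HOME:-/u01/app/11.2.0/grid}

if [ -x "$GI_HOME/bin/oclumon" ]; then
    "$GI_HOME/bin/oclumon" dumpnodeview -allnodes -last "01:00:00" > chm_eviction.txt
    echo "CHM data saved to chm_eviction.txt"
else
    echo "oclumon not found under $GI_HOME; set GI_HOME to your Grid home"
fi
```

Save the output with the incident data you upload for analysis.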
For platforms where CHM is not available, OS Watcher (OSW) can be used.
* If OS Watcher is installed, check the oswnetstat and oswprvtnet output.
* If OS Watcher is not already installed, install it and set up oswprvtnet tracing to aid in future diagnosis.
Reference Document 301137.1: OSWatcher Black Box
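As an illustration of what to look for in the oswnetstat data, the sketch below pulls the UDP "packet receive errors" counter out of "netstat -su"-style output. The sample text is inlined so the snippet is self-contained; on a live node, run the same extraction against the files in the oswnetstat archive directory.

```shell
# Sample "netstat -su" output (illustrative numbers only); in practice
# this comes from successive oswnetstat snapshots around the eviction.
sample='Udp:
    184233 packets received
    12 packets to unknown port received.
    347 packet receive errors
    190112 packets sent'

# Extract the receive-error counter; a count that grows between
# snapshots suggests UDP receive buffers were overflowing.
udp_errors=$(printf '%s\n' "$sample" | awk '/packet receive errors/ {print $1}')
echo "UDP packet receive errors: $udp_errors"
```

Compare the counter across snapshots before and at the eviction time; a steady value is normal, a rising one points at the network or the UDP buffer settings checked in step 4.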
2. Validate the interconnect network setup.

Reference Document 1054902.1: How to Validate Network and Name Resolution Setup for the Clusterware and RAC
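Part of validating the setup is confirming which interface and subnet the clusterware has registered as the private interconnect, as reported by "oifcfg getif". In this sketch the oifcfg output is inlined as a sample so the parsing runs anywhere; the interface names and subnets are illustrative only.

```shell
# On a live cluster, replace the sample with: $GI_HOME/bin/oifcfg getif
# Columns are: interface  subnet  scope  role
oifcfg_output='eth0  10.1.2.0  global  public
eth1  192.168.10.0  global  cluster_interconnect'

# Pick out the interface(s) registered as the private interconnect.
private_if=$(printf '%s\n' "$oifcfg_output" | awk '$4 == "cluster_interconnect" {print $1, $2}')
echo "Private interconnect interface/subnet: $private_if"
```

Verify that the reported interface and subnet match what is actually configured at the OS level on every node.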
3. Check that the OS network settings are correct by running the ORAchk tool.

Reference Document 1268927.1: ORAchk - RAC Configuration Audit Tool
4. Check communication over the private network.

In 11.2 and above, the clusterware uses UDP; in 10.1 through 11.1, it uses TCP.

To check UDP settings and communication, use this note:
Reference Document 563566.1: Troubleshooting gc block lost and Poor Network Performance in a RAC Environment
To check TCP communication over the private interconnect, use this note:
Reference Document 1445075.1: Node reboot or eviction: How to check if your private interconnect CRS can transmit network heartbeats
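On Linux, a quick related check is the kernel's UDP socket buffer limits. The 4194304-byte threshold below reflects Oracle's documented minimum for net.core.rmem_max on 11g RAC Linux systems; treat it as an assumption and confirm the current recommendation for your version in the notes above.

```shell
# Read the kernel's maximum socket receive/send buffer sizes.
# Undersized buffers make dropped UDP packets (and so missed
# heartbeats and gc block lost) more likely under load.
rmem_max=$(cat /proc/sys/net/core/rmem_max 2>/dev/null || echo 0)
wmem_max=$(cat /proc/sys/net/core/wmem_max 2>/dev/null || echo 0)

echo "net.core.rmem_max = $rmem_max"
echo "net.core.wmem_max = $wmem_max"

# 4194304 is the assumed minimum recommendation for 11g RAC on Linux.
if [ "$rmem_max" -lt 4194304 ]; then
    echo "WARNING: rmem_max is below 4194304; consider raising it"
fi
```

If the limits are too low, set them persistently in /etc/sysctl.conf on every node.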
5. Platform specific checks.

AIX: check that all the fixes from the following note are applied:
Reference Document 1427855.1: AIX: Top Things to DO NOW to Stabilize 11gR2 GI/RAC Cluster
6. Known issues which can cause NHB node eviction.

For 11.2.0.2 and above:
Reference Document 1481481.1: 11gR2 CSS Terminates/Node Eviction After Unplugging one Network Cable in Redundant Interconnect Environment
Also see the "known issues" section of:
Reference Document 1210883.1: 11gR2 Grid Infrastructure Redundant Interconnect and ora.cluster_interconnect.haip
7. For more information.

In addition to the above, see the following note:
Reference Document 1367153.1: Top 5 Issues That Cause Node Reboots or Evictions or Unexpected Recycle of CRS
References

For 10.1 - 11.1:
Document 265769.1: Troubleshooting 10g and 11.1 Clusterware Reboots

For 11.2 and above:
Document 1050693.1: Troubleshooting 11.2 Clusterware Node Evictions (Reboots)
Document 1507482.1: Oracle Clusterware Cannot Start on all Nodes: Network communication with node <NAME> missing for 90% of timeout interval