Remove a Node from Oracle RAC 11gR2 (11.2.0.3) on Oracle Linux 6
This guide shows how to remove a node from an existing 11gR2 Oracle RAC cluster. It is assumed that the node being removed is available and is not part of a GNS/Grid Plug and Play cluster; in other words, the database is "administrator-managed". The database software is also non-shared. The example cluster is a 3-node cluster running Oracle Linux 6.3 (x64) with nodes "node1", "node2", and "node3"; we will be removing "node3" from the cluster.
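For orientation, these are the homes used throughout the guide, as seen in the pasted sessions below. The variable names are only for this sketch (the sessions themselves rely on each user's own $ORACLE_HOME); adjust the paths to your environment:

# Paths assumed throughout this guide (taken from the output below)
GRID_HOME=/u01/app/11.2.0/grid                       # Grid Infrastructure home (grid user)
DB_HOME=/u01/app/oracle/product/11.2.0/dbhome_1      # Database home (oracle user)
# In the sessions below, $ORACLE_HOME points at the database home for the
# oracle user and at the Grid home for the grid user.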
Delete Node from Cluster
"Unpin" node
"Unpin" the node – in our case "node3" – from all nodes that are to remain in the cluster; in this case, "node1" and "node2". Specify the node you plan on deleting in the command and do so on each remaining node in the cluster.
[root@node1 ~]# /u01/app/11.2.0/grid/bin/crsctl unpin css -n node3
CRS-4667: Node node3 successfully unpinned.

[root@node2 ~]# /u01/app/11.2.0/grid/bin/crsctl unpin css -n node3
CRS-4667: Node node3 successfully unpinned.
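To confirm the pin state before and after, olsnodes reports it from any node that is still in the cluster; a quick check (sketch, assuming the same Grid home path used above):

# Run as root (or the grid owner) on node1 or node2
/u01/app/11.2.0/grid/bin/olsnodes -t -s
# Every remaining node should show "Active  Unpinned"; node3 will still be
# listed here until it is deleted from the cluster later in this guide.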
Remove the "zhongwc3" instance from the "zhongwc" database using "dbca" in "Silent Mode"
From one of the remaining nodes, as the oracle user, delete the zhongwc3 instance with dbca in silent mode:

[oracle@node1 ~]$ dbca -silent -deleteInstance -nodeList node3 -gdbName zhongwc -instanceName zhongwc3 -sysDBAUserName sys -sysDBAPassword oracle
Deleting instance
1% complete
2% complete
6% complete
13% complete
20% complete
26% complete
33% complete
40% complete
46% complete
53% complete
60% complete
66% complete
Completing instance management.
100% complete
Look at the log file "/u01/app/oracle/cfgtoollogs/dbca/zhongwc.log" for further details.
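If anything looks off, the log named in the last line has the detail; a trivial check (sketch):

# Inspect the tail of the dbca log reported above
tail -n 50 /u01/app/oracle/cfgtoollogs/dbca/zhongwc.log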
Confirm from gv$instance that only the two remaining instances are running:

[oracle@node1 ~]$ sqlplus / as sysdba

SQL*Plus: Release 11.2.0.3.0 Production on Sat Jan 5 15:20:12 2013

Copyright (c) 1982, 2011, Oracle.  All rights reserved.

Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
With the Partitioning, Real Application Clusters, Automatic Storage Management,
OLAP, Data Mining and Real Application Testing options

SQL> alter session set nls_date_format='yyyy-mm-dd hh24:mi:ss';

Session altered.

SQL> col host_name format a11
SQL> set line 300
SQL> select INSTANCE_NAME,HOST_NAME,VERSION,STARTUP_TIME,STATUS,ACTIVE_STATE,INSTANCE_ROLE,DATABASE_STATUS from gv$INSTANCE;

INSTANCE_NAME    HOST_NAME   VERSION           STARTUP_TIME        STATUS       ACTIVE_ST INSTANCE_ROLE      DATABASE_STATUS
---------------- ----------- ----------------- ------------------- ------------ --------- ------------------ -----------------
zhongwc1         node1       11.2.0.3.0        2013-01-05 09:53:24 OPEN         NORMAL    PRIMARY_INSTANCE   ACTIVE
zhongwc2         node2       11.2.0.3.0        2013-01-04 17:34:40 OPEN         NORMAL    PRIMARY_INSTANCE   ACTIVE
srvctl shows that the database configuration now lists only two instances:

[oracle@node1 ~]$ srvctl config database -d zhongwc -v
Database unique name: zhongwc
Database name: zhongwc
Oracle home: /u01/app/oracle/product/11.2.0/dbhome_1
Oracle user: oracle
Spfile: +DATADG/zhongwc/spfilezhongwc.ora
Domain:
Start options: open
Stop options: immediate
Database role: PRIMARY
Management policy: AUTOMATIC
Server pools: zhongwc
Database instances: zhongwc1,zhongwc2
Disk Groups: DATADG,FRADG
Mount point paths:
Services:
Type: RAC
Database is administrator managed
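You can also ask srvctl for the runtime status of the database; a quick check (sketch):

# As the oracle user on any remaining node
srvctl status database -d zhongwc
# Expect only zhongwc1 (node1) and zhongwc2 (node2) to be reported as running.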
Disable and stop the listener on the node that is being removed:

[oracle@node3 ~]$ srvctl disable listener -n node3
[oracle@node3 ~]$ srvctl stop listener -n node3
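Before moving on, you can confirm the listener is down on node3; a quick check (sketch):

# As the oracle user
srvctl status listener -n node3
# The listener should be reported as not running on node3.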
Still on node3, as the oracle user, update the inventory for the database home so that it lists only the local node (run this from $ORACLE_HOME/oui/bin):

[oracle@node3 bin]$ ./runInstaller -updateNodeList ORACLE_HOME=$ORACLE_HOME "CLUSTER_NODES={node3}" -local
Starting Oracle Universal Installer...

Checking swap space: must be greater than 500 MB.   Actual 3930 MB    Passed
The inventory pointer is located at /etc/oraInst.loc
The inventory is located at /u01/app/oraInventory
'UpdateNodeList' was successful.
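If you want to double-check what the inventory now records for the database home, the central inventory file can be inspected directly; a sketch, assuming the default inventory layout shown above:

# Show the node list registered for the database home in the central inventory
grep -A 3 'dbhome_1' /u01/app/oraInventory/ContentsXML/inventory.xml
# After the update, the NODE_LIST for this home on node3 should contain only node3.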
Now remove the database software from node3 with the deinstall utility; the -local option restricts the operation to this node:

[oracle@node3 ~]$ cd $ORACLE_HOME/deinstall
[oracle@node3 deinstall]$ ./deinstall -local
Checking for required files and bootstrapping ...
Please wait ...
Location of logs /u01/app/oraInventory/logs/

############ ORACLE DEINSTALL & DECONFIG TOOL START ############

######################### CHECK OPERATION START #########################
## [START] Install check configuration ##

Checking for existence of the Oracle home location /u01/app/oracle/product/11.2.0/dbhome_1
Oracle Home type selected for deinstall is: Oracle Real Application Cluster Database
Oracle Base selected for deinstall is: /u01/app/oracle
Checking for existence of central inventory location /u01/app/oraInventory
Checking for existence of the Oracle Grid Infrastructure home /u01/app/11.2.0/grid
The following nodes are part of this cluster: node3
Checking for sufficient temp space availability on node(s) : 'node3'

## [END] Install check configuration ##

Network Configuration check config START
Network de-configuration trace file location: /u01/app/oraInventory/logs/netdc_check2013-01-05_03-34-48-PM.log
Network Configuration check config END

Database Check Configuration START
Database de-configuration trace file location: /u01/app/oraInventory/logs/databasedc_check2013-01-05_03-34-56-PM.log
Database Check Configuration END

Enterprise Manager Configuration Assistant START
EMCA de-configuration trace file location: /u01/app/oraInventory/logs/emcadc_check2013-01-05_03-35-03-PM.log
Enterprise Manager Configuration Assistant END
Oracle Configuration Manager check START
OCM check log file location : /u01/app/oraInventory/logs//ocm_check3913.log
Oracle Configuration Manager check END

######################### CHECK OPERATION END #########################

####################### CHECK OPERATION SUMMARY #######################
Oracle Grid Infrastructure Home is: /u01/app/11.2.0/grid
The cluster node(s) on which the Oracle home deinstallation will be performed are:node3
Since -local option has been specified, the Oracle home will be deinstalled only on the local node, 'node3', and the global configuration will be removed.
Oracle Home selected for deinstall is: /u01/app/oracle/product/11.2.0/dbhome_1
Inventory Location where the Oracle home registered is: /u01/app/oraInventory
The option -local will not modify any database configuration for this Oracle home.
No Enterprise Manager configuration to be updated for any database(s)
No Enterprise Manager ASM targets to update
No Enterprise Manager listener targets to migrate
Checking the config status for CCR
Oracle Home exists with CCR directory, but CCR is not configured
CCR check is finished
Do you want to continue (y - yes, n - no)? [n]: y
A log of this session will be written to: '/u01/app/oraInventory/logs/deinstall_deconfig2013-01-05_03-34-20-PM.out'
Any error messages from this session will be written to: '/u01/app/oraInventory/logs/deinstall_deconfig2013-01-05_03-34-20-PM.err'

######################## CLEAN OPERATION START ########################
Enterprise Manager Configuration Assistant START

EMCA de-configuration trace file location: /u01/app/oraInventory/logs/emcadc_clean2013-01-05_03-35-03-PM.log
Updating Enterprise Manager ASM targets (if any)
Updating Enterprise Manager listener targets (if any)
Enterprise Manager Configuration Assistant END
Database de-configuration trace file location: /u01/app/oraInventory/logs/databasedc_clean2013-01-05_03-35-33-PM.log

Network Configuration clean config START

Network de-configuration trace file location: /u01/app/oraInventory/logs/netdc_clean2013-01-05_03-35-33-PM.log

De-configuring Local Net Service Names configuration file...
Local Net Service Names configuration file de-configured successfully.

De-configuring backup files...
Backup files de-configured successfully.

The network configuration has been cleaned up successfully.

Network Configuration clean config END

Oracle Configuration Manager clean START
OCM clean log file location : /u01/app/oraInventory/logs//ocm_clean3913.log
Oracle Configuration Manager clean END

Setting the force flag to false
Setting the force flag to cleanup the Oracle Base
Oracle Universal Installer clean START

Detach Oracle home '/u01/app/oracle/product/11.2.0/dbhome_1' from the central inventory on the local node : Done

Delete directory '/u01/app/oracle/product/11.2.0/dbhome_1' on the local node : Done

The Oracle Base directory '/u01/app/oracle' will not be removed on local node. The directory is not empty.

Oracle Universal Installer cleanup was successful.

Oracle Universal Installer clean END

## [START] Oracle install clean ##

Clean install operation removing temporary directory '/tmp/deinstall2013-01-05_03-32-07PM' on node 'node3'

## [END] Oracle install clean ##

######################### CLEAN OPERATION END #########################

####################### CLEAN OPERATION SUMMARY #######################
Cleaning the config for CCR
As CCR is not configured, so skipping the cleaning of CCR configuration
CCR clean is finished
Successfully detached Oracle home '/u01/app/oracle/product/11.2.0/dbhome_1' from the central inventory on the local node.
Successfully deleted directory '/u01/app/oracle/product/11.2.0/dbhome_1' on the local node.
Oracle Universal Installer cleanup was successful.

Oracle deinstall tool successfully cleaned up temporary directories.
#######################################################################

############# ORACLE DEINSTALL & DECONFIG TOOL END #############
Back on one of the remaining nodes, as the oracle user, update the inventory for the database home so that it lists only the surviving nodes:

[oracle@node1 ~]$ cd $ORACLE_HOME/oui/bin
[oracle@node1 bin]$ ./runInstaller -updateNodeList ORACLE_HOME=$ORACLE_HOME "CLUSTER_NODES={node1,node2}"
Starting Oracle Universal Installer...

Checking swap space: must be greater than 500 MB.   Actual 3598 MB    Passed
The inventory pointer is located at /etc/oraInst.loc
The inventory is located at /u01/app/oraInventory
'UpdateNodeList' was successful.
Next, deconfigure the Oracle Clusterware stack on node3. As root on node3, run rootcrs.pl from the Grid home with the -deconfig and -force options:

[root@node3 ~]# /u01/app/11.2.0/grid/crs/install/rootcrs.pl -deconfig -force
Using configuration parameter file: /u01/app/11.2.0/grid/crs/install/crsconfig_params
Network exists: 1/192.168.0.0/255.255.0.0/eth0, type static
VIP exists: /node1-vip/192.168.1.151/192.168.0.0/255.255.0.0/eth0, hosting node node1
VIP exists: /node2-vip/192.168.1.152/192.168.0.0/255.255.0.0/eth0, hosting node node2
VIP exists: /node3-vip/192.168.1.153/192.168.0.0/255.255.0.0/eth0, hosting node node3
GSD exists
ONS exists: Local port 6100, remote port 6200, EM port 2016
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'node3'
CRS-2673: Attempting to stop 'ora.crsd' on 'node3'
CRS-2790: Starting shutdown of Cluster Ready Services-managed resources on 'node3'
CRS-2673: Attempting to stop 'ora.oc4j' on 'node3'
CRS-2673: Attempting to stop 'ora.CRS.dg' on 'node3'
CRS-2673: Attempting to stop 'ora.DATADG.dg' on 'node3'
CRS-2673: Attempting to stop 'ora.FRADG.dg' on 'node3'
CRS-2677: Stop of 'ora.DATADG.dg' on 'node3' succeeded
CRS-2677: Stop of 'ora.FRADG.dg' on 'node3' succeeded
CRS-2677: Stop of 'ora.oc4j' on 'node3' succeeded
CRS-2672: Attempting to start 'ora.oc4j' on 'node1'
CRS-2676: Start of 'ora.oc4j' on 'node1' succeeded
CRS-2677: Stop of 'ora.CRS.dg' on 'node3' succeeded
CRS-2673: Attempting to stop 'ora.asm' on 'node3'
CRS-2677: Stop of 'ora.asm' on 'node3' succeeded
CRS-2792: Shutdown of Cluster Ready Services-managed resources on 'node3' has completed
CRS-2677: Stop of 'ora.crsd' on 'node3' succeeded
CRS-2673: Attempting to stop 'ora.ctssd' on 'node3'
CRS-2673: Attempting to stop 'ora.evmd' on 'node3'
CRS-2673: Attempting to stop 'ora.asm' on 'node3'
CRS-2673: Attempting to stop 'ora.mdnsd' on 'node3'
CRS-2677: Stop of 'ora.evmd' on 'node3' succeeded
CRS-2677: Stop of 'ora.mdnsd' on 'node3' succeeded
CRS-2677: Stop of 'ora.ctssd' on 'node3' succeeded
CRS-2677: Stop of 'ora.asm' on 'node3' succeeded
CRS-2673: Attempting to stop 'ora.cluster_interconnect.haip' on 'node3'
CRS-2677: Stop of 'ora.cluster_interconnect.haip' on 'node3' succeeded
CRS-2673: Attempting to stop 'ora.cssd' on 'node3'
CRS-2677: Stop of 'ora.cssd' on 'node3' succeeded
CRS-2673: Attempting to stop 'ora.crf' on 'node3'
CRS-2677: Stop of 'ora.crf' on 'node3' succeeded
CRS-2673: Attempting to stop 'ora.gipcd' on 'node3'
CRS-2677: Stop of 'ora.gipcd' on 'node3' succeeded
CRS-2673: Attempting to stop 'ora.gpnpd' on 'node3'
CRS-2677: Stop of 'ora.gpnpd' on 'node3' succeeded
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'node3' has completed
CRS-4133: Oracle High Availability Services has been stopped.
Successfully deconfigured Oracle clusterware stack on this node
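At this point the stack on node3 is down while node1 and node2 keep running. If you want to sanity-check the survivors before deleting the node, crsctl can report the cluster state; a quick check (sketch):

# As root (or the grid user) on node1 or node2
/u01/app/11.2.0/grid/bin/crsctl check cluster -all
# Each remaining node should report CRS, CSS, and the Event Manager as online.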
From one of the remaining nodes, as root, delete node3 from the cluster and confirm that only node1 and node2 remain:

[root@node1 ~]# /u01/app/11.2.0/grid/bin/crsctl delete node -n node3
CRS-4661: Node node3 successfully deleted.
[root@node1 ~]# /u01/app/11.2.0/grid/bin/olsnodes -t -s
node1   Active  Unpinned
node2   Active  Unpinned
On node3, as the grid user, update the inventory for the Grid Infrastructure home so that it lists only the local node, then run the deinstall utility from the Grid home with -local. This deinstall is interactive: it asks about the node's VIP and listeners, and then pauses and asks you to run a rootcrs.pl command as root:

[grid@node3 ~]$ cd $ORACLE_HOME/oui/bin
[grid@node3 bin]$ ./runInstaller -updateNodeList ORACLE_HOME=$ORACLE_HOME "CLUSTER_NODES={node3}" CRS=TRUE -local
Starting Oracle Universal Installer...

Checking swap space: must be greater than 500 MB.   Actual 4075 MB    Passed
The inventory pointer is located at /etc/oraInst.loc
The inventory is located at /u01/app/oraInventory
'UpdateNodeList' was successful.

[grid@node3 bin]$ cd $ORACLE_HOME/deinstall
[grid@node3 deinstall]$ ./deinstall -local
Checking for required files and bootstrapping ...
Please wait ...
Location of logs /tmp/deinstall2013-01-05_03-58-04PM/logs/

############ ORACLE DEINSTALL & DECONFIG TOOL START ############

######################### CHECK OPERATION START #########################
## [START] Install check configuration ##

Checking for existence of the Oracle home location /u01/app/11.2.0/grid
Oracle Home type selected for deinstall is: Oracle Grid Infrastructure for a Cluster
Oracle Base selected for deinstall is: /u01/app/grid
Checking for existence of central inventory location /u01/app/oraInventory
Checking for existence of the Oracle Grid Infrastructure home
The following nodes are part of this cluster: node3
Checking for sufficient temp space availability on node(s) : 'node3'

## [END] Install check configuration ##

Traces log file: /tmp/deinstall2013-01-05_03-58-04PM/logs//crsdc.log
Enter an address or the name of the virtual IP used on node "node3"[node3-vip] >
The following information can be collected by running "/sbin/ifconfig -a" on node "node3"
Enter the IP netmask of Virtual IP "192.168.1.153" on node "node3"[255.255.255.0] >
Enter the network interface name on which the virtual IP address "192.168.1.153" is active >
Enter an address or the name of the virtual IP[] >

Network Configuration check config START

Network de-configuration trace file location: /tmp/deinstall2013-01-05_03-58-04PM/logs/netdc_check2013-01-05_04-02-44-PM.log

Specify all RAC listeners (do not include SCAN listener) that are to be de-configured [LISTENER]:

Network Configuration check config END

Asm Check Configuration START

ASM de-configuration trace file location: /tmp/deinstall2013-01-05_03-58-04PM/logs/asmcadc_check2013-01-05_04-02-56-PM.log

######################### CHECK OPERATION END #########################

####################### CHECK OPERATION SUMMARY #######################
Oracle Grid Infrastructure Home is:
The cluster node(s) on which the Oracle home deinstallation will be performed are:node3
Since -local option has been specified, the Oracle home will be deinstalled only on the local node, 'node3', and the global configuration will be removed.
Oracle Home selected for deinstall is: /u01/app/11.2.0/grid
Inventory Location where the Oracle home registered is: /u01/app/oraInventory
Following RAC listener(s) will be de-configured: LISTENER
Option -local will not modify any ASM configuration.
Do you want to continue (y - yes, n - no)? [n]: y
A log of this session will be written to: '/tmp/deinstall2013-01-05_03-58-04PM/logs/deinstall_deconfig2013-01-05_03-58-44-PM.out'
Any error messages from this session will be written to: '/tmp/deinstall2013-01-05_03-58-04PM/logs/deinstall_deconfig2013-01-05_03-58-44-PM.err'

######################## CLEAN OPERATION START ########################
ASM de-configuration trace file location: /tmp/deinstall2013-01-05_03-58-04PM/logs/asmcadc_clean2013-01-05_04-03-08-PM.log
ASM Clean Configuration END

Network Configuration clean config START

Network de-configuration trace file location: /tmp/deinstall2013-01-05_03-58-04PM/logs/netdc_clean2013-01-05_04-03-08-PM.log

De-configuring RAC listener(s): LISTENER

De-configuring listener: LISTENER
    Stopping listener on node "node3": LISTENER
    Warning: Failed to stop listener. Listener may not be running.
    Listener de-configured successfully.

De-configuring Naming Methods configuration file...
Naming Methods configuration file de-configured successfully.

De-configuring backup files...
Backup files de-configured successfully.

The network configuration has been cleaned up successfully.

Network Configuration clean config END

---------------------------------------->

The deconfig command below can be executed in parallel on all the remote nodes. Execute the command on the local node after the execution completes on all the remote nodes.

Run the following command as the root user or the administrator on node "node3".

/tmp/deinstall2013-01-05_03-58-04PM/perl/bin/perl -I/tmp/deinstall2013-01-05_03-58-04PM/perl/lib -I/tmp/deinstall2013-01-05_03-58-04PM/crs/install /tmp/deinstall2013-01-05_03-58-04PM/crs/install/rootcrs.pl -force -deconfig -paramfile "/tmp/deinstall2013-01-05_03-58-04PM/response/deinstall_Ora11g_gridinfrahome1.rsp"

Press Enter after you finish running the above commands

<----------------------------------------
Do not press Enter yet. First, open a second session and run the following command as the root user on node "node3":

/tmp/deinstall2013-01-05_03-58-04PM/perl/bin/perl -I/tmp/deinstall2013-01-05_03-58-04PM/perl/lib -I/tmp/deinstall2013-01-05_03-58-04PM/crs/install /tmp/deinstall2013-01-05_03-58-04PM/crs/install/rootcrs.pl -force -deconfig -paramfile "/tmp/deinstall2013-01-05_03-58-04PM/response/deinstall_Ora11g_gridinfrahome1.rsp"
Because the clusterware stack on node3 was already deconfigured by the earlier rootcrs.pl -deconfig -force run, this pass reports CRS-4047 and a few warnings about missing files, but it still finishes with the stack deconfigured:

[root@node3 ~]# /tmp/deinstall2013-01-05_03-58-04PM/perl/bin/perl -I/tmp/deinstall2013-01-05_03-58-04PM/perl/lib -I/tmp/deinstall2013-01-05_03-58-04PM/crs/install /tmp/deinstall2013-01-05_03-58-04PM/crs/install/rootcrs.pl -force -deconfig -paramfile "/tmp/deinstall2013-01-05_03-58-04PM/response/deinstall_Ora11g_gridinfrahome1.rsp"
Using configuration parameter file: /tmp/deinstall2013-01-05_03-58-04PM/response/deinstall_Ora11g_gridinfrahome1.rsp
****Unable to retrieve Oracle Clusterware home.
Start Oracle Clusterware stack and try again.
CRS-4047: No Oracle Clusterware components configured.
CRS-4000: Command Stop failed, or completed with errors.
################################################################
# You must kill processes or reboot the system to properly    #
# cleanup the processes started by Oracle clusterware         #
################################################################
Either /etc/oracle/olr.loc does not exist or is not readable
Make sure the file exists and it has read and execute access
Either /etc/oracle/olr.loc does not exist or is not readable
Make sure the file exists and it has read and execute access
Failure in execution (rc=-1, 256, No such file or directory) for command /etc/init.d/ohasd deinstall
error: package cvuqdisk is not installed
Successfully deconfigured Oracle clusterware stack on this node
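The banner above notes that some clusterware processes may linger until they are killed or the node is rebooted. A quick way to look for leftovers (sketch; the daemon names are assumptions based on the resources stopped earlier):

# Look for surviving clusterware daemons on node3
ps -ef | egrep 'ohasd|crsd|cssd|evmd|gipcd|gpnpd|mdnsd' | grep -v grep
# If anything is still running, kill it or simply reboot node3.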
As root on node3, remove the leftover inventory pointer and file-mapping directory (the deinstall summary below also reminds you to do this at the end of the session):

[root@node3 ~]# rm -rf /etc/oraInst.loc
[root@node3 ~]# rm -rf /opt/ORCLfmap
Back in the waiting deinstall session, press Enter; the tool then finishes the cleanup:

Remove the directory: /tmp/deinstall2013-01-05_03-58-04PM on node:
Setting the force flag to false
Setting the force flag to cleanup the Oracle Base
Oracle Universal Installer clean START

Detach Oracle home '/u01/app/11.2.0/grid' from the central inventory on the local node : Done

Delete directory '/u01/app/11.2.0/grid' on the local node : Done

Delete directory '/u01/app/oraInventory' on the local node : Done

Delete directory '/u01/app/grid' on the local node : Done

Oracle Universal Installer cleanup was successful.

Oracle Universal Installer clean END

## [START] Oracle install clean ##

Clean install operation removing temporary directory '/tmp/deinstall2013-01-05_03-58-04PM' on node 'node3'

## [END] Oracle install clean ##

######################### CLEAN OPERATION END #########################

####################### CLEAN OPERATION SUMMARY #######################
Following RAC listener(s) were de-configured successfully: LISTENER
Oracle Clusterware is stopped and successfully de-configured on node "node3"
Oracle Clusterware is stopped and de-configured successfully.
Successfully detached Oracle home '/u01/app/11.2.0/grid' from the central inventory on the local node.
Successfully deleted directory '/u01/app/11.2.0/grid' on the local node.
Successfully deleted directory '/u01/app/oraInventory' on the local node.
Successfully deleted directory '/u01/app/grid' on the local node.
Oracle Universal Installer cleanup was successful.

Run 'rm -rf /etc/oraInst.loc' as root on node(s) 'node3' at the end of the session.
Run 'rm -rf /opt/ORCLfmap' as root on node(s) 'node3' at the end of the session.
Oracle deinstall tool successfully cleaned up temporary directories.
#######################################################################

############# ORACLE DEINSTALL & DECONFIG TOOL END #############
On a remaining node, as the grid user, update the inventory for the Grid Infrastructure home so that it lists only the surviving nodes:

[grid@node1 ~]$ cd $ORACLE_HOME/oui/bin
[grid@node1 bin]$ ./runInstaller -updateNodeList ORACLE_HOME=$ORACLE_HOME "CLUSTER_NODES={node1,node2}" CRS=TRUE
Starting Oracle Universal Installer...

Checking swap space: must be greater than 500 MB.   Actual 3493 MB    Passed
The inventory pointer is located at /etc/oraInst.loc
The inventory is located at /u01/app/oraInventory
'UpdateNodeList' was successful.
Finally, verify the node removal with cluvfy from a remaining node:

[grid@node1 ~]$ cluvfy stage -post nodedel -n node3

Performing post-checks for node removal

Checking CRS integrity...

Clusterware version consistency passed
CRS integrity check passed

Node removal check passed

Post-check for node removal was successful.
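If the post-check fails or you want more detail, the same stage check accepts a verbose flag (sketch):

# As the grid user on a remaining node
cluvfy stage -post nodedel -n node3 -verbose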
crsctl confirms that all cluster resources now run on node1 and node2 only:

[grid@node1 ~]$ crsctl stat res -t
--------------------------------------------------------------------------------
NAME           TARGET  STATE        SERVER                   STATE_DETAILS
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.CRS.dg
               ONLINE  ONLINE       node1
               ONLINE  ONLINE       node2
ora.DATADG.dg
               ONLINE  ONLINE       node1
               ONLINE  ONLINE       node2
ora.FRADG.dg
               ONLINE  ONLINE       node1
               ONLINE  ONLINE       node2
ora.LISTENER.lsnr
               ONLINE  ONLINE       node1
               ONLINE  ONLINE       node2
ora.asm
               ONLINE  ONLINE       node1                    Started
               ONLINE  ONLINE       node2                    Started
ora.gsd
               OFFLINE OFFLINE      node1
               OFFLINE OFFLINE      node2
ora.net1.network
               ONLINE  ONLINE       node1
               ONLINE  ONLINE       node2
ora.ons
               ONLINE  ONLINE       node1
               ONLINE  ONLINE       node2
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
      1        ONLINE  ONLINE       node2
ora.cvu
      1        ONLINE  ONLINE       node1
ora.node1.vip
      1        ONLINE  ONLINE       node1
ora.node2.vip
      1        ONLINE  ONLINE       node2
ora.oc4j
      1        ONLINE  ONLINE       node1
ora.scan1.vip
      1        ONLINE  ONLINE       node2
ora.zhongwc.db
      1        ONLINE  ONLINE       node1                    Open
      2        ONLINE  ONLINE       node2                    Open
A final check from SQL*Plus shows the two remaining instances open:

[oracle@node1 ~]$ sqlplus / as sysdba

SQL*Plus: Release 11.2.0.3.0 Production on Sat Jan 5 16:30:58 2013

Copyright (c) 1982, 2011, Oracle.  All rights reserved.

Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
With the Partitioning, Real Application Clusters, Automatic Storage Management,
OLAP, Data Mining and Real Application Testing options

SQL> alter session set nls_date_format='yyyy-mm-dd hh24:mi:ss';

Session altered.

SQL> col host_name format a11
SQL> set line 300
SQL> select INSTANCE_NAME,HOST_NAME,VERSION,STARTUP_TIME,STATUS,ACTIVE_STATE,INSTANCE_ROLE,DATABASE_STATUS from gv$INSTANCE;

INSTANCE_NAME    HOST_NAME   VERSION           STARTUP_TIME        STATUS       ACTIVE_ST INSTANCE_ROLE      DATABASE_STATUS
---------------- ----------- ----------------- ------------------- ------------ --------- ------------------ -----------------
zhongwc1         node1       11.2.0.3.0        2013-01-05 09:53:24 OPEN         NORMAL    PRIMARY_INSTANCE   ACTIVE
zhongwc2         node2       11.2.0.3.0        2013-01-04 17:34:40 OPEN         NORMAL    PRIMARY_INSTANCE   ACTIVE