Hostname | Instance | OS | Database Version |
rac1 (to be removed) | racdb1 | RHEL 6.5 64-bit | 11.2.0.4.0 |
rac2 | racdb2 | RHEL 6.5 64-bit | 11.2.0.4.0 |
rac3 | racdb3 | RHEL 6.5 64-bit | 11.2.0.4.0 |
rac4 | racdb4 | RHEL 6.5 64-bit | 11.2.0.4.0 |
Before removing the node, take a manual OCR backup (run as root on any node):
cd /u01/app/11.2.0/grid/bin/
./ocrconfig -showbackup
./ocrconfig -manualbackup
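To confirm the manual backup was recorded, ocrconfig can also list manual backups on their own (a quick sanity check, not part of the original procedure):
./ocrconfig -showbackup manual    # the backup just taken should appear at the top of the list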
You can remove the instance through the DBCA GUI, as shown in the original screenshots (not reproduced here), or from the command line.
Command-line method (run as the oracle user on a node that is not being removed):
dbca -silent -deleteInstance -gdbName racdb -instanceName racdb1 -nodelist rac1 -sysDBAUserName sys -sysDBAPassword password
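To verify the instance was removed (a quick check, assuming the database name racdb from the table above), query the database configuration and status with srvctl:
srvctl config database -d racdb    # racdb1 should no longer be listed as an instance
srvctl status database -d racdb    # only racdb2, racdb3 and racdb4 should report as running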
On a node that is not being removed, run as the grid user:
srvctl disable listener -l listener -n rac1
srvctl stop listener -l listener -n rac1
srvctl status listener -l listener -n rac1
On the node being removed (rac1), run as the grid user:
$ORACLE_HOME/oui/bin/runInstaller -updateNodeList ORACLE_HOME=/u01/app/oracle/product/11.2.0/dbhome_1 "CLUSTER_NODES=rac1" -local
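If you want to confirm the node list was updated, it can be read straight out of the local central inventory (an optional check, assuming the default inventory location /u01/app/oraInventory):
grep -A 4 "dbhome_1" /u01/app/oraInventory/ContentsXML/inventory.xml    # the <NODE_LIST> for this home should now contain only rac1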
If the ORACLE_HOME is shared, run the following on the node being removed:
cd $ORACLE_HOME/oui/bin
./runInstaller -detachHome ORACLE_HOME=Oracle_home_location
If it is not shared, run:
${ORACLE_HOME}/deinstall/deinstall -local
On any node that is not being removed, run as the oracle user:
cd $ORACLE_HOME/oui/bin
./runInstaller -updateNodeList ORACLE_HOME=/u01/app/oracle/product/11.2.0/dbhome_1 "CLUSTER_NODES=rac2,rac3,rac4"
Check whether the node is pinned. On the node being removed, run as the grid user:
olsnodes -s -t
If the node is not shown as Unpinned, run the following as root on any node to unpin it:
crsctl unpin css -n rac1
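Re-running the check should now show rac1 as Unpinned (illustrative output; exact spacing varies):
olsnodes -s -t
# rac1    Active    Unpinned
# rac2    Active    Unpinned
# ...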
On the node being removed, run as root:
cd /u01/app/11.2.0/grid/crs/install
./rootcrs.pl -deconfig -force
The output looks like this:
[root@rac1 install]# ./rootcrs.pl -deconfig -force
Using configuration parameter file: ./crsconfig_params
Network exists: 1/192.168.232.0/255.255.255.0/eth0, type static
VIP exists: /rac1-vip/192.168.232.33/192.168.232.0/255.255.255.0/eth0, hosting node rac1
VIP exists: /rac2-vip/192.168.232.34/192.168.232.0/255.255.255.0/eth0, hosting node rac2
VIP exists: /rac3-vip/192.168.232.39/192.168.232.0/255.255.255.0/eth0, hosting node rac3
VIP exists: /rac4-vip/192.168.232.40/192.168.232.0/255.255.255.0/eth0, hosting node rac4
GSD exists
ONS exists: Local port 6100, remote port 6200, EM port 2016
CRS-2673: Attempting to stop 'ora.registry.acfs' on 'rac1'
CRS-2677: Stop of 'ora.registry.acfs' on 'rac1' succeeded
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'rac1'
CRS-2673: Attempting to stop 'ora.crsd' on 'rac1'
CRS-2790: Starting shutdown of Cluster Ready Services-managed resources on 'rac1'
CRS-2673: Attempting to stop 'ora.CRS.dg' on 'rac1'
CRS-2673: Attempting to stop 'ora.DATA.dg' on 'rac1'
CRS-2677: Stop of 'ora.DATA.dg' on 'rac1' succeeded
CRS-2677: Stop of 'ora.CRS.dg' on 'rac1' succeeded
CRS-2673: Attempting to stop 'ora.asm' on 'rac1'
CRS-2677: Stop of 'ora.asm' on 'rac1' succeeded
CRS-2792: Shutdown of Cluster Ready Services-managed resources on 'rac1' has completed
CRS-2677: Stop of 'ora.crsd' on 'rac1' succeeded
CRS-2673: Attempting to stop 'ora.ctssd' on 'rac1'
CRS-2673: Attempting to stop 'ora.evmd' on 'rac1'
CRS-2673: Attempting to stop 'ora.asm' on 'rac1'
CRS-2673: Attempting to stop 'ora.drivers.acfs' on 'rac1'
CRS-2673: Attempting to stop 'ora.mdnsd' on 'rac1'
CRS-2677: Stop of 'ora.evmd' on 'rac1' succeeded
CRS-2677: Stop of 'ora.mdnsd' on 'rac1' succeeded
CRS-2677: Stop of 'ora.ctssd' on 'rac1' succeeded
CRS-2677: Stop of 'ora.asm' on 'rac1' succeeded
CRS-2673: Attempting to stop 'ora.cluster_interconnect.haip' on 'rac1'
CRS-2677: Stop of 'ora.cluster_interconnect.haip' on 'rac1' succeeded
CRS-2673: Attempting to stop 'ora.cssd' on 'rac1'
CRS-2677: Stop of 'ora.cssd' on 'rac1' succeeded
CRS-2673: Attempting to stop 'ora.crf' on 'rac1'
CRS-2677: Stop of 'ora.drivers.acfs' on 'rac1' succeeded
CRS-2677: Stop of 'ora.crf' on 'rac1' succeeded
CRS-2673: Attempting to stop 'ora.gipcd' on 'rac1'
CRS-2677: Stop of 'ora.gipcd' on 'rac1' succeeded
CRS-2673: Attempting to stop 'ora.gpnpd' on 'rac1'
CRS-2677: Stop of 'ora.gpnpd' on 'rac1' succeeded
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'rac1' has completed
CRS-4133: Oracle High Availability Services has been stopped.
Removing Trace File Analyzer
Successfully deconfigured Oracle clusterware stack on this node
Note: if the node being removed is the last node in the cluster, i.e. you are tearing down the entire cluster, run this instead:
./rootcrs.pl -deconfig -force -lastnode
On a node that is not being removed, run as root:
cd /u01/app/11.2.0/grid/bin
./crsctl delete node -n rac1
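rac1 should now be gone from the cluster node list; a quick check from the same directory:
./olsnodes -s -t    # rac1 should no longer appear; rac2, rac3 and rac4 remain Active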
On node 1 (the node being removed), run the following as the grid user:
cd /u01/app/11.2.0/grid/oui/bin
./runInstaller -updateNodeList ORACLE_HOME=/u01/app/11.2.0/grid "CLUSTER_NODES=rac1" CRS=TRUE -silent -local
If the GRID_HOME is shared, run the following on the node being removed as the grid user:
cd $GRID_HOME/oui/bin
./runInstaller -detachHome ORACLE_HOME=Grid_home -silent -local
If it is not shared, run as the grid user:
cd /u01/app/11.2.0/grid/deinstall/
./deinstall -local
Note: the -local flag is essential here; without it, the command deletes the GRID_HOME directory on every node.
The output looks like this:
[grid@rac1 deinstall]$ ./deinstall -local
Checking for required files and bootstrapping ...
Please wait ...
Location of logs /tmp/deinstall2019-01-21_11-37-09AM/logs/
############ ORACLE DEINSTALL & DECONFIG TOOL START ############
######################### CHECK OPERATION START #########################
## [START] Install check configuration ##
Checking for existence of the Oracle home location /u01/app/11.2.0/grid
Oracle Home type selected for deinstall is: Oracle Grid Infrastructure for a Cluster
Oracle Base selected for deinstall is: /u01/app/grid
Checking for existence of central inventory location /u01/app/oraInventory
Checking for existence of the Oracle Grid Infrastructure home
The following nodes are part of this cluster: rac1
Checking for sufficient temp space availability on node(s) : 'rac1'
## [END] Install check configuration ##
Traces log file: /tmp/deinstall2019-01-21_11-37-09AM/logs//crsdc.log
Enter an address or the name of the virtual IP used on node "rac1"[rac1-vip]
>    # press Enter to accept the default
The following information can be collected by running "/sbin/ifconfig -a" on node "rac1"
Enter the IP netmask of Virtual IP "192.168.232.33" on node "rac1"[255.255.255.0]
>    # press Enter to accept the default
Enter the network interface name on which the virtual IP address "192.168.232.33" is active
>    # press Enter
Enter an address or the name of the virtual IP[]
>    # press Enter
Network Configuration check config START
Network de-configuration trace file location: /tmp/deinstall2019-01-21_11-37-09AM/logs/netdc_check2019-01-21_11-38-57-AM.log
Specify all RAC listeners (do not include SCAN listener) that are to be de-configured [LISTENER,LISTENER_SCAN1]:LISTENER
At least one listener from the discovered listener list [LISTENER,LISTENER_SCAN1] is missing in the specified listener list [LISTENER]. The Oracle home will be cleaned up, so all the listeners will not be available after deinstall. If you want to remove a specific listener, please use Oracle Net Configuration Assistant instead. Do you want to continue? (y|n) [n]: y
Network Configuration check config END
Asm Check Configuration START
ASM de-configuration trace file location: /tmp/deinstall2019-01-21_11-37-09AM/logs/asmcadc_check2019-01-21_11-40-29-AM.log
######################### CHECK OPERATION END #########################
####################### CHECK OPERATION SUMMARY #######################
Oracle Grid Infrastructure Home is:
The cluster node(s) on which the Oracle home deinstallation will be performed are: rac1
Since -local option has been specified, the Oracle home will be deinstalled only on the local node, 'rac1', and the global configuration will be removed.
Oracle Home selected for deinstall is: /u01/app/11.2.0/grid
Inventory Location where the Oracle home registered is: /u01/app/oraInventory
Following RAC listener(s) will be de-configured: LISTENER
Option -local will not modify any ASM configuration.
Do you want to continue (y - yes, n - no)? [n]: y
A log of this session will be written to: '/tmp/deinstall2019-01-21_11-37-09AM/logs/deinstall_deconfig2019-01-21_11-37-31-AM.out'
Any error messages from this session will be written to: '/tmp/deinstall2019-01-21_11-37-09AM/logs/deinstall_deconfig2019-01-21_11-37-31-AM.err'
######################## CLEAN OPERATION START ########################
ASM de-configuration trace file location: /tmp/deinstall2019-01-21_11-37-09AM/logs/asmcadc_clean2019-01-21_11-40-50-AM.log
ASM Clean Configuration END
Network Configuration clean config START
Network de-configuration trace file location: /tmp/deinstall2019-01-21_11-37-09AM/logs/netdc_clean2019-01-21_11-40-50-AM.log
De-configuring RAC listener(s): LISTENER
De-configuring listener: LISTENER
Stopping listener on node "rac1": LISTENER
Warning: Failed to stop listener. Listener may not be running.
Listener de-configured successfully.
De-configuring Naming Methods configuration file...
Naming Methods configuration file de-configured successfully.
De-configuring backup files...
Backup files de-configured successfully.
The network configuration has been cleaned up successfully.
Network Configuration clean config END
---------------------------------------->
The deconfig command below can be executed in parallel on all the remote nodes. Execute the command on the local node after the execution completes on all the remote nodes.
Run the following command as the root user or the administrator on node "rac1".
/tmp/deinstall2019-01-21_11-37-09AM/perl/bin/perl -I/tmp/deinstall2019-01-21_11-37-09AM/perl/lib -I/tmp/deinstall2019-01-21_11-37-09AM/crs/install /tmp/deinstall2019-01-21_11-37-09AM/crs/install/rootcrs.pl -force -deconfig -paramfile "/tmp/deinstall2019-01-21_11-37-09AM/response/deinstall_Ora11g_gridinfrahome1.rsp"
Press Enter after you finish running the above commands
<----------------------------------------
As prompted, open another session on node 1 as root and run the command above:
[root@rac1 ~]# /tmp/deinstall2019-01-21_11-37-09AM/perl/bin/perl -I/tmp/deinstall2019-01-21_11-37-09AM/perl/lib -I/tmp/deinstall2019-01-21_11-37-09AM/crs/install /tmp/deinstall2019-01-21_11-37-09AM/crs/install/rootcrs.pl -force -deconfig -paramfile "/tmp/deinstall2019-01-21_11-37-09AM/response/deinstall_Ora11g_gridinfrahome1.rsp"
Using configuration parameter file: /tmp/deinstall2019-01-21_11-37-09AM/response/deinstall_Ora11g_gridinfrahome1.rsp
****Unable to retrieve Oracle Clusterware home.
Start Oracle Clusterware stack and try again.
CRS-4047: No Oracle Clusterware components configured.
CRS-4000: Command Stop failed, or completed with errors.
Either /etc/oracle/ocr.loc does not exist or is not readable
Make sure the file exists and it has read and execute access
Either /etc/oracle/ocr.loc does not exist or is not readable
Make sure the file exists and it has read and execute access
CRS-4047: No Oracle Clusterware components configured.
CRS-4000: Command Modify failed, or completed with errors.
CRS-4047: No Oracle Clusterware components configured.
CRS-4000: Command Delete failed, or completed with errors.
CRS-4047: No Oracle Clusterware components configured.
CRS-4000: Command Stop failed, or completed with errors.
################################################################
# You must kill processes or reboot the system to properly #
# cleanup the processes started by Oracle clusterware #
################################################################
ACFS-9313: No ADVM/ACFS installation detected.
Either /etc/oracle/olr.loc does not exist or is not readable
Make sure the file exists and it has read and execute access
Either /etc/oracle/olr.loc does not exist or is not readable
Make sure the file exists and it has read and execute access
Failure in execution (rc=-1, 256, No such file or directory) for command /etc/init.d/ohasd deinstall
error: package cvuqdisk is not installed
Successfully deconfigured Oracle clusterware stack on this node
-----------------------------------------------------------------
Then return to the original session and press Enter to continue:
Remove the directory: /tmp/deinstall2019-01-21_11-37-09AM on node:
Setting the force flag to false
Setting the force flag to cleanup the Oracle Base
Oracle Universal Installer clean START
Detach Oracle home '/u01/app/11.2.0/grid' from the central inventory on the local node : Done
Delete directory '/u01/app/11.2.0/grid' on the local node : Done
Delete directory '/u01/app/oraInventory' on the local node : Done
Failed to delete the directory '/u01/app/grid/oradiag_oracle/diag/clients/user_oracle/host_1874443374_80/metadata_pv'. The directory is in use.
Failed to delete the directory '/u01/app/grid/oradiag_oracle/diag/clients/user_oracle/host_1874443374_80/sweep'. The directory is in use.
Failed to delete the directory '/u01/app/grid/oradiag_oracle/diag/clients/user_oracle/host_1874443374_80/incident'. The directory is in use.
Failed to delete the directory '/u01/app/grid/oradiag_oracle/diag/clients/user_oracle/host_1874443374_80/incpkg'. The directory is in use.
Failed to delete the directory '/u01/app/grid/oradiag_oracle/diag/clients/user_oracle/host_1874443374_80/stage'. The directory is in use.
Failed to delete the directory '/u01/app/grid/oradiag_oracle/diag/clients/user_oracle/host_1874443374_80/metadata_dgif'. The directory is in use.
Failed to delete the directory '/u01/app/grid/oradiag_oracle/diag/clients/user_oracle/host_1874443374_80/cdump'. The directory is in use.
The Oracle Base directory '/u01/app/grid' will not be removed on local node. The directory is not empty.
Oracle Universal Installer cleanup was successful.
Oracle Universal Installer clean END
## [START] Oracle install clean ##
Clean install operation removing temporary directory '/tmp/deinstall2019-01-21_11-37-09AM' on node 'rac1'
## [END] Oracle install clean ##
######################### CLEAN OPERATION END #########################
####################### CLEAN OPERATION SUMMARY #######################
Following RAC listener(s) were de-configured successfully: LISTENER
Oracle Clusterware is stopped and successfully de-configured on node "rac1"
Oracle Clusterware is stopped and de-configured successfully.
Successfully detached Oracle home '/u01/app/11.2.0/grid' from the central inventory on the local node.
Successfully deleted directory '/u01/app/11.2.0/grid' on the local node.
Successfully deleted directory '/u01/app/oraInventory' on the local node.
Oracle Universal Installer cleanup was successful.
Run 'rm -rf /etc/oraInst.loc' as root on node(s) 'rac1' at the end of the session.
Run 'rm -rf /opt/ORCLfmap' as root on node(s) 'rac1' at the end of the session.
Run 'rm -rf /etc/oratab' as root on node(s) 'rac1' at the end of the session.
Oracle deinstall tool successfully cleaned up temporary directories.
#######################################################################
############# ORACLE DEINSTALL & DECONFIG TOOL END #############
---------------------------------------------------------------
On any remaining node (node 2, 3, or 4), run as the grid user:
cd /u01/app/11.2.0/grid/oui/bin
./runInstaller -updateNodeList ORACLE_HOME=/u01/app/11.2.0/grid "CLUSTER_NODES=rac2,rac3,rac4" CRS=TRUE -silent
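As with the database home, the updated node list can be confirmed in the central inventory (an optional check, assuming the default /u01/app/oraInventory location; the Grid home entry is the one flagged CRS="true"):
grep -A 5 'CRS="true"' /u01/app/oraInventory/ContentsXML/inventory.xml    # the <NODE_LIST> should now contain only rac2, rac3 and rac4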
On node 2, 3, or 4, run as the grid user:
cluvfy stage -post nodedel -n rac1 -verbose
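Beyond cluvfy, the state of the remaining stack can be confirmed from any surviving node (standard crsctl commands, shown here as an optional extra check):
crsctl check cluster -all    # CSS/CRS/EVM should be online on rac2, rac3 and rac4
crsctl stat res -t           # no resources should reference rac1 any more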
This completes the removal of node rac1.
Reference: http://www.cnxdug.org/?p=2511