Oracle 11.2 RAC: Deleting a Node

 

Use cases

1. The node being removed is fully intact and simply needs to be evicted from the RAC, for example because the server is being replaced.

2. The node being removed has lost some of its RAC files (for example, the GI or database software was deleted by mistake), so the GI and Oracle database software must be reinstalled.

Test scenario

A three-node RAC with hostnames rac1, rac2, and rac3; node rac3 will be removed.

All three instances are running normally, and the node is removed online.

Confirming the nodes

1) Confirm the instances in the database:

SQL> select thread#,status,instance from v$thread;

   THREAD# STATUS INSTANCE
---------- ------ --------------------
         1 OPEN   test1
         2 OPEN   test2
         3 OPEN   test3

[grid@rac1 ~]$ olsnodes  -t -s

rac1  Active         Unpinned

rac2  Active         Unpinned

rac3  Active         Unpinned
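Optionally, also confirm that the clusterware stack is healthy on every node before touching anything. A quick sketch (crsctl check cluster -all is standard 11.2 syntax):

[grid@rac1 ~]$ crsctl check cluster -all    # CSS/CRS/EVM should report online on rac1, rac2 and rac3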

2) Unpin the node from the cluster; run this as root from the GI home on all retained nodes (rac1, rac2):

[root@rac1 bin]# ./crsctl unpin css -n rac3

CRS-4667: Node rac3 successfully unpinned.

[root@rac2 bin]# ./crsctl unpin css -n rac3

CRS-4667: Node rac3 successfully unpinned.

Deleting the rac3 instance with DBCA

Delete the rac3 instance from any retained node:

 

[root@rac1 bin]# su - oracle

[oracle@rac1 ~]$ dbca

In the DBCA wizard, choose Instance Management, then "Delete an instance", select the test database and the test3 instance on rac3, and complete the wizard.
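If no GUI is available, the same deletion can be scripted with DBCA's silent mode. A minimal sketch using the names from this environment (supply the real SYS password in place of the placeholder):

[oracle@rac1 ~]$ dbca -silent -deleteInstance -nodeList rac3 \
    -gdbName test -instanceName test3 \
    -sysDBAUserName sys -sysDBAPassword <sys_password>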

Verify that the rac3 instance has been deleted

Check the active instances:

[oracle@rac1 ~]$ sqlplus / as sysdba

SQL> select thread#,status,instance from v$thread;

   THREAD# STATUS INSTANCE
---------- ------ ------------------------------
         1 OPEN   test1
         2 OPEN   test2

-- the test3 instance is gone
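DBCA also drops the deleted instance's redo thread; listing the remaining log groups per thread is a quick cross-check:

SQL> select thread#, group# from v$log order by thread#, group#;
-- only groups belonging to threads 1 and 2 should remain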

 

Check the database configuration:

 

[oracle@rac1 ~]$ srvctl config database -d test

Database unique name: test

Database name: test

Oracle home: /u01/app/oracle/product/11.2.0/dbhome_1

Oracle user: oracle

Spfile: +DATA/test/spfiletest.ora

Domain:

Start options: open

Stop options: immediate

Database role: PRIMARY

Management policy: AUTOMATIC

Server pools: test

Database instances: test1,test2

Disk Groups: DATA

Mount point paths:

Services:

Type: RAC

Database is administrator managed

 

 

Stopping the listener on rac3

[oracle@rac3 ~]$ srvctl config listener -a

Name: LISTENER

Network: 1, Owner: grid

Home:

/u01/app/11.2.0/grid on node(s) rac3,rac1,rac2

End points: TCP:1521

 

[oracle@rac3 ~]$ srvctl disable listener -l listener -n rac3

[oracle@rac3 ~]$ srvctl stop listener -l listener -n rac3
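srvctl can confirm the new state directly (a sketch; srvctl status listener accepts -l and -n in 11.2):

[oracle@rac3 ~]$ srvctl status listener -l listener -n rac3    # should report LISTENER as not running on rac3

lsnrctl shows the same from the OS side: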

 

[oracle@rac3 ~]$ lsnrctl status

 

LSNRCTL for Linux: Version 11.2.0.4.0 - Production on 08-AUG-2016 04:02:53

 

Copyright (c) 1991, 2013, Oracle.  All rights reserved.

 

Connecting to (ADDRESS=(PROTOCOL=tcp)(HOST=)(PORT=1521))

TNS-12541: TNS:no listener

 TNS-12560: TNS:protocol adapter error

TNS-00511: No listener

Linux Error: 111: Connection refused

 

On rac3, update the inventory node list as the oracle user

[oracle@rac3 ~]$ $ORACLE_HOME/oui/bin/runInstaller -updateNodeList ORACLE_HOME=$ORACLE_HOME "CLUSTER_NODES={rac3}" -local
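The -updateNodeList call rewrites the node list recorded for this Oracle home in the central inventory. A quick sanity check, assuming the central inventory lives at /u01/app/oraInventory (the location the deinstall log below also reports); the grep pattern is just an illustration:

[oracle@rac3 ~]$ grep -A 3 "dbhome_1" /u01/app/oraInventory/ContentsXML/inventory.xml    # the <NODE_LIST> for this home should now contain only rac3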

 

Remove the database software from rac3

 

Run the deinstall tool locally on rac3; after it completes, the files under $ORACLE_HOME on rac3 will have been deleted. The full run:

[oracle@rac3 ~]$ $ORACLE_HOME/deinstall/deinstall -local

Checking for required files and bootstrapping ...

Please wait ...

Location of logs /u01/app/oraInventory/logs/

 

############ ORACLE DEINSTALL & DECONFIG TOOL START ############

 

 

######################### CHECK OPERATION START #########################

## [START] Install check configuration ##

 

 

Checking for existence of the Oracle home location /u01/app/oracle/product/11.2.0/dbhome_1

Oracle Home type selected for deinstall is: Oracle Real Application Cluster Database

Oracle Base selected for deinstall is: /u01/app/oracle

Checking for existence of central inventory location /u01/app/oraInventory

Checking for existence of the Oracle Grid Infrastructure home /u01/app/11.2.0/grid

The following nodes are part of this cluster: rac3

Checking for sufficient temp space availability on node(s) : 'rac3'

 

## [END] Install check configuration ##

 

 

Network Configuration check config START

 

Network de-configuration trace file location: /u01/app/oraInventory/logs/netdc_check2016-08-08_04-15-49-AM.log

 

Network Configuration check config END

 

Database Check Configuration START

 

Database de-configuration trace file location: /u01/app/oraInventory/logs/databasedc_check2016-08-08_04-15-58-AM.log

 

Use comma as separator when specifying list of values as input

 

Specify the list of database names that are configured in this Oracle home [test]:

 

###### For Database 'test' ######

 

Specify the type of this database (1.Single Instance Database|2.Oracle Restart Enabled Database|3.RAC Database|4.RAC One Node Database) [3]:

Specify the list of nodes on which this database has instances [rac1, rac2]:

Specify the list of instance names [test1, test2]:

Specify the local instance name on node rac3  []:

Specify the Host name where any instance is running  [rac1]:

Specify the instance name running on node rac1  [test1]:

Specify the diagnostic destination location of the database [/u01/app/oracle/diag/rdbms/test]:

Specify the storage type used by the Database ASM|FS []: FS

 

 

Database Check Configuration END

 

Enterprise Manager Configuration Assistant START

 

EMCA de-configuration trace file location: /u01/app/oraInventory/logs/emcadc_check2016-08-08_04-16-58-AM.log

 

Checking configuration for database test

Enterprise Manager Configuration Assistant END

Oracle Configuration Manager check START

OCM check log file location : /u01/app/oraInventory/logs//ocm_check9304.log

Oracle Configuration Manager check END

 

######################### CHECK OPERATION END #########################

 

 

####################### CHECK OPERATION SUMMARY #######################

Oracle Grid Infrastructure Home is: /u01/app/11.2.0/grid

The cluster node(s) on which the Oracle home deinstallation will be performed are:rac3

Oracle Home selected for deinstall is: /u01/app/oracle/product/11.2.0/dbhome_1

Inventory Location where the Oracle home registered is: /u01/app/oraInventory

The following databases were selected for de-configuration : test

Database unique name : test

Storage used : ASM

No Enterprise Manager configuration to be updated for any database(s)

No Enterprise Manager ASM targets to update

No Enterprise Manager listener targets to migrate

Checking the config status for CCR

Oracle Home exists with CCR directory, but CCR is not configured

CCR check is finished

Do you want to continue (y - yes, n - no)? [n]: y

A log of this session will be written to: '/u01/app/oraInventory/logs/deinstall_deconfig2016-08-08_04-15-32-AM.out'

Any error messages from this session will be written to: '/u01/app/oraInventory/logs/deinstall_deconfig2016-08-08_04-15-32-AM.err'

 

######################## CLEAN OPERATION START ########################

 

Enterprise Manager Configuration Assistant START

 

EMCA de-configuration trace file location: /u01/app/oraInventory/logs/emcadc_clean2016-08-08_04-16-58-AM.log

 

Updating Enterprise Manager ASM targets (if any)

Updating Enterprise Manager listener targets (if any)

Enterprise Manager Configuration Assistant END

Database de-configuration trace file location: /u01/app/oraInventory/logs/databasedc_clean2016-08-08_04-17-07-AM.log

Database Clean Configuration START test

This operation may take few minutes.

Database Clean Configuration END test

 

Network Configuration clean config START

 

Network de-configuration trace file location: /u01/app/oraInventory/logs/netdc_clean2016-08-08_04-20-30-AM.log

 

De-configuring Listener configuration file on all nodes...

Listener configuration file de-configured successfully.

 

De-configuring Naming Methods configuration file on all nodes...

Naming Methods configuration file de-configured successfully.

 

De-configuring Local Net Service Names configuration file on all nodes...

Local Net Service Names configuration file de-configured successfully.

 

De-configuring Directory Usage configuration file on all nodes...

Directory Usage configuration file de-configured successfully.

 

De-configuring backup files on all nodes...

Backup files de-configured successfully.

 

The network configuration has been cleaned up successfully.

 

Network Configuration clean config END

 

Oracle Configuration Manager clean START

OCM clean log file location : /u01/app/oraInventory/logs//ocm_clean9304.log

Oracle Configuration Manager clean END

Setting the force flag to false

Setting the force flag to cleanup the Oracle Base

Oracle Universal Installer clean START

 

Detach Oracle home '/u01/app/oracle/product/11.2.0/dbhome_1' from the central inventory on the local node : Done

 

Delete directory '/u01/app/oracle/product/11.2.0/dbhome_1' on the local node : Done

 

Failed to delete the directory '/u01/app/oracle'. The directory is in use.

Delete directory '/u01/app/oracle' on the local node : Failed <<<<

 

Oracle Universal Installer cleanup completed with errors.

 

Oracle Universal Installer clean END

 

 

## [START] Oracle install clean ##

 

Clean install operation removing temporary directory '/tmp/deinstall2016-08-08_04-15-02AM' on node 'rac3'

 

## [END] Oracle install clean ##

 

 

######################### CLEAN OPERATION END #########################

 

 

####################### CLEAN OPERATION SUMMARY #######################

Successfully de-configured the following database instances : test

Cleaning the config for CCR

As CCR is not configured, so skipping the cleaning of CCR configuration

CCR clean is finished

Successfully detached Oracle home '/u01/app/oracle/product/11.2.0/dbhome_1' from the central inventory on the local node.

Successfully deleted directory '/u01/app/oracle/product/11.2.0/dbhome_1' on the local node.

Failed to delete directory '/u01/app/oracle' on the local node.

Oracle Universal Installer cleanup completed with errors.

 

Oracle deinstall tool successfully cleaned up temporary directories.

#######################################################################

 

 

############# ORACLE DEINSTALL & DECONFIG TOOL END #############

Stop the rac3 NodeApps from any retained node

[oracle@rac1 ~]$ srvctl stop nodeapps -n rac3 -f

-- note that this stops the ONS and VIP on rac3
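To double-check from a retained node (srvctl status nodeapps is standard 11.2 syntax):

[oracle@rac1 ~]$ srvctl status nodeapps -n rac3    # the VIP and ONS on rac3 should be reported as not running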

 

On the retained nodes, update the inventory node list as the oracle user

 

[oracle@rac1 ~]$ $ORACLE_HOME/oui/bin/runInstaller -updateNodeList ORACLE_HOME=$ORACLE_HOME "CLUSTER_NODES={rac1,rac2}"

 

This may report a failure:

Checking swap space: must be greater than 500 MB.   Actual 3349 MB    Passed
The inventory pointer is located at /etc/oraInst.loc
The inventory is located at /u01/app/oraInventory
'UpdateNodeList' failed.

[oracle@rac1 ~]$ $ORACLE_HOME/oui/bin/runInstaller -updateNodeList ORACLE_HOME=$ORACLE_HOME "CLUSTER_NODES={rac1,rac2}"
Starting Oracle Universal Installer...

Checking swap space: must be greater than 500 MB.   Actual 3127 MB    Passed
The inventory pointer is located at /etc/oraInst.loc
The inventory is located at /u01/app/oraInventory
'UpdateNodeList' failed.

 

Deconfigure the clusterware on rac3

 

[root@rac3 ~]# /u01/app/11.2.0/grid/crs/install/rootcrs.pl -deconfig -force

[grid@rac3 ~]$ crs_stat -t

CRS-0184: Cannot communicate with the CRS daemon.

The clusterware processes on rac3 have been stopped.
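A direct way to confirm the stack is down on rac3 (crsctl check crs is a standard command; the exact error reported may vary):

[grid@rac3 ~]$ crsctl check crs    # expect errors such as CRS-4639 rather than "online" statuses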

Delete the rac3 VIP

If the previous step went smoothly, the rac3 VIP has already been removed; verify with crs_stat -t on any retained node:

[grid@rac1 ~]$ crs_stat -t
Name           Type           Target    State     Host
------------------------------------------------------------
ora.DATA.dg    ora....up.type ONLINE    ONLINE    rac1
ora....ER.lsnr ora....er.type ONLINE    ONLINE    rac1
ora....N1.lsnr ora....er.type ONLINE    ONLINE    rac1
ora....N2.lsnr ora....er.type ONLINE    ONLINE    rac2
ora....N3.lsnr ora....er.type ONLINE    ONLINE    rac1
ora.OCR.dg     ora....up.type ONLINE    ONLINE    rac1
ora.asm        ora.asm.type   ONLINE    ONLINE    rac1
ora.cvu        ora.cvu.type   ONLINE    ONLINE    rac1
ora.gsd        ora.gsd.type   OFFLINE   OFFLINE
ora....network ora....rk.type ONLINE    ONLINE    rac1
ora.oc4j       ora.oc4j.type  ONLINE    ONLINE    rac2
ora.ons        ora.ons.type   ONLINE    ONLINE    rac1
ora....SM1.asm application    ONLINE    ONLINE    rac1
ora....C1.lsnr application    ONLINE    ONLINE    rac1
ora.rac1.gsd   application    OFFLINE   OFFLINE
ora.rac1.ons   application    ONLINE    ONLINE    rac1
ora.rac1.vip   ora....t1.type ONLINE    ONLINE    rac1
ora....SM2.asm application    ONLINE    ONLINE    rac2
ora....C2.lsnr application    ONLINE    ONLINE    rac2
ora.rac2.gsd   application    OFFLINE   OFFLINE
ora.rac2.ons   application    ONLINE    ONLINE    rac2
ora.rac2.vip   ora....t1.type ONLINE    ONLINE    rac2
ora.scan1.vip  ora....ip.type ONLINE    ONLINE    rac1
ora.scan2.vip  ora....ip.type ONLINE    ONLINE    rac2
ora.scan3.vip  ora....ip.type ONLINE    ONLINE    rac1

 

If the rac3 VIP resource still exists, run:

[root@rac1 ~]# srvctl stop vip -i ora.rac3.vip -f  

[root@rac1 ~]# srvctl remove vip -i ora.rac3.vip -f  

[root@rac1 ~]# crsctl delete resource ora.rac3.vip -f
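You can then confirm the resource is really gone (a sketch; ora.rac3.vip follows the ora.<node>.vip naming seen above):

[grid@rac1 ~]$ crsctl status resource ora.rac3.vip    # expect CRS-2613: Could not find resource 'ora.rac3.vip'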

 

Delete the rac3 node from any retained node

[root@rac1 bin]# cd /u01/app/11.2.0/grid/bin/

[root@rac1 bin]# ./crsctl delete node -n rac3

CRS-4661: Node rac3 successfully deleted.

[root@rac1 bin]# ./olsnodes -t -s

rac1 Active       Unpinned

rac2 Active       Unpinned

 

-- only two nodes remain

 

On rac3, update the inventory node list as the grid user

 

[grid@rac3 ~]$ $ORACLE_HOME/oui/bin/runInstaller -updateNodeList ORACLE_HOME=$ORACLE_HOME "CLUSTER_NODES={rac3}" CRS=true -local

 

Remove the clusterware software from rac3

[grid@rac3 ~]$ $ORACLE_HOME/deinstall/deinstall -local

Accept the defaults at each prompt.

 

The deconfig command below can be executed in parallel on all the remote nodes. Execute the command on  the local node after the execution completes on all the remote nodes.

 

Press Enter after you finish running the above commands

<---------------------------------------- 

After pressing Enter, the tool keeps insisting the scripts have not finished; exit with Ctrl+C. (In my view this is a bug in 11.2 RAC.)

Note:
Because the grid deinstall run was aborted at the end, the rm cleanup portion of the script was never executed.

 

The grid software therefore has to be removed manually:

[root@rac3 ~]# cd /u01/app/11.2.0/grid/ 

[root@rac3 grid]# rm -rf * 

[root@rac3 grid]# rm -rf /etc/oraInst.loc 

[root@rac3 grid]# rm -rf /opt/ORCLfmap

 

On the retained nodes, update the inventory node list as the grid user

[root@rac1 bin]# su - grid
[grid@rac1 ~]$ $ORACLE_HOME/oui/bin/runInstaller -updateNodeList ORACLE_HOME=$ORACLE_HOME "CLUSTER_NODES={rac1,rac2}" CRS=true

[oracle@rac2 ~]$ su - grid
[grid@rac2 ~]$ $ORACLE_HOME/oui/bin/runInstaller -updateNodeList ORACLE_HOME=$ORACLE_HOME "CLUSTER_NODES={rac1,rac2}" CRS=true

This may fail; the failure can be ignored.

 

Verify that rac3 has been removed

 

[grid@rac1 ~]$ cluvfy stage -post nodedel -n rac3

[grid@rac1 ~]$ crsctl status resource -t

 

[oracle@rac1 ~]$ sqlplus / as sysdba

SQL> select thread#,status,instance from v$thread;

 

 
