Building an Oracle 11g RAC 64-bit Cluster on CentOS and VMware Workstation 10: 4. Oracle RAC Installation FAQ - 4.6 Reconfiguring and Deinstalling 11gR2 Grid Infrastructure

The root scripts of a Grid Infrastructure installation must be run in the following order:

1. [root@linuxrac1 ~]# /u01/app/oraInventory/orainstRoot.sh

2. [root@linuxrac2 ~]# /u01/app/oraInventory/orainstRoot.sh

3. [root@linuxrac1 ~]# /u01/app/11.2.0/grid/root.sh

4. [root@linuxrac2 ~]# /u01/app/11.2.0/grid/root.sh
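After the fourth script completes, a healthy stack should pass a cluster-wide check; a quick verification sketch, assuming the Grid home used throughout this series:

/u01/app/11.2.0/grid/bin/crsctl check cluster -all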

When the clusterware was installed, the scripts were not executed on both nodes in the order above; instead, the following incorrect order was used:

1. [root@linuxrac1 ~]# /u01/app/oraInventory/orainstRoot.sh

2. [root@linuxrac1 ~]# /u01/app/11.2.0/grid/root.sh

3. [root@linuxrac2 ~]# /u01/app/oraInventory/orainstRoot.sh

4. [root@linuxrac2 ~]# /u01/app/11.2.0/grid/root.sh

This caused the cluster installation to fail.

1. Deconfigure first: deconfiguring Grid Infrastructure does not remove the binaries that have already been copied; it only reverts the nodes to the state they were in before CRS was configured.

a) Log in as root and run the following command on every node except the last one:

# perl /u01/app/11.2.0/grid/crs/install/rootcrs.pl -verbose -deconfig -force

[root@linuxrac1 ~]# perl /u01/app/11.2.0/grid/crs/install/rootcrs.pl -verbose -deconfig -force

2014-10-16 00:20:37: Parsing the host name

2014-10-16 00:20:37: Checking for super user privileges

2014-10-16 00:20:37: User has super user privileges

Using configuration parameter file: /u01/app/11.2.0/grid/crs/install/crsconfig_params

PRCR-1035 : Failed to look up CRS resource ora.cluster_vip.type for 1

PRCR-1068 : Failed to query resources

Cannot communicate with crsd

PRCR-1070 : Failed to check if resource ora.gsd is registered

Cannot communicate with crsd

PRCR-1070 : Failed to check if resource ora.ons is registered

Cannot communicate with crsd

PRCR-1070 : Failed to check if resource ora.eons is registered

Cannot communicate with crsd

 

ADVM/ACFS is not supported on centos-release-5-4.el5.centos.1

 

ACFS-9201: Not Supported

CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'linuxrac1'

CRS-2673: Attempting to stop 'ora.crsd' on 'linuxrac1'

CRS-4548: Unable to connect to CRSD

CRS-2675: Stop of 'ora.crsd' on 'linuxrac1' failed

CRS-2679: Attempting to clean 'ora.crsd' on 'linuxrac1'

CRS-4548: Unable to connect to CRSD

CRS-2678: 'ora.crsd' on 'linuxrac1' has experienced an unrecoverable failure

CRS-0267: Human intervention required to resume its availability.

CRS-2795: Shutdown of Oracle High Availability Services-managed resources on 'linuxrac1' has failed

CRS-4687: Shutdown command has completed with error(s).

CRS-4000: Command Stop failed, or completed with errors.

You must kill crs processes or reboot the system to properly

cleanup the processes started by Oracle clusterware

Successfully deconfigured Oracle clusterware stack on this node
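The warning above notes that stray CRS processes may have to be killed manually. Before continuing to the last node, a quick sketch to confirm no clusterware daemons survived (standard 11gR2 daemon names assumed):

ps -ef | grep -E 'crsd|cssd|ohasd|evmd|gipcd|gpnpd' | grep -v grep
# no output means the node is clean; otherwise kill the listed processes or reboot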

b) Then, again as root, run the same command on the last node. This run also clears the OCR configuration and the voting disk:

# perl /u01/app/11.2.0/grid/crs/install/rootcrs.pl -verbose -deconfig -force

[root@linuxrac2 ~]# perl /u01/app/11.2.0/grid/crs/install/rootcrs.pl -verbose -deconfig -force

2014-10-16 00:25:37: Parsing the host name

2014-10-16 00:25:37: Checking for super user privileges

2014-10-16 00:25:37: User has super user privileges

Using configuration parameter file: /u01/app/11.2.0/grid/crs/install/crsconfig_params

VIP exists.:linuxrac1

VIP exists.: /linuxrac1-vip/10.10.97.181/255.255.255.0/eth0

GSD exists.

ONS daemon exists. Local port 6100, remote port 6200

eONS daemon exists. Multicast port 18049, multicast IP address 234.241.229.252, listening port 2016

PRKO-2439 : VIP does not exist.

 

PRKO-2313 : VIP linuxrac2 does not exist.

ADVM/ACFS is not supported on centos-release-5-4.el5.centos.1

 

ACFS-9201: Not Supported

CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'linuxrac2'

CRS-2673: Attempting to stop 'ora.crsd' on 'linuxrac2'

CRS-2790: Starting shutdown of Cluster Ready Services-managed resources on 'linuxrac2'

CRS-2673: Attempting to stop 'ora.OCR_VOTE.dg' on 'linuxrac2'

CRS-2677: Stop of 'ora.OCR_VOTE.dg' on 'linuxrac2' succeeded

CRS-2673: Attempting to stop 'ora.asm' on 'linuxrac2'

CRS-2677: Stop of 'ora.asm' on 'linuxrac2' succeeded

CRS-2792: Shutdown of Cluster Ready Services-managed resources on 'linuxrac2' has completed

CRS-2677: Stop of 'ora.crsd' on 'linuxrac2' succeeded

CRS-2673: Attempting to stop 'ora.cssdmonitor' on 'linuxrac2'

CRS-2673: Attempting to stop 'ora.ctssd' on 'linuxrac2'

CRS-2673: Attempting to stop 'ora.evmd' on 'linuxrac2'

CRS-2673: Attempting to stop 'ora.asm' on 'linuxrac2'

CRS-2673: Attempting to stop 'ora.mdnsd' on 'linuxrac2'

CRS-2677: Stop of 'ora.cssdmonitor' on 'linuxrac2' succeeded

CRS-2677: Stop of 'ora.evmd' on 'linuxrac2' succeeded

CRS-2677: Stop of 'ora.mdnsd' on 'linuxrac2' succeeded

CRS-2677: Stop of 'ora.asm' on 'linuxrac2' succeeded

CRS-2677: Stop of 'ora.ctssd' on 'linuxrac2' succeeded

CRS-2673: Attempting to stop 'ora.cssd' on 'linuxrac2'

CRS-2677: Stop of 'ora.cssd' on 'linuxrac2' succeeded

CRS-2673: Attempting to stop 'ora.gpnpd' on 'linuxrac2'

CRS-2673: Attempting to stop 'ora.diskmon' on 'linuxrac2'

CRS-2677: Stop of 'ora.gpnpd' on 'linuxrac2' succeeded

CRS-2673: Attempting to stop 'ora.gipcd' on 'linuxrac2'

CRS-2677: Stop of 'ora.gipcd' on 'linuxrac2' succeeded

CRS-2677: Stop of 'ora.diskmon' on 'linuxrac2' succeeded

CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'linuxrac2' has completed

CRS-4133: Oracle High Availability Services has been stopped.

Successfully deconfigured Oracle clusterware stack on this node
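For reference: Oracle's documented 11.2 procedure adds the -lastnode flag on the final node, which explicitly wipes the OCR and voting disks. If the plain command above leaves them configured, this variant (run only on the last node; assumes -lastnode is available in your rootcrs.pl version) is worth trying:

perl /u01/app/11.2.0/grid/crs/install/rootcrs.pl -verbose -deconfig -force -lastnode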

c) If ASM disks were used, continue with the steps below to make the disks available as ASM candidates again (this wipes all ASM disk groups):

[root@linuxrac1 ~]# dd if=/dev/zero of=/dev/sdb1 bs=1024 count=10000

10000+0 records in

10000+0 records out

10240000 bytes (10M) copied, 0.002998 seconds, 34.2 MB/s

[root@linuxrac2 ~]# dd if=/dev/zero of=/dev/sdb1 bs=1024 count=10000

10000+0 records in

10000+0 records out

10240000 bytes (10M) copied, 0.00289 seconds, 35.4 MB/s
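Zeroing the first ~10 MB is enough because the ASM disk header sits at the start of the partition. To confirm the header is really gone, dump the first few blocks (a quick sketch):

dd if=/dev/sdb1 bs=1024 count=4 2>/dev/null | od -c | head
# all-zero output means the old ASM header has been erased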

 

[root@linuxrac1 /]# /etc/init.d/oracleasm deletedisk OCR_VOTE /dev/sdb1

Removing ASM disk "OCR_VOTE":                              [  OK  ]

[root@linuxrac2 /]# /etc/init.d/oracleasm deletedisk OCR_VOTE /dev/sdb1

Removing ASM disk "OCR_VOTE":                              [  OK  ]

[root@linuxrac1 /]# /etc/init.d/oracleasm createdisk OCR_VOTE /dev/sdb1

[root@linuxrac2 /]# oracleasm scandisks

Reloading disk partitions: done

Cleaning any stale ASM disks...

Scanning system for ASM disks...

Instantiating disk "OCR_VOTE"

[root@linuxrac2 /]# oracleasm listdisks

DATA

DATA2

FRA

OCR_VOTE
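Before re-running the Grid installer, the freshly stamped disk can be double-checked with ASMLib's querydisk (a sketch; it should report that "OCR_VOTE" is a valid ASM disk):

[root@linuxrac2 /]# /etc/init.d/oracleasm querydisk OCR_VOTE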

 

2. Completely removing Grid Infrastructure

11gR2 Grid Infrastructure also provides a complete deinstall utility. The deinstall command replaces the OUI-based approach to removing the clusterware and ASM, and returns the environment to its state before Grid was installed.

The command stops the cluster and removes the binaries together with all of their associated configuration.

Command location: $GRID_HOME/deinstall

A concrete example of running the command follows. During the operation you must supply some interactive input, and some steps are run as root in a new session.
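For a non-destructive preview, the tool can first be run in check-only mode (a sketch; the -checkonly flag exists in the 11.2 deinstall utility, but verify it on your exact version). The real interactive session follows below:

su - grid
cd /u01/app/11.2.0/grid/deinstall
./deinstall -checkonly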

[root@linuxrac1 /]# cd /u01/app/11.2.0/grid/

[root@linuxrac1 grid]# cd bin

[root@linuxrac1 bin]# ./crsctl check crs

CRS-4047: No Oracle Clusterware components configured.

CRS-4000: Command Check failed, or completed with errors.

[root@linuxrac1 bin]# cd ../deinstall/

[root@linuxrac1 deinstall]# pwd

/u01/app/11.2.0/grid/deinstall

[root@linuxrac1 deinstall]# su grid

[grid@linuxrac1 deinstall]$ ./deinstall

Checking for required files and bootstrapping ...

Please wait ...

Location of logs /tmp/deinstall2014-10-16_06-18-10-PM/logs/

 

############ ORACLE DEINSTALL & DECONFIG TOOL START ############

 

 

######################## CHECK OPERATION START ########################

Install check configuration START

 

 

Checking for existence of the Oracle home location /u01/app/11.2.0/grid

Oracle Home type selected for de-install is: CRS

Oracle Base selected for de-install is: /u01/app/grid

Checking for existence of central inventory location /u01/app/oraInventory

Checking for existence of the Oracle Grid Infrastructure home

The following nodes are part of this cluster: linuxrac1,linuxrac2

 

Install check configuration END

 

Traces log file: /tmp/deinstall2014-10-16_06-18-10-PM/logs//crsdc.log

Enter an address or the name of the virtual IP used on node "linuxrac1"[linuxrac1-vip]

 >

The following information can be collected by running ifconfig -a on node "linuxrac1"

Enter the IP netmask of Virtual IP "10.10.97.181" on node "linuxrac1"[255.255.255.0]

 >

Enter the network interface name on which the virtual IP address "10.10.97.181" is active

 >

Enter an address or the name of the virtual IP used on node "linuxrac2"[linuxrac2-vip]

 >

 

The following information can be collected by running ifconfig -a on node "linuxrac2"

Enter the IP netmask of Virtual IP "10.10.97.183" on node "linuxrac2"[255.255.255.0]

 >

 

Enter the network interface name on which the virtual IP address "10.10.97.183" is active

 >

 

Enter an address or the name of the virtual IP[]

 >

 

 

Network Configuration check config START

 

Network de-configuration trace file location: /tmp/deinstall2014-10-16_06-18-10-PM/logs/netdc_check4793051808580150519.log

 

Specify all RAC listeners that are to be de-configured [LISTENER,LISTENER_SCAN1]:

 

Network Configuration check config END

 

Asm Check Configuration START

 

ASM de-configuration trace file location: /tmp/deinstall2014-10-16_06-18-10-PM/logs/asmcadc_check1638223369054710711.log

 

ASM configuration was not detected in this Oracle home. Was ASM configured in this Oracle home (y|n) [n]: y

 

Enter the OCR/Voting Disk diskgroup name []:    

Specify the ASM Diagnostic Destination [ ]:

Specify the diskgroups that are managed by this ASM instance []:

 

 

######################### CHECK OPERATION END #########################

 

 

####################### CHECK OPERATION SUMMARY #######################

Oracle Grid Infrastructure Home is:

The cluster node(s) on which the Oracle home exists are: (Please input nodes seperated by ",", eg: node1,node2,...)linuxrac1,linuxrac2

Oracle Home selected for de-install is: /u01/app/11.2.0/grid

Inventory Location where the Oracle home registered is: /u01/app/oraInventory

Following RAC listener(s) will be de-configured: LISTENER,LISTENER_SCAN1

ASM instance will be de-configured from this Oracle home

Do you want to continue (y - yes, n - no)? [n]: y

A log of this session will be written to: '/tmp/deinstall2014-10-16_06-18-10-PM/logs/deinstall_deconfig2014-10-16_06-18-44-PM.out'

Any error messages from this session will be written to: '/tmp/deinstall2014-10-16_06-18-10-PM/logs/deinstall_deconfig2014-10-16_06-18-44-PM.err'

 

######################## CLEAN OPERATION START ########################

ASM de-configuration trace file location: /tmp/deinstall2014-10-16_06-18-10-PM/logs/asmcadc_clean779346077107850558.log

ASM Clean Configuration START

ASM Clean Configuration END

 

Network Configuration clean config START

 

Network de-configuration trace file location: /tmp/deinstall2014-10-16_06-18-10-PM/logs/netdc_clean3314924901124092411.log

 

De-configuring RAC listener(s): LISTENER,LISTENER_SCAN1

 

De-configuring listener: LISTENER

    Stopping listener: LISTENER

    Warning: Failed to stop listener. Listener may not be running.

Listener de-configured successfully.

 

De-configuring listener: LISTENER_SCAN1

    Stopping listener: LISTENER_SCAN1

    Warning: Failed to stop listener. Listener may not be running.

Listener de-configured successfully.

 

De-configuring Naming Methods configuration file on all nodes...

Naming Methods configuration file de-configured successfully.

 

De-configuring Local Net Service Names configuration file on all nodes...

Local Net Service Names configuration file de-configured successfully.

 

De-configuring Directory Usage configuration file on all nodes...

Directory Usage configuration file de-configured successfully.

 

De-configuring backup files on all nodes...

Backup files de-configured successfully.

 

The network configuration has been cleaned up successfully.

 

Network Configuration clean config END

 

 

---------------------------------------->

Oracle Universal Installer clean START

 

Detach Oracle home '/u01/app/11.2.0/grid' from the central inventory on the local node : Done

 

Delete directory '/u01/app/11.2.0/grid' on the local node : Done

 

Delete directory '/u01/app/oraInventory' on the local node : Done

 

Delete directory '/u01/app/grid' on the local node : Done

 

Detach Oracle home '/u01/app/11.2.0/grid' from the central inventory on the remote nodes 'linuxrac2' : Done

 

Delete directory '/u01/app/11.2.0/grid' on the remote nodes 'linuxrac2' : Done

 

Delete directory '/u01/app/oraInventory' on the remote nodes 'linuxrac2' : Done

 

Delete directory '/u01/app/grid' on the remote nodes 'linuxrac2' : Done

 

Oracle Universal Installer cleanup was successful.

 

Oracle Universal Installer clean END

 

 

Oracle install clean START

 

Clean install operation removing temporary directory '/tmp/install' on node 'linuxrac1'

Clean install operation removing temporary directory '/tmp/install' on node 'linuxrac2'

 

Oracle install clean END

 

 

######################### CLEAN OPERATION END #########################

 

 

####################### CLEAN OPERATION SUMMARY #######################

ASM instance was de-configured successfully from the Oracle home

Following RAC listener(s) were de-configured successfully: LISTENER,LISTENER_SCAN1

Oracle Clusterware was already stopped and de-configured on node "linuxrac2"

Oracle Clusterware was already stopped and de-configured on node "linuxrac1"

Oracle Clusterware is stopped and de-configured successfully.

Successfully detached Oracle home '/u01/app/11.2.0/grid' from the central inventory on the local node.

Successfully deleted directory '/u01/app/11.2.0/grid' on the local node.

Successfully deleted directory '/u01/app/oraInventory' on the local node.

Successfully deleted directory '/u01/app/grid' on the local node.

Successfully detached Oracle home '/u01/app/11.2.0/grid' from the central inventory on the remote nodes 'linuxrac2'.

Successfully deleted directory '/u01/app/11.2.0/grid' on the remote nodes 'linuxrac2'.

Successfully deleted directory '/u01/app/oraInventory' on the remote nodes 'linuxrac2'.

Successfully deleted directory '/u01/app/grid' on the remote nodes 'linuxrac2'.

Oracle Universal Installer cleanup was successful.

 

 

Run 'rm -rf /etc/oraInst.loc' as root on node(s) 'linuxrac1,linuxrac2' at the end of the session.

 

Oracle install successfully cleaned up the temporary directories.

#######################################################################

 

 

############# ORACLE DEINSTALL & DECONFIG TOOL END #############
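As the clean-operation summary instructs, finish by removing the central inventory pointer as root on both nodes:

[root@linuxrac1 ~]# rm -rf /etc/oraInst.loc
[root@linuxrac2 ~]# rm -rf /etc/oraInst.loc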

 

 

All posts in the series "Building an Oracle 11g RAC 64-bit Cluster on CentOS and VMware Workstation 10":

1. Resource preparation
http://www.cnblogs.com/HondaHsu/p/4046352.html

2. Environment setup - 2.1 Creating the virtual machines
http://www.cnblogs.com/HondaHsu/p/4046378.html

2. Environment setup - 2.2 Installing the CentOS 5.4 operating system
http://www.cnblogs.com/HondaHsu/p/4046384.html

2. Environment setup - 2.3 Configuring the shared disks
http://www.cnblogs.com/HondaHsu/p/4046389.html

2. Environment setup - 2.4 Installing the JDK
http://www.cnblogs.com/HondaHsu/p/4046430.html

2. Environment setup - 2.5 Configuring the network
http://www.cnblogs.com/HondaHsu/p/4046443.html

2. Environment setup - 2.6 Installing the packages required by Oracle
http://www.cnblogs.com/HondaHsu/p/4054216.html

2. Environment setup - 2.7 Configuring resources and parameters
http://www.cnblogs.com/HondaHsu/p/4054238.html

2. Environment setup - 2.8 Configuring the user environment
http://www.cnblogs.com/HondaHsu/p/4054259.html

2. Environment setup - 2.9 Configuring user equivalence (optional)
http://www.cnblogs.com/HondaHsu/p/4054277.html

2. Environment setup - 2.10 Configuring the NTP service
http://www.cnblogs.com/HondaHsu/p/4054333.html

3. Installing Oracle RAC - 3.1 Installing and configuring the ASM driver
http://www.cnblogs.com/HondaHsu/p/4054367.html

3. Installing Oracle RAC - 3.2 Installing the cvuqdisk package
http://www.cnblogs.com/HondaHsu/p/4054395.html

3. Installing Oracle RAC - 3.3 Pre-installation checks
http://www.cnblogs.com/HondaHsu/p/4054481.html

3. Installing Oracle RAC - 3.4 Installing Grid Infrastructure
http://www.cnblogs.com/HondaHsu/p/4054518.html

3. Installing Oracle RAC - 3.5 Installing the Oracle 11gR2 database software and creating the database
http://www.cnblogs.com/HondaHsu/p/4054586.html

3. Installing Oracle RAC - 3.6 Cluster management commands
http://www.cnblogs.com/HondaHsu/p/4054635.html

4. Oracle RAC installation FAQ - 4.1 Gnome error in the system UI
http://www.cnblogs.com/HondaHsu/p/4046314.html

4. Oracle RAC installation FAQ - 4.2 oracleasm createdisk fails to create an ASM disk: Instantiating disk: failed
http://www.cnblogs.com/HondaHsu/p/4046248.html

4. Oracle RAC installation FAQ - 4.3 Connectivity failures between Oracle cluster nodes
http://www.cnblogs.com/HondaHsu/p/4046263.html

4. Oracle RAC installation FAQ - 4.4 Unable to install Grid Infrastructure graphically
http://www.cnblogs.com/HondaHsu/p/4046273.html

4. Oracle RAC installation FAQ - 4.5 Insufficient space in the ASM disk group while installing Grid
http://www.cnblogs.com/HondaHsu/p/4046285.html

4. Oracle RAC installation FAQ - 4.6 Reconfiguring and deinstalling 11gR2 Grid Infrastructure
http://www.cnblogs.com/HondaHsu/p/4046300.html

4. Oracle RAC installation FAQ - 4.7 Changing the public network IP in Oracle 11gR2 RAC
http://www.cnblogs.com/HondaHsu/p/4054953.html
