Oracle 10gR2 (10.2.0.5) RAC Installation on Linux, Part 2: Clusterware Installation and Upgrade
Environment: OEL 5.7 + Oracle 10.2.0.5 RAC
This part covers (numbering continues from Part 1):
3. Install Clusterware
4. Upgrade Clusterware
Oracle 10gR2 RAC installation guide for Linux:
Part 1: Preparation
Part 2: Clusterware installation and upgrade (this article)
Part 3: Database installation and upgrade
Grant ownership of the directory holding the Oracle installation media to the oracle user:
[root@oradb27 media]# chown -R oracle:oinstall /u01/media/
As the oracle user, extract the installation media:
[oracle@oradb27 media]$ gunzip 10201_clusterware_linux_x86_64.cpio.gz
[oracle@oradb27 media]$ cpio -idmv < 10201_clusterware_linux_x86_64.cpio
Run the pre-installation root check:
[root@oradb27 media]# /u01/media/clusterware/rootpre/rootpre.sh
No OraCM running
Install clusterware through Xmanager (XQuartz on macOS).
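X11 forwarding over ssh is one simple way to get the installer display (a sketch; the user and host are assumed from this environment, and xclock merely verifies forwarding):
ssh -X oracle@oradb27    # from the workstation running Xmanager/XQuartz
xclock                   # a clock window appearing confirms the X display works
With the display working, first whitelist the OS release in the installer's oraparam.ini: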
[root@oradb27 media]# cd /u01/media/clusterware/install
[root@oradb27 install]# vi oraparam.ini
Change the following entry:
[Certified Versions]
Linux=redhat-3,SuSE-9,redhat-4,UnitedLinux-1.0,asianux-1,asianux-2
by appending redhat-5, so it becomes:
[Certified Versions]
Linux=redhat-3,SuSE-9,redhat-4,UnitedLinux-1.0,asianux-1,asianux-2,redhat-5
[root@oradb27 clusterware]# pwd
/u01/media/clusterware
[root@oradb27 clusterware]# ./runInstaller
Run on node 1:
# At first, the five LUNs /dev/sd{a,b,c,d,e} had not been partitioned
[root@oradb27 rules.d]# /u01/app/oracle/oraInventory/orainstRoot.sh
Changing permissions of /u01/app/oracle/oraInventory to 770.
Changing groupname of /u01/app/oracle/oraInventory to oinstall.
The execution of the script is complete
[root@oradb27 rules.d]# /u01/app/oracle/product/10.2.0.5/crshome_1/root.sh
WARNING: directory '/u01/app/oracle/product/10.2.0.5' is not owned by root
WARNING: directory '/u01/app/oracle/product' is not owned by root
WARNING: directory '/u01/app/oracle' is not owned by root
WARNING: directory '/u01/app' is not owned by root
Checking to see if Oracle CRS stack is already configured
/etc/oracle does not exist. Creating it now.
Setting the permissions on OCR backup directory
Setting up NS directories
Failed to upgrade Oracle Cluster Registry configuration
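The cause here was the unpartitioned storage. A minimal sketch of creating one primary partition per LUN with fdisk (device names assumed from this environment; the raw-device bindings prepared in Part 1 must point at the resulting sdX1 partitions):
# run once on one node; the piped keystrokes n,p,1,<Enter>,<Enter>,w create a single
# primary partition spanning each LUN
for d in a b c d e; do
  echo -e "n\np\n1\n\n\nw" | fdisk /dev/sd${d}
done
partprobe    # re-read the partition tables (run on the other node as well)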
# With partitions sd{a,b,c,d,e}1 created on the five LUNs, root.sh now succeeds
[root@oradb27 10.2.0.5]# /u01/app/oracle/oraInventory/orainstRoot.sh
Changing permissions of /u01/app/oracle/oraInventory to 770.
Changing groupname of /u01/app/oracle/oraInventory to oinstall.
The execution of the script is complete
[root@oradb27 10.2.0.5]# /u01/app/oracle/product/10.2.0.5/crshome_1/root.sh
WARNING: directory '/u01/app/oracle/product/10.2.0.5' is not owned by root
WARNING: directory '/u01/app/oracle/product' is not owned by root
WARNING: directory '/u01/app/oracle' is not owned by root
WARNING: directory '/u01/app' is not owned by root
Checking to see if Oracle CRS stack is already configured
/etc/oracle does not exist. Creating it now.
Setting the permissions on OCR backup directory
Setting up NS directories
Oracle Cluster Registry configuration upgraded successfully
WARNING: directory '/u01/app/oracle/product/10.2.0.5' is not owned by root
WARNING: directory '/u01/app/oracle/product' is not owned by root
WARNING: directory '/u01/app/oracle' is not owned by root
WARNING: directory '/u01/app' is not owned by root
Successfully accumulated necessary OCR keys.
Using ports: CSS=49895 CRS=49896 EVMC=49898 and EVMR=49897.
node <nodenumber>: <nodename> <private interconnect name> <hostname>
node 1: oradb27 oradb27-priv oradb27
node 2: oradb28 oradb28-priv oradb28
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
Now formatting voting device: /dev/raw/raw3
Now formatting voting device: /dev/raw/raw4
Now formatting voting device: /dev/raw/raw5
Format of 3 voting devices complete.
Startup will be queued to init within 90 seconds.
Adding daemons to inittab
Expecting the CRS daemons to be up within 600 seconds.
CSS is active on these nodes.
oradb27
CSS is inactive on these nodes.
oradb28
Local node checking complete.
Run root.sh on remaining nodes to start CRS daemons.
[root@oradb27 10.2.0.5]#
Oracle's official fix for this error is in the MOS note "Executing root.sh errors with 'Failed To Upgrade Oracle Cluster Registry Configuration'" (Doc ID 466673.1):
Before running the root.sh on the first node in the cluster do the following:
- Download Patch:4679769 from Metalink (contains a patched version of clsfmt.bin).
- Do the following steps as stated in the patch README to fix the problem:
Note: clsfmt.bin need only be replaced on the 1st node of the cluster
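In this walkthrough, partitioning the LUNs resolved the error; for the clsfmt.bin variant of the problem, the README steps amount to roughly the following (a sketch from memory; the patch staging path is hypothetical, so confirm against the README shipped with patch 4679769):
# on node 1 only, before running root.sh
cd /u01/app/oracle/product/10.2.0.5/crshome_1/bin
mv clsfmt.bin clsfmt.bin.bak            # keep the original binary
cp /u01/media/4679769/clsfmt.bin .      # hypothetical location of the unzipped patch
chmod 751 clsfmt.bin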
Run on node 2:
[root@oradb28 crshome_1]# /u01/app/oracle/oraInventory/orainstRoot.sh
Changing permissions of /u01/app/oracle/oraInventory to 770.
Changing groupname of /u01/app/oracle/oraInventory to oinstall.
The execution of the script is complete
[root@oradb28 crshome_1]# /u01/app/oracle/product/10.2.0.5/crshome_1/root.sh
WARNING: directory '/u01/app/oracle/product/10.2.0.5' is not owned by root
WARNING: directory '/u01/app/oracle/product' is not owned by root
WARNING: directory '/u01/app/oracle' is not owned by root
WARNING: directory '/u01/app' is not owned by root
Checking to see if Oracle CRS stack is already configured
/etc/oracle does not exist. Creating it now.
Setting the permissions on OCR backup directory
Setting up NS directories
Oracle Cluster Registry configuration upgraded successfully
WARNING: directory '/u01/app/oracle/product/10.2.0.5' is not owned by root
WARNING: directory '/u01/app/oracle/product' is not owned by root
WARNING: directory '/u01/app/oracle' is not owned by root
WARNING: directory '/u01/app' is not owned by root
clscfg: EXISTING configuration version 3 detected.
clscfg: version 3 is 10G Release 2.
Successfully accumulated necessary OCR keys.
Using ports: CSS=49895 CRS=49896 EVMC=49898 and EVMR=49897.
node <nodenumber>: <nodename> <private interconnect name> <hostname>
node 1: oradb27 oradb27-priv oradb27
node 2: oradb28 oradb28-priv oradb28
clscfg: Arguments check out successfully.
NO KEYS WERE WRITTEN. Supply -force parameter to override.
-force is destructive and will destroy any previous cluster
configuration.
Oracle Cluster Registry for cluster has already been initialized
Startup will be queued to init within 90 seconds.
Adding daemons to inittab
Expecting the CRS daemons to be up within 600 seconds.
CSS is active on these nodes.
oradb27
oradb28
CSS is active on all nodes.
Waiting for the Oracle CRSD and EVMD to start
Waiting for the Oracle CRSD and EVMD to start
Oracle CRS stack installed and running under init(1M)
Running vipca(silent) for configuring nodeapps
/u01/app/oracle/product/10.2.0.5/crshome_1/jdk/jre//bin/java: error while loading shared libraries: libpthread.so.0: cannot open shared object file: No such file or directory
[root@oradb28 crshome_1]#
To fix this error, edit the vipca and srvctl scripts under /u01/app/oracle/product/10.2.0.5/crshome_1/bin:
[root@oradb28 bin]# ls -l vipca
-rwxr-xr-x 1 oracle oinstall 5343 Jan 3 09:44 vipca
[root@oradb28 bin]# ls -l srvctl
-rwxr-xr-x 1 oracle oinstall 5828 Jan 3 09:44 srvctl
In both scripts, add the following line immediately after the block that sets and exports LD_ASSUME_KERNEL:
unset LD_ASSUME_KERNEL
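After the edit, the relevant block in each script looks roughly like this (based on the stock 10.2.0.1 scripts; the exact line position may vary):
if [ "$arch" = "i686" -o "$arch" = "ia64" -o "$arch" = "x86_64" ]
then
     LD_ASSUME_KERNEL=2.4.19
     export LD_ASSUME_KERNEL
fi
unset LD_ASSUME_KERNEL    # the added line: OEL 5 dropped LinuxThreads, so this must not stay set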
Then re-run /u01/app/oracle/product/10.2.0.5/crshome_1/root.sh:
[root@oradb28 bin]# /u01/app/oracle/product/10.2.0.5/crshome_1/root.sh
WARNING: directory '/u01/app/oracle/product/10.2.0.5' is not owned by root
WARNING: directory '/u01/app/oracle/product' is not owned by root
WARNING: directory '/u01/app/oracle' is not owned by root
WARNING: directory '/u01/app' is not owned by root
Checking to see if Oracle CRS stack is already configured
Oracle CRS stack is already configured and will be running under init(1M)
No error this time, but there was also no sign that vipca ran to create the nodeapps.
If vipca completed successfully in step 3.3 above, this step is unnecessary; if it did not, run vipca manually on the last node.
Running vipca manually here hit yet another error:
[root@oradb28 bin]# ./vipca
Error 0(Native: listNetInterfaces:[3])
[Error 0(Native: listNetInterfaces:[3])]
Inspect the network interface information and register the interfaces manually:
[root@oradb28 bin]# ./oifcfg getif
[root@oradb28 bin]# ./oifcfg iflist
eth0 192.168.1.0
eth1 10.10.10.0
[root@oradb28 bin]# ifconfig
eth0 Link encap:Ethernet HWaddr 06:CB:72:01:07:88
inet addr:192.168.1.28 Bcast:192.168.1.255 Mask:255.255.255.0
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:1018747 errors:0 dropped:0 overruns:0 frame:0
TX packets:542075 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:2196870487 (2.0 GiB) TX bytes:43268497 (41.2 MiB)
eth1 Link encap:Ethernet HWaddr 22:1A:5A:DE:C1:21
inet addr:10.10.10.28 Bcast:10.10.10.255 Mask:255.255.255.0
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:5343 errors:0 dropped:0 overruns:0 frame:0
TX packets:3656 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:1315035 (1.2 MiB) TX bytes:1219689 (1.1 MiB)
lo Link encap:Local Loopback
inet addr:127.0.0.1 Mask:255.0.0.0
UP LOOPBACK RUNNING MTU:16436 Metric:1
RX packets:2193 errors:0 dropped:0 overruns:0 frame:0
TX packets:2193 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:65167 (63.6 KiB) TX bytes:65167 (63.6 KiB)
[root@oradb28 bin]# ./oifcfg -h
PRIF-9: incorrect usage
Name:
oifcfg - Oracle Interface Configuration Tool.
Usage: oifcfg iflist [-p [-n]]
oifcfg setif {-node <nodename> | -global} {<if_name>/<subnet>:<if_type>}...
oifcfg getif [-node <nodename> | -global] [ -if <if_name>[/<subnet>] [-type <if_type>] ]
oifcfg delif [-node <nodename> | -global] [<if_name>[/<subnet>]]
oifcfg [-help]
<nodename> - name of the host, as known to a communications network
<if_name> - name by which the interface is configured in the system
<subnet> - subnet address of the interface
<if_type> - type of the interface { cluster_interconnect | public | storage }
[root@oradb28 bin]# ./oifcfg setif -global eth0/192.168.1.0:public
[root@oradb28 bin]# ./oifcfg getif
eth0 192.168.1.0 global public
[root@oradb28 bin]#
[root@oradb28 bin]# ./oifcfg setif -global eth1/10.10.10.0:cluster_interconnect
[root@oradb28 bin]# ./oifcfg getif
eth0 192.168.1.0 global public
eth1 10.10.10.0 global cluster_interconnect
[root@oradb28 bin]#
Once oifcfg getif returned the interface information, vipca completed successfully.
Back in the clusterware installer window, the installation then also reported success.
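A quick way to confirm the nodeapps that vipca created (VIP, GSD, ONS) on each node, assuming the CRS home bin directory is on the PATH:
srvctl status nodeapps -n oradb27
srvctl status nodeapps -n oradb28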
At this point the cluster status should be normal on both nodes:
[oracle@oradb27 bin]$ crsctl check crs
CSS appears healthy
CRS appears healthy
EVM appears healthy
[oracle@oradb27 bin]$ crs_stat -t -v
Name Type R/RA F/FT Target State Host
----------------------------------------------------------------------
ora....b27.gsd application 0/5 0/0 ONLINE ONLINE oradb27
ora....b27.ons application 0/3 0/0 ONLINE ONLINE oradb27
ora....b27.vip application 0/0 0/0 ONLINE ONLINE oradb27
ora....b28.gsd application 0/5 0/0 ONLINE ONLINE oradb28
ora....b28.ons application 0/3 0/0 ONLINE ONLINE oradb28
ora....b28.vip application 0/0 0/0 ONLINE ONLINE oradb28
[oracle@oradb27 bin]$
[oracle@oradb28 ~]$ crsctl check crs
CSS appears healthy
CRS appears healthy
EVM appears healthy
[oracle@oradb28 ~]$ crs_stat -t -v
Name Type R/RA F/FT Target State Host
----------------------------------------------------------------------
ora....b27.gsd application 0/5 0/0 ONLINE ONLINE oradb27
ora....b27.ons application 0/3 0/0 ONLINE ONLINE oradb27
ora....b27.vip application 0/0 0/0 ONLINE ONLINE oradb27
ora....b28.gsd application 0/5 0/0 ONLINE ONLINE oradb28
ora....b28.ons application 0/3 0/0 ONLINE ONLINE oradb28
ora....b28.vip application 0/0 0/0 ONLINE ONLINE oradb28
[oracle@oradb28 ~]$
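Optionally, also verify the OCR and the three voting disks formatted earlier (run from the CRS home bin directory):
ocrcheck                     # reports OCR integrity and the device in use
crsctl query css votedisk    # should list /dev/raw/raw3, raw4 and raw5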
With 10.2.0.1 clusterware running, the next step is the upgrade to 10.2.0.5. Unzip the patchset:
[root@oradb27 media]$ unzip p8202632_10205_Linux-x86-64.zip
[root@oradb27 media]$ cd Disk1/
[root@oradb27 Disk1]$ pwd
/u01/media/Disk1
Start the clusterware upgrade through XQuartz:
ssh -X root@oradb27
[root@oradb27 Disk1]$ ./runInstaller
During the upgrade, the pre-installation check flagged one kernel parameter as not meeting requirements:
Checking for rmem_default=1048576; found rmem_default=262144. Failed <<<<
Adjust it in /etc/sysctl.conf, then run sysctl -p to apply the change.
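For example (only rmem_default was flagged here; leave other existing entries alone):
# append to /etc/sysctl.conf
net.core.rmem_default = 1048576
# apply without a reboot
sysctl -p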
After the OUI part of the upgrade completes, the patch README gives the post-install steps:
1. Log in as the root user.
2. As the root user, perform the following tasks:
a. Shutdown the CRS daemons by issuing the following command:
/u01/app/oracle/product/10.2.0.5/crshome_1/bin/crsctl stop crs
b. Run the shell script located at:
/u01/app/oracle/product/10.2.0.5/crshome_1/install/root102.sh
This script will automatically start the CRS daemons on the
patched node upon completion.
3. After completing this procedure, proceed to the next node and repeat.
That is, run the following two commands on each node in turn:
/u01/app/oracle/product/10.2.0.5/crshome_1/bin/crsctl stop crs
/u01/app/oracle/product/10.2.0.5/crshome_1/install/root102.sh
Run on node 1:
[root@oradb27 bin]# /u01/app/oracle/product/10.2.0.5/crshome_1/bin/crsctl stop crs
Stopping resources.
Successfully stopped CRS resources
Stopping CSSD.
Shutting down CSS daemon.
Shutdown request successfully issued.
[root@oradb27 bin]# /u01/app/oracle/product/10.2.0.5/crshome_1/install/root102.sh
Creating pre-patch directory for saving pre-patch clusterware files
Completed patching clusterware files to /u01/app/oracle/product/10.2.0.5/crshome_1
Relinking some shared libraries.
Relinking of patched files is complete.
WARNING: directory '/u01/app/oracle/product/10.2.0.5' is not owned by root
WARNING: directory '/u01/app/oracle/product' is not owned by root
WARNING: directory '/u01/app/oracle' is not owned by root
WARNING: directory '/u01/app' is not owned by root
Preparing to recopy patched init and RC scripts.
Recopying init and RC scripts.
Startup will be queued to init within 30 seconds.
Starting up the CRS daemons.
Waiting for the patched CRS daemons to start.
This may take a while on some systems.
.
10205 patch successfully applied.
clscfg: EXISTING configuration version 3 detected.
clscfg: version 3 is 10G Release 2.
Successfully deleted 1 values from OCR.
Successfully deleted 1 keys from OCR.
Successfully accumulated necessary OCR keys.
Using ports: CSS=49895 CRS=49896 EVMC=49898 and EVMR=49897.
node <nodenumber>: <nodename> <private interconnect name> <hostname>
node 1: oradb27 oradb27-priv oradb27
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
clscfg -upgrade completed successfully
Creating '/u01/app/oracle/product/10.2.0.5/crshome_1/install/paramfile.crs' with data used for CRS configuration
Setting CRS configuration values in /u01/app/oracle/product/10.2.0.5/crshome_1/install/paramfile.crs
[root@oradb27 bin]#
Run on node 2:
[root@oradb28 bin]# /u01/app/oracle/product/10.2.0.5/crshome_1/bin/crsctl stop crs
Stopping resources.
Successfully stopped CRS resources
Stopping CSSD.
Shutting down CSS daemon.
Shutdown request successfully issued.
[root@oradb28 bin]# /u01/app/oracle/product/10.2.0.5/crshome_1/install/root102.sh
Creating pre-patch directory for saving pre-patch clusterware files
Completed patching clusterware files to /u01/app/oracle/product/10.2.0.5/crshome_1
Relinking some shared libraries.
Relinking of patched files is complete.
WARNING: directory '/u01/app/oracle/product/10.2.0.5' is not owned by root
WARNING: directory '/u01/app/oracle/product' is not owned by root
WARNING: directory '/u01/app/oracle' is not owned by root
WARNING: directory '/u01/app' is not owned by root
Preparing to recopy patched init and RC scripts.
Recopying init and RC scripts.
Startup will be queued to init within 30 seconds.
Starting up the CRS daemons.
Waiting for the patched CRS daemons to start.
This may take a while on some systems.
.
10205 patch successfully applied.
clscfg: EXISTING configuration version 3 detected.
clscfg: version 3 is 10G Release 2.
Successfully deleted 1 values from OCR.
Successfully deleted 1 keys from OCR.
Successfully accumulated necessary OCR keys.
Using ports: CSS=49895 CRS=49896 EVMC=49898 and EVMR=49897.
node <nodenumber>: <nodename> <private interconnect name> <hostname>
node 2: oradb28 oradb28-priv oradb28
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
clscfg -upgrade completed successfully
Creating '/u01/app/oracle/product/10.2.0.5/crshome_1/install/paramfile.crs' with data used for CRS configuration
Setting CRS configuration values in /u01/app/oracle/product/10.2.0.5/crshome_1/install/paramfile.crs
[root@oradb28 bin]#
The upgrade succeeded. Confirm that the CRS active version is 10.2.0.5 and that the cluster status is normal:
[oracle@oradb27 bin]$ crsctl query crs activeversion
CRS active version on the cluster is [10.2.0.5.0]
[oracle@oradb28 ~]$ crsctl query crs activeversion
CRS active version on the cluster is [10.2.0.5.0]
[oracle@oradb27 ~]$ crs_stat -t -v
Name Type R/RA F/FT Target State Host
----------------------------------------------------------------------
ora....b27.gsd application 0/5 0/0 ONLINE ONLINE oradb27
ora....b27.ons application 0/3 0/0 ONLINE ONLINE oradb27
ora....b27.vip application 0/0 0/0 ONLINE ONLINE oradb27
ora....b28.gsd application 0/5 0/0 ONLINE ONLINE oradb28
ora....b28.ons application 0/3 0/0 ONLINE ONLINE oradb28
ora....b28.vip application 0/0 0/0 ONLINE ONLINE oradb28
[oracle@oradb27 ~]$
This completes the Oracle clusterware installation (10.2.0.1) and upgrade (to 10.2.0.5).