Oracle 10.2 RAC on ESX (Part 4: Installing Oracle RAC on RedHat 4.7)

4. Installing the CRS Software

Upload the Clusterware installation media to the /home/oracle directory on both rac1 and rac2.
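One way to copy the archive over, assuming it sits on a staging machine that can reach both nodes (the staging host and its local path are hypothetical), is scp:

scp 10201_clusterware_linux32.zip oracle@rac1:/home/oracle/

scp 10201_clusterware_linux32.zip oracle@rac2:/home/oracle/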

[root@rac1 ~]# cd /home/oracle

[root@rac1 oracle]# ls

10201_clusterware_linux32.zip Desktop ocfs2 oracleasm

[root@rac1 oracle]# unzip 10201_clusterware_linux32.zip

[root@rac2 oracle]# unzip 10201_clusterware_linux32.zip
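Before running cluvfy, it is worth confirming that passwordless SSH equivalence for the oracle user (configured in an earlier part of this series) still works in both directions, since the user equivalence check below depends on it. Each of the following should print the remote date without prompting for a password:

[oracle@rac1 ~]$ ssh rac2 date

[oracle@rac2 ~]$ ssh rac1 date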

[root@rac1 clusterware]# su - oracle

[oracle@rac1 ~]$ clusterware/cluvfy/runcluvfy.sh stage -pre crsinst -n rac1,rac2 -verbose

Performing pre-checks for cluster services setup

Checking node reachability...

Check: Node reachability from node "rac1"

Destination Node Reachable?

------------------------------------ ------------------------

rac2 yes

rac1 yes

Result: Node reachability check passed from node "rac1".

Checking user equivalence...

Check: User equivalence for user "oracle"

Node Name Comment

------------------------------------ ------------------------

rac2 passed

rac1 passed

Result: User equivalence check passed for user "oracle".

Checking administrative privileges...

Check: Existence of user "oracle"

Node Name User Exists Comment

------------ ------------------------ ------------------------

rac2 yes passed

rac1 yes passed

Result: User existence check passed for "oracle".

Check: Existence of group "oinstall"

Node Name Status Group ID

------------ ------------------------ ------------------------

rac2 exists 1000

rac1 exists 1000

Result: Group existence check passed for "oinstall".

Check: Membership of user "oracle" in group "oinstall" [as Primary]

Node Name User Exists Group Exists User in Group Primary Comment

---------------- ------------ ------------ ------------ ------------ ------------

rac2 yes yes yes yes passed

rac1 yes yes yes yes passed

Result: Membership check for user "oracle" in group "oinstall" [as Primary] passed.

Administrative privileges check passed.

Checking node connectivity...

Interface information for node "rac2"

Interface Name IP Address Subnet

------------------------------ ------------------------------ ----------------

eth0 192.168.0.191 192.168.0.0

eth1 10.10.0.191 10.10.0.0

Interface information for node "rac1"

Interface Name IP Address Subnet

------------------------------ ------------------------------ ----------------

eth0 192.168.0.190 192.168.0.0

eth1 10.10.0.190 10.10.0.0

Check: Node connectivity of subnet "192.168.0.0"

Source Destination Connected?

------------------------------ ------------------------------ ----------------

rac2:eth0 rac1:eth0 yes

Result: Node connectivity check passed for subnet "192.168.0.0" with node(s) rac2,rac1.

Check: Node connectivity of subnet "10.10.0.0"

Source Destination Connected?

------------------------------ ------------------------------ ----------------

rac2:eth1 rac1:eth1 yes

Result: Node connectivity check passed for subnet "10.10.0.0" with node(s) rac2,rac1.

Suitable interfaces for the private interconnect on subnet "192.168.0.0":

rac2 eth0:192.168.0.191

rac1 eth0:192.168.0.190

Suitable interfaces for the private interconnect on subnet "10.10.0.0":

rac2 eth1:10.10.0.191

rac1 eth1:10.10.0.190

ERROR:

Could not find a suitable set of interfaces for VIPs.

Result: Node connectivity check failed.

Checking system requirements for 'crs'...

Check: Total memory

Node Name Available Required Comment

------------ ------------------------ ------------------------ ----------

rac2 1010.25MB (1034496KB) 512MB (524288KB) passed

rac1 1010.25MB (1034496KB) 512MB (524288KB) passed

Result: Total memory check passed.

Check: Free disk space in "/tmp" dir

Node Name Available Required Comment

------------ ------------------------ ------------------------ ----------

rac2 4.27GB (4481496KB) 400MB (409600KB) passed

rac1 4.2GB (4400352KB) 400MB (409600KB) passed

Result: Free disk space check passed.

Check: Swap space

Node Name Available Required Comment

------------ ------------------------ ------------------------ ----------

rac2 2GB (2096472KB) 1GB (1048576KB) passed

rac1 2GB (2096472KB) 1GB (1048576KB) passed

Result: Swap space check passed.

Check: System architecture

Node Name Available Required Comment

------------ ------------------------ ------------------------ ----------

rac2 i686 i686 passed

rac1 i686 i686 passed

Result: System architecture check passed.

Check: Kernel version

Node Name Available Required Comment

------------ ------------------------ ------------------------ ----------

rac2 2.6.9-78.ELsmp 2.4.21-15EL passed

rac1 2.6.9-78.ELsmp 2.4.21-15EL passed

Result: Kernel version check passed.

Check: Package existence for "make-3.79"

Node Name Status Comment

------------------------------ ------------------------------ ----------------

rac2 make-3.80-7.EL4 passed

rac1 make-3.80-7.EL4 passed

Result: Package existence check passed for "make-3.79".

Check: Package existence for "binutils-2.14"

Node Name Status Comment

------------------------------ ------------------------------ ----------------

rac2 binutils-2.15.92.0.2-25 passed

rac1 binutils-2.15.92.0.2-25 passed

Result: Package existence check passed for "binutils-2.14".

Check: Package existence for "gcc-3.2"

Node Name Status Comment

------------------------------ ------------------------------ ----------------

rac2 gcc-3.4.6-10 passed

rac1 gcc-3.4.6-10 passed

Result: Package existence check passed for "gcc-3.2".

Check: Package existence for "glibc-2.3.2-95.27"

Node Name Status Comment

------------------------------ ------------------------------ ----------------

rac2 glibc-2.3.4-2.41 passed

rac1 glibc-2.3.4-2.41 passed

Result: Package existence check passed for "glibc-2.3.2-95.27".

Check: Package existence for "compat-db-4.0.14-5"

Node Name Status Comment

------------------------------ ------------------------------ ----------------

rac2 compat-db-4.1.25-9 passed

rac1 compat-db-4.1.25-9 passed

Result: Package existence check passed for "compat-db-4.0.14-5".

Check: Package existence for "compat-gcc-7.3-2.96.128"

Node Name Status Comment

------------------------------ ------------------------------ ----------------

rac2 missing failed

rac1 missing failed

Result: Package existence check failed for "compat-gcc-7.3-2.96.128".

Check: Package existence for "compat-gcc-c++-7.3-2.96.128"

Node Name Status Comment

------------------------------ ------------------------------ ----------------

rac2 missing failed

rac1 missing failed

Result: Package existence check failed for "compat-gcc-c++-7.3-2.96.128".

Check: Package existence for "compat-libstdc++-7.3-2.96.128"

Node Name Status Comment

------------------------------ ------------------------------ ----------------

rac2 missing failed

rac1 missing failed

Result: Package existence check failed for "compat-libstdc++-7.3-2.96.128".

Check: Package existence for "compat-libstdc++-devel-7.3-2.96.128"

Node Name Status Comment

------------------------------ ------------------------------ ----------------

rac2 missing failed

rac1 missing failed

Result: Package existence check failed for "compat-libstdc++-devel-7.3-2.96.128".

Check: Package existence for "openmotif-2.2.3"

Node Name Status Comment

------------------------------ ------------------------------ ----------------

rac2 openmotif-2.2.3-10.2.el4 passed

rac1 openmotif-2.2.3-10.2.el4 passed

Result: Package existence check passed for "openmotif-2.2.3".

Check: Package existence for "setarch-1.3-1"

Node Name Status Comment

------------------------------ ------------------------------ ----------------

rac2 setarch-1.6-1 passed

rac1 setarch-1.6-1 passed

Result: Package existence check passed for "setarch-1.3-1".

Check: Group existence for "dba"

Node Name Status Comment

------------ ------------------------ ------------------------

rac2 exists passed

rac1 exists passed

Result: Group existence check passed for "dba".

Check: Group existence for "oinstall"

Node Name Status Comment

------------ ------------------------ ------------------------

rac2 exists passed

rac1 exists passed

Result: Group existence check passed for "oinstall".

Check: User existence for "nobody"

Node Name Status Comment

------------ ------------------------ ------------------------

rac2 exists passed

rac1 exists passed

Result: User existence check passed for "nobody".

System requirement failed for 'crs'

Pre-check for cluster services setup was unsuccessful on all the nodes.

The pre-check output above shows two failures:

1. VIPs: this lab uses private (RFC 1918) 192.168.0.x addresses, whereas cluvfy expects routable public addresses for the VIPs. This can be ignored; we will run vipca manually in a later step.

2. Missing RPM packages: the compat-* 2.96 compatibility packages flagged above can also be ignored.
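To double-check which of the flagged packages are genuinely absent before dismissing these failures, you can query them on each node (package names taken from the cluvfy output above); rpm -q reports a missing package as "package ... is not installed":

[root@rac1 ~]# rpm -q compat-gcc compat-gcc-c++ compat-libstdc++ compat-libstdc++-devel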

Reboot rac1 and log in as the oracle user.

[oracle@rac1 ~]$ cd clusterware/

[oracle@rac1 clusterware]$ ls

cluvfy doc install response rpm runInstaller stage upgrade welcome.html

[oracle@rac1 clusterware]$ ./runInstaller
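A note in passing: runInstaller is a graphical tool and needs a reachable X display. If you launch it over SSH rather than at the console and no window appears, use SSH X forwarding (ssh -X oracle@rac1) or export DISPLAY explicitly before retrying; the workstation address below is hypothetical:

[oracle@rac1 clusterware]$ export DISPLAY=192.168.0.1:0.0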

Click Next.


Click Next again.


Change the installation path to /db/oracle/product/10.2.0/crs.


Click Next.


Click Add and add the node rac2.


Select eth0, click Edit, and set eth0 as the public interface.


Specify the OCR location.


Specify the voting disk locations (in this setup /crs/vd1, /crs/vd2, and /crs/vd3, as the root.sh output below confirms).


Click Install to begin the installation.


When prompted, run the two scripts shown in the installer dialog, orainstRoot.sh and root.sh, as the root user on both nodes, rac1 and rac2.


[oracle@rac1 ~]$ su -

Password:

[root@rac1 ~]# /db/oracle/oraInventory/orainstRoot.sh

Changing permissions of /db/oracle/oraInventory to 770.

Changing groupname of /db/oracle/oraInventory to oinstall.

The execution of the script is complete

[root@rac1 ~]# /db/oracle/product/10.2.0/crs/root.sh

WARNING: directory '/db/oracle/product/10.2.0' is not owned by root

WARNING: directory '/db/oracle/product' is not owned by root

WARNING: directory '/db/oracle' is not owned by root

WARNING: directory '/db' is not owned by root

Checking to see if Oracle CRS stack is already configured

/etc/oracle does not exist. Creating it now.

Setting the permissions on OCR backup directory

Setting up NS directories

Oracle Cluster Registry configuration upgraded successfully

WARNING: directory '/db/oracle/product/10.2.0' is not owned by root

WARNING: directory '/db/oracle/product' is not owned by root

WARNING: directory '/db/oracle' is not owned by root

WARNING: directory '/db' is not owned by root

assigning default hostname rac1 for node 1.

assigning default hostname rac2 for node 2.

Successfully accumulated necessary OCR keys.

Using ports: CSS=49895 CRS=49896 EVMC=49898 and EVMR=49897.

node <nodenumber>: <nodename> <private interconnect name> <hostname>

node 1: rac1 rac1-priv rac1

node 2: rac2 rac2-priv rac2

Creating OCR keys for user 'root', privgrp 'root'..

Operation successful.

Now formatting voting device: /crs/vd1

Now formatting voting device: /crs/vd2

Now formatting voting device: /crs/vd3

Format of 3 voting devices complete.

Startup will be queued to init within 90 seconds.

Adding daemons to inittab

Expecting the CRS daemons to be up within 600 seconds.

CSS is active on these nodes.

rac1

CSS is inactive on these nodes.

rac2

Local node checking complete.

Run root.sh on remaining nodes to start CRS daemons.

[oracle@rac2 clusterware]$ su -

Password:

[root@rac2 ~]# /db/oracle/oraInventory/orainstRoot.sh

Changing permissions of /db/oracle/oraInventory to 770.

Changing groupname of /db/oracle/oraInventory to oinstall.

The execution of the script is complete

[root@rac2 ~]# /db/oracle/product/10.2.0/crs/root.sh

WARNING: directory '/db/oracle/product/10.2.0' is not owned by root

WARNING: directory '/db/oracle/product' is not owned by root

WARNING: directory '/db/oracle' is not owned by root

WARNING: directory '/db' is not owned by root

Checking to see if Oracle CRS stack is already configured

/etc/oracle does not exist. Creating it now.

Setting the permissions on OCR backup directory

Setting up NS directories

Oracle Cluster Registry configuration upgraded successfully

WARNING: directory '/db/oracle/product/10.2.0' is not owned by root

WARNING: directory '/db/oracle/product' is not owned by root

WARNING: directory '/db/oracle' is not owned by root

WARNING: directory '/db' is not owned by root

clscfg: EXISTING configuration version 3 detected.

clscfg: version 3 is 10G Release 2.

assigning default hostname rac1 for node 1.

assigning default hostname rac2 for node 2.

Successfully accumulated necessary OCR keys.

Using ports: CSS=49895 CRS=49896 EVMC=49898 and EVMR=49897.

node <nodenumber>: <nodename> <private interconnect name> <hostname>

node 1: rac1 rac1-priv rac1

node 2: rac2 rac2-priv rac2

clscfg: Arguments check out successfully.

NO KEYS WERE WRITTEN. Supply -force parameter to override.

-force is destructive and will destroy any previous cluster

configuration.

Oracle Cluster Registry for cluster has already been initialized

Startup will be queued to init within 90 seconds.

Adding daemons to inittab

Expecting the CRS daemons to be up within 600 seconds.

CSS is active on these nodes.

rac1

rac2

CSS is active on all nodes.

Waiting for the Oracle CRSD and EVMD to start

Oracle CRS stack installed and running under init(1M)

Running vipca(silent) for configuring nodeapps

The given interface(s), "eth0" is not public. Public interfaces should be used to configure virtual IPs.
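This vipca failure at the end of root.sh is the expected consequence of the private addressing noted during the pre-checks: 10gR2 refuses to configure VIPs on an interface whose subnet is non-routable (RFC 1918), so the node applications have to be configured by running vipca manually as root. You can first confirm how the interfaces were registered with oifcfg; the output below is illustrative:

[root@rac2 bin]# ./oifcfg getif

eth0 192.168.0.0 global public

eth1 10.10.0.0 global cluster_interconnect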

[root@rac2 bin]# pwd

/db/oracle/product/10.2.0/crs/bin

[root@rac2 bin]# ./vipca

Click Next.


Select eth0 as the public interface and click Next.


For rac1, enter the virtual IP alias rac1-vip with IP address 192.168.0.100.

For rac2, enter the virtual IP alias rac2-vip with IP address 192.168.0.101.
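vipca needs to resolve these alias names on both nodes, so they are assumed to be present in /etc/hosts (configured in an earlier part of this series), along the lines of:

192.168.0.100 rac1-vip

192.168.0.101 rac2-vip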


Click Finish.


Click OK, then Exit.


Return to rac1 and click OK.


Click Exit.


Click Yes.

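The CRS installation is now complete. As a final check, you can verify that the clusterware stack and the newly created node applications (VIP, GSD, and ONS) are online on both nodes with crs_stat; the listing below illustrates a healthy two-node stack:

[oracle@rac1 ~]$ /db/oracle/product/10.2.0/crs/bin/crs_stat -t

Name Type Target State Host

------------------------------------------------------------

ora.rac1.gsd application ONLINE ONLINE rac1

ora.rac1.ons application ONLINE ONLINE rac1

ora.rac1.vip application ONLINE ONLINE rac1

ora.rac2.gsd application ONLINE ONLINE rac2

ora.rac2.ons application ONLINE ONLINE rac2

ora.rac2.vip application ONLINE ONLINE rac2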
