Installing an Oracle 11gR2 Grid Cluster on RedHat 5.8

1. Install the required software packages via rpm:

rpm -ivh binutils-2*.i386.rpm

rpm -ivh binutils-2*.x86_64.rpm

rpm -ivh compat-libstdc++-33-*i*

rpm -ivh compat-libstdc++-33-x*

rpm -ivh elfutils-libelf-0*.i386.rpm

rpm -ivh elfutils-libelf-0*.x*

rpm -ivh elfutils-libelf-devel-0*.i386.rpm

rpm -ivh elfutils-libelf-devel-0.137-3.el5.x*

rpm -ivh compat-libstdc++-2*.i*.rpm

rpm -ivh compat-libstdc++-3*.i*.rpm

rpm -ivh compat-libstdc++-3*.x*.rpm

rpm -ivh gcc-4*.i*

rpm -ivh gcc-4*.x*

rpm -ivh gcc-4.1.2-52.el5.i386.rpm

rpm -ivh gcc-c++-*.x*.rpm

rpm -ivh gcc-c++-*.i*.rpm

rpm -ivh glibc-2.5-81.i*.rpm

rpm -ivh glibc-2.5-81.x*.rpm

rpm -ivh glibc-common-*.i*.rpm

rpm -ivh glibc-common-*.x*.rpm

rpm -ivh glibc-devel*.i*.rpm

rpm -ivh glibc-devel*.x*.rpm

rpm -ivh glibc-headers*.i*.rpm

rpm -ivh glibc-headers*.x*.rpm

rpm -ivh ksh-*.x*.rpm

rpm -ivh ksh-*.i*.rpm

rpm -ivh pdksh-*.i*.rpm

rpm -ivh pdksh-*.x*.rpm

rpm -ivh libaio-0*.i*.rpm

rpm -ivh libaio-0*.x*.rpm

rpm -ivh libaio-devel*.i*.rpm

rpm -ivh libaio-devel*.x*.rpm

rpm -ivh libgcc-4*.i*.rpm

rpm -ivh libgcc-4*.x*.rpm

rpm -ivh libstdc++-4*.i*.rpm

rpm -ivh libstdc++-4*.x*.rpm

rpm -ivh libstdc++-devel*.i*.rpm

rpm -ivh libstdc++-devel*.x*.rpm

rpm -ivh make-3*.i*.rpm

rpm -ivh make-3*.x*.rpm

rpm -ivh sysstat-*.i*.rpm

rpm -ivh sysstat-*.x*.rpm

rpm -ivh unixODBC-libs-*.i*.rpm

rpm -ivh unixODBC-libs-*.x*.rpm

rpm -ivh unixODBC-2*.i*.rpm

rpm -ivh unixODBC-2*.x*.rpm

rpm -ivh unixODBC-devel*.i*.rpm

rpm -ivh unixODBC-devel*.x*.rpm

--DNS-related packages

rpm -ivh bind-9*.i*.rpm

rpm -ivh bind-9*.x*.rpm

rpm -ivh bind-utils*.i*.rpm

rpm -ivh bind-utils*.x*.rpm

rpm -ivh caching-nameserver*.i*.rpm

rpm -ivh caching-nameserver*.x*.rpm

rpm -ivh system-config-bind*.i*.rpm

rpm -ivh system-config-bind*.rpm

rpm -ivh bind-chroot*.i*.rpm

rpm -ivh bind-chroot*.x*.rpm

--ASM packages (these must be downloaded separately)

rpm -ivh oracleasm-support-2.1.7-1.el5.i386.rpm

rpm -ivh oracleasm-2.6.18-308.el5-2.0.5-1.el5.i686.rpm

rpm -ivh oracleasmlib-2.0.4-1.el5.i386.rpm
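
A quick way to confirm that nothing on the list was missed is to query the packages afterwards; anything still missing shows up in the output (package names taken from the install list above):

rpm -q binutils compat-libstdc++-33 elfutils-libelf elfutils-libelf-devel gcc gcc-c++ glibc glibc-common glibc-devel glibc-headers ksh libaio libaio-devel libgcc libstdc++ libstdc++-devel make sysstat unixODBC unixODBC-devel | grep "is not installed"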

 

2. Change the hostname

[root@localhost Server]# vi /etc/sysconfig/network

NETWORKING=yes

NETWORKING_IPV6=no

HOSTNAME=grid01        # grid02 on the second node
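
The new name takes effect at the next reboot; to apply it to the running system right away, the standard hostname command can be used (grid01 shown here; use grid02 on the second node):

hostname grid01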

 

3. Configure the hosts file

[root@grid01 ~]# cat /etc/hosts

# Do not remove the following line, or various programs

# that require network functionality will fail.

127.0.0.1               localhost.localdomain localhost

 

#public

172.168.9.15            grid01.prudentwoo.com   grid01

172.168.9.16            grid02.prudentwoo.com   grid02

 

#private

10.10.10.1              pri01.prudentwoo.com    pri01

10.10.10.2              pri02.prudentwoo.com    pri02

 

#virtual

172.168.9.21            vip01.prudentwoo.com    vip01

172.168.9.22            vip02.prudentwoo.com    vip02

 

#scan

172.168.9.17            scan.prudentwoo.com     scan

172.168.9.18            scan.prudentwoo.com     scan

172.168.9.19            scan.prudentwoo.com     scan

172.168.9.20            scan.prudentwoo.com     scan

 

 

4. Configure DNS:

4.1 Change into the /var/named/chroot/etc directory

[root@rac01 chroot]# cd /var/named/chroot/etc

List the files in the current directory, then copy named.caching-nameserver.conf to named.conf:

[root@rac01 etc]# ls

localtime  named.caching-nameserver.conf  named.rfc1912.zones  rndc.key                  

[root@rac01 etc]# cp -p named.caching-nameserver.conf named.conf

[root@rac01 etc]# ls

localtime  named.caching-nameserver.conf  named.conf  named.rfc1912.zones  rndc.key

 

Create a symlink to it under /etc/:

[root@localhost etc]# ln -s /var/named/chroot/etc/named.conf /etc/named.conf

4.2 Edit the named.conf file; the modified parts are shown below:

[root@grid01 ~]# ll /etc/named.conf

lrwxrwxrwx 1 root root 32 Dec 24 22:12 /etc/named.conf -> /var/named/chroot/etc/named.conf

[root@rac01 etc]# vi /etc/named.conf

options {

        listen-on port 53 { any; };

        listen-on-v6 port 53 { ::1; };

        directory       "/var/named";

        dump-file      "/var/named/data/cache_dump.db";

        statistics-file "/var/named/data/named_stats.txt";

        memstatistics-file "/var/named/data/named_mem_stats.txt";

 

        // Those options should be used carefully because they disable port

        // randomization

        // query-source    port 53;

        // query-source-v6 port 53;

 

        allow-query     { any; };

        allow-query-cache { any; };

};

logging {

        channel default_debug {

                file"data/named.run";

                severity dynamic;

        };

};

view localhost_resolver {

        match-clients      { any; };

        match-destinations { any; };

        recursion yes;

        include "/etc/named.rfc1912.zones";

};
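
Before restarting the service it is worth syntax-checking the file with named-checkconf, which ships with BIND and prints nothing when the configuration is clean:

named-checkconf /etc/named.conf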

 

4.3 Edit /etc/named.rfc1912.zones and add the following:

zone"prudentwoo.com" IN {

   type master;

   file "prudentwoo.com.zone";

   allow-update { none; };

};

 

zone"9.168.172.in-addr.arpa" IN {

   type master;

   file "172.168.9.zone";

     allow-update { none; };

     allow-transfer {

     172.168.9.17;

     172.168.9.18;

     172.168.9.19;

     172.168.9.20;

     };

};

 

 

4.4 Edit the forward zone file:

[root@grid01 named]# cd /var/named/chroot/var/named

[root@rac01 named]# cp -p localdomain.zone prudentwoo.com.zone

[root@grid01 named]# cp -p named.local 172.168.9.zone

[root@node1 named]# vi prudentwoo.com.zone

 

$TTL    86400

@               IN SOA  prudentwoo.com.          root.prudentwoo.com. (

                                        42              ; serial (d. adams)

                                        3H              ; refresh

                                       15M             ; retry

                                        1W              ; expiry

                                        1D)            ; minimum

                IN NS           prudentwoo.com.

scan            IN A            172.168.9.17

scan            IN A            172.168.9.18

scan            IN A            172.168.9.19

scan            IN A            172.168.9.20

 

4.5 Edit the reverse zone file:

[root@node1 named]# more 172.168.9.zone

 

$TTL    86400

@       IN     SOA       9.168.172.in-addr.arpa.     root.prudentwoo.com. (

                                     1997022700 ; Serial

                                     28800      ; Refresh

                                      14400      ; Retry

                                     3600000    ; Expire

                                      86400)    ; Minimum

@               IN      NS     prudentwoo.com.

17              IN      PTR    scan.prudentwoo.com.

18              IN      PTR    scan.prudentwoo.com.

19              IN      PTR    scan.prudentwoo.com.

20              IN      PTR    scan.prudentwoo.com.
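
Both zone files can be validated with named-checkzone (also part of BIND) before the service is restarted; for example:

named-checkzone prudentwoo.com /var/named/chroot/var/named/prudentwoo.com.zone

named-checkzone 9.168.172.in-addr.arpa /var/named/chroot/var/named/172.168.9.zone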

 

4.6 Add the DNS server addresses. This must be configured on both nodes; on node 2, list the nameservers in the reverse order.

[root@rac01 ~]# vi /etc/resolv.conf

#search localdomain

search prudentwoo.com

nameserver 172.168.9.15

nameserver 172.168.9.16

 

4.7 Create the symlinks

[root@grid01 named]# ln -s /var/named/chroot/var/named/172.168.9.zone /var/named/172.168.9.zone

[root@grid01 named]# ln -s /var/named/chroot/var/named/prudentwoo.com.zone /var/named/prudentwoo.com.zone

 

[root@grid01 named]# ll /var/named/

total 16

lrwxrwxrwx 1 root root    42 Dec 24 21:19 172.168.9.zone -> /var/named/chroot/var/named/172.168.9.zone

drwxr-x--- 6 root named 4096 Dec 24 19:38 chroot

drwxrwx--- 2 named named 4096 Dec  2  2011 data

lrwxrwxrwx 1 root named   44 Dec 24 19:19 localdomain.zone -> /var/named/chroot/var/named/localdomain.zone

lrwxrwxrwx 1 root named   42 Dec 24 19:19 localhost.zone -> /var/named/chroot/var/named/localhost.zone

lrwxrwxrwx 1 root named   43 Dec 24 19:19 named.broadcast -> /var/named/chroot/var/named/named.broadcast

lrwxrwxrwx 1 root named   36 Dec 24 19:19 named.ca -> /var/named/chroot/var/named/named.ca

-rw-r----- 1 root root  1206 Dec 24 20:55 named.conf

lrwxrwxrwx 1 root named   43 Dec 24 19:19 named.ip6.local -> /var/named/chroot/var/named/named.ip6.local

lrwxrwxrwx 1 root named   39 Dec 24 19:19 named.local -> /var/named/chroot/var/named/named.local

lrwxrwxrwx 1 root named   38 Dec 24 19:19 named.zero -> /var/named/chroot/var/named/named.zero

lrwxrwxrwx 1 root root    47 Dec 24 21:20 prudentwoo.com.zone -> /var/named/chroot/var/named/prudentwoo.com.zone

drwxrwx--- 2 named named 4096 Dec  2  2011 slaves

 

4.8 Restart the DNS service

If named fails to start with output like the following, check the messages log:

[root@rac01 log]# service named restart

Stopping named: [ OK  ]

Starting named: [FAILED]

[root@rac01 log]# cd /var/log

[root@rac01 log]# pwd

/var/log

[root@rac01 log]# tail -f messages

Nov  1 00:46:01 localhost named[29489]: loading configuration: permission denied

Nov  1 00:46:01 localhost named[29489]: exiting (due to fatal error)

Nov  1 00:48:58 localhost named[29564]: starting BIND 9.3.6-P1-RedHat-9.3.6-20.P1.el5 -u named -t /var/named/chroot

Nov  1 00:48:58 localhost named[29564]: adjusted limit on open files from 1024 to 1048576

Nov  1 00:48:58 localhost named[29564]: found 1 CPU, using 1 worker thread

Nov  1 00:48:58 localhost named[29564]: using up to 4096 sockets

Nov  1 00:48:58 localhost named[29564]: loading configuration from '/etc/named.conf'

Nov  1 00:48:58 localhost named[29564]: none:0: open: /etc/named.conf: permission denied

Nov  1 00:48:58 localhost named[29564]: loading configuration: permission denied

Nov  1 00:48:58 localhost named[29564]: exiting (due to fatal error)

 

Fix the ownership of named.conf, then restart:

[root@rac01 log]# ls -al/var/named/chroot/etc/named.conf

-rw-r----- 1 root root 1206 Nov  1 00:07 /var/named/chroot/etc/named.conf

 

[root@rac01 log]# chown root:named /var/named/chroot/etc/named.c

named.caching-nameserver.conf  named.conf                    

 

[root@rac01 log]# chown root:named /var/named/chroot/etc/named.conf

 

Restart again:

[root@rac01 log]# service named restart

Stopping named: [ OK  ]

Starting named: [ OK  ]

[root@rac01 log]#

 

 

4.9 Verify DNS. It is working once the lookup round-robins through the SCAN addresses. (The sample output below was captured against the SCAN name node-scan; in this install the name is scan.prudentwoo.com.)

[root@rac01 named]# nslookup node-scan.prudentwoo.com

Server:        172.168.9.15

Address:       172.168.9.15#53

 

Name:  node-scan.prudentwoo.com

Address: 172.168.9.21

Name:  node-scan.prudentwoo.com

Address: 172.168.9.22

Name:  node-scan.prudentwoo.com

Address: 172.168.9.23

Name:  node-scan.prudentwoo.com

Address: 172.168.9.24

Name:  node-scan.prudentwoo.com

Address: 172.168.9.20

 

4.10 Verify reverse resolution. Only .17 through .20 have PTR records in the zone, so NXDOMAIN for .15 and .16 is expected:

[root@grid01 named]# nslookup 172.168.9.15

Server:        172.168.9.15

Address:       172.168.9.15#53

 

** server can't find 15.9.168.172.in-addr.arpa.: NXDOMAIN

 

You have new mail in /var/spool/mail/root

[root@grid01 named]# nslookup 172.168.9.16

Server:        172.168.9.15

Address:       172.168.9.15#53

 

** server can't find 16.9.168.172.in-addr.arpa.: NXDOMAIN

 

[root@grid01 named]# nslookup 172.168.9.17

Server:        172.168.9.15

Address:       172.168.9.15#53

 

17.9.168.172.in-addr.arpa       name = scan.prudentwoo.com.

 

[root@grid01 named]# nslookup 172.168.9.18

Server:        172.168.9.15

Address:       172.168.9.15#53

 

18.9.168.172.in-addr.arpa       name = scan.prudentwoo.com.

 

[root@grid01 named]# nslookup 172.168.9.19

Server:         172.168.9.15

Address:       172.168.9.15#53

 

19.9.168.172.in-addr.arpa       name = scan.prudentwoo.com.

 

[root@grid01 named]# nslookup 172.168.9.20

 

5. Enable the NTP and DNS services at boot

ntsysv

or: chkconfig named on ; chkconfig ntpd on

Check which services are set to start automatically:

chkconfig --list

6. Create the users and groups:

/usr/sbin/groupadd -g 502 dba

/usr/sbin/groupadd -g 501 oinstall

/usr/sbin/groupadd -g 504 asmadmin

/usr/sbin/groupadd -g 506 asmdba

/usr/sbin/groupadd -g 507 asmoper

/usr/sbin/useradd -u 501 -g oinstall -G asmadmin,asmdba,asmoper grid

/usr/sbin/useradd -u 502 -g oinstall -G dba,asmdba oracle

[root@rac01 named]# passwd grid

[root@rac01 named]# passwd oracle

[root@rac01 named]#
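
It is worth confirming the memberships with id on both nodes, since UIDs and group sets that differ between nodes will fail the cluster verification later. Given the IDs above, the expected output is roughly:

[root@rac01 named]# id grid

uid=501(grid) gid=501(oinstall) groups=501(oinstall),504(asmadmin),506(asmdba),507(asmoper)

[root@rac01 named]# id oracle

uid=502(oracle) gid=501(oinstall) groups=501(oinstall),502(dba),506(asmdba)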

 

7. Create the directories:

Oracle inventory directory

mkdir -p /app/oraInventory

chown -R grid:oinstall /app/oraInventory

chmod -R 775 /app/oraInventory

 

Grid installation directory

mkdir -p /app/grid

mkdir -p /app/11.2.0/grid

chown -R grid:oinstall /app/11.2.0/grid

chmod -R 775 /app/11.2.0/grid

 

Oracle installation directory

mkdir -p /app/oracle/product/11.2.0/db_1

mkdir -p /app/oracle/cfgtoollogs

chown -R oracle:oinstall /app/oracle

chmod -R 775 /app/oracle

 

8. Modify the system parameters by appending the following entries

 

8.1 Kernel parameters:

vi /etc/sysctl.conf

kernel.shmall = 2097152

kernel.sem = 250 32000 100 128

kernel.shmmni = 4096

#kernel.shmmax = 8589934592

fs.file-max = 6815744

net.ipv4.ip_local_port_range = 9000 65500

net.core.rmem_default = 262144

net.core.rmem_max = 4194304

net.core.wmem_default = 1048576

net.core.wmem_max = 1048576

fs.aio-max-nr = 1048576
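
Load the new parameters into the running kernel without a reboot:

[root@rac01 ~]# sysctl -p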

 

8.2 Shell resource limits:

Edit /etc/security/limits.conf:

*         soft    nproc           2047

*         hard    nproc           16384

*         soft    nofile          1024

*         hard    nofile          65536

*         soft    memlock         3145728

*         hard    memlock         3145728
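
The limits apply at the next login; they can be spot-checked per user, for example:

su - oracle -c 'ulimit -n -u'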

 

8.3 Edit the oracle user's environment variables, adding the following:

[root@rac01 ~]# su - oracle

[oracle@rac01 ~]$ vi .bash_profile

# .bash_profile

 

# Get the aliases and functions

if [ -f ~/.bashrc ]; then

        . ~/.bashrc

fi

 

# User specific environment and startup programs

 

PATH=$PATH:$HOME/bin

 

export PATH

 

# Oracle Settings

 

TMP=/tmp; export TMP

export TMPDIR=$TMP;

export ORACLE_HOSTNAME=grid01;

export ORACLE_UNQNAME=woo;

export ORACLE_BASE=/app/oracle;

export ORACLE_HOME=$ORACLE_BASE/product/11.2.0/db_1;

export ORACLE_SID=woo1;

export ORACLE_TERM=xterm;

export PATH=/usr/sbin:$PATH;

export PATH=$ORACLE_HOME/bin:$PATH;

export LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib;

export CLASSPATH=$ORACLE_HOME/JRE:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib;

umask 022

 

8.4 Edit the grid user's environment variables, adding the following:

[root@rac01 ~]# su - grid

[grid@rac01 ~]$ vi .bash_profile

# .bash_profile

 

# Get the aliases and functions

if [ -f ~/.bashrc ]; then

        . ~/.bashrc

fi

 

# User specific environment and startup programs

 

PATH=$PATH:$HOME/bin

export PATH

 

export TMP=/tmp;

export TMPDIR=$TMP;

export ORACLE_SID=+ASM1;

export ORACLE_BASE=/app/grid;

export ORACLE_HOME=/app/11.2.0/grid;

export PATH=$ORACLE_HOME/bin:$PATH;

umask 022

 

8.5 Edit /etc/pam.d/login and add:

session    required     pam_limits.so

 

Edit /etc/profile and add:

[root@rac01 ~]# vi /etc/profile

if [ $USER = "oracle" ] || [ $USER = "grid" ]; then

        if [ $SHELL = "/bin/ksh" ]; then

                ulimit -p 16384

                ulimit -n 65536

        else

                ulimit -u 16384 -n 65536

        fi

        umask 022

fi

9. Stop NTP time synchronization (a new prerequisite check in 11g)

/etc/init.d/ntpd stop

chkconfig ntpd off

mv /etc/ntp.conf /etc/ntp.conf.org

If NTP is to be used instead, the NTP daemon options must be adjusted:

Edit /etc/sysconfig/ntpd as follows:

OPTIONS="-x -u ntp:ntp -p /var/run/ntpd.pid"

Then restart the ntp service:
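
service ntpd restart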

 

Change the host name lookup order so that DNS is consulted first:

[root@rac01 named]# vi /etc/nsswitch.conf

hosts:     dns files nis

 

Restart the nscd service after the change:

[root@rac01 named]# service nscd restart

Stopping nscd: [FAILED]

Starting nscd: [  OK  ]

 

Create the disk partitions

OCR  8G

DATA  20G

FRA   20G

 

[root@rac01 ~]# fdisk -l

 

Disk /dev/sda: 21.4 GB, 21474836480 bytes

255 heads, 63 sectors/track, 2610 cylinders

Units = cylinders of 16065 * 512 = 8225280 bytes

 

   Device Boot      Start         End      Blocks   Id  System

/dev/sda1  *           1          38      305203+ 83  Linux

/dev/sda2              39        1201    9341797+  83  Linux

/dev/sda3           1202        2245     8385930  83  Linux

/dev/sda4           2246        2610     2931862+  5  Extended

/dev/sda5           2246        2506     2096451  82  Linux swap / Solaris

 

Disk /dev/sdb: 8589 MB, 8589934592 bytes

255 heads, 63 sectors/track, 1044 cylinders

Units = cylinders of 16065 * 512 = 8225280 bytes

 

Disk /dev/sdb doesn't contain a valid partition table

 

Disk /dev/sdc: 21.4 GB, 21474836480 bytes

255 heads, 63 sectors/track, 2610 cylinders

Units = cylinders of 16065 * 512 = 8225280 bytes

 

Disk /dev/sdc doesn't contain a valid partition table

 

Disk /dev/sdd: 21.4 GB, 21474836480 bytes

255 heads, 63 sectors/track, 2610 cylinders

Units = cylinders of 16065 * 512 = 8225280 bytes

 

Disk /dev/sdd doesn't contain a valid partition table

 

[root@rac01 ~]# fdisk /dev/sdb

Device contains neither a valid DOS partition table, nor Sun, SGI or OSF disklabel

Building a new DOS disklabel. Changes will remain in memory only,

until you decide to write them. After that, of course, the previous

content won't be recoverable.

 

 

Command (m for help): n

Command action

   e   extended

   p   primary partition (1-4)

p

Partition number (1-4): 1

First cylinder (1-1044, default 1):

Using default value 1

Last cylinder or +size or +sizeM or +sizeK (1-1044, default 1044):

Using default value 1044

 

Command (m for help): w

The partition table has been altered!

 

Calling ioctl() to re-read partition table.

Syncing disks.

 

[root@rac01 ~]# fdisk /dev/sdc

Device contains neither a valid DOS partition table, nor Sun, SGI or OSF disklabel

Building a new DOS disklabel. Changes will remain in memory only,

until you decide to write them. After that, of course, the previous

content won't be recoverable.

 

 

The number of cylinders for this disk is set to 2610.

There is nothing wrong with that, but this is larger than 1024,

and could in certain setups cause problems with:

1) software that runs at boot time (e.g., old versions of LILO)

2) booting and partitioning software from other OSs

   (e.g., DOS FDISK, OS/2 FDISK)

Warning: invalid flag 0x0000 of partition table 4 will be corrected by w(rite)

 

Command (m for help): n

Command action

   e   extended

   p   primary partition (1-4)

p

Partition number (1-4): 1

First cylinder (1-2610, default 1):

Using default value 1

Last cylinder or +size or +sizeM or +sizeK (1-2610, default 2610):

Using default value 2610

 

Command (m for help): w

The partition table has been altered!

 

Calling ioctl() to re-read partition table.

Syncing disks.

 

[root@rac01 ~]# fdisk /dev/sdd

Device contains neither a valid DOS partition table, nor Sun, SGI or OSF disklabel

Building a new DOS disklabel. Changes will remain in memory only,

until you decide to write them. After that, of course, the previous

content won't be recoverable.

 

 

The number of cylinders for this disk is set to 2610.

There is nothing wrong with that, but this is larger than 1024,

and could in certain setups cause problems with:

1) software that runs at boot time (e.g., old versions of LILO)

2) booting and partitioning software from other OSs

   (e.g., DOS FDISK, OS/2 FDISK)

Warning: invalid flag 0x0000 of partition table 4 will be corrected by w(rite)

 

Command (m for help): n

Command action

   e   extended

   p   primary partition (1-4)

p

Partition number (1-4): 1

First cylinder (1-2610, default 1):

Using default value 1

Last cylinder or +size or +sizeM or +sizeK (1-2610, default 2610):

Using default value 2610

 

Command (m for help): w

The partition table has been altered!

 

Calling ioctl() to re-read partition table.

Syncing disks.

[root@rac01 ~]# fdisk -l

 

Disk /dev/sda: 21.4 GB, 21474836480 bytes

255 heads, 63 sectors/track, 2610 cylinders

Units = cylinders of 16065 * 512 = 8225280 bytes

 

   Device Boot      Start         End      Blocks   Id  System

/dev/sda1  *           1          38      305203+ 83  Linux

/dev/sda2              39        1201    9341797+  83  Linux

/dev/sda3           1202        2245     8385930  83  Linux

/dev/sda4           2246        2610     2931862+  5  Extended

/dev/sda5           2246        2506     2096451  82  Linux swap / Solaris

 

Disk /dev/sdb: 8589 MB, 8589934592 bytes

255 heads, 63 sectors/track, 1044 cylinders

Units = cylinders of 16065 * 512 = 8225280 bytes

 

   Device Boot      Start         End      Blocks   Id  System

/dev/sdb1               1        1044    8385898+  83  Linux

 

Disk /dev/sdc: 21.4 GB, 21474836480 bytes

255 heads, 63 sectors/track, 2610 cylinders

Units = cylinders of 16065 * 512 = 8225280 bytes

 

   Device Boot      Start         End      Blocks   Id  System

/dev/sdc1               1        2610   20964793+  83  Linux

 

Disk /dev/sdd: 21.4 GB, 21474836480 bytes

255 heads, 63 sectors/track, 2610 cylinders

Units = cylinders of 16065 * 512 = 8225280 bytes

 

   Device Boot      Start         End      Blocks   Id  System

/dev/sdd1               1        2610   20964793+  83  Linux

[root@rac01 ~]#
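
These partitions were created on the first node. Because the LUNs are shared, the second node has to re-read the partition tables before it can see sdb1, sdc1 and sdd1; a reboot works, or (assuming the parted package is installed and the device names are the same on both nodes) partprobe can do it in place:

[root@rac02 ~]# partprobe /dev/sdb /dev/sdc /dev/sdd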

 

 

Install the ASMLib packages

[root@grid02 asm]# ls

oracleasm-2.6.18-308.el5-2.0.5-1.el5.i686.rpm  oracleasmlib-2.0.4-1.el5.i386.rpm  oracleasm-support-2.1.7-1.el5.i386.rpm

[root@grid02 asm]# rpm -ivh oracleasm-support-2.1.7-1.el5.i386.rpm

warning: oracleasm-support-2.1.7-1.el5.i386.rpm: Header V3 DSA signature: NOKEY, key ID 1e5e0159

Preparing...                ########################################### [100%]

   1:oracleasm-support      ########################################### [100%]

[root@grid02 asm]# rpm -ivh oracleasm-2.6.18-308.el5-2.0.5-1.el5.i686.rpm

warning: oracleasm-2.6.18-308.el5-2.0.5-1.el5.i686.rpm: Header V3 DSA signature: NOKEY, key ID 1e5e0159

Preparing...                ########################################### [100%]

   1:oracleasm-2.6.18-308.el########################################### [100%]

[root@grid02 asm]# rpm -ivh oracleasmlib-2.0.4-1.el5.i386.rpm

warning: oracleasmlib-2.0.4-1.el5.i386.rpm: Header V3 DSA signature: NOKEY, key ID 1e5e0159

Preparing...                ########################################### [100%]

   1:oracleasmlib           ########################################### [100%]

 

 

Configure ASMLib

[root@grid01 grid]# /etc/init.d/oracleasm configure

Configuring the Oracle ASM library driver.

 

This will configure the on-boot properties of the Oracle ASM library

driver.  The following questions will determine whether the driver is

loaded on boot and what permissions it will have.  The current values

will be shown in brackets ('[]').  Hitting <ENTER> without typing an

answer will keep that current value.  Ctrl-C will abort.

 

Default user to own the driver interface []: grid

Default group to own the driver interface []: asmadmin

Start Oracle ASM library driver on boot (y/n) [n]: y

Scan for Oracle ASM disks on boot (y/n) [y]: y

Writing Oracle ASM library driver configuration: done

Initializing the Oracle ASMLib driver: [  OK  ]

Scanning the system for Oracle ASMLib disks: [  OK  ]

 

Create the ASM disks (on one node only):

/usr/sbin/oracleasm createdisk OCR_VOTE1 /dev/sdb1

/usr/sbin/oracleasm createdisk OCR_VOTE2 /dev/sdc1

/usr/sbin/oracleasm createdisk OCR_VOTE3 /dev/sdd1

/usr/sbin/oracleasm createdisk DATA001 /dev/sde1

/usr/sbin/oracleasm createdisk ARC001 /dev/sdf1

/usr/sbin/oracleasm createdisk FRA001 /dev/sdg1
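
Note that DATA001, ARC001 and FRA001 assume three additional shared disks (sde, sdf and sdg here) partitioned the same way as above. Each label can be spot-checked right after creation; querydisk reports whether a name maps to a valid ASM disk:

/usr/sbin/oracleasm querydisk OCR_VOTE1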

 

On the other node, scan for and list the disks:

[root@grid02 ~]# /etc/init.d/oracleasm scandisks

Scanning the system for Oracle ASMLib disks: [  OK  ]

[root@grid02 ~]# /etc/init.d/oracleasm listdisks

ARC001

DATA001

FRA001

OCR_VOTE1

OCR_VOTE2

OCR_VOTE3

 

Run the pre-installation verification:

./runcluvfy.sh stage -pre crsinst -n rac01,rac02 -fixup -verbose

 

./runcluvfy.sh comp nodecon -n rac01,rac02 -verbose

 

 

When the Grid installation prompts for it, run the root scripts:

[root@grid01 /]# /app/oraInventory/orainstRoot.sh

Changing permissions of /app/oraInventory.

Adding read,write permissions for group.

Removing read,write,execute permissions for world.

 

Changing groupname of /app/oraInventory to oinstall.

The execution of the script is complete.

 

[root@grid01 /]# /app/11.2.0/grid/root.sh

Running Oracle 11g root.sh script...

 

The following environment variables are set as:

   ORACLE_OWNER= grid

   ORACLE_HOME=  /app/11.2.0/grid

 

Enter the full pathname of the local bin directory: [/usr/local/bin]:

   Copying dbhome to /usr/local/bin ...

   Copying oraenv to /usr/local/bin ...

   Copying coraenv to /usr/local/bin ...

 

 

Creating /etc/oratab file...

Entries will be added to the /etc/oratab file as needed by

Database Configuration Assistant when a database is created

Finished running generic part of root.sh script.

Now product-specific root actions will be performed.

2012-12-26 23:38:51: Parsing the host name

2012-12-26 23:38:51: Checking for super user privileges

2012-12-26 23:38:51: User has super user privileges

Using configuration parameter file: /app/11.2.0/grid/crs/install/crsconfig_params

Creating trace directory

LOCAL ADD MODE

Creating OCR keys for user 'root', privgrp 'root'..

Operation successful.

  root wallet

  root wallet cert

  root cert export

  peer wallet

  profile reader wallet

  pa wallet

  peer wallet keys

  pa wallet keys

  peer cert request

  pa cert request

  peer cert

  pa cert

  peer root cert TP

  profile reader root cert TP

  pa root cert TP

  peer pa cert TP

  pa peer cert TP

  profile reader pa cert TP

  profile reader peer cert TP

  peer user cert

  pa user cert

Adding daemon to inittab

CRS-4123: Oracle High Availability Services has been started.

ohasd is starting

CRS-2672: Attempting to start 'ora.gipcd' on 'grid01'

CRS-2672: Attempting to start 'ora.mdnsd' on 'grid01'

CRS-2676: Start of 'ora.mdnsd' on 'grid01' succeeded

CRS-2676: Start of 'ora.gipcd' on 'grid01' succeeded

CRS-2672: Attempting to start 'ora.gpnpd' on 'grid01'

CRS-2676: Start of 'ora.gpnpd' on 'grid01' succeeded

CRS-2672: Attempting to start 'ora.cssdmonitor' on 'grid01'

CRS-2676: Start of 'ora.cssdmonitor' on 'grid01' succeeded

CRS-2672: Attempting to start 'ora.cssd' on 'grid01'

CRS-2672: Attempting to start 'ora.diskmon' on 'grid01'

CRS-2676: Start of 'ora.diskmon' on 'grid01' succeeded

CRS-2676: Start of 'ora.cssd' on 'grid01' succeeded

CRS-2672: Attempting to start 'ora.ctssd' on 'grid01'

CRS-2676: Start of 'ora.ctssd' on 'grid01' succeeded

 

ASM created and started successfully.

 

DiskGroup OCR_VOTE created successfully.

 

clscfg: -install mode specified

Successfully accumulated necessary OCR keys.

Creating OCR keys for user 'root', privgrp 'root'..

Operation successful.

CRS-2672: Attempting to start 'ora.crsd' on 'grid01'

CRS-2676: Start of 'ora.crsd' on 'grid01' succeeded

CRS-4256: Updating the profile

Successful addition of voting disk 2b34943383f64f00bf2e91653dd35ff4.

Successful addition of voting disk 2c1bbab1e3054f81bf528e9630333192.

Successful addition of voting disk 88252b92194c4f62bfe070bb314bd171.

Successfully replaced voting disk group with +OCR_VOTE.

CRS-4256: Updating the profile

CRS-4266: Voting file(s) successfully replaced

##  STATE    File Universal Id                File Name          Disk group

--  -----    -----------------                ---------          ----------

 1. ONLINE   2b34943383f64f00bf2e91653dd35ff4 (ORCL:OCR_VOTE3)  [OCR_VOTE]

 2. ONLINE   2c1bbab1e3054f81bf528e9630333192 (ORCL:OCR_VOTE2)  [OCR_VOTE]

 3. ONLINE   88252b92194c4f62bfe070bb314bd171 (ORCL:OCR_VOTE1)  [OCR_VOTE]

Located 3 voting disk(s).

CRS-2673: Attempting to stop 'ora.crsd' on 'grid01'

CRS-2677: Stop of 'ora.crsd' on 'grid01' succeeded

CRS-2673: Attempting to stop 'ora.asm' on 'grid01'

CRS-2677: Stop of 'ora.asm' on 'grid01' succeeded

CRS-2673: Attempting to stop 'ora.ctssd' on 'grid01'

CRS-2677: Stop of 'ora.ctssd' on 'grid01' succeeded

CRS-2673: Attempting to stop 'ora.cssdmonitor' on 'grid01'

CRS-2677: Stop of 'ora.cssdmonitor' on 'grid01' succeeded

CRS-2673: Attempting to stop 'ora.cssd' on 'grid01'

CRS-2677: Stop of 'ora.cssd' on 'grid01' succeeded

CRS-2673: Attempting to stop 'ora.gpnpd' on 'grid01'

CRS-2677: Stop of 'ora.gpnpd' on 'grid01' succeeded

CRS-2673: Attempting to stop 'ora.gipcd' on 'grid01'

CRS-2677: Stop of 'ora.gipcd' on 'grid01' succeeded

CRS-2673: Attempting to stop 'ora.mdnsd' on 'grid01'

CRS-2677: Stop of 'ora.mdnsd' on 'grid01' succeeded

CRS-2672: Attempting to start 'ora.mdnsd' on 'grid01'

CRS-2676: Start of 'ora.mdnsd' on 'grid01' succeeded

CRS-2672: Attempting to start 'ora.gipcd' on 'grid01'

CRS-2676: Start of 'ora.gipcd' on 'grid01' succeeded

CRS-2672: Attempting to start 'ora.gpnpd' on 'grid01'

CRS-2676: Start of 'ora.gpnpd' on 'grid01' succeeded

CRS-2672: Attempting to start 'ora.cssdmonitor' on 'grid01'

CRS-2676: Start of 'ora.cssdmonitor' on 'grid01' succeeded

CRS-2672: Attempting to start 'ora.cssd' on 'grid01'

CRS-2672: Attempting to start 'ora.diskmon' on 'grid01'

CRS-2676: Start of 'ora.diskmon' on 'grid01' succeeded

CRS-2676: Start of 'ora.cssd' on 'grid01' succeeded

CRS-2672: Attempting to start 'ora.ctssd' on 'grid01'

CRS-2676: Start of 'ora.ctssd' on 'grid01' succeeded

CRS-2672: Attempting to start 'ora.asm' on 'grid01'

CRS-2676: Start of 'ora.asm' on 'grid01' succeeded

CRS-2672: Attempting to start 'ora.crsd' on 'grid01'

CRS-2676: Start of 'ora.crsd' on 'grid01' succeeded

CRS-2672: Attempting to start 'ora.evmd' on 'grid01'

CRS-2676: Start of 'ora.evmd' on 'grid01' succeeded

CRS-2672: Attempting to start 'ora.asm' on 'grid01'

CRS-2676: Start of 'ora.asm' on 'grid01' succeeded

CRS-2672: Attempting to start 'ora.OCR_VOTE.dg' on 'grid01'

CRS-2676: Start of 'ora.OCR_VOTE.dg' on 'grid01' succeeded

CRS-2672: Attempting to start 'ora.registry.acfs' on 'grid01'

CRS-2676: Start of 'ora.registry.acfs' on 'grid01' succeeded

 

grid01    2012/12/26 23:49:34    /app/11.2.0/grid/cdata/grid01/backup_20121226_234934.olr

Preparing packages for installation...

cvuqdisk-1.0.7-1

Configure Oracle Grid Infrastructure for a Cluster... succeeded

Updating inventory properties for clusterware

Starting Oracle Universal Installer...

 

Checking swap space: must be greater than 500 MB.   Actual 2047 MB    Passed

The inventory pointer is located at /etc/oraInst.loc

The inventory is located at /app/oraInventory

'UpdateNodeList' was successful.

 

 

 

On node 2, run the same scripts:

[root@grid02 ~]# /app/oraInventory/orainstRoot.sh

Changing permissions of /app/oraInventory.

Adding read,write permissions for group.

Removing read,write,execute permissions for world.

 

Changing groupname of /app/oraInventory to oinstall.

The execution of the script is complete.

 

[root@grid02 ~]# /app/11.2.0/grid/root.sh

Running Oracle 11g root.sh script...

 

The following environment variables are set as:

   ORACLE_OWNER= grid

   ORACLE_HOME=  /app/11.2.0/grid

 

Enter the full pathname of the local bin directory: [/usr/local/bin]:

   Copying dbhome to /usr/local/bin ...

   Copying oraenv to /usr/local/bin ...

   Copying coraenv to /usr/local/bin ...

 

 

Creating /etc/oratab file...

Entries will be added to the /etc/oratab file as needed by

Database Configuration Assistant when a database is created

Finished running generic part of root.sh script.

Now product-specific root actions will be performed.

2012-12-26 23:52:45: Parsing the host name

2012-12-26 23:52:45: Checking for super user privileges

2012-12-26 23:52:45: User has super user privileges

Using configuration parameter file: /app/11.2.0/grid/crs/install/crsconfig_params

Creating trace directory

LOCAL ADD MODE

Creating OCR keys for user 'root', privgrp 'root'..

Operation successful.

Adding daemon to inittab

CRS-4123: Oracle High Availability Services has been started.

ohasd is starting

CRS-4402: The CSS daemon was started in exclusive mode but found an active CSS daemon on node grid01, number 1, and is terminating

An active cluster was found during exclusive startup, restarting to join the cluster

CRS-2672: Attempting to start 'ora.mdnsd' on 'grid02'

CRS-2676: Start of 'ora.mdnsd' on 'grid02' succeeded

CRS-2672: Attempting to start 'ora.gipcd' on 'grid02'

CRS-2676: Start of 'ora.gipcd' on 'grid02' succeeded

CRS-2672: Attempting to start 'ora.gpnpd' on 'grid02'

CRS-2676: Start of 'ora.gpnpd' on 'grid02' succeeded

CRS-2672: Attempting to start 'ora.cssdmonitor' on 'grid02'

CRS-2676: Start of 'ora.cssdmonitor' on 'grid02' succeeded

CRS-2672: Attempting to start 'ora.cssd' on 'grid02'

CRS-2672: Attempting to start 'ora.diskmon' on 'grid02'

CRS-2676: Start of 'ora.diskmon' on 'grid02' succeeded

CRS-2676: Start of 'ora.cssd' on 'grid02' succeeded

CRS-2672: Attempting to start 'ora.ctssd' on 'grid02'

CRS-2676: Start of 'ora.ctssd' on 'grid02' succeeded

CRS-2672: Attempting to start 'ora.drivers.acfs' on 'grid02'

CRS-2676: Start of 'ora.drivers.acfs' on 'grid02' succeeded

CRS-2672: Attempting to start 'ora.asm' on 'grid02'

CRS-2676: Start of 'ora.asm' on 'grid02' succeeded

CRS-2672: Attempting to start 'ora.crsd' on 'grid02'

CRS-2676: Start of 'ora.crsd' on 'grid02' succeeded

CRS-2672: Attempting to start 'ora.evmd' on 'grid02'

CRS-2676: Start of 'ora.evmd' on 'grid02' succeeded

 

grid02    2012/12/26 23:58:32    /app/11.2.0/grid/cdata/grid02/backup_20121226_235832.olr

Preparing packages for installation...

cvuqdisk-1.0.7-1

Configure Oracle Grid Infrastructure for a Cluster... succeeded

Updating inventory properties for clusterware

Starting Oracle Universal Installer...

 

Checking swap space: must be greater than 500 MB.   Actual 2047 MB    Passed

The inventory pointer is located at /etc/oraInst.loc

The inventory is located at /app/oraInventory

'UpdateNodeList' was successful.

[root@grid02 ~]#

 

 

Check the cluster status; if anything is wrong, proceed as follows:

[grid@grid01 ~]$ crsctl check cluster -all

 

 

On the second node, deconfigure and re-run the root script:

/app/11.2.0/grid/crs/install/rootcrs.pl -verbose -deconfig -force

 

/app/11.2.0/grid/root.sh

 

 

Check the status:

[grid@grid01 ~]$ crsctl check cluster -all

**************************************************************

grid01:

CRS-4537: Cluster Ready Services is online

CRS-4529: Cluster Synchronization Services is online

CRS-4533: Event Manager is online

**************************************************************

grid02:

CRS-4537: Cluster Ready Services is online

CRS-4529: Cluster Synchronization Services isonline

CRS-4533: Event Manager is online

 

[grid@grid02 ~]$ crs_stat -t

Name          Type           Target    State    Host       

------------------------------------------------------------

ora....N1.lsnr ora....er.type ONLINE    ONLINE   grid02     

ora....N2.lsnr ora....er.type ONLINE    ONLINE   grid01     

ora....N3.lsnr ora....er.type ONLINE    ONLINE   grid01     

ora....N4.lsnr ora....er.type ONLINE    ONLINE   grid01     

ora....VOTE.dg ora....up.type ONLINE    ONLINE   grid01     

ora.asm        ora.asm.type   ONLINE   ONLINE    grid01     

ora.eons      ora.eons.type  ONLINE    ONLINE   grid01     

ora....SM1.asm application    ONLINE   ONLINE    grid01     

ora.grid01.gsd application    OFFLINE  OFFLINE              

ora.grid01.ons application    ONLINE   ONLINE    grid01     

ora.grid01.vip ora....t1.type ONLINE    ONLINE   grid01     

ora....SM2.asm application    ONLINE   ONLINE    grid02     

ora.grid02.gsd application    OFFLINE  OFFLINE              

ora.grid02.ons application    ONLINE   ONLINE    grid02     

ora.grid02.vip ora....t1.type ONLINE    ONLINE   grid02     

ora.gsd       ora.gsd.type   OFFLINE   OFFLINE              

ora....network ora....rk.type ONLINE    ONLINE   grid01     

ora.oc4j      ora.oc4j.type  OFFLINE   OFFLINE              

ora.ons       ora.ons.type   ONLINE    ONLINE   grid01     

ora....ry.acfs ora....fs.type ONLINE    ONLINE   grid01     

ora.scan1.vip ora....ip.type ONLINE   ONLINE    grid02     

ora.scan2.vip ora....ip.type ONLINE   ONLINE    grid01     

ora.scan3.vip ora....ip.type ONLINE   ONLINE    grid01     

ora.scan4.vip ora....ip.type ONLINE   ONLINE    grid01     

[grid@grid02 ~]$

 

 

 

Post-install checks:

 

Check the voting disk status:

crsctl query css votedisk

 

Check the OCR status:

ocrcheck

 

Check the cluster nodes:

olsnodes -n

olsnodes -n -p -l -s -t -v

 

Check the listener status:

srvctl status listener

 

Check ASM:

srvctl status asm -a

 

Create the data disk groups:

asmca

 

Start the GSD and OC4J services

Check their status:

srvctl status oc4j

srvctl status nodeapps

 

Enable them to start automatically:

srvctl enable oc4j

srvctl enable nodeapps

 

Bring the services up manually:

srvctl start nodeapps -n grid01

srvctl start nodeapps -n grid02

 

 

All of the services are now up:

 

[grid@grid01 ~]$ crs_stat -t

Name          Type           Target    State    Host       

------------------------------------------------------------

ora....ER.lsnr ora....er.type ONLINE    ONLINE   grid01     

ora....N1.lsnr ora....er.type ONLINE    ONLINE   grid02     

ora....N2.lsnr ora....er.type ONLINE    ONLINE   grid01     

ora....N3.lsnr ora....er.type ONLINE    ONLINE   grid01     

ora....N4.lsnr ora....er.type ONLINE    ONLINE   grid01     

ora....VOTE.dg ora....up.type ONLINE    ONLINE   grid01     

ora.asm       ora.asm.type   ONLINE    ONLINE   grid01     

ora.eons      ora.eons.type  ONLINE    ONLINE   grid01     

ora....SM1.asm application    ONLINE   ONLINE    grid01     

ora....01.lsnr application    ONLINE   ONLINE    grid01     

ora.grid01.gsd application    ONLINE   ONLINE    grid01     

ora.grid01.ons application    ONLINE   ONLINE    grid01     

ora.grid01.vip ora....t1.type ONLINE    ONLINE   grid01     

ora....SM2.asm application    ONLINE   ONLINE    grid02     

ora....02.lsnr application    ONLINE   ONLINE    grid02     

ora.grid02.gsd application    ONLINE   ONLINE    grid02     

ora.grid02.ons application    ONLINE   ONLINE    grid02     

ora.grid02.vip ora....t1.type ONLINE    ONLINE   grid02     

ora.gsd       ora.gsd.type   ONLINE    ONLINE   grid01     

ora....network ora....rk.type ONLINE    ONLINE   grid01      

ora.oc4j      ora.oc4j.type  ONLINE    ONLINE   grid02     

ora.ons       ora.ons.type   ONLINE    ONLINE   grid01     

ora....ry.acfs ora....fs.type ONLINE    ONLINE   grid01     

ora.scan1.vip ora....ip.type ONLINE   ONLINE    grid02      

ora.scan2.vip ora....ip.type ONLINE   ONLINE    grid01     

ora.scan3.vip ora....ip.type ONLINE   ONLINE    grid01     

ora.scan4.vip ora....ip.type ONLINE   ONLINE    grid01     

[grid@grid01 ~]$

 

 

Install the database software:

 

After the installation completes, run the root script on each node:

[root@grid01 ~]# /app/oracle/product/11.2.0/db_1/root.sh

Running Oracle 11g root.sh script...

 

The following environment variables are set as:

   ORACLE_OWNER= oracle

   ORACLE_HOME= /app/oracle/product/11.2.0/db_1

 

Enter the full pathname of the local bin directory: [/usr/local/bin]:

The file "dbhome" already exists in /usr/local/bin.  Overwrite it? (y/n)

[n]: y

   Copying dbhome to /usr/local/bin ...

The file "oraenv" already exists in /usr/local/bin.  Overwrite it? (y/n)

[n]: y

   Copying oraenv to /usr/local/bin ...

The file "coraenv" already exists in /usr/local/bin.  Overwrite it? (y/n)

[n]: y

   Copying coraenv to /usr/local/bin ...

 

Entries will be added to the /etc/oratab file as needed by

Database Configuration Assistant when a database is created

Finished running generic part of root.sh script.

Now product-specific root actions will be performed.

Finished product-specific root actions.

 

Node 2:

[root@grid02 db_1]# /app/oracle/product/11.2.0/db_1/root.sh

Running Oracle 11g root.sh script...

 

The following environment variables are set as:

   ORACLE_OWNER= oracle

   ORACLE_HOME= /app/oracle/product/11.2.0/db_1

 

Enter the full pathname of the local bin directory: [/usr/local/bin]:

The file "dbhome" already exists in /usr/local/bin.  Overwrite it? (y/n)

[n]: y

   Copying dbhome to /usr/local/bin ...

The file "oraenv" already exists in /usr/local/bin.  Overwrite it? (y/n)

[n]: y

   Copying oraenv to /usr/local/bin ...

The file "coraenv" already exists in /usr/local/bin.  Overwrite it? (y/n)

[n]: y

   Copying coraenv to /usr/local/bin ...

 

Entries will be added to the /etc/oratab file as needed by

Database Configuration Assistant when a database is created

Finished running generic part of root.sh script.

Now product-specific root actions will be performed.

Finished product-specific root actions.

[root@grid02 db_1]#

 

Create the database:

 

[grid@grid01 ~]$ crs_stat -t

Name          Type           Target    State    Host       

------------------------------------------------------------

ora.ARC001.dg ora....up.type ONLINE   ONLINE    grid01     

ora.DATA001.dg ora....up.type ONLINE    ONLINE   grid01     

ora.FRA001.dg ora....up.type ONLINE    ONLINE    grid01     

ora....ER.lsnr ora....er.type ONLINE    ONLINE   grid01     

ora....N1.lsnr ora....er.type ONLINE    ONLINE   grid01     

ora....N2.lsnr ora....er.type ONLINE    ONLINE   grid02     

ora....N3.lsnr ora....er.type ONLINE    ONLINE   grid02     

ora....N4.lsnr ora....er.type ONLINE    ONLINE   grid02     

ora....VOTE.dg ora....up.type ONLINE    ONLINE   grid01     

ora.asm       ora.asm.type   ONLINE    ONLINE   grid01     

ora.eons      ora.eons.type  ONLINE    ONLINE   grid01     

ora....SM1.asm application    ONLINE   ONLINE    grid01     

ora....01.lsnr application    ONLINE   ONLINE    grid01     

ora.grid01.gsd application    ONLINE   ONLINE    grid01     

ora.grid01.ons application    ONLINE   ONLINE    grid01      

ora.grid01.vip ora....t1.type ONLINE    ONLINE   grid01     

ora....SM2.asm application    ONLINE   ONLINE    grid02     

ora....02.lsnr application    ONLINE   ONLINE    grid02     

ora.grid02.gsd application    ONLINE   ONLINE    grid02      

ora.grid02.ons application    ONLINE   ONLINE    grid02     

ora.grid02.vip ora....t1.type ONLINE    ONLINE   grid02     

ora.gsd       ora.gsd.type   ONLINE    ONLINE   grid01     

ora....network ora....rk.type ONLINE    ONLINE   grid01     

ora.oc4j      ora.oc4j.type  ONLINE    ONLINE   grid02     

ora.ons       ora.ons.type   ONLINE    ONLINE   grid01     

ora....ry.acfs ora....fs.type ONLINE    ONLINE   grid01     

ora.scan1.vip ora....ip.type ONLINE   ONLINE    grid01     

ora.scan2.vip ora....ip.type ONLINE   ONLINE    grid02     

ora.scan3.vip ora....ip.type ONLINE   ONLINE    grid02     

ora.scan4.vip ora....ip.type ONLINE   ONLINE    grid02     

ora.woo.db    ora....se.type ONLINE   ONLINE    grid01     

[grid@grid01 ~]$ exit

 

 

Common installation errors and fixes

1. root.sh fails on the second node with:

CRS-4000: Command Start failed, or completed with errors.

Command return code of 1 (256) from command: /app/11.2.0/grid/bin/crsctl start resource ora.asm -init

Start of resource "ora.asm -init" failed

Failed to start ASM

Failed to start Oracle Clusterware stack

This one was my own oversight: the second node had been cloned from the first, so its /etc/hosts file and local hostname were configured incorrectly.

Solution:

Fix the /etc/hosts configuration:

# that require network functionality will fail.

127.0.0.1 node2 localhost.localdomain localhost

Then run:

[root@node2 install]# /app/11.2.0/grid/crs/install/roothas.pl -delete -force -verbose

And re-run the script:

[root@node2 install]# /app/11.2.0/grid/root.sh

This time it completes successfully.

2. root.sh on the second node reports:

INFO: ERROR:

INFO: PRVF-4664 : Found inconsistent name resolution entries for SCAN name "node-scan"

INFO: ERROR:

INFO: PRVF-4657 : Name resolution setup check for "node-scan" (IP address: 192.168.10.25) failed

INFO: ERROR:

INFO: PRVF-4664 : Found inconsistent name resolution entries for SCAN name "node-scan"

INFO: Verification of SCAN VIP and Listener setup failed

According to Oracle, this can have several causes; in my case it was because the SCAN IP was being resolved through the /etc/hosts file. As long as the SCAN IP can be pinged, the error can simply be ignored.

See MOS note ID 887471.1.

3. The Oracle software installation fails with:

INS-35354

Per MOS, this is caused by an incorrect cluster flag in the Oracle inventory file. The fix is to edit it:

$ cat /home/grid/oraInventory/ContentsXML/inventory.xml

CRS="true">

将CRS=true添加到两个节点的配置文件里即可。
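
For reference, the Grid home entry typically looks roughly like the line below once the attribute is in place (the HOME NAME shown is the installer's default and may differ; LOC matches this install):

<HOME NAME="Ora11g_gridinfrahome1" LOC="/app/11.2.0/grid" TYPE="O" IDX="1" CRS="true"/>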

See MOS note ID 1053393.1.

 

Source: ITPUB blog, http://blog.itpub.net/20674423/viewspace-753480/