Install Oracle 11G R2 RAC on RHEL5.6 DNS+ISCSI+ASM


I recently tested the installation of 11g R2 RAC. Quite a few things have changed in the 11g R2 RAC install, so it took some fiddling to get everything working. Here are my notes, for the record.

 

1.  Environment overview

2.  DNS configuration

3.  Grid Infrastructure installation

4.  ASM disk group creation

5.  Oracle software installation

6.  Listener creation

7.  Database creation

8.  Post-installation database configuration

9.  Errors encountered during installation and their solutions


I. Environment Description

1.  With no spare physical hardware, I set up VMware ESX on a single Dell R710 server and created two virtual machines on it; the shared storage is iSCSI emulated with FreeNAS.

2.  The 11g R2 RAC installer needs DNS resolution when configuring the SCAN IPs, so a dedicated DNS server must also be set up. If you already have a DNS server, just add the SCAN IPs to it. Without DNS (or if you don't want to configure one), /etc/hosts can be used for resolution instead, but then only one SCAN IP is possible and there is no load balancing.

Host description:

Hostname   Operating system                        CPUs   Memory   Disk   NICs
node1      RedHat Enterprise Linux 5.6 (x86_64)    1      1.5G     30G    2
node2      RedHat Enterprise Linux 5.6 (x86_64)    1      1.5G     30G    2

 

Network description:

IP address      Alias                Description
192.168.10.21   node1                Node 1 public IP
192.168.10.22   node2                Node 2 public IP
192.168.10.23   node1-vip            Node 1 virtual IP
192.168.10.24   node2-vip            Node 2 virtual IP
192.168.10.25   node-scan.ordb.com   SCAN IP 1
192.168.10.26   node-scan.ordb.com   SCAN IP 2
192.168.10.27   node-scan.ordb.com   SCAN IP 3
192.168.20.21   node1-priv           Node 1 private (interconnect) IP
192.168.20.22   node2-priv           Node 2 private (interconnect) IP

 

Disk description:

Disk group   Device     Type    Size   Storage   Description
Local disk   /dev/sda   SCSI    30G    ext3      Operating system and Oracle software
CRS          /dev/sdb   iSCSI   8G     ASM       OCR and voting disks
DATA         /dev/sdc   iSCSI   16G    ASM       Database data
FRA          /dev/sdd   iSCSI   12G    ASM       Flash recovery area and archived logs

 

Database description:

Hostname   DB instance   ASM instance   Database name   Data storage   CRS storage   Archive log storage
node1      orcldb1       +ASM1          orcldb          ASM            ASM           ASM
node2      orcldb2       +ASM2

 

Software:

Operating system:

RHEL-server-5.6-x86_64-dvd.iso

Database software:

linux_x64_11gr2_database_1of2.zip

linux_x64_11gr2_database_2of2.zip

linux_x64_11gr2_grid.zip

ASMLib RPMs

 

II. DNS Configuration

Configure DNS on RHEL 5. For this test, DNS is installed on the first node.

1. Install the required packages. All of these RPMs can be found on the installation DVD:

bind-9.3.6-16.P1.el5

bind-utils

caching-nameserver

system-config-bind

bind-chroot-9.3.6-16.P1.el5

 

[root@node1 ~]# cd /media/RHEL_5.6\ x86_64\ DVD/Server/

rpm -ivh bind-9.3.6-16.P1.el5.x86_64.rpm

rpm -ivh bind-utils-9.3.6-16.P1.el5.x86_64.rpm

rpm -ivh caching-nameserver-9.3.6-16.P1.el5.x86_64.rpm

rpm -ivh system-config-bind-4.0.3-4.el5.noarch.rpm

rpm -ivh bind-chroot-9.3.6-16.P1.el5.x86_64.rpm
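
To confirm that all five packages actually landed, a quick check (any missing package prints "is not installed"):

rpm -q bind bind-utils caching-nameserver system-config-bind bind-chroot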

 

2. Configuration

[root@node1 etc]# cd /var/named/chroot/etc

[root@node1 etc]# ls

localtime named.caching-nameserver.conf  named.conf  named.rfc1912.zones  rndc.key

List the files in the current directory, then copy named.caching-nameserver.conf to named.conf:

[root@node1 etc]# cp  -p named.caching-nameserver.conf named.conf

Edit the named.conf file; the modified entries are the ones set to 'any' and the query-source lines:

[root@node1 etc]# vi named.conf

options {

    listen-on port 53 { any; };

    listen-on-v6 port 53 { ::1; };

    directory    "/var/named";

    dump-file    "/var/named/data/cache_dump.db";

        statistics-file "/var/named/data/named_stats.txt";

        memstatistics-file "/var/named/data/named_mem_stats.txt";

    query-source    port 53;  

    query-source-v6 port 53;

allow-query    { any; };

allow-query-cache { localhost; };

};

logging {

        channel default_debug {

                file "data/named.run";

                severity dynamic;

        };

};

view localhost_resolver {

    match-clients        { any; };

    match-destinations { any; };

    recursion yes;

    include "/etc/named.rfc1912.zones";

};

Add the following to /etc/named.rfc1912.zones:

zone "ordb.com" IN {

    type master;

    file "ordb.com.zone";

    allow-update { none; };

};

zone "10.168.192.in-addr.arpa" IN {

    type master;

    file "192.168.10.zone";

allow-update { none; };

allow-transfer {

192.168.10.25;

192.168.10.26;

192.168.10.27;

};

};

Note: this lets the three listed IPs be answered in round-robin fashion.

Edit the forward zone file:

[root@node1 named]# more ordb.com.zone

$TTL    86400

@               IN SOA  ordb.com root.ordb.com (

                                        42              ; serial (d. adams)

                                        3H              ; refresh

                                        15M             ; retry

                                        1W              ; expiry

                                        1D )            ; minimum

                IN NS           ordb.com

node-scan       IN A            192.168.10.25

node-scan       IN A            192.168.10.26

node-scan       IN A            192.168.10.27

Edit the reverse zone file:

[root@node1 named]# more 192.168.10.zone

$TTL    86400

@       IN      SOA     ordb.com. root.ordb.com.  (

                                      1997022700 ; Serial

                                      28800      ; Refresh

                                      14400      ; Retry

                                      3600000    ; Expire

                                      86400 )    ; Minimum

        IN      NS      ordb.com.

25      IN      PTR     node-scan.ordb.com.

26      IN      PTR     node-scan.ordb.com.

27      IN      PTR     node-scan.ordb.com.

Add the DNS server address; this must be done on both nodes:

[root@node1 named]# vi /etc/resolv.conf

nameserver 192.168.10.21

3. Restart DNS

[root@node1 named]# service named restart

Stopping named: .[  OK  ]

Starting named: [  OK  ]

 

4. Verification:

Resolve the name three times in a row. The order of the returned IPs differs each time, confirming that round-robin DNS is working:

[root@node2 ~]# nslookup node-scan.ordb.com

Server:         192.168.10.21

Address:        192.168.10.21#53

Name:   node-scan.ordb.com

Address: 192.168.10.27

Name:   node-scan.ordb.com

Address: 192.168.10.25

Name:   node-scan.ordb.com

Address: 192.168.10.26

 

[root@node2 ~]# nslookup node-scan.ordb.com

Server:         192.168.10.21

Address:        192.168.10.21#53

Name:   node-scan.ordb.com

Address: 192.168.10.26

Name:   node-scan.ordb.com

Address: 192.168.10.27

Name:   node-scan.ordb.com

Address: 192.168.10.25

 

[root@node2 ~]# nslookup node-scan.ordb.com

Server:         192.168.10.21

Address:        192.168.10.21#53

Name:   node-scan.ordb.com

Address: 192.168.10.25

Name:   node-scan.ordb.com

Address: 192.168.10.26

Name:   node-scan.ordb.com

Address: 192.168.10.27
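
The same check can be scripted; a small loop using dig (from the bind-utils package installed above) prints just the rotated address lists:

for i in 1 2 3; do dig +short node-scan.ordb.com @192.168.10.21; echo ---; done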

 

At this point, the DNS configuration is complete and working.

 

5. If named fails to start during configuration with the following error, here is the fix:

[root@node1 named]# service named restart

Stopping named: [  OK  ]

Starting named: [FAILED]

Check the log:

[root@node1 named]# tail -f /var/log/messages

node1 named[4835]: starting BIND 9.3.6-P1-RedHat-9.3.6-16.P1.el5 -u named -t /var/named/chroot

node1 named[4835]: found 2 CPUs, using 2 worker threads

node1 named[4835]: using up to 4096 sockets

node1 named[4835]: loading configuration from '/etc/named.conf'

node1 named[4835]: none:0: open: /etc/named.conf: permission denied

node1 named[4835]: loading configuration: permission denied

node1 named[4835]: exiting (due to fatal error)

 

Fix the ownership of named.conf, then restart:

[root@node1 named]# ls -al /var/named/chroot/etc/named.conf

-rw-r----- 1 root root 1424 Feb  2 18:47 /var/named/chroot/etc/named.conf

 

[root@node1 named]# chown root:named /var/named/chroot/etc/named.conf

[root@node1 named]# service named restart

Stopping named: [  OK  ]

Starting named: [  OK  ]

 

 

 

III. Installing Grid Infrastructure

1. Pre-installation preparation:

1) Check whether the packages Oracle requires are installed:

rpm -q --qf '%{NAME}-%{VERSION}-%{RELEASE} (%{ARCH})\n' binutils \

compat-libstdc++ \

elfutils-libelf \

elfutils-libelf-devel \

expat \

gcc \

gcc-c++ \

glibc \

glibc-common \

glibc-devel \

glibc-headers \

libaio \

libaio-devel \

libgcc \

libstdc++ \

libstdc++-devel \

make \

pdksh \

sysstat \

unixODBC \

unixODBC-devel

NOTE: some packages are needed in both 32-bit and 64-bit versions. If any are missing, the Grid Infrastructure installer will detect them later; just install whatever it reports as missing.

2) Create groups and users

Create the groups:

/usr/sbin/groupadd -g 501 oinstall

/usr/sbin/groupadd -g 502 dba

/usr/sbin/groupadd -g 504 asmadmin

/usr/sbin/groupadd -g 506 asmdba

/usr/sbin/groupadd -g 507 asmoper

Create the users:

/usr/sbin/useradd -u 501 -g oinstall -G asmadmin,asmdba,asmoper grid

/usr/sbin/useradd -u 502 -g oinstall -G dba,asmdba oracle

Set the passwords:

passwd grid

passwd oracle
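
A quick sanity check that the UIDs and group memberships came out as planned:

id grid
id oracle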

 

3) Create directories

Create the Oracle inventory directory:

mkdir -p /app/oraInventory

chown -R grid:oinstall /app/oraInventory

chmod -R 775 /app/oraInventory

Create the Grid installation directories:

mkdir -p /app/grid

mkdir -p /app/11.2.0/grid

chown -R grid:oinstall /app/11.2.0/grid

chmod -R 775 /app/11.2.0/grid

Create the Oracle installation directories:

mkdir -p /app/oracle/product/11.2.0/db_1

mkdir -p /app/oracle/cfgtoollogs           --needed to ensure that dbca is able to run after the rdbms installation

chown -R oracle:oinstall /app/oracle

chmod -R 775 /app/oracle

 

4) Modify kernel parameters

Edit /etc/sysctl.conf and add the following:

fs.aio-max-nr = 1048576

fs.file-max = 6815744

kernel.shmall = 8388608

kernel.shmmax = 1073741824

kernel.shmmni = 4096

kernel.sem = 250 32000 100 128

net.ipv4.ip_local_port_range = 9000 65500

net.core.rmem_default = 262144

net.core.rmem_max = 4194304

net.core.wmem_default = 262144

net.core.wmem_max = 1048586

After editing, apply the settings: sysctl -p

Edit /etc/security/limits.conf and add the following:

grid soft nproc 2047

grid hard nproc 16384

grid soft nofile 1024

grid hard nofile 65536

oracle soft nproc 2047

oracle hard nproc 16384

oracle soft nofile 1024

oracle hard nofile 65536

 

Edit /etc/pam.d/login and add the following:

session required pam_limits.so

 

Edit /etc/profile and add the following:

if [ $USER = "oracle" ] || [ $USER = "grid" ]; then

if [ $SHELL = "/bin/ksh" ]; then

ulimit -p 16384

ulimit -n 65536

else

ulimit -u 16384 -n 65536

fi

umask 022

fi

 

5) NTP time synchronization (a new prerequisite check in 11g)

/etc/init.d/ntpd stop

chkconfig ntpd off

mv /etc/ntp.conf /etc/ntp.conf.org

rm /etc/ntp.conf

If NTP is to be used instead, its startup options must be changed (slewing mode):

Edit /etc/sysconfig/ntpd as follows:

OPTIONS="-x -u ntp:ntp -p /var/run/ntpd.pid"

Then restart the NTP service:
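service ntpd restart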

6) Network configuration

/etc/hosts

#eth0 public

192.168.10.21  node1        

192.168.10.22  node2        

#eth0 vip                                              

192.168.10.23  node1-vip

192.168.10.24  node2-vip

#eth1 private                                            

192.168.20.21  node1-priv

192.168.20.22  node2-priv

Note: without DNS, when the hosts file is used to resolve the SCAN, the SCAN IP must be added to /etc/hosts; only one SCAN IP can be used there, in this format:

192.168.10.25  scan-ip

When DNS is used for resolution, change the host lookup order so that dns comes first:

vi /etc/nsswitch.conf

hosts: files nis dns

Change it to:

hosts: dns files nis

After the change, restart the name service cache daemon:

/sbin/service nscd restart
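
To confirm the new lookup order is in effect, query through nsswitch itself (getent honors /etc/nsswitch.conf, whereas nslookup always goes straight to DNS):

getent hosts node-scan.ordb.com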

 

7) Set environment variables for the grid and oracle users (run on both nodes)

su - grid

vi .bash_profile

TMP=/tmp; export TMP

TMPDIR=$TMP; export TMPDIR

ORACLE_SID=+ASM1; export ORACLE_SID
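# On node2, set ORACLE_SID=+ASM2 instead (one ASM instance per node)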

ORACLE_BASE=/app/grid; export ORACLE_BASE

ORACLE_HOME=/app/11.2.0/grid; export ORACLE_HOME

PATH=$ORACLE_HOME/bin:$PATH; export PATH

if [ $USER = "grid" ]; then

if [ $SHELL = "/bin/ksh" ]; then

ulimit -p 16384

ulimit -n 65536

else

ulimit -u 16384 -n 65536

fi

umask 022

fi

Note: the grid user's ORACLE_BASE and ORACLE_HOME should not be the same directory; the installer has problems otherwise.

 

su - oracle

vi .bash_profile

#Oracle Settings

TMP=/tmp; export TMP

TMPDIR=$TMP; export TMPDIR

ORACLE_HOSTNAME=node1; export ORACLE_HOSTNAME

ORACLE_UNQNAME=orcldb; export ORACLE_UNQNAME

ORACLE_BASE=/app/oracle; export ORACLE_BASE

ORACLE_HOME=$ORACLE_BASE/product/11.2.0/db_1; export ORACLE_HOME

ORACLE_SID=orcldb1; export ORACLE_SID
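# On node2, set ORACLE_HOSTNAME=node2 and ORACLE_SID=orcldb2 instead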

ORACLE_TERM=xterm; export ORACLE_TERM

PATH=/usr/sbin:$PATH; export PATH

PATH=$ORACLE_HOME/bin:$PATH; export PATH

LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib; export LD_LIBRARY_PATH

CLASSPATH=$ORACLE_HOME/JRE:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib; export CLASSPATH

if [ $USER = "oracle" ]; then

if [ $SHELL = "/bin/ksh" ]; then

ulimit -p 16384

ulimit -n 65536

else

ulimit -u 16384 -n 65536

fi

umask 022

fi

8) Disk partitioning

For this test, the shared storage is iSCSI disks emulated with FreeNAS (in a real deployment iSCSI setup is fairly simple and well documented online). There are three disks: one for the OCR and voting disks, one for data, and one for flashback data and archived logs. All of them are managed by ASM, so three disk groups will be created.

The following only needs to be done on one node; the other node just rescans the disks to pick up the changes.

View the disks:

[root@node1 named]# fdisk -l

Disk /dev/sda: 32.2 GB, 32212254720 bytes

255 heads, 63 sectors/track, 3916 cylinders

Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System

/dev/sda1   *           1          65      522081   83  Linux

/dev/sda2              66        2041    15872220   83  Linux

/dev/sda3            2042        2857     6554520   83  Linux

/dev/sda4            2858        3916     8506417+   5  Extended

/dev/sda5            2858        3249     3148708+  82  Linux swap / Solaris

/dev/sda6            3250        3510     2096451   83  Linux

/dev/sda7            3511        3641     1052226   83  Linux

/dev/sda8            3642        3772     1052226   83  Linux

/dev/sda9            3773        3903     1052226   83  Linux

 

Disk /dev/sdb: 8589 MB, 8589934592 bytes

64 heads, 32 sectors/track, 8192 cylinders

Units = cylinders of 2048 * 512 = 1048576 bytes

   Device Boot      Start         End      Blocks   Id  System

 

Disk /dev/sdc: 17.1 GB, 17179869184 bytes

64 heads, 32 sectors/track, 16384 cylinders

Units = cylinders of 2048 * 512 = 1048576 bytes

Disk /dev/sdc doesn't contain a valid partition table

 

Disk /dev/sdd: 12.8 GB, 12884901888 bytes

64 heads, 32 sectors/track, 12288 cylinders

Units = cylinders of 2048 * 512 = 1048576 bytes

Disk /dev/sdd doesn't contain a valid partition table


Partition the disks:

[root@node1 ~]# fdisk /dev/sdb

[root@node1 ~]# fdisk /dev/sdc

[root@node1 ~]# fdisk /dev/sdd
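
fdisk prompts interactively; for reference, a sketch of the same keystrokes fed non-interactively (assumes an empty disk; each empty answer accepts the default starting cylinder, and +2500M approximates the 2.5G partitions shown below):

printf 'n\np\n1\n\n+2500M\nn\np\n2\n\n+2500M\nn\np\n3\n\n+2500M\nw\n' | fdisk /dev/sdb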

After partitioning, check again:

[root@node1 ~]# fdisk -l

 

Disk /dev/sda: 32.2 GB, 32212254720 bytes

255 heads, 63 sectors/track, 3916 cylinders

Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System

/dev/sda1   *           1          65      522081   83  Linux

/dev/sda2              66        2041    15872220   83  Linux

/dev/sda3            2042        2857     6554520   83  Linux

/dev/sda4            2858        3916     8506417+   5  Extended

/dev/sda5            2858        3249     3148708+  82  Linux swap / Solaris

/dev/sda6            3250        3510     2096451   83  Linux

/dev/sda7            3511        3641     1052226   83  Linux

/dev/sda8            3642        3772     1052226   83  Linux

/dev/sda9            3773        3903     1052226   83  Linux

 

Disk /dev/sdb: 8589 MB, 8589934592 bytes

64 heads, 32 sectors/track, 8192 cylinders

Units = cylinders of 2048 * 512 = 1048576 bytes

   Device Boot      Start         End      Blocks   Id  System

/dev/sdb1               1        2500     2559984   83  Linux

/dev/sdb2            2501        5000     2560000   83  Linux

/dev/sdb3            5001        7500     2560000   83  Linux

 

Disk /dev/sdc: 17.1 GB, 17179869184 bytes

64 heads, 32 sectors/track, 16384 cylinders

Units = cylinders of 2048 * 512 = 1048576 bytes

   Device Boot      Start         End      Blocks   Id  System

/dev/sdc1               1        2500     2559984   83  Linux

/dev/sdc2            2501        5000     2560000   83  Linux

/dev/sdc3            5001        7500     2560000   83  Linux

/dev/sdc4            7501       16384     9097216    5  Extended

/dev/sdc5            7501       10000     2559984   83  Linux

/dev/sdc6           10001       12500     2559984   83  Linux

/dev/sdc7           12501       15000     2559984   83  Linux

 

Disk /dev/sdd: 12.8 GB, 12884901888 bytes

64 heads, 32 sectors/track, 12288 cylinders

Units = cylinders of 2048 * 512 = 1048576 bytes

   Device Boot      Start         End      Blocks   Id  System

/dev/sdd1               1        2500     2559984   83  Linux

/dev/sdd2            2501        5000     2560000   83  Linux

/dev/sdd3            5001        7500     2560000   83  Linux

/dev/sdd4            7501       12288     4902912    5  Extended

/dev/sdd5            7501       10000     2559984   83  Linux

 

9) Install and configure ASMLib (run on both nodes)

[root@node1 rpmpack]# ls

oracleasm-2.6.9-89.ELsmp-2.0.5-1.el4.i686.rpm oracleasmlib-2.0.4-1.el4.i386.rpm  oracleasm-support-2.1.7-1.el4.i386.rpm

[root@node1 rpmpack]# rpm -ivh *.rpm

warning: oracleasm-2.6.9-89.ELsmp-2.0.5-1.el4.i686.rpm: V3 DSA signature: NOKEY, key ID

b38a8516

Preparing...                ################################## [100%]

   1:oracleasm-support      ################################ [ 33%]

   2:oracleasm-2.6.9-89.ELsm################################ [ 67%]

   3:oracleasmlib           ################################# [100%]

 

Configure ASMLib:

[root@node2 rpmpack]# /etc/init.d/oracleasm configure

Configuring the Oracle ASM library driver.

This will configure the on-boot properties of the Oracle ASM library

driver.  The following questions will determine whether the driver is

loaded on boot and what permissions it will have.  The current values

will be shown in brackets ('[]').  Hitting <ENTER> without typing an

answer will keep that current value.  Ctrl-C will abort.

 

Default user to own the driver interface []: grid

Default group to own the driver interface []: asmadmin

Start Oracle ASM library driver on boot (y/n) [n]: y

Scan for Oracle ASM disks on boot (y/n) [y]: y

Writing Oracle ASM library driver configuration: done

Initializing the Oracle ASMLib driver: [  OK  ]

Scanning the system for Oracle ASMLib disks: [  OK  ]

 

ASM磁盘,只需在任意一个节点上执行,在另一个节点上同步即可

建立CRS盘:

/usr/sbin/oracleasm createdisk OCR_VOTE1 /dev/sdb1

/usr/sbin/oracleasm createdisk OCR_VOTE2 /dev/sdb2

/usr/sbin/oracleasm createdisk OCR_VOTE3 /dev/sdb3

Create the data disks:

/usr/sbin/oracleasm createdisk ASMDISK1 /dev/sdc1

/usr/sbin/oracleasm createdisk ASMDISK2 /dev/sdc2

/usr/sbin/oracleasm createdisk ASMDISK3 /dev/sdc3

/usr/sbin/oracleasm createdisk ASMDISK5 /dev/sdc5

/usr/sbin/oracleasm createdisk ASMDISK6 /dev/sdc6

/usr/sbin/oracleasm createdisk ASMDISK7 /dev/sdc7

Create the flash recovery area disks:

/usr/sbin/oracleasm createdisk ASMDISK8 /dev/sdd1

/usr/sbin/oracleasm createdisk ASMDISK9 /dev/sdd2

/usr/sbin/oracleasm createdisk ASMDISK10 /dev/sdd3

/usr/sbin/oracleasm createdisk ASMDISK11 /dev/sdd5

List the disks that were created:

[root@node1 rpmpack]# /usr/sbin/oracleasm listdisks

ASMDISK1

ASMDISK10

ASMDISK11

ASMDISK2

ASMDISK3

ASMDISK5

ASMDISK6

ASMDISK7

ASMDISK8

ASMDISK9

OCR_VOTE1

OCR_VOTE2

OCR_VOTE3
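
Any label can also be traced back to its block device as a spot check:

/usr/sbin/oracleasm querydisk OCR_VOTE1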

On the other node, run:

[root@node2 rpmpack]# /usr/sbin/oracleasm scandisks

[root@node2 rpmpack]# /usr/sbin/oracleasm listdisks

ASMDISK1

ASMDISK10

ASMDISK11

ASMDISK2

ASMDISK3

ASMDISK5

ASMDISK6

ASMDISK7

ASMDISK8

ASMDISK9

OCR_VOTE1

OCR_VOTE2

OCR_VOTE3

2. Install Grid Infrastructure

1) Upload the installation package to the server as the grid user and unzip it:

unzip linux_x64_11gr2_grid.zip

2) Run the installer to bring up the GUI:

[grid@node1 grid]$ ./runInstaller

The steps are as follows:

Select Install and Configure Grid Infrastructure for a Cluster

Choose Advanced Installation

Choose the installation language; English is fine

Enter the SCAN name that DNS resolves, and uncheck the GNS option

Click Add to add the second node (existing node entries can also be edited here)

In 11g R2 RAC, SSH no longer needs to be configured by hand: select SSH connectivity, enter the grid user's password, and click Setup so the installer configures it automatically. If SSH was configured earlier, click Test to verify it.

SSH is configured successfully; go straight to the next step

Choose the storage option; ASM is selected here

Create the disk group to be used for the OCR and voting disks

Set the ASM management passwords

Choose not to use IPMI

Assign the ASM management groups

Set the Grid installation directories

Set the inventory directory

Run the pre-installation checks

The installation prerequisites are verified

Any problem found here must be fixed before installation can proceed, e.g. missing RPMs. Because this test runs on virtual machines, a low-memory warning is reported and simply ignored.

Click Finish to start the installation

 

After the installation completes, run the scripts as root on both nodes as prompted. The output is as follows:

[root@node1 ~]# /app/oraInventory/orainstRoot.sh

Changing permissions of /app/oraInventory.

Adding read,write permissions for group.

Removing read,write,execute permissions for world.

Changing groupname of /app/oraInventory to oinstall.

The execution of the script is complete.

 

[root@node1 ~]# /app/11.2.0/grid/root.sh

Running Oracle 11g root.sh script...

The following environment variables are set as:

    ORACLE_OWNER= grid

    ORACLE_HOME=  /app/11.2.0/grid

Enter the full pathname of the local bin directory: [/usr/local/bin]:

   Copying dbhome to /usr/local/bin ...

   Copying oraenv to /usr/local/bin ...

   Copying coraenv to /usr/local/bin ...

Creating /etc/oratab file...

Entries will be added to the /etc/oratab file as needed by

Database Configuration Assistant when a database is created

Finished running generic part of root.sh script.

Now product-specific root actions will be performed.

2012-01-31 23:00:12: Parsing the host name

2012-01-31 23:00:12: Checking for super user privileges

2012-01-31 23:00:12: User has super user privileges

Using configuration parameter file: /app/11.2.0/grid/crs/install/crsconfig_params

Creating trace directory

LOCAL ADD MODE

Creating OCR keys for user 'root', privgrp 'root'..

Operation successful.

  root wallet

  root wallet cert

  root cert export

  peer wallet

  profile reader wallet

  pa wallet

  peer wallet keys

  pa wallet keys

  peer cert request

  pa cert request

  peer cert

  pa cert

  peer root cert TP

  profile reader root cert TP

  pa root cert TP

  peer pa cert TP

  pa peer cert TP

  profile reader pa cert TP

  profile reader peer cert TP

  peer user cert

  pa user cert

Adding daemon to inittab

CRS-4123: Oracle High Availability Services has been started.

ohasd is starting

CRS-2672: Attempting to start 'ora.gipcd' on 'node1'

CRS-2672: Attempting to start 'ora.mdnsd' on 'node1'

CRS-2676: Start of 'ora.mdnsd' on 'node1' succeeded

CRS-2676: Start of 'ora.gipcd' on 'node1' succeeded

CRS-2672: Attempting to start 'ora.gpnpd' on 'node1'

CRS-2676: Start of 'ora.gpnpd' on 'node1' succeeded

CRS-2672: Attempting to start 'ora.cssdmonitor' on 'node1'

CRS-2676: Start of 'ora.cssdmonitor' on 'node1' succeeded

CRS-2672: Attempting to start 'ora.cssd' on 'node1'

CRS-2672: Attempting to start 'ora.diskmon' on 'node1'

CRS-2676: Start of 'ora.diskmon' on 'node1' succeeded

CRS-2676: Start of 'ora.cssd' on 'node1' succeeded

CRS-2672: Attempting to start 'ora.ctssd' on 'node1'

CRS-2676: Start of 'ora.ctssd' on 'node1' succeeded

ASM created and started successfully.

DiskGroup CRS created successfully.

clscfg: -install mode specified

Successfully accumulated necessary OCR keys.

Creating OCR keys for user 'root', privgrp 'root'..

Operation successful.

CRS-2672: Attempting to start 'ora.crsd' on 'node1'

CRS-2676: Start of 'ora.crsd' on 'node1' succeeded

CRS-4256: Updating the profile

Successful addition of voting disk a2ead9115d1f4f18bff53d933a3936dc.

Successfully replaced voting disk group with +CRS.

CRS-4256: Updating the profile

CRS-4266: Voting file(s) successfully replaced

##  STATE    File Universal Id                File Name Disk group

--  -----    -----------------                --------- ---------

 1. ONLINE   a2ead9115d1f4f18bff53d933a3936dc (ORCL:OCR_VOTE1) [CRS]

Located 1 voting disk(s).

CRS-2673: Attempting to stop 'ora.crsd' on 'node1'

CRS-2677: Stop of 'ora.crsd' on 'node1' succeeded

CRS-2673: Attempting to stop 'ora.asm' on 'node1'

CRS-2677: Stop of 'ora.asm' on 'node1' succeeded

CRS-2673: Attempting to stop 'ora.ctssd' on 'node1'

CRS-2677: Stop of 'ora.ctssd' on 'node1' succeeded

CRS-2673: Attempting to stop 'ora.cssdmonitor' on 'node1'

CRS-2677: Stop of 'ora.cssdmonitor' on 'node1' succeeded

CRS-2673: Attempting to stop 'ora.cssd' on 'node1'

CRS-2677: Stop of 'ora.cssd' on 'node1' succeeded

CRS-2673: Attempting to stop 'ora.gpnpd' on 'node1'

CRS-2677: Stop of 'ora.gpnpd' on 'node1' succeeded

CRS-2673: Attempting to stop 'ora.gipcd' on 'node1'

CRS-2677: Stop of 'ora.gipcd' on 'node1' succeeded

CRS-2673: Attempting to stop 'ora.mdnsd' on 'node1'

CRS-2677: Stop of 'ora.mdnsd' on 'node1' succeeded

CRS-2672: Attempting to start 'ora.mdnsd' on 'node1'

CRS-2676: Start of 'ora.mdnsd' on 'node1' succeeded

CRS-2672: Attempting to start 'ora.gipcd' on 'node1'

CRS-2676: Start of 'ora.gipcd' on 'node1' succeeded

CRS-2672: Attempting to start 'ora.gpnpd' on 'node1'

CRS-2676: Start of 'ora.gpnpd' on 'node1' succeeded

CRS-2672: Attempting to start 'ora.cssdmonitor' on 'node1'

CRS-2676: Start of 'ora.cssdmonitor' on 'node1' succeeded

CRS-2672: Attempting to start 'ora.cssd' on 'node1'

CRS-2672: Attempting to start 'ora.diskmon' on 'node1'

CRS-2676: Start of 'ora.diskmon' on 'node1' succeeded

CRS-2676: Start of 'ora.cssd' on 'node1' succeeded

CRS-2672: Attempting to start 'ora.ctssd' on 'node1'

CRS-2676: Start of 'ora.ctssd' on 'node1' succeeded

CRS-2672: Attempting to start 'ora.asm' on 'node1'

CRS-2676: Start of 'ora.asm' on 'node1' succeeded

CRS-2672: Attempting to start 'ora.crsd' on 'node1'

CRS-2676: Start of 'ora.crsd' on 'node1' succeeded

CRS-2672: Attempting to start 'ora.evmd' on 'node1'

CRS-2676: Start of 'ora.evmd' on 'node1' succeeded

CRS-2672: Attempting to start 'ora.asm' on 'node1'

CRS-2676: Start of 'ora.asm' on 'node1' succeeded

CRS-2672: Attempting to start 'ora.CRS.dg' on 'node1'

CRS-2676: Start of 'ora.CRS.dg' on 'node1' succeeded

CRS-2672: Attempting to start 'ora.registry.acfs' on 'node1'

CRS-2676: Start of 'ora.registry.acfs' on 'node1' succeeded

node1     2012/01/31 23:07:11     /app/11.2.0/grid/cdata/node1/backup_20120131_230711.olr

Preparing packages for installation...

cvuqdisk-1.0.7-1

Configure Oracle Grid Infrastructure for a Cluster ... succeeded

Updating inventory properties for clusterware

Starting Oracle Universal Installer...

Checking swap space: must be greater than 500 MB.   Actual 2685 MB    Passed

The inventory pointer is located at /etc/oraInst.loc

The inventory is located at /app/oraInventory

'UpdateNodeList' was successful.

The scripts have completed on the first node.

 

[root@node2 install]# /app/11.2.0/grid/root.sh

Running Oracle 11g root.sh script...

The following environment variables are set as:

    ORACLE_OWNER= grid

    ORACLE_HOME=  /app/11.2.0/grid

Enter the full pathname of the local bin directory: [/usr/local/bin]:

The file "dbhome" already exists in /usr/local/bin.  Overwrite it? (y/n)

[n]: y

   Copying dbhome to /usr/local/bin ...

The file "oraenv" already exists in /usr/local/bin.  Overwrite it? (y/n)

[n]: y

   Copying oraenv to /usr/local/bin ...

The file "coraenv" already exists in /usr/local/bin.  Overwrite it? (y/n)

[n]: y

   Copying coraenv to /usr/local/bin ...

Entries will be added to the /etc/oratab file as needed by

Database Configuration Assistant when a database is created

Finished running generic part of root.sh script.

Now product-specific root actions will be performed.

2012-01-31 23:40:21: Parsing the host name

2012-01-31 23:40:21: Checking for super user privileges

2012-01-31 23:40:21: User has super user privileges

Using configuration parameter file: /app/11.2.0/grid/crs/install/crsconfig_params

LOCAL ADD MODE

Creating OCR keys for user 'root', privgrp 'root'..

Operation successful.

Adding daemon to inittab

CRS-4123: Oracle High Availability Services has been started.

ohasd is starting

CRS-4402: The CSS daemon was started in exclusive mode but found an active CSS daemon on node

node1, number 1, and is terminating

An active cluster was found during exclusive startup, restarting to join the cluster

CRS-2672: Attempting to start 'ora.mdnsd' on 'node2'

CRS-2676: Start of 'ora.mdnsd' on 'node2' succeeded

CRS-2672: Attempting to start 'ora.gipcd' on 'node2'

CRS-2676: Start of 'ora.gipcd' on 'node2' succeeded

CRS-2672: Attempting to start 'ora.gpnpd' on 'node2'

CRS-2676: Start of 'ora.gpnpd' on 'node2' succeeded

CRS-2672: Attempting to start 'ora.cssdmonitor' on 'node2'

CRS-2676: Start of 'ora.cssdmonitor' on 'node2' succeeded

CRS-2672: Attempting to start 'ora.cssd' on 'node2'

CRS-2672: Attempting to start 'ora.diskmon' on 'node2'

CRS-2676: Start of 'ora.diskmon' on 'node2' succeeded

CRS-2676: Start of 'ora.cssd' on 'node2' succeeded

CRS-2672: Attempting to start 'ora.ctssd' on 'node2'

CRS-2676: Start of 'ora.ctssd' on 'node2' succeeded

CRS-2672: Attempting to start 'ora.drivers.acfs' on 'node2'

CRS-2676: Start of 'ora.drivers.acfs' on 'node2' succeeded

CRS-2672: Attempting to start 'ora.asm' on 'node2'

CRS-2676: Start of 'ora.asm' on 'node2' succeeded

CRS-2672: Attempting to start 'ora.crsd' on 'node2'

CRS-2676: Start of 'ora.crsd' on 'node2' succeeded

CRS-2672: Attempting to start 'ora.evmd' on 'node2'

CRS-2676: Start of 'ora.evmd' on 'node2' succeeded

node2     2012/01/31 23:42:40     /app/11.2.0/grid/cdata/node2/backup_20120131_234240.olr

Preparing packages for installation...

cvuqdisk-1.0.7-1

Configure Oracle Grid Infrastructure for a Cluster ... succeeded

Updating inventory properties for clusterware

Starting Oracle Universal Installer...

Checking swap space: must be greater than 500 MB.   Actual 3008 MB    Passed

The inventory pointer is located at /etc/oraInst.loc

The inventory is located at /app/oraInventory

'UpdateNodeList' was successful.

The script completed successfully on the second node.

 

Check whether the CRS components started and are running. In the output below, all three SCAN VIPs are ONLINE:

[grid@node2 ~]$ crs_stat -t

Name           Type           Target    State     Host       

------------------------------------------------------------

ora.CRS.dg     ora....up.type ONLINE    ONLINE    node1      

ora....N1.lsnr ora....er.type ONLINE    ONLINE    node2      

ora....N2.lsnr ora....er.type ONLINE    ONLINE    node1      

ora....N3.lsnr ora....er.type ONLINE    ONLINE    node1      

ora.asm        ora.asm.type   ONLINE    ONLINE    node1      

ora.eons       ora.eons.type  ONLINE    ONLINE    node1      

ora.gsd        ora.gsd.type   OFFLINE   OFFLINE              

ora....network ora....rk.type ONLINE    ONLINE    node1      

ora....SM1.asm application    ONLINE    ONLINE    node1      

ora.node1.gsd  application    OFFLINE   OFFLINE              

ora.node1.ons  application    ONLINE    ONLINE    node1      

ora.node1.vip  ora....t1.type ONLINE    ONLINE    node1      

ora....SM2.asm application    ONLINE    ONLINE    node2      

ora.node2.gsd  application    OFFLINE   OFFLINE              

ora.node2.ons  application    ONLINE    ONLINE    node2      

ora.node2.vip  ora....t1.type ONLINE    ONLINE    node2      

ora.oc4j       ora.oc4j.type  OFFLINE   OFFLINE               

ora.ons        ora.ons.type   ONLINE    ONLINE    node1      

ora....ry.acfs ora....fs.type ONLINE    ONLINE    node1      

ora.scan1.vip  ora....ip.type ONLINE    ONLINE    node2      

ora.scan2.vip  ora....ip.type ONLINE    ONLINE    node1       

ora.scan3.vip  ora....ip.type ONLINE    ONLINE    node1
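
The SCAN resources can also be queried directly with srvctl (run as the grid user with the Grid home environment set):

[grid@node1 ~]$ srvctl config scan
[grid@node1 ~]$ srvctl status scan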

Check the SCAN IP bindings at the operating system level:

[root@node1 ~]# ifconfig

eth0      Link encap:Ethernet  HWaddr 00:0C:29:B4:C0:EC 

          inet addr:192.168.10.21  Bcast:192.168.10.255  Mask:255.255.255.0

          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1

          RX packets:40642 errors:0 dropped:0 overruns:0 frame:0

          TX packets:46434 errors:0 dropped:0 overruns:0 carrier:0

          collisions:0 txqueuelen:1000

          RX bytes:17812140 (16.9 MiB)  TX bytes:52569723 (50.1 MiB)

eth0:1    Link encap:Ethernet  HWaddr 00:0C:29:B4:C0:EC 

          inet addr:192.168.10.23  Bcast:192.168.10.255  Mask:255.255.255.0

          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1

eth0:2    Link encap:Ethernet  HWaddr 00:0C:29:B4:C0:EC 

          inet addr:192.168.10.26  Bcast:192.168.10.255  Mask:255.255.255.0

          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1

eth0:4    Link encap:Ethernet  HWaddr 00:0C:29:B4:C0:EC 

          inet addr:192.168.10.25  Bcast:192.168.10.255  Mask:255.255.255.0

          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1

eth1      Link encap:Ethernet  HWaddr 00:0C:29:B4:C0:F6 

          inet addr:192.168.20.21  Bcast:192.168.20.255  Mask:255.255.255.0

          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1

          RX packets:7918 errors:0 dropped:0 overruns:0 frame:0

          TX packets:5869 errors:0 dropped:0 overruns:0 carrier:0

          collisions:0 txqueuelen:1000

          RX bytes:3299222 (3.1 MiB)  TX bytes:2799710 (2.6 MiB)

lo        Link encap:Local Loopback 

          inet addr:127.0.0.1  Mask:255.0.0.0

          UP LOOPBACK RUNNING  MTU:16436  Metric:1

          RX packets:7788 errors:0 dropped:0 overruns:0 frame:0

          TX packets:7788 errors:0 dropped:0 overruns:0 carrier:0

          collisions:0 txqueuelen:0

          RX bytes:6709223 (6.3 MiB)  TX bytes:6709223 (6.3 MiB)

 

[root@node2 ~]# ifconfig

eth0      Link encap:Ethernet  HWaddr 00:0C:29:EB:9A:79 

          inet addr:192.168.10.22  Bcast:192.168.10.255  Mask:255.255.255.0

          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1

          RX packets:27532 errors:0 dropped:0 overruns:0 frame:0

          TX packets:338756 errors:0 dropped:0 overruns:0 carrier:0

          collisions:0 txqueuelen:1000

          RX bytes:14943876 (14.2 MiB)  TX bytes:24829697 (23.6 MiB)

eth0:1    Link encap:Ethernet  HWaddr 00:0C:29:EB:9A:79 

          inet addr:192.168.10.27  Bcast:192.168.10.255  Mask:255.255.255.0

          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1

eth0:2    Link encap:Ethernet  HWaddr 00:0C:29:EB:9A:79 

          inet addr:192.168.10.24  Bcast:192.168.10.255  Mask:255.255.255.0

          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1

eth1      Link encap:Ethernet  HWaddr 00:0C:29:EB:9A:83 

          inet addr:192.168.20.22  Bcast:192.168.20.255  Mask:255.255.255.0

          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1

          RX packets:2545950 errors:0 dropped:0 overruns:0 frame:0

          TX packets:6337 errors:0 dropped:0 overruns:0 carrier:0

          collisions:0 txqueuelen:1000

          RX bytes:3777570492 (3.5 GiB)  TX bytes:3247036 (3.0 MiB)

lo        Link encap:Local Loopback 

          inet addr:127.0.0.1  Mask:255.0.0.0

          UP LOOPBACK RUNNING  MTU:16436  Metric:1

          RX packets:6618 errors:0 dropped:0 overruns:0 frame:0

          TX packets:6618 errors:0 dropped:0 overruns:0 carrier:0

          collisions:0 txqueuelen:0

          RX bytes:7414196 (7.0 MiB)  TX bytes:7414196 (7.0 MiB)

 

IV. Creating the ASM Disk Groups

[grid@node1 ~]$ export DISPLAY=192.168.10.31:0.0

[grid@node1 ~]$ asmca

The steps in the GUI are as follows:

Click Create

Enter the disk group name DATA, select its disks, and create the DATA disk group for database storage

The DATA disk group is created successfully

Create the disk group for flashback and archived log storage: FRA

The disk group is created successfully

Check the CRS status again: two new lines appear, one per new disk group:

[grid@node1 ~]$ crs_stat -t

Name           Type           Target    State     Host        

------------------------------------------------------------

ora.CRS.dg     ora....up.type ONLINE    ONLINE    node1      

ora.DATA.dg    ora....up.type ONLINE    ONLINE    node1      

ora.FRA.dg     ora....up.type ONLINE    ONLINE    node1      

ora....N1.lsnr ora....er.type ONLINE    ONLINE    node2      

ora....N2.lsnr ora....er.type ONLINE    ONLINE    node1      

ora....N3.lsnr ora....er.type ONLINE    ONLINE    node1      

ora.asm        ora.asm.type   ONLINE    ONLINE    node1      

ora.eons       ora.eons.type  ONLINE    ONLINE    node1      

ora.gsd        ora.gsd.type   OFFLINE   OFFLINE              

ora....network ora....rk.type ONLINE    ONLINE    node1      

ora....SM1.asm application    ONLINE    ONLINE    node1      

ora.node1.gsd  application    OFFLINE   OFFLINE              

ora.node1.ons  application    ONLINE    ONLINE    node1      

ora.node1.vip  ora....t1.type ONLINE    ONLINE    node1      

ora....SM2.asm application    ONLINE    ONLINE    node2      

ora.node2.gsd  application    OFFLINE   OFFLINE              

ora.node2.ons  application    ONLINE    ONLINE    node2      

ora.node2.vip  ora....t1.type ONLINE    ONLINE    node2      

ora.oc4j       ora.oc4j.type  OFFLINE   OFFLINE              

ora.ons        ora.ons.type   ONLINE    ONLINE    node1      

ora....ry.acfs ora....fs.type ONLINE    ONLINE    node1      

ora.scan1.vip  ora....ip.type ONLINE    ONLINE    node2      

ora.scan2.vip  ora....ip.type ONLINE    ONLINE    node1      

ora.scan3.vip  ora....ip.type ONLINE    ONLINE    node1

V. Creating the Listener

[root@node1 ~]# su - grid

[grid@node1 ~]$ export DISPLAY=192.168.10.31:0.0

[grid@node1 ~]$ netca

The GUI steps:

Choose Listener configuration

Add a listener

Set the listener name

Select the protocol and network

Configure the listener port

Finish the configuration

 

Check the CRS status:

[grid@node1 ~]$ crs_stat -t

Name           Type           Target    State     Host       

------------------------------------------------------------

ora.CRS.dg     ora....up.type ONLINE    ONLINE    node1      

ora.DATA.dg    ora....up.type ONLINE    ONLINE    node1      

ora.FRA.dg     ora....up.type ONLINE    ONLINE    node1      

ora....ER.lsnr ora....er.type ONLINE    ONLINE    node1      

ora....N1.lsnr ora....er.type ONLINE    ONLINE    node2      

ora....N2.lsnr ora....er.type ONLINE    ONLINE    node1      

ora....N3.lsnr ora....er.type ONLINE    ONLINE    node1      

ora.asm        ora.asm.type   ONLINE    ONLINE    node1      

ora.eons       ora.eons.type  ONLINE    ONLINE    node1      

ora.gsd        ora.gsd.type   OFFLINE   OFFLINE              

ora....network ora....rk.type ONLINE    ONLINE    node1      

ora....SM1.asm application    ONLINE    ONLINE    node1      

ora....E1.lsnr application    ONLINE    ONLINE    node1      

ora.node1.gsd  application    OFFLINE   OFFLINE              

ora.node1.ons  application    ONLINE    ONLINE    node1      

ora.node1.vip  ora....t1.type ONLINE    ONLINE    node1      

ora....SM2.asm application    ONLINE    ONLINE    node2      

ora....E2.lsnr application    ONLINE    ONLINE    node2      

ora.node2.gsd  application    OFFLINE   OFFLINE               

ora.node2.ons  application    ONLINE    ONLINE    node2      

ora.node2.vip  ora....t1.type ONLINE    ONLINE    node2      

ora.oc4j       ora.oc4j.type  OFFLINE   OFFLINE              

ora.ons        ora.ons.type   ONLINE    ONLINE    node1      

ora....ry.acfs ora....fs.type ONLINE    ONLINE    node1      

ora.scan1.vip  ora....ip.type ONLINE    ONLINE    node2      

ora.scan2.vip  ora....ip.type ONLINE    ONLINE    node1      

ora.scan3.vip  ora....ip.type ONLINE    ONLINE    node1

 

VI. Installing the Oracle Software

su - oracle

./runInstaller

The GUI steps:

Skip the email address and click Next

Choose to install the database software only; the database will be created separately later

Choose a Real Application Clusters installation

Select all nodes

Set up SSH equivalence for the oracle user: enter the oracle user's password, click Setup, and the installer configures it automatically

SSH is configured successfully.

Set the language

Choose the edition to install; usually Enterprise Edition

Set the Oracle software installation directory

Set the Oracle operating system groups

Click Finish to begin the installation

After installation completes, run the script as root on both nodes

The script output is as follows:

[root@node1 ~]# /app/oracle/product/11.2.0/db_1/root.sh

Running Oracle 11g root.sh script...

The following environment variables are set as:

    ORACLE_OWNER= oracle

    ORACLE_HOME=  /app/oracle/product/11.2.0/db_1

Enter the full pathname of the local bin directory: [/usr/local/bin]:

The file "dbhome" already exists in /usr/local/bin.  Overwrite it? (y/n)

[n]: y

   Copying dbhome to /usr/local/bin ...

The file "oraenv" already exists in /usr/local/bin.  Overwrite it? (y/n)

[n]: y

   Copying oraenv to /usr/local/bin ...

The file "coraenv" already exists in /usr/local/bin.  Overwrite it? (y/n)

[n]: y

   Copying coraenv to /usr/local/bin ...

Entries will be added to the /etc/oratab file as needed by

Database Configuration Assistant when a database is created

Finished running generic part of root.sh script.

Now product-specific root actions will be performed.

Finished product-specific root actions.

[root@node2 ~]# /app/oracle/product/11.2.0/db_1/root.sh

Running Oracle 11g root.sh script...

The following environment variables are set as:

    ORACLE_OWNER= oracle

    ORACLE_HOME=  /app/oracle/product/11.2.0/db_1

Enter the full pathname of the local bin directory: [/usr/local/bin]:

The file "dbhome" already exists in /usr/local/bin.  Overwrite it? (y/n)

[n]: y

   Copying dbhome to /usr/local/bin ...

The file "oraenv" already exists in /usr/local/bin.  Overwrite it? (y/n)

[n]: y

   Copying oraenv to /usr/local/bin ...

The file "coraenv" already exists in /usr/local/bin.  Overwrite it? (y/n)

[n]: y

   Copying coraenv to /usr/local/bin ...

Entries will be added to the /etc/oratab file as needed by

Database Configuration Assistant when a database is created

Finished running generic part of root.sh script.

Now product-specific root actions will be performed.

Finished product-specific root actions.

 

VII. Creating the Database

[root@node1 etc]# su - oracle

[oracle@node1 ~]$ export DISPLAY=192.168.10.31:0.0

[oracle@node1 ~]$ dbca

The GUI steps:

Choose to create a RAC database

Choose Create a Database

Choose the General Purpose template

Select all nodes, enter the database name and SID prefix, then click Next

Enterprise Manager configuration: enable it or not as needed

Enter the passwords for the database accounts

Choose ASM as the storage type and select the disk group for the data files

Next

Set memory, connection, character set, and other parameters

Add redo log groups

Choose to generate the database creation scripts

The scripts are generated successfully

Start the creation

The database creation completes

Check the CRS status:

[grid@node1 ~]$ crs_stat -t

Name           Type           Target    State     Host       

------------------------------------------------------------

ora.CRS.dg     ora....up.type ONLINE    ONLINE    node1      

ora.DATA.dg    ora....up.type ONLINE    ONLINE    node1      

ora.FRA.dg     ora....up.type ONLINE    ONLINE    node1      

ora....ER.lsnr ora....er.type ONLINE    ONLINE    node1      

ora....N1.lsnr ora....er.type ONLINE    ONLINE    node2      

ora....N2.lsnr ora....er.type ONLINE    ONLINE    node1      

ora....N3.lsnr ora....er.type ONLINE    ONLINE    node1      

ora.asm        ora.asm.type   ONLINE    ONLINE    node1      

ora.eons       ora.eons.type  ONLINE    ONLINE    node1      

ora.gsd        ora.gsd.type   OFFLINE   OFFLINE              

ora....network ora....rk.type ONLINE    ONLINE    node1      

ora....SM1.asm application    ONLINE    ONLINE    node1      

ora....E1.lsnr application    ONLINE    ONLINE    node1      

ora.node1.gsd  application    OFFLINE   OFFLINE               

ora.node1.ons  application    ONLINE    ONLINE    node1      

ora.node1.vip  ora....t1.type ONLINE    ONLINE    node1      

ora....SM2.asm application    ONLINE    ONLINE    node2      

ora....E2.lsnr application    ONLINE    ONLINE    node2      

ora.node2.gsd  application    OFFLINE   OFFLINE              

ora.node2.ons  application    ONLINE    ONLINE    node2      

ora.node2.vip  ora....t1.type ONLINE    ONLINE    node2      

ora.oc4j       ora.oc4j.type  OFFLINE   OFFLINE               

ora.ons        ora.ons.type   ONLINE    ONLINE    node1      

ora.orcldb.db  ora....se.type ONLINE    ONLINE    node1      

ora....ry.acfs ora....fs.type ONLINE    ONLINE    node1      

ora.scan1.vip  ora....ip.type ONLINE    ONLINE    node2       

ora.scan2.vip  ora....ip.type ONLINE    ONLINE    node1      

ora.scan3.vip  ora....ip.type ONLINE    ONLINE    node1

 

VIII. Post-Installation Configuration

1. Client configuration

Configure the tnsnames.ora file:

ORCLDB =

  (DESCRIPTION =

    (ADDRESS = (PROTOCOL = TCP)(HOST = node-scan.ordb.com)(PORT = 1521))

    (CONNECT_DATA =

      (SERVER = DEDICATED)

      (SERVICE_NAME = orcldb)

    )

  )
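
A quick way to exercise the entry from a client (assuming an Oracle client with this tnsnames.ora in place; substitute the real password):

tnsping ORCLDB
sqlplus system/<password>@ORCLDB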

2. Archived log configuration

For this test the archived logs are stored in an ASM disk group.

Use the asmcmd tool to create archive directories in the FRA disk group:

#asmcmd

ASMCMD> cd fra

ASMCMD> pwd

+fra/

ASMCMD> mkdir arch  

ASMCMD> cd arch

ASMCMD> pwd

+fra/arch

ASMCMD> mkdir orcldb1 orcldb2

ASMCMD> ls

orcldb1/

orcldb2/

ASMCMD> ls -al

Type  Redund  Striped  Time             Sys  Name

                                        N    orcldb1/

                                        N    orcldb2/

SQL> alter system set log_archive_dest_1='LOCATION=+fra/arch/orcldb1' scope=both sid='orcldb1';

System altered.

SQL> alter system set log_archive_dest_1='LOCATION=+fra/arch/orcldb2' scope=both sid='orcldb2';

System altered.

Shut down the instances on both nodes:

SQL> shutdown immediate

Database closed.

Database dismounted.

ORACLE instance shut down.
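
Instead of running shutdown immediate in each instance, the whole RAC database can also be stopped in one step with srvctl (a sketch, using the database name created earlier):

[oracle@node1 ~]$ srvctl stop database -d orcldb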

Start either one of the instances to the mount state:

SQL> startup mount

ORACLE instance started.

Total System Global Area  471830528 bytes

Fixed Size                  2214456 bytes

Variable Size             289408456 bytes

Database Buffers          171966464 bytes

Redo Buffers                8241152 bytes

Database mounted.

Switch the database to archivelog mode:

SQL> alter database archivelog;

Database altered.

Open the database and check:

SQL> alter database open;

Database altered.

SQL> archive log list;

Database log mode              Archive Mode

Automatic archival             Enabled

Archive destination            +FRA/arch/orcldb1

Oldest online log sequence     8

Next log sequence to archive   10

Current log sequence           10

Switch the logs to verify archiving:

SQL> alter system switch logfile;

System altered.

SQL> /

System altered.

Verify with the asmcmd tool:

 [grid@node1 ~]$ asmcmd

ASMCMD> cd fra

ASMCMD> cd arch

ASMCMD> ls

orcldb1/

orcldb2/

ASMCMD> cd orcldb1

ASMCMD> ls

1_10_775268170.dbf

1_11_775268170.dbf

 

IX. Errors Encountered During Installation and Their Solutions

1. Running root.sh on the second node failed with the following error:

CRS-4000: Command Start failed, or completed with errors.

Command return code of 1 (256) from command: /app/11.2.0/grid/bin/crsctl start resource ora.asm -init

Start of resource "ora.asm -init" failed

Failed to start ASM

Failed to start Oracle Clusterware stack

Some searching showed this was my own oversight: the second node was cloned from the first, so its /etc/hosts was wrong and the local hostname entry was incorrect.

Solution:

Fix the /etc/hosts entry:

# that require network functionality will fail.

127.0.0.1               node2 localhost.localdomain localhost

Run:

[root@node2 install]# /app/11.2.0/grid/crs/install/roothas.pl -delete -force -verbose

Re-run the script:

[root@node2 install]# /app/11.2.0/grid/root.sh

It completes successfully.

2. Running root.sh on the second node reported the following errors:

INFO: ERROR:

INFO: PRVF-4664 : Found inconsistent name resolution entries for SCAN name "node-scan"

INFO: ERROR:

INFO: PRVF-4657 : Name resolution setup check for "node-scan" (IP address: 192.168.10.25) failed

INFO: ERROR:

INFO: PRVF-4664 : Found inconsistent name resolution entries for SCAN name "node-scan"

INFO: Verification of SCAN VIP and Listener setup failed

A search on Oracle's support site shows several possible causes for this; in my case it was because /etc/hosts was used to resolve the SCAN IP. If the SCAN IP can be pinged, the error can simply be ignored.

See MOS (Metalink) note ID 887471.1.

3. The Oracle software installation failed with the following error:

INS-35354

A MOS search showed that the cluster entry in the Oracle inventory file was wrong; fixing it resolves the error:

$ cat /home/grid/oraInventory/ContentsXML/inventory.xml

The relevant part is the HOME_LIST: the Grid home entry must carry CRS="true". A representative entry, with the paths and node names used in this install (the HOME NAME value is the installer's default and may differ), looks like:

<HOME_LIST>
  <HOME NAME="Ora11g_gridinfrahome1" LOC="/app/11.2.0/grid" TYPE="O" IDX="1" CRS="true">
    <NODE_LIST>
      <NODE NAME="node1"/>
      <NODE NAME="node2"/>
    </NODE_LIST>
  </HOME>
</HOME_LIST>

Adding CRS="true" to the Grid home entry in the inventory file on both nodes is all that is needed.

Reference: MOS note ID 1053393.1.

Source: http://blog.itpub.net/24668589/viewspace-716594/