Note: every step that must be performed on both hosts is marked as such; the transcripts below use rac1 as the template.

 

IP plan:

#Public IP

192.168.1.22  rac1

192.168.1.33  rac2

 

#Private IP

1.1.1.111 rac1-priv

1.1.1.222 rac2-priv

 

#Virtual IP

192.168.1.23 rac1-vip

192.168.1.34 rac2-vip

 

#Scan IP

192.168.1.77 rac-scan

 

Change the IP addresses (rac1, rac2)

[root@rac1 network-scripts]# vi ifcfg-eno16777736

Edit:

 

TYPE=Ethernet

BOOTPROTO=static                                                                                                                                                  

DEFROUTE=yes

PEERDNS=yes

PEERROUTES=yes

IPV4_FAILURE_FATAL=no

IPV6INIT=no

IPV6_AUTOCONF=yes

IPV6_DEFROUTE=yes

IPV6_PEERDNS=yes

IPV6_PEERROUTES=yes

IPV6_FAILURE_FATAL=no

NAME=eno16777736

UUID=eeaef3ba-b1fe-498f-95e8-3a982ec8931e

DEVICE=eno16777736

ONBOOT=yes

IPADDR=192.168.1.22

NETMASK=255.255.255.0

GATEWAY=192.168.1.1

DNS1=8.8.8.8


The lab environment has no config file for the second NIC, so copy the first one and edit it by hand:

[root@rac1 network-scripts]# cp ifcfg-eno16777736 ifcfg-eno33554984

[root@rac1 network-scripts]# vi ifcfg-eno33554984

Add:

TYPE=Ethernet

BOOTPROTO=static

DEFROUTE=yes

PEERDNS=yes

PEERROUTES=yes

IPV4_FAILURE_FATAL=no

IPV6INIT=no

IPV6_AUTOCONF=yes

IPV6_DEFROUTE=yes

IPV6_PEERDNS=yes

IPV6_PEERROUTES=yes

IPV6_FAILURE_FATAL=no

NAME=eno33554984

UUID=7b040b98-b78e-44fa-91e1-5e115f0bdd9f

DEVICE=eno33554984

ONBOOT=yes

IPADDR=1.1.1.111

NETMASK=255.255.255.0

GATEWAY=1.1.1.1

DNS1=8.8.8.8

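The two ifcfg files differ only in the NAME/DEVICE, IPADDR, and GATEWAY values, so the copy-and-edit step can also be scripted. This is my own sketch, not part of the original procedure; the `gen_ifcfg` helper name and the trimmed-down key set are assumptions:

```shell
#!/bin/sh
# Hypothetical helper (not from the walkthrough): emit a minimal static
# ifcfg file for a given device/IP/gateway combination.
gen_ifcfg() {
  dev=$1 ip=$2 gw=$3
  cat <<EOF
TYPE=Ethernet
BOOTPROTO=static
DEFROUTE=yes
IPV4_FAILURE_FATAL=no
IPV6INIT=no
NAME=$dev
DEVICE=$dev
ONBOOT=yes
IPADDR=$ip
NETMASK=255.255.255.0
GATEWAY=$gw
DNS1=8.8.8.8
EOF
}

# Example: the private-interconnect NIC on rac1
out=$(gen_ifcfg eno33554984 1.1.1.111 1.1.1.1)
echo "$out"
```

On a real node you would redirect the output to /etc/sysconfig/network-scripts/ifcfg-&lt;device&gt;, keeping the original UUID line if your tooling expects one.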

Repeat on rac2 with the addresses from your own IP plan.

 

Test that the two nodes can ping each other.

 

Set the hostnames

(already set during OS installation in this test environment)

 

Check the firewall status (rac1, rac2):

[root@rac1 ~]# systemctl status firewalld

Stop the firewall (current session):

[root@rac1 ~]# systemctl stop firewalld

Disable the firewall (persistent across reboots):

[root@rac1 ~]# systemctl disable firewalld

 

 

 

Edit the hosts file (rac1, rac2)

[root@rac2 network-scripts]# vi /etc/hosts

Add:

#Public IP

192.168.1.22  rac1

192.168.1.33  rac2

 

#Private IP

1.1.1.111 rac1-priv

1.1.1.222 rac2-priv

 

#Virtual IP

192.168.1.23 rac1-vip

192.168.1.34 rac2-vip

 

#Scan IP

192.168.1.77 rac-scan
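A typo in /etc/hosts is a common cause of cluster verification failures later on, so a quick completeness check can help. The checker below is my own sketch (the name list mirrors the IP plan above); here it is rehearsed against a deliberately incomplete scratch file rather than the live /etc/hosts:

```shell
#!/bin/sh
# Sketch: verify that each RAC hostname from the IP plan appears in a
# given hosts file; prints the missing names, if any.
check_hosts() {  # $1 = path to a hosts file
  missing=""
  for name in rac1 rac2 rac1-priv rac2-priv rac1-vip rac2-vip rac-scan; do
    grep -qw "$name" "$1" || missing="$missing $name"
  done
  if [ -z "$missing" ]; then echo "hosts OK"; else echo "missing:$missing"; fi
}

# Dry-run against a scratch file that only has the public names:
tmp=$(mktemp)
printf '192.168.1.22 rac1\n192.168.1.33 rac2\n' > "$tmp"
result=$(check_hosts "$tmp")
echo "$result"
rm -f "$tmp"
```

On the nodes, run `check_hosts /etc/hosts` instead of the scratch file.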

 

Restart the network service (rac1, rac2)

[root@rac1 ~]# service network restart

 

Configure kernel parameters (rac1, rac2): [root@rac1 ~]# vi /etc/sysctl.conf

Add:

# for oracle 11g

fs.aio-max-nr = 1048576

fs.file-max = 6815744

kernel.shmall = 2147483648

kernel.shmmax = 68719476736

kernel.shmmni = 4096

kernel.sem = 250 32000 100 128

net.ipv4.ip_local_port_range = 9000 65500

net.core.rmem_default = 262144

net.core.rmem_max = 4194304

net.core.wmem_default = 262144

net.core.wmem_max = 1048576

 

Apply the parameters (rac1, rac2): [root@rac1 ~]# /sbin/sysctl -p
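After `sysctl -p`, the live values can be compared against the expected ones. The following is my own dry-runnable sketch: the `get_param` stub is an assumption so the script runs anywhere; on the real hosts replace it with `get_param() { sysctl -n "$1"; }` and extend the key list to all of the parameters above.

```shell
#!/bin/sh
# Sketch: compare expected kernel parameters to reported values.
# Stubbed reader so the script can be rehearsed off-cluster; on a real
# node use:  get_param() { sysctl -n "$1"; }
get_param() {
  case $1 in
    fs.file-max)   echo 6815744 ;;
    fs.aio-max-nr) echo 1048576 ;;
    *)             echo unknown ;;
  esac
}

verify=$(
  for kv in fs.file-max=6815744 fs.aio-max-nr=1048576; do
    key=${kv%%=*}; want=${kv#*=}
    [ "$(get_param "$key")" = "$want" ] || echo "MISMATCH $key"
  done
  echo done
)
echo "$verify"
```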

 

Edit the limits file (rac1, rac2): [root@rac1 ~]# vi /etc/security/limits.conf

Add:

grid soft nproc 2047

grid hard nproc 16384

grid soft nofile 1024

grid hard nofile 65536

oracle soft nproc 2047

oracle hard nproc 16384

oracle soft nofile 1024

oracle hard nofile 65536
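A grep-based sanity check (my own sketch, not from the original) can confirm that all eight limits lines made it into the file; here it is rehearsed against a scratch copy rather than the live /etc/security/limits.conf:

```shell
#!/bin/sh
# Sketch: confirm grid and oracle both have the four limits entries.
limits_ok() {  # $1 = path to a limits file
  for u in grid oracle; do
    for l in "soft nproc 2047" "hard nproc 16384" \
             "soft nofile 1024" "hard nofile 65536"; do
      grep -q "^$u  *$l" "$1" || { echo "missing: $u $l"; return 1; }
    done
  done
  echo "limits OK"
}

tmp=$(mktemp)
cat > "$tmp" <<'EOF'
grid soft nproc 2047
grid hard nproc 16384
grid soft nofile 1024
grid hard nofile 65536
oracle soft nproc 2047
oracle hard nproc 16384
oracle soft nofile 1024
oracle hard nofile 65536
EOF
status=$(limits_ok "$tmp")
echo "$status"
rm -f "$tmp"
```

On the nodes, point `limits_ok` at /etc/security/limits.conf.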

 

 

Edit the login file (rac1, rac2): [root@rac1 ~]# vi /etc/pam.d/login

Add:

session    required     pam_limits.so

 

Edit the profile file (rac1, rac2): [root@rac1 ~]# vi /etc/profile

Add:

if [ $USER = "oracle" ] || [ $USER = "grid" ]; then
    if [ $SHELL = "/bin/ksh" ]; then
        ulimit -p 16384
        ulimit -n 65536
    else
        ulimit -u 16384 -n 65536
    fi
    umask 022
fi

 

Disable SELinux (rac1, rac2): [root@rac1 ~]# vi /etc/selinux/config

Change: SELINUX=disabled

Verify (once the change takes effect):

[root@rac1 ~]# getsebool

getsebool: SELinux is disabled

 

Reboot both hosts.

 

 

Create users and groups (rac1, rac2)

groupadd -g 501 oinstall

groupadd -g 502 dba

groupadd -g 503 oper

groupadd -g 504 asmadmin

groupadd -g 505 asmoper

groupadd -g 506 asmdba

useradd -g oinstall -G dba,asmdba,oper oracle

useradd -g oinstall -G asmadmin,asmdba,asmoper,oper,dba grid
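The group memberships can be verified afterwards by inspecting `id` output. The `has_group` helper and the canned id line below are my own illustration (your UIDs/GIDs will differ), so the check can be rehearsed even where the accounts don't exist:

```shell
#!/bin/sh
# Sketch: check whether an `id`-style output line lists a given group.
has_group() {  # $1 = id output, $2 = group name
  case $1 in *"($2)"*) return 0 ;; *) return 1 ;; esac
}

# Canned example of what `id oracle` should roughly look like:
id_oracle='uid=1001(oracle) gid=501(oinstall) groups=501(oinstall),502(dba),503(oper),506(asmdba)'

has_group "$id_oracle" asmdba && oracle_ok=yes || oracle_ok=no
echo "oracle in asmdba: $oracle_ok"
```

On the nodes, substitute `"$(id oracle)"` and `"$(id grid)"` for the canned string.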

 

Set the passwords for the grid and oracle users (rac1, rac2):

[root@rac1 ~]# passwd grid

[root@rac1 ~]# passwd oracle

 

Create the directories (rac1, rac2):

mkdir -p /u01/app/oracle

mkdir -p /u01/app/grid

mkdir -p /u01/app/11.2.0/grid

chown -R grid:oinstall /u01/app/grid

chown -R grid:oinstall /u01/app/11.2.0

chown -R oracle:oinstall /u01/app/oracle

mkdir -p /u01/app/oraInventory

chown -R grid:oinstall /u01/app/oraInventory

chmod -R 777 /u01/app/oraInventory

chmod -R 777 /u01
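The mkdir/chmod sequence can be rehearsed under a scratch root before touching the real filesystem. The sketch below (my own; it assumes GNU `stat`) checks only the modes — the chown steps need root plus the real grid/oracle accounts, so they are left out:

```shell
#!/bin/sh
# Sketch: replay the mkdir/chmod sequence under a temp root and verify.
root=$(mktemp -d)
mkdir -p "$root/u01/app/oracle" \
         "$root/u01/app/grid" \
         "$root/u01/app/11.2.0/grid" \
         "$root/u01/app/oraInventory"
chmod -R 777 "$root/u01"
mode=$(stat -c '%a' "$root/u01/app/oraInventory")
echo "oraInventory mode: $mode"
rm -rf "$root"
```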

 

 

Switch users and add environment variables (rac1, rac2)

[root@rac1 ~]# su - oracle

[oracle@rac1 ~]$ vi /home/oracle/.bash_profile

Add:

export ORACLE_SID=rac1

export ORACLE_BASE=/u01/app/oracle

export ORACLE_HOME=$ORACLE_BASE/product/11.2.0/dbhome_1

export LD_LIBRARY_PATH=$ORACLE_HOME/lib

export NLS_DATE_FORMAT="yyyy-mm-dd HH24:MI:SS"

export TMP=/tmp

export TMPDIR=$TMP

export PATH=$PATH:$ORACLE_HOME/bin

Note: on rac2 use: export ORACLE_SID=rac2

 

[oracle@rac1 ~]$ su - grid

Password:

[grid@rac1 ~]$ vim .bash_profile

Add:

export ORACLE_SID=+ASM1

export ORACLE_BASE=/u01/app/grid

export ORACLE_HOME=/u01/app/11.2.0/grid

export LD_LIBRARY_PATH=$ORACLE_HOME/lib

export NLS_DATE_FORMAT="yyyy-mm-dd HH24:MI:SS"

export PATH=$ORACLE_HOME/bin:$PATH

Note: on rac2 use: export ORACLE_SID=+ASM2
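A quick way to catch typos in these profiles is to source them in a subshell and check the derived paths. A sketch of my own follows; the heredoc stands in for /home/oracle/.bash_profile:

```shell
#!/bin/sh
# Sketch: source a profile fragment in a subshell and confirm that
# ORACLE_HOME really lives under ORACLE_BASE.
profile=$(mktemp)
cat > "$profile" <<'EOF'
export ORACLE_SID=rac1
export ORACLE_BASE=/u01/app/oracle
export ORACLE_HOME=$ORACLE_BASE/product/11.2.0/dbhome_1
EOF
check=$(
  . "$profile"
  case $ORACLE_HOME in
    "$ORACLE_BASE"/*) echo "ORACLE_HOME under ORACLE_BASE" ;;
    *)                echo "layout mismatch" ;;
  esac
)
echo "$check"
rm -f "$profile"
```

On the nodes, point the `.` (source) command at the real .bash_profile instead.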

 

 

 

Power off the VMs and edit each machine's vmx file in a text editor (do this for rac1 and rac2 separately). Add:

disk.EnableUUID="TRUE"

disk.locking = "false"

scsi1.shared = "TRUE"

diskLib.dataCacheMaxSize = "0"

diskLib.dataCacheMaxReadAheadSize = "0"

diskLib.DataCacheMinReadAheadSize = "0"

diskLib.dataCachePageSize = "4096"

diskLib.maxUnsyncedWrites = "0"

scsi1:1.deviceType = "disk"

scsi1:2.deviceType = "disk"

scsi1:3.deviceType = "disk"

scsi1:4.deviceType = "disk"

scsi1:5.deviceType = "disk"

scsi1:1.shared = "true"

scsi1:2.shared = "true"

scsi1:3.shared = "true"

scsi1:4.shared = "true"

scsi1:5.shared = "true"
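The per-disk scsi1:N lines are repetitive and easy to mistype, so generating them is safer. A sketch of my own, using the same keys as the block above:

```shell
#!/bin/sh
# Sketch: emit the deviceType/shared pairs for SCSI nodes 1:1 through 1:5.
vmx=$(
  for n in 1 2 3 4 5; do
    echo "scsi1:$n.deviceType = \"disk\""
    echo "scsi1:$n.shared = \"true\""
  done
)
echo "$vmx"
```

Paste the output into the .vmx file alongside the disk.* and diskLib.* settings.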

 

In the VM settings editor, add three disks named OCR_VOTE.vmdk, data.vmdk, and fra.vmdk (pay attention to the disk sizes).

 

 

Repeat for the remaining two disks.

Note: match the disk names and sizes, and choose virtual device nodes on SCSI controller 1 (SCSI 1:1, 1:2, 1:3).

 

Add the disks on rac2

Repeat for the remaining two disks; make sure the SCSI virtual device nodes match the corresponding disks on rac1.

 

Power on the VMs.

 

Check that the disks are visible (rac1, rac2)

[root@rac1 ~]# fdisk -l

Query the disk UUIDs (rac1, rac2)

If no UUID is returned, check the vmx edits: the file must contain disk.EnableUUID = "TRUE".

 

[root@rac1 ~]# /usr/lib/udev/scsi_id -g -u /dev/sdb

36000c2917d180b5daef20885fa95bfbe

[root@rac1 ~]# /usr/lib/udev/scsi_id -g -u /dev/sdc

36000c291b9457755e6bdafe27a6dd685

[root@rac1 ~]# /usr/lib/udev/scsi_id -g -u /dev/sdd

36000c29c91113958099603eb65a72ce3

 

Configure the udev rules file

/etc/udev/rules.d/99-oracle-asmdevices.rules

[root@rac1 rules.d]# vi 99-oracle-asmdevices.rules

 

Add:

KERNEL=="sd*[!0-9]", ENV{DEVTYPE}=="disk", SUBSYSTEM=="block", PROGRAM=="/usr/lib/udev/scsi_id -g -u -d $devnode", RESULT=="36000c2917d180b5daef20885fa95bfbe", RUN+="/bin/sh -c 'mknod /dev/asmdisk01 b $major $minor; chown grid:oinstall /dev/asmdisk01; chmod 0660 /dev/asmdisk01'"

 

KERNEL=="sd*[!0-9]", ENV{DEVTYPE}=="disk", SUBSYSTEM=="block", PROGRAM=="/usr/lib/udev/scsi_id -g -u -d $devnode", RESULT=="36000c291b9457755e6bdafe27a6dd685", RUN+="/bin/sh -c 'mknod /dev/asmdisk02 b $major $minor; chown grid:oinstall /dev/asmdisk02; chmod 0660 /dev/asmdisk02'"

 

KERNEL=="sd*[!0-9]", ENV{DEVTYPE}=="disk", SUBSYSTEM=="block", PROGRAM=="/usr/lib/udev/scsi_id -g -u -d $devnode", RESULT=="36000c29c91113958099603eb65a72ce3", RUN+="/bin/sh -c 'mknod /dev/asmdisk03 b $major $minor; chown grid:oinstall /dev/asmdisk03; chmod 0660 /dev/asmdisk03'"
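Since the three rules differ only in the WWID and the device number, they can be generated from the scsi_id results instead of hand-edited. This generator is my own sketch; the WWIDs below are the ones queried with scsi_id earlier, so substitute your own:

```shell
#!/bin/sh
# Sketch: emit one udev rule per WWID, numbering /dev/asmdisk01..03.
rules=$(
  i=1
  for wwid in 36000c2917d180b5daef20885fa95bfbe \
              36000c291b9457755e6bdafe27a6dd685 \
              36000c29c91113958099603eb65a72ce3; do
    printf 'KERNEL=="sd*[!0-9]", ENV{DEVTYPE}=="disk", SUBSYSTEM=="block", PROGRAM=="/usr/lib/udev/scsi_id -g -u -d $devnode", RESULT=="%s", RUN+="/bin/sh -c '\''mknod /dev/asmdisk0%d b $major $minor; chown grid:oinstall /dev/asmdisk0%d; chmod 0660 /dev/asmdisk0%d'\''"\n' \
      "$wwid" "$i" "$i" "$i"
    i=$((i+1))
  done
)
echo "$rules"
```

Redirect the output to /etc/udev/rules.d/99-oracle-asmdevices.rules on both nodes.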

 

 

Apply the new device names (rac1, rac2):

[root@rac1 rules.d]# /sbin/udevadm trigger --type=devices --action=change

 

Reload udev (rac1, rac2)

[root@rac1 rules.d]# /sbin/udevadm control --reload

 

To debug udev (rac1, rac2)

[root@rac1 rules.d]# /sbin/udevadm test /sys/block/sdb

[root@rac1 rules.d]# /sbin/udevadm test /sys/block/sdc

[root@rac1 rules.d]# /sbin/udevadm test /sys/block/sdd

 

Check that the bindings succeeded

[root@rac1 rules.d]# ls /dev/asm*

/dev/asmdisk01  /dev/asmdisk02  /dev/asmdisk03

 

Unpack the Grid installation package

As root, allow X clients so the graphical installer can display:

[root@rac1 u01]# xhost +

Switch user:

[root@rac1 u01]# su - grid

 

Set DISPLAY (using Xmanager to connect remotely and run the GUI):

[grid@rac1 ~]$ export DISPLAY=<remote machine IP>:0.0

 

Install Grid

[grid@rac1 ~]$ cd /u01/grid/

 

[grid@rac1 grid]$ ./runInstaller

 

Mount the installation DVD

[root@rac1 dev]# mount /dev/cdrom /mnt/

 

Install the required dependency packages (rac1, rac2)

[root@rac1 /]# cd /mnt/Packages

[root@rac1 Packages]# rpm -ivh elfutils-libelf-devel-0.163-3.el7.x86_64.rpm

[root@rac1 Packages]# rpm -ivh libaio-devel-0.3.109-13.el7.x86_64.rpm

The pdksh-5.2.14 package is not on the DVD; download it and install it:

[root@rac1 u01]# rpm -ivh pdksh-5.2.14-37.el5_8.1.x86_64.rpm

Run the root scripts (rac1, rac2)

[root@rac1 system]# cd /u01/app/oraInventory/

[root@rac1 system]# ./orainstRoot.sh

 

[root@rac1 system]# cd /u01/app/11.2.0/grid/

[root@rac1 system]# ./root.sh

root.sh fails with an error:

ohasd failed to start

Failed to start the Clusterware. Last 20 lines of the alert log follow:

2015-05-23 23:37:45.460:

[client(13782)]CRS-2101:The OLR was formatted using version 3.

Cause: RHEL 7 uses systemd rather than initd to run and restart processes, while root.sh starts the ohasd process the traditional initd way.

Fix:

Deconfigure so root.sh can be rerun:

/u01/app/11.2.0/grid/crs/install/roothas.pl -deconfig -force -verbose

 

 

1. As root, create the service file

 

#touch /usr/lib/systemd/system/ohas.service

 

#chmod 777 /usr/lib/systemd/system/ohas.service

 

2. Put the following into the newly created ohas.service file

 

[root@rac1 init.d]# cat /usr/lib/systemd/system/ohas.service

[Unit]

Description=Oracle High Availability Services

After=syslog.target

 

[Service]

Type=simple

ExecStart=/etc/init.d/init.ohasd run >/dev/null 2>&1

Restart=always

 

[Install]

WantedBy=multi-user.target

 

3. As root, run:

 

systemctl daemon-reload

systemctl enable ohas.service

systemctl start ohas.service

4. Check the service status

 

[root@rac1 init.d]# systemctl status ohas.service

ohas.service - Oracle High Availability Services

   Loaded: loaded (/usr/lib/systemd/system/ohas.service; enabled)

   Active: failed (Result: start-limit) since Fri 2015-09-11 16:07:32 CST; 1s ago

  Process: 5734 ExecStart=/etc/init.d/init.ohasd run >/dev/null 2>&1 Type=simple (code=exited, status=203/EXEC)

 Main PID: 5734 (code=exited, status=203/EXEC)

 

 

Sep 11 16:07:32 rac1 systemd[1]: Starting Oracle High Availability Services...

Sep 11 16:07:32 rac1 systemd[1]: Started Oracle High Availability Services.

Sep 11 16:07:32 rac1 systemd[1]: ohas.service: main process exited, code=exited, status=203/EXEC

Sep 11 16:07:32 rac1 systemd[1]: Unit ohas.service entered failed state.

Sep 11 16:07:32 rac1 systemd[1]: ohas.service holdoff time over, scheduling restart.

Sep 11 16:07:32 rac1 systemd[1]: Stopping Oracle High Availability Services...

Sep 11 16:07:32 rac1 systemd[1]: Starting Oracle High Availability Services...

Sep 11 16:07:32 rac1 systemd[1]: ohas.service start request repeated too quickly, refusing to start.

Sep 11 16:07:32 rac1 systemd[1]: Failed to start Oracle High Availability Services.

Sep 11 16:07:32 rac1 systemd[1]: Unit ohas.service entered failed state.

 

At this point the status is failed, because the /etc/init.d/init.ohasd file does not exist yet.

 

Now rerun root.sh; it should no longer fail with "ohasd failed to start".

 

If "ohasd failed to start" still appears, it may be because ohas.service did not start immediately after root.sh created init.ohasd. Work around it as follows:

 

While root.sh is running, keep checking /etc/init.d until the init.ohasd file appears, then immediately start the service by hand: systemctl start ohas.service
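That manual refresh can be scripted: poll for the file and start the service the moment it appears. A sketch of my own (the `wait_for_file` helper and the 600-second timeout are my choices), demonstrated here against a temp file instead of /etc/init.d/init.ohasd:

```shell
#!/bin/sh
# Sketch: block until a file exists (or a timeout expires).
wait_for_file() {  # $1 = path, $2 = timeout in seconds
  t=0
  while [ ! -e "$1" ]; do
    [ "$t" -ge "$2" ] && return 1
    sleep 1
    t=$((t + 1))
  done
}

# Dry-run demo: a background job creates the file after a second.
tmp=$(mktemp -u)
( sleep 1; : > "$tmp" ) &
wait_for_file "$tmp" 10 && demo=found || demo=timeout
echo "$demo"
rm -f "$tmp"
```

On the real node, run `wait_for_file /etc/init.d/init.ohasd 600 && systemctl start ohas.service` in a second terminal before launching root.sh.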

 

[root@rac1 init.d]# systemctl status ohas.service

ohas.service - Oracle High Availability Services

   Loaded: loaded (/usr/lib/systemd/system/ohas.service; enabled)

   Active: active (running) since Fri 2015-09-11 16:09:05 CST; 3s ago

 Main PID: 6000 (init.ohasd)

   CGroup: /system.slice/ohas.service

           6000 /bin/sh /etc/init.d/init.ohasd run >/dev/null 2>&1 Type=simple

           6026 /bin/sleep 10


As the grid user, check that the installation completed correctly (rac1, rac2)

[grid@rac1 grid]$ crs_stat -t

Configure the ASM disks

Run asmca as the grid user:

[grid@rac1 grid]$ asmca

Unpack the Oracle database package as root; install it as the oracle user.

 

[oracle@rac1 database]$ export DISPLAY=<remote machine IP>:0.0

[oracle@rac1 database]$ ./runInstaller


If the installer reports a link error on the "agent nmhs" target of ins_emagent.mk, fix it as the oracle user (rac1):

[oracle@rac1 lib]$ cd $ORACLE_HOME/sysman/lib

[oracle@rac1 lib]$ cp ins_emagent.mk ins_emagent.mk.bak

[oracle@rac1 lib]$ vi ins_emagent.mk

Type /NMECTL to search, and append -lnnz11 to that line (the first character is the letter l, the last two are the digit 1).
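The same edit can be applied with sed instead of vi. A sketch of my own, rehearsed on a scratch file; $(MK_EMAGENT_NMECTL) is the token the /NMECTL search lands on in ins_emagent.mk:

```shell
#!/bin/sh
# Sketch: append -lnnz11 to the $(MK_EMAGENT_NMECTL) line of a makefile.
mk=$(mktemp)
printf '\t$(MK_EMAGENT_NMECTL)\n' > "$mk"
sed -i 's/\$(MK_EMAGENT_NMECTL)/& -lnnz11/' "$mk"
patched=$(grep -c -- '-lnnz11' "$mk")
echo "patched lines: $patched"
rm -f "$mk"
```

On the node: `sed -i.bak 's/\$(MK_EMAGENT_NMECTL)/& -lnnz11/' $ORACLE_HOME/sysman/lib/ins_emagent.mk` (the .bak suffix replaces the manual cp).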

Then return to the installer and click Retry.

Run the root script (rac1, rac2):

[root@rac1 ~]# cd /u01/app/oracle/product/11.2.0/dbhome_1/

[root@rac1 ~]# ./root.sh

As the oracle user, create the database with DBCA (rac1)

Note: the ORACLE_SID must match the environment variable set earlier.

At this point the installation is complete; next, test whether the environment runs correctly.