ORACLE 12C RAC Part 01 - Environment Installation and Configuration

1. Disk space allocated to the temporary file system
    At least 1 GB free in the temporary directory; Oracle recommends 2 GB or more
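A quick way to check the available space (a minimal sketch):
df -h /tmp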

2. Swap space relative to RAM
   RAM between 4 GB and 16 GB: swap equal to the size of RAM
   RAM above 16 GB: 16 GB of swap
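To compare the current RAM and swap sizes (a minimal sketch):
grep -E 'MemTotal|SwapTotal' /proc/meminfo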

3. Verify that shared memory (/dev/shm) is mounted correctly and is large enough
     larger than SGA+PGA, or 50% of physical memory
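Checking the current /dev/shm mount and size (a minimal sketch; resizing it is covered in a later step):
df -h /dev/shm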

ORACLE 12C RAC Part 01 - Overall Planning

  1. Required software
Seq Type Item
1 Database linuxx64_12201_database.zip
2 Grid Infrastructure linuxx64_12201_grid_home.zip
3 Operating system CentOS-7-x86_64-Minimal-1708.iso
4 Virtualization VMware® Workstation 12 Pro 12.5.9 build-7535481
5 Tool Xmanager Enterprise 5
6 Tool rlwrap-0.42 (records command history for sqlplus, rman, etc.)
  2. IP address planning
    Starting with Oracle 11g, plan at least 7 IP addresses on 2 network interfaces. The public, VIP, and SCAN addresses share one subnet, while the private addresses are on separate subnets. Hostnames must not contain underscores: RAC_01, for example, is not allowed. Run ifconfig -a on both nodes to check that the network device names match. Before installation, the 6 public and private IPs (2 public + 4 private) should be pingable, while the remaining 5 (2 virtual + 3 SCAN) must not answer ping; see the verification sketch after the table.
Hostname Interface name Address type IP address Registered in
GNS01 XAG01 GNS 192.168.40.110 /etc/hosts
XAG01 XAG01 Public 192.168.40.111 /etc/hosts
XAG01 XAG01-VIP Virtual 192.168.40.112 /etc/hosts
XAG01 XAG01-PRI 1 Private 10.0.20.111 /etc/hosts
XAG01 XAG01-PRI 2 Private 10.0.30.112 /etc/hosts
XAG02 XAG02 Public 192.168.40.121 /etc/hosts
XAG02 XAG02-VIP Virtual 192.168.40.122 /etc/hosts
XAG02 XAG02-PRI 1 Private 10.0.20.121 /etc/hosts
XAG02 XAG02-PRI 2 Private 10.0.30.122 /etc/hosts
- XAG-SCAN SCAN 192.168.40.131 /etc/hosts
- XAG-SCAN SCAN 192.168.40.132 /etc/hosts
- XAG-SCAN SCAN 192.168.40.133 /etc/hosts
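A minimal pre-install connectivity check using the addresses from the table above; the first loop should report every address reachable, the second should report every address silent:
for ip in 192.168.40.111 192.168.40.121 10.0.20.111 10.0.30.112 10.0.20.121 10.0.30.122; do
  ping -c 1 -W 1 $ip > /dev/null && echo "$ip reachable (expected)" || echo "$ip UNREACHABLE - check network config"
done
for ip in 192.168.40.112 192.168.40.122 192.168.40.131 192.168.40.132 192.168.40.133; do
  ping -c 1 -W 1 $ip > /dev/null && echo "$ip ANSWERS - must be free before install" || echo "$ip silent (expected)"
done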
  3. Local OS disk partition planning
Seq Partition Size Purpose
1 /boot 200MB boot partition
2 /tmp 3G temporary space
3 /home 3G home directories for all users
4 swap 10G swap partition (2x RAM if RAM is under 8 GB, otherwise equal to RAM)
5 / 10G root partition
6 /u01 28G installation directory for oracle and grid

Seq Partition Size Local disk / shared storage Purpose
1 /boot 200MB local disk boot partition
2 /tmp 3G local disk temporary space
3 /home 3G local disk home directories for all users
4 swap 64G local disk swap = physical RAM (64 GB)
5 / 10G local disk root partition
6 /u01 220G local disk installation directory for oracle and grid
  4. Shared storage and ASM disk group planning
Seq Disk ASM disk name Disk group Size Purpose
1 sdb asm-diskb OCR 10G OCR + voting disk
2 sdc asm-diskc GIMR 150G GIMR
3 sdd asm-diskd DATA 8T data
4 sde asm-diske FRA 10G fast recovery area
  5. Install basic tools (both nodes)
[root@XAG02 network-scripts]# yum -y install nano vim wget curl net-tools lsof  zip unzip

yum -y install perl autoconf

wget ftp://ftp.pbone.net/mirror/ftp5.gwdg.de/pub/opensuse/repositories/home:/Ledest:/misc/CentOS_7/x86_64/rlwrap-0.42-1.1.x86_64.rpm
or
[root@localhost src]# wget http://www.rpmfind.net/linux/epel/6/x86_64/Packages/r/rlwrap-0.42-1.el6.x86_64.rpm

rpm -ivh rlwrap-0.42-1.1.x86_64.rpm
or
rpm -ivh rlwrap-0.42-1.el6.x86_64.rpm
  6. Configure the network interfaces
# On XAG01
[root@XAG01 network-scripts]# cat ifcfg-ens33
TYPE="Ethernet"
PROXY_METHOD="none"
BROWSER_ONLY="no"
DEFROUTE="yes"
IPV4_FAILURE_FATAL="no"
NAME="ens33"
UUID="7b81bdac-0cf8-4962-9871-cef9a69de18d"
DEVICE="ens33"
ONBOOT="yes"
BOOTPROTO="static"
IPADDR="192.168.40.111"
GATEWAY="192.168.40.2"
NETMASK="255.255.255.0"
DNS1="192.168.40.111"

[root@XAG01 network-scripts]# cat ifcfg-ens34
TYPE=Ethernet
PROXY_METHOD=none
BROWSER_ONLY=no
DEFROUTE=yes
IPV4_FAILURE_FATAL=no
NAME=ens34
UUID=bbd84d11-8a9c-410c-b145-a993014c0156
DEVICE=ens34
ONBOOT=yes
BOOTPROTO=static
IPADDR="10.0.20.111"
GATEWAY="0.0.0.0"
NETMASK="255.255.255.0"
DNS1="192.168.40.111"

[root@XAG01 network-scripts]# cat ifcfg-ens35
TYPE=Ethernet
PROXY_METHOD=none
BROWSER_ONLY=no
DEFROUTE=yes
IPV4_FAILURE_FATAL=no
NAME=ens35
UUID=252853c8-c1ca-4a93-8358-b25353d3f3e2
DEVICE=ens35
ONBOOT=yes
BOOTPROTO=static
IPADDR="10.0.30.112"
GATEWAY="0.0.0.0"
NETMASK="255.255.255.0"
DNS1="192.168.40.111"

[root@XAG01 network-scripts]# service network restart
Restarting network (via systemctl):                        [  OK  ]

# On XAG02
[root@XAG02 network-scripts]# cat ifcfg-ens33
TYPE="Ethernet"
PROXY_METHOD="none"
BROWSER_ONLY="no"
DEFROUTE="yes"
IPV4_FAILURE_FATAL="no"
NAME="ens33"
UUID="e5518019-0d2e-4e24-90fc-543bea0a67f0"
DEVICE="ens33"
ONBOOT="yes"
BOOTPROTO=static
IPADDR="192.168.40.121"
GATEWAY="192.168.40.2"
NETMASK="255.255.255.0"
DNS1="192.168.40.111"

[root@XAG02 network-scripts]# cat ifcfg-ens34
TYPE=Ethernet
PROXY_METHOD=none
BROWSER_ONLY=no
DEFROUTE=yes
IPV4_FAILURE_FATAL=no
NAME=ens34
UUID=7c6b41a1-1941-4a9c-b9b7-5c98ac7ea658
DEVICE=ens34
ONBOOT=yes
BOOTPROTO=static
IPADDR="10.0.20.121"
GATEWAY="0.0.0.0"
NETMASK="255.255.255.0"
DNS1="192.168.40.111"

[root@XAG02 network-scripts]# cat ifcfg-ens35
TYPE=Ethernet
PROXY_METHOD=none
BROWSER_ONLY=no
DEFROUTE=yes
IPV4_FAILURE_FATAL=no
NAME=ens35
UUID=4a6aa4b2-0fa8-490c-9f8b-1e4f02d709ab
DEVICE=ens35
ONBOOT=yes
BOOTPROTO=static
IPADDR="10.0.30.122"
GATEWAY="0.0.0.0"
NETMASK="255.255.255.0"
DNS1="192.168.40.111"

[root@XAG02 network-scripts]# service network restart
Restarting network (via systemctl):                        [  OK  ]
  7. Configure hostnames
#XAG01
[root@XAG01 network-scripts]# hostname
XAG01.MP.COM
[root@XAG01 network-scripts]# cat /etc/sysconfig/network
# Created by anaconda
NETWORKING=yes
NOZEROCONF=yes
HOSTNAME=XAG01.MP.COM
[root@XAG01 network-scripts]# cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6

#Public IP
192.168.40.111   XAG01.MP.COM
192.168.40.121  XAG02.MP.COM

#Private IP
10.0.20.111  XAG01-PRI1.MP.COM
10.0.30.112  XAG01-PRI2.MP.COM
10.0.20.121  XAG02-PRI1.MP.COM
10.0.30.122  XAG02-PRI2.MP.COM

#Virtual IP
192.168.40.112  XAG01-VIP.MP.COM
192.168.40.122  XAG02-VIP.MP.COM

#Scan IP
192.168.40.131  XAG-SCAN.MP.COM
192.168.40.132  XAG-SCAN.MP.COM
192.168.40.133  XAG-SCAN.MP.COM

#XAG02
[root@XAG02 network-scripts]# hostname
XAG02.MP.COM
[root@XAG02 network-scripts]# cat /etc/sysconfig/network
# Created by anaconda
NETWORKING=yes
NOZEROCONF=yes
HOSTNAME=XAG02.MP.COM
[root@XAG02 network-scripts]# cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6

#Public IP
192.168.40.111   XAG01.MP.COM
192.168.40.121  XAG02.MP.COM

#Private IP
10.0.20.111  XAG01-PRI1.MP.COM
10.0.30.112  XAG01-PRI2.MP.COM
10.0.20.121  XAG02-PRI1.MP.COM
10.0.30.122  XAG02-PRI2.MP.COM

#Virtual IP
192.168.40.112  XAG01-VIP.MP.COM
192.168.40.122  XAG02-VIP.MP.COM

#Scan IP
192.168.40.131  XAG-SCAN.MP.COM
192.168.40.132  XAG-SCAN.MP.COM
192.168.40.133  XAG-SCAN.MP.COM
  8. Adjust the firewall
# CentOS 7 was installed with the minimal option; after configuring the network, stop firewalld first
[root@XAG02 /]# systemctl stop firewalld.service

# Disable firewalld at boot
[root@XAG02 /]# systemctl disable firewalld.service
Removed symlink /etc/systemd/system/multi-user.target.wants/firewalld.service.
Removed symlink /etc/systemd/system/dbus-org.fedoraproject.FirewallD1.service.

# Install the iptables service via yum
[root@XAG02 /]# yum install iptables-services

# Edit the firewall configuration file
[root@XAG02 /]# vim /etc/sysconfig/iptables

# Add the following rules below the default port-22 rule:
-A INPUT -m state --state NEW -m tcp -p tcp --dport 80 -j ACCEPT
-A INPUT -m state --state NEW -m tcp -p tcp --dport 1521 -j ACCEPT
-A INPUT -m state --state NEW -m tcp -p tcp --dport 1525 -j ACCEPT
-A INPUT -m state --state NEW -m tcp -p tcp --dport 1158 -j ACCEPT
-A INPUT -m state --state NEW -m udp -p udp --dport 161 -j ACCEPT
# Finally, restart the firewall for the changes to take effect
[root@XAG02 /]# systemctl restart iptables.service

# Check the firewall status
[root@XAG02 /]# systemctl status iptables.service

# Enable the firewall at boot
[root@XAG02 /]# systemctl enable iptables.service
  9. Disable SELinux: edit the SELinux configuration file
[root@XAG02 /]# vim /etc/selinux/config
# Comment out the following two lines
#SELINUX=enforcing
#SELINUXTYPE=targeted
# Add this line
SELINUX=disabled

# Reboot the system
[root@XAG02 ~]# shutdown -r now
  10. Install the Oracle 12c dependency packages (any one of the following yum package lists can be used)
[root@XAG143 java]# 
yum -y install binutils compat-libcap1 compat-libstdc++-33 compat-libstdc++-33*.i686 elfutils-libelf-devel gcc gcc-c++ glibc*.i686 glibc glibc-devel glibc-devel*.i686 ksh libgcc*.i686 libgcc libstdc++ libstdc++*.i686 libstdc++-devel libstdc++-devel*.i686 libaio libaio*.i686 libaio-devel libaio-devel*.i686 make sysstat unixODBC unixODBC*.i686 unixODBC-devel unixODBC-devel*.i686 libXp

yum install binutils gcc gcc-c++  compat-libstdc++-33 glibc  glibc.i686  glibc-devel  ksh libgcc.i686  libstdc++-devel  libaio  libaio.i686  libaio-devel  libaio-devel.i686  libXtst  libXtst.i686  libX11  libX11.i686 libXau  libXau.i686  libxcb  libxcb.i686  libXi  libXi.i686  make  sysstat  compat-libcap1 -y


yum install binutils compat-libcap1 compat-libstdc++-33 e2fsprogs e2fsprogs-libs glibc glibc-devel ksh libaio-devel libaio libgcc libstdc++ libstdc++-devel libxcb libX11 libXau libXi libXtst make net-tools nfs-utils smartmontools sysstat -y

[root@XAG143 java]# yum -y install smartmontools
[root@XAG143 java]# yum -y install libXrender

# Install cvuqdisk (run on both nodes); in theory this step is handled automatically during the grid software installation

# cd /u01/app/12.2.0/grid/cv/rpm
# export CVUQDISK_GRP=asmadmin
# rpm -ivh cvuqdisk-1.0.10-1.rpm
  11. Modify system parameters
[root@XAG143 java]# vim /etc/security/limits.conf

grid soft nofile 16384
grid hard nofile 65536
grid soft stack 10240
grid hard stack 32768
grid soft nproc 16384
grid hard nproc 16384

oracle  soft  nproc  16384
oracle  hard  nproc  16384
oracle  soft  nofile  16384
oracle  hard  nofile  65536
oracle  soft  stack  10240
oracle  hard  stack  32768
oracle soft memlock 3145728
oracle hard memlock 3145728
  12. Configure sysctl.conf
# kernel.shmall is derived as follows
# Use getconf to obtain the page size, which is used to compute a reasonable SHMALL value:
[root@fdb2 ~]# getconf PAGE_SIZE
4096
# Check the physical memory size
[root@fdb2 ~]# grep MemTotal /proc/meminfo
MemTotal:       4895768 kB
SQL> select 4895768*1024/4096 as shmall  from dual;

    shmall 
----------
        1223942

 SQL>  select 4895768*1024*0.8 as shmmax from dual;

    SHMMAX
----------
    4010613145
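The same values can also be computed directly in the shell, with no database needed (a minimal sketch; shmall is in pages, shmmax is 80% of RAM in bytes):
PAGE_SIZE=$(getconf PAGE_SIZE)
MEM_KB=$(awk '/MemTotal/ {print $2}' /proc/meminfo)
echo "kernel.shmall = $(( MEM_KB * 1024 / PAGE_SIZE ))"
echo "kernel.shmmax = $(( MEM_KB * 1024 * 8 / 10 ))"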
-----------------------------------
[root@XAG143 java]# vim /etc/sysctl.conf
fs.file-max = 6815744
kernel.sem = 250 32000 100 128
kernel.shmmni = 4096
kernel.shmall = 1223942
kernel.shmmax = 4010613145
kernel.panic_on_oops = 1
net.core.rmem_default = 262144
net.core.rmem_max = 4194304
net.core.wmem_default = 262144
net.core.wmem_max = 1048576
net.ipv4.conf.all.rp_filter = 2
net.ipv4.conf.default.rp_filter = 2
fs.aio-max-nr = 1048576
net.ipv4.ip_local_port_range = 9000 65500
[root@XAG143 java]# vim /etc/pam.d/login
#add
session    required     pam_limits.so

# Adjust ulimit:
[root@XAG143 java]# vim /etc/profile
# Add limits for the oracle and grid users:
if [ $USER = "oracle" ] || [ $USER = "grid" ]; then
    if [ $SHELL = "/bin/ksh" ]; then
        ulimit -p 16384
        ulimit -n 65536
    else
        ulimit -u 16384 -n 65536
    fi
fi

[root@XAG143 java]# source /etc/profile
  13. Configure NTP
The operating system's NTP service can be used, or Oracle's own Cluster Time Synchronization Service (CTSS); if NTP is not enabled, Oracle automatically activates its own ctssd process.
Starting with Oracle 11gR2 RAC, CTSS is used to synchronize time across the nodes. If the installer finds the NTP protocol inactive, the Cluster Time Synchronization Service is installed in active mode and synchronizes the time across all nodes. If NTP is found to be configured, CTSS starts in observer mode and Oracle Clusterware performs no active time synchronization in the cluster.

Run as root on both nodes:
service ntpd stop
chkconfig ntpd off
mv /etc/ntp.conf /etc/ntp.conf.bak
rm -f /var/run/ntpd.pid
service ntpd status
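After Grid Infrastructure is installed, the CTSS mode can be confirmed as the grid user; with NTP removed as above it should report active mode (a minimal check):
$ crsctl check ctss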
  14. Configure the size of /dev/shm
vim /etc/fstab

Change the size of /dev/shm
by editing this line in /etc/fstab. Default:
tmpfs /dev/shm tmpfs defaults 0 0
Change it to:
tmpfs /dev/shm tmpfs defaults,size=5G 0 0
The size parameter takes G as a unit, e.g. size=1G.
Remount /dev/shm to apply the change:
# mount -o remount /dev/shm
Verify the change immediately with "df -h".
  15. Check for and remove OpenJDK, then install JDK 1.8
[root@XAG143 ~]# java -version
-bash: java: command not found
[root@XAG143 ~]#  rpm -qa | grep java
# If any packages are found, remove them with: rpm -e --nodeps <package>
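A minimal sketch that removes every matching package in one pass (driven by the rpm -qa output above):
for p in $(rpm -qa | grep java); do rpm -e --nodeps "$p"; done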

# Install the JDK
[root@XAG143 ~]# mkdir /u01/java -p
[root@XAG143 java]# cd /u01/java/
[root@XAG143 java]# ls
jdk1.8.0_181  jdk-8u181-linux-x64.tar.gz
[root@XAG143 java]# tar -zxvf jdk-8u181-linux-x64.tar.gz
# Set the environment variables
[root@XAG143 java]# vim /etc/profile

Add the following to /etc/profile:
#set java environment
JAVA_HOME=/u01/java/jdk1.8.0_181
JRE_HOME=/u01/java/jdk1.8.0_181/jre
CLASS_PATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar:$JRE_HOME/lib
PATH=$PATH:$JAVA_HOME/bin:$JRE_HOME/bin
export JAVA_HOME JRE_HOME CLASS_PATH PATH

# Apply the changes:
[root@XAG143 java]# source /etc/profile
# Check the JDK version with java -version:
[root@XAG143 java]# java -version
java version "1.8.0_181"
Java(TM) SE Runtime Environment (build 1.8.0_181-b13)
Java HotSpot(TM) 64-Bit Server VM (build 25.181-b13, mixed mode)
  16. Create groups and users
# Create the groups
groupadd -g 1000 oinstall
groupadd -g 1001 dba
groupadd -g 1002 oper
groupadd -g 1003 asmadmin
groupadd -g 1004 asmdba
groupadd -g 1005 asmoper

# Create the grid and oracle users:
useradd -u 1000 -g oinstall -G asmadmin,asmdba,asmoper,dba,oper  -d /home/grid -m grid
useradd -u 1001 -g oinstall -G dba,asmdba,oper  -d /home/oracle -m oracle

# Set passwords for the oracle and grid users:
passwd oracle
passwd grid
# Make the passwords never expire
chage -M -1 oracle
chage -M -1 grid
chage -l oracle
chage -l grid
# Verify
id grid
id oracle
  17. Create the installation directories
mkdir -p /u01/setup
mkdir -p /u01/app/oracle
mkdir -p /u01/app/grid
mkdir -p /u01/app/12.2.0/grid
mkdir -p /u01/app/oracle/product/12.2.0/db_1
mkdir -p /u01/app/oraInventory

chown -R grid:oinstall /u01/app/grid
chown -R grid:oinstall /u01/app/12.2.0
chown -R oracle:oinstall /u01/app/oracle
chown -R oracle:oinstall /u01/setup

chmod -R 775 /u01
chown -R grid:oinstall /u01/app/oraInventory
chmod -R 775 /u01/app/oraInventory
  18. Configure the grid and oracle users' environment variable files (both nodes)
[root@XAG01 java]# su - oracle

[oracle@XAG01 ~]$ vim .bash_profile 
[oracle@XAG01 ~]$ cat .bash_profile 
# .bash_profile
# Get the aliases and functions
if [ -f ~/.bashrc ]; then
    . ~/.bashrc
fi
# User specific environment and startup programs

PATH=$PATH:$HOME/.local/bin:$HOME/bin
export PATH

umask 022
export DISPLAY=10.0.0.85:0.0
export ORACLE_SID=MYRAC11
export ORACLE_BASE=/u01/app/oracle
export ORACLE_HOME=$ORACLE_BASE/product/12.2.0/db_1
export LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib
export NLS_DATE_FORMAT="YYYY-MM-DD HH24:MI:SS"
export TMP=/tmp
export TMPDIR=$TMP
export PATH=$ORACLE_HOME/bin:$ORACLE_HOME/OPatch:$PATH
 
export EDITOR=vi
export TNS_ADMIN=$ORACLE_HOME/network/admin
export ORACLE_PATH=.:$ORACLE_BASE/dba_scripts/sql:$ORACLE_HOME/rdbms/admin
export SQLPATH=$ORACLE_HOME/sqlplus/admin
export JAVA_HOME=/u01/java/jdk1.8.0_181

#export NLS_LANG="SIMPLIFIED CHINESE_CHINA.ZHS16GBK" --AL32UTF8 SELECT userenv('LANGUAGE') db_NLS_LANG FROM DUAL;

export NLS_LANG="American_America.AL32UTF8"
alias sqlplus='rlwrap sqlplus'
alias rman='rlwrap rman'
alias asmcmd='rlwrap asmcmd'

[grid@XAG01 ~]$ cat .bash_profile
# .bash_profile
# Get the aliases and functions
if [ -f ~/.bashrc ]; then
    . ~/.bashrc
fi
# User specific environment and startup programs

PATH=$PATH:$HOME/.local/bin:$HOME/bin
export PATH

umask 022
export DISPLAY=10.0.0.85:0.0
export ORACLE_SID=+ASM1
export ORACLE_BASE=/u01/app/grid
export ORACLE_HOME=/u01/app/12.2.0/grid
export LD_LIBRARY_PATH=$ORACLE_HOME/lib
export NLS_DATE_FORMAT="YYYY-MM-DD HH24:MI:SS"
export PATH=$ORACLE_HOME/bin:$PATH
alias sqlplus='rlwrap sqlplus'
alias asmcmd='rlwrap asmcmd'

-----------------------------------------------------------------
# Note: the instance names must be adjusted accordingly on the other node:
# oracle: export ORACLE_SID=MYRAC12
# grid:   export ORACLE_SID=+ASM2
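A quick sanity check that the profiles take effect (a minimal sketch; run as root on each node):
su - grid -c 'echo $ORACLE_SID $ORACLE_HOME'
su - oracle -c 'echo $ORACLE_SID $ORACLE_HOME'
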
  19. Add the shared disks
C:\>cd  C:\Program Files (x86)\VMware\VMware Workstation

vmware-vdiskmanager.exe -c -s 6g -a lsilogic -t 2 "F:\ORACLEVM\RAC\sharedisk\ocr_vote.vmdk"

vmware-vdiskmanager.exe -c -s 20g -a lsilogic -t 2 "F:\ORACLEVM\RAC\sharedisk\data.vmdk"

vmware-vdiskmanager.exe -c -s 10g -a lsilogic -t 2 "F:\ORACLEVM\RAC\sharedisk\fra.vmdk"

vmware-vdiskmanager.exe -c -s 40g -a lsilogic -t 2 "F:\ORACLEVM\RAC\sharedisk\gimr.vmdk"

# Modify the VM configuration file (both nodes)
# If an error reports that some parameter does not exist, set the virtual machine compatibility to Workstation 9.0
# Shut down both virtual machines and open <vm name>.vmx in a text editor; both nodes need the change. Add the following, where scsiX:Y means device Y on bus X:
#shared disks configure
disk.EnableUUID="TRUE"
disk.locking = "FALSE"
scsi1.shared = "TRUE"
diskLib.dataCacheMaxSize = "0"
diskLib.dataCacheMaxReadAheadSize = "0"
diskLib.dataCacheMinReadAheadSize = "0"
diskLib.dataCachePageSize= "4096"
diskLib.maxUnsyncedWrites = "0"

scsi1.present = "TRUE"
scsi1.virtualDev = "lsilogic"
scsi1.sharedBus = "VIRTUAL"

scsi1:0.present = "TRUE"
scsi1:0.mode = "independent-persistent"
scsi1:0.fileName = "F:\ORACLEVM\RAC\sharedisk\ocr_vote.vmdk"
scsi1:0.deviceType = "disk"
scsi1:0.redo = ""

scsi1:1.present = "TRUE"
scsi1:1.mode = "independent-persistent"
scsi1:1.fileName = "F:\ORACLEVM\RAC\sharedisk\data.vmdk"
scsi1:1.deviceType = "disk"
scsi1:1.redo = ""

scsi1:2.present = "TRUE"
scsi1:2.mode = "independent-persistent"
scsi1:2.fileName = "F:\ORACLEVM\RAC\sharedisk\fra.vmdk"
scsi1:2.deviceType = "disk"
scsi1:2.redo = ""

scsi1:3.present = "TRUE"
scsi1:3.mode = "independent-persistent"
scsi1:3.fileName = "F:\ORACLEVM\RAC\sharedisk\gimr.vmdk"
scsi1:3.deviceType = "disk"
scsi1:3.redo = ""
  20. Reopen VMware Workstation
    Close VMware Workstation and reopen it. If the shared disks are loaded correctly, the configuration is right. Pay special attention to the second node: the hard disk and network adapter configuration must be identical on both nodes; if they differ, recheck the configuration.
#XAG01
[root@XAG01 ~]# fdisk -l | grep /dev/sd
Disk /dev/sda: 53.7 GB, 53687091200 bytes, 104857600 sectors
/dev/sda1   *        2048      391167      194560   83  Linux
/dev/sda2          391168   104857599    52233216   8e  Linux LVM
Disk /dev/sdb: 6442 MB, 6442450944 bytes, 12582912 sectors
Disk /dev/sdc: 21.5 GB, 21474836480 bytes, 41943040 sectors
Disk /dev/sdd: 10.7 GB, 10737418240 bytes, 20971520 sectors

[root@XAG01 ~]# fdisk -l | grep  "Disk /dev/sd"
Disk /dev/sda: 53.7 GB, 53687091200 bytes, 104857600 sectors
Disk /dev/sdb: 6442 MB, 6442450944 bytes, 12582912 sectors
Disk /dev/sdc: 21.5 GB, 21474836480 bytes, 41943040 sectors
Disk /dev/sdd: 10.7 GB, 10737418240 bytes, 20971520 sectors

#XAG02
[root@XAG02 ~]# fdisk -l | grep /dev/sd
Disk /dev/sda: 53.7 GB, 53687091200 bytes, 104857600 sectors
/dev/sda1   *        2048      391167      194560   83  Linux
/dev/sda2          391168   104857599    52233216   8e  Linux LVM
Disk /dev/sdc: 21.5 GB, 21474836480 bytes, 41943040 sectors
Disk /dev/sdd: 10.7 GB, 10737418240 bytes, 20971520 sectors
Disk /dev/sdb: 6442 MB, 6442450944 bytes, 12582912 sectors

[root@XAG02 ~]# fdisk -l | grep  "Disk /dev/sd"
Disk /dev/sda: 53.7 GB, 53687091200 bytes, 104857600 sectors
Disk /dev/sdc: 21.5 GB, 21474836480 bytes, 41943040 sectors
Disk /dev/sdd: 10.7 GB, 10737418240 bytes, 20971520 sectors
Disk /dev/sdb: 6442 MB, 6442450944 bytes, 12582912 sectors

  21. Set up the shared disks
    a. Determine the scsi_id values used for the udev bindings
[root@XAG01 ~]# cat /etc/issue
\S
Kernel \r on an \m

[root@XAG01 ~]# find / -name scsi_id
/usr/lib/udev/scsi_id

#XAG01
[root@XAG01 ~]# /usr/lib/udev/scsi_id -g -u  /dev/sdb
36000c29d9a7b688763da26fb129281d1
[root@XAG01 ~]# /usr/lib/udev/scsi_id -g -u  /dev/sdc
36000c29466d4a3f445207f6f1e92dc26
[root@XAG01 ~]# /usr/lib/udev/scsi_id -g -u  /dev/sdd
36000c297d10bed975399e41521217083
[root@XAG01 ~]# /usr/lib/udev/scsi_id -g -u  /dev/sde
36000c29827f80a6688f28aac35343349

#XAG02
[root@XAG02 ~]# /usr/lib/udev/scsi_id -g -u  /dev/sdb
36000c29d9a7b688763da26fb129281d1
[root@XAG02 ~]# /usr/lib/udev/scsi_id -g -u  /dev/sdc
36000c29466d4a3f445207f6f1e92dc26
[root@XAG02 ~]# /usr/lib/udev/scsi_id -g -u  /dev/sdd
36000c297d10bed975399e41521217083
[root@XAG02 ~]# /usr/lib/udev/scsi_id -g -u  /dev/sde
36000c29827f80a6688f28aac35343349
# Both nodes should return identical values.

b. Create and configure the udev rules file (both nodes) -- grants ownership of the shared-storage LUNs

[root@XAG01 /]# 
for i in b c d e;
 do
 echo "KERNEL==\"sd*\", SUBSYSTEM==\"block\", PROGRAM==\"/usr/lib/udev/scsi_id --whitelisted --replace-whitespace --device=/dev/\$name\",RESULT==\"`/usr/lib/udev/scsi_id --whitelisted --replace-whitespace --device=/dev/sd$i`\",SYMLINK+=\"asm-disk$i\",OWNER=\"grid\",GROUP=\"asmadmin\",MODE=\"0660\""
 done
------------------------------------------------------------------------------------------------------------
KERNEL=="sd*", SUBSYSTEM=="block", PROGRAM=="/usr/lib/udev/scsi_id --whitelisted --replace-whitespace --device=/dev/$name",RESULT=="36000c29d9a7b688763da26fb129281d1",SYMLINK+="asm-diskb",OWNER="grid",GROUP="asmadmin",MODE="0660"
KERNEL=="sd*", SUBSYSTEM=="block", PROGRAM=="/usr/lib/udev/scsi_id --whitelisted --replace-whitespace --device=/dev/$name",RESULT=="36000c29466d4a3f445207f6f1e92dc26",SYMLINK+="asm-diskc",OWNER="grid",GROUP="asmadmin",MODE="0660"
KERNEL=="sd*", SUBSYSTEM=="block", PROGRAM=="/usr/lib/udev/scsi_id --whitelisted --replace-whitespace --device=/dev/$name",RESULT=="36000c297d10bed975399e41521217083",SYMLINK+="asm-diskd",OWNER="grid",GROUP="asmadmin",MODE="0660"
KERNEL=="sd*", SUBSYSTEM=="block", PROGRAM=="/usr/lib/udev/scsi_id --whitelisted --replace-whitespace --device=/dev/$name",RESULT=="36000c29827f80a6688f28aac35343349",SYMLINK+="asm-diske",OWNER="grid",GROUP="asmadmin",MODE="0660"
------------------------------------------------------------------------------------------------------------
# Edit /etc/udev/rules.d/99-oracle-asmdevices.rules and paste in the content generated by the script above.
        
[root@XAG01 ~]# vim /etc/udev/rules.d/99-oracle-asmdevices.rules

[root@XAG01 /]# cat /etc/udev/rules.d/99-oracle-asmdevices.rules
------------------------------------------------------------------------------------------------------------
KERNEL=="sd*", SUBSYSTEM=="block", PROGRAM=="/usr/lib/udev/scsi_id --whitelisted --replace-whitespace --device=/dev/$name",RESULT=="36000c29d9a7b688763da26fb129281d1",SYMLINK+="asm-diskb",OWNER="grid",GROUP="asmadmin",MODE="0660"
KERNEL=="sd*", SUBSYSTEM=="block", PROGRAM=="/usr/lib/udev/scsi_id --whitelisted --replace-whitespace --device=/dev/$name",RESULT=="36000c29466d4a3f445207f6f1e92dc26",SYMLINK+="asm-diskc",OWNER="grid",GROUP="asmadmin",MODE="0660"
KERNEL=="sd*", SUBSYSTEM=="block", PROGRAM=="/usr/lib/udev/scsi_id --whitelisted --replace-whitespace --device=/dev/$name",RESULT=="36000c297d10bed975399e41521217083",SYMLINK+="asm-diskd",OWNER="grid",GROUP="asmadmin",MODE="0660"
KERNEL=="sd*", SUBSYSTEM=="block", PROGRAM=="/usr/lib/udev/scsi_id --whitelisted --replace-whitespace --device=/dev/$name",RESULT=="36000c29827f80a6688f28aac35343349",SYMLINK+="asm-diske",OWNER="grid",GROUP="asmadmin",MODE="0660"
------------------------------------------------------------------------------------------------------------
# fdisk partitioning of the disks is not required; if partitions already exist, delete them

Reload the partition tables
/sbin/partprobe /dev/sdb
/sbin/partprobe /dev/sdc
/sbin/partprobe /dev/sdd
/sbin/partprobe /dev/sde
Test with udevadm
udevadm test /sys/block/sdb
udevadm info --query=all --path=/sys/block/sdb
udevadm info --query=all --name=asm-diskb

udevadm test /sys/block/sdc
udevadm info --query=all --path=/sys/block/sdc
udevadm info --query=all --name=asm-diskc

udevadm test /sys/block/sdd
udevadm info --query=all --path=/sys/block/sdd
udevadm info --query=all --name=asm-diskd

udevadm test /sys/block/sde
udevadm info --query=all --path=/sys/block/sde
udevadm info --query=all --name=asm-diske

# Verify the result:
[root@XAG01 ~]# ll /dev/asm*
lrwxrwxrwx 1 root root 3 Dec 17 09:36 /dev/asm-diskb -> sdb
lrwxrwxrwx 1 root root 3 Dec  7 19:28 /dev/asm-diskc -> sdc
lrwxrwxrwx 1 root root 3 Dec  7 19:28 /dev/asm-diskd -> sdd
lrwxrwxrwx 1 root root 3 Jan 18 09:49 /dev/asm-diske -> sde

Configure SSH user equivalence (using sshUserSetup.sh for a quick setup)
The following two commands only need to be run on one node (they can be run as root):

[root@XAG01 scripts]# pwd
/u01/app/12.2.0/grid/oui/prov/resources/scripts

[root@XAG01 scripts]# ./sshUserSetup.sh -user grid  -hosts "XAG02 XAG01" -advanced -exverify -confirm

[root@XAG01 scripts]# ./sshUserSetup.sh -user oracle  -hosts "XAG02 XAG01" -advanced -exverify -confirm
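
To verify that user equivalence works, each command below should print the remote date without prompting for a password (a minimal check, run on either node):
su - grid -c 'ssh XAG01 date; ssh XAG02 date'
su - oracle -c 'ssh XAG01 date; ssh XAG02 date'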
[grid@XAG01 grid]$ pwd
/u01/app/12.2.0/grid
[grid@XAG01 grid]$ ls
linuxx64_12201_grid_home.zip
[grid@XAG01 grid]$ unzip linuxx64_12201_grid_home.zip

Verify the installation environment

-- Full check:
[grid@XAG01 grid]$ /u01/app/12.2.0/grid/runcluvfy.sh stage -pre crsinst -n xag01,xag02 -verbose
[grid@XAG01 grid]$ ./runcluvfy.sh stage -pre crsinst -n xag01,xag02 -fixup -verbose

Check network connectivity and user equivalence

[grid@XAG01 grid]$ /u01/app/12.2.0/grid/runcluvfy.sh comp nodecon -n xag01,xag02 -verbose




Start the GI installation (on one node)

[grid@XAG01 grid]$ ./gridSetup.sh

Select Configure Oracle Grid Infrastructure for a New Cluster
Select Configure an Oracle Standalone Cluster
SCAN Name: XAG-SCAN  [the name defined in /etc/hosts or DNS]
On the Cluster Node Information screen, click Add to add the second node [node names and VIP names must be lowercase]
On the Specify Network Interface Usage screen, set Use For to Public, ASM & Private, and Private respectively
Under Storage Option Information, select Configure ASM using block devices
On the Create ASM Disk Group screen: Disk group name: OCR, Redundancy: Normal,
Change Discovery Path: /dev/asm*
1: Disk group name: OCR, redundancy: normal, select the 3 asm-ocr* disks
2: Disk group name: MGMT, redundancy: normal, select the 3 asm-mgmt* disks
ASM password: xag123

Run the root scripts: orainstRoot.sh on both nodes first, then root.sh on both nodes.

[root@XAG01 ~]# /u01/app/oraInventory/orainstRoot.sh
[root@XAG02 ~]# /u01/app/oraInventory/orainstRoot.sh
[root@XAG01 ~]# /u01/app/12.2.0/grid/root.sh
[root@XAG02 ~]# /u01/app/12.2.0/grid/root.sh

Install the database software

[oracle@xag02 setup]$ unzip linuxx64_12201_database.zip
[oracle@xag02 setup]$ cd database/
[oracle@xag02 database]$ ls
install  response  rpm  runInstaller  sshsetup  stage  welcome.html
[oracle@xag02 database]$ ./runInstaller
Select Install database software only
Select Oracle Real Application Clusters database installation

Create the ASM disk groups

# Create the ASM disk groups with asmca as the grid user -- add the DATA and FRA disk groups
[grid@xag02 grid]$ asmca

Create the database instance

[oracle@xag02 database]$ dbca
Select Create database
Select Advanced configuration
Database type: Oracle Real Application Clusters (RAC) database
Configuration type: Admin Managed
Global database name: MYRAC1
SID prefix: MYRAC1

Check the detailed resource status of the nodes

[grid@xag01 ~]$ crsctl stat res -t
--------------------------------------------------------------------------------
Name           Target  State        Server                   State details       
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.ASMNET1LSNR_ASM.lsnr
               ONLINE  ONLINE       xag01                    STABLE
               ONLINE  ONLINE       xag02                    STABLE
ora.LISTENER.lsnr
               ONLINE  ONLINE       xag01                    STABLE
               ONLINE  ONLINE       xag02                    STABLE
ora.MGMT.dg
               ONLINE  ONLINE       xag01                    STABLE
               ONLINE  ONLINE       xag02                    STABLE
ora.OCR.dg
               ONLINE  ONLINE       xag01                    STABLE
               ONLINE  ONLINE       xag02                    STABLE
ora.chad
               ONLINE  ONLINE       xag01                    STABLE
               ONLINE  ONLINE       xag02                    STABLE
ora.net1.network
               ONLINE  ONLINE       xag01                    STABLE
               ONLINE  ONLINE       xag02                    STABLE
ora.ons
               ONLINE  ONLINE       xag01                    STABLE
               ONLINE  ONLINE       xag02                    STABLE
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
      1        ONLINE  ONLINE       xag01                    STABLE
ora.LISTENER_SCAN2.lsnr
      1        ONLINE  ONLINE       xag02                    STABLE
ora.LISTENER_SCAN3.lsnr
      1        ONLINE  ONLINE       xag02                    STABLE
ora.MGMTLSNR
      1        ONLINE  ONLINE       xag02                    169.254.65.33 10.1.0
                                                             .117 10.2.0.117,STAB
                                                             LE
ora.asm
      1        ONLINE  ONLINE       xag02                    Started,STABLE
      2        ONLINE  ONLINE       xag01                    Started,STABLE
      3        OFFLINE OFFLINE                               STABLE
ora.cvu
      1        ONLINE  ONLINE       xag02                    STABLE
ora.mgmtdb
      1        ONLINE  ONLINE       xag02                    Open,STABLE
ora.qosmserver
      1        ONLINE  ONLINE       xag02                    STABLE
ora.scan1.vip
      1        ONLINE  ONLINE       xag01                    STABLE
ora.scan2.vip
      1        ONLINE  ONLINE       xag02                    STABLE
ora.scan3.vip
      1        ONLINE  ONLINE       xag02                    STABLE
ora.xag01.vip
      1        ONLINE  ONLINE       xag01                    STABLE
ora.xag02.vip
      1        ONLINE  ONLINE       xag02                    STABLE
--------------------------------------------------------------------------------

Verify crsctl status with crsctl stat res -t -init

[grid@xag01 ~]$ crsctl stat res -t -init
--------------------------------------------------------------------------------
Name           Target  State        Server                   State details       
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.asm
      1        ONLINE  ONLINE       xag01                    STABLE
ora.cluster_interconnect.haip
      1        ONLINE  ONLINE       xag01                    STABLE
ora.crf
      1        ONLINE  ONLINE       xag01                    STABLE
ora.crsd
      1        ONLINE  ONLINE       xag01                    STABLE
ora.cssd
      1        ONLINE  ONLINE       xag01                    STABLE
ora.cssdmonitor
      1        ONLINE  ONLINE       xag01                    STABLE
ora.ctssd
      1        ONLINE  ONLINE       xag01                    ACTIVE:0,STABLE
ora.diskmon
      1        OFFLINE OFFLINE                               STABLE
ora.evmd
      1        ONLINE  ONLINE       xag01                    STABLE
ora.gipcd
      1        ONLINE  ONLINE       xag01                    STABLE
ora.gpnpd
      1        ONLINE  ONLINE       xag01                    STABLE
ora.mdnsd
      1        ONLINE  ONLINE       xag01                    STABLE
ora.storage
      1        ONLINE  ONLINE       xag01                    STABLE
--------------------------------------------------------------------------------
[grid@xag01 ~]$ crsctl check cluster
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online

[grid@xag01 ~]$ srvctl status nodeapps
VIP 10.0.28.116 is enabled
VIP 10.0.28.116 is running on node: xag01
VIP 10.0.28.118 is enabled
VIP 10.0.28.118 is running on node: xag02
Network is enabled
Network is running on node: xag02
Network is running on node: xag01
ONS is enabled
ONS daemon is running on node: xag02
ONS daemon is running on node: xag01

[grid@xag01 ~]$ srvctl config nodeapps
Network 1 exists
Subnet IPv4: 10.0.0.0/255.255.0.0/eno1, static
Subnet IPv6: 
Ping Targets: 
Network is enabled
Network is individually enabled on nodes: 
Network is individually disabled on nodes: 
VIP exists: network number 1, hosting node xag01
VIP Name: xag01-vip.mp.com
VIP IPv4 Address: 10.0.28.116
VIP IPv6 Address: 
VIP is enabled.
VIP is individually enabled on nodes: 
VIP is individually disabled on nodes: 
VIP exists: network number 1, hosting node xag02
VIP Name: xag02-vip.mp.com
VIP IPv4 Address: 10.0.28.118
VIP IPv6 Address: 
VIP is enabled.
VIP is individually enabled on nodes: 
VIP is individually disabled on nodes: 
ONS exists: Local port 6100, remote port 6200, EM port 2016, Uses SSL true
ONS is enabled
ONS is individually enabled on nodes: 
ONS is individually disabled on nodes:

[grid@xag01 ~]$ srvctl status asm
ASM is running on xag01,xag02

[grid@xag01 ~]$ srvctl config asm -a
ASM home: 
Password file: +OCR/orapwASM
Backup of Password file: 
ASM listener: LISTENER
ASM is enabled.
ASM is individually enabled on nodes: 
ASM is individually disabled on nodes: 
ASM instance count: 3
Cluster ASM listener: ASMNET1LSNR_ASM

[grid@xag01 ~]$ srvctl config listener -a
Name: LISTENER
Type: Database Listener
Network: 1, Owner: grid
Home: 
  /u01/app/12.2.0/grid on node(s) xag01,xag02
End points: TCP:1521
Listener is enabled.
Listener is individually enabled on nodes: 
Listener is individually disabled on nodes:

[grid@xag01 ~]$  srvctl config nodeapps -a -s -l
Warning:-listener option has been deprecated and will be ignored.
Network 1 exists
Subnet IPv4: 10.0.0.0/255.255.0.0/eno1, static
Subnet IPv6: 
Ping Targets: 
Network is enabled
Network is individually enabled on nodes: 
Network is individually disabled on nodes: 
VIP exists: network number 1, hosting node xag01
VIP Name: xag01-vip.mp.com
VIP IPv4 Address: 10.0.28.116
VIP IPv6 Address: 
VIP is enabled.
VIP is individually enabled on nodes: 
VIP is individually disabled on nodes: 
VIP exists: network number 1, hosting node xag02
VIP Name: xag02-vip.mp.com
VIP IPv4 Address: 10.0.28.118
VIP IPv6 Address: 
VIP is enabled.
VIP is individually enabled on nodes: 
VIP is individually disabled on nodes: 
ONS exists: Local port 6100, remote port 6200, EM port 2016, Uses SSL true
ONS is enabled
ONS is individually enabled on nodes: 
ONS is individually disabled on nodes: 
Name: LISTENER
Type: Database Listener
Network: 1, Owner: grid
Home: 
  /u01/app/12.2.0/grid on node(s) xag01,xag02
End points: TCP:1521
Listener is enabled.
Listener is individually enabled on nodes: 
Listener is individually disabled on nodes:

[grid@xag01 ~]$  srvctl status scan
SCAN VIP scan1 is enabled
SCAN VIP scan1 is running on node xag01
SCAN VIP scan2 is enabled
SCAN VIP scan2 is running on node xag02
SCAN VIP scan3 is enabled
SCAN VIP scan3 is running on node xag02

[grid@xag01 ~]$ cluvfy comp clocksync -verbose
Verifying Clock Synchronization ...
  Node Name                             Status                  
  ------------------------------------  ------------------------
  xag01                                 passed                  

  Node Name                             State                   
  ------------------------------------  ------------------------
  xag01                                 Active                  

  Node Name     Time Offset               Status                  
  ------------  ------------------------  ------------------------
  xag01         0.0                       passed                  
Verifying Clock Synchronization ...PASSED

Verification of Clock Synchronization across the cluster nodes was successful. 

CVU operation performed:      Clock Synchronization across the cluster nodes
Date:                         Apr 8, 2019 2:37:35 PM
CVU home:                     /u01/app/12.2.0/grid/
User:                         grid
-------------------------------------------------------------------------------------------------

[grid@xag01 ~]$ crs_stat -t
Name           Type           Target    State     Host        
------------------------------------------------------------
ora....SM.lsnr ora....er.type ONLINE    ONLINE    xag01       
ora....ER.lsnr ora....er.type ONLINE    ONLINE    xag01       
ora....AF.lsnr ora....er.type OFFLINE   OFFLINE               
ora....N1.lsnr ora....er.type ONLINE    ONLINE    xag01       
ora....N2.lsnr ora....er.type ONLINE    ONLINE    xag02       
ora....N3.lsnr ora....er.type ONLINE    ONLINE    xag02       
ora.MGMT.dg    ora....up.type ONLINE    ONLINE    xag01       
ora.MGMTLSNR   ora....nr.type ONLINE    ONLINE    xag02       
ora.OCR.dg     ora....up.type ONLINE    ONLINE    xag01       
ora.asm        ora.asm.type   ONLINE    ONLINE    xag02       
ora.chad       ora.chad.type  ONLINE    ONLINE    xag01       
ora.cvu        ora.cvu.type   ONLINE    ONLINE    xag02       
ora.mgmtdb     ora....db.type ONLINE    ONLINE    xag02       
ora....network ora....rk.type ONLINE    ONLINE    xag01       
ora.ons        ora.ons.type   ONLINE    ONLINE    xag01       
ora.qosmserver ora....er.type ONLINE    ONLINE    xag02       
ora.scan1.vip  ora....ip.type ONLINE    ONLINE    xag01       
ora.scan2.vip  ora....ip.type ONLINE    ONLINE    xag02       
ora.scan3.vip  ora....ip.type ONLINE    ONLINE    xag02       
ora....01.lsnr application    ONLINE    ONLINE    xag01       
ora.xag01.ons  application    ONLINE    ONLINE    xag01       
ora.xag01.vip  ora....t1.type ONLINE    ONLINE    xag01       
ora....02.lsnr application    ONLINE    ONLINE    xag02       
ora.xag02.ons  application    ONLINE    ONLINE    xag02       
ora.xag02.vip  ora....t1.type ONLINE    ONLINE    xag02

Check the cluster status

[grid@xag01 ~]$ crsctl check cluster -all
**************************************************************
xag01:
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
**************************************************************
xag02:
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
**************************************************************

Check the database (after it has been installed)

[grid@xag02 grid]$ srvctl config database
MYRAC1

[grid@xag02 grid]$ srvctl status database -d MYRAC1
Instance MYRAC11 is running on node xag02
Instance MYRAC12 is running on node xag01

[grid@xag02 grid]$ srvctl status database -d MYRAC1 -f -v
Instance MYRAC11 is running on node xag02. Instance status: Open.
Instance MYRAC12 is running on node xag01. Instance status: Open.

[grid@xag02 grid]$ srvctl config database -d MYRAC1 -a
Database unique name: MYRAC1
Database name: MYRAC1
Oracle home: /u01/app/oracle/product/12.2.0/db_1
Oracle user: oracle
Spfile: +DATA/MYRAC1/PARAMETERFILE/spfile.272.1005064573
Password file: +DATA/MYRAC1/PASSWORD/pwdmyrac1.256.1005064115
Domain: 
Start options: open
Stop options: immediate
Database role: PRIMARY
Management policy: AUTOMATIC
Server pools: 
Disk Groups: FRA,DATA
Mount point paths: 
Services: 
Type: RAC
Start concurrency: 
Stop concurrency: 
Database is enabled
Database is individually enabled on nodes: 
Database is individually disabled on nodes: 
OSDBA group: dba
OSOPER group: oper
Database instances: MYRAC11,MYRAC12
Configured nodes: xag02,xag01
CSS critical: no
CPU count: 0
Memory target: 0
Maximum memory: 0
Default network number for database services: 
Database is administrator managed

[grid@xag02 grid]$ srvctl config vip -n xag02
VIP exists: network number 1, hosting node xag02
VIP Name: xag02-vip.mp.com
VIP IPv4 Address: 10.0.28.118
VIP IPv6 Address: 
VIP is enabled.
VIP is individually enabled on nodes: 
VIP is individually disabled on nodes: 
[grid@xag02 grid]$ 
[grid@xag02 grid]$ 
[grid@xag02 grid]$ srvctl config vip -n xag01
VIP exists: network number 1, hosting node xag01
VIP Name: xag01-vip.mp.com
VIP IPv4 Address: 10.0.28.116
VIP IPv6 Address: 
VIP is enabled.
VIP is individually enabled on nodes: 
VIP is individually disabled on nodes:

[grid@XAG01 ~]$ asmcmd
ASMCMD> pwd
+
ASMCMD> ls
DATA/
FRA/
MGMT/
OCR/
ASMCMD> cd DATA
ASMCMD> pwd
+DATA

# List the nodes
[grid@xag02 grid]$ olsnodes -s
xag02   Active
xag01   Active

# Show the cluster name
[grid@xag02 grid]$ cemutlo -n
xag-cluster

# Query the cluster status
[grid@xag02 grid]$ srvctl status nodeapps
VIP 10.0.28.116 is enabled
VIP 10.0.28.116 is running on node: xag01
VIP 10.0.28.118 is enabled
VIP 10.0.28.118 is running on node: xag02
Network is enabled
Network is running on node: xag02
Network is running on node: xag01
ONS is enabled
ONS daemon is running on node: xag02
ONS daemon is running on node: xag01

# Check the status of the cluster resources
[grid@xag02 grid]$ crsctl stat res -t
--------------------------------------------------------------------------------
Name           Target  State        Server                   State details       
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.ASMNET1LSNR_ASM.lsnr
               ONLINE  ONLINE       xag01                    STABLE
               ONLINE  ONLINE       xag02                    STABLE
ora.DATA.dg
               ONLINE  ONLINE       xag01                    STABLE
               ONLINE  ONLINE       xag02                    STABLE
ora.FRA.dg
               ONLINE  ONLINE       xag01                    STABLE
               ONLINE  ONLINE       xag02                    STABLE
ora.LISTENER.lsnr
               ONLINE  ONLINE       xag01                    STABLE
               ONLINE  ONLINE       xag02                    STABLE
ora.MGMT.dg
               ONLINE  ONLINE       xag01                    STABLE
               ONLINE  ONLINE       xag02                    STABLE
ora.OCR.dg
               ONLINE  ONLINE       xag01                    STABLE
               ONLINE  ONLINE       xag02                    STABLE
ora.chad
               ONLINE  ONLINE       xag01                    STABLE
               ONLINE  ONLINE       xag02                    STABLE
ora.net1.network
               ONLINE  ONLINE       xag01                    STABLE
               ONLINE  ONLINE       xag02                    STABLE
ora.ons
               ONLINE  ONLINE       xag01                    STABLE
               ONLINE  ONLINE       xag02                    STABLE
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
      1        ONLINE  ONLINE       xag01                    STABLE
ora.LISTENER_SCAN2.lsnr
      1        ONLINE  ONLINE       xag02                    STABLE
ora.LISTENER_SCAN3.lsnr
      1        ONLINE  ONLINE       xag02                    STABLE
ora.MGMTLSNR
      1        ONLINE  ONLINE       xag02                    169.254.65.33 10.1.0
                                                             .117 10.2.0.117,STAB
                                                             LE
ora.asm
      1        ONLINE  ONLINE       xag02                    Started,STABLE
      2        ONLINE  ONLINE       xag01                    Started,STABLE
      3        OFFLINE OFFLINE                               STABLE
ora.cvu
      1        ONLINE  ONLINE       xag02                    STABLE
ora.mgmtdb
      1        ONLINE  ONLINE       xag02                    Open,STABLE
ora.myrac1.db
      1        ONLINE  ONLINE       xag02                    Open,HOME=/u01/app/o
                                                             racle/product/12.2.0
                                                             /db_1,STABLE
      2        ONLINE  ONLINE       xag01                    Open,HOME=/u01/app/o
                                                             racle/product/12.2.0
                                                             /db_1,STABLE
ora.qosmserver
      1        ONLINE  ONLINE       xag02                    STABLE
ora.scan1.vip
      1        ONLINE  ONLINE       xag01                    STABLE
ora.scan2.vip
      1        ONLINE  ONLINE       xag02                    STABLE
ora.scan3.vip
      1        ONLINE  ONLINE       xag02                    STABLE
ora.xag01.vip
      1        ONLINE  ONLINE       xag01                    STABLE
ora.xag02.vip
      1        ONLINE  ONLINE       xag02                    STABLE
--------------------------------------------------------------------------------

# Node application status
[grid@xag02 grid]$ srvctl status nodeapps
VIP 10.0.28.116 is enabled
VIP 10.0.28.116 is running on node: xag01
VIP 10.0.28.118 is enabled
VIP 10.0.28.118 is running on node: xag02
Network is enabled
Network is running on node: xag02
Network is running on node: xag01
ONS is enabled
ONS daemon is running on node: xag02
ONS daemon is running on node: xag01

# Check ASM status
[grid@xag02 grid]$ srvctl status asm
ASM is running on xag01,xag02

[grid@xag02 grid]$ srvctl status asm -a
ASM is running on xag01,xag02
ASM is enabled.
ASM instance +ASM1 is running on node xag02
Number of connected clients: 3
Client names: -MGMTDB:_mgmtdb:xag-cluster MYRAC11:MYRAC1:xag-cluster xag02.mp.com:_OCR:xag-cluster
ASM instance +ASM2 is running on node xag01
Number of connected clients: 2
Client names: MYRAC12:MYRAC1:xag-cluster xag01.mp.com:_OCR:xag-cluster

# Show the ASM configuration
[grid@xag02 grid]$ srvctl config asm -a
ASM home: 
Password file: +OCR/orapwASM
Backup of Password file: 
ASM listener: LISTENER
ASM is enabled.
ASM is individually enabled on nodes: 
ASM is individually disabled on nodes: 
ASM instance count: 3
Cluster ASM listener: ASMNET1LSNR_ASM

# Check whether ASM is running in Flex mode
[grid@xag02 grid]$ asmcmd showclustermode
ASM cluster : Flex mode enabled

# Check the listener status
[grid@xag02 grid]$ srvctl status listener
Listener LISTENER is enabled
Listener LISTENER is running on node(s): xag01,xag02

# Show the listener configuration
[grid@xag02 grid]$ srvctl config listener -a
Name: LISTENER
Type: Database Listener
Network: 1, Owner: grid
Home: 
  /u01/app/12.2.0/grid on node(s) xag01,xag02
End points: TCP:1521
Listener is enabled.
Listener is individually enabled on nodes: 
Listener is individually disabled on nodes:

# Check the SCAN listener status
[grid@xag02 grid]$ srvctl status scan_listener
SCAN Listener LISTENER_SCAN1 is enabled
SCAN listener LISTENER_SCAN1 is running on node xag01
SCAN Listener LISTENER_SCAN2 is enabled
SCAN listener LISTENER_SCAN2 is running on node xag02
SCAN Listener LISTENER_SCAN3 is enabled
SCAN listener LISTENER_SCAN3 is running on node xag02

[grid@xag02 grid]$ srvctl status scan
SCAN VIP scan1 is enabled
SCAN VIP scan1 is running on node xag01
SCAN VIP scan2 is enabled
SCAN VIP scan2 is running on node xag02
SCAN VIP scan3 is enabled
SCAN VIP scan3 is running on node xag02

# Show the SCAN network configuration
[grid@xag02 grid]$ srvctl config scan
SCAN name: xag-scan, Network: 1
Subnet IPv4: 10.0.0.0/255.255.0.0/eno1, static
Subnet IPv6: 
SCAN 1 IPv4 VIP: 10.0.28.132
SCAN VIP is enabled.
SCAN VIP is individually enabled on nodes: 
SCAN VIP is individually disabled on nodes: 
SCAN 2 IPv4 VIP: 10.0.28.133
SCAN VIP is enabled.
SCAN VIP is individually enabled on nodes: 
SCAN VIP is individually disabled on nodes: 
SCAN 3 IPv4 VIP: 10.0.28.131
SCAN VIP is enabled.
SCAN VIP is individually enabled on nodes: 
SCAN VIP is individually disabled on nodes:

# Check the VIPs
[grid@xag02 grid]$ srvctl status vip -n xag01
VIP 10.0.28.116 is enabled
VIP 10.0.28.116 is running on node: xag01

[grid@xag02 grid]$ srvctl status vip -n xag02
VIP 10.0.28.118 is enabled
VIP 10.0.28.118 is running on node: xag02

# Node application configuration
[grid@xag02 grid]$ srvctl config nodeapps
Network 1 exists
Subnet IPv4: 10.0.0.0/255.255.0.0/eno1, static
Subnet IPv6: 
Ping Targets: 
Network is enabled
Network is individually enabled on nodes: 
Network is individually disabled on nodes: 
VIP exists: network number 1, hosting node xag01
VIP Name: xag01-vip.mp.com
VIP IPv4 Address: 10.0.28.116
VIP IPv6 Address: 
VIP is enabled.
VIP is individually enabled on nodes: 
VIP is individually disabled on nodes: 
VIP exists: network number 1, hosting node xag02
VIP Name: xag02-vip.mp.com
VIP IPv4 Address: 10.0.28.118
VIP IPv6 Address: 
VIP is enabled.
VIP is individually enabled on nodes: 
VIP is individually disabled on nodes: 
ONS exists: Local port 6100, remote port 6200, EM port 2016, Uses SSL true
ONS is enabled
ONS is individually enabled on nodes: 
ONS is individually disabled on nodes: 

# Query the database name
[grid@xag02 grid]$  srvctl config database
MYRAC1

# Database status
[grid@xag02 grid]$ srvctl status database -d MYRAC1
Instance MYRAC11 is running on node xag02
Instance MYRAC12 is running on node xag01

[grid@xag02 grid]$  srvctl status database -d MYRAC1 -f -v
Instance MYRAC11 is running on node xag02. Instance status: Open.
Instance MYRAC12 is running on node xag01. Instance status: Open.

# Show the database configuration
[grid@xag02 grid]$ srvctl config database -d MYRAC1 -a
Database unique name: MYRAC1
Database name: MYRAC1
Oracle home: /u01/app/oracle/product/12.2.0/db_1
Oracle user: oracle
Spfile: +DATA/MYRAC1/PARAMETERFILE/spfile.272.1005064573
Password file: +DATA/MYRAC1/PASSWORD/pwdmyrac1.256.1005064115
Domain: 
Start options: open
Stop options: immediate
Database role: PRIMARY
Management policy: AUTOMATIC
Server pools: 
Disk Groups: FRA,DATA
Mount point paths: 
Services: 
Type: RAC
Start concurrency: 
Stop concurrency: 
Database is enabled
Database is individually enabled on nodes: 
Database is individually disabled on nodes: 
OSDBA group: dba
OSOPER group: oper
Database instances: MYRAC11,MYRAC12
Configured nodes: xag02,xag01
CSS critical: no
CPU count: 0
Memory target: 0
Maximum memory: 0
Default network number for database services: 
Database is administrator managed

RAC cluster management commands

# Start and stop the RAC environment with crs_start/crs_stop
Check:
[grid@XAG01 /]$ crsctl stat res -t

Start:
[grid@XAG01 /]$ crs_start -all  

Stop:
[grid@XAG01 /]$ crs_stop -all

# Stopping/starting a node's cluster services requires the root user
[root@XAG01 ~]# crsctl stop cluster -all      -- stop the cluster services on all nodes
[root@XAG01 ~]# crsctl stop cluster           -- stop the cluster services on the local node

Remove an existing database from the OCR:
srvctl remove database -d orcl

Add a database instance to the OCR:
srvctl add instance -d <db_unique_name> -i <instance_name> -n <node_name>

# Manage one node's RAC resources with srvctl: srvctl start|stop|status nodeapps -n <node_name>
[grid@XAG01 ~]$ srvctl status nodeapps -n xag01
VIP 192.168.40.112 is enabled
VIP 192.168.40.112 is running on node: xag01
Network is enabled
Network is running on node: xag01
ONS is enabled
ONS daemon is running on node: xag01

# Start/stop/check all instances with srvctl: srvctl start|stop|status database -d <db_name>
[grid@XAG01 ~]$ srvctl status database -d MYRAC1
Instance MYRAC11 is running on node xag01
Instance MYRAC12 is running on node xag02

# Start/stop/check a specific instance with srvctl: srvctl start|stop|status instance -d <db_name> -i <instance_name>
[grid@XAG01 ~]$ srvctl status instance -d MYRAC1 -i MYRAC11
Instance MYRAC11 is running on node xag01
[grid@XAG01 ~]$ srvctl status instance -d MYRAC1 -i MYRAC12
Instance MYRAC12 is not running on node xag02

ASM management commands

[grid@XAG01 ~]$ sqlplus / as sysasm

SQL*Plus: Release 12.2.0.1.0 Production on Fri Jan 25 17:29:46 2019

Copyright (c) 1982, 2016, Oracle.  All rights reserved.


Connected to:
Oracle Database 12c Enterprise Edition Release 12.2.0.1.0 - 64bit Production

SQL> select instance_name,status from v$instance;

INSTANCE_NAME    STATUS
---------------- ------------
+ASM1        STARTED

If the instance is not started, start it as follows:
SQL> startup

# Enter ASM
[grid@XAG01 ~]$ asmcmd -p 
ASMCMD [+] > ls
DATA/
FRA/
MGMT/
OCR/
ASMCMD [+] > pwd
+

Restarting the RAC database

Shutdown order: close the PDBs -> shut down the database -> stop the cluster services (close the PDBs before shutting down the database instances, otherwise SMON will have to perform an automatic recovery).
Startup order: the cluster services start automatically at boot; check that they are healthy -> open the database -> open the PDBs (by default they come up in MOUNTED state). A sqlplus sketch follows.
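
A minimal sqlplus sketch of the PDB half of this sequence, assuming the MYPDB pluggable database shown below ("save state" is optional and makes the PDB reopen automatically after a restart):

[oracle@XAG01 ~]$ sqlplus / as sysdba
SQL> alter pluggable database MYPDB close immediate;
SQL> -- restart the database (e.g. via srvctl), then:
SQL> alter pluggable database MYPDB open;
SQL> alter pluggable database MYPDB save state;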

[oracle@XAG01 ~]$ sqlplus / as sysdba
SQL> select name,open_mode from v$pdbs ;

NAME                                  OPEN_MODE
------------------------------------------------------------
PDB$SEED                        READ ONLY
MYPDB                              MOUNTED

# After closing the PDBs, stop the database
[oracle@XAG01 ~]$ srvctl status database -d MYRAC1
Instance MYRAC11 is running on node xag01
Instance MYRAC12 is running on node xag02

[oracle@XAG01 ~]$ srvctl stop database -d MYRAC1

Create ASM disks with ASMCA
This step only needs to be run on one node

su - grid

$ asmca

su - oracle

$ dbca

Start and stop CRS (must be root)

crsctl start crs and crsctl stop crs

--------------- Appendix ---------------
Starting and stopping the RAC database cluster
The RAC database is fully automatic: when the operating system boots, the ASM devices are mounted automatically and the database starts along with them.
If the database needs to be started or stopped manually, follow the instructions below.

Start and stop the Oracle database instances
Listener:
[root@RAC01 ~] srvctl stop listener                    -- stop the listener

Database:
[root@RAC01 ~] srvctl stop database -d starboss        -- stop the database
or
[root@RAC01 ~] srvctl start database -d starboss -o open/mount/'read only'   -- start to open, mounted, or read-only mode

Start and stop the Oracle RAC cluster stack
This operation stops the database together with all other RAC cluster services (ASM instances, VIPs, listeners, and the RAC high-availability stack):
[root@rac01 ~] crsctl stop cluster -all     -- stop
