1. Hardware and Software Environment
Hardware environment:
Virtualization: VMware Workstation 11
Two VMs, each with 4 cores and 8 GB of RAM
hostnames: rac1, rac2
Software environment:
OS: CentOS 6.5, 64-bit
Cluster software (GRID): 11.2.0.1
Database: Oracle Database 11gR2
Note: disable the firewall, SELinux, etc. on both nodes.
1.1 IP Address Planning
The IP addresses are planned as follows (they go into /etc/hosts on both nodes):
#Public IP
10.7.8.11 rac1
10.7.8.12 rac2
#Private IP
192.168.1.123 racpriv1
192.168.1.124 racpriv2
#Virtual IP
10.7.8.123 racvip1
10.7.8.124 racvip2
#Scan IP
10.7.8.200 racscan
127.0.0.1 localhost.localdomain localhost
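To put the plan in place, a minimal sketch (run as root on both nodes; it simply appends the table above to /etc/hosts):
cat >> /etc/hosts <<'EOF'
#Public IP
10.7.8.11 rac1
10.7.8.12 rac2
#Private IP
192.168.1.123 racpriv1
192.168.1.124 racpriv2
#Virtual IP
10.7.8.123 racvip1
10.7.8.124 racvip2
#Scan IP
10.7.8.200 racscan
EOF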
Check connectivity with ping for every address except the VIPs and the SCAN IP (the clusterware brings those up later).
A primer: what is SCAN (Single Client Access Name)?
SCAN is a network name (and IP address) through which all clients can reach the RAC database. With SCAN, client configuration does not need to change when the cluster topology changes, and clients using the easy connect syntax still benefit from RAC load balancing and failover.
SCAN is required whether or not you use GNS. If you use GNS, Oracle creates the SCAN automatically; if not, the SCAN must be defined in DNS.
We configure DNS first. I won't dwell on what DNS is; its role here is to support a feature new in 11gR2, the SCAN IP: a virtual network service layer inserted between clients and the database, consisting of the SCAN IP and the SCAN IP listener. A client's tnsnames.ora only needs the SCAN address, and the client reaches the database through the SCAN IP and its listener. Compared with earlier RAC releases, the benefit is that adding or removing nodes in the back-end RAC database requires no client-side changes. SCAN can be provided either by a DNS server or by GNS; we take DNS as the example here.
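If no real DNS server is available, one lightweight option is dnsmasq, which answers DNS queries straight from /etc/hosts. The sketch below is only one possible setup and not part of the original procedure (running the service on rac1 and using its public address 10.7.8.11 as the resolver are my assumptions):
# on rac1, acting as a tiny DNS server
yum -y install dnsmasq
service dnsmasq start
chkconfig dnsmasq on
# on both nodes, point the resolver at it
echo "nameserver 10.7.8.11" > /etc/resolv.conf
# verify that the SCAN name now resolves over DNS
nslookup racscan
Note that a production SCAN normally resolves to three round-robin addresses in DNS; with the single entry in /etc/hosts this returns just one, which is also why the installer later complains during the SCAN check (see the grid installation step).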
2. Install the Packages Oracle 11g Needs (on both machines)
[root@rac1 ~]# yum -y install compat-libstdc++-33 elfutils-libelf-devel gcc gcc-c++ glibc-devel glibc-headers libaio-devel libstdc++-devel sysstat unixODBC unixODBC-devel
3. Parameter Configuration
Do the following on both nodes:
[root@rac1 ~]# vi /etc/sysctl.conf
fs.aio-max-nr = 1048576
fs.file-max = 6815744
kernel.shmmax = 8589934592
kernel.shmall = 2097152
kernel.shmmni = 4096
kernel.sem = 250 32000 100 128
net.ipv4.ip_local_port_range = 9000 65500
net.core.rmem_default = 4194304
net.core.rmem_max = 4194304
net.core.wmem_default = 262144
net.core.wmem_max = 1048576
Note: among the parameters above, kernel.shmmax must be adjusted to your environment; 8589934592 is 8 GB, matching this VM's memory, and setting it to the physical memory size is fine here.
Apply the kernel parameters:
[root@rac1 ~]# sysctl -p
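If you would rather derive the two shared-memory values than hard-code them, here is a small bash sketch (illustrative only): kernel.shmmax is expressed in bytes while kernel.shmall is expressed in pages, so:
MEM_BYTES=$(free -b | awk '/^Mem:/{print $2}')
PAGE_SIZE=$(getconf PAGE_SIZE)
echo "kernel.shmmax = $MEM_BYTES"                   # bytes
echo "kernel.shmall = $((MEM_BYTES / PAGE_SIZE))"   # pages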
[root@rac1 ~]# vi /etc/security/limits.conf
grid soft nofile 1024
grid hard nofile 65536
grid soft nproc 2047
grid hard nproc 16384
oracle soft nofile 1024
oracle hard nofile 65536
oracle soft nproc 2047
oracle hard nproc 16384
[root@rac1 ~]# vi /etc/pam.d/login
session required /lib64/security/pam_limits.so
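Once the grid and oracle users exist (they are created in the next step), you can confirm the limits really apply to login sessions; pam_limits is what enforces them. A quick check, assuming the soft values from limits.conf above:
su - grid -c 'ulimit -n; ulimit -u'
# expected: 1024 (soft nofile) and 2047 (soft nproc)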
4. Add Users and Groups. The oracle and grid users created here get their SSH equivalence set up in the next step. Do the following on both nodes:
groupadd -g 1000 oinstall
groupadd -g 1300 dba
groupadd -g 1301 oper
groupadd -g 1201 asmdba
groupadd -g 1200 asmadmin
groupadd -g 1202 asmoper
useradd -m -u 1100 -g oinstall -G asmadmin,asmdba,asmoper -d /home/grid -s /bin/bash -c "Grid Infrastructure Owner" grid
passwd grid
useradd -m -u 1101 -g oinstall -G dba,oper,asmdba -d /home/oracle -s /bin/bash -c "Oracle Software Owner" oracle
passwd oracle
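A quick sanity check that the numeric IDs and group memberships came out identical on both nodes (they must match across the cluster):
id grid    # uid=1100(grid) gid=1000(oinstall) groups=...,1200(asmadmin),1201(asmdba),1202(asmoper)
id oracle  # uid=1101(oracle) gid=1000(oinstall) groups=...,1300(dba),1301(oper),1201(asmdba)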
5. Set Up SSH User Equivalence. Remember that both the oracle and the grid user need this; generate the keys on both nodes:
su - grid
mkdir ~/.ssh
ssh-keygen -t rsa
ssh-keygen -t dsa
The following runs on one node only:
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys    # collect the local public keys in authorized_keys
cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys
ssh rac2 cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys    # append the second node's public keys
ssh rac2 cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys
scp ~/.ssh/authorized_keys rac2:~/.ssh/authorized_keys
Verify on each of the two nodes (important):
ssh rac1 date
ssh rac2 date
ssh racpriv1 date
ssh racpriv2 date
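Any leftover host-key prompt will break the installer's equivalence check, so a small loop that exercises every name is handy (a sketch; run it once as grid and once as oracle on each node, answering yes to the first-time prompts, then re-run it and expect four dates with no prompts):
for h in rac1 rac2 racpriv1 racpriv2; do
    ssh $h date
done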
6. Create Directories and Set Ownership
mkdir -p /u01/app/grid
mkdir -p /u01/app/11.2.0/grid
chown -R grid:oinstall /u01
mkdir -p /u01/app/oracle
chown oracle:oinstall /u01/app/oracle
chmod -R 775 /u01
7. Environment Variables. This too must be done on both nodes. Note that the grid user's SID and the oracle user's SID are different (and differ per node: +ASM1/+ASM2 for grid, test1/test2 for oracle).
[root@rac1 ~]# su - grid
[grid@rac1 ~]$ less .bash_profile
# .bash_profile
# Get the aliases and functions
if [ -f ~/.bashrc ]; then
. ~/.bashrc
fi
# User specific environment and startup programs
PATH=$PATH:$HOME/bin
ORACLE_SID=+ASM1; export ORACLE_SID
JAVA_HOME=/usr/local/java;export JAVA_HOME
ORACLE_BASE=/u01/app/grid; export ORACLE_BASE
ORACLE_HOME=/u01/app/11.2.0/grid; export ORACLE_HOME
ORACLE_PATH=/u01/app/oracle/common/oracle/sql; export ORACLE_PATH
ORACLE_TERM=xterm;export ORACLE_TERM
NLS_DATE_FORMAT="YYYY-MM-DD HH24:MI:SS";export NLS_DATE_FORMAT
TNS_ADMIN=$ORACLE_HOME/network/admin; export TNS_ADMIN
ORA_NLS11=$ORACLE_HOME/nls/data; export ORA_NLS11
PATH=.:${JAVA_HOME}/bin:${PATH}:$HOME/bin:$ORACLE_HOME/bin
PATH=${PATH}:/usr/bin:/bin:/usr/bin/X11:/usr/local/bin
PATH=${PATH}:/u01/app/common/oracle/bin
export PATH
LD_LIBRARY_PATH=$ORACLE_HOME/lib
LD_LIBRARY_PATH=${LD_LIBRARY_PATH}:$ORACLE_HOME/oracm/lib
LD_LIBRARY_PATH=${LD_LIBRARY_PATH}:/lib:/usr/lib:/usr/local/lib
export LD_LIBRARY_PATH
CLASSPATH=$ORACLE_HOME/JRE
CLASSPATH=${CLASSPATH}:$ORACLE_HOME/jlib
CLASSPATH=${CLASSPATH}:$ORACLE_HOME/rdbms/jlib
CLASSPATH=${CLASSPATH}:$ORACLE_HOME/network/jlib
export CLASSPATH
THREADS_FLAG=native; export THREADS_FLAG
export TEMP=/tmp
export TMPDIR=/tmp
export NLS_LANG=american_america.ZHS16GBK
alias sqlplus="rlwrap sqlplus"
umask 022
[grid@rac1 ~]$ su - oracle
Password:
[oracle@rac1 ~]$ less .bash_profile
# .bash_profile
# Get the aliases and functions
if [ -f ~/.bashrc ]; then
. ~/.bashrc
fi
# User specific environment and startup programs
PATH=$PATH:$HOME/bin
ORACLE_SID=test1; export ORACLE_SID
ORACLE_UNQNAME=test; export ORACLE_UNQNAME
JAVA_HOME=/usr/local/java; export JAVA_HOME
ORACLE_BASE=/u01/app/oracle; export ORACLE_BASE
ORACLE_HOME=$ORACLE_BASE/product/11.2.0/dbhome_1; export ORACLE_HOME
ORACLE_PATH=/u01/app/common/oracle/sql; export ORACLE_PATH
ORACLE_TERM=xterm; export ORACLE_TERM
NLS_DATE_FORMAT="DD-MON-YYYY HH24:MI:SS"; export NLS_DATE_FORMAT
TNS_ADMIN=$ORACLE_HOME/network/admin; export TNS_ADMIN
ORA_NLS11=$ORACLE_HOME/nls/data; export ORA_NLS11
PATH=.:${JAVA_HOME}/bin:${PATH}:$HOME/bin:$ORACLE_HOME/bin
PATH=${PATH}:/usr/bin:/bin:/usr/bin/X11:/usr/local/bin
PATH=${PATH}:/u01/app/common/oracle/bin
export PATH
LD_LIBRARY_PATH=$ORACLE_HOME/lib
LD_LIBRARY_PATH=${LD_LIBRARY_PATH}:$ORACLE_HOME/oracm/lib
LD_LIBRARY_PATH=${LD_LIBRARY_PATH}:/lib:/usr/lib:/usr/local/lib
export LD_LIBRARY_PATH
CLASSPATH=$ORACLE_HOME/JRE
CLASSPATH=${CLASSPATH}:$ORACLE_HOME/jlib
CLASSPATH=${CLASSPATH}:$ORACLE_HOME/rdbms/jlib
CLASSPATH=${CLASSPATH}:$ORACLE_HOME/network/jlib
export CLASSPATH
THREADS_FLAG=native; export THREADS_FLAG
export TEMP=/tmp
export TMPDIR=/tmp
export NLS_LANG=AMERICAN_AMERICA.ZHS16GBK
umask 022
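To confirm each account picks up the intended instance, a quick check (shown for rac1; on rac2 expect +ASM2 and test2):
su - grid   -c 'echo $ORACLE_SID'    # +ASM1
su - oracle -c 'echo $ORACLE_SID'    # test1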
8. Disks and Shared Storage
Local disk: each VM node has a 60 GB local disk used to install grid and the database.
Shared storage: two 50 GB disk files are used for ASM.
After booting, run fdisk -l on both hosts to see the disks, and:
[root@rac1 ~]# ls /dev/sd*
sda*  sdb  sdc
[root@rac2 ~]# ls /dev/sd*
sda*  sdb  sdc
Configure the shared disks by partitioning the newly added drives.
First partition them with fdisk, run from rac1 (any one node will do):
[root@rac1 ~]# fdisk /dev/sdb
Device contains neither a valid DOS partition table, nor Sun, SGI or OSF disklabel
Building a new DOS disklabel with disk identifier 0x92ba5e05.
Changes will remain in memory only, until you decide to write them.
After that, of course, the previous content won't be recoverable.
Warning: invalid flag 0x0000 of partition table 4 will be corrected by w(rite)
WARNING: DOS-compatible mode is deprecated. It's strongly recommended to
switch off the mode (command 'c') and change display units to
sectors (command 'u').
Command (m for help): n
Command action
e extended
p primary partition (1-4)
p
Partition number (1-4): 1
First cylinder (1-6527, default 1):
Using default value 1
Last cylinder, +cylinders or +size{K,M,G} (1-6527, default 6527):
Using default value 6527
Command (m for help): w
The partition table has been altered!
Calling ioctl() to re-read partition table.
Syncing disks.
[root@rac1 ~]# fdisk /dev/sdc
Device contains neither a valid DOS partition table, nor Sun, SGI or OSF disklabel
Building a new DOS disklabel with disk identifier 0xb3a6b79c.
Changes will remain in memory only, until you decide to write them.
After that, of course, the previous content won't be recoverable.
Warning: invalid flag 0x0000 of partition table 4 will be corrected by w(rite)
WARNING: DOS-compatible mode is deprecated. It's strongly recommended to
switch off the mode (command 'c') and change display units to
sectors (command 'u').
Command (m for help): n
Command action
e extended
p primary partition (1-4)
p
Partition number (1-4): 1
First cylinder (1-6527, default 1):
Using default value 1
Last cylinder, +cylinders or +size{K,M,G} (1-6527, default 6527):
Using default value 6527
Command (m for help): w
The partition table has been altered!
Calling ioctl() to re-read partition table.
Syncing disks.
Do not put a filesystem on these partitions: ASM uses the raw partitions directly, and a command such as mkfs.ext4 /dev/sd* would also wipe the OS disk sda. Instead, just make the new partition table visible on the second node:
[root@rac2 ~]# partprobe /dev/sdb /dev/sdc
After that, both nodes show the new partitions:
[root@rac1 ~]# ls /dev/sd*
sda*  sdb  sdb1  sdc  sdc1
[root@rac2 ~]# ls /dev/sd*
sda*  sdb  sdb1  sdc  sdc1
9. Configure the ASM Disks
Download and install the packages ASMLib needs:
http://www.oracle.com/technetwork/server-storage/linux/asmlib/rhel6-1940776.html
The page above only provides the oracleasmlib and oracleasm-support packages; the most critical one, kmod-oracleasm, is obtained as follows.
Download Oracle's yum configuration file so kmod-oracleasm can be installed from the Oracle repository:
[root@rac1 yum.repos.d]# wget http://public-yum.oracle.com/public-yum-ol6.repo
[root@rac1 yum.repos.d]# wget http://public-yum.oracle.com/RPM-GPG-KEY-oracle-ol6 -O /etc/pki/rpm-gpg/RPM-GPG-KEY-oracle
Install the kmod-oracleasm package (on both nodes):
yum install kmod-oracleasm -y
[root@rac1 src]# rpm -ivh oracleasmlib-2.0.4-1.el6.x86_64.rpm
Preparing... ########################################### [100%]
1:oracleasmlib ########################################### [100%]
[root@rac1 src]# rpm -ivh oracleasm-support-2.1.8-1.el6.x86_64.rpm
Preparing... ########################################### [100%]
1:oracleasm-support ########################################### [100%]
Note: once everything is installed, reboot the machines, or later steps will report errors.
10. Configure the ASM Driver
Run on both nodes:
[root@rac1 ~]# oracleasm configure -i
Configuring the Oracle ASM library driver.
This will configure the on-boot properties of the Oracle ASM library
driver.  The following questions will determine whether the driver is
loaded on boot and what permissions it will have.  The current values
will be shown in brackets ('[]').  Hitting <ENTER> without typing an
answer will keep that current value.  Ctrl-C will abort.
Default user to own the driver interface []: grid
Default group to own the driver interface []: asmadmin
Start Oracle ASM library driver on boot (y/n) [n]: y
Scan for Oracle ASM disks on boot (y/n) [y]: y
Writing Oracle ASM library driver configuration: done
Initialize the driver:
[root@rac1 ~]# oracleasm init
Creating /dev/oracleasm mount point: /dev/oracleasm
Loading module "oracleasm": oracleasm
Configuring "oracleasm" to use device physical block size
Mounting ASMlib driver filesystem: /dev/oracleasm
The following is run on one node only.
Create the ASM disks:
[root@rac1 ~]# oracleasm createdisk asmdisk1 /dev/sdb1
Writing disk header: done
Instantiating disk: done
[root@rac1 ~]# oracleasm createdisk asmdisk2 /dev/sdc1
Writing disk header: done
Instantiating disk: done
If you have more disks, keep creating them the same way. When done, run oracleasm scandisks:
[root@rac1 ~]# oracleasm scandisks
Reloading disk partitions: done
Cleaning any stale ASM disks...
Scanning system for ASM disks...
List the disks:
[root@rac1 ~]# oracleasm listdisks
ASMDISK1
ASMDISK2
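createdisk only needs to run once because the ASM disk headers live on the shared storage, but the second node has to rescan before it can see them. A quick check on rac2:
[root@rac2 ~]# oracleasm scandisks
[root@rac2 ~]# oracleasm listdisks
ASMDISK1
ASMDISK2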
11. Disable the NTP Server
Again on both nodes:
[root@rac1 ~]# service ntpd stop
[root@rac1 ~]# chkconfig ntpd off
[root@rac1 ~]# mv /etc/ntp.conf /etc/ntp.conf.bak
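With ntpd gone, Oracle's Cluster Time Synchronization Service (ctssd) runs in active mode and keeps the node clocks aligned. To double-check that NTP is fully out of the way on both nodes:
[root@rac1 ~]# service ntpd status        # should report stopped
[root@rac1 ~]# chkconfig --list ntpd      # every runlevel should show off
[root@rac1 ~]# ls /etc/ntp.conf           # should no longer exist (we moved it away)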
12. Upload the Installation Media
Upload everything to /u01:
[root@rac1 ~]# cd /u01 && ls
linux.x64_11gR2_database_1of2.zip
linux.x64_11gR2_database_2of2.zip
linux.x64_11gR2_grid.zip
Unzip linux.x64_11gR2_grid.zip to get the grid directory, then hand it to the grid user:
[root@rac1 u01]# unzip linux.x64_11gR2_grid.zip
[root@rac1 u01]# chown -R grid.oinstall grid/
Run the pre-installation checks:
[root@rac1 ~]# su - grid
[grid@rac1 ~]$ cd /u01/grid
[grid@rac1 grid]$ ./runcluvfy.sh stage -pre crsinst -n rac1,rac2 -fixup -verbose
13. Install the Grid Infrastructure
[grid@rac1 grid]$ ./runInstaller
The grid installation begins.
A. On the rac1 node, run the following as root:
[root@rac1 ~]# /u01/app/oraInventory/orainstRoot.sh
Changing permissions of /u01/app/oraInventory.
Adding read,write permissions for group.
Removing read,write,execute permissions for world.
Changing groupname of /u01/app/oraInventory to oinstall.
The execution of the script is complete.
B. Once node 1 succeeds, log in to rac2 and run the same script:
[root@rac2 ~]# /u01/app/oraInventory/orainstRoot.sh
Changing permissions of /u01/app/oraInventory.
Adding read,write permissions for group.
Removing read,write,execute permissions for world.
Changing groupname of /u01/app/oraInventory to oinstall.
The execution of the script is complete.
C. Back on node 1, run the second script:
[root@rac1 ~]# cd /u01/app/11.2.0/grid/
[root@rac1 grid]# ./root.sh
Performing root user operation for Oracle 11g
The following environment variables are set as:
    ORACLE_OWNER= grid
    ORACLE_HOME=  /u01/app/11.2.0/grid
Enter the full pathname of the local bin directory: [/usr/local/bin]:
   Copying dbhome to /usr/local/bin ...
   Copying oraenv to /usr/local/bin ...
   Copying coraenv to /usr/local/bin ...
Creating /etc/oratab file...
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Using configuration parameter file: /u01/app/11.2.0/grid/crs/install/crsconfig_params
Creating trace directory
User ignored Prerequisites during installation
Failed to create keys in the OLR, rc = 127, Message:
  /u01/app/11.2.0/grid/bin/clscfg.bin: error while loading shared libraries: libcap.so.1: cannot open shared object file: No such file or directory
Failed to create keys in the OLR at /u01/app/11.2.0/grid/crs/install/crsconfig_lib.pm line 7497.
/u01/app/11.2.0/grid/perl/bin/perl -I/u01/app/11.2.0/grid/perl/lib -I/u01/app/11.2.0/grid/crs/install /u01/app/11.2.0/grid/crs/install/rootcrs.pl execution failed
Cause: the libcap.so.1 library really is missing.
Fix: install the package that provides it:
yum install compat-libcap1 -y
Then clean up the partial CRS configuration:
[root@rac1 ~]# /u01/app/11.2.0/grid/crs/install/rootcrs.pl -deconfig -force
Run the root.sh script again:
[root@rac1 grid]# ./root.sh
Configure Oracle Grid Infrastructure for a Cluster ... succeeded
Tip: root.sh must be run on the first node first; the last node must wait until the first node has finished successfully before running root.sh.
The first time I installed 11gR2 RAC I hit the classic 11.2.0.1 problem right here. A quick search shows it is a known bug, and the workaround is simple: while root.sh runs, execute /bin/dd if=/var/tmp/.oracle/npohasd of=/dev/null bs=1024 count=1. If you get "/bin/dd: opening `/var/tmp/.oracle/npohasd': No such file or directory", the file has not been created yet; keep retrying until the dd succeeds. The right moment is usually when "Adding daemon to inittab" appears in the output.
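As a concrete form of that workaround, a retry loop in a second root session works well (a sketch; start it when "Adding daemon to inittab" appears, and it exits as soon as the dd succeeds):
while ! /bin/dd if=/var/tmp/.oracle/npohasd of=/dev/null bs=1024 count=1 2>/dev/null; do
    sleep 1
done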
D. Run the second script on node 2:
[root@rac2 grid]# ./root.sh
OLR initialization - successful
Adding Clusterware entries to upstart
CRS-4402: The CSS daemon was started in exclusive mode but found an active CSS daemon on node rac1, number 1, and is terminating
An active cluster was found during exclusive startup, restarting to join the cluster
Configure Oracle Grid Infrastructure for a Cluster ... succeeded
After root.sh succeeds, return to the OUI screen. An error is reported at this point; the log it references shows it is caused by racscan failing to resolve (we defined it in /etc/hosts instead of configuring DNS). Simply ignore it and click OK.
14. The Grid Infrastructure installation is now complete. With the clusterware installed, check that the cluster resources are up on each node.
Run the following as the oracle user:
[root@rac1 ~]# su - oracle
[oracle@rac1 ~]$ cd /u01/app/11.2.0/grid/bin
[oracle@rac1 bin]$ ./crsctl check cluster -all
**************************************************************
rac1:
CRS-4537:Cluster Ready Services is online
CRS-4529:Cluster Synchronization Services is online
CRS-4533:Event Manager is online
**************************************************************
rac2:
CRS-4537:Cluster Ready Services is online
CRS-4529:Cluster Synchronization Services is online
CRS-4533:Event Manager is online
**************************************************************
[root@rac2 ~]# su - oracle
[oracle@rac2 ~]$ cd /u01/app/11.2.0/grid/bin
[oracle@rac2 bin]$ ./crsctl check cluster -all
**************************************************************
rac1:
CRS-4537:Cluster Ready Services is online
CRS-4529:Cluster Synchronization Services is online
CRS-4533:Event Manager is online
**************************************************************
rac2:
CRS-4537:Cluster Ready Services is online
CRS-4529:Cluster Synchronization Services is online
CRS-4533:Event Manager is online
**************************************************************
The output shows the state of all clusterware daemons in the cluster.
If any resource shows offline or is missing, the clusterware was not installed correctly.
15. Install the Oracle 11.2.0.1 Database Software
Unzip linux.x64_11gR2_database_1of2.zip and
linux.x64_11gR2_database_2of2.zip to get the database directory.
[root@rac1 ~]# su - oracle
[oracle@rac1 ~]$ cd /u01/database
[oracle@rac1 database]$ ./runInstaller
The database software installation begins.
We again install through OUI: start the wizard just as for grid, and again run it on the rac1 node.
The installation steps are not described in detail here; message me privately if you need them.
16. When the graphical installation finishes, run the scripts it prompts for.
17. Create the Database
[oracle@rac1 ~]$ dbca
Run dbca to reach the wizard, and choose to create a RAC database:
On the screen that follows, choose the first option, cluster mode, then Next.
The graphical steps are not described in detail; message me if needed.
18. The Installation Is Complete!
Check that all services are healthy (the outputs below were captured on a cluster whose node names are racnode1 and racnode2):
[oracle@racnode1 ~]$ crsctl check crs
CRS-4638: Oracle High Availability Services is online
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
[oracle@racnode1 ~]$ crs_stat -t
Name           Type           Target    State     Host
------------------------------------------------------------
ora....ER.lsnr ora....er.type ONLINE    ONLINE    racnode1
ora....N1.lsnr ora....er.type ONLINE    ONLINE    racnode1
ora.ORADATA.dg ora....up.type ONLINE    ONLINE    racnode1
ora.asm        ora.asm.type   ONLINE    ONLINE    racnode1
ora.cvu        ora.cvu.type   ONLINE    ONLINE    racnode1
ora.gsd        ora.gsd.type   OFFLINE   OFFLINE
ora....network ora....rk.type ONLINE    ONLINE    racnode1
ora.oc4j       ora.oc4j.type  ONLINE    ONLINE    racnode1
ora.ons        ora.ons.type   ONLINE    ONLINE    racnode1
ora.racdb.db   ora....se.type ONLINE    ONLINE    racnode1
ora....SM1.asm application    ONLINE    ONLINE    racnode1
ora....E1.lsnr application    ONLINE    ONLINE    racnode1
ora....de1.gsd application    OFFLINE   OFFLINE
ora....de1.ons application    ONLINE    ONLINE    racnode1
ora....de1.vip ora....t1.type ONLINE    ONLINE    racnode1
ora....SM2.asm application    ONLINE    ONLINE    racnode2
ora....E2.lsnr application    ONLINE    ONLINE    racnode2
ora....de2.gsd application    OFFLINE   OFFLINE
ora....de2.ons application    ONLINE    ONLINE    racnode2
ora....de2.vip ora....t1.type ONLINE    ONLINE    racnode2
ora.scan1.vip  ora....ip.type ONLINE    ONLINE    racnode1
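crs_stat is deprecated in 11.2; the same picture, with full resource names instead of the truncated ora.... forms, is available from:
[oracle@racnode1 ~]$ crsctl stat res -t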
Check the database status:
[oracle@racnode1 ~]$ srvctl status database -d racdb
Instance racdb1 is running on node racnode1
Instance racdb2 is running on node racnode2
Check the node applications:
[oracle@racnode1 ~]$ srvctl status nodeapps
VIP racvip1 is enabled
VIP racvip1 is running on node: racnode1
VIP racvip2 is enabled
VIP racvip2 is running on node: racnode2
Network is enabled
Network is running on node: racnode1
Network is running on node: racnode2
GSD is disabled
GSD is not running on node: racnode1
GSD is not running on node: racnode2
ONS is enabled
ONS daemon is running on node: racnode1
ONS daemon is running on node: racnode2
PRKO-2182: eONS daemon does not exist on node(s): racnode1,racnode2
Node application configuration:
[oracle@racnode1 ~]$ srvctl config nodeapps
VIP exists.: racnode1
VIP exists.: /racvip1/10.7.7.124/255.255.255.0/eth1
VIP exists.: racnode2
VIP exists.: /racvip2/10.7.7.234/255.255.255.0/eth1
GSD exists.
ONS daemon exists. Local port 6100, remote port 6200
PRKO-2339: eONS daemon does not exist.
Database configuration:
[oracle@racnode1 ~]$ srvctl config database -d racdb -a
Database unique name: racdb
Database name: racdb
Oracle home: /data/app/oracle/11.2.0/db_1
Oracle user: oracle
Spfile: +ORADATA/racdb/spfileracdb.ora
Domain:
Start options: open
Stop options: immediate
Database role: PRIMARY
Management policy: AUTOMATIC
Server pools: racdb
Database instances: racdb1,racdb2
Disk groups: ORADATA
Services:
Database is enabled
Database is administrator managed
ASM status:
[oracle@racnode1 ~]$ srvctl status asm
ASM is running on racnode1,racnode2
ASM configuration:
[oracle@racnode1 ~]$ srvctl config asm -a
ASM home: /data/app/grid/11.2.0
ASM listener: LISTENER
ASM is enabled.
TNS listener status:
[oracle@racnode1 ~]$ srvctl status listener
Listener LISTENER is enabled
Listener LISTENER is running on node(s): racnode1,racnode2
TNS listener configuration:
[oracle@racnode1 ~]$ srvctl config listener -a
Name: LISTENER
Network: 1, Owner: oracle
Home:
  /data/app/grid/11.2.0 on node(s) racnode2,racnode1
End points: TCP:1521
Node application configuration (VIP, GSD, ONS, listener):
[oracle@racnode1 ~]$ srvctl config nodeapps -a -g -s -l
Warning: the -l option is deprecated and will be ignored.
VIP exists.: racnode1
VIP exists.: /racvip1/10.7.7.124/255.255.255.0/eth1
VIP exists.: racnode2
VIP exists.: /racvip2/10.7.7.234/255.255.255.0/eth1
GSD exists.
ONS daemon exists. Local port 6100, remote port 6200
Name: LISTENER
Network: 1, Owner: oracle
Home:
  /data/app/grid/11.2.0 on node(s) racnode2,racnode1
End points: TCP:1521
SCAN status:
[oracle@racnode1 ~]$ srvctl status scan
SCAN VIP scan1 is enabled
SCAN VIP scan1 is running on node racnode1
SCAN configuration:
[oracle@racnode1 ~]$ srvctl config scan
SCAN name: racscan, Network: 1/10.7.8.0/255.255.255.0/eth1
SCAN VIP name: scan1, IP: /racscan/10.7.8.200
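Tying this back to the SCAN discussion at the beginning: a client now needs only the SCAN name in its tnsnames.ora rather than the per-node VIPs. A sketch of such an entry (the alias RACDB is my own choice; the service name racdb matches the database created above):
RACDB =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = racscan)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = racdb)
    )
  )
A quick test from any client that can resolve racscan: sqlplus system@RACDB. Adding or removing RAC nodes later requires no change to this entry.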