I. Clusterware Installation
Here rac1 is used as the installing (primary) node.
1. Unpack the clusterware archive
cpio -idmv < clusterware10gr2_64.cpio
2. Change ownership and permissions
[root@rac1 10G R2]# chown -R oracle:oinstall clusterware
[root@rac1 10G R2]# chmod -R 777 clusterware
3. Before installing, check the required components with CVU (JDK 1.4.2 must be installed on the system before CVU can run).
./runcluvfy.sh stage -pre crsinst -n rac1,rac2
Pre-install check against the 10gR2 release:
./runcluvfy.sh stage -pre crsinst -n rac1,rac2 -r 10gR2
Pre-install check of the RAC network (node connectivity):
./runcluvfy.sh comp nodecon -n rac1,rac2
Check that the hardware and OS setup is complete (post-hwos stage):
./runcluvfy.sh stage -post hwos -n rac1,rac2
Check the system prerequisites for the CRS software:
./runcluvfy.sh comp sys -n rac1,rac2 -p crs -osdba dba -orainv oinstall
4. As root, run xhost +
[root@rac1 tmp]# xhost +
access control disabled, clients can connect from any host
5. Switch to the oracle user and run runInstaller; a second terminal is needed so that rootpre.sh can be run as root when prompted
[oracle@rac1 tmp]$ ./runInstaller
********************************************************************************
Please run the script rootpre.sh as root on all machines/nodes. The script can be found at the top level of the CD or stage-area. Once you have run the script, please type Y to proceed
Answer 'y' if root has run 'rootpre.sh' so you can proceed with Oracle Clusterware installation.
Answer 'n' to abort installation and then ask root to run 'rootpre.sh'.
********************************************************************************
Has 'rootpre.sh' been run by root? [y/n] (n)
y
6. Start the installation
7. Remember to change the CRS home directory as needed
8. Pass the prerequisite checks
9. Configure the cluster nodes; the entries must match the hosts file
An error here is usually caused by SSH equivalence having been configured without ever connecting to the local node itself, so the local host key is missing:
[oracle@rac1 ~]$ ssh rac1
The authenticity of host 'rac1 (133.160.130.18)' can't be established.
RSA key fingerprint is 3b:33:d0:8e:e4:c0:70:33:ab:69:00:f0:67:8f:78:e2.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'rac1,133.160.130.18' (RSA) to the list of known hosts.
Last login: Mon Mar 30 09:29:50 2015 from rac2
[oracle@rac1 ~]$ ssh rac1
Last login: Mon Mar 30 12:55:38 2015 from rac1
[oracle@rac1 ~]$ ssh rac1 date
Mon Mar 30 12:55:53 CST 2015
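Beyond the manual checks above, equivalence can be confirmed in one pass by running a non-interactive command against every node name from each node. A minimal sketch; the host names follow this cluster's hosts file, and the `_priv` names are assumptions:

```shell
# Verify SSH user equivalence from this node; run the same loop on rac2 too.
# Every name must return a date with no password or host-key prompt.
for h in rac1 rac2 rac1_priv rac2_priv; do
  ssh -o StrictHostKeyChecking=no "$h" date || echo "equivalence NOT working for $h"
done
```

If any name prompts for a password, the installer's node configuration step will fail for that node.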
10. Continue to the next step and assign the private and public network interfaces
11. Specify the OCR location
12. Specify the voting disk location
13. After confirming that everything is correct, click Install
14. During installation a dialog pops up asking for scripts to be run as root
Recommended execution order:
rac1: /oraapp/oracle/oraInventory/orainstRoot.sh
rac2: /oraapp/oracle/oraInventory/orainstRoot.sh
rac1: /oraapp/oracle/product/10.2.0/crs_1/root.sh
rac2: /oraapp/oracle/product/10.2.0/crs_1/root.sh
Before running the scripts, two files must be edited to work around a known bug. In $CRS_HOME/bin/vipca, add the marked line (shown in red in the original document):
esac
unset LD_ASSUME_KERNEL      <-- line to add
ARGUMENTS=""
In $CRS_HOME/bin/srvctl, add the marked line:
export LD_ASSUME_KERNEL
unset LD_ASSUME_KERNEL      <-- line to add
# Run opscontrol utility
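The edit can also be scripted. A minimal sketch under the assumption that both files still set LD_ASSUME_KERNEL the way the excerpts above show; it appends the unset right after the export line, which has the same net effect as the insertion points shown above (GNU sed syntax; the CRS_HOME default is taken from this install):

```shell
# Insert "unset LD_ASSUME_KERNEL" after each line exporting it; keeps a .bak copy.
add_unset() {
  sed -i.bak '/export LD_ASSUME_KERNEL/a unset LD_ASSUME_KERNEL' "$1"
}
CRS_HOME=${CRS_HOME:-/oraapp/oracle/product/10.2.0/crs_1}
for f in "$CRS_HOME/bin/vipca" "$CRS_HOME/bin/srvctl"; do
  if [ -f "$f" ]; then add_unset "$f"; fi
done
```

Verify the result with grep before running root.sh, since the exact file layout can differ between patch levels.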
If root.sh fails with the following error:
[root@rac1 crs]# ./root.sh
WARNING: directory '/oraapp/oracle/10g' is not owned by root
WARNING: directory '/oraapp/oracle' is not owned by root
WARNING: directory '/oraapp' is not owned by root
Checking to see if Oracle CRS stack is already configured
/etc/oracle does not exist. Creating it now.
Setting the permissions on OCR backup directory
Setting up NS directories
Failed to upgrade Oracle Cluster Registry configuration
Research shows this failure is caused by a known bug. This RAC environment uses LVM2 + raw devices, and Oracle's fix is patch p4679769_10201_LINUX.zip: overwrite /oraapp/oracle/10g/crs/bin/clsfmt.bin with the clsfmt.bin from the patch, fix its permissions, and then rerun root.sh.
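Applied to this environment, the patch steps look roughly like the following. The CRS home path is the one used in this install; the directory layout inside the patch zip is an assumption, so check the patch README for the actual source path:

```shell
# Back up the original clsfmt.bin, then replace it with the patched copy.
CRS_HOME=/oraapp/oracle/10g/crs             # CRS home used in this install
cd /tmp && unzip p4679769_10201_LINUX.zip   # unpacked layout: see patch README
cp "$CRS_HOME/bin/clsfmt.bin" "$CRS_HOME/bin/clsfmt.bin.orig"
cp clsfmt.bin "$CRS_HOME/bin/clsfmt.bin"    # source path is an assumption
chown oracle:oinstall "$CRS_HOME/bin/clsfmt.bin"
chmod 755 "$CRS_HOME/bin/clsfmt.bin"
```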
Output on node rac1:
[root@rac1 crs]# ./root.sh
WARNING: directory '/oraapp/oracle/10g' is not owned by root
WARNING: directory '/oraapp/oracle' is not owned by root
WARNING: directory '/oraapp' is not owned by root
Checking to see if Oracle CRS stack is already configured
Setting the permissions on OCR backup directory
Setting up NS directories
Oracle Cluster Registry configuration upgraded successfully
WARNING: directory '/oraapp/oracle/10g' is not owned by root
WARNING: directory '/oraapp/oracle' is not owned by root
WARNING: directory '/oraapp' is not owned by root
Successfully accumulated necessary OCR keys.
Using ports: CSS=49895 CRS=49896 EVMC=49898 and EVMR=49897.
node <nodenumber>: <nodename> <private interconnect name> <hostname>
node 1: rac1 rac1_priv rac1
node 2: rac2 rac2_priv rac2
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
Now formatting voting device: /dev/raw/raw2
Now formatting voting device: /dev/raw/raw41
Now formatting voting device: /dev/raw/raw73
Format of 3 voting devices complete.
Startup will be queued to init within 90 seconds.
Adding daemons to inittab
Expecting the CRS daemons to be up within 600 seconds.
CSS is active on these nodes.
rac1
CSS is inactive on these nodes.
rac2
Local node checking complete.
Run root.sh on remaining nodes to start CRS daemons.
Output on node rac2:
[root@rac2 crs]# ./root.sh
WARNING: directory '/oraapp/oracle/10g' is not owned by root
WARNING: directory '/oraapp/oracle' is not owned by root
WARNING: directory '/oraapp' is not owned by root
Checking to see if Oracle CRS stack is already configured
/etc/oracle does not exist. Creating it now.
Setting the permissions on OCR backup directory
Setting up NS directories
Oracle Cluster Registry configuration upgraded successfully
WARNING: directory '/oraapp/oracle/10g' is not owned by root
WARNING: directory '/oraapp/oracle' is not owned by root
WARNING: directory '/oraapp' is not owned by root
clscfg: EXISTING configuration version 3 detected.
clscfg: version 3 is 10G Release 2.
Successfully accumulated necessary OCR keys.
Using ports: CSS=49895 CRS=49896 EVMC=49898 and EVMR=49897.
node <nodenumber>: <nodename> <private interconnect name> <hostname>
node 1: rac1 rac1_priv rac1
node 2: rac2 rac2_priv rac2
clscfg: Arguments check out successfully.
NO KEYS WERE WRITTEN. Supply -force parameter to override.
-force is destructive and will destroy any previous cluster
configuration.
Oracle Cluster Registry for cluster has already been initialized
Startup will be queued to init within 90 seconds.
Adding daemons to inittab
Expecting the CRS daemons to be up within 600 seconds.
CSS is active on these nodes.
rac1
rac2
CSS is active on all nodes.
Waiting for the Oracle CRSD and EVMD to start
Oracle CRS stack installed and running under init(1M)
Running vipca(silent) for configuring nodeapps
Error 0(Native: listNetInterfaces:[3])
[Error 0(Native: listNetInterfaces:[3])]
Note the listNetInterfaces error at the end of the output (highlighted in red in the original document). It occurs because the VIPs have not been configured yet; in a graphical session, switch to the root user and run vipca:
[root@rac1 ~]# cd /oraapp/oracle/10g/crs/bin/
[root@rac1 bin]# ./vipca
Complete the VIP configuration and click Exit. The scripts above do not need to be rerun after this error; the Clusterware installation is now finished.
15. Verify that Clusterware is running
[oracle@rac1 bin]$ olsnodes
rac1
rac2
[oracle@rac1 bin]$ crsctl check crs
CSS appears healthy
CRS appears healthy
EVM appears healthy
[oracle@rac1 bin]$ crs_stat -t
Name Type Target State Host
------------------------------------------------------------
ora.rac1.gsd application ONLINE ONLINE rac1
ora.rac1.ons application ONLINE ONLINE rac1
ora.rac1.vip application ONLINE ONLINE rac1
ora.rac2.gsd application ONLINE ONLINE rac2
ora.rac2.ons application ONLINE ONLINE rac2
ora.rac2.vip application ONLINE ONLINE rac2
II. Oracle Software Installation
Oracle only needs to be installed on rac1; the installer copies the software to the other node during installation.
1. Unpack the Oracle installation package, open the installation directory, and run the following command:
[oracle@rac1 database]$ ./runInstaller
2. Select all nodes
3. Install the Oracle software only; do not create a database
The files are copied to rac2 during installation.
4. When the following window pops up, log in as root on each node in turn and run the script manually
Output on rac1:
[root@rac1 db_1]# ./root.sh
Running Oracle10 root.sh script...
The following environment variables are set as:
ORACLE_OWNER= oracle
ORACLE_HOME= /oraapp/oracle/10g/db_1
Enter the full pathname of the local bin directory: [/usr/local/bin]:
Copying dbhome to /usr/local/bin ...
Copying oraenv to /usr/local/bin ...
Copying coraenv to /usr/local/bin ...
Creating /etc/oratab file...
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root.sh script.
Now product-specific root actions will be performed.
Output on rac2:
[root@rac2 db_1]# ./root.sh
Running Oracle10 root.sh script...
The following environment variables are set as:
ORACLE_OWNER= oracle
ORACLE_HOME= /oraapp/oracle/10g/db_1
Enter the full pathname of the local bin directory: [/usr/local/bin]:
Copying dbhome to /usr/local/bin ...
Copying oraenv to /usr/local/bin ...
Copying coraenv to /usr/local/bin ...
Creating /etc/oratab file...
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root.sh script.
Now product-specific root actions will be performed.
5. Installation complete
III. Installing the 10.2.0.5 Patch Set
Note: upgrade CRS first, then the database software (if a database has already been created, the database itself must be upgraded as well).
Steps to upgrade CRS:
1. Upload the patch and change its ownership to oracle
2. Log in to node 1 and run xhost + as root so that the graphical installer can open; then, as the oracle user, stop the CRS-related processes
3. Verify that all CRS processes are stopped
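A sketch of the shutdown and the check, using the 10.2 commands (note that crsctl stop crs normally has to run as root; the CRS home path is the one used elsewhere in this install):

```shell
# Stop the CRS stack on this node, then repeat on the other node.
/oraapp/oracle/10g/crs/bin/crsctl stop crs
# After a short wait, this should print nothing if the stack is fully down:
ps -ef | grep -E 'crsd\.bin|ocssd\.bin|evmd\.bin' | grep -v grep
```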
4. Unpack the patch set and prepare the installation
5. Select the CRS home directory here
6. Make sure the upgrade checks pass, then continue
7. Installation finishes
8. Log in to both nodes as root and run the command below
9. On node 1 and node 2, run:
[root@rac1 ~]# /oraapp/oracle/10g/crs/install/root102.sh
This completes the CRS upgrade.
Steps to upgrade the database software:
1. Log in to node 1, shut down the CRS-related processes, and verify that they have stopped
2. Change to the patch directory and run runInstaller
3. Select the DB home directory here
4. Log in to both nodes as root and run the script below, rac1 first and then rac2.
Note: before the script overwrites files on the two nodes, back up /usr/local/bin on each node:
[root@rac1 ~]# cd /usr/local/
[root@rac1 local]# cp -rf bin bin.old
[root@rac2 ~]# cd /usr/local/
[root@rac2 local]# cp -rf bin bin.old
During execution, some input is required:
- Local bin directory: accept the default by pressing Enter.
- The script prompts to overwrite three files: back up the three files on both nodes first, then enter Y to overwrite. After both nodes have finished, click OK.
When the installation completes, exit the installer.
Start CRS: log in to node 1 as oracle and run crs_start -all
The upgrade is complete.
IV. Oracle Listener Configuration
Note: if only the database software has been installed, the listener must be configured before a database can be created; configuration is again done in the graphical interface.
1. As the oracle user, run the following command to start the configuration assistant
[oracle@rac1 network]$ netca
Oracle Net Services Configuration:
2. Select all nodes
3. Name the listener
4. Configure the listener protocol as the application requires; TCP is usually sufficient.
5. Configure the listener port
Then select Finish; configuration is complete.
6. Verify the configuration
[oracle@rac1 network]$ crs_stat -t
Name Type Target State Host
------------------------------------------------------------
ora....C1.lsnr application ONLINE ONLINE rac1
ora.rac1.gsd application ONLINE ONLINE rac1
ora.rac1.ons application ONLINE ONLINE rac1
ora.rac1.vip application ONLINE ONLINE rac1
ora....C2.lsnr application ONLINE ONLINE rac2
ora.rac2.gsd application ONLINE ONLINE rac2
ora.rac2.ons application ONLINE ONLINE rac2
ora.rac2.vip application ONLINE ONLINE rac2
V. Creating the Database
Notes:
1. Before creating the database, check the system environment variables and kernel parameter settings; otherwise unnecessary errors can occur. Confirm that CRS is started and running normally.
2. DBCA is used here; before starting it, make sure the oracle user can open the graphical display (run xhost + as root).
1. Log in as the oracle user and run the following command
[oracle@rac1 network]$ dbca
2. Create the database on all nodes
3. Choose custom database creation
4. Install Enterprise Manager if required
5. Specify the control files for the database
Note: raw device mapping is used here. Mapping method:
Create the link files in the following directory (adjust the location as needed; I am not certain whether they must exist on both nodes, so I created them on both).
Create the mapping files.
Note: name the redo and undo files as shown below so that DBCA can recognize them; if recognition fails, adjust the names manually.
Select the mapping file path.
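For illustration only, the link files might look like the following. Every raw device number, directory, and file name here is hypothetical; only the redo/undo naming pattern matters for DBCA recognition:

```shell
# Hypothetical raw-device links for DBCA (names and raw numbers are examples).
DATA=/oraapp/oracle/oradata/xgxrac
mkdir -p "$DATA"
ln -s /dev/raw/raw10 "$DATA/system01.dbf"
ln -s /dev/raw/raw11 "$DATA/undotbs01.dbf"   # undo for instance 1
ln -s /dev/raw/raw12 "$DATA/undotbs02.dbf"   # undo for instance 2
ln -s /dev/raw/raw13 "$DATA/redo1_1.log"     # thread 1, group 1
ln -s /dev/raw/raw14 "$DATA/redo2_1.log"     # thread 2, group 1
chown -R oracle:oinstall "$DATA"
```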
6. If the USERS tablespace is not configured, the following window appears. USERS is a 10g feature: when a new user is created without an explicit default tablespace, USERS is used instead of SYSTEM, which keeps user data out of the SYSTEM tablespace.
7. Parameter configuration
Adjust the memory settings.
SGA and PGA allocation guidelines:
OLTP: SGA = total memory * 0.56, PGA = SGA * (0.1-0.2)
OLAP: SGA = total memory * 0.48, PGA = SGA * (0.45-0.65)
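As a worked example, for a hypothetical server with 64 GB of RAM running OLTP, the rules above give:

```shell
# OLTP sizing for an assumed 64 GB host: SGA = mem * 0.56, PGA = SGA * (0.1-0.2)
awk -v mem=64 'BEGIN {
  sga = mem * 0.56
  printf "SGA = %.1f GB\n", sga
  printf "PGA = %.1f to %.1f GB\n", sga * 0.1, sga * 0.2
}'
# prints: SGA = 35.8 GB, then PGA = 3.6 to 7.2 GB
```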
Set the character set. For reference:
select userenv('language') from dual;
select * from nls_database_parameters;
Set the block size; 16K is generally sufficient.
For the connection mode, keep the default dedicated server mode.
8. Adjust the file sizes and confirm the configuration before continuing
Note: guidelines for planning the online redo logs:
1) Spread and multiplex: place the members of each log group on different disks or different raw devices to improve safety.
2) Put the redo logs on the fastest disks (the log disks should have high I/O throughput); they are usually placed on raw devices.
3) Size the redo log files sensibly: larger log files speed up large INSERT, UPDATE, and DELETE operations, reduce log switch frequency, and reduce some log-related wait events. The right size depends on the workload; in general, each log group should be large enough that automatic switches occur no more often than roughly every 15-20 minutes.
4) Oracle recommends that all redo log groups have the same file size and the same number of members.
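Once the database is running, the switch frequency can be checked against the 15-20 minute guideline. A sketch, assuming it is run as the oracle user on a node with the instance environment set:

```shell
# Count log switches per hour from v$log_history; more than 3-4 per hour
# suggests the logs are undersized for the workload.
sqlplus -s "/ as sysdba" <<'EOF'
select to_char(first_time, 'YYYY-MM-DD HH24') hour, count(*) switches
from   v$log_history
group  by to_char(first_time, 'YYYY-MM-DD HH24')
order  by 1;
EOF
```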
9. Start the creation; a configuration summary page appears first, confirm it and begin.
If errors occur while creating the database:
Check the dump logs to find the specific cause.
1) Insufficient space: ORA-27094
Check the alert log:
[root@rac1 bdump]# cd /oraapp/oracle/10g/admin/xgxrac/bdump
[root@rac1 bdump]# ll
total 28
-rw-r----- 1 oracle oinstall 8357 Apr 1 09:09 alert_xgxrac1.log
-rw-r----- 1 oracle oinstall  888 Apr 1 09:09 xgxrac1_diag_8415.trc
-rw-r----- 1 oracle oinstall  740 Apr 1 09:09 xgxrac1_lgwr_8432.trc
-rw-r----- 1 oracle oinstall  672 Apr 1 09:09 xgxrac1_lmd0_8421.trc
-rw-r----- 1 oracle oinstall 4062 Apr 1 09:09 xgxrac1_lmon_8419.trc
[root@rac1 bdump]# cat alert_xgxrac1.log
This error occurs when the preallocated size exceeds the raw device size; enlarge the raw device or reduce the preallocated size to avoid it.
2) CRS-0215
This class of problem is caused by node access issues; check the environment variables and related configuration.
10. If no errors appeared, click Exit; the database is started and creation is finished.
11. Verification and testing
[oracle@rac1 ~]$ crs_stat -t
Name Type Target State Host
------------------------------------------------------------
ora....C1.lsnr application ONLINE ONLINE rac1
ora.rac1.gsd application ONLINE ONLINE rac1
ora.rac1.ons application ONLINE ONLINE rac1
ora.rac1.vip application ONLINE ONLINE rac1
ora....C2.lsnr application ONLINE ONLINE rac2
ora.rac2.gsd application ONLINE ONLINE rac2
ora.rac2.ons application ONLINE ONLINE rac2
ora.rac2.vip application ONLINE ONLINE rac2
ora.xgxrac.db application ONLINE ONLINE rac2
ora....c1.inst application ONLINE ONLINE rac1
ora....c2.inst application ONLINE ONLINE rac2
[oracle@rac1 ~]$ srvctl status nodeapps -n rac1
VIP is running on node: rac1
GSD is running on node: rac1
Listener is running on node: rac1
ONS daemon is running on node: rac1
[oracle@rac1 ~]$ srvctl status nodeapps -n rac2
VIP is running on node: rac2
GSD is running on node: rac2
Listener is running on node: rac2
ONS daemon is running on node: rac2
Shut down node rac2 and check the CRS status from rac1 (note in the output below that the rac2 VIP fails over and runs on rac1):
[oracle@rac1 ~]$ crs_stat
NAME=ora.rac1.LISTENER_RAC1.lsnr
TYPE=application
TARGET=ONLINE
STATE=ONLINE on rac1
NAME=ora.rac1.gsd
TYPE=application
TARGET=ONLINE
STATE=ONLINE on rac1
NAME=ora.rac1.ons
TYPE=application
TARGET=ONLINE
STATE=ONLINE on rac1
NAME=ora.rac1.vip
TYPE=application
TARGET=ONLINE
STATE=ONLINE on rac1
NAME=ora.rac2.LISTENER_RAC2.lsnr
TYPE=application
TARGET=ONLINE
STATE=OFFLINE
NAME=ora.rac2.gsd
TYPE=application
TARGET=ONLINE
STATE=OFFLINE
NAME=ora.rac2.ons
TYPE=application
TARGET=ONLINE
STATE=OFFLINE
NAME=ora.rac2.vip
TYPE=application
TARGET=ONLINE
STATE=ONLINE on rac1
NAME=ora.xgxrac.db
TYPE=application
TARGET=ONLINE
STATE=ONLINE on rac1
NAME=ora.xgxrac.xgxrac1.inst
TYPE=application
TARGET=ONLINE
STATE=ONLINE on rac1
NAME=ora.xgxrac.xgxrac2.inst
TYPE=application
TARGET=ONLINE
STATE=OFFLINE
Start rac2 again, wait a while, and check the CRS status once more (no manual action is needed in the meantime; if the CRS resources return to normal, the configuration is correct):
[oracle@rac1 ~]$ crs_stat -t
Name Type Target State Host
------------------------------------------------------------
ora....C1.lsnr application ONLINE ONLINE rac1
ora.rac1.gsd application ONLINE ONLINE rac1
ora.rac1.ons application ONLINE ONLINE rac1
ora.rac1.vip application ONLINE ONLINE rac1
ora....C2.lsnr application ONLINE ONLINE rac2
ora.rac2.gsd application ONLINE ONLINE rac2
ora.rac2.ons application ONLINE ONLINE rac2
ora.rac2.vip application ONLINE ONLINE rac2
ora.xgxrac.db application ONLINE ONLINE rac1
ora....c1.inst application ONLINE ONLINE rac1
ora....c2.inst application ONLINE ONLINE rac2