Oracle RAC basic management commands

The Oracle RAC Clusterware command set, organized by layer:
1. Node layer: olsnodes
2. Network layer: oifcfg
3. Cluster layer: crsctl, ocrcheck, ocrdump, ocrconfig
4. Application layer: srvctl, onsctl, crs_stat


# Commands in detail

Node layer
The olsnodes command (run any of these tools with -help to see usage):
[root@dmk01 bin]# ./olsnodes -help
Usage: olsnodes [-n] [-p] [-i] [<node> | -l] [-g] [-v]
        where
                -n print node number with the node name
                -p print private interconnect name with the node name
                -i print virtual IP name with the node name
                <node> print information for the specified node
                -l print information for the local node
                -g turn on logging
                -v run in verbose mod

-n  print each node's number
-p  print the network interface name used for the private interconnect
-i  print each node's VIP name
-g  turn on logging
-v  verbose output
[root@dmk01 bin]# pwd
/oracle/app/product/11.1/crs/bin
[root@dmk01 bin]# ./olsnodes -n -p -i
dmk01   1       dmk01-priv      dmk01-vip
dmk02   2       dmk02-priv      dmk02-vip
dmk03   3       dmk03-priv      dmk03-vip
dmk04   4       dmk04-priv      dmk04-vip
dmk05   5       dmk05-priv      dmk05-vip
dmk06   6       dmk06-priv      dmk06-vip
dmk07   7       dmk07-priv      dmk07-vip
dmk08   8       dmk08-priv      dmk08-vip
dmk09   9       dmk09-priv      dmk09-vip
dmk10   10      dmk10-priv      dmk10-vip
dmk11   11      dmk11-priv      dmk11-vip
dmk12   12      dmk12-priv      dmk12-vip
dmk13   13      dmk13-priv      dmk13-vip
dmk14   14      dmk14-priv      dmk14-vip
dmk15   15      dmk15-priv      dmk15-vip
dmk16   16      dmk16-priv      dmk16-vip
dmk17   17      dmk17-priv      dmk17-vip
dmk18   18      dmk18-priv      dmk18-vip
dmk19   19      dmk19-priv      dmk19-vip
dmk20   20      dmk20-priv      dmk20-vip
dmk21   21      dmk21-priv      dmk21-vip
dmk22   22      dmk22-priv      dmk22-vip
dmk23   23      dmk23-priv      dmk23-vip
dmk24   24      dmk24-priv      dmk24-vip
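The fixed-width olsnodes output above is easy to post-process with awk. A minimal sketch (the output is simulated here with a two-node sample; on a live cluster you would pipe ./olsnodes -n -p -i directly):

```shell
# Simulated `olsnodes -n -p -i` output; a live cluster would pipe olsnodes itself.
olsnodes_out='dmk01   1       dmk01-priv      dmk01-vip
dmk02   2       dmk02-priv      dmk02-vip'

# Column 3 is the private-interconnect name; print one per line,
# e.g. to feed a ping loop over the heartbeat network.
printf '%s\n' "$olsnodes_out" | awk '{print $3}'
```

The same idea extracts the VIP names from column 4.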

 

Network layer

[oracle@dmk01 ~]$ oifcfg iflist   # list the network interfaces
ib0  192.168.26.0
bond0  10.87.25.0
bond1  192.168.25.0

[oracle@dmk01 ~]$ oifcfg getif   # show each interface's attributes
bond0  10.87.25.0  global  public                 # carries Oracle Net and VIP traffic
ib0  192.168.26.0  global  cluster_interconnect   # carries the heartbeat


[oracle@dmk01 ~]$ oifcfg getif -global -dmk22   # show the configuration for node dmk22
bond0  10.87.25.0  global  public
ib0  192.168.26.0  global  cluster_interconnect

 

[oracle@dmk01 ~]$ oifcfg getif -type public   # show interfaces of a given type (public here; cluster_interconnect is the other type)
bond0  10.87.25.0  global  public


# Add an interface with setif and remove one with delif. The general (global) forms are:
oifcfg setif -global <if_name>/<subnet>:<public|cluster_interconnect>
oifcfg delif -global <if_name>[/<subnet>]
# e.g. oifcfg setif -global bond2/10.87.26.0:cluster_interconnect  (bond2 here is a hypothetical interface)


Cluster layer
This layer includes crsctl, ocrcheck, ocrdump, and ocrconfig; the last three operate on the OCR disk.

[oracle@dmk01 ~]$ crsctl
Usage: crsctl check crs - checks the viability of the Oracle Clusterware
       crsctl check cssd        - checks the viability of Cluster Synchronization Services
       crsctl check crsd        - checks the viability of Cluster Ready Services
       crsctl check evmd        - checks the viability of Event Manager
       crsctl check cluster [-node <nodename>] - checks the viability of CSS across nodes
       crsctl set css <parameter> <value> - sets a parameter override
       crsctl get css <parameter> - gets the value of a Cluster Synchronization Services parameter
       crsctl unset css <parameter> - sets the Cluster Synchronization Services parameter to its default
       crsctl query css votedisk - lists the voting disks used by Cluster Synchronization Services
       crsctl add css votedisk <path> - adds a new voting disk
       crsctl delete css votedisk <path> - removes a voting disk
       crsctl enable crs - enables startup for all Oracle Clusterware daemons
       crsctl disable crs - disables startup for all Oracle Clusterware daemons
       crsctl start crs [-wait] - starts all Oracle Clusterware daemons
       crsctl stop crs [-wait] - stops all Oracle Clusterware daemons. Stops Oracle Clusterware managed resources in case of cluster.
       crsctl start resources - starts Oracle Clusterware managed resources
       crsctl stop resources - stops Oracle Clusterware managed resources
       crsctl debug statedump css - dumps state info for Cluster Synchronization Services objects
       crsctl debug statedump crs - dumps state info for Cluster Ready Services objects
       crsctl debug statedump evm - dumps state info for Event Manager objects
       crsctl debug log css [module:level] {,module:level} ... - turns on debugging for Cluster Synchronization Services
       crsctl debug log crs [module:level] {,module:level} ... - turns on debugging for Cluster Ready Services
       crsctl debug log evm [module:level] {,module:level} ... - turns on debugging for Event Manager
       crsctl debug log res [resname:level] ... - turns on debugging for Event Manager
       crsctl debug trace css [module:level] {,module:level} ... - turns on debugging for Cluster Synchronization Services
       crsctl debug trace crs [module:level] {,module:level} ... - turns on debugging for Cluster Ready Services
       crsctl debug trace evm [module:level] {,module:level} ... - turns on debugging for Event Manager
       crsctl query crs softwareversion [<nodename>] - lists the version of Oracle Clusterware software installed
       crsctl query crs activeversion - lists the Oracle Clusterware operating version
       crsctl lsmodules css - lists the Cluster Synchronization Services modules that can be used for debugging
       crsctl lsmodules crs - lists the Cluster Ready Services modules that can be used for debugging
       crsctl lsmodules evm - lists the Event Manager modules that can be used for debugging
If necessary any of these commands can be run with additional tracing by adding a 'trace'
 argument at the very front. Example: crsctl trace check css


Common crsctl commands
# Check overall CRS status
[oracle@dmk01 ~]$ crsctl check crs
Cluster Synchronization Services appears healthy
Cluster Ready Services appears healthy
Event Manager appears healthy
# Check the cssd component
[oracle@dmk01 ~]$ crsctl check cssd
Cluster Synchronization Services appears healthy
# Check the crsd component
[oracle@dmk01 ~]$ crsctl check crsd
Cluster Ready Services appears healthy
# Check the evmd component
[oracle@dmk01 ~]$ crsctl check evmd
Event Manager appears healthy
[oracle@dmk01 ~]$

# Enable or disable automatic startup of the CRS stack at boot (run as root; autostart is the default).
# This actually modifies /etc/oracle/scls_scr/<node_name>/root/crsstart
crsctl disable crs
crsctl enable crs
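As the note above says, enable/disable just flip a one-word flag file. A sketch that simulates that flag in /tmp (the real file, per the text, is /etc/oracle/scls_scr/<node_name>/root/crsstart and is root-owned; change it with crsctl, not by hand):

```shell
# Simulate the crsstart flag that `crsctl enable|disable crs` toggles.
flag=/tmp/crsstart.example
echo enable  > "$flag"   # what `crsctl enable crs` effectively records
echo disable > "$flag"   # what `crsctl disable crs` effectively records
cat "$flag"
```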

# Start and stop CRS
crsctl stop crs
crsctl start crs

# Query the voting disk locations
crsctl query css votedisk

# View a CSS parameter
crsctl get css <parameter>
For example:
[oracle@dmk01 ~]$ crsctl get css misscount
60
# Set a CSS parameter
crsctl set css <parameter> <value>

# CRS is made up of three services: crs, css, and evm. Each service contains a set of modules; crsctl can trace each module and record the output in a log.
[oracle@dmk01 ~]$ crsctl lsmodules css
The following are the Cluster Synchronization Services modules::
    CSSD
    COMMCRS
    COMMNS
[oracle@dmk01 ~]$ crsctl lsmodules crs
The following are the CRS modules::
    CRSUI
    CRSCOMM
    CRSRTI
    CRSMAIN
    CRSPLACE
    CRSAPP
    CRSRES
    CRSCOMM
    CRSOCR
    CRSTIMER
    CRSEVT
    CRSD
    CLUCLS
    CLSVER
    CSSCLNT
    COMMCRS
    COMMNS
[oracle@dmk01 ~]$ crsctl lsmodules evm
The following are the Event Manager modules::
   EVMD
   EVMDMAIN
   EVMCOMM
   EVMEVT
   EVMAPP
   EVMAGENT
   CRSOCR
   CLUCLS
   CSSCLNT
   COMMCRS
   COMMNS

 


# Turn on tracing for the CSSD module (run as root); the output is written to ocssd.log
crsctl debug log css "CSSD:1"

 

# Maintaining the voting disks: Oracle requires that more than half of the voting disks remain available; otherwise the CRS cluster goes down immediately
# 1. Adding a voting disk requires the database, ASM, and the CRS stack to be completely stopped first, and needs the -force flag
# Query the voting disk locations
crsctl query css votedisk
# Stop CRS on all nodes, as root
crsctl stop crs
# Add a voting disk as root ('crsctl delete css votedisk <path>' removes one). Because more than half must survive, if you currently have a single voting disk, add two more for a total of three; with only two, losing one leaves just half available and the cluster goes down
crsctl add css votedisk <path> -force
# Confirm the result, as root
crsctl query css votedisk
# Restart the CRS stack, as root
crsctl start crs


# OCR commands
# The cluster configuration lives on shared storage: the OCR disk. Only one node at a time may write to the OCR disk: the master node. Every node keeps a copy of the OCR in memory,
# and only one OCR process reads from that in-memory copy. When OCR content changes, the master node's OCR process synchronizes the change to the OCR processes on the other nodes.
# Oracle backs up the OCR every 4 hours and keeps the last 3 backups, plus the last backup of the previous day and of the previous week. Backups are made by the crsd process on the master node; the default backup location is $CRS_HOME/crs/cdata/<cluster_name>
# Backup file names rotate automatically to reflect time order; the most recent backup is named backup00.ocr

# ocrdump prints the OCR contents in readable form (it is not a backup)
# ocrdump -stdout prints to the screen; giving a file name writes the contents to that file; -keyname prints only a key and its subkeys; -xml prints in XML format
# e.g. ocrdump -stdout -keyname SYSTEM.css -xml   prints the SYSTEM.css key
# A log file ocrdump_<PID>.log is created under $CRS_HOME/log/<node_name>/client; check it if anything goes wrong


# ocrcheck verifies the consistency of the OCR contents; it writes ocrcheck_<PID>.log under $CRS_HOME/log/<node_name>/client
ocrcheck

# ocrconfig maintains the OCR disks. At most two OCR devices can be configured: a primary OCR and a mirror OCR.
[oracle@dmk10 ~]$ ocrconfig -help   # show usage
Name:
        ocrconfig - Configuration tool for Oracle Cluster Registry.

Synopsis:
        ocrconfig [option]
        option:
                -export <filename> [-s online]
                                                    - Export cluster register contents to a file
                -import <filename>                  - Import cluster registry contents from a file
                -upgrade [<user> [<group>]]
                                                    - Upgrade cluster registry from previous version
                -downgrade [-version <version string>]
                                                    - Downgrade cluster registry to the specified version
                -backuploc <dirname>                - Configure periodic backup location
                -showbackup [auto|manual]           - Show backup information
                -manualbackup                       - Perform OCR backup
                -restore <filename>                 - Restore from physical backup
                -replace ocr|ocrmirror [<filename>] - Add/replace/remove a OCR device/file
                -overwrite                          - Overwrite OCR configuration on disk
                -repair ocr|ocrmirror <filename>    - Repair local OCR configuration
                -help                               - Print out this help information

Note:
        A log file will be created in
        $ORACLE_HOME/log/<hostname>/client/ocrconfig_<pid>.log. Please ensure
        you have file creation privileges in the above directory before
        running this tool.


# Show OCR backups
ocrconfig -showbackup
# Change the backup location (default: $CRS_HOME/crs/cdata/<cluster_name>)
ocrconfig -backuploc <directory>

# Backup and recovery with export/import
# Oracle recommends backing up the OCR (with the ocrconfig -export command) before cluster changes such as adding or removing a node; after replace or restore operations, run 'cluvfy comp ocr -n all' for a full check
# Stop CRS on all nodes, as root
crsctl stop crs
# Export the OCR, as root
ocrconfig -export <path/name>.exp  (e.g. test.exp)
# Check CRS status, as root
crsctl check crs
# Suppose the OCR is then damaged (storage corruption or similar), so its contents are lost
# Check OCR consistency as root with ocrcheck and the cluvfy tool; at this point both report failures
ocrcheck
runcluvfy.sh comp ocr -n all
# Restore the OCR contents by importing, as root
ocrconfig -import /oracle/test.exp
# Check the OCR again with ocrcheck or the cluvfy tool
# If everything is fine, start CRS, as root
crsctl start crs
# After startup, check CRS status, as root
crsctl check crs

 


# Moving the OCR
# Check whether the OCR has a backup
ocrconfig -showbackup
# If there is no backup, take an export as a backup, as root
ocrconfig -export <path/name>.exp -s online
# Check the current OCR configuration
ocrcheck
# If there is only a primary OCR and no mirror OCR, the primary cannot be moved directly; add a mirror OCR first, then move it
# Add a mirror OCR
ocrconfig -replace ocrmirror <new mirror location>
# Check that the add succeeded, as root
ocrcheck
# Move the primary OCR, as root
ocrconfig -replace ocr <new location>
# Confirm the change, as root
ocrcheck
# After these steps, /etc/oracle/ocr.loc on every node is updated automatically; if not, edit it by hand:
ocrconfig_loc=<new location>
ocrmirrorconfig_loc=<mirror location>

local_only=FALSE

 

# Application layer: a RAC database is made up of a number of resources; each resource is a complete service consisting of one process or a group of processes
# crs_stat shows the status of every resource maintained by CRS
crs_stat -t
# Show the status of a specific resource with -p or -v: -p is very detailed; -v includes the allowed restart count, restarts already performed, the failure threshold, and the failure count
crs_stat <resource_name> -p
# Show the permission settings for each resource; the correct values are oracle:dba rwxrwxr-- (user, group, permissions)
crs_stat -ls
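Because crs_stat -t output is column-aligned, awk filters work well on it. A sketch that lists only resources whose State is OFFLINE (the sample output and resource names are made up for illustration):

```shell
# Simulated `crs_stat -t` output (hypothetical resource names).
crs_out='Name           Type           Target    State     Host
ora.dmk01.vip  application    ONLINE    ONLINE    dmk01
ora.dmk01.ons  application    ONLINE    OFFLINE   dmk01'

# Skip the header row; print the name of every resource whose State is OFFLINE.
printf '%s\n' "$crs_out" | awk 'NR > 1 && $4 == "OFFLINE" {print $1}'
```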

# onsctl
# Manages and configures ONS (Oracle Notification Service), the foundation of Oracle Clusterware's FAN event push model
# 10g introduced the push mechanism (FAN): when certain events occur on the server, the server proactively notifies clients so they learn of server-side changes as early as possible. The mechanism depends on ONS. (Earlier releases used a pull model: clients polled the server periodically to determine service status.)

# Using onsctl requires the ONS service to be configured
oracle@HA5-DZ01:[/home/oracle] onsctl                                                                                                                          
usage: /oracle/app/oracle/product/10.2.0/crs/bin/onsctl start|stop|ping|reconfig|debug

start                            - Start opmn only.
stop                             - Stop ons daemon
ping                             - Test to see if ons daemon is running
debug                            - Display debug information for the ons daemon
reconfig                         - Reload the ons configuration
help                             - Print a short syntax description (this).
detailed                         - Print a verbose syntax description.

# RAC uses the ONS configuration file under $CRS_HOME: $CRS_HOME/opmn/conf/ons.config
oracle@HA5-DZ01:[/oracle/app/oracle/product/10.2.0/crs/opmn/conf] more ons.config
localport=6100    # local listening port, used to talk to clients running on this host
remoteport=6200   # remote listening port, used to talk to remote clients

loglevel=3        # ONS trace level, 1-9, default 3
logfile=<path>    # trace file location, default $CRS_HOME/opmn/logs/opmn.log

# useocr and nodes determine which remote nodes' ONS daemons the local ONS daemon talks to
# nodes entries use the format hostname/ip:port
# useocr=on means the information is stored in the OCR; off means it is read from the nodes setting (single-instance installs use useocr=off)
# With useocr=off the nodes line is read, e.g.:
# nodes=rac1:6200,rac2:6200   means the local ONS talks to port 6200 on nodes rac1 and rac2
# With useocr=on the information is stored in the OCR under the DATABASE.ONS_HOSTS key, where the hosts and ports are visible as e.g. DATABASE.ONS_HOSTS.rac1 and DATABASE.ONS_HOSTS.rac1.PORT:
ocrdump test.xml -keyname DATABASE.ONS_HOSTS -xml
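Pulling the settings above together, here is a hypothetical complete ons.config for a two-node rac1/rac2 cluster with useocr=off (node names and ports are assumptions, not taken from the clusters in this article); the sketch writes it to /tmp and reads one line back:

```shell
# Write a sample ons.config (hypothetical values), then read back the nodes line.
cat > /tmp/ons.config.sample <<'EOF'
localport=6100
remoteport=6200
loglevel=3
useocr=off
nodes=rac1:6200,rac2:6200
EOF
grep '^nodes=' /tmp/ons.config.sample
```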


# Configuring ONS: you can edit the configuration file directly; if useocr=on, configure it with racgons as root (add_config to add, remove_config to remove)
racgons add_config rac3:6200,rac4:6200
racgons remove_config rac3:6200,rac4:6200

# onsctl subcommands: start, stop, debug, reload the configuration
onsctl start|stop|debug|reconfig|detailed
# A running ONS process does not necessarily mean ONS is working; confirm it with ping
# Check the process state at the OS level
[oracle@dmk10 conf]$ ps -aef|grep ons
oracle   13173 15458  0 14:04 pts/1    00:00:00 grep ons
oracle   24922     1  0 Mar08 ?        00:00:00 /oracle/app/product/11.1/crs/opmn/bin/ons -d
oracle   24923 24922  0 Mar08 ?        00:00:05 /oracle/app/product/11.1/crs/opmn/bin/ons -d
# Confirm the ONS service status
$ onsctl ping
Number of onsconfiguration retrieved, numcfg = 3
onscfg[0]
   {node = nbidw7, port = 6251}
Setting remote port from OCR repository to 6251
Adding remote host nbidw7:6251
onscfg[1]
   {node = nbidw8, port = 6251}
Adding remote host nbidw8:6251
onscfg[2]
   {node = nbidw9, port = 6251}
Adding remote host nbidw9:6251
ons is not running ...                              # not started
$
# Start ONS
onsctl start
# Confirm the ONS status again (it should now report 'ons is running')
onsctl ping
oracle@HA5-DZ01:[/oracle/app/oracle/product/10.2.0/crs/opmn/conf] onsctl ping
ons is running ...

# onsctl debug shows detailed information: the NS listener list and all connections
oracle@HA5-DZ01:[/oracle/app/oracle/product/10.2.0/crs/opmn/conf] onsctl debug
HTTP/1.1 200 OK
Content-Length: 1285
Content-Type: text/html
Response:


======== NS ========

Listeners:

 NAME    BIND ADDRESS   PORT   FLAGS   SOCKET
------- --------------- ----- -------- ------
Local   127.000.000.001  6100 00000142      6
Remote  010.096.019.037  6200 00000101      7
Request     No listener

Server connections:

    ID           IP        PORT    FLAGS    SENDQ     WORKER   BUSY  SUBS
---------- --------------- ----- -------- ---------- -------- ------ -----

Client connections:

    ID           IP        PORT    FLAGS    SENDQ     WORKER   BUSY  SUBS
---------- --------------- ----- -------- ---------- -------- ------ -----
        11 127.000.000.001  6100 0001001a          0               1     0
        17 127.000.000.001  6100 0001001a          0               1     1

Pending connections:

    ID           IP        PORT    FLAGS    SENDQ     WORKER   BUSY  SUBS
---------- --------------- ----- -------- ---------- -------- ------ -----
         0 127.000.000.001  6100 00020812          0               1     0

Worker Ticket: 37/37, Idle: 60

   THREAD   FLAGS
  -------- --------
         2 00000012
         3 00000012
         4 00000012

Resources:

  Notifications:
    Received: 15, in Receive Q: 0, Processed: 15, in Process Q: 0

  Pools:
    Message: 24/25 (1), Link: 25/25 (1), Subscription: 24/25 (1)
oracle@HA5-DZ01:[/oracle/app/oracle/product/10.2.0/crs/opmn/conf]

 


# SRVCTL can operate on databases, instances, ASM, services, listeners, and node applications (GSD, ONS, VIP)

oracle@HA5-DZ01:[/oracle/app/oracle/product/10.2.0/crs/opmn/conf] srvctl -help
Usage: srvctl <command> <object> [<options>]
    command: enable|disable|start|stop|relocate|status|add|remove|modify|getenv|setenv|unsetenv|config
    objects: database|instance|service|nodeapps|asm|listener
For detailed help on each command and object and its options use:
    srvctl <command> <object> -h


# Show the database configuration: lists every database registered in the OCR (some environments register several databases in one OCR)
srvctl config database
oracle@HA5-DZ01:[/oracle/app/oracle/product/10.2.0/crs/opmn/conf] srvctl config database
HNSMS
# -d shows the configuration of one database, e.g. which nodes it runs on
srvctl config database -d HNSMS
oracle@HA5-DZ01:[/oracle/app/oracle/product/10.2.0/crs/opmn/conf] srvctl config database -d HNSMS
ha5-dz01 HNSMS1 /oracle/app/oracle/product/10.2.0/database
ha5-dz02 HNSMS2 /oracle/app/oracle/product/10.2.0/database
# -a shows more detail
srvctl config database -d HNSMS -a
oracle@HA5-DZ01:[/oracle/app/oracle/product/10.2.0/crs/opmn/conf] srvctl config database -d HNSMS -a
ha5-dz01 HNSMS1 /oracle/app/oracle/product/10.2.0/database
ha5-dz02 HNSMS2 /oracle/app/oracle/product/10.2.0/database
DB_NAME: HNSMS
ORACLE_HOME: /oracle/app/oracle/product/10.2.0/database
SPFILE: /dev/vx/rdsk/vg_db01/dz01_05G_006
DOMAIN: null
DB_ROLE: null
START_OPTIONS: null
POLICY:  AUTOMATIC
ENABLE FLAG: DB ENABLED

# Show the node application configuration
srvctl config nodeapps -n <node_name>
[oracle@dmk10 conf]$ srvctl config nodeapps -n dmk10
VIP exists.: /dmk10-vip/10.87.25.35/255.255.255.0/bond0
GSD exists.
ONS daemon exists.
Listener exists.
# -a shows the VIP, -g the GSD, -s the ONS, -l the listener
srvctl config nodeapps -n <node_name> -a

# Show the listener
srvctl config listener -n <node_name>
[oracle@dmk10 conf]$ srvctl config listener -n dmk09
dmk09 LISTENER_DMK09
[oracle@dmk10 conf]$ srvctl config listener -n dmk10
dmk10 LISTENER_DMK10


# Show ASM: prints each node's ASM instance name and its $ORACLE_HOME
srvctl config asm -n <node_name>
[oracle@dmk10 conf]$ srvctl config asm -n dmk10
+ASM10 /oracle/app/product/11.1/db

# Show services: list all service configurations for a database
srvctl config service -d <database_name> -a
[oracle@dmk10 conf]$ srvctl config service -d rac -a
masamk1 PREF: rac1 rac2 rac3 rac4 rac5 rac6 AVAIL:  TAF: NONE
masamk2 PREF: rac7 rac8 rac9 rac10 rac11 rac12 AVAIL:  TAF: NONE
masamk3 PREF: rac1 rac2 rac3 rac4 rac5 rac6 rac7 rac8 rac9 rac10 rac11 rac12 AVAIL:  TAF: NONE

# -s shows the configuration of one service, e.g. -s masamk1
# -a shows the TAF policy

 

 

# Use add to register objects and remove to delete them; removing a database or instance prompts interactively
# Most application-layer resources are registered in the OCR by the graphical tools: the VIP and ONS during the final stage of installation, the database and ASM during dbca, and the listener by netca
# Register a database manually
srvctl add database -d <database_name> -o $ORACLE_HOME
# Register an instance
srvctl add instance -d <database_name> -n <node_name> -i <instance_name>
# Add a service; this takes four parameters:
-s: service name
-r: preferred instances
-a: available (backup) instances
-p: TAF policy (NONE is the default; the others are BASIC and PRECONNECT)
srvctl add service -d <database_name> -s <service_name> -r <instance_name> -a <instance_name> -p <TAF_policy>


# enable/disable: enable or disable objects
# By default the database, instances, services, and ASM all start when CRS starts
# Enable a database to start with CRS (disable turns this off)
srvctl enable database -d <database_name>
# Check whether it worked: ENABLE FLAG: DB ENABLED means success, and POLICY should be AUTOMATIC for the database to start with CRS
srvctl config database -d <database_name> -a
# Turn off automatic startup for a single instance
srvctl disable instance -d <database_name> -i <instance_name>

 

# Prevent a service from running on a particular instance
srvctl disable service -d <database_name> -s <service_name> -i <instance_name>


# Starting, stopping, and checking objects
# srvctl is the recommended way to start and stop the database: it keeps the run-state information in CRS up to date
# srvctl start database -d <database_name>   (by default the database is started to open)

# Start an instance to a different state; -o passes the startup option
srvctl start instance -d <database_name> -i <instance_name> -o mount
srvctl start instance -d <database_name> -i <instance_name> -o nomount
# Stop an instance
srvctl stop instance -d <database_name> -i <instance_name> -o immediate
srvctl stop instance -d <database_name> -i <instance_name> -o abort

# Start a service on an instance
srvctl start service -d <database_name> -s <service_name> -i <instance_name>
# Check service status
srvctl status service -d <database_name> -v
# Stop a service on an instance
srvctl stop service -d <database_name> -s <service_name> -i <instance_name>


For the full startup syntax and options, run:
srvctl start database|asm|instance|service|listener|nodeapps -h


# Check status
srvctl status service -d <database_name> -v
srvctl status database -d <database_name>
srvctl status instance -d <database_name> -i <instance_name>


# Tracing srvctl
# To trace srvctl, set the OS environment variable SRVM_TRACE=true:
export SRVM_TRACE=true
# Subsequent srvctl runs then print trace output to the screen

 

********* Recovery *********
# If the OCR and voting disks are all damaged and there is no backup, the OCR and voting disks must be re-initialized
# Stop the CRS stack on all nodes
crsctl stop crs
# Run $CRS_HOME/install/rootdelete.sh on every node, as root
# Run $CRS_HOME/install/rootdeinstall.sh on any one node (running it once is enough)
# On that same node, run $CRS_HOME/root.sh
# Run $CRS_HOME/root.sh on the remaining nodes (watch the output on the last node: the ONS, GSD, and VIP resources are created and started)
# Re-run netca to configure the listeners and confirm they are registered in the CRS OCR (at this point the listener, ONS, GSD, and VIP are all registered)
crs_stat -t -v

# If ASM is used, add ASM back into the OCR
srvctl add asm -n <node_name> -i <asm_instance_name> -o $ORACLE_HOME
# Start ASM
srvctl start asm -n <node_name>
# If an ORA-27550 error appears when starting ASM, RAC may be unable to determine which NIC to use as the private interconnect (heartbeat)
# Fix it by adding this parameter to the ASM pfile on every node:
+ASM1.cluster_interconnects='<private/heartbeat IP>'
+ASM2.cluster_interconnects='<private/heartbeat IP>'
# Add the database object to the OCR
srvctl add database -d <database_name> -o $ORACLE_HOME  (full path)
# Add the instance objects to the OCR, one add per instance
srvctl add instance -d <database_name> -i <instance_name> -n <node_name>
# Set the dependency between each database instance and its ASM instance (map <instance_name> to <asm_instance_name>; repeat for every instance)
srvctl modify instance -d <database_name> -i <instance_name> -s <asm_instance_name>

# Start the database
srvctl start database -d <database_name>
# If ORA-27550 appears here, fix it the same way as for ASM
# Modify each instance's spfile, one entry per node, using that node's private IP:
alter system set cluster_interconnects='<private/heartbeat IP>' scope=spfile sid='<INSTANCE1>';
alter system set cluster_interconnects='<private/heartbeat IP>' scope=spfile sid='<INSTANCE2>';

# Restart the database
srvctl start database -d <database_name>
