Database management commands (run as the oracle user)
srvctl status
Available options: database|instance|service|nodeapps|asm
# Display help for database level
srvctl status database -h
# Display the running status of the database's instances on each node
srvctl status database -d orcl
Example output:
Instance orcl1 is(not) running on node rac1
Instance orcl2 is(not) running on node rac2
# Include disabled applications
srvctl status database -d orcl -f
# Verbose output
srvctl status database -d orcl -v
# Additional information for EM Console
srvctl status database -d orcl -S EM_AGENT_DEBUG
# Additional information for EM Console (for a specific instance)
srvctl status database -d orcl -i orcl1 -S EM_AGENT_DEBUG
# Display help for instance level
srvctl status instance -h
# Display the running status of a specified instance
srvctl status instance -d orcl -i orcl1
# Display help for node level
srvctl status nodeapps -h
# Display the status of all node applications on a given node
srvctl status nodeapps -n [node_name]
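The options list above also includes service. Assuming a service has been created for the orcl database (the service name oltp below is hypothetical), its status can be checked the same way:
# Display the running status of a named service (oltp is a hypothetical service name)
srvctl status service -d orcl -s oltp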
srvctl start
# Start database
srvctl start database -d orcl -o nomount
srvctl start database -d orcl -o mount
srvctl start database -d orcl -o open
# Syntax for starting an instance
srvctl start instance -d [db_name] -i [instance_name] -o [start_option] -c [connect_str] -q
# Start all instances on all nodes
srvctl start instance -d orcl -i orcl1,orcl2,...
# Start an ASM instance
srvctl start asm -n [node_name] -i asm1 -o open
# Start all node applications on a node
srvctl start nodeapps -n [node_name]
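A database service can be started the same way; a minimal sketch, again using the hypothetical service name oltp:
# Start a named service (oltp is a hypothetical service name)
srvctl start service -d orcl -s oltp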
srvctl stop
# Stop database
srvctl stop database -d orcl -o normal
srvctl stop database -d orcl -o immediate
srvctl stop database -d orcl -o abort
# Syntax for stopping an instance
srvctl stop instance -d [db_name] -i [instance_name] -o [stop_option] -c [connect_str] -q
# Stop all instances on all nodes
srvctl stop instance -d orcl -i orcl1,orcl2,...
# Stop an ASM instance
srvctl stop asm -n [node_name] -i asm1 -o [option]
# Stop all node applications on a node
srvctl stop nodeapps -n [node_name]
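Likewise for stopping a service; a minimal sketch with the same hypothetical service name:
# Stop a named service (oltp is a hypothetical service name)
srvctl stop service -d orcl -s oltp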
17.2 Start/Stop Cluster Commands
The following stop/start operations must be performed as the root user.
Stop the Oracle Clusterware stack on the local server
On node rac1, use the crsctl stop cluster command to stop the Oracle Clusterware stack:
[root@rac1 ~]# /u01/app/11.2.0/grid/bin/crsctl stop cluster
Note: after you run the crsctl stop cluster command, if any of the resources managed by Oracle Clusterware is still running, the entire command fails. Use the -f option to unconditionally stop all resources and stop the Oracle Clusterware stack.
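For example, a forced stop of the stack on the local node would use the same grid home as above:
[root@rac1 ~]# /u01/app/11.2.0/grid/bin/crsctl stop cluster -f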
Start the Oracle Clusterware stack on the local server
On node rac1, use the crsctl start cluster command to start the Oracle Clusterware stack:
[root@rac1 ~]# /u01/app/11.2.0/grid/bin/crsctl start cluster
Note: you can start the Oracle Clusterware stack on all servers in the cluster by specifying the -all option:
[root@rac1 ~]# /u01/app/11.2.0/grid/bin/crsctl start cluster -all
You can also start the Oracle Clusterware stack on one or more named servers in the cluster by listing the servers, separated by spaces:
[root@rac1 ~]# /u01/app/11.2.0/grid/bin/crsctl start cluster -n rac1 rac2
17.3 Checking the Health of the Cluster (clusterized command)
Run the following command as the grid user.
[grid@rac1 ~]$ crsctl check cluster
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
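crsctl check cluster reports on the local server by default; on 11.2 it should also accept the -all option to check every node in the cluster (a minimal sketch in the same environment as above):
[grid@rac1 ~]$ crsctl check cluster -all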
All Oracle instances (database status)
[oracle@rac1 ~]$ srvctl status database -d orcl
Instance orcl1 is running on node rac1
Instance orcl2 is running on node rac2
A single Oracle instance (status of a specific instance)
[oracle@rac1 ~]$ srvctl status instance -d orcl -i orcl1
Instance orcl1 is running on node rac1
Node applications (status)
[oracle@rac1 ~]$ srvctl status nodeapps
VIP rac1-vip is enabled
VIP rac1-vip is running on node: rac1
VIP rac2-vip is enabled
VIP rac2-vip is running on node: rac2
Network is enabled
Network is running on node: rac1
Network is running on node: rac2
GSD is disabled
GSD is not running on node: rac1
GSD is not running on node: rac2
ONS is enabled
ONS daemon is running on node: rac1
ONS daemon is running on node: rac2
eONS is enabled
eONS daemon is running on node: rac1
eONS daemon is running on node: rac2
Node applications (configuration)
[oracle@rac1 ~]$ srvctl config nodeapps
VIP exists.:rac1
VIP exists.: /rac1-vip/192.168.1.251/255.255.255.0/eth0
VIP exists.:rac2
VIP exists.: /rac2-vip/192.168.1.252/255.255.255.0/eth0
GSD exists.
ONS daemon exists. Local port 6100, remote port 6200
eONS daemon exists. Multicast port 24057, multicast IP address 234.194.43.168, listening port 2016
List all configured databases
[oracle@rac1 ~]$ srvctl config database
orcl
Database (configuration)
[oracle@rac1 ~]$ srvctl config database -d orcl -a
Database unique name: orcl
Database name: orcl
Oracle home: /u01/app/oracle/product/11.2.0/dbhome_1
Oracle user: oracle
Spfile: +ORCL_DATA/orcl/spfileorcl.ora
Domain: idevelopment.info
Start options: open
Stop options: immediate
Database role: PRIMARY
Management policy: AUTOMATIC
Server pools: orcl
Database instances: orcl1,orcl2
Disk Groups: ORCL_DATA,FRA
Services:
Database is enabled
Database is administrator managed
ASM (status)
[oracle@rac1 ~]$ srvctl status asm
ASM is running on rac1,rac2
ASM (configuration)
$ srvctl config asm -a
ASM home: /u01/app/11.2.0/grid
ASM listener: LISTENER
ASM is enabled.
TNS listener (status)
[oracle@rac1 ~]$ srvctl status listener
Listener LISTENER is enabled
Listener LISTENER is running on node(s): rac1,rac2
TNS listener (configuration)
[oracle@rac1 ~]$ srvctl config listener -a
Name: LISTENER
Network: 1, Owner: grid
Home:
/u01/app/11.2.0/grid on node(s) rac2,rac1
End points: TCP:1521
SCAN (status)
[oracle@rac1 ~]$ srvctl status scan
SCAN VIP scan1 is enabled
SCAN VIP scan1 is running on node rac1
SCAN (configuration)
[oracle@rac1 ~]$ srvctl config scan
SCAN name: racnode-cluster-scan, Network: 1/192.168.1.0/255.255.255.0/eth0
SCAN VIP name: scan1, IP: /racnode-cluster-scan/192.168.1.187
VIP (status of a specific node)
[oracle@rac1 ~]$ srvctl status vip -n rac1
VIP rac1-vip is enabled
VIP rac1-vip is running on node: rac1
[oracle@rac1 ~]$ srvctl status vip -n rac2
VIP rac2-vip is enabled
VIP rac2-vip is running on node: rac2
VIP (configuration of a specific node)
[oracle@rac1 ~]$ srvctl config vip -n rac1
VIP exists.:rac1
VIP exists.: /rac1-vip/192.168.1.251/255.255.255.0/eth0
[oracle@rac1 ~]$ srvctl config vip -n rac2
VIP exists.:rac2
VIP exists.: /rac2-vip/192.168.1.252/255.255.255.0/eth0
Node application configuration (VIP, GSD, ONS, listener)
[oracle@rac1 ~]$ srvctl config nodeapps -a -g -s -l
-l option has been deprecated and will be ignored.
VIP exists.:rac1
VIP exists.: /rac1-vip/192.168.1.251/255.255.255.0/eth0
VIP exists.:rac2
VIP exists.: /rac2-vip/192.168.1.252/255.255.255.0/eth0
GSD exists.
ONS daemon exists. Local port 6100, remote port 6200
Name: LISTENER
Network: 1, Owner: grid
Home:
/u01/app/11.2.0/grid on node(s) rac2,rac1
End points: TCP:1521