11g RAC R2 Daily Health Check -- Grid

1. Checking the RAC Database

1.1 List the databases

[grid@node1 ~]$ srvctl config database

racdb

[grid@node1 ~]$

1.2 List the database instances

[grid@node1 ~]$ srvctl status database -d racdb

Instance racdb1 is running on node node1

Instance racdb2 is running on node node2
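When this check is scripted across many databases, it helps to fail loudly if any instance is down. A minimal sketch (the function name is mine; it assumes the exact `srvctl` output wording shown above):

```shell
#!/bin/sh
# check_instances: read `srvctl status database -d <db>` output on stdin
# and report whether every instance line says "is running".
# Sketch only -- assumes the output format shown above.
check_instances() {
  # count instance lines that do NOT contain "is running"
  bad=$(grep -i "instance" | grep -vc "is running")
  if [ "$bad" -eq 0 ]; then
    echo "all instances running"
  else
    echo "WARNING: $bad instance(s) not running"
  fi
}
```

Usage would be `srvctl status database -d racdb | check_instances`.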

1.3 Database configuration

[grid@node1 ~]$ srvctl config database -d racdb -a

Database unique name: racdb

Database name: racdb

Oracle home: /u01/app/oracle/11.2.0/dbhome_1

Oracle user: oracle

Spfile: +DATA/racdb/spfileracdb.ora

Domain: 

Start options: open

Stop options: immediate

Database role: PRIMARY

Management policy: AUTOMATIC

Server pools: racdb

Database instances: racdb1,racdb2

Disk Groups: DATA

Services: 

Database is enabled

Database is administrator managed

[grid@node1 ~]$ 

2. Checking the Grid Infrastructure

2.1 Cluster name

[grid@node1 ~]$ cemutlo -n

scan-cluster

[grid@node1 ~]$ 

2.2 Check the cluster stack status

[grid@node1 ~]$ crsctl check cluster -all

**************************************************************

node1:

CRS-4537: Cluster Ready Services is online

CRS-4529: Cluster Synchronization Services is online

CRS-4533: Event Manager is online

**************************************************************

node2:

CRS-4537: Cluster Ready Services is online

CRS-4529: Cluster Synchronization Services is online

CRS-4533: Event Manager is online

**************************************************************

[grid@node1 ~]$

2.3 Cluster resources

[grid@node1 ~]$ crsctl status res -t

--------------------------------------------------------------------------------

NAME           TARGET  STATE        SERVER                   STATE_DETAILS       

--------------------------------------------------------------------------------

Local Resources

--------------------------------------------------------------------------------

ora.DATA.dg

               ONLINE  ONLINE       node1                                        

               ONLINE  ONLINE       node2                                        

ora.LISTENER.lsnr

               ONLINE  ONLINE       node1                                        

               ONLINE  ONLINE       node2                                        

ora.asm

               ONLINE  ONLINE       node1                    Started             

               ONLINE  ONLINE       node2                    Started             

ora.eons

               ONLINE  ONLINE       node1                                        

               ONLINE  ONLINE       node2                                        

ora.gsd

               OFFLINE OFFLINE      node1                                        

               OFFLINE OFFLINE      node2                                        

ora.net1.network

               ONLINE  ONLINE       node1                                        

               ONLINE  ONLINE       node2                                        

ora.ons

               ONLINE  ONLINE       node1                                        

               ONLINE  ONLINE       node2                                        

--------------------------------------------------------------------------------

Cluster Resources

--------------------------------------------------------------------------------

ora.LISTENER_SCAN1.lsnr

      1        ONLINE  ONLINE       node2                                        

ora.LISTENER_SCAN2.lsnr

      1        ONLINE  ONLINE       node1                                        

ora.LISTENER_SCAN3.lsnr

      1        ONLINE  ONLINE       node1                                        

ora.node1.vip

      1        ONLINE  ONLINE       node1                                        

ora.node2.vip

      1        ONLINE  ONLINE       node2                                        

ora.oc4j

      1        OFFLINE OFFLINE                                                   

ora.racdb.db

      1        ONLINE  ONLINE       node1                    Open                

      2        ONLINE  OFFLINE                                                   

ora.scan1.vip

      1        ONLINE  ONLINE       node2                                        

ora.scan2.vip

      1        ONLINE  ONLINE       node1                                        

ora.scan3.vip

      1        ONLINE  ONLINE       node1                                        

[grid@node1 ~]$ 
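In a large cluster it is easy to miss a single OFFLINE line in this table (note instance 2 of `ora.racdb.db` above). A small awk filter can print the name of any resource whose TARGET is ONLINE but whose STATE is OFFLINE. This is a sketch; it assumes the two column layouts shown above, with and without a leading instance number:

```shell
#!/bin/sh
# flag_offline: scan `crsctl status res -t` output on stdin for resources
# that Clusterware wants up (TARGET=ONLINE) but that are down (STATE=OFFLINE).
# Sketch only -- assumes the column layout shown above.
flag_offline() {
  awk '
    # Lines starting in column 1 (not separators) carry the resource name
    /^[^ -]/ { name = $1; next }
    # Local resources:   TARGET STATE SERVER ...
    $1 == "ONLINE" && $2 == "OFFLINE" { print name }
    # Cluster resources: N TARGET STATE SERVER ...
    $2 == "ONLINE" && $3 == "OFFLINE" { print name }
  '
}
```

On the output above, `ora.racdb.db` would be flagged, since its instance 2 shows ONLINE/OFFLINE.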

More detailed resources on host node1:

[grid@node1 ~]$ crsctl status res -t -init

--------------------------------------------------------------------------------

NAME           TARGET  STATE        SERVER                   STATE_DETAILS       

--------------------------------------------------------------------------------

Cluster Resources

--------------------------------------------------------------------------------

ora.asm

      1        ONLINE  ONLINE       node1                    Started             

ora.crsd

      1        ONLINE  ONLINE       node1                                        

ora.cssd

      1        ONLINE  ONLINE       node1                                        

ora.cssdmonitor

      1        ONLINE  ONLINE       node1                                        

ora.ctssd

      1        ONLINE  ONLINE       node1                    ACTIVE:0            

ora.diskmon

      1        ONLINE  ONLINE       node1                                        

ora.evmd

      1        ONLINE  ONLINE       node1                                        

ora.gipcd

      1        ONLINE  ONLINE       node1                                        

ora.gpnpd

      1        ONLINE  ONLINE       node1                                        

ora.mdnsd

      1        ONLINE  ONLINE       node1                                        

[grid@node1 ~]$

More detailed resources on host node2:

[grid@node2 ~]$ crsctl status res -t -init

--------------------------------------------------------------------------------

NAME           TARGET  STATE        SERVER                   STATE_DETAILS       

--------------------------------------------------------------------------------

Cluster Resources

--------------------------------------------------------------------------------

ora.asm

      1        ONLINE  ONLINE       node2                    Started             

ora.crsd

      1        ONLINE  ONLINE       node2                                        

ora.cssd

      1        ONLINE  ONLINE       node2                                        

ora.cssdmonitor

      1        ONLINE  ONLINE       node2                                        

ora.ctssd

      1        ONLINE  ONLINE       node2                    ACTIVE:-11700       

ora.diskmon

      1        ONLINE  ONLINE       node2                                        

ora.evmd

      1        ONLINE  ONLINE       node2                                        

ora.gipcd

      1        ONLINE  ONLINE       node2                                        

ora.gpnpd

      1        ONLINE  ONLINE       node2                                        

ora.mdnsd

      1        ONLINE  ONLINE       node2                                        

[grid@node2 ~]$ 

2.4 Check node applications

[grid@node1 ~]$ srvctl status nodeapps

VIP node1-vip is enabled

VIP node1-vip is running on node: node1

VIP node2-vip is enabled

VIP node2-vip is running on node: node2

Network is enabled

Network is running on node: node1

Network is running on node: node2

GSD is disabled

GSD is not running on node: node1

GSD is not running on node: node2

ONS is enabled

ONS daemon is running on node: node1

ONS daemon is running on node: node2

eONS is enabled

eONS daemon is running on node: node1

eONS daemon is running on node: node2

[grid@node1 ~]$ 

2.5 Check SCAN

Check the SCAN IP address configuration:

[grid@node1 ~]$ srvctl config scan

SCAN name: scan-cluster.com, Network: 1/192.168.0.0/255.255.255.0/eth0

SCAN VIP name: scan1, IP: /scan-cluster/192.168.0.24

SCAN VIP name: scan2, IP: /scan-cluster/192.168.0.25

SCAN VIP name: scan3, IP: /scan-cluster/192.168.0.26

[grid@node1 ~]$ 



Check the actual distribution and status of the SCAN IPs:

[grid@node1 ~]$ srvctl status scan

SCAN VIP scan1 is enabled

SCAN VIP scan1 is running on node node2

SCAN VIP scan2 is enabled

SCAN VIP scan2 is running on node node1

SCAN VIP scan3 is enabled

SCAN VIP scan3 is running on node node1

[grid@node1 ~]$ 



Check the SCAN listener configuration:

[grid@node1 ~]$ srvctl config scan_listener

SCAN Listener LISTENER_SCAN1 exists. Port: TCP:1521

SCAN Listener LISTENER_SCAN2 exists. Port: TCP:1521

SCAN Listener LISTENER_SCAN3 exists. Port: TCP:1521

[grid@node1 ~]$ 





Check the SCAN listener status:

[grid@node1 ~]$ srvctl status scan_listener

SCAN Listener LISTENER_SCAN1 is enabled

SCAN listener LISTENER_SCAN1 is running on node node2

SCAN Listener LISTENER_SCAN2 is enabled

SCAN listener LISTENER_SCAN2 is running on node node1

SCAN Listener LISTENER_SCAN3 is enabled

SCAN listener LISTENER_SCAN3 is running on node node1

[grid@node1 ~]$ 
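With three SCAN VIPs configured, all three SCAN listeners should report running. A trivial scripted check (assuming the output wording above; the function name is mine):

```shell
#!/bin/sh
# count_scan_running: count SCAN listeners that `srvctl status scan_listener`
# reports as running; with three SCAN VIPs the expected count is 3.
count_scan_running() {
  grep -c "is running on node"
}
```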

2.6 Check VIPs and listeners

Check the VIP configuration:

[grid@node1 ~]$ srvctl config vip -n node1

VIP exists.:node1

VIP exists.: /node1-vip/192.168.0.21/255.255.255.0/eth0

[grid@node1 ~]$ srvctl config vip -n node2

VIP exists.:node2

VIP exists.: /node2-vip/192.168.0.31/255.255.255.0/eth0

[grid@node1 ~]$



Check the VIP status:

[grid@node1 ~]$ srvctl status nodeapps

or

[grid@node1 ~]$ srvctl status vip -n node1

VIP node1-vip is enabled

VIP node1-vip is running on node: node1

[grid@node1 ~]$ srvctl status vip -n node2

VIP node2-vip is enabled

VIP node2-vip is running on node: node2

[grid@node1 ~]$ 





Check the local listener configuration:

[grid@node1 ~]$ srvctl config listener -a

Name: LISTENER

Network: 1, Owner: grid

Home: <CRS home>

  /u01/app/11.2.0/grid on node(s) node2,node1

End points: TCP:1521



Check the local listener status:

[grid@node1 ~]$ srvctl status listener

Listener LISTENER is enabled

Listener LISTENER is running on node(s): node1,node2

[grid@node1 ~]$ 

2.7 Check ASM

Check ASM status:

[grid@node1 ~]$ srvctl status asm -a

ASM is running on node1,node2

ASM is enabled.



Check ASM configuration:

[grid@node1 ~]$ srvctl config asm -a

ASM home: /u01/app/11.2.0/grid

ASM listener: LISTENER

ASM is enabled.

[grid@node1 ~]$ 



Check the disk group:

[grid@node1 ~]$ srvctl status diskgroup -g DATA

Disk Group DATA is running on node1,node2

[grid@node1 ~]$  



List the ASM disks:

[root@node1 bin]# oracleasm listdisks

VOL1

VOL2

[root@node1 bin]#



Check the mapping between physical disks and ASM disks:

[root@node1 bin]# oracleasm querydisk -v -p VOL1

Disk "VOL1" is a valid ASM disk

/dev/sdb1: LABEL="VOL1" TYPE="oracleasm" 

[root@node1 bin]#

2.8 Check clock synchronization across cluster nodes

Check clock synchronization on node node1:

[grid@node1 ~]$ cluvfy comp clocksync -verbose

.......

Verification of Clock Synchronization across the cluster nodes was successful. 

[grid@node1 ~]$
Check clock synchronization on node node2:

[grid@node2 ~]$ cluvfy comp clocksync -verbose

..............                

CTSS is in Active state. Proceeding with check of clock time offsets on all nodes...

Reference Time Offset Limit: 1000.0 msecs

Check: Reference Time Offset

  Node Name     Time Offset               Status                  

  ------------  ------------------------  ------------------------

  node2         -89900.0                  failed                  

Result: PRVF-9661 : Time offset is NOT within the specified limits on the following nodes: 

"[node2]" 



PRVF-9652 : Cluster Time Synchronization Services check failed



Verification of Clock Synchronization across the cluster nodes was unsuccessful on all the specified nodes. 

[grid@node2 ~]$ 

  Note: the server clock on node2 is out of sync and needs to be corrected.
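The pass/fail rule cluvfy applies here is simple: the absolute offset from the reference time must stay within the 1000 ms limit, and node2's -89900 ms offset is far outside it. A hypothetical helper illustrating the same test (the function is mine, not part of cluvfy):

```shell
#!/bin/sh
# offset_ok <offset_msecs>: succeed if the absolute clock offset is within
# the 1000 ms CTSS reference limit shown in the cluvfy output above.
# Illustrative sketch only -- not a cluvfy utility.
offset_ok() {
  awk -v o="$1" 'BEGIN { if (o < 0) o = -o; exit !(o <= 1000.0) }'
}
```

Here `offset_ok -89900.0` fails, matching node2's result, while an offset such as 250.5 ms would pass.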

  At this point, the health check of the Grid infrastructure is essentially complete.

 
