--*****************************************
-- Verifying the Oracle RAC Installation Environment with runcluvfy
--*****************************************
As the saying goes, to do a good job one must first sharpen one's tools. Installing Oracle RAC is a substantial
undertaking, especially when the up-front planning and configuration work has not been done properly, in which case the
installation becomes far more complex than you might imagine. Fortunately, the runcluvfy tool greatly simplifies the
work. The demonstration below is based on an installation of Oracle 10g RAC on Linux.
1. Run the pre-installation checks with runcluvfy from the installation media directory
[oracle@node1 cluvfy]$ pwd
/u01/Clusterware/clusterware/cluvfy
[oracle@node1 cluvfy]$ ./runcluvfy.sh stage -pre crsinst -n node1,node2 -verbose
Performing pre-checks for cluster services setup
Checking node reachability...
Check: Node reachability from node "node1"
Destination Node Reachable?
------------------------------------ ------------------------
node1 yes
node2 yes
Result: Node reachability check passed from node "node1".
Checking user equivalence...
Check: User equivalence for user "oracle"
Node Name Comment
------------------------------------ ------------------------
node2 passed
node1 passed
Result: User equivalence check passed for user "oracle".
Checking administrative privileges...
Check: Existence of user "oracle"
Node Name User Exists Comment
------------ ------------------------ ------------------------
node2 yes passed
node1 yes passed
Result: User existence check passed for "oracle".
Check: Existence of group "oinstall"
Node Name Status Group ID
------------ ------------------------ ------------------------
node2 exists 500
node1 exists 500
Result: Group existence check passed for "oinstall".
Check: Membership of user "oracle" in group "oinstall" [as Primary]
Node Name User Exists Group Exists User in Group Primary Comment
---------------- ------------ ------------ ------------ ------------ ------------
node2 yes yes yes yes passed
node1 yes yes yes yes passed
Result: Membership check for user "oracle" in group "oinstall" [as Primary] passed.
Administrative privileges check passed.
Checking node connectivity...
Interface information for node "node2"
Interface Name IP Address Subnet
------------------------------ ------------------------------ ----------------
eth0 192.168.0.12 192.168.0.0
eth1 10.101.0.12 10.101.0.0
Interface information for node "node1"
Interface Name IP Address Subnet
------------------------------ ------------------------------ ----------------
eth0 192.168.0.11 192.168.0.0
eth1 10.101.0.11 10.101.0.0
Check: Node connectivity of subnet "192.168.0.0"
Source Destination Connected?
------------------------------ ------------------------------ ----------------
node2:eth0 node1:eth0 yes
Result: Node connectivity check passed for subnet "192.168.0.0" with node(s) node2,node1.
Check: Node connectivity of subnet "10.101.0.0"
Source Destination Connected?
------------------------------ ------------------------------ ----------------
node2:eth1 node1:eth1 yes
Result: Node connectivity check passed for subnet "10.101.0.0" with node(s) node2,node1.
Suitable interfaces for the private interconnect on subnet "192.168.0.0":
node2 eth0:192.168.0.12
node1 eth0:192.168.0.11
Suitable interfaces for the private interconnect on subnet "10.101.0.0":
node2 eth1:10.101.0.12
node1 eth1:10.101.0.11
ERROR:
Could not find a suitable set of interfaces for VIPs.
Result: Node connectivity check failed.
Checking system requirements for 'crs'...
Check: Total memory
Node Name Available Required Comment
------------ ------------------------ ------------------------ ----------
node2 689.38MB (705924KB) 512MB (524288KB) passed
node1 689.38MB (705924KB) 512MB (524288KB) passed
Result: Total memory check passed.
Check: Free disk space in "/tmp" dir
Node Name Available Required Comment
------------ ------------------------ ------------------------ ----------
node2 4.22GB (4428784KB) 400MB (409600KB) passed
node1 4.22GB (4426320KB) 400MB (409600KB) passed
Result: Free disk space check passed.
Check: Swap space
Node Name Available Required Comment
------------ ------------------------ ------------------------ ----------
node2 2GB (2096472KB) 1GB (1048576KB) passed
node1 2GB (2096472KB) 1GB (1048576KB) passed
Result: Swap space check passed.
Check: System architecture
Node Name Available Required Comment
------------ ------------------------ ------------------------ ----------
node2 i686 i686 passed
node1 i686 i686 passed
Result: System architecture check passed.
Check: Kernel version
Node Name Available Required Comment
------------ ------------------------ ------------------------ ----------
node2 2.6.18-194.el5 2.4.21-15EL passed
node1 2.6.18-194.el5 2.4.21-15EL passed
Result: Kernel version check passed.
Check: Package existence for "make-3.79"
Node Name Status Comment
------------------------------ ------------------------------ ----------------
node2 make-3.81-3.el5 passed
node1 make-3.81-3.el5 passed
Result: Package existence check passed for "make-3.79".
Check: Package existence for "binutils-2.14"
Node Name Status Comment
------------------------------ ------------------------------ ----------------
node2 binutils-2.17.50.0.6-14.el5 passed
node1 binutils-2.17.50.0.6-14.el5 passed
Result: Package existence check passed for "binutils-2.14".
Check: Package existence for "gcc-3.2"
Node Name Status Comment
------------------------------ ------------------------------ ----------------
node2 gcc-4.1.2-48.el5 passed
node1 gcc-4.1.2-48.el5 passed
Result: Package existence check passed for "gcc-3.2".
Check: Package existence for "glibc-2.3.2-95.27"
Node Name Status Comment
------------------------------ ------------------------------ ----------------
node2 glibc-2.5-49 passed
node1 glibc-2.5-49 passed
Result: Package existence check passed for "glibc-2.3.2-95.27".
Check: Package existence for "compat-db-4.0.14-5"
Node Name Status Comment
------------------------------ ------------------------------ ----------------
node2 compat-db-4.2.52-5.1 passed
node1 compat-db-4.2.52-5.1 passed
Result: Package existence check passed for "compat-db-4.0.14-5".
Check: Package existence for "compat-gcc-7.3-2.96.128"
Node Name Status Comment
------------------------------ ------------------------------ ----------------
node2 missing failed
node1 missing failed
Result: Package existence check failed for "compat-gcc-7.3-2.96.128".
Check: Package existence for "compat-gcc-c++-7.3-2.96.128"
Node Name Status Comment
------------------------------ ------------------------------ ----------------
node2 missing failed
node1 missing failed
Result: Package existence check failed for "compat-gcc-c++-7.3-2.96.128".
Check: Package existence for "compat-libstdc++-7.3-2.96.128"
Node Name Status Comment
------------------------------ ------------------------------ ----------------
node2 missing failed
node1 missing failed
Result: Package existence check failed for "compat-libstdc++-7.3-2.96.128".
Check: Package existence for "compat-libstdc++-devel-7.3-2.96.128"
Node Name Status Comment
------------------------------ ------------------------------ ----------------
node2 missing failed
node1 missing failed
Result: Package existence check failed for "compat-libstdc++-devel-7.3-2.96.128".
Check: Package existence for "openmotif-2.2.3"
Node Name Status Comment
------------------------------ ------------------------------ ----------------
node2 openmotif-2.3.1-2.el5_4.1 passed
node1 openmotif-2.3.1-2.el5_4.1 passed
Result: Package existence check passed for "openmotif-2.2.3".
Check: Package existence for "setarch-1.3-1"
Node Name Status Comment
------------------------------ ------------------------------ ----------------
node2 setarch-2.0-1.1 passed
node1 setarch-2.0-1.1 passed
Result: Package existence check passed for "setarch-1.3-1".
Check: Group existence for "dba"
Node Name Status Comment
------------ ------------------------ ------------------------
node2 exists passed
node1 exists passed
Result: Group existence check passed for "dba".
Check: Group existence for "oinstall"
Node Name Status Comment
------------ ------------------------ ------------------------
node2 exists passed
node1 exists passed
Result: Group existence check passed for "oinstall".
Check: User existence for "nobody"
Node Name Status Comment
------------ ------------------------ ------------------------
node2 exists passed
node1 exists passed
Result: User existence check passed for "nobody".
System requirement failed for 'crs'
Pre-check for cluster services setup was unsuccessful on all the nodes.
The error "Could not find a suitable set of interfaces for VIPs." can safely be ignored: it is a bug, described in
detail on Metalink in Doc ID 338924.1, which is reproduced at the end of this article.
As for the packages reported as failed above, install them on the system wherever possible, as sketched below.
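A rough sketch of doing so (package names here are assumptions that vary by OS release; on RHEL/OEL 5 the 2.96-era
compat packages cluvfy looks for are superseded by the compat-*-34/33 series available from the OS media or a
configured repository):
[root@node1 ~]# rpm -qa | grep ^compat                                    # list the compat packages already present
[root@node1 ~]# yum install compat-gcc-34 compat-gcc-34-c++ compat-libstdc++-33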
2. Checks after installing Clusterware. Note that the cluvfy executed this time is the one under the installed Clusterware home
[oracle@node1 bin]$ pwd
/u01/app/oracle/product/10.2.0/crs_1/bin
[oracle@node1 bin]$ ./cluvfy stage -post crsinst -n node1,node2
Performing post-checks for cluster services setup
Checking node reachability...
Node reachability check passed from node "node1".
Checking user equivalence...
User equivalence check passed for user "oracle".
Checking Cluster manager integrity...
Checking CSS daemon...
Daemon status check passed for "CSS daemon".
Cluster manager integrity check passed.
Checking cluster integrity...
Cluster integrity check passed
Checking OCR integrity...
Checking the absence of a non-clustered configuration...
All nodes free of non-clustered, local-only configurations.
Uniqueness check for OCR device passed.
Checking the version of OCR...
OCR of correct Version "2" exists.
Checking data integrity of OCR...
Data integrity check for OCR passed.
OCR integrity check passed.
Checking CRS integrity...
Checking daemon liveness...
Liveness check passed for "CRS daemon".
Checking daemon liveness...
Liveness check passed for "CSS daemon".
Checking daemon liveness...
Liveness check passed for "EVM daemon".
Checking CRS health...
CRS health check passed.
CRS integrity check passed.
Checking node application existence...
Checking existence of VIP node application (required)
Check passed.
Checking existence of ONS node application (optional)
Check passed.
Checking existence of GSD node application (optional)
Check passed.
Post-check for cluster services setup was successful.
The checks above show that the Clusterware background daemons, the nodeapps resources, and the OCR are all in the
passed state; in other words, Clusterware has been installed successfully.
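As an optional cross-check (a minimal sketch using the standard 10g Clusterware commands; $ORA_CRS_HOME is assumed to
point at the installed CRS home), daemon health and registered resources can also be inspected directly:
[oracle@node1 ~]$ $ORA_CRS_HOME/bin/crsctl check crs     # health of the CSS, CRS and EVM daemons
[oracle@node1 ~]$ $ORA_CRS_HOME/bin/crs_stat -t          # tabular status of all registered resources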
3. cluvfy usage
[oracle@node1 ~]$ cluvfy -help    # the -help option prints cluvfy usage information
USAGE:
cluvfy [ -help ]
cluvfy stage { -list | -help }
cluvfy stage {-pre|-post} <stage-name> <stage-specific options> [-verbose]
cluvfy comp { -list | -help }
cluvfy comp <component-name> <component-specific options> [-verbose]
[oracle@node1 ~]$ cluvfy comp -list
USAGE:
cluvfy comp <component-name> <component-specific options> [-verbose]
Valid components are:
nodereach : checks reachability between nodes
nodecon : checks node connectivity
cfs : checks CFS integrity
ssa : checks shared storage accessibility
space : checks space availability
sys : checks minimum system requirements
clu : checks cluster integrity
clumgr : checks cluster manager integrity
ocr : checks OCR integrity
crs : checks CRS integrity
nodeapp : checks node applications existence
admprv : checks administrative privileges
peer : compares properties with peers
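Individual components can also be verified on demand. A brief sketch using the node names from this article (the
option values are illustrative):
[oracle@node1 ~]$ cluvfy comp nodecon -n node1,node2 -verbose     # node connectivity only
[oracle@node1 ~]$ cluvfy comp sys -n node1,node2 -p crs -verbose  # minimum system requirements for CRS
[oracle@node1 ~]$ cluvfy comp ssa -n node1,node2                  # shared storage accessibility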
4. Metalink Doc ID 338924.1
CLUVFY Fails With Error: Could not find a suitable set of interfaces for VIPs [ID 338924.1]
Modified: 29-JUL-2010    Type: PROBLEM    Status: PUBLISHED
Applies to:
Oracle Server - Enterprise Edition - Version: 10.2.0.1 to 11.1.0.7 - Release: 10.2 to 11.1
Information in this document applies to any platform.
Symptoms
When running cluvfy to check network connectivity at various stages of the RAC/CRS installation process, cluvfy fails
with errors similar to the following:
=========================
Suitable interfaces for the private interconnect on subnet "10.0.0.0":
node1 eth0:10.0.0.1
node2 eth0:10.0.0.2
Suitable interfaces for the private interconnect on subnet "192.168.1.0":
node1_internal eth1:192.168.1.2
node2_internal eth1:192.168.1.1
ERROR:
Could not find a suitable set of interfaces for VIPs.
Result: Node connectivity check failed.
========================
On Oracle 11g, you may still see a warning in some cases, such as:
========================
WARNING:
Could not find a suitable set of interfaces for VIPs.
========================
Output seen will be comparable to that noted above, but IP addresses and node_names may be different - i.e. the node names
of 'node1','node2','node1_internal','node2_internal' will be substituted with your actual Public and Private node names.
A second problem that will be encountered in this situation is that at the end of the CRS installation for 10gR2, VIPCA
will be run automatically in silent mode, as one of the 'optional' configuration assistants. In this scenario, the VIPCA
will fail at the end of the CRS installation. The InstallActions log will show output such as:
Oracle CRS stack installed and running under init(1M)
Running vipca(silent) for configuring nodeapps
The given interface(s), "eth0" is not public. Public interfaces should
be used to configure virtual IPs.
Cause
This issue occurs due to incorrect assumptions made in cluvfy and vipca based on an Internet Best Practice document -
"RFC1918 - Address Allocation for Private Internets". This Internet Best Practice RFC can be viewed here:
http://www.faqs.org/rfcs/rfc1918.html
From an Oracle perspective, this issue is tracked in BUG:4437727
Per BUG:4437727, cluvfy makes an incorrect assumption based on RFC 1918 that any IP address/subnet that begins with any
of the following octets is private and hence may not be fit for use as a VIP:
172.16.x.x through 172.31.x.x
192.168.x.x
10.x.x.x
However, this assumption does not take into account that it is possible to use these IPs as Public IP's on an internal
network (or intranet). Therefore, it is very common to use IP addresses in these ranges as Public IP's and as Virtual
IP(s), and this is a supported configuration.
Solution
The solution to the error above that is given when running 'cluvfy' is to simply ignore it if you intend to use an IP in
one of the above ranges for your VIP. The installation and configuration can continue with no corrective action necessary.
One result of this, as noted in the problem section, is that the silent VIPCA will fail at the end of the 10gR2 CRS
installation. This is because VIPCA is running in silent mode and is trying to notify that the IPs that were provided
may not be fit to be used as VIP(s). To correct this, you can manually execute the VIPCA GUI after the CRS installation
is complete. VIPCA needs to be executed from the CRS_HOME/bin directory as the 'root' user (on Unix/Linux) or as a
Local Administrator (on Windows):
$ cd $ORA_CRS_HOME/bin
$ ./vipca
Follow the prompts for VIPCA to select the appropriate interface for the public network, and assign the VIPs for each node
when prompted. Manually running VIPCA in the GUI mode, using the same IP addresses, should complete successfully.
Note that if you patch to 10.2.0.3 or above, VIPCA will run correctly in silent mode. The command to re-run vipca
silently can be found in CRS_HOME/cfgtoollogs in the file 'configToolAllCommands' or 'configToolFailedCommands'.
Thus, in the case of a new install, the silent mode VIPCA command will fail after the 10.2.0.1 base release install,
but once the CRS Home is patched to 10.2.0.3 or above, vipca can be re-run silently, without the need to invoke the
GUI tool.
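A short sketch of locating the recorded command after patching (the grep is just an assumed way to find the vipca
invocation inside the file):
[root@node1 ~]# cd $ORA_CRS_HOME/cfgtoollogs
[root@node1 cfgtoollogs]# grep -i vipca configToolFailedCommands   # shows the exact silent vipca command to re-run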
References
NOTE:316583.1 - VIPCA FAILS COMPLAINING THAT INTERFACE IS NOT PUBLIC
The note above is lengthy; the practical fix is given below.
On the node where the error occurs, edit the vipca file:
[root@node2 ~]# vi $ORA_CRS_HOME/bin/vipca
Locate the following lines:
#Remove this workaround when the bug 3937317 is fixed
arch=`uname -m`
if [ "$arch" = "i686" -o "$arch" = "ia64" ]
then
     LD_ASSUME_KERNEL=2.4.19
     export LD_ASSUME_KERNEL
fi
#End workaround
Add a new line right after the fi:
unset LD_ASSUME_KERNEL
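After the edit, the tail of the workaround block should look like this (a sketch of the expected result):
fi
unset LD_ASSUME_KERNEL
#End workaround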
Do the same for the srvctl file:
[root@node2 ~]# vi $ORA_CRS_HOME/bin/srvctl
Locate the following lines:
LD_ASSUME_KERNEL=2.4.19
export LD_ASSUME_KERNEL
Likewise, add a new line right after them:
unset LD_ASSUME_KERNEL
Save and exit, then re-run root.sh on the failed node.
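As a quick sanity check before re-running root.sh (a sketch assuming the standard CRS home layout), confirm the unset
lines are in place in both scripts:
[root@node2 ~]# grep -n "LD_ASSUME_KERNEL" $ORA_CRS_HOME/bin/vipca $ORA_CRS_HOME/bin/srvctl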