Oracle 11g RAC: Adding a Node

References:
http://www.5ienet.com/note/html/sracnode/index.shtml

http://www.askmaclean.com/archives/add-node-to-11-2-0-2-grid-infrastructure.html


1. Adding the RAC node:

As the grid user, on any existing node of the current RAC cluster, run the oui/bin/addNode.sh script from $ORA_CRS_HOME.



Run the pre-check for the node addition:

[grid@rac1 bin]$ ./addNode.sh -pre  CLUSTER_NEW_NODES={rac3} CLUSTER_NEW_VIRTUAL_HOSTNAMES={rac3-vip}


Kernel parameter check for "file-max" failed
Check failed on the following nodes:
    rac3
Kernel parameter check for "ip_local_port_range" passed
Kernel parameter check for "rmem_default" failed
Check failed on the following nodes:
    rac3
Kernel parameter check for "rmem_max" failed
Check failed on the following nodes:
    rac3
Kernel parameter check for "wmem_default" failed
Check failed on the following nodes:
    rac3
Kernel parameter check for "wmem_max" failed
Check failed on the following nodes:
    rac3
Kernel parameter check for "aio-max-nr" failed
Check failed on the following nodes:
    rac3

Fix: many write-ups use 6815744 for fs.file-max, but the check kept failing, so in desperation I appended a 0. The kernel parameters were set to:

fs.file-max = 68157440
net.ipv4.ip_local_port_range = 1024 65535
net.core.rmem_default = 4194304
net.core.wmem_default = 262144
net.core.rmem_max = 16777216
net.core.wmem_max = 16777216
fs.aio-max-nr = 1048576
kernel.sem = 250 32000 100 128
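One way to get the settings above onto rac3 is to stage them in a file, review it, and then merge it into /etc/sysctl.conf and load it; this is a sketch (the staging path is arbitrary, and the final merge requires root):

```shell
#!/bin/sh
# Stage the kernel settings from this note in a reviewable file.
CONF=/tmp/oracle-rac-sysctl.conf

cat > "$CONF" <<'EOF'
fs.file-max = 68157440
net.ipv4.ip_local_port_range = 1024 65535
net.core.rmem_default = 4194304
net.core.wmem_default = 262144
net.core.rmem_max = 16777216
net.core.wmem_max = 16777216
fs.aio-max-nr = 1048576
kernel.sem = 250 32000 100 128
EOF

echo "staged kernel parameters in $CONF"

# As root on rac3, merge and load into the running kernel:
# cat "$CONF" >> /etc/sysctl.conf
# sysctl -p
```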




PRVF-5636 : The DNS response time for an unreachable node exceeded "15000" ms on the following nodes: rac1,rac3

Synchronize the clock with rac1:
ntpdate -u rac1


PRVF-5436 : The NTP daemon running on one or more nodes is missing the slewing option "-x"

Fix: edit /etc/sysconfig/ntpd and add -x to the options:
# Drop root to id 'ntp:ntp' by default.
OPTIONS=" -u ntp:ntp -x -p /var/run/ntpd.pid"
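After the edit, ntpd has to be restarted before the check will pass. A small helper (a sketch; the file path is the RHEL/OEL default) confirms the -x flag is actually present in the sysconfig file before you restart:

```shell
#!/bin/sh
# Returns 0 if the OPTIONS line in an ntpd sysconfig file enables slew mode (-x).
ntpd_has_slew() {
    grep -q '^OPTIONS=.*-x' "$1" 2>/dev/null
}

if ntpd_has_slew /etc/sysconfig/ntpd; then
    echo "-x present; apply with: service ntpd restart"
else
    echo "-x missing (or file absent); add it to OPTIONS, then restart ntpd"
fi
```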


./addNode.sh   CLUSTER_NEW_NODES={rac3} CLUSTER_NEW_VIRTUAL_HOSTNAMES={rac3-vip} > add_node.log 2>&1

After it completes, run the following on the new node:

[root@rac3 ~]# /opt/app/grid/root.sh
Performing root user operation for Oracle 11g

The following environment variables are set as:
    ORACLE_OWNER= grid
    ORACLE_HOME=  /opt/app/grid

Enter the full pathname of the local bin directory: [/usr/local/bin]:
   Copying dbhome to /usr/local/bin ...
   Copying oraenv to /usr/local/bin ...
   Copying coraenv to /usr/local/bin ...


Creating /etc/oratab file...
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Using configuration parameter file: /opt/app/grid/crs/install/crsconfig_params
Creating trace directory
User ignored Prerequisites during installation
OLR initialization - successful
Adding Clusterware entries to inittab
CRS-2672: Attempting to start 'ora.mdnsd' on 'rac3'
CRS-2676: Start of 'ora.mdnsd' on 'rac3' succeeded
CRS-2672: Attempting to start 'ora.gpnpd' on 'rac3'
CRS-2676: Start of 'ora.gpnpd' on 'rac3' succeeded
CRS-2672: Attempting to start 'ora.cssdmonitor' on 'rac3'
CRS-2672: Attempting to start 'ora.gipcd' on 'rac3'
CRS-2676: Start of 'ora.cssdmonitor' on 'rac3' succeeded
CRS-2676: Start of 'ora.gipcd' on 'rac3' succeeded
CRS-2672: Attempting to start 'ora.cssd' on 'rac3'
CRS-2672: Attempting to start 'ora.diskmon' on 'rac3'
CRS-2676: Start of 'ora.diskmon' on 'rac3' succeeded
CRS-2676: Start of 'ora.cssd' on 'rac3' succeeded

Failed to create disk group CRS; the returned messages were:
ORA-15018: diskgroup cannot be created
ORA-15072: command requires at least 2 regular failure groups, discovered only 0
ORA-15080: synchronous I/O operation to a disk failed


Configuration of ASM ... failed
see asmca logs at /opt/app/oracle/cfgtoollogs/asmca for details
Did not succssfully configure and start ASM at /opt/app/grid/crs/install/crsconfig_lib.pm line 6763.
/opt/app/grid/perl/bin/perl -I/opt/app/grid/perl/lib -I/opt/app/grid/crs/install /opt/app/grid/crs/install/rootcrs.pl execution failed


This is the same error that occurred during the initial installation, when running /opt/app/grid/root.sh on the second node.

Solution:

1. Modify the /etc/sysconfig/oracleasm with:
       ORACLEASM_SCANORDER="dm"
       ORACLEASM_SCANEXCLUDE="sd"
2. restart the asmlib by :
       # /etc/init.d/oracleasm restart

3. Re-run root.sh on the new node
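Steps 1 and 2 can be scripted; this is a sketch that applies the scan-order fix with sed to whatever file path is passed in, so it can be dry-run on a copy before touching the real /etc/sysconfig/oracleasm (which requires root):

```shell
#!/bin/sh
# Restrict ASMLib to scanning multipath (dm) devices and exclude plain sd
# devices, as in the fix above. Assumes the two ORACLEASM_* lines already
# exist in the file; sed rewrites them in place.
set_asmlib_scan_order() {
    sed -i \
        -e 's/^ORACLEASM_SCANORDER=.*/ORACLEASM_SCANORDER="dm"/' \
        -e 's/^ORACLEASM_SCANEXCLUDE=.*/ORACLEASM_SCANEXCLUDE="sd"/' \
        "$1"
}

# On the new node, as root:
# set_asmlib_scan_order /etc/sysconfig/oracleasm
# /etc/init.d/oracleasm restart
# /opt/app/grid/root.sh
```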



2. Installing the database software on the new node:


After the new node has been added, as the oracle user, run the following command from $ORACLE_HOME/oui/bin:

./addNode.sh  CLUSTER_NEW_NODES={m3} CLUSTER_NEW_VIRTUAL_HOSTNAMES={m3-vip}

Once this command finishes, the database software is installed on the new node.


(During execution, the error "newly added node m3 is not accessible" appeared.

After testing SSH user equivalence once more with

ssh m3 date

ssh m3-private date

the problem was resolved.)
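The equivalence test above can be looped over all hostnames at once; this sketch uses BatchMode so a missing or unconfigured key fails immediately instead of prompting for a password (the hostnames are the ones from this note):

```shell
#!/bin/sh
# Verify passwordless SSH to each listed host; print ok / NOT working per host.
check_ssh_equiv() {
    for host in "$@"; do
        if ssh -o BatchMode=yes -o ConnectTimeout=5 "$host" date >/dev/null 2>&1; then
            echo "$host: ok"
        else
            echo "$host: SSH equivalence NOT working"
        fi
    done
}

check_ssh_equiv m3 m3-private
```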




3. Adding a database instance for the new node:

On any one node of the RAC cluster, as the oracle user, run dbca and choose:

RAC database

Instance Management

Add an Instance
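As an alternative to the dbca GUI walk-through above, dbca also has a silent mode that can add the instance non-interactively. This is a sketch only: ORCL and ORCL3 are placeholder database and instance names (substitute your own), and the assembled command should be reviewed, given the SYS password, and run as oracle on an existing node:

```shell
#!/bin/sh
# Build a dbca silent-mode command to add an instance on the new node.
NODE=rac3        # new node name (per the earlier sections of this note)
GDBNAME=ORCL     # placeholder global database name
INSTNAME=ORCL3   # placeholder instance name for the new node

CMD="dbca -silent -addInstance -nodeList $NODE -gdbName $GDBNAME -instanceName $INSTNAME -sysDBAUserName sys"
echo "$CMD"      # review, then append -sysDBAPassword <pwd> and execute
```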



