This exercise mainly follows Sansi's (三思) documentation. A two-node RAC was previously installed using udev-managed disks; now we will test adding a node and then removing it.
Adding a Node:
1. Configure /etc/hosts; the file must be identical on all three nodes
[root@rac01 ~]# more /etc/hosts
# that require network functionality will fail.
127.0.0.1 localhost.localdomain localhost
192.168.56.10 rac01
192.168.56.11 rac01-vip
10.10.1.10 rac01-priv
192.168.56.20 rac02
192.168.56.21 rac02-vip
10.10.1.20 rac02-priv
192.168.56.30 rac03
192.168.56.31 rac03-vip
10.10.1.30 rac03-priv
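Since the hosts file has to be byte-for-byte identical on every node, a quick checksum comparison catches drift early. A minimal sketch (the helper name and staging path are my own; node names come from the listing above):

```shell
#!/bin/sh
# hosts_match FILE1 FILE2 -> prints OK if the two copies are identical, MISMATCH otherwise.
hosts_match() {
    a=$(md5sum "$1" | awk '{print $1}')
    b=$(md5sum "$2" | awk '{print $1}')
    if [ "$a" = "$b" ]; then echo OK; else echo MISMATCH; fi
}

# Across the cluster, fetch each remote copy first, e.g.:
#   scp rac03:/etc/hosts /tmp/hosts.rac03 && hosts_match /etc/hosts /tmp/hosts.rac03
```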
2. Configure SSH user equivalence
[oracle@rac03 ~]$ mkdir .ssh
[oracle@rac03 ~]$ chmod 700 .ssh
[oracle@rac03 ~]$ ssh-keygen -t rsa
Generating public/private rsa key pair.
Enter file in which to save the key (/home/oracle/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/oracle/.ssh/id_rsa.
Your public key has been saved in /home/oracle/.ssh/id_rsa.pub.
The key fingerprint is:
4d:20:14:95:1a:9b:0d:9a:7a:ee:70:66:0c:27:d8:b8 oracle@rac03
[oracle@rac03 ~]$ ssh-keygen -t dsa
Generating public/private dsa key pair.
Enter file in which to save the key (/home/oracle/.ssh/id_dsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/oracle/.ssh/id_dsa.
Your public key has been saved in /home/oracle/.ssh/id_dsa.pub.
The key fingerprint is:
7d:a3:7c:70:95:9e:e4:33:e2:49:7e:23:a6:35:c2:c2 oracle@rac03
[oracle@rac01 ~]$ ssh rac03 cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
The authenticity of host 'rac03 (192.168.56.30)' can't be established.
RSA key fingerprint is 46:9c:7d:de:a0:c7:01:c0:c0:4f:bc:3c:e5:fa:8e:2d.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'rac03,192.168.56.30' (RSA) to the list of known hosts.
oracle@rac03's password:
[oracle@rac01 ~]$ ssh rac03 cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys
oracle@rac03's password:
Copy the authorized_keys file to nodes rac02 and rac03:
[oracle@rac01 ~]$ scp .ssh/authorized_keys rac02:~/.ssh/authorized_keys
authorized_keys 100% 3004 2.9KB/s 00:00
[oracle@rac01 ~]$ scp .ssh/authorized_keys rac03:~/.ssh/authorized_keys
oracle@rac03's password:
authorized_keys 100% 3004 2.9KB/s 00:00
Then test SSH connectivity from each node:
ssh rac01 date;
ssh rac02 date;
ssh rac03 date;
ssh rac01-priv date;
ssh rac02-priv date;
ssh rac03-priv date;
Ensure no password prompt appears for any of them.
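The six checks above can be wrapped in a small loop. With BatchMode=yes, ssh exits with an error rather than prompting, so any host that still wants a password shows up immediately. A sketch (the function name is mine):

```shell
#!/bin/sh
# Print one non-interactive ssh probe per address; pipe the output to sh to run them.
# BatchMode=yes makes ssh fail outright instead of asking for a password.
equivalence_probes() {
    for h in rac01 rac02 rac03 rac01-priv rac02-priv rac03-priv; do
        echo "ssh -o BatchMode=yes $h date"
    done
}

# Run the probes:  equivalence_probes | sh
```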
3. Add ASM disks via udev
[root@rac03 ~]# for i in c d
> do
> echo "KERNEL==\"sd*\", BUS==\"scsi\", PROGRAM==\"/sbin/scsi_id -g -u -s %p\", RESULT==\"`scsi_id -g -u -s /block/sd$i`\", NAME=\"asm-disk$i\", OWNER=\"oracle\", GROUP=\"oinstall\", MODE=\"0660\""
> done
KERNEL=="sd*", BUS=="scsi", PROGRAM=="/sbin/scsi_id -g -u -s %p", RESULT=="SATA_VBOX_HARDDISK_VB92427d9b-4b6ce89f_", NAME="asm-diskc", OWNER="oracle", GROUP="oinstall", MODE="0660"
KERNEL=="sd*", BUS=="scsi", PROGRAM=="/sbin/scsi_id -g -u -s %p", RESULT=="SATA_VBOX_HARDDISK_VBdbe9821e-c877d6d6_", NAME="asm-diskd", OWNER="oracle", GROUP="oinstall", MODE="0660"
[root@rac03 ~]# vi /etc/udev/rules.d/99-oracle-asmdevices.rules
KERNEL=="sd*", BUS=="scsi", PROGRAM=="/sbin/scsi_id -g -u -s %p", RESULT=="SATA_VBOX_HARDDISK_VB92427d9b-4b6ce89f_", NAME="asm-diskc", OWNER="oracle", GROUP="oinstall", MODE="0660"
KERNEL=="sd*", BUS=="scsi", PROGRAM=="/sbin/scsi_id -g -u -s %p", RESULT=="SATA_VBOX_HARDDISK_VBdbe9821e-c877d6d6_", NAME="asm-diskd", OWNER="oracle", GROUP="oinstall", MODE="0660"
[root@rac03 ~]# start_udev
Starting udev: [ OK ]
[oracle@rac03 rules.d]$ ls /dev/asm*
/dev/asm-diskc /dev/asm-diskd
[oracle@rac03 rules.d]$ ls /dev/raw/ra*
/dev/raw/raw1 /dev/raw/raw2
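The rules above request owner oracle, group oinstall, and mode 0660 on each device node, and it is worth confirming udev actually applied them. A sketch using GNU stat (the helper name is mine):

```shell
#!/bin/sh
# perms_of PATH -> prints "OWNER:GROUP MODE", e.g. "oracle:oinstall 660".
perms_of() {
    stat -c '%U:%G %a' "$1"
}

# On the new node, check every ASM device, e.g.:
#   for d in /dev/asm-disk?; do echo "$d -> $(perms_of "$d")"; done
```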
4. Install the clusterware on the new node
Step 1: On the first node, go to $CRS_HOME/oui/bin and run addNode.sh:
[oracle@rac01 bin]$ export LANG=en_US
[oracle@rac01 bin]$ export DISPLAY=192.168.56.1:0.0
[oracle@rac01 bin]$ ls
addLangs.sh addNode.sh lsnodes ouica.sh resource runConfig.sh runInstaller runInstaller.sh
[oracle@rac01 bin]$ ./addNode.sh
Click Next, enter the third node's information, and proceed with the installation.
When the installation completes, run the three scripts as root on the indicated nodes, in the order prompted:
[root@rac03 ~]# /u01/oraInventory/orainstRoot.sh
Changing permissions of /u01/oraInventory to 770.
Changing groupname of /u01/oraInventory to oinstall.
The execution of the script is complete
[root@rac01 ~]# /u01/crs/install/rootaddnode.sh
clscfg: EXISTING configuration version 3 detected.
clscfg: version 3 is 10G Release 2.
Attempting to add 1 new nodes to the configuration
Using ports: CSS=49895 CRS=49896 EVMC=49898 and EVMR=49897.
node :
node 3: rac03 rac03-priv rac03
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
/u01/crs/bin/srvctl add nodeapps -n rac03 -A rac03-vip/255.255.255.0/eth0 -o /u01/crs
[root@rac03 crs]# ./root.sh
WARNING: directory '/u01' is not owned by root
Checking to see if Oracle CRS stack is already configured
Setting the permissions on OCR backup directory
Setting up NS directories
Oracle Cluster Registry configuration upgraded successfully
WARNING: directory '/u01' is not owned by root
clscfg: EXISTING configuration version 3 detected.
clscfg: version 3 is 10G Release 2.
Successfully accumulated necessary OCR keys.
Using ports: CSS=49895 CRS=49896 EVMC=49898 and EVMR=49897.
node :
node 1: rac01 rac01-priv rac01
node 2: rac02 rac02-priv rac02
clscfg: Arguments check out successfully.
NO KEYS WERE WRITTEN. Supply -force parameter to override.
-force is destructive and will destroy any previous cluster
configuration.
Oracle Cluster Registry for cluster has already been initialized
Startup will be queued to init within 90 seconds.
Adding daemons to inittab
Expecting the CRS daemons to be up within 600 seconds.
CSS is active on these nodes.
rac01
rac02
rac03
CSS is active on all nodes.
Waiting for the Oracle CRSD and EVMD to start
Waiting for the Oracle CRSD and EVMD to start
Waiting for the Oracle CRSD and EVMD to start
Oracle CRS stack installed and running under init(1M)
Running vipca(silent) for configuring nodeapps
IP address "rac01-vip" is already in use. Enter an unused IP address. ####
Step 2: Next, the new node's ONS (Oracle Notification Services) configuration must be written into the OCR (Oracle Cluster Registry). On node 1, run:
[oracle@rac01 bin]$ pwd
/u01/crs/bin
[oracle@rac01 bin]$ ./racgons add_config rac03:6200
Note: the port number for rac03 can be found in that node's ons.config file; the value to specify here is remoteport.
[root@rac03 conf]# pwd
/u01/crs/opmn/conf
[root@rac03 conf]# more ons.config
localport=6113
remoteport=6200
loglevel=3
useocr=on
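Rather than reading ons.config by hand, the remoteport value can be extracted and fed straight into racgons. A sketch (the helper is my own):

```shell
#!/bin/sh
# ons_remoteport FILE -> prints the remoteport value from an ons.config file.
ons_remoteport() {
    awk -F= '$1 == "remoteport" { print $2 }' "$1"
}

# e.g. on node 1:
#   $ORA_CRS_HOME/bin/racgons add_config rac03:$(ons_remoteport /u01/crs/opmn/conf/ons.config)
```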
At this point the clusterware configuration for the new node is complete. To verify the installation, run the cluvfy utility on the new node, for example:
[oracle@rac01 ~]$ /u01/crs/bin/cluvfy stage -post crsinst -n rac03 -verbose
5. Copy the database software to the new node:
Step 1: On the first node, run addNode.sh from $ORACLE_HOME/oui/bin:
[oracle@rac01 bin]$ pwd
/u01/oracle/oui/bin
[oracle@rac01 bin]$ ./addNode.sh
Click Next; when the installation finishes, run root.sh on node 3:
[root@rac03 bin]# /u01/oracle/root.sh
Running Oracle10 root.sh script...
The following environment variables are set as:
ORACLE_OWNER= oracle
ORACLE_HOME= /u01/oracle
Enter the full pathname of the local bin directory: [/usr/local/bin]:
The file "dbhome" already exists in /usr/local/bin. Overwrite it? (y/n)
[n]: y
The file "oraenv" already exists in /usr/local/bin. Overwrite it? (y/n)
[n]: y
Copying oraenv to /usr/local/bin ...
The file "coraenv" already exists in /usr/local/bin. Overwrite it? (y/n)
[n]: y
Copying coraenv to /usr/local/bin ...
Creating /etc/oratab file...
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root.sh script.
Now product-specific root actions will be performed.
6. Configure the listener
On the third node run netca, choose cluster listener configuration, select node rac03, and complete the configuration.
7. Create the new instance
Use dbca on the first node to add the instance:
[oracle@rac01 ~]$ dbca
cluster database -> instance management -> add an instance
-> enter the sys credentials -> select the node for the new instance (rac03) and the instance name brentt3
-> start creating the instance. If prompted about ASM-related instances, click Yes since this is an ASM environment. Finally click No to skip further operations and finish.
The node addition is now complete.
[root@rac01 bin]# crs_stat -t
Name Type Target State Host
------------------------------------------------------------
ora....t1.inst application ONLINE ONLINE rac01
ora....t2.inst application ONLINE ONLINE rac02
ora....t3.inst application ONLINE ONLINE rac03
ora.brentt.db application ONLINE ONLINE rac01
ora....SM1.asm application ONLINE ONLINE rac01
ora....01.lsnr application ONLINE ONLINE rac01
ora.rac01.gsd application ONLINE ONLINE rac01
ora.rac01.ons application ONLINE ONLINE rac01
ora.rac01.vip application ONLINE ONLINE rac01
ora....SM2.asm application ONLINE ONLINE rac02
ora....02.lsnr application ONLINE ONLINE rac02
ora.rac02.gsd application ONLINE ONLINE rac02
ora.rac02.ons application ONLINE ONLINE rac02
ora.rac02.vip application ONLINE ONLINE rac02
ora....SM3.asm application ONLINE ONLINE rac03
ora....03.lsnr application ONLINE ONLINE rac03
ora.rac03.gsd application ONLINE ONLINE rac03
ora.rac03.ons application ONLINE ONLINE rac03
ora.rac03.vip application ONLINE ONLINE rac03
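With nineteen resources in the listing it is easy to miss one, so a short awk filter can summarize the crs_stat output per host and confirm rac03 carries its expected resources. A sketch (the function name is mine):

```shell
#!/bin/sh
# Read `crs_stat -t` style rows on stdin and print "<host> <count>" for each
# host's ONLINE application resources (columns: Name Type Target State Host).
online_per_host() {
    awk '$2 == "application" && $4 == "ONLINE" { n[$5]++ }
         END { for (h in n) print h, n[h] }' | sort
}

# usage:  crs_stat -t | online_per_host
```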
=========================================================================
Removing a Node:
Note: all of the following operations are performed on a node that will be kept; never run them on the node being removed!
1. Remove the database instance
Run dbca on the first node:
cluster database -> instance management -> delete an instance -> select the rac03 instance -> perform another operation? No.
2. Remove the ASM instance
[root@rac01 bin]# srvctl stop asm -n rac03
[root@rac01 bin]# srvctl remove asm -n rac03
3. Remove the listener
[oracle@rac01 bin]$ netca
Click Next through the screens, select the rac03 node, and finish.
4. Remove the node
Step 1: Stop the nodeapps on rac03:
[root@rac01 bin]# srvctl stop nodeapps -n rac03
Remove rac03's database software. First, on the retained node rac01, update the Oracle inventory:
[oracle@rac01 bin]$ $ORACLE_HOME/oui/bin/runInstaller -updateNodeList ORACLE_HOME=$ORACLE_HOME "CLUSTER_NODES=rac01,rac02"
Starting Oracle Universal Installer...
No pre-requisite checks found in oraparam.ini, no system pre-requisite checks will be executed.
'UpdateNodeList' was successful.
Step 2: Then on rac03, the node being removed, update its local inventory:
[oracle@rac03 ~]$ $ORACLE_HOME/oui/bin/runInstaller -updateNodeList ORACLE_HOME=$ORACLE_HOME "CLUSTER_NODES=rac03" -local
Step 3: The database software on that node can now be deinstalled:
[oracle@rac03 ~]$ $ORACLE_HOME/oui/bin/runInstaller -deinstall
In the OUI window that appears, choose to remove software, select oradb10g_home, click Remove, and exit when done.
If this server will not host an Oracle database again, the /etc/oratab file can be removed as well:
[oracle@rac03 ~]$ rm /etc/oratab
Step 4: Remove the ONS configuration:
On the first node, run the racgons command to remove the ONS entry:
[root@rac01 bin]# ./racgons remove_config rac03:6200
racgons: Existing key value on rac03 = 6200.
racgons: rac03:6200 removed from OCR.
Step 5: Remove the nodeapps:
[root@rac01 bin]# srvctl remove nodeapps -n rac03
Please confirm that you want to remove the node-level applications on node rac03 (y/[n]) y
Step 6: Remove the rac03 clusterware:
First, again on any retained node:
[oracle@rac01 bin]$ $ORA_CRS_HOME/oui/bin/runInstaller -updateNodeList ORACLE_HOME=$ORA_CRS_HOME "CLUSTER_NODES=rac01,rac02" CRS=TRUE
Starting Oracle Universal Installer...
No pre-requisite checks found in oraparam.ini, no system pre-requisite checks will be executed.
'UpdateNodeList' was successful.
Then run on the third node:
[oracle@rac03 ~]$ $ORA_CRS_HOME/oui/bin/runInstaller -updateNodeList ORACLE_HOME=$ORA_CRS_HOME "CLUSTER_NODES=rac03" CRS=TRUE -local
Starting Oracle Universal Installer...
No pre-requisite checks found in oraparam.ini, no system pre-requisite checks will be executed.
'UpdateNodeList' was successful.
Then run the deinstall:
[oracle@rac03 ~]$ $ORA_CRS_HOME/oui/bin/runInstaller -deinstall
In the window that appears, select the crs home and click Remove.
When the operation completes, click Close. The clusterware software has now been removed from the target node. If you wish (or need) to, you can also clean up the remaining traces Oracle left behind, including but not limited to:
Remove the $ORACLE_BASE/oraInventory directory
Remove the CRS entries from the /etc/inittab file (do not delete the file itself)
Remove the /var/tmp/.oracle directory
Remove the Oracle startup/shutdown scripts, e.g. /etc/init.d/init* and /etc/rc?.d/*init.crs
Remove the /etc/oracle directory
Clear any Oracle-related jobs from crontab
Clear the Oracle-related environment variables from the oracle user's profile
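The cleanup above can be scripted. The sketch below takes an optional root prefix so it can be rehearsed against a staging directory before touching the real filesystem; the oraInventory path follows the orainstRoot.sh output earlier, and every path is an assumption to review for your install. It strips the CRS lines from /etc/inittab rather than deleting the file, since init depends on it:

```shell
#!/bin/sh
# cleanup_oracle_traces [ROOT] -- remove leftover Oracle files under ROOT
# (default: the live filesystem). All paths are assumptions for this install;
# review carefully before running as root.
cleanup_oracle_traces() {
    ROOT="${1:-}"
    rm -rf "$ROOT/var/tmp/.oracle" \
           "$ROOT/etc/oracle" \
           "$ROOT/u01/oraInventory"          # inventory path from orainstRoot.sh above
    rm -f  "$ROOT"/etc/init.d/init.* "$ROOT"/etc/rc?.d/*init.crs
    # Strip the CRS respawn entries instead of deleting /etc/inittab itself.
    if [ -f "$ROOT/etc/inittab" ]; then
        sed -i '/init\.cssd\|init\.crsd\|init\.evmd\|init\.crs/d' "$ROOT/etc/inittab"
    fi
}
```

Crontab entries and the oracle user's profile still need to be edited by hand.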
Step 7: Remove the node information from the OCR:
Run on any retained node:
[root@rac01 bin]# ./olsnodes -n -i
rac01 1 rac01-vip
rac02 2 rac02-vip
rac03 3
Although the Oracle software and clusterware have been removed and the node list updated, the OCR still retains rac03's node information, so it must be deleted as well. Run the following script:
[root@rac01 install]# pwd
/u01/crs/install
[root@rac01 install]# ./rootdeletenode.sh rac03,3
CRS-0210: Could not find resource 'ora.rac03.LISTENER_RAC03.lsnr'.
CRS-0210: Could not find resource 'ora.rac03.ons'.
CRS-0210: Could not find resource 'ora.rac03.vip'.
CRS-0210: Could not find resource 'ora.rac03.gsd'.
CRS-0210: Could not find resource ora.rac03.vip.
CRS nodeapps are deleted successfully
clscfg: EXISTING configuration version 3 detected.
clscfg: version 3 is 10G Release 2.
Successfully deleted 14 values from OCR.
Key SYSTEM.css.interfaces.noderac03 marked for deletion is not there. Ignoring.
Successfully deleted 5 keys from OCR.
Node deletion operation successful.
'rac03,3' deleted successfully
[root@rac01 bin]# ./olsnodes -n -i
rac01 1 rac01-vip
rac02 2 rac02-vip
At this point, the node removal is complete.
Source: ITPUB blog, http://blog.itpub.net/26675752/viewspace-1060448/