PowerHA SystemMirror 7.1 has many new features. In this article, I show you how to use the clmgr command to install, configure, and manage a PowerHA 7.1 cluster.
I. Installing PowerHA 7.1
1. Verify that the following filesets are installed on both nodes:
- [dbserv1@root]#lslpp -L rsct.basic.rte rsct.compat.basic.hacmp rsct.compat.clients.hacmp bos.adt.lib bos.adt.libm bos.adt.syscalls bos.ahafs bos.clvm.enh bos.cluster bos.data bos.net.tcp.client bos.net.tcp.server bos.rte.SRC bos.rte.libc bos.rte.libcfg bos.rte.libcur bos.rte.libpthreads bos.rte.lvm bos.rte.odm cas.agent
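If any of these filesets is missing, lslpp reports it as "not installed". A quick way to spot gaps is to filter for that message (a sketch; reuse the full fileset list from the command above):
- [dbserv1@root]#lslpp -L bos.cluster bos.ahafs bos.clvm.enh 2>&1 | grep -i "not installed"
Install any missing filesets from the AIX media before proceeding.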
2. Edit the /etc/hosts file on both nodes:
- #For Persistent
- 192.168.0.129 dbserv1-pers
- 192.168.0.130 dbserv2-pers
- #For Boot IP
- 192.168.1.130 dbserv2-boot
- 192.168.1.129 dbserv1-boot
- #For Node
- 10.20.144.129 dbserv1
- 10.20.144.130 dbserv2
- #For Service
- 10.20.144.251 oracle
- 10.20.144.252 db2
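Every label should resolve identically on both nodes. A quick sanity check with the standard host command (run it on each node):
- [dbserv1@root]#for h in dbserv1 dbserv2 dbserv1-boot dbserv2-boot dbserv1-pers dbserv2-pers oracle db2; do host $h; done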
3. Install the PowerHA 7.1 filesets on both nodes:
- [root@dbserv1 /]#loopmount -i PowerHA_v7.1.iso -o "-V cdrfs -o ro" -m /mnt
- [root@dbserv1 /]#installp -aXYg -d /mnt all
4. Add the boot, persistent, and node IP addresses to the /etc/cluster/rhosts file on both nodes:
In PowerHA 7.1, the /usr/es/sbin/cluster/etc/rhosts file is replaced by /etc/cluster/rhosts.
- [dbserv1@root]#cat /etc/cluster/rhosts
- 192.168.1.130
- 192.168.1.129
- 192.168.0.129
- 192.168.0.130
- 10.20.144.129
- 10.20.144.130
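After populating /etc/cluster/rhosts, refresh the cluster communication daemon on both nodes so that it re-reads the file (clcomd runs under SRC in AIX 7.1):
- [dbserv1@root]#refresh -s clcomd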
5. Create the shared volume group:
- // On dbserv1, do the following steps
- [root@dbserv1 /]#mkvg -V 48 -y datavg hdisk2
- 0516-1254 mkvg: Changing the PVID in the ODM.
- datavg
- [root@dbserv1 /]#mklv -y lv02 -t jfs2 datavg 10G
- lv02
- [root@dbserv1 /]#crfs -v jfs2 -d /dev/lv02 -m /oradata
- [root@dbserv1 /]#chvg -C datavg //make datavg an Enhanced Concurrent Capable volume group if it was not created with the -C option of mkvg
- [root@dbserv1 /]#chvg -an datavg
- [root@dbserv1 /]#varyoffvg datavg
- [root@dbserv1 /]#exportvg datavg
- // On dbserv2, import the datavg volume group
- [root@dbserv2 /]#importvg -V 48 -y datavg hdisk2
- datavg
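Optionally, confirm on dbserv2 that the import worked and that the volume group is Enhanced Concurrent Capable (a quick sanity check):
- [root@dbserv2 /]#lsvg datavg | grep -i concurrent
- [root@dbserv2 /]#lspv | grep hdisk2 //the PVID shown here should match the one on dbserv1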
6. Verify the consistency of the installed PowerHA filesets on both nodes:
- [dbserv1@root]#lppchk -v
- [dbserv1@root]#lppchk -c cluster*
II. Creating a cluster
1. Create the cluster:
- [dbserv1@root]#clmgr add cluster MySysMirror nodes=dbserv1,dbserv2 repositories=hdisk4
- Cluster Name: MySysMirror
- Cluster Connection Authentication Mode: Standard
- Cluster Message Authentication Mode: None
- Cluster Message Encryption: None
- Use Persistent Labels for Communication: No
- Repository Disk: None
- Cluster IP Address:
- There are 2 node(s) and 2 network(s) defined
- NODE dbserv1:
- Network net_ether_01
- dbserv1 10.20.144.129
- Network net_ether_02
- dbserv1-boot 192.168.1.129
- NODE dbserv2:
- Network net_ether_01
- dbserv2 10.20.144.130
- Network net_ether_02
- dbserv2-boot 192.168.1.130
- No resource groups defined
- Initializing..
- Gathering cluster information, which may take a few minutes...
- Processing...
- Storing the following information in file
- /usr/es/sbin/cluster/etc/config/clvg_config
- ..................
- Cluster Name: MySysMirror
- Cluster Connection Authentication Mode: Standard
- Cluster Message Authentication Mode: None
- Cluster Message Encryption: None
- Use Persistent Labels for Communication: No
- Repository Disk: hdisk4
- Cluster IP Address:
- There are 2 node(s) and 2 network(s) defined
- NODE dbserv1:
- Network net_ether_01
- dbserv1 10.20.144.129
- Network net_ether_02
- dbserv1-boot 192.168.1.129
- NODE dbserv2:
- Network net_ether_01
- dbserv2 10.20.144.130
- Network net_ether_02
- dbserv2-boot 192.168.1.130
- No resource groups defined
- Warning: There is no cluster found.
- cllsclstr: No cluster defined.
- cllsclstr: Error reading configuration.
- Communication path dbserv1 discovered a new node. Hostname is dbserv1. Adding it to the configuration with Nodename dbserv1.
- Communication path dbserv2 discovered a new node. Hostname is dbserv2. Adding it to the configuration with Nodename dbserv2.
- Discovering IP Network Connectivity
- Retrieving data from available cluster nodes. This could take a few minutes.
- Start data collection on node dbserv1
- Start data collection on node dbserv2
- Collector on node dbserv2 completed
- Collector on node dbserv1 completed
- Data collection complete
- Completed 10 percent of the verification checks
- Completed 20 percent of the verification checks
- Completed 30 percent of the verification checks
- Completed 40 percent of the verification checks
- Completed 50 percent of the verification checks
- Completed 60 percent of the verification checks
- Completed 70 percent of the verification checks
- Discovered [6] interfaces
- Completed 80 percent of the verification checks
- Completed 90 percent of the verification checks
- Completed 100 percent of the verification checks
- IP Network Discovery completed normally
The cluster name is MySysMirror, and the disk hdisk4 is used as the repository disk. The CAA repository disk is a new feature in PowerHA 7.1.
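You can confirm that CAA built the underlying cluster and claimed the repository disk with the lscluster command (exact output varies by AIX level):
- [dbserv1@root]#lscluster -m //list the nodes in the CAA cluster
- [dbserv1@root]#lscluster -d //list the cluster disks; hdisk4 should show as the repository disk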
2. Add the service IP:
Here I use just one service IP for the Oracle database; it is called oracle.
- [dbserv1@root]#clmgr add service_ip oracle NETWORK=net_ether_01 NETMASK=255.255.255.0
3. Add the persistent IPs:
- [dbserv1@root]#clmgr add persistent_ip dbserv1-pers NETWORK=net_ether_02 NODE=dbserv1
- [dbserv1@root]#clmgr add persistent_ip dbserv2-pers NETWORK=net_ether_02 NODE=dbserv2
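To double-check what was just defined, clmgr can query any object it manages; for example:
- [dbserv1@root]#clmgr query service_ip oracle
- [dbserv1@root]#clmgr query persistent_ip dbserv1-pers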
4. Add the application controller:
- [dbserv1@root]#clmgr add application_controller oradb STARTSCRIPT="/etc/Smydb" STOPSCRIPT="/etc/Kmydb"
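The start and stop scripts themselves are not shown in this article. As a minimal sketch only (assuming a single Oracle instance whose environment is set by the oracle user's profile; the contents are hypothetical), /etc/Smydb might look like the following, with /etc/Kmydb doing the reverse (shutdown immediate, then lsnrctl stop):
- #!/usr/bin/ksh
- # /etc/Smydb -- hypothetical start script for the oradb application controller
- su - oracle -c "lsnrctl start"                            # start the listener
- echo "startup" | su - oracle -c "sqlplus -s / as sysdba"  # open the database
- exit 0  # PowerHA treats a non-zero exit code as an application start failure
Make both scripts executable and keep identical copies on both nodes.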
5. Add the resource group:
- [dbserv1@root]#clmgr add resource_group oraRG VOLUME_GROUP=datavg NODES=dbserv1,dbserv2 SERVICE_LABEL=oracle APPLICATIONS=oradb
- Auto Discover/Import of Volume Groups was set to true.
- Gathering cluster information, which may take a few minutes.
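clmgr filled in the default policies here. To spell them out explicitly, the same resource group could have been created like this (a sketch; per the clmgr documentation, OHN = Online On Home Node Only, FNPN = Fallover To Next Priority Node, NFB = Never Fallback):
- [dbserv1@root]#clmgr add resource_group oraRG NODES=dbserv1,dbserv2 STARTUP=OHN FALLOVER=FNPN FALLBACK=NFB VOLUME_GROUP=datavg SERVICE_LABEL=oracle APPLICATIONS=oradb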
6. Verify and synchronize the cluster:
- [dbserv1@root]#clmgr sync cluster verify=yes fix=yes
- Saving existing /var/hacmp/clverify/ver_mping/ver_mping.log to /var/hacmp/clverify/ver_mping/ver_mping.log.bak
- Verifying clcomd communication, please be patient.
- Verifying multicast communication with mping.
- Committing any changes, as required, to all available nodes...
- Adding any necessary PowerHA SystemMirror entries to /etc/inittab and /etc/rc.net for IPAT on node dbserv1.
- Adding any necessary PowerHA SystemMirror entries to /etc/inittab and /etc/rc.net for IPAT on node dbserv2.
- Verification has completed normally.
- ...........................
- Remember to redo automatic error notification if configuration has changed.
- Verification has completed normally.
7. Start the cluster on both nodes:
- [dbserv1@root]#clmgr online cluster start_cluster BROADCAST=false CLINFO=true
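For reference, the matching stop operation is shown below (a sketch based on the clmgr built-in help; MANAGE=offline also brings the resource groups offline):
- [dbserv1@root]#clmgr offline cluster WHEN=now MANAGE=offline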
8. Verify the cluster and resource group status:
- [dbserv1@root]#clmgr -a state query cluster
- STATE="STABLE"
- [dbserv2@root]#clmgr -a state query cluster
- STATE="STABLE"
- [dbserv1@root]#clmgr -a state,current_node query rg oraRG
- STATE="ONLINE"
- CURRENT_NODE="dbserv1"
9. Switch the resource group to the other node:
- [dbserv1@root]#clmgr move rg oraRG node=dbserv2
- Attempting to move resource group oraRG to node dbserv2.
- Waiting for the cluster to process the resource group movement request....
- Waiting for the cluster to stabilize.........
- Resource group movement successful.
- Resource group oraRG is online on node dbserv2.
- Cluster Name: MySysMirror
- Resource Group Name: oraRG
- Primary instance(s):
- The following node temporarily has the highest priority for this instance:
- dbserv2, user-requested rg_move performed on Tue Jul 17 15:23:11 2012
- Node State
- ---------------------------- ---------------
- dbserv1 OFFLINE
- dbserv2 ONLINE
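To move the resource group back later, the symmetric command would be the following; here I leave it on dbserv2 for the failure test in the next section:
- [dbserv1@root]#clmgr move resource_group oraRG NODE=dbserv1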
III. Testing the PowerHA 7.1 cluster
Here I simulate a Group Services failure. This scenario consists of a hot-standby cluster configuration with participating nodes dbserv1 and dbserv2 and only one Ethernet network. Each node has two Ethernet interfaces. I am going to kill the cthags process on dbserv2, the node hosting the resource group.
The resource group is currently online on dbserv2:
- [dbserv1@root]#clmgr -a state,current_node query rg oraRG
- STATE="ONLINE"
- CURRENT_NODE="dbserv2"
What happens when the cthags daemon is killed on dbserv2?
- [dbserv2@root]#ps -ef|grep cthags
- root 17629232 2949152 0 15:18:00 - 0:00 /usr/sbin/rsct/bin/hagsd cthags
- [dbserv2@root]#kill -9 17629232
After the cthags daemon is killed, the cluster manager (clstrmgrES) terminates unexpectedly and clexit.rc halts dbserv2 immediately:
- Jul 17 16:26:57 dbserv2 daemon:notice cthags[15728736]: (Recorded using libct_ffdc.a cv 2):::Error ID: 63Y7ej0F5G/E/kK41v1738....................:::Reference ID: :::Template ID: afa89905:::Details File: :::Location: RSCT,pgsd.C,1.62.1.23,695 :::GS_START_ST Group Services daemon started DIAGNOSTIC EXPLANATION HAGS daemon started by SRC. Log file is /var/ct/1rA_5YpzyHuO0ZxZ06xeuB/log/cthags/trace.
- Jul 17 16:26:57 dbserv2 user:notice PowerHA SystemMirror for AIX: clexit.rc : Unexpected termination of clstrmgrES.
- Jul 17 16:26:57 dbserv2 user:notice PowerHA SystemMirror for AIX: clexit.rc : Halting system immediately!!!
And the resource group oraRG moves to dbserv1:
- [dbserv1@root]#tail -f /var/hacmp/adm/cluster.log
- Jul 17 16:27:08 dbserv1 user:notice PowerHA SystemMirror for AIX: NOTE: While the sync is going on, volume group can be used
- Jul 17 16:27:09 dbserv1 user:notice PowerHA SystemMirror for AIX: EVENT COMPLETED: rg_move dbserv1 1 ACQUIRE 0
- Jul 17 16:27:09 dbserv1 user:notice PowerHA SystemMirror for AIX: EVENT COMPLETED: rg_move_acquire dbserv1 1 0
- Jul 17 16:27:09 dbserv1 user:notice PowerHA SystemMirror for AIX: EVENT START: rg_move_complete dbserv1 1
- Jul 17 16:27:09 dbserv1 user:notice PowerHA SystemMirror for AIX: NOTE: While the sync is going on, volume group can be used
- Jul 17 16:27:10 dbserv1 user:notice PowerHA SystemMirror for AIX: EVENT START: start_server oradb
- Jul 17 16:27:10 dbserv1 user:notice PowerHA SystemMirror for AIX: EVENT COMPLETED: start_server oradb 0
- Jul 17 16:27:10 dbserv1 user:notice PowerHA SystemMirror for AIX: EVENT COMPLETED: rg_move_complete dbserv1 1 0
- [dbserv1@root]#clmgr -a state,current_node query rg oraRG
- STATE="ONLINE"
- CURRENT_NODE="dbserv1"
For more information:
1. Using the clmgr command to manage PowerHA 7.1
2. IBM PowerHA SystemMirror 7.1 for AIX