In this article, you will learn how to configure IBM PowerHA on AIX. My environment is laid out in the worksheet below.
Interface en0 carries only the boot IP, and en1 carries only the standby IP.
I. Requirements
1. Append the following lines to /etc/hosts on all nodes:
- #For Boot IP
- 172.16.255.11 dbserv1
- 172.16.255.13 dbserv2
- #For Standby IP
- 192.168.0.11 dbserv1-stby
- 192.168.0.13 dbserv2-stby
- #For Service IP
- 172.16.255.15 dbserv1-serv
- 172.16.255.17 dbserv2-serv
- #For Persistent IP
- 192.168.2.11 dbserv1-pers
- 192.168.2.13 dbserv2-pers
2. Ensure that the following AIX filesets are installed:
- [root@dbserv1 /]#lslpp -l bos.data bos.adt.lib bos.adt.libm bos.adt.syscalls bos.net.tcp.client bos.net.tcp.server bos.rte.SRC bos.rte.libc bos.rte.libcfg bos.rte.libpthreads bos.rte.odm bos.rte.lvm bos.clvm.enh bos.adt.base bos.perf.perfstat bos.perf.libperfstat bos.perf.proctools rsct.basic.rte rsct.compat.clients.rte xlC.aix61.rte
3. Install PowerHA on all nodes:
- [root@dbserv1 /]#loopmount -i powerHA_v6.1.iso -o "-V cdrfs -o ro" -m /mnt
- [root@dbserv1 /]#installp -a -d /mnt all
After installation, apply the latest PowerHA updates and reboot all nodes.
4. Append the boot and standby IP addresses to /usr/es/sbin/cluster/etc/rhosts on each node:
- [root@dbserv1 etc]#cat rhosts
- 172.16.255.11
- 172.16.255.13
- 192.168.0.11
- 192.168.0.13
- [root@dbserv2 etc]#cat rhosts
- 172.16.255.11
- 172.16.255.13
- 192.168.0.11
- 192.168.0.13
5. Edit the /usr/es/sbin/cluster/netmon.cf file. On each node, append that node's own boot and standby IP addresses:
- [root@dbserv1 cluster]#cat netmon.cf
- 172.16.255.11
- 192.168.0.11
- [root@dbserv2 cluster]#cat netmon.cf
- 172.16.255.13
- 192.168.0.13
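Note that rhosts is identical on both nodes while netmon.cf lists only the local node's addresses; generating both from one worksheet avoids copy-paste mistakes. A minimal sketch, with illustrative output paths (the real targets are /usr/es/sbin/cluster/etc/rhosts and /usr/es/sbin/cluster/netmon.cf):

```shell
#!/bin/sh
# Sketch: derive rhosts (same on both nodes) and a per-node netmon.cf
# from node:boot-ip:standby-ip triples taken from the worksheet.
NODES="dbserv1:172.16.255.11:192.168.0.11
dbserv2:172.16.255.13:192.168.0.13"

: > rhosts.out
echo "$NODES" | while IFS=: read -r node boot stby; do
    # rhosts lists every boot and standby IP in the cluster
    printf '%s\n%s\n' "$boot" "$stby" >> rhosts.out
    # netmon.cf on each node lists only that node's own IPs
    printf '%s\n%s\n' "$boot" "$stby" > "netmon.cf.$node"
done
echo "generated rhosts.out, netmon.cf.dbserv1, netmon.cf.dbserv2"
```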
6. Create a disk heartbeat:
- //Create heartvg on dbserv1
- [root@dbserv1 /]#mkvg -x -y heartvg -C hdisk5
- [root@dbserv1 /]#lspv|grep hdisk5
- hdisk5 000c1acf7ca3bc3b heartvg
- //Import heartvg on dbserv2
- [root@dbserv2 /]#importvg -y heartvg hdisk5
- [root@dbserv2 /]#lspv|grep hdisk5
- hdisk5 000c1acf7ca3bc3b heartvg
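The importvg step only works because both nodes see the same physical volume, and the PVID shown by lspv is what proves it. A hedged sketch of that comparison, using the captured `lspv | grep hdisk5` lines as sample data in place of live output:

```shell
#!/bin/sh
# Sketch: the PVID (second column of lspv) for hdisk5 must be identical
# on both nodes, confirming they share the same LUN. The two sample
# lines below stand in for output captured on each node.
node1_lspv="hdisk5          000c1acf7ca3bc3b    heartvg"
node2_lspv="hdisk5          000c1acf7ca3bc3b    heartvg"
pvid1=$(echo "$node1_lspv" | awk '{ print $2 }')
pvid2=$(echo "$node2_lspv" | awk '{ print $2 }')
if [ "$pvid1" = "$pvid2" ]; then
    echo "PVIDs match: $pvid1"
else
    echo "PVID mismatch: $pvid1 vs $pvid2" >&2
fi
```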
Test the disk heartbeat:
- //Running following command on dbserv1
- [root@dbserv1 /]#/usr/sbin/rsct/bin/dhb_read -p hdisk5 -r
- DHB CLASSIC MODE
- First node byte offset: 61440
- Second node byte offset: 62976
- Handshaking byte offset: 65024
- Test byte offset: 64512
- Receive Mode:
- Waiting for response . . .
- Magic number = 0x87654321
- Magic number = 0x87654321
- Magic number = 0x87654321
- Link operating normally
- //Running following command on dbserv2
- [root@dbserv2 /]#/usr/sbin/rsct/bin/dhb_read -p hdisk5 -t
- DHB CLASSIC MODE
- First node byte offset: 61440
- Second node byte offset: 62976
- Handshaking byte offset: 65024
- Test byte offset: 64512
- Transmit Mode:
- Magic number = 0x87654321
- Detected remote utility in receive mode. Waiting for response . . .
- Magic number = 0x87654321
- Magic number = 0x87654321
- Link operating normally
7. Create a Shared Volume Group:
- //On dbserv1
- [root@dbserv1 /]#mkvg -V 48 -y oradata hdisk6 hdisk7
- 0516-1254 mkvg: Changing the PVID in the ODM.
- 0516-1254 mkvg: Changing the PVID in the ODM.
- oradata
- [root@dbserv1 /]#mklv -y lv02 -t jfs2 oradata 20G
- lv02
- [root@dbserv1 /]#crfs -v jfs2 -d /dev/lv02 -m /oradata
- File system created successfully.
- 20970676 kilobytes total disk space.
- New File System size is 41943040
- [root@dbserv1 /]#chvg -an oradata
- [root@dbserv1 /]#varyoffvg oradata
- [root@dbserv1 /]#exportvg oradata
- //On dbserv2 import oradata volume group
- [root@dbserv2 /]#importvg -V 48 -y oradata hdisk6
- oradata
- [root@dbserv2 /]#lspv
- hdisk0 000c18cf00094faa rootvg active
- hdisk1 000c18cf003ca02c None
- hdisk2 000c1acf3e6440c6 None
- hdisk3 000c1acf3e645312 None
- hdisk4 000c1acf3e6460d9 None
- hdisk5 000c1acf7ca3bc3b heartvg
- hdisk6 000c1acf7cb764d9 oradata active
- hdisk7 000c1acf7cb765aa oradata active
8. For Oracle, perform the following steps:
(1) Verify that the following filesets are installed:
- [root@dbserv2 /]#lslpp -l bos.adt.base bos.adt.lib bos.adt.libm bos.perf.perfstat bos.perf.libperfstat bos.perf.proctools xlC.aix61.rte
(2) Change the following network tunables on both nodes:
- [root@dbserv1 /]#no -p -o tcp_ephemeral_low=9000
- [root@dbserv1 /]#no -p -o tcp_ephemeral_high=65500
- [root@dbserv1 /]#no -p -o udp_ephemeral_low=9000
- [root@dbserv1 /]#no -p -o udp_ephemeral_high=65500
- [root@dbserv2 /]#no -p -o tcp_ephemeral_low=9000
- [root@dbserv2 /]#no -p -o tcp_ephemeral_high=65500
- [root@dbserv2 /]#no -p -o udp_ephemeral_low=9000
- [root@dbserv2 /]#no -p -o udp_ephemeral_high=65500
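The four `no` commands above open the ephemeral port range that Oracle expects on AIX. A quick sanity check of such a range before applying it can catch a typo in the bounds; this is a sketch, and the 9000-65500 values are simply the ones used in this article:

```shell
#!/bin/sh
# Sketch: validate an ephemeral port range before passing it to "no -p".
check_range() {
    low=$1; high=$2
    # low must stay out of the privileged range (>= 1024), the range
    # must be non-empty, and high cannot exceed the 65535 port maximum
    [ "$low" -ge 1024 ] && [ "$low" -lt "$high" ] && [ "$high" -le 65535 ]
}
if check_range 9000 65500; then
    echo "ephemeral range 9000-65500 is valid"
else
    echo "invalid ephemeral range" >&2
fi
```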
(3) Create the oracle user and groups:
- //On dbserv1
- [root@dbserv1 /]#for id in oinstall dba oper;do mkgroup $id;done
- [root@dbserv1 /]#mkuser oracle;passwd oracle
- [root@dbserv1 /]#chuser pgrp=oinstall oracle
- [root@dbserv1 /]#chuser groups=oinstall,dba,oper oracle
- [root@dbserv1 /]#chuser fsize=-1 oracle
- [root@dbserv1 /]#chuser data=-1 oracle
- //On dbserv2
- [root@dbserv2 /]#for id in oinstall dba oper;do mkgroup $id;done
- [root@dbserv2 /]#mkuser oracle;passwd oracle
- [root@dbserv2 /]#chuser pgrp=oinstall oracle
- [root@dbserv2 /]#chuser groups=oinstall,dba,oper oracle
- [root@dbserv2 /]#chuser fsize=-1 oracle
- [root@dbserv2 /]#chuser data=-1 oracle
(4) Change the maxuproc parameter:
- [root@dbserv1 /]#chdev -l sys0 -a maxuproc=16384
- sys0 changed
- [root@dbserv2 /]#chdev -l sys0 -a maxuproc=16384
- sys0 changed
(5) Create the Oracle home:
- [root@dbserv1 /]#mkdir /u01;chown oracle:oinstall /u01;su - oracle
- [oracle@dbserv1 ~]$vi .profile
- export ORACLE_SID=example
- export ORACLE_BASE=/u01/app/oracle
- export ORACLE_HOME=$ORACLE_BASE/product/11.2.0/dbhome_1
- export NLS_DATE_FORMAT="DD-MON-YYYY HH24:MI:SS"
- export TNS_ADMIN=$ORACLE_HOME/network/admin
- export ORA_NLS11=$ORACLE_HOME/nls/data
- export PATH=$PATH:$HOME/bin:$ORACLE_HOME/bin:$JAVA_HOME/bin
- export LD_LIBRARY_PATH=$ORACLE_HOME/lib:${LD_LIBRARY_PATH}:$ORACLE_HOME/oracm/lib:/lib:/usr/lib:/usr/local/lib
- export CLASSPATH=$ORACLE_HOME/JRE:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib:$ORACLE_HOME/network/jlib
- export THREADS_FLAG=native
- [oracle@dbserv1 ~]$ . .profile; mkdir -p $ORACLE_HOME
- [root@dbserv2 /]#mkdir /u01;chown oracle:oinstall /u01;su - oracle
- [oracle@dbserv2 ~]$vi .profile
- export ORACLE_SID=example
- export ORACLE_BASE=/u01/app/oracle
- export ORACLE_HOME=$ORACLE_BASE/product/11.2.0/dbhome_1
- export NLS_DATE_FORMAT="DD-MON-YYYY HH24:MI:SS"
- export TNS_ADMIN=$ORACLE_HOME/network/admin
- export ORA_NLS11=$ORACLE_HOME/nls/data
- export PATH=$PATH:$HOME/bin:$ORACLE_HOME/bin:$JAVA_HOME/bin
- export LD_LIBRARY_PATH=$ORACLE_HOME/lib:${LD_LIBRARY_PATH}:$ORACLE_HOME/oracm/lib:/lib:/usr/lib:/usr/local/lib
- export CLASSPATH=$ORACLE_HOME/JRE:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib:$ORACLE_HOME/network/jlib
- export THREADS_FLAG=native
- [oracle@dbserv2 ~]$ . .profile; mkdir -p $ORACLE_HOME
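It is worth confirming that the .profile exports expand ORACLE_HOME correctly before `mkdir -p $ORACLE_HOME` runs. A minimal sketch that writes a sample copy of the relevant exports and sources it in a subshell, leaving the current environment untouched:

```shell
#!/bin/sh
# Sketch: check that ORACLE_HOME expands as intended. profile.sample
# holds only the exports relevant to the path, not the full .profile.
cat > ./profile.sample <<'EOF'
export ORACLE_SID=example
export ORACLE_BASE=/u01/app/oracle
export ORACLE_HOME=$ORACLE_BASE/product/11.2.0/dbhome_1
EOF
# source in a command substitution (a subshell), then capture the result
resolved=$(. ./profile.sample; echo "$ORACLE_HOME")
echo "ORACLE_HOME resolves to: $resolved"
```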
(6) Ensure that the /tmp filesystem has enough free space:
- [root@dbserv1 /]#chfs -a size=+1G /tmp
II. Create a Cluster
1. Add a cluster:
- [root@dbserv1 /]#/usr/es/sbin/cluster/utilities/claddclstr -n hatest
- Current cluster configuration:
- Cluster Name: hatest
- Cluster Connection Authentication Mode: Standard
- Cluster Message Authentication Mode: None
- Cluster Message Encryption: None
- Use Persistent Labels for Communication: No
- There are 0 node(s) and 0 network(s) defined
- No resource groups defined
2. Add nodes to the cluster:
- [root@dbserv1 /]#/usr/es/sbin/cluster/utilities/clnodename -a dbserv1 -p dbserv1
- [root@dbserv1 /]#/usr/es/sbin/cluster/utilities/clnodename -a dbserv2 -p dbserv2
- [root@dbserv1 /]#/usr/es/sbin/cluster/utilities/clnodename
- dbserv1
- dbserv2
3. Configure HACMP diskhb network:
- [root@dbserv1 /]#/usr/es/sbin/cluster/utilities/clmodnetwork -a -l no -n net_diskhb_01 -i diskhb
- [root@dbserv1 /]#/usr/es/sbin/cluster/cspoc/cl_ls2ndhbnets
- Network Name Node and Disk List
- ============ ================== ==================
- net_diskhb_01
4.Configure HACMP Communication devices:
- [root@dbserv1 /]#/usr/es/sbin/cluster/utilities/claddnode -a diskhb_dbserv1:diskhb:net_diskhb_01:serial:service:/dev/hdisk5 -n dbserv1
- [root@dbserv1 /]#/usr/es/sbin/cluster/utilities/claddnode -a diskhb_dbserv2:diskhb:net_diskhb_01:serial:service:/dev/hdisk5 -n dbserv2
- [root@dbserv1 /]#/usr/es/sbin/cluster/cspoc/cl_ls2ndhbnets
- Network Name Node and Disk List
- ============ ================== ==================
- net_diskhb_01 dbserv1:/dev/hdisk5 dbserv2:/dev/hdisk5
Test the diskhb network:
- [root@dbserv1 /]#/usr/es/sbin/cluster/sbin/cl_tst_2ndhbnet -cspoc -n'dbserv1,dbserv2' '/dev/hdisk5' 'dbserv1' '/dev/hdisk5' 'dbserv2'
- cl_tst_2ndhbnet: Starting the receive side of the test for disk /dev/hdisk5 on node dbserv1
- cl_tst_2ndhbnet: Starting the transmit side of the test for disk /dev/hdisk5 on node dbserv2
- dbserv1: DHB CLASSIC MODE
- dbserv1: First node byte offset: 61440
- dbserv1: Second node byte offset: 62976
- dbserv1: Handshaking byte offset: 65024
- dbserv1: Test byte offset: 64512
- dbserv1:
- dbserv1: Receive Mode:
- dbserv1: Waiting for response . . .
- dbserv1: Magic number = 0x87654321
- dbserv1: Magic number = 0x87654321
- dbserv1: Magic number = 0x87654321
- dbserv1: Link operating normally
- dbserv2: DHB CLASSIC MODE
- dbserv2: First node byte offset: 61440
- dbserv2: Second node byte offset: 62976
- dbserv2: Handshaking byte offset: 65024
- dbserv2: Test byte offset: 64512
- dbserv2:
- dbserv2: Transmit Mode:
- dbserv2: Magic number = 0x87654321
- dbserv2: Detected remote utility in receive mode. Waiting for response . . .
- dbserv2: Magic number = 0x87654321
- dbserv2: Magic number = 0x87654321
- dbserv2: Link operating normally
- cl_tst_2ndhbnet: Test complete
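The success criterion in the cl_tst_2ndhbnet output above is that both sides report "Link operating normally". A hedged sketch of an automated check over a captured log (the sample log here stands in for real output):

```shell
#!/bin/sh
# Sketch: pass/fail check over a captured cl_tst_2ndhbnet log. Both the
# receive side and the transmit side must report a healthy link.
log="dbserv1: Link operating normally
dbserv2: Link operating normally
cl_tst_2ndhbnet: Test complete"
ok=$(echo "$log" | grep -c "Link operating normally")
if [ "$ok" -eq 2 ]; then
    echo "diskhb heartbeat test passed"
else
    echo "diskhb heartbeat test failed ($ok of 2 links up)" >&2
fi
```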
5. Configure HACMP IP-based networks:
- [root@dbserv1 /]#/usr/es/sbin/cluster/utilities/clmodnetwork -a -n net_ether_02 -i ether -s 255.255.255.0 -l yes
- [root@dbserv1 /]#/usr/es/sbin/cluster/utilities/clmodnetwork -a -n net_ether_03 -i ether -s 255.255.255.0 -l yes
6. Add communication interfaces:
- [root@dbserv1 /]#/usr/es/sbin/cluster/utilities/claddnode -a'dbserv1' :'ether' :'net_ether_02' : : : -n'dbserv1'
- [root@dbserv1 /]#/usr/es/sbin/cluster/utilities/claddnode -a'dbserv2' :'ether' :'net_ether_02' : : : -n'dbserv2'
- [root@dbserv1 /]#/usr/es/sbin/cluster/utilities/claddnode -a'dbserv1-stby' :'ether' :'net_ether_03' : : : -n'dbserv1'
- [root@dbserv1 /]#/usr/es/sbin/cluster/utilities/claddnode -a'dbserv2-stby' :'ether' :'net_ether_03' : : : -n'dbserv2'
7. Add service IPs:
- [root@dbserv1 /]#/usr/es/sbin/cluster/utilities/claddnode -Tservice -B'dbserv1-serv' -w'net_ether_02'
- [root@dbserv1 /]#/usr/es/sbin/cluster/utilities/claddnode -Tservice -B'dbserv2-serv' -w'net_ether_02'
8. Add a resource group:
Extended Configuration->Extended Resource Configuration->HACMP Extended Resource Group Configuration->Add a Resource Group
9. Add persistent IPs:
Extended Configuration->Extended Topology Configuration->Configure HACMP Persistent Node IP Label/Addresses->Add a Persistent Node IP Label/Address
10. Verification and synchronization:
Use Extended Configuration->Extended Verification and Synchronization, or run the following command:
- [root@dbserv1 /]#/usr/es/sbin/cluster/utilities/cldare -rt -V normal
III. Install the Oracle Database
1. Run the rootpre.sh script:
Before installing, you must run rootpre.sh from the Oracle media:
- [root@dbserv1 database]#./rootpre.sh
- ./rootpre.sh output will be logged in /tmp/rootpre.out_12-05-28.13:38:43
- Checking if group services should be configured....
- Group "hagsuser" does not exist.
- Creating required group for group services: hagsuser
- Please add your Oracle userid to the group: hagsuser
- Configuring HACMP group services socket for possible use by Oracle.
- The group or permissions of the group services socket have changed.
- Please stop and restart HACMP before trying to use Oracle.
- [root@dbserv2 database]#./rootpre.sh
- ./rootpre.sh output will be logged in /tmp/rootpre.out_12-05-28.13:38:11
- Checking if group services should be configured....
- Group "hagsuser" does not exist.
- Creating required group for group services: hagsuser
- Please add your Oracle userid to the group: hagsuser
- Configuring HACMP group services socket for possible use by Oracle.
- The group or permissions of the group services socket have changed.
- Please stop and restart HACMP before trying to use Oracle.
After completing the step above, install the Oracle database and copy the installed Oracle files to the other node. Make sure that the Oracle listener address is your service IP.
2. Create start and stop scripts:
- [root@dbserv1 /]#vi /etc/dbstart
- #!/usr/bin/ksh
- # Define the Oracle home
- ORACLE_HOME=/u01/app/oracle/product/11.2.0/dbhome_1
- # Start the Oracle listener
- if [ -x $ORACLE_HOME/bin/lsnrctl ]; then
- su - oracle -c "lsnrctl start"
- fi
- # Start the Oracle instance
- if [ -x $ORACLE_HOME/bin/sqlplus ]; then
- su - oracle -c "sqlplus /nolog" <<EOF
- connect / as sysdba
- startup
- quit
- EOF
- fi
- [root@dbserv1 /]#vi /etc/dbstop
- #!/usr/bin/ksh
- # Define the Oracle home
- ORACLE_HOME=/u01/app/oracle/product/11.2.0/dbhome_1
- # Stop the Oracle listener
- if [ -x $ORACLE_HOME/bin/lsnrctl ]; then
- su - oracle -c "lsnrctl stop"
- fi
- # Stop the Oracle instance
- if [ -x $ORACLE_HOME/bin/sqlplus ]; then
- su - oracle -c "sqlplus /nolog" <<EOF
- connect / as sysdba
- shutdown immediate
- quit
- EOF
- fi
- [root@dbserv1 /]#chmod +x /etc/dbst*
- [root@dbserv1 /]#scp /etc/dbst* dbserv2:/etc
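Because HACMP will invoke /etc/dbstart and /etc/dbstop unattended, a non-executing syntax check before registering them as application server methods is cheap insurance. A sketch using `sh -n` on a small stand-in script (on the cluster you would point it at the real files):

```shell
#!/bin/sh
# Sketch: syntax-check start/stop scripts without running them.
# dbstart.sample is a stand-in; substitute /etc/dbstart /etc/dbstop.
cat > ./dbstart.sample <<'EOF'
#!/usr/bin/ksh
ORACLE_HOME=/u01/app/oracle/product/11.2.0/dbhome_1
if [ -x $ORACLE_HOME/bin/lsnrctl ]; then
    su - oracle -c "lsnrctl start"
fi
EOF
for f in ./dbstart.sample; do
    # "sh -n" parses the script but executes nothing
    if sh -n "$f"; then
        echo "$f: syntax OK"
    else
        echo "$f: syntax error" >&2
    fi
done
```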
IV. Register the Application to the Resource Group
1. Configure HACMP application servers:
Extended Configuration->Extended Resource Configuration->HACMP Extended Resources Configuration->Configure HACMP Application Servers->Add an Application Server
2. Create an application monitor:
Extended Configuration->Extended Resource Configuration->Configure HACMP Application Servers->Configure HACMP Application Monitoring->Add a Process Application Monitor
3. Register the resource to the resource group:
Extended Configuration->Extended Resource Configuration->HACMP Extended Resource Group Configuration->Change/Show Resources and Attributes for a Resource Group
Then execute the following command to verify and synchronize:
- [root@dbserv1 /]#/usr/es/sbin/cluster/utilities/cldare -rt -V normal
After the above steps, start the HACMP services and test your configuration.
4. Display the HACMP configuration:
- [root@dbserv1 /]#/usr/es/sbin/cluster/utilities/cldisp
- Cluster: hatest
- Cluster services: active
- State of cluster: up
- Substate: stable
- #############
- APPLICATIONS
- #############
- Cluster hatest provides the following applications: example
- Application: example
- example is started by /etc/dbstart
- example is stopped by /etc/dbstop
- Application monitor of example: example
- Monitor name: example
- Type: process
- Process monitored: tnslsnr
- Process owner: oracle
- Instance count: 1
- Stabilization interval: 60 seconds
- Retry count: 3 tries
- Restart interval: 198 seconds
- Failure action: fallover
- Cleanup method: /etc/lsnrClear.sh
- Restart method: /etc/lsnrRestart.sh
- This application is part of resource group 'oradb'.
- Resource group policies:
- Startup: on first available node
- Fallover: to next priority node in the list
- Fallback: never
- State of example: online
- Nodes configured to provide example: dbserv1 {up} dbserv2 {up}
- Node currently providing example: dbserv1 {up}
- The node that will provide example if dbserv1 fails is: dbserv2
- Resources associated with example:
- Service Labels
- dbserv1-serv(172.16.255.15) {online}
- Interfaces configured to provide dbserv1-serv:
- dbserv1 {up}
- with IP address: 172.16.255.11
- on interface: en0
- on node: dbserv1 {up}
- on network: net_ether_02 {up}
- dbserv2 {up}
- with IP address: 172.16.255.13
- on interface: en0
- on node: dbserv2 {up}
- on network: net_ether_02 {up}
- dbserv2-serv(172.16.255.17) {online}
- Interfaces configured to provide dbserv2-serv:
- dbserv1 {up}
- with IP address: 172.16.255.11
- on interface: en0
- on node: dbserv1 {up}
- on network: net_ether_02 {up}
- dbserv2 {up}
- with IP address: 172.16.255.13
- on interface: en0
- on node: dbserv2 {up}
- on network: net_ether_02 {up}
- Shared Volume Groups:
- oradata
- #############
- TOPOLOGY
- #############
- hatest consists of the following nodes: dbserv1 dbserv2
- dbserv1
- Network interfaces:
- diskhb_01 {up}
- device: /dev/hdisk5
- on network: net_diskhb_01 {up}
- dbserv1 {up}
- with IP address: 172.16.255.11
- on interface: en0
- on network: net_ether_02 {up}
- dbserv1-stby {up}
- with IP address: 192.168.0.11
- on interface: en1
- on network: net_ether_03 {up}
- dbserv2
- Network interfaces:
- diskhb_02 {up}
- device: /dev/hdisk5
- on network: net_diskhb_01 {up}
- dbserv2 {up}
- with IP address: 172.16.255.13
- on interface: en0
- on network: net_ether_02 {up}
- dbserv2-stby {up}
- with IP address: 192.168.0.13
- on interface: en1
- on network: net_ether_03 {up}
Appended on 2012/6/11:
Before you start the PowerHA services, you must execute the following steps on both nodes; only then will the clstat command work.
- [root@dbserv1 utilities]# snmpv3_ssw -1
- Stop daemon: snmpmibd
- In /etc/rc.tcpip file, comment out the line that contains: snmpmibd
- In /etc/rc.tcpip file, remove the comment from the line that contains: dpid2
- Stop daemon: snmpd
- Make the symbolic link from /usr/sbin/snmpd to /usr/sbin/snmpdv1
- Make the symbolic link from /usr/sbin/clsnmp to /usr/sbin/clsnmpne
- Start daemon: dpid2
- Start daemon: snmpd
- [root@dbserv2 /]# snmpv3_ssw -1
- Stop daemon: snmpmibd
- In /etc/rc.tcpip file, comment out the line that contains: snmpmibd
- In /etc/rc.tcpip file, remove the comment from the line that contains: dpid2
- Stop daemon: snmpd
- Make the symbolic link from /usr/sbin/snmpd to /usr/sbin/snmpdv1
- Make the symbolic link from /usr/sbin/clsnmp to /usr/sbin/clsnmpne
- Start daemon: dpid2
- Start daemon: snmpd
This article comes from the "candon123" blog. Please do not repost it.