1. Check the network interface configuration
#netstat -i
Name Mtu Network Address Ipkts Ierrs Opkts Oerrs Coll
lo0 32808 loopback localhost 1002024 0 1002024 0 0
lan0 1500 86.12.104.0 gdstj1db01 3545472 0 146855 0 0
#ifconfig lan0
lan0: flags=2000000000001843<UP,BROADCAST,RUNNING,MULTICAST,CKO,TSO>
inet 86.12.104.111 netmask fffff800 broadcast 86.12.111.255
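To enumerate every NIC the OS detects (not just the interfaces already configured), the lanscan utility can be run first; a minimal sketch, output varies per host:
# /usr/sbin/lanscan        # lists hardware path, MAC address, and interface name for each LAN card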
2. Check the hardware environment
#uname -a
HP-UX gdstj1db B.11.31 U ia64 0019096600 unlimited-user license
Required OS level (quoted from the Oracle installation documentation):
<<<
HP-UX 11i V3 patch Bundle Sep/2008 (B.11.31.0809.326a) or higher (Part Number E10851-01)
To determine which operating system patches are installed, enter the following command:
# /usr/sbin/swlist -l patch
To determine if a specific operating system patch has been installed, enter the following command:
# /usr/sbin/swlist -l patch <patch_number>
To determine which operating system bundles are installed, enter the following command:
# /usr/sbin/swlist -l bundle
*HPUX 11.23 Patches
· HPUX 11.23 with Sept 2004 or newer base, and Mar 2007 Patch bundle for HP-UX 11iV2-B.11.23.0703
· PHKL_33025 file system tunables cumulative patch
· PHKL_34941 Improves Oracle Clusterware restart and diagnosis
· PHCO_32426 reboot(1M) cumulative patch
· PHCO_36744 LVM patch [replaces PHCO_35524]
· PHCO_37069 libsec cumulative patch
· PHCO_37228 libc cumulative patch [replaces PHCO_36673]
· PHCO_38120 kernel configuration commands patch
· PHKL_34213 vPars CPU migr, cumulative shutdown patch
· PHKL_34989 getrusage(2) performance
· PHKL_36319 mlockall(2), shmget(2) cumulative patch [replaces PHKL_35478]
· PHKL_36853 pstat patch
· PHKL_37803 mpctl(2) options, manpage, socket count [replaces PHKL_35767]
· PHKL_37121 sleep kwakeup performance cumulative patch [replaces PHKL_35029]
· PHKL_34840 slow system calls due to cache line sharing
· PHSS_37947 linker + fdp cumulative patch [replaces PHSS_35979]
· PHNE_37395 cumulative ARPA Transport patch
*HPUX 11.31 Patches
PHCO_40381 11.31 Disk Owner Patch
PHCO_41479 11.31 (fixes an 11.2.0.2 ASM disk discovery issue)
PHKL_38038 VM patch - hot patching/Core file creation directory
PHKL_38938 11.31 SCSI cumulative I/O patch
PHKL_39351 Scheduler patch : post wait hang
PHSS_36354 11.31 assembler patch
PHSS_37042 11.31 hppac (packed decimal)
PHSS_37959 Libcl patch for alternate stack issue fix (QXCR1000818011)
PHSS_39094 11.31 linker + fdp cumulative patch
PHSS_39100 11.31 Math Library Cumulative Patch
PHSS_39102 11.31 Integrity Unwind Library
PHSS_38141 11.31 aC++ Runtime
PHSS_39824 - 11.31 HP C/aC++ Compiler (A.06.23) patch
Applies to:
Oracle Server - Enterprise Edition - Version: 10.2.0.3 to 11.1.0.7 - Release: 10.2 to 11.1
HP-UX Itanium
HP-UX PA-RISC (64-bit)
>>>
Also see MOS note "hp-ux: Node Crash Due To Large Amount Of Racgimon Threads or CRS_STAT/SRVCTL COMMAND HANG OS bug (QX:QXCR1000940361)" [ID 883801.1]; the related OS patches are PHKL_40208 and PHKL_40372.
/usr/sbin/swlist -l patch PHCO_40381
/usr/sbin/swlist -l patch PHCO_41479
/usr/sbin/swlist -l patch PHKL_38038
/usr/sbin/swlist -l patch PHKL_38938
/usr/sbin/swlist -l patch PHKL_39351
/usr/sbin/swlist -l patch PHSS_36354
/usr/sbin/swlist -l patch PHSS_37042
/usr/sbin/swlist -l patch PHSS_37959
/usr/sbin/swlist -l patch PHSS_39094
/usr/sbin/swlist -l patch PHSS_39100
/usr/sbin/swlist -l patch PHSS_39102
/usr/sbin/swlist -l patch PHSS_38141
/usr/sbin/swlist -l patch PHSS_39824
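Rather than checking one patch at a time, a small loop can sweep the whole 11.31 list; a sketch assuming the POSIX shell, relying on swlist returning non-zero when a selection is not found:
for p in PHCO_40381 PHCO_41479 PHKL_38038 PHKL_38938 PHKL_39351 \
         PHSS_36354 PHSS_37042 PHSS_37959 PHSS_39094 PHSS_39100 \
         PHSS_39102 PHSS_38141 PHSS_39824
do
    /usr/sbin/swlist -l patch $p > /dev/null 2>&1 || echo "MISSING: $p"    # flag patches not installed
done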
#/usr/sbin/dmesg | grep "Physical:"
Physical: 268429604 Kbytes, lockable: 202705064 Kbytes, available: 230455872 Kbytes    // physical memory
#/usr/sbin/swapinfo -a    // swap space
Kb Kb Kb PCT START/ Kb
TYPE AVAIL USED FREE USED LIMIT RESERVE PRI NAME
dev 25165824 0 25165824 0% 0 - 1 /dev/vg00/lvol2
dev 176553984 0 176553984 0% 0 - 1 /dev/vg00/lv_swap
reserve - 584884 -584884
memory 255322404 30479716 224842688 12%
# ll /dev/async
The output should look something like:
crw-rw-rw- 1 bin bin 101 0x000000 May 16 07:23 /dev/async
1. Log in as the root user.
2. Determine whether /dev/async exists. If the device does not exist, then use the
following command to create it:
# /sbin/mknod /dev/async c 101 0x4
Alternatively, you can set the minor number value to 0x104 using the following
command:
# /sbin/mknod /dev/async c 101 0x104
3. If /dev/async exists, then determine the current value of the minor number, as
shown in the following example:
# ls -l /dev/async
crw-r--r-- 1 root sys 101 0x000000 Sep 28 10:38 /dev/async
4. If the existing minor number of the file is not 0x4 or 0x104, then change it to an
expected value using one of the following commands:
# /sbin/mknod /dev/async c 101 0x4
or
# /sbin/mknod /dev/async c 101 0x104
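Steps 2-4 can be collapsed into one idempotent check; a sketch assuming root, a desired minor number of 0x4, and that ls -l renders the minor as the sixth field (as in the sample output above):
MINOR=`ls -l /dev/async 2>/dev/null | awk '{print $6}'`    # minor number, e.g. 0x000004
if [ "$MINOR" != "0x000004" ] && [ "$MINOR" != "0x000104" ]; then
    rm -f /dev/async                      # remove a missing or wrongly numbered node
    /sbin/mknod /dev/async c 101 0x4      # recreate (use 0x104 for the alternative value)
fi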
3. Check users and groups
#id grid
uid=501(grid) gid=501(oinstall) groups=502(asmadmin),503(asmdba),504(asmoper),505(dba)
#id oracle
uid=502(oracle) gid=501(oinstall) groups=503(asmdba),505(dba)
If the users and groups do not exist, create them with:
/usr/sbin/groupadd -g 501 oinstall
/usr/sbin/groupadd -g 502 asmadmin
/usr/sbin/groupadd -g 503 asmdba
/usr/sbin/groupadd -g 504 asmoper
/usr/sbin/groupadd -g 505 dba
/usr/sbin/useradd -u 501 -g oinstall -G asmadmin,asmdba,asmoper,dba -d /home/grid grid
/usr/sbin/useradd -u 502 -g oinstall -G dba,asmdba -d /home/oracle oracle
Cleanup (remove the users and groups):
/usr/sbin/userdel grid
/usr/sbin/userdel oracle
/usr/sbin/groupdel oinstall
/usr/sbin/groupdel asmadmin
/usr/sbin/groupdel asmdba
/usr/sbin/groupdel asmoper
/usr/sbin/groupdel dba
Grant group privileges:
vi /etc/privgroup
oinstall RTPRIO MLOCK RTSCHED
/usr/sbin/setprivgrp oinstall RTPRIO MLOCK RTSCHED
/usr/bin/getprivgrp oinstall
oinstall: RTPRIO MLOCK RTSCHED
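Since /etc/privgroup is applied at boot, it is worth checking both the persisted file and what a member of oinstall actually sees; a sketch:
cat /etc/privgroup                        # setting applied at boot
su - grid -c "/usr/bin/getprivgrp"        # live privileges as seen by the grid user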
4. Check file system space
#bdf
Filesystem kbytes used avail %used Mounted on
/dev/vg00/lvol3 5242880 225024 4978752 4% /
/dev/vg00/lvol9 83886080 33317056 47408572 41% /u01
5. Create the directory structure
<Oracle Inventory directory>
mkdir -p /u01/app/oraInventory
chown -R grid:oinstall /u01/app/oraInventory
chmod -R 775 /u01/app/oraInventory
<Grid Infrastructure base directory>
mkdir -p /u01/app/grid
chown grid:oinstall /u01/app/grid
chmod -R 775 /u01/app/grid
<Grid Infrastructure home directory>
mkdir -p /u01/11.2.0/grid
chown -R grid:oinstall /u01/11.2.0/grid
chmod -R 775 /u01/11.2.0/grid
<Oracle base directory>
mkdir -p /u01/app/oracle
mkdir /u01/app/oracle/cfgtoollogs
chown -R oracle:oinstall /u01/app/oracle
chmod -R 775 /u01/app/oracle
<Oracle RDBMS home directory>
mkdir -p /u01/app/oracle/product/11.2.0/db_1
chown -R oracle:oinstall /u01/app/oracle/product/11.2.0/db_1
chmod -R 775 /u01/app/oracle/product/11.2.0/db_1
<Software staging directories>
mkdir -p /u01/sw/psu
mkdir -p /u01/sw/patch
mkdir -p /u01/sw/db
chown -R grid:oinstall /u01/sw
chmod -R 777 /u01/sw
6. Create X library symbolic links & modify kernel parameters
# cd /usr/lib
ln -s /usr/lib/libX11.3 libX11.sl
ln -s /usr/lib/libXIE.2 libXIE.sl
ln -s /usr/lib/libXext.3 libXext.sl
ln -s /usr/lib/libXhp11.3 libXhp11.sl
ln -s /usr/lib/libXi.3 libXi.sl
ln -s /usr/lib/libXm.4 libXm.sl
ln -s /usr/lib/libXp.2 libXp.sl
ln -s /usr/lib/libXt.3 libXt.sl
ln -s /usr/lib/libXtst.2 libXtst.sl
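To confirm all nine links resolve, ls -L can dereference them in one pass (it reports an error for any dangling link); a sketch:
ls -lL /usr/lib/libX11.sl /usr/lib/libXIE.sl /usr/lib/libXext.sl \
       /usr/lib/libXhp11.sl /usr/lib/libXi.sl /usr/lib/libXm.sl \
       /usr/lib/libXp.sl /usr/lib/libXt.sl /usr/lib/libXtst.sl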
Modify the kernel parameter settings by using either the kcweb application:
# /usr/sbin/kcweb -F
or the kctune command-line utility (kctune replaced kmtune as of 11i v2):
# kctune parameter>=value
List the parameters changed:
# kctune -D
Set the parameters as follows:
/usr/sbin/kctune ksi_alloc_max=32768
/usr/sbin/kctune executable_stack=0
/usr/sbin/kctune max_thread_proc=1024
/usr/sbin/kctune maxdsiz=1073741824
/usr/sbin/kctune maxdsiz_64bit=2147483648
/usr/sbin/kctune maxfiles=1024
/usr/sbin/kctune maxfiles_lim=63488
/usr/sbin/kctune maxssiz=134217728
/usr/sbin/kctune maxssiz_64bit=1073741824
/usr/sbin/kctune maxuprc=3686
/usr/sbin/kctune msgmni=4096
/usr/sbin/kctune msgtql=4096
/usr/sbin/kctune ncsize=35840
/usr/sbin/kctune nflocks=4096
/usr/sbin/kctune ninode=34816
/usr/sbin/kctune nkthread=7184
/usr/sbin/kctune nproc=4096
/usr/sbin/kctune semmni=4096
/usr/sbin/kctune semmns=8192
/usr/sbin/kctune semmnu=4096
/usr/sbin/kctune semvmx=32767
/usr/sbin/kctune shmmax=1073741824
/usr/sbin/kctune shmmni=4096
/usr/sbin/kctune shmseg=512
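To review the resulting values in one pass (kctune prints a tunable's current setting when given just its name); a sketch:
for t in ksi_alloc_max executable_stack max_thread_proc maxdsiz maxdsiz_64bit \
         maxfiles maxfiles_lim maxssiz maxssiz_64bit maxuprc msgmni msgtql \
         ncsize nflocks ninode nkthread nproc semmni semmns semmnu semvmx \
         shmmax shmmni shmseg
do
    /usr/sbin/kctune $t
done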
vi /etc/rc.config.d/nddconf
TRANSPORT_NAME[0]=tcp
NDD_NAME[0]=tcp_largest_anon_port
NDD_VALUE[0]=65500
TRANSPORT_NAME[1]=tcp
NDD_NAME[1]=tcp_smallest_anon_port
NDD_VALUE[1]=9000
TRANSPORT_NAME[2]=udp
NDD_NAME[2]=udp_largest_anon_port
NDD_VALUE[2]=65500
TRANSPORT_NAME[3]=udp
NDD_NAME[3]=udp_smallest_anon_port
NDD_VALUE[3]=9000
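nddconf is read at boot; to apply the same port ranges on the running system without a reboot, ndd can set them directly (a sketch, run as root):
ndd -set /dev/tcp tcp_largest_anon_port 65500
ndd -set /dev/tcp tcp_smallest_anon_port 9000
ndd -set /dev/udp udp_largest_anon_port 65500
ndd -set /dev/udp udp_smallest_anon_port 9000
ndd -get /dev/tcp tcp_largest_anon_port      # verify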
7. Configure /etc/hosts
#Edit by bruce.song 2014.10.18
#hostname
55.12.72.11 hpcluster01
55.12.72.12 hpcluster02
#oracle_vip
55.12.72.13 hpcluster01-vip
55.12.72.14 hpcluster02-vip
#oracle_private_ip1
172.16.7.5 hpcluster01-priv1
172.16.7.6 hpcluster02-priv1
#oracle_private_ip2
172.16.8.5 hpcluster01-priv2
172.16.8.6 hpcluster02-priv2
#oracle_scan_ip
55.12.72.15 hpcluster01-scan
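Note that resolving the SCAN from /etc/hosts limits it to a single address. A quick reachability check over the host and private names just added (the VIP and SCAN addresses only answer once Clusterware brings them online, so they are excluded here); a sketch:
for h in hpcluster01 hpcluster02 hpcluster01-priv1 hpcluster02-priv1 \
         hpcluster01-priv2 hpcluster02-priv2
do
    ping $h -n 2
done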
8. Modify the NTP configuration (with /etc/ntp.conf moved aside, NTP stays disabled and Oracle CTSS synchronizes cluster time in active mode)
#mv /etc/ntp.conf /etc/ntp.conf.backup
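After the move, clock synchronization can be validated with the cluster verification utility shipped in the staged grid media (path per the layout above); a sketch:
/u01/sw/db/grid/runcluvfy.sh comp clocksync -n hpcluster01,hpcluster02 -verbose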
9. Set the oracle and grid user environment variables
oracle:
-----------------------
umask 022
export ORACLE_BASE=/u01/app/oracle
export ORACLE_HOME=/u01/app/oracle/product/11.2.0/db_1
export PATH=$ORACLE_HOME/bin:$ORACLE_HOME/OPatch:$PATH
export PS1=`hostname`:'$PWD'"$"
grid:
-----------------------
umask 022
export ORACLE_BASE=/u01/app/grid
export ORACLE_HOME=/u01/11.2.0/grid
export NLS_LANG=AMERICAN_AMERICA.AL32UTF8
export NLS_DATE_FORMAT="yyyy-mm-dd hh24:mi:ss"
export PATH=$ORACLE_HOME/bin:$ORACLE_HOME/OPatch:$PATH
export PS1=`hostname`:'$PWD'"$"
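These settings belong in each user's login profile so they survive new sessions; a sketch for the grid user (the oracle user is analogous, with its own ORACLE_BASE/ORACLE_HOME):
cat >> /home/grid/.profile <<'EOF'
umask 022
export ORACLE_BASE=/u01/app/grid
export ORACLE_HOME=/u01/11.2.0/grid
export PATH=$ORACLE_HOME/bin:$ORACLE_HOME/OPatch:$PATH
EOF
. /home/grid/.profile      # re-read in the current session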
10. Configure SSH user equivalence between the two nodes
/u01/sw/db/grid/sshsetup/sshUserSetup.sh -user grid -hosts "hpcluster01 hpcluster02" -advanced -noPromptPassphrase
/u01/sw/db/grid/sshsetup/sshUserSetup.sh -user oracle -hosts "hpcluster01 hpcluster02" -advanced -noPromptPassphrase
ssh hpcluster01 date ; ssh hpcluster02 date;
11. Set storage device ownership and permissions
# print_manifest | more
# /usr/sbin/ioscan -fun -NC disk
The output from this command is similar to the following:
Class I H/W Path Driver S/W State H/W Type Description
==========================================================================
disk 0 0/0/1/0.6.0 sdisk CLAIMED DEVICE HP DVD-ROM 6x/32x
/dev/rdsk/c0t6d0 /dev/rdsk/c0t6d0
disk 1 0/0/1/1.2.0 sdisk CLAIMED DEVICE SEAGATE ST39103LC
/dev/rdsk/c1t2d0 /dev/rdsk/c1t2d0
3. If the ioscan command does not display device name information for a device,
enter the following command to install the special device files for any new
devices:
# /usr/sbin/insf -e
4. For each disk to add to a disk group, enter the following command on any node to
verify that it is not already part of an LVM volume group:
# /sbin/pvdisplay /dev/dsk/cxtydz
If this command displays volume group information, the disk is already part of a
volume group. The disks that you choose must not be part of an LVM volume
group.
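Checking candidates one at a time is tedious; a sketch that sweeps all legacy DSFs and flags any disk already claimed by LVM (assumes pvdisplay exits non-zero for non-LVM disks):
for d in /dev/dsk/c*t*d*
do
    /sbin/pvdisplay $d > /dev/null 2>&1 && echo "IN USE by LVM: $d"
done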
# chown grid:asmadmin /dev/rdsk/cxtydz    (template using legacy DSF naming; the agile-view /dev/rdisk/diskN files are used below)
chown grid:asmadmin /dev/rdisk/disk1
chown grid:asmadmin /dev/rdisk/disk2
chown grid:asmadmin /dev/rdisk/disk3
chown grid:asmadmin /dev/rdisk/disk4
chown grid:asmadmin /dev/rdisk/disk5
chown grid:asmadmin /dev/rdisk/disk6
chown grid:asmadmin /dev/rdisk/disk7
chown grid:asmadmin /dev/rdisk/disk8
chown grid:asmadmin /dev/rdisk/disk9
chown grid:asmadmin /dev/rdisk/disk10
chown grid:asmadmin /dev/rdisk/disk11
chown grid:asmadmin /dev/rdisk/disk12
chown grid:asmadmin /dev/rdisk/disk13
chown grid:asmadmin /dev/rdisk/disk14
chown grid:asmadmin /dev/rdisk/disk15
chown grid:asmadmin /dev/rdisk/disk16
chown grid:asmadmin /dev/rdisk/disk17
chown grid:asmadmin /dev/rdisk/disk18
chown grid:asmadmin /dev/rdisk/disk19
chown grid:asmadmin /dev/rdisk/disk20
chown grid:asmadmin /dev/rdisk/disk21
chown grid:asmadmin /dev/rdisk/disk22
chown grid:asmadmin /dev/rdisk/disk23
chown grid:asmadmin /dev/rdisk/disk24
chown grid:asmadmin /dev/rdisk/disk25
chown grid:asmadmin /dev/rdisk/disk26
chown grid:asmadmin /dev/rdisk/disk27
chown grid:asmadmin /dev/rdisk/disk28
chown grid:asmadmin /dev/rdisk/disk29
chown grid:asmadmin /dev/rdisk/disk30
chown grid:asmadmin /dev/rdisk/disk31
chmod 660 /dev/rdisk/disk1
chmod 660 /dev/rdisk/disk2
chmod 660 /dev/rdisk/disk3
chmod 660 /dev/rdisk/disk4
chmod 660 /dev/rdisk/disk5
chmod 660 /dev/rdisk/disk6
chmod 660 /dev/rdisk/disk7
chmod 660 /dev/rdisk/disk8
chmod 660 /dev/rdisk/disk9
chmod 660 /dev/rdisk/disk10
chmod 660 /dev/rdisk/disk11
chmod 660 /dev/rdisk/disk12
chmod 660 /dev/rdisk/disk13
chmod 660 /dev/rdisk/disk14
chmod 660 /dev/rdisk/disk15
chmod 660 /dev/rdisk/disk16
chmod 660 /dev/rdisk/disk17
chmod 660 /dev/rdisk/disk18
chmod 660 /dev/rdisk/disk19
chmod 660 /dev/rdisk/disk20
chmod 660 /dev/rdisk/disk21
chmod 660 /dev/rdisk/disk22
chmod 660 /dev/rdisk/disk23
chmod 660 /dev/rdisk/disk24
chmod 660 /dev/rdisk/disk25
chmod 660 /dev/rdisk/disk26
chmod 660 /dev/rdisk/disk27
chmod 660 /dev/rdisk/disk28
chmod 660 /dev/rdisk/disk29
chmod 660 /dev/rdisk/disk30
chmod 660 /dev/rdisk/disk31
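The 62 commands above can equivalently be driven by a loop; a sketch (run as root on every node; adjust the range if the device numbering differs between nodes):
i=1
while [ $i -le 31 ]
do
    chown grid:asmadmin /dev/rdisk/disk$i
    chmod 660 /dev/rdisk/disk$i
    i=`expr $i + 1`
done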
===================================================
Applying Critical Patch Updates (CPUs) / Patch Set Updates (PSUs); OPatch itself is updated first via Patch 6880880
===================================================
opatch
======================
/u01/sw/unzip_hpx32 /u01/sw/p6880880_112000_HPUX-IA64.zip -d $ORACLE_HOME    # refreshes $ORACLE_HOME/OPatch; run as the owner of each home being patched
$ORACLE_HOME/OPatch/opatch version
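Before applying the PSU it is also worth snapshotting what each home currently contains; a sketch, run once per home as its owner:
$ORACLE_HOME/OPatch/opatch lsinventory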
auto patch
======================
1. Generate the OCM response file. As the Grid home owner execute:
$ORACLE_HOME/OPatch/ocm/bin/emocmrsp -output /home/grid/ocm.rsp
2. Apply the patch, usually as the root user.
To patch the GI home and all Oracle RAC database homes of the same version (this restarts CRS):
opatch auto /u01/sw/psu -ocmrf /home/grid/ocm.rsp
To patch only the database home:
opatch auto /u01/sw/psu -oh /u01/app/oracle/product/11.2.0/db_1 -ocmrf /home/grid/ocm.rsp
To patch only the GI home, pass -oh /u01/11.2.0/grid; opatch auto reads its cluster configuration from /u01/11.2.0/grid/crs/install/crsconfig_params.
manual patch
======================
1. Stop the CRS-managed resources running from the DB homes.
If this is a GI Home environment, as the database home owner execute:
$ <ORACLE_HOME>/bin/srvctl stop home -o <ORACLE_HOME> -s <status file location> -n <node name>
If this is an Oracle Restart Home environment, as the database home owner execute:
$ <ORACLE_HOME>/bin/srvctl stop home -o <ORACLE_HOME> -s <status file location>
(A filled-in example follows the note below.)
Note:
You need to make sure that the Oracle ACFS file systems are unmounted (see Section 2.8 of the patch README) and all other Oracle processes are shut down before you proceed.
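A filled-in example for this cluster's database home (the status file path /tmp/db_home_state.txt and the node name are illustrative; pick any writable location):
$ /u01/app/oracle/product/11.2.0/db_1/bin/srvctl stop home \
      -o /u01/app/oracle/product/11.2.0/db_1 \
      -s /tmp/db_home_state.txt -n hpcluster01
The same status file is passed to srvctl start home afterwards to restart exactly the resources that were stopped.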
2. Run the pre-patch root script.
If this is a GI Home, as the root user execute:
# <GI_HOME>/crs/install/rootcrs.pl -unlock
If this is an Oracle Restart Home, as the root user execute:
# <GI_HOME>/crs/install/roothas.pl -unlock
3. Apply the CRS patch.
As the GI home owner execute:
$ORACLE_HOME/OPatch/opatch napply -oh $ORACLE_HOME -local /u01/sw/psu/13919095
As the GI home owner execute:
$ORACLE_HOME/OPatch/opatch apply -oh $ORACLE_HOME -local /u01/sw/psu/13923374
4. Run the pre script for the DB component of the patch.
As the database home owner execute:
$ /u01/sw/psu/13919095/custom/server/13919095/custom/scripts/prepatch.sh -dbhome $ORACLE_HOME
5. Apply the DB patch.
As the database home owner execute:
$ORACLE_HOME/OPatch/opatch napply -oh $ORACLE_HOME -local /u01/sw/psu/13919095/custom/server/13919095
$ORACLE_HOME/OPatch/opatch apply -oh $ORACLE_HOME -local /u01/sw/psu/13923374
6. Run the post script for the DB component of the patch.
As the database home owner execute:
$ /u01/sw/psu/13919095/custom/server/13919095/custom/scripts/postpatch.sh -dbhome $ORACLE_HOME
7. Run the post-patch root script.
As the root user execute:
# /u01/11.2.0/grid/rdbms/install/rootadd_rdbms.sh
If this is a GI Home, as the root user execute:
# /u01/11.2.0/grid/crs/install/rootcrs.pl -patch
Verify the patch history:
select action,comments from registry$history;
(reference: bug 13342249)
For each database instance running on the Oracle home being patched, connect to the database with SQL*Plus as SYSDBA and run the catbundle.sql script as follows:
cd $ORACLE_HOME/rdbms/admin
sqlplus /nolog
SQL> CONNECT / AS SYSDBA
SQL> STARTUP
SQL> @catbundle.sql psu apply
SQL> select * from registry$history;
SQL> QUIT