HP-UX + ServiceGuard + Oracle 10g RAC Installation Notes

The planned steps are as follows.
The PVs presented to host dxdb1 by the DS4800 are:

[dxdb1@/]#ioscan -m dsf

/dev/rdisk/disk44        /dev/rdsk/c5t0d0
                         /dev/rdsk/c7t0d0
/dev/rdisk/disk45        /dev/rdsk/c5t0d1
                         /dev/rdsk/c7t0d1
/dev/rdisk/disk46        /dev/rdsk/c5t0d2
                         /dev/rdsk/c7t0d2
/dev/rdisk/disk47        /dev/rdsk/c5t0d3
                         /dev/rdsk/c7t0d3
/dev/rdisk/disk48        /dev/rdsk/c5t0d4
                         /dev/rdsk/c7t0d4
/dev/rdisk/disk49        /dev/rdsk/c5t0d5
                         /dev/rdsk/c7t0d5
/dev/rdisk/disk50        /dev/rdsk/c5t0d6
                         /dev/rdsk/c7t0d6
/dev/rdisk/disk51        /dev/rdsk/c5t0d7
                         /dev/rdsk/c7t0d7
/dev/rdisk/disk52        /dev/rdsk/c5t1d0
                         /dev/rdsk/c7t1d0
/dev/rdisk/disk53        /dev/rdsk/c5t1d1
                         /dev/rdsk/c7t1d1
/dev/rdisk/disk54        /dev/rdsk/c5t1d2
                         /dev/rdsk/c7t1d2
/dev/rdisk/disk55        /dev/rdsk/c5t1d3
                         /dev/rdsk/c7t1d3
/dev/rdisk/disk56        /dev/rdsk/c5t1d4
                         /dev/rdsk/c7t1d4
/dev/rdisk/disk57        /dev/rdsk/c5t1d5
                         /dev/rdsk/c7t1d5
/dev/rdisk/disk58        /dev/rdsk/c5t1d6
                         /dev/rdsk/c7t1d6
/dev/rdisk/disk59        /dev/rdsk/c5t1d7
                         /dev/rdsk/c7t1d7
/dev/rdisk/disk60        /dev/rdsk/c5t2d0
                         /dev/rdsk/c7t2d0
/dev/rdisk/disk61        /dev/rdsk/c5t2d1
                         /dev/rdsk/c7t2d1
/dev/rdisk/disk62        /dev/rdsk/c5t2d2
                         /dev/rdsk/c7t2d2
/dev/rdisk/disk63        /dev/rdsk/c5t2d3
                         /dev/rdsk/c7t2d3
/dev/rdisk/disk64        /dev/rdsk/c5t2d4
                         /dev/rdsk/c7t2d4
/dev/rdisk/disk65        /dev/rdsk/c5t2d5
                         /dev/rdsk/c7t2d5
/dev/rdisk/disk66        /dev/rdsk/c5t2d6
                         /dev/rdsk/c7t2d6
/dev/rdisk/disk67        /dev/rdsk/c5t2d7
                         /dev/rdsk/c7t2d7
/dev/rdisk/disk68        /dev/rdsk/c5t3d0
                         /dev/rdsk/c7t3d0
/dev/rdisk/disk69        /dev/rdsk/c5t3d1
                         /dev/rdsk/c7t3d1
/dev/rdisk/disk70        /dev/rdsk/c5t3d2
                         /dev/rdsk/c7t3d2
/dev/rdisk/disk71        /dev/rdsk/c5t3d3
                         /dev/rdsk/c7t3d3
/dev/rdisk/disk72        /dev/rdsk/c5t3d4
                         /dev/rdsk/c7t3d4
/dev/rdisk/disk73        /dev/rdsk/c5t3d5
                         /dev/rdsk/c7t3d5
/dev/rdisk/disk74        /dev/rdsk/c5t3d6
                         /dev/rdsk/c7t3d6
/dev/rdisk/disk75        /dev/rdsk/c5t3d7
                         /dev/rdsk/c7t3d7
/dev/rdisk/disk76        /dev/rdsk/c5t4d0
                         /dev/rdsk/c7t4d0
/dev/rdisk/disk77        /dev/rdsk/c5t4d1
                         /dev/rdsk/c7t4d1
/dev/rdisk/disk78        /dev/rdsk/c5t4d2
                         /dev/rdsk/c7t4d2
/dev/rdisk/disk79        /dev/rdsk/c5t4d3
                         /dev/rdsk/c7t4d3
/dev/rdisk/disk80        /dev/rdsk/c5t4d4
                         /dev/rdsk/c7t4d4
/dev/rdisk/disk81        /dev/rdsk/c5t4d5
                         /dev/rdsk/c7t4d5
/dev/rdisk/disk82        /dev/rdsk/c5t4d6
                         /dev/rdsk/c7t4d6
/dev/rdisk/disk83        /dev/rdsk/c5t4d7
                         /dev/rdsk/c7t4d7
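With agile addressing on HP-UX 11i v3, each persistent DSF (`/dev/rdisk/diskNN`) aggregates two legacy paths, one per HBA. A saved `ioscan -m dsf` capture can be sanity-checked offline; a sketch, using two sample lines as a stand-in for the full 40-LUN listing above:

```shell
# Two sample lines stand in for the saved `ioscan -m dsf` output from dxdb1.
cat > /tmp/ioscan_dxdb1.txt <<'EOF'
/dev/rdisk/disk44        /dev/rdsk/c5t0d0
                         /dev/rdsk/c7t0d0
/dev/rdisk/disk45        /dev/rdsk/c5t0d1
                         /dev/rdsk/c7t0d1
EOF

# Persistent DSFs appear in column 1; indented legacy paths do not match.
awk '$1 ~ /^\/dev\/rdisk\//{n++} END{print n " persistent DSFs in sample"}' /tmp/ioscan_dxdb1.txt
```

On the real capture the count should come out to 40 (disk44 through disk83).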


The PVs presented to host dxdb2 by the DS4800 are:

[dxdb2@/]#ioscan -m dsf
/dev/rdisk/disk89        /dev/rdsk/c17t0d0
                         /dev/rdsk/c18t0d0
/dev/rdisk/disk90        /dev/rdsk/c17t0d1
                         /dev/rdsk/c18t0d1
/dev/rdisk/disk91        /dev/rdsk/c17t0d2
                         /dev/rdsk/c18t0d2
/dev/rdisk/disk92        /dev/rdsk/c17t0d3
                         /dev/rdsk/c18t0d3
/dev/rdisk/disk93        /dev/rdsk/c17t0d4
                         /dev/rdsk/c18t0d4
/dev/rdisk/disk94        /dev/rdsk/c17t0d5
                         /dev/rdsk/c18t0d5
/dev/rdisk/disk95        /dev/rdsk/c17t0d6
                         /dev/rdsk/c18t0d6
/dev/rdisk/disk96        /dev/rdsk/c17t0d7
                         /dev/rdsk/c18t0d7
/dev/rdisk/disk97        /dev/rdsk/c17t1d0
                         /dev/rdsk/c18t1d0
/dev/rdisk/disk98        /dev/rdsk/c17t1d1
                         /dev/rdsk/c18t1d1
/dev/rdisk/disk99        /dev/rdsk/c17t1d2
                         /dev/rdsk/c18t1d2
/dev/rdisk/disk100       /dev/rdsk/c17t1d3
                         /dev/rdsk/c18t1d3
/dev/rdisk/disk101       /dev/rdsk/c17t1d4
                         /dev/rdsk/c18t1d4
/dev/rdisk/disk102       /dev/rdsk/c17t1d5
                         /dev/rdsk/c18t1d5
/dev/rdisk/disk103       /dev/rdsk/c17t1d6
                         /dev/rdsk/c18t1d6
/dev/rdisk/disk104       /dev/rdsk/c17t1d7
                         /dev/rdsk/c18t1d7
/dev/rdisk/disk105       /dev/rdsk/c17t2d2
                         /dev/rdsk/c18t2d2
/dev/rdisk/disk106       /dev/rdsk/c17t2d0
                         /dev/rdsk/c18t2d0
/dev/rdisk/disk107       /dev/rdsk/c17t2d1
                         /dev/rdsk/c18t2d1
/dev/rdisk/disk108       /dev/rdsk/c17t2d3
                         /dev/rdsk/c18t2d3
/dev/rdisk/disk109       /dev/rdsk/c17t2d4
                         /dev/rdsk/c18t2d4
/dev/rdisk/disk110       /dev/rdsk/c17t2d5
                         /dev/rdsk/c18t2d5
/dev/rdisk/disk111       /dev/rdsk/c17t2d6
                         /dev/rdsk/c18t2d6
/dev/rdisk/disk112       /dev/rdsk/c17t2d7
                         /dev/rdsk/c18t2d7
/dev/rdisk/disk113       /dev/rdsk/c17t3d0
                         /dev/rdsk/c18t3d0
/dev/rdisk/disk114       /dev/rdsk/c17t3d1
                         /dev/rdsk/c18t3d1
/dev/rdisk/disk115       /dev/rdsk/c17t3d2
                         /dev/rdsk/c18t3d2
/dev/rdisk/disk116       /dev/rdsk/c17t3d3
                         /dev/rdsk/c18t3d3
/dev/rdisk/disk117       /dev/rdsk/c17t3d4
                         /dev/rdsk/c18t3d4
/dev/rdisk/disk118       /dev/rdsk/c17t3d5
                         /dev/rdsk/c18t3d5
/dev/rdisk/disk119       /dev/rdsk/c17t3d6
                         /dev/rdsk/c18t3d6
/dev/rdisk/disk120       /dev/rdsk/c17t3d7
                         /dev/rdsk/c18t3d7
/dev/rdisk/disk121       /dev/rdsk/c17t4d0
                         /dev/rdsk/c18t4d0
/dev/rdisk/disk122       /dev/rdsk/c17t4d1
                         /dev/rdsk/c18t4d1
/dev/rdisk/disk123       /dev/rdsk/c17t4d2
                         /dev/rdsk/c18t4d2
/dev/rdisk/disk124       /dev/rdsk/c17t4d3
                         /dev/rdsk/c18t4d3
/dev/rdisk/disk125       /dev/rdsk/c17t4d4
                         /dev/rdsk/c18t4d4
/dev/rdisk/disk126       /dev/rdsk/c17t4d5
                         /dev/rdsk/c18t4d5
/dev/rdisk/disk127       /dev/rdsk/c17t4d6
                         /dev/rdsk/c18t4d6
/dev/rdisk/disk128       /dev/rdsk/c17t4d7
                         /dev/rdsk/c18t4d7
[dxdb2@/]#

A: On dxdb1, create vglock, vgoradata01, and vgoradata02:


cd /dev
mkdir vgoradata01 vgoradata02  vglock
mknod /dev/vglock/group c 64 0x010000
mknod /dev/vgoradata01/group c 64 0x020000
mknod /dev/vgoradata02/group c 64 0x030000

pvcreate -f /dev/rdisk/disk44
pvcreate -f /dev/rdisk/disk45
pvcreate -f /dev/rdisk/disk46
pvcreate -f /dev/rdisk/disk47
pvcreate -f /dev/rdisk/disk48
pvcreate -f /dev/rdisk/disk49
pvcreate -f /dev/rdisk/disk50
pvcreate -f /dev/rdisk/disk51
pvcreate -f /dev/rdisk/disk52
pvcreate -f /dev/rdisk/disk53
pvcreate -f /dev/rdisk/disk54
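The eleven pvcreate runs above can equally be generated with a loop; this sketch only prints the commands so they can be reviewed before being executed:

```shell
# Print (not execute) the pvcreate commands for disk44..disk54.
i=44
while [ "$i" -le 54 ]; do
  echo "pvcreate -f /dev/rdisk/disk$i"
  i=$(( i + 1 ))
done
```

Pipe the output to `sh` only after verifying the disk list, since `pvcreate -f` overwrites any existing LVM headers on the disk.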

Create the VGs:
vgcreate -s 256 /dev/vglock /dev/disk/disk44
vgcreate -s 256 /dev/vgoradata01 /dev/disk/disk45 /dev/disk/disk46 /dev/disk/disk47 /dev/disk/disk48
vgcreate -s 256 /dev/vgoradata02 /dev/disk/disk49 /dev/disk/disk50 /dev/disk/disk51 /dev/disk/disk52

Create the LVs:


/sbin/lvcreate -n lv_system            -L 8192  /dev/vgoradata01
/sbin/lvcreate -n lv_temp              -L 8192  /dev/vgoradata01
/sbin/lvcreate -n lv_undot11           -L 8192  /dev/vgoradata01
/sbin/lvcreate -n lv_undot12           -L 8192  /dev/vgoradata01
/sbin/lvcreate -n lv_undotbs21         -L 8192  /dev/vgoradata01
/sbin/lvcreate -n lv_undotbs22         -L 8192  /dev/vgoradata01
/sbin/lvcreate -n lv_user              -L 8192  /dev/vgoradata01
/sbin/lvcreate -n lv_index             -L 8192  /dev/vgoradata01
/sbin/lvcreate -n lv_tools             -L 8192  /dev/vgoradata01
/sbin/lvcreate -n lv_srvconfig         -L 8192  /dev/vgoradata01
/sbin/lvcreate -n lv_spfile            -L 256   /dev/vgoradata01
/sbin/lvcreate -n lv_cntl1             -L 256   /dev/vgoradata01
/sbin/lvcreate -n lv_cntl2             -L 256   /dev/vgoradata01
/sbin/lvcreate -n lv_cntl3             -L 256   /dev/vgoradata01
/sbin/lvcreate -n lv_redo11            -L 512   /dev/vgoradata01
/sbin/lvcreate -n lv_redo12            -L 512   /dev/vgoradata01
/sbin/lvcreate -n lv_redo21            -L 512   /dev/vgoradata01
/sbin/lvcreate -n lv_redo22            -L 512   /dev/vgoradata01
/sbin/lvcreate -n lv_redo31            -L 512   /dev/vgoradata01
/sbin/lvcreate -n lv_redo32            -L 512   /dev/vgoradata01
/sbin/lvcreate -n lv_oraf01         -L 2048  /dev/vgoradata01
/sbin/lvcreate -n lv_oraf02         -L 2048  /dev/vgoradata01
/sbin/lvcreate -n lv_oraf03         -L 2048  /dev/vgoradata01
/sbin/lvcreate -n lv_oraf04         -L 2048  /dev/vgoradata01
/sbin/lvcreate -n lv_oraf05         -L 2048  /dev/vgoradata01
/sbin/lvcreate -n lv_oraf06         -L 2048  /dev/vgoradata01
/sbin/lvcreate -n lv_oraf07         -L 2048  /dev/vgoradata01
80G+1G+3G+14G=98G
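The 98G figure can be checked directly from the `-L` values above (all given in MB):

```shell
# 10 x 8192 MB (system/temp/undo/users/index/tools/srvconfig)
#  4 x  256 MB (spfile + three control files)
#  6 x  512 MB (redo logs)
#  7 x 2048 MB (oraf01..oraf07)
total_mb=$(( 10*8192 + 4*256 + 6*512 + 7*2048 ))
echo "${total_mb} MB = $(( total_mb / 1024 )) GB"   # prints: 100352 MB = 98 GB
```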

/sbin/lvcreate -n lv_data0100            -L 8192   /dev/vgoradata01
/sbin/lvcreate -n lv_data0101            -L 8192   /dev/vgoradata01
/sbin/lvcreate -n lv_data0102            -L 8192   /dev/vgoradata01
/sbin/lvcreate -n lv_data0103            -L 8192   /dev/vgoradata01
/sbin/lvcreate -n lv_data0104            -L 8192   /dev/vgoradata01
/sbin/lvcreate -n lv_data0105            -L 8192   /dev/vgoradata01
/sbin/lvcreate -n lv_data0106            -L 8192   /dev/vgoradata01
/sbin/lvcreate -n lv_data0107            -L 8192   /dev/vgoradata01
/sbin/lvcreate -n lv_data0108            -L 8192   /dev/vgoradata01
/sbin/lvcreate -n lv_data0109            -L 8192   /dev/vgoradata01

80G
/sbin/lvcreate -n lv_data0110            -L 8192   /dev/vgoradata01
/sbin/lvcreate -n lv_data0111            -L 8192   /dev/vgoradata01
/sbin/lvcreate -n lv_data0112            -L 8192   /dev/vgoradata01
/sbin/lvcreate -n lv_data0113            -L 8192   /dev/vgoradata01
/sbin/lvcreate -n lv_data0114            -L 8192   /dev/vgoradata01
/sbin/lvcreate -n lv_data0115            -L 8192   /dev/vgoradata01
/sbin/lvcreate -n lv_data0116            -L 8192   /dev/vgoradata01
/sbin/lvcreate -n lv_data0117            -L 8192   /dev/vgoradata01
/sbin/lvcreate -n lv_data0118            -L 8192   /dev/vgoradata01
/sbin/lvcreate -n lv_data0119            -L 8192   /dev/vgoradata01

80G
/sbin/lvcreate -n lv_data0120            -L 16384   /dev/vgoradata01
/sbin/lvcreate -n lv_data0121            -L 16384   /dev/vgoradata01
/sbin/lvcreate -n lv_data0122            -L 16384   /dev/vgoradata01
/sbin/lvcreate -n lv_data0123            -L 16384   /dev/vgoradata01
/sbin/lvcreate -n lv_data0124            -L 16384   /dev/vgoradata01
/sbin/lvcreate -n lv_data0125            -L 16384   /dev/vgoradata01
/sbin/lvcreate -n lv_data0126            -L 16384   /dev/vgoradata01
/sbin/lvcreate -n lv_data0127            -L 16384   /dev/vgoradata01
/sbin/lvcreate -n lv_data0128            -L 16384   /dev/vgoradata01
/sbin/lvcreate -n lv_data0129            -L 16384   /dev/vgoradata01

160G

/sbin/lvcreate -n lv_data0130            -L 16384   /dev/vgoradata01
/sbin/lvcreate -n lv_data0131            -L 16384   /dev/vgoradata01
/sbin/lvcreate -n lv_data0132            -L 16384   /dev/vgoradata01
/sbin/lvcreate -n lv_data0133            -L 16384   /dev/vgoradata01
/sbin/lvcreate -n lv_data0134            -L 16384   /dev/vgoradata01
/sbin/lvcreate -n lv_data0135            -L 16384   /dev/vgoradata01
/sbin/lvcreate -n lv_data0136            -L 16384   /dev/vgoradata01
/sbin/lvcreate -n lv_data0137            -L 16384   /dev/vgoradata01
/sbin/lvcreate -n lv_data0138            -L 16384   /dev/vgoradata01
/sbin/lvcreate -n lv_data0139            -L 16384   /dev/vgoradata01
/sbin/lvcreate -n lv_data0140            -L 16384   /dev/vgoradata01

176G (11 × 16G)

Total for vgoradata01: 80G + 80G + 160G + 176G data + 98G = 594G

 

/sbin/lvcreate -n lv_data0200            -L 16384   /dev/vgoradata02
/sbin/lvcreate -n lv_data0201            -L 16384   /dev/vgoradata02
/sbin/lvcreate -n lv_data0202            -L 16384   /dev/vgoradata02
/sbin/lvcreate -n lv_data0203            -L 16384   /dev/vgoradata02
/sbin/lvcreate -n lv_data0204            -L 16384   /dev/vgoradata02
/sbin/lvcreate -n lv_data0205            -L 16384   /dev/vgoradata02
/sbin/lvcreate -n lv_data0206            -L 16384   /dev/vgoradata02
/sbin/lvcreate -n lv_data0207            -L 16384   /dev/vgoradata02
/sbin/lvcreate -n lv_data0208            -L 16384   /dev/vgoradata02
/sbin/lvcreate -n lv_data0209            -L 16384   /dev/vgoradata02

/sbin/lvcreate -n lv_data0210            -L 16384   /dev/vgoradata02
/sbin/lvcreate -n lv_data0211            -L 16384   /dev/vgoradata02
/sbin/lvcreate -n lv_data0212            -L 16384   /dev/vgoradata02
/sbin/lvcreate -n lv_data0213            -L 16384   /dev/vgoradata02
/sbin/lvcreate -n lv_data0214            -L 16384   /dev/vgoradata02
/sbin/lvcreate -n lv_data0215            -L 16384   /dev/vgoradata02
/sbin/lvcreate -n lv_data0216            -L 16384   /dev/vgoradata02
/sbin/lvcreate -n lv_data0217            -L 16384   /dev/vgoradata02
/sbin/lvcreate -n lv_data0218            -L 16384   /dev/vgoradata02
/sbin/lvcreate -n lv_data0219            -L 16384   /dev/vgoradata02

/sbin/lvcreate -n lv_data0220            -L 16384   /dev/vgoradata02
/sbin/lvcreate -n lv_data0221            -L 16384   /dev/vgoradata02
/sbin/lvcreate -n lv_data0222            -L 16384   /dev/vgoradata02
/sbin/lvcreate -n lv_data0223            -L 16384   /dev/vgoradata02
/sbin/lvcreate -n lv_data0224            -L 16384   /dev/vgoradata02
/sbin/lvcreate -n lv_data0225            -L 16384   /dev/vgoradata02
/sbin/lvcreate -n lv_data0226            -L 16384   /dev/vgoradata02
/sbin/lvcreate -n lv_data0227            -L 16384   /dev/vgoradata02
/sbin/lvcreate -n lv_data0228            -L 16384   /dev/vgoradata02
/sbin/lvcreate -n lv_data0229            -L 16384   /dev/vgoradata02

/sbin/lvcreate -n lv_data0230            -L 16384   /dev/vgoradata02
/sbin/lvcreate -n lv_data0231            -L 16384   /dev/vgoradata02
/sbin/lvcreate -n lv_data0232            -L 16384   /dev/vgoradata02
/sbin/lvcreate -n lv_data0233            -L 16384   /dev/vgoradata02
/sbin/lvcreate -n lv_data0234            -L 16384   /dev/vgoradata02
/sbin/lvcreate -n lv_data0235            -L 16384   /dev/vgoradata02
/sbin/lvcreate -n lv_data0236            -L 16384   /dev/vgoradata02

480G + 16G*7 = 480G + 112G = 592G
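The 37 identical vgoradata02 lvcreate runs (lv_data0200 through lv_data0236) lend themselves to generation by a loop; the sketch below prints the commands rather than executing them:

```shell
# Print (not execute) the lvcreate commands for lv_data0200..lv_data0236.
i=0
while [ "$i" -le 36 ]; do
  printf '/sbin/lvcreate -n lv_data02%02d -L 16384 /dev/vgoradata02\n' "$i"
  i=$(( i + 1 ))
done
```

Review the printed list (37 lines × 16384 MB = 592 GB) before feeding it to `sh`.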

 

 

B: Create the VG group files on host dxdb2
cd /dev
mkdir vgoradata01 vgoradata02  vglock
mknod /dev/vglock/group c 64 0x010000
mknod /dev/vgoradata01/group c 64 0x020000
mknod /dev/vgoradata02/group c 64 0x030000

Note: in these two commands, e.g.
# mknod /dev/vglock/group c 64 0x010000
# mknod /dev/vgdb/group c 64 0x020000
the minor numbers (0x010000, 0x020000) must match host dxdb1 exactly, or the next step will fail. In IBM's HACMP this step does not have to be done by hand.
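One way to verify the minor numbers agree is to capture `ll /dev/*/group` on each node and compare the path/minor-number pairs. A sketch with awk and diff; the sample files stand in for real captures from dxdb1 and dxdb2, and the field positions assume this trimmed sample format (real `ll` output also carries a timestamp, so adjust the field indices accordingly):

```shell
# Sample captures standing in for `ll /dev/*/group` from each node.
cat > /tmp/groups_dxdb1.txt <<'EOF'
crw-r--r--   1 root   sys   64 0x010000 /dev/vglock/group
crw-r--r--   1 root   sys   64 0x020000 /dev/vgoradata01/group
crw-r--r--   1 root   sys   64 0x030000 /dev/vgoradata02/group
EOF
cp /tmp/groups_dxdb1.txt /tmp/groups_dxdb2.txt

# Extract "path minor" pairs and diff; no diff output means the nodes agree.
awk '{print $NF, $(NF-1)}' /tmp/groups_dxdb1.txt > /tmp/minors_dxdb1
awk '{print $NF, $(NF-1)}' /tmp/groups_dxdb2.txt > /tmp/minors_dxdb2
diff /tmp/minors_dxdb1 /tmp/minors_dxdb2 && echo "group minor numbers match"
```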

C: On host dxdb1, export the volume group maps to files.

vgexport -m /tmp/vgoradata01.map -s -v  -p vgoradata01
vgexport -m /tmp/vgoradata02.map -s -v  -p vgoradata02
vgexport -m /tmp/vglock.map -s -v  -p vglock

Copy the map files to dxdb2:
rcp /tmp/vglock.map dxdb2:/tmp/
rcp /tmp/vgoradata01.map dxdb2:/tmp/
rcp /tmp/vgoradata02.map dxdb2:/tmp/

On host 1 (dxdb1) the disk devices appear as:

/dev/rdisk/disk44        /dev/rdsk/c5t0d0
                         /dev/rdsk/c7t0d0
/dev/rdisk/disk45        /dev/rdsk/c5t0d1
                         /dev/rdsk/c7t0d1
/dev/rdisk/disk46        /dev/rdsk/c5t0d2
                         /dev/rdsk/c7t0d2
/dev/rdisk/disk47        /dev/rdsk/c5t0d3
                         /dev/rdsk/c7t0d3
/dev/rdisk/disk48        /dev/rdsk/c5t0d4
                         /dev/rdsk/c7t0d4
/dev/rdisk/disk49        /dev/rdsk/c5t0d5
                         /dev/rdsk/c7t0d5
/dev/rdisk/disk50        /dev/rdsk/c5t0d6
                         /dev/rdsk/c7t0d6
/dev/rdisk/disk51        /dev/rdsk/c5t0d7
                         /dev/rdsk/c7t0d7
/dev/rdisk/disk52        /dev/rdsk/c5t1d0
                         /dev/rdsk/c7t1d0

while host 2 (dxdb2) may see:


/dev/rdisk/disk89        /dev/rdsk/c17t0d0
                         /dev/rdsk/c18t0d0
/dev/rdisk/disk90        /dev/rdsk/c17t0d1
                         /dev/rdsk/c18t0d1
/dev/rdisk/disk91        /dev/rdsk/c17t0d2
                         /dev/rdsk/c18t0d2
/dev/rdisk/disk92        /dev/rdsk/c17t0d3
                         /dev/rdsk/c18t0d3
/dev/rdisk/disk93        /dev/rdsk/c17t0d4
                         /dev/rdsk/c18t0d4
/dev/rdisk/disk94        /dev/rdsk/c17t0d5
                         /dev/rdsk/c18t0d5
/dev/rdisk/disk95        /dev/rdsk/c17t0d6
                         /dev/rdsk/c18t0d6
/dev/rdisk/disk96        /dev/rdsk/c17t0d7
                         /dev/rdsk/c18t0d7
/dev/rdisk/disk97        /dev/rdsk/c17t1d0
                         /dev/rdsk/c18t1d0

Inspection confirms it: host 2's device names cannot be matched one-for-one against host 1's. In this case, the import commands on host 2 must be adjusted to use host 2's own device names:


vgimport -s -m /tmp/vglock.map /dev/vglock /dev/disk/disk89
vgimport -s -m /tmp/vgoradata01.map /dev/vgoradata01  /dev/disk/disk90  /dev/disk/disk91 /dev/disk/disk92 /dev/disk/disk93
vgimport -s -m /tmp/vgoradata02.map /dev/vgoradata02 /dev/disk/disk94  /dev/disk/disk95 /dev/disk/disk96 /dev/disk/disk97

 

 

On dxdb1, generate and apply the ServiceGuard cluster configuration:

cmquerycl -v -C /etc/cmcluster/cmclconf.ascii -n dxdb1 -n dxdb2

cmapplyconf -k -v -C /etc/cmcluster/cmclconf.ascii

To avoid automatic activation of these volume groups at boot, they must not be managed as local VGs the way vg00 is; control is handed over to the MC/ServiceGuard vlmd process.

Start the cluster:
cmruncl -f -v
Check cluster status:
cmviewcl -v
Halt the cluster:
cmhaltcl -f -v

Activate the shared volume groups in shared mode by running the following on both nodes:
vgchange -a s /dev/vgoradata01
vgchange -a s /dev/vgoradata02

 

|||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||


Install Oracle RAC
On DB1:
lvcreate -L 30720 -n lvora /dev/vg00
newfs -F vxfs -o largefiles /dev/vg00/rlvora
cd /
mkdir oracle

vi /etc/fstab and add:
/dev/vg00/lvora /oracle vxfs delaylog 0 2
mount -a

On DB2:
lvcreate -L 30720 -n lvora /dev/vg00
newfs -F vxfs -o largefiles /dev/vg00/rlvora
cd /
mkdir oracle

vi /etc/fstab and add:
/dev/vg00/lvora /oracle vxfs delaylog 0 2
mount -a

On both machines, set ownership on /oracle (the oracle user and dba group must already exist; see the user-creation step below):

chown -R oracle:dba /oracle
Edit /.rhosts under the root directory:
[c580b2@/oracle/product/10.2.0/crs/bin]#more /.rhosts
c580b1   root
c580b2   root


Create the oracle user and dba group on both machines:
groupadd -g 505 dba
useradd -u 505 -d /oracle -G dba oracle
passwd oracle


Edit .rhosts in oracle's home directory:
$ pwd
/oracle
$ more .rhosts
c580b1    oracle
c580b2    oracle


[dxdb1@/]#more /etc/hosts
## Configured using SAM by root on Tue Apr 14 10:04:36 2009
## Configured using SAM by root on Wed Apr 15 14:35:40 2009
# @(#)B.11.31_LRhosts $Revision: 1.9.214.1 $ $Date: 96/10/08 13:20:01 $
#
# The form for each entry is:
# <internet address>    <official hostname> <aliases>
#
# For example:
# 192.1.2.34    hpfcrm  loghost
#
# See the hosts(4) manual page for more information.
# Note: The entries cannot be preceded by a space.
#       The format described in this file is the correct format.
#       The original Berkeley manual page contains an error in
#       the format description.
#

127.0.0.1       localhost       loopback
10.154.*.35    dxdb1
10.154.*.36    dxdb2
8.8.8.17        dxdb1-hb
8.8.8.18        dxdb2-hb
10.154.*.37    dxdb1-v
10.154.*.38    dxdb2-v

[c580b2@/oracle/product/10.2.0/crs/bin]#netstat -ni
Name      Mtu  Network         Address         Ipkts              Ierrs Opkts              Oerrs Coll
lo0      32808 127.0.0.0       127.0.0.1       44107              0     44107              0     0  
lan901    1500 8.8.8.20        8.8.8.22        36341              0     47990              0     0  
lan900    1500 10.154.*.0     10.154.*.40    10869136           0     1904027            0     0  
lan900:8  1500 10.154.*.0     10.154.*.42    0                  0     0                  0     0  
Warning: The above name 'lan900:801' is truncated, use -w to show the output in wide format
[c580b2@/oracle/product/10.2.0/crs/bin]#
[c580b2@/oracle/product/10.2.0/crs/bin]#lanscan
Hardware Station        Crd Hdw   Net-Interface  NM  MAC       HP-DLPI DLPI
Path     Address        In# State NamePPA        ID  Type      Support Mjr#
1/0/0/1/0 0x00237DF94E1B 0   UP    lan0 snap0     1   ETHER     Yes     119
LinkAgg0 0x001F290DEB3B 900 UP    lan900 snap900 7   ETHER     Yes     119
LinkAgg1 0x001F290DEAF3 901 UP    lan901 snap901 8   ETHER     Yes     119
LinkAgg2 0x000000000000 902 DOWN  lan902 snap902 9   ETHER     Yes     119
LinkAgg3 0x000000000000 903 DOWN  lan903 snap903 10  ETHER     Yes     119
LinkAgg4 0x000000000000 904 DOWN  lan904 snap904 11  ETHER     Yes     119
[c580b2@/oracle/product/10.2.0/crs/bin]#


On both machines, configure the oracle user's .profile:
su - oracle
vi .profile
Add the following:
umask 022
ORACLE_BASE=/oracle
export ORACLE_BASE
ORACLE_HOME=$ORACLE_BASE/product/10.2.0/db
export ORACLE_HOME
export ORA_CRS_HOME=$ORACLE_BASE/product/10.2.0/crs
NLS_LANG=AMERICAN_AMERICA.ZHS16GBK
export NLS_LANG
ORA_NLS33=$ORACLE_HOME/ocommon/nls/admin/data
export ORA_NLS33
CLASSPATH=$ORACLE_HOME/JRE/lib:$ORACLE_HOME/JRE/lib/rt.jar:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib
export CLASSPATH
LD_LIBRARY_PATH=$ORACLE_HOME/lib:/usr/lib:$ORACLE_HOME/lib32
export LD_LIBRARY_PATH
ORACLE_SID=cdxdb1
export ORACLE_SID
PATH=$ORACLE_HOME/bin:.:$ORA_CRS_HOME/bin:$PATH
export PATH
export DISPLAY=10.154.*.119:0.0
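A quick way to confirm the additions expand as intended is to source a copy of just the ORACLE_* lines in a subshell; the paths below are copied from the profile, and this is only a check, not part of the install:

```shell
# Write the ORACLE_* lines to a scratch file and source them in a subshell.
cat > /tmp/ora_env.sh <<'EOF'
ORACLE_BASE=/oracle
export ORACLE_BASE
ORACLE_HOME=$ORACLE_BASE/product/10.2.0/db
export ORACLE_HOME
export ORA_CRS_HOME=$ORACLE_BASE/product/10.2.0/crs
ORACLE_SID=cdxdb1
export ORACLE_SID
EOF

( . /tmp/ora_env.sh
  echo "ORACLE_HOME=$ORACLE_HOME"
  echo "ORA_CRS_HOME=$ORA_CRS_HOME" )
```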

Change to the installation directory and launch the clusterware installer:
./run* -ignoreSysPrereqs

Grant permissions on the two devices (OCR and voting disk):
chown -R oracle:dba /dev/vgoradata01/r*

OCR device:    /dev/vgoradata01/rlv_oraf01
Voting device: /dev/vgoradata01/rlv_oraf02

[c580b1@/dev/vgoradata01]#/oracle/oraInventory/orainstRoot.sh
[c580b1@/dev/vgoradata01]#/oracle/product/10.2.0/crs/root.sh

[c580b2@/oracle/product/10.2.0/crs]#./root.sh
WARNING: directory '/oracle/product/10.2.0' is not owned by root
WARNING: directory '/oracle/product' is not owned by root
WARNING: directory '/oracle' is not owned by root
Checking to see if Oracle CRS stack is already configured
Checking to see if any 9i GSD is up

Setting the permissions on OCR backup directory
Setting up NS directories
Oracle Cluster Registry configuration upgraded successfully
WARNING: directory '/oracle/product/10.2.0' is not owned by root
WARNING: directory '/oracle/product' is not owned by root
WARNING: directory '/oracle' is not owned by root
clscfg: EXISTING configuration version 3 detected.
clscfg: version 3 is 10G Release 2.
Successfully accumulated necessary OCR keys.
Using ports: CSS=49895 CRS=49896 EVMC=49898 and EVMR=49897.
node <nodenumber>: <nodename> <private interconnect name> <hostname>
node 1: c580b1 c580b1-hb c580b1
node 2: c580b2 c580b2-hb c580b2
clscfg: Arguments check out successfully.

NO KEYS WERE WRITTEN. Supply -force parameter to override.
-force is destructive and will destroy any previous cluster
configuration.
Oracle Cluster Registry for cluster has already been initialized
Startup will be queued to init within 30 seconds.
Adding daemons to inittab
Expecting the CRS daemons to be up within 600 seconds.
CSS is active on these nodes.
        c580b1
        c580b2
CSS is active on all nodes.
Waiting for the Oracle CRSD and EVMD to start
Oracle CRS stack installed and running under init(1M)
Running vipca(silent) for configuring nodeapps
The given interface(s), "lan900" is not public. Public interfaces should be used to configure virtual IPs.


[c580b1@/oracle/product/10.2.0/crs/bin]#./crsctl check crs
CSS appears healthy
CRS appears healthy
EVM appears healthy

# ./vipca

Then click the Retry button in the original installer window; the step will now succeed.


|||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||

Apply the 10.2.0.4 patch to clusterware
# /oracle/product/10.2.0/crs/bin/crsctl stop crs
Stopping resources.
Successfully stopped CRS resources
Stopping CSSD.
Shutting down CSS daemon.
Shutdown request successfully issued.
# /oracle/product/10.2.0/crs/install/root102.sh
Creating pre-patch directory for saving pre-patch clusterware files

Completed patching clusterware files to /oracle/product/10.2.0/crs
Relinking some shared libraries.
Relinking of patched files is complete.
WARNING: directory '/oracle/product/10.2.0' is not owned by root
WARNING: directory '/oracle/product' is not owned by root
WARNING: directory '/oracle' is not owned by root
Preparing to recopy patched init and RC scripts.
Recopying init and RC scripts.
Startup will be queued to init within 30 seconds.
Starting up the CRS daemons.
Waiting for the patched CRS daemons to start.
  This may take a while on some systems.
.
10204 patch successfully applied.
clscfg: EXISTING configuration version 3 detected.
clscfg: version 3 is 10G Release 2.
Successfully accumulated necessary OCR keys.
Using ports: CSS=49895 CRS=49896 EVMC=49898 and EVMR=49897.
node <nodenumber>: <nodename> <private interconnect name> <hostname>
node 2: dxdb2 dxdb2-hb dxdb2
Creating OCR keys for user 'root', privgrp 'sys'..
Operation successful.
clscfg -upgrade completed successfully

 

Raw device layout and partition plan

 

spfile    /dev/vgoradata01/rlv_spfile
sysaux    /dev/vgoradata01/rlv_data0100
system    /dev/vgoradata01/rlv_data0101
temp      /dev/vgoradata01/rlv_temp
undotbs1  /dev/vgoradata01/rlv_undotbs21
undotbs2  /dev/vgoradata01/rlv_undotbs22
users     /dev/vgoradata01/rlv_data0102
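Some sites find a tablespace-to-raw-LV mapping like the one above easier to manage through symlinks in a directory owned by oracle; a hypothetical sketch (the /tmp/oradata_demo directory is a stand-in, not part of the plan above):

```shell
# Hypothetical: symlink datafile names to the raw LVs from the plan.
ORADATA=/tmp/oradata_demo   # stand-in for a real datafile directory
mkdir -p "$ORADATA"
ln -sf /dev/vgoradata01/rlv_spfile   "$ORADATA/spfile"
ln -sf /dev/vgoradata01/rlv_data0101 "$ORADATA/system"
ln -sf /dev/vgoradata01/rlv_data0100 "$ORADATA/sysaux"
readlink "$ORADATA/system"   # shows which raw device the link points at
```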

 
