Removing and Adding LVs in an HP-UX MC/SG RAC Environment

 

1. Stop Oracle RAC, then stop MC

As root, change to $ORACLE_CRS_HOME/bin.

On node A run: ./crs_stop -all   Wait for the command to finish, then run ./crsctl stop crs on node A and ./crsctl stop crs on node B.

Wait about 2 minutes; once the CRS core processes have exited, stop MC.

As root, on node A run: cmhaltcl -v -f
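Before moving on, it is worth confirming that CRS is really down and the cluster is halted; a minimal check with standard tools (the daemon names below are the usual 10g CRS processes):

[root@cmsdb1]# ps -ef | grep -E 'crsd|ocssd|evmd' | grep -v grep    # should print nothing
[root@cmsdb1]# cmviewcl -v                                          # cluster status should be "down"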

2. The VG should already be deactivated; activate it on node A

[root@cmsdb1]# vgchange -a y vgdata
Activated volume group.
Volume group "vgdata" has been successfully changed.

 

3. Remove the logical volume

[root@cmsdb1]# lvremove /dev/vgdata/Device01_data_10g
The logical volume "/dev/vgdata/Device01_data_10g" is not empty;
do you really want to delete the logical volume (y/n) : y
Logical volume "/dev/vgdata/Device01_data_10g" has been successfully removed.
Volume Group configuration for /dev/vgdata has been saved in /etc/lvmconf/vgdata.conf
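Before answering y, it is prudent to confirm nothing still has the device open (its datafile should already have been dropped from the database). A minimal check, using the raw device name from this example:

[root@cmsdb1]# fuser /dev/vgdata/rDevice01_data_10g    # no PIDs in the output means the device is not in use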

4. Create the new logical volume

[root@cmsdb1]# lvcreate -L 700000 -n Device_data_700g /dev/vgdata
Logical volume "/dev/vgdata/Device_data_700g" has been successfully created with
character device "/dev/vgdata/rDevice_data_700g".
Logical volume "/dev/vgdata/Device_data_700g" has been successfully extended.
Volume Group configuration for /dev/vgdata has been saved in /etc/lvmconf/vgdata.conf
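lvcreate -L takes a size in megabytes, so 700000 MB is roughly 683 GB, which matches the _700g naming. A quick lvdisplay confirms it:

[root@cmsdb1]# lvdisplay /dev/vgdata/Device_data_700g | grep "LV Size"    # size is reported in MB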

5. Deactivate the VG
[root@cmsdb1]# vgchange -a n vgdata
Configuration change completed.
Volume group "vgdata" has been successfully changed.

6. Export the map file and copy it to node B

Export:
[root@cmsdb1]# vgexport -p -s -v -m /tmp/vgdata.map /dev/vgdata

Copy it to node B:

[root@cmsdb1]# rcp /tmp/vgdata.map cmsdb2:/tmp/vgdata.map
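For reference: -p makes this a preview run (nothing is actually removed from node A, where the VG must stay configured), and -s writes the VGID into the map file so node B can locate the matching disks by scanning. The map file is plain text and can be sanity-checked before copying:

[root@cmsdb1]# cat /tmp/vgdata.map    # with -s the first line carries the VGID; the rest maps LV numbers to LV names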

 

7. Log in to node B

Export the stale VG definition:

[root@cmsdb2]# vgexport /dev/vgdata

Recreate the VG directory and group device file:

[root@cmsdb2]# mkdir /dev/vgdata
[root@cmsdb2]# mknod /dev/vgdata/group c 64 0x050000
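The minor number given to mknod (0x050000 here) must be unique among VGs on node B and is conventionally kept identical to node A's; read it off node A first rather than guessing:

[root@cmsdb1]# ls -l /dev/vgdata/group    # shows major 64 and the minor number to reuse on node B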

Import the map file:

[root@cmsdb2]# vgimport -s -v -m /tmp/vgdata.map /dev/vgdata
Beginning the import process on Volume Group "/dev/vgdata".
Logical volume "/dev/vgdata/system01_4g" has been successfully created
with lv number 1.
Logical volume "/dev/vgdata/sysaux01_4g" has been successfully created
with lv number 2.
Logical volume "/dev/vgdata/undotbs1_4g" has been successfully created
with lv number 3.
Logical volume "/dev/vgdata/undotbs2_4g" has been successfully created
with lv number 4.
Logical volume "/dev/vgdata/temp01_5g" has been successfully created
with lv number 5.
Logical volume "/dev/vgdata/user01_1g" has been successfully created
with lv number 6.
Logical volume "/dev/vgdata/ocr1_512m" has been successfully created
with lv number 7.
Logical volume "/dev/vgdata/ocr2_512m" has been successfully created
with lv number 8.
Logical volume "/dev/vgdata/rlvdata4_1_50g" has been successfully created
vgimport: Volume group "/dev/vgdata" has been successfully created.
Warning: A backup of this volume group may not exist on this machine.
Please remember to take a backup using the vgcfgbackup command after activating the volume group.

 

8. Set LV permissions

Run on both nodes: chown -R oracle:dba /dev/vgdata
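Oracle raw devices usually also need group read/write. If your standard is mode 660 (an assumption; adjust to your site's convention), something like this applies and verifies it:

[root@cmsdb1]# chmod 660 /dev/vgdata/r*    # raw character devices used by the database; repeat on cmsdb2
[root@cmsdb1]# ls -l /dev/vgdata           # everything should now be owned by oracle:dba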

9. Confirm the VGs are deactivated on both nodes, then re-apply the configuration

Deactivate vgdata and vglock on both nodes:

[root@cmsdb1]# vgchange -a n vglock
Volume group "vglock" has been successfully changed.
[root@cmsdb1]# vgchange -a n vgdata
Volume group "vgdata" has been successfully changed.

[root@cmsdb2]# vgchange -a n vglock
Volume group "vglock" has been successfully changed.
[root@cmsdb2]# vgchange -a n vgdata
Volume group "vgdata" has been successfully changed.

If a VG cannot be deactivated, check for remaining Oracle processes and kill them all.
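A sketch of that check (ora_ is the usual Oracle background-process prefix; fuser -ku lists and kills any opener of a device, so use it carefully):

[root@cmsdb1]# ps -ef | grep ora_ | grep -v grep    # any hits are leftover Oracle processes
[root@cmsdb1]# fuser -ku /dev/vgdata/r*             # kills whatever still holds a raw device open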

10. Check the cluster configuration

[root@cmsdb1]# cmcheckconf -v -C /etc/cmcluster/cluster.ascii -P /etc/cmcluster/oracle/pkgctl.ascii
Begin cluster verification...
Checking cluster file: /etc/cmcluster/cluster.ascii
Checking nodes ... Done
Checking existing configuration ... Done
Gathering storage information
Found 3 devices on node cmsdb1
Found 3 devices on node cmsdb2
Analysis of 6 devices should take approximately 1 seconds
0%----10%----20%----30%----40%----50%----60%----70%----80%----90%----100%
Found 3 volume groups on node cmsdb1
Found 3 volume groups on node cmsdb2
Analysis of 6 volume groups should take approximately 1 seconds
0%----10%----20%----30%----40%----50%----60%----70%----80%----90%----100%
Gathering network information
Beginning network probing (this may take a while)
Completed network probing
Cluster cmsdb is an existing cluster
Begin file consistency checking
/etc/nsswitch.conf not found
-rw-r--r--   1 root       root          1585 Mar 31  2010 /etc/cmcluster/cmclfiles2check
-r--r--r--   1 root       root           524 Oct 22  2009 /etc/cmcluster/cmignoretypes.conf
-r--------   1 bin        bin            118 Oct 22  2009 /etc/cmcluster/cmknowncmds
-rw-r--r--   1 root       root           667 Oct 22  2009 /etc/cmcluster/cmnotdisk.conf
-rw-rw-rw-   1 root       sys            628 Nov  9 15:56 /etc/hosts
-r--r--r--   1 bin        bin          12666 Oct 18 21:02 /etc/services
/etc/nsswitch.conf not found
-rw-r--r--   1 root       root          1585 Mar 31  2010 /etc/cmcluster/cmclfiles2check
-r--r--r--   1 root       root           524 Oct 22  2009 /etc/cmcluster/cmignoretypes.conf
-r--------   1 bin        bin            118 Oct 22  2009 /etc/cmcluster/cmknowncmds
-rw-r--r--   1 root       root           667 Oct 22  2009 /etc/cmcluster/cmnotdisk.conf
-rw-rw-rw-   1 root       sys            638 Nov  9 15:57 /etc/hosts
-r--r--r--   1 bin        bin          12666 Sep 28 10:59 /etc/services
cksum: can't open /etc/nsswitch.conf: No such file or directory
1244500118 1585 /etc/cmcluster/cmclfiles2check
424071083 628 /etc/hosts
1544595097 12666 /etc/services
61360265 524 /etc/cmcluster/cmignoretypes.conf
344617849 118 /etc/cmcluster/cmknowncmds
1390752988 667 /etc/cmcluster/cmnotdisk.conf
cksum: can't open /etc/nsswitch.conf: No such file or directory
1244500118 1585 /etc/cmcluster/cmclfiles2check
628945726 638 /etc/hosts
1544595097 12666 /etc/services
61360265 524 /etc/cmcluster/cmignoretypes.conf
344617849 118 /etc/cmcluster/cmknowncmds
1390752988 667 /etc/cmcluster/cmnotdisk.conf
ERROR: /etc/cmcluster/cmclfiles2check permissions could not be checked on nodes cmsdb1 cmsdb2:  <error>
ERROR: /etc/cmcluster/cmclfiles2check owner could not be checked on nodes cmsdb1 cmsdb2:  <error>
ERROR: /etc/cmcluster/cmclfiles2check checksum could not be checked on nodes cmsdb1 cmsdb2:  <error>
/etc/cmcluster/cmclfiles2check is the same across nodes cmsdb1 cmsdb2
ERROR: /etc/hosts permissions could not be checked on nodes cmsdb1 cmsdb2:  <error>
ERROR: /etc/hosts owner could not be checked on nodes cmsdb1 cmsdb2:  <error>
ERROR: /etc/hosts checksum could not be checked on nodes cmsdb1 cmsdb2:  <error>
/etc/hosts is the same across nodes cmsdb1 cmsdb2
ERROR: /etc/nsswitch.conf permissions could not be checked on nodes cmsdb1 cmsdb2:  <error>
ERROR: /etc/nsswitch.conf owner could not be checked on nodes cmsdb1 cmsdb2:  <error>
ERROR: /etc/nsswitch.conf checksum could not be checked on nodes cmsdb1 cmsdb2:  <error>
/etc/nsswitch.conf is the same across nodes cmsdb1 cmsdb2
ERROR: /etc/services permissions could not be checked on nodes cmsdb1 cmsdb2:  <error>
ERROR: /etc/services owner could not be checked on nodes cmsdb1 cmsdb2:  <error>
ERROR: /etc/services checksum could not be checked on nodes cmsdb1 cmsdb2:  <error>
/etc/services is the same across nodes cmsdb1 cmsdb2
ERROR: /etc/cmcluster/cmignoretypes.conf permissions could not be checked on nodes cmsdb1 cmsdb2:  <error>
ERROR: /etc/cmcluster/cmignoretypes.conf owner could not be checked on nodes cmsdb1 cmsdb2:  <error>
ERROR: /etc/cmcluster/cmignoretypes.conf checksum could not be checked on nodes cmsdb1 cmsdb2:  <error>
/etc/cmcluster/cmignoretypes.conf is the same across nodes cmsdb1 cmsdb2
ERROR: /etc/cmcluster/cmknowncmds permissions could not be checked on nodes cmsdb1 cmsdb2:  <error>
ERROR: /etc/cmcluster/cmknowncmds owner could not be checked on nodes cmsdb1 cmsdb2:  <error>
ERROR: /etc/cmcluster/cmknowncmds checksum could not be checked on nodes cmsdb1 cmsdb2:  <error>
/etc/cmcluster/cmknowncmds is the same across nodes cmsdb1 cmsdb2
ERROR: /etc/cmcluster/cmnotdisk.conf permissions could not be checked on nodes cmsdb1 cmsdb2:  <error>
ERROR: /etc/cmcluster/cmnotdisk.conf owner could not be checked on nodes cmsdb1 cmsdb2:  <error>
ERROR: /etc/cmcluster/cmnotdisk.conf checksum could not be checked on nodes cmsdb1 cmsdb2:  <error>
/etc/cmcluster/cmnotdisk.conf is the same across nodes cmsdb1 cmsdb2
Command 'cat /etc/cmcluster/cmclfiles2check | /usr/sbin/cmcompare -W -v -n cmsdb1 -n cmsdb2' exited with status 2
WARNING: Unable to check consistency of all files listed in /etc/cmcluster/cmclfiles2check
/etc/cmcluster/oracle/pkgctl.ascii: A legacy package is being used.
Package oracle already exists. It will be modified.
/etc/cmcluster/oracle/pkgctl.ascii:0: SERVICE_HALT_TIMEOUT value of 0 is equivalent to 1 sec.
Maximum configured packages parameter is 300.
Configuring 1 package(s).
Modifying configuration on node cmsdb1
Modifying configuration on node cmsdb2
Modifying the cluster configuration for cluster cmsdb
Modifying node cmsdb1 in cluster cmsdb
Modifying node cmsdb2 in cluster cmsdb
Modifying the package configuration for package oracle.
cmcheckconf: Verification completed with no errors found.
Use the cmapplyconf command to apply the configuration

 

The ERROR lines above can be ignored; they all come from the file-consistency check failing because /etc/nsswitch.conf is missing on both nodes. Once cmcheckconf reports "Verification completed with no errors found", move on to the next step.
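Optionally, the nsswitch-related noise can be silenced by giving both nodes an /etc/nsswitch.conf. HP-UX ships templates such as /etc/nsswitch.files, so copying one is enough (a cleanup, not a requirement for this procedure):

[root@cmsdb1]# cp /etc/nsswitch.files /etc/nsswitch.conf    # repeat on cmsdb2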

11. Re-apply the configuration

[root@cmsdb1]# cmapplyconf -v -C /etc/cmcluster/cluster.ascii -P /etc/cmcluster/oracle/pkgctl.ascii

As with cmcheckconf above, it will prompt Modify [y/n]; type y and press Enter.
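To confirm the configuration was applied, the running binary configuration can be read back with the standard ServiceGuard tools (the output file name below is arbitrary):

[root@cmsdb1]# cmgetconf -v -c cmsdb /tmp/cluster.check.ascii    # dumps the active cluster configuration
[root@cmsdb1]# cmviewcl -v                                       # package oracle should be listed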

 

12. Start MC, then stop and restart it

[root@cmsdb1]# cmruncl -v

Check whether vgdata and vglock are now active on both nodes:

[root@cmsdb1]# vgdisplay  /dev/vgdata

[root@cmsdb1]# vgdisplay  /dev/vglock

vglock may not be active at this point; ignore that and continue.

Run on both nodes:

[root@cmsdb1]# vgchange -a n vglock

[root@cmsdb1]# vgchange -c y vglock

[root@cmsdb1]# vgchange -a n vgdata

[root@cmsdb1]# vgchange -c y vgdata

If these commands fail, check for and kill any remaining Oracle processes.

Stop MC:

[root@cmsdb1]# cmhaltcl -v -f

Wait about 2 minutes, then restart MC:

[root@cmsdb1]# cmruncl -v

After startup, check that vgdata and vglock are active on both nodes; vglock may again be inactive, which can be ignored.

vgdata should appear as server on one node and client on the other; if it does, continue.
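In shared (cluster-aware) mode, vgdisplay reports the role in its VG Status field; the exact wording may vary slightly by HP-UX release:

[root@cmsdb1]# vgdisplay vgdata | grep "VG Status"    # expect "available, shared, server" on one node
[root@cmsdb2]# vgdisplay vgdata | grep "VG Status"    # expect "available, shared, client" on the other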

13. Start RAC

RAC may start automatically; if it does not, start it by hand (see the sketch below).
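A manual start for a 10g CRS stack like this one would look roughly as follows, run as root on each node (crsctl start crs is the counterpart of the stop command used in step 1):

[root@cmsdb1]# $ORACLE_CRS_HOME/bin/crsctl start crs
[root@cmsdb2]# $ORACLE_CRS_HOME/bin/crsctl start crs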

Finally, verify that the RAC status is normal:

[root@cmsdb1]# ./crs_stat -t
Name           Type           Target    State     Host       
------------------------------------------------------------
ora....B1.lsnr application    ONLINE    ONLINE    cmsdb1     
ora.cmsdb1.gsd application    ONLINE    ONLINE    cmsdb1     
ora.cmsdb1.ons application    ONLINE    ONLINE    cmsdb1     
ora.cmsdb1.vip application    ONLINE    ONLINE    cmsdb1     
ora....B2.lsnr application    ONLINE    ONLINE    cmsdb2     
ora.cmsdb2.gsd application    ONLINE    ONLINE    cmsdb2     
ora.cmsdb2.ons application    ONLINE    ONLINE    cmsdb2     
ora.cmsdb2.vip application    ONLINE    ONLINE    cmsdb2     
ora.ztjc.db    application    ONLINE    ONLINE    cmsdb1     
ora....c1.inst application    ONLINE    ONLINE    cmsdb1     
ora....c2.inst application    ONLINE    ONLINE    cmsdb2     
[root@cmsdb1]#

[root@cmsdb2]# ./crs_stat -t
Name           Type           Target    State     Host       
------------------------------------------------------------
ora....B1.lsnr application    ONLINE    ONLINE    cmsdb1     
ora.cmsdb1.gsd application    ONLINE    ONLINE    cmsdb1     
ora.cmsdb1.ons application    ONLINE    ONLINE    cmsdb1     
ora.cmsdb1.vip application    ONLINE    ONLINE    cmsdb1     
ora....B2.lsnr application    ONLINE    ONLINE    cmsdb2     
ora.cmsdb2.gsd application    ONLINE    ONLINE    cmsdb2     
ora.cmsdb2.ons application    ONLINE    ONLINE    cmsdb2     
ora.cmsdb2.vip application    ONLINE    ONLINE    cmsdb2     
ora.ztjc.db    application    ONLINE    ONLINE    cmsdb1     
ora....c1.inst application    ONLINE    ONLINE    cmsdb1     
ora....c2.inst application    ONLINE    ONLINE    cmsdb2     
[root@cmsdb2]#

 

 

This completes the procedure.
