Data Migration [Storage Replacement]

A customer's RAC database, about 4 TB in total, had to be moved from its old data center to a new one. The physical move brought many configuration changes, including new host IP addresses and an ASM data migration driven by a storage upgrade. In addition, equipment damaged during the move left the original OCR and VOTEDISK devices unreadable, so the OCR and VOTEDISK had to be recovered as well. This case study briefly walks through the whole relocation and will hopefully be useful to anyone facing a similar task.

 

2.1  Database Environment

Database: Oracle 10g RAC, sxx1 (node1) + sxx2 (node2), ASM, CRS, RDBMS 10.2.0.4

Operating system: Solaris 10

2.2  Problems and Solutions

Recover the lost OCR and voting disks (VD)

Change the IP addresses, including the PUBLIC IP and PRIVATE IP

Migrate the ASM data from the low-end SUN 2540 storage to the high-end SUN 9990 storage, which involves moving the SYSTEM tablespace, the other tablespaces, the controlfiles and the spfile, and re-pointing the archivelog and flashlog (flash recovery area) locations

 

1.  Recover the lost OCR and VD

Prepare five disks for recovering the OCR and VD; the newly added devices are the 680-cylinder disks listed below:

# format

Searching for disks...done

AVAILABLE DISK SELECTIONS:

      0. c1t0d0 <SUN72G cyl 14087 alt 2 hd 24 sec 424>

         /ssm@0,0/pci@18,600000/scsi@2/sd@0,0

      1. c1t1d0 <SUN72G cyl 14087 alt 2 hd 24 sec 424>

         /ssm@0,0/pci@18,600000/scsi@2/sd@1,0

      2. c5t600A0B80005AD8BC0000020000000000d0 <HITACHI-OPEN-V-SUN-6006 cyl 680 alt 2 hd 15 sec 512>

         /scsi_vhci/ssd@g600a0b80005ad8bc000002bf4cf1a5ca

      3. c5t600A0B80005AD8BC0000020000000001d0 <HITACHI-OPEN-V-SUN-6006 cyl 680 alt 2 hd 15 sec 512>

         /scsi_vhci/ssd@g600a0b80005ad8bc000002c14cf1a608

      4. c5t600A0B80005AD8BC0000020000000003d0 <HITACHI-OPEN-V-SUN-6006 cyl 680 alt 2 hd 15 sec 512>

         /scsi_vhci/ssd@g600a0b80005ad8bc000002c24cf1aa19

      5. c5t600A0B80005AD8BC0000020000000004d0 <HITACHI-OPEN-V-SUN-6006 cyl 680 alt 2 hd 15 sec 512>

         /scsi_vhci/ssd@g600a0b80005ad8bc000002c34cf1aa58

      6. c5t600A0B80005AD8BC0000020000000005d0 <HITACHI-OPEN-V-SUN-6006 cyl 680 alt 2 hd 15 sec 512>

         /scsi_vhci/ssd@g600a0b80005ad8bc000002c44cf1ab2c

On one node, initialize the disks with dd:

For OCR--

# dd if=/dev/zero of=/dev/rdsk/c5t600A0B80005AD8BC0000020000000000d0s7 bs=20971520 count=1

1+0 records in

1+0 records out

# dd if=/dev/zero of=/dev/rdsk/c5t600A0B80005AD8BC0000020000000001d0s7 bs=20971520 count=1

1+0 records in

1+0 records out

 

For VD--

# dd if=/dev/zero of=/dev/rdsk/c5t600A0B80005AD8BC0000020000000003d0s7 bs=20971520 count=1

1+0 records in

1+0 records out

# dd if=/dev/zero of=/dev/rdsk/c5t600A0B80005AD8BC0000020000000004d0s7 bs=20971520 count=1

1+0 records in

1+0 records out

# dd if=/dev/zero of=/dev/rdsk/c5t600A0B80005AD8BC0000020000000005d0s7 bs=20971520 count=1

1+0 records in

1+0 records out

Set ownership and permissions on the disks on all nodes:

For OCR--

# chown root:dba /dev/rdsk/c5t600A0B80005AD8BC0000020000000000d0s7

# chmod 660 /dev/rdsk/c5t600A0B80005AD8BC0000020000000000d0s7

# chown root:dba /dev/rdsk/c5t600A0B80005AD8BC0000020000000001d0s7

# chmod 660 /dev/rdsk/c5t600A0B80005AD8BC0000020000000001d0s7

For VD--

# chown oracle:dba /dev/rdsk/c5t600A0B80005AD8BC0000020000000003d0s7

# chmod 660 /dev/rdsk/c5t600A0B80005AD8BC0000020000000003d0s7

# chown oracle:dba /dev/rdsk/c5t600A0B80005AD8BC0000020000000004d0s7

# chmod 660 /dev/rdsk/c5t600A0B80005AD8BC0000020000000004d0s7

# chown oracle:dba /dev/rdsk/c5t600A0B80005AD8BC0000020000000005d0s7

# chmod 660 /dev/rdsk/c5t600A0B80005AD8BC0000020000000005d0s7

 

 [Recovering the OCR]

Check the OCR status:

# /oracle/product/10g/crs/bin/ocrcheck            

PROT-601: Failed to initialize ocrcheck

 

Try to replace the original OCR disks; this fails:

# /oracle/product/10g/crs/bin/ocrconfig -replace ocr '/dev/rdsk/c5t600A0B80005AD8BC0000020000000000d0s7'

# /oracle/product/10g/crs/bin/ocrconfig -replace ocrmirror '/dev/rdsk/c5t600A0B80005AD8BC0000020000000001d0s7'

PROT-1: Failed to initialize ocrconfig

 

Use the -repair option to point the OCR configuration at the new disks:

# /oracle/product/10g/crs/bin/ocrconfig -repair ocr '/dev/rdsk/c5t600A0B80005AD8BC0000020000000000d0s7'

# /oracle/product/10g/crs/bin/ocrconfig -repair ocrmirror '/dev/rdsk/c5t600A0B80005AD8BC0000020000000001d0s7'

 

Restore an automatic OCR backup onto the new OCR disks:

# /oracle/product/10g/crs/bin/ocrconfig -restore '/oracle/product/10g/crs/cdata/crs/backup00.ocr'

Note that the new disks must be at least as large as the original ones; otherwise the restore fails with PROT-22: Storage too small, which is caused by an Oracle bug.
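As a quick sanity check before the restore (an illustrative step, not part of the original procedure), the size of slice 7 on the old and new devices can be compared with prtvtoc; the Sector Count reported for partition 7, multiplied by the 512-byte sector size, gives the usable slice size:

# prtvtoc /dev/rdsk/c5t600A0B80005AD8BC0000020000000000d0s7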

 

Verify the OCR status after the restore:

# /oracle/product/10g/crs/bin/ocrcheck

Status of Oracle Cluster Registry is as follows :

        Version                  :          2

        Total space (kbytes)     :    1048764

        Used space (kbytes)      :       3828

        Available space (kbytes) :    1044936

        ID                       : 1649452169

        Device/File Name         : /dev/rdsk/c5t600A0B80005AD8BC0000020000000000d0s7

                                   Device/File integrity check succeeded

        Device/File Name         : /dev/rdsk/c5t600A0B80005AD8BC0000020000000001d0s7

                                   Device/File integrity check succeeded

        Cluster registry integrity check succeeded

Confirm it in the CRS alert log:

# more alertsxx1.log

......

2010-11-28 14:48:25.932

[client(20589)]CRS-1002:The OCR was restored from /oracle/product/10g/crs/cdata/crs/backup00.ocr.

Confirm it in the ocrconfig log:

# more /oracle/product/10g/crs/log/sxx1/client/ocrconfig_20589.log

Oracle Database 10g CRS Release 10.2.0.4.0 Production Copyright 1996, 2008 Oracle. All rights reserved.

2010-11-28 14:48:23.166: [OCRCONF][1]ocrconfig starts...

2010-11-28 14:48:23.215: [  OCRRAW][1]propriowv_bootbuf: Vote information on disk 0

[/dev/rdsk/c5t600A0B80005AD8BC0000020000000000d0s7] is adjusted from [0/0] to [1/2]

2010-11-28 14:48:23.218: [  OCRRAW][1]propriowv_bootbuf: Vote information on disk 1

[/dev/rdsk/c5t600A0B80005AD8BC0000020000000001d0s7] is adjusted from [0/0] to [1/2]

2010-11-28 14:48:25.932: [OCRCONF][1]Successfully restored OCR and set block 0

2010-11-28 14:48:25.932: [OCRCONF][1]Exiting [status=success]...

Start CRS:

# /etc/init.d/init.crs start

Startup will be queued to init within 30 seconds.

After about 30 seconds, check whether the CRS processes have started:

# ps -ef | grep d.bin

   root 20665  1840   0 14:50:01 pts/1       0:00 grep d.bin

Check the CRS status:

# ./crsctl check crs

Failure 1 contacting CSS daemon

Cannot communicate with CRS

Cannot communicate with EVM 

 

Check the system log:

# dmesg 

......

Nov 28 15:12:31 sxx1 root: [ID 702911 user.error] Cluster Ready Services waiting on dependencies. Diagnostics in /tmp/crsctl.1753.

Nov 28 15:12:31 sxx1 root: [ID 702911 user.error] Cluster Ready Services waiting on dependencies. Diagnostics in /tmp/crsctl.1804.

Nov 28 15:12:31 sxx1 root: [ID 702911 user.error] Cluster Ready Services waiting on dependencies. Diagnostics in /tmp/crsctl.1865.

Nov 28 15:13:31 sxx1 root: [ID 702911 user.error] Cluster Ready Services waiting on dependencies. Diagnostics in /tmp/crsctl.1753.

Nov 28 15:13:31 sxx1 root: [ID 702911 user.error] Cluster Ready Services waiting on dependencies. Diagnostics in /tmp/crsctl.1804.

Nov 28 15:13:32 sxx1 root: [ID 702911 user.error] Cluster Ready Services waiting on dependencies. Diagnostics in /tmp/crsctl.1865.

Check the errors in the CRS startup diagnostics file:

# vi /tmp/crsctl.1865

Failure -2 opening file handle for (c5t50060E8000000000000050D60000001Cd0s7)

Failure 1 checking the CSS voting disk 'c5t50060E8000000000000050D60000001Cd0s7'.

Failure -2 opening file handle for (c5t50060E8000000000000050D60000001Dd0s7)

Failure 1 checking the CSS voting disk 'c5t50060E8000000000000050D60000001Dd0s7'.

Failure -2 opening file handle for (c5t50060E8000000000000050D60000002Ad0s7)

Failure 1 checking the CSS voting disk 'c5t50060E8000000000000050D60000002Ad0s7'.

Not able to read adequate number of voting disks

[Recovering the VOTEDISK]

Query the voting disk configuration:

# ./crsctl query css votedisk                                                        

 0.    0    /dev/rdsk/c5t50060E8000000000000050D60000001Cd0s7

 1.    0   /dev/rdsk/c5t50060E8000000000000050D60000001Dd0s7

 2.    0   /dev/rdsk/c5t50060E8000000000000050D60000002Ad0s7

 

located 3 votedisk(s).

Add the new voting disks:

# ./crsctl add css votedisk '/dev/rdsk/c5t600A0B80005AD8BC0000020000000003d0s7'

Cluster is not in a ready state for online disk addition

Add the voting disks again with the -force option:

# ./crsctl add css votedisk '/dev/rdsk/c5t600A0B80005AD8BC0000020000000003d0s7' -force

Now formatting voting disk: /dev/rdsk/c5t600A0B80005AD8BC0000020000000003d0s7

successful addition of votedisk /dev/rdsk/c5t600A0B80005AD8BC0000020000000003d0s7.

# ./crsctl add css votedisk '/dev/rdsk/c5t600A0B80005AD8BC0000020000000004d0s7' -force

Now formatting voting disk: /dev/rdsk/c5t600A0B80005AD8BC0000020000000004d0s7

successful addition of votedisk /dev/rdsk/c5t600A0B80005AD8BC0000020000000004d0s7.

# ./crsctl add css votedisk '/dev/rdsk/c5t600A0B80005AD8BC0000020000000005d0s7' -force

Now formatting voting disk: /dev/rdsk/c5t600A0B80005AD8BC0000020000000005d0s7

successful addition of votedisk /dev/rdsk/c5t600A0B80005AD8BC0000020000000005d0s7.

Delete the old, lost voting disks:

# ./crsctl delete css votedisk '/dev/rdsk/c5t50060E8000000000000050D60000001Cd0s7' -force

successful deletion of votedisk /dev/rdsk/c5t50060E8000000000000050D60000001Cd0s7.

# ./crsctl delete css votedisk '/dev/rdsk/c5t50060E8000000000000050D60000001Dd0s7' -force

successful deletion of votedisk /dev/rdsk/c5t50060E8000000000000050D60000001Dd0s7.

# ./crsctl delete css votedisk '/dev/rdsk/c5t50060E8000000000000050D60000002Ad0s7' -force

successful deletion of votedisk /dev/rdsk/c5t50060E8000000000000050D60000002Ad0s7.

 

Query the voting disk configuration again:

# ./crsctl query css votedisk

 0.    0   /dev/rdsk/c5t600A0B80005AD8BC0000020000000003d0s7

 1.    0    /dev/rdsk/c5t600A0B80005AD8BC0000020000000004d0s7

 2.    0   /dev/rdsk/c5t600A0B80005AD8BC0000020000000005d0s7

 

located 3 votedisk(s).

Start CRS:

# /etc/init.d/init.crs start

Startup will be queued to init within 30 seconds.

After about 30 seconds, check whether the CRS processes have started:

# ps -ef | grep d.bin

   root  2911   654  0 15:40:43 ?           0:04 /oracle/product/10g/crs/bin/crsd.bin reboot

   root  3143  2926  0 15:40:44 ?           0:00 /oracle/product/10g/crs/bin/oprocd.bin run -t 1000 -m 500 -f

 oracle  3225  2970  0 15:40:46 ?           0:01 /oracle/product/10g/crs/bin/ocssd.bin

 oracle  2855   652  0 15:40:42 ?           0:01 /oracle/product/10g/crs/bin/evmd.bin

   root  3857  1120  0 15:41:34 pts/1       0:00 grep d.bin

Check the CRS status:

# ./crsctl check crs

CSS appears healthy

CRS appears healthy

EVM appears healthy

Check the system log:

# dmesg 

......

Nov 28 15:40:42 sxx1 root: [ID 702911 user.error] Cluster Ready Services completed waiting on dependencies.

Nov 28 15:40:42 sxx1 last message repeated 2 times

Nov 28 15:40:42 sxx1 root: [ID 702911 user.error] Oracle CSS Family monitor starting.

Nov 28 15:40:42 sxx1 root: [ID 702911 user.error] Running CRSD with TZ = PRC

Nov 28 15:40:43 sxx1 root: [ID 702911 user.error] Oracle CSS restart. 0, 1

Nov 28 15:40:53 sxx1 root: [ID 702911 user.error] Oracle Cluster Ready Services starting by user request.

Check the CRS resource status:

$ crs_stat -t

Name           Type           Target    State    Host        

------------------------------------------------------------ 

ora....C1.inst application    ONLINE   OFFLINE               

ora....C2.inst application    ONLINE   OFFLINE               

ora.SXX.db     application    ONLINE   OFFLINE               

ora....SM1.asm application    ONLINE   ONLINE    sxx1     

ora....C1.lsnr application    ONLINE   OFFLINE               

ora....ac1.gsd application    ONLINE   OFFLINE               

ora....ac1.ons application    ONLINE   ONLINE    sxx1     

ora....ac1.vip application    ONLINE   ONLINE    sxx1     

ora....SM2.asm application    ONLINE   ONLINE    sxx2     

ora....C2.lsnr application    ONLINE   OFFLINE               

ora....ac2.gsd application    ONLINE   OFFLINE               

ora....ac2.ons application    ONLINE   ONLINE    sxx2     

ora....ac2.vip application    ONLINE   ONLINE    sxx2   

2.  Change the IP addresses, including the PUBLIC IP and PRIVATE IP

On every node, update the corresponding IP addresses in the following operating system (SunOS 5.10) files (an illustrative example of the netmasks and ipnodes entries follows the list):

/etc/inet/hosts

/etc/inet/netmasks

/etc/inet/ipnodes
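A minimal sketch of the matching entries for the new public subnet (based on the 192.168.100.0/24 network used below; the exact values are assumptions to be adapted to the real environment):

/etc/inet/netmasks:

192.168.100.0   255.255.255.0

/etc/inet/ipnodes (Solaris 10 also resolves host names from this file; keep it consistent with /etc/inet/hosts):

192.168.100.11  sxx1

192.168.100.12  sxx2

192.168.100.13  sxx1-vip

192.168.100.14  sxx2-vip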

Check the hosts file configuration:

$ cat /etc/hosts

 

127.0.0.1       localhost

 

192.168.100.11  sxx1        loghost

192.168.100.12  sxx2

 

100.1.1.1       sxx1-priv

100.1.1.2       sxx2-priv

 

192.168.100.13  sxx1-vip

192.168.100.14  sxx2-vip

Change the current IP address settings:

# ifconfig -a

lo0: flags=2001000849<UP,LOOPBACK,RUNNING,MULTICAST,IPv4,VIRTUAL> mtu 8232 index 1

        inet 127.0.0.1 netmask ff000000

ce0: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 2

        inet 192.168.100.11 netmask ffffff00 broadcast 192.168.100.255

        ether 0:3:ba:92:e1:30

ce0:1: flags=1040843<UP,BROADCAST,RUNNING,MULTICAST,DEPRECATED,IPv4> mtu 1500 index 2

        inet 192.168.1.177 netmask ffffff00 broadcast 192.168.1.255

ce2: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 3

        inet 100.1.1.1 netmask ff000000 broadcast 100.255.255.255

        ether 0:3:ba:81:27:1d

# ifconfig ce0:1 unplumb

# ifconfig ce0:1 plumb

# ifconfig ce0:1 192.168.100.13 netmask 255.255.255.0 broadcast 192.168.100.255 up

On the other node:

# ifconfig ce0:1 unplumb

# ifconfig ce0:1 plumb

# ifconfig ce0:1 192.168.100.14 netmask 255.255.255.0 broadcast 192.168.100.255 up

 

Update the corresponding IP addresses in the listener.ora and tnsnames.ora files on all nodes (an illustrative sketch follows the commands):

# vi /oracle/product/10g/db_1/network/admin/listener.ora

# vi /oracle/product/10g/db_1/network/admin/tnsnames.ora
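The exact entries depend on how the listener and TNS aliases were originally generated; as an illustrative sketch only (the listener name, alias name, and port are assumptions), the TCP addresses are re-pointed at the new public IP and the VIP host names, roughly like this:

LISTENER_SXX1 =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = sxx1-vip)(PORT = 1521)(IP = FIRST))
    (ADDRESS = (PROTOCOL = TCP)(HOST = 192.168.100.11)(PORT = 1521)(IP = FIRST))
  )

SXX =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = sxx1-vip)(PORT = 1521))
    (ADDRESS = (PROTOCOL = TCP)(HOST = sxx2-vip)(PORT = 1521))
    (LOAD_BALANCE = yes)
    (CONNECT_DATA = (SERVER = DEDICATED)(SERVICE_NAME = SXX))
  )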

Log in to the database and point LOCAL_LISTENER at the new address:

SQL> ALTER SYSTEM SET LOCAL_LISTENER = '(ADDRESS = (PROTOCOL = TCP)(HOST = 192.168.100.13)(PORT = 1521))' SID = 'SXX1';

System altered.

On the other node:

SQL> ALTER SYSTEM SET LOCAL_LISTENER = '(ADDRESS = (PROTOCOL = TCP)(HOST = 192.168.100.14)(PORT = 1521))' SID = 'SXX2';

System altered.

Get the current network interface configuration registered in the OCR:

$ oifcfg getif

ce0 192.168.1.0  global  public

ce2 100.1.1.0  global  cluster_interconnect

Delete the old PUBLIC interface configuration:

$ oifcfg delif -global ce0

$ oifcfg getif

ce2 100.1.1.0  global  cluster_interconnect

Set the new PUBLIC interface configuration:

$ oifcfg setif -global ce0/192.168.100.0:public

$ oifcfg getif

ce0 192.168.100.0  global  public

ce2 100.1.1.0   global  cluster_interconnect

The nodeapps (VIP) configuration must be updated for each node; this step is essential:

# /oracle/product/10g/crs/bin/srvctl modify nodeapps -n sxx1 -A 192.168.100.13/255.255.255.0/ce0

On the other node:

# /oracle/product/10g/crs/bin/srvctl modify nodeapps -n sxx2 -A 192.168.100.14/255.255.255.0/ce0

Verify that the VIP configuration is now correct:

$ srvctl config nodeapps -n sxx1 -a

VIP exists.:/sxx1-vip/192.168.100.13/255.255.255.0/ce0

The vipca tool can also be used for verification:

# xhost +

access control disabled, clients can connect from any host

# /oracle/product/10g/crs/bin/vipca

 

Restart the related resources:

$ srvctl stop instance -d SXX -i SXX1

$ srvctl stop asm -n sxx1

$ srvctl stop nodeapps -n sxx1

$ srvctl start nodeapps -n sxx1

$ srvctl start asm -n sxx1

$ srvctl start instance -d SXX -i SXX1

On the other node:

$ srvctl stop instance -d SXX -i SXX2

$ srvctl stop asm -n sxx2

$ srvctl stop nodeapps -n sxx2

$ srvctl start nodeapps -n sxx2

$ srvctl start asm -n sxx2

$ srvctl start instance -d SXX -i SXX2

Check the status of the CRS resources:

$ crs_stat -t

Name           Type           Target    State    Host        

------------------------------------------------------------

ora....C1.inst application    ONLINE   ONLINE    sxx1     

ora....C2.inst application    ONLINE   ONLINE    sxx2     

ora.SXX.db     application    ONLINE   ONLINE    sxx2     

ora....SM1.asm application    ONLINE   ONLINE    sxx1     

ora....C1.lsnr application    ONLINE   ONLINE    sxx1     

ora....ac1.gsd application    ONLINE   ONLINE    sxx1     

ora....ac1.ons application    ONLINE   ONLINE    sxx1     

ora....ac1.vip application    ONLINE   ONLINE    sxx1     

ora....SM2.asm application    ONLINE   ONLINE    sxx2     

ora....C2.lsnr application    ONLINE   ONLINE    sxx2     

ora....ac2.gsd application    ONLINE   ONLINE    sxx2     

ora....ac2.ons application    ONLINE   ONLINE    sxx2     

ora....ac2.vip application    ONLINE   ONLINE    sxx2 

 

3.  Migrate the ASM data from the low-end SUN 2540 storage to the high-end SUN 9990 storage: move the SYSTEM tablespace, the other tablespaces, the controlfiles and the spfile, and re-point the archivelog and flash recovery area locations

The DG_DATA, DG_ARCH, DG_REDO1, DG_REDO2, and DG_INIT disk groups were created in ASM on the new storage beforehand (DG_FLSH, used below for the flash recovery area, was created in the same way); a rough sketch of the disk group creation follows.

Check the tablespaces and datafiles:

$ rman target / nocatalog

RMAN> report schema;

Report of database schema

List of Permanent Datafiles

===========================

File Size(MB) Tablespace           RB segs Datafile Name

---- -------- -------------------- -------------------------------

1   610      SYSTEM               ***    +DG_DATA_OLD/sxx/datafile/system.257.688141191

2   4000     TBS_SXX              ***    +DG_DATA_OLD/sxx/datafile/tbs_sxx01.256.689771911

3   10610    SYSAUX               ***    +DG_DATA_OLD/sxx/datafile/sysaux.262.688146467

...... (output omitted)

126 52700    TBS_SXX              ***     +DG_DATA_OLD/sxx/datafile/tbs_sxx125.399.722454027

 

[Migrating the SYSTEM tablespace]

system表空间的数据文件offline

$ sqlplus "/as sysdba"

SQL> shutdown immediate;

SQL> startup mount;

SQL> select file_id, status from dba_data_files;

SQL> alter database datafile '+DG_DATA_OLD/sxx/datafile/system.257.688141191' offline;

Copy the SYSTEM datafile to the new location, switch the datafile pointer to the copy, recover it, and finally bring it online:

$ rman target / nocatalog

RMAN> copy datafile '+DG_DATA_OLD/sxx/datafile/system.257.688141191' to '+DG_DATA';

RMAN> switch datafile 1 to copy;

RMAN> sql "alter database datafile 1 online";

RMAN> recover datafile 1;

RMAN> sql "alter database datafile 1 online";

Delete the old SYSTEM datafile from the old ASM disk group:

$ export ORACLE_SID=+ASM1

$ sqlplus "/as sysdba"

SQL> alter diskgroup DG_DATA_OLD drop file '+DG_DATA_OLD/sxx/datafile/system.257.688141191';

 

[Migrating the other tablespaces]

Take all datafiles except SYSTEM offline:

$ sqlplus "/as sysdba"

SQL> shutdown immediate;

SQL> startup mount;

SQL> alter database datafile '+DG_DATA_OLD/sxx/datafile/tbs_sxx01.256.689771911' offline;

SQL> alter database datafile '+DG_DATA_OLD/sxx/datafile/sysaux.262.688146467' offline;

...... (output omitted)

SQL> alter database datafile '+DG_DATA_OLD/sxx/datafile/tbs_sxx125.399.722454027' offline;

Copy the datafiles to the new location:

$ rman target / nocatalog

RMAN> copy datafile '+DG_DATA_OLD/sxx/datafile/tbs_sxx01.256.689771911' to '+DG_DATA';

RMAN> copy datafile '+DG_DATA_OLD/sxx/datafile/sysaux.262.688146467' to '+DG_DATA';

...... (output omitted)

RMAN> copy datafile '+DG_DATA_OLD/sxx/datafile/tbs_sxx125.399.722454027' to '+DG_DATA';

Switch the datafile pointers to the new copies:

RMAN> switch datafile 2 to copy;

RMAN> switch datafile 3 to copy;

...... (output omitted)

RMAN> switch datafile 126 to copy;

Bring all datafiles except SYSTEM back online:

$ sqlplus "/as sysdba"

SQL> alter database datafile 2 online;

SQL> alter database datafile 3 online;

...... (output omitted)

SQL> alter database datafile 126 online;

Delete the old datafiles from the old ASM disk group:

$ export ORACLE_SID=+ASM1

$ sqlplus "/as sysdba"

SQL> alter diskgroup DG_DATA_OLD drop file '+DG_DATA_OLD/sxx/datafile/tbs_sxx01.256.689771911';

SQL> alter diskgroup DG_DATA_OLD drop file '+DG_DATA_OLD/sxx/datafile/sysaux.262.688146467';

...... (output omitted)

SQL> alter diskgroup DG_DATA_OLD drop file '+DG_DATA_OLD/sxx/datafile/tbs_sxx125.399.722454027';

 

[Relocating the controlfiles]

Back up the controlfile:

$ sqlplus "/as sysdba"

SQL> create or replace directory D2 as '+DG_INIT/sxx/controlfile';

SQL> create or replace directory D3 as '+DG_INIT_OLD/sxx/controlfile';

SQL> exec dbms_file_transfer.copy_file('D3', 'control01.ctl', 'D2', 'control01.ctl');

SQL> shutdown immediate;

Migrate the controlfile:

$ sqlplus "/as sysdba"

SQL> startup mount;

SQL> alter database backup controlfile to '+DG_INIT/sxx/controlfile/control01.ctl';

SQL> alter system set control_files='+DG_INIT/sxx/controlfile/control01.ctl' scope=spfile;

SQL> shutdown immediate;

Add redundant controlfile copies:

$ sqlplus "/as sysdba"

SQL> startup mount;

SQL> alter database backup controlfile to '+DG_INIT/sxx/controlfile/control02.ctl';

SQL> alter database backup controlfile to '+DG_INIT/sxx/controlfile/control03.ctl';

SQL> alter system set control_files='+DG_INIT/sxx/controlfile/control01.ctl','+DG_INIT/sxx/controlfile/control02.ctl','+DG_INIT/sxx/controlfile/control03.ctl' scope=spfile;

SQL> shutdown immediate;

SQL> startup;

 

 [Relocating the spfile]

Copy the spfile to the new location:

$ sqlplus "/as sysdba"

SQL> create pfile='/export/home/oracle/init_temp.ora' from spfile='+DG_INIT_OLD/sxx/spfilerac.ora';

SQL> create spfile='+DG_INIT/sxx/spfilerac.ora' from pfile='/export/home/oracle/init_temp.ora';

Update the spfile pointer in the local pfile and in the pfiles on the other node:

$ echo "SPFILE='+DG_INIT/sxx/spfilerac.ora'" > /oracle/product/10g/db_1/dbs/initSXX1.ora

$ ssh sxx2 "echo \"SPFILE='+DG_INIT/sxx/spfilerac.ora'\" > /oracle/product/10g/db_1/dbs/initSXX2.ora"

Register the new spfile location in the OCR:

$ srvctl modify database -d SXX -p +DG_INIT/sxx/spfilerac.ora

Restart the database on all nodes so the new spfile location takes effect:

$ srvctl stop database -d SXX

$ srvctl start database -d SXX

Restart the services:

$ srvctl start service -d SXX

Delete the old spfile from ASM:

$ export ORACLE_SID=+ASM1

$ sqlplus "/as sysdba"

SQL> ALTER DISKGROUP DG_INIT_OLD DROP FILE '+DG_INIT_OLD/sxx/spfilerac.ora';

 

[Relocating the archivelog and flashlog (flash recovery area) destinations]

$ sqlplus "/as sysdba"

SQL> alter system set log_archive_dest_1='LOCATION=+DG_ARCH/' scope=both;

SQL> alter system set db_recovery_file_dest_size=200G scope=both;

SQL> alter system set db_recovery_file_dest="+DG_FLSH" scope=both;

SQL> alter system set db_create_file_dest="+DG_DATA" scope=both;

SQL> alter system set db_create_online_log_dest_1="+DG_REDO1" scope=both;

SQL> shutdown immediate;

SQL> startup;

 

[Dropping the disk groups on the old storage]

$ export ORACLE_SID=+ASM1

$ sqlplus "/as sysdba"

SQL> drop diskgroup DG_DATA_OLD;

SQL> drop diskgroup DG_ARCH_OLD;

SQL> drop diskgroup DG_FLSH_OLD;

SQL> drop diskgroup DG_INIT_OLD;

SQL> drop diskgroup DG_REDO1_OLD;

SQL> drop diskgroup DG_REDO2_OLD;

Note: before dropping the old disk groups, the online redo logs must first be rebuilt in the new disk groups and switched over (see the sketch below), and every remaining file on the old disk groups must be removed.
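A minimal sketch of the redo log rebuild, assuming two new groups per thread in DG_REDO1/DG_REDO2 with 200M members (group numbers, sizes, and member counts are assumptions, not the actual configuration):

$ sqlplus "/as sysdba"

SQL> alter database add logfile thread 1 group 11 ('+DG_REDO1','+DG_REDO2') size 200M;

SQL> alter database add logfile thread 1 group 12 ('+DG_REDO1','+DG_REDO2') size 200M;

SQL> alter database add logfile thread 2 group 21 ('+DG_REDO1','+DG_REDO2') size 200M;

SQL> alter database add logfile thread 2 group 22 ('+DG_REDO1','+DG_REDO2') size 200M;

SQL> alter system switch logfile;

SQL> alter system checkpoint;

SQL> select group#, thread#, status from v$log;

Once an old group shows INACTIVE it can be dropped, for example:

SQL> alter database drop logfile group 1;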

 

Confirm that the old disk groups are gone and only the new disk groups remain:

$ asmcmd

ASMCMD> lsdg

State    Type    Rebal  Unbal  Sector  Block       AU  Total_MB  Free_MB  Req_mir_free_MB  Usable_file_MB  Offline_disks  Name
MOUNTED  EXTERN  N      N         512   4096  1048576    824235   796059                0          796059              0  DG_ARCH/
MOUNTED  EXTERN  N      N         512   4096  1048576   8242350  3714245                0         3714245              0  DG_DATA/
MOUNTED  EXTERN  N      N         512   4096  1048576    824235   823910                0          823910              0  DG_FLSH/
MOUNTED  NORMAL  N      N         512   4096  1048576     19454    19114                0            9557              0  DG_INIT/
MOUNTED  NORMAL  N      N         512   4096  1048576     19454     9166                0            4583              0  DG_REDO1/
MOUNTED  NORMAL  N      N         512   4096  1048576     19454    16236                0            8118              0  DG_REDO2/

 

 

At this point the main migration work is essentially complete. Follow-up tasks such as rebuilding the UNDO and TEMP tablespaces are still required, but they are not described step by step here; a rough sketch is given below.
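As a rough sketch of those follow-up steps (the tablespace names, sizes, and the use of one undo tablespace per RAC instance are assumptions about the original configuration):

$ sqlplus "/as sysdba"

SQL> create undo tablespace UNDOTBS1_NEW datafile '+DG_DATA' size 10G;

SQL> alter system set undo_tablespace=UNDOTBS1_NEW sid='SXX1';

SQL> drop tablespace UNDOTBS1 including contents and datafiles;

SQL> create temporary tablespace TEMP_NEW tempfile '+DG_DATA' size 10G;

SQL> alter database default temporary tablespace TEMP_NEW;

SQL> drop tablespace TEMP including contents and datafiles;

Repeat the undo steps for the second instance (for example UNDOTBS2 with sid='SXX2') before dropping its old undo tablespace.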
