Steps to Replace ASM Disks in a Data Guard Environment

For the purposes of this document, the following fictitious environment is used as an example to describe the procedure: 

Device Names: /dev/sdf, /dev/sdg, /dev/sdg1, /dev/lizardr5/
Directory Name: /dev/oracleasm/ and its sub-directories
Diskgroup Names: DATA01,DATA02,FRA01,FRA02,BACK01
Database Names: stdby10
Databases:
stdby10_gal - Primary database
stdby10_inv - Physical standby database
stdbylog_inv - Logical standby database

************

To add a new SAN to the primary and standby sites, present this new storage to ASM at each site and replace the storage currently allocated to the ASM disk groups with the new storage.

There is no impact to the primary and standby sites while this is performed, once the new storage is made available to the primary and standby nodes.

Assumptions:

1. The new storage can be presented to the primary and standby nodes at the same time the old storage is in place, that is both new and old can co-exist.
2. The diskgroup names in place at the primary and standby sites are going to remain the same.
3. There is no requirement to have the old storage remain in place in the environment and a second copy of the primary and standby database kept on this storage. 


Overview of tasks performed:

This scenario relies on ASM to add the new storage to the existing diskgroups and drop the old storage. There is no downtime or impact on the primary and standby sites in any shape or form, other than the additional I/O performed while the ASM rebalance shifts data from the old storage to the new during the ALTER DISKGROUP command.

The configuration is as follows:

64 bit Linux, Oracle 11.1.0.7 ASM, Oracle 10.2.0.4 Primary and Standby
ASM, Oracle Managed Files, 2 existing diskgroups DATA01, FRA01
Openfiler
iSCSI
1 Primary
2 Standby, 1 physical and 1 logical

SOLUTION

Steps performed to add the storage:

1. Added new LUNs to Openfiler to present via iSCSI to the primary and standby nodes.

4 /dev/lizardr5/asmdata02 write-thru wJJHvh-X3xO-oTTS wJJHvh-X3xO-oTTS blockio
5 /dev/lizardr5/asmfra02 write-thru nk7VdO-AbWG-Jsn4 nk7VdO-AbWG-Jsn4 blockio

2. Forced iSCSI on each node to pick up the new LUNs
 

[root]# iscsi-rescan

[root]# iscsi-ls -l
..
.
LUN ID : 4
Vendor: OPNFILER Model: VIRTUAL-DISK Rev: 0
Type: Direct-Access ANSI SCSI revision: 04
page83 type1: 4f504e46494c4500774a4a4876682d5833784f2d6f545453
page80: 0a
Device: /dev/sdf
LUN ID : 5
Vendor: OPNFILER Model: VIRTUAL-DISK Rev: 0
Type: Direct-Access ANSI SCSI revision: 04
page83 type1: 4f504e46494c45006e6b3756644f2d416257472d4a736e34
page80: 0a
Device: /dev/sdg
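
The page83 type1 identifiers printed by iscsi-ls encode, in hex, an 8-byte vendor field followed by the vendor-specific ID — here the Openfiler volume IDs from the LUN listing in step 1. Decoding them is a handy way to confirm which /dev device maps to which LUN before partitioning. A small illustrative sketch (Python, not part of the original procedure):

```python
# Decode a page83 type-1 (T10 vendor ID based) designator as printed by
# iscsi-ls: the first 8 bytes are the vendor field (NUL padded), the
# remainder is the vendor-specific identifier.
def decode_page83(hex_id):
    raw = bytes.fromhex(hex_id)
    vendor = raw[:8].rstrip(b"\x00").decode("ascii")
    vol_id = raw[8:].decode("ascii")
    return vendor, vol_id

# The two identifiers reported above for /dev/sdf and /dev/sdg:
print(decode_page83("4f504e46494c4500774a4a4876682d5833784f2d6f545453"))
# → ('OPNFILE', 'wJJHvh-X3xO-oTTS')
print(decode_page83("4f504e46494c45006e6b3756644f2d416257472d4a736e34"))
# → ('OPNFILE', 'nk7VdO-AbWG-Jsn4')
```

The decoded IDs match the /dev/lizardr5/asmdata02 and /dev/lizardr5/asmfra02 volume IDs shown in step 1, confirming /dev/sdf is the new DATA LUN and /dev/sdg the new FRA LUN.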


3. Using fdisk, created new partitions on the new devices

For example:
 

[root]# fdisk /dev/sdg
Device contains neither a valid DOS partition table, nor Sun, SGI or OSF disklabel
Building a new DOS disklabel. Changes will remain in memory only,
until you decide to write them. After that, of course, the previous
content won't be recoverable.


The number of cylinders for this disk is set to 50016.
There is nothing wrong with that, but this is larger than 1024,
and could in certain setups cause problems with:
1) software that runs at boot time (e.g., old versions of LILO)
2) booting and partitioning software from other OSs
(e.g., DOS FDISK, OS/2 FDISK)
Warning: invalid flag 0x0000 of partition table 4 will be corrected by w(rite)

Command (m for help): n
Command action
e extended
p primary partition (1-4)
p
Partition number (1-4): 1
First cylinder (1-50016, default 1):
Using default value 1
Last cylinder or +size or +sizeM or +sizeK (1-50016, default 50016):
Using default value 50016

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.
Syncing disks.
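
The interactive fdisk session above reduces to a fixed keystroke sequence: new (n), primary (p), partition 1, accept both cylinder defaults, then write (w). If this step has to be repeated across several devices, one option is to pipe that sequence into fdisk. A sketch that just builds the input string — purely illustrative; if in doubt, run fdisk interactively as shown above:

```python
# Build the stdin keystrokes that reproduce the interactive fdisk
# session above: n(ew), p(rimary), partition 1, blank lines to accept
# the default first and last cylinders, then w(rite).
def fdisk_keystrokes(partition_number=1):
    return "\n".join(["n", "p", str(partition_number), "", "", "w"]) + "\n"

# e.g. feed this to `fdisk /dev/sdg` via a shell pipe or subprocess
print(repr(fdisk_keystrokes()))
```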


4. Established the ASM disks in preparation for use within ASM diskgroups
 

[root]# /etc/init.d/oracleasm createdisk FRA02 /dev/sdg1
Marking disk "/dev/sdg1" as an ASM disk: [ OK ]
[root]# /etc/init.d/oracleasm scandisks
Scanning system for ASM disks: [ OK ]
[root]# /etc/init.d/oracleasm listdisks
BACK01
DATA01
DATA02
FRA01
FRA02



ASM disk DATA02 is the new storage that will replace the old storage allocated via DATA01.
ASM disk FRA02 is the new storage that will replace the old storage allocated via FRA01.
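
Before altering any diskgroup it is worth confirming that every expected ASM disk label is visible on each node. A small sketch (illustrative only) that checks the `oracleasm listdisks` output from step 4 against the expected set of labels:

```python
# Compare the labels reported by `oracleasm listdisks` (one per line)
# against the set of labels we expect after createdisk/scandisks.
def missing_disks(listdisks_output, expected):
    seen = set(listdisks_output.split())
    return sorted(set(expected) - seen)

# Output captured in step 4 above:
output = """BACK01
DATA01
DATA02
FRA01
FRA02"""
print(missing_disks(output, ["BACK01", "DATA01", "DATA02", "FRA01", "FRA02"]))
# → []
```

An empty list on every node means the new disks were picked up everywhere and the diskgroup alteration can proceed.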

5. While both the primary and standby sites are running and in use, the new storage will be implemented to replace the old. A Data Guard configuration exists and remains enabled while this storage alteration is implemented.

DGMGRL> show configuration;

Configuration
Name: stdby10
Enabled: YES
Protection Mode: MaxPerformance
Fast-Start Failover: DISABLED
Databases:
stdby10_gal - Primary database
stdby10_inv - Physical standby database
stdbylog_inv - Logical standby database

Current status for "stdby10":
SUCCESS

Primary Site:
+++++++++++++
STDBY10.sys.SQL> @stdby_status

DB_UNIQUE_NAME FOR FLASHBACK_ON LOG_MODE   OPEN_MODE  DATABASE_ROLE GUARD_S PROTECTION_MODE     FS_FAILOVER_STATUS
-------------- --- ------------ ---------- ---------- ------------- ------- ------------------- ------------------
STDBY10_GAL    YES NO           ARCHIVELOG READ WRITE PRIMARY       NONE    MAXIMUM PERFORMANCE DISABLED

1 row selected.


Standby Site:
+++++++++++++
Physical

STDBY10..SQL> @stdby_status

DB_UNIQUE_NAME FOR FLASHBACK_ON LOG_MODE   OPEN_MODE  DATABASE_ROLE    GUARD_S PROTECTION_MODE     FS_FAILOVER_STATUS
-------------- --- ------------ ---------- ---------- ---------------- ------- ------------------- ------------------
STDBY10_INV    YES NO           ARCHIVELOG MOUNTED    PHYSICAL STANDBY NONE    MAXIMUM PERFORMANCE DISABLED

1 row selected.


Logical

STDBYLOG.sys.SQL> @stdby_status

DB_UNIQUE_NAME FOR FLASHBACK_ON LOG_MODE   OPEN_MODE  DATABASE_ROLE   GUARD_S PROTECTION_MODE     FS_FAILOVER_STATUS
-------------- --- ------------ ---------- ---------- --------------- ------- ------------------- ------------------
STDBYLOG_INV   NO  NO           ARCHIVELOG READ WRITE LOGICAL STANDBY ALL     MAXIMUM PERFORMANCE DISABLED

1 row selected.

 


6. To implement the new storage, the following is performed from within ASM.

In this demonstration we will replace the FRA01 disk group's disks with the new storage/disks.
 

Current state of disks:

SQL> col name format a12
SQL> col label format a10
SQL> col path format a20
SQL> select NAME,LABEL,PATH,CREATE_DATE,MOUNT_STATUS,HEADER_STATUS from v$asm_disk
SQL> /

NAME        LABEL PATH                        CREATE_DA MOUNT_S HEADER_STATU
----------- ----- --------------------------- --------- ------- ------------
                  /dev/oracleasm/disks/FRA02            CLOSED  PROVISIONED
                  /dev/oracleasm/disks/BACK01 21-MAY-09 CLOSED  MEMBER
                  /dev/oracleasm/disks/DATA01 03-MAY-09 CLOSED  FORMER
DATA01_0001       /dev/oracleasm/disks/DATA02 21-MAY-09 CACHED  MEMBER
FRA01_0000        /dev/oracleasm/disks/FRA01  03-MAY-09 CACHED  MEMBER



To add the new disk/device /dev/oracleasm/disks/FRA02 and replace the existing disk/device /dev/oracleasm/disks/FRA01 (FRA01_0000) the following is performed from within the ASM instance.  The command that does all the work is:
 

SQL> alter diskgroup FRA01 add disk '/dev/oracleasm/disks/FRA02' drop disk FRA01_0000;
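
Issuing the add and drop in a single ALTER DISKGROUP statement means ASM performs one rebalance for the whole swap rather than two. Since the same swap must also be done for DATA01/DATA02, a small helper that builds the statement keeps the pattern consistent — a sketch, with the optional REBALANCE POWER clause shown as a labeled extra (not used in the original command above):

```python
# Build the combined add/drop ALTER DISKGROUP statement. Doing both in
# one statement makes ASM run a single rebalance for the swap.
def swap_disk_sql(diskgroup, new_disk_path, old_disk_name, power=None):
    sql = (f"alter diskgroup {diskgroup} "
           f"add disk '{new_disk_path}' drop disk {old_disk_name}")
    if power is not None:
        # optional: raise/lower the rebalance effort for this operation
        sql += f" rebalance power {power}"
    return sql

print(swap_disk_sql("FRA01", "/dev/oracleasm/disks/FRA02", "FRA01_0000"))
# → alter diskgroup FRA01 add disk '/dev/oracleasm/disks/FRA02' drop disk FRA01_0000
```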


To monitor the ASM disk addition/removal, the following SQL against GV$ASM_OPERATION is used
 

select INST_ID,GROUP_NUMBER,OPERATION,SOFAR,ACTUAL,EST_WORK,EST_MINUTES from GV$ASM_OPERATION;

SQL> select INST_ID,GROUP_NUMBER,OPERATION,SOFAR,ACTUAL,EST_WORK,EST_MINUTES from GV$ASM_OPERATION;

   INST_ID GROUP_NUMBER OPERA      SOFAR     ACTUAL   EST_WORK EST_MINUTES
---------- ------------ ----- ---------- ---------- ---------- -----------
         2            2 REBAL
         1            2 REBAL        452          1       6287           8

SQL> /

   INST_ID GROUP_NUMBER OPERA      SOFAR     ACTUAL   EST_WORK EST_MINUTES
---------- ------------ ----- ---------- ---------- ---------- -----------
         1            2 REBAL       1132          1       6288           7
         2            2 REBAL

SQL> /

   INST_ID GROUP_NUMBER OPERA      SOFAR     ACTUAL   EST_WORK EST_MINUTES
---------- ------------ ----- ---------- ---------- ---------- -----------
         2            2 REBAL
         1            2 REBAL       5162          1       6289           1
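
The SOFAR and EST_WORK columns give a rough completion percentage for the rebalance (EST_MINUTES is ASM's own moving time estimate, and EST_WORK itself can creep up as ASM refines it). A quick illustrative sketch for turning the monitoring rows above into percentages:

```python
# Rough completion percentage of a rebalance from one GV$ASM_OPERATION
# row. SOFAR/EST_WORK are allocation-unit counts and EST_WORK can grow
# as the estimate is refined, so treat the result as approximate.
def rebal_pct(sofar, est_work):
    return round(100.0 * sofar / est_work, 1) if est_work else 0.0

# The three samples captured above:
for sofar, est_work in [(452, 6287), (1132, 6288), (5162, 6289)]:
    print(rebal_pct(sofar, est_work))  # prints 7.2, 18.0, 82.1
```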


Once the add disk command is issued, we see the new disk is now a member of the disk group:

SQL> select NAME,LABEL,PATH,CREATE_DATE,MOUNT_STATUS,HEADER_STATUS from v$asm_disk;

NAME        LABEL PATH                        CREATE_DA MOUNT_S HEADER_STATU
----------- ----- --------------------------- --------- ------- ------------
                  /dev/oracleasm/disks/BACK01 21-MAY-09 CLOSED  MEMBER
                  /dev/oracleasm/disks/DATA01 03-MAY-09 CLOSED  FORMER
DATA01_0001       /dev/oracleasm/disks/DATA02 21-MAY-09 CACHED  MEMBER
FRA01_0000        /dev/oracleasm/disks/FRA01  03-MAY-09 CACHED  MEMBER
FRA01_0001        /dev/oracleasm/disks/FRA02  22-MAY-09 CACHED  MEMBER
 



Once the ASM rebalance/disk addition/removal has completed, the disk /dev/oracleasm/disks/FRA01 is no longer a member of the disk group or in use. There will also be no current ASM operation shown in v$asm_operation.
 

SQL> select NAME,LABEL,PATH,CREATE_DATE,MOUNT_STATUS,HEADER_STATUS from v$asm_disk;

NAME        LABEL PATH                        CREATE_DA MOUNT_S HEADER_STATU
----------- ----- --------------------------- --------- ------- ------------
                  /dev/oracleasm/disks/FRA01  03-MAY-09 CLOSED  FORMER
                  /dev/oracleasm/disks/BACK01 21-MAY-09 CLOSED  MEMBER
                  /dev/oracleasm/disks/DATA01 03-MAY-09 CLOSED  FORMER
DATA01_0001       /dev/oracleasm/disks/DATA02 21-MAY-09 CACHED  MEMBER
FRA01_0001        /dev/oracleasm/disks/FRA02  22-MAY-09 CACHED  MEMBER

SQL> select INST_ID,GROUP_NUMBER,OPERATION,SOFAR,ACTUAL,EST_WORK,EST_MINUTES from GV$ASM_OPERATION;

   INST_ID GROUP_NUMBER OPERA      SOFAR     ACTUAL   EST_WORK EST_MINUTES
---------- ------------ ----- ---------- ---------- ---------- -----------
         1            2 REBAL       6078          1       6289           0
         2            2 REBAL

SQL> select INST_ID,GROUP_NUMBER,OPERATION,SOFAR,ACTUAL,EST_WORK,EST_MINUTES from GV$ASM_OPERATION;

no rows selected
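
A scripted completion check therefore reduces to two conditions: GV$ASM_OPERATION returns no rows for the group, and the old disk's HEADER_STATUS has flipped to FORMER. A minimal sketch of that predicate (the inputs are assumed to come from the two queries above):

```python
# The swap is complete when GV$ASM_OPERATION returns no rows for the
# diskgroup and the old disk's HEADER_STATUS in V$ASM_DISK is FORMER.
def swap_complete(asm_operation_rows, old_disk_header_status):
    return len(asm_operation_rows) == 0 and old_disk_header_status == "FORMER"

# e.g. after the queries above: no rows selected, and FRA01 shows FORMER
print(swap_complete([], "FORMER"))  # → True
```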
