Oracle 11g ASM Enhancements (Original)

ASM Compatibility
ASM in Oracle Database 11g can support databases from both the 11g and the 10g releases; the ASM version must be the same as or higher than the RDBMS version of any database it supports. ASM's disk group compatibility feature lets an Oracle Database 10g client use disk groups created under Oracle Database 11g, and you can advance the database and ASM disk group compatibility settings across software versions. Two attributes determine the compatibility settings of each disk group: compatible.asm and compatible.rdbms. The compatible.asm attribute determines the ASM compatibility; it specifies the minimum ASM software version that can use the disk group and controls the format of the ASM metadata structures on disk. The compatible.rdbms attribute determines the RDBMS compatibility; it specifies the minimum database software version that can use the disk group and thus controls the minimum client level.

compatible.rdbms indicates the minimum compatible Oracle Database version an RDBMS instance must have to mount the ASM disk group. For example, if the RDBMS compatibility is set to 10.1, the database client's compatible version must be at least 10.1. An ASM instance can support different RDBMS clients running at different compatibility settings, but the database compatible initialization parameter setting for each instance must be equal to or greater than the compatible.rdbms setting of every disk group that instance uses. Thus, the compatible parameter setting for each instance and the compatible.rdbms setting together determine whether an instance can mount a disk group.
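As a quick sanity check, you can compare the two settings directly; the instance can mount the disk group only if the first value is greater than or equal to the second. A minimal sketch using standard views:
-- On the database instance: this client's compatible setting.
SELECT value FROM v$parameter WHERE name = 'compatible';
-- On the ASM instance: the minimum client level each disk group requires.
SELECT group_number, name, value FROM v$asm_attribute WHERE name = 'compatible.rdbms';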

The compatible.asm setting controls the format of the ASM metadata structures on the disks that make up the disk group. For example, if you set the compatible.asm attribute to 11.1, the ASM software version must be at least 11.1. The ASM compatibility of a disk group must always be equal to or greater than its RDBMS compatibility. Remember that ASM compatibility is concerned only with the format of the ASM metadata; the format of the actual file contents is determined by the compatibility of the database instance. Let's say the compatible.asm setting is 11.0 and the compatible.rdbms setting is 10.1. This means that ASM can manage the disk group only if the ASM software version is 11.0 or higher, while a database client needs a software version of at least 10.1 to use that disk group.

The default for both the compatible.asm and compatible.rdbms attributes is 10.1. Just as the compatible initialization parameter in the spfile sets the compatibility level of a database, higher disk group RDBMS and ASM compatibility settings enable you to take advantage of the new ASM-related features in Oracle Database 11g. Once you advance the compatible.rdbms attribute, you can't revert to the old setting. If you want to go back to the previous value, you must create a new disk group with the previous compatibility setting and restore all the database files that were part of the original disk group.
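A minimal sketch of that reversion path, assuming a hypothetical replacement disk group DATA2 and a spare device:
-- Create a replacement disk group at the old compatibility level.
CREATE DISKGROUP data2 DISK '/dev/raw/raw5'
  ATTRIBUTE 'compatible.asm' = '10.1', 'compatible.rdbms' = '10.1';
-- Then restore the database files into DATA2 (for example, with RMAN).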

If you’ve made backups with the md_backup command before updating the disk group compatibility settings, the backup is useless once you update the disk group. However, you can use an older backup to revert to the previous compatibility setting.

Note: You can't change the compatibility settings during a rolling upgrade.

The disk group compatibility attributes can be set during disk group creation by adding the ATTRIBUTE clause to the CREATE DISKGROUP command.
CREATE DISKGROUP data DISK '/dev/raw/*'  ATTRIBUTE 'compatible.asm' = '11.1';
CREATE DISKGROUP data DISK '/dev/raw/*'  ATTRIBUTE 'compatible.rdbms' = '11.1', 'compatible.asm' = '11.1';
The disk group compatibility attributes for existing disk groups can be altered using the SET ATTRIBUTE clause to the ALTER DISKGROUP command.
ALTER DISKGROUP data SET ATTRIBUTE 'compatible.asm' = '11.1';
ALTER DISKGROUP data SET ATTRIBUTE 'compatible.rdbms' = '11.1';
The current compatibility settings are available from the V$ASM_DISKGROUP and V$ASM_ATTRIBUTE views.
COLUMN name FORMAT A10
COLUMN compatibility FORMAT A20
COLUMN database_compatibility FORMAT A20
SELECT group_number, name, compatibility, database_compatibility FROM v$asm_diskgroup;
GROUP_NUMBER NAME       COMPATIBILITY        DATABASE_COMPATIBILI
------------ ---------- -------------------- --------------------
           1 DATA       11.1.0.0.0           11.1.0.0.0
1 row selected.
COLUMN name FORMAT A20
COLUMN value FORMAT A20
SELECT group_number, name, value FROM v$asm_attribute ORDER BY group_number, name;
GROUP_NUMBER NAME                 VALUE
------------ -------------------- --------------------
           1 au_size              1048576
           1 compatible.asm       11.1.0.0.0
           1 compatible.rdbms     11.1
           1 disk_repair_time     3.6h
4 rows selected.

ASM Fast Mirror Resync
In Oracle Database 10g, when an ASM disk failure occurs, say because of a bad cable or controller, ASM can't complete any extent writes that are in flight to the failed disk. (This applies, of course, only when you're using ASM redundancy.) ASM then takes the failed disk offline and, once it has re-created the failed disk's extents on the other disks in the disk group from the redundant extent copies, drops the failed disk. ASM no longer reads from the offline disk because it assumes the disk contains only stale data. After the failure is fixed, you have to manually add the disk back, migrating all its extents onto it, or add a new disk to the disk group to take the place of the dropped disk. In either case, the two-step process of writing and rewriting the failed disk's extents takes time and resources. Even when a disk failure is transient, caused by a failed cable or controller or a disk power supply interruption, you still have to go through this time-consuming process: fix the transient failure, add the dropped disk back to its disk group, and incur the cost of migrating the extents back to the repaired disk.
In Oracle Database 11g, the ASM fast mirror resync feature lowers the overhead of resynchronizing disk groups after a transient disk failure. During a temporary failure, ASM tracks the extents that were changed, and after the failure is fixed it resynchronizes only those changed extents. Thus, the database has to rewrite only a minuscule portion of the failed disk's contents: it repairs only the damaged portions of the affected disk and doesn't have to copy the entire disk when you take it offline and bring it back online after the repair. Of course, the feature works under the assumption that the offline disk's contents haven't been damaged or modified. When you enable this feature, the database merely takes the affected disk offline but doesn't drop it.

Fast mirror resync is only available when the disk group compatibility attributes are set to 11.1 or higher.
ALTER DISKGROUP disk_group_1 SET ATTRIBUTE 'compatible.asm' = '11.1';
ALTER DISKGROUP disk_group_1 SET ATTRIBUTE 'compatible.rdbms' = '11.1';
By default, ASM drops a disk that remains offline for more than 3.6 hours. The disk group's default time limit is altered by setting the DISK_REPAIR_TIME attribute, with a unit of minutes (M or m) or hours (H or h).
-- Set using the hours unit of time.
ALTER DISKGROUP disk_group_1 SET ATTRIBUTE 'disk_repair_time' = '4.5h';
-- Set using the minutes unit of time.
ALTER DISKGROUP disk_group_1 SET ATTRIBUTE 'disk_repair_time' = '300m';
The DROP AFTER clause of the ALTER DISKGROUP command is used to override the disk group default DISK_REPAIR_TIME.
-- Use the default DISK_REPAIR_TIME for the diskgroup.
ALTER DISKGROUP disk_group_1 OFFLINE DISK D1_0001;
-- Override the default DISK_REPAIR_TIME.
ALTER DISKGROUP disk_group_1 OFFLINE DISK D1_0001 DROP AFTER 30m;
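Once the repair is complete, bringing the disk back online starts a resync of only the stale extents. Both forms below are standard 11g syntax.
-- Bring a single repaired disk back online.
ALTER DISKGROUP disk_group_1 ONLINE DISK D1_0001;
-- Bring all offline disks in the disk group back online.
ALTER DISKGROUP disk_group_1 ONLINE ALL;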

The value range for DISK_REPAIR_TIME is 0 to 136 years.
Note: If a disk goes offline during a rolling upgrade, the timer is not started until after the rolling upgrade is complete.

You can monitor the fast mirror resync process using the V$ASM_DISK and V$ASM_DISK_IOSTAT views. The V$ASM_OPERATION view also shows a row for each disk resync operation, with the OPERATION column set to SYNC.
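For example, the following queries against those standard views show how long an offline disk has before it is dropped, and any resync operation in flight:
-- REPAIR_TIMER shows the seconds remaining before an offline disk is dropped.
SELECT name, mode_status, repair_timer FROM v$asm_disk;
-- A resync in progress appears with OPERATION = 'SYNC'.
SELECT group_number, operation, state FROM v$asm_operation;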

ASM Preferred Mirror Read
Mirroring is an ASM feature that protects the integrity of data by storing copies of data on multiple disks. ASM offers several levels of mirroring, from less to more stringent, and you assign each disk group a mirroring strategy by specifying its disk group type. You can create an ASM disk group with one of the following three redundancy levels:

  • For 2-way mirroring, choose the normal redundancy type.
  • For 3-way mirroring, choose the high redundancy type.
  • If you don't want to use ASM mirroring and prefer to rely on hardware RAID for redundancy, choose external redundancy.

The disk group type you specify determines the mirroring level for a file in a disk group. The redundancy level controls how many disk failures the database can tolerate before losing data or having to drop a disk. ASM uses failure groups to place the mirrored copies of an extent, storing each copy in a different failure group. For a normal redundancy file, when ASM allocates a new extent, it allocates a primary copy and a secondary copy, storing the secondary copy in a different failure group than the primary copy. A normal redundancy disk group requires a minimum of two failure groups for 2-way mirroring. A high redundancy disk group, because it performs 3-way mirroring, requires at least three failure groups. Because a disk group with external redundancy doesn't use ASM mirroring, it doesn't require any failure groups at all.
In Oracle Database 10g, ASM always read the primary copy of a mirrored extent whenever you configured failure groups for normal or high redundancy disk groups. That is, ASM read from the secondary failure group only when the primary copy wasn't available, even in cases where reading a secondary extent copy closer to the node would have been more efficient. In Oracle Database 11g, you can provide a list of preferred failure group names: you can configure each node to read from a specific failure group instead of automatically reading the primary copy. Thus, if reading a local copy of an extent is more efficient, the database will do so. Once you configure preferred mirror reads, every node can read from its local disks. This is called the ASM preferred mirror read feature. To make an ASM instance read from specific failure groups, you designate them as preferred read failure groups for the disk group. The feature proves especially valuable for reads in stretch clusters, which are clusters whose nodes are spread over large physical distances.

To configure preferred read failure groups, the disk group compatibility attributes must be set to 11.1 or higher. Once the compatibility settings are correct, the ASM_PREFERRED_READ_FAILURE_GROUPS parameter is set on each node to that node's preferred failure groups, in the form ASM_PREFERRED_READ_FAILURE_GROUPS = <diskgroup_name>.<failure_group_name>.
SELECT name, failgroup FROM v$asm_disk;
NAME                           FAILGROUP
------------------------------ ------------------------------
DATA_0000                      DATA_0000
DATA_0001                      DATA_0001
DATA_0002                      DATA_0002
3 rows selected.
ALTER SYSTEM SET ASM_PREFERRED_READ_FAILURE_GROUPS = 'data.data_0000', 'data.data_0001', 'data.data_0002';
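You can then confirm which disks the instance treats as preferred read targets using the PREFERRED_READ column of V$ASM_DISK:
-- PREFERRED_READ is 'Y' for disks in a preferred read failure group.
SELECT name, failgroup, preferred_read FROM v$asm_disk;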

Following are some best practices for configuring the ASM preferred mirror read feature to achieve the best performance and availability in a stretch cluster.

  • If you use normal redundancy, you must use only two failure groups, and all of a node's local disks must belong to the same failure group. Each instance can specify only one failure group as its preferred read failure group. If you specify more than one failure group, ASM may not be able to mirror a virtual extent across both groups in the two sites.
  • If you create a high redundancy disk group, you can have a maximum of two failure groups on each site with its local disks, and you can specify both local failure groups as preferred read failure groups for the local ASM instance (see the sketch after this list).
  • In a three-site stretch cluster, you must use a high redundancy disk group with three failure groups so that ASM can ensure that each virtual extent has a local mirror copy. In addition, this protects the database in the event of a disaster in any one of the three sites.
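As a minimal sketch of the per-node setup in a two-site RAC stretch cluster (the failure group names FG_SITE1/FG_SITE2 are hypothetical; the instance names assume the defaults), each ASM instance points at its own local failure group:
-- On site 1, instance +ASM1 prefers its local failure group.
ALTER SYSTEM SET asm_preferred_read_failure_groups = 'data.fg_site1' SID = '+ASM1';
-- On site 2, instance +ASM2 prefers its local failure group.
ALTER SYSTEM SET asm_preferred_read_failure_groups = 'data.fg_site2' SID = '+ASM2';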

ASM Scalability and Performance Enhancements
An AU is the basic unit of allocation within an ASM disk group. In Oracle Database 10g, each extent consisted of exactly one AU, which created problems with memory usage: a large database with numerous default-sized allocation units needed a very large amount of shared pool memory to hold its extent maps. The default AU size is only 1MB. A file extent contains a minimum of one AU, and an ASM file consists of at least one extent. Oracle Database 11g introduces variable-size extents: when the disk group compatibility attributes are set to 11.1 or higher, the extent size automatically grows as the file grows. In 11.1, the first 20,000 extents each match the allocation unit size (1*AU), the next 20,000 extents are made up of 8 allocation units (8*AU), and beyond that point the extent size becomes 64 allocation units (64*AU). In 11.2 this progression changed from 1, 8, 64 to 1, 4, 16. Variable-size ASM extents mean that ASM can support larger files while using less memory: ASM sets thresholds for each file and, as the file grows, increases the extent size accordingly, so the size of an extent can vary among files as well as within a file. As a result, the database needs fewer extent pointers to describe an ASM file and less memory to manage the extent maps in the shared pool, making large ASM configurations easier to implement. In addition, the database automatically performs defragmentation when it has trouble finding an extent of the right size during extent allocation.
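As a back-of-the-envelope illustration (not from the source): with the default 1MB AU, a 100GB file needs 102,400 extent pointers under fixed 1-AU extents, but under the 11.1 progression it needs 20,000 1-AU extents for the first 20,000MB plus 82,400/8 = 10,300 8-AU extents for the rest, about 30,300 pointers in all, roughly a 70 percent reduction in extent map size.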

ASM is also more scalable in Oracle Database 11g than in Oracle Database 10g. The maximum ASM file size for external redundancy is now 140 petabytes, up from 35 terabytes in Oracle Database 10g. Variable extent sizes enable you to configure ASM installations that are several hundred terabytes or even several petabytes in size.

In addition to the automatic expansion of extent sizes, Oracle 11g also gives you control over the allocation unit size, using the ATTRIBUTE clause in the CREATE DISKGROUP statement with values ranging from 1M to 64M (in powers of two).
CREATE DISKGROUP disk_group_2
  EXTERNAL REDUNDANCY
  DISK '/dev/sde1'
  ATTRIBUTE 'au_size' = '32M';
The combination of expanding extent sizes and larger allocation units should result in increased I/O performance for very large databases.

You can find out the allocation unit sizes for all disk groups by executing the following query on the V$ASM_DISKGROUP view:
SQL> select name, allocation_unit_size from v$asm_diskgroup;
NAME                 ALLOCATION_UNIT_SIZE
-------------------- --------------------
DGROUP1                           1048576
DGROUP3                           1048576

New Restricted Mount Mode
ASM automatically mounts the disk groups that you specify in the asm_diskgroups initialization parameter so they are available to the database instances. Similarly, when you shut down the ASM instance, the disk groups are automatically dismounted.
Oracle Database 11g introduces a new mount mode for disk groups, called restricted mode. Whenever you add a disk to a disk group, ASM immediately starts a rebalance operation, which requires an elaborate system of locks to ensure that the correct blocks are accessed and changed. Mounting a disk group in restricted mode improves the performance of a rebalance operation because the ASM instance doesn't have to message database clients to lock and unlock extent maps, reducing the locking overhead during disk rebalancing. Once you finish all maintenance operations in restricted mode, you must dismount the disk group and mount it again in normal mode so database clients can use it. Here's how you can mount a disk group in restricted mode:

SQL> ALTER DISKGROUP data DISMOUNT;
Diskgroup altered.
SQL> ALTER DISKGROUP data MOUNT RESTRICTED;
Diskgroup altered.
SQL> ALTER DISKGROUP data DISMOUNT;
Diskgroup altered.
SQL> ALTER DISKGROUP data MOUNT;
Diskgroup altered.
In a RAC environment, a disk group mounted in RESTRICTED mode can only be accessed by a single instance. The restricted disk group is not available to any ASM clients, even on the node where it is mounted.
Using RESTRICTED mode improves the performance of rebalance operations in a RAC environment because it eliminates the lock and unlock extent map messaging that otherwise occurs between ASM instances. Once the rebalance operation is complete, the disk group should be dismounted and then mounted in NORMAL mode (the default).
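A minimal sketch of such a maintenance flow (the device path for the new disk is hypothetical):
ALTER DISKGROUP data MOUNT RESTRICTED;
-- The rebalance triggered by the new disk runs without extent map lock messaging.
ALTER DISKGROUP data ADD DISK '/dev/raw/raw6' REBALANCE POWER 11;
-- Return the disk group to normal service once the rebalance completes.
ALTER DISKGROUP data DISMOUNT;
ALTER DISKGROUP data MOUNT;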

New SYSASM Privilege
Oracle Database 11g introduces a new system privilege called SYSASM to enable you to separate the SYSDBA database administration privilege from the ASM storage administration privilege. To improve security, Oracle recommends that you use the new privilege called SYSASM when performing ASM-related administrative tasks. The SYSASM privilege is quite similar to the SYSDBA and SYSOPER privileges, which are system privileges given to users that perform administrative tasks in the database.
Oracle specifically recommends the SYSASM privilege rather than SYSDBA for administering an ASM instance. Although the default installation group for users with the SYSASM privilege is the dba group, Oracle intends to require a separate OS group for ASM administrators in future releases. In this release, Oracle recommends that you create a new operating system group, called the OSASM group, and grant the SYSASM privilege only to members of this group. ASM users will then be limited to ASM instances and won't be able to use the SYSDBA privilege on the main database. The idea behind the new SYSASM privilege is to provide distinct operating system privileges for database administrators, storage administrators, and database operators.
The V$PWFILE_USERS view includes a new column called SYSASM, which shows whether a user can connect with the SYSASM privilege or not. You can revoke the SYSASM privilege from a user by using the revoke sysasm SQL statement.
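A brief sketch of granting and verifying the privilege, using a hypothetical user named asm_admin:
-- Grant the new privilege and check the password file view.
GRANT SYSASM TO asm_admin;
SELECT username, sysasm FROM v$pwfile_users;
-- Connect for ASM administration; revoke the privilege when no longer needed.
CONNECT asm_admin AS SYSASM
REVOKE SYSASM FROM asm_admin;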
Note: You can still log in to an ASM instance as a user with the SYSDBA privilege, but the instance will issue a warning that's recorded in the alert log.
Rolling Upgrade
Clustered ASM instances from 11g onward can be upgraded using a rolling upgrade. The ASM cluster is placed in rolling upgrade mode by issuing the following command from one of the nodes.
ALTER SYSTEM START ROLLING MIGRATION TO 11.2.0.0.0;
Once the cluster is in rolling upgrade mode, each node in turn can be shut down, upgraded, and restarted. The cluster runs in a mixed-version environment until the upgrade is complete. In this state, the cluster is limited to the following operations:

  • Mount and dismount of the disk groups.
  • Open, close, resize, and delete of database files.
  • Access to local fixed views and fixed packages.

The current status of the ASM cluster can be determined using the following query.
SELECT SYS_CONTEXT('sys_cluster_properties', 'cluster_state') FROM dual;
Once the last node is upgraded, the rolling upgrade is stopped by issuing the following command, which checks that all ASM instances are at the appropriate version, turns off rolling upgrade mode, and restarts any pending rebalance operations.
ALTER SYSTEM STOP ROLLING MIGRATION;
Restrictions and miscellaneous points about the rolling upgrade process include:

  • The Oracle clusterware must be fully patched before an ASM rolling upgrade is started.
  • Rolling upgrades are only available from 11g onwards, so this method is not suitable for 10g to 11g upgrades.
  • This method can also be used to roll back to the previous version if the rolling upgrade fails before completion.
  • If the upgrade fails, any rebalancing operations must complete before a new upgrade can be attempted.
  • New instances joining the cluster during a rolling upgrade are automatically placed in rolling upgrade mode.
  • If all instances in a cluster are stopped during a rolling upgrade, once the instances restart they will no longer be in rolling upgrade mode. The upgrade must be initiated as if it were a new process.

References:
《McGraw.Hill.OCP.Oracle.Database.11g.New.Features.for.Administrators.Exam.Guide.Apr.2008》
http://www.oracle-base.com/articles/11g/asm-enhancements-11gr1.php#fast_rebalance
http://docs.oracle.com/cd/E11882_01/server.112/e40402/initparams013.htm#REFRN10279
http://docs.oracle.com/cd/E11882_01/server.112/e26088/statements_5008.htm#SQLRF53934

This article is original work; please credit the source and author when reposting.

Corrections are welcome.

Email: [email protected]
