Best Practices for Configuring Magnetic Libraries for CommVault BUR/ILM Suite


Published 03/17/2009 04:38 PM   |    Updated 09/10/2010 09:40 AM

 

Purpose:


The purpose of this document is to serve as a reference for the field configuration and implementation of Magnetic Libraries in backup and ILM configurations. 


Conceptual Overview:


Backup and Recovery (“BUR”) operations consist of sequential write / read operations using large blocks (from an I/O perspective) against the disk array.  When disk is used as a target for backup, it should be noted that the aggregate throughput of more than six high speed / high density tape drives can exceed the throughput of a single array.  In many enterprises, backups written to disk serve as a caching mechanism (Spool Copy) before a copy of the backup is made to tape.  Disk is also used as a short term, high speed recovery point for data.
 
Any RAID Group will be carved into logical unit numbers (“LUNs”), the addressable disk devices presented to the host and, by extension, used as the disk target.  LUNs should be sized and allocated so that no more than four threads access any one RAID Group.  A 64 data alignment is a required configuration for CommVault Windows MediaAgents and may also apply to other operating systems.  Refer to Appendix 3 and the EMC CLARiiON Best Practices for Fibre Channel Devices document to determine whether an alignment offset is required in your configuration.

Note that Microsoft recommends setting the LUN alignment offset for any array-based LUNs using the diskpart utility within Windows in the event that disk performance is below expectation.
 
Refer to Microsoft KB article 929491, “Disk performance may be slower than expected when you use multiple disks in Windows Server 2003, in Windows XP, and in Windows 2000,” for more information.



Direct Attached Disk (SCSI/FIBRE/iSCSI) with RAID

 

With direct attached disk using RAID (Redundant Array of Inexpensive Disks), data is written across the disks in a configuration that, in most cases, includes a parity disk (or disks), providing high performance and fault tolerance.  Depending on the array controller there are several potential RAID levels to choose from (RAID 0/1/3/5/6/10 (0+1)).  As a general rule the only levels that should be considered are levels 0, 1, and 5.  It should also be noted that RAID level 6, while not typically available on embedded controllers, is generally suitable for BUR/ILM purposes.  RAID 6, however, utilizes double distributed parity, and the overhead this creates can make RAID 6 slower than many tape technologies.

From a striping perspective, the best performance from a RAID group is achieved when each I/O operation generated by a MediaAgent equals, or is a multiple of, the stripe width.


Stripe width = number of "data" drives multiplied by the stripe size.


Example:   With a RAID 5 group of 8d+1p and a stripe size of 8KB, the stripe width would be 64KB.  We can achieve the best performance from a RAID 5 group through CommVault if we tune it in such a way that every I/O operation is a sequential read or write of 64KB, 128KB, or n x 64KB of data.
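
To make the stripe-width arithmetic concrete, the short Python sketch below computes the stripe width for a RAID group and checks whether a given I/O size is a whole multiple of it.  This is an illustrative calculation only; the function names and sample values are not part of any CommVault or array tooling.

```python
# Illustrative sketch only: computes RAID stripe width and checks whether an
# I/O size is a whole multiple of it (i.e., a full-stripe read or write).
# The sample values mirror the 8d+1p RAID 5 example with an 8 KB stripe size.

KB = 1024


def stripe_width(data_drives: int, stripe_size_bytes: int) -> int:
    """Stripe width = number of data drives x per-drive stripe size."""
    return data_drives * stripe_size_bytes


def is_full_stripe(io_size_bytes: int, width_bytes: int) -> bool:
    """True when the I/O equals the stripe width or an integer multiple of it."""
    return io_size_bytes % width_bytes == 0


if __name__ == "__main__":
    width = stripe_width(data_drives=8, stripe_size_bytes=8 * KB)
    print(f"Stripe width: {width // KB} KB")            # 64 KB for 8 x 8 KB

    for io in (32 * KB, 64 * KB, 128 * KB):
        print(f"{io // KB:>4} KB I/O -> full stripe: {is_full_stripe(io, width)}")
```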


Note:   Increasing the stripe size has the potential to slow performance, as it may take longer for a single spindle to read or write data in the amount of the stripe size than it would to distribute that I/O across several spindles. 


Disk used as Spool Copy


Because a spool copy1 serves only as a temporary staging area for data, disk used for a spool copy benefits from RAID 0, which offers the maximum amount of usable space (no disk space is lost to parity) and maximum performance at minimum cost.  It should be noted, however, that RAID 0 provides NO redundancy; any disk or controller failure will result in the loss of any data on disk that has not yet been copied to tape.  If the disk being configured is intended to retain data for some time, the following configurations should be considered.
 

Advantages:  High performance, no cost penalty - all storage is usable

Disadvantages:  Significantly reduced data availability  


RAID 3


RAID 3 is an acceptable option for BUR/ILM purposes where the number of streams writing to and reading from the array is relatively low (fewer than 10 concurrent streams), as it performs best as a target for sequential data access (reads or writes, NOT random I/O).  RAID 3 is a volume that has a dedicated parity disk (similar to RAID 4).  A 4+1 (5 disk) RAID 3 virtual disk should be created, with a 64K block size specified on the array and also on the host when the volume is formatted.  While RAID 3 is capable of using more than 5 disks in a group (RAID 3 also supports an 8+1 group), this is not recommended: because RAID 3 cannot survive a double disk failure within the same virtual disk, the higher the number of disks, the greater the odds of a double disk failure.

Advantages:  Good data availability, high performance for transfer rate intensive applications, cost effective - only one extra disk is required for parity

Disadvantages:  Can satisfy only one I/O request at a time; poor performance for small, random I/O  


RAID 5


In any use case where RAID 3 is not available on a controller, RAID 5 should be used.  In configurations with higher volumes of concurrent IOPS, RAID 5 is the preferred configuration for disk.  RAID 5 uses a distributed parity model and provides good performance for sequential I/O (while not as good as RAID 3); it also provides relatively good performance for random I/O.  RAID level 5 is also more flexible in the number of disks that can be in a virtual disk, although a 4+1 (5 disk) volume typically performs best.  If RAID 5 is the chosen configuration for the CommVault Magnetic Library, it is important to maximize the write cache of the array in order to ensure optimum performance for backup to disk.

Advantages:  Average data availability, cost effective - only one extra disk is required for parity

Disadvantages:  Poor write performance if low write cache is configured, no performance gain in data transfer rate intensive applications  


iSCSI


While an iSCSI target WILL work as a Magnetic Library target, it should be noted that performance is usually disappointing compared to Fibre Channel.  iSCSI is not designed for I/O intensive applications like backup; it should be recommended against at the design phase, and another target should be used if one is available at the time of implementation.


SATA


Serial ATA (“SATA”) HDDs provide high capacity at lower price points for enterprises integrating disk into data protection configurations.  SATA drives trade performance for capacity and are best suited to the sequential read / write operations that are native to this drive type.  SATA drives are engineered for high capacity rather than high performance.  Higher capacity drives have the same number of heads as lower capacity drives, so the number of heads per gigabyte declines, and if nothing is done to increase controller performance, performance declines.  As noted earlier in this document, the rule of thumb for performance is larger quantities of smaller drives.  This raises the spindle count, the number of heads, and the amount of aggregated device cache, and helps maximize the available bus bandwidth, which runs counter to the design concept of SATA drive integration.  For Backup and Recovery operations, SATA drives generally meet performance requirements associated with throughput on writes (backups) and reads (restores); concurrent operations expose SATA’s relative shortcomings.

A secondary consideration associated with SATA arrays in data protection operations is the recovery time associated with a drive failure.  A 750GB drive requires a much longer period to rebuild than a 146GB drive.  Depending on how the storage system is architected and utilized, it may take as long as 48 hours to rebuild a 750GB drive.  During that rebuild window, the array is exposed to the risk of a second disk failure in the same RAID group.
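
As a rough illustration of why rebuild times grow with drive capacity, the sketch below simply divides drive capacity by an assumed sustained rebuild rate.  The rates used are placeholders chosen for illustration, not vendor figures; actual rebuild times vary widely with array load, RAID level, and controller behavior.

```python
# Rough, illustrative estimate only: rebuild time ~ capacity / sustained rebuild rate.
# The rebuild rates below are assumed placeholders, not vendor specifications;
# real rebuild times depend heavily on array load, RAID level, and controller behavior.

GB = 1000 ** 3   # drive vendors quote decimal gigabytes
MB = 1000 ** 2


def rebuild_hours(capacity_gb: float, rebuild_mb_per_sec: float) -> float:
    """Estimate hours to rebuild a failed drive at a constant rebuild rate."""
    seconds = (capacity_gb * GB) / (rebuild_mb_per_sec * MB)
    return seconds / 3600


if __name__ == "__main__":
    # Compare a small drive against a large SATA drive on a busy array
    # (low assumed rate) and on an otherwise idle array (higher assumed rate).
    for capacity_gb in (146, 750):
        for rate in (5, 50):  # MB/s, assumed for illustration
            print(f"{capacity_gb} GB at {rate} MB/s -> "
                  f"{rebuild_hours(capacity_gb, rate):5.1f} hours")
```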
 


EMC CLARiiON


For CX200/300/400/500/600/700


The optimal configuration for the older generation CLARiiON arrays is a 4+1 RAID 3 Raid Group with a 63 alignment offset and a 64K block size on the array and also on the host when the volume is formatted.

Because the disk array enclosures (DAEs) on the CLARiiON hold 15 disks, and we are recommending 5-disk RAID groups, the CLARiiON should be set up to use only one hot spare disk per 30 drives.  This prevents 4 drives from being wasted in each DAE.  One hot spare per 30 drives is also the configuration recommended by EMC.  


For CX3-XX Series


The latest generation of EMC CLARiiON arrays utilizes updated RAID 5 logic, and because of that the optimal configuration for them differs from previous generations.  The optimal configuration on the CX3 series CLARiiON arrays is a 4+1 RAID 5 RAID Group with a 63 alignment offset and a 64K block size on the array and also on the host when the volume is formatted.


LUN Allocation on CLARiiON Arrays


When creating LUNs on EMC CLARiiON arrays make certain not to create more LUNs than data disks in the RAID Group, with a limit of FOUR LUNs per RAID Group.  Additional LUNs per RAID Group will degrade performance.  This recommendation applies to both RAID level 3 and RAID level 5 RAID Groups.


Data Offsets on CLARiiON Arrays


When configuring the alignment offset within the CLARiiON LUN configuration utility in Navisphere, the data offset value may be set to 63, as the offset is presented within Navisphere as a ‘cardinal’ number rather than an ‘ordinal’ number (i.e., Block_0 through Block_63 equals 64 blocks).  However, EMC recommends setting the LUN alignment offset using Microsoft diskpart, as this improves performance on the CLARiiON Service Processor and removes the need to set the logical offset using Navisphere.  Please refer to Appendix 3 for additional detail on offsets.


 Hewlett Packard Enterprise Virtual Array


Configuration of magnetic libraries on the Hewlett Packard Enterprise Virtual Array (EVA) platform should follow the HP “Rule of Eight” principle.3  Eight drives should be used in a disk group and, depending on the number of disk shelves, Vraid1 or Vraid5 should be selected.  If fewer than 8 shelves are available, Vraid1 should be selected; if 8 or more shelves are available, Vraid5 can be used.


EMC Centera


As the Centera controller handles data placement (storing data as “blobs”), there are no user-configurable storage settings.
 

Hitachi Data Systems


Configuration of magnetic libraries based on Hitachi hardware is more dependent on the number of RAID Groups/LUNs than on other hardware platforms.  The RAID Groups should be created using a 4+1 configuration.  HDS design staff should be consulted, and testing should be done, to identify the optimal server-to-LUN ratio.
 


JBOD


JBOD is not a recommended disk target for magnetic libraries where disk is used as the primary backup target (not as a spool copy) because it lacks redundancy.  A single drive failure causes total data loss.  If a JBOD configuration must be used, the volume should be formatted with a 64k block size to improve write performance. 


VTL 


CommVault supports most Virtual Tape Libraries.  To verify compatibility see the Hardware Compatibility Matrix available on Maintenance Advantage.

VTL “media” is treated as tape, and this can be one of the drawbacks of using a VTL as opposed to conventional disk.  Once media is full, all the jobs on the media must age before the media may be scratched and re-used.  In a VTL configuration, disk space consumed by scratched media will never show as free on the VTL even when data ages, because CommVault software reserves the ability to restore from aged media.


This stems from the difference between how tape and Magnetic Media are handled: with a VTL, customers are restricted by the rules of tape.  The virtual tape never actually has its data wiped until a Format operation is performed on the tape itself; a Format is the ONLY way to clear the tape of data.  Note that completing an Erase Media operation will merely remove the OML so that the tape cannot be restored from, for audit reasons or the like.


Notes on DataDomain VTL integration with CommVault:


When Data Domain is used as a MagLib (the most common configuration), it is recommended to schedule a clean job at least once a week in order to actually remove data flagged for deletion and also perform a basic defragmentation.  It is NOT recommended to run this clean operation while Backups or AuxCopies are using the Data Domain, as the clean consumes significant system resources on the Data Domain while the job is running.


Because each VTL vendor is unique, it is recommended that the customer contact their VTL vendor for recommendations on how best to configure the disk array hosting the VTL.
 


Xiotech


Because of the unique nature of Xiotech disk arrays it is recommended that the customer contact Xiotech for recommendations on how best to configure Vdisks on the Xiotech array. 


Shared Libraries with CommVault Dynamic Mount Path Sharing


Disk for Shared Libraries with Dynamic Mount Path (DMP) sharing should be configured as stated for the hardware type outlined in this document.  The volumes must be zoned so that only the MediaAgents that are to share the volumes can see them.  The SCSICmd tool (found in the \Base folder) must be run to ensure SCSI-3 persistent reservation is supported by the attached array.  Note that Dynamic Mount Path Sharing requires the use of BASIC disk in a non-clustered server configuration.


Shared Library DMP Architecture Requirements


With the exception of PowerPath v4.4.1 or better, Shared DMP Libraries are NOT supported with path management software1.  To configure a highly available Shared DMP environment, GridStor should be used to allow for failover between MediaAgents, rather than between paths from a single host to the Shared Library.  The datapaths through the second MediaAgent should be configured to run only on a failover of the first MediaAgent.


A note about Shared Libraries with Dynamic Mount Path Sharing


This is an advanced feature within CommVault Software versions 7.0 and higher.  If the engineer implementing it does not have experience with the feature, it is recommended that CommVault support be engaged to ensure proper configuration.  Failure to properly configure this feature can result in backup data corruption, leaving the client with no valid backups. 


Best Practices for Shared Disk Storage Access – Considerations and Deployment Order of Priority


The following are the options to be used when sharing the same disk storage across different MediaAgents.  

  1. BEST OPTION:  Share the Magnetic Disk across the servers on the LAN.  The disk can be shared as a CIFS or UNC share, depending on the MediaAgent operating system.  The Magnetic Library can be configured using the static shared option.  This configuration works well for concurrent access from different servers to the same volume at the same time, and in situations where a large volume of disk storage must be shared across different MediaAgents without provisioning storage for each attached server.
     
  2. Configuration Alternative for LAN Free Storage Access:  If the customer environment cannot tolerate any LAN traffic to the disk, but the configuration requires sharing the same LUN for failover between MediaAgents, then a Clustered configuration for the MediaAgents is recommended.  This will allow seamless failover for writes / reads to the disk in the event of a failure on any node in the Cluster.
     
  3. If the customer requires LAN Free backups, without a LUN sharing configuration, it is recommended to deploy conventional Magnetic Libraries with each LUN presented as dedicated storage to each MediaAgent.  In the event of a MediaAgent failure, the customer is required to rezone / re-present the affected LUN to another MediaAgent followed by a Magnetic library migration, accomplished through a single task accessed through the CommServe console GUI.  Manual intervention is only required in the event of a MediaAgent server failure.
     
  4. If the customer requires LAN free backups and concurrent access from multiple MediaAgents at the same time to the same disk volume, then a clustered file system (for example, HP PolyServe) should be configured to allow access to the same LUN from different systems.  CommVault is configured to write to the clustered file system via static shared libraries, with sharing arbitrated by the clustered file system management application.
     
  5. If options ONE through FOUR do not work for the customer, CommVault Dynamic Shared Mountpath may be deployed under the following configuration requirements:

a.  Only one MediaAgent is the writer and the other MediaAgent(s) are configured as the reader(s).  This configuration helps in LAN free AuxCopy modes where the second MediaAgent will have the tape drive directly attached.  GridStor is NOT recommended for use in this configuration.

b.  In the event of a MediaAgent failure in this configuration, failover of the MediaAgents will not be automatic, as the same disk volume cannot be concurrently accessible by different MediaAgents.  Write operations from the different MediaAgents cannot be allowed in this configuration.


Caveats Associated with Shared Dynamic Mountpath


MediaAgent Failure – Risk of Data Corruption During Data Protection Operations


Scenario:  MediaAgent server failure during a data protection operation to a Dynamically Shared Mountpath. 


The connectivity failure may be due to a loss of network connectivity between the MediaAgent and the CommServe, downed CVD services on the MediaAgent, or a server failure on the MediaAgent.  Any of these issues may result in data corruption in the writes to the shared LUN.  The failure of the MediaAgent will also lock out other MediaAgents from accessing the shared LUN, as the SCSI-3 lock on the LUN held by the downed MediaAgent is preserved by the CommServe until it is released.


CommVault provides a manual mechanism for issuing a preempt reservation command to release the locked shared LUN for access by other MediaAgents within the CommCell.  Only manual intervention is available in this instance, as it is assumed that the customer will want to determine the state of the array / MediaAgent before continuing data protection operations associated with the shared LUN.  The intent of this manual process is to mitigate any risk of data corruption in writes to the array.  There is no consistently reliable process by which automated pre-emption can be configured to release the lock on the shared LUN.


If the MediaAgent that is actively writing to the shared mountpath becomes inaccessible on the network, the next flush command issued to that MediaAgent from the CommServe will fail due to the lack of connectivity to the active MediaAgent.  This connection failure CANNOT be disregarded: the data written to the shared disk will not be flushed from the array, and the most likely consequence is corruption of the data already written to the shared LUN.  It is for this reason that CommVault does not allow the backup to continue in the case of a connectivity failure to the MediaAgent.

Additional Info:


Foot Notes:


1 - e.g., Veritas Dynamic Multi Path, HDS Dynamic Link Manager, QLogic SANSurfer, etc.  EMC PowerPath must be configured in ClarOpt mode to support path management with Shared DMP configurations.


2 - From “Backup-to-disk performance tuning,” by Craig Everett, Sep 2006: http://searchstorage.techtarget.com/magPrintFriendly/0,293813,sid5_gci1258320,00.html


3 - From “HP StorageWorks Enterprise Virtual Array 3000 and 5000 configuration best practices” white paper: http://h71028.www7.hp.com/ERC/downloads/5982-9140EN.pdf


Glossary


1 – Spool Copy - The Spool Copy feature allows you to define a primary copy to be used as a temporary holding area for protected data until the data is copied to an active synchronous copy.  This copy has a retention rule of 0 days and 0 cycles; hence, once an auxiliary copy operation is performed, all data on this copy is pruned when data aging is run.  Note that a primary copy can be defined as a spool copy only when there is an active synchronous copy for the storage policy.


Appendix


– SCSICmd Tool


The ScsiCmd tool is used to test whether the hardware supports SCSI-3 reservation.  (SCSI-3 reservation is used when you enable the Use SCSI Reserve for contention resolution option in the Library Properties dialog box.)  This tool is installed along with the MediaAgent software and is available on all MediaAgent computers. 


Appendix 3 – Additional Detail on CLARiiON Alignment Offset


The host via its operating system may record private information, known as a signature block, at the start of a RAID-0, RAID-5, or RAID-10 stripe.  The alignment offset can be used to force the beginning of usable LUN space to start past the signature block.  The signature block size, which is dictated by the storage array manufacturer (EMC, in this case), consumes only 16 KB of space, but the NT file system (NTFS)—a file system for Microsoft Windows® operating systems—reserves 31.5 KB of space for the signature block.  This reservation of extra space at the beginning of the LUN can increase the risk of disk crossings or stripe crossings.  Stripe crossings are similar in principle to disk crossings, but refer to accessing two stripes instead of one.  If administrators do not adjust the NTFS signature block size from its default of 31.5 KB to 16 KB, the stripe will not align with the signature block and data will be written to two stripes instead of one.


For example, because NTFS reserves 31.5 KB of signature space, if a LUN has an element size of 64 KB with the default alignment offset of 0 (both are default Navisphere settings), a 64 KB write to that LUN would result in a disk crossing even though it would seem to fit perfectly on the disk.  A disk crossing can also be referred to as a split I/O because the read or write must be split into two or more segments.  In this case, 32.5 KB would be written to the first disk and 31.5 KB would be written to the following disk, because the beginning of the stripe is offset by 31.5 KB of signature space.  This problem can be avoided by providing the correct alignment offset.


Each alignment offset value represents one block.  Therefore, EMC recommends setting the alignment offset value to 63, because 63 times 512 bytes is 31.5 KB.  Contact Dell or EMC to determine the latest recommended alignment offset value, because improper use will degrade performance.  If the alignment offset requirement is uncertain, leave it at the Navisphere default.
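
To make the offset arithmetic above easier to follow, the Python sketch below reproduces it: a 63-block (31.5 KB) signature reservation causes an unaligned 64 KB write to straddle two 64 KB stripe elements, while a matching alignment offset keeps the same write within a single element.  This is a worked example of the numbers quoted in this appendix, not a configuration tool.

```python
# Worked example of the alignment arithmetic described in this appendix.
# With a 64 KB element size and a 31.5 KB (63-block) signature area, an
# unaligned 64 KB write straddles two elements; a 63-block alignment offset
# pushes usable space to the next element boundary and avoids the split.

BLOCK = 512            # bytes per block
ELEMENT = 64 * 1024    # element (stripe segment) size in bytes


def elements_touched(start_byte: int, io_bytes: int) -> int:
    """Count how many 64 KB elements a contiguous I/O touches."""
    first = start_byte // ELEMENT
    last = (start_byte + io_bytes - 1) // ELEMENT
    return last - first + 1


if __name__ == "__main__":
    signature = 63 * BLOCK                      # 31.5 KB reserved ahead of the data
    print(signature / 1024, "KB signature area")

    # No alignment offset: NTFS data starts right after the signature area,
    # so the first 64 KB write is split 32.5 KB / 31.5 KB across two elements.
    print("unaligned, elements touched:", elements_touched(signature, 64 * 1024))  # 2

    # With the 63-block alignment offset, usable space begins on an element
    # boundary and the same 64 KB write stays within a single element.
    print("aligned, elements touched:", elements_touched(0, 64 * 1024))            # 1
```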
