Disk IOPS Calculations

In today's era of cloud-based mainstream storage, IOPS remains one of the primary metrics of storage performance. The disks we usually benchmark are not raw physical disks but logical disks that have passed through server RAID and virtualization layers. Their performance is affected by more factors and is harder to analyze: a setting at any logical layer can affect the performance of the disks layered above it. Performance testing and analysis of such disks therefore mostly consists of comparing and tuning different configurations.

Performance analysis of raw physical disks, by contrast, is much simpler. The material below is excerpted mainly from an article by Dr. Liu Aigui; it explains the performance metrics and characteristics of traditional (mechanical) disks and solid-state disks, as a foundation for disk performance testing and analysis.

Dr. Liu Aigui's original article: http://blog.csdn.net/liuaigui/article/details/6168186

IOPS (Input/Output Operations Per Second) is the number of input/output operations (reads or writes) completed per second, and one of the primary metrics of disk performance. It measures how many I/O requests a system can service per unit of time, normally expressed per second; an I/O request is typically a read or write of data. For applications with frequent random reads and writes, such as OLTP (Online Transaction Processing), IOPS is the key metric. The other important metric is throughput, the amount of data that can be successfully transferred per unit of time. Applications dominated by large sequential reads and writes, such as VOD (Video On Demand), care more about throughput.
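The two metrics describe the same I/O stream from different angles, linked by the I/O size. A minimal Python sketch (the 10,000 IOPS and 4 KiB figures below are illustrative assumptions, not from the text):

```python
def throughput_mb_s(iops, io_size_bytes):
    """Bandwidth (MB/s) implied by a given IOPS at a fixed I/O size."""
    return iops * io_size_bytes / 1_000_000

# A hypothetical 10,000-IOPS device doing 4 KiB random I/O moves
# only ~41 MB/s: small-I/O workloads (OLTP) are IOPS-bound.
print(throughput_mb_s(10_000, 4096))      # -> 40.96

# 200 large 1 MiB I/Os per second already exceed 200 MB/s: large
# sequential workloads (VOD) are throughput-bound.
print(throughput_mb_s(200, 1_048_576))    # -> 209.7152
```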


Traditional disks

A traditional disk, such as an FC, SAS, or SATA drive, is essentially a mechanical device, typically spinning at 5400/7200/10K/15K rpm. The key factor in its performance is the disk service time, i.e. the time it takes to complete one I/O request, which consists of three parts: seek time, rotational latency, and data transfer time.
Seek time (Tseek) is the time needed to move the read/write head to the correct track. The shorter the seek time, the faster the I/O; current disks have average seek times of roughly 3-15 ms.
Rotational latency (Trotation) is the time needed for the platter to rotate the sector holding the requested data under the read/write head. It depends on the rotational speed and is usually taken as half the time of one full revolution. For example, a 7200 rpm disk has an average rotational latency of about 60*1000/7200/2 = 4.17 ms, while a 15000 rpm disk's is about 2 ms.
Data transfer time (Ttransfer) is the time needed to transfer the requested data. It depends on the data transfer rate and equals the data size divided by that rate. IDE/ATA reaches 133 MB/s and SATA II reaches 300 MB/s of interface transfer rate, so the transfer time is usually far smaller than the other two components.

Ignoring the data transfer time, the theoretical maximum IOPS of a disk can therefore be computed as IOPS = 1000 ms / (Tseek + Trotation). Assuming an average seek time of 3 ms, the theoretical maximum IOPS for disks spinning at 7200, 10K, and 15K rpm is:

IOPS = 1000 / (3 + 60000/7200/2)  = 140
IOPS = 1000 / (3 + 60000/10000/2) = 167
IOPS = 1000 / (3 + 60000/15000/2) = 200
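The calculation above can be sketched in Python; the 3 ms seek time is the same assumption used in the text, and the optional transfer term shows why it is safe to ignore:

```python
def max_iops(seek_ms, rpm, transfer_ms=0.0):
    """Theoretical IOPS ceiling for a disk spinning at `rpm`.

    IOPS = 1000 ms / (Tseek + Trotation), where the average
    rotational latency is half a revolution: (60000 / rpm) / 2 ms.
    `transfer_ms` is usually negligible and defaults to zero.
    """
    rotation_ms = 60_000 / rpm / 2
    return 1000 / (seek_ms + rotation_ms + transfer_ms)

for rpm in (7200, 10_000, 15_000):
    print(f"{rpm:>6} rpm: {max_iops(3, rpm):.0f} IOPS")
# -> 7200 rpm: 140 IOPS, 10000 rpm: 167 IOPS, 15000 rpm: 200 IOPS
```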


Solid-state disks
A solid-state disk (SSD) is an electronic device. It avoids the seek and rotation costs of a traditional disk, so the cost of addressing a storage cell drops dramatically and IOPS can be very high, reaching tens or even hundreds of thousands. In practice, measured IOPS is affected by many factors, including the I/O workload characteristics (read/write ratio, sequential vs. random access, number of worker threads, queue depth, record size), system configuration, operating system, and disk driver. Disk IOPS comparisons must therefore be made under the same test conditions, and even then some random variation remains. IOPS is usually broken down into the following metrics:
Total IOPS: IOPS under a mixed read/write, sequential/random I/O workload. This best matches real I/O behavior, and most applications care about this metric.
Random Read IOPS: IOPS under a 100% random read workload.
Random Write IOPS: IOPS under a 100% random write workload.
Sequential Read IOPS: IOPS under a 100% sequential read workload.
Sequential Write IOPS: IOPS under a 100% sequential write workload.


The main IOPS benchmark tools are Iometer, IOzone, and fio, which together can measure disk IOPS under different scenarios. For an application system, first characterize the data workload, then pick the appropriate IOPS metric for measurement and comparative analysis, and use the results to choose suitable storage media and software.
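As a sketch, a fio job file for a 4 KB random-read IOPS test might look like the following; the job name, file path, size, and runtime are illustrative assumptions, and the `libaio` engine requires Linux:

```ini
; Hypothetical fio job: 4 KB random reads at queue depth 32.
[global]
; Linux native asynchronous I/O
ioengine=libaio
; bypass the page cache so the device itself is measured
direct=1
runtime=30
time_based

[randread-4k]
rw=randread
bs=4k
iodepth=32
size=1G
; illustrative path; benchmark a raw device or dedicated file in real tests
filename=/tmp/fio.test
```

Run it with `fio randread-4k.fio` and read the `IOPS=` figure from the job summary; changing `rw` to `randwrite`, `read`, or `write` covers the other four metrics listed above.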


The disk IOPS figures below come from http://en.wikipedia.org/wiki/IOPS and are provided as a basic reference.



Mechanical hard drives

Some commonly accepted averages for random I/O operations, calculated as IOPS = 1/(seek + latency):

Device | Type | IOPS | Interface | Notes
5,400 rpm SATA drives | HDD | ~15-50 IOPS[2] | SATA 3 Gbit/s |
7,200 rpm SATA drives | HDD | ~75-100 IOPS[2] | SATA 3 Gbit/s |
10,000 rpm SATA drives | HDD | ~125-150 IOPS[2] | SATA 3 Gbit/s |
10,000 rpm SAS drives | HDD | ~140 IOPS[2] | SAS |
15,000 rpm SAS drives | HDD | ~175-210 IOPS[2] | SAS |

Solid-state devices

Device | Type | IOPS | Interface | Notes
Intel X25-M G2 (MLC) | SSD | ~8,600 IOPS[11] | SATA 3 Gbit/s | Intel's data sheet[12] claims 6,600/8,600 IOPS (80 GB/160 GB version) and 35,000 IOPS for random 4 KB writes and reads, respectively.
Intel X25-E (SLC) | SSD | ~5,000 IOPS[13] | SATA 3 Gbit/s | Intel's data sheet[14] claims 3,300 IOPS and 35,000 IOPS for writes and reads, respectively. 5,000 IOPS are measured for a mix. The Intel X25-E G1 has around 3 times higher IOPS than the Intel X25-M G2.[15]
G.Skill Phoenix Pro | SSD | ~20,000 IOPS[16] | SATA 3 Gbit/s | SandForce-1200 based SSD with enhanced firmware; states up to 50,000 IOPS, but benchmarking shows ~25,000 IOPS for random read and ~15,000 IOPS for random write for this particular drive.[16]
OCZ Vertex 3 | SSD | Up to 60,000 IOPS[17] | SATA 6 Gbit/s | Random write 4 KB (aligned)
Corsair Force Series GT | SSD | Up to 85,000 IOPS[18] | SATA 6 Gbit/s | 240 GB drive, 555 MB/s sequential read & 525 MB/s sequential write, random write 4 KB test (aligned)
Samsung SSD 850 PRO | SSD | 100,000 read IOPS, 90,000 write IOPS[19] | SATA 6 Gbit/s | 4 KB aligned random I/O at QD32; 10,000 read IOPS, 36,000 write IOPS at QD1; 550 MB/s sequential read, 520 MB/s sequential write on 256 GB and larger models; 550 MB/s sequential read, 470 MB/s sequential write on 128 GB model[19]
OCZ Vertex 4 | SSD | Up to 120,000 IOPS[20] | SATA 6 Gbit/s | 256 GB drive, 560 MB/s sequential read & 510 MB/s sequential write, random read 4 KB test 90K IOPS, random write 4 KB test 85K IOPS
(IBM) Texas Memory Systems RamSan-20 | SSD | 120,000+ random read/write IOPS[21] | PCIe | Includes RAM cache
Fusion-io ioDrive | SSD | 140,000 read IOPS, 135,000 write IOPS[22] | PCIe |
Virident Systems tachIOn | SSD | 320,000 sustained read IOPS and 200,000 sustained write IOPS using 4 KB blocks[23] | PCIe |
OCZ RevoDrive 3 X2 | SSD | 200,000 random write 4K IOPS[24] | PCIe |
Fusion-io ioDrive Duo | SSD | 250,000+ IOPS[25] | PCIe |
Violin Memory Violin 3200 | SSD | 250,000+ random read/write IOPS[26] | PCIe/FC/InfiniBand/iSCSI | Flash memory array
WHIPTAIL ACCELA | SSD | 250,000/200,000+ write/read IOPS[27] | Fibre Channel, iSCSI, InfiniBand/SRP, NFS, SMB | Flash-based storage array
DDRdrive X1 | SSD | 300,000+ (512 B random read IOPS) and 200,000+ (512 B random write IOPS)[28][29][30][31] | PCIe |
SolidFire SF3010/SF6010 | SSD | 250,000 4 KB read/write IOPS[32] | iSCSI | Flash-based storage array (5RU)
Intel SSD 750 Series | SSD | 440,000 read IOPS, 290,000 write IOPS[33][34] | NVMe over PCIe 3.0 x4, U.2 and HHHL expansion card | 4 KB aligned random I/O with four workers at QD32 (effectively QD128), 1.2 TB model[34]; up to 2.4 GB/s sequential read, 1.2 GB/s sequential write[33]
Samsung SSD 960 EVO | SSD | 380,000 read IOPS, 360,000 write IOPS[35] | NVMe over PCIe 3.0 x4, M.2 | 4 KB aligned random I/O with four workers at QD4 (effectively QD16)[36], 1 TB model; 14,000 read IOPS, 50,000 write IOPS at QD1; 330,000 read IOPS, 330,000 write IOPS on 500 GB model; 300,000 read IOPS, 330,000 write IOPS on 250 GB model; up to 3.2 GB/s sequential read, 1.9 GB/s sequential write[35]
Samsung SSD 960 PRO | SSD | 440,000 read IOPS, 360,000 write IOPS[35] | NVMe over PCIe 3.0 x4, M.2 | 4 KB aligned random I/O with four workers at QD4 (effectively QD16)[36], 1 TB and 2 TB models; 14,000 read IOPS, 50,000 write IOPS at QD1; 330,000 read IOPS, 330,000 write IOPS on 512 GB model; up to 3.5 GB/s sequential read, 2.1 GB/s sequential write[35]
(IBM) Texas Memory Systems RamSan-720 Appliance | Flash/DRAM | 500,000 optimal read, 250,000 optimal write 4 KB IOPS[37] | FC / InfiniBand |
OCZ Single SuperScale Z-Drive R4 PCI-Express SSD | SSD | Up to 500,000 IOPS[38] | PCIe |
WHIPTAIL INVICTA | SSD | 650,000/550,000+ read/write IOPS[39] | Fibre Channel, iSCSI, InfiniBand/SRP, NFS | Flash-based storage array
Violin Memory Violin 6000 | 3RU flash memory array | 1,000,000+ random read/write IOPS[40] | FC/InfiniBand/10Gb (iSCSI)/PCIe |
(IBM) Texas Memory Systems RamSan-630 Appliance | Flash/DRAM | 1,000,000+ 4 KB random read/write IOPS[41] | FC / InfiniBand |
IBM FlashSystem 840 | Flash/DRAM | 1,100,000+ 4 KB random read / 600,000 4 KB write IOPS[42] | 8G FC / 16G FC / 10G FCoE / InfiniBand | Modular 2U storage shelf, 4 TB-48 TB
Fusion-io ioDrive Octal (single PCI Express card) | SSD | 1,180,000+ random read/write IOPS[43] | PCIe |
OCZ 2x SuperScale Z-Drive R4 PCI-Express SSD | SSD | Up to 1,200,000 IOPS[38] | PCIe |
(IBM) Texas Memory Systems RamSan-70 | Flash/DRAM | 1,200,000 random read/write IOPS[44] | PCIe | Includes RAM cache
Kaminario K2 | SSD | Up to 2,000,000 IOPS[45]; 1,200,000 IOPS in the SPC-1 benchmark simulating business applications[46][47] | FC | MLC flash
NetApp FAS6240 cluster | Flash/Disk | 1,261,145 SPECsfs2008 nfsv3 IOPS using 1,440 15K disks across 60 shelves, with virtual storage tiering[48] | NFS, SMB, FC, FCoE, iSCSI | SPECsfs2008 is the latest version of the Standard Performance Evaluation Corporation benchmark suite measuring file server throughput and response time, providing a standardized method for comparing performance across different vendor platforms. http://www.spec.org/sfs2008
Fusion-io ioDrive2 | SSD | Up to 9,608,000 IOPS[49] | PCIe | Only via demonstration so far
E8 Storage | SSD | Up to 10 million IOPS[50] | 10-100 Gb Ethernet | Rack-scale flash appliance
EMC DSSD D5 | Flash | Up to 10 million IOPS[51] | PCIe | Out of box, up to 48 clients with high availability; PCIe rack-scale flash appliance
Pure Storage M50 | Flash | Up to 220,000 32K IOPS, <1 ms average latency, up to 7 GB/s bandwidth[52] | 16 Gbit/s Fibre Channel, 10 Gbit/s Ethernet iSCSI, 10 Gbit/s replication ports, 1 Gbit/s management ports | 3U-7U, 1007-1447 W (nominal), 95 lbs (43.1 kg) fully loaded + 44 lbs per expansion shelf, 5.12" x 18.94" x 29.72" chassis
Nimble Storage AF9000[53][better source needed] | Flash | Up to 1.4 million IOPS | 16 Gbit/s Fibre Channel, 10 Gbit/s Ethernet iSCSI, 1/10 Gbit/s management ports | 3600 W; up to 2,212 TB raw capacity; up to 8 expansion shelves; 16 1/10 GBit iSCSI mgmt ports; optional 48 1/10 GBit iSCSI ports; optional 96 8/16 GBit Fibre Channel ports; thermal 11,792 BTU
