FIO is an excellent tool for measuring IOPS, well suited to stress-testing and validating hardware. It supports 13 different I/O engines, including sync, mmap, libaio, posixaio, SG v3, splice, null, network, syslet, guasi, solarisaio, and more.
Option notes:
filename=/dev/sdb1 Test target; usually a device, or a file under the data directory of the disk being tested.
direct=1 Bypass the OS buffer cache during the test, so the results are more realistic.
rw=randwrite Random-write I/O.
rw=randrw Mixed random read and write I/O.
bs=16k Each I/O uses a 16k block size.
bsrange=512-2048 Same idea, but specifies a range of block sizes instead of a fixed one.
size=5G The test file is 5G in total, tested with 4k I/Os.
numjobs=30 Run the test with 30 threads.
runtime=1000 Run for 1000 seconds; if omitted, fio keeps going until the 5G file has been written through in 4k I/Os.
ioengine=psync Use the psync I/O engine.
rwmixwrite=30 In mixed read/write mode, writes make up 30%.
group_reporting Affects result display: aggregate the statistics of all jobs into one report.
lockmem=1G Pin 1G of memory, simulating a test with only 1G available.
zero_buffers Initialize the I/O buffers with zeros.
nrfiles=8 Number of files generated per job.
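The options above map directly onto fio's job-file syntax as well. A minimal sketch of an equivalent job file for a random-read run (the device path and section name here are illustrative):

```ini
[global]
direct=1
ioengine=psync
bs=16k
size=5G
numjobs=30
runtime=1000
group_reporting

[randread-test]
filename=/dev/sdb1
rw=randread
```

Global options apply to every job section that follows; a per-job section can override any of them.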
So, random read:
fio -filename=/dev/sda -direct=1 -iodepth 1 -thread -rw=randread -ioengine=psync -bs=16k -size=200G -numjobs=30 -runtime=1000 -group_reporting -name=mytest
Sequential write:
fio -filename=/dev/sda -direct=1 -iodepth 1 -thread -rw=write -ioengine=psync -bs=16k -size=200G -numjobs=30 -runtime=1000 -group_reporting -name=mytest
Random write:
fio -filename=/dev/sda -direct=1 -iodepth 1 -thread -rw=randwrite -ioengine=psync -bs=16k -size=200G -numjobs=30 -runtime=1000 -group_reporting -name=mytest
Mixed random read/write:
fio -filename=/dev/sda -direct=1 -iodepth 1 -thread -rw=randrw -rwmixread=70 -ioengine=psync -bs=16k -size=200G -numjobs=30 -runtime=100 -group_reporting -name=mytest -ioscheduler=noop
3. A real test example
Testing mixed random read/write:
[root@Dell]# fio -filename=/dev/sda -direct=1 -iodepth 1 -thread -rw=randrw -rwmixread=70 -ioengine=psync -bs=16k -size=200G -numjobs=30 -runtime=100 -group_reporting -name=mytest1
mytest1: (g=0): rw=randrw, bs=16K-16K/16K-16K/16K-16K, ioengine=psync, iodepth=1
...
mytest1: (g=0): rw=randrw, bs=16K-16K/16K-16K/16K-16K, ioengine=psync, iodepth=1
fio-2.1.2
Starting 30 threads
Jobs: 30 (f=30): [mmmmmmmmmmmmmmmmmmmmmmmmmmmmmm] [18.8% done] [10192KB/3376KB/0KB /s] [637/211/0 iops] [eta 01m:22s]
...
Jobs: 30 (f=30): [mmmmmmmmmmmmmmmmmmmmmmmmmmmmmm] [100.0% done] [9264KB/3536KB/0KB /s] [579/221/0 iops] [eta 00m:00s]
mytest1: (groupid=0, jobs=30): err= 0: pid=17792: Tue Nov 12 10:55:58 2013
read : io=948896KB, bw=9475.1KB/s, iops=592, runt=100138msec
clat (usec): min=67, max=796794, avg=49878.72, stdev=59636.00
lat (usec): min=68, max=796794, avg=49879.01, stdev=59636.00
clat percentiles (msec):
| 1.00th=[ 4], 5.00th=[ 7], 10.00th=[ 9], 20.00th=[ 12],
| 30.00th=[ 16], 40.00th=[ 22], 50.00th=[ 29], 60.00th=[ 39],
| 70.00th=[ 53], 80.00th=[ 76], 90.00th=[ 120], 95.00th=[ 165],
| 99.00th=[ 293], 99.50th=[ 351], 99.90th=[ 494], 99.95th=[ 553],
| 99.99th=[ 701]
bw (KB /s): min= 20, max= 967, per=3.38%, avg=320.53, stdev=116.83
write: io=380816KB, bw=3802.1KB/s, iops=237, runt=100138msec
clat (usec): min=64, max=120607, avg=1801.07, stdev=5409.97
lat (usec): min=65, max=120610, avg=1803.86, stdev=5409.96
clat percentiles (usec):
| 1.00th=[ 69], 5.00th=[ 73], 10.00th=[ 77], 20.00th=[ 81],
| 30.00th=[ 84], 40.00th=[ 87], 50.00th=[ 90], 60.00th=[ 113],
| 70.00th=[ 724], 80.00th=[ 3248], 90.00th=[ 4384], 95.00th=[ 5344],
| 99.00th=[33536], 99.50th=[41728], 99.90th=[59136], 99.95th=[68096],
| 99.99th=[112128]
bw (KB /s): min= 17, max= 563, per=3.52%, avg=133.68, stdev=75.04
lat (usec) : 100=16.41%, 250=3.47%, 500=0.10%, 750=0.12%, 1000=0.23%
lat (msec) : 2=0.86%, 4=4.57%, 10=13.39%, 20=16.08%, 50=22.27%
lat (msec) : 100=12.87%, 250=8.49%, 500=1.08%, 750=0.06%, 1000=0.01%
cpu : usr=0.02%, sys=0.07%, ctx=83130, majf=0, minf=7
IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
issued : total=r=59306/w=23801/d=0, short=r=0/w=0/d=0
Run status group 0 (all jobs):
READ: io=948896KB, aggrb=9475KB/s, minb=9475KB/s, maxb=9475KB/s, mint=100138msec, maxt=100138msec
WRITE: io=380816KB, aggrb=3802KB/s, minb=3802KB/s, maxb=3802KB/s, mint=100138msec, maxt=100138msec
Disk stats (read/write):
sda: ios=59211/24192, merge=0/289, ticks=2951434/63353, in_queue=3092383, util=99.97%
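Two quick sanity checks on the summary above: with a fixed 16k block size, bandwidth should be close to IOPS times block size, and the "lat" distribution buckets (reads and writes combined) should sum to roughly 100%:

```shell
# Bandwidth (KB/s) ~= IOPS * block size (KB), since bs is fixed at 16k:
echo $((592 * 16))    # read:  9472, close to the reported 9475.1KB/s
echo $((237 * 16))    # write: 3792, close to the reported 3802.1KB/s

# The lat percentage buckets from the report should add up to ~100%:
echo '16.41 3.47 0.10 0.12 0.23 0.86 4.57 13.39 16.08 22.27 12.87 8.49 1.08 0.06 0.01' \
  | awk '{s=0; for(i=1;i<=NF;i++) s+=$i; printf "%.2f\n", s}'   # 100.01
```

Small discrepancies are expected: the reported bandwidth is averaged over the exact runtime, while IOPS is rounded down.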
fio can also take its stress-test configuration from a job file, and the --debug=io option can be used to check whether fio is actually issuing I/O.
[root@vmforDB05 tmp]# cat fio_test
[global]
bsrange=512-2048
ioengine=libaio
userspace_reap
rw=randrw
rwmixwrite=20
time_based
runtime=180
direct=1
group_reporting
randrepeat=0
norandommap
ramp_time=6
iodepth=16
iodepth_batch=8
iodepth_low=8
iodepth_batch_complete=8
exitall
[test]
filename=/dev/mapper/cachedev
numjobs=1
Common parameter notes
bsrange=512-2048 //block size range, from 512 bytes to 2048 bytes
ioengine=libaio //specify the I/O engine
userspace_reap //with libaio, reap completed events from user space to speed up async I/O harvesting
rw=randrw //mixed random read/write I/O; default read/write ratio is 50:50
rwmixwrite=20 //in mixed read/write mode, writes take 20%
time_based //keep running for the full runtime; if the specified data set completes early, repeat the workload
runtime=180 //the test stops after 180 seconds
direct=1 //use non-buffered (direct) I/O
group_reporting //when numjobs is set, report per group instead of per job
randrepeat=0 //make the generated random sequence non-repeatable across runs
norandommap //don't keep a map of covered blocks; random I/O may hit the same blocks more than once
ramp_time=6 //warm up for 6 seconds before statistics are collected
iodepth=16 //keep up to 16 I/Os in flight
iodepth_batch=8 //submit I/Os in batches of 8
iodepth_low=8 //refill the queue once it drains down to 8
iodepth_batch_complete=8 //reap completions in batches of 8
exitall //when one job finishes, stop all of them
filename=/dev/mapper/cachedev //file or device to stress-test
numjobs=1 //number of jobs, i.e. the concurrency; default is 1
size=200G //total amount of I/O for this job
refill_buffers //refill the I/O buffer on every submit
overwrite=1 //allow the file to be overwritten
sync=1 //use synchronous (O_SYNC) I/O
fsync=1 //fsync the data after every write I/O
invalidate=1 //invalidate the buffer cache for the file before starting I/O
directory=/your_dir //prefix for the filename parameter value
thinktime=600 //wait 600 microseconds between I/Os (the value is in microseconds, not seconds)
thinktime_spin=200 //busy-spin on the CPU for 200 microseconds of the thinktime, sleeping for the remainder
thinktime_blocks=2 //number of blocks to issue before each thinktime pause
bssplit=4k/30:8k/40:16k/30 //30% of I/Os at 4k, 40% at 8k, 30% at 16k
rwmixread=70 //reads take 70%
fio fio_test > io_test.log
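With fio's output redirected to a log file as above, the headline numbers can be grepped back out afterwards. A small sketch using a stand-in log line in the fio 2.x summary format (the file name follows the redirect above):

```shell
# Stand-in for a line from a real io_test.log, in fio 2.x summary format:
printf 'read : io=948896KB, bw=9475.1KB/s, iops=592, runt=100138msec\n' > io_test.log

# Pull the IOPS figure out of the log:
grep -o 'iops=[0-9]*' io_test.log   # prints iops=592
```

The same pattern works on the aggrb= fields of the "Run status" section for bandwidth.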