FIO is an excellent tool for testing IOPS; it is used to stress-test and validate hardware, and supports 13 different I/O engines.
[root@ceph1 ~]# wget http://brick.kernel.dk/snaps/fio-2.0.7.tar.gz
[root@ceph1 ~]# yum install libaio-devel -y
[root@ceph1 ~]# tar -zxvf fio-2.0.7.tar.gz
[root@ceph1 ~]# cd fio-2.0.7/
[root@ceph1 fio-2.0.7]# make
[root@ceph1 fio-2.0.7]# make install
[root@ceph1 ~]# fdisk -l
Disk /dev/sdb: 21.5 GB, 21474836480 bytes, 41943040 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk /dev/mapper/ceph--01c3a345--e83d--4495--89d3--a25f01be6cf5-osd--block--6909d49d--67cf--472b--8211--656fa8b06687: 21.5 GB, 21470642176 bytes, 41934848 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
-filename=/dev/sdb the test target; usually the data directory of the disk under test. If /dev/sdb1 does not exist, use /dev/sdb.
-direct=1 bypass the machine's own buffer cache during the test, making the results more realistic
rw=randwrite test random write I/O
rw=randrw test mixed random read and write I/O
bs=16k the block size of a single I/O is 16k
bsrange=512-2048 specify a range of block sizes
size=5g the test file size is 5g, tested with 4k I/Os at a time
numjobs=30 the number of test threads is 30
runtime=1000 the test runs for 1000 seconds; if omitted, fio keeps writing the 5g file in 4k I/Os until it is fully written
ioengine=psync use the psync I/O engine
rwmixwrite=30 in mixed read/write mode, writes account for 30%
group_reporting controls result display: aggregate the statistics of all jobs
lockmem=1g use only 1g of memory for the test
zero_buffers initialize the system buffers with zeros
nrfiles=8 number of files generated per job
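These options can also be placed in a fio job file instead of on the command line; below is a minimal sketch equivalent to the command that follows (the file name mytest.fio is illustrative):
# mytest.fio
[global]
filename=/dev/sdb
direct=1
ioengine=psync
bs=16k
size=1G
runtime=100
thread
group_reporting

[mytest]
rw=randread
numjobs=1
Run it with: fio mytest.fio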
[root@ceph1 ~]# fio -filename=/dev/sdb -direct=1 -iodepth 1 -thread -rw=randread -ioengine=psync -bs=16k -size=1G -numjobs=1 -runtime=100 -group_reporting -name=mytest
The parameters below can be adjusted according to the parameter descriptions above.
[root@ceph1 ~]# cat fio_test.sh
# Random read
fio -filename=/dev/sdb -direct=1 -iodepth 1 -thread -rw=randread -ioengine=psync -bs=16k -size=1G -numjobs=1 -runtime=100 -group_reporting -name=mytest
# Sequential read
fio -filename=/dev/sdb -direct=1 -iodepth 1 -thread -rw=read -ioengine=psync -bs=16k -size=2G -numjobs=30 -runtime=1000 -group_reporting -name=mytest
# Random write
fio -filename=/dev/sdb -direct=1 -iodepth 1 -thread -rw=randwrite -ioengine=psync -bs=16k -size=2G -numjobs=30 -runtime=1000 -group_reporting -name=mytest
# Sequential write
fio -filename=/dev/sdb -direct=1 -iodepth 1 -thread -rw=write -ioengine=psync -bs=16k -size=2G -numjobs=30 -runtime=1000 -group_reporting -name=mytest
# Mixed random read/write
fio -filename=/dev/sdb -direct=1 -iodepth 1 -thread -rw=randrw -rwmixread=70 -ioengine=psync -bs=16k -size=2G -numjobs=30 -runtime=100 -group_reporting -name=mytest -ioscheduler=noop
# Practical test example
fio -filename=/dev/sdb -direct=1 -iodepth 1 -thread -rw=randrw -rwmixread=70 -ioengine=psync -bs=16k -size=2G -numjobs=30 -runtime=100 -group_reporting -name=mytest1
[root@ceph1 ~]#
A fio-based stress-test scheme: run the test 10000 times, accumulate the results, then divide by 10000 to obtain the average; the output contains the read and write figures (see the averaging sketch after the sample output below).
[root@ceph1 ~]# fio -filename=/dev/sdb -direct=1 -iodepth 1 -thread -rw=randread -ioengine=psync -bs=16k -size=1G -numjobs=1 -runtime=100 -group_reporting -name=mytest | grep io=
read : io=1024.0MB, bw=287045KB/s, iops=17940 , runt= 3653msec
READ: io=1024.0MB, aggrb=287045KB/s, minb=287045KB/s, maxb=287045KB/s, mint=3653msec, maxt=3653msec
write: io=803600KB, bw=8031.4KB/s, iops=501 , runt=100058msec
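A minimal sketch of that averaging loop, assuming the classic fio output format shown above (the iops= field); N is set to 10 here where the scheme above uses 10000:
#!/bin/bash
# Sketch: repeat the same fio run N times and average the read IOPS.
N=10          # the scheme above uses 10000; reduced here for a quick check
total=0
for i in $(seq 1 $N); do
    iops=$(fio -filename=/dev/sdb -direct=1 -iodepth 1 -thread -rw=randread \
               -ioengine=psync -bs=16k -size=1G -numjobs=1 -runtime=100 \
               -group_reporting -name=mytest | grep -oP 'iops=\s*\K[0-9]+' | head -1)
    total=$((total + iops))
done
echo "average read IOPS over $N runs: $((total / N))"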
Install netperf: download netperf-2.7.0
https://pan.baidu.com/s/1mijg6y0
After downloading, it must be installed on every test machine.
netperf-2.7.0.tar
[root@ceph1 ~]# tar -xvf netperf-2.7.0.tar
[root@ceph1 ~]# cd netperf-2.7.0/
[root@ceph1 netperf-2.7.0]# ./configure && make && make install
[root@ceph1 ~]# scp -r netperf-2.7.0 root@ceph2:/root/
[root@ceph1 ~]# scp -r netperf-2.7.0 root@ceph3:/root/
Start the server process on the ceph1 machine (this tests symmetric multiprocessing, SMP, i.e. multi-core):
[root@ceph1 ~]# netserver
Starting netserver with host 'IN(6)ADDR_ANY' port '12865' and family AF_UNSPEC
[root@ceph1 ~]# ps -ef | grep netperf
root 8722 1544 0 23:10 pts/0 00:00:00 grep --color=auto netperf
[root@ceph1 ~]# netstat -lnp | grep 12865
tcp6 0 0 :::12865 :::* LISTEN 8699/netserver
TCP_STREAM, sockbuf 8192 (Mbits/s): tested send sizes (10,32,64,100,128,256,512,1000,1024,1280,1448,1472,2048,8192,10000,32768,65536)
TCP_STREAM, sockbuf 65536 (Mbits/s): tested send sizes (10,32,64,100,128,256,512,1000,1024,1280,1448,1472,2048,8192,10000,32768,65536)
UDP_STREAM, sockbuf 200000 (Mbits/s): tested send sizes (10,32,64,100,128,256,512,1000,1024,1280,1448,1472,2048,8192,10000,32768,65507)
TCP_RR (Transactions/second): tested send sizes (1,10,32,64,100,128,256,512,1000,1024,1280,1448,1472,2048,8192)
UDP_RR (Transactions/second): tested send sizes (1,10,32,64,100,128,256,512,1000,1024,1280,1448,1472,2048,8192)
SCTP_STREAM, sockbuf 65536 (Mbits/s): tested send sizes (10,32,64,100,128,256,512,1000,1024,1280,1448,1472,2048,8192,10000,32768,65536)
SCTP_STREAM, sockbuf 200000 (Mbits/s): tested send sizes (10,32,64,100,128,256,512,1000,1024,1280,1448,1472,2048,8192,10000,32768,65536)
SCTP_RR, sockbuf 65536 (Transactions/second): tested send sizes (1,10,32,64,100,128,256,512,1000,1280,1448,1472,2048,8192)
(A sketch for sweeping these send sizes follows the TCP_STREAM example below.)
Run the netperf client on the ceph2 machine against the ceph1 server; the example below tests TCP_STREAM:
[root@ceph2 ~]# netperf -H 192.168.229.130 -l 10 -t TCP_STREAM
MIGRATED TCP STREAM TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to 192.168.229.130 () port 0 AF_INET
Recv Send Send
Socket Socket Message Elapsed
Size Size Size Time Throughput
bytes bytes bytes secs. 10^6bits/sec
87380 16384 16384 10.01 1790.95
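A sketch of sweeping the send sizes from the matrix above, using netperf's test-specific -m option (options after -- are test-specific; the size list is the one given above):
#!/bin/bash
# Sweep TCP_STREAM send message sizes against the ceph1 server.
for size in 10 32 64 100 128 256 512 1000 1024 1280 1448 1472 2048 8192 10000 32768 65536; do
    netperf -H 192.168.229.130 -l 10 -t TCP_STREAM -- -m $size
done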
The following tests measure the performance of request/response network traffic:
1 TCP_RR
[root@ceph2 ~]# netperf -H 192.168.229.130 -l 10 -t TCP_RR
2 TCP_CRR
[root@ceph2 ~]# netperf -H 192.168.229.130 -l 10 -t TCP_CRR
3 UDP_RR
[root@ceph2 ~]# netperf -H 192.168.229.130 -l 10 -t UDP_RR
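The RR tests also accept a test-specific request/response size pair via -- -r; a minimal sketch sweeping the RR send sizes listed above (the fixed 1-byte response size is an assumption for illustration):
#!/bin/bash
# Sweep TCP_RR request sizes; "-- -r req,resp" sets the request and response sizes.
for size in 1 10 32 64 100 128 256 512 1000 1024 1280 1448 1472 2048 8192; do
    netperf -H 192.168.229.130 -l 10 -t TCP_RR -- -r $size,1
done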
Iperf is a network performance testing tool. It can test TCP and UDP bandwidth quality, measure maximum TCP bandwidth, and, with its many options and UDP features, report bandwidth, delay jitter, and packet loss.
Installation: yum install epel-release -y && yum install iperf -y
-s run in server mode. #iperf -s
-c host run in client mode; host is the server address. #iperf -c serverip
General parameters:
-f [kmKM] report in Kbits, Mbits, KBytes, or MBytes respectively; the default is Mbits. #iperf -c 192.168.100.6 -f K
-i sec interval in seconds between reports. #iperf -c 192.168.100.6 -i 2
-l buffer size, default 8KB. #iperf -c 192.168.100.6 -l 64
-m print the maximum TCP segment size (MSS)
-o write the report and error messages to a file. #iperf -c 192.168.100.6 -o ciperflog.txt
-p specify the port the server listens on or the client connects to. #iperf -s -p 5001; iperf -c 192.168.100.55 -p 5001
-u use the UDP protocol (see the UDP sketch after the TCP examples below)
-w specify the TCP window size, default 8KB
-B bind to a host address or interface (use when the host has multiple addresses or interfaces)
-C compatibility with older versions (use when the server and client versions differ)
-M set the maximum TCP segment size (MSS)
-N set TCP no delay (disables Nagle's algorithm)
-V transmit IPv6 packets
Server-only parameters:
-D run as a daemon. #iperf -s -D
-R stop the iperf service (pairs with -D). #iperf -s -R
Client-only parameters:
-d run a bidirectional test in both directions simultaneously
-n specify the number of bytes to transfer. #iperf -c 192.168.100.6 -n 1024000
-r run the bidirectional test one direction at a time
-t test duration in seconds, default 10. #iperf -c 192.168.100.6 -t 5
-F specify a file to transfer
-T specify the TTL value
Test procedure
[root@ceph1 ~]# iperf -s -d
WARNING: option -d is not valid for server mode
------------------------------------------------------------
Server listening on TCP port 5001
TCP window size: 85.3 KByte (default)
------------------------------------------------------------
[ 4] local 192.168.229.130 port 5001 connected with 192.168.229.131 port 35716
[ ID] Interval Transfer Bandwidth
[ 4] 0.0- 0.0 sec 1.00 MBytes 590 Mbits/sec
[ 4] local 192.168.229.130 port 5001 connected with 192.168.229.131 port 35718
[ 4] 0.0- 0.1 sec 9.88 MBytes 1.42 Gbits/sec
[root@ceph2 ~]# iperf -c 192.168.229.130 -n 102400
------------------------------------------------------------
Client connecting to 192.168.229.130, TCP port 5001
TCP window size: 230 KByte (default)
------------------------------------------------------------
[ 3] local 192.168.229.131 port 35720 connected with 192.168.229.130 port 5001
[ ID] Interval Transfer Bandwidth
[ 3] 0.0- 0.0 sec 128 KBytes 0.00 bits/sec
[root@ceph2 ~]# iperf -c 192.168.229.130 -n 1024000000
------------------------------------------------------------
Client connecting to 192.168.229.130, TCP port 5001
TCP window size: 85.0 KByte (default)
------------------------------------------------------------
[ 3] local 192.168.229.131 port 35722 connected with 192.168.229.130 port 5001
[ ID] Interval Transfer Bandwidth
[ 3] 0.0- 3.9 sec 977 MBytes 2.12 Gbits/sec
[root@ceph2 ~]# iperf -c 192.168.229.130 -t 100
------------------------------------------------------------
Client connecting to 192.168.229.130, TCP port 5001
TCP window size: 230 KByte (default)
------------------------------------------------------------
[ 3] local 192.168.229.131 port 35724 connected with 192.168.229.130 port 5001
[ ID] Interval Transfer Bandwidth
[ 3] 0.0-100.0 sec 26.0 GBytes 2.23 Gbits/sec
[root@ceph2 ~]#
You can test using a hostname instead of an IP address:
[root@ceph2 ~]# iperf -c ceph1 -t 1
------------------------------------------------------------
Client connecting to ceph1, TCP port 5001
TCP window size: 178 KByte (default)
------------------------------------------------------------
[ 3] local 192.168.229.131 port 35738 connected with 192.168.229.130 port 5001
[ ID] Interval Transfer Bandwidth
[ 3] 0.0- 1.0 sec 265 MBytes 2.21 Gbits/sec
[root@ceph2 ~]#
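The examples above all use TCP; a sketch of a UDP test using the -u flag described earlier (-b, not listed above, is iperf's client-side option for the UDP target bandwidth):
# On the server:
iperf -s -u
# On the client:
iperf -c 192.168.229.130 -u -b 100M -t 10 -i 2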
The tool's syntax is: rados bench -p <pool_name> <seconds> write|seq|rand [-b block_size] [-t concurrency] [--no-cleanup]
pool_name: the pool the test runs against
seconds: how many seconds the test runs for
-b: block size, default 4M
-t: number of concurrent reads/writes, default 16
--no-cleanup: do not delete the test data when the test finishes. Before running a read test, you must first run a write test with this flag to generate the test data; after all tests are complete you can run rados -p <pool_name> cleanup to remove it.
[root@ceph1 ~]# rados bench -p fs_data 1 write --no-cleanup
hints = 1
Maintaining 16 concurrent writes of 4194304 bytes to objects of size 4194304 for up to 1 seconds or 0 objects
Object prefix: benchmark_data_ceph1_12897
sec Cur ops started finished avg MB/s cur MB/s last lat(s) avg lat(s)
0 16 16 0 0 0 - 0
1 16 32 16 59.6521 64 0.677269 0.552556
Total time run: 1.33617
Total writes made: 32
Write size: 4194304
Object size: 4194304
Bandwidth (MB/sec): 95.7964
Stddev Bandwidth: 0
Max bandwidth (MB/sec): 64
Min bandwidth (MB/sec): 64
Average IOPS: 23
Stddev IOPS: 0
Max IOPS: 16
Min IOPS: 16
Average Latency(s): 0.64773
Stddev Latency(s): 0.313858
Max latency(s): 1.33553
Min latency(s): 0.174907
[root@ceph1 ~]#
[root@ceph1 ~]# rados bench -p fs_data 1 seq
hints = 1
sec Cur ops started finished avg MB/s cur MB/s last lat(s) avg lat(s)
0 16 17 1 226.336 -1 0.0153576 0.0153576
Total time run: 0.529385
Total reads made: 32
Read size: 4194304
Object size: 4194304
Bandwidth (MB/sec): 241.79
Average IOPS: 60
Stddev IOPS: 0
Max IOPS: 31
Min IOPS: 31
Average Latency(s): 0.237907
Max latency(s): 0.5281
Min latency(s): 0.00636216
[root@ceph1 ~]#
[root@ceph1 ~]# rados bench -p fs_data 1 rand
hints = 1
sec Cur ops started finished avg MB/s cur MB/s last lat(s) avg lat(s)
0 16 17 1 248.167 -1 0.011158 0.011158
1 16 89 73 287.214 288 0.403247 0.179399
Total time run: 1.27151
Total reads made: 89
Read size: 4194304
Object size: 4194304
Bandwidth (MB/sec): 279.983
Average IOPS: 69
Stddev IOPS: 0
Max IOPS: 72
Min IOPS: 72
Average Latency(s): 0.223529
Max latency(s): 0.660905
Min latency(s): 0.00182668
[root@ceph1 ~]#
[root@ceph1 ~]# rados -p fs_data cleanup
RADOS performance testing: using the rados load-gen tool
# rados -p rbd load-gen
--num-objects number of objects initially created for the test, default 200
--min-object-size minimum object size, default 1KB, unit: bytes
--max-object-size maximum object size, default 5GB, unit: bytes
--min-op-len minimum I/O size of the load test, default 1KB, unit: bytes
--max-op-len maximum I/O size of the load test, default 2MB, unit: bytes
--max-ops maximum number of I/Os submitted at once, equivalent to iodepth
--target-throughput cap on the cumulative throughput of submitted I/O, default 5MB/s, unit: B/s
--max-backlog cap on the throughput of I/O submitted at once, default 10MB/s, unit: B/s
--read-percent proportion of reads in the read/write mix, default 80, range [0, 100]
--run-length run time, default 60s, unit: seconds
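A sketch of an invocation combining the flags above (all values are illustrative; sizes are given in bytes as noted):
# rados -p rbd load-gen --num-objects 200 --min-object-size 4096 --max-object-size 4194304 --min-op-len 4096 --max-op-len 1048576 --max-ops 16 --read-percent 70 --run-length 60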
iostat is mainly used to monitor the I/O load on system devices. On its first run, iostat displays statistics accumulated since system boot; each subsequent run displays statistics since the previous run. You can obtain the statistics you need by specifying the report count and interval.
iostat -d -k 2
The -d option displays device (disk) usage status; -k forces columns that would otherwise use blocks as the unit to use kilobytes instead;
2 means the display refreshes every 2 seconds.
[root@ceph2 ~]# iostat -d -k 1 10
Linux 3.10.0-862.el7.x86_64 (ceph2) 12/18/2018 _x86_64_ (1 CPU)
Device: tps kB_read/s kB_wrtn/s kB_read kB_wrtn
fd0 0.00 0.00 0.00 48 0
sda 7.32 23.41 218.97 820102 7670958
sdb 2.17 24.01 129.89 841084 4550488
sdc 1.67 6.26 59.65 219428 2089592
dm-0 2.45 23.83 129.89 834972 4550260
dm-1 1.87 6.09 59.64 213444 2089364
Device: tps kB_read/s kB_wrtn/s kB_read kB_wrtn
fd0 0.00 0.00 0.00 0 0
sda 1.98 0.00 16.83 0 17
sdb 0.00 0.00 0.00 0 0
sdc 0.00 0.00 0.00 0 0
tps: transfers per second; a transfer is one I/O request issued to the device. Multiple logical requests may be merged into a single I/O request.
kB_read/s: amount of data read from the device per second
kB_wrtn/s: amount of data written to the device per second
kB_read: total amount of data read
kB_wrtn: total amount of data written; with -k, all of these units are kilobytes.
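For latency and utilization figures in addition to throughput, iostat's extended statistics can be used; a sketch (the -x flag is standard iostat but not described above):
iostat -d -x -k 2
The -x option adds extended columns such as await (average I/O wait time in milliseconds) and %util (percentage of time the device was busy).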