Ceph Storage Performance Testing

Environment

Hosts: 3
OSDs: 6 per host
Ceph version: L (Luminous)
Benchmark tool: fio
Storage NIC: 10 GbE
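Before benchmarking, it is worth confirming that the cluster is healthy and the OSD layout matches the description above. A quick check with the standard ceph CLI:

ceph -s          # overall health and number of OSDs up/in
ceph osd tree    # OSD distribution across hosts (expect 6 per host on 3 hosts)
ceph osd df      # per-OSD utilization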

Single-Disk Performance

Random read/write:
fio --filename=./test -iodepth=64 -ioengine=libaio -direct=1 -rw=randread -bs=4k -size=2G -numjobs=64 -runtime=100 -group_reporting -name=test-rand-read
  read: IOPS=1517, BW=6070KiB/s (6215kB/s)(524MiB/88369msec)
fio --filename=./test -iodepth=64 -ioengine=libaio -direct=1 -rw=randwrite -bs=4k -size=2G -numjobs=64 -runtime=30 -group_reporting -name=test-rand-write
  write: IOPS=642, BW=2571KiB/s (2633kB/s)(76.4MiB/30448msec)
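These runs target a file (./test) on a mounted filesystem, so filesystem overhead is included in the numbers. To isolate the raw disk, fio can be pointed at the block device itself; a minimal sketch, where /dev/sdb is a placeholder for the disk actually under test:

# Read tests against the raw device leave data intact, but a -rw=randwrite
# variant would destroy everything on the device -- use a scratch disk only.
fio --filename=/dev/sdb -iodepth=64 -ioengine=libaio -direct=1 -rw=randread -bs=4k -runtime=100 -numjobs=64 -group_reporting -name=raw-rand-read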

Sequential read:
fio --filename=./test -iodepth=64 -ioengine=libaio -direct=1 -rw=read -bs=4M -size=5G -numjobs=1 -runtime=30 -group_reporting -name=test-seq-read
  read: IOPS=54, BW=220MiB/s (230MB/s)(5120MiB/23292msec)
fio --filename=./test -iodepth=64 -ioengine=libaio -direct=1 -rw=read -bs=4M -size=5G -numjobs=64 -runtime=30 -group_reporting -name=test-seq-read
  read: IOPS=117, BW=470MiB/s (493MB/s)(13.0GiB/30508msec)

Sequential write:
fio --filename=./test -iodepth=64 -ioengine=libaio -direct=1 -rw=write -bs=4M -size=5G -numjobs=1 -runtime=100 -group_reporting -name=test-seq-write
  write: IOPS=57, BW=229MiB/s (240MB/s)(5120MiB/22360msec)
fio --filename=./test -iodepth=64 -ioengine=libaio -direct=1 -rw=write -bs=4M -size=5G -numjobs=64 -runtime=30 -group_reporting -name=test-seq-write
  write: IOPS=831, BW=3326MiB/s (3488MB/s)(97.8GiB/30120msec)
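3488MB/s sustained from a single disk is far beyond what one spindle or SATA link can deliver, which usually indicates a write cache (RAID controller or OS) absorbing the burst rather than the platters themselves. Watching the device during the run shows what actually reaches the disk; iostat is part of the sysstat package:

# Print extended per-device statistics every second while fio runs;
# the wMB/s column is the throughput the device itself sustains.
iostat -x 1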

Two-Disk RAID1 Performance

Random read/write:
fio --filename=./test -iodepth=64 -ioengine=libaio -direct=1 -rw=randread -bs=4k -size=2G -numjobs=64 -runtime=100 -group_reporting -name=test-rand-read
  read: IOPS=2875, BW=11.2MiB/s (11.8MB/s)(353MiB/31385msec)
fio --filename=./test -iodepth=64 -ioengine=libaio -direct=1 -rw=randwrite -bs=4k -size=2G -numjobs=64 -runtime=100 -group_reporting -name=test-rand-write
  write: IOPS=639, BW=2559KiB/s (2620kB/s)(76.0MiB/30431msec)

Sequential read:
fio --filename=./test -iodepth=64 -ioengine=libaio -direct=1 -rw=read -bs=4M -size=5G -numjobs=1 -runtime=30 -group_reporting -name=test-seq-read
  read: IOPS=83, BW=334MiB/s (351MB/s)(5120MiB/15310msec)
fio --filename=./test -iodepth=64 -ioengine=libaio -direct=1 -rw=read -bs=4M -size=5G -numjobs=64 -runtime=100 -group_reporting -name=test-seq-read
  read: IOPS=239, BW=959MiB/s (1005MB/s)(28.3GiB/30214msec)

Sequential write:
fio --filename=./test -iodepth=64 -ioengine=libaio -direct=1 -rw=write -bs=4M -size=5G -numjobs=1 -runtime=30 -group_reporting -name=test-seq-write
  write: IOPS=53, BW=213MiB/s (223MB/s)(5120MiB/24077msec)
fio --filename=./test -iodepth=64 -ioengine=libaio -direct=1 -rw=write -bs=4M -size=5G -numjobs=64 -runtime=30 -group_reporting -name=test-seq-write
  write: IOPS=829, BW=3319MiB/s (3480MB/s)(97.6GiB/30118msec)

Results Comparison

| Workload | Single disk | Two-disk RAID1 | Conclusion |
| --- | --- | --- | --- |
| Random read | IOPS=1517 | IOPS=2875 | RAID1 nearly doubles random read versus a single disk (reads can be served from either mirror) |
| Random write | IOPS=642 | IOPS=639 | No significant difference (every write must go to both mirrors) |
| Sequential read, numjobs=1 | BW=230MB/s | BW=351MB/s | RAID1 improves sequential read by roughly 50% |
| Sequential read, numjobs=64 | BW=493MB/s | BW=1005MB/s | With numjobs=64, RAID1 nearly doubles single-disk sequential read |
| Sequential write, numjobs=1 | BW=240MB/s | BW=223MB/s | RAID1 gives no sequential-write gain over a single disk |
| Sequential write, numjobs=64 | BW=3488MB/s | BW=3480MB/s | With numjobs=64, RAID1 still gives no sequential-write gain |

Ceph Performance Testing

Random read/write:
fio --filename=./test -iodepth=64 -ioengine=libaio -direct=1 -rw=randread -bs=4k -size=2G -numjobs=64 -runtime=100 -group_reporting -name=test-rand-read
  read: IOPS=24.1k, BW=94.3MiB/s (98.9MB/s)(9433MiB/100009msec)
fio --filename=./test -iodepth=64 -ioengine=libaio -direct=1 -rw=randwrite -bs=4k -size=2G -numjobs=64 -runtime=100 -group_reporting -name=test-rand-write
  write: IOPS=3924, BW=15.3MiB/s (16.1MB/s)(1536MiB/100194msec)
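As a cross-check independent of fio and the filesystem, Ceph ships rados bench, which drives the object layer directly. A minimal sketch; the pool name testpool and thread count are placeholders:

# Write 4 KiB objects for 30 s with 16 concurrent ops; keep them for the read phase.
rados bench -p testpool 30 write -b 4096 -t 16 --no-cleanup
# Randomly read back the objects written above for 30 s.
rados bench -p testpool 30 rand -t 16
# Remove the benchmark objects afterwards.
rados -p testpool cleanup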

Sequential read:
fio --filename=./test -iodepth=64 -ioengine=libaio -direct=1 -rw=read -bs=4M -size=20G -numjobs=1 -runtime=30 -group_reporting -name=test-seq-read
  read: IOPS=250, BW=1002MiB/s (1051MB/s)(20.0GiB/20430msec)
fio --filename=./test -iodepth=64 -ioengine=libaio -direct=1 -rw=read -bs=4M -size=20G -numjobs=64 -runtime=100 -group_reporting -name=test-seq-read
  read: IOPS=174, BW=696MiB/s (730MB/s)(41.9GiB/61550msec)

Sequential write:
fio --filename=./test -iodepth=64 -ioengine=libaio -direct=1 -rw=write -bs=4M -size=20G -numjobs=1 -runtime=30 -group_reporting -name=test-seq-write
  write: IOPS=210, BW=841MiB/s (882MB/s)(20.0GiB/24348msec)
fio --filename=./test -iodepth=64 -ioengine=libaio -direct=1 -rw=write -bs=4M -size=5G -numjobs=64 -runtime=30 -group_reporting -name=test-seq-write
  write: IOPS=210, BW=844MiB/s (885MB/s)(25.0GiB/31521msec)
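The document does not record how the Ceph-backed test target was provisioned. A typical setup, assuming ./test lives on a filesystem on a mapped RBD image (the pool name rbd and image name bench are placeholders), would look like:

# Create a 100 GiB image (--size is in MiB), map it, and mount a filesystem on it.
rbd create rbd/bench --size 102400
rbd map rbd/bench            # prints a device such as /dev/rbd0
mkfs.xfs /dev/rbd0
mount /dev/rbd0 /mnt/bench
cd /mnt/bench                # fio's --filename=./test then targets this mount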

Results Comparison

| Workload | Single disk | Ceph cluster | Conclusion |
| --- | --- | --- | --- |
| Random read | IOPS=1517 | IOPS=24.1k | Roughly 16x improvement; close to ideal, performance scales linearly |
| Random write | IOPS=642 | IOPS=3924 | Roughly 6x improvement; data is written in triplicate, so writes trail reads |
| Sequential read, numjobs=1 | BW=230MB/s | BW=1051MB/s | Limited by network bandwidth |
| Sequential read, numjobs=64 | BW=493MB/s | BW=730MB/s | Limited by network bandwidth |
| Sequential write, numjobs=1 | BW=240MB/s | BW=882MB/s | Limited by network bandwidth |
| Sequential write, numjobs=64 | BW=3488MB/s | BW=885MB/s | Limited by network bandwidth |
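Several conclusions above attribute the ceiling to the 10 GbE storage network (about 1.25 GB/s theoretical). This can be verified directly between the test client and an OSD host with iperf3; the hostname osd-node1 is a placeholder:

# On one OSD host, start the server:
iperf3 -s
# From the test client, measure throughput over the storage network:
iperf3 -c osd-node1 -t 30 -P 4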
