link:
http://www.eygle.com/unix/Use.Bonnie++.To.Test.IO.speed.htm
Because Bonnie has some well-known limitations, such as lack of support for files larger than 2 GB,
Russell Coker ([email protected]) developed a new code base that, among other things, supports files larger than 2 GB.
With the permission of Tim Bray ([email protected]), Russell named his software bonnie++ and released it on the web, where it has become popular.
The current version is 1.03a, which you can download from:
http://www.coker.com.au/bonnie++/
This version needs to be compiled from source; if you do not have a build environment, I also provide a precompiled binary for the SUN Solaris environment (verified on Solaris 8).
Russell Coker's home page is:
http://www.coker.com.au/
The main differences between Bonnie++ and Bonnie are listed at:
http://www.coker.com.au/bonnie++/diff.html
Here is a brief introduction to compiling and using Bonnie++.

1. Compilation

You need to compile the downloaded source before you can use it; if you have no build environment, use the precompiled Solaris binary mentioned above.
You will of course need make, gcc, and the other necessary build tools installed. If ./configure fails with an error during the build, the likely cause is that the environment variables are not set correctly:

$ ./configure

Set the library path and rerun the build; it normally succeeds after that:

# export LD_LIBRARY_PATH=/usr/lib:/usr/local/lib

Once compilation finishes, a bonnie++ binary is produced and is ready for testing.
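For reference, here is a minimal commented sketch of the whole build sequence; the install path in the last step is only an illustrative assumption, adjust it to your own machine:

# set the library search path first if ./configure complains, as above
export LD_LIBRARY_PATH=/usr/lib:/usr/local/lib
# configure and build inside the unpacked bonnie++ source directory
./configure
make
# the bonnie++ binary is left in the source directory; copy it onto your
# PATH if you like, for example:
cp bonnie++ /usr/local/bin/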
2. Some test results

a. Large-file read/write test on a T3 array
# ./bonnie++ -d /data1 -u root -s 4096 -m billing
Using uid:0, gid:1.
Writing with putc()...done
Writing intelligently...done
Rewriting...done
Reading with getc()...done
Reading intelligently...done
start 'em...done...done...done...
Create files in sequential order...done.
Stat files in sequential order...done.
Delete files in sequential order...done.
Create files in random order...done.
Stat files in random order...done.
Delete files in random order...done.
Version 1.03       ------Sequential Output------ --Sequential Input- --Random-
                    -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
billing          4G  9915  87 30319  56 11685  38  9999  99 47326  66 177.6   3
                    ------Sequential Create------ --------Random Create--------
                    -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
              files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                 16   639  19 +++++ +++  1258  22   679  16 +++++ +++  1197  27
billing,4G,9915,87,30319,56,11685,38,9999,99,47326,66,177.6,3,16,639,19,+++++,+++,1258,22,679,16,+++++,+++,1197,27
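A quick note on the options used in these runs, shown as a commented sketch of the same command (as documented for bonnie++ 1.03):

# -d  directory in which the test files are created
# -u  user to run as (required when bonnie++ is started as root)
# -s  test file size in MB; it should be at least twice the machine's RAM,
#     otherwise bonnie++ warns, as seen in the next run
# -m  machine name used to label the report
./bonnie++ -d /data1 -u root -s 4096 -m billing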
b. EMC CLARiiON CX500 with write cache disabled

These are the results after I disabled the write cache.

4-disk RAID 1+0 test:
# ./bonnie++ -d /eygle -u root -s 4096 -m jump
Using uid:0, gid:1.
File size should be double RAM for good results, RAM is 4096M.
# ./bonnie++ -d /eygle -u root -s 8192 -m jump
Using uid:0, gid:1.
Writing with putc()...done
Writing intelligently...done
Rewriting...done
Reading with getc()...done
Reading intelligently...done
start 'em...done...done...done...
Create files in sequential order...done.
Stat files in sequential order...done.
Delete files in sequential order...done.
Create files in random order...done.
Stat files in random order...done.
Delete files in random order...done.
Version 1.03       ------Sequential Output------ --Sequential Input- --Random-
                    -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
jump             8G 12647  36 13414   8  7952  13 33636  97 146503  71 465.7   5
                    ------Sequential Create------ --------Random Create--------
                    -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
              files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                 16    86   1 +++++ +++   161   1    81   1 +++++ +++   163   1
jump,8G,12647,36,13414,8,7952,13,33636,97,146503,71,465.7,5,16,86,1,+++++,+++,161,1,81,1,+++++,+++,163,1
4-disk RAID 5, also with write cache disabled:
# ./bonnie++ -d /eygle -u root -s 8192 -m jump
Using uid:0, gid:1.
Writing with putc()...done
Writing intelligently...done
Rewriting...done
Reading with getc()...done
Reading intelligently...done
start 'em...done...done...done...
Create files in sequential order...done.
Stat files in sequential order...done.
Delete files in sequential order...done.
Create files in random order...done.
Stat files in random order...done.
Delete files in random order...done.
Version 1.03       ------Sequential Output------ --Sequential Input- --Random-
                    -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
jump             8G 10956  30 10771   6  3388   5 34169  98 158861  75 431.1   5
                    ------Sequential Create------ --------Random Create--------
                    -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
              files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                 16    81   1 +++++ +++   160   1    82   1 +++++ +++   109   1
jump,8G,10956,30,10771,6,3388,5,34169,98,158861,75,431.1,5,16,81,1,+++++,+++,160,1,82,1,+++++,+++,109,1
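Each run ends with a machine-readable CSV line, which makes it easy to pull out the columns compared below. A small awk one-liner is sketched here; the field positions follow the 1.03 CSV layout shown above, and bonnie.out is just an assumed file holding the captured output:

# fields in the CSV line: 1=machine, 3=per-char write, 5=block write,
# 9=per-char read, 11=block read (all in K/sec)
grep '^jump,' bonnie.out | awk -F, '{print $1, $3, $5, $9, $11}'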
Comparing the two results (unit: K/sec):

            Per-char write   Block write   Per-char read   Block read
RAID 10             12,647        13,414          33,636      146,503
RAID 5              10,956        10,771          34,169      158,861
Diff                 1,691         2,643            -533      -12,358
We can see that on plain writes RAID 10 is slightly faster than RAID 5, while on reads RAID 5 is slightly faster than RAID 10, which matches the usual expectation.
It is worth mentioning that we generally recommend placing redo log files on RAID 10 disks because of this write advantage.
c. EMC CLARiiON CX500 with a 1 GB write cache enabled

This is the 4-disk RAID 10 run (its summary figures appear in the comparison table below):

# ./bonnie++ -d /eygle -u root -s 8192 -m jump

This is the 4-disk RAID 5 run:
# ./bonnie++ -d /eygle -u root -s 8192 -m jump
Using uid:0, gid:1.
Writing with putc()...done
Writing intelligently...done
Rewriting...done
Reading with getc()...done
Reading intelligently...done
start 'em...done...done...done...
Create files in sequential order...done.
Stat files in sequential order...done.
Delete files in sequential order...done.
Create files in random order...done.
Stat files in random order...done.
Delete files in random order...done.
Version 1.03       ------Sequential Output------ --Sequential Input- --Random-
                    -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
jump             8G 34620  98 103440  65 35756  61 33900  97 160964  76 495.4   6
                    ------Sequential Create------ --------Random Create--------
                    -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
              files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                 16   788  12 +++++ +++  1503  14   783  11 +++++ +++  1520  15
jump,8G,34620,98,103440,65,35756,61,33900,97,160964,76,495.4,6,16,788,12,+++++,+++,1503,14,783,11,+++++,+++,1520,15
Let us compare these results again (unit: K/sec):

            Per-char write   Block write   Per-char read   Block read
RAID 10             31,447        73,130          33,607      144,470
RAID 5              34,620       103,440          33,900      160,964
Diff                -3,173       -30,310            -293      -16,494
With a large write cache enabled, RAID 5 outperforms RAID 10 across the board.

3. Comparing the T3 and the EMC

Both configured as 4-disk RAID 5 (unit: K/sec):
            Per-char write   Block write   Per-char read   Block read
T3                   9,915        30,319           9,999       47,326
EMC                 34,620       103,440          33,900      160,964
Diff               -24,705       -73,121         -23,901     -113,638
EMC/T3                3.49          3.41            3.39         3.40
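The EMC/T3 row is simply the ratio of the two K/sec figures in each column; for example, the block-write ratio can be checked with bc:

# 103440 / 30319 = 3.41 (EMC block write divided by T3 block write)
echo "scale=2; 103440/30319" | bc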
If you wish to repost this article, please credit the author and the source, and keep the link to this page.