The most frequently asked question about ZFS is: "How can I improve ZFS performance?"
That's not to say that ZFS performance is bad; any file system will slow down after it has been used for a while. In fact, ZFS can be a very fast file system.

Thanks to ZFS' powerful self-healing capabilities and the algorithms working behind the scenes, ZFS can deliver better performance than most RAID controllers and RAID boxes, without needing expensive controller hardware. That's why ZFS can be considered the industry's first true RAID solution, in the original sense of a Redundant Array of Inexpensive Disks.

Most of the ZFS performance problems I see are rooted in false assumptions about the hardware's performance, or in expectations that simply defy the laws of physics.

Let's look at ten simple ways to improve ZFS performance that anyone can use; you don't need to become a ZFS expert first.
To make this easier to read, here's a table of contents:

  • Filesystem Performance Basics
  • Performance Expectations, Goals and Strategy
  • #1: Add Enough RAM
  • #2: Add More RAM
  • #3: Add Even More RAM for Deduplication
  • #4: Use SSDs to Improve Read Performance
  • #5: Use SSDs to Improve Write Performance
  • #6: Use Mirroring
  • #7: Add More Disks
  • #8: Leave Enough Free Space
  • #9: Hire A ZFS Expert
  • #10: Be An Evil Tuner - But Know What You Do
  • Bonus: Some Miscellaneous Settings
  • Your Turn
  • Related Posts

Before we get to today's performance topics, let's review some basics first:

Filesystem Performance Basics

It's important to distinguish between the two basic types of file system operations: reads and writes.

That may sound simplistic, even silly, but bear with me: reads and writes travel through two quite different I/O paths in a file system, and that means the techniques for improving read performance differ from those for improving write performance.

You can use the zpool iostat or iostat(1M) commands to check whether your system's read and write performance matches your expectations.
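
For example, to watch bandwidth and IOPS at the pool level as they happen (the pool name "tank" is an assumption):

    # Show per-vdev read/write operations and bandwidth, refreshed every 5 seconds
    zpool iostat -v tank 5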

Next, there are two kinds of file system performance metrics to understand:

  • Bandwidth: measured in MB/s (or GB/s if you're lucky), this tells you how much data flows through the file system (read or written) per unit of time.
  • IOPS: the number of I/O operations per second.

Again, these different views of performance matter in different ways when optimizing; you just need to know which kind of problem you're dealing with. Reads and writes also each come in two different patterns:

  • Sequential: predictable blocks that are stored one after another (adjacent on disk)
  • Random: unpredictable, out-of-order blocks that cannot be accessed in one continuous sweep

And here's some good news: ZFS automatically turns random writes into sequential writes through the magic of copy-on-write. That's one class of performance problem that few other file systems take care of for you.

Finally, when it comes to write I/Os, you should understand the difference between the two kinds:

  • Synchronous writes: the write is considered complete only after the data has reached stable storage (such as disk). In ZFS these are handled through the ZFS Intent Log, or ZIL. Synchronous writes happen most often on file servers and database servers, and they are the most sensitive to disk latency and IOPS.
  • Asynchronous writes: the write call returns as soon as the data has been cached in RAM, before it is committed to disk. Performance comes easily this way, but at the expense of reliability: if the system loses power before the background flush actually writes the data to disk, data can be lost, or worse, e.g. the RAID-5 write-hole problem (where a partially written stripe corrupts the parity of the whole stripe, which is why setups with high reliability requirements rely on expensive battery-backed hardware).

Performance Expectations, Goals and Strategy
We're almost at today's performance tips, but before we begin, a few concepts need to be sorted out:

*Set realistic expectations: ZFS is great, yes. But you have to obey the laws of physics. A 10,000 rpm disk cannot deliver more than about 166 random IOPS, because 10,000 revolutions per minute divided by 60 seconds per minute is 166: the head can position itself over a random block at most 166 times per second. Any more IOPS than that, and your reads/writes aren't actually random. That's how the theoretical maximum for a disk's random IOPS is derived.

Similarly, RAID-Z means you only get the IOPS of a single disk out of each RAID-Z group of disks, because every file system I/O goes to all of the disks in that RAID-Z group in parallel.

Know the physical limits of your storage devices and what performance you can realistically expect from them whenever you analyze performance and set performance goals.

*Set performance goals:
What exactly is "too slow"? What performance would be acceptable? How much performance are you getting now, and how much do you want?

Setting performance goals is important because they tell you when you're done. There's always some way to increase performance, but increasing performance at any cost is pointless. Know when you're done, then celebrate!

*Be systematic: We try this, then we try that; we measure with cp(1) even though our application is actually a database. Then we tweak a knob here and a knob there, and usually, before we know what's going on, we realize: we really know nothing.

Being systematic means deciding how to measure the performance we actually care about, establishing the system's current baseline, picking a single measurement method that relates directly to our real application, and then sticking to that same method throughout the entire analysis and optimization process.

Otherwise things get confusing, we lose the signal, and we can't tell whether we've reached our goal.

Now that we understand what kind of performance we're after, know what we can realistically expect from today's hardware, have set some realistic goals, and have a methodical approach to performance optimization, let's dig into today's list:

#1: Add Enough RAM

A small portion of the space on your disks is used to store ZFS metadata. This is data ZFS needs for its own purposes, to know where your actual user data is stored on disk. In other words, the metadata is the roadmap and the set of data structures ZFS uses to find your user data.

If your server doesn't have enough RAM to cache that metadata, extra metadata read I/Os are needed to find out where each piece of user data actually sits on disk. That makes user data reads slower, and you want to avoid it. If your available RAM is very small, the impact on disk performance can be severe.

How much RAM do you need? As a rule of thumb, take your total disk capacity, divide by 1,000, and add 1 GB for the OS. In other words, for every 1 TB of data you'll want at least 1 GB of RAM for caching ZFS metadata, plus whatever extra memory the OS and your applications need.

Having enough RAM pays off for all reads, whether random or sequential, simply because metadata cached in RAM can be consulted much faster than metadata on disk. So make sure your system has at least n/1000 + 1 GB of RAM, where n is your pool capacity in GB.
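
To make the rule of thumb concrete, here's a minimal sketch (the pool name "tank" is an assumption):

    # Look up the total pool capacity
    zpool list -H -o size tank        # e.g. prints: 10.9T
    # ~11 TB / 1000 = ~11 GB for ZFS metadata, plus 1 GB for the OS
    # => plan for at least 12 GB of RAM, plus what your applications need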

#2: Add More RAM

ZFS uses every bit of RAM it can find to cache data. It has a very sophisticated caching algorithm that tries to cache both the most recently used and the most frequently used data, adaptively balancing between the two types based on how your data is actually being used. ZFS also has advanced prefetching logic that can greatly improve performance for many kinds of sequential reads.

The more RAM you give ZFS, the better all of this works. But how do you know whether more RAM will bring a breakthrough in performance or only a small improvement?

It depends on the size of your working set.

Your working set is the portion of your data that you use most often: the content of the production database behind your main product/website/e-commerce system, the clients that generate the most traffic in your hosting environment, the files you work with most frequently, and so on.

If your working set fits into RAM, then most read requests can be satisfied out of memory most of the time, without ever generating I/Os to slow disks.

Try to figure out how big your most frequently used data is, then add enough RAM to your ZFS server to keep it resident in memory. That will give you the biggest possible read performance.

If you want to do this in a more automated fashion, Ben Rockwood has written an excellent tool called arc_summary (ARC being the ZFS Adaptive Replacement Cache). Its two "ghost" variables will tell you exactly how much more RAM, given your workload over the recent past, would have brought a noticeable improvement in ZFS performance.
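
The "ghost" statistics that arc_summary reports come from the ARC's kstat counters, which you can also query directly; a minimal sketch (counter names as found in OpenSolaris arcstats):

    # Hits on the ARC ghost lists: blocks that had been evicted from RAM but
    # were requested again soon after - large, growing numbers suggest that
    # more RAM would noticeably improve the cache hit rate
    kstat -p zfs:0:arcstats:mru_ghost_hits
    kstat -p zfs:0:arcstats:mfu_ghost_hits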

If you want to influence the balance between user data and metadata in the ZFS ARC cache, check out the primarycache filesystem property that you can set using the zfs(1M) command. For RAM-starved servers with a lot of random reads, it may make sense to restrict the precious RAM cache to metadata and use an L2ARC, explained in tip #4 below.
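
For instance, a minimal sketch of restricting the RAM cache to metadata for one dataset (the dataset name is made up):

    # Keep only metadata in the ARC for this dataset; user data reads
    # will then be served from the L2ARC (if present) or from disk
    zfs set primarycache=metadata tank/randomreads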

#3: Add Even More RAM for Deduplication

In an earlier article, I wrote about the basics of ZFS Deduplication. If you plan to use it, keep in mind that ZFS maintains a table with the location and checksum of every single block stored in the file system, so it can determine whether a particular block has already been written and can safely be marked as a duplicate.

Deduplication saves storage space, and it can improve ZFS performance too, because it saves unnecessary read and write IOPS. The price is that you need extra RAM to hold the ZFS dedup table; otherwise, the extra I/Os to slow disks needed to consult it will degrade performance instead.

So how big is the ZFS dedup table? Richard Elling pointed out in a recent post that the dedup table has one entry per data block, each taking roughly 250 bytes. Assuming a block size of 8K, every 1 TB of user data would need about 32 GB of RAM to hold the complete table. If you mostly store large files, your average block size will be larger, say 64K, and then as little as 4 GB of RAM would accommodate the whole dedup table.
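
If you want an estimate for an existing pool before switching dedup on, zdb(1M) can simulate it (pool name assumed; this walks the whole pool, so it can take a while):

    # Print a simulated dedup table histogram for the pool; the total number
    # of allocated blocks times ~250 bytes approximates the DDT's RAM needs
    zdb -S tank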

If you don't have enough RAM, don't use ZFS deduplication: the extra disk I/Os it causes will reduce ZFS performance rather than improve it.

#4: Use SSDs to Improve Read Performance

If you can't add any more RAM to your server (or your purchasing department won't approve the expense), the next best way to improve read performance is to add flash-based SSDs to the system as a second-level ARC (L2ARC).

You can set this up very easily with the zpool(1M) command; see the "Cache devices" section of its man page.
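
A minimal sketch, assuming your pool is named "tank" and the SSD shows up as c4t0d0:

    # Add an SSD as an L2ARC cache device
    zpool add tank cache c4t0d0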

SSDs can deliver two orders of magnitude better IOPS than traditional harddisks, and they're much cheaper on a per-GB basis than RAM.
They form an excellent layer of cache between the ZFS RAM-based ARC and the actual disk storage.

You don't need to observe any reliability requirements when configuring L2ARC devices: If they fail, no data is lost because it can always be retrieved from disk.

This means that L2ARC devices can be cheap, but before you start putting USB sticks into your server, you should make sure they deliver a good performance benefit over your rotating disks :).

SSDs come in various sizes: From drop-in-replacements for existing SATA disks in the range of 32GB to the Oracle Sun F20 PCI card with 96GB of flash and built-in SAS controllers (which is one of the secrets behind Oracle Exadata V2's breakthrough performance), to the mighty fast Oracle Sun F5100 flash array (which is the secret behind Oracle's current TPC-C and other world records) with a whopping 1.96TB of pure flash memory and over a million IOPS. Nice!

And since the dedup table is stored in the ZFS ARC and consequently spills over into the L2ARC if available, using SSDs as cache devices will also benefit deduplication performance.

#5: Use SSDs to Improve Write Performance

Most write performance problems are related to synchronous writes. These are mostly found in file servers and database servers.

With synchronous writes, ZFS needs to wait until each particular IO is written to stable storage, and if that's your disk, then it'll need to wait until the rotating rust has spun into the right place, the harddisk's arm moved to the right position, and finally, until the block has been written. This is mechanical, it's latency-bound, it's slow.

See Roch's excellent article on ZFS NFS performance for a more detailed discussion on this.

SSDs can change the whole game for synchronous writes because they have 100x better latency: No moving parts, no waiting, instant writes, instant performance.

So if you're suffering from a high load in synchronous writes, add SSDs as ZFS log devices (aka ZIL, Logzillas) and watch your synchronous writes fly. Check out the zpool(1M) man page under the "Intent Log" section for more details.

Make sure you mirror your ZIL devices: They are there to guarantee the POSIX requirement for "stable storage" so they must function reliably, otherwise data may be lost on power or system failure.
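
A sketch of what that looks like, with hypothetical device names:

    # Add a mirrored pair of SSDs as a dedicated ZFS intent log
    zpool add tank log mirror c4t0d0 c4t1d0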

Also, make sure you use high quality SLC Flash Memory devices, because they can give you reliable write transactions. Cheaper MLC cells can damage existing data if the power fails during write operations, something you really don't want.

#6: Use Mirroring

Many people configure their storage for maximum capacity. They just look at how many TB they can get out of their system. After all, storage is expensive, isn't it?

Wrong. Storage capacity is cheap. Every 18 months or so, the same disk only costs half as much, or you can buy double the capacity for the same price, depending on how you view it.

But storage performance can be precious. So why squeeze the last GB out of your storage if capacity is cheap anyway? Wouldn't it make more sense to trade in capacity for speed?

This is what mirroring disks offer as opposed to RAID-Z or RAID-Z2:

  • RAID-Z(2) groups several disks into a RAID group, called vdevs. This means that every I/O operation at the file system level is going to be translated into a parallel group of I/O operations to all of the disks in the same vdev.
    The result: Each RAID group can only deliver the IOPS performance of a single disk, because the transaction always has to wait until all of the disks in the same vdev are finished.
    This is both true for reads and for writes: The whole pool can only deliver as many IOPS as the total number of striped vdevs times the IOPS of a single disk.
    There are cases where the total bandwidth of RAID-Z can take advantage of the aggregate performance of all drives in parallel, but if you're reading this, you're probably not seeing such a case.
  • Mirroring behaves differently: For writes, the rules are the same: Each mirrored pair of disks will deliver the write IOPS of a single disk, because each write transaction will need to wait until it has completed on both disks. But a mirrored pair of disks is a much smaller granularity than your typical RAID-Z set (with up to 10 disks per vdev). For 20 disks, this could be the difference between 10x the IOPS of a disk in the mirror case vs. only 2x the IOPS of a disk in a wide stripes RAID-Z2 scenario (8+2 disks per RAID-Z2 vdev). A 5x performance difference!
    For reads, the difference is even bigger: ZFS will round-robin across all of the disks when reading from mirrors. This will give you 20x the IOPS of a single disk in a 20 disk scenario, but still only 2x if you use wide stripes of the 8+2 kind.
    Of course, the numbers can change when using smaller RAID-Z stripes, but the basic rules are the same and the best performance is always achieved with mirroring.

For a more detailed discussion on this, I highly recommend Richard Elling's post on ZFS RAID recommendations: Space, performance and MTTDL.

Also, there's some more discussion on this in my earlier RAID-GREED-article.

Bottom line: If you want performance, use mirroring.
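
As an illustration, here's a small pool built from mirrored pairs instead of a single RAID-Z vdev (device names are made up):

    # Two striped mirror vdevs: 2x the write IOPS and up to 4x the read
    # IOPS of a single disk, at the cost of 50% usable capacity
    zpool create tank mirror c1t0d0 c1t1d0 mirror c1t2d0 c1t3d0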

#7: Add More Disks

Our next tip was already buried inside tip #6: Add more disks. The more vdevs ZFS has to play with, the more shoulders it can place its load on and the faster your storage performance will become.

This works both for increasing IOPS and for increasing bandwidth, and it'll also add to your storage space, so there's nothing to lose by adding more disks to your pool.

But keep in mind that the performance benefit of adding more disks (and of using mirrors instead of RAID-Z(2)) only accelerates aggregate performance. The performance of every single I/O operation is still confined to that of a single disk's I/O performance.

So, adding more disks does not substitute for adding SSDs or RAM, but it'll certainly help aggregate IOPS and bandwidth for the cases where lots of concurrent IOPS and bigger overall bandwidth are needed.
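
Growing a mirrored pool later is a one-liner (device names assumed):

    # Stripe another mirrored pair into the pool; ZFS spreads new writes
    # across all vdevs automatically
    zpool add tank mirror c2t0d0 c2t1d0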

#8: Leave Enough Free Space

Don't wait until your pool is full before adding new disks, though.

ZFS uses copy-on-write, which means it writes new data into free blocks, and only once the überblock has been updated does the new state become valid.

This is great for performance because it gives ZFS the opportunity to turn random writes into sequential writes - by choosing the right blocks out of the list of free blocks so they're nicely in order and thus can be written to quickly.

That is, when there are enough blocks.

Because if you don't have enough free blocks in your pool, ZFS will be limited in its choice, and that means it won't be able to choose enough blocks that are in order, and hence it won't be able to create an optimal set of sequential writes, which will impact write performance.

As a rule of thumb, don't let your pool become more full than about 80% of its capacity. Once it reaches that point, you should start adding more disks so ZFS has enough free blocks to choose from in sequential write order.
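
Keeping an eye on this is easy (pool name assumed):

    # Show how full the pool is; plan to add disks as this approaches 80%
    zpool list -H -o capacity tank    # e.g. prints: 76%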

#9: Hire A ZFS Expert

There's a reason why this point comes up almost last: In the vast majority of all ZFS performance cases, one or more of #1-#8 above are almost always the solution.

And they're cheaper than hiring a ZFS performance expert who will likely tell you to add more RAM, or add SSDs or switch from RAID-Z to mirroring after looking at your configuration for a couple of minutes anyway!

But sometimes, a performance problem can be really tricky. You may think it's a storage performance problem, but instead your application may be suffering from an entirely different effect.

Or maybe there are some complex dependencies going on, or some other unusual interaction between CPUs, memory, networking, I/O and storage.

Or perhaps you're hitting a bug or some other strange phenomenon?

So, if all else fails and none of the above options seem to help, contact your favorite Oracle/Sun representative (or send me a mail) and ask for a performance workshop quote.
If your performance problem is really that hard, we want to know about it.

#10: Be An Evil Tuner - But Know What You Do

If you don't want to go for option #9 and you know what you're doing, you can check out the ZFS Evil Tuning Guide.

There's a reason it's called "evil": ZFS is not supposed to be tuned. The default values are almost always the right values, and most of the time, changing them won't help, unless you really know what you're doing. So, handle with care.

Still, when people encounter a ZFS performance problem, they tend to Google "ZFS tuning", then they'll find the Evil Tuning Guide, then think that performance is just a matter of setting that magic variable in /etc/system.

This is simply not true.

Measuring performance in a standardized way, setting goals, then sticking to them helps. Adding RAM helps. Using SSDs helps. Thinking about the right number and RAID level of disks helps. Letting ZFS breathe helps.

But tuning kernel parameters is reserved for very special cases, and then you're probably much better off hiring an expert to help you do that correctly.

Bonus: Some Miscellaneous Settings

If you look through the zfs(1M) man page, you'll notice a few performance related properties you can set.
They're not general cures for all performance problems (otherwise they'd be set by default), but they can help in specific situations. Here are a few, with a combined example after the list:

  • atime: This property controls whether ZFS records the time of last access for reads. Switching this to off will save you extra write IOs when reading data. This can have a big impact if your application doesn't care about the time of last access for a file and if you have a lot of small files that need to be read frequently.
  • checksum and compression can be double-edged swords: The stronger the checksum, the better your data is protected against corruption (and this is even more important when using dedup). But a stronger checksum method will incur some more load on the CPU for both reading and writing.
    Similarly, using compression may save a lot of IOPS if the data can be compressed well, but may be in the way for data that isn't easily compressed. Again, compression costs some extra CPU time.
    Keep an eye on CPU load while running tests and if you find that your CPU is under heavy load, you might need to tweak one of these.
  • recordsize: Don't change this property unless you're running a database on this filesystem. ZFS automatically figures out what the best blocksize is for your filesystems.
    In case you're running a database (where the file may be big, but the access pattern is always in fixed-size chunks), setting this property to your database record size may help performance a lot.
  • primarycache and secondarycache: We already introduced the primarycache property in tip #2 above. It controls whether your precious RAM cache should be used for metadata or for both metadata and user data. In cases where you have an SSD configured as a cache device and if you're using a large filesystem, it may help to set primarycache=metadata so the RAM is used for metadata only.
    secondarycache does the same for cache devices, but it should be used to cache metadata only in cases where you have really big file systems and almost no real benefit from caching data.
  • logbias: When executing synchronous writes, there's a tradeoff to be made: Do you want to wait a little, so you can accumulate more synchronous write requests to be written into the log at once, or do you want to service each individual synchronous write as fast as possible, at the expense of throughput?
    This property lets you decide which side of the tradeoff you want to favor.
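
As promised above, here's a hedged sketch of what setting some of these properties looks like; the dataset names are made up, and whether each setting helps depends entirely on your workload:

    # Skip last-access-time updates on a read-heavy file store
    zfs set atime=off tank/files
    # Trade some CPU for fewer I/Os on compressible data
    zfs set compression=on tank/files
    # Match the record size to a database's fixed block size (8K assumed)
    zfs set recordsize=8K tank/db
    # Favor batched throughput over per-write latency for that database's ZIL
    zfs set logbias=throughput tank/db
    # Cache only metadata on the L2ARC for a huge, rarely re-read filesystem
    zfs set secondarycache=metadata tank/bigdata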

Your Turn

Sorry for the long article. I hope the table of contents at the beginning makes it more digestible, and I hope it's useful to you as a little checklist for ZFS performance planning and for dealing with ZFS performance problems.

Let me know if you want me to split up longer articles like these (though this one is really meant to remain together).

Now it's your turn: What is your experience with ZFS performance? What options from the above list did you implement for what kind of application/problem and what were your results? What helped and what didn't and what are your own ZFS performance secrets?

Share your ZFS performance expertise in the comments section and help others get the best performance out of ZFS!

Related Posts

  • Seven Useful OpenSolaris ZFS Home Server Tips
  • OpenSolaris ZFS Deduplication: Everything You Need to Know
  • Home Server: RAID-GREED and Why Mirroring is Still Best