On an SSD that has been in use for a long time and has exhausted the free lists of pre-erased blocks, write performance degrades severely, even if 90% of the disk space is free.

Note:
This is because an SSD stores data in flash, and a flash block must be erased before it can be rewritten. Every overwrite of previously used blocks therefore costs an extra erase-then-write cycle of I/O, whereas a mechanical disk simply rewrites sectors in place. Under workloads with frequent overwrites, or once the free lists are exhausted, an SSD will consequently show severe I/O degradation.
1. On Linux, enable TRIM so the system regenerates the free lists automatically
How to enable TRIM:
1. The ext4 filesystem is recommended
2. The kernel must be 2.6.28 or newer
3. Run hdparm -I /dev/sda to check whether the drive supports TRIM; supported drives report:
* Data Set Management TRIM supported
4. Add the discard mount option in fstab:
/dev/sda1 / ext4 discard,defaults 0 1
5. If the swap partition lives on the SSD, reduce write pressure on it by lowering swappiness:
echo 1 > /proc/sys/vm/swappiness
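The kernel requirement in step 2 can be checked with a short sketch. This relies on GNU sort's `-V` (version sort) being available; the 2.6.28 threshold comes from the list above:

```shell
# Verify the running kernel is new enough for TRIM (>= 2.6.28).
required=2.6.28
current=$(uname -r)
# Version-sort the two strings; the older one sorts first, so if the
# first line is $required, the running kernel is at least 2.6.28.
oldest=$(printf '%s\n%s\n' "$required" "$current" | sort -V | head -n1)
if [ "$oldest" = "$required" ]; then
    echo "kernel $current is TRIM-capable (>= $required)"
else
    echo "kernel $current is too old for TRIM (< $required)"
fi
```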
2. The noop scheduler is recommended
Linux has several different disk schedulers, which are responsible for determining in which order read and write requests to the disk are handled. Using the noop scheduler means that Linux will simply handle requests in the order they are received, without giving any consideration to where the data physically resides on the disk. This is good for solid-state drives because they have no moving parts, and seek times are identical for all sectors on the disk.
The 2.6 Linux kernel includes selectable I/O schedulers. They control the way the kernel commits reads and writes to disks – the intention of providing different schedulers is to allow better optimisation for different classes of workload.
Without an I/O scheduler, the kernel would basically just issue each request to disk in the order that it received them. This could result in massive hard-disk thrashing: if one process was reading from one part of the disk, and one writing to another, the heads would have to seek back and forth across the disk for every operation. The scheduler’s main goal is to optimise disk access times.
An I/O scheduler can use the following techniques to improve performance:
* Request merging: adjacent requests for neighbouring sectors are combined into a single larger request
* Elevator: requests are reordered by sector number so the head sweeps across the disk in one direction where possible
* Prioritisation: certain requests (typically reads) can be serviced ahead of others
All I/O schedulers should also take into account resource starvation, to ensure requests eventually do get serviced!
There are currently 4 available:
The noop scheduler only implements request merging.
The anticipatory scheduler is the default scheduler in older 2.6 kernels – if you've not specified one, this is the one that will be loaded. It implements request merging, a one-way elevator, read and write request batching, and attempts some anticipatory reads by holding off a bit after a read batch if it thinks a user is going to ask for more data. It tries to optimise for physical disks by avoiding head movements if possible – one downside to this is that it probably gives highly erratic performance on database or storage systems.
The deadline scheduler implements request merging, a one-way elevator, and imposes a deadline on all operations to prevent resource starvation. Because writes return instantly within Linux, with the actual data being held in cache, the deadline scheduler will also prefer readers – as long as the deadline for a write request hasn't passed. The kernel docs suggest this is the preferred scheduler for database systems, especially if you have TCQ aware disks, or any system with high disk performance.
The complete fair queueing scheduler implements both request merging and the elevator, and attempts to give all users of a particular device the same number of IO requests over a particular time interval. This should make it more efficient for multiuser systems. It seems that Novell SLES sets cfq as the scheduler by default, as does the latest Ubuntu release. As of the 2.6.18 kernel, this is the default scheduler in kernel.org releases.
The most reliable way to change schedulers is to set the kernel option “elevator” at boot time. You can set it to one of “as”, “cfq”, “deadline” or “noop”, to set the appropriate scheduler.
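As an illustration, the boot-time option goes on the kernel command line in the bootloader configuration; the fragment below is a sketch only (the file path, kernel image name, and root device are placeholders, not values from this document):

```
# Illustrative grub.conf kernel line (paths and version are placeholders):
kernel /vmlinuz-2.6.32-xxx ro root=/dev/sda1 elevator=noop
```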
It seems under more recent 2.6 kernels (2.6.11, possibly earlier), you can change the scheduler at runtime by echoing the name of the scheduler into /sys/block/$devicename/queue/scheduler, where the device name is the basename of the block device, eg “sda” for /dev/sda.
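Reading that sysfs file lists every available scheduler with the active one in brackets. A minimal sketch of switching and of parsing the active name (the device name `sda` is an example; writing the file needs root, so the parsing below runs on a sample string rather than a live system):

```shell
# On a live system:
#   cat /sys/block/sda/queue/scheduler         # shows e.g. "noop anticipatory deadline [cfq]"
#   echo noop > /sys/block/sda/queue/scheduler # switch at runtime (as root)

# Parsing the active (bracketed) scheduler out of that line, using a
# sample string so the sketch runs without touching sysfs:
line="noop anticipatory deadline [cfq]"
active=$(printf '%s\n' "$line" | sed 's/.*\[\(.*\)\].*/\1/')
echo "active scheduler: $active"
```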
I've not personally done any testing on this, so I can't speak from experience yet. The anticipatory scheduler will be the default one for a reason however - it is optimised for the common case. If you've only got single disk systems (ie, no RAID - hardware or software) then this scheduler is probably the right one for you. If it's a multiuser system, you will probably find CFQ or deadline providing better performance, and the numbers seem to back deadline giving the best performance for database systems.
The noop scheduler has minimal cpu overhead in managing the queues and may be well suited to systems with either low seek times, such as an SSD or systems using a hardware RAID controller, which often has its own IO scheduler designed around the RAID semantics.
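One way to apply noop only to SSDs is a udev rule keyed on the kernel's `rotational` attribute (0 for SSDs). The sketch below is an assumption, not something from the quoted article; the rules-file name is arbitrary:

```
# /etc/udev/rules.d/60-ssd-scheduler.rules  (file name is arbitrary)
# Set the noop scheduler for any sd* disk the kernel reports as
# non-rotational, i.e. an SSD.
ACTION=="add|change", KERNEL=="sd[a-z]", ATTR{queue/rotational}=="0", ATTR{queue/scheduler}="noop"
```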
The schedulers may have parameters that can be tuned at runtime. Read the documents mentioned in the References section below, especially the Linux kernel documentation on the anticipatory and deadline schedulers.
Source: http://www.wlug.org.nz/LinuxIoScheduler
3. Use the wiper tool to re-trim the SSD's free space
wiper.sh is shipped with the hdparm tool, but RHEL 5 and 6 do not include it by default, so building and installing hdparm from source is recommended.
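A small sketch for locating wiper.sh before use. The wiper/ subdirectory of the hdparm source tree and the `--commit` flag are assumptions based on upstream hdparm releases (without `--commit` the script only performs a dry run):

```shell
# Check whether wiper.sh is already on the PATH; if not, build it from
# the hdparm source tarball (it lives in the wiper/ subdirectory upstream).
if command -v wiper.sh >/dev/null 2>&1; then
    status="found"
    echo "wiper.sh found: $(command -v wiper.sh)"
else
    status="missing"
    echo "wiper.sh not installed; build hdparm from source to get it"
fi

# Typical (destructive!) invocation once installed, run as root:
#   wiper.sh --commit /dev/sda1
```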