Documentation of the /proc/sys/vm parameters (vm.txt)

These parameters are mainly used to tune the behaviour of the virtual memory subsystem and the writeout of dirty data from RAM to disk.
The default values and initialisation routines for most of these entries can be found in mm/swap.c.
Currently, the /proc/sys/vm directory contains the following entries:
- admin_reserve_kbytes
- block_dump
- compact_memory
- compact_unevictable_allowed
- dirty_background_bytes
- dirty_background_ratio
- dirty_bytes
- dirty_expire_centisecs
- dirty_ratio
- dirty_writeback_centisecs
- drop_caches
- extfrag_threshold
- extra_free_kbytes
- hugepages_treat_as_movable
- hugetlb_shm_group
- laptop_mode
- legacy_va_layout
- lowmem_reserve_ratio
- max_map_count
- memory_failure_early_kill
- memory_failure_recovery
- min_free_kbytes
- min_slab_ratio
- min_unmapped_ratio
- mmap_min_addr
- mmap_rnd_bits
- mmap_rnd_compat_bits
- nr_hugepages
- nr_overcommit_hugepages
- nr_trim_pages         (only if CONFIG_MMU=n)
- numa_zonelist_order
- oom_dump_tasks
- oom_kill_allocating_task
- overcommit_kbytes
- overcommit_memory
- overcommit_ratio
- page-cluster
- panic_on_oom
- percpu_pagelist_fraction
- stat_interval
- stat_refresh
- swappiness
- user_reserve_kbytes
- vfs_cache_pressure
- watermark_scale_factor
- zone_reclaim_mode


==============================================================
admin_reserve_kbytes
The amount of free memory in the system that should be reserved for users with the capability cap_sys_admin (roughly speaking, root).

admin_reserve_kbytes defaults to min(3% of free pages, 8MB), i.e. the smaller of 3% of free pages and 8MB.

That should provide enough memory for the administrator to log in and kill a process, if necessary, under the default overcommit 'guess' mode.

Systems running under overcommit 'never' should increase this to the full virtual memory size of the programs used for recovery. Otherwise, root may not be able to log in to recover the system.

How do you calculate a minimum useful reserve?

sshd or login + bash (or some other shell) + top (or ps, kill, etc.) — account for the virtual memory these programs need.

For overcommit 'guess' mode, we can sum their resident set sizes (RSS). On x86_64 this is about 8MB.

For overcommit 'never' mode, we can take the maximum of their virtual sizes (VSZ) plus their RSS. On x86_64 this is about 128MB.

Changing this takes effect whenever an application requests memory.
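
For example, under overcommit 'never' mode the reserve could be estimated roughly as follows (the program names and the final figure are only illustrative, not prescribed values):

        # List VSZ and RSS (in kB) of the recovery programs
        ps -o vsz=,rss=,comm= -C sshd,bash,top
        # Suppose the numbers add up to roughly 128MB; the reserve could then be raised:
        echo 131072 > /proc/sys/vm/admin_reserve_kbytes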

==============================================================

block_dump

Setting block_dump to a non-zero value enables block I/O debugging. More information on block I/O debugging can be found in Documentation/laptops/laptop-mode.txt.
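
For example, block I/O debugging could be toggled like this (messages then appear in the kernel log):

        echo 1 > /proc/sys/vm/block_dump    # enable block I/O debug messages
        echo 0 > /proc/sys/vm/block_dump    # disable them again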

==============================================================
compact_memory

Available only when CONFIG_COMPACTION is set. When 1 is written to this file, all zones are compacted so that free memory is available in contiguous blocks where possible. This can be important, for example, when allocating large contiguous regions such as huge pages, although processes will also directly compact memory as required.
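
As a simple illustration, compaction can be triggered manually and its effect observed in the per-order free lists:

        echo 1 > /proc/sys/vm/compact_memory    # compact all zones
        cat /proc/buddyinfo                     # higher-order free blocks should increase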

==============================================================
compact_unevictable_allowed
Available only when CONFIG_COMPACTION is set. When set to 1, compaction is allowed to examine the unevictable LRU (e.g. mlocked pages) for pages to compact. This should be used on systems where stalls for minor page faults are an acceptable trade-off for large contiguous free memory. Set to 0 to prevent compaction from moving unevictable pages. The default value is 1.

==============================================================
dirty_background_bytes
Contains the amount of dirty memory at which the background kernel flusher threads will start writeback.

Note: dirty_background_bytes is the counterpart of dirty_background_ratio. Only one of them may be specified at a time. When one sysctl is written, it is immediately taken into account to evaluate the dirty memory limits, and the other appears as 0 when read.
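
For example (the 100MB figure is arbitrary), writing one of the pair immediately zeroes the other:

        echo 104857600 > /proc/sys/vm/dirty_background_bytes   # 100MB threshold
        cat /proc/sys/vm/dirty_background_ratio                # now reads 0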

==============================================================
dirty_background_ratio

Contains, as a percentage of total available memory (which includes free pages and reclaimable pages), the number of pages at which the background kernel flusher threads will start writing out dirty data. The total available memory is not equal to total system memory.

==============================================================
dirty_bytes
Contains the amount of dirty memory at which a process generating disk writes will itself start writeback.

Note: dirty_bytes is the counterpart of dirty_ratio. Only one of them may be specified at a time. When one sysctl is written, it is immediately taken into account to evaluate the dirty memory limits, and the other appears as 0 when read.

Note: the minimum value allowed for dirty_bytes is two pages (in bytes); any value lower than this limit will be ignored and the old configuration retained.

==============================================================
dirty_expire_centisecs

This tunable is used to define when dirty data is old enough to be eligible for writeout by the kernel flusher threads.

It is expressed in 100ths of a second. Data which has been dirty in memory for longer than this interval will be written out the next time a flusher thread wakes up.

==============================================================
dirty_ratio
Contains, as a percentage of total available memory (which includes free pages and reclaimable pages), the number of pages at which a process generating disk writes will itself start writing out dirty data.
The total available memory is not equal to total system memory.
==============================================================
dirty_writeback_centisecs
The kernel flusher threads will periodically wake up and write 'old' dirty data out to disk. This tunable expresses the interval between those wakeups, in 100ths of a second.
Setting this to zero disables periodic writeback altogether.

==============================================================

drop_caches

Writing to this file causes the kernel to drop clean caches, as well as reclaimable slab objects such as dentries and inodes. Once dropped, their memory becomes free.

To free pagecache:

        echo 1 > /proc/sys/vm/drop_caches

To free reclaimable slab objects (includes dentries and inodes):

        echo 2 > /proc/sys/vm/drop_caches

To free both pagecache and slab objects:

        echo 3 > /proc/sys/vm/drop_caches

This is a non-destructive operation and will not free any dirty objects. To increase the number of objects freed by this operation, the user may run 'sync' prior to writing to /proc/sys/vm/drop_caches. This will minimise the number of dirty objects on the system and create more candidates to be dropped.
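
A typical test/debug sequence combining the two steps described above might look like this:

        sync                                  # write back dirty data first
        echo 3 > /proc/sys/vm/drop_caches     # then drop pagecache and slab objects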

This file is not a means to control the growth of the various kernel caches (inodes, dentries, pagecache, etc.). These objects are automatically reclaimed by the kernel when memory is needed elsewhere on the system.

Use of this file can cause performance problems. Since it discards cached objects, it may cost a significant amount of I/O and CPU to recreate the dropped objects, especially if they were in heavy use. Because of this, use outside of a testing or debugging environment is not recommended.

You may see informational messages in your kernel log when this file is used:

        cat (1234): drop_caches: 3

These are informational only. They do not mean that anything is wrong with your system. To disable them, echo 4 (bit 3) into drop_caches.

==============================================================
extfrag_threshold
This parameter affects whether the kernel will compact memory or directly reclaim to satisfy a high-order allocation.

The extfrag/extfrag_index file in debugfs shows the fragmentation index for each order in each zone in the system. Values tending towards 0 imply allocations would fail due to lack of memory, values towards 1000 imply failures are due to fragmentation, and -1 implies that the allocation will succeed as long as watermarks are met.

The kernel will not compact memory in a zone if the fragmentation index is <= extfrag_threshold. The default value is 500.
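
To inspect the fragmentation index described above (assuming debugfs is mounted at /sys/kernel/debug):

        cat /sys/kernel/debug/extfrag/extfrag_index
        cat /proc/sys/vm/extfrag_threshold    # default 500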
==============================================================
extra_free_kbytes
This parameter tells the VM to keep extra free memory between the threshold where background reclaim (kswapd) kicks in and the threshold where direct reclaim (done by the allocating process) kicks in.

This is useful for workloads that require low-latency memory allocations and have a bounded burstiness in memory allocations, for example a realtime application that receives and transmits network traffic (causing in-kernel memory allocations) with a maximum total message burst size of 200MB may need 200MB of extra free memory to avoid direct reclaim related latencies.

==============================================================
hugepages_treat_as_movable
This parameter controls whether we can allocate hugepages from ZONE_MOVABLE or not.
If set to a non-zero value, hugepages can be allocated from ZONE_MOVABLE. ZONE_MOVABLE is only created when the kernelcore= boot parameter is in use, so this parameter has no effect otherwise.
Hugepage migration is possible in some situations, depending on the architecture and the hugepage size.
If a hugepage supports migration, allocation from ZONE_MOVABLE is always allowed for it regardless of the value of this parameter.
In other words, this parameter affects only non-migratable hugepages.
Assuming that hugepages are not migratable on your system, one use case of this parameter is that users can make the hugepage pool more extensible by enabling allocation from ZONE_MOVABLE.
This is because page reclaim/migration/compaction work more on ZONE_MOVABLE, so you are more likely to obtain contiguous memory there.
Note that using ZONE_MOVABLE for non-migratable hugepages can harm other features such as memory hot-remove (because memory hot-remove expects that memory blocks in ZONE_MOVABLE are always removable), so it is a trade-off the user is responsible for.

==============================================================
hugetlb_shm_group
hugetlb_shm_group contains the group id that is allowed to create SysV shared memory segments using hugetlb pages.

==============================================================
laptop_mode
laptop_mode is a knob that controls "laptop mode". All the things that are controlled by this knob are discussed in Documentation/laptops/laptop-mode.txt.

==============================================================
legacy_va_layout
If this is set to a non-zero value, it disables the new 32-bit mmap layout; the kernel will use the legacy (2.4) layout for all processes.

==============================================================
lowmem_reserve_ratio


For some specialised workloads on highmem machines it is dangerous for
the kernel to allow process memory to be allocated from the "lowmem"
zone.  This is because that memory could then be pinned via the mlock()
system call, or by unavailability of swapspace.


And on large highmem machines this lack of reclaimable lowmem memory
can be fatal.


So the Linux page allocator has a mechanism which prevents allocations
which _could_ use highmem from using too much lowmem.  This means that
a certain amount of lowmem is defended from the possibility of being
captured into pinned user memory.


(The same argument applies to the old 16 megabyte ISA DMA region.  This
mechanism will also defend that region from allocations which could use
highmem or lowmem).


The `lowmem_reserve_ratio' tunable determines how aggressive the kernel is
in defending these lower zones.


If you have a machine which uses highmem or ISA DMA and your
applications are using mlock(), or if you are running with no swap then
you probably should change the lowmem_reserve_ratio setting.


The lowmem_reserve_ratio is an array. You can see them by reading this file.
-
% cat /proc/sys/vm/lowmem_reserve_ratio
256     256     32
-
Note: # of this elements is one fewer than number of zones. Because the highest
      zone's value is not necessary for following calculation.


But, these values are not used directly. The kernel calculates # of protection
pages for each zones from them. These are shown as array of protection pages
in /proc/zoneinfo like followings. (This is an example of x86-64 box).
Each zone has an array of protection pages like this.


-
Node 0, zone      DMA
  pages free     1355
        min      3
        low      3
        high     4
:
:
    numa_other   0
        protection: (0, 2004, 2004, 2004)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  pagesets
    cpu: 0 pcp: 0
        :
-
These protections are added to score to judge whether this zone should be used
for page allocation or should be reclaimed.


In this example, if normal pages (index=2) are required to this DMA zone and
watermark[WMARK_HIGH] is used for watermark, the kernel judges this zone should
not be used because pages_free(1355) is smaller than watermark + protection[2]
(4 + 2004 = 2008). If this protection value is 0, this zone would be used for
normal page requirement. If requirement is DMA zone(index=0), protection[0]
(=0) is used.


zone[i]'s protection[j] is calculated by following expression.


(i < j):
  zone[i]->protection[j]
  = (total sums of managed_pages from zone[i+1] to zone[j] on the node)
    / lowmem_reserve_ratio[i];
(i = j):
   (should not be protected. = 0;
(i > j):
   (not necessary, but looks 0)


The default values of lowmem_reserve_ratio[i] are
    256 (if zone[i] means DMA or DMA32 zone)
    32  (others).
As above expression, they are reciprocal number of ratio.
256 means 1/256. # of protection pages becomes about "0.39%" of total managed
pages of higher zones on the node.


If you would like to protect more pages, smaller values are effective.
The minimum value is 1 (1/1 -> 100%).


==============================================================


max_map_count:


This file contains the maximum number of memory map areas a process
may have. Memory map areas are used as a side-effect of calling
malloc, directly by mmap and mprotect, and also when loading shared
libraries.


While most applications need less than a thousand maps, certain
programs, particularly malloc debuggers, may consume lots of them,
e.g., up to one or two maps per allocation.


The default value is 65536.
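
As a sketch, the limit can be inspected and raised at runtime; 262144 is only an example value sometimes used for map-hungry applications:

        cat /proc/sys/vm/max_map_count       # default 65536
        sysctl -w vm.max_map_count=262144    # raise the limit (example value)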


=============================================================
memory_failure_early_kill:
Controls how to kill processes when an uncorrected memory error (typically a 2-bit error in a memory module), which the kernel cannot handle, is detected in the background by hardware.
In some cases (like the page still having a valid copy on disk) the kernel will handle the failure transparently without affecting any applications. But if there is no other up-to-date copy of the data, it will kill processes to prevent any data corruption from propagating.
1: Kill all processes that have the corrupted-and-not-reloadable page mapped as soon as the corruption is detected.
Note that this is not supported for a few types of pages, such as kernel internally allocated data or the swap cache, but it works for the majority of user pages.
0: Only unmap the corrupted page from all processes and only kill processes that try to access it.
The kill is done using a catchable SIGBUS with BUS_MCEERR_AO, so processes can handle it if they want to.
This is only active on architectures/platforms with advanced machine check handling and depends on the hardware capabilities.
Applications can override this setting individually with the PR_MCE_KILL prctl.


==============================================================
memory_failure_recovery
Enable memory failure recovery (when supported by the platform).
1: Attempt recovery.
0: Always panic on a memory failure.

==============================================================
min_free_kbytes:
This is used to force the Linux VM to keep a minimum number of kilobytes free.

The VM uses this number to compute a watermark[WMARK_MIN] value for each lowmem zone in the system.

Each lowmem zone gets a number of reserved free pages based proportionally on its size.

Some minimal amount of memory is needed to satisfy PF_MEMALLOC allocations; if you set this to a value lower than 1024KB, your system will become subtly broken and prone to deadlock under high loads.
Setting this too high will trigger the OOM (out of memory) killer almost immediately.
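
For illustration, the current reserve and the per-zone watermarks derived from it can be inspected like this (the value written is only an example and should be sized to the machine):

        cat /proc/sys/vm/min_free_kbytes             # current reserve in kB
        echo 65536 > /proc/sys/vm/min_free_kbytes    # example: keep roughly 64MB free
        grep -A3 "pages free" /proc/zoneinfo         # resulting min/low/high watermarks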

=============================================================


min_slab_ratio:


This is available only on NUMA kernels.


A percentage of the total pages in each zone.  On Zone reclaim
(fallback from the local zone occurs) slabs will be reclaimed if more
than this percentage of pages in a zone are reclaimable slab pages.
This ensures that the slab growth stays under control even in NUMA
systems that rarely perform global reclaim.


The default is 5 percent.


Note that slab reclaim is triggered in a per zone / node fashion.
The process of reclaiming slab memory is currently not node specific
and may not be fast.


=============================================================


min_unmapped_ratio:


This is available only on NUMA kernels.


This is a percentage of the total pages in each zone. Zone reclaim will
only occur if more than this percentage of pages are in a state that
zone_reclaim_mode allows to be reclaimed.


If zone_reclaim_mode has the value 4 OR'd, then the percentage is compared
against all file-backed unmapped pages including swapcache pages and tmpfs
files. Otherwise, only unmapped pages backed by normal files but not tmpfs
files and similar are considered.


The default is 1 percent.


==============================================================


mmap_min_addr


This file indicates the amount of address space  which a user process will
be restricted from mmapping.  Since kernel null dereference bugs could
accidentally operate based on the information in the first couple of pages
of memory userspace processes should not be allowed to write to them.  By
default this value is set to 0 and no protections will be enforced by the
security module.  Setting this value to something like 64k will allow the
vast majority of applications to work correctly and provide defense in depth
against future potential kernel bugs.


==============================================================


mmap_rnd_bits:


This value can be used to select the number of bits to use to
determine the random offset to the base address of vma regions
resulting from mmap allocations on architectures which support
tuning address space randomization.  This value will be bounded
by the architecture's minimum and maximum supported values.


This value can be changed after boot using the
/proc/sys/vm/mmap_rnd_bits tunable


==============================================================


mmap_rnd_compat_bits:


This value can be used to select the number of bits to use to
determine the random offset to the base address of vma regions
resulting from mmap allocations for applications run in
compatibility mode on architectures which support tuning address
space randomization.  This value will be bounded by the
architecture's minimum and maximum supported values.


This value can be changed after boot using the
/proc/sys/vm/mmap_rnd_compat_bits tunable


==============================================================


nr_hugepages


Change the minimum size of the hugepage pool.


See Documentation/vm/hugetlbpage.txt


==============================================================


nr_overcommit_hugepages


Change the maximum size of the hugepage pool. The maximum is
nr_hugepages + nr_overcommit_hugepages.


See Documentation/vm/hugetlbpage.txt


==============================================================


nr_trim_pages


This is available only on NOMMU kernels.


This value adjusts the excess page trimming behaviour of power-of-2 aligned
NOMMU mmap allocations.


A value of 0 disables trimming of allocations entirely, while a value of 1
trims excess pages aggressively. Any value >= 1 acts as the watermark where
trimming of allocations is initiated.


The default value is 1.


See Documentation/nommu-mmap.txt for more information.


==============================================================


numa_zonelist_order


This sysctl is only for NUMA.
'where the memory is allocated from' is controlled by zonelists.
(This documentation ignores ZONE_HIGHMEM/ZONE_DMA32 for simple explanation.
 you may be able to read ZONE_DMA as ZONE_DMA32...)


In non-NUMA case, a zonelist for GFP_KERNEL is ordered as following.
ZONE_NORMAL -> ZONE_DMA
This means that a memory allocation request for GFP_KERNEL will
get memory from ZONE_DMA only when ZONE_NORMAL is not available.


In NUMA case, you can think of following 2 types of order.
Assume 2 node NUMA and below is zonelist of Node(0)'s GFP_KERNEL


(A) Node(0) ZONE_NORMAL -> Node(0) ZONE_DMA -> Node(1) ZONE_NORMAL
(B) Node(0) ZONE_NORMAL -> Node(1) ZONE_NORMAL -> Node(0) ZONE_DMA.


Type(A) offers the best locality for processes on Node(0), but ZONE_DMA
will be used before ZONE_NORMAL exhaustion. This increases possibility of
out-of-memory(OOM) of ZONE_DMA because ZONE_DMA is tend to be small.


Type(B) cannot offer the best locality but is more robust against OOM of
the DMA zone.


Type(A) is called as "Node" order. Type (B) is "Zone" order.


"Node order" orders the zonelists by node, then by zone within each node.
Specify "[Nn]ode" for node order


"Zone Order" orders the zonelists by zone type, then by node within each
zone.  Specify "[Zz]one" for zone order.


Specify "[Dd]efault" to request automatic configuration.


On 32-bit, the Normal zone needs to be preserved for allocations accessible
by the kernel, so "zone" order will be selected.


On 64-bit, devices that require DMA32/DMA are relatively rare, so "node"
order will be selected.


Default order is recommended unless this is causing problems for your
system/application.


==============================================================


oom_dump_tasks


Enables a system-wide task dump (excluding kernel threads) to be produced
when the kernel performs an OOM-killing and includes such information as
pid, uid, tgid, vm size, rss, nr_ptes, nr_pmds, swapents, oom_score_adj
score, and name.  This is helpful to determine why the OOM killer was
invoked, to identify the rogue task that caused it, and to determine why
the OOM killer chose the task it did to kill.


If this is set to zero, this information is suppressed.  On very
large systems with thousands of tasks it may not be feasible to dump
the memory state information for each one.  Such systems should not
be forced to incur a performance penalty in OOM conditions when the
information may not be desired.


If this is set to non-zero, this information is shown whenever the
OOM killer actually kills a memory-hogging task.


The default value is 1 (enabled).


==============================================================


oom_kill_allocating_task


This enables or disables killing the OOM-triggering task in
out-of-memory situations.


If this is set to zero, the OOM killer will scan through the entire
tasklist and select a task based on heuristics to kill.  This normally
selects a rogue memory-hogging task that frees up a large amount of
memory when killed.


If this is set to non-zero, the OOM killer simply kills the task that
triggered the out-of-memory condition.  This avoids the expensive
tasklist scan.


If panic_on_oom is selected, it takes precedence over whatever value
is used in oom_kill_allocating_task.


The default value is 0.


==============================================================


overcommit_kbytes:


When overcommit_memory is set to 2, the committed address space is not
permitted to exceed swap plus this amount of physical RAM. See below.


Note: overcommit_kbytes is the counterpart of overcommit_ratio. Only one
of them may be specified at a time. Setting one disables the other (which
then appears as 0 when read).


==============================================================


overcommit_memory:


This value contains a flag that enables memory overcommitment.


When this flag is 0, the kernel attempts to estimate the amount
of free memory left when userspace requests more memory.


When this flag is 1, the kernel pretends there is always enough
memory until it actually runs out.


When this flag is 2, the kernel uses a "never overcommit"
policy that attempts to prevent any overcommit of memory.
Note that user_reserve_kbytes affects this policy.


This feature can be very useful because there are a lot of
programs that malloc() huge amounts of memory "just-in-case"
and don't use much of it.


The default value is 0.


See Documentation/vm/overcommit-accounting and
mm/mmap.c::__vm_enough_memory() for more information.


==============================================================


overcommit_ratio:


When overcommit_memory is set to 2, the committed address
space is not permitted to exceed swap plus this percentage
of physical RAM.  See above.
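
A small worked example (the sizes are hypothetical): with 8GB of swap, 32GB of RAM and the default overcommit_ratio of 50, the commit limit works out as below; the kernel reports it as CommitLimit in /proc/meminfo.

        CommitLimit = Swap + RAM * overcommit_ratio / 100
                    = 8GB  + 32GB * 50 / 100 = 24GB
        grep -i commit /proc/meminfo    # CommitLimit / Committed_AS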


==============================================================


page-cluster


page-cluster controls the number of pages up to which consecutive pages
are read in from swap in a single attempt. This is the swap counterpart
to page cache readahead.
The mentioned consecutivity is not in terms of virtual/physical addresses,
but consecutive on swap space - that means they were swapped out together.


It is a logarithmic value - setting it to zero means "1 page", setting
it to 1 means "2 pages", setting it to 2 means "4 pages", etc.
Zero disables swap readahead completely.


The default value is three (eight pages at a time).  There may be some
small benefits in tuning this to a different value if your workload is
swap-intensive.


Lower values mean lower latencies for initial faults, but at the same time
extra faults and I/O delays for following faults if they would have been part of
the consecutive pages that readahead would have brought in.
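
For example, since the value is a logarithm, the number of pages read in per swap-in attempt is 2^page-cluster:

        cat /proc/sys/vm/page-cluster       # default 3 -> 2^3 = 8 pages per attempt
        echo 0 > /proc/sys/vm/page-cluster  # 1 page at a time, i.e. no swap readahead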


=============================================================


panic_on_oom


This enables or disables panic on out-of-memory feature.


If this is set to 0, the kernel will kill some rogue process via the
oom_killer.  Usually, the oom_killer can kill a rogue process and the
system will survive.


If this is set to 1, the kernel panics when out-of-memory happens.
However, if a process limits its allocations to certain nodes by using
mempolicy/cpusets, and those nodes reach memory exhaustion, one process
may be killed by the oom-killer and no panic occurs in this case,
because other nodes' memory may still be free and the system as a whole
may not yet be in a fatal state.


If this is set to 2, the kernel panics compulsorily even in the
above-mentioned case. Even when OOM happens under a memory cgroup, the
whole system panics.


The default value is 0.
1 and 2 are for failover of clustering; select one according to your
failover policy.
panic_on_oom=2 combined with kdump gives you a very strong tool to
investigate why the OOM happened, since you can get a memory snapshot.


=============================================================


percpu_pagelist_fraction


This is the fraction of pages at most (high mark pcp->high) in each zone that
are allocated for each per cpu page list.  The min value for this is 8.  It
means that we don't allow more than 1/8th of pages in each zone to be
allocated in any single per_cpu_pagelist.  This entry only changes the value
of hot per cpu pagelists.  User can specify a number like 100 to allocate
1/100th of each zone to each per cpu page list.


The batch value of each per cpu pagelist is also updated as a result.  It is
set to pcp->high/4.  The upper limit of batch is (PAGE_SHIFT * 8)


The initial value is zero.  Kernel does not use this value at boot time to set
the high water marks for each per cpu page list.  If the user writes '0' to this
sysctl, it will revert to this default behavior.


==============================================================


stat_interval


The time interval between which vm statistics are updated.  The default
is 1 second.


==============================================================


stat_refresh


Any read or write (by root only) flushes all the per-cpu vm statistics
into their global totals, for more accurate reports when testing
e.g. cat /proc/sys/vm/stat_refresh /proc/meminfo


As a side-effect, it also checks for negative totals (elsewhere reported
as 0) and "fails" with EINVAL if any are found, with a warning in dmesg.
(At time of writing, a few stats are known sometimes to be found negative,
with no ill effects: errors and warnings on these stats are suppressed.)


==============================================================
swappiness
This control is used to define how aggressively the kernel will swap memory pages. Higher values increase aggressiveness, lower values decrease it. A value of 0 instructs the kernel not to initiate swap until the amount of free and file-backed pages is less than the high water mark in a zone. The default value is 60.
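
For example (the value 10 is only illustrative), a host that prefers keeping anonymous pages in RAM at the cost of pagecache might lower it:

        cat /proc/sys/vm/swappiness    # default 60
        sysctl -w vm.swappiness=10     # swap more reluctantly (example value)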

==============================================================


user_reserve_kbytes


When overcommit_memory is set to 2, "never overcommit" mode, reserve
min(3% of current process size, user_reserve_kbytes) of free memory.
This is intended to prevent a user from starting a single memory hogging
process, such that they cannot recover (kill the hog).


user_reserve_kbytes defaults to min(3% of the current process size, 128MB).


If this is reduced to zero, then the user will be allowed to allocate
all free memory with a single process, minus admin_reserve_kbytes.
Any subsequent attempts to execute a command will result in
"fork: Cannot allocate memory".


Changing this takes effect whenever an application requests memory.


==============================================================


vfs_cache_pressure
------------------


This percentage value controls the tendency of the kernel to reclaim
the memory which is used for caching of directory and inode objects.


At the default value of vfs_cache_pressure=100 the kernel will attempt to
reclaim dentries and inodes at a "fair" rate with respect to pagecache and
swapcache reclaim.  Decreasing vfs_cache_pressure causes the kernel to prefer
to retain dentry and inode caches. When vfs_cache_pressure=0, the kernel will
never reclaim dentries and inodes due to memory pressure and this can easily
lead to out-of-memory conditions. Increasing vfs_cache_pressure beyond 100
causes the kernel to prefer to reclaim dentries and inodes.


Increasing vfs_cache_pressure significantly beyond 100 may have negative
performance impact. Reclaim code needs to take various locks to find freeable
directory and inode objects. With vfs_cache_pressure=1000, it will look for
ten times more freeable objects than there are.


=============================================================


watermark_scale_factor:


This factor controls the aggressiveness of kswapd. It defines the
amount of memory left in a node/system before kswapd is woken up and
how much memory needs to be free before kswapd goes back to sleep.


The unit is in fractions of 10,000. The default value of 10 means the
distances between watermarks are 0.1% of the available memory in the
node/system. The maximum value is 1000, or 10% of memory.


A high rate of threads entering direct reclaim (allocstall) or kswapd
going to sleep prematurely (kswapd_low_wmark_hit_quickly) can indicate
that the number of free pages kswapd maintains for latency reasons is
too small for the allocation bursts occurring in the system. This knob
can then be used to tune kswapd aggressiveness accordingly.
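
A rough worked example (the node size is hypothetical): on a node with 16GB of memory, the default factor of 10 (0.1%) keeps roughly 16MB between consecutive watermarks; raising the factor to 100 widens that gap to about 160MB.

        gap ~= node_memory * watermark_scale_factor / 10000
            ~= 16GB * 10  / 10000 ~= 16MB     (default)
            ~= 16GB * 100 / 10000 ~= 160MB    (after: sysctl -w vm.watermark_scale_factor=100)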


==============================================================


zone_reclaim_mode:


Zone_reclaim_mode allows someone to set more or less aggressive approaches to
reclaim memory when a zone runs out of memory. If it is set to zero then no
zone reclaim occurs. Allocations will be satisfied from other zones / nodes
in the system.


This value is the bitwise OR of the following (a combined example is shown after the list):


1 = Zone reclaim on
2 = Zone reclaim writes dirty pages out
4 = Zone reclaim swaps pages
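
For example, bits 1 and 2 could be combined to enable zone reclaim together with writeback of dirty pages during reclaim:

        echo 3 > /proc/sys/vm/zone_reclaim_mode   # 1 | 2: reclaim on + write dirty pages
        echo 0 > /proc/sys/vm/zone_reclaim_mode   # disable zone reclaim again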


zone_reclaim_mode is disabled by default.  For file servers or workloads
that benefit from having their data cached, zone_reclaim_mode should be
left disabled as the caching effect is likely to be more important than
data locality.


zone_reclaim may be enabled if it's known that the workload is partitioned
such that each partition fits within a NUMA node and that accessing remote
memory would cause a measurable performance reduction.  The page allocator
will then reclaim easily reusable pages (those page cache pages that are
currently not used) before allocating off node pages.


Allowing zone reclaim to write out pages stops processes that are
writing large amounts of data from dirtying pages on other nodes. Zone
reclaim will write out dirty pages if a zone fills up and so effectively
throttle the process. This may decrease the performance of a single process
since it cannot use all of system memory to buffer the outgoing writes
anymore but it preserves the memory on other nodes so that the performance
of other processes running on other nodes will not be affected.


Allowing regular swap effectively restricts allocations to the local
node unless explicitly overridden by memory policies or cpuset
configurations.


==============================================================
