Page Reclaim

shrink_*_list()

shrink_active_list()
shrink_active_list() only sees unevictable pages that sit on the active/inactive LRU lists, and it diverts any such pages to the unevictable list. Pages already on the unevictable list are invisible to shrink_active_list().
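
A rough sketch of that cull, modeled loosely on older mm/vmscan.c (around the 2.6.3x kernels, where page_evictable() still took a second vma argument); the wrapper name scan_active_pages() is invented here and locking is omitted:

static void scan_active_pages(struct list_head *l_hold)
{
	struct page *page;

	while (!list_empty(l_hold)) {
		page = lru_to_page(l_hold);
		list_del(&page->lru);

		/*
		 * Unevictable page spotted on the active list:
		 * putback_lru_page() sees !page_evictable() and files the
		 * page on the zone's unevictable list instead of a normal LRU.
		 */
		if (unlikely(!page_evictable(page, NULL))) {
			putback_lru_page(page);
			continue;
		}

		/* ... normal active-list aging / deactivation follows ... */
	}
}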

Unevictable pages can show up on the active/inactive lists in the following situations:

  1. ramfs pages that were placed on the LRU lists when first allocated;
  2. mlocked pages that could not be isolated from the LRU and moved to the unevictable list in mlock_vma_page().
  3. Pages mapped into multiple VM_LOCKED VMAs, but try_to_munlock() couldn’t acquire the VMA’s mmap semaphore to test the flags and set PageMlocked. munlock_vma_page() was forced to let the page back on to the normal LRU list for vmscan to handle.
  4. SHM_LOCK’d shared memory pages. shmctl(SHM_LOCK) does not attempt to allocate or fault in the pages in the shared memory region. This happens when an application accesses the page the first time after SHM_LOCK’ing the segment (see the userspace sketch just below).
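
A minimal userspace illustration of case 4, assuming the process has enough RLIMIT_MEMLOCK (or CAP_IPC_LOCK) for shmctl(SHM_LOCK) to succeed:

#include <stdio.h>
#include <string.h>
#include <sys/ipc.h>
#include <sys/shm.h>

int main(void)
{
	size_t len = 4 * 1024 * 1024;
	int id = shmget(IPC_PRIVATE, len, IPC_CREAT | 0600);
	if (id < 0) { perror("shmget"); return 1; }

	/* Locking the segment allocates and faults in nothing by itself. */
	if (shmctl(id, SHM_LOCK, NULL) < 0) { perror("shmctl"); return 1; }

	char *p = shmat(id, NULL, 0);
	if (p == (void *)-1) { perror("shmat"); return 1; }

	/*
	 * First touch after SHM_LOCK: only now are the pages allocated; they
	 * land on the normal LRU lists and are later rescued to the
	 * unevictable list by vmscan, as described above.
	 */
	memset(p, 0, len);

	shmdt(p);
	shmctl(id, IPC_RMID, NULL);
	return 0;
}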

shrink_inactive_list()
shrink_inactive_list() also diverts any unevictable pages that it finds on the inactive list to the appropriate zone’s unevictable list.
shrink_inactive_list() should only see SHM_LOCK’d pages that became SHM_LOCK’d after shrink_active_list() had moved them to the inactive list, or pages mapped into VM_LOCKED VMAs that munlock_vma_page() couldn’t isolate from the LRU to recheck via try_to_munlock(). shrink_inactive_list() won’t notice the latter, but will pass them on to shrink_page_list().
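
A sketch of the diversion point in shrink_inactive_list(), again paraphrasing older mm/vmscan.c; the helper name putback_scanned_pages() is invented for illustration and the lru_lock handling is simplified:

/* called with zone->lru_lock held */
static void putback_scanned_pages(struct zone *zone, struct list_head *page_list)
{
	struct page *page;

	while (!list_empty(page_list)) {
		page = lru_to_page(page_list);
		list_del(&page->lru);

		/*
		 * A page that turned out to be unevictable (e.g. SHM_LOCK'd
		 * after it reached the inactive list) is not put back on a
		 * normal LRU; putback_lru_page() moves it to the zone's
		 * unevictable list.
		 */
		if (unlikely(!page_evictable(page, NULL))) {
			spin_unlock_irq(&zone->lru_lock);
			putback_lru_page(page);
			spin_lock_irq(&zone->lru_lock);
			continue;
		}

		/* ... evictable pages return to the active/inactive lists ... */
	}
}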

shrink_page_list()
shrink_page_list() again culls obviously unevictable pages that it could encounter, for reasons similar to shrink_inactive_list(). Pages mapped into VM_LOCKED VMAs but without PG_mlocked set will make it all the way to try_to_unmap(). shrink_page_list() will divert them to the unevictable list when try_to_unmap() returns SWAP_MLOCK, as discussed above.
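
The relevant fragment of the per-page loop in shrink_page_list(), paraphrased from older kernels in which try_to_unmap() still returned SWAP_MLOCK (newer kernels have restructured this path); labels and flags follow the ~2.6.3x sources:

		/* early cull: page is already obviously unevictable */
		if (unlikely(!page_evictable(page, NULL)))
			goto cull_mlocked;

		/*
		 * A page mapped into a VM_LOCKED VMA but without PG_mlocked
		 * is only discovered when the rmap walk hits the locked VMA.
		 */
		if (page_mapped(page) && mapping) {
			switch (try_to_unmap(page, TTU_UNMAP)) {
			case SWAP_FAIL:
				goto activate_locked;
			case SWAP_AGAIN:
				goto keep_locked;
			case SWAP_MLOCK:
				goto cull_mlocked;	/* mlocked: divert, don't reclaim */
			case SWAP_SUCCESS:
				; /* fall through and try to free the page */
			}
		}

		/* ... */

cull_mlocked:
		if (PageSwapCache(page))
			try_to_free_swap(page);
		unlock_page(page);
		putback_lru_page(page);		/* ends up on the unevictable list */
		continue;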

To look at later:
mark_page_accessed()
