Hugepages

Why Hugepages?

As we know, the default page size is 4 KB, which is not enough for modern large-memory systems. Since process address spaces are virtual, the CPU and the operating system have to remember which page belongs to which process, and where it is stored. Obviously, the more pages you have, the more time it takes to find where the memory is mapped.

Most current CPU architectures support bigger pages; these are named hugepages. With hugepages, the number of pages is cut down, so fewer translations are needed and fewer cycles are spent accessing memory. A less obvious benefit is that address translation information is typically stored in the L2 cache. With huge pages, more cache space is available for application data, which means fewer cycles are spent accessing main memory.
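To make the translation cost concrete, here is a rough sketch comparing how many page-table entries a 1 GiB mapping needs at each page size (the 1 GiB region and the 2 MiB hugepage size are illustrative assumptions; 2 MiB is the common x86_64 default):

```shell
# Page-count comparison for a 1 GiB mapping:
MEM=$((1024 * 1024 * 1024))                   # 1 GiB in bytes
echo "4 KiB pages: $((MEM / 4096))"           # entries the MMU must track
echo "2 MiB pages: $((MEM / (2 * 1024 * 1024)))"
```

The same region needs 262144 entries with 4 KiB pages but only 512 with 2 MiB hugepages, which is why TLB pressure drops so sharply.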

How to enable Hugepages?

  1. Check the hugepage size.

# cat /proc/meminfo

The output of "cat /proc/meminfo" will have lines like:

.....

HugePages_Total: vvv

HugePages_Free:  www

HugePages_Rsvd:  xxx

HugePages_Surp:  yyy

Hugepagesize:    zzz kB

 

where:

HugePages_Total is the size of the pool of huge pages.

HugePages_Free is the number of huge pages in the pool that are not yet allocated.

Hugepagesize is the size of each page; it can be 2 MB, 4 MB, and so on, depending on the architecture.

/proc/sys/vm/nr_hugepages indicates the current number of configured hugetlb pages (HugePages_Total) in the kernel.
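The counters described above can be read directly; a quick sketch (the field names are as they appear in /proc/meminfo on Linux):

```shell
# Show only the huge-page lines from /proc/meminfo:
grep -E '^HugePages_(Total|Free|Rsvd|Surp)|^Hugepagesize' /proc/meminfo

# The pool size is also exposed as a sysctl:
cat /proc/sys/vm/nr_hugepages
```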

 

Use the following command to dynamically allocate/deallocate default-sized huge pages:

 

            echo 20 > /proc/sys/vm/nr_hugepages

So the total hugepage memory is HugePages_Total * Hugepagesize.
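That product can be computed from /proc/meminfo itself; a small sketch using awk (both fields come straight from the file, and Hugepagesize is reported in kB):

```shell
# Total huge-page memory = HugePages_Total * Hugepagesize:
total=$(awk '/^HugePages_Total/ {print $2}' /proc/meminfo)
size=$(awk '/^Hugepagesize/ {print $2}' /proc/meminfo)
echo "Huge page pool: $((total * size)) kB"
```

For example, after `echo 20 > /proc/sys/vm/nr_hugepages` on a system with 2048 kB hugepages, this prints a 40960 kB pool.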

 

  2. Mount the hugepage filesystem and set the number of pages.

mount -t hugetlbfs nodev /mnt/huge

echo xx > /proc/sys/vm/nr_hugepages
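To make the mount survive a reboot, an equivalent /etc/fstab entry can be added (a sketch assuming the /mnt/huge mount point used above, which must exist before mounting):

```
nodev  /mnt/huge  hugetlbfs  defaults  0  0
```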
