HugePage by default

Xin Tong trent.tong at gmail.com
Wed Jul 30 19:26:39 EDT 2014


On Wed, Jul 30, 2014 at 5:22 PM, <Valdis.Kletnieks at vt.edu> wrote:

> On Wed, 30 Jul 2014 15:06:39 -0500, Xin Tong said:
>
>
> > 2. modify the kernel (maybe extensively) to allocate 2MB page by default.
>
> How fast do you run out of memory if you do that every time you actually
> only need a few 4K pages?  (In other words - think why that isn't the
> default behavior already :)
>

I am planning to use this only for workloads with very large memory
footprints, e.g. Hadoop, TPC-C, etc.

BTW, I see the Linux kernel uses hugetlbfs to manage huge pages. Every
API call (mmap, shmget, etc.) has to go through a hugetlbfs file before
huge pages can be allocated. Why can't huge pages be allocated the same
way as 4K pages? What is the point of having hugetlbfs?

Xin

