HugePage by default

Xin Tong trent.tong at gmail.com
Wed Jul 30 20:49:25 EDT 2014


How bad is the internal fragmentation going to be if 2M pages are used?
Some of the small VMAs are stacks, shared libraries, and user-mmapped files.
I assume the heap is going to be at least 2M, which is somewhat reasonable.

Shared library VMAs can mostly be merged into larger VMAs, since they tend to
have the same permissions, and only one stack is needed per thread. I think the
big culprit for internal fragmentation here is the user-mmapped files.

Am I right to think as above?
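A rough way to check this for a given process is to sum its VMA sizes from
/proc/<pid>/maps and round each one up to 2M. Below is a minimal sketch of that
estimate (my own illustration: it assumes every VMA would be rounded up, and
ignores sharing and which VMAs THP could actually back):

import sys

HUGE = 2 * 1024 * 1024  # 2M huge page size on x86-64

pid = sys.argv[1] if len(sys.argv) > 1 else "self"
used = rounded = 0
with open(f"/proc/{pid}/maps") as f:
    for line in f:
        start, end = (int(x, 16) for x in line.split()[0].split("-"))
        size = end - start
        used += size
        rounded += -(-size // HUGE) * HUGE  # round each VMA up to whole 2M pages

print(f"VMA total:       {used / 2**20:.1f} MiB")
print(f"rounded to 2M:   {rounded / 2**20:.1f} MiB")
print(f"wasted (approx): {(rounded - used) / 2**20:.1f} MiB")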

Xin
On Wed, Jul 30, 2014 at 7:26 PM, <Valdis.Kletnieks at vt.edu> wrote:

> On Wed, 30 Jul 2014 18:26:39 -0500, Xin Tong said:
>
> > I am planning to use this only for workloads with very large memory
> > footprints, e.g. hadoop, tpcc, etc.
>
> You might want to look at how your system gets booted.  I think you'll find
> that you burn through 800 to 2000 or so processes, all of which are
> currently
> tiny, but if you make every 4K allocation grab 2M instead, you're quite
> likely
> to find yourself tripping the OOM before hadoop ever gets launched.
>
> You're probably *much* better off letting the current code do its work,
> since you'll only pay the coalesce cost once for each 2M that hadoop uses.
> And let's face it, that's only going to sum up to fractions of a second,
> and
> then hadoop is going to be banging on the TLB for hours or days.
>
> Don't spend time optimizing the wrong thing....
>
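The advice above amounts to relying on transparent huge pages rather than a
global 2M default. One way a large-footprint process could nudge THP toward its
own big anonymous mappings is madvise(MADV_HUGEPAGE); a minimal sketch follows
(the 512 MiB size and touch pattern are illustrative, and it assumes Linux with
THP enabled plus Python >= 3.8 for mmap.madvise):

import mmap

HUGE = 2 * 1024 * 1024             # x86-64 huge page size
SIZE = 256 * HUGE                  # 512 MiB anonymous mapping, illustrative

# Private anonymous mapping, the kind THP can back with 2M pages.
buf = mmap.mmap(-1, SIZE, flags=mmap.MAP_PRIVATE | mmap.MAP_ANONYMOUS)
buf.madvise(mmap.MADV_HUGEPAGE)    # hint: prefer huge pages for this range
for off in range(0, SIZE, HUGE):   # touch each 2M region to fault it in
    buf[off] = 1
# AnonHugePages in /proc/self/smaps shows how much was actually promoted.
buf.close()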