HugePage by default

Valdis.Kletnieks at vt.edu Valdis.Kletnieks at vt.edu
Wed Jul 30 20:26:35 EDT 2014


On Wed, 30 Jul 2014 18:26:39 -0500, Xin Tong said:

> I am planning to use this only for workloads with very large memory
> footprints, e.g. hadoop, tpcc, etc.

You might want to look at how your system gets booted.  I think you'll find
that it burns through 800 to 2000 or so processes, all of which are currently
tiny, but if every 4K allocation grabs 2M instead, you're quite likely to find
yourself tripping the OOM killer before hadoop ever gets launched.
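To put rough (made-up but plausible) numbers on it: 1000 processes times, say,
50 small mappings each is only about 200M of memory at 4K granularity, but
roughly 100G if every one of those mappings gets rounded up to 2M.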

You're probably *much* better off letting the current transparent hugepage code
do its work, since you'll only pay the coalescing cost once for each 2M that
hadoop uses.  And let's face it, that's only going to add up to fractions of a
second, and then hadoop is going to be banging on the TLB for hours or days.
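For concreteness, here's a minimal sketch of the opt-in route (assuming THP is
built into your kernel and /sys/kernel/mm/transparent_hugepage/enabled is set
to "always" or "madvise"): mark only the big region with
madvise(MADV_HUGEPAGE) and leave everything else on 4K pages.

/* Sketch only: opt one large allocation into transparent huge pages
 * instead of forcing 2M pages on every process system-wide. */
#define _GNU_SOURCE
#include <stdio.h>
#include <sys/mman.h>

int main(void)
{
    size_t len = 1UL << 30;           /* 1G region for the big workload */

    /* An anonymous mapping is page-aligned, as madvise() requires. */
    void *buf = mmap(NULL, len, PROT_READ | PROT_WRITE,
                     MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (buf == MAP_FAILED) {
        perror("mmap");
        return 1;
    }

    /* Ask the kernel to back this region with huge pages; khugepaged
     * coalesces it into 2M pages as the workload touches it. */
    if (madvise(buf, len, MADV_HUGEPAGE) != 0)
        perror("madvise(MADV_HUGEPAGE)");

    /* ... hand buf to the memory-hungry workload ... */
    munmap(buf, len);
    return 0;
}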

Don't spend time optimizing the wrong thing....

