<div dir="ltr"><div class="gmail_default" style="font-family:'courier new',monospace"></div><div class="gmail_default" style="font-family:courier new,monospace">How bad will the internal fragmentation be if 2M pages are used? Some of the small VMAs are stacks, shared libraries, and user-mmapped files. I assume the heap will be at least 2M, which is somewhat reasonable.</div>
<div class="gmail_default" style="font-family:courier new,monospace"><br></div><div class="gmail_default" style="font-family:courier new,monospace">Shared library VMAs can mostly be merged to form large VMAs, since they tend to have the same permissions, and only one stack is needed per thread. I think the big culprit for internal fragmentation here is user-mmapped files.</div>
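<div class="gmail_default" style="font-family:courier new,monospace"><br></div><div class="gmail_default" style="font-family:courier new,monospace">As a back-of-the-envelope check (the VMA sizes below are hypothetical, not measured from a real process), rounding each VMA up to a 2M boundary gives the worst-case waste per mapping:</div>

```python
# Worst-case internal fragmentation if every VMA is backed by 2M pages:
# each VMA wastes up to (2M - size % 2M) bytes of its last huge page.
HUGE = 2 * 1024 * 1024  # 2 MiB

def waste(vma_size):
    """Bytes lost to internal fragmentation for one VMA rounded up to 2M."""
    rem = vma_size % HUGE
    return 0 if rem == 0 else HUGE - rem

# Hypothetical VMA sizes: an 8K mapping, a 1.2M shared library
# segment, and a 3M user-mmapped file.
sizes = [8 * 1024, 1200 * 1024, 3 * 1024 * 1024]
print(sum(waste(s) for s in sizes))  # total wasted bytes
```

<div class="gmail_default" style="font-family:courier new,monospace">Even these three mappings waste almost 4M between them, so the overhead multiplies quickly across many small VMAs and many processes.</div>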
<div class="gmail_default" style="font-family:courier new,monospace"><br></div><div class="gmail_default" style="font-family:courier new,monospace">Am I right to think of it this way?</div><div class="gmail_default" style="font-family:courier new,monospace">
<br></div><div class="gmail_default" style="font-family:courier new,monospace">Xin</div><div class="gmail_extra"><div class="gmail_quote">On Wed, Jul 30, 2014 at 7:26 PM, <span dir="ltr"><<a href="mailto:Valdis.Kletnieks@vt.edu" target="_blank">Valdis.Kletnieks@vt.edu</a>></span> wrote:<br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div class="">On Wed, 30 Jul 2014 18:26:39 -0500, Xin Tong said:<br>
<br>
> I am planning to use this only for workloads with very large memory<br>
> footprints, e.g. hadoop, tpcc, etc.<br>
<br>
</div>You might want to look at how your system gets booted. I think you'll find<br>
that you burn through 800 to 2000 or so processes, all of which are currently<br>
tiny, but if you make every 4K allocation grab 2M instead, you're quite likely<br>
to find yourself tripping the OOM before hadoop ever gets launched.<br>
<br>
You're probably *much* better off letting the current code do its work,<br>
since you'll only pay the coalesce cost once for each 2M that hadoop uses.<br>
And let's face it, that's only going to sum up to fractions of a second, and<br>
then hadoop is going to be banging on the TLB for hours or days.<br>
<br>
Don't spend time optimizing the wrong thing....<br>
</blockquote></div><br></div></div>