<div dir="ltr"><div class="gmail_extra"><br><div class="gmail_quote">On Tue, Oct 6, 2015 at 10:32 AM, Yann Droneaud <span dir="ltr"><<a href="mailto:ydroneaud@opteya.com" target="_blank">ydroneaud@opteya.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div class="HOEnZb"><div class="h5">Le mardi 06 octobre 2015 à 10:13 -0400, Kenneth Adam Miller a écrit :<br>
><br>
><br>
> On Tue, Oct 6, 2015 at 9:58 AM, Yann Droneaud <<a href="mailto:ydroneaud@opteya.com">ydroneaud@opteya.com</a>><br>
> wrote:<br>
> > Le mardi 06 octobre 2015 à 09:26 -0400, Kenneth Adam Miller a écrit<br>
> > :<br>
> > >
> > > > Anybody know about the issue of assigning a process a region of
> > > > physical memory to use for its malloc and free? I'd like to just
> > > > have the process call through to a UIO driver with an ioctl, and
> > > > then once that's done it gets all its memory from a specific region.
> > > >
> > >
> > > You mean CONFIG_UIO_DMEM_GENIRQ (drivers/uio/uio_dmem_genirq.c)?
> > >
> > > See:
> > > http://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/commit/?id=0a0c3b5a24bd802b1ebbf99e0b01296647b8199b
> > > http://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/commit/?id=b533a83008c3fb4983c1213276790cacd39b518f
> > > https://www.kernel.org/doc/htmldocs/uio-howto/using-uio_dmem_genirq.html
> > >
> >
> > Well, I don't think that does exactly what I would like, although I've
> > got that on my machine and I've been compiling it and learning from
> > it. Here's my understanding of the way mmap works:
> >
> > mmap() is called from userland and maps a region of memory of a
> > certain size according to the parameters given to it; on success, its
> > return value is the address at which the requested block starts (for
> > brevity I'm not addressing the unsuccessful case here). The userland
> > process then has only a pointer to a region of space, as if it had
> > allocated something with new or malloc. Further calls to new or malloc
> > don't mean that the pointers returned will reside within the newly
> > mmap'd chunk; they are just separate regions also mapped into the
> > process.
> >
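For concreteness, here's a minimal sketch of the mapping step I mean. The
/dev/uio0 node and the 1 MiB size are assumptions on my part; real code
would read the size of map 0 from /sys/class/uio/uio0/maps/map0/size.

#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
    size_t len = 1 << 20;                      /* assumed size of map 0 */
    int fd = open("/dev/uio0", O_RDWR);        /* assumed device node   */
    if (fd < 0) {
        perror("open");
        return 1;
    }

    /* For UIO devices the mmap offset selects the mapping: an offset of
     * N * page size maps region N, so offset 0 maps region 0. */
    void *base = mmap(NULL, len, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    if (base == MAP_FAILED) {
        perror("mmap");
        close(fd);
        return 1;
    }

    /* 'base' is just one more mapping in the process's address space;
     * subsequent malloc()/new calls know nothing about it. */
    printf("UIO region mapped at %p\n", base);

    munmap(base, len);
    close(fd);
    return 0;
}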
> You have to write your own custom allocator using the mmap()'ed memory
> you retrieved from UIO.

I know about C++'s placement new. But I'd prefer not to have to write my
userland code in such a way; I want my userland code to remain agnostic of
where it gets the memory from. I just want to put a small prologue in my
main, and then have the rest of the program oblivious to the change.
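Something like the toy below is roughly what I have in mind. It is only a
sketch (a bump allocator: free() never reuses memory, realloc() always
copies, nothing is thread-safe), and /dev/uio0 plus the 64 MiB size are
again assumptions. Because the symbols are defined in the executable (or
in an LD_PRELOAD'ed library), they override glibc's malloc family, and
C++'s default operator new typically routes through malloc, so the rest
of the program stays oblivious.

#include <fcntl.h>
#include <stddef.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

#define POOL_SIZE ((size_t)64 << 20)      /* assumed size of UIO map 0 */

static unsigned char *pool;
static size_t pool_off;

static void pool_init(void)
{
    /* No stdio here: printf() could call malloc() and recurse. */
    int fd = open("/dev/uio0", O_RDWR);
    if (fd < 0)
        _exit(42);
    pool = mmap(NULL, POOL_SIZE, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    close(fd);
    if (pool == MAP_FAILED)
        _exit(42);
}

void *malloc(size_t n)
{
    if (!pool)
        pool_init();                      /* lazy init: glibc may allocate
                                             before main() even runs */
    n = (n + 15) & ~(size_t)15;           /* keep 16-byte alignment */
    if (n > POOL_SIZE - pool_off)
        return NULL;                      /* hard limit reached */
    void *p = pool + pool_off;
    pool_off += n;
    return p;
}

void free(void *p)
{
    (void)p;                              /* a real allocator would recycle */
}

void *calloc(size_t nmemb, size_t size)
{
    size_t n = nmemb * size;              /* overflow check omitted */
    void *p = malloc(n);
    if (p)
        memset(p, 0, n);
    return p;
}

void *realloc(void *old, size_t n)
{
    void *p = malloc(n);
    if (p && old)
        memcpy(p, old, n);                /* may read past 'old'; toy only */
    return p;
}

With the lazy initialization there isn't even a prologue to add to main;
the first malloc() performs the mapping. Anything serious would of course
want a real free list instead of a bump pointer.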
<span class=""><br>
> What I would like is a region of memory that, once mapped to a<br>
> process, further calls to new/malloc return pointers that preside<br>
> within this chunk. Calls to new/malloc and delete/free only edit the<br>
> process's internal table, which is fine.<br>
><br>
> Is that wrong? Or is it that mmap already does the latter?<br>
<br>
> It's likely wrong. glibc's malloc() uses brk() and mmap() to allocate
> anonymous pages. Tricking this implementation into using another means
> of retrieving memory is left to the reader.
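If I understand correctly, the glibc-specific trick you hint at used to be
possible through the __morecore hook (present in older glibc, removed in
2.34), by pointing the main arena's "sbrk" at the UIO region and forbidding
malloc from falling back to mmap(). A sketch, assuming the region was
already mapped by a prologue as above; this is shown for the idea rather
than as advice:

#include <malloc.h>   /* __morecore, mallopt, M_MMAP_MAX, M_TRIM_THRESHOLD */
#include <stddef.h>

static unsigned char *uio_base;   /* start of the mmap()'ed UIO region */
static size_t uio_size;           /* its length */
static size_t uio_brk;            /* fake "program break" inside it */

/* Behaves like sbrk() but hands out pieces of the UIO mapping.  Returning
 * NULL on exhaustion mirrors glibc's __default_morecore(). */
static void *uio_morecore(ptrdiff_t increment)
{
    if (increment > 0 && (size_t)increment > uio_size - uio_brk)
        return NULL;                       /* region exhausted */
    void *prev = uio_base + uio_brk;
    uio_brk += increment;                  /* increment may be 0 or negative */
    return prev;
}

/* The "small prologue": call this from the top of main() with the pointer
 * and length obtained from the UIO mmap(). */
void use_uio_heap(void *base, size_t len)
{
    uio_base = base;
    uio_size = len;
    uio_brk = 0;
    __morecore = uio_morecore;             /* heap growth comes from UIO   */
    mallopt(M_MMAP_MAX, 0);                /* never satisfy malloc via mmap */
    mallopt(M_TRIM_THRESHOLD, -1);         /* never try to give memory back */
}

Allocations glibc makes before main() still come from the ordinary heap,
and since the hook is gone in current glibc, interposing malloc() as in
the previous sketch is the more portable route.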
> Anyway, are you sure you want any random calls to malloc() (from glibc
> itself or any other linked-in library) to eat UIO-allocated buffers?
> I don't think so: such physically contiguous, cache-coherent buffers
> are premium resources; you don't want to distribute them gratuitously.

Yes - we have a hard limit on memory for our processes, and if they try to
use more than what we mmap to them, they die, and we're more than fine with
that. In fact, that's part of our use case and model: we plan to run just 5
or so processes on our behemoth machine with gigabytes of memory. So they
aren't so premium to us.
> Regards.
>
> --
> Yann Droneaud
> OPTEYA