the cost of vmalloc
michi1 at michaelblizek.twilightparadox.com
Fri Sep 16 01:40:58 EDT 2011
On 21:50 Thu 15 Sep, Zheng Da wrote:
> On Sun, Sep 11, 2011 at 2:32 AM, Michael Blizek
> <michi1 at michaelblizek.twilightparadox.com> wrote:
> >> >> The whole point of allocating a large chunk of memory is to avoid
> >> >> extra memory copy because I need to run decompression algorithms on
> >> >> it.
> >> >
> >> > In this case scatterlists solve two problems at once. First, you will not need
> >> > to allocate large contiguous memory regions. Second, you avoid wasting memory.
> >> The problem is that the decompression library works on contiguous
> >> memory, so I have to provide contiguous memory instead of
> >> scatterlists.
> > Which decompression lib are you talking about? Even if it does not have
> > explicit support for scatterlists, usually you should be able to call the
> > decompress function multiple times. Otherwise, how would you (de)compress data
> > which is larger than available memory?
> Sorry for the late reply.
> I'm using LZO. The data is compressed in blocks of 128KB. I don't
> think I can split the compressed block and run LZO decompressor on the
> pieces multiple times.
> There is a lot of free memory, but the kernel can't find contiguous
> memory sometimes. vmalloc always succeeds when kmalloc fails.
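The fallback described above (kmalloc first, vmalloc when no contiguous region is available) is a common pattern; modern kernels expose it as kvmalloc()/kvfree(), but it can be open-coded. A minimal sketch, with hypothetical helper names (buf_alloc/buf_free are not kernel APIs):

```c
#include <linux/slab.h>
#include <linux/vmalloc.h>
#include <linux/mm.h>

/* Hypothetical helper: try a physically contiguous allocation first,
 * and fall back to vmalloc when the page allocator cannot find a
 * large enough contiguous region. __GFP_NOWARN suppresses the
 * allocation-failure warning for the expected kmalloc failures. */
static void *buf_alloc(size_t size)
{
	void *p = kmalloc(size, GFP_KERNEL | __GFP_NOWARN);

	if (!p)
		p = vmalloc(size);
	return p;
}

static void buf_free(void *p)
{
	/* is_vmalloc_addr() tells us which allocator the pointer
	 * came from, so the matching free routine can be called. */
	if (is_vmalloc_addr(p))
		vfree(p);
	else
		kfree(p);
}
```

Note that vmalloc memory is only virtually contiguous, so it is fine for a CPU-side decompressor but cannot be handed directly to DMA.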
Yes, it really does look as if LZO currently does not support scatterlists.
The change looks fairly simple to me, but apparently it has no maintainer
:-( .
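Even without scatterlist support, the in-kernel lzo1x_decompress_safe() only needs one block's worth of contiguous memory per call, since each 128KB block is compressed independently. A sketch of that per-block loop, assuming comp_lens[] (the per-block compressed sizes) comes from the on-disk format:

```c
#include <linux/lzo.h>
#include <linux/errno.h>
#include <linux/types.h>

#define BLOCK_SIZE (128 * 1024)

/* Decompress a stream of independently compressed 128KB blocks one
 * at a time, so only BLOCK_SIZE of contiguous output (plus one
 * compressed block of input) is needed at any moment, instead of a
 * single large buffer for the whole stream. */
static int decompress_blocks(const u8 *src, const size_t *comp_lens,
			     int nblocks, u8 *dst)
{
	int i;

	for (i = 0; i < nblocks; i++) {
		size_t out_len = BLOCK_SIZE;
		int ret = lzo1x_decompress_safe(src, comp_lens[i],
						dst, &out_len);

		if (ret != LZO_E_OK)
			return -EINVAL;
		src += comp_lens[i];
		dst += out_len;
	}
	return 0;
}
```

With this layout, dst can itself be a set of per-block allocations rather than one large region, which sidesteps the contiguous-allocation problem entirely.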
programming a layer 3+4 network protocol for mesh networks