the cost of vmalloc

Zheng Da zhengda1936 at gmail.com
Thu Sep 15 21:50:20 EDT 2011


Hi,

On Sun, Sep 11, 2011 at 2:32 AM, Michael Blizek
<michi1 at michaelblizek.twilightparadox.com> wrote:
>> >> The whole point of allocating a large chunk of memory is to avoid an
>> >> extra memory copy, because I need to run decompression algorithms on
>> >> it.
>> >
>> > In this case scatterlists solve two problems at once. First, you will not
>> > need to allocate large contiguous memory regions. Second, you avoid
>> > wasting memory.
>> The problem is that the decompression library works on contiguous
>> memory, so I have to provide contiguous memory instead of
>> scatterlists.
>
> Which decompression lib are you talking about? Even if it does not have
> explicit support for scatterlists, usually you should be able to call the
> decompress function multiple times. Otherwise, how would you (de)compress data
> which is larger than available memory?
Sorry for the late reply.

I'm using LZO. The data is compressed in blocks of 128KB, and I don't
think I can split a compressed block and run the LZO decompressor on
the pieces in multiple calls; the decompressor needs the whole block
in one contiguous buffer.
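
For concreteness, the call looks roughly like this with the in-kernel
LZO API from linux/lzo.h (decompress_block and the error handling are
just illustrative, not my actual code):

#include <linux/lzo.h>
#include <linux/errno.h>

#define BLOCK_SIZE (128 * 1024)

/* Decompress one 128KB block.  Both src and dst must be virtually
 * contiguous, which is why a scatterlist does not help here. */
static int decompress_block(const unsigned char *src, size_t src_len,
			    unsigned char *dst)
{
	size_t dst_len = BLOCK_SIZE;
	int ret;

	ret = lzo1x_decompress_safe(src, src_len, dst, &dst_len);
	if (ret != LZO_E_OK)
		return -EINVAL;
	return 0;
}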
There is plenty of free memory, but physical memory gets fragmented,
so the kernel sometimes cannot find a physically contiguous region of
that size and kmalloc fails. vmalloc always succeeds in those cases,
because it only needs pages that are contiguous in the virtual
address space.
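
The workaround that follows from this is the usual "try kmalloc, fall
back to vmalloc" pattern; a minimal sketch (alloc_block/free_block are
made-up names), assuming the buffer is never handed to DMA:

#include <linux/slab.h>
#include <linux/vmalloc.h>
#include <linux/mm.h>

/* Try a physically contiguous allocation first; when memory is too
 * fragmented for kmalloc, fall back to vmalloc, which only needs
 * virtually contiguous pages.  __GFP_NOWARN avoids the stack trace
 * the kernel prints on the expected kmalloc failure. */
static void *alloc_block(size_t size)
{
	void *buf = kmalloc(size, GFP_KERNEL | __GFP_NOWARN);

	if (!buf)
		buf = vmalloc(size);
	return buf;
}

/* Free with the allocator that actually provided the memory. */
static void free_block(void *buf)
{
	if (is_vmalloc_addr(buf))
		vfree(buf);
	else
		kfree(buf);
}

The trade-off (the "cost of vmalloc" in the subject) is that vmalloc
has to set up page table entries and its mappings add TLB pressure, so
kmalloc is still preferred whenever it succeeds.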

Thanks,
Da


