Questions about slab and mmap

Le Tan tamlokveer at gmail.com
Mon Feb 24 20:05:21 EST 2014


Yes, CONFIG_STRICT_DEVMEM and CONFIG_X86_PAT were both on. I turned them
off and recompiled the kernel, but the error is still there. I don't
understand why everything is fine when I mmap() and access the memory,
yet goes bad when I munmap() it. Thanks for your help.
By the way, I use kmem_cache_create("logger_data"...) in my module,
but there is no "logger_data" entry in /proc/slabinfo. Do you know why?
Any suggestions?
Thanks!

2014-02-25 1:12 GMT+08:00 Jeff Haran <Jeff.Haran at citrix.com>:
>> -----Original Message-----
>> From: kernelnewbies-bounces at kernelnewbies.org [mailto:kernelnewbies-bounces at kernelnewbies.org] On Behalf Of Le Tan
>> Sent: Sunday, February 23, 2014 5:39 AM
>> To: kernelnewbies at kernelnewbies.org
>> Subject: Questions about slab and mmap
>>
>> Hello, I am writing a driver module that uses the slab allocator and
>> mmap(). I have run into some problems and need help.
>> 1. I use kmem_cache_create() to create my own slab cache. It returns
>> successfully, but I can't find my cache in /proc/slabinfo. Why?
>> 2. If I want the objects in the slab cache to be page-aligned, what
>> should I do? I want to allocate 4096-byte buffers, and each buffer
>> should fit entirely within one page.
>> 3. My driver module maps pages, allocated by kmem_cache_alloc(), into
>> userspace through mmap(). In the userspace program, I first mmap() my
>> driver device, then read something from the returned address, then
>> munmap() it. Sometimes after the program calls munmap(), error
>> messages like this are printed to the log:
>> [42522.596729] BUG: Bad page map in process logger_pro
>> pte:8000000612a30025 pmd:314175067
>> What are the possible reasons?
>>
>>
>> Now I will explain my problems in detail. My module is called
>> "logger". Logger maintains a list of pages (each 4096B). The kernel
>> puts data into logger, and logger appends it to the list of pages in
>> order. When a page is full, logger allocates another page and fills
>> that with data. For efficiency, I use the slab allocator.
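Roughly, each node in that list looks like this (a simplified sketch with
invented names; the real structure has more fields):

```c
#include <linux/list.h>
#include <linux/types.h>

#define LOGGER_PAGE_SIZE 4096

/* One 4096-byte buffer in logger's list, filled front to back. */
struct logger_page {
	struct list_head link;	/* chained in arrival order */
	size_t used;		/* bytes filled so far */
	char *data;		/* 4096B buffer from the slab cache */
};
```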
>>
>> data_cache = kmem_cache_create("logger_data", 4096, 0, 0, NULL);
>>                 /* creates my own cache; called in my module's init function */
>> kmem_cache_alloc(data_cache, GFP_ATOMIC);
>>                 /* allocates a new page; GFP_ATOMIC avoids sleeping; called elsewhere */
>>
>> Then I insmod logger. There is no error message, and my driver seems
>> to work. However, I can't find the corresponding entry in
>> /proc/slabinfo. Why?
>>
>> I then wrote a userspace program called "logger_pro" to read data from
>> logger into a file. For efficiency, I use mmap() to map one page at a
>> time, then write the data to the file. It looks like this:
>>
>> while (1) {
>>       /* always map the same address and offset (both zero);
>>          logger handles this */
>>       str = mmap(NULL, 4096, PROT_READ, MAP_SHARED, fd, 0);
>>
>>       fwrite(str, 4096, 1, fp);
>>
>>       munmap(str, 4096);
>> }
>>
>> In the logger module, the logger_vma_fault() function always returns
>> the first page in the list. When logger_pro calls munmap(), logger's
>> logger_vma_close() function is called. In logger_vma_close(), logger
>> deletes the page that was just mapped, using kmem_cache_free(). So,
>> although logger_pro always mmap()s the same address and logger always
>> returns the first page, logger_pro will eventually read all the data
>> out of logger, and logger will delete all the pages that have been
>> read. Everything seems to be OK. But sometimes there are errors in
>> the log:
>>
>> [42522.596689] logger_mmap():start:7f8ff57be000, end:7f8ff57bf000
>>                 // this is the mmap() function of my device module
>> [42522.596694] logger_vma_fault():vmf->pgoff:0d,start:7f8ff57be000,pgoff:0,offset:0
>>                 // this is the fault function of struct vm_operations_struct
>> [42522.596729] BUG: Bad page map in process logger_pro
>> pte:8000000612a30025 pmd:314175067        // this is the error
>> [42522.596740] page:ffffea00184a8c00 count:2 mapcount:-2146959356
>> mapping:          (null) index:0xffff880612a36000
>> [42522.596747] page flags: 0x200000000004080(slab|head)
>> [42522.596811] addr:00007f8ff57be000 vm_flags:04040071 anon_vma:
>>    (null) mapping:ffff880613b25f08 index:0
>> [42522.596824] vma->vm_ops->fault: logger_vma_fault+0x0/0x140 [logger]
>> [42522.596834] vma->vm_file->f_op->mmap: logger_mmap+0x0/0xd50 [logger]
>> [42522.596842] CPU: 1 PID: 21571 Comm: logger_pro Tainted: G    B
>> IO 3.11.0+ #1
>> [42522.596844] Hardware name: Dell Inc. PowerEdge M610/000HYJ, BIOS
>> 2.0.13 04/06/2010
>> [42522.596846]  00007f8ff57be000 ffff880314199c98 ffffffff816ad166
>> 0000000000006959
>> [42522.596851]  ffff880314539a98 ffff880314199ce8 ffffffff8114e270
>> ffffea00184a8c00
>> [42522.596854]  0000000000000000 ffff880314199cc8 00007f8ff57be000
>> ffff880314199e18
>> [42522.596858] Call Trace:
>> [42522.596867]  [<ffffffff816ad166>] dump_stack+0x46/0x58
>> [42522.596872]  [<ffffffff8114e270>] print_bad_pte+0x190/0x250
>> [42522.596877]  [<ffffffff8115027b>] unmap_single_vma+0x6cb/0x7a0
>> [42522.596880]  [<ffffffff81150bd4>] unmap_vmas+0x54/0xa0
>> [42522.596885]  [<ffffffff81155aa7>] unmap_region+0xa7/0x110
>> [42522.596888]  [<ffffffff81157f97>] do_munmap+0x1f7/0x3e0
>> [42522.596891]  [<ffffffff811581ce>] vm_munmap+0x4e/0x70
>> [42522.596904]  [<ffffffff811591bb>] SyS_munmap+0x2b/0x40
>> [42522.596915]  [<ffffffff816bc9c2>] system_call_fastpath+0x16/0x1b
>> [42522.596920] logger_vma_close():start:7f8ff57be000,
>> end:7f8ff57bf000, vmas:0        // this is the close function of struct vm_operations_struct
>>
>> So what is wrong here? Any suggestions?
>> Thanks! I'm sorry this email is a little messy.
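Looking at the dump again, I notice the page flags line says "slab|head":
the page being unmapped still belongs to the slab allocator. My guess is
that the unmap path expects normally refcounted pages, so inserting
slab-backed memory into a VMA and then releasing it with kmem_cache_free()
corrupts the page's count/mapcount and produces exactly this "Bad page
map" report (the huge negative mapcount above fits that). Perhaps I should
back the mapping with whole pages from the page allocator instead,
something like this sketch (not my module's actual code; the 3.11-era
fault signature):

```c
#include <linux/mm.h>
#include <linux/gfp.h>

/* Hypothetical fault handler: back each mapping with a real page from
 * the page allocator rather than with a slab object. */
static int logger_vma_fault(struct vm_area_struct *vma, struct vm_fault *vmf)
{
	struct page *page;

	page = alloc_page(GFP_KERNEL | __GFP_ZERO);
	if (!page)
		return VM_FAULT_OOM;

	/* ... copy the next 4096 bytes of logged data into the page ... */

	/* Hand the reference from alloc_page() to the mapping; when the
	 * data has been consumed, release the page with put_page(), not
	 * kmem_cache_free(). */
	vmf->page = page;
	return 0;
}
```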
>>
>> _______________________________________________
>> Kernelnewbies mailing list
>> Kernelnewbies at kernelnewbies.org
>> http://lists.kernelnewbies.org/mailman/listinfo/kernelnewbies
>
> I am not sure what kind of target system you are running this on (I don't know what a Dell PowerEdge is), but be aware that on x86_64 there are kernel configurations that would be problematic for what you are trying to do if your kernel has them enabled. Check whether CONFIG_STRICT_DEVMEM or CONFIG_X86_PAT is defined in your kernel. If either is, you'll have problems mapping kernel memory to user space.
>
> Jeff Haran
>
