Tracing allocators of virtual memory and main memory

sahil aggarwal sahil.agg15 at gmail.com
Thu Mar 12 02:07:09 EDT 2015


Sample Output:


  mem-3374  [005] 589012.489483: mm_fault: (do_page_fault+0x3b3/0x3d8 <- handle_mm_fault) arg1=512
  mem-3374  [005] 589012.489486: mm_fault: (do_page_fault+0x3b3/0x3d8 <- handle_mm_fault) arg1=512
  mem-3374  [005] 589012.489489: mm_fault: (do_page_fault+0x3b3/0x3d8 <- handle_mm_fault) arg1=512
  mem-3374  [005] 589012.489493: mm_fault: (do_page_fault+0x3b3/0x3d8 <- handle_mm_fault) arg1=512
  mem-3374  [005] 589012.489495: sys_brk -> 0x23f1000
  mem-3374  [005] 589012.489500: kmem_cache_alloc: call_site=ffffffff810fda40 ptr=ffff880fbe4bf298 bytes_req=176 bytes_alloc=176 gfp_flags=GFP_KERNEL|GFP_ZERO
  mem-3374  [005] 589012.489501: sys_brk -> 0x2417000
  mem-3374  [005] 589012.489504: mm_fault: (do_page_fault+0x3b3/0x3d8 <- handle_mm_fault) arg1=512
  mem-3374  [005] 589012.489511: mm_page_alloc: page=ffffea00375619f0 pfn=16578386 order=0 migratetype=0 gfp_flags=GFP_KERNEL|GFP_REPEAT|GFP_ZERO
  mem-3374  [005] 589012.489512: kmem_cache_alloc: call_site=ffffffff81101422 ptr=ffff880fb765b6a8 bytes_req=48 bytes_alloc=48 gfp_flags=GFP_KERNEL
  mem-3374  [005] 589012.489513: kmem_cache_alloc: call_site=ffffffff81101454 ptr=ffff880fbe4c63a0 bytes_req=64 bytes_alloc=64 gfp_flags=GFP_KERNEL
  mem-3374  [005] 589012.489518: mm_page_alloc: page=ffffea0034cccf00 pfn=15818528 order=0 migratetype=2 gfp_flags=GFP_HIGHUSER_MOVABLE|GFP_ZERO
  mem-3374  [005] 589012.489520: mm_fault: (do_page_fault+0x3b3/0x3d8 <- handle_mm_fault) arg1=0
  mem-3374  [005] 589012.489526: mm_page_alloc: page=ffffea0034b8d2e0 pfn=15795140 order=0 migratetype=2 gfp_flags=GFP_HIGHUSER_MOVABLE|GFP_ZERO
  mem-3374  [005] 589012.489527: mm_fault: (do_page_fault+0x3b3/0x3d8 <- handle_mm_fault) arg1=0
  mem-3374  [005] 589012.489534: mm_fault: (do_page_fault+0x3b3/0x3d8 <- handle_mm_fault) arg1=512
  mem-3374  [005] 589012.489536: mm_fault: (do_page_fault+0x3b3/0x3d8 <- handle_mm_fault) arg1=512
  mem-3374  [005] 589012.489552: mark_page_acc: (unmap_vmas+0x553/0x812 <- mark_page_accessed)
  mem-3374  [005] 589012.489556: mark_page_acc: (unmap_vmas+0x553/0x812 <- mark_page_accessed)
  mem-3374  [005] 589012.489557: mark_page_acc: (unmap_vmas+0x553/0x812 <- mark_page_accessed)



These are some lines of trace output for a program that asks for 4096*5 bytes but does not touch them. If it did touch those pages, the number of mm_page_alloc events would increase. So, is capturing brk, mmap and mm_page_alloc the correct way to analyze how many pages a thread asked for and how many of them it actually used?

Thank you
Regards
Sahil

On 12 March 2015 at 08:40, SAHIL <sahil.agg15 at gmail.com> wrote:
> Hi Valdis
>
> Actually I want to see how many virtual pages in total it asked for, how many it actually used, how many were put to swap, how many major page faults happened, and how many faults were handled from swap.
> In short, a whole page-level analysis of the thread.
>
> Regards
> Sahil Aggarwal
> Contact-9988439647
>
>> On Mar 12, 2015, at 7:24 AM, Valdis.Kletnieks at vt.edu wrote:
>>
>> On Thu, 12 Mar 2015 07:09:32 +0530, SAHIL said:
>>
>>> Yeah right, pidstat which reads /proc gives me VSZ and RSS, but I need to
>>> backtrace when VSZ/RSS is high, which indicates the process is allocating memory
>>> which it is not even using.
>>
>> Do you mean pages it isn't *currently* using, or has *never* used?
>>
>> Also, note that VSZ refers to the virtual size, which may include
>> pages currently out on swap, while RSS refers to actually resident pages.
>>
>> And then there's the other great bugaboo, shared pages that are mapped by more
>> than one process.
>>
>> What exactly are you trying to do?
