Query regarding memory accesses and page faults in a virtual machine using KVM and QEMU

ravali pullela rpravali069 at gmail.com
Mon Sep 29 06:39:26 EDT 2014


Hello everyone,

 I am using KVM, and the VM's OS is Ubuntu 14.04. The host hardware is an
Intel Core i5 running Ubuntu 14.04.

 Background: I am trying to track the number of pages accessed by a VM in a
certain period of time. To do this, for every page in the host virtual
address space of the VM process that is present and not protected (i.e.,
PROTNONE != 1), I mark its PTE as NOT PRESENT AND PROTECTED (by clearing
the PRESENT bit and setting the PROTNONE bit in the PTE entry).

 That is, for pages belonging to the VM process, if they are present and
not protected, I mark their PTE entries to reflect that they are not
present but protected. Roughly, the marking step looks like the sketch
below.
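
 This is a simplified sketch (the VMA walk and error handling are omitted,
mark_pte_tracked() is just an illustrative name, and it assumes an x86
host with the older four-level page-table layout, i.e., no p4d level):

#include <linux/mm.h>
#include <asm/pgtable.h>

/* Mark one user PTE of the VM process as "not present but protected". */
static void mark_pte_tracked(struct mm_struct *mm, unsigned long addr)
{
	pgd_t *pgd;
	pud_t *pud;
	pmd_t *pmd;
	pte_t *ptep, pte;
	spinlock_t *ptl;

	pgd = pgd_offset(mm, addr);
	if (pgd_none(*pgd) || pgd_bad(*pgd))
		return;
	pud = pud_offset(pgd, addr);
	if (pud_none(*pud) || pud_bad(*pud))
		return;
	pmd = pmd_offset(pud, addr);
	if (pmd_none(*pmd) || pmd_bad(*pmd))
		return;

	ptep = pte_offset_map_lock(mm, pmd, addr, &ptl);
	pte = *ptep;
	/* Only touch PTEs that are present and not already protected. */
	if (pte_present(pte) && !(pte_flags(pte) & _PAGE_PROTNONE)) {
		pte = pte_clear_flags(pte, _PAGE_PRESENT);
		pte = pte_set_flags(pte, _PAGE_PROTNONE);
		set_pte_at(mm, addr, ptep, pte);
	}
	pte_unmap_unlock(ptep, ptl);
}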

 The idea is that whenever such a page is accessed, there has to be a
fault because the PRESENT bit is cleared. Inside the fault handler
handle_pte_fault(), I registered a kernel-module function which resets the
PTE bits for the faulting address. [Resetting means making PRESENT = 1 and
PROTNONE = 0.] The reset step looks roughly like the sketch below.
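
 (Again simplified; tracker_fault_hook() is just an illustrative name for
the function I hook into handle_pte_fault(), and the locking and recording
details are omitted):

/* Called from handle_pte_fault() for a faulting user address. */
static void tracker_fault_hook(struct mm_struct *mm, unsigned long addr,
			       pte_t *ptep)
{
	pte_t pte = *ptep;

	/* Undo the marking so the access can proceed normally. */
	if (pte_flags(pte) & _PAGE_PROTNONE) {
		pte = pte_set_flags(pte, _PAGE_PRESENT);
		pte = pte_clear_flags(pte, _PAGE_PROTNONE);
		set_pte_at(mm, addr, ptep, pte);
		/* ... record addr as accessed in the tracker's log ... */
	}
}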

 I am treating the VM like any other normal process on the host.

 Current scenario: I am testing the above by running a sample test program
inside the VM which does the following (a sketch of the program appears
after the list).

 1) Mallocs 200 MB, which is 51200 pages.

 2) Accesses all the pages to make sure that they are brought into memory.

 3) Sleeps for, say, 10 seconds to allow the tracker to mark the pages.

 4) Accesses all the pages again [which is expected to generate faults].

 I ran this test process a couple of times.

 My tracker runs on the host.
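
 The guest-side test program is essentially this (userspace C, run inside
the VM):

#include <stdlib.h>
#include <unistd.h>

#define SIZE (200UL * 1024 * 1024)	/* 200 MB = 51200 4 KB pages */
#define PAGE 4096UL

int main(void)
{
	unsigned long off;
	char *buf = malloc(SIZE);

	if (!buf)
		return 1;

	/* Touch every page so it is faulted into memory. */
	for (off = 0; off < SIZE; off += PAGE)
		buf[off] = 1;

	sleep(10);	/* give the host-side tracker time to mark pages */

	/* Touch every page again; each access should now fault. */
	for (off = 0; off < SIZE; off += PAGE)
		buf[off] = 2;

	free(buf);
	return 0;
}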

 Issue: My tracker module reports very few pages as accessed, which is in
turn because very few faults occur: around 350 faults instead of the 51200
that are expected. Also, after the pages are marked and before they are
accessed, I do a flush_tlb_all() on the host to avoid stale direct
translations; the flush is the snippet below.
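
 (flush_tlb_all() flushes the TLB on every CPU; tracker_after_marking() is
just an illustrative wrapper):

#include <asm/tlbflush.h>

/* Called once after all of the VM's PTEs have been marked. */
static void tracker_after_marking(void)
{
	flush_tlb_all();
}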

 Note: I disabled the KSM and huge-page features.

 I tested the tracker on a process running directly on the host which
malloced 900 MB and accessed all the pages, and the tracker correctly
reported that number of pages as dirtied. So the issue happens only in the
case of tracking VMs.

 Queries: 1) Are all the TLB entries flushed when flush_tlb_all() is done,
or are the TLB entries of the VM tagged separately (e.g., by VPID) so that
the flush misses them?

 2) Can anyone please offer guidance as to what could be going wrong? Any
comments on steps of the procedure that could go wrong would help, and any
references would too, as I am new to this QEMU/KVM memory-management area.

 Thanks,

 Ravali