How to measure the RAM read/write performance
Arun KS
getarunks at gmail.com
Mon Mar 4 07:16:59 EST 2013
On Mon, Mar 4, 2013 at 2:20 PM, sandeep kumar <coolsandyforyou at gmail.com> wrote:
> >> I want to do a DMA from a mem-mapped I/O (let's say physical 0x40000000)
> >> to RAM location @ 0x80000000.
> >> How do you do this?
> Can you please answer the above question also?
The "mmap and DMA" chapter of Linux Device Drivers covers this:
http://www.xml.com/ldd/chapter/book/ch13.html
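
Roughly, the flow is: get hold of a channel on your SoC's DMA controller, hand
it the source and destination bus addresses, submit the descriptor, and wait
for the completion callback. Below is an untested sketch using the generic
dmaengine API. The names demo_dma_copy/demo_dma_done are just examples, and
whether your DMA controller offers a DMA_MEMCPY-capable channel and accepts a
memory-mapped I/O address as the source is hardware specific. In real code the
destination should come from the DMA mapping API (dma_map_single or
dma_alloc_coherent) rather than a hard-coded address; the two addresses below
are simply the ones from your question.

#include <linux/dmaengine.h>
#include <linux/dma-mapping.h>
#include <linux/completion.h>
#include <linux/errno.h>

static void demo_dma_done(void *arg)
{
	complete(arg);			/* wake up the waiter below */
}

static int demo_dma_copy(void)
{
	dma_cap_mask_t mask;
	struct dma_chan *chan;
	struct dma_async_tx_descriptor *tx;
	struct completion done;
	dma_addr_t src = 0x40000000;	/* memory-mapped I/O (physical) */
	dma_addr_t dst = 0x80000000;	/* RAM reserved for this test (physical) */
	size_t len = 4096;

	/* ask the dmaengine core for any channel that can do plain memcpy */
	dma_cap_zero(mask);
	dma_cap_set(DMA_MEMCPY, mask);
	chan = dma_request_channel(mask, NULL, NULL);
	if (!chan)
		return -ENODEV;

	tx = chan->device->device_prep_dma_memcpy(chan, dst, src, len,
						   DMA_PREP_INTERRUPT);
	if (!tx) {
		dma_release_channel(chan);
		return -EIO;
	}

	init_completion(&done);
	tx->callback = demo_dma_done;
	tx->callback_param = &done;
	dmaengine_submit(tx);		/* queue the descriptor */
	dma_async_issue_pending(chan);	/* kick the hardware */
	wait_for_completion(&done);	/* demo_dma_done() fires when the copy ends */

	dma_release_channel(chan);
	return 0;
}

If your SoC's DMA controller has no dmaengine driver, the same idea applies:
map or reserve the destination buffer and program the controller's registers
directly.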
>
> Thanks
> Sandeep
>
>
> On Mon, Mar 4, 2013 at 12:15 PM, Arun KS <getarunks at gmail.com> wrote:
>>
>> Hi Sandeep,
>>
>> On Sat, Mar 2, 2013 at 12:21 PM, sandeep kumar
>> <coolsandyforyou at gmail.com> wrote:
>> >> Another easy way to make memory (i.e. pages) non-cacheable is to use
>> >> the function below:
>> >> dma_alloc_coherent(NULL, size, &p, GFP_KERNEL);
>> >
>> > I did what you said. With the timings, I can see it is reading directly
>> > from RAM. I have some doubts:
>> > --> What exactly happened here? Are the reads/writes done by the CPU or
>> > by the DMA controller?
>>
>> If you have used the virtual address returned by dma_alloc_coherent, then
>> it is the CPU that has done the reads/writes.
>>
>> dma_alloc_coherent allocates memory and sets the page table attributes for
>> that memory to non-cacheable. Whenever the CPU accesses this memory it
>> generates a virtual address, which the MMU converts to a physical address
>> using the page tables. The page table entries carry memory attributes that
>> mark this memory as non-cacheable, so every access goes to main memory
>> instead of the cache.
>>
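>> As a rough illustration only (an untested sketch; the function and variable
>> names are just examples), timing uncached writes through a
>> dma_alloc_coherent() buffer could look like this. A struct device is passed
>> explicitly here; the NULL used above also works on the kernel discussed in
>> this thread. Reads can be timed the same way.
>>
>> #include <linux/dma-mapping.h>
>> #include <linux/errno.h>
>> #include <linux/gfp.h>
>> #include <linux/jiffies.h>
>> #include <linux/kernel.h>
>>
>> /* Write 'size' bytes through an uncached mapping and report the time taken. */
>> static int measure_uncached_writes(struct device *dev, size_t size)
>> {
>>         dma_addr_t phys;
>>         u32 *buf;
>>         unsigned long t0, t1;
>>         size_t i;
>>
>>         buf = dma_alloc_coherent(dev, size, &phys, GFP_KERNEL);
>>         if (!buf)
>>                 return -ENOMEM;
>>
>>         t0 = jiffies;
>>         for (i = 0; i < size / sizeof(*buf); i++)
>>                 buf[i] = i;     /* uncached mapping: every store goes to DRAM */
>>         t1 = jiffies;
>>         printk("wrote %zu bytes in %u ms\n", size, jiffies_to_msecs(t1 - t0));
>>
>>         dma_free_coherent(dev, size, buf, phys);
>>         return 0;
>> }
>>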
>> Thanks,
>> Arun
>>
>> >
>> > I am new to DMA operations and have a doubt about how to do a DMA.
>> > Can you give me some example driver code for the following case?
>> >
>> > --> I want to do a DMA from a mem-mapped I/O (let's say physical
>> > 0x40000000) to RAM location @ 0x80000000.
>> > How do you do this?
>> >
>> > Thanks
>> > Sandeep
>> >
>> >
>> > On Fri, Mar 1, 2013 at 10:45 AM, Arun KS <getarunks at gmail.com> wrote:
>> >>
>> >> On Thu, Feb 28, 2013 at 3:54 PM, sandeep kumar
>> >> <coolsandyforyou at gmail.com> wrote:
>> >> >> 1. use early_param to get the physical start address and size of
>> >> >> test_region, or you can just ignore this step and hard-code 510M and 2M
>> >> >> for test purposes only.
>> >> >
>> >> >> 2. use ioremap_nocache() to map this region to a virtual region. Note
>> >> >> that this function may fail if you are asking for a very large virtual
>> >> >> memory region.
>> >> > I did the following things.
>> >> >
>> >> > Thank you so much... it worked. With this I am able to measure RAM
>> >> > performance.
>> >>
>> >> Another easy way to make memory (i.e. pages) non-cacheable is to use
>> >> the function below:
>> >> dma_alloc_coherent(NULL, size, &p, GFP_KERNEL);
>> >>
>> >> This sets the page table attributes for these pages to non-cacheable.
>> >> All reads and writes will then go to main memory, because the MMU sees
>> >> the non-cacheable memory attribute for these pages.
>> >>
>> >> Thanks,
>> >> Arun
>> >> >
>> >> >
>> >> > On Thu, Feb 28, 2013 at 11:39 AM, sandeep kumar
>> >> > <coolsandyforyou at gmail.com>
>> >> > wrote:
>> >> >>
>> >> >> > 1. use early_param to get the physical start address and size of
>> >> >> > test_region, or you can just ignore this step and hard-code 510M and
>> >> >> > 2M for test purposes only.
>> >> >>
>> >> >> > 2. use ioremap_nocache() to map this region to a virtual region. Note
>> >> >> > that this function may fail if you are asking for a very large virtual
>> >> >> > memory region.
>> >> >> I did the following things:
>> >> >> 1) Reserved 3MB of memory through ATAGS.
>> >> >> 2) Wrote a small driver to ioremap that memory in the following way:
>> >> >>
>> >> >> void __iomem *tcpm_base = ioremap_nocache(0x03B00000, SZ_3MB);
>> >> >> u32 src;
>> >> >> int i;
>> >> >>
>> >> >> if (tcpm_base != NULL) {
>> >> >>         printk("Jiffies %lx %lu\n", jiffies, jiffies);
>> >> >>         for (i = 0; i < SZ_4KB; i++)    /* read through the uncached mapping */
>> >> >>                 src = readl(tcpm_base + i);
>> >> >>         printk("Jiffies %lx %lu\n", jiffies, jiffies);
>> >> >> } else {
>> >> >>         printk("unable to map 3MB\n");
>> >> >> }
>> >> >>
>> >> >> 3) I am getting the following error,
>> >> >>
>> >> >> [ 1.876647] Unable to handle kernel paging request at virtual address ea82c000
>> >> >> [ 1.880950] pgd = c0004000
>> >> >> [ 1.883636] [ea82c000] *pgd=49818811, *pte=00000000, *ppte=00000000
>> >> >> [ 1.889892] Internal error: Oops: 7 [#1] PREEMPT
>> >> >> [ 1.894500] Modules linked in:
>> >> >> [ 1.897521] CPU: 0 Not tainted (3.0.31-g1080f34-dirty #106)
>> >> >> [ 1.903442] PC is at sand_misc_init+0x4c/0xac
>> >> >> [ 1.907775] LR is at sand_misc_init+0x3c/0xac
>> >> >> [ 1.912109] pc : [<c0022ee0>]  lr : [<c0022ed0>]  psr: 80000013
>> >> >> [ 1.912139] sp : e982bf98  ip : 00000000  fp : 00000000
>> >> >> [ 1.923553] r10: 00000000  r9 : 00000000  r8 : 00000000
>> >> >> [ 1.928771] r7 : 00000000  r6 : c00461b4  r5 : ea828000  r4 : 00000000
>> >> >> [ 1.935272] r3 : 00003fff  r2 : 00003ffd  r1 : c07ea2cf  r0 : 00000063
>> >> >> [ 1.941802] Flags: Nzcv  IRQs on  FIQs on  Mode SVC_32  ISA ARM  Segment kernel
>> >> >> [ 1.949096] Control: 10c57c7d  Table: 00404059  DAC: 00000015
>> >> >> [ 1.954803]
>> >> >> [ 1.954833] PC: 0xc0022e60:
>> >> >> [ 1.959075] 2e60 c09105f0 c0366554 c07ea27e c07ea296 e3a01000 e92d4010 e1a02001 eb0a9acd
>> >> >> [ 1.967224] 2e80 e59f3008 e5830008 e3a00001 e8bd8010 c0dd8380 e92d4037 e3a04000 e3a0063b
>> >> >> [ 1.975402] 2ea0 e3a01901 e1a02004 e5cd4007 e5cd4006 eb00b9aa e2505000 0a000019 e59f3070
>> >> >> [ 1.983551] 2ec0 e59f0070 e5931000 e5932000 eb18a072 e58d4000 e3033fff ea000007 e59d2000
>> >> >> [ 1.991699] 2ee0 e7952002 f57ff04f e6ef2072 e5cd2007 e59d2000 e2822001 e58d2000 e59d2000
>> >> >> [ 1.999877] 2f00 e1520003 dafffff4 e59f3024 e59f0024 e5931000 e5932000 e28dd00c e8bd4030
>> >> >> [ 2.008026] 2f20 ea18a05d e59f0010 e28dd00c e8bd4030 ea18a059 c08ea600 c07ea2cf c07ea2e2
>> >> >> [ 2.016174] 2f40 e59f3040 e3a01000 e92d4010 e59f0038 e5932000 eb0a845a e59f3030 e3500000
>> >> >> [ 2.024353]
>> >> >> [ 2.024353] LR: 0xc0022e50:
>> >> >> [ 2.028594] 2e50 e3e00015 e8bd81fc c0dd8380 c07ea279 c09105f0 c0366554 c07ea27e c07ea296
>> >> >> [ 2.036773] 2e70 e3a01000 e92d4010 e1a02001 eb0a9acd e59f3008 e5830008 e3a00001 e8bd8010
>> >> >> [ 2.044921] 2e90 c0dd8380 e92d4037 e3a04000 e3a0063b e3a01901 e1a02004 e5cd4007 e5cd4006
>> >> >> [ 2.053070] 2eb0 eb00b9aa e2505000 0a000019 e59f3070 e59f0070 e5931000 e5932000 eb18a072
>> >> >> [ 2.061248] 2ed0 e58d4000 e3033fff ea000007 e59d2000 e7952002 f57ff04f e6ef2072 e5cd2007
>> >> >> [ 2.069396] 2ef0 e59d2000 e2822001 e58d2000 e59d2000 e1520003 dafffff4 e59f3024 e59f0024
>> >> >> [ 2.077575] 2f10 e5931000 e5932000 e28dd00c e8bd4030 ea18a05d e59f0010 e28dd00c e8bd4030
>> >> >> [ 2.085723] 2f30 ea18a059 c08ea600 c07ea2cf c07ea2e2 e59f3040 e3a01000 e92d4010 e59f0038
>> >> >> [ 2.093872]
>> >> >> [ 2.093902] SP: 0xe982bf18:
>> >> >> [ 2.098144] bf18 382e3120 30393536 00205d35 000000d0 00004fff 00003b00 192d8000 ea82bfff
>> >> >> [ 2.106292] bf38 03b00000 ffffffff e982bf84 c00461b4 00000000 c0044dac 00000063 c07ea2cf
>> >> >> [ 2.114440] bf58 00003ffd 00003fff 00000000 ea828000 c00461b4 00000000 00000000 00000000
>> >> >> [ 2.122619] bf78 00000000 00000000 00000000 e982bf98 c0022ed0 c0022ee0 80000013 ffffffff
>> >> >> [ 2.130767] bf98 00003ffd 0000a27e 00000000 c0037b4c c0022e94 c003f3fc e9814a80 00373231
>> >> >> [ 2.138946] bfb8 00000000 00000000 00000000 00000236 c0037b4c c003818c c00461b4 00000013
>> >> >> [ 2.147094] bfd8 00000000 00000000 00000000 c0008374 00000000 c0008300 c00461b4 c00461b4
>> >> >> [ 2.155242] bff8 00000000 00000000 00000000 00000001 00000000 e9817940 c08e8ef4 00000000
>> >> >> [ 2.163421]
>> >> >> [ 2.163421] R1: 0xc07ea24f:
>> >> >> [ 2.167663] a24c 65207265 726f7272 6f6c6220 25206b63 000a646c 706f6f6c 26006425 3e2d6f6c
>> >> >> [ 2.175842] a26c 635f6f6c 6d5f6c74 78657475 6f6f6c00 363c0070 6f6f6c3e 6d203a70 6c75646f
>> >> >> [ 2.183990] a28c 6f6c2065 64656461 363c000a 6f6f6c3e 6f203a70 6f207475 656d2066 79726f6d
>> >> >> [ 2.192138] a2ac 6162000a 6e696b63 69665f67 7300656c 6c657a69 74696d69 74756100 656c636f
>> >> >> [ 2.200317] a2cc 4a007261 69666669 25207365 6c252078 0a0a0a64 6e75000a 656c6261 206f7420
>> >> >> [ 2.208465] a2ec 2070616d 0a424d33 656d7000 65725f6d 6e6f6967 333c0073 656d703e 7325286d
>> >> >> [ 2.216613] a30c 736b3a29 635f7465 74616572 6e615f65 64615f64 61662064 000a6c69 703e343c
>> >> >> [ 2.224792] a32c 3a6d656d 6d6f7320 69687465 6920676e 65762073 77207972 676e6f72 6f79202c
>> >> >> [ 2.232940] a34c 72612075 6c632065 6e69736f 20612067 62206d76 696b6361 6120676e 6c61206e
>> >> >> [ 2.241119]
>> >> >> [ 2.241119] R5: 0xea827f80:
>> >> >> [ 2.245361] 7f80 ******** ******** ******** ******** ******** ******** ******** ********
>> >> >> [ 2.253509] 7fa0 ******** ******** ******** ******** ******** ******** ******** ********
>> >> >> [ 2.261688] 7fc0 ******** ******** ******** ******** ******** ******** ******** ********
>> >> >> [ 2.269836] 7fe0 ******** ******** ******** ******** ******** ******** ******** ********
>> >> >> [ 2.278015] 8000 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000
>> >> >> [ 2.286163] 8020 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000
>> >> >> [ 2.294311] 8040 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000
>> >> >> [ 2.302490] 8060 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000
>> >> >> [ 2.310638]
>> >> >> [ 2.310638] R6: 0xc0046134:
>> >> >> [ 2.314880] 6134 eb038dff eb02a1f9 e1a03007 e1a00005 e1a02006 eb025566 e59f3008 e5834000
>> >> >> [ 2.323059] 6154 e8bd41f0 ea0259b3 c09059d4 c08fbcd4 c064e808 c0791c0d c0958220 e59fc020
>> >> >> [ 2.331207] 6174 e92d4007 e59f301c e59cc000 e1a02001 e59f1014 e58dc000 eb055aab e3a00000
>> >> >> [ 2.339385] 6194 e8bd800e c0958220 c0791c27 c0791c1b e121f007 e1a00004 e1a0e006 e1a0f005
>> >> >> [ 2.347534] 61b4 eb02959a e320f000 e59f300c e5932000 e2822001 e5832000 e12fff1e c0958224
>> >> >> [ 2.355682] 61d4 e59f300c e5932000 e2422001 e5832000 e12fff1e c0958224 e12fff1e e12fff1e
>> >> >> [ 2.363861] 61f4 e12fff1e e12fff1e eafffffe e59f3014 e92d4010 e5933004 e3530000 08bd8010
>> >> >> [ 2.372009] 6214 e12fff33 e8bd8010 c0958224 e59f3014 e1a01000 e92d4010 e5d30000 e1a0e00f
>> >> >> [ 2.380187] Process swapper (pid: 1, stack limit = 0xe982a2e8)
>> >> >> [ 2.385986] Stack: (0xe982bf98 to 0xe982c000)
>> >> >> [ 2.390319] bf80:                   00003ffd 0000a27e
>> >> >> [ 2.398498] bfa0: 00000000 c0037b4c c0022e94 c003f3fc e9814a80 00373231 00000000 00000000
>> >> >> [ 2.406646] bfc0: 00000000 00000236 c0037b4c c003818c c00461b4 00000013 00000000 00000000
>> >> >> [ 2.414825] bfe0: 00000000 c0008374 00000000 c0008300 c00461b4 c00461b4 00000000 00000000
>> >> >> [ 2.423004] [<c0022ee0>] (sand_misc_init+0x4c/0xac) from [<c003f3fc>] (do_one_initcall+0xd0/0x1a4)
>> >> >> [ 2.431915] [<c003f3fc>] (do_one_initcall+0xd0/0x1a4) from [<c0008374>] (kernel_init+0x74/0x118)
>> >> >> [ 2.440673] [<c0008374>] (kernel_init+0x74/0x118) from [<c00461b4>] (kernel_thread_exit+0x0/0x8)
>> >> >> [ 2.449462] Code: e58d4000 e3033fff ea000007 e59d2000 (e7952002)
>> >> >> [ 2.455566] ---[ end trace f76f3c76dcb9b9ef ]---
>> >> >> [ 2.460144] Kernel panic - not syncing: Attempted to kill init!
>> >> >> [ 2.466064] [<c004ad10>] (unwind_backtrace+0x0/0x12c) from [<c064af70>] (panic+0x90/0x1bc)
>> >> >> [ 2.474304] [<c064af70>] (panic+0x90/0x1bc) from [<c00eb8dc>] (do_exit+0xb8/0x734)
>> >> >> [ 2.481842] [<c00eb8dc>] (do_exit+0xb8/0x734) from [<c0048f8c>] (die+0x208/0x23c)
>> >> >> [ 2.489318] [<c0048f8c>] (die+0x208/0x23c) from [<c004e158>] (__do_kernel_fault+0x64/0x84)
>> >> >> [ 2.497558] [<c004e158>] (__do_kernel_fault+0x64/0x84) from [<c004e3e0>] (do_page_fault+0x268/0x288)
>> >> >> [ 2.506683] [<c004e3e0>] (do_page_fault+0x268/0x288) from [<c003f270>] (do_DataAbort+0x34/0x94)
>> >> >> [ 2.515350] [<c003f270>] (do_DataAbort+0x34/0x94) from [<c0044dac>] (__dabt_svc+0x4c/0x60)
>> >> >> [ 2.523590] Exception stack(0xe982bf50 to 0xe982bf98)
>> >> >> [ 2.528625] bf40:                   00000063 c07ea2cf 00003ffd 00003fff
>> >> >> [ 2.536804] bf60: 00000000 ea828000 c00461b4 00000000 00000000 00000000 00000000 00000000
>> >> >> [ 2.544952] bf80: 00000000 e982bf98 c0022ed0 c0022ee0 80000013 ffffffff
>> >> >> [ 2.551544] [<c0044dac>] (__dabt_svc+0x4c/0x60) from [<c0022ee0>] (sand_misc_init+0x4c/0xac)
>> >> >> [ 2.559967] [<c0022ee0>] (sand_misc_init+0x4c/0xac) from [<c003f3fc>] (do_one_initcall+0xd0/0x1a4)
>> >> >> [ 2.568908] [<c003f3fc>] (do_one_initcall+0xd0/0x1a4) from [<c0008374>] (kernel_init+0x74/0x118)
>> >> >> [ 2.577667] [<c0008374>] (kernel_init+0x74/0x118) from [<c00461b4>] (kernel_thread_exit+0x0/0x8)
>> >> >>
>> >> >>
>> >> >> Any idea what went wrong?
>> >> >> I am sure about the ioremap() start address; that is what I reserved
>> >> >> in ATAGS.
>> >> >>
>> >> >> Thanks
>> >> >> Sandeep
>> >> >>
>> >> >>
>> >> >> On Thu, Feb 28, 2013 at 10:30 AM, sandeep kumar
>> >> >> <coolsandyforyou at gmail.com> wrote:
>> >> >>>
>> >> >>> > 1. use early_param to get the physical start address and size of
>> >> >>> > test_region, or you can just ignore this step and hard-code 510M and
>> >> >>> > 2M for test purposes only.
>> >> >>>
>> >> >>> > 2. use ioremap_nocache() to map this region to a virtual region. Note
>> >> >>> > that this function may fail if you are asking for a very large virtual
>> >> >>> > memory region.
>> >> >>>
>> >> >>> Sounds good, I am going to try this and let you know. :)
>> >> >>>
>> >> >>>
>> >> >>> On Wed, Feb 27, 2013 at 8:19 PM, buyitian <buyit at live.cn> wrote:
>> >> >>>>
>> >> >>>> ----------------------------------------
>> >> >>>> > From: buyit at live.cn
>> >> >>>> > To: coolsandyforyou at gmail.com; kernelnewbies at kernelnewbies.org
>> >> >>>> > Subject: RE: How to measure the RAM read/write performance
>> >> >>>> > Date: Wed, 27 Feb 2013 22:33:15 +0800
>> >> >>>> > CC: dhylands at gmail.com
>> >> >>>> >
>> >> >>>> > ________________________________
>> >> >>>> > > From: coolsandyforyou at gmail.com
>> >> >>>> > > Date: Tue, 26 Feb 2013 17:01:54 +0530
>> >> >>>> > > Subject: How to measure the RAM read/write performance
>> >> >>>> > > To: kernelnewbies at kernelnewbies.org
>> >> >>>> > > CC: dhylands at gmail.com
>> >> >>>> > >
>> >> >>>> > > Hi All
>> >> >>>> > > In performance benchmark tools, when we profile read/write
>> >> >>>> > > timings, those reads/writes are mostly served from the cache only.
>> >> >>>> > >
>> >> >>>> > > I want to measure my DDR (RAM chip) performance, so I want to make
>> >> >>>> > > sure that every read/write goes to the DDR RAM chip only.
>> >> >>>> > >
>> >> >>>> > > How can I achieve this? Any ideas/suggestions?
>> >> >>>> >
>> >> >>>> > try to reserve a large region from the bootloader (L4 on the Qualcomm
>> >> >>>> > platform); let's say it is 10MB of contiguous physical memory.
>> >> >>>>
>> >> >>>> sorry, to be accurate: reserving physical memory is done via the kernel
>> >> >>>> cmdline. This cmdline parameter can be passed from L4 to the kernel, or
>> >> >>>> configured by the kernel itself. The cmdline will look like this:
>> >> >>>> mem=510M@0 test_region=2M@510M
>> >> >>>>
>> >> >>>> The above example tells the kernel that you have 512MB of physical
>> >> >>>> memory in total, but the kernel will only use the first 510MB; the
>> >> >>>> remaining 2MB is left for you. How to map and use this region is up
>> >> >>>> to you.
>> >> >>>>
>> >> >>>> > in the kernel, map this region to a contiguous virtual region; note
>> >> >>>> > that the pgprot should be uncacheable, since you want to test without
>> >> >>>> > the cache.
>> >> >>>>
>> >> >>>> 1. use early_param to get the physical start address and size of
>> >> >>>> test_region, or you can just ignore this step and hard-code 510M and 2M
>> >> >>>> for test purposes only.
>> >> >>>>
>> >> >>>> 2. use ioremap_nocache() to map this region to a virtual region. Note
>> >> >>>> that this function may fail if you are asking for a very large virtual
>> >> >>>> memory region.
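>> >> >>>>
>> >> >>>> as a rough illustration of step 1 (an untested sketch, and the names
>> >> >>>> are just an example), an early_param handler that parses
>> >> >>>> "test_region=2M@510M" could look like the code below; step 2 is then
>> >> >>>> just test_base = ioremap_nocache(test_region_start, test_region_size);
>> >> >>>>
>> >> >>>> #include <linux/init.h>
>> >> >>>> #include <linux/kernel.h>
>> >> >>>> #include <linux/types.h>
>> >> >>>>
>> >> >>>> static phys_addr_t test_region_start = 510UL * 1024 * 1024; /* 510M */
>> >> >>>> static phys_addr_t test_region_size  = 2UL * 1024 * 1024;   /* 2M   */
>> >> >>>>
>> >> >>>> /* parse "test_region=<size>@<start>" from the kernel command line */
>> >> >>>> static int __init early_test_region(char *p)
>> >> >>>> {
>> >> >>>>         if (!p)
>> >> >>>>                 return 0;       /* no value given, keep the defaults */
>> >> >>>>         test_region_size = memparse(p, &p);
>> >> >>>>         if (*p == '@')
>> >> >>>>                 test_region_start = memparse(p + 1, NULL);
>> >> >>>>         return 0;
>> >> >>>> }
>> >> >>>> early_param("test_region", early_test_region);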
>> >> >>>>
>> >> >>>> > once you have configured it like this, you can read/write to this
>> >> >>>> > virtual region without the data cache involved.
>> >> >>>> >
>> >> >>>> > >
>> >> >>>> > > --
>> >> >>>> > > With regards,
>> >> >>>> > > Sandeep Kumar Anantapalli,
>> >> >>>> > >
>> >> >>>
>> >> >>>
>> >> >>>
>> >> >>>
>> >> >>> --
>> >> >>> With regards,
>> >> >>> Sandeep Kumar Anantapalli,
>> >> >>
>> >> >>
>> >> >>
>> >> >>
>> >> >> --
>> >> >> With regards,
>> >> >> Sandeep Kumar Anantapalli,
>> >> >
>> >> >
>> >> >
>> >> >
>> >> > --
>> >> > With regards,
>> >> > Sandeep Kumar Anantapalli,
>> >> >
>> >> >
>> >
>> >
>> >
>> >
>> > --
>> > With regards,
>> > Sandeep Kumar Anantapalli,
>
>
>
>
> --
> With regards,
> Sandeep Kumar Anantapalli,