On Wed, Oct 12, 2011 at 11:47 PM, bob jonny <ilikepie420@live.com> wrote:
> In dma_alloc_coherent(), where is it allocating memory from, and how does it
> know that that memory is cache coherent? Does every device have its own
> cache-coherent memory? I got lost at the struct dma_map_ops function pointers;
> is there an easy way to figure out what ops->alloc_coherent() points to?
Hi Bob,

dma_alloc_coherent() invokes the alloc_coherent method registered for the particular device, which is passed as the first parameter.

So to see where the memory is allocated from, you need to look at that device's dma_map_ops->alloc_coherent method.
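As for the "how do I figure out what ops->alloc_coherent() points to" part: dma_alloc_coherent() itself is normally a small inline wrapper in the architecture's <asm/dma-mapping.h>. A simplified sketch of the dispatch (modelled loosely on the x86 version of that era; the exact code differs per architecture and kernel version) looks like this:

	/* simplified sketch of the dispatch, not the exact kernel source */
	static inline void *dma_alloc_coherent(struct device *dev, size_t size,
					       dma_addr_t *dma_handle, gfp_t gfp)
	{
		struct dma_map_ops *ops = get_dma_ops(dev);
		void *memory;

		if (!ops->alloc_coherent)
			return NULL;

		/* dispatch to whatever ops the arch/IOMMU code installed */
		memory = ops->alloc_coherent(dev, size, dma_handle, gfp);

		return memory;
	}

get_dma_ops() usually returns a per-device dma_map_ops pointer if one has been set (for example when an IOMMU is in use; on x86 that is dev->archdata.dma_ops) and otherwise falls back to the architecture's global dma_ops. So reading get_dma_ops() for your architecture, plus knowing whether an IOMMU driver is active, tells you which alloc_coherent implementation a given device ends up with (or you can just printk the pointer with the %pF format specifier and let the kernel resolve the symbol name for you).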
I went through some of them; they tend to allocate memory by calling either __get_free_pages() or alloc_pages().

To maintain cache coherency, the pages allocated by these functions have to be uncached, and if they were cached earlier the cache has to be flushed.
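From the driver's side, by the way, the call pattern is just the alloc/free pair below (a minimal sketch; my_dev_probe, pdev and BUF_SIZE are made-up names, not from any real driver):

	/* minimal usage sketch -- my_dev_probe, pdev and BUF_SIZE are hypothetical */
	static int my_dev_probe(struct pci_dev *pdev)
	{
		dma_addr_t dma_handle;
		void *cpu_addr;

		cpu_addr = dma_alloc_coherent(&pdev->dev, BUF_SIZE,
					      &dma_handle, GFP_KERNEL);
		if (!cpu_addr)
			return -ENOMEM;

		/* program dma_handle (the bus address) into the hardware;
		 * the CPU reads/writes the buffer through cpu_addr */

		dma_free_coherent(&pdev->dev, BUF_SIZE, cpu_addr, dma_handle);
		return 0;
	}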
The following function illustrates all of this:

void *dma_generic_alloc_coherent(struct device *dev, size_t size,
				 dma_addr_t *dma_handle, gfp_t gfp)
{
	void *ret, *ret_nocache;
	int order = get_order(size);

	gfp |= __GFP_ZERO;

	ret = (void *)__get_free_pages(gfp, order);
	if (!ret)
		return NULL;

	/*
	 * Pages from the page allocator may have data present in
	 * cache. So flush the cache before using uncached memory.
	 */
	dma_cache_sync(dev, ret, size, DMA_BIDIRECTIONAL);

	/* remap the pages uncached so CPU accesses bypass the cache */
	ret_nocache = (void __force *)ioremap_nocache(virt_to_phys(ret), size);
	if (!ret_nocache) {
		free_pages((unsigned long)ret, order);
		return NULL;
	}

	/* split the order-N allocation into individual pages */
	split_page(pfn_to_page(virt_to_phys(ret) >> PAGE_SHIFT), order);

	/* the device gets the physical address, the CPU the uncached mapping */
	*dma_handle = virt_to_phys(ret);

	return ret_nocache;
}

Regards,
Rohan Puri