On Tue, May 3, 2011 at 2:32 AM, Abu Rasheda <rcpilot2010@gmail.com> wrote:
> I am testing my driver on a much faster host processor and am facing
> the following issues:
>
> My host is too powerful and can fill up the device buffer queue very fast.
>
> I get the best performance when I busy-wait, but that is not desirable
> and is bad design.
>
> I need to sleep and wake up quickly and predictably. The indication
> from the device that the queue has space comes in the form of a memory
> write (the device writes to a memory location of the x86 processor).
>
> I tried wait_event_interruptible_timeout() and rely on its second
> parameter, but the wake-up is too slow, even with a value of 1.
> Any suggestions?

Many of us are just newbies in this area, so getting it working is much more important than optimizing it. You can safely assume that the hardcore kernel developers have already optimized many of these problems away, so if they have not, there is probably a reason... try to probe a bit more first, perhaps.
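
On the wait_event_interruptible_timeout() point: its timeout argument is in jiffies, so a value of 1 only bounds the sleep at one timer tick (1/HZ, i.e. roughly 1 to 10 ms with the usual HZ settings), which is why the wake-up looks slow. If your device can raise an interrupt (or MSI) when it frees queue space, the usual pattern is to sleep on a wait queue and do the wake-up from the interrupt handler, so the latency is bounded by interrupt latency rather than by the tick. A minimal sketch, assuming a hypothetical device structure and handler (my_dev, my_isr and space_flag are made-up names):

#include <linux/errno.h>
#include <linux/interrupt.h>
#include <linux/jiffies.h>
#include <linux/sched.h>
#include <linux/types.h>
#include <linux/wait.h>

/* Hypothetical per-device state: a wait queue plus the "queue has space"
 * flag the device writes into host memory (e.g. from dma_alloc_coherent()). */
struct my_dev {
    wait_queue_head_t space_wq;   /* init_waitqueue_head() at probe time */
    u32 *space_flag;              /* host memory location the device writes */
};

/* Interrupt handler: the device says the queue has room again, so wake
 * up anything sleeping in the transmit path. */
static irqreturn_t my_isr(int irq, void *data)
{
    struct my_dev *dev = data;

    if (*dev->space_flag == 0)
        return IRQ_NONE;

    wake_up_interruptible(&dev->space_wq);
    return IRQ_HANDLED;
}

/* Transmit path: sleep until the device reports free space; the timeout
 * is only a safety net, not the primary wake-up mechanism. */
static int my_wait_for_space(struct my_dev *dev)
{
    long ret;

    ret = wait_event_interruptible_timeout(dev->space_wq,
                                           *dev->space_flag != 0,
                                           msecs_to_jiffies(100));
    if (ret == 0)
        return -ETIMEDOUT;   /* device never signalled space */
    if (ret < 0)
        return ret;          /* interrupted by a signal */
    return 0;
}

If the device really cannot interrupt at all and only writes the flag, the options narrow down to polling the flag (NAPI-style, see below) or a high-resolution timer (hrtimers), since schedule_timeout()-based sleeps cannot wake up faster than the tick.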

High-speed networking devices have many special hardware features: IP/UDP/TCP checksum offload, etc.

http://www.fenrus.org/how-to-not-write-a-device-driver-paper.pdf
http://www.sun.com/products/networking/infiniband/ibhcaPCI-E/docs/datasheet.pdf

Read this about NAPI:

http://www.linuxfoundation.org/collaborate/workgroups/networking/napi

For example, "interrupt mitigation": the interrupt mechanism is disabled (my desktop PC's r8169.c uses this feature, for instance) and polling takes over instead, but you have to implement a somewhat involved mechanism to re-enable the interrupt when necessary (read r8169.c).
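
The skeleton of that interrupt-mitigation pattern looks roughly like this. It is a sketch only: my_priv, my_disable_irqs(), my_enable_irqs() and my_rx_ring_clean() are made-up names, and the real thing is in r8169.c and the NAPI page above.

#include <linux/interrupt.h>
#include <linux/kernel.h>
#include <linux/netdevice.h>

#define MY_NAPI_WEIGHT 64

struct my_priv {
    struct napi_struct napi;
    struct net_device *ndev;
    /* ... rings, register mappings, ... */
};

/* Hypothetical helpers assumed to exist elsewhere in the driver. */
static void my_disable_irqs(struct my_priv *priv);
static void my_enable_irqs(struct my_priv *priv);
static int my_rx_ring_clean(struct my_priv *priv, int budget);

/* ISR: mask the device interrupt, then hand the work to the NAPI poll
 * loop, which runs in softirq context. */
static irqreturn_t my_isr(int irq, void *data)
{
    struct my_priv *priv = data;

    my_disable_irqs(priv);
    napi_schedule(&priv->napi);
    return IRQ_HANDLED;
}

/* Poll function: process up to `budget` packets. Finishing early means
 * the backlog is gone, so interrupts are re-enabled; otherwise NAPI
 * keeps calling us back (pure polling under load). */
static int my_poll(struct napi_struct *napi, int budget)
{
    struct my_priv *priv = container_of(napi, struct my_priv, napi);
    int done = my_rx_ring_clean(priv, budget);

    if (done < budget) {
        napi_complete(napi);
        my_enable_irqs(priv);
    }
    return done;
}

/* At probe time:
 *     netif_napi_add(ndev, &priv->napi, my_poll, MY_NAPI_WEIGHT);
 *     napi_enable(&priv->napi);
 */

The same idea applies to the original transmit-queue problem: under load it is often cheaper to reclaim TX descriptors from the poll loop than to take one interrupt per completion.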

And if you are really ready... this is a good write-up:

http://datatag.web.cern.ch/datatag/howto/tcp.html

Other possible suggestions/features:

Jumbo frames:

http://www.cyberciti.biz/faq/rhel-centos-debian-ubuntu-jumbo-frames-configuration/
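
On the driver side, jumbo frames mostly come down to accepting a larger MTU in the change_mtu callback and sizing the RX buffers to match; the user-space side is then just the mtu setting shown in the link above. A rough sketch (MY_MAX_MTU and the ops table entries are assumptions about your hardware):

#include <linux/errno.h>
#include <linux/if_ether.h>
#include <linux/netdevice.h>

#define MY_MAX_MTU 9000   /* hypothetical jumbo-frame limit of the hardware */

/* Accept any MTU the hardware can really handle; the RX buffer
 * allocation elsewhere in the driver must be sized accordingly. */
static int my_change_mtu(struct net_device *ndev, int new_mtu)
{
    if (new_mtu < ETH_ZLEN || new_mtu > MY_MAX_MTU)
        return -EINVAL;

    ndev->mtu = new_mtu;
    return 0;
}

static const struct net_device_ops my_netdev_ops = {
    .ndo_change_mtu = my_change_mtu,
    /* .ndo_open, .ndo_start_xmit, ... */
};

Then `ip link set eth0 mtu 9000` (or the ifconfig equivalent from the cyberciti link), provided every device on the path also accepts jumbo frames.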

PCI posting (see the PDF paper above and r8169.c): in short, writes to device registers can sit in PCI bridge write buffers, so drivers read a register back whenever a write must actually have reached the hardware; a small read-back sketch is included below.

Disabling the TCP software checksum (and using the hardware instead):

http://www.linuxquestions.org/questions/linux-enterprise-47/how-to-disable-tcp-checksumming-690745/
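
From the driver side, hardware checksumming is something you advertise to the stack through the feature flags (and the hardware has to actually do the work, of course). A hedged fragment, assuming it is called from your probe routine:

#include <linux/netdevice.h>

/* Called from the hypothetical probe routine after the net_device is
 * allocated: advertise hardware IP/TCP/UDP checksumming (plus
 * scatter-gather, which usually goes with it) so the stack can skip
 * the software checksum on transmit. */
static void my_set_offload_features(struct net_device *ndev)
{
    ndev->features |= NETIF_F_IP_CSUM | NETIF_F_SG;
}

From user space the usual switch is `ethtool -K eth0 tx off` (and `rx off`), which is presumably what the linuxquestions thread above walks through.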
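
And for the PCI posting item, the classic idiom is to read any register back from the same device right after a write that must not be left sitting in a bridge's write buffer. A sketch (the register names and offsets are made up):

#include <linux/io.h>

#define MY_REG_TX_CTRL 0x40   /* hypothetical register offset */
#define MY_TX_GO       0x01   /* hypothetical "start TX" bit */

/* Kick the transmitter. The register write may be posted (buffered) by
 * PCI bridges; reading a register from the same device forces it out
 * before we go on to, say, poll for completion. */
static void my_kick_tx(void __iomem *ioaddr)
{
    iowrite32(MY_TX_GO, ioaddr + MY_REG_TX_CTRL);
    ioread32(ioaddr + MY_REG_TX_CTRL);   /* flush the posted write */
}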

And for loads of other ideas (e.g. TCP bypass):

http://ttthebear.blogspot.com/2008/07/linux-kernel-bypass-and-performance.html

Generally a lot of these ideas can be found in the kernel source code: just search for and copy the implementation. The highest network transfer rates are achieved with InfiniBand-based Mellanox cards (in some Chinese supercomputers), and that involves the use of GPU technology, etc.

https://lists.sdsc.edu/pipermail/npaci-rocks-discussion/2009-May/039639.html

--
Regards,
Peter Teoh