PCIe DMA transfer

Christoph Böhmwalder christoph at boehmwalder.at
Mon Jun 4 08:31:24 EDT 2018


On Mon, Jun 04, 2018 at 02:05:05PM +0200, Greg KH wrote:
> The problem in this design might happen right here.  What happens
> in the device between the interrupt being signaled, and the data being
> copied out of the buffer?  Where do new packets go to?  How does the
> device know it is "safe" to write new data to that memory?  That extra
> housekeeping in the hardware gets very complex very quickly.

That's one of our concerns as well; our current solution seems far too
complex (for example, we still need a way to parse the individual
packets out of the buffer, roughly along the lines of the sketch
below). I think we should focus more on KISS going forward.
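
Just to make that concrete, the parsing step boils down to something
like the following. This is only a rough sketch and assumes each packet
is prefixed with a 16-bit little-endian length field; our actual
framing is still to be decided, and the names are made up purely for
illustration.

#include <stddef.h>
#include <stdint.h>

/*
 * Walk a contiguous buffer of length-prefixed packets and hand each
 * complete packet to the caller.  Returns the number of bytes
 * consumed, so a trailing partial packet can be retried later.
 */
static size_t parse_packets(const uint8_t *buf, size_t len,
			    void (*handle)(const uint8_t *pkt, size_t pkt_len))
{
	size_t off = 0;

	while (off + 2 <= len) {
		uint16_t pkt_len = buf[off] | (buf[off + 1] << 8);

		if (off + 2 + pkt_len > len)
			break;	/* partial packet, wait for more data */

		handle(buf + off + 2, pkt_len);
		off += 2 + pkt_len;
	}

	return off;
}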

> This all might work, if you have multiple buffers, as that is how some
> drivers work.  Look at how the XHCI design is specified.  The spec is
> open, and it gives you a very good description of how a relatively
> high-speed PCIe device should work, with buffer management and the like.
> You can probably use a lot of that type of design for your new work and
> make things run a lot faster than what you currently have.
> 
> You also have access to loads of very high-speed drivers in Linux today,
> to get design examples from.  Look at the networking drivers for the
> 10, 40, and 100Gb cards, as well as the InfiniBand drivers, and even some
> of the PCIe flash block drivers.  Look at what the NVMe spec says for
> how those types of high-speed storage devices should be designed for
> other examples.

Thanks for the pointers; I will take a look at those.

For now, we're looking at implementing a solution using the ring buffer
method described in LDD (something along the lines of the sketch below),
since that seems quite reasonable. That may all change once we've
researched the other drivers a bit, though.
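
To give an idea of the direction: the bookkeeping would look roughly
like this. It is only a sketch, assuming a power-of-two number of
fixed-size slots that the device fills in order; the structure and
names (rx_ring, RX_RING_SLOTS, and so on) are invented for
illustration and don't reflect the final descriptor layout.

#include <stddef.h>
#include <stdint.h>

#define RX_RING_SLOTS	256		/* must be a power of two */
#define RX_SLOT_SIZE	2048		/* bytes per slot */

struct rx_ring {
	uint8_t		buf[RX_RING_SLOTS][RX_SLOT_SIZE];
	uint32_t	head;		/* next slot the device fills */
	uint32_t	tail;		/* next slot the driver reads */
};

/*
 * Called from the interrupt handler / bottom half: consume every slot
 * the device has filled since the last pass.  head would be advanced
 * from whatever the hardware reports; tail is advanced here, which is
 * what tells the device the slot is free to be reused.
 */
static void rx_ring_drain(struct rx_ring *r,
			  void (*consume)(const uint8_t *data, size_t len))
{
	while (r->tail != r->head) {
		uint32_t idx = r->tail & (RX_RING_SLOTS - 1);

		consume(r->buf[idx], RX_SLOT_SIZE);
		r->tail++;		/* slot is now free for the device */
	}
}

The point of the free-running head/tail counters is that the "is it
safe to write here" question from above reduces to a single index
comparison, instead of extra housekeeping in the hardware.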

Thanks for your help and (as always) quick response time!

--
Regards,
Christoph


