PCI / PCIe Device Memory - Rationale for Choosing MMIO Over PMIO (and Vice Versa)

Jack Winch sunt.un.morcov at gmail.com
Thu Nov 5 09:27:58 EST 2020


Hi all,

Over the last couple of months, I've been reading the hardware
documentation and Linux device driver source code for a range of
different PCI and PCIe devices.  Those examined range from
multi-function data acquisition cards through to avionics bus
interface devices.  In doing so, I have referenced numerous resources
(including the Third Edition of LDD - what a great book - and the
documentation available for the Linux PCI Bus Subsystem on
kernel.org).

One thing I'm still a little unclear on is why vendors might opt to
expose PCI / PCIe device memory through either Memory-Mapped I/O
(MMIO), where a BAR claims a region of the system memory address
space, or Port-Mapped I/O (PMIO), which uses the separate I/O port
address space.  That is, for what reasons would a device manufacturer
choose to make use of one address space over the other for regions of
a PCI / PCIe device's memory?  Some of the general reasons are alluded
to by the aforementioned resources (e.g., more instruction cycles are
required to access data via PMIO, MMIO regions can be marked as
prefetchable and handled accordingly, etc.).
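For context, the distinction shows up clearly in how a Linux driver
accesses each kind of BAR.  The fragment below is only a hypothetical
probe sketch, not code for any real device: the BAR numbers, register
offsets, and written values are assumptions made up for illustration.

```c
/* Hypothetical PCI probe fragment contrasting MMIO and PMIO access.
 * BAR assignments and register offsets are invented for this sketch. */
#include <linux/pci.h>
#include <linux/io.h>

static int example_probe(struct pci_dev *pdev,
			 const struct pci_device_id *id)
{
	void __iomem *regs;
	unsigned long pio_base;
	u32 status;

	if (pci_enable_device(pdev))
		return -ENODEV;

	/* BAR 0 as a memory BAR -> MMIO.  pci_iomap() returns an
	 * __iomem cookie; registers are read with ioread32() etc. */
	regs = pci_iomap(pdev, 0, 0);
	if (!regs)
		return -ENOMEM;
	status = ioread32(regs + 0x10);	/* hypothetical status register */

	/* BAR 1 as an I/O BAR -> PMIO.  Accesses go through the x86
	 * in/out instructions (or their emulation elsewhere). */
	pio_base = pci_resource_start(pdev, 1);
	outb(0x01, pio_base + 0x04);	/* hypothetical control port */

	(void)status;
	return 0;
}
```

The driver-side code is near-symmetric either way; the trade-offs you
mention (extra instruction cycles for PMIO, prefetchable MMIO, plus
the very limited 64 KiB I/O port space) live mostly in the hardware
and platform, which is why the vendor's rationale rarely surfaces in
the source.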

Would anyone who has been engaged in the development of a PCI / PCIe
device be willing to talk about their experience and the factors that
led to one address space being chosen over the other (for specific
regions of a specific device)?

Specific examples would really help me (and probably others)
understand what factors are involved in this decision and how a
suitable choice is made.  Reading the driver source code for specific
devices has been great for developing an initial understanding of the
different approaches taken by device manufacturers, but source code
and hardware documentation rarely provide any information on the
rationale for a chosen implementation.

Any specific technical accounts on this matter would be much appreciated.

Jack



More information about the Kernelnewbies mailing list