Hi guys,<div><br><div>I want to know some details about MSI queues. </div><div>I have looked at many online references for MSI, but they mainly cover introductory material. </div><div>I am looking for the rule/decision criteria used to distribute traffic across the different MSI queues. </div>
<div>Can we configure the MSI queues at runtime to control traffic distribution?</div><div><br></div><div>On my machine I can see the data below in /proc/interrupts:</div><div><div> 64: 218601 13334 25585 7505 PCI-MSI-edge eth0-0</div>
<div> 65: 5717754 6491556 501052 729091 PCI-MSI-edge eth0-1</div>
<div> 66: 844740 9897919 96 230 PCI-MSI-edge eth0-2</div><div> 67: 222750 436800 403 1205846 PCI-MSI-edge eth0-3</div><div> 68: 777 1502281 314536 100125 PCI-MSI-edge eth0-4</div>
<div> 69: 482616 431247 2164970 1627540 PCI-MSI-edge eth0-5</div><div> 70: 323501 1433873 81970 18359 PCI-MSI-edge eth0-6</div><div> 71: 37298 35844 8271 18516 PCI-MSI-edge eth0-7</div>
<div><br></div><div>When I send UDP packets to IP X.X.X.100, interrupts happen on eth0-1, and with IP X.X.X.101 they happen on eth0-5.</div><div>Similarly, for IPs X.X.X.102 and X.X.X.104, interrupts happen on distinct MSI queues... It looks like a modulo-4 operation on the address is the criterion here.</div>
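To make the modulo-4 guess concrete (this is purely my hypothesis; NICs typically pick a queue via an RSS hash over the packet headers rather than a plain modulo), here is the mapping I am assuming, using the last octets from my tests:

```shell
# Hypothesis only: last octet of the destination IP, modulo the number of
# busy queues (4 in the dump above), selects the MSI vector.
for octet in 100 101 102 104; do
  echo "x.x.x.$octet -> residue $((octet % 4))"
done
```

Octets 101 and 102 land on different residues, which would at least be consistent with the distinct queues I observed.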
<div>Can we have VLAN-based MSI queue rules?</div><div><br></div><div>When I set the core affinity with /proc/irq/<interrupt no>/smp_affinity, it changes on its own when I send a burst of UDP packets with the above IPs as destination. Is this expected?</div>
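For reference, this is how I am setting the affinity; smp_affinity takes a hex CPU bitmask. A minimal sketch (IRQ 65 is eth0-1 from the dump above; the target CPU is just an example):

```shell
irq=65   # eth0-1 in the /proc/interrupts dump above
cpu=2    # example target core
# Build the hex bitmask for that core: bit N set = CPU N allowed.
mask=$(printf '%x' $((1 << cpu)))
echo "$mask"                                 # CPU 2 -> bitmask 4
# Apply (needs root), then read it back to confirm it stuck:
# echo "$mask" > /proc/irq/$irq/smp_affinity
# cat /proc/irq/$irq/smp_affinity
```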
<div><br></div><div>Thanks</div><div>Mukesh</div><div><br></div><div><br></div>
</div>
</div>