<html>
<body>
<font size=3>At 10:14 AM 6/23/2006, Grant Grundler wrote:<br>
<blockquote type=cite class=cite cite="">On Fri, Jun 23, 2006 at
04:04:31PM +0200, Arjan van de Ven wrote:<br>
> > I thought the posted write WILL eventually get to adapter
memory. Not<br>
> > stall forever cached in a bridge. I'm wrong?<br>
> <br>
> I'm not sure there is a theoretical upper bound.... <br><br>
I'm not aware of one either since MMIO writes can travel<br>
across many other chips that are not constrained by<br>
PCI ordering rules (I'm thinking of SGI
Altix...)</font></blockquote><br>
The number of outstanding writes is specific to the processor / coherency
backplane technology. There is also no guarantee that such writes
will reach the top of the PCI hierarchy in the order they were
posted in a multi-core / multi-processor system. Hence, it is up to
software to guarantee that ordering is preserved and not to assume
anything about ordering from a hardware perspective. Once a
transaction enters the PCI hierarchy, the PCI ordering rules apply, and,
depending upon the transaction type and other rules, what is guaranteed
is deterministic in nature.<br><br>
<br>
<blockquote type=cite class=cite cite=""><font size=3>> (and if it's
several msec per bridge, then you have a lot of latency<br>
> anyway)<br><br>
That's what my original concern was when I saw you point this out.<br>
But MMIO reads here would be expensive and many drivers tolerate<br>
this latency in exchange for avoiding the MMIO read in the<br>
performance path.</blockquote><br>
As the saying goes, MMIO reads are "pure evil" and should be
avoided at all costs if performance is the goal. Even in a
relatively flat I/O hierarchy, the additional latency is non-trivial and
can lead to a significant loss in system performance.
<br><br>
Mike</font></body>
</html>