[openib-general] Multi-port HCA
Michael Krause
krause at cup.hp.com
Thu Oct 5 09:03:37 PDT 2006
At 07:18 AM 10/5/2006, Roland Dreier wrote:
> Bernard> I don't think it is the PCI-e bus because it can handle
> Bernard> much more than 20 Gb/s.
>
>This isn't true. Mellanox cards have PCI-e x8 interfaces, which have a
>theoretical limit of 16 Gb/sec in each direction, and a practical
>limit that is even lower due to packetization and other overhead.
Nominally derate to 80% of the bandwidth left after the 8b/10b encoding is
removed and you'll come close to the maximum of what a PCIe Root Port can
service. Depending on the volume of control messages generated (work
requests, CQ updates, interrupts, etc.), the effective bandwidth is reduced
further - IPC workloads tend to fare worse than storage because the ratio
of control traffic to application data is higher (topology has an impact
here as well, but for this discussion assume a point-to-point attachment).
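As a rough sketch of that arithmetic for a x8 2.5 GT/s Root Port (the 80%
figure is the nominal derating mentioned above, not a measurement from any
particular chipset):

#include <stdio.h>

int main(void)
{
    double lanes    = 8.0;        /* PCIe x8 */
    double gt_per_s = 2.5;        /* raw signaling rate per lane */
    double encoding = 8.0 / 10.0; /* 8b/10b removes 20% */
    double derate   = 0.80;       /* nominal Root Port efficiency */

    double raw  = lanes * gt_per_s;  /* 20 Gb/s signaling */
    double data = raw * encoding;    /* 16 Gb/s after 8b/10b */
    double eff  = data * derate;     /* ~12.8 Gb/s serviceable */

    printf("raw %.1f Gb/s, post-8b/10b %.1f Gb/s, effective ~%.1f Gb/s\n",
           raw, data, eff);
    return 0;
}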
Chipsets using PCIe 2.5 GT/s (raw signaling rate) generally drive a
single-port HCA quite nicely, but not a dual-port one. An IB DDR HCA behind
PCIe 2.5 GT/s is not going to come close to link rate. You'll need to wait
for the PCIe 5.0 GT/s chipsets, which for servers won't arrive any time
soon (most public information shows 2008 for shipment, though expect people
to sample earlier and clients to ship products much earlier). The problem
facing servers is whether there will be enough x8 Root Ports available to
attach such links. Some vendors may decide to ship only x4 5.0 GT/s ports,
since a x4 at 5.0 GT/s is the bandwidth equivalent of a x8 at 2.5 GT/s,
on the assumption that the world will roll to the new signaling rate
quickly. However, given the need for interoperability and the OEMs' desire
to avoid customer backlash when a brand-new system performs worse than an
older one with x8 cards installed, one can only hope they are listening
closely to their customers, because those customers won't be happy.
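For comparison, the same back-of-the-envelope arithmetic applied to the
port configurations above, measured against the ~16 Gb/s data rate of a
4x IB DDR link (same nominal derating assumption, purely illustrative):

#include <stdio.h>

/* Same nominal 80% derating and 8b/10b assumptions as above. */
static double effective_gbps(double lanes, double gt_per_s)
{
    return lanes * gt_per_s * (8.0 / 10.0) * 0.80;
}

int main(void)
{
    const double ib_ddr_4x = 16.0; /* Gb/s data rate after 8b/10b */

    printf("x8 @ 2.5 GT/s: ~%.1f Gb/s effective\n", effective_gbps(8, 2.5));
    printf("x4 @ 5.0 GT/s: ~%.1f Gb/s effective\n", effective_gbps(4, 5.0));
    printf("x8 @ 5.0 GT/s: ~%.1f Gb/s effective\n", effective_gbps(8, 5.0));
    printf("4x IB DDR needs ~%.0f Gb/s of data bandwidth\n", ib_ddr_4x);
    return 0;
}

Which is also why a x4 5.0 GT/s Root Port leaves an x8 card no better off
than it is on today's x8 2.5 GT/s ports.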
Mike