[ofa-general] InfiniBand and PCIe 2.0
Bart Van Assche
bart.vanassche at gmail.com
Mon Mar 9 11:47:39 PDT 2009
Hello,
Although I'm not entirely sure this is the right mailing list for such
questions: can anyone give me some advice on how to get an InfiniBand HCA
working at 5 GT/s? I have inserted an MT25418 HCA in a PCIe 2.0 slot.
According to Mellanox's documentation, the maximum transfer speed should be
5.0 GT/s. However, lspci reports 2.5 GT/s for the link status. The details
are as follows:
* MT25418 specs:
http://www.mellanox.com/content/pages.php?pg=products_dyn&product_family=36&menu_section=34
* Kernel: Linux 2.6.28.7
* Motherboard: Asus P5Q Deluxe
* lspci output:
01:00.0 InfiniBand: Mellanox Technologies MT25418 [ConnectX IB DDR, PCIe 2.0
2.5GT/s] (rev a0)
Subsystem: Mellanox Technologies MT25418 [ConnectX IB DDR, PCIe 2.0
2.5GT/s]
Control: I/O- Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr-
Stepping- SERR- FastB2B- DisINTx+
Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort-
<TAbort- <MAbort- >SERR- <PERR- INTx-
Latency: 0, Cache Line Size: 32 bytes
Interrupt: pin A routed to IRQ 16
Region 0: Memory at fe700000 (64-bit, non-prefetchable) [size=1M]
Region 2: Memory at cf800000 (64-bit, prefetchable) [size=8M]
Region 4: Memory at fe6fe000 (64-bit, non-prefetchable) [size=8K]
Capabilities: [40] Power Management version 3
Flags: PMEClk- DSI- D1- D2- AuxCurrent=0mA
PME(D0-,D1-,D2-,D3hot-,D3cold-)
Status: D0 PME-Enable- DSel=0 DScale=0 PME-
Capabilities: [48] Vital Product Data <?>
Capabilities: [9c] MSI-X: Enable+ Mask- TabSize=256
Vector table: BAR=4 offset=00000000
PBA: BAR=4 offset=00001000
Capabilities: [60] Express (v2) Endpoint, MSI 00
DevCap: MaxPayload 256 bytes, PhantFunc 0, Latency L0s
<64ns, L1 unlimited
ExtTag+ AttnBtn- AttnInd- PwrInd- RBE+ FLReset-
DevCtl: Report errors: Correctable- Non-Fatal- Fatal-
Unsupported-
RlxdOrd- ExtTag- PhantFunc- AuxPwr- NoSnoop-
MaxPayload 128 bytes, MaxReadReq 512 bytes
DevSta: CorrErr- UncorrErr- FatalErr- UnsuppReq- AuxPwr-
TransPend-
LnkCap: Port #8, Speed 2.5GT/s, Width x8, ASPM L0s, Latency
L0 unlimited, L1 unlimited
ClockPM- Suprise- LLActRep- BwNot-
LnkCtl: ASPM Disabled; RCB 64 bytes Disabled- Retrain-
CommClk-
ExtSynch- ClockPM- AutWidDis- BWInt- AutBWInt-
LnkSta: Speed 2.5GT/s, Width x8, TrErr- Train- SlotClk-
DLActive- BWMgmt- ABWMgmt-
DevCap2: Completion Timeout: Range ABCD, TimeoutDis+
DevCtl2: Completion Timeout: 50us to 50ms, TimeoutDis-
LnkCtl2: Target Link Speed: 2.5GT/s, EnterCompliance-
SpeedDis-, Selectable De-emphasis: -6dB
Transmit Margin: Normal Operating Range,
EnterModifiedCompliance- ComplianceSOS-
Compliance De-emphasis: -6dB
LnkSta2: Current De-emphasis Level: -6dB
Kernel driver in use: mlx4_core
Kernel modules: mlx4_core
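For what it's worth, the two lines to compare in the output above are LnkCap (the speed the device advertises in its link-capability register) and LnkSta (the speed the link actually trained at). A small sketch of how one might pull those fields out of `lspci -vv` output for comparison (the function name and regular expression are my own, not part of any tool):

```python
import re

def link_speeds(lspci_text):
    """Extract the Speed field from LnkCap (advertised) and
    LnkSta (negotiated) in `lspci -vv` output."""
    speeds = {}
    for field in ("LnkCap", "LnkSta"):
        m = re.search(field + r":.*?Speed ([\d.]+GT/s)", lspci_text)
        if m:
            speeds[field] = m.group(1)
    return speeds

# The relevant lines from the lspci output in this message:
sample = """
LnkCap: Port #8, Speed 2.5GT/s, Width x8, ASPM L0s, Latency
LnkSta: Speed 2.5GT/s, Width x8, TrErr- Train- SlotClk-
"""
print(link_speeds(sample))  # {'LnkCap': '2.5GT/s', 'LnkSta': '2.5GT/s'}
```

Here both fields report 2.5 GT/s, i.e. the card itself is advertising only the PCIe 1.x rate in its capability register, so the link training at 2.5 GT/s is consistent with what the device claims rather than a negotiation failure.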
Bart.