[ofa-general] InfiniBand and PCIe 2.0

Boris Shpolyansky boris at mellanox.com
Mon Mar 9 11:54:03 PDT 2009


Bart,


The particular card you were testing doesn't support 5 GT/s PCIe operation.

There are other adapters in the same product family, based on the same
ConnectX silicon, that are capable of PCIe Gen2 (5 GT/s) link speed.
Please contact your distributor for details.
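The two fields to compare are both in the lspci output Bart posted: LnkCap is the ceiling the device itself advertises, LnkSta is the rate the link actually trained to. Below is a minimal sketch that pulls the two out for comparison; it assumes lspci is installed, that the HCA sits at 01:00.0 as in Bart's listing, and the helper name link_speeds is just illustrative:

    import re
    import subprocess

    BDF = "01:00.0"  # the HCA's bus address, taken from the lspci listing

    def link_speeds(bdf):
        """Return (device maximum, negotiated) PCIe speed for a device.

        LnkCap advertises the fastest rate the silicon supports; LnkSta
        is what the link trained to. If LnkCap already reads 2.5GT/s,
        no slot or BIOS setting will bring the card to 5 GT/s.
        """
        # Note: `lspci -vv` must usually run as root, otherwise the
        # capability section (which holds LnkCap/LnkSta) is hidden.
        out = subprocess.run(["lspci", "-vv", "-s", bdf],
                             capture_output=True, text=True,
                             check=True).stdout
        cap = re.search(r"LnkCap:.*?Speed ([\d.]+GT/s)", out)
        sta = re.search(r"LnkSta:.*?Speed ([\d.]+GT/s)", out)
        return (cap.group(1) if cap else "?", sta.group(1) if sta else "?")

    if __name__ == "__main__":
        cap, sta = link_speeds(BDF)
        print("device maximum (LnkCap):", cap)
        print("negotiated     (LnkSta):", sta)

For the MT25418 below, both values come back as 2.5GT/s, which is why a Gen2 slot makes no difference.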


Boris Shpolyansky

Sr. Member of Technical Staff, Applications


Mellanox Technologies Inc.

350 Oakmead Parkway, Suite 100

Sunnyvale, CA 94085

Tel.: (408) 916 0014

Fax: (408) 585 0314

Cell: (408) 834 9365

www.mellanox.com

________________________________

From: general-bounces at lists.openfabrics.org [mailto:general-bounces at lists.openfabrics.org] On Behalf Of Bart Van Assche
Sent: Monday, March 09, 2009 11:48 AM
To: general at lists.openfabrics.org
Subject: [ofa-general] InfiniBand and PCIe 2.0


Hello,

Although I'm not entirely sure this is the right mailing list for such
questions: can anyone give me some advice on how to get an InfiniBand
HCA working at 5 GT/s? I have inserted an MT25418 HCA in a PCIe 2.0
slot. According to Mellanox's documentation, the maximum transfer speed
should be 5.0 GT/s. However, lspci reports 2.5 GT/s for the link status.
The details are as follows:
* MT25418 specs: http://www.mellanox.com/content/pages.php?pg=products_dyn&product_family=36&menu_section=34
* Kernel: Linux 2.6.28.7
* Motherboard: Asus P5Q Deluxe
* lspci output:

01:00.0 InfiniBand: Mellanox Technologies MT25418 [ConnectX IB DDR, PCIe 2.0 2.5GT/s] (rev a0)
        Subsystem: Mellanox Technologies MT25418 [ConnectX IB DDR, PCIe 2.0 2.5GT/s]
        Control: I/O- Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx+
        Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
        Latency: 0, Cache Line Size: 32 bytes
        Interrupt: pin A routed to IRQ 16
        Region 0: Memory at fe700000 (64-bit, non-prefetchable) [size=1M]
        Region 2: Memory at cf800000 (64-bit, prefetchable) [size=8M]
        Region 4: Memory at fe6fe000 (64-bit, non-prefetchable) [size=8K]
        Capabilities: [40] Power Management version 3
                Flags: PMEClk- DSI- D1- D2- AuxCurrent=0mA PME(D0-,D1-,D2-,D3hot-,D3cold-)
                Status: D0 PME-Enable- DSel=0 DScale=0 PME-
        Capabilities: [48] Vital Product Data <?>
        Capabilities: [9c] MSI-X: Enable+ Mask- TabSize=256
                Vector table: BAR=4 offset=00000000
                PBA: BAR=4 offset=00001000
        Capabilities: [60] Express (v2) Endpoint, MSI 00
                DevCap: MaxPayload 256 bytes, PhantFunc 0, Latency L0s <64ns, L1 unlimited
                        ExtTag+ AttnBtn- AttnInd- PwrInd- RBE+ FLReset-
                DevCtl: Report errors: Correctable- Non-Fatal- Fatal- Unsupported-
                        RlxdOrd- ExtTag- PhantFunc- AuxPwr- NoSnoop-
                        MaxPayload 128 bytes, MaxReadReq 512 bytes
                DevSta: CorrErr- UncorrErr- FatalErr- UnsuppReq- AuxPwr- TransPend-
                LnkCap: Port #8, Speed 2.5GT/s, Width x8, ASPM L0s, Latency L0 unlimited, L1 unlimited
                        ClockPM- Suprise- LLActRep- BwNot-
                LnkCtl: ASPM Disabled; RCB 64 bytes Disabled- Retrain- CommClk-
                        ExtSynch- ClockPM- AutWidDis- BWInt- AutBWInt-
                LnkSta: Speed 2.5GT/s, Width x8, TrErr- Train- SlotClk- DLActive- BWMgmt- ABWMgmt-
                DevCap2: Completion Timeout: Range ABCD, TimeoutDis+
                DevCtl2: Completion Timeout: 50us to 50ms, TimeoutDis-
                LnkCtl2: Target Link Speed: 2.5GT/s, EnterCompliance- SpeedDis-, Selectable De-emphasis: -6dB
                         Transmit Margin: Normal Operating Range, EnterModifiedCompliance- ComplianceSOS-
                         Compliance De-emphasis: -6dB
                LnkSta2: Current De-emphasis Level: -6dB
        Kernel driver in use: mlx4_core
        Kernel modules: mlx4_core
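The same two fields lspci is decoding above can also be read straight from config space. A minimal sketch, assuming setpci from pciutils, root privileges, and the 01:00.0 address from the listing; per the PCIe spec, bits 3:0 of Link Capabilities (offset 0x0c into the PCIe capability) and of Link Status (offset 0x12) encode the speed, 1 = 2.5 GT/s, 2 = 5 GT/s:

    import subprocess

    BDF = "01:00.0"  # the HCA from the listing above

    # Bits 3:0 of Link Capabilities / Link Status encode the speed.
    SPEEDS = {1: "2.5 GT/s (Gen1)", 2: "5 GT/s (Gen2)"}

    def read_reg(bdf, expr):
        # setpci must run as root; offsets are relative to the start
        # of the PCI Express capability structure (CAP_EXP).
        out = subprocess.run(["setpci", "-s", bdf, expr],
                             capture_output=True, text=True,
                             check=True).stdout
        return int(out.strip(), 16)

    lnkcap = read_reg(BDF, "CAP_EXP+0x0c.L")  # Link Capabilities
    lnksta = read_reg(BDF, "CAP_EXP+0x12.W")  # Link Status

    print("card maximum:", SPEEDS.get(lnkcap & 0xF, "unknown"))
    print("trained link:", SPEEDS.get(lnksta & 0xF, "unknown"))

If the card maximum itself comes back as 2.5 GT/s, as it does for this MT25418, the limit is in the adapter, and no slot, cable, or BIOS change will raise it.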

Bart.
