[openib-general] Mellanox device in INIT state

Shirley Ma xma at us.ibm.com
Tue Sep 13 11:53:58 PDT 2005


After loading the ib_mthca module on PPC, the device is stuck in the INIT
state instead of going ACTIVE. Any clue?

0000:d9:00.0 InfiniBand: Mellanox Technologies MT23108 InfiniHost HCA (rev a1)
        Subsystem: Mellanox Technologies MT23108 InfiniHost HCA
        Flags: bus master, 66Mhz, medium devsel, latency 144, IRQ 137
        Memory at c0800000 (64-bit, non-prefetchable) [size=1M]
        Memory at c0000000 (64-bit, prefetchable) [size=8M]
        Capabilities: [40] #11 [001f]
        Capabilities: [50] Vital Product Data
        Capabilities: [60] Message Signalled Interrupts: 64bit+ Queue=0/5 Enable-
        Capabilities: [70] PCI-X non-bridge device.

Below is the relevant output from /var/log/messages:

Sep 13 10:49:12 elm3b39 kernel: ib_mthca 0000:d9:00.0: FW version 000300030003, max commands 64
Sep 13 10:49:12 elm3b39 kernel: ib_mthca 0000:d9:00.0: FW size 6143 KB (start c7a00000, end c7ffffff)
Sep 13 10:49:12 elm3b39 kernel: ib_mthca 0000:d9:00.0: HCA memory size 131071 KB (start c0000000, end c7ffffff)
Sep 13 10:49:12 elm3b39 kernel: ib_mthca 0000:d9:00.0: Max QPs: 16777216, reserved QPs: 1024, entry size: 256
Sep 13 10:49:12 elm3b39 kernel: ib_mthca 0000:d9:00.0: Max SRQs: 1024, reserved SRQs: 16, entry size: 32
Sep 13 10:49:12 elm3b39 kernel: ib_mthca 0000:d9:00.0: Max CQs: 16777216, reserved CQs: 128, entry size: 64
Sep 13 10:49:12 elm3b39 kernel: ib_mthca 0000:d9:00.0: Max EQs: 64, reserved EQs: 1, entry size: 64
Sep 13 10:49:12 elm3b39 kernel: ib_mthca 0000:d9:00.0: reserved MPTs: 16, reserved MTTs: 16
Sep 13 10:49:12 elm3b39 kernel: ib_mthca 0000:d9:00.0: Max PDs: 16777216, reserved PDs: 0, reserved UARs: 1
Sep 13 10:49:12 elm3b39 kernel: ib_mthca 0000:d9:00.0: Max QP/MCG: 16777216, reserved MGMs: 0
Sep 13 10:49:12 elm3b39 kernel: ib_mthca 0000:d9:00.0: Flags: 00370347
Sep 13 10:49:12 elm3b39 kernel: ib_mthca 0000:d9:00.0: profile[ 0]--10/20 @ 0x        c0000000 (size 0x 4000000)
Sep 13 10:49:12 elm3b39 kernel: ib_mthca 0000:d9:00.0: profile[ 1]-- 0/16 @ 0x        c4000000 (size 0x 1000000)
Sep 13 10:49:12 elm3b39 kernel: ib_mthca 0000:d9:00.0: profile[ 2]-- 7/18 @ 0x        c5000000 (size 0x  800000)
Sep 13 10:49:12 elm3b39 kernel: ib_mthca 0000:d9:00.0: profile[ 3]-- 9/17 @ 0x        c5800000 (size 0x  800000)
Sep 13 10:49:12 elm3b39 kernel: ib_mthca 0000:d9:00.0: profile[ 4]-- 3/16 @ 0x        c6000000 (size 0x  400000)
Sep 13 10:49:12 elm3b39 kernel: ib_mthca 0000:d9:00.0: profile[ 5]-- 4/16 @ 0x        c6400000 (size 0x  200000)
Sep 13 10:49:12 elm3b39 kernel: ib_mthca 0000:d9:00.0: profile[ 6]--12/15 @ 0x        c6600000 (size 0x  100000)
Sep 13 10:49:12 elm3b39 kernel: ib_mthca 0000:d9:00.0: profile[ 7]-- 8/13 @ 0x        c6700000 (size 0x   80000)
Sep 13 10:49:12 elm3b39 kernel: ib_mthca 0000:d9:00.0: profile[ 8]--11/11 @ 0x        c6780000 (size 0x   10000)
Sep 13 10:49:12 elm3b39 kernel: ib_mthca 0000:d9:00.0: profile[ 9]-- 2/10 @ 0x        c6790000 (size 0x    8000)
Sep 13 10:49:12 elm3b39 kernel: ib_mthca 0000:d9:00.0: profile[10]-- 6/ 5 @ 0x        c6798000 (size 0x     800)
Sep 13 10:49:12 elm3b39 kernel: ib_mthca 0000:d9:00.0: HCA memory: allocated 106082 KB/124928 KB (18846 KB free)
Sep 13 10:49:12 elm3b39 kernel: ib_mthca 0000:d9:00.0: Allocated EQ 1 with 65536 entries
Sep 13 10:49:12 elm3b39 kernel: ib_mthca 0000:d9:00.0: Allocated EQ 2 with 128 entries
Sep 13 10:49:12 elm3b39 kernel: ib_mthca 0000:d9:00.0: Allocated EQ 3 with 128 entries
Sep 13 10:49:12 elm3b39 kernel: ib_mthca 0000:d9:00.0: Setting mask 00000000000f43fe for eqn 2
Sep 13 10:49:12 elm3b39 kernel: ib_mthca 0000:d9:00.0: Setting mask 0000000000000400 for eqn 3
Sep 13 10:49:12 elm3b39 kernel: ib_mthca 0000:d9:00.0: NOP command IRQ test passed
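
For reference, the port state can also be read back from userspace. Below is
a minimal sketch against the libibverbs API (the exact calls available in the
tree at this point may differ; port 1 on the first device found, and the
numeric state values, are assumptions on my part), which prints the state of
the first HCA's port 1:

/*
 * Minimal sketch, assuming the libibverbs userspace API
 * (ibv_get_device_list / ibv_open_device / ibv_query_port).
 * Prints the state of port 1 on the first HCA found;
 * in enum ibv_port_state, 2 == INIT and 4 == ACTIVE.
 */
#include <stdio.h>
#include <infiniband/verbs.h>

int main(void)
{
        struct ibv_device **dev_list = ibv_get_device_list(NULL);
        struct ibv_context *ctx;
        struct ibv_port_attr attr;

        if (!dev_list || !dev_list[0]) {
                fprintf(stderr, "no IB devices found\n");
                return 1;
        }

        ctx = ibv_open_device(dev_list[0]);
        if (!ctx) {
                fprintf(stderr, "couldn't open %s\n",
                        ibv_get_device_name(dev_list[0]));
                return 1;
        }

        if (ibv_query_port(ctx, 1, &attr)) {
                fprintf(stderr, "ibv_query_port failed\n");
                return 1;
        }

        printf("%s port 1 state: %d (2 = INIT, 4 = ACTIVE)\n",
               ibv_get_device_name(dev_list[0]), attr.state);

        ibv_close_device(ctx);
        ibv_free_device_list(dev_list);
        return 0;
}

Build with something like: gcc -o portstate portstate.c -libverbs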


Thanks
Shirley Ma
IBM Linux Technology Center
15300 SW Koll Parkway
Beaverton, OR 97006-6063
Phone(Fax): (503) 578-7638
