[ofw] ib_peek_cq() function returns ib_unsupported

Leonid Keller leonid at mellanox.co.il
Thu May 1 06:36:34 PDT 2008


See inline 

> -----Original Message-----
> From: ofw-bounces at lists.openfabrics.org 
> [mailto:ofw-bounces at lists.openfabrics.org] On Behalf Of rakesh thakur
> Sent: Thursday, May 01, 2008 3:38 PM
> To: ofw at lists.openfabrics.org
> Subject: [ofw] ib_peek_cq() function returns ib_unsupported
> 
> Hi All,
> 
> I am using one of the Mellanox cards with the Mellanox WinIB 
> software (WinIB_x86_1_4_0_2027.msi). While using the function 
> ib_peek_cq() I get the IB_UNSUPPORTED error.
> To get the above API working, should we get a newer version of the 
> software, or do we have to change the Mellanox card?
Neither.
ib_peek_cq is an unimplemented verb in the driver.
To add it, one needs to implement it in the driver and/or in the
user-mode verbs provider (mthcau.dll).
Why do you need this verb? It is not implemented in Linux either.
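
For reference, if you only need a count of pending completions, one
workaround is to drain the CQ with ib_poll_cq when ib_peek_cq reports
IB_UNSUPPORTED. A rough sketch is below; it assumes the IBAL-style
prototypes from ib_al.h (ib_peek_cq, ib_poll_cq, ib_wc_t chained via
p_next), so please verify the signatures against the headers shipped
with your WinIB installation. Note that, unlike a real peek, the
fallback consumes the completions it counts.

#include <iba/ib_al.h>	/* IBAL verbs; header path per the WinOF SDK layout */

static uint32_t count_cq_entries( const ib_cq_handle_t h_cq )
{
	ib_wc_t		wc[16];		/* scratch work-completion array */
	ib_wc_t		*p_free, *p_done;
	uint32_t	n_cqes = 0;
	ib_api_status_t	status;
	uint32_t	i;

	/* Try the optional verb first; WinIB/mthca returns IB_UNSUPPORTED. */
	status = ib_peek_cq( h_cq, &n_cqes );
	if( status == IB_SUCCESS )
		return n_cqes;

	/* Fallback: poll (and consume) completions, counting as we go. */
	n_cqes = 0;
	do
	{
		/* Chain the scratch array into a free list for ib_poll_cq. */
		for( i = 0; i < 15; i++ )
			wc[i].p_next = &wc[i + 1];
		wc[15].p_next = NULL;
		p_free = wc;
		p_done = NULL;

		status = ib_poll_cq( h_cq, &p_free, &p_done );
		while( p_done )
		{
			n_cqes++;
			p_done = p_done->p_next;
		}
	} while( status == IB_SUCCESS );

	return n_cqes;
}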

> 
> Also max_num_ent_cq = 0x1ffff        (Max num of supported entries per
> CQ) whereas
> max_qp_ous_wr = 0x4000          (Maximum Number of 
> outstanding WR on any WQ)
> 
> As we can see, there is a difference in the number of outstanding 
> entries allowed in a WQ and in a CQ (we do not understand why it is 
> so). Is there any means to increase the permissible number of entries 
> outstanding in a WQ?
The maximum CQ size (max_num_ent_cq) and the maximum QP size
(max_qp_ous_wr) are reported by the card and cannot be increased.
But one can create thousands of QPs and CQs.
The CQ limit is larger than the QP limit because several QPs can be
mapped to (i.e., report their completions to) the same CQ.
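
To make that concrete, the sketch below creates one CQ and maps several
QPs to it, sizing the CQ for the worst case of every posted WR
completing. The function and field names follow the IBAL headers
(ib_create_cq, ib_create_qp, ib_cq_create_t.size,
ib_qp_create_t.h_sq_cq / h_rq_cq) as I recall them; treat them as
assumptions and check ib_al.h in your installation. A real program
would also set a completion callback or wait object on the CQ.

#include <string.h>
#include <iba/ib_al.h>	/* IBAL verbs; header path per the WinOF SDK layout */

#define NUM_QPS		4
#define QP_DEPTH	256

/* Create one CQ and NUM_QPS QPs that all report completions to it. */
static ib_api_status_t create_shared_cq_qps(
	ib_ca_handle_t	h_ca,
	ib_pd_handle_t	h_pd,
	ib_cq_handle_t	*ph_cq,
	ib_qp_handle_t	h_qp[NUM_QPS] )
{
	ib_cq_create_t	cq_create;
	ib_qp_create_t	qp_create;
	ib_api_status_t	status;
	int		i;

	/* Size the CQ for the worst case: every send and receive WR
	 * of every QP has an unreaped completion. */
	memset( &cq_create, 0, sizeof(cq_create) );
	cq_create.size = NUM_QPS * QP_DEPTH * 2;
	/* (a completion callback or wait object would normally be set here) */
	status = ib_create_cq( h_ca, &cq_create, NULL, NULL, ph_cq );
	if( status != IB_SUCCESS )
		return status;

	/* Each QP directs both its send and receive completions to *ph_cq. */
	for( i = 0; i < NUM_QPS; i++ )
	{
		memset( &qp_create, 0, sizeof(qp_create) );
		qp_create.qp_type = IB_QPT_RELIABLE_CONN;
		qp_create.sq_depth = QP_DEPTH;
		qp_create.rq_depth = QP_DEPTH;
		qp_create.sq_sge = 1;
		qp_create.rq_sge = 1;
		qp_create.h_sq_cq = *ph_cq;	/* shared CQ */
		qp_create.h_rq_cq = *ph_cq;	/* shared CQ */
		qp_create.sq_signaled = TRUE;

		status = ib_create_qp( h_pd, &qp_create, NULL, NULL, &h_qp[i] );
		if( status != IB_SUCCESS )
			return status;
	}
	return IB_SUCCESS;
}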

> 
> Any suggestion or direction toward proper link (thread) would 
> be helpful.
> 
> C:\Documents and Settings\thakurr>vstat -v
> 
>         hca_idx=0
>         uplink={BUS=PCI_E, SPEED=2.5 Gbps, WIDTH=x1, CAPS=2.5*x8}
>         vendor_id=0x02c9
>         vendor_part_id=0x6274
>         hw_ver=0xa0
>         fw_ver=1.00.0800
>         PSID=MT_03F0110001
>         node_guid=0002:c902:0023:792c
>         num_phys_ports = 1
>         max_num_qp = 0xfc00             (Maximum Number of 
> QPs supported)
>         max_qp_ous_wr = 0x4000          (Maximum Number of outstanding
> WR on any WQ)
>         max_num_sg_ent = 0x1e           (Max num of scatter/gather
> entries for WQE other than RD)
>         max_num_sg_ent_rd = 0x0         (Max num of scatter/gather
> entries for RD WQE)
>         max_num_srq = 0x3c0             (Maximum Number of 
> SRQs supported)
>         max_wqe_per_srq = 0x3fff        (Maximum Number of outstanding
> WR on any SRQ)
>         max_srq_sentries = 0x1e         (Maximum Number of
> scatter/gather entries for SRQ WQE)
>         srq_resize_supported = 0        (SRQ resize supported)
>         max_num_cq = 0xff80             (Max num of supported CQs)
>         max_num_ent_cq = 0x1ffff        (Max num of supported 
> entries per CQ)
>         max_num_mr = 0x1fff0            (Maximum number of memory
> region supported)
>         max_mr_size = 0xffffffff        (Largest contiguous block of
> memory region in bytes)
>         max_pd_num = 0x7ffc             (Maximum number of protection
> domains supported)
>         page_size_cap = 0x1000          (Largest page size supported
> by this HCA)
>         local_ca_ack_delay = 0xf        (Log2 4.096usec Max. RX to ACK
> or NAK delay)
>         max_qp_ous_rd_atom = 0x4        (Maximum number of oust. RDMA
> read/atomic as target)
>         max_ee_ous_rd_atom = 0          (EE Maximum number of outs.
> RDMA read/atomic as target)
>         max_res_rd_atom = 0x0           (Max. Num. of resources used
> for RDMA read/atomic as target)
>         max_qp_init_rd_atom = 0x80      (Max. Num. of outs. RDMA
> read/atomic as initiator)
>         max_ee_init_rd_atom = 0         (EE Max. Num. of outs. RDMA
> read/atomic as initiator)
>         atomic_cap = LOCAL              (Level of Atomicity supported)
>         max_ee_num = 0x0                (Maximum number of 
> EEC supported)
>         max_rdd_num = 0x0               (Maximum number of 
> IB_RDD supported)
>         max_mw_num = 0x0                (Maximum Number of memory
> windows supported)
>         max_raw_ipv6_qp = 0x0           (Maximum number of Raw IPV6
> QPs supported)
>         max_raw_ethy_qp = 0x0           (Maximum number of Raw
> Ethertypes QPs supported)
>         max_mcast_grp_num = 0x2000      (Maximum Number of 
> multicast groups)
>         max_mcast_qp_attach_num = 0x8   (Maximum number of QP per
> multicast group)
>         max_ah_num = 0x0                (Maximum number of 
> address handles)
>         max_num_fmr = 0x0               (Maximum number FMRs)
>         max_num_map_per_fmr = 0x7fff    (Maximum number of (re)maps per FMR before an unmap operation is required)
>                 port=1
>                 port_state=PORT_ACTIVE (4)
>                 link_speed=2.5Gbps (1)
>                 link_width=4x (2)
>                 rate=10
>                 sm_lid=0x0001
>                 port_lid=0x0012
>                 port_lmc=0x0
>                 max_mtu=2048 (4)
>                 max_msg_sz=0x80000000   (Max message size)
>                 capabilities: VENDOR_CLASS,TRAP,APM,SL_MAP,LED_INFO,CLIENT_REG,SYSGUID,
>                 max_vl_num=0x0          (Maximum number of VL supported by this port)
>                 bad_pkey_counter=0x0    (Bad PKey counter)
>                 qkey_viol_counter=0x0   (QKey violation counter)
>                 sm_sl=0x0               (IB_SL to be used in 
> communication with
> subnet manager)
>                 pkey_tbl_len=0x40       (Current size of pkey table)
>                 gid_tbl_len=0x20        (Current size of GID table)
>                 subnet_timeout=0x12     (Subnet Timeout for this port (see PortInfo))
>                 initTypeReply=0x0       (optional InitTypeReply value. 0 if not supported)
>                 GID[0]=fe80:0000:0000:0000:0002:c902:0023:792d
> 
> 
> Thanks & Regards
> Rakesh Thakur
> _______________________________________________
> ofw mailing list
> ofw at lists.openfabrics.org
> http://lists.openfabrics.org/cgi-bin/mailman/listinfo/ofw
> 


