[ofa-general] RNIC resource limits

Philip Frey1 PHF at zurich.ibm.com
Wed Jun 4 04:24:04 PDT 2008


Hello,

I have asked this question about RNIC resource limits before:

<snip>

> > Could you give me some insight in what the limits of the Chelsio RNIC 
> > are? (Max MRs, QPs, PDs etc)
> >
> > Many thanks and kind regards,
> >  Philip 
>

<snip>

> Try running ibv_devinfo -v to see driver/hw limits.
>
> However, how are you limited?  Are you getting failures registering 
> memory?  Did you try setting your ulimit -l to unlimited or at least as 
> large as the memory region you want to register?
>
> Steve.

Steve, thanks for the answer!

When running 'ibv_devinfo -v' on my Chelsio RNIC (T3) with OFED 1.3 (FW 5.0),
I get the following:

[root at achilles ~]# ibv_devinfo -v
hca_id: cxgb3_0
        fw_ver:                         0.0.0
        node_guid:                      0007:4301:33f7:0000
        sys_image_guid:                 0007:4301:33f7:0000
        vendor_id:                      0x1425
        vendor_part_id:                 49
        hw_ver:                         0x0
        board_id:                       1425.31
        phys_port_cnt:                  2
        max_mr_size:                    0xffffffffffffffff
        page_size_cap:                  0x0
        max_qp:                         32736
        max_qp_wr:                      16777215
        device_cap_flags:               0x00038000
        max_sge:                        4
        max_sge_rd:                     1
        max_cq:                         32767
        max_cqe:                        16777215
        max_mr:                         32768
        max_pd:                         32767
        max_qp_rd_atom:                 8
        max_ee_rd_atom:                 0
        max_res_rd_atom:                0
        max_qp_init_rd_atom:            8
        max_ee_init_rd_atom:            0
        atomic_cap:                     ATOMIC_NONE (0)
        max_ee:                         0
        max_rdd:                        0
        max_mw:                         0
        max_raw_ipv6_qp:                0
        max_raw_ethy_qp:                0
        max_mcast_grp:                  0
        max_mcast_qp_attach:            0
        max_total_mcast_qp_attach:      0
        max_ah:                         0
        max_fmr:                        0
        max_srq:                        0
        max_pkeys:                      0
        local_ca_ack_delay:             0
                port:   1
                        state:                  PORT_ACTIVE (4)
                        max_mtu:                4096 (5)
                        active_mtu:             invalid MTU (225)
                        sm_lid:                 0
                        port_lid:               0
                        port_lmc:               0x00
                        max_msg_sz:             0xffffffff
                        port_cap_flags:         0x009f0000
                        max_vl_num:             invalid value (255)
                        bad_pkey_cntr:          0x213
                        qkey_viol_cntr:         0x0
                        sm_sl:                  0
                        pkey_tbl_len:           1
                        gid_tbl_len:            1
                        subnet_timeout:         146
                        init_type_reply:        39
                        active_width:           4X (2)
                        active_speed:           5.0 Gbps (2)
                        phys_state:             invalid physical state (0)

                port:   2
                        state:                  PORT_ACTIVE (4)
                        max_mtu:                4096 (5)
                        active_mtu:             invalid MTU (225)
                        sm_lid:                 0
                        port_lid:               0
                        port_lmc:               0x00
                        max_msg_sz:             0xffffffff
                        port_cap_flags:         0x009f0000
                        max_vl_num:             invalid value (255)
                        bad_pkey_cntr:          0x213
                        qkey_viol_cntr:         0x0
                        sm_sl:                  0
                        pkey_tbl_len:           1
                        gid_tbl_len:            1
                        subnet_timeout:         146
                        init_type_reply:        39
                        active_width:           4X (2)
                        active_speed:           5.0 Gbps (2)
                        phys_state:             invalid physical state (0)
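
For completeness, the same limits can also be read programmatically. A minimal
sketch using ibv_query_device() from libibverbs (taking the first device in
the list and the build line are assumptions on my side):

/* limits.c - print a few device limits via ibv_query_device().
 * Build (assumption): gcc limits.c -libverbs -o limits */
#include <stdio.h>
#include <infiniband/verbs.h>

int main(void)
{
        int num;
        struct ibv_device **devs = ibv_get_device_list(&num);
        struct ibv_context *ctx;
        struct ibv_device_attr attr;

        if (!devs || num == 0) {
                fprintf(stderr, "no RDMA devices found\n");
                return 1;
        }
        ctx = ibv_open_device(devs[0]);   /* first device, e.g. cxgb3_0 */
        if (!ctx || ibv_query_device(ctx, &attr))
                return 1;

        printf("max_qp:    %d\n", attr.max_qp);
        printf("max_qp_wr: %d\n", attr.max_qp_wr);
        printf("max_sge:   %d\n", attr.max_sge);
        printf("max_mr:    %d\n", attr.max_mr);
        printf("max_cqe:   %d\n", attr.max_cqe);

        ibv_close_device(ctx);
        ibv_free_device_list(devs);
        return 0;
}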

When creating a QP, I need to specify some capacity information:
struct ibv_qp_cap {
        uint32_t                max_send_wr;
        uint32_t                max_recv_wr;
        uint32_t                max_send_sge;
        uint32_t                max_recv_sge;
        uint32_t                max_inline_data;
};

According to the above listing, I should be able to use:
16777215        WRs     (max_qp_wr: 16777215)   [is this per QP or in total?]
4               SGEs    (max_sge: 4)
It does not say anything about the max_inline_data.
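
The ibv_create_qp() man page says the cap struct inside ibv_qp_init_attr is
updated on return with the values actually granted, so one way to discover the
effective inline limit might be to request a value and read it back after a
successful create. A rough sketch (pd and cq are assumed to already exist):

struct ibv_qp_init_attr init_attr = {
        .send_cq = cq,
        .recv_cq = cq,
        .cap = {
                .max_send_wr     = 16,
                .max_recv_wr     = 16,
                .max_send_sge    = 1,
                .max_recv_sge    = 1,
                .max_inline_data = 64,  /* request; driver may grant more */
        },
        .qp_type = IBV_QPT_RC,
};
struct ibv_qp *qp = ibv_create_qp(pd, &init_attr);

if (qp)
        printf("granted max_inline_data: %u\n",
               init_attr.cap.max_inline_data);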

I have tried to create a QP with as many resources as possible and found the
following:
max_send_wr     cannot exceed 16384
max_recv_wr     cannot exceed 1023
max_send_sge    cannot exceed 4294967295 (the maximum a uint32_t can hold)
max_recv_sge    cannot exceed 4294967295 (the maximum a uint32_t can hold)
max_inline_data cannot exceed 64

If the stated limits are exceeded, the call to ibv_create_qp() fails.
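
A sketch of how such a probe could look, halving the requested max_send_wr
until ibv_create_qp() accepts it (pd and cq assumed as in the previous
sketch):

/* Probe the largest max_send_wr that ibv_create_qp() accepts by
 * halving the request until creation succeeds. */
static uint32_t probe_max_send_wr(struct ibv_pd *pd, struct ibv_cq *cq,
                                  uint32_t start)
{
        uint32_t wr = start;

        while (wr > 0) {
                struct ibv_qp_init_attr attr = {
                        .send_cq = cq,
                        .recv_cq = cq,
                        .cap = {
                                .max_send_wr  = wr,
                                .max_recv_wr  = 1,
                                .max_send_sge = 1,
                                .max_recv_sge = 1,
                        },
                        .qp_type = IBV_QPT_RC,
                };
                struct ibv_qp *qp = ibv_create_qp(pd, &attr);

                if (qp) {               /* this value was accepted */
                        ibv_destroy_qp(qp);
                        return wr;
                }
                wr /= 2;                /* too large, try half */
        }
        return 0;
}

A binary search between the last failing and the first succeeding value would
pin down the exact limit (16384 in my case).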

I am now wondering whether I can really use as many WRs and SGEs and as much
inline data as the figures above suggest. It is also not clear to me whether
these figures are per-QP limits or global maxima across all QPs.

Many thanks for your advice and best regards,
 Philip