[ofa-general] max_mr limit
Rajouri Jammu
rajouri.jammu at gmail.com
Sun Mar 16 23:20:34 PDT 2008
I wanted to add that the typical size is variable, but in the case where I
could only allocate 32635 MRs, the size of each MR was 100K.
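For reference, below is a minimal sketch of the kind of loop that runs into
this limit. It assumes libibverbs and a single protection domain, registers
100K regions against one buffer until ibv_reg_mr() fails, and prints how many
succeeded; the device selection, access flags, and cleanup are simplified and
illustrative rather than the exact reproducer.

/* reg_loop.c - build with: gcc reg_loop.c -o reg_loop -libverbs */
#include <stdio.h>
#include <stdlib.h>
#include <infiniband/verbs.h>

#define MR_SIZE (100 * 1024)   /* 100K per region, as in this case */
#define MAX_TRY 131056         /* advertised max_mr on this HCA    */

int main(void)
{
    struct ibv_device **dev_list = ibv_get_device_list(NULL);
    if (!dev_list || !dev_list[0])
        return 1;

    struct ibv_context *ctx = ibv_open_device(dev_list[0]);
    struct ibv_pd *pd = ctx ? ibv_alloc_pd(ctx) : NULL;
    if (!pd)
        return 1;

    void *buf = malloc(MR_SIZE);
    struct ibv_mr **mrs = calloc(MAX_TRY, sizeof(*mrs));
    int n;

    for (n = 0; n < MAX_TRY; n++) {
        /* Each registration consumes one MPT entry plus enough MTT
         * entries to cover 100K (25 pages with 4K pages). */
        mrs[n] = ibv_reg_mr(pd, buf, MR_SIZE,
                            IBV_ACCESS_LOCAL_WRITE | IBV_ACCESS_REMOTE_READ);
        if (!mrs[n])
            break;
    }
    printf("registered %d MRs before failure\n", n);

    while (n-- > 0)
        ibv_dereg_mr(mrs[n]);
    ibv_dealloc_pd(pd);
    ibv_close_device(ctx);
    ibv_free_device_list(dev_list);
    free(buf);
    free(mrs);
    return 0;
}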
On Sun, Mar 16, 2008 at 11:19 PM, Rajouri Jammu <rajouri.jammu at gmail.com>
wrote:
> 100K Bytes.
>
>
> On Sun, Mar 16, 2008 at 11:16 PM, Dotan Barak <dotanb at dev.mellanox.co.il>
> wrote:
>
> > Hi.
> > What is the typical size of the MRs that you are trying to register?
> >
> > thanks
> > Dotan
> >
> > Rajouri Jammu wrote:
> > > Jack,
> > >
> > > The problem I'm seeing is that I'm not able to register even the
> > > default number of memory regions.
> > > The default is 131056, but I'm not able to register more than 32635
> > > regions.
> > > I'm on OFED-1.2.5.4.
> > >
> > > Any ideas?
> > >
> > > Below is the output of lspci and ibv_devinfo -v. I also recently
> > > upgraded to the latest f/w but that didn't make a difference.
> > >
> > >
> > > lspci | grep Mellanox
> > > 01:00.0 InfiniBand: Mellanox Technologies MT25208 InfiniHost III Ex
> > > (Tavor compatibility mode) (rev 20)
> > >
> > > ibv_devinfo -v
> > > hca_id: mthca0
> > > fw_ver: 4.8.200
> > > node_guid: 0006:6a00:9800:8403
> > > sys_image_guid: 0006:6a00:9800:8403
> > > vendor_id: 0x02c9
> > > vendor_part_id: 25208
> > > hw_ver: 0xA0
> > > board_id: MT_0200000001
> > > phys_port_cnt: 2
> > > max_mr_size: 0xffffffffffffffff
> > > page_size_cap: 0xfffff000
> > > max_qp: 64512
> > > max_qp_wr: 65535
> > > device_cap_flags: 0x00001c76
> > > max_sge: 59
> > > max_sge_rd: 0
> > > max_cq: 65408
> > > max_cqe: 131071
> > > max_mr: 131056
> > > max_pd: 32768
> > > max_qp_rd_atom: 4
> > > max_ee_rd_atom: 0
> > > max_res_rd_atom: 258048
> > > max_qp_init_rd_atom: 128
> > > max_ee_init_rd_atom: 0
> > > atomic_cap: ATOMIC_HCA (1)
> > > max_ee: 0
> > > max_rdd: 0
> > > max_mw: 0
> > > max_raw_ipv6_qp: 0
> > > max_raw_ethy_qp: 0
> > > max_mcast_grp: 8192
> > > max_mcast_qp_attach: 56
> > > max_total_mcast_qp_attach: 458752
> > > max_ah: 0
> > > max_fmr: 0
> > > max_srq: 960
> > > max_srq_wr: 65535
> > > max_srq_sge: 31
> > > max_pkeys: 64
> > > local_ca_ack_delay: 15
> > >
> > >
> > >
> > > On Tue, Mar 11, 2008 at 11:35 PM, Jack Morgenstein
> > > <jackm at dev.mellanox.co.il> wrote:
> > >
> > > On Tuesday 11 March 2008 19:38, Rajouri Jammu wrote:
> > > > I think it's Arbel.
> > > >
> > > > Both drivers are loaded (ib_mthca and mlx4_core). How do I tell which
> > > > driver's settings I should modify?
> > > >
> > > > What will be the max_mr value if log_num_mpt = 20?
> > > >
> > >
> > > To see which device you have, type the following command in your
> > > linux console:
> > >
> > > lspci | grep Mellanox
> > >
> > > For ConnectX (mlx4), you will see:
> > > InfiniBand: Mellanox Technologies: Unknown device 634a (rev a0)
> > > (lspci has not caught up with us yet).
> > >
> > > For Arbel (InfiniHost III), you will see either:
> > > InfiniBand: Mellanox Technologies MT25208 InfiniHost III Ex HCA
> > > (rev a0)
> > >
> > > or, if you are running your arbel in Tavor compatibility mode:
> > >
> > > InfiniBand: Mellanox Technologies MT25208 InfiniHost III Ex (Tavor
> > > compatibility mode) (rev 20)
> > > ===========
> > > If your installed HCA is a ConnectX, you should use the module
> > > parameters for mlx4.
> > > If your installed HCA is an InfiniHost III, you should use the
> > > module parameters for ib_mthca.
> > >
> > > - Jack
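One possibility worth checking here (an assumption, not something confirmed in
this thread): the table that is actually running out may be the MTT table
rather than the MPT table. With 4K pages a 100K region needs 25 translation
entries, and if the driver rounds each allocation up to a 32-entry chunk,
32635 x 32 comes out just under 2^20, which looks suspiciously like a
power-of-two table size. On the earlier question about log_num_mpt = 20: by
the same arithmetic that makes the default max_mr 131056 (2^17 minus 16
reserved entries), a value of 20 should give roughly 2^20 = 1048576 minus a
small reserved count.

A sketch of how the limits could be raised, with illustrative values only;
the parameter names should be confirmed with "modinfo ib_mthca" and
"modinfo mlx4_core" for the installed OFED release:

# InfiniHost III / Arbel (ib_mthca): the limits are plain counts, set for
# example in /etc/modprobe.conf:
options ib_mthca num_mpt=131072 num_mtt=4194304

# ConnectX (mlx4_core): the equivalent limits are log2 values:
options mlx4_core log_num_mpt=20 log_num_mtt=24

# Reload the stack afterwards, e.g.:
#   /etc/init.d/openibd restart

After reloading, ibv_devinfo should report the new max_mr, and if MTT
exhaustion was the cause, the registration loop above should get well past
32635 regions.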