I wanted to add that the typical size is variable, but in the case where I could only allocate 32635 MRs, each MR was 100K.<div><br><br><div class="gmail_quote">On Sun, Mar 16, 2008 at 11:19 PM, Rajouri Jammu <<a href="mailto:rajouri.jammu@gmail.com">rajouri.jammu@gmail.com</a>> wrote:<br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex;">100K Bytes.<div><div></div><div class="Wj3C7c"><br><br><div class="gmail_quote">On Sun, Mar 16, 2008 at 11:16 PM, Dotan Barak <<a href="mailto:dotanb@dev.mellanox.co.il" target="_blank">dotanb@dev.mellanox.co.il</a>> wrote:<br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
Hi.<br>
What is the typical size of the MRs that you are trying to register?<br>
<br>
thanks<br>
Dotan<br>
<div><br>
Rajouri Jammu wrote:<br>
> Jack,<br>
><br>
> The problem I'm seeing is that I'm not able to register even the<br>
> default number of memory regions.<br>
> The default is 131056, but I'm not able to register more than 32635 regions.<br>
</div>> I'm on OFED 1.2.5.4.<br>
<div><div></div><div>><br>
> Any ideas?<br>
><br>
> Below is the output of lspci and ibv_devinfo -v. I also recently<br>
> upgraded to the latest f/w but that didn't make a difference.<br>
><br>
><br>
> lspci | grep Mellanox<br>
> 01:00.0 InfiniBand: Mellanox Technologies MT25208 InfiniHost III Ex<br>
> (Tavor compatibility mode) (rev 20)<br>
><br>
> ibv_devinfo -v<br>
> hca_id: mthca0<br>
> fw_ver: 4.8.200<br>
> node_guid: 0006:6a00:9800:8403<br>
> sys_image_guid: 0006:6a00:9800:8403<br>
> vendor_id: 0x02c9<br>
> vendor_part_id: 25208<br>
> hw_ver: 0xA0<br>
> board_id: MT_0200000001<br>
> phys_port_cnt: 2<br>
> max_mr_size: 0xffffffffffffffff<br>
> page_size_cap: 0xfffff000<br>
> max_qp: 64512<br>
> max_qp_wr: 65535<br>
> device_cap_flags: 0x00001c76<br>
> max_sge: 59<br>
> max_sge_rd: 0<br>
> max_cq: 65408<br>
> max_cqe: 131071<br>
> max_mr: 131056<br>
> max_pd: 32768<br>
> max_qp_rd_atom: 4<br>
> max_ee_rd_atom: 0<br>
> max_res_rd_atom: 258048<br>
> max_qp_init_rd_atom: 128<br>
> max_ee_init_rd_atom: 0<br>
> atomic_cap: ATOMIC_HCA (1)<br>
> max_ee: 0<br>
> max_rdd: 0<br>
> max_mw: 0<br>
> max_raw_ipv6_qp: 0<br>
> max_raw_ethy_qp: 0<br>
> max_mcast_grp: 8192<br>
> max_mcast_qp_attach: 56<br>
> max_total_mcast_qp_attach: 458752<br>
> max_ah: 0<br>
> max_fmr: 0<br>
> max_srq: 960<br>
> max_srq_wr: 65535<br>
> max_srq_sge: 31<br>
> max_pkeys: 64<br>
> local_ca_ack_delay: 15<br>
><br>
><br>
><br>
> On Tue, Mar 11, 2008 at 11:35 PM, Jack Morgenstein<br>
</div></div><div><div></div><div>> <<a href="mailto:jackm@dev.mellanox.co.il" target="_blank">jackm@dev.mellanox.co.il</a> <mailto:<a href="mailto:jackm@dev.mellanox.co.il" target="_blank">jackm@dev.mellanox.co.il</a>>> wrote:<br>
><br>
> On Tuesday 11 March 2008 19:38, Rajouri Jammu wrote:<br>
> > I think it's Arbel.<br>
> ><br>
> > Both drivers are loaded (ib_mthca and mlx4_core). How do I tell<br>
> which<br>
> > driver's settings I should modify?<br>
> ><br>
> > What will be the max_mr value if log_num_mpt = 20?<br>
> ><br>
><br>
> To see which device you have, type the following command in your<br>
> Linux console:<br>
><br>
> lspci | grep Mellanox<br>
><br>
> For ConnectX (mlx4), you will see:<br>
> InfiniBand: Mellanox Technologies: Unknown device 634a (rev a0)<br>
> (lspci has not caught up with us yet).<br>
><br>
> For Arbel (InfiniHost III), you will see either:<br>
> InfiniBand: Mellanox Technologies MT25208 InfiniHost III Ex HCA<br>
> (rev a0)<br>
><br>
> or, if you are running your Arbel in Tavor compatibility mode:<br>
><br>
> InfiniBand: Mellanox Technologies MT25208 InfiniHost III Ex (Tavor<br>
> compatibility mode) (rev 20)<br>
> ===========<br>
> If your installed HCA is a ConnectX, you should use the module<br>
> parameters for mlx4.<br>
> If your installed HCA is an InfiniHost III, you should use the<br>
> module parameters for ib_mthca.<br>
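[Editor's sketch of how those module parameters are typically set before reloading the driver. The file paths and the ib_mthca parameter name are assumptions, not taken from this thread; check `modinfo ib_mthca` and `modinfo mlx4_core` for the exact names your build exposes.]

```shell
# Assumed paths and parameter names -- verify with `modinfo` before using.
# ConnectX (mlx4): the knob mentioned in this thread is log2 of the MPT count.
echo "options mlx4_core log_num_mpt=20" >> /etc/modprobe.d/mlx4_core.conf

# InfiniHost III (ib_mthca, the driver in use per the lspci output above):
# later mthca builds expose absolute counts in their hca_profile instead.
echo "options ib_mthca num_mpt=1048576" >> /etc/modprobe.d/ib_mthca.conf

# Reload the driver so the new profile takes effect, then confirm:
modprobe -r ib_mthca && modprobe ib_mthca
cat /sys/module/ib_mthca/parameters/num_mpt
```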
><br>
> - Jack<br>
><br>
><br>
</div></div>> ------------------------------------------------------------------------<br>
><br>
> _______________________________________________<br>
> general mailing list<br>
> <a href="mailto:general@lists.openfabrics.org" target="_blank">general@lists.openfabrics.org</a><br>
> <a href="http://lists.openfabrics.org/cgi-bin/mailman/listinfo/general" target="_blank">http://lists.openfabrics.org/cgi-bin/mailman/listinfo/general</a><br>
><br>
> To unsubscribe, please visit <a href="http://openib.org/mailman/listinfo/openib-general" target="_blank">http://openib.org/mailman/listinfo/openib-general</a><br>
<br>
</blockquote></div><br>
</div></div></blockquote></div><br></div>