All programs are executed as the root user.

ulimit -a

time(seconds)          unlimited
file(blocks)           unlimited
data(kbytes)           unlimited
stack(kbytes)          unlimited
coredump(blocks)       0
memory(kbytes)         unlimited
locked memory(kbytes)  unlimited
process                8063
nofiles                1048576
vmemory(kbytes)        unlimited
locks                  unlimited

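For completeness, a tiny sketch (illustrative only, not part of the test code) that prints the locked-memory limit a process actually inherits; ibv_reg_mr() pins pages, so a low RLIMIT_MEMLOCK is a common cause of registration failures for non-root users:

/* Print the soft RLIMIT_MEMLOCK value seen by this process. */
#include <stdio.h>
#include <sys/resource.h>

int main(void)
{
    struct rlimit rl;

    if (getrlimit(RLIMIT_MEMLOCK, &rl) != 0) {
        perror("getrlimit(RLIMIT_MEMLOCK)");
        return 1;
    }

    if (rl.rlim_cur == RLIM_INFINITY)
        printf("RLIMIT_MEMLOCK (soft): unlimited\n");
    else
        printf("RLIMIT_MEMLOCK (soft): %llu bytes\n",
               (unsigned long long)rl.rlim_cur);

    return 0;
}

With the limits above and everything running as root, the pinned-memory limit looks unlikely to be the cause here.
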
On Tue, Feb 24, 2009 at 11:50 PM, Dotan Barak <dotanba@gmail.com> wrote:
Do you execute your program under the root user or under any other user?
(Maybe it fails because of the ulimit on the amount of memory that can be pinned.)

Dotan

> On Wed, Feb 25, 2009 at 7:51 AM, Phillip Wilson <phillipwils@gmail.com> wrote:
> The ibv_reg_mr() function call fails with the HCA (DID=0x634A) that uses the
> mlx4_0 driver when the system is under load (memory and CPU). The system
> usually still has over 500 MB of system memory when the ibv_reg_mr() call
> fails.
>
> If I run only the HCA (DID=0x6278) that uses the mthca0 driver together with
> the other tools that generate stress, the ibv_reg_mr() call always succeeds.
> If I run only the HCA (DID=0x634A) together with the other tools that
> generate stress, the ibv_reg_mr() call always fails; it usually takes less
> than 30 minutes for the failure to occur.
>
> At most 8 memory regions (32 MB) are requested at one time across the two
> dual-port HCA cards, and the maximum size of a single memory region is 1 MB.
>
> i.e.:
>
>     ctx->mr = ibv_reg_mr(ctx->pd,
>                          buffer,   /* 4 MB buffer malloc'd per process */
>                          size,     /* 2 bytes to 1 MB */
>                          IBV_ACCESS_LOCAL_WRITE);
>
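For reference, a stripped-down, self-contained sketch of this registration pattern (not the actual modified pingpong test; the device selection, sizes, and endless loop are placeholders) that registers 1 MB slices of a malloc'd 4 MB buffer and reports errno when ibv_reg_mr() fails:

/* Register/deregister 1 MB slices of a 4 MB buffer in a loop and report
 * errno on the first ibv_reg_mr() failure. */
#include <errno.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <infiniband/verbs.h>

#define BUF_SIZE   (4 * 1024 * 1024)   /* 4 MB buffer per process */
#define CHUNK_SIZE (1 * 1024 * 1024)   /* 1 MB maximum region size */

int main(void)
{
    struct ibv_device **dev_list = ibv_get_device_list(NULL);
    if (!dev_list || !dev_list[0]) {
        fprintf(stderr, "no RDMA devices found\n");
        return 1;
    }

    struct ibv_context *ctx = ibv_open_device(dev_list[0]);
    struct ibv_pd *pd = ctx ? ibv_alloc_pd(ctx) : NULL;
    char *buffer = malloc(BUF_SIZE);
    if (!ctx || !pd || !buffer) {
        fprintf(stderr, "setup failed\n");
        return 1;
    }

    for (unsigned long iter = 0; ; iter++) {
        struct ibv_mr *mr[BUF_SIZE / CHUNK_SIZE];
        int i;

        for (i = 0; i < BUF_SIZE / CHUNK_SIZE; i++) {
            mr[i] = ibv_reg_mr(pd, buffer + i * CHUNK_SIZE, CHUNK_SIZE,
                               IBV_ACCESS_LOCAL_WRITE);
            if (!mr[i]) {
                fprintf(stderr, "iter %lu: ibv_reg_mr failed: %s (errno %d)\n",
                        iter, strerror(errno), errno);
                return 1;
            }
        }
        for (i = 0; i < BUF_SIZE / CHUNK_SIZE; i++)
            ibv_dereg_mr(mr[i]);
    }
    /* not reached; a full test would tear down pd/ctx and free dev_list */
    return 0;
}

When the failure reproduces, the reported errno (e.g. ENOMEM versus something else) should help separate a pinning/limit problem from a driver or firmware one.
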
>
> I modified the ibv_rc_pingpong test to use a parent/child paradigm instead of
> the current client/server approach for my environment. The code forks a
> parent process and a child process per port, which serves the same purpose as
> the current client/server approach, and it also forks one process per HCA
> (see the sketch below). Basically, the same code is executed on each HCA; the
> only differences are the user libraries (libmlx4.so, libmthca.so), the kernel
> modules (mlx4.ko, mthca.ko), and the firmware on each HCA.
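
In outline, the process layout is roughly the following sketch (run_pingpong() is a hypothetical placeholder; the real modified test also creates the QPs and moves data between the two sides):

/* Sketch of the process layout only: one process per HCA, and a pair of
 * processes (the "parent" and "child" roles) per port. */
#include <stdio.h>
#include <stdlib.h>
#include <sys/wait.h>
#include <unistd.h>
#include <infiniband/verbs.h>

/* Placeholder for the per-port work done by each side of the pair. */
static int run_pingpong(struct ibv_device *dev, int port, int role)
{
    printf("pid %d: %s port %d role %d\n",
           getpid(), ibv_get_device_name(dev), port, role);
    return 0;
}

int main(void)
{
    int num_devices;
    struct ibv_device **dev_list = ibv_get_device_list(&num_devices);

    if (!dev_list) {
        perror("ibv_get_device_list");
        return 1;
    }

    for (int d = 0; d < num_devices; d++) {
        if (fork() != 0)
            continue;                          /* launcher keeps iterating HCAs */

        /* this process owns one HCA; start a pair of processes per port */
        for (int port = 1; port <= 2; port++)        /* dual-port cards */
            for (int role = 0; role < 2; role++)     /* "parent" and "child" side */
                if (fork() == 0)
                    exit(run_pingpong(dev_list[d], port, role));

        while (wait(NULL) > 0)
            ;                                  /* reap the per-port pairs */
        exit(0);
    }

    while (wait(NULL) > 0)
        ;                                      /* reap the per-HCA processes */
    ibv_free_device_list(dev_list);
    return 0;
}
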
>
> Since the code in the two user libraries is very similar, I suspect the issue
> is in the kernel code or in the HCA firmware.
>
> Does anyone know of a kernel patch that fixes this issue anywhere from kernel
> 2.6.24 through 2.6.28? Has anyone else seen this issue?
>
> System Information:
>
> The system has 4 GB of memory.
>
> uname -a
> Linux (none) 2.6.24.02.02.08 #21 SMP Thu Feb 19 11:04:35 PST 2009 ia64 unknown
>
> OFED 1.2.5
>
> lspci -d 15b3:
> 0000:10:00.0 InfiniBand: Mellanox Technologies MT25208 InfiniHost III Ex
>              (Tavor compatibility mode) (rev 20)
> 0000:c3:00.0 InfiniBand: Mellanox Technologies: Unknown device 634a (rev a0)
>
> lspci -d 15b3: -n
> 0000:10:00.0 0c06: 15b3:6278 (rev 20)
> 0000:c3:00.0 0c06: 15b3:634a (rev a0)
>
> ibv_devinfo -v
> hca_id: mlx4_0
>         fw_ver: 2.5.000
>
> hca_id: mthca0
>         fw_ver: 4.8.930