[Users] increasing limit of Registerable memory with OFED-3.5-1
Hal Rosenstock
hal.rosenstock at gmail.com
Fri Jul 26 03:36:56 PDT 2013
On Fri, Jul 26, 2013 at 4:24 AM, Anton Starikov <ant.starikov at gmail.com> wrote:
> Yep, I saw that one.
>
> But there was a long discussion here earlier about which of them
> (log_mtts_per_seg or log_num_mtt) to use, and the conclusion was that
> log_num_mtt is preferable because it decreases fragmentation, if I
> remember correctly.
>
FWIW, in the latest upstream driver sources I don't see a log_num_mtt
module parameter for mlx4; only log_mtts_per_seg.
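
If you're seeing the same thing on your installed module, the knob to turn
is log_mtts_per_seg; the FAQ linked from the warning below covers these
mlx4 module parameters. A rough sketch only (verify the parameter on your
system and pick a value that fits your RAM; the file name under
/etc/modprobe.d/ is arbitrary):

    # confirm what the installed mlx4_core actually exposes
    modinfo mlx4_core | grep -i mtt
    cat /sys/module/mlx4_core/parameters/log_mtts_per_seg

    # e.g. in /etc/modprobe.d/mlx4_core.conf
    options mlx4_core log_mtts_per_seg=7

    # then restart the stack (e.g. /etc/init.d/openibd restart) or reboot

Back-of-the-envelope, if I have the defaults right (2^20 MTT segments,
log_mtts_per_seg=3, 4 KiB pages): 2^20 * 2^3 * 4 KiB = 32 GiB, which is
the 32768 MiB your warning reports; raising log_mtts_per_seg to 7
multiplies that by 16, i.e. 512 GiB, comfortably above your 256 GB nodes.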
-- Hal
>
> Anton.
>
> On Jul 25, 2013, at 5:58 PM, Hal Rosenstock <hal.rosenstock at gmail.com> wrote:
>
> >
> >
> > On Wed, Jul 24, 2013 at 8:56 AM, Anton Starikov <ant.starikov at gmail.com> wrote:
> >
> > It is a ConnectX-3 (MT27500). I checked the driver sources; there is no
> > mention of log_num_mtt.
> >
> > It's log_mtts_per_seg, and it's found in
> > drivers/net/ethernet/mellanox/mlx4/main.c
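> >
> > A quick way to see it without digging through the whole file (just a
> > suggestion; the path differs between kernel and OFED source trees):
> >
> >     grep -n log_mtts_per_seg drivers/net/ethernet/mellanox/mlx4/main.c
> >
> > which should list the module_param_named()/MODULE_PARM_DESC() lines for
> > it along with the places it is used.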
> >
> >
> > On Jul 24, 2013, at 12:46 PM, Hal Rosenstock <hal.rosenstock at gmail.com> wrote:
> >
> > >
> > >
> > > On Wed, Jul 24, 2013 at 3:31 AM, Anton Starikov <ant.starikov at gmail.com> wrote:
> > > Hello,
> > >
> > >
> > > I'm using OFED-3.5-1 with SL-6.4 (I had to do some minor patching to
> > > get it working on the 2.6.32-358.14.1.el6.x86_64 kernel due to a double
> > > export of __pskb_copy).
> > >
> > > OpenMPI gives the known warning about the limit of registrable memory
> > > (below), but in the current modules there is no "log_num_mtt" parameter
> > > to tune.
> > >
> > > Which HCA are you using?
> > >
> > > -- Hal
> > >
> > >
> > > Just in case: my hard and soft limits for maximum locked memory are
> > > unlimited.
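> > >
> > > (That is, e.g. in bash:
> > >
> > >     ulimit -l       # soft limit on max locked memory, in KiB
> > >     ulimit -H -l    # hard limit
> > >
> > > both show "unlimited".)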
> > >
> > > What should the procedure be with the latest OFED then?
> > >
> > > Thank you,
> > >
> > > Anton Starikov
> > >
> > > ------------------------------------------
> > >
> > > WARNING: It appears that your OpenFabrics subsystem is configured to only
> > > allow registering part of your physical memory. This can cause MPI jobs to
> > > run with erratic performance, hang, and/or crash.
> > >
> > > This may be caused by your OpenFabrics vendor limiting the amount of
> > > physical memory that can be registered. You should investigate the
> > > relevant Linux kernel module parameters that control how much physical
> > > memory can be registered, and increase them to allow registering all
> > > physical memory on your machine.
> > >
> > > See this Open MPI FAQ item for more information on these Linux kernel
> > > module parameters:
> > >
> > > http://www.open-mpi.org/faq/?category=openfabrics#ib-locked-pages
> > >
> > > Local host:           node1
> > > Registerable memory:  32768 MiB
> > > Total memory:         262098 MiB
> > >
> > > Your MPI job will continue, but may be behave poorly and/or hang.
> > > _______________________________________________
> > > Users mailing list
> > > Users at lists.openfabrics.org
> > > http://lists.openfabrics.org/cgi-bin/mailman/listinfo/users
> > >
> >
> >
>
>