[ofa-general] New proposal for memory management

arkady kanevsky arkady.kanevsky at gmail.com
Thu Apr 30 14:25:56 PDT 2009


Jeff, are the MPI applications that are broken the ones that use plain
malloc/free instead of MPI_ALLOC_MEM calls?
Arkady
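
For readers outside the thread, here is a minimal sketch of the two
allocation paths the question above distinguishes. It is illustrative
only (the buffer size and the elided communication calls are not from
the thread): memory from plain malloc() is only seen by the MPI library
when it is passed to a communication call, while MPI_Alloc_mem() lets
the library allocate, and optionally pre-register, the memory itself.

#include <stdlib.h>
#include <mpi.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    /* Plain malloc: the MPI library first sees this buffer when it shows
     * up in a communication call, so a registration cache can only stay
     * coherent by intercepting malloc/free (or via a notifier). */
    double *a = malloc(1024 * sizeof *a);
    /* ... MPI_Send()/MPI_Recv() using a ... */
    free(a);

    /* MPI_Alloc_mem: the library hands out (and may pre-register) the
     * memory itself, so no interception is needed for this buffer. */
    double *b;
    MPI_Alloc_mem(1024 * sizeof *b, MPI_INFO_NULL, &b);
    /* ... MPI_Send()/MPI_Recv() using b ... */
    MPI_Free_mem(b);

    MPI_Finalize();
    return 0;
}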

On Thu, Apr 30, 2009 at 4:45 PM, Woodruff, Robert J
<robert.j.woodruff at intel.com> wrote:

> Jeff wrote,
>
> >I'm a little amazed that it's gone this long without being fixed (I
> >know I spoke about this exact issue at Sonoma 3 years ago!).
>
> If MPIs are so broken over OFA, then can you give me the list of
> applications that are failing? I did not hear from any customer at
> Sonoma that all the MPIs are totally broken and do not work.
> As far as I know, most if not all of the applications are running just
> fine with the MPIs as they are today. At least I am not aware of any
> applications that are failing when using Intel MPI. Sure, you need to
> hook malloc and such, and that is a bit tricky, but it seems like you
> have figured out how to do it and it works. If a notifier capability
> would make this easier, then perhaps it should be added, but adding a
> memory registration cache to the kernel that may or may not even meet
> the needs of all MPIs does not seem like the right approach; it will
> just lead to kernel bloat.
>
> Sure, you have to manage your own cache, but you have chosen to do a
> cache to get better performance. You did not have to do a cache at all.
> You chose to do it to make your MPI better.
>
> To me, all this sounds like a lot of whining...
> Why can't the OS fix all my problems?
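
For context on the "hook malloc" and "notifier" points above, a minimal
sketch of what such a user-space registration cache looks like. It is
not code from any particular MPI: reg_entry, hw_register() and
hw_deregister() are hypothetical stand-ins for the real bookkeeping and
for the verbs calls (ibv_reg_mr / ibv_dereg_mr), and a linked list is
used for brevity rather than a realistic lookup structure.

#include <stdlib.h>

typedef struct reg_entry {
    void             *addr;
    size_t            len;
    void             *handle;          /* stand-in for struct ibv_mr *   */
    struct reg_entry *next;
} reg_entry;

static reg_entry *cache;               /* simple list, for brevity only  */

/* Hypothetical stand-ins for ibv_reg_mr() / ibv_dereg_mr(). */
static void *hw_register(void *addr, size_t len)
{
    (void)addr; (void)len;
    return (void *)1;
}
static void hw_deregister(void *handle)
{
    (void)handle;
}

/* Reuse a cached registration covering [addr, addr+len); register on miss. */
void *cache_lookup_or_register(void *addr, size_t len)
{
    for (reg_entry *e = cache; e; e = e->next)
        if ((char *)addr >= (char *)e->addr &&
            (char *)addr + len <= (char *)e->addr + e->len)
            return e->handle;                      /* hit: skip re-registration */

    reg_entry *e = malloc(sizeof *e);
    e->addr   = addr;
    e->len    = len;
    e->handle = hw_register(addr, len);            /* miss: register once */
    e->next   = cache;
    cache     = e;
    return e->handle;
}

/* This is what the malloc/free hook (or a kernel notifier) has to call:
 * if the entry is not dropped before the pages go back to the OS, a
 * later malloc() can return the same virtual address backed by different
 * pages while the stale registration is still cached. */
void cache_invalidate(void *addr)
{
    for (reg_entry **pe = &cache; *pe; pe = &(*pe)->next) {
        if ((*pe)->addr == addr) {
            reg_entry *e = *pe;
            *pe = e->next;
            hw_deregister(e->handle);
            free(e);
            return;
        }
    }
}

int main(void)
{
    void *buf = malloc(4096);
    cache_lookup_or_register(buf, 4096);   /* first send: registers      */
    cache_lookup_or_register(buf, 4096);   /* repeat send: cache hit     */
    cache_invalidate(buf);                 /* what the free() hook does  */
    free(buf);
    return 0;
}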



-- 
Cheers,
Arkady Kanevsky

