[ofiwg] mapping adapter memory

Reese Faucette (rfaucett) rfaucett at cisco.com
Tue Oct 7 22:30:34 PDT 2014


Sorry for the late response; I was on PTO all last week.

Regarding users setting OPTIMIZE_LATENCY and OPTIMIZE_BANDWIDTH for every endpoint - the point of this API (libfabric in general), IMO, is to give a uniform way to get "really good" performance on lots of different platforms without much effort, and a way to get "awesome" performance for those willing to research their providers and tune their applications accordingly.

A goal of maximum performance for users who can't be bothered to learn their platform is not realistic, and we should not be attempting that.

Bandwidth vs. latency is an age-old tradeoff, and any programmer truly tuning for performance will understand this and know where their app needs lower-than-normal latency EPs (control messages, for example) and where they need maximal bandwidth EPs (bulk data transfer, for example).
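To make that concrete, here is a rough sketch of what that split might look like through libfabric.  The OPTIMIZE_FOR_* flags are the hypothetical hints discussed below - their names, bit values, and placement in fi_info->caps are inventions for this sketch, not part of any released API.

#include <rdma/fabric.h>
#include <rdma/fi_endpoint.h>
#include <rdma/fi_errno.h>

/* Hypothetical hint bits - placeholders for this sketch only. */
#define FI_OPTIMIZE_FOR_LATENCY (1ULL << 62)
#define FI_OPTIMIZE_FOR_BW      (1ULL << 63)

/* Create one low-latency EP for control messages and one
 * bandwidth-oriented EP for bulk data on the same domain. */
static int create_split_eps(struct fid_domain *domain,
                            const struct fi_info *base,
                            struct fid_ep **ctrl_ep,
                            struct fid_ep **bulk_ep)
{
    struct fi_info *info;
    int ret;

    info = fi_dupinfo(base);
    if (!info)
        return -FI_ENOMEM;
    info->caps |= FI_OPTIMIZE_FOR_LATENCY;   /* control traffic */
    ret = fi_endpoint(domain, info, ctrl_ep, NULL);
    fi_freeinfo(info);
    if (ret)
        return ret;

    info = fi_dupinfo(base);
    if (!info)
        return -FI_ENOMEM;
    info->caps |= FI_OPTIMIZE_FOR_BW;        /* bulk data transfer */
    ret = fi_endpoint(domain, info, bulk_ep, NULL);
    fi_freeinfo(info);
    return ret;
}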

So, I don't buy the argument that since novice programmers may not use these hints properly, there is no advantage to having them.  Our group will be writing both provider code and upper-layer code to run on libfabric, and we will tune with every hint possible, most likely for more than just our provider, giving lots of example code for other users of the API to learn from.

-reese

From: Tsai-yang Jea [mailto:tjea at us.ibm.com] 

> I can see both of these options being set all the time.  Users are greedy; they probably want both good latency and good bandwidth.
> Unless there is clear documentation that describes the limitations and drawbacks of enabling these options (though those limitations and drawbacks are provider specific). In that case, users may choose carefully.

> In general, I think users will set both options since they are ignored if hardware does not support them. Who doesn't want an EP that has both good
> bandwidth and latency? If both of the options are always set, or set most of the time, it is the same as not having the options.


From: "Reese Faucette (rfaucett)" <rfaucett at cisco.com>
To: "Hefty, Sean" <sean.hefty at intel.com>, "ofiwg at lists.openfabrics.org" <ofiwg at lists.openfabrics.org>
Date: 09/24/2014 11:03 PM
Subject: Re: [ofiwg] mapping adapter memory
Sent by: ofiwg-bounces at lists.openfabrics.org
________________________________________



> Conceptually, I can see where not all endpoints may have dedicated
> hardware behind them and may have to share resources with other
> endpoints, and potentially other processes.  Even adapters that can
> dedicate hardware resources to every endpoint may not perform well as a
> result of caching limitations on the HCA.  This could require an app to share
> resources (e.g. a kernel allocated QP) for specific communication channels.
> 
> Maybe a provider can expose some attributes on the 'optimal' use of
> the underlying hardware, so that an application or job scheduler doesn't
> oversubscribe the hardware.  Reporting maximum values doesn't do that,
> since apps often allocate the max values expecting that there won't be any
> performance loss for doing so.
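
To illustrate the kind of attribute that might mean - a purely hypothetical sketch, since none of these fields exist in libfabric - a provider could report an "optimal" count next to each hard maximum:

struct fi_optimal_attr {
    size_t ep_max;       /* hard limit: EP creation fails beyond this   */
    size_t ep_optimal;   /* EPs beyond this start sharing hardware      */
    size_t mr_max;       /* hard limit on memory registrations          */
    size_t mr_optimal;   /* registrations beyond this may thrash the
                            adapter's on-board cache                    */
};

A job scheduler could then compare ep_optimal against a job's requested endpoint count before packing more ranks onto a node, instead of trusting the maximums.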

How about something like this:
There are hints the app can specify when creating an EP, such as "OPTIMIZE_FOR_LATENCY" or "OPTIMIZE_FOR_BW".  If the hardware has nothing special to do for those hints, they are ignored.  If there are special hardware resources that can serve one or the other, the provider will make a best effort to match those resources to their respective hints.  Once these special resources are exhausted, "you get what you get".  So, if the hardware supports N "low latency" QPs and the app requests N+1 OPTIMIZE_FOR_LATENCY QPs, the (N+1)th QP will just be a little slower than the others.  Moral: ask for the more important ones first.
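
In provider terms, the policy is just best-effort allocation with a fallback.  Roughly (hypothetical pseudocode - the type and function names are invented, not provider code; the hint flag reuses the placeholder bit from the sketch above):

#include <stdint.h>

struct qp;                           /* opaque hardware QP handle */

struct provider_domain {
    int ll_qps_free;                 /* low-latency QPs still unclaimed */
};

struct qp *alloc_ll_qp(struct provider_domain *dom);   /* hypothetical */
struct qp *alloc_std_qp(struct provider_domain *dom);  /* hypothetical */

static struct qp *provider_alloc_qp(struct provider_domain *dom,
                                    uint64_t hints)
{
    struct qp *qp = NULL;

    /* Best effort: hand out a special QP while any remain. */
    if ((hints & FI_OPTIMIZE_FOR_LATENCY) && dom->ll_qps_free > 0) {
        qp = alloc_ll_qp(dom);
        if (qp)
            dom->ll_qps_free--;
    }

    /* Hint ignored, or the N special QPs are exhausted:
     * "you get what you get". */
    if (!qp)
        qp = alloc_std_qp(dom);

    return qp;
}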

That seems not too invasive, and it is effective for my needs.
-reese