<html>
<head>
<meta http-equiv="Content-Type" content="text/html; charset=Windows-1252">
<meta name="Generator" content="Microsoft Exchange Server">
<!-- converted from text --><style><!-- .EmailQuote { margin-left: 1pt; padding-left: 4pt; border-left: #800000 2px solid; } --></style>
</head>
<body>
<style type="text/css" style="">
<!--
p
{margin-top:0;
margin-bottom:0}
-->
</style>
<div dir="ltr">
<div id="x_divtagdefaultwrapper" dir="ltr" style="font-size:12pt; color:#000000; font-family:Calibri,Helvetica,sans-serif">
<p>I thought I'd experiment with scalable endpoints as an alternative to the thread-local endpoints, but I'm getting -FI_ENOSYS from the tcp;ofi_rxm setup.</p>
<p><br>
</p>
<p>Is that something I can work around with different flags, or is it something that just isn't supported? (The feature matrix doesn't mention it.)</p>
<p><br>
</p>
<p>Thanks</p>
<p>JB<br>
</p>
<p><br>
</p>
<p></p>
<pre style="margin-top:0px; margin-bottom:0px; margin-left:0px; margin-right:0px; text-indent:0px">int fi_no_scalable_ep(struct fid_domain *domain, struct fi_info *info,
		      struct fid_ep **sep, void *context)
{
	return -FI_ENOSYS;
}</pre>
<pre style="margin-top:0px; margin-bottom:0px; margin-left:0px; margin-right:0px; text-indent:0px"><br><br></pre>
<br>
<p></p>
</div>
<hr tabindex="-1" style="display:inline-block; width:98%">
<div id="x_divRplyFwdMsg" dir="ltr"><font face="Calibri, sans-serif" color="#000000" style="font-size:11pt"><b>From:</b> Hefty, Sean <sean.hefty@intel.com><br>
<b>Sent:</b> 23 March 2021 00:08:38<br>
<b>To:</b> Biddiscombe, John A.; libfabric-users@lists.openfabrics.org<br>
<b>Subject:</b> RE: Queue size question</font>
<div> </div>
</div>
</div>
<font size="2"><span style="font-size:10pt;">
<div class="PlainText">> I have a test that seems to run fine on tcp;ofi_rxm - though this test is two ranks on<br>
> the same laptop, so it isn't really a very good test - however, I can throw anything at<br>
> it and it seems to reliably complete.<br>
> <br>
> On GNI, I get lockups and after much head scratching, I am wondering what the<br>
> significance of the tx/rx attribute size may be.<br>
> <br>
> On tcp/ofi_rxm the size reports as "size: 65536" and I can have 16 threads each sending<br>
> up to 128 messages in flight on one thread per endpoint, and a single receive endpoint<br>
> handling all receives - possibly 16*128 messages with posted receives = 2048.<br>
> <br>
> When I run on GNI, using two nodes, each reports tx/rx attr "size: 500" - and I find<br>
> that when many messages are in flight, things can lock up because some posted sends are<br>
> never received. This seems to happen even when I drop down to 16 threads with 8 in<br>
> flight messages which ought to be 128 at a time - and I would have suspected that a<br>
> size of 500 (cq size limitation?) would handle this.<br>
> <br>
> Question 1 - what is the tx/rx attr size really telling me?<br>
<br>
Unfortunately, this is provider dependent, and there's very little that can be done to define it more crisply without forcing an implementation. In some cases it's related to a HW queue size; I suspect that may be the case with gni. However, a HW queue size doesn't necessarily equal the number of operations that can be queued. For example, it's possible for a send that requires 2 iovecs to consume 2 entries in the queue. But each operation consuming 1 entry is usually a safe assumption.<br>
<br>
<br>
Someone familiar with gni will need to chime in on how it maps to their HW.<br>
<br>
> Question 2 - if I post more than the allowed receives or sends, should I not receive<br>
> some kind of error? (I have enabled resource management, so I might expect a retry code<br>
> when I attempt the send/recv)<br>
<br>
Yes, you should see -FI_EAGAIN when trying to post more operations than the queues support. There are checks like this in some providers -- I think rxm, verbs, and tcp all do, and rxm is actually forgiving about it by allowing queues to overflow. (Because it's easy to swamp a receiver, even with a reasonably well-written app.)<br>
<br>
> Ideally, I'd like to throttle the number of messages in flight according to what the<br>
> hardware reports its capabilities - which vars should I use from the fi_info to do<br>
> this?<br>
<br>
Resource management is the correct setting. Manually limiting your application to the tx/rx sizes, and sizing the CQ appropriately, should have done the trick.<br>
<br>
It sounds like this is a problem likely restricted to gni.<br>
<br>
- Sean<br>
</div>
</span></font>
</body>
</html>