<html xmlns:v="urn:schemas-microsoft-com:vml" xmlns:o="urn:schemas-microsoft-com:office:office" xmlns:w="urn:schemas-microsoft-com:office:word" xmlns:m="http://schemas.microsoft.com/office/2004/12/omml" xmlns="http://www.w3.org/TR/REC-html40">
<head>
<meta http-equiv="Content-Type" content="text/html; charset=utf-8">
<meta name="Generator" content="Microsoft Word 15 (filtered medium)">
<style><!--
/* Font Definitions */
@font-face
{font-family:Wingdings;
panose-1:5 0 0 0 0 0 0 0 0 0;}
@font-face
{font-family:"Cambria Math";
panose-1:2 4 5 3 5 4 6 3 2 4;}
@font-face
{font-family:Calibri;
panose-1:2 15 5 2 2 2 4 3 2 4;}
/* Style Definitions */
p.MsoNormal, li.MsoNormal, div.MsoNormal
{margin:0in;
margin-bottom:.0001pt;
font-size:11.0pt;
font-family:"Calibri","sans-serif";}
a:link, span.MsoHyperlink
{mso-style-priority:99;
color:blue;
text-decoration:underline;}
a:visited, span.MsoHyperlinkFollowed
{mso-style-priority:99;
color:purple;
text-decoration:underline;}
span.EmailStyle17
{mso-style-type:personal;
font-family:"Calibri","sans-serif";
color:windowtext;}
span.EmailStyle18
{mso-style-type:personal-reply;
font-family:"Calibri","sans-serif";
color:#1F497D;}
.MsoChpDefault
{mso-style-type:export-only;
font-size:10.0pt;}
@page WordSection1
{size:8.5in 11.0in;
margin:1.0in 1.0in 1.0in 1.0in;}
div.WordSection1
{page:WordSection1;}
--></style>
</head>
<body lang="EN-US" link="blue" vlink="purple">
<div class="WordSection1">
<p class="MsoNormal"><span style="color:#1F497D">The manpage on fi_endpoint says that scalable endpoints have only one transport level address. It seems to me that in your example using passive endpoints, you would end up with two transport addresses. I’m not
sure that the scalable endpoint is what you want here. Are you wanting the ability to use FI_RDM with passive endpoint?<o:p></o:p></span></p>
<p class="MsoNormal"><span style="color:#1F497D"><o:p> </o:p></span></p>
<p class="MsoNormal"><span style="color:#1F497D">I think in general, there might be an issue with sharing the same receive queue (i.e. FI_RDM) when using passive endpoints. It seems to me from the APIs that every time you accept a new “connection” you end up
with a new endpoint – like in a connection oriented API. Since every endpoint has its own receive queue, there isn’t a good way to share that receive queue.<o:p></o:p></span></p>
<p class="MsoNormal"><span style="color:#1F497D"><o:p> </o:p></span></p>
<p class="MsoNormal"><span style="color:#1F497D">I’m not sure what the solution is here, other than to wait for Sean
</span><span style="font-family:Wingdings;color:#1F497D">J</span><span style="color:#1F497D"> It could be that the fi_cm APIs need a mechanism to “reuse” an open RDM type endpoint, vs creating a new one each time.<o:p></o:p></span></p>
<p class="MsoNormal"><a name="_MailEndCompose"><span style="color:#1F497D"><o:p> </o:p></span></a></p>
<div style="border:none;border-left:solid blue 1.5pt;padding:0in 0in 0in 4.0pt">
<div>
<div style="border:none;border-top:solid #E1E1E1 1.0pt;padding:3.0pt 0in 0in 0in">
<p class="MsoNormal"><b>From:</b> ofiwg-bounces@lists.openfabrics.org [mailto:ofiwg-bounces@lists.openfabrics.org]
<b>On Behalf Of </b>Reese Faucette (rfaucett)<br>
<b>Sent:</b> Tuesday, October 28, 2014 12:25 PM<br>
<b>To:</b> ofiwg@lists.openfabrics.org<br>
<b>Subject:</b> [ofiwg] shared recvs<o:p></o:p></p>
</div>
</div>
<p class="MsoNormal"><o:p> </o:p></p>
<p class="MsoNormal">If an app wishes to create a passive endpoint, accept connections on it, and then post receives that will receive data coming from any remote connection on that endpoint, how exactly is that accomplished?
<o:p></o:p></p>
<p class="MsoNormal"><o:p> </o:p></p>
<p class="MsoNormal">My best guess is by using fi_rx_context, we can post a “shared” receive buffer via:<o:p></o:p></p>
<p class="MsoNormal"><o:p> </o:p></p>
<p class="MsoNormal">fi_pendpoint(&pep);<o:p></o:p></p>
<p class="MsoNormal">fi_bind(pep, cmeq);<o:p></o:p></p>
<p class="MsoNormal">fi_listen(pep);<o:p></o:p></p>
<p class="MsoNormal"><o:p> </o:p></p>
<p class="MsoNormal">fi_eq_sread(cmeq); // wait for CONNREQ 1<o:p></o:p></p>
<p class="MsoNormal">fi_endpoint(&cep1); // remote-specific EP<o:p></o:p></p>
<p class="MsoNormal">fi_accept(cep1);<o:p></o:p></p>
<p class="MsoNormal">fi_eq_sread(cmeq); // wait for COMPLETE 1<o:p></o:p></p>
<p class="MsoNormal"><o:p> </o:p></p>
<p class="MsoNormal">fi_eq_sread(cmeq); // wait for CONNREQ 2<o:p></o:p></p>
<p class="MsoNormal">fi_endpoint(&cep2); // remote-specific EP<o:p></o:p></p>
<p class="MsoNormal">fi_accept(cep2);<o:p></o:p></p>
<p class="MsoNormal">fi_eq_sread(cmeq); // wait for COMPLETE 2<o:p></o:p></p>
<p class="MsoNormal"><o:p> </o:p></p>
<p class="MsoNormal">fi_rx_context(cep1, 0, &rxep); // get common EP for RX<o:p></o:p></p>
<p class="MsoNormal">fi_recv(rxep, buf, len);<o:p></o:p></p>
<p class="MsoNormal"><o:p> </o:p></p>
<p class="MsoNormal">Now, a send from the remote endpoint associated with either cep1 or cep2 will land in buf, yes ?<o:p></o:p></p>
<p class="MsoNormal"><o:p> </o:p></p>
<p class="MsoNormal">I’m sure there are cases where support for this mode of operation is desired vs. not, what are the endpoint flags that would control whether this approach will work or not? I imagine that an endpoint that supports the above would NOT support
posting a receive directly to cep1 or cep2, and that endpoint that expect to post received to cep1 only for remote1 and cep2 for remote 2 (e.g. traditional Verbs RC endpoint) would not support this shared mode of operation.<o:p></o:p></p>
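<p class="MsoNormal"><o:p> </o:p></p>
<p class="MsoNormal">Purely as a guess at how this could be surfaced: maybe the shared vs. non-shared behavior is requested through the ep_attr hints passed to fi_getinfo, along the lines below. The FI_SHARED_CONTEXT value is my assumption here, not something I have found in the spec; a provider that only supports the per-endpoint model would then simply not return a match for those hints.<o:p></o:p></p>
<p class="MsoNormal"><o:p> </o:p></p>
<p class="MsoNormal">hints->ep_attr->rx_ctx_cnt = FI_SHARED_CONTEXT; // ask the provider for one shared receive context<o:p></o:p></p>
<p class="MsoNormal">fi_getinfo(FI_VERSION(1, 0), node, service, 0, hints, &info); // no match means the provider cannot do it<o:p></o:p></p>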
<p class="MsoNormal"><o:p> </o:p></p>
<p class="MsoNormal">Assuming I am not too far off in the woods here, how is this shared/non-shared approach to receives communicated in the API?<o:p></o:p></p>
<p class="MsoNormal">Thanks,<o:p></o:p></p>
<p class="MsoNormal">-reese<o:p></o:p></p>
<p class="MsoNormal"><o:p> </o:p></p>
</div>
</div>
</body>
</html>