<html>
<body>
<font size=3>At 04:43 AM 1/6/2005, Diego Crupnicoff wrote:<br>
</font><blockquote type=cite class=cite cite="">
<font face="arial" size=2 color="#0000FF">I feel like we are talking
about different things here:<br>
</font><font size=3> <br>
</font><font face="arial" size=2 color="#0000FF">The ***IP*** MTU is
relevant for IPoIB performance because it determines the number of times
that you are going to be hit by the per-packet overhead of the ***host***
networking stack. My point was that the ***IP MTU*** will not be tied to
the ***IB*** MTU if a connected mode IPoIB (or SDP) is used instead of
the current IPoIB that uses IB UD transport service. The IB MTU would
then be irrelevant to this discussion.<br>
</font><font size=3> <br>
</font><font face="arial" size=2 color="#0000FF">As for the eventual 2G
***IP*** MTU limit, it still sounds more than reasonable to me. I
wouldn't mind if a 10TB file gets split into some IP packets up to
2GB?!?!? each.</font></blockquote><br>
Keep in mind that IP has a limit on its datagram size (for both normal
and jumbo datagrams) that is far below 2GB. IP datagrams are just that:
datagrams. Large messages are expected to use segmentation and
reassembly (SAR) across a set of datagrams to ensure forward progress
with minimal impact on overall performance in the event of a
transmission error. <br><br>
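As a rough, illustrative sketch of that point (taking IPv4 as the example;
the figures below are plain arithmetic, not measurements):<br>
<pre>
/*
 * Illustrative sketch only, using IPv4: the IPv4 Total Length field is
 * 16 bits, so a single datagram tops out at 65535 bytes no matter how
 * large the link-level message limit is.  A big transfer is therefore
 * always carried as many datagrams, and SAR across those datagrams is
 * what limits the damage of a single transmission error.
 */
#include &lt;stdio.h&gt;

int main(void)
{
    const unsigned long long transfer = 10ULL * 1024 * 1024 * 1024 * 1024; /* a 10TB file */
    const unsigned long long max_ipv4_datagram = 65535;                    /* 2^16 - 1    */

    unsigned long long datagrams =
        (transfer + max_ipv4_datagram - 1) / max_ipv4_datagram;

    printf("A 10TB transfer is at least %llu IPv4 datagrams,\n", datagrams);
    printf("so a transmission error costs at most one datagram's worth of data.\n");
    return 0;
}
</pre>
<br>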
<blockquote type=cite class=cite cite="">
<font face="arial" size=2 color="#0000FF">(With the exception of the UD
transport service where IB messages are limited to be single packet), the
choice of ***IB*** MTU and its impact on performance is a completely
unrelated issue. IB messages are split into packets and reassembled by
the HCA HW. So the per-IB-message overhead of the SW stack is independent
of the IB MTU. The choice of IB MTU may indeed affect performance for
other reasons but it is not immediately obvious that the largest
available IB MTU is the best option for all cases. For example, latency
optimization of small high priority packets under load may benefit from
smaller IB MTUs (e.g. 256).</font></blockquote><br>
That case is best handled by VL arbitration. Changing the IB MTU to 256
for a UD-based implementation would violate the IP minimum datagram size
requirement. <br><br>
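A small sketch of the arithmetic behind that (it assumes the 4-byte IPoIB
encapsulation header from the IETF IPoIB drafts and the 576-byte minimum
datagram size that RFC 791 requires hosts to accept):<br>
<pre>
/*
 * Sketch of the arithmetic, not a statement about any particular stack.
 * Assumption: UD IPoIB carries one IP datagram per single-packet UD
 * message, behind a 4-byte IPoIB encapsulation header (per the IETF
 * IPoIB drafts), so the usable IP MTU is the IB MTU minus 4.  RFC 791
 * requires hosts to accept datagrams of at least 576 bytes.
 */
#include &lt;stdio.h&gt;

int main(void)
{
    const int ipoib_hdr = 4;          /* assumed IPoIB encapsulation header */
    const int ipv4_min  = 576;        /* RFC 791 minimum datagram size      */
    const int ib_mtus[] = { 256, 512, 1024, 2048, 4096 };

    for (int i = 0; i != 5; i++) {
        int ip_mtu = ib_mtus[i] - ipoib_hdr;
        printf("IB MTU %4d gives a UD IPoIB IP MTU of %4d bytes (%s)\n",
               ib_mtus[i], ip_mtu,
               ip_mtu &lt; ipv4_min ? "below the 576-byte minimum" : "OK");
    }
    return 0;
}
</pre>
<br>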
Mike<br><br>
<blockquote type=cite class=cite cite=""><font size=3> <br>
</font><font face="arial" size=2 color="#0000FF">Diego<br>
</font><font size=3> <br>
</font>
<dl>
<dd><font face="tahoma" size=2>-----Original Message-----<br>
<dd><b>From:</b> Stephen Poole
[<a href="mailto:spoole@lanl.gov">mailto:spoole@lanl.gov</a>] <br>
<dd><b>Sent:</b> Thursday, January 06, 2005 5:45 AM<br>
<dd><b>To:</b> Diego Crupnicoff<br>
<dd><b>Cc:</b> 'openib-general@openib.org'<br>
<dd><b>Subject:</b> RE: [openib-general] ip over ib throughtput<br><br>
</font>
<dd><font size=3>Have you done any "load" analysis of a 2K vs.
4K MTU? Your analogy of having 2G as a total message size is potentially
flawed. You seem to assume that 2G is the end-all in size; it is not.
What about when you want to use IB (down the road) for files in the
1-10TB range? Granted, we can live with 2G, but it is not some nirvana
number. Second, the 2G limit on message sizes only determines the
upper bound on overall size; I could send 2G at a 32-byte MTU. So the
question is: how much less of a system load/impact would a 4K MTU be over
a 2K MTU? Remember, even Ethernet finally decided to go to Jumbo Frames;
why? System impact, and more. Remember HIPPI/GSN: the MTU was 64K; the
reason was system impact. The numbers I have seen running IPoIB really
impact the system.<br><br>
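<dd>(A rough, back-of-the-envelope comparison for the 2K vs. 4K question;
the packet counts are plain arithmetic, and the per-packet host cost is
assumed roughly constant rather than measured:)<br>
</font><pre>
/*
 * Rough illustration only: packets needed to move 1GB at various MTUs.
 * If the per-packet host cost is roughly constant, halving the packet
 * count roughly halves that component of the system load; the absolute
 * cost is deliberately left out because it is hardware/stack dependent.
 */
#include &lt;stdio.h&gt;

int main(void)
{
    const unsigned long long transfer = 1024ULL * 1024 * 1024;  /* 1GB */
    const unsigned int mtus[] = { 2048, 4096, 9000, 65536 };    /* 2K, 4K, jumbo Ethernet, HIPPI-class */

    for (int i = 0; i != 4; i++) {
        unsigned long long packets = (transfer + mtus[i] - 1) / mtus[i];
        printf("MTU %5u: %7llu packets per GB\n", mtus[i], packets);
    }
    return 0;
}
</pre><font size=3>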
<dd>Steve...<br><br>
<dd>At 10:38 AM -0800 1/5/05, Diego Crupnicoff wrote:<br>
</font><blockquote type=cite class=cite cite="">
<dd><font size=2>Note however that the relevant IB limit is the max
***message size*** which happens to be equal to the ***IB*** MTU for the
current IPoIB (that runs on top of IB UD transport service where IB
messages are limited to a single packet).</font><font size=3><br>
</font>
<dd><font size=2>A connected mode IPoIB (that runs on top of IB RC/UC
transport service) would allow IB messages up to 2GB long. That will
allow for much larger (effectively as large as you may ever dream of)
***IP*** MTUs, regardless of the underlying IB
MTU.</font><font size=3><br>
</font>
<dd><font size=2>Diego</font><font size=3><br>
</font>
<dd><font size=2>> -----Original Message-----</font><font size=3><br>
</font>
<dd><font size=2>> From: Hal Rosenstock
[<a href="mailto:halr@voltaire.com">mailto:halr@voltaire.com</a>]</font>
<font size=3><br>
</font>
<dd><font size=2>> Sent: Wednesday, January 05, 2005 2:21
PM</font><font size=3><br>
</font>
<dd><font size=2>> To: Peter Buckingham</font><font size=3><br>
</font>
<dd><font size=2>> Cc:
openib-general@openib.org</font><font size=3><br>
</font>
<dd><font size=2>> Subject: Re: [openib-general] ip over ib
throughtput</font><font size=3><br>
</font>
<dd><font size=2>></font><font size=3><br>
</font>
<dd><font size=2>></font><font size=3><br>
</font>
<dd><font size=2>> On Wed, 2005-01-05 at 12:23, Peter Buckingham
wrote:</font><font size=3><br>
</font>
<dd><font size=2>> > stupid question: why are we limited to a 2K
MTU for IPoIB?</font><font size=3><br>
</font>
<dd><font size=2>></font><font size=3><br>
</font>
<dd><font size=2>> The IB max MTU is 4K. The current HCAs support a
max MTU of 2K.</font><font size=3><br>
</font>
<dd><font size=2>></font><font size=3><br>
</font>
<dd><font size=2>> -- Hal</font><font size=3><br>
</font>
<dd><font size=2>></font><font size=3><br>
</font>
</blockquote><br><br>
<dd><pre>--
</pre>
<dd><font face="Courier New, Courier" size=3>Steve Poole (spoole@lanl.gov)&nbsp;&nbsp;&nbsp;&nbsp;Office: 505.665.9662<br>
<dd>Los Alamos National Laboratory&nbsp;&nbsp;&nbsp;&nbsp;Cell: 505.699.3807<br>
<dd>CCN - Special Projects / Advanced Development&nbsp;&nbsp;&nbsp;&nbsp;Fax: 505.665.7793<br>
<dd>P.O. Box 1663, MS B255<br>
<dd>Los Alamos, NM 87545<br>
<dd>03149801S</font><br><br>
</dl>_______________________________________________<br>
openib-general mailing list<br>
openib-general@openib.org<br>
<a href="http://openib.org/mailman/listinfo/openib-general" eudora="autourl">
http://openib.org/mailman/listinfo/openib-general</a><br><br>
To unsubscribe, please visit
<a href="http://openib.org/mailman/listinfo/openib-general" eudora="autourl">
http://openib.org/mailman/listinfo/openib-general</a>
</blockquote></body>
</html>