On Wed, Mar 12, 2014 at 11:50 PM, Hal Rosenstock <hal.rosenstock@gmail.com> wrote:
> Since you didn't mention PortXmitDiscards, does that mean these are 0?
> Assuming so, PortXmitWait indicates some congestion, but it has not risen to
> the level of dropping packets. It's the rate of increase of the XmitWait
> counter that matters rather than the absolute number, so if you want to chase
> this, focus on the most congested ports.

Yes, most are 0. Two or three ports have XmitDiscards, but these point to
nodes that are in maintenance with known issues.
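To track the rate rather than the absolute value, I'm thinking of something
like the following rough, untested sketch (file names and the interval are
placeholders). It diffs two "ibqueryerrors --data" captures taken a known
interval apart, matching the "GUID 0x... port N: PortXmitWait == ..." line
format shown in my capture further down, and ranks ports by XmitWait growth
per second:

#!/usr/bin/env python
# Sketch: rank ports by PortXmitWait growth between two "ibqueryerrors --data"
# captures taken INTERVAL seconds apart. Line format assumed to match the
# capture quoted below in this thread.
import re
import sys

INTERVAL = 10.0  # seconds between the two captures (placeholder)

LINE = re.compile(r'GUID (0x[0-9a-fA-F]+) port (\d+):.*PortXmitWait == (\d+)')

def xmitwait(path):
    counters = {}
    with open(path) as f:
        for line in f:
            m = LINE.search(line)
            if m:
                counters[(m.group(1), int(m.group(2)))] = int(m.group(3))
    return counters

before = xmitwait(sys.argv[1])  # first capture
after = xmitwait(sys.argv[2])   # second capture, INTERVAL seconds later

deltas = sorted(((after[k] - before.get(k, 0), k) for k in after), reverse=True)
for delta, (guid, port) in deltas[:20]:
    print("%s port %2d: +%d XmitWait (%.0f/s)" % (guid, port, delta, delta / INTERVAL))

That should at least tell us which ports to look at first.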
> Since the old tool didn't report XmitWait counters, it's hard to know
> whether this is the same as before or not unless you did this manually.
>
> Was the routing previously fat tree?

Yes.
Here's a PDF of the physical topology:
https://dl.dropboxusercontent.com/u/2292440/CQ-UL_IB_topology.pdf
> Are there any other fat tree related log messages in the OpenSM log?

Nothing specific to Fat Tree. Some links going up or down (node maintenance).
But there are a lot of MAD errors from a SwitchInfo request:

Mar 13 09:50:04 909147 [4FAFC700] 0x01 -> log_rcv_cb_error: ERR 3111: Received MAD with error status = 0x1C
            SubnGetResp(SwitchInfo), attr_mod 0x0, TID 0x73c86e46
            Initial path: 0,1,33,30,28  Return path: 0,10,32,13,28

80 of these messages occur periodically, filling the logs. smpquery on the
paths shows that these all point to the Sun QNEM switches (80 I4 chips). I did
find a reference to this on the linux-rdma list:
http://permalink.gmane.org/gmane.linux.drivers.rdma/7988. I assume the switch
is not reporting its capabilities correctly. Can this have an impact?
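For what it's worth, here is how I read that status value, assuming the
standard common MAD status field layout (bit 0 busy, bit 1 redirection
required, bits 2-4 invalid field code); treat this as my interpretation of the
spec rather than gospel:

# Decode of "error status = 0x1C" under the common MAD status layout
# (bit 0: busy, bit 1: redirect, bits 2-4: invalid field code).
status = 0x1C

busy = status & 0x1
redirect = (status >> 1) & 0x1
invalid_field = (status >> 2) & 0x7

codes = {
    0: "no invalid fields",
    1: "bad version",
    2: "method not supported",
    3: "method/attribute combination not supported",
    7: "invalid value in attribute or attribute modifier field(s)",
}

print("busy=%d redirect=%d code=%d: %s" %
      (busy, redirect, invalid_field, codes.get(invalid_field, "reserved")))

That comes out as code 7 (invalid value in attribute or attribute modifier),
which would be consistent with the switch mishandling the SwitchInfo attribute
as described in the thread linked above.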
> Is there any fat tree configuration of compute and/or I/O nodes?

We're specifying the root_guid and cn_guid files in opensm.conf:

root_guid_file /etc/rdma/guids.txt
cn_guid_file /etc/rdma/cn-guids.txt

We are not using the I/O nodes configuration.
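In case it's relevant, both files are just one GUID per line, along the lines
of (placeholder values, not our real GUIDs):

0x0002c902004a0001
0x0002c902004a0002
...

guids.txt holds the roots (the 36 rank-0 switches listed below) and
cn-guids.txt the compute node ports.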
> Any idea on what is the traffic pattern? Are you running MPI?

We have Lustre file systems over IB and MPI jobs sharing the same IB network.
When I gathered the counters, most of the compute nodes were busy.

Thanks,
Florent
<div dir="ltr"><div> </div><div>-- Hal</div></div><div class="gmail_extra"><br><br><div class="gmail_quote"><div><div class="h5">On Wed, Mar 12, 2014 at 8:17 PM, Florent Parent <span dir="ltr"><<a href="mailto:florent.parent@calculquebec.ca" target="_blank">florent.parent@calculquebec.ca</a>></span> wrote:<br>
</div></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-color:rgb(204,204,204);border-left-style:solid;padding-left:1ex"><div><div class="h5"><div dir="ltr"><div><br>
</div>Hello IB users,<br><div><div><br></div><div>We recently migrated our opensm from 3.2.6 to 3.3.17. In this upgrade, we moved to CentOS6.5 with the stock RDMA and infiniband-diags_1.5.12-5., and running opensm 3.3.17. Routing is FatTree:</div>
>> General fabric topology info
>> ============================
>> - FatTree rank (roots to leaf switches): 3
>> - FatTree max switch rank: 2
>> - Fabric has 966 CAs, 966 CA ports (603 of them CNs), 186 switches
>> - Fabric has 36 switches at rank 0 (roots)
>> - Fabric has 64 switches at rank 1
>> - Fabric has 86 switches at rank 2 (86 of them leafs)
>>
>> Now to the question: ibqueryerrors 1.5.12 is reporting high PortXmitWait
>> values throughout the fabric. We did not see this counter before (it was
>> not reported by the older ibqueryerrors.pl).
>> To give an idea of the scale of the counters, here's a capture of
>> ibqueryerrors --data on one specific I4 switch, 10 seconds after clearing
>> the counters (-k -K):
>>
>> GUID 0x21283a83b30050 port 4: PortXmitWait == 2932676 PortXmitData == 90419517 (344.923MB) PortRcvData == 1526963011 (5.688GB)
>> GUID 0x21283a83b30050 port 5: PortXmitWait == 3110105 PortXmitData == 509580912 (1.898GB) PortRcvData == 13622 (53.211KB)
>> GUID 0x21283a83b30050 port 6: PortXmitWait == 8696397 PortXmitData == 480870802 (1.791GB) PortRcvData == 17067 (66.668KB)
>> GUID 0x21283a83b30050 port 7: PortXmitWait == 1129568 PortXmitData == 126483825 (482.497MB) PortRcvData == 24973385 (95.266MB)
>> GUID 0x21283a83b30050 port 8: PortXmitWait == 29021 PortXmitData == 19444902 (74.176MB) PortRcvData == 84447725 (322.143MB)
>> GUID 0x21283a83b30050 port 9: PortXmitWait == 4945130 PortXmitData == 161911244 (617.642MB) PortRcvData == 27161 (106.098KB)
>> GUID 0x21283a83b30050 port 10: PortXmitWait == 16795 PortXmitData == 35572510 (135.698MB) PortRcvData == 681174731 (2.538GB)
>> ... (this goes on for every active port)
>>
>> We are not observing any failures, so I suspect I just need help
>> interpreting these numbers. Do I need to be worried?
>>
>> Cheers,
>> Florent
>> _______________________________________________
>> Users mailing list
>> Users@lists.openfabrics.org
>> http://lists.openfabrics.org/cgi-bin/mailman/listinfo/users