[ofa-general] Retry count error with ipath on OFED-1.3

Nico Mittenzwey nico.mittenzwey at s2001.tu-chemnitz.de
Thu May 15 07:31:40 PDT 2008


Hi,

We have a problem with our QLogic InfiniPath PE-800 (rev 02), OFED 1.3 
and MPI. Running simple MPI jobs such as the OSU MPI bandwidth test 
between two nodes results in a retry count error (see the end of this 
mail). We have tried different MPI implementations (the Open MPI shipped 
with OFED as well as self-compiled Open MPI and MVAPICH) but always get 
this error.
With OFED 1.2, or with the QLogic InfiniPath driver (which includes 
OFED 1.2), we don't get any errors.
The system runs Scientific Linux 5.1 with kernel 2.6.18-8.1.3.el5 (for 
OFED 1.2) or 2.6.18-53.1.14.el5 (for OFED 1.3). There is also a 
Mellanox MT25204 HCA in the system, which works perfectly; removing it 
doesn't help with the ipath problem.
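
In case it helps with diagnosis, standard OFED diagnostic tools along 
these lines can be used to confirm link state and error counters on 
both HCAs (commands as shipped with OFED; output not included here):

    ibstat            # port state, rate and LID for each HCA
    ibv_devinfo       # device attributes as seen by libibverbs
    ibcheckerrors     # scan the fabric for port error counters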

Since we would like to stay up to date, we want to use OFED 1.3.

Has anyone seen the same error and found a solution?

Thanks & regards
Nico



OFED 1.3 InfiniPath error:
 ># OSU MPI Bandwidth Test v3.1
 ># Size        Bandwidth (MB/s)
 >1                         0.17
 >2                         0.39
 >4                         0.66
 >8                         1.80
 >16                        2.53
 >32                        5.11
 >64                        8.80
 >128                      23.09
 >256                      43.65
 >512                      84.42
 >1024                    151.63
 >[0,1,0][btl_openib_component.c:1338:btl_openib_component_progress] 
 >from compute-6-7 to: compute-6-8 error polling HP CQ with status RETRY 
 >EXCEEDED ERROR status number 12 for wr_id 185705200 opcode 1
 >-------------------------------------------------------------------------- 

 >The InfiniBand retry count between two MPI processes has been
 >exceeded.  "Retry count" is defined in the InfiniBand spec 1.2
 >(section 12.7.38):
 >
 >    The total number of times that the sender wishes the receiver to
 >    retry timeout, packet sequence, etc. errors before posting a
 >    completion error.
 >
 >This error typically means that there is something awry within the
 >InfiniBand fabric itself.  You should note the hosts on which this
 >error has occurred; it has been observed that rebooting or removing a
 >particular host from the job can sometimes resolve this issue.
 >
 >Two MCA parameters can be used to control Open MPI's behavior with
 >respect to the retry count:
 >
 >* btl_openib_ib_retry_count - The number of times the sender will
 >  attempt to retry (defaulted to 7, the maximum value).
 >
 >* btl_openib_ib_timeout - The local ACK timeout parameter (defaulted
 >  to 10).  The actual timeout value used is calculated as:
 >
 >     4.096 microseconds * (2^btl_openib_ib_timeout)
 >
 >  See the InfiniBand spec 1.2 (section 12.7.34) for more details.
 >-------------------------------------------------------------------------- 

 >mpirun noticed that job rank 1 with PID 16883 on node compute-6-8 
 >exited on signal 15 (Terminated).
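
For completeness, the two MCA parameters from the help text above can 
be raised on the mpirun command line. The value below is only 
illustrative, not a fix: with the default btl_openib_ib_timeout of 10, 
each retry waits 4.096 us * 2^10 (about 4.2 ms); with 14 it is about 
67 ms, so raising it masks rather than repairs whatever is wrong in the 
fabric. Host names are taken from the error output; the benchmark 
binary name may differ depending on how the OSU benchmarks were 
installed:

    mpirun --mca btl_openib_ib_timeout 14 -np 2 \
           -H compute-6-7,compute-6-8 ./osu_bw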


