[ofa-general] Has anyone seen these errors in openmpi running linpack with big N problem sizes ?

Rafael David Tinoco Rafael.Tinoco at Sun.COM
Mon Sep 14 11:47:15 PDT 2009


This seems to be happening only with a high number of nodes and big problem sizes (16 GB per host).


-----Original Message-----
From: Rafael David Tinoco [mailto:Rafael.Tinoco at Sun.COM] 
Sent: Monday, September 14, 2009 3:06 PM
To: 'Rafael David Tinoco'
Subject: Has anyone seen these errors in openmpi running linpack with big N problem sizes ?

Hello, I'm getting weird MPI problems running LINPACK with big N problem sizes.

Has anyone seen this?

I'm running:

2.6.18-128.el5
+ OFED 1.4.1

on all my nodes; testing LINPACK with 85 nodes so far.

My nodes are in one C48, in a mesh topology.

Tks

Tinoco

--------------------------------------------------------------------------
The InfiniBand retry count between two MPI processes has been
exceeded.  "Retry count" is defined in the InfiniBand spec 1.2
(section 12.7.38):

    The total number of times that the sender wishes the receiver to
    retry timeout, packet sequence, etc. errors before posting a
    completion error.

This error typically means that there is something awry within the
InfiniBand fabric itself.  You should note the hosts on which this
error has occurred; it has been observed that rebooting or removing a
particular host from the job can sometimes resolve this issue.

Two MCA parameters can be used to control Open MPI's behavior with
respect to the retry count:

* btl_openib_ib_retry_count - The number of times the sender will
  attempt to retry (defaulted to 7, the maximum value).
* btl_openib_ib_timeout - The local ACK timeout parameter (defaulted
  to 10).  The actual timeout value used is calculated as:

     4.096 microseconds * (2^btl_openib_ib_timeout)

  See the InfiniBand spec 1.2 (section 12.7.34) for more details.

Below is some information about the host that raised the error and the
peer to which it was connected:

  Local host:   b03n10
  Local device: mlx4_0
  Peer host:    b02n05

You may need to consult with your system administrator to get this
problem fixed.
--------------------------------------------------------------------------
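
From the message above, btl_openib_ib_retry_count already defaults to its
maximum of 7, so the only knob left on our side seems to be the timeout.
If anyone wants to compare, something like the following should raise the
local ACK timeout (just a sketch; <ranks>, <hostfile> and ./xhpl are
placeholders for whatever the actual job uses):

    mpirun --mca btl_openib_ib_timeout 20 \
           --mca btl_openib_ib_retry_count 7 \
           -np <ranks> -hostfile <hostfile> ./xhpl

With the default of 10, the timeout works out to 4.096 us * 2^10, about
4.2 ms per retry; a value of 20 gives about 4.3 s, which can paper over
transient congestion but will not fix a genuinely bad link.
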
--------------------------------------------------------------------------
mpiexec has exited due to process rank 499 with PID 5570 on
node b03n10 exiting without calling "finalize". This may
have caused other processes in the application to be
terminated by signals sent by mpiexec (as reported here).
--------------------------------------------------------------------------
[[59611,1],499][btl_openib_component.c:2929:handle_wc] from b03n10 to: b02n05 error polling LP CQ with status RETRY EXCEEDED ERROR
status number 12 for wr_id 348242776 opcode 0  vendor error 129 qp_idx 3
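
In case it helps map that last line to verbs terms: "status number 12" is
IBV_WC_RETRY_EXC_ERR in libibverbs, i.e. the HCA gave up after exhausting
its retries without an ACK from the peer. Below is a minimal sketch of how
a generic verbs consumer would see the same condition while polling its CQ
(an illustration only, not Open MPI's actual handle_wc code):

#include <stdio.h>
#include <infiniband/verbs.h>

/* Drain one completion from a CQ and report retry-exceeded errors the
 * way the log above does.  Illustration only. */
static int check_one_completion(struct ibv_cq *cq)
{
    struct ibv_wc wc;
    int n = ibv_poll_cq(cq, 1, &wc);    /* < 0: error, 0: CQ empty */

    if (n <= 0)
        return n;

    if (wc.status == IBV_WC_RETRY_EXC_ERR) {
        /* Status 12: the sender retried btl_openib_ib_retry_count times
         * without getting an ACK within the local ACK timeout. */
        fprintf(stderr, "CQ error: status %d (%s), wr_id %llu, vendor err %u\n",
                wc.status, ibv_wc_status_str(wc.status),
                (unsigned long long) wc.wr_id, wc.vendor_err);
        return -1;
    }

    return 1;   /* one good completion consumed */
}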

Rafael David Tinoco - Sun Microsystems
Systems Engineer - High Performance Computing
Rafael.Tinoco at Sun.COM - 55.11.5187.2194




