[ewg] Need help for Infiniband optimisation for our cluster (MTU...)

Mike Heinz michael.heinz at qlogic.com
Tue Dec 7 07:21:29 PST 2010


When you say "connected mode" you referring to ipoib or your MPI configuration? You really don't want to use ipoib for HPC applications. What MPI are you using?  

For MPI, my personal experience is that Open MPI is sometimes more reliable, but MVAPICH-1 offers the best performance.
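
If you want numbers from your own fabric instead of my anecdotes, a two-rank ping-pong is enough to compare stacks (or connected vs. datagram, if you do end up pushing MPI over IPoIB). A rough sketch, not a QLogic tool -- compile with mpicc and run one rank on each of two nodes; the 1 MiB message size is just my assumption about bandwidth-bound CFD exchanges:

/* Two-rank MPI ping-pong bandwidth sketch.
 * Compile:  mpicc -o pingpong pingpong.c
 * Run one rank on each of two nodes:  mpirun -np 2 ./pingpong */
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    const int iters = 100;
    const int msg_size = 1 << 20;  /* 1 MiB; assumed typical of CFD halo exchange */
    char *buf;
    int rank, i;
    double t0 = 0.0, t1, one_way;
    MPI_Status st;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    buf = malloc(msg_size);

    /* Iteration -1 is an untimed warm-up so connection setup isn't measured. */
    for (i = -1; i < iters; i++) {
        if (i == 0)
            t0 = MPI_Wtime();
        if (rank == 0) {
            MPI_Send(buf, msg_size, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
            MPI_Recv(buf, msg_size, MPI_CHAR, 1, 0, MPI_COMM_WORLD, &st);
        } else if (rank == 1) {
            MPI_Recv(buf, msg_size, MPI_CHAR, 0, 0, MPI_COMM_WORLD, &st);
            MPI_Send(buf, msg_size, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
        }
    }
    t1 = MPI_Wtime();

    if (rank == 0) {
        one_way = (t1 - t0) / (2.0 * iters);  /* each iteration is a round trip */
        printf("%d bytes: %.1f us one-way, %.1f MB/s\n",
               msg_size, one_way * 1e6, msg_size / one_way / 1e6);
    }

    free(buf);
    MPI_Finalize();
    return 0;
}

On DDR you should see well over 1000 MB/s through verbs; if the numbers look more like TCP-over-IPoIB speeds, your MPI is not actually using the verbs transport.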

-----Original Message-----
From: ewg-bounces at lists.openfabrics.org [mailto:ewg-bounces at lists.openfabrics.org] On Behalf Of giggzounet
Sent: Tuesday, December 07, 2010 9:01 AM
To: ewg at lists.openfabrics.org
Subject: [ewg] Need help for Infiniband optimisation for our cluster (MTU...)

Hi,

I'm new to this list. We have a small cluster in our laboratory:
- master node with 8 cores
- 8 compute nodes with 12 cores each
- Mellanox MTS3600R DDR InfiniBand switch

These machines run an OSCAR cluster on CentOS 5.5, with the OFED 1.5.1
packages installed. We use the default InfiniBand configuration, so
IPoIB is running in connected mode.

Our cluster is used to solve CFD (Computational Fluid Dynamics)
problems. I'm trying to optimize the InfiniBand network, so I have
several questions:

- Is this the right mailing list to ask? (If not, where should I post?)

- Is there a how-to on InfiniBand optimisation?

- CFD computations need a lot of bandwidth, and there is a lot of data
exchange through MPI (we are using Intel MPI). Does the InfiniBand mode
(connected or datagram) have an influence in this case? What is the
"best" MTU for those computations?


Best regards,
Guillaume

_______________________________________________
ewg mailing list
ewg at lists.openfabrics.org
http://lists.openfabrics.org/cgi-bin/mailman/listinfo/ewg