[ewg] Need help for Infiniband optimisation for our cluster (MTU...)
giggzounet at gmail.com
Tue Dec 7 07:46:02 PST 2010
I'm referring to IPoIB.
We use Intel MPI by default, configured with "shm:ofa", although OpenMPI
is installed as well.
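With "shm:ofa", Intel MPI uses shared memory within a node and the OFA (verbs) fabric between nodes, so MPI traffic goes over native InfiniBand rather than IPoIB. A minimal sketch of how that selection is typically made with Intel MPI 4.x (the hostfile name, process count, and solver binary are placeholders):

```shell
# Select shared memory intra-node + OFA verbs inter-node (Intel MPI 4.x)
export I_MPI_FABRICS=shm:ofa

# Print fabric-selection diagnostics at startup to confirm which
# transport is actually used on each node
export I_MPI_DEBUG=2

# Launch across the nodes listed in ./hosts (placeholder file name)
mpirun -f ./hosts -n 96 ./cfd_solver
```

With I_MPI_DEBUG set, the startup banner reports the chosen data transfer modes, which is an easy way to verify that MPI is not silently falling back to TCP over IPoIB.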
On 07/12/2010 16:21, Mike Heinz wrote:
> When you say "connected mode", are you referring to IPoIB or to your MPI configuration? You really don't want to use IPoIB for HPC applications. Which MPI are you using?
> For MPI - my personal experience is that OpenMPI is sometimes more reliable, but MVAPICH-1 offers the best performance.
> -----Original Message-----
> From: ewg-bounces at lists.openfabrics.org [mailto:ewg-bounces at lists.openfabrics.org] On Behalf Of giggzounet
> Sent: Tuesday, December 07, 2010 9:01 AM
> To: ewg at lists.openfabrics.org
> Subject: [ewg] Need help for Infiniband optimisation for our cluster (MTU...)
> I'm new to this list. We have a small cluster in our laboratory:
> - master 8 cores
> - 8 nodes with 12 cores
> - DDR infiniband switch Mellanox MTS3600R
> On these machines we run an OSCAR cluster with CentOS 5.5 and the OFED
> 1.5.1 packages installed. The default InfiniBand configuration is used,
> so IPoIB is running in connected mode.
> Our cluster is used to solve CFD (Computational Fluid Dynamics)
> problems. I'm trying to optimize the InfiniBand network, so I
> have several questions:
> - Is this the right mailing list to ask? (If not, where should I post?)
> - Is there a how-to on InfiniBand optimisation?
> - CFD computations need a lot of bandwidth; there is a lot of data
> exchange through MPI (we are using Intel MPI). Does the InfiniBand mode
> (connected or datagram) have an influence in this case? What is the
> "best" MTU for these computations?
> Best regards,
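Note that the IPoIB mode and MTU only affect TCP/IP traffic carried over the ib0 interface; with "shm:ofa", MPI messages go over verbs directly and bypass IPoIB entirely. The mode can be inspected and switched through sysfs; a sketch, assuming the interface is named ib0 (connected mode allows an MTU up to 65520, while datagram mode is limited by the IB link MTU, typically 2044 bytes of payload for a 2048-byte link MTU):

```shell
# Show the current IPoIB transport mode: "connected" or "datagram"
cat /sys/class/net/ib0/mode

# Switch to datagram mode (some kernels require the interface
# to be brought down before the mode can be changed)
ifconfig ib0 down
echo datagram > /sys/class/net/ib0/mode

# Datagram mode caps the MTU at the IB link MTU minus the IPoIB
# header; connected mode would allow up to 65520
ifconfig ib0 mtu 2044 up
```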