[ofa-general] Infiniband bandwidth

Ramiro Alba Queipo raq at cttc.upc.edu
Thu Oct 2 01:43:11 PDT 2008


On Wed, 2008-10-01 at 14:21 -0700, Michael Krause wrote:
> At 09:08 AM 10/1/2008, Ramiro Alba Queipo wrote:
> > Hi all,
> > 
> > We have an InfiniBand cluster of 22 nodes with 20 Gbps Mellanox
> > MHGS18-XTC cards, and I tried to run some network performance tests,
> > both to check the hardware and to clarify concepts.
> > 
> > Starting from the theoretical peak according to the InfiniBand card
> > (in my case 4X DDR => 20 Gbits/s => 2.5 Gbytes/s), we have some limits:
> > 
> > 1) Bus type: PCIe 8x => 250 Mbytes/s per lane => 250 * 8 = 2 Gbytes/s
> > 
> > 2) According to a thread on the Open MPI users mailing list (???):
> > 
> >   The 16 Gbit/s number is the theoretical peak; IB is coded 8b/10b, so
> >   out of the 20 Gbit/s, 16 is what you get. On SDR this number is
> >   (of course) 8 Gbit/s achievable (which is ~1000 MB/s), and I've
> >   seen well above 900 MB/s with MPI (this on 8x PCIe, a 2x margin).
> >   
> >   Is this true?
> 
> IB uses 8b/10b encoding.  This results in a 20% overhead on every
> frame.  Further, the IB protocol (header, CRC, flow control credits,
> etc.) will consume additional bandwidth; the amount will vary with
> workload and traffic patterns.  Also, any fabric can experience
> congestion which may reduce throughput for any given data flow.
> 
> PCIe uses 8b/10b encoding for both 2.5 GT/s and 5.0 GT/s signaling (the
> next-generation signaling is scrambling based, so it provides 2x the
> data bandwidth with significantly less encoding overhead).  It also has
> protocol overheads conceptually similar to IB which will consume
> additional bandwidth (keep in mind that many volume chipsets only
> support a 256B transaction size, so a single IB frame may require 8-16
> PCIe transactions to process).  There will also be application / device
> driver control messages between the host and the I/O device which will
> consume additional bandwidth.
> 
> Also keep in mind that the actual application bandwidth may be further
> gated by the memory subsystem, the I/O-to-memory latency, etc. so
> while the theoretical bandwidths may be quite high, they will be
> constrained by the interactions and the limitations within the overall
> hardware and software stacks.  
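
Just to check that I am following the arithmetic, this is the
back-of-envelope Python sketch I am using. It only accounts for the
8b/10b encoding and ignores the headers, CRC and flow control credits
you mention, so the real numbers should come out lower:

  # Rough ceilings before any protocol overhead (my own sketch, not a measurement)
  ib_signal_gbps = 20.0                          # 4X DDR: 4 lanes * 5 Gb/s signaling
  ib_data_gbps = ib_signal_gbps * 8.0 / 10.0     # 8b/10b leaves 16 Gbit/s of data
  ib_data_MBps = ib_data_gbps * 1000.0 / 8.0     # -> 2000 MB/s

  pcie_signal_gbps = 8 * 2.5                     # PCIe x8 Gen1: 2.5 GT/s per lane
  pcie_data_gbps = pcie_signal_gbps * 8.0 / 10.0 # PCIe Gen1 also uses 8b/10b
  pcie_data_MBps = pcie_data_gbps * 1000.0 / 8.0 # -> 2000 MB/s (the 250 MB/s per lane figure)

  print(ib_data_MBps, pcie_data_MBps)            # both ~2000 MB/s before headers/CRC/credits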
> 
> 
> > 3) According to other comment in the same thread:
> > 
> >   The data throughput limit for 8x PCIe is ~12 Gb/s. The theoretical
> >   limit is 16 Gb/s, but each PCIe packet has a whopping 20 byte
> >   overhead. If the adapter uses 64 byte packets, then you see 1/3 of
> >   the throughput go to overhead.
> > 
> >   Could someone explain that to me?
> 
> DMA Read completions are often returned one cache line at a time while
> DMA Writes are often transmitted at the Max_Payload_Size of 256B (some
> chipsets do coalesce completions allowing up to the Max_Payload_Size
> to be returned).  Depending upon the mix of transactions required to
> move an IB frame, the overheads may seem excessive.
> 
> PCIe overheads vary with the transaction type, the flow control credit
> exchanges, CRC, etc.   It is important to keep these in mind when
> evaluating the solution.  
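
If I am reading the 64-byte comment correctly, the efficiency comes out
roughly like this. This is only a sketch: I am assuming ~20 bytes of
header/CRC per PCIe packet, as that thread said, and the exact overhead
is chipset dependent:

  # Payload efficiency of a PCIe transaction, assuming ~20 bytes of header/CRC per packet
  def pcie_efficiency(payload_bytes, overhead_bytes=20):
      return payload_bytes / float(payload_bytes + overhead_bytes)

  link_gbps = 16.0                               # x8 Gen1 after 8b/10b
  for payload in (64, 128, 256):
      eff = pcie_efficiency(payload)
      print("%3dB payload: %2.0f%% efficient -> %.1f Gb/s"
            % (payload, eff * 100, link_gbps * eff))
  #  64B payload: 76% efficient -> 12.2 Gb/s   (the ~12 Gb/s figure above)
  # 256B payload: 93% efficient -> 14.8 Gb/s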
> 
> > Then I got another comment about the matter:
> > 
> > The best uni-directional performance I have heard of for PCIe 8x IB
> > DDR is ~1,400 MB/s (11.2 Gb/s) with Lustre, which is about 55% of
> > the
> > theoretical 20 Gb/s advertised speed.
> > 
> > 
> > ---------------------------------------------------------------------
> > 
> > 
> > Now, I did some tests (the MPI used is Open MPI) with the following
> > results:
> > 
> > a) Using "Performance tests" from OFED 1.3.1
> >       
> >    ib_write_bw -a server ->  1347 MB/s
> > 
> > b) Using hpcc (2 cores on different nodes) -> 1157 MB/s
> >    (--mca mpi_leave_pinned 1)
> > 
> > c) Using "OSU Micro-Benchmarks" in "MPItests" from OFED 1.3.1
> > 
> >    1) 2 cores from different nodes
> > 
> >     - mpirun -np 2 --hostfile pool osu_bibw -> 2001.29 MB/s (bidirectional)
> >     - mpirun -np 2 --hostfile pool osu_bw -> 1311.31 MB/s
> > 
> >    2) 2 cores from the same node
> > 
> >     - mpirun -np 2  osu_bibw -> 2232 MB/s (bidirectional)
> >     - mpirun -np 2  osu_bw -> 2058 MB/s
> > 
> > The questions are:
> > 
> > - Are those results consistent with what they should be?
> > - Why are the tests with the two cores on the same node better?
> > - Shouldn't the bidirectional test be a bit higher?
> > - Why is hpcc so low?
> 
> You would need to provide more information about the system hardware,
> the fabrics, etc. to make any rational response.  There are many

We have Dell PowerEdge SC1435 nodes with two AMD Opteron 2350 processors
(2.0 GHz core frequency and 1.8 GHz integrated memory controller speed).
The fabric is built from 20 Gbps Mellanox MHGS18-XTC cards and a
Flextronics 24-port 4X DDR switch, with 3 meter Mellanox cables
(MCC4L30-003 4X microGiGaCN latch, 30 AWG).

> variables here and, as I noted above, one cannot just derate the
> hardware by a fixed percentage and conclude there is a real problem in
> the solution stack.  It is more complex than that.  The question you
> should ask is whether the micro-benchmarks you are executing are a
> realistic reflection of the real workload.  If not, then do any of
> these numbers

No, I don't think they are. My main intention is to understand what I
really have and why, and to check for link degradation. Keep in mind
that this is my first contact with InfiniBand issues, and before the end
of this year we will have 76 nodes (608 cores) with an InfiniBand
network that will be used both for computation and for data, using
NFS-RDMA.

Apart from our own tests, what tests would you use to check that a
cluster is ready?
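
To be concrete about what I mean by checking for link degradation,
something like the Python sketch below is what I had in mind. The node
names and the 1200 MB/s threshold are only placeholders, and it assumes
the server side of the perftest benchmark ("ib_write_bw -a") is already
running on every target node:

  import re
  import subprocess

  NODES = ["node01", "node02", "node03"]   # placeholder hostnames, not our real ones
  THRESHOLD_MBPS = 1200.0                  # placeholder limit for flagging a link

  for node in NODES:
      # run the client side of ib_write_bw against <node>; the server side
      # ("ib_write_bw -a") has to be running there already
      proc = subprocess.Popen(["ib_write_bw", "-a", node],
                              stdout=subprocess.PIPE, universal_newlines=True)
      out = proc.communicate()[0]
      # crude parse: take the largest decimal number in the report, which in
      # practice is the peak BW [MB/sec] column
      rates = [float(x) for x in re.findall(r"\d+\.\d+", out)]
      peak = max(rates) if rates else 0.0
      status = "OK" if peak >= THRESHOLD_MBPS else "CHECK LINK"
      print("%s: %.0f MB/s  %s" % (node, peak, status))

The idea would be to run this from one node against all the others and
compare the values, rather than to trust any single number.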

> matter at the end of the day, especially if the total time spent
> within the interconnect stacks is relatively small or bursty.
> 
> Mike 


