[ofa-general] Performance penalty of OFED 1.1 versus IBGD 1.8.2
Pavel Shamis (Pasha)
pasha at dev.mellanox.co.il
Wed Feb 28 06:12:40 PST 2007
Also, please run: mpirun_rsh -v
I want to check which version of mvapich you have.
Pavel Shamis (Pasha) wrote:
>> Pavel> Hi Roland,
>> >> I'm migrating from IBGD 1.8.2 (kernel 2.6.15.7) to OFED 1.1,
>> >> and saw some unpleasant performance drops when using OFED 1.1
>> >> (kernel 2.6.20.1 with included IB drivers). The main drop is in
>> >> throughput as measured by the OSU MPI bandwidth
>> >> benchmark. However, the latency for large packet sizes is also
>> >> worse (see results below). I tried with and without "options
>> >> ib_mthca msi_x=1" (using IBGD, disabling msi_x makes a
>> >> significant performance difference of approx. 10%). The IB card
>> >> is a Mellanox MHGS18-XT (PCIe/DDR Firmware 1.2.0) running on an
>> >> Opteron with nForce4 2200 Professional chipset.
>> >>
>> >> Does anybody have an explanation, or even better a solution, to
>> >> this issue?
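
(For reference: msi_x is an ib_mthca module parameter. One way to set it
persistently is a modprobe options line; the exact file location varies by
distribution, and on OFED restarting the stack should reload the driver
with the new value:)

    # /etc/modprobe.conf, or a file under /etc/modprobe.d/
    options ib_mthca msi_x=1

    # reload so the option takes effect (OFED init script)
    /etc/init.d/openibd restart
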
>>
>> Pavel> Please try to add the following mvapich parameter:
>> Pavel> VIADEV_DEFAULT_MTU=MTU2048
>>
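
(With mpirun_rsh, VIADEV_* parameters are passed as NAME=VALUE on the
command line before the binary; the node names and benchmark path below
are placeholders:)

    mpirun_rsh -np 2 node1 node2 VIADEV_DEFAULT_MTU=MTU2048 ./osu_bw
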
>> Thanks for the suggestion. Unfortunately, it didn't improve the simple
>> bandwidth results. Bi-directional bandwidth increased by 3%
>> though. Any more ideas?
> 3% is a good start :-)
> Please also try to add this one:
> VIADEV_MAX_RDMA_SIZE=4194304
>
> -Pasha
>
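
(Both parameters can be combined in a single run; node names and path are
again placeholders:)

    mpirun_rsh -np 2 node1 node2 \
        VIADEV_DEFAULT_MTU=MTU2048 \
        VIADEV_MAX_RDMA_SIZE=4194304 ./osu_bw
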
>>
>> Roland
>>
>>> ------------------------------------------------------------------------
>>>
>>> IBGD
>>> --------
>>>
>>> # OSU MPI Bandwidth Test (Version 2.1)
>>> # Size Bandwidth (MB/s)
>>> 1 0.830306
>>> 2 1.642710
>>> 4 3.307494
>>> 8 6.546477
>>> 16 13.161954
>>> 32 26.395154
>>> 64 52.913060
>>> 128 101.890547
>>> 256 172.227478
>>> 512 383.296292
>>> 1024 611.172247
>>> 2048 830.147571
>>> 4096 1068.057366
>>> 8192 1221.262520
>>> 16384 1271.771983
>>> 32768 1369.702828
>>> 65536 1426.124683
>>> 131072 1453.781151
>>> 262144 1457.297992
>>> 524288 1464.625860
>>> 1048576 1468.953875
>>> 2097152 1470.614903
>>> 4194304 1471.607758
>>>
>>> # OSU MPI Latency Test (Version 2.1)
>>> # Size Latency (us)
>>> 0 3.03
>>> 1 3.03
>>> 2 3.04
>>> 4 3.03
>>> 8 3.03
>>> 16 3.04
>>> 32 3.11
>>> 64 3.23
>>> 128 3.49
>>> 256 3.83
>>> 512 4.88
>>> 1024 6.31
>>> 2048 8.60
>>> 4096 11.02
>>> 8192 15.78
>>> 16384 28.85
>>> 32768 39.82
>>> 65536 60.30
>>> 131072 106.65
>>> 262144 196.47
>>> 524288 374.62
>>> 1048576 730.79
>>> 2097152 1442.32
>>> 4194304 2864.80
>>>
>>> OFED 1.1
>>> ---------
>>>
>>> # OSU MPI Bandwidth Test (Version 2.2)
>>> # Size Bandwidth (MB/s)
>>> 1 0.698614
>>> 2 1.463192
>>> 4 2.941852
>>> 8 5.859464
>>> 16 11.697510
>>> 32 23.339031
>>> 64 46.403081
>>> 128 92.013928
>>> 256 182.918388
>>> 512 315.076923
>>> 1024 500.083937
>>> 2048 765.294564
>>> 4096 1003.652513
>>> 8192 1147.640312
>>> 16384 1115.803139
>>> 32768 1221.120298
>>> 65536 1282.328447
>>> 131072 1315.715608
>>> 262144 1331.456393
>>> 524288 1340.691793
>>> 1048576 1345.650404
>>> 2097152 1349.279211
>>> 4194304 1350.489883
>>>
>>> # OSU MPI Latency Test (Version 2.2)
>>> # Size Latency (us)
>>> 0 2.99
>>> 1 3.03
>>> 2 3.06
>>> 4 3.03
>>> 8 3.03
>>> 16 3.04
>>> 32 3.12
>>> 64 3.27
>>> 128 3.96
>>> 256 4.29
>>> 512 4.99
>>> 1024 6.53
>>> 2048 9.08
>>> 4096 11.92
>>> 8192 17.39
>>> 16384 31.05
>>> 32768 43.47
>>> 65536 67.17
>>> 131072 115.30
>>> 262144 212.33
>>> 524288 405.20
>>> 1048576 790.45
>>> 2097152 1558.88
>>> 4194304 3095.17
>>>
>>>
>>> ------------------------------------------------------------------------
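
For what it's worth, the numbers above put the peak-bandwidth regression at
(1471.61 - 1350.49) / 1471.61 ≈ 8.2%, and the 4 MB latency regression at
(3095.17 - 2864.80) / 2864.80 ≈ 8.0%.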