[openfabrics-ewg] Cisco SQA results so far for OFED 1.0 rc5
Tziporet Koren
tziporet at mellanox.co.il
Tue Jun 6 10:12:41 PDT 2006
Hi All,
We found the problem: it was a change in compilation flags that was
introduced while adding support for PPC64.
We will fix it in RC6.
Thanks to Scott for finding this issue.
Tziporet
-----Original Message-----
From: openfabrics-ewg-bounces at openib.org
[mailto:openfabrics-ewg-bounces at openib.org] On Behalf Of Tziporet Koren
Sent: Monday, June 05, 2006 7:44 PM
To: Scott Weitzenkamp (sweitzen); Pavel Shamis
Cc: openfabrics-ewg at openib.org
Subject: RE: [openfabrics-ewg] Cisco SQA results so far for OFED 1.0 rc5
We will run the verbs performance tests to make sure nothing has
happened to the basic verbs performance.
I will publish the results tomorrow.
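For reference, basic verbs latency and bandwidth are usually checked with the perftest utilities shipped with OFED. A sketch of a typical run (hostnames are placeholders; exact options may vary between OFED releases):

```shell
# On the server node: start the RDMA write latency test listener
ib_write_lat

# On the client node: connect to the server and measure latency
ib_write_lat server-node

# Likewise for bandwidth: server side, then client side
ib_write_bw
ib_write_bw server-node
```

Comparing these numbers between rc4 and rc5 would show whether the regression is in the verbs layer or only in the MPI layers above it.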
Tziporet
-----Original Message-----
From: openfabrics-ewg-bounces at openib.org
[mailto:openfabrics-ewg-bounces at openib.org] On Behalf Of Scott
Weitzenkamp (sweitzen)
Sent: Monday, June 05, 2006 6:33 PM
To: Pavel Shamis
Cc: openfabrics-ewg at openib.org
Subject: RE: [openfabrics-ewg] Cisco SQA results so far for OFED 1.0 rc5
> Hi,
> The default MTU size in mvapich-RC5 was changed to MTU1024 (it solves
> a bandwidth issue on PCI-X). You can try the VIADEV_DEFAULT_MTU=MTU2048
> parameter on your platforms (PCI Express), and I believe you will get
> the rc4 latency numbers.
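With MVAPICH 0.9.x-style launchers, such run-time parameters are typically passed as environment variables on the mpirun_rsh command line. A sketch of what trying the suggestion might look like (the hostnames and the benchmark binary are illustrative placeholders, not from the original mail):

```shell
# Run a 2-process latency benchmark with the larger 2 KB MTU
# (node1, node2, and ./osu_latency are placeholders for this sketch)
mpirun_rsh -np 2 node1 node2 VIADEV_DEFAULT_MTU=MTU2048 ./osu_latency
```

If the rc4 latency numbers return with MTU2048, that would confirm the default-MTU change as the cause for MVAPICH, though, as noted below, it would not explain the Open MPI and Intel MPI results.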
I'll try that for MVAPICH, but that doesn't explain the worse latency
for Open MPI and Intel MPI.
Scott
_______________________________________________
openfabrics-ewg mailing list
openfabrics-ewg at openib.org
http://openib.org/mailman/listinfo/openfabrics-ewg