[openfabrics-ewg] Cisco SQA results so far for OFED 1.0 rc5
Scott Weitzenkamp (sweitzen)
sweitzen at cisco.com
Tue Jun 6 13:06:19 PDT 2006
Great! We were also able to reproduce the problem using rdma_lat.
We've added rdma_lat to our smoke test so we can detect this kind of
problem sooner.
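
For anyone who wants to wire rdma_lat into a similar smoke check, here is
a minimal sketch in Python. The hostnames, ssh setup, latency threshold,
and output parsing below are placeholders and assumptions, not our actual
harness:

    # Minimal sketch only: hostnames, ssh setup, threshold, and the
    # output parsing are assumptions, not our real smoke-test harness.
    import subprocess
    import sys
    import time

    SERVER = "node1"      # placeholder: host running the rdma_lat server side
    CLIENT = "node2"      # placeholder: host running the rdma_lat client side
    MAX_USEC = 10.0       # example regression threshold; tune per fabric/HCA

    # Start rdma_lat in server mode on one node (assumes passwordless ssh).
    srv = subprocess.Popen(["ssh", SERVER, "rdma_lat"],
                           stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
    time.sleep(2)         # crude wait for the server side to start listening

    # Run the client side against it and capture its report.
    out = subprocess.run(["ssh", CLIENT, "rdma_lat", SERVER],
                         capture_output=True, text=True).stdout
    srv.wait()
    print(out)

    # rdma_lat's exact report format varies between versions; matching a
    # line containing "typical" and taking the number before "usec" is an
    # assumption here.
    for line in out.splitlines():
        if "typical" in line.lower():
            usec = float(line.split()[-2])
            sys.exit(0 if usec <= MAX_USEC else 1)
    sys.exit(1)           # no latency line found -> treat as a failure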
Scott Weitzenkamp
SQA and Release Manager
Server Virtualization Business Unit
Cisco Systems
> -----Original Message-----
> From: Tziporet Koren [mailto:tziporet at mellanox.co.il]
> Sent: Tuesday, June 06, 2006 10:13 AM
> To: Tziporet Koren; Scott Weitzenkamp (sweitzen); Pavel Shamis
> Cc: openfabrics-ewg at openib.org
> Subject: RE: [openfabrics-ewg] Cisco SQA results so far for
> OFED 1.0 rc5
>
> Hi All,
>
> We found the problem - it was a change in compilation flags that was
> introduced while adding support for PPC64.
> We will fix it for RC6.
> Thanks, Scott, for finding this issue.
>
> Tziporet
>
> -----Original Message-----
> From: openfabrics-ewg-bounces at openib.org
> [mailto:openfabrics-ewg-bounces at openib.org] On Behalf Of
> Tziporet Koren
> Sent: Monday, June 05, 2006 7:44 PM
> To: Scott Weitzenkamp (sweitzen); Pavel Shamis
> Cc: openfabrics-ewg at openib.org
> Subject: RE: [openfabrics-ewg] Cisco SQA results so far for
> OFED 1.0 rc5
>
> We will run the verbs performance tests to make sure nothing has
> happened to the basic verbs performance.
> I will publish the results tomorrow.
>
> Tziporet
>
> -----Original Message-----
> From: openfabrics-ewg-bounces at openib.org
> [mailto:openfabrics-ewg-bounces at openib.org] On Behalf Of Scott
> Weitzenkamp (sweitzen)
> Sent: Monday, June 05, 2006 6:33 PM
> To: Pavel Shamis
> Cc: openfabrics-ewg at openib.org
> Subject: RE: [openfabrics-ewg] Cisco SQA results so far for
> OFED 1.0 rc5
>
> > Hi,
> > The default MTU size in MVAPICH RC5 was changed to MTU1024 (this
> > fixes a bandwidth issue on PCI-X). You can try the
> > VIADEV_DEFAULT_MTU=MTU2048 parameter on your platforms (PCI Express),
> > and I believe you will get the RC4 latency numbers.
>
> I'll try that for MVAPICH, but that doesn't explain the worse latency
> for Open MPI and Intel MPI.
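> 
> For reference, I'd expect to pass that on the mpirun_rsh command line
> ahead of the benchmark binary, along the lines of
>   mpirun_rsh -np 2 host1 host2 VIADEV_DEFAULT_MTU=MTU2048 ./osu_latency
> (the hostnames and the benchmark binary are placeholders, and the exact
> launcher syntax may differ in this MVAPICH build).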
>
> Scott
> _______________________________________________
> openfabrics-ewg mailing list
> openfabrics-ewg at openib.org
> http://openib.org/mailman/listinfo/openfabrics-ewg