[openfabrics-ewg] OFED 1.0 release criteria

Sujal Das Sujal at Mellanox.com
Mon May 8 15:41:59 PDT 2006


Scott (Cisco) and Bob (SilverStorm):  Please let us know if your
respective SRP gateways will be qualified for the OFED 1.0 release.
Provide details of what will be qualified and supported.

 

________________________________

From: openfabrics-ewg-bounces at openib.org
[mailto:openfabrics-ewg-bounces at openib.org] On Behalf Of Tziporet Koren
Sent: Monday, May 08, 2006 7:31 AM
To: openfabrics-ewg at openib.org
Subject: [openfabrics-ewg] OFED 1.0 release criteria

 

Hi All,

Since a request for release criteria was raised, I have prepared this
first proposal, and I would like to review it in the meeting today.

Tziporet

========================================================================

OpenFabrics Enterprise Distribution (OFED) 1.0 release criteria:

The release criteria are composed of the following:

1.      Bugs and limitations

2.      Systems supported

3.      Performance

4.      Testing

Each section defines different criteria according to the component
category definitions:

1.      Basic: GA components that are installed in a typical installation.

2.      Add-on: Optional components that must be selected explicitly
during installation.

3.      Technology preview: Components whose quality level is not GA
but that can be used by customers for technology development.

1. Bugs and limitations:

The criteria for limitations and known bugs are defined according to
each component category:

1.      Basic: no high-severity or showstopper bugs

2.      Add-on: no showstopper bugs

3.      Technology preview: passing basic tests (each component owner
should define what these tests are)


In addition, the install and build scripts should be clean, since it is
most important that the customer's first interaction with IB be
smooth.
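
As a rough illustration only, a minimal Python sketch of the kind of
post-install sanity check that could verify a "clean" result is shown
below (not an official OFED tool). The module names ib_core / ib_mthca /
ib_ipoib and the /sys/class/infiniband path are assumptions about a
typical Mellanox-based "Basic" installation; adjust for other HCAs.

#!/usr/bin/env python
# Post-install sanity check - a sketch only, not part of OFED.
# Assumes a typical Mellanox-based install: ib_core, ib_mthca (HCA driver)
# and ib_ipoib loaded, and HCAs registered under /sys/class/infiniband.
import os
import sys

EXPECTED_MODULES = ["ib_core", "ib_mthca", "ib_ipoib"]   # assumed module set
SYSFS_IB = "/sys/class/infiniband"

def loaded_modules():
    # /proc/modules lists one loaded module per line, name first.
    with open("/proc/modules") as f:
        return set(line.split()[0] for line in f)

def main():
    problems = []
    mods = loaded_modules()
    for mod in EXPECTED_MODULES:
        if mod not in mods:
            problems.append("kernel module not loaded: %s" % mod)
    # Every registered HCA appears as a directory under /sys/class/infiniband.
    if not os.path.isdir(SYSFS_IB) or not os.listdir(SYSFS_IB):
        problems.append("no HCA registered under %s" % SYSFS_IB)
    if problems:
        for p in problems:
            print("FAIL: " + p)
        sys.exit(1)
    print("OK: basic IB stack looks healthy")

if __name__ == "__main__":
    main()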

2. Systems Supported

There are three different categories in the system configuration:

*       CPU architecture

*       InfiniBand HW (HCAs and switches)

*       Operating system.

Each OFED component should support these systems as follows:

1.      Basic: supporting all systems

2.      Add-on: supporting the systems defined by the component owner 

3.      Technology preview: supporting the systems defined by the
component owner

Note: Owners, please send me the list of systems your components support

2.1. CPU Architectures: 

a)      x86_32 

b)      x86_64 (Intel; AMD)

c)      ia64

d)      PPC64 (Power5, Power6)

 

2.2 InfiniBand HW:

a)      HCAs: 

I.      Mellanox: both DDR and SDR are supported.
The FW burned should be the latest official release from Mellanox:

i.      InfiniHost III Lx: fw-25204-1.0.800

ii.     InfiniHost III Ex: fw-25218-5.1.400 and fw-25208-4.7.600

iii.    InfiniHost: fw-23108-3.4.000

iv.     InfiniScale III - fw-47396-0.8.4

v.      InfiniScale - fw-43132-5.6.0

II.     QLogic: Please send the list of your HCAs

b)      Switches: (each vendor should send the list)

I.      Cisco:

II.     Voltaire:

III.    SilverStorm:

IV.     Flextronics:

 

2.3. Linux distributions and kernels

a)      Red Hat:

a.      AS EL4 update 2 and update 3

b.      Fedora Core 4 (2.6.11-1.1369_FC4)

b)      Novell:

a.      SLES10 beta 10

b.      SuSE Pro 10 (kernel 2.6.13-15-smp)

c)      kernel.org: 2.6.16.x

3. Performance requirements: 

The performance (latency and bandwidth) of OFED 1.0 should be at least
as good as that of the available gen1 software stacks.

The performance benchmarks for each ULP:

1.      Basic verbs - performance tests from OpenFabrics (send, RDMA
read/write latency & BW)

2.      IPoIB - netperf

3.      MPI - Pallas

4.      SDP - iperf

5.      SRP - iometer

6.      iSER - iometer
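
To make the comparison against gen1 stacks repeatable, a wrapper along
the lines of the sketch below could collect the raw client-side output
of a few of these benchmarks against a peer node. The tool names and
default invocations (ib_send_lat and ib_write_bw from the perftest
package, netperf -H, iperf -c) are assumptions about a typical install;
the matching server sides must already be running on the peer, and SDP
additionally needs the libsdp preload on both ends.

#!/usr/bin/env python
# Collect raw benchmark output for the gen1-vs-OFED comparison.
# A sketch only: assumes ib_send_lat/ib_write_bw (perftest), netperf and
# iperf are installed and their server sides already run on the peer host.
import subprocess
import sys

def run(name, cmd):
    # Run one benchmark client and dump its raw output to <name>.log.
    print("=== %s: %s" % (name, " ".join(cmd)))
    proc = subprocess.run(cmd, capture_output=True, text=True)
    with open(name + ".log", "w") as f:
        f.write(proc.stdout)
        f.write(proc.stderr)
    return proc.returncode

def main():
    if len(sys.argv) != 2:
        sys.exit("usage: collect_perf.py <peer-host>")
    peer = sys.argv[1]
    benchmarks = [
        ("verbs_send_lat", ["ib_send_lat", peer]),    # basic verbs latency
        ("verbs_write_bw", ["ib_write_bw", peer]),    # basic verbs RDMA write BW
        ("ipoib_netperf",  ["netperf", "-H", peer]),  # IPoIB TCP stream
        ("sdp_iperf",      ["iperf", "-c", peer]),    # SDP (needs libsdp preload)
    ]
    failed = [name for name, cmd in benchmarks if run(name, cmd) != 0]
    if failed:
        sys.exit("failed: " + ", ".join(failed))

if __name__ == "__main__":
    main()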

4. Testing

1.      Minimum cluster size to be tested:
We need a cluster of at least 128 nodes; it is not clear to me whether
any company has such a cluster.

2.      Long runs: The final release should run for at least 72 hours
(maybe longer?) without any failure (a simple harness sketch follows
this list).

3.      Storage targets tested:

a)      Engenio target

b)      Cisco & SST - please add more target systems

c)      Voltaire - please add iSER target
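
For the long-run criterion in item 2 above, a trivial wrapper along
these lines could drive whatever test the component owner picks (an MPI
job, an iometer run, etc.) for the required number of hours and stop on
the first failure. This is only a sketch of the idea, not a proposed
test tool; the test command itself is an assumption left to the owner.

#!/usr/bin/env python
# Long-run wrapper: re-run a test command until the time budget is spent,
# stopping on the first failure. A sketch for the 72-hour criterion only.
import subprocess
import sys
import time

def main():
    if len(sys.argv) < 3:
        sys.exit("usage: longrun.py <hours> <test-command> [args...]")
    hours = float(sys.argv[1])
    cmd = sys.argv[2:]
    deadline = time.time() + hours * 3600
    iteration = 0

    while time.time() < deadline:
        iteration += 1
        rc = subprocess.call(cmd)
        if rc != 0:
            sys.exit("iteration %d failed with exit code %d" % (iteration, rc))
        print("iteration %d passed" % iteration)

    print("completed %d iterations over %.1f hours without failure"
          % (iteration, hours))

if __name__ == "__main__":
    main()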

 

Other criteria: (it is not clear to me whether these are a must for the
1.0 release)

1. Scalability requirements 

a)      SM:

a)      Bring up a subnet with 1,000 nodes within 2 minutes

b)      The SM should not be a bottleneck for any running application
(e.g., IPoIB)

b)      MPI:

a)      MPI runner - should be able to launch thousands of processes
(say 50,000) within a bounded amount of time.

b)      Memory consumption - should be able to run many processes on
the same node (for now, 8 processes is the upper limit on the Opteron
machines) in a many-node (thousands of nodes) installation.

c)      Sending HUGE messages in collectives - MPI should not fail due
to limited physical memory.

 

2. Documentation requirements 

a)      Product brief - who is the owner for this?

b)      README & Installation guide 

c)      Release notes 

d)      Troubleshooting

e)      FAQ

3.  Specifications compliance: 

a)       Verbs & management: InfiniBand Architecture Specification,
Volume 1, Release 1.2 

b)       IPoIB: www.ietf.org: draft-ietf-ipoib-architecture-04 and
draft-ietf-ipoib-ip-over-infiniband-07

c)       SDP: Annex A4 of the InfiniBand Architecture Specification,
Volume 1, Release 1.2

d)       SRP: SCSI RDMA Protocol-2 (SRP-2), Doc. no. T10/1524-D
(www.t10.org/ftp/t10/drafts/srp2/srp2r00a.pdf).

e)       MPI: www.mpi-forum.org/docs/mpi-11-html/mpi-report.html

f)       iSER: www.ietf.org/internet-drafts/draft-hufferd-iser-ib-01.pdf

g)       RDS: SS, can you provide info?

 

 

 

Tziporet Koren

Software Director

Mellanox Technologies

tziporet at mellanox.co.il

Tel +972-4-9097200, ext 380

 


