[ewg] [PATCH] docs: update nes_release_notes.txt for OFED 1.5.1

Chien Tung chien.tin.tung at intel.com
Wed Mar 17 13:05:25 PDT 2010


Signed-off-by: Chien Tung <chien.tin.tung at intel.com>
---
 nes_release_notes.txt |  220 +++++++++++++++++++++----------------------------
 1 files changed, 93 insertions(+), 127 deletions(-)

diff --git a/nes_release_notes.txt b/nes_release_notes.txt
index b698766..6386c54 100644
--- a/nes_release_notes.txt
+++ b/nes_release_notes.txt
@@ -1,57 +1,78 @@
             Open Fabrics Enterprise Distribution (OFED)
       NetEffect Ethernet Cluster Server Adapter Release Notes
-                           December 2009
+                           March 2010
 
 
 
 The iw_nes module and libnes user library provide RDMA and L2IF
 support for the NetEffect Ethernet Cluster Server Adapters.
 
+==========
+What's New
+==========
+OFED 1.5.1 contains several enhancements and bug fixes to the iw_nes driver.
+
+* Add support for KR, device id 0x0110.
+* Add new ethtool stats.
+* Fix crash caused by multiple disconnects during Asynchronous Event handling.
+* Fix crash in listener destroy during loopback setup.
+* Clear stall bit before destroying NIC QP.
+* Set assume aligned header bit.
+
 
 ============================================
 Required Setting - RDMA Unify TCP port space
 ============================================
 RDMA connections use the same TCP port space as the host stack.  To avoid
-conflicts, set rdma_cm module option unify_tcp_port_sapce to 1 by adding
+conflicts, set rdma_cm module option unify_tcp_port_space to 1 by adding
 the following to /etc/modprobe.conf:
 
     options rdma_cm unify_tcp_port_space=1
 
 
+=====================
+Power Management Mode
+=====================
+It is recommended to disable Active State Power Management in the BIOS, e.g.:
+
+  PCIe ASPM L0s - Active State Power Management: DISABLED
+
+
 =======================
 Loadable Module Options
 =======================
 The following options can be used when loading the iw_nes module by modifying
-modprobe.conf file:
+the modprobe.conf file:
 
-wide_ppm_offset = 0
+wide_ppm_offset=0
     Set to 1 will increase CX4 interface clock ppm offset to 300ppm.
     Default setting 0 is 100ppm.
 
-mpa_version = 1
+mpa_version=1
     MPA version to be used in MPA Req/Resp (0 or 1).
 
-disable_mpa_crc = 0
+disable_mpa_crc=0
     Disable checking of MPA CRC.
+    Set to 1 to disable MPA CRC checking.
 
-send_first = 0
+send_first=0
     Send RDMA Message First on Active Connection.
 
-nes_drv_opt = 0x00000100
+nes_drv_opt=0x00000000
     Following options are supported:
 
-    Enable MSI - 0x00000010
-    No Inline Data - 0x00000080
-    Disable Interrupt Moderation - 0x00000100
-    Disable Virtual Work Queue - 0x00000200
+    0x00000010 - Enable MSI
+    0x00000080 - No Inline Data
+    0x00000100 - Disable Interrupt Moderation
+    0x00000200 - Disable Virtual Work Queue
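The nes_drv_opt values are bit masks and can be OR'ed together to enable more than one option at once. A minimal sketch of computing a combined value (the flag constants are from the list above; the combination chosen is only an example):

```shell
# Combine iw_nes nes_drv_opt flag bits from the list above (illustrative).
opt=$(( 0x00000010 | 0x00000080 ))   # Enable MSI + No Inline Data
printf 'nes_drv_opt=0x%08x\n' "$opt"   # prints nes_drv_opt=0x00000090
```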
 
-nes_debug_level = 0
-    Enable debug output level.
+nes_debug_level=0
+    Specify debug output level.
 
-wqm_quanta = 65536
+wqm_quanta=65536
     Set size of data to be transmitted at a time.
 
-limit_maxrdreqsz = 0
+limit_maxrdreqsz=0
     Limit PCI read request size to 256 bytes.
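Several module options can be combined on a single iw_nes line. A sketch of what the relevant modprobe.conf entries might look like (the option names are from this document; the particular values shown are illustrative, not recommendations):

```
# Illustrative /etc/modprobe.conf entries (values are examples only)
options rdma_cm unify_tcp_port_space=1
options iw_nes nes_drv_opt=0x00000000 nes_debug_level=0 wqm_quanta=65536
```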
 
 
@@ -79,7 +100,6 @@ NOTE: Assuming NetEffect Ethernet Cluster Server Adapter is assigned eth2.
     ethtool -C eth2 rx-usecs-high 1000 - largest interrupt moderation timer
                                          for dynamic interrupt moderation
 
-
 ===================
 uDAPL Configuration
 ===================
@@ -89,6 +109,13 @@ Rest of the document assumes the following uDAPL settings in dat.conf:
     ofa-v2-iwarp u2.0 nonthreadsafe default libdaplofa.so.2 dapl.2.0 "eth2 0" ""
 
 
+==============
+mpd.hosts file
+==============
+mpd.hosts is a text file listing the nodes in the MPI ring, one per line.
+Use either fully qualified hostnames or IP addresses.
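A minimal mpd.hosts might look like the following (the hostname and address shown are hypothetical):

```
# hypothetical two-node mpd.hosts
compute-node-1.example.com
192.168.1.12
```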
+
+
 =======================================
 Recommended Settings for HP MPI 2.2.7
 =======================================
@@ -98,45 +125,69 @@ Add the following to mpirun command:
 
 Example mpirun command with uDAPL-2.0:
 
-    mpirun -UDAPL -prot -intra=shm
-           -e MPI_ICLIB_UDAPL=libdaplofa.so.2
+    mpirun -np 2 -hostfile /opt/mpd.hosts
+           -UDAPL -prot -intra=shm
            -e MPI_HASIC_UDAPL=ofa-v2-iwarp
            -1sided
-           -f /opt/hpmpi/appfile
-
+           /opt/hpmpi/help/hello_world
+
 Example mpirun command with uDAPL-1.2:
 
-    mpirun -UDAPL -prot -intra=shm
-           -e MPI_ICLIB_UDAPL=libdaplcma.so.1
+    mpirun -np 2 -hostfile /opt/mpd.hosts
+           -UDAPL -prot -intra=shm
            -e MPI_HASIC_UDAPL=OpenIB-iwarp
            -1sided
-           -f /opt/hpmpi/appfile
+           /opt/hpmpi/help/hello_world
+
 
+============================================================
+Recommended Settings for Platform MPI 7.1 (formerly HP-MPI)
+============================================================
+Add the following to mpirun command:
 
-========================================
-Recommended Settings for Intel MPI 3.2.x
-========================================
+    -1sided
+
+Example mpirun command with uDAPL-2.0:
+
+    mpirun -np 2 -hostfile /opt/mpd.hosts 
+           -UDAPL -prot -intra=shm
+           -e MPI_HASIC_UDAPL=ofa-v2-iwarp
+           -1sided
+           /opt/platform_mpi/help/hello_world
+
+Example mpirun command with uDAPL-1.2:
+
+    mpirun -np 2 -hostfile /opt/mpd.hosts 
+           -UDAPL -prot -intra=shm
+           -e MPI_HASIC_UDAPL=OpenIB-iwarp
+           -1sided
+           /opt/platform_mpi/help/hello_world
+
+
+==============================================
+Recommended Settings for Intel MPI 3.2.x/4.0.x
+==============================================
 Add the following to mpiexec command:
 
     -genv I_MPI_FALLBACK_DEVICE 0
     -genv I_MPI_DEVICE rdma:OpenIB-iwarp
-    -genv I_MPI_RENDEZVOUS_RDMA_WRITE
+    -genv I_MPI_USE_RENDEZVOUS_RDMA_WRITE 1
 
 Example mpiexec command line for uDAPL-2.0:
 
     mpiexec -genv I_MPI_FALLBACK_DEVICE 0
             -genv I_MPI_DEVICE rdma:ofa-v2-iwarp
-            -genv I_MPI_RENDEZVOUS_RDMA_WRITE
+            -genv I_MPI_USE_RENDEZVOUS_RDMA_WRITE 1
             -ppn 1 -n 2
-            /opt/intel/impi/3.2.0.011/bin64/IMB-MPI1
+            /opt/intel/impi/3.2.2/bin64/IMB-MPI1
 
 Example mpiexec command line for uDAPL-1.2:
 
     mpiexec -genv I_MPI_FALLBACK_DEVICE 0
             -genv I_MPI_DEVICE rdma:OpenIB-iwarp
-            -genv I_MPI_RENDEZVOUS_RDMA_WRITE
+            -genv I_MPI_USE_RENDEZVOUS_RDMA_WRITE 1
             -ppn 1 -n 2
-            /opt/intel/impi/3.2.0.011/bin64/IMB-MPI1
+            /opt/intel/impi/3.2.2/bin64/IMB-MPI1
 
 
 ========================================
@@ -160,7 +211,7 @@ Example mpiexec command line:
             -env MV2_DEFAULT_MAX_CQ_SIZE 32766
             -env MV2_RDMA_CM_MAX_PORT 65535
             -env MV2_VBUF_TOTAL_SIZE 9216
-            /usr/mpi/gcc/mvapich2-1.2p1/tests/osu_benchmarks-3.0/osu_latency
+            /usr/mpi/gcc/mvapich2-1.4/tests/osu_benchmarks-3.1.1/osu_latency
 
 
 ==========================================
@@ -175,14 +226,14 @@ Example mpirun command with uDAPL-2.0:
     mpiexec -l -n 64
             -env MV2_DAPL_PROVIDER ofa-v2-iwarp
             -env MV2_ON_DEMAND_THRESHOLD 64
-            /usr/mpi/gcc/mvapich2-1.2p1/tests/osu_benchmarks-3.0/osu_latency
+            /usr/mpi/gcc/mvapich2-1.4/tests/IMB-3.2/IMB-MPI1
 
 Example mpirun command with uDAPL-1.2:
 
     mpiexec -l -n 64
             -env MV2_DAPL_PROVIDER OpenIB-iwarp
             -env MV2_ON_DEMAND_THRESHOLD 64
-            /usr/mpi/gcc/mvapich2-1.2p1/tests/osu_benchmarks-3.0/osu_latency
+            /usr/mpi/gcc/mvapich2-1.4/tests/IMB-3.2/IMB-MPI1
 
 
 ===========================
@@ -196,28 +247,7 @@ http://www.open-mpi.org/faq/?category=tuning#setting-mca-params
 
 
 =======================================
-Recommended Settings for Open MPI 1.3.2
-=======================================
-Caching pinned memory is enabled by default but it may be necessary
-to limit the size of the cache to prevent running out of memory by
-adding the following parameter:
-
-    mpool_rdma_rcache_size_limit = <cache size>
-
-The cache size depends on the number of processes and nodes, e.g. for
-64 processes with 8 nodes, limit the pinned cache size to
-104857600 (100 MBytes).
-
-Example mpirun command line:
-
-    mpirun -np 2 -hostfile /opt/mpd.hosts
-           -mca btl openib,self,sm
-           -mca mpool_rdma_rcache_size_limit 104857600
-           /usr/mpi/gcc/openmpi-1.3.2/tests/IMB-3.1/IMB-MPI1
-
-
-=======================================
-Recommended Settings for Open MPI 1.3.1
+Recommended Settings for Open MPI 1.4.1
 =======================================
 There is a known problem with cached pinned memory.  It is recommended
 that pinned memory caching be disabled.  For more information, see
@@ -225,83 +255,19 @@ https://svn.open-mpi.org/trac/ompi/ticket/1853
 
 To disable pinned memory caching, add the following parameter:
 
-    mpi_leave_pinned = 0
-
-Example mpirun command line:
-
-    mpirun -np 2 -hostfile /opt/mpd.hosts
-           -mca btl openib,self,sm
-           -mca btl_mpi_leave_pinned 0
-           /usr/mpi/gcc/openmpi-1.3.1/tests/IMB-3.1/IMB-MPI1
-
-
-=====================================
-Recommended Settings for Open MPI 1.3
-=====================================
-There is a known problem with cached pinned memory.  It is recommended
-that pinned memory caching be disabled.  For more information, see
-https://svn.open-mpi.org/trac/ompi/ticket/1853
-
-To disable pinned memory caching, add the following parameter:
+    -mca mpi_leave_pinned 0
 
-    mpi_leave_pinned = 0
+Allow the sender to use RDMA Writes:
 
-Receive Queue setting:
+    -mca btl_openib_flags 2
 
-    btl_openib_receive_queues = P,65536,256,192,128
-
-Set maximum size of inline data segment to 64:
-
-    btl_openib_max_inline_data = 64
-
-Example mpirun command:
+Example mpirun command line:
 
     mpirun -np 2 -hostfile /opt/mpd.hosts
            -mca btl openib,self,sm
           -mca mpi_leave_pinned 0
-           -mca btl_openib_receive_queues P,65536,256,192,128
-           -mca btl_openib_max_inline_data 64
-           /usr/mpi/gcc/openmpi-1.3/tests/IMB-3.1/IMB-MPI1
-
-
-============
-Known Issues
-============
-The following is a list of known issues with Linux kernel and
-OFED 1.5 release.
-
-1. We have observed "__qdisc_run" softlockup crash running UDP
-   traffic on RHEL5.1 systems with more than 8 cores.  The issue
-   is in Linux network stack. The fix for this is available from
-   the following link:
-
-http://git.kernel.org/?p=linux/kernel/git/torvalds/linux-2.6.git
-;a=commitdiff;h=2ba2506ca7ca62c56edaa334b0fe61eb5eab6ab0
-;hp=32aced7509cb20ef3ec67c9b56f5b55c41dd4f8d
-
-
-2. For MVAPICH2, IMB-EXT's Window and Accumulate test will error
-   out with "recv desc error, 128".  The workaround is to turn off one 
-   sided communication by adding following to the mpirun command:
-
-   -env MV2_USE_RDMA_ONE_SIDED 0
-
-   
-3. IMB-EXT does not run with Open MPI 1.3.1 or 1.3.  The workaround is
-   to turn off message coalescing by adding the following to mpirun
-   command:
-
-    -mca btl_openib_use_message_coalescing 0
-
-4. On RHEL4u5, the file /dev/infiniband/uverbs0 does not get created.
-   Without this file, programs such as rping will display an "Unable
-   to open RDMA device" error.  To avoide this problem edit
-   /etc/init.d/network file and comment out the following two lines
-   by adding # to the beginning of the line:
-
-sysctl -w kernel.hotplug="/etc/hotplug/firmware.agent. > /dev/null 2>&1
-sysctl -w kernel.hotplug=$oldhotplug > /dev/null 2>&1
-
+           -mca btl_openib_flags 2
+           /usr/mpi/gcc/openmpi-1.4.1/tests/IMB-3.2/IMB-MPI1
 
 
 NetEffect is a trademark of Intel Corporation in the U.S. and other countries.
-- 
1.6.4.2



