Dear OpenFabrics members,

I'm a newcomer to InfiniBand and OpenFabrics.

I'm trying to set up NFS/RDMA on CentOS 5.5 x86_64.

I'm reading the document /usr/share/doc/ofed-docs-1.4.1/nfs-rdma.release-notes.txt
from the CentOS package (attached below).
I installed the packages listed here and was able to get IP over InfiniBand working.

Environment:

  OS:
    CentOS 5.5 x86_64

  Installed packages:
    ibutils-1.2-11.el5
    libibumad-1.3.3-1.el5
    libibverbs-1.1.3-2.el5
    librdmacm-1.0.10-1.el5
    mstflint-1.4-1.el5
    ofed-docs-1.4.1-2.el5
    opensm-libs-3.3.3-1.el5
    openib-1.4.1-5.el5
    opensm-3.3.3-1.el5

  InfiniBand card:
    Mellanox Technologies MT25208

Questions:

  1) Do I have to build the OFED packages myself?

     I could not find the mount.rnfs command, svcrdma.ko, or xprtrdma.ko on CentOS.
     Do I need to build OFED from source, e.g. from
     http://69.55.239.13/downloads/OFED/ ?

  2) What is the best free platform for testing OFED?

     I don't have any commercial Linux (i.e. RHEL / SUSE).
     Could you tell me which *free* platform is best for testing OFED,
     e.g. Fedora, Debian, CentOS...?

Sincerely,

--
Hiroyuki Sato.


################################################################################
#                                                                              #
#                                NFS/RDMA README                               #
#                                                                              #
################################################################################

 Author: NetApp and Open Grid Computing

 Adapted for OFED 1.4 (from linux-2.6.27.8/Documentation/filesystems/nfs-rdma.txt)
 by Jeff Becker

Table of Contents
~~~~~~~~~~~~~~~~~
 - Overview
 - OFED 1.4 limitations
 - Getting Help
 - Installation
 - Check RDMA and NFS Setup
 - NFS/RDMA Setup

Overview
~~~~~~~~

 This document describes how to install and set up the Linux NFS/RDMA client
 and server software.

 The NFS/RDMA client was first included in Linux 2.6.24. The NFS/RDMA server
 was first included in the following release, Linux 2.6.25.

 In our testing, we have obtained excellent performance results (full 10Gbit
 wire bandwidth at minimal client CPU) under many workloads. The code passes
 the full Connectathon test suite and operates over both InfiniBand and iWARP
 RDMA adapters.

OFED 1.4.1 limitations
~~~~~~~~~~~~~~~~~~~~~~

 NFS/RDMA is supported on the following releases:

 - Red Hat Enterprise Linux (RHEL) version 5.1
 - Red Hat Enterprise Linux (RHEL) version 5.2
 - Red Hat Enterprise Linux (RHEL) version 5.3
 - SUSE Linux Enterprise Server (SLES) version 10, Service Pack 2
 - SUSE Linux Enterprise Server (SLES) version 11

 and the following kernel.org kernels:

 - 2.6.22
 - 2.6.26
 - 2.6.27

 All other Linux distributions and kernel versions are NOT supported by OFED 1.4.1.

Getting Help
~~~~~~~~~~~~

 If you get stuck, you can ask questions on the
 nfs-rdma-devel@lists.sourceforge.net or general@lists.openfabrics.org
 mailing lists.

Installation
~~~~~~~~~~~~

 These instructions are a step-by-step guide to building a machine for
 use with NFS/RDMA.

 - Install an RDMA device

 Any device supported by the drivers in drivers/infiniband/hw is acceptable.

 Testing has been performed using several Mellanox-based IB cards and
 the Chelsio cxgb3 iWARP adapter.

 - Install OFED 1.4.1

 NFS/RDMA has been tested on RHEL 5.1, RHEL 5.2, RHEL 5.3, SLES 10 SP2, SLES 11,
 and kernels 2.6.22, 2.6.26, and 2.6.27. On these kernels, NFS/RDMA is
 installed by default if you simply select "install all", and can be
 specifically included by a "custom" install.

 In addition, the install script will install a version of nfs-utils that
 is required for NFS/RDMA. The binary installed will be named "mount.rnfs".
 This version is not necessary on Linux distributions that ship nfs-utils 1.1
 or later.

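 To check whether the distribution's own nfs-utils is already new enough (and
 the bundled mount.rnfs therefore unnecessary), a quick check on RPM-based
 systems is shown below; the exact package and helper names are assumptions
 and may differ on other distributions:

 $ rpm -q nfs-utils
 $ /sbin/mount.nfs -V
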
 Upon successful installation, the NFS kernel modules will be placed in the
 directory /lib/modules/`uname -r`/updates. It is recommended that you reboot
 to ensure that the correct modules are loaded.
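
 To confirm that the updated modules are the ones the running kernel will pick
 up, a quick check (assuming the modules were installed for the currently
 running kernel) is:

 $ modinfo xprtrdma | grep filename
 $ modinfo svcrdma | grep filename

 The reported paths should point under /lib/modules/`uname -r`/updates.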

Check RDMA and NFS Setup
~~~~~~~~~~~~~~~~~~~~~~~~

 Before configuring the NFS/RDMA software, it is a good idea to test
 your new kernel to ensure that it is working correctly. In particular,
 verify that the RDMA stack is functioning as expected and that standard
 NFS over TCP/IP and/or UDP/IP is working properly.

 - Check RDMA Setup

 If you built the RDMA components as modules, load them at
 this time. For example, if you are using a Mellanox Tavor/Sinai/Arbel
 card:

 $ modprobe ib_mthca
 $ modprobe ib_ipoib

 If you are using InfiniBand, make sure there is a Subnet Manager (SM)
 running on the network. If your IB switch has an embedded SM, you can
 use it. Otherwise, you will need to run an SM, such as OpenSM, on one
 of your end nodes (a sketch of starting OpenSM appears after the IPoIB
 test below).

 If an SM is running on your network, you should see the following:

 $ cat /sys/class/infiniband/driverX/ports/1/state
 4: ACTIVE

 where driverX is mthca0, ipath5, ehca3, etc.

 To further test the InfiniBand software stack, use IPoIB (this
 assumes you have two IB hosts named host1 and host2):

 host1$ ifconfig ib0 a.b.c.x
 host2$ ifconfig ib0 a.b.c.y
 host1$ ping a.b.c.y
 host2$ ping a.b.c.x
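
 As mentioned above, if no embedded SM is available, OpenSM can be started on
 one of the end nodes. The init-script name below is an assumption and varies
 by distribution; running opensm directly in the background is an alternative:

 $ /etc/init.d/opensmd start

 or

 $ opensm -B

 If the libibverbs example utilities are installed, ibv_devinfo provides
 another quick check that the port has come up:

 $ ibv_devinfo | grep -i state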

 For other device types, follow the appropriate procedures.

 - Check NFS Setup

 For the NFS components enabled above (client and/or server),
 test their functionality over standard Ethernet using TCP/IP or UDP/IP.

NFS/RDMA Setup
~~~~~~~~~~~~~~

 We recommend that you use two machines, one to act as the client and
 one to act as the server.

 One-time configuration:

 - On the server system, configure the /etc/exports file and
 start the NFS/RDMA server.

 Export entries with the following formats have been tested:

 /vol0 192.168.0.47(fsid=0,rw,async,insecure,no_root_squash)
 /vol0 192.168.0.0/255.255.255.0(fsid=0,rw,async,insecure,no_root_squash)

 The IP address(es) given are the client's IPoIB address(es) for an InfiniBand
 HCA, or the client's iWARP address(es) for an RNIC.

 NOTE: The "insecure" option must be used because the NFS/RDMA client does
 not use a reserved port.

 Each time a machine boots:

 - Load and configure the RDMA drivers

 For InfiniBand using a Mellanox adapter:

 $ modprobe ib_mthca
 $ modprobe ib_ipoib
 $ ifconfig ib0 a.b.c.d

 NOTE: use unique addresses for the client and server

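 On RHEL/CentOS-style systems with the OFED openib package installed, the
 openibd init script can load the IB modules at boot, and the IPoIB address
 can be made persistent with an ifcfg file. Both the service name and the
 file layout below are assumptions and may differ on your distribution:

 $ chkconfig openibd on
 $ cat /etc/sysconfig/network-scripts/ifcfg-ib0
 DEVICE=ib0
 BOOTPROTO=static
 IPADDR=a.b.c.d
 NETMASK=255.255.255.0
 ONBOOT=yes
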
 - Start the NFS server

 Load the RDMA transport module:

 $ modprobe svcrdma

 Start the server:

 $ /etc/init.d/nfsserver start

 or

 $ service nfs start

 Instruct the server to listen on the RDMA transport:

 $ echo rdma 20049 > /proc/fs/nfsd/portlist
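
 To confirm that the RDMA listener was registered, the port list can be read
 back; the output should include an "rdma 20049" entry (the exact formatting
 may vary by kernel version):

 $ cat /proc/fs/nfsd/portlist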

 - On the client system

 Load the RDMA client module:

 $ modprobe xprtrdma

 Mount the NFS/RDMA server:

 $ mount -o rdma,port=20049 <IPoIB-server-name-or-address>:/<export> /mnt

 To verify that the mount is using RDMA, run "cat /proc/mounts" and check
 that the "proto" field for the given mount reads "rdma".

 Congratulations! You're using NFS/RDMA!

Known Issues
~~~~~~~~~~~~

If you're running NFSRDMA over Chelsio's T3 RNIC and your cients are using</div><div>a 64KB page size (like PPC64 and IA64 systems) and your server is using a</div><div>4KB page size (like i386 and X86_64), then you need to mount the server</div>
<div>using rsize=32768,wsize=32768 to avoid overrunning the Chelsio RNIC fast</div><div>register limits. This is a known firmware limitation in the Chelsio RNIC.</div><div><br></div><div><br></div><div><br></div></div>
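
For example, using the same placeholders as the mount command above:

 $ mount -o rdma,port=20049,rsize=32768,wsize=32768 <IPoIB-server-name-or-address>:/<export> /mnt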