[openib-general] using IB on a port without IPoIB running NIC
Tang, Changqing
changquing.tang at hp.com
Mon Jan 8 06:07:46 PST 2007
Or:
Thank you for the information. I may change my mind and require
IPoIB in order to run the newer version of HP-MPI on OFED 1.2, if I don't
find another way to easily establish IB connections dynamically between
two process groups of dynamic size.
--CQ
> -----Original Message-----
> From: Or Gerlitz [mailto:ogerlitz at voltaire.com]
> Sent: Monday, January 08, 2007 1:18 AM
> To: Tang, Changqing
> Cc: openib-general at openib.org
> Subject: using IB on a port without IPoIB running NIC
>
> Tang, Changqing wrote:
> > We understand that, but we hope to have a connect/accept style IB
> > connection setup, without IPoIB involved,
>
> > like HP-UX IT-API (similar to uDAPL without underlying IP
> > support); it works with multiple cards.
>
> > Configuring 4-5 IP addresses on a single node is kind of silly.
>
> CQ,
>
> A few more thoughts on your "being able to run MPI on an IB port
> without a working IPoIB NIC" requirement...
>
> Basically, people use IB for both IPC and I/O, and except for SRP,
> all the IB I/O ULPs (block based: iSER; file based: Lustre, GPFS,
> rNFS) use IP addressing, and hence are either coded to the RDMA CM
> or work on top of TCP/IP (iSCSI-TCP, NFS, pFS, etc).
>
> So if the user does not configure IPoIB on this IB port, it will
> not be utilized for I/O.
>
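Right, and the accept side shows the same dependence. Again only a rough
sketch with librdmacm (made-up IPoIB address and port, error handling and
QP setup elided): rdma_bind_addr() selects the IB port through its IPoIB
address, so a port with no IPoIB configured cannot even be picked here.

#include <string.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <rdma/rdma_cma.h>

int main(void)
{
    struct rdma_event_channel *ec;
    struct rdma_cm_id *listen_id;
    struct rdma_cm_event *event;
    struct rdma_conn_param conn_param;
    struct sockaddr_in src;

    ec = rdma_create_event_channel();
    rdma_create_id(ec, &listen_id, NULL, RDMA_PS_TCP);

    memset(&src, 0, sizeof(src));
    src.sin_family = AF_INET;
    src.sin_port = htons(20079);                   /* made-up service port */
    inet_pton(AF_INET, "10.0.0.1", &src.sin_addr); /* this port's IPoIB address */

    /* Binding to the IPoIB address is what selects the HCA port. */
    rdma_bind_addr(listen_id, (struct sockaddr *)&src);
    rdma_listen(listen_id, 8);

    /* Wait for a connect request; a real program would create the QP
     * on event->id before accepting. */
    rdma_get_cm_event(ec, &event);
    if (event->event == RDMA_CM_EVENT_CONNECT_REQUEST) {
        memset(&conn_param, 0, sizeof(conn_param));
        rdma_accept(event->id, &conn_param);
    }
    rdma_ack_cm_event(event);
    return 0;
}
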
> Now, you mention a use case of 4 cards on a node. I believe that
> typically this would happen on big SMP machines where you **must**
> use all the active IB links for I/O: e.g. when most of your MPI work
> is within the SMP (128 to 512 ranks) and most of the IB work is for
> I/O.
>
> I understand (please check and let me know, e.g. about the HP 1U
> offering) that all or most of today's 1U PCI-EX nodes can have at
> most **one** PCI-EX card.
>
> Combining the above limitation with the fact that these nodes would
> run at most 16 ranks (e.g. 8 dual-core CPUs) and that 8 ranks per IB
> link is a ratio that makes sense, we are left with **2** and not 4-5
> NICs to configure.
>
> Oh, and one more thing: 4 IB links per node would turn an N-node
> cluster into a cluster with 4N IB end-ports, for which you need
> f(4N) switching IB ports, and the specific f(.) might make the IB
> deployment for this cluster a very expensive one...
>
> Or.