[openib-general] [PATCH v2 04/13] Connection Manager
Evgeniy Polyakov
johnpol at 2ka.mipt.ru
Mon Dec 4 21:13:57 PST 2006
On Mon, Dec 04, 2006 at 10:20:51AM -0600, Steve Wise (swise at opengridcomputing.com) wrote:
> > > This and a lot of other changes in this driver definitely says you
> > > implement your own stack of protocols on top of infiniband hardware.
> >
> > ...but I do know this driver is for 10-gig ethernet HW.
> >
>
> There is no SW TCP stack in this driver. The HW supports RDMA over
> TCP/IP/10GbE in HW and this is required for zero-copy RDMA over Ethernet
> (aka iWARP). The device is a 10 GbE device, not Infiniband. The
> Ethernet driver, upon which the rdma driver depends, acts both like a
> traditional Ethernet NIC for the Linux stack as well as a TCP offload
> device for the RDMA driver allowing establishment of RDMA connections.
> The Connection Manager (patch 04/13) sends/receives messages from the
> Ethernet driver that sets up HW TCP connections for doing RDMA. While
> this is indeed implementing TCP offload, it is _not_ integrating it with
> the sockets layer nor the linux stack and offloading sockets
> connections. Its only supporting offload connections for the RDMA
> driver to do iWARP. The Ammasso device is another example of this
> (drivers/infiniband/hw/amso1100). Deep iSCSI adapters are another
> example of this.
So what will happen when an application creates a socket, binds it to
that NIC, and then tries to establish a TCP connection? How will the NIC
decide that received packets belong to the socket and not to the internal
TCP state machine handled by that device?
As a side note, do all iWARP devices _require_ a very limited TCP
engine implemented in hardware, or is it possible for them to work with
an external SW stack?
> Steve.
--
Evgeniy Polyakov