[Openib-windows] New CM implementation in fab_cm_branch

Fab Tillier ftillier at silverstorm.com
Tue May 24 17:00:00 PDT 2005


Folks,

Just an FYI - I committed the implementation of the changes I've been
working on for the CM.  The key features:

- Better lookups using red-black (RB) maps - similar in architecture to the
gen2 Linux CM.  In fact, I was doing this at the same time as Sean, so we
exchanged a lot of ideas.
- All MAD processing is done in the context of the CQ callback at
DISPATCH_LEVEL.  This eliminates a thread context switch for user-mode
connections.  For kernel clients, the thread context switch happens later.
- The connection endpoints follow a polling model - a notification callback
is only invoked to indicate that a MAD is available for processing.  The
notification mechanism is similar to a CQ's in that the connection object
must be rearmed (albeit automatically, when the last MAD is reaped).  This
was done to simplify user-mode support.
- Per-connection IOCTL handling.  The previous CM kept a single CM IOCTL
request outstanding per AL instance, which limited CM scalability to a
single thread in user-mode.  The new CM proxy and CM cooperate to allow one
IOCTL to be outstanding per connection endpoint.  User-mode now has a
thread pool for processing async IOCTLs, sized to the number of processors.
The result is that on an MP system, there can now be additional parallelism
in connection establishment.
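To illustrate the keyed-lookup idea behind the RB maps, here is a minimal
sketch in C.  It uses a plain (unbalanced) binary search tree rather than a
real red-black tree, and the cep_t layout is hypothetical - the actual CM
keys its maps on fields such as the local comm ID:

```c
#include <assert.h>
#include <stdint.h>
#include <stdlib.h>

/* Hypothetical connection endpoint record, keyed by its local comm ID.
 * A plain binary search tree stands in here for the CM's red-black maps;
 * the lookup pattern is the same, minus the rebalancing machinery. */
typedef struct cep {
    uint32_t    local_comm_id;   /* lookup key */
    struct cep *left, *right;
} cep_t;

/* Insert a node into the tree rooted at 'root'; returns the new root. */
static cep_t *cep_insert(cep_t *root, cep_t *node)
{
    if (!root)
        return node;
    if (node->local_comm_id < root->local_comm_id)
        root->left = cep_insert(root->left, node);
    else
        root->right = cep_insert(root->right, node);
    return root;
}

/* Find the endpoint with the given comm ID, or NULL if none exists. */
static cep_t *cep_lookup(cep_t *root, uint32_t comm_id)
{
    while (root && root->local_comm_id != comm_id)
        root = (comm_id < root->local_comm_id) ? root->left : root->right;
    return root;
}
```

With a balanced (red-black) tree the same lookup is O(log n) in the worst
case, which is what makes it attractive for matching incoming MADs to
endpoints.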
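The poll-and-rearm model can be sketched like this (single-threaded, with
illustrative names - cep_queue_mad stands in for the CQ-callback side,
cep_poll_mad for the client side; none of these are the actual AL API):

```c
#include <assert.h>
#include <stddef.h>

#define MAX_MADS 8

typedef struct { int payload; } mad_t;

typedef struct {
    mad_t queue[MAX_MADS];
    int   head, tail;
    int   armed;          /* notify on next arrival? */
    int   notifications;  /* count of callback invocations */
} cep_t;

/* Producer side: a MAD arrives (e.g. from the CQ callback at
 * DISPATCH_LEVEL).  The notification fires only if the endpoint is
 * armed, then the endpoint disarms - exactly one wakeup per burst. */
static void cep_queue_mad(cep_t *cep, mad_t mad)
{
    cep->queue[cep->tail++] = mad;
    if (cep->armed) {
        cep->armed = 0;
        cep->notifications++;   /* stand-in for invoking the callback */
    }
}

/* Consumer side: poll until empty; reaping the last MAD rearms the
 * endpoint automatically, much like rearming a CQ. */
static int cep_poll_mad(cep_t *cep, mad_t *out)
{
    if (cep->head == cep->tail)
        return 0;               /* nothing to reap */
    *out = cep->queue[cep->head++];
    if (cep->head == cep->tail)
        cep->armed = 1;         /* last MAD reaped: automatic rearm */
    return 1;
}
```

The effect is that a burst of MADs costs one notification, and the client
never has to rearm explicitly.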
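As a rough sketch of the processor-sized thread pool idea, here is a
portable pthreads version (the real proxy runs on Windows kernel
mechanisms, and all names here - pool_t, pool_run, the fake request queue
- are illustrative only):

```c
#include <assert.h>
#include <pthread.h>
#include <unistd.h>

#define NREQ     64   /* pending "IOCTLs" to drain */
#define MAX_CPUS 64   /* cap on worker threads for this sketch */

typedef struct {
    pthread_mutex_t lock;
    int next;                 /* next request index to claim */
    int done[NREQ];           /* completion flag per request */
} pool_t;

/* Each worker claims requests from the shared queue until it is empty,
 * so independent connections can progress in parallel on an MP system. */
static void *worker(void *arg)
{
    pool_t *p = arg;
    for (;;) {
        pthread_mutex_lock(&p->lock);
        int i = (p->next < NREQ) ? p->next++ : -1;
        pthread_mutex_unlock(&p->lock);
        if (i < 0)
            return NULL;      /* queue drained */
        p->done[i] = 1;       /* stand-in for completing the IOCTL */
    }
}

/* Launch one worker per online processor and wait for the drain. */
static void pool_run(pool_t *p)
{
    long n = sysconf(_SC_NPROCESSORS_ONLN);
    if (n < 1)
        n = 1;
    if (n > MAX_CPUS)
        n = MAX_CPUS;

    pthread_t tid[MAX_CPUS];
    for (long t = 0; t < n; t++)
        pthread_create(&tid[t], NULL, worker, p);
    for (long t = 0; t < n; t++)
        pthread_join(tid[t], NULL);
}
```

Sizing the pool to the processor count keeps every CPU busy during a
connection storm without oversubscribing the machine.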

Current limitations:
- I haven't implemented peer-to-peer matching on the REQ.
- The CM handle is now a struct, which is a bit kludgey in that these
structs get passed by value into functions.  There will be further CM API
changes forthcoming.
- DAPLtest seems to have some issues.  Not sure why, but the server can't
perform multiple runs of a test, and hangs after the second run.  Connection
attempts beyond the second run time out.

The key files for the CM:

core\al\kernel\al_cm_cep.c
core\al\al_cm_cep.h
core\al\kernel\al_proxy_cep.c
core\al\user\ual_cm_cep.c
core\al\al_cm_qp.c

Let me know if you have any questions or feedback.

Thanks,

- Fab
