[openib-general] Re: CMA stuff

Caitlin Bestler caitlinb at broadcom.com
Thu Mar 2 13:33:03 PST 2006


openib-general-bounces at openib.org wrote:
> Quoting Roland Dreier <rdreier at cisco.com>:
>> Subject: CMA stuff
>> 
>> What's the latest state of the CMA for merging upstream? Have there
>> been any changes since the last patchset you sent out?
>> 
>> I'd like to at least put it in a branch of my git tree, and try to
>> get some consensus (even if it is "no one cares enough to comment, go
>> ahead and merge it") before 2.6.17 opens up.  So I want to work with
>> the latest code for merging upstream.
>> 
>> Thanks,
>>   Roland
> 
> Here's a short list of issues I wanted to bring up -
> unfortunately I just started looking at CMA again after a
> pause, so I might not be up to date or might misunderstand
> something. Please correct me if I'm wrong.
> 
> - Do we expect ULPs to let CMA manage the QPs, or do it themselves?
> 
When the ULP chooses to use the CMA, the QP state is managed by
the CMA. Transport-specific APIs can be used when the application
wants explicit control of QP states; there is no such thing as
transport-neutral explicit state control.
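
For example, here is a rough active-side sketch against the rdma_cm
API from the posted patchset (untested; ulp_pd, ulp_qp_attr,
ulp_conn_param and ulp_connection_up are placeholder names).  Note
what is absent: the ULP never calls ib_modify_qp.  rdma_create_qp()
binds the QP to the cm_id and leaves it in INIT, and the CMA drives
it to RTR/RTS as the connection completes:

#include <rdma/rdma_cm.h>

static int ulp_cma_handler(struct rdma_cm_id *id,
			   struct rdma_cm_event *event)
{
	switch (event->event) {
	case RDMA_CM_EVENT_ADDR_RESOLVED:
		/* A nonzero return asks the CMA to tear the id down. */
		return rdma_resolve_route(id, 2000);
	case RDMA_CM_EVENT_ROUTE_RESOLVED:
		/* The QP is created here and left in INIT by the CMA. */
		if (rdma_create_qp(id, ulp_pd, &ulp_qp_attr))
			return -ECONNABORTED;
		return rdma_connect(id, &ulp_conn_param);
	case RDMA_CM_EVENT_ESTABLISHED:
		/* The CMA has moved the QP to RTS; sends are now legal. */
		ulp_connection_up(id);
		return 0;
	default:
		return 0;
	}
}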

> - When CMA manages QP state, there doesn't seem to be an option
>   to post receive WQEs while the QP is in the INIT state.
>   This is required at least for SDP, if SDP is to use the CMA.
>

The consumer can post receive buffers as soon as the QP
is created. Send buffers cannot be posted until the
consumer has been notified that the connection is established.

Note that on iWARP/MPA this notification can arrive some time
after the passive side has accepted the connection.
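
As a concrete illustration (untested sketch; ulp_rx_buf and
ulp_prepost_recvs are placeholder names), a ULP such as SDP can
pre-post its receive ring immediately after rdma_create_qp(),
before rdma_connect()/rdma_accept() and well before
RDMA_CM_EVENT_ESTABLISHED, because an RC QP in INIT already
accepts receive work requests:

static int ulp_prepost_recvs(struct rdma_cm_id *id,
			     struct ulp_rx_buf *bufs, int n)
{
	struct ib_recv_wr wr, *bad_wr;
	struct ib_sge sge;
	int i, ret;

	for (i = 0; i < n; i++) {
		sge.addr   = bufs[i].dma_addr;	/* already DMA-mapped */
		sge.length = bufs[i].size;
		sge.lkey   = bufs[i].mr->lkey;

		wr.next    = NULL;
		wr.wr_id   = (u64) (unsigned long) &bufs[i];
		wr.sg_list = &sge;
		wr.num_sge = 1;

		/* Legal as soon as rdma_create_qp() has returned. */
		ret = ib_post_recv(id->qp, &wr, &bad_wr);
		if (ret)
			return ret;
	}
	return 0;
}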
 
> - CMA does not seem to do anything with the path static rate it gets
>   from SA. Am I missing the place where it does do it?
>

 

> 
> 	Basically TCP uses a two-stage backlog to defend
> 	against SYN attacks.
> 	When a SYN is received, a small amount of state is kept
> 	until the full handshake is completed, at which point a
> 	full socket is created and queued onto the listen socket's
> 	accept queue. The second stage uses the
> 	listen() backlog parameter to manage the accept queue.
> 	The first-stage queue size is managed using a sysctl
> 	(net.ipv4.tcp_max_syn_backlog), which on a lot of systems
> 	defaults to 1024.
> 
> 	So I think ideally CMA would do the same.

Listening is being delegated to device-specific code, and the
resources backing a listen will vary with the actual device.
Therefore the backlog parameter can only be understood as a
requirement: the device MUST support at least this many pending
requests and SHOULD NOT support more (perhaps that should even
be MUST NOT).

Applications that attempt to tune parameters based upon
presumptions about how drivers work, rather than in terms of
their own requirements and expectations of the driver, are
always fragile.
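
To make that concrete, a passive-side sketch (untested;
ULP_MAX_PENDING_CONNECTS is a placeholder, and ulp_cma_handler is
the placeholder handler from the earlier sketch) would size the
backlog from the application's own requirements and leave the
provisioning strategy to the device-specific code behind
rdma_listen():

static int ulp_start_listen(struct sockaddr *addr)
{
	struct rdma_cm_id *id;
	int ret;

	id = rdma_create_id(ulp_cma_handler, NULL, RDMA_PS_TCP);
	if (IS_ERR(id))
		return PTR_ERR(id);

	ret = rdma_bind_addr(id, addr);
	if (ret)
		goto err;

	/* State what the ULP needs, not how the driver should do it. */
	ret = rdma_listen(id, ULP_MAX_PENDING_CONNECTS);
	if (ret)
		goto err;
	return 0;

err:
	rdma_destroy_id(id);
	return ret;
}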



