[openib-general] Re: CMA stuff
Michael S. Tsirkin
mst at mellanox.co.il
Thu Mar 2 13:15:14 PST 2006
Quoting Roland Dreier <rdreier at cisco.com>:
> Subject: CMA stuff
>
> What's the latest state of the CMA for merging upstream? Have there
> been any changes since the last patchset you sent out?
>
> I'd like to at least put it in a branch of my git tree, and try to get
> some consensus (even if it is "no one cares enough to comment, go
> ahead and merge it") before 2.6.17 opens up. So I want to work with
> the latest code for merging upstream.
>
> Thanks,
> Roland
Here's a short list of issues I wanted to bring up. Unfortunately I have
only just started looking at CMA again after a pause, so I might not be
up to date or might be misunderstanding something - please correct me if
I'm wrong.
- Do we expect ULPs to let CMA manage the QPs, or to do it themselves?
- When CMA manages QP state, there doesn't seem to be an option to
  post receive WQEs while the QP is still in the INIT state.
  This is required at least for SDP, if SDP is to use CMA-managed QPs
  (see the first sketch below).
- CMA does not seem to do anything with the path static rate it gets
  from the SA. Am I missing the place where it does handle it?
  (See the second sketch below.)
- Any chance IPv6 will get supported soon?
- backlog parameter
  - Most of the backlog handling code seems to be in ucma -
    shouldn't this be generic to cma?
  - It seems that in ucma the backlog is checked when a connection request
    arrives. However, this is not how TCP handles backlog,
    so socket applications being ported to CMA might hit a problem.
    Here's an explanation I received about backlog:
    Basically, TCP uses a two-stage backlog to defend against SYN attacks.
    When a SYN is received, a small amount of state is kept until the full
    handshake is completed, at which point a full socket is created and
    queued onto the listen socket's accept queue. The second stage uses the
    listen() backlog parameter to manage the accept queue. The first-stage
    queue size is managed by a sysctl (net.ipv4.tcp_max_syn_backlog),
    which on a lot of systems defaults to 1024.
    So I think ideally CMA would do the same (the third sketch below shows
    the socket-side behaviour I mean).
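
To make the receive-posting point concrete, here is roughly the flow a ULP
like SDP would need. This is only a sketch against my reading of the current
kernel rdma_cm API - the PD, CQ, buffer and lkey arguments are placeholders,
error handling is stripped, and I haven't compiled it:

#include <rdma/rdma_cm.h>
#include <rdma/ib_verbs.h>

/* Sketch only: post receive WQEs while the QP is still in INIT
 * (i.e. right after rdma_create_qp()), before asking the CMA to
 * connect and drive the INIT->RTR->RTS transitions itself. */
static int prepost_and_connect(struct rdma_cm_id *id, struct ib_pd *pd,
			       struct ib_cq *cq, u64 buf_dma, u32 lkey,
			       struct rdma_conn_param *param)
{
	struct ib_qp_init_attr init_attr = {
		.send_cq = cq,
		.recv_cq = cq,
		.cap	 = { .max_send_wr = 16, .max_recv_wr = 16,
			     .max_send_sge = 1, .max_recv_sge = 1 },
		.sq_sig_type = IB_SIGNAL_REQ_WR,
		.qp_type     = IB_QPT_RC,
	};
	struct ib_sge sge = { .addr = buf_dma, .length = 4096, .lkey = lkey };
	struct ib_recv_wr wr = { .sg_list = &sge, .num_sge = 1 };
	struct ib_recv_wr *bad_wr;
	int ret;

	ret = rdma_create_qp(id, pd, &init_attr);	/* QP is left in INIT */
	if (ret)
		return ret;

	/* This is the step that needs to be possible with CMA-managed QPs. */
	ret = ib_post_recv(id->qp, &wr, &bad_wr);
	if (ret)
		return ret;

	return rdma_connect(id, param);		/* CMA takes the QP to RTR/RTS */
}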
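
And for the static rate question, this is the kind of thing I'd expect to
see when the address vector for the RTR transition is filled in from the SA
path record. Again just an illustration - the field names are from my
reading of the 2.6 headers, and everything except the static_rate line is
approximate:

#include <linux/string.h>
#include <rdma/ib_verbs.h>
#include <rdma/ib_sa.h>

/* Sketch only: carry the SA path record's rate into the QP's
 * address vector instead of dropping it. */
static void path_rec_to_ah_attr(struct ib_sa_path_rec *path,
				struct ib_ah_attr *ah_attr, u8 port_num)
{
	memset(ah_attr, 0, sizeof *ah_attr);
	ah_attr->dlid	       = be16_to_cpu(path->dlid);
	ah_attr->sl	       = path->sl;
	ah_attr->src_path_bits = be16_to_cpu(path->slid) & 0x7f;
	ah_attr->static_rate   = path->rate;	/* the value CMA seems to ignore */
	ah_attr->port_num      = port_num;
}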
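
Finally, to spell out the socket-side behaviour I mean for backlog:
listen()'s backlog only bounds fully established connections waiting in the
accept queue, while embryonic connections are bounded separately by the
tcp_max_syn_backlog sysctl. A CMA listener that refuses REQs as soon as
"backlog" requests are pending would behave differently. A trivial
(userspace, untested) example:

#include <arpa/inet.h>
#include <netinet/in.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

static int tcp_listener(unsigned short port, int backlog)
{
	struct sockaddr_in addr;
	int fd = socket(AF_INET, SOCK_STREAM, 0);

	if (fd < 0)
		return -1;

	memset(&addr, 0, sizeof addr);
	addr.sin_family      = AF_INET;
	addr.sin_addr.s_addr = htonl(INADDR_ANY);
	addr.sin_port        = htons(port);

	if (bind(fd, (struct sockaddr *) &addr, sizeof addr) < 0 ||
	    listen(fd, backlog) < 0) {	/* backlog caps *completed* connections */
		close(fd);
		return -1;
	}
	/* Incomplete handshakes are not counted against 'backlog' here;
	 * they are limited by the tcp_max_syn_backlog sysctl instead. */
	return fd;
}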
--
Michael S. Tsirkin
Staff Engineer, Mellanox Technologies