[openib-general] Re: ibv_get_async_event
Roland Dreier
rolandd at cisco.com
Tue Sep 6 14:23:19 PDT 2005
I thought about this some more and I came to the conclusion that Sean
is right. We should come up with something race-free, even if an app
is perverse enough to use multiple threads to read CQ events.
I think the only way to do that is for the app to acknowledge
completion events, since a completion event could be read by a thread
that loses the CPU before returning to the app and then be delayed
arbitrarily long before the app actually sees the event. However, it
is possible to amortize the locking cost of acknowledging events by
allowing the app to acknowledge multiple events in a single call.
The API I came up with is the following:
/**
* ibv_ack_cq_events - Acknowledge CQ completion events
* @cq: CQ to acknowledge events for
* @nevents: Number of events to acknowledge.
*
* All completion events which are returned by ibv_get_cq_event() must
* be acknowledged. ibv_destroy_cq() will wait for all completion
* events to be acknowledged, so there should be a one-to-one
* correspondence between acks and successful gets. An application
* may accumulate multiple completion events and acknowledge them in a
* single call by passing the number of events to ack in @nevents.
*/
extern void ibv_ack_cq_events(struct ibv_cq *cq, unsigned int nevents);
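As a usage illustration, here is a minimal sketch of the batched-ack
pattern. It assumes the completion-channel form of ibv_get_cq_event()
from current libibverbs and a single CQ per channel (with multiple
CQs per channel, the batch would have to be tracked per CQ); the
batch threshold of 16 is arbitrary:

#include <infiniband/verbs.h>

static void cq_event_loop(struct ibv_comp_channel *channel)
{
	struct ibv_cq *cq;
	void *cq_context;
	unsigned int unacked = 0;

	while (!ibv_get_cq_event(channel, &cq, &cq_context)) {
		++unacked;

		/* re-arm notification and drain the CQ with
		 * ibv_req_notify_cq()/ibv_poll_cq() as usual ... */

		/* amortize the locking cost: ack a batch of events
		 * in one call instead of one call per event */
		if (unacked >= 16) {
			ibv_ack_cq_events(cq, unacked);
			unacked = 0;
		}
	}

	/* ack whatever is still outstanding before destroying the CQ,
	 * since ibv_destroy_cq() waits for all acks */
	if (unacked)
		ibv_ack_cq_events(cq, unacked);
}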
(I also renamed ibv_put_async_event() to ibv_ack_async_event() for
symmetry.)
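For the async side, the acked flow looks like this (again a sketch,
using the ibv_get_async_event()/ibv_ack_async_event() signatures from
current libibverbs):

static void handle_one_async_event(struct ibv_context *ctx)
{
	struct ibv_async_event event;

	if (ibv_get_async_event(ctx, &event))
		return;

	/* ... dispatch on event.event_type ... */

	/* every event returned by ibv_get_async_event()
	 * must be acknowledged */
	ibv_ack_async_event(&event);
}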
I coded this up and did some unscientific measurements with
ibv_rc_pingpong (CQ events enabled, --size=1). Even with a call to
ibv_ack_cq_events() every time a CQ event is read, the cost is too
small to measure. In other words, the run-to-run variability of my
test drowns out the cost of the call to ibv_ack_cq_events().
Patches to follow...
- R.