[openib-general] [PATCH 2.6.21-rc1 4/5] ehca: replace yield() by wait_for_completion()

Roland Dreier rdreier at cisco.com
Thu Feb 15 09:57:48 PST 2007


Looking at this one more time, I think it actually may be buggy:

 > @@ -147,6 +147,7 @@ struct ib_cq *ehca_create_cq(struct ib_d
 >  	spin_lock_init(&my_cq->spinlock);
 >  	spin_lock_init(&my_cq->cb_lock);
 >  	spin_lock_init(&my_cq->task_lock);
 > +	init_completion(&my_cq->zero_callbacks);

So you initialize the zero_callbacks completion once, at
ehca_create_cq().

But then 

 > @@ -612,11 +613,14 @@ static void run_comp_task(struct ehca_cp
 >  
 >  		spin_lock(&cq->task_lock);
 >  		cq->nr_callbacks--;
 > -		if (cq->nr_callbacks == 0) {
 > +		is_complete = (cq->nr_callbacks == 0);
 > +		if (is_complete) {
 >  			list_del_init(cct->cq_list.next);
 >  			cct->cq_jobs--;
 >  		}
 >  		spin_unlock(&cq->task_lock);
 > +		if (is_complete) /* wake up waiting destroy_cq() */
 > +			complete(&cq->zero_callbacks);
 >  	}

every time nr_callbacks drops to 0, you complete the zero_callbacks
completion.  Completions are sticky: the first time the callback count
hits zero, you complete zero_callbacks, and that completion is
remembered, so a later wait_for_completion() will return immediately
even if nr_callbacks has been incremented again in the meantime.
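
To make that failure concrete: a completion carries a "done" count;
complete() increments it, and wait_for_completion() returns as soon as
the count is nonzero, consuming one increment.  Here is a minimal
sketch of the interleaving (illustrative only, not driver code -- only
the field name is borrowed from the patch):

#include <linux/completion.h>

static void stale_completion_example(void)
{
	struct completion zero_callbacks;	/* name from the patch */

	init_completion(&zero_callbacks);	/* done = 0, once, at create time */

	/* comp task: nr_callbacks drops 1 -> 0 for the first time */
	complete(&zero_callbacks);		/* done = 1, and it stays 1 */

	/* irq path: a new callback is queued, nr_callbacks 0 -> 1 */

	/* destroy_cq(): returns immediately, consuming the stale done
	 * count, even though a callback is still outstanding */
	wait_for_completion(&zero_callbacks);
}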

Also this

 > -	while (my_cq->nr_callbacks) {
 > +	if (my_cq->nr_callbacks) {
 >  		spin_unlock_irqrestore(&ehca_cq_idr_lock, flags);
 > -		yield();
 > +		wait_for_completion(&my_cq->zero_callbacks);
 >  		spin_lock_irqsave(&ehca_cq_idr_lock, flags);
 >  	}

looks rather unsafe.  I don't see any common locking that protects both
this test of nr_callbacks and the places in the ehca irq handling that
increment it, so nothing prevents you from seeing nr_callbacks == 0
here and skipping the if() (or the while() -- the old code has the same
problem, I think) while another path then does nr_callbacks++
underneath you.  In fact, since you do the idr_remove() and
hipz_h_destroy_cq() *after* you make sure no callbacks are running,
this seems like it could happen easily.
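
One possible shape for a fix -- and this is just a sketch, untested --
would be to unpublish the CQ first, so the irq path can no longer look
it up and bump nr_callbacks, and only then drain.  This assumes the
irq handling reaches a CQ only via a lookup in ehca_cq_idr under
ehca_cq_idr_lock, and that token is the idr key (names not taken from
the patch hunks are assumptions about the driver):

	spin_lock_irqsave(&ehca_cq_idr_lock, flags);
	idr_remove(&ehca_cq_idr, my_cq->token);	/* no new callbacks after this */
	while (my_cq->nr_callbacks) {
		spin_unlock_irqrestore(&ehca_cq_idr_lock, flags);
		wait_for_completion(&my_cq->zero_callbacks);
		spin_lock_irqsave(&ehca_cq_idr_lock, flags);
	}
	spin_unlock_irqrestore(&ehca_cq_idr_lock, flags);

Note the while() is kept rather than the patch's if(): once
nr_callbacks can only go down, re-checking the condition after every
wakeup also makes any stale completion harmless.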

So I'm holding off on applying this for now.  Please think it over and
either tell me the current patch is OK, or fix it up.  There's not much
urgency, though: a change like this is something I would be comfortable
merging between 2.6.21-rc1 and -rc2.

 - R.



