[openib-general] [PATCH] mthca: initialize send and receive queue locks separately

Ingo Molnar mingo at elte.hu
Tue Jul 4 01:56:53 PDT 2006


* Michael S. Tsirkin <mst at mellanox.co.il> wrote:

> > Initializing the locks separately in mthca_alloc_qp_common() stops the warning
> > and will let lockdep enforce proper ordering on paths that acquire both locks.
> > 
> > Signed-off-by: Zach Brown <zach.brown at oracle.com>
> 
> This moves code out of a common function, and so results in code
> duplication and has a memory cost.

the patch below does the same via the lockdep_set_class() method, which 
has no cost on non-lockdep kernels.
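
for reference, here is a minimal sketch of the lockdep_set_class()
pattern (the struct and function names below are made up for
illustration; they are not the driver's code):

	#include <linux/spinlock.h>
	#include <linux/lockdep.h>

	/* class keys must have static storage duration: */
	static struct lock_class_key demo_send_key, demo_recv_key;

	struct demo_wq {
		spinlock_t lock;
	};

	static void demo_wq_init(struct demo_wq *wq,
				 struct lock_class_key *key)
	{
		spin_lock_init(&wq->lock);
		/*
		 * spin_lock_init() keys the lock's class off the
		 * initialization call site, so every lock set up here
		 * would share one class. lockdep_set_class() moves the
		 * lock into the class named by 'key', letting lockdep
		 * track send and receive locks as distinct classes.
		 */
		lockdep_set_class(&wq->lock, key);
	}

the keys are only looked at by lockdep, so this adds nothing on
non-lockdep kernels.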

	Ingo

---------------->
Subject: lockdep: annotate drivers/infiniband/hw/mthca/mthca_qp.c
From: Ingo Molnar <mingo at elte.hu>

annotate mthca queue locks: split them into separate send and receive
lock classes.

(both can be held at once, but there is an ordering between them, which
lockdep enforces)

Has no effect on non-lockdep kernels.
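
the nesting in question looks roughly like the sketch below
(illustrative only: it assumes, as the discussion implies, a path that
takes the send lock before the receive lock; demo_touch_both() is
hypothetical, not a function in mthca_qp.c):

	static void demo_touch_both(struct mthca_qp *qp)
	{
		spin_lock_irq(&qp->sq.lock);	/* send lock first ... */
		spin_lock(&qp->rq.lock);	/* ... then receive lock */

		/* ... operate on both work queues ... */

		spin_unlock(&qp->rq.lock);
		spin_unlock_irq(&qp->sq.lock);
	}

with a single shared lock class, lockdep would report this nesting as
possible recursive locking; with two classes it instead records the
sq -> rq ordering and warns only if some path inverts it.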

Signed-off-by: Ingo Molnar <mingo at elte.hu>
---
 drivers/infiniband/hw/mthca/mthca_qp.c |   16 +++++++++++-----
 1 file changed, 11 insertions(+), 5 deletions(-)

Index: linux/drivers/infiniband/hw/mthca/mthca_qp.c
===================================================================
--- linux.orig/drivers/infiniband/hw/mthca/mthca_qp.c
+++ linux/drivers/infiniband/hw/mthca/mthca_qp.c
@@ -222,9 +222,15 @@ static void *get_send_wqe(struct mthca_q
 			 (PAGE_SIZE - 1));
 }
 
-static void mthca_wq_init(struct mthca_wq *wq)
+/*
+ * Two separate lock classes for the send and receive queue locks:
+ */
+static struct lock_class_key mthca_wq_send_lock_class, mthca_wq_recv_lock_class;
+
+static void mthca_wq_init(struct mthca_wq *wq, struct lock_class_key *key)
 {
 	spin_lock_init(&wq->lock);
+	lockdep_set_class(&wq->lock, key);
 	wq->next_ind  = 0;
 	wq->last_comp = wq->max - 1;
 	wq->head      = 0;
@@ -845,10 +851,10 @@ int mthca_modify_qp(struct ib_qp *ibqp, 
 			mthca_cq_clean(dev, to_mcq(qp->ibqp.recv_cq), qp->qpn,
 				       qp->ibqp.srq ? to_msrq(qp->ibqp.srq) : NULL);
 
-		mthca_wq_init(&qp->sq);
+		mthca_wq_init(&qp->sq, &mthca_wq_send_lock_class);
 		qp->sq.last = get_send_wqe(qp, qp->sq.max - 1);
 
-		mthca_wq_init(&qp->rq);
+		mthca_wq_init(&qp->rq, &mthca_wq_recv_lock_class);
 		qp->rq.last = get_recv_wqe(qp, qp->rq.max - 1);
 
 		if (mthca_is_memfree(dev)) {
@@ -1112,8 +1118,8 @@ static int mthca_alloc_qp_common(struct 
 	qp->atomic_rd_en = 0;
 	qp->resp_depth   = 0;
 	qp->sq_policy    = send_policy;
-	mthca_wq_init(&qp->sq);
-	mthca_wq_init(&qp->rq);
+	mthca_wq_init(&qp->sq, &mthca_wq_send_lock_class);
+	mthca_wq_init(&qp->rq, &mthca_wq_recv_lock_class);
 
 	ret = mthca_map_memfree(dev, qp);
 	if (ret)
