
[for-next] RDMA/rxe: Convert spin_lock_irqsave to spin_lock

Message ID: 20221006170454.89903-1-rpearsonhpe@gmail.com (mailing list archive)
State: Rejected
Delegated to: Jason Gunthorpe
Series: [for-next] RDMA/rxe: Convert spin_lock_irqsave to spin_lock

Commit Message

Bob Pearson Oct. 6, 2022, 5:04 p.m. UTC
Currently the rxe driver uses a spin_lock_irqsave call to protect
the producer end of a completion queue from multiple QPs which may
be sharing the cq. Since all accesses to the cq are from tasklets
this is overkill and a simple spin_lock will be sufficient.

Signed-off-by: Bob Pearson <rpearsonhpe@gmail.com>
---
 drivers/infiniband/sw/rxe/rxe_cq.c | 7 +++----
 1 file changed, 3 insertions(+), 4 deletions(-)


base-commit: cbdae01d8b517b81ed271981395fee8ebd08ba7d

Comments

Jason Gunthorpe Oct. 28, 2022, 5:32 p.m. UTC | #1
On Thu, Oct 06, 2022 at 12:04:55PM -0500, Bob Pearson wrote:
> Currently the rxe driver uses a spin_lock_irqsave call to protect
> the producer end of a completion queue from multiple QPs which may
> be sharing the cq. Since all accesses to the cq are from tasklets
> this is overkill and a simple spin_lock will be sufficient.

A tasklet is a softirq, which means it needs to use the BH suffix for
all acquires that are in a process context if the spinlock is
obtained from a tasklet.

There is a nice table in Documentation/kernel-hacking/locking.rst that
explains this.

Jason
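[Editor's note] The locking table Jason cites boils down to: if a lock is ever taken from softirq (tasklet) context, every process-context acquire must use the `_bh` variant, so the tasklet cannot preempt the lock holder on the same CPU and deadlock. A minimal sketch of the pattern (hypothetical names, not rxe code):

```c
#include <linux/spinlock.h>
#include <linux/interrupt.h>

static DEFINE_SPINLOCK(example_lock);
static unsigned int example_count;

/* Process-context caller (e.g. a verbs entry point): must disable
 * bottom halves so the tasklet below cannot run on this CPU and
 * spin forever on the lock we hold. */
static void example_process_side(void)
{
	spin_lock_bh(&example_lock);
	example_count++;
	spin_unlock_bh(&example_lock);
}

/* Tasklet (softirq) side: a plain spin_lock is sufficient here,
 * since softirqs do not preempt each other on the same CPU. */
static void example_tasklet_fn(struct tasklet_struct *t)
{
	spin_lock(&example_lock);
	example_count++;
	spin_unlock(&example_lock);
}
```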
Pearson, Robert B Oct. 28, 2022, 5:41 p.m. UTC | #2
Yes. Every time it turns out that you really need irqsave locks for rxe, it is because some maniac is calling into the verbs API at interrupt level. I have run into this a couple of times.

Bob

Jason Gunthorpe Oct. 28, 2022, 5:43 p.m. UTC | #3
On Fri, Oct 28, 2022 at 05:41:29PM +0000, Pearson, Robert B wrote:

> Yes. Every time it turns out that you really need irqsave locks for
> rxe it is because some maniac is calling into the verbs API at
> interrupt level. I have run into this a couple of times.

I'm pretty sure post_send is not legal to call from an irq or
softirq..

Jason
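[Editor's note] If callers like the ones Bob describes exist, one hypothetical way to flush them out is a context assertion at the entry point (a sketch, not actual rxe code):

```c
#include <linux/preempt.h>
#include <linux/bug.h>

/* Hypothetical: warn once if a verbs entry point is entered from
 * hard-irq or softirq context, where (per the review) calling
 * post_send is not legal. */
static int example_post_send_entry(void)
{
	WARN_ON_ONCE(in_hardirq() || in_serving_softirq());
	return 0;
}
```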

Patch

diff --git a/drivers/infiniband/sw/rxe/rxe_cq.c b/drivers/infiniband/sw/rxe/rxe_cq.c
index 76534bc66cb6..408ffe2e6f14 100644
--- a/drivers/infiniband/sw/rxe/rxe_cq.c
+++ b/drivers/infiniband/sw/rxe/rxe_cq.c
@@ -104,13 +104,12 @@ int rxe_cq_post(struct rxe_cq *cq, struct rxe_cqe *cqe, int solicited)
 	struct ib_event ev;
 	int full;
 	void *addr;
-	unsigned long flags;
 
-	spin_lock_irqsave(&cq->cq_lock, flags);
+	spin_lock(&cq->cq_lock);
 
 	full = queue_full(cq->queue, QUEUE_TYPE_TO_CLIENT);
 	if (unlikely(full)) {
-		spin_unlock_irqrestore(&cq->cq_lock, flags);
+		spin_unlock(&cq->cq_lock);
 		if (cq->ibcq.event_handler) {
 			ev.device = cq->ibcq.device;
 			ev.element.cq = &cq->ibcq;
@@ -126,7 +125,7 @@ int rxe_cq_post(struct rxe_cq *cq, struct rxe_cqe *cqe, int solicited)
 
 	queue_advance_producer(cq->queue, QUEUE_TYPE_TO_CLIENT);
 
-	spin_unlock_irqrestore(&cq->cq_lock, flags);
+	spin_unlock(&cq->cq_lock);
 
 	if ((cq->notify == IB_CQ_NEXT_COMP) ||
 	    (cq->notify == IB_CQ_SOLICITED && solicited)) {
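[Editor's note] Given the review outcome (State: Rejected), the direction implied by Jason's comment would keep a plain spin_lock() in rxe_cq_post() only if every process-context acquire of cq_lock elsewhere in the driver switched to the _bh variant. A hypothetical sketch of such a process-context caller (illustrative only, not from the patch):

```c
/* Hypothetical process-context user of the same lock (e.g. a resize
 * or poll path).  Because rxe_cq_post() takes cq_lock from a tasklet,
 * the process-context side would have to disable bottom halves: */
static void example_cq_process_side(struct rxe_cq *cq)
{
	spin_lock_bh(&cq->cq_lock);
	/* ... manipulate cq->queue ... */
	spin_unlock_bh(&cq->cq_lock);
}
```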