diff mbox series

ublk: pass ubq, req, and io to ublk_commit_completion()

Message ID 20250416171934.3632673-1-csander@purestorage.com (mailing list archive)
State New
Series ublk: pass ubq, req, and io to ublk_commit_completion()

Commit Message

Caleb Sander Mateos April 16, 2025, 5:19 p.m. UTC
__ublk_ch_uring_cmd() already computes struct ublk_queue *ubq,
struct request *req, and struct ublk_io *io. Pass them to
ublk_commit_completion() to avoid repeating the lookups.

Signed-off-by: Caleb Sander Mateos <csander@purestorage.com>
---
 drivers/block/ublk_drv.c | 16 ++++------------
 1 file changed, 4 insertions(+), 12 deletions(-)

Comments

Uday Shankar April 16, 2025, 6:08 p.m. UTC | #1
On Wed, Apr 16, 2025 at 11:19:33AM -0600, Caleb Sander Mateos wrote:
> __ublk_ch_uring_cmd() already computes struct ublk_queue *ubq,
> struct request *req, and struct ublk_io *io. Pass them to
> ublk_commit_completion() to avoid repeating the lookups.

I think this is rolled into https://lore.kernel.org/linux-block/20250415-ublk_task_per_io-v4-2-54210b91a46f@purestorage.com/
Caleb Sander Mateos April 16, 2025, 6:50 p.m. UTC | #2
On Wed, Apr 16, 2025 at 11:08 AM Uday Shankar <ushankar@purestorage.com> wrote:
>
> On Wed, Apr 16, 2025 at 11:19:33AM -0600, Caleb Sander Mateos wrote:
> > __ublk_ch_uring_cmd() already computes struct ublk_queue *ubq,
> > struct request *req, and struct ublk_io *io. Pass them to
> > ublk_commit_completion() to avoid repeating the lookups.
>
> I think this is rolled into https://lore.kernel.org/linux-block/20250415-ublk_task_per_io-v4-2-54210b91a46f@purestorage.com/

Yup, it sure is. Thanks for already doing this.

Best,
Caleb

Patch

diff --git a/drivers/block/ublk_drv.c b/drivers/block/ublk_drv.c
index bc86231f5e27..10bfa13aa140 100644
--- a/drivers/block/ublk_drv.c
+++ b/drivers/block/ublk_drv.c
@@ -1525,27 +1525,19 @@ static int ublk_ch_mmap(struct file *filp, struct vm_area_struct *vma)
 
 	pfn = virt_to_phys(ublk_queue_cmd_buf(ub, q_id)) >> PAGE_SHIFT;
 	return remap_pfn_range(vma, vma->vm_start, pfn, sz, vma->vm_page_prot);
 }
 
-static void ublk_commit_completion(struct ublk_device *ub,
+static void ublk_commit_completion(struct ublk_queue *ubq,
+		struct request *req, struct ublk_io *io,
 		const struct ublksrv_io_cmd *ub_cmd)
 {
-	u32 qid = ub_cmd->q_id, tag = ub_cmd->tag;
-	struct ublk_queue *ubq = ublk_get_queue(ub, qid);
-	struct ublk_io *io = &ubq->ios[tag];
-	struct request *req;
-
 	/* now this cmd slot is owned by nbd driver */
 	io->flags &= ~UBLK_IO_FLAG_OWNED_BY_SRV;
 	io->res = ub_cmd->result;
 
-	/* find the io request and complete */
-	req = blk_mq_tag_to_rq(ub->tag_set.tags[qid], tag);
-	if (WARN_ON_ONCE(unlikely(!req)))
-		return;
-
+	/* complete the io request */
 	if (req_op(req) == REQ_OP_ZONE_APPEND)
 		req->__sector = ub_cmd->zone_append_lba;
 
 	if (likely(!blk_should_fake_timeout(req->q)))
 		ublk_put_req_ref(ubq, req);
@@ -2032,11 +2024,11 @@ static int __ublk_ch_uring_cmd(struct io_uring_cmd *cmd,
 			ret = -EINVAL;
 			goto out;
 		}
 
 		ublk_fill_io_cmd(io, cmd, ub_cmd->addr);
-		ublk_commit_completion(ub, ub_cmd);
+		ublk_commit_completion(ubq, req, io, ub_cmd);
 		break;
 	case UBLK_IO_NEED_GET_DATA:
 		if (!(io->flags & UBLK_IO_FLAG_OWNED_BY_SRV))
 			goto out;
 		ublk_fill_io_cmd(io, cmd, ub_cmd->addr);