From patchwork Sat Sep 3 16:52:31 2022
From: Jens Axboe <axboe@kernel.dk>
To: io-uring@vger.kernel.org
Cc: joshi.k@samsung.com, Jens Axboe <axboe@kernel.dk>
Subject: [PATCH 1/4] io_uring: cleanly separate request types for iopoll
Date: Sat, 3 Sep 2022 10:52:31 -0600
Message-Id: <20220903165234.210547-2-axboe@kernel.dk>
In-Reply-To: <20220903165234.210547-1-axboe@kernel.dk>
References: <20220903165234.210547-1-axboe@kernel.dk>
After the addition of iopoll support for passthrough, there's a bit of
a mixup here. Clean it up and get rid of the casting for the
passthrough command type.

Signed-off-by: Jens Axboe <axboe@kernel.dk>
Reviewed-by: Kanchan Joshi <joshi.k@samsung.com>
---
 io_uring/rw.c | 15 +++++++++------
 1 file changed, 9 insertions(+), 6 deletions(-)

diff --git a/io_uring/rw.c b/io_uring/rw.c
index 9698a789b3d5..966c923bc0be 100644
--- a/io_uring/rw.c
+++ b/io_uring/rw.c
@@ -994,7 +994,7 @@ int io_do_iopoll(struct io_ring_ctx *ctx, bool force_nonspin)
 	wq_list_for_each(pos, start, &ctx->iopoll_list) {
 		struct io_kiocb *req = container_of(pos, struct io_kiocb,
 						    comp_list);
-		struct io_rw *rw = io_kiocb_to_cmd(req, struct io_rw);
+		struct file *file = req->file;
 		int ret;
 
 		/*
@@ -1006,12 +1006,15 @@ int io_do_iopoll(struct io_ring_ctx *ctx, bool force_nonspin)
 			break;
 
 		if (req->opcode == IORING_OP_URING_CMD) {
-			struct io_uring_cmd *ioucmd = (struct io_uring_cmd *)rw;
+			struct io_uring_cmd *ioucmd;
 
-			ret = req->file->f_op->uring_cmd_iopoll(ioucmd);
-		} else
-			ret = rw->kiocb.ki_filp->f_op->iopoll(&rw->kiocb, &iob,
-							poll_flags);
+			ioucmd = io_kiocb_to_cmd(req, struct io_uring_cmd);
+			ret = file->f_op->uring_cmd_iopoll(ioucmd);
+		} else {
+			struct io_rw *rw = io_kiocb_to_cmd(req, struct io_rw);
+
+			ret = file->f_op->iopoll(&rw->kiocb, &iob, poll_flags);
+		}
 		if (unlikely(ret < 0))
 			return ret;
 		else if (ret)
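A note on the accessor this cleanup leans on: io_kiocb_to_cmd() returns
the per-opcode command data embedded in the request, typed as whatever
the caller asks for. The old '(struct io_uring_cmd *)rw' cast only
worked because every opcode-private struct occupies the same storage in
the request. The sketch below is a simplified stand-in for the real
io_uring definitions (invented names, illustrative field sizes), just to
show that layout assumption:

/*
 * Simplified stand-in for the io_uring internals, not the kernel's
 * actual definitions: every opcode-private struct (io_rw,
 * io_uring_cmd, ...) lives in the same 'cmd' storage inside the
 * request, and the accessor returns that storage with the caller's
 * chosen type.
 */
struct file;				/* opaque here */

struct io_cmd_data_sketch {
	struct file	*file;		/* layout rule: file pointer first */
	unsigned char	data[56];	/* opcode-private state */
};

struct io_kiocb_sketch {
	unsigned char			opcode;
	struct file			*file;
	struct io_cmd_data_sketch	cmd;
};

/*
 * The old cast relied on both command types aliasing this same
 * storage; a typed accessor per branch makes that intent explicit.
 */
#define io_kiocb_to_cmd_sketch(req, cmd_type)	((cmd_type *)&(req)->cmd)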
From patchwork Sat Sep 3 16:52:32 2022
From: Jens Axboe <axboe@kernel.dk>
To: io-uring@vger.kernel.org
Cc: joshi.k@samsung.com, Jens Axboe <axboe@kernel.dk>
Subject: [PATCH 2/4] io_uring: add local task_work run helper that is entered locked
Date: Sat, 3 Sep 2022 10:52:32 -0600
Message-Id: <20220903165234.210547-3-axboe@kernel.dk>
In-Reply-To: <20220903165234.210547-1-axboe@kernel.dk>
References: <20220903165234.210547-1-axboe@kernel.dk>

We have a few spots that drop the mutex just to run local task_work,
and then immediately grab it again. Add a helper that is passed whether
we already hold the lock.

Signed-off-by: Jens Axboe <axboe@kernel.dk>
---
 io_uring/io_uring.c | 23 ++++++++++++++++-------
 io_uring/io_uring.h |  1 +
 2 files changed, 17 insertions(+), 7 deletions(-)

diff --git a/io_uring/io_uring.c b/io_uring/io_uring.c
index 4edc31d0a3e0..f841f0e126bc 100644
--- a/io_uring/io_uring.c
+++ b/io_uring/io_uring.c
@@ -1161,9 +1161,8 @@ static void __cold io_move_task_work_from_local(struct io_ring_ctx *ctx)
 	}
 }
 
-int io_run_local_work(struct io_ring_ctx *ctx)
+int __io_run_local_work(struct io_ring_ctx *ctx, bool locked)
 {
-	bool locked;
 	struct llist_node *node;
 	struct llist_node fake;
 	struct llist_node *current_final = NULL;
@@ -1178,8 +1177,6 @@ int io_run_local_work(struct io_ring_ctx *ctx)
 		return -EEXIST;
 	}
 
-	locked = mutex_trylock(&ctx->uring_lock);
-
 	node = io_llist_xchg(&ctx->work_llist, &fake);
 	ret = 0;
 again:
@@ -1204,12 +1201,24 @@ int io_run_local_work(struct io_ring_ctx *ctx)
 		goto again;
 	}
 
-	if (locked) {
+	if (locked)
 		io_submit_flush_completions(ctx);
-		mutex_unlock(&ctx->uring_lock);
-	}
 
 	trace_io_uring_local_work_run(ctx, ret, loops);
 	return ret;
+
+}
+
+int io_run_local_work(struct io_ring_ctx *ctx)
+{
+	bool locked;
+	int ret;
+
+	locked = mutex_trylock(&ctx->uring_lock);
+	ret = __io_run_local_work(ctx, locked);
+	if (locked)
+		mutex_unlock(&ctx->uring_lock);
+
+	return ret;
 }
 
 static void io_req_tw_post(struct io_kiocb *req, bool *locked)
diff --git a/io_uring/io_uring.h b/io_uring/io_uring.h
index f417d75d7bc1..0f90d1dfa42b 100644
--- a/io_uring/io_uring.h
+++ b/io_uring/io_uring.h
@@ -27,6 +27,7 @@ enum {
 struct io_uring_cqe *__io_get_cqe(struct io_ring_ctx *ctx);
 bool io_req_cqe_overflow(struct io_kiocb *req);
 int io_run_task_work_sig(struct io_ring_ctx *ctx);
+int __io_run_local_work(struct io_ring_ctx *ctx, bool locked);
 int io_run_local_work(struct io_ring_ctx *ctx);
 void io_req_complete_failed(struct io_kiocb *req, s32 res);
 void __io_req_complete(struct io_kiocb *req, unsigned issue_flags);
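The shape of this change is the common locked/unlocked helper split: a
double-underscore core that is told the lock state, plus a thin wrapper
that settles it with a trylock. Here is a minimal userspace sketch of
the pattern, with invented names and a pthread mutex standing in for
uring_lock; it illustrates the idiom, not the io_uring code itself:

#include <pthread.h>
#include <stdbool.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static int pending;	/* stand-in for the queued local work */

/* Core helper, like __io_run_local_work(): never touches the mutex
 * itself, it is simply told whether the caller holds it. */
static int __run_work(bool locked)
{
	int ran = pending;

	pending = 0;
	if (locked) {
		/* work that is only legal under the lock, e.g. flushing
		 * batched completions in the io_uring case */
	}
	return ran;
}

/* Wrapper for callers with unknown lock state, like io_run_local_work():
 * opportunistically take the mutex and release only what we took. A
 * caller that already holds the mutex calls __run_work(true) directly,
 * skipping the unlock/relock shuffle this patch removes. */
static int run_work(void)
{
	bool locked = (pthread_mutex_trylock(&lock) == 0);
	int ret = __run_work(locked);

	if (locked)
		pthread_mutex_unlock(&lock);
	return ret;
}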
From patchwork Sat Sep 3 16:52:33 2022
From: Jens Axboe <axboe@kernel.dk>
To: io-uring@vger.kernel.org
Cc: joshi.k@samsung.com, Jens Axboe <axboe@kernel.dk>
Subject: [PATCH 3/4] io_uring: ensure iopoll runs local task work as well
Date: Sat, 3 Sep 2022 10:52:33 -0600
Message-Id: <20220903165234.210547-4-axboe@kernel.dk>
In-Reply-To: <20220903165234.210547-1-axboe@kernel.dk>
References: <20220903165234.210547-1-axboe@kernel.dk>

Combine the two checks we have for task_work running and whether or not
we need to shuffle the mutex
into one, so we unify how task_work is run in the iopoll loop. This
helps ensure that local task_work is run when needed, and also
optimizes that path to avoid a mutex shuffle if it's not needed.

Signed-off-by: Jens Axboe <axboe@kernel.dk>
---
 io_uring/io_uring.c | 39 ++++++++++++++++++++-------------------
 io_uring/io_uring.h |  6 ++++++
 2 files changed, 26 insertions(+), 19 deletions(-)

diff --git a/io_uring/io_uring.c b/io_uring/io_uring.c
index f841f0e126bc..118db2264189 100644
--- a/io_uring/io_uring.c
+++ b/io_uring/io_uring.c
@@ -1428,25 +1428,26 @@ static int io_iopoll_check(struct io_ring_ctx *ctx, long min)
 		 * forever, while the workqueue is stuck trying to acquire the
 		 * very same mutex.
 		 */
-		if (wq_list_empty(&ctx->iopoll_list)) {
-			u32 tail = ctx->cached_cq_tail;
-
-			mutex_unlock(&ctx->uring_lock);
-			ret = io_run_task_work_ctx(ctx);
-			mutex_lock(&ctx->uring_lock);
-			if (ret < 0)
-				break;
-
-			/* some requests don't go through iopoll_list */
-			if (tail != ctx->cached_cq_tail ||
-			    wq_list_empty(&ctx->iopoll_list))
-				break;
-		}
-
-		if (task_work_pending(current)) {
-			mutex_unlock(&ctx->uring_lock);
-			io_run_task_work();
-			mutex_lock(&ctx->uring_lock);
+		if (wq_list_empty(&ctx->iopoll_list) ||
+		    io_task_work_pending(ctx)) {
+			if (!llist_empty(&ctx->work_llist))
+				__io_run_local_work(ctx, true);
+			if (task_work_pending(current) ||
+			    wq_list_empty(&ctx->iopoll_list)) {
+				u32 tail = ctx->cached_cq_tail;
+
+				mutex_unlock(&ctx->uring_lock);
+				ret = io_run_task_work();
+				mutex_lock(&ctx->uring_lock);
+
+				if (ret < 0)
+					break;
+
+				/* some requests don't go through iopoll_list */
+				if (tail != ctx->cached_cq_tail ||
+				    wq_list_empty(&ctx->iopoll_list))
+					break;
+			}
 		}
 		ret = io_do_iopoll(ctx, !min);
 		if (ret < 0)
diff --git a/io_uring/io_uring.h b/io_uring/io_uring.h
index 0f90d1dfa42b..9d89425292b7 100644
--- a/io_uring/io_uring.h
+++ b/io_uring/io_uring.h
@@ -236,6 +236,12 @@ static inline int io_run_task_work(void)
 	return 0;
 }
 
+static inline bool io_task_work_pending(struct io_ring_ctx *ctx)
+{
+	return test_thread_flag(TIF_NOTIFY_SIGNAL) ||
+		!wq_list_empty(&ctx->work_llist);
+}
+
 static inline int io_run_task_work_ctx(struct io_ring_ctx *ctx)
 {
 	int ret = 0;
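Pulled out of the diff, the new iopoll loop logic reads as follows. The
identifiers follow the patch, but this fragment is an annotated
restatement for readability, not a verbatim kernel excerpt:

/*
 * Annotated restatement of the new check in io_iopoll_check().
 */
if (wq_list_empty(&ctx->iopoll_list) || io_task_work_pending(ctx)) {
	/*
	 * Deferred local work runs under uring_lock, which we already
	 * hold here, so it needs no unlock/relock at all.
	 */
	if (!llist_empty(&ctx->work_llist))
		__io_run_local_work(ctx, true);

	/*
	 * Only generic task_work still forces the unlock/run/relock
	 * dance, and only if we otherwise have nothing to poll.
	 */
	if (task_work_pending(current) || wq_list_empty(&ctx->iopoll_list)) {
		mutex_unlock(&ctx->uring_lock);
		io_run_task_work();
		mutex_lock(&ctx->uring_lock);
	}
}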
From patchwork Sat Sep 3 16:52:34 2022
From: Jens Axboe <axboe@kernel.dk>
To: io-uring@vger.kernel.org
Cc: joshi.k@samsung.com, Jens Axboe <axboe@kernel.dk>
Subject: [PATCH 4/4] fs: add batch and poll flags to the uring_cmd_iopoll() handler
Date: Sat, 3 Sep 2022 10:52:34 -0600
Message-Id: <20220903165234.210547-5-axboe@kernel.dk>
In-Reply-To: <20220903165234.210547-1-axboe@kernel.dk>
References: <20220903165234.210547-1-axboe@kernel.dk>

We need the poll_flags to know how to poll for the IO, and we should
have the batch structure in preparation for supporting batched
completions with iopoll.
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Reviewed-by: Kanchan Joshi <joshi.k@samsung.com>
---
 drivers/nvme/host/ioctl.c | 12 ++++++++----
 drivers/nvme/host/nvme.h  |  6 ++++--
 include/linux/fs.h        |  3 ++-
 io_uring/rw.c             |  3 ++-
 4 files changed, 16 insertions(+), 8 deletions(-)

diff --git a/drivers/nvme/host/ioctl.c b/drivers/nvme/host/ioctl.c
index 7756b439a688..548aca8b5b9f 100644
--- a/drivers/nvme/host/ioctl.c
+++ b/drivers/nvme/host/ioctl.c
@@ -623,7 +623,9 @@ int nvme_ns_chr_uring_cmd(struct io_uring_cmd *ioucmd, unsigned int issue_flags)
 	return nvme_ns_uring_cmd(ns, ioucmd, issue_flags);
 }
 
-int nvme_ns_chr_uring_cmd_iopoll(struct io_uring_cmd *ioucmd)
+int nvme_ns_chr_uring_cmd_iopoll(struct io_uring_cmd *ioucmd,
+				 struct io_comp_batch *iob,
+				 unsigned int poll_flags)
 {
 	struct bio *bio;
 	int ret = 0;
@@ -636,7 +638,7 @@ int nvme_ns_chr_uring_cmd_iopoll(struct io_uring_cmd *ioucmd)
 			struct nvme_ns, cdev);
 	q = ns->queue;
 	if (test_bit(QUEUE_FLAG_POLL, &q->queue_flags) && bio && bio->bi_bdev)
-		ret = bio_poll(bio, NULL, 0);
+		ret = bio_poll(bio, iob, poll_flags);
 	rcu_read_unlock();
 	return ret;
 }
@@ -722,7 +724,9 @@ int nvme_ns_head_chr_uring_cmd(struct io_uring_cmd *ioucmd,
 	return ret;
 }
 
-int nvme_ns_head_chr_uring_cmd_iopoll(struct io_uring_cmd *ioucmd)
+int nvme_ns_head_chr_uring_cmd_iopoll(struct io_uring_cmd *ioucmd,
+				      struct io_comp_batch *iob,
+				      unsigned int poll_flags)
 {
 	struct cdev *cdev = file_inode(ioucmd->file)->i_cdev;
 	struct nvme_ns_head *head = container_of(cdev, struct nvme_ns_head, cdev);
@@ -738,7 +742,7 @@ int nvme_ns_head_chr_uring_cmd_iopoll(struct io_uring_cmd *ioucmd)
 		q = ns->queue;
 		if (test_bit(QUEUE_FLAG_POLL, &q->queue_flags) && bio &&
 				bio->bi_bdev)
-			ret = bio_poll(bio, NULL, 0);
+			ret = bio_poll(bio, iob, poll_flags);
 		rcu_read_unlock();
 	}
 	srcu_read_unlock(&head->srcu, srcu_idx);
diff --git a/drivers/nvme/host/nvme.h b/drivers/nvme/host/nvme.h
index fdcbc93dea21..216acbe953b3 100644
--- a/drivers/nvme/host/nvme.h
+++ b/drivers/nvme/host/nvme.h
@@ -821,8 +821,10 @@ long nvme_ns_head_chr_ioctl(struct file *file, unsigned int cmd,
 		unsigned long arg);
 long nvme_dev_ioctl(struct file *file, unsigned int cmd,
 		unsigned long arg);
-int nvme_ns_chr_uring_cmd_iopoll(struct io_uring_cmd *ioucmd);
-int nvme_ns_head_chr_uring_cmd_iopoll(struct io_uring_cmd *ioucmd);
+int nvme_ns_chr_uring_cmd_iopoll(struct io_uring_cmd *ioucmd,
+		struct io_comp_batch *iob, unsigned int poll_flags);
+int nvme_ns_head_chr_uring_cmd_iopoll(struct io_uring_cmd *ioucmd,
+		struct io_comp_batch *iob, unsigned int poll_flags);
 int nvme_ns_chr_uring_cmd(struct io_uring_cmd *ioucmd,
 		unsigned int issue_flags);
 int nvme_ns_head_chr_uring_cmd(struct io_uring_cmd *ioucmd,
diff --git a/include/linux/fs.h b/include/linux/fs.h
index d6badd19784f..01681d061a6a 100644
--- a/include/linux/fs.h
+++ b/include/linux/fs.h
@@ -2132,7 +2132,8 @@ struct file_operations {
 			   loff_t len, unsigned int remap_flags);
 	int (*fadvise)(struct file *, loff_t, loff_t, int);
 	int (*uring_cmd)(struct io_uring_cmd *ioucmd, unsigned int issue_flags);
-	int (*uring_cmd_iopoll)(struct io_uring_cmd *ioucmd);
+	int (*uring_cmd_iopoll)(struct io_uring_cmd *, struct io_comp_batch *,
+				unsigned int poll_flags);
 } __randomize_layout;
 
 struct inode_operations {
diff --git a/io_uring/rw.c b/io_uring/rw.c
index 966c923bc0be..4a061326c664 100644
--- a/io_uring/rw.c
+++ b/io_uring/rw.c
@@ -1009,7 +1009,8 @@ int io_do_iopoll(struct io_ring_ctx *ctx, bool force_nonspin)
 			struct io_uring_cmd *ioucmd;
 
 			ioucmd = io_kiocb_to_cmd(req, struct io_uring_cmd);
-			ret = file->f_op->uring_cmd_iopoll(ioucmd);
+			ret = file->f_op->uring_cmd_iopoll(ioucmd, &iob,
+							   poll_flags);
 		} else {
 			struct io_rw *rw = io_kiocb_to_cmd(req, struct io_rw);
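To make the new contract concrete, here is a hedged sketch of a
character-device ->uring_cmd_iopoll() under the updated prototype,
modeled on the nvme changes above. 'mydev' is an invented example, and
stashing the in-flight bio in ioucmd->cookie mirrors what the nvme
driver of this era does; bio_poll() and the hook's signature are taken
from the patch:

#include <linux/fs.h>
#include <linux/blkdev.h>
#include <linux/io_uring.h>

static int mydev_uring_cmd_iopoll(struct io_uring_cmd *ioucmd,
				  struct io_comp_batch *iob,
				  unsigned int poll_flags)
{
	struct bio *bio = READ_ONCE(ioucmd->cookie);
	int ret = 0;

	/*
	 * Forward the batch and flags instead of the old hardcoded
	 * bio_poll(bio, NULL, 0): 'iob' lets the block layer batch
	 * completions, and 'poll_flags' conveys hints such as
	 * BLK_POLL_NOSLEEP.
	 */
	if (bio && bio->bi_bdev)
		ret = bio_poll(bio, iob, poll_flags);
	return ret;
}

static const struct file_operations mydev_fops = {
	/* ... issue path (.uring_cmd etc.) omitted in this sketch ... */
	.uring_cmd_iopoll = mydev_uring_cmd_iopoll,
};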