From patchwork Wed Jun 28 17:09:51 2023
From: Jens Axboe
To: io-uring@vger.kernel.org
Cc: Jens Axboe, Linus Torvalds
Subject: [PATCH 1/3] io_uring/net: use proper value for msg_inq
Date: Wed, 28 Jun 2023 11:09:51 -0600
Message-Id: <20230628170953.952923-2-axboe@kernel.dk>
In-Reply-To: <20230628170953.952923-1-axboe@kernel.dk>
References: <20230628170953.952923-1-axboe@kernel.dk>
List-ID: X-Mailing-List: io-uring@vger.kernel.org

struct msghdr->msg_inq is a signed type, yet we attempt to store what
is essentially an unsigned bitmask in there. We only really need to
know if the field was stored or not, but let's use the proper type to
avoid any misunderstandings on what is being attempted here.
Link: https://lore.kernel.org/io-uring/CAHk-=wjKb24aSe6fE4zDH-eh8hr-FB9BbukObUVSMGOrsBHCRQ@mail.gmail.com/
Reported-by: Linus Torvalds
Signed-off-by: Jens Axboe
---
 io_uring/net.c | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/io_uring/net.c b/io_uring/net.c
index a8e303796f16..be2d153e39d2 100644
--- a/io_uring/net.c
+++ b/io_uring/net.c
@@ -630,7 +630,7 @@ static inline bool io_recv_finish(struct io_kiocb *req, int *ret,
 		unsigned int cflags;

 		cflags = io_put_kbuf(req, issue_flags);
-		if (msg->msg_inq && msg->msg_inq != -1U)
+		if (msg->msg_inq && msg->msg_inq != -1)
 			cflags |= IORING_CQE_F_SOCK_NONEMPTY;

 		if (!(req->flags & REQ_F_APOLL_MULTISHOT)) {
@@ -645,7 +645,7 @@ static inline bool io_recv_finish(struct io_kiocb *req, int *ret,
 		io_recv_prep_retry(req);
 		/* Known not-empty or unknown state, retry */
 		if (cflags & IORING_CQE_F_SOCK_NONEMPTY ||
-		    msg->msg_inq == -1U)
+		    msg->msg_inq == -1)
 			return false;
 		if (issue_flags & IO_URING_F_MULTISHOT)
 			*ret = IOU_ISSUE_SKIP_COMPLETE;
@@ -804,7 +804,7 @@ int io_recvmsg(struct io_kiocb *req, unsigned int issue_flags)
 		flags |= MSG_DONTWAIT;

 	kmsg->msg.msg_get_inq = 1;
-	kmsg->msg.msg_inq = -1U;
+	kmsg->msg.msg_inq = -1;
 	if (req->flags & REQ_F_APOLL_MULTISHOT) {
 		ret = io_recvmsg_multishot(sock, sr, kmsg, flags,
 					   &mshot_finished);
@@ -902,7 +902,7 @@ int io_recv(struct io_kiocb *req, unsigned int issue_flags)
 	if (unlikely(ret))
 		goto out_free;

-	msg.msg_inq = -1U;
+	msg.msg_inq = -1;
 	msg.msg_flags = 0;

 	flags = sr->msg_flags;

From patchwork Wed Jun 28 17:09:52 2023
From: Jens Axboe
To: io-uring@vger.kernel.org
Cc: Jens Axboe
Subject: [PATCH 2/3] io_uring: remove io_fallback_tw() forward declaration
Date: Wed, 28 Jun 2023 11:09:52 -0600
Message-Id: <20230628170953.952923-3-axboe@kernel.dk>
In-Reply-To: <20230628170953.952923-1-axboe@kernel.dk>
References: <20230628170953.952923-1-axboe@kernel.dk>
List-ID: X-Mailing-List: io-uring@vger.kernel.org

It's used just one function higher up, get rid of the declaration and
just move it up a bit.
Signed-off-by: Jens Axboe
---
 io_uring/io_uring.c | 29 ++++++++++++++---------------
 1 file changed, 14 insertions(+), 15 deletions(-)

diff --git a/io_uring/io_uring.c b/io_uring/io_uring.c
index 1b53a2ab0a27..f84d258ea348 100644
--- a/io_uring/io_uring.c
+++ b/io_uring/io_uring.c
@@ -149,7 +149,6 @@ static bool io_uring_try_cancel_requests(struct io_ring_ctx *ctx,
 static void io_queue_sqe(struct io_kiocb *req);
 static void io_move_task_work_from_local(struct io_ring_ctx *ctx);
 static void __io_submit_flush_completions(struct io_ring_ctx *ctx);
-static __cold void io_fallback_tw(struct io_uring_task *tctx);

 struct kmem_cache *req_cachep;

@@ -1238,6 +1237,20 @@ static inline struct llist_node *io_llist_cmpxchg(struct llist_head *head,
 	return cmpxchg(&head->first, old, new);
 }

+static __cold void io_fallback_tw(struct io_uring_task *tctx)
+{
+	struct llist_node *node = llist_del_all(&tctx->task_list);
+	struct io_kiocb *req;
+
+	while (node) {
+		req = container_of(node, struct io_kiocb, io_task_work.node);
+		node = node->next;
+		if (llist_add(&req->io_task_work.node,
+			      &req->ctx->fallback_llist))
+			schedule_delayed_work(&req->ctx->fallback_work, 1);
+	}
+}
+
 void tctx_task_work(struct callback_head *cb)
 {
 	struct io_tw_state ts = {};
@@ -1279,20 +1292,6 @@ void tctx_task_work(struct callback_head *cb)
 	trace_io_uring_task_work_run(tctx, count, loops);
 }

-static __cold void io_fallback_tw(struct io_uring_task *tctx)
-{
-	struct llist_node *node = llist_del_all(&tctx->task_list);
-	struct io_kiocb *req;
-
-	while (node) {
-		req = container_of(node, struct io_kiocb, io_task_work.node);
-		node = node->next;
-		if (llist_add(&req->io_task_work.node,
-			      &req->ctx->fallback_llist))
-			schedule_delayed_work(&req->ctx->fallback_work, 1);
-	}
-}
-
 static inline void io_req_local_work_add(struct io_kiocb *req, unsigned flags)
 {
 	struct io_ring_ctx *ctx = req->ctx;

From patchwork Wed Jun 28 17:09:53 2023
From: Jens Axboe
To: io-uring@vger.kernel.org
Cc: Jens Axboe
Subject: [PATCH 3/3] io_uring: flush offloaded and delayed task_work on exit
Date: Wed, 28 Jun 2023 11:09:53 -0600
Message-Id: <20230628170953.952923-4-axboe@kernel.dk>
In-Reply-To: <20230628170953.952923-1-axboe@kernel.dk>
References: <20230628170953.952923-1-axboe@kernel.dk>
List-ID: X-Mailing-List: io-uring@vger.kernel.org

io_uring offloads task_work for cancelation purposes when the task is
exiting. This is conceptually fine, but we should be nicer and actually
wait for that work to complete before returning.

Add an argument to io_fallback_tw() telling it to flush the deferred
work when it's all queued up, and have it flush a ctx behind whenever
the ctx changes.
Signed-off-by: Jens Axboe
---
 io_uring/io_uring.c | 22 +++++++++++++++++++---
 1 file changed, 19 insertions(+), 3 deletions(-)

diff --git a/io_uring/io_uring.c b/io_uring/io_uring.c
index f84d258ea348..e8096d502a7c 100644
--- a/io_uring/io_uring.c
+++ b/io_uring/io_uring.c
@@ -1237,18 +1237,32 @@ static inline struct llist_node *io_llist_cmpxchg(struct llist_head *head,
 	return cmpxchg(&head->first, old, new);
 }

-static __cold void io_fallback_tw(struct io_uring_task *tctx)
+static __cold void io_fallback_tw(struct io_uring_task *tctx, bool sync)
 {
 	struct llist_node *node = llist_del_all(&tctx->task_list);
+	struct io_ring_ctx *last_ctx = NULL;
 	struct io_kiocb *req;

 	while (node) {
 		req = container_of(node, struct io_kiocb, io_task_work.node);
 		node = node->next;
+		if (sync && last_ctx != req->ctx) {
+			if (last_ctx) {
+				flush_delayed_work(&last_ctx->fallback_work);
+				percpu_ref_put(&last_ctx->refs);
+			}
+			last_ctx = req->ctx;
+			percpu_ref_get(&last_ctx->refs);
+		}
 		if (llist_add(&req->io_task_work.node,
 			      &req->ctx->fallback_llist))
 			schedule_delayed_work(&req->ctx->fallback_work, 1);
 	}
+
+	if (last_ctx) {
+		flush_delayed_work(&last_ctx->fallback_work);
+		percpu_ref_put(&last_ctx->refs);
+	}
 }

 void tctx_task_work(struct callback_head *cb)
@@ -1263,7 +1277,7 @@ void tctx_task_work(struct callback_head *cb)
 	unsigned int count = 0;

 	if (unlikely(current->flags & PF_EXITING)) {
-		io_fallback_tw(tctx);
+		io_fallback_tw(tctx, true);
 		return;
 	}

@@ -1358,7 +1372,7 @@ static void io_req_normal_work_add(struct io_kiocb *req)
 	if (likely(!task_work_add(req->task, &tctx->task_work,
 				  ctx->notify_method)))
 		return;

-	io_fallback_tw(tctx);
+	io_fallback_tw(tctx, false);
 }

 void __io_req_task_work_add(struct io_kiocb *req, unsigned flags)
@@ -3108,6 +3122,8 @@ static __cold void io_ring_ctx_wait_and_kill(struct io_ring_ctx *ctx)
 	if (ctx->rings)
 		io_kill_timeouts(ctx, NULL, true);

+	flush_delayed_work(&ctx->fallback_work);
+
 	INIT_WORK(&ctx->exit_work, io_ring_exit_work);
 	/*
	 * Use system_unbound_wq to avoid spawning tons of event kworkers