From patchwork Sun Nov 20 17:28:05 2022
X-Patchwork-Submitter: Jens Axboe
X-Patchwork-Id: 13050113
From: Jens Axboe
To: io-uring@vger.kernel.org
Cc: Jens Axboe, stable@vger.kernel.org
Subject: [PATCH 2/4] eventfd: provide a eventfd_signal_mask() helper
Date: Sun, 20 Nov 2022 10:28:05 -0700
Message-Id: <20221120172807.358868-3-axboe@kernel.dk>
X-Mailer: git-send-email 2.35.1
In-Reply-To: <20221120172807.358868-1-axboe@kernel.dk>
References: <20221120172807.358868-1-axboe@kernel.dk>
X-Mailing-List: io-uring@vger.kernel.org

This is identical to eventfd_signal(), but it allows the caller to pass
in a mask to be used for the poll wakeup key. The use case is avoiding
repeated multishot triggers if we have a dependency between eventfd and
io_uring.
If we set up an eventfd context and register that as the io_uring eventfd,
and at the same time queue a multishot poll request for the eventfd
context, then any CQE posted will repeatedly trigger the multishot request
until it terminates when the CQ ring overflows.

In preparation for io_uring detecting this circular dependency, add the
mentioned helper so that io_uring can pass in EPOLL_URING as part of the
poll wakeup key.

Cc: stable@vger.kernel.org # 6.0
Signed-off-by: Jens Axboe
---
 fs/eventfd.c            | 37 +++++++++++++++++++++----------------
 include/linux/eventfd.h |  1 +
 2 files changed, 22 insertions(+), 16 deletions(-)

diff --git a/fs/eventfd.c b/fs/eventfd.c
index c0ffee99ad23..249ca6c0b784 100644
--- a/fs/eventfd.c
+++ b/fs/eventfd.c
@@ -43,21 +43,7 @@ struct eventfd_ctx {
 	int id;
 };
 
-/**
- * eventfd_signal - Adds @n to the eventfd counter.
- * @ctx: [in] Pointer to the eventfd context.
- * @n: [in] Value of the counter to be added to the eventfd internal counter.
- *          The value cannot be negative.
- *
- * This function is supposed to be called by the kernel in paths that do not
- * allow sleeping. In this function we allow the counter to reach the ULLONG_MAX
- * value, and we signal this as overflow condition by returning a EPOLLERR
- * to poll(2).
- *
- * Returns the amount by which the counter was incremented. This will be less
- * than @n if the counter has overflowed.
- */
-__u64 eventfd_signal(struct eventfd_ctx *ctx, __u64 n)
+__u64 eventfd_signal_mask(struct eventfd_ctx *ctx, __u64 n, unsigned mask)
 {
 	unsigned long flags;
 
@@ -78,12 +64,31 @@ __u64 eventfd_signal(struct eventfd_ctx *ctx, __u64 n)
 		n = ULLONG_MAX - ctx->count;
 	ctx->count += n;
 	if (waitqueue_active(&ctx->wqh))
-		wake_up_locked_poll(&ctx->wqh, EPOLLIN);
+		wake_up_locked_poll(&ctx->wqh, EPOLLIN | mask);
 	current->in_eventfd = 0;
 	spin_unlock_irqrestore(&ctx->wqh.lock, flags);
 
 	return n;
 }
+
+/**
+ * eventfd_signal - Adds @n to the eventfd counter.
+ * @ctx: [in] Pointer to the eventfd context.
+ * @n: [in] Value of the counter to be added to the eventfd internal counter.
+ *          The value cannot be negative.
+ *
+ * This function is supposed to be called by the kernel in paths that do not
+ * allow sleeping. In this function we allow the counter to reach the ULLONG_MAX
+ * value, and we signal this as overflow condition by returning a EPOLLERR
+ * to poll(2).
+ *
+ * Returns the amount by which the counter was incremented. This will be less
+ * than @n if the counter has overflowed.
+ */
+__u64 eventfd_signal(struct eventfd_ctx *ctx, __u64 n)
+{
+	return eventfd_signal_mask(ctx, n, 0);
+}
 EXPORT_SYMBOL_GPL(eventfd_signal);
 
 static void eventfd_free_ctx(struct eventfd_ctx *ctx)
diff --git a/include/linux/eventfd.h b/include/linux/eventfd.h
index 30eb30d6909b..e849329ce1a8 100644
--- a/include/linux/eventfd.h
+++ b/include/linux/eventfd.h
@@ -40,6 +40,7 @@ struct file *eventfd_fget(int fd);
 struct eventfd_ctx *eventfd_ctx_fdget(int fd);
 struct eventfd_ctx *eventfd_ctx_fileget(struct file *file);
 __u64 eventfd_signal(struct eventfd_ctx *ctx, __u64 n);
+__u64 eventfd_signal_mask(struct eventfd_ctx *ctx, __u64 n, unsigned mask);
 int eventfd_ctx_remove_wait_queue(struct eventfd_ctx *ctx,
 				  wait_queue_entry_t *wait, __u64 *cnt);
 void eventfd_ctx_do_read(struct eventfd_ctx *ctx, __u64 *cnt);