From patchwork Tue Jul 28 16:00:59 2020
X-Patchwork-Submitter: Stefano Garzarella
X-Patchwork-Id: 11689475
From: Stefano Garzarella
To: Jens Axboe
Cc: io-uring@vger.kernel.org, Alexander Viro, Kees Cook, Jeff Moyer,
    Kernel Hardening, linux-fsdevel@vger.kernel.org,
    linux-kernel@vger.kernel.org, Aleksa Sarai, Sargun Dhillon,
    Stefan Hajnoczi, Christian Brauner, Jann Horn
Subject: [PATCH v3 1/3] io_uring: use an enumeration for io_uring_register(2) opcodes
Date: Tue, 28 Jul 2020 18:00:59 +0200
Message-Id: <20200728160101.48554-2-sgarzare@redhat.com>
In-Reply-To: <20200728160101.48554-1-sgarzare@redhat.com>
References: <20200728160101.48554-1-sgarzare@redhat.com>

The enumeration allows us to keep track of the last io_uring_register(2)
opcode available. Behaviour and opcode names don't change.
Signed-off-by: Stefano Garzarella
---
 include/uapi/linux/io_uring.h | 27 ++++++++++++++++-----------
 1 file changed, 16 insertions(+), 11 deletions(-)

diff --git a/include/uapi/linux/io_uring.h b/include/uapi/linux/io_uring.h
index 7843742b8b74..efc50bd0af34 100644
--- a/include/uapi/linux/io_uring.h
+++ b/include/uapi/linux/io_uring.h
@@ -253,17 +253,22 @@ struct io_uring_params {
 /*
  * io_uring_register(2) opcodes and arguments
  */
-#define IORING_REGISTER_BUFFERS		0
-#define IORING_UNREGISTER_BUFFERS	1
-#define IORING_REGISTER_FILES		2
-#define IORING_UNREGISTER_FILES		3
-#define IORING_REGISTER_EVENTFD		4
-#define IORING_UNREGISTER_EVENTFD	5
-#define IORING_REGISTER_FILES_UPDATE	6
-#define IORING_REGISTER_EVENTFD_ASYNC	7
-#define IORING_REGISTER_PROBE		8
-#define IORING_REGISTER_PERSONALITY	9
-#define IORING_UNREGISTER_PERSONALITY	10
+enum {
+	IORING_REGISTER_BUFFERS,
+	IORING_UNREGISTER_BUFFERS,
+	IORING_REGISTER_FILES,
+	IORING_UNREGISTER_FILES,
+	IORING_REGISTER_EVENTFD,
+	IORING_UNREGISTER_EVENTFD,
+	IORING_REGISTER_FILES_UPDATE,
+	IORING_REGISTER_EVENTFD_ASYNC,
+	IORING_REGISTER_PROBE,
+	IORING_REGISTER_PERSONALITY,
+	IORING_UNREGISTER_PERSONALITY,
+
+	/* this goes last */
+	IORING_REGISTER_LAST
+};
 
 struct io_uring_files_update {
 	__u32 offset;
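With the opcodes collected into an enumeration, the IORING_REGISTER_LAST
sentinel gives callers a natural bound for validating opcode values. A
minimal sketch of that idea, assuming the updated UAPI header is
installed; opcode_is_known() is a hypothetical helper, not part of the
patch:

#include <stdbool.h>
#include <linux/io_uring.h>	/* assumes the updated UAPI header */

/* Hypothetical helper: any value at or beyond the sentinel is unknown. */
static bool opcode_is_known(unsigned int opcode)
{
	return opcode < IORING_REGISTER_LAST;
}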
From patchwork Tue Jul 28 16:01:00 2020
X-Patchwork-Submitter: Stefano Garzarella
X-Patchwork-Id: 11689479
From: Stefano Garzarella
To: Jens Axboe
Cc: io-uring@vger.kernel.org, Alexander Viro, Kees Cook, Jeff Moyer,
    Kernel Hardening, linux-fsdevel@vger.kernel.org,
    linux-kernel@vger.kernel.org, Aleksa Sarai, Sargun Dhillon,
    Stefan Hajnoczi, Christian Brauner, Jann Horn
Subject: [PATCH v3 2/3] io_uring: add IORING_REGISTER_RESTRICTIONS opcode
Date: Tue, 28 Jul 2020 18:01:00 +0200
Message-Id: <20200728160101.48554-3-sgarzare@redhat.com>
In-Reply-To: <20200728160101.48554-1-sgarzare@redhat.com>
References: <20200728160101.48554-1-sgarzare@redhat.com>

The new io_uring_register(2) IORING_REGISTER_RESTRICTIONS opcode
permanently installs a feature allowlist on an io_ring_ctx. The
io_ring_ctx can then be passed to untrusted code with the knowledge
that only operations present in the allowlist can be executed.

The allowlist approach ensures that new features added to io_uring do
not accidentally become available when an existing application is
launched on a newer kernel version.

Currently it is possible to restrict sqe opcodes, sqe flags, and
register opcodes.

The IORING_REGISTER_RESTRICTIONS call can only be made once.
Afterwards it is not possible to change restrictions anymore. This
prevents untrusted code from removing restrictions.

Suggested-by: Stefan Hajnoczi
Signed-off-by: Stefano Garzarella
---
v3:
- added IORING_RESTRICTION_SQE_FLAGS_ALLOWED and
  IORING_RESTRICTION_SQE_FLAGS_REQUIRED
- removed IORING_RESTRICTION_FIXED_FILES_ONLY

RFC v2:
- added 'restricted' flag in the ctx [Jens]
- added IORING_MAX_RESTRICTIONS define
- returned EBUSY instead of EINVAL when restrictions are already
  registered
- reset restrictions if an error happened during the registration
---
 fs/io_uring.c                 | 113 +++++++++++++++++++++++++++++++++-
 include/uapi/linux/io_uring.h |  31 ++++++++++
 2 files changed, 143 insertions(+), 1 deletion(-)

diff --git a/fs/io_uring.c b/fs/io_uring.c
index 32b0064f806e..518986371aae 100644
--- a/fs/io_uring.c
+++ b/fs/io_uring.c
@@ -97,6 +97,8 @@
 #define IORING_MAX_FILES_TABLE	(1U << IORING_FILE_TABLE_SHIFT)
 #define IORING_FILE_TABLE_MASK	(IORING_MAX_FILES_TABLE - 1)
 #define IORING_MAX_FIXED_FILES	(64 * IORING_MAX_FILES_TABLE)
+#define IORING_MAX_RESTRICTIONS	(IORING_RESTRICTION_LAST + \
+				 IORING_REGISTER_LAST + IORING_OP_LAST)
 
 struct io_uring {
 	u32 head ____cacheline_aligned_in_smp;
@@ -218,6 +220,13 @@ struct io_buffer {
 	__u16 bid;
 };
 
+struct io_restriction {
+	DECLARE_BITMAP(register_op, IORING_REGISTER_LAST);
+	DECLARE_BITMAP(sqe_op, IORING_OP_LAST);
+	u8 sqe_flags_allowed;
+	u8 sqe_flags_required;
+};
+
 struct io_ring_ctx {
 	struct {
 		struct percpu_ref refs;
@@ -230,6 +239,7 @@ struct io_ring_ctx {
 	unsigned int		cq_overflow_flushed: 1;
 	unsigned int		drain_next: 1;
 	unsigned int		eventfd_async: 1;
+	unsigned int		restricted: 1;
 
 	/*
 	 * Ring buffer of indices into array of io_uring_sqe, which is
@@ -337,6 +347,7 @@ struct io_ring_ctx {
 	struct llist_head	file_put_llist;
 
 	struct work_struct	exit_work;
+	struct io_restriction	restrictions;
 };
 
 /*
@@ -5925,6 +5936,19 @@ static int io_init_req(struct io_ring_ctx *ctx, struct io_kiocb *req,
 	if (unlikely(sqe_flags & ~SQE_VALID_FLAGS))
 		return -EINVAL;
 
+	if (unlikely(ctx->restricted)) {
+		if (!test_bit(req->opcode, ctx->restrictions.sqe_op))
+			return -EACCES;
+
+		if ((sqe_flags & ctx->restrictions.sqe_flags_required) !=
+		    ctx->restrictions.sqe_flags_required)
+			return -EACCES;
+
+		if (sqe_flags & ~(ctx->restrictions.sqe_flags_allowed |
+				  ctx->restrictions.sqe_flags_required))
+			return -EACCES;
+	}
+
 	if ((sqe_flags & IOSQE_BUFFER_SELECT) &&
 	    !io_op_defs[req->opcode].buffer_select)
 		return -EOPNOTSUPP;
@@ -8116,6 +8140,77 @@ static int io_unregister_personality(struct io_ring_ctx *ctx, unsigned id)
 	return -EINVAL;
 }
 
+static int io_register_restrictions(struct io_ring_ctx *ctx, void __user *arg,
+				    unsigned int nr_args)
+{
+	struct io_uring_restriction *res;
+	size_t size;
+	int i, ret;
+
+	/* We allow only a single restrictions registration */
+	if (ctx->restricted)
+		return -EBUSY;
+
+	if (!arg || nr_args > IORING_MAX_RESTRICTIONS)
+		return -EINVAL;
+
+	size = array_size(nr_args, sizeof(*res));
+	if (size == SIZE_MAX)
+		return -EOVERFLOW;
+
+	res = kmalloc(size, GFP_KERNEL);
+	if (!res)
+		return -ENOMEM;
+
+	if (copy_from_user(res, arg, size)) {
+		ret = -EFAULT;
+		goto out;
+	}
+
+	for (i = 0; i < nr_args; i++) {
+		switch (res[i].opcode) {
+		case IORING_RESTRICTION_REGISTER_OP:
+			if (res[i].register_op >= IORING_REGISTER_LAST) {
+				ret = -EINVAL;
+				goto out;
+			}
+
+			__set_bit(res[i].register_op,
+				  ctx->restrictions.register_op);
+			break;
+		case IORING_RESTRICTION_SQE_OP:
+			if (res[i].sqe_op >= IORING_OP_LAST) {
+				ret = -EINVAL;
+				goto out;
+			}
+
+			__set_bit(res[i].sqe_op, ctx->restrictions.sqe_op);
+			break;
+		case IORING_RESTRICTION_SQE_FLAGS_ALLOWED:
+			ctx->restrictions.sqe_flags_allowed = res[i].sqe_flags;
+			break;
+		case IORING_RESTRICTION_SQE_FLAGS_REQUIRED:
+			ctx->restrictions.sqe_flags_required = res[i].sqe_flags;
+			break;
+		default:
+			ret = -EINVAL;
+			goto out;
+
+		}
+	}
+
+	ctx->restricted = 1;
+
+	ret = 0;
+out:
+	/* Reset all restrictions if an error happened */
+	if (ret != 0)
+		memset(&ctx->restrictions, 0, sizeof(ctx->restrictions));
+
+	kfree(res);
+	return ret;
+}
+
 static bool io_register_op_must_quiesce(int op)
 {
 	switch (op) {
@@ -8162,6 +8257,18 @@ static int __io_uring_register(struct io_ring_ctx *ctx, unsigned opcode,
 		if (ret) {
 			percpu_ref_resurrect(&ctx->refs);
 			ret = -EINTR;
+			goto out_quiesce;
+		}
+	}
+
+	if (ctx->restricted) {
+		if (opcode >= IORING_REGISTER_LAST) {
+			ret = -EINVAL;
+			goto out;
+		}
+
+		if (!test_bit(opcode, ctx->restrictions.register_op)) {
+			ret = -EACCES;
 			goto out;
 		}
 	}
@@ -8225,15 +8332,19 @@ static int __io_uring_register(struct io_ring_ctx *ctx, unsigned opcode,
 			break;
 		ret = io_unregister_personality(ctx, nr_args);
 		break;
+	case IORING_REGISTER_RESTRICTIONS:
+		ret = io_register_restrictions(ctx, arg, nr_args);
+		break;
 	default:
 		ret = -EINVAL;
 		break;
 	}
 
+out:
 	if (io_register_op_must_quiesce(opcode)) {
 		/* bring the ctx back to life */
 		percpu_ref_reinit(&ctx->refs);
-out:
+out_quiesce:
 		reinit_completion(&ctx->ref_comp);
 	}
 	return ret;
diff --git a/include/uapi/linux/io_uring.h b/include/uapi/linux/io_uring.h
index efc50bd0af34..7303500fc6d3 100644
--- a/include/uapi/linux/io_uring.h
+++ b/include/uapi/linux/io_uring.h
@@ -265,6 +265,7 @@ enum {
 	IORING_REGISTER_PROBE,
 	IORING_REGISTER_PERSONALITY,
 	IORING_UNREGISTER_PERSONALITY,
+	IORING_REGISTER_RESTRICTIONS,
 
 	/* this goes last */
 	IORING_REGISTER_LAST
@@ -293,4 +294,34 @@ struct io_uring_probe {
 	struct io_uring_probe_op ops[0];
 };
 
+struct io_uring_restriction {
+	__u16 opcode;
+	union {
+		__u8 register_op; /* IORING_RESTRICTION_REGISTER_OP */
+		__u8 sqe_op;      /* IORING_RESTRICTION_SQE_OP */
+		__u8 sqe_flags;   /* IORING_RESTRICTION_SQE_FLAGS_* */
+	};
+	__u8 resv;
+	__u32 resv2[3];
+};
+
+/*
+ * io_uring_restriction->opcode values
+ */
+enum {
+	/* Allow an io_uring_register(2) opcode */
+	IORING_RESTRICTION_REGISTER_OP,
+
+	/* Allow an sqe opcode */
+	IORING_RESTRICTION_SQE_OP,
+
+	/* Allow sqe flags */
+	IORING_RESTRICTION_SQE_FLAGS_ALLOWED,
+
+	/* Require sqe flags (these flags must be set on each submission) */
+	IORING_RESTRICTION_SQE_FLAGS_REQUIRED,
+
+	IORING_RESTRICTION_LAST
+};
+
 #endif
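To illustrate the intended use of the new UAPI, here is a minimal
userspace sketch that installs an allowlist through the raw syscall;
ring_fd, the particular operations allowed, and install_restrictions()
itself are assumptions made for the example, not part of the patch:

#include <string.h>
#include <unistd.h>
#include <sys/syscall.h>
#include <linux/io_uring.h>

/* Sketch: allow re-registering buffers, and restrict submissions to
 * NOP and READV, on an already-created ring referred to by ring_fd. */
static int install_restrictions(int ring_fd)
{
	struct io_uring_restriction res[3];

	memset(res, 0, sizeof(res));
	res[0].opcode = IORING_RESTRICTION_REGISTER_OP;
	res[0].register_op = IORING_REGISTER_BUFFERS;
	res[1].opcode = IORING_RESTRICTION_SQE_OP;
	res[1].sqe_op = IORING_OP_NOP;
	res[2].opcode = IORING_RESTRICTION_SQE_OP;
	res[2].sqe_op = IORING_OP_READV;

	return syscall(__NR_io_uring_register, ring_fd,
		       IORING_REGISTER_RESTRICTIONS, res, 3);
}

Once this call succeeds, the allowlist cannot be changed or removed for
the lifetime of the ring.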
From patchwork Tue Jul 28 16:01:01 2020
X-Patchwork-Submitter: Stefano Garzarella
X-Patchwork-Id: 11689483
From: Stefano Garzarella
To: Jens Axboe
Cc: io-uring@vger.kernel.org, Alexander Viro, Kees Cook, Jeff Moyer,
    Kernel Hardening, linux-fsdevel@vger.kernel.org,
    linux-kernel@vger.kernel.org, Aleksa Sarai, Sargun Dhillon,
    Stefan Hajnoczi, Christian Brauner, Jann Horn
Subject: [PATCH v3 3/3] io_uring: allow disabling rings during the creation
Date: Tue, 28 Jul 2020 18:01:01 +0200
Message-Id: <20200728160101.48554-4-sgarzare@redhat.com>
In-Reply-To: <20200728160101.48554-1-sgarzare@redhat.com>
References: <20200728160101.48554-1-sgarzare@redhat.com>

This patch adds a new IORING_SETUP_R_DISABLED flag to start the rings
disabled, allowing the user to register restrictions, buffers, and
files before starting to process SQEs.

When IORING_SETUP_R_DISABLED is set, SQEs are not processed and the
SQPOLL kthread is not started.

Registering restrictions is allowed only while the rings are disabled,
to prevent concurrency issues while processing SQEs.

The rings can be enabled using the IORING_REGISTER_ENABLE_RINGS opcode
with io_uring_register(2).

Suggested-by: Jens Axboe
Signed-off-by: Stefano Garzarella
---
v3:
- enabled restrictions only when the rings start

RFC v2:
- removed return value of io_sq_offload_start()
---
 fs/io_uring.c                 | 58 +++++++++++++++++++++++++++++------
 include/uapi/linux/io_uring.h |  2 ++
 2 files changed, 50 insertions(+), 10 deletions(-)

diff --git a/fs/io_uring.c b/fs/io_uring.c
index 518986371aae..49db8899fefb 100644
--- a/fs/io_uring.c
+++ b/fs/io_uring.c
@@ -225,6 +225,7 @@ struct io_restriction {
 	DECLARE_BITMAP(sqe_op, IORING_OP_LAST);
 	u8 sqe_flags_allowed;
 	u8 sqe_flags_required;
+	bool registered;
 };
 
 struct io_ring_ctx {
@@ -6991,8 +6992,8 @@ static int io_init_wq_offload(struct io_ring_ctx *ctx,
 	return ret;
 }
 
-static int io_sq_offload_start(struct io_ring_ctx *ctx,
-			       struct io_uring_params *p)
+static int io_sq_offload_create(struct io_ring_ctx *ctx,
+				struct io_uring_params *p)
 {
 	int ret;
 
@@ -7029,7 +7030,6 @@ static int io_sq_offload_start(struct io_ring_ctx *ctx,
 			ctx->sqo_thread = NULL;
 			goto err;
 		}
-		wake_up_process(ctx->sqo_thread);
 	} else if (p->flags & IORING_SETUP_SQ_AFF) {
 		/* Can't have SQ_AFF without SQPOLL */
 		ret = -EINVAL;
@@ -7048,6 +7048,12 @@ static int io_sq_offload_start(struct io_ring_ctx *ctx,
 	return ret;
 }
 
+static void io_sq_offload_start(struct io_ring_ctx *ctx)
+{
+	if ((ctx->flags & IORING_SETUP_SQPOLL) && ctx->sqo_thread)
+		wake_up_process(ctx->sqo_thread);
+}
+
 static void io_unaccount_mem(struct user_struct *user, unsigned long nr_pages)
 {
 	atomic_long_sub(nr_pages, &user->locked_vm);
@@ -7676,9 +7682,6 @@ SYSCALL_DEFINE6(io_uring_enter, unsigned int, fd, u32, to_submit,
 	int submitted = 0;
 	struct fd f;
 
-	if (current->task_works)
-		task_work_run();
-
 	if (flags & ~(IORING_ENTER_GETEVENTS | IORING_ENTER_SQ_WAKEUP))
 		return -EINVAL;
 
@@ -7695,6 +7698,12 @@ SYSCALL_DEFINE6(io_uring_enter, unsigned int, fd, u32, to_submit,
 	if (!percpu_ref_tryget(&ctx->refs))
 		goto out_fput;
 
+	if (ctx->flags & IORING_SETUP_R_DISABLED)
+		return -EBADF;
+
+	if (current->task_works)
+		task_work_run();
+
 	/*
 	 * For SQ polling, the thread will do all submissions and completions.
 	 * Just return the requested submit count, and wake the thread if
@@ -8000,10 +8009,13 @@ static int io_uring_create(unsigned entries, struct io_uring_params *p,
 	if (ret)
 		goto err;
 
-	ret = io_sq_offload_start(ctx, p);
+	ret = io_sq_offload_create(ctx, p);
 	if (ret)
 		goto err;
 
+	if (!(p->flags & IORING_SETUP_R_DISABLED))
+		io_sq_offload_start(ctx);
+
 	memset(&p->sq_off, 0, sizeof(p->sq_off));
 	p->sq_off.head = offsetof(struct io_rings, sq.head);
 	p->sq_off.tail = offsetof(struct io_rings, sq.tail);
@@ -8064,7 +8076,8 @@ static long io_uring_setup(u32 entries, struct io_uring_params __user *params)
 
 	if (p.flags & ~(IORING_SETUP_IOPOLL | IORING_SETUP_SQPOLL |
 			IORING_SETUP_SQ_AFF | IORING_SETUP_CQSIZE |
-			IORING_SETUP_CLAMP | IORING_SETUP_ATTACH_WQ))
+			IORING_SETUP_CLAMP | IORING_SETUP_ATTACH_WQ |
+			IORING_SETUP_R_DISABLED))
 		return -EINVAL;
 
 	return io_uring_create(entries, &p, params);
@@ -8147,8 +8160,12 @@ static int io_register_restrictions(struct io_ring_ctx *ctx, void __user *arg,
 	size_t size;
 	int i, ret;
 
+	/* Restrictions allowed only if rings started disabled */
+	if (!(ctx->flags & IORING_SETUP_R_DISABLED))
+		return -EINVAL;
+
 	/* We allow only a single restrictions registration */
-	if (ctx->restricted)
+	if (ctx->restrictions.registered)
 		return -EBUSY;
 
 	if (!arg || nr_args > IORING_MAX_RESTRICTIONS)
@@ -8199,7 +8216,7 @@ static int io_register_restrictions(struct io_ring_ctx *ctx, void __user *arg,
 		}
 	}
 
-	ctx->restricted = 1;
+	ctx->restrictions.registered = true;
 
 	ret = 0;
 out:
@@ -8211,6 +8228,21 @@ static int io_register_restrictions(struct io_ring_ctx *ctx, void __user *arg,
 	return ret;
 }
 
+static int io_register_enable_rings(struct io_ring_ctx *ctx)
+{
+	if (!(ctx->flags & IORING_SETUP_R_DISABLED))
+		return -EINVAL;
+
+	if (ctx->restrictions.registered)
+		ctx->restricted = 1;
+
+	ctx->flags &= ~IORING_SETUP_R_DISABLED;
+
+	io_sq_offload_start(ctx);
+
+	return 0;
+}
+
 static bool io_register_op_must_quiesce(int op)
 {
 	switch (op) {
@@ -8332,6 +8364,12 @@ static int __io_uring_register(struct io_ring_ctx *ctx, unsigned opcode,
 			break;
 		ret = io_unregister_personality(ctx, nr_args);
 		break;
+	case IORING_REGISTER_ENABLE_RINGS:
+		ret = -EINVAL;
+		if (arg || nr_args)
+			break;
+		ret = io_register_enable_rings(ctx);
+		break;
 	case IORING_REGISTER_RESTRICTIONS:
 		ret = io_register_restrictions(ctx, arg, nr_args);
 		break;
diff --git a/include/uapi/linux/io_uring.h b/include/uapi/linux/io_uring.h
index 7303500fc6d3..7f9c92313795 100644
--- a/include/uapi/linux/io_uring.h
+++ b/include/uapi/linux/io_uring.h
@@ -94,6 +94,7 @@ enum {
 #define IORING_SETUP_CQSIZE	(1U << 3)	/* app defines CQ size */
 #define IORING_SETUP_CLAMP	(1U << 4)	/* clamp SQ/CQ ring sizes */
 #define IORING_SETUP_ATTACH_WQ	(1U << 5)	/* attach to existing wq */
+#define IORING_SETUP_R_DISABLED	(1U << 6)	/* start with ring disabled */
 
 enum {
 	IORING_OP_NOP,
@@ -266,6 +267,7 @@ enum {
 	IORING_REGISTER_PERSONALITY,
 	IORING_UNREGISTER_PERSONALITY,
 	IORING_REGISTER_RESTRICTIONS,
+	IORING_REGISTER_ENABLE_RINGS,
 
 	/* this goes last */
 	IORING_REGISTER_LAST
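Taken together, the three patches enable the following lifecycle,
sketched here from userspace under the same assumptions as above (raw
syscalls, no liburing wrapper; setup_restricted_ring() is hypothetical
and error handling is abbreviated):

#include <string.h>
#include <unistd.h>
#include <sys/syscall.h>
#include <linux/io_uring.h>

/* Create the ring disabled, install the allowlist, then enable SQE
 * processing before handing the fd to untrusted code. */
static int setup_restricted_ring(unsigned int entries)
{
	struct io_uring_params p;
	struct io_uring_restriction res;
	int fd, ret;

	memset(&p, 0, sizeof(p));
	p.flags = IORING_SETUP_R_DISABLED;

	fd = syscall(__NR_io_uring_setup, entries, &p);
	if (fd < 0)
		return fd;

	/* Example allowlist: permit only IORING_OP_NOP submissions. */
	memset(&res, 0, sizeof(res));
	res.opcode = IORING_RESTRICTION_SQE_OP;
	res.sqe_op = IORING_OP_NOP;
	ret = syscall(__NR_io_uring_register, fd,
		      IORING_REGISTER_RESTRICTIONS, &res, 1);
	if (ret < 0)
		goto err;

	/* Enable the rings; the restrictions take effect from here on. */
	ret = syscall(__NR_io_uring_register, fd,
		      IORING_REGISTER_ENABLE_RINGS, NULL, 0);
	if (ret < 0)
		goto err;

	return fd;
err:
	close(fd);
	return ret;
}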