From patchwork Wed Nov 6 12:26:50 2024
X-Patchwork-Submitter: Ming Lei
X-Patchwork-Id: 13864590
From: Ming Lei
To: Jens Axboe, io-uring@vger.kernel.org, Pavel Begunkov
Cc: linux-block@vger.kernel.org, Uday Shankar, Akilesh Kailash, Ming Lei
Subject: [PATCH V9 1/7] io_uring: rename io_mapped_ubuf as io_mapped_buf
Date: Wed, 6 Nov 2024 20:26:50 +0800
Message-ID: <20241106122659.730712-2-ming.lei@redhat.com>
In-Reply-To: <20241106122659.730712-1-ming.lei@redhat.com>
References: <20241106122659.730712-1-ming.lei@redhat.com>
X-Mailing-List: io-uring@vger.kernel.org

Rename io_mapped_ubuf so that the same structure can be used to
describe kernel buffers as well.

Signed-off-by: Ming Lei
---
 io_uring/fdinfo.c |  2 +-
 io_uring/rsrc.c   | 10 +++++-----
 io_uring/rsrc.h   |  6 +++---
 3 files changed, 9 insertions(+), 9 deletions(-)

diff --git a/io_uring/fdinfo.c b/io_uring/fdinfo.c
index b214e5a407b5..9ca95f877312 100644
--- a/io_uring/fdinfo.c
+++ b/io_uring/fdinfo.c
@@ -218,7 +218,7 @@ __cold void io_uring_show_fdinfo(struct seq_file *m, struct file *file)
 	}
 	seq_printf(m, "UserBufs:\t%u\n", ctx->buf_table.nr);
 	for (i = 0; has_lock && i < ctx->buf_table.nr; i++) {
-		struct io_mapped_ubuf *buf = NULL;
+		struct io_mapped_buf *buf = NULL;
 
 		if (ctx->buf_table.nodes[i])
 			buf = ctx->buf_table.nodes[i]->buf;
diff --git a/io_uring/rsrc.c b/io_uring/rsrc.c
index 2fb1791d7255..70dba4a4de1d 100644
--- a/io_uring/rsrc.c
+++ b/io_uring/rsrc.c
@@ -106,7 +106,7 @@ static void io_buffer_unmap(struct io_ring_ctx *ctx, struct io_rsrc_node *node)
 	unsigned int i;
 
 	if (node->buf) {
-		struct io_mapped_ubuf *imu = node->buf;
+		struct io_mapped_buf *imu = node->buf;
 
 		if (!refcount_dec_and_test(&imu->refs))
 			return;
@@ -580,7 +580,7 @@ static bool headpage_already_acct(struct io_ring_ctx *ctx, struct page **pages,
 	/* check previously registered pages */
 	for (i = 0; i < ctx->buf_table.nr; i++) {
 		struct io_rsrc_node *node = ctx->buf_table.nodes[i];
-		struct io_mapped_ubuf *imu;
+		struct io_mapped_buf *imu;
 
 		if (!node)
 			continue;
@@ -597,7 +597,7 @@ static bool headpage_already_acct(struct io_ring_ctx *ctx, struct page **pages,
 }
 
 static int io_buffer_account_pin(struct io_ring_ctx *ctx, struct page **pages,
-				 int nr_pages, struct io_mapped_ubuf *imu,
+				 int nr_pages, struct io_mapped_buf *imu,
 				 struct page **last_hpage)
 {
 	int i, ret;
@@ -724,7 +724,7 @@ static struct io_rsrc_node *io_sqe_buffer_register(struct io_ring_ctx *ctx,
 						   struct iovec *iov,
 						   struct page **last_hpage)
 {
-	struct io_mapped_ubuf *imu = NULL;
+	struct io_mapped_buf *imu = NULL;
 	struct page **pages = NULL;
 	struct io_rsrc_node *node;
 	unsigned long off;
@@ -866,7 +866,7 @@ int io_sqe_buffers_register(struct io_ring_ctx *ctx, void __user *arg,
 }
 
 int io_import_fixed(int ddir, struct iov_iter *iter,
-		    struct io_mapped_ubuf *imu,
+		    struct io_mapped_buf *imu,
 		    u64 buf_addr, size_t len)
 {
 	u64 buf_end;
diff --git a/io_uring/rsrc.h b/io_uring/rsrc.h
index bc3a863b14bb..9d55f9769c77 100644
--- a/io_uring/rsrc.h
+++ b/io_uring/rsrc.h
@@ -22,11 +22,11 @@ struct io_rsrc_node {
 	u64 tag;
 	union {
 		unsigned long file_ptr;
-		struct io_mapped_ubuf *buf;
+		struct io_mapped_buf *buf;
 	};
 };
 
-struct io_mapped_ubuf {
+struct io_mapped_buf {
 	u64 ubuf;
 	unsigned int len;
 	unsigned int nr_bvecs;
@@ -50,7 +50,7 @@ void io_rsrc_data_free(struct io_rsrc_data *data);
 int io_rsrc_data_alloc(struct io_rsrc_data *data, unsigned nr);
 
 int io_import_fixed(int ddir, struct iov_iter *iter,
-		    struct io_mapped_ubuf *imu,
+		    struct io_mapped_buf *imu,
 		    u64 buf_addr, size_t len);
 
 int io_register_clone_buffers(struct io_ring_ctx *ctx, void __user *arg);
From patchwork Wed Nov 6 12:26:51 2024
X-Patchwork-Submitter: Ming Lei
X-Patchwork-Id: 13864591
From: Ming Lei
To: Jens Axboe, io-uring@vger.kernel.org, Pavel Begunkov
Cc: linux-block@vger.kernel.org, Uday Shankar, Akilesh Kailash, Ming Lei
Subject: [PATCH V9 2/7] io_uring: rename ubuf of io_mapped_buf as start
Date: Wed, 6 Nov 2024 20:26:51 +0800
Message-ID: <20241106122659.730712-3-ming.lei@redhat.com>
In-Reply-To: <20241106122659.730712-1-ming.lei@redhat.com>
References: <20241106122659.730712-1-ming.lei@redhat.com>
X-Mailing-List: io-uring@vger.kernel.org

->ubuf of `io_mapped_buf` stores the start address of the userspace fixed
buffer. `io_mapped_buf` will be extended to cover kernel buffers, so
rename ->ubuf to ->start.

Signed-off-by: Ming Lei
---
 io_uring/fdinfo.c | 2 +-
 io_uring/rsrc.c   | 6 +++---
 io_uring/rsrc.h   | 2 +-
 3 files changed, 5 insertions(+), 5 deletions(-)

diff --git a/io_uring/fdinfo.c b/io_uring/fdinfo.c
index 9ca95f877312..9b39cb596136 100644
--- a/io_uring/fdinfo.c
+++ b/io_uring/fdinfo.c
@@ -223,7 +223,7 @@ __cold void io_uring_show_fdinfo(struct seq_file *m, struct file *file)
 		if (ctx->buf_table.nodes[i])
 			buf = ctx->buf_table.nodes[i]->buf;
 		if (buf)
-			seq_printf(m, "%5u: 0x%llx/%u\n", i, buf->ubuf, buf->len);
+			seq_printf(m, "%5u: 0x%llx/%u\n", i, buf->start, buf->len);
 		else
 			seq_printf(m, "%5u: \n", i);
 	}
diff --git a/io_uring/rsrc.c b/io_uring/rsrc.c
index 70dba4a4de1d..9b8827c72230 100644
--- a/io_uring/rsrc.c
+++ b/io_uring/rsrc.c
@@ -765,7 +765,7 @@ static struct io_rsrc_node *io_sqe_buffer_register(struct io_ring_ctx *ctx,
 	size = iov->iov_len;
 	/* store original address for later verification */
-	imu->ubuf = (unsigned long) iov->iov_base;
+	imu->start = (unsigned long) iov->iov_base;
 	imu->len = iov->iov_len;
 	imu->nr_bvecs = nr_pages;
 	imu->folio_shift = PAGE_SHIFT;
@@ -877,14 +877,14 @@ int io_import_fixed(int ddir, struct iov_iter *iter,
 	if (unlikely(check_add_overflow(buf_addr, (u64)len, &buf_end)))
 		return -EFAULT;
 	/* not inside the mapped region */
-	if (unlikely(buf_addr < imu->ubuf || buf_end > (imu->ubuf + imu->len)))
+	if (unlikely(buf_addr < imu->start || buf_end > (imu->start + imu->len)))
 		return -EFAULT;
 
 	/*
 	 * Might not be a start of buffer, set size appropriately
 	 * and advance us to the beginning.
 	 */
-	offset = buf_addr - imu->ubuf;
+	offset = buf_addr - imu->start;
 	iov_iter_bvec(iter, ddir, imu->bvec, imu->nr_bvecs, offset + len);
 
 	if (offset) {
diff --git a/io_uring/rsrc.h b/io_uring/rsrc.h
index 9d55f9769c77..887699400e29 100644
--- a/io_uring/rsrc.h
+++ b/io_uring/rsrc.h
@@ -27,7 +27,7 @@ struct io_rsrc_node {
 };
 
 struct io_mapped_buf {
-	u64 ubuf;
+	u64 start;
 	unsigned int len;
 	unsigned int nr_bvecs;
 	unsigned int folio_shift;
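[Editor's note] The bounds check in io_import_fixed() above rejects overflow and any range outside the mapped region before computing the iterator offset. A minimal userspace sketch of the same validation (struct and function names are hypothetical; GCC/Clang's `__builtin_add_overflow` stands in for the kernel's `check_add_overflow()`):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Hypothetical stand-in for the region described by io_mapped_buf. */
struct region {
	uint64_t start;	/* original address stored at registration time */
	uint32_t len;	/* region length in bytes */
};

/*
 * Mirrors the checks in io_import_fixed(): reject arithmetic overflow
 * and any [buf_addr, buf_addr + len) window that is not fully inside
 * the mapped region, then compute the offset from the region start.
 */
static int import_fixed_offset(const struct region *r, uint64_t buf_addr,
			       size_t len, uint64_t *offset)
{
	uint64_t buf_end;

	if (__builtin_add_overflow(buf_addr, (uint64_t)len, &buf_end))
		return -1;	/* buf_addr + len wrapped around */
	if (buf_addr < r->start || buf_end > r->start + r->len)
		return -1;	/* not inside the mapped region */
	*offset = buf_addr - r->start;
	return 0;
}
```

The request need not start at the beginning of the registered buffer; a nonzero offset is what the kernel later uses to advance the bvec iterator.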
From patchwork Wed Nov 6 12:26:52 2024
X-Patchwork-Submitter: Ming Lei
X-Patchwork-Id: 13864592
From: Ming Lei
To: Jens Axboe, io-uring@vger.kernel.org, Pavel Begunkov
Cc: linux-block@vger.kernel.org, Uday Shankar, Akilesh Kailash, Ming Lei
Subject: [PATCH V9 3/7] io_uring: shrink io_mapped_buf
Date: Wed, 6 Nov 2024 20:26:52 +0800
Message-ID: <20241106122659.730712-4-ming.lei@redhat.com>
In-Reply-To: <20241106122659.730712-1-ming.lei@redhat.com>
References: <20241106122659.730712-1-ming.lei@redhat.com>
X-Mailing-List: io-uring@vger.kernel.org

`struct io_mapped_buf` will be extended to cover kernel buffers, which
may sit in the fast IO path, and `struct io_mapped_buf` needs to be
per-IO. So shrink sizeof(struct io_mapped_buf) in the following ways:

- folio_shift is < 64, so 6 bits are enough to hold it; the remaining
  bits of the word can be used for the coming kernel buffer support

- define `acct_pages` as `unsigned int`, which is big enough for
  accounting the pages in the buffer

Signed-off-by: Ming Lei
---
 io_uring/rsrc.c | 2 ++
 io_uring/rsrc.h | 6 +++---
 2 files changed, 5 insertions(+), 3 deletions(-)

diff --git a/io_uring/rsrc.c b/io_uring/rsrc.c
index 9b8827c72230..16f5abe03d10 100644
--- a/io_uring/rsrc.c
+++ b/io_uring/rsrc.c
@@ -685,6 +685,8 @@ static bool io_try_coalesce_buffer(struct page ***pages, int *nr_pages,
 		return false;
 
 	data->folio_shift = folio_shift(folio);
+	WARN_ON_ONCE(data->folio_shift >= 64);
+
 	/*
 	 * Check if pages are contiguous inside a folio, and all folios have
 	 * the same page count except for the head and tail.
diff --git a/io_uring/rsrc.h b/io_uring/rsrc.h
index 887699400e29..255ec94ea172 100644
--- a/io_uring/rsrc.h
+++ b/io_uring/rsrc.h
@@ -30,9 +30,9 @@ struct io_mapped_buf {
 	u64 start;
 	unsigned int len;
 	unsigned int nr_bvecs;
-	unsigned int folio_shift;
 	refcount_t refs;
-	unsigned long acct_pages;
+	unsigned int acct_pages;
+	unsigned int folio_shift:6;
 	struct bio_vec bvec[] __counted_by(nr_bvecs);
 };
 
@@ -41,7 +41,7 @@ struct io_imu_folio_data {
 	unsigned int nr_pages_head;
 	/* For non-head/tail folios, has to be fully included */
 	unsigned int nr_pages_mid;
-	unsigned int folio_shift;
+	unsigned char folio_shift;
 };
 
 struct io_rsrc_node *io_rsrc_node_alloc(struct io_ring_ctx *ctx, int type);
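[Editor's note] The squeeze above works because any folio shift is strictly below 64, so 6 bits round-trip every legal value and the rest of the word is freed up for future flags. A standalone sketch of that packing (the struct and field names here are hypothetical, merely in the spirit of the shrunk `io_mapped_buf`):

```c
#include <assert.h>
#include <stdint.h>

/*
 * Hypothetical packed layout: folio_shift needs only 6 bits
 * (any shift value is < 64), so the remaining bits of the same
 * 32-bit word can later carry flags such as dir/kbuf.
 */
struct packed_buf {
	uint32_t len;
	uint32_t nr_bvecs;
	uint32_t acct_pages;
	uint32_t folio_shift : 6;
	uint32_t dir : 1;	/* reserved for later use */
	uint32_t kbuf : 1;	/* reserved for later use */
};

/* Returns 1 iff the shift survives a trip through the 6-bit field. */
static int shift_roundtrips(unsigned int shift)
{
	struct packed_buf b = { .folio_shift = shift & 0x3f };

	return b.folio_shift == shift;
}
```

Every shift from 0 through 63 round-trips; 64 and above would be truncated, which is exactly what the added `WARN_ON_ONCE(data->folio_shift >= 64)` guards against.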
From patchwork Wed Nov 6 12:26:53 2024
X-Patchwork-Submitter: Ming Lei
X-Patchwork-Id: 13864593
From: Ming Lei
To: Jens Axboe, io-uring@vger.kernel.org, Pavel Begunkov
Cc: linux-block@vger.kernel.org, Uday Shankar, Akilesh Kailash, Ming Lei
Subject: [PATCH V9 4/7] io_uring: reuse io_mapped_buf for kernel buffer
Date: Wed, 6 Nov 2024 20:26:53 +0800
Message-ID: <20241106122659.730712-5-ming.lei@redhat.com>
In-Reply-To: <20241106122659.730712-1-ming.lei@redhat.com>
References: <20241106122659.730712-1-ming.lei@redhat.com>
X-Mailing-List: io-uring@vger.kernel.org

Prepare for supporting kernel buffers in the io group case, in which the
group leader leases a kernel buffer to io_uring, to be consumed by
io_uring OPs. So reuse io_mapped_buf for the group kernel buffer.
Unfortunately io_import_fixed() can't be reused, since a userspace fixed
buffer is virt-contiguous, which isn't true for a kernel buffer.

Also the kernel buffer's lifetime is bound to the group leader request,
so it isn't necessary to use an rsrc_node to track its lifetime,
especially since that would need an extra rsrc_node allocation for each
IO.

Signed-off-by: Ming Lei
---
 include/linux/io_uring_types.h | 23 +++++++++++++++++++++++
 io_uring/kbuf.c                | 34 ++++++++++++++++++++++++++++++++++
 io_uring/kbuf.h                |  3 +++
 io_uring/rsrc.c                |  1 +
 io_uring/rsrc.h                | 10 ----------
 5 files changed, 61 insertions(+), 10 deletions(-)

diff --git a/include/linux/io_uring_types.h b/include/linux/io_uring_types.h
index bc883078c1ed..9af83cf214c2 100644
--- a/include/linux/io_uring_types.h
+++ b/include/linux/io_uring_types.h
@@ -2,6 +2,7 @@
 #define IO_URING_TYPES_H
 
 #include
+#include
 #include
 #include
 #include
@@ -39,6 +40,28 @@ enum io_uring_cmd_flags {
 	IO_URING_F_COMPAT = (1 << 12),
 };
 
+struct io_mapped_buf {
+	u64 start;
+	unsigned int len;
+	unsigned int nr_bvecs;
+
+	/* kbuf hasn't refs and accounting, its lifetime is bound with req */
+	union {
+		struct {
+			refcount_t refs;
+			unsigned int acct_pages;
+		};
+		/* pbvec is only for kbuf */
+		const struct bio_vec *pbvec;
+	};
+	unsigned int folio_shift:6;
+	unsigned int dir:1;	/* ITER_DEST or ITER_SOURCE */
+	unsigned int kbuf:1;	/* kernel buffer or not */
+	/* offset in the 1st bvec, for kbuf only */
+	unsigned int offset;
+	struct bio_vec bvec[] __counted_by(nr_bvecs);
+};
+
 struct io_wq_work_node {
 	struct io_wq_work_node *next;
 };
diff --git a/io_uring/kbuf.c b/io_uring/kbuf.c
index d407576ddfb7..c4a776860cb4 100644
--- a/io_uring/kbuf.c
+++ b/io_uring/kbuf.c
@@ -838,3 +838,37 @@ int io_pbuf_mmap(struct file *file, struct vm_area_struct *vma)
 	io_put_bl(ctx, bl);
 	return ret;
 }
+
+/*
+ * kernel buffer is built over generic bvec, and can't be always
+ * virt-contiguous, which is different with userspace fixed buffer,
+ * so we can't reuse io_import_fixed() here
+ *
+ * Also kernel buffer lifetime is bound with request, and we needn't
+ * to use rsrc_node to track its lifetime
+ */
+int io_import_kbuf(int ddir, struct iov_iter *iter,
+		   const struct io_mapped_buf *kbuf,
+		   u64 buf_off, size_t len)
+{
+	unsigned long offset = kbuf->offset;
+
+	WARN_ON_ONCE(!kbuf->kbuf);
+
+	if (ddir != kbuf->dir)
+		return -EINVAL;
+
+	if (unlikely(buf_off > kbuf->len))
+		return -EFAULT;
+
+	if (unlikely(len > kbuf->len - buf_off))
+		return -EFAULT;
+
+	offset += buf_off;
+	iov_iter_bvec(iter, ddir, kbuf->pbvec, kbuf->nr_bvecs, offset + len);
+
+	if (offset)
+		iov_iter_advance(iter, offset);
+
+	return 0;
+}
diff --git a/io_uring/kbuf.h b/io_uring/kbuf.h
index 36aadfe5ac00..04ccd52dd0ad 100644
--- a/io_uring/kbuf.h
+++ b/io_uring/kbuf.h
@@ -88,6 +88,9 @@ void io_put_bl(struct io_ring_ctx *ctx, struct io_buffer_list *bl);
 struct io_buffer_list *io_pbuf_get_bl(struct io_ring_ctx *ctx,
 				      unsigned long bgid);
 int io_pbuf_mmap(struct file *file, struct vm_area_struct *vma);
+int io_import_kbuf(int ddir, struct iov_iter *iter,
+		   const struct io_mapped_buf *kbuf,
+		   u64 buf_off, size_t len);
 
 static inline bool io_kbuf_recycle_ring(struct io_kiocb *req)
 {
diff --git a/io_uring/rsrc.c b/io_uring/rsrc.c
index 16f5abe03d10..5f35641c55ab 100644
--- a/io_uring/rsrc.c
+++ b/io_uring/rsrc.c
@@ -771,6 +771,7 @@ static struct io_rsrc_node *io_sqe_buffer_register(struct io_ring_ctx *ctx,
 	imu->len = iov->iov_len;
 	imu->nr_bvecs = nr_pages;
 	imu->folio_shift = PAGE_SHIFT;
+	imu->kbuf = 0;
 	if (coalesced)
 		imu->folio_shift = data.folio_shift;
 	refcount_set(&imu->refs, 1);
diff --git a/io_uring/rsrc.h b/io_uring/rsrc.h
index 255ec94ea172..d54a5f84b9ed 100644
--- a/io_uring/rsrc.h
+++ b/io_uring/rsrc.h
@@ -26,16 +26,6 @@ struct io_rsrc_node {
 	};
 };
 
-struct io_mapped_buf {
-	u64 start;
-	unsigned int len;
-	unsigned int nr_bvecs;
-	refcount_t refs;
-	unsigned int acct_pages;
-	unsigned int folio_shift:6;
-	struct bio_vec bvec[] __counted_by(nr_bvecs);
-};
-
 struct io_imu_folio_data {
 	/* Head folio can be partially included in the fixed buf */
 	unsigned int nr_pages_head;
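[Editor's note] io_import_kbuf() above clamps the requested window against the leased buffer before handing a bvec iterator to the consumer. A userspace sketch of that validation logic (types and names are hypothetical, without the iov_iter plumbing):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Hypothetical mirror of the kernel-buffer fields io_import_kbuf() uses. */
struct kbuf {
	uint32_t len;		/* total bytes leased */
	uint32_t offset;	/* offset into the first bvec */
	int dir;		/* transfer direction the lease allows */
};

/*
 * Same checks as io_import_kbuf(): the direction must match, and the
 * requested [buf_off, buf_off + len) window must fit inside the lease.
 * Returns the absolute offset to advance the iterator by, or -1.
 * The second length check is written against (len - buf_off) so it
 * cannot overflow, matching the kernel code's ordering.
 */
static int64_t import_kbuf_offset(const struct kbuf *kb, int ddir,
				  uint64_t buf_off, size_t len)
{
	if (ddir != kb->dir)
		return -1;
	if (buf_off > kb->len)
		return -1;
	if (len > kb->len - buf_off)
		return -1;
	return (int64_t)(kb->offset + buf_off);
}
```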
us-smtp-delivery-124.mimecast.com (us-smtp-delivery-124.mimecast.com [170.10.129.124]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 9D38B1DF995 for ; Wed, 6 Nov 2024 12:27:38 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=170.10.129.124 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1730896062; cv=none; b=leBpGZPFOwy1/uEIV0zz6IvGeqm+uqJREsGJHhQYX0JYBGu4+/EDQM1E4AdSE2sIZOd6/B0KypxPW8c0D0muIvSto+VpsGnmOv2HknCUe7snaSahV+0vGW9MZr1NL9vbW4ZRo91cXvkqHR91bs+x3RSfgXrgDtmpUtpr/Pu1W60= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1730896062; c=relaxed/simple; bh=Sa4Gnl9J/SnoJIEA4GJcjY48+pimkSxJKcj3R9Jjx6o=; h=From:To:Cc:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version; b=s2qlulf5dXWxVJfBHkS1vrPHRMNaXaGnKofnEx8DjfMgu5evzr7ST3nla+EckJGIZgloDNJBtf3grkaiJBZQ3FIBUoxykP2smEwjeyMazJXpMsxKy5unXAbyaZigbNsJNQXu2IuBWh+67cvs2Wz4yDvpW7n2o6ZDPTvFtje4Ugk= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=redhat.com; spf=pass smtp.mailfrom=redhat.com; dkim=pass (1024-bit key) header.d=redhat.com header.i=@redhat.com header.b=NzHQ1tjB; arc=none smtp.client-ip=170.10.129.124 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=redhat.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=redhat.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (1024-bit key) header.d=redhat.com header.i=@redhat.com header.b="NzHQ1tjB" DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com; s=mimecast20190719; t=1730896057; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; 
bh=whxH4SPRWv0Hdf9zeWiFlnJwp/WHJ7cyfCorPtQzbjg=; b=NzHQ1tjBqXPJ6JMsG7HO+ZrgdQDW8YD+ZBJTDzDBH4R50uA3IebFu0bmq9jV225nlzHM8c xc+sBdrIvLHTbtznRsHdpWlxCH8DQm5E/+qEnoWQCYdSQyQLTVextRAIj+TFWvhZQs3dB+ EXI3mvzE48ptlZ8w4CCb0IwMK6MKhtI= Received: from mx-prod-mc-05.mail-002.prod.us-west-2.aws.redhat.com (ec2-54-186-198-63.us-west-2.compute.amazonaws.com [54.186.198.63]) by relay.mimecast.com with ESMTP with STARTTLS (version=TLSv1.3, cipher=TLS_AES_256_GCM_SHA384) id us-mta-173-thiCYAPVP2-iuInWaWcFmQ-1; Wed, 06 Nov 2024 07:27:34 -0500 X-MC-Unique: thiCYAPVP2-iuInWaWcFmQ-1 X-Mimecast-MFC-AGG-ID: thiCYAPVP2-iuInWaWcFmQ Received: from mx-prod-int-04.mail-002.prod.us-west-2.aws.redhat.com (mx-prod-int-04.mail-002.prod.us-west-2.aws.redhat.com [10.30.177.40]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature RSA-PSS (2048 bits) server-digest SHA256) (No client certificate requested) by mx-prod-mc-05.mail-002.prod.us-west-2.aws.redhat.com (Postfix) with ESMTPS id DC176195608B; Wed, 6 Nov 2024 12:27:32 +0000 (UTC) Received: from localhost (unknown [10.72.116.107]) by mx-prod-int-04.mail-002.prod.us-west-2.aws.redhat.com (Postfix) with ESMTP id D64C319560AA; Wed, 6 Nov 2024 12:27:31 +0000 (UTC) From: Ming Lei To: Jens Axboe , io-uring@vger.kernel.org, Pavel Begunkov Cc: linux-block@vger.kernel.org, Uday Shankar , Akilesh Kailash , Ming Lei Subject: [PATCH V9 5/7] io_uring: support leased group buffer with REQ_F_GROUP_BUF Date: Wed, 6 Nov 2024 20:26:54 +0800 Message-ID: <20241106122659.730712-6-ming.lei@redhat.com> In-Reply-To: <20241106122659.730712-1-ming.lei@redhat.com> References: <20241106122659.730712-1-ming.lei@redhat.com> Precedence: bulk X-Mailing-List: io-uring@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-Scanned-By: MIMEDefang 3.0 on 10.30.177.40 SQE group introduces one new mechanism to share resource among one group of requests, and all member requests can consume the resource leased by 
group leader efficiently in parallel. This patch uses the added SQE group to lease kernel buffer from group leader(driver) to members(io_uring) in sqe group: - this kernel buffer is owned by kernel device(driver), and has very short lifetime, such as, it is often aligned with block IO lifetime - group leader leases the kernel buffer from driver to member requests of io_uring subsystem - member requests uses the leased buffer to do FS or network IO, IOSQE_IO_DRAIN bit isn't used for group member IO, so it is mapped to GROUP_KBUF; the actual use becomes very similar with buffer select. - this kernel buffer is returned back after all member requests are completed io_uring builtin provide/register buffer isn't one good match for this use case: - complicated dependency on add/remove buffer this buffer has to be added/removed to one global table by add/remove OPs, and all consumer OPs have to sync with the add/remove OPs; either consumer OPs have to by issued one by one with IO_LINK; or two extra syscall are added for one time of buffer lease & consumption, this way slows down ublk io handling, and may lose zero copy value - application becomes more complicated - application may panic and the kernel buffer is left in io_uring, which complicates io_uring shutdown handling since returning back buffer needs to cowork with buffer owner - big change is needed in io_uring provide/register buffer - the requirement is just to lease the kernel buffer to io_uring subsystem for very short time, not necessary to move it into io_uring and make it global This way looks a bit similar with kernel's pipe/splice, but there are some important differences: - splice is for transferring data between two FDs via pipe, and fd_out can only read data from pipe, but data can't be written to; this feature can lease buffer from group leader(driver subsystem) to members(io_uring subsystem), so member request can write data to this buffer if the buffer direction is allowed to write to. 
- splice implements data transfer by moving pages between a subsystem and
  the pipe, which transfers page ownership; that is one of the most
  complicated parts of splice. This patch supports scenarios in which the
  buffer can't be transferred: the buffer is only borrowed by member
  requests for consumption and is returned after they consume it, so the
  buffer lifetime is aligned with the group leader lifetime and is
  simplified a lot. In particular, the buffer is guaranteed to be
  returned.

- splice basically can't run asynchronously

This can help implement generic zero copy between a device and related
operations, such as ublk and fuse.

Signed-off-by: Ming Lei
---
 include/linux/io_uring_types.h | 29 +++++++++++++++++-
 io_uring/io_uring.c            | 32 ++++++++++++++++----
 io_uring/io_uring.h            |  5 ++++
 io_uring/kbuf.c                | 55 ++++++++++++++++++++++++++++++++--
 io_uring/kbuf.h                | 49 ++++++++++++++++++++++++++++--
 io_uring/net.c                 | 27 ++++++++++++++++-
 io_uring/rw.c                  | 37 +++++++++++++++++++----
 7 files changed, 216 insertions(+), 18 deletions(-)

diff --git a/include/linux/io_uring_types.h b/include/linux/io_uring_types.h
index 9af83cf214c2..f3d891fded55 100644
--- a/include/linux/io_uring_types.h
+++ b/include/linux/io_uring_types.h
@@ -40,8 +40,16 @@ enum io_uring_cmd_flags {
 	IO_URING_F_COMPAT		= (1 << 12),
 };
 
+struct io_mapped_buf;
+typedef void (io_uring_kbuf_ack_t) (const struct io_mapped_buf *);
+
 struct io_mapped_buf {
-	u64		start;
+	/* start is always 0 for kernel buffer */
+	union {
+		u64		start;
+		/* called for returning back the kernel buffer */
+		io_uring_kbuf_ack_t	*kbuf_ack;
+	};
 	unsigned int	len;
 	unsigned int	nr_bvecs;
@@ -504,6 +512,7 @@ enum {
 	REQ_F_BUFFERS_COMMIT_BIT,
 	REQ_F_GROUP_LEADER_BIT,
 	REQ_F_BUF_NODE_BIT,
+	REQ_F_GROUP_BUF_BIT,
 
 	/* not a real bit, just to check we're not overflowing the space */
 	__REQ_F_LAST_BIT,
@@ -588,6 +597,16 @@ enum {
 	REQ_F_GROUP_LEADER	= IO_REQ_FLAG(REQ_F_GROUP_LEADER_BIT),
 	/* buf node is valid */
 	REQ_F_BUF_NODE		= IO_REQ_FLAG(REQ_F_BUF_NODE_BIT),
+	/*
+	 * Use group leader's buffer
+	 *
+	 * For group member, this flag is mapped from IOSQE_IO_DRAIN which
+	 * isn't used for group member
+	 *
+	 * Group buffer has to be imported in ->issue() since it depends on
+	 * group leader.
+	 */
+	REQ_F_GROUP_BUF		= IO_REQ_FLAG(REQ_F_GROUP_BUF_BIT),
 };
 
 typedef void (*io_req_tw_func_t)(struct io_kiocb *req, struct io_tw_state *ts);
@@ -670,6 +689,14 @@ struct io_kiocb {
 		struct io_buffer_list	*buf_list;
 
 		struct io_rsrc_node	*buf_node;
+
+		/* valid IFF REQ_F_GROUP_BUF is set */
+		union {
+			/* store group buffer for group leader */
+			const struct io_mapped_buf *grp_buf;
+			/* for group member */
+			bool	grp_buf_imported;
+		};
 	};
 
 	union {
diff --git a/io_uring/io_uring.c b/io_uring/io_uring.c
index 076171977d5e..0a87312083bd 100644
--- a/io_uring/io_uring.c
+++ b/io_uring/io_uring.c
@@ -114,7 +114,7 @@
 #define IO_REQ_CLEAN_FLAGS (REQ_F_BUFFER_SELECTED | REQ_F_NEED_CLEANUP | \
 			REQ_F_POLLED | REQ_F_INFLIGHT | REQ_F_CREDS | \
-			REQ_F_ASYNC_DATA)
+			REQ_F_ASYNC_DATA | REQ_F_GROUP_BUF)
 
 #define IO_REQ_CLEAN_SLOW_FLAGS (REQ_F_REFCOUNT | REQ_F_LINK | REQ_F_HARDLINK |\
 				 REQ_F_GROUP | IO_REQ_CLEAN_FLAGS)
@@ -391,6 +391,8 @@ static bool req_need_defer(struct io_kiocb *req, u32 seq)
 
 static void io_clean_op(struct io_kiocb *req)
 {
+	if (req->flags & REQ_F_GROUP_BUF)
+		io_drop_group_buf(req);
 	if (req->flags & REQ_F_BUFFER_SELECTED) {
 		spin_lock(&req->ctx->completion_lock);
 		io_kbuf_drop(req);
@@ -925,14 +927,20 @@ static void io_queue_group_members(struct io_kiocb *req)
 	req->grp_link = NULL;
 	while (member) {
 		struct io_kiocb *next = member->grp_link;
+		bool grp_buf = member->flags & REQ_F_GROUP_BUF;
 
 		member->grp_leader = req;
 		if (unlikely(member->flags & REQ_F_FAIL))
 			io_req_task_queue_fail(member, member->cqe.res);
+		else if (unlikely(grp_buf && member->flags & REQ_F_BUF_NODE))
+			io_req_task_queue_fail(member, -EINVAL);
 		else if (unlikely(req->flags & REQ_F_FAIL))
 			io_req_task_queue_fail(member, -ECANCELED);
-		else
+		else {
+			if (grp_buf)
+				io_req_mark_group_buf(member, false);
 			io_req_task_queue(member);
+		}
 		member = next;
 	}
 }
@@ -997,6 +1005,11 @@ static enum group_mem io_prep_free_group_req(struct io_kiocb *req,
 		io_queue_group_members(req);
 		return GROUP_LEADER;
 	}
+
+	/* we are done with leased group buffer */
+	if (req->flags & REQ_F_GROUP_BUF)
+		req->flags &= ~REQ_F_GROUP_BUF;
+
 	if (!req_is_last_group_member(req))
 		return GROUP_OTHER_MEMBER;
@@ -2196,9 +2209,18 @@ static int io_init_req(struct io_ring_ctx *ctx, struct io_kiocb *req,
 		if (sqe_flags & IOSQE_CQE_SKIP_SUCCESS)
 			ctx->drain_disabled = true;
 		if (sqe_flags & IOSQE_IO_DRAIN) {
-			if (ctx->drain_disabled)
-				return io_init_fail_req(req, -EOPNOTSUPP);
-			io_init_req_drain(req);
+			/* IO_DRAIN is mapped to GROUP_BUF for group members */
+			if (ctx->submit_state.group.head) {
+				/* can't do buffer select */
+				if (sqe_flags & IOSQE_BUFFER_SELECT)
+					return io_init_fail_req(req, -EINVAL);
+				req->flags &= ~REQ_F_IO_DRAIN;
+				req->flags |= REQ_F_GROUP_BUF;
+			} else {
+				if (ctx->drain_disabled)
+					return io_init_fail_req(req, -EOPNOTSUPP);
+				io_init_req_drain(req);
+			}
 		}
 	}
 	if (unlikely(ctx->restricted || ctx->drain_active || ctx->drain_next)) {
diff --git a/io_uring/io_uring.h b/io_uring/io_uring.h
index 57b0d0209097..dd61529cd382 100644
--- a/io_uring/io_uring.h
+++ b/io_uring/io_uring.h
@@ -364,6 +364,11 @@ static inline bool req_is_group_leader(struct io_kiocb *req)
 	return req->flags & REQ_F_GROUP_LEADER;
 }
 
+static inline bool req_is_group_member(struct io_kiocb *req)
+{
+	return (req->flags & REQ_F_GROUP) && !req_is_group_leader(req);
+}
+
 /*
  * Don't complete immediately but use deferred completion infrastructure.
  * Protected by ->uring_lock and can only be used either with
diff --git a/io_uring/kbuf.c b/io_uring/kbuf.c
index c4a776860cb4..1a8ed35d4d6c 100644
--- a/io_uring/kbuf.c
+++ b/io_uring/kbuf.c
@@ -847,9 +847,9 @@ int io_pbuf_mmap(struct file *file, struct vm_area_struct *vma)
  * Also kernel buffer lifetime is bound with request, and we needn't
  * to use rsrc_node to track its lifetime
  */
-int io_import_kbuf(int ddir, struct iov_iter *iter,
-		   const struct io_mapped_buf *kbuf,
-		   u64 buf_off, size_t len)
+static int io_import_kbuf(int ddir, struct iov_iter *iter,
+			  const struct io_mapped_buf *kbuf,
+			  u64 buf_off, size_t len)
 {
 	unsigned long offset = kbuf->offset;
 
@@ -872,3 +872,52 @@ int io_import_kbuf(int ddir, struct iov_iter *iter,
 
 	return 0;
 }
+
+int io_import_group_buf(struct io_kiocb *req, int dir, struct iov_iter *iter,
+			unsigned long buf_off, unsigned int len)
+{
+	struct io_kiocb *lead = req->grp_leader;
+	int ret;
+
+	if (!req_is_group_member(req))
+		return -EINVAL;
+
+	if (!lead || !(lead->flags & REQ_F_GROUP_BUF))
+		return -EINVAL;
+
+	/* buffer node may be assigned just before importing */
+	if (req->flags & REQ_F_BUF_NODE)
+		return -EINVAL;
+
+	if (io_req_group_buf_imported(req))
+		return 0;
+
+	ret = io_import_kbuf(dir, iter, lead->grp_buf, buf_off, len);
+	if (!ret)
+		io_req_mark_group_buf(req, true);
+	return ret;
+}
+
+int io_lease_group_kbuf(struct io_kiocb *req,
+			const struct io_mapped_buf *grp_buf)
+{
+	if (!(req->flags & REQ_F_GROUP_LEADER))
+		return -EINVAL;
+
+	if (req->flags & (REQ_F_BUFFER_SELECT | REQ_F_BUF_NODE))
+		return -EINVAL;
+
+	if (!grp_buf->kbuf_ack || !grp_buf->pbvec || !grp_buf->kbuf)
+		return -EINVAL;
+
+	/*
+	 * Allow io_uring OPs to borrow this leased kbuf, which is returned
+	 * back by calling `kbuf_ack` when the group leader is freed.
+	 *
+	 * Not like pipe/splice, this kernel buffer is always owned by the
+	 * provider, and has to be returned back.
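The ownership rule documented above (the provider keeps owning the buffer, and gets it back via `kbuf_ack` once the borrowers are done) can be modeled in plain userspace C. The sketch below uses hypothetical names and a manual reference count; it is only an illustration of the lease/ack contract, not the kernel implementation:

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical userspace model of the leased-buffer contract: the
 * provider owns the buffer, borrowers only take references, and the
 * ack callback fires exactly once, when the last borrower is done
 * (mirroring kbuf_ack being called when the group leader is freed). */
struct leased_buf {
	int refs;                         /* active borrowers */
	bool acked;                       /* already returned to provider? */
	void (*ack)(struct leased_buf *); /* provider's give-back hook */
};

static void lease_get(struct leased_buf *b)
{
	b->refs++;
}

static void lease_put(struct leased_buf *b)
{
	/* last reference returns the buffer; it is never freed here,
	 * because the provider still owns it */
	if (--b->refs == 0 && !b->acked) {
		b->acked = true;
		b->ack(b);
	}
}

static int acks;
static void count_ack(struct leased_buf *b)
{
	(void)b;
	acks++;
}
```

The key point the model makes: unlike splice, no ownership transfer ever happens, so shutdown handling reduces to making sure the last reference is dropped.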
+	 */
+	req->grp_buf = grp_buf;
+	req->flags |= REQ_F_GROUP_BUF;
+	return 0;
+}
diff --git a/io_uring/kbuf.h b/io_uring/kbuf.h
index 04ccd52dd0ad..f98e2d8dc48c 100644
--- a/io_uring/kbuf.h
+++ b/io_uring/kbuf.h
@@ -88,9 +88,11 @@ void io_put_bl(struct io_ring_ctx *ctx, struct io_buffer_list *bl);
 struct io_buffer_list *io_pbuf_get_bl(struct io_ring_ctx *ctx,
 				      unsigned long bgid);
 int io_pbuf_mmap(struct file *file, struct vm_area_struct *vma);
-int io_import_kbuf(int ddir, struct iov_iter *iter,
-		   const struct io_mapped_buf *kbuf,
-		   u64 buf_off, size_t len);
+
+int io_import_group_buf(struct io_kiocb *req, int dir, struct iov_iter *iter,
+			unsigned long buf_off, unsigned int len);
+int io_lease_group_kbuf(struct io_kiocb *req,
+			const struct io_mapped_buf *grp_buf);
 
 static inline bool io_kbuf_recycle_ring(struct io_kiocb *req)
 {
@@ -223,4 +225,45 @@ static inline unsigned int io_put_kbufs(struct io_kiocb *req, int len,
 {
 	return __io_put_kbufs(req, len, nbufs, issue_flags);
 }
+
+static inline bool io_use_group_buf(struct io_kiocb *req)
+{
+	return req->flags & REQ_F_GROUP_BUF;
+}
+
+static inline bool io_use_group_kbuf(struct io_kiocb *req)
+{
+	if (io_use_group_buf(req))
+		return req->grp_leader && req->grp_leader->grp_buf->kbuf;
+	return false;
+}
+
+static inline void io_drop_group_buf(struct io_kiocb *req)
+{
+	const struct io_mapped_buf *gbuf = req->grp_buf;
+
+	if (gbuf && gbuf->kbuf)
+		gbuf->kbuf_ack(gbuf);
+}
+
+/* zero remained bytes of kernel buffer for avoiding to leak data */
+static inline void io_req_zero_remained(struct io_kiocb *req, struct iov_iter *iter)
+{
+	size_t left = iov_iter_count(iter);
+
+	if (iov_iter_rw(iter) == READ && left > 0)
+		iov_iter_zero(left, iter);
+}
+
+/* For group member only */
+static inline void io_req_mark_group_buf(struct io_kiocb *req, bool imported)
+{
+	req->grp_buf_imported = imported;
+}
+
+/* For group member only */
+static inline bool io_req_group_buf_imported(struct io_kiocb *req)
+{
+	return req->grp_buf_imported;
+}
 #endif
diff --git a/io_uring/net.c b/io_uring/net.c
index 2ccc2b409431..e87b67498733 100644
--- a/io_uring/net.c
+++ b/io_uring/net.c
@@ -88,6 +88,13 @@ struct io_sr_msg {
  */
 #define MULTISHOT_MAX_RETRY	32
 
+#define user_ptr_to_u64(x) (		\
+{					\
+	typecheck(void __user *, (x));	\
+	(u64)(unsigned long)(x);	\
+}					\
+)
+
 int io_shutdown_prep(struct io_kiocb *req, const struct io_uring_sqe *sqe)
 {
 	struct io_shutdown *shutdown = io_kiocb_to_cmd(req, struct io_shutdown);
@@ -384,7 +391,7 @@ static int io_send_setup(struct io_kiocb *req, const struct io_uring_sqe *sqe)
 		kmsg->msg.msg_name = &kmsg->addr;
 		kmsg->msg.msg_namelen = addr_len;
 	}
-	if (!io_do_buffer_select(req)) {
+	if (!io_do_buffer_select(req) && !io_use_group_buf(req)) {
 		ret = import_ubuf(ITER_SOURCE, sr->buf, sr->len,
 				  &kmsg->msg.msg_iter);
 		if (unlikely(ret < 0))
@@ -599,6 +606,15 @@ int io_send(struct io_kiocb *req, unsigned int issue_flags)
 	if (issue_flags & IO_URING_F_NONBLOCK)
 		flags |= MSG_DONTWAIT;
 
+	if (io_use_group_buf(req)) {
+		ret = io_import_group_buf(req, ITER_SOURCE,
+					  &kmsg->msg.msg_iter,
+					  user_ptr_to_u64(sr->buf),
+					  sr->len);
+		if (unlikely(ret))
+			return ret;
+	}
+
 retry_bundle:
 	if (io_do_buffer_select(req)) {
 		struct buf_sel_arg arg = {
@@ -889,6 +905,8 @@ static inline bool io_recv_finish(struct io_kiocb *req, int *ret,
 			*ret = IOU_STOP_MULTISHOT;
 		else
 			*ret = IOU_OK;
+		if (io_use_group_kbuf(req))
+			io_req_zero_remained(req, &kmsg->msg.msg_iter);
 		io_req_msg_cleanup(req, issue_flags);
 		return true;
 	}
@@ -1161,6 +1179,13 @@ int io_recv(struct io_kiocb *req, unsigned int issue_flags)
 			goto out_free;
 		}
 		sr->buf = NULL;
+	} else if (io_use_group_buf(req)) {
+		ret = io_import_group_buf(req, ITER_DEST,
+					  &kmsg->msg.msg_iter,
+					  user_ptr_to_u64(sr->buf),
+					  sr->len);
+		if (unlikely(ret))
+			goto out_free;
 	}
 
 	kmsg->msg.msg_flags = 0;
diff --git a/io_uring/rw.c b/io_uring/rw.c
index e368b9afde03..c5ab464dd51d 100644
--- a/io_uring/rw.c
+++ b/io_uring/rw.c
@@ -488,6 +488,11 @@ static bool __io_complete_rw_common(struct io_kiocb *req, long res)
 		}
 		req_set_fail(req);
 		req->cqe.res = res;
+		if (io_use_group_kbuf(req)) {
+			struct io_async_rw *io = req->async_data;
+
+			io_req_zero_remained(req, &io->iter);
+		}
 	}
 	return false;
 }
@@ -629,11 +634,15 @@ static inline loff_t *io_kiocb_ppos(struct kiocb *kiocb)
  */
 static ssize_t loop_rw_iter(int ddir, struct io_rw *rw, struct iov_iter *iter)
 {
+	struct io_kiocb *req = cmd_to_io_kiocb(rw);
 	struct kiocb *kiocb = &rw->kiocb;
 	struct file *file = kiocb->ki_filp;
 	ssize_t ret = 0;
 	loff_t *ppos;
 
+	if (io_use_group_kbuf(req))
+		return -EOPNOTSUPP;
+
 	/*
 	 * Don't support polled IO through this interface, and we can't
 	 * support non-blocking either. For the latter, this just causes
@@ -832,20 +841,32 @@ static int io_rw_init_file(struct io_kiocb *req, fmode_t mode, int rw_type)
 	return 0;
 }
 
+static int rw_import_group_buf(struct io_kiocb *req, int dir,
+			       struct io_rw *rw, struct io_async_rw *io)
+{
+	int ret = io_import_group_buf(req, dir, &io->iter, rw->addr, rw->len);
+
+	if (!ret)
+		iov_iter_save_state(&io->iter, &io->iter_state);
+	return ret;
+}
+
 static int __io_read(struct io_kiocb *req, unsigned int issue_flags)
 {
 	bool force_nonblock = issue_flags & IO_URING_F_NONBLOCK;
 	struct io_rw *rw = io_kiocb_to_cmd(req, struct io_rw);
 	struct io_async_rw *io = req->async_data;
 	struct kiocb *kiocb = &rw->kiocb;
-	ssize_t ret;
+	ssize_t ret = 0;
 	loff_t *ppos;
 
-	if (io_do_buffer_select(req)) {
+	if (io_do_buffer_select(req))
 		ret = io_import_iovec(ITER_DEST, req, io, issue_flags);
-		if (unlikely(ret < 0))
-			return ret;
-	}
+	else if (io_use_group_buf(req))
+		ret = rw_import_group_buf(req, ITER_DEST, rw, io);
+	if (unlikely(ret < 0))
+		return ret;
+
 	ret = io_rw_init_file(req, FMODE_READ, READ);
 	if (unlikely(ret))
 		return ret;
@@ -1028,6 +1049,12 @@ int io_write(struct io_kiocb *req, unsigned int issue_flags)
 	ssize_t ret, ret2;
 	loff_t *ppos;
 
+	if (io_use_group_buf(req)) {
+		ret = rw_import_group_buf(req, ITER_SOURCE, rw, io);
+		if (unlikely(ret < 0))
+			return ret;
+	}
+
 	ret = io_rw_init_file(req, FMODE_WRITE, WRITE);
 	if (unlikely(ret))
 		return ret;

From patchwork Wed Nov 6 12:26:55 2024
From: Ming Lei
To: Jens Axboe, io-uring@vger.kernel.org, Pavel Begunkov
Cc: linux-block@vger.kernel.org, Uday Shankar, Akilesh Kailash, Ming Lei
Subject: [PATCH V9 6/7] io_uring/uring_cmd: support leasing device kernel buffer to io_uring
Date: Wed, 6 Nov 2024 20:26:55 +0800
Message-ID: <20241106122659.730712-7-ming.lei@redhat.com>
In-Reply-To: <20241106122659.730712-1-ming.lei@redhat.com>
References: <20241106122659.730712-1-ming.lei@redhat.com>
Add io_uring_cmd_lease_kbuf() so that a driver can lease its kernel
buffer to io_uring. The leased buffer can only be consumed by io_uring
OPs in the same group, and the uring_cmd has to be a group leader.

This supports generic device zero copy over a device buffer from
userspace:

- create an SQE group

- lease a device buffer to io_uring via the uring_cmd group leader

- io_uring member OPs consume the kernel buffer by passing
  IOSQE_IO_DRAIN, which isn't used for group members and is mapped to
  GROUP_BUF

- the kernel buffer is returned after all member OPs are completed

Signed-off-by: Ming Lei
---
 include/linux/io_uring/cmd.h |  7 +++++++
 io_uring/uring_cmd.c         | 10 ++++++++++
 2 files changed, 17 insertions(+)

diff --git a/include/linux/io_uring/cmd.h b/include/linux/io_uring/cmd.h
index 578a3fdf5c71..8a6f1db1ca84 100644
--- a/include/linux/io_uring/cmd.h
+++ b/include/linux/io_uring/cmd.h
@@ -60,6 +60,8 @@ void io_uring_cmd_mark_cancelable(struct io_uring_cmd *cmd,
 /* Execute the request from a blocking context */
 void io_uring_cmd_issue_blocking(struct io_uring_cmd *ioucmd);
 
+int io_uring_cmd_lease_kbuf(struct io_uring_cmd *ioucmd,
+		const struct io_mapped_buf *grp_kbuf);
 #else
 static inline int io_uring_cmd_import_fixed(u64 ubuf, unsigned long len, int rw,
 			  struct iov_iter *iter, void *ioucmd)
@@ -82,6 +84,11 @@ static inline void io_uring_cmd_mark_cancelable(struct io_uring_cmd *cmd,
 static inline void io_uring_cmd_issue_blocking(struct io_uring_cmd *ioucmd)
 {
 }
+static inline int io_uring_cmd_lease_kbuf(struct io_uring_cmd *ioucmd,
+		const struct io_mapped_buf *grp_kbuf)
+{
+	return -EOPNOTSUPP;
+}
 #endif
 
 /*
diff --git a/io_uring/uring_cmd.c b/io_uring/uring_cmd.c
index 40b8b777ba12..5d59183212f6 100644
--- a/io_uring/uring_cmd.c
+++ b/io_uring/uring_cmd.c
@@ -15,6 +15,7 @@
 #include "alloc_cache.h"
 #include "rsrc.h"
 #include "uring_cmd.h"
+#include "kbuf.h"
 
 static struct uring_cache *io_uring_async_get(struct io_kiocb *req)
 {
@@ -175,6 +176,15 @@ void io_uring_cmd_done(struct io_uring_cmd *ioucmd, ssize_t ret, ssize_t res2,
 }
 EXPORT_SYMBOL_GPL(io_uring_cmd_done);
 
+int io_uring_cmd_lease_kbuf(struct io_uring_cmd *ioucmd,
+		const struct io_mapped_buf *grp_kbuf)
+{
+	struct io_kiocb *req = cmd_to_io_kiocb(ioucmd);
+
+	return io_lease_group_kbuf(req, grp_kbuf);
+}
+EXPORT_SYMBOL_GPL(io_uring_cmd_lease_kbuf);
+
 static int io_uring_cmd_prep_setup(struct io_kiocb *req,
 				   const struct io_uring_sqe *sqe)
 {

From patchwork Wed Nov 6 12:26:56 2024

From: Ming Lei
To: Jens Axboe, io-uring@vger.kernel.org, Pavel Begunkov
Cc: linux-block@vger.kernel.org, Uday Shankar, Akilesh Kailash, Ming Lei
Subject: [PATCH V9 7/7] ublk: support leasing io buffer to io_uring
Date: Wed, 6 Nov 2024 20:26:56 +0800
Message-ID: <20241106122659.730712-8-ming.lei@redhat.com>
In-Reply-To: <20241106122659.730712-1-ming.lei@redhat.com>
References: <20241106122659.730712-1-ming.lei@redhat.com>

Support leasing the block IO buffer to userspace, so that userspace can
run io_uring operations (FS, network IO) against it; this enables ublk
zero copy.

userspace code:

	git clone https://github.com/ublk-org/ublksrv.git -b uring_group

Both loop and nbd zero copy (io_uring send and send zc) are covered.

Performance improvement is quite obvious in big block size tests; for
example, 'loop --buffered_io' perf is doubled in the 64KB block test
("loop/007 vs loop/009").
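The member-side contract the series relies on (patch 5's io_init_req() change) is that, inside an SQE group, IOSQE_IO_DRAIN is repurposed to mean "use the leader's leased buffer", and combining it with IOSQE_BUFFER_SELECT is rejected. A standalone model of that mapping, with hypothetical flag values, looks like this:

```c
#include <assert.h>
#include <stdbool.h>

/* Standalone model of the flag remapping (flag values hypothetical;
 * the logic mirrors the io_init_req() change in patch 5): inside an
 * SQE group, IOSQE_IO_DRAIN requests the leader's leased buffer,
 * and buffer select can't be combined with it. */
enum {
	SQE_IO_DRAIN      = 1 << 0,
	SQE_BUFFER_SELECT = 1 << 1,
};
enum {
	REQ_IO_DRAIN  = 1 << 0,
	REQ_GROUP_BUF = 1 << 1,
};

/* Returns the resulting request flags, or -1 for an invalid combination. */
static int map_member_flags(unsigned sqe_flags, bool in_group)
{
	unsigned req_flags = 0;

	if (sqe_flags & SQE_IO_DRAIN) {
		if (in_group) {
			/* can't do buffer select with a group buffer */
			if (sqe_flags & SQE_BUFFER_SELECT)
				return -1;
			req_flags |= REQ_GROUP_BUF; /* drain remapped */
		} else {
			req_flags |= REQ_IO_DRAIN;  /* normal drain */
		}
	}
	return (int)req_flags;
}
```

Reusing an existing SQE flag this way avoids burning one of the few remaining IOSQE bits, at the cost of DRAIN semantics being unavailable for group members.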
Signed-off-by: Ming Lei
---
 drivers/block/ublk_drv.c      | 160 ++++++++++++++++++++++++++++++++--
 include/uapi/linux/ublk_cmd.h |  11 ++-
 2 files changed, 161 insertions(+), 10 deletions(-)

diff --git a/drivers/block/ublk_drv.c b/drivers/block/ublk_drv.c
index 6ba2c1dd1d87..5803c6418d1e 100644
--- a/drivers/block/ublk_drv.c
+++ b/drivers/block/ublk_drv.c
@@ -51,6 +51,8 @@
 /* private ioctl command mirror */
 #define UBLK_CMD_DEL_DEV_ASYNC	_IOC_NR(UBLK_U_CMD_DEL_DEV_ASYNC)
 
+#define UBLK_IO_PROVIDE_IO_BUF	_IOC_NR(UBLK_U_IO_PROVIDE_IO_BUF)
+
 /* All UBLK_F_* have to be included into UBLK_F_ALL */
 #define UBLK_F_ALL (UBLK_F_SUPPORT_ZERO_COPY \
 		| UBLK_F_URING_CMD_COMP_IN_TASK \
@@ -71,6 +73,9 @@
 struct ublk_rq_data {
 	struct llist_node node;
 
 	struct kref ref;
+
+	bool allocated_bvec;
+	struct io_mapped_buf buf[0];
 };
 
 struct ublk_uring_cmd_pdu {
@@ -189,11 +194,15 @@ struct ublk_params_header {
 	__u32	types;
 };
 
+static inline struct request *__ublk_check_and_get_req(struct ublk_device *ub,
+		struct ublk_queue *ubq, int tag, size_t offset);
 static bool ublk_abort_requests(struct ublk_device *ub, struct ublk_queue *ubq);
 static inline unsigned int ublk_req_build_flags(struct request *req);
 static inline struct ublksrv_io_desc *ublk_get_iod(struct ublk_queue *ubq,
 						   int tag);
+static void ublk_io_buf_giveback_cb(const struct io_mapped_buf *buf);
+
 static inline bool ublk_dev_is_user_copy(const struct ublk_device *ub)
 {
 	return ub->dev_info.flags & UBLK_F_USER_COPY;
@@ -588,6 +597,11 @@ static inline bool ublk_need_req_ref(const struct ublk_queue *ubq)
 	return ublk_support_user_copy(ubq);
 }
 
+static inline bool ublk_support_zc(const struct ublk_queue *ubq)
+{
+	return ubq->flags & UBLK_F_SUPPORT_ZERO_COPY;
+}
+
 static inline void ublk_init_req_ref(const struct ublk_queue *ubq,
 		struct request *req)
 {
@@ -851,6 +865,72 @@ static size_t ublk_copy_user_pages(const struct request *req,
 	return done;
}
 
+/*
+ * The built command buffer is immutable, so it is fine to feed it to
+ * concurrent io_uring provide buf commands
+ */
+static int ublk_init_zero_copy_buffer(struct request *req)
+{
+	struct ublk_rq_data *data = blk_mq_rq_to_pdu(req);
+	struct io_mapped_buf *imu = data->buf;
+	struct req_iterator rq_iter;
+	unsigned int nr_bvecs = 0;
+	struct bio_vec *bvec;
+	unsigned int offset;
+	struct bio_vec bv;
+
+	if (!ublk_rq_has_data(req))
+		goto exit;
+
+	rq_for_each_bvec(bv, req, rq_iter)
+		nr_bvecs++;
+
+	if (!nr_bvecs)
+		goto exit;
+
+	if (req->bio != req->biotail) {
+		int idx = 0;
+
+		bvec = kvmalloc_array(nr_bvecs, sizeof(struct bio_vec),
+				GFP_NOIO);
+		if (!bvec)
+			return -ENOMEM;
+
+		offset = 0;
+		rq_for_each_bvec(bv, req, rq_iter)
+			bvec[idx++] = bv;
+		data->allocated_bvec = true;
+	} else {
+		struct bio *bio = req->bio;
+
+		offset = bio->bi_iter.bi_bvec_done;
+		bvec = __bvec_iter_bvec(bio->bi_io_vec, bio->bi_iter);
+	}
+	imu->kbuf = 1;
+	imu->pbvec = bvec;
+	imu->nr_bvecs = nr_bvecs;
+	imu->offset = offset;
+	imu->len = blk_rq_bytes(req);
+	imu->dir = req_op(req) == REQ_OP_READ ? ITER_DEST : ITER_SOURCE;
+	imu->kbuf_ack = ublk_io_buf_giveback_cb;
+
+	return 0;
+exit:
+	imu->pbvec = NULL;
+	return 0;
+}
+
+static void ublk_deinit_zero_copy_buffer(struct request *req)
+{
+	struct ublk_rq_data *data = blk_mq_rq_to_pdu(req);
+	struct io_mapped_buf *imu = data->buf;
+
+	if (data->allocated_bvec) {
+		kvfree(imu->pbvec);
+		data->allocated_bvec = false;
+	}
+}
+
 static inline bool ublk_need_map_req(const struct request *req)
 {
 	return ublk_rq_has_data(req) && req_op(req) == REQ_OP_WRITE;
@@ -862,13 +942,25 @@ static inline bool ublk_need_unmap_req(const struct request *req)
 	       (req_op(req) == REQ_OP_READ || req_op(req) == REQ_OP_DRV_IN);
 }
 
-static int ublk_map_io(const struct ublk_queue *ubq, const struct request *req,
+static int ublk_map_io(const struct ublk_queue *ubq, struct request *req,
 		struct ublk_io *io)
 {
 	const unsigned int rq_bytes = blk_rq_bytes(req);
 
-	if (ublk_support_user_copy(ubq))
+	if (ublk_support_user_copy(ubq)) {
+		if (ublk_support_zc(ubq)) {
+			int ret = ublk_init_zero_copy_buffer(req);
+
+			/*
+			 * The only failure is -ENOMEM for allocating providing
+			 * buffer command, return zero so that we can requeue
+			 * this req.
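ublk_init_zero_copy_buffer() above walks the request once to count segments, then snapshots them into an immutable table whose total length is the request size. A standalone model of that two-pass construction, with hypothetical types standing in for bio_vec and the request iterator, can be sketched as:

```c
#include <assert.h>
#include <stddef.h>
#include <stdlib.h>

/* Hypothetical model of building the immutable segment table:
 * count the request's segments, then copy them into an array and
 * record the total byte count (the blk_rq_bytes() equivalent). */
struct seg {
	size_t off, len;        /* stand-in for a bio_vec */
};

struct kbuf_desc {
	struct seg *segs;       /* snapshot, never mutated afterwards */
	unsigned nr_segs;
	size_t total;
};

static int build_desc(const struct seg *in, unsigned n, struct kbuf_desc *d)
{
	unsigned i;

	/* kvmalloc_array() in the real driver */
	d->segs = malloc(n * sizeof(*in));
	if (!d->segs)
		return -1;

	d->total = 0;
	for (i = 0; i < n; i++) {
		d->segs[i] = in[i];
		d->total += in[i].len;
	}
	d->nr_segs = n;
	return 0;
}
```

Because the table is never mutated after construction, it can safely be handed to concurrent consumers, which is exactly the property the driver comment calls out.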
+			 */
+			if (unlikely(ret))
+				return 0;
+		}
 		return rq_bytes;
+	}
 
 	/*
 	 * no zero copy, we delay copy WRITE request data into ublksrv
@@ -886,13 +978,16 @@ static int ublk_map_io(const struct ublk_queue *ubq, const struct request *req,
 }
 
 static int ublk_unmap_io(const struct ublk_queue *ubq,
-		const struct request *req,
+		struct request *req,
 		struct ublk_io *io)
 {
 	const unsigned int rq_bytes = blk_rq_bytes(req);
 
-	if (ublk_support_user_copy(ubq))
+	if (ublk_support_user_copy(ubq)) {
+		if (ublk_support_zc(ubq))
+			ublk_deinit_zero_copy_buffer(req);
 		return rq_bytes;
+	}
 
 	if (ublk_need_unmap_req(req)) {
 		struct iov_iter iter;
@@ -1038,6 +1133,7 @@ static inline void __ublk_complete_rq(struct request *req)
 	return;
 exit:
+	ublk_deinit_zero_copy_buffer(req);
 	blk_mq_end_request(req, res);
 }
@@ -1680,6 +1776,45 @@ static inline void ublk_prep_cancel(struct io_uring_cmd *cmd,
 		io_uring_cmd_mark_cancelable(cmd, issue_flags);
 }
 
+static void ublk_io_buf_giveback_cb(const struct io_mapped_buf *buf)
+{
+	struct ublk_rq_data *data = container_of(buf, struct ublk_rq_data,
+			buf[0]);
+	struct request *req = blk_mq_rq_from_pdu(data);
+	struct ublk_queue *ubq = req->mq_hctx->driver_data;
+
+	ublk_put_req_ref(ubq, req);
+}
+
+static int ublk_provide_io_buf(struct io_uring_cmd *cmd,
+		struct ublk_queue *ubq, int tag)
+{
+	struct ublk_device *ub = cmd->file->private_data;
+	struct ublk_rq_data *data;
+	struct request *req;
+
+	if (!ub)
+		return -EPERM;
+
+	req = __ublk_check_and_get_req(ub, ubq, tag, 0);
+	if (!req)
+		return -EINVAL;
+
+	pr_devel("%s: qid %d tag %u request bytes %u\n",
+			__func__, tag, ubq->q_id, blk_rq_bytes(req));
+
+	data = blk_mq_rq_to_pdu(req);
+
+	/*
+	 * io_uring guarantees that the callback will be called after
+	 * the provided buffer is consumed, and it is automatic removal
+	 * before this uring command is freed.
+	 *
+	 * This request won't be completed unless the callback is called,
+	 * so ublk module won't be unloaded too.
+	 */
+	return io_uring_cmd_lease_kbuf(cmd, data->buf);
+}
+
 static int __ublk_ch_uring_cmd(struct io_uring_cmd *cmd,
 			       unsigned int issue_flags,
 			       const struct ublksrv_io_cmd *ub_cmd)
@@ -1731,6 +1866,10 @@ static int __ublk_ch_uring_cmd(struct io_uring_cmd *cmd,
 
 	ret = -EINVAL;
 	switch (_IOC_NR(cmd_op)) {
+	case UBLK_IO_PROVIDE_IO_BUF:
+		if (unlikely(!ublk_support_zc(ubq)))
+			goto out;
+		return ublk_provide_io_buf(cmd, ubq, tag);
 	case UBLK_IO_FETCH_REQ:
 		/* UBLK_IO_FETCH_REQ is only allowed before queue is setup */
 		if (ublk_queue_ready(ubq)) {
@@ -2149,11 +2288,14 @@ static void ublk_align_max_io_size(struct ublk_device *ub)
 
 static int ublk_add_tag_set(struct ublk_device *ub)
 {
+	int zc = !!(ub->dev_info.flags & UBLK_F_SUPPORT_ZERO_COPY);
+	struct ublk_rq_data *data;
+
 	ub->tag_set.ops = &ublk_mq_ops;
 	ub->tag_set.nr_hw_queues = ub->dev_info.nr_hw_queues;
 	ub->tag_set.queue_depth = ub->dev_info.queue_depth;
 	ub->tag_set.numa_node =	NUMA_NO_NODE;
-	ub->tag_set.cmd_size = sizeof(struct ublk_rq_data);
+	ub->tag_set.cmd_size = struct_size(data, buf, zc);
 	ub->tag_set.flags = BLK_MQ_F_SHOULD_MERGE;
 	ub->tag_set.driver_data = ub;
 	return blk_mq_alloc_tag_set(&ub->tag_set);
@@ -2458,8 +2600,12 @@ static int ublk_ctrl_add_dev(struct io_uring_cmd *cmd)
 		goto out_free_dev_number;
 	}
 
-	/* We are not ready to support zero copy */
-	ub->dev_info.flags &= ~UBLK_F_SUPPORT_ZERO_COPY;
+	/* zero copy depends on user copy */
+	if ((ub->dev_info.flags & UBLK_F_SUPPORT_ZERO_COPY) &&
+			!ublk_dev_is_user_copy(ub)) {
+		ret = -EINVAL;
+		goto out_free_dev_number;
+	}
 
 	ub->dev_info.nr_hw_queues = min_t(unsigned int,
 			ub->dev_info.nr_hw_queues, nr_cpu_ids);
diff --git a/include/uapi/linux/ublk_cmd.h b/include/uapi/linux/ublk_cmd.h
index 12873639ea96..04d73b349709 100644
--- a/include/uapi/linux/ublk_cmd.h
+++ b/include/uapi/linux/ublk_cmd.h
@@ -94,6 +94,8 @@
 	_IOWR('u', UBLK_IO_COMMIT_AND_FETCH_REQ, struct ublksrv_io_cmd)
 #define	UBLK_U_IO_NEED_GET_DATA		\
 	_IOWR('u', UBLK_IO_NEED_GET_DATA, struct ublksrv_io_cmd)
+#define	UBLK_U_IO_PROVIDE_IO_BUF	\
+	_IOWR('u', 0x23, struct ublksrv_io_cmd)
 
 /* only ABORT means that no re-fetch */
 #define UBLK_IO_RES_OK			0
@@ -127,9 +129,12 @@
 #define UBLKSRV_IO_BUF_TOTAL_SIZE	(1ULL << UBLKSRV_IO_BUF_TOTAL_BITS)
 
 /*
- * zero copy requires 4k block size, and can remap ublk driver's io
- * request into ublksrv's vm space
- */
+ * io_uring provide kbuf command based zero copy
+ *
+ * Not available for UBLK_F_UNPRIVILEGED_DEV, because we rely on ublk
+ * server to fill up request buffer for READ IO, and ublk server can't
+ * be trusted in case of UBLK_F_UNPRIVILEGED_DEV.
+ */
 #define UBLK_F_SUPPORT_ZERO_COPY	(1ULL << 0)

/*
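The tag-set change above sizes the per-request pdu with struct_size(data, buf, zc), so devices without zero copy pay nothing for the trailing io_mapped_buf. The sizing arithmetic can be sketched standalone with a flexible array member (types below are hypothetical stand-ins):

```c
#include <assert.h>
#include <stddef.h>

/* Standalone sketch of the struct_size(data, buf, zc) sizing used in
 * ublk_add_tag_set(): the pdu carries a trailing buffer descriptor
 * only when zero copy is enabled. Types are hypothetical stand-ins. */
struct mapped_buf {
	unsigned len;
	unsigned nr_bvecs;
};

struct rq_data {
	int ref;
	struct mapped_buf buf[]; /* present only for zero-copy devices */
};

static size_t rq_data_size(int zc)
{
	/* struct_size(data, buf, zc) ==
	 *     sizeof(*data) + zc * sizeof(data->buf[0]) */
	return sizeof(struct rq_data) + (size_t)zc * sizeof(struct mapped_buf);
}
```

The kernel's struct_size() additionally checks the multiply and add for overflow, which this sketch omits; the point here is only that the flexible member makes the zero-copy storage free when `zc == 0`.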