From patchwork Wed Feb 12 18:57:52 2025
X-Patchwork-Submitter: David Wei
X-Patchwork-Id: 13972312
From: David Wei
To: io-uring@vger.kernel.org, netdev@vger.kernel.org
Cc: Jens Axboe, Pavel Begunkov, Jakub Kicinski, Paolo Abeni,
 "David S. Miller", Eric Dumazet, Jesper Dangaard Brouer, David Ahern,
 Mina Almasry, Stanislav Fomichev, Joe Damato, Pedro Tammela
Subject: [PATCH net-next v13 02/11] io_uring/zcrx: add io_zcrx_area
Date: Wed, 12 Feb 2025 10:57:52 -0800
Message-ID: <20250212185859.3509616-3-dw@davidwei.uk>
X-Mailer: git-send-email 2.43.5
In-Reply-To: <20250212185859.3509616-1-dw@davidwei.uk>
References: <20250212185859.3509616-1-dw@davidwei.uk>
X-Mailing-List: io-uring@vger.kernel.org

Add io_zcrx_area, which represents a region of userspace memory that is
used for zero copy. During ifq registration, userspace passes in the
uaddr and len of the userspace memory, which is then pinned by the
kernel. Each net_iov is mapped to one of these pages.

The freelist is a spinlock-protected list that keeps track of all the
net_iovs/pages that aren't used.

For now, there is only one area per ifq and area registration happens
implicitly as part of ifq registration. There is no API for
adding/removing areas yet. The struct for area registration is there for
future extensibility once we support multiple areas and TCP devmem.
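For readers unfamiliar with the flow, here is a minimal userspace sketch of
how the new struct might be filled in and handed to the kernel. It is
illustrative only: the IORING_REGISTER_ZCRX_IFQ opcode and struct
io_uring_zcrx_ifq_reg come from patch 01 of this series, the
register_zcrx_area() helper is hypothetical, and the refill-ring region,
offsets and if_rxq setup are elided.

/*
 * Minimal userspace sketch (not part of this patch): registering one
 * zero copy area together with the ifq. Assumes the
 * IORING_REGISTER_ZCRX_IFQ opcode and struct io_uring_zcrx_ifq_reg from
 * patch 01; ring region, offsets and if_rxq setup are elided and error
 * handling is minimal.
 */
#include <stdint.h>
#include <string.h>
#include <sys/mman.h>
#include <sys/syscall.h>
#include <unistd.h>
#include <linux/io_uring.h>

static int register_zcrx_area(int ring_fd, size_t area_len)
{
	struct io_uring_zcrx_area_reg area_reg;
	struct io_uring_zcrx_ifq_reg reg;
	void *buf;

	/* addr and len must both be page aligned, or the kernel returns -EINVAL */
	buf = mmap(NULL, area_len, PROT_READ | PROT_WRITE,
		   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	if (buf == MAP_FAILED)
		return -1;

	memset(&area_reg, 0, sizeof(area_reg));
	area_reg.addr = (__u64)(uintptr_t)buf;
	area_reg.len = area_len;
	/* flags, rq_area_token and the __resv fields must stay zero */

	memset(&reg, 0, sizeof(reg));
	reg.area_ptr = (__u64)(uintptr_t)&area_reg;
	/* ... rq_entries, if_rxq and the ring region descriptor go here ... */

	/* raw register syscall; on success the kernel writes rq_area_token
	 * back into area_reg for later use in refill entries */
	return syscall(SYS_io_uring_register, ring_fd,
		       IORING_REGISTER_ZCRX_IFQ, &reg, 1);
}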
Reviewed-by: Jens Axboe
Signed-off-by: Pavel Begunkov
Signed-off-by: David Wei
---
 include/uapi/linux/io_uring.h |  9 ++++
 io_uring/rsrc.c               |  2 +-
 io_uring/rsrc.h               |  1 +
 io_uring/zcrx.c               | 89 ++++++++++++++++++++++++++++++++++-
 io_uring/zcrx.h               | 16 +++++++
 5 files changed, 114 insertions(+), 3 deletions(-)

diff --git a/include/uapi/linux/io_uring.h b/include/uapi/linux/io_uring.h
index 6a1632d0fba1..44844707d327 100644
--- a/include/uapi/linux/io_uring.h
+++ b/include/uapi/linux/io_uring.h
@@ -981,6 +981,15 @@ struct io_uring_zcrx_offsets {
 	__u64	__resv[2];
 };
 
+struct io_uring_zcrx_area_reg {
+	__u64	addr;
+	__u64	len;
+	__u64	rq_area_token;
+	__u32	flags;
+	__u32	__resv1;
+	__u64	__resv2[2];
+};
+
 /*
  * Argument for IORING_REGISTER_ZCRX_IFQ
  */
diff --git a/io_uring/rsrc.c b/io_uring/rsrc.c
index af39b69eb4fd..20b884c84e55 100644
--- a/io_uring/rsrc.c
+++ b/io_uring/rsrc.c
@@ -77,7 +77,7 @@ static int io_account_mem(struct io_ring_ctx *ctx, unsigned long nr_pages)
 	return 0;
 }
 
-static int io_buffer_validate(struct iovec *iov)
+int io_buffer_validate(struct iovec *iov)
 {
 	unsigned long tmp, acct_len = iov->iov_len + (PAGE_SIZE - 1);
 
diff --git a/io_uring/rsrc.h b/io_uring/rsrc.h
index 190f7ee45de9..0f8c20246a74 100644
--- a/io_uring/rsrc.h
+++ b/io_uring/rsrc.h
@@ -68,6 +68,7 @@ int io_register_rsrc_update(struct io_ring_ctx *ctx, void __user *arg,
 			    unsigned size, unsigned type);
 int io_register_rsrc(struct io_ring_ctx *ctx, void __user *arg,
 		     unsigned int size, unsigned int type);
+int io_buffer_validate(struct iovec *iov);
 
 bool io_check_coalesce_buffer(struct page **page_array, int nr_pages,
 			      struct io_imu_folio_data *data);
diff --git a/io_uring/zcrx.c b/io_uring/zcrx.c
index f3ace7e8264d..04883a3ae80c 100644
--- a/io_uring/zcrx.c
+++ b/io_uring/zcrx.c
@@ -10,6 +10,7 @@
 #include "kbuf.h"
 #include "memmap.h"
 #include "zcrx.h"
+#include "rsrc.h"
 
 #define IO_RQ_MAX_ENTRIES		32768
 
@@ -44,6 +45,79 @@ static void io_free_rbuf_ring(struct io_zcrx_ifq *ifq)
 	ifq->rqes = NULL;
 }
 
+static void io_zcrx_free_area(struct io_zcrx_area *area)
+{
+	kvfree(area->freelist);
+	kvfree(area->nia.niovs);
+	if (area->pages) {
+		unpin_user_pages(area->pages, area->nia.num_niovs);
+		kvfree(area->pages);
+	}
+	kfree(area);
+}
+
+static int io_zcrx_create_area(struct io_zcrx_ifq *ifq,
+			       struct io_zcrx_area **res,
+			       struct io_uring_zcrx_area_reg *area_reg)
+{
+	struct io_zcrx_area *area;
+	int i, ret, nr_pages;
+	struct iovec iov;
+
+	if (area_reg->flags || area_reg->rq_area_token)
+		return -EINVAL;
+	if (area_reg->__resv1 || area_reg->__resv2[0] || area_reg->__resv2[1])
+		return -EINVAL;
+	if (area_reg->addr & ~PAGE_MASK || area_reg->len & ~PAGE_MASK)
+		return -EINVAL;
+
+	iov.iov_base = u64_to_user_ptr(area_reg->addr);
+	iov.iov_len = area_reg->len;
+	ret = io_buffer_validate(&iov);
+	if (ret)
+		return ret;
+
+	ret = -ENOMEM;
+	area = kzalloc(sizeof(*area), GFP_KERNEL);
+	if (!area)
+		goto err;
+
+	area->pages = io_pin_pages((unsigned long)area_reg->addr, area_reg->len,
+				   &nr_pages);
+	if (IS_ERR(area->pages)) {
+		ret = PTR_ERR(area->pages);
+		area->pages = NULL;
+		goto err;
+	}
+	area->nia.num_niovs = nr_pages;
+
+	area->nia.niovs = kvmalloc_array(nr_pages, sizeof(area->nia.niovs[0]),
+					 GFP_KERNEL | __GFP_ZERO);
+	if (!area->nia.niovs)
+		goto err;
+
+	area->freelist = kvmalloc_array(nr_pages, sizeof(area->freelist[0]),
+					GFP_KERNEL | __GFP_ZERO);
+	if (!area->freelist)
+		goto err;
+
+	for (i = 0; i < nr_pages; i++)
+		area->freelist[i] = i;
+
+	area->free_count = nr_pages;
+	area->ifq = ifq;
+	/* we're only supporting one area per ifq for now */
+	area->area_id = 0;
+	area_reg->rq_area_token = (u64)area->area_id << IORING_ZCRX_AREA_SHIFT;
+	spin_lock_init(&area->freelist_lock);
+	*res = area;
+	return 0;
+err:
+	if (area)
+		io_zcrx_free_area(area);
+	return ret;
+}
+
 static struct io_zcrx_ifq *io_zcrx_ifq_alloc(struct io_ring_ctx *ctx)
 {
 	struct io_zcrx_ifq *ifq;
@@ -59,6 +133,9 @@ static struct io_zcrx_ifq *io_zcrx_ifq_alloc(struct io_ring_ctx *ctx)
 
 static void io_zcrx_ifq_free(struct io_zcrx_ifq *ifq)
 {
+	if (ifq->area)
+		io_zcrx_free_area(ifq->area);
+
 	io_free_rbuf_ring(ifq);
 	kfree(ifq);
 }
@@ -66,6 +143,7 @@ static void io_zcrx_ifq_free(struct io_zcrx_ifq *ifq)
 int io_register_zcrx_ifq(struct io_ring_ctx *ctx,
 			 struct io_uring_zcrx_ifq_reg __user *arg)
 {
+	struct io_uring_zcrx_area_reg area;
 	struct io_uring_zcrx_ifq_reg reg;
 	struct io_uring_region_desc rd;
 	struct io_zcrx_ifq *ifq;
@@ -99,7 +177,7 @@ int io_register_zcrx_ifq(struct io_ring_ctx *ctx,
 	}
 	reg.rq_entries = roundup_pow_of_two(reg.rq_entries);
 
-	if (!reg.area_ptr)
+	if (copy_from_user(&area, u64_to_user_ptr(reg.area_ptr), sizeof(area)))
 		return -EFAULT;
 
 	ifq = io_zcrx_ifq_alloc(ctx);
@@ -110,6 +188,10 @@ int io_register_zcrx_ifq(struct io_ring_ctx *ctx,
 	if (ret)
 		goto err;
 
+	ret = io_zcrx_create_area(ifq, &ifq->area, &area);
+	if (ret)
+		goto err;
+
 	ifq->rq_entries = reg.rq_entries;
 	ifq->if_rxq = reg.if_rxq;
 
@@ -122,7 +204,10 @@ int io_register_zcrx_ifq(struct io_ring_ctx *ctx,
 		ret = -EFAULT;
 		goto err;
 	}
-
+	if (copy_to_user(u64_to_user_ptr(reg.area_ptr), &area, sizeof(area))) {
+		ret = -EFAULT;
+		goto err;
+	}
 	ctx->ifq = ifq;
 	return 0;
 err:
diff --git a/io_uring/zcrx.h b/io_uring/zcrx.h
index 58e4ab6c6083..53fd94b65b38 100644
--- a/io_uring/zcrx.h
+++ b/io_uring/zcrx.h
@@ -3,9 +3,25 @@
 #define IOU_ZC_RX_H
 
 #include <linux/io_uring_types.h>
+#include <net/page_pool/types.h>
+
+struct io_zcrx_area {
+	struct net_iov_area	nia;
+	struct io_zcrx_ifq	*ifq;
+
+	u16	area_id;
+	struct page	**pages;
+
+	/* freelist */
+	spinlock_t	freelist_lock ____cacheline_aligned_in_smp;
+	u32	free_count;
+	u32	*freelist;
+};
 
 struct io_zcrx_ifq {
 	struct io_ring_ctx		*ctx;
+	struct io_zcrx_area		*area;
+
 	struct io_uring			*rq_ring;
 	struct io_uring_zcrx_rqe	*rqes;
 	u32				rq_entries;
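As an aside on the freelist set up in io_zcrx_create_area() above: freelist[]
is a stack of indices into nia.niovs[] and free_count is its depth, both
guarded by freelist_lock. The helpers below are hypothetical illustrations of
that protocol, not code from this series; the real consumers land in later
patches and may differ in naming and locking context (e.g. plain spin_lock()
vs spin_lock_bh()).

/* Hypothetical helpers (not in this patch) showing the intended freelist
 * protocol: pop an unused net_iov index, or push one back, under
 * freelist_lock.
 */
static struct net_iov *io_zcrx_get_free_niov(struct io_zcrx_area *area)
{
	struct net_iov *niov = NULL;

	spin_lock_bh(&area->freelist_lock);
	if (area->free_count) {
		u32 idx = area->freelist[--area->free_count];

		niov = &area->nia.niovs[idx];
	}
	spin_unlock_bh(&area->freelist_lock);
	return niov;
}

static void io_zcrx_return_niov(struct io_zcrx_area *area, u32 idx)
{
	spin_lock_bh(&area->freelist_lock);
	area->freelist[area->free_count++] = idx;
	spin_unlock_bh(&area->freelist_lock);
}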