From patchwork Wed Oct 16 18:52:38 2024
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: David Wei
X-Patchwork-Id: 13838743
X-Patchwork-Delegate: kuba@kernel.org
From: David Wei
To: io-uring@vger.kernel.org, netdev@vger.kernel.org
Cc: David Wei, Jens Axboe, Pavel Begunkov, Jakub Kicinski, Paolo Abeni,
    "David S. Miller", Eric Dumazet, Jesper Dangaard Brouer, David Ahern,
    Mina Almasry, Stanislav Fomichev, Joe Damato, Pedro Tammela
Subject: [PATCH v6 01/15] net: prefix devmem specific helpers
Date: Wed, 16 Oct 2024 11:52:38 -0700
Message-ID: <20241016185252.3746190-2-dw@davidwei.uk>
In-Reply-To: <20241016185252.3746190-1-dw@davidwei.uk>
References: <20241016185252.3746190-1-dw@davidwei.uk>

From: Pavel Begunkov

Add prefixes to all helpers that are specific to devmem TCP, i.e.
net_iov_binding[_id].
Signed-off-by: Pavel Begunkov
Signed-off-by: David Wei
---
 net/core/devmem.c |  2 +-
 net/core/devmem.h | 14 +++++++-------
 net/ipv4/tcp.c    |  2 +-
 3 files changed, 9 insertions(+), 9 deletions(-)

diff --git a/net/core/devmem.c b/net/core/devmem.c
index 11b91c12ee11..858982858f81 100644
--- a/net/core/devmem.c
+++ b/net/core/devmem.c
@@ -93,7 +93,7 @@ net_devmem_alloc_dmabuf(struct net_devmem_dmabuf_binding *binding)
 
 void net_devmem_free_dmabuf(struct net_iov *niov)
 {
-	struct net_devmem_dmabuf_binding *binding = net_iov_binding(niov);
+	struct net_devmem_dmabuf_binding *binding = net_devmem_iov_binding(niov);
 	unsigned long dma_addr = net_devmem_get_dma_addr(niov);
 
 	if (WARN_ON(!gen_pool_has_addr(binding->chunk_pool, dma_addr,
diff --git a/net/core/devmem.h b/net/core/devmem.h
index 76099ef9c482..99782ddeca40 100644
--- a/net/core/devmem.h
+++ b/net/core/devmem.h
@@ -86,11 +86,16 @@ static inline unsigned int net_iov_idx(const struct net_iov *niov)
 }
 
 static inline struct net_devmem_dmabuf_binding *
-net_iov_binding(const struct net_iov *niov)
+net_devmem_iov_binding(const struct net_iov *niov)
 {
 	return net_iov_owner(niov)->binding;
 }
 
+static inline u32 net_devmem_iov_binding_id(const struct net_iov *niov)
+{
+	return net_devmem_iov_binding(niov)->id;
+}
+
 static inline unsigned long net_iov_virtual_addr(const struct net_iov *niov)
 {
 	struct dmabuf_genpool_chunk_owner *owner = net_iov_owner(niov);
@@ -99,11 +104,6 @@ static inline unsigned long net_iov_virtual_addr(const struct net_iov *niov)
 	       ((unsigned long)net_iov_idx(niov) << PAGE_SHIFT);
 }
 
-static inline u32 net_iov_binding_id(const struct net_iov *niov)
-{
-	return net_iov_owner(niov)->binding->id;
-}
-
 static inline void
 net_devmem_dmabuf_binding_get(struct net_devmem_dmabuf_binding *binding)
 {
@@ -171,7 +171,7 @@ static inline unsigned long net_iov_virtual_addr(const struct net_iov *niov)
 	return 0;
 }
 
-static inline u32 net_iov_binding_id(const struct net_iov *niov)
+static inline u32 net_devmem_iov_binding_id(const struct net_iov *niov)
 {
 	return 0;
 }
diff --git a/net/ipv4/tcp.c b/net/ipv4/tcp.c
index 82cc4a5633ce..e928efc22f80 100644
--- a/net/ipv4/tcp.c
+++ b/net/ipv4/tcp.c
@@ -2494,7 +2494,7 @@ static int tcp_recvmsg_dmabuf(struct sock *sk, const struct sk_buff *skb,
 				/* Will perform the exchange later */
 				dmabuf_cmsg.frag_token = tcp_xa_pool.tokens[tcp_xa_pool.idx];
-				dmabuf_cmsg.dmabuf_id = net_iov_binding_id(niov);
+				dmabuf_cmsg.dmabuf_id = net_devmem_iov_binding_id(niov);
 
 				offset += copy;
 				remaining_len -= copy;

From patchwork Wed Oct 16 18:52:39 2024
X-Patchwork-Submitter: David Wei
X-Patchwork-Id: 13838744
From: David Wei
To: io-uring@vger.kernel.org, netdev@vger.kernel.org
Cc: David Wei, Jens Axboe, Pavel Begunkov, Jakub Kicinski, Paolo Abeni,
    "David S. Miller", Eric Dumazet, Jesper Dangaard Brouer, David Ahern,
    Mina Almasry, Stanislav Fomichev, Joe Damato, Pedro Tammela
Subject: [PATCH v6 02/15] net: generalise net_iov chunk owners
Date: Wed, 16 Oct 2024 11:52:39 -0700
Message-ID: <20241016185252.3746190-3-dw@davidwei.uk>
In-Reply-To: <20241016185252.3746190-1-dw@davidwei.uk>
References: <20241016185252.3746190-1-dw@davidwei.uk>

From: Pavel Begunkov

Currently net_iov stores a pointer to struct dmabuf_genpool_chunk_owner,
which serves as a useful abstraction to share data and provide a
context. However, it's too devmem specific, and we want to reuse it for
other memory providers, and for that we need to decouple net_iov from
devmem.
Make net_iov point to a new base structure called net_iov_area, which
dmabuf_genpool_chunk_owner extends.

Signed-off-by: Pavel Begunkov
Signed-off-by: David Wei
---
 include/net/netmem.h | 21 ++++++++++++++++++++-
 net/core/devmem.c    | 25 +++++++++++++------------
 net/core/devmem.h    | 25 +++++++++----------------
 3 files changed, 42 insertions(+), 29 deletions(-)

diff --git a/include/net/netmem.h b/include/net/netmem.h
index 8a6e20be4b9d..3795ded30d2c 100644
--- a/include/net/netmem.h
+++ b/include/net/netmem.h
@@ -24,11 +24,20 @@ struct net_iov {
 	unsigned long __unused_padding;
 	unsigned long pp_magic;
 	struct page_pool *pp;
-	struct dmabuf_genpool_chunk_owner *owner;
+	struct net_iov_area *owner;
 	unsigned long dma_addr;
 	atomic_long_t pp_ref_count;
 };
 
+struct net_iov_area {
+	/* Array of net_iovs for this area. */
+	struct net_iov *niovs;
+	size_t num_niovs;
+
+	/* Offset into the dma-buf where this chunk starts. */
+	unsigned long base_virtual;
+};
+
 /* These fields in struct page are used by the page_pool and net stack:
  *
  *        struct {
@@ -54,6 +63,16 @@ NET_IOV_ASSERT_OFFSET(dma_addr, dma_addr);
 NET_IOV_ASSERT_OFFSET(pp_ref_count, pp_ref_count);
 #undef NET_IOV_ASSERT_OFFSET
 
+static inline struct net_iov_area *net_iov_owner(const struct net_iov *niov)
+{
+	return niov->owner;
+}
+
+static inline unsigned int net_iov_idx(const struct net_iov *niov)
+{
+	return niov - net_iov_owner(niov)->niovs;
+}
+
 /* netmem */
 
 /**
diff --git a/net/core/devmem.c b/net/core/devmem.c
index 858982858f81..5c10cf0e2a18 100644
--- a/net/core/devmem.c
+++ b/net/core/devmem.c
@@ -32,14 +32,15 @@ static void net_devmem_dmabuf_free_chunk_owner(struct gen_pool *genpool,
 {
 	struct dmabuf_genpool_chunk_owner *owner = chunk->owner;
 
-	kvfree(owner->niovs);
+	kvfree(owner->area.niovs);
 	kfree(owner);
 }
 
 static dma_addr_t net_devmem_get_dma_addr(const struct net_iov *niov)
 {
-	struct dmabuf_genpool_chunk_owner *owner = net_iov_owner(niov);
+	struct dmabuf_genpool_chunk_owner *owner;
 
+	owner = net_devmem_iov_to_chunk_owner(niov);
 	return owner->base_dma_addr +
 	       ((dma_addr_t)net_iov_idx(niov) << PAGE_SHIFT);
 }
@@ -82,7 +83,7 @@ net_devmem_alloc_dmabuf(struct net_devmem_dmabuf_binding *binding)
 	offset = dma_addr - owner->base_dma_addr;
 	index = offset / PAGE_SIZE;
-	niov = &owner->niovs[index];
+	niov = &owner->area.niovs[index];
 
 	niov->pp_magic = 0;
 	niov->pp = NULL;
@@ -250,9 +251,9 @@ net_devmem_bind_dmabuf(struct net_device *dev, unsigned int dmabuf_fd,
 			goto err_free_chunks;
 		}
 
-		owner->base_virtual = virtual;
+		owner->area.base_virtual = virtual;
 		owner->base_dma_addr = dma_addr;
-		owner->num_niovs = len / PAGE_SIZE;
+		owner->area.num_niovs = len / PAGE_SIZE;
 		owner->binding = binding;
 
 		err = gen_pool_add_owner(binding->chunk_pool, dma_addr,
@@ -264,17 +265,17 @@ net_devmem_bind_dmabuf(struct net_device *dev, unsigned int dmabuf_fd,
 			goto err_free_chunks;
 		}
 
-		owner->niovs = kvmalloc_array(owner->num_niovs,
-					      sizeof(*owner->niovs),
-					      GFP_KERNEL);
-		if (!owner->niovs) {
+		owner->area.niovs = kvmalloc_array(owner->area.num_niovs,
+						   sizeof(*owner->area.niovs),
+						   GFP_KERNEL);
+		if (!owner->area.niovs) {
 			err = -ENOMEM;
 			goto err_free_chunks;
 		}
 
-		for (i = 0; i < owner->num_niovs; i++) {
-			niov = &owner->niovs[i];
-			niov->owner = owner;
+		for (i = 0; i < owner->area.num_niovs; i++) {
+			niov = &owner->area.niovs[i];
+			niov->owner = &owner->area;
 			page_pool_set_dma_addr_netmem(net_iov_to_netmem(niov),
 						      net_devmem_get_dma_addr(niov));
 		}
diff --git a/net/core/devmem.h b/net/core/devmem.h
index 99782ddeca40..a2b9913e9a17 100644
--- a/net/core/devmem.h
+++ b/net/core/devmem.h
@@ -10,6 +10,8 @@
 #ifndef _NET_DEVMEM_H
 #define _NET_DEVMEM_H
 
+#include <net/netmem.h>
+
 struct netlink_ext_ack;
 
 struct net_devmem_dmabuf_binding {
@@ -51,17 +53,11 @@ struct net_devmem_dmabuf_binding {
  * allocations from this chunk.
  */
 struct dmabuf_genpool_chunk_owner {
-	/* Offset into the dma-buf where this chunk starts. */
-	unsigned long base_virtual;
+	struct net_iov_area area;
+	struct net_devmem_dmabuf_binding *binding;
 
 	/* dma_addr of the start of the chunk. */
 	dma_addr_t base_dma_addr;
-
-	/* Array of net_iovs for this chunk. */
-	struct net_iov *niovs;
-	size_t num_niovs;
-
-	struct net_devmem_dmabuf_binding *binding;
 };
 
 void __net_devmem_dmabuf_binding_free(struct net_devmem_dmabuf_binding *binding);
@@ -75,20 +71,17 @@ int net_devmem_bind_dmabuf_to_queue(struct net_device *dev, u32 rxq_idx,
 void dev_dmabuf_uninstall(struct net_device *dev);
 
 static inline struct dmabuf_genpool_chunk_owner *
-net_iov_owner(const struct net_iov *niov)
+net_devmem_iov_to_chunk_owner(const struct net_iov *niov)
 {
-	return niov->owner;
-}
+	struct net_iov_area *owner = net_iov_owner(niov);
 
-static inline unsigned int net_iov_idx(const struct net_iov *niov)
-{
-	return niov - net_iov_owner(niov)->niovs;
+	return container_of(owner, struct dmabuf_genpool_chunk_owner, area);
 }
 
 static inline struct net_devmem_dmabuf_binding *
 net_devmem_iov_binding(const struct net_iov *niov)
 {
-	return net_iov_owner(niov)->binding;
+	return net_devmem_iov_to_chunk_owner(niov)->binding;
 }
 
 static inline u32 net_devmem_iov_binding_id(const struct net_iov *niov)
@@ -98,7 +91,7 @@ static inline u32 net_devmem_iov_binding_id(const struct net_iov *niov)
 
 static inline unsigned long net_iov_virtual_addr(const struct net_iov *niov)
 {
-	struct dmabuf_genpool_chunk_owner *owner = net_iov_owner(niov);
+	struct net_iov_area *owner = net_iov_owner(niov);
 
 	return owner->base_virtual +
 	       ((unsigned long)net_iov_idx(niov) << PAGE_SHIFT);

From patchwork Wed Oct 16 18:52:40 2024
X-Patchwork-Submitter: David Wei
X-Patchwork-Id: 13838745
From: David Wei
To: io-uring@vger.kernel.org, netdev@vger.kernel.org
Cc: David Wei, Jens Axboe, Pavel Begunkov, Jakub Kicinski, Paolo Abeni,
    "David S. Miller", Eric Dumazet, Jesper Dangaard Brouer, David Ahern,
    Mina Almasry, Stanislav Fomichev, Joe Damato, Pedro Tammela
Subject: [PATCH v6 03/15] net: page_pool: create hooks for custom page providers
Date: Wed, 16 Oct 2024 11:52:40 -0700
Message-ID: <20241016185252.3746190-4-dw@davidwei.uk>
In-Reply-To: <20241016185252.3746190-1-dw@davidwei.uk>
References: <20241016185252.3746190-1-dw@davidwei.uk>

From: Jakub Kicinski

Page providers which try to reuse the same pages will need to hold onto
the ref, even if the page gets released from the pool, as releasing the
page from the pp just transfers the "ownership" reference from the pp to
the provider, and the provider will wait for other references to be gone
before feeding the page back into the pool.
Signed-off-by: Jakub Kicinski
[Pavel] Rebased, renamed callback, +converted devmem
Signed-off-by: Pavel Begunkov
Signed-off-by: David Wei
---
 include/net/page_pool/types.h |  9 +++++++++
 net/core/devmem.c             | 14 +++++++++++++-
 net/core/page_pool.c          | 17 +++++++++--------
 3 files changed, 31 insertions(+), 9 deletions(-)

diff --git a/include/net/page_pool/types.h b/include/net/page_pool/types.h
index c022c410abe3..8a35fe474adb 100644
--- a/include/net/page_pool/types.h
+++ b/include/net/page_pool/types.h
@@ -152,8 +152,16 @@ struct page_pool_stats {
  */
 #define PAGE_POOL_FRAG_GROUP_ALIGN	(4 * sizeof(long))
 
+struct memory_provider_ops {
+	netmem_ref (*alloc_netmems)(struct page_pool *pool, gfp_t gfp);
+	bool (*release_netmem)(struct page_pool *pool, netmem_ref netmem);
+	int (*init)(struct page_pool *pool);
+	void (*destroy)(struct page_pool *pool);
+};
+
 struct pp_memory_provider_params {
 	void *mp_priv;
+	const struct memory_provider_ops *mp_ops;
 };
 
 struct page_pool {
@@ -215,6 +223,7 @@ struct page_pool {
 	struct ptr_ring ring;
 
 	void *mp_priv;
+	const struct memory_provider_ops *mp_ops;
 
 #ifdef CONFIG_PAGE_POOL_STATS
 	/* recycle stats are per-cpu to avoid locking */
diff --git a/net/core/devmem.c b/net/core/devmem.c
index 5c10cf0e2a18..01738029e35c 100644
--- a/net/core/devmem.c
+++ b/net/core/devmem.c
@@ -26,6 +26,8 @@
 /* Protected by rtnl_lock() */
 static DEFINE_XARRAY_FLAGS(net_devmem_dmabuf_bindings, XA_FLAGS_ALLOC1);
 
+static const struct memory_provider_ops dmabuf_devmem_ops;
+
 static void net_devmem_dmabuf_free_chunk_owner(struct gen_pool *genpool,
 					       struct gen_pool_chunk *chunk,
 					       void *not_used)
@@ -117,6 +119,7 @@ void net_devmem_unbind_dmabuf(struct net_devmem_dmabuf_binding *binding)
 		WARN_ON(rxq->mp_params.mp_priv != binding);
 
 		rxq->mp_params.mp_priv = NULL;
+		rxq->mp_params.mp_ops = NULL;
 
 		rxq_idx = get_netdev_rx_queue_index(rxq);
 
@@ -142,7 +145,7 @@ int net_devmem_bind_dmabuf_to_queue(struct net_device *dev, u32 rxq_idx,
 	}
 
 	rxq = __netif_get_rx_queue(dev, rxq_idx);
-	if (rxq->mp_params.mp_priv) {
+	if (rxq->mp_params.mp_ops) {
 		NL_SET_ERR_MSG(extack, "designated queue already memory provider bound");
 		return -EEXIST;
 	}
@@ -160,6 +163,7 @@ int net_devmem_bind_dmabuf_to_queue(struct net_device *dev, u32 rxq_idx,
 		return err;
 
 	rxq->mp_params.mp_priv = binding;
+	rxq->mp_params.mp_ops = &dmabuf_devmem_ops;
 
 	err = netdev_rx_queue_restart(dev, rxq_idx);
 	if (err)
@@ -169,6 +173,7 @@ int net_devmem_bind_dmabuf_to_queue(struct net_device *dev, u32 rxq_idx,
 
 err_xa_erase:
 	rxq->mp_params.mp_priv = NULL;
+	rxq->mp_params.mp_ops = NULL;
 	xa_erase(&binding->bound_rxqs, xa_idx);
 
 	return err;
@@ -388,3 +393,10 @@ bool mp_dmabuf_devmem_release_page(struct page_pool *pool, netmem_ref netmem)
 	/* We don't want the page pool put_page()ing our net_iovs. */
 	return false;
 }
+
+static const struct memory_provider_ops dmabuf_devmem_ops = {
+	.init		= mp_dmabuf_devmem_init,
+	.destroy	= mp_dmabuf_devmem_destroy,
+	.alloc_netmems	= mp_dmabuf_devmem_alloc_netmems,
+	.release_netmem	= mp_dmabuf_devmem_release_page,
+};
diff --git a/net/core/page_pool.c b/net/core/page_pool.c
index a813d30d2135..c21c5b9edc68 100644
--- a/net/core/page_pool.c
+++ b/net/core/page_pool.c
@@ -284,10 +284,11 @@ static int page_pool_init(struct page_pool *pool,
 		rxq = __netif_get_rx_queue(pool->slow.netdev,
 					   pool->slow.queue_idx);
 		pool->mp_priv = rxq->mp_params.mp_priv;
+		pool->mp_ops = rxq->mp_params.mp_ops;
 	}
 
-	if (pool->mp_priv) {
-		err = mp_dmabuf_devmem_init(pool);
+	if (pool->mp_ops) {
+		err = pool->mp_ops->init(pool);
 		if (err) {
 			pr_warn("%s() mem-provider init failed %d\n", __func__,
 				err);
@@ -584,8 +585,8 @@ netmem_ref page_pool_alloc_netmem(struct page_pool *pool, gfp_t gfp)
 		return netmem;
 
 	/* Slow-path: cache empty, do real allocation */
-	if (static_branch_unlikely(&page_pool_mem_providers) && pool->mp_priv)
-		netmem = mp_dmabuf_devmem_alloc_netmems(pool, gfp);
+	if (static_branch_unlikely(&page_pool_mem_providers) && pool->mp_ops)
+		netmem = pool->mp_ops->alloc_netmems(pool, gfp);
 	else
 		netmem = __page_pool_alloc_pages_slow(pool, gfp);
 	return netmem;
@@ -676,8 +677,8 @@ void page_pool_return_page(struct page_pool *pool, netmem_ref netmem)
 	bool put;
 
 	put = true;
-	if (static_branch_unlikely(&page_pool_mem_providers) && pool->mp_priv)
-		put = mp_dmabuf_devmem_release_page(pool, netmem);
+	if (static_branch_unlikely(&page_pool_mem_providers) && pool->mp_ops)
+		put = pool->mp_ops->release_netmem(pool, netmem);
 	else
 		__page_pool_release_page_dma(pool, netmem);
 
@@ -1010,8 +1011,8 @@ static void __page_pool_destroy(struct page_pool *pool)
 	page_pool_unlist(pool);
 	page_pool_uninit(pool);
 
-	if (pool->mp_priv) {
-		mp_dmabuf_devmem_destroy(pool);
+	if (pool->mp_ops) {
+		pool->mp_ops->destroy(pool);
 		static_branch_dec(&page_pool_mem_providers);
 	}

From patchwork Wed Oct 16 18:52:41 2024
X-Patchwork-Submitter: David Wei
X-Patchwork-Id: 13838746
From: David Wei
To: io-uring@vger.kernel.org, netdev@vger.kernel.org
Cc: David Wei, Jens Axboe, Pavel Begunkov, Jakub Kicinski, Paolo Abeni,
    "David S. Miller", Eric Dumazet, Jesper Dangaard Brouer, David Ahern,
    Mina Almasry, Stanislav Fomichev, Joe Damato, Pedro Tammela
Subject: [PATCH v6 04/15] net: prepare for non devmem TCP memory providers
Date: Wed, 16 Oct 2024 11:52:41 -0700
Message-ID: <20241016185252.3746190-5-dw@davidwei.uk>
In-Reply-To: <20241016185252.3746190-1-dw@davidwei.uk>
References: <20241016185252.3746190-1-dw@davidwei.uk>

From: Pavel Begunkov

There are a good number of places in generic paths assuming that the
only page pool memory provider is devmem TCP. As we want to reuse the
net_iov and provider infrastructure, we need to patch it up and
explicitly check the provider type when we branch into devmem TCP code.

Signed-off-by: Pavel Begunkov
Signed-off-by: David Wei
---
 net/core/devmem.c         | 10 ++++++++--
 net/core/devmem.h         |  8 ++++++++
 net/core/page_pool_user.c | 15 +++++++++------
 net/ipv4/tcp.c            |  6 ++++++
 4 files changed, 31 insertions(+), 8 deletions(-)

diff --git a/net/core/devmem.c b/net/core/devmem.c
index 01738029e35c..78983a98e5dc 100644
--- a/net/core/devmem.c
+++ b/net/core/devmem.c
@@ -28,6 +28,12 @@ static DEFINE_XARRAY_FLAGS(net_devmem_dmabuf_bindings, XA_FLAGS_ALLOC1);
 
 static const struct memory_provider_ops dmabuf_devmem_ops;
 
+bool net_is_devmem_page_pool_ops(const struct memory_provider_ops *ops)
+{
+	return ops == &dmabuf_devmem_ops;
+}
+EXPORT_SYMBOL_GPL(net_is_devmem_page_pool_ops);
+
 static void net_devmem_dmabuf_free_chunk_owner(struct gen_pool *genpool,
 					       struct gen_pool_chunk *chunk,
 					       void *not_used)
@@ -316,10 +322,10 @@ void dev_dmabuf_uninstall(struct net_device *dev)
 	unsigned int i;
 
 	for (i = 0; i < dev->real_num_rx_queues; i++) {
-		binding = dev->_rx[i].mp_params.mp_priv;
-		if (!binding)
+		if (dev->_rx[i].mp_params.mp_ops != &dmabuf_devmem_ops)
 			continue;
 
+		binding = dev->_rx[i].mp_params.mp_priv;
 		xa_for_each(&binding->bound_rxqs, xa_idx, rxq)
 			if (rxq == &dev->_rx[i]) {
 				xa_erase(&binding->bound_rxqs, xa_idx);
diff --git a/net/core/devmem.h b/net/core/devmem.h
index a2b9913e9a17..a3fdd66bb05b 100644
--- a/net/core/devmem.h
+++ b/net/core/devmem.h
@@ -116,6 +116,8 @@ struct net_iov *
 net_devmem_alloc_dmabuf(struct net_devmem_dmabuf_binding *binding);
 void net_devmem_free_dmabuf(struct net_iov *ppiov);
 
+bool net_is_devmem_page_pool_ops(const struct memory_provider_ops *ops);
+
 #else
 struct net_devmem_dmabuf_binding;
 
@@ -168,6 +170,12 @@ static inline u32 net_devmem_iov_binding_id(const struct net_iov *niov)
 {
 	return 0;
 }
+
+static inline bool
+net_is_devmem_page_pool_ops(const struct memory_provider_ops *ops)
+{
+	return false;
+}
 #endif
 
 #endif /* _NET_DEVMEM_H */
diff --git a/net/core/page_pool_user.c b/net/core/page_pool_user.c
index 48335766c1bf..604862a73535 100644
--- a/net/core/page_pool_user.c
+++ b/net/core/page_pool_user.c
@@ -214,7 +214,7 @@ static int
 page_pool_nl_fill(struct sk_buff *rsp, const struct page_pool *pool,
 		  const struct genl_info *info)
 {
-	struct net_devmem_dmabuf_binding *binding = pool->mp_priv;
+	struct net_devmem_dmabuf_binding *binding;
 	size_t inflight, refsz;
 	void *hdr;
 
@@ -244,8 +244,11 @@ page_pool_nl_fill(struct sk_buff *rsp, const struct page_pool *pool,
 			  pool->user.detach_time))
 		goto err_cancel;
 
-	if (binding && nla_put_u32(rsp, NETDEV_A_PAGE_POOL_DMABUF, binding->id))
-		goto err_cancel;
+	if (net_is_devmem_page_pool_ops(pool->mp_ops)) {
+		binding = pool->mp_priv;
+		if (nla_put_u32(rsp, NETDEV_A_PAGE_POOL_DMABUF, binding->id))
+			goto err_cancel;
+	}
 
 	genlmsg_end(rsp, hdr);
 
@@ -353,16 +356,16 @@ void page_pool_unlist(struct page_pool *pool)
 int page_pool_check_memory_provider(struct net_device *dev,
 				    struct netdev_rx_queue *rxq)
 {
-	struct net_devmem_dmabuf_binding *binding = rxq->mp_params.mp_priv;
+	void *mp_priv = rxq->mp_params.mp_priv;
 	struct page_pool *pool;
 	struct hlist_node *n;
 
-	if
(!binding) + if (!mp_priv) return 0; mutex_lock(&page_pools_lock); hlist_for_each_entry_safe(pool, n, &dev->page_pools, user.list) { - if (pool->mp_priv != binding) + if (pool->mp_priv != mp_priv) continue; if (pool->slow.queue_idx == get_netdev_rx_queue_index(rxq)) { diff --git a/net/ipv4/tcp.c b/net/ipv4/tcp.c index e928efc22f80..31e01da61c12 100644 --- a/net/ipv4/tcp.c +++ b/net/ipv4/tcp.c @@ -277,6 +277,7 @@ #include #include #include +#include #include #include @@ -2476,6 +2477,11 @@ static int tcp_recvmsg_dmabuf(struct sock *sk, const struct sk_buff *skb, } niov = skb_frag_net_iov(frag); + if (net_is_devmem_page_pool_ops(niov->pp->mp_ops)) { + err = -ENODEV; + goto out; + } + end = start + skb_frag_size(frag); copy = end - offset; From patchwork Wed Oct 16 18:52:42 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: David Wei X-Patchwork-Id: 13838747 X-Patchwork-Delegate: kuba@kernel.org Received: from mail-pl1-f170.google.com (mail-pl1-f170.google.com [209.85.214.170]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id D52422144CF for ; Wed, 16 Oct 2024 18:53:07 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=209.85.214.170 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1729104789; cv=none; b=J25LiNXPyRt1RxV+hvbQanm8rA9N1Gc9M2SAAqCuXiEZix6gaN6urSgUePXgYn5mJhQ4AoXKOKBX/dqcYFXLvUR8FYyjlS6lLUPK3+GcrXe6bQJhuuHHjP5qoRjKE3Xf8GOJKY/8JBnbphqhOuXKvFUEMqVbqtsUdxtKmK1VIoU= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1729104789; c=relaxed/simple; bh=N3Lp9/YMutyVAvs4ge9B26lNnGjZAtdOVSMB+PgeaiE=; h=From:To:Cc:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version; 
From patchwork Wed Oct 16 18:52:42 2024
From: David Wei
To: io-uring@vger.kernel.org, netdev@vger.kernel.org
Cc: David Wei, Jens Axboe, Pavel Begunkov, Jakub Kicinski, Paolo Abeni,
	"David S. Miller", Eric Dumazet, Jesper Dangaard Brouer, David Ahern,
	Mina Almasry, Stanislav Fomichev, Joe Damato, Pedro Tammela
Subject: [PATCH v6 05/15] net: page_pool: add ->scrub mem provider callback
Date: Wed, 16 Oct 2024 11:52:42 -0700
Message-ID: <20241016185252.3746190-6-dw@davidwei.uk>
In-Reply-To: <20241016185252.3746190-1-dw@davidwei.uk>
References: <20241016185252.3746190-1-dw@davidwei.uk>

From: Pavel Begunkov

Some page pool memory providers like io_uring need to catch the point
when the page pool is asked to be destroyed. ->destroy is not enough
because it relies on the page pool waiting for its buffers first, and
for that to happen a provider might need to react, e.g. to collect all
buffers that are currently given to the user space.

Add a new provider ->scrub callback serving that purpose, called from
the page pool's generic (cold) scrubbing path, i.e. page_pool_scrub().

Signed-off-by: Pavel Begunkov
Signed-off-by: David Wei
---
 include/net/page_pool/types.h | 1 +
 net/core/page_pool.c          | 3 +++
 2 files changed, 4 insertions(+)

diff --git a/include/net/page_pool/types.h b/include/net/page_pool/types.h
index 8a35fe474adb..fd0376ad0d26 100644
--- a/include/net/page_pool/types.h
+++ b/include/net/page_pool/types.h
@@ -157,6 +157,7 @@ struct memory_provider_ops {
 	bool (*release_netmem)(struct page_pool *pool, netmem_ref netmem);
 	int (*init)(struct page_pool *pool);
 	void (*destroy)(struct page_pool *pool);
+	void (*scrub)(struct page_pool *pool);
 };
 
 struct pp_memory_provider_params {
diff --git a/net/core/page_pool.c b/net/core/page_pool.c
index c21c5b9edc68..9a675e16e6a4 100644
--- a/net/core/page_pool.c
+++ b/net/core/page_pool.c
@@ -1038,6 +1038,9 @@ static void page_pool_empty_alloc_cache_once(struct page_pool *pool)
 
 static void page_pool_scrub(struct page_pool *pool)
 {
+	if (pool->mp_ops && pool->mp_ops->scrub)
+		pool->mp_ops->scrub(pool);
+
 	page_pool_empty_alloc_cache_once(pool);
 	pool->destroy_cnt++;
From patchwork Wed Oct 16 18:52:43 2024
From: David Wei
To: io-uring@vger.kernel.org, netdev@vger.kernel.org
Cc: David Wei, Jens Axboe, Pavel Begunkov, Jakub Kicinski, Paolo Abeni,
	"David S. Miller", Eric Dumazet, Jesper Dangaard Brouer, David Ahern,
	Mina Almasry, Stanislav Fomichev, Joe Damato, Pedro Tammela
Subject: [PATCH v6 06/15] net: page pool: add helper creating area from pages
Date: Wed, 16 Oct 2024 11:52:43 -0700
Message-ID: <20241016185252.3746190-7-dw@davidwei.uk>
In-Reply-To: <20241016185252.3746190-1-dw@davidwei.uk>
References: <20241016185252.3746190-1-dw@davidwei.uk>

From: Pavel Begunkov

Add a helper that takes an array of pages and initialises the passed-in
memory provider's area with them, where each net_iov takes one page.
It's also responsible for setting up the dma mappings. We keep it in
page_pool.c so as not to leak netmem details to outside providers like
io_uring, which don't have access to netmem_priv.h and other private
helpers.

Signed-off-by: Pavel Begunkov
Signed-off-by: David Wei
---
 include/net/page_pool/memory_provider.h | 10 ++++
 net/core/page_pool.c                    | 63 ++++++++++++++++++++++++-
 2 files changed, 71 insertions(+), 2 deletions(-)
 create mode 100644 include/net/page_pool/memory_provider.h

diff --git a/include/net/page_pool/memory_provider.h b/include/net/page_pool/memory_provider.h
new file mode 100644
index 000000000000..83d7eec0058d
--- /dev/null
+++ b/include/net/page_pool/memory_provider.h
@@ -0,0 +1,10 @@
+#ifndef _NET_PAGE_POOL_MEMORY_PROVIDER_H
+#define _NET_PAGE_POOL_MEMORY_PROVIDER_H
+
+int page_pool_mp_init_paged_area(struct page_pool *pool,
+				 struct net_iov_area *area,
+				 struct page **pages);
+void page_pool_mp_release_area(struct page_pool *pool,
+			       struct net_iov_area *area);
+
+#endif
diff --git a/net/core/page_pool.c b/net/core/page_pool.c
index 9a675e16e6a4..8bd4a3c80726 100644
--- a/net/core/page_pool.c
+++ b/net/core/page_pool.c
@@ -13,6 +13,7 @@
 #include
 #include
+#include
 #include
 #include
@@ -459,7 +460,8 @@ page_pool_dma_sync_for_device(const struct page_pool *pool,
 	__page_pool_dma_sync_for_device(pool, netmem, dma_sync_size);
 }
 
-static bool page_pool_dma_map(struct page_pool *pool, netmem_ref netmem)
+static bool page_pool_dma_map_page(struct page_pool *pool, netmem_ref netmem,
+				   struct page *page)
 {
 	dma_addr_t dma;
 
@@ -468,7 +470,7 @@ static bool page_pool_dma_map(struct page_pool *pool, netmem_ref netmem)
 	 * into page private data (i.e 32bit cpu with 64bit DMA caps)
 	 * This mapping is kept for lifetime of page, until leaving pool.
 	 */
-	dma = dma_map_page_attrs(pool->p.dev, netmem_to_page(netmem), 0,
+	dma = dma_map_page_attrs(pool->p.dev, page, 0,
 				 (PAGE_SIZE << pool->p.order), pool->p.dma_dir,
 				 DMA_ATTR_SKIP_CPU_SYNC |
 					 DMA_ATTR_WEAK_ORDERING);
@@ -490,6 +492,11 @@ static bool page_pool_dma_map(struct page_pool *pool, netmem_ref netmem)
 	return false;
 }
 
+static bool page_pool_dma_map(struct page_pool *pool, netmem_ref netmem)
+{
+	return page_pool_dma_map_page(pool, netmem, netmem_to_page(netmem));
+}
+
 static struct page *__page_pool_alloc_page_order(struct page_pool *pool,
 						 gfp_t gfp)
 {
@@ -1154,3 +1161,55 @@ void page_pool_update_nid(struct page_pool *pool, int new_nid)
 	}
 }
 EXPORT_SYMBOL(page_pool_update_nid);
+
+static void page_pool_release_page_dma(struct page_pool *pool,
+				       netmem_ref netmem)
+{
+	__page_pool_release_page_dma(pool, netmem);
+}
+
+int page_pool_mp_init_paged_area(struct page_pool *pool,
+				 struct net_iov_area *area,
+				 struct page **pages)
+{
+	struct net_iov *niov;
+	netmem_ref netmem;
+	int i, ret = 0;
+
+	if (!pool->dma_map)
+		return -EOPNOTSUPP;
+
+	for (i = 0; i < area->num_niovs; i++) {
+		niov = &area->niovs[i];
+		netmem = net_iov_to_netmem(niov);
+
+		page_pool_set_pp_info(pool, netmem);
+		if (!page_pool_dma_map_page(pool, netmem, pages[i])) {
+			ret = -EINVAL;
+			goto err_unmap_dma;
+		}
+	}
+	return 0;
+
+err_unmap_dma:
+	while (i--) {
+		netmem = net_iov_to_netmem(&area->niovs[i]);
+		page_pool_release_page_dma(pool, netmem);
+	}
+	return ret;
+}
+
+void page_pool_mp_release_area(struct page_pool *pool,
+			       struct net_iov_area *area)
+{
+	int i;
+
+	if (!pool->dma_map)
+		return;
+
+	for (i = 0; i < area->num_niovs; i++) {
+		struct net_iov *niov = &area->niovs[i];
+
+		page_pool_release_page_dma(pool, net_iov_to_netmem(niov));
+	}
+}
From patchwork Wed Oct 16 18:52:44 2024
From: David Wei
To: io-uring@vger.kernel.org, netdev@vger.kernel.org
Cc: David Wei, Jens Axboe, Pavel Begunkov, Jakub Kicinski, Paolo Abeni,
	"David S. Miller", Eric Dumazet, Jesper Dangaard Brouer, David Ahern,
	Mina Almasry, Stanislav Fomichev, Joe Damato, Pedro Tammela
Subject: [PATCH v6 07/15] net: page_pool: introduce page_pool_mp_return_in_cache
Date: Wed, 16 Oct 2024 11:52:44 -0700
Message-ID: <20241016185252.3746190-8-dw@davidwei.uk>
In-Reply-To: <20241016185252.3746190-1-dw@davidwei.uk>
References: <20241016185252.3746190-1-dw@davidwei.uk>

From: Pavel Begunkov

Add a helper that allows a page pool memory provider to efficiently
return a netmem off the allocation callback.

Signed-off-by: Pavel Begunkov
Signed-off-by: David Wei
---
 include/net/page_pool/memory_provider.h |  4 ++++
 net/core/page_pool.c                    | 19 +++++++++++++++++++
 2 files changed, 23 insertions(+)

diff --git a/include/net/page_pool/memory_provider.h b/include/net/page_pool/memory_provider.h
index 83d7eec0058d..352b3a35d31c 100644
--- a/include/net/page_pool/memory_provider.h
+++ b/include/net/page_pool/memory_provider.h
@@ -1,3 +1,5 @@
+/* SPDX-License-Identifier: GPL-2.0-or-later */
+
 #ifndef _NET_PAGE_POOL_MEMORY_PROVIDER_H
 #define _NET_PAGE_POOL_MEMORY_PROVIDER_H
 
@@ -7,4 +9,6 @@ int page_pool_mp_init_paged_area(struct page_pool *pool,
 void page_pool_mp_release_area(struct page_pool *pool,
 			       struct net_iov_area *area);
 
+void page_pool_mp_return_in_cache(struct page_pool *pool, netmem_ref netmem);
+
 #endif
diff --git a/net/core/page_pool.c b/net/core/page_pool.c
index 8bd4a3c80726..9078107c906d 100644
--- a/net/core/page_pool.c
+++ b/net/core/page_pool.c
@@ -1213,3 +1213,22 @@ void page_pool_mp_release_area(struct page_pool *pool,
 		page_pool_release_page_dma(pool, net_iov_to_netmem(niov));
 	}
 }
+
+/*
+ * page_pool_mp_return_in_cache() - return a netmem to the allocation cache.
+ * @pool:	pool from which pages were allocated
+ * @netmem:	netmem to return
+ *
+ * Return already allocated and accounted netmem to the page pool's allocation
+ * cache. The function doesn't provide synchronisation and must only be called
+ * from the napi context.
+ */
+void page_pool_mp_return_in_cache(struct page_pool *pool, netmem_ref netmem)
+{
+	if (WARN_ON_ONCE(pool->alloc.count >= PP_ALLOC_CACHE_REFILL))
+		return;
+
+	page_pool_dma_sync_for_device(pool, netmem, -1);
+	page_pool_fragment_netmem(netmem, 1);
+	pool->alloc.cache[pool->alloc.count++] = netmem;
+}
From patchwork Wed Oct 16 18:52:45 2024
From: David Wei
To: io-uring@vger.kernel.org, netdev@vger.kernel.org
Cc: David Wei, Jens Axboe, Pavel Begunkov, Jakub Kicinski, Paolo Abeni,
	"David S. Miller", Eric Dumazet, Jesper Dangaard Brouer, David Ahern,
	Mina Almasry, Stanislav Fomichev, Joe Damato, Pedro Tammela
Subject: [PATCH v6 08/15] net: add helper executing custom callback from napi
Date: Wed, 16 Oct 2024 11:52:45 -0700
Message-ID: <20241016185252.3746190-9-dw@davidwei.uk>
In-Reply-To: <20241016185252.3746190-1-dw@davidwei.uk>
References: <20241016185252.3746190-1-dw@davidwei.uk>

From: Pavel Begunkov

It's useful to have napi-private bits and pieces like the page pool's
fast allocation cache, so that the hot allocation path doesn't have to
do any additional synchronisation. In the case of the io_uring memory
provider introduced in the following patches, we keep the consumer end
of the io_uring refill queue private to napi as it's a hot path.

However, from time to time we need to synchronise with the napi, for
example to add more user memory or allocate fallback buffers. Add a
helper function napi_execute that runs a custom callback from under the
napi context, so that it can access and modify the napi-protected parts
of io_uring. It works similarly to busy polling and stops napi from
running in the meantime, so it's supposed to be a slow control path.

Signed-off-by: Pavel Begunkov
Signed-off-by: David Wei
---
 include/net/busy_poll.h |  6 ++++
 net/core/dev.c          | 77 ++++++++++++++++++++++++++++++++---------
 2 files changed, 66 insertions(+), 17 deletions(-)

diff --git a/include/net/busy_poll.h b/include/net/busy_poll.h
index f03040baaefd..3fd9e65731e9 100644
--- a/include/net/busy_poll.h
+++ b/include/net/busy_poll.h
@@ -47,6 +47,7 @@ bool sk_busy_loop_end(void *p, unsigned long start_time);
 void napi_busy_loop(unsigned int napi_id,
 		    bool (*loop_end)(void *, unsigned long),
 		    void *loop_end_arg, bool prefer_busy_poll, u16 budget);
+void napi_execute(unsigned napi_id, void (*cb)(void *), void *cb_arg);
 
 void napi_busy_loop_rcu(unsigned int napi_id,
 			bool (*loop_end)(void *, unsigned long),
@@ -63,6 +64,11 @@ static inline bool sk_can_busy_loop(struct sock *sk)
 	return false;
 }
 
+static inline void napi_execute(unsigned napi_id,
+				void (*cb)(void *), void *cb_arg)
+{
+}
+
 #endif /* CONFIG_NET_RX_BUSY_POLL */
 
 static inline unsigned long busy_loop_current_time(void)
diff --git a/net/core/dev.c b/net/core/dev.c
index c682173a7642..f3bd5fd56286 100644
--- a/net/core/dev.c
+++ b/net/core/dev.c
@@ -6347,6 +6347,30 @@ enum {
 	NAPI_F_END_ON_RESCHED	= 2,
 };
 
+static inline bool napi_state_start_busy_polling(struct napi_struct *napi,
+						 unsigned flags)
+{
+	unsigned long val = READ_ONCE(napi->state);
+
+	/* If multiple threads are competing for this napi,
+	 * we avoid dirtying napi->state as much as we can.
+	 */
+	if (val & (NAPIF_STATE_DISABLE | NAPIF_STATE_SCHED |
+		   NAPIF_STATE_IN_BUSY_POLL))
+		goto fail;
+
+	if (cmpxchg(&napi->state, val,
+		    val | NAPIF_STATE_IN_BUSY_POLL |
+			  NAPIF_STATE_SCHED) != val)
+		goto fail;
+
+	return true;
+fail:
+	if (flags & NAPI_F_PREFER_BUSY_POLL)
+		set_bit(NAPI_STATE_PREFER_BUSY_POLL, &napi->state);
+	return false;
+}
+
 static void busy_poll_stop(struct napi_struct *napi, void *have_poll_lock,
 			   unsigned flags, u16 budget)
 {
@@ -6422,24 +6446,8 @@ static void __napi_busy_loop(unsigned int napi_id,
 		local_bh_disable();
 		bpf_net_ctx = bpf_net_ctx_set(&__bpf_net_ctx);
 		if (!napi_poll) {
-			unsigned long val = READ_ONCE(napi->state);
-
-			/* If multiple threads are competing for this napi,
-			 * we avoid dirtying napi->state as much as we can.
-			 */
-			if (val & (NAPIF_STATE_DISABLE | NAPIF_STATE_SCHED |
-				   NAPIF_STATE_IN_BUSY_POLL)) {
-				if (flags & NAPI_F_PREFER_BUSY_POLL)
-					set_bit(NAPI_STATE_PREFER_BUSY_POLL, &napi->state);
+			if (!napi_state_start_busy_polling(napi, flags))
 				goto count;
-			}
-			if (cmpxchg(&napi->state, val,
-				    val | NAPIF_STATE_IN_BUSY_POLL |
-					  NAPIF_STATE_SCHED) != val) {
-				if (flags & NAPI_F_PREFER_BUSY_POLL)
-					set_bit(NAPI_STATE_PREFER_BUSY_POLL, &napi->state);
-				goto count;
-			}
 			have_poll_lock = netpoll_poll_lock(napi);
 			napi_poll = napi->poll;
 		}
@@ -6503,6 +6511,41 @@ void napi_busy_loop(unsigned int napi_id,
 }
 EXPORT_SYMBOL(napi_busy_loop);
 
+void napi_execute(unsigned napi_id,
+		  void (*cb)(void *), void *cb_arg)
+{
+	struct napi_struct *napi;
+	void *have_poll_lock = NULL;
+
+	guard(rcu)();
+	napi = napi_by_id(napi_id);
+	if (!napi)
+		return;
+
+	if (!IS_ENABLED(CONFIG_PREEMPT_RT))
+		preempt_disable();
+
+	for (;;) {
+		local_bh_disable();
+
+		if (napi_state_start_busy_polling(napi, 0)) {
+			have_poll_lock = netpoll_poll_lock(napi);
+			cb(cb_arg);
+			local_bh_enable();
+			busy_poll_stop(napi, have_poll_lock, 0, 1);
+			break;
+		}
+
+		local_bh_enable();
+		if (unlikely(need_resched()))
+			break;
+		cpu_relax();
+	}
+
+	if
(!IS_ENABLED(CONFIG_PREEMPT_RT)) + preempt_enable(); +} + #endif /* CONFIG_NET_RX_BUSY_POLL */ static void __napi_hash_add_with_id(struct napi_struct *napi,

From patchwork Wed Oct 16 18:52:46 2024
From: David Wei
To: io-uring@vger.kernel.org, netdev@vger.kernel.org
Cc: David Wei, Jens Axboe, Pavel Begunkov, Jakub Kicinski, Paolo Abeni, "David S. Miller", Eric Dumazet, Jesper Dangaard Brouer, David Ahern, Mina Almasry, Stanislav Fomichev, Joe Damato, Pedro Tammela
Subject: [PATCH v6 09/15] io_uring/zcrx: add interface queue and refill queue
Date: Wed, 16 Oct 2024 11:52:46 -0700
Message-ID: <20241016185252.3746190-10-dw@davidwei.uk>
In-Reply-To: <20241016185252.3746190-1-dw@davidwei.uk>
References: <20241016185252.3746190-1-dw@davidwei.uk>

From: David Wei

Add a new object called an interface queue (ifq) that represents a net rx queue that has been configured for zero copy. Each ifq is registered using a new registration opcode IORING_REGISTER_ZCRX_IFQ.

The refill queue is allocated by the kernel and mapped by userspace using a new offset IORING_OFF_RQ_RING, in a similar fashion to the main SQ/CQ. It is used by userspace to return buffers that it is done with, which will then be re-used by the netdev.

The main CQ ring is used to notify userspace of received data by using the upper 16 bytes of a big CQE as a new struct io_uring_zcrx_cqe. Each entry contains the offset + len of the data.

For now, each io_uring instance only has a single ifq.
Signed-off-by: David Wei Reviewed-by: Jens Axboe --- Kconfig | 2 + include/linux/io_uring_types.h | 3 + include/uapi/linux/io_uring.h | 43 ++++++++++ io_uring/KConfig | 10 +++ io_uring/Makefile | 1 + io_uring/io_uring.c | 7 ++ io_uring/memmap.c | 8 ++ io_uring/register.c | 7 ++ io_uring/zcrx.c | 143 +++++++++++++++++++++++++++++++++ io_uring/zcrx.h | 39 +++++++++ 10 files changed, 263 insertions(+) create mode 100644 io_uring/KConfig create mode 100644 io_uring/zcrx.c create mode 100644 io_uring/zcrx.h diff --git a/Kconfig b/Kconfig index 745bc773f567..529ea7694ba9 100644 --- a/Kconfig +++ b/Kconfig @@ -30,3 +30,5 @@ source "lib/Kconfig" source "lib/Kconfig.debug" source "Documentation/Kconfig" + +source "io_uring/KConfig" diff --git a/include/linux/io_uring_types.h b/include/linux/io_uring_types.h index 9c7e1d3f06e5..f5cbc06cc0e6 100644 --- a/include/linux/io_uring_types.h +++ b/include/linux/io_uring_types.h @@ -39,6 +39,8 @@ enum io_uring_cmd_flags { IO_URING_F_COMPAT = (1 << 12), }; +struct io_zcrx_ifq; + struct io_wq_work_node { struct io_wq_work_node *next; }; @@ -373,6 +375,7 @@ struct io_ring_ctx { struct io_alloc_cache rsrc_node_cache; struct wait_queue_head rsrc_quiesce_wq; unsigned rsrc_quiesce; + struct io_zcrx_ifq *ifq; u32 pers_next; struct xarray personalities; diff --git a/include/uapi/linux/io_uring.h b/include/uapi/linux/io_uring.h index 86cb385fe0b5..d398e19f8eea 100644 --- a/include/uapi/linux/io_uring.h +++ b/include/uapi/linux/io_uring.h @@ -467,6 +467,8 @@ struct io_uring_cqe { #define IORING_OFF_PBUF_RING 0x80000000ULL #define IORING_OFF_PBUF_SHIFT 16 #define IORING_OFF_MMAP_MASK 0xf8000000ULL +#define IORING_OFF_RQ_RING 0x20000000ULL +#define IORING_OFF_RQ_SHIFT 16 /* * Filled with the offset for mmap(2) @@ -615,6 +617,9 @@ enum io_uring_register_op { /* send MSG_RING without having a ring */ IORING_REGISTER_SEND_MSG_RING = 31, + /* register a netdev hw rx queue for zerocopy */ + IORING_REGISTER_ZCRX_IFQ = 32, + /* this goes last */ 
IORING_REGISTER_LAST, @@ -845,6 +850,44 @@ enum io_uring_socket_op { SOCKET_URING_OP_SETSOCKOPT, }; +/* Zero copy receive refill queue entry */ +struct io_uring_zcrx_rqe { + __u64 off; + __u32 len; + __u32 __pad; +}; + +struct io_uring_zcrx_cqe { + __u64 off; + __u64 __pad; +}; + +/* The bit from which area id is encoded into offsets */ +#define IORING_ZCRX_AREA_SHIFT 48 +#define IORING_ZCRX_AREA_MASK (~(((__u64)1 << IORING_ZCRX_AREA_SHIFT) - 1)) + +struct io_uring_zcrx_offsets { + __u32 head; + __u32 tail; + __u32 rqes; + __u32 mmap_sz; + __u64 __resv[2]; +}; + +/* + * Argument for IORING_REGISTER_ZCRX_IFQ + */ +struct io_uring_zcrx_ifq_reg { + __u32 if_idx; + __u32 if_rxq; + __u32 rq_entries; + __u32 flags; + + __u64 area_ptr; /* pointer to struct io_uring_zcrx_area_reg */ + struct io_uring_zcrx_offsets offsets; + __u64 __resv[3]; +}; + #ifdef __cplusplus } #endif diff --git a/io_uring/KConfig b/io_uring/KConfig new file mode 100644 index 000000000000..9e2a4beba1ef --- /dev/null +++ b/io_uring/KConfig @@ -0,0 +1,10 @@ +# SPDX-License-Identifier: GPL-2.0-only +# +# io_uring configuration +# + +config IO_URING_ZCRX + def_bool y + depends on PAGE_POOL + depends on INET + depends on NET_RX_BUSY_POLL diff --git a/io_uring/Makefile b/io_uring/Makefile index 53167bef37d7..a95b0b8229c9 100644 --- a/io_uring/Makefile +++ b/io_uring/Makefile @@ -14,6 +14,7 @@ obj-$(CONFIG_IO_URING) += io_uring.o opdef.o kbuf.o rsrc.o notif.o \ epoll.o statx.o timeout.o fdinfo.o \ cancel.o waitid.o register.o \ truncate.o memmap.o +obj-$(CONFIG_IO_URING_ZCRX) += zcrx.o obj-$(CONFIG_IO_WQ) += io-wq.o obj-$(CONFIG_FUTEX) += futex.o obj-$(CONFIG_NET_RX_BUSY_POLL) += napi.o diff --git a/io_uring/io_uring.c b/io_uring/io_uring.c index d7ad4ea5f40b..fc43e6feb5c5 100644 --- a/io_uring/io_uring.c +++ b/io_uring/io_uring.c @@ -97,6 +97,7 @@ #include "uring_cmd.h" #include "msg_ring.h" #include "memmap.h" +#include "zcrx.h" #include "timeout.h" #include "poll.h" @@ -2739,6 +2740,7 @@ static __cold 
void io_ring_ctx_free(struct io_ring_ctx *ctx) return; mutex_lock(&ctx->uring_lock); + io_unregister_zcrx_ifqs(ctx); if (ctx->buf_data) __io_sqe_buffers_unregister(ctx); if (ctx->file_data) @@ -2910,6 +2912,11 @@ static __cold void io_ring_exit_work(struct work_struct *work) io_cqring_overflow_kill(ctx); mutex_unlock(&ctx->uring_lock); } + if (ctx->ifq) { + mutex_lock(&ctx->uring_lock); + io_shutdown_zcrx_ifqs(ctx); + mutex_unlock(&ctx->uring_lock); + } if (ctx->flags & IORING_SETUP_DEFER_TASKRUN) io_move_task_work_from_local(ctx); diff --git a/io_uring/memmap.c b/io_uring/memmap.c index a0f32a255fd1..4c384e8615f6 100644 --- a/io_uring/memmap.c +++ b/io_uring/memmap.c @@ -12,6 +12,7 @@ #include "memmap.h" #include "kbuf.h" +#include "zcrx.h" static void *io_mem_alloc_compound(struct page **pages, int nr_pages, size_t size, gfp_t gfp) @@ -223,6 +224,10 @@ static void *io_uring_validate_mmap_request(struct file *file, loff_t pgoff, io_put_bl(ctx, bl); return ptr; } + case IORING_OFF_RQ_RING: + if (!ctx->ifq) + return ERR_PTR(-EINVAL); + return ctx->ifq->rq_ring; } return ERR_PTR(-EINVAL); @@ -261,6 +266,9 @@ __cold int io_uring_mmap(struct file *file, struct vm_area_struct *vma) ctx->n_sqe_pages); case IORING_OFF_PBUF_RING: return io_pbuf_mmap(file, vma); + case IORING_OFF_RQ_RING: + return io_uring_mmap_pages(ctx, vma, ctx->ifq->rqe_pages, + ctx->ifq->n_rqe_pages); } return -EINVAL; diff --git a/io_uring/register.c b/io_uring/register.c index 52b2f9b74af8..1fac52b14e3d 100644 --- a/io_uring/register.c +++ b/io_uring/register.c @@ -29,6 +29,7 @@ #include "napi.h" #include "eventfd.h" #include "msg_ring.h" +#include "zcrx.h" #define IORING_MAX_RESTRICTIONS (IORING_RESTRICTION_LAST + \ IORING_REGISTER_LAST + IORING_OP_LAST) @@ -549,6 +550,12 @@ static int __io_uring_register(struct io_ring_ctx *ctx, unsigned opcode, break; ret = io_register_clone_buffers(ctx, arg); break; + case IORING_REGISTER_ZCRX_IFQ: + ret = -EINVAL; + if (!arg || nr_args != 1) + break; + ret = 
io_register_zcrx_ifq(ctx, arg); + break; default: ret = -EINVAL; break; diff --git a/io_uring/zcrx.c b/io_uring/zcrx.c new file mode 100644 index 000000000000..4c53fd4f7bb3 --- /dev/null +++ b/io_uring/zcrx.c @@ -0,0 +1,143 @@ +// SPDX-License-Identifier: GPL-2.0 +#include +#include +#include +#include + +#include + +#include "io_uring.h" +#include "kbuf.h" +#include "memmap.h" +#include "zcrx.h" + +#define IO_RQ_MAX_ENTRIES 32768 + +static int io_allocate_rbuf_ring(struct io_zcrx_ifq *ifq, + struct io_uring_zcrx_ifq_reg *reg) +{ + size_t off, size; + void *ptr; + + off = sizeof(struct io_uring); + size = off + sizeof(struct io_uring_zcrx_rqe) * reg->rq_entries; + + ptr = io_pages_map(&ifq->rqe_pages, &ifq->n_rqe_pages, size); + if (IS_ERR(ptr)) + return PTR_ERR(ptr); + + ifq->rq_ring = (struct io_uring *)ptr; + ifq->rqes = (struct io_uring_zcrx_rqe *)(ptr + off); + return 0; +} + +static void io_free_rbuf_ring(struct io_zcrx_ifq *ifq) +{ + io_pages_unmap(ifq->rq_ring, &ifq->rqe_pages, &ifq->n_rqe_pages, true); + ifq->rq_ring = NULL; + ifq->rqes = NULL; +} + +static struct io_zcrx_ifq *io_zcrx_ifq_alloc(struct io_ring_ctx *ctx) +{ + struct io_zcrx_ifq *ifq; + + ifq = kzalloc(sizeof(*ifq), GFP_KERNEL); + if (!ifq) + return NULL; + + ifq->if_rxq = -1; + ifq->ctx = ctx; + return ifq; +} + +static void io_zcrx_ifq_free(struct io_zcrx_ifq *ifq) +{ + io_free_rbuf_ring(ifq); + kfree(ifq); +} + +int io_register_zcrx_ifq(struct io_ring_ctx *ctx, + struct io_uring_zcrx_ifq_reg __user *arg) +{ + struct io_uring_zcrx_ifq_reg reg; + struct io_zcrx_ifq *ifq; + size_t ring_sz, rqes_sz; + int ret; + + /* + * 1. Interface queue allocation. + * 2. It can observe data destined for sockets of other tasks. 
+ */ + if (!capable(CAP_NET_ADMIN)) + return -EPERM; + + /* mandatory io_uring features for zc rx */ + if (!(ctx->flags & IORING_SETUP_DEFER_TASKRUN && + ctx->flags & IORING_SETUP_CQE32)) + return -EINVAL; + if (ctx->ifq) + return -EBUSY; + if (copy_from_user(®, arg, sizeof(reg))) + return -EFAULT; + if (reg.__resv[0] || reg.__resv[1] || reg.__resv[2]) + return -EINVAL; + if (reg.if_rxq == -1 || !reg.rq_entries || reg.flags) + return -EINVAL; + if (reg.rq_entries > IO_RQ_MAX_ENTRIES) { + if (!(ctx->flags & IORING_SETUP_CLAMP)) + return -EINVAL; + reg.rq_entries = IO_RQ_MAX_ENTRIES; + } + reg.rq_entries = roundup_pow_of_two(reg.rq_entries); + + if (!reg.area_ptr) + return -EFAULT; + + ifq = io_zcrx_ifq_alloc(ctx); + if (!ifq) + return -ENOMEM; + + ret = io_allocate_rbuf_ring(ifq, ®); + if (ret) + goto err; + + ifq->rq_entries = reg.rq_entries; + ifq->if_rxq = reg.if_rxq; + + ring_sz = sizeof(struct io_uring); + rqes_sz = sizeof(struct io_uring_zcrx_rqe) * ifq->rq_entries; + reg.offsets.mmap_sz = ring_sz + rqes_sz; + reg.offsets.rqes = ring_sz; + reg.offsets.head = offsetof(struct io_uring, head); + reg.offsets.tail = offsetof(struct io_uring, tail); + + if (copy_to_user(arg, ®, sizeof(reg))) { + ret = -EFAULT; + goto err; + } + + ctx->ifq = ifq; + return 0; +err: + io_zcrx_ifq_free(ifq); + return ret; +} + +void io_unregister_zcrx_ifqs(struct io_ring_ctx *ctx) +{ + struct io_zcrx_ifq *ifq = ctx->ifq; + + lockdep_assert_held(&ctx->uring_lock); + + if (!ifq) + return; + + ctx->ifq = NULL; + io_zcrx_ifq_free(ifq); +} + +void io_shutdown_zcrx_ifqs(struct io_ring_ctx *ctx) +{ + lockdep_assert_held(&ctx->uring_lock); +} diff --git a/io_uring/zcrx.h b/io_uring/zcrx.h new file mode 100644 index 000000000000..1f76eecac5fd --- /dev/null +++ b/io_uring/zcrx.h @@ -0,0 +1,39 @@ +// SPDX-License-Identifier: GPL-2.0 +#ifndef IOU_ZC_RX_H +#define IOU_ZC_RX_H + +#include + +struct io_zcrx_ifq { + struct io_ring_ctx *ctx; + struct net_device *dev; + struct io_uring *rq_ring; + struct 
io_uring_zcrx_rqe *rqes; + u32 rq_entries; + + unsigned short n_rqe_pages; + struct page **rqe_pages; + + u32 if_rxq; +}; + +#if defined(CONFIG_IO_URING_ZCRX) +int io_register_zcrx_ifq(struct io_ring_ctx *ctx, + struct io_uring_zcrx_ifq_reg __user *arg); +void io_unregister_zcrx_ifqs(struct io_ring_ctx *ctx); +void io_shutdown_zcrx_ifqs(struct io_ring_ctx *ctx); +#else +static inline int io_register_zcrx_ifq(struct io_ring_ctx *ctx, + struct io_uring_zcrx_ifq_reg __user *arg) +{ + return -EOPNOTSUPP; +} +static inline void io_unregister_zcrx_ifqs(struct io_ring_ctx *ctx) +{ +} +static inline void io_shutdown_zcrx_ifqs(struct io_ring_ctx *ctx) +{ +} +#endif + +#endif

From patchwork Wed Oct 16 18:52:47 2024
From: David Wei
To: io-uring@vger.kernel.org, netdev@vger.kernel.org
Cc: David Wei, Jens Axboe, Pavel Begunkov, Jakub Kicinski, Paolo Abeni, "David S. Miller", Eric Dumazet, Jesper Dangaard Brouer, David Ahern, Mina Almasry, Stanislav Fomichev, Joe Damato, Pedro Tammela
Subject: [PATCH v6 10/15] io_uring/zcrx: add io_zcrx_area
Date: Wed, 16 Oct 2024 11:52:47 -0700
Message-ID: <20241016185252.3746190-11-dw@davidwei.uk>
In-Reply-To: <20241016185252.3746190-1-dw@davidwei.uk>
References: <20241016185252.3746190-1-dw@davidwei.uk>

From: David Wei

Add io_zcrx_area that represents a region of userspace memory that is used for zero copy. During ifq registration, userspace passes in the uaddr and len of userspace memory, which is then pinned by the kernel. Each net_iov is mapped to one of these pages.
The freelist is a spinlock protected list that keeps track of all the net_iovs/pages that aren't used. For now, there is only one area per ifq and area registration happens implicitly as part of ifq registration. There is no API for adding/removing areas yet. The struct for area registration is there for future extensibility once we support multiple areas and TCP devmem. Signed-off-by: Pavel Begunkov Signed-off-by: David Wei Reviewed-by: Jens Axboe --- include/uapi/linux/io_uring.h | 9 ++++ io_uring/rsrc.c | 2 +- io_uring/rsrc.h | 1 + io_uring/zcrx.c | 93 ++++++++++++++++++++++++++++++++++- io_uring/zcrx.h | 16 ++++++ 5 files changed, 118 insertions(+), 3 deletions(-) diff --git a/include/uapi/linux/io_uring.h b/include/uapi/linux/io_uring.h index d398e19f8eea..d43183264dcf 100644 --- a/include/uapi/linux/io_uring.h +++ b/include/uapi/linux/io_uring.h @@ -874,6 +874,15 @@ struct io_uring_zcrx_offsets { __u64 __resv[2]; }; +struct io_uring_zcrx_area_reg { + __u64 addr; + __u64 len; + __u64 rq_area_token; + __u32 flags; + __u32 __resv1; + __u64 __resv2[2]; +}; + /* * Argument for IORING_REGISTER_ZCRX_IFQ */ diff --git a/io_uring/rsrc.c b/io_uring/rsrc.c index 33a3d156a85b..4da644de8843 100644 --- a/io_uring/rsrc.c +++ b/io_uring/rsrc.c @@ -86,7 +86,7 @@ static int io_account_mem(struct io_ring_ctx *ctx, unsigned long nr_pages) return 0; } -static int io_buffer_validate(struct iovec *iov) +int io_buffer_validate(struct iovec *iov) { unsigned long tmp, acct_len = iov->iov_len + (PAGE_SIZE - 1); diff --git a/io_uring/rsrc.h b/io_uring/rsrc.h index 8ed588036210..0933dc99f41d 100644 --- a/io_uring/rsrc.h +++ b/io_uring/rsrc.h @@ -83,6 +83,7 @@ int io_register_rsrc_update(struct io_ring_ctx *ctx, void __user *arg, unsigned size, unsigned type); int io_register_rsrc(struct io_ring_ctx *ctx, void __user *arg, unsigned int size, unsigned int type); +int io_buffer_validate(struct iovec *iov); static inline void io_put_rsrc_node(struct io_ring_ctx *ctx, struct io_rsrc_node 
*node) { diff --git a/io_uring/zcrx.c b/io_uring/zcrx.c index 4c53fd4f7bb3..a276572fe953 100644 --- a/io_uring/zcrx.c +++ b/io_uring/zcrx.c @@ -10,6 +10,7 @@ #include "kbuf.h" #include "memmap.h" #include "zcrx.h" +#include "rsrc.h" #define IO_RQ_MAX_ENTRIES 32768 @@ -38,6 +39,83 @@ static void io_free_rbuf_ring(struct io_zcrx_ifq *ifq) ifq->rqes = NULL; } +static void io_zcrx_free_area(struct io_zcrx_area *area) +{ + if (area->freelist) + kvfree(area->freelist); + if (area->nia.niovs) + kvfree(area->nia.niovs); + if (area->pages) { + unpin_user_pages(area->pages, area->nia.num_niovs); + kvfree(area->pages); + } + kfree(area); +} + +static int io_zcrx_create_area(struct io_ring_ctx *ctx, + struct io_zcrx_ifq *ifq, + struct io_zcrx_area **res, + struct io_uring_zcrx_area_reg *area_reg) +{ + struct io_zcrx_area *area; + int i, ret, nr_pages; + struct iovec iov; + + if (area_reg->flags || area_reg->rq_area_token) + return -EINVAL; + if (area_reg->__resv1 || area_reg->__resv2[0] || area_reg->__resv2[1]) + return -EINVAL; + if (area_reg->addr & ~PAGE_MASK || area_reg->len & ~PAGE_MASK) + return -EINVAL; + + iov.iov_base = u64_to_user_ptr(area_reg->addr); + iov.iov_len = area_reg->len; + ret = io_buffer_validate(&iov); + if (ret) + return ret; + + ret = -ENOMEM; + area = kzalloc(sizeof(*area), GFP_KERNEL); + if (!area) + goto err; + + area->pages = io_pin_pages((unsigned long)area_reg->addr, area_reg->len, + &nr_pages); + if (IS_ERR(area->pages)) { + ret = PTR_ERR(area->pages); + area->pages = NULL; + goto err; + } + area->nia.num_niovs = nr_pages; + + area->nia.niovs = kvmalloc_array(nr_pages, sizeof(area->nia.niovs[0]), + GFP_KERNEL | __GFP_ZERO); + if (!area->nia.niovs) + goto err; + + area->freelist = kvmalloc_array(nr_pages, sizeof(area->freelist[0]), + GFP_KERNEL | __GFP_ZERO); + if (!area->freelist) + goto err; + + for (i = 0; i < nr_pages; i++) { + area->freelist[i] = i; + } + + area->free_count = nr_pages; + area->ifq = ifq; + /* we're only supporting one area 
per ifq for now */ + area->area_id = 0; + area_reg->rq_area_token = (u64)area->area_id << IORING_ZCRX_AREA_SHIFT; + spin_lock_init(&area->freelist_lock); + *res = area; + return 0; +err: + if (area) + io_zcrx_free_area(area); + return ret; +} + static struct io_zcrx_ifq *io_zcrx_ifq_alloc(struct io_ring_ctx *ctx) { struct io_zcrx_ifq *ifq; @@ -53,6 +131,9 @@ static struct io_zcrx_ifq *io_zcrx_ifq_alloc(struct io_ring_ctx *ctx) static void io_zcrx_ifq_free(struct io_zcrx_ifq *ifq) { + if (ifq->area) + io_zcrx_free_area(ifq->area); + io_free_rbuf_ring(ifq); kfree(ifq); } @@ -60,6 +141,7 @@ static void io_zcrx_ifq_free(struct io_zcrx_ifq *ifq) int io_register_zcrx_ifq(struct io_ring_ctx *ctx, struct io_uring_zcrx_ifq_reg __user *arg) { + struct io_uring_zcrx_area_reg area; struct io_uring_zcrx_ifq_reg reg; struct io_zcrx_ifq *ifq; size_t ring_sz, rqes_sz; @@ -91,7 +173,7 @@ int io_register_zcrx_ifq(struct io_ring_ctx *ctx, } reg.rq_entries = roundup_pow_of_two(reg.rq_entries); - if (!reg.area_ptr) + if (copy_from_user(&area, u64_to_user_ptr(reg.area_ptr), sizeof(area))) return -EFAULT; ifq = io_zcrx_ifq_alloc(ctx); @@ -102,6 +184,10 @@ int io_register_zcrx_ifq(struct io_ring_ctx *ctx, if (ret) goto err; + ret = io_zcrx_create_area(ctx, ifq, &ifq->area, &area); + if (ret) + goto err; + ifq->rq_entries = reg.rq_entries; ifq->if_rxq = reg.if_rxq; @@ -116,7 +202,10 @@ int io_register_zcrx_ifq(struct io_ring_ctx *ctx, ret = -EFAULT; goto err; } - + if (copy_to_user(u64_to_user_ptr(reg.area_ptr), &area, sizeof(area))) { + ret = -EFAULT; + goto err; + } ctx->ifq = ifq; return 0; err: diff --git a/io_uring/zcrx.h b/io_uring/zcrx.h index 1f76eecac5fd..a8db61498c67 100644 --- a/io_uring/zcrx.h +++ b/io_uring/zcrx.h @@ -3,10 +3,26 @@ #define IOU_ZC_RX_H #include +#include + +struct io_zcrx_area { + struct net_iov_area nia; + struct io_zcrx_ifq *ifq; + + u16 area_id; + struct page **pages; + + /* freelist */ + spinlock_t freelist_lock ____cacheline_aligned_in_smp; + u32 
free_count; + u32 *freelist; +}; struct io_zcrx_ifq { struct io_ring_ctx *ctx; struct net_device *dev; + struct io_zcrx_area *area; + struct io_uring *rq_ring; struct io_uring_zcrx_rqe *rqes; u32 rq_entries;

From patchwork Wed Oct 16 18:52:48 2024
X-Patchwork-Submitter: David Wei
X-Patchwork-Id: 13838753
X-Patchwork-Delegate: kuba@kernel.org
From: David Wei
To: io-uring@vger.kernel.org, netdev@vger.kernel.org
Cc: David Wei, Jens Axboe, Pavel Begunkov, Jakub Kicinski, Paolo Abeni, "David S. Miller", Eric Dumazet, Jesper Dangaard Brouer, David Ahern, Mina Almasry, Stanislav Fomichev, Joe Damato, Pedro Tammela
Subject: [PATCH v6 11/15] io_uring/zcrx: implement zerocopy receive pp memory provider
Date: Wed, 16 Oct 2024 11:52:48 -0700
Message-ID: <20241016185252.3746190-12-dw@davidwei.uk>
In-Reply-To: <20241016185252.3746190-1-dw@davidwei.uk>
References: <20241016185252.3746190-1-dw@davidwei.uk>

From: Pavel Begunkov

Implement a page pool memory provider for io_uring to receive in a zero copy fashion. For that, the provider allocates user pages wrapped into struct net_iovs, which are stored in a previously registered struct net_iov_area.

Unlike with traditional receives, for which pages from a page pool can be deallocated right after the user receives data, e.g. via recv(2), we extend the lifetime by recycling buffers only after user space acknowledges that it is done processing the data via the refill queue. Before handing buffers to the user, we mark them by bumping the refcount by a bias value IO_ZC_RX_UREF, which is checked when the buffer is returned.
When the corresponding io_uring instance and/or page pool are destroyed, we force back all buffers that are currently in user space in ->io_pp_zc_scrub by clearing the bias.

Refcounting and lifetime:

Initially, all buffers are considered unallocated and stored in ->freelist, at which point they are not yet directly exposed to the core page pool code and not accounted in the page pool's pages_state_hold_cnt. The ->alloc_netmems callback allocates them by placing them into the page pool's cache, setting the refcount to 1 as usual and adjusting pages_state_hold_cnt.

Then, either the buffer is dropped and returns to the page pool's ->freelist via io_pp_zc_release_netmem, in which case the page pool matches hold_cnt for us with ->pages_state_release_cnt. Or, more likely, the buffer goes through the network/protocol stacks and ends up in the corresponding socket's receive queue. From there the user can get it via a new io_uring request implemented in the following patches. As mentioned above, before giving a buffer to the user we bump the refcount by IO_ZC_RX_UREF.

Once the user is done processing the buffer, it must return it via the refill queue, from where our ->alloc_netmems implementation can grab it, check references, put IO_ZC_RX_UREF, and recycle the buffer if there are no users left. As we place such buffers right back into the page pool's fast cache and they didn't go through the normal pp release path, they are still considered "allocated" and no pp hold_cnt adjustment is required. For the same reason we dma sync buffers for the device in io_zc_add_pp_cache().
Signed-off-by: Pavel Begunkov Signed-off-by: David Wei Reviewed-by: Jens Axboe --- io_uring/zcrx.c | 215 ++++++++++++++++++++++++++++++++++++++++++++++++ io_uring/zcrx.h | 5 ++ 2 files changed, 220 insertions(+) diff --git a/io_uring/zcrx.c b/io_uring/zcrx.c index a276572fe953..aad35676207e 100644 --- a/io_uring/zcrx.c +++ b/io_uring/zcrx.c @@ -2,7 +2,12 @@ #include #include #include +#include +#include #include +#include +#include +#include #include @@ -14,6 +19,16 @@ #define IO_RQ_MAX_ENTRIES 32768 +__maybe_unused +static const struct memory_provider_ops io_uring_pp_zc_ops; + +static inline struct io_zcrx_area *io_zcrx_iov_to_area(const struct net_iov *niov) +{ + struct net_iov_area *owner = net_iov_owner(niov); + + return container_of(owner, struct io_zcrx_area, nia); +} + static int io_allocate_rbuf_ring(struct io_zcrx_ifq *ifq, struct io_uring_zcrx_ifq_reg *reg) { @@ -99,6 +114,9 @@ static int io_zcrx_create_area(struct io_ring_ctx *ctx, goto err; for (i = 0; i < nr_pages; i++) { + struct net_iov *niov = &area->nia.niovs[i]; + + niov->owner = &area->nia; area->freelist[i] = i; } @@ -230,3 +248,200 @@ void io_shutdown_zcrx_ifqs(struct io_ring_ctx *ctx) { lockdep_assert_held(&ctx->uring_lock); } + +static bool io_zcrx_niov_put(struct net_iov *niov, int nr) +{ + return atomic_long_sub_and_test(nr, &niov->pp_ref_count); +} + +static bool io_zcrx_put_niov_uref(struct net_iov *niov) +{ + if (atomic_long_read(&niov->pp_ref_count) < IO_ZC_RX_UREF) + return false; + + return io_zcrx_niov_put(niov, IO_ZC_RX_UREF); +} + +static inline void io_zc_add_pp_cache(struct page_pool *pp, + struct net_iov *niov) +{ +} + +static inline u32 io_zcrx_rqring_entries(struct io_zcrx_ifq *ifq) +{ + u32 entries; + + entries = smp_load_acquire(&ifq->rq_ring->tail) - ifq->cached_rq_head; + return min(entries, ifq->rq_entries); +} + +static struct io_uring_zcrx_rqe *io_zcrx_get_rqe(struct io_zcrx_ifq *ifq, + unsigned mask) +{ + unsigned int idx = ifq->cached_rq_head++ & mask; + + return 
&ifq->rqes[idx]; +} + +static void io_zcrx_ring_refill(struct page_pool *pp, + struct io_zcrx_ifq *ifq) +{ + unsigned int entries = io_zcrx_rqring_entries(ifq); + unsigned int mask = ifq->rq_entries - 1; + + entries = min_t(unsigned, entries, PP_ALLOC_CACHE_REFILL - pp->alloc.count); + if (unlikely(!entries)) + return; + + do { + struct io_uring_zcrx_rqe *rqe = io_zcrx_get_rqe(ifq, mask); + struct io_zcrx_area *area; + struct net_iov *niov; + unsigned niov_idx, area_idx; + + area_idx = rqe->off >> IORING_ZCRX_AREA_SHIFT; + niov_idx = (rqe->off & ~IORING_ZCRX_AREA_MASK) / PAGE_SIZE; + + if (unlikely(rqe->__pad || area_idx)) + continue; + area = ifq->area; + + if (unlikely(niov_idx >= area->nia.num_niovs)) + continue; + niov_idx = array_index_nospec(niov_idx, area->nia.num_niovs); + + niov = &area->nia.niovs[niov_idx]; + if (!io_zcrx_put_niov_uref(niov)) + continue; + page_pool_mp_return_in_cache(pp, net_iov_to_netmem(niov)); + } while (--entries); + + smp_store_release(&ifq->rq_ring->head, ifq->cached_rq_head); +} + +static void io_zcrx_refill_slow(struct page_pool *pp, struct io_zcrx_ifq *ifq) +{ + struct io_zcrx_area *area = ifq->area; + + spin_lock_bh(&area->freelist_lock); + while (area->free_count && pp->alloc.count < PP_ALLOC_CACHE_REFILL) { + struct net_iov *niov; + u32 pgid; + + pgid = area->freelist[--area->free_count]; + niov = &area->nia.niovs[pgid]; + + page_pool_mp_return_in_cache(pp, net_iov_to_netmem(niov)); + + pp->pages_state_hold_cnt++; + trace_page_pool_state_hold(pp, net_iov_to_netmem(niov), + pp->pages_state_hold_cnt); + } + spin_unlock_bh(&area->freelist_lock); +} + +static void io_zcrx_recycle_niov(struct net_iov *niov) +{ + struct io_zcrx_area *area = io_zcrx_iov_to_area(niov); + + spin_lock_bh(&area->freelist_lock); + area->freelist[area->free_count++] = net_iov_idx(niov); + spin_unlock_bh(&area->freelist_lock); +} + +static netmem_ref io_pp_zc_alloc_netmems(struct page_pool *pp, gfp_t gfp) +{ + struct io_zcrx_ifq *ifq = pp->mp_priv; + + /* 
pp should already be ensuring that */ + if (unlikely(pp->alloc.count)) + goto out_return; + + io_zcrx_ring_refill(pp, ifq); + if (likely(pp->alloc.count)) + goto out_return; + + io_zcrx_refill_slow(pp, ifq); + if (!pp->alloc.count) + return 0; +out_return: + return pp->alloc.cache[--pp->alloc.count]; +} + +static bool io_pp_zc_release_netmem(struct page_pool *pp, netmem_ref netmem) +{ + struct net_iov *niov; + + if (WARN_ON_ONCE(!netmem_is_net_iov(netmem))) + return false; + + niov = netmem_to_net_iov(netmem); + + if (io_zcrx_niov_put(niov, 1)) + io_zcrx_recycle_niov(niov); + return false; +} + +static void io_pp_zc_scrub(struct page_pool *pp) +{ + struct io_zcrx_ifq *ifq = pp->mp_priv; + struct io_zcrx_area *area = ifq->area; + int i; + + /* Reclaim back all buffers given to the user space. */ + for (i = 0; i < area->nia.num_niovs; i++) { + struct net_iov *niov = &area->nia.niovs[i]; + int count; + + if (!io_zcrx_put_niov_uref(niov)) + continue; + io_zcrx_recycle_niov(niov); + + count = atomic_inc_return_relaxed(&pp->pages_state_release_cnt); + trace_page_pool_state_release(pp, net_iov_to_netmem(niov), count); + } +} + +static int io_pp_zc_init(struct page_pool *pp) +{ + struct io_zcrx_ifq *ifq = pp->mp_priv; + struct io_zcrx_area *area = ifq->area; + int ret; + + if (!ifq) + return -EINVAL; + if (pp->p.order != 0) + return -EINVAL; + if (!pp->p.napi) + return -EINVAL; + + ret = page_pool_mp_init_paged_area(pp, &area->nia, area->pages); + if (ret) + return ret; + + percpu_ref_get(&ifq->ctx->refs); + ifq->pp = pp; + return 0; +} + +static void io_pp_zc_destroy(struct page_pool *pp) +{ + struct io_zcrx_ifq *ifq = pp->mp_priv; + struct io_zcrx_area *area = ifq->area; + + page_pool_mp_release_area(pp, &ifq->area->nia); + + ifq->pp = NULL; + + if (WARN_ON_ONCE(area->free_count != area->nia.num_niovs)) + return; + percpu_ref_put(&ifq->ctx->refs); +} + +static const struct memory_provider_ops io_uring_pp_zc_ops = { + .alloc_netmems = io_pp_zc_alloc_netmems, + 
.release_netmem = io_pp_zc_release_netmem, + .init = io_pp_zc_init, + .destroy = io_pp_zc_destroy, + .scrub = io_pp_zc_scrub, +}; diff --git a/io_uring/zcrx.h b/io_uring/zcrx.h index a8db61498c67..464b4bd89b64 100644 --- a/io_uring/zcrx.h +++ b/io_uring/zcrx.h @@ -5,6 +5,9 @@ #include #include +#define IO_ZC_RX_UREF 0x10000 +#define IO_ZC_RX_KREF_MASK (IO_ZC_RX_UREF - 1) + struct io_zcrx_area { struct net_iov_area nia; struct io_zcrx_ifq *ifq; @@ -22,10 +25,12 @@ struct io_zcrx_ifq { struct io_ring_ctx *ctx; struct net_device *dev; struct io_zcrx_area *area; + struct page_pool *pp; struct io_uring *rq_ring; struct io_uring_zcrx_rqe *rqes; u32 rq_entries; + u32 cached_rq_head; unsigned short n_rqe_pages; struct page **rqe_pages;

From patchwork Wed Oct 16 18:52:49 2024
X-Patchwork-Submitter: David Wei
X-Patchwork-Id: 13838754
X-Patchwork-Delegate: kuba@kernel.org
From: David Wei
To: io-uring@vger.kernel.org, netdev@vger.kernel.org
Cc: David Wei, Jens Axboe, Pavel Begunkov, Jakub Kicinski, Paolo Abeni, "David S. Miller", Eric Dumazet, Jesper Dangaard Brouer, David Ahern, Mina Almasry, Stanislav Fomichev, Joe Damato, Pedro Tammela
Subject: [PATCH v6 12/15] io_uring/zcrx: add io_recvzc request
Date: Wed, 16 Oct 2024 11:52:49 -0700
Message-ID: <20241016185252.3746190-13-dw@davidwei.uk>
In-Reply-To: <20241016185252.3746190-1-dw@davidwei.uk>
References: <20241016185252.3746190-1-dw@davidwei.uk>

Add io_uring opcode OP_RECV_ZC for doing zero copy reads out of a socket.
The connection must land on the specific rx queue set up for zero copy, and the socket must be handled by the io_uring instance that the rx queue was registered with for zero copy. That's because net_iovs / buffers from our queue cannot be read by outside applications, and zero copy is not possible if traffic for the zero copy connection goes to another queue. This coordination is outside the scope of this patch series. Also, any traffic directed to the zero copy enabled queue is immediately visible to the application, which is why CAP_NET_ADMIN is required at the registration step.

Of course, no data is actually read out of the socket; it has already been copied by the netdev into userspace memory via DMA. OP_RECV_ZC reads skbs out of the socket and checks that their frags are indeed net_iovs that belong to io_uring. A cqe is queued for each of these frags.

Recall that each cqe is a big cqe, with the top half being an io_uring_zcrx_cqe. The cqe res field contains the len or an error. The lower IORING_ZCRX_AREA_SHIFT bits of the struct io_uring_zcrx_cqe::off field contain the offset relative to the start of the zero copy area. The upper part of the off field is trivially zero for now, and will be used to carry the area id.

For now, there is no limit on how much work each OP_RECV_ZC request does: it will attempt to drain a socket of all available data. This request always operates in multishot mode.
Signed-off-by: David Wei Reviewed-by: Jens Axboe --- include/uapi/linux/io_uring.h | 2 + io_uring/io_uring.h | 10 ++ io_uring/net.c | 71 +++++++++++++ io_uring/opdef.c | 16 +++ io_uring/zcrx.c | 181 +++++++++++++++++++++++++++++++++- io_uring/zcrx.h | 11 +++ 6 files changed, 290 insertions(+), 1 deletion(-) diff --git a/include/uapi/linux/io_uring.h b/include/uapi/linux/io_uring.h index d43183264dcf..0dcb239ebc59 100644 --- a/include/uapi/linux/io_uring.h +++ b/include/uapi/linux/io_uring.h @@ -87,6 +87,7 @@ struct io_uring_sqe { union { __s32 splice_fd_in; __u32 file_index; + __u32 zcrx_ifq_idx; __u32 optlen; struct { __u16 addr_len; @@ -259,6 +260,7 @@ enum io_uring_op { IORING_OP_FTRUNCATE, IORING_OP_BIND, IORING_OP_LISTEN, + IORING_OP_RECV_ZC, /* this goes last, obviously */ IORING_OP_LAST, diff --git a/io_uring/io_uring.h b/io_uring/io_uring.h index 9d70b2cf7b1e..bb7f414e7835 100644 --- a/io_uring/io_uring.h +++ b/io_uring/io_uring.h @@ -176,6 +176,16 @@ static inline bool io_get_cqe(struct io_ring_ctx *ctx, struct io_uring_cqe **ret return io_get_cqe_overflow(ctx, ret, false); } +static inline bool io_defer_get_uncommited_cqe(struct io_ring_ctx *ctx, + struct io_uring_cqe **cqe_ret) +{ + io_lockdep_assert_cq_locked(ctx); + + ctx->cq_extra++; + ctx->submit_state.cq_flush = true; + return io_get_cqe(ctx, cqe_ret); +} + static __always_inline bool io_fill_cqe_req(struct io_ring_ctx *ctx, struct io_kiocb *req) { diff --git a/io_uring/net.c b/io_uring/net.c index 18507658a921..9716ecdcb570 100644 --- a/io_uring/net.c +++ b/io_uring/net.c @@ -16,6 +16,7 @@ #include "net.h" #include "notif.h" #include "rsrc.h" +#include "zcrx.h" #if defined(CONFIG_NET) struct io_shutdown { @@ -89,6 +90,13 @@ struct io_sr_msg { */ #define MULTISHOT_MAX_RETRY 32 +struct io_recvzc { + struct file *file; + unsigned msg_flags; + u16 flags; + struct io_zcrx_ifq *ifq; +}; + int io_shutdown_prep(struct io_kiocb *req, const struct io_uring_sqe *sqe) { struct io_shutdown *shutdown = 
io_kiocb_to_cmd(req, struct io_shutdown); @@ -1202,6 +1210,69 @@ int io_recv(struct io_kiocb *req, unsigned int issue_flags) return ret; } +int io_recvzc_prep(struct io_kiocb *req, const struct io_uring_sqe *sqe) +{ + struct io_recvzc *zc = io_kiocb_to_cmd(req, struct io_recvzc); + unsigned ifq_idx; + + if (unlikely(sqe->file_index || sqe->addr2 || sqe->addr || + sqe->len || sqe->addr3)) + return -EINVAL; + + ifq_idx = READ_ONCE(sqe->zcrx_ifq_idx); + if (ifq_idx != 0) + return -EINVAL; + zc->ifq = req->ctx->ifq; + if (!zc->ifq) + return -EINVAL; + + zc->flags = READ_ONCE(sqe->ioprio); + zc->msg_flags = READ_ONCE(sqe->msg_flags); + if (zc->msg_flags) + return -EINVAL; + if (zc->flags & ~(IORING_RECVSEND_POLL_FIRST | IORING_RECV_MULTISHOT)) + return -EINVAL; + /* multishot required */ + if (!(zc->flags & IORING_RECV_MULTISHOT)) + return -EINVAL; + /* All data completions are posted as aux CQEs. */ + req->flags |= REQ_F_APOLL_MULTISHOT; + + return 0; +} + +int io_recvzc(struct io_kiocb *req, unsigned int issue_flags) +{ + struct io_recvzc *zc = io_kiocb_to_cmd(req, struct io_recvzc); + struct socket *sock; + int ret; + + if (!(req->flags & REQ_F_POLLED) && + (zc->flags & IORING_RECVSEND_POLL_FIRST)) + return -EAGAIN; + + sock = sock_from_file(req->file); + if (unlikely(!sock)) + return -ENOTSOCK; + + ret = io_zcrx_recv(req, zc->ifq, sock, zc->msg_flags | MSG_DONTWAIT); + if (unlikely(ret <= 0) && ret != -EAGAIN) { + if (ret == -ERESTARTSYS) + ret = -EINTR; + + req_set_fail(req); + io_req_set_res(req, ret, 0); + + if (issue_flags & IO_URING_F_MULTISHOT) + return IOU_STOP_MULTISHOT; + return IOU_OK; + } + + if (issue_flags & IO_URING_F_MULTISHOT) + return IOU_ISSUE_SKIP_COMPLETE; + return -EAGAIN; +} + void io_send_zc_cleanup(struct io_kiocb *req) { struct io_sr_msg *zc = io_kiocb_to_cmd(req, struct io_sr_msg); diff --git a/io_uring/opdef.c b/io_uring/opdef.c index a2be3bbca5ff..599eb3ea5ff4 100644 --- a/io_uring/opdef.c +++ b/io_uring/opdef.c @@ -36,6 +36,7 @@ #include 
"waitid.h" #include "futex.h" #include "truncate.h" +#include "zcrx.h" static int io_no_issue(struct io_kiocb *req, unsigned int issue_flags) { @@ -513,6 +514,18 @@ const struct io_issue_def io_issue_defs[] = { .async_size = sizeof(struct io_async_msghdr), #else .prep = io_eopnotsupp_prep, +#endif + }, + [IORING_OP_RECV_ZC] = { + .needs_file = 1, + .unbound_nonreg_file = 1, + .pollin = 1, + .ioprio = 1, +#if defined(CONFIG_NET) + .prep = io_recvzc_prep, + .issue = io_recvzc, +#else + .prep = io_eopnotsupp_prep, #endif }, }; @@ -742,6 +755,9 @@ const struct io_cold_def io_cold_defs[] = { [IORING_OP_LISTEN] = { .name = "LISTEN", }, + [IORING_OP_RECV_ZC] = { + .name = "RECV_ZC", + }, }; const char *io_uring_get_opcode(u8 opcode) diff --git a/io_uring/zcrx.c b/io_uring/zcrx.c index aad35676207e..477b0d1b7b91 100644 --- a/io_uring/zcrx.c +++ b/io_uring/zcrx.c @@ -8,6 +8,8 @@ #include #include #include +#include +#include #include @@ -19,7 +21,12 @@ #define IO_RQ_MAX_ENTRIES 32768 -__maybe_unused +struct io_zcrx_args { + struct io_kiocb *req; + struct io_zcrx_ifq *ifq; + struct socket *sock; +}; + static const struct memory_provider_ops io_uring_pp_zc_ops; static inline struct io_zcrx_area *io_zcrx_iov_to_area(const struct net_iov *niov) @@ -249,6 +256,11 @@ void io_shutdown_zcrx_ifqs(struct io_ring_ctx *ctx) lockdep_assert_held(&ctx->uring_lock); } +static void io_zcrx_get_buf_uref(struct net_iov *niov) +{ + atomic_long_add(IO_ZC_RX_UREF, &niov->pp_ref_count); +} + static bool io_zcrx_niov_put(struct net_iov *niov, int nr) { return atomic_long_sub_and_test(nr, &niov->pp_ref_count); @@ -445,3 +457,170 @@ static const struct memory_provider_ops io_uring_pp_zc_ops = { .destroy = io_pp_zc_destroy, .scrub = io_pp_zc_scrub, }; + +static bool io_zcrx_queue_cqe(struct io_kiocb *req, struct net_iov *niov, + struct io_zcrx_ifq *ifq, int off, int len) +{ + struct io_uring_zcrx_cqe *rcqe; + struct io_zcrx_area *area; + struct io_uring_cqe *cqe; + u64 offset; + + if 
(!io_defer_get_uncommited_cqe(req->ctx, &cqe)) + return false; + + cqe->user_data = req->cqe.user_data; + cqe->res = len; + cqe->flags = IORING_CQE_F_MORE; + + area = io_zcrx_iov_to_area(niov); + offset = off + (net_iov_idx(niov) << PAGE_SHIFT); + rcqe = (struct io_uring_zcrx_cqe *)(cqe + 1); + rcqe->off = offset + ((u64)area->area_id << IORING_ZCRX_AREA_SHIFT); + rcqe->__pad = 0; + return true; +} + +static int io_zcrx_recv_frag(struct io_kiocb *req, struct io_zcrx_ifq *ifq, + const skb_frag_t *frag, int off, int len) +{ + struct net_iov *niov; + + off += skb_frag_off(frag); + + if (unlikely(!skb_frag_is_net_iov(frag))) + return -EOPNOTSUPP; + + niov = netmem_to_net_iov(frag->netmem); + if (niov->pp->mp_ops != &io_uring_pp_zc_ops || + niov->pp->mp_priv != ifq) + return -EFAULT; + + if (!io_zcrx_queue_cqe(req, niov, ifq, off, len)) + return -ENOSPC; + io_zcrx_get_buf_uref(niov); + return len; +} + +static int +io_zcrx_recv_skb(read_descriptor_t *desc, struct sk_buff *skb, + unsigned int offset, size_t len) +{ + struct io_zcrx_args *args = desc->arg.data; + struct io_zcrx_ifq *ifq = args->ifq; + struct io_kiocb *req = args->req; + struct sk_buff *frag_iter; + unsigned start, start_off; + int i, copy, end, off; + int ret = 0; + + start = skb_headlen(skb); + start_off = offset; + + if (offset < start) + return -EOPNOTSUPP; + + for (i = 0; i < skb_shinfo(skb)->nr_frags; i++) { + const skb_frag_t *frag; + + if (WARN_ON(start > offset + len)) + return -EFAULT; + + frag = &skb_shinfo(skb)->frags[i]; + end = start + skb_frag_size(frag); + + if (offset < end) { + copy = end - offset; + if (copy > len) + copy = len; + + off = offset - start; + ret = io_zcrx_recv_frag(req, ifq, frag, off, copy); + if (ret < 0) + goto out; + + offset += ret; + len -= ret; + if (len == 0 || ret != copy) + goto out; + } + start = end; + } + + skb_walk_frags(skb, frag_iter) { + if (WARN_ON(start > offset + len)) + return -EFAULT; + + end = start + frag_iter->len; + if (offset < end) { + copy = 
end - offset; + if (copy > len) + copy = len; + + off = offset - start; + ret = io_zcrx_recv_skb(desc, frag_iter, off, copy); + if (ret < 0) + goto out; + + offset += ret; + len -= ret; + if (len == 0 || ret != copy) + goto out; + } + start = end; + } + +out: + if (offset == start_off) + return ret; + return offset - start_off; +} + +static int io_zcrx_tcp_recvmsg(struct io_kiocb *req, struct io_zcrx_ifq *ifq, + struct sock *sk, int flags) +{ + struct io_zcrx_args args = { + .req = req, + .ifq = ifq, + .sock = sk->sk_socket, + }; + read_descriptor_t rd_desc = { + .count = 1, + .arg.data = &args, + }; + int ret; + + lock_sock(sk); + ret = tcp_read_sock(sk, &rd_desc, io_zcrx_recv_skb); + if (ret <= 0) { + if (ret < 0 || sock_flag(sk, SOCK_DONE)) + goto out; + if (sk->sk_err) + ret = sock_error(sk); + else if (sk->sk_shutdown & RCV_SHUTDOWN) + goto out; + else if (sk->sk_state == TCP_CLOSE) + ret = -ENOTCONN; + else + ret = -EAGAIN; + } else if (sock_flag(sk, SOCK_DONE)) { + /* Make it to retry until it finally gets 0. 
*/ + ret = -EAGAIN; + } +out: + release_sock(sk); + return ret; +} + +int io_zcrx_recv(struct io_kiocb *req, struct io_zcrx_ifq *ifq, + struct socket *sock, unsigned int flags) +{ + struct sock *sk = sock->sk; + const struct proto *prot = READ_ONCE(sk->sk_prot); + + if (prot->recvmsg != tcp_recvmsg) + return -EPROTONOSUPPORT; + + sock_rps_record_flow(sk); + return io_zcrx_tcp_recvmsg(req, ifq, sk, flags); +} diff --git a/io_uring/zcrx.h b/io_uring/zcrx.h index 464b4bd89b64..1f039ad45a63 100644 --- a/io_uring/zcrx.h +++ b/io_uring/zcrx.h @@ -3,6 +3,7 @@ #define IOU_ZC_RX_H #include +#include #include #define IO_ZC_RX_UREF 0x10000 @@ -43,6 +44,8 @@ int io_register_zcrx_ifq(struct io_ring_ctx *ctx, struct io_uring_zcrx_ifq_reg __user *arg); void io_unregister_zcrx_ifqs(struct io_ring_ctx *ctx); void io_shutdown_zcrx_ifqs(struct io_ring_ctx *ctx); +int io_zcrx_recv(struct io_kiocb *req, struct io_zcrx_ifq *ifq, + struct socket *sock, unsigned int flags); #else static inline int io_register_zcrx_ifq(struct io_ring_ctx *ctx, struct io_uring_zcrx_ifq_reg __user *arg) @@ -55,6 +58,14 @@ static inline void io_unregister_zcrx_ifqs(struct io_ring_ctx *ctx) static inline void io_shutdown_zcrx_ifqs(struct io_ring_ctx *ctx) { } +static inline int io_zcrx_recv(struct io_kiocb *req, struct io_zcrx_ifq *ifq, + struct socket *sock, unsigned int flags) +{ + return -EOPNOTSUPP; +} #endif +int io_recvzc(struct io_kiocb *req, unsigned int issue_flags); +int io_recvzc_prep(struct io_kiocb *req, const struct io_uring_sqe *sqe); + #endif From patchwork Wed Oct 16 18:52:50 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: David Wei X-Patchwork-Id: 13838755 X-Patchwork-Delegate: kuba@kernel.org Received: from mail-pl1-f169.google.com (mail-pl1-f169.google.com [209.85.214.169]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with 
ESMTPS id 4CBD62139AF for ; Wed, 16 Oct 2024 18:53:19 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=209.85.214.169 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1729104800; cv=none; b=tt0TJuGa3fxdfTn3LO5EzynEp8l8EylskpL16Jsg63+HiY7m9pZMdo4YagZzrXuMcpQVYf6nL7PboJK/8bajA2Kz4c8rG3OZ/++lzupTD3K/1vg4pBrehyl6uyo9YbPCd9hrnOQu2/1+bKoZzKKlgkJI5Z1zw39tgeO/cGR9KYY= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1729104800; c=relaxed/simple; bh=aa+huor/6qdUs/u5/3W+OBGNv1FL/yOsPwUTTmTQnzg=; h=From:To:Cc:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version; b=jqqhLzM+58yKI9xYk2F1/yiU8W6m7oAafuO2w0ZkvPh7/V+R+FgSBXWjVT4CKPu4Vnzau6OoMKuV+GxMMSNsrL3zAEW1g5kuEfNtXVaVk+xTmoyDkgr2MDwa3K0Dqi/f7QZPswuBliba8Fg2uk0hNWO70SF51Iy8FTMBKU8s9TQ= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=none (p=none dis=none) header.from=davidwei.uk; spf=none smtp.mailfrom=davidwei.uk; dkim=pass (2048-bit key) header.d=davidwei-uk.20230601.gappssmtp.com header.i=@davidwei-uk.20230601.gappssmtp.com header.b=TkvKqhQV; arc=none smtp.client-ip=209.85.214.169 Authentication-Results: smtp.subspace.kernel.org; dmarc=none (p=none dis=none) header.from=davidwei.uk Authentication-Results: smtp.subspace.kernel.org; spf=none smtp.mailfrom=davidwei.uk Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=davidwei-uk.20230601.gappssmtp.com header.i=@davidwei-uk.20230601.gappssmtp.com header.b="TkvKqhQV" Received: by mail-pl1-f169.google.com with SMTP id d9443c01a7336-20cf6eea3c0so1392705ad.0 for ; Wed, 16 Oct 2024 11:53:19 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=davidwei-uk.20230601.gappssmtp.com; s=20230601; t=1729104798; x=1729709598; darn=vger.kernel.org; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; 
bh=infCm4b2+YzvgJNyI1UPPed5lAP0dKTh35zcwGfwOss=; b=TkvKqhQVzFRosqDZl+x08qAk7m3eph9yB/SpqinAcuA17jQHIER8aMTZe5+kIOC/5K OqWbx9uFLIn15aCw8KwjSVDES+LI2Vj5Ei89aIq0dlTTAvN1AoHVByQXF4GHeBHGbjZg d2JQqsMKNQpxoj0oeF5K37Leq+07IbfOR1+r/SRbQrFouOx4IMQRhE03ZyAwLkCKKVmC kOFwSQgWxXDq6AhWhlviYc/AJ1OhGu4XI+OaX/blG44pNDT0OFt+NBQL/x0fwSjVTpVS D33oWn0xHGufgXbwgvsDt1OV5+TLftIuw12resqasQx6WUyc13fEvlOqv04f7EajRmfF heVQ== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1729104798; x=1729709598; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=infCm4b2+YzvgJNyI1UPPed5lAP0dKTh35zcwGfwOss=; b=otlPQgEDtAP9KxnMSS2A1fHOzbLhCrXV2+AEeYyW6qZZ8HAw1yaalEkayoSpSea41n imUmwMjeKBzvUouEtGi5mGQhMQ+lQt5dgCtpd2ujGFUp771toU4459HEg7MmZJvj649f w2Kj2CFI19tzNgpD7P4nc8te5VAiigw3BvYX7A2MYwarJy0BvzLMpZLGi3EVoeYD9/sv DTcwH3kuarHoAF8kNR39nifQn0sHz0R9Kwa7WCVX2oEeXpGVvzJSIWqTUUq2bCxWmMPR e/APim+bvWppazchNSbaQxkyEh0Vs+wUKsI8GXLZxm6qnnN1QkQ9USmOY14vuRFwTvTp enFQ== X-Forwarded-Encrypted: i=1; AJvYcCW6lNhnyB/fn9gkU/826torCha6FMLsN5PhgzbRhsSiJsp+xaL711mfxQtpqz+v9w4lKBneNCs=@vger.kernel.org X-Gm-Message-State: AOJu0YwNnEZPqHo97ve/UYn4h7XcG8v0GxhBGfnmFe2qu+S9Lisz+KU8 EzDn2T8sKiMMFK6uMSisnyAVxLBf33W+ApjgPRSuMxpoEm1q+4H2L0jN4JWnY3k= X-Google-Smtp-Source: AGHT+IGkNe9K9cBR5SbFKWb0dgia9tqwF0W7vBE2efKajMt672mzsefOWdsyqZtZEToIi9SbmxMkew== X-Received: by 2002:a17:903:1cd:b0:20c:7a0b:74a5 with SMTP id d9443c01a7336-20d27f1c789mr72724885ad.39.1729104798616; Wed, 16 Oct 2024 11:53:18 -0700 (PDT) Received: from localhost (fwdproxy-prn-115.fbsv.net. 
From: David Wei <dw@davidwei.uk>
To: io-uring@vger.kernel.org, netdev@vger.kernel.org
Cc: David Wei, Jens Axboe, Pavel Begunkov, Jakub Kicinski, Paolo Abeni,
 "David S. Miller", Eric Dumazet, Jesper Dangaard Brouer, David Ahern,
 Mina Almasry, Stanislav Fomichev, Joe Damato, Pedro Tammela
Subject: [PATCH v6 13/15] io_uring/zcrx: set pp memory provider for an rx queue
Date: Wed, 16 Oct 2024 11:52:50 -0700
Message-ID: <20241016185252.3746190-14-dw@davidwei.uk>
In-Reply-To: <20241016185252.3746190-1-dw@davidwei.uk>
References: <20241016185252.3746190-1-dw@davidwei.uk>

From: David Wei

Set the page pool memory provider for the rx queue configured for zero copy
to io_uring. Then the rx queue is reset using netdev_rx_queue_restart(), and
netdev core + page pool will take care of filling the rx queue from the
io_uring zero copy memory provider.

For now there is only one ifq, so its destruction happens implicitly during
io_uring cleanup.
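The install/restart/rollback sequence the commit message describes can be modeled in a minimal userspace sketch. All names here (open_zc_rxq, rxq_restart, the stub structs) are hypothetical stand-ins, not the kernel API; the point is only the ordering: refuse a claimed queue, install the provider, restart, and fully undo on failure.

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical stand-ins for the kernel structures involved. */
struct mp_params { const void *ops; void *priv; };
struct rxq { struct mp_params mp_params; };
struct ifq { int if_rxq; };          /* -1 while no queue is claimed */

static const int zc_ops;             /* placeholder for io_uring_pp_zc_ops */

/* Stub for netdev_rx_queue_restart(); tests can force it to fail. */
static int restart_should_fail;
static int rxq_restart(struct rxq *q)
{
        (void)q;
        return restart_should_fail ? -1 : 0;
}

/* Mirrors the flow of the patch's io_open_zc_rxq(): refuse a queue that
 * already has a provider, install ops/priv, restart the queue, and roll
 * everything back if the restart fails. */
static int open_zc_rxq(struct ifq *ifq, struct rxq *q, int idx)
{
        if (q->mp_params.priv)
                return -17;          /* -EEXIST: queue already claimed */

        ifq->if_rxq = idx;
        q->mp_params.ops = &zc_ops;
        q->mp_params.priv = ifq;
        if (rxq_restart(q) == 0)
                return 0;

        /* Restart failed: leave no trace of the installation. */
        q->mp_params.ops = NULL;
        q->mp_params.priv = NULL;
        ifq->if_rxq = -1;
        return -1;
}
```

The same rollback runs in the real patch's `fail:` label, which is what keeps a failed registration from leaving a half-configured queue behind.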
Signed-off-by: David Wei
Reviewed-by: Jens Axboe
---
 io_uring/zcrx.c | 86 +++++++++++++++++++++++++++++++++++++++++++++++--
 io_uring/zcrx.h |  2 ++
 2 files changed, 86 insertions(+), 2 deletions(-)

diff --git a/io_uring/zcrx.c b/io_uring/zcrx.c
index 477b0d1b7b91..3f4625730dbd 100644
--- a/io_uring/zcrx.c
+++ b/io_uring/zcrx.c
@@ -8,6 +8,7 @@
 #include
 #include
 #include
+#include
 #include
 #include
@@ -36,6 +37,65 @@ static inline struct io_zcrx_area *io_zcrx_iov_to_area(const struct net_iov *nio
 	return container_of(owner, struct io_zcrx_area, nia);
 }
 
+static int io_open_zc_rxq(struct io_zcrx_ifq *ifq, unsigned ifq_idx)
+{
+	struct netdev_rx_queue *rxq;
+	struct net_device *dev = ifq->dev;
+	int ret;
+
+	ASSERT_RTNL();
+
+	if (ifq_idx >= dev->num_rx_queues)
+		return -EINVAL;
+	ifq_idx = array_index_nospec(ifq_idx, dev->num_rx_queues);
+
+	rxq = __netif_get_rx_queue(ifq->dev, ifq_idx);
+	if (rxq->mp_params.mp_priv)
+		return -EEXIST;
+
+	ifq->if_rxq = ifq_idx;
+	rxq->mp_params.mp_ops = &io_uring_pp_zc_ops;
+	rxq->mp_params.mp_priv = ifq;
+	ret = netdev_rx_queue_restart(ifq->dev, ifq->if_rxq);
+	if (ret)
+		goto fail;
+	return 0;
+fail:
+	rxq->mp_params.mp_ops = NULL;
+	rxq->mp_params.mp_priv = NULL;
+	ifq->if_rxq = -1;
+	return ret;
+}
+
+static void io_close_zc_rxq(struct io_zcrx_ifq *ifq)
+{
+	struct netdev_rx_queue *rxq;
+	int err;
+
+	if (ifq->if_rxq == -1)
+		return;
+
+	rtnl_lock();
+	if (WARN_ON_ONCE(ifq->if_rxq >= ifq->dev->num_rx_queues)) {
+		rtnl_unlock();
+		return;
+	}
+
+	rxq = __netif_get_rx_queue(ifq->dev, ifq->if_rxq);
+
+	WARN_ON_ONCE(rxq->mp_params.mp_priv != ifq);
+
+	rxq->mp_params.mp_ops = NULL;
+	rxq->mp_params.mp_priv = NULL;
+
+	err = netdev_rx_queue_restart(ifq->dev, ifq->if_rxq);
+	if (err)
+		pr_devel("io_uring: can't restart a queue on zcrx close\n");
+
+	rtnl_unlock();
+	ifq->if_rxq = -1;
+}
+
 static int io_allocate_rbuf_ring(struct io_zcrx_ifq *ifq,
 				 struct io_uring_zcrx_ifq_reg *reg)
 {
@@ -156,9 +216,12 @@ static struct io_zcrx_ifq *io_zcrx_ifq_alloc(struct io_ring_ctx *ctx)
 
 static void io_zcrx_ifq_free(struct io_zcrx_ifq *ifq)
 {
+	io_close_zc_rxq(ifq);
+
 	if (ifq->area)
 		io_zcrx_free_area(ifq->area);
-
+	if (ifq->dev)
+		netdev_put(ifq->dev, &ifq->netdev_tracker);
 	io_free_rbuf_ring(ifq);
 	kfree(ifq);
 }
@@ -214,7 +277,18 @@ int io_register_zcrx_ifq(struct io_ring_ctx *ctx,
 		goto err;
 
 	ifq->rq_entries = reg.rq_entries;
-	ifq->if_rxq = reg.if_rxq;
+
+	ret = -ENODEV;
+	rtnl_lock();
+	ifq->dev = netdev_get_by_index(current->nsproxy->net_ns, reg.if_idx,
+				       &ifq->netdev_tracker, GFP_KERNEL);
+	if (!ifq->dev)
+		goto err_rtnl_unlock;
+
+	ret = io_open_zc_rxq(ifq, reg.if_rxq);
+	if (ret)
+		goto err_rtnl_unlock;
+	rtnl_unlock();
 
 	ring_sz = sizeof(struct io_uring);
 	rqes_sz = sizeof(struct io_uring_zcrx_rqe) * ifq->rq_entries;
@@ -224,15 +298,20 @@ int io_register_zcrx_ifq(struct io_ring_ctx *ctx,
 	reg.offsets.tail = offsetof(struct io_uring, tail);
 
 	if (copy_to_user(arg, &reg, sizeof(reg))) {
+		io_close_zc_rxq(ifq);
 		ret = -EFAULT;
 		goto err;
 	}
 	if (copy_to_user(u64_to_user_ptr(reg.area_ptr), &area, sizeof(area))) {
+		io_close_zc_rxq(ifq);
 		ret = -EFAULT;
 		goto err;
 	}
 	ctx->ifq = ifq;
 	return 0;
+
+err_rtnl_unlock:
+	rtnl_unlock();
 err:
 	io_zcrx_ifq_free(ifq);
 	return ret;
@@ -254,6 +333,9 @@ void io_unregister_zcrx_ifqs(struct io_ring_ctx *ctx)
 void io_shutdown_zcrx_ifqs(struct io_ring_ctx *ctx)
 {
 	lockdep_assert_held(&ctx->uring_lock);
+
+	if (ctx->ifq)
+		io_close_zc_rxq(ctx->ifq);
 }
 
 static void io_zcrx_get_buf_uref(struct net_iov *niov)
diff --git a/io_uring/zcrx.h b/io_uring/zcrx.h
index 1f039ad45a63..d3f6b6cdd647 100644
--- a/io_uring/zcrx.h
+++ b/io_uring/zcrx.h
@@ -5,6 +5,7 @@
 #include
 #include
 #include
+#include
 
 #define IO_ZC_RX_UREF			0x10000
 #define IO_ZC_RX_KREF_MASK		(IO_ZC_RX_UREF - 1)
@@ -37,6 +38,7 @@ struct io_zcrx_ifq {
 	struct page			**rqe_pages;
 
 	u32				if_rxq;
+	netdevice_tracker		netdev_tracker;
 };
 
 #if defined(CONFIG_IO_URING_ZCRX)

From patchwork Wed Oct 16 18:52:51 2024
X-Patchwork-Submitter: David Wei
X-Patchwork-Id: 13838756
X-Patchwork-Delegate: kuba@kernel.org
From: David Wei <dw@davidwei.uk>
To: io-uring@vger.kernel.org, netdev@vger.kernel.org
Cc: David Wei, Jens Axboe, Pavel Begunkov, Jakub Kicinski, Paolo Abeni,
 "David S. Miller", Eric Dumazet, Jesper Dangaard Brouer, David Ahern,
 Mina Almasry, Stanislav Fomichev, Joe Damato, Pedro Tammela
Subject: [PATCH v6 14/15] io_uring/zcrx: add copy fallback
Date: Wed, 16 Oct 2024 11:52:51 -0700
Message-ID: <20241016185252.3746190-15-dw@davidwei.uk>
In-Reply-To: <20241016185252.3746190-1-dw@davidwei.uk>
References: <20241016185252.3746190-1-dw@davidwei.uk>

From: Pavel Begunkov

There are scenarios in which the zerocopy path might get a normal in-kernel
buffer: it could be a mis-steered packet or simply the linear part of an skb.
Another use case is to allow the driver to allocate kernel pages when it is
out of zc buffers, which makes it more resilient to spikes in load and allows
the user to choose the balance between the amount of memory provided and
performance.

At the moment we fail such requests. Instead, grab a buffer from the page
pool, copy the data there, and return it to the user in the usual way.
Because the refill ring is private to the napi our page pool is running
from, this is done by stopping the napi via the napi_execute() helper. It
grabs only one buffer at a time, which is inefficient; improving that is
left for follow-up patches.
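The fallback amounts to "borrow a pool buffer, copy one page-sized chunk, repeat, and report partial progress". A minimal userspace sketch of that loop follows; get_buf and copy_chunks are hypothetical stand-ins for the kernel helpers, and the pool stub models the refill helper coming back empty-handed.

```c
#include <assert.h>
#include <string.h>
#include <stddef.h>

#define CHUNK 4096              /* stands in for PAGE_SIZE */

/* Stub buffer pool: hands out fixed-size buffers until it runs dry. */
static char pool[4][CHUNK];
static int pool_next;

static char *get_buf(void)
{
        return pool_next < 4 ? pool[pool_next++] : NULL;
}

static char src[2 * CHUNK];     /* source data to be copied */

/* Mirrors the shape of the patch's io_zcrx_copy_chunk(): copy data one
 * chunk at a time into pool buffers, returning bytes copied, or -ENOMEM
 * if not even one buffer could be obtained. */
static long copy_chunks(const char *data, size_t len)
{
        size_t copied = 0;

        while (len) {
                size_t n = len < CHUNK ? len : CHUNK;
                char *buf = get_buf();

                if (!buf)
                        return copied ? (long)copied : -12; /* -ENOMEM */
                memcpy(buf, data + copied, n);
                copied += n;
                len -= n;
        }
        return (long)copied;
}
```

Returning the partial byte count instead of an error once anything has been copied is what lets the receive path surface whatever progress it made before the pool ran dry.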
Signed-off-by: Pavel Begunkov
Signed-off-by: David Wei
---
 io_uring/zcrx.c | 133 +++++++++++++++++++++++++++++++++++++++++++++---
 io_uring/zcrx.h |   1 +
 2 files changed, 127 insertions(+), 7 deletions(-)

diff --git a/io_uring/zcrx.c b/io_uring/zcrx.c
index 3f4625730dbd..1f4db70e3370 100644
--- a/io_uring/zcrx.c
+++ b/io_uring/zcrx.c
@@ -5,6 +5,8 @@
 #include
 #include
 #include
+#include
+#include
 #include
 #include
 #include
@@ -28,6 +30,11 @@ struct io_zcrx_args {
 	struct socket		*sock;
 };
 
+struct io_zc_refill_data {
+	struct io_zcrx_ifq *ifq;
+	struct net_iov *niov;
+};
+
 static const struct memory_provider_ops io_uring_pp_zc_ops;
 
 static inline struct io_zcrx_area *io_zcrx_iov_to_area(const struct net_iov *niov)
@@ -37,6 +44,13 @@ static inline struct io_zcrx_area *io_zcrx_iov_to_area(const struct net_iov *nio
 	return container_of(owner, struct io_zcrx_area, nia);
 }
 
+static inline struct page *io_zcrx_iov_page(const struct net_iov *niov)
+{
+	struct io_zcrx_area *area = io_zcrx_iov_to_area(niov);
+
+	return area->pages[net_iov_idx(niov)];
+}
+
 static int io_open_zc_rxq(struct io_zcrx_ifq *ifq, unsigned ifq_idx)
 {
 	struct netdev_rx_queue *rxq;
@@ -59,6 +73,13 @@ static int io_open_zc_rxq(struct io_zcrx_ifq *ifq, unsigned ifq_idx)
 	ret = netdev_rx_queue_restart(ifq->dev, ifq->if_rxq);
 	if (ret)
 		goto fail;
+
+	if (WARN_ON_ONCE(!ifq->pp)) {
+		ret = -EFAULT;
+		goto fail;
+	}
+	/* grab napi_id while still under rtnl */
+	ifq->napi_id = ifq->pp->p.napi->napi_id;
 	return 0;
 fail:
 	rxq->mp_params.mp_ops = NULL;
@@ -526,6 +547,7 @@ static void io_pp_zc_destroy(struct page_pool *pp)
 	page_pool_mp_release_area(pp, &ifq->area->nia);
 
 	ifq->pp = NULL;
+	ifq->napi_id = 0;
 
 	if (WARN_ON_ONCE(area->free_count != area->nia.num_niovs))
 		return;
@@ -540,6 +562,34 @@ static const struct memory_provider_ops io_uring_pp_zc_ops = {
 	.scrub			= io_pp_zc_scrub,
 };
 
+static void io_napi_refill(void *data)
+{
+	struct io_zc_refill_data *rd = data;
+	struct io_zcrx_ifq *ifq = rd->ifq;
+	netmem_ref netmem;
+
+	if (WARN_ON_ONCE(!ifq->pp))
+		return;
+
+	netmem = page_pool_alloc_netmem(ifq->pp, GFP_ATOMIC | __GFP_NOWARN);
+	if (!netmem)
+		return;
+	if (WARN_ON_ONCE(!netmem_is_net_iov(netmem)))
+		return;
+
+	rd->niov = netmem_to_net_iov(netmem);
+}
+
+static struct net_iov *io_zc_get_buf_task_safe(struct io_zcrx_ifq *ifq)
+{
+	struct io_zc_refill_data rd = {
+		.ifq = ifq,
+	};
+
+	napi_execute(ifq->napi_id, io_napi_refill, &rd);
+	return rd.niov;
+}
+
 static bool io_zcrx_queue_cqe(struct io_kiocb *req, struct net_iov *niov,
 			      struct io_zcrx_ifq *ifq, int off, int len)
 {
@@ -563,6 +613,45 @@ static bool io_zcrx_queue_cqe(struct io_kiocb *req, struct net_iov *niov,
 	return true;
 }
 
+static ssize_t io_zcrx_copy_chunk(struct io_kiocb *req, struct io_zcrx_ifq *ifq,
+				  void *data, unsigned int offset, size_t len)
+{
+	size_t copy_size, copied = 0;
+	int ret = 0, off = 0;
+	struct page *page;
+	u8 *vaddr;
+
+	do {
+		struct net_iov *niov;
+
+		niov = io_zc_get_buf_task_safe(ifq);
+		if (!niov) {
+			ret = -ENOMEM;
+			break;
+		}
+
+		page = io_zcrx_iov_page(niov);
+		vaddr = kmap_local_page(page);
+		copy_size = min_t(size_t, PAGE_SIZE, len);
+		memcpy(vaddr, data + offset, copy_size);
+		kunmap_local(vaddr);
+
+		if (!io_zcrx_queue_cqe(req, niov, ifq, off, copy_size)) {
+			napi_pp_put_page(net_iov_to_netmem(niov));
+			return -ENOSPC;
+		}
+
+		io_zcrx_get_buf_uref(niov);
+		napi_pp_put_page(net_iov_to_netmem(niov));
+
+		offset += copy_size;
+		len -= copy_size;
+		copied += copy_size;
+	} while (offset < len);
+
+	return copied ? copied : ret;
+}
+
 static int io_zcrx_recv_frag(struct io_kiocb *req, struct io_zcrx_ifq *ifq,
 			     const skb_frag_t *frag, int off, int len)
 {
@@ -570,8 +659,24 @@ static int io_zcrx_recv_frag(struct io_kiocb *req, struct io_zcrx_ifq *ifq,
 
 	off += skb_frag_off(frag);
 
-	if (unlikely(!skb_frag_is_net_iov(frag)))
-		return -EOPNOTSUPP;
+	if (unlikely(!skb_frag_is_net_iov(frag))) {
+		struct page *page = skb_frag_page(frag);
+		u32 p_off, p_len, t, copied = 0;
+		u8 *vaddr;
+		int ret = 0;
+
+		skb_frag_foreach_page(frag, off, len,
+				      page, p_off, p_len, t) {
+			vaddr = kmap_local_page(page);
+			ret = io_zcrx_copy_chunk(req, ifq, vaddr, p_off, p_len);
+			kunmap_local(vaddr);
+
+			if (ret < 0)
+				return copied ? copied : ret;
+			copied += ret;
+		}
+		return copied;
+	}
 
 	niov = netmem_to_net_iov(frag->netmem);
 	if (niov->pp->mp_ops != &io_uring_pp_zc_ops ||
@@ -592,15 +697,29 @@ io_zcrx_recv_skb(read_descriptor_t *desc, struct sk_buff *skb,
 	struct io_zcrx_ifq *ifq = args->ifq;
 	struct io_kiocb *req = args->req;
 	struct sk_buff *frag_iter;
-	unsigned start, start_off;
+	unsigned start, start_off = offset;
 	int i, copy, end, off;
 	int ret = 0;
 
-	start = skb_headlen(skb);
-	start_off = offset;
+	if (unlikely(offset < skb_headlen(skb))) {
+		ssize_t copied;
+		size_t to_copy;
 
-	if (offset < start)
-		return -EOPNOTSUPP;
+		to_copy = min_t(size_t, skb_headlen(skb) - offset, len);
+		copied = io_zcrx_copy_chunk(req, ifq, skb->data, offset, to_copy);
+		if (copied < 0) {
+			ret = copied;
+			goto out;
+		}
+		offset += copied;
+		len -= copied;
+		if (!len)
+			goto out;
+		if (offset != skb_headlen(skb))
+			goto out;
+	}
+
+	start = skb_headlen(skb);
 
 	for (i = 0; i < skb_shinfo(skb)->nr_frags; i++) {
 		const skb_frag_t *frag;
diff --git a/io_uring/zcrx.h b/io_uring/zcrx.h
index d3f6b6cdd647..5d7920972e95 100644
--- a/io_uring/zcrx.h
+++ b/io_uring/zcrx.h
@@ -39,6 +39,7 @@ struct io_zcrx_ifq {
 
 	u32				if_rxq;
 	netdevice_tracker		netdev_tracker;
+	unsigned			napi_id;
 };
 
 #if defined(CONFIG_IO_URING_ZCRX)

From patchwork Wed Oct 16
18:52:52 2024
X-Patchwork-Submitter: David Wei
X-Patchwork-Id: 13838757
X-Patchwork-Delegate: kuba@kernel.org
From: David Wei <dw@davidwei.uk>
To: io-uring@vger.kernel.org, netdev@vger.kernel.org
Cc: David Wei, Jens Axboe, Pavel Begunkov, Jakub Kicinski, Paolo Abeni,
 "David S. Miller", Eric Dumazet, Jesper Dangaard Brouer, David Ahern,
 Mina Almasry, Stanislav Fomichev, Joe Damato, Pedro Tammela
Subject: [PATCH v6 15/15] io_uring/zcrx: throttle receive requests
Date: Wed, 16 Oct 2024 11:52:52 -0700
Message-ID: <20241016185252.3746190-16-dw@davidwei.uk>
In-Reply-To: <20241016185252.3746190-1-dw@davidwei.uk>
References: <20241016185252.3746190-1-dw@davidwei.uk>

From: Pavel Begunkov

io_zcrx_tcp_recvmsg() continues until it fails or there is nothing left to
receive. If the other side sends fast enough, we might get stuck in
io_zcrx_tcp_recvmsg() producing more and more CQEs without letting the user
handle them, leading to unbounded latencies. Break out of it based on an
arbitrarily chosen limit; the upper layer will either return to userspace
or requeue the request.
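The throttle itself is just a per-call counter compared against a fixed limit. A minimal userspace model of that accounting follows; recv_budgeted and RET_REQUEUE are hypothetical stand-ins for the kernel code, but the post-increment comparison matches the patch, so the limit-plus-one-th skb is still handled before the call bails out.

```c
#include <assert.h>

#define SKBS_PER_CALL_LIMIT 20  /* mirrors IO_SKBS_PER_CALL_LIMIT */
#define RET_REQUEUE (-2)        /* hypothetical stand-in for IOU_REQUEUE */

/* Mirrors the accounting added to io_zcrx_recv_skb(): a post-increment
 * counter lets LIMIT + 1 skbs through, then the call returns so the
 * upper layer can requeue the request instead of looping forever. */
static int recv_budgeted(int pending_skbs, int *processed)
{
        int nr_skbs = 0;

        *processed = 0;
        while (pending_skbs-- > 0) {
                if (nr_skbs++ > SKBS_PER_CALL_LIMIT)
                        return RET_REQUEUE;     /* budget spent */
                (*processed)++;
        }
        return 0;                               /* drained within budget */
}
```

A slow sender drains within budget and returns 0; a fast sender trips the limit and the caller requeues, which bounds the latency of each invocation.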
Signed-off-by: Pavel Begunkov
Signed-off-by: David Wei
Reviewed-by: Jens Axboe
---
 io_uring/net.c  |  5 ++++-
 io_uring/zcrx.c | 17 ++++++++++++++---
 io_uring/zcrx.h |  6 ++++--
 3 files changed, 22 insertions(+), 6 deletions(-)

diff --git a/io_uring/net.c b/io_uring/net.c
index 9716ecdcb570..27966dfa2938 100644
--- a/io_uring/net.c
+++ b/io_uring/net.c
@@ -1255,10 +1255,13 @@ int io_recvzc(struct io_kiocb *req, unsigned int issue_flags)
 	if (unlikely(!sock))
 		return -ENOTSOCK;
 
-	ret = io_zcrx_recv(req, zc->ifq, sock, zc->msg_flags | MSG_DONTWAIT);
+	ret = io_zcrx_recv(req, zc->ifq, sock, zc->msg_flags | MSG_DONTWAIT,
+			   issue_flags);
 	if (unlikely(ret <= 0) && ret != -EAGAIN) {
 		if (ret == -ERESTARTSYS)
 			ret = -EINTR;
+		if (ret == IOU_REQUEUE)
+			return IOU_REQUEUE;
 
 		req_set_fail(req);
 		io_req_set_res(req, ret, 0);
diff --git a/io_uring/zcrx.c b/io_uring/zcrx.c
index 1f4db70e3370..a2c753e8e46e 100644
--- a/io_uring/zcrx.c
+++ b/io_uring/zcrx.c
@@ -24,10 +24,13 @@
 
 #define IO_RQ_MAX_ENTRIES		32768
 
+#define IO_SKBS_PER_CALL_LIMIT		20
+
 struct io_zcrx_args {
 	struct io_kiocb		*req;
 	struct io_zcrx_ifq	*ifq;
 	struct socket		*sock;
+	unsigned		nr_skbs;
 };
 
 struct io_zc_refill_data {
@@ -701,6 +704,9 @@ io_zcrx_recv_skb(read_descriptor_t *desc, struct sk_buff *skb,
 	int i, copy, end, off;
 	int ret = 0;
 
+	if (unlikely(args->nr_skbs++ > IO_SKBS_PER_CALL_LIMIT))
+		return -EAGAIN;
+
 	if (unlikely(offset < skb_headlen(skb))) {
 		ssize_t copied;
 		size_t to_copy;
@@ -778,7 +784,8 @@ io_zcrx_recv_skb(read_descriptor_t *desc, struct sk_buff *skb,
 }
 
 static int io_zcrx_tcp_recvmsg(struct io_kiocb *req, struct io_zcrx_ifq *ifq,
-			       struct sock *sk, int flags)
+			       struct sock *sk, int flags,
+			       unsigned int issue_flags)
 {
 	struct io_zcrx_args args = {
 		.req = req,
@@ -804,6 +811,9 @@ static int io_zcrx_tcp_recvmsg(struct io_kiocb *req, struct io_zcrx_ifq *ifq,
 			ret = -ENOTCONN;
 		else
 			ret = -EAGAIN;
+	} else if (unlikely(args.nr_skbs > IO_SKBS_PER_CALL_LIMIT) &&
+		   (issue_flags & IO_URING_F_MULTISHOT)) {
+		ret = IOU_REQUEUE;
 	} else if (sock_flag(sk, SOCK_DONE)) {
 		/* Make it to retry until it finally gets 0. */
 		ret = -EAGAIN;
@@ -814,7 +824,8 @@ static int io_zcrx_tcp_recvmsg(struct io_kiocb *req, struct io_zcrx_ifq *ifq,
 }
 
 int io_zcrx_recv(struct io_kiocb *req, struct io_zcrx_ifq *ifq,
-		 struct socket *sock, unsigned int flags)
+		 struct socket *sock, unsigned int flags,
+		 unsigned int issue_flags)
 {
 	struct sock *sk = sock->sk;
 	const struct proto *prot = READ_ONCE(sk->sk_prot);
@@ -823,5 +834,5 @@ int io_zcrx_recv(struct io_kiocb *req, struct io_zcrx_ifq *ifq,
 		return -EPROTONOSUPPORT;
 
 	sock_rps_record_flow(sk);
-	return io_zcrx_tcp_recvmsg(req, ifq, sk, flags);
+	return io_zcrx_tcp_recvmsg(req, ifq, sk, flags, issue_flags);
 }
diff --git a/io_uring/zcrx.h b/io_uring/zcrx.h
index 5d7920972e95..45485bdce61a 100644
--- a/io_uring/zcrx.h
+++ b/io_uring/zcrx.h
@@ -48,7 +48,8 @@ int io_register_zcrx_ifq(struct io_ring_ctx *ctx,
 void io_unregister_zcrx_ifqs(struct io_ring_ctx *ctx);
 void io_shutdown_zcrx_ifqs(struct io_ring_ctx *ctx);
 int io_zcrx_recv(struct io_kiocb *req, struct io_zcrx_ifq *ifq,
-		 struct socket *sock, unsigned int flags);
+		 struct socket *sock, unsigned int flags,
+		 unsigned int issue_flags);
 #else
 static inline int io_register_zcrx_ifq(struct io_ring_ctx *ctx,
 				       struct io_uring_zcrx_ifq_reg __user *arg)
@@ -62,7 +63,8 @@ static inline void io_shutdown_zcrx_ifqs(struct io_ring_ctx *ctx)
 {
 }
 static inline int io_zcrx_recv(struct io_kiocb *req, struct io_zcrx_ifq *ifq,
-			       struct socket *sock, unsigned int flags)
+			       struct socket *sock, unsigned int flags,
+			       unsigned int issue_flags)
 {
 	return -EOPNOTSUPP;
 }