From patchwork Wed Dec 18 00:35:29 2024
X-Patchwork-Submitter: David Wei
X-Patchwork-Id: 13912787
From: David Wei
To: io-uring@vger.kernel.org, netdev@vger.kernel.org
Cc: Jens Axboe, Pavel Begunkov, Jakub Kicinski, Paolo Abeni,
    "David S. Miller", Eric Dumazet, Jesper Dangaard Brouer,
    David Ahern, Mina Almasry, Stanislav Fomichev, Joe Damato,
    Pedro Tammela
Subject: [--bla-- 01/20] net: page_pool: don't cast mp param to devmem
Date: Tue, 17 Dec 2024 16:35:29 -0800
Message-ID: <20241218003549.786301-2-dw@davidwei.uk>
In-Reply-To: <20241218003549.786301-1-dw@davidwei.uk>
References: <20241218003549.786301-1-dw@davidwei.uk>

From: Pavel Begunkov

page_pool_check_memory_provider() is a generic path and shouldn't assume
anything about the actual type of the memory provider argument. It's fine
while devmem is the only provider, but cast away the devmem-specific
binding type to avoid confusion.
Signed-off-by: Pavel Begunkov
Signed-off-by: David Wei
---
 net/core/page_pool_user.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/net/core/page_pool_user.c b/net/core/page_pool_user.c
index 48335766c1bf..8d31c71bea1a 100644
--- a/net/core/page_pool_user.c
+++ b/net/core/page_pool_user.c
@@ -353,7 +353,7 @@ void page_pool_unlist(struct page_pool *pool)
 int page_pool_check_memory_provider(struct net_device *dev,
 				    struct netdev_rx_queue *rxq)
 {
-	struct net_devmem_dmabuf_binding *binding = rxq->mp_params.mp_priv;
+	void *binding = rxq->mp_params.mp_priv;
 	struct page_pool *pool;
 	struct hlist_node *n;

From patchwork Wed Dec 18 00:35:30 2024
X-Patchwork-Submitter: David Wei
X-Patchwork-Id: 13912788

From: David Wei
To: io-uring@vger.kernel.org, netdev@vger.kernel.org
Cc: Jens Axboe, Pavel Begunkov, Jakub Kicinski, Paolo Abeni,
    "David S. Miller", Eric Dumazet, Jesper Dangaard Brouer,
    David Ahern, Mina Almasry, Stanislav Fomichev, Joe Damato,
    Pedro Tammela
Subject: [--bla-- 02/20] net: prefix devmem specific helpers
Date: Tue, 17 Dec 2024 16:35:30 -0800
Message-ID: <20241218003549.786301-3-dw@davidwei.uk>
In-Reply-To: <20241218003549.786301-1-dw@davidwei.uk>
References: <20241218003549.786301-1-dw@davidwei.uk>

From: Pavel Begunkov

Add prefixes to all helpers that are specific to devmem TCP, i.e.
net_iov_binding[_id].
Reviewed-by: Mina Almasry
Signed-off-by: Pavel Begunkov
Signed-off-by: David Wei
---
 net/core/devmem.c |  2 +-
 net/core/devmem.h | 14 +++++++-------
 net/ipv4/tcp.c    |  2 +-
 3 files changed, 9 insertions(+), 9 deletions(-)

diff --git a/net/core/devmem.c b/net/core/devmem.c
index 0b6ed7525b22..5e1a05082ab8 100644
--- a/net/core/devmem.c
+++ b/net/core/devmem.c
@@ -93,7 +93,7 @@ net_devmem_alloc_dmabuf(struct net_devmem_dmabuf_binding *binding)
 
 void net_devmem_free_dmabuf(struct net_iov *niov)
 {
-	struct net_devmem_dmabuf_binding *binding = net_iov_binding(niov);
+	struct net_devmem_dmabuf_binding *binding = net_devmem_iov_binding(niov);
 	unsigned long dma_addr = net_devmem_get_dma_addr(niov);
 
 	if (WARN_ON(!gen_pool_has_addr(binding->chunk_pool, dma_addr,
diff --git a/net/core/devmem.h b/net/core/devmem.h
index 76099ef9c482..99782ddeca40 100644
--- a/net/core/devmem.h
+++ b/net/core/devmem.h
@@ -86,11 +86,16 @@ static inline unsigned int net_iov_idx(const struct net_iov *niov)
 }
 
 static inline struct net_devmem_dmabuf_binding *
-net_iov_binding(const struct net_iov *niov)
+net_devmem_iov_binding(const struct net_iov *niov)
 {
 	return net_iov_owner(niov)->binding;
 }
 
+static inline u32 net_devmem_iov_binding_id(const struct net_iov *niov)
+{
+	return net_devmem_iov_binding(niov)->id;
+}
+
 static inline unsigned long net_iov_virtual_addr(const struct net_iov *niov)
 {
 	struct dmabuf_genpool_chunk_owner *owner = net_iov_owner(niov);
@@ -99,11 +104,6 @@ static inline unsigned long net_iov_virtual_addr(const struct net_iov *niov)
 	       ((unsigned long)net_iov_idx(niov) << PAGE_SHIFT);
 }
 
-static inline u32 net_iov_binding_id(const struct net_iov *niov)
-{
-	return net_iov_owner(niov)->binding->id;
-}
-
 static inline void
 net_devmem_dmabuf_binding_get(struct net_devmem_dmabuf_binding *binding)
 {
@@ -171,7 +171,7 @@ static inline unsigned long net_iov_virtual_addr(const struct net_iov *niov)
 	return 0;
 }
 
-static inline u32 net_iov_binding_id(const struct net_iov *niov)
+static inline u32 net_devmem_iov_binding_id(const struct net_iov *niov)
 {
 	return 0;
 }
diff --git a/net/ipv4/tcp.c b/net/ipv4/tcp.c
index 0d704bda6c41..b872de9a8271 100644
--- a/net/ipv4/tcp.c
+++ b/net/ipv4/tcp.c
@@ -2494,7 +2494,7 @@ static int tcp_recvmsg_dmabuf(struct sock *sk, const struct sk_buff *skb,
 
 			/* Will perform the exchange later */
 			dmabuf_cmsg.frag_token = tcp_xa_pool.tokens[tcp_xa_pool.idx];
-			dmabuf_cmsg.dmabuf_id = net_iov_binding_id(niov);
+			dmabuf_cmsg.dmabuf_id = net_devmem_iov_binding_id(niov);
 
 			offset += copy;
 			remaining_len -= copy;

From patchwork Wed Dec 18 00:35:31 2024
X-Patchwork-Submitter: David Wei
X-Patchwork-Id: 13912789

From: David Wei
To: io-uring@vger.kernel.org, netdev@vger.kernel.org
Cc: Jens Axboe, Pavel Begunkov, Jakub Kicinski, Paolo Abeni,
    "David S. Miller", Eric Dumazet, Jesper Dangaard Brouer,
    David Ahern, Mina Almasry, Stanislav Fomichev, Joe Damato,
    Pedro Tammela
Subject: [--bla-- 03/20] net: generalise net_iov chunk owners
Date: Tue, 17 Dec 2024 16:35:31 -0800
Message-ID: <20241218003549.786301-4-dw@davidwei.uk>
In-Reply-To: <20241218003549.786301-1-dw@davidwei.uk>
References: <20241218003549.786301-1-dw@davidwei.uk>

From: Pavel Begunkov

Currently net_iov stores a pointer to struct dmabuf_genpool_chunk_owner,
which serves as a useful abstraction to share data and provide a context.
However, it's too devmem-specific, and we want to reuse it for other
memory providers, and for that we need to decouple net_iov from devmem.
Make net_iov point to a new base structure called net_iov_area, which
dmabuf_genpool_chunk_owner extends.

Reviewed-by: Mina Almasry
Signed-off-by: Pavel Begunkov
Signed-off-by: David Wei
---
 include/net/netmem.h | 21 ++++++++++++++++++++-
 net/core/devmem.c    | 25 +++++++++++++------------
 net/core/devmem.h    | 25 +++++++++----------------
 3 files changed, 42 insertions(+), 29 deletions(-)

diff --git a/include/net/netmem.h b/include/net/netmem.h
index 1b58faa4f20f..c61d5b21e7b4 100644
--- a/include/net/netmem.h
+++ b/include/net/netmem.h
@@ -24,11 +24,20 @@ struct net_iov {
 	unsigned long __unused_padding;
 	unsigned long pp_magic;
 	struct page_pool *pp;
-	struct dmabuf_genpool_chunk_owner *owner;
+	struct net_iov_area *owner;
 	unsigned long dma_addr;
 	atomic_long_t pp_ref_count;
 };
 
+struct net_iov_area {
+	/* Array of net_iovs for this area. */
+	struct net_iov *niovs;
+	size_t num_niovs;
+
+	/* Offset into the dma-buf where this chunk starts. */
+	unsigned long base_virtual;
+};
+
 /* These fields in struct page are used by the page_pool and net stack:
  *
  *	struct {
@@ -54,6 +63,16 @@ NET_IOV_ASSERT_OFFSET(dma_addr, dma_addr);
 NET_IOV_ASSERT_OFFSET(pp_ref_count, pp_ref_count);
 #undef NET_IOV_ASSERT_OFFSET
 
+static inline struct net_iov_area *net_iov_owner(const struct net_iov *niov)
+{
+	return niov->owner;
+}
+
+static inline unsigned int net_iov_idx(const struct net_iov *niov)
+{
+	return niov - net_iov_owner(niov)->niovs;
+}
+
 /* netmem */
 
 /**
diff --git a/net/core/devmem.c b/net/core/devmem.c
index 5e1a05082ab8..c250db6993d3 100644
--- a/net/core/devmem.c
+++ b/net/core/devmem.c
@@ -32,14 +32,15 @@ static void net_devmem_dmabuf_free_chunk_owner(struct gen_pool *genpool,
 {
 	struct dmabuf_genpool_chunk_owner *owner = chunk->owner;
 
-	kvfree(owner->niovs);
+	kvfree(owner->area.niovs);
 	kfree(owner);
 }
 
 static dma_addr_t net_devmem_get_dma_addr(const struct net_iov *niov)
 {
-	struct dmabuf_genpool_chunk_owner *owner = net_iov_owner(niov);
+	struct dmabuf_genpool_chunk_owner *owner;
 
+	owner = net_devmem_iov_to_chunk_owner(niov);
 	return owner->base_dma_addr +
 	       ((dma_addr_t)net_iov_idx(niov) << PAGE_SHIFT);
 }
@@ -82,7 +83,7 @@ net_devmem_alloc_dmabuf(struct net_devmem_dmabuf_binding *binding)
 	offset = dma_addr - owner->base_dma_addr;
 	index = offset / PAGE_SIZE;
-	niov = &owner->niovs[index];
+	niov = &owner->area.niovs[index];
 
 	niov->pp_magic = 0;
 	niov->pp = NULL;
@@ -250,9 +251,9 @@ net_devmem_bind_dmabuf(struct net_device *dev, unsigned int dmabuf_fd,
 			goto err_free_chunks;
 		}
 
-		owner->base_virtual = virtual;
+		owner->area.base_virtual = virtual;
 		owner->base_dma_addr = dma_addr;
-		owner->num_niovs = len / PAGE_SIZE;
+		owner->area.num_niovs = len / PAGE_SIZE;
 		owner->binding = binding;
 
 		err = gen_pool_add_owner(binding->chunk_pool, dma_addr,
@@ -264,17 +265,17 @@ net_devmem_bind_dmabuf(struct net_device *dev, unsigned int dmabuf_fd,
 			goto err_free_chunks;
 		}
 
-		owner->niovs = kvmalloc_array(owner->num_niovs,
-					      sizeof(*owner->niovs),
-					      GFP_KERNEL);
-		if (!owner->niovs) {
+		owner->area.niovs = kvmalloc_array(owner->area.num_niovs,
+						   sizeof(*owner->area.niovs),
+						   GFP_KERNEL);
+		if (!owner->area.niovs) {
 			err = -ENOMEM;
 			goto err_free_chunks;
 		}
 
-		for (i = 0; i < owner->num_niovs; i++) {
-			niov = &owner->niovs[i];
-			niov->owner = owner;
+		for (i = 0; i < owner->area.num_niovs; i++) {
+			niov = &owner->area.niovs[i];
+			niov->owner = &owner->area;
 			page_pool_set_dma_addr_netmem(net_iov_to_netmem(niov),
 						      net_devmem_get_dma_addr(niov));
 		}
diff --git a/net/core/devmem.h b/net/core/devmem.h
index 99782ddeca40..a2b9913e9a17 100644
--- a/net/core/devmem.h
+++ b/net/core/devmem.h
@@ -10,6 +10,8 @@
 #ifndef _NET_DEVMEM_H
 #define _NET_DEVMEM_H
 
+#include <net/netmem.h>
+
 struct netlink_ext_ack;
 
 struct net_devmem_dmabuf_binding {
@@ -51,17 +53,11 @@ struct net_devmem_dmabuf_binding {
  * allocations from this chunk.
  */
 struct dmabuf_genpool_chunk_owner {
-	/* Offset into the dma-buf where this chunk starts. */
-	unsigned long base_virtual;
+	struct net_iov_area area;
+	struct net_devmem_dmabuf_binding *binding;
 
 	/* dma_addr of the start of the chunk. */
 	dma_addr_t base_dma_addr;
-
-	/* Array of net_iovs for this chunk. */
-	struct net_iov *niovs;
-	size_t num_niovs;
-
-	struct net_devmem_dmabuf_binding *binding;
 };
 
 void __net_devmem_dmabuf_binding_free(struct net_devmem_dmabuf_binding *binding);
@@ -75,20 +71,17 @@ int net_devmem_bind_dmabuf_to_queue(struct net_device *dev, u32 rxq_idx,
 void dev_dmabuf_uninstall(struct net_device *dev);
 
 static inline struct dmabuf_genpool_chunk_owner *
-net_iov_owner(const struct net_iov *niov)
+net_devmem_iov_to_chunk_owner(const struct net_iov *niov)
 {
-	return niov->owner;
-}
+	struct net_iov_area *owner = net_iov_owner(niov);
 
-static inline unsigned int net_iov_idx(const struct net_iov *niov)
-{
-	return niov - net_iov_owner(niov)->niovs;
+	return container_of(owner, struct dmabuf_genpool_chunk_owner, area);
 }
 
 static inline struct net_devmem_dmabuf_binding *
 net_devmem_iov_binding(const struct net_iov *niov)
 {
-	return net_iov_owner(niov)->binding;
+	return net_devmem_iov_to_chunk_owner(niov)->binding;
 }
 
 static inline u32 net_devmem_iov_binding_id(const struct net_iov *niov)
@@ -98,7 +91,7 @@ static inline u32 net_devmem_iov_binding_id(const struct net_iov *niov)
 
 static inline unsigned long net_iov_virtual_addr(const struct net_iov *niov)
 {
-	struct dmabuf_genpool_chunk_owner *owner = net_iov_owner(niov);
+	struct net_iov_area *owner = net_iov_owner(niov);
 
 	return owner->base_virtual +
 	       ((unsigned long)net_iov_idx(niov) << PAGE_SHIFT);

From patchwork Wed Dec 18 00:35:32 2024
X-Patchwork-Submitter: David Wei
X-Patchwork-Id: 13912790
From: David Wei
To: io-uring@vger.kernel.org, netdev@vger.kernel.org
Cc: Jens Axboe, Pavel Begunkov, Jakub Kicinski, Paolo Abeni,
    "David S. Miller", Eric Dumazet, Jesper Dangaard Brouer,
    David Ahern, Mina Almasry, Stanislav Fomichev, Joe Damato,
    Pedro Tammela
Subject: [--bla-- 04/20] net: page_pool: create hooks for custom page providers
Date: Tue, 17 Dec 2024 16:35:32 -0800
Message-ID: <20241218003549.786301-5-dw@davidwei.uk>
In-Reply-To: <20241218003549.786301-1-dw@davidwei.uk>
References: <20241218003549.786301-1-dw@davidwei.uk>

From: Jakub Kicinski

Page providers which try to reuse the same pages will need to hold onto
the ref, even if the page gets released from the pool: releasing the page
from the pp just transfers the "ownership" reference from the pp to the
provider, and the provider will wait for other references to be gone
before feeding the page back into the pool.
Signed-off-by: Jakub Kicinski
Signed-off-by: Pavel Begunkov
Signed-off-by: David Wei
---
 include/net/page_pool/types.h |  9 +++++++++
 net/core/devmem.c             | 14 +++++++++++++-
 net/core/page_pool.c          | 22 ++++++++++++++--------
 3 files changed, 36 insertions(+), 9 deletions(-)

diff --git a/include/net/page_pool/types.h b/include/net/page_pool/types.h
index ed4cd114180a..d6241e8a5106 100644
--- a/include/net/page_pool/types.h
+++ b/include/net/page_pool/types.h
@@ -152,8 +152,16 @@ struct page_pool_stats {
  */
 #define PAGE_POOL_FRAG_GROUP_ALIGN	(4 * sizeof(long))
 
+struct memory_provider_ops {
+	netmem_ref (*alloc_netmems)(struct page_pool *pool, gfp_t gfp);
+	bool (*release_netmem)(struct page_pool *pool, netmem_ref netmem);
+	int (*init)(struct page_pool *pool);
+	void (*destroy)(struct page_pool *pool);
+};
+
 struct pp_memory_provider_params {
 	void *mp_priv;
+	const struct memory_provider_ops *mp_ops;
 };
 
 struct page_pool {
@@ -216,6 +224,7 @@ struct page_pool {
 	struct ptr_ring ring;
 
 	void *mp_priv;
+	const struct memory_provider_ops *mp_ops;
 
 #ifdef CONFIG_PAGE_POOL_STATS
 	/* recycle stats are per-cpu to avoid locking */
diff --git a/net/core/devmem.c b/net/core/devmem.c
index c250db6993d3..48903b7ab215 100644
--- a/net/core/devmem.c
+++ b/net/core/devmem.c
@@ -26,6 +26,8 @@
 /* Protected by rtnl_lock() */
 static DEFINE_XARRAY_FLAGS(net_devmem_dmabuf_bindings, XA_FLAGS_ALLOC1);
 
+static const struct memory_provider_ops dmabuf_devmem_ops;
+
 static void net_devmem_dmabuf_free_chunk_owner(struct gen_pool *genpool,
 					       struct gen_pool_chunk *chunk,
 					       void *not_used)
@@ -117,6 +119,7 @@ void net_devmem_unbind_dmabuf(struct net_devmem_dmabuf_binding *binding)
 		WARN_ON(rxq->mp_params.mp_priv != binding);
 
 		rxq->mp_params.mp_priv = NULL;
+		rxq->mp_params.mp_ops = NULL;
 
 		rxq_idx = get_netdev_rx_queue_index(rxq);
 
@@ -142,7 +145,7 @@ int net_devmem_bind_dmabuf_to_queue(struct net_device *dev, u32 rxq_idx,
 	}
 
 	rxq = __netif_get_rx_queue(dev, rxq_idx);
-	if (rxq->mp_params.mp_priv) {
+	if (rxq->mp_params.mp_ops) {
 		NL_SET_ERR_MSG(extack, "designated queue already memory provider bound");
 		return -EEXIST;
 	}
@@ -160,6 +163,7 @@ int net_devmem_bind_dmabuf_to_queue(struct net_device *dev, u32 rxq_idx,
 		return err;
 
 	rxq->mp_params.mp_priv = binding;
+	rxq->mp_params.mp_ops = &dmabuf_devmem_ops;
 
 	err = netdev_rx_queue_restart(dev, rxq_idx);
 	if (err)
@@ -169,6 +173,7 @@ int net_devmem_bind_dmabuf_to_queue(struct net_device *dev, u32 rxq_idx,
 
 err_xa_erase:
 	rxq->mp_params.mp_priv = NULL;
+	rxq->mp_params.mp_ops = NULL;
 	xa_erase(&binding->bound_rxqs, xa_idx);
 
 	return err;
@@ -388,3 +393,10 @@ bool mp_dmabuf_devmem_release_page(struct page_pool *pool, netmem_ref netmem)
 	/* We don't want the page pool put_page()ing our net_iovs. */
 	return false;
 }
+
+static const struct memory_provider_ops dmabuf_devmem_ops = {
+	.init = mp_dmabuf_devmem_init,
+	.destroy = mp_dmabuf_devmem_destroy,
+	.alloc_netmems = mp_dmabuf_devmem_alloc_netmems,
+	.release_netmem = mp_dmabuf_devmem_release_page,
+};
diff --git a/net/core/page_pool.c b/net/core/page_pool.c
index e07ad7315955..784a547b2ca4 100644
--- a/net/core/page_pool.c
+++ b/net/core/page_pool.c
@@ -285,13 +285,19 @@ static int page_pool_init(struct page_pool *pool,
 		rxq = __netif_get_rx_queue(pool->slow.netdev,
 					   pool->slow.queue_idx);
 		pool->mp_priv = rxq->mp_params.mp_priv;
+		pool->mp_ops = rxq->mp_params.mp_ops;
 	}
 
-	if (pool->mp_priv) {
+	if (pool->mp_ops) {
 		if (!pool->dma_map || !pool->dma_sync)
 			return -EOPNOTSUPP;
 
-		err = mp_dmabuf_devmem_init(pool);
+		if (WARN_ON(!is_kernel_rodata((unsigned long)pool->mp_ops))) {
+			err = -EFAULT;
+			goto free_ptr_ring;
+		}
+
+		err = pool->mp_ops->init(pool);
 		if (err) {
 			pr_warn("%s() mem-provider init failed %d\n", __func__,
 				err);
@@ -588,8 +594,8 @@ netmem_ref page_pool_alloc_netmems(struct page_pool *pool, gfp_t gfp)
 		return netmem;
 
 	/* Slow-path: cache empty, do real allocation */
-	if (static_branch_unlikely(&page_pool_mem_providers) && pool->mp_priv)
-		netmem = mp_dmabuf_devmem_alloc_netmems(pool, gfp);
+	if (static_branch_unlikely(&page_pool_mem_providers) && pool->mp_ops)
+		netmem = pool->mp_ops->alloc_netmems(pool, gfp);
 	else
 		netmem = __page_pool_alloc_pages_slow(pool, gfp);
 	return netmem;
@@ -680,8 +686,8 @@ void page_pool_return_page(struct page_pool *pool, netmem_ref netmem)
 	bool put;
 
 	put = true;
-	if (static_branch_unlikely(&page_pool_mem_providers) && pool->mp_priv)
-		put = mp_dmabuf_devmem_release_page(pool, netmem);
+	if (static_branch_unlikely(&page_pool_mem_providers) && pool->mp_ops)
+		put = pool->mp_ops->release_netmem(pool, netmem);
 	else
 		__page_pool_release_page_dma(pool, netmem);
 
@@ -1049,8 +1055,8 @@ static void __page_pool_destroy(struct page_pool *pool)
 	page_pool_unlist(pool);
 	page_pool_uninit(pool);
 
-	if (pool->mp_priv) {
-		mp_dmabuf_devmem_destroy(pool);
+	if (pool->mp_ops) {
+		pool->mp_ops->destroy(pool);
 		static_branch_dec(&page_pool_mem_providers);
 	}

From patchwork Wed Dec 18 00:35:33 2024
From: David Wei
To: io-uring@vger.kernel.org, netdev@vger.kernel.org
Cc: Jens Axboe, Pavel Begunkov, Jakub Kicinski, Paolo Abeni, "David S. Miller", Eric Dumazet, Jesper Dangaard Brouer, David Ahern, Mina Almasry, Stanislav Fomichev, Joe Damato, Pedro Tammela
Subject: [--bla-- 05/20] net: page_pool: add mp op for netlink reporting
Date: Tue, 17 Dec 2024 16:35:33 -0800
Message-ID: <20241218003549.786301-6-dw@davidwei.uk>
X-Mailer: git-send-email 2.43.5
In-Reply-To: <20241218003549.786301-1-dw@davidwei.uk>
References: <20241218003549.786301-1-dw@davidwei.uk>
Precedence: bulk
X-Mailing-List: io-uring@vger.kernel.org
MIME-Version: 1.0

From: Pavel Begunkov

Add a mandatory memory provider callback that reports information about
the provider over netlink.

Signed-off-by: Pavel Begunkov
Signed-off-by: David Wei
---
 include/net/page_pool/types.h | 1 +
 net/core/devmem.c             | 9 +++++++++
 net/core/page_pool_user.c     | 3 +--
 3 files changed, 11 insertions(+), 2 deletions(-)

diff --git a/include/net/page_pool/types.h b/include/net/page_pool/types.h
index d6241e8a5106..a473ea0c48c4 100644
--- a/include/net/page_pool/types.h
+++ b/include/net/page_pool/types.h
@@ -157,6 +157,7 @@ struct memory_provider_ops {
 	bool (*release_netmem)(struct page_pool *pool, netmem_ref netmem);
 	int (*init)(struct page_pool *pool);
 	void (*destroy)(struct page_pool *pool);
+	int (*nl_report)(const struct page_pool *pool, struct sk_buff *rsp);
 };
 
 struct pp_memory_provider_params {
diff --git a/net/core/devmem.c b/net/core/devmem.c
index 48903b7ab215..df51a6c312db 100644
--- a/net/core/devmem.c
+++ b/net/core/devmem.c
@@ -394,9 +394,18 @@ bool mp_dmabuf_devmem_release_page(struct page_pool *pool, netmem_ref netmem)
 	return false;
 }
 
+static int mp_dmabuf_devmem_nl_report(const struct page_pool *pool,
+				      struct sk_buff *rsp)
+{
+	const struct net_devmem_dmabuf_binding *binding = pool->mp_priv;
+
+	return nla_put_u32(rsp, NETDEV_A_PAGE_POOL_DMABUF, binding->id);
+}
+
 static const struct memory_provider_ops dmabuf_devmem_ops = {
 	.init = mp_dmabuf_devmem_init,
 	.destroy = mp_dmabuf_devmem_destroy,
 	.alloc_netmems = mp_dmabuf_devmem_alloc_netmems,
 	.release_netmem = mp_dmabuf_devmem_release_page,
+	.nl_report = mp_dmabuf_devmem_nl_report,
 };
diff --git a/net/core/page_pool_user.c b/net/core/page_pool_user.c
index 8d31c71bea1a..61212f388bc8 100644
--- a/net/core/page_pool_user.c
+++ b/net/core/page_pool_user.c
@@ -214,7 +214,6 @@ static int
 page_pool_nl_fill(struct sk_buff *rsp, const struct page_pool *pool,
 		  const struct genl_info *info)
 {
-	struct net_devmem_dmabuf_binding *binding = pool->mp_priv;
 	size_t inflight, refsz;
 	void *hdr;
 
@@ -244,7 +243,7 @@ page_pool_nl_fill(struct sk_buff *rsp, const struct page_pool *pool,
 			    pool->user.detach_time))
 		goto err_cancel;
 
-	if (binding && nla_put_u32(rsp, NETDEV_A_PAGE_POOL_DMABUF, binding->id))
+	if (pool->mp_ops && pool->mp_ops->nl_report(pool, rsp))
 		goto err_cancel;
 
 	genlmsg_end(rsp, hdr);
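The shape of that change — a generic fill path invoking a per-provider report hook instead of peeking at provider-private state — can be sketched in userspace C. This is a toy model with hypothetical names; a snprintf buffer stands in for the netlink response message:

```c
#include <stdio.h>
#include <string.h>

/* Userspace sketch (illustrative names only) of the nl_report pattern:
 * each provider formats its own attribute via a callback, so the
 * generic fill function needs no provider-specific knowledge. */

struct pool;

struct provider_ops {
	/* Nonzero return signals a fill failure, like nla_put_u32(). */
	int (*report)(const struct pool *p, char *buf, size_t len);
};

struct pool {
	const struct provider_ops *ops;
	unsigned int binding_id; /* provider-private in the real code */
};

static int dmabuf_report(const struct pool *p, char *buf, size_t len)
{
	return snprintf(buf, len, "dmabuf id=%u", p->binding_id) < 0;
}

const struct provider_ops dmabuf_ops = { .report = dmabuf_report };

/* Generic fill path: dispatches through the hook if one is bound. */
int pool_fill(const struct pool *p, char *buf, size_t len)
{
	buf[0] = '\0';
	if (p->ops && p->ops->report(p, buf, len))
		return -1;
	return 0;
}
```

Binding a different provider only requires supplying a different ops table; pool_fill never changes, which is the maintainability win the patch is after.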