From patchwork Wed Jan 11 04:21:49 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Matthew Wilcox (Oracle)" X-Patchwork-Id: 13095968 X-Patchwork-Delegate: kuba@kernel.org Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 61F0AC46467 for ; Wed, 11 Jan 2023 04:23:07 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S235542AbjAKEXE (ORCPT ); Tue, 10 Jan 2023 23:23:04 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:60840 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S231478AbjAKEWW (ORCPT ); Tue, 10 Jan 2023 23:22:22 -0500 Received: from casper.infradead.org (casper.infradead.org [IPv6:2001:8b0:10b:1236::1]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id E1505657F for ; Tue, 10 Jan 2023 20:22:21 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version: References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To: Content-Type:Content-ID:Content-Description; bh=I7aASeW2bakt0a8pxVeEBiqLf4GO7WCoG44w/epo15A=; b=uSu5hoIgzDuHOgIJLcq8PNEDzt SDNRMpkVSmukNDELlT4wmw0nN8d0STryyMwkA+yq08u3WIKobL5OUpuLgNQR0ejpOVFDPXrN+cukw Bm/Nic1p/cRl8RdUNGMt349f4EXBtNG00qE5v8infB+/jfzTcKycMNzKUFkFBunH3rdEgsNk+fDXK 09RberagCyyu0kXSP+W4OOR/dNGz6yOgAXQ6bEXiuuCGMGP3GXkJpaV/AYh+XGR8iUzJ/PV2hk4kp JnVjwSQBBKGA46RWkMdbc4n3dEwzedYAFKtaDLtzvg7xNdGPeuc3X62IUi8mQ+3NcmmHrgxoUdwls NmEHsHIQ==; Received: from willy by casper.infradead.org with local (Exim 4.94.2 #2 (Red Hat Linux)) id 1pFScy-003nxi-8L; Wed, 11 Jan 2023 04:22:16 +0000 From: "Matthew Wilcox (Oracle)" To: Jesper Dangaard Brouer , Ilias Apalodimas Cc: "Matthew Wilcox (Oracle)" , netdev@vger.kernel.org, linux-mm@kvack.org, Shakeel Butt , Jesper Dangaard Brouer , Jesse Brandeburg Subject: [PATCH v3 01/26] netmem: Create new type Date: Wed, 11 Jan 2023 04:21:49 +0000 Message-Id: <20230111042214.907030-2-willy@infradead.org> X-Mailer: git-send-email 2.37.1 In-Reply-To: <20230111042214.907030-1-willy@infradead.org> References: <20230111042214.907030-1-willy@infradead.org> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: netdev@vger.kernel.org X-Patchwork-Delegate: kuba@kernel.org As part of simplifying struct page, create a new netmem type which mirrors the page_pool members in struct page. Signed-off-by: Matthew Wilcox (Oracle) Acked-by: Jesper Dangaard Brouer Acked-by: Ilias Apalodimas Reviewed-by: Jesse Brandeburg --- Documentation/networking/page_pool.rst | 5 +++ include/net/page_pool.h | 46 ++++++++++++++++++++++++++ 2 files changed, 51 insertions(+) diff --git a/Documentation/networking/page_pool.rst b/Documentation/networking/page_pool.rst index 5db8c263b0c6..2c3c81473b97 100644 --- a/Documentation/networking/page_pool.rst +++ b/Documentation/networking/page_pool.rst @@ -221,3 +221,8 @@ Driver unload /* Driver unload */ page_pool_put_full_page(page_pool, page, false); xdp_rxq_info_unreg(&xdp_rxq); + +Functions and structures +======================== + +.. 
kernel-doc:: include/net/page_pool.h diff --git a/include/net/page_pool.h b/include/net/page_pool.h index 813c93499f20..cbea4df54918 100644 --- a/include/net/page_pool.h +++ b/include/net/page_pool.h @@ -50,6 +50,52 @@ PP_FLAG_DMA_SYNC_DEV |\ PP_FLAG_PAGE_FRAG) +/** + * struct netmem - A memory allocation from a &struct page_pool. + * @flags: The same as the page flags. Do not use directly. + * @pp_magic: Magic value to avoid recycling non page_pool allocated pages. + * @pp: The page pool this netmem was allocated from. + * @dma_addr: Call netmem_get_dma_addr() to read this value. + * @dma_addr_upper: Might need to be 64-bit on 32-bit architectures. + * @pp_frag_count: For frag page support, not supported in 32-bit + * architectures with 64-bit DMA. + * @_mapcount: Do not access this member directly. + * @_refcount: Do not access this member directly. Read it using + * netmem_ref_count() and manipulate it with netmem_get() and netmem_put(). + * + * This struct overlays struct page for now. Do not modify without a + * good understanding of the issues. + */ +struct netmem { + unsigned long flags; + unsigned long pp_magic; + struct page_pool *pp; + /* private: no need to document this padding */ + unsigned long _pp_mapping_pad; /* aliases with folio->mapping */ + /* public: */ + unsigned long dma_addr; + union { + unsigned long dma_addr_upper; + atomic_long_t pp_frag_count; + }; + atomic_t _mapcount; + atomic_t _refcount; +}; + +#define NETMEM_MATCH(pg, nm) \ + static_assert(offsetof(struct page, pg) == offsetof(struct netmem, nm)) +NETMEM_MATCH(flags, flags); +NETMEM_MATCH(lru, pp_magic); +NETMEM_MATCH(pp, pp); +NETMEM_MATCH(mapping, _pp_mapping_pad); +NETMEM_MATCH(dma_addr, dma_addr); +NETMEM_MATCH(dma_addr_upper, dma_addr_upper); +NETMEM_MATCH(pp_frag_count, pp_frag_count); +NETMEM_MATCH(_mapcount, _mapcount); +NETMEM_MATCH(_refcount, _refcount); +#undef NETMEM_MATCH +static_assert(sizeof(struct netmem) <= sizeof(struct page)); + /* * Fast allocation side cache array/stack * From patchwork Wed Jan 11 04:21:50 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Matthew Wilcox (Oracle)" X-Patchwork-Id: 13095972 X-Patchwork-Delegate: kuba@kernel.org Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id AE277C46467 for ; Wed, 11 Jan 2023 04:24:33 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S235802AbjAKEYa (ORCPT ); Tue, 10 Jan 2023 23:24:30 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:34016 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S235551AbjAKEXS (ORCPT ); Tue, 10 Jan 2023 23:23:18 -0500 Received: from casper.infradead.org (casper.infradead.org [IPv6:2001:8b0:10b:1236::1]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id D727213DEE for ; Tue, 10 Jan 2023 20:22:36 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version: References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To: Content-Type:Content-ID:Content-Description; bh=cNwP0/WvOKg0Wgjxh7GnXV1CpjoOIUWMfQYwPvXk4mY=; b=geBpLgOanato67yHa3PwwMr+ps J0i+CTTY7BVztaFUBdnUJY794WUJrkIihg+9U1s8Fno1dzuBOmjnkGMtnElTWov/1Owkx3h4pPDmC 
7oRwKAxWj8vk0tBCecA+0Lt6LtDOI+MzEKIyJNvqWdQrjboBGe9rbZ6gax3F3GhxUij1ew+mEspBO e7RyNArQeoVnFrkKTa0LAOyMuRDZ6DqlBrtPiMvbIB+FVkEyv1tC2uXzIn2im7Bg2pY74ayVXnyrj V8ki2aGjasVV7gjl1jiEJtrG97Ahy2XxwdNyjrRceRETGqiM0s6wQqyaSTmyz2EzDf/K+kWECVKr4 lQlaDrAA==; Received: from willy by casper.infradead.org with local (Exim 4.94.2 #2 (Red Hat Linux)) id 1pFScy-003nxk-AX; Wed, 11 Jan 2023 04:22:16 +0000 From: "Matthew Wilcox (Oracle)" To: Jesper Dangaard Brouer , Ilias Apalodimas Cc: "Matthew Wilcox (Oracle)" , netdev@vger.kernel.org, linux-mm@kvack.org, Shakeel Butt , Jesper Dangaard Brouer , Jesse Brandeburg Subject: [PATCH v3 02/26] netmem: Add utility functions Date: Wed, 11 Jan 2023 04:21:50 +0000 Message-Id: <20230111042214.907030-3-willy@infradead.org> X-Mailer: git-send-email 2.37.1 In-Reply-To: <20230111042214.907030-1-willy@infradead.org> References: <20230111042214.907030-1-willy@infradead.org> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: netdev@vger.kernel.org X-Patchwork-Delegate: kuba@kernel.org netmem_page() is defined this way to preserve constness. page_netmem() doesn't call compound_head() because netmem users always use the head page; it does include a debugging assert to check that it's true. Signed-off-by: Matthew Wilcox (Oracle) Acked-by: Jesper Dangaard Brouer Reviewed-by: Jesse Brandeburg --- include/net/page_pool.h | 59 +++++++++++++++++++++++++++++++++++++++++ 1 file changed, 59 insertions(+) diff --git a/include/net/page_pool.h b/include/net/page_pool.h index cbea4df54918..414907e67ff6 100644 --- a/include/net/page_pool.h +++ b/include/net/page_pool.h @@ -96,6 +96,65 @@ NETMEM_MATCH(_refcount, _refcount); #undef NETMEM_MATCH static_assert(sizeof(struct netmem) <= sizeof(struct page)); +#define netmem_page(nmem) (_Generic((nmem), \ + const struct netmem *: (const struct page *)nmem, \ + struct netmem *: (struct page *)nmem)) + +static inline struct netmem *page_netmem(struct page *page) +{ + VM_BUG_ON_PAGE(PageTail(page), page); + return (struct netmem *)page; +} + +static inline unsigned long netmem_pfn(const struct netmem *nmem) +{ + return page_to_pfn(netmem_page(nmem)); +} + +static inline unsigned long netmem_nid(const struct netmem *nmem) +{ + return page_to_nid(netmem_page(nmem)); +} + +static inline struct netmem *virt_to_netmem(const void *x) +{ + return page_netmem(virt_to_head_page(x)); +} + +static inline void *netmem_to_virt(const struct netmem *nmem) +{ + return page_to_virt(netmem_page(nmem)); +} + +static inline void *netmem_address(const struct netmem *nmem) +{ + return page_address(netmem_page(nmem)); +} + +static inline int netmem_ref_count(const struct netmem *nmem) +{ + return page_ref_count(netmem_page(nmem)); +} + +static inline void netmem_get(struct netmem *nmem) +{ + struct folio *folio = (struct folio *)nmem; + + folio_get(folio); +} + +static inline void netmem_put(struct netmem *nmem) +{ + struct folio *folio = (struct folio *)nmem; + + folio_put(folio); +} + +static inline bool netmem_is_pfmemalloc(const struct netmem *nmem) +{ + return nmem->pp_magic & BIT(1); +} + /* * Fast allocation side cache array/stack * From patchwork Wed Jan 11 04:21:51 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Matthew Wilcox (Oracle)" X-Patchwork-Id: 13095971 X-Patchwork-Delegate: kuba@kernel.org Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by 
smtp.lore.kernel.org (Postfix) with ESMTP id 1C86DC46467 for ; Wed, 11 Jan 2023 04:24:07 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S229680AbjAKEYC (ORCPT ); Tue, 10 Jan 2023 23:24:02 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:32870 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S235504AbjAKEXD (ORCPT ); Tue, 10 Jan 2023 23:23:03 -0500 Received: from casper.infradead.org (casper.infradead.org [IPv6:2001:8b0:10b:1236::1]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id DBDAE13D64 for ; Tue, 10 Jan 2023 20:22:33 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version: References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To: Content-Type:Content-ID:Content-Description; bh=0azbnQLE/iS8Hxok7qIg20pGmsfVsXiZRb6KS8BjIqY=; b=QqMEVI1eVM8Ds6Qm2wr6OiPby+ 0KEYFh7xMhh675MlHV84blnuHBpci3KHmGqP0UsWlt60pEIvf00zvPrMoq27p3UlcbxtnCS0DNbOG i6L2ELQ7Kuu1u3+P2l+IuEBmJ851mjXpTmKG5MdjdeFkQepZ3CBrV1cikM9d2RRI9Qe4biGHqR5ui 0rPqddyAMr1BpWxQ3+8lwQtVgDo9GFrTf53Tt61u4Wy6MtKWOCsc8/QquQlyW2Vn8J3O2RCyghCFZ 4PSHlO9nMD3pBnqXRMWcE4DADJ28gwW7rN8cq/wTFHN35hUIgFMDMYtb06KUuoMlrE44rvOjZ7RUh wH/NcNQw==; Received: from willy by casper.infradead.org with local (Exim 4.94.2 #2 (Red Hat Linux)) id 1pFScy-003nxm-DB; Wed, 11 Jan 2023 04:22:16 +0000 From: "Matthew Wilcox (Oracle)" To: Jesper Dangaard Brouer , Ilias Apalodimas Cc: "Matthew Wilcox (Oracle)" , netdev@vger.kernel.org, linux-mm@kvack.org, Shakeel Butt , Jesper Dangaard Brouer , Jesse Brandeburg Subject: [PATCH v3 03/26] page_pool: Add netmem_set_dma_addr() and netmem_get_dma_addr() Date: Wed, 11 Jan 2023 04:21:51 +0000 Message-Id: <20230111042214.907030-4-willy@infradead.org> X-Mailer: git-send-email 2.37.1 In-Reply-To: <20230111042214.907030-1-willy@infradead.org> References: <20230111042214.907030-1-willy@infradead.org> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: netdev@vger.kernel.org X-Patchwork-Delegate: kuba@kernel.org Turn page_pool_set_dma_addr() and page_pool_get_dma_addr() into wrappers. 
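Purely for illustration, and not part of the patch: a minimal sketch of what the conversion looks like from a driver's point of view, assuming a pool created with PP_FLAG_DMA_MAP. The descriptor structure and function name here are made up.

  #include <net/page_pool.h>

  struct example_rx_desc {                /* illustrative only */
          dma_addr_t dma;
          struct netmem *nmem;
  };

  static bool example_refill_one(struct page_pool *pool,
                                 struct example_rx_desc *desc)
  {
          struct page *page = page_pool_dev_alloc_pages(pool);

          if (!page)
                  return false;

          desc->nmem = page_netmem(page);
          /* Same value page_pool_get_dma_addr(page) would have returned */
          desc->dma = netmem_get_dma_addr(desc->nmem);
          return true;
  }
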
Signed-off-by: Matthew Wilcox (Oracle) Acked-by: Jesper Dangaard Brouer Reviewed-by: Ilias Apalodimas Reviewed-by: Jesse Brandeburg --- include/net/page_pool.h | 24 ++++++++++++++++++------ 1 file changed, 18 insertions(+), 6 deletions(-) diff --git a/include/net/page_pool.h b/include/net/page_pool.h index 414907e67ff6..ff4d11d43697 100644 --- a/include/net/page_pool.h +++ b/include/net/page_pool.h @@ -449,21 +449,33 @@ static inline void page_pool_recycle_direct(struct page_pool *pool, #define PAGE_POOL_DMA_USE_PP_FRAG_COUNT \ (sizeof(dma_addr_t) > sizeof(unsigned long)) -static inline dma_addr_t page_pool_get_dma_addr(struct page *page) +static inline dma_addr_t netmem_get_dma_addr(struct netmem *nmem) { - dma_addr_t ret = page->dma_addr; + dma_addr_t ret = nmem->dma_addr; if (PAGE_POOL_DMA_USE_PP_FRAG_COUNT) - ret |= (dma_addr_t)page->dma_addr_upper << 16 << 16; + ret |= (dma_addr_t)nmem->dma_addr_upper << 16 << 16; return ret; } -static inline void page_pool_set_dma_addr(struct page *page, dma_addr_t addr) +/* Compat, remove when all users gone */ +static inline dma_addr_t page_pool_get_dma_addr(struct page *page) +{ + return netmem_get_dma_addr(page_netmem(page)); +} + +static inline void netmem_set_dma_addr(struct netmem *nmem, dma_addr_t addr) { - page->dma_addr = addr; + nmem->dma_addr = addr; if (PAGE_POOL_DMA_USE_PP_FRAG_COUNT) - page->dma_addr_upper = upper_32_bits(addr); + nmem->dma_addr_upper = upper_32_bits(addr); +} + +/* Compat, remove when all users gone */ +static inline void page_pool_set_dma_addr(struct page *page, dma_addr_t addr) +{ + netmem_set_dma_addr(page_netmem(page), addr); } static inline bool is_page_pool_compiled_in(void) From patchwork Wed Jan 11 04:21:52 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Matthew Wilcox (Oracle)" X-Patchwork-Id: 13095973 X-Patchwork-Delegate: kuba@kernel.org Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id B1D05C5479D for ; Wed, 11 Jan 2023 04:24:37 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S235669AbjAKEYd (ORCPT ); Tue, 10 Jan 2023 23:24:33 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:33998 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S235666AbjAKEXT (ORCPT ); Tue, 10 Jan 2023 23:23:19 -0500 Received: from casper.infradead.org (casper.infradead.org [IPv6:2001:8b0:10b:1236::1]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id E347313DEF for ; Tue, 10 Jan 2023 20:22:36 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version: References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To: Content-Type:Content-ID:Content-Description; bh=7UrvHMQwSHueXOfPW55RWxshiElvT2ojOuY7+1iudqA=; b=nDUgaLrkS3Lz2rgQePBM4xXWvc nySlwQKZm7EJ6PyUni6yPXKFnGKBO+q1NuWbYRPJUh5iSmhnlUgk/V4QEwTuuzZEYRoCrhgLn8cJ3 yW4Wt0FDMS9punaTdLWDhAOUszE/n6VLDCmZJ5b+oFjdcLYIxT+7CA2pFBWlq5aPWS//eQ77Yh5WL oZqWVySW+7RVU1lsizLHoexJjI9pfm0ibommCDoA3niuWrltpt8HZrYfJGeydZhzXhmXyimeozRG6 HHFJRIRGy/VkSoRWAjM0Q41XjZdJELigoiu4pnqgppv/OJcGWkGas4vrDy+gZixawISaM+TaAAtlu 2vqyALVw==; Received: from willy by casper.infradead.org with local (Exim 4.94.2 #2 (Red Hat Linux)) id 1pFScy-003nxo-Gm; Wed, 11 Jan 2023 04:22:16 
+0000 From: "Matthew Wilcox (Oracle)" To: Jesper Dangaard Brouer , Ilias Apalodimas Cc: "Matthew Wilcox (Oracle)" , netdev@vger.kernel.org, linux-mm@kvack.org, Shakeel Butt , Jesper Dangaard Brouer , Jesse Brandeburg Subject: [PATCH v3 04/26] page_pool: Convert page_pool_release_page() to page_pool_release_netmem() Date: Wed, 11 Jan 2023 04:21:52 +0000 Message-Id: <20230111042214.907030-5-willy@infradead.org> X-Mailer: git-send-email 2.37.1 In-Reply-To: <20230111042214.907030-1-willy@infradead.org> References: <20230111042214.907030-1-willy@infradead.org> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: netdev@vger.kernel.org X-Patchwork-Delegate: kuba@kernel.org Also convert page_pool_clear_pp_info() and trace_page_pool_state_release() to take a netmem. Include a wrapper for page_pool_release_page() to avoid converting all callers. Signed-off-by: Matthew Wilcox (Oracle) Acked-by: Jesper Dangaard Brouer Reviewed-by: Ilias Apalodimas Reviewed-by: Jesse Brandeburg --- include/net/page_pool.h | 15 +++++++++++---- include/trace/events/page_pool.h | 14 +++++++------- net/core/page_pool.c | 18 +++++++++--------- 3 files changed, 27 insertions(+), 20 deletions(-) diff --git a/include/net/page_pool.h b/include/net/page_pool.h index ff4d11d43697..34d47c10550e 100644 --- a/include/net/page_pool.h +++ b/include/net/page_pool.h @@ -18,7 +18,7 @@ * * API keeps track of in-flight pages, in-order to let API user know * when it is safe to dealloactor page_pool object. Thus, API users - * must make sure to call page_pool_release_page() when a page is + * must make sure to call page_pool_release_netmem() when a page is * "leaving" the page_pool. Or call page_pool_put_page() where * appropiate. For maintaining correct accounting. * @@ -354,7 +354,7 @@ struct xdp_mem_info; void page_pool_destroy(struct page_pool *pool); void page_pool_use_xdp_mem(struct page_pool *pool, void (*disconnect)(void *), struct xdp_mem_info *mem); -void page_pool_release_page(struct page_pool *pool, struct page *page); +void page_pool_release_netmem(struct page_pool *pool, struct netmem *nmem); void page_pool_put_page_bulk(struct page_pool *pool, void **data, int count); #else @@ -367,8 +367,8 @@ static inline void page_pool_use_xdp_mem(struct page_pool *pool, struct xdp_mem_info *mem) { } -static inline void page_pool_release_page(struct page_pool *pool, - struct page *page) +static inline void page_pool_release_netmem(struct page_pool *pool, + struct netmem *nmem) { } @@ -378,6 +378,13 @@ static inline void page_pool_put_page_bulk(struct page_pool *pool, void **data, } #endif +/* Compat, remove when all users gone */ +static inline void page_pool_release_page(struct page_pool *pool, + struct page *page) +{ + page_pool_release_netmem(pool, page_netmem(page)); +} + void page_pool_put_defragged_page(struct page_pool *pool, struct page *page, unsigned int dma_sync_size, bool allow_direct); diff --git a/include/trace/events/page_pool.h b/include/trace/events/page_pool.h index ca534501158b..113aad0c9e5b 100644 --- a/include/trace/events/page_pool.h +++ b/include/trace/events/page_pool.h @@ -42,26 +42,26 @@ TRACE_EVENT(page_pool_release, TRACE_EVENT(page_pool_state_release, TP_PROTO(const struct page_pool *pool, - const struct page *page, u32 release), + const struct netmem *nmem, u32 release), - TP_ARGS(pool, page, release), + TP_ARGS(pool, nmem, release), TP_STRUCT__entry( __field(const struct page_pool *, pool) - __field(const struct page *, page) + __field(const struct netmem *, nmem) __field(u32, release) __field(unsigned 
long, pfn) ), TP_fast_assign( __entry->pool = pool; - __entry->page = page; + __entry->nmem = nmem; __entry->release = release; - __entry->pfn = page_to_pfn(page); + __entry->pfn = netmem_pfn(nmem); ), - TP_printk("page_pool=%p page=%p pfn=0x%lx release=%u", - __entry->pool, __entry->page, __entry->pfn, __entry->release) + TP_printk("page_pool=%p nmem=%p pfn=0x%lx release=%u", + __entry->pool, __entry->nmem, __entry->pfn, __entry->release) ); TRACE_EVENT(page_pool_state_hold, diff --git a/net/core/page_pool.c b/net/core/page_pool.c index 9b203d8660e4..437241aba5a7 100644 --- a/net/core/page_pool.c +++ b/net/core/page_pool.c @@ -336,10 +336,10 @@ static void page_pool_set_pp_info(struct page_pool *pool, pool->p.init_callback(page, pool->p.init_arg); } -static void page_pool_clear_pp_info(struct page *page) +static void page_pool_clear_pp_info(struct netmem *nmem) { - page->pp_magic = 0; - page->pp = NULL; + nmem->pp_magic = 0; + nmem->pp = NULL; } static struct page *__page_pool_alloc_page_order(struct page_pool *pool, @@ -467,7 +467,7 @@ static s32 page_pool_inflight(struct page_pool *pool) * a regular page (that will eventually be returned to the normal * page-allocator via put_page). */ -void page_pool_release_page(struct page_pool *pool, struct page *page) +void page_pool_release_netmem(struct page_pool *pool, struct netmem *nmem) { dma_addr_t dma; int count; @@ -478,23 +478,23 @@ void page_pool_release_page(struct page_pool *pool, struct page *page) */ goto skip_dma_unmap; - dma = page_pool_get_dma_addr(page); + dma = netmem_get_dma_addr(nmem); /* When page is unmapped, it cannot be returned to our pool */ dma_unmap_page_attrs(pool->p.dev, dma, PAGE_SIZE << pool->p.order, pool->p.dma_dir, DMA_ATTR_SKIP_CPU_SYNC); - page_pool_set_dma_addr(page, 0); + netmem_set_dma_addr(nmem, 0); skip_dma_unmap: - page_pool_clear_pp_info(page); + page_pool_clear_pp_info(nmem); /* This may be the last page returned, releasing the pool, so * it is not safe to reference pool afterwards. 
*/ count = atomic_inc_return_relaxed(&pool->pages_state_release_cnt); - trace_page_pool_state_release(pool, page, count); + trace_page_pool_state_release(pool, nmem, count); } -EXPORT_SYMBOL(page_pool_release_page); +EXPORT_SYMBOL(page_pool_release_netmem); /* Return a page to the page allocator, cleaning up our state */ static void page_pool_return_page(struct page_pool *pool, struct page *page) From patchwork Wed Jan 11 04:21:53 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Matthew Wilcox (Oracle)" X-Patchwork-Id: 13095976 X-Patchwork-Delegate: kuba@kernel.org Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 16C72C46467 for ; Wed, 11 Jan 2023 04:24:52 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S230402AbjAKEYq (ORCPT ); Tue, 10 Jan 2023 23:24:46 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:34150 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S231303AbjAKEX1 (ORCPT ); Tue, 10 Jan 2023 23:23:27 -0500 Received: from casper.infradead.org (casper.infradead.org [IPv6:2001:8b0:10b:1236::1]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 67C3013DFF for ; Tue, 10 Jan 2023 20:22:37 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version: References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To: Content-Type:Content-ID:Content-Description; bh=00FqBCzZySWB3TyzUQFGpi0m9pANOQoVzxBJXXHFCBo=; b=krfKGVAQH/1m9UUn3g85XFE4ly gSm8bAkQCTgGArdKd/rqgZamAhyuHx1rZzTSOK0ajtPu9YItY4aVr/gsqtPohhZcPTOfP0bhrwGiV pNg0ykc3MiJlvhjoTxQvMTrJWsjj4/wAdhZuu98HxAR3e54aYY1tmym6kSO9mNzMZ47PsJLK4kHot JoirFZ5FqB2Z7F1vPE2znaNIC/kONHWxMDsei/OuVYuc2dulZ0LhCEw28Qu2y/9+qoWQi1lHsj7qn Ypj1Mm/V7Sv0ptgQc8QbZwpKVlFoKssNjC7mzK7YVyT40zp6kjMbyBcPi9PXkp84joNOcvRqq1MLA zEwSMCCA==; Received: from willy by casper.infradead.org with local (Exim 4.94.2 #2 (Red Hat Linux)) id 1pFScy-003nxq-Ji; Wed, 11 Jan 2023 04:22:16 +0000 From: "Matthew Wilcox (Oracle)" To: Jesper Dangaard Brouer , Ilias Apalodimas Cc: "Matthew Wilcox (Oracle)" , netdev@vger.kernel.org, linux-mm@kvack.org, Shakeel Butt , Jesper Dangaard Brouer , Jesse Brandeburg Subject: [PATCH v3 05/26] page_pool: Start using netmem in allocation path. Date: Wed, 11 Jan 2023 04:21:53 +0000 Message-Id: <20230111042214.907030-6-willy@infradead.org> X-Mailer: git-send-email 2.37.1 In-Reply-To: <20230111042214.907030-1-willy@infradead.org> References: <20230111042214.907030-1-willy@infradead.org> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: netdev@vger.kernel.org X-Patchwork-Delegate: kuba@kernel.org Convert __page_pool_alloc_page_order() and __page_pool_alloc_pages_slow() to use netmem internally. This removes a couple of calls to compound_head() that are hidden inside put_page(). Convert trace_page_pool_state_hold(), page_pool_dma_map() and page_pool_set_pp_info() to take a netmem argument. Saves 83 bytes of text in __page_pool_alloc_page_order() and 98 in __page_pool_alloc_pages_slow() for a total of 181 bytes. 
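As a stand-alone illustration of where those bytes come from (not part of the patch itself): put_page() must first resolve the head page via compound_head() before dropping the reference, whereas a netmem pointer is already known to refer to the head page, so netmem_put() goes straight to folio_put().

  #include <net/page_pool.h>

  /* Both helpers drop one reference on a page_pool page; only the
   * first pays for the hidden compound_head() lookup.
   */
  static void example_drop_ref_page(struct page *page)
  {
          put_page(page);         /* compound_head(), then folio_put() */
  }

  static void example_drop_ref_netmem(struct netmem *nmem)
  {
          netmem_put(nmem);       /* folio_put() directly */
  }
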
Signed-off-by: Matthew Wilcox (Oracle) Acked-by: Jesper Dangaard Brouer Reviewed-by: Ilias Apalodimas Reviewed-by: Jesse Brandeburg --- include/trace/events/page_pool.h | 14 +++++------ net/core/page_pool.c | 42 +++++++++++++++++--------------- 2 files changed, 29 insertions(+), 27 deletions(-) diff --git a/include/trace/events/page_pool.h b/include/trace/events/page_pool.h index 113aad0c9e5b..d1237a7ce481 100644 --- a/include/trace/events/page_pool.h +++ b/include/trace/events/page_pool.h @@ -67,26 +67,26 @@ TRACE_EVENT(page_pool_state_release, TRACE_EVENT(page_pool_state_hold, TP_PROTO(const struct page_pool *pool, - const struct page *page, u32 hold), + const struct netmem *nmem, u32 hold), - TP_ARGS(pool, page, hold), + TP_ARGS(pool, nmem, hold), TP_STRUCT__entry( __field(const struct page_pool *, pool) - __field(const struct page *, page) + __field(const struct netmem *, nmem) __field(u32, hold) __field(unsigned long, pfn) ), TP_fast_assign( __entry->pool = pool; - __entry->page = page; + __entry->nmem = nmem; __entry->hold = hold; - __entry->pfn = page_to_pfn(page); + __entry->pfn = netmem_pfn(nmem); ), - TP_printk("page_pool=%p page=%p pfn=0x%lx hold=%u", - __entry->pool, __entry->page, __entry->pfn, __entry->hold) + TP_printk("page_pool=%p netmem=%p pfn=0x%lx hold=%u", + __entry->pool, __entry->nmem, __entry->pfn, __entry->hold) ); TRACE_EVENT(page_pool_update_nid, diff --git a/net/core/page_pool.c b/net/core/page_pool.c index 437241aba5a7..4e985502c569 100644 --- a/net/core/page_pool.c +++ b/net/core/page_pool.c @@ -304,8 +304,9 @@ static void page_pool_dma_sync_for_device(struct page_pool *pool, pool->p.dma_dir); } -static bool page_pool_dma_map(struct page_pool *pool, struct page *page) +static bool page_pool_dma_map(struct page_pool *pool, struct netmem *nmem) { + struct page *page = netmem_page(nmem); dma_addr_t dma; /* Setup DMA mapping: use 'struct page' area for storing DMA-addr @@ -328,12 +329,12 @@ static bool page_pool_dma_map(struct page_pool *pool, struct page *page) } static void page_pool_set_pp_info(struct page_pool *pool, - struct page *page) + struct netmem *nmem) { - page->pp = pool; - page->pp_magic |= PP_SIGNATURE; + nmem->pp = pool; + nmem->pp_magic |= PP_SIGNATURE; if (pool->p.init_callback) - pool->p.init_callback(page, pool->p.init_arg); + pool->p.init_callback(netmem_page(nmem), pool->p.init_arg); } static void page_pool_clear_pp_info(struct netmem *nmem) @@ -345,26 +346,26 @@ static void page_pool_clear_pp_info(struct netmem *nmem) static struct page *__page_pool_alloc_page_order(struct page_pool *pool, gfp_t gfp) { - struct page *page; + struct netmem *nmem; gfp |= __GFP_COMP; - page = alloc_pages_node(pool->p.nid, gfp, pool->p.order); - if (unlikely(!page)) + nmem = page_netmem(alloc_pages_node(pool->p.nid, gfp, pool->p.order)); + if (unlikely(!nmem)) return NULL; if ((pool->p.flags & PP_FLAG_DMA_MAP) && - unlikely(!page_pool_dma_map(pool, page))) { - put_page(page); + unlikely(!page_pool_dma_map(pool, nmem))) { + netmem_put(nmem); return NULL; } alloc_stat_inc(pool, slow_high_order); - page_pool_set_pp_info(pool, page); + page_pool_set_pp_info(pool, nmem); /* Track how many pages are held 'in-flight' */ pool->pages_state_hold_cnt++; - trace_page_pool_state_hold(pool, page, pool->pages_state_hold_cnt); - return page; + trace_page_pool_state_hold(pool, nmem, pool->pages_state_hold_cnt); + return netmem_page(nmem); } /* slow path */ @@ -398,18 +399,18 @@ static struct page *__page_pool_alloc_pages_slow(struct page_pool *pool, * page element have not been 
(possibly) DMA mapped. */ for (i = 0; i < nr_pages; i++) { - page = pool->alloc.cache[i]; + struct netmem *nmem = page_netmem(pool->alloc.cache[i]); if ((pp_flags & PP_FLAG_DMA_MAP) && - unlikely(!page_pool_dma_map(pool, page))) { - put_page(page); + unlikely(!page_pool_dma_map(pool, nmem))) { + netmem_put(nmem); continue; } - page_pool_set_pp_info(pool, page); - pool->alloc.cache[pool->alloc.count++] = page; + page_pool_set_pp_info(pool, nmem); + pool->alloc.cache[pool->alloc.count++] = netmem_page(nmem); /* Track how many pages are held 'in-flight' */ pool->pages_state_hold_cnt++; - trace_page_pool_state_hold(pool, page, + trace_page_pool_state_hold(pool, nmem, pool->pages_state_hold_cnt); } @@ -421,7 +422,8 @@ static struct page *__page_pool_alloc_pages_slow(struct page_pool *pool, page = NULL; } - /* When page just alloc'ed is should/must have refcnt 1. */ + /* When page just allocated it should have refcnt 1 (but may have + * speculative references) */ return page; } From patchwork Wed Jan 11 04:21:54 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Matthew Wilcox (Oracle)" X-Patchwork-Id: 13095960 X-Patchwork-Delegate: kuba@kernel.org Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 92A21C54EBC for ; Wed, 11 Jan 2023 04:22:47 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S231350AbjAKEWo (ORCPT ); Tue, 10 Jan 2023 23:22:44 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:60706 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S231607AbjAKEWP (ORCPT ); Tue, 10 Jan 2023 23:22:15 -0500 Received: from casper.infradead.org (casper.infradead.org [IPv6:2001:8b0:10b:1236::1]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id D0C2D12621 for ; Tue, 10 Jan 2023 20:22:08 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version: References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To: Content-Type:Content-ID:Content-Description; bh=aQIazIavYV+NBVdSEy/Z4Zzxu1F3nzX6W09JzMIlN/w=; b=EXnjQTT8iPOAy0YpF8azvGs419 hmzpyj3005xEdLAGG2Ra74mk2bN22rfEB0y9WJF+CKPCz4olluQWe1VmCZ4onG6rIBHShFcCGqSCR LYyukeNkxBIcC1zBnXghIgtN7Kj4gC9kqYDqJq8czJo8r2l6Jz1W37CkkKcG9FAAJ9XzVdFajQP4l rAzqtqjz3AnaZcx4hEnHw6sX2coXtOsgEtsv/kSQP/bo2opvOr/yLKCNtwQGynDGWR6ej97S/SW7Y J/u7U6FvNIAKDWmO3+gPDGGMoa9JzU4VrdgNs0cNFTDqlF325t6HwGIUBnbIkgmi8A+mvkS+8y0sv hvedGfPg==; Received: from willy by casper.infradead.org with local (Exim 4.94.2 #2 (Red Hat Linux)) id 1pFScy-003nxs-Lr; Wed, 11 Jan 2023 04:22:16 +0000 From: "Matthew Wilcox (Oracle)" To: Jesper Dangaard Brouer , Ilias Apalodimas Cc: "Matthew Wilcox (Oracle)" , netdev@vger.kernel.org, linux-mm@kvack.org, Shakeel Butt , Jesper Dangaard Brouer , Jesse Brandeburg Subject: [PATCH v3 06/26] page_pool: Convert page_pool_return_page() to page_pool_return_netmem() Date: Wed, 11 Jan 2023 04:21:54 +0000 Message-Id: <20230111042214.907030-7-willy@infradead.org> X-Mailer: git-send-email 2.37.1 In-Reply-To: <20230111042214.907030-1-willy@infradead.org> References: <20230111042214.907030-1-willy@infradead.org> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: netdev@vger.kernel.org X-Patchwork-Delegate: kuba@kernel.org Removes a call 
to compound_head(), saving 464 bytes of kernel text as page_pool_return_page() is inlined seven times. Signed-off-by: Matthew Wilcox (Oracle) Acked-by: Jesper Dangaard Brouer Reviewed-by: Ilias Apalodimas Reviewed-by: Jesse Brandeburg --- net/core/page_pool.c | 14 ++++++++++---- 1 file changed, 10 insertions(+), 4 deletions(-) diff --git a/net/core/page_pool.c b/net/core/page_pool.c index 4e985502c569..b606952773a6 100644 --- a/net/core/page_pool.c +++ b/net/core/page_pool.c @@ -220,7 +220,13 @@ struct page_pool *page_pool_create(const struct page_pool_params *params) } EXPORT_SYMBOL(page_pool_create); -static void page_pool_return_page(struct page_pool *pool, struct page *page); +static void page_pool_return_netmem(struct page_pool *pool, struct netmem *nm); + +static inline +void page_pool_return_page(struct page_pool *pool, struct page *page) +{ + page_pool_return_netmem(pool, page_netmem(page)); +} noinline static struct page *page_pool_refill_alloc_cache(struct page_pool *pool) @@ -499,11 +505,11 @@ void page_pool_release_netmem(struct page_pool *pool, struct netmem *nmem) EXPORT_SYMBOL(page_pool_release_netmem); /* Return a page to the page allocator, cleaning up our state */ -static void page_pool_return_page(struct page_pool *pool, struct page *page) +static void page_pool_return_netmem(struct page_pool *pool, struct netmem *nmem) { - page_pool_release_page(pool, page); + page_pool_release_netmem(pool, nmem); - put_page(page); + netmem_put(nmem); /* An optimization would be to call __free_pages(page, pool->p.order) * knowing page is not part of page-cache (thus avoiding a * __page_cache_release() call). From patchwork Wed Jan 11 04:21:55 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Matthew Wilcox (Oracle)" X-Patchwork-Id: 13095969 X-Patchwork-Delegate: kuba@kernel.org Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 8655BC46467 for ; Wed, 11 Jan 2023 04:23:12 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S231520AbjAKEXH (ORCPT ); Tue, 10 Jan 2023 23:23:07 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:32884 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S234978AbjAKEWg (ORCPT ); Tue, 10 Jan 2023 23:22:36 -0500 Received: from casper.infradead.org (casper.infradead.org [IPv6:2001:8b0:10b:1236::1]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id A68A312AE3 for ; Tue, 10 Jan 2023 20:22:24 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version: References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To: Content-Type:Content-ID:Content-Description; bh=TVi+2uh0XxrIfC6B52OUoVXQFDNYl2L/xl03Bg8sz9o=; b=dXjKNEzzK6eRfENac3kSUQEKT+ WtZ+nHNgZEzK600Le9mPFPHfQl3YeaPgti4hWU8jjq5SIjI1TKQQJaxLXVtyGm2APLp0Xwf/PpA+F I6FE2/oaBUxjZ3hih/l881XyT15lbLFAQzu/EjZm4TBNYb3Ab03jrn9UsxmIvmVTewjMM+81Jl+wc T3q9pmK8rOCat0MvWWb3u09VNK1hQBYw15vv13HWIMQX4KQYO6i1P8Adx63Yi637EOdhVkEaVyxIK Lh4fVJW0FrjdmmMNHe6/Q2nCS7KcyH26jHESOYdSZUMmRAAnAQkTsjugzgsspQAIyF2PYSR6rh+tO wZ13V62w==; Received: from willy by casper.infradead.org with local (Exim 4.94.2 #2 (Red Hat Linux)) id 1pFScy-003nxu-O5; Wed, 11 Jan 2023 04:22:16 +0000 From: "Matthew Wilcox (Oracle)" To: 
Jesper Dangaard Brouer , Ilias Apalodimas Cc: "Matthew Wilcox (Oracle)" , netdev@vger.kernel.org, linux-mm@kvack.org, Shakeel Butt , Jesper Dangaard Brouer , Jesse Brandeburg Subject: [PATCH v3 07/26] page_pool: Convert __page_pool_put_page() to __page_pool_put_netmem() Date: Wed, 11 Jan 2023 04:21:55 +0000 Message-Id: <20230111042214.907030-8-willy@infradead.org> X-Mailer: git-send-email 2.37.1 In-Reply-To: <20230111042214.907030-1-willy@infradead.org> References: <20230111042214.907030-1-willy@infradead.org> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: netdev@vger.kernel.org X-Patchwork-Delegate: kuba@kernel.org Removes the call to compound_head() hidden in put_page() which saves 169 bytes of kernel text as __page_pool_put_page() is inlined twice. Signed-off-by: Matthew Wilcox (Oracle) Acked-by: Jesper Dangaard Brouer Reviewed-by: Ilias Apalodimas Reviewed-by: Jesse Brandeburg --- net/core/page_pool.c | 29 +++++++++++++++++++---------- 1 file changed, 19 insertions(+), 10 deletions(-) diff --git a/net/core/page_pool.c b/net/core/page_pool.c index b606952773a6..8f3f7cc5a2d5 100644 --- a/net/core/page_pool.c +++ b/net/core/page_pool.c @@ -558,8 +558,8 @@ static bool page_pool_recycle_in_cache(struct page *page, * If the page refcnt != 1, then the page will be returned to memory * subsystem. */ -static __always_inline struct page * -__page_pool_put_page(struct page_pool *pool, struct page *page, +static __always_inline struct netmem * +__page_pool_put_netmem(struct page_pool *pool, struct netmem *nmem, unsigned int dma_sync_size, bool allow_direct) { /* This allocator is optimized for the XDP mode that uses @@ -571,19 +571,20 @@ __page_pool_put_page(struct page_pool *pool, struct page *page, * page is NOT reusable when allocated when system is under * some pressure. (page_is_pfmemalloc) */ - if (likely(page_ref_count(page) == 1 && !page_is_pfmemalloc(page))) { - /* Read barrier done in page_ref_count / READ_ONCE */ + if (likely(netmem_ref_count(nmem) == 1 && + !netmem_is_pfmemalloc(nmem))) { + /* Read barrier done in netmem_ref_count / READ_ONCE */ if (pool->p.flags & PP_FLAG_DMA_SYNC_DEV) - page_pool_dma_sync_for_device(pool, page, + page_pool_dma_sync_for_device(pool, netmem_page(nmem), dma_sync_size); if (allow_direct && in_serving_softirq() && - page_pool_recycle_in_cache(page, pool)) + page_pool_recycle_in_cache(netmem_page(nmem), pool)) return NULL; /* Page found as candidate for recycling */ - return page; + return nmem; } /* Fallback/non-XDP mode: API user have elevated refcnt. * @@ -599,13 +600,21 @@ __page_pool_put_page(struct page_pool *pool, struct page *page, * will be invoking put_page. 
*/ recycle_stat_inc(pool, released_refcnt); - /* Do not replace this with page_pool_return_page() */ - page_pool_release_page(pool, page); - put_page(page); + /* Do not replace this with page_pool_return_netmem() */ + page_pool_release_netmem(pool, nmem); + netmem_put(nmem); return NULL; } +static __always_inline struct page * +__page_pool_put_page(struct page_pool *pool, struct page *page, + unsigned int dma_sync_size, bool allow_direct) +{ + return netmem_page(__page_pool_put_netmem(pool, page_netmem(page), + dma_sync_size, allow_direct)); +} + void page_pool_put_defragged_page(struct page_pool *pool, struct page *page, unsigned int dma_sync_size, bool allow_direct) { From patchwork Wed Jan 11 04:21:56 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Matthew Wilcox (Oracle)" X-Patchwork-Id: 13095966 X-Patchwork-Delegate: kuba@kernel.org Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 3B245C5479D for ; Wed, 11 Jan 2023 04:23:03 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S229839AbjAKEXA (ORCPT ); Tue, 10 Jan 2023 23:23:00 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:60784 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S234942AbjAKEWR (ORCPT ); Tue, 10 Jan 2023 23:22:17 -0500 Received: from casper.infradead.org (casper.infradead.org [IPv6:2001:8b0:10b:1236::1]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id DE52265EC for ; Tue, 10 Jan 2023 20:22:15 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version: References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To: Content-Type:Content-ID:Content-Description; bh=OJRuErKd0hbnopnmoI/WclHtV+StgsJye53NrCv6pzw=; b=dUcCVol0tbUmjZ5qI29AmYraGj DvRLCAJ1WN2NZ38MVOdf5hxhK2SUqwkWy1QFKCxhlMRRwUKD2kWnRGpoN3aXO21M92IQrty/BGzMC xQzBbkSWKjlFMash1rn9ydhFWlh5z4xXys4bE0bz+nuZlM7tWV62vcYpM+6UbFQuedcU1JsBrnHMh QzJB1kMiVwl0GAmgZbsXO4IVTvov7h17LSaqla+syRCgRRw12nyc5KDvM660fDz5wirQpucpl/TbQ C/e8hwhPTydgM72xYN5YTNaFrO5Qh4GnsUOyrO8j0/xyC8+MaAoFGa6I0ApB4IMAd6Qy1conj7Yx0 o8tSDkuQ==; Received: from willy by casper.infradead.org with local (Exim 4.94.2 #2 (Red Hat Linux)) id 1pFScy-003nxw-QJ; Wed, 11 Jan 2023 04:22:16 +0000 From: "Matthew Wilcox (Oracle)" To: Jesper Dangaard Brouer , Ilias Apalodimas Cc: "Matthew Wilcox (Oracle)" , netdev@vger.kernel.org, linux-mm@kvack.org, Shakeel Butt , Jesper Dangaard Brouer , Jesse Brandeburg Subject: [PATCH v3 08/26] page_pool: Convert pp_alloc_cache to contain netmem Date: Wed, 11 Jan 2023 04:21:56 +0000 Message-Id: <20230111042214.907030-9-willy@infradead.org> X-Mailer: git-send-email 2.37.1 In-Reply-To: <20230111042214.907030-1-willy@infradead.org> References: <20230111042214.907030-1-willy@infradead.org> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: netdev@vger.kernel.org X-Patchwork-Delegate: kuba@kernel.org Change the type here from page to netmem. It works out well to convert page_pool_refill_alloc_cache() to return a netmem instead of a page as part of this commit. 
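For reference only (not part of the patch): the layout asserts added in patch 1 guarantee that struct netmem overlays struct page, so an array of netmem pointers can be handed to the bulk page allocator with a plain cast, which is what the slow path relies on after this change. A hypothetical sketch, with a made-up function name and batch size:

  #include <linux/gfp.h>
  #include <linux/string.h>
  #include <net/page_pool.h>

  #define EXAMPLE_BULK 16                 /* arbitrary demo size */

  /* Fill @cache with up to EXAMPLE_BULK netmem pointers.  The array is
   * zeroed first because alloc_pages_bulk_array_node() skips slots
   * that already hold a page.
   */
  static unsigned long example_bulk_fill(int nid, struct netmem **cache)
  {
          memset(cache, 0, sizeof(*cache) * EXAMPLE_BULK);
          return alloc_pages_bulk_array_node(GFP_ATOMIC | __GFP_NOWARN,
                                             nid, EXAMPLE_BULK,
                                             (struct page **)cache);
  }
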
Signed-off-by: Matthew Wilcox (Oracle) Acked-by: Jesper Dangaard Brouer Reviewed-by: Ilias Apalodimas Reviewed-by: Jesse Brandeburg --- include/net/page_pool.h | 2 +- net/core/page_pool.c | 52 ++++++++++++++++++++--------------------- 2 files changed, 27 insertions(+), 27 deletions(-) diff --git a/include/net/page_pool.h b/include/net/page_pool.h index 34d47c10550e..583c13f6f2ab 100644 --- a/include/net/page_pool.h +++ b/include/net/page_pool.h @@ -173,7 +173,7 @@ static inline bool netmem_is_pfmemalloc(const struct netmem *nmem) #define PP_ALLOC_CACHE_REFILL 64 struct pp_alloc_cache { u32 count; - struct page *cache[PP_ALLOC_CACHE_SIZE]; + struct netmem *cache[PP_ALLOC_CACHE_SIZE]; }; struct page_pool_params { diff --git a/net/core/page_pool.c b/net/core/page_pool.c index 8f3f7cc5a2d5..c54217ce6b77 100644 --- a/net/core/page_pool.c +++ b/net/core/page_pool.c @@ -229,10 +229,10 @@ void page_pool_return_page(struct page_pool *pool, struct page *page) } noinline -static struct page *page_pool_refill_alloc_cache(struct page_pool *pool) +static struct netmem *page_pool_refill_alloc_cache(struct page_pool *pool) { struct ptr_ring *r = &pool->ring; - struct page *page; + struct netmem *nmem; int pref_nid; /* preferred NUMA node */ /* Quicker fallback, avoid locks when ring is empty */ @@ -253,49 +253,49 @@ static struct page *page_pool_refill_alloc_cache(struct page_pool *pool) /* Refill alloc array, but only if NUMA match */ do { - page = __ptr_ring_consume(r); - if (unlikely(!page)) + nmem = __ptr_ring_consume(r); + if (unlikely(!nmem)) break; - if (likely(page_to_nid(page) == pref_nid)) { - pool->alloc.cache[pool->alloc.count++] = page; + if (likely(netmem_nid(nmem) == pref_nid)) { + pool->alloc.cache[pool->alloc.count++] = nmem; } else { /* NUMA mismatch; * (1) release 1 page to page-allocator and * (2) break out to fallthrough to alloc_pages_node. * This limit stress on page buddy alloactor. */ - page_pool_return_page(pool, page); + page_pool_return_netmem(pool, nmem); alloc_stat_inc(pool, waive); - page = NULL; + nmem = NULL; break; } } while (pool->alloc.count < PP_ALLOC_CACHE_REFILL); /* Return last page */ if (likely(pool->alloc.count > 0)) { - page = pool->alloc.cache[--pool->alloc.count]; + nmem = pool->alloc.cache[--pool->alloc.count]; alloc_stat_inc(pool, refill); } - return page; + return nmem; } /* fast path */ static struct page *__page_pool_get_cached(struct page_pool *pool) { - struct page *page; + struct netmem *nmem; /* Caller MUST guarantee safe non-concurrent access, e.g. 
softirq */ if (likely(pool->alloc.count)) { /* Fast-path */ - page = pool->alloc.cache[--pool->alloc.count]; + nmem = pool->alloc.cache[--pool->alloc.count]; alloc_stat_inc(pool, fast); } else { - page = page_pool_refill_alloc_cache(pool); + nmem = page_pool_refill_alloc_cache(pool); } - return page; + return netmem_page(nmem); } static void page_pool_dma_sync_for_device(struct page_pool *pool, @@ -391,13 +391,13 @@ static struct page *__page_pool_alloc_pages_slow(struct page_pool *pool, /* Unnecessary as alloc cache is empty, but guarantees zero count */ if (unlikely(pool->alloc.count > 0)) - return pool->alloc.cache[--pool->alloc.count]; + return netmem_page(pool->alloc.cache[--pool->alloc.count]); /* Mark empty alloc.cache slots "empty" for alloc_pages_bulk_array */ memset(&pool->alloc.cache, 0, sizeof(void *) * bulk); nr_pages = alloc_pages_bulk_array_node(gfp, pool->p.nid, bulk, - pool->alloc.cache); + (struct page **)pool->alloc.cache); if (unlikely(!nr_pages)) return NULL; @@ -405,7 +405,7 @@ static struct page *__page_pool_alloc_pages_slow(struct page_pool *pool, * page element have not been (possibly) DMA mapped. */ for (i = 0; i < nr_pages; i++) { - struct netmem *nmem = page_netmem(pool->alloc.cache[i]); + struct netmem *nmem = pool->alloc.cache[i]; if ((pp_flags & PP_FLAG_DMA_MAP) && unlikely(!page_pool_dma_map(pool, nmem))) { netmem_put(nmem); @@ -413,7 +413,7 @@ static struct page *__page_pool_alloc_pages_slow(struct page_pool *pool, } page_pool_set_pp_info(pool, nmem); - pool->alloc.cache[pool->alloc.count++] = netmem_page(nmem); + pool->alloc.cache[pool->alloc.count++] = nmem; /* Track how many pages are held 'in-flight' */ pool->pages_state_hold_cnt++; trace_page_pool_state_hold(pool, nmem, @@ -422,7 +422,7 @@ static struct page *__page_pool_alloc_pages_slow(struct page_pool *pool, /* Return last page */ if (likely(pool->alloc.count > 0)) { - page = pool->alloc.cache[--pool->alloc.count]; + page = netmem_page(pool->alloc.cache[--pool->alloc.count]); alloc_stat_inc(pool, slow); } else { page = NULL; @@ -547,7 +547,7 @@ static bool page_pool_recycle_in_cache(struct page *page, } /* Caller MUST have verified/know (page_ref_count(page) == 1) */ - pool->alloc.cache[pool->alloc.count++] = page; + pool->alloc.cache[pool->alloc.count++] = page_netmem(page); recycle_stat_inc(pool, cached); return true; } @@ -785,7 +785,7 @@ static void page_pool_free(struct page_pool *pool) static void page_pool_empty_alloc_cache_once(struct page_pool *pool) { - struct page *page; + struct netmem *nmem; if (pool->destroy_cnt) return; @@ -795,8 +795,8 @@ static void page_pool_empty_alloc_cache_once(struct page_pool *pool) * call concurrently. */ while (pool->alloc.count) { - page = pool->alloc.cache[--pool->alloc.count]; - page_pool_return_page(pool, page); + nmem = pool->alloc.cache[--pool->alloc.count]; + page_pool_return_netmem(pool, nmem); } } @@ -878,15 +878,15 @@ EXPORT_SYMBOL(page_pool_destroy); /* Caller must provide appropriate safe context, e.g. NAPI. 
*/ void page_pool_update_nid(struct page_pool *pool, int new_nid) { - struct page *page; + struct netmem *nmem; trace_page_pool_update_nid(pool, new_nid); pool->p.nid = new_nid; /* Flush pool alloc cache, as refill will check NUMA node */ while (pool->alloc.count) { - page = pool->alloc.cache[--pool->alloc.count]; - page_pool_return_page(pool, page); + nmem = pool->alloc.cache[--pool->alloc.count]; + page_pool_return_netmem(pool, nmem); } } EXPORT_SYMBOL(page_pool_update_nid); From patchwork Wed Jan 11 04:21:57 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Matthew Wilcox (Oracle)" X-Patchwork-Id: 13095970 X-Patchwork-Delegate: kuba@kernel.org Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 503F1C5479D for ; Wed, 11 Jan 2023 04:23:33 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S230247AbjAKEX3 (ORCPT ); Tue, 10 Jan 2023 23:23:29 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:33064 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S233245AbjAKEWu (ORCPT ); Tue, 10 Jan 2023 23:22:50 -0500 Received: from casper.infradead.org (casper.infradead.org [IPv6:2001:8b0:10b:1236::1]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 113E513D1C for ; Tue, 10 Jan 2023 20:22:31 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version: References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To: Content-Type:Content-ID:Content-Description; bh=oJXlJV+xhbtSVawf+Q+5NVB7QM515lQScfXBhApxCL8=; b=rXlLddMW1WaOxYEqGX51ttgdPu NkyrU0uI1GkRHbtzFkqu/yrvhq81Yd+VFFuPTUs+nLug6/DlUlEh5pObp4GR4SZ3wcArtOicihz0J 1n8gMsg2B9xcVfnJIPIFnwM0zd7RGQ60pDHj9Eve2PNhiDFfnMShzDvNFOLC20ngWB21qzy/sS3Bm DFqWgBnM/vhhiryHYjNPewy45XLCOjejxOsEJ/8JurwWk3b2+d5tFrYXZDJGP0qRsbGdeizb0xZzL rFGp+5mRSh4K/R6oy6JF2coujVkTVal2S6qdtma1bAMLu2xMbWM0ZOah7W5RoeVxFRES6pDDLJ9kT uwsmNuxw==; Received: from willy by casper.infradead.org with local (Exim 4.94.2 #2 (Red Hat Linux)) id 1pFScy-003nxy-Sc; Wed, 11 Jan 2023 04:22:16 +0000 From: "Matthew Wilcox (Oracle)" To: Jesper Dangaard Brouer , Ilias Apalodimas Cc: "Matthew Wilcox (Oracle)" , netdev@vger.kernel.org, linux-mm@kvack.org, Shakeel Butt , Jesper Dangaard Brouer , Jesse Brandeburg Subject: [PATCH v3 09/26] page_pool: Convert page_pool_defrag_page() to page_pool_defrag_netmem() Date: Wed, 11 Jan 2023 04:21:57 +0000 Message-Id: <20230111042214.907030-10-willy@infradead.org> X-Mailer: git-send-email 2.37.1 In-Reply-To: <20230111042214.907030-1-willy@infradead.org> References: <20230111042214.907030-1-willy@infradead.org> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: netdev@vger.kernel.org X-Patchwork-Delegate: kuba@kernel.org Add a page_pool_defrag_page() wrapper. 
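A small usage sketch, not taken from any driver, of the accounting these helpers manipulate: a pool page split into N caller-defined fragments is biased with page_pool_fragment_page(), and each fragment release drops the bias through the new netmem-based helper; a return value of 0 means the caller just released the last fragment.

  #include <net/page_pool.h>

  static void example_split(struct netmem *nmem, long nr_frags)
  {
          page_pool_fragment_page(netmem_page(nmem), nr_frags);
  }

  /* Returns true when this was the last outstanding fragment and the
   * page may be recycled or released.
   */
  static bool example_release_frag(struct netmem *nmem)
  {
          return page_pool_defrag_netmem(nmem, 1) == 0;
  }
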
Signed-off-by: Matthew Wilcox (Oracle) Acked-by: Jesper Dangaard Brouer Reviewed-by: Ilias Apalodimas Reviewed-by: Jesse Brandeburg --- include/net/page_pool.h | 12 +++++++++--- 1 file changed, 9 insertions(+), 3 deletions(-) diff --git a/include/net/page_pool.h b/include/net/page_pool.h index 583c13f6f2ab..72e241ebed0a 100644 --- a/include/net/page_pool.h +++ b/include/net/page_pool.h @@ -394,7 +394,7 @@ static inline void page_pool_fragment_page(struct page *page, long nr) atomic_long_set(&page->pp_frag_count, nr); } -static inline long page_pool_defrag_page(struct page *page, long nr) +static inline long page_pool_defrag_netmem(struct netmem *nmem, long nr) { long ret; @@ -407,14 +407,20 @@ static inline long page_pool_defrag_page(struct page *page, long nr) * especially when dealing with a page that may be partitioned * into only 2 or 3 pieces. */ - if (atomic_long_read(&page->pp_frag_count) == nr) + if (atomic_long_read(&nmem->pp_frag_count) == nr) return 0; - ret = atomic_long_sub_return(nr, &page->pp_frag_count); + ret = atomic_long_sub_return(nr, &nmem->pp_frag_count); WARN_ON(ret < 0); return ret; } +/* Compat, remove when all users gone */ +static inline long page_pool_defrag_page(struct page *page, long nr) +{ + return page_pool_defrag_netmem(page_netmem(page), nr); +} + static inline bool page_pool_is_last_frag(struct page_pool *pool, struct page *page) { From patchwork Wed Jan 11 04:21:58 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Matthew Wilcox (Oracle)" X-Patchwork-Id: 13095962 X-Patchwork-Delegate: kuba@kernel.org Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 61B17C46467 for ; Wed, 11 Jan 2023 04:22:52 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S233103AbjAKEWt (ORCPT ); Tue, 10 Jan 2023 23:22:49 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:60762 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S231667AbjAKEWP (ORCPT ); Tue, 10 Jan 2023 23:22:15 -0500 Received: from casper.infradead.org (casper.infradead.org [IPv6:2001:8b0:10b:1236::1]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id D9AC513CF9 for ; Tue, 10 Jan 2023 20:22:12 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version: References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To: Content-Type:Content-ID:Content-Description; bh=aHqbxa3I8JJiDz9jm1RkvBoLj0lWQmpkNfdSraTtfMU=; b=hMcl3QxNj5c0tJLjsSe4Wcz5Jk ghASW92Z7wS0aiqP/ZtPeccd00n21bSeMtMJDc2DxQdb5KAFw+ZWHduxyFXFxDi5jOu/v7St1nOg3 L6DcK7VGTpUtg1orv8WB8C4uhuphRGUy9jFt7UYizUuPjMcVi7y8Yt+ahkCX+689JoIfSO9XEtBSq sLJe1ifO/2VD1V4U1P5MjAnZEFH5zqBaw/gXlpLrEi1P5LxArr2tmKBNAa6c/KVeaDrJOVbqmYhVv JmywqQjbILPqM5nxSe/CLve/4fw2cmyP4DRSTq6nN9VmXJT008OdDfkvtRSkdnUZkj84G4poOeGA+ 1xQkpgSg==; Received: from willy by casper.infradead.org with local (Exim 4.94.2 #2 (Red Hat Linux)) id 1pFScy-003ny0-VL; Wed, 11 Jan 2023 04:22:16 +0000 From: "Matthew Wilcox (Oracle)" To: Jesper Dangaard Brouer , Ilias Apalodimas Cc: "Matthew Wilcox (Oracle)" , netdev@vger.kernel.org, linux-mm@kvack.org, Shakeel Butt , Jesper Dangaard Brouer , Jesse Brandeburg Subject: [PATCH v3 10/26] page_pool: Convert page_pool_put_defragged_page() 
to netmem Date: Wed, 11 Jan 2023 04:21:58 +0000 Message-Id: <20230111042214.907030-11-willy@infradead.org> X-Mailer: git-send-email 2.37.1 In-Reply-To: <20230111042214.907030-1-willy@infradead.org> References: <20230111042214.907030-1-willy@infradead.org> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: netdev@vger.kernel.org X-Patchwork-Delegate: kuba@kernel.org Also convert page_pool_is_last_frag(), page_pool_put_page(), page_pool_recycle_in_ring() and use netmem in page_pool_put_page_bulk(). Signed-off-by: Matthew Wilcox (Oracle) Acked-by: Jesper Dangaard Brouer Reviewed-by: Jesse Brandeburg --- include/net/page_pool.h | 24 +++++++++++++++++------- net/core/page_pool.c | 29 +++++++++++++++-------------- 2 files changed, 32 insertions(+), 21 deletions(-) diff --git a/include/net/page_pool.h b/include/net/page_pool.h index 72e241ebed0a..60354e771fdd 100644 --- a/include/net/page_pool.h +++ b/include/net/page_pool.h @@ -385,7 +385,7 @@ static inline void page_pool_release_page(struct page_pool *pool, page_pool_release_netmem(pool, page_netmem(page)); } -void page_pool_put_defragged_page(struct page_pool *pool, struct page *page, +void page_pool_put_defragged_netmem(struct page_pool *pool, struct netmem *nmem, unsigned int dma_sync_size, bool allow_direct); @@ -422,15 +422,15 @@ static inline long page_pool_defrag_page(struct page *page, long nr) } static inline bool page_pool_is_last_frag(struct page_pool *pool, - struct page *page) + struct netmem *nmem) { /* If fragments aren't enabled or count is 0 we were the last user */ return !(pool->p.flags & PP_FLAG_PAGE_FRAG) || - (page_pool_defrag_page(page, 1) == 0); + (page_pool_defrag_netmem(nmem, 1) == 0); } -static inline void page_pool_put_page(struct page_pool *pool, - struct page *page, +static inline void page_pool_put_netmem(struct page_pool *pool, + struct netmem *nmem, unsigned int dma_sync_size, bool allow_direct) { @@ -438,13 +438,23 @@ static inline void page_pool_put_page(struct page_pool *pool, * allow registering MEM_TYPE_PAGE_POOL, but shield linker. 
*/ #ifdef CONFIG_PAGE_POOL - if (!page_pool_is_last_frag(pool, page)) + if (!page_pool_is_last_frag(pool, nmem)) return; - page_pool_put_defragged_page(pool, page, dma_sync_size, allow_direct); + page_pool_put_defragged_netmem(pool, nmem, dma_sync_size, allow_direct); #endif } +/* Compat, remove when all users gone */ +static inline void page_pool_put_page(struct page_pool *pool, + struct page *page, + unsigned int dma_sync_size, + bool allow_direct) +{ + page_pool_put_netmem(pool, page_netmem(page), dma_sync_size, + allow_direct); +} + /* Same as above but will try to sync the entire area pool->max_len */ static inline void page_pool_put_full_page(struct page_pool *pool, struct page *page, bool allow_direct) diff --git a/net/core/page_pool.c b/net/core/page_pool.c index c54217ce6b77..e727a74504c2 100644 --- a/net/core/page_pool.c +++ b/net/core/page_pool.c @@ -516,14 +516,15 @@ static void page_pool_return_netmem(struct page_pool *pool, struct netmem *nmem) */ } -static bool page_pool_recycle_in_ring(struct page_pool *pool, struct page *page) +static bool page_pool_recycle_in_ring(struct page_pool *pool, + struct netmem *nmem) { int ret; /* BH protection not needed if current is serving softirq */ if (in_serving_softirq()) - ret = ptr_ring_produce(&pool->ring, page); + ret = ptr_ring_produce(&pool->ring, nmem); else - ret = ptr_ring_produce_bh(&pool->ring, page); + ret = ptr_ring_produce_bh(&pool->ring, nmem); if (!ret) { recycle_stat_inc(pool, ring); @@ -615,17 +616,17 @@ __page_pool_put_page(struct page_pool *pool, struct page *page, dma_sync_size, allow_direct)); } -void page_pool_put_defragged_page(struct page_pool *pool, struct page *page, +void page_pool_put_defragged_netmem(struct page_pool *pool, struct netmem *nmem, unsigned int dma_sync_size, bool allow_direct) { - page = __page_pool_put_page(pool, page, dma_sync_size, allow_direct); - if (page && !page_pool_recycle_in_ring(pool, page)) { + nmem = __page_pool_put_netmem(pool, nmem, dma_sync_size, allow_direct); + if (nmem && !page_pool_recycle_in_ring(pool, nmem)) { /* Cache full, fallback to free pages */ recycle_stat_inc(pool, ring_full); - page_pool_return_page(pool, page); + page_pool_return_netmem(pool, nmem); } } -EXPORT_SYMBOL(page_pool_put_defragged_page); +EXPORT_SYMBOL(page_pool_put_defragged_netmem); /* Caller must not use data area after call, as this function overwrites it */ void page_pool_put_page_bulk(struct page_pool *pool, void **data, @@ -634,16 +635,16 @@ void page_pool_put_page_bulk(struct page_pool *pool, void **data, int i, bulk_len = 0; for (i = 0; i < count; i++) { - struct page *page = virt_to_head_page(data[i]); + struct netmem *nmem = virt_to_netmem(data[i]); /* It is not the last user for the page frag case */ - if (!page_pool_is_last_frag(pool, page)) + if (!page_pool_is_last_frag(pool, nmem)) continue; - page = __page_pool_put_page(pool, page, -1, false); + nmem = __page_pool_put_netmem(pool, nmem, -1, false); /* Approved for bulk recycling in ptr_ring cache */ - if (page) - data[bulk_len++] = page; + if (nmem) + data[bulk_len++] = nmem; } if (unlikely(!bulk_len)) @@ -669,7 +670,7 @@ void page_pool_put_page_bulk(struct page_pool *pool, void **data, * since put_page() with refcnt == 1 can be an expensive operation */ for (; i < bulk_len; i++) - page_pool_return_page(pool, data[i]); + page_pool_return_netmem(pool, data[i]); } EXPORT_SYMBOL(page_pool_put_page_bulk); From patchwork Wed Jan 11 04:21:59 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit 
X-Patchwork-Submitter: "Matthew Wilcox (Oracle)" X-Patchwork-Id: 13095964 X-Patchwork-Delegate: kuba@kernel.org Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id B24EEC46467 for ; Wed, 11 Jan 2023 04:22:58 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S235394AbjAKEWz (ORCPT ); Tue, 10 Jan 2023 23:22:55 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:60758 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S233237AbjAKEWP (ORCPT ); Tue, 10 Jan 2023 23:22:15 -0500 Received: from casper.infradead.org (casper.infradead.org [IPv6:2001:8b0:10b:1236::1]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 86DBA657F for ; Tue, 10 Jan 2023 20:22:11 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version: References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To: Content-Type:Content-ID:Content-Description; bh=rcz4ILIU0y+s+VzEhxoTryrk4XlkMDYpJ2m+mT7d/Ps=; b=EUgLfOhtF2Ji1vKd2kCpKbkGL8 LsLmFriJbXWCq8uAJGKzj7c3sod/O5HFYHMvO6WAvZn3T1wc4KAs+XbM6gZrxv1t8cQVcV+6K1Wes 9hNEDSCSFDvY4td4q3imcYF9Yy58ZUd3ODMJu4dDSGecxLl6BX9CHuBFoVe/Sp14xrYrfkORLvKav XBQNfLe2qUNtETVU/GkBQSpQ2zNxt0r+0FT3s7sc1kO1mwzv8XrLeFsTAXqtdptoRl0WPWEnbI0eN v0JS4BtcZ1Mz32S9AyNwzvoPDLlboEnpirvMCzt/SmOD/xKVA3XDT+5GvdvMttalJku8ZE2C4rF3O 6vVUUX6A==; Received: from willy by casper.infradead.org with local (Exim 4.94.2 #2 (Red Hat Linux)) id 1pFScz-003ny2-1J; Wed, 11 Jan 2023 04:22:17 +0000 From: "Matthew Wilcox (Oracle)" To: Jesper Dangaard Brouer , Ilias Apalodimas Cc: "Matthew Wilcox (Oracle)" , netdev@vger.kernel.org, linux-mm@kvack.org, Shakeel Butt , Jesper Dangaard Brouer , Jesse Brandeburg Subject: [PATCH v3 11/26] page_pool: Convert page_pool_empty_ring() to use netmem Date: Wed, 11 Jan 2023 04:21:59 +0000 Message-Id: <20230111042214.907030-12-willy@infradead.org> X-Mailer: git-send-email 2.37.1 In-Reply-To: <20230111042214.907030-1-willy@infradead.org> References: <20230111042214.907030-1-willy@infradead.org> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: netdev@vger.kernel.org X-Patchwork-Delegate: kuba@kernel.org Retrieve a netmem from the ptr_ring instead of a page. 
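For reference, a minimal sketch of the drain loop after this conversion (it simply mirrors the hunk below; the refcount check is the existing invariant that a cached entry holds exactly one reference):

	struct netmem *nmem;

	/* Empty the recycle ring, returning each netmem to the pool/system */
	while ((nmem = ptr_ring_consume_bh(&pool->ring)) != NULL) {
		if (netmem_ref_count(nmem) != 1)
			pr_crit("page_pool refcnt %d violation\n",
				netmem_ref_count(nmem));
		page_pool_return_netmem(pool, nmem);
	}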
Signed-off-by: Matthew Wilcox (Oracle) Acked-by: Jesper Dangaard Brouer Reviewed-by: Ilias Apalodimas Reviewed-by: Jesse Brandeburg --- net/core/page_pool.c | 10 +++++----- 1 file changed, 5 insertions(+), 5 deletions(-) diff --git a/net/core/page_pool.c b/net/core/page_pool.c index e727a74504c2..0212244e07e7 100644 --- a/net/core/page_pool.c +++ b/net/core/page_pool.c @@ -755,16 +755,16 @@ EXPORT_SYMBOL(page_pool_alloc_frag); static void page_pool_empty_ring(struct page_pool *pool) { - struct page *page; + struct netmem *nmem; /* Empty recycle ring */ - while ((page = ptr_ring_consume_bh(&pool->ring))) { + while ((nmem = ptr_ring_consume_bh(&pool->ring)) != NULL) { /* Verify the refcnt invariant of cached pages */ - if (!(page_ref_count(page) == 1)) + if (netmem_ref_count(nmem) != 1) pr_crit("%s() page_pool refcnt %d violation\n", - __func__, page_ref_count(page)); + __func__, netmem_ref_count(nmem)); - page_pool_return_page(pool, page); + page_pool_return_netmem(pool, nmem); } } From patchwork Wed Jan 11 04:22:00 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Matthew Wilcox (Oracle)" X-Patchwork-Id: 13095977 X-Patchwork-Delegate: kuba@kernel.org Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 4AD82C54EBC for ; Wed, 11 Jan 2023 04:24:54 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S233839AbjAKEYv (ORCPT ); Tue, 10 Jan 2023 23:24:51 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:33064 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S235641AbjAKEX1 (ORCPT ); Tue, 10 Jan 2023 23:23:27 -0500 Received: from casper.infradead.org (casper.infradead.org [IPv6:2001:8b0:10b:1236::1]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 0A9B813DF3 for ; Tue, 10 Jan 2023 20:22:37 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version: References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To: Content-Type:Content-ID:Content-Description; bh=wAE8NhpNO3lxLIOjixkjYKfrR/nGKcU8+ikn8c5d6n8=; b=UzF+8vJDhqL4B0bM6koWS/l6/u hzWXIx0zNexFwUr6aScedxt+DA/f60KprikkLipXg5NZ0otAVbS/IejfOK2Ucx0pWysRtXmAh89Gb 6ki3CTbAZEY/TyBG2wk675ADlYwJWV0bm07LlVvNqGYjCXnPUxLzWiZEa0xwy9MQEmzWapFC77ZNx sjmW+puVUAgwzgORJSyjZU6o50eSMyGLsLrQ8VdJZ13ucNqxGQgUCLqmXYXATqylIS7rInA748eVh KhOHhlC7xYosvBnAx6aakrBkrfmiyB6AaBGOt5qSOtir2Z1IAeSXrnpvPc+Oo0muGU0JMB3IzvQ6T 9/j61r8g==; Received: from willy by casper.infradead.org with local (Exim 4.94.2 #2 (Red Hat Linux)) id 1pFScz-003ny4-3T; Wed, 11 Jan 2023 04:22:17 +0000 From: "Matthew Wilcox (Oracle)" To: Jesper Dangaard Brouer , Ilias Apalodimas Cc: "Matthew Wilcox (Oracle)" , netdev@vger.kernel.org, linux-mm@kvack.org, Shakeel Butt , Jesper Dangaard Brouer , Jesse Brandeburg Subject: [PATCH v3 12/26] page_pool: Convert page_pool_alloc_pages() to page_pool_alloc_netmem() Date: Wed, 11 Jan 2023 04:22:00 +0000 Message-Id: <20230111042214.907030-13-willy@infradead.org> X-Mailer: git-send-email 2.37.1 In-Reply-To: <20230111042214.907030-1-willy@infradead.org> References: <20230111042214.907030-1-willy@infradead.org> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: netdev@vger.kernel.org X-Patchwork-Delegate: kuba@kernel.org Add 
wrappers for page_pool_alloc_pages() and page_pool_dev_alloc_netmem(). Also convert __page_pool_alloc_pages_slow() to __page_pool_alloc_netmem_slow() and __page_pool_alloc_page_order() to __page_pool_alloc_netmem(). __page_pool_get_cached() now returns a netmem. Signed-off-by: Matthew Wilcox (Oracle) Acked-by: Jesper Dangaard Brouer Reviewed-by: Ilias Apalodimas Reviewed-by: Jesse Brandeburg --- include/net/page_pool.h | 16 ++++++++++++++-- net/core/page_pool.c | 39 +++++++++++++++++++-------------------- 2 files changed, 33 insertions(+), 22 deletions(-) diff --git a/include/net/page_pool.h b/include/net/page_pool.h index 60354e771fdd..a568d94043af 100644 --- a/include/net/page_pool.h +++ b/include/net/page_pool.h @@ -13,7 +13,7 @@ * regular page allocator APIs. * * Basic use involve replacing alloc_pages() calls with the - * page_pool_alloc_pages() call. Drivers should likely use + * page_pool_alloc_netmem() call. Drivers should likely use * page_pool_dev_alloc_pages() replacing dev_alloc_pages(). * * API keeps track of in-flight pages, in-order to let API user know @@ -314,7 +314,19 @@ struct page_pool { u64 destroy_cnt; }; -struct page *page_pool_alloc_pages(struct page_pool *pool, gfp_t gfp); +struct netmem *page_pool_alloc_netmem(struct page_pool *pool, gfp_t gfp); + +static inline struct netmem *page_pool_dev_alloc_netmem(struct page_pool *pool) +{ + return page_pool_alloc_netmem(pool, GFP_ATOMIC | __GFP_NOWARN); +} + +/* Compat, remove when all users gone */ +static inline +struct page *page_pool_alloc_pages(struct page_pool *pool, gfp_t gfp) +{ + return netmem_page(page_pool_alloc_netmem(pool, gfp)); +} static inline struct page *page_pool_dev_alloc_pages(struct page_pool *pool) { diff --git a/net/core/page_pool.c b/net/core/page_pool.c index 0212244e07e7..c7ea487acbaa 100644 --- a/net/core/page_pool.c +++ b/net/core/page_pool.c @@ -282,7 +282,7 @@ static struct netmem *page_pool_refill_alloc_cache(struct page_pool *pool) } /* fast path */ -static struct page *__page_pool_get_cached(struct page_pool *pool) +static struct netmem *__page_pool_get_cached(struct page_pool *pool) { struct netmem *nmem; @@ -295,7 +295,7 @@ static struct page *__page_pool_get_cached(struct page_pool *pool) nmem = page_pool_refill_alloc_cache(pool); } - return netmem_page(nmem); + return nmem; } static void page_pool_dma_sync_for_device(struct page_pool *pool, @@ -349,8 +349,8 @@ static void page_pool_clear_pp_info(struct netmem *nmem) nmem->pp = NULL; } -static struct page *__page_pool_alloc_page_order(struct page_pool *pool, - gfp_t gfp) +static +struct netmem *__page_pool_alloc_netmem(struct page_pool *pool, gfp_t gfp) { struct netmem *nmem; @@ -371,27 +371,27 @@ static struct page *__page_pool_alloc_page_order(struct page_pool *pool, /* Track how many pages are held 'in-flight' */ pool->pages_state_hold_cnt++; trace_page_pool_state_hold(pool, nmem, pool->pages_state_hold_cnt); - return netmem_page(nmem); + return nmem; } /* slow path */ noinline -static struct page *__page_pool_alloc_pages_slow(struct page_pool *pool, +static struct netmem *__page_pool_alloc_netmem_slow(struct page_pool *pool, gfp_t gfp) { const int bulk = PP_ALLOC_CACHE_REFILL; unsigned int pp_flags = pool->p.flags; unsigned int pp_order = pool->p.order; - struct page *page; + struct netmem *nmem; int i, nr_pages; /* Don't support bulk alloc for high-order pages */ if (unlikely(pp_order)) - return __page_pool_alloc_page_order(pool, gfp); + return __page_pool_alloc_netmem(pool, gfp); /* Unnecessary as alloc cache is empty, but 
guarantees zero count */ if (unlikely(pool->alloc.count > 0)) - return netmem_page(pool->alloc.cache[--pool->alloc.count]); + return pool->alloc.cache[--pool->alloc.count]; /* Mark empty alloc.cache slots "empty" for alloc_pages_bulk_array */ memset(&pool->alloc.cache, 0, sizeof(void *) * bulk); @@ -422,34 +422,33 @@ static struct page *__page_pool_alloc_pages_slow(struct page_pool *pool, /* Return last page */ if (likely(pool->alloc.count > 0)) { - page = netmem_page(pool->alloc.cache[--pool->alloc.count]); + nmem = pool->alloc.cache[--pool->alloc.count]; alloc_stat_inc(pool, slow); } else { - page = NULL; + nmem = NULL; } /* When page just allocated it should have refcnt 1 (but may have * speculative references) */ - return page; + return nmem; } /* For using page_pool replace: alloc_pages() API calls, but provide * synchronization guarantee for allocation side. */ -struct page *page_pool_alloc_pages(struct page_pool *pool, gfp_t gfp) +struct netmem *page_pool_alloc_netmem(struct page_pool *pool, gfp_t gfp) { - struct page *page; + struct netmem *nmem; /* Fast-path: Get a page from cache */ - page = __page_pool_get_cached(pool); - if (page) - return page; + nmem = __page_pool_get_cached(pool); + if (nmem) + return nmem; /* Slow-path: cache empty, do real allocation */ - page = __page_pool_alloc_pages_slow(pool, gfp); - return page; + return __page_pool_alloc_netmem_slow(pool, gfp); } -EXPORT_SYMBOL(page_pool_alloc_pages); +EXPORT_SYMBOL(page_pool_alloc_netmem); /* Calculate distance between two u32 values, valid if distance is below 2^(31) * https://en.wikipedia.org/wiki/Serial_number_arithmetic#General_Solution From patchwork Wed Jan 11 04:22:01 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Matthew Wilcox (Oracle)" X-Patchwork-Id: 13095958 X-Patchwork-Delegate: kuba@kernel.org Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 81BB2C677F1 for ; Wed, 11 Jan 2023 04:22:41 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S235192AbjAKEWh (ORCPT ); Tue, 10 Jan 2023 23:22:37 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:60698 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S231656AbjAKEWP (ORCPT ); Tue, 10 Jan 2023 23:22:15 -0500 Received: from casper.infradead.org (casper.infradead.org [IPv6:2001:8b0:10b:1236::1]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id D7B68DF60 for ; Tue, 10 Jan 2023 20:22:09 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version: References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To: Content-Type:Content-ID:Content-Description; bh=BgXZXrpUA502VvYX3JzIzXVS+iE+tqqm0qCwbZsBJSk=; b=swFTqKK9y01fN9SJsO0AhCt7z1 Y9h21jqXZG6ObuJt5W3Nc0bwNb+w96822zwvB/VmyL1ffwjJIBHbtvaia8bFUeTWh+hXvfhMG7JRG bJeBCzG+7jrrvVfQBTlYo/94AB2+23eDxx8S7LkyEgHLRovbUQvIj7ndGWlYgU8yIQJDJkppTTYGf tY5Mwsop0+nN2Ss9coVpvgtm8xGr/TkoUvEbFlavt/26/7eIknqmdqQkCwIoAAguuYUjcXjFZmudg ZszSY9YSIFRcCy/dCnOND781We01OcmI20x1L9/gu2esacveDyRPVLyovigtKwX8ej7MQMtJuOKEo yl1t1Q8A==; Received: from willy by casper.infradead.org with local (Exim 4.94.2 #2 (Red Hat Linux)) id 1pFScz-003ny6-6W; Wed, 11 Jan 2023 04:22:17 +0000 From: "Matthew Wilcox (Oracle)" To: 
Jesper Dangaard Brouer , Ilias Apalodimas Cc: "Matthew Wilcox (Oracle)" , netdev@vger.kernel.org, linux-mm@kvack.org, Shakeel Butt , Jesper Dangaard Brouer , Jesse Brandeburg Subject: [PATCH v3 13/26] page_pool: Convert page_pool_dma_sync_for_device() to take a netmem Date: Wed, 11 Jan 2023 04:22:01 +0000 Message-Id: <20230111042214.907030-14-willy@infradead.org> X-Mailer: git-send-email 2.37.1 In-Reply-To: <20230111042214.907030-1-willy@infradead.org> References: <20230111042214.907030-1-willy@infradead.org> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: netdev@vger.kernel.org X-Patchwork-Delegate: kuba@kernel.org All callers converted. Signed-off-by: Matthew Wilcox (Oracle) Acked-by: Jesper Dangaard Brouer Reviewed-by: Ilias Apalodimas Reviewed-by: Jesse Brandeburg --- net/core/page_pool.c | 11 ++++++----- 1 file changed, 6 insertions(+), 5 deletions(-) diff --git a/net/core/page_pool.c b/net/core/page_pool.c index c7ea487acbaa..3fa03baa80ee 100644 --- a/net/core/page_pool.c +++ b/net/core/page_pool.c @@ -299,10 +299,10 @@ static struct netmem *__page_pool_get_cached(struct page_pool *pool) } static void page_pool_dma_sync_for_device(struct page_pool *pool, - struct page *page, + struct netmem *nmem, unsigned int dma_sync_size) { - dma_addr_t dma_addr = page_pool_get_dma_addr(page); + dma_addr_t dma_addr = netmem_get_dma_addr(nmem); dma_sync_size = min(dma_sync_size, pool->p.max_len); dma_sync_single_range_for_device(pool->p.dev, dma_addr, @@ -329,7 +329,7 @@ static bool page_pool_dma_map(struct page_pool *pool, struct netmem *nmem) page_pool_set_dma_addr(page, dma); if (pool->p.flags & PP_FLAG_DMA_SYNC_DEV) - page_pool_dma_sync_for_device(pool, page, pool->p.max_len); + page_pool_dma_sync_for_device(pool, nmem, pool->p.max_len); return true; } @@ -576,7 +576,7 @@ __page_pool_put_netmem(struct page_pool *pool, struct netmem *nmem, /* Read barrier done in netmem_ref_count / READ_ONCE */ if (pool->p.flags & PP_FLAG_DMA_SYNC_DEV) - page_pool_dma_sync_for_device(pool, netmem_page(nmem), + page_pool_dma_sync_for_device(pool, nmem, dma_sync_size); if (allow_direct && in_serving_softirq() && @@ -676,6 +676,7 @@ EXPORT_SYMBOL(page_pool_put_page_bulk); static struct page *page_pool_drain_frag(struct page_pool *pool, struct page *page) { + struct netmem *nmem = page_netmem(page); long drain_count = BIAS_MAX - pool->frag_users; /* Some user is still using the page frag */ @@ -684,7 +685,7 @@ static struct page *page_pool_drain_frag(struct page_pool *pool, if (page_ref_count(page) == 1 && !page_is_pfmemalloc(page)) { if (pool->p.flags & PP_FLAG_DMA_SYNC_DEV) - page_pool_dma_sync_for_device(pool, page, -1); + page_pool_dma_sync_for_device(pool, nmem, -1); return page; } From patchwork Wed Jan 11 04:22:02 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Matthew Wilcox (Oracle)" X-Patchwork-Id: 13095967 X-Patchwork-Delegate: kuba@kernel.org Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id DB8CCC46467 for ; Wed, 11 Jan 2023 04:23:04 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S231667AbjAKEXC (ORCPT ); Tue, 10 Jan 2023 23:23:02 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:60804 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S235003AbjAKEWU 
(ORCPT ); Tue, 10 Jan 2023 23:22:20 -0500 Received: from casper.infradead.org (casper.infradead.org [IPv6:2001:8b0:10b:1236::1]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id E5B498FCF for ; Tue, 10 Jan 2023 20:22:18 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version: References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To: Content-Type:Content-ID:Content-Description; bh=LIpyWsE1egW3rdNDJQSSceevtlMG1hljGa+CXueNLKg=; b=GqjYjjRm0EkP5JDIoTGdWf1KIB UAU1GPViWTyvFI0LpZMp1stJI3JSjksXOHpc4mRJjHrzyUj4rzyECDM4UrxrrtkO++UiyRp8w9HQ+ OkeX1ts0qLFGqFqkbILsSa+BI56pD0xEKCIrZ0NoMqvZNmuBxdXZ3HEfgQPo6wWgCHk++r341Q2Bg sh8QAh6PDCH1etDN5JvznbegdC/2l5POOzLNZlAJyklPE1pRNjWAJNgTy/+DVXaKwRwozHBaK1bi6 DbY2rpOTKxenAoeRl+N78UHMTdzEiyNohLRjwJVb22bZnrF6hZs736NYKLzAiXMsSdWHAyW1VBswU WicYJQyg==; Received: from willy by casper.infradead.org with local (Exim 4.94.2 #2 (Red Hat Linux)) id 1pFScz-003ny8-9G; Wed, 11 Jan 2023 04:22:17 +0000 From: "Matthew Wilcox (Oracle)" To: Jesper Dangaard Brouer , Ilias Apalodimas Cc: "Matthew Wilcox (Oracle)" , netdev@vger.kernel.org, linux-mm@kvack.org, Shakeel Butt , Jesper Dangaard Brouer , Jesse Brandeburg Subject: [PATCH v3 14/26] page_pool: Convert page_pool_recycle_in_cache() to netmem Date: Wed, 11 Jan 2023 04:22:02 +0000 Message-Id: <20230111042214.907030-15-willy@infradead.org> X-Mailer: git-send-email 2.37.1 In-Reply-To: <20230111042214.907030-1-willy@infradead.org> References: <20230111042214.907030-1-willy@infradead.org> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: netdev@vger.kernel.org X-Patchwork-Delegate: kuba@kernel.org Removes a few casts. Signed-off-by: Matthew Wilcox (Oracle) Acked-by: Jesper Dangaard Brouer Reviewed-by: Ilias Apalodimas Reviewed-by: Jesse Brandeburg --- net/core/page_pool.c | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-) diff --git a/net/core/page_pool.c b/net/core/page_pool.c index 3fa03baa80ee..b925a4dcb09b 100644 --- a/net/core/page_pool.c +++ b/net/core/page_pool.c @@ -538,7 +538,7 @@ static bool page_pool_recycle_in_ring(struct page_pool *pool, * * Caller must provide appropriate safe context. 
*/ -static bool page_pool_recycle_in_cache(struct page *page, +static bool page_pool_recycle_in_cache(struct netmem *nmem, struct page_pool *pool) { if (unlikely(pool->alloc.count == PP_ALLOC_CACHE_SIZE)) { @@ -547,7 +547,7 @@ static bool page_pool_recycle_in_cache(struct page *page, } /* Caller MUST have verified/know (page_ref_count(page) == 1) */ - pool->alloc.cache[pool->alloc.count++] = page_netmem(page); + pool->alloc.cache[pool->alloc.count++] = nmem; recycle_stat_inc(pool, cached); return true; } @@ -580,7 +580,7 @@ __page_pool_put_netmem(struct page_pool *pool, struct netmem *nmem, dma_sync_size); if (allow_direct && in_serving_softirq() && - page_pool_recycle_in_cache(netmem_page(nmem), pool)) + page_pool_recycle_in_cache(nmem, pool)) return NULL; /* Page found as candidate for recycling */ From patchwork Wed Jan 11 04:22:03 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Matthew Wilcox (Oracle)" X-Patchwork-Id: 13095975 X-Patchwork-Delegate: kuba@kernel.org Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id D3E4FC46467 for ; Wed, 11 Jan 2023 04:24:46 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S235839AbjAKEYk (ORCPT ); Tue, 10 Jan 2023 23:24:40 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:34142 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S235672AbjAKEX0 (ORCPT ); Tue, 10 Jan 2023 23:23:26 -0500 Received: from casper.infradead.org (casper.infradead.org [IPv6:2001:8b0:10b:1236::1]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 1DB1513DF7 for ; Tue, 10 Jan 2023 20:22:37 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version: References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To: Content-Type:Content-ID:Content-Description; bh=U72xqlqZdFtENPRRx+9rCkghzEjYY0Lo68usGx26lnQ=; b=SZcAgEFK5V6x+xk6wG+iTbrW/C ZYV9UVKgyflsHxh8K95OvNqqX8XX16of+BuJRNQSLsjbYuYlPwdFtdzOogmClk+DmLkCLXgq2a0mN yGWIuZMyTg8ZAK/HidKMe0isCU0y0i7tDYG9KTgHsTkj4rCN58jEnCZxrxNkytz1wrIJsP9KPb338 fzPOU0pmVzJhlmyljsRCab2kuspC/mZ7AmBdjvfZL5CgTZXbLempfP0zgHe2vTxte5X8bnKqlFnOI t80FAFzByJiqtw6mmdapM90Xb7S+Ey5tDnocdR+CvainRlHDYYgXMG/UOtWvH0VWFOvcQ05Aqr5m+ H2o6FVmg==; Received: from willy by casper.infradead.org with local (Exim 4.94.2 #2 (Red Hat Linux)) id 1pFScz-003nyA-C0; Wed, 11 Jan 2023 04:22:17 +0000 From: "Matthew Wilcox (Oracle)" To: Jesper Dangaard Brouer , Ilias Apalodimas Cc: "Matthew Wilcox (Oracle)" , netdev@vger.kernel.org, linux-mm@kvack.org, Shakeel Butt , Jesper Dangaard Brouer , Jesse Brandeburg Subject: [PATCH v3 15/26] page_pool: Remove __page_pool_put_page() Date: Wed, 11 Jan 2023 04:22:03 +0000 Message-Id: <20230111042214.907030-16-willy@infradead.org> X-Mailer: git-send-email 2.37.1 In-Reply-To: <20230111042214.907030-1-willy@infradead.org> References: <20230111042214.907030-1-willy@infradead.org> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: netdev@vger.kernel.org X-Patchwork-Delegate: kuba@kernel.org This wrapper is no longer used. 
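For context, all the wrapper did was convert to and from struct page around __page_pool_put_netmem(); a minimal sketch of the equivalent open-coded call inside page_pool.c (illustrative only, mirroring the body being removed below):

	/* formerly __page_pool_put_page(pool, page, dma_sync_size, allow_direct) */
	nmem = __page_pool_put_netmem(pool, page_netmem(page),
				      dma_sync_size, allow_direct);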
Signed-off-by: Matthew Wilcox (Oracle) Acked-by: Jesper Dangaard Brouer Reviewed-by: Ilias Apalodimas Reviewed-by: Jesse Brandeburg --- net/core/page_pool.c | 8 -------- 1 file changed, 8 deletions(-) diff --git a/net/core/page_pool.c b/net/core/page_pool.c index b925a4dcb09b..c495e3a16e83 100644 --- a/net/core/page_pool.c +++ b/net/core/page_pool.c @@ -607,14 +607,6 @@ __page_pool_put_netmem(struct page_pool *pool, struct netmem *nmem, return NULL; } -static __always_inline struct page * -__page_pool_put_page(struct page_pool *pool, struct page *page, - unsigned int dma_sync_size, bool allow_direct) -{ - return netmem_page(__page_pool_put_netmem(pool, page_netmem(page), - dma_sync_size, allow_direct)); -} - void page_pool_put_defragged_netmem(struct page_pool *pool, struct netmem *nmem, unsigned int dma_sync_size, bool allow_direct) { From patchwork Wed Jan 11 04:22:04 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Matthew Wilcox (Oracle)" X-Patchwork-Id: 13095956 X-Patchwork-Delegate: kuba@kernel.org Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 344D1C46467 for ; Wed, 11 Jan 2023 04:22:36 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S229781AbjAKEWb (ORCPT ); Tue, 10 Jan 2023 23:22:31 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:60706 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S229819AbjAKEWM (ORCPT ); Tue, 10 Jan 2023 23:22:12 -0500 Received: from casper.infradead.org (casper.infradead.org [IPv6:2001:8b0:10b:1236::1]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 47D0B764C for ; Tue, 10 Jan 2023 20:22:08 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version: References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To: Content-Type:Content-ID:Content-Description; bh=XRtyI2AR8pOTz2SSpH+3UHWp1L/rvdwrom4dzGOvnRw=; b=ZDskqu3zfZn3VgAYmF+JbJqtI0 94mnjg38S92TQGUUiTpG6inMfSkjNqVAYKkDQFrzJ12zJzn0vG9MwI0hHaeBDxqXozHbbv5AZmy5S 8t8LjVDAc47qXM2Oh9QcKYFLG7P7pfNKPf+tVtYEmjRQA07lzx1C9GIq+KAV7jOPScMWkIpwPLFSb y1Zq7qKFlnZbZVZYROP3p1pgntgjy9D3eYUIDrZutNjHBMuP3lkupvE2LdqnxMMkQ1ymOXXXryMIV LkTVYeU7lbllQf2h9b9+j9+Uhp7Zup2SF5Ymr70IFQIMBhK4amiHpj6qDPsm+Pl6HfGFDeHK6o5Hh 0jqE3kPQ==; Received: from willy by casper.infradead.org with local (Exim 4.94.2 #2 (Red Hat Linux)) id 1pFScz-003nyE-Eq; Wed, 11 Jan 2023 04:22:17 +0000 From: "Matthew Wilcox (Oracle)" To: Jesper Dangaard Brouer , Ilias Apalodimas Cc: "Matthew Wilcox (Oracle)" , netdev@vger.kernel.org, linux-mm@kvack.org, Shakeel Butt , Jesper Dangaard Brouer , Jesse Brandeburg Subject: [PATCH v3 16/26] page_pool: Use netmem in page_pool_drain_frag() Date: Wed, 11 Jan 2023 04:22:04 +0000 Message-Id: <20230111042214.907030-17-willy@infradead.org> X-Mailer: git-send-email 2.37.1 In-Reply-To: <20230111042214.907030-1-willy@infradead.org> References: <20230111042214.907030-1-willy@infradead.org> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: netdev@vger.kernel.org X-Patchwork-Delegate: kuba@kernel.org We're not quite ready to change the API of page_pool_drain_frag(), but we can remove the use of several wrappers by using the netmem throughout. 
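The pattern is a single translation at the top with page_netmem(), then sticking with the netmem helpers; a minimal sketch (mirroring the hunk below, with the PP_FLAG_DMA_SYNC_DEV check elided for brevity):

	struct netmem *nmem = page_netmem(page);

	if (netmem_ref_count(nmem) == 1 && !netmem_is_pfmemalloc(nmem))
		page_pool_dma_sync_for_device(pool, nmem, -1);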
Signed-off-by: Matthew Wilcox (Oracle) Acked-by: Jesper Dangaard Brouer Reviewed-by: Ilias Apalodimas Reviewed-by: Jesse Brandeburg --- net/core/page_pool.c | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-) diff --git a/net/core/page_pool.c b/net/core/page_pool.c index c495e3a16e83..cd469a9970e7 100644 --- a/net/core/page_pool.c +++ b/net/core/page_pool.c @@ -672,17 +672,17 @@ static struct page *page_pool_drain_frag(struct page_pool *pool, long drain_count = BIAS_MAX - pool->frag_users; /* Some user is still using the page frag */ - if (likely(page_pool_defrag_page(page, drain_count))) + if (likely(page_pool_defrag_netmem(nmem, drain_count))) return NULL; - if (page_ref_count(page) == 1 && !page_is_pfmemalloc(page)) { + if (netmem_ref_count(nmem) == 1 && !netmem_is_pfmemalloc(nmem)) { if (pool->p.flags & PP_FLAG_DMA_SYNC_DEV) page_pool_dma_sync_for_device(pool, nmem, -1); return page; } - page_pool_return_page(pool, page); + page_pool_return_netmem(pool, nmem); return NULL; } From patchwork Wed Jan 11 04:22:05 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Matthew Wilcox (Oracle)" X-Patchwork-Id: 13095953 X-Patchwork-Delegate: kuba@kernel.org Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 677CDC54EBC for ; Wed, 11 Jan 2023 04:22:26 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S234935AbjAKEWW (ORCPT ); Tue, 10 Jan 2023 23:22:22 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:60698 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S231308AbjAKEWK (ORCPT ); Tue, 10 Jan 2023 23:22:10 -0500 Received: from casper.infradead.org (casper.infradead.org [IPv6:2001:8b0:10b:1236::1]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 480ACE0A1 for ; Tue, 10 Jan 2023 20:22:08 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version: References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To: Content-Type:Content-ID:Content-Description; bh=kcHD+D5Tm5hJWRmWu3rBaG/FQXhUTdD3u/FWIcAC/HM=; b=TZSfhYSbRvNq1VJQ066og0F5pp X1lMd3WlHbOxeJFAiEO/trtpyRTh3ZGdB9Hgtjq3WB8/94sl2BycF5+iHE0TrJIXgn+9lLdB9JhVg Fp+viqi93tQr/z+LA97w4RjpVTd3kSsa2e1BEm6SRRHP8NlpFIK7CaD6znKzolsZ1oYneYU1ufM/b Jb7CoQ0TK6L4Xli+68eLdBW3MvcEta0mX/auV4z5IJ43LI7rCR6YjgZ69qqHATp7e+HTJ+fWcicLY rQDZqW6YH32oJu0DlVbYM7hudn6G/B3C8inqWF0lBORA4eF7/cvroaSkwf+OSzlAKUD+84TpUjNKO 3bf7XZlQ==; Received: from willy by casper.infradead.org with local (Exim 4.94.2 #2 (Red Hat Linux)) id 1pFScz-003nyW-IM; Wed, 11 Jan 2023 04:22:17 +0000 From: "Matthew Wilcox (Oracle)" To: Jesper Dangaard Brouer , Ilias Apalodimas Cc: "Matthew Wilcox (Oracle)" , netdev@vger.kernel.org, linux-mm@kvack.org, Shakeel Butt , Jesse Brandeburg Subject: [PATCH v3 17/26] page_pool: Convert page_pool_return_skb_page() to use netmem Date: Wed, 11 Jan 2023 04:22:05 +0000 Message-Id: <20230111042214.907030-18-willy@infradead.org> X-Mailer: git-send-email 2.37.1 In-Reply-To: <20230111042214.907030-1-willy@infradead.org> References: <20230111042214.907030-1-willy@infradead.org> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: netdev@vger.kernel.org X-Patchwork-Delegate: kuba@kernel.org This function accesses the pagepool 
members of struct page directly, so it needs to become netmem. Add page_pool_put_full_netmem(). Signed-off-by: Matthew Wilcox (Oracle) Reviewed-by: Jesse Brandeburg Acked-by: Jesper Dangaard Brouer --- include/net/page_pool.h | 9 ++++++++- net/core/page_pool.c | 13 ++++++------- 2 files changed, 14 insertions(+), 8 deletions(-) diff --git a/include/net/page_pool.h b/include/net/page_pool.h index a568d94043af..e205eaed21a5 100644 --- a/include/net/page_pool.h +++ b/include/net/page_pool.h @@ -468,10 +468,17 @@ static inline void page_pool_put_page(struct page_pool *pool, } /* Same as above but will try to sync the entire area pool->max_len */ +static inline void page_pool_put_full_netmem(struct page_pool *pool, + struct netmem *nmem, bool allow_direct) +{ + page_pool_put_netmem(pool, nmem, -1, allow_direct); +} + +/* Compat, remove when all users gone */ static inline void page_pool_put_full_page(struct page_pool *pool, struct page *page, bool allow_direct) { - page_pool_put_page(pool, page, -1, allow_direct); + page_pool_put_full_netmem(pool, page_netmem(page), allow_direct); } /* Same as above but the caller must guarantee safe context. e.g NAPI */ diff --git a/net/core/page_pool.c b/net/core/page_pool.c index cd469a9970e7..ddf9f2bb85f7 100644 --- a/net/core/page_pool.c +++ b/net/core/page_pool.c @@ -886,28 +886,27 @@ EXPORT_SYMBOL(page_pool_update_nid); bool page_pool_return_skb_page(struct page *page) { + struct netmem *nmem = page_netmem(compound_head(page)); struct page_pool *pp; - page = compound_head(page); - - /* page->pp_magic is OR'ed with PP_SIGNATURE after the allocation + /* nmem->pp_magic is OR'ed with PP_SIGNATURE after the allocation * in order to preserve any existing bits, such as bit 0 for the * head page of compound page and bit 1 for pfmemalloc page, so * mask those bits for freeing side when doing below checking, - * and page_is_pfmemalloc() is checked in __page_pool_put_page() + * and netmem_is_pfmemalloc() is checked in __page_pool_put_netmem() * to avoid recycling the pfmemalloc page. */ - if (unlikely((page->pp_magic & ~0x3UL) != PP_SIGNATURE)) + if (unlikely((nmem->pp_magic & ~0x3UL) != PP_SIGNATURE)) return false; - pp = page->pp; + pp = nmem->pp; /* Driver set this to memory recycling info. Reset it on recycle. * This will *not* work for NIC using a split-page memory model. * The page will be returned to the pool here regardless of the * 'flipped' fragment being in use or not. 
*/ - page_pool_put_full_page(pp, page, false); + page_pool_put_full_netmem(pp, nmem, false); return true; } From patchwork Wed Jan 11 04:22:06 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Matthew Wilcox (Oracle)" X-Patchwork-Id: 13095954 X-Patchwork-Delegate: kuba@kernel.org Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 0E508C46467 for ; Wed, 11 Jan 2023 04:22:30 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S235240AbjAKEW0 (ORCPT ); Tue, 10 Jan 2023 23:22:26 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:60678 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S231303AbjAKEWK (ORCPT ); Tue, 10 Jan 2023 23:22:10 -0500 Received: from casper.infradead.org (casper.infradead.org [IPv6:2001:8b0:10b:1236::1]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 45A5565EC for ; Tue, 10 Jan 2023 20:22:08 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version: References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To: Content-Type:Content-ID:Content-Description; bh=OBSzJ4w2B3aAKm7QVu3HQC5r9WPsQ9tzpI4Tb+ORw1s=; b=apyePiHKvq7zkmyHZHbTbkIw08 zviH/7IF8Gu1CbA8dLQus5zZBliFQeGyNSnZrXlbNolxI7/5e62h+9aMzcG7EsPpOX7JHHjc0NVGd PaeT2MKZ9IuVQZFV/KChBa4pRZUvBuL0OM4XFJoo2ali4CNO+yeohlS8ctbVEexcA68hcyzU1xUM2 QxeVzRC5R+t70nhfwwxT3pikwvstwykKVGItH9qC5WrXYfLrNsCrEGxgdUc56E3JrB/TVoSCW5UEU LT/f6JregHr6grPzcjNbj3smDAs+cp9shiepYEzghoacuOWVbicHqfSU20TErjXAE5zyekug1FmPn FlyTpmfw==; Received: from willy by casper.infradead.org with local (Exim 4.94.2 #2 (Red Hat Linux)) id 1pFScz-003nyc-Lx; Wed, 11 Jan 2023 04:22:17 +0000 From: "Matthew Wilcox (Oracle)" To: Jesper Dangaard Brouer , Ilias Apalodimas Cc: "Matthew Wilcox (Oracle)" , netdev@vger.kernel.org, linux-mm@kvack.org, Shakeel Butt , Jesse Brandeburg Subject: [PATCH v3 18/26] page_pool: Allow page_pool_recycle_direct() to take a netmem or a page Date: Wed, 11 Jan 2023 04:22:06 +0000 Message-Id: <20230111042214.907030-19-willy@infradead.org> X-Mailer: git-send-email 2.37.1 In-Reply-To: <20230111042214.907030-1-willy@infradead.org> References: <20230111042214.907030-1-willy@infradead.org> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: netdev@vger.kernel.org X-Patchwork-Delegate: kuba@kernel.org With no better name for a variant of page_pool_recycle_direct() which takes a netmem instead of a page, use _Generic() to allow it to take either a page or a netmem argument. It's a bit ugly, but maybe not the worst alternative? Signed-off-by: Matthew Wilcox (Oracle) Reviewed-by: Jesse Brandeburg Acked-by: Jesper Dangaard Brouer --- include/net/page_pool.h | 14 ++++++++++++-- 1 file changed, 12 insertions(+), 2 deletions(-) diff --git a/include/net/page_pool.h b/include/net/page_pool.h index e205eaed21a5..64ac397dcd9f 100644 --- a/include/net/page_pool.h +++ b/include/net/page_pool.h @@ -482,12 +482,22 @@ static inline void page_pool_put_full_page(struct page_pool *pool, } /* Same as above but the caller must guarantee safe context. 
e.g NAPI */ -static inline void page_pool_recycle_direct(struct page_pool *pool, +static inline void __page_pool_recycle_direct(struct page_pool *pool, + struct netmem *nmem) +{ + page_pool_put_full_netmem(pool, nmem, true); +} + +static inline void __page_pool_recycle_page_direct(struct page_pool *pool, struct page *page) { - page_pool_put_full_page(pool, page, true); + page_pool_put_full_netmem(pool, page_netmem(page), true); } +#define page_pool_recycle_direct(pool, mem) _Generic((mem), \ + struct netmem *: __page_pool_recycle_direct(pool, (struct netmem *)mem), \ + struct page *: __page_pool_recycle_page_direct(pool, (struct page *)mem)) + #define PAGE_POOL_DMA_USE_PP_FRAG_COUNT \ (sizeof(dma_addr_t) > sizeof(unsigned long)) From patchwork Wed Jan 11 04:22:07 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Matthew Wilcox (Oracle)" X-Patchwork-Id: 13095961 X-Patchwork-Delegate: kuba@kernel.org Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id B6FD0C46467 for ; Wed, 11 Jan 2023 04:22:49 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S231656AbjAKEWr (ORCPT ); Tue, 10 Jan 2023 23:22:47 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:60700 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S229914AbjAKEWP (ORCPT ); Tue, 10 Jan 2023 23:22:15 -0500 Received: from casper.infradead.org (casper.infradead.org [IPv6:2001:8b0:10b:1236::1]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id D78181262F for ; Tue, 10 Jan 2023 20:22:08 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version: References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To: Content-Type:Content-ID:Content-Description; bh=3ySWEkgCmf2GpW8kDKeRKe5QMZY/BleTQoqlmO/foAQ=; b=laM8j/vpMpDHefrvTjdg9eH3VN e4A5AWkeJ+cd7UEJQJ1F3ozE/V24RiXSO+3ZWBfq82cf2rHIF9fDVdX7l1A/msOZMPyrvybII2y3O b1yAs0HOAfk8xOc8DR6/rr3sY7xw7Mr9zwMsCkkuB4x4jwAp6SautLdWrVHFr5d7QWyov8ArlD499 a3dpnZHmN5GSD6CGQ98xwcsZjucuFU0N3TLdM7FDXR7vePRMOvpubcJnB3ZGky61UbIZ87de7iZ8O oTfwBBUyEFj+HQSA2DAikgNl4sjDHW+/niupoVQHvDrKXOhVlayP6BmmUFiN2C3kjvkAvb6pbyBTO P0QU3yHg==; Received: from willy by casper.infradead.org with local (Exim 4.94.2 #2 (Red Hat Linux)) id 1pFScz-003nyi-Pn; Wed, 11 Jan 2023 04:22:17 +0000 From: "Matthew Wilcox (Oracle)" To: Jesper Dangaard Brouer , Ilias Apalodimas Cc: "Matthew Wilcox (Oracle)" , netdev@vger.kernel.org, linux-mm@kvack.org, Shakeel Butt , Jesper Dangaard Brouer , Jesse Brandeburg Subject: [PATCH v3 19/26] page_pool: Convert frag_page to frag_nmem Date: Wed, 11 Jan 2023 04:22:07 +0000 Message-Id: <20230111042214.907030-20-willy@infradead.org> X-Mailer: git-send-email 2.37.1 In-Reply-To: <20230111042214.907030-1-willy@infradead.org> References: <20230111042214.907030-1-willy@infradead.org> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: netdev@vger.kernel.org X-Patchwork-Delegate: kuba@kernel.org Remove page_pool_defrag_page() and page_pool_return_page() as they have no more callers. 
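Driver-visible behaviour is unchanged: page_pool_dev_alloc_frag() still hands back a struct page via netmem_page(). A minimal caller-side sketch of the new netmem-returning page_pool_alloc_frag() (variable names are illustrative, not taken from any driver):

	unsigned int offset;
	struct netmem *nmem;

	nmem = page_pool_alloc_frag(pool, &offset, size,
				    GFP_ATOMIC | __GFP_NOWARN);
	if (nmem)
		dma_addr = netmem_get_dma_addr(nmem) + offset;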
Signed-off-by: Matthew Wilcox (Oracle) Acked-by: Jesper Dangaard Brouer Reviewed-by: Ilias Apalodimas Reviewed-by: Jesse Brandeburg --- include/net/page_pool.h | 18 ++++++---------- net/core/page_pool.c | 47 ++++++++++++++++++----------------------- 2 files changed, 26 insertions(+), 39 deletions(-) diff --git a/include/net/page_pool.h b/include/net/page_pool.h index 64ac397dcd9f..5e5030f5bbb4 100644 --- a/include/net/page_pool.h +++ b/include/net/page_pool.h @@ -262,7 +262,7 @@ struct page_pool { u32 pages_state_hold_cnt; unsigned int frag_offset; - struct page *frag_page; + struct netmem *frag_nmem; long frag_users; #ifdef CONFIG_PAGE_POOL_STATS @@ -335,8 +335,8 @@ static inline struct page *page_pool_dev_alloc_pages(struct page_pool *pool) return page_pool_alloc_pages(pool, gfp); } -struct page *page_pool_alloc_frag(struct page_pool *pool, unsigned int *offset, - unsigned int size, gfp_t gfp); +struct netmem *page_pool_alloc_frag(struct page_pool *pool, + unsigned int *offset, unsigned int size, gfp_t gfp); static inline struct page *page_pool_dev_alloc_frag(struct page_pool *pool, unsigned int *offset, @@ -344,7 +344,7 @@ static inline struct page *page_pool_dev_alloc_frag(struct page_pool *pool, { gfp_t gfp = (GFP_ATOMIC | __GFP_NOWARN); - return page_pool_alloc_frag(pool, offset, size, gfp); + return netmem_page(page_pool_alloc_frag(pool, offset, size, gfp)); } /* get the stored dma direction. A driver might decide to treat this locally and @@ -401,9 +401,9 @@ void page_pool_put_defragged_netmem(struct page_pool *pool, struct netmem *nmem, unsigned int dma_sync_size, bool allow_direct); -static inline void page_pool_fragment_page(struct page *page, long nr) +static inline void page_pool_fragment_netmem(struct netmem *nmem, long nr) { - atomic_long_set(&page->pp_frag_count, nr); + atomic_long_set(&nmem->pp_frag_count, nr); } static inline long page_pool_defrag_netmem(struct netmem *nmem, long nr) @@ -427,12 +427,6 @@ static inline long page_pool_defrag_netmem(struct netmem *nmem, long nr) return ret; } -/* Compat, remove when all users gone */ -static inline long page_pool_defrag_page(struct page *page, long nr) -{ - return page_pool_defrag_netmem(page_netmem(page), nr); -} - static inline bool page_pool_is_last_frag(struct page_pool *pool, struct netmem *nmem) { diff --git a/net/core/page_pool.c b/net/core/page_pool.c index ddf9f2bb85f7..5624cdae1f4e 100644 --- a/net/core/page_pool.c +++ b/net/core/page_pool.c @@ -222,12 +222,6 @@ EXPORT_SYMBOL(page_pool_create); static void page_pool_return_netmem(struct page_pool *pool, struct netmem *nm); -static inline -void page_pool_return_page(struct page_pool *pool, struct page *page) -{ - page_pool_return_netmem(pool, page_netmem(page)); -} - noinline static struct netmem *page_pool_refill_alloc_cache(struct page_pool *pool) { @@ -665,10 +659,9 @@ void page_pool_put_page_bulk(struct page_pool *pool, void **data, } EXPORT_SYMBOL(page_pool_put_page_bulk); -static struct page *page_pool_drain_frag(struct page_pool *pool, - struct page *page) +static struct netmem *page_pool_drain_frag(struct page_pool *pool, + struct netmem *nmem) { - struct netmem *nmem = page_netmem(page); long drain_count = BIAS_MAX - pool->frag_users; /* Some user is still using the page frag */ @@ -679,7 +672,7 @@ static struct page *page_pool_drain_frag(struct page_pool *pool, if (pool->p.flags & PP_FLAG_DMA_SYNC_DEV) page_pool_dma_sync_for_device(pool, nmem, -1); - return page; + return nmem; } page_pool_return_netmem(pool, nmem); @@ -689,22 +682,22 @@ static struct page 
*page_pool_drain_frag(struct page_pool *pool, static void page_pool_free_frag(struct page_pool *pool) { long drain_count = BIAS_MAX - pool->frag_users; - struct page *page = pool->frag_page; + struct netmem *nmem = pool->frag_nmem; - pool->frag_page = NULL; + pool->frag_nmem = NULL; - if (!page || page_pool_defrag_page(page, drain_count)) + if (!nmem || page_pool_defrag_netmem(nmem, drain_count)) return; - page_pool_return_page(pool, page); + page_pool_return_netmem(pool, nmem); } -struct page *page_pool_alloc_frag(struct page_pool *pool, +struct netmem *page_pool_alloc_frag(struct page_pool *pool, unsigned int *offset, unsigned int size, gfp_t gfp) { unsigned int max_size = PAGE_SIZE << pool->p.order; - struct page *page = pool->frag_page; + struct netmem *nmem = pool->frag_nmem; if (WARN_ON(!(pool->p.flags & PP_FLAG_PAGE_FRAG) || size > max_size)) @@ -713,35 +706,35 @@ struct page *page_pool_alloc_frag(struct page_pool *pool, size = ALIGN(size, dma_get_cache_alignment()); *offset = pool->frag_offset; - if (page && *offset + size > max_size) { - page = page_pool_drain_frag(pool, page); - if (page) { + if (nmem && *offset + size > max_size) { + nmem = page_pool_drain_frag(pool, nmem); + if (nmem) { alloc_stat_inc(pool, fast); goto frag_reset; } } - if (!page) { - page = page_pool_alloc_pages(pool, gfp); - if (unlikely(!page)) { - pool->frag_page = NULL; + if (!nmem) { + nmem = page_pool_alloc_netmem(pool, gfp); + if (unlikely(!nmem)) { + pool->frag_nmem = NULL; return NULL; } - pool->frag_page = page; + pool->frag_nmem = nmem; frag_reset: pool->frag_users = 1; *offset = 0; pool->frag_offset = size; - page_pool_fragment_page(page, BIAS_MAX); - return page; + page_pool_fragment_netmem(nmem, BIAS_MAX); + return nmem; } pool->frag_users++; pool->frag_offset = *offset + size; alloc_stat_inc(pool, fast); - return page; + return nmem; } EXPORT_SYMBOL(page_pool_alloc_frag); From patchwork Wed Jan 11 04:22:08 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Matthew Wilcox (Oracle)" X-Patchwork-Id: 13095951 X-Patchwork-Delegate: kuba@kernel.org Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 0A789C677F1 for ; Wed, 11 Jan 2023 04:22:21 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S234994AbjAKEWR (ORCPT ); Tue, 10 Jan 2023 23:22:17 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:60684 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S231231AbjAKEWK (ORCPT ); Tue, 10 Jan 2023 23:22:10 -0500 Received: from casper.infradead.org (casper.infradead.org [IPv6:2001:8b0:10b:1236::1]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 47E1EBE06 for ; Tue, 10 Jan 2023 20:22:08 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version: References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To: Content-Type:Content-ID:Content-Description; bh=sIvsfRyHd+n90C8eGgTzNiW5NYYQVxV4bU3jT+uf9ZE=; b=E9lPWmVWGAZEtiz2aU0caCoAjM 0FaB62SnKuQ7vhAud1coGaNAZWFv6wS/MayWmIttQoI9kZi5cUYTt7rIuo8UYZHLDJHwzfabR15XV 2KC6PUfmKhj1IbzBoJJpIowxEGC7SSlscrgXzl3CpZ/RfZbqD4Ms5gy3QFu/dcACcOJLm5pripjTr 1GUWLYuSaaWnsXx+WNMLSvRkwtJyWPLnlC/KI8qHhrdzLHHg6TisKhTv1hthzZn0ioCbCN2efQk8S 
4NCTzlVNJc5uQ9thLVUWog9oEgKfD1nBOZCr0U7LrPSkjOBsGE6f3ekYluynD3ogsjjZBCcx4uv2J rhLM/FoA==; Received: from willy by casper.infradead.org with local (Exim 4.94.2 #2 (Red Hat Linux)) id 1pFScz-003nyo-Tp; Wed, 11 Jan 2023 04:22:17 +0000 From: "Matthew Wilcox (Oracle)" To: Jesper Dangaard Brouer , Ilias Apalodimas Cc: "Matthew Wilcox (Oracle)" , netdev@vger.kernel.org, linux-mm@kvack.org, Shakeel Butt , Jesper Dangaard Brouer , Jesse Brandeburg Subject: [PATCH v3 20/26] xdp: Convert to netmem Date: Wed, 11 Jan 2023 04:22:08 +0000 Message-Id: <20230111042214.907030-21-willy@infradead.org> X-Mailer: git-send-email 2.37.1 In-Reply-To: <20230111042214.907030-1-willy@infradead.org> References: <20230111042214.907030-1-willy@infradead.org> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: netdev@vger.kernel.org X-Patchwork-Delegate: kuba@kernel.org We dereference the 'pp' member of struct page, so we must use a netmem here. Signed-off-by: Matthew Wilcox (Oracle) Acked-by: Jesper Dangaard Brouer Reviewed-by: Ilias Apalodimas Reviewed-by: Jesse Brandeburg --- net/core/xdp.c | 7 ++++--- 1 file changed, 4 insertions(+), 3 deletions(-) diff --git a/net/core/xdp.c b/net/core/xdp.c index 844c9d99dc0e..7520c3b27356 100644 --- a/net/core/xdp.c +++ b/net/core/xdp.c @@ -375,17 +375,18 @@ EXPORT_SYMBOL_GPL(xdp_rxq_info_reg_mem_model); void __xdp_return(void *data, struct xdp_mem_info *mem, bool napi_direct, struct xdp_buff *xdp) { + struct netmem *nmem; struct page *page; switch (mem->type) { case MEM_TYPE_PAGE_POOL: - page = virt_to_head_page(data); + nmem = virt_to_netmem(data); if (napi_direct && xdp_return_frame_no_direct()) napi_direct = false; - /* No need to check ((page->pp_magic & ~0x3UL) == PP_SIGNATURE) + /* No need to check ((nmem->pp_magic & ~0x3UL) == PP_SIGNATURE) * as mem->type knows this a page_pool page */ - page_pool_put_full_page(page->pp, page, napi_direct); + page_pool_put_full_netmem(nmem->pp, nmem, napi_direct); break; case MEM_TYPE_PAGE_SHARED: page_frag_free(data); From patchwork Wed Jan 11 04:22:09 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Matthew Wilcox (Oracle)" X-Patchwork-Id: 13095955 X-Patchwork-Delegate: kuba@kernel.org Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 36440C5479D for ; Wed, 11 Jan 2023 04:22:32 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S230471AbjAKEW3 (ORCPT ); Tue, 10 Jan 2023 23:22:29 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:60680 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S229986AbjAKEWK (ORCPT ); Tue, 10 Jan 2023 23:22:10 -0500 Received: from casper.infradead.org (casper.infradead.org [IPv6:2001:8b0:10b:1236::1]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 47EDBDF62 for ; Tue, 10 Jan 2023 20:22:08 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version: References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To: Content-Type:Content-ID:Content-Description; bh=Sd/TXFITrHs9oBCigPbZAQGp64qBzBR18nRKYWdlGyA=; b=uodsYMsVSwkp3eHv0O/ZIrmIzF zqTlnoaGCM0c2hfy51wGkgmonqy04JlvYKGpEroBQ75AKzircoAGKR634BqCnQBzPz0VK2fpi4FN2 
UUVC84sW6Pd8BRNV2lecki7A+HzPpTH2tkHaXc53cr5AbuWWVOPGpdCWL1FImfAp/JkbfVZBt8Q0/ 6hRzfuO3GyAX5CQGIG4tLPs58rf1YTnUoo0pJ5hVjN+y6vgOwKg+wDw0/ZY+jMIzbRDhjTIwdir3W +aJdE3H+Ma/idTUudjtBbxzYr7SiPmbFoPqjbl1FgVNQbq0tXO9OHGH6IUtgz+b0rxA4fgzsISoPn 4YRc3S0w==; Received: from willy by casper.infradead.org with local (Exim 4.94.2 #2 (Red Hat Linux)) id 1pFSd0-003nyu-1L; Wed, 11 Jan 2023 04:22:18 +0000 From: "Matthew Wilcox (Oracle)" To: Jesper Dangaard Brouer , Ilias Apalodimas Cc: "Matthew Wilcox (Oracle)" , netdev@vger.kernel.org, linux-mm@kvack.org, Shakeel Butt , Jesper Dangaard Brouer , Jesse Brandeburg Subject: [PATCH v3 21/26] mm: Remove page pool members from struct page Date: Wed, 11 Jan 2023 04:22:09 +0000 Message-Id: <20230111042214.907030-22-willy@infradead.org> X-Mailer: git-send-email 2.37.1 In-Reply-To: <20230111042214.907030-1-willy@infradead.org> References: <20230111042214.907030-1-willy@infradead.org> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: netdev@vger.kernel.org X-Patchwork-Delegate: kuba@kernel.org These are now split out into their own netmem struct. Signed-off-by: Matthew Wilcox (Oracle) Acked-by: Jesper Dangaard Brouer Reviewed-by: Ilias Apalodimas Reviewed-by: Jesse Brandeburg --- include/linux/mm_types.h | 22 ---------------------- include/net/page_pool.h | 4 ---- 2 files changed, 26 deletions(-) diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h index 65b912ba97ef..de545f6cca3c 100644 --- a/include/linux/mm_types.h +++ b/include/linux/mm_types.h @@ -116,28 +116,6 @@ struct page { */ unsigned long private; }; - struct { /* page_pool used by netstack */ - /** - * @pp_magic: magic value to avoid recycling non - * page_pool allocated pages. - */ - unsigned long pp_magic; - struct page_pool *pp; - unsigned long _pp_mapping_pad; - unsigned long dma_addr; - union { - /** - * dma_addr_upper: might require a 64-bit - * value on 32-bit architectures. - */ - unsigned long dma_addr_upper; - /** - * For frag page support, not supported in - * 32-bit architectures with 64-bit DMA. 
- */ - atomic_long_t pp_frag_count; - }; - }; struct { /* Tail pages of compound page */ unsigned long compound_head; /* Bit zero is set */ diff --git a/include/net/page_pool.h b/include/net/page_pool.h index 5e5030f5bbb4..2f0cd018b8f2 100644 --- a/include/net/page_pool.h +++ b/include/net/page_pool.h @@ -86,11 +86,7 @@ struct netmem { static_assert(offsetof(struct page, pg) == offsetof(struct netmem, nm)) NETMEM_MATCH(flags, flags); NETMEM_MATCH(lru, pp_magic); -NETMEM_MATCH(pp, pp); NETMEM_MATCH(mapping, _pp_mapping_pad); -NETMEM_MATCH(dma_addr, dma_addr); -NETMEM_MATCH(dma_addr_upper, dma_addr_upper); -NETMEM_MATCH(pp_frag_count, pp_frag_count); NETMEM_MATCH(_mapcount, _mapcount); NETMEM_MATCH(_refcount, _refcount); #undef NETMEM_MATCH From patchwork Wed Jan 11 04:22:10 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Matthew Wilcox (Oracle)" X-Patchwork-Id: 13095959 X-Patchwork-Delegate: bpf@iogearbox.net Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 9E105C54EBC for ; Wed, 11 Jan 2023 04:22:44 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S235311AbjAKEWl (ORCPT ); Tue, 10 Jan 2023 23:22:41 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:60746 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S231596AbjAKEWO (ORCPT ); Tue, 10 Jan 2023 23:22:14 -0500 Received: from casper.infradead.org (casper.infradead.org [IPv6:2001:8b0:10b:1236::1]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id C87A810568 for ; Tue, 10 Jan 2023 20:22:08 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version: References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To: Content-Type:Content-ID:Content-Description; bh=V/N8ZDynthW5Fc5GLNsWK45Y4vcH+YJgVhAgrk4URbs=; b=bLXhO64CiVSYtCXYvg7dj2411b 6Ih8FOPtWZtKEy7IVmUOEV+d89FQF98GOFKAmZR5T2nQJuT7FwpgYskeYrGvUoeMD5okrfF/nvb6E LhmdFmC3bh6g12Aiw3j6UkvGuG/nvKd2yYsr+9WxgpzjLwqQZc2/WgyJTgfyC3wmBBXPIU7s7zGYn rcYEIkFAEW6rObiVuPRLo6SsZa6Eg0kfRfQMOwsoVux8qWYRO0Pb+U+BlNlMXIdf0CQSVf9LhqGFC 4xgvixNEoR1V298HVLSrUmHJza/v2FjW8e86Sh3iUVMd25nP7jAbSAouI1PlonCOZoIZam0yiuFMX XIWR8J6w==; Received: from willy by casper.infradead.org with local (Exim 4.94.2 #2 (Red Hat Linux)) id 1pFSd0-003nz0-5e; Wed, 11 Jan 2023 04:22:18 +0000 From: "Matthew Wilcox (Oracle)" To: Jesper Dangaard Brouer , Ilias Apalodimas Cc: "Matthew Wilcox (Oracle)" , netdev@vger.kernel.org, linux-mm@kvack.org, Shakeel Butt , Jesper Dangaard Brouer , Jesse Brandeburg Subject: [PATCH v3 22/26] page_pool: Pass a netmem to init_callback() Date: Wed, 11 Jan 2023 04:22:10 +0000 Message-Id: <20230111042214.907030-23-willy@infradead.org> X-Mailer: git-send-email 2.37.1 In-Reply-To: <20230111042214.907030-1-willy@infradead.org> References: <20230111042214.907030-1-willy@infradead.org> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: netdev@vger.kernel.org X-Patchwork-Delegate: bpf@iogearbox.net Convert the only user of init_callback. 
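A minimal sketch of a callback under the new signature (my_init_cb and init_ctx are made-up names for illustration; netmem_to_virt() is the helper used in the test_run conversion below):

	static void my_init_cb(struct netmem *nmem, void *arg)
	{
		/* e.g. zero the buffer backing this netmem before first use */
		memset(netmem_to_virt(nmem), 0, PAGE_SIZE);
	}

	/* in the driver's struct page_pool_params, before page_pool_create() */
	pp_params.init_callback = my_init_cb;
	pp_params.init_arg = &init_ctx;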
Signed-off-by: Matthew Wilcox (Oracle) Acked-by: Jesper Dangaard Brouer Reviewed-by: Ilias Apalodimas Reviewed-by: Jesse Brandeburg --- include/net/page_pool.h | 2 +- net/bpf/test_run.c | 4 ++-- net/core/page_pool.c | 2 +- 3 files changed, 4 insertions(+), 4 deletions(-) diff --git a/include/net/page_pool.h b/include/net/page_pool.h index 2f0cd018b8f2..af8ba8a0dd05 100644 --- a/include/net/page_pool.h +++ b/include/net/page_pool.h @@ -181,7 +181,7 @@ struct page_pool_params { enum dma_data_direction dma_dir; /* DMA mapping direction */ unsigned int max_len; /* max DMA sync memory size */ unsigned int offset; /* DMA addr offset */ - void (*init_callback)(struct page *page, void *arg); + void (*init_callback)(struct netmem *nmem, void *arg); void *init_arg; }; diff --git a/net/bpf/test_run.c b/net/bpf/test_run.c index 2723623429ac..bd3c64e69f6e 100644 --- a/net/bpf/test_run.c +++ b/net/bpf/test_run.c @@ -116,9 +116,9 @@ struct xdp_test_data { #define TEST_XDP_FRAME_SIZE (PAGE_SIZE - sizeof(struct xdp_page_head)) #define TEST_XDP_MAX_BATCH 256 -static void xdp_test_run_init_page(struct page *page, void *arg) +static void xdp_test_run_init_page(struct netmem *nmem, void *arg) { - struct xdp_page_head *head = phys_to_virt(page_to_phys(page)); + struct xdp_page_head *head = netmem_to_virt(nmem); struct xdp_buff *new_ctx, *orig_ctx; u32 headroom = XDP_PACKET_HEADROOM; struct xdp_test_data *xdp = arg; diff --git a/net/core/page_pool.c b/net/core/page_pool.c index 5624cdae1f4e..a1e404a7397f 100644 --- a/net/core/page_pool.c +++ b/net/core/page_pool.c @@ -334,7 +334,7 @@ static void page_pool_set_pp_info(struct page_pool *pool, nmem->pp = pool; nmem->pp_magic |= PP_SIGNATURE; if (pool->p.init_callback) - pool->p.init_callback(netmem_page(nmem), pool->p.init_arg); + pool->p.init_callback(nmem, pool->p.init_arg); } static void page_pool_clear_pp_info(struct netmem *nmem) From patchwork Wed Jan 11 04:22:11 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Matthew Wilcox (Oracle)" X-Patchwork-Id: 13095952 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 279E8C67871 for ; Wed, 11 Jan 2023 04:22:23 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S235059AbjAKEWU (ORCPT ); Tue, 10 Jan 2023 23:22:20 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:60700 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S231332AbjAKEWK (ORCPT ); Tue, 10 Jan 2023 23:22:10 -0500 Received: from casper.infradead.org (casper.infradead.org [IPv6:2001:8b0:10b:1236::1]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 47AEC7646 for ; Tue, 10 Jan 2023 20:22:08 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version: References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To: Content-Type:Content-ID:Content-Description; bh=QoaAG2kAgAPkNcCiK7t3f4EkfcuvVTfPdM/eyK8Bayk=; b=LU13WSn36J9OyqWY+IaUSE1Z1a Wievx5eQ/zQzEuLLnrGjXadW9SAJKWjA/DuQQCaUb3MVIhvqejVrFFFhudRXk4fXO2VNATzrniZA9 a0Sjzs/o5e7HT8brbzoy9sYz75dwKQtKY0M9RywnOjCKFqTpe2voPwqa61J9NGOLiBTFDikR82UIO 8lzQRpz4gCh1sA6sany3BfM5AZSwYjpoX7lUeRLHaVvuj3tVCE6olVaWGA/Hj/N0OG8NcUcjR8b4N 
+B5yIlhOzDqUlINl/7yiVNKasqCb57RkNbj59aBCO7UQNaRPgwRcbGiIPTax2VE1kKtCS8sdoaEWL IqPDGWXg==; Received: from willy by casper.infradead.org with local (Exim 4.94.2 #2 (Red Hat Linux)) id 1pFSd0-003nz7-A7; Wed, 11 Jan 2023 04:22:18 +0000 From: "Matthew Wilcox (Oracle)" To: Jesper Dangaard Brouer , Ilias Apalodimas Cc: "Matthew Wilcox (Oracle)" , netdev@vger.kernel.org, linux-mm@kvack.org, Shakeel Butt , Jesse Brandeburg Subject: [PATCH v3 23/26] net: Add support for netmem in skb_frag Date: Wed, 11 Jan 2023 04:22:11 +0000 Message-Id: <20230111042214.907030-24-willy@infradead.org> X-Mailer: git-send-email 2.37.1 In-Reply-To: <20230111042214.907030-1-willy@infradead.org> References: <20230111042214.907030-1-willy@infradead.org> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: netdev@vger.kernel.org Allow drivers to add netmem to skbs & retrieve them again. If the VM_BUG_ON triggers, we can add a call to compound_head() either in this function or in page_netmem(). Signed-off-by: Matthew Wilcox (Oracle) Reviewed-by: Jesse Brandeburg --- include/linux/skbuff.h | 11 +++++++++++ 1 file changed, 11 insertions(+) diff --git a/include/linux/skbuff.h b/include/linux/skbuff.h index 4c8492401a10..4b04240385cc 100644 --- a/include/linux/skbuff.h +++ b/include/linux/skbuff.h @@ -3346,6 +3346,12 @@ static inline struct page *skb_frag_page(const skb_frag_t *frag) return frag->bv_page; } +static inline struct netmem *skb_frag_netmem(const skb_frag_t *frag) +{ + VM_BUG_ON_PAGE(PageTail(frag->bv_page), frag->bv_page); + return page_netmem(frag->bv_page); +} + /** * __skb_frag_ref - take an addition reference on a paged fragment. * @frag: the paged fragment @@ -3454,6 +3460,11 @@ static inline void __skb_frag_set_page(skb_frag_t *frag, struct page *page) frag->bv_page = page; } +static inline void __skb_frag_set_netmem(skb_frag_t *frag, struct netmem *nmem) +{ + __skb_frag_set_page(frag, netmem_page(nmem)); +} + /** * skb_frag_set_page - sets the page contained in a paged fragment of an skb * @skb: the buffer From patchwork Wed Jan 11 04:22:12 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Matthew Wilcox (Oracle)" X-Patchwork-Id: 13095965 X-Patchwork-Delegate: kuba@kernel.org Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 0D892C54EBC for ; Wed, 11 Jan 2023 04:23:00 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S233237AbjAKEW5 (ORCPT ); Tue, 10 Jan 2023 23:22:57 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:60744 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S231520AbjAKEWO (ORCPT ); Tue, 10 Jan 2023 23:22:14 -0500 Received: from casper.infradead.org (casper.infradead.org [IPv6:2001:8b0:10b:1236::1]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 47FF0DFC6 for ; Tue, 10 Jan 2023 20:22:08 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version: References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To: Content-Type:Content-ID:Content-Description; bh=VCC7zk0y8ZK0ii1fcA4LBC62+hbVnPqfAL8JxgfRigk=; b=YqJ/crgQ+MgBIm7HmucCZhNYCi zospNW1BkE5uD79CZxLj0VVa58ju6lr28cZoOsOudE3WL3ikt+lurEPL/aW1gnbLmpQLjmrioxIjh 
CVAlFd8+hDvvoIrMd+WrGd7sySH6k+BkKk2ASVrcHtEsdpreuKt8o4rqvtJswyc6/9hPJdPq5sGtQ rXR1S/Hzzieo69yFT42iUT/TsAVO/jHk0aFC+3uOfLPcJq20vz3F+RcWm8CxlXADoCzSsyjZUbTdG LtsQpwzq8uIEizS6dhQcpxX55tUc89Kv9APkHEuIXhLeki7p/2x97HNvywgsJBsb3OkudCQMnnovg 58IgkTpA==; Received: from willy by casper.infradead.org with local (Exim 4.94.2 #2 (Red Hat Linux)) id 1pFSd0-003nzI-D3; Wed, 11 Jan 2023 04:22:18 +0000 From: "Matthew Wilcox (Oracle)" To: Jesper Dangaard Brouer , Ilias Apalodimas Cc: "Matthew Wilcox (Oracle)" , netdev@vger.kernel.org, linux-mm@kvack.org, Shakeel Butt , Jesse Brandeburg Subject: [PATCH v3 24/26] mvneta: Convert to netmem Date: Wed, 11 Jan 2023 04:22:12 +0000 Message-Id: <20230111042214.907030-25-willy@infradead.org> X-Mailer: git-send-email 2.37.1 In-Reply-To: <20230111042214.907030-1-willy@infradead.org> References: <20230111042214.907030-1-willy@infradead.org> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: netdev@vger.kernel.org X-Patchwork-Delegate: kuba@kernel.org Use the netmem APIs instead of the page APIs. Improves type-safety. Signed-off-by: Matthew Wilcox (Oracle) Reviewed-by: Jesse Brandeburg --- drivers/net/ethernet/marvell/mvneta.c | 48 +++++++++++++-------------- 1 file changed, 24 insertions(+), 24 deletions(-) diff --git a/drivers/net/ethernet/marvell/mvneta.c b/drivers/net/ethernet/marvell/mvneta.c index f8925cac61e4..6177d2ffd33c 100644 --- a/drivers/net/ethernet/marvell/mvneta.c +++ b/drivers/net/ethernet/marvell/mvneta.c @@ -1931,15 +1931,15 @@ static int mvneta_rx_refill(struct mvneta_port *pp, gfp_t gfp_mask) { dma_addr_t phys_addr; - struct page *page; + struct netmem *nmem; - page = page_pool_alloc_pages(rxq->page_pool, + nmem = page_pool_alloc_netmem(rxq->page_pool, gfp_mask | __GFP_NOWARN); - if (!page) + if (!nmem) return -ENOMEM; - phys_addr = page_pool_get_dma_addr(page) + pp->rx_offset_correction; - mvneta_rx_desc_fill(rx_desc, phys_addr, page, rxq); + phys_addr = netmem_get_dma_addr(nmem) + pp->rx_offset_correction; + mvneta_rx_desc_fill(rx_desc, phys_addr, nmem, rxq); return 0; } @@ -2006,7 +2006,7 @@ static void mvneta_rxq_drop_pkts(struct mvneta_port *pp, if (!data || !(rx_desc->buf_phys_addr)) continue; - page_pool_put_full_page(rxq->page_pool, data, false); + page_pool_put_full_netmem(rxq->page_pool, data, false); } if (xdp_rxq_info_is_reg(&rxq->xdp_rxq)) xdp_rxq_info_unreg(&rxq->xdp_rxq); @@ -2072,11 +2072,11 @@ mvneta_xdp_put_buff(struct mvneta_port *pp, struct mvneta_rx_queue *rxq, goto out; for (i = 0; i < sinfo->nr_frags; i++) - page_pool_put_full_page(rxq->page_pool, - skb_frag_page(&sinfo->frags[i]), true); + page_pool_put_full_netmem(rxq->page_pool, + skb_frag_netmem(&sinfo->frags[i]), true); out: - page_pool_put_page(rxq->page_pool, virt_to_head_page(xdp->data), + page_pool_put_netmem(rxq->page_pool, virt_to_netmem(xdp->data), sync_len, true); } @@ -2088,7 +2088,6 @@ mvneta_xdp_submit_frame(struct mvneta_port *pp, struct mvneta_tx_queue *txq, struct device *dev = pp->dev->dev.parent; struct mvneta_tx_desc *tx_desc; int i, num_frames = 1; - struct page *page; if (unlikely(xdp_frame_has_frags(xdpf))) num_frames += sinfo->nr_frags; @@ -2123,9 +2122,10 @@ mvneta_xdp_submit_frame(struct mvneta_port *pp, struct mvneta_tx_queue *txq, buf->type = MVNETA_TYPE_XDP_NDO; } else { - page = unlikely(frag) ? skb_frag_page(frag) - : virt_to_page(xdpf->data); - dma_addr = page_pool_get_dma_addr(page); + struct netmem *nmem = unlikely(frag) ? 
+ skb_frag_netmem(frag) : + virt_to_netmem(xdpf->data); + dma_addr = netmem_get_dma_addr(nmem); if (unlikely(frag)) dma_addr += skb_frag_off(frag); else @@ -2308,9 +2308,9 @@ mvneta_swbm_rx_frame(struct mvneta_port *pp, struct mvneta_rx_desc *rx_desc, struct mvneta_rx_queue *rxq, struct xdp_buff *xdp, int *size, - struct page *page) + struct netmem *nmem) { - unsigned char *data = page_address(page); + unsigned char *data = netmem_to_virt(nmem); int data_len = -MVNETA_MH_SIZE, len; struct net_device *dev = pp->dev; enum dma_data_direction dma_dir; @@ -2343,7 +2343,7 @@ mvneta_swbm_add_rx_fragment(struct mvneta_port *pp, struct mvneta_rx_desc *rx_desc, struct mvneta_rx_queue *rxq, struct xdp_buff *xdp, int *size, - struct page *page) + struct netmem *nmem) { struct skb_shared_info *sinfo = xdp_get_shared_info_from_buff(xdp); struct net_device *dev = pp->dev; @@ -2371,16 +2371,16 @@ mvneta_swbm_add_rx_fragment(struct mvneta_port *pp, skb_frag_off_set(frag, pp->rx_offset_correction); skb_frag_size_set(frag, data_len); - __skb_frag_set_page(frag, page); + __skb_frag_set_netmem(frag, nmem); if (!xdp_buff_has_frags(xdp)) { sinfo->xdp_frags_size = *size; xdp_buff_set_frags_flag(xdp); } - if (page_is_pfmemalloc(page)) + if (netmem_is_pfmemalloc(nmem)) xdp_buff_set_frag_pfmemalloc(xdp); } else { - page_pool_put_full_page(rxq->page_pool, page, true); + page_pool_put_full_netmem(rxq->page_pool, nmem, true); } *size -= len; } @@ -2440,10 +2440,10 @@ static int mvneta_rx_swbm(struct napi_struct *napi, struct mvneta_rx_desc *rx_desc = mvneta_rxq_next_desc_get(rxq); u32 rx_status, index; struct sk_buff *skb; - struct page *page; + struct netmem *nmem; index = rx_desc - rxq->descs; - page = (struct page *)rxq->buf_virt_addr[index]; + nmem = rxq->buf_virt_addr[index]; rx_status = rx_desc->status; rx_proc++; @@ -2461,17 +2461,17 @@ static int mvneta_rx_swbm(struct napi_struct *napi, desc_status = rx_status; mvneta_swbm_rx_frame(pp, rx_desc, rxq, &xdp_buf, - &size, page); + &size, nmem); } else { if (unlikely(!xdp_buf.data_hard_start)) { rx_desc->buf_phys_addr = 0; - page_pool_put_full_page(rxq->page_pool, page, + page_pool_put_full_netmem(rxq->page_pool, nmem, true); goto next; } mvneta_swbm_add_rx_fragment(pp, rx_desc, rxq, &xdp_buf, - &size, page); + &size, nmem); } /* Middle or Last descriptor */ if (!(rx_status & MVNETA_RXD_LAST_DESC)) From patchwork Wed Jan 11 04:22:13 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Matthew Wilcox (Oracle)" X-Patchwork-Id: 13095963 X-Patchwork-Delegate: kuba@kernel.org Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 87538C46467 for ; Wed, 11 Jan 2023 04:22:55 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S235300AbjAKEWw (ORCPT ); Tue, 10 Jan 2023 23:22:52 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:60756 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S233118AbjAKEWP (ORCPT ); Tue, 10 Jan 2023 23:22:15 -0500 Received: from casper.infradead.org (casper.infradead.org [IPv6:2001:8b0:10b:1236::1]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 9D549FD2A for ; Tue, 10 Jan 2023 20:22:08 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=casper.20170209; 
h=Content-Transfer-Encoding:MIME-Version: References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To: Content-Type:Content-ID:Content-Description; bh=f4bVpm5c6UDj4OJqEneaSAXEYkXWOAmLehfaiY3u5Fc=; b=lNddTkT+9kXb5XYJjTfCecsHSB /NDKNK5AiyjS84GTgLr2gN+vxPLKaMfe1bbERW9Ir7G+4xTXzuThzle61Spc9zJJdgn57juksrrXa aay6OBnKEk/6aX1NbkRGjPXcfG1gbko2iLscYrcR/qv3x/5QYqcJnAQY1IYet7np6QTOqVjJ/FVvy PokccSODRza1ntTVbCiMJg9byrtatfjegbKucMVwJt/OZwLjUnwNTW8Q8thhAIed3YPhGgJNdyk3E PXJaYP22d469IdSkdubIDrjM1DG7vEl0XMfgSdjtpFxGz36CGGZHF4bki7Jwxv/rrXbt9If4E5YWA wVy+Ui6g==; Received: from willy by casper.infradead.org with local (Exim 4.94.2 #2 (Red Hat Linux)) id 1pFSd0-003nzO-Gw; Wed, 11 Jan 2023 04:22:18 +0000 From: "Matthew Wilcox (Oracle)" To: Jesper Dangaard Brouer , Ilias Apalodimas Cc: "Matthew Wilcox (Oracle)" , netdev@vger.kernel.org, linux-mm@kvack.org, Shakeel Butt , Tariq Toukan , Jesse Brandeburg Subject: [PATCH v3 25/26] mlx5: Convert to netmem Date: Wed, 11 Jan 2023 04:22:13 +0000 Message-Id: <20230111042214.907030-26-willy@infradead.org> X-Mailer: git-send-email 2.37.1 In-Reply-To: <20230111042214.907030-1-willy@infradead.org> References: <20230111042214.907030-1-willy@infradead.org> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: netdev@vger.kernel.org X-Patchwork-Delegate: kuba@kernel.org Use the netmem APIs instead of the page_pool APIs. Possibly we should add a netmem equivalent of skb_add_rx_frag(), but that can happen later. Saves one call to compound_head() in the call to put_page() in mlx5e_page_release_dynamic() which saves 58 bytes of text. Signed-off-by: Matthew Wilcox (Oracle) Reviewed-by: Tariq Toukan Reviewed-by: Jesse Brandeburg --- drivers/net/ethernet/mellanox/mlx5/core/en.h | 10 +- .../net/ethernet/mellanox/mlx5/core/en/txrx.h | 4 +- .../net/ethernet/mellanox/mlx5/core/en/xdp.c | 24 ++-- .../net/ethernet/mellanox/mlx5/core/en/xdp.h | 2 +- .../net/ethernet/mellanox/mlx5/core/en_main.c | 12 +- .../net/ethernet/mellanox/mlx5/core/en_rx.c | 130 +++++++++--------- 6 files changed, 94 insertions(+), 88 deletions(-) diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en.h b/drivers/net/ethernet/mellanox/mlx5/core/en.h index 2d77fb8a8a01..35bff3b0d9f6 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/en.h +++ b/drivers/net/ethernet/mellanox/mlx5/core/en.h @@ -467,7 +467,7 @@ struct mlx5e_txqsq { } ____cacheline_aligned_in_smp; union mlx5e_alloc_unit { - struct page *page; + struct netmem *nmem; struct xdp_buff *xsk; }; @@ -501,7 +501,7 @@ struct mlx5e_xdp_info { } frame; struct { struct mlx5e_rq *rq; - struct page *page; + struct netmem *nmem; } page; }; }; @@ -619,7 +619,7 @@ struct mlx5e_mpw_info { struct mlx5e_page_cache { u32 head; u32 tail; - struct page *page_cache[MLX5E_CACHE_SIZE]; + struct netmem *page_cache[MLX5E_CACHE_SIZE]; }; struct mlx5e_rq; @@ -657,13 +657,13 @@ struct mlx5e_rq_frags_info { struct mlx5e_dma_info { dma_addr_t addr; - struct page *page; + struct netmem *nmem; }; struct mlx5e_shampo_hd { u32 mkey; struct mlx5e_dma_info *info; - struct page *last_page; + struct netmem *last_nmem; u16 hd_per_wq; u16 hd_per_wqe; unsigned long *bitmap; diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/txrx.h b/drivers/net/ethernet/mellanox/mlx5/core/en/txrx.h index 853f312cd757..688d3ea9aa36 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/en/txrx.h +++ b/drivers/net/ethernet/mellanox/mlx5/core/en/txrx.h @@ -65,8 +65,8 @@ int mlx5e_napi_poll(struct napi_struct *napi, int budget); int mlx5e_poll_ico_cq(struct mlx5e_cq *cq); /* RX */ -void 
mlx5e_page_dma_unmap(struct mlx5e_rq *rq, struct page *page); -void mlx5e_page_release_dynamic(struct mlx5e_rq *rq, struct page *page, bool recycle); +void mlx5e_nmem_dma_unmap(struct mlx5e_rq *rq, struct netmem *nmem); +void mlx5e_page_release_dynamic(struct mlx5e_rq *rq, struct netmem *nmem, bool recycle); INDIRECT_CALLABLE_DECLARE(bool mlx5e_post_rx_wqes(struct mlx5e_rq *rq)); INDIRECT_CALLABLE_DECLARE(bool mlx5e_post_rx_mpwqes(struct mlx5e_rq *rq)); int mlx5e_poll_rx_cq(struct mlx5e_cq *cq, int budget); diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/xdp.c b/drivers/net/ethernet/mellanox/mlx5/core/en/xdp.c index 20507ef2f956..878e4e9f0f8b 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/en/xdp.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/en/xdp.c @@ -32,6 +32,7 @@ #include #include +#include "en/txrx.h" #include "en/xdp.h" #include "en/params.h" @@ -57,7 +58,7 @@ int mlx5e_xdp_max_mtu(struct mlx5e_params *params, struct mlx5e_xsk_param *xsk) static inline bool mlx5e_xmit_xdp_buff(struct mlx5e_xdpsq *sq, struct mlx5e_rq *rq, - struct page *page, struct xdp_buff *xdp) + struct netmem *nmem, struct xdp_buff *xdp) { struct skb_shared_info *sinfo = NULL; struct mlx5e_xmit_data xdptxd; @@ -116,7 +117,7 @@ mlx5e_xmit_xdp_buff(struct mlx5e_xdpsq *sq, struct mlx5e_rq *rq, xdpi.mode = MLX5E_XDP_XMIT_MODE_PAGE; xdpi.page.rq = rq; - dma_addr = page_pool_get_dma_addr(page) + (xdpf->data - (void *)xdpf); + dma_addr = netmem_get_dma_addr(nmem) + (xdpf->data - (void *)xdpf); dma_sync_single_for_device(sq->pdev, dma_addr, xdptxd.len, DMA_BIDIRECTIONAL); if (unlikely(xdp_frame_has_frags(xdpf))) { @@ -127,7 +128,7 @@ mlx5e_xmit_xdp_buff(struct mlx5e_xdpsq *sq, struct mlx5e_rq *rq, dma_addr_t addr; u32 len; - addr = page_pool_get_dma_addr(skb_frag_page(frag)) + + addr = netmem_get_dma_addr(skb_frag_netmem(frag)) + skb_frag_off(frag); len = skb_frag_size(frag); dma_sync_single_for_device(sq->pdev, addr, len, @@ -141,14 +142,14 @@ mlx5e_xmit_xdp_buff(struct mlx5e_xdpsq *sq, struct mlx5e_rq *rq, mlx5e_xmit_xdp_frame, sq, &xdptxd, sinfo, 0))) return false; - xdpi.page.page = page; + xdpi.page.nmem = nmem; mlx5e_xdpi_fifo_push(&sq->db.xdpi_fifo, &xdpi); if (unlikely(xdp_frame_has_frags(xdpf))) { for (i = 0; i < sinfo->nr_frags; i++) { skb_frag_t *frag = &sinfo->frags[i]; - xdpi.page.page = skb_frag_page(frag); + xdpi.page.nmem = skb_frag_netmem(frag); mlx5e_xdpi_fifo_push(&sq->db.xdpi_fifo, &xdpi); } } @@ -157,7 +158,7 @@ mlx5e_xmit_xdp_buff(struct mlx5e_xdpsq *sq, struct mlx5e_rq *rq, } /* returns true if packet was consumed by xdp */ -bool mlx5e_xdp_handle(struct mlx5e_rq *rq, struct page *page, +bool mlx5e_xdp_handle(struct mlx5e_rq *rq, struct netmem *nmem, struct bpf_prog *prog, struct xdp_buff *xdp) { u32 act; @@ -168,19 +169,19 @@ bool mlx5e_xdp_handle(struct mlx5e_rq *rq, struct page *page, case XDP_PASS: return false; case XDP_TX: - if (unlikely(!mlx5e_xmit_xdp_buff(rq->xdpsq, rq, page, xdp))) + if (unlikely(!mlx5e_xmit_xdp_buff(rq->xdpsq, rq, nmem, xdp))) goto xdp_abort; __set_bit(MLX5E_RQ_FLAG_XDP_XMIT, rq->flags); /* non-atomic */ return true; case XDP_REDIRECT: - /* When XDP enabled then page-refcnt==1 here */ + /* When XDP enabled then nmem->refcnt==1 here */ err = xdp_do_redirect(rq->netdev, xdp, prog); if (unlikely(err)) goto xdp_abort; __set_bit(MLX5E_RQ_FLAG_XDP_XMIT, rq->flags); __set_bit(MLX5E_RQ_FLAG_XDP_REDIRECT, rq->flags); if (xdp->rxq->mem.type != MEM_TYPE_XSK_BUFF_POOL) - mlx5e_page_dma_unmap(rq, page); + mlx5e_nmem_dma_unmap(rq, nmem); rq->stats->xdp_redirect++; return 
true; default: @@ -445,7 +446,7 @@ mlx5e_xmit_xdp_frame(struct mlx5e_xdpsq *sq, struct mlx5e_xmit_data *xdptxd, skb_frag_t *frag = &sinfo->frags[i]; dma_addr_t addr; - addr = page_pool_get_dma_addr(skb_frag_page(frag)) + + addr = netmem_get_dma_addr(skb_frag_netmem(frag)) + skb_frag_off(frag); dseg++; @@ -495,7 +496,8 @@ static void mlx5e_free_xdpsq_desc(struct mlx5e_xdpsq *sq, break; case MLX5E_XDP_XMIT_MODE_PAGE: /* XDP_TX from the regular RQ */ - mlx5e_page_release_dynamic(xdpi.page.rq, xdpi.page.page, recycle); + mlx5e_page_release_dynamic(xdpi.page.rq, + xdpi.page.nmem, recycle); break; case MLX5E_XDP_XMIT_MODE_XSK: /* AF_XDP send */ diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/xdp.h b/drivers/net/ethernet/mellanox/mlx5/core/en/xdp.h index bc2d9034af5b..5bc875f131a2 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/en/xdp.h +++ b/drivers/net/ethernet/mellanox/mlx5/core/en/xdp.h @@ -46,7 +46,7 @@ struct mlx5e_xsk_param; int mlx5e_xdp_max_mtu(struct mlx5e_params *params, struct mlx5e_xsk_param *xsk); -bool mlx5e_xdp_handle(struct mlx5e_rq *rq, struct page *page, +bool mlx5e_xdp_handle(struct mlx5e_rq *rq, struct netmem *nmem, struct bpf_prog *prog, struct xdp_buff *xdp); void mlx5e_xdp_mpwqe_complete(struct mlx5e_xdpsq *sq); bool mlx5e_poll_xdpsq_cq(struct mlx5e_cq *cq); diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_main.c b/drivers/net/ethernet/mellanox/mlx5/core/en_main.c index cff5f2e29e1e..7c2a1ecd730b 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/en_main.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/en_main.c @@ -555,16 +555,18 @@ static void mlx5e_rq_err_cqe_work(struct work_struct *recover_work) static int mlx5e_alloc_mpwqe_rq_drop_page(struct mlx5e_rq *rq) { - rq->wqe_overflow.page = alloc_page(GFP_KERNEL); - if (!rq->wqe_overflow.page) + struct page *page = alloc_page(GFP_KERNEL); + if (!page) return -ENOMEM; - rq->wqe_overflow.addr = dma_map_page(rq->pdev, rq->wqe_overflow.page, 0, + rq->wqe_overflow.addr = dma_map_page(rq->pdev, page, 0, PAGE_SIZE, rq->buff.map_dir); if (dma_mapping_error(rq->pdev, rq->wqe_overflow.addr)) { - __free_page(rq->wqe_overflow.page); + __free_page(page); return -ENOMEM; } + + rq->wqe_overflow.nmem = page_netmem(page); return 0; } @@ -572,7 +574,7 @@ static void mlx5e_free_mpwqe_rq_drop_page(struct mlx5e_rq *rq) { dma_unmap_page(rq->pdev, rq->wqe_overflow.addr, PAGE_SIZE, rq->buff.map_dir); - __free_page(rq->wqe_overflow.page); + __free_page(netmem_page(rq->wqe_overflow.nmem)); } static int mlx5e_init_rxq_rq(struct mlx5e_channel *c, struct mlx5e_params *params, diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c b/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c index c8820ab22169..000489ac0907 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c @@ -274,7 +274,7 @@ static inline u32 mlx5e_decompress_cqes_start(struct mlx5e_rq *rq, return mlx5e_decompress_cqes_cont(rq, wq, 1, budget_rem); } -static inline bool mlx5e_rx_cache_put(struct mlx5e_rq *rq, struct page *page) +static inline bool mlx5e_rx_cache_put(struct mlx5e_rq *rq, struct netmem *nmem) { struct mlx5e_page_cache *cache = &rq->page_cache; u32 tail_next = (cache->tail + 1) & (MLX5E_CACHE_SIZE - 1); @@ -285,12 +285,12 @@ static inline bool mlx5e_rx_cache_put(struct mlx5e_rq *rq, struct page *page) return false; } - if (!dev_page_is_reusable(page)) { + if (!dev_page_is_reusable(netmem_page(nmem))) { stats->cache_waive++; return false; } - cache->page_cache[cache->tail] = page; + 
cache->page_cache[cache->tail] = nmem; cache->tail = tail_next; return true; } @@ -306,16 +306,16 @@ static inline bool mlx5e_rx_cache_get(struct mlx5e_rq *rq, union mlx5e_alloc_uni return false; } - if (page_ref_count(cache->page_cache[cache->head]) != 1) { + if (netmem_ref_count(cache->page_cache[cache->head]) != 1) { stats->cache_busy++; return false; } - au->page = cache->page_cache[cache->head]; + au->nmem = cache->page_cache[cache->head]; cache->head = (cache->head + 1) & (MLX5E_CACHE_SIZE - 1); stats->cache_reuse++; - addr = page_pool_get_dma_addr(au->page); + addr = netmem_get_dma_addr(au->nmem); /* Non-XSK always uses PAGE_SIZE. */ dma_sync_single_for_device(rq->pdev, addr, PAGE_SIZE, rq->buff.map_dir); return true; @@ -328,43 +328,45 @@ static inline int mlx5e_page_alloc_pool(struct mlx5e_rq *rq, union mlx5e_alloc_u if (mlx5e_rx_cache_get(rq, au)) return 0; - au->page = page_pool_dev_alloc_pages(rq->page_pool); - if (unlikely(!au->page)) + au->nmem = page_pool_dev_alloc_netmem(rq->page_pool); + if (unlikely(!au->nmem)) return -ENOMEM; /* Non-XSK always uses PAGE_SIZE. */ - addr = dma_map_page(rq->pdev, au->page, 0, PAGE_SIZE, rq->buff.map_dir); + addr = dma_map_page(rq->pdev, netmem_page(au->nmem), 0, PAGE_SIZE, + rq->buff.map_dir); if (unlikely(dma_mapping_error(rq->pdev, addr))) { - page_pool_recycle_direct(rq->page_pool, au->page); - au->page = NULL; + page_pool_recycle_direct(rq->page_pool, au->nmem); + au->nmem = NULL; return -ENOMEM; } - page_pool_set_dma_addr(au->page, addr); + netmem_set_dma_addr(au->nmem, addr); return 0; } -void mlx5e_page_dma_unmap(struct mlx5e_rq *rq, struct page *page) +void mlx5e_nmem_dma_unmap(struct mlx5e_rq *rq, struct netmem *nmem) { - dma_addr_t dma_addr = page_pool_get_dma_addr(page); + dma_addr_t dma_addr = netmem_get_dma_addr(nmem); dma_unmap_page_attrs(rq->pdev, dma_addr, PAGE_SIZE, rq->buff.map_dir, DMA_ATTR_SKIP_CPU_SYNC); - page_pool_set_dma_addr(page, 0); + netmem_set_dma_addr(nmem, 0); } -void mlx5e_page_release_dynamic(struct mlx5e_rq *rq, struct page *page, bool recycle) +void mlx5e_page_release_dynamic(struct mlx5e_rq *rq, struct netmem *nmem, + bool recycle) { if (likely(recycle)) { - if (mlx5e_rx_cache_put(rq, page)) + if (mlx5e_rx_cache_put(rq, nmem)) return; - mlx5e_page_dma_unmap(rq, page); - page_pool_recycle_direct(rq->page_pool, page); + mlx5e_nmem_dma_unmap(rq, nmem); + page_pool_recycle_direct(rq->page_pool, nmem); } else { - mlx5e_page_dma_unmap(rq, page); - page_pool_release_page(rq->page_pool, page); - put_page(page); + mlx5e_nmem_dma_unmap(rq, nmem); + page_pool_release_netmem(rq->page_pool, nmem); + netmem_put(nmem); } } @@ -389,7 +391,7 @@ static inline void mlx5e_put_rx_frag(struct mlx5e_rq *rq, bool recycle) { if (frag->last_in_page) - mlx5e_page_release_dynamic(rq, frag->au->page, recycle); + mlx5e_page_release_dynamic(rq, frag->au->nmem, recycle); } static inline struct mlx5e_wqe_frag_info *get_frag(struct mlx5e_rq *rq, u16 ix) @@ -413,7 +415,7 @@ static int mlx5e_alloc_rx_wqe(struct mlx5e_rq *rq, struct mlx5e_rx_wqe_cyc *wqe, goto free_frags; headroom = i == 0 ? 
rq->buff.headroom : 0; - addr = page_pool_get_dma_addr(frag->au->page); + addr = netmem_get_dma_addr(frag->au->nmem); wqe->data[i].addr = cpu_to_be64(addr + frag->offset + headroom); } @@ -475,21 +477,21 @@ mlx5e_add_skb_frag(struct mlx5e_rq *rq, struct sk_buff *skb, union mlx5e_alloc_unit *au, u32 frag_offset, u32 len, unsigned int truesize) { - dma_addr_t addr = page_pool_get_dma_addr(au->page); + dma_addr_t addr = netmem_get_dma_addr(au->nmem); dma_sync_single_for_cpu(rq->pdev, addr + frag_offset, len, rq->buff.map_dir); - page_ref_inc(au->page); + netmem_get(au->nmem); skb_add_rx_frag(skb, skb_shinfo(skb)->nr_frags, - au->page, frag_offset, len, truesize); + netmem_page(au->nmem), frag_offset, len, truesize); } static inline void mlx5e_copy_skb_header(struct mlx5e_rq *rq, struct sk_buff *skb, - struct page *page, dma_addr_t addr, + struct netmem *nmem, dma_addr_t addr, int offset_from, int dma_offset, u32 headlen) { - const void *from = page_address(page) + offset_from; + const void *from = netmem_address(nmem) + offset_from; /* Aligning len to sizeof(long) optimizes memcpy performance */ unsigned int len = ALIGN(headlen, sizeof(long)); @@ -522,7 +524,7 @@ mlx5e_free_rx_mpwqe(struct mlx5e_rq *rq, struct mlx5e_mpw_info *wi, bool recycle } else { for (i = 0; i < rq->mpwqe.pages_per_wqe; i++) if (no_xdp_xmit || !test_bit(i, wi->xdp_xmit_bitmap)) - mlx5e_page_release_dynamic(rq, alloc_units[i].page, recycle); + mlx5e_page_release_dynamic(rq, alloc_units[i].nmem, recycle); } } @@ -586,7 +588,7 @@ static int mlx5e_build_shampo_hd_umr(struct mlx5e_rq *rq, struct mlx5e_shampo_hd *shampo = rq->mpwqe.shampo; u16 entries, pi, header_offset, err, wqe_bbs, new_entries; u32 lkey = rq->mdev->mlx5e_res.hw_objs.mkey; - struct page *page = shampo->last_page; + struct netmem *nmem = shampo->last_nmem; u64 addr = shampo->last_addr; struct mlx5e_dma_info *dma_info; struct mlx5e_umr_wqe *umr_wqe; @@ -613,11 +615,11 @@ static int mlx5e_build_shampo_hd_umr(struct mlx5e_rq *rq, err = mlx5e_page_alloc_pool(rq, &au); if (unlikely(err)) goto err_unmap; - page = dma_info->page = au.page; - addr = dma_info->addr = page_pool_get_dma_addr(au.page); + nmem = dma_info->nmem = au.nmem; + addr = dma_info->addr = netmem_get_dma_addr(au.nmem); } else { dma_info->addr = addr + header_offset; - dma_info->page = page; + dma_info->nmem = nmem; } update_klm: @@ -635,7 +637,7 @@ static int mlx5e_build_shampo_hd_umr(struct mlx5e_rq *rq, }; shampo->pi = (shampo->pi + new_entries) & (shampo->hd_per_wq - 1); - shampo->last_page = page; + shampo->last_nmem = nmem; shampo->last_addr = addr; sq->pc += wqe_bbs; sq->doorbell_cseg = &umr_wqe->ctrl; @@ -647,7 +649,7 @@ static int mlx5e_build_shampo_hd_umr(struct mlx5e_rq *rq, dma_info = &shampo->info[--index]; if (!(i & (MLX5E_SHAMPO_WQ_HEADER_PER_PAGE - 1))) { dma_info->addr = ALIGN_DOWN(dma_info->addr, PAGE_SIZE); - mlx5e_page_release_dynamic(rq, dma_info->page, true); + mlx5e_page_release_dynamic(rq, dma_info->nmem, true); } } rq->stats->buff_alloc_err++; @@ -721,7 +723,7 @@ static int mlx5e_alloc_rx_mpwqe(struct mlx5e_rq *rq, u16 ix) err = mlx5e_page_alloc_pool(rq, au); if (unlikely(err)) goto err_unmap; - addr = page_pool_get_dma_addr(au->page); + addr = netmem_get_dma_addr(au->nmem); umr_wqe->inline_mtts[i] = (struct mlx5_mtt) { .ptag = cpu_to_be64(addr | MLX5_EN_WR), }; @@ -763,7 +765,7 @@ static int mlx5e_alloc_rx_mpwqe(struct mlx5e_rq *rq, u16 ix) err_unmap: while (--i >= 0) { au--; - mlx5e_page_release_dynamic(rq, au->page, true); + mlx5e_page_release_dynamic(rq, au->nmem, true); 
} err: @@ -782,7 +784,7 @@ void mlx5e_shampo_dealloc_hd(struct mlx5e_rq *rq, u16 len, u16 start, bool close { struct mlx5e_shampo_hd *shampo = rq->mpwqe.shampo; int hd_per_wq = shampo->hd_per_wq; - struct page *deleted_page = NULL; + struct netmem *deleted_nmem = NULL; struct mlx5e_dma_info *hd_info; int i, index = start; @@ -795,9 +797,9 @@ void mlx5e_shampo_dealloc_hd(struct mlx5e_rq *rq, u16 len, u16 start, bool close hd_info = &shampo->info[index]; hd_info->addr = ALIGN_DOWN(hd_info->addr, PAGE_SIZE); - if (hd_info->page != deleted_page) { - deleted_page = hd_info->page; - mlx5e_page_release_dynamic(rq, hd_info->page, false); + if (hd_info->nmem != deleted_nmem) { + deleted_nmem = hd_info->nmem; + mlx5e_page_release_dynamic(rq, hd_info->nmem, false); } } @@ -1136,7 +1138,7 @@ static void *mlx5e_shampo_get_packet_hd(struct mlx5e_rq *rq, u16 header_index) struct mlx5e_dma_info *last_head = &rq->mpwqe.shampo->info[header_index]; u16 head_offset = (last_head->addr & (PAGE_SIZE - 1)) + rq->buff.headroom; - return page_address(last_head->page) + head_offset; + return netmem_address(last_head->nmem) + head_offset; } static void mlx5e_shampo_update_ipv4_udp_hdr(struct mlx5e_rq *rq, struct iphdr *ipv4) @@ -1595,11 +1597,11 @@ mlx5e_skb_from_cqe_linear(struct mlx5e_rq *rq, struct mlx5e_wqe_frag_info *wi, dma_addr_t addr; u32 frag_size; - va = page_address(au->page) + wi->offset; + va = netmem_address(au->nmem) + wi->offset; data = va + rx_headroom; frag_size = MLX5_SKB_FRAG_SZ(rx_headroom + cqe_bcnt); - addr = page_pool_get_dma_addr(au->page); + addr = netmem_get_dma_addr(au->nmem); dma_sync_single_range_for_cpu(rq->pdev, addr, wi->offset, frag_size, rq->buff.map_dir); net_prefetch(data); @@ -1610,7 +1612,7 @@ mlx5e_skb_from_cqe_linear(struct mlx5e_rq *rq, struct mlx5e_wqe_frag_info *wi, net_prefetchw(va); /* xdp_frame data area */ mlx5e_fill_xdp_buff(rq, va, rx_headroom, cqe_bcnt, &xdp); - if (mlx5e_xdp_handle(rq, au->page, prog, &xdp)) + if (mlx5e_xdp_handle(rq, au->nmem, prog, &xdp)) return NULL; /* page/packet was consumed by XDP */ rx_headroom = xdp.data - xdp.data_hard_start; @@ -1623,7 +1625,7 @@ mlx5e_skb_from_cqe_linear(struct mlx5e_rq *rq, struct mlx5e_wqe_frag_info *wi, return NULL; /* queue up for recycling/reuse */ - page_ref_inc(au->page); + netmem_get(au->nmem); return skb; } @@ -1645,10 +1647,10 @@ mlx5e_skb_from_cqe_nonlinear(struct mlx5e_rq *rq, struct mlx5e_wqe_frag_info *wi u32 truesize; void *va; - va = page_address(au->page) + wi->offset; + va = netmem_address(au->nmem) + wi->offset; frag_consumed_bytes = min_t(u32, frag_info->frag_size, cqe_bcnt); - addr = page_pool_get_dma_addr(au->page); + addr = netmem_get_dma_addr(au->nmem); dma_sync_single_range_for_cpu(rq->pdev, addr, wi->offset, rq->buff.frame0_sz, rq->buff.map_dir); net_prefetchw(va); /* xdp_frame data area */ @@ -1669,7 +1671,7 @@ mlx5e_skb_from_cqe_nonlinear(struct mlx5e_rq *rq, struct mlx5e_wqe_frag_info *wi frag_consumed_bytes = min_t(u32, frag_info->frag_size, cqe_bcnt); - addr = page_pool_get_dma_addr(au->page); + addr = netmem_get_dma_addr(au->nmem); dma_sync_single_for_cpu(rq->pdev, addr + wi->offset, frag_consumed_bytes, rq->buff.map_dir); @@ -1683,11 +1685,11 @@ mlx5e_skb_from_cqe_nonlinear(struct mlx5e_rq *rq, struct mlx5e_wqe_frag_info *wi } frag = &sinfo->frags[sinfo->nr_frags++]; - __skb_frag_set_page(frag, au->page); + __skb_frag_set_netmem(frag, au->nmem); skb_frag_off_set(frag, wi->offset); skb_frag_size_set(frag, frag_consumed_bytes); - if (page_is_pfmemalloc(au->page)) + if 
(netmem_is_pfmemalloc(au->nmem)) xdp_buff_set_frag_pfmemalloc(&xdp); sinfo->xdp_frags_size += frag_consumed_bytes; @@ -1701,7 +1703,7 @@ mlx5e_skb_from_cqe_nonlinear(struct mlx5e_rq *rq, struct mlx5e_wqe_frag_info *wi au = head_wi->au; prog = rcu_dereference(rq->xdp_prog); - if (prog && mlx5e_xdp_handle(rq, au->page, prog, &xdp)) { + if (prog && mlx5e_xdp_handle(rq, au->nmem, prog, &xdp)) { if (test_bit(MLX5E_RQ_FLAG_XDP_XMIT, rq->flags)) { int i; @@ -1718,7 +1720,7 @@ mlx5e_skb_from_cqe_nonlinear(struct mlx5e_rq *rq, struct mlx5e_wqe_frag_info *wi if (unlikely(!skb)) return NULL; - page_ref_inc(au->page); + netmem_get(au->nmem); if (unlikely(xdp_buff_has_frags(&xdp))) { int i; @@ -1967,8 +1969,8 @@ mlx5e_skb_from_cqe_mpwrq_nonlinear(struct mlx5e_rq *rq, struct mlx5e_mpw_info *w mlx5e_fill_skb_data(skb, rq, au, byte_cnt, frag_offset); /* copy header */ - addr = page_pool_get_dma_addr(head_au->page); - mlx5e_copy_skb_header(rq, skb, head_au->page, addr, + addr = netmem_get_dma_addr(head_au->nmem); + mlx5e_copy_skb_header(rq, skb, head_au->nmem, addr, head_offset, head_offset, headlen); /* skb linear part was allocated with headlen and aligned to long */ skb->tail += headlen; @@ -1996,11 +1998,11 @@ mlx5e_skb_from_cqe_mpwrq_linear(struct mlx5e_rq *rq, struct mlx5e_mpw_info *wi, return NULL; } - va = page_address(au->page) + head_offset; + va = netmem_address(au->nmem) + head_offset; data = va + rx_headroom; frag_size = MLX5_SKB_FRAG_SZ(rx_headroom + cqe_bcnt); - addr = page_pool_get_dma_addr(au->page); + addr = netmem_get_dma_addr(au->nmem); dma_sync_single_range_for_cpu(rq->pdev, addr, head_offset, frag_size, rq->buff.map_dir); net_prefetch(data); @@ -2011,7 +2013,7 @@ mlx5e_skb_from_cqe_mpwrq_linear(struct mlx5e_rq *rq, struct mlx5e_mpw_info *wi, net_prefetchw(va); /* xdp_frame data area */ mlx5e_fill_xdp_buff(rq, va, rx_headroom, cqe_bcnt, &xdp); - if (mlx5e_xdp_handle(rq, au->page, prog, &xdp)) { + if (mlx5e_xdp_handle(rq, au->nmem, prog, &xdp)) { if (__test_and_clear_bit(MLX5E_RQ_FLAG_XDP_XMIT, rq->flags)) __set_bit(page_idx, wi->xdp_xmit_bitmap); /* non-atomic */ return NULL; /* page/packet was consumed by XDP */ @@ -2027,7 +2029,7 @@ mlx5e_skb_from_cqe_mpwrq_linear(struct mlx5e_rq *rq, struct mlx5e_mpw_info *wi, return NULL; /* queue up for recycling/reuse */ - page_ref_inc(au->page); + netmem_get(au->nmem); return skb; } @@ -2044,7 +2046,7 @@ mlx5e_skb_from_cqe_shampo(struct mlx5e_rq *rq, struct mlx5e_mpw_info *wi, void *hdr, *data; u32 frag_size; - hdr = page_address(head->page) + head_offset; + hdr = netmem_address(head->nmem) + head_offset; data = hdr + rx_headroom; frag_size = MLX5_SKB_FRAG_SZ(rx_headroom + head_size); @@ -2059,7 +2061,7 @@ mlx5e_skb_from_cqe_shampo(struct mlx5e_rq *rq, struct mlx5e_mpw_info *wi, return NULL; /* queue up for recycling/reuse */ - page_ref_inc(head->page); + netmem_get(head->nmem); } else { /* allocate SKB and copy header for large header */ @@ -2072,7 +2074,7 @@ mlx5e_skb_from_cqe_shampo(struct mlx5e_rq *rq, struct mlx5e_mpw_info *wi, } prefetchw(skb->data); - mlx5e_copy_skb_header(rq, skb, head->page, head->addr, + mlx5e_copy_skb_header(rq, skb, head->nmem, head->addr, head_offset + rx_headroom, rx_headroom, head_size); /* skb linear part was allocated with headlen and aligned to long */ @@ -2124,7 +2126,7 @@ mlx5e_free_rx_shampo_hd_entry(struct mlx5e_rq *rq, u16 header_index) if (((header_index + 1) & (MLX5E_SHAMPO_WQ_HEADER_PER_PAGE - 1)) == 0) { shampo->info[header_index].addr = ALIGN_DOWN(addr, PAGE_SIZE); - mlx5e_page_release_dynamic(rq, 
shampo->info[header_index].page, true); + mlx5e_page_release_dynamic(rq, shampo->info[header_index].nmem, true); } bitmap_clear(shampo->bitmap, header_index, 1); } From patchwork Wed Jan 11 04:22:14 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Matthew Wilcox (Oracle)" X-Patchwork-Id: 13095957 X-Patchwork-Delegate: kuba@kernel.org Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 9A975C677F1 for ; Wed, 11 Jan 2023 04:22:37 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S231555AbjAKEWg (ORCPT ); Tue, 10 Jan 2023 23:22:36 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:60708 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S230386AbjAKEWM (ORCPT ); Tue, 10 Jan 2023 23:22:12 -0500 Received: from casper.infradead.org (casper.infradead.org [IPv6:2001:8b0:10b:1236::1]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 47D8B8FCF for ; Tue, 10 Jan 2023 20:22:08 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version: References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To: Content-Type:Content-ID:Content-Description; bh=X98r58MbAgZPQ4+1bekghvQ632iWN8JTQMfdSkUSkNQ=; b=OLOZYsZtuOdqFm6rh9vApicxj0 YxilqtkFg9Oh9hVGvdjGPAkPwtg4tAD4191gQ0FFNvHAdZ8D2XHfJ6nJKRaRMvGX9/z0/B7g6CiuB az34Nlwn8x88CPEvBaFmpfp5RAKZavkYXw2fzPbniaDcv7rPqRS16OQ9XTWya5w3QhoCPL5dQ1K4a h3o5Lx/F3vdVmKlJkPkrcQbxKozqoxXwCwxulplEzCamliGZNoNpqpVWdx6MGYCASYobcsUs6gRnK LEWmQdjZTnBWjx7eBp+ML3uT/eHQsbSqAiyFiO6/LRRsLXNP+aRLHOYD78Q//HFE4JfkoQIWT6cJ2 0ShX1ejA==; Received: from willy by casper.infradead.org with local (Exim 4.94.2 #2 (Red Hat Linux)) id 1pFSd0-003nzY-LT; Wed, 11 Jan 2023 04:22:18 +0000 From: "Matthew Wilcox (Oracle)" To: Jesper Dangaard Brouer , Ilias Apalodimas Cc: "Matthew Wilcox (Oracle)" , netdev@vger.kernel.org, linux-mm@kvack.org, Shakeel Butt Subject: [PATCH v3 26/26] hns3: Convert to netmem Date: Wed, 11 Jan 2023 04:22:14 +0000 Message-Id: <20230111042214.907030-27-willy@infradead.org> X-Mailer: git-send-email 2.37.1 In-Reply-To: <20230111042214.907030-1-willy@infradead.org> References: <20230111042214.907030-1-willy@infradead.org> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: netdev@vger.kernel.org X-Patchwork-Delegate: kuba@kernel.org Use the new netmem APIs in the hns3 driver. Convert page_pool_dev_alloc_frag() to return a netmem as this is the only user of the API. 
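As a rough sketch of the resulting calling convention (the "my_desc_cb"/"my_alloc_frag" names and the 2048-byte size are hypothetical; the netmem helpers are the ones added earlier in this series), a frag-based refill path would look something like:

	/* Sketch only: hypothetical caller, not part of this series. */
	struct my_desc_cb {
		struct netmem *nmem;
		unsigned int offset;
		void *buf;
		dma_addr_t dma;
	};

	static int my_alloc_frag(struct page_pool *pool, struct my_desc_cb *cb)
	{
		cb->nmem = page_pool_dev_alloc_frag(pool, &cb->offset, 2048);
		if (unlikely(!cb->nmem))
			return -ENOMEM;

		cb->buf = netmem_address(cb->nmem);
		cb->dma = netmem_get_dma_addr(cb->nmem);
		return 0;
	}

	/* Release path:
	 *	page_pool_put_full_netmem(pool, cb->nmem, false);
	 */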
Signed-off-by: Matthew Wilcox (Oracle) --- drivers/net/ethernet/hisilicon/hns3/hns3_enet.c | 16 ++++++++-------- include/net/page_pool.h | 7 +++---- 2 files changed, 11 insertions(+), 12 deletions(-) diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3_enet.c b/drivers/net/ethernet/hisilicon/hns3/hns3_enet.c index b4c4fb873568..ca0dc201bd47 100644 --- a/drivers/net/ethernet/hisilicon/hns3/hns3_enet.c +++ b/drivers/net/ethernet/hisilicon/hns3/hns3_enet.c @@ -3355,15 +3355,15 @@ static int hns3_alloc_buffer(struct hns3_enet_ring *ring, struct page *p; if (ring->page_pool) { - p = page_pool_dev_alloc_frag(ring->page_pool, + struct netmem *nmem = page_pool_dev_alloc_frag(ring->page_pool, &cb->page_offset, hns3_buf_size(ring)); - if (unlikely(!p)) + if (unlikely(!nmem)) return -ENOMEM; - cb->priv = p; - cb->buf = page_address(p); - cb->dma = page_pool_get_dma_addr(p); + cb->priv = nmem; + cb->buf = netmem_address(nmem); + cb->dma = netmem_get_dma_addr(nmem); cb->type = DESC_TYPE_PP_FRAG; cb->reuse_flag = 0; return 0; @@ -3395,7 +3395,7 @@ static void hns3_free_buffer(struct hns3_enet_ring *ring, if (cb->type & DESC_TYPE_PAGE && cb->pagecnt_bias) __page_frag_cache_drain(cb->priv, cb->pagecnt_bias); else if (cb->type & DESC_TYPE_PP_FRAG) - page_pool_put_full_page(ring->page_pool, cb->priv, + page_pool_put_full_netmem(ring->page_pool, cb->priv, false); } memset(cb, 0, sizeof(*cb)); @@ -4043,8 +4043,8 @@ static int hns3_alloc_skb(struct hns3_enet_ring *ring, unsigned int length, if (dev_page_is_reusable(desc_cb->priv)) desc_cb->reuse_flag = 1; else if (desc_cb->type & DESC_TYPE_PP_FRAG) - page_pool_put_full_page(ring->page_pool, desc_cb->priv, - false); + page_pool_put_full_netmem(ring->page_pool, + desc_cb->priv, false); else /* This page cannot be reused so discard it */ __page_frag_cache_drain(desc_cb->priv, desc_cb->pagecnt_bias); diff --git a/include/net/page_pool.h b/include/net/page_pool.h index af8ba8a0dd05..0a2588e6a0f3 100644 --- a/include/net/page_pool.h +++ b/include/net/page_pool.h @@ -334,13 +334,12 @@ static inline struct page *page_pool_dev_alloc_pages(struct page_pool *pool) struct netmem *page_pool_alloc_frag(struct page_pool *pool, unsigned int *offset, unsigned int size, gfp_t gfp); -static inline struct page *page_pool_dev_alloc_frag(struct page_pool *pool, - unsigned int *offset, - unsigned int size) +static inline struct netmem *page_pool_dev_alloc_frag(struct page_pool *pool, + unsigned int *offset, unsigned int size) { gfp_t gfp = (GFP_ATOMIC | __GFP_NOWARN); - return netmem_page(page_pool_alloc_frag(pool, offset, size, gfp)); + return page_pool_alloc_frag(pool, offset, size, gfp); } /* get the stored dma direction. A driver might decide to treat this locally and