From patchwork Fri Jun  4 18:33:49 2021
X-Patchwork-Submitter: Matteo Croce
X-Patchwork-Id: 12300461
X-Patchwork-Delegate: kuba@kernel.org
From: Matteo Croce <mcroce@linux.microsoft.com>
To: netdev@vger.kernel.org, linux-mm@kvack.org
Cc: Ayush Sawal, Vinay Kumar Yadav, Rohit Maheshwari, "David S. Miller",
    Jakub Kicinski, Thomas Petazzoni, Marcin Wojtas, Russell King,
    Mirko Lindner, Stephen Hemminger, Tariq Toukan,
    Jesper Dangaard Brouer, Ilias Apalodimas, Alexei Starovoitov,
    Daniel Borkmann, John Fastabend, Boris Pismenny, Arnd Bergmann,
    Andrew Morton, "Peter Zijlstra (Intel)", Vlastimil Babka, Yu Zhao,
    Will Deacon, Fenghua Yu, Roman Gushchin, Hugh Dickins, Peter Xu,
    Jason Gunthorpe, Jonathan Lemon, Alexander Lobakin, Cong Wang,
    wenxu, Kevin Hao, Jakub Sitnicki, Marco Elver, Willem de Bruijn,
    Miaohe Lin, Yunsheng Lin, Guillaume Nault,
    linux-kernel@vger.kernel.org, linux-rdma@vger.kernel.org,
    bpf@vger.kernel.org, Matthew Wilcox, Eric Dumazet, David Ahern,
    Lorenzo Bianconi, Saeed Mahameed, Andrew Lunn, Paolo Abeni,
    Sven Auhagen
Subject: [PATCH net-next v7 5/5] mvneta: recycle buffers
Date: Fri,  4 Jun 2021 20:33:49 +0200
Message-Id: <20210604183349.30040-6-mcroce@linux.microsoft.com>
In-Reply-To: <20210604183349.30040-1-mcroce@linux.microsoft.com>
References: <20210604183349.30040-1-mcroce@linux.microsoft.com>

From: Matteo Croce

Use the new recycling API for page_pool. In a drop rate test, the
packet rate increased by 10%, from 296 Kpps to 326 Kpps.

perf top on a stock system shows:

Overhead  Shared Object     Symbol
  23.66%  [kernel]          [k] __pi___inval_dcache_area
  22.85%  [mvneta]          [k] mvneta_rx_swbm
   7.54%  [kernel]          [k] kmem_cache_alloc
   6.49%  [kernel]          [k] eth_type_trans
   3.94%  [kernel]          [k] dev_gro_receive
   3.91%  [kernel]          [k] __netif_receive_skb_core
   3.91%  [kernel]          [k] kmem_cache_free
   3.76%  [kernel]          [k] page_pool_release_page
   3.56%  [kernel]          [k] free_unref_page
   2.40%  [kernel]          [k] build_skb
   1.49%  [kernel]          [k] skb_release_data
   1.45%  [kernel]          [k] __alloc_pages_bulk
   1.30%  [kernel]          [k] page_frag_free

And this is the same output with recycling enabled:

Overhead  Shared Object     Symbol
  26.41%  [kernel]          [k] __pi___inval_dcache_area
  25.00%  [mvneta]          [k] mvneta_rx_swbm
   8.14%  [kernel]          [k] kmem_cache_alloc
   6.84%  [kernel]          [k] eth_type_trans
   4.44%  [kernel]          [k] __netif_receive_skb_core
   4.38%  [kernel]          [k] kmem_cache_free
   4.16%  [kernel]          [k] dev_gro_receive
   3.21%  [kernel]          [k] page_pool_put_page
   2.41%  [kernel]          [k] build_skb
   1.82%  [kernel]          [k] skb_release_data
   1.61%  [kernel]          [k] napi_gro_receive
   1.25%  [kernel]          [k] page_pool_refill_alloc_cache
   1.16%  [kernel]          [k] __netif_receive_skb_list_core

We can see that page_pool_release_page(), free_unref_page() and
__alloc_pages_bulk() are no longer on top of the list when receiving
traffic.

The test was done with mausezahn on the TX side with 64 byte raw
ethernet frames.
Signed-off-by: Matteo Croce
---
 drivers/net/ethernet/marvell/mvneta.c | 11 +++++++----
 1 file changed, 7 insertions(+), 4 deletions(-)

diff --git a/drivers/net/ethernet/marvell/mvneta.c b/drivers/net/ethernet/marvell/mvneta.c
index 7d5cd9bc6c99..c15ce06427d0 100644
--- a/drivers/net/ethernet/marvell/mvneta.c
+++ b/drivers/net/ethernet/marvell/mvneta.c
@@ -2320,7 +2320,7 @@ mvneta_swbm_add_rx_fragment(struct mvneta_port *pp,
 }
 
 static struct sk_buff *
-mvneta_swbm_build_skb(struct mvneta_port *pp, struct mvneta_rx_queue *rxq,
+mvneta_swbm_build_skb(struct mvneta_port *pp, struct page_pool *pool,
 		      struct xdp_buff *xdp, u32 desc_status)
 {
 	struct skb_shared_info *sinfo = xdp_get_shared_info_from_buff(xdp);
@@ -2331,7 +2331,7 @@ mvneta_swbm_build_skb(struct mvneta_port *pp, struct mvneta_rx_queue *rxq,
 	if (!skb)
 		return ERR_PTR(-ENOMEM);
 
-	page_pool_release_page(rxq->page_pool, virt_to_page(xdp->data));
+	skb_mark_for_recycle(skb, virt_to_page(xdp->data), pool);
 
 	skb_reserve(skb, xdp->data - xdp->data_hard_start);
 	skb_put(skb, xdp->data_end - xdp->data);
@@ -2343,7 +2343,10 @@ mvneta_swbm_build_skb(struct mvneta_port *pp, struct mvneta_rx_queue *rxq,
 		skb_add_rx_frag(skb, skb_shinfo(skb)->nr_frags,
 				skb_frag_page(frag), skb_frag_off(frag),
 				skb_frag_size(frag), PAGE_SIZE);
-		page_pool_release_page(rxq->page_pool, skb_frag_page(frag));
+		/* We don't need to reset pp_recycle here. It's already set, so
+		 * just mark fragments for recycling.
+		 */
+		page_pool_store_mem_info(skb_frag_page(frag), pool);
 	}
 
 	return skb;
@@ -2425,7 +2428,7 @@ static int mvneta_rx_swbm(struct napi_struct *napi,
 		    mvneta_run_xdp(pp, rxq, xdp_prog, &xdp_buf, frame_sz, &ps))
 			goto next;
 
-		skb = mvneta_swbm_build_skb(pp, rxq, &xdp_buf, desc_status);
+		skb = mvneta_swbm_build_skb(pp, rxq->page_pool, &xdp_buf, desc_status);
 		if (IS_ERR(skb)) {
 			struct mvneta_pcpu_stats *stats = this_cpu_ptr(pp->stats);
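
Note that recycling only applies to pages that came from a page_pool in
the first place; mvneta already allocates its rx buffers that way in
mvneta_create_page_pool(). A simplified sketch of that kind of setup
(field values are illustrative, not the driver's exact ones, and dev is
assumed to be the struct device doing the DMA):

	struct page_pool_params pp_params = {
		.order		= 0,			/* single pages */
		.flags		= PP_FLAG_DMA_MAP,	/* pool handles DMA mapping */
		.pool_size	= 256,			/* illustrative: rx ring size */
		.nid		= NUMA_NO_NODE,
		.dev		= dev,
		.dma_dir	= DMA_FROM_DEVICE,
	};
	struct page_pool *pool;

	pool = page_pool_create(&pp_params);
	if (IS_ERR(pool))
		return PTR_ERR(pool);

Pages handed out by such a pool carry the signature added earlier in
this series, which the skb free path checks before returning them to
the pool.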