From patchwork Mon Jan 18 15:13:08 2021
From: Maciej Fijalkowski
To: intel-wired-lan@lists.osuosl.org
Cc: netdev@vger.kernel.org, bpf@vger.kernel.org, anthony.l.nguyen@intel.com,
    kuba@kernel.org, bjorn.topel@intel.com, magnus.karlsson@intel.com,
    Maciej Fijalkowski
Subject: [PATCH v3 net-next 01/11] i40e: drop redundant check when setting xdp prog
Date: Mon, 18 Jan 2021 16:13:08 +0100
Message-Id: <20210118151318.12324-2-maciej.fijalkowski@intel.com>
In-Reply-To: <20210118151318.12324-1-maciej.fijalkowski@intel.com>
References: <20210118151318.12324-1-maciej.fijalkowski@intel.com>

Net core handles the case where the netdev has no XDP prog attached and
the new prog is NULL, so remove this redundant check from
i40e_xdp_setup.
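To see why the check was redundant, consider the surviving need_reset
expression. A stand-alone sketch (plain C with illustrative names, not
driver code) of its truth table:

#include <stdbool.h>
#include <stdio.h>

/* Hypothetical demo: need_reset mirrors
 * (i40e_enabled_xdp_vsi(vsi) != !!prog) from i40e_xdp_setup().
 */
static bool need_reset(bool xdp_enabled, bool prog_attached)
{
	return xdp_enabled != prog_attached;
}

int main(void)
{
	printf("off -> NULL prog: %d\n", need_reset(false, false)); /* 0 */
	printf("off -> new prog:  %d\n", need_reset(false, true));  /* 1 */
	printf("on  -> NULL prog: %d\n", need_reset(true, false));  /* 1 */
	printf("on  -> new prog:  %d\n", need_reset(true, true));   /* 0 */
	return 0;
}

The off/NULL combination is exactly what the removed early return
guarded: need_reset is already false there, and net core does not call
into the driver for it in the first place.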
Reviewed-by: Björn Töpel
Signed-off-by: Maciej Fijalkowski
---
 drivers/net/ethernet/intel/i40e/i40e_main.c | 3 ---
 1 file changed, 3 deletions(-)

diff --git a/drivers/net/ethernet/intel/i40e/i40e_main.c b/drivers/net/ethernet/intel/i40e/i40e_main.c
index 521ea9df38d5..103df35c251d 100644
--- a/drivers/net/ethernet/intel/i40e/i40e_main.c
+++ b/drivers/net/ethernet/intel/i40e/i40e_main.c
@@ -12462,9 +12462,6 @@ static int i40e_xdp_setup(struct i40e_vsi *vsi,
 	if (frame_size > vsi->rx_buf_len)
 		return -EINVAL;
 
-	if (!i40e_enabled_xdp_vsi(vsi) && !prog)
-		return 0;
-
 	/* When turning XDP on->off/off->on we reset and rebuild the rings.
 	 */
 	need_reset = (i40e_enabled_xdp_vsi(vsi) != !!prog);

From patchwork Mon Jan 18 15:13:09 2021
From: Maciej Fijalkowski
To: intel-wired-lan@lists.osuosl.org
Cc: netdev@vger.kernel.org, bpf@vger.kernel.org, anthony.l.nguyen@intel.com,
    kuba@kernel.org, bjorn.topel@intel.com, magnus.karlsson@intel.com,
    Maciej Fijalkowski
Subject: [PATCH v3 net-next 02/11] i40e: drop misleading function comments
Date: Mon, 18 Jan 2021 16:13:09 +0100
Message-Id: <20210118151318.12324-3-maciej.fijalkowski@intel.com>
In-Reply-To: <20210118151318.12324-1-maciej.fijalkowski@intel.com>
References: <20210118151318.12324-1-maciej.fijalkowski@intel.com>

The comment on i40e_cleanup_headers mentions a check against the skb
being linear, which is no longer performed, so remove that statement.
The same goes for i40e_can_reuse_rx_page, whose comment references
logic that is no longer present there.
Reviewed-by: Björn Töpel
Signed-off-by: Maciej Fijalkowski
---
 drivers/net/ethernet/intel/i40e/i40e_txrx.c | 33 ++++-----------------
 1 file changed, 6 insertions(+), 27 deletions(-)

diff --git a/drivers/net/ethernet/intel/i40e/i40e_txrx.c b/drivers/net/ethernet/intel/i40e/i40e_txrx.c
index 2574e78f7597..f8aa68f2a7fd 100644
--- a/drivers/net/ethernet/intel/i40e/i40e_txrx.c
+++ b/drivers/net/ethernet/intel/i40e/i40e_txrx.c
@@ -1809,9 +1809,6 @@ void i40e_process_skb_fields(struct i40e_ring *rx_ring,
  * @skb: pointer to current skb being fixed
  * @rx_desc: pointer to the EOP Rx descriptor
  *
- * Also address the case where we are pulling data in on pages only
- * and as such no data is present in the skb header.
- *
  * In addition if skb is not at least 60 bytes we need to pad it so that
  * it is large enough to qualify as a valid Ethernet frame.
  *
@@ -1857,33 +1854,15 @@ static inline bool i40e_page_is_reusable(struct page *page)
 }
 
 /**
- * i40e_can_reuse_rx_page - Determine if this page can be reused by
- * the adapter for another receive
- *
+ * i40e_can_reuse_rx_page - Determine if page can be reused for another Rx
  * @rx_buffer: buffer containing the page
  * @rx_buffer_pgcnt: buffer page refcount pre xdp_do_redirect() call
  *
- * If page is reusable, rx_buffer->page_offset is adjusted to point to
- * an unused region in the page.
- *
- * For small pages, @truesize will be a constant value, half the size
- * of the memory at page. We'll attempt to alternate between high and
- * low halves of the page, with one half ready for use by the hardware
- * and the other half being consumed by the stack. We use the page
- * ref count to determine whether the stack has finished consuming the
- * portion of this page that was passed up with a previous packet. If
- * the page ref count is >1, we'll assume the "other" half page is
- * still busy, and this page cannot be reused.
- *
- * For larger pages, @truesize will be the actual space used by the
- * received packet (adjusted upward to an even multiple of the cache
- * line size). This will advance through the page by the amount
- * actually consumed by the received packets while there is still
- * space for a buffer. Each region of larger pages will be used at
- * most once, after which the page will not be reused.
- *
- * In either case, if the page is reusable its refcount is increased.
- **/
+ * If page is reusable, we have a green light for calling i40e_reuse_rx_page,
+ * which will assign the current buffer to the buffer that next_to_alloc is
+ * pointing to; otherwise, the DMA mapping needs to be destroyed and
+ * page freed
+ */
 static bool i40e_can_reuse_rx_page(struct i40e_rx_buffer *rx_buffer,
 				   int rx_buffer_pgcnt)
 {

From patchwork Mon Jan 18 15:13:10 2021
From: Maciej Fijalkowski
To: intel-wired-lan@lists.osuosl.org
Cc: netdev@vger.kernel.org, bpf@vger.kernel.org, anthony.l.nguyen@intel.com,
    kuba@kernel.org, bjorn.topel@intel.com, magnus.karlsson@intel.com,
    Maciej Fijalkowski
Subject: [PATCH v3 net-next 03/11] i40e: adjust i40e_is_non_eop
Date: Mon, 18 Jan 2021 16:13:10 +0100
Message-Id: <20210118151318.12324-4-maciej.fijalkowski@intel.com>
In-Reply-To: <20210118151318.12324-1-maciej.fijalkowski@intel.com>
References: <20210118151318.12324-1-maciej.fijalkowski@intel.com>

i40e_is_non_eop had a leftover comment and an unused skb argument that
was once used to place the skb onto rx_buf when the current buffer was
a non-EOP one. This is no longer relevant since commit e72e56597ba1
("i40e/i40evf: Moves skb from i40e_rx_buffer to i40e_ring") pulled the
incomplete-skb handling out of rx_bufs up to rx_ring. Therefore,
adjust the arguments that i40e_is_non_eop takes.
Furthermore, since there is already a function responsible for bumping
the ntc, make use of it and drop that logic from i40e_is_non_eop, so
that the scope of this function is limited to what its name actually
states.
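For reference, a user-space analog of the ring-index bump that the
helper factors out, reconstructed from the lines removed below (the
in-tree i40e_inc_ntc operates on struct i40e_ring and, per the removed
lines, also prefetches the next descriptor):

#include <stdint.h>
#include <stdio.h>

struct ring {
	uint32_t next_to_clean;
	uint32_t count;
};

static void inc_ntc(struct ring *rx_ring)
{
	uint32_t ntc = rx_ring->next_to_clean + 1;

	/* wrap back to descriptor 0 once we step past the last one */
	rx_ring->next_to_clean = (ntc < rx_ring->count) ? ntc : 0;
}

int main(void)
{
	struct ring r = { .next_to_clean = 510, .count = 512 };

	for (int i = 0; i < 4; i++) {
		inc_ntc(&r);
		printf("next_to_clean = %u\n", r.next_to_clean); /* 511 0 1 2 */
	}
	return 0;
}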
Reviewed-by: Björn Töpel
Signed-off-by: Maciej Fijalkowski
---
 drivers/net/ethernet/intel/i40e/i40e_txrx.c | 23 ++++++---------------
 1 file changed, 6 insertions(+), 17 deletions(-)

diff --git a/drivers/net/ethernet/intel/i40e/i40e_txrx.c b/drivers/net/ethernet/intel/i40e/i40e_txrx.c
index f8aa68f2a7fd..7e008dbbef97 100644
--- a/drivers/net/ethernet/intel/i40e/i40e_txrx.c
+++ b/drivers/net/ethernet/intel/i40e/i40e_txrx.c
@@ -2130,25 +2130,13 @@ static void i40e_put_rx_buffer(struct i40e_ring *rx_ring,
  * i40e_is_non_eop - process handling of non-EOP buffers
  * @rx_ring: Rx ring being processed
  * @rx_desc: Rx descriptor for current buffer
- * @skb: Current socket buffer containing buffer in progress
  *
- * This function updates next to clean. If the buffer is an EOP buffer
- * this function exits returning false, otherwise it will place the
- * sk_buff in the next buffer to be chained and return true indicating
- * that this is in fact a non-EOP buffer.
- **/
+ * If the buffer is an EOP buffer, this function exits returning false,
+ * otherwise return true indicating that this is in fact a non-EOP buffer.
+ */
 static bool i40e_is_non_eop(struct i40e_ring *rx_ring,
-			    union i40e_rx_desc *rx_desc,
-			    struct sk_buff *skb)
+			    union i40e_rx_desc *rx_desc)
 {
-	u32 ntc = rx_ring->next_to_clean + 1;
-
-	/* fetch, update, and store next to clean */
-	ntc = (ntc < rx_ring->count) ? ntc : 0;
-	rx_ring->next_to_clean = ntc;
-
-	prefetch(I40E_RX_DESC(rx_ring, ntc));
-
 	/* if we are the last buffer then there is nothing else to do */
 #define I40E_RXD_EOF BIT(I40E_RX_DESC_STATUS_EOF_SHIFT)
 	if (likely(i40e_test_staterr(rx_desc, I40E_RXD_EOF)))
@@ -2427,7 +2415,8 @@ static int i40e_clean_rx_irq(struct i40e_ring *rx_ring, int budget)
 		i40e_put_rx_buffer(rx_ring, rx_buffer, rx_buffer_pgcnt);
 		cleaned_count++;
 
-		if (i40e_is_non_eop(rx_ring, rx_desc, skb))
+		i40e_inc_ntc(rx_ring);
+		if (i40e_is_non_eop(rx_ring, rx_desc))
 			continue;
 
 		if (i40e_cleanup_headers(rx_ring, skb, rx_desc)) {

From patchwork Mon Jan 18 15:13:11 2021
From: Maciej Fijalkowski
To: intel-wired-lan@lists.osuosl.org
Cc: netdev@vger.kernel.org, bpf@vger.kernel.org, anthony.l.nguyen@intel.com,
    kuba@kernel.org, bjorn.topel@intel.com, magnus.karlsson@intel.com,
    Maciej Fijalkowski
Subject: [PATCH v3 net-next 04/11] ice: simplify ice_run_xdp
Date: Mon, 18 Jan 2021 16:13:11 +0100
Message-Id: <20210118151318.12324-5-maciej.fijalkowski@intel.com>
In-Reply-To: <20210118151318.12324-1-maciej.fijalkowski@intel.com>
References: <20210118151318.12324-1-maciej.fijalkowski@intel.com>

There's no need for the 'result' variable; we can directly return the
internal status based on the action returned by the XDP prog.
Reviewed-by: Björn Töpel
Signed-off-by: Maciej Fijalkowski
---
 drivers/net/ethernet/intel/ice/ice_txrx.c | 15 +++++----------
 1 file changed, 5 insertions(+), 10 deletions(-)

diff --git a/drivers/net/ethernet/intel/ice/ice_txrx.c b/drivers/net/ethernet/intel/ice/ice_txrx.c
index 422f53997c02..dc1ad45eac8d 100644
--- a/drivers/net/ethernet/intel/ice/ice_txrx.c
+++ b/drivers/net/ethernet/intel/ice/ice_txrx.c
@@ -537,22 +537,20 @@ static int
 ice_run_xdp(struct ice_ring *rx_ring, struct xdp_buff *xdp,
 	    struct bpf_prog *xdp_prog)
 {
-	int err, result = ICE_XDP_PASS;
 	struct ice_ring *xdp_ring;
+	int err;
 	u32 act;
 
 	act = bpf_prog_run_xdp(xdp_prog, xdp);
 	switch (act) {
 	case XDP_PASS:
-		break;
+		return ICE_XDP_PASS;
 	case XDP_TX:
 		xdp_ring = rx_ring->vsi->xdp_rings[smp_processor_id()];
-		result = ice_xmit_xdp_buff(xdp, xdp_ring);
-		break;
+		return ice_xmit_xdp_buff(xdp, xdp_ring);
 	case XDP_REDIRECT:
 		err = xdp_do_redirect(rx_ring->netdev, xdp, xdp_prog);
-		result = !err ? ICE_XDP_REDIR : ICE_XDP_CONSUMED;
-		break;
+		return !err ? ICE_XDP_REDIR : ICE_XDP_CONSUMED;
 	default:
 		bpf_warn_invalid_xdp_action(act);
 		fallthrough;
@@ -560,11 +558,8 @@ ice_run_xdp(struct ice_ring *rx_ring, struct xdp_buff *xdp,
 		trace_xdp_exception(rx_ring->netdev, xdp_prog, act);
 		fallthrough;
 	case XDP_DROP:
-		result = ICE_XDP_CONSUMED;
-		break;
+		return ICE_XDP_CONSUMED;
 	}
-
-	return result;
 }

From patchwork Mon Jan 18 15:13:12 2021
From: Maciej Fijalkowski
To: intel-wired-lan@lists.osuosl.org
Cc: netdev@vger.kernel.org, bpf@vger.kernel.org, anthony.l.nguyen@intel.com,
    kuba@kernel.org, bjorn.topel@intel.com, magnus.karlsson@intel.com,
    Maciej Fijalkowski
Subject: [PATCH v3 net-next 05/11] ice: move skb pointer from rx_buf to rx_ring
Date: Mon, 18 Jan 2021 16:13:12 +0100
Message-Id: <20210118151318.12324-6-maciej.fijalkowski@intel.com>
In-Reply-To: <20210118151318.12324-1-maciej.fijalkowski@intel.com>
References: <20210118151318.12324-1-maciej.fijalkowski@intel.com>

A similar thing has been done in i40e, as there is no real need to
keep an sk_buff pointer in each rx_buf. Non-EOP frames can simply be
handled through that pointer, moved up to the rx_ring.
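A minimal compile-only sketch of the resulting pattern, with
hypothetical names (clean_rx_irq, build_or_extend, etc. stand in for
the real ice functions): a half-built frame has to survive between
napi polls, and since at most one frame per ring can be in progress,
one pointer on the ring is enough.

#include <stddef.h>

struct sk_buff;                       /* opaque stub for the sketch */

struct ring {
	struct sk_buff *skb;          /* in-progress frame, NULL when none */
};

/* stubs standing in for the real descriptor/skb handling */
static struct sk_buff *build_or_extend(struct sk_buff *skb) { return skb; }
static int is_non_eop(void) { return 0; }
static void receive_skb(struct sk_buff *skb) { (void)skb; }

static void clean_rx_irq(struct ring *rx_ring, int budget)
{
	struct sk_buff *skb = rx_ring->skb;   /* resume a partial frame */

	while (budget--) {
		skb = build_or_extend(skb);
		if (is_non_eop())
			continue;             /* frame not complete yet */
		receive_skb(skb);
		skb = NULL;                   /* handed to the stack */
	}
	rx_ring->skb = skb;  /* park any unfinished frame for the next poll */
}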
Reviewed-by: Björn Töpel
Signed-off-by: Maciej Fijalkowski
Tested-by: Tony Brelinski (A Contingent Worker at Intel)
---
 drivers/net/ethernet/intel/ice/ice_txrx.c | 30 ++++++++++-------------
 drivers/net/ethernet/intel/ice/ice_txrx.h |  2 +-
 2 files changed, 14 insertions(+), 18 deletions(-)

diff --git a/drivers/net/ethernet/intel/ice/ice_txrx.c b/drivers/net/ethernet/intel/ice/ice_txrx.c
index dc1ad45eac8d..50fbb77bab70 100644
--- a/drivers/net/ethernet/intel/ice/ice_txrx.c
+++ b/drivers/net/ethernet/intel/ice/ice_txrx.c
@@ -375,6 +375,11 @@ void ice_clean_rx_ring(struct ice_ring *rx_ring)
 	if (!rx_ring->rx_buf)
 		return;
 
+	if (rx_ring->skb) {
+		dev_kfree_skb(rx_ring->skb);
+		rx_ring->skb = NULL;
+	}
+
 	if (rx_ring->xsk_pool) {
 		ice_xsk_clean_rx_ring(rx_ring);
 		goto rx_skip_free;
@@ -384,10 +389,6 @@ void ice_clean_rx_ring(struct ice_ring *rx_ring)
 	for (i = 0; i < rx_ring->count; i++) {
 		struct ice_rx_buf *rx_buf = &rx_ring->rx_buf[i];
 
-		if (rx_buf->skb) {
-			dev_kfree_skb(rx_buf->skb);
-			rx_buf->skb = NULL;
-		}
 		if (!rx_buf->page)
 			continue;
 
@@ -859,7 +860,6 @@ ice_reuse_rx_page(struct ice_ring *rx_ring, struct ice_rx_buf *old_buf)
 /**
  * ice_get_rx_buf - Fetch Rx buffer and synchronize data for use
  * @rx_ring: Rx descriptor ring to transact packets on
- * @skb: skb to be used
  * @size: size of buffer to add to skb
  * @rx_buf_pgcnt: rx_buf page refcount
  *
@@ -867,8 +867,8 @@ ice_reuse_rx_page(struct ice_ring *rx_ring, struct ice_rx_buf *old_buf)
  * for use by the CPU.
  */
 static struct ice_rx_buf *
-ice_get_rx_buf(struct ice_ring *rx_ring, struct sk_buff **skb,
-	       const unsigned int size, int *rx_buf_pgcnt)
+ice_get_rx_buf(struct ice_ring *rx_ring, const unsigned int size,
+	       int *rx_buf_pgcnt)
 {
 	struct ice_rx_buf *rx_buf;
 
@@ -880,7 +880,6 @@ ice_get_rx_buf(struct ice_ring *rx_ring, struct sk_buff **skb,
 		0;
 #endif
 	prefetchw(rx_buf->page);
-	*skb = rx_buf->skb;
 
 	if (!size)
 		return rx_buf;
@@ -1042,29 +1041,24 @@ ice_put_rx_buf(struct ice_ring *rx_ring, struct ice_rx_buf *rx_buf,
 
 	/* clear contents of buffer_info */
 	rx_buf->page = NULL;
-	rx_buf->skb = NULL;
 }
 
 /**
  * ice_is_non_eop - process handling of non-EOP buffers
  * @rx_ring: Rx ring being processed
  * @rx_desc: Rx descriptor for current buffer
- * @skb: Current socket buffer containing buffer in progress
  *
  * If the buffer is an EOP buffer, this function exits returning false,
  * otherwise return true indicating that this is in fact a non-EOP buffer.
  */
 static bool
-ice_is_non_eop(struct ice_ring *rx_ring, union ice_32b_rx_flex_desc *rx_desc,
-	       struct sk_buff *skb)
+ice_is_non_eop(struct ice_ring *rx_ring, union ice_32b_rx_flex_desc *rx_desc)
 {
 	/* if we are the last buffer then there is nothing else to do */
 #define ICE_RXD_EOF BIT(ICE_RX_FLEX_DESC_STATUS0_EOF_S)
 	if (likely(ice_test_staterr(rx_desc, ICE_RXD_EOF)))
 		return false;
 
-	/* place skb in next buffer to be received */
-	rx_ring->rx_buf[rx_ring->next_to_clean].skb = skb;
 	rx_ring->rx_stats.non_eop_descs++;
 
 	return true;
@@ -1087,6 +1081,7 @@ int ice_clean_rx_irq(struct ice_ring *rx_ring, int budget)
 	unsigned int total_rx_bytes = 0, total_rx_pkts = 0, frame_sz = 0;
 	u16 cleaned_count = ICE_DESC_UNUSED(rx_ring);
 	unsigned int xdp_res, xdp_xmit = 0;
+	struct sk_buff *skb = rx_ring->skb;
 	struct bpf_prog *xdp_prog = NULL;
 	struct xdp_buff xdp;
 	bool failure;
@@ -1103,7 +1098,6 @@ int ice_clean_rx_irq(struct ice_ring *rx_ring, int budget)
 		union ice_32b_rx_flex_desc *rx_desc;
 		struct ice_rx_buf *rx_buf;
 		unsigned char *hard_start;
-		struct sk_buff *skb;
 		unsigned int size;
 		u16 stat_err_bits;
 		int rx_buf_pgcnt;
@@ -1138,7 +1132,7 @@ int ice_clean_rx_irq(struct ice_ring *rx_ring, int budget)
 			ICE_RX_FLX_DESC_PKT_LEN_M;
 
 		/* retrieve a buffer from the ring */
-		rx_buf = ice_get_rx_buf(rx_ring, &skb, size, &rx_buf_pgcnt);
+		rx_buf = ice_get_rx_buf(rx_ring, size, &rx_buf_pgcnt);
 
 		if (!size) {
 			xdp.data = NULL;
@@ -1200,7 +1194,7 @@ int ice_clean_rx_irq(struct ice_ring *rx_ring, int budget)
 		cleaned_count++;
 
 		/* skip if it is NOP desc */
-		if (ice_is_non_eop(rx_ring, rx_desc, skb))
+		if (ice_is_non_eop(rx_ring, rx_desc))
 			continue;
 
 		stat_err_bits = BIT(ICE_RX_FLEX_DESC_STATUS0_RXE_S);
@@ -1230,6 +1224,7 @@ int ice_clean_rx_irq(struct ice_ring *rx_ring, int budget)
 
 		/* send completed skb up the stack */
 		ice_receive_skb(rx_ring, skb, vlan_tag);
+		skb = NULL;
 
 		/* update budget accounting */
 		total_rx_pkts++;
@@ -1240,6 +1235,7 @@ int ice_clean_rx_irq(struct ice_ring *rx_ring, int budget)
 	if (xdp_prog)
 		ice_finalize_xdp_rx(rx_ring, xdp_xmit);
+	rx_ring->skb = skb;
 
 	ice_update_rx_ring_stats(rx_ring, total_rx_pkts, total_rx_bytes);
 
diff --git a/drivers/net/ethernet/intel/ice/ice_txrx.h b/drivers/net/ethernet/intel/ice/ice_txrx.h
index ff1a1cbd078e..c77dbbb760cd 100644
--- a/drivers/net/ethernet/intel/ice/ice_txrx.h
+++ b/drivers/net/ethernet/intel/ice/ice_txrx.h
@@ -165,7 +165,6 @@ struct ice_tx_offload_params {
 struct ice_rx_buf {
 	union {
 		struct {
-			struct sk_buff *skb;
 			dma_addr_t dma;
 			struct page *page;
 			unsigned int page_offset;
@@ -298,6 +297,7 @@ struct ice_ring {
 	struct xsk_buff_pool *xsk_pool;
 	/* CL3 - 3rd cacheline starts here */
 	struct xdp_rxq_info xdp_rxq;
+	struct sk_buff *skb;
 	/* CLX - the below items are only accessed infrequently and should be
 	 * in their own cache line if possible
 	 */

From patchwork Mon Jan 18 15:13:13 2021
From: Maciej Fijalkowski
To: intel-wired-lan@lists.osuosl.org
Cc: netdev@vger.kernel.org, bpf@vger.kernel.org, anthony.l.nguyen@intel.com,
    kuba@kernel.org, bjorn.topel@intel.com, magnus.karlsson@intel.com,
    Maciej Fijalkowski
Subject: [PATCH v3 net-next 06/11] ice: remove redundant checks in ice_change_mtu
Date: Mon, 18 Jan 2021 16:13:13 +0100
Message-Id: <20210118151318.12324-7-maciej.fijalkowski@intel.com>
In-Reply-To: <20210118151318.12324-1-maciej.fijalkowski@intel.com>
References: <20210118151318.12324-1-maciej.fijalkowski@intel.com>

dev_validate_mtu checks that the MTU value specified by the user is
not less than the minimum MTU and not greater than the maximum allowed
MTU. This is done before calling the driver's ndo_change_mtu callback,
so remove these redundant checks from ice_change_mtu.
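For context, a paraphrased sketch of the range check that the core
performs; the actual dev_validate_mtu() in net/core/dev.c also reports
the error via netlink extack, and its exact body may differ from this:

#include <errno.h>

/* paraphrased, illustrative version of the core-side validation;
 * shown only to justify dropping the driver's own copy
 */
static int validate_mtu_range(int new_mtu, unsigned int min_mtu,
			      unsigned int max_mtu)
{
	if (new_mtu < 0 || (unsigned int)new_mtu < min_mtu)
		return -EINVAL;	/* below device minimum */
	if (max_mtu > 0 && (unsigned int)new_mtu > max_mtu)
		return -EINVAL;	/* above device maximum */
	return 0;
}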
Reviewed-by: Björn Töpel
Signed-off-by: Maciej Fijalkowski
Tested-by: Tony Brelinski (A Contingent Worker at Intel)
---
 drivers/net/ethernet/intel/ice/ice_main.c | 9 ---------
 1 file changed, 9 deletions(-)

diff --git a/drivers/net/ethernet/intel/ice/ice_main.c b/drivers/net/ethernet/intel/ice/ice_main.c
index 6e251dfffc91..868d4c224eaf 100644
--- a/drivers/net/ethernet/intel/ice/ice_main.c
+++ b/drivers/net/ethernet/intel/ice/ice_main.c
@@ -6123,15 +6123,6 @@ static int ice_change_mtu(struct net_device *netdev, int new_mtu)
 		}
 	}
 
-	if (new_mtu < (int)netdev->min_mtu) {
-		netdev_err(netdev, "new MTU invalid. min_mtu is %d\n",
-			   netdev->min_mtu);
-		return -EINVAL;
-	} else if (new_mtu > (int)netdev->max_mtu) {
-		netdev_err(netdev, "new MTU invalid. max_mtu is %d\n",
-			   netdev->min_mtu);
-		return -EINVAL;
-	}
 	/* if a reset is in progress, wait for some time for it to complete */
 	do {
 		if (ice_is_reset_in_progress(pf->state)) {
From patchwork Mon Jan 18 15:13:14 2021
From: Maciej Fijalkowski
To: intel-wired-lan@lists.osuosl.org
Cc: netdev@vger.kernel.org, bpf@vger.kernel.org, anthony.l.nguyen@intel.com,
    kuba@kernel.org, bjorn.topel@intel.com, magnus.karlsson@intel.com,
    Maciej Fijalkowski
Subject: [PATCH v3 net-next 07/11] ice: skip NULL check against XDP prog in ZC path
Date: Mon, 18 Jan 2021 16:13:14 +0100
Message-Id: <20210118151318.12324-8-maciej.fijalkowski@intel.com>
In-Reply-To: <20210118151318.12324-1-maciej.fijalkowski@intel.com>
References: <20210118151318.12324-1-maciej.fijalkowski@intel.com>

The whole zero-copy variant of clean Rx IRQ is executed only when an
xsk_pool is attached to the rx_ring, which can happen only when an XDP
program is present on the interface. Therefore it is safe to assume
that the program is always non-NULL, and there is no need to check it
in ice_run_xdp_zc.
Reviewed-by: Björn Töpel
Signed-off-by: Maciej Fijalkowski
---
 drivers/net/ethernet/intel/ice/ice_xsk.c | 7 +++----
 1 file changed, 3 insertions(+), 4 deletions(-)

diff --git a/drivers/net/ethernet/intel/ice/ice_xsk.c b/drivers/net/ethernet/intel/ice/ice_xsk.c
index 1782146db644..b926ca9c4f4f 100644
--- a/drivers/net/ethernet/intel/ice/ice_xsk.c
+++ b/drivers/net/ethernet/intel/ice/ice_xsk.c
@@ -517,11 +517,10 @@ ice_run_xdp_zc(struct ice_ring *rx_ring, struct xdp_buff *xdp)
 	u32 act;
 
 	rcu_read_lock();
+	/* ZC path is enabled only when XDP program is set,
+	 * so here it can not be NULL
+	 */
 	xdp_prog = READ_ONCE(rx_ring->xdp_prog);
-	if (!xdp_prog) {
-		rcu_read_unlock();
-		return ICE_XDP_PASS;
-	}
 
 	act = bpf_prog_run_xdp(xdp_prog, xdp);
 	switch (act) {

From patchwork Mon Jan 18 15:13:15 2021
From: Maciej Fijalkowski
To: intel-wired-lan@lists.osuosl.org
Cc: netdev@vger.kernel.org, bpf@vger.kernel.org, anthony.l.nguyen@intel.com,
    kuba@kernel.org, bjorn.topel@intel.com, magnus.karlsson@intel.com,
    Maciej Fijalkowski
Subject: [PATCH v3 net-next 08/11] i40e, xsk: Simplify the do-while allocation loop
Date: Mon, 18 Jan 2021 16:13:15 +0100
Message-Id: <20210118151318.12324-9-maciej.fijalkowski@intel.com>
In-Reply-To: <20210118151318.12324-1-maciej.fijalkowski@intel.com>
References: <20210118151318.12324-1-maciej.fijalkowski@intel.com>

From: Björn Töpel

Fold the count decrement into the while-statement.
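A stand-alone demo of the fold: for any initial count >= 1 the two
loop forms execute the body the same number of times. (As before the
fold, the loop must not be entered with count == 0, since a do-while
tests only after the first pass and the u16 would wrap either way.)

#include <stdio.h>

int main(void)
{
	unsigned short count = 3, a = 0, b = 0, c;

	c = count;
	do {		/* original form */
		a++;
		c--;
	} while (c);

	c = count;
	do {		/* folded form */
		b++;
	} while (--c);

	printf("a = %u, b = %u\n", a, b);	/* a = 3, b = 3 */
	return 0;
}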
Reviewed-by: Maciej Fijalkowski
Signed-off-by: Björn Töpel
---
 drivers/net/ethernet/intel/i40e/i40e_xsk.c | 4 +---
 1 file changed, 1 insertion(+), 3 deletions(-)

diff --git a/drivers/net/ethernet/intel/i40e/i40e_xsk.c b/drivers/net/ethernet/intel/i40e/i40e_xsk.c
index 47eb9c584a12..457ce365a007 100644
--- a/drivers/net/ethernet/intel/i40e/i40e_xsk.c
+++ b/drivers/net/ethernet/intel/i40e/i40e_xsk.c
@@ -215,9 +215,7 @@ bool i40e_alloc_rx_buffers_zc(struct i40e_ring *rx_ring, u16 count)
 			bi = i40e_rx_bi(rx_ring, 0);
 			ntu = 0;
 		}
-
-		count--;
-	} while (count);
+	} while (--count);
 
 no_buffers:
 	if (rx_ring->next_to_use != ntu) {

From patchwork Mon Jan 18 15:13:16 2021
From: Maciej Fijalkowski
To: intel-wired-lan@lists.osuosl.org
Cc: netdev@vger.kernel.org, bpf@vger.kernel.org, anthony.l.nguyen@intel.com,
    kuba@kernel.org, bjorn.topel@intel.com, magnus.karlsson@intel.com,
    Maciej Fijalkowski
Subject: [PATCH v3 net-next 09/11] i40e: store the result of i40e_rx_offset() onto i40e_ring
Date: Mon, 18 Jan 2021 16:13:16 +0100
Message-Id: <20210118151318.12324-10-maciej.fijalkowski@intel.com>
In-Reply-To: <20210118151318.12324-1-maciej.fijalkowski@intel.com>
References: <20210118151318.12324-1-maciej.fijalkowski@intel.com>

The output of i40e_rx_offset() is based on an ethtool priv flag
setting which, when changed, causes a PF reset (disables napi, frees
irqs, loads a different Rx memory model, etc.).
This means that within a napi poll its result is constant, so there is
no reason to call it for each processed frame. Add a new 'rx_offset'
field to i40e_ring that holds the i40e_rx_offset() result and use it
within i40e_clean_rx_irq(). Furthermore, use it within
i40e_alloc_mapped_page(). Last but not least, un-inline the function
of interest; it lives in a .c file, so let the compiler make the
decision about inlining.
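A stand-alone sketch of the pattern applied here (and in the two
follow-up patches for ice and ixgbe), with illustrative names, not the
driver's: a value that is constant for the lifetime of the ring
configuration is computed once at setup and cached on the ring,
instead of being recomputed per frame in the hot loop.

#include <stdbool.h>
#include <stdio.h>

#define SKB_PAD 192	/* stand-in for I40E_SKB_PAD */

struct ring {
	bool build_skb;			/* set by the ethtool priv flag */
	unsigned int rx_offset;		/* cached at ring setup */
};

static unsigned int rx_offset(const struct ring *r)
{
	return r->build_skb ? SKB_PAD : 0;
}

static void setup_ring(struct ring *r, bool build_skb)
{
	r->build_skb = build_skb;
	r->rx_offset = rx_offset(r);	/* hoisted out of the per-frame path */
}

int main(void)
{
	struct ring r;

	setup_ring(&r, true);
	for (int frame = 0; frame < 3; frame++)
		printf("offset = %u\n", r.rx_offset);	/* no call per frame */
	return 0;
}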
Reviewed-by: Björn Töpel
Signed-off-by: Maciej Fijalkowski
---
 drivers/net/ethernet/intel/i40e/i40e_txrx.c | 35 +++++++++++----------
 drivers/net/ethernet/intel/i40e/i40e_txrx.h |  1 +
 2 files changed, 19 insertions(+), 17 deletions(-)

diff --git a/drivers/net/ethernet/intel/i40e/i40e_txrx.c b/drivers/net/ethernet/intel/i40e/i40e_txrx.c
index 7e008dbbef97..3a6be7fa9817 100644
--- a/drivers/net/ethernet/intel/i40e/i40e_txrx.c
+++ b/drivers/net/ethernet/intel/i40e/i40e_txrx.c
@@ -1415,6 +1415,17 @@ void i40e_free_rx_resources(struct i40e_ring *rx_ring)
 	}
 }
 
+/**
+ * i40e_rx_offset - Return expected offset into page to access data
+ * @rx_ring: Ring we are requesting offset of
+ *
+ * Returns the offset value for ring into the data buffer.
+ */
+static unsigned int i40e_rx_offset(struct i40e_ring *rx_ring)
+{
+	return ring_uses_build_skb(rx_ring) ? I40E_SKB_PAD : 0;
+}
+
 /**
  * i40e_setup_rx_descriptors - Allocate Rx descriptors
  * @rx_ring: Rx descriptor ring (for a specific queue) to setup
@@ -1443,6 +1454,7 @@ int i40e_setup_rx_descriptors(struct i40e_ring *rx_ring)
 	rx_ring->next_to_alloc = 0;
 	rx_ring->next_to_clean = 0;
 	rx_ring->next_to_use = 0;
+	rx_ring->rx_offset = i40e_rx_offset(rx_ring);
 
 	/* XDP RX-queue info only needed for RX rings exposed to XDP */
 	if (rx_ring->vsi->type == I40E_VSI_MAIN) {
@@ -1478,17 +1490,6 @@ void i40e_release_rx_desc(struct i40e_ring *rx_ring, u32 val)
 	writel(val, rx_ring->tail);
 }
 
-/**
- * i40e_rx_offset - Return expected offset into page to access data
- * @rx_ring: Ring we are requesting offset of
- *
- * Returns the offset value for ring into the data buffer.
- */
-static inline unsigned int i40e_rx_offset(struct i40e_ring *rx_ring)
-{
-	return ring_uses_build_skb(rx_ring) ? I40E_SKB_PAD : 0;
-}
-
 static unsigned int i40e_rx_frame_truesize(struct i40e_ring *rx_ring,
 					   unsigned int size)
 {
@@ -1497,8 +1498,8 @@ static unsigned int i40e_rx_frame_truesize(struct i40e_ring *rx_ring,
 #if (PAGE_SIZE < 8192)
 	truesize = i40e_rx_pg_size(rx_ring) / 2; /* Must be power-of-2 */
 #else
-	truesize = i40e_rx_offset(rx_ring) ?
-		SKB_DATA_ALIGN(size + i40e_rx_offset(rx_ring)) +
+	truesize = rx_ring->rx_offset ?
+		SKB_DATA_ALIGN(size + rx_ring->rx_offset) +
 		SKB_DATA_ALIGN(sizeof(struct skb_shared_info)) :
 		SKB_DATA_ALIGN(size);
 #endif
@@ -1549,7 +1550,7 @@ static bool i40e_alloc_mapped_page(struct i40e_ring *rx_ring,
 	bi->dma = dma;
 	bi->page = page;
-	bi->page_offset = i40e_rx_offset(rx_ring);
+	bi->page_offset = rx_ring->rx_offset;
 	page_ref_add(page, USHRT_MAX - 1);
 	bi->pagecnt_bias = USHRT_MAX;
 
@@ -1916,7 +1917,7 @@ static void i40e_add_rx_frag(struct i40e_ring *rx_ring,
 #if (PAGE_SIZE < 8192)
 	unsigned int truesize = i40e_rx_pg_size(rx_ring) / 2;
 #else
-	unsigned int truesize = SKB_DATA_ALIGN(size + i40e_rx_offset(rx_ring));
+	unsigned int truesize = SKB_DATA_ALIGN(size + rx_ring->rx_offset);
 #endif
 
 	skb_add_rx_frag(skb, skb_shinfo(skb)->nr_frags, rx_buffer->page,
@@ -2312,8 +2313,9 @@ static void i40e_inc_ntc(struct i40e_ring *rx_ring)
 static int i40e_clean_rx_irq(struct i40e_ring *rx_ring, int budget)
 {
 	unsigned int total_rx_bytes = 0, total_rx_packets = 0, frame_sz = 0;
-	struct sk_buff *skb = rx_ring->skb;
 	u16 cleaned_count = I40E_DESC_UNUSED(rx_ring);
+	unsigned int offset = rx_ring->rx_offset;
+	struct sk_buff *skb = rx_ring->skb;
 	unsigned int xdp_xmit = 0;
 	bool failure = false;
 	struct xdp_buff xdp;
@@ -2373,7 +2375,6 @@ static int i40e_clean_rx_irq(struct i40e_ring *rx_ring, int budget)
 
 		/* retrieve a buffer from the ring */
 		if (!skb) {
-			unsigned int offset = i40e_rx_offset(rx_ring);
 			unsigned char *hard_start;
 
 			hard_start = page_address(rx_buffer->page) +
diff --git a/drivers/net/ethernet/intel/i40e/i40e_txrx.h b/drivers/net/ethernet/intel/i40e/i40e_txrx.h
index 5f531b195959..86fed05b4f19 100644
--- a/drivers/net/ethernet/intel/i40e/i40e_txrx.h
+++ b/drivers/net/ethernet/intel/i40e/i40e_txrx.h
@@ -387,6 +387,7 @@ struct i40e_ring {
 	 */
 	struct i40e_channel *ch;
+	u16 rx_offset;
 	struct xdp_rxq_info xdp_rxq;
 	struct xsk_buff_pool *xsk_pool;
 	struct xdp_desc *xsk_descs;      /* For storing descriptors in the AF_XDP ZC path */
From patchwork Mon Jan 18 15:13:17 2021
From: Maciej Fijalkowski
To: intel-wired-lan@lists.osuosl.org
Cc: netdev@vger.kernel.org, bpf@vger.kernel.org, anthony.l.nguyen@intel.com,
    kuba@kernel.org, bjorn.topel@intel.com, magnus.karlsson@intel.com,
    Maciej Fijalkowski
Subject: [PATCH v3 net-next 10/11] ice: store the result of ice_rx_offset() onto ice_ring
Date: Mon, 18 Jan 2021 16:13:17 +0100
Message-Id: <20210118151318.12324-11-maciej.fijalkowski@intel.com>
In-Reply-To: <20210118151318.12324-1-maciej.fijalkowski@intel.com>
References: <20210118151318.12324-1-maciej.fijalkowski@intel.com>

The output of ice_rx_offset() is based on an ethtool priv flag setting
which, when changed, causes a PF reset (disables napi, frees irqs,
loads a different Rx memory model, etc.). This means that within a
napi poll its result is constant, so there is no reason to call it for
each processed frame. Add a new 'rx_offset' field to ice_ring that
holds the ice_rx_offset() result and use it within ice_clean_rx_irq().
Furthermore, use it within ice_alloc_mapped_page().

Reviewed-by: Björn Töpel
Signed-off-by: Maciej Fijalkowski
Tested-by: Tony Brelinski (A Contingent Worker at Intel)
---
 drivers/net/ethernet/intel/ice/ice_txrx.c | 43 ++++++++++++-----------
 drivers/net/ethernet/intel/ice/ice_txrx.h |  1 +
 2 files changed, 23 insertions(+), 21 deletions(-)

diff --git a/drivers/net/ethernet/intel/ice/ice_txrx.c b/drivers/net/ethernet/intel/ice/ice_txrx.c
index 50fbb77bab70..fdd0d8b326ff 100644
--- a/drivers/net/ethernet/intel/ice/ice_txrx.c
+++ b/drivers/net/ethernet/intel/ice/ice_txrx.c
@@ -443,6 +443,22 @@ void ice_free_rx_ring(struct ice_ring *rx_ring)
 	}
 }
 
+/**
+ * ice_rx_offset - Return expected offset into page to access data
+ * @rx_ring: Ring we are requesting offset of
+ *
+ * Returns the offset value for ring into the data buffer.
+ */
+static unsigned int ice_rx_offset(struct ice_ring *rx_ring)
+{
+	if (ice_ring_uses_build_skb(rx_ring))
+		return ICE_SKB_PAD;
+	else if (ice_is_xdp_ena_vsi(rx_ring->vsi))
+		return XDP_PACKET_HEADROOM;
+
+	return 0;
+}
+
 /**
  * ice_setup_rx_ring - Allocate the Rx descriptors
  * @rx_ring: the Rx ring to set up
@@ -477,6 +493,7 @@ int ice_setup_rx_ring(struct ice_ring *rx_ring)
 
 	rx_ring->next_to_use = 0;
 	rx_ring->next_to_clean = 0;
+	rx_ring->rx_offset = ice_rx_offset(rx_ring);
 
 	if (ice_is_xdp_ena_vsi(rx_ring->vsi))
 		WRITE_ONCE(rx_ring->xdp_prog, rx_ring->vsi->xdp_prog);
@@ -494,22 +511,6 @@ int ice_setup_rx_ring(struct ice_ring *rx_ring)
 	return -ENOMEM;
 }
 
-/**
- * ice_rx_offset - Return expected offset into page to access data
- * @rx_ring: Ring we are requesting offset of
- *
- * Returns the offset value for ring into the data buffer.
- */
-static unsigned int ice_rx_offset(struct ice_ring *rx_ring)
-{
-	if (ice_ring_uses_build_skb(rx_ring))
-		return ICE_SKB_PAD;
-	else if (ice_is_xdp_ena_vsi(rx_ring->vsi))
-		return XDP_PACKET_HEADROOM;
-
-	return 0;
-}
-
 static unsigned int
 ice_rx_frame_truesize(struct ice_ring *rx_ring, unsigned int __maybe_unused size)
 {
@@ -518,8 +519,8 @@ ice_rx_frame_truesize(struct ice_ring *rx_ring, unsigned int __maybe_unused size)
 #if (PAGE_SIZE < 8192)
 	truesize = ice_rx_pg_size(rx_ring) / 2; /* Must be power-of-2 */
 #else
-	truesize = ice_rx_offset(rx_ring) ?
-		SKB_DATA_ALIGN(ice_rx_offset(rx_ring) + size) +
+	truesize = rx_ring->rx_offset ?
+		SKB_DATA_ALIGN(rx_ring->rx_offset + size) +
 		SKB_DATA_ALIGN(sizeof(struct skb_shared_info)) :
 		SKB_DATA_ALIGN(size);
 #endif
@@ -652,7 +653,7 @@ ice_alloc_mapped_page(struct ice_ring *rx_ring, struct ice_rx_buf *bi)
 	bi->dma = dma;
 	bi->page = page;
-	bi->page_offset = ice_rx_offset(rx_ring);
+	bi->page_offset = rx_ring->rx_offset;
 	page_ref_add(page, USHRT_MAX - 1);
 	bi->pagecnt_bias = USHRT_MAX;
 
@@ -814,7 +815,7 @@ ice_add_rx_frag(struct ice_ring *rx_ring, struct ice_rx_buf *rx_buf,
 		struct sk_buff *skb, unsigned int size)
 {
 #if (PAGE_SIZE >= 8192)
-	unsigned int truesize = SKB_DATA_ALIGN(size + ice_rx_offset(rx_ring));
+	unsigned int truesize = SKB_DATA_ALIGN(size + rx_ring->rx_offset);
 #else
 	unsigned int truesize = ice_rx_pg_size(rx_ring) / 2;
 #endif
@@ -1080,6 +1081,7 @@ int ice_clean_rx_irq(struct ice_ring *rx_ring, int budget)
 {
 	unsigned int total_rx_bytes = 0, total_rx_pkts = 0, frame_sz = 0;
 	u16 cleaned_count = ICE_DESC_UNUSED(rx_ring);
+	unsigned int offset = rx_ring->rx_offset;
 	unsigned int xdp_res, xdp_xmit = 0;
 	struct sk_buff *skb = rx_ring->skb;
 	struct bpf_prog *xdp_prog = NULL;
@@ -1094,7 +1096,6 @@ int ice_clean_rx_irq(struct ice_ring *rx_ring, int budget)
 
 	/* start the loop to process Rx packets bounded by 'budget' */
 	while (likely(total_rx_pkts < (unsigned int)budget)) {
-		unsigned int offset = ice_rx_offset(rx_ring);
 		union ice_32b_rx_flex_desc *rx_desc;
 		struct ice_rx_buf *rx_buf;
 		unsigned char *hard_start;
diff --git a/drivers/net/ethernet/intel/ice/ice_txrx.h b/drivers/net/ethernet/intel/ice/ice_txrx.h
index c77dbbb760cd..d5f609056ab9 100644
--- a/drivers/net/ethernet/intel/ice/ice_txrx.h
+++ b/drivers/net/ethernet/intel/ice/ice_txrx.h
@@ -295,6 +295,7 @@ struct ice_ring {
 	struct rcu_head rcu;		/* to avoid race on free */
 	struct bpf_prog *xdp_prog;
 	struct xsk_buff_pool *xsk_pool;
+	u16 rx_offset;
 	/* CL3 - 3rd cacheline starts here */
 	struct xdp_rxq_info xdp_rxq;
 	struct sk_buff *skb;

From patchwork Mon Jan 18 15:13:18 2021
From: Maciej Fijalkowski
To: intel-wired-lan@lists.osuosl.org
Cc: netdev@vger.kernel.org, bpf@vger.kernel.org, anthony.l.nguyen@intel.com,
    kuba@kernel.org, bjorn.topel@intel.com, magnus.karlsson@intel.com,
    Maciej Fijalkowski
Subject: [PATCH v3 net-next 11/11] ixgbe: store the result of ixgbe_rx_offset() onto ixgbe_ring
Date: Mon, 18 Jan 2021 16:13:18 +0100
Message-Id: <20210118151318.12324-12-maciej.fijalkowski@intel.com>
In-Reply-To: <20210118151318.12324-1-maciej.fijalkowski@intel.com>
References: <20210118151318.12324-1-maciej.fijalkowski@intel.com>

The output of ixgbe_rx_offset() is based on an ethtool priv flag
setting which, when changed, causes a PF reset (disables napi, frees
irqs, loads a different Rx memory model, etc.). This means that within
a napi poll its result is constant, so there is no reason to call it
for each processed frame. Add a new 'rx_offset' field to ixgbe_ring
that holds the ixgbe_rx_offset() result and use it within
ixgbe_clean_rx_irq(). Furthermore, use it within
ixgbe_alloc_mapped_page(). Last but not least, un-inline the function
of interest; it lives in a .c file, so let the compiler make the
decision about inlining.
Reviewed-by: Björn Töpel
Signed-off-by: Maciej Fijalkowski
---
 drivers/net/ethernet/intel/ixgbe/ixgbe.h      |  1 +
 drivers/net/ethernet/intel/ixgbe/ixgbe_main.c | 15 ++++++++-------
 2 files changed, 9 insertions(+), 7 deletions(-)

diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe.h b/drivers/net/ethernet/intel/ixgbe/ixgbe.h
index de0fc6ecf491..a604552fa634 100644
--- a/drivers/net/ethernet/intel/ixgbe/ixgbe.h
+++ b/drivers/net/ethernet/intel/ixgbe/ixgbe.h
@@ -349,6 +349,7 @@ struct ixgbe_ring {
 		struct ixgbe_tx_queue_stats tx_stats;
 		struct ixgbe_rx_queue_stats rx_stats;
 	};
+	u16 rx_offset;
 	struct xdp_rxq_info xdp_rxq;
 	struct xsk_buff_pool *xsk_pool;
 	u16 ring_idx;		/* {rx,tx,xdp}_ring back reference idx */
diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c b/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c
index 56dca73d158e..1f518cfb1dca 100644
--- a/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c
+++ b/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c
@@ -1520,7 +1520,7 @@ static inline void ixgbe_rx_checksum(struct ixgbe_ring *ring,
 	}
 }
 
-static inline unsigned int ixgbe_rx_offset(struct ixgbe_ring *rx_ring)
+static unsigned int ixgbe_rx_offset(struct ixgbe_ring *rx_ring)
 {
 	return ring_uses_build_skb(rx_ring) ? IXGBE_SKB_PAD : 0;
 }
 
@@ -1561,7 +1561,7 @@ static bool ixgbe_alloc_mapped_page(struct ixgbe_ring *rx_ring,
 	bi->dma = dma;
 	bi->page = page;
-	bi->page_offset = ixgbe_rx_offset(rx_ring);
+	bi->page_offset = rx_ring->rx_offset;
 	page_ref_add(page, USHRT_MAX - 1);
 	bi->pagecnt_bias = USHRT_MAX;
 	rx_ring->rx_stats.alloc_rx_page++;
@@ -2006,8 +2006,8 @@ static void ixgbe_add_rx_frag(struct ixgbe_ring *rx_ring,
 #if (PAGE_SIZE < 8192)
 	unsigned int truesize = ixgbe_rx_pg_size(rx_ring) / 2;
 #else
-	unsigned int truesize = ring_uses_build_skb(rx_ring) ?
-				SKB_DATA_ALIGN(IXGBE_SKB_PAD + size) :
+	unsigned int truesize = rx_ring->rx_offset ?
+				SKB_DATA_ALIGN(rx_ring->rx_offset + size) :
 				SKB_DATA_ALIGN(size);
 #endif
 	skb_add_rx_frag(skb, skb_shinfo(skb)->nr_frags, rx_buffer->page,
@@ -2254,8 +2254,8 @@ static unsigned int ixgbe_rx_frame_truesize(struct ixgbe_ring *rx_ring,
 #if (PAGE_SIZE < 8192)
 	truesize = ixgbe_rx_pg_size(rx_ring) / 2; /* Must be power-of-2 */
 #else
-	truesize = ring_uses_build_skb(rx_ring) ?
-		SKB_DATA_ALIGN(IXGBE_SKB_PAD + size) +
+	truesize = rx_ring->rx_offset ?
+		SKB_DATA_ALIGN(rx_ring->rx_offset + size) +
 		SKB_DATA_ALIGN(sizeof(struct skb_shared_info)) :
 		SKB_DATA_ALIGN(size);
 #endif
@@ -2298,6 +2298,7 @@ static int ixgbe_clean_rx_irq(struct ixgbe_q_vector *q_vector,
 	unsigned int mss = 0;
 #endif /* IXGBE_FCOE */
 	u16 cleaned_count = ixgbe_desc_unused(rx_ring);
+	unsigned int offset = rx_ring->rx_offset;
 	unsigned int xdp_xmit = 0;
 	struct xdp_buff xdp;
 
@@ -2335,7 +2336,6 @@ static int ixgbe_clean_rx_irq(struct ixgbe_q_vector *q_vector,
 
 		/* retrieve a buffer from the ring */
 		if (!skb) {
-			unsigned int offset = ixgbe_rx_offset(rx_ring);
 			unsigned char *hard_start;
 
 			hard_start = page_address(rx_buffer->page) +
@@ -6583,6 +6583,7 @@ int ixgbe_setup_rx_resources(struct ixgbe_adapter *adapter,
 
 	rx_ring->next_to_clean = 0;
 	rx_ring->next_to_use = 0;
+	rx_ring->rx_offset = ixgbe_rx_offset(rx_ring);
 
 	/* XDP RX-queue info */
 	if (xdp_rxq_info_reg(&rx_ring->xdp_rxq, adapter->netdev,