From patchwork Mon Nov 20 23:43:53 2023
X-Patchwork-Id: 13462278
From: Michael Chan
To: davem@davemloft.net
Cc: netdev@vger.kernel.org, edumazet@google.com, kuba@kernel.org, pabeni@redhat.com, gospo@broadcom.com, Pavan Chebbi, Somnath Kotur
Subject: [PATCH net-next 01/13] bnxt_en: The caller of bnxt_alloc_ctx_mem() should always free bp->ctx
Date: Mon, 20 Nov 2023 15:43:53 -0800
Message-Id: <20231120234405.194542-2-michael.chan@broadcom.com>
In-Reply-To: <20231120234405.194542-1-michael.chan@broadcom.com>
References: <20231120234405.194542-1-michael.chan@broadcom.com>

bnxt_alloc_ctx_mem() calls bnxt_hwrm_func_backing_store_qcaps() to
allocate the memory for bp->ctx.  Initialize bp->ctx with the allocated
memory and let the caller free it during unwind.  The unwind logic is
already there; we just need to always set bp->ctx to the allocated
memory so the caller will always free it.

This simplifies the logic and makes it easier to expand on the backing
store logic.

Reviewed-by: Pavan Chebbi
Reviewed-by: Somnath Kotur
Signed-off-by: Michael Chan
---
 drivers/net/ethernet/broadcom/bnxt/bnxt.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt.c b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
index e6ac1bd21bb3..6b19d5b8d95a 100644
--- a/drivers/net/ethernet/broadcom/bnxt/bnxt.c
+++ b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
@@ -7233,6 +7233,8 @@ static int bnxt_hwrm_func_backing_store_qcaps(struct bnxt *bp)
             rc = -ENOMEM;
             goto ctx_err;
         }
+        bp->ctx = ctx;
+
         ctx->qp_max_entries = le32_to_cpu(resp->qp_max_entries);
         ctx->qp_min_qp1_entries = le16_to_cpu(resp->qp_min_qp1_entries);
         ctx->qp_max_l2_entries = le16_to_cpu(resp->qp_max_l2_entries);
@@ -7276,13 +7278,11 @@ static int bnxt_hwrm_func_backing_store_qcaps(struct bnxt *bp)
         tqm_rings = ctx->tqm_fp_rings_count + BNXT_MAX_TQM_SP_RINGS;
         ctx_pg = kcalloc(tqm_rings, sizeof(*ctx_pg), GFP_KERNEL);
         if (!ctx_pg) {
-            kfree(ctx);
             rc = -ENOMEM;
             goto ctx_err;
         }
         for (i = 0; i < tqm_rings; i++, ctx_pg++)
             ctx->tqm_mem[i] = ctx_pg;
-        bp->ctx = ctx;
     } else {
         rc = 0;
     }
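The ownership rule this patch establishes can be sketched outside the
driver: the callee publishes the allocation into the caller-visible
pointer before any later failure point, so one unwind path in the
caller frees everything.  The standalone C sketch below uses
illustrative names (struct dev, alloc_ctx_mem, free_ctx_mem), not the
driver's real API; it only demonstrates the "callee publishes, caller
unwinds" pattern under those assumptions.

#include <stdlib.h>

struct ctx { int *table; };
struct dev { struct ctx *ctx; };       /* stands in for bp->ctx */

/* Callee: publish the allocation immediately, so any later failure
 * can simply return and rely on the caller's unwind to free it. */
static int alloc_ctx_mem(struct dev *dev)
{
    struct ctx *ctx = calloc(1, sizeof(*ctx));

    if (!ctx)
        return -1;
    dev->ctx = ctx;                    /* publish before fallible steps */
    ctx->table = calloc(16, sizeof(*ctx->table));
    if (!ctx->table)
        return -1;                     /* no local free(); caller unwinds */
    return 0;
}

/* Caller-side unwind: a single path frees everything. */
static void free_ctx_mem(struct dev *dev)
{
    if (dev->ctx) {
        free(dev->ctx->table);
        free(dev->ctx);
        dev->ctx = NULL;
    }
}

int main(void)
{
    struct dev dev = { 0 };

    if (alloc_ctx_mem(&dev) != 0)
        free_ctx_mem(&dev);            /* unwind on failure */
    else
        free_ctx_mem(&dev);            /* normal teardown */
    return 0;
}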
From patchwork Mon Nov 20 23:43:54 2023
X-Patchwork-Id: 13462279
From: Michael Chan
To: davem@davemloft.net
Cc: netdev@vger.kernel.org, edumazet@google.com, kuba@kernel.org, pabeni@redhat.com, gospo@broadcom.com, Somnath Kotur
Subject: [PATCH net-next 02/13] bnxt_en: Free bp->ctx inside bnxt_free_ctx_mem()
Date: Mon, 20 Nov 2023 15:43:54 -0800
Message-Id: <20231120234405.194542-3-michael.chan@broadcom.com>
In-Reply-To: <20231120234405.194542-1-michael.chan@broadcom.com>
References: <20231120234405.194542-1-michael.chan@broadcom.com>

We always free bp->ctx right after calling bnxt_free_ctx_mem(), so just
free it at the end of that function to make things simpler.

Reviewed-by: Somnath Kotur
Signed-off-by: Michael Chan
---
 drivers/net/ethernet/broadcom/bnxt/bnxt.c         | 14 ++------------
 drivers/net/ethernet/broadcom/bnxt/bnxt_devlink.c |  2 --
 2 files changed, 2 insertions(+), 14 deletions(-)

diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt.c b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
index 6b19d5b8d95a..8ff21768e592 100644
--- a/drivers/net/ethernet/broadcom/bnxt/bnxt.c
+++ b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
@@ -7552,6 +7552,8 @@ void bnxt_free_ctx_mem(struct bnxt *bp)
     bnxt_free_ctx_pg_tbls(bp, &ctx->srq_mem);
     bnxt_free_ctx_pg_tbls(bp, &ctx->qp_mem);
     ctx->flags &= ~BNXT_CTX_FLAG_INITED;
+    kfree(ctx);
+    bp->ctx = NULL;
 }

 static int bnxt_alloc_ctx_mem(struct bnxt *bp)
@@ -10321,8 +10323,6 @@ static int bnxt_hwrm_if_change(struct bnxt *bp, bool up)
         if (!test_bit(BNXT_STATE_IN_FW_RESET, &bp->state))
             bnxt_ulp_stop(bp);
         bnxt_free_ctx_mem(bp);
-        kfree(bp->ctx);
-        bp->ctx = NULL;
         bnxt_dcb_free(bp);
         rc = bnxt_fw_init_one(bp);
         if (rc) {
@@ -11948,8 +11948,6 @@ static void bnxt_fw_reset_close(struct bnxt *bp)
     if (pci_is_enabled(bp->pdev))
         pci_disable_device(bp->pdev);
     bnxt_free_ctx_mem(bp);
-    kfree(bp->ctx);
-    bp->ctx = NULL;
 }

 static bool is_bnxt_fw_ok(struct bnxt *bp)
@@ -13368,8 +13366,6 @@ static void bnxt_remove_one(struct pci_dev *pdev)
     bp->fw_health = NULL;
     bnxt_cleanup_pci(bp);
     bnxt_free_ctx_mem(bp);
-    kfree(bp->ctx);
-    bp->ctx = NULL;
     kfree(bp->rss_indir_tbl);
     bp->rss_indir_tbl = NULL;
     bnxt_free_port_stats(bp);
@@ -13969,8 +13965,6 @@ static int bnxt_init_one(struct pci_dev *pdev, const struct pci_device_id *ent)
     bp->fw_health = NULL;
     bnxt_cleanup_pci(bp);
     bnxt_free_ctx_mem(bp);
-    kfree(bp->ctx);
-    bp->ctx = NULL;
     kfree(bp->rss_indir_tbl);
     bp->rss_indir_tbl = NULL;

@@ -14023,8 +14017,6 @@ static int bnxt_suspend(struct device *device)
     bnxt_hwrm_func_drv_unrgtr(bp);
     pci_disable_device(bp->pdev);
     bnxt_free_ctx_mem(bp);
-    kfree(bp->ctx);
-    bp->ctx = NULL;
     rtnl_unlock();
     return rc;
 }
@@ -14121,8 +14113,6 @@ static pci_ers_result_t bnxt_io_error_detected(struct pci_dev *pdev,
     if (pci_is_enabled(pdev))
         pci_disable_device(pdev);
     bnxt_free_ctx_mem(bp);
-    kfree(bp->ctx);
-    bp->ctx = NULL;
     rtnl_unlock();

     /* Request a slot slot reset. */
diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt_devlink.c b/drivers/net/ethernet/broadcom/bnxt/bnxt_devlink.c
index f302dac56599..10b842539b08 100644
--- a/drivers/net/ethernet/broadcom/bnxt/bnxt_devlink.c
+++ b/drivers/net/ethernet/broadcom/bnxt/bnxt_devlink.c
@@ -469,8 +469,6 @@ static int bnxt_dl_reload_down(struct devlink *dl, bool netns_change,
         }
         bnxt_cancel_reservations(bp, false);
         bnxt_free_ctx_mem(bp);
-        kfree(bp->ctx);
-        bp->ctx = NULL;
         break;
     }
     case DEVLINK_RELOAD_ACTION_FW_ACTIVATE: {
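The idiom this patch applies is common enough to sketch on its own:
the teardown function owns the final free() and clears the pointer, so
every call site shrinks from three statements to one and repeated
teardown calls stay safe.  Illustrative names only (struct dev,
free_ctx_mem); this is not the driver code itself.

#include <assert.h>
#include <stdlib.h>

struct ctx { int *tbl; };
struct dev { struct ctx *ctx; };

/* Teardown owns the final free and NULLs the owner's pointer. */
static void free_ctx_mem(struct dev *dev)
{
    struct ctx *ctx = dev->ctx;

    if (!ctx)
        return;
    free(ctx->tbl);        /* members first */
    free(ctx);             /* then the container */
    dev->ctx = NULL;       /* callers can no longer double-free */
}

int main(void)
{
    struct dev dev = { .ctx = calloc(1, sizeof(*dev.ctx)) };

    free_ctx_mem(&dev);    /* replaces: free_ctx_mem(); kfree(ctx); ctx = NULL; */
    free_ctx_mem(&dev);    /* calling again is harmless */
    assert(dev.ctx == NULL);
    return 0;
}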
From patchwork Mon Nov 20 23:43:55 2023
X-Patchwork-Id: 13462281
From: Michael Chan
To: davem@davemloft.net
Cc: netdev@vger.kernel.org, edumazet@google.com, kuba@kernel.org, pabeni@redhat.com, gospo@broadcom.com, Somnath Kotur, Pavan Chebbi
Subject: [PATCH net-next 03/13] bnxt_en: Restructure context memory data structures
Date: Mon, 20 Nov 2023 15:43:55 -0800
Message-Id: <20231120234405.194542-4-michael.chan@broadcom.com>
In-Reply-To: <20231120234405.194542-1-michael.chan@broadcom.com>
References: <20231120234405.194542-1-michael.chan@broadcom.com>

The current code uses a flat bnxt_ctx_mem_info structure to store 8
types of context memory for the NIC.  All the context memory types are
very similar and have similar parameters, so they can share a common
structure to improve the organization.  Also, the new firmware
interface will provide an API to retrieve each type of context memory
by calling the API repeatedly.

This patch reorganizes the bnxt_ctx_mem_info structure to fit better
with the new firmware interface.  It will also work with the legacy
firmware interface.  The flat fields in bnxt_ctx_mem_info are replaced
by the bnxt_ctx_mem_type array, and the bnxt_mem_init array is no
longer needed.

Reviewed-by: Somnath Kotur
Reviewed-by: Pavan Chebbi
Signed-off-by: Michael Chan
---
 drivers/net/ethernet/broadcom/bnxt/bnxt.c | 323 ++++++++++++----------
 drivers/net/ethernet/broadcom/bnxt/bnxt.h | 101 ++++---
 2 files changed, 242 insertions(+), 182 deletions(-)

diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt.c b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
index 8ff21768e592..df25b559ee45 100644
--- a/drivers/net/ethernet/broadcom/bnxt/bnxt.c
+++ b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
@@ -3121,20 +3121,20 @@ static void bnxt_free_skbs(struct bnxt *bp)
     bnxt_free_rx_skbs(bp);
 }

-static void bnxt_init_ctx_mem(struct bnxt_mem_init *mem_init, void *p, int len)
+static void bnxt_init_ctx_mem(struct bnxt_ctx_mem_type *ctxm, void *p, int len)
 {
-    u8 init_val = mem_init->init_val;
-    u16 offset = mem_init->offset;
+    u8 init_val = ctxm->init_value;
+    u16 offset = ctxm->init_offset;
     u8 *p2 = p;
     int i;

     if (!init_val)
         return;
-    if (offset == BNXT_MEM_INVALID_OFFSET) {
+    if (offset == BNXT_CTX_INIT_INVALID_OFFSET) {
         memset(p, init_val, len);
         return;
     }
-    for (i = 0; i < len; i += mem_init->size)
+    for (i = 0; i < len; i += ctxm->entry_size)
         *(p2 + i + offset) = init_val;
 }

@@ -3201,8 +3201,8 @@ static int bnxt_alloc_ring(struct bnxt *bp, struct bnxt_ring_mem_info *rmem)
         if (!rmem->pg_arr[i])
             return -ENOMEM;

-        if (rmem->mem_init)
-            bnxt_init_ctx_mem(rmem->mem_init, rmem->pg_arr[i],
+        if (rmem->ctx_mem)
+            bnxt_init_ctx_mem(rmem->ctx_mem, rmem->pg_arr[i],
                               rmem->page_size);
         if (rmem->nr_pages > 1 || rmem->depth > 0) {
             if (i == rmem->nr_pages - 2 &&
@@ -7175,37 +7175,16 @@ static int bnxt_hwrm_func_qcfg(struct bnxt *bp)
     return rc;
 }

-static void bnxt_init_ctx_initializer(struct bnxt_ctx_mem_info *ctx,
-                                      struct hwrm_func_backing_store_qcaps_output *resp)
+static void bnxt_init_ctx_initializer(struct bnxt_ctx_mem_type *ctxm,
+                                      u8 init_val, u8 init_offset,
+                                      bool init_mask_set)
 {
-    struct bnxt_mem_init *mem_init;
-    u16 init_mask;
-    u8 init_val;
-    u8 *offset;
-    int i;
-
-    init_val = resp->ctx_kind_initializer;
-    init_mask = le16_to_cpu(resp->ctx_init_mask);
-    offset = &resp->qp_init_offset;
-    mem_init = &ctx->mem_init[BNXT_CTX_MEM_INIT_QP];
-    for (i = 0; i < BNXT_CTX_MEM_INIT_MAX; i++, mem_init++, offset++) {
-        mem_init->init_val = init_val;
-        mem_init->offset = BNXT_MEM_INVALID_OFFSET;
-        if (!init_mask)
-            continue;
-        if (i == BNXT_CTX_MEM_INIT_STAT)
-            offset = &resp->stat_init_offset;
-        if (init_mask & (1 << i))
-            mem_init->offset = *offset * 4;
-        else
-            mem_init->init_val = 0;
-    }
-    ctx->mem_init[BNXT_CTX_MEM_INIT_QP].size = ctx->qp_entry_size;
-    ctx->mem_init[BNXT_CTX_MEM_INIT_SRQ].size = ctx->srq_entry_size;
-    ctx->mem_init[BNXT_CTX_MEM_INIT_CQ].size = ctx->cq_entry_size;
-    ctx->mem_init[BNXT_CTX_MEM_INIT_VNIC].size = ctx->vnic_entry_size;
-    ctx->mem_init[BNXT_CTX_MEM_INIT_STAT].size = ctx->stat_entry_size;
-    ctx->mem_init[BNXT_CTX_MEM_INIT_MRAV].size = ctx->mrav_entry_size;
+    ctxm->init_value = init_val;
+    ctxm->init_offset = BNXT_CTX_INIT_INVALID_OFFSET;
+    if (init_mask_set)
+        ctxm->init_offset = init_offset * 4;
+    else
+        ctxm->init_value = 0;
 }

 static int bnxt_hwrm_func_backing_store_qcaps(struct bnxt *bp)
@@ -7225,8 +7204,11 @@ static int bnxt_hwrm_func_backing_store_qcaps(struct bnxt *bp)
     rc = hwrm_req_send_silent(bp, req);
     if (!rc) {
         struct bnxt_ctx_pg_info *ctx_pg;
+        struct bnxt_ctx_mem_type *ctxm;
         struct bnxt_ctx_mem_info *ctx;
+        u8 init_val, init_idx = 0;
         int i, tqm_rings;
+        u16 init_mask;

         ctx = kzalloc(sizeof(*ctx), GFP_KERNEL);
         if (!ctx) {
@@ -7235,39 +7217,69 @@ static int bnxt_hwrm_func_backing_store_qcaps(struct bnxt *bp)
         }
         bp->ctx = ctx;

-        ctx->qp_max_entries = le32_to_cpu(resp->qp_max_entries);
-        ctx->qp_min_qp1_entries = le16_to_cpu(resp->qp_min_qp1_entries);
-        ctx->qp_max_l2_entries = le16_to_cpu(resp->qp_max_l2_entries);
-        ctx->qp_entry_size = le16_to_cpu(resp->qp_entry_size);
-        ctx->srq_max_l2_entries = le16_to_cpu(resp->srq_max_l2_entries);
-        ctx->srq_max_entries = le32_to_cpu(resp->srq_max_entries);
-        ctx->srq_entry_size = le16_to_cpu(resp->srq_entry_size);
-        ctx->cq_max_l2_entries = le16_to_cpu(resp->cq_max_l2_entries);
-        ctx->cq_max_entries = le32_to_cpu(resp->cq_max_entries);
-        ctx->cq_entry_size = le16_to_cpu(resp->cq_entry_size);
-        ctx->vnic_max_vnic_entries =
-            le16_to_cpu(resp->vnic_max_vnic_entries);
-        ctx->vnic_max_ring_table_entries =
+        init_val = resp->ctx_kind_initializer;
+        init_mask = le16_to_cpu(resp->ctx_init_mask);
+
+        ctxm = &ctx->ctx_arr[BNXT_CTX_QP];
+        ctxm->max_entries = le32_to_cpu(resp->qp_max_entries);
+        ctxm->qp_qp1_entries = le16_to_cpu(resp->qp_min_qp1_entries);
+        ctxm->qp_l2_entries = le16_to_cpu(resp->qp_max_l2_entries);
+        ctxm->entry_size = le16_to_cpu(resp->qp_entry_size);
+        bnxt_init_ctx_initializer(ctxm, init_val, resp->qp_init_offset,
+                                  (init_mask & (1 << init_idx++)) != 0);
+
+        ctxm = &ctx->ctx_arr[BNXT_CTX_SRQ];
+        ctxm->srq_l2_entries = le16_to_cpu(resp->srq_max_l2_entries);
+        ctxm->max_entries = le32_to_cpu(resp->srq_max_entries);
+        ctxm->entry_size = le16_to_cpu(resp->srq_entry_size);
+        bnxt_init_ctx_initializer(ctxm, init_val, resp->srq_init_offset,
+                                  (init_mask & (1 << init_idx++)) != 0);
+
+        ctxm = &ctx->ctx_arr[BNXT_CTX_CQ];
+        ctxm->cq_l2_entries = le16_to_cpu(resp->cq_max_l2_entries);
+        ctxm->max_entries = le32_to_cpu(resp->cq_max_entries);
+        ctxm->entry_size = le16_to_cpu(resp->cq_entry_size);
+        bnxt_init_ctx_initializer(ctxm, init_val, resp->cq_init_offset,
+                                  (init_mask & (1 << init_idx++)) != 0);
+
+        ctxm = &ctx->ctx_arr[BNXT_CTX_VNIC];
+        ctxm->vnic_entries = le16_to_cpu(resp->vnic_max_vnic_entries);
+        ctxm->max_entries = ctxm->vnic_entries +
             le16_to_cpu(resp->vnic_max_ring_table_entries);
-        ctx->vnic_entry_size = le16_to_cpu(resp->vnic_entry_size);
-        ctx->stat_max_entries = le32_to_cpu(resp->stat_max_entries);
-        ctx->stat_entry_size = le16_to_cpu(resp->stat_entry_size);
-        ctx->tqm_entry_size = le16_to_cpu(resp->tqm_entry_size);
-        ctx->tqm_min_entries_per_ring =
-            le32_to_cpu(resp->tqm_min_entries_per_ring);
-        ctx->tqm_max_entries_per_ring =
-            le32_to_cpu(resp->tqm_max_entries_per_ring);
-        ctx->tqm_entries_multiple = resp->tqm_entries_multiple;
-        if (!ctx->tqm_entries_multiple)
-            ctx->tqm_entries_multiple = 1;
-        ctx->mrav_max_entries = le32_to_cpu(resp->mrav_max_entries);
-        ctx->mrav_entry_size = le16_to_cpu(resp->mrav_entry_size);
-        ctx->mrav_num_entries_units =
+        ctxm->entry_size = le16_to_cpu(resp->vnic_entry_size);
+        bnxt_init_ctx_initializer(ctxm, init_val,
+                                  resp->vnic_init_offset,
+                                  (init_mask & (1 << init_idx++)) != 0);
+
+        ctxm = &ctx->ctx_arr[BNXT_CTX_STAT];
+        ctxm->max_entries = le32_to_cpu(resp->stat_max_entries);
+        ctxm->entry_size = le16_to_cpu(resp->stat_entry_size);
+        bnxt_init_ctx_initializer(ctxm, init_val,
+                                  resp->stat_init_offset,
+                                  (init_mask & (1 << init_idx++)) != 0);
+
+        ctxm = &ctx->ctx_arr[BNXT_CTX_STQM];
+        ctxm->entry_size = le16_to_cpu(resp->tqm_entry_size);
+        ctxm->min_entries = le32_to_cpu(resp->tqm_min_entries_per_ring);
+        ctxm->max_entries = le32_to_cpu(resp->tqm_max_entries_per_ring);
+        ctxm->entry_multiple = resp->tqm_entries_multiple;
+        if (!ctxm->entry_multiple)
+            ctxm->entry_multiple = 1;
+
+        memcpy(&ctx->ctx_arr[BNXT_CTX_FTQM], ctxm, sizeof(*ctxm));
+
+        ctxm = &ctx->ctx_arr[BNXT_CTX_MRAV];
+        ctxm->max_entries = le32_to_cpu(resp->mrav_max_entries);
+        ctxm->entry_size = le16_to_cpu(resp->mrav_entry_size);
+        ctxm->mrav_num_entries_units =
             le16_to_cpu(resp->mrav_num_entries_units);
-        ctx->tim_entry_size = le16_to_cpu(resp->tim_entry_size);
-        ctx->tim_max_entries = le32_to_cpu(resp->tim_max_entries);
+        bnxt_init_ctx_initializer(ctxm, init_val,
+                                  resp->mrav_init_offset,
+                                  (init_mask & (1 << init_idx++)) != 0);
-        bnxt_init_ctx_initializer(ctx, resp);
+
+        ctxm = &ctx->ctx_arr[BNXT_CTX_TIM];
+        ctxm->entry_size = le16_to_cpu(resp->tim_entry_size);
+        ctxm->max_entries = le32_to_cpu(resp->tim_max_entries);

         ctx->tqm_fp_rings_count = resp->tqm_fp_rings_count;
         if (!ctx->tqm_fp_rings_count)
@@ -7275,6 +7287,9 @@ static int bnxt_hwrm_func_backing_store_qcaps(struct bnxt *bp)
         else if (ctx->tqm_fp_rings_count > BNXT_MAX_TQM_FP_RINGS)
             ctx->tqm_fp_rings_count = BNXT_MAX_TQM_FP_RINGS;

+        ctxm = &ctx->ctx_arr[BNXT_CTX_FTQM];
+        ctxm->instance_bmap = (1 << ctx->tqm_fp_rings_count) - 1;
+
         tqm_rings = ctx->tqm_fp_rings_count + BNXT_MAX_TQM_SP_RINGS;
         ctx_pg = kcalloc(tqm_rings, sizeof(*ctx_pg), GFP_KERNEL);
         if (!ctx_pg) {
@@ -7321,6 +7336,7 @@ static int bnxt_hwrm_func_backing_store_cfg(struct bnxt *bp, u32 enables)
     struct hwrm_func_backing_store_cfg_input *req;
     struct bnxt_ctx_mem_info *ctx = bp->ctx;
     struct bnxt_ctx_pg_info *ctx_pg;
+    struct bnxt_ctx_mem_type *ctxm;
     void **__req = (void **)&req;
     u32 req_len = sizeof(*req);
     __le32 *num_entries;
@@ -7343,70 +7359,86 @@ static int bnxt_hwrm_func_backing_store_cfg(struct bnxt *bp, u32 enables)
     req->enables = cpu_to_le32(enables);
     if (enables & FUNC_BACKING_STORE_CFG_REQ_ENABLES_QP) {
         ctx_pg = &ctx->qp_mem;
+        ctxm = &ctx->ctx_arr[BNXT_CTX_QP];
         req->qp_num_entries = cpu_to_le32(ctx_pg->entries);
-        req->qp_num_qp1_entries = cpu_to_le16(ctx->qp_min_qp1_entries);
-        req->qp_num_l2_entries = cpu_to_le16(ctx->qp_max_l2_entries);
-        req->qp_entry_size = cpu_to_le16(ctx->qp_entry_size);
+        req->qp_num_qp1_entries = cpu_to_le16(ctxm->qp_qp1_entries);
+        req->qp_num_l2_entries = cpu_to_le16(ctxm->qp_l2_entries);
+        req->qp_entry_size = cpu_to_le16(ctxm->entry_size);
         bnxt_hwrm_set_pg_attr(&ctx_pg->ring_mem,
                               &req->qpc_pg_size_qpc_lvl,
                               &req->qpc_page_dir);
     }
     if (enables & FUNC_BACKING_STORE_CFG_REQ_ENABLES_SRQ) {
         ctx_pg = &ctx->srq_mem;
+        ctxm = &ctx->ctx_arr[BNXT_CTX_SRQ];
         req->srq_num_entries = cpu_to_le32(ctx_pg->entries);
-        req->srq_num_l2_entries = cpu_to_le16(ctx->srq_max_l2_entries);
-        req->srq_entry_size = cpu_to_le16(ctx->srq_entry_size);
+        req->srq_num_l2_entries = cpu_to_le16(ctxm->srq_l2_entries);
+        req->srq_entry_size = cpu_to_le16(ctxm->entry_size);
         bnxt_hwrm_set_pg_attr(&ctx_pg->ring_mem,
                               &req->srq_pg_size_srq_lvl,
                               &req->srq_page_dir);
     }
     if (enables & FUNC_BACKING_STORE_CFG_REQ_ENABLES_CQ) {
         ctx_pg = &ctx->cq_mem;
+        ctxm = &ctx->ctx_arr[BNXT_CTX_CQ];
         req->cq_num_entries = cpu_to_le32(ctx_pg->entries);
-        req->cq_num_l2_entries = cpu_to_le16(ctx->cq_max_l2_entries);
-        req->cq_entry_size = cpu_to_le16(ctx->cq_entry_size);
+        req->cq_num_l2_entries = cpu_to_le16(ctxm->cq_l2_entries);
+        req->cq_entry_size = cpu_to_le16(ctxm->entry_size);
         bnxt_hwrm_set_pg_attr(&ctx_pg->ring_mem,
                               &req->cq_pg_size_cq_lvl,
                               &req->cq_page_dir);
     }
     if (enables & FUNC_BACKING_STORE_CFG_REQ_ENABLES_VNIC) {
         ctx_pg = &ctx->vnic_mem;
-        req->vnic_num_vnic_entries =
-            cpu_to_le16(ctx->vnic_max_vnic_entries);
+        ctxm = &ctx->ctx_arr[BNXT_CTX_VNIC];
+        req->vnic_num_vnic_entries = cpu_to_le16(ctxm->vnic_entries);
         req->vnic_num_ring_table_entries =
-            cpu_to_le16(ctx->vnic_max_ring_table_entries);
-        req->vnic_entry_size = cpu_to_le16(ctx->vnic_entry_size);
+            cpu_to_le16(ctxm->max_entries - ctxm->vnic_entries);
+        req->vnic_entry_size = cpu_to_le16(ctxm->entry_size);
         bnxt_hwrm_set_pg_attr(&ctx_pg->ring_mem,
                               &req->vnic_pg_size_vnic_lvl,
                               &req->vnic_page_dir);
     }
     if (enables & FUNC_BACKING_STORE_CFG_REQ_ENABLES_STAT) {
         ctx_pg = &ctx->stat_mem;
-        req->stat_num_entries = cpu_to_le32(ctx->stat_max_entries);
-        req->stat_entry_size = cpu_to_le16(ctx->stat_entry_size);
+        ctxm = &ctx->ctx_arr[BNXT_CTX_STAT];
+        req->stat_num_entries = cpu_to_le32(ctxm->max_entries);
+        req->stat_entry_size = cpu_to_le16(ctxm->entry_size);
         bnxt_hwrm_set_pg_attr(&ctx_pg->ring_mem,
                               &req->stat_pg_size_stat_lvl,
                               &req->stat_page_dir);
     }
     if (enables & FUNC_BACKING_STORE_CFG_REQ_ENABLES_MRAV) {
+        u32 units;
+
         ctx_pg = &ctx->mrav_mem;
+        ctxm = &ctx->ctx_arr[BNXT_CTX_MRAV];
         req->mrav_num_entries = cpu_to_le32(ctx_pg->entries);
-        if (ctx->mrav_num_entries_units)
-            flags |=
-                FUNC_BACKING_STORE_CFG_REQ_FLAGS_MRAV_RESERVATION_SPLIT;
-        req->mrav_entry_size = cpu_to_le16(ctx->mrav_entry_size);
+        units = ctxm->mrav_num_entries_units;
+        if (units) {
+            u32 num_mr, num_ah = ctxm->mrav_av_entries;
+            u32 entries;
+
+            num_mr = ctx_pg->entries - num_ah;
+            entries = ((num_mr / units) << 16) | (num_ah / units);
+            req->mrav_num_entries = cpu_to_le32(entries);
+            flags |= FUNC_BACKING_STORE_CFG_REQ_FLAGS_MRAV_RESERVATION_SPLIT;
+        }
+        req->mrav_entry_size = cpu_to_le16(ctxm->entry_size);
         bnxt_hwrm_set_pg_attr(&ctx_pg->ring_mem,
                               &req->mrav_pg_size_mrav_lvl,
                               &req->mrav_page_dir);
     }
     if (enables & FUNC_BACKING_STORE_CFG_REQ_ENABLES_TIM) {
         ctx_pg = &ctx->tim_mem;
+        ctxm = &ctx->ctx_arr[BNXT_CTX_TIM];
         req->tim_num_entries = cpu_to_le32(ctx_pg->entries);
-        req->tim_entry_size = cpu_to_le16(ctx->tim_entry_size);
+        req->tim_entry_size = cpu_to_le16(ctxm->entry_size);
         bnxt_hwrm_set_pg_attr(&ctx_pg->ring_mem,
                               &req->tim_pg_size_tim_lvl,
                               &req->tim_page_dir);
     }
+    ctxm = &ctx->ctx_arr[BNXT_CTX_STQM];
     for (i = 0, num_entries = &req->tqm_sp_num_entries,
          pg_attr = &req->tqm_sp_pg_size_tqm_sp_lvl,
          pg_dir = &req->tqm_sp_page_dir,
@@ -7416,7 +7448,7 @@ static int bnxt_hwrm_func_backing_store_cfg(struct bnxt *bp, u32 enables)
         if (!(enables & ena))
             continue;

-        req->tqm_entry_size = cpu_to_le16(ctx->tqm_entry_size);
+        req->tqm_entry_size = cpu_to_le16(ctxm->entry_size);
         ctx_pg = ctx->tqm_mem[i];
         *num_entries = cpu_to_le32(ctx_pg->entries);
         bnxt_hwrm_set_pg_attr(&ctx_pg->ring_mem, pg_attr, pg_dir);
@@ -7441,7 +7473,7 @@ static int bnxt_alloc_ctx_mem_blk(struct bnxt *bp,

 static int bnxt_alloc_ctx_pg_tbls(struct bnxt *bp,
                                   struct bnxt_ctx_pg_info *ctx_pg, u32 mem_size,
-                                  u8 depth, struct bnxt_mem_init *mem_init)
+                                  u8 depth, struct bnxt_ctx_mem_type *ctxm)
 {
     struct bnxt_ring_mem_info *rmem = &ctx_pg->ring_mem;
     int rc;
@@ -7479,7 +7511,7 @@ static int bnxt_alloc_ctx_pg_tbls(struct bnxt *bp,
             rmem->pg_tbl_map = ctx_pg->ctx_dma_arr[i];
             rmem->depth = 1;
             rmem->nr_pages = MAX_CTX_PAGES;
-            rmem->mem_init = mem_init;
+            rmem->ctx_mem = ctxm;
             if (i == (nr_tbls - 1)) {
                 int rem = ctx_pg->nr_pages % MAX_CTX_PAGES;
@@ -7494,7 +7526,7 @@ static int bnxt_alloc_ctx_pg_tbls(struct bnxt *bp,
         rmem->nr_pages = DIV_ROUND_UP(mem_size, BNXT_PAGE_SIZE);
         if (rmem->nr_pages > 1 || depth)
             rmem->depth = 1;
-        rmem->mem_init = mem_init;
+        rmem->ctx_mem = ctxm;
         rc = bnxt_alloc_ctx_mem_blk(bp, ctx_pg);
     }
     return rc;
@@ -7559,10 +7591,12 @@ void bnxt_free_ctx_mem(struct bnxt *bp)
 static int bnxt_alloc_ctx_mem(struct bnxt *bp)
 {
     struct bnxt_ctx_pg_info *ctx_pg;
+    struct bnxt_ctx_mem_type *ctxm;
     struct bnxt_ctx_mem_info *ctx;
-    struct bnxt_mem_init *init;
+    u32 l2_qps, qp1_qps, max_qps;
     u32 mem_size, ena, entries;
     u32 entries_sp, min;
+    u32 srqs, max_srqs;
     u32 num_mr, num_ah;
     u32 extra_srqs = 0;
     u32 extra_qps = 0;
@@ -7579,60 +7613,65 @@ static int bnxt_alloc_ctx_mem(struct bnxt *bp)
     if (!ctx || (ctx->flags & BNXT_CTX_FLAG_INITED))
         return 0;

+    ctxm = &ctx->ctx_arr[BNXT_CTX_QP];
+    l2_qps = ctxm->qp_l2_entries;
+    qp1_qps = ctxm->qp_qp1_entries;
+    max_qps = ctxm->max_entries;
+    ctxm = &ctx->ctx_arr[BNXT_CTX_SRQ];
+    srqs = ctxm->srq_l2_entries;
+    max_srqs = ctxm->max_entries;
     if ((bp->flags & BNXT_FLAG_ROCE_CAP) && !is_kdump_kernel()) {
         pg_lvl = 2;
-        extra_qps = 65536;
-        extra_srqs = 8192;
+        extra_qps = min_t(u32, 65536, max_qps - l2_qps - qp1_qps);
+        extra_srqs = min_t(u32, 8192, max_srqs - srqs);
     }

+    ctxm = &ctx->ctx_arr[BNXT_CTX_QP];
     ctx_pg = &ctx->qp_mem;
-    ctx_pg->entries = ctx->qp_min_qp1_entries + ctx->qp_max_l2_entries +
-                      extra_qps;
-    if (ctx->qp_entry_size) {
-        mem_size = ctx->qp_entry_size * ctx_pg->entries;
-        init = &ctx->mem_init[BNXT_CTX_MEM_INIT_QP];
-        rc = bnxt_alloc_ctx_pg_tbls(bp, ctx_pg, mem_size, pg_lvl, init);
+    ctx_pg->entries = l2_qps + qp1_qps + extra_qps;
+    if (ctxm->entry_size) {
+        mem_size = ctxm->entry_size * ctx_pg->entries;
+        rc = bnxt_alloc_ctx_pg_tbls(bp, ctx_pg, mem_size, pg_lvl, ctxm);
         if (rc)
             return rc;
     }

+    ctxm = &ctx->ctx_arr[BNXT_CTX_SRQ];
     ctx_pg = &ctx->srq_mem;
-    ctx_pg->entries = ctx->srq_max_l2_entries + extra_srqs;
-    if (ctx->srq_entry_size) {
-        mem_size = ctx->srq_entry_size * ctx_pg->entries;
-        init = &ctx->mem_init[BNXT_CTX_MEM_INIT_SRQ];
-        rc = bnxt_alloc_ctx_pg_tbls(bp, ctx_pg, mem_size, pg_lvl, init);
+    ctx_pg->entries = srqs + extra_srqs;
+    if (ctxm->entry_size) {
+        mem_size = ctxm->entry_size * ctx_pg->entries;
+        rc = bnxt_alloc_ctx_pg_tbls(bp, ctx_pg, mem_size, pg_lvl, ctxm);
         if (rc)
             return rc;
     }

+    ctxm = &ctx->ctx_arr[BNXT_CTX_CQ];
     ctx_pg = &ctx->cq_mem;
-    ctx_pg->entries = ctx->cq_max_l2_entries + extra_qps * 2;
-    if (ctx->cq_entry_size) {
-        mem_size = ctx->cq_entry_size * ctx_pg->entries;
-        init = &ctx->mem_init[BNXT_CTX_MEM_INIT_CQ];
-        rc = bnxt_alloc_ctx_pg_tbls(bp, ctx_pg, mem_size, pg_lvl, init);
+    ctx_pg->entries = ctxm->cq_l2_entries + extra_qps * 2;
+    if (ctxm->entry_size) {
+        mem_size = ctxm->entry_size * ctx_pg->entries;
+        rc = bnxt_alloc_ctx_pg_tbls(bp, ctx_pg, mem_size, pg_lvl, ctxm);
         if (rc)
             return rc;
     }

+    ctxm = &ctx->ctx_arr[BNXT_CTX_VNIC];
     ctx_pg = &ctx->vnic_mem;
-    ctx_pg->entries = ctx->vnic_max_vnic_entries +
-                      ctx->vnic_max_ring_table_entries;
-    if (ctx->vnic_entry_size) {
-        mem_size = ctx->vnic_entry_size * ctx_pg->entries;
-        init = &ctx->mem_init[BNXT_CTX_MEM_INIT_VNIC];
-        rc = bnxt_alloc_ctx_pg_tbls(bp, ctx_pg, mem_size, 1, init);
+    ctx_pg->entries = ctxm->max_entries;
+    if (ctxm->entry_size) {
+        mem_size = ctxm->entry_size * ctx_pg->entries;
+        rc = bnxt_alloc_ctx_pg_tbls(bp, ctx_pg, mem_size, 1, ctxm);
         if (rc)
             return rc;
     }

+    ctxm = &ctx->ctx_arr[BNXT_CTX_STAT];
     ctx_pg = &ctx->stat_mem;
-    ctx_pg->entries = ctx->stat_max_entries;
-    if (ctx->stat_entry_size) {
-        mem_size = ctx->stat_entry_size * ctx_pg->entries;
-        init = &ctx->mem_init[BNXT_CTX_MEM_INIT_STAT];
-        rc = bnxt_alloc_ctx_pg_tbls(bp, ctx_pg, mem_size, 1, init);
+    ctx_pg->entries = ctxm->max_entries;
+    if (ctxm->entry_size) {
+        mem_size = ctxm->entry_size * ctx_pg->entries;
+        rc = bnxt_alloc_ctx_pg_tbls(bp, ctx_pg, mem_size, 1, ctxm);
         if (rc)
             return rc;
     }
@@ -7641,30 +7680,31 @@ static int bnxt_alloc_ctx_mem(struct bnxt *bp)
     if (!(bp->flags & BNXT_FLAG_ROCE_CAP))
         goto skip_rdma;

+    ctxm = &ctx->ctx_arr[BNXT_CTX_MRAV];
     ctx_pg = &ctx->mrav_mem;
     /* 128K extra is needed to accommodate static AH context
      * allocation by f/w.
      */
-    num_mr = 1024 * 256;
-    num_ah = 1024 * 128;
+    num_mr = min_t(u32, ctxm->max_entries / 2, 1024 * 256);
+    num_ah = min_t(u32, num_mr, 1024 * 128);
+    ctxm->split_entry_cnt = BNXT_CTX_MRAV_AV_SPLIT_ENTRY + 1;
+    if (!ctxm->mrav_av_entries || ctxm->mrav_av_entries > num_ah)
+        ctxm->mrav_av_entries = num_ah;
+
     ctx_pg->entries = num_mr + num_ah;
-    if (ctx->mrav_entry_size) {
-        mem_size = ctx->mrav_entry_size * ctx_pg->entries;
-        init = &ctx->mem_init[BNXT_CTX_MEM_INIT_MRAV];
-        rc = bnxt_alloc_ctx_pg_tbls(bp, ctx_pg, mem_size, 2, init);
+    if (ctxm->entry_size) {
+        mem_size = ctxm->entry_size * ctx_pg->entries;
+        rc = bnxt_alloc_ctx_pg_tbls(bp, ctx_pg, mem_size, 2, ctxm);
         if (rc)
             return rc;
     }
     ena = FUNC_BACKING_STORE_CFG_REQ_ENABLES_MRAV;
-    if (ctx->mrav_num_entries_units)
-        ctx_pg->entries =
-            ((num_mr / ctx->mrav_num_entries_units) << 16) |
-            (num_ah / ctx->mrav_num_entries_units);

+    ctxm = &ctx->ctx_arr[BNXT_CTX_TIM];
     ctx_pg = &ctx->tim_mem;
-    ctx_pg->entries = ctx->qp_mem.entries;
-    if (ctx->tim_entry_size) {
-        mem_size = ctx->tim_entry_size * ctx_pg->entries;
+    ctx_pg->entries = l2_qps + qp1_qps + extra_qps;
+    if (ctxm->entry_size) {
+        mem_size = ctxm->entry_size * ctx_pg->entries;
         rc = bnxt_alloc_ctx_pg_tbls(bp, ctx_pg, mem_size, 1, NULL);
         if (rc)
             return rc;
@@ -7672,18 +7712,19 @@ static int bnxt_alloc_ctx_mem(struct bnxt *bp)
     ena |= FUNC_BACKING_STORE_CFG_REQ_ENABLES_TIM;

 skip_rdma:
-    min = ctx->tqm_min_entries_per_ring;
-    entries_sp = ctx->vnic_max_vnic_entries + ctx->qp_max_l2_entries +
-                 2 * (extra_qps + ctx->qp_min_qp1_entries) + min;
-    entries_sp = roundup(entries_sp, ctx->tqm_entries_multiple);
-    entries = ctx->qp_max_l2_entries + 2 * (extra_qps + ctx->qp_min_qp1_entries);
-    entries = roundup(entries, ctx->tqm_entries_multiple);
-    entries = clamp_t(u32, entries, min, ctx->tqm_max_entries_per_ring);
+    ctxm = &ctx->ctx_arr[BNXT_CTX_STQM];
+    min = ctxm->min_entries;
+    entries_sp = ctx->ctx_arr[BNXT_CTX_VNIC].vnic_entries + l2_qps +
+                 2 * (extra_qps + qp1_qps) + min;
+    entries_sp = roundup(entries_sp, ctxm->entry_multiple);
+    entries = l2_qps + 2 * (extra_qps + qp1_qps);
+    entries = roundup(entries, ctxm->entry_multiple);
+    entries = clamp_t(u32, entries, min, ctxm->max_entries);
     for (i = 0; i < ctx->tqm_fp_rings_count + 1; i++) {
         ctx_pg = ctx->tqm_mem[i];
         ctx_pg->entries = i ? entries : entries_sp;
-        if (ctx->tqm_entry_size) {
-            mem_size = ctx->tqm_entry_size * ctx_pg->entries;
+        if (ctxm->entry_size) {
+            mem_size = ctxm->entry_size * ctx_pg->entries;
             rc = bnxt_alloc_ctx_pg_tbls(bp, ctx_pg, mem_size, 1,
                                         NULL);
             if (rc)
diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt.h b/drivers/net/ethernet/broadcom/bnxt/bnxt.h
index 4ce993943924..757d2edb8c05 100644
--- a/drivers/net/ethernet/broadcom/bnxt/bnxt.h
+++ b/drivers/net/ethernet/broadcom/bnxt/bnxt.h
@@ -762,13 +762,6 @@ struct bnxt_sw_rx_agg_bd {
     dma_addr_t mapping;
 };

-struct bnxt_mem_init {
-    u8    init_val;
-    u16   offset;
-#define BNXT_MEM_INVALID_OFFSET    0xffff
-    u16   size;
-};
-
 struct bnxt_ring_mem_info {
     int            nr_pages;
     int            page_size;
@@ -778,7 +771,7 @@ struct bnxt_ring_mem_info {
 #define BNXT_RMEM_USE_FULL_PAGE_FLAG    4
     u16            depth;

-    struct bnxt_mem_init    *mem_init;
+    struct bnxt_ctx_mem_type    *ctx_mem;
     void            **pg_arr;
     dma_addr_t        *dma_arr;
@@ -1551,35 +1544,70 @@ do {                                    \
         attr = FUNC_BACKING_STORE_CFG_REQ_QPC_PG_SIZE_PG_4K;    \
 } while (0)

+struct bnxt_ctx_mem_type {
+    u16    type;
+    u16    entry_size;
+    u32    flags;
+    u32    instance_bmap;
+    u8     init_value;
+    u8     entry_multiple;
+    u16    init_offset;
+#define BNXT_CTX_INIT_INVALID_OFFSET    0xffff
+    u32    max_entries;
+    u32    min_entries;
+    u8     last:1;
+    u8     split_entry_cnt;
+#define BNXT_MAX_SPLIT_ENTRY    4
+    union {
+        struct {
+            u32    qp_l2_entries;
+            u32    qp_qp1_entries;
+            u32    qp_fast_qpmd_entries;
+        };
+        u32    srq_l2_entries;
+        u32    cq_l2_entries;
+        u32    vnic_entries;
+        struct {
+            u32    mrav_av_entries;
+            u32    mrav_num_entries_units;
+        };
+        u32    split[BNXT_MAX_SPLIT_ENTRY];
+    };
+};
+
+#define BNXT_CTX_MRAV_AV_SPLIT_ENTRY    0
+
+#define BNXT_CTX_QP    FUNC_BACKING_STORE_QCAPS_V2_REQ_TYPE_QP
+#define BNXT_CTX_SRQ   FUNC_BACKING_STORE_QCAPS_V2_REQ_TYPE_SRQ
+#define BNXT_CTX_CQ    FUNC_BACKING_STORE_QCAPS_V2_REQ_TYPE_CQ
+#define BNXT_CTX_VNIC  FUNC_BACKING_STORE_QCAPS_V2_REQ_TYPE_VNIC
+#define BNXT_CTX_STAT  FUNC_BACKING_STORE_QCAPS_V2_REQ_TYPE_STAT
+#define BNXT_CTX_STQM  FUNC_BACKING_STORE_QCAPS_V2_REQ_TYPE_SP_TQM_RING
+#define BNXT_CTX_FTQM  FUNC_BACKING_STORE_QCAPS_V2_REQ_TYPE_FP_TQM_RING
+#define BNXT_CTX_MRAV  FUNC_BACKING_STORE_QCAPS_V2_REQ_TYPE_MRAV
+#define BNXT_CTX_TIM   FUNC_BACKING_STORE_QCAPS_V2_REQ_TYPE_TIM
+#define BNXT_CTX_TKC   FUNC_BACKING_STORE_QCAPS_V2_REQ_TYPE_TKC
+#define BNXT_CTX_RKC   FUNC_BACKING_STORE_QCAPS_V2_REQ_TYPE_RKC
+#define BNXT_CTX_MTQM  FUNC_BACKING_STORE_QCAPS_V2_REQ_TYPE_MP_TQM_RING
+#define BNXT_CTX_SQDBS FUNC_BACKING_STORE_QCAPS_V2_REQ_TYPE_SQ_DB_SHADOW
+#define BNXT_CTX_RQDBS FUNC_BACKING_STORE_QCAPS_V2_REQ_TYPE_RQ_DB_SHADOW
+#define BNXT_CTX_SRQDBS FUNC_BACKING_STORE_QCAPS_V2_REQ_TYPE_SRQ_DB_SHADOW
+#define BNXT_CTX_CQDBS FUNC_BACKING_STORE_QCAPS_V2_REQ_TYPE_CQ_DB_SHADOW
+#define BNXT_CTX_QTKC  FUNC_BACKING_STORE_QCAPS_V2_REQ_TYPE_QUIC_TKC
+#define BNXT_CTX_QRKC  FUNC_BACKING_STORE_QCAPS_V2_REQ_TYPE_QUIC_RKC
+#define BNXT_CTX_TBLSC FUNC_BACKING_STORE_QCAPS_V2_REQ_TYPE_TBL_SCOPE
+#define BNXT_CTX_XPAR  FUNC_BACKING_STORE_QCAPS_V2_REQ_TYPE_XID_PARTITION
+
+#define BNXT_CTX_MAX    (BNXT_CTX_TIM + 1)
+#define BNXT_CTX_V2_MAX (BNXT_CTX_XPAR + 1)
+#define BNXT_CTX_INV    ((u16)-1)
+
 struct bnxt_ctx_mem_info {
-    u32    qp_max_entries;
-    u16    qp_min_qp1_entries;
-    u16    qp_max_l2_entries;
-    u16    qp_entry_size;
-    u16    srq_max_l2_entries;
-    u32    srq_max_entries;
-    u16    srq_entry_size;
-    u16    cq_max_l2_entries;
-    u32    cq_max_entries;
-    u16    cq_entry_size;
-    u16    vnic_max_vnic_entries;
-    u16    vnic_max_ring_table_entries;
-    u16    vnic_entry_size;
-    u32    stat_max_entries;
-    u16    stat_entry_size;
-    u16    tqm_entry_size;
-    u32    tqm_min_entries_per_ring;
-    u32    tqm_max_entries_per_ring;
-    u32    mrav_max_entries;
-    u16    mrav_entry_size;
-    u16    tim_entry_size;
-    u32    tim_max_entries;
-    u16    mrav_num_entries_units;
-    u8     tqm_entries_multiple;
     u8     tqm_fp_rings_count;

     u32    flags;
 #define BNXT_CTX_FLAG_INITED    0x01
+    struct bnxt_ctx_mem_type    ctx_arr[BNXT_CTX_MAX];

     struct bnxt_ctx_pg_info qp_mem;
     struct bnxt_ctx_pg_info srq_mem;
@@ -1589,15 +1617,6 @@ struct bnxt_ctx_mem_info {
     struct bnxt_ctx_pg_info mrav_mem;
     struct bnxt_ctx_pg_info tim_mem;
     struct bnxt_ctx_pg_info *tqm_mem[BNXT_MAX_TQM_RINGS];
-
-#define BNXT_CTX_MEM_INIT_QP    0
-#define BNXT_CTX_MEM_INIT_SRQ   1
-#define BNXT_CTX_MEM_INIT_CQ    2
-#define BNXT_CTX_MEM_INIT_VNIC  3
-#define BNXT_CTX_MEM_INIT_STAT  4
-#define BNXT_CTX_MEM_INIT_MRAV  5
-#define BNXT_CTX_MEM_INIT_MAX   6
-    struct bnxt_mem_init    mem_init[BNXT_CTX_MEM_INIT_MAX];
 };

 enum bnxt_health_severity {
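The core of this restructuring is replacing eight groups of
near-identical flat fields with one array of a shared per-type struct,
so per-type code collapses into loops indexed by type.  A minimal
standalone sketch of that shape follows; the field names and sizes are
illustrative and far smaller than the driver's real bnxt_ctx_mem_type.

#include <stdio.h>

enum { CTX_QP, CTX_SRQ, CTX_CQ, CTX_MAX };    /* illustrative subset */

struct ctx_mem_type {        /* common shape shared by all types */
    unsigned short entry_size;
    unsigned int   max_entries;
};

int main(void)
{
    /* One array indexed by type replaces qp_*, srq_*, cq_* ... fields. */
    struct ctx_mem_type ctx_arr[CTX_MAX] = {
        [CTX_QP]  = { .entry_size = 256, .max_entries = 4096 },
        [CTX_SRQ] = { .entry_size = 128, .max_entries = 2048 },
        [CTX_CQ]  = { .entry_size =  64, .max_entries = 8192 },
    };

    /* Per-type logic becomes a loop instead of copy-pasted blocks. */
    for (int t = 0; t < CTX_MAX; t++)
        printf("type %d needs %u bytes\n", t,
               (unsigned)ctx_arr[t].entry_size * ctx_arr[t].max_entries);
    return 0;
}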
From patchwork Mon Nov 20 23:43:56 2023
X-Patchwork-Id: 13462280
From: Michael Chan
To: davem@davemloft.net
Cc: netdev@vger.kernel.org, edumazet@google.com, kuba@kernel.org, pabeni@redhat.com, gospo@broadcom.com, Somnath Kotur
Subject: [PATCH net-next 04/13] bnxt_en: Add page info to struct bnxt_ctx_mem_type
Date: Mon, 20 Nov 2023 15:43:56 -0800
Message-Id: <20231120234405.194542-5-michael.chan@broadcom.com>
In-Reply-To: <20231120234405.194542-1-michael.chan@broadcom.com>
References: <20231120234405.194542-1-michael.chan@broadcom.com>

This will further improve the organization of the bnxt_ctx_mem_info
structure by moving the standalone page info structures into the
bnxt_ctx_mem_type array.  Add the allocation and free logic first; the
next patch will migrate to use the new infrastructure.

Reviewed-by: Somnath Kotur
Signed-off-by: Michael Chan
---
 drivers/net/ethernet/broadcom/bnxt/bnxt.c | 31 +++++++++++++++++++++++
 drivers/net/ethernet/broadcom/bnxt/bnxt.h |  1 +
 2 files changed, 32 insertions(+)

diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt.c b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
index df25b559ee45..3b18bcee151a 100644
--- a/drivers/net/ethernet/broadcom/bnxt/bnxt.c
+++ b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
@@ -7187,6 +7187,27 @@ static void bnxt_init_ctx_initializer(struct bnxt_ctx_mem_type *ctxm,
         ctxm->init_value = 0;
 }

+static int bnxt_alloc_all_ctx_pg_info(struct bnxt *bp, int ctx_max)
+{
+    struct bnxt_ctx_mem_info *ctx = bp->ctx;
+    u16 type;
+
+    for (type = 0; type < ctx_max; type++) {
+        struct bnxt_ctx_mem_type *ctxm = &ctx->ctx_arr[type];
+        int n = 1;
+
+        if (!ctxm->max_entries)
+            continue;
+
+        if (ctxm->instance_bmap)
+            n = hweight32(ctxm->instance_bmap);
+        ctxm->pg_info = kcalloc(n, sizeof(*ctxm->pg_info), GFP_KERNEL);
+        if (!ctxm->pg_info)
+            return -ENOMEM;
+    }
+    return 0;
+}
+
 static int bnxt_hwrm_func_backing_store_qcaps(struct bnxt *bp)
 {
     struct hwrm_func_backing_store_qcaps_output *resp;
@@ -7298,6 +7319,7 @@ static int bnxt_hwrm_func_backing_store_qcaps(struct bnxt *bp)
         }
         for (i = 0; i < tqm_rings; i++, ctx_pg++)
             ctx->tqm_mem[i] = ctx_pg;
+        rc = bnxt_alloc_all_ctx_pg_info(bp, BNXT_CTX_MAX);
     } else {
         rc = 0;
     }
@@ -7564,6 +7586,7 @@ static void bnxt_free_ctx_pg_tbls(struct bnxt *bp,
 void bnxt_free_ctx_mem(struct bnxt *bp)
 {
     struct bnxt_ctx_mem_info *ctx = bp->ctx;
+    u16 type;
     int i;

     if (!ctx)
@@ -7583,6 +7606,14 @@ void bnxt_free_ctx_mem(struct bnxt *bp)
     bnxt_free_ctx_pg_tbls(bp, &ctx->cq_mem);
     bnxt_free_ctx_pg_tbls(bp, &ctx->srq_mem);
     bnxt_free_ctx_pg_tbls(bp, &ctx->qp_mem);
+
+    for (type = 0; type < BNXT_CTX_MAX; type++) {
+        struct bnxt_ctx_mem_type *ctxm = &ctx->ctx_arr[type];
+
+        kfree(ctxm->pg_info);
+        ctxm->pg_info = NULL;
+    }
+
     ctx->flags &= ~BNXT_CTX_FLAG_INITED;
     kfree(ctx);
     bp->ctx = NULL;
diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt.h b/drivers/net/ethernet/broadcom/bnxt/bnxt.h
index 757d2edb8c05..7e67df57b8af 100644
--- a/drivers/net/ethernet/broadcom/bnxt/bnxt.h
+++ b/drivers/net/ethernet/broadcom/bnxt/bnxt.h
@@ -1573,6 +1573,7 @@ struct bnxt_ctx_mem_type {
         };
         u32    split[BNXT_MAX_SPLIT_ENTRY];
     };
+    struct bnxt_ctx_pg_info    *pg_info;
 };

 #define BNXT_CTX_MRAV_AV_SPLIT_ENTRY    0
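The sizing rule bnxt_alloc_all_ctx_pg_info() applies — one page-info
slot per set bit in the type's instance bitmap, defaulting to one — can
be checked in isolation.  A hedged sketch, using __builtin_popcount()
as a stand-in for the kernel's hweight32() and illustrative names:

#include <stdio.h>
#include <stdlib.h>

struct pg_info { unsigned int entries; };

/* Allocate one pg_info per instance; the instance count is the number
 * of set bits in the bitmap (like hweight32()), or 1 with no bitmap. */
static struct pg_info *alloc_pg_info(unsigned int instance_bmap)
{
    int n = instance_bmap ? __builtin_popcount(instance_bmap) : 1;

    return calloc(n, sizeof(struct pg_info));
}

int main(void)
{
    /* e.g. 9 fast-path TQM rings -> bitmap 0x1ff -> 9 slots */
    unsigned int bmap = (1u << 9) - 1;
    struct pg_info *pg = alloc_pg_info(bmap);

    printf("allocated %d slots\n", __builtin_popcount(bmap));
    free(pg);
    return 0;
}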
From patchwork Mon Nov 20 23:43:57 2023
X-Patchwork-Id: 13462282
From: Michael Chan
To: davem@davemloft.net
Cc: netdev@vger.kernel.org, edumazet@google.com, kuba@kernel.org, pabeni@redhat.com, gospo@broadcom.com, Somnath Kotur
Subject: [PATCH net-next 05/13] bnxt_en: Use the pg_info field in bnxt_ctx_mem_type struct
Date: Mon, 20 Nov 2023 15:43:57 -0800
Message-Id: <20231120234405.194542-6-michael.chan@broadcom.com>
In-Reply-To: <20231120234405.194542-1-michael.chan@broadcom.com>
References: <20231120234405.194542-1-michael.chan@broadcom.com>

Use the newly added pg_info field in the bnxt_ctx_mem_type struct and
remove the standalone page info structures in bnxt_ctx_mem_info.  This
completes the reorganization of the context memory structures to work
better with the new and more flexible firmware interface for newer
chips.

Reviewed-by: Somnath Kotur
Signed-off-by: Michael Chan
---
 drivers/net/ethernet/broadcom/bnxt/bnxt.c | 74 +++++++++--------------
 drivers/net/ethernet/broadcom/bnxt/bnxt.h |  9 ---
 2 files changed, 29 insertions(+), 54 deletions(-)

diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt.c b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
index 3b18bcee151a..524023b8e959 100644
--- a/drivers/net/ethernet/broadcom/bnxt/bnxt.c
+++ b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
@@ -7224,11 +7224,9 @@ static int bnxt_hwrm_func_backing_store_qcaps(struct bnxt *bp)
     resp = hwrm_req_hold(bp, req);
     rc = hwrm_req_send_silent(bp, req);
     if (!rc) {
-        struct bnxt_ctx_pg_info *ctx_pg;
         struct bnxt_ctx_mem_type *ctxm;
         struct bnxt_ctx_mem_info *ctx;
         u8 init_val, init_idx = 0;
-        int i, tqm_rings;
         u16 init_mask;

         ctx = kzalloc(sizeof(*ctx), GFP_KERNEL);
@@ -7311,14 +7309,6 @@ static int bnxt_hwrm_func_backing_store_qcaps(struct bnxt *bp)
         ctxm = &ctx->ctx_arr[BNXT_CTX_FTQM];
         ctxm->instance_bmap = (1 << ctx->tqm_fp_rings_count) - 1;

-        tqm_rings = ctx->tqm_fp_rings_count + BNXT_MAX_TQM_SP_RINGS;
-        ctx_pg = kcalloc(tqm_rings, sizeof(*ctx_pg), GFP_KERNEL);
-        if (!ctx_pg) {
-            rc = -ENOMEM;
-            goto ctx_err;
-        }
-        for (i = 0; i < tqm_rings; i++, ctx_pg++)
-            ctx->tqm_mem[i] = ctx_pg;
         rc = bnxt_alloc_all_ctx_pg_info(bp, BNXT_CTX_MAX);
     } else {
         rc = 0;
@@ -7380,8 +7370,8 @@ static int bnxt_hwrm_func_backing_store_cfg(struct bnxt *bp, u32 enables)
     req->enables = cpu_to_le32(enables);
     if (enables & FUNC_BACKING_STORE_CFG_REQ_ENABLES_QP) {
-        ctx_pg = &ctx->qp_mem;
         ctxm = &ctx->ctx_arr[BNXT_CTX_QP];
+        ctx_pg = ctxm->pg_info;
         req->qp_num_entries = cpu_to_le32(ctx_pg->entries);
         req->qp_num_qp1_entries = cpu_to_le16(ctxm->qp_qp1_entries);
         req->qp_num_l2_entries = cpu_to_le16(ctxm->qp_l2_entries);
@@ -7391,8 +7381,8 @@ static int bnxt_hwrm_func_backing_store_cfg(struct bnxt *bp, u32 enables)
     if (enables & FUNC_BACKING_STORE_CFG_REQ_ENABLES_SRQ) {
-        ctx_pg = &ctx->srq_mem;
         ctxm = &ctx->ctx_arr[BNXT_CTX_SRQ];
+        ctx_pg = ctxm->pg_info;
         req->srq_num_entries = cpu_to_le32(ctx_pg->entries);
         req->srq_num_l2_entries = cpu_to_le16(ctxm->srq_l2_entries);
         req->srq_entry_size = cpu_to_le16(ctxm->entry_size);
@@ -7401,8 +7391,8 @@ static int bnxt_hwrm_func_backing_store_cfg(struct bnxt *bp, u32 enables)
     if (enables & FUNC_BACKING_STORE_CFG_REQ_ENABLES_CQ) {
-        ctx_pg = &ctx->cq_mem;
         ctxm = &ctx->ctx_arr[BNXT_CTX_CQ];
+        ctx_pg = ctxm->pg_info;
         req->cq_num_entries = cpu_to_le32(ctx_pg->entries);
         req->cq_num_l2_entries = cpu_to_le16(ctxm->cq_l2_entries);
         req->cq_entry_size = cpu_to_le16(ctxm->entry_size);
@@ -7411,8 +7401,8 @@ static int bnxt_hwrm_func_backing_store_cfg(struct bnxt *bp, u32 enables)
     if (enables & FUNC_BACKING_STORE_CFG_REQ_ENABLES_VNIC) {
-        ctx_pg = &ctx->vnic_mem;
         ctxm = &ctx->ctx_arr[BNXT_CTX_VNIC];
+        ctx_pg = ctxm->pg_info;
         req->vnic_num_vnic_entries = cpu_to_le16(ctxm->vnic_entries);
         req->vnic_num_ring_table_entries =
             cpu_to_le16(ctxm->max_entries - ctxm->vnic_entries);
@@ -7422,8 +7412,8 @@ static int bnxt_hwrm_func_backing_store_cfg(struct bnxt *bp, u32 enables)
     if (enables & FUNC_BACKING_STORE_CFG_REQ_ENABLES_STAT) {
-        ctx_pg = &ctx->stat_mem;
         ctxm = &ctx->ctx_arr[BNXT_CTX_STAT];
+        ctx_pg = ctxm->pg_info;
         req->stat_num_entries = cpu_to_le32(ctxm->max_entries);
         req->stat_entry_size = cpu_to_le16(ctxm->entry_size);
         bnxt_hwrm_set_pg_attr(&ctx_pg->ring_mem,
@@ -7433,8 +7423,8 @@ static int bnxt_hwrm_func_backing_store_cfg(struct bnxt *bp, u32 enables)
     if (enables & FUNC_BACKING_STORE_CFG_REQ_ENABLES_MRAV) {
         u32 units;

-        ctx_pg = &ctx->mrav_mem;
         ctxm = &ctx->ctx_arr[BNXT_CTX_MRAV];
+        ctx_pg = ctxm->pg_info;
         req->mrav_num_entries = cpu_to_le32(ctx_pg->entries);
         units = ctxm->mrav_num_entries_units;
         if (units) {
@@ -7452,8 +7442,8 @@ static int bnxt_hwrm_func_backing_store_cfg(struct bnxt *bp, u32 enables)
     if (enables & FUNC_BACKING_STORE_CFG_REQ_ENABLES_TIM) {
-        ctx_pg = &ctx->tim_mem;
         ctxm = &ctx->ctx_arr[BNXT_CTX_TIM];
+        ctx_pg = ctxm->pg_info;
         req->tim_num_entries = cpu_to_le32(ctx_pg->entries);
         req->tim_entry_size = cpu_to_le16(ctxm->entry_size);
         bnxt_hwrm_set_pg_attr(&ctx_pg->ring_mem,
@@ -7464,14 +7454,15 @@ static int bnxt_hwrm_func_backing_store_cfg(struct bnxt *bp, u32 enables)
     for (i = 0, num_entries = &req->tqm_sp_num_entries,
          pg_attr = &req->tqm_sp_pg_size_tqm_sp_lvl,
          pg_dir = &req->tqm_sp_page_dir,
-         ena = FUNC_BACKING_STORE_CFG_REQ_ENABLES_TQM_SP;
+         ena = FUNC_BACKING_STORE_CFG_REQ_ENABLES_TQM_SP,
+         ctx_pg = ctxm->pg_info;
          i < BNXT_MAX_TQM_RINGS;
+         ctx_pg = &ctx->ctx_arr[BNXT_CTX_FTQM].pg_info[i],
          i++, num_entries++, pg_attr++, pg_dir++, ena <<= 1) {
         if (!(enables & ena))
             continue;

         req->tqm_entry_size = cpu_to_le16(ctxm->entry_size);
-        ctx_pg = ctx->tqm_mem[i];
         *num_entries = cpu_to_le32(ctx_pg->entries);
         bnxt_hwrm_set_pg_attr(&ctx_pg->ring_mem, pg_attr, pg_dir);
     }
@@ -7587,30 +7578,23 @@ void bnxt_free_ctx_mem(struct bnxt *bp)
 {
     struct bnxt_ctx_mem_info *ctx = bp->ctx;
     u16 type;
-    int i;

     if (!ctx)
         return;

-    if (ctx->tqm_mem[0]) {
-        for (i = 0; i < ctx->tqm_fp_rings_count + 1; i++)
-            bnxt_free_ctx_pg_tbls(bp, ctx->tqm_mem[i]);
-        kfree(ctx->tqm_mem[0]);
-        ctx->tqm_mem[0] = NULL;
-    }
-
-    bnxt_free_ctx_pg_tbls(bp, &ctx->tim_mem);
-    bnxt_free_ctx_pg_tbls(bp, &ctx->mrav_mem);
-    bnxt_free_ctx_pg_tbls(bp, &ctx->stat_mem);
-    bnxt_free_ctx_pg_tbls(bp, &ctx->vnic_mem);
-    bnxt_free_ctx_pg_tbls(bp, &ctx->cq_mem);
-    bnxt_free_ctx_pg_tbls(bp, &ctx->srq_mem);
-    bnxt_free_ctx_pg_tbls(bp, &ctx->qp_mem);
-
     for (type = 0; type < BNXT_CTX_MAX; type++) {
         struct bnxt_ctx_mem_type *ctxm = &ctx->ctx_arr[type];
+        struct bnxt_ctx_pg_info *ctx_pg = ctxm->pg_info;
+        int i, n = 1;
+
+        if (!ctx_pg)
+            continue;
+        if (ctxm->instance_bmap)
+            n = hweight32(ctxm->instance_bmap);
+        for (i = 0; i < n; i++)
+            bnxt_free_ctx_pg_tbls(bp, &ctx_pg[i]);

-        kfree(ctxm->pg_info);
+        kfree(ctx_pg);
         ctxm->pg_info = NULL;
     }
@@ -7658,7 +7642,7 @@ static int bnxt_alloc_ctx_mem(struct bnxt *bp)
     }

     ctxm = &ctx->ctx_arr[BNXT_CTX_QP];
-    ctx_pg = &ctx->qp_mem;
+    ctx_pg = ctxm->pg_info;
     ctx_pg->entries = l2_qps + qp1_qps + extra_qps;
     if (ctxm->entry_size) {
         mem_size = ctxm->entry_size * ctx_pg->entries;
@@ -7668,7 +7652,7 @@ static int bnxt_alloc_ctx_mem(struct bnxt *bp)
     }

     ctxm = &ctx->ctx_arr[BNXT_CTX_SRQ];
-    ctx_pg = &ctx->srq_mem;
+    ctx_pg = ctxm->pg_info;
     ctx_pg->entries = srqs + extra_srqs;
     if (ctxm->entry_size) {
         mem_size = ctxm->entry_size * ctx_pg->entries;
@@ -7678,7 +7662,7 @@ static int bnxt_alloc_ctx_mem(struct bnxt *bp)
     }

     ctxm = &ctx->ctx_arr[BNXT_CTX_CQ];
-    ctx_pg = &ctx->cq_mem;
+    ctx_pg = ctxm->pg_info;
     ctx_pg->entries = ctxm->cq_l2_entries + extra_qps * 2;
     if (ctxm->entry_size) {
         mem_size = ctxm->entry_size * ctx_pg->entries;
@@ -7688,7 +7672,7 @@ static int bnxt_alloc_ctx_mem(struct bnxt *bp)
     }

     ctxm = &ctx->ctx_arr[BNXT_CTX_VNIC];
-    ctx_pg = &ctx->vnic_mem;
+    ctx_pg = ctxm->pg_info;
     ctx_pg->entries = ctxm->max_entries;
     if (ctxm->entry_size) {
         mem_size = ctxm->entry_size * ctx_pg->entries;
@@ -7698,7 +7682,7 @@ static int bnxt_alloc_ctx_mem(struct bnxt *bp)
     }

     ctxm = &ctx->ctx_arr[BNXT_CTX_STAT];
-    ctx_pg = &ctx->stat_mem;
+    ctx_pg = ctxm->pg_info;
     ctx_pg->entries = ctxm->max_entries;
     if (ctxm->entry_size) {
         mem_size = ctxm->entry_size * ctx_pg->entries;
@@ -7712,7 +7696,7 @@ static int bnxt_alloc_ctx_mem(struct bnxt *bp)
         goto skip_rdma;

     ctxm = &ctx->ctx_arr[BNXT_CTX_MRAV];
-    ctx_pg = &ctx->mrav_mem;
+    ctx_pg = ctxm->pg_info;
     /* 128K extra is needed to accommodate static AH context
      * allocation by f/w.
      */
@@ -7732,7 +7716,7 @@ static int bnxt_alloc_ctx_mem(struct bnxt *bp)
     ena = FUNC_BACKING_STORE_CFG_REQ_ENABLES_MRAV;

     ctxm = &ctx->ctx_arr[BNXT_CTX_TIM];
-    ctx_pg = &ctx->tim_mem;
+    ctx_pg = ctxm->pg_info;
     ctx_pg->entries = l2_qps + qp1_qps + extra_qps;
     if (ctxm->entry_size) {
         mem_size = ctxm->entry_size * ctx_pg->entries;
@@ -7751,8 +7735,8 @@ static int bnxt_alloc_ctx_mem(struct bnxt *bp)
     entries = l2_qps + 2 * (extra_qps + qp1_qps);
     entries = roundup(entries, ctxm->entry_multiple);
     entries = clamp_t(u32, entries, min, ctxm->max_entries);
-    for (i = 0; i < ctx->tqm_fp_rings_count + 1; i++) {
-        ctx_pg = ctx->tqm_mem[i];
+    for (i = 0, ctx_pg = ctxm->pg_info; i < ctx->tqm_fp_rings_count + 1;
+         ctx_pg = &ctx->ctx_arr[BNXT_CTX_FTQM].pg_info[i], i++) {
         ctx_pg->entries = i ? entries : entries_sp;
         if (ctxm->entry_size) {
             mem_size = ctxm->entry_size * ctx_pg->entries;
diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt.h b/drivers/net/ethernet/broadcom/bnxt/bnxt.h
index 7e67df57b8af..067a66eedf36 100644
--- a/drivers/net/ethernet/broadcom/bnxt/bnxt.h
+++ b/drivers/net/ethernet/broadcom/bnxt/bnxt.h
@@ -1609,15 +1609,6 @@ struct bnxt_ctx_mem_info {
     u32    flags;
 #define BNXT_CTX_FLAG_INITED    0x01
     struct bnxt_ctx_mem_type    ctx_arr[BNXT_CTX_MAX];
-
-    struct bnxt_ctx_pg_info qp_mem;
-    struct bnxt_ctx_pg_info srq_mem;
-    struct bnxt_ctx_pg_info cq_mem;
-    struct bnxt_ctx_pg_info vnic_mem;
-    struct bnxt_ctx_pg_info stat_mem;
-    struct bnxt_ctx_pg_info mrav_mem;
-    struct bnxt_ctx_pg_info tim_mem;
-    struct bnxt_ctx_pg_info *tqm_mem[BNXT_MAX_TQM_RINGS];
 };

 enum bnxt_health_severity {
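With pg_info attached to each type, teardown becomes one uniform loop
over the type array instead of hard-coded per-field frees.  A hedged
standalone sketch of that loop shape (illustrative names; CTX_MAX and
the struct layout are stand-ins, and __builtin_popcount() replaces
hweight32()):

#include <stdlib.h>

#define CTX_MAX 9                       /* illustrative */

struct pg_info { void *mem; };
struct ctx_mem_type {
    unsigned int   instance_bmap;
    struct pg_info *pg_info;
};

/* One loop frees every instance of every type, replacing the old
 * hard-coded qp_mem/srq_mem/.../tqm_mem[] frees. */
static void free_all(struct ctx_mem_type *arr)
{
    for (int t = 0; t < CTX_MAX; t++) {
        struct pg_info *pg = arr[t].pg_info;
        int n = arr[t].instance_bmap ?
                __builtin_popcount(arr[t].instance_bmap) : 1;

        if (!pg)
            continue;
        for (int i = 0; i < n; i++)
            free(pg[i].mem);            /* per-instance tables */
        free(pg);                       /* the pg_info array itself */
        arr[t].pg_info = NULL;
    }
}

int main(void)
{
    struct ctx_mem_type arr[CTX_MAX] = { 0 };

    arr[0].pg_info = calloc(1, sizeof(*arr[0].pg_info));
    free_all(arr);
    return 0;
}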
From patchwork Mon Nov 20 23:43:58 2023
X-Patchwork-Submitter: Michael Chan
X-Patchwork-Id: 13462283
X-Patchwork-Delegate: kuba@kernel.org
From: Michael Chan
To: davem@davemloft.net
Cc: netdev@vger.kernel.org, edumazet@google.com, kuba@kernel.org,
 pabeni@redhat.com, gospo@broadcom.com, Andy Gospodarek
Subject: [PATCH net-next 06/13] bnxt_en: Add bnxt_setup_ctxm_pg_tbls() helper function
Date: Mon, 20 Nov 2023 15:43:58 -0800
Message-Id: <20231120234405.194542-7-michael.chan@broadcom.com>
In-Reply-To: <20231120234405.194542-1-michael.chan@broadcom.com>
References: <20231120234405.194542-1-michael.chan@broadcom.com>

In bnxt_alloc_ctx_mem(), the logic to set up the context memory entries
and to allocate the context memory tables is done repetitively.  Add a
helper function to simplify the code.

The setup of the Fast Path TQM entries relies on some information from
the Slow Path TQM entries.  Copy the SP_TQM entries to the FP_TQM
entries to simplify the logic.
Reviewed-by: Andy Gospodarek
Signed-off-by: Michael Chan
---
 drivers/net/ethernet/broadcom/bnxt/bnxt.c | 133 ++++++++++------------
 1 file changed, 59 insertions(+), 74 deletions(-)

diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt.c b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
index 524023b8e959..e42c82ed0fd5 100644
--- a/drivers/net/ethernet/broadcom/bnxt/bnxt.c
+++ b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
@@ -7307,6 +7307,7 @@ static int bnxt_hwrm_func_backing_store_qcaps(struct bnxt *bp)
 			ctx->tqm_fp_rings_count = BNXT_MAX_TQM_FP_RINGS;
 
 		ctxm = &ctx->ctx_arr[BNXT_CTX_FTQM];
+		memcpy(ctxm, &ctx->ctx_arr[BNXT_CTX_STQM], sizeof(*ctxm));
 		ctxm->instance_bmap = (1 << ctx->tqm_fp_rings_count) - 1;
 
 		rc = bnxt_alloc_all_ctx_pg_info(bp, BNXT_CTX_MAX);
@@ -7574,6 +7575,30 @@ static void bnxt_free_ctx_pg_tbls(struct bnxt *bp,
 	ctx_pg->nr_pages = 0;
 }
 
+static int bnxt_setup_ctxm_pg_tbls(struct bnxt *bp,
+				   struct bnxt_ctx_mem_type *ctxm, u32 entries,
+				   u8 pg_lvl)
+{
+	struct bnxt_ctx_pg_info *ctx_pg = ctxm->pg_info;
+	int i, rc = 0, n = 1;
+	u32 mem_size;
+
+	if (!ctxm->entry_size || !ctx_pg)
+		return -EINVAL;
+	if (ctxm->instance_bmap)
+		n = hweight32(ctxm->instance_bmap);
+	if (ctxm->entry_multiple)
+		entries = roundup(entries, ctxm->entry_multiple);
+	entries = clamp_t(u32, entries, ctxm->min_entries, ctxm->max_entries);
+	mem_size = entries * ctxm->entry_size;
+	for (i = 0; i < n && !rc; i++) {
+		ctx_pg[i].entries = entries;
+		rc = bnxt_alloc_ctx_pg_tbls(bp, &ctx_pg[i], mem_size, pg_lvl,
+					    ctxm->init_value ? ctxm : NULL);
+	}
+	return rc;
+}
+
 void bnxt_free_ctx_mem(struct bnxt *bp)
 {
 	struct bnxt_ctx_mem_info *ctx = bp->ctx;
@@ -7605,13 +7630,11 @@ void bnxt_free_ctx_mem(struct bnxt *bp)
 
 static int bnxt_alloc_ctx_mem(struct bnxt *bp)
 {
-	struct bnxt_ctx_pg_info *ctx_pg;
 	struct bnxt_ctx_mem_type *ctxm;
 	struct bnxt_ctx_mem_info *ctx;
 	u32 l2_qps, qp1_qps, max_qps;
-	u32 mem_size, ena, entries;
-	u32 entries_sp, min;
-	u32 srqs, max_srqs;
+	u32 ena, entries_sp, entries;
+	u32 srqs, max_srqs, min;
 	u32 num_mr, num_ah;
 	u32 extra_srqs = 0;
 	u32 extra_qps = 0;
@@ -7642,61 +7665,37 @@ static int bnxt_alloc_ctx_mem(struct bnxt *bp)
 	}
 
 	ctxm = &ctx->ctx_arr[BNXT_CTX_QP];
-	ctx_pg = ctxm->pg_info;
-	ctx_pg->entries = l2_qps + qp1_qps + extra_qps;
-	if (ctxm->entry_size) {
-		mem_size = ctxm->entry_size * ctx_pg->entries;
-		rc = bnxt_alloc_ctx_pg_tbls(bp, ctx_pg, mem_size, pg_lvl, ctxm);
-		if (rc)
-			return rc;
-	}
+	rc = bnxt_setup_ctxm_pg_tbls(bp, ctxm, l2_qps + qp1_qps + extra_qps,
+				     pg_lvl);
+	if (rc)
+		return rc;
 
 	ctxm = &ctx->ctx_arr[BNXT_CTX_SRQ];
-	ctx_pg = ctxm->pg_info;
-	ctx_pg->entries = srqs + extra_srqs;
-	if (ctxm->entry_size) {
-		mem_size = ctxm->entry_size * ctx_pg->entries;
-		rc = bnxt_alloc_ctx_pg_tbls(bp, ctx_pg, mem_size, pg_lvl, ctxm);
-		if (rc)
-			return rc;
-	}
+	rc = bnxt_setup_ctxm_pg_tbls(bp, ctxm, srqs + extra_srqs, pg_lvl);
+	if (rc)
+		return rc;
 
 	ctxm = &ctx->ctx_arr[BNXT_CTX_CQ];
-	ctx_pg = ctxm->pg_info;
-	ctx_pg->entries = ctxm->cq_l2_entries + extra_qps * 2;
-	if (ctxm->entry_size) {
-		mem_size = ctxm->entry_size * ctx_pg->entries;
-		rc = bnxt_alloc_ctx_pg_tbls(bp, ctx_pg, mem_size, pg_lvl, ctxm);
-		if (rc)
-			return rc;
-	}
+	rc = bnxt_setup_ctxm_pg_tbls(bp, ctxm, ctxm->cq_l2_entries +
+				     extra_qps * 2, pg_lvl);
+	if (rc)
+		return rc;
 
 	ctxm = &ctx->ctx_arr[BNXT_CTX_VNIC];
-	ctx_pg = ctxm->pg_info;
-	ctx_pg->entries = ctxm->max_entries;
-	if (ctxm->entry_size) {
-		mem_size = ctxm->entry_size * ctx_pg->entries;
-		rc = bnxt_alloc_ctx_pg_tbls(bp, ctx_pg, mem_size, 1, ctxm);
-		if (rc)
-			return rc;
-	}
+	rc = bnxt_setup_ctxm_pg_tbls(bp, ctxm, ctxm->max_entries, 1);
+	if (rc)
+		return rc;
 
 	ctxm = &ctx->ctx_arr[BNXT_CTX_STAT];
-	ctx_pg = ctxm->pg_info;
-	ctx_pg->entries = ctxm->max_entries;
-	if (ctxm->entry_size) {
-		mem_size = ctxm->entry_size * ctx_pg->entries;
-		rc = bnxt_alloc_ctx_pg_tbls(bp, ctx_pg, mem_size, 1, ctxm);
-		if (rc)
-			return rc;
-	}
+	rc = bnxt_setup_ctxm_pg_tbls(bp, ctxm, ctxm->max_entries, 1);
+	if (rc)
+		return rc;
 
 	ena = 0;
 	if (!(bp->flags & BNXT_FLAG_ROCE_CAP))
 		goto skip_rdma;
 
 	ctxm = &ctx->ctx_arr[BNXT_CTX_MRAV];
-	ctx_pg = ctxm->pg_info;
 	/* 128K extra is needed to accommodate static AH context
 	 * allocation by f/w.
 	 */
@@ -7706,24 +7705,15 @@ static int bnxt_alloc_ctx_mem(struct bnxt *bp)
 	if (!ctxm->mrav_av_entries || ctxm->mrav_av_entries > num_ah)
 		ctxm->mrav_av_entries = num_ah;
 
-	ctx_pg->entries = num_mr + num_ah;
-	if (ctxm->entry_size) {
-		mem_size = ctxm->entry_size * ctx_pg->entries;
-		rc = bnxt_alloc_ctx_pg_tbls(bp, ctx_pg, mem_size, 2, ctxm);
-		if (rc)
-			return rc;
-	}
+	rc = bnxt_setup_ctxm_pg_tbls(bp, ctxm, num_mr + num_ah, 2);
+	if (rc)
+		return rc;
 	ena = FUNC_BACKING_STORE_CFG_REQ_ENABLES_MRAV;
 
 	ctxm = &ctx->ctx_arr[BNXT_CTX_TIM];
-	ctx_pg = ctxm->pg_info;
-	ctx_pg->entries = l2_qps + qp1_qps + extra_qps;
-	if (ctxm->entry_size) {
-		mem_size = ctxm->entry_size * ctx_pg->entries;
-		rc = bnxt_alloc_ctx_pg_tbls(bp, ctx_pg, mem_size, 1, NULL);
-		if (rc)
-			return rc;
-	}
+	rc = bnxt_setup_ctxm_pg_tbls(bp, ctxm, l2_qps + qp1_qps + extra_qps, 1);
+	if (rc)
+		return rc;
 	ena |= FUNC_BACKING_STORE_CFG_REQ_ENABLES_TIM;
 
 skip_rdma:
@@ -7731,22 +7721,17 @@ static int bnxt_alloc_ctx_mem(struct bnxt *bp)
 	min = ctxm->min_entries;
 	entries_sp = ctx->ctx_arr[BNXT_CTX_VNIC].vnic_entries + l2_qps +
 		     2 * (extra_qps + qp1_qps) + min;
-	entries_sp = roundup(entries_sp, ctxm->entry_multiple);
+	rc = bnxt_setup_ctxm_pg_tbls(bp, ctxm, entries_sp, 2);
+	if (rc)
+		return rc;
+
+	ctxm = &ctx->ctx_arr[BNXT_CTX_FTQM];
 	entries = l2_qps + 2 * (extra_qps + qp1_qps);
-	entries = roundup(entries, ctxm->entry_multiple);
-	entries = clamp_t(u32, entries, min, ctxm->max_entries);
-	for (i = 0, ctx_pg = ctxm->pg_info; i < ctx->tqm_fp_rings_count + 1;
-	     ctx_pg = &ctx->ctx_arr[BNXT_CTX_FTQM].pg_info[i], i++) {
-		ctx_pg->entries = i ? entries : entries_sp;
-		if (ctxm->entry_size) {
-			mem_size = ctxm->entry_size * ctx_pg->entries;
-			rc = bnxt_alloc_ctx_pg_tbls(bp, ctx_pg, mem_size, 1,
-						    NULL);
-			if (rc)
-				return rc;
-		}
+	rc = bnxt_setup_ctxm_pg_tbls(bp, ctxm, entries, 2);
+	if (rc)
+		return rc;
+	for (i = 0; i < ctx->tqm_fp_rings_count + 1; i++)
 		ena |= FUNC_BACKING_STORE_CFG_REQ_ENABLES_TQM_SP << i;
-	}
 	ena |= FUNC_BACKING_STORE_CFG_REQ_DFLT_ENABLES;
 	rc = bnxt_hwrm_func_backing_store_cfg(bp, ena);
 	if (rc) {
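For reference, the entries math that bnxt_setup_ctxm_pg_tbls() centralizes
(round up to entry_multiple, then clamp to [min_entries, max_entries]) can be
checked in isolation.  A hedged userspace sketch; roundup_u32() and
clamp_u32() are stand-ins for the kernel's roundup() and clamp_t() macros,
and the numbers are invented:

    #include <stdio.h>

    typedef unsigned int u32;

    /* round x up to the next multiple (multiple must be non-zero,
     * which the helper guarantees by checking entry_multiple first) */
    static u32 roundup_u32(u32 x, u32 multiple)
    {
        return ((x + multiple - 1) / multiple) * multiple;
    }

    static u32 clamp_u32(u32 val, u32 lo, u32 hi)
    {
        return val < lo ? lo : (val > hi ? hi : val);
    }

    int main(void)
    {
        u32 entries = 1000, entry_multiple = 64;
        u32 min_entries = 256, max_entries = 1024 * 1024;

        entries = roundup_u32(entries, entry_multiple);  /* -> 1024 */
        entries = clamp_u32(entries, min_entries, max_entries);
        printf("entries = %u\n", entries);               /* prints 1024 */
        return 0;
    }

The same rounded/clamped value is then applied to every instance of the
type, which is what lets one helper cover QP, SRQ, CQ and all TQM rings.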
From patchwork Mon Nov 20 23:43:59 2023
X-Patchwork-Submitter: Michael Chan
X-Patchwork-Id: 13462284
X-Patchwork-Delegate: kuba@kernel.org
From: Michael Chan
To: davem@davemloft.net
Cc: netdev@vger.kernel.org, edumazet@google.com, kuba@kernel.org,
 pabeni@redhat.com, gospo@broadcom.com, Pavan Chebbi
Subject: [PATCH net-next 07/13] bnxt_en: Add support for new backing store query firmware API
Date: Mon, 20 Nov 2023 15:43:59 -0800
Message-Id: <20231120234405.194542-8-michael.chan@broadcom.com>
In-Reply-To: <20231120234405.194542-1-michael.chan@broadcom.com>
References: <20231120234405.194542-1-michael.chan@broadcom.com>

Use the new v2 firmware API if supported by the firmware.  We now have
the infrastructure to support the v2 API.

Reviewed-by: Pavan Chebbi
Signed-off-by: Michael Chan
---
 drivers/net/ethernet/broadcom/bnxt/bnxt.c | 85 +++++++++++++++++++++--
 drivers/net/ethernet/broadcom/bnxt/bnxt.h |  3 +-
 2 files changed, 81 insertions(+), 7 deletions(-)

diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt.c b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
index e42c82ed0fd5..19da6c8f8650 100644
--- a/drivers/net/ethernet/broadcom/bnxt/bnxt.c
+++ b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
@@ -7208,6 +7208,72 @@ static int bnxt_alloc_all_ctx_pg_info(struct bnxt *bp, int ctx_max)
 	return 0;
 }
 
+#define BNXT_CTX_INIT_VALID(flags)	\
+	(!!((flags) &			\
+	    FUNC_BACKING_STORE_QCAPS_V2_RESP_FLAGS_ENABLE_CTX_KIND_INIT))
+
+static int bnxt_hwrm_func_backing_store_qcaps_v2(struct bnxt *bp)
+{
+	struct hwrm_func_backing_store_qcaps_v2_output *resp;
+	struct hwrm_func_backing_store_qcaps_v2_input *req;
+	u16 last_valid_type = BNXT_CTX_INV;
+	struct bnxt_ctx_mem_info *ctx;
+	u16 type;
+	int rc;
+
+	rc = hwrm_req_init(bp, req, HWRM_FUNC_BACKING_STORE_QCAPS_V2);
+	if (rc)
+		return rc;
+
+	ctx = kzalloc(sizeof(*ctx), GFP_KERNEL);
+	if (!ctx)
+		return -ENOMEM;
+	bp->ctx = ctx;
+
+	resp = hwrm_req_hold(bp, req);
+
+	for (type = 0; type < BNXT_CTX_V2_MAX; ) {
+		struct bnxt_ctx_mem_type *ctxm = &ctx->ctx_arr[type];
+		u8 init_val, init_off, i;
+		__le32 *p;
+		u32 flags;
+
+		req->type = cpu_to_le16(type);
+		rc = hwrm_req_send(bp, req);
+		if (rc)
+			goto ctx_done;
+		flags = le32_to_cpu(resp->flags);
+		type = le16_to_cpu(resp->next_valid_type);
+		if (!(flags & FUNC_BACKING_STORE_QCAPS_V2_RESP_FLAGS_TYPE_VALID))
+			continue;
+
+		ctxm->type = le16_to_cpu(resp->type);
+		last_valid_type = ctxm->type;
+		ctxm->entry_size = le16_to_cpu(resp->entry_size);
+		ctxm->flags = flags;
+		ctxm->instance_bmap = le32_to_cpu(resp->instance_bit_map);
+		ctxm->entry_multiple = resp->entry_multiple;
+		ctxm->max_entries = le32_to_cpu(resp->max_num_entries);
+		ctxm->min_entries = le32_to_cpu(resp->min_num_entries);
+		init_val = resp->ctx_init_value;
+		init_off = resp->ctx_init_offset;
+		bnxt_init_ctx_initializer(ctxm, init_val, init_off,
+					  BNXT_CTX_INIT_VALID(flags));
+		ctxm->split_entry_cnt = min_t(u8, resp->subtype_valid_cnt,
+					      BNXT_MAX_SPLIT_ENTRY);
+		for (i = 0, p = &resp->split_entry_0; i < ctxm->split_entry_cnt;
+		     i++, p++)
+			ctxm->split[i] = le32_to_cpu(*p);
+	}
+	if (last_valid_type < BNXT_CTX_V2_MAX)
+		ctx->ctx_arr[last_valid_type].last = true;
+	rc = bnxt_alloc_all_ctx_pg_info(bp, BNXT_CTX_V2_MAX);
+
+ctx_done:
+	hwrm_req_drop(bp, req);
+	return rc;
+}
+
 static int bnxt_hwrm_func_backing_store_qcaps(struct bnxt *bp)
 {
 	struct hwrm_func_backing_store_qcaps_output *resp;
@@ -7217,6 +7283,9 @@ static int bnxt_hwrm_func_backing_store_qcaps(struct bnxt *bp)
 	if (bp->hwrm_spec_code < 0x10902 || BNXT_VF(bp) || bp->ctx)
 		return 0;
 
+	if (bp->fw_cap & BNXT_FW_CAP_BACKING_STORE_V2)
+		return bnxt_hwrm_func_backing_store_qcaps_v2(bp);
+
 	rc = hwrm_req_init(bp, req, HWRM_FUNC_BACKING_STORE_QCAPS);
 	if (rc)
 		return rc;
@@ -7229,13 +7298,15 @@ static int bnxt_hwrm_func_backing_store_qcaps(struct bnxt *bp)
 		u8 init_val, init_idx = 0;
 		u16 init_mask;
 
-		ctx = kzalloc(sizeof(*ctx), GFP_KERNEL);
+		ctx = bp->ctx;
 		if (!ctx) {
-			rc = -ENOMEM;
-			goto ctx_err;
+			ctx = kzalloc(sizeof(*ctx), GFP_KERNEL);
+			if (!ctx) {
+				rc = -ENOMEM;
+				goto ctx_err;
+			}
+			bp->ctx = ctx;
 		}
-		bp->ctx = ctx;
-
 		init_val = resp->ctx_kind_initializer;
 		init_mask = le16_to_cpu(resp->ctx_init_mask);
@@ -7607,7 +7678,7 @@ void bnxt_free_ctx_mem(struct bnxt *bp)
 	if (!ctx)
 		return;
 
-	for (type = 0; type < BNXT_CTX_MAX; type++) {
+	for (type = 0; type < BNXT_CTX_V2_MAX; type++) {
 		struct bnxt_ctx_mem_type *ctxm = &ctx->ctx_arr[type];
 		struct bnxt_ctx_pg_info *ctx_pg = ctxm->pg_info;
 		int i, n = 1;
@@ -7914,6 +7985,8 @@ static int __bnxt_hwrm_func_qcaps(struct bnxt *bp)
 			bp->fw_cap |= BNXT_FW_CAP_HOT_RESET_IF;
 		if (BNXT_PF(bp) && (flags_ext & FUNC_QCAPS_RESP_FLAGS_EXT_FW_LIVEPATCH_SUPPORTED))
 			bp->fw_cap |= BNXT_FW_CAP_LIVEPATCH;
+		if (flags_ext & FUNC_QCAPS_RESP_FLAGS_EXT_BS_V2_SUPPORTED)
+			bp->fw_cap |= BNXT_FW_CAP_BACKING_STORE_V2;
 
 		flags_ext2 = le32_to_cpu(resp->flags_ext2);
 		if (flags_ext2 & FUNC_QCAPS_RESP_FLAGS_EXT2_RX_ALL_PKTS_TIMESTAMPS_SUPPORTED)
diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt.h b/drivers/net/ethernet/broadcom/bnxt/bnxt.h
index 067a66eedf36..0dbf854530f1 100644
--- a/drivers/net/ethernet/broadcom/bnxt/bnxt.h
+++ b/drivers/net/ethernet/broadcom/bnxt/bnxt.h
@@ -1608,7 +1608,7 @@ struct bnxt_ctx_mem_info {
 	u32 flags;
 	#define BNXT_CTX_FLAG_INITED 0x01
 
-	struct bnxt_ctx_mem_type ctx_arr[BNXT_CTX_MAX];
+	struct bnxt_ctx_mem_type ctx_arr[BNXT_CTX_V2_MAX];
 };
 
 enum bnxt_health_severity {
@@ -2070,6 +2070,7 @@ struct bnxt {
 	#define BNXT_FW_CAP_THRESHOLD_TEMP_SUPPORTED	BIT_ULL(33)
 	#define BNXT_FW_CAP_DFLT_VLAN_TPID_PCP		BIT_ULL(34)
 	#define BNXT_FW_CAP_PRE_RESV_VNICS		BIT_ULL(35)
+	#define BNXT_FW_CAP_BACKING_STORE_V2		BIT_ULL(36)
 
 	u32 fw_dbg_cap;
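The qcaps_v2 loop above walks a sparse, firmware-defined table: each response
names the next valid type, and invalid slots are skipped after the cursor has
already advanced.  A toy model of just that control flow (standalone
userspace C; the table contents are made up for illustration):

    #include <stdio.h>

    #define MAX_TYPE   8
    #define FLAG_VALID 0x1

    /* mock of the firmware response; the real fields are little-endian */
    struct qcaps_resp {
        unsigned int flags;
        unsigned short next_valid_type;
    };

    /* a hypothetical firmware table: only types 0, 2 and 5 are valid */
    static struct qcaps_resp fw_table[MAX_TYPE] = {
        [0] = { FLAG_VALID, 2 }, [1] = { 0, 2 },
        [2] = { FLAG_VALID, 5 }, [3] = { 0, 5 }, [4] = { 0, 5 },
        [5] = { FLAG_VALID, MAX_TYPE },
        [6] = { 0, MAX_TYPE },   [7] = { 0, MAX_TYPE },
    };

    int main(void)
    {
        unsigned short type;

        for (type = 0; type < MAX_TYPE; ) {
            struct qcaps_resp *resp = &fw_table[type];
            unsigned short cur = type;

            type = resp->next_valid_type; /* advance before validity check */
            if (!(resp->flags & FLAG_VALID))
                continue;
            printf("type %u is valid\n", cur);  /* prints 0, 2, 5 */
        }
        return 0;
    }

Advancing the cursor before the validity test is what keeps the loop from
spinning on an invalid slot, and the last valid type seen gets the "last"
marker used later by the v2 configure call.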
From patchwork Mon Nov 20 23:44:00 2023
X-Patchwork-Submitter: Michael Chan
X-Patchwork-Id: 13462285
X-Patchwork-Delegate: kuba@kernel.org
From: Michael Chan
To: davem@davemloft.net
Cc: netdev@vger.kernel.org, edumazet@google.com, kuba@kernel.org,
 pabeni@redhat.com, gospo@broadcom.com, Hongguang Gao
Subject: [PATCH net-next 08/13] bnxt_en: Add support for HWRM_FUNC_BACKING_STORE_CFG_V2 firmware calls
Date: Mon, 20 Nov 2023 15:44:00 -0800
Message-Id: <20231120234405.194542-9-michael.chan@broadcom.com>
In-Reply-To: <20231120234405.194542-1-michael.chan@broadcom.com>
References: <20231120234405.194542-1-michael.chan@broadcom.com>

Newer chips starting with 57600 will use this new firmware HWRM call to
configure backing store memory.  Add this new call if it is supported
by the firmware.

Reviewed-by: Hongguang Gao
Signed-off-by: Michael Chan
---
 drivers/net/ethernet/broadcom/bnxt/bnxt.c | 71 ++++++++++++++++++++++-
 drivers/net/ethernet/broadcom/bnxt/bnxt.h |  1 +
 2 files changed, 71 insertions(+), 1 deletion(-)

diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt.c b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
index 19da6c8f8650..85d1fdb616ca 100644
--- a/drivers/net/ethernet/broadcom/bnxt/bnxt.c
+++ b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
@@ -7670,6 +7670,71 @@ static int bnxt_setup_ctxm_pg_tbls(struct bnxt *bp,
 	return rc;
 }
 
+static int bnxt_hwrm_func_backing_store_cfg_v2(struct bnxt *bp,
+					       struct bnxt_ctx_mem_type *ctxm,
+					       bool last)
+{
+	struct hwrm_func_backing_store_cfg_v2_input *req;
+	u32 instance_bmap = ctxm->instance_bmap;
+	int i, j, rc = 0, n = 1;
+	__le32 *p;
+
+	if (!(ctxm->flags & BNXT_CTX_MEM_TYPE_VALID) || !ctxm->pg_info)
+		return 0;
+
+	if (instance_bmap)
+		n = hweight32(ctxm->instance_bmap);
+	else
+		instance_bmap = 1;
+
+	rc = hwrm_req_init(bp, req, HWRM_FUNC_BACKING_STORE_CFG_V2);
+	if (rc)
+		return rc;
+	hwrm_req_hold(bp, req);
+	req->type = cpu_to_le16(ctxm->type);
+	req->entry_size = cpu_to_le16(ctxm->entry_size);
+	req->subtype_valid_cnt = ctxm->split_entry_cnt;
+	for (i = 0, p = &req->split_entry_0; i < ctxm->split_entry_cnt; i++)
+		p[i] = cpu_to_le32(ctxm->split[i]);
+	for (i = 0, j = 0; j < n && !rc; i++) {
+		struct bnxt_ctx_pg_info *ctx_pg;
+
+		if (!(instance_bmap & (1 << i)))
+			continue;
+		req->instance = cpu_to_le16(i);
+		ctx_pg = &ctxm->pg_info[j++];
+		if (!ctx_pg->entries)
+			continue;
+		req->num_entries = cpu_to_le32(ctx_pg->entries);
+		bnxt_hwrm_set_pg_attr(&ctx_pg->ring_mem,
+				      &req->page_size_pbl_level,
+				      &req->page_dir);
+		if (last && j == n)
+			req->flags =
+				cpu_to_le32(FUNC_BACKING_STORE_CFG_V2_REQ_FLAGS_BS_CFG_ALL_DONE);
+		rc = hwrm_req_send(bp, req);
+	}
+	hwrm_req_drop(bp, req);
+	return rc;
+}
+
+static int bnxt_backing_store_cfg_v2(struct bnxt *bp)
+{
+	struct bnxt_ctx_mem_info *ctx = bp->ctx;
+	struct bnxt_ctx_mem_type *ctxm;
+	int rc = 0;
+	u16 type;
+
+	for (type = 0 ; type < BNXT_CTX_V2_MAX; type++) {
+		ctxm = &ctx->ctx_arr[type];
+
+		rc = bnxt_hwrm_func_backing_store_cfg_v2(bp, ctxm, ctxm->last);
+		if (rc)
+			return rc;
+	}
+	return 0;
+}
+
 void bnxt_free_ctx_mem(struct bnxt *bp)
 {
 	struct bnxt_ctx_mem_info *ctx = bp->ctx;
@@ -7804,7 +7869,11 @@ static int bnxt_alloc_ctx_mem(struct bnxt *bp)
 	for (i = 0; i < ctx->tqm_fp_rings_count + 1; i++)
 		ena |= FUNC_BACKING_STORE_CFG_REQ_ENABLES_TQM_SP << i;
 	ena |= FUNC_BACKING_STORE_CFG_REQ_DFLT_ENABLES;
-	rc = bnxt_hwrm_func_backing_store_cfg(bp, ena);
+
+	if (bp->fw_cap & BNXT_FW_CAP_BACKING_STORE_V2)
+		rc = bnxt_backing_store_cfg_v2(bp);
+	else
+		rc = bnxt_hwrm_func_backing_store_cfg(bp, ena);
 	if (rc) {
 		netdev_err(bp->dev, "Failed configuring context mem, rc = %d.\n",
 			   rc);
diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt.h b/drivers/net/ethernet/broadcom/bnxt/bnxt.h
index 0dbf854530f1..a591f950ce14 100644
--- a/drivers/net/ethernet/broadcom/bnxt/bnxt.h
+++ b/drivers/net/ethernet/broadcom/bnxt/bnxt.h
@@ -1548,6 +1548,7 @@ struct bnxt_ctx_mem_type {
 	u16 type;
 	u16 entry_size;
 	u32 flags;
+#define BNXT_CTX_MEM_TYPE_VALID FUNC_BACKING_STORE_QCAPS_V2_RESP_FLAGS_TYPE_VALID
 	u32 instance_bmap;
 	u8 init_value;
 	u8 entry_multiple;
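One detail worth tracing in bnxt_hwrm_func_backing_store_cfg_v2() is the
dual-cursor walk: i scans bit positions of instance_bmap while j counts
messages actually sent, and only the final message of the last valid type
carries BS_CFG_ALL_DONE.  A minimal userspace model of that walk (values and
names invented; not the driver code):

    #include <stdio.h>

    static int popcount(unsigned int w) { return __builtin_popcount(w); }

    /* send one "config message" per set bit; flag the very last one */
    static void cfg_type(unsigned int instance_bmap, int last_type)
    {
        int i, j, n = 1;

        if (instance_bmap)
            n = popcount(instance_bmap);
        else
            instance_bmap = 1;      /* single anonymous instance */

        for (i = 0, j = 0; j < n; i++) {
            if (!(instance_bmap & (1u << i)))
                continue;
            j++;
            printf("instance %d%s\n", i,
                   (last_type && j == n) ? " [ALL_DONE]" : "");
        }
    }

    int main(void)
    {
        cfg_type(0x0, 0);    /* prints "instance 0" */
        cfg_type(0x1ff, 1);  /* prints 0..8; the 9th carries ALL_DONE */
        return 0;
    }

The ALL_DONE flag on the final message is what tells the firmware the whole
backing store configuration sequence is complete.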
From patchwork Mon Nov 20 23:44:01 2023
X-Patchwork-Submitter: Michael Chan
X-Patchwork-Id: 13462286
X-Patchwork-Delegate: kuba@kernel.org
From: Michael Chan
To: davem@davemloft.net
Cc: netdev@vger.kernel.org, edumazet@google.com, kuba@kernel.org,
 pabeni@redhat.com, gospo@broadcom.com, Pavan Chebbi
Subject: [PATCH net-next 09/13] bnxt_en: Add db_ring_mask and related macro to bnxt_db_info struct.
Date: Mon, 20 Nov 2023 15:44:01 -0800
Message-Id: <20231120234405.194542-10-michael.chan@broadcom.com>
In-Reply-To: <20231120234405.194542-1-michael.chan@broadcom.com>
References: <20231120234405.194542-1-michael.chan@broadcom.com>

This allows the doorbell related logic to mask the doorbell index to
the proper range before writing the doorbell.

The current code masks the doorbell index immediately, keeping it in
the legal range for the most part.  Subsequent patches will change the
logic so that the index increments unbounded and is masked only before
use.  This is preparation work for the new chip that requires an
additional Epoch bit in the doorbell, which must toggle whenever the
index wraps around.

This patch just adds the basic infrastructure and the logic is largely
unchanged.  We now replace RING_CMP() with the new DB_RING_IDX() at the
appropriate places where we mask the completion ring index before
writing the doorbell.

Reviewed-by: Pavan Chebbi
Signed-off-by: Michael Chan
---
 drivers/net/ethernet/broadcom/bnxt/bnxt.c | 39 ++++++++++++++++++-----
 drivers/net/ethernet/broadcom/bnxt/bnxt.h | 13 +++++---
 2 files changed, 40 insertions(+), 12 deletions(-)

diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt.c b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
index 85d1fdb616ca..48c443d52344 100644
--- a/drivers/net/ethernet/broadcom/bnxt/bnxt.c
+++ b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
@@ -254,18 +254,18 @@ static bool bnxt_vf_pciid(enum board_idx idx)
 		writel(DB_CP_IRQ_DIS_FLAGS, db)
 
 #define BNXT_DB_CQ(db, idx)						\
-	writel(DB_CP_FLAGS | RING_CMP(idx), (db)->doorbell)
+	writel(DB_CP_FLAGS | DB_RING_IDX(db, idx), (db)->doorbell)
 
 #define BNXT_DB_NQ_P5(db, idx)						\
-	bnxt_writeq(bp, (db)->db_key64 | DBR_TYPE_NQ | RING_CMP(idx),	\
+	bnxt_writeq(bp, (db)->db_key64 | DBR_TYPE_NQ | DB_RING_IDX(db, idx),\
 		    (db)->doorbell)
 
 #define BNXT_DB_CQ_ARM(db, idx)						\
-	writel(DB_CP_REARM_FLAGS | RING_CMP(idx), (db)->doorbell)
+	writel(DB_CP_REARM_FLAGS | DB_RING_IDX(db, idx), (db)->doorbell)
 
 #define BNXT_DB_NQ_ARM_P5(db, idx)					\
-	bnxt_writeq(bp, (db)->db_key64 | DBR_TYPE_NQ_ARM | RING_CMP(idx),\
-		    (db)->doorbell)
+	bnxt_writeq(bp, (db)->db_key64 | DBR_TYPE_NQ_ARM |		\
+		    DB_RING_IDX(db, idx), (db)->doorbell)
 
 static void bnxt_db_nq(struct bnxt *bp, struct bnxt_db_info *db, u32 idx)
 {
@@ -287,7 +287,7 @@ static void bnxt_db_cq(struct bnxt *bp, struct bnxt_db_info *db, u32 idx)
 {
 	if (bp->flags & BNXT_FLAG_CHIP_P5)
 		bnxt_writeq(bp, db->db_key64 | DBR_TYPE_CQ_ARMALL |
-			    RING_CMP(idx), db->doorbell);
+			    DB_RING_IDX(db, idx), db->doorbell);
 	else
 		BNXT_DB_CQ(db, idx);
 }
@@ -526,7 +526,8 @@ static netdev_tx_t bnxt_start_xmit(struct sk_buff *skb, struct net_device *dev)
 		memcpy(txbd, tx_push1, sizeof(*txbd));
 		prod = NEXT_TX(prod);
 		tx_push->doorbell =
-			cpu_to_le32(DB_KEY_TX_PUSH | DB_LONG_TX_PUSH | prod);
+			cpu_to_le32(DB_KEY_TX_PUSH | DB_LONG_TX_PUSH |
+				    DB_RING_IDX(&txr->tx_db, prod));
 		WRITE_ONCE(txr->tx_prod, prod);
 
 		tx_buf->is_push = 1;
@@ -2871,7 +2872,8 @@ static void __bnxt_poll_cqs_done(struct bnxt *bp, struct bnxt_napi *bnapi,
 		if (cpr2->had_work_done) {
 			db = &cpr2->cp_db;
 			bnxt_writeq(bp, db->db_key64 | dbr_type |
-				    RING_CMP(cpr2->cp_raw_cons), db->doorbell);
+				    DB_RING_IDX(db, cpr2->cp_raw_cons),
+				    db->doorbell);
 			cpr2->had_work_done = 0;
 		}
 	}
@@ -5987,6 +5989,26 @@ static int bnxt_hwrm_set_async_event_cr(struct bnxt *bp, int idx)
 	}
 }
 
+static void bnxt_set_db_mask(struct bnxt *bp, struct bnxt_db_info *db,
+			     u32 ring_type)
+{
+	switch (ring_type) {
+	case HWRM_RING_ALLOC_TX:
+		db->db_ring_mask = bp->tx_ring_mask;
+		break;
+	case HWRM_RING_ALLOC_RX:
+		db->db_ring_mask = bp->rx_ring_mask;
+		break;
+	case HWRM_RING_ALLOC_AGG:
+		db->db_ring_mask = bp->rx_agg_ring_mask;
+		break;
+	case HWRM_RING_ALLOC_CMPL:
+	case HWRM_RING_ALLOC_NQ:
+		db->db_ring_mask = bp->cp_ring_mask;
+		break;
+	}
+}
+
 static void bnxt_set_db(struct bnxt *bp, struct bnxt_db_info *db, u32 ring_type,
 			u32 map_idx, u32 xid)
 {
@@ -6026,6 +6048,7 @@ static void bnxt_set_db(struct bnxt *bp, struct bnxt_db_info *db, u32 ring_type,
 			break;
 		}
 	}
+	bnxt_set_db_mask(bp, db, ring_type);
 }
 
 static int bnxt_hwrm_ring_alloc(struct bnxt *bp)
diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt.h b/drivers/net/ethernet/broadcom/bnxt/bnxt.h
index a591f950ce14..7d79a2c8d3c3 100644
--- a/drivers/net/ethernet/broadcom/bnxt/bnxt.h
+++ b/drivers/net/ethernet/broadcom/bnxt/bnxt.h
@@ -813,8 +813,11 @@ struct bnxt_db_info {
 		u64 db_key64;
 		u32 db_key32;
 	};
+	u32 db_ring_mask;
 };
 
+#define DB_RING_IDX(db, idx)	((idx) & (db)->db_ring_mask)
+
 struct bnxt_tx_ring_info {
 	struct bnxt_napi	*bnapi;
 	struct bnxt_cp_ring_info	*tx_cpr;
@@ -2353,9 +2356,10 @@ static inline void bnxt_db_write_relaxed(struct bnxt *bp,
 					 struct bnxt_db_info *db, u32 idx)
 {
 	if (bp->flags & BNXT_FLAG_CHIP_P5) {
-		bnxt_writeq_relaxed(bp, db->db_key64 | idx, db->doorbell);
+		bnxt_writeq_relaxed(bp, db->db_key64 | DB_RING_IDX(db, idx),
+				    db->doorbell);
 	} else {
-		u32 db_val = db->db_key32 | idx;
+		u32 db_val = db->db_key32 | DB_RING_IDX(db, idx);
 
 		writel_relaxed(db_val, db->doorbell);
 		if (bp->flags & BNXT_FLAG_DOUBLE_DB)
@@ -2368,9 +2372,10 @@ static inline void bnxt_db_write(struct bnxt *bp, struct bnxt_db_info *db,
 				 u32 idx)
 {
 	if (bp->flags & BNXT_FLAG_CHIP_P5) {
-		bnxt_writeq(bp, db->db_key64 | idx, db->doorbell);
+		bnxt_writeq(bp, db->db_key64 | DB_RING_IDX(db, idx),
+			    db->doorbell);
 	} else {
-		u32 db_val = db->db_key32 | idx;
+		u32 db_val = db->db_key32 | DB_RING_IDX(db, idx);
 
 		writel(db_val, db->doorbell);
 		if (bp->flags & BNXT_FLAG_DOUBLE_DB)
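A quick numeric check of DB_RING_IDX() as defined above, plus the wraparound
property the commit message alludes to (standalone sketch; the epoch
computation shown is only a guess at the idea, the driver's real Epoch
handling lands later in the series):

    #include <stdio.h>

    struct db_info { unsigned int db_ring_mask; };

    #define DB_RING_IDX(db, idx) ((idx) & (db)->db_ring_mask)

    int main(void)
    {
        struct db_info db = { .db_ring_mask = 1023 }; /* 1024-entry ring */
        unsigned int idx;

        for (idx = 1022; idx <= 1025; idx++)
            printf("raw %u -> doorbell %u, epoch %u\n", idx,
                   DB_RING_IDX(&db, idx),
                   /* hypothetically, an epoch bit is the bit just above
                    * the mask: it flips on every wraparound */
                   (idx & (db.db_ring_mask + 1)) ? 1 : 0);
        return 0;
    }

Output: raw 1023 still maps to slot 1023 with epoch 0, while raw 1024 maps
back to slot 0 with the epoch flipped, which is exactly the toggling
behavior a free-running index makes cheap to derive.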
From patchwork Mon Nov 20 23:44:02 2023
X-Patchwork-Submitter: Michael Chan
X-Patchwork-Id: 13462287
X-Patchwork-Delegate: kuba@kernel.org
From: Michael Chan
To: davem@davemloft.net
Cc: netdev@vger.kernel.org, edumazet@google.com, kuba@kernel.org,
 pabeni@redhat.com, gospo@broadcom.com, Pavan Chebbi
Subject: [PATCH net-next 10/13] bnxt_en: Modify TX ring indexing logic.
Date: Mon, 20 Nov 2023 15:44:02 -0800
Message-Id: <20231120234405.194542-11-michael.chan@broadcom.com>
In-Reply-To: <20231120234405.194542-1-michael.chan@broadcom.com>
References: <20231120234405.194542-1-michael.chan@broadcom.com>

Change the TX ring logic so that the index increments unbounded and is
masked only when needed.  Modify the existing macros so that the index
is not masked.  Add a new macro RING_TX() to mask it only when needed
to get the index of txr->tx_buf_ring[].
Reviewed-by: Pavan Chebbi
Signed-off-by: Michael Chan
---
 drivers/net/ethernet/broadcom/bnxt/bnxt.c     | 22 +++++++++----------
 drivers/net/ethernet/broadcom/bnxt/bnxt.h     |  5 +++--
 drivers/net/ethernet/broadcom/bnxt/bnxt_xdp.c | 14 ++++++------
 3 files changed, 21 insertions(+), 20 deletions(-)

diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt.c b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
index 48c443d52344..c83450564765 100644
--- a/drivers/net/ethernet/broadcom/bnxt/bnxt.c
+++ b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
@@ -432,9 +432,9 @@ static netdev_tx_t bnxt_start_xmit(struct sk_buff *skb, struct net_device *dev)
 	len = skb_headlen(skb);
 	last_frag = skb_shinfo(skb)->nr_frags;
 
-	txbd = &txr->tx_desc_ring[TX_RING(prod)][TX_IDX(prod)];
+	txbd = &txr->tx_desc_ring[TX_RING(bp, prod)][TX_IDX(prod)];
 
-	tx_buf = &txr->tx_buf_ring[prod];
+	tx_buf = &txr->tx_buf_ring[RING_TX(bp, prod)];
 	tx_buf->skb = skb;
 	tx_buf->nr_frags = last_frag;
 
@@ -522,7 +522,7 @@ static netdev_tx_t bnxt_start_xmit(struct sk_buff *skb, struct net_device *dev)
 		txbd->tx_bd_opaque = SET_TX_OPAQUE(bp, txr, prod, 2);
 		prod = NEXT_TX(prod);
 		tx_push->tx_bd_opaque = txbd->tx_bd_opaque;
-		txbd = &txr->tx_desc_ring[TX_RING(prod)][TX_IDX(prod)];
+		txbd = &txr->tx_desc_ring[TX_RING(bp, prod)][TX_IDX(prod)];
 		memcpy(txbd, tx_push1, sizeof(*txbd));
 		prod = NEXT_TX(prod);
 		tx_push->doorbell =
@@ -569,7 +569,7 @@ static netdev_tx_t bnxt_start_xmit(struct sk_buff *skb, struct net_device *dev)
 	prod = NEXT_TX(prod);
 	txbd1 = (struct tx_bd_ext *)
-		&txr->tx_desc_ring[TX_RING(prod)][TX_IDX(prod)];
+		&txr->tx_desc_ring[TX_RING(bp, prod)][TX_IDX(prod)];
 
 	txbd1->tx_bd_hsize_lflags = lflags;
 	if (skb_is_gso(skb)) {
@@ -610,7 +610,7 @@ static netdev_tx_t bnxt_start_xmit(struct sk_buff *skb, struct net_device *dev)
 		skb_frag_t *frag = &skb_shinfo(skb)->frags[i];
 
 		prod = NEXT_TX(prod);
-		txbd = &txr->tx_desc_ring[TX_RING(prod)][TX_IDX(prod)];
+		txbd = &txr->tx_desc_ring[TX_RING(bp, prod)][TX_IDX(prod)];
 
 		len = skb_frag_size(frag);
 		mapping = skb_frag_dma_map(&pdev->dev, frag, 0, len,
@@ -619,7 +619,7 @@ static netdev_tx_t bnxt_start_xmit(struct sk_buff *skb, struct net_device *dev)
 		if (unlikely(dma_mapping_error(&pdev->dev, mapping)))
 			goto tx_dma_error;
 
-		tx_buf = &txr->tx_buf_ring[prod];
+		tx_buf = &txr->tx_buf_ring[RING_TX(bp, prod)];
 		dma_unmap_addr_set(tx_buf, mapping, mapping);
 
 		txbd->tx_bd_haddr = cpu_to_le64(mapping);
@@ -668,7 +668,7 @@ static netdev_tx_t bnxt_start_xmit(struct sk_buff *skb, struct net_device *dev)
 	/* start back at beginning and unmap skb */
 	prod = txr->tx_prod;
-	tx_buf = &txr->tx_buf_ring[prod];
+	tx_buf = &txr->tx_buf_ring[RING_TX(bp, prod)];
 	dma_unmap_single(&pdev->dev, dma_unmap_addr(tx_buf, mapping),
 			 skb_headlen(skb), DMA_TO_DEVICE);
 	prod = NEXT_TX(prod);
@@ -676,7 +676,7 @@ static netdev_tx_t bnxt_start_xmit(struct sk_buff *skb, struct net_device *dev)
 	/* unmap remaining mapped pages */
 	for (i = 0; i < last_frag; i++) {
 		prod = NEXT_TX(prod);
-		tx_buf = &txr->tx_buf_ring[prod];
+		tx_buf = &txr->tx_buf_ring[RING_TX(bp, prod)];
 		dma_unmap_page(&pdev->dev, dma_unmap_addr(tx_buf, mapping),
 			       skb_frag_size(&skb_shinfo(skb)->frags[i]),
 			       DMA_TO_DEVICE);
@@ -702,12 +702,12 @@ static void __bnxt_tx_int(struct bnxt *bp, struct bnxt_tx_ring_info *txr,
 	u16 cons = txr->tx_cons;
 	int tx_pkts = 0;
 
-	while (cons != hw_cons) {
+	while (RING_TX(bp, cons) != hw_cons) {
 		struct bnxt_sw_tx_bd *tx_buf;
 		struct sk_buff *skb;
 		int j, last;
 
-		tx_buf = &txr->tx_buf_ring[cons];
+		tx_buf = &txr->tx_buf_ring[RING_TX(bp, cons)];
 		cons = NEXT_TX(cons);
 		skb = tx_buf->skb;
 		tx_buf->skb = NULL;
@@ -731,7 +731,7 @@ static void __bnxt_tx_int(struct bnxt *bp, struct bnxt_tx_ring_info *txr,
 
 		for (j = 0; j < last; j++) {
 			cons = NEXT_TX(cons);
-			tx_buf = &txr->tx_buf_ring[cons];
+			tx_buf = &txr->tx_buf_ring[RING_TX(bp, cons)];
 			dma_unmap_page(
 				&pdev->dev,
 				dma_unmap_addr(tx_buf, mapping),
diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt.h b/drivers/net/ethernet/broadcom/bnxt/bnxt.h
index 7d79a2c8d3c3..1b04510f677b 100644
--- a/drivers/net/ethernet/broadcom/bnxt/bnxt.h
+++ b/drivers/net/ethernet/broadcom/bnxt/bnxt.h
@@ -689,7 +689,7 @@ struct nqe_cn {
 #define RX_RING(x)	(((x) & ~(RX_DESC_CNT - 1)) >> (BNXT_PAGE_SHIFT - 4))
 #define RX_IDX(x)	((x) & (RX_DESC_CNT - 1))
 
-#define TX_RING(x)	(((x) & ~(TX_DESC_CNT - 1)) >> (BNXT_PAGE_SHIFT - 4))
+#define TX_RING(bp, x)	(((x) & (bp)->tx_ring_mask) >> (BNXT_PAGE_SHIFT - 4))
 #define TX_IDX(x)	((x) & (TX_DESC_CNT - 1))
 
 #define CP_RING(x)	(((x) & ~(CP_DESC_CNT - 1)) >> (BNXT_PAGE_SHIFT - 4))
@@ -720,7 +720,8 @@ struct nqe_cn {
 
 #define NEXT_RX_AGG(idx)	(((idx) + 1) & bp->rx_agg_ring_mask)
 
-#define NEXT_TX(idx)		(((idx) + 1) & bp->tx_ring_mask)
+#define RING_TX(bp, idx)	((idx) & (bp)->tx_ring_mask)
+#define NEXT_TX(idx)		((idx) + 1)
 
 #define ADV_RAW_CMP(idx, n)	((idx) + (n))
 #define NEXT_RAW_CMP(idx)	ADV_RAW_CMP(idx, 1)
diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt_xdp.c b/drivers/net/ethernet/broadcom/bnxt/bnxt_xdp.c
index 9d428eb3fdb9..4791f6a14e55 100644
--- a/drivers/net/ethernet/broadcom/bnxt/bnxt_xdp.c
+++ b/drivers/net/ethernet/broadcom/bnxt/bnxt_xdp.c
@@ -42,12 +42,12 @@ struct bnxt_sw_tx_bd *bnxt_xmit_bd(struct bnxt *bp,
 	/* fill up the first buffer */
 	prod = txr->tx_prod;
-	tx_buf = &txr->tx_buf_ring[prod];
+	tx_buf = &txr->tx_buf_ring[RING_TX(bp, prod)];
 	tx_buf->nr_frags = num_frags;
 	if (xdp)
 		tx_buf->page = virt_to_head_page(xdp->data);
 
-	txbd = &txr->tx_desc_ring[TX_RING(prod)][TX_IDX(prod)];
+	txbd = &txr->tx_desc_ring[TX_RING(bp, prod)][TX_IDX(prod)];
 	flags = (len << TX_BD_LEN_SHIFT) |
 		((num_frags + 1) << TX_BD_FLAGS_BD_CNT_SHIFT) |
 		bnxt_lhint_arr[len >> 9];
@@ -67,10 +67,10 @@ struct bnxt_sw_tx_bd *bnxt_xmit_bd(struct bnxt *bp,
 		WRITE_ONCE(txr->tx_prod, prod);
 
 		/* first fill up the first buffer */
-		frag_tx_buf = &txr->tx_buf_ring[prod];
+		frag_tx_buf = &txr->tx_buf_ring[RING_TX(bp, prod)];
 		frag_tx_buf->page = skb_frag_page(frag);
 
-		txbd = &txr->tx_desc_ring[TX_RING(prod)][TX_IDX(prod)];
+		txbd = &txr->tx_desc_ring[TX_RING(bp, prod)][TX_IDX(prod)];
 
 		frag_len = skb_frag_size(frag);
 		frag_mapping = skb_frag_dma_map(&pdev->dev, frag, 0,
@@ -139,8 +139,8 @@ void bnxt_tx_int_xdp(struct bnxt *bp, struct bnxt_napi *bnapi, int budget)
 	if (!budget)
 		return;
 
-	while (tx_cons != tx_hw_cons) {
-		tx_buf = &txr->tx_buf_ring[tx_cons];
+	while (RING_TX(bp, tx_cons) != tx_hw_cons) {
+		tx_buf = &txr->tx_buf_ring[RING_TX(bp, tx_cons)];
 
 		if (tx_buf->action == XDP_REDIRECT) {
 			struct pci_dev *pdev = bp->pdev;
@@ -160,7 +160,7 @@ void bnxt_tx_int_xdp(struct bnxt *bp, struct bnxt_napi *bnapi, int budget)
 			frags = tx_buf->nr_frags;
 			for (j = 0; j < frags; j++) {
 				tx_cons = NEXT_TX(tx_cons);
-				tx_buf = &txr->tx_buf_ring[tx_cons];
+				tx_buf = &txr->tx_buf_ring[RING_TX(bp, tx_cons)];
 				page_pool_recycle_direct(rxr->page_pool, tx_buf->page);
 			}
 		} else {
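Why the completion loop now compares RING_TX(bp, cons) against hw_cons: the
software consumer is free-running (and may wrap the u16), while the hardware
reports a masked slot number.  A standalone sketch with an invented
256-entry ring (illustrative only, not driver code):

    #include <stdio.h>

    #define RING_SIZE 256               /* must be a power of 2 */
    #define RING_MASK (RING_SIZE - 1)
    #define RING_TX(idx) ((idx) & RING_MASK)
    #define NEXT_TX(idx) ((idx) + 1)    /* unbounded; masked only at use */

    int main(void)
    {
        unsigned short cons = 65534;    /* about to wrap the u16 counter */
        unsigned short hw_cons = 2;     /* hardware reports a masked index */
        int freed = 0;

        while (RING_TX(cons) != hw_cons) {
            /* tx_buf = &tx_buf_ring[RING_TX(cons)]; ...free it... */
            cons = NEXT_TX(cons);
            freed++;
        }
        printf("freed %d buffers, sw cons now %u (slot %u)\n",
               freed, cons, RING_TX(cons));   /* freed 4, slot 2 */
        return 0;
    }

The u16 wraps from 65535 to 0 mid-loop and the comparison still terminates
correctly, because both sides are reduced to the same masked slot space.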
From patchwork Mon Nov 20 23:44:03 2023
X-Patchwork-Submitter: Michael Chan
X-Patchwork-Id: 13462288
X-Patchwork-Delegate: kuba@kernel.org
From: Michael Chan
To: davem@davemloft.net
Cc: netdev@vger.kernel.org, edumazet@google.com, kuba@kernel.org,
 pabeni@redhat.com, gospo@broadcom.com, Pavan Chebbi
Subject: [PATCH net-next 11/13] bnxt_en: Modify RX ring indexing logic.
Date: Mon, 20 Nov 2023 15:44:03 -0800
Message-Id: <20231120234405.194542-12-michael.chan@broadcom.com>
In-Reply-To: <20231120234405.194542-1-michael.chan@broadcom.com>
References: <20231120234405.194542-1-michael.chan@broadcom.com>

Modify the RX indexing logic for both RX ring and RX aggregation ring
just like the TX logic.  Change it so that the index increments
unbounded and is masked only when needed.  Modify the existing RX
macros so that the index is not masked.  Add new macros
RING_RX()/RING_RX_AGG() to mask it only when needed to get the index
of rxr->rx_buf_ring[] and rxr->rx_agg_ring[].
Reviewed-by: Pavan Chebbi
Signed-off-by: Michael Chan
---
 drivers/net/ethernet/broadcom/bnxt/bnxt.c | 29 ++++++++++++-----------
 drivers/net/ethernet/broadcom/bnxt/bnxt.h | 10 +++++---
 2 files changed, 22 insertions(+), 17 deletions(-)

diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt.c b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
index c83450564765..b2cf42953130 100644
--- a/drivers/net/ethernet/broadcom/bnxt/bnxt.c
+++ b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
@@ -821,8 +821,8 @@ static inline u8 *__bnxt_alloc_rx_frag(struct bnxt *bp, dma_addr_t *mapping,
 int bnxt_alloc_rx_data(struct bnxt *bp, struct bnxt_rx_ring_info *rxr,
 		       u16 prod, gfp_t gfp)
 {
-	struct rx_bd *rxbd = &rxr->rx_desc_ring[RX_RING(prod)][RX_IDX(prod)];
-	struct bnxt_sw_rx_bd *rx_buf = &rxr->rx_buf_ring[prod];
+	struct rx_bd *rxbd = &rxr->rx_desc_ring[RX_RING(bp, prod)][RX_IDX(prod)];
+	struct bnxt_sw_rx_bd *rx_buf = &rxr->rx_buf_ring[RING_RX(bp, prod)];
 	dma_addr_t mapping;
 
 	if (BNXT_RX_PAGE_MODE(bp)) {
@@ -855,9 +855,10 @@ void bnxt_reuse_rx_data(struct bnxt_rx_ring_info *rxr, u16 cons, void *data)
 {
 	u16 prod = rxr->rx_prod;
 	struct bnxt_sw_rx_bd *cons_rx_buf, *prod_rx_buf;
+	struct bnxt *bp = rxr->bnapi->bp;
 	struct rx_bd *cons_bd, *prod_bd;
 
-	prod_rx_buf = &rxr->rx_buf_ring[prod];
+	prod_rx_buf = &rxr->rx_buf_ring[RING_RX(bp, prod)];
 	cons_rx_buf = &rxr->rx_buf_ring[cons];
 
 	prod_rx_buf->data = data;
@@ -865,8 +866,8 @@ void bnxt_reuse_rx_data(struct bnxt_rx_ring_info *rxr, u16 cons, void *data)
 
 	prod_rx_buf->mapping = cons_rx_buf->mapping;
 
-	prod_bd = &rxr->rx_desc_ring[RX_RING(prod)][RX_IDX(prod)];
-	cons_bd = &rxr->rx_desc_ring[RX_RING(cons)][RX_IDX(cons)];
+	prod_bd = &rxr->rx_desc_ring[RX_RING(bp, prod)][RX_IDX(prod)];
+	cons_bd = &rxr->rx_desc_ring[RX_RING(bp, cons)][RX_IDX(cons)];
 
 	prod_bd->rx_bd_haddr = cons_bd->rx_bd_haddr;
 }
@@ -886,7 +887,7 @@ static inline int bnxt_alloc_rx_page(struct bnxt *bp,
 				     u16 prod, gfp_t gfp)
 {
 	struct rx_bd *rxbd =
-		&rxr->rx_agg_desc_ring[RX_RING(prod)][RX_IDX(prod)];
+		&rxr->rx_agg_desc_ring[RX_AGG_RING(bp, prod)][RX_IDX(prod)];
 	struct bnxt_sw_rx_agg_bd *rx_agg_buf;
 	struct page *page;
 	dma_addr_t mapping;
@@ -903,7 +904,7 @@ static inline int bnxt_alloc_rx_page(struct bnxt *bp,
 
 	__set_bit(sw_prod, rxr->rx_agg_bmap);
 	rx_agg_buf = &rxr->rx_agg_ring[sw_prod];
-	rxr->rx_sw_agg_prod = NEXT_RX_AGG(sw_prod);
+	rxr->rx_sw_agg_prod = RING_RX_AGG(bp, NEXT_RX_AGG(sw_prod));
 
 	rx_agg_buf->page = page;
 	rx_agg_buf->offset = offset;
@@ -979,13 +980,13 @@ static void bnxt_reuse_rx_agg_bufs(struct bnxt_cp_ring_info *cpr, u16 idx,
 
 		prod_rx_buf->mapping = cons_rx_buf->mapping;
 
-		prod_bd = &rxr->rx_agg_desc_ring[RX_RING(prod)][RX_IDX(prod)];
+		prod_bd = &rxr->rx_agg_desc_ring[RX_AGG_RING(bp, prod)][RX_IDX(prod)];
 
 		prod_bd->rx_bd_haddr = cpu_to_le64(cons_rx_buf->mapping);
 		prod_bd->rx_bd_opaque = sw_prod;
 
 		prod = NEXT_RX_AGG(prod);
-		sw_prod = NEXT_RX_AGG(sw_prod);
+		sw_prod = RING_RX_AGG(bp, NEXT_RX_AGG(sw_prod));
 	}
 	rxr->rx_agg_prod = prod;
 	rxr->rx_sw_agg_prod = sw_prod;
@@ -1327,7 +1328,7 @@ static void bnxt_tpa_start(struct bnxt *bp, struct bnxt_rx_ring_info *rxr,
 	cons = tpa_start->rx_tpa_start_cmp_opaque;
 	prod = rxr->rx_prod;
 	cons_rx_buf = &rxr->rx_buf_ring[cons];
-	prod_rx_buf = &rxr->rx_buf_ring[prod];
+	prod_rx_buf = &rxr->rx_buf_ring[RING_RX(bp, prod)];
 	tpa_info = &rxr->rx_tpa[agg_id];
 
 	if (unlikely(cons != rxr->rx_next_cons ||
@@ -1348,7 +1349,7 @@ static void bnxt_tpa_start(struct bnxt *bp, struct bnxt_rx_ring_info *rxr,
 	mapping = tpa_info->mapping;
 	prod_rx_buf->mapping = mapping;
 
-	prod_bd = &rxr->rx_desc_ring[RX_RING(prod)][RX_IDX(prod)];
+	prod_bd = &rxr->rx_desc_ring[RX_RING(bp, prod)][RX_IDX(prod)];
 
 	prod_bd->rx_bd_haddr = cpu_to_le64(mapping);
 
@@ -1381,8 +1382,8 @@ static void bnxt_tpa_start(struct bnxt *bp, struct bnxt_rx_ring_info *rxr,
 	tpa_info->agg_count = 0;
 
 	rxr->rx_prod = NEXT_RX(prod);
-	cons = NEXT_RX(cons);
-	rxr->rx_next_cons = NEXT_RX(cons);
+	cons = RING_RX(bp, NEXT_RX(cons));
+	rxr->rx_next_cons = RING_RX(bp, NEXT_RX(cons));
 	cons_rx_buf = &rxr->rx_buf_ring[cons];
 
 	bnxt_reuse_rx_data(rxr, cons, cons_rx_buf->data);
@@ -2050,7 +2051,7 @@ static int bnxt_rx_pkt(struct bnxt *bp, struct bnxt_cp_ring_info *cpr,
 
 next_rx_no_len:
 	rxr->rx_prod = NEXT_RX(prod);
-	rxr->rx_next_cons = NEXT_RX(cons);
+	rxr->rx_next_cons = RING_RX(bp, NEXT_RX(cons));
 
 next_rx_no_prod_no_len:
 	*raw_cons = tmp_raw_cons;
diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt.h b/drivers/net/ethernet/broadcom/bnxt/bnxt.h
index 1b04510f677b..17e1881c8a27 100644
--- a/drivers/net/ethernet/broadcom/bnxt/bnxt.h
+++ b/drivers/net/ethernet/broadcom/bnxt/bnxt.h
@@ -686,7 +686,9 @@ struct nqe_cn {
  */
 #define BNXT_MIN_TX_DESC_CNT	(MAX_SKB_FRAGS + 2)
 
-#define RX_RING(x)	(((x) & ~(RX_DESC_CNT - 1)) >> (BNXT_PAGE_SHIFT - 4))
+#define RX_RING(bp, x)	(((x) & (bp)->rx_ring_mask) >> (BNXT_PAGE_SHIFT - 4))
+#define RX_AGG_RING(bp, x)	(((x) & (bp)->rx_agg_ring_mask) >>	\
+				 (BNXT_PAGE_SHIFT - 4))
 #define RX_IDX(x)	((x) & (RX_DESC_CNT - 1))
 
 #define TX_RING(bp, x)	(((x) & (bp)->tx_ring_mask) >> (BNXT_PAGE_SHIFT - 4))
@@ -716,9 +718,11 @@ struct nqe_cn {
 #define RX_CMP_TYPE(rxcmp)					\
 	(le32_to_cpu((rxcmp)->rx_cmp_len_flags_type) & RX_CMP_CMP_TYPE)
 
-#define NEXT_RX(idx)		(((idx) + 1) & bp->rx_ring_mask)
+#define RING_RX(bp, idx)	((idx) & (bp)->rx_ring_mask)
+#define NEXT_RX(idx)		((idx) + 1)
 
-#define NEXT_RX_AGG(idx)	(((idx) + 1) & bp->rx_agg_ring_mask)
+#define RING_RX_AGG(bp, idx)	((idx) & (bp)->rx_agg_ring_mask)
+#define NEXT_RX_AGG(idx)	((idx) + 1)
 
 #define RING_TX(bp, idx)	((idx) & (bp)->tx_ring_mask)
 #define NEXT_TX(idx)		((idx) + 1)
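One asymmetry worth noting in the hunks above: rx_prod and rx_next_cons
follow the new free-running scheme, but rx_sw_agg_prod is stored already
masked, since it doubles as a bit number for __set_bit() on rx_agg_bmap.
A one-screen sketch of that masked producer (invented ring size, not driver
code):

    #include <stdio.h>

    #define RX_AGG_RING_SIZE 128
    #define RX_AGG_RING_MASK (RX_AGG_RING_SIZE - 1)
    #define RING_RX_AGG(idx) ((idx) & RX_AGG_RING_MASK)
    #define NEXT_RX_AGG(idx) ((idx) + 1)

    int main(void)
    {
        unsigned short sw_prod = RX_AGG_RING_MASK;  /* last slot, 127 */

        /* the producer is re-masked on every advance because it is
         * consumed directly as an array index and a bitmap bit number */
        sw_prod = RING_RX_AGG(NEXT_RX_AGG(sw_prod));
        printf("sw_prod wrapped to %u\n", sw_prod); /* prints 0 */
        return 0;
    }

Keeping that one index masked avoids touching the bitmap helpers while the
descriptor-facing indices become free-running.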
From patchwork Mon Nov 20 23:44:04 2023
X-Patchwork-Submitter: Michael Chan
X-Patchwork-Id: 13462289
X-Patchwork-Delegate: kuba@kernel.org
From: Michael Chan
To: davem@davemloft.net
Cc: netdev@vger.kernel.org, edumazet@google.com, kuba@kernel.org,
 pabeni@redhat.com, gospo@broadcom.com, Somnath Kotur, Pavan Chebbi
Subject: [PATCH net-next 12/13] bnxt_en: Modify the NAPI logic for the new P7 chips
Date: Mon, 20 Nov 2023 15:44:04 -0800
Message-Id: <20231120234405.194542-13-michael.chan@broadcom.com>
In-Reply-To: <20231120234405.194542-1-michael.chan@broadcom.com>
References: <20231120234405.194542-1-michael.chan@broadcom.com>

Modify the NAPI logic for the new doorbell mechanism on P7 chips.
These changes are compatible with the current P5 chips.

In the current logic, bnxt_poll_p5() services 1 or more CQs for each
MSIX.  Each MSIX has an associated NQ and each NQ has 1 or more
associated CQs.  If any CQ reaches NAPI budget, we'll stay in polling
mode and will unconditionally check and service all CQs until we exit
polling.  We always re-arm all CQs when we exit polling.

To be compatible with the new Toggle bit mechanism in P7 chips, we need
to modify the logic so that we service and re-arm the CQ only if we
receive an NQE notification for work for that CQ.  We add a new
had_nqe_notify bit to the cp_ring_info structure and it gets set when
we see the NQE notification for that CQ anytime during polling.  We'll
service and re-arm only the CQs with the had_nqe_notify bits set.
Reviewed-by: Somnath Kotur
Reviewed-by: Pavan Chebbi
Signed-off-by: Michael Chan
---
 drivers/net/ethernet/broadcom/bnxt/bnxt.c | 10 ++++++++--
 drivers/net/ethernet/broadcom/bnxt/bnxt.h |  2 ++
 2 files changed, 10 insertions(+), 2 deletions(-)

diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt.c b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
index b2cf42953130..f968bf12989b 100644
--- a/drivers/net/ethernet/broadcom/bnxt/bnxt.c
+++ b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
@@ -2854,8 +2854,11 @@ static int __bnxt_poll_cqs(struct bnxt *bp, struct bnxt_napi *bnapi, int budget)
 	for (i = 0; i < cpr->cp_ring_count; i++) {
 		struct bnxt_cp_ring_info *cpr2 = &cpr->cp_ring_arr[i];
 
-		work_done += __bnxt_poll_work(bp, cpr2, budget - work_done);
-		cpr->has_more_work |= cpr2->has_more_work;
+		if (cpr2->had_nqe_notify) {
+			work_done += __bnxt_poll_work(bp, cpr2,
+						      budget - work_done);
+			cpr->has_more_work |= cpr2->has_more_work;
+		}
 	}
 	return work_done;
 }
@@ -2876,6 +2879,8 @@ static void __bnxt_poll_cqs_done(struct bnxt *bp, struct bnxt_napi *bnapi,
 				    DB_RING_IDX(db, cpr2->cp_raw_cons),
 				    db->doorbell);
 			cpr2->had_work_done = 0;
+			if (dbr_type == DBR_TYPE_CQ_ARMALL)
+				cpr2->had_nqe_notify = 0;
 		}
 	}
 	__bnxt_poll_work_done(bp, bnapi, budget);
@@ -2934,6 +2939,7 @@ static int bnxt_poll_p5(struct napi_struct *napi, int budget)
 
 			idx = BNXT_NQ_HDL_IDX(idx);
 			cpr2 = &cpr->cp_ring_arr[idx];
+			cpr2->had_nqe_notify = 1;
 			work_done += __bnxt_poll_work(bp, cpr2,
 						      budget - work_done);
 			cpr->has_more_work |= cpr2->has_more_work;
diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt.h b/drivers/net/ethernet/broadcom/bnxt/bnxt.h
index 17e1881c8a27..b287e4a9adff 100644
--- a/drivers/net/ethernet/broadcom/bnxt/bnxt.h
+++ b/drivers/net/ethernet/broadcom/bnxt/bnxt.h
@@ -1017,6 +1017,8 @@ struct bnxt_cp_ring_info {
 	u8 had_work_done:1;
 	u8 has_more_work:1;
 
+	u8 had_nqe_notify:1;
+
 	u8 cp_ring_type;
 	u8 cp_idx;
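The effect of had_nqe_notify can be modeled without any hardware: CQs are
polled only if the NQ flagged them, and the notify bit is cleared only when
the CQ is re-armed with ARMALL on exit.  A toy userspace model (ring count
and states invented; illustrative only):

    #include <stdio.h>

    struct cq {
        unsigned char had_nqe_notify:1;
        unsigned char had_work_done:1;
    };

    int main(void)
    {
        struct cq rings[3] = { { .had_nqe_notify = 1 }, {0},
                               { .had_nqe_notify = 1 } };
        int i;

        /* poll pass: only rings 0 and 2 saw an NQE, so only they are
         * polled; ring 1 is left alone and stays armed */
        for (i = 0; i < 3; i++) {
            if (!rings[i].had_nqe_notify)
                continue;
            rings[i].had_work_done = 1;   /* stands in for __bnxt_poll_work() */
            printf("polled CQ %d\n", i);
        }

        /* exit pass: re-arm (ARMALL) and clear the notify bit only on
         * the rings that were actually serviced */
        for (i = 0; i < 3; i++) {
            if (rings[i].had_work_done) {
                rings[i].had_work_done = 0;
                rings[i].had_nqe_notify = 0;
                printf("re-armed CQ %d\n", i);
            }
        }
        return 0;
    }

Leaving the unnotified CQ untouched is the property the P7 Toggle bit
depends on, since re-arming a CQ the NQ never flagged would desynchronize
the toggle state.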
From: Michael Chan
To: davem@davemloft.net
Cc: netdev@vger.kernel.org, edumazet@google.com, kuba@kernel.org,
 pabeni@redhat.com, gospo@broadcom.com, Randy Schacher, Andy Gospodarek,
 Ajit Khaparde
Subject: [PATCH net-next 13/13] bnxt_en: Rename some macros for the P5 chips
Date: Mon, 20 Nov 2023 15:44:05 -0800
Message-Id: <20231120234405.194542-14-michael.chan@broadcom.com>
In-Reply-To: <20231120234405.194542-1-michael.chan@broadcom.com>
References: <20231120234405.194542-1-michael.chan@broadcom.com>

From: Randy Schacher

In preparation for supporting a new P7 chip, which has many similarities
with the P5 chip, rename the BNXT_FLAG_CHIP_P5 flag to
BNXT_FLAG_CHIP_P5_PLUS. This makes it clear that the flag covers P5 and
newer chips. Also, since there are no additional P5 variants in
production, rename BNXT_CHIP_P5_THOR() to BNXT_CHIP_P5() to keep the
naming simpler.
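As a reader's aid, here is a condensed, compilable sketch of the
chip-class macros after the rename, adapted from the bnxt.h hunk in the
diff below. The CHIP_NUM_* values and the bnxt_sketch struct are
placeholders for illustration only; the real definitions live in bnxt.h.

/* Sketch of the renamed chip-class macros, adapted from the diff below. */
enum {	/* placeholder chip IDs; real values are defined in bnxt.h */
	CHIP_NUM_57508 = 1, CHIP_NUM_57504, CHIP_NUM_57502, CHIP_NUM_58818,
};

struct bnxt_sketch { int chip_num; };	/* stand-in for struct bnxt */

#define BNXT_CHIP_SR2(bp)	((bp)->chip_num == CHIP_NUM_58818)

/* Was BNXT_CHIP_P5_THOR(): Thor is the only production P5 family. */
#define BNXT_CHIP_P5(bp)				\
	((bp)->chip_num == CHIP_NUM_57508 ||		\
	 (bp)->chip_num == CHIP_NUM_57504 ||		\
	 (bp)->chip_num == CHIP_NUM_57502)

/* Was BNXT_CHIP_P5(): "P5 and newer", i.e. P5 or SR2 today, P7 later. */
#define BNXT_CHIP_P5_PLUS(bp)	(BNXT_CHIP_P5(bp) || BNXT_CHIP_SR2(bp))

Call sites that must stay Thor-specific (the PTP paths, the coalescing
timer check) keep BNXT_CHIP_P5(), while the generic doorbell and ring
paths move to BNXT_FLAG_CHIP_P5_PLUS / BNXT_CHIP_P5_PLUS(), as the hunks
below show.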
Reviewed-by: Andy Gospodarek Reviewed-by: Ajit Khaparde Signed-off-by: Randy Schacher Signed-off-by: Michael Chan --- drivers/net/ethernet/broadcom/bnxt/bnxt.c | 184 +++++++++--------- drivers/net/ethernet/broadcom/bnxt/bnxt.h | 18 +- .../net/ethernet/broadcom/bnxt/bnxt_devlink.c | 8 +- .../net/ethernet/broadcom/bnxt/bnxt_ethtool.c | 6 +- drivers/net/ethernet/broadcom/bnxt/bnxt_ptp.c | 4 +- .../net/ethernet/broadcom/bnxt/bnxt_sriov.c | 8 +- drivers/net/ethernet/broadcom/bnxt/bnxt_ulp.c | 2 +- 7 files changed, 116 insertions(+), 114 deletions(-) diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt.c b/drivers/net/ethernet/broadcom/bnxt/bnxt.c index f968bf12989b..d8cf7b30bd03 100644 --- a/drivers/net/ethernet/broadcom/bnxt/bnxt.c +++ b/drivers/net/ethernet/broadcom/bnxt/bnxt.c @@ -269,7 +269,7 @@ static bool bnxt_vf_pciid(enum board_idx idx) static void bnxt_db_nq(struct bnxt *bp, struct bnxt_db_info *db, u32 idx) { - if (bp->flags & BNXT_FLAG_CHIP_P5) + if (bp->flags & BNXT_FLAG_CHIP_P5_PLUS) BNXT_DB_NQ_P5(db, idx); else BNXT_DB_CQ(db, idx); @@ -277,7 +277,7 @@ static void bnxt_db_nq(struct bnxt *bp, struct bnxt_db_info *db, u32 idx) static void bnxt_db_nq_arm(struct bnxt *bp, struct bnxt_db_info *db, u32 idx) { - if (bp->flags & BNXT_FLAG_CHIP_P5) + if (bp->flags & BNXT_FLAG_CHIP_P5_PLUS) BNXT_DB_NQ_ARM_P5(db, idx); else BNXT_DB_CQ_ARM(db, idx); @@ -285,7 +285,7 @@ static void bnxt_db_nq_arm(struct bnxt *bp, struct bnxt_db_info *db, u32 idx) static void bnxt_db_cq(struct bnxt *bp, struct bnxt_db_info *db, u32 idx) { - if (bp->flags & BNXT_FLAG_CHIP_P5) + if (bp->flags & BNXT_FLAG_CHIP_P5_PLUS) bnxt_writeq(bp, db->db_key64 | DBR_TYPE_CQ_ARMALL | DB_RING_IDX(db, idx), db->doorbell); else @@ -321,7 +321,7 @@ static void bnxt_sched_reset_rxr(struct bnxt *bp, struct bnxt_rx_ring_info *rxr) { if (!rxr->bnapi->in_reset) { rxr->bnapi->in_reset = true; - if (bp->flags & BNXT_FLAG_CHIP_P5) + if (bp->flags & BNXT_FLAG_CHIP_P5_PLUS) set_bit(BNXT_RESET_TASK_SP_EVENT, &bp->sp_event); else set_bit(BNXT_RST_RING_SP_EVENT, &bp->sp_event); @@ -739,7 +739,7 @@ static void __bnxt_tx_int(struct bnxt *bp, struct bnxt_tx_ring_info *txr, DMA_TO_DEVICE); } if (unlikely(skb_shinfo(skb)->tx_flags & SKBTX_IN_PROGRESS)) { - if (bp->flags & BNXT_FLAG_CHIP_P5) { + if (BNXT_CHIP_P5(bp)) { /* PTP worker takes ownership of the skb */ if (!bnxt_get_tx_ts_p5(bp, skb)) skb = NULL; @@ -946,7 +946,7 @@ static void bnxt_reuse_rx_agg_bufs(struct bnxt_cp_ring_info *cpr, u16 idx, bool p5_tpa = false; u32 i; - if ((bp->flags & BNXT_FLAG_CHIP_P5) && tpa) + if ((bp->flags & BNXT_FLAG_CHIP_P5_PLUS) && tpa) p5_tpa = true; for (i = 0; i < agg_bufs; i++) { @@ -1113,7 +1113,7 @@ static u32 __bnxt_rx_agg_pages(struct bnxt *bp, u32 i, total_frag_len = 0; bool p5_tpa = false; - if ((bp->flags & BNXT_FLAG_CHIP_P5) && tpa) + if ((bp->flags & BNXT_FLAG_CHIP_P5_PLUS) && tpa) p5_tpa = true; for (i = 0; i < agg_bufs; i++) { @@ -1268,7 +1268,7 @@ static int bnxt_discard_rx(struct bnxt *bp, struct bnxt_cp_ring_info *cpr, } else if (cmp_type == CMP_TYPE_RX_L2_TPA_END_CMP) { struct rx_tpa_end_cmp *tpa_end = cmp; - if (bp->flags & BNXT_FLAG_CHIP_P5) + if (bp->flags & BNXT_FLAG_CHIP_P5_PLUS) return 0; agg_bufs = TPA_END_AGG_BUFS(tpa_end); @@ -1319,7 +1319,7 @@ static void bnxt_tpa_start(struct bnxt *bp, struct bnxt_rx_ring_info *rxr, struct rx_bd *prod_bd; dma_addr_t mapping; - if (bp->flags & BNXT_FLAG_CHIP_P5) { + if (bp->flags & BNXT_FLAG_CHIP_P5_PLUS) { agg_id = TPA_START_AGG_ID_P5(tpa_start); agg_id = bnxt_alloc_agg_idx(rxr, agg_id); } else { @@ 
-1582,7 +1582,7 @@ static inline struct sk_buff *bnxt_gro_skb(struct bnxt *bp, skb_shinfo(skb)->gso_size = le32_to_cpu(tpa_end1->rx_tpa_end_cmp_seg_len); skb_shinfo(skb)->gso_type = tpa_info->gso_type; - if (bp->flags & BNXT_FLAG_CHIP_P5) + if (bp->flags & BNXT_FLAG_CHIP_P5_PLUS) payload_off = TPA_END_PAYLOAD_OFF_P5(tpa_end1); else payload_off = TPA_END_PAYLOAD_OFF(tpa_end); @@ -1630,7 +1630,7 @@ static inline struct sk_buff *bnxt_tpa_end(struct bnxt *bp, return NULL; } - if (bp->flags & BNXT_FLAG_CHIP_P5) { + if (bp->flags & BNXT_FLAG_CHIP_P5_PLUS) { agg_id = TPA_END_AGG_ID_P5(tpa_end); agg_id = bnxt_lookup_agg_idx(rxr, agg_id); agg_bufs = TPA_END_AGG_BUFS_P5(tpa_end1); @@ -1895,7 +1895,7 @@ static int bnxt_rx_pkt(struct bnxt *bp, struct bnxt_cp_ring_info *cpr, rc = -EIO; if (rx_err & RX_CMPL_ERRORS_BUFFER_ERROR_MASK) { bnapi->cp_ring.sw_stats.rx.rx_buf_errors++; - if (!(bp->flags & BNXT_FLAG_CHIP_P5) && + if (!(bp->flags & BNXT_FLAG_CHIP_P5_PLUS) && !(bp->fw_cap & BNXT_FW_CAP_RING_MONITOR)) { netdev_warn_once(bp->dev, "RX buffer error %x\n", rx_err); @@ -2026,7 +2026,7 @@ static int bnxt_rx_pkt(struct bnxt *bp, struct bnxt_cp_ring_info *cpr, if (unlikely((flags & RX_CMP_FLAGS_ITYPES_MASK) == RX_CMP_FLAGS_ITYPE_PTP_W_TS) || bp->ptp_all_rx_tstamp) { - if (bp->flags & BNXT_FLAG_CHIP_P5) { + if (bp->flags & BNXT_FLAG_CHIP_P5_PLUS) { u32 cmpl_ts = le32_to_cpu(rxcmp1->rx_cmp_timestamp); u64 ns, ts; @@ -2434,7 +2434,7 @@ static int bnxt_async_event_process(struct bnxt *bp, struct bnxt_rx_ring_info *rxr; u16 grp_idx; - if (bp->flags & BNXT_FLAG_CHIP_P5) + if (bp->flags & BNXT_FLAG_CHIP_P5_PLUS) goto async_event_process_exit; netdev_warn(bp->dev, "Ring monitor event, ring type %lu id 0x%x\n", @@ -3258,7 +3258,7 @@ static int bnxt_alloc_tpa_info(struct bnxt *bp) int i, j; bp->max_tpa = MAX_TPA; - if (bp->flags & BNXT_FLAG_CHIP_P5) { + if (bp->flags & BNXT_FLAG_CHIP_P5_PLUS) { if (!bp->max_tpa_v2) return 0; bp->max_tpa = max_t(u16, bp->max_tpa_v2, MAX_TPA_P5); @@ -3273,7 +3273,7 @@ static int bnxt_alloc_tpa_info(struct bnxt *bp) if (!rxr->rx_tpa) return -ENOMEM; - if (!(bp->flags & BNXT_FLAG_CHIP_P5)) + if (!(bp->flags & BNXT_FLAG_CHIP_P5_PLUS)) continue; for (j = 0; j < bp->max_tpa; j++) { agg = kcalloc(MAX_SKB_FRAGS, sizeof(*agg), GFP_KERNEL); @@ -3651,7 +3651,7 @@ static int bnxt_alloc_cp_rings(struct bnxt *bp) else ring->map_idx = i; - if (!(bp->flags & BNXT_FLAG_CHIP_P5)) + if (!(bp->flags & BNXT_FLAG_CHIP_P5_PLUS)) continue; if (i < bp->rx_nr_rings) { @@ -3965,7 +3965,7 @@ static int bnxt_alloc_vnics(struct bnxt *bp) int num_vnics = 1; #ifdef CONFIG_RFS_ACCEL - if ((bp->flags & (BNXT_FLAG_RFS | BNXT_FLAG_CHIP_P5)) == BNXT_FLAG_RFS) + if ((bp->flags & (BNXT_FLAG_RFS | BNXT_FLAG_CHIP_P5_PLUS)) == BNXT_FLAG_RFS) num_vnics += bp->rx_nr_rings; #endif @@ -4238,7 +4238,7 @@ static int bnxt_alloc_vnic_attributes(struct bnxt *bp) } } - if (bp->flags & BNXT_FLAG_CHIP_P5) + if (bp->flags & BNXT_FLAG_CHIP_P5_PLUS) goto vnic_skip_grps; if (vnic->flags & BNXT_VNIC_RSS_FLAG) @@ -4258,7 +4258,7 @@ static int bnxt_alloc_vnic_attributes(struct bnxt *bp) /* Allocate rss table and hash key */ size = L1_CACHE_ALIGN(HW_HASH_INDEX_SIZE * sizeof(u16)); - if (bp->flags & BNXT_FLAG_CHIP_P5) + if (bp->flags & BNXT_FLAG_CHIP_P5_PLUS) size = L1_CACHE_ALIGN(BNXT_MAX_RSS_TABLE_SIZE_P5); vnic->rss_table_size = size + HW_HASH_KEY_SIZE; @@ -4368,7 +4368,7 @@ static int bnxt_hwrm_func_qstat_ext(struct bnxt *bp, int rc; if (!(bp->fw_cap & BNXT_FW_CAP_EXT_HW_STATS_SUPPORTED) || - !(bp->flags & BNXT_FLAG_CHIP_P5)) + !(bp->flags & 
BNXT_FLAG_CHIP_P5_PLUS)) return -EOPNOTSUPP; rc = hwrm_req_init(bp, req, HWRM_FUNC_QSTATS_EXT); @@ -4406,7 +4406,7 @@ static void bnxt_init_stats(struct bnxt *bp) stats = &cpr->stats; rc = bnxt_hwrm_func_qstat_ext(bp, stats); if (rc) { - if (bp->flags & BNXT_FLAG_CHIP_P5) + if (bp->flags & BNXT_FLAG_CHIP_P5_PLUS) mask = (1ULL << 48) - 1; else mask = -1ULL; @@ -4686,7 +4686,7 @@ static int bnxt_alloc_mem(struct bnxt *bp, bool irq_re_init) bp->bnapi[i] = bnapi; bp->bnapi[i]->index = i; bp->bnapi[i]->bp = bp; - if (bp->flags & BNXT_FLAG_CHIP_P5) { + if (bp->flags & BNXT_FLAG_CHIP_P5_PLUS) { struct bnxt_cp_ring_info *cpr = &bp->bnapi[i]->cp_ring; @@ -4704,7 +4704,7 @@ static int bnxt_alloc_mem(struct bnxt *bp, bool irq_re_init) for (i = 0; i < bp->rx_nr_rings; i++) { struct bnxt_rx_ring_info *rxr = &bp->rx_ring[i]; - if (bp->flags & BNXT_FLAG_CHIP_P5) { + if (bp->flags & BNXT_FLAG_CHIP_P5_PLUS) { rxr->rx_ring_struct.ring_mem.flags = BNXT_RMEM_RING_PTE_FLAG; rxr->rx_agg_ring_struct.ring_mem.flags = @@ -4737,7 +4737,7 @@ static int bnxt_alloc_mem(struct bnxt *bp, bool irq_re_init) struct bnxt_tx_ring_info *txr = &bp->tx_ring[i]; struct bnxt_napi *bnapi2; - if (bp->flags & BNXT_FLAG_CHIP_P5) + if (bp->flags & BNXT_FLAG_CHIP_P5_PLUS) txr->tx_ring_struct.ring_mem.flags = BNXT_RMEM_RING_PTE_FLAG; bp->tx_ring_map[i] = bp->tx_nr_rings_xdp + i; @@ -4758,7 +4758,7 @@ static int bnxt_alloc_mem(struct bnxt *bp, bool irq_re_init) j++; } txr->bnapi = bnapi2; - if (!(bp->flags & BNXT_FLAG_CHIP_P5)) + if (!(bp->flags & BNXT_FLAG_CHIP_P5_PLUS)) txr->tx_cpr = &bnapi2->cp_ring; } @@ -5284,7 +5284,7 @@ static int bnxt_hwrm_vnic_set_tpa(struct bnxt *bp, u16 vnic_id, u32 tpa_flags) nsegs = (MAX_SKB_FRAGS - n) / n; } - if (bp->flags & BNXT_FLAG_CHIP_P5) { + if (bp->flags & BNXT_FLAG_CHIP_P5_PLUS) { segs = MAX_TPA_SEGS_P5; max_aggs = bp->max_tpa; } else { @@ -5310,7 +5310,7 @@ static u16 bnxt_cp_ring_from_grp(struct bnxt *bp, struct bnxt_ring_struct *ring) static u16 bnxt_cp_ring_for_rx(struct bnxt *bp, struct bnxt_rx_ring_info *rxr) { - if (bp->flags & BNXT_FLAG_CHIP_P5) + if (bp->flags & BNXT_FLAG_CHIP_P5_PLUS) return rxr->rx_cpr->cp_ring_struct.fw_ring_id; else return bnxt_cp_ring_from_grp(bp, &rxr->rx_ring_struct); @@ -5318,7 +5318,7 @@ static u16 bnxt_cp_ring_for_rx(struct bnxt *bp, struct bnxt_rx_ring_info *rxr) static u16 bnxt_cp_ring_for_tx(struct bnxt *bp, struct bnxt_tx_ring_info *txr) { - if (bp->flags & BNXT_FLAG_CHIP_P5) + if (bp->flags & BNXT_FLAG_CHIP_P5_PLUS) return txr->tx_cpr->cp_ring_struct.fw_ring_id; else return bnxt_cp_ring_from_grp(bp, &txr->tx_ring_struct); @@ -5328,7 +5328,7 @@ static int bnxt_alloc_rss_indir_tbl(struct bnxt *bp) { int entries; - if (bp->flags & BNXT_FLAG_CHIP_P5) + if (bp->flags & BNXT_FLAG_CHIP_P5_PLUS) entries = BNXT_MAX_RSS_TABLE_ENTRIES_P5; else entries = HW_HASH_INDEX_SIZE; @@ -5378,7 +5378,7 @@ static u16 bnxt_get_max_rss_ring(struct bnxt *bp) int bnxt_get_nr_rss_ctxs(struct bnxt *bp, int rx_rings) { - if (bp->flags & BNXT_FLAG_CHIP_P5) + if (bp->flags & BNXT_FLAG_CHIP_P5_PLUS) return DIV_ROUND_UP(rx_rings, BNXT_RSS_TABLE_ENTRIES_P5); if (BNXT_CHIP_TYPE_NITRO_A0(bp)) return 2; @@ -5424,7 +5424,7 @@ static void __bnxt_hwrm_vnic_set_rss(struct bnxt *bp, struct hwrm_vnic_rss_cfg_input *req, struct bnxt_vnic_info *vnic) { - if (bp->flags & BNXT_FLAG_CHIP_P5) + if (bp->flags & BNXT_FLAG_CHIP_P5_PLUS) bnxt_fill_hw_rss_tbl_p5(bp, vnic); else bnxt_fill_hw_rss_tbl(bp, vnic); @@ -5449,7 +5449,7 @@ static int bnxt_hwrm_vnic_set_rss(struct bnxt *bp, u16 vnic_id, bool set_rss) 
struct hwrm_vnic_rss_cfg_input *req; int rc; - if ((bp->flags & BNXT_FLAG_CHIP_P5) || + if ((bp->flags & BNXT_FLAG_CHIP_P5_PLUS) || vnic->fw_rss_cos_lb_ctx[0] == INVALID_HW_RING_ID) return 0; @@ -5614,7 +5614,7 @@ int bnxt_hwrm_vnic_cfg(struct bnxt *bp, u16 vnic_id) if (rc) return rc; - if (bp->flags & BNXT_FLAG_CHIP_P5) { + if (bp->flags & BNXT_FLAG_CHIP_P5_PLUS) { struct bnxt_rx_ring_info *rxr = &bp->rx_ring[0]; req->default_rx_ring_id = @@ -5714,7 +5714,7 @@ static int bnxt_hwrm_vnic_alloc(struct bnxt *bp, u16 vnic_id, if (rc) return rc; - if (bp->flags & BNXT_FLAG_CHIP_P5) + if (bp->flags & BNXT_FLAG_CHIP_P5_PLUS) goto vnic_no_ring_grps; /* map ring groups to this vnic */ @@ -5762,7 +5762,7 @@ static int bnxt_hwrm_vnic_qcaps(struct bnxt *bp) if (!rc) { u32 flags = le32_to_cpu(resp->flags); - if (!(bp->flags & BNXT_FLAG_CHIP_P5) && + if (!(bp->flags & BNXT_FLAG_CHIP_P5_PLUS) && (flags & VNIC_QCAPS_RESP_FLAGS_RSS_DFLT_CR_CAP)) bp->flags |= BNXT_FLAG_NEW_RSS_CAP; if (flags & @@ -5773,14 +5773,14 @@ static int bnxt_hwrm_vnic_qcaps(struct bnxt *bp) * VLAN_STRIP_CAP properly. */ if ((flags & VNIC_QCAPS_RESP_FLAGS_VLAN_STRIP_CAP) || - (BNXT_CHIP_P5_THOR(bp) && + (BNXT_CHIP_P5(bp) && !(bp->fw_cap & BNXT_FW_CAP_EXT_HW_STATS_SUPPORTED))) bp->fw_cap |= BNXT_FW_CAP_VLAN_RX_STRIP; if (flags & VNIC_QCAPS_RESP_FLAGS_RSS_HASH_TYPE_DELTA_CAP) bp->fw_cap |= BNXT_FW_CAP_RSS_HASH_TYPE_DELTA; bp->max_tpa_v2 = le16_to_cpu(resp->max_aggs_supported); if (bp->max_tpa_v2) { - if (BNXT_CHIP_P5_THOR(bp)) + if (BNXT_CHIP_P5(bp)) bp->hw_ring_stats_size = BNXT_RING_STATS_SIZE_P5; else bp->hw_ring_stats_size = BNXT_RING_STATS_SIZE_P5_SR2; @@ -5797,7 +5797,7 @@ static int bnxt_hwrm_ring_grp_alloc(struct bnxt *bp) int rc; u16 i; - if (bp->flags & BNXT_FLAG_CHIP_P5) + if (bp->flags & BNXT_FLAG_CHIP_P5_PLUS) return 0; rc = hwrm_req_init(bp, req, HWRM_RING_GRP_ALLOC); @@ -5830,7 +5830,7 @@ static void bnxt_hwrm_ring_grp_free(struct bnxt *bp) struct hwrm_ring_grp_free_input *req; u16 i; - if (!bp->grp_info || (bp->flags & BNXT_FLAG_CHIP_P5)) + if (!bp->grp_info || (bp->flags & BNXT_FLAG_CHIP_P5_PLUS)) return; if (hwrm_req_init(bp, req, HWRM_RING_GRP_FREE)) @@ -5895,7 +5895,7 @@ static int hwrm_ring_alloc_send_msg(struct bnxt *bp, case HWRM_RING_ALLOC_RX: req->ring_type = RING_ALLOC_REQ_RING_TYPE_RX; req->length = cpu_to_le32(bp->rx_ring_mask + 1); - if (bp->flags & BNXT_FLAG_CHIP_P5) { + if (bp->flags & BNXT_FLAG_CHIP_P5_PLUS) { u16 flags = 0; /* Association of rx ring with stats context */ @@ -5910,7 +5910,7 @@ static int hwrm_ring_alloc_send_msg(struct bnxt *bp, } break; case HWRM_RING_ALLOC_AGG: - if (bp->flags & BNXT_FLAG_CHIP_P5) { + if (bp->flags & BNXT_FLAG_CHIP_P5_PLUS) { req->ring_type = RING_ALLOC_REQ_RING_TYPE_RX_AGG; /* Association of agg ring with rx ring */ grp_info = &bp->grp_info[ring->grp_idx]; @@ -5928,7 +5928,7 @@ static int hwrm_ring_alloc_send_msg(struct bnxt *bp, case HWRM_RING_ALLOC_CMPL: req->ring_type = RING_ALLOC_REQ_RING_TYPE_L2_CMPL; req->length = cpu_to_le32(bp->cp_ring_mask + 1); - if (bp->flags & BNXT_FLAG_CHIP_P5) { + if (bp->flags & BNXT_FLAG_CHIP_P5_PLUS) { /* Association of cp ring with nq */ grp_info = &bp->grp_info[map_index]; req->nq_ring_id = cpu_to_le16(grp_info->cp_fw_ring_id); @@ -6019,7 +6019,7 @@ static void bnxt_set_db_mask(struct bnxt *bp, struct bnxt_db_info *db, static void bnxt_set_db(struct bnxt *bp, struct bnxt_db_info *db, u32 ring_type, u32 map_idx, u32 xid) { - if (bp->flags & BNXT_FLAG_CHIP_P5) { + if (bp->flags & BNXT_FLAG_CHIP_P5_PLUS) { if (BNXT_PF(bp)) db->doorbell 
= bp->bar1 + DB_PF_OFFSET_P5; else @@ -6064,7 +6064,7 @@ static int bnxt_hwrm_ring_alloc(struct bnxt *bp) int i, rc = 0; u32 type; - if (bp->flags & BNXT_FLAG_CHIP_P5) + if (bp->flags & BNXT_FLAG_CHIP_P5_PLUS) type = HWRM_RING_ALLOC_NQ; else type = HWRM_RING_ALLOC_CMPL; @@ -6100,7 +6100,7 @@ static int bnxt_hwrm_ring_alloc(struct bnxt *bp) struct bnxt_ring_struct *ring; u32 map_idx; - if (bp->flags & BNXT_FLAG_CHIP_P5) { + if (bp->flags & BNXT_FLAG_CHIP_P5_PLUS) { struct bnxt_cp_ring_info *cpr2 = txr->tx_cpr; struct bnxt_napi *bnapi = txr->bnapi; u32 type2 = HWRM_RING_ALLOC_CMPL; @@ -6138,7 +6138,7 @@ static int bnxt_hwrm_ring_alloc(struct bnxt *bp) if (!agg_rings) bnxt_db_write(bp, &rxr->rx_db, rxr->rx_prod); bp->grp_info[map_idx].rx_fw_ring_id = ring->fw_ring_id; - if (bp->flags & BNXT_FLAG_CHIP_P5) { + if (bp->flags & BNXT_FLAG_CHIP_P5_PLUS) { struct bnxt_cp_ring_info *cpr2 = rxr->rx_cpr; u32 type2 = HWRM_RING_ALLOC_CMPL; @@ -6251,7 +6251,7 @@ static void bnxt_hwrm_ring_free(struct bnxt *bp, bool close_path) } } - if (bp->flags & BNXT_FLAG_CHIP_P5) + if (bp->flags & BNXT_FLAG_CHIP_P5_PLUS) type = RING_FREE_REQ_RING_TYPE_RX_AGG; else type = RING_FREE_REQ_RING_TYPE_RX; @@ -6278,7 +6278,7 @@ static void bnxt_hwrm_ring_free(struct bnxt *bp, bool close_path) */ bnxt_disable_int_sync(bp); - if (bp->flags & BNXT_FLAG_CHIP_P5) + if (bp->flags & BNXT_FLAG_CHIP_P5_PLUS) type = RING_FREE_REQ_RING_TYPE_NQ; else type = RING_FREE_REQ_RING_TYPE_L2_CMPL; @@ -6345,7 +6345,7 @@ static int bnxt_hwrm_get_rings(struct bnxt *bp) cp = le16_to_cpu(resp->alloc_cmpl_rings); stats = le16_to_cpu(resp->alloc_stat_ctx); hw_resc->resv_irqs = cp; - if (bp->flags & BNXT_FLAG_CHIP_P5) { + if (bp->flags & BNXT_FLAG_CHIP_P5_PLUS) { int rx = hw_resc->resv_rx_rings; int tx = hw_resc->resv_tx_rings; @@ -6410,7 +6410,7 @@ __bnxt_hwrm_reserve_pf_rings(struct bnxt *bp, int tx_rings, int rx_rings, if (BNXT_NEW_RM(bp)) { enables |= rx_rings ? FUNC_CFG_REQ_ENABLES_NUM_RX_RINGS : 0; enables |= stats ? FUNC_CFG_REQ_ENABLES_NUM_STAT_CTXS : 0; - if (bp->flags & BNXT_FLAG_CHIP_P5) { + if (bp->flags & BNXT_FLAG_CHIP_P5_PLUS) { enables |= cp_rings ? FUNC_CFG_REQ_ENABLES_NUM_MSIX : 0; enables |= tx_rings + ring_grps ? FUNC_CFG_REQ_ENABLES_NUM_CMPL_RINGS : 0; @@ -6426,7 +6426,7 @@ __bnxt_hwrm_reserve_pf_rings(struct bnxt *bp, int tx_rings, int rx_rings, enables |= vnics ? FUNC_CFG_REQ_ENABLES_NUM_VNICS : 0; req->num_rx_rings = cpu_to_le16(rx_rings); - if (bp->flags & BNXT_FLAG_CHIP_P5) { + if (bp->flags & BNXT_FLAG_CHIP_P5_PLUS) { req->num_cmpl_rings = cpu_to_le16(tx_rings + ring_grps); req->num_msix = cpu_to_le16(cp_rings); req->num_rsscos_ctxs = @@ -6461,7 +6461,7 @@ __bnxt_hwrm_reserve_vf_rings(struct bnxt *bp, int tx_rings, int rx_rings, enables |= rx_rings ? FUNC_VF_CFG_REQ_ENABLES_NUM_RX_RINGS | FUNC_VF_CFG_REQ_ENABLES_NUM_RSSCOS_CTXS : 0; enables |= stats ? FUNC_VF_CFG_REQ_ENABLES_NUM_STAT_CTXS : 0; - if (bp->flags & BNXT_FLAG_CHIP_P5) { + if (bp->flags & BNXT_FLAG_CHIP_P5_PLUS) { enables |= tx_rings + ring_grps ? 
FUNC_VF_CFG_REQ_ENABLES_NUM_CMPL_RINGS : 0; } else { @@ -6476,7 +6476,7 @@ __bnxt_hwrm_reserve_vf_rings(struct bnxt *bp, int tx_rings, int rx_rings, req->num_l2_ctxs = cpu_to_le16(BNXT_VF_MAX_L2_CTX); req->num_tx_rings = cpu_to_le16(tx_rings); req->num_rx_rings = cpu_to_le16(rx_rings); - if (bp->flags & BNXT_FLAG_CHIP_P5) { + if (bp->flags & BNXT_FLAG_CHIP_P5_PLUS) { req->num_cmpl_rings = cpu_to_le16(tx_rings + ring_grps); req->num_rsscos_ctxs = cpu_to_le16(DIV_ROUND_UP(ring_grps, 64)); } else { @@ -6572,7 +6572,7 @@ static int bnxt_cp_rings_in_use(struct bnxt *bp) { int cp; - if (!(bp->flags & BNXT_FLAG_CHIP_P5)) + if (!(bp->flags & BNXT_FLAG_CHIP_P5_PLUS)) return bnxt_nq_rings_in_use(bp); cp = bp->tx_nr_rings + bp->rx_nr_rings; @@ -6629,7 +6629,8 @@ static bool bnxt_need_reserve_rings(struct bnxt *bp) bnxt_check_rss_tbl_no_rmgr(bp); return false; } - if ((bp->flags & BNXT_FLAG_RFS) && !(bp->flags & BNXT_FLAG_CHIP_P5)) + if ((bp->flags & BNXT_FLAG_RFS) && + !(bp->flags & BNXT_FLAG_CHIP_P5_PLUS)) vnic = rx + 1; if (bp->flags & BNXT_FLAG_AGG_RINGS) rx <<= 1; @@ -6637,9 +6638,9 @@ static bool bnxt_need_reserve_rings(struct bnxt *bp) if (hw_resc->resv_rx_rings != rx || hw_resc->resv_cp_rings != cp || hw_resc->resv_vnics != vnic || hw_resc->resv_stat_ctxs != stat || (hw_resc->resv_hw_ring_grps != grp && - !(bp->flags & BNXT_FLAG_CHIP_P5))) + !(bp->flags & BNXT_FLAG_CHIP_P5_PLUS))) return true; - if ((bp->flags & BNXT_FLAG_CHIP_P5) && BNXT_PF(bp) && + if ((bp->flags & BNXT_FLAG_CHIP_P5_PLUS) && BNXT_PF(bp) && hw_resc->resv_irqs != nq) return true; return false; @@ -6661,7 +6662,8 @@ static int __bnxt_reserve_rings(struct bnxt *bp) if (bp->flags & BNXT_FLAG_SHARED_RINGS) sh = true; - if ((bp->flags & BNXT_FLAG_RFS) && !(bp->flags & BNXT_FLAG_CHIP_P5)) + if ((bp->flags & BNXT_FLAG_RFS) && + !(bp->flags & BNXT_FLAG_CHIP_P5_PLUS)) vnic = rx + 1; if (bp->flags & BNXT_FLAG_AGG_RINGS) rx <<= 1; @@ -6752,7 +6754,7 @@ static int bnxt_hwrm_check_vf_rings(struct bnxt *bp, int tx_rings, int rx_rings, FUNC_VF_CFG_REQ_FLAGS_STAT_CTX_ASSETS_TEST | FUNC_VF_CFG_REQ_FLAGS_VNIC_ASSETS_TEST | FUNC_VF_CFG_REQ_FLAGS_RSSCOS_CTX_ASSETS_TEST; - if (!(bp->flags & BNXT_FLAG_CHIP_P5)) + if (!(bp->flags & BNXT_FLAG_CHIP_P5_PLUS)) flags |= FUNC_VF_CFG_REQ_FLAGS_RING_GRP_ASSETS_TEST; req->flags = cpu_to_le32(flags); @@ -6774,7 +6776,7 @@ static int bnxt_hwrm_check_pf_rings(struct bnxt *bp, int tx_rings, int rx_rings, FUNC_CFG_REQ_FLAGS_CMPL_ASSETS_TEST | FUNC_CFG_REQ_FLAGS_STAT_CTX_ASSETS_TEST | FUNC_CFG_REQ_FLAGS_VNIC_ASSETS_TEST; - if (bp->flags & BNXT_FLAG_CHIP_P5) + if (bp->flags & BNXT_FLAG_CHIP_P5_PLUS) flags |= FUNC_CFG_REQ_FLAGS_RSSCOS_CTX_ASSETS_TEST | FUNC_CFG_REQ_FLAGS_NQ_ASSETS_TEST; else @@ -6993,7 +6995,7 @@ bnxt_hwrm_set_tx_coal(struct bnxt *bp, struct bnxt_napi *bnapi, rc = hwrm_req_send(bp, req); if (rc) return rc; - if (!(bp->flags & BNXT_FLAG_CHIP_P5)) + if (!(bp->flags & BNXT_FLAG_CHIP_P5_PLUS)) return 0; } return 0; @@ -7030,7 +7032,7 @@ int bnxt_hwrm_set_coal(struct bnxt *bp) if (rc) break; - if (!(bp->flags & BNXT_FLAG_CHIP_P5)) + if (!(bp->flags & BNXT_FLAG_CHIP_P5_PLUS)) continue; if (bnapi->rx_ring && bnapi->tx_ring[0]) { @@ -7188,7 +7190,7 @@ static int bnxt_hwrm_func_qcfg(struct bnxt *bp) if (bp->db_size) goto func_qcfg_exit; - if (bp->flags & BNXT_FLAG_CHIP_P5) { + if (bp->flags & BNXT_FLAG_CHIP_P5_PLUS) { if (BNXT_PF(bp)) min_db_offset = DB_PF_OFFSET_P5; else @@ -7951,7 +7953,7 @@ int bnxt_hwrm_func_resc_qcaps(struct bnxt *bp, bool all) hw_resc->min_stat_ctxs = le16_to_cpu(resp->min_stat_ctx); 
hw_resc->max_stat_ctxs = le16_to_cpu(resp->max_stat_ctx); - if (bp->flags & BNXT_FLAG_CHIP_P5) { + if (bp->flags & BNXT_FLAG_CHIP_P5_PLUS) { u16 max_msix = le16_to_cpu(resp->max_msix); hw_resc->max_nqs = max_msix; @@ -7980,7 +7982,7 @@ static int __bnxt_hwrm_ptp_qcfg(struct bnxt *bp) u8 flags; int rc; - if (bp->hwrm_spec_code < 0x10801 || !BNXT_CHIP_P5_THOR(bp)) { + if (bp->hwrm_spec_code < 0x10801 || !BNXT_CHIP_P5(bp)) { rc = -ENODEV; goto no_ptp; } @@ -8012,7 +8014,7 @@ static int __bnxt_hwrm_ptp_qcfg(struct bnxt *bp) if (flags & PORT_MAC_PTP_QCFG_RESP_FLAGS_PARTIAL_DIRECT_ACCESS_REF_CLOCK) { ptp->refclk_regs[0] = le32_to_cpu(resp->ts_ref_clock_reg_lower); ptp->refclk_regs[1] = le32_to_cpu(resp->ts_ref_clock_reg_upper); - } else if (bp->flags & BNXT_FLAG_CHIP_P5) { + } else if (bp->flags & BNXT_FLAG_CHIP_P5_PLUS) { ptp->refclk_regs[0] = BNXT_TS_REG_TIMESYNC_TS0_LOWER; ptp->refclk_regs[1] = BNXT_TS_REG_TIMESYNC_TS0_UPPER; } else { @@ -8721,7 +8723,7 @@ static void bnxt_accumulate_all_stats(struct bnxt *bp) int i; /* Chip bug. Counter intermittently becomes 0. */ - if (bp->flags & BNXT_FLAG_CHIP_P5) + if (bp->flags & BNXT_FLAG_CHIP_P5_PLUS) ignore_zero = true; for (i = 0; i < bp->cp_nr_rings; i++) { @@ -8915,7 +8917,7 @@ static void bnxt_clear_vnic(struct bnxt *bp) return; bnxt_hwrm_clear_vnic_filter(bp); - if (!(bp->flags & BNXT_FLAG_CHIP_P5)) { + if (!(bp->flags & BNXT_FLAG_CHIP_P5_PLUS)) { /* clear all RSS setting before free vnic ctx */ bnxt_hwrm_clear_vnic_rss(bp); bnxt_hwrm_vnic_ctx_free(bp); @@ -8924,7 +8926,7 @@ static void bnxt_clear_vnic(struct bnxt *bp) if (bp->flags & BNXT_FLAG_TPA) bnxt_set_tpa(bp, false); bnxt_hwrm_vnic_free(bp); - if (bp->flags & BNXT_FLAG_CHIP_P5) + if (bp->flags & BNXT_FLAG_CHIP_P5_PLUS) bnxt_hwrm_vnic_ctx_free(bp); } @@ -9081,7 +9083,7 @@ static int __bnxt_setup_vnic_p5(struct bnxt *bp, u16 vnic_id) static int bnxt_setup_vnic(struct bnxt *bp, u16 vnic_id) { - if (bp->flags & BNXT_FLAG_CHIP_P5) + if (bp->flags & BNXT_FLAG_CHIP_P5_PLUS) return __bnxt_setup_vnic_p5(bp, vnic_id); else return __bnxt_setup_vnic(bp, vnic_id); @@ -9092,7 +9094,7 @@ static int bnxt_alloc_rfs_vnics(struct bnxt *bp) #ifdef CONFIG_RFS_ACCEL int i, rc = 0; - if (bp->flags & BNXT_FLAG_CHIP_P5) + if (bp->flags & BNXT_FLAG_CHIP_P5_PLUS) return 0; for (i = 0; i < bp->rx_nr_rings; i++) { @@ -9474,7 +9476,7 @@ static unsigned int bnxt_get_max_func_cp_rings_for_en(struct bnxt *bp) { unsigned int cp = bp->hw_resc.max_cp_rings; - if (!(bp->flags & BNXT_FLAG_CHIP_P5)) + if (!(bp->flags & BNXT_FLAG_CHIP_P5_PLUS)) cp -= bnxt_get_ulp_msix_num(bp); return cp; @@ -9484,7 +9486,7 @@ static unsigned int bnxt_get_max_func_irqs(struct bnxt *bp) { struct bnxt_hw_resc *hw_resc = &bp->hw_resc; - if (bp->flags & BNXT_FLAG_CHIP_P5) + if (bp->flags & BNXT_FLAG_CHIP_P5_PLUS) return min_t(unsigned int, hw_resc->max_irqs, hw_resc->max_nqs); return min_t(unsigned int, hw_resc->max_irqs, hw_resc->max_cp_rings); @@ -9500,7 +9502,7 @@ unsigned int bnxt_get_avail_cp_rings_for_en(struct bnxt *bp) unsigned int cp; cp = bnxt_get_max_func_cp_rings_for_en(bp); - if (bp->flags & BNXT_FLAG_CHIP_P5) + if (bp->flags & BNXT_FLAG_CHIP_P5_PLUS) return cp - bp->rx_nr_rings - bp->tx_nr_rings; else return cp - bp->cp_nr_rings; @@ -9519,7 +9521,7 @@ int bnxt_get_avail_msix(struct bnxt *bp, int num) int max_idx, avail_msix; max_idx = bp->total_irqs; - if (!(bp->flags & BNXT_FLAG_CHIP_P5)) + if (!(bp->flags & BNXT_FLAG_CHIP_P5_PLUS)) max_idx = min_t(int, bp->total_irqs, max_cp); avail_msix = max_idx - bp->cp_nr_rings; if 
(!BNXT_NEW_RM(bp) || avail_msix >= num) @@ -9798,7 +9800,7 @@ static void bnxt_init_napi(struct bnxt *bp) if (bp->flags & BNXT_FLAG_USING_MSIX) { int (*poll_fn)(struct napi_struct *, int) = bnxt_poll; - if (bp->flags & BNXT_FLAG_CHIP_P5) + if (bp->flags & BNXT_FLAG_CHIP_P5_PLUS) poll_fn = bnxt_poll_p5; else if (BNXT_CHIP_TYPE_NITRO_A0(bp)) cp_nr_rings--; @@ -11517,7 +11519,7 @@ static bool bnxt_can_reserve_rings(struct bnxt *bp) /* If the chip and firmware supports RFS */ static bool bnxt_rfs_supported(struct bnxt *bp) { - if (bp->flags & BNXT_FLAG_CHIP_P5) { + if (bp->flags & BNXT_FLAG_CHIP_P5_PLUS) { if (bp->fw_cap & BNXT_FW_CAP_CFA_RFS_RING_TBL_IDX_V2) return true; return false; @@ -11538,7 +11540,7 @@ static bool bnxt_rfs_capable(struct bnxt *bp) #ifdef CONFIG_RFS_ACCEL int vnics, max_vnics, max_rss_ctxs; - if (bp->flags & BNXT_FLAG_CHIP_P5) + if (bp->flags & BNXT_FLAG_CHIP_P5_PLUS) return bnxt_rfs_supported(bp); if (!(bp->flags & BNXT_FLAG_MSIX_CAP) || !bnxt_can_reserve_rings(bp) || !bp->rx_nr_rings) return false; @@ -11640,7 +11642,7 @@ static int bnxt_set_features(struct net_device *dev, netdev_features_t features) update_tpa = true; if ((bp->flags & BNXT_FLAG_TPA) == 0 || (flags & BNXT_FLAG_TPA) == 0 || - (bp->flags & BNXT_FLAG_CHIP_P5)) + (bp->flags & BNXT_FLAG_CHIP_P5_PLUS)) re_init = true; } @@ -12051,8 +12053,7 @@ static void bnxt_timer(struct timer_list *t) if (test_bit(BNXT_STATE_L2_FILTER_RETRY, &bp->state)) bnxt_queue_sp_work(bp, BNXT_RX_MASK_SP_EVENT); - if ((bp->flags & BNXT_FLAG_CHIP_P5) && !bp->chip_rev && - netif_carrier_ok(dev)) + if ((BNXT_CHIP_P5(bp)) && !bp->chip_rev && netif_carrier_ok(dev)) bnxt_queue_sp_work(bp, BNXT_RING_COAL_NOW_SP_EVENT); bnxt_restart_timer: @@ -12303,7 +12304,7 @@ static void bnxt_chk_missed_irq(struct bnxt *bp) { int i; - if (!(bp->flags & BNXT_FLAG_CHIP_P5)) + if (!(bp->flags & BNXT_FLAG_CHIP_P5_PLUS)) return; for (i = 0; i < bp->cp_nr_rings; i++) { @@ -12507,7 +12508,8 @@ int bnxt_check_rings(struct bnxt *bp, int tx, int rx, bool sh, int tcs, return -ENOMEM; vnics = 1; - if ((bp->flags & (BNXT_FLAG_RFS | BNXT_FLAG_CHIP_P5)) == BNXT_FLAG_RFS) + if ((bp->flags & (BNXT_FLAG_RFS | BNXT_FLAG_CHIP_P5_PLUS)) == + BNXT_FLAG_RFS) vnics += rx; tx_cp = __bnxt_num_tx_to_cp(bp, tx_rings_needed, tx_sets, tx_xdp); @@ -12588,10 +12590,10 @@ static bool bnxt_fw_pre_resv_vnics(struct bnxt *bp) { u16 fw_maj = BNXT_FW_MAJ(bp), fw_bld = BNXT_FW_BLD(bp); - if (!(bp->flags & BNXT_FLAG_CHIP_P5) && + if (!(bp->flags & BNXT_FLAG_CHIP_P5_PLUS) && (fw_maj > 218 || (fw_maj == 218 && fw_bld >= 18))) return true; - if ((bp->flags & BNXT_FLAG_CHIP_P5) && + if ((bp->flags & BNXT_FLAG_CHIP_P5_PLUS) && (fw_maj > 216 || (fw_maj == 216 && fw_bld >= 172))) return true; return false; @@ -13647,7 +13649,7 @@ static void _bnxt_get_max_rings(struct bnxt *bp, int *max_rx, int *max_tx, max_irq = min_t(int, bnxt_get_max_func_irqs(bp) - bnxt_get_ulp_msix_num(bp), hw_resc->max_stat_ctxs - bnxt_get_ulp_stat_ctxs(bp)); - if (!(bp->flags & BNXT_FLAG_CHIP_P5)) + if (!(bp->flags & BNXT_FLAG_CHIP_P5_PLUS)) *max_cp = min_t(int, *max_cp, max_irq); max_ring_grps = hw_resc->max_hw_ring_grps; if (BNXT_CHIP_TYPE_NITRO_A0(bp) && BNXT_PF(bp)) { @@ -13656,7 +13658,7 @@ static void _bnxt_get_max_rings(struct bnxt *bp, int *max_rx, int *max_tx, } if (bp->flags & BNXT_FLAG_AGG_RINGS) *max_rx >>= 1; - if (bp->flags & BNXT_FLAG_CHIP_P5) { + if (bp->flags & BNXT_FLAG_CHIP_P5_PLUS) { if (*max_cp < (*max_rx + *max_tx)) { *max_rx = *max_cp / 2; *max_tx = *max_rx; @@ -14004,8 +14006,8 @@ static int 
bnxt_init_one(struct pci_dev *pdev, const struct pci_device_id *ent) if (BNXT_PF(bp)) bnxt_vpd_read_info(bp); - if (BNXT_CHIP_P5(bp)) { - bp->flags |= BNXT_FLAG_CHIP_P5; + if (BNXT_CHIP_P5_PLUS(bp)) { + bp->flags |= BNXT_FLAG_CHIP_P5_PLUS; if (BNXT_CHIP_SR2(bp)) bp->flags |= BNXT_FLAG_CHIP_SR2; } @@ -14070,7 +14072,7 @@ static int bnxt_init_one(struct pci_dev *pdev, const struct pci_device_id *ent) bp->gro_func = bnxt_gro_func_5730x; if (BNXT_CHIP_P4(bp)) bp->gro_func = bnxt_gro_func_5731x; - else if (BNXT_CHIP_P5(bp)) + else if (BNXT_CHIP_P5_PLUS(bp)) bp->gro_func = bnxt_gro_func_5750x; } if (!BNXT_CHIP_P4_PLUS(bp)) diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt.h b/drivers/net/ethernet/broadcom/bnxt/bnxt.h index b287e4a9adff..94b3627406c4 100644 --- a/drivers/net/ethernet/broadcom/bnxt/bnxt.h +++ b/drivers/net/ethernet/broadcom/bnxt/bnxt.h @@ -1430,7 +1430,7 @@ struct bnxt_test_info { }; #define CHIMP_REG_VIEW_ADDR \ - ((bp->flags & BNXT_FLAG_CHIP_P5) ? 0x80000000 : 0xb1000000) + ((bp->flags & BNXT_FLAG_CHIP_P5_PLUS) ? 0x80000000 : 0xb1000000) #define BNXT_GRCPF_REG_CHIMP_COMM 0x0 #define BNXT_GRCPF_REG_CHIMP_COMM_TRIGGER 0x100 @@ -1859,7 +1859,7 @@ struct bnxt { atomic_t intr_sem; u32 flags; - #define BNXT_FLAG_CHIP_P5 0x1 + #define BNXT_FLAG_CHIP_P5_PLUS 0x1 #define BNXT_FLAG_VF 0x2 #define BNXT_FLAG_LRO 0x4 #ifdef CONFIG_INET @@ -1913,21 +1913,21 @@ struct bnxt { #define BNXT_CHIP_TYPE_NITRO_A0(bp) ((bp)->flags & BNXT_FLAG_CHIP_NITRO_A0) #define BNXT_RX_PAGE_MODE(bp) ((bp)->flags & BNXT_FLAG_RX_PAGE_MODE) #define BNXT_SUPPORTS_TPA(bp) (!BNXT_CHIP_TYPE_NITRO_A0(bp) && \ - (!((bp)->flags & BNXT_FLAG_CHIP_P5) || \ + (!((bp)->flags & BNXT_FLAG_CHIP_P5_PLUS) ||\ (bp)->max_tpa_v2) && !is_kdump_kernel()) #define BNXT_RX_JUMBO_MODE(bp) ((bp)->flags & BNXT_FLAG_JUMBO) #define BNXT_CHIP_SR2(bp) \ ((bp)->chip_num == CHIP_NUM_58818) -#define BNXT_CHIP_P5_THOR(bp) \ +#define BNXT_CHIP_P5(bp) \ ((bp)->chip_num == CHIP_NUM_57508 || \ (bp)->chip_num == CHIP_NUM_57504 || \ (bp)->chip_num == CHIP_NUM_57502) /* Chip class phase 5 */ -#define BNXT_CHIP_P5(bp) \ - (BNXT_CHIP_P5_THOR(bp) || BNXT_CHIP_SR2(bp)) +#define BNXT_CHIP_P5_PLUS(bp) \ + (BNXT_CHIP_P5(bp) || BNXT_CHIP_SR2(bp)) /* Chip class phase 4.x */ #define BNXT_CHIP_P4(bp) \ @@ -1938,7 +1938,7 @@ struct bnxt { !BNXT_CHIP_TYPE_NITRO_A0(bp))) #define BNXT_CHIP_P4_PLUS(bp) \ - (BNXT_CHIP_P4(bp) || BNXT_CHIP_P5(bp)) + (BNXT_CHIP_P4(bp) || BNXT_CHIP_P5_PLUS(bp)) struct bnxt_aux_priv *aux_priv; struct bnxt_en_dev *edev; @@ -2362,7 +2362,7 @@ static inline void bnxt_writeq_relaxed(struct bnxt *bp, u64 val, static inline void bnxt_db_write_relaxed(struct bnxt *bp, struct bnxt_db_info *db, u32 idx) { - if (bp->flags & BNXT_FLAG_CHIP_P5) { + if (bp->flags & BNXT_FLAG_CHIP_P5_PLUS) { bnxt_writeq_relaxed(bp, db->db_key64 | DB_RING_IDX(db, idx), db->doorbell); } else { @@ -2378,7 +2378,7 @@ static inline void bnxt_db_write_relaxed(struct bnxt *bp, static inline void bnxt_db_write(struct bnxt *bp, struct bnxt_db_info *db, u32 idx) { - if (bp->flags & BNXT_FLAG_CHIP_P5) { + if (bp->flags & BNXT_FLAG_CHIP_P5_PLUS) { bnxt_writeq(bp, db->db_key64 | DB_RING_IDX(db, idx), db->doorbell); } else { diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt_devlink.c b/drivers/net/ethernet/broadcom/bnxt/bnxt_devlink.c index 10b842539b08..ae1bdda56d22 100644 --- a/drivers/net/ethernet/broadcom/bnxt/bnxt_devlink.c +++ b/drivers/net/ethernet/broadcom/bnxt/bnxt_devlink.c @@ -739,7 +739,7 @@ static int bnxt_hwrm_get_nvm_cfg_ver(struct bnxt *bp, u32 *nvm_cfg_ver) } /* 
earlier devices present as an array of raw bytes */ - if (!BNXT_CHIP_P5(bp)) { + if (!BNXT_CHIP_P5_PLUS(bp)) { dim = 0; i = 0; bits *= 3; /* array of 3 version components */ @@ -759,7 +759,7 @@ static int bnxt_hwrm_get_nvm_cfg_ver(struct bnxt *bp, u32 *nvm_cfg_ver) goto exit; bnxt_copy_from_nvm_data(&ver, data, bits, bytes); - if (BNXT_CHIP_P5(bp)) { + if (BNXT_CHIP_P5_PLUS(bp)) { *nvm_cfg_ver <<= 8; *nvm_cfg_ver |= ver.vu8; } else { @@ -779,7 +779,7 @@ static int bnxt_dl_info_put(struct bnxt *bp, struct devlink_info_req *req, if (!strlen(buf)) return 0; - if ((bp->flags & BNXT_FLAG_CHIP_P5) && + if ((bp->flags & BNXT_FLAG_CHIP_P5_PLUS) && (!strcmp(key, DEVLINK_INFO_VERSION_GENERIC_FW_NCSI) || !strcmp(key, DEVLINK_INFO_VERSION_GENERIC_FW_ROCE))) return 0; @@ -1005,7 +1005,7 @@ static int bnxt_dl_info_get(struct devlink *dl, struct devlink_info_req *req, if (rc) return rc; - if (BNXT_CHIP_P5(bp)) { + if (BNXT_CHIP_P5_PLUS(bp)) { rc = bnxt_dl_livepatch_info_put(bp, req, BNXT_FW_SRT_PATCH); if (rc) return rc; diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt_ethtool.c b/drivers/net/ethernet/broadcom/bnxt/bnxt_ethtool.c index 585044310141..b0cea5b600cc 100644 --- a/drivers/net/ethernet/broadcom/bnxt/bnxt_ethtool.c +++ b/drivers/net/ethernet/broadcom/bnxt/bnxt_ethtool.c @@ -511,7 +511,7 @@ static int bnxt_get_num_tpa_ring_stats(struct bnxt *bp) { if (BNXT_SUPPORTS_TPA(bp)) { if (bp->max_tpa_v2) { - if (BNXT_CHIP_P5_THOR(bp)) + if (BNXT_CHIP_P5(bp)) return BNXT_NUM_TPA_RING_STATS_P5; return BNXT_NUM_TPA_RING_STATS_P5_SR2; } @@ -1322,7 +1322,7 @@ u32 bnxt_get_rxfh_indir_size(struct net_device *dev) { struct bnxt *bp = netdev_priv(dev); - if (bp->flags & BNXT_FLAG_CHIP_P5) + if (bp->flags & BNXT_FLAG_CHIP_P5_PLUS) return ALIGN(bp->rx_nr_rings, BNXT_RSS_TABLE_ENTRIES_P5); return HW_HASH_INDEX_SIZE; } @@ -3943,7 +3943,7 @@ static int bnxt_run_loopback(struct bnxt *bp) int rc; cpr = &rxr->bnapi->cp_ring; - if (bp->flags & BNXT_FLAG_CHIP_P5) + if (bp->flags & BNXT_FLAG_CHIP_P5_PLUS) cpr = rxr->rx_cpr; pkt_size = min(bp->dev->mtu + ETH_HLEN, bp->rx_copy_thresh); skb = netdev_alloc_skb(bp->dev, pkt_size); diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt_ptp.c b/drivers/net/ethernet/broadcom/bnxt/bnxt_ptp.c index f3886710e778..a1ec39b46518 100644 --- a/drivers/net/ethernet/broadcom/bnxt/bnxt_ptp.c +++ b/drivers/net/ethernet/broadcom/bnxt/bnxt_ptp.c @@ -650,7 +650,7 @@ static int bnxt_map_ptp_regs(struct bnxt *bp) int rc, i; reg_arr = ptp->refclk_regs; - if (bp->flags & BNXT_FLAG_CHIP_P5) { + if (BNXT_CHIP_P5(bp)) { rc = bnxt_map_regs(bp, reg_arr, 2, BNXT_PTP_GRC_WIN); if (rc) return rc; @@ -967,7 +967,7 @@ int bnxt_ptp_init(struct bnxt *bp, bool phc_cfg) rc = err; goto out; } - if (bp->flags & BNXT_FLAG_CHIP_P5) { + if (BNXT_CHIP_P5(bp)) { spin_lock_bh(&ptp->ptp_lock); bnxt_refclk_read(bp, NULL, &ptp->current_time); WRITE_ONCE(ptp->old_time, ptp->current_time); diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt_sriov.c b/drivers/net/ethernet/broadcom/bnxt/bnxt_sriov.c index c722b3b41730..175192ebaa77 100644 --- a/drivers/net/ethernet/broadcom/bnxt/bnxt_sriov.c +++ b/drivers/net/ethernet/broadcom/bnxt/bnxt_sriov.c @@ -536,7 +536,7 @@ static int bnxt_hwrm_func_vf_resc_cfg(struct bnxt *bp, int num_vfs, bool reset) if (rc) return rc; - if (bp->flags & BNXT_FLAG_CHIP_P5) { + if (bp->flags & BNXT_FLAG_CHIP_P5_PLUS) { vf_msix = hw_resc->max_nqs - bnxt_nq_rings_in_use(bp); vf_ring_grps = 0; } else { @@ -565,7 +565,7 @@ static int bnxt_hwrm_func_vf_resc_cfg(struct bnxt *bp, int num_vfs, bool reset) 
req->min_l2_ctxs = cpu_to_le16(min); req->min_vnics = cpu_to_le16(min); req->min_stat_ctx = cpu_to_le16(min); - if (!(bp->flags & BNXT_FLAG_CHIP_P5)) + if (!(bp->flags & BNXT_FLAG_CHIP_P5_PLUS)) req->min_hw_ring_grps = cpu_to_le16(min); } else { vf_cp_rings /= num_vfs; @@ -602,7 +602,7 @@ static int bnxt_hwrm_func_vf_resc_cfg(struct bnxt *bp, int num_vfs, bool reset) req->max_stat_ctx = cpu_to_le16(vf_stat_ctx); req->max_hw_ring_grps = cpu_to_le16(vf_ring_grps); req->max_rsscos_ctx = cpu_to_le16(vf_rss); - if (bp->flags & BNXT_FLAG_CHIP_P5) + if (bp->flags & BNXT_FLAG_CHIP_P5_PLUS) req->max_msix = cpu_to_le16(vf_msix / num_vfs); hwrm_req_hold(bp, req); @@ -630,7 +630,7 @@ static int bnxt_hwrm_func_vf_resc_cfg(struct bnxt *bp, int num_vfs, bool reset) le16_to_cpu(req->min_rsscos_ctx) * n; hw_resc->max_stat_ctxs -= le16_to_cpu(req->min_stat_ctx) * n; hw_resc->max_vnics -= le16_to_cpu(req->min_vnics) * n; - if (bp->flags & BNXT_FLAG_CHIP_P5) + if (bp->flags & BNXT_FLAG_CHIP_P5_PLUS) hw_resc->max_nqs -= vf_msix; rc = pf->active_vfs; diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt_ulp.c b/drivers/net/ethernet/broadcom/bnxt/bnxt_ulp.c index 6ba2b9398633..e89731492f5e 100644 --- a/drivers/net/ethernet/broadcom/bnxt/bnxt_ulp.c +++ b/drivers/net/ethernet/broadcom/bnxt/bnxt_ulp.c @@ -42,7 +42,7 @@ static void bnxt_fill_msix_vecs(struct bnxt *bp, struct bnxt_msix_entry *ent) for (i = 0; i < num_msix; i++) { ent[i].vector = bp->irq_tbl[idx + i].vector; ent[i].ring_idx = idx + i; - if (bp->flags & BNXT_FLAG_CHIP_P5) { + if (bp->flags & BNXT_FLAG_CHIP_P5_PLUS) { ent[i].db_offset = DB_PF_OFFSET_P5; if (BNXT_VF(bp)) ent[i].db_offset = DB_VF_OFFSET_P5;