From patchwork Wed Apr 19 06:20:17 2023
X-Patchwork-Submitter: Sai Krishna Gajula
X-Patchwork-Id: 13216423
X-Patchwork-Delegate: kuba@kernel.org
From: Sai Krishna
Cc: Ratheesh Kannoth, Sai Krishna
Subject: [net PATCH v3 09/10] octeontx2-af: Skip PFs if not enabled
Date: Wed, 19 Apr 2023 11:50:17 +0530
Message-ID: <20230419062018.286136-10-saikrishnag@marvell.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20230419062018.286136-1-saikrishnag@marvell.com>
References: <20230419062018.286136-1-saikrishnag@marvell.com>
X-Mailing-List: netdev@vger.kernel.org

From: Ratheesh Kannoth

Skip mbox initialization of disabled PFs. Firmware configures the PFs and
allocates mbox and other resources for them.
Linux should configure only those PFs that firmware has enabled.

Fixes: 9bdc47a6e328 ("octeontx2-af: Mbox communication support btw AF and it's VFs")
Signed-off-by: Ratheesh Kannoth
Signed-off-by: Sunil Kovvuri Goutham
Signed-off-by: Sai Krishna
---
 .../net/ethernet/marvell/octeontx2/af/mbox.c  |  5 ++-
 .../net/ethernet/marvell/octeontx2/af/mbox.h  |  3 +-
 .../net/ethernet/marvell/octeontx2/af/rvu.c   | 40 ++++++++++++++++---
 3 files changed, 41 insertions(+), 7 deletions(-)

diff --git a/drivers/net/ethernet/marvell/octeontx2/af/mbox.c b/drivers/net/ethernet/marvell/octeontx2/af/mbox.c
index 2898931d5260..9690ac01f02c 100644
--- a/drivers/net/ethernet/marvell/octeontx2/af/mbox.c
+++ b/drivers/net/ethernet/marvell/octeontx2/af/mbox.c
@@ -157,7 +157,7 @@ EXPORT_SYMBOL(otx2_mbox_init);
  */
 int otx2_mbox_regions_init(struct otx2_mbox *mbox, void **hwbase,
 			   struct pci_dev *pdev, void *reg_base,
-			   int direction, int ndevs)
+			   int direction, int ndevs, unsigned long *pf_bmap)
 {
 	struct otx2_mbox_dev *mdev;
 	int devid, err;
@@ -169,6 +169,9 @@ int otx2_mbox_regions_init(struct otx2_mbox *mbox, void **hwbase,
 	mbox->hwbase = hwbase[0];
 
 	for (devid = 0; devid < ndevs; devid++) {
+		if (!test_bit(devid, pf_bmap))
+			continue;
+
 		mdev = &mbox->dev[devid];
 		mdev->mbase = hwbase[devid];
 		mdev->hwbase = hwbase[devid];
diff --git a/drivers/net/ethernet/marvell/octeontx2/af/mbox.h b/drivers/net/ethernet/marvell/octeontx2/af/mbox.h
index 0ce533848536..26636a4d7dcc 100644
--- a/drivers/net/ethernet/marvell/octeontx2/af/mbox.h
+++ b/drivers/net/ethernet/marvell/octeontx2/af/mbox.h
@@ -96,9 +96,10 @@ void otx2_mbox_destroy(struct otx2_mbox *mbox);
 int otx2_mbox_init(struct otx2_mbox *mbox, void __force *hwbase,
 		   struct pci_dev *pdev, void __force *reg_base,
 		   int direction, int ndevs);
+
 int otx2_mbox_regions_init(struct otx2_mbox *mbox, void __force **hwbase,
 			   struct pci_dev *pdev, void __force *reg_base,
-			   int direction, int ndevs);
+			   int direction, int ndevs, unsigned long *bmap);
 void otx2_mbox_msg_send(struct otx2_mbox *mbox, int devid);
 int otx2_mbox_wait_for_rsp(struct otx2_mbox *mbox, int devid);
 int otx2_mbox_busy_poll_for_rsp(struct otx2_mbox *mbox, int devid);
diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu.c b/drivers/net/ethernet/marvell/octeontx2/af/rvu.c
index 8683ce57ed3f..7b54756902cf 100644
--- a/drivers/net/ethernet/marvell/octeontx2/af/rvu.c
+++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu.c
@@ -2282,7 +2282,7 @@ static inline void rvu_afvf_mbox_up_handler(struct work_struct *work)
 }
 
 static int rvu_get_mbox_regions(struct rvu *rvu, void **mbox_addr,
-				int num, int type)
+				int num, int type, unsigned long *pf_bmap)
 {
 	struct rvu_hwinfo *hw = rvu->hw;
 	int region;
@@ -2294,6 +2294,9 @@ static int rvu_get_mbox_regions(struct rvu *rvu, void **mbox_addr,
 	 */
 	if (type == TYPE_AFVF) {
 		for (region = 0; region < num; region++) {
+			if (!test_bit(region, pf_bmap))
+				continue;
+
 			if (hw->cap.per_pf_mbox_regs) {
 				bar4 = rvu_read64(rvu, BLKADDR_RVUM,
 						  RVU_AF_PFX_BAR4_ADDR(0)) +
@@ -2315,6 +2318,9 @@ static int rvu_get_mbox_regions(struct rvu *rvu, void **mbox_addr,
 	 * RVU_AF_PF_BAR4_ADDR register.
 	 */
 	for (region = 0; region < num; region++) {
+		if (!test_bit(region, pf_bmap))
+			continue;
+
 		if (hw->cap.per_pf_mbox_regs) {
 			bar4 = rvu_read64(rvu, BLKADDR_RVUM,
 					  RVU_AF_PFX_BAR4_ADDR(region));
@@ -2343,8 +2349,27 @@ static int rvu_mbox_init(struct rvu *rvu, struct mbox_wq_info *mw,
 	int err = -EINVAL, i, dir, dir_up;
 	void __iomem *reg_base;
 	struct rvu_work *mwork;
+	unsigned long *pf_bmap;
 	void **mbox_regions;
 	const char *name;
+	u64 cfg;
+
+	pf_bmap = kcalloc(BITS_TO_LONGS(num), sizeof(long), GFP_KERNEL);
+	if (!pf_bmap)
+		return -ENOMEM;
+
+	/* RVU VFs */
+	if (type == TYPE_AFVF)
+		bitmap_set(pf_bmap, 0, num);
+
+	if (type == TYPE_AFPF) {
+		/* Mark enabled PFs in bitmap */
+		for (i = 0; i < num; i++) {
+			cfg = rvu_read64(rvu, BLKADDR_RVUM, RVU_PRIV_PFX_CFG(i));
+			if (cfg & BIT_ULL(20))
+				set_bit(i, pf_bmap);
+		}
+	}
 
 	mbox_regions = kcalloc(num, sizeof(void *), GFP_KERNEL);
 	if (!mbox_regions)
@@ -2356,7 +2381,7 @@ static int rvu_mbox_init(struct rvu *rvu, struct mbox_wq_info *mw,
 		dir = MBOX_DIR_AFPF;
 		dir_up = MBOX_DIR_AFPF_UP;
 		reg_base = rvu->afreg_base;
-		err = rvu_get_mbox_regions(rvu, mbox_regions, num, TYPE_AFPF);
+		err = rvu_get_mbox_regions(rvu, mbox_regions, num, TYPE_AFPF, pf_bmap);
 		if (err)
 			goto free_regions;
 		break;
@@ -2365,7 +2390,7 @@ static int rvu_mbox_init(struct rvu *rvu, struct mbox_wq_info *mw,
 		dir = MBOX_DIR_PFVF;
 		dir_up = MBOX_DIR_PFVF_UP;
 		reg_base = rvu->pfreg_base;
-		err = rvu_get_mbox_regions(rvu, mbox_regions, num, TYPE_AFVF);
+		err = rvu_get_mbox_regions(rvu, mbox_regions, num, TYPE_AFVF, pf_bmap);
 		if (err)
 			goto free_regions;
 		break;
@@ -2396,16 +2421,19 @@ static int rvu_mbox_init(struct rvu *rvu, struct mbox_wq_info *mw,
 	}
 
 	err = otx2_mbox_regions_init(&mw->mbox, mbox_regions, rvu->pdev,
-				     reg_base, dir, num);
+				     reg_base, dir, num, pf_bmap);
 	if (err)
 		goto exit;
 
 	err = otx2_mbox_regions_init(&mw->mbox_up, mbox_regions, rvu->pdev,
-				     reg_base, dir_up, num);
+				     reg_base, dir_up, num, pf_bmap);
 	if (err)
 		goto exit;
 
 	for (i = 0; i < num; i++) {
+		if (!test_bit(i, pf_bmap))
+			continue;
+
 		mwork = &mw->mbox_wrk[i];
 		mwork->rvu = rvu;
 		INIT_WORK(&mwork->work, mbox_handler);
@@ -2415,6 +2443,7 @@ static int rvu_mbox_init(struct rvu *rvu, struct mbox_wq_info *mw,
 		INIT_WORK(&mwork->work, mbox_up_handler);
 	}
 	kfree(mbox_regions);
+	kfree(pf_bmap);
 	return 0;
 
 exit:
@@ -2424,6 +2453,7 @@ static int rvu_mbox_init(struct rvu *rvu, struct mbox_wq_info *mw,
 		iounmap((void __iomem *)mbox_regions[num]);
 free_regions:
 	kfree(mbox_regions);
+	kfree(pf_bmap);
 	return err;
 }
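
The core of the change is the enabled-PF bitmap built in rvu_mbox_init(): read
RVU_PRIV_PFX_CFG for each PF, set a bit for every PF whose enable flag
(BIT_ULL(20)) is set, and have every later loop skip PFs whose bit is clear.
The standalone userspace sketch below only illustrates that pattern outside the
kernel; read_pf_cfg(), set_bit_ul(), test_bit_ul() and the simulated register
values are hypothetical stand-ins for rvu_read64(), set_bit() and test_bit(),
not part of the driver.

/*
 * Standalone userspace sketch (not kernel code) of the enabled-PF bitmap
 * pattern used by this patch: mark enabled devices in a bitmap once, then
 * have every later loop skip the disabled ones. Register reads are
 * simulated; bit 20 mirrors the enable flag the patch checks in
 * RVU_PRIV_PFX_CFG.
 */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

#define PF_ENA_BIT	(1ULL << 20)	/* enable flag, as in the patch */

/* Simulated RVU_PRIV_PFX_CFG read: here only even-numbered PFs are enabled. */
static uint64_t read_pf_cfg(int pf)
{
	return (pf % 2 == 0) ? PF_ENA_BIT : 0;
}

static void set_bit_ul(unsigned long *bmap, int n)
{
	bmap[n / (8 * sizeof(long))] |= 1UL << (n % (8 * sizeof(long)));
}

static bool test_bit_ul(const unsigned long *bmap, int n)
{
	return bmap[n / (8 * sizeof(long))] & (1UL << (n % (8 * sizeof(long))));
}

int main(void)
{
	const int num_pfs = 8;
	int words = (num_pfs + 8 * sizeof(long) - 1) / (8 * sizeof(long));
	unsigned long *pf_bmap = calloc(words, sizeof(long));
	int pf;

	if (!pf_bmap)
		return 1;

	/* Mark enabled PFs in the bitmap (mirrors rvu_mbox_init()). */
	for (pf = 0; pf < num_pfs; pf++)
		if (read_pf_cfg(pf) & PF_ENA_BIT)
			set_bit_ul(pf_bmap, pf);

	/* Later loops skip PFs that firmware left disabled. */
	for (pf = 0; pf < num_pfs; pf++) {
		if (!test_bit_ul(pf_bmap, pf))
			continue;
		printf("init mbox for PF%d\n", pf);
	}

	free(pf_bmap);
	return 0;
}

Built with a plain C compiler, the sketch prints an "init mbox" line only for
the PFs its simulated config marks as enabled, which is the behaviour the
patch gives the AF driver for firmware-disabled PFs.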