From patchwork Tue Jun 14 20:28:54 2022
X-Patchwork-Submitter: Haiyang Zhang
X-Patchwork-Id: 12881578
X-Patchwork-Delegate: kuba@kernel.org
From: Haiyang Zhang
To: linux-hyperv@vger.kernel.org, netdev@vger.kernel.org
Cc: haiyangz@microsoft.com, decui@microsoft.com, kys@microsoft.com,
	sthemmin@microsoft.com, paulros@microsoft.com, shacharr@microsoft.com,
	olaf@aepfle.de, vkuznets@redhat.com, davem@davemloft.net,
	linux-kernel@vger.kernel.org
Subject: [PATCH net-next,v2,1/2] net: mana: Add the Linux MANA PF driver
Date: Tue, 14 Jun 2022 13:28:54 -0700
Message-Id: <1655238535-19257-2-git-send-email-haiyangz@microsoft.com>
X-Mailer: git-send-email 1.8.3.1
In-Reply-To: <1655238535-19257-1-git-send-email-haiyangz@microsoft.com>
References: <1655238535-19257-1-git-send-email-haiyangz@microsoft.com>
X-Mailing-List: netdev@vger.kernel.org

From: Dexuan Cui

This minimal PF driver runs on bare metal.
Currently Ethernet TX/RX works. SR-IOV management is not supported yet.
Signed-off-by: Dexuan Cui
Co-developed-by: Haiyang Zhang
Signed-off-by: Haiyang Zhang
---
 drivers/net/ethernet/microsoft/mana/gdma.h    |  10 ++
 .../net/ethernet/microsoft/mana/gdma_main.c   |  39 ++++-
 .../net/ethernet/microsoft/mana/hw_channel.c  |  18 ++-
 .../net/ethernet/microsoft/mana/hw_channel.h  |   5 +
 drivers/net/ethernet/microsoft/mana/mana.h    |  64 +++++++++
 drivers/net/ethernet/microsoft/mana/mana_en.c | 135 ++++++++++++++++++
 6 files changed, 267 insertions(+), 4 deletions(-)

diff --git a/drivers/net/ethernet/microsoft/mana/gdma.h b/drivers/net/ethernet/microsoft/mana/gdma.h
index 41ecd156e95f..4a6efe6ada08 100644
--- a/drivers/net/ethernet/microsoft/mana/gdma.h
+++ b/drivers/net/ethernet/microsoft/mana/gdma.h
@@ -348,6 +348,7 @@ struct gdma_context {
 	struct completion	eq_test_event;
 	u32			test_event_eq_id;
 
+	bool			is_pf;
 	void __iomem		*bar0_va;
 	void __iomem		*shm_base;
 	void __iomem		*db_page_base;
@@ -469,6 +470,15 @@ struct gdma_eqe {
 #define GDMA_REG_DB_PAGE_SIZE	0x10
 #define GDMA_REG_SHM_OFFSET	0x18
 
+#define GDMA_PF_REG_DB_PAGE_SIZE	0xD0
+#define GDMA_PF_REG_DB_PAGE_OFF		0xC8
+#define GDMA_PF_REG_SHM_OFF		0x70
+
+#define GDMA_SRIOV_REG_CFG_BASE_OFF	0x108
+
+#define MANA_PF_DEVICE_ID 0x00B9
+#define MANA_VF_DEVICE_ID 0x00BA
+
 struct gdma_posted_wqe_info {
 	u32 wqe_size_in_bu;
 };
diff --git a/drivers/net/ethernet/microsoft/mana/gdma_main.c b/drivers/net/ethernet/microsoft/mana/gdma_main.c
index 49b85ca578b0..5f9240182351 100644
--- a/drivers/net/ethernet/microsoft/mana/gdma_main.c
+++ b/drivers/net/ethernet/microsoft/mana/gdma_main.c
@@ -18,7 +18,24 @@ static u64 mana_gd_r64(struct gdma_context *g, u64 offset)
 	return readq(g->bar0_va + offset);
 }
 
-static void mana_gd_init_registers(struct pci_dev *pdev)
+static void mana_gd_init_pf_regs(struct pci_dev *pdev)
+{
+	struct gdma_context *gc = pci_get_drvdata(pdev);
+	void __iomem *sriov_base_va;
+	u64 sriov_base_off;
+
+	gc->db_page_size = mana_gd_r32(gc, GDMA_PF_REG_DB_PAGE_SIZE) & 0xFFFF;
+	gc->db_page_base = gc->bar0_va +
+				mana_gd_r64(gc, GDMA_PF_REG_DB_PAGE_OFF);
+
+	sriov_base_off = mana_gd_r64(gc, GDMA_SRIOV_REG_CFG_BASE_OFF);
+
+	sriov_base_va = gc->bar0_va + sriov_base_off;
+	gc->shm_base = sriov_base_va +
+			mana_gd_r64(gc, sriov_base_off + GDMA_PF_REG_SHM_OFF);
+}
+
+static void mana_gd_init_vf_regs(struct pci_dev *pdev)
 {
 	struct gdma_context *gc = pci_get_drvdata(pdev);
 
@@ -30,6 +47,16 @@ static void mana_gd_init_registers(struct pci_dev *pdev)
 	gc->shm_base = gc->bar0_va + mana_gd_r64(gc, GDMA_REG_SHM_OFFSET);
 }
 
+static void mana_gd_init_registers(struct pci_dev *pdev)
+{
+	struct gdma_context *gc = pci_get_drvdata(pdev);
+
+	if (gc->is_pf)
+		mana_gd_init_pf_regs(pdev);
+	else
+		mana_gd_init_vf_regs(pdev);
+}
+
 static int mana_gd_query_max_resources(struct pci_dev *pdev)
 {
 	struct gdma_context *gc = pci_get_drvdata(pdev);
@@ -1304,6 +1331,11 @@ static void mana_gd_cleanup(struct pci_dev *pdev)
 	mana_gd_remove_irqs(pdev);
 }
 
+static bool mana_is_pf(unsigned short dev_id)
+{
+	return dev_id == MANA_PF_DEVICE_ID;
+}
+
 static int mana_gd_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
 {
 	struct gdma_context *gc;
@@ -1340,10 +1372,10 @@ static int mana_gd_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
 	if (!bar0_va)
 		goto free_gc;
 
+	gc->is_pf = mana_is_pf(pdev->device);
 	gc->bar0_va = bar0_va;
 	gc->dev = &pdev->dev;
 
-
 	err = mana_gd_setup(pdev);
 	if (err)
 		goto unmap_bar;
@@ -1438,7 +1470,8 @@ static void mana_gd_shutdown(struct pci_dev *pdev)
 #endif
 
 static const struct pci_device_id mana_id_table[] = {
-	{ PCI_DEVICE(PCI_VENDOR_ID_MICROSOFT, 0x00BA) },
+	{ PCI_DEVICE(PCI_VENDOR_ID_MICROSOFT, MANA_PF_DEVICE_ID) },
+	{ PCI_DEVICE(PCI_VENDOR_ID_MICROSOFT, MANA_VF_DEVICE_ID) },
 	{ }
 };
 
diff --git a/drivers/net/ethernet/microsoft/mana/hw_channel.c b/drivers/net/ethernet/microsoft/mana/hw_channel.c
index 078d6a5a0768..543a5d5c304f 100644
--- a/drivers/net/ethernet/microsoft/mana/hw_channel.c
+++ b/drivers/net/ethernet/microsoft/mana/hw_channel.c
@@ -158,6 +158,14 @@ static void mana_hwc_init_event_handler(void *ctx, struct gdma_queue *q_self,
 			hwc->rxq->msg_buf->gpa_mkey = val;
 			hwc->txq->msg_buf->gpa_mkey = val;
 			break;
+
+		case HWC_INIT_DATA_PF_DEST_RQ_ID:
+			hwc->pf_dest_vrq_id = val;
+			break;
+
+		case HWC_INIT_DATA_PF_DEST_CQ_ID:
+			hwc->pf_dest_vrcq_id = val;
+			break;
 		}
 
 		break;
@@ -773,10 +781,13 @@ void mana_hwc_destroy_channel(struct gdma_context *gc)
 
 int mana_hwc_send_request(struct hw_channel_context *hwc, u32 req_len,
 			  const void *req, u32 resp_len, void *resp)
 {
+	struct gdma_context *gc = hwc->gdma_dev->gdma_context;
 	struct hwc_work_request *tx_wr;
 	struct hwc_wq *txq = hwc->txq;
 	struct gdma_req_hdr *req_msg;
 	struct hwc_caller_ctx *ctx;
+	u32 dest_vrcq = 0;
+	u32 dest_vrq = 0;
 	u16 msg_id;
 	int err;
 
@@ -803,7 +814,12 @@ int mana_hwc_send_request(struct hw_channel_context *hwc, u32 req_len,
 
 	tx_wr->msg_size = req_len;
 
-	err = mana_hwc_post_tx_wqe(txq, tx_wr, 0, 0, false);
+	if (gc->is_pf) {
+		dest_vrq = hwc->pf_dest_vrq_id;
+		dest_vrcq = hwc->pf_dest_vrcq_id;
+	}
+
+	err = mana_hwc_post_tx_wqe(txq, tx_wr, dest_vrq, dest_vrcq, false);
 	if (err) {
 		dev_err(hwc->dev, "HWC: Failed to post send WQE: %d\n", err);
 		goto out;
diff --git a/drivers/net/ethernet/microsoft/mana/hw_channel.h b/drivers/net/ethernet/microsoft/mana/hw_channel.h
index 31c6e83c454a..6a757a6e2732 100644
--- a/drivers/net/ethernet/microsoft/mana/hw_channel.h
+++ b/drivers/net/ethernet/microsoft/mana/hw_channel.h
@@ -20,6 +20,8 @@
 #define HWC_INIT_DATA_MAX_NUM_CQS	7
 #define HWC_INIT_DATA_PDID		8
 #define HWC_INIT_DATA_GPA_MKEY		9
+#define HWC_INIT_DATA_PF_DEST_RQ_ID	10
+#define HWC_INIT_DATA_PF_DEST_CQ_ID	11
 
 /* Structures labeled with "HW DATA" are exchanged with the hardware. All of
  * them are naturally aligned and hence don't need __packed.
@@ -178,6 +180,9 @@ struct hw_channel_context {
 	struct semaphore sema;
 	struct gdma_resource inflight_msg_res;
 
+	u32 pf_dest_vrq_id;
+	u32 pf_dest_vrcq_id;
+
 	struct hwc_caller_ctx *caller_ctx;
 };
 
diff --git a/drivers/net/ethernet/microsoft/mana/mana.h b/drivers/net/ethernet/microsoft/mana/mana.h
index d36405af9432..f198b34c232f 100644
--- a/drivers/net/ethernet/microsoft/mana/mana.h
+++ b/drivers/net/ethernet/microsoft/mana/mana.h
@@ -374,6 +374,7 @@ struct mana_port_context {
 	unsigned int num_queues;
 
 	mana_handle_t port_handle;
+	mana_handle_t pf_filter_handle;
 
 	u16 port_idx;
 
@@ -420,6 +421,12 @@ enum mana_command_code {
 	MANA_FENCE_RQ		= 0x20006,
 	MANA_CONFIG_VPORT_RX	= 0x20007,
 	MANA_QUERY_VPORT_CONFIG	= 0x20008,
+
+	/* Privileged commands for the PF mode */
+	MANA_REGISTER_FILTER	= 0x28000,
+	MANA_DEREGISTER_FILTER	= 0x28001,
+	MANA_REGISTER_HW_PORT	= 0x28003,
+	MANA_DEREGISTER_HW_PORT	= 0x28004,
 };
 
 /* Query Device Configuration */
@@ -547,6 +554,63 @@ struct mana_cfg_rx_steer_resp {
 	struct gdma_resp_hdr hdr;
 }; /* HW DATA */
 
+/* Register HW vPort */
+struct mana_register_hw_vport_req {
+	struct gdma_req_hdr hdr;
+	u16 attached_gfid;
+	u8 is_pf_default_vport;
+	u8 reserved1;
+	u8 allow_all_ether_types;
+	u8 reserved2;
+	u8 reserved3;
+	u8 reserved4;
+}; /* HW DATA */
+
+struct mana_register_hw_vport_resp {
+	struct gdma_resp_hdr hdr;
+	mana_handle_t hw_vport_handle;
+}; /* HW DATA */
+
+/* Deregister HW vPort */
+struct mana_deregister_hw_vport_req {
+	struct gdma_req_hdr hdr;
+	mana_handle_t hw_vport_handle;
+}; /* HW DATA */
+
+struct mana_deregister_hw_vport_resp {
+	struct gdma_resp_hdr hdr;
+}; /* HW DATA */
+
+/* Register filter */
+struct mana_register_filter_req {
+	struct gdma_req_hdr hdr;
+	mana_handle_t vport;
+	u8 mac_addr[6];
+	u8 reserved1;
+	u8 reserved2;
+	u8 reserved3;
+	u8 reserved4;
+	u16 reserved5;
+	u32 reserved6;
+	u32 reserved7;
+	u32 reserved8;
+}; /* HW DATA */
+
+struct mana_register_filter_resp {
+	struct gdma_resp_hdr hdr;
+	mana_handle_t filter_handle;
+}; /* HW DATA */
+
+/* Deregister filter */
+struct mana_deregister_filter_req {
+	struct gdma_req_hdr hdr;
+	mana_handle_t filter_handle;
+}; /* HW DATA */
+
+struct mana_deregister_filter_resp {
+	struct gdma_resp_hdr hdr;
+}; /* HW DATA */
+
 #define MANA_MAX_NUM_QUEUES 64
 
 #define MANA_SHORT_VPORT_OFFSET_MAX ((1U << 8) - 1)
diff --git a/drivers/net/ethernet/microsoft/mana/mana_en.c b/drivers/net/ethernet/microsoft/mana/mana_en.c
index b1d773823232..3ef09e0cdbaa 100644
--- a/drivers/net/ethernet/microsoft/mana/mana_en.c
+++ b/drivers/net/ethernet/microsoft/mana/mana_en.c
@@ -446,6 +446,119 @@ static int mana_verify_resp_hdr(const struct gdma_resp_hdr *resp_hdr,
 	return 0;
 }
 
+static int mana_pf_register_hw_vport(struct mana_port_context *apc)
+{
+	struct mana_register_hw_vport_resp resp = {};
+	struct mana_register_hw_vport_req req = {};
+	int err;
+
+	mana_gd_init_req_hdr(&req.hdr, MANA_REGISTER_HW_PORT,
+			     sizeof(req), sizeof(resp));
+	req.attached_gfid = 1;
+	req.is_pf_default_vport = 1;
+	req.allow_all_ether_types = 1;
+
+	err = mana_send_request(apc->ac, &req, sizeof(req), &resp,
+				sizeof(resp));
+	if (err) {
+		netdev_err(apc->ndev, "Failed to register hw vPort: %d\n", err);
+		return err;
+	}
+
+	err = mana_verify_resp_hdr(&resp.hdr, MANA_REGISTER_HW_PORT,
+				   sizeof(resp));
+	if (err || resp.hdr.status) {
+		netdev_err(apc->ndev, "Failed to register hw vPort: %d, 0x%x\n",
+			   err, resp.hdr.status);
+		return err ? err : -EPROTO;
+	}
+
+	apc->port_handle = resp.hw_vport_handle;
+	return 0;
+}
+
+static void mana_pf_deregister_hw_vport(struct mana_port_context *apc)
+{
+	struct mana_deregister_hw_vport_resp resp = {};
+	struct mana_deregister_hw_vport_req req = {};
+	int err;
+
+	mana_gd_init_req_hdr(&req.hdr, MANA_DEREGISTER_HW_PORT,
+			     sizeof(req), sizeof(resp));
+	req.hw_vport_handle = apc->port_handle;
+
+	err = mana_send_request(apc->ac, &req, sizeof(req), &resp,
+				sizeof(resp));
+	if (err) {
+		netdev_err(apc->ndev, "Failed to unregister hw vPort: %d\n",
+			   err);
+		return;
+	}
+
+	err = mana_verify_resp_hdr(&resp.hdr, MANA_DEREGISTER_HW_PORT,
+				   sizeof(resp));
+	if (err || resp.hdr.status)
+		netdev_err(apc->ndev,
+			   "Failed to deregister hw vPort: %d, 0x%x\n",
+			   err, resp.hdr.status);
+}
+
+static int mana_pf_register_filter(struct mana_port_context *apc)
+{
+	struct mana_register_filter_resp resp = {};
+	struct mana_register_filter_req req = {};
+	int err;
+
+	mana_gd_init_req_hdr(&req.hdr, MANA_REGISTER_FILTER,
+			     sizeof(req), sizeof(resp));
+	req.vport = apc->port_handle;
+	memcpy(req.mac_addr, apc->mac_addr, ETH_ALEN);
+
+	err = mana_send_request(apc->ac, &req, sizeof(req), &resp,
+				sizeof(resp));
+	if (err) {
+		netdev_err(apc->ndev, "Failed to register filter: %d\n", err);
+		return err;
+	}
+
+	err = mana_verify_resp_hdr(&resp.hdr, MANA_REGISTER_FILTER,
+				   sizeof(resp));
+	if (err || resp.hdr.status) {
+		netdev_err(apc->ndev, "Failed to register filter: %d, 0x%x\n",
+			   err, resp.hdr.status);
+		return err ? err : -EPROTO;
+	}
+
+	apc->pf_filter_handle = resp.filter_handle;
+	return 0;
+}
+
+static void mana_pf_deregister_filter(struct mana_port_context *apc)
+{
+	struct mana_deregister_filter_resp resp = {};
+	struct mana_deregister_filter_req req = {};
+	int err;
+
+	mana_gd_init_req_hdr(&req.hdr, MANA_DEREGISTER_FILTER,
+			     sizeof(req), sizeof(resp));
+	req.filter_handle = apc->pf_filter_handle;
+
+	err = mana_send_request(apc->ac, &req, sizeof(req), &resp,
+				sizeof(resp));
+	if (err) {
+		netdev_err(apc->ndev, "Failed to unregister filter: %d\n",
+			   err);
+		return;
+	}
+
+	err = mana_verify_resp_hdr(&resp.hdr, MANA_DEREGISTER_FILTER,
+				   sizeof(resp));
+	if (err || resp.hdr.status)
+		netdev_err(apc->ndev,
+			   "Failed to deregister filter: %d, 0x%x\n",
+			   err, resp.hdr.status);
+}
+
 static int mana_query_device_cfg(struct mana_context *ac, u32 proto_major_ver,
 				 u32 proto_minor_ver, u32 proto_micro_ver,
 				 u16 *max_num_vports)
@@ -1653,6 +1766,7 @@ static int mana_add_rx_queues(struct mana_port_context *apc,
 
 static void mana_destroy_vport(struct mana_port_context *apc)
 {
+	struct gdma_dev *gd = apc->ac->gdma_dev;
 	struct mana_rxq *rxq;
 	u32 rxq_idx;
 
@@ -1666,6 +1780,9 @@ static void mana_destroy_vport(struct mana_port_context *apc)
 	}
 
 	mana_destroy_txq(apc);
+
+	if (gd->gdma_context->is_pf)
+		mana_pf_deregister_hw_vport(apc);
 }
 
 static int mana_create_vport(struct mana_port_context *apc,
@@ -1676,6 +1793,12 @@ static int mana_create_vport(struct mana_port_context *apc,
 
 	apc->default_rxobj = INVALID_MANA_HANDLE;
 
+	if (gd->gdma_context->is_pf) {
+		err = mana_pf_register_hw_vport(apc);
+		if (err)
+			return err;
+	}
+
 	err = mana_cfg_vport(apc, gd->pdid, gd->doorbell);
 	if (err)
 		return err;
@@ -1755,6 +1878,7 @@ static int mana_init_port(struct net_device *ndev)
 int mana_alloc_queues(struct net_device *ndev)
 {
 	struct mana_port_context *apc = netdev_priv(ndev);
+	struct gdma_dev *gd = apc->ac->gdma_dev;
 	int err;
 
 	err = mana_create_vport(apc, ndev);
@@ -1781,6 +1905,12 @@ int mana_alloc_queues(struct net_device *ndev)
 	if (err)
 		goto destroy_vport;
 
+	if (gd->gdma_context->is_pf) {
+		err = mana_pf_register_filter(apc);
+		if (err)
+			goto destroy_vport;
+	}
+
 	mana_chn_setxdp(apc, mana_xdp_get(apc));
 
 	return 0;
@@ -1825,6 +1955,7 @@ int mana_attach(struct net_device *ndev)
 
 static int mana_dealloc_queues(struct net_device *ndev)
 {
 	struct mana_port_context *apc = netdev_priv(ndev);
+	struct gdma_dev *gd = apc->ac->gdma_dev;
 	struct mana_txq *txq;
 	int i, err;
 
@@ -1833,6 +1964,9 @@ static int mana_dealloc_queues(struct net_device *ndev)
 
 	mana_chn_setxdp(apc, NULL);
 
+	if (gd->gdma_context->is_pf)
+		mana_pf_deregister_filter(apc);
+
 	/* No packet can be transmitted now since apc->port_is_up is false.
 	 * There is still a tiny chance that mana_poll_tx_cq() can re-enable
 	 * a txq because it may not timely see apc->port_is_up being cleared
@@ -1915,6 +2049,7 @@ static int mana_probe_port(struct mana_context *ac, int port_idx,
 	apc->max_queues = gc->max_num_queues;
 	apc->num_queues = gc->max_num_queues;
 	apc->port_handle = INVALID_MANA_HANDLE;
+	apc->pf_filter_handle = INVALID_MANA_HANDLE;
 	apc->port_idx = port_idx;
 
 	ndev->netdev_ops = &mana_devops;

From patchwork Tue Jun 14 20:28:55 2022
X-Patchwork-Submitter: Haiyang Zhang
X-Patchwork-Id: 12881579
X-Patchwork-Delegate: kuba@kernel.org
From: Haiyang Zhang
To: linux-hyperv@vger.kernel.org, netdev@vger.kernel.org
Cc: haiyangz@microsoft.com, decui@microsoft.com, kys@microsoft.com,
	sthemmin@microsoft.com, paulros@microsoft.com, shacharr@microsoft.com,
	olaf@aepfle.de, vkuznets@redhat.com, davem@davemloft.net,
	linux-kernel@vger.kernel.org
Subject: [PATCH net-next,v2,2/2] net: mana: Add support of XDP_REDIRECT action
Date: Tue, 14 Jun 2022 13:28:55 -0700
Message-Id: <1655238535-19257-3-git-send-email-haiyangz@microsoft.com>
X-Mailer: git-send-email 1.8.3.1
In-Reply-To: <1655238535-19257-1-git-send-email-haiyangz@microsoft.com>
References: <1655238535-19257-1-git-send-email-haiyangz@microsoft.com>
X-Mailing-List: netdev@vger.kernel.org

Add a handler of the XDP_REDIRECT return code from an XDP program. The
packets will be flushed at the end of each RX/CQ NAPI poll cycle.
ndo_xdp_xmit() is implemented by sharing the code in mana_xdp_tx().
Ethtool per-queue counters are added for the XDP redirect and xmit
operations.
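A minimal sketch (not part of this patch) of an XDP program that exercises the
new XDP_REDIRECT path: it redirects every packet to the ifindex stored at key 0
of a devmap. The map name "tx_port" and the program/section names are
illustrative assumptions; loading the object and populating the map are left to
userspace (e.g. via libbpf).

#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>

/* One-slot devmap; userspace stores the target ifindex at key 0. */
struct {
	__uint(type, BPF_MAP_TYPE_DEVMAP);
	__uint(max_entries, 1);
	__type(key, __u32);
	__type(value, __u32);
} tx_port SEC(".maps");

SEC("xdp")
int xdp_redirect_sample(struct xdp_md *ctx)
{
	/* bpf_redirect_map() returns XDP_REDIRECT on success; mana_run_xdp()
	 * then calls xdp_do_redirect(), and the RX/CQ NAPI poll loop flushes
	 * the redirected frames with xdp_do_flush().
	 */
	return bpf_redirect_map(&tx_port, 0, 0);
}

char _license[] SEC("license") = "GPL";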
Signed-off-by: Haiyang Zhang
---
 drivers/net/ethernet/microsoft/mana/mana.h    |  6 ++
 .../net/ethernet/microsoft/mana/mana_bpf.c    | 64 +++++++++++++++++++
 drivers/net/ethernet/microsoft/mana/mana_en.c | 13 +++-
 .../ethernet/microsoft/mana/mana_ethtool.c    | 12 +++-
 4 files changed, 93 insertions(+), 2 deletions(-)

diff --git a/drivers/net/ethernet/microsoft/mana/mana.h b/drivers/net/ethernet/microsoft/mana/mana.h
index f198b34c232f..d58be64374c8 100644
--- a/drivers/net/ethernet/microsoft/mana/mana.h
+++ b/drivers/net/ethernet/microsoft/mana/mana.h
@@ -53,12 +53,14 @@ struct mana_stats_rx {
 	u64 bytes;
 	u64 xdp_drop;
 	u64 xdp_tx;
+	u64 xdp_redirect;
 	struct u64_stats_sync syncp;
 };
 
 struct mana_stats_tx {
 	u64 packets;
 	u64 bytes;
+	u64 xdp_xmit;
 	struct u64_stats_sync syncp;
 };
 
@@ -311,6 +313,8 @@ struct mana_rxq {
 	struct bpf_prog __rcu *bpf_prog;
 	struct xdp_rxq_info xdp_rxq;
 	struct page *xdp_save_page;
+	bool xdp_flush;
+	int xdp_rc; /* XDP redirect return code */
 
 	/* MUST BE THE LAST MEMBER:
 	 * Each receive buffer has an associated mana_recv_buf_oob.
@@ -396,6 +400,8 @@ int mana_probe(struct gdma_dev *gd, bool resuming);
 void mana_remove(struct gdma_dev *gd, bool suspending);
 
 void mana_xdp_tx(struct sk_buff *skb, struct net_device *ndev);
+int mana_xdp_xmit(struct net_device *ndev, int n, struct xdp_frame **frames,
+		  u32 flags);
 u32 mana_run_xdp(struct net_device *ndev, struct mana_rxq *rxq,
 		 struct xdp_buff *xdp, void *buf_va, uint pkt_len);
 struct bpf_prog *mana_xdp_get(struct mana_port_context *apc);
diff --git a/drivers/net/ethernet/microsoft/mana/mana_bpf.c b/drivers/net/ethernet/microsoft/mana/mana_bpf.c
index 1d2f948b5c00..421fd39ff3a8 100644
--- a/drivers/net/ethernet/microsoft/mana/mana_bpf.c
+++ b/drivers/net/ethernet/microsoft/mana/mana_bpf.c
@@ -32,9 +32,55 @@ void mana_xdp_tx(struct sk_buff *skb, struct net_device *ndev)
 	ndev->stats.tx_dropped++;
 }
 
+static int mana_xdp_xmit_fm(struct net_device *ndev, struct xdp_frame *frame,
+			    u16 q_idx)
+{
+	struct sk_buff *skb;
+
+	skb = xdp_build_skb_from_frame(frame, ndev);
+	if (unlikely(!skb))
+		return -ENOMEM;
+
+	skb_set_queue_mapping(skb, q_idx);
+
+	mana_xdp_tx(skb, ndev);
+
+	return 0;
+}
+
+int mana_xdp_xmit(struct net_device *ndev, int n, struct xdp_frame **frames,
+		  u32 flags)
+{
+	struct mana_port_context *apc = netdev_priv(ndev);
+	struct mana_stats_tx *tx_stats;
+	int i, count = 0;
+	u16 q_idx;
+
+	if (unlikely(!apc->port_is_up))
+		return 0;
+
+	q_idx = smp_processor_id() % ndev->real_num_tx_queues;
+
+	for (i = 0; i < n; i++) {
+		if (mana_xdp_xmit_fm(ndev, frames[i], q_idx))
+			break;
+
+		count++;
+	}
+
+	tx_stats = &apc->tx_qp[q_idx].txq.stats;
+
+	u64_stats_update_begin(&tx_stats->syncp);
+	tx_stats->xdp_xmit += count;
+	u64_stats_update_end(&tx_stats->syncp);
+
+	return count;
+}
+
 u32 mana_run_xdp(struct net_device *ndev, struct mana_rxq *rxq,
 		 struct xdp_buff *xdp, void *buf_va, uint pkt_len)
 {
+	struct mana_stats_rx *rx_stats;
 	struct bpf_prog *prog;
 	u32 act = XDP_PASS;
 
@@ -49,12 +95,30 @@ u32 mana_run_xdp(struct net_device *ndev, struct mana_rxq *rxq,
 
 	act = bpf_prog_run_xdp(prog, xdp);
 
+	rx_stats = &rxq->stats;
+
 	switch (act) {
 	case XDP_PASS:
 	case XDP_TX:
 	case XDP_DROP:
 		break;
 
+	case XDP_REDIRECT:
+		rxq->xdp_rc = xdp_do_redirect(ndev, xdp, prog);
+		if (!rxq->xdp_rc) {
+			rxq->xdp_flush = true;
+
+			u64_stats_update_begin(&rx_stats->syncp);
+			rx_stats->packets++;
+			rx_stats->bytes += pkt_len;
+			rx_stats->xdp_redirect++;
+			u64_stats_update_end(&rx_stats->syncp);
+
+			break;
+		}
+
+		fallthrough;
+
 	case XDP_ABORTED:
 		trace_xdp_exception(ndev, prog, act);
 		break;
diff --git a/drivers/net/ethernet/microsoft/mana/mana_en.c b/drivers/net/ethernet/microsoft/mana/mana_en.c
index 3ef09e0cdbaa..9259a74eca40 100644
--- a/drivers/net/ethernet/microsoft/mana/mana_en.c
+++ b/drivers/net/ethernet/microsoft/mana/mana_en.c
@@ -6,6 +6,7 @@
 #include <linux/inetdevice.h>
 #include <linux/etherdevice.h>
 #include <linux/ethtool.h>
+#include <linux/filter.h>
 #include <linux/mm.h>
 
 #include <net/checksum.h>
@@ -382,6 +383,7 @@ static const struct net_device_ops mana_devops = {
 	.ndo_validate_addr	= eth_validate_addr,
 	.ndo_get_stats64	= mana_get_stats64,
 	.ndo_bpf		= mana_bpf,
+	.ndo_xdp_xmit		= mana_xdp_xmit,
 };
 
 static void mana_cleanup_port_context(struct mana_port_context *apc)
@@ -1120,6 +1122,9 @@ static void mana_rx_skb(void *buf_va, struct mana_rxcomp_oob *cqe,
 
 	act = mana_run_xdp(ndev, rxq, &xdp, buf_va, pkt_len);
 
+	if (act == XDP_REDIRECT && !rxq->xdp_rc)
+		return;
+
 	if (act != XDP_PASS && act != XDP_TX)
 		goto drop_xdp;
 
@@ -1275,11 +1280,14 @@ static void mana_process_rx_cqe(struct mana_rxq *rxq, struct mana_cq *cq,
 static void mana_poll_rx_cq(struct mana_cq *cq)
 {
 	struct gdma_comp *comp = cq->gdma_comp_buf;
+	struct mana_rxq *rxq = cq->rxq;
 	int comp_read, i;
 
 	comp_read = mana_gd_poll_cq(cq->gdma_cq, comp, CQE_POLLING_BUFFER);
 	WARN_ON_ONCE(comp_read > CQE_POLLING_BUFFER);
 
+	rxq->xdp_flush = false;
+
 	for (i = 0; i < comp_read; i++) {
 		if (WARN_ON_ONCE(comp[i].is_sq))
 			return;
@@ -1288,8 +1296,11 @@ static void mana_poll_rx_cq(struct mana_cq *cq)
 		if (WARN_ON_ONCE(comp[i].wq_num != cq->rxq->gdma_id))
 			return;
 
-		mana_process_rx_cqe(cq->rxq, cq, &comp[i]);
+		mana_process_rx_cqe(rxq, cq, &comp[i]);
 	}
+
+	if (rxq->xdp_flush)
+		xdp_do_flush();
 }
 
 static void mana_cq_handler(void *context, struct gdma_queue *gdma_queue)
diff --git a/drivers/net/ethernet/microsoft/mana/mana_ethtool.c b/drivers/net/ethernet/microsoft/mana/mana_ethtool.c
index e13f2453eabb..c530db76880f 100644
--- a/drivers/net/ethernet/microsoft/mana/mana_ethtool.c
+++ b/drivers/net/ethernet/microsoft/mana/mana_ethtool.c
@@ -23,7 +23,7 @@ static int mana_get_sset_count(struct net_device *ndev, int stringset)
 	if (stringset != ETH_SS_STATS)
 		return -EINVAL;
 
-	return ARRAY_SIZE(mana_eth_stats) + num_queues * 6;
+	return ARRAY_SIZE(mana_eth_stats) + num_queues * 8;
 }
 
 static void mana_get_strings(struct net_device *ndev, u32 stringset, u8 *data)
@@ -50,6 +50,8 @@ static void mana_get_strings(struct net_device *ndev, u32 stringset, u8 *data)
 		p += ETH_GSTRING_LEN;
 		sprintf(p, "rx_%d_xdp_tx", i);
 		p += ETH_GSTRING_LEN;
+		sprintf(p, "rx_%d_xdp_redirect", i);
+		p += ETH_GSTRING_LEN;
 	}
 
 	for (i = 0; i < num_queues; i++) {
@@ -57,6 +59,8 @@ static void mana_get_strings(struct net_device *ndev, u32 stringset, u8 *data)
 		p += ETH_GSTRING_LEN;
 		sprintf(p, "tx_%d_bytes", i);
 		p += ETH_GSTRING_LEN;
+		sprintf(p, "tx_%d_xdp_xmit", i);
+		p += ETH_GSTRING_LEN;
 	}
 }
 
@@ -70,6 +74,8 @@ static void mana_get_ethtool_stats(struct net_device *ndev,
 	struct mana_stats_tx *tx_stats;
 	unsigned int start;
 	u64 packets, bytes;
+	u64 xdp_redirect;
+	u64 xdp_xmit;
 	u64 xdp_drop;
 	u64 xdp_tx;
 	int q, i = 0;
@@ -89,12 +95,14 @@ static void mana_get_ethtool_stats(struct net_device *ndev,
 			bytes = rx_stats->bytes;
 			xdp_drop = rx_stats->xdp_drop;
 			xdp_tx = rx_stats->xdp_tx;
+			xdp_redirect = rx_stats->xdp_redirect;
 		} while (u64_stats_fetch_retry_irq(&rx_stats->syncp, start));
 
 		data[i++] = packets;
 		data[i++] = bytes;
 		data[i++] = xdp_drop;
 		data[i++] = xdp_tx;
+		data[i++] = xdp_redirect;
 	}
 
 	for (q = 0; q < num_queues; q++) {
@@ -104,10 +112,12 @@ static void mana_get_ethtool_stats(struct net_device *ndev,
 			start = u64_stats_fetch_begin_irq(&tx_stats->syncp);
 			packets = tx_stats->packets;
 			bytes = tx_stats->bytes;
+			xdp_xmit = tx_stats->xdp_xmit;
 		} while (u64_stats_fetch_retry_irq(&tx_stats->syncp, start));
 
 		data[i++] = packets;
 		data[i++] = bytes;
+		data[i++] = xdp_xmit;
 	}
 }