From patchwork Tue May 17 09:04:25 2022
X-Patchwork-Submitter: Long Li
X-Patchwork-Id: 12852168
X-Patchwork-Delegate: jgg@ziepe.ca
From: longli@linuxonhyperv.com
To: "K. Y. Srinivasan", Haiyang Zhang, Stephen Hemminger, Wei Liu,
    Dexuan Cui, "David S. Miller", Jakub Kicinski, Paolo Abeni,
    Jason Gunthorpe, Leon Romanovsky
Cc: linux-hyperv@vger.kernel.org, netdev@vger.kernel.org,
    linux-kernel@vger.kernel.org, linux-rdma@vger.kernel.org, Long Li
Subject: [PATCH 01/12] net: mana: Add support for auxiliary device
Date: Tue, 17 May 2022 02:04:25 -0700
Message-Id: <1652778276-2986-2-git-send-email-longli@linuxonhyperv.com>
In-Reply-To: <1652778276-2986-1-git-send-email-longli@linuxonhyperv.com>
References: <1652778276-2986-1-git-send-email-longli@linuxonhyperv.com>
Reply-To: longli@microsoft.com

From: Long Li

In preparation for supporting the MANA RDMA driver, add support for an
auxiliary device in the Ethernet driver. The RDMA device is modeled as
an auxiliary device of the Ethernet device.
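As an illustration (not part of this patch), a minimal auxiliary driver that
binds to the "rdma" device created by add_adev() below could look like the
following sketch. The match name "mana.rdma" assumes the Ethernet module is
named "mana"; all mana_rdma_example_* names are hypothetical.

#include <linux/module.h>
#include <linux/auxiliary_bus.h>

#include "mana.h"	/* struct mana_adev; moved to a common include dir later in this series */

static int mana_rdma_example_probe(struct auxiliary_device *adev,
				   const struct auxiliary_device_id *id)
{
	struct mana_adev *madev = container_of(adev, struct mana_adev, adev);
	struct gdma_dev *gd = madev->mdev;

	/* gd is the GDMA device owned by the Ethernet driver */
	dev_info(&adev->dev, "bound to MANA gdma_dev %p\n", gd);
	return 0;
}

static void mana_rdma_example_remove(struct auxiliary_device *adev)
{
}

static const struct auxiliary_device_id mana_rdma_example_id_table[] = {
	{ .name = "mana.rdma" },
	{},
};
MODULE_DEVICE_TABLE(auxiliary, mana_rdma_example_id_table);

static struct auxiliary_driver mana_rdma_example_driver = {
	.name = "rdma_example",
	.probe = mana_rdma_example_probe,
	.remove = mana_rdma_example_remove,
	.id_table = mana_rdma_example_id_table,
};
module_auxiliary_driver(mana_rdma_example_driver);

MODULE_LICENSE("GPL");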
Signed-off-by: Long Li Reported-by: kernel test robot --- drivers/net/ethernet/microsoft/mana/gdma.h | 2 + drivers/net/ethernet/microsoft/mana/mana.h | 6 ++ drivers/net/ethernet/microsoft/mana/mana_en.c | 83 ++++++++++++++++++- 3 files changed, 90 insertions(+), 1 deletion(-) diff --git a/drivers/net/ethernet/microsoft/mana/gdma.h b/drivers/net/ethernet/microsoft/mana/gdma.h index 41ecd156e95f..d815d323be87 100644 --- a/drivers/net/ethernet/microsoft/mana/gdma.h +++ b/drivers/net/ethernet/microsoft/mana/gdma.h @@ -204,6 +204,8 @@ struct gdma_dev { /* GDMA driver specific pointer */ void *driver_data; + + struct auxiliary_device *adev; }; #define MINIMUM_SUPPORTED_PAGE_SIZE PAGE_SIZE diff --git a/drivers/net/ethernet/microsoft/mana/mana.h b/drivers/net/ethernet/microsoft/mana/mana.h index d36405af9432..51bff91b63ee 100644 --- a/drivers/net/ethernet/microsoft/mana/mana.h +++ b/drivers/net/ethernet/microsoft/mana/mana.h @@ -6,6 +6,7 @@ #include "gdma.h" #include "hw_channel.h" +#include /* Microsoft Azure Network Adapter (MANA)'s definitions * @@ -561,4 +562,9 @@ struct mana_tx_package { struct gdma_posted_wqe_info wqe_info; }; +struct mana_adev { + struct auxiliary_device adev; + struct gdma_dev *mdev; +}; + #endif /* _MANA_H */ diff --git a/drivers/net/ethernet/microsoft/mana/mana_en.c b/drivers/net/ethernet/microsoft/mana/mana_en.c index b7d3ba1b4d17..c706bf943e49 100644 --- a/drivers/net/ethernet/microsoft/mana/mana_en.c +++ b/drivers/net/ethernet/microsoft/mana/mana_en.c @@ -13,6 +13,18 @@ #include "mana.h" +static DEFINE_IDA(mana_adev_ida); + +int mana_adev_idx_alloc(void) +{ + return ida_alloc(&mana_adev_ida, GFP_KERNEL); +} + +void mana_adev_idx_free(int idx) +{ + ida_free(&mana_adev_ida, idx); +} + /* Microsoft Azure Network Adapter (MANA) functions */ static int mana_open(struct net_device *ndev) @@ -1960,6 +1972,70 @@ static int mana_probe_port(struct mana_context *ac, int port_idx, return err; } +static void adev_release(struct device *dev) +{ + struct mana_adev *madev = container_of(dev, struct mana_adev, adev.dev); + + kfree(madev); +} + +static void remove_adev(struct gdma_dev *gd) +{ + struct auxiliary_device *adev = gd->adev; + int id = adev->id; + + auxiliary_device_delete(adev); + auxiliary_device_uninit(adev); + + mana_adev_idx_free(id); + gd->adev = NULL; +} + +static int add_adev(struct gdma_dev *gd) +{ + int ret = 0; + struct mana_adev *madev; + struct auxiliary_device *adev; + + madev = kzalloc(sizeof(*madev), GFP_KERNEL); + if (!madev) + return -ENOMEM; + + adev = &madev->adev; + adev->id = mana_adev_idx_alloc(); + if (adev->id < 0) { + ret = adev->id; + goto idx_fail; + } + + adev->name = "rdma"; + adev->dev.parent = gd->gdma_context->dev; + adev->dev.release = adev_release; + madev->mdev = gd; + + ret = auxiliary_device_init(adev); + if (ret) + goto init_fail; + + ret = auxiliary_device_add(adev); + if (ret) + goto add_fail; + + gd->adev = adev; + return 0; + +add_fail: + auxiliary_device_uninit(adev); + +init_fail: + mana_adev_idx_free(adev->id); + +idx_fail: + kfree(madev); + + return ret; +} + int mana_probe(struct gdma_dev *gd, bool resuming) { struct gdma_context *gc = gd->gdma_context; @@ -2027,6 +2103,8 @@ int mana_probe(struct gdma_dev *gd, bool resuming) break; } } + + err = add_adev(gd); out: if (err) mana_remove(gd, false); @@ -2043,6 +2121,10 @@ void mana_remove(struct gdma_dev *gd, bool suspending) int err; int i; + /* adev currently doesn't support suspending, always remove it */ + if (gd->adev) + remove_adev(gd); + for (i = 0; i < ac->num_ports; i++) 
{ ndev = ac->ports[i]; if (!ndev) { @@ -2075,7 +2157,6 @@ void mana_remove(struct gdma_dev *gd, bool suspending) } mana_destroy_eq(ac); - out: mana_gd_deregister_device(gd);

From patchwork Tue May 17 09:04:26 2022
X-Patchwork-Submitter: Long Li
X-Patchwork-Id: 12852167
X-Patchwork-Delegate: jgg@ziepe.ca
From: longli@linuxonhyperv.com
To: "K. Y. Srinivasan", Haiyang Zhang, Stephen Hemminger, Wei Liu,
    Dexuan Cui, "David S. Miller", Jakub Kicinski, Paolo Abeni,
    Jason Gunthorpe, Leon Romanovsky
Cc: linux-hyperv@vger.kernel.org, netdev@vger.kernel.org,
    linux-kernel@vger.kernel.org, linux-rdma@vger.kernel.org, Long Li
Subject: [PATCH 02/12] net: mana: Record the physical address for doorbell page region
Date: Tue, 17 May 2022 02:04:26 -0700
Message-Id: <1652778276-2986-3-git-send-email-longli@linuxonhyperv.com>
In-Reply-To: <1652778276-2986-1-git-send-email-longli@linuxonhyperv.com>
References: <1652778276-2986-1-git-send-email-longli@linuxonhyperv.com>
Reply-To: longli@microsoft.com

From: Long Li

To support the RDMA device with multiple user contexts, each with its
own doorbell page, record the start address of the doorbell page region
so the RDMA driver can use it when allocating doorbell IDs for user
contexts.
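For illustration only (not part of this patch): with the physical base
recorded, a consumer such as the RDMA driver could compute the physical
address of an individual doorbell page, for example to map it into a user
context. The helper name below is hypothetical.

static phys_addr_t example_doorbell_page_pa(struct gdma_context *gc,
					    u32 doorbell_id)
{
	/* Each doorbell page is gc->db_page_size bytes, starting at the
	 * physical base recorded from GDMA_REG_DB_PAGE_OFFSET.
	 */
	return gc->phys_db_page_base +
	       (phys_addr_t)doorbell_id * gc->db_page_size;
}

/* A user-context mmap handler could then do something like:
 *	io_remap_pfn_range(vma, vma->vm_start,
 *			   example_doorbell_page_pa(gc, db_id) >> PAGE_SHIFT,
 *			   gc->db_page_size,
 *			   pgprot_writecombine(vma->vm_page_prot));
 */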
Signed-off-by: Long Li --- drivers/net/ethernet/microsoft/mana/gdma.h | 2 ++ drivers/net/ethernet/microsoft/mana/gdma_main.c | 4 ++++ 2 files changed, 6 insertions(+) diff --git a/drivers/net/ethernet/microsoft/mana/gdma.h b/drivers/net/ethernet/microsoft/mana/gdma.h index d815d323be87..c724ca410fcb 100644 --- a/drivers/net/ethernet/microsoft/mana/gdma.h +++ b/drivers/net/ethernet/microsoft/mana/gdma.h @@ -350,9 +350,11 @@ struct gdma_context { struct completion eq_test_event; u32 test_event_eq_id; + phys_addr_t bar0_pa; void __iomem *bar0_va; void __iomem *shm_base; void __iomem *db_page_base; + phys_addr_t phys_db_page_base; u32 db_page_size; /* Shared memory chanenl (used to bootstrap HWC) */ diff --git a/drivers/net/ethernet/microsoft/mana/gdma_main.c b/drivers/net/ethernet/microsoft/mana/gdma_main.c index 49b85ca578b0..9fafaa0c8e76 100644 --- a/drivers/net/ethernet/microsoft/mana/gdma_main.c +++ b/drivers/net/ethernet/microsoft/mana/gdma_main.c @@ -27,6 +27,9 @@ static void mana_gd_init_registers(struct pci_dev *pdev) gc->db_page_base = gc->bar0_va + mana_gd_r64(gc, GDMA_REG_DB_PAGE_OFFSET); + gc->phys_db_page_base = gc->bar0_pa + + mana_gd_r64(gc, GDMA_REG_DB_PAGE_OFFSET); + gc->shm_base = gc->bar0_va + mana_gd_r64(gc, GDMA_REG_SHM_OFFSET); } @@ -1335,6 +1338,7 @@ static int mana_gd_probe(struct pci_dev *pdev, const struct pci_device_id *ent) mutex_init(&gc->eq_test_event_mutex); pci_set_drvdata(pdev, gc); + gc->bar0_pa = pci_resource_start(pdev, 0); bar0_va = pci_iomap(pdev, bar, 0); if (!bar0_va) From patchwork Tue May 17 09:04:27 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Long Li X-Patchwork-Id: 12852166 X-Patchwork-Delegate: jgg@ziepe.ca Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 62158C43217 for ; Tue, 17 May 2022 09:05:15 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S243930AbiEQJFM (ORCPT ); Tue, 17 May 2022 05:05:12 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:52886 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S235185AbiEQJEw (ORCPT ); Tue, 17 May 2022 05:04:52 -0400 Received: from linux.microsoft.com (linux.microsoft.com [13.77.154.182]) by lindbergh.monkeyblade.net (Postfix) with ESMTP id 3E82147063; Tue, 17 May 2022 02:04:48 -0700 (PDT) Received: by linux.microsoft.com (Postfix, from userid 1004) id 1511020F7224; Tue, 17 May 2022 02:04:48 -0700 (PDT) DKIM-Filter: OpenDKIM Filter v2.11.0 linux.microsoft.com 1511020F7224 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linuxonhyperv.com; s=default; t=1652778288; bh=Ssqwyn9buuJdDtO8VksCBiYTIi8iEmbq5gI64z2s+PY=; h=From:To:Cc:Subject:Date:In-Reply-To:References:Reply-To:From; b=L/8+iOHGHmUS3/sbpCnwTFNk21iE2oBN/YzvRyTKlhRuoRb4NEzrU8ofUxR2swxWq sAeS25TLIQ0AgdFr+yyUwuZil/2HgRwEuS3/UGGtVbxnmLJdUaHFcFmCJ2LjAbdyqO i0Fh8Z5I5qetdcJy/8NH0B1x32GAkB3iTz2D9lpc= From: longli@linuxonhyperv.com To: "K. Y. Srinivasan" , Haiyang Zhang , Stephen Hemminger , Wei Liu , Dexuan Cui , "David S. 
Miller" , Jakub Kicinski , Paolo Abeni , Jason Gunthorpe , Leon Romanovsky Cc: linux-hyperv@vger.kernel.org, netdev@vger.kernel.org, linux-kernel@vger.kernel.org, linux-rdma@vger.kernel.org, Long Li Subject: [PATCH 03/12] net: mana: Handle vport sharing between devices Date: Tue, 17 May 2022 02:04:27 -0700 Message-Id: <1652778276-2986-4-git-send-email-longli@linuxonhyperv.com> X-Mailer: git-send-email 1.8.3.1 In-Reply-To: <1652778276-2986-1-git-send-email-longli@linuxonhyperv.com> References: <1652778276-2986-1-git-send-email-longli@linuxonhyperv.com> Reply-To: longli@microsoft.com Precedence: bulk List-ID: X-Mailing-List: linux-rdma@vger.kernel.org From: Long Li For outgoing packets, the PF requires the VF to configure the vport with corresponding protection domain and doorbell ID for the kernel or user context. The vport can't be shared between different contexts. Implement the logic to exclusively take over the vport by either the Ethernet device or RDMA device. Signed-off-by: Long Li --- drivers/net/ethernet/microsoft/mana/mana.h | 4 ++++ drivers/net/ethernet/microsoft/mana/mana_en.c | 19 +++++++++++++++++-- 2 files changed, 21 insertions(+), 2 deletions(-) diff --git a/drivers/net/ethernet/microsoft/mana/mana.h b/drivers/net/ethernet/microsoft/mana/mana.h index 51bff91b63ee..26f14fcb6a61 100644 --- a/drivers/net/ethernet/microsoft/mana/mana.h +++ b/drivers/net/ethernet/microsoft/mana/mana.h @@ -375,6 +375,7 @@ struct mana_port_context { unsigned int num_queues; mana_handle_t port_handle; + atomic_t port_use_count; u16 port_idx; @@ -567,4 +568,7 @@ struct mana_adev { struct gdma_dev *mdev; }; +int mana_cfg_vport(struct mana_port_context *apc, u32 protection_dom_id, + u32 doorbell_pg_id); +void mana_uncfg_vport(struct mana_port_context *apc); #endif /* _MANA_H */ diff --git a/drivers/net/ethernet/microsoft/mana/mana_en.c b/drivers/net/ethernet/microsoft/mana/mana_en.c index c706bf943e49..4f7a50ace9f6 100644 --- a/drivers/net/ethernet/microsoft/mana/mana_en.c +++ b/drivers/net/ethernet/microsoft/mana/mana_en.c @@ -530,13 +530,25 @@ static int mana_query_vport_cfg(struct mana_port_context *apc, u32 vport_index, return 0; } -static int mana_cfg_vport(struct mana_port_context *apc, u32 protection_dom_id, - u32 doorbell_pg_id) +void mana_uncfg_vport(struct mana_port_context *apc) +{ + atomic_dec(&apc->port_use_count); +} +EXPORT_SYMBOL_GPL(mana_uncfg_vport); + +int mana_cfg_vport(struct mana_port_context *apc, u32 protection_dom_id, + u32 doorbell_pg_id) { struct mana_config_vport_resp resp = {}; struct mana_config_vport_req req = {}; int err; + /* Ethernet driver and IB driver can't take the port at the same time */ + if (atomic_inc_return(&apc->port_use_count) != 1) { + atomic_dec(&apc->port_use_count); + return -ENODEV; + } + mana_gd_init_req_hdr(&req.hdr, MANA_CONFIG_VPORT_TX, sizeof(req), sizeof(resp)); req.vport = apc->port_handle; @@ -566,6 +578,7 @@ static int mana_cfg_vport(struct mana_port_context *apc, u32 protection_dom_id, out: return err; } +EXPORT_SYMBOL_GPL(mana_cfg_vport); static int mana_cfg_vport_steering(struct mana_port_context *apc, enum TRI_STATE rx, @@ -1678,6 +1691,8 @@ static void mana_destroy_vport(struct mana_port_context *apc) } mana_destroy_txq(apc); + + mana_uncfg_vport(apc); } static int mana_create_vport(struct mana_port_context *apc, From patchwork Tue May 17 09:04:28 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Long Li X-Patchwork-Id: 12852165 X-Patchwork-Delegate: jgg@ziepe.ca 
From: longli@linuxonhyperv.com
To: "K. Y. Srinivasan", Haiyang Zhang, Stephen Hemminger, Wei Liu,
    Dexuan Cui, "David S. Miller", Jakub Kicinski, Paolo Abeni,
    Jason Gunthorpe, Leon Romanovsky
Cc: linux-hyperv@vger.kernel.org, netdev@vger.kernel.org,
    linux-kernel@vger.kernel.org, linux-rdma@vger.kernel.org, Long Li
Subject: [PATCH 04/12] net: mana: Add functions for allocating doorbell page from GDMA
Date: Tue, 17 May 2022 02:04:28 -0700
Message-Id: <1652778276-2986-5-git-send-email-longli@linuxonhyperv.com>
In-Reply-To: <1652778276-2986-1-git-send-email-longli@linuxonhyperv.com>
References: <1652778276-2986-1-git-send-email-longli@linuxonhyperv.com>
Reply-To: longli@microsoft.com

From: Long Li

The RDMA device needs to allocate doorbell pages for each user context.
Implement the GDMA functions to allocate and destroy doorbell pages,
and expose them for use by the RDMA driver.
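A usage sketch (hypothetical caller, not part of this patch): the RDMA
driver would typically allocate one doorbell page when a user context is
created and free it on teardown.

static int example_user_context_init(struct gdma_context *gc, int *db_page_id)
{
	int err;

	err = mana_gd_allocate_doorbell_page(gc, db_page_id);
	if (err) {
		dev_err(gc->dev, "Failed to allocate doorbell page: %d\n", err);
		return err;
	}

	return 0;
}

static void example_user_context_teardown(struct gdma_context *gc,
					  int db_page_id)
{
	mana_gd_destroy_doorbell_page(gc, db_page_id);
}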
Signed-off-by: Long Li --- drivers/net/ethernet/microsoft/mana/gdma.h | 29 ++++++++++ .../net/ethernet/microsoft/mana/gdma_main.c | 54 +++++++++++++++++++ 2 files changed, 83 insertions(+) diff --git a/drivers/net/ethernet/microsoft/mana/gdma.h b/drivers/net/ethernet/microsoft/mana/gdma.h index c724ca410fcb..f945755760dc 100644 --- a/drivers/net/ethernet/microsoft/mana/gdma.h +++ b/drivers/net/ethernet/microsoft/mana/gdma.h @@ -22,11 +22,15 @@ enum gdma_request_type { GDMA_GENERATE_TEST_EQE = 10, GDMA_CREATE_QUEUE = 12, GDMA_DISABLE_QUEUE = 13, + GDMA_ALLOCATE_RESOURCE_RANGE = 22, + GDMA_DESTROY_RESOURCE_RANGE = 24, GDMA_CREATE_DMA_REGION = 25, GDMA_DMA_REGION_ADD_PAGES = 26, GDMA_DESTROY_DMA_REGION = 27, }; +#define GDMA_RESOURCE_DOORBELL_PAGE 27 + enum gdma_queue_type { GDMA_INVALID_QUEUE, GDMA_SQ, @@ -568,6 +572,26 @@ struct gdma_register_device_resp { u32 db_id; }; /* HW DATA */ +struct gdma_allocate_resource_range_req { + struct gdma_req_hdr hdr; + u32 resource_type; + u32 num_resources; + u32 alignment; + u32 allocated_resources; +}; + +struct gdma_allocate_resource_range_resp { + struct gdma_resp_hdr hdr; + u32 allocated_resources; +}; + +struct gdma_destroy_resource_range_req { + struct gdma_req_hdr hdr; + u32 resource_type; + u32 num_resources; + u32 allocated_resources; +}; + /* GDMA_CREATE_QUEUE */ struct gdma_create_queue_req { struct gdma_req_hdr hdr; @@ -676,4 +700,9 @@ void mana_gd_free_memory(struct gdma_mem_info *gmi); int mana_gd_send_request(struct gdma_context *gc, u32 req_len, const void *req, u32 resp_len, void *resp); + +int mana_gd_allocate_doorbell_page(struct gdma_context *gc, int *doorbell_page); + +int mana_gd_destroy_doorbell_page(struct gdma_context *gc, int doorbell_page); + #endif /* _GDMA_H */ diff --git a/drivers/net/ethernet/microsoft/mana/gdma_main.c b/drivers/net/ethernet/microsoft/mana/gdma_main.c index 9fafaa0c8e76..86ffe0e39df0 100644 --- a/drivers/net/ethernet/microsoft/mana/gdma_main.c +++ b/drivers/net/ethernet/microsoft/mana/gdma_main.c @@ -153,6 +153,60 @@ void mana_gd_free_memory(struct gdma_mem_info *gmi) gmi->dma_handle); } +int mana_gd_destroy_doorbell_page(struct gdma_context *gc, int doorbell_page) +{ + struct gdma_destroy_resource_range_req req = {}; + struct gdma_resp_hdr resp = {}; + int err; + + mana_gd_init_req_hdr(&req.hdr, GDMA_DESTROY_RESOURCE_RANGE, + sizeof(req), sizeof(resp)); + + req.resource_type = GDMA_RESOURCE_DOORBELL_PAGE; + req.num_resources = 1; + req.allocated_resources = doorbell_page; + + err = mana_gd_send_request(gc, sizeof(req), &req, sizeof(resp), &resp); + if (err || resp.status) { + dev_err(gc->dev, + "Failed to destroy doorbell page: ret %d, 0x%x\n", + err, resp.status); + return err ? err : -EPROTO; + } + + return 0; +} +EXPORT_SYMBOL(mana_gd_destroy_doorbell_page); + +int mana_gd_allocate_doorbell_page(struct gdma_context *gc, + int *doorbell_page) +{ + struct gdma_allocate_resource_range_req req = {}; + struct gdma_allocate_resource_range_resp resp = {}; + int err; + + mana_gd_init_req_hdr(&req.hdr, GDMA_ALLOCATE_RESOURCE_RANGE, + sizeof(req), sizeof(resp)); + + req.resource_type = GDMA_RESOURCE_DOORBELL_PAGE; + req.num_resources = 1; + req.alignment = 0; + req.allocated_resources = 0; // have GDMA start searching from 0 + + err = mana_gd_send_request(gc, sizeof(req), &req, sizeof(resp), &resp); + if (err || resp.hdr.status) { // resp.hdr.status should be >=0 + dev_err(gc->dev, + "Failed to allocate doorbell page: ret %d, 0x%x\n", + err, resp.hdr.status); + return err ? 
err : -EPROTO; } + + *doorbell_page = resp.allocated_resources; + + return 0; +} +EXPORT_SYMBOL(mana_gd_allocate_doorbell_page); + static int mana_gd_create_hw_eq(struct gdma_context *gc, struct gdma_queue *queue) {

From patchwork Tue May 17 09:04:29 2022
X-Patchwork-Submitter: Long Li
X-Patchwork-Id: 12852159
X-Patchwork-Delegate: jgg@ziepe.ca
From: longli@linuxonhyperv.com
To: "K. Y. Srinivasan", Haiyang Zhang, Stephen Hemminger, Wei Liu,
    Dexuan Cui, "David S. Miller", Jakub Kicinski, Paolo Abeni,
    Jason Gunthorpe, Leon Romanovsky
Cc: linux-hyperv@vger.kernel.org, netdev@vger.kernel.org,
    linux-kernel@vger.kernel.org, linux-rdma@vger.kernel.org, Long Li
Subject: [PATCH 05/12] net: mana: Set the DMA device max page size
Date: Tue, 17 May 2022 02:04:29 -0700
Message-Id: <1652778276-2986-6-git-send-email-longli@linuxonhyperv.com>
In-Reply-To: <1652778276-2986-1-git-send-email-longli@linuxonhyperv.com>
References: <1652778276-2986-1-git-send-email-longli@linuxonhyperv.com>
Reply-To: longli@microsoft.com

From: Long Li

The system chooses a default 64K page size if the device does not
specify the maximum page size it can handle for DMA. This does not work
well when the device registers a large chunk of memory, where a larger
page size is more efficient. Set it to the maximum hardware-supported
page size.
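Illustration (hypothetical helper, not part of this patch): once the limit
is raised during probe, a consumer can confirm the larger segment size
before building scatterlists for big memory registrations.

#include <linux/dma-mapping.h>
#include <linux/sizes.h>

static bool example_supports_2m_segments(struct device *dma_dev)
{
	/* True once the driver has called dma_set_max_seg_size(dev, SZ_2M) */
	return dma_get_max_seg_size(dma_dev) >= SZ_2M;
}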
Signed-off-by: Long Li --- drivers/net/ethernet/microsoft/mana/gdma_main.c | 7 +++++++ 1 file changed, 7 insertions(+) diff --git a/drivers/net/ethernet/microsoft/mana/gdma_main.c b/drivers/net/ethernet/microsoft/mana/gdma_main.c index 86ffe0e39df0..426087688480 100644 --- a/drivers/net/ethernet/microsoft/mana/gdma_main.c +++ b/drivers/net/ethernet/microsoft/mana/gdma_main.c @@ -1385,6 +1385,13 @@ static int mana_gd_probe(struct pci_dev *pdev, const struct pci_device_id *ent) if (err) goto release_region; + // The max GDMA HW supported page size is 2M + err = dma_set_max_seg_size(&pdev->dev, SZ_2M); + if (err) { + dev_err(&pdev->dev, "Failed to set dma device segment size\n"); + goto release_region; + } + err = -ENOMEM; gc = vzalloc(sizeof(*gc)); if (!gc)

From patchwork Tue May 17 09:04:30 2022
X-Patchwork-Submitter: Long Li
X-Patchwork-Id: 12852158
X-Patchwork-Delegate: jgg@ziepe.ca
From: longli@linuxonhyperv.com
To: "K. Y. Srinivasan", Haiyang Zhang, Stephen Hemminger, Wei Liu,
    Dexuan Cui, "David S. Miller", Jakub Kicinski, Paolo Abeni,
    Jason Gunthorpe, Leon Romanovsky
Cc: linux-hyperv@vger.kernel.org, netdev@vger.kernel.org,
    linux-kernel@vger.kernel.org, linux-rdma@vger.kernel.org, Long Li
Subject: [PATCH 06/12] net: mana: Define data structures for protection domain and memory registration
Date: Tue, 17 May 2022 02:04:30 -0700
Message-Id: <1652778276-2986-7-git-send-email-longli@linuxonhyperv.com>
In-Reply-To: <1652778276-2986-1-git-send-email-longli@linuxonhyperv.com>
References: <1652778276-2986-1-git-send-email-longli@linuxonhyperv.com>
Reply-To: longli@microsoft.com

From: Long Li

The MANA hardware supports protection domains and memory registration
for use in an RDMA environment. Add those definitions and expose them
for use by the RDMA driver.
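As a sketch of how these definitions could be used (hypothetical helper,
not part of this patch), creating a protection domain follows the same
request/response pattern as the other GDMA messages:

static int example_create_pd(struct gdma_context *gc, enum gdma_pd_flags flags,
			     gdma_obj_handle_t *pd_handle, u32 *pd_id)
{
	struct gdma_create_pd_resp resp = {};
	struct gdma_create_pd_req req = {};
	int err;

	mana_gd_init_req_hdr(&req.hdr, GDMA_CREATE_PD, sizeof(req),
			     sizeof(resp));
	req.flags = flags;

	err = mana_gd_send_request(gc, sizeof(req), &req, sizeof(resp), &resp);
	if (err || resp.hdr.status) {
		dev_err(gc->dev, "Failed to create PD: %d, 0x%x\n",
			err, resp.hdr.status);
		return err ? err : -EPROTO;
	}

	*pd_handle = resp.pd_handle;
	*pd_id = resp.pd_id;
	return 0;
}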
Signed-off-by: Long Li --- drivers/net/ethernet/microsoft/mana/gdma.h | 149 +++++++++++++++++- .../net/ethernet/microsoft/mana/gdma_main.c | 26 +-- drivers/net/ethernet/microsoft/mana/mana_en.c | 16 +- 3 files changed, 168 insertions(+), 23 deletions(-) diff --git a/drivers/net/ethernet/microsoft/mana/gdma.h b/drivers/net/ethernet/microsoft/mana/gdma.h index f945755760dc..bc8cd9528937 100644 --- a/drivers/net/ethernet/microsoft/mana/gdma.h +++ b/drivers/net/ethernet/microsoft/mana/gdma.h @@ -27,6 +27,10 @@ enum gdma_request_type { GDMA_CREATE_DMA_REGION = 25, GDMA_DMA_REGION_ADD_PAGES = 26, GDMA_DESTROY_DMA_REGION = 27, + GDMA_CREATE_PD = 29, + GDMA_DESTROY_PD = 30, + GDMA_CREATE_MR = 31, + GDMA_DESTROY_MR = 32, }; #define GDMA_RESOURCE_DOORBELL_PAGE 27 @@ -59,6 +63,8 @@ enum { GDMA_DEVICE_MANA = 2, }; +typedef u64 gdma_obj_handle_t; + struct gdma_resource { /* Protect the bitmap */ spinlock_t lock; @@ -192,7 +198,7 @@ struct gdma_mem_info { u64 length; /* Allocated by the PF driver */ - u64 gdma_region; + gdma_obj_handle_t dma_region_handle; }; #define REGISTER_ATB_MST_MKEY_LOWER_SIZE 8 @@ -599,7 +605,7 @@ struct gdma_create_queue_req { u32 reserved1; u32 pdid; u32 doolbell_id; - u64 gdma_region; + gdma_obj_handle_t gdma_region; u32 reserved2; u32 queue_size; u32 log2_throttle_limit; @@ -626,6 +632,28 @@ struct gdma_disable_queue_req { u32 alloc_res_id_on_creation; }; /* HW DATA */ +enum atb_page_size { + ATB_PAGE_SIZE_4K, + ATB_PAGE_SIZE_8K, + ATB_PAGE_SIZE_16K, + ATB_PAGE_SIZE_32K, + ATB_PAGE_SIZE_64K, + ATB_PAGE_SIZE_128K, + ATB_PAGE_SIZE_256K, + ATB_PAGE_SIZE_512K, + ATB_PAGE_SIZE_1M, + ATB_PAGE_SIZE_2M, + ATB_PAGE_SIZE_MAX, +}; + +enum gdma_mr_access_flags { + GDMA_ACCESS_FLAG_LOCAL_READ = (1 << 0), + GDMA_ACCESS_FLAG_LOCAL_WRITE = (1 << 1), + GDMA_ACCESS_FLAG_REMOTE_READ = (1 << 2), + GDMA_ACCESS_FLAG_REMOTE_WRITE = (1 << 3), + GDMA_ACCESS_FLAG_REMOTE_ATOMIC = (1 << 4), +}; + /* GDMA_CREATE_DMA_REGION */ struct gdma_create_dma_region_req { struct gdma_req_hdr hdr; @@ -652,14 +680,14 @@ struct gdma_create_dma_region_req { struct gdma_create_dma_region_resp { struct gdma_resp_hdr hdr; - u64 gdma_region; + gdma_obj_handle_t dma_region_handle; }; /* HW DATA */ /* GDMA_DMA_REGION_ADD_PAGES */ struct gdma_dma_region_add_pages_req { struct gdma_req_hdr hdr; - u64 gdma_region; + gdma_obj_handle_t dma_region_handle; u32 page_addr_list_len; u32 reserved3; @@ -671,9 +699,117 @@ struct gdma_dma_region_add_pages_req { struct gdma_destroy_dma_region_req { struct gdma_req_hdr hdr; - u64 gdma_region; + gdma_obj_handle_t dma_region_handle; }; /* HW DATA */ +enum gdma_pd_flags { + GDMA_PD_FLAG_ALLOW_GPA_MR = (1 << 0), + GDMA_PD_FLAG_ALLOW_FMR_MR = (1 << 1), +}; + +struct gdma_create_pd_req { + struct gdma_req_hdr hdr; + enum gdma_pd_flags flags; + u32 reserved; +}; + +struct gdma_create_pd_resp { + struct gdma_resp_hdr hdr; + gdma_obj_handle_t pd_handle; + u32 pd_id; + u32 reserved; +}; + +struct gdma_destroy_pd_req { + struct gdma_req_hdr hdr; + gdma_obj_handle_t pd_handle; +}; + +struct gdma_destory_pd_resp { + struct gdma_resp_hdr hdr; +}; + +enum gdma_mr_type { + // + // Guest Physical Address - MRs of this type allow access + // to any DMA-mapped memory using bus-logical address + // + GDMA_MR_TYPE_GPA = 1, + + // + // Guest Virtual Address - MRs of this type allow access + // to memory mapped by PTEs associated with this MR using a virtual + // address that is set up in the MST + // + GDMA_MR_TYPE_GVA, + + // + // Fast Memory Register - Like GVA but the MR is initially put in the + // FREE 
state (as opposed to Valid), and the specified number of + // PTEs are reserved for future fast memory reservations. + // + GDMA_MR_TYPE_FMR, +}; + +struct gdma_create_mr_params { + gdma_obj_handle_t pd_handle; + enum gdma_mr_type mr_type; + union { + struct { + gdma_obj_handle_t dma_region_handle; + u64 virtual_address; + enum gdma_mr_access_flags access_flags; + } gva; + struct { + enum gdma_mr_access_flags access_flags; + } gpa; + struct { + enum atb_page_size page_size; + u32 reserved_pte_count; + } fmr; + }; +}; + +struct gdma_create_mr_request { + struct gdma_req_hdr hdr; + gdma_obj_handle_t pd_handle; + enum gdma_mr_type mr_type; + u32 reserved; + + union { + struct { + enum gdma_mr_access_flags access_flags; + } gpa; + + struct { + gdma_obj_handle_t dma_region_handle; + u64 virtual_address; + enum gdma_mr_access_flags access_flags; + } gva; + + struct { + enum atb_page_size page_size; + u32 reserved_pte_count; + } fmr; + }; +}; + +struct gdma_create_mr_response { + struct gdma_resp_hdr hdr; + gdma_obj_handle_t mr_handle; + u32 lkey; + u32 rkey; +}; + +struct gdma_destroy_mr_request { + struct gdma_req_hdr hdr; + gdma_obj_handle_t mr_handle; +}; + +struct gdma_destroy_mr_response { + struct gdma_resp_hdr hdr; +}; + int mana_gd_verify_vf_version(struct pci_dev *pdev); int mana_gd_register_device(struct gdma_dev *gd); @@ -705,4 +841,7 @@ int mana_gd_allocate_doorbell_page(struct gdma_context *gc, int *doorbell_page); int mana_gd_destroy_doorbell_page(struct gdma_context *gc, int doorbell_page); +int mana_gd_destroy_dma_region(struct gdma_context *gc, + gdma_obj_handle_t dma_region_handle); + #endif /* _GDMA_H */ diff --git a/drivers/net/ethernet/microsoft/mana/gdma_main.c b/drivers/net/ethernet/microsoft/mana/gdma_main.c index 426087688480..55c4059ac870 100644 --- a/drivers/net/ethernet/microsoft/mana/gdma_main.c +++ b/drivers/net/ethernet/microsoft/mana/gdma_main.c @@ -224,7 +224,7 @@ static int mana_gd_create_hw_eq(struct gdma_context *gc, req.type = queue->type; req.pdid = queue->gdma_dev->pdid; req.doolbell_id = queue->gdma_dev->doorbell; - req.gdma_region = queue->mem_info.gdma_region; + req.gdma_region = queue->mem_info.dma_region_handle; req.queue_size = queue->queue_size; req.log2_throttle_limit = queue->eq.log2_throttle_limit; req.eq_pci_msix_index = queue->eq.msix_index; @@ -238,7 +238,7 @@ static int mana_gd_create_hw_eq(struct gdma_context *gc, queue->id = resp.queue_index; queue->eq.disable_needed = true; - queue->mem_info.gdma_region = GDMA_INVALID_DMA_REGION; + queue->mem_info.dma_region_handle = GDMA_INVALID_DMA_REGION; return 0; } @@ -692,24 +692,30 @@ int mana_gd_create_hwc_queue(struct gdma_dev *gd, return err; } -static void mana_gd_destroy_dma_region(struct gdma_context *gc, u64 gdma_region) +int mana_gd_destroy_dma_region(struct gdma_context *gc, + gdma_obj_handle_t dma_region_handle) { struct gdma_destroy_dma_region_req req = {}; struct gdma_general_resp resp = {}; int err; - if (gdma_region == GDMA_INVALID_DMA_REGION) - return; + if (dma_region_handle == GDMA_INVALID_DMA_REGION) + return 0; mana_gd_init_req_hdr(&req.hdr, GDMA_DESTROY_DMA_REGION, sizeof(req), sizeof(resp)); - req.gdma_region = gdma_region; + req.dma_region_handle = dma_region_handle; err = mana_gd_send_request(gc, sizeof(req), &req, sizeof(resp), &resp); - if (err || resp.hdr.status) + if (err || resp.hdr.status) { dev_err(gc->dev, "Failed to destroy DMA region: %d, 0x%x\n", err, resp.hdr.status); + return -EPROTO; + } + + return 0; } +EXPORT_SYMBOL(mana_gd_destroy_dma_region); static int 
mana_gd_create_dma_region(struct gdma_dev *gd, struct gdma_mem_info *gmi) @@ -754,14 +760,14 @@ static int mana_gd_create_dma_region(struct gdma_dev *gd, if (err) goto out; - if (resp.hdr.status || resp.gdma_region == GDMA_INVALID_DMA_REGION) { + if (resp.hdr.status || resp.dma_region_handle == GDMA_INVALID_DMA_REGION) { dev_err(gc->dev, "Failed to create DMA region: 0x%x\n", resp.hdr.status); err = -EPROTO; goto out; } - gmi->gdma_region = resp.gdma_region; + gmi->dma_region_handle = resp.dma_region_handle; out: kfree(req); return err; @@ -884,7 +890,7 @@ void mana_gd_destroy_queue(struct gdma_context *gc, struct gdma_queue *queue) return; } - mana_gd_destroy_dma_region(gc, gmi->gdma_region); + mana_gd_destroy_dma_region(gc, gmi->dma_region_handle); mana_gd_free_memory(gmi); kfree(queue); } diff --git a/drivers/net/ethernet/microsoft/mana/mana_en.c b/drivers/net/ethernet/microsoft/mana/mana_en.c index 4f7a50ace9f6..dc9fcb99e937 100644 --- a/drivers/net/ethernet/microsoft/mana/mana_en.c +++ b/drivers/net/ethernet/microsoft/mana/mana_en.c @@ -1364,10 +1364,10 @@ static int mana_create_txq(struct mana_port_context *apc, memset(&wq_spec, 0, sizeof(wq_spec)); memset(&cq_spec, 0, sizeof(cq_spec)); - wq_spec.gdma_region = txq->gdma_sq->mem_info.gdma_region; + wq_spec.gdma_region = txq->gdma_sq->mem_info.dma_region_handle; wq_spec.queue_size = txq->gdma_sq->queue_size; - cq_spec.gdma_region = cq->gdma_cq->mem_info.gdma_region; + cq_spec.gdma_region = cq->gdma_cq->mem_info.dma_region_handle; cq_spec.queue_size = cq->gdma_cq->queue_size; cq_spec.modr_ctx_id = 0; cq_spec.attached_eq = cq->gdma_cq->cq.parent->id; @@ -1382,8 +1382,8 @@ static int mana_create_txq(struct mana_port_context *apc, txq->gdma_sq->id = wq_spec.queue_index; cq->gdma_cq->id = cq_spec.queue_index; - txq->gdma_sq->mem_info.gdma_region = GDMA_INVALID_DMA_REGION; - cq->gdma_cq->mem_info.gdma_region = GDMA_INVALID_DMA_REGION; + txq->gdma_sq->mem_info.dma_region_handle = GDMA_INVALID_DMA_REGION; + cq->gdma_cq->mem_info.dma_region_handle = GDMA_INVALID_DMA_REGION; txq->gdma_txq_id = txq->gdma_sq->id; @@ -1594,10 +1594,10 @@ static struct mana_rxq *mana_create_rxq(struct mana_port_context *apc, memset(&wq_spec, 0, sizeof(wq_spec)); memset(&cq_spec, 0, sizeof(cq_spec)); - wq_spec.gdma_region = rxq->gdma_rq->mem_info.gdma_region; + wq_spec.gdma_region = rxq->gdma_rq->mem_info.dma_region_handle; wq_spec.queue_size = rxq->gdma_rq->queue_size; - cq_spec.gdma_region = cq->gdma_cq->mem_info.gdma_region; + cq_spec.gdma_region = cq->gdma_cq->mem_info.dma_region_handle; cq_spec.queue_size = cq->gdma_cq->queue_size; cq_spec.modr_ctx_id = 0; cq_spec.attached_eq = cq->gdma_cq->cq.parent->id; @@ -1610,8 +1610,8 @@ static struct mana_rxq *mana_create_rxq(struct mana_port_context *apc, rxq->gdma_rq->id = wq_spec.queue_index; cq->gdma_cq->id = cq_spec.queue_index; - rxq->gdma_rq->mem_info.gdma_region = GDMA_INVALID_DMA_REGION; - cq->gdma_cq->mem_info.gdma_region = GDMA_INVALID_DMA_REGION; + rxq->gdma_rq->mem_info.dma_region_handle = GDMA_INVALID_DMA_REGION; + cq->gdma_cq->mem_info.dma_region_handle = GDMA_INVALID_DMA_REGION; rxq->gdma_id = rxq->gdma_rq->id; cq->gdma_id = cq->gdma_cq->id; From patchwork Tue May 17 09:04:31 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Long Li X-Patchwork-Id: 12852157 X-Patchwork-Delegate: jgg@ziepe.ca Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org 
(vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 7EE22C4321E; Tue, 17 May 2022 09:05:02 +0000 (UTC)
From: longli@linuxonhyperv.com
To: "K. Y. Srinivasan", Haiyang Zhang, Stephen Hemminger, Wei Liu,
    Dexuan Cui, "David S. Miller", Jakub Kicinski, Paolo Abeni,
    Jason Gunthorpe, Leon Romanovsky
Cc: linux-hyperv@vger.kernel.org, netdev@vger.kernel.org,
    linux-kernel@vger.kernel.org, linux-rdma@vger.kernel.org, Long Li
Subject: [PATCH 07/12] net: mana: Export Work Queue functions for use by RDMA driver
Date: Tue, 17 May 2022 02:04:31 -0700
Message-Id: <1652778276-2986-8-git-send-email-longli@linuxonhyperv.com>
In-Reply-To: <1652778276-2986-1-git-send-email-longli@linuxonhyperv.com>
References: <1652778276-2986-1-git-send-email-longli@linuxonhyperv.com>
Reply-To: longli@microsoft.com

From: Long Li

The RDMA device may need to create Ethernet device queues for use by
Queue Pairs of type RAW. This allows a user-mode context to access
Ethernet hardware queues. Export the supporting functions for use by
the RDMA driver.
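Usage sketch (hypothetical, not part of this patch): an RDMA driver creating
a RAW queue pair would pass the DMA regions of the already-created send WQ
and CQ buffers through the exported helper, and destroy the object on
teardown.

static int example_create_raw_send_queue(struct mana_port_context *apc,
					 struct mana_obj_spec *wq_spec,
					 struct mana_obj_spec *cq_spec,
					 mana_handle_t *wq_obj)
{
	/* wq_spec/cq_spec carry the gdma_region handles and queue sizes of
	 * buffers the caller has already registered as DMA regions.
	 */
	return mana_create_wq_obj(apc, apc->port_handle, GDMA_SQ,
				  wq_spec, cq_spec, wq_obj);
}

static void example_destroy_raw_send_queue(struct mana_port_context *apc,
					   mana_handle_t wq_obj)
{
	mana_destroy_wq_obj(apc, GDMA_SQ, wq_obj);
}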
Signed-off-by: Long Li --- drivers/net/ethernet/microsoft/mana/gdma_main.c | 1 + drivers/net/ethernet/microsoft/mana/mana.h | 9 +++++++++ drivers/net/ethernet/microsoft/mana/mana_en.c | 16 +++++++++------- 3 files changed, 19 insertions(+), 7 deletions(-) diff --git a/drivers/net/ethernet/microsoft/mana/gdma_main.c b/drivers/net/ethernet/microsoft/mana/gdma_main.c index 55c4059ac870..9c93d7a403ea 100644 --- a/drivers/net/ethernet/microsoft/mana/gdma_main.c +++ b/drivers/net/ethernet/microsoft/mana/gdma_main.c @@ -125,6 +125,7 @@ int mana_gd_send_request(struct gdma_context *gc, u32 req_len, const void *req, return mana_hwc_send_request(hwc, req_len, req, resp_len, resp); } +EXPORT_SYMBOL(mana_gd_send_request); int mana_gd_alloc_memory(struct gdma_context *gc, unsigned int length, struct gdma_mem_info *gmi) diff --git a/drivers/net/ethernet/microsoft/mana/mana.h b/drivers/net/ethernet/microsoft/mana/mana.h index 26f14fcb6a61..29e14ad8b930 100644 --- a/drivers/net/ethernet/microsoft/mana/mana.h +++ b/drivers/net/ethernet/microsoft/mana/mana.h @@ -568,6 +568,15 @@ struct mana_adev { struct gdma_dev *mdev; }; +int mana_create_wq_obj(struct mana_port_context *apc, + mana_handle_t vport, + u32 wq_type, struct mana_obj_spec *wq_spec, + struct mana_obj_spec *cq_spec, + mana_handle_t *wq_obj); + +void mana_destroy_wq_obj(struct mana_port_context *apc, u32 wq_type, + mana_handle_t wq_obj); + int mana_cfg_vport(struct mana_port_context *apc, u32 protection_dom_id, u32 doorbell_pg_id); void mana_uncfg_vport(struct mana_port_context *apc); diff --git a/drivers/net/ethernet/microsoft/mana/mana_en.c b/drivers/net/ethernet/microsoft/mana/mana_en.c index dc9fcb99e937..b4af85e81834 100644 --- a/drivers/net/ethernet/microsoft/mana/mana_en.c +++ b/drivers/net/ethernet/microsoft/mana/mana_en.c @@ -644,11 +644,11 @@ static int mana_cfg_vport_steering(struct mana_port_context *apc, return err; } -static int mana_create_wq_obj(struct mana_port_context *apc, - mana_handle_t vport, - u32 wq_type, struct mana_obj_spec *wq_spec, - struct mana_obj_spec *cq_spec, - mana_handle_t *wq_obj) +int mana_create_wq_obj(struct mana_port_context *apc, + mana_handle_t vport, + u32 wq_type, struct mana_obj_spec *wq_spec, + struct mana_obj_spec *cq_spec, + mana_handle_t *wq_obj) { struct mana_create_wqobj_resp resp = {}; struct mana_create_wqobj_req req = {}; @@ -697,9 +697,10 @@ static int mana_create_wq_obj(struct mana_port_context *apc, out: return err; } +EXPORT_SYMBOL_GPL(mana_create_wq_obj); -static void mana_destroy_wq_obj(struct mana_port_context *apc, u32 wq_type, - mana_handle_t wq_obj) +void mana_destroy_wq_obj(struct mana_port_context *apc, u32 wq_type, + mana_handle_t wq_obj) { struct mana_destroy_wqobj_resp resp = {}; struct mana_destroy_wqobj_req req = {}; @@ -724,6 +725,7 @@ static void mana_destroy_wq_obj(struct mana_port_context *apc, u32 wq_type, netdev_err(ndev, "Failed to destroy WQ object: %d, 0x%x\n", err, resp.hdr.status); } +EXPORT_SYMBOL_GPL(mana_destroy_wq_obj); static void mana_destroy_eq(struct mana_context *ac) { From patchwork Tue May 17 09:04:32 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Long Li X-Patchwork-Id: 12852160 X-Patchwork-Delegate: jgg@ziepe.ca Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id DA2C3C433FE for ; Tue, 17 May 2022 09:05:05 
+0000 (UTC)
From: longli@linuxonhyperv.com
To: "K. Y. Srinivasan", Haiyang Zhang, Stephen Hemminger, Wei Liu,
    Dexuan Cui, "David S. Miller", Jakub Kicinski, Paolo Abeni,
    Jason Gunthorpe, Leon Romanovsky
Cc: linux-hyperv@vger.kernel.org, netdev@vger.kernel.org,
    linux-kernel@vger.kernel.org, linux-rdma@vger.kernel.org, Long Li
Subject: [PATCH 08/12] net: mana: Record port number in netdev
Date: Tue, 17 May 2022 02:04:32 -0700
Message-Id: <1652778276-2986-9-git-send-email-longli@linuxonhyperv.com>
In-Reply-To: <1652778276-2986-1-git-send-email-longli@linuxonhyperv.com>
References: <1652778276-2986-1-git-send-email-longli@linuxonhyperv.com>
Reply-To: longli@microsoft.com

From: Long Li

The port number is useful for a user-mode application to identify this
net device based on its port index. Set it to the correct value in
ndev.
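User-space illustration (hypothetical, not part of this patch): the value
set in ndev->dev_port is exposed through sysfs, so an application can read
it to match a net device to an adapter port index.

#include <stdio.h>

static int read_dev_port(const char *ifname)
{
	char path[256];
	int port = -1;
	FILE *f;

	snprintf(path, sizeof(path), "/sys/class/net/%s/dev_port", ifname);
	f = fopen(path, "r");
	if (!f)
		return -1;
	if (fscanf(f, "%d", &port) != 1)
		port = -1;
	fclose(f);
	return port;
}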
Signed-off-by: Long Li --- drivers/net/ethernet/microsoft/mana/mana_en.c | 1 + 1 file changed, 1 insertion(+) diff --git a/drivers/net/ethernet/microsoft/mana/mana_en.c b/drivers/net/ethernet/microsoft/mana/mana_en.c index b4af85e81834..6bb38c90b008 100644 --- a/drivers/net/ethernet/microsoft/mana/mana_en.c +++ b/drivers/net/ethernet/microsoft/mana/mana_en.c @@ -1952,6 +1952,7 @@ static int mana_probe_port(struct mana_context *ac, int port_idx, ndev->max_mtu = ndev->mtu; ndev->min_mtu = ndev->mtu; ndev->needed_headroom = MANA_HEADROOM; + ndev->dev_port = port_idx; SET_NETDEV_DEV(ndev, gc->dev); netif_carrier_off(ndev);

From patchwork Tue May 17 09:04:33 2022
X-Patchwork-Submitter: Long Li
X-Patchwork-Id: 12852161
X-Patchwork-Delegate: jgg@ziepe.ca
From: longli@linuxonhyperv.com
To: "K. Y. Srinivasan", Haiyang Zhang, Stephen Hemminger, Wei Liu,
    Dexuan Cui, "David S. Miller", Jakub Kicinski, Paolo Abeni,
    Jason Gunthorpe, Leon Romanovsky
Cc: linux-hyperv@vger.kernel.org, netdev@vger.kernel.org,
    linux-kernel@vger.kernel.org, linux-rdma@vger.kernel.org, Long Li
Subject: [PATCH 09/12] net: mana: Move header files to a common location
Date: Tue, 17 May 2022 02:04:33 -0700
Message-Id: <1652778276-2986-10-git-send-email-longli@linuxonhyperv.com>
In-Reply-To: <1652778276-2986-1-git-send-email-longli@linuxonhyperv.com>
References: <1652778276-2986-1-git-send-email-longli@linuxonhyperv.com>
Reply-To: longli@microsoft.com

From: Long Li

In preparation for adding the MANA RDMA driver, move all the required
header files to a common location for use by both the Ethernet and RDMA
drivers.
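Illustration (hypothetical file, not part of this patch): after the move, a
driver outside drivers/net/ethernet/microsoft/ can pick up the shared
definitions directly from the common location used by this series.

#include <linux/mana/gdma.h>
#include <linux/mana/mana.h>

/* e.g. an RDMA driver can now reference GDMA/MANA types without a relative
 * include path into the Ethernet driver directory.
 */
static inline struct gdma_context *example_to_gdma_context(struct gdma_dev *gd)
{
	return gd->gdma_context;
}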
Signed-off-by: Long Li --- MAINTAINERS | 1 + drivers/net/ethernet/microsoft/mana/gdma_main.c | 2 +- drivers/net/ethernet/microsoft/mana/hw_channel.c | 4 ++-- drivers/net/ethernet/microsoft/mana/mana_bpf.c | 2 +- drivers/net/ethernet/microsoft/mana/mana_en.c | 2 +- drivers/net/ethernet/microsoft/mana/mana_ethtool.c | 2 +- drivers/net/ethernet/microsoft/mana/shm_channel.c | 2 +- {drivers/net/ethernet/microsoft => include/linux}/mana/gdma.h | 0 .../ethernet/microsoft => include/linux}/mana/hw_channel.h | 0 {drivers/net/ethernet/microsoft => include/linux}/mana/mana.h | 0 .../ethernet/microsoft => include/linux}/mana/shm_channel.h | 0 11 files changed, 8 insertions(+), 7 deletions(-) rename {drivers/net/ethernet/microsoft => include/linux}/mana/gdma.h (100%) rename {drivers/net/ethernet/microsoft => include/linux}/mana/hw_channel.h (100%) rename {drivers/net/ethernet/microsoft => include/linux}/mana/mana.h (100%) rename {drivers/net/ethernet/microsoft => include/linux}/mana/shm_channel.h (100%) diff --git a/MAINTAINERS b/MAINTAINERS index 40fa1955ca3f..268c68dc40dc 100644 --- a/MAINTAINERS +++ b/MAINTAINERS @@ -9108,6 +9108,7 @@ F: include/asm-generic/hyperv-tlfs.h F: include/asm-generic/mshyperv.h F: include/clocksource/hyperv_timer.h F: include/linux/hyperv.h +F: include/mana/ F: include/uapi/linux/hyperv.h F: net/vmw_vsock/hyperv_transport.c F: tools/hv/ diff --git a/drivers/net/ethernet/microsoft/mana/gdma_main.c b/drivers/net/ethernet/microsoft/mana/gdma_main.c index 9c93d7a403ea..96edf8491ebd 100644 --- a/drivers/net/ethernet/microsoft/mana/gdma_main.c +++ b/drivers/net/ethernet/microsoft/mana/gdma_main.c @@ -6,7 +6,7 @@ #include #include -#include "mana.h" +#include static u32 mana_gd_r32(struct gdma_context *g, u64 offset) { diff --git a/drivers/net/ethernet/microsoft/mana/hw_channel.c b/drivers/net/ethernet/microsoft/mana/hw_channel.c index 078d6a5a0768..609cd714dcc0 100644 --- a/drivers/net/ethernet/microsoft/mana/hw_channel.c +++ b/drivers/net/ethernet/microsoft/mana/hw_channel.c @@ -1,8 +1,8 @@ // SPDX-License-Identifier: GPL-2.0 OR BSD-3-Clause /* Copyright (c) 2021, Microsoft Corporation. 
*/ -#include "gdma.h" -#include "hw_channel.h" +#include +#include static int mana_hwc_get_msg_index(struct hw_channel_context *hwc, u16 *msg_id) { diff --git a/drivers/net/ethernet/microsoft/mana/mana_bpf.c b/drivers/net/ethernet/microsoft/mana/mana_bpf.c index 1d2f948b5c00..7476f21e5f37 100644 --- a/drivers/net/ethernet/microsoft/mana/mana_bpf.c +++ b/drivers/net/ethernet/microsoft/mana/mana_bpf.c @@ -8,7 +8,7 @@ #include #include -#include "mana.h" +#include void mana_xdp_tx(struct sk_buff *skb, struct net_device *ndev) { diff --git a/drivers/net/ethernet/microsoft/mana/mana_en.c b/drivers/net/ethernet/microsoft/mana/mana_en.c index 6bb38c90b008..928b14a7ee1f 100644 --- a/drivers/net/ethernet/microsoft/mana/mana_en.c +++ b/drivers/net/ethernet/microsoft/mana/mana_en.c @@ -11,7 +11,7 @@ #include #include -#include "mana.h" +#include static DEFINE_IDA(mana_adev_ida); diff --git a/drivers/net/ethernet/microsoft/mana/mana_ethtool.c b/drivers/net/ethernet/microsoft/mana/mana_ethtool.c index e13f2453eabb..c2ecb5154139 100644 --- a/drivers/net/ethernet/microsoft/mana/mana_ethtool.c +++ b/drivers/net/ethernet/microsoft/mana/mana_ethtool.c @@ -5,7 +5,7 @@ #include #include -#include "mana.h" +#include static const struct { char name[ETH_GSTRING_LEN]; diff --git a/drivers/net/ethernet/microsoft/mana/shm_channel.c b/drivers/net/ethernet/microsoft/mana/shm_channel.c index da255da62176..161a4e6ba32a 100644 --- a/drivers/net/ethernet/microsoft/mana/shm_channel.c +++ b/drivers/net/ethernet/microsoft/mana/shm_channel.c @@ -6,7 +6,7 @@ #include #include -#include "shm_channel.h" +#include #define PAGE_FRAME_L48_WIDTH_BYTES 6 #define PAGE_FRAME_L48_WIDTH_BITS (PAGE_FRAME_L48_WIDTH_BYTES * 8) diff --git a/drivers/net/ethernet/microsoft/mana/gdma.h b/include/linux/mana/gdma.h similarity index 100% rename from drivers/net/ethernet/microsoft/mana/gdma.h rename to include/linux/mana/gdma.h diff --git a/drivers/net/ethernet/microsoft/mana/hw_channel.h b/include/linux/mana/hw_channel.h similarity index 100% rename from drivers/net/ethernet/microsoft/mana/hw_channel.h rename to include/linux/mana/hw_channel.h diff --git a/drivers/net/ethernet/microsoft/mana/mana.h b/include/linux/mana/mana.h similarity index 100% rename from drivers/net/ethernet/microsoft/mana/mana.h rename to include/linux/mana/mana.h diff --git a/drivers/net/ethernet/microsoft/mana/shm_channel.h b/include/linux/mana/shm_channel.h similarity index 100% rename from drivers/net/ethernet/microsoft/mana/shm_channel.h rename to include/linux/mana/shm_channel.h From patchwork Tue May 17 09:04:34 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Long Li X-Patchwork-Id: 12852164 X-Patchwork-Delegate: jgg@ziepe.ca Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 5D50EC4167B for ; Tue, 17 May 2022 09:05:12 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S244250AbiEQJFI (ORCPT ); Tue, 17 May 2022 05:05:08 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:52960 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S244074AbiEQJEx (ORCPT ); Tue, 17 May 2022 05:04:53 -0400 Received: from linux.microsoft.com (linux.microsoft.com [13.77.154.182]) by lindbergh.monkeyblade.net (Postfix) with ESMTP id 581F83FBFA; Tue, 17 May 2022 
02:04:53 -0700 (PDT)
From: longli@linuxonhyperv.com
To: "K. Y. Srinivasan", Haiyang Zhang, Stephen Hemminger, Wei Liu,
    Dexuan Cui, "David S. Miller", Jakub Kicinski, Paolo Abeni,
    Jason Gunthorpe, Leon Romanovsky
Cc: linux-hyperv@vger.kernel.org, netdev@vger.kernel.org,
    linux-kernel@vger.kernel.org, linux-rdma@vger.kernel.org, Long Li
Subject: [PATCH 10/12] net: mana: Define max values for SGL entries
Date: Tue, 17 May 2022 02:04:34 -0700
Message-Id: <1652778276-2986-11-git-send-email-longli@linuxonhyperv.com>
In-Reply-To: <1652778276-2986-1-git-send-email-longli@linuxonhyperv.com>
References: <1652778276-2986-1-git-send-email-longli@linuxonhyperv.com>
Reply-To: longli@microsoft.com

From: Long Li

The maximum number of SGL entries should be computed from the maximum
WQE size for the intended queue type, with the corresponding OOB data
size. This guarantees the hardware queue can successfully queue
requests up to the queue depth exposed to the upper layer.

Signed-off-by: Long Li --- drivers/net/ethernet/microsoft/mana/mana_en.c | 2 +- include/linux/mana/gdma.h | 7 +++++++ include/linux/mana/mana.h | 4 +--- 3 files changed, 9 insertions(+), 4 deletions(-) diff --git a/drivers/net/ethernet/microsoft/mana/mana_en.c b/drivers/net/ethernet/microsoft/mana/mana_en.c index 928b14a7ee1f..6eb5eca5524d 100644 --- a/drivers/net/ethernet/microsoft/mana/mana_en.c +++ b/drivers/net/ethernet/microsoft/mana/mana_en.c @@ -187,7 +187,7 @@ int mana_start_xmit(struct sk_buff *skb, struct net_device *ndev) pkg.wqe_req.client_data_unit = 0; pkg.wqe_req.num_sge = 1 + skb_shinfo(skb)->nr_frags; - WARN_ON_ONCE(pkg.wqe_req.num_sge > 30); + WARN_ON_ONCE(pkg.wqe_req.num_sge > MAX_TX_WQE_SGL_ENTRIES); if (pkg.wqe_req.num_sge <= ARRAY_SIZE(pkg.sgl_array)) { pkg.wqe_req.sgl = pkg.sgl_array; diff --git a/include/linux/mana/gdma.h b/include/linux/mana/gdma.h index bc8cd9528937..d6a970118f4c 100644 --- a/include/linux/mana/gdma.h +++ b/include/linux/mana/gdma.h @@ -436,6 +436,13 @@ struct gdma_wqe { #define MAX_TX_WQE_SIZE 512 #define MAX_RX_WQE_SIZE 256 +#define MAX_TX_WQE_SGL_ENTRIES ((GDMA_MAX_SQE_SIZE - \ + sizeof(struct gdma_sge) - INLINE_OOB_SMALL_SIZE) / \ + sizeof(struct gdma_sge)) + +#define MAX_RX_WQE_SGL_ENTRIES ((GDMA_MAX_RQE_SIZE - \ + sizeof(struct gdma_sge)) / sizeof(struct gdma_sge)) + struct gdma_cqe { u32 cqe_data[GDMA_COMP_DATA_SIZE / 4]; diff --git a/include/linux/mana/mana.h b/include/linux/mana/mana.h index 29e14ad8b930..1cf77a03bff2 100644 --- a/include/linux/mana/mana.h +++ b/include/linux/mana/mana.h @@ -264,8 +264,6 @@ struct mana_cq { int budget; }; -#define GDMA_MAX_RQE_SGES 15 - struct mana_recv_buf_oob { /* A valid GDMA work request representing the data buffer. */ struct gdma_wqe_request wqe_req; @@ -275,7 +273,7 @@ struct mana_recv_buf_oob { /* SGL of the buffer going to be sent has part of the work request.
*/ u32 num_sge; - struct gdma_sge sgl[GDMA_MAX_RQE_SGES]; + struct gdma_sge sgl[MAX_RX_WQE_SGL_ENTRIES]; /* Required to store the result of mana_gd_post_work_request. * gdma_posted_wqe_info.wqe_size_in_bu is required for progressing the From patchwork Tue May 17 09:04:35 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Long Li X-Patchwork-Id: 12852162 X-Patchwork-Delegate: jgg@ziepe.ca Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 4C923C433F5 for ; Tue, 17 May 2022 09:05:10 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S244238AbiEQJFI (ORCPT ); Tue, 17 May 2022 05:05:08 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:52564 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S244028AbiEQJEy (ORCPT ); Tue, 17 May 2022 05:04:54 -0400 Received: from linux.microsoft.com (linux.microsoft.com [13.77.154.182]) by lindbergh.monkeyblade.net (Postfix) with ESMTP id 3BDC448E75; Tue, 17 May 2022 02:04:54 -0700 (PDT) Received: by linux.microsoft.com (Postfix, from userid 1004) id 23D9020F7230; Tue, 17 May 2022 02:04:54 -0700 (PDT) DKIM-Filter: OpenDKIM Filter v2.11.0 linux.microsoft.com 23D9020F7230 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linuxonhyperv.com; s=default; t=1652778294; bh=6RZGGvFWrSGpPbswEMH8aRsp+dxd1M1jw+kx4KN2h5k=; h=From:To:Cc:Subject:Date:In-Reply-To:References:Reply-To:From; b=ffSoI6UcZSsVkTrcMBtafR/4BW8d2IJSV1qpmR/cVAyh8+XyqnustFWw3s102uobi 9vn163S4vzn/da5NOz55QBGKgS+DuMgznMryri5Dr8dallaGRZWRSmeYYoE7gPcEpJ vhoJWRmHcz3dEYDB6MZ+DB/H+bROUQssHS3i24OM= From: longli@linuxonhyperv.com To: "K. Y. Srinivasan" , Haiyang Zhang , Stephen Hemminger , Wei Liu , Dexuan Cui , "David S. Miller" , Jakub Kicinski , Paolo Abeni , Jason Gunthorpe , Leon Romanovsky Cc: linux-hyperv@vger.kernel.org, netdev@vger.kernel.org, linux-kernel@vger.kernel.org, linux-rdma@vger.kernel.org, Long Li Subject: [PATCH 11/12] net: mana: Define and process GDMA response code GDMA_STATUS_MORE_ENTRIES Date: Tue, 17 May 2022 02:04:35 -0700 Message-Id: <1652778276-2986-12-git-send-email-longli@linuxonhyperv.com> X-Mailer: git-send-email 1.8.3.1 In-Reply-To: <1652778276-2986-1-git-send-email-longli@linuxonhyperv.com> References: <1652778276-2986-1-git-send-email-longli@linuxonhyperv.com> Reply-To: longli@microsoft.com Precedence: bulk List-ID: X-Mailing-List: linux-rdma@vger.kernel.org From: Long Li When doing memory registration, the PF may respond with GDMA_STATUS_MORE_ENTRIES to indicate that a follow-up request is needed. This is not an error and should be processed as expected.
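
The flow that produces this status looks roughly like the sketch below: a DMA region whose page list does not fit in a single hardware-channel message is created with one GDMA_CREATE_DMA_REGION request, and the remaining pages are supplied by GDMA_DMA_REGION_ADD_PAGES requests; every ADD_PAGES response except the last one is expected to carry GDMA_STATUS_MORE_ENTRIES. The snippet is illustrative only (expected_add_pages_status() is a made-up helper, not part of the driver); the constant value comes from this patch.

#include <stdint.h>
#include <stdio.h>

/* Added to include/linux/mana/gdma.h by this patch. */
#define GDMA_STATUS_MORE_ENTRIES 0x00000105u

/*
 * For a registration split into n_msgs ADD_PAGES messages, every response
 * except the last is expected to report MORE_ENTRIES; only the final
 * response reports 0 (success, nothing left to add).
 */
static uint32_t expected_add_pages_status(unsigned int msg_idx,
					  unsigned int n_msgs)
{
	return (msg_idx + 1 < n_msgs) ? GDMA_STATUS_MORE_ENTRIES : 0;
}

int main(void)
{
	unsigned int n_msgs = 3;
	unsigned int i;

	for (i = 0; i < n_msgs; i++)
		printf("ADD_PAGES message %u: expected status 0x%x\n",
		       i, expected_add_pages_status(i, n_msgs));
	return 0;
}

This is why the hardware-channel completion path in the diff below no longer treats a non-zero status as a failure when that status is GDMA_STATUS_MORE_ENTRIES.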
Signed-off-by: Long Li --- drivers/net/ethernet/microsoft/mana/hw_channel.c | 2 +- include/linux/mana/gdma.h | 2 ++ 2 files changed, 3 insertions(+), 1 deletion(-) diff --git a/drivers/net/ethernet/microsoft/mana/hw_channel.c b/drivers/net/ethernet/microsoft/mana/hw_channel.c index 609cd714dcc0..a80c14676c75 100644 --- a/drivers/net/ethernet/microsoft/mana/hw_channel.c +++ b/drivers/net/ethernet/microsoft/mana/hw_channel.c @@ -820,7 +820,7 @@ int mana_hwc_send_request(struct hw_channel_context *hwc, u32 req_len, goto out; } - if (ctx->status_code) { + if (ctx->status_code && ctx->status_code != GDMA_STATUS_MORE_ENTRIES) { dev_err(hwc->dev, "HWC: Failed hw_channel req: 0x%x\n", ctx->status_code); err = -EPROTO; diff --git a/include/linux/mana/gdma.h b/include/linux/mana/gdma.h index d6a970118f4c..d40f1dffca5c 100644 --- a/include/linux/mana/gdma.h +++ b/include/linux/mana/gdma.h @@ -9,6 +9,8 @@ #include "shm_channel.h" +#define GDMA_STATUS_MORE_ENTRIES ((u32)0x00000105L) + /* Structures labeled with "HW DATA" are exchanged with the hardware. All of * them are naturally aligned and hence don't need __packed. */ From patchwork Tue May 17 09:04:36 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Long Li X-Patchwork-Id: 12852163 X-Patchwork-Delegate: jgg@ziepe.ca Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id DC18FC433FE for ; Tue, 17 May 2022 09:05:08 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S244206AbiEQJFG (ORCPT ); Tue, 17 May 2022 05:05:06 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:53214 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S244156AbiEQJFB (ORCPT ); Tue, 17 May 2022 05:05:01 -0400 Received: from linux.microsoft.com (linux.microsoft.com [13.77.154.182]) by lindbergh.monkeyblade.net (Postfix) with ESMTP id 467DE49243; Tue, 17 May 2022 02:04:55 -0700 (PDT) Received: by linux.microsoft.com (Postfix, from userid 1004) id 1311020F7234; Tue, 17 May 2022 02:04:55 -0700 (PDT) DKIM-Filter: OpenDKIM Filter v2.11.0 linux.microsoft.com 1311020F7234 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linuxonhyperv.com; s=default; t=1652778295; bh=Ebyp1CS2eIB2LwzQbU/+XADNWhULVBHblvKjTztu9ps=; h=From:To:Cc:Subject:Date:In-Reply-To:References:Reply-To:From; b=aMmyerRaWZjB3hMUkbqYqk1WrTVgD9FQKknM6xBpH0IwG0clPWuPhTrn/Pe/BrTPj OgroyDXQDFr7YXw+NPBhQfoI+CWJ6z4QKUmL7A6oAwFcd/Pu+vlif/qUN2ufSGmojL jCz9d5mbW+PqLERxcAR3rg+20sUgF7PubUHBJ4c0= From: longli@linuxonhyperv.com To: "K. Y. Srinivasan" , Haiyang Zhang , Stephen Hemminger , Wei Liu , Dexuan Cui , "David S. 
Miller" , Jakub Kicinski , Paolo Abeni , Jason Gunthorpe , Leon Romanovsky Cc: linux-hyperv@vger.kernel.org, netdev@vger.kernel.org, linux-kernel@vger.kernel.org, linux-rdma@vger.kernel.org, Long Li Subject: [PATCH 12/12] RDMA/mana_ib: Add a driver for Microsoft Azure Network Adapter Date: Tue, 17 May 2022 02:04:36 -0700 Message-Id: <1652778276-2986-13-git-send-email-longli@linuxonhyperv.com> X-Mailer: git-send-email 1.8.3.1 In-Reply-To: <1652778276-2986-1-git-send-email-longli@linuxonhyperv.com> References: <1652778276-2986-1-git-send-email-longli@linuxonhyperv.com> Reply-To: longli@microsoft.com Precedence: bulk List-ID: X-Mailing-List: linux-rdma@vger.kernel.org From: Long Li Add a RDMA VF driver for Microsoft Azure Network Adapter (MANA). Signed-off-by: Long Li Reported-by: kernel test robot Reported-by: kernel test robot Reported-by: kernel test robot --- MAINTAINERS | 3 + drivers/infiniband/Kconfig | 1 + drivers/infiniband/hw/Makefile | 1 + drivers/infiniband/hw/mana/Kconfig | 7 + drivers/infiniband/hw/mana/Makefile | 4 + drivers/infiniband/hw/mana/cq.c | 74 +++ drivers/infiniband/hw/mana/main.c | 679 ++++++++++++++++++++++++ drivers/infiniband/hw/mana/mana_ib.h | 145 +++++ drivers/infiniband/hw/mana/mr.c | 133 +++++ drivers/infiniband/hw/mana/qp.c | 466 ++++++++++++++++ drivers/infiniband/hw/mana/wq.c | 111 ++++ include/linux/mana/mana.h | 3 + include/uapi/rdma/ib_user_ioctl_verbs.h | 1 + include/uapi/rdma/mana-abi.h | 68 +++ 14 files changed, 1696 insertions(+) create mode 100644 drivers/infiniband/hw/mana/Kconfig create mode 100644 drivers/infiniband/hw/mana/Makefile create mode 100644 drivers/infiniband/hw/mana/cq.c create mode 100644 drivers/infiniband/hw/mana/main.c create mode 100644 drivers/infiniband/hw/mana/mana_ib.h create mode 100644 drivers/infiniband/hw/mana/mr.c create mode 100644 drivers/infiniband/hw/mana/qp.c create mode 100644 drivers/infiniband/hw/mana/wq.c create mode 100644 include/uapi/rdma/mana-abi.h diff --git a/MAINTAINERS b/MAINTAINERS index 268c68dc40dc..5185532c0fd2 100644 --- a/MAINTAINERS +++ b/MAINTAINERS @@ -9078,6 +9078,7 @@ M: Haiyang Zhang M: Stephen Hemminger M: Wei Liu M: Dexuan Cui +M: Long Li L: linux-hyperv@vger.kernel.org S: Supported T: git git://git.kernel.org/pub/scm/linux/kernel/git/hyperv/linux.git @@ -9095,6 +9096,7 @@ F: arch/x86/kernel/cpu/mshyperv.c F: drivers/clocksource/hyperv_timer.c F: drivers/hid/hid-hyperv.c F: drivers/hv/ +F: drivers/infiniband/hw/mana/ F: drivers/input/serio/hyperv-keyboard.c F: drivers/iommu/hyperv-iommu.c F: drivers/net/ethernet/microsoft/ @@ -9110,6 +9112,7 @@ F: include/clocksource/hyperv_timer.h F: include/linux/hyperv.h F: include/mana/ F: include/uapi/linux/hyperv.h +F: include/uapi/rdma/mana-abi.h F: net/vmw_vsock/hyperv_transport.c F: tools/hv/ diff --git a/drivers/infiniband/Kconfig b/drivers/infiniband/Kconfig index 33d3ce9c888e..a062c662ecff 100644 --- a/drivers/infiniband/Kconfig +++ b/drivers/infiniband/Kconfig @@ -83,6 +83,7 @@ source "drivers/infiniband/hw/qib/Kconfig" source "drivers/infiniband/hw/cxgb4/Kconfig" source "drivers/infiniband/hw/efa/Kconfig" source "drivers/infiniband/hw/irdma/Kconfig" +source "drivers/infiniband/hw/mana/Kconfig" source "drivers/infiniband/hw/mlx4/Kconfig" source "drivers/infiniband/hw/mlx5/Kconfig" source "drivers/infiniband/hw/ocrdma/Kconfig" diff --git a/drivers/infiniband/hw/Makefile b/drivers/infiniband/hw/Makefile index fba0b3be903e..f62e9e00c780 100644 --- a/drivers/infiniband/hw/Makefile +++ b/drivers/infiniband/hw/Makefile @@ -4,6 +4,7 @@ 
obj-$(CONFIG_INFINIBAND_QIB) += qib/ obj-$(CONFIG_INFINIBAND_CXGB4) += cxgb4/ obj-$(CONFIG_INFINIBAND_EFA) += efa/ obj-$(CONFIG_INFINIBAND_IRDMA) += irdma/ +obj-$(CONFIG_MANA_INFINIBAND) += mana/ obj-$(CONFIG_MLX4_INFINIBAND) += mlx4/ obj-$(CONFIG_MLX5_INFINIBAND) += mlx5/ obj-$(CONFIG_INFINIBAND_OCRDMA) += ocrdma/ diff --git a/drivers/infiniband/hw/mana/Kconfig b/drivers/infiniband/hw/mana/Kconfig new file mode 100644 index 000000000000..b3ff03a23257 --- /dev/null +++ b/drivers/infiniband/hw/mana/Kconfig @@ -0,0 +1,7 @@ +# SPDX-License-Identifier: GPL-2.0-only +config MANA_INFINIBAND + tristate "Microsoft Azure Network Adapter support" + depends on NETDEVICES && ETHERNET && PCI && MICROSOFT_MANA + help + This driver provides low-level RDMA support for + Microsoft Azure Network Adapter (MANA). diff --git a/drivers/infiniband/hw/mana/Makefile b/drivers/infiniband/hw/mana/Makefile new file mode 100644 index 000000000000..a799fe264c5a --- /dev/null +++ b/drivers/infiniband/hw/mana/Makefile @@ -0,0 +1,4 @@ +# SPDX-License-Identifier: GPL-2.0-only +obj-$(CONFIG_MANA_INFINIBAND) += mana_ib.o + +mana_ib-y := main.o wq.o qp.o cq.o mr.o diff --git a/drivers/infiniband/hw/mana/cq.c b/drivers/infiniband/hw/mana/cq.c new file mode 100644 index 000000000000..0eac77c97658 --- /dev/null +++ b/drivers/infiniband/hw/mana/cq.c @@ -0,0 +1,74 @@ +// SPDX-License-Identifier: GPL-2.0 OR Linux-OpenIB +/* + * Copyright (c) 2022, Microsoft Corporation. All rights reserved. + */ + +#include "mana_ib.h" + +int mana_ib_create_cq(struct ib_cq *ibcq, const struct ib_cq_init_attr *attr, + struct ib_udata *udata) +{ + struct mana_ib_create_cq ucmd = {}; + struct ib_device *ibdev = ibcq->device; + struct mana_ib_dev *mdev = + container_of(ibdev, struct mana_ib_dev, ib_dev); + struct mana_ib_cq *cq = container_of(ibcq, struct mana_ib_cq, ibcq); + int err; + + err = ib_copy_from_udata(&ucmd, udata, min(sizeof(ucmd), udata->inlen)); + if (err) { + pr_err("Failed to copy from udata for create cq, %d\n", err); + return -EFAULT; + } + + if (attr->cqe > MAX_SEND_BUFFERS_PER_QUEUE) { + pr_err("CQE %d exceeding limit\n", attr->cqe); + return -EINVAL; + } + cq->cqe = attr->cqe; + + pr_debug("ucmd buf_addr 0x%llx\n", ucmd.buf_addr); + + cq->umem = ib_umem_get(ibdev, ucmd.buf_addr, + cq->cqe * COMP_ENTRY_SIZE, + IB_ACCESS_LOCAL_WRITE); + if (IS_ERR(cq->umem)) { + err = PTR_ERR(cq->umem); + pr_err("Failed to get umem for create cq, err %d\n", err); + return err; + } + + err = mana_ib_gd_create_dma_region(mdev, cq->umem, &cq->gdma_region, + PAGE_SIZE); + if (err) { + pr_err("Failed to create dma region for create cq, %d\n", err); + goto err_release_umem; + } + + pr_debug("%s: mana_ib_gd_create_dma_region ret %d gdma_region 0x%llx\n", + __func__, err, cq->gdma_region); + + /* + * The CQ ID is not known at this time + * The ID is generated at create_qp + */ + + return 0; + +err_release_umem: + ib_umem_release(cq->umem); + return err; +} + +int mana_ib_destroy_cq(struct ib_cq *ibcq, struct ib_udata *udata) +{ + struct mana_ib_cq *cq = container_of(ibcq, struct mana_ib_cq, ibcq); + struct ib_device *ibdev = ibcq->device; + struct mana_ib_dev *mdev = + container_of(ibdev, struct mana_ib_dev, ib_dev); + + mana_ib_gd_destroy_dma_region(mdev, cq->gdma_region); + ib_umem_release(cq->umem); + + return 0; +} diff --git a/drivers/infiniband/hw/mana/main.c b/drivers/infiniband/hw/mana/main.c new file mode 100644 index 000000000000..e288495e3ede --- /dev/null +++ b/drivers/infiniband/hw/mana/main.c @@ -0,0 +1,679 @@ +// SPDX-License-Identifier: 
GPL-2.0 OR Linux-OpenIB +/* + * Copyright (c) 2022, Microsoft Corporation. All rights reserved. + */ + +#include "mana_ib.h" + +MODULE_DESCRIPTION("Microsoft Azure Network Adapter IB driver"); +MODULE_LICENSE("Dual BSD/GPL"); + +static const struct auxiliary_device_id mana_id_table[] = { + { .name = "mana.rdma", }, + {}, +}; + +MODULE_DEVICE_TABLE(auxiliary, mana_id_table); + +void mana_ib_uncfg_vport(struct mana_ib_dev *dev, + struct mana_ib_pd *pd, u32 port) +{ + struct gdma_dev *gd = dev->gdma_dev; + struct mana_context *mc = gd->driver_data; + struct net_device *ndev; + struct mana_port_context *mpc; + + ndev = mc->ports[port]; + mpc = netdev_priv(ndev); + + if (atomic_dec_and_test(&pd->vport_use_count)) + mana_uncfg_vport(mpc); +} + +int mana_ib_cfg_vport(struct mana_ib_dev *dev, u32 port, struct mana_ib_pd *pd, + u32 doorbell_id) +{ + struct gdma_dev *mdev = dev->gdma_dev; + struct mana_context *mc = mdev->driver_data; + struct net_device *ndev = mc->ports[port]; + struct mana_port_context *mpc = netdev_priv(ndev); + + int err; + + if (atomic_inc_return(&pd->vport_use_count) > 1) { + pr_debug("Skip as this PD is already configured vport\n"); + return 0; + } + + err = mana_cfg_vport(mpc, pd->pdn, doorbell_id); + if (err) { + pr_err("mana_cfg_vport err %d\n", err); + atomic_dec(&pd->vport_use_count); + return err; + } + + pd->tx_shortform_allowed = mpc->tx_shortform_allowed; + pd->tx_vp_offset = mpc->tx_vp_offset; + + pr_debug("vport handle %llx pdid %x doorbell_id %x " + "tx_shortform_allowed %d tx_vp_offset %u\n", + mpc->port_handle, pd->pdn, doorbell_id, + pd->tx_shortform_allowed, pd->tx_vp_offset); + + return 0; +} + +static int mana_ib_alloc_pd(struct ib_pd *ibpd, struct ib_udata *udata) +{ + struct mana_ib_pd *pd = container_of(ibpd, struct mana_ib_pd, ibpd); + struct ib_device *ibdev = ibpd->device; + struct mana_ib_dev *dev = + container_of(ibdev, struct mana_ib_dev, ib_dev); + + int ret; + enum gdma_pd_flags flags = 0; + + // Set flags if this is a kernel request + if (ibpd->uobject == NULL) + flags = GDMA_PD_FLAG_ALLOW_GPA_MR | GDMA_PD_FLAG_ALLOW_FMR_MR; + + ret = mana_ib_gd_create_pd(dev, &pd->pd_handle, &pd->pdn, flags); + if (ret) + pr_err("Failed to get pd id, err %d\n", ret); + + return ret; +} + +static int mana_ib_dealloc_pd(struct ib_pd *ibpd, struct ib_udata *udata) +{ + struct mana_ib_pd *pd = container_of(ibpd, struct mana_ib_pd, ibpd); + struct ib_device *ibdev = ibpd->device; + struct mana_ib_dev *dev = container_of( + ibdev, struct mana_ib_dev, ib_dev); + + return mana_ib_gd_destroy_pd(dev, pd->pd_handle); +} + +static int mana_ib_alloc_ucontext(struct ib_ucontext *ibcontext, + struct ib_udata *udata) +{ + struct mana_ib_ucontext *ucontext = + container_of(ibcontext, struct mana_ib_ucontext, ibucontext); + struct ib_device *ibdev = ibcontext->device; + struct mana_ib_dev *mdev = + container_of(ibdev, struct mana_ib_dev, ib_dev); + struct gdma_dev *dev = mdev->gdma_dev; + struct gdma_context *gc = dev->gdma_context; + int doorbell_page; + int ret; + + // Allocate a doorbell page index + ret = mana_gd_allocate_doorbell_page(gc, &doorbell_page); + if (ret) { + pr_err("Failed to allocate doorbell page %d\n", ret); + return -ENOMEM; + } + + pr_debug("Doorbell page allocated %d\n", doorbell_page); + + ucontext->doorbell = doorbell_page; + + return 0; +} + +static void mana_ib_dealloc_ucontext(struct ib_ucontext *ibcontext) +{ + struct mana_ib_ucontext *mana_ucontext = + container_of(ibcontext, struct mana_ib_ucontext, ibucontext); + struct ib_device *ibdev = 
ibcontext->device; + struct mana_ib_dev *mdev = + container_of(ibdev, struct mana_ib_dev, ib_dev); + struct gdma_context *gc = mdev->gdma_dev->gdma_context; + int ret; + + ret = mana_gd_destroy_doorbell_page(gc, mana_ucontext->doorbell); + if (ret) + pr_err("Failed to destroy doorbell page %d\n", ret); +} + +static inline enum atb_page_size mana_ib_get_atb_page_size(u64 page_sz) +{ + int pos = 0; + + page_sz = (page_sz >> 12); //start with 4k + + while (page_sz) { + pos++; + page_sz = (page_sz >> 1); + } + return (enum atb_page_size)(pos - 1); +} + +static int _mana_ib_gd_create_dma_region(struct mana_ib_dev *dev, + const dma_addr_t *page_addr_array, + size_t num_pages_total, + u64 address, u64 length, + mana_handle_t *gdma_region, + u64 page_sz) +{ + struct gdma_dev *mdev = dev->gdma_dev; + struct gdma_context *gc = mdev->gdma_context; + struct hw_channel_context *hwc = gc->hwc.driver_data; + size_t num_pages_cur, num_pages_to_handle; + unsigned int create_req_msg_size; + unsigned int i; + struct gdma_dma_region_add_pages_req *add_req = NULL; + int err; + + struct gdma_create_dma_region_req *create_req; + struct gdma_create_dma_region_resp create_resp = {}; + + size_t max_pgs_create_cmd = (hwc->max_req_msg_size - + sizeof(*create_req)) / sizeof(u64); + + num_pages_to_handle = min_t(size_t, num_pages_total, + max_pgs_create_cmd); + create_req_msg_size = struct_size(create_req, page_addr_list, + num_pages_to_handle); + + create_req = kzalloc(create_req_msg_size, GFP_KERNEL); + if (!create_req) + return -ENOMEM; + + mana_gd_init_req_hdr(&create_req->hdr, GDMA_CREATE_DMA_REGION, + create_req_msg_size, sizeof(create_resp)); + + create_req->length = length; + create_req->offset_in_page = address & (page_sz - 1); + create_req->gdma_page_type = mana_ib_get_atb_page_size(page_sz); + create_req->page_count = num_pages_total; + create_req->page_addr_list_len = num_pages_to_handle; + + pr_debug("size_dma_region %llu num_pages_total %lu, " + "page_sz 0x%llx offset_in_page %u\n", + length, num_pages_total, page_sz, create_req->offset_in_page); + + pr_debug("num_pages_to_handle %lu, gdma_page_type %u", + num_pages_to_handle, create_req->gdma_page_type); + + for (i = 0; i < num_pages_to_handle; ++i) { + dma_addr_t cur_addr = page_addr_array[i]; + + create_req->page_addr_list[i] = cur_addr; + + pr_debug("page num %u cur_addr 0x%llx\n", i, cur_addr); + } + + err = mana_gd_send_request(gc, create_req_msg_size, create_req, + sizeof(create_resp), &create_resp); + kfree(create_req); + + if (err || create_resp.hdr.status) { + dev_err(gc->dev, "Failed to create DMA region: %d, 0x%x\n", + err, create_resp.hdr.status); + goto error; + } + + *gdma_region = create_resp.dma_region_handle; + pr_debug("Created DMA region with handle 0x%llx\n", *gdma_region); + + num_pages_cur = num_pages_to_handle; + + if (num_pages_cur < num_pages_total) { + + unsigned int add_req_msg_size; + size_t max_pgs_add_cmd = (hwc->max_req_msg_size - + sizeof(*add_req)) / sizeof(u64); + + num_pages_to_handle = min_t(size_t, + num_pages_total - num_pages_cur, + max_pgs_add_cmd); + + // Calculate the max num of pages that will be handled + add_req_msg_size = struct_size(add_req, page_addr_list, + num_pages_to_handle); + + add_req = kmalloc(add_req_msg_size, GFP_KERNEL); + if (!add_req) { + err = -ENOMEM; + goto error; + } + + while (num_pages_cur < num_pages_total) { + struct gdma_general_resp add_resp = {}; + u32 expected_status; + int expected_ret; + + if (num_pages_cur + num_pages_to_handle < + num_pages_total) { + // This value means that 
more pages are needed + expected_status = GDMA_STATUS_MORE_ENTRIES; + expected_ret = 0x0; + } else { + expected_status = 0x0; + expected_ret = 0x0; + } + + memset(add_req, 0, add_req_msg_size); + + mana_gd_init_req_hdr(&add_req->hdr, + GDMA_DMA_REGION_ADD_PAGES, + add_req_msg_size, + sizeof(add_resp)); + add_req->dma_region_handle = *gdma_region; + add_req->page_addr_list_len = num_pages_to_handle; + + for (i = 0; i < num_pages_to_handle; ++i) { + dma_addr_t cur_addr = + page_addr_array[num_pages_cur + i]; + + add_req->page_addr_list[i] = cur_addr; + + pr_debug("page_addr_list %lu addr 0x%llx\n", + num_pages_cur + i, cur_addr); + } + + err = mana_gd_send_request(gc, add_req_msg_size, + add_req, sizeof(add_resp), + &add_resp); + if (err != expected_ret || + add_resp.hdr.status != expected_status) { + dev_err(gc->dev, + "Failed to put DMA pages %u: %d,0x%x\n", + i, err, add_resp.hdr.status); + err = -EPROTO; + goto free_req; + } + + num_pages_cur += num_pages_to_handle; + num_pages_to_handle = min_t(size_t, + num_pages_total - + num_pages_cur, + max_pgs_add_cmd); + add_req_msg_size = sizeof(*add_req) + + num_pages_to_handle * sizeof(u64); + } +free_req: + kfree(add_req); + } + +error: + return err; +} + + +int mana_ib_gd_create_dma_region(struct mana_ib_dev *dev, struct ib_umem *umem, + mana_handle_t *dma_region_handle, u64 page_sz) +{ + size_t num_pages = ib_umem_num_dma_blocks(umem, page_sz); + struct ib_block_iter biter; + dma_addr_t *page_addr_array; + unsigned int i = 0; + int err; + + pr_debug("num pages %lu umem->address 0x%lx\n", + num_pages, umem->address); + + page_addr_array = kmalloc_array(num_pages, + sizeof(*page_addr_array), GFP_KERNEL); + if (!page_addr_array) + return -ENOMEM; + + rdma_umem_for_each_dma_block(umem, &biter, page_sz) + page_addr_array[i++] = rdma_block_iter_dma_address(&biter); + + err = _mana_ib_gd_create_dma_region(dev, page_addr_array, num_pages, + umem->address, umem->length, + dma_region_handle, page_sz); + + kfree(page_addr_array); + + return err; +} + +int mana_ib_gd_destroy_dma_region(struct mana_ib_dev *dev, u64 gdma_region) +{ + struct gdma_dev *mdev = dev->gdma_dev; + struct gdma_context *gc = mdev->gdma_context; + + pr_debug("%s: destroy dma region 0x%llx\n", __func__, gdma_region); + + return mana_gd_destroy_dma_region(gc, gdma_region); +} + +int mana_ib_gd_create_pd(struct mana_ib_dev *dev, u64 *pd_handle, u32 *pd_id, + enum gdma_pd_flags flags) +{ + struct gdma_dev *mdev = dev->gdma_dev; + struct gdma_context *gc = mdev->gdma_context; + int err; + + struct gdma_create_pd_req req = {}; + struct gdma_create_pd_resp resp = {}; + + mana_gd_init_req_hdr(&req.hdr, GDMA_CREATE_PD, + sizeof(req), sizeof(resp)); + + req.flags = flags; + err = mana_gd_send_request(gc, sizeof(req), &req, sizeof(resp), &resp); + + if (!err && !resp.hdr.status) { + *pd_handle = resp.pd_handle; + *pd_id = resp.pd_id; + pr_debug("pd_handle 0x%llx pd_id %d\n", *pd_handle, *pd_id); + } else { + pr_err("Failed to get pd_id err %d status %u\n", + err, resp.hdr.status); + if (!err) + err = -EPROTO; + } + return err; +} + +int mana_ib_gd_destroy_pd(struct mana_ib_dev *dev, u64 pd_handle) +{ + struct gdma_dev *mdev = dev->gdma_dev; + struct gdma_context *gc = mdev->gdma_context; + int err; + + struct gdma_destroy_pd_req req = {}; + struct gdma_destory_pd_resp resp = {}; + + mana_gd_init_req_hdr(&req.hdr, GDMA_DESTROY_PD, + sizeof(req), sizeof(resp)); + + req.pd_handle = pd_handle; + err = mana_gd_send_request(gc, sizeof(req), &req, sizeof(resp), &resp); + + if (err || 
resp.hdr.status) { + pr_err("Failed to destroy pd_handle 0x%llx err %d status %u", + pd_handle, err, resp.hdr.status); + if (!err) + err = -EPROTO; + } + + return err; +} + +int mana_ib_gd_create_mr(struct mana_ib_dev *dev, struct mana_ib_mr *mr, + struct gdma_create_mr_params *mr_params) +{ + struct gdma_dev *mdev = dev->gdma_dev; + struct gdma_context *gc = mdev->gdma_context; + int err; + + struct gdma_create_mr_request req = {}; + struct gdma_create_mr_response resp = {}; + + mana_gd_init_req_hdr(&req.hdr, GDMA_CREATE_MR, + sizeof(req), sizeof(resp)); + req.pd_handle = mr_params->pd_handle; + + switch (mr_params->mr_type) { + case GDMA_MR_TYPE_GVA: + req.mr_type = GDMA_MR_TYPE_GVA; + req.gva.dma_region_handle = mr_params->gva.dma_region_handle; + req.gva.virtual_address = mr_params->gva.virtual_address; + req.gva.access_flags = mr_params->gva.access_flags; + break; + + case GDMA_MR_TYPE_GPA: + req.mr_type = GDMA_MR_TYPE_GPA; + req.gpa.access_flags = mr_params->gpa.access_flags; + break; + + case GDMA_MR_TYPE_FMR: + req.mr_type = GDMA_MR_TYPE_FMR; + req.fmr.page_size = mr_params->fmr.page_size; + req.fmr.reserved_pte_count = mr_params->fmr.reserved_pte_count; + break; + + default: + pr_warn("invalid param (GDMA_MR_TYPE) passed, " + "req.mr_type %d\n", req.mr_type); + err = -EINVAL; + goto error; + } + + + err = mana_gd_send_request(gc, sizeof(req), &req, sizeof(resp), &resp); + + if (err || resp.hdr.status) { + pr_err("Failed to create mr %d, %u", err, resp.hdr.status); + goto error; + } + + mr->ibmr.lkey = resp.lkey; + mr->ibmr.rkey = resp.rkey; + mr->mr_handle = resp.mr_handle; + + return 0; +error: + return err; +} + +int mana_ib_gd_destroy_mr(struct mana_ib_dev *dev, gdma_obj_handle_t mr_handle) +{ + struct gdma_dev *mdev = dev->gdma_dev; + struct gdma_context *gc = mdev->gdma_context; + int err; + + struct gdma_destroy_mr_response resp = {}; + struct gdma_destroy_mr_request req = {}; + + mana_gd_init_req_hdr(&req.hdr, GDMA_DESTROY_MR, + sizeof(req), sizeof(resp)); + + req.mr_handle = mr_handle; + + err = mana_gd_send_request(gc, sizeof(req), &req, sizeof(resp), &resp); + if (err || resp.hdr.status) { + dev_err(gc->dev, "Failed to destroy MR: %d, 0x%x\n", err, + resp.hdr.status); + if (!err) + err = -EPROTO; + return err; + } + + return 0; +} + + +static int mana_ib_mmap(struct ib_ucontext *ibcontext, struct vm_area_struct *vma) +{ + struct mana_ib_ucontext *mana_ucontext = + container_of(ibcontext, struct mana_ib_ucontext, ibucontext); + struct ib_device *ibdev = ibcontext->device; + struct mana_ib_dev *mdev = + container_of(ibdev, struct mana_ib_dev, ib_dev); + struct gdma_context *gc = mdev->gdma_dev->gdma_context; + pgprot_t prot; + phys_addr_t pfn; + int ret; + + // map to the page indexed by ucontext->doorbell + pfn = (gc->phys_db_page_base + + gc->db_page_size * mana_ucontext->doorbell) >> PAGE_SHIFT; + prot = pgprot_writecombine(vma->vm_page_prot); + + ret = rdma_user_mmap_io(ibcontext, vma, pfn, gc->db_page_size, + prot, NULL); + if (ret) { + pr_err("can't rdma_user_mmap_io ret %d\n", ret); + } else + pr_debug("mapped I/O pfn 0x%llx page_size %u, ret %d\n", + pfn, gc->db_page_size, ret); + + return ret; +} + +static int mana_ib_get_port_immutable(struct ib_device *ibdev, u32 port_num, + struct ib_port_immutable *immutable) +{ + /* + * This version only support RAW_PACKET + * other values need to be filled for other types + */ + immutable->core_cap_flags = RDMA_CORE_PORT_RAW_PACKET; + + return 0; +} + +static int mana_ib_query_device(struct ib_device *ibdev, + struct 
ib_device_attr *props, + struct ib_udata *uhw) +{ + props->max_qp = MANA_MAX_NUM_QUEUES; + props->max_qp_wr = MAX_SEND_BUFFERS_PER_QUEUE; + + /* + * max_cqe could be potentially much bigger. + * As this version of driver only support RAW QP, set it to the same + * value as max_qp_wr + */ + props->max_cqe = MAX_SEND_BUFFERS_PER_QUEUE; + + props->max_mr_size = MANA_IB_MAX_MR_SIZE; + props->max_mr = INT_MAX; + props->max_send_sge = MAX_TX_WQE_SGL_ENTRIES; + props->max_recv_sge = MAX_RX_WQE_SGL_ENTRIES; + + return 0; +} + +int mana_ib_query_port(struct ib_device *ibdev, u32 port, + struct ib_port_attr *props) +{ + /* This version doesn't return port properties */ + return 0; +} + +static int mana_ib_query_gid(struct ib_device *ibdev, u32 port, int index, + union ib_gid *gid) +{ + /* This version doesn't return GID properties */ + return 0; +} + +static void mana_ib_disassociate_ucontext(struct ib_ucontext *ibcontext) +{ +} + +static const struct ib_device_ops mana_ib_dev_ops = { + .owner = THIS_MODULE, + .driver_id = RDMA_DRIVER_MANA, + .uverbs_abi_ver = MANA_IB_UVERBS_ABI_VERSION, + + .alloc_pd = mana_ib_alloc_pd, + .dealloc_pd = mana_ib_dealloc_pd, + + .alloc_ucontext = mana_ib_alloc_ucontext, + .dealloc_ucontext = mana_ib_dealloc_ucontext, + + .create_cq = mana_ib_create_cq, + .destroy_cq = mana_ib_destroy_cq, + + .create_qp = mana_ib_create_qp, + .modify_qp = mana_ib_modify_qp, + .destroy_qp = mana_ib_destroy_qp, + + .disassociate_ucontext = mana_ib_disassociate_ucontext, + + .mmap = mana_ib_mmap, + + .reg_user_mr = mana_ib_reg_user_mr, + .dereg_mr = mana_ib_dereg_mr, + + .create_wq = mana_ib_create_wq, + .modify_wq = mana_ib_modify_wq, + .destroy_wq = mana_ib_destroy_wq, + + .create_rwq_ind_table = mana_ib_create_rwq_ind_table, + .destroy_rwq_ind_table = mana_ib_destroy_rwq_ind_table, + + .get_port_immutable = mana_ib_get_port_immutable, + .query_device = mana_ib_query_device, + .query_port = mana_ib_query_port, + .query_gid = mana_ib_query_gid, + + INIT_RDMA_OBJ_SIZE(ib_cq, mana_ib_cq, ibcq), + INIT_RDMA_OBJ_SIZE(ib_pd, mana_ib_pd, ibpd), + INIT_RDMA_OBJ_SIZE(ib_qp, mana_ib_qp, ibqp), + INIT_RDMA_OBJ_SIZE(ib_ucontext, mana_ib_ucontext, ibucontext), + INIT_RDMA_OBJ_SIZE(ib_rwq_ind_table, mana_ib_rwq_ind_table, + ib_ind_table), +}; + +static int mana_ib_probe(struct auxiliary_device *adev, + const struct auxiliary_device_id *id) +{ + struct mana_adev *madev = container_of(adev, struct mana_adev, adev); + struct gdma_dev *mdev = madev->mdev; + struct mana_context *mc = mdev->driver_data; + struct mana_ib_dev *dev; + int ret = 0; + + dev = ib_alloc_device(mana_ib_dev, ib_dev); + if (!dev) + return -ENOMEM; + + + ib_set_device_ops(&dev->ib_dev, &mana_ib_dev_ops); + + dev->ib_dev.phys_port_cnt = mc->num_ports; + + pr_debug("mdev=%p id=%d num_ports=%d\n", + mdev, mdev->dev_id.as_uint32, + dev->ib_dev.phys_port_cnt); + + dev->gdma_dev = mdev; + dev->ib_dev.node_type = RDMA_NODE_IB_CA; + + /* + * num_comp_vectors needs to set to the max MSIX index + * when interrupts and event queues are implemented + */ + dev->ib_dev.num_comp_vectors = 1; + dev->ib_dev.dev.parent = mdev->gdma_context->dev; + + ret = ib_register_device(&dev->ib_dev, "mana_%d", + mdev->gdma_context->dev); + if (ret) { + ib_dealloc_device(&dev->ib_dev); + return ret; + } + + dev_set_drvdata(&adev->dev, dev); + + return 0; +} + +static void mana_ib_remove(struct auxiliary_device *adev) +{ + struct mana_ib_dev *dev = dev_get_drvdata(&adev->dev); + + ib_unregister_device(&dev->ib_dev); + ib_dealloc_device(&dev->ib_dev); +} + +static 
struct auxiliary_driver mana_driver = { + .name = "rdma", + .probe = mana_ib_probe, + .remove = mana_ib_remove, + .id_table = mana_id_table, +}; + +static int __init mana_ib_init(void) +{ + auxiliary_driver_register(&mana_driver); + + return 0; +} + +static void __exit mana_ib_cleanup(void) +{ + auxiliary_driver_unregister(&mana_driver); +} + +module_init(mana_ib_init); +module_exit(mana_ib_cleanup); diff --git a/drivers/infiniband/hw/mana/mana_ib.h b/drivers/infiniband/hw/mana/mana_ib.h new file mode 100644 index 000000000000..0f2ec882f0a2 --- /dev/null +++ b/drivers/infiniband/hw/mana/mana_ib.h @@ -0,0 +1,145 @@ +/* SPDX-License-Identifier: GPL-2.0 OR Linux-OpenIB */ +/* + * Copyright (c) 2022 Microsoft Corporation. All rights reserved. + */ + +#ifndef _MANA_IB_H_ +#define _MANA_IB_H_ + +#include +#include +#include +#include +#include + +#include + +#define PAGE_SZ_BM (SZ_4K | SZ_8K | SZ_16K | SZ_32K | SZ_64K | SZ_128K \ + | SZ_256K | SZ_512K | SZ_1M | SZ_2M) + +// Maximum size of a memory registration is 1G bytes +#define MANA_IB_MAX_MR_SIZE (1024 * 1024 * 1024) + +struct mana_ib_dev { + struct ib_device ib_dev; + struct gdma_dev *gdma_dev; +}; + +struct mana_ib_wq { + struct ib_wq ibwq; + struct ib_umem *umem; + int wqe; + u32 wq_buf_size; + u64 gdma_region; + u64 id; + mana_handle_t rx_object; +}; + +struct mana_ib_pd { + struct ib_pd ibpd; + u32 pdn; + mana_handle_t pd_handle; + atomic_t vport_use_count; + bool tx_shortform_allowed; + u32 tx_vp_offset; +}; + +struct mana_ib_mr { + struct ib_mr ibmr; + struct ib_umem *umem; + mana_handle_t mr_handle; +}; + +struct mana_ib_cq { + struct ib_cq ibcq; + struct ib_umem *umem; + int cqe; + u64 gdma_region; + u64 id; +}; + +struct mana_ib_qp { + struct ib_qp ibqp; + + // Send queue info + struct ib_umem *sq_umem; + int sqe; + u64 sq_gdma_region; + u64 sq_id; + + // Set if this QP uses ind_table for receive queues + + mana_handle_t tx_object; + + // the port on the IB device, starting with 1 + u32 port; +}; + +struct mana_ib_ucontext { + struct ib_ucontext ibucontext; + u32 doorbell; +}; + +struct mana_ib_rwq_ind_table { + struct ib_rwq_ind_table ib_ind_table; +}; + +int mana_ib_gd_create_dma_region(struct mana_ib_dev *dev, + struct ib_umem *umem, + mana_handle_t *gdma_region, u64 page_sz); + +int mana_ib_gd_destroy_dma_region(struct mana_ib_dev *dev, + mana_handle_t gdma_region); + +struct ib_wq *mana_ib_create_wq(struct ib_pd *pd, + struct ib_wq_init_attr *init_attr, + struct ib_udata *udata); + +int mana_ib_modify_wq(struct ib_wq *wq, struct ib_wq_attr *wq_attr, + u32 wq_attr_mask, struct ib_udata *udata); + +int mana_ib_destroy_wq(struct ib_wq *ibwq, struct ib_udata *udata); + +int mana_ib_create_rwq_ind_table(struct ib_rwq_ind_table *ib_rwq_ind_table, + struct ib_rwq_ind_table_init_attr *init_attr, + struct ib_udata *udata); + +int mana_ib_destroy_rwq_ind_table(struct ib_rwq_ind_table *ib_rwq_ind_tbl); + +struct ib_mr *mana_ib_get_dma_mr(struct ib_pd *ibpd, int access_flags); + +struct ib_mr *mana_ib_reg_user_mr(struct ib_pd *pd, u64 start, u64 length, + u64 iova, int access_flags, + struct ib_udata *udata); + +int mana_ib_dereg_mr(struct ib_mr *ibmr, struct ib_udata *udata); + +int mana_ib_create_qp(struct ib_qp *qp, struct ib_qp_init_attr *qp_init_attr, + struct ib_udata *udata); + + +int mana_ib_modify_qp(struct ib_qp *ibqp, struct ib_qp_attr *attr, + int attr_mask, struct ib_udata *udata); + +int mana_ib_destroy_qp(struct ib_qp *ibqp, struct ib_udata *udata); + +int mana_ib_cfg_vport(struct mana_ib_dev *dev, u32 port_id, + struct 
mana_ib_pd *pd, u32 doorbell_id); +void mana_ib_uncfg_vport(struct mana_ib_dev *dev, struct mana_ib_pd *pd, + u32 port); + +int mana_ib_create_cq(struct ib_cq *ibcq, const struct ib_cq_init_attr *attr, + struct ib_udata *udata); + +int mana_ib_destroy_cq(struct ib_cq *ibcq, struct ib_udata *udata); + +int mana_ib_gd_create_pd(struct mana_ib_dev *dev, u64 *pd_handle, u32 *pd_id, + enum gdma_pd_flags flags); + +int mana_ib_gd_destroy_pd(struct mana_ib_dev *dev, u64 pd_handle); + +int mana_ib_gd_create_mr(struct mana_ib_dev *dev, struct mana_ib_mr *mr, + struct gdma_create_mr_params *mr_params); + +int mana_ib_gd_destroy_mr(struct mana_ib_dev *dev, mana_handle_t mr_handle); +#endif diff --git a/drivers/infiniband/hw/mana/mr.c b/drivers/infiniband/hw/mana/mr.c new file mode 100644 index 000000000000..691f9ec734c7 --- /dev/null +++ b/drivers/infiniband/hw/mana/mr.c @@ -0,0 +1,133 @@ +// SPDX-License-Identifier: GPL-2.0 OR Linux-OpenIB +/* + * Copyright (c) 2022, Microsoft Corporation. All rights reserved. + */ + +#include "mana_ib.h" + +#define VALID_MR_FLAGS (IB_ACCESS_LOCAL_WRITE | \ + IB_ACCESS_REMOTE_WRITE | \ + IB_ACCESS_REMOTE_READ) + +static enum gdma_mr_access_flags +mana_ib_verbs_to_gdma_access_flags(int access_flags) +{ + enum gdma_mr_access_flags flags = GDMA_ACCESS_FLAG_LOCAL_READ; + + if (access_flags & IB_ACCESS_LOCAL_WRITE) + flags |= GDMA_ACCESS_FLAG_LOCAL_WRITE; + + if (access_flags & IB_ACCESS_REMOTE_WRITE) + flags |= GDMA_ACCESS_FLAG_REMOTE_WRITE; + + if (access_flags & IB_ACCESS_REMOTE_READ) + flags |= GDMA_ACCESS_FLAG_REMOTE_READ; + + return flags; +} +struct ib_mr *mana_ib_reg_user_mr(struct ib_pd *ibpd, u64 start, u64 length, + u64 iova, int access_flags, + struct ib_udata *udata) +{ + struct mana_ib_pd *pd = container_of(ibpd, struct mana_ib_pd, ibpd); + struct ib_device *ibdev = ibpd->device; + struct mana_ib_dev *dev = container_of( + ibdev, struct mana_ib_dev, ib_dev); + struct mana_ib_mr *mr; + gdma_obj_handle_t dma_region_handle; + struct gdma_create_mr_params mr_params = {}; + u64 page_sz = PAGE_SIZE; + int err; + + pr_debug("start 0x%llx, iova 0x%llx length 0x%llx access_flags 0x%x", + start, iova, length, access_flags); + + if (access_flags & ~VALID_MR_FLAGS) + return ERR_PTR(-EINVAL); + + mr = kzalloc(sizeof(*mr), GFP_KERNEL); + if (!mr) + return ERR_PTR(-ENOMEM); + + mr->umem = ib_umem_get(ibdev, start, length, access_flags); + if (IS_ERR(mr->umem)) { + err = PTR_ERR(mr->umem); + pr_err("Failed to get umem for register user-mr, %d\n", err); + goto err_free; + } + + page_sz = ib_umem_find_best_pgsz(mr->umem, PAGE_SZ_BM, iova); + if (unlikely(!page_sz)) { + pr_err("Failed to get best page size\n"); + err = -EOPNOTSUPP; + goto err_umem; + } + pr_debug("Page size chosen %llu\n", page_sz); + + err = mana_ib_gd_create_dma_region(dev, mr->umem, &dma_region_handle, + page_sz); + if (err) { + pr_err("Failed to create dma region for register user-mr, %d\n", + err); + goto err_umem; + } + + pr_debug("mana_ib_gd_create_dma_region ret %d gdma_region %llx\n", + err, dma_region_handle); + + mr_params.pd_handle = pd->pd_handle; + mr_params.mr_type = GDMA_MR_TYPE_GVA; + mr_params.gva.dma_region_handle = dma_region_handle; + mr_params.gva.virtual_address = iova; + mr_params.gva.access_flags = + mana_ib_verbs_to_gdma_access_flags(access_flags); + + err = mana_ib_gd_create_mr(dev, mr, &mr_params); + if (err) + goto err_dma_region; + + /* + * There is no need to keep track of dma_region_handle after MR is + * successfully created. 
The dma_region_handle is tracked in the PF + * as part of the lifecycle of this MR. + */ + + mr->ibmr.length = length; + mr->ibmr.page_size = page_sz; + return &mr->ibmr; + +err_dma_region: + mana_gd_destroy_dma_region(dev->gdma_dev->gdma_context, + dma_region_handle); + +err_umem: + ib_umem_release(mr->umem); + +err_free: + kfree(mr); + return ERR_PTR(err); +} + +int mana_ib_dereg_mr(struct ib_mr *ibmr, struct ib_udata *udata) +{ + struct mana_ib_mr *mr = + container_of(ibmr, struct mana_ib_mr, ibmr); + struct ib_device *ibdev = ibmr->device; + struct mana_ib_dev *dev = + container_of(ibdev, struct mana_ib_dev, ib_dev); + + int err; + + err = mana_ib_gd_destroy_mr(dev, mr->mr_handle); + if (err) + return err; + + if (mr->umem) + ib_umem_release(mr->umem); + + kfree(mr); + + return 0; +} + + diff --git a/drivers/infiniband/hw/mana/qp.c b/drivers/infiniband/hw/mana/qp.c new file mode 100644 index 000000000000..75ab983c3f5c --- /dev/null +++ b/drivers/infiniband/hw/mana/qp.c @@ -0,0 +1,466 @@ +// SPDX-License-Identifier: GPL-2.0 OR Linux-OpenIB +/* + * Copyright (c) 2022, Microsoft Corporation. All rights reserved. + */ + +#include "mana_ib.h" + +int mana_ib_cfg_vport_steering(struct mana_ib_dev *dev, struct net_device *ndev, + mana_handle_t default_rxobj, + mana_handle_t ind_table[], u32 log_ind_tbl_size, + u32 rx_hash_key_len, u8 *rx_hash_key) +{ + struct gdma_dev *mdev = dev->gdma_dev; + struct gdma_context *gc = mdev->gdma_context; + struct mana_port_context *mpc = netdev_priv(ndev); + + struct mana_cfg_rx_steer_req *req = NULL; + struct mana_cfg_rx_steer_resp resp = {}; + u32 req_buf_size; + int err; + mana_handle_t *req_indir_tab; + int i; + + req_buf_size = sizeof(*req) + + sizeof(mana_handle_t) * MANA_INDIRECT_TABLE_SIZE; + req = kzalloc(req_buf_size, GFP_KERNEL); + if (!req) + return -ENOMEM; + + mana_gd_init_req_hdr(&req->hdr, MANA_CONFIG_VPORT_RX, req_buf_size, + sizeof(resp)); + + req->vport = mpc->port_handle; + req->rx_enable = 1; + req->update_default_rxobj = 1; + req->default_rxobj = default_rxobj; + req->hdr.dev_id = mdev->dev_id; + + /* If there are more than 1 entries in indirection table, enable RSS */ + if (log_ind_tbl_size) + req->rss_enable = true; + + req->num_indir_entries = MANA_INDIRECT_TABLE_SIZE; + req->indir_tab_offset = sizeof(*req); + req->update_indir_tab = true; + + req_indir_tab = (mana_handle_t *)(req + 1); + /* + * The ind table passed to the hardware must have + * MANA_INDIRECT_TABLE_SIZE entries. 
Adjust the verb + * ind_table to MANA_INDIRECT_TABLE_SIZE if required + */ + pr_debug("ind table size %u\n", 1 << log_ind_tbl_size); + for (i = 0; i < MANA_INDIRECT_TABLE_SIZE; i++) { + req_indir_tab[i] = ind_table[i % (1 << log_ind_tbl_size)]; + pr_debug("index %u handle 0x%llx\n", i, req_indir_tab[i]); + } + + req->update_hashkey = true; + if (rx_hash_key_len) + memcpy(req->hashkey, rx_hash_key, rx_hash_key_len); + else + netdev_rss_key_fill(req->hashkey, MANA_HASH_KEY_SIZE); + + pr_debug("vport handle %llu default_rxobj 0x%llx\n", + req->vport, default_rxobj); + + err = mana_gd_send_request(gc, req_buf_size, req, sizeof(resp), &resp); + if (err) { + netdev_err(ndev, "Failed to configure vPort RX: %d\n", err); + goto out; + } + + if (resp.hdr.status) { + netdev_err(ndev, "vPort RX configuration failed: 0x%x\n", + resp.hdr.status); + err = -EPROTO; + } + +out: + kfree(req); + return err; +} + + +static int mana_ib_create_qp_rss(struct ib_qp *ibqp, struct ib_pd *pd, + struct ib_qp_init_attr *attr, struct ib_udata *udata) +{ + struct mana_ib_dev *mdev = + container_of(pd->device, struct mana_ib_dev, ib_dev); + struct gdma_dev *gd = mdev->gdma_dev; + struct mana_context *mc = gd->driver_data; + struct net_device *ndev; + struct mana_port_context *mpc; + struct ib_rwq_ind_table *ind_tbl = attr->rwq_ind_tbl; + struct mana_ib_qp *qp = container_of(ibqp, struct mana_ib_qp, ibqp); + struct ib_wq *ibwq; + struct mana_ib_wq *wq; + struct ib_cq *ibcq; + struct mana_ib_cq *cq; + int i = 0, ret; + u32 port; + mana_handle_t *mana_ind_table; + + struct mana_ib_create_qp_rss ucmd = {}; + struct mana_ib_create_qp_rss_resp resp = {}; + + ret = ib_copy_from_udata(&ucmd, udata, min(sizeof(ucmd), udata->inlen)); + if (ret) { + pr_err("Failed to copy from udata for create rss-qp, err %d\n", + ret); + return -EFAULT; + } + + if (attr->cap.max_recv_wr > MAX_SEND_BUFFERS_PER_QUEUE) { + pr_err("Requested max_recv_wr %d exceeding limit.\n", + attr->cap.max_recv_wr); + return -EINVAL; + } + + if (attr->cap.max_recv_sge > MAX_RX_WQE_SGL_ENTRIES) { + pr_err("Requested max_recv_sge %d exceeding limit.\n", + attr->cap.max_recv_sge); + return -EINVAL; + } + + if (ucmd.rx_hash_function != MANA_IB_RX_HASH_FUNC_TOEPLITZ) { + pr_err("RX Hash function is not supported, %d\n", + ucmd.rx_hash_function); + return -EINVAL; + } + + // IB ports start with 1, MANA start with 0 + port = ucmd.port; + if (port < 1 || port > mc->num_ports) { + pr_err("Invalid port %u in creating qp\n", port); + return -EINVAL; + } + ndev = mc->ports[port - 1]; + mpc = netdev_priv(ndev); + + pr_debug("rx_hash_function %d port %d\n", ucmd.rx_hash_function, port); + + mana_ind_table = kzalloc(sizeof(mana_handle_t) * + (1 << ind_tbl->log_ind_tbl_size), + GFP_KERNEL); + if (!mana_ind_table) { + ret = -ENOMEM; + goto fail; + } + + qp->port = port; + + for (i = 0; i < (1 << ind_tbl->log_ind_tbl_size); i++) { + struct mana_obj_spec wq_spec = {}; + struct mana_obj_spec cq_spec = {}; + + ibwq = ind_tbl->ind_tbl[i]; + wq = container_of(ibwq, struct mana_ib_wq, ibwq); + + ibcq = ibwq->cq; + cq = container_of(ibcq, struct mana_ib_cq, ibcq); + + wq_spec.gdma_region = wq->gdma_region; + wq_spec.queue_size = wq->wq_buf_size; + + cq_spec.gdma_region = cq->gdma_region; + cq_spec.queue_size = cq->cqe * COMP_ENTRY_SIZE; + cq_spec.modr_ctx_id = 0; + cq_spec.attached_eq = GDMA_CQ_NO_EQ; + + ret = mana_create_wq_obj(mpc, mpc->port_handle, GDMA_RQ, + &wq_spec, &cq_spec, &wq->rx_object); + if (ret) + goto fail; + + /* The GDMA regions are now owned by the WQ object */ + 
wq->gdma_region = GDMA_INVALID_DMA_REGION; + cq->gdma_region = GDMA_INVALID_DMA_REGION; + + wq->id = wq_spec.queue_index; + cq->id = cq_spec.queue_index; + + pr_debug("ret %d rx_object 0x%llx wq id %llu cq id %llu\n", + ret, wq->rx_object, wq->id, cq->id); + + resp.entries[i].cqid = cq->id; + resp.entries[i].wqid = wq->id; + + mana_ind_table[i] = wq->rx_object; + } + resp.num_entries = i; + + ret = mana_ib_cfg_vport_steering(mdev, ndev, wq->rx_object, + mana_ind_table, + ind_tbl->log_ind_tbl_size, + ucmd.rx_hash_key_len, + ucmd.rx_hash_key); + if (ret) + goto fail; + + kfree(mana_ind_table); + + if (udata) { + ret = ib_copy_to_udata(udata, &resp, sizeof(resp)); + if (ret) { + pr_err("Failed to copy to udata create rss-qp, %d\n", + ret); + goto fail; + } + } + + return 0; + +fail: + while (i-- > 0) { + ibwq = ind_tbl->ind_tbl[i]; + wq = container_of(ibwq, struct mana_ib_wq, ibwq); + mana_destroy_wq_obj(mpc, GDMA_RQ, wq->rx_object); + } + + kfree(mana_ind_table); + + return ret; +} + +int mana_ib_create_qp_raw(struct ib_qp *ibqp, struct ib_pd *ibpd, + struct ib_qp_init_attr *attr, + struct ib_udata *udata) +{ + struct ib_ucontext *ib_ucontext = ibpd->uobject->context; + struct mana_ib_ucontext *mana_ucontext = + container_of(ib_ucontext, struct mana_ib_ucontext, ibucontext); + struct mana_ib_pd *pd = container_of(ibpd, struct mana_ib_pd, ibpd); + struct mana_ib_create_qp ucmd = {}; + struct mana_ib_create_qp_resp resp = {}; + struct mana_ib_qp *qp = container_of(ibqp, struct mana_ib_qp, ibqp); + struct mana_ib_cq *send_cq = + container_of(attr->send_cq, struct mana_ib_cq, ibcq); + struct mana_ib_dev *mdev = + container_of(ibpd->device, struct mana_ib_dev, ib_dev); + struct gdma_dev *gd = mdev->gdma_dev; + struct mana_context *mc = gd->driver_data; + struct net_device *ndev; + struct mana_port_context *mpc; + struct mana_obj_spec wq_spec = {}; + struct mana_obj_spec cq_spec = {}; + int err; + u32 port; + + struct ib_umem *umem; + + err = ib_copy_from_udata(&ucmd, udata, min(sizeof(ucmd), udata->inlen)); + if (err) { + pr_err("Failed to copy from udata create qp-raw, %d\n", err); + return -EFAULT; + } + + // IB ports start with 1, MANA Ethernet ports start with 0 + port = ucmd.port; + if (ucmd.port > mc->num_ports) + return -EINVAL; + + if (attr->cap.max_send_wr > MAX_SEND_BUFFERS_PER_QUEUE) { + pr_err("Requested max_send_wr %d exceeding limit\n", + attr->cap.max_send_wr); + return -EINVAL; + } + + if (attr->cap.max_send_sge > MAX_TX_WQE_SGL_ENTRIES) { + pr_err("Requested max_send_sge %d exceeding limit\n", + attr->cap.max_send_sge); + return -EINVAL; + } + + ndev = mc->ports[port - 1]; + mpc = netdev_priv(ndev); + pr_debug("port %u ndev %p mpc %p\n", port, ndev, mpc); + + err = mana_ib_cfg_vport(mdev, port - 1, pd, mana_ucontext->doorbell); + if (err) { + pr_err("cfg vport failed err %d\n", err); + return -ENODEV; + } + + qp->port = port; + + pr_debug("ucmd sq_buf_addr 0x%llx port %u\n", + ucmd.sq_buf_addr, ucmd.port); + + umem = ib_umem_get(ibpd->device, ucmd.sq_buf_addr, ucmd.sq_buf_size, + IB_ACCESS_LOCAL_WRITE); + if (IS_ERR(umem)) { + err = PTR_ERR(umem); + pr_err("Failed to get umem for create qp-raw, err %d\n", err); + goto err_free_vport; + } + qp->sq_umem = umem; + + err = mana_ib_gd_create_dma_region(mdev, qp->sq_umem, + &qp->sq_gdma_region, PAGE_SIZE); + if (err) { + pr_err("Failed to create dma region for create qp-raw, %d\n", + err); + goto err_release_umem; + } + + pr_debug("%s: mana_ib_gd_create_dma_region ret %d gdma_region 0x%llx\n", + __func__, err, qp->sq_gdma_region); + + 
// Create a WQ on the same port handle used by the Ethernet + wq_spec.gdma_region = qp->sq_gdma_region; + wq_spec.queue_size = ucmd.sq_buf_size; + + cq_spec.gdma_region = send_cq->gdma_region; + cq_spec.queue_size = send_cq->cqe * COMP_ENTRY_SIZE; + cq_spec.modr_ctx_id = 0; + cq_spec.attached_eq = GDMA_CQ_NO_EQ; + + err = mana_create_wq_obj(mpc, mpc->port_handle, GDMA_SQ, + &wq_spec, &cq_spec, &qp->tx_object); + if (err) { + pr_err("Failed to create wq for create raw-qp, err %d\n", err); + goto err_destroy_dma_region; + } + + /* The GDMA regions are now owned by the WQ object */ + qp->sq_gdma_region = GDMA_INVALID_DMA_REGION; + send_cq->gdma_region = GDMA_INVALID_DMA_REGION; + + qp->sq_id = wq_spec.queue_index; + send_cq->id = cq_spec.queue_index; + + pr_debug("ret %d qp->tx_object 0x%llx sq id %llu cq id %llu\n", + err, qp->tx_object, qp->sq_id, send_cq->id); + + resp.sqid = qp->sq_id; + resp.cqid = send_cq->id; + resp.tx_vp_offset = pd->tx_vp_offset; + + if (udata) { + err = ib_copy_to_udata(udata, &resp, sizeof(resp)); + if (err) { + pr_err("Failed to copy udata for create qp-raw, %d\n", + err); + goto err_destroy_wq_obj; + } + } + + return 0; + +err_destroy_wq_obj: + mana_destroy_wq_obj(mpc, GDMA_SQ, qp->tx_object); + +err_destroy_dma_region: + mana_ib_gd_destroy_dma_region(mdev, qp->sq_gdma_region); + +err_release_umem: + ib_umem_release(umem); + +err_free_vport: + mana_ib_uncfg_vport(mdev, pd, port - 1); + + return err; +} + +int mana_ib_create_qp(struct ib_qp *ibqp, struct ib_qp_init_attr *attr, + struct ib_udata *udata) +{ + switch (attr->qp_type) { + + case IB_QPT_RAW_PACKET: + // When rwq_ind_tbl is used, it's for creating WQs for RSS + if (attr->rwq_ind_tbl) + return mana_ib_create_qp_rss(ibqp, ibqp->pd, attr, udata); + + return mana_ib_create_qp_raw(ibqp, ibqp->pd, attr, udata); + default: + // Creating QP other than IB_QPT_RAW_PACKET is not supported + pr_err("Creating QP type %u not supported\n", attr->qp_type); + } + + return -EINVAL; +} + +int mana_ib_modify_qp(struct ib_qp *ibqp, struct ib_qp_attr *attr, + int attr_mask, struct ib_udata *udata) +{ + // modify_qp is not supported by this version of the driver + return -ENOTSUPP; +} + +static int mana_ib_destroy_qp_rss(struct mana_ib_qp *qp, + struct ib_rwq_ind_table *ind_tbl, + struct ib_udata *udata) +{ + struct mana_ib_dev *mdev = + container_of(qp->ibqp.device, struct mana_ib_dev, ib_dev); + struct gdma_dev *gd = mdev->gdma_dev; + struct mana_context *mc = gd->driver_data; + struct net_device *ndev; + struct mana_port_context *mpc; + struct ib_wq *ibwq; + struct mana_ib_wq *wq; + int i; + + ndev = mc->ports[qp->port - 1]; + mpc = netdev_priv(ndev); + pr_debug("ndev %p mpc %p\n", ndev, mpc); + + for (i = 0; i < (1 << ind_tbl->log_ind_tbl_size); i++) { + ibwq = ind_tbl->ind_tbl[i]; + wq = container_of(ibwq, struct mana_ib_wq, ibwq); + pr_debug("wq->rx_object %llu\n", wq->rx_object); + mana_destroy_wq_obj(mpc, GDMA_RQ, wq->rx_object); + } + + return 0; +} + +int mana_ib_destroy_qp_raw(struct mana_ib_qp *qp, struct ib_udata *udata) +{ + struct mana_ib_dev *mdev = + container_of(qp->ibqp.device, struct mana_ib_dev, ib_dev); + struct gdma_dev *gd = mdev->gdma_dev; + struct mana_context *mc = gd->driver_data; + struct net_device *ndev; + struct mana_port_context *mpc; + struct ib_pd *ibpd = qp->ibqp.pd; + struct mana_ib_pd *pd = container_of(ibpd, struct mana_ib_pd, ibpd); + + ndev = mc->ports[qp->port - 1]; + mpc = netdev_priv(ndev); + pr_debug("ndev %p mpc %p qp->tx_object %llu\n", + ndev, mpc, qp->tx_object); + + 
mana_destroy_wq_obj(mpc, GDMA_SQ, qp->tx_object); + + if (qp->sq_umem) { + mana_ib_gd_destroy_dma_region(mdev, qp->sq_gdma_region); + ib_umem_release(qp->sq_umem); + } + + mana_ib_uncfg_vport(mdev, pd, qp->port - 1); + + return 0; +} + +int mana_ib_destroy_qp(struct ib_qp *ibqp, struct ib_udata *udata) +{ + struct mana_ib_qp *qp = container_of(ibqp, struct mana_ib_qp, ibqp); + + switch (ibqp->qp_type) { + case IB_QPT_RAW_PACKET: + if (ibqp->rwq_ind_tbl) + return mana_ib_destroy_qp_rss(qp, ibqp->rwq_ind_tbl, + udata); + + return mana_ib_destroy_qp_raw(qp, udata); + + default: + pr_debug("Unexpected QP type %u\n", ibqp->qp_type); + } + + return -ENOENT; +} diff --git a/drivers/infiniband/hw/mana/wq.c b/drivers/infiniband/hw/mana/wq.c new file mode 100644 index 000000000000..945aa163c452 --- /dev/null +++ b/drivers/infiniband/hw/mana/wq.c @@ -0,0 +1,111 @@ +// SPDX-License-Identifier: GPL-2.0 OR Linux-OpenIB +/* + * Copyright (c) 2022, Microsoft Corporation. All rights reserved. + */ + +#include "mana_ib.h" + +struct ib_wq *mana_ib_create_wq(struct ib_pd *pd, + struct ib_wq_init_attr *init_attr, + struct ib_udata *udata) +{ + struct ib_umem *umem; + struct mana_ib_dev *mdev = container_of(pd->device, + struct mana_ib_dev, ib_dev); + struct mana_ib_create_wq ucmd = { }; + struct mana_ib_wq *wq; + int err; + + pr_debug("udata->inlen %lu\n", udata->inlen); + err = ib_copy_from_udata(&ucmd, udata, min(sizeof(ucmd), udata->inlen)); + if (err) { + pr_err("Failed to copy from udata for create wq, %d\n", err); + return ERR_PTR(-EFAULT); + } + + wq = kzalloc(sizeof(*wq), GFP_KERNEL); + if (!wq) + return ERR_PTR(-ENOMEM); + + pr_debug("ucmd wq_buf_addr 0x%llx\n", ucmd.wq_buf_addr); + + umem = ib_umem_get(pd->device, ucmd.wq_buf_addr, ucmd.wq_buf_size, + IB_ACCESS_LOCAL_WRITE); + if (IS_ERR(umem)) { + err = PTR_ERR(umem); + pr_err("Failed to get umem for create wq, err %d\n", err); + goto err_free_wq; + } + + wq->umem = umem; + wq->wqe = init_attr->max_wr; + wq->wq_buf_size = ucmd.wq_buf_size; + wq->rx_object = INVALID_MANA_HANDLE; + + err = mana_ib_gd_create_dma_region(mdev, wq->umem, &wq->gdma_region, + PAGE_SIZE); + if (err) { + pr_err("Failed to create dma region for create wq, %d\n", err); + goto err_release_umem; + } + + pr_debug("%s: mana_ib_gd_create_dma_region ret %d gdma_region 0x%llx\n", + __func__, err, wq->gdma_region); + + // WQ ID is returned at wq_create time, doesn't know the value yet + + return &wq->ibwq; + +err_release_umem: + ib_umem_release(umem); + +err_free_wq: + kfree(wq); + + return ERR_PTR(err); +} + + +int mana_ib_modify_wq(struct ib_wq *wq, struct ib_wq_attr *wq_attr, + u32 wq_attr_mask, struct ib_udata *udata) +{ + // modify_wq is not supported by this version of the driver + return -ENOTSUPP; +} + +int mana_ib_destroy_wq(struct ib_wq *ibwq, struct ib_udata *udata) +{ + struct mana_ib_wq *wq = container_of(ibwq, struct mana_ib_wq, ibwq); + struct ib_device *ib_dev = ibwq->device; + struct mana_ib_dev *mdev = container_of(ib_dev, struct mana_ib_dev, + ib_dev); + + mana_ib_gd_destroy_dma_region(mdev, wq->gdma_region); + ib_umem_release(wq->umem); + + kfree(wq); + + return 0; +} + +int mana_ib_create_rwq_ind_table(struct ib_rwq_ind_table *ib_rwq_ind_table, + struct ib_rwq_ind_table_init_attr *init_attr, + struct ib_udata *udata) +{ + pr_debug("udata->inlen %lu\n", udata->inlen); + + /* + * There is no additional data in ind_table to be maintained by this + * driver, do nothing + */ + return 0; +} + +int mana_ib_destroy_rwq_ind_table(struct ib_rwq_ind_table *ib_rwq_ind_tbl) +{ 
+ /* + * There is no additional data in ind_table to be maintained by this + * driver, do nothing + */ + return 0; +} diff --git a/include/linux/mana/mana.h b/include/linux/mana/mana.h index 1cf77a03bff2..114698f682cf 100644 --- a/include/linux/mana/mana.h +++ b/include/linux/mana/mana.h @@ -403,6 +403,9 @@ int mana_bpf(struct net_device *ndev, struct netdev_bpf *bpf); extern const struct ethtool_ops mana_ethtool_ops; +/* A CQ can be created not associated with any EQ */ +#define GDMA_CQ_NO_EQ 0xffff + struct mana_obj_spec { u32 queue_index; u64 gdma_region; diff --git a/include/uapi/rdma/ib_user_ioctl_verbs.h b/include/uapi/rdma/ib_user_ioctl_verbs.h index 3072e5d6b692..081aabf536dc 100644 --- a/include/uapi/rdma/ib_user_ioctl_verbs.h +++ b/include/uapi/rdma/ib_user_ioctl_verbs.h @@ -250,6 +250,7 @@ enum rdma_driver_id { RDMA_DRIVER_QIB, RDMA_DRIVER_EFA, RDMA_DRIVER_SIW, + RDMA_DRIVER_MANA, }; enum ib_uverbs_gid_type { diff --git a/include/uapi/rdma/mana-abi.h b/include/uapi/rdma/mana-abi.h new file mode 100644 index 000000000000..4e40f70a0601 --- /dev/null +++ b/include/uapi/rdma/mana-abi.h @@ -0,0 +1,68 @@ +/* SPDX-License-Identifier: ((GPL-2.0 WITH Linux-syscall-note) OR Linux-OpenIB) */ +/* + * Copyright (c) 2022, Microsoft Corporation. All rights reserved. + */ + +#ifndef MANA_ABI_USER_H +#define MANA_ABI_USER_H + +#include +#include + +#include + +/* + * Increment this value if any changes that break userspace ABI + * compatibility are made. + */ + +#define MANA_IB_UVERBS_ABI_VERSION 1 + +struct mana_ib_create_cq { + __aligned_u64 buf_addr; +}; + +struct mana_ib_create_qp { + __aligned_u64 sq_buf_addr; + __u32 sq_buf_size; + __u32 port; +}; + +struct mana_ib_create_qp_resp { + __u32 sqid; + __u32 cqid; + __u32 tx_vp_offset; + __u32 reserved; +}; + +struct mana_ib_create_wq { + __aligned_u64 wq_buf_addr; + __u32 wq_buf_size; + __u32 reserved; +}; + +/* RX Hash function flags */ +enum mana_ib_rx_hash_function_flags { + MANA_IB_RX_HASH_FUNC_TOEPLITZ = 1 << 0, +}; + +struct mana_ib_create_qp_rss { + __aligned_u64 rx_hash_fields_mask; + __u8 rx_hash_function; + __u8 reserved[7]; + __u32 rx_hash_key_len; + __u8 rx_hash_key[40]; + __u32 port; +}; + +struct rss_resp_entry { + __u32 cqid; + __u32 wqid; +}; + +struct mana_ib_create_qp_rss_resp { + __aligned_u64 num_entries; + struct rss_resp_entry entries[MANA_MAX_NUM_QUEUES]; +}; + +#endif
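
The userspace commands above are fixed-layout structures copied through ib_copy_from_udata()/ib_copy_to_udata(), so their sizes must stay stable once the ABI version is published. Below is a minimal standalone sketch of that layout intent, using plain stdint stand-ins (the uapi header additionally relies on __aligned_u64 to force 8-byte alignment of the first field); the struct names are local to the example and not part of the series.

#include <stdint.h>

/* Local mirror of struct mana_ib_create_qp from mana-abi.h above. */
struct create_qp_cmd {
	uint64_t sq_buf_addr;
	uint32_t sq_buf_size;
	uint32_t port;
};

/*
 * Local mirror of struct mana_ib_create_qp_resp; the explicit reserved
 * field keeps the response at 16 bytes rather than 12.
 */
struct create_qp_resp {
	uint32_t sqid;
	uint32_t cqid;
	uint32_t tx_vp_offset;
	uint32_t reserved;
};

_Static_assert(sizeof(struct create_qp_cmd) == 16, "ABI size must stay fixed");
_Static_assert(sizeof(struct create_qp_resp) == 16, "ABI size must stay fixed");

int main(void)
{
	return 0;
}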