From patchwork Mon Jul 5 14:13:33 2021
X-Patchwork-Submitter: "Zhu, Lingshan" <lingshan.zhu@intel.com>
X-Patchwork-Id: 12359201
From: Zhu Lingshan <lingshan.zhu@intel.com>
To: jasowang@redhat.com, mst@redhat.com
Cc: virtualization@lists.linux-foundation.org, netdev@vger.kernel.org,
    kvm@vger.kernel.org, Zhu Lingshan <lingshan.zhu@intel.com>
Subject: [PATCH V2 2/2] vDPA/ifcvf: implement management netlink framework for ifcvf
Date: Mon, 5 Jul 2021 22:13:33 +0800
Message-Id: <20210705141333.9262-3-lingshan.zhu@intel.com>
X-Mailer: git-send-email 2.27.0
In-Reply-To: <20210705141333.9262-1-lingshan.zhu@intel.com>
References: <20210705141333.9262-1-lingshan.zhu@intel.com>

This commit implements the management netlink framework for ifcvf,
including registering the management device and adding / removing
vDPA devices through it.

It works with iproute2:

[root@localhost lszhu]# vdpa mgmtdev show -jp
{
    "mgmtdev": {
        "pci/0000:01:00.5": {
            "supported_classes": [ "net" ]
        },
        "pci/0000:01:00.6": {
            "supported_classes": [ "net" ]
        }
    }
}
[root@localhost lszhu]# vdpa dev add mgmtdev pci/0000:01:00.5 name vdpa0
[root@localhost lszhu]# vdpa dev add mgmtdev pci/0000:01:00.6 name vdpa1

Signed-off-by: Zhu Lingshan <lingshan.zhu@intel.com>
---
 drivers/vdpa/ifcvf/ifcvf_base.h |   6 ++
 drivers/vdpa/ifcvf/ifcvf_main.c | 154 ++++++++++++++++++++++++--------
 2 files changed, 124 insertions(+), 36 deletions(-)

diff --git a/drivers/vdpa/ifcvf/ifcvf_base.h b/drivers/vdpa/ifcvf/ifcvf_base.h
index ded1b1b5fb13..e5251fcbb200 100644
--- a/drivers/vdpa/ifcvf/ifcvf_base.h
+++ b/drivers/vdpa/ifcvf/ifcvf_base.h
@@ -104,6 +104,12 @@ struct ifcvf_lm_cfg {
 	struct ifcvf_vring_lm_cfg vring_lm_cfg[IFCVF_MAX_QUEUE_PAIRS];
 };
 
+struct ifcvf_vdpa_mgmt_dev {
+	struct vdpa_mgmt_dev mdev;
+	struct ifcvf_adapter *adapter;
+	struct pci_dev *pdev;
+};
+
 int ifcvf_init_hw(struct ifcvf_hw *hw, struct pci_dev *dev);
 int ifcvf_start_hw(struct ifcvf_hw *hw);
 void ifcvf_stop_hw(struct ifcvf_hw *hw);
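For context: the new struct ifcvf_vdpa_mgmt_dev embeds the generic struct
vdpa_mgmt_dev so the management callbacks added to ifcvf_main.c below can
recover the driver-private wrapper with container_of(). The following is a
minimal, self-contained userspace sketch of that idiom; the stub types, the
local container_of macro and the helper to_ifcvf_mgmt() are illustrative
stand-ins, not part of the patch.

#include <stddef.h>
#include <stdio.h>

/* Stub stand-ins for the kernel types; illustrative only. */
struct vdpa_mgmt_dev { const char *name; };
struct pci_dev { const char *bdf; };
struct ifcvf_adapter;

struct ifcvf_vdpa_mgmt_dev {
	struct vdpa_mgmt_dev mdev;	/* embedded generic management device */
	struct ifcvf_adapter *adapter;
	struct pci_dev *pdev;
};

/* Userspace copy of the kernel's container_of() idiom. */
#define container_of(ptr, type, member) \
	((type *)((char *)(ptr) - offsetof(type, member)))

/* What ifcvf_vdpa_dev_add()/_del() do first: map mdev back to the wrapper. */
static struct ifcvf_vdpa_mgmt_dev *to_ifcvf_mgmt(struct vdpa_mgmt_dev *mdev)
{
	return container_of(mdev, struct ifcvf_vdpa_mgmt_dev, mdev);
}

int main(void)
{
	struct pci_dev pdev = { .bdf = "0000:01:00.5" };
	struct ifcvf_vdpa_mgmt_dev mgmt = {
		.mdev = { .name = "ifcvf" },
		.pdev = &pdev,
	};

	printf("wrapper recovered for %s\n", to_ifcvf_mgmt(&mgmt.mdev)->pdev->bdf);
	return 0;
}

The kernel's real container_of() additionally type-checks the member, but the
pointer arithmetic is the same.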
diff --git a/drivers/vdpa/ifcvf/ifcvf_main.c b/drivers/vdpa/ifcvf/ifcvf_main.c
index 5f70ab1283a0..c72d9b36e4a0 100644
--- a/drivers/vdpa/ifcvf/ifcvf_main.c
+++ b/drivers/vdpa/ifcvf/ifcvf_main.c
@@ -218,7 +218,7 @@ static void ifcvf_vdpa_set_status(struct vdpa_device *vdpa_dev, u8 status)
 	int ret;
 
 	vf = vdpa_to_vf(vdpa_dev);
-	adapter = dev_get_drvdata(vdpa_dev->dev.parent);
+	adapter = vdpa_to_adapter(vdpa_dev);
 	status_old = ifcvf_get_status(vf);
 
 	if (status_old == status)
@@ -442,6 +442,16 @@ static const struct vdpa_config_ops ifc_vdpa_ops = {
 	.set_config_cb = ifcvf_vdpa_set_config_cb,
 };
 
+static struct virtio_device_id id_table_net[] = {
+	{VIRTIO_ID_NET, VIRTIO_DEV_ANY_ID},
+	{0},
+};
+
+static struct virtio_device_id id_table_blk[] = {
+	{VIRTIO_ID_BLOCK, VIRTIO_DEV_ANY_ID},
+	{0},
+};
+
 static u32 get_dev_type(struct pci_dev *pdev)
 {
 	u32 dev_type;
@@ -462,48 +472,30 @@ static u32 get_dev_type(struct pci_dev *pdev)
 	return dev_type;
 }
 
-static int ifcvf_probe(struct pci_dev *pdev, const struct pci_device_id *id)
+static int ifcvf_vdpa_dev_add(struct vdpa_mgmt_dev *mdev, const char *name)
 {
-	struct device *dev = &pdev->dev;
+	struct ifcvf_vdpa_mgmt_dev *ifcvf_mgmt_dev;
 	struct ifcvf_adapter *adapter;
+	struct pci_dev *pdev;
 	struct ifcvf_hw *vf;
+	struct device *dev;
 	int ret, i;
 
-	ret = pcim_enable_device(pdev);
-	if (ret) {
-		IFCVF_ERR(pdev, "Failed to enable device\n");
-		return ret;
-	}
-
-	ret = pcim_iomap_regions(pdev, BIT(0) | BIT(2) | BIT(4),
-				 IFCVF_DRIVER_NAME);
-	if (ret) {
-		IFCVF_ERR(pdev, "Failed to request MMIO region\n");
-		return ret;
-	}
-
-	ret = dma_set_mask_and_coherent(dev, DMA_BIT_MASK(64));
-	if (ret) {
-		IFCVF_ERR(pdev, "No usable DMA configuration\n");
-		return ret;
-	}
-
-	ret = devm_add_action_or_reset(dev, ifcvf_free_irq_vectors, pdev);
-	if (ret) {
-		IFCVF_ERR(pdev,
-			  "Failed for adding devres for freeing irq vectors\n");
-		return ret;
-	}
-
+	ifcvf_mgmt_dev = container_of(mdev, struct ifcvf_vdpa_mgmt_dev, mdev);
+	if (ifcvf_mgmt_dev->adapter)
+		return -EOPNOTSUPP;
+
+	pdev = ifcvf_mgmt_dev->pdev;
+	dev = &pdev->dev;
 	adapter = vdpa_alloc_device(struct ifcvf_adapter, vdpa,
-				    dev, &ifc_vdpa_ops, NULL);
-	if (adapter == NULL) {
+				    dev, &ifc_vdpa_ops, name);
+	if (!adapter) {
 		IFCVF_ERR(pdev, "Failed to allocate vDPA structure");
 		return -ENOMEM;
 	}
 
-	pci_set_master(pdev);
-	pci_set_drvdata(pdev, adapter);
+	ifcvf_mgmt_dev->adapter = adapter;
+	pci_set_drvdata(pdev, ifcvf_mgmt_dev);
 
 	vf = &adapter->vf;
 	vf->dev_type = get_dev_type(pdev);
@@ -523,9 +515,10 @@ static int ifcvf_probe(struct pci_dev *pdev, const struct pci_device_id *id)
 
 	vf->hw_features = ifcvf_get_hw_features(vf);
 
-	ret = vdpa_register_device(&adapter->vdpa, IFCVF_MAX_QUEUE_PAIRS * 2);
+	adapter->vdpa.mdev = &ifcvf_mgmt_dev->mdev;
+	ret = _vdpa_register_device(&adapter->vdpa, IFCVF_MAX_QUEUE_PAIRS * 2);
 	if (ret) {
-		IFCVF_ERR(pdev, "Failed to register ifcvf to vdpa bus");
+		IFCVF_ERR(pdev, "Failed to register to vDPA bus");
 		goto err;
 	}
 
@@ -536,11 +529,100 @@ static int ifcvf_probe(struct pci_dev *pdev, const struct pci_device_id *id)
 	return ret;
 }
 
+static void ifcvf_vdpa_dev_del(struct vdpa_mgmt_dev *mdev, struct vdpa_device *dev)
+{
+	struct ifcvf_vdpa_mgmt_dev *ifcvf_mgmt_dev;
+
+	ifcvf_mgmt_dev = container_of(mdev, struct ifcvf_vdpa_mgmt_dev, mdev);
+	_vdpa_unregister_device(dev);
+	ifcvf_mgmt_dev->adapter = NULL;
+}
+
+static const struct vdpa_mgmtdev_ops ifcvf_vdpa_mgmt_dev_ops = {
+	.dev_add = ifcvf_vdpa_dev_add,
+	.dev_del = ifcvf_vdpa_dev_del
+};
+
+static int ifcvf_probe(struct pci_dev *pdev, const struct pci_device_id *id)
+{
+	struct ifcvf_vdpa_mgmt_dev *ifcvf_mgmt_dev;
+	struct device *dev = &pdev->dev;
+	u32 dev_type;
+	int ret;
+
+	ifcvf_mgmt_dev = kzalloc(sizeof(struct ifcvf_vdpa_mgmt_dev), GFP_KERNEL);
+	if (!ifcvf_mgmt_dev) {
+		IFCVF_ERR(pdev, "Failed to alloc memory for the vDPA management device\n");
+		return -ENOMEM;
+	}
+
+	dev_type = get_dev_type(pdev);
+	switch (dev_type) {
+	case VIRTIO_ID_NET:
+		ifcvf_mgmt_dev->mdev.id_table = id_table_net;
+		break;
+	case VIRTIO_ID_BLOCK:
+		ifcvf_mgmt_dev->mdev.id_table = id_table_blk;
+		break;
+	default:
+		IFCVF_ERR(pdev, "VIRTIO ID %u not supported\n", dev_type);
+		ret = -EOPNOTSUPP;
+		goto err;
+	}
+
+	ifcvf_mgmt_dev->mdev.ops = &ifcvf_vdpa_mgmt_dev_ops;
+	ifcvf_mgmt_dev->mdev.device = dev;
+	ifcvf_mgmt_dev->pdev = pdev;
+
+	ret = pcim_enable_device(pdev);
+	if (ret) {
+		IFCVF_ERR(pdev, "Failed to enable device\n");
+		goto err;
+	}
+
+	ret = pcim_iomap_regions(pdev, BIT(0) | BIT(2) | BIT(4),
+				 IFCVF_DRIVER_NAME);
+	if (ret) {
+		IFCVF_ERR(pdev, "Failed to request MMIO region\n");
+		goto err;
+	}
+
+	ret = dma_set_mask_and_coherent(dev, DMA_BIT_MASK(64));
+	if (ret) {
+		IFCVF_ERR(pdev, "No usable DMA configuration\n");
+		goto err;
+	}
+
+	ret = devm_add_action_or_reset(dev, ifcvf_free_irq_vectors, pdev);
+	if (ret) {
+		IFCVF_ERR(pdev,
+			  "Failed for adding devres for freeing irq vectors\n");
+		goto err;
+	}
+
+	pci_set_master(pdev);
+
+	ret = vdpa_mgmtdev_register(&ifcvf_mgmt_dev->mdev);
+	if (ret) {
+		IFCVF_ERR(pdev,
+			  "Failed to initialize the management interfaces\n");
+		goto err;
+	}
+
+	return 0;
+
+err:
+	kfree(ifcvf_mgmt_dev);
+	return ret;
+}
+
 static void ifcvf_remove(struct pci_dev *pdev)
 {
-	struct ifcvf_adapter *adapter = pci_get_drvdata(pdev);
+	struct ifcvf_vdpa_mgmt_dev *ifcvf_mgmt_dev;
 
-	vdpa_unregister_device(&adapter->vdpa);
+	ifcvf_mgmt_dev = pci_get_drvdata(pdev);
+	vdpa_mgmtdev_unregister(&ifcvf_mgmt_dev->mdev);
+	kfree(ifcvf_mgmt_dev);
 }
 
 static struct pci_device_id ifcvf_pci_ids[] = {
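To connect the two ends: the "vdpa dev add" / "vdpa dev del" commands shown in
the commit message reach the driver through the vdpa_mgmtdev_ops table
registered in ifcvf_probe() above; the vdpa core resolves the management
device named on the command line and invokes its dev_add()/dev_del()
callbacks. Below is a self-contained userspace caricature of that dispatch:
the types are simplified stand-ins and toy_dev_add()/toy_dev_del() are
illustrative, not the driver's code.

#include <stdio.h>
#include <string.h>

/* Simplified stand-ins; the real definitions live in include/linux/vdpa.h. */
struct vdpa_mgmt_dev;
struct vdpa_device { char name[32]; };

struct vdpa_mgmtdev_ops {
	int (*dev_add)(struct vdpa_mgmt_dev *mdev, const char *name);
	void (*dev_del)(struct vdpa_mgmt_dev *mdev, struct vdpa_device *dev);
};

struct vdpa_mgmt_dev {
	const char *bus_dev_name;		/* e.g. "pci/0000:01:00.5" */
	const struct vdpa_mgmtdev_ops *ops;
};

/* Toy callbacks standing in for ifcvf_vdpa_dev_add()/ifcvf_vdpa_dev_del(). */
static int toy_dev_add(struct vdpa_mgmt_dev *mdev, const char *name)
{
	printf("%s: create vDPA device '%s'\n", mdev->bus_dev_name, name);
	return 0;
}

static void toy_dev_del(struct vdpa_mgmt_dev *mdev, struct vdpa_device *dev)
{
	printf("%s: delete vDPA device '%s'\n", mdev->bus_dev_name, dev->name);
}

static const struct vdpa_mgmtdev_ops toy_ops = {
	.dev_add = toy_dev_add,
	.dev_del = toy_dev_del,
};

int main(void)
{
	/* "vdpa dev add mgmtdev pci/0000:01:00.5 name vdpa0" boils down to
	 * the core looking up the mgmtdev and calling ops->dev_add(). */
	struct vdpa_mgmt_dev mdev = {
		.bus_dev_name = "pci/0000:01:00.5",
		.ops = &toy_ops,
	};
	struct vdpa_device vdev;

	strcpy(vdev.name, "vdpa0");
	mdev.ops->dev_add(&mdev, "vdpa0");
	mdev.ops->dev_del(&mdev, &vdev);	/* the "vdpa dev del vdpa0" path */
	return 0;
}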