From patchwork Fri Oct 27 17:00:33 2023
X-Patchwork-Id: 13438634
From: Reinette Chatre <reinette.chatre@intel.com>
To: jgg@nvidia.com, yishaih@nvidia.com, shameerali.kolothum.thodi@huawei.com,
    kevin.tian@intel.com, alex.williamson@redhat.com
Cc: kvm@vger.kernel.org, dave.jiang@intel.com, jing2.liu@intel.com,
    ashok.raj@intel.com, fenghua.yu@intel.com, tom.zanussi@linux.intel.com,
    reinette.chatre@intel.com, linux-kernel@vger.kernel.org, patches@lists.linux.dev
Subject: [RFC PATCH V3 01/26] PCI/MSI: Provide stubs for IMS functions
Date: Fri, 27 Oct 2023 10:00:33 -0700
Message-Id: <4752a2f147eae4683770ad71e0b01934588c2442.1698422237.git.reinette.chatre@intel.com>

The IMS related functions (pci_create_ims_domain(), pci_ims_alloc_irq(),
and pci_ims_free_irq()) are not declared when CONFIG_PCI_MSI is disabled.

Provide definitions of these functions for use when callers are compiled
with CONFIG_PCI_MSI disabled.

Fixes: 0194425af0c8 ("PCI/MSI: Provide IMS (Interrupt Message Store) support")
Fixes: c9e5bea27383 ("PCI/MSI: Provide pci_ims_alloc/free_irq()")
Signed-off-by: Reinette Chatre <reinette.chatre@intel.com>
Cc: stable@vger.kernel.org # v6.2+
---
Patch has been submitted separately and is queued for inclusion:
https://lore.kernel.org/lkml/169757242009.3135.5502383859327174030.tip-bot2@tip-bot2/
It is included in this series in support of automated testing by bots
picking up the series from this submission.

 include/linux/pci.h | 34 ++++++++++++++++++++++++++--------
 1 file changed, 26 insertions(+), 8 deletions(-)

diff --git a/include/linux/pci.h b/include/linux/pci.h
index 8c7c2c3c6c65..b56417276042 100644
--- a/include/linux/pci.h
+++ b/include/linux/pci.h
@@ -1624,6 +1624,8 @@ struct msix_entry {
     u16 entry;  /* Driver uses to specify entry, OS writes */
 };
 
+struct msi_domain_template;
+
 #ifdef CONFIG_PCI_MSI
 int pci_msi_vec_count(struct pci_dev *dev);
 void pci_disable_msi(struct pci_dev *dev);
@@ -1656,6 +1658,11 @@ void pci_msix_free_irq(struct pci_dev *pdev, struct msi_map map);
 void pci_free_irq_vectors(struct pci_dev *dev);
 int pci_irq_vector(struct pci_dev *dev, unsigned int nr);
 const struct cpumask *pci_irq_get_affinity(struct pci_dev *pdev, int vec);
+bool pci_create_ims_domain(struct pci_dev *pdev, const struct msi_domain_template *template,
+                           unsigned int hwsize, void *data);
+struct msi_map pci_ims_alloc_irq(struct pci_dev *pdev, union msi_instance_cookie *icookie,
+                                 const struct irq_affinity_desc *affdesc);
+void pci_ims_free_irq(struct pci_dev *pdev, struct msi_map map);
 #else
 static inline int pci_msi_vec_count(struct pci_dev *dev) { return -ENOSYS; }
@@ -1719,6 +1726,25 @@ static inline const struct cpumask *pci_irq_get_affinity(struct pci_dev *pdev,
 {
     return cpu_possible_mask;
 }
+
+static inline bool pci_create_ims_domain(struct pci_dev *pdev,
+                                         const struct msi_domain_template *template,
+                                         unsigned int hwsize, void *data)
+{ return false; }
+
+static inline struct msi_map pci_ims_alloc_irq(struct pci_dev *pdev,
+                                               union msi_instance_cookie *icookie,
+                                               const struct irq_affinity_desc *affdesc)
+{
+    struct msi_map map = { .index = -ENOSYS, };
+
+    return map;
+}
+
+static inline void pci_ims_free_irq(struct pci_dev *pdev, struct msi_map map)
+{
+}
+
 #endif
 
 /**
@@ -2616,14 +2642,6 @@ static inline bool pci_is_thunderbolt_attached(struct pci_dev *pdev)
 void pci_uevent_ers(struct pci_dev *pdev, enum pci_ers_result err_type);
 #endif
 
-struct msi_domain_template;
-
-bool pci_create_ims_domain(struct pci_dev *pdev, const struct msi_domain_template *template,
-                           unsigned int hwsize, void *data);
-struct msi_map pci_ims_alloc_irq(struct pci_dev *pdev, union msi_instance_cookie *icookie,
-                                 const struct irq_affinity_desc *affdesc);
-void pci_ims_free_irq(struct pci_dev *pdev, struct msi_map map);
-
 #include 
 
 #define pci_printk(level, pdev, fmt, arg...) \
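
As an illustration of the stub behaviour above (not part of the patch; the
"example_" names and the hwsize value are hypothetical), a caller can be
written once and compile in both configurations. With CONFIG_PCI_MSI=n the
stubs make pci_create_ims_domain() return false and pci_ims_alloc_irq()
return a map with .index == -ENOSYS, so the caller simply falls back:

/* Illustrative sketch only, not part of this series. */
#include <linux/pci.h>
#include <linux/msi_api.h>

static int example_setup_ims(struct pci_dev *pdev,
                             const struct msi_domain_template *tmpl,
                             void *data)
{
    union msi_instance_cookie icookie = { .value = 0 };
    struct msi_map map;

    if (!pci_create_ims_domain(pdev, tmpl, 64 /* hypothetical size */, data))
        return -EOPNOTSUPP;        /* stub path when CONFIG_PCI_MSI=n */

    map = pci_ims_alloc_irq(pdev, &icookie, NULL);
    if (map.index < 0)
        return map.index;          /* -ENOSYS from the stub */

    /* map.virq is now a usable Linux IRQ number */
    return 0;
}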
From patchwork Fri Oct 27 17:00:34 2023
X-Patchwork-Id: 13438636
From: Reinette Chatre <reinette.chatre@intel.com>
Subject: [RFC PATCH V3 02/26] vfio/pci: Move PCI specific check from wrapper to PCI function
Date: Fri, 27 Oct 2023 10:00:34 -0700
Message-Id: <53d90183704bf3bd633a70983ef9ca8c7c341777.1698422237.git.reinette.chatre@intel.com>

vfio_pci_set_irqs_ioctl() uses a PCI device specific check to determine
if the PCI specific vfio_pci_set_err_trigger() should be called.

Move the PCI device specific check into the PCI specific
vfio_pci_set_err_trigger() to make it easier for vfio_pci_set_irqs_ioctl()
to become a frontend for interrupt backends for PCI devices as well as
virtual devices.

Signed-off-by: Reinette Chatre <reinette.chatre@intel.com>
---
No changes since RFC V2.
 drivers/vfio/pci/vfio_pci_intrs.c | 6 ++++--
 1 file changed, 4 insertions(+), 2 deletions(-)

diff --git a/drivers/vfio/pci/vfio_pci_intrs.c b/drivers/vfio/pci/vfio_pci_intrs.c
index cbb4bcbfbf83..b5b1c09bef25 100644
--- a/drivers/vfio/pci/vfio_pci_intrs.c
+++ b/drivers/vfio/pci/vfio_pci_intrs.c
@@ -758,6 +758,9 @@ static int vfio_pci_set_err_trigger(struct vfio_pci_core_device *vdev,
                                     unsigned index, unsigned start,
                                     unsigned count, uint32_t flags, void *data)
 {
+    if (!pci_is_pcie(vdev->pdev))
+        return -ENOTTY;
+
     if (index != VFIO_PCI_ERR_IRQ_INDEX || start != 0 || count > 1)
         return -EINVAL;
 
@@ -813,8 +816,7 @@ int vfio_pci_set_irqs_ioctl(struct vfio_pci_core_device *vdev, uint32_t flags,
     case VFIO_PCI_ERR_IRQ_INDEX:
         switch (flags & VFIO_IRQ_SET_ACTION_TYPE_MASK) {
         case VFIO_IRQ_SET_ACTION_TRIGGER:
-            if (pci_is_pcie(vdev->pdev))
-                func = vfio_pci_set_err_trigger;
+            func = vfio_pci_set_err_trigger;
             break;
         }
         break;
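
A minimal sketch of the resulting pattern (illustrative only; the
"example_" name is hypothetical): the frontend assigns the callback
unconditionally and the backend callback itself rejects devices it
cannot serve, with -ENOTTY as in the hunk above.

/* Illustrative sketch only, not part of this series. */
static int example_err_trigger_backend(struct vfio_pci_core_device *vdev,
                                       unsigned index, unsigned start,
                                       unsigned count, uint32_t flags,
                                       void *data)
{
    if (!pci_is_pcie(vdev->pdev))
        return -ENOTTY;        /* backend-specific capability check */

    /* ... wire up the error trigger as vfio_pci_set_err_trigger() does ... */
    return 0;
}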
From patchwork Fri Oct 27 17:00:35 2023
X-Patchwork-Id: 13438635
From: Reinette Chatre <reinette.chatre@intel.com>
Subject: [RFC PATCH V3 03/26] vfio/pci: Use unsigned int instead of unsigned
Date: Fri, 27 Oct 2023 10:00:35 -0700
Message-Id: <640dad3021715f4585f0c0ccb57224826cc82b68.1698422237.git.reinette.chatre@intel.com>

checkpatch.pl warns about usage of bare unsigned. Change unsigned to
unsigned int as a preparatory change to avoid checkpatch.pl producing
several warnings as the work adding support for backends to VFIO
interrupt management progresses.

Signed-off-by: Reinette Chatre <reinette.chatre@intel.com>
---
Changes since RFC V2:
- Include vfio_msi_set_block() in changes.

Note to maintainers:
After this change checkpatch.pl still has a few complaints about existing
code using int32_t instead of s32. This was not changed and these
warnings remain.

 drivers/vfio/pci/vfio_pci_intrs.c | 42 ++++++++++++++++++-------------
 1 file changed, 24 insertions(+), 18 deletions(-)

diff --git a/drivers/vfio/pci/vfio_pci_intrs.c b/drivers/vfio/pci/vfio_pci_intrs.c
index b5b1c09bef25..9f4f3ab48f87 100644
--- a/drivers/vfio/pci/vfio_pci_intrs.c
+++ b/drivers/vfio/pci/vfio_pci_intrs.c
@@ -503,8 +503,9 @@ static int vfio_msi_set_vector_signal(struct vfio_pci_core_device *vdev,
     return ret;
 }
 
-static int vfio_msi_set_block(struct vfio_pci_core_device *vdev, unsigned start,
-                              unsigned count, int32_t *fds, bool msix)
+static int vfio_msi_set_block(struct vfio_pci_core_device *vdev,
+                              unsigned int start, unsigned int count,
+                              int32_t *fds, bool msix)
 {
     unsigned int i, j;
     int ret = 0;
@@ -553,8 +554,9 @@ static void vfio_msi_disable(struct vfio_pci_core_device *vdev, bool msix)
  * IOCTL support
  */
 static int vfio_pci_set_intx_unmask(struct vfio_pci_core_device *vdev,
-                                    unsigned index, unsigned start,
-                                    unsigned count, uint32_t flags, void *data)
+                                    unsigned int index, unsigned int start,
+                                    unsigned int count, uint32_t flags,
+                                    void *data)
 {
     if (!is_intx(vdev) || start != 0 || count != 1)
         return -EINVAL;
@@ -584,8 +586,8 @@ static int vfio_pci_set_intx_unmask(struct vfio_pci_core_device *vdev,
 }
 
 static int vfio_pci_set_intx_mask(struct vfio_pci_core_device *vdev,
-                                  unsigned index, unsigned start,
-                                  unsigned count, uint32_t flags, void *data)
+                                  unsigned int index, unsigned int start,
+                                  unsigned int count, uint32_t flags, void *data)
 {
     if (!is_intx(vdev) || start != 0 || count != 1)
         return -EINVAL;
@@ -604,8 +606,9 @@ static int vfio_pci_set_intx_mask(struct vfio_pci_core_device *vdev,
 }
 
 static int vfio_pci_set_intx_trigger(struct vfio_pci_core_device *vdev,
-                                     unsigned index, unsigned start,
-                                     unsigned count, uint32_t flags, void *data)
+                                     unsigned int index, unsigned int start,
+                                     unsigned int count, uint32_t flags,
+                                     void *data)
 {
     if (is_intx(vdev) && !count && (flags & VFIO_IRQ_SET_DATA_NONE)) {
         vfio_intx_disable(vdev);
@@ -647,8 +650,9 @@ static int vfio_pci_set_intx_trigger(struct vfio_pci_core_device *vdev,
 }
 
 static int vfio_pci_set_msi_trigger(struct vfio_pci_core_device *vdev,
-                                    unsigned index, unsigned start,
-                                    unsigned count, uint32_t flags, void *data)
+                                    unsigned int index, unsigned int start,
+                                    unsigned int count, uint32_t flags,
+                                    void *data)
 {
     struct vfio_pci_irq_ctx *ctx;
     unsigned int i;
@@ -755,8 +759,9 @@ static int vfio_pci_set_ctx_trigger_single(struct eventfd_ctx **ctx,
 }
 
 static int vfio_pci_set_err_trigger(struct vfio_pci_core_device *vdev,
-                                    unsigned index, unsigned start,
-                                    unsigned count, uint32_t flags, void *data)
+                                    unsigned int index, unsigned int start,
+                                    unsigned int count, uint32_t flags,
+                                    void *data)
 {
     if (!pci_is_pcie(vdev->pdev))
         return -ENOTTY;
@@ -769,8 +774,9 @@ static int vfio_pci_set_err_trigger(struct vfio_pci_core_device *vdev,
 }
 
 static int vfio_pci_set_req_trigger(struct vfio_pci_core_device *vdev,
-                                    unsigned index, unsigned start,
-                                    unsigned count, uint32_t flags, void *data)
+                                    unsigned int index, unsigned int start,
+                                    unsigned int count, uint32_t flags,
+                                    void *data)
 {
     if (index != VFIO_PCI_REQ_IRQ_INDEX || start != 0 || count > 1)
         return -EINVAL;
@@ -780,11 +786,11 @@ static int vfio_pci_set_req_trigger(struct vfio_pci_core_device *vdev,
 }
 
 int vfio_pci_set_irqs_ioctl(struct vfio_pci_core_device *vdev, uint32_t flags,
-                            unsigned index, unsigned start, unsigned count,
-                            void *data)
+                            unsigned int index, unsigned int start,
+                            unsigned int count, void *data)
 {
-    int (*func)(struct vfio_pci_core_device *vdev, unsigned index,
-                unsigned start, unsigned count, uint32_t flags,
+    int (*func)(struct vfio_pci_core_device *vdev, unsigned int index,
+                unsigned int start, unsigned int count, uint32_t flags,
                 void *data) = NULL;
 
     switch (index) {
From patchwork Fri Oct 27 17:00:36 2023
X-Patchwork-Id: 13438639
From: Reinette Chatre <reinette.chatre@intel.com>
Subject: [RFC PATCH V3 04/26] vfio/pci: Make core interrupt callbacks accessible to all virtual devices
Date: Fri, 27 Oct 2023 10:00:36 -0700
Message-Id: <52141bbf2a7e7c4d0a9ce74e2f652b8f4e3211fd.1698422237.git.reinette.chatre@intel.com>

The functions handling actions on interrupts for a virtual PCI device,
triggered by the VFIO_DEVICE_SET_IRQS ioctl(), expect to act on a
passthrough PCI device represented by a struct vfio_pci_core_device.

A virtual device can support MSI-X while not being a passthrough PCI
device and thus not be represented by a struct vfio_pci_core_device. To
support MSI-X in virtual devices their drivers need to be able to
interact with the MSI-X interrupt management, so interrupt management
should not require a struct vfio_pci_core_device.

Introduce struct vfio_pci_intr_ctx, which contains a virtual device's
interrupt context to be managed by an interrupt management backend. The
first supported backend is the existing PCI device interrupt management.
Modify the core VFIO PCI interrupt management functions to expect this
structure. As the backend managing interrupts of passthrough PCI
devices, the existing VFIO PCI functions continue to operate on an
actual PCI device represented by struct vfio_pci_core_device that is
provided via a private pointer.

More members will be added to struct vfio_pci_intr_ctx as members unique
to interrupt context are transitioned from struct vfio_pci_core_device.

Signed-off-by: Reinette Chatre <reinette.chatre@intel.com>
---
Changes since RFC V2:
- Improve changelog and comments.
 drivers/vfio/pci/vfio_pci_core.c  |  7 ++++---
 drivers/vfio/pci/vfio_pci_intrs.c | 29 ++++++++++++++++++++---------
 drivers/vfio/pci/vfio_pci_priv.h  |  2 +-
 include/linux/vfio_pci_core.h     |  9 +++++++++
 4 files changed, 34 insertions(+), 13 deletions(-)

diff --git a/drivers/vfio/pci/vfio_pci_core.c b/drivers/vfio/pci/vfio_pci_core.c
index 1929103ee59a..bb8181444c41 100644
--- a/drivers/vfio/pci/vfio_pci_core.c
+++ b/drivers/vfio/pci/vfio_pci_core.c
@@ -594,7 +594,7 @@ void vfio_pci_core_disable(struct vfio_pci_core_device *vdev)
     /* Stop the device from further DMA */
     pci_clear_master(pdev);
 
-    vfio_pci_set_irqs_ioctl(vdev, VFIO_IRQ_SET_DATA_NONE |
+    vfio_pci_set_irqs_ioctl(&vdev->intr_ctx, VFIO_IRQ_SET_DATA_NONE |
                             VFIO_IRQ_SET_ACTION_TRIGGER,
                             vdev->irq_type, 0, 0, NULL);
 
@@ -1216,8 +1216,8 @@ static int vfio_pci_ioctl_set_irqs(struct vfio_pci_core_device *vdev,
 
     mutex_lock(&vdev->igate);
 
-    ret = vfio_pci_set_irqs_ioctl(vdev, hdr.flags, hdr.index, hdr.start,
-                                  hdr.count, data);
+    ret = vfio_pci_set_irqs_ioctl(&vdev->intr_ctx, hdr.flags, hdr.index,
+                                  hdr.start, hdr.count, data);
 
     mutex_unlock(&vdev->igate);
 
     kfree(data);
@@ -2166,6 +2166,7 @@ int vfio_pci_core_init_dev(struct vfio_device *core_vdev)
     INIT_LIST_HEAD(&vdev->sriov_pfs_item);
     init_rwsem(&vdev->memory_lock);
     xa_init(&vdev->ctx);
+    vdev->intr_ctx.priv = vdev;
 
     return 0;
 }
diff --git a/drivers/vfio/pci/vfio_pci_intrs.c b/drivers/vfio/pci/vfio_pci_intrs.c
index 9f4f3ab48f87..af1053873eaa 100644
--- a/drivers/vfio/pci/vfio_pci_intrs.c
+++ b/drivers/vfio/pci/vfio_pci_intrs.c
@@ -553,11 +553,13 @@ static void vfio_msi_disable(struct vfio_pci_core_device *vdev, bool msix)
 /*
  * IOCTL support
  */
-static int vfio_pci_set_intx_unmask(struct vfio_pci_core_device *vdev,
+static int vfio_pci_set_intx_unmask(struct vfio_pci_intr_ctx *intr_ctx,
                                     unsigned int index, unsigned int start,
                                     unsigned int count, uint32_t flags,
                                     void *data)
 {
+    struct vfio_pci_core_device *vdev = intr_ctx->priv;
+
     if (!is_intx(vdev) || start != 0 || count != 1)
         return -EINVAL;
 
@@ -585,10 +587,12 @@ static int vfio_pci_set_intx_unmask(struct vfio_pci_core_device *vdev,
     return 0;
 }
 
-static int vfio_pci_set_intx_mask(struct vfio_pci_core_device *vdev,
+static int vfio_pci_set_intx_mask(struct vfio_pci_intr_ctx *intr_ctx,
                                   unsigned int index, unsigned int start,
                                   unsigned int count, uint32_t flags, void *data)
 {
+    struct vfio_pci_core_device *vdev = intr_ctx->priv;
+
     if (!is_intx(vdev) || start != 0 || count != 1)
         return -EINVAL;
 
@@ -605,11 +609,13 @@ static int vfio_pci_set_intx_mask(struct vfio_pci_core_device *vdev,
     return 0;
 }
 
-static int vfio_pci_set_intx_trigger(struct vfio_pci_core_device *vdev,
+static int vfio_pci_set_intx_trigger(struct vfio_pci_intr_ctx *intr_ctx,
                                      unsigned int index, unsigned int start,
                                      unsigned int count, uint32_t flags,
                                      void *data)
 {
+    struct vfio_pci_core_device *vdev = intr_ctx->priv;
+
     if (is_intx(vdev) && !count && (flags & VFIO_IRQ_SET_DATA_NONE)) {
         vfio_intx_disable(vdev);
         return 0;
@@ -649,11 +655,12 @@ static int vfio_pci_set_intx_trigger(struct vfio_pci_core_device *vdev,
     return 0;
 }
 
-static int vfio_pci_set_msi_trigger(struct vfio_pci_core_device *vdev,
+static int vfio_pci_set_msi_trigger(struct vfio_pci_intr_ctx *intr_ctx,
                                     unsigned int index, unsigned int start,
                                     unsigned int count, uint32_t flags,
                                     void *data)
 {
+    struct vfio_pci_core_device *vdev = intr_ctx->priv;
     struct vfio_pci_irq_ctx *ctx;
     unsigned int i;
     bool msix = (index == VFIO_PCI_MSIX_IRQ_INDEX) ?
                                                     true : false;
@@ -758,11 +765,13 @@ static int vfio_pci_set_ctx_trigger_single(struct eventfd_ctx **ctx,
     return -EINVAL;
 }
 
-static int vfio_pci_set_err_trigger(struct vfio_pci_core_device *vdev,
+static int vfio_pci_set_err_trigger(struct vfio_pci_intr_ctx *intr_ctx,
                                     unsigned int index, unsigned int start,
                                     unsigned int count, uint32_t flags,
                                     void *data)
 {
+    struct vfio_pci_core_device *vdev = intr_ctx->priv;
+
     if (!pci_is_pcie(vdev->pdev))
         return -ENOTTY;
@@ -773,11 +782,13 @@ static int vfio_pci_set_err_trigger(struct vfio_pci_core_device *vdev,
                                            count, flags, data);
 }
 
-static int vfio_pci_set_req_trigger(struct vfio_pci_core_device *vdev,
+static int vfio_pci_set_req_trigger(struct vfio_pci_intr_ctx *intr_ctx,
                                     unsigned int index, unsigned int start,
                                     unsigned int count, uint32_t flags,
                                     void *data)
 {
+    struct vfio_pci_core_device *vdev = intr_ctx->priv;
+
     if (index != VFIO_PCI_REQ_IRQ_INDEX || start != 0 || count > 1)
         return -EINVAL;
@@ -785,11 +796,11 @@ static int vfio_pci_set_req_trigger(struct vfio_pci_core_device *vdev,
                                            count, flags, data);
 }
 
-int vfio_pci_set_irqs_ioctl(struct vfio_pci_core_device *vdev, uint32_t flags,
+int vfio_pci_set_irqs_ioctl(struct vfio_pci_intr_ctx *intr_ctx, uint32_t flags,
                             unsigned int index, unsigned int start,
                             unsigned int count, void *data)
 {
-    int (*func)(struct vfio_pci_core_device *vdev, unsigned int index,
+    int (*func)(struct vfio_pci_intr_ctx *intr_ctx, unsigned int index,
                 unsigned int start, unsigned int count, uint32_t flags,
                 void *data) = NULL;
@@ -838,5 +849,5 @@ int vfio_pci_set_irqs_ioctl(struct vfio_pci_core_device *vdev, uint32_t flags,
     if (!func)
         return -ENOTTY;
 
-    return func(vdev, index, start, count, flags, data);
+    return func(intr_ctx, index, start, count, flags, data);
 }
diff --git a/drivers/vfio/pci/vfio_pci_priv.h b/drivers/vfio/pci/vfio_pci_priv.h
index 5e4fa69aee16..6dddcfe7ab19 100644
--- a/drivers/vfio/pci/vfio_pci_priv.h
+++ b/drivers/vfio/pci/vfio_pci_priv.h
@@ -26,7 +26,7 @@ struct vfio_pci_ioeventfd {
 bool vfio_pci_intx_mask(struct vfio_pci_core_device *vdev);
 void vfio_pci_intx_unmask(struct vfio_pci_core_device *vdev);
 
-int vfio_pci_set_irqs_ioctl(struct vfio_pci_core_device *vdev, uint32_t flags,
+int vfio_pci_set_irqs_ioctl(struct vfio_pci_intr_ctx *intr_ctx, uint32_t flags,
                             unsigned index, unsigned start, unsigned count,
                             void *data);
 
diff --git a/include/linux/vfio_pci_core.h b/include/linux/vfio_pci_core.h
index 562e8754869d..38355a4817fd 100644
--- a/include/linux/vfio_pci_core.h
+++ b/include/linux/vfio_pci_core.h
@@ -49,6 +49,14 @@ struct vfio_pci_region {
     u32 flags;
 };
 
+/*
+ * Interrupt context of virtual PCI device
+ * @priv: Private data of interrupt management backend
+ */
+struct vfio_pci_intr_ctx {
+    void *priv;
+};
+
 struct vfio_pci_core_device {
     struct vfio_device  vdev;
     struct pci_dev      *pdev;
@@ -96,6 +104,7 @@ struct vfio_pci_core_device {
     struct mutex        vma_lock;
     struct list_head    vma_list;
     struct rw_semaphore memory_lock;
+    struct vfio_pci_intr_ctx intr_ctx;
 };
 
 /* Will be exported for vfio pci drivers usage */
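
A short sketch of the pattern this patch establishes (illustrative only;
the "example_" name is hypothetical): a backend callback receives the
generic interrupt context and recovers its own device representation from
the private pointer stored at initialization time.

/* Illustrative sketch only, not part of this series. */
static int example_backend_cb(struct vfio_pci_intr_ctx *intr_ctx,
                              unsigned int index, unsigned int start,
                              unsigned int count, uint32_t flags, void *data)
{
    struct vfio_pci_core_device *vdev = intr_ctx->priv;

    /* operate on vdev exactly as the existing PCI backend does */
    return 0;
}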
From patchwork Fri Oct 27 17:00:37 2023
X-Patchwork-Id: 13438640
From: Reinette Chatre <reinette.chatre@intel.com>
Subject: [RFC PATCH V3 05/26] vfio/pci: Split PCI interrupt management into front and backend
Date: Fri, 27 Oct 2023 10:00:37 -0700
Message-Id: <8362e7bf5af9ac0e6a075750a08e93cbdc08036f.1698422237.git.reinette.chatre@intel.com>

VFIO PCI interrupt management supports passthrough PCI devices with an
interrupt in the guest backed by the same type of interrupt on the PCI
device.

Interrupt management can be more flexible. An interrupt in the guest may
be backed by a different type of interrupt on the host, for example MSI-X
in the guest can be backed by IMS on the host, or not backed by a device
interrupt at all when the interrupt is emulated by the virtual device
driver.

The main entry to guest interrupt management is via the
VFIO_DEVICE_SET_IRQS ioctl(). By default the work is passed to interrupt
management for PCI devices, with the PCI specific functions called
directly.

Make the ioctl() configurable to support different interrupt management
backends. This is accomplished by introducing interrupt context specific
callbacks that are initialized by the virtual device driver and then
triggered via the ioctl().

The introduction of virtual device driver specific callbacks requires
their initialization.
Create a dedicated interrupt context initialization function to avoid
mixing more interrupt context initialization with general virtual device
driver initialization.

Signed-off-by: Reinette Chatre <reinette.chatre@intel.com>
---
Changes since RFC V2:
- Improve changelog and comments.
- Make vfio_pci_intr_ops static.

 drivers/vfio/pci/vfio_pci_core.c  |  2 +-
 drivers/vfio/pci/vfio_pci_intrs.c | 35 +++++++++++++++++++++++++------
 include/linux/vfio_pci_core.h     | 25 ++++++++++++++++++++++
 3 files changed, 55 insertions(+), 7 deletions(-)

diff --git a/drivers/vfio/pci/vfio_pci_core.c b/drivers/vfio/pci/vfio_pci_core.c
index bb8181444c41..310259bbacae 100644
--- a/drivers/vfio/pci/vfio_pci_core.c
+++ b/drivers/vfio/pci/vfio_pci_core.c
@@ -2166,7 +2166,7 @@ int vfio_pci_core_init_dev(struct vfio_device *core_vdev)
     INIT_LIST_HEAD(&vdev->sriov_pfs_item);
     init_rwsem(&vdev->memory_lock);
     xa_init(&vdev->ctx);
-    vdev->intr_ctx.priv = vdev;
+    vfio_pci_init_intr_ctx(vdev, &vdev->intr_ctx);
 
     return 0;
 }
diff --git a/drivers/vfio/pci/vfio_pci_intrs.c b/drivers/vfio/pci/vfio_pci_intrs.c
index af1053873eaa..96587acfebf0 100644
--- a/drivers/vfio/pci/vfio_pci_intrs.c
+++ b/drivers/vfio/pci/vfio_pci_intrs.c
@@ -796,6 +796,23 @@ static int vfio_pci_set_req_trigger(struct vfio_pci_intr_ctx *intr_ctx,
                                            count, flags, data);
 }
 
+static struct vfio_pci_intr_ops vfio_pci_intr_ops = {
+    .set_intx_mask = vfio_pci_set_intx_mask,
+    .set_intx_unmask = vfio_pci_set_intx_unmask,
+    .set_intx_trigger = vfio_pci_set_intx_trigger,
+    .set_msi_trigger = vfio_pci_set_msi_trigger,
+    .set_err_trigger = vfio_pci_set_err_trigger,
+    .set_req_trigger = vfio_pci_set_req_trigger,
+};
+
+void vfio_pci_init_intr_ctx(struct vfio_pci_core_device *vdev,
+                            struct vfio_pci_intr_ctx *intr_ctx)
+{
+    intr_ctx->ops = &vfio_pci_intr_ops;
+    intr_ctx->priv = vdev;
+}
+EXPORT_SYMBOL_GPL(vfio_pci_init_intr_ctx);
+
 int vfio_pci_set_irqs_ioctl(struct vfio_pci_intr_ctx *intr_ctx, uint32_t flags,
                             unsigned int index, unsigned int start,
                             unsigned int count, void *data)
@@ -808,13 +825,16 @@ int vfio_pci_set_irqs_ioctl(struct vfio_pci_intr_ctx *intr_ctx, uint32_t flags,
     case VFIO_PCI_INTX_IRQ_INDEX:
         switch (flags & VFIO_IRQ_SET_ACTION_TYPE_MASK) {
         case VFIO_IRQ_SET_ACTION_MASK:
-            func = vfio_pci_set_intx_mask;
+            if (intr_ctx->ops->set_intx_mask)
+                func = intr_ctx->ops->set_intx_mask;
             break;
         case VFIO_IRQ_SET_ACTION_UNMASK:
-            func = vfio_pci_set_intx_unmask;
+            if (intr_ctx->ops->set_intx_unmask)
+                func = intr_ctx->ops->set_intx_unmask;
             break;
         case VFIO_IRQ_SET_ACTION_TRIGGER:
-            func = vfio_pci_set_intx_trigger;
+            if (intr_ctx->ops->set_intx_trigger)
+                func = intr_ctx->ops->set_intx_trigger;
             break;
         }
         break;
@@ -826,21 +846,24 @@ int vfio_pci_set_irqs_ioctl(struct vfio_pci_intr_ctx *intr_ctx, uint32_t flags,
             /* XXX Need masking support exported */
             break;
         case VFIO_IRQ_SET_ACTION_TRIGGER:
-            func = vfio_pci_set_msi_trigger;
+            if (intr_ctx->ops->set_msi_trigger)
+                func = intr_ctx->ops->set_msi_trigger;
             break;
         }
         break;
     case VFIO_PCI_ERR_IRQ_INDEX:
         switch (flags & VFIO_IRQ_SET_ACTION_TYPE_MASK) {
         case VFIO_IRQ_SET_ACTION_TRIGGER:
-            func = vfio_pci_set_err_trigger;
+            if (intr_ctx->ops->set_err_trigger)
+                func = intr_ctx->ops->set_err_trigger;
             break;
         }
         break;
     case VFIO_PCI_REQ_IRQ_INDEX:
         switch (flags & VFIO_IRQ_SET_ACTION_TYPE_MASK) {
         case VFIO_IRQ_SET_ACTION_TRIGGER:
-            func = vfio_pci_set_req_trigger;
+            if (intr_ctx->ops->set_req_trigger)
+                func = intr_ctx->ops->set_req_trigger;
             break;
         }
         break;
diff --git a/include/linux/vfio_pci_core.h b/include/linux/vfio_pci_core.h
index 38355a4817fd..d3fa0e49a1a8 100644
--- a/include/linux/vfio_pci_core.h
+++ b/include/linux/vfio_pci_core.h
@@ -51,12 +51,35 @@ struct vfio_pci_region {
 
 /*
  * Interrupt context of virtual PCI device
+ * @ops: Interrupt management backend functions
  * @priv: Private data of interrupt management backend
  */
 struct vfio_pci_intr_ctx {
+    const struct vfio_pci_intr_ops *ops;
     void *priv;
 };
 
+struct vfio_pci_intr_ops {
+    int (*set_intx_mask)(struct vfio_pci_intr_ctx *intr_ctx,
+                         unsigned int index, unsigned int start,
+                         unsigned int count, uint32_t flags, void *data);
+    int (*set_intx_unmask)(struct vfio_pci_intr_ctx *intr_ctx,
+                           unsigned int index, unsigned int start,
+                           unsigned int count, uint32_t flags, void *data);
+    int (*set_intx_trigger)(struct vfio_pci_intr_ctx *intr_ctx,
+                            unsigned int index, unsigned int start,
+                            unsigned int count, uint32_t flags, void *data);
+    int (*set_msi_trigger)(struct vfio_pci_intr_ctx *intr_ctx,
+                           unsigned int index, unsigned int start,
+                           unsigned int count, uint32_t flags, void *data);
+    int (*set_err_trigger)(struct vfio_pci_intr_ctx *intr_ctx,
+                           unsigned int index, unsigned int start,
+                           unsigned int count, uint32_t flags, void *data);
+    int (*set_req_trigger)(struct vfio_pci_intr_ctx *intr_ctx,
+                           unsigned int index, unsigned int start,
+                           unsigned int count, uint32_t flags, void *data);
+};
+
 struct vfio_pci_core_device {
     struct vfio_device  vdev;
     struct pci_dev      *pdev;
@@ -124,6 +147,8 @@ int vfio_pci_core_sriov_configure(struct vfio_pci_core_device *vdev,
                                   int nr_virtfn);
 long vfio_pci_core_ioctl(struct vfio_device *core_vdev, unsigned int cmd,
                          unsigned long arg);
+void vfio_pci_init_intr_ctx(struct vfio_pci_core_device *vdev,
+                            struct vfio_pci_intr_ctx *intr_ctx);
 int vfio_pci_core_ioctl_feature(struct vfio_device *device, u32 flags,
                                 void __user *arg, size_t argsz);
 ssize_t vfio_pci_core_read(struct vfio_device *core_vdev, char __user *buf,
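
To illustrate the backend model this patch introduces (a sketch only; the
"example_" driver, structure, and callback names are hypothetical and not
part of this series), a virtual device driver could install its own ops
table and private pointer, after which vfio_pci_set_irqs_ioctl() would
dispatch VFIO_DEVICE_SET_IRQS requests to those callbacks:

/* Illustrative sketch only, not part of this series. */
struct example_virt_device {
    struct vfio_pci_intr_ctx intr_ctx;
    /* ... device specific state ... */
};

static int example_virt_set_trigger(struct vfio_pci_intr_ctx *intr_ctx,
                                    unsigned int index, unsigned int start,
                                    unsigned int count, uint32_t flags,
                                    void *data)
{
    struct example_virt_device *evdev = intr_ctx->priv;

    /* program the backing interrupts (IMS, emulation, ...) using evdev */
    return 0;
}

static const struct vfio_pci_intr_ops example_virt_intr_ops = {
    .set_msi_trigger = example_virt_set_trigger,
    /* callbacks left NULL make the ioctl() return -ENOTTY */
};

static void example_virt_init_intr_ctx(struct example_virt_device *evdev)
{
    evdev->intr_ctx.ops = &example_virt_intr_ops;
    evdev->intr_ctx.priv = evdev;
}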
From patchwork Fri Oct 27 17:00:38 2023
X-Patchwork-Id: 13438638
From: Reinette Chatre <reinette.chatre@intel.com>
Subject: [RFC PATCH V3 06/26] vfio/pci: Separate MSI and MSI-X handling
Date: Fri, 27 Oct 2023 10:00:38 -0700
Message-Id: <397accb1341ac18273e6bc3e39361693a5411b4f.1698422237.git.reinette.chatre@intel.com>

VFIO PCI interrupt management uses a single entry for both MSI and MSI-X
management, with the called functions using a boolean when necessary to
distinguish between MSI and MSI-X. This remains unchanged.

Virtual device interrupt management should not be required to use the
same callback for both MSI and MSI-X. It may be possible for a virtual
device to not support MSI at all and only provide MSI-X interrupt
management.

Separate the MSI and MSI-X interrupt management by allowing different
callbacks for each interrupt type. For PCI devices the callback remains
the same.

Signed-off-by: Reinette Chatre <reinette.chatre@intel.com>
---
No changes since RFC V2.
 drivers/vfio/pci/vfio_pci_intrs.c | 14 +++++++++++++-
 include/linux/vfio_pci_core.h     |  3 +++
 2 files changed, 16 insertions(+), 1 deletion(-)

diff --git a/drivers/vfio/pci/vfio_pci_intrs.c b/drivers/vfio/pci/vfio_pci_intrs.c
index 96587acfebf0..7de906363402 100644
--- a/drivers/vfio/pci/vfio_pci_intrs.c
+++ b/drivers/vfio/pci/vfio_pci_intrs.c
@@ -801,6 +801,7 @@ static struct vfio_pci_intr_ops vfio_pci_intr_ops = {
     .set_intx_unmask = vfio_pci_set_intx_unmask,
     .set_intx_trigger = vfio_pci_set_intx_trigger,
     .set_msi_trigger = vfio_pci_set_msi_trigger,
+    .set_msix_trigger = vfio_pci_set_msi_trigger,
     .set_err_trigger = vfio_pci_set_err_trigger,
     .set_req_trigger = vfio_pci_set_req_trigger,
 };
@@ -839,7 +840,6 @@ int vfio_pci_set_irqs_ioctl(struct vfio_pci_intr_ctx *intr_ctx, uint32_t flags,
         }
         break;
     case VFIO_PCI_MSI_IRQ_INDEX:
-    case VFIO_PCI_MSIX_IRQ_INDEX:
         switch (flags & VFIO_IRQ_SET_ACTION_TYPE_MASK) {
         case VFIO_IRQ_SET_ACTION_MASK:
         case VFIO_IRQ_SET_ACTION_UNMASK:
@@ -851,6 +851,18 @@ int vfio_pci_set_irqs_ioctl(struct vfio_pci_intr_ctx *intr_ctx, uint32_t flags,
             break;
         }
         break;
+    case VFIO_PCI_MSIX_IRQ_INDEX:
+        switch (flags & VFIO_IRQ_SET_ACTION_TYPE_MASK) {
+        case VFIO_IRQ_SET_ACTION_MASK:
+        case VFIO_IRQ_SET_ACTION_UNMASK:
+            /* XXX Need masking support exported */
+            break;
+        case VFIO_IRQ_SET_ACTION_TRIGGER:
+            if (intr_ctx->ops->set_msix_trigger)
+                func = intr_ctx->ops->set_msix_trigger;
+            break;
+        }
+        break;
     case VFIO_PCI_ERR_IRQ_INDEX:
         switch (flags & VFIO_IRQ_SET_ACTION_TYPE_MASK) {
         case VFIO_IRQ_SET_ACTION_TRIGGER:
diff --git a/include/linux/vfio_pci_core.h b/include/linux/vfio_pci_core.h
index d3fa0e49a1a8..db7ee9517d94 100644
--- a/include/linux/vfio_pci_core.h
+++ b/include/linux/vfio_pci_core.h
@@ -72,6 +72,9 @@ struct vfio_pci_intr_ops {
     int (*set_msi_trigger)(struct vfio_pci_intr_ctx *intr_ctx,
                            unsigned int index, unsigned int start,
                            unsigned int count, uint32_t flags, void *data);
+    int (*set_msix_trigger)(struct vfio_pci_intr_ctx *intr_ctx,
+                            unsigned int index, unsigned int start,
+                            unsigned int count, uint32_t flags, void *data);
     int (*set_err_trigger)(struct vfio_pci_intr_ctx *intr_ctx,
                            unsigned int index, unsigned int start,
                            unsigned int count, uint32_t flags, void *data);
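
With the callbacks separated, an MSI-X-only virtual device backend (the
scenario named in the changelog above) only needs to fill in
.set_msix_trigger; requests against VFIO_PCI_MSI_IRQ_INDEX then find no
callback and fail with -ENOTTY. A sketch, reusing the hypothetical
callback from the earlier example:

/* Illustrative sketch only, not part of this series. */
static const struct vfio_pci_intr_ops example_msix_only_intr_ops = {
    .set_msix_trigger = example_virt_set_trigger,
};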
From patchwork Fri Oct 27 17:00:39 2023
X-Patchwork-Id: 13438641
From: Reinette Chatre <reinette.chatre@intel.com>
Subject: [RFC PATCH V3 07/26] vfio/pci: Move interrupt eventfd to interrupt context
Date: Fri, 27 Oct 2023 10:00:39 -0700

The eventfds associated with device request notification and error IRQ
are managed by VFIO PCI interrupt management as triggered by the
VFIO_DEVICE_SET_IRQS ioctl().

Move these eventfds as well as their mutex to the generic and dedicated
interrupt management context, struct vfio_pci_intr_ctx, to enable another
interrupt management backend to manage these eventfds.

The igate mutex protects eventfd modification. With the eventfds within
the larger-scoped interrupt context the mutex scope is also expanded:
all members of struct vfio_pci_intr_ctx are protected by it.

This move results in vfio_pci_set_req_trigger() no longer requiring a
struct vfio_pci_core_device; it operates only on the generic
struct vfio_pci_intr_ctx and is thus available for direct use by other
interrupt management backends.

This introduces the first interrupt context related cleanup call. Create
vfio_pci_release_intr_ctx() to match the existing
vfio_pci_init_intr_ctx().

Signed-off-by: Reinette Chatre <reinette.chatre@intel.com>
---
Changes since RFC V2:
- Improve changelog.
 drivers/vfio/pci/vfio_pci_core.c  | 39 +++++++++++++++++----------------
 drivers/vfio/pci/vfio_pci_intrs.c | 13 +++++++----
 include/linux/vfio_pci_core.h     | 10 +++++---
 3 files changed, 35 insertions(+), 27 deletions(-)

diff --git a/drivers/vfio/pci/vfio_pci_core.c b/drivers/vfio/pci/vfio_pci_core.c
index 310259bbacae..5c9bd5d2db53 100644
--- a/drivers/vfio/pci/vfio_pci_core.c
+++ b/drivers/vfio/pci/vfio_pci_core.c
@@ -700,16 +700,16 @@ void vfio_pci_core_close_device(struct vfio_device *core_vdev)
 #endif
     vfio_pci_core_disable(vdev);
 
-    mutex_lock(&vdev->igate);
-    if (vdev->err_trigger) {
-        eventfd_ctx_put(vdev->err_trigger);
-        vdev->err_trigger = NULL;
+    mutex_lock(&vdev->intr_ctx.igate);
+    if (vdev->intr_ctx.err_trigger) {
+        eventfd_ctx_put(vdev->intr_ctx.err_trigger);
+        vdev->intr_ctx.err_trigger = NULL;
     }
-    if (vdev->req_trigger) {
-        eventfd_ctx_put(vdev->req_trigger);
-        vdev->req_trigger = NULL;
+    if (vdev->intr_ctx.req_trigger) {
+        eventfd_ctx_put(vdev->intr_ctx.req_trigger);
+        vdev->intr_ctx.req_trigger = NULL;
     }
-    mutex_unlock(&vdev->igate);
+    mutex_unlock(&vdev->intr_ctx.igate);
 }
 EXPORT_SYMBOL_GPL(vfio_pci_core_close_device);
 
@@ -1214,12 +1214,12 @@ static int vfio_pci_ioctl_set_irqs(struct vfio_pci_core_device *vdev,
         return PTR_ERR(data);
     }
 
-    mutex_lock(&vdev->igate);
+    mutex_lock(&vdev->intr_ctx.igate);
 
     ret = vfio_pci_set_irqs_ioctl(&vdev->intr_ctx, hdr.flags, hdr.index,
                                   hdr.start, hdr.count, data);
 
-    mutex_unlock(&vdev->igate);
+    mutex_unlock(&vdev->intr_ctx.igate);
 
     kfree(data);
 
     return ret;
@@ -1876,20 +1876,20 @@ void vfio_pci_core_request(struct vfio_device *core_vdev, unsigned int count)
         container_of(core_vdev, struct vfio_pci_core_device, vdev);
     struct pci_dev *pdev = vdev->pdev;
 
-    mutex_lock(&vdev->igate);
-    if (vdev->req_trigger) {
+    mutex_lock(&vdev->intr_ctx.igate);
+    if (vdev->intr_ctx.req_trigger) {
         if (!(count % 10))
             pci_notice_ratelimited(pdev,
                 "Relaying device request to user (#%u)\n",
                 count);
-        eventfd_signal(vdev->req_trigger, 1);
+        eventfd_signal(vdev->intr_ctx.req_trigger, 1);
     } else if (count == 0) {
         pci_warn(pdev,
            "No device request channel registered, blocked until released by user\n");
     }
 
-    mutex_unlock(&vdev->igate);
+    mutex_unlock(&vdev->intr_ctx.igate);
 }
 EXPORT_SYMBOL_GPL(vfio_pci_core_request);
 
@@ -2156,7 +2156,6 @@ int vfio_pci_core_init_dev(struct vfio_device *core_vdev)
     vdev->pdev = to_pci_dev(core_vdev->dev);
     vdev->irq_type = VFIO_PCI_NUM_IRQS;
-    mutex_init(&vdev->igate);
     spin_lock_init(&vdev->irqlock);
     mutex_init(&vdev->ioeventfds_lock);
     INIT_LIST_HEAD(&vdev->dummy_resources_list);
@@ -2177,7 +2176,7 @@ void vfio_pci_core_release_dev(struct vfio_device *core_vdev)
     struct vfio_pci_core_device *vdev =
         container_of(core_vdev, struct vfio_pci_core_device, vdev);
 
-    mutex_destroy(&vdev->igate);
+    vfio_pci_release_intr_ctx(&vdev->intr_ctx);
     mutex_destroy(&vdev->ioeventfds_lock);
     mutex_destroy(&vdev->vma_lock);
     kfree(vdev->region);
@@ -2300,12 +2299,12 @@ pci_ers_result_t vfio_pci_core_aer_err_detected(struct pci_dev *pdev,
 {
     struct vfio_pci_core_device *vdev = dev_get_drvdata(&pdev->dev);
 
-    mutex_lock(&vdev->igate);
-    if (vdev->err_trigger)
-        eventfd_signal(vdev->err_trigger, 1);
+    mutex_lock(&vdev->intr_ctx.igate);
+    if (vdev->intr_ctx.err_trigger)
+        eventfd_signal(vdev->intr_ctx.err_trigger, 1);
 
-    mutex_unlock(&vdev->igate);
+    mutex_unlock(&vdev->intr_ctx.igate);
 
     return PCI_ERS_RESULT_CAN_RECOVER;
 }
diff --git a/drivers/vfio/pci/vfio_pci_intrs.c b/drivers/vfio/pci/vfio_pci_intrs.c
index 7de906363402..a4c8b589c87b 100644
--- a/drivers/vfio/pci/vfio_pci_intrs.c
+++ b/drivers/vfio/pci/vfio_pci_intrs.c
@@ -778,7 +778,7 @@ static int vfio_pci_set_err_trigger(struct vfio_pci_intr_ctx *intr_ctx,
     if (index != VFIO_PCI_ERR_IRQ_INDEX || start != 0 || count > 1)
         return -EINVAL;
 
-    return vfio_pci_set_ctx_trigger_single(&vdev->err_trigger,
+    return vfio_pci_set_ctx_trigger_single(&intr_ctx->err_trigger,
                                            count, flags, data);
 }
 
@@ -787,12 +787,10 @@ static int vfio_pci_set_req_trigger(struct vfio_pci_intr_ctx *intr_ctx,
                                     unsigned int count, uint32_t flags,
                                     void *data)
 {
-    struct vfio_pci_core_device *vdev = intr_ctx->priv;
-
     if (index != VFIO_PCI_REQ_IRQ_INDEX || start != 0 || count > 1)
         return -EINVAL;
 
-    return vfio_pci_set_ctx_trigger_single(&vdev->req_trigger,
+    return vfio_pci_set_ctx_trigger_single(&intr_ctx->req_trigger,
                                            count, flags, data);
 }
 
@@ -811,9 +809,16 @@ void vfio_pci_init_intr_ctx(struct vfio_pci_core_device *vdev,
 {
     intr_ctx->ops = &vfio_pci_intr_ops;
     intr_ctx->priv = vdev;
+    mutex_init(&intr_ctx->igate);
 }
 EXPORT_SYMBOL_GPL(vfio_pci_init_intr_ctx);
 
+void vfio_pci_release_intr_ctx(struct vfio_pci_intr_ctx *intr_ctx)
+{
+    mutex_destroy(&intr_ctx->igate);
+}
+EXPORT_SYMBOL_GPL(vfio_pci_release_intr_ctx);
+
 int vfio_pci_set_irqs_ioctl(struct vfio_pci_intr_ctx *intr_ctx, uint32_t flags,
                             unsigned int index, unsigned int start,
                             unsigned int count, void *data)
diff --git a/include/linux/vfio_pci_core.h b/include/linux/vfio_pci_core.h
index db7ee9517d94..1eb5842cff11 100644
--- a/include/linux/vfio_pci_core.h
+++ b/include/linux/vfio_pci_core.h
@@ -53,10 +53,16 @@ struct vfio_pci_region {
  * Interrupt context of virtual PCI device
  * @ops: Interrupt management backend functions
  * @priv: Private data of interrupt management backend
+ * @igate: Protects members of struct vfio_pci_intr_ctx
+ * @err_trigger: Eventfd associated with error reporting IRQ
+ * @req_trigger: Eventfd associated with device request notification
  */
 struct vfio_pci_intr_ctx {
     const struct vfio_pci_intr_ops *ops;
     void *priv;
+    struct mutex igate;
+    struct eventfd_ctx *err_trigger;
+    struct eventfd_ctx *req_trigger;
 };
 
 struct vfio_pci_intr_ops {
@@ -92,7 +98,6 @@ struct vfio_pci_core_device {
     u8                  *vconfig;
     struct perm_bits    *msi_perm;
     spinlock_t          irqlock;
-    struct mutex        igate;
     struct xarray       ctx;
     int                 irq_type;
     int                 num_regions;
@@ -117,8 +122,6 @@ struct vfio_pci_core_device {
     struct pci_saved_state  *pci_saved_state;
     struct pci_saved_state  *pm_save;
     int                 ioeventfds_nr;
-    struct eventfd_ctx  *err_trigger;
-    struct eventfd_ctx  *req_trigger;
     struct eventfd_ctx  *pm_wake_eventfd_ctx;
     struct list_head    dummy_resources_list;
     struct mutex        ioeventfds_lock;
@@ -152,6 +155,7 @@ long vfio_pci_core_ioctl(struct vfio_device *core_vdev, unsigned int cmd,
                          unsigned long arg);
 void vfio_pci_init_intr_ctx(struct vfio_pci_core_device *vdev,
                             struct vfio_pci_intr_ctx *intr_ctx);
+void vfio_pci_release_intr_ctx(struct vfio_pci_intr_ctx *intr_ctx);
 int vfio_pci_core_ioctl_feature(struct vfio_device *device, u32 flags,
                                 void __user *arg, size_t argsz);
 ssize_t vfio_pci_core_read(struct vfio_device *core_vdev, char __user *buf,
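
Once the eventfds live in struct vfio_pci_intr_ctx, a backend (or a
virtual device driver) can relay an error to user space without knowing
about struct vfio_pci_core_device. A sketch (illustrative only, with the
hypothetical "example_" name; igate protects the eventfd pointers, and
the two-argument eventfd_signal() form matches its use in this series):

/* Illustrative sketch only, not part of this series. */
static void example_report_error(struct vfio_pci_intr_ctx *intr_ctx)
{
    mutex_lock(&intr_ctx->igate);
    if (intr_ctx->err_trigger)
        eventfd_signal(intr_ctx->err_trigger, 1);
    mutex_unlock(&intr_ctx->igate);
}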
From patchwork Fri Oct 27 17:00:40 2023
X-Patchwork-Id: 13438642
From: Reinette Chatre <reinette.chatre@intel.com>
Subject: [RFC PATCH V3 08/26] vfio/pci: Move mutex acquisition into function
Date: Fri, 27 Oct 2023 10:00:40 -0700

vfio_pci_set_irqs_ioctl() is the entrypoint for interrupt management via
the VFIO_DEVICE_SET_IRQS ioctl(). vfio_pci_set_irqs_ioctl() can be called
from a virtual device driver after its callbacks have been configured to
support the needed interrupt management.

The igate mutex is obtained before vfio_pci_set_irqs_ioctl() to protect
against concurrent changes to interrupt context. It should not be
necessary for all users of vfio_pci_set_irqs_ioctl() to remember to take
the mutex.

Acquire and release the mutex within vfio_pci_set_irqs_ioctl().

Signed-off-by: Reinette Chatre <reinette.chatre@intel.com>
---
Changes since RFC V2:
- Improve changelog.
drivers/vfio/pci/vfio_pci_core.c | 2 -- drivers/vfio/pci/vfio_pci_intrs.c | 10 ++++++++-- 2 files changed, 8 insertions(+), 4 deletions(-) diff --git a/drivers/vfio/pci/vfio_pci_core.c b/drivers/vfio/pci/vfio_pci_core.c index 5c9bd5d2db53..bf4de137ad2f 100644 --- a/drivers/vfio/pci/vfio_pci_core.c +++ b/drivers/vfio/pci/vfio_pci_core.c @@ -1214,12 +1214,10 @@ static int vfio_pci_ioctl_set_irqs(struct vfio_pci_core_device *vdev, return PTR_ERR(data); } - mutex_lock(&vdev->intr_ctx.igate); ret = vfio_pci_set_irqs_ioctl(&vdev->intr_ctx, hdr.flags, hdr.index, hdr.start, hdr.count, data); - mutex_unlock(&vdev->intr_ctx.igate); kfree(data); return ret; diff --git a/drivers/vfio/pci/vfio_pci_intrs.c b/drivers/vfio/pci/vfio_pci_intrs.c index a4c8b589c87b..5d600548b5d7 100644 --- a/drivers/vfio/pci/vfio_pci_intrs.c +++ b/drivers/vfio/pci/vfio_pci_intrs.c @@ -826,7 +826,9 @@ int vfio_pci_set_irqs_ioctl(struct vfio_pci_intr_ctx *intr_ctx, uint32_t flags, int (*func)(struct vfio_pci_intr_ctx *intr_ctx, unsigned int index, unsigned int start, unsigned int count, uint32_t flags, void *data) = NULL; + int ret = -ENOTTY; + mutex_lock(&intr_ctx->igate); switch (index) { case VFIO_PCI_INTX_IRQ_INDEX: switch (flags & VFIO_IRQ_SET_ACTION_TYPE_MASK) { @@ -887,7 +889,11 @@ int vfio_pci_set_irqs_ioctl(struct vfio_pci_intr_ctx *intr_ctx, uint32_t flags, } if (!func) - return -ENOTTY; + goto out_unlock; + + ret = func(intr_ctx, index, start, count, flags, data); - return func(intr_ctx, index, start, count, flags, data); +out_unlock: + mutex_unlock(&intr_ctx->igate); + return ret; } From patchwork Fri Oct 27 17:00:41 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Reinette Chatre X-Patchwork-Id: 13438644 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id BF3A8C25B6F for ; Fri, 27 Oct 2023 17:01:51 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1346345AbjJ0RBv (ORCPT ); Fri, 27 Oct 2023 13:01:51 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:43318 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1346131AbjJ0RBj (ORCPT ); Fri, 27 Oct 2023 13:01:39 -0400 Received: from mgamail.intel.com (mgamail.intel.com [192.198.163.8]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id A6D991AA; Fri, 27 Oct 2023 10:01:31 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1698426092; x=1729962092; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=IpPDZ7Ks2iVqUt7LpEzOx+GuVFbnS3TggpdU7AmmhBk=; b=RiUiN4DNKnOLy6JXvYztcvMGaQdjMNur3sAw33PSnlFuhsBch5wi7bjk IbqBqPNeN1hGyVnNge4ZSOc+sb9HVYvygPKYRWsbzEJBzjoM6ZXR5V2kG iJ1jj67z9gEjwZA0pruWreUNvwkoF7gpEfNv7laqcePu1xdULnyqnxhBV Fu/de+t9SfGNlrNoT9ohjbwIDmfSHffwnJVZutrW4OYn5eC8d0zvKKWOf 6HH3z0METsLx9rWqbSLFDnPzVPiw+Jg7My1sQxaW9TAaExVGIObGl87UO IBml3QbXvqr08Om0yDB67zmTKV3LejBkHKt699bxD9UY0BT0bI424geBA A==; X-IronPort-AV: E=McAfee;i="6600,9927,10876"; a="611902" X-IronPort-AV: E=Sophos;i="6.03,256,1694761200"; d="scan'208";a="611902" Received: from fmsmga005.fm.intel.com ([10.253.24.32]) by fmvoesa102.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 27 Oct 2023 10:01:17 -0700 X-ExtLoop1: 1 X-IronPort-AV: 
E=McAfee;i="6600,9927,10876"; a="1090988178" X-IronPort-AV: E=Sophos;i="6.03,256,1694761200"; d="scan'208";a="1090988178" Received: from rchatre-ws.ostc.intel.com ([10.54.69.144]) by fmsmga005-auth.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 27 Oct 2023 10:01:16 -0700 From: Reinette Chatre To: jgg@nvidia.com, yishaih@nvidia.com, shameerali.kolothum.thodi@huawei.com, kevin.tian@intel.com, alex.williamson@redhat.com Cc: kvm@vger.kernel.org, dave.jiang@intel.com, jing2.liu@intel.com, ashok.raj@intel.com, fenghua.yu@intel.com, tom.zanussi@linux.intel.com, reinette.chatre@intel.com, linux-kernel@vger.kernel.org, patches@lists.linux.dev Subject: [RFC PATCH V3 09/26] vfio/pci: Move per-interrupt contexts to generic interrupt struct Date: Fri, 27 Oct 2023 10:00:41 -0700 Message-Id: <356e143a82f495dd2f474e66eab1effbfbe9a3c7.1698422237.git.reinette.chatre@intel.com> X-Mailer: git-send-email 2.34.1 In-Reply-To: References: MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org VFIO PCI interrupt management maintains per-interrupt context within an xarray using the interrupt vector as index. Move the per-interrupt context to the generic interrupt context in struct vfio_pci_intr_ctx to enable the per-interrupt context to be managed by different backends. Signed-off-by: Reinette Chatre --- Changes since RFC V2: - Improve changelog. drivers/vfio/pci/vfio_pci_core.c | 1 - drivers/vfio/pci/vfio_pci_intrs.c | 9 +++++---- include/linux/vfio_pci_core.h | 3 ++- 3 files changed, 7 insertions(+), 6 deletions(-) diff --git a/drivers/vfio/pci/vfio_pci_core.c b/drivers/vfio/pci/vfio_pci_core.c index bf4de137ad2f..cf303a9555f0 100644 --- a/drivers/vfio/pci/vfio_pci_core.c +++ b/drivers/vfio/pci/vfio_pci_core.c @@ -2162,7 +2162,6 @@ int vfio_pci_core_init_dev(struct vfio_device *core_vdev) INIT_LIST_HEAD(&vdev->vma_list); INIT_LIST_HEAD(&vdev->sriov_pfs_item); init_rwsem(&vdev->memory_lock); - xa_init(&vdev->ctx); vfio_pci_init_intr_ctx(vdev, &vdev->intr_ctx); return 0; diff --git a/drivers/vfio/pci/vfio_pci_intrs.c b/drivers/vfio/pci/vfio_pci_intrs.c index 5d600548b5d7..3cfd7fdec87b 100644 --- a/drivers/vfio/pci/vfio_pci_intrs.c +++ b/drivers/vfio/pci/vfio_pci_intrs.c @@ -52,13 +52,13 @@ static struct vfio_pci_irq_ctx *vfio_irq_ctx_get(struct vfio_pci_core_device *vdev, unsigned long index) { - return xa_load(&vdev->ctx, index); + return xa_load(&vdev->intr_ctx.ctx, index); } static void vfio_irq_ctx_free(struct vfio_pci_core_device *vdev, struct vfio_pci_irq_ctx *ctx, unsigned long index) { - xa_erase(&vdev->ctx, index); + xa_erase(&vdev->intr_ctx.ctx, index); kfree(ctx); } @@ -72,7 +72,7 @@ vfio_irq_ctx_alloc(struct vfio_pci_core_device *vdev, unsigned long index) if (!ctx) return NULL; - ret = xa_insert(&vdev->ctx, index, ctx, GFP_KERNEL_ACCOUNT); + ret = xa_insert(&vdev->intr_ctx.ctx, index, ctx, GFP_KERNEL_ACCOUNT); if (ret) { kfree(ctx); return NULL; @@ -530,7 +530,7 @@ static void vfio_msi_disable(struct vfio_pci_core_device *vdev, bool msix) unsigned long i; u16 cmd; - xa_for_each(&vdev->ctx, i, ctx) { + xa_for_each(&vdev->intr_ctx.ctx, i, ctx) { vfio_virqfd_disable(&ctx->unmask); vfio_virqfd_disable(&ctx->mask); vfio_msi_set_vector_signal(vdev, i, -1, msix); @@ -810,6 +810,7 @@ void vfio_pci_init_intr_ctx(struct vfio_pci_core_device *vdev, intr_ctx->ops = &vfio_pci_intr_ops; intr_ctx->priv = vdev; mutex_init(&intr_ctx->igate); + xa_init(&intr_ctx->ctx); } EXPORT_SYMBOL_GPL(vfio_pci_init_intr_ctx); diff --git a/include/linux/vfio_pci_core.h 
b/include/linux/vfio_pci_core.h index 1eb5842cff11..0f9df87aedd9 100644 --- a/include/linux/vfio_pci_core.h +++ b/include/linux/vfio_pci_core.h @@ -56,6 +56,7 @@ struct vfio_pci_region { * @igate: Protects members of struct vfio_pci_intr_ctx * @err_trigger: Eventfd associated with error reporting IRQ * @req_trigger: Eventfd associated with device request notification + * @ctx: Per-interrupt context indexed by vector */ struct vfio_pci_intr_ctx { const struct vfio_pci_intr_ops *ops; @@ -63,6 +64,7 @@ struct vfio_pci_intr_ctx { struct mutex igate; struct eventfd_ctx *err_trigger; struct eventfd_ctx *req_trigger; + struct xarray ctx; }; struct vfio_pci_intr_ops { @@ -98,7 +100,6 @@ struct vfio_pci_core_device { u8 *vconfig; struct perm_bits *msi_perm; spinlock_t irqlock; - struct xarray ctx; int irq_type; int num_regions; struct vfio_pci_region *region; From patchwork Fri Oct 27 17:00:42 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Reinette Chatre X-Patchwork-Id: 13438645 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id B89DFC25B70 for ; Fri, 27 Oct 2023 17:01:53 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1346349AbjJ0RBx (ORCPT ); Fri, 27 Oct 2023 13:01:53 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:43308 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1346130AbjJ0RBj (ORCPT ); Fri, 27 Oct 2023 13:01:39 -0400 Received: from mgamail.intel.com (mgamail.intel.com [192.198.163.8]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 10BFF1AC; Fri, 27 Oct 2023 10:01:32 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1698426092; x=1729962092; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=+BceffSdjfzhxDzKvutJR5FzA2WFYIt4Jk5nR/uMrQI=; b=iWlMdvqNYZ9TJOAmzeC6OoBowPReL2o8MZMdozBagxOaMjrHzydJNtUO 8MJFalrjtdWaVjR5BV+goE5SOMqN8T0yYVo7ddpB4Q5kc0qoyrmDUGBup 6bq9EBliwiCmwR0bCxuniHB1QmRbA0QyqnTDjlfriqIz+ns1UZEZ77Jy5 IXyiPtzH/HV8/uVEdvLd3leeyF693MZ6HeoD8SKgP3ZHDz0duU2ZmJHES mXjP7MmSK6m9OvK8dTzhCSjv5pl74/wtmellir7Jr7A+kGWIi9ofiPRIh Wxv2NlMuEIsBMtkM7J/jOKRn9H8gcEAcQpmQJr7it1QnlKzyv4gUHqHIW w==; X-IronPort-AV: E=McAfee;i="6600,9927,10876"; a="611911" X-IronPort-AV: E=Sophos;i="6.03,256,1694761200"; d="scan'208";a="611911" Received: from fmsmga005.fm.intel.com ([10.253.24.32]) by fmvoesa102.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 27 Oct 2023 10:01:17 -0700 X-ExtLoop1: 1 X-IronPort-AV: E=McAfee;i="6600,9927,10876"; a="1090988182" X-IronPort-AV: E=Sophos;i="6.03,256,1694761200"; d="scan'208";a="1090988182" Received: from rchatre-ws.ostc.intel.com ([10.54.69.144]) by fmsmga005-auth.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 27 Oct 2023 10:01:16 -0700 From: Reinette Chatre To: jgg@nvidia.com, yishaih@nvidia.com, shameerali.kolothum.thodi@huawei.com, kevin.tian@intel.com, alex.williamson@redhat.com Cc: kvm@vger.kernel.org, dave.jiang@intel.com, jing2.liu@intel.com, ashok.raj@intel.com, fenghua.yu@intel.com, tom.zanussi@linux.intel.com, reinette.chatre@intel.com, linux-kernel@vger.kernel.org, patches@lists.linux.dev Subject: [RFC PATCH V3 10/26] vfio/pci: Move IRQ type to generic interrupt context Date: Fri, 27 
Oct 2023 10:00:42 -0700 Message-Id: <9ad61f26e4bc76a007780475919cbb58d550da88.1698422237.git.reinette.chatre@intel.com> X-Mailer: git-send-email 2.34.1 In-Reply-To: References: MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org The type of interrupts within the guest is not unique to PCI devices and needed for other virtual devices supporting interrupts. Move interrupt type to the generic interrupt context struct vfio_pci_intr_ctx. Signed-off-by: Reinette Chatre --- Question for maintainers: irq_type is accessed in ioctl() flow as well as other flows. It is not clear to me how it is protected against concurrent access. Should accesses outside of ioctl() flow take the mutex? No changes since RFC V2. drivers/vfio/pci/vfio_pci_config.c | 2 +- drivers/vfio/pci/vfio_pci_core.c | 5 ++--- drivers/vfio/pci/vfio_pci_intrs.c | 21 +++++++++++---------- include/linux/vfio_pci_core.h | 3 ++- 4 files changed, 16 insertions(+), 15 deletions(-) diff --git a/drivers/vfio/pci/vfio_pci_config.c b/drivers/vfio/pci/vfio_pci_config.c index 7e2e62ab0869..2535bdbc016d 100644 --- a/drivers/vfio/pci/vfio_pci_config.c +++ b/drivers/vfio/pci/vfio_pci_config.c @@ -1168,7 +1168,7 @@ static int vfio_msi_config_write(struct vfio_pci_core_device *vdev, int pos, flags = le16_to_cpu(*pflags); /* MSI is enabled via ioctl */ - if (vdev->irq_type != VFIO_PCI_MSI_IRQ_INDEX) + if (vdev->intr_ctx.irq_type != VFIO_PCI_MSI_IRQ_INDEX) flags &= ~PCI_MSI_FLAGS_ENABLE; /* Check queue size */ diff --git a/drivers/vfio/pci/vfio_pci_core.c b/drivers/vfio/pci/vfio_pci_core.c index cf303a9555f0..34109ed38454 100644 --- a/drivers/vfio/pci/vfio_pci_core.c +++ b/drivers/vfio/pci/vfio_pci_core.c @@ -427,7 +427,7 @@ static int vfio_pci_core_runtime_suspend(struct device *dev) * vfio_pci_intx_mask() will return false and in that case, INTx * should not be unmasked in the runtime resume. 
*/ - vdev->pm_intx_masked = ((vdev->irq_type == VFIO_PCI_INTX_IRQ_INDEX) && + vdev->pm_intx_masked = ((vdev->intr_ctx.irq_type == VFIO_PCI_INTX_IRQ_INDEX) && vfio_pci_intx_mask(vdev)); return 0; @@ -596,7 +596,7 @@ void vfio_pci_core_disable(struct vfio_pci_core_device *vdev) vfio_pci_set_irqs_ioctl(&vdev->intr_ctx, VFIO_IRQ_SET_DATA_NONE | VFIO_IRQ_SET_ACTION_TRIGGER, - vdev->irq_type, 0, 0, NULL); + vdev->intr_ctx.irq_type, 0, 0, NULL); /* Device closed, don't need mutex here */ list_for_each_entry_safe(ioeventfd, ioeventfd_tmp, @@ -2153,7 +2153,6 @@ int vfio_pci_core_init_dev(struct vfio_device *core_vdev) container_of(core_vdev, struct vfio_pci_core_device, vdev); vdev->pdev = to_pci_dev(core_vdev->dev); - vdev->irq_type = VFIO_PCI_NUM_IRQS; spin_lock_init(&vdev->irqlock); mutex_init(&vdev->ioeventfds_lock); INIT_LIST_HEAD(&vdev->dummy_resources_list); diff --git a/drivers/vfio/pci/vfio_pci_intrs.c b/drivers/vfio/pci/vfio_pci_intrs.c index 3cfd7fdec87b..858795ba50fe 100644 --- a/drivers/vfio/pci/vfio_pci_intrs.c +++ b/drivers/vfio/pci/vfio_pci_intrs.c @@ -33,19 +33,19 @@ struct vfio_pci_irq_ctx { static bool irq_is(struct vfio_pci_core_device *vdev, int type) { - return vdev->irq_type == type; + return vdev->intr_ctx.irq_type == type; } static bool is_intx(struct vfio_pci_core_device *vdev) { - return vdev->irq_type == VFIO_PCI_INTX_IRQ_INDEX; + return vdev->intr_ctx.irq_type == VFIO_PCI_INTX_IRQ_INDEX; } static bool is_irq_none(struct vfio_pci_core_device *vdev) { - return !(vdev->irq_type == VFIO_PCI_INTX_IRQ_INDEX || - vdev->irq_type == VFIO_PCI_MSI_IRQ_INDEX || - vdev->irq_type == VFIO_PCI_MSIX_IRQ_INDEX); + return !(vdev->intr_ctx.irq_type == VFIO_PCI_INTX_IRQ_INDEX || + vdev->intr_ctx.irq_type == VFIO_PCI_MSI_IRQ_INDEX || + vdev->intr_ctx.irq_type == VFIO_PCI_MSIX_IRQ_INDEX); } static @@ -255,7 +255,7 @@ static int vfio_intx_enable(struct vfio_pci_core_device *vdev) if (vdev->pci_2_3) pci_intx(vdev->pdev, !ctx->masked); - vdev->irq_type = VFIO_PCI_INTX_IRQ_INDEX; + vdev->intr_ctx.irq_type = VFIO_PCI_INTX_IRQ_INDEX; return 0; } @@ -331,7 +331,7 @@ static void vfio_intx_disable(struct vfio_pci_core_device *vdev) vfio_virqfd_disable(&ctx->mask); } vfio_intx_set_signal(vdev, -1); - vdev->irq_type = VFIO_PCI_NUM_IRQS; + vdev->intr_ctx.irq_type = VFIO_PCI_NUM_IRQS; vfio_irq_ctx_free(vdev, ctx, 0); } @@ -367,7 +367,7 @@ static int vfio_msi_enable(struct vfio_pci_core_device *vdev, int nvec, bool msi } vfio_pci_memory_unlock_and_restore(vdev, cmd); - vdev->irq_type = msix ? VFIO_PCI_MSIX_IRQ_INDEX : + vdev->intr_ctx.irq_type = msix ? 
VFIO_PCI_MSIX_IRQ_INDEX : VFIO_PCI_MSI_IRQ_INDEX; if (!msix) { @@ -547,7 +547,7 @@ static void vfio_msi_disable(struct vfio_pci_core_device *vdev, bool msix) if (vdev->nointx) pci_intx(pdev, 0); - vdev->irq_type = VFIO_PCI_NUM_IRQS; + vdev->intr_ctx.irq_type = VFIO_PCI_NUM_IRQS; } /* @@ -677,7 +677,7 @@ static int vfio_pci_set_msi_trigger(struct vfio_pci_intr_ctx *intr_ctx, int32_t *fds = data; int ret; - if (vdev->irq_type == index) + if (vdev->intr_ctx.irq_type == index) return vfio_msi_set_block(vdev, start, count, fds, msix); @@ -807,6 +807,7 @@ static struct vfio_pci_intr_ops vfio_pci_intr_ops = { void vfio_pci_init_intr_ctx(struct vfio_pci_core_device *vdev, struct vfio_pci_intr_ctx *intr_ctx) { + intr_ctx->irq_type = VFIO_PCI_NUM_IRQS; intr_ctx->ops = &vfio_pci_intr_ops; intr_ctx->priv = vdev; mutex_init(&intr_ctx->igate); diff --git a/include/linux/vfio_pci_core.h b/include/linux/vfio_pci_core.h index 0f9df87aedd9..e666c19da223 100644 --- a/include/linux/vfio_pci_core.h +++ b/include/linux/vfio_pci_core.h @@ -57,6 +57,7 @@ struct vfio_pci_region { * @err_trigger: Eventfd associated with error reporting IRQ * @req_trigger: Eventfd associated with device request notification * @ctx: Per-interrupt context indexed by vector + * @irq_type: Type of interrupt from guest perspective */ struct vfio_pci_intr_ctx { const struct vfio_pci_intr_ops *ops; @@ -65,6 +66,7 @@ struct vfio_pci_intr_ctx { struct eventfd_ctx *err_trigger; struct eventfd_ctx *req_trigger; struct xarray ctx; + int irq_type; }; struct vfio_pci_intr_ops { @@ -100,7 +102,6 @@ struct vfio_pci_core_device { u8 *vconfig; struct perm_bits *msi_perm; spinlock_t irqlock; - int irq_type; int num_regions; struct vfio_pci_region *region; u8 msi_qmax; From patchwork Fri Oct 27 17:00:43 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Reinette Chatre X-Patchwork-Id: 13438643 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 60384C25B48 for ; Fri, 27 Oct 2023 17:01:50 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1346140AbjJ0RBt (ORCPT ); Fri, 27 Oct 2023 13:01:49 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:43326 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1346126AbjJ0RBj (ORCPT ); Fri, 27 Oct 2023 13:01:39 -0400 Received: from mgamail.intel.com (mgamail.intel.com [192.198.163.8]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 1B8E71B1; Fri, 27 Oct 2023 10:01:32 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1698426092; x=1729962092; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=GkJBWrxAiWSak7q/17XkI04lJCWO+cXFv4bwhrGKIvE=; b=KbiGsC+5QHYZ6KjgJB+NDISXh/rQ7DzO6yXLLLrx6NkHD+q8b2mlPP4d SGYS60aolfYxm6bwYoPIo+Ui6lAVR0DdD3kJdRtv9EjQWwBNq1flrUNl2 p1HQmzogvimHa+YSSa9CpGN5vpADlQ81IoGQ5rms1fVKRcCYTbw4+EK9k 4TEzaN5LnD3tpUQHGsh30kppG3/qrwTVMk8zjA8u9Amu8132Co0/Lg6w4 x2dpf8aDm6ZvDiIXWhEH3BkAcW9sPv9htJeAMkjMY3v54dRjZf2W1Ppsl ZG2+VhUc2+RPaWzEMXVLliCEJnXOe3741fexC5NptUNpigCa37f0uiXsp A==; X-IronPort-AV: E=McAfee;i="6600,9927,10876"; a="611928" X-IronPort-AV: E=Sophos;i="6.03,256,1694761200"; d="scan'208";a="611928" Received: from fmsmga005.fm.intel.com ([10.253.24.32]) by 
fmvoesa102.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 27 Oct 2023 10:01:17 -0700 X-ExtLoop1: 1 X-IronPort-AV: E=McAfee;i="6600,9927,10876"; a="1090988186" X-IronPort-AV: E=Sophos;i="6.03,256,1694761200"; d="scan'208";a="1090988186" Received: from rchatre-ws.ostc.intel.com ([10.54.69.144]) by fmsmga005-auth.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 27 Oct 2023 10:01:16 -0700 From: Reinette Chatre To: jgg@nvidia.com, yishaih@nvidia.com, shameerali.kolothum.thodi@huawei.com, kevin.tian@intel.com, alex.williamson@redhat.com Cc: kvm@vger.kernel.org, dave.jiang@intel.com, jing2.liu@intel.com, ashok.raj@intel.com, fenghua.yu@intel.com, tom.zanussi@linux.intel.com, reinette.chatre@intel.com, linux-kernel@vger.kernel.org, patches@lists.linux.dev Subject: [RFC PATCH V3 11/26] vfio/pci: Provide interrupt context to irq_is() and is_irq_none() Date: Fri, 27 Oct 2023 10:00:43 -0700 Message-Id: X-Mailer: git-send-email 2.34.1 In-Reply-To: References: MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org The IRQ type moved to the interrupt context, struct vfio_pci_intr_ctx. Let the tests on the IRQ type use the interrupt context directly without any assumption about the containing structure. Doing so makes these generic utilities available to all interrupt management backends. Signed-off-by: Reinette Chatre --- Changes since RFC V2: - New patch drivers/vfio/pci/vfio_pci_intrs.c | 24 ++++++++++++------------ 1 file changed, 12 insertions(+), 12 deletions(-) diff --git a/drivers/vfio/pci/vfio_pci_intrs.c b/drivers/vfio/pci/vfio_pci_intrs.c index 858795ba50fe..9aff5c38f198 100644 --- a/drivers/vfio/pci/vfio_pci_intrs.c +++ b/drivers/vfio/pci/vfio_pci_intrs.c @@ -31,9 +31,9 @@ struct vfio_pci_irq_ctx { struct irq_bypass_producer producer; }; -static bool irq_is(struct vfio_pci_core_device *vdev, int type) +static bool irq_is(struct vfio_pci_intr_ctx *intr_ctx, int type) { - return vdev->intr_ctx.irq_type == type; + return intr_ctx->irq_type == type; } static bool is_intx(struct vfio_pci_core_device *vdev) @@ -41,11 +41,11 @@ static bool is_intx(struct vfio_pci_core_device *vdev) return vdev->intr_ctx.irq_type == VFIO_PCI_INTX_IRQ_INDEX; } -static bool is_irq_none(struct vfio_pci_core_device *vdev) +static bool is_irq_none(struct vfio_pci_intr_ctx *intr_ctx) { - return !(vdev->intr_ctx.irq_type == VFIO_PCI_INTX_IRQ_INDEX || - vdev->intr_ctx.irq_type == VFIO_PCI_MSI_IRQ_INDEX || - vdev->intr_ctx.irq_type == VFIO_PCI_MSIX_IRQ_INDEX); + return !(intr_ctx->irq_type == VFIO_PCI_INTX_IRQ_INDEX || + intr_ctx->irq_type == VFIO_PCI_MSI_IRQ_INDEX || + intr_ctx->irq_type == VFIO_PCI_MSIX_IRQ_INDEX); } static @@ -235,7 +235,7 @@ static int vfio_intx_enable(struct vfio_pci_core_device *vdev) { struct vfio_pci_irq_ctx *ctx; - if (!is_irq_none(vdev)) + if (!is_irq_none(&vdev->intr_ctx)) return -EINVAL; if (!vdev->pdev->irq) @@ -353,7 +353,7 @@ static int vfio_msi_enable(struct vfio_pci_core_device *vdev, int nvec, bool msi int ret; u16 cmd; - if (!is_irq_none(vdev)) + if (!is_irq_none(&vdev->intr_ctx)) return -EINVAL; /* return the number of supported vectors if we can't get all: */ @@ -621,7 +621,7 @@ static int vfio_pci_set_intx_trigger(struct vfio_pci_intr_ctx *intr_ctx, return 0; } - if (!(is_intx(vdev) || is_irq_none(vdev)) || start != 0 || count != 1) + if (!(is_intx(vdev) || is_irq_none(intr_ctx)) || start != 0 || count != 1) return -EINVAL; if (flags & VFIO_IRQ_SET_DATA_EVENTFD) { @@ -665,12 +665,12 @@ static int vfio_pci_set_msi_trigger(struct vfio_pci_intr_ctx 
*intr_ctx, unsigned int i; bool msix = (index == VFIO_PCI_MSIX_IRQ_INDEX) ? true : false; - if (irq_is(vdev, index) && !count && (flags & VFIO_IRQ_SET_DATA_NONE)) { + if (irq_is(intr_ctx, index) && !count && (flags & VFIO_IRQ_SET_DATA_NONE)) { vfio_msi_disable(vdev, msix); return 0; } - if (!(irq_is(vdev, index) || is_irq_none(vdev))) + if (!(irq_is(intr_ctx, index) || is_irq_none(intr_ctx))) return -EINVAL; if (flags & VFIO_IRQ_SET_DATA_EVENTFD) { @@ -692,7 +692,7 @@ static int vfio_pci_set_msi_trigger(struct vfio_pci_intr_ctx *intr_ctx, return ret; } - if (!irq_is(vdev, index)) + if (!irq_is(intr_ctx, index)) return -EINVAL; for (i = start; i < start + count; i++) { From patchwork Fri Oct 27 17:00:44 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Reinette Chatre X-Patchwork-Id: 13438665 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 53229C25B48 for ; Fri, 27 Oct 2023 17:02:08 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1346331AbjJ0RCH (ORCPT ); Fri, 27 Oct 2023 13:02:07 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:43280 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1346161AbjJ0RBk (ORCPT ); Fri, 27 Oct 2023 13:01:40 -0400 Received: from mgamail.intel.com (mgamail.intel.com [192.198.163.8]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id CF3611BB; Fri, 27 Oct 2023 10:01:33 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1698426094; x=1729962094; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=YH4+58g19CrkXBQvJccNHa3tURxWh5a0OYq+Xc60VMo=; b=mzcJAwlcFjq4uu8Id3gQd8fh0nYL8P/TAwtjqvuJpdpCcW5TVFKf7nFV zKjTjDDzWCqiKKJA14E8xjR/NFL2MW8PORbIj9yScxOIbC/J9X39QQHtA uzJazBfGRbw20KFegilchyW79EQHeOhsCnhqT9+rooZQLKXpaG+TSC8rI IpK30c0l2h6uhRTHhWPPXmGfg0jDHGF5+MLrhqJT/KhCkLJgOYkgCSY2u kcEMFJFi6DLgqA2G3jJdVn/iRWeVPnF8/KgF0ep0UhLdVT2e3WqNkevwI FgncTWegOrnLdWFOczyGibvwyYf3LZYYJEgo+cdorsvNNLfaAp8Nwbd04 g==; X-IronPort-AV: E=McAfee;i="6600,9927,10876"; a="611942" X-IronPort-AV: E=Sophos;i="6.03,256,1694761200"; d="scan'208";a="611942" Received: from fmsmga005.fm.intel.com ([10.253.24.32]) by fmvoesa102.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 27 Oct 2023 10:01:18 -0700 X-ExtLoop1: 1 X-IronPort-AV: E=McAfee;i="6600,9927,10876"; a="1090988190" X-IronPort-AV: E=Sophos;i="6.03,256,1694761200"; d="scan'208";a="1090988190" Received: from rchatre-ws.ostc.intel.com ([10.54.69.144]) by fmsmga005-auth.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 27 Oct 2023 10:01:16 -0700 From: Reinette Chatre To: jgg@nvidia.com, yishaih@nvidia.com, shameerali.kolothum.thodi@huawei.com, kevin.tian@intel.com, alex.williamson@redhat.com Cc: kvm@vger.kernel.org, dave.jiang@intel.com, jing2.liu@intel.com, ashok.raj@intel.com, fenghua.yu@intel.com, tom.zanussi@linux.intel.com, reinette.chatre@intel.com, linux-kernel@vger.kernel.org, patches@lists.linux.dev Subject: [RFC PATCH V3 12/26] vfio/pci: Provide interrupt context to generic ops Date: Fri, 27 Oct 2023 10:00:44 -0700 Message-Id: <982ab998895e918a8920e5d5d927bb653f2cd7cf.1698422237.git.reinette.chatre@intel.com> X-Mailer: git-send-email 2.34.1 In-Reply-To: References: 
MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org The functions operating on the per-interrupt context were originally created to support management of PCI device interrupts where the interrupt context was maintained within the virtual PCI device's struct vfio_pci_core_device. Now that the per-interrupt context has been moved to a more generic struct vfio_pci_intr_ctx these utilities can be changed to expect the generic structure instead. This enables these utilities to be used in other interrupt management backends. Signed-off-by: Reinette Chatre --- No changes since RFC V2. drivers/vfio/pci/vfio_pci_intrs.c | 41 ++++++++++++++++--------------- 1 file changed, 21 insertions(+), 20 deletions(-) diff --git a/drivers/vfio/pci/vfio_pci_intrs.c b/drivers/vfio/pci/vfio_pci_intrs.c index 9aff5c38f198..cdb6f875271f 100644 --- a/drivers/vfio/pci/vfio_pci_intrs.c +++ b/drivers/vfio/pci/vfio_pci_intrs.c @@ -49,21 +49,21 @@ static bool is_irq_none(struct vfio_pci_intr_ctx *intr_ctx) } static -struct vfio_pci_irq_ctx *vfio_irq_ctx_get(struct vfio_pci_core_device *vdev, +struct vfio_pci_irq_ctx *vfio_irq_ctx_get(struct vfio_pci_intr_ctx *intr_ctx, unsigned long index) { - return xa_load(&vdev->intr_ctx.ctx, index); + return xa_load(&intr_ctx->ctx, index); } -static void vfio_irq_ctx_free(struct vfio_pci_core_device *vdev, +static void vfio_irq_ctx_free(struct vfio_pci_intr_ctx *intr_ctx, struct vfio_pci_irq_ctx *ctx, unsigned long index) { - xa_erase(&vdev->intr_ctx.ctx, index); + xa_erase(&intr_ctx->ctx, index); kfree(ctx); } static struct vfio_pci_irq_ctx * -vfio_irq_ctx_alloc(struct vfio_pci_core_device *vdev, unsigned long index) +vfio_irq_ctx_alloc(struct vfio_pci_intr_ctx *intr_ctx, unsigned long index) { struct vfio_pci_irq_ctx *ctx; int ret; @@ -72,7 +72,7 @@ vfio_irq_ctx_alloc(struct vfio_pci_core_device *vdev, unsigned long index) if (!ctx) return NULL; - ret = xa_insert(&vdev->intr_ctx.ctx, index, ctx, GFP_KERNEL_ACCOUNT); + ret = xa_insert(&intr_ctx->ctx, index, ctx, GFP_KERNEL_ACCOUNT); if (ret) { kfree(ctx); return NULL; @@ -91,7 +91,7 @@ static void vfio_send_intx_eventfd(void *opaque, void *unused) if (likely(is_intx(vdev) && !vdev->virq_disabled)) { struct vfio_pci_irq_ctx *ctx; - ctx = vfio_irq_ctx_get(vdev, 0); + ctx = vfio_irq_ctx_get(&vdev->intr_ctx, 0); if (WARN_ON_ONCE(!ctx)) return; eventfd_signal(ctx->trigger, 1); @@ -120,7 +120,7 @@ bool vfio_pci_intx_mask(struct vfio_pci_core_device *vdev) goto out_unlock; } - ctx = vfio_irq_ctx_get(vdev, 0); + ctx = vfio_irq_ctx_get(&vdev->intr_ctx, 0); if (WARN_ON_ONCE(!ctx)) goto out_unlock; @@ -169,7 +169,7 @@ static int vfio_pci_intx_unmask_handler(void *opaque, void *unused) goto out_unlock; } - ctx = vfio_irq_ctx_get(vdev, 0); + ctx = vfio_irq_ctx_get(&vdev->intr_ctx, 0); if (WARN_ON_ONCE(!ctx)) goto out_unlock; @@ -207,7 +207,7 @@ static irqreturn_t vfio_intx_handler(int irq, void *dev_id) unsigned long flags; int ret = IRQ_NONE; - ctx = vfio_irq_ctx_get(vdev, 0); + ctx = vfio_irq_ctx_get(&vdev->intr_ctx, 0); if (WARN_ON_ONCE(!ctx)) return ret; @@ -241,7 +241,7 @@ static int vfio_intx_enable(struct vfio_pci_core_device *vdev) if (!vdev->pdev->irq) return -ENODEV; - ctx = vfio_irq_ctx_alloc(vdev, 0); + ctx = vfio_irq_ctx_alloc(&vdev->intr_ctx, 0); if (!ctx) return -ENOMEM; @@ -269,7 +269,7 @@ static int vfio_intx_set_signal(struct vfio_pci_core_device *vdev, int fd) unsigned long flags; int ret; - ctx = vfio_irq_ctx_get(vdev, 0); + ctx = vfio_irq_ctx_get(&vdev->intr_ctx, 0); if (WARN_ON_ONCE(!ctx)) return 
-EINVAL; @@ -324,7 +324,7 @@ static void vfio_intx_disable(struct vfio_pci_core_device *vdev) { struct vfio_pci_irq_ctx *ctx; - ctx = vfio_irq_ctx_get(vdev, 0); + ctx = vfio_irq_ctx_get(&vdev->intr_ctx, 0); WARN_ON_ONCE(!ctx); if (ctx) { vfio_virqfd_disable(&ctx->unmask); @@ -332,7 +332,7 @@ static void vfio_intx_disable(struct vfio_pci_core_device *vdev) } vfio_intx_set_signal(vdev, -1); vdev->intr_ctx.irq_type = VFIO_PCI_NUM_IRQS; - vfio_irq_ctx_free(vdev, ctx, 0); + vfio_irq_ctx_free(&vdev->intr_ctx, ctx, 0); } /* @@ -421,7 +421,7 @@ static int vfio_msi_set_vector_signal(struct vfio_pci_core_device *vdev, int irq = -EINVAL, ret; u16 cmd; - ctx = vfio_irq_ctx_get(vdev, vector); + ctx = vfio_irq_ctx_get(&vdev->intr_ctx, vector); if (ctx) { irq_bypass_unregister_producer(&ctx->producer); @@ -432,7 +432,7 @@ static int vfio_msi_set_vector_signal(struct vfio_pci_core_device *vdev, /* Interrupt stays allocated, will be freed at MSI-X disable. */ kfree(ctx->name); eventfd_ctx_put(ctx->trigger); - vfio_irq_ctx_free(vdev, ctx, vector); + vfio_irq_ctx_free(&vdev->intr_ctx, ctx, vector); } if (fd < 0) @@ -445,7 +445,7 @@ static int vfio_msi_set_vector_signal(struct vfio_pci_core_device *vdev, return irq; } - ctx = vfio_irq_ctx_alloc(vdev, vector); + ctx = vfio_irq_ctx_alloc(&vdev->intr_ctx, vector); if (!ctx) return -ENOMEM; @@ -499,7 +499,7 @@ static int vfio_msi_set_vector_signal(struct vfio_pci_core_device *vdev, out_free_name: kfree(ctx->name); out_free_ctx: - vfio_irq_ctx_free(vdev, ctx, vector); + vfio_irq_ctx_free(&vdev->intr_ctx, ctx, vector); return ret; } @@ -570,7 +570,8 @@ static int vfio_pci_set_intx_unmask(struct vfio_pci_intr_ctx *intr_ctx, if (unmask) vfio_pci_intx_unmask(vdev); } else if (flags & VFIO_IRQ_SET_DATA_EVENTFD) { - struct vfio_pci_irq_ctx *ctx = vfio_irq_ctx_get(vdev, 0); + struct vfio_pci_irq_ctx *ctx = vfio_irq_ctx_get(&vdev->intr_ctx, + 0); int32_t fd = *(int32_t *)data; if (WARN_ON_ONCE(!ctx)) @@ -696,7 +697,7 @@ static int vfio_pci_set_msi_trigger(struct vfio_pci_intr_ctx *intr_ctx, return -EINVAL; for (i = start; i < start + count; i++) { - ctx = vfio_irq_ctx_get(vdev, i); + ctx = vfio_irq_ctx_get(&vdev->intr_ctx, i); if (!ctx) continue; if (flags & VFIO_IRQ_SET_DATA_NONE) { From patchwork Fri Oct 27 17:00:45 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Reinette Chatre X-Patchwork-Id: 13438647 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id C3D8EC25B70 for ; Fri, 27 Oct 2023 17:01:57 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1346373AbjJ0RB5 (ORCPT ); Fri, 27 Oct 2023 13:01:57 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:43272 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1346163AbjJ0RBk (ORCPT ); Fri, 27 Oct 2023 13:01:40 -0400 Received: from mgamail.intel.com (mgamail.intel.com [192.198.163.8]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 2352D1BC; Fri, 27 Oct 2023 10:01:33 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1698426094; x=1729962094; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=a7OqrEqBGe0dtm6LA2KrhYz8WQmFLC+1PYXBMh2oJhw=; 
b=XSDy2yTQxGIE/kFb4hGHbh5a0n3t16f8HPebQRGO0etfIbubJXUuV2BY +Eak23BFzvsfLlm20r3GMV7c8f/A6El6V8YxqUkZV9KDZWGQxuGGWYSAb y2CSW55hyXwNrQDTwJfWGHfjaHiGYkAFG2WYSCF8q6by/S7AwUbxADWe+ pYc4xG/FYB4KRGQyPen02CGgk5QLWhFDn+ocPPINWwmyN5XfDT10ucFQh oQWAvE7Q+v7e4GgLEme+HdIvQFFlWz9YCGjqinFwrYpbLFvn9jXcZo52s 36SKzF6t/wl7EIY1YHRCdxe38ktsYf4WxCf9zz646qDtvERzDnLFChpZg g==; X-IronPort-AV: E=McAfee;i="6600,9927,10876"; a="611950" X-IronPort-AV: E=Sophos;i="6.03,256,1694761200"; d="scan'208";a="611950" Received: from fmsmga005.fm.intel.com ([10.253.24.32]) by fmvoesa102.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 27 Oct 2023 10:01:18 -0700 X-ExtLoop1: 1 X-IronPort-AV: E=McAfee;i="6600,9927,10876"; a="1090988197" X-IronPort-AV: E=Sophos;i="6.03,256,1694761200"; d="scan'208";a="1090988197" Received: from rchatre-ws.ostc.intel.com ([10.54.69.144]) by fmsmga005-auth.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 27 Oct 2023 10:01:17 -0700 From: Reinette Chatre To: jgg@nvidia.com, yishaih@nvidia.com, shameerali.kolothum.thodi@huawei.com, kevin.tian@intel.com, alex.williamson@redhat.com Cc: kvm@vger.kernel.org, dave.jiang@intel.com, jing2.liu@intel.com, ashok.raj@intel.com, fenghua.yu@intel.com, tom.zanussi@linux.intel.com, reinette.chatre@intel.com, linux-kernel@vger.kernel.org, patches@lists.linux.dev Subject: [RFC PATCH V3 13/26] vfio/pci: Provide interrupt context to vfio_msi_enable() and vfio_msi_disable() Date: Fri, 27 Oct 2023 10:00:45 -0700 Message-Id: X-Mailer: git-send-email 2.34.1 In-Reply-To: References: MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org vfio_msi_enable() and vfio_msi_disable() perform the PCI specific operations to allocate and free interrupts on the device that will back the guest interrupts. This makes these functions backend specific calls that should be called by the interrupt management frontend. Pass the interrupt context as parameter to vfio_msi_enable() and vfio_msi_disable() so that they can be called by a generic frontend and make it possible for other backends to provide their own vfio_msi_enable() and vfio_msi_disable(). Signed-off-by: Reinette Chatre --- Changes since RFC V2: - New patch drivers/vfio/pci/vfio_pci_intrs.c | 20 +++++++++++--------- 1 file changed, 11 insertions(+), 9 deletions(-) diff --git a/drivers/vfio/pci/vfio_pci_intrs.c b/drivers/vfio/pci/vfio_pci_intrs.c index cdb6f875271f..ad3f9c1baccc 100644 --- a/drivers/vfio/pci/vfio_pci_intrs.c +++ b/drivers/vfio/pci/vfio_pci_intrs.c @@ -346,14 +346,15 @@ static irqreturn_t vfio_msihandler(int irq, void *arg) return IRQ_HANDLED; } -static int vfio_msi_enable(struct vfio_pci_core_device *vdev, int nvec, bool msix) +static int vfio_msi_enable(struct vfio_pci_intr_ctx *intr_ctx, int nvec, bool msix) { + struct vfio_pci_core_device *vdev = intr_ctx->priv; struct pci_dev *pdev = vdev->pdev; unsigned int flag = msix ? PCI_IRQ_MSIX : PCI_IRQ_MSI; int ret; u16 cmd; - if (!is_irq_none(&vdev->intr_ctx)) + if (!is_irq_none(intr_ctx)) return -EINVAL; /* return the number of supported vectors if we can't get all: */ @@ -367,7 +368,7 @@ static int vfio_msi_enable(struct vfio_pci_core_device *vdev, int nvec, bool msi } vfio_pci_memory_unlock_and_restore(vdev, cmd); - vdev->intr_ctx.irq_type = msix ? VFIO_PCI_MSIX_IRQ_INDEX : + intr_ctx->irq_type = msix ? 
VFIO_PCI_MSIX_IRQ_INDEX : VFIO_PCI_MSI_IRQ_INDEX; if (!msix) { @@ -523,14 +524,15 @@ static int vfio_msi_set_block(struct vfio_pci_core_device *vdev, return ret; } -static void vfio_msi_disable(struct vfio_pci_core_device *vdev, bool msix) +static void vfio_msi_disable(struct vfio_pci_intr_ctx *intr_ctx, bool msix) { + struct vfio_pci_core_device *vdev = intr_ctx->priv; struct pci_dev *pdev = vdev->pdev; struct vfio_pci_irq_ctx *ctx; unsigned long i; u16 cmd; - xa_for_each(&vdev->intr_ctx.ctx, i, ctx) { + xa_for_each(&intr_ctx->ctx, i, ctx) { vfio_virqfd_disable(&ctx->unmask); vfio_virqfd_disable(&ctx->mask); vfio_msi_set_vector_signal(vdev, i, -1, msix); @@ -547,7 +549,7 @@ static void vfio_msi_disable(struct vfio_pci_core_device *vdev, bool msix) if (vdev->nointx) pci_intx(pdev, 0); - vdev->intr_ctx.irq_type = VFIO_PCI_NUM_IRQS; + intr_ctx->irq_type = VFIO_PCI_NUM_IRQS; } /* @@ -667,7 +669,7 @@ static int vfio_pci_set_msi_trigger(struct vfio_pci_intr_ctx *intr_ctx, bool msix = (index == VFIO_PCI_MSIX_IRQ_INDEX) ? true : false; if (irq_is(intr_ctx, index) && !count && (flags & VFIO_IRQ_SET_DATA_NONE)) { - vfio_msi_disable(vdev, msix); + vfio_msi_disable(intr_ctx, msix); return 0; } @@ -682,13 +684,13 @@ static int vfio_pci_set_msi_trigger(struct vfio_pci_intr_ctx *intr_ctx, return vfio_msi_set_block(vdev, start, count, fds, msix); - ret = vfio_msi_enable(vdev, start + count, msix); + ret = vfio_msi_enable(intr_ctx, start + count, msix); if (ret) return ret; ret = vfio_msi_set_block(vdev, start, count, fds, msix); if (ret) - vfio_msi_disable(vdev, msix); + vfio_msi_disable(intr_ctx, msix); return ret; } From patchwork Fri Oct 27 17:00:46 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Reinette Chatre X-Patchwork-Id: 13438646 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 7630BC25B47 for ; Fri, 27 Oct 2023 17:01:55 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1346354AbjJ0RBz (ORCPT ); Fri, 27 Oct 2023 13:01:55 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:43384 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1346053AbjJ0RBk (ORCPT ); Fri, 27 Oct 2023 13:01:40 -0400 Received: from mgamail.intel.com (mgamail.intel.com [192.198.163.8]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 76CA21BE; Fri, 27 Oct 2023 10:01:34 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1698426095; x=1729962095; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=bBS/18bRpnFhIcmeDxdHepWbJUmanHfqzWI9dXFjxAQ=; b=DtLvWDjN65As9exdCw7oFr3h/bFyxLTTuihrqueZpzvct90z6ucfRw6H jyhav7sW3EIaG75AsIQzVo8OPv463o7E5Sf7T5/BhNDE6gol0SBoIcXBt 7161Avl9IYRWmvImQS4eR/uSEY1dEmfusGG/2uilAZZHKjQMqQuKA4xsk FZQQ2nog/F56pGwFOVobqeN8D1NgiAcKiP0W/E/9qS728JAeABStGWKJs 6dEl/9neoukAtji1hUMFEWztkmHwP0EsrnNLkHFrB31qd/wAyKmD+kaSl AQ8yCdKLIl0JMLcxywh89cEjgGqIc+roFphgjjB3HPQIGVCRtwjKh8Ew0 g==; X-IronPort-AV: E=McAfee;i="6600,9927,10876"; a="611958" X-IronPort-AV: E=Sophos;i="6.03,256,1694761200"; d="scan'208";a="611958" Received: from fmsmga005.fm.intel.com ([10.253.24.32]) by fmvoesa102.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 27 Oct 2023 10:01:18 -0700 
X-ExtLoop1: 1 X-IronPort-AV: E=McAfee;i="6600,9927,10876"; a="1090988200" X-IronPort-AV: E=Sophos;i="6.03,256,1694761200"; d="scan'208";a="1090988200" Received: from rchatre-ws.ostc.intel.com ([10.54.69.144]) by fmsmga005-auth.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 27 Oct 2023 10:01:17 -0700 From: Reinette Chatre To: jgg@nvidia.com, yishaih@nvidia.com, shameerali.kolothum.thodi@huawei.com, kevin.tian@intel.com, alex.williamson@redhat.com Cc: kvm@vger.kernel.org, dave.jiang@intel.com, jing2.liu@intel.com, ashok.raj@intel.com, fenghua.yu@intel.com, tom.zanussi@linux.intel.com, reinette.chatre@intel.com, linux-kernel@vger.kernel.org, patches@lists.linux.dev Subject: [RFC PATCH V3 14/26] vfio/pci: Let interrupt management backend interpret interrupt index Date: Fri, 27 Oct 2023 10:00:46 -0700 Message-Id: X-Mailer: git-send-email 2.34.1 In-Reply-To: References: MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org vfio_pci_set_msi_trigger() and vfio_msi_set_block() are generic and can be shared by different interrupt backends. This implies that these functions should not interpret user provided parameters but instead pass them to the backend specific code for interpretation. Instead of assuming that only MSI or MSI-X can be provided via the index and passing a boolean based on what was received, pass the actual index to backend for interpretation. Signed-off-by: Reinette Chatre --- Changes since RFC V2: - New patch drivers/vfio/pci/vfio_pci_intrs.c | 38 +++++++++++++++++-------------- 1 file changed, 21 insertions(+), 17 deletions(-) diff --git a/drivers/vfio/pci/vfio_pci_intrs.c b/drivers/vfio/pci/vfio_pci_intrs.c index ad3f9c1baccc..d2b80e176651 100644 --- a/drivers/vfio/pci/vfio_pci_intrs.c +++ b/drivers/vfio/pci/vfio_pci_intrs.c @@ -346,17 +346,20 @@ static irqreturn_t vfio_msihandler(int irq, void *arg) return IRQ_HANDLED; } -static int vfio_msi_enable(struct vfio_pci_intr_ctx *intr_ctx, int nvec, bool msix) +static int vfio_msi_enable(struct vfio_pci_intr_ctx *intr_ctx, int nvec, + unsigned int index) { struct vfio_pci_core_device *vdev = intr_ctx->priv; struct pci_dev *pdev = vdev->pdev; - unsigned int flag = msix ? PCI_IRQ_MSIX : PCI_IRQ_MSI; + unsigned int flag; int ret; u16 cmd; if (!is_irq_none(intr_ctx)) return -EINVAL; + flag = (index == VFIO_PCI_MSIX_IRQ_INDEX) ? PCI_IRQ_MSIX : PCI_IRQ_MSI; + /* return the number of supported vectors if we can't get all: */ cmd = vfio_pci_memory_lock_and_enable(vdev); ret = pci_alloc_irq_vectors(pdev, 1, nvec, flag); @@ -368,10 +371,9 @@ static int vfio_msi_enable(struct vfio_pci_intr_ctx *intr_ctx, int nvec, bool ms } vfio_pci_memory_unlock_and_restore(vdev, cmd); - intr_ctx->irq_type = msix ? VFIO_PCI_MSIX_IRQ_INDEX : - VFIO_PCI_MSI_IRQ_INDEX; + intr_ctx->irq_type = index; - if (!msix) { + if (index == VFIO_PCI_MSI_IRQ_INDEX) { /* * Compute the virtual hardware field for max msi vectors - * it is the log base 2 of the number of vectors. @@ -414,8 +416,10 @@ static int vfio_msi_alloc_irq(struct vfio_pci_core_device *vdev, } static int vfio_msi_set_vector_signal(struct vfio_pci_core_device *vdev, - unsigned int vector, int fd, bool msix) + unsigned int vector, int fd, + unsigned int index) { + bool msix = (index == VFIO_PCI_MSIX_IRQ_INDEX) ? 
true : false; struct pci_dev *pdev = vdev->pdev; struct vfio_pci_irq_ctx *ctx; struct eventfd_ctx *trigger; @@ -506,25 +510,26 @@ static int vfio_msi_set_vector_signal(struct vfio_pci_core_device *vdev, static int vfio_msi_set_block(struct vfio_pci_core_device *vdev, unsigned int start, unsigned int count, - int32_t *fds, bool msix) + int32_t *fds, unsigned int index) { unsigned int i, j; int ret = 0; for (i = 0, j = start; i < count && !ret; i++, j++) { int fd = fds ? fds[i] : -1; - ret = vfio_msi_set_vector_signal(vdev, j, fd, msix); + ret = vfio_msi_set_vector_signal(vdev, j, fd, index); } if (ret) { for (i = start; i < j; i++) - vfio_msi_set_vector_signal(vdev, i, -1, msix); + vfio_msi_set_vector_signal(vdev, i, -1, index); } return ret; } -static void vfio_msi_disable(struct vfio_pci_intr_ctx *intr_ctx, bool msix) +static void vfio_msi_disable(struct vfio_pci_intr_ctx *intr_ctx, + unsigned int index) { struct vfio_pci_core_device *vdev = intr_ctx->priv; struct pci_dev *pdev = vdev->pdev; @@ -535,7 +540,7 @@ static void vfio_msi_disable(struct vfio_pci_intr_ctx *intr_ctx, bool msix) xa_for_each(&intr_ctx->ctx, i, ctx) { vfio_virqfd_disable(&ctx->unmask); vfio_virqfd_disable(&ctx->mask); - vfio_msi_set_vector_signal(vdev, i, -1, msix); + vfio_msi_set_vector_signal(vdev, i, -1, index); } cmd = vfio_pci_memory_lock_and_enable(vdev); @@ -666,10 +671,9 @@ static int vfio_pci_set_msi_trigger(struct vfio_pci_intr_ctx *intr_ctx, struct vfio_pci_core_device *vdev = intr_ctx->priv; struct vfio_pci_irq_ctx *ctx; unsigned int i; - bool msix = (index == VFIO_PCI_MSIX_IRQ_INDEX) ? true : false; if (irq_is(intr_ctx, index) && !count && (flags & VFIO_IRQ_SET_DATA_NONE)) { - vfio_msi_disable(intr_ctx, msix); + vfio_msi_disable(intr_ctx, index); return 0; } @@ -682,15 +686,15 @@ static int vfio_pci_set_msi_trigger(struct vfio_pci_intr_ctx *intr_ctx, if (vdev->intr_ctx.irq_type == index) return vfio_msi_set_block(vdev, start, count, - fds, msix); + fds, index); - ret = vfio_msi_enable(intr_ctx, start + count, msix); + ret = vfio_msi_enable(intr_ctx, start + count, index); if (ret) return ret; - ret = vfio_msi_set_block(vdev, start, count, fds, msix); + ret = vfio_msi_set_block(vdev, start, count, fds, index); if (ret) - vfio_msi_disable(intr_ctx, msix); + vfio_msi_disable(intr_ctx, index); return ret; } From patchwork Fri Oct 27 17:00:47 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Reinette Chatre X-Patchwork-Id: 13438649 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 16708C25B72 for ; Fri, 27 Oct 2023 17:02:01 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1345949AbjJ0RCA (ORCPT ); Fri, 27 Oct 2023 13:02:00 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:43402 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1346191AbjJ0RBl (ORCPT ); Fri, 27 Oct 2023 13:01:41 -0400 Received: from mgamail.intel.com (mgamail.intel.com [192.198.163.8]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 1CBB9D40; Fri, 27 Oct 2023 10:01:35 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1698426095; x=1729962095; h=from:to:cc:subject:date:message-id:in-reply-to: 
references:mime-version:content-transfer-encoding; bh=HCox2ulOzQiGBwuU9Q0+MtUDv4dr89lG8sCECB6xAZg=; b=irpCT1kpOa6OhvIECJu5dDqdHvgAUrTc0x/EokN3O2lv2fNofUMZId9i upguZw5cs4zt2VC4Fbat3yoPu91WLgcD6OHVydFrkiUw88nCSi4UHzZqO rw8qou+MfQxSsDLVP+HEteh0XXDK7lAG76YCduKpllZnn+XQ6VtVD9RSj zVy+ierq9agmYTB/qXgnPhlZieXVfMWVLO3P8DNMzfSlH0nc4Xrh1EM/T +z5qo53Ue1/Zn8jXY2Q7vVNUQpi+1lBE91Ab94sGTKtc0IUFypHFWz0aD fn7n0DZdCMEfuVcsRWTtlf+VphQOnGUX00Fu71C8WYJxNsK20Kr2w37o9 A==; X-IronPort-AV: E=McAfee;i="6600,9927,10876"; a="611981" X-IronPort-AV: E=Sophos;i="6.03,256,1694761200"; d="scan'208";a="611981" Received: from fmsmga005.fm.intel.com ([10.253.24.32]) by fmvoesa102.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 27 Oct 2023 10:01:19 -0700 X-ExtLoop1: 1 X-IronPort-AV: E=McAfee;i="6600,9927,10876"; a="1090988205" X-IronPort-AV: E=Sophos;i="6.03,256,1694761200"; d="scan'208";a="1090988205" Received: from rchatre-ws.ostc.intel.com ([10.54.69.144]) by fmsmga005-auth.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 27 Oct 2023 10:01:17 -0700 From: Reinette Chatre To: jgg@nvidia.com, yishaih@nvidia.com, shameerali.kolothum.thodi@huawei.com, kevin.tian@intel.com, alex.williamson@redhat.com Cc: kvm@vger.kernel.org, dave.jiang@intel.com, jing2.liu@intel.com, ashok.raj@intel.com, fenghua.yu@intel.com, tom.zanussi@linux.intel.com, reinette.chatre@intel.com, linux-kernel@vger.kernel.org, patches@lists.linux.dev Subject: [RFC PATCH V3 15/26] vfio/pci: Move generic code to frontend Date: Fri, 27 Oct 2023 10:00:47 -0700 Message-Id: <8c1d36376cbfade8576d72ef148ea842322ec375.1698422237.git.reinette.chatre@intel.com> X-Mailer: git-send-email 2.34.1 In-Reply-To: References: MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org vfio_pci_set_msi_trigger() and vfio_msi_set_block() are generic and thus appropriate to be frontend code. This means that they should operate on the interrupt context, not backend specific data. Provide the interrupt context as parameter to vfio_pci_set_msi_trigger() and vfio_msi_set_block() and remove all references to the PCI interrupt management data from these functions. This enables these functions to form part of the interrupt management frontend shared by different interrupt management backends. Signed-off-by: Reinette Chatre --- Changes since RFC V2: - New patch drivers/vfio/pci/vfio_pci_intrs.c | 28 ++++++++++++++-------------- 1 file changed, 14 insertions(+), 14 deletions(-) diff --git a/drivers/vfio/pci/vfio_pci_intrs.c b/drivers/vfio/pci/vfio_pci_intrs.c index d2b80e176651..adad93c31ac6 100644 --- a/drivers/vfio/pci/vfio_pci_intrs.c +++ b/drivers/vfio/pci/vfio_pci_intrs.c @@ -415,18 +415,19 @@ static int vfio_msi_alloc_irq(struct vfio_pci_core_device *vdev, return map.index < 0 ? map.index : map.virq; } -static int vfio_msi_set_vector_signal(struct vfio_pci_core_device *vdev, +static int vfio_msi_set_vector_signal(struct vfio_pci_intr_ctx *intr_ctx, unsigned int vector, int fd, unsigned int index) { bool msix = (index == VFIO_PCI_MSIX_IRQ_INDEX) ? true : false; + struct vfio_pci_core_device *vdev = intr_ctx->priv; struct pci_dev *pdev = vdev->pdev; struct vfio_pci_irq_ctx *ctx; struct eventfd_ctx *trigger; int irq = -EINVAL, ret; u16 cmd; - ctx = vfio_irq_ctx_get(&vdev->intr_ctx, vector); + ctx = vfio_irq_ctx_get(intr_ctx, vector); if (ctx) { irq_bypass_unregister_producer(&ctx->producer); @@ -437,7 +438,7 @@ static int vfio_msi_set_vector_signal(struct vfio_pci_core_device *vdev, /* Interrupt stays allocated, will be freed at MSI-X disable. 
*/ kfree(ctx->name); eventfd_ctx_put(ctx->trigger); - vfio_irq_ctx_free(&vdev->intr_ctx, ctx, vector); + vfio_irq_ctx_free(intr_ctx, ctx, vector); } if (fd < 0) @@ -450,7 +451,7 @@ static int vfio_msi_set_vector_signal(struct vfio_pci_core_device *vdev, return irq; } - ctx = vfio_irq_ctx_alloc(&vdev->intr_ctx, vector); + ctx = vfio_irq_ctx_alloc(intr_ctx, vector); if (!ctx) return -ENOMEM; @@ -504,11 +505,11 @@ static int vfio_msi_set_vector_signal(struct vfio_pci_core_device *vdev, out_free_name: kfree(ctx->name); out_free_ctx: - vfio_irq_ctx_free(&vdev->intr_ctx, ctx, vector); + vfio_irq_ctx_free(intr_ctx, ctx, vector); return ret; } -static int vfio_msi_set_block(struct vfio_pci_core_device *vdev, +static int vfio_msi_set_block(struct vfio_pci_intr_ctx *intr_ctx, unsigned int start, unsigned int count, int32_t *fds, unsigned int index) { @@ -517,12 +518,12 @@ static int vfio_msi_set_block(struct vfio_pci_core_device *vdev, for (i = 0, j = start; i < count && !ret; i++, j++) { int fd = fds ? fds[i] : -1; - ret = vfio_msi_set_vector_signal(vdev, j, fd, index); + ret = vfio_msi_set_vector_signal(intr_ctx, j, fd, index); } if (ret) { for (i = start; i < j; i++) - vfio_msi_set_vector_signal(vdev, i, -1, index); + vfio_msi_set_vector_signal(intr_ctx, i, -1, index); } return ret; @@ -540,7 +541,7 @@ static void vfio_msi_disable(struct vfio_pci_intr_ctx *intr_ctx, xa_for_each(&intr_ctx->ctx, i, ctx) { vfio_virqfd_disable(&ctx->unmask); vfio_virqfd_disable(&ctx->mask); - vfio_msi_set_vector_signal(vdev, i, -1, index); + vfio_msi_set_vector_signal(intr_ctx, i, -1, index); } cmd = vfio_pci_memory_lock_and_enable(vdev); @@ -668,7 +669,6 @@ static int vfio_pci_set_msi_trigger(struct vfio_pci_intr_ctx *intr_ctx, unsigned int count, uint32_t flags, void *data) { - struct vfio_pci_core_device *vdev = intr_ctx->priv; struct vfio_pci_irq_ctx *ctx; unsigned int i; @@ -684,15 +684,15 @@ static int vfio_pci_set_msi_trigger(struct vfio_pci_intr_ctx *intr_ctx, int32_t *fds = data; int ret; - if (vdev->intr_ctx.irq_type == index) - return vfio_msi_set_block(vdev, start, count, + if (intr_ctx->irq_type == index) + return vfio_msi_set_block(intr_ctx, start, count, fds, index); ret = vfio_msi_enable(intr_ctx, start + count, index); if (ret) return ret; - ret = vfio_msi_set_block(vdev, start, count, fds, index); + ret = vfio_msi_set_block(intr_ctx, start, count, fds, index); if (ret) vfio_msi_disable(intr_ctx, index); @@ -703,7 +703,7 @@ static int vfio_pci_set_msi_trigger(struct vfio_pci_intr_ctx *intr_ctx, return -EINVAL; for (i = start; i < start + count; i++) { - ctx = vfio_irq_ctx_get(&vdev->intr_ctx, i); + ctx = vfio_irq_ctx_get(intr_ctx, i); if (!ctx) continue; if (flags & VFIO_IRQ_SET_DATA_NONE) { From patchwork Fri Oct 27 17:00:48 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Reinette Chatre X-Patchwork-Id: 13438664 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 3935FC25B48 for ; Fri, 27 Oct 2023 17:02:05 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1346370AbjJ0RCD (ORCPT ); Fri, 27 Oct 2023 13:02:03 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:43408 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1346205AbjJ0RBl (ORCPT ); Fri, 27 Oct 2023 13:01:41 
-0400 Received: from mgamail.intel.com (mgamail.intel.com [192.198.163.8]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 37AB2D42; Fri, 27 Oct 2023 10:01:35 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1698426095; x=1729962095; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=4LhbS49tlQOs/14GbzRw202WRPZQBhJkwr2CrMC/ZaY=; b=kcXIbjiEKNGKI8v2mnKTODYnLTJJnHS7n7KO/at3i9BQkoYvmho0KaeS 5qzUMOZELRGLGoZRPgSD+NanqI1wNycY/3wcT4uDx2JnprxVj6QYyFva7 boUc/NcEU6EWQ25Dn1maNTdYYMHnDnH+8x+ArwDd6z5W25+fQMHuGjhZY xIoxUoGVgqo0p2T7/CrdpIuPB3e/EPm7i0KtPcgAsjc2CU87dOkVxzAqW QXZUZESLFd2BBuwhPrNl72HcNUM4RBHm2gIffccDb1XluPnNlfC0Iwb7z 4BSpGCwqipUqC1KwinkFMgcMQlX+IdZ03A+KcmuEcYPWtMaq6ZTGRIeVN A==; X-IronPort-AV: E=McAfee;i="6600,9927,10876"; a="611999" X-IronPort-AV: E=Sophos;i="6.03,256,1694761200"; d="scan'208";a="611999" Received: from fmsmga005.fm.intel.com ([10.253.24.32]) by fmvoesa102.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 27 Oct 2023 10:01:19 -0700 X-ExtLoop1: 1 X-IronPort-AV: E=McAfee;i="6600,9927,10876"; a="1090988208" X-IronPort-AV: E=Sophos;i="6.03,256,1694761200"; d="scan'208";a="1090988208" Received: from rchatre-ws.ostc.intel.com ([10.54.69.144]) by fmsmga005-auth.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 27 Oct 2023 10:01:18 -0700 From: Reinette Chatre To: jgg@nvidia.com, yishaih@nvidia.com, shameerali.kolothum.thodi@huawei.com, kevin.tian@intel.com, alex.williamson@redhat.com Cc: kvm@vger.kernel.org, dave.jiang@intel.com, jing2.liu@intel.com, ashok.raj@intel.com, fenghua.yu@intel.com, tom.zanussi@linux.intel.com, reinette.chatre@intel.com, linux-kernel@vger.kernel.org, patches@lists.linux.dev Subject: [RFC PATCH V3 16/26] vfio/pci: Split interrupt context initialization Date: Fri, 27 Oct 2023 10:00:48 -0700 Message-Id: <1f65808ba9e7c54c5ea1590dadfeb1e10ac5c276.1698422237.git.reinette.chatre@intel.com> X-Mailer: git-send-email 2.34.1 In-Reply-To: References: MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org struct vfio_pci_intr_ctx is the context associated with interrupts of a virtual device. The interrupt context is initialized with backend specific data required by the particular interrupt management backend as well as common initialization required by all interrupt management backends. Split interrupt context initialization into common and interrupt management backend specific calls. The entrypoint will be the initialization of a particular interrupt management backend which in turn calls the common initialization. Signed-off-by: Reinette Chatre --- No changes since RFC V2. 
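The resulting split condenses to the sketch below (summarized from the diff that follows; an illustration, not a replacement for the hunks):

  /* Common state needed by every interrupt management backend. */
  static void _vfio_pci_init_intr_ctx(struct vfio_pci_intr_ctx *intr_ctx)
  {
          intr_ctx->irq_type = VFIO_PCI_NUM_IRQS;
          mutex_init(&intr_ctx->igate);
          xa_init(&intr_ctx->ctx);
  }

  /* PCI backend entry point: common initialization first, then the
   * backend specific ops and private data.
   */
  void vfio_pci_init_intr_ctx(struct vfio_pci_core_device *vdev,
                              struct vfio_pci_intr_ctx *intr_ctx)
  {
          _vfio_pci_init_intr_ctx(intr_ctx);
          intr_ctx->ops = &vfio_pci_intr_ops;
          intr_ctx->priv = vdev;
  }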
drivers/vfio/pci/vfio_pci_intrs.c | 18 ++++++++++++++---- 1 file changed, 14 insertions(+), 4 deletions(-) diff --git a/drivers/vfio/pci/vfio_pci_intrs.c b/drivers/vfio/pci/vfio_pci_intrs.c index adad93c31ac6..14131d5288e3 100644 --- a/drivers/vfio/pci/vfio_pci_intrs.c +++ b/drivers/vfio/pci/vfio_pci_intrs.c @@ -801,6 +801,18 @@ static int vfio_pci_set_req_trigger(struct vfio_pci_intr_ctx *intr_ctx, count, flags, data); } +static void _vfio_pci_init_intr_ctx(struct vfio_pci_intr_ctx *intr_ctx) +{ + intr_ctx->irq_type = VFIO_PCI_NUM_IRQS; + mutex_init(&intr_ctx->igate); + xa_init(&intr_ctx->ctx); +} + +static void _vfio_pci_release_intr_ctx(struct vfio_pci_intr_ctx *intr_ctx) +{ + mutex_destroy(&intr_ctx->igate); +} + static struct vfio_pci_intr_ops vfio_pci_intr_ops = { .set_intx_mask = vfio_pci_set_intx_mask, .set_intx_unmask = vfio_pci_set_intx_unmask, @@ -814,17 +826,15 @@ static struct vfio_pci_intr_ops vfio_pci_intr_ops = { void vfio_pci_init_intr_ctx(struct vfio_pci_core_device *vdev, struct vfio_pci_intr_ctx *intr_ctx) { - intr_ctx->irq_type = VFIO_PCI_NUM_IRQS; + _vfio_pci_init_intr_ctx(intr_ctx); intr_ctx->ops = &vfio_pci_intr_ops; intr_ctx->priv = vdev; - mutex_init(&intr_ctx->igate); - xa_init(&intr_ctx->ctx); } EXPORT_SYMBOL_GPL(vfio_pci_init_intr_ctx); void vfio_pci_release_intr_ctx(struct vfio_pci_intr_ctx *intr_ctx) { - mutex_destroy(&intr_ctx->igate); + _vfio_pci_release_intr_ctx(intr_ctx); } EXPORT_SYMBOL_GPL(vfio_pci_release_intr_ctx); From patchwork Fri Oct 27 17:00:49 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Reinette Chatre X-Patchwork-Id: 13438648 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 9BB76C25B47 for ; Fri, 27 Oct 2023 17:01:58 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1346406AbjJ0RB6 (ORCPT ); Fri, 27 Oct 2023 13:01:58 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:43424 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1346209AbjJ0RBl (ORCPT ); Fri, 27 Oct 2023 13:01:41 -0400 Received: from mgamail.intel.com (mgamail.intel.com [192.198.163.8]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 77D06D45; Fri, 27 Oct 2023 10:01:36 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1698426097; x=1729962097; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=TB0MfEQ6zMbspsfmMyNqxCW0CYXal0biIXEhDTEl/2E=; b=WgdFC9o1ytz2BVQYKAIyoPzCmU4dgwx9DG1tOlA9wcosVysIBK80OVsU UGsAznUUBYcxn9DCTl9TKSSDK8aQ+7OaHdA6lSoeKbYfDIoZXAoJQ1Zdx asZSISlBoad3fQaVd6If8A/vpZ05AyyxzJoF+KAtE+mjkCyG8kxSo4hiy aVSqn3K6nu4s4gB9ELMhoJfsAqXuEdtqknnPYB/uz5++iUj6G9lSGmk1d 3o/rEX/8BvM1vSKzwU+nSaZfpNbsYYtHYfAzDNq37mF2ho+5xhalwZPRN M5LFhbJdmdhY4jf12fhWOFnwq8F1jAdTBLRotDzYfC3LfPi8ZUJUeEfS/ A==; X-IronPort-AV: E=McAfee;i="6600,9927,10876"; a="612019" X-IronPort-AV: E=Sophos;i="6.03,256,1694761200"; d="scan'208";a="612019" Received: from fmsmga005.fm.intel.com ([10.253.24.32]) by fmvoesa102.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 27 Oct 2023 10:01:19 -0700 X-ExtLoop1: 1 X-IronPort-AV: E=McAfee;i="6600,9927,10876"; a="1090988211" X-IronPort-AV: E=Sophos;i="6.03,256,1694761200"; d="scan'208";a="1090988211" Received: 
from rchatre-ws.ostc.intel.com ([10.54.69.144]) by fmsmga005-auth.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 27 Oct 2023 10:01:18 -0700 From: Reinette Chatre To: jgg@nvidia.com, yishaih@nvidia.com, shameerali.kolothum.thodi@huawei.com, kevin.tian@intel.com, alex.williamson@redhat.com Cc: kvm@vger.kernel.org, dave.jiang@intel.com, jing2.liu@intel.com, ashok.raj@intel.com, fenghua.yu@intel.com, tom.zanussi@linux.intel.com, reinette.chatre@intel.com, linux-kernel@vger.kernel.org, patches@lists.linux.dev Subject: [RFC PATCH V3 17/26] vfio/pci: Make vfio_pci_set_irqs_ioctl() available Date: Fri, 27 Oct 2023 10:00:49 -0700 Message-Id: <1b51730bac31a6f491ef44b722f0bd19da4312dc.1698422237.git.reinette.chatre@intel.com> X-Mailer: git-send-email 2.34.1 In-Reply-To: References: MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org vfio_pci_set_irqs_ioctl() is now a generic entrypoint that can be configured to support different interrupt management backend. Export vfio_pci_set_irqs_ioctl() for use by other virtual device drivers. Signed-off-by: Reinette Chatre --- Changes since RFC V2: - Improve changelog. drivers/vfio/pci/vfio_pci_intrs.c | 1 + include/linux/vfio_pci_core.h | 3 +++ 2 files changed, 4 insertions(+) diff --git a/drivers/vfio/pci/vfio_pci_intrs.c b/drivers/vfio/pci/vfio_pci_intrs.c index 14131d5288e3..80040fde6f6b 100644 --- a/drivers/vfio/pci/vfio_pci_intrs.c +++ b/drivers/vfio/pci/vfio_pci_intrs.c @@ -916,3 +916,4 @@ int vfio_pci_set_irqs_ioctl(struct vfio_pci_intr_ctx *intr_ctx, uint32_t flags, mutex_unlock(&intr_ctx->igate); return ret; } +EXPORT_SYMBOL_GPL(vfio_pci_set_irqs_ioctl); diff --git a/include/linux/vfio_pci_core.h b/include/linux/vfio_pci_core.h index e666c19da223..8d2fb51a2dcc 100644 --- a/include/linux/vfio_pci_core.h +++ b/include/linux/vfio_pci_core.h @@ -158,6 +158,9 @@ long vfio_pci_core_ioctl(struct vfio_device *core_vdev, unsigned int cmd, void vfio_pci_init_intr_ctx(struct vfio_pci_core_device *vdev, struct vfio_pci_intr_ctx *intr_ctx); void vfio_pci_release_intr_ctx(struct vfio_pci_intr_ctx *intr_ctx); +int vfio_pci_set_irqs_ioctl(struct vfio_pci_intr_ctx *intr_ctx, uint32_t flags, + unsigned int index, unsigned int start, + unsigned int count, void *data); int vfio_pci_core_ioctl_feature(struct vfio_device *device, u32 flags, void __user *arg, size_t argsz); ssize_t vfio_pci_core_read(struct vfio_device *core_vdev, char __user *buf, From patchwork Fri Oct 27 17:00:50 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Reinette Chatre X-Patchwork-Id: 13438666 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id A4EA9C25B6F for ; Fri, 27 Oct 2023 17:02:10 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1346377AbjJ0RCK (ORCPT ); Fri, 27 Oct 2023 13:02:10 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:43362 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1346225AbjJ0RBn (ORCPT ); Fri, 27 Oct 2023 13:01:43 -0400 Received: from mgamail.intel.com (mgamail.intel.com [192.198.163.8]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id D51CCD4C; Fri, 27 Oct 2023 10:01:36 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; 
t=1698426097; x=1729962097; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=dCYNGaS6u8lNWArc0rXK+9yXFKPwyzJs8u5KXSyF0Rc=; b=B0qa6+aRaZkEUOAng/x41z/x4Ib8d89tIVRrswwYFYSnHFlRExkpZQ14 ydJPDIQvjHRBZLneZPhJxUxHdZltWqBqOnJ/x1jeLeTOLanXPOUyYZCuB uZ1QwcRJ4d3kbTEiwuBqGIsCsLb6jkKuMFd1OR5PWt64f4vnKPrsA+c40 v6YdTMm+eAteZyNh0egUXJd4uNWMkQvMqpzn/HVoiIT4fo0h6QkrzD1Vk PoYD93vZy2MMSCWCLTAD3oqdmpggTgxRdxfjWXL67RhD0R3t+IxAXd+tE bYeYqBi2qOIYIGnxTB8ARo07BgPT/hb/cxVPADLwMMlzBXKGh/5vJFf4C Q==; X-IronPort-AV: E=McAfee;i="6600,9927,10876"; a="612064" X-IronPort-AV: E=Sophos;i="6.03,256,1694761200"; d="scan'208";a="612064" Received: from fmsmga005.fm.intel.com ([10.253.24.32]) by fmvoesa102.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 27 Oct 2023 10:01:20 -0700 X-ExtLoop1: 1 X-IronPort-AV: E=McAfee;i="6600,9927,10876"; a="1090988215" X-IronPort-AV: E=Sophos;i="6.03,256,1694761200"; d="scan'208";a="1090988215" Received: from rchatre-ws.ostc.intel.com ([10.54.69.144]) by fmsmga005-auth.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 27 Oct 2023 10:01:18 -0700 From: Reinette Chatre To: jgg@nvidia.com, yishaih@nvidia.com, shameerali.kolothum.thodi@huawei.com, kevin.tian@intel.com, alex.williamson@redhat.com Cc: kvm@vger.kernel.org, dave.jiang@intel.com, jing2.liu@intel.com, ashok.raj@intel.com, fenghua.yu@intel.com, tom.zanussi@linux.intel.com, reinette.chatre@intel.com, linux-kernel@vger.kernel.org, patches@lists.linux.dev Subject: [RFC PATCH V3 18/26] vfio/pci: Preserve per-interrupt contexts Date: Fri, 27 Oct 2023 10:00:50 -0700 Message-Id: <12630e207092c11a69efe691a9273abcef831c18.1698422237.git.reinette.chatre@intel.com> X-Mailer: git-send-email 2.34.1 In-Reply-To: References: MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org Interrupt management for PCI passthrough devices create a new per-interrupt context every time an interrupt is allocated, freeing it when the interrupt is freed. The per-interrupt context contains the properties of a particular interrupt. Without a property that guides interrupt allocation and free it is acceptable to always create a new per-interrupt context. Maintain per-interrupt context across interrupt allocate and free events in preparation for per-interrupt properties that guide interrupt allocation and free. Examples of such properties are: (a) whether the interrupt is emulated or not, which guides whether the backend should indeed allocate and/or free an interrupt, (b) an instance cookie associated with the interrupt that needs to be provided to interrupt allocation when the interrupt is backed by IMS. This means that existence of per-interrupt context no longer implies a valid trigger, pointers to freed memory should be cleared, and a new per-interrupt context cannot be assumed needing allocation when an interrupt is allocated. 
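Put differently, after this change a successful lookup of the per-interrupt context no longer means the vector is active; callers must also check the trigger. A minimal illustration of the new invariant, assuming the declarations already present in vfio_pci_intrs.c (the helper itself is illustrative and not part of the patch):

/* Illustrative only: a context may outlive its trigger, so check both. */
static bool example_vector_is_active(struct vfio_pci_intr_ctx *intr_ctx,
				     unsigned int vector)
{
	struct vfio_pci_irq_ctx *ctx = vfio_irq_ctx_get(intr_ctx, vector);

	return ctx && ctx->trigger;
}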
Signed-off-by: Reinette Chatre --- Changes since RFC V2: - New patch drivers/vfio/pci/vfio_pci_intrs.c | 41 ++++++++++++++++++++++--------- 1 file changed, 29 insertions(+), 12 deletions(-) diff --git a/drivers/vfio/pci/vfio_pci_intrs.c b/drivers/vfio/pci/vfio_pci_intrs.c index 80040fde6f6b..8d84e7d62594 100644 --- a/drivers/vfio/pci/vfio_pci_intrs.c +++ b/drivers/vfio/pci/vfio_pci_intrs.c @@ -429,7 +429,7 @@ static int vfio_msi_set_vector_signal(struct vfio_pci_intr_ctx *intr_ctx, ctx = vfio_irq_ctx_get(intr_ctx, vector); - if (ctx) { + if (ctx && ctx->trigger) { irq_bypass_unregister_producer(&ctx->producer); irq = pci_irq_vector(pdev, vector); cmd = vfio_pci_memory_lock_and_enable(vdev); @@ -437,8 +437,9 @@ static int vfio_msi_set_vector_signal(struct vfio_pci_intr_ctx *intr_ctx, vfio_pci_memory_unlock_and_restore(vdev, cmd); /* Interrupt stays allocated, will be freed at MSI-X disable. */ kfree(ctx->name); + ctx->name = NULL; eventfd_ctx_put(ctx->trigger); - vfio_irq_ctx_free(intr_ctx, ctx, vector); + ctx->trigger = NULL; } if (fd < 0) @@ -451,16 +452,17 @@ static int vfio_msi_set_vector_signal(struct vfio_pci_intr_ctx *intr_ctx, return irq; } - ctx = vfio_irq_ctx_alloc(intr_ctx, vector); - if (!ctx) - return -ENOMEM; + /* Per-interrupt context remain allocated. */ + if (!ctx) { + ctx = vfio_irq_ctx_alloc(intr_ctx, vector); + if (!ctx) + return -ENOMEM; + } ctx->name = kasprintf(GFP_KERNEL_ACCOUNT, "vfio-msi%s[%d](%s)", msix ? "x" : "", vector, pci_name(pdev)); - if (!ctx->name) { - ret = -ENOMEM; - goto out_free_ctx; - } + if (!ctx->name) + return -ENOMEM; trigger = eventfd_ctx_fdget(fd); if (IS_ERR(trigger)) { @@ -504,8 +506,7 @@ static int vfio_msi_set_vector_signal(struct vfio_pci_intr_ctx *intr_ctx, eventfd_ctx_put(trigger); out_free_name: kfree(ctx->name); -out_free_ctx: - vfio_irq_ctx_free(intr_ctx, ctx, vector); + ctx->name = NULL; return ret; } @@ -704,7 +705,7 @@ static int vfio_pci_set_msi_trigger(struct vfio_pci_intr_ctx *intr_ctx, for (i = start; i < start + count; i++) { ctx = vfio_irq_ctx_get(intr_ctx, i); - if (!ctx) + if (!ctx || !ctx->trigger) continue; if (flags & VFIO_IRQ_SET_DATA_NONE) { eventfd_signal(ctx->trigger, 1); @@ -810,6 +811,22 @@ static void _vfio_pci_init_intr_ctx(struct vfio_pci_intr_ctx *intr_ctx) static void _vfio_pci_release_intr_ctx(struct vfio_pci_intr_ctx *intr_ctx) { + struct vfio_pci_irq_ctx *ctx; + unsigned long i; + + /* + * Per-interrupt context remains allocated after interrupt is + * freed. Per-interrupt context need to be freed separately. 
+ */ + mutex_lock(&intr_ctx->igate); + xa_for_each(&intr_ctx->ctx, i, ctx) { + WARN_ON_ONCE(ctx->trigger); + WARN_ON_ONCE(ctx->name); + xa_erase(&intr_ctx->ctx, i); + kfree(ctx); + } + mutex_unlock(&intr_ctx->igate); + mutex_destroy(&intr_ctx->igate); } From patchwork Fri Oct 27 17:00:51 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Reinette Chatre X-Patchwork-Id: 13438650 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 8713BC25B70 for ; Fri, 27 Oct 2023 17:02:02 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1346366AbjJ0RCC (ORCPT ); Fri, 27 Oct 2023 13:02:02 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:43428 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S232296AbjJ0RBm (ORCPT ); Fri, 27 Oct 2023 13:01:42 -0400 Received: from mgamail.intel.com (mgamail.intel.com [192.198.163.8]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id D4CD2D48; Fri, 27 Oct 2023 10:01:36 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1698426097; x=1729962097; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=jAVKFX+vaV6MZ7MbRQwvQO7Qe37BB/mxkHKB6ZjkpCk=; b=MvTqs5TSbsZ394BMBLoEgBhlqlrW/3EW+C2mUtjddyVDqBlSc1jRxnJG jp8Ajk/ROAcoG13iPoGWfHdNvkkFQHnR8wzVteDqXuNi0++GPnzWUbv6C XqlVqnwdw+D3hUwHaWh4fyTlw/NXz2RJOCqQZl7CVu5+sCOHoxtdSm2I7 bA+RnvQiD3wDQphzB8RqXbE9ZDYzz092utHk8RvA21QhLRAR/lyzVdDWX g5+9miz/2o5NZZuWbi6nANUM7V6+P6rbHZtCzJOkqFYp2l01BJqm2tXKa O7FdxJ1eqebLzIi71eZpSwmuWTmZVamnoziLquU3dLltEfnJ0x5nX7BMH A==; X-IronPort-AV: E=McAfee;i="6600,9927,10876"; a="612078" X-IronPort-AV: E=Sophos;i="6.03,256,1694761200"; d="scan'208";a="612078" Received: from fmsmga005.fm.intel.com ([10.253.24.32]) by fmvoesa102.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 27 Oct 2023 10:01:20 -0700 X-ExtLoop1: 1 X-IronPort-AV: E=McAfee;i="6600,9927,10876"; a="1090988220" X-IronPort-AV: E=Sophos;i="6.03,256,1694761200"; d="scan'208";a="1090988220" Received: from rchatre-ws.ostc.intel.com ([10.54.69.144]) by fmsmga005-auth.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 27 Oct 2023 10:01:19 -0700 From: Reinette Chatre To: jgg@nvidia.com, yishaih@nvidia.com, shameerali.kolothum.thodi@huawei.com, kevin.tian@intel.com, alex.williamson@redhat.com Cc: kvm@vger.kernel.org, dave.jiang@intel.com, jing2.liu@intel.com, ashok.raj@intel.com, fenghua.yu@intel.com, tom.zanussi@linux.intel.com, reinette.chatre@intel.com, linux-kernel@vger.kernel.org, patches@lists.linux.dev Subject: [RFC PATCH V3 19/26] vfio/pci: Store Linux IRQ number in per-interrupt context Date: Fri, 27 Oct 2023 10:00:51 -0700 Message-Id: X-Mailer: git-send-email 2.34.1 In-Reply-To: References: MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org The Linux IRQ number is a property shared among all interrupt backends but not all interrupt management backends have a simple query for it. pci_irq_vector() can be used to obtain the Linux IRQ number of a MSI-X interrupt but there is no such query for IMS interrupts. The Linux IRQ number is needed during interrupt free as well as during register of IRQ bypass producer. 
It is unnecessary to query the Linux IRQ number at each stage, the number can be stored at the time the interrupt is allocated and obtained from its per-interrupt context when needed. Signed-off-by: Reinette Chatre --- Changes since RFC V2: - New patch drivers/vfio/pci/vfio_pci_intrs.c | 10 +++++++--- 1 file changed, 7 insertions(+), 3 deletions(-) diff --git a/drivers/vfio/pci/vfio_pci_intrs.c b/drivers/vfio/pci/vfio_pci_intrs.c index 8d84e7d62594..fd0713dc9f81 100644 --- a/drivers/vfio/pci/vfio_pci_intrs.c +++ b/drivers/vfio/pci/vfio_pci_intrs.c @@ -29,6 +29,7 @@ struct vfio_pci_irq_ctx { char *name; bool masked; struct irq_bypass_producer producer; + int virq; }; static bool irq_is(struct vfio_pci_intr_ctx *intr_ctx, int type) @@ -431,10 +432,11 @@ static int vfio_msi_set_vector_signal(struct vfio_pci_intr_ctx *intr_ctx, if (ctx && ctx->trigger) { irq_bypass_unregister_producer(&ctx->producer); - irq = pci_irq_vector(pdev, vector); + irq = ctx->virq; cmd = vfio_pci_memory_lock_and_enable(vdev); - free_irq(irq, ctx->trigger); + free_irq(ctx->virq, ctx->trigger); vfio_pci_memory_unlock_and_restore(vdev, cmd); + ctx->virq = 0; /* Interrupt stays allocated, will be freed at MSI-X disable. */ kfree(ctx->name); ctx->name = NULL; @@ -488,8 +490,10 @@ static int vfio_msi_set_vector_signal(struct vfio_pci_intr_ctx *intr_ctx, if (ret) goto out_put_eventfd_ctx; + ctx->virq = irq; + ctx->producer.token = trigger; - ctx->producer.irq = irq; + ctx->producer.irq = ctx->virq; ret = irq_bypass_register_producer(&ctx->producer); if (unlikely(ret)) { dev_info(&pdev->dev, From patchwork Fri Oct 27 17:00:52 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Reinette Chatre X-Patchwork-Id: 13438667 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id A7869C25B48 for ; Fri, 27 Oct 2023 17:02:12 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1346456AbjJ0RCM (ORCPT ); Fri, 27 Oct 2023 13:02:12 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:43448 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1346229AbjJ0RBn (ORCPT ); Fri, 27 Oct 2023 13:01:43 -0400 Received: from mgamail.intel.com (mgamail.intel.com [192.198.163.8]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 8C74CE1; Fri, 27 Oct 2023 10:01:37 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1698426098; x=1729962098; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=cOtSDallyPYUFy6e95eMcuO12DmemiUO7bjfEUG72Hs=; b=UOoENk6mSu2BDDxdP2ANEVWmMrqs30KqbzERkRwTwAlfhpYwsg/aCq/e fj7+8MgAX65XvX3W0o+lUD2DcSs2P8NQGDzps298EhXIwuF/nhkQYsLWG 5h7uCqzVT+zQt3YXkK5P3uahl/8e+t0iNx2hYxumFUe2JPYT5voaqo2cN F1CyzRw1x4S/eTWjXMG3wNXCyqJC4oxut0V2ztBJXMcKY0dkTw2X+7ouD Ceb6hu3ijB+e+s2UhtOjgknWivyD4qrxaDIT+CDbWtufnFhR1/Qo7dqlh uzL3HhszKy2IjqFJty4U0COTIekhgVN2T193SH4KxbQN7MEf4HQty50UC w==; X-IronPort-AV: E=McAfee;i="6600,9927,10876"; a="612093" X-IronPort-AV: E=Sophos;i="6.03,256,1694761200"; d="scan'208";a="612093" Received: from fmsmga005.fm.intel.com ([10.253.24.32]) by fmvoesa102.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 27 Oct 2023 10:01:20 -0700 X-ExtLoop1: 1 X-IronPort-AV: E=McAfee;i="6600,9927,10876"; 
a="1090988223" X-IronPort-AV: E=Sophos;i="6.03,256,1694761200"; d="scan'208";a="1090988223" Received: from rchatre-ws.ostc.intel.com ([10.54.69.144]) by fmsmga005-auth.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 27 Oct 2023 10:01:19 -0700 From: Reinette Chatre To: jgg@nvidia.com, yishaih@nvidia.com, shameerali.kolothum.thodi@huawei.com, kevin.tian@intel.com, alex.williamson@redhat.com Cc: kvm@vger.kernel.org, dave.jiang@intel.com, jing2.liu@intel.com, ashok.raj@intel.com, fenghua.yu@intel.com, tom.zanussi@linux.intel.com, reinette.chatre@intel.com, linux-kernel@vger.kernel.org, patches@lists.linux.dev Subject: [RFC PATCH V3 20/26] vfio/pci: Separate frontend and backend code during interrupt enable/disable Date: Fri, 27 Oct 2023 10:00:52 -0700 Message-Id: <12f5f47089d4c9e988ab48e66266e0c6a420f842.1698422237.git.reinette.chatre@intel.com> X-Mailer: git-send-email 2.34.1 In-Reply-To: References: MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org vfio_msi_set_vector_signal() contains a mix of generic and backend specific code. Separate the backend specific code into functions that can be replaced by backend-specific callbacks. The dev_info() used in error message is replaced by a pr_info() that prints the device name generated by the backend specific code intended to be used during request_irq(). Signed-off-by: Reinette Chatre --- Changes since RFC V2: - New patch drivers/vfio/pci/vfio_pci_intrs.c | 110 +++++++++++++++++++----------- 1 file changed, 70 insertions(+), 40 deletions(-) diff --git a/drivers/vfio/pci/vfio_pci_intrs.c b/drivers/vfio/pci/vfio_pci_intrs.c index fd0713dc9f81..c1f65b8adfe2 100644 --- a/drivers/vfio/pci/vfio_pci_intrs.c +++ b/drivers/vfio/pci/vfio_pci_intrs.c @@ -416,28 +416,81 @@ static int vfio_msi_alloc_irq(struct vfio_pci_core_device *vdev, return map.index < 0 ? map.index : map.virq; } -static int vfio_msi_set_vector_signal(struct vfio_pci_intr_ctx *intr_ctx, - unsigned int vector, int fd, +static void vfio_msi_free_interrupt(struct vfio_pci_intr_ctx *intr_ctx, + struct vfio_pci_irq_ctx *ctx, + unsigned int vector) +{ + struct vfio_pci_core_device *vdev = intr_ctx->priv; + u16 cmd; + + cmd = vfio_pci_memory_lock_and_enable(vdev); + free_irq(ctx->virq, ctx->trigger); + vfio_pci_memory_unlock_and_restore(vdev, cmd); + ctx->virq = 0; + /* Interrupt stays allocated, will be freed at MSI-X disable. */ +} + +static int vfio_msi_request_interrupt(struct vfio_pci_intr_ctx *intr_ctx, + struct vfio_pci_irq_ctx *ctx, + unsigned int vector, unsigned int index) +{ + bool msix = (index == VFIO_PCI_MSIX_IRQ_INDEX) ? true : false; + struct vfio_pci_core_device *vdev = intr_ctx->priv; + int irq, ret; + u16 cmd; + + /* Interrupt stays allocated, will be freed at MSI-X disable. */ + irq = vfio_msi_alloc_irq(vdev, vector, msix); + if (irq < 0) + return irq; + + /* + * If the vector was previously allocated, refresh the on-device + * message data before enabling in case it had been cleared or + * corrupted (e.g. due to backdoor resets) since writing. + */ + cmd = vfio_pci_memory_lock_and_enable(vdev); + if (msix) { + struct msi_msg msg; + + get_cached_msi_msg(irq, &msg); + pci_write_msi_msg(irq, &msg); + } + + ret = request_irq(irq, vfio_msihandler, 0, ctx->name, ctx->trigger); + vfio_pci_memory_unlock_and_restore(vdev, cmd); + + ctx->virq = irq; + + return ret; +} + +static char *vfio_msi_device_name(struct vfio_pci_intr_ctx *intr_ctx, + unsigned int vector, + unsigned int index) { bool msix = (index == VFIO_PCI_MSIX_IRQ_INDEX) ? 
true : false; struct vfio_pci_core_device *vdev = intr_ctx->priv; struct pci_dev *pdev = vdev->pdev; + + return kasprintf(GFP_KERNEL_ACCOUNT, "vfio-msi%s[%d](%s)", + msix ? "x" : "", vector, pci_name(pdev)); +} + +static int vfio_msi_set_vector_signal(struct vfio_pci_intr_ctx *intr_ctx, + unsigned int vector, int fd, + unsigned int index) +{ struct vfio_pci_irq_ctx *ctx; struct eventfd_ctx *trigger; - int irq = -EINVAL, ret; - u16 cmd; + int ret; ctx = vfio_irq_ctx_get(intr_ctx, vector); if (ctx && ctx->trigger) { irq_bypass_unregister_producer(&ctx->producer); - irq = ctx->virq; - cmd = vfio_pci_memory_lock_and_enable(vdev); - free_irq(ctx->virq, ctx->trigger); - vfio_pci_memory_unlock_and_restore(vdev, cmd); - ctx->virq = 0; - /* Interrupt stays allocated, will be freed at MSI-X disable. */ + vfio_msi_free_interrupt(intr_ctx, ctx, vector); kfree(ctx->name); ctx->name = NULL; eventfd_ctx_put(ctx->trigger); @@ -447,13 +500,6 @@ static int vfio_msi_set_vector_signal(struct vfio_pci_intr_ctx *intr_ctx, if (fd < 0) return 0; - if (irq == -EINVAL) { - /* Interrupt stays allocated, will be freed at MSI-X disable. */ - irq = vfio_msi_alloc_irq(vdev, vector, msix); - if (irq < 0) - return irq; - } - /* Per-interrupt context remain allocated. */ if (!ctx) { ctx = vfio_irq_ctx_alloc(intr_ctx, vector); @@ -461,8 +507,7 @@ static int vfio_msi_set_vector_signal(struct vfio_pci_intr_ctx *intr_ctx, return -ENOMEM; } - ctx->name = kasprintf(GFP_KERNEL_ACCOUNT, "vfio-msi%s[%d](%s)", - msix ? "x" : "", vector, pci_name(pdev)); + ctx->name = vfio_msi_device_name(intr_ctx, vector, index); if (!ctx->name) return -ENOMEM; @@ -472,42 +517,27 @@ static int vfio_msi_set_vector_signal(struct vfio_pci_intr_ctx *intr_ctx, goto out_free_name; } - /* - * If the vector was previously allocated, refresh the on-device - * message data before enabling in case it had been cleared or - * corrupted (e.g. due to backdoor resets) since writing. 
- */ - cmd = vfio_pci_memory_lock_and_enable(vdev); - if (msix) { - struct msi_msg msg; - - get_cached_msi_msg(irq, &msg); - pci_write_msi_msg(irq, &msg); - } + ctx->trigger = trigger; - ret = request_irq(irq, vfio_msihandler, 0, ctx->name, trigger); - vfio_pci_memory_unlock_and_restore(vdev, cmd); + ret = vfio_msi_request_interrupt(intr_ctx, ctx, vector, index); if (ret) goto out_put_eventfd_ctx; - ctx->virq = irq; - ctx->producer.token = trigger; ctx->producer.irq = ctx->virq; ret = irq_bypass_register_producer(&ctx->producer); if (unlikely(ret)) { - dev_info(&pdev->dev, - "irq bypass producer (token %p) registration fails: %d\n", - ctx->producer.token, ret); + pr_info("%s irq bypass producer (token %p) registration fails: %d\n", + ctx->name, ctx->producer.token, ret); ctx->producer.token = NULL; } - ctx->trigger = trigger; return 0; out_put_eventfd_ctx: - eventfd_ctx_put(trigger); + eventfd_ctx_put(ctx->trigger); + ctx->trigger = NULL; out_free_name: kfree(ctx->name); ctx->name = NULL; From patchwork Fri Oct 27 17:00:53 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Reinette Chatre X-Patchwork-Id: 13438669 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id A38A2C25B6F for ; Fri, 27 Oct 2023 17:02:15 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1346470AbjJ0RCP (ORCPT ); Fri, 27 Oct 2023 13:02:15 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:43326 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1346274AbjJ0RBn (ORCPT ); Fri, 27 Oct 2023 13:01:43 -0400 Received: from mgamail.intel.com (mgamail.intel.com [192.198.163.8]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 5E986129; Fri, 27 Oct 2023 10:01:38 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1698426099; x=1729962099; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=q5E2bhYf2dMQUdkDHlF7uY8ToTzJMpiMxWpjZP6f9Y8=; b=b1vR/AhPxpJVSQ3hu6CVYE2YjPv8SCJ7M+bya/slVET+dTGcbD5y1T8e egS99gGDlQuGa6NqBwS3CDznDIZYT3P+19CIcLWDF1Ho4i07wj7ZmF25R pH3DvE4Z0NKhvPTZOB8NrCrliNhgvtoOsdEatFPY+POPhSOBbrEDw0NPb 2qY3IzIMczBDpIHkZY5NWeo9mRT4sUJXHPHorcge8X9PDQ+CeRpDMrMMq phRZ5qT9blhnUKO8QOH9B5j3Q3ySmt6UKdLHAvC5DoAFouGEe9m1nrjnk ZAm4+kmf3zczJoYGBymk7xF1KwZiJpDpImlC8f4t0g+bI0thD6kykpDis w==; X-IronPort-AV: E=McAfee;i="6600,9927,10876"; a="612103" X-IronPort-AV: E=Sophos;i="6.03,256,1694761200"; d="scan'208";a="612103" Received: from fmsmga005.fm.intel.com ([10.253.24.32]) by fmvoesa102.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 27 Oct 2023 10:01:20 -0700 X-ExtLoop1: 1 X-IronPort-AV: E=McAfee;i="6600,9927,10876"; a="1090988226" X-IronPort-AV: E=Sophos;i="6.03,256,1694761200"; d="scan'208";a="1090988226" Received: from rchatre-ws.ostc.intel.com ([10.54.69.144]) by fmsmga005-auth.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 27 Oct 2023 10:01:19 -0700 From: Reinette Chatre To: jgg@nvidia.com, yishaih@nvidia.com, shameerali.kolothum.thodi@huawei.com, kevin.tian@intel.com, alex.williamson@redhat.com Cc: kvm@vger.kernel.org, dave.jiang@intel.com, jing2.liu@intel.com, ashok.raj@intel.com, fenghua.yu@intel.com, tom.zanussi@linux.intel.com, reinette.chatre@intel.com, 
linux-kernel@vger.kernel.org, patches@lists.linux.dev Subject: [RFC PATCH V3 21/26] vfio/pci: Replace backend specific calls with callbacks Date: Fri, 27 Oct 2023 10:00:53 -0700 Message-Id: X-Mailer: git-send-email 2.34.1 In-Reply-To: References: MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org The backend specific code needed to manage the interrupts are isolated into separate functions. With the backend specific code isolated into functions, these functions can be turned into callbacks for other backends to use. Signed-off-by: Reinette Chatre --- Changes since RFC V2: - New patch drivers/vfio/pci/vfio_pci_intrs.c | 17 +++++++++++------ include/linux/vfio_pci_core.h | 15 +++++++++++++++ 2 files changed, 26 insertions(+), 6 deletions(-) diff --git a/drivers/vfio/pci/vfio_pci_intrs.c b/drivers/vfio/pci/vfio_pci_intrs.c index c1f65b8adfe2..1e6376b048de 100644 --- a/drivers/vfio/pci/vfio_pci_intrs.c +++ b/drivers/vfio/pci/vfio_pci_intrs.c @@ -490,7 +490,7 @@ static int vfio_msi_set_vector_signal(struct vfio_pci_intr_ctx *intr_ctx, if (ctx && ctx->trigger) { irq_bypass_unregister_producer(&ctx->producer); - vfio_msi_free_interrupt(intr_ctx, ctx, vector); + intr_ctx->ops->msi_free_interrupt(intr_ctx, ctx, vector); kfree(ctx->name); ctx->name = NULL; eventfd_ctx_put(ctx->trigger); @@ -507,7 +507,7 @@ static int vfio_msi_set_vector_signal(struct vfio_pci_intr_ctx *intr_ctx, return -ENOMEM; } - ctx->name = vfio_msi_device_name(intr_ctx, vector, index); + ctx->name = intr_ctx->ops->msi_device_name(intr_ctx, vector, index); if (!ctx->name) return -ENOMEM; @@ -519,7 +519,7 @@ static int vfio_msi_set_vector_signal(struct vfio_pci_intr_ctx *intr_ctx, ctx->trigger = trigger; - ret = vfio_msi_request_interrupt(intr_ctx, ctx, vector, index); + ret = intr_ctx->ops->msi_request_interrupt(intr_ctx, ctx, vector, index); if (ret) goto out_put_eventfd_ctx; @@ -708,7 +708,7 @@ static int vfio_pci_set_msi_trigger(struct vfio_pci_intr_ctx *intr_ctx, unsigned int i; if (irq_is(intr_ctx, index) && !count && (flags & VFIO_IRQ_SET_DATA_NONE)) { - vfio_msi_disable(intr_ctx, index); + intr_ctx->ops->msi_disable(intr_ctx, index); return 0; } @@ -723,13 +723,13 @@ static int vfio_pci_set_msi_trigger(struct vfio_pci_intr_ctx *intr_ctx, return vfio_msi_set_block(intr_ctx, start, count, fds, index); - ret = vfio_msi_enable(intr_ctx, start + count, index); + ret = intr_ctx->ops->msi_enable(intr_ctx, start + count, index); if (ret) return ret; ret = vfio_msi_set_block(intr_ctx, start, count, fds, index); if (ret) - vfio_msi_disable(intr_ctx, index); + intr_ctx->ops->msi_disable(intr_ctx, index); return ret; } @@ -872,6 +872,11 @@ static struct vfio_pci_intr_ops vfio_pci_intr_ops = { .set_msix_trigger = vfio_pci_set_msi_trigger, .set_err_trigger = vfio_pci_set_err_trigger, .set_req_trigger = vfio_pci_set_req_trigger, + .msi_enable = vfio_msi_enable, + .msi_disable = vfio_msi_disable, + .msi_request_interrupt = vfio_msi_request_interrupt, + .msi_free_interrupt = vfio_msi_free_interrupt, + .msi_device_name = vfio_msi_device_name, }; void vfio_pci_init_intr_ctx(struct vfio_pci_core_device *vdev, diff --git a/include/linux/vfio_pci_core.h b/include/linux/vfio_pci_core.h index 8d2fb51a2dcc..f0951084a26f 100644 --- a/include/linux/vfio_pci_core.h +++ b/include/linux/vfio_pci_core.h @@ -69,6 +69,8 @@ struct vfio_pci_intr_ctx { int irq_type; }; +struct vfio_pci_irq_ctx; + struct vfio_pci_intr_ops { int (*set_intx_mask)(struct vfio_pci_intr_ctx *intr_ctx, unsigned int index, unsigned int start, @@ -91,6 
+93,19 @@ struct vfio_pci_intr_ops { int (*set_req_trigger)(struct vfio_pci_intr_ctx *intr_ctx, unsigned int index, unsigned int start, unsigned int count, uint32_t flags, void *data); + int (*msi_enable)(struct vfio_pci_intr_ctx *intr_ctx, int nvec, + unsigned int index); + void (*msi_disable)(struct vfio_pci_intr_ctx *intr_ctx, + unsigned int index); + int (*msi_request_interrupt)(struct vfio_pci_intr_ctx *intr_ctx, + struct vfio_pci_irq_ctx *ctx, + unsigned int vector, + unsigned int index); + void (*msi_free_interrupt)(struct vfio_pci_intr_ctx *intr_ctx, + struct vfio_pci_irq_ctx *ctx, + unsigned int vector); + char *(*msi_device_name)(struct vfio_pci_intr_ctx *intr_ctx, + unsigned int vector, unsigned int index); }; struct vfio_pci_core_device { From patchwork Fri Oct 27 17:00:54 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Reinette Chatre X-Patchwork-Id: 13438668 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id B0D17C25B70 for ; Fri, 27 Oct 2023 17:02:14 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1346465AbjJ0RCP (ORCPT ); Fri, 27 Oct 2023 13:02:15 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:43484 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1346289AbjJ0RBn (ORCPT ); Fri, 27 Oct 2023 13:01:43 -0400 Received: from mgamail.intel.com (mgamail.intel.com [192.198.163.8]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 1693ED5A; Fri, 27 Oct 2023 10:01:38 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1698426099; x=1729962099; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=aMCoAudS/OSl3PQsYt3+4hHw4XTo4wOWeFRNgKX6Zxk=; b=GRypearYxPV+atYqhqCh8AbR2GbLN2ZFBpb9WoRHjWKufGDISmkFTrSU q9ZNYHIiL+YEkaxJSul3tKz4E8lQ9l76NZwMt8F5PiOY83srOmZl/mMP3 U7ylNCWeZusc02UEJi+YcKZLsAI2xB4UQmg9ddZLq4xwXIE1axeEAF2Sn J+8shuFV2RIvRtof5Lpe/41m4mV15+UKsu5TSM1KyXc5T6wOOyOKjvioC WF70MZPZXeoHEHZfNGqOQfD+u3Pldp8sbbTLw6hHh+QSoUPCKr5EjFKLv Pn87dmrdpGI/eIkFdYKKheZhfNqtUPGlsfyYS5jE7emfVEaYwMh7e9UWu g==; X-IronPort-AV: E=McAfee;i="6600,9927,10876"; a="612113" X-IronPort-AV: E=Sophos;i="6.03,256,1694761200"; d="scan'208";a="612113" Received: from fmsmga005.fm.intel.com ([10.253.24.32]) by fmvoesa102.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 27 Oct 2023 10:01:21 -0700 X-ExtLoop1: 1 X-IronPort-AV: E=McAfee;i="6600,9927,10876"; a="1090988230" X-IronPort-AV: E=Sophos;i="6.03,256,1694761200"; d="scan'208";a="1090988230" Received: from rchatre-ws.ostc.intel.com ([10.54.69.144]) by fmsmga005-auth.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 27 Oct 2023 10:01:20 -0700 From: Reinette Chatre To: jgg@nvidia.com, yishaih@nvidia.com, shameerali.kolothum.thodi@huawei.com, kevin.tian@intel.com, alex.williamson@redhat.com Cc: kvm@vger.kernel.org, dave.jiang@intel.com, jing2.liu@intel.com, ashok.raj@intel.com, fenghua.yu@intel.com, tom.zanussi@linux.intel.com, reinette.chatre@intel.com, linux-kernel@vger.kernel.org, patches@lists.linux.dev Subject: [RFC PATCH V3 22/26] vfio/pci: Introduce backend specific context initializer Date: Fri, 27 Oct 2023 10:00:54 -0700 Message-Id: 
<6b3f44ab66c4408b0b7d277b40ed6edac9e83708.1698422237.git.reinette.chatre@intel.com> X-Mailer: git-send-email 2.34.1 In-Reply-To: References: MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org The per-interrupt context may contain backend specific data. Call a backend provided initializer on per-interrupt context creation. Signed-off-by: Reinette Chatre --- Changes since RFC V2: - New patch drivers/vfio/pci/vfio_pci_intrs.c | 8 ++++++++ include/linux/vfio_pci_core.h | 2 ++ 2 files changed, 10 insertions(+) diff --git a/drivers/vfio/pci/vfio_pci_intrs.c b/drivers/vfio/pci/vfio_pci_intrs.c index 1e6376b048de..8c86f2d6229f 100644 --- a/drivers/vfio/pci/vfio_pci_intrs.c +++ b/drivers/vfio/pci/vfio_pci_intrs.c @@ -73,6 +73,14 @@ vfio_irq_ctx_alloc(struct vfio_pci_intr_ctx *intr_ctx, unsigned long index) if (!ctx) return NULL; + if (intr_ctx->ops->init_irq_ctx) { + ret = intr_ctx->ops->init_irq_ctx(intr_ctx, ctx); + if (ret < 0) { + kfree(ctx); + return NULL; + } + } + ret = xa_insert(&intr_ctx->ctx, index, ctx, GFP_KERNEL_ACCOUNT); if (ret) { kfree(ctx); diff --git a/include/linux/vfio_pci_core.h b/include/linux/vfio_pci_core.h index f0951084a26f..d5140a732741 100644 --- a/include/linux/vfio_pci_core.h +++ b/include/linux/vfio_pci_core.h @@ -106,6 +106,8 @@ struct vfio_pci_intr_ops { unsigned int vector); char *(*msi_device_name)(struct vfio_pci_intr_ctx *intr_ctx, unsigned int vector, unsigned int index); + int (*init_irq_ctx)(struct vfio_pci_intr_ctx *intr_ctx, + struct vfio_pci_irq_ctx *ctx); }; struct vfio_pci_core_device { From patchwork Fri Oct 27 17:00:55 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Reinette Chatre X-Patchwork-Id: 13438673 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 38F2FC25B70 for ; Fri, 27 Oct 2023 17:02:25 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1346508AbjJ0RCY (ORCPT ); Fri, 27 Oct 2023 13:02:24 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:43402 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1346293AbjJ0RBo (ORCPT ); Fri, 27 Oct 2023 13:01:44 -0400 Received: from mgamail.intel.com (mgamail.intel.com [192.198.163.8]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 4844CD5F; Fri, 27 Oct 2023 10:01:39 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1698426100; x=1729962100; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=81vCVvogpN6l9KpHrXDO0EMLjJAz80Wg8hp5eZraD+Q=; b=egfashB57ggsR8/+iCn8hzx4l7qHJcjTgGj1JXlOQ6vGE2Ke2By0RHNG aWvXec5Ah9ymX1l/MwvWsTN9wVqGY71relSlBQe0k7yD8cgS0Xxz5VhHo BFZBPwkpxMTSC3hrHdZ7SBMMtxk8iZ49Kdb53CO0sG7DqBB8jZeRmpgzi Cu6JDSBM0euckBRvLoRBk4LSgataurwpZgERmAWboOl3IO1ahCLlWN42E frhITTVtCv5evKCLTfQRKNiHHpmXVzIv5ouvYpphbM9u9ZDTl4WNrtw5i yDvNbnY8FeeOi2NDeU22/w0DnINyilq9NhSYxagTt3q4R7kova4VCZhG3 w==; X-IronPort-AV: E=McAfee;i="6600,9927,10876"; a="612129" X-IronPort-AV: E=Sophos;i="6.03,256,1694761200"; d="scan'208";a="612129" Received: from fmsmga005.fm.intel.com ([10.253.24.32]) by fmvoesa102.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 27 Oct 2023 10:01:21 -0700 X-ExtLoop1: 1 X-IronPort-AV: E=McAfee;i="6600,9927,10876"; 
a="1090988233" X-IronPort-AV: E=Sophos;i="6.03,256,1694761200"; d="scan'208";a="1090988233" Received: from rchatre-ws.ostc.intel.com ([10.54.69.144]) by fmsmga005-auth.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 27 Oct 2023 10:01:20 -0700 From: Reinette Chatre To: jgg@nvidia.com, yishaih@nvidia.com, shameerali.kolothum.thodi@huawei.com, kevin.tian@intel.com, alex.williamson@redhat.com Cc: kvm@vger.kernel.org, dave.jiang@intel.com, jing2.liu@intel.com, ashok.raj@intel.com, fenghua.yu@intel.com, tom.zanussi@linux.intel.com, reinette.chatre@intel.com, linux-kernel@vger.kernel.org, patches@lists.linux.dev Subject: [RFC PATCH V3 23/26] vfio/pci: Support emulated interrupts Date: Fri, 27 Oct 2023 10:00:55 -0700 Message-Id: <5c1e815b67aa51dfa229027147e7c2e5a7676eea.1698422237.git.reinette.chatre@intel.com> X-Mailer: git-send-email 2.34.1 In-Reply-To: References: MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org Access from a guest to a virtual device may be either 'direct-path', where the guest interacts directly with the underlying hardware, or 'intercepted path' where the virtual device emulates operations. Support emulated interrupts that can be used to handle 'intercepted path' operations. For example, a virtual device may use 'intercepted path' for configuration. Doing so, configuration requests intercepted by the virtual device driver are handled within the virtual device driver with completion signaled to the guest without interacting with the underlying hardware. Add vfio_pci_set_emulated() and vfio_pci_send_signal() to the VFIO PCI API. vfio_pci_set_emulated() configures a range of interrupts to be emulated. Any range of interrupts can be configured as emulated as long as no interrupt has previously been allocated at that vector. The virtual device driver uses vfio_pci_send_signal() to trigger interrupts in the guest. Originally-by: Dave Jiang Signed-off-by: Reinette Chatre --- Changes since RFC V2: - Remove the backend "supports_emulated" flag. All backends now support emulated interrupts. - Move emulated interrupt enabling from IMS backend to frontend. 
drivers/vfio/pci/vfio_pci_intrs.c | 87 ++++++++++++++++++++++++++++++- include/linux/vfio_pci_core.h | 3 ++ 2 files changed, 88 insertions(+), 2 deletions(-) diff --git a/drivers/vfio/pci/vfio_pci_intrs.c b/drivers/vfio/pci/vfio_pci_intrs.c index 8c86f2d6229f..6e34b8d8c216 100644 --- a/drivers/vfio/pci/vfio_pci_intrs.c +++ b/drivers/vfio/pci/vfio_pci_intrs.c @@ -23,6 +23,7 @@ #include "vfio_pci_priv.h" struct vfio_pci_irq_ctx { + bool emulated:1; struct eventfd_ctx *trigger; struct virqfd *unmask; struct virqfd *mask; @@ -497,8 +498,10 @@ static int vfio_msi_set_vector_signal(struct vfio_pci_intr_ctx *intr_ctx, ctx = vfio_irq_ctx_get(intr_ctx, vector); if (ctx && ctx->trigger) { - irq_bypass_unregister_producer(&ctx->producer); - intr_ctx->ops->msi_free_interrupt(intr_ctx, ctx, vector); + if (!ctx->emulated) { + irq_bypass_unregister_producer(&ctx->producer); + intr_ctx->ops->msi_free_interrupt(intr_ctx, ctx, vector); + } kfree(ctx->name); ctx->name = NULL; eventfd_ctx_put(ctx->trigger); @@ -527,6 +530,9 @@ static int vfio_msi_set_vector_signal(struct vfio_pci_intr_ctx *intr_ctx, ctx->trigger = trigger; + if (ctx->emulated) + return 0; + ret = intr_ctx->ops->msi_request_interrupt(intr_ctx, ctx, vector, index); if (ret) goto out_put_eventfd_ctx; @@ -902,6 +908,83 @@ void vfio_pci_release_intr_ctx(struct vfio_pci_intr_ctx *intr_ctx) } EXPORT_SYMBOL_GPL(vfio_pci_release_intr_ctx); +/* + * vfio_pci_send_signal() - Send signal to the eventfd. + * @intr_ctx: Interrupt context. + * @vector: Vector for which interrupt will be signaled. + * + * Trigger signal to guest for emulated interrupts. + */ +void vfio_pci_send_signal(struct vfio_pci_intr_ctx *intr_ctx, unsigned int vector) +{ + struct vfio_pci_irq_ctx *ctx; + + mutex_lock(&intr_ctx->igate); + + ctx = vfio_irq_ctx_get(intr_ctx, vector); + + if (WARN_ON_ONCE(!ctx || !ctx->emulated || !ctx->trigger)) + goto out_unlock; + + eventfd_signal(ctx->trigger, 1); + +out_unlock: + mutex_unlock(&intr_ctx->igate); +} +EXPORT_SYMBOL_GPL(vfio_pci_send_signal); + +/* + * vfio_pci_set_emulated() - Set range of interrupts that will be emulated. + * @intr_ctx: Interrupt context. + * @start: First emulated interrupt vector. + * @count: Number of emulated interrupts starting from @start. + * + * Emulated interrupts will not be backed by hardware interrupts but + * instead triggered by virtual device driver. + * + * Return: error code on failure (-EBUSY if the vector is not available, + * -ENOMEM on allocation failure), 0 on success. No partial success, on + * success entire range was set as emulated, on failure no interrupt in + * range was set as emulated. 
+ */ +int vfio_pci_set_emulated(struct vfio_pci_intr_ctx *intr_ctx, + unsigned int start, unsigned int count) +{ + struct vfio_pci_irq_ctx *ctx; + unsigned long i, j; + int ret = -EINVAL; + + mutex_lock(&intr_ctx->igate); + + for (i = start; i < start + count; i++) { + ctx = kzalloc(sizeof(*ctx), GFP_KERNEL_ACCOUNT); + if (!ctx) { + ret = -ENOMEM; + goto out_err; + } + ctx->emulated = true; + ret = xa_insert(&intr_ctx->ctx, i, ctx, GFP_KERNEL_ACCOUNT); + if (ret) { + kfree(ctx); + goto out_err; + } + } + + mutex_unlock(&intr_ctx->igate); + return 0; + +out_err: + for (j = start; j < i; j++) { + ctx = vfio_irq_ctx_get(intr_ctx, j); + vfio_irq_ctx_free(intr_ctx, ctx, j); + } + + mutex_unlock(&intr_ctx->igate); + + return ret; +} +EXPORT_SYMBOL_GPL(vfio_pci_set_emulated); + int vfio_pci_set_irqs_ioctl(struct vfio_pci_intr_ctx *intr_ctx, uint32_t flags, unsigned int index, unsigned int start, unsigned int count, void *data) diff --git a/include/linux/vfio_pci_core.h b/include/linux/vfio_pci_core.h index d5140a732741..4fe0df25162f 100644 --- a/include/linux/vfio_pci_core.h +++ b/include/linux/vfio_pci_core.h @@ -178,6 +178,9 @@ void vfio_pci_release_intr_ctx(struct vfio_pci_intr_ctx *intr_ctx); int vfio_pci_set_irqs_ioctl(struct vfio_pci_intr_ctx *intr_ctx, uint32_t flags, unsigned int index, unsigned int start, unsigned int count, void *data); +void vfio_pci_send_signal(struct vfio_pci_intr_ctx *intr_ctx, unsigned int vector); +int vfio_pci_set_emulated(struct vfio_pci_intr_ctx *intr_ctx, + unsigned int start, unsigned int count); int vfio_pci_core_ioctl_feature(struct vfio_device *device, u32 flags, void __user *arg, size_t argsz); ssize_t vfio_pci_core_read(struct vfio_device *core_vdev, char __user *buf, From patchwork Fri Oct 27 17:00:56 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Reinette Chatre X-Patchwork-Id: 13438672 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 9DE7FC25B6F for ; Fri, 27 Oct 2023 17:02:23 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1346493AbjJ0RCX (ORCPT ); Fri, 27 Oct 2023 13:02:23 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:43408 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1346312AbjJ0RBo (ORCPT ); Fri, 27 Oct 2023 13:01:44 -0400 Received: from mgamail.intel.com (mgamail.intel.com [192.198.163.8]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 240E61AA; Fri, 27 Oct 2023 10:01:40 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1698426100; x=1729962100; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=pWW9/chnawglktdCwIKFGbmE0Wc5Bu/x/Utfmgc3Jg4=; b=WOTsayEboxcCB+Q20ZwsRVUqPaHDim67Jsx+9+o0ueXQXSmLaoA1Ytjt EYF7rY+d/rLFBqJHZNKRVefigtUwct+biY7x7/Ta+1YjMQE0cOJHqnj8K FqaLeFofYnjtzgJ8KsVN7vLktCWPXV3JDanlxvZsLu3v5qrdQrbxDlJ2i MvP69p0uEgDGbAL9HZOdQ59+RpNAXQJ8fhlawOzaiviSSb0cdouWlTGCh shazwQ4K8PBmmNAOSsVQoUNg7TH5Uv1qUihjMsFbN96HkmJKN2n4SVZLF rjMoxZ2MKihYWuOVpRDa5ZdoP63+jaffXZrv1wgGSqrAUSkFjxpPwNLG9 g==; X-IronPort-AV: E=McAfee;i="6600,9927,10876"; a="612139" X-IronPort-AV: E=Sophos;i="6.03,256,1694761200"; d="scan'208";a="612139" Received: from fmsmga005.fm.intel.com ([10.253.24.32]) by 
fmvoesa102.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 27 Oct 2023 10:01:22 -0700 X-ExtLoop1: 1 X-IronPort-AV: E=McAfee;i="6600,9927,10876"; a="1090988238" X-IronPort-AV: E=Sophos;i="6.03,256,1694761200"; d="scan'208";a="1090988238" Received: from rchatre-ws.ostc.intel.com ([10.54.69.144]) by fmsmga005-auth.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 27 Oct 2023 10:01:20 -0700 From: Reinette Chatre To: jgg@nvidia.com, yishaih@nvidia.com, shameerali.kolothum.thodi@huawei.com, kevin.tian@intel.com, alex.williamson@redhat.com Cc: kvm@vger.kernel.org, dave.jiang@intel.com, jing2.liu@intel.com, ashok.raj@intel.com, fenghua.yu@intel.com, tom.zanussi@linux.intel.com, reinette.chatre@intel.com, linux-kernel@vger.kernel.org, patches@lists.linux.dev Subject: [RFC PATCH V3 24/26] vfio/pci: Add core IMS support Date: Fri, 27 Oct 2023 10:00:56 -0700 Message-Id: X-Mailer: git-send-email 2.34.1 In-Reply-To: References: MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org Add a new interrupt management backend enabling a guest MSI-X interrupt to be backed by an IMS interrupt on the host. An IMS interrupt is allocated via pci_ims_alloc_irq() and requires an implementation specific cookie that is opaque to the IMS backend. This can be a PASID, queue ID, pointer etc. During initialization the IMS backend learns which PCI device to operate on (and thus which interrupt domain to allocate from) and what the default cookie should be for any new interrupt allocation. A virtual device driver starts by initializing the backend using new vfio_pci_ims_init_intr_ctx(), cleanup using new vfio_pci_ims_release_intr_ctx(). Once initialized the virtual device driver can call vfio_pci_set_irqs_ioctl() to handle the VFIO_DEVICE_SET_IRQS ioctl() after it has validated the parameters to be appropriate for the particular device. To support the IMS backend the core utilities need to be aware which interrupt context it interacts with. New ims_backed_irq enables this and is false for the PCI passthrough backend and true for the IMS backend. Signed-off-by: Reinette Chatre --- Changes since RFC V2: - Improve changelog. - Refactored implementation to use new callbacks for interrupt enable/disable and allocate/free to eliminate code duplication. (Kevin) - Make vfio_pci_ims_intr_ops static. drivers/vfio/pci/vfio_pci_intrs.c | 178 ++++++++++++++++++++++++++++++ include/linux/vfio_pci_core.h | 7 ++ 2 files changed, 185 insertions(+) diff --git a/drivers/vfio/pci/vfio_pci_intrs.c b/drivers/vfio/pci/vfio_pci_intrs.c index 6e34b8d8c216..b318a3f671e8 100644 --- a/drivers/vfio/pci/vfio_pci_intrs.c +++ b/drivers/vfio/pci/vfio_pci_intrs.c @@ -22,6 +22,21 @@ #include "vfio_pci_priv.h" +/* + * Interrupt Message Store (IMS) private interrupt context data + * @vdev: Virtual device. Used for name of device in + * request_irq(). + * @pdev: PCI device owning the IMS domain from where + * interrupts are allocated. + * @default_cookie: Default cookie used for IMS interrupts without unique + * cookie. 
+ */ +struct vfio_pci_ims { + struct vfio_device *vdev; + struct pci_dev *pdev; + union msi_instance_cookie default_cookie; +}; + struct vfio_pci_irq_ctx { bool emulated:1; struct eventfd_ctx *trigger; @@ -31,6 +46,8 @@ struct vfio_pci_irq_ctx { bool masked; struct irq_bypass_producer producer; int virq; + int ims_id; + union msi_instance_cookie icookie; }; static bool irq_is(struct vfio_pci_intr_ctx *intr_ctx, int type) @@ -899,6 +916,7 @@ void vfio_pci_init_intr_ctx(struct vfio_pci_core_device *vdev, _vfio_pci_init_intr_ctx(intr_ctx); intr_ctx->ops = &vfio_pci_intr_ops; intr_ctx->priv = vdev; + intr_ctx->ims_backed_irq = false; } EXPORT_SYMBOL_GPL(vfio_pci_init_intr_ctx); @@ -985,6 +1003,166 @@ int vfio_pci_set_emulated(struct vfio_pci_intr_ctx *intr_ctx, } EXPORT_SYMBOL_GPL(vfio_pci_set_emulated); +/* Guest MSI-X interrupts backed by IMS host interrupts */ + +/* + * Free the IMS interrupt associated with @ctx. + * + * For an IMS interrupt the interrupt is freed from the underlying + * PCI device's IMS domain. + */ +static void vfio_pci_ims_irq_free(struct vfio_pci_intr_ctx *intr_ctx, + struct vfio_pci_irq_ctx *ctx) +{ + struct vfio_pci_ims *ims = intr_ctx->priv; + struct msi_map irq_map = {}; + + irq_map.index = ctx->ims_id; + irq_map.virq = ctx->virq; + pci_ims_free_irq(ims->pdev, irq_map); + ctx->ims_id = -EINVAL; + ctx->virq = 0; +} + +/* + * Allocate a host IMS interrupt for @ctx. + * + * For an IMS interrupt the interrupt is allocated from the underlying + * PCI device's IMS domain. + */ +static int vfio_pci_ims_irq_alloc(struct vfio_pci_intr_ctx *intr_ctx, + struct vfio_pci_irq_ctx *ctx) +{ + struct vfio_pci_ims *ims = intr_ctx->priv; + struct msi_map irq_map = {}; + + irq_map = pci_ims_alloc_irq(ims->pdev, &ctx->icookie, NULL); + if (irq_map.index < 0) + return irq_map.index; + + ctx->ims_id = irq_map.index; + ctx->virq = irq_map.virq; + + return 0; +} + +static void vfio_ims_free_interrupt(struct vfio_pci_intr_ctx *intr_ctx, + struct vfio_pci_irq_ctx *ctx, + unsigned int vector) +{ + free_irq(ctx->virq, ctx->trigger); + vfio_pci_ims_irq_free(intr_ctx, ctx); +} + +static int vfio_ims_request_interrupt(struct vfio_pci_intr_ctx *intr_ctx, + struct vfio_pci_irq_ctx *ctx, + unsigned int vector, + unsigned int index) +{ + int ret; + + ret = vfio_pci_ims_irq_alloc(intr_ctx, ctx); + if (ret < 0) + return ret; + + ret = request_irq(ctx->virq, vfio_msihandler, 0, ctx->name, + ctx->trigger); + if (ret < 0) { + vfio_pci_ims_irq_free(intr_ctx, ctx); + return ret; + } + + return 0; +} + +static char *vfio_ims_device_name(struct vfio_pci_intr_ctx *intr_ctx, + unsigned int vector, + unsigned int index) +{ + struct vfio_pci_ims *ims = intr_ctx->priv; + struct device *dev = &ims->vdev->device; + + return kasprintf(GFP_KERNEL, "vfio-ims[%d](%s)", vector, dev_name(dev)); +} + +static void vfio_ims_disable(struct vfio_pci_intr_ctx *intr_ctx, + unsigned int index) +{ + struct vfio_pci_irq_ctx *ctx; + unsigned long i; + + xa_for_each(&intr_ctx->ctx, i, ctx) + vfio_msi_set_vector_signal(intr_ctx, i, -1, index); +} + +/* + * The virtual device driver is responsible for enabling IMS by creating + * the IMS domaim from where interrupts will be allocated dynamically. + * IMS thus has to be enabled by the time an ioctl() arrives. 
+ */ +static int vfio_ims_enable(struct vfio_pci_intr_ctx *intr_ctx, int nvec, + unsigned int index) +{ + return -EINVAL; +} + +static int vfio_ims_init_irq_ctx(struct vfio_pci_intr_ctx *intr_ctx, + struct vfio_pci_irq_ctx *ctx) +{ + struct vfio_pci_ims *ims = intr_ctx->priv; + + ctx->icookie = ims->default_cookie; + + return 0; +} + +static struct vfio_pci_intr_ops vfio_pci_ims_intr_ops = { + .set_msix_trigger = vfio_pci_set_msi_trigger, + .set_req_trigger = vfio_pci_set_req_trigger, + .msi_enable = vfio_ims_enable, + .msi_disable = vfio_ims_disable, + .msi_request_interrupt = vfio_ims_request_interrupt, + .msi_free_interrupt = vfio_ims_free_interrupt, + .msi_device_name = vfio_ims_device_name, + .init_irq_ctx = vfio_ims_init_irq_ctx, +}; + +int vfio_pci_ims_init_intr_ctx(struct vfio_device *vdev, + struct vfio_pci_intr_ctx *intr_ctx, + struct pci_dev *pdev, + union msi_instance_cookie *default_cookie) +{ + struct vfio_pci_ims *ims; + + ims = kzalloc(sizeof(*ims), GFP_KERNEL_ACCOUNT); + if (!ims) + return -ENOMEM; + + ims->pdev = pdev; + ims->default_cookie = *default_cookie; + ims->vdev = vdev; + + _vfio_pci_init_intr_ctx(intr_ctx); + + intr_ctx->ops = &vfio_pci_ims_intr_ops; + intr_ctx->priv = ims; + intr_ctx->ims_backed_irq = true; + intr_ctx->irq_type = VFIO_PCI_MSIX_IRQ_INDEX; + + return 0; +} +EXPORT_SYMBOL_GPL(vfio_pci_ims_init_intr_ctx); + +void vfio_pci_ims_release_intr_ctx(struct vfio_pci_intr_ctx *intr_ctx) +{ + struct vfio_pci_ims *ims = intr_ctx->priv; + + _vfio_pci_release_intr_ctx(intr_ctx); + kfree(ims); + intr_ctx->irq_type = VFIO_PCI_NUM_IRQS; +} +EXPORT_SYMBOL_GPL(vfio_pci_ims_release_intr_ctx); + int vfio_pci_set_irqs_ioctl(struct vfio_pci_intr_ctx *intr_ctx, uint32_t flags, unsigned int index, unsigned int start, unsigned int count, void *data) diff --git a/include/linux/vfio_pci_core.h b/include/linux/vfio_pci_core.h index 4fe0df25162f..a3161af791f8 100644 --- a/include/linux/vfio_pci_core.h +++ b/include/linux/vfio_pci_core.h @@ -58,6 +58,7 @@ struct vfio_pci_region { * @req_trigger: Eventfd associated with device request notification * @ctx: Per-interrupt context indexed by vector * @irq_type: Type of interrupt from guest perspective + * @ims_backed_irq: Interrupts managed by IMS backend */ struct vfio_pci_intr_ctx { const struct vfio_pci_intr_ops *ops; @@ -67,6 +68,7 @@ struct vfio_pci_intr_ctx { struct eventfd_ctx *req_trigger; struct xarray ctx; int irq_type; + bool ims_backed_irq:1; }; struct vfio_pci_irq_ctx; @@ -181,6 +183,11 @@ int vfio_pci_set_irqs_ioctl(struct vfio_pci_intr_ctx *intr_ctx, uint32_t flags, void vfio_pci_send_signal(struct vfio_pci_intr_ctx *intr_ctx, unsigned int vector); int vfio_pci_set_emulated(struct vfio_pci_intr_ctx *intr_ctx, unsigned int start, unsigned int count); +int vfio_pci_ims_init_intr_ctx(struct vfio_device *vdev, + struct vfio_pci_intr_ctx *intr_ctx, + struct pci_dev *pdev, + union msi_instance_cookie *default_cookie); +void vfio_pci_ims_release_intr_ctx(struct vfio_pci_intr_ctx *intr_ctx); int vfio_pci_core_ioctl_feature(struct vfio_device *device, u32 flags, void __user *arg, size_t argsz); ssize_t vfio_pci_core_read(struct vfio_device *core_vdev, char __user *buf, From patchwork Fri Oct 27 17:00:57 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Reinette Chatre X-Patchwork-Id: 13438671 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org 
[23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id C552DC25B48 for ; Fri, 27 Oct 2023 17:02:21 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1346489AbjJ0RCT (ORCPT ); Fri, 27 Oct 2023 13:02:19 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:43318 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1346313AbjJ0RBo (ORCPT ); Fri, 27 Oct 2023 13:01:44 -0400 Received: from mgamail.intel.com (mgamail.intel.com [192.198.163.8]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 66646D64; Fri, 27 Oct 2023 10:01:40 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1698426101; x=1729962101; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=tbz3SXJJY7i1djigYQYirkmt+YTKLECJsMGiiTtlipg=; b=mhzsi8qRl9dKKufxxHLRj/zfPp/MuUkphL2MYY6oWbd0wFJOhINfyfx8 RtwWkSIn9nAC0XY1bu5RplNbIUTEtvkiH0vkfPXeV6j4ynGQPtRG2Vk2j jSjQ75LFEJVtSw16ly3TDeTPLlMhmbbzmXuhluRc8b5Bv16D4Pre1nYuS ODZ5A+A3545OQ9NRIp5/twiMpSjIoXnX8ekYA1al6ee38h971iefcbsLH pFhgqODQXbXFU6ENUN8iXsXUeggZ0rYUpioF7QyrZ64VUDoCTWGRuqKNe odbXhk+jA2kxY9R9YE3Gj9MVm7j+tCpIoKyjEH8cgvv/ZNiMzQDHwaQIJ g==; X-IronPort-AV: E=McAfee;i="6600,9927,10876"; a="612148" X-IronPort-AV: E=Sophos;i="6.03,256,1694761200"; d="scan'208";a="612148" Received: from fmsmga005.fm.intel.com ([10.253.24.32]) by fmvoesa102.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 27 Oct 2023 10:01:22 -0700 X-ExtLoop1: 1 X-IronPort-AV: E=McAfee;i="6600,9927,10876"; a="1090988246" X-IronPort-AV: E=Sophos;i="6.03,256,1694761200"; d="scan'208";a="1090988246" Received: from rchatre-ws.ostc.intel.com ([10.54.69.144]) by fmsmga005-auth.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 27 Oct 2023 10:01:21 -0700 From: Reinette Chatre To: jgg@nvidia.com, yishaih@nvidia.com, shameerali.kolothum.thodi@huawei.com, kevin.tian@intel.com, alex.williamson@redhat.com Cc: kvm@vger.kernel.org, dave.jiang@intel.com, jing2.liu@intel.com, ashok.raj@intel.com, fenghua.yu@intel.com, tom.zanussi@linux.intel.com, reinette.chatre@intel.com, linux-kernel@vger.kernel.org, patches@lists.linux.dev Subject: [RFC PATCH V3 25/26] vfio/pci: Add accessor for IMS index Date: Fri, 27 Oct 2023 10:00:57 -0700 Message-Id: X-Mailer: git-send-email 2.34.1 In-Reply-To: References: MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org A virtual device driver needs to facilitate translation between the guest's MSI-X interrupt and the backing IMS interrupt with which the physical device is programmed. For example, the guest may need to obtain the IMS index from the virtual device driver that it needs to program into descriptors submitted to the device to ensure that the completion interrupts are generated correctly. Introduce vfio_pci_ims_hwirq() to the IMS backend as a helper that returns the IMS interrupt index backing a provided MSI-X interrupt index belonging to a guest. Originally-by: Dave Jiang Signed-off-by: Reinette Chatre --- No changes since RFC V2. 
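For illustration, a virtual device driver could use the accessor along these lines when emulating a guest's descriptor submission (the driver structure, descriptor layout, and field name below are hypothetical; only vfio_pci_ims_hwirq() is from this patch):

static int example_vdev_fill_desc(struct example_vdev *evdev,
				  struct example_desc *desc,
				  unsigned int guest_vector)
{
	int ims_index = vfio_pci_ims_hwirq(&evdev->intr_ctx, guest_vector);

	if (ims_index < 0)
		return ims_index;

	/* The device generates the completion interrupt from this IMS entry. */
	desc->int_handle = ims_index;
	return 0;
}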
 drivers/vfio/pci/vfio_pci_intrs.c | 25 +++++++++++++++++++++++++
 include/linux/vfio_pci_core.h     |  1 +
 2 files changed, 26 insertions(+)

diff --git a/drivers/vfio/pci/vfio_pci_intrs.c b/drivers/vfio/pci/vfio_pci_intrs.c
index b318a3f671e8..32ebc8fec4c4 100644
--- a/drivers/vfio/pci/vfio_pci_intrs.c
+++ b/drivers/vfio/pci/vfio_pci_intrs.c
@@ -1163,6 +1163,31 @@ void vfio_pci_ims_release_intr_ctx(struct vfio_pci_intr_ctx *intr_ctx)
 }
 EXPORT_SYMBOL_GPL(vfio_pci_ims_release_intr_ctx);
 
+/*
+ * Return IMS index of IMS interrupt backing MSI-X interrupt @vector
+ */
+int vfio_pci_ims_hwirq(struct vfio_pci_intr_ctx *intr_ctx, unsigned int vector)
+{
+	struct vfio_pci_irq_ctx *ctx;
+	int id = -EINVAL;
+
+	mutex_lock(&intr_ctx->igate);
+
+	if (!intr_ctx->ims_backed_irq)
+		goto out_unlock;
+
+	ctx = vfio_irq_ctx_get(intr_ctx, vector);
+	if (!ctx || ctx->emulated)
+		goto out_unlock;
+
+	id = ctx->ims_id;
+
+out_unlock:
+	mutex_unlock(&intr_ctx->igate);
+	return id;
+}
+EXPORT_SYMBOL_GPL(vfio_pci_ims_hwirq);
+
 int vfio_pci_set_irqs_ioctl(struct vfio_pci_intr_ctx *intr_ctx, uint32_t flags,
 			    unsigned int index, unsigned int start,
 			    unsigned int count, void *data)
diff --git a/include/linux/vfio_pci_core.h b/include/linux/vfio_pci_core.h
index a3161af791f8..dbc77839ef26 100644
--- a/include/linux/vfio_pci_core.h
+++ b/include/linux/vfio_pci_core.h
@@ -180,6 +180,7 @@ void vfio_pci_release_intr_ctx(struct vfio_pci_intr_ctx *intr_ctx);
 int vfio_pci_set_irqs_ioctl(struct vfio_pci_intr_ctx *intr_ctx, uint32_t flags,
 			    unsigned int index, unsigned int start,
 			    unsigned int count, void *data);
+int vfio_pci_ims_hwirq(struct vfio_pci_intr_ctx *intr_ctx, unsigned int vector);
 void vfio_pci_send_signal(struct vfio_pci_intr_ctx *intr_ctx, unsigned int vector);
 int vfio_pci_set_emulated(struct vfio_pci_intr_ctx *intr_ctx, unsigned int start,
 			  unsigned int count);
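
As a usage sketch, not part of the patch: a virtual device driver could call this accessor when it emulates whatever mechanism the guest uses to discover the interrupt handle to place in its work descriptors. Only vfio_pci_ims_hwirq() comes from this series; sample_get_interrupt_handle() and the notion of a handle register are assumptions made for illustration.

#include <linux/vfio_pci_core.h>

/*
 * Sketch only: translate the MSI-X vector the guest chose for completion
 * interrupts into the IMS index the physical device expects in descriptors.
 */
static int sample_get_interrupt_handle(struct vfio_pci_intr_ctx *intr_ctx,
				       unsigned int guest_msix_vector,
				       u32 *handle)
{
	int hwirq = vfio_pci_ims_hwirq(intr_ctx, guest_msix_vector);

	if (hwirq < 0)		/* not IMS backed, emulated, or not yet allocated */
		return hwirq;

	*handle = hwirq;	/* value the guest programs into its descriptors */
	return 0;
}

Because the accessor takes igate internally, a caller such as this does not need to hold any interrupt-context lock of its own.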
From patchwork Fri Oct 27 17:00:58 2023
X-Patchwork-Submitter: Reinette Chatre
X-Patchwork-Id: 13438670
From: Reinette Chatre
To: jgg@nvidia.com, yishaih@nvidia.com, shameerali.kolothum.thodi@huawei.com,
    kevin.tian@intel.com, alex.williamson@redhat.com
Cc: kvm@vger.kernel.org, dave.jiang@intel.com, jing2.liu@intel.com,
    ashok.raj@intel.com, fenghua.yu@intel.com, tom.zanussi@linux.intel.com,
    reinette.chatre@intel.com, linux-kernel@vger.kernel.org,
    patches@lists.linux.dev
Subject: [RFC PATCH V3 26/26] vfio/pci: Support IMS cookie modification
Date: Fri, 27 Oct 2023 10:00:58 -0700
Message-Id: <5a118965e4ae827c28c2b1de6fa791e9ebfd5958.1698422237.git.reinette.chatre@intel.com>

IMS supports an implementation specific cookie that is associated with
each interrupt. By default the IMS interrupt allocation backend assigns a
default cookie to a new interrupt instance.

Add support for a virtual device driver to set the cookie of an
individual interrupt instance. For example, the virtual device driver may
intercept the guest's MMIO write that configures a new PASID for a
particular interrupt. Calling vfio_pci_ims_set_cookie() with the new
PASID value as IMS cookie enables subsequent interrupts to be allocated
with accurate data.

Signed-off-by: Reinette Chatre
---
No changes since RFC V2.

 drivers/vfio/pci/vfio_pci_intrs.c | 53 +++++++++++++++++++++++++++++++
 include/linux/vfio_pci_core.h     |  3 ++
 2 files changed, 56 insertions(+)

diff --git a/drivers/vfio/pci/vfio_pci_intrs.c b/drivers/vfio/pci/vfio_pci_intrs.c
index 32ebc8fec4c4..5dc22dd9390e 100644
--- a/drivers/vfio/pci/vfio_pci_intrs.c
+++ b/drivers/vfio/pci/vfio_pci_intrs.c
@@ -1188,6 +1188,59 @@ int vfio_pci_ims_hwirq(struct vfio_pci_intr_ctx *intr_ctx, unsigned int vector)
 }
 EXPORT_SYMBOL_GPL(vfio_pci_ims_hwirq);
 
+/*
+ * vfio_pci_ims_set_cookie() - Set unique cookie for vector.
+ * @intr_ctx:	Interrupt context.
+ * @vector:	Vector.
+ * @icookie:	New cookie for @vector.
+ *
+ * When a new IMS interrupt is allocated for @vector it will be
+ * assigned @icookie.
+ */
+int vfio_pci_ims_set_cookie(struct vfio_pci_intr_ctx *intr_ctx,
+			    unsigned int vector,
+			    union msi_instance_cookie *icookie)
+{
+	struct vfio_pci_irq_ctx *ctx;
+	int ret = -EINVAL;
+
+	mutex_lock(&intr_ctx->igate);
+
+	if (!intr_ctx->ims_backed_irq)
+		goto out_unlock;
+
+	ctx = vfio_irq_ctx_get(intr_ctx, vector);
+	if (ctx) {
+		if (WARN_ON_ONCE(ctx->emulated)) {
+			ret = -EINVAL;
+			goto out_unlock;
+		}
+		ctx->icookie = *icookie;
+		ret = 0;
+		goto out_unlock;
+	}
+
+	ctx = kzalloc(sizeof(*ctx), GFP_KERNEL_ACCOUNT);
+	if (!ctx) {
+		ret = -ENOMEM;
+		goto out_unlock;
+	}
+
+	ctx->icookie = *icookie;
+	ret = xa_insert(&intr_ctx->ctx, vector, ctx, GFP_KERNEL_ACCOUNT);
+	if (ret) {
+		kfree(ctx);
+		goto out_unlock;
+	}
+
+	ret = 0;
+
+out_unlock:
+	mutex_unlock(&intr_ctx->igate);
+	return ret;
+}
+EXPORT_SYMBOL_GPL(vfio_pci_ims_set_cookie);
+
 int vfio_pci_set_irqs_ioctl(struct vfio_pci_intr_ctx *intr_ctx, uint32_t flags,
 			    unsigned int index, unsigned int start,
 			    unsigned int count, void *data)
diff --git a/include/linux/vfio_pci_core.h b/include/linux/vfio_pci_core.h
index dbc77839ef26..b989b533e852 100644
--- a/include/linux/vfio_pci_core.h
+++ b/include/linux/vfio_pci_core.h
@@ -181,6 +181,9 @@ int vfio_pci_set_irqs_ioctl(struct vfio_pci_intr_ctx *intr_ctx, uint32_t flags,
 			    unsigned int index, unsigned int start,
 			    unsigned int count, void *data);
 int vfio_pci_ims_hwirq(struct vfio_pci_intr_ctx *intr_ctx, unsigned int vector);
+int vfio_pci_ims_set_cookie(struct vfio_pci_intr_ctx *intr_ctx,
+			    unsigned int vector,
+			    union msi_instance_cookie *icookie);
 void vfio_pci_send_signal(struct vfio_pci_intr_ctx *intr_ctx, unsigned int vector);
 int vfio_pci_set_emulated(struct vfio_pci_intr_ctx *intr_ctx, unsigned int start,
 			  unsigned int count);
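
As a usage sketch, not part of the patch: a virtual device driver that traps the guest's MMIO write configuring a PASID for one of its vectors could forward that PASID as the IMS cookie. Only vfio_pci_ims_set_cookie() and union msi_instance_cookie come from the kernel and this series; the intercept function and its parameters are assumptions made for illustration.

#include <linux/msi.h>
#include <linux/vfio_pci_core.h>

/*
 * Sketch only: record the PASID the guest configured for @guest_msix_vector
 * so that the backing IMS interrupt is (re)allocated with a matching cookie.
 */
static int sample_pasid_write_intercept(struct vfio_pci_intr_ctx *intr_ctx,
					unsigned int guest_msix_vector,
					u32 new_pasid)
{
	union msi_instance_cookie icookie = {
		.value = new_pasid,
	};

	/*
	 * If a per-vector context already exists its cookie is updated in
	 * place; otherwise one is created so the cookie is ready when the
	 * vector is first enabled.
	 */
	return vfio_pci_ims_set_cookie(intr_ctx, guest_msix_vector, &icookie);
}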