
[v2,00/17] Add Nested Translation Support for SMMUv3

Message ID cover.1683688960.git.nicolinc@nvidia.com (mailing list archive)

Message

Nicolin Chen May 10, 2023, 3:33 a.m. UTC
[ This series is rebased on top of v6.4-rc1 merging Jason's iommu_hwpt
  branch and Yi's vfio cdev v11 branch, then the replace v7 series and
  the nesting v2 (candidate) series and Intel VT-d series. Note that
  some of them are still getting finalized. So, there can be potential
  minor API changes that would not be reflected in this series. Yet, we
  can start reviewing the SMMU-driver-specific parts.

  @robin, the hw_info patch still requires the errata patch that you
  mentioned. Perhaps we can merge that separately or include it in v3.

  Thanks! ]

Changelog
v2:
 * Added arm_smmu_set_dev_data after the set_dev_data series.
 * Added Jason's patch "vfio: Remove VFIO_TYPE1_NESTING_IOMMU"
 * Replaced the iommu_get_unmanaged_domain() helper with Robin's patch.
 * Reworked the code in arm_smmu_cmdq_build_cmd() to make NH_VA
   a superset of NH_VAA.
 * Added inline comments and a bug-report link to the patch unsetting
   dst[2] and dst[3] of STE.
 * Dropped the to_s2_cfg helper since only one place really needs it.
 * Dropped the VMID (override) flag and s2vmid in the iommu_hwpt_arm_smmuv3
   structure, because user space is expected to use a shared S2
   domain/hwpt for all devices, i.e. the VMID (allocated with the S2
   domain) is already unified. If there's some special case that still
   needs VMID unification, we should probably add it incrementally.
 * Moved the introduction of the "struct arm_smmu_domain *s2" function
   parameter to the proper patch.
 * Redefined "struct iommu_hwpt_arm_smmuv3" by adding ste_uptr/len and
   out_event_uptr/len. Then added an arm_smmu_domain_finalise_nested()
   function to read the guest Stream Table Entry with proper sanity
   checks (a rough sketch of both structures follows this changelog).
 * Reworked arm_smmu_cache_invalidate_user() by reading the guest CMDQ
   directly, to support batching. Also, added return value feedback of
   -ETIMEDOUT at CMD_SYNC, and reported CERROR_ILL errors via the CONS
   in the user_data structure.
 * Updated data/functions following the nesting infrastructure updates.
 * Added/fixed multiple comments per v1 review inputs.
v1:
 https://lore.kernel.org/all/cover.1678348754.git.nicolinc@nvidia.com/
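
For reference, here is a rough sketch of the two uAPI layouts described in
the "Redefined"/"Reworked" bullets above. Only ste_uptr/len,
out_event_uptr/len, and the CONS reporting come from this changelog; every
other name is an illustrative guess, not the actual definition in the
patches:

/* Hypothetical layout of the nested S1 hwpt allocation data */
struct iommu_hwpt_arm_smmuv3 {
	__u64 ste_uptr;        /* user pointer to the guest STE */
	__u64 ste_len;
	__u64 out_event_uptr;  /* event written back if the STE fails sanity checks */
	__u64 out_event_len;
};

/* Hypothetical user_data for arm_smmu_cache_invalidate_user() */
struct iommu_hwpt_invalidate_arm_smmuv3 {
	__u64 cmdq_uptr;       /* guest CMDQ, read directly to support batching */
	__u64 cmdq_cons;       /* CONS reported back on CERROR_ILL or -ETIMEDOUT */
};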

--------------------------------------------------------------------------

Hi all,

This series of patches adds nested translation support for ARM SMMUv3.

Eric Auger made a huge effort previously with the VFIO uAPIs, and sent
his v16 a year ago. Now, nested translation should follow the new
IOMMUFD uAPI design. So, most of the key features are ported from the
previous VFIO solution, and then rebuilt on top of the IOMMUFD nesting
infrastructure.

The essential parts in the driver to support a nested translation are
->hw_info, ->domain_alloc_user and ->cache_invalidate_user ops. So this
series fundamentally adds these three functions in the SMMUv3 driver,
along with several preparations and cleanups for them.
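
As a rough sketch, the wiring looks something like below. The function names
come from the patch titles listed at the end of this letter; the exact
signatures follow the still-evolving IOMMUFD uAPI and may differ:

static struct iommu_ops arm_smmu_ops = {
	/* ... existing ops ... */
	.hw_info           = arm_smmu_hw_info,           /* report HW info to user space */
	.domain_alloc_user = arm_smmu_domain_alloc_user, /* allocate S2/nested S1 domains */
};

/* Invalidation is a per-domain op, served by the user-managed S1 domain */
static const struct iommu_domain_ops arm_smmu_nested_domain_ops = {
	/* ... */
	.cache_invalidate_user = arm_smmu_cache_invalidate_user,
};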

One unique requirement for SMMUv3 nested translation support is the MSI
doorbell address translation, which is a 2-stage translation too. To
work with the ITS driver, an msi_cookie needs to be set up on the
kernel-managed domain, the stage-2 domain of the nesting setup. The
same msi_cookie will then be fetched, via iommu_dma_get_msi_mapping_domain(),
in the iommu core to allocate and create IOVA mappings for the MSI doorbell
page(s). However, with the nesting design, the device is attached to a
user-managed domain, the stage-1 domain. So neither the setup nor the
fetching of the msi_cookie would work at the level of the stage-2 domain.
Thus, on both sides, the msi_cookie setup and fetching require a
redirection of the domain pointer. That is easy to do in the iommufd
core, but needs a new op in the iommu core and driver.
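
A minimal sketch of that redirection, assuming the new op sits in the domain
ops and reusing the names from the patches below; the exact signatures are
guesses:

/* iommu core side: resolve the domain that actually holds the msi_cookie */
struct iommu_domain *iommu_dma_get_msi_mapping_domain(struct device *dev)
{
	struct iommu_domain *domain = iommu_get_domain_for_dev(dev);

	if (domain && domain->ops->get_msi_mapping_domain)
		domain = domain->ops->get_msi_mapping_domain(domain);
	return domain;
}

/* SMMUv3 driver side: a nested S1 domain redirects to its kernel-managed S2 */
static struct iommu_domain *
arm_smmu_get_msi_mapping_domain(struct iommu_domain *domain)
{
	struct arm_smmu_domain *smmu_domain = to_smmu_domain(domain);

	return &smmu_domain->s2->domain; /* stage-2 domain carrying the msi_cookie */
}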

You can also find this series on GitHub:
https://github.com/nicolinc/iommufd/commits/iommufd_nesting-v2

The kernel branch is tested with this QEMU branch:
https://github.com/nicolinc/qemu/commits/wip/iommufd_rfcv4+nesting+smmuv3-v2

Thanks!
Nicolin Chen

Eric Auger (2):
  iommu/arm-smmu-v3: Unset corresponding STE fields when s2_cfg is NULL
  iommu/arm-smmu-v3: Add STRTAB_STE_0_CFG_NESTED for 2-stage translation

Jason Gunthorpe (1):
  vfio: Remove VFIO_TYPE1_NESTING_IOMMU

Nicolin Chen (13):
  iommufd: Add nesting related data structures for ARM SMMUv3
  iommufd/device: Setup MSI on kernel-managed domains
  iommu/arm-smmu-v3: Add arm_smmu_hw_info
  iommu/arm-smmu-v3: Add arm_smmu_set/unset_dev_user_data
  iommu/arm-smmu-v3: Remove ARM_SMMU_DOMAIN_NESTED
  iommu/arm-smmu-v3: Allow ARM_SMMU_DOMAIN_S1 stage to access s2_cfg
  iommu/arm-smmu-v3: Add s1dss in struct arm_smmu_s1_cfg
  iommu/arm-smmu-v3: Pass in user_cfg to arm_smmu_domain_finalise
  iommu/arm-smmu-v3: Add arm_smmu_domain_alloc_user
  iommu/arm-smmu-v3: Support IOMMU_DOMAIN_NESTED type of allocations
  iommu/arm-smmu-v3: Implement arm_smmu_get_msi_mapping_domain
  iommu/arm-smmu-v3: Add CMDQ_OP_TLBI_NH_VAA and CMDQ_OP_TLBI_NH_ALL
  iommu/arm-smmu-v3: Add arm_smmu_cache_invalidate_user

Robin Murphy (1):
  iommu/dma: Support MSIs through nested domains

 drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c | 461 ++++++++++++++++++--
 drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h |  11 +-
 drivers/iommu/arm/arm-smmu/arm-smmu.c       |  16 -
 drivers/iommu/dma-iommu.c                   |  18 +-
 drivers/iommu/iommu.c                       |  10 -
 drivers/iommu/iommufd/device.c              |   5 +-
 drivers/iommu/iommufd/main.c                |   1 +
 drivers/iommu/iommufd/vfio_compat.c         |   7 +-
 drivers/vfio/vfio_iommu_type1.c             |  12 +-
 include/linux/iommu.h                       |   7 +-
 include/uapi/linux/iommufd.h                |  83 ++++
 include/uapi/linux/vfio.h                   |   2 +-
 12 files changed, 538 insertions(+), 95 deletions(-)

Comments

Tian, Kevin May 10, 2023, 8:11 a.m. UTC | #1
> From: Nicolin Chen <nicolinc@nvidia.com>
> Sent: Wednesday, May 10, 2023 11:33 AM
> 
> One unique requirement for SMMUv3 nested translation support is the MSI
> doorbell address translation, which is a 2-stage translation too. To
> work with the ITS driver, an msi_cookie needs to be set up on the
> kernel-managed domain, the stage-2 domain of the nesting setup. The
> same msi_cookie will then be fetched, via
> iommu_dma_get_msi_mapping_domain(),
> in the iommu core to allocate and create IOVA mappings for the MSI
> doorbell page(s). However, with the nesting design, the device is
> attached to a user-managed domain, the stage-1 domain. So neither the
> setup nor the fetching of the msi_cookie would work at the level of the
> stage-2 domain. Thus, on both sides, the msi_cookie setup and fetching
> require a redirection of the domain pointer. That is easy to do in the
> iommufd core, but needs a new op in the iommu core and driver.
> 

Looks like the new preferred way is to map the physical ITS page to an IPA
provided by QEMU, then let the guest allocate the cookie in S1, which
is then passed back by QEMU to the host kernel? [1]

[1] https://lore.kernel.org/linux-iommu/5ff0d72b-a7b8-c8a9-60e5-396e7a1ef363@arm.com/
Nicolin Chen May 10, 2023, 8:41 a.m. UTC | #2
On Wed, May 10, 2023 at 08:11:28AM +0000, Tian, Kevin wrote:
 
> > From: Nicolin Chen <nicolinc@nvidia.com>
> > Sent: Wednesday, May 10, 2023 11:33 AM
> >
> > One unique requirement for SMMUv3 nested translation support is the MSI
> > doorbell address translation, which is a 2-stage translation too. To
> > work with the ITS driver, an msi_cookie needs to be set up on the
> > kernel-managed domain, the stage-2 domain of the nesting setup. The
> > same msi_cookie will then be fetched, via
> > iommu_dma_get_msi_mapping_domain(),
> > in the iommu core to allocate and create IOVA mappings for the MSI
> > doorbell page(s). However, with the nesting design, the device is
> > attached to a user-managed domain, the stage-1 domain. So neither the
> > setup nor the fetching of the msi_cookie would work at the level of the
> > stage-2 domain. Thus, on both sides, the msi_cookie setup and fetching
> > require a redirection of the domain pointer. That is easy to do in the
> > iommufd core, but needs a new op in the iommu core and driver.
> >
> 
> Looks like the new preferred way is to map the physical ITS page to an IPA
> provided by QEMU, then let the guest allocate the cookie in S1, which
> is then passed back by QEMU to the host kernel? [1]
> 
> [1] https://lore.kernel.org/linux-iommu/5ff0d72b-a7b8-c8a9-60e5-396e7a1ef363@arm.com/

Hmm.. is that something firm enough to implement at this stage?

Thank you
Nicolin
Zhangfei Gao May 15, 2023, 10 a.m. UTC | #3
Hi, Nico

On Wed, 10 May 2023 at 11:34, Nicolin Chen <nicolinc@nvidia.com> wrote:
>
> [ This series is rebased on top of v6.4-rc1 merging Jason's iommu_hwpt
>   branch and Yi's vfio cdev v11 branch, then the replace v7 series and
>   the nesting v2 (candidate) series and Intel VT-d series. Note that
>   some of them are still getting finalized. So, there can be potential
>   minor API changes that would not be reflected in this series. Yet, we
>   can start reviewing the SMMU-driver-specific parts.
>
>   @robin, the hw_info patch still requires the errata patch that you
>   mentioned. Perhaps we can merge that separately or include it in v3.
>
>   Thanks! ]
>
> Changelog
> v2:
>  * Added arm_smmu_set_dev_data after the set_dev_data series.
>  * Added Jason's patch "vfio: Remove VFIO_TYPE1_NESTING_IOMMU"
>  * Replaced the iommu_get_unmanaged_domain() helper with Robin's patch.
>  * Reworked the code in arm_smmu_cmdq_build_cmd() to make NH_VA
>    a superset of NH_VAA.
>  * Added inline comments and a bug-report link to the patch unsetting
>    dst[2] and dst[3] of STE.
>  * Dropped the to_s2_cfg helper since only one place really needs it.
>  * Dropped the VMID (override) flag and s2vmid in the iommu_hwpt_arm_smmuv3
>    structure, because user space is expected to use a shared S2
>    domain/hwpt for all devices, i.e. the VMID (allocated with the S2
>    domain) is already unified. If there's some special case that still
>    needs VMID unification, we should probably add it incrementally.
>  * Moved the introduction of the "struct arm_smmu_domain *s2" function
>    parameter to the proper patch.
>  * Redefined "struct iommu_hwpt_arm_smmuv3" by adding ste_uptr/len and
>    out_event_uptr/len. Then added an arm_smmu_domain_finalise_nested()
>    function to read the guest Stream Table Entry with proper sanity checks.
>  * Reworked arm_smmu_cache_invalidate_user() by reading the guest CMDQ
>    directly, to support batching. Also, added return value feedback of
>    -ETIMEDOUT at CMD_SYNC, and reported CERROR_ILL errors via the CONS
>    in the user_data structure.
>  * Updated data/functions following the nesting infrastructure updates.
>  * Added/fixed multiple comments per v1 review inputs.
> v1:
>  https://lore.kernel.org/all/cover.1678348754.git.nicolinc@nvidia.com/
>
> --------------------------------------------------------------------------
>
> Hi all,
>
> This series of patches adds nested translation support for ARM SMMUv3.
>
> Eric Auger made a huge effort previously with the VFIO uAPIs, and sent
> his v16 a year ago. Now, nested translation should follow the new
> IOMMUFD uAPI design. So, most of the key features are ported from the
> previous VFIO solution, and then rebuilt on top of the IOMMUFD nesting
> infrastructure.
>
> The essential parts in the driver to support a nested translation are
> ->hw_info, ->domain_alloc_user and ->cache_invalidate_user ops. So this
> series fundamentally adds these three functions in the SMMUv3 driver,
> along with several preparations and cleanups for them.
>
> One unique requirement for SMMUv3 nested translation support is the MSI
> doorbell address translation, which is a 2-stage translation too. To
> work with the ITS driver, an msi_cookie needs to be set up on the
> kernel-managed domain, the stage-2 domain of the nesting setup. The
> same msi_cookie will then be fetched, via iommu_dma_get_msi_mapping_domain(),
> in the iommu core to allocate and create IOVA mappings for the MSI doorbell
> page(s). However, with the nesting design, the device is attached to a
> user-managed domain, the stage-1 domain. So neither the setup nor the
> fetching of the msi_cookie would work at the level of the stage-2 domain.
> Thus, on both sides, the msi_cookie setup and fetching require a
> redirection of the domain pointer. That is easy to do in the iommufd
> core, but needs a new op in the iommu core and driver.
>
> You can also find this series on GitHub:
> https://github.com/nicolinc/iommufd/commits/iommufd_nesting-v2
>
> The kernel branch is tested with this QEMU branch:
> https://github.com/nicolinc/qemu/commits/wip/iommufd_rfcv4+nesting+smmuv3-v2
>

I rebased on these two branches and did some basic tests.

The basic functions work after backporting:
iommufd: Add IOMMU_PAGE_RESPONSE
iommufd: Add device fault handler support

https://github.com/Linaro/linux-kernel-warpdrive/tree/uacce-devel-6.4
https://github.com/Linaro/qemu/tree/iommufd-6.4-nesting-smmuv3-v2

However, when debugging hotplug of a PCI device, it still does not
work: segmentation fault, same as 6.2.

guest kernel
CONFIG_HOTPLUG_PCI_PCIE=y

boot guest (this info does not appear in 6.2)
qemu-system-aarch64: -device
vfio-pci,host=0000:76:00.1,bus=pci.1,addr=0x0,id=acc1,iommufd=iommufd0:
Failed to set data -1
qemu-system-aarch64: -device
vfio-pci,host=0000:76:00.1,bus=pci.1,addr=0x0,id=acc1,iommufd=iommufd0:
failed to set device data

$ sudo nc -U /tmp/qmpm_1.socket
(qemu) info pci
(qemu) device_del acc1

guest:
qemu-system-aarch64: IOMMU_IOAS_UNMAP failed: No such file or directory
qemu-system-aarch64: vfio_container_dma_unmap(0xaaaae1fc0380,
0x8000000000, 0x10000) = -2 (No such file or directory)
qemu-system-aarch64: Failed to unset data -1
Segmentation fault (core dumped).  // also happened in 6.2

Thanks
Nicolin Chen May 15, 2023, 3:57 p.m. UTC | #4
Hi Zhangfei,

On Mon, May 15, 2023 at 06:00:26PM +0800, Zhangfei Gao wrote:
 
> I rebased on these two branches and did some basic tests.
> 
> The basic functions work after backporting:
> iommufd: Add IOMMU_PAGE_RESPONSE
> iommufd: Add device fault handler support
> 
> https://github.com/Linaro/linux-kernel-warpdrive/tree/uacce-devel-6.4
> https://github.com/Linaro/qemu/tree/iommufd-6.4-nesting-smmuv3-v2

Thanks for testing!

> However, when debugging hotplug of a PCI device, it still does not
> work: segmentation fault, same as 6.2.
> 
> guest kernel
> CONFIG_HOTPLUG_PCI_PCIE=y
> 
> boot guest (this info does not appear in 6.2)
> qemu-system-aarch64: -device
> vfio-pci,host=0000:76:00.1,bus=pci.1,addr=0x0,id=acc1,iommufd=iommufd0:
> Failed to set data -1
> qemu-system-aarch64: -device
> vfio-pci,host=0000:76:00.1,bus=pci.1,addr=0x0,id=acc1,iommufd=iommufd0:
> failed to set device data

Hmm.. I wonder what fails the set_dev_data ioctl...

> $ sudo nc -U /tmp/qmpm_1.socket
> (qemu) info pci
> (qemu) device_del acc1
> 
> guest:
> qemu-system-aarch64: IOMMU_IOAS_UNMAP failed: No such file or directory
> qemu-system-aarch64: vfio_container_dma_unmap(0xaaaae1fc0380,
> 0x8000000000, 0x10000) = -2 (No such file or directory)

This results from the following commit, which we should
drop later:

commit c4fd2efd7c02dd30491adf676c1b0aed67656f36
Author: Yi Liu <yi.l.liu@intel.com>
Date:   Thu Apr 27 05:47:03 2023 -0700

    vfio/container: Skip readonly pages

    This is a temporary solution for Intel platform due to an errata in
    which readonly pages in second stage page table is exclusive with
    nested support.

    Signed-off-by: Yi Liu <yi.l.liu@intel.com>


> qemu-system-aarch64: Failed to unset data -1
> Segmentation fault (core dumped).  // also happened in 6.2

Hmm, would it be possible for you to run the test again by
adding the following tracers to your QEMU command?
    --trace "iommufd*" \
    --trace "smmu*" \
    --trace "vfio_*" \
    --trace "pci_*"

Thanks
Nic
Zhangfei Gao May 16, 2023, 3:12 a.m. UTC | #5
On Mon, 15 May 2023 at 23:58, Nicolin Chen <nicolinc@nvidia.com> wrote:
>
> Hi Zhangfei,
>
> On Mon, May 15, 2023 at 06:00:26PM +0800, Zhangfei Gao wrote:
>
> > I rebased on these two branches and did some basic tests.
> >
> > The basic functions work after backporting:
> > iommufd: Add IOMMU_PAGE_RESPONSE
> > iommufd: Add device fault handler support
> >
> > https://github.com/Linaro/linux-kernel-warpdrive/tree/uacce-devel-6.4
> > https://github.com/Linaro/qemu/tree/iommufd-6.4-nesting-smmuv3-v2
>
> Thanks for testing!
>
> > However, when debugging hotplug of a PCI device, it still does not
> > work: segmentation fault, same as 6.2.
> >
> > guest kernel
> > CONFIG_HOTPLUG_PCI_PCIE=y
> >
> > boot guest (this info does not appear in 6.2)
> > qemu-system-aarch64: -device
> > vfio-pci,host=0000:76:00.1,bus=pci.1,addr=0x0,id=acc1,iommufd=iommufd0:
> > Failed to set data -1
> > qemu-system-aarch64: -device
> > vfio-pci,host=0000:76:00.1,bus=pci.1,addr=0x0,id=acc1,iommufd=iommufd0:
> > failed to set device data
>
> Hmm.. I wonder what fails the set_dev_data ioctl...
Some quick debugging shows it is because dev_data.sid=0, causing
arm_smmu_set_dev_user_data() to fail:

hw/arm/smmu-common.c
smmu_dev_set_iommu_device
.sid = smmu_get_sid(sdev)
smmu_dev_set_iommu_device dev_data.sid=0

drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
arm_smmu_set_dev_user_data
u32 sid_user = user->sid;
if (!sid_user) return -EINVAL;
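
Putting those fragments together, the failing path looks roughly like this
(a reconstruction for illustration only; the user-data type name is made up):

/* kernel side: sketch of the check that rejects a 0-valued SID */
static int arm_smmu_set_dev_user_data(struct device *dev, const void *user_data)
{
	const struct iommu_smmuv3_dev_data *user = user_data; /* hypothetical type */
	u32 sid_user = user->sid;

	if (!sid_user)
		return -EINVAL; /* surfaces in QEMU as "Failed to set data -1" */
	/* ... otherwise record the user SID for this device ... */
	return 0;
}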

>
> > $ sudo nc -U /tmp/qmpm_1.socket
> > (qemu) info pci
> > (qemu) device_del acc1
> >
> > guest:
> > qemu-system-aarch64: IOMMU_IOAS_UNMAP failed: No such file or directory
> > qemu-system-aarch64: vfio_container_dma_unmap(0xaaaae1fc0380,
> > 0x8000000000, 0x10000) = -2 (No such file or directory)
>
From an earlier email reply:
(Eric) In the QEMU arm virt machine, 0x8000000000 matches the PCI MMIO region.
(Yi) Currently, the iommufd kernel part doesn't support mapping device BAR MMIO.
This is a known gap.

> This results from the following commit, which we should
> drop later:
>
> commit c4fd2efd7c02dd30491adf676c1b0aed67656f36
> Author: Yi Liu <yi.l.liu@intel.com>
> Date:   Thu Apr 27 05:47:03 2023 -0700
>
>     vfio/container: Skip readonly pages
>
>     This is a temporary solution for Intel platform due to an errata in
>     which readonly pages in second stage page table is exclusive with
>     nested support.
>
>     Signed-off-by: Yi Liu <yi.l.liu@intel.com>
>
>
> > qemu-system-aarch64: Failed to unset data -1
> > Segmentation fault (core dumped).  // also happened in 6.2
>
> Hmm, would it be possible for you to run the test again by
> adding the following tracers to your QEMU command?
>     --trace "iommufd*" \
>     --trace "smmu*" \
>     --trace "vfio_*" \
>     --trace "pci_*"
>

Have sent you the log directly, since it is too big.

Thanks
Nicolin Chen May 25, 2023, 11:42 p.m. UTC | #6
On Tue, May 16, 2023 at 11:12:44AM +0800, Zhangfei Gao wrote:

> > > However, when debugging hotplug of a PCI device, it still does not
> > > work: segmentation fault, same as 6.2.
> > >
> > > guest kernel
> > > CONFIG_HOTPLUG_PCI_PCIE=y
> > >
> > > boot guest (this info does not appear in 6.2)
> > > qemu-system-aarch64: -device
> > > vfio-pci,host=0000:76:00.1,bus=pci.1,addr=0x0,id=acc1,iommufd=iommufd0:
> > > Failed to set data -1
> > > qemu-system-aarch64: -device
> > > vfio-pci,host=0000:76:00.1,bus=pci.1,addr=0x0,id=acc1,iommufd=iommufd0:
> > > failed to set device data
> >
> > Hmm.. I wonder what fails the set_dev_data ioctl...
> Some quick debugging shows it is because dev_data.sid=0, causing
> arm_smmu_set_dev_user_data() to fail

I found that too. The input PCI bus number is 1, yet in the
context of set_dev_data, the PCI bus number is 0, which
results in a 0-valued sid. I will take another look to figure
out why.

> > > $ sudo nc -U /tmp/qmpm_1.socket
> > > (qemu) info pci
> > > (qemu) device_del acc1
> > >
> > > guest:
> > > qemu-system-aarch64: IOMMU_IOAS_UNMAP failed: No such file or directory
> > > qemu-system-aarch64: vfio_container_dma_unmap(0xaaaae1fc0380,
> > > 0x8000000000, 0x10000) = -2 (No such file or directory)
> >
> From an earlier email reply:
> (Eric) In the QEMU arm virt machine, 0x8000000000 matches the PCI MMIO region.
> (Yi) Currently, the iommufd kernel part doesn't support mapping device BAR MMIO.
> This is a known gap.

OK.

> > > qemu-system-aarch64: Failed to unset data -1
> > > Segmentation fault (core dumped).  // also happened in 6.2
> >
> > Hmm, would it be possible for you to run the test again by
> > adding the following tracers to your QEMU command?
> >     --trace "iommufd*" \
> >     --trace "smmu*" \
> >     --trace "vfio_*" \
> >     --trace "pci_*"
> >
> 
> Have sent you the log directly, since it is too big.

I have found two missing pieces in the device detach routine.
Applying the following should fix the crash in the hotplug path.

----------------------------------------------------------------------------
diff --git a/hw/vfio/container-base.c b/hw/vfio/container-base.c
index 89a256efa999..2344307523cb 100644
--- a/hw/vfio/container-base.c
+++ b/hw/vfio/container-base.c
@@ -151,8 +151,10 @@ void vfio_container_destroy(VFIOContainer *container)
     }

     QLIST_FOREACH_SAFE(giommu, &container->giommu_list, giommu_next, tmp) {
-        memory_region_unregister_iommu_notifier(
-                MEMORY_REGION(giommu->iommu_mr), &giommu->n);
+        if (giommu->n.notifier_flags) {
+            memory_region_unregister_iommu_notifier(
+                    MEMORY_REGION(giommu->iommu_mr), &giommu->n);
+        }
         QLIST_REMOVE(giommu, giommu_next);
         g_free(giommu);
     }
diff --git a/hw/vfio/iommufd.c b/hw/vfio/iommufd.c
index 844c60892db2..35d31480390d 100644
--- a/hw/vfio/iommufd.c
+++ b/hw/vfio/iommufd.c
@@ -652,6 +652,9 @@ found:
      */
     if (QLIST_EMPTY(&container->hwpt_list)) {
         vfio_as_del_container(space, bcontainer);
+        if (bcontainer->nested) {
+            memory_listener_unregister(&bcontainer->prereg_listener);
+        }
     }
     __vfio_device_detach_container(vbasedev, container, &err);
     if (err) {
----------------------------------------------------------------------------

Would you please try your case with it?

Thanks
Nic
Zhangfei Gao May 26, 2023, 1:58 a.m. UTC | #7
On 2023/5/26 07:42, Nicolin Chen wrote:
> On Tue, May 16, 2023 at 11:12:44AM +0800, Zhangfei Gao wrote:
>
>>>> However, when debugging hotplug of a PCI device, it still does not
>>>> work: segmentation fault, same as 6.2.
>>>>
>>>> guest kernel
>>>> CONFIG_HOTPLUG_PCI_PCIE=y
>>>>
>>>> boot guest (this info does not appear in 6.2)
>>>> qemu-system-aarch64: -device
>>>> vfio-pci,host=0000:76:00.1,bus=pci.1,addr=0x0,id=acc1,iommufd=iommufd0:
>>>> Failed to set data -1
>>>> qemu-system-aarch64: -device
>>>> vfio-pci,host=0000:76:00.1,bus=pci.1,addr=0x0,id=acc1,iommufd=iommufd0:
>>>> failed to set device data
>>> Hmm.. I wonder what fails the set_dev_data ioctl...
>> Some quick debugging shows it is because dev_data.sid=0, causing
>> arm_smmu_set_dev_user_data() to fail
> I found that too. The input PCI bus number is 1, yet in the
> context of set_dev_data, the PCI bus number is 0, which
> results in a 0-valued sid. I will take another look to figure
> out why.
>
>>>> $ sudo nc -U /tmp/qmpm_1.socket
>>>> (qemu) info pci
>>>> (qemu) device_del acc1
>>>>
>>>> guest:
>>>> qemu-system-aarch64: IOMMU_IOAS_UNMAP failed: No such file or directory
>>>> qemu-system-aarch64: vfio_container_dma_unmap(0xaaaae1fc0380,
>>>> 0x8000000000, 0x10000) = -2 (No such file or directory)
>> From an earlier email reply:
>> (Eric) In the QEMU arm virt machine, 0x8000000000 matches the PCI MMIO region.
>> (Yi) Currently, the iommufd kernel part doesn't support mapping device BAR MMIO.
>> This is a known gap.
> OK.
>
>>>> qemu-system-aarch64: Failed to unset data -1
>>>> Segmentation fault (core dumped).  // also happened in 6.2
>>> Hmm, would it be possible for you to run the test again by
>>> adding the following tracers to your QEMU command?
>>>      --trace "iommufd*" \
>>>      --trace "smmu*" \
>>>      --trace "vfio_*" \
>>>      --trace "pci_*"
>>>
>> Have sent you the log directly, since it is too big.
> I have found two missing pieces in the device detach routine.
> Applying the following should fix the crash in the hotplug path.
>
> ----------------------------------------------------------------------------
> diff --git a/hw/vfio/container-base.c b/hw/vfio/container-base.c
> index 89a256efa999..2344307523cb 100644
> --- a/hw/vfio/container-base.c
> +++ b/hw/vfio/container-base.c
> @@ -151,8 +151,10 @@ void vfio_container_destroy(VFIOContainer *container)
>       }
>
>       QLIST_FOREACH_SAFE(giommu, &container->giommu_list, giommu_next, tmp) {
> -        memory_region_unregister_iommu_notifier(
> -                MEMORY_REGION(giommu->iommu_mr), &giommu->n);
> +        if (giommu->n.notifier_flags) {
> +            memory_region_unregister_iommu_notifier(
> +                    MEMORY_REGION(giommu->iommu_mr), &giommu->n);
> +        }
>           QLIST_REMOVE(giommu, giommu_next);
>           g_free(giommu);
>       }
> diff --git a/hw/vfio/iommufd.c b/hw/vfio/iommufd.c
> index 844c60892db2..35d31480390d 100644
> --- a/hw/vfio/iommufd.c
> +++ b/hw/vfio/iommufd.c
> @@ -652,6 +652,9 @@ found:
>        */
>       if (QLIST_EMPTY(&container->hwpt_list)) {
>           vfio_as_del_container(space, bcontainer);
> +        if (bcontainer->nested) {
> +            memory_listener_unregister(&bcontainer->prereg_listener);
> +        }
>       }
>       __vfio_device_detach_container(vbasedev, container, &err);
>       if (err) {
> ----------------------------------------------------------------------------
>
> Would you please try your case with it?


Yes, this solves the hotplug segmentation fault.

It still reports:

qemu-system-aarch64: IOMMU_IOAS_UNMAP failed: No such file or directory
qemu-system-aarch64: vfio_container_dma_unmap(0xaaaae622e300, 
0x8000000000, 0x10000) = -2 (No such file or directory)
qemu-system-aarch64: Failed to unset data -1 (only the first time of 
device_del)

Tested with device_del and device_add.

Thanks.
Nicolin Chen May 26, 2023, 5:10 a.m. UTC | #8
On Fri, May 26, 2023 at 09:58:52AM +0800, zhangfei gao wrote:

> > I have found two missing pieces in the device detach routine.
> > Applying the following should fix the crash in the hotplug path.
> > 
> > ----------------------------------------------------------------------------
> > diff --git a/hw/vfio/container-base.c b/hw/vfio/container-base.c
> > index 89a256efa999..2344307523cb 100644
> > --- a/hw/vfio/container-base.c
> > +++ b/hw/vfio/container-base.c
> > @@ -151,8 +151,10 @@ void vfio_container_destroy(VFIOContainer *container)
> >       }
> > 
> >       QLIST_FOREACH_SAFE(giommu, &container->giommu_list, giommu_next, tmp) {
> > -        memory_region_unregister_iommu_notifier(
> > -                MEMORY_REGION(giommu->iommu_mr), &giommu->n);
> > +        if (giommu->n.notifier_flags) {
> > +            memory_region_unregister_iommu_notifier(
> > +                    MEMORY_REGION(giommu->iommu_mr), &giommu->n);
> > +        }
> >           QLIST_REMOVE(giommu, giommu_next);
> >           g_free(giommu);
> >       }
> > diff --git a/hw/vfio/iommufd.c b/hw/vfio/iommufd.c
> > index 844c60892db2..35d31480390d 100644
> > --- a/hw/vfio/iommufd.c
> > +++ b/hw/vfio/iommufd.c
> > @@ -652,6 +652,9 @@ found:
> >        */
> >       if (QLIST_EMPTY(&container->hwpt_list)) {
> >           vfio_as_del_container(space, bcontainer);
> > +        if (bcontainer->nested) {
> > +            memory_listener_unregister(&bcontainer->prereg_listener);
> > +        }
> >       }
> >       __vfio_device_detach_container(vbasedev, container, &err);
> >       if (err) {
> > ----------------------------------------------------------------------------
> > 
> > Would you please try your case with it?
> 
> 
> Yes, this solves the hotplug segmentation fault.

Nice. Thanks!

> It still reports:
> 
> qemu-system-aarch64: IOMMU_IOAS_UNMAP failed: No such file or directory
> qemu-system-aarch64: vfio_container_dma_unmap(0xaaaae622e300,
> 0x8000000000, 0x10000) = -2 (No such file or directory)
> qemu-system-aarch64: Failed to unset data -1 (only the first time of
> device_del)
> 
> Tested with device_del and device_add.

I found that "pci.1" has secondary bus number 0 when the VM inits:

(qemu) info pci
  [...]
  Bus  0, device   2, function 0:
    PCI bridge: PCI device 1b36:000c
      IRQ 0, pin A
      BUS 0.
      secondary bus 0.
      subordinate bus 0.
      IO range [0xf000, 0x0fff]
      memory range [0xfff00000, 0x000fffff]
      prefetchable memory range [0xfff00000, 0x000fffff]
      BAR0: 32 bit memory at 0xffffffffffffffff [0x00000ffe].
      id "pci.1"

Then it changes later as the guest OS boots:

(qemu) info pci
  [...]
  Bus  0, device   2, function 0:
    PCI bridge: PCI device 1b36:000c
      IRQ 255, pin A
      BUS 0.
      secondary bus 1.
      subordinate bus 1.
      IO range [0x0000, 0x0fff]
      memory range [0x10000000, 0x101fffff]
      prefetchable memory range [0x8000000000, 0x80000fffff]
      BAR0: 32 bit memory at 0x10240000 [0x10240fff].
      id "pci.1"

This must be related to the PCI bus init sequence, since the bus
numbers and ranges listed in the first dump above are not yet
correctly assigned.

I will try figuring out what's going on, because this doesn't
make too much sense for our ->set_iommu_device callback if a
PCIBus isn't fully ready.
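
For reference, QEMU's SMMU model derives the SID from the BDF, so a secondary
bus number of 0 at ->set_iommu_device time yields SID 0 for the device at
devfn 0 on that bus (sketch based on the smmu_get_sid() helper in
hw/arm/smmu-common.h):

/* SID == BDF: with secondary bus 0 and devfn 0 this returns 0, which
 * arm_smmu_set_dev_user_data() then rejects with -EINVAL */
static inline uint16_t smmu_get_sid(SMMUDevice *sdev)
{
    return PCI_BUILD_BDF(pci_bus_num(sdev->bus), sdev->devfn);
}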

Alternatively, I could move the set_dev_data ioctl out of the
->set_iommu_device callback to a later stage.

Overall, this should be fixed in the next version.

Thank you
Nicolin