Message ID: 20210222155338.26132-5-shameerali.kolothum.thodi@huawei.com
State: New, archived
Series: KVM/ARM64 Add support for pinned VMIDs
Hi Shameer,

On Mon, Feb 22, 2021 at 03:53:37PM +0000, Shameer Kolothum wrote:
> If the SMMU supports BTM and the device belongs to NESTED domain
> with shared pasid table, we need to use the VMID allocated by the
> KVM for the s2 configuration. Hence, request a pinned VMID from KVM.
>
> Signed-off-by: Shameer Kolothum <shameerali.kolothum.thodi@huawei.com>
> ---
>  drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c | 49 ++++++++++++++++++++-
>  1 file changed, 47 insertions(+), 2 deletions(-)
>
> diff --git a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
> index 26bf7da1bcd0..04f83f7c8319 100644
> --- a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
> +++ b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
> @@ -28,6 +28,7 @@
>  #include <linux/pci.h>
>  #include <linux/pci-ats.h>
>  #include <linux/platform_device.h>
> +#include <linux/kvm_host.h>
>
>  #include <linux/amba/bus.h>
>
> @@ -2195,6 +2196,33 @@ static void arm_smmu_bitmap_free(unsigned long *map, int idx)
>  	clear_bit(idx, map);
>  }
>
> +static int arm_smmu_pinned_vmid_get(struct arm_smmu_domain *smmu_domain)
> +{
> +	struct arm_smmu_master *master;
> +
> +	master = list_first_entry_or_null(&smmu_domain->devices,
> +					  struct arm_smmu_master, domain_head);

This probably needs to hold devices_lock while using master.

> +	if (!master)
> +		return -EINVAL;
> +
> +	return kvm_pinned_vmid_get(master->dev);
> +}
> +
> +static int arm_smmu_pinned_vmid_put(struct arm_smmu_domain *smmu_domain)
> +{
> +	struct arm_smmu_master *master;
> +
> +	master = list_first_entry_or_null(&smmu_domain->devices,
> +					  struct arm_smmu_master, domain_head);
> +	if (!master)
> +		return -EINVAL;
> +
> +	if (smmu_domain->s2_cfg.vmid)
> +		return kvm_pinned_vmid_put(master->dev);
> +
> +	return 0;
> +}
> +
>  static void arm_smmu_domain_free(struct iommu_domain *domain)
>  {
>  	struct arm_smmu_domain *smmu_domain = to_smmu_domain(domain);
> @@ -2215,8 +2243,11 @@ static void arm_smmu_domain_free(struct iommu_domain *domain)
>  		mutex_unlock(&arm_smmu_asid_lock);
>  	}
>  	if (s2_cfg->set) {
> -		if (s2_cfg->vmid)
> -			arm_smmu_bitmap_free(smmu->vmid_map, s2_cfg->vmid);
> +		if (s2_cfg->vmid) {
> +			if (!(smmu->features & ARM_SMMU_FEAT_BTM) &&
> +			    smmu_domain->stage != ARM_SMMU_DOMAIN_NESTED)
> +				arm_smmu_bitmap_free(smmu->vmid_map, s2_cfg->vmid);
> +		}
>  	}
>
>  	kfree(smmu_domain);
> @@ -3199,6 +3230,17 @@ static int arm_smmu_attach_pasid_table(struct iommu_domain *domain,
>  	    !(smmu->features & ARM_SMMU_FEAT_2_LVL_CDTAB))
>  		goto out;
>
> +	if (smmu->features & ARM_SMMU_FEAT_BTM) {
> +		ret = arm_smmu_pinned_vmid_get(smmu_domain);
> +		if (ret < 0)
> +			goto out;
> +
> +		if (smmu_domain->s2_cfg.vmid)
> +			arm_smmu_bitmap_free(smmu->vmid_map, smmu_domain->s2_cfg.vmid);
> +
> +		smmu_domain->s2_cfg.vmid = (u16)ret;

That will require a TLB invalidation on the old VMID, once the STE is
rewritten.

More generally I think this pinned VMID set conflicts with that of
stage-2-only domains (which is the default state until a guest attaches a
PASID table). Say you have one guest using DOMAIN_NESTED without PASID
table, just DMA to IPA using VMID 0x8000. Now another guest attaches a
PASID table and obtains the same VMID from KVM. The stage-2 translation
might use TLB entries from the other guest, no? They'll both create
stage-2 TLB entries with {StreamWorld=NS-EL1, VMID=0x8000}

It's tempting to allocate all VMIDs through KVM instead, but that will
force a dependency on KVM to use VFIO_TYPE1_NESTING_IOMMU and might break
existing users of that extension (though I'm not sure there are any).
Instead we might need to restrict the SMMU VMID bitmap to match the
private VMID set in KVM.

Besides we probably want to restrict this feature to systems supporting
VMID16 on both SMMU and CPUs, or at least check that they are compatible.

> +	}
> +
>  	smmu_domain->s1_cfg.cdcfg.cdtab_dma = cfg->base_ptr;
>  	smmu_domain->s1_cfg.s1cdmax = cfg->pasid_bits;
>  	smmu_domain->s1_cfg.s1fmt = cfg->vendor_data.smmuv3.s1fmt;
> @@ -3221,6 +3263,7 @@ static int arm_smmu_attach_pasid_table(struct iommu_domain *domain,
>  static void arm_smmu_detach_pasid_table(struct iommu_domain *domain)
>  {
>  	struct arm_smmu_domain *smmu_domain = to_smmu_domain(domain);
> +	struct arm_smmu_device *smmu = smmu_domain->smmu;
>  	struct arm_smmu_master *master;
>  	unsigned long flags;
>
> @@ -3237,6 +3280,8 @@ static void arm_smmu_detach_pasid_table(struct iommu_domain *domain)
>  		arm_smmu_install_ste_for_dev(master);
>  	spin_unlock_irqrestore(&smmu_domain->devices_lock, flags);
>
> +	if (smmu->features & ARM_SMMU_FEAT_BTM)
> +		arm_smmu_pinned_vmid_put(smmu_domain);

Aliasing here as well: the VMID is still live but can be reallocated by
KVM and another domain might obtain it.

Thanks,
Jean

>  unlock:
>  	mutex_unlock(&smmu_domain->init_mutex);
>  }
> --
> 2.17.1
>
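[A minimal sketch of the devices_lock point raised above, against the helper from the quoted patch. This is only an illustration, not code from the series, and it assumes kvm_pinned_vmid_get() (introduced earlier in the series) can be called with the devices_lock spinlock held:

static int arm_smmu_pinned_vmid_get(struct arm_smmu_domain *smmu_domain)
{
	struct arm_smmu_master *master;
	unsigned long flags;
	int ret = -EINVAL;

	spin_lock_irqsave(&smmu_domain->devices_lock, flags);
	master = list_first_entry_or_null(&smmu_domain->devices,
					  struct arm_smmu_master, domain_head);
	if (master)
		ret = kvm_pinned_vmid_get(master->dev);
	spin_unlock_irqrestore(&smmu_domain->devices_lock, flags);

	return ret;
}
]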
Hi Jean,

> -----Original Message-----
> From: Jean-Philippe Brucker [mailto:jean-philippe@linaro.org]
> Sent: 04 March 2021 17:11
> To: Shameerali Kolothum Thodi <shameerali.kolothum.thodi@huawei.com>
> Cc: linux-arm-kernel@lists.infradead.org; iommu@lists.linux-foundation.org;
> kvmarm@lists.cs.columbia.edu; maz@kernel.org; alex.williamson@redhat.com;
> eric.auger@redhat.com; zhangfei.gao@linaro.org;
> Jonathan Cameron <jonathan.cameron@huawei.com>;
> Zengtao (B) <prime.zeng@hisilicon.com>; linuxarm@openeuler.org
> Subject: Re: [RFC PATCH 4/5] iommu/arm-smmu-v3: Use pinned VMID for
> NESTED stage with BTM
>
> Hi Shameer,
>
> On Mon, Feb 22, 2021 at 03:53:37PM +0000, Shameer Kolothum wrote:
> > If the SMMU supports BTM and the device belongs to NESTED domain
> > with shared pasid table, we need to use the VMID allocated by the
> > KVM for the s2 configuration. Hence, request a pinned VMID from KVM.
> >
> > Signed-off-by: Shameer Kolothum <shameerali.kolothum.thodi@huawei.com>
> > ---
> >  drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c | 49 ++++++++++++++++++++-
> >  1 file changed, 47 insertions(+), 2 deletions(-)
> >
> > diff --git a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
> > index 26bf7da1bcd0..04f83f7c8319 100644
> > --- a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
> > +++ b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
> > @@ -28,6 +28,7 @@
> >  #include <linux/pci.h>
> >  #include <linux/pci-ats.h>
> >  #include <linux/platform_device.h>
> > +#include <linux/kvm_host.h>
> >
> >  #include <linux/amba/bus.h>
> >
> > @@ -2195,6 +2196,33 @@ static void arm_smmu_bitmap_free(unsigned long *map, int idx)
> >  	clear_bit(idx, map);
> >  }
> >
> > +static int arm_smmu_pinned_vmid_get(struct arm_smmu_domain *smmu_domain)
> > +{
> > +	struct arm_smmu_master *master;
> > +
> > +	master = list_first_entry_or_null(&smmu_domain->devices,
> > +					  struct arm_smmu_master, domain_head);
>
> This probably needs to hold devices_lock while using master.

Ok.

> > +	if (!master)
> > +		return -EINVAL;
> > +
> > +	return kvm_pinned_vmid_get(master->dev);
> > +}
> > +
> > +static int arm_smmu_pinned_vmid_put(struct arm_smmu_domain *smmu_domain)
> > +{
> > +	struct arm_smmu_master *master;
> > +
> > +	master = list_first_entry_or_null(&smmu_domain->devices,
> > +					  struct arm_smmu_master, domain_head);
> > +	if (!master)
> > +		return -EINVAL;
> > +
> > +	if (smmu_domain->s2_cfg.vmid)
> > +		return kvm_pinned_vmid_put(master->dev);
> > +
> > +	return 0;
> > +}
> > +
> >  static void arm_smmu_domain_free(struct iommu_domain *domain)
> >  {
> >  	struct arm_smmu_domain *smmu_domain = to_smmu_domain(domain);
> > @@ -2215,8 +2243,11 @@ static void arm_smmu_domain_free(struct iommu_domain *domain)
> >  		mutex_unlock(&arm_smmu_asid_lock);
> >  	}
> >  	if (s2_cfg->set) {
> > -		if (s2_cfg->vmid)
> > -			arm_smmu_bitmap_free(smmu->vmid_map, s2_cfg->vmid);
> > +		if (s2_cfg->vmid) {
> > +			if (!(smmu->features & ARM_SMMU_FEAT_BTM) &&
> > +			    smmu_domain->stage != ARM_SMMU_DOMAIN_NESTED)
> > +				arm_smmu_bitmap_free(smmu->vmid_map, s2_cfg->vmid);
> > +		}
> >  	}
> >
> >  	kfree(smmu_domain);
> > @@ -3199,6 +3230,17 @@ static int arm_smmu_attach_pasid_table(struct iommu_domain *domain,
> >  	    !(smmu->features & ARM_SMMU_FEAT_2_LVL_CDTAB))
> >  		goto out;
> >
> > +	if (smmu->features & ARM_SMMU_FEAT_BTM) {
> > +		ret = arm_smmu_pinned_vmid_get(smmu_domain);
> > +		if (ret < 0)
> > +			goto out;
> > +
> > +		if (smmu_domain->s2_cfg.vmid)
> > +			arm_smmu_bitmap_free(smmu->vmid_map, smmu_domain->s2_cfg.vmid);
> > +
> > +		smmu_domain->s2_cfg.vmid = (u16)ret;
>
> That will require a TLB invalidation on the old VMID, once the STE is
> rewritten.

True. Will add that.

> More generally I think this pinned VMID set conflicts with that of
> stage-2-only domains (which is the default state until a guest attaches a
> PASID table). Say you have one guest using DOMAIN_NESTED without PASID
> table, just DMA to IPA using VMID 0x8000. Now another guest attaches a
> PASID table and obtains the same VMID from KVM. The stage-2 translation
> might use TLB entries from the other guest, no? They'll both create
> stage-2 TLB entries with {StreamWorld=NS-EL1, VMID=0x8000}
>
> It's tempting to allocate all VMIDs through KVM instead, but that will
> force a dependency on KVM to use VFIO_TYPE1_NESTING_IOMMU and might break
> existing users of that extension (though I'm not sure there are any).
> Instead we might need to restrict the SMMU VMID bitmap to match the
> private VMID set in KVM.

Right, that is indeed a problem. I will take a look at this suggestion.

> Besides we probably want to restrict this feature to systems supporting
> VMID16 on both SMMU and CPUs, or at least check that they are compatible.

Yes. Ideally I would like to detect that in the KVM code and enable/disable
the VMID splitting based on that. But I am yet to figure out an easy way to
do that in KVM.

> > +	}
> > +
> >  	smmu_domain->s1_cfg.cdcfg.cdtab_dma = cfg->base_ptr;
> >  	smmu_domain->s1_cfg.s1cdmax = cfg->pasid_bits;
> >  	smmu_domain->s1_cfg.s1fmt = cfg->vendor_data.smmuv3.s1fmt;
> > @@ -3221,6 +3263,7 @@ static int arm_smmu_attach_pasid_table(struct iommu_domain *domain,
> >  static void arm_smmu_detach_pasid_table(struct iommu_domain *domain)
> >  {
> >  	struct arm_smmu_domain *smmu_domain = to_smmu_domain(domain);
> > +	struct arm_smmu_device *smmu = smmu_domain->smmu;
> >  	struct arm_smmu_master *master;
> >  	unsigned long flags;
> >
> > @@ -3237,6 +3280,8 @@ static void arm_smmu_detach_pasid_table(struct iommu_domain *domain)
> >  		arm_smmu_install_ste_for_dev(master);
> >  	spin_unlock_irqrestore(&smmu_domain->devices_lock, flags);
> >
> > +	if (smmu->features & ARM_SMMU_FEAT_BTM)
> > +		arm_smmu_pinned_vmid_put(smmu_domain);
>
> Aliasing here as well: the VMID is still live but can be reallocated by
> KVM and another domain might obtain it.

Ok. Got it.

Thanks for the review,
Shameer

> Thanks,
> Jean
>
> >  unlock:
> >  	mutex_unlock(&smmu_domain->init_mutex);
> >  }
> > --
> > 2.17.1
> >
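[The invalidation acknowledged above could be issued along these lines, sketched with the driver's existing command-queue helpers (arm_smmu_cmdq_issue_cmd()/arm_smmu_cmdq_issue_sync()); where exactly it is called relative to the STE rewrite is up to the series:

static void arm_smmu_tlb_inv_vmid(struct arm_smmu_device *smmu, u16 vmid)
{
	struct arm_smmu_cmdq_ent cmd = {
		.opcode		= CMDQ_OP_TLBI_S12_VMALL,
		.tlbi.vmid	= vmid,
	};

	/* Flush stage-1+2 TLB entries tagged with the old VMID */
	arm_smmu_cmdq_issue_cmd(smmu, &cmd);
	arm_smmu_cmdq_issue_sync(smmu);
}
]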
Hi Jean,

> -----Original Message-----
> From: Jean-Philippe Brucker [mailto:jean-philippe@linaro.org]
> Sent: 04 March 2021 17:11
> To: Shameerali Kolothum Thodi <shameerali.kolothum.thodi@huawei.com>
> Cc: linux-arm-kernel@lists.infradead.org; iommu@lists.linux-foundation.org;
> kvmarm@lists.cs.columbia.edu; maz@kernel.org; alex.williamson@redhat.com;
> eric.auger@redhat.com; zhangfei.gao@linaro.org;
> Jonathan Cameron <jonathan.cameron@huawei.com>;
> Zengtao (B) <prime.zeng@hisilicon.com>; linuxarm@openeuler.org
> Subject: Re: [RFC PATCH 4/5] iommu/arm-smmu-v3: Use pinned VMID for
> NESTED stage with BTM

[...]

> >  	kfree(smmu_domain);
> > @@ -3199,6 +3230,17 @@ static int arm_smmu_attach_pasid_table(struct iommu_domain *domain,
> >  	    !(smmu->features & ARM_SMMU_FEAT_2_LVL_CDTAB))
> >  		goto out;
> >
> > +	if (smmu->features & ARM_SMMU_FEAT_BTM) {
> > +		ret = arm_smmu_pinned_vmid_get(smmu_domain);
> > +		if (ret < 0)
> > +			goto out;
> > +
> > +		if (smmu_domain->s2_cfg.vmid)
> > +			arm_smmu_bitmap_free(smmu->vmid_map, smmu_domain->s2_cfg.vmid);
> > +
> > +		smmu_domain->s2_cfg.vmid = (u16)ret;
>
> That will require a TLB invalidation on the old VMID, once the STE is
> rewritten.
>
> More generally I think this pinned VMID set conflicts with that of
> stage-2-only domains (which is the default state until a guest attaches a
> PASID table). Say you have one guest using DOMAIN_NESTED without PASID
> table, just DMA to IPA using VMID 0x8000. Now another guest attaches a
> PASID table and obtains the same VMID from KVM. The stage-2 translation
> might use TLB entries from the other guest, no? They'll both create
> stage-2 TLB entries with {StreamWorld=NS-EL1, VMID=0x8000}

Now that we are trying to align the KVM VMID allocation algorithm similar to
that of the ASID allocator [1], I attempted to use that for the SMMU pinned
VMID allocation. But the issue you have mentioned above is still valid.

And as a solution what I have tried now is follow what pinned ASID is doing
in SVA,
 - Use xarray for private VMIDs
 - Get pinned VMID from KVM for DOMAIN_NESTED with PASID table
 - If the new pinned VMID is in use by private, then update the private
   VMID (VMID update to a live STE).

This seems to work, but still need to run more tests with this though.

> It's tempting to allocate all VMIDs through KVM instead, but that will
> force a dependency on KVM to use VFIO_TYPE1_NESTING_IOMMU and might break
> existing users of that extension (though I'm not sure there are any).
> Instead we might need to restrict the SMMU VMID bitmap to match the
> private VMID set in KVM.

Another solution I have in mind is, make the new KVM VMID allocator common
between SMMUv3 and KVM. This will help to avoid all the private and shared
VMID splitting, also no need for live updates to STE VMID. One possible
drawback is less number of available KVM VMIDs but with 16 bit VMID space
I am not sure how much that is a concern.

Please let me know your thoughts.

Thanks,
Shameer

[1]. https://lore.kernel.org/kvmarm/20210616155606.2806-1-shameerali.kolothum.thodi@huawei.com/
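[As an illustration of the xarray idea described in the mail above (not code from the posted series; the helper names below are hypothetical), private stage-2 VMIDs could be tracked so that a pinned VMID handed out by KVM can detect and displace a clashing private one, mirroring what the SVA code does for pinned ASIDs:

static DEFINE_XARRAY_ALLOC(arm_smmu_private_vmids);

/* Allocate a private VMID for a plain stage-2 domain */
static int arm_smmu_private_vmid_alloc(struct arm_smmu_domain *smmu_domain,
				       u32 max_vmid)
{
	u32 vmid;
	int ret;

	ret = xa_alloc(&arm_smmu_private_vmids, &vmid, smmu_domain,
		       XA_LIMIT(1, max_vmid), GFP_KERNEL);
	return ret ? ret : vmid;
}

/*
 * When KVM pins a VMID that is already in private use, remove the old
 * owner; the caller would then pick a new private VMID for that domain
 * and update its (live) STE.
 */
static struct arm_smmu_domain *arm_smmu_private_vmid_displace(u16 vmid)
{
	return xa_erase(&arm_smmu_private_vmids, vmid);
}
]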
Hi Shameer,

On Wed, Jul 21, 2021 at 08:54:00AM +0000, Shameerali Kolothum Thodi wrote:
> > More generally I think this pinned VMID set conflicts with that of
> > stage-2-only domains (which is the default state until a guest attaches a
> > PASID table). Say you have one guest using DOMAIN_NESTED without PASID
> > table, just DMA to IPA using VMID 0x8000. Now another guest attaches a
> > PASID table and obtains the same VMID from KVM. The stage-2 translation
> > might use TLB entries from the other guest, no? They'll both create
> > stage-2 TLB entries with {StreamWorld=NS-EL1, VMID=0x8000}
>
> Now that we are trying to align the KVM VMID allocation algorithm similar to
> that of the ASID allocator [1], I attempted to use that for the SMMU pinned
> VMID allocation. But the issue you have mentioned above is still valid.
>
> And as a solution what I have tried now is follow what pinned ASID is doing
> in SVA,
>  - Use xarray for private VMIDs
>  - Get pinned VMID from KVM for DOMAIN_NESTED with PASID table
>  - If the new pinned VMID is in use by private, then update the private
>    VMID (VMID update to a live STE).
>
> This seems to work, but still need to run more tests with this though.
>
> > It's tempting to allocate all VMIDs through KVM instead, but that will
> > force a dependency on KVM to use VFIO_TYPE1_NESTING_IOMMU and might break
> > existing users of that extension (though I'm not sure there are any).
> > Instead we might need to restrict the SMMU VMID bitmap to match the
> > private VMID set in KVM.
>
> Another solution I have in mind is, make the new KVM VMID allocator common
> between SMMUv3 and KVM. This will help to avoid all the private and shared
> VMID splitting, also no need for live updates to STE VMID. One possible
> drawback is less number of available KVM VMIDs but with 16 bit VMID space
> I am not sure how much that is a concern.

Yes I think that works too. In practice there shouldn't be many VMIDs on
the SMMU side, the feature's only enabled when a user wants to assign
devices with nesting translation (unlike ASIDs where each device in the
system gets a private ASID by default).

Note that you still need to pin all VMIDs used by the SMMU, otherwise
you'll have to update the STE after rollover.

The problem we have with VFIO_TYPE1_NESTING_IOMMU might be solved by the
upcoming deprecation of VFIO_*_IOMMU [2]. We need a specific sequence from
userspace:
1. Attach VFIO group to KVM (KVM_DEV_VFIO_GROUP_ADD)
2. Create nesting IOMMU domain and attach the group to it
   (VFIO_GROUP_SET_CONTAINER, VFIO_SET_IOMMU becomes IOMMU_IOASID_ALLOC,
   VFIO_DEVICE_ATTACH_IOASID)

Currently QEMU does 2 then 1, which would cause the SMMU to allocate a
separate VMID. If we wanted to extend VFIO_TYPE1_NESTING_IOMMU with PASID
tables we'd need to mandate 1-2 and may break existing users. In the new
design we can require from the start that creating a nesting IOMMU
container through /dev/iommu *must* come with a KVM context, that way
we're sure to reuse the existing VMID.

Thanks,
Jean

[2] https://lore.kernel.org/linux-iommu/BN9PR11MB5433B1E4AE5B0480369F97178C189@BN9PR11MB5433.namprd11.prod.outlook.com/
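[For reference, the ordering described above maps onto today's VFIO type1/KVM ioctls roughly as follows; this is a hedged userspace illustration only (not the future /dev/iommu interface), with error handling omitted:

#include <linux/kvm.h>
#include <linux/vfio.h>
#include <sys/ioctl.h>

/* kvm_vfio_dev_fd comes from KVM_CREATE_DEVICE(KVM_DEV_TYPE_VFIO) */
static void attach_group_for_nesting(int kvm_vfio_dev_fd, int container_fd,
				     int group_fd)
{
	struct kvm_device_attr attr = {
		.group	= KVM_DEV_VFIO_GROUP,
		.attr	= KVM_DEV_VFIO_GROUP_ADD,
		.addr	= (__u64)(unsigned long)&group_fd,
	};

	/* 1. Tell KVM about the group first, so the SMMU can reuse KVM's VMID */
	ioctl(kvm_vfio_dev_fd, KVM_SET_DEVICE_ATTR, &attr);

	/* 2. Only then build the nesting IOMMU domain for the group */
	ioctl(group_fd, VFIO_GROUP_SET_CONTAINER, &container_fd);
	ioctl(container_fd, VFIO_SET_IOMMU, VFIO_TYPE1_NESTING_IOMMU);
}
]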
> -----Original Message-----
> From: Jean-Philippe Brucker [mailto:jean-philippe@linaro.org]
> Sent: 22 July 2021 17:46
> To: Shameerali Kolothum Thodi <shameerali.kolothum.thodi@huawei.com>
> Cc: linux-arm-kernel@lists.infradead.org; iommu@lists.linux-foundation.org;
> kvmarm@lists.cs.columbia.edu; maz@kernel.org; alex.williamson@redhat.com;
> eric.auger@redhat.com; zhangfei.gao@linaro.org;
> Jonathan Cameron <jonathan.cameron@huawei.com>;
> Zengtao (B) <prime.zeng@hisilicon.com>; linuxarm@openeuler.org;
> Linuxarm <linuxarm@huawei.com>
> Subject: [Linuxarm] Re: [RFC PATCH 4/5] iommu/arm-smmu-v3: Use pinned
> VMID for NESTED stage with BTM
>
> Hi Shameer,
>
> On Wed, Jul 21, 2021 at 08:54:00AM +0000, Shameerali Kolothum Thodi wrote:
> > > More generally I think this pinned VMID set conflicts with that of
> > > stage-2-only domains (which is the default state until a guest attaches a
> > > PASID table). Say you have one guest using DOMAIN_NESTED without PASID
> > > table, just DMA to IPA using VMID 0x8000. Now another guest attaches a
> > > PASID table and obtains the same VMID from KVM. The stage-2 translation
> > > might use TLB entries from the other guest, no? They'll both create
> > > stage-2 TLB entries with {StreamWorld=NS-EL1, VMID=0x8000}
> >
> > Now that we are trying to align the KVM VMID allocation algorithm similar to
> > that of the ASID allocator [1], I attempted to use that for the SMMU pinned
> > VMID allocation. But the issue you have mentioned above is still valid.
> >
> > And as a solution what I have tried now is follow what pinned ASID is doing
> > in SVA,
> >  - Use xarray for private VMIDs
> >  - Get pinned VMID from KVM for DOMAIN_NESTED with PASID table
> >  - If the new pinned VMID is in use by private, then update the private
> >    VMID (VMID update to a live STE).
> >
> > This seems to work, but still need to run more tests with this though.
> >
> > > It's tempting to allocate all VMIDs through KVM instead, but that will
> > > force a dependency on KVM to use VFIO_TYPE1_NESTING_IOMMU and might break
> > > existing users of that extension (though I'm not sure there are any).
> > > Instead we might need to restrict the SMMU VMID bitmap to match the
> > > private VMID set in KVM.
> >
> > Another solution I have in mind is, make the new KVM VMID allocator common
> > between SMMUv3 and KVM. This will help to avoid all the private and shared
> > VMID splitting, also no need for live updates to STE VMID. One possible
> > drawback is less number of available KVM VMIDs but with 16 bit VMID space
> > I am not sure how much that is a concern.
>
> Yes I think that works too. In practice there shouldn't be many VMIDs on
> the SMMU side, the feature's only enabled when a user wants to assign
> devices with nesting translation (unlike ASIDs where each device in the
> system gets a private ASID by default).

Ok. What about implementations that support only stage 2? Do we need a
private VMID allocator for those or can we use the same common KVM VMID
allocator?

> Note that you still need to pin all VMIDs used by the SMMU, otherwise
> you'll have to update the STE after rollover.

Sure.

> The problem we have with VFIO_TYPE1_NESTING_IOMMU might be solved by the
> upcoming deprecation of VFIO_*_IOMMU [2]. We need a specific sequence from
> userspace:
> 1. Attach VFIO group to KVM (KVM_DEV_VFIO_GROUP_ADD)
> 2. Create nesting IOMMU domain and attach the group to it
>    (VFIO_GROUP_SET_CONTAINER, VFIO_SET_IOMMU becomes IOMMU_IOASID_ALLOC,
>    VFIO_DEVICE_ATTACH_IOASID)
>
> Currently QEMU does 2 then 1, which would cause the SMMU to allocate a
> separate VMID.

Yes. I have observed this with my current implementation. I have a check to
see whether the private S2 config VMID belongs to the same domain s2_cfg, and
in that case skip the live update to the STE VMID.

> If we wanted to extend VFIO_TYPE1_NESTING_IOMMU with PASID
> tables we'd need to mandate 1-2 and may break existing users. In the new
> design we can require from the start that creating a nesting IOMMU
> container through /dev/iommu *must* come with a KVM context, that way
> we're sure to reuse the existing VMID.

Ok. That helps.

Thanks,
Shameer
diff --git a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
index 26bf7da1bcd0..04f83f7c8319 100644
--- a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
+++ b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
@@ -28,6 +28,7 @@
 #include <linux/pci.h>
 #include <linux/pci-ats.h>
 #include <linux/platform_device.h>
+#include <linux/kvm_host.h>

 #include <linux/amba/bus.h>

@@ -2195,6 +2196,33 @@ static void arm_smmu_bitmap_free(unsigned long *map, int idx)
 	clear_bit(idx, map);
 }

+static int arm_smmu_pinned_vmid_get(struct arm_smmu_domain *smmu_domain)
+{
+	struct arm_smmu_master *master;
+
+	master = list_first_entry_or_null(&smmu_domain->devices,
+					  struct arm_smmu_master, domain_head);
+	if (!master)
+		return -EINVAL;
+
+	return kvm_pinned_vmid_get(master->dev);
+}
+
+static int arm_smmu_pinned_vmid_put(struct arm_smmu_domain *smmu_domain)
+{
+	struct arm_smmu_master *master;
+
+	master = list_first_entry_or_null(&smmu_domain->devices,
+					  struct arm_smmu_master, domain_head);
+	if (!master)
+		return -EINVAL;
+
+	if (smmu_domain->s2_cfg.vmid)
+		return kvm_pinned_vmid_put(master->dev);
+
+	return 0;
+}
+
 static void arm_smmu_domain_free(struct iommu_domain *domain)
 {
 	struct arm_smmu_domain *smmu_domain = to_smmu_domain(domain);
@@ -2215,8 +2243,11 @@ static void arm_smmu_domain_free(struct iommu_domain *domain)
 		mutex_unlock(&arm_smmu_asid_lock);
 	}
 	if (s2_cfg->set) {
-		if (s2_cfg->vmid)
-			arm_smmu_bitmap_free(smmu->vmid_map, s2_cfg->vmid);
+		if (s2_cfg->vmid) {
+			if (!(smmu->features & ARM_SMMU_FEAT_BTM) &&
+			    smmu_domain->stage != ARM_SMMU_DOMAIN_NESTED)
+				arm_smmu_bitmap_free(smmu->vmid_map, s2_cfg->vmid);
+		}
 	}

 	kfree(smmu_domain);
@@ -3199,6 +3230,17 @@ static int arm_smmu_attach_pasid_table(struct iommu_domain *domain,
 	    !(smmu->features & ARM_SMMU_FEAT_2_LVL_CDTAB))
 		goto out;

+	if (smmu->features & ARM_SMMU_FEAT_BTM) {
+		ret = arm_smmu_pinned_vmid_get(smmu_domain);
+		if (ret < 0)
+			goto out;
+
+		if (smmu_domain->s2_cfg.vmid)
+			arm_smmu_bitmap_free(smmu->vmid_map, smmu_domain->s2_cfg.vmid);
+
+		smmu_domain->s2_cfg.vmid = (u16)ret;
+	}
+
 	smmu_domain->s1_cfg.cdcfg.cdtab_dma = cfg->base_ptr;
 	smmu_domain->s1_cfg.s1cdmax = cfg->pasid_bits;
 	smmu_domain->s1_cfg.s1fmt = cfg->vendor_data.smmuv3.s1fmt;
@@ -3221,6 +3263,7 @@ static int arm_smmu_attach_pasid_table(struct iommu_domain *domain,
 static void arm_smmu_detach_pasid_table(struct iommu_domain *domain)
 {
 	struct arm_smmu_domain *smmu_domain = to_smmu_domain(domain);
+	struct arm_smmu_device *smmu = smmu_domain->smmu;
 	struct arm_smmu_master *master;
 	unsigned long flags;

@@ -3237,6 +3280,8 @@ static void arm_smmu_detach_pasid_table(struct iommu_domain *domain)
 		arm_smmu_install_ste_for_dev(master);
 	spin_unlock_irqrestore(&smmu_domain->devices_lock, flags);

+	if (smmu->features & ARM_SMMU_FEAT_BTM)
+		arm_smmu_pinned_vmid_put(smmu_domain);
 unlock:
 	mutex_unlock(&smmu_domain->init_mutex);
 }
If the SMMU supports BTM and the device belongs to NESTED domain
with shared pasid table, we need to use the VMID allocated by the
KVM for the s2 configuration. Hence, request a pinned VMID from KVM.

Signed-off-by: Shameer Kolothum <shameerali.kolothum.thodi@huawei.com>
---
 drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c | 49 ++++++++++++++++++++-
 1 file changed, 47 insertions(+), 2 deletions(-)