| Message ID | 2b2595de703c60a772ebcffe248d0cf036143e6a.1564414114.git.robin.murphy@arm.com |
|---|---|
| State | Mainlined |
| Commit | 8af23fad626173eed7cc02733874d4124049bd5e |
| Series | iommu/dma: Handle MSI mappings separately |
On 2019-07-29 16:32, Robin Murphy wrote:
> MSI pages must always be mapped into a device's *current* domain, which
> *might* be the default DMA domain, but might instead be a VFIO domain
> with its own MSI cookie. This subtlety got accidentally lost in the
> streamlining of __iommu_dma_map(), but rather than reintroduce more
> complexity and/or special-casing, it turns out neater to just split this
> path out entirely.
>
> Since iommu_dma_get_msi_page() already duplicates much of what
> __iommu_dma_map() does, it can easily just make the allocation and
> mapping calls directly as well. That way we can further streamline the
> helper back to exclusively operating on DMA domains.
>
> Fixes: b61d271e59d7 ("iommu/dma: Move domain lookup into __iommu_dma_{map,unmap}")
> Reported-by: Shameer Kolothum <shameerali.kolothum.thodi@huawei.com>
> Reported-by: Andre Przywara <andre.przywara@arm.com>
> Signed-off-by: Robin Murphy <robin.murphy@arm.com>

With this patch, my TX2 is back in business with a bnx2x device passed
to a guest.

FWIW:

Tested-by: Marc Zyngier <maz@kernel.org>

Thanks,

        M.
On Mon, 29 Jul 2019 16:32:38 +0100
Robin Murphy <robin.murphy@arm.com> wrote:

Hi,

> MSI pages must always be mapped into a device's *current* domain, which
> *might* be the default DMA domain, but might instead be a VFIO domain
> with its own MSI cookie. This subtlety got accidentally lost in the
> streamlining of __iommu_dma_map(), but rather than reintroduce more
> complexity and/or special-casing, it turns out neater to just split this
> path out entirely.
>
> Since iommu_dma_get_msi_page() already duplicates much of what
> __iommu_dma_map() does, it can easily just make the allocation and
> mapping calls directly as well. That way we can further streamline the
> helper back to exclusively operating on DMA domains.
>
> Fixes: b61d271e59d7 ("iommu/dma: Move domain lookup into __iommu_dma_{map,unmap}")
> Reported-by: Shameer Kolothum <shameerali.kolothum.thodi@huawei.com>
> Reported-by: Andre Przywara <andre.przywara@arm.com>
> Signed-off-by: Robin Murphy <robin.murphy@arm.com>

Thanks, that indeed fixes the passthrough problem for me; the NVMe and
SATA controllers can now happily receive MSIs again.

Tested-by: Andre Przywara <andre.przywara@arm.com>

Cheers,
Andre.
> ---
>  drivers/iommu/dma-iommu.c | 17 ++++++++++-------
>  1 file changed, 10 insertions(+), 7 deletions(-)
>
> diff --git a/drivers/iommu/dma-iommu.c b/drivers/iommu/dma-iommu.c
> index a7f9c3edbcb2..6441197a75ea 100644
> --- a/drivers/iommu/dma-iommu.c
> +++ b/drivers/iommu/dma-iommu.c
> @@ -459,13 +459,11 @@ static dma_addr_t __iommu_dma_map(struct device *dev, phys_addr_t phys,
>  {
>  	struct iommu_domain *domain = iommu_get_dma_domain(dev);
>  	struct iommu_dma_cookie *cookie = domain->iova_cookie;
> -	size_t iova_off = 0;
> +	struct iova_domain *iovad = &cookie->iovad;
> +	size_t iova_off = iova_offset(iovad, phys);
>  	dma_addr_t iova;
>
> -	if (cookie->type == IOMMU_DMA_IOVA_COOKIE) {
> -		iova_off = iova_offset(&cookie->iovad, phys);
> -		size = iova_align(&cookie->iovad, size + iova_off);
> -	}
> +	size = iova_align(iovad, size + iova_off);
>
>  	iova = iommu_dma_alloc_iova(domain, size, dma_get_mask(dev), dev);
>  	if (!iova)
> @@ -1147,16 +1145,21 @@ static struct iommu_dma_msi_page *iommu_dma_get_msi_page(struct device *dev,
>  	if (!msi_page)
>  		return NULL;
>
> -	iova = __iommu_dma_map(dev, msi_addr, size, prot);
> -	if (iova == DMA_MAPPING_ERROR)
> +	iova = iommu_dma_alloc_iova(domain, size, dma_get_mask(dev), dev);
> +	if (!iova)
>  		goto out_free_page;
>
> +	if (iommu_map(domain, iova, msi_addr, size, prot))
> +		goto out_free_iova;
> +
>  	INIT_LIST_HEAD(&msi_page->list);
>  	msi_page->phys = msi_addr;
>  	msi_page->iova = iova;
>  	list_add(&msi_page->list, &cookie->msi_page_list);
>  	return msi_page;
>
> +out_free_iova:
> +	iommu_dma_free_iova(cookie, iova, size);
>  out_free_page:
>  	kfree(msi_page);
>  	return NULL;
> -----Original Message-----
> From: linux-arm-kernel [mailto:linux-arm-kernel-bounces@lists.infradead.org]
> On Behalf Of Robin Murphy
> Sent: 29 July 2019 16:33
> To: joro@8bytes.org
> Cc: maz@kernel.org; iommu@lists.linux-foundation.org; Shameerali Kolothum
> Thodi <shameerali.kolothum.thodi@huawei.com>;
> linux-arm-kernel@lists.infradead.org; Andre Przywara
> <andre.przywara@arm.com>
> Subject: [PATCH] iommu/dma: Handle MSI mappings separately
>
> MSI pages must always be mapped into a device's *current* domain, which
> *might* be the default DMA domain, but might instead be a VFIO domain
> with its own MSI cookie. This subtlety got accidentally lost in the
> streamlining of __iommu_dma_map(), but rather than reintroduce more
> complexity and/or special-casing, it turns out neater to just split this
> path out entirely.
>
> Since iommu_dma_get_msi_page() already duplicates much of what
> __iommu_dma_map() does, it can easily just make the allocation and
> mapping calls directly as well. That way we can further streamline the
> helper back to exclusively operating on DMA domains.
>
> Fixes: b61d271e59d7 ("iommu/dma: Move domain lookup into
> __iommu_dma_{map,unmap}")
> Reported-by: Shameer Kolothum <shameerali.kolothum.thodi@huawei.com>
> Reported-by: Andre Przywara <andre.przywara@arm.com>
> Signed-off-by: Robin Murphy <robin.murphy@arm.com>

Thanks. This fixes the VF assignment issue on D06 as well. FWIW,

Tested-by: Shameer Kolothum <shameerali.kolothum.thodi@huawei.com>

Shameer
On Mon, Jul 29, 2019 at 04:32:38PM +0100, Robin Murphy wrote:
> MSI pages must always be mapped into a device's *current* domain, which
> *might* be the default DMA domain, but might instead be a VFIO domain
> with its own MSI cookie. This subtlety got accidentally lost in the
> streamlining of __iommu_dma_map(), but rather than reintroduce more
> complexity and/or special-casing, it turns out neater to just split this
> path out entirely.
>
> Since iommu_dma_get_msi_page() already duplicates much of what
> __iommu_dma_map() does, it can easily just make the allocation and
> mapping calls directly as well. That way we can further streamline the
> helper back to exclusively operating on DMA domains.
>
> Fixes: b61d271e59d7 ("iommu/dma: Move domain lookup into __iommu_dma_{map,unmap}")
> Reported-by: Shameer Kolothum <shameerali.kolothum.thodi@huawei.com>
> Reported-by: Andre Przywara <andre.przywara@arm.com>
> Signed-off-by: Robin Murphy <robin.murphy@arm.com>

Hmm. I remember proposing this patch and you didn't like it because we
could also have MSIs for a !IOMMU_DMA_IOVA_COOKIE cookie type. Or did
we talk past each other?

Note that if this change turns out to be valid we should also clean up
the iommu_dma_free_iova() side.
On 30/07/2019 07:28, Christoph Hellwig wrote:
> On Mon, Jul 29, 2019 at 04:32:38PM +0100, Robin Murphy wrote:
>> MSI pages must always be mapped into a device's *current* domain, which
>> *might* be the default DMA domain, but might instead be a VFIO domain
>> with its own MSI cookie. This subtlety got accidentally lost in the
>> streamlining of __iommu_dma_map(), but rather than reintroduce more
>> complexity and/or special-casing, it turns out neater to just split this
>> path out entirely.
>>
>> Since iommu_dma_get_msi_page() already duplicates much of what
>> __iommu_dma_map() does, it can easily just make the allocation and
>> mapping calls directly as well. That way we can further streamline the
>> helper back to exclusively operating on DMA domains.
>>
>> Fixes: b61d271e59d7 ("iommu/dma: Move domain lookup into __iommu_dma_{map,unmap}")
>> Reported-by: Shameer Kolothum <shameerali.kolothum.thodi@huawei.com>
>> Reported-by: Andre Przywara <andre.przywara@arm.com>
>> Signed-off-by: Robin Murphy <robin.murphy@arm.com>
>
> Hmm. I remember proposing this patch and you didn't like it because
> we could also have MSIs for a !IOMMU_DMA_IOVA_COOKIE cookie type.
> Or did we talk past each other?

Do you have a pointer? That sparks the vaguest of memories, but I can't
seem to turn anything up in my inbox. If that was my objection, though,
it sounds like your patch was probably trying to go a step or two
further than this one.

> Note that if this change turns out to be valid we should also
> clean up the iommu_dma_free_iova() side.

We're not touching the iommu_dma_{alloc,free}_iova() path here; those
are designed to cope with both types of cookie, and I think that's a
reasonable abstraction to keep. This is just getting rid of the
asymmetry - and now bug - caused by trying to keep the MSI page flow
going through a special case in __iommu_dma_map() despite that having
evolved into a more specific DMA domain fastpath (there's no
corresponding unmap special case since MSI mappings just persist and
get recycled until the domain is destroyed).

Robin.
On Tue, Jul 30, 2019 at 11:43:25AM +0100, Robin Murphy wrote:
>> Hmm. I remember proposing this patch and you didn't like it because
>> we could also have MSIs for a !IOMMU_DMA_IOVA_COOKIE cookie type.
>> Or did we talk past each other?
>
> Do you have a pointer? That sparks the vaguest of memories, but I can't
> seem to turn anything up in my inbox. If that was my objection, though,
> it sounds like your patch was probably trying to go a step or two
> further than this one.

I can't find anything either. This must have been a git tree I passed
around to you before posting it.

>> Note that if this change turns out to be valid we should also
>> clean up the iommu_dma_free_iova() side.
>
> We're not touching the iommu_dma_{alloc,free}_iova() path here; those
> are designed to cope with both types of cookie, and I think that's a
> reasonable abstraction to keep. This is just getting rid of the
> asymmetry - and now bug - caused by trying to keep the MSI page flow
> going through a special case in __iommu_dma_map() despite that having
> evolved into a more specific DMA domain fastpath (there's no
> corresponding unmap special case since MSI mappings just persist and
> get recycled until the domain is destroyed).

Ok, that might have been the issue with my earlier patch..
On Mon, Jul 29, 2019 at 04:32:38PM +0100, Robin Murphy wrote:
>  drivers/iommu/dma-iommu.c | 17 ++++++++++-------
>  1 file changed, 10 insertions(+), 7 deletions(-)

Applied to iommu/fixes, thanks Robin.
diff --git a/drivers/iommu/dma-iommu.c b/drivers/iommu/dma-iommu.c
index a7f9c3edbcb2..6441197a75ea 100644
--- a/drivers/iommu/dma-iommu.c
+++ b/drivers/iommu/dma-iommu.c
@@ -459,13 +459,11 @@ static dma_addr_t __iommu_dma_map(struct device *dev, phys_addr_t phys,
 {
 	struct iommu_domain *domain = iommu_get_dma_domain(dev);
 	struct iommu_dma_cookie *cookie = domain->iova_cookie;
-	size_t iova_off = 0;
+	struct iova_domain *iovad = &cookie->iovad;
+	size_t iova_off = iova_offset(iovad, phys);
 	dma_addr_t iova;
 
-	if (cookie->type == IOMMU_DMA_IOVA_COOKIE) {
-		iova_off = iova_offset(&cookie->iovad, phys);
-		size = iova_align(&cookie->iovad, size + iova_off);
-	}
+	size = iova_align(iovad, size + iova_off);
 
 	iova = iommu_dma_alloc_iova(domain, size, dma_get_mask(dev), dev);
 	if (!iova)
@@ -1147,16 +1145,21 @@ static struct iommu_dma_msi_page *iommu_dma_get_msi_page(struct device *dev,
 	if (!msi_page)
 		return NULL;
 
-	iova = __iommu_dma_map(dev, msi_addr, size, prot);
-	if (iova == DMA_MAPPING_ERROR)
+	iova = iommu_dma_alloc_iova(domain, size, dma_get_mask(dev), dev);
+	if (!iova)
 		goto out_free_page;
 
+	if (iommu_map(domain, iova, msi_addr, size, prot))
+		goto out_free_iova;
+
 	INIT_LIST_HEAD(&msi_page->list);
 	msi_page->phys = msi_addr;
 	msi_page->iova = iova;
 	list_add(&msi_page->list, &cookie->msi_page_list);
 	return msi_page;
 
+out_free_iova:
+	iommu_dma_free_iova(cookie, iova, size);
 out_free_page:
 	kfree(msi_page);
 	return NULL;
MSI pages must always be mapped into a device's *current* domain, which
*might* be the default DMA domain, but might instead be a VFIO domain
with its own MSI cookie. This subtlety got accidentally lost in the
streamlining of __iommu_dma_map(), but rather than reintroduce more
complexity and/or special-casing, it turns out neater to just split this
path out entirely.

Since iommu_dma_get_msi_page() already duplicates much of what
__iommu_dma_map() does, it can easily just make the allocation and
mapping calls directly as well. That way we can further streamline the
helper back to exclusively operating on DMA domains.

Fixes: b61d271e59d7 ("iommu/dma: Move domain lookup into __iommu_dma_{map,unmap}")
Reported-by: Shameer Kolothum <shameerali.kolothum.thodi@huawei.com>
Reported-by: Andre Przywara <andre.przywara@arm.com>
Signed-off-by: Robin Murphy <robin.murphy@arm.com>
---
 drivers/iommu/dma-iommu.c | 17 ++++++++++-------
 1 file changed, 10 insertions(+), 7 deletions(-)