
dma-direct: relax addressability checks in dma_direct_supported

Message ID 20200203171601.539254-1-hch@lst.de (mailing list archive)
State Mainlined
Commit 91ef26f914171cf753330f13724fd9142b5b1640
Series dma-direct: relax addressability checks in dma_direct_supported

Commit Message

Christoph Hellwig Feb. 3, 2020, 5:16 p.m. UTC
dma_direct_supported tries to find the minimum addressable bitmask
based on the end pfn and optional magic that architectures can use
to communicate the size of the magic ZONE_DMA that can be used
for bounce buffering.  But between DMA offsets that can change
per device (or sometimes even per region), the fact that ZONE_DMA
isn't even guaranteed to cover the lowest addresses, and the lack
of proper interfaces to the MM code, this check fails for at least
one arm subarchitecture.

As all the legacy DMA implementations have supported 32-bit DMA
masks, and 32-bit masks are guaranteed to always work by the API
contract (using bounce buffers if needed), we can short-circuit the
complicated check and always return true without breaking existing
assumptions.  Hopefully we can properly clean up the interaction
with the arch-defined zones and the bootmem allocator eventually.

Fixes: ad3c7b18c5b3 ("arm: use swiotlb for bounce buffering on LPAE configs")
Reported-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
Tested-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
---
 kernel/dma/direct.c | 24 +++++++++++-------------
 1 file changed, 11 insertions(+), 13 deletions(-)
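
For context, the 32-bit guarantee the patch leans on is the contract
ordinary drivers already code against: a driver requests a 32-bit mask
and the DMA core must either satisfy it natively or fall back to bounce
buffering.  A minimal probe-time sketch of that usage follows (the
example_probe() wrapper is hypothetical; dma_set_mask_and_coherent()
and DMA_BIT_MASK() are the real kernel API):

#include <linux/dma-mapping.h>

static int example_probe(struct device *dev)
{
        /*
         * Ask for 32-bit addressing for both streaming and coherent
         * mappings.  Per the DMA API contract this must always be
         * satisfiable; dma-direct may transparently bounce through
         * swiotlb when the device cannot reach all of memory.
         */
        if (dma_set_mask_and_coherent(dev, DMA_BIT_MASK(32)))
                return -EIO;
        return 0;
}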

Comments

Peter Ujfalusi Feb. 4, 2020, 9:34 a.m. UTC | #1
Hi Christoph,

On 03/02/2020 19.16, Christoph Hellwig wrote:
> dma_direct_supported tries to find the minimum addressable bitmask
> based on the end pfn and optional magic that architectures can use
> to communicate the size of the magic ZONE_DMA that can be used
> for bounce buffering.  But between DMA offsets that can change
> per device (or sometimes even per region), the fact that ZONE_DMA
> isn't even guaranteed to cover the lowest addresses, and the lack
> of proper interfaces to the MM code, this check fails for at least
> one arm subarchitecture.
> 
> As all the legacy DMA implementations have supported 32-bit DMA
> masks, and 32-bit masks are guaranteed to always work by the API
> contract (using bounce buffers if needed), we can short-circuit the
> complicated check and always return true without breaking existing
> assumptions.  Hopefully we can properly clean up the interaction
> with the arch-defined zones and the bootmem allocator eventually.
> 
> Fixes: ad3c7b18c5b3 ("arm: use swiotlb for bounce buffering on LPAE configs")
> Reported-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
> Signed-off-by: Christoph Hellwig <hch@lst.de>
> Tested-by: Peter Ujfalusi <peter.ujfalusi@ti.com>

Thank you for the proper patch; I can reaffirm my Tested-by.
We have also tested remoteproc on k2, which had been broken as well.

Thanks again,
- Péter


Patch

diff --git a/kernel/dma/direct.c b/kernel/dma/direct.c
index 04f308a47fc3..efab894c1679 100644
--- a/kernel/dma/direct.c
+++ b/kernel/dma/direct.c
@@ -464,28 +464,26 @@ int dma_direct_mmap(struct device *dev, struct vm_area_struct *vma,
 }
 #endif /* CONFIG_MMU */
 
-/*
- * Because 32-bit DMA masks are so common we expect every architecture to be
- * able to satisfy them - either by not supporting more physical memory, or by
- * providing a ZONE_DMA32.  If neither is the case, the architecture needs to
- * use an IOMMU instead of the direct mapping.
- */
 int dma_direct_supported(struct device *dev, u64 mask)
 {
-	u64 min_mask;
-
-	if (IS_ENABLED(CONFIG_ZONE_DMA))
-		min_mask = DMA_BIT_MASK(zone_dma_bits);
-	else
-		min_mask = DMA_BIT_MASK(32);
+	u64 min_mask = (max_pfn - 1) << PAGE_SHIFT;
 
-	min_mask = min_t(u64, min_mask, (max_pfn - 1) << PAGE_SHIFT);
+	/*
+	 * Because 32-bit DMA masks are so common we expect every architecture
+	 * to be able to satisfy them - either by not supporting more physical
+	 * memory, or by providing a ZONE_DMA32.  If neither is the case, the
+	 * architecture needs to use an IOMMU instead of the direct mapping.
+	 */
+	if (mask >= DMA_BIT_MASK(32))
+		return 1;
 
 	/*
 	 * This check needs to be against the actual bit mask value, so
 	 * use __phys_to_dma() here so that the SME encryption mask isn't
 	 * part of the check.
 	 */
+	if (IS_ENABLED(CONFIG_ZONE_DMA))
+		min_mask = min_t(u64, min_mask, DMA_BIT_MASK(zone_dma_bits));
 	return mask >= __phys_to_dma(dev, min_mask);
 }
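
To see why an offset-unaware minimum-mask computation can reject a
perfectly serviceable 32-bit device, consider a Keystone 2-style layout
where RAM sits at a high physical address and devices reach it through a
fixed bus offset.  The following is a simplified standalone model of the
address arithmetic only (the real kernel check also folds in
zone_dma_bits, and all addresses here are illustrative, not taken from
the patch or from actual Keystone 2 hardware):

#include <stdint.h>
#include <stdio.h>

#define RAM_BASE        0x800000000ULL  /* physical start of RAM */
#define RAM_SIZE        0x080000000ULL  /* 2 GiB of RAM */
#define DMA_OFFSET      0x800000000ULL  /* phys-to-bus translation */

/* Simplified stand-in for a per-device __phys_to_dma(). */
static uint64_t phys_to_dma(uint64_t phys)
{
        return phys - DMA_OFFSET;
}

int main(void)
{
        uint64_t mask = 0xffffffffULL;            /* DMA_BIT_MASK(32) */
        uint64_t end = RAM_BASE + RAM_SIZE - 1;   /* last byte of RAM */

        /* Comparing the mask against the raw physical end of memory
         * rejects the device: 0x87fffffff needs more than 32 bits. */
        printf("offset-unaware check: %s\n",
               mask >= end ? "ok" : "rejected");

        /* Comparing against the bus address the device actually emits
         * succeeds: 0x7fffffff fits in 32 bits.  This is the class of
         * configuration the unconditional `return 1` now covers. */
        printf("offset-aware check:   %s\n",
               mask >= phys_to_dma(end) ? "ok" : "rejected");
        return 0;
}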