
[07/12] dma-direct: simplify the use of the atomic pool logic in dma_direct_alloc

Message ID 20231016054755.915155-8-hch@lst.de (mailing list archive)
State Not Applicable
Series [01/12] riscv: RISCV_NONSTANDARD_CACHE_OPS shouldn't depend on RISCV_DMA_NONCOHERENT

Checks

Context Check Description
netdev/series_format warning Series does not have a cover letter
netdev/tree_selection success Guessed tree name to be net-next
netdev/fixes_present success Fixes tag not required for -next series
netdev/header_inline success No static functions without inline keyword in header files
netdev/build_32bit success Errors and warnings before: 1362 this patch: 1362
netdev/cc_maintainers warning 1 maintainers not CCed: aou@eecs.berkeley.edu
netdev/build_clang success Errors and warnings before: 1386 this patch: 1386
netdev/verify_signedoff success Signed-off-by tag matches author and committer
netdev/deprecated_api success None detected
netdev/check_selftest success No net selftest shell script
netdev/verify_fixes success No Fixes tag
netdev/build_allmodconfig_warn success Errors and warnings before: 1387 this patch: 1387
netdev/checkpatch success total: 0 errors, 0 warnings, 0 checks, 37 lines checked
netdev/kdoc success Errors and warnings before: 0 this patch: 0
netdev/source_inline success Was 0 now: 0

Commit Message

Christoph Hellwig Oct. 16, 2023, 5:47 a.m. UTC
The logic in dma_direct_alloc for deciding when to use the atomic pool
vs. remapping grew a bit unreadable.  Consolidate it into a single check,
and clean up the set_uncached vs. remap logic a bit as well.

Signed-off-by: Christoph Hellwig <hch@lst.de>
---
 kernel/dma/direct.c | 25 ++++++++++---------------
 1 file changed, 10 insertions(+), 15 deletions(-)
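
As a reader's aid for the consolidated condition below: after this change the
atomic pool is used whenever the allocation would otherwise need a blocking
step (remapping or in-place decryption) but blocking is not allowed.  The
following standalone sketch is plain userspace C, not kernel code;
use_atomic_pool(), force_unencrypted and can_block are illustrative stand-ins,
with !can_block loosely standing in for dma_direct_use_pool()'s gfp-based
check:

#include <stdbool.h>
#include <stdio.h>

/*
 * Sketch of the consolidated check this patch introduces:
 *   (remap || force_dma_unencrypted(dev)) && dma_direct_use_pool(dev, gfp)
 */
static bool use_atomic_pool(bool remap, bool force_unencrypted, bool can_block)
{
	return (remap || force_unencrypted) && !can_block;
}

int main(void)
{
	/* non-coherent device that needs a remap, caller cannot block */
	printf("%d\n", use_atomic_pool(true, false, false));	/* 1: dip into the pool */

	/* no remap or decryption needed, blocking allowed */
	printf("%d\n", use_atomic_pool(false, false, true));	/* 0: plain page allocation */
	return 0;
}
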

Comments

Robin Murphy Oct. 16, 2023, 11:58 a.m. UTC | #1
On 16/10/2023 6:47 am, Christoph Hellwig wrote:
> The logic in dma_direct_alloc for deciding when to use the atomic pool
> vs. remapping grew a bit unreadable.  Consolidate it into a single check,
> and clean up the set_uncached vs. remap logic a bit as well.

Reviewed-by: Robin Murphy <robin.murphy@arm.com>

> Signed-off-by: Christoph Hellwig <hch@lst.de>
> ---
>   kernel/dma/direct.c | 25 ++++++++++---------------
>   1 file changed, 10 insertions(+), 15 deletions(-)
> 
> diff --git a/kernel/dma/direct.c b/kernel/dma/direct.c
> index ec410af1d8a14e..1327d04fa32a25 100644
> --- a/kernel/dma/direct.c
> +++ b/kernel/dma/direct.c
> @@ -234,27 +234,22 @@ void *dma_direct_alloc(struct device *dev, size_t size,
>   					dma_handle);
>   
>   		/*
> -		 * Otherwise remap if the architecture is asking for it.  But
> -		 * given that remapping memory is a blocking operation we'll
> -		 * instead have to dip into the atomic pools.
> +		 * Otherwise we require the architecture to either be able to
> +		 * mark arbitrary parts of the kernel direct mapping uncached,
> +		 * or remap it uncached.
>   		 */
> +		set_uncached = IS_ENABLED(CONFIG_ARCH_HAS_DMA_SET_UNCACHED);
>   		remap = IS_ENABLED(CONFIG_DMA_DIRECT_REMAP);
> -		if (remap) {
> -			if (dma_direct_use_pool(dev, gfp))
> -				return dma_direct_alloc_from_pool(dev, size,
> -						dma_handle, gfp);
> -		} else {
> -			if (!IS_ENABLED(CONFIG_ARCH_HAS_DMA_SET_UNCACHED))
> -				return NULL;
> -			set_uncached = true;
> -		}
> +		if (!set_uncached && !remap)
> +			return NULL;
>   	}
>   
>   	/*
> -	 * Decrypting memory may block, so allocate the memory from the atomic
> -	 * pools if we can't block.
> +	 * Remapping or decrypting memory may block; allocate the memory from
> +	 * the atomic pools instead if we aren't allowed to block.
>   	 */
> -	if (force_dma_unencrypted(dev) && dma_direct_use_pool(dev, gfp))
> +	if ((remap || force_dma_unencrypted(dev)) &&
> +	    dma_direct_use_pool(dev, gfp))
>   		return dma_direct_alloc_from_pool(dev, size, dma_handle, gfp);
>   
>   	/* we always manually zero the memory once we are done */

Patch

diff --git a/kernel/dma/direct.c b/kernel/dma/direct.c
index ec410af1d8a14e..1327d04fa32a25 100644
--- a/kernel/dma/direct.c
+++ b/kernel/dma/direct.c
@@ -234,27 +234,22 @@ void *dma_direct_alloc(struct device *dev, size_t size,
 					dma_handle);
 
 		/*
-		 * Otherwise remap if the architecture is asking for it.  But
-		 * given that remapping memory is a blocking operation we'll
-		 * instead have to dip into the atomic pools.
+		 * Otherwise we require the architecture to either be able to
+		 * mark arbitrary parts of the kernel direct mapping uncached,
+		 * or remap it uncached.
 		 */
+		set_uncached = IS_ENABLED(CONFIG_ARCH_HAS_DMA_SET_UNCACHED);
 		remap = IS_ENABLED(CONFIG_DMA_DIRECT_REMAP);
-		if (remap) {
-			if (dma_direct_use_pool(dev, gfp))
-				return dma_direct_alloc_from_pool(dev, size,
-						dma_handle, gfp);
-		} else {
-			if (!IS_ENABLED(CONFIG_ARCH_HAS_DMA_SET_UNCACHED))
-				return NULL;
-			set_uncached = true;
-		}
+		if (!set_uncached && !remap)
+			return NULL;
 	}
 
 	/*
-	 * Decrypting memory may block, so allocate the memory from the atomic
-	 * pools if we can't block.
+	 * Remapping or decrypting memory may block; allocate the memory from
+	 * the atomic pools instead if we aren't allowed to block.
 	 */
-	if (force_dma_unencrypted(dev) && dma_direct_use_pool(dev, gfp))
+	if ((remap || force_dma_unencrypted(dev)) &&
+	    dma_direct_use_pool(dev, gfp))
 		return dma_direct_alloc_from_pool(dev, size, dma_handle, gfp);
 
 	/* we always manually zero the memory once we are done */
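
One design note worth spelling out: IS_ENABLED() evaluates to a compile-time
constant 0 or 1, so the consolidated "if (!set_uncached && !remap)" gate costs
nothing at runtime; on any configuration that selects either option the
compiler folds the early return away.  Below is a minimal standalone sketch of
that pattern in plain userspace C; the ..._CONFIGURED macros and
alloc_noncoherent_sketch() are illustrative stand-ins, not the kernel's
kconfig machinery:

#include <stdio.h>
#include <stdlib.h>

/* Hypothetical stand-ins for IS_ENABLED(CONFIG_...); each is a build-time 0 or 1. */
#define ARCH_HAS_DMA_SET_UNCACHED_CONFIGURED	0
#define DMA_DIRECT_REMAP_CONFIGURED		1

static void *alloc_noncoherent_sketch(size_t size)
{
	int set_uncached = ARCH_HAS_DMA_SET_UNCACHED_CONFIGURED;
	int remap = DMA_DIRECT_REMAP_CONFIGURED;

	/*
	 * Both operands are compile-time constants, so the compiler drops this
	 * branch entirely whenever either option is configured in.
	 */
	if (!set_uncached && !remap)
		return NULL;

	/* stand-in for the real page allocation that follows in the kernel */
	return malloc(size);
}

int main(void)
{
	void *p = alloc_noncoherent_sketch(64);

	printf("%s\n", p ? "allocated" : "unsupported configuration");
	free(p);
	return 0;
}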