From patchwork Wed Dec 27 15:04:27 2023
X-Patchwork-Submitter: Baruch Siach
X-Patchwork-Id: 13505346
From: Baruch Siach
To: Christoph Hellwig, Marek Szyprowski, Rob Herring, Frank Rowand,
	Catalin Marinas, Will Deacon
Cc: Baruch Siach, Robin Murphy, iommu@lists.linux.dev,
	devicetree@vger.kernel.org, linux-arm-kernel@lists.infradead.org,
	linux-kernel@vger.kernel.org, Petr Tesařík, Ramon Fried
Subject: [PATCH RFC 3/4] dma-direct: add offset to zone_dma_bits
Date: Wed, 27 Dec 2023 17:04:27 +0200
X-Mailer: git-send-email 2.43.0

Current code using zone_dma_bits assumes that the entire address range
covered by the bit mask is suitable for DMA. For some existing platforms
this assumption is not correct: the DMA range might have a non-zero lower
limit. Add 'zone_dma_off' to let platform code set the base address of the
DMA zone.

Rename the dma_direct_supported() local 'min_mask' variable to better
describe its use as a limit.

Suggested-by: Catalin Marinas
Signed-off-by: Baruch Siach
---
 include/linux/dma-direct.h |  1 +
 kernel/dma/direct.c        | 10 ++++++----
 kernel/dma/pool.c          |  2 +-
 kernel/dma/swiotlb.c       |  5 +++--
 4 files changed, 11 insertions(+), 7 deletions(-)

diff --git a/include/linux/dma-direct.h b/include/linux/dma-direct.h
index 18aade195884..77c52019c1ff 100644
--- a/include/linux/dma-direct.h
+++ b/include/linux/dma-direct.h
@@ -13,6 +13,7 @@
 #include 
 
 extern unsigned int zone_dma_bits;
+extern phys_addr_t zone_dma_off;
 
 /*
  * Record the mapping of CPU physical to DMA addresses for a given region.
diff --git a/kernel/dma/direct.c b/kernel/dma/direct.c
index 73c95815789a..7a54fa71a174 100644
--- a/kernel/dma/direct.c
+++ b/kernel/dma/direct.c
@@ -21,6 +21,7 @@
  * override the variable below for dma-direct to work properly.
  */
 unsigned int zone_dma_bits __ro_after_init = 24;
+phys_addr_t zone_dma_off __ro_after_init;
 
 static inline dma_addr_t phys_to_dma_direct(struct device *dev,
 		phys_addr_t phys)
@@ -59,7 +60,7 @@ static gfp_t dma_direct_optimal_gfp_mask(struct device *dev, u64 *phys_limit)
 	 * zones.
 	 */
 	*phys_limit = dma_to_phys(dev, dma_limit);
-	if (*phys_limit <= DMA_BIT_MASK(zone_dma_bits))
+	if (*phys_limit <= zone_dma_off + DMA_BIT_MASK(zone_dma_bits))
 		return GFP_DMA;
 	if (*phys_limit <= DMA_BIT_MASK(32))
 		return GFP_DMA32;
@@ -566,7 +567,7 @@ int dma_direct_mmap(struct device *dev, struct vm_area_struct *vma,
 
 int dma_direct_supported(struct device *dev, u64 mask)
 {
-	u64 min_mask = (max_pfn - 1) << PAGE_SHIFT;
+	u64 min_limit = (max_pfn - 1) << PAGE_SHIFT;
 
 	/*
 	 * Because 32-bit DMA masks are so common we expect every architecture
@@ -583,8 +584,9 @@ int dma_direct_supported(struct device *dev, u64 mask)
 	 * part of the check.
 	 */
 	if (IS_ENABLED(CONFIG_ZONE_DMA))
-		min_mask = min_t(u64, min_mask, DMA_BIT_MASK(zone_dma_bits));
-	return mask >= phys_to_dma_unencrypted(dev, min_mask);
+		min_limit = min_t(u64, min_limit,
+				  zone_dma_off + DMA_BIT_MASK(zone_dma_bits));
+	return mask >= phys_to_dma_unencrypted(dev, min_limit);
 }
 
 /*
diff --git a/kernel/dma/pool.c b/kernel/dma/pool.c
index b481c48a31a6..0103e932444d 100644
--- a/kernel/dma/pool.c
+++ b/kernel/dma/pool.c
@@ -70,7 +70,7 @@ static bool cma_in_zone(gfp_t gfp)
 	/* CMA can't cross zone boundaries, see cma_activate_area() */
 	end = cma_get_base(cma) + size - 1;
 	if (IS_ENABLED(CONFIG_ZONE_DMA) && (gfp & GFP_DMA))
-		return end <= DMA_BIT_MASK(zone_dma_bits);
+		return end <= zone_dma_off + DMA_BIT_MASK(zone_dma_bits);
 	if (IS_ENABLED(CONFIG_ZONE_DMA32) && (gfp & GFP_DMA32))
 		return end <= DMA_BIT_MASK(32);
 	return true;
diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c
index 33d942615be5..bd3071894d2b 100644
--- a/kernel/dma/swiotlb.c
+++ b/kernel/dma/swiotlb.c
@@ -446,7 +446,8 @@ int swiotlb_init_late(size_t size, gfp_t gfp_mask,
 	if (!remap)
 		io_tlb_default_mem.can_grow = true;
 	if (IS_ENABLED(CONFIG_ZONE_DMA) && (gfp_mask & __GFP_DMA))
-		io_tlb_default_mem.phys_limit = DMA_BIT_MASK(zone_dma_bits);
+		io_tlb_default_mem.phys_limit =
+			zone_dma_off + DMA_BIT_MASK(zone_dma_bits);
 	else if (IS_ENABLED(CONFIG_ZONE_DMA32) && (gfp_mask & __GFP_DMA32))
 		io_tlb_default_mem.phys_limit = DMA_BIT_MASK(32);
 	else
@@ -625,7 +626,7 @@ static struct page *swiotlb_alloc_tlb(struct device *dev, size_t bytes,
 	}
 
 	gfp &= ~GFP_ZONEMASK;
-	if (phys_limit <= DMA_BIT_MASK(zone_dma_bits))
+	if (phys_limit <= zone_dma_off + DMA_BIT_MASK(zone_dma_bits))
 		gfp |= __GFP_DMA;
 	else if (phys_limit <= DMA_BIT_MASK(32))
 		gfp |= __GFP_DMA32;
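
Illustrative note (not part of the patch): after this change the ZONE_DMA
physical limit that dma-direct, the atomic pools and swiotlb compare
against becomes zone_dma_off + DMA_BIT_MASK(zone_dma_bits). Below is a
minimal sketch of how architecture early-init code might populate the new
variable. The function name and the 2 GiB base / 30-bit width are invented
for the example; real code would derive them from the firmware description,
e.g. a dma-ranges property.

/*
 * Minimal sketch, not part of this patch: example_setup_zone_dma() and
 * the 2 GiB / 30-bit values are made up for illustration only.
 */
#include <linux/dma-direct.h>	/* zone_dma_bits, zone_dma_off */
#include <linux/init.h>
#include <linux/sizes.h>

static void __init example_setup_zone_dma(void)
{
	/* Lowest physical address that DMA masters can reach. */
	zone_dma_off = SZ_2G;
	/* The DMA window spans 30 address bits (1 GiB) above that base. */
	zone_dma_bits = 30;
	/*
	 * dma-direct now treats zone_dma_off + DMA_BIT_MASK(zone_dma_bits),
	 * i.e. 0x80000000 + 0x3fffffff = 0xbfffffff, as the ZONE_DMA
	 * physical limit.
	 */
}

This keeps zone_dma_bits describing only the width of the DMA window while
zone_dma_off anchors its base, matching how the checks in the hunks above
combine the two values.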