From patchwork Tue Apr 9 06:17:57 2024
X-Patchwork-Submitter: Baruch Siach
X-Patchwork-Id: 13621868
From: Baruch Siach
To: Christoph Hellwig, Marek Szyprowski, Rob Herring, Saravana Kannan, Catalin Marinas, Will Deacon
Cc: Baruch Siach, Robin Murphy, iommu@lists.linux.dev, devicetree@vger.kernel.org, linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org, linuxppc-dev@lists.ozlabs.org, linux-s390@vger.kernel.org, Petr Tesařík, Ramon Fried, Elad Nachman
Subject: [PATCH RFC v2 4/5] dma-direct: add base offset to zone_dma_bits
Date: Tue, 9 Apr 2024 09:17:57 +0300
Message-ID: <1d7b0d59590aae631b6f0b894257ab961b907b44.1712642324.git.baruch@tkos.co.il>

Current code using zone_dma_bits assumes that all addresses within the bit-mask range are suitable for DMA. For some existing platforms this assumption is not correct.
The DMA range might have a non-zero lower limit. Add 'zone_dma_base' for platform code to set the base address of the DMA zone.

Rename the dma_direct_supported() local 'min_mask' variable to better describe its use as a limit.

Suggested-by: Catalin Marinas
Signed-off-by: Baruch Siach
---
 include/linux/dma-direct.h | 1 +
 kernel/dma/direct.c        | 9 +++++----
 kernel/dma/pool.c          | 2 +-
 kernel/dma/swiotlb.c       | 4 ++--
 4 files changed, 9 insertions(+), 7 deletions(-)

diff --git a/include/linux/dma-direct.h b/include/linux/dma-direct.h
index 7cf76f1d3239..dd0330cbef81 100644
--- a/include/linux/dma-direct.h
+++ b/include/linux/dma-direct.h
@@ -13,6 +13,7 @@
 #include

 extern phys_addr_t zone_dma_limit;
+extern phys_addr_t zone_dma_base;

 /*
  * Record the mapping of CPU physical to DMA addresses for a given region.
diff --git a/kernel/dma/direct.c b/kernel/dma/direct.c
index 3b2ebcd4f576..92bb241645d6 100644
--- a/kernel/dma/direct.c
+++ b/kernel/dma/direct.c
@@ -21,6 +21,7 @@
  * override the variable below for dma-direct to work properly.
  */
 phys_addr_t zone_dma_limit __ro_after_init = DMA_BIT_MASK(24);
+phys_addr_t zone_dma_base __ro_after_init;

 static inline dma_addr_t phys_to_dma_direct(struct device *dev,
 		phys_addr_t phys)
@@ -59,7 +60,7 @@ static gfp_t dma_direct_optimal_gfp_mask(struct device *dev, u64 *phys_limit)
 	 * zones.
 	 */
 	*phys_limit = dma_to_phys(dev, dma_limit);
-	if (*phys_limit <= zone_dma_limit)
+	if (*phys_limit <= zone_dma_base + zone_dma_limit)
 		return GFP_DMA;
 	if (*phys_limit <= DMA_BIT_MASK(32))
 		return GFP_DMA32;
@@ -567,7 +568,7 @@ int dma_direct_mmap(struct device *dev, struct vm_area_struct *vma,

 int dma_direct_supported(struct device *dev, u64 mask)
 {
-	u64 min_mask = (max_pfn - 1) << PAGE_SHIFT;
+	u64 min_limit = (max_pfn - 1) << PAGE_SHIFT;

 	/*
 	 * Because 32-bit DMA masks are so common we expect every architecture
@@ -584,8 +585,8 @@ int dma_direct_supported(struct device *dev, u64 mask)
 	 * part of the check.
 	 */
 	if (IS_ENABLED(CONFIG_ZONE_DMA))
-		min_mask = min_t(u64, min_mask, zone_dma_limit);
-	return mask >= phys_to_dma_unencrypted(dev, min_mask);
+		min_limit = min_t(u64, min_limit, zone_dma_base + zone_dma_limit);
+	return mask >= phys_to_dma_unencrypted(dev, min_limit);
 }

 /*
diff --git a/kernel/dma/pool.c b/kernel/dma/pool.c
index 410a7b40e496..61a86f3d83ae 100644
--- a/kernel/dma/pool.c
+++ b/kernel/dma/pool.c
@@ -70,7 +70,7 @@ static bool cma_in_zone(gfp_t gfp)
 	/* CMA can't cross zone boundaries, see cma_activate_area() */
 	end = cma_get_base(cma) + size - 1;
 	if (IS_ENABLED(CONFIG_ZONE_DMA) && (gfp & GFP_DMA))
-		return end <= zone_dma_limit;
+		return end <= zone_dma_base + zone_dma_limit;
 	if (IS_ENABLED(CONFIG_ZONE_DMA32) && (gfp & GFP_DMA32))
 		return end <= DMA_BIT_MASK(32);
 	return true;
diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c
index 96d6eee7d215..814052df07c5 100644
--- a/kernel/dma/swiotlb.c
+++ b/kernel/dma/swiotlb.c
@@ -446,7 +446,7 @@ int swiotlb_init_late(size_t size, gfp_t gfp_mask,
 	if (!remap)
 		io_tlb_default_mem.can_grow = true;
 	if (IS_ENABLED(CONFIG_ZONE_DMA) && (gfp_mask & __GFP_DMA))
-		io_tlb_default_mem.phys_limit = zone_dma_limit;
+		io_tlb_default_mem.phys_limit = zone_dma_base + zone_dma_limit;
 	else if (IS_ENABLED(CONFIG_ZONE_DMA32) && (gfp_mask & __GFP_DMA32))
 		io_tlb_default_mem.phys_limit = DMA_BIT_MASK(32);
 	else
@@ -625,7 +625,7 @@ static struct page *swiotlb_alloc_tlb(struct device *dev, size_t bytes,
 	}

 	gfp &= ~GFP_ZONEMASK;
-	if (phys_limit <= zone_dma_limit)
+	if (phys_limit <= zone_dma_base + zone_dma_limit)
 		gfp |= __GFP_DMA;
 	else if (phys_limit <= DMA_BIT_MASK(32))
 		gfp |= __GFP_DMA32;