From patchwork Thu Jan 25 14:58:31 2024
X-Patchwork-Submitter: Jisheng Zhang
X-Patchwork-Id: 13531071
From: Jisheng Zhang
To: Paul Walmsley, Palmer Dabbelt, Albert Ou
Cc: linux-riscv@lists.infradead.org, linux-kernel@vger.kernel.org,
 Alexandre Ghiti
Subject: [PATCH v3] riscv: mm: still create swiotlb buffer for kmalloc()
 bouncing if required
Date: Thu, 25 Jan 2024 22:58:31 +0800
Message-ID: <20240125145831.947-1-jszhang@kernel.org>

After commit f51f7a0fc2f4 ("riscv: enable
DMA_BOUNCE_UNALIGNED_KMALLOC for !dma_coherent"), non-coherent
platforms with less than 4GB of memory rely on users passing the
"swiotlb=mmnn,force" kernel parameter to enable DMA bouncing for
unaligned kmalloc() buffers. Now let's go further: if no bouncing is
needed for ZONE_DMA, let the kernel automatically allocate 1MB of
swiotlb buffer per 1GB of RAM for kmalloc() bouncing on non-coherent
platforms, so that passing "swiotlb=mmnn,force" is no longer needed.
The math of "1MB swiotlb buffer per 1GB of RAM for kmalloc() bouncing"
is taken from arm64. Users can still force a smaller swiotlb buffer by
passing "swiotlb=mmnn".

Signed-off-by: Jisheng Zhang
Reviewed-by: Alexandre Ghiti
---
since v2:
 - rebase on v6.8-rc1
 - collect Reviewed-by tag

since v1:
 - fix build error if CONFIG_RISCV_DMA_NONCOHERENT=n

 arch/riscv/include/asm/cache.h |  2 +-
 arch/riscv/mm/init.c           | 16 +++++++++++++++-
 2 files changed, 16 insertions(+), 2 deletions(-)

diff --git a/arch/riscv/include/asm/cache.h b/arch/riscv/include/asm/cache.h
index 2174fe7bac9a..570e9d8acad1 100644
--- a/arch/riscv/include/asm/cache.h
+++ b/arch/riscv/include/asm/cache.h
@@ -26,8 +26,8 @@

 #ifndef __ASSEMBLY__

-#ifdef CONFIG_RISCV_DMA_NONCOHERENT
 extern int dma_cache_alignment;
+#ifdef CONFIG_RISCV_DMA_NONCOHERENT
 #define dma_get_cache_alignment dma_get_cache_alignment
 static inline int dma_get_cache_alignment(void)
 {
diff --git a/arch/riscv/mm/init.c b/arch/riscv/mm/init.c
index 32cad6a65ccd..3359472df9a5 100644
--- a/arch/riscv/mm/init.c
+++ b/arch/riscv/mm/init.c
@@ -162,11 +162,25 @@ static void print_vm_layout(void) { }

 void __init mem_init(void)
 {
+	bool swiotlb = max_pfn > PFN_DOWN(dma32_phys_limit);
 #ifdef CONFIG_FLATMEM
 	BUG_ON(!mem_map);
 #endif /* CONFIG_FLATMEM */

-	swiotlb_init(max_pfn > PFN_DOWN(dma32_phys_limit), SWIOTLB_VERBOSE);
+	if (IS_ENABLED(CONFIG_DMA_BOUNCE_UNALIGNED_KMALLOC) && !swiotlb &&
+	    dma_cache_alignment != 1) {
+		/*
+		 * If no bouncing needed for ZONE_DMA, allocate 1MB swiotlb
+		 * buffer per 1GB of RAM for kmalloc() bouncing on
+		 * non-coherent platforms.
+		 */
+		unsigned long size =
+			DIV_ROUND_UP(memblock_phys_mem_size(), 1024);
+		swiotlb_adjust_size(min(swiotlb_size_or_default(), size));
+		swiotlb = true;
+	}
+
+	swiotlb_init(swiotlb, SWIOTLB_VERBOSE);
 	memblock_free_all();

 	print_vm_layout();
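
[Editor's note] For readers wondering what the "1MB per 1GB" sizing works
out to in practice: the patch divides the total memblock size by 1024
(rounding up) and clamps the result to the default swiotlb pool, which is
64MB unless overridden on the command line. Below is a standalone sketch of
that arithmetic, assuming the 64MB default; it is plain userspace C with
DIV_ROUND_UP() re-derived for illustration, not kernel code, and the memory
totals are hypothetical:

	#include <stdio.h>

	/* Same rounding as the kernel's DIV_ROUND_UP() macro. */
	#define DIV_ROUND_UP(n, d) (((n) + (d) - 1) / (d))

	int main(void)
	{
		/* Hypothetical memory totals: 512MB, 2GB, 128GB. */
		const unsigned long long totals[] = {
			512ULL << 20, 2ULL << 30, 128ULL << 30,
		};
		/* Assumed default swiotlb pool size: 64MB. */
		const unsigned long long swiotlb_default = 64ULL << 20;

		for (int i = 0; i < 3; i++) {
			/* 1MB of bounce buffer per 1GB of RAM == total / 1024. */
			unsigned long long size = DIV_ROUND_UP(totals[i], 1024);

			/* Mirrors min(swiotlb_size_or_default(), size) above. */
			if (size > swiotlb_default)
				size = swiotlb_default;
			printf("%6llu MB RAM -> %6llu KB swiotlb\n",
			       totals[i] >> 20, size >> 10);
		}
		return 0;
	}

So a 2GB non-coherent board ends up reserving a 2MB bounce pool for
kmalloc() bouncing, rather than the full 64MB default that a bare
swiotlb_init(true, ...) would have reserved; only very large machines
(64GB and up) hit the default cap.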