From patchwork Sun Apr 24 17:20:44 2022
X-Patchwork-Submitter: Mike Rapoport
X-Patchwork-Id: 12825016
From: Mike Rapoport <rppt@kernel.org>
To: linux-kernel@vger.kernel.org
Cc: Andrew Morton, Catalin Marinas, Greg Kroah-Hartman, Guillaume Tucker,
    Mark Brown, Mark-PK Tsai, Mike Rapoport, Russell King, Tony Lindgren,
    Will Deacon, bot@kernelci.org, kernelci-results@groups.io,
    linux-arm-kernel@lists.infradead.org, stable@vger.kernel.org
Subject: [PATCH] arm[64]/memremap: don't abuse pfn_valid() to ensure presence of linear map
Date: Sun, 24 Apr 2022 20:20:44 +0300
Message-Id: <20220424172044.22220-1-rppt@kernel.org>
X-Mailer: git-send-email 2.28.0
From: Mike Rapoport

The semantics of pfn_valid() is to check presence of the memory map for a
PFN, not whether the PFN is covered by the linear map. The memory map may
be present for NOMAP memory regions, but such regions are not mapped in the
linear mapping. Accessing them via __va() when they are memremap()'ed will
cause a crash.

On v5.4.y the crash happens on qemu-arm with UEFI [1]:

<1>[ 0.084476] 8<--- cut here ---
<1>[ 0.084595] Unable to handle kernel paging request at virtual address dfb76000
<1>[ 0.084938] pgd = (ptrval)
<1>[ 0.085038] [dfb76000] *pgd=5f7fe801, *pte=00000000, *ppte=00000000

...

<4>[ 0.093923] [] (memcpy) from [] (dmi_setup+0x60/0x418)
<4>[ 0.094204] [] (dmi_setup) from [] (arm_dmi_init+0x8/0x10)
<4>[ 0.094408] [] (arm_dmi_init) from [] (do_one_initcall+0x50/0x228)
<4>[ 0.094619] [] (do_one_initcall) from [] (kernel_init_freeable+0x15c/0x1f8)
<4>[ 0.094841] [] (kernel_init_freeable) from [] (kernel_init+0x8/0x10c)
<4>[ 0.095057] [] (kernel_init) from [] (ret_from_fork+0x14/0x2c)

On kernels v5.10.y and newer the same crash won't reproduce on ARM because
commit b10d6bca8720 ("arch, drivers: replace for_each_membock() with
for_each_mem_range()") changed the way memory regions are registered in the
resource tree, but that merely covers up the problem.

On ARM64, memory resources are registered in yet another way, and there the
wrong use of pfn_valid() to ensure availability of the linear map is
likewise only covered up.

Implement arch_memremap_can_ram_remap() on ARM and ARM64 to prevent access
to NOMAP regions via the linear mapping in memremap().

Link: https://lore.kernel.org/all/Yl65zxGgFzF1Okac@sirena.org.uk
Reported-by: "kernelci.org bot" <bot@kernelci.org>
Tested-by: Mark Brown
Cc: stable@vger.kernel.org # 5.4+
Signed-off-by: Mike Rapoport
Reported-by: kernel test robot
Reported-by: kernel test robot
---
 arch/arm/include/asm/io.h   | 4 ++++
 arch/arm/mm/ioremap.c       | 9 ++++++++-
 arch/arm64/include/asm/io.h | 4 ++++
 arch/arm64/mm/ioremap.c     | 8 ++++++++
 kernel/iomem.c              | 2 +-
 5 files changed, 25 insertions(+), 2 deletions(-)

base-commit: b2d229d4ddb17db541098b83524d901257e93845
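A note on the #define lines added to the two <asm/io.h> headers below: the
generic code only sees an architecture's arch_memremap_can_ram_remap() when
that macro is defined. As a rough sketch (assuming the usual #ifndef-based
override pattern in kernel/iomem.c), the fallback that ARM and ARM64 are
opting out of looks roughly like this:

/*
 * Sketch of the assumed generic fallback in kernel/iomem.c: architectures
 * that do not define arch_memremap_can_ram_remap never veto the
 * __va()/linear-map shortcut taken by try_ram_remap().
 */
#ifndef arch_memremap_can_ram_remap
static bool arch_memremap_can_ram_remap(resource_size_t offset, size_t size,
					unsigned long flags)
{
	return true;
}
#endif

With the ARM and ARM64 implementations below reporting false for
MEMBLOCK_NOMAP ranges, try_ram_remap() returns NULL for such regions and
memremap(MEMREMAP_WB) falls back to an ioremap_cache()-style mapping instead
of handing out an unmapped __va() address.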
diff --git a/arch/arm/include/asm/io.h b/arch/arm/include/asm/io.h
index 0c70eb688a00..fbb2eeea7285 100644
--- a/arch/arm/include/asm/io.h
+++ b/arch/arm/include/asm/io.h
@@ -145,6 +145,10 @@ extern void __iomem * (*arch_ioremap_caller)(phys_addr_t, size_t,
 					unsigned int, void *);
 extern void (*arch_iounmap)(volatile void __iomem *);
 
+extern bool arch_memremap_can_ram_remap(resource_size_t offset, size_t size,
+					unsigned long flags);
+#define arch_memremap_can_ram_remap arch_memremap_can_ram_remap
+
 /*
  * Bad read/write accesses...
  */
diff --git a/arch/arm/mm/ioremap.c b/arch/arm/mm/ioremap.c
index aa08bcb72db9..6eb1ad24544d 100644
--- a/arch/arm/mm/ioremap.c
+++ b/arch/arm/mm/ioremap.c
@@ -43,7 +43,6 @@
 #include
 #include "mm.h"
 
-
 LIST_HEAD(static_vmlist);
 
 static struct static_vm *find_static_vm_paddr(phys_addr_t paddr,
@@ -493,3 +492,11 @@ void __init early_ioremap_init(void)
 {
 	early_ioremap_setup();
 }
+
+bool arch_memremap_can_ram_remap(resource_size_t offset, size_t size,
+				 unsigned long flags)
+{
+	unsigned long pfn = PHYS_PFN(offset);
+
+	return memblock_is_map_memory(pfn);
+}
diff --git a/arch/arm64/include/asm/io.h b/arch/arm64/include/asm/io.h
index 7fd836bea7eb..3995652daf81 100644
--- a/arch/arm64/include/asm/io.h
+++ b/arch/arm64/include/asm/io.h
@@ -192,4 +192,8 @@ extern void __iomem *ioremap_cache(phys_addr_t phys_addr, size_t size);
 extern int valid_phys_addr_range(phys_addr_t addr, size_t size);
 extern int valid_mmap_phys_addr_range(unsigned long pfn, size_t size);
 
+extern bool arch_memremap_can_ram_remap(resource_size_t offset, size_t size,
+					unsigned long flags);
+#define arch_memremap_can_ram_remap arch_memremap_can_ram_remap
+
 #endif	/* __ASM_IO_H */
diff --git a/arch/arm64/mm/ioremap.c b/arch/arm64/mm/ioremap.c
index b7c81dacabf0..b21f91cd830d 100644
--- a/arch/arm64/mm/ioremap.c
+++ b/arch/arm64/mm/ioremap.c
@@ -99,3 +99,11 @@ void __init early_ioremap_init(void)
 {
 	early_ioremap_setup();
 }
+
+bool arch_memremap_can_ram_remap(resource_size_t offset, size_t size,
+				 unsigned long flags)
+{
+	unsigned long pfn = PHYS_PFN(offset);
+
+	return pfn_is_map_memory(pfn);
+}
diff --git a/kernel/iomem.c b/kernel/iomem.c
index 62c92e43aa0d..e85bed24c0a9 100644
--- a/kernel/iomem.c
+++ b/kernel/iomem.c
@@ -33,7 +33,7 @@ static void *try_ram_remap(resource_size_t offset, size_t size,
 	unsigned long pfn = PHYS_PFN(offset);
 
 	/* In the simple case just return the existing linear address */
-	if (pfn_valid(pfn) && !PageHighMem(pfn_to_page(pfn)) &&
+	if (!PageHighMem(pfn_to_page(pfn)) &&
 	    arch_memremap_can_ram_remap(offset, size, flags))
 		return __va(offset);
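To tie the change back to the dmi_setup() crash quoted above, a minimal
illustrative sketch of a caller follows; the function name and parameters
are made up for illustration and are not part of the patch:

#include <linux/errno.h>
#include <linux/io.h>
#include <linux/string.h>

/*
 * Illustrative sketch only, not part of the patch.  "nomap_phys" stands for
 * a range firmware reserved as MEMBLOCK_NOMAP, e.g. the SMBIOS tables that
 * dmi_setup() copies from.
 */
static int copy_from_nomap_region(phys_addr_t nomap_phys, void *buf, size_t len)
{
	void *p = memremap(nomap_phys, len, MEMREMAP_WB);

	if (!p)
		return -ENOMEM;

	/*
	 * Before the patch, try_ram_remap() trusted pfn_valid() and could
	 * hand back __va(nomap_phys) here, an address with no linear-map
	 * entry, so this memcpy() faulted.  With the arch hooks in place,
	 * NOMAP ranges take the ioremap_cache()-based fallback instead.
	 */
	memcpy(buf, p, len);
	memunmap(p);
	return 0;
}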