From patchwork Mon Aug 1 08:04:15 2022
X-Patchwork-Submitter: Mike Rapoport
X-Patchwork-Id: 12933609
From: Mike Rapoport
To: linux-arm-kernel@lists.infradead.org
Cc: Ard Biesheuvel, Catalin Marinas, Guanghui Feng, Mark Rutland,
    Mike Rapoport, Will Deacon, linux-kernel@vger.kernel.org,
    linux-mm@kvack.org
Subject: [RFC PATCH 1/4] arm64: introduce have_zone_dma() helper
Date: Mon, 1 Aug 2022 11:04:15 +0300
Message-Id: <20220801080418.120311-2-rppt@kernel.org>
In-Reply-To: <20220801080418.120311-1-rppt@kernel.org>
References: <20220801080418.120311-1-rppt@kernel.org>
From: Mike Rapoport

Introduce a have_zone_dma() helper rather than open-coding the check for
whether CONFIG_ZONE_DMA or CONFIG_ZONE_DMA32 is enabled.

Signed-off-by: Mike Rapoport
---
 arch/arm64/include/asm/memory.h | 8 ++++++++
 arch/arm64/mm/init.c            | 4 ++--
 arch/arm64/mm/mmu.c             | 6 ++----
 3 files changed, 12 insertions(+), 6 deletions(-)

diff --git a/arch/arm64/include/asm/memory.h b/arch/arm64/include/asm/memory.h
index 0af70d9abede..fa89d3bded8b 100644
--- a/arch/arm64/include/asm/memory.h
+++ b/arch/arm64/include/asm/memory.h
@@ -351,6 +351,14 @@ static inline void *phys_to_virt(phys_addr_t x)
 })

 void dump_mem_limit(void);
+
+static inline bool have_zone_dma(void)
+{
+	if (IS_ENABLED(CONFIG_ZONE_DMA) || IS_ENABLED(CONFIG_ZONE_DMA32))
+		return true;
+
+	return false;
+}
 #endif /* !ASSEMBLY */

 /*
diff --git a/arch/arm64/mm/init.c b/arch/arm64/mm/init.c
index 339ee84e5a61..fa2260040c0f 100644
--- a/arch/arm64/mm/init.c
+++ b/arch/arm64/mm/init.c
@@ -389,7 +389,7 @@ void __init arm64_memblock_init(void)

 	early_init_fdt_scan_reserved_mem();

-	if (!IS_ENABLED(CONFIG_ZONE_DMA) && !IS_ENABLED(CONFIG_ZONE_DMA32))
+	if (!have_zone_dma())
 		reserve_crashkernel();

 	high_memory = __va(memblock_end_of_DRAM() - 1) + 1;
@@ -438,7 +438,7 @@ void __init bootmem_init(void)
 	 * request_standard_resources() depends on crashkernel's memory being
 	 * reserved, so do it here.
 	 */
-	if (IS_ENABLED(CONFIG_ZONE_DMA) || IS_ENABLED(CONFIG_ZONE_DMA32))
+	if (have_zone_dma())
 		reserve_crashkernel();

 	memblock_dump_all();
diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
index 626ec32873c6..d170b7956b01 100644
--- a/arch/arm64/mm/mmu.c
+++ b/arch/arm64/mm/mmu.c
@@ -529,8 +529,7 @@ static void __init map_mem(pgd_t *pgdp)

 #ifdef CONFIG_KEXEC_CORE
 	if (crash_mem_map) {
-		if (IS_ENABLED(CONFIG_ZONE_DMA) ||
-		    IS_ENABLED(CONFIG_ZONE_DMA32))
+		if (have_zone_dma())
 			flags |= NO_BLOCK_MAPPINGS | NO_CONT_MAPPINGS;
 		else if (crashk_res.end)
 			memblock_mark_nomap(crashk_res.start,
@@ -571,8 +570,7 @@ static void __init map_mem(pgd_t *pgdp)
 	 * through /sys/kernel/kexec_crash_size interface.
 	 */
 #ifdef CONFIG_KEXEC_CORE
-	if (crash_mem_map &&
-	    !IS_ENABLED(CONFIG_ZONE_DMA) && !IS_ENABLED(CONFIG_ZONE_DMA32)) {
+	if (crash_mem_map && !have_zone_dma()) {
 		if (crashk_res.end) {
 			__map_memblock(pgdp, crashk_res.start,
 				       crashk_res.end + 1,
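IS_ENABLED() expands to 1 or 0 at compile time, so have_zone_dma() folds to
a constant for a given configuration. The standalone sketch below is not
part of the patch; it mimics the helper with stand-in values (the CONFIG_*
substitutes are assumptions for illustration, not taken from a real
.config) so the expected behaviour can be checked outside the kernel.

/*
 * Standalone sketch of what have_zone_dma() reduces to once the kernel's
 * IS_ENABLED() macros are expanded. The stand-in values below are
 * illustrative assumptions only.
 */
#include <stdbool.h>
#include <stdio.h>

#define ZONE_DMA_ENABLED	1	/* stand-in for IS_ENABLED(CONFIG_ZONE_DMA) */
#define ZONE_DMA32_ENABLED	0	/* stand-in for IS_ENABLED(CONFIG_ZONE_DMA32) */

static inline bool have_zone_dma(void)
{
	if (ZONE_DMA_ENABLED || ZONE_DMA32_ENABLED)
		return true;

	return false;
}

int main(void)
{
	/*
	 * When either zone is configured, callers such as
	 * reserve_crashkernel() and map_mem() take the "DMA zones present"
	 * path.
	 */
	printf("have_zone_dma() = %d\n", have_zone_dma());
	return 0;
}
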
From patchwork Mon Aug 1 08:04:16 2022
X-Patchwork-Submitter: Mike Rapoport
X-Patchwork-Id: 12933610
From: Mike Rapoport
To: linux-arm-kernel@lists.infradead.org
Cc: Ard Biesheuvel, Catalin Marinas, Guanghui Feng, Mark Rutland,
    Mike Rapoport, Will Deacon, linux-kernel@vger.kernel.org,
    linux-mm@kvack.org
Subject: [RFC PATCH 2/4] arm64/mmu: drop _hotplug from unmap_hotplug_* function names
Date: Mon, 1 Aug 2022 11:04:16 +0300
Message-Id: <20220801080418.120311-3-rppt@kernel.org>
In-Reply-To: <20220801080418.120311-1-rppt@kernel.org>
References: <20220801080418.120311-1-rppt@kernel.org>
From: Mike Rapoport

Drop the _hotplug infix from the unmap_hotplug_*() function names so that
these functions can also be used for remapping the crash kernel.

Signed-off-by: Mike Rapoport
---
 arch/arm64/mm/mmu.c | 22 +++++++++++-----------
 1 file changed, 11 insertions(+), 11 deletions(-)

diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
index d170b7956b01..baa2dda2dcce 100644
--- a/arch/arm64/mm/mmu.c
+++ b/arch/arm64/mm/mmu.c
@@ -861,7 +861,7 @@ static bool pgtable_range_aligned(unsigned long start, unsigned long end,
 	return true;
 }

-static void unmap_hotplug_pte_range(pmd_t *pmdp, unsigned long addr,
+static void unmap_pte_range(pmd_t *pmdp, unsigned long addr,
 			    unsigned long end, bool free_mapped,
 			    struct vmem_altmap *altmap)
 {
@@ -882,7 +882,7 @@ static void unmap_hotplug_pte_range(pmd_t *pmdp, unsigned long addr,
 	} while (addr += PAGE_SIZE, addr < end);
 }

-static void unmap_hotplug_pmd_range(pud_t *pudp, unsigned long addr,
+static void unmap_pmd_range(pud_t *pudp, unsigned long addr,
 			    unsigned long end, bool free_mapped,
 			    struct vmem_altmap *altmap)
 {
@@ -911,11 +911,11 @@ static void unmap_hotplug_pmd_range(pud_t *pudp, unsigned long addr,
 			continue;
 		}
 		WARN_ON(!pmd_table(pmd));
-		unmap_hotplug_pte_range(pmdp, addr, next, free_mapped, altmap);
+		unmap_pte_range(pmdp, addr, next, free_mapped, altmap);
 	} while (addr = next, addr < end);
 }

-static void unmap_hotplug_pud_range(p4d_t *p4dp, unsigned long addr,
+static void unmap_pud_range(p4d_t *p4dp, unsigned long addr,
 			    unsigned long end, bool free_mapped,
 			    struct vmem_altmap *altmap)
 {
@@ -944,11 +944,11 @@ static void unmap_hotplug_pud_range(p4d_t *p4dp, unsigned long addr,
 			continue;
 		}
 		WARN_ON(!pud_table(pud));
-		unmap_hotplug_pmd_range(pudp, addr, next, free_mapped, altmap);
+		unmap_pmd_range(pudp, addr, next, free_mapped, altmap);
 	} while (addr = next, addr < end);
 }

-static void unmap_hotplug_p4d_range(pgd_t *pgdp, unsigned long addr,
+static void unmap_p4d_range(pgd_t *pgdp, unsigned long addr,
 			    unsigned long end, bool free_mapped,
 			    struct vmem_altmap *altmap)
 {
@@ -963,11 +963,11 @@ static void unmap_hotplug_p4d_range(pgd_t *pgdp, unsigned long addr,
 			continue;

 		WARN_ON(!p4d_present(p4d));
-		unmap_hotplug_pud_range(p4dp, addr, next, free_mapped, altmap);
+		unmap_pud_range(p4dp, addr, next, free_mapped, altmap);
 	} while (addr = next, addr < end);
 }

-static void unmap_hotplug_range(unsigned long addr, unsigned long end,
+static void unmap_range(unsigned long addr, unsigned long end,
 				bool free_mapped, struct vmem_altmap *altmap)
 {
 	unsigned long next;
@@ -989,7 +989,7 @@ static void unmap_hotplug_range(unsigned long addr, unsigned long end,
 			continue;

 		WARN_ON(!pgd_present(pgd));
-		unmap_hotplug_p4d_range(pgdp, addr, next, free_mapped, altmap);
+		unmap_p4d_range(pgdp, addr, next, free_mapped, altmap);
 	} while (addr = next, addr < end);
 }

@@ -1208,7 +1208,7 @@ void vmemmap_free(unsigned long start, unsigned long end,
 {
 	WARN_ON((start < VMEMMAP_START) || (end > VMEMMAP_END));

-	unmap_hotplug_range(start, end, true, altmap);
+	unmap_range(start, end, true, altmap);
 	free_empty_tables(start, end, VMEMMAP_START, VMEMMAP_END);
 }
 #endif /* CONFIG_MEMORY_HOTPLUG */
@@ -1472,7 +1472,7 @@ static void __remove_pgd_mapping(pgd_t *pgdir, unsigned long start, u64 size)
 	WARN_ON(pgdir != init_mm.pgd);
 	WARN_ON((start < PAGE_OFFSET) || (end > PAGE_END));

-	unmap_hotplug_range(start, end, false, NULL);
+	unmap_range(start, end, false, NULL);
 	free_empty_tables(start, end, PAGE_OFFSET, PAGE_END);
 }
From patchwork Mon Aug 1 08:04:17 2022
X-Patchwork-Submitter: Mike Rapoport
X-Patchwork-Id: 12933611
From: Mike Rapoport
To: linux-arm-kernel@lists.infradead.org
Cc: Ard Biesheuvel, Catalin Marinas, Guanghui Feng, Mark Rutland,
    Mike Rapoport, Will Deacon, linux-kernel@vger.kernel.org,
    linux-mm@kvack.org
Subject: [RFC PATCH 3/4] arm64/mmu: move helpers for hotplug page tables freeing close to callers
Date: Mon, 1 Aug 2022 11:04:17 +0300
Message-Id: <20220801080418.120311-4-rppt@kernel.org>
In-Reply-To: <20220801080418.120311-1-rppt@kernel.org>
References: <20220801080418.120311-1-rppt@kernel.org>
From: Mike Rapoport

Move the helpers for freeing hotplug page tables close to their callers to
minimize extra ifdefery when the unmap_*() methods are used to remap the
crash kernel.

Signed-off-by: Mike Rapoport
---
 arch/arm64/mm/mmu.c | 50 ++++++++++++++++++++++-----------------------
 1 file changed, 25 insertions(+), 25 deletions(-)

diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
index baa2dda2dcce..2f548fb2244c 100644
--- a/arch/arm64/mm/mmu.c
+++ b/arch/arm64/mm/mmu.c
@@ -837,30 +837,6 @@ static void free_hotplug_page_range(struct page *page, size_t size,
 	}
 }

-static void free_hotplug_pgtable_page(struct page *page)
-{
-	free_hotplug_page_range(page, PAGE_SIZE, NULL);
-}
-
-static bool pgtable_range_aligned(unsigned long start, unsigned long end,
-				  unsigned long floor, unsigned long ceiling,
-				  unsigned long mask)
-{
-	start &= mask;
-	if (start < floor)
-		return false;
-
-	if (ceiling) {
-		ceiling &= mask;
-		if (!ceiling)
-			return false;
-	}
-
-	if (end - 1 > ceiling - 1)
-		return false;
-	return true;
-}
-
 static void unmap_pte_range(pmd_t *pmdp, unsigned long addr,
 			    unsigned long end, bool free_mapped,
 			    struct vmem_altmap *altmap)
@@ -993,6 +969,30 @@ static void unmap_range(unsigned long addr, unsigned long end,
 	} while (addr = next, addr < end);
 }

+static bool pgtable_range_aligned(unsigned long start, unsigned long end,
+				  unsigned long floor, unsigned long ceiling,
+				  unsigned long mask)
+{
+	start &= mask;
+	if (start < floor)
+		return false;
+
+	if (ceiling) {
+		ceiling &= mask;
+		if (!ceiling)
+			return false;
+	}
+
+	if (end - 1 > ceiling - 1)
+		return false;
+	return true;
+}
+
+static void free_hotplug_pgtable_page(struct page *page)
+{
+	free_hotplug_page_range(page, PAGE_SIZE, NULL);
+}
+
 static void free_empty_pte_table(pmd_t *pmdp, unsigned long addr,
 				 unsigned long end, unsigned long floor,
 				 unsigned long ceiling)
@@ -1146,7 +1146,7 @@ static void free_empty_tables(unsigned long addr, unsigned long end,
 		free_empty_p4d_table(pgdp, addr, next, floor, ceiling);
 	} while (addr = next, addr < end);
 }
-#endif
+#endif /* CONFIG_MEMORY_HOTPLUG */

 #if !ARM64_KERNEL_USES_PMD_MAPS
 int __meminit vmemmap_populate(unsigned long start, unsigned long end, int node,

From patchwork Mon Aug 1 08:04:18 2022
X-Patchwork-Submitter: Mike Rapoport
X-Patchwork-Id: 12933612
From: Mike Rapoport
To: linux-arm-kernel@lists.infradead.org
Cc: Ard Biesheuvel, Catalin Marinas, Guanghui Feng, Mark Rutland,
    Mike Rapoport, Will Deacon, linux-kernel@vger.kernel.org,
    linux-mm@kvack.org
Subject: [RFC PATCH 4/4] arm64/mm: remap crash kernel with base pages even if rodata_full disabled
Date: Mon, 1 Aug 2022 11:04:18 +0300
Message-Id: <20220801080418.120311-5-rppt@kernel.org>
In-Reply-To: <20220801080418.120311-1-rppt@kernel.org>
References: <20220801080418.120311-1-rppt@kernel.org>

From: Mike Rapoport

For server systems it is important to protect crash kernel memory for
post-mortem analysis, and to protect that memory it should be mapped at
PTE level.

When CONFIG_ZONE_DMA or CONFIG_ZONE_DMA32 is enabled, reserving a crash
kernel essentially forces mapping of the entire linear map with base pages
even if rodata_full is not set (commit 2687275a5843 ("arm64: Force
NO_BLOCK_MAPPINGS if crashkernel reservation is required")), and this
causes performance degradation.

With ZONE_DMA/DMA32 enabled, the crash kernel memory is reserved after the
linear map is created, but before multiprocessing and multithreading are
enabled, so it is safe to remap the crash kernel memory with base pages as
long as the page table entries that would be changed do not map memory
that might be accessed during the remapping.

To ensure there are no memory accesses in the range that will be remapped,
align the crash memory reservation to PUD_SIZE boundaries, remap the
entire PUD-aligned area and then free the memory that was allocated beyond
the crash_size requested by the user.

Signed-off-by: Mike Rapoport
---
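The alignment arithmetic described above can be checked with the
standalone sketch below. It is not part of the patch: PUD_SIZE is assumed
to be 1 GiB (arm64 with 4K pages and four page-table levels) and the
crashkernel= size is an arbitrary example.

/*
 * Illustration of how the PUD-aligned over-reservation is sized.
 * PUD_SIZE and crash_size below are assumptions for the example.
 */
#include <stdio.h>

#define PUD_SIZE	(1ULL << 30)			/* assumed 1 GiB */
#define ALIGN(x, a)	(((x) + (a) - 1) & ~((a) - 1))

int main(void)
{
	unsigned long long crash_size = 512ULL << 20;	/* crashkernel=512M */
	unsigned long long size = ALIGN(crash_size, PUD_SIZE);

	/*
	 * reserve_remap_crashkernel() allocates 'size' bytes aligned to
	 * PUD_SIZE, maps [base, base + crash_size) with base pages, maps
	 * [base + crash_size, base + size) with large pages, and then
	 * frees the tail back to memblock.
	 */
	printf("requested %llu MiB, reserved %llu MiB, freed back %llu MiB\n",
	       crash_size >> 20, size >> 20, (size - crash_size) >> 20);
	return 0;
}

With these example numbers the reservation momentarily holds 1 GiB for a
512 MiB request, which is why the tail beyond crash_size is freed as soon
as the remapping succeeds.
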
 arch/arm64/include/asm/mmu.h |  2 ++
 arch/arm64/mm/init.c         | 40 ++++++++++++++++++++++++++++++++++--
 arch/arm64/mm/mmu.c          | 40 +++++++++++++++++++++++++++++-----
 3 files changed, 75 insertions(+), 7 deletions(-)

diff --git a/arch/arm64/include/asm/mmu.h b/arch/arm64/include/asm/mmu.h
index 48f8466a4be9..d9829a7def69 100644
--- a/arch/arm64/include/asm/mmu.h
+++ b/arch/arm64/include/asm/mmu.h
@@ -71,6 +71,8 @@ extern void create_pgd_mapping(struct mm_struct *mm, phys_addr_t phys,
 extern void *fixmap_remap_fdt(phys_addr_t dt_phys, int *size, pgprot_t prot);
 extern void mark_linear_text_alias_ro(void);
 extern bool kaslr_requires_kpti(void);
+extern int remap_crashkernel(phys_addr_t start, phys_addr_t size,
+			     phys_addr_t aligned_size);

 #define INIT_MM_CONTEXT(name)	\
 	.pgd = init_pg_dir,
diff --git a/arch/arm64/mm/init.c b/arch/arm64/mm/init.c
index fa2260040c0f..be74e091bef7 100644
--- a/arch/arm64/mm/init.c
+++ b/arch/arm64/mm/init.c
@@ -40,6 +40,7 @@
 #include
 #include
 #include
+#include
 #include
 #include
 #include
@@ -116,6 +117,38 @@ static int __init reserve_crashkernel_low(unsigned long long low_size)
 	return 0;
 }

+static unsigned long long __init
+reserve_remap_crashkernel(unsigned long long crash_base,
+			  unsigned long long crash_size,
+			  unsigned long long crash_max)
+{
+	unsigned long long size;
+
+	if (!have_zone_dma())
+		return 0;
+
+	if (can_set_direct_map() || IS_ENABLED(CONFIG_KFENCE))
+		return 0;
+
+	if (crash_base)
+		return 0;
+
+	size = ALIGN(crash_size, PUD_SIZE);
+
+	crash_base = memblock_phys_alloc_range(size, PUD_SIZE, 0, crash_max);
+	if (!crash_base)
+		return 0;
+
+	if (remap_crashkernel(crash_base, crash_size, size)) {
+		memblock_phys_free(crash_base, size);
+		return 0;
+	}
+
+	memblock_phys_free(crash_base + crash_size, size - crash_size);
+
+	return crash_base;
+}
+
 /*
  * reserve_crashkernel() - reserves memory for crash kernel
  *
@@ -162,8 +195,11 @@ static void __init reserve_crashkernel(void)
 	if (crash_base)
 		crash_max = crash_base + crash_size;

-	crash_base = memblock_phys_alloc_range(crash_size, CRASH_ALIGN,
-					       crash_base, crash_max);
+	crash_base = reserve_remap_crashkernel(crash_base, crash_size,
+					       crash_max);
+	if (!crash_base)
+		crash_base = memblock_phys_alloc_range(crash_size, CRASH_ALIGN,
+						       crash_base, crash_max);
 	if (!crash_base) {
 		pr_warn("cannot allocate crashkernel (size:0x%llx)\n",
 			crash_size);
diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
index 2f548fb2244c..183936775fab 100644
--- a/arch/arm64/mm/mmu.c
+++ b/arch/arm64/mm/mmu.c
@@ -528,10 +528,8 @@ static void __init map_mem(pgd_t *pgdp)
 	memblock_mark_nomap(kernel_start, kernel_end - kernel_start);

 #ifdef CONFIG_KEXEC_CORE
-	if (crash_mem_map) {
-		if (have_zone_dma())
-			flags |= NO_BLOCK_MAPPINGS | NO_CONT_MAPPINGS;
-		else if (crashk_res.end)
+	if (crash_mem_map && !have_zone_dma()) {
+		if (crashk_res.end)
 			memblock_mark_nomap(crashk_res.start,
 					    resource_size(&crashk_res));
 	}
@@ -825,7 +823,7 @@ int kern_addr_valid(unsigned long addr)
 	return pfn_valid(pte_pfn(pte));
 }

-#ifdef CONFIG_MEMORY_HOTPLUG
+#if defined(CONFIG_MEMORY_HOTPLUG) || defined(CONFIG_KEXEC_CORE)
 static void free_hotplug_page_range(struct page *page, size_t size,
 				    struct vmem_altmap *altmap)
 {
@@ -968,7 +966,9 @@ static void unmap_range(unsigned long addr, unsigned long end,
 		unmap_p4d_range(pgdp, addr, next, free_mapped, altmap);
 	} while (addr = next, addr < end);
 }
+#endif /* CONFIG_MEMORY_HOTPLUG || CONFIG_KEXEC_CORE */

+#ifdef CONFIG_MEMORY_HOTPLUG
 static bool pgtable_range_aligned(unsigned long start, unsigned long end,
 				  unsigned long floor, unsigned long ceiling,
 				  unsigned long mask)
@@ -1213,6 +1213,36 @@ void vmemmap_free(unsigned long start, unsigned long end,
 }
 #endif /* CONFIG_MEMORY_HOTPLUG */

+int __init remap_crashkernel(phys_addr_t start, phys_addr_t size,
+			     phys_addr_t aligned_size)
+{
+#ifdef CONFIG_KEXEC_CORE
+	phys_addr_t end = start + size;
+	phys_addr_t aligned_end = start + aligned_size;
+
+	if (!IS_ALIGNED(start, PUD_SIZE) || !IS_ALIGNED(aligned_end, PUD_SIZE))
+		return -EINVAL;
+
+	/* Clear PUDs containing crash kernel memory */
+	unmap_range(__phys_to_virt(start), __phys_to_virt(aligned_end),
+		    false, NULL);
+
+	/* map crash kernel memory with base pages */
+	__create_pgd_mapping(swapper_pg_dir, start, __phys_to_virt(start),
+			     size, PAGE_KERNEL, early_pgtable_alloc,
+			     NO_EXEC_MAPPINGS | NO_BLOCK_MAPPINGS |
+			     NO_CONT_MAPPINGS);
+
+	/* map area from end of crash kernel to PUD end with large pages */
+	size = aligned_end - end;
+	if (size)
+		__create_pgd_mapping(swapper_pg_dir, end, __phys_to_virt(end),
+				     size, PAGE_KERNEL, early_pgtable_alloc, 0);
+#endif
+
+	return 0;
+}
+
 static inline pud_t *fixmap_pud(unsigned long addr)
 {
 	pgd_t *pgdp = pgd_offset_k(addr);