From patchwork Fri Aug 19 04:11:52 2022
X-Patchwork-Submitter: Mike Rapoport
X-Patchwork-Id: 12948296
From: Mike Rapoport <rppt@kernel.org>
To: linux-arm-kernel@lists.infradead.org
Cc: Ard Biesheuvel, Catalin Marinas, Guanghui Feng, Mark Rutland,
    Mike Rapoport, Will Deacon, linux-kernel@vger.kernel.org,
    linux-mm@kvack.org
Subject: [PATCH 1/5] arm64: rename defer_reserve_crashkernel() to have_zone_dma()
Date: Fri, 19 Aug 2022 07:11:52 +0300
Message-Id: <20220819041156.873873-2-rppt@kernel.org>
In-Reply-To: <20220819041156.873873-1-rppt@kernel.org>
References: <20220819041156.873873-1-rppt@kernel.org>

From: Mike Rapoport

The new name better describes what the function checks and no longer ties
its use to crash kernel reservations.

Signed-off-by: Mike Rapoport
---
 arch/arm64/include/asm/memory.h | 2 +-
 arch/arm64/mm/init.c            | 4 ++--
 arch/arm64/mm/mmu.c             | 4 ++--
 3 files changed, 5 insertions(+), 5 deletions(-)

diff --git a/arch/arm64/include/asm/memory.h b/arch/arm64/include/asm/memory.h
index 9dd08cd339c3..27fce129b97e 100644
--- a/arch/arm64/include/asm/memory.h
+++ b/arch/arm64/include/asm/memory.h
@@ -364,7 +364,7 @@ static inline void *phys_to_virt(phys_addr_t x)
 
 void dump_mem_limit(void);
 
-static inline bool defer_reserve_crashkernel(void)
+static inline bool have_zone_dma(void)
 {
         return IS_ENABLED(CONFIG_ZONE_DMA) || IS_ENABLED(CONFIG_ZONE_DMA32);
 }
diff --git a/arch/arm64/mm/init.c b/arch/arm64/mm/init.c
index b9af30be813e..a6585d50a76c 100644
--- a/arch/arm64/mm/init.c
+++ b/arch/arm64/mm/init.c
@@ -389,7 +389,7 @@ void __init arm64_memblock_init(void)
 
         early_init_fdt_scan_reserved_mem();
 
-        if (!defer_reserve_crashkernel())
+        if (!have_zone_dma())
                 reserve_crashkernel();
 
         high_memory = __va(memblock_end_of_DRAM() - 1) + 1;
@@ -438,7 +438,7 @@ void __init bootmem_init(void)
          * request_standard_resources() depends on crashkernel's memory being
          * reserved, so do it here.
          */
-        if (defer_reserve_crashkernel())
+        if (have_zone_dma())
                 reserve_crashkernel();
 
         memblock_dump_all();
diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
index db7c4e6ae57b..bf303f1dea25 100644
--- a/arch/arm64/mm/mmu.c
+++ b/arch/arm64/mm/mmu.c
@@ -548,7 +548,7 @@ static void __init map_mem(pgd_t *pgdp)
 
 #ifdef CONFIG_KEXEC_CORE
         if (crash_mem_map) {
-                if (defer_reserve_crashkernel())
+                if (have_zone_dma())
                         flags |= NO_BLOCK_MAPPINGS | NO_CONT_MAPPINGS;
                 else if (crashk_res.end)
                         memblock_mark_nomap(crashk_res.start,
@@ -589,7 +589,7 @@ static void __init map_mem(pgd_t *pgdp)
          * through /sys/kernel/kexec_crash_size interface.
          */
 #ifdef CONFIG_KEXEC_CORE
-        if (crash_mem_map && !defer_reserve_crashkernel()) {
+        if (crash_mem_map && !have_zone_dma()) {
                 if (crashk_res.end) {
                         __map_memblock(pgdp, crashk_res.start,
                                        crashk_res.end + 1,

From patchwork Fri Aug 19 04:11:53 2022
X-Patchwork-Submitter: Mike Rapoport
X-Patchwork-Id: 12948297
From: Mike Rapoport <rppt@kernel.org>
To: linux-arm-kernel@lists.infradead.org
Cc: Ard Biesheuvel, Catalin Marinas, Guanghui Feng, Mark Rutland,
    Mike Rapoport, Will Deacon, linux-kernel@vger.kernel.org,
    linux-mm@kvack.org
Subject: [PATCH 2/5] arm64/mmu: drop _hotplug from unmap_hotplug_* function names
Date: Fri, 19 Aug 2022 07:11:53 +0300
Message-Id: <20220819041156.873873-3-rppt@kernel.org>
In-Reply-To: <20220819041156.873873-1-rppt@kernel.org>
References: <20220819041156.873873-1-rppt@kernel.org>

From: Mike Rapoport

so that they can be used for remapping the crash kernel memory as well.

Signed-off-by: Mike Rapoport
---
 arch/arm64/mm/mmu.c | 22 +++++++++++-----------
 1 file changed, 11 insertions(+), 11 deletions(-)

diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
index bf303f1dea25..ea81e40a25cd 100644
--- a/arch/arm64/mm/mmu.c
+++ b/arch/arm64/mm/mmu.c
@@ -911,7 +911,7 @@ static bool pgtable_range_aligned(unsigned long start, unsigned long end,
         return true;
 }
 
-static void unmap_hotplug_pte_range(pmd_t *pmdp, unsigned long addr,
+static void unmap_pte_range(pmd_t *pmdp, unsigned long addr,
                                     unsigned long end, bool free_mapped,
                                     struct vmem_altmap *altmap)
 {
@@ -932,7 +932,7 @@ static void unmap_hotplug_pte_range(pmd_t *pmdp, unsigned long addr,
         } while (addr += PAGE_SIZE, addr < end);
 }
 
-static void unmap_hotplug_pmd_range(pud_t *pudp, unsigned long addr,
+static void unmap_pmd_range(pud_t *pudp, unsigned long addr,
                                     unsigned long end, bool free_mapped,
                                     struct vmem_altmap *altmap)
 {
@@ -961,11 +961,11 @@ static void unmap_hotplug_pmd_range(pud_t *pudp, unsigned long addr,
                         continue;
                 }
                 WARN_ON(!pmd_table(pmd));
-                unmap_hotplug_pte_range(pmdp, addr, next, free_mapped, altmap);
+                unmap_pte_range(pmdp, addr, next, free_mapped, altmap);
         } while (addr = next, addr < end);
 }
 
-static void unmap_hotplug_pud_range(p4d_t *p4dp, unsigned long addr,
+static void unmap_pud_range(p4d_t *p4dp, unsigned long addr,
                                     unsigned long end, bool free_mapped,
                                     struct vmem_altmap *altmap)
 {
@@ -994,11 +994,11 @@ static void unmap_hotplug_pud_range(p4d_t *p4dp, unsigned long addr,
                         continue;
                 }
                 WARN_ON(!pud_table(pud));
-                unmap_hotplug_pmd_range(pudp, addr, next, free_mapped, altmap);
+                unmap_pmd_range(pudp, addr, next, free_mapped, altmap);
         } while (addr = next, addr < end);
 }
 
-static void unmap_hotplug_p4d_range(pgd_t *pgdp, unsigned long addr,
+static void unmap_p4d_range(pgd_t *pgdp, unsigned long addr,
                                     unsigned long end, bool free_mapped,
                                     struct vmem_altmap *altmap)
 {
@@ -1013,11 +1013,11 @@ static void unmap_hotplug_p4d_range(pgd_t *pgdp, unsigned long addr,
                         continue;
 
                 WARN_ON(!p4d_present(p4d));
-                unmap_hotplug_pud_range(p4dp, addr, next, free_mapped, altmap);
+                unmap_pud_range(p4dp, addr, next, free_mapped, altmap);
         } while (addr = next, addr < end);
 }
 
-static void unmap_hotplug_range(unsigned long addr, unsigned long end,
+static void unmap_range(unsigned long addr, unsigned long end,
                                 bool free_mapped, struct vmem_altmap *altmap)
 {
         unsigned long next;
@@ -1039,7 +1039,7 @@ static void unmap_hotplug_range(unsigned long addr, unsigned long end,
                         continue;
 
                 WARN_ON(!pgd_present(pgd));
-                unmap_hotplug_p4d_range(pgdp, addr, next, free_mapped, altmap);
+                unmap_p4d_range(pgdp, addr, next, free_mapped, altmap);
         } while (addr = next, addr < end);
 }
 
@@ -1258,7 +1258,7 @@ void vmemmap_free(unsigned long start, unsigned long end,
 {
         WARN_ON((start < VMEMMAP_START) || (end > VMEMMAP_END));
 
-        unmap_hotplug_range(start, end, true, altmap);
+        unmap_range(start, end, true, altmap);
         free_empty_tables(start, end, VMEMMAP_START, VMEMMAP_END);
 }
 #endif /* CONFIG_MEMORY_HOTPLUG */
@@ -1522,7 +1522,7 @@ static void __remove_pgd_mapping(pgd_t *pgdir, unsigned long start, u64 size)
         WARN_ON(pgdir != init_mm.pgd);
         WARN_ON((start < PAGE_OFFSET) || (end > PAGE_END));
 
-        unmap_hotplug_range(start, end, false, NULL);
+        unmap_range(start, end, false, NULL);
         free_empty_tables(start, end, PAGE_OFFSET, PAGE_END);
 }

From patchwork Fri Aug 19 04:11:54 2022
X-Patchwork-Submitter: Mike Rapoport
X-Patchwork-Id: 12948298
From: Mike Rapoport <rppt@kernel.org>
To: linux-arm-kernel@lists.infradead.org
Cc: Ard Biesheuvel, Catalin Marinas, Guanghui Feng, Mark Rutland,
    Mike Rapoport, Will Deacon, linux-kernel@vger.kernel.org,
    linux-mm@kvack.org
Subject: [PATCH 3/5] arm64/mmu: move helpers for hotplug page tables freeing close to callers
Date: Fri, 19 Aug 2022 07:11:54 +0300
Message-Id: <20220819041156.873873-4-rppt@kernel.org>
In-Reply-To: <20220819041156.873873-1-rppt@kernel.org>
References: <20220819041156.873873-1-rppt@kernel.org>

From: Mike Rapoport

to minimize extra ifdefery when the unmap_*() methods are used to remap
the crash kernel memory.

Signed-off-by: Mike Rapoport
---
 arch/arm64/mm/mmu.c | 50 ++++++++++++++++++++++-----------------------
 1 file changed, 25 insertions(+), 25 deletions(-)

diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
index ea81e40a25cd..92267e5e9b5f 100644
--- a/arch/arm64/mm/mmu.c
+++ b/arch/arm64/mm/mmu.c
@@ -887,30 +887,6 @@ static void free_hotplug_page_range(struct page *page, size_t size,
         }
 }
 
-static void free_hotplug_pgtable_page(struct page *page)
-{
-        free_hotplug_page_range(page, PAGE_SIZE, NULL);
-}
-
-static bool pgtable_range_aligned(unsigned long start, unsigned long end,
-                                  unsigned long floor, unsigned long ceiling,
-                                  unsigned long mask)
-{
-        start &= mask;
-        if (start < floor)
-                return false;
-
-        if (ceiling) {
-                ceiling &= mask;
-                if (!ceiling)
-                        return false;
-        }
-
-        if (end - 1 > ceiling - 1)
-                return false;
-        return true;
-}
-
 static void unmap_pte_range(pmd_t *pmdp, unsigned long addr,
                             unsigned long end, bool free_mapped,
                             struct vmem_altmap *altmap)
@@ -1043,6 +1019,30 @@ static void unmap_range(unsigned long addr, unsigned long end,
         } while (addr = next, addr < end);
 }
 
+static bool pgtable_range_aligned(unsigned long start, unsigned long end,
+                                  unsigned long floor, unsigned long ceiling,
+                                  unsigned long mask)
+{
+        start &= mask;
+        if (start < floor)
+                return false;
+
+        if (ceiling) {
+                ceiling &= mask;
+                if (!ceiling)
+                        return false;
+        }
+
+        if (end - 1 > ceiling - 1)
+                return false;
+        return true;
+}
+
+static void free_hotplug_pgtable_page(struct page *page)
+{
+        free_hotplug_page_range(page, PAGE_SIZE, NULL);
+}
+
 static void free_empty_pte_table(pmd_t *pmdp, unsigned long addr,
                                  unsigned long end, unsigned long floor,
                                  unsigned long ceiling)
@@ -1196,7 +1196,7 @@ static void free_empty_tables(unsigned long addr, unsigned long end,
                 free_empty_p4d_table(pgdp, addr, next, floor, ceiling);
         } while (addr = next, addr < end);
 }
-#endif
+#endif /* CONFIG_MEMORY_HOTPLUG */
 
 #if !ARM64_KERNEL_USES_PMD_MAPS
 int __meminit vmemmap_populate(unsigned long start, unsigned long end, int node,

From patchwork Fri Aug 19 04:11:55 2022
X-Patchwork-Submitter: Mike Rapoport
X-Patchwork-Id: 12948300
From: Mike Rapoport <rppt@kernel.org>
To: linux-arm-kernel@lists.infradead.org
Cc: Ard Biesheuvel, Catalin Marinas, Guanghui Feng, Mark Rutland,
    Mike Rapoport, Will Deacon, linux-kernel@vger.kernel.org,
    linux-mm@kvack.org
Subject: [PATCH 4/5] arm64/mm: remap crash kernel with base pages even if rodata_full disabled
Date: Fri, 19 Aug 2022 07:11:55 +0300
Message-Id: <20220819041156.873873-5-rppt@kernel.org>
In-Reply-To: <20220819041156.873873-1-rppt@kernel.org>
References: <20220819041156.873873-1-rppt@kernel.org>

From: Mike Rapoport

For server systems it is important to protect crash kernel memory for
post-mortem analysis. In order to protect this memory it should be mapped
at PTE level.

When CONFIG_ZONE_DMA or CONFIG_ZONE_DMA32 is enabled, using a crash kernel
essentially forces mapping of the entire linear map with base pages even if
rodata_full is not set (commit 2687275a5843 ("arm64: Force NO_BLOCK_MAPPINGS
if crashkernel reservation is required")) and this causes performance
degradation.

With ZONE_DMA/DMA32 enabled, the crash kernel memory is reserved after the
linear map is created, but before multiprocessing and multithreading are
enabled, so it is safe to remap the crash kernel memory with base pages as
long as the page table entries that would be changed do not map the memory
that might be accessed during the remapping.

To ensure there are no memory accesses in the range that will be remapped,
align the crash kernel memory reservation to PUD_SIZE boundaries, remap the
entire PUD-aligned area and then free the memory that was allocated beyond
the crash_size requested by the user.

Signed-off-by: Mike Rapoport
---
 arch/arm64/include/asm/mmu.h      |  3 ++
 arch/arm64/kernel/machine_kexec.c |  6 +++
 arch/arm64/mm/init.c              | 65 +++++++++++++++++++++++++------
 arch/arm64/mm/mmu.c               | 40 ++++++++++++++++---
 4 files changed, 98 insertions(+), 16 deletions(-)

diff --git a/arch/arm64/include/asm/mmu.h b/arch/arm64/include/asm/mmu.h
index 48f8466a4be9..aba3c095272e 100644
--- a/arch/arm64/include/asm/mmu.h
+++ b/arch/arm64/include/asm/mmu.h
@@ -71,6 +71,9 @@ extern void create_pgd_mapping(struct mm_struct *mm, phys_addr_t phys,
 extern void *fixmap_remap_fdt(phys_addr_t dt_phys, int *size, pgprot_t prot);
 extern void mark_linear_text_alias_ro(void);
 extern bool kaslr_requires_kpti(void);
+extern int remap_crashkernel(phys_addr_t start, phys_addr_t size,
+                             phys_addr_t aligned_size);
+extern bool crashkres_protection_possible;
 
 #define INIT_MM_CONTEXT(name) \
         .pgd = init_pg_dir,
diff --git a/arch/arm64/kernel/machine_kexec.c b/arch/arm64/kernel/machine_kexec.c
index 19c2d487cb08..68295403aa40 100644
--- a/arch/arm64/kernel/machine_kexec.c
+++ b/arch/arm64/kernel/machine_kexec.c
@@ -272,6 +272,9 @@ void arch_kexec_protect_crashkres(void)
 {
         int i;
 
+        if (!crashkres_protection_possible)
+                return;
+
         for (i = 0; i < kexec_crash_image->nr_segments; i++)
                 set_memory_valid(
                         __phys_to_virt(kexec_crash_image->segment[i].mem),
@@ -282,6 +285,9 @@ void arch_kexec_unprotect_crashkres(void)
 {
         int i;
 
+        if (!crashkres_protection_possible)
+                return;
+
         for (i = 0; i < kexec_crash_image->nr_segments; i++)
                 set_memory_valid(
                         __phys_to_virt(kexec_crash_image->segment[i].mem),
diff --git a/arch/arm64/mm/init.c b/arch/arm64/mm/init.c
index a6585d50a76c..d5d647aaf23b 100644
--- a/arch/arm64/mm/init.c
+++ b/arch/arm64/mm/init.c
@@ -40,6 +40,7 @@
 #include 
 #include 
 #include 
+#include 
 #include 
 #include 
 #include 
@@ -70,19 +71,19 @@ EXPORT_SYMBOL(memstart_addr);
  * crash kernel memory which has a dependency on arm64_dma_phys_limit.
  * Reserving memory early for crash kernel allows linear creation of block
  * mappings (greater than page-granularity) for all the memory bank rangs.
- * In this scheme a comparatively quicker boot is observed.
+ * In this scheme a comparatively quicker boot is observed and overall
+ * memory access via the linear map is more efficient as there is less TLB
+ * pressure.
  *
  * If ZONE_DMA configs are defined, crash kernel memory reservation
  * is delayed until DMA zone memory range size initialization performed in
  * zone_sizes_init(). The defer is necessary to steer clear of DMA zone
- * memory range to avoid overlap allocation. So crash kernel memory boundaries
- * are not known when mapping all bank memory ranges, which otherwise means
- * not possible to exclude crash kernel range from creating block mappings
- * so page-granularity mappings are created for the entire memory range.
- * Hence a slightly slower boot is observed.
- *
- * Note: Page-granularity mappings are necessary for crash kernel memory
- * range for shrinking its size via /sys/kernel/kexec_crash_size interface.
+ * memory range to avoid overlap allocation. To keep block mappings in the
+ * linear map, the first reservation attempt tries to allocate PUD-aligned
+ * region so that it would be possible to remap crash kernel memory with
+ * base pages. If there is not enough memory for such extended reservation,
+ * the exact amount of memory is reserved and crash kernel protection is
+ * disabled.
  */
 #if IS_ENABLED(CONFIG_ZONE_DMA) || IS_ENABLED(CONFIG_ZONE_DMA32)
 phys_addr_t __ro_after_init arm64_dma_phys_limit;
@@ -90,6 +91,8 @@ phys_addr_t __ro_after_init arm64_dma_phys_limit;
 phys_addr_t __ro_after_init arm64_dma_phys_limit = PHYS_MASK + 1;
 #endif
 
+bool __ro_after_init crashkres_protection_possible;
+
 /* Current arm64 boot protocol requires 2MB alignment */
 #define CRASH_ALIGN                     SZ_2M
 
@@ -116,6 +119,43 @@ static int __init reserve_crashkernel_low(unsigned long long low_size)
         return 0;
 }
 
+static unsigned long long __init
+reserve_remap_crashkernel(unsigned long long crash_base,
+                          unsigned long long crash_size,
+                          unsigned long long crash_max)
+{
+        unsigned long long size;
+
+        /*
+         * If linear map uses base pages or there is no ZONE_DMA/ZONE_DMA32
+         * the crashk_res will be mapped with PTEs in mmu::map_mem()
+         */
+        if (can_set_direct_map() || IS_ENABLED(CONFIG_KFENCE) ||
+            !have_zone_dma()) {
+                crashkres_protection_possible = true;
+                return 0;
+        }
+
+        if (crash_base)
+                return 0;
+
+        size = ALIGN(crash_size, PUD_SIZE);
+
+        crash_base = memblock_phys_alloc_range(size, PUD_SIZE, 0, crash_max);
+        if (!crash_base)
+                return 0;
+
+        if (remap_crashkernel(crash_base, crash_size, size)) {
+                memblock_phys_free(crash_base, size);
+                return 0;
+        }
+
+        crashkres_protection_possible = true;
+        memblock_phys_free(crash_base + crash_size, size - crash_size);
+
+        return crash_base;
+}
+
 /*
  * reserve_crashkernel() - reserves memory for crash kernel
  *
@@ -162,8 +202,11 @@ static void __init reserve_crashkernel(void)
         if (crash_base)
                 crash_max = crash_base + crash_size;
 
-        crash_base = memblock_phys_alloc_range(crash_size, CRASH_ALIGN,
-                                               crash_base, crash_max);
+        crash_base = reserve_remap_crashkernel(crash_base, crash_size,
+                                               crash_max);
+        if (!crash_base)
+                crash_base = memblock_phys_alloc_range(crash_size, CRASH_ALIGN,
+                                                       crash_base, crash_max);
         if (!crash_base) {
                 pr_warn("cannot allocate crashkernel (size:0x%llx)\n",
                         crash_size);
diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
index 92267e5e9b5f..83f2f18f7f34 100644
--- a/arch/arm64/mm/mmu.c
+++ b/arch/arm64/mm/mmu.c
@@ -547,10 +547,8 @@ static void __init map_mem(pgd_t *pgdp)
         memblock_mark_nomap(kernel_start, kernel_end - kernel_start);
 
 #ifdef CONFIG_KEXEC_CORE
-        if (crash_mem_map) {
-                if (have_zone_dma())
-                        flags |= NO_BLOCK_MAPPINGS | NO_CONT_MAPPINGS;
-                else if (crashk_res.end)
+        if (crash_mem_map && !have_zone_dma()) {
+                if (crashk_res.end)
                         memblock_mark_nomap(crashk_res.start,
                                             resource_size(&crashk_res));
         }
@@ -875,7 +873,7 @@ int kern_addr_valid(unsigned long addr)
         return pfn_valid(pte_pfn(pte));
 }
 
-#ifdef CONFIG_MEMORY_HOTPLUG
+#if defined(CONFIG_MEMORY_HOTPLUG) || defined(CONFIG_KEXEC_CORE)
 static void free_hotplug_page_range(struct page *page, size_t size,
                                     struct vmem_altmap *altmap)
 {
@@ -1018,7 +1016,9 @@ static void unmap_range(unsigned long addr, unsigned long end,
                 unmap_p4d_range(pgdp, addr, next, free_mapped, altmap);
         } while (addr = next, addr < end);
 }
+#endif /* CONFIG_MEMORY_HOTPLUG || CONFIG_KEXEC_CORE */
 
+#ifdef CONFIG_MEMORY_HOTPLUG
 static bool pgtable_range_aligned(unsigned long start, unsigned long end,
                                   unsigned long floor, unsigned long ceiling,
                                   unsigned long mask)
@@ -1263,6 +1263,36 @@ void vmemmap_free(unsigned long start, unsigned long end,
 }
 #endif /* CONFIG_MEMORY_HOTPLUG */
 
+int __init remap_crashkernel(phys_addr_t start, phys_addr_t size,
+                             phys_addr_t aligned_size)
+{
+#ifdef CONFIG_KEXEC_CORE
+        phys_addr_t end = start + size;
+        phys_addr_t aligned_end = start + aligned_size;
+
+        if (!IS_ALIGNED(start, PUD_SIZE) || !IS_ALIGNED(aligned_end, PUD_SIZE))
+                return -EINVAL;
+
+        /* Clear PUDs containing crash kernel memory */
+        unmap_range(__phys_to_virt(start), __phys_to_virt(aligned_end),
+                    false, NULL);
+
+        /* map crash kernel memory with base pages */
+        __create_pgd_mapping(swapper_pg_dir, start, __phys_to_virt(start),
+                             size, PAGE_KERNEL, early_pgtable_alloc,
+                             NO_EXEC_MAPPINGS | NO_BLOCK_MAPPINGS |
+                             NO_CONT_MAPPINGS);
+
+        /* map area from end of crash kernel to PUD end with large pages */
+        size = aligned_end - end;
+        if (size)
+                __create_pgd_mapping(swapper_pg_dir, end, __phys_to_virt(end),
+                                     size, PAGE_KERNEL, early_pgtable_alloc, 0);
+#endif
+
+        return 0;
+}
+
 static inline pud_t *fixmap_pud(unsigned long addr)
 {
         pgd_t *pgdp = pgd_offset_k(addr);

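To make the rounding arithmetic in this patch concrete, here is a small
standalone sketch of the PUD alignment it performs. It is illustrative only
and not part of the series; it assumes a 4K page configuration, where
PUD_SIZE is 1GiB, and redefines ALIGN() locally just for the example.

/*
 * Illustrative sketch (not part of the patch series): shows how a
 * crashkernel request is padded up to PUD_SIZE for the first reservation
 * attempt and how much is freed back afterwards.
 * Assumption: 4K pages, where PUD_SIZE is 1GiB.
 */
#include <stdio.h>

#define SZ_1G        (1ULL << 30)
#define PUD_SIZE     SZ_1G                              /* assumption */
#define ALIGN(x, a)  (((x) + (a) - 1) & ~((a) - 1))

int main(void)
{
        unsigned long long crash_size = 512ULL << 20;   /* crashkernel=512M */
        unsigned long long size = ALIGN(crash_size, PUD_SIZE);

        /* the patch reserves 'size', remaps it, then frees the tail */
        printf("reserved: %llu MiB, freed back: %llu MiB\n",
               size >> 20, (size - crash_size) >> 20);
        return 0;
}

With a crashkernel=512M request this prints "reserved: 1024 MiB, freed back:
512 MiB": the first attempt reserves a full PUD-aligned 1GiB region, remaps
it with base pages, and immediately returns the unused 512MiB to memblock.
Only if that larger allocation or the remap fails does reserve_crashkernel()
fall back to the exact-size, CRASH_ALIGN (2MiB) reservation with
crashkres_protection_possible left disabled.
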
From patchwork Fri Aug 19 04:11:56 2022
X-Patchwork-Submitter: Mike Rapoport
X-Patchwork-Id: 12948299
From: Mike Rapoport <rppt@kernel.org>
To: linux-arm-kernel@lists.infradead.org
Cc: Ard Biesheuvel, Catalin Marinas, Guanghui Feng, Mark Rutland,
    Mike Rapoport, Will Deacon, linux-kernel@vger.kernel.org,
    linux-mm@kvack.org
Subject: [PATCH 5/5] arm64/mmu: simplify logic around crash kernel mapping in map_mem()
Date: Fri, 19 Aug 2022 07:11:56 +0300
Message-Id: <20220819041156.873873-6-rppt@kernel.org>
In-Reply-To: <20220819041156.873873-1-rppt@kernel.org>
References: <20220819041156.873873-1-rppt@kernel.org>

From: Mike Rapoport

The checks for the crashkernel command line parameter and for the presence
of CONFIG_ZONE_DMA[32] in mmu::map_mem() are not necessary: by the time
map_mem() runs, crashk_res.end is set only if reserve_crashkernel() was
called from arm64_memblock_init() and only if a valid crashkernel parameter
was present on the command line.

Leave only the check that crashk_res.end is non-zero to decide whether
crash kernel memory should be mapped with base pages.

Signed-off-by: Mike Rapoport
---
 arch/arm64/mm/mmu.c | 44 ++++++++++++--------------------------------
 1 file changed, 12 insertions(+), 32 deletions(-)

diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
index 83f2f18f7f34..fa23cfa6b772 100644
--- a/arch/arm64/mm/mmu.c
+++ b/arch/arm64/mm/mmu.c
@@ -502,21 +502,6 @@ void __init mark_linear_text_alias_ro(void)
                             PAGE_KERNEL_RO);
 }
 
-static bool crash_mem_map __initdata;
-
-static int __init enable_crash_mem_map(char *arg)
-{
-        /*
-         * Proper parameter parsing is done by reserve_crashkernel(). We only
-         * need to know if the linear map has to avoid block mappings so that
-         * the crashkernel reservations can be unmapped later.
-         */
-        crash_mem_map = true;
-
-        return 0;
-}
-early_param("crashkernel", enable_crash_mem_map);
-
 static void __init map_mem(pgd_t *pgdp)
 {
         static const u64 direct_map_end = _PAGE_END(VA_BITS_MIN);
@@ -547,11 +532,9 @@ static void __init map_mem(pgd_t *pgdp)
         memblock_mark_nomap(kernel_start, kernel_end - kernel_start);
 
 #ifdef CONFIG_KEXEC_CORE
-        if (crash_mem_map && !have_zone_dma()) {
-                if (crashk_res.end)
-                        memblock_mark_nomap(crashk_res.start,
-                                            resource_size(&crashk_res));
-        }
+        if (crashk_res.end)
+                memblock_mark_nomap(crashk_res.start,
+                                    resource_size(&crashk_res));
 #endif
 
         /* map all the memory banks */
@@ -582,20 +565,17 @@ static void __init map_mem(pgd_t *pgdp)
         memblock_clear_nomap(kernel_start, kernel_end - kernel_start);
 
         /*
-         * Use page-level mappings here so that we can shrink the region
-         * in page granularity and put back unused memory to buddy system
-         * through /sys/kernel/kexec_crash_size interface.
+         * Use page-level mappings here so that we can protect crash kernel
+         * memory to allow post-mortem analysis when things go awry.
          */
 #ifdef CONFIG_KEXEC_CORE
-        if (crash_mem_map && !have_zone_dma()) {
-                if (crashk_res.end) {
-                        __map_memblock(pgdp, crashk_res.start,
-                                       crashk_res.end + 1,
-                                       PAGE_KERNEL,
-                                       NO_BLOCK_MAPPINGS | NO_CONT_MAPPINGS);
-                        memblock_clear_nomap(crashk_res.start,
-                                             resource_size(&crashk_res));
-                }
+        if (crashk_res.end) {
+                __map_memblock(pgdp, crashk_res.start,
+                               crashk_res.end + 1,
+                               PAGE_KERNEL,
+                               NO_BLOCK_MAPPINGS | NO_CONT_MAPPINGS);
+                memblock_clear_nomap(crashk_res.start,
+                                     resource_size(&crashk_res));
         }
 #endif
 }