From patchwork Sun Jun 28 08:34:54 2020
X-Patchwork-Submitter: chenzhou
X-Patchwork-Id: 11629949
From: Chen Zhou
Subject: [PATCH v9 1/5] x86: kdump: move reserve_crashkernel_low() into crash_core.c
Date: Sun, 28 Jun 2020 16:34:54 +0800
Message-ID: <20200628083458.40066-2-chenzhou10@huawei.com>
In-Reply-To: <20200628083458.40066-1-chenzhou10@huawei.com>
References: <20200628083458.40066-1-chenzhou10@huawei.com>
Cc: xiexiuqi@huawei.com, chenzhou10@huawei.com, linux-doc@vger.kernel.org,
 kexec@lists.infradead.org, linux-kernel@vger.kernel.org,
 huawei.libin@huawei.com, guohanjun@huawei.com,
 linux-arm-kernel@lists.infradead.org

In preparation for supporting reserve_crashkernel_low() on arm64 as x86_64
does, move reserve_crashkernel_low() into kernel/crash_core.c.

Also move the x86_64 CRASH_ALIGN to 2M, as suggested by Dave:
CONFIG_PHYSICAL_ALIGN can be selected from 2M to 16M, so use the same
alignment as arm64.

Note that on arm64 we reserve low memory if and only if crashkernel=X,low
is specified; unlike x86_64, low memory is not set up automatically.

Reported-by: kbuild test robot
Signed-off-by: Chen Zhou
Tested-by: John Donnelly
Tested-by: Prabhakar Kushwaha
Acked-by: Dave Young
---
 arch/x86/kernel/setup.c    | 66 ++++-------------------------
 include/linux/crash_core.h |  3 ++
 include/linux/kexec.h      |  2 -
 kernel/crash_core.c        | 85 +++++++++++++++++++++++++++++++++++++
 kernel/kexec_core.c        | 17 --------
 5 files changed, 96 insertions(+), 77 deletions(-)

diff --git a/arch/x86/kernel/setup.c b/arch/x86/kernel/setup.c
index a3767e74c758..33db99ae3035 100644
--- a/arch/x86/kernel/setup.c
+++ b/arch/x86/kernel/setup.c
@@ -401,8 +401,8 @@ static void __init memblock_x86_reserve_range_setup_data(void)
 
 #ifdef CONFIG_KEXEC_CORE
 
-/* 16M alignment for crash kernel regions */
-#define CRASH_ALIGN		SZ_16M
+/* 2M alignment for crash kernel regions */
+#define CRASH_ALIGN		SZ_2M
 
 /*
  * Keep the crash kernel below this limit.
@@ -425,59 +425,6 @@ static void __init memblock_x86_reserve_range_setup_data(void)
 # define CRASH_ADDR_HIGH_MAX	SZ_64T
 #endif
 
-static int __init reserve_crashkernel_low(void)
-{
-#ifdef CONFIG_X86_64
-	unsigned long long base, low_base = 0, low_size = 0;
-	unsigned long total_low_mem;
-	int ret;
-
-	total_low_mem = memblock_mem_size(1UL << (32 - PAGE_SHIFT));
-
-	/* crashkernel=Y,low */
-	ret = parse_crashkernel_low(boot_command_line, total_low_mem, &low_size, &base);
-	if (ret) {
-		/*
-		 * two parts from kernel/dma/swiotlb.c:
-		 * -swiotlb size: user-specified with swiotlb= or default.
-		 *
-		 * -swiotlb overflow buffer: now hardcoded to 32k. We round it
-		 * to 8M for other buffers that may need to stay low too. Also
-		 * make sure we allocate enough extra low memory so that we
-		 * don't run out of DMA buffers for 32-bit devices.
-		 */
-		low_size = max(swiotlb_size_or_default() + (8UL << 20), 256UL << 20);
-	} else {
-		/* passed with crashkernel=0,low ?
*/ - if (!low_size) - return 0; - } - - low_base = memblock_find_in_range(0, 1ULL << 32, low_size, CRASH_ALIGN); - if (!low_base) { - pr_err("Cannot reserve %ldMB crashkernel low memory, please try smaller size.\n", - (unsigned long)(low_size >> 20)); - return -ENOMEM; - } - - ret = memblock_reserve(low_base, low_size); - if (ret) { - pr_err("%s: Error reserving crashkernel low memblock.\n", __func__); - return ret; - } - - pr_info("Reserving %ldMB of low memory at %ldMB for crashkernel (System low RAM: %ldMB)\n", - (unsigned long)(low_size >> 20), - (unsigned long)(low_base >> 20), - (unsigned long)(total_low_mem >> 20)); - - crashk_low_res.start = low_base; - crashk_low_res.end = low_base + low_size - 1; - insert_resource(&iomem_resource, &crashk_low_res); -#endif - return 0; -} - static void __init reserve_crashkernel(void) { unsigned long long crash_size, crash_base, total_mem; @@ -541,9 +488,12 @@ static void __init reserve_crashkernel(void) return; } - if (crash_base >= (1ULL << 32) && reserve_crashkernel_low()) { - memblock_free(crash_base, crash_size); - return; + if (crash_base >= (1ULL << 32)) { + if (reserve_crashkernel_low()) { + memblock_free(crash_base, crash_size); + return; + } + insert_resource(&iomem_resource, &crashk_low_res); } pr_info("Reserving %ldMB of memory at %ldMB for crashkernel (System RAM: %ldMB)\n", diff --git a/include/linux/crash_core.h b/include/linux/crash_core.h index 525510a9f965..4df8c0bff03e 100644 --- a/include/linux/crash_core.h +++ b/include/linux/crash_core.h @@ -63,6 +63,8 @@ phys_addr_t paddr_vmcoreinfo_note(void); extern unsigned char *vmcoreinfo_data; extern size_t vmcoreinfo_size; extern u32 *vmcoreinfo_note; +extern struct resource crashk_res; +extern struct resource crashk_low_res; Elf_Word *append_elf_note(Elf_Word *buf, char *name, unsigned int type, void *data, size_t data_len); @@ -74,5 +76,6 @@ int parse_crashkernel_high(char *cmdline, unsigned long long system_ram, unsigned long long *crash_size, unsigned long long *crash_base); int parse_crashkernel_low(char *cmdline, unsigned long long system_ram, unsigned long long *crash_size, unsigned long long *crash_base); +int __init reserve_crashkernel_low(void); #endif /* LINUX_CRASH_CORE_H */ diff --git a/include/linux/kexec.h b/include/linux/kexec.h index ea67910ae6b7..a460afdbab0f 100644 --- a/include/linux/kexec.h +++ b/include/linux/kexec.h @@ -330,8 +330,6 @@ extern int kexec_load_disabled; /* Location of a reserved region to hold the crash kernel. */ -extern struct resource crashk_res; -extern struct resource crashk_low_res; extern note_buf_t __percpu *crash_notes; /* flag to track if kexec reboot is in progress */ diff --git a/kernel/crash_core.c b/kernel/crash_core.c index 9f1557b98468..a7580d291c37 100644 --- a/kernel/crash_core.c +++ b/kernel/crash_core.c @@ -7,6 +7,8 @@ #include #include #include +#include +#include #include #include @@ -19,6 +21,22 @@ u32 *vmcoreinfo_note; /* trusted vmcoreinfo, e.g. 
we can make a copy in the crash memory */ static unsigned char *vmcoreinfo_data_safecopy; +/* Location of the reserved area for the crash kernel */ +struct resource crashk_res = { + .name = "Crash kernel", + .start = 0, + .end = 0, + .flags = IORESOURCE_BUSY | IORESOURCE_SYSTEM_RAM, + .desc = IORES_DESC_CRASH_KERNEL +}; +struct resource crashk_low_res = { + .name = "Crash kernel", + .start = 0, + .end = 0, + .flags = IORESOURCE_BUSY | IORESOURCE_SYSTEM_RAM, + .desc = IORES_DESC_CRASH_KERNEL +}; + /* * parsing the "crashkernel" commandline * @@ -292,6 +310,73 @@ int __init parse_crashkernel_low(char *cmdline, "crashkernel=", suffix_tbl[SUFFIX_LOW]); } +#if defined(CONFIG_X86_64) || defined(CONFIG_ARM64) +#define CRASH_ALIGN SZ_2M +#endif + +int __init reserve_crashkernel_low(void) +{ +#if defined(CONFIG_X86_64) || defined(CONFIG_ARM64) + unsigned long long base, low_base = 0, low_size = 0; + unsigned long total_low_mem; + int ret; + + total_low_mem = memblock_mem_size(1UL << (32 - PAGE_SHIFT)); + + /* crashkernel=Y,low */ + ret = parse_crashkernel_low(boot_command_line, total_low_mem, &low_size, + &base); + if (ret) { +#ifdef CONFIG_X86_64 + /* + * two parts from lib/swiotlb.c: + * -swiotlb size: user-specified with swiotlb= or default. + * + * -swiotlb overflow buffer: now hardcoded to 32k. We round it + * to 8M for other buffers that may need to stay low too. Also + * make sure we allocate enough extra low memory so that we + * don't run out of DMA buffers for 32-bit devices. + */ + low_size = max(swiotlb_size_or_default() + (8UL << 20), + 256UL << 20); +#else + /* + * in arm64, reserve low memory if and only if crashkernel=X,low + * specified. + */ + return -EINVAL; +#endif + } else { + /* passed with crashkernel=0,low ? */ + if (!low_size) + return 0; + } + + low_base = memblock_find_in_range(0, 1ULL << 32, low_size, CRASH_ALIGN); + if (!low_base) { + pr_err("Cannot reserve %ldMB crashkernel low memory, please try smaller size.\n", + (unsigned long)(low_size >> 20)); + return -ENOMEM; + } + + ret = memblock_reserve(low_base, low_size); + if (ret) { + pr_err("%s: Error reserving crashkernel low memblock.\n", + __func__); + return ret; + } + + pr_info("Reserving %ldMB of low memory at %ldMB for crashkernel (System low RAM: %ldMB)\n", + (unsigned long)(low_size >> 20), + (unsigned long)(low_base >> 20), + (unsigned long)(total_low_mem >> 20)); + + crashk_low_res.start = low_base; + crashk_low_res.end = low_base + low_size - 1; +#endif + return 0; +} + Elf_Word *append_elf_note(Elf_Word *buf, char *name, unsigned int type, void *data, size_t data_len) { diff --git a/kernel/kexec_core.c b/kernel/kexec_core.c index c19c0dad1ebe..db66bbabfff3 100644 --- a/kernel/kexec_core.c +++ b/kernel/kexec_core.c @@ -53,23 +53,6 @@ note_buf_t __percpu *crash_notes; /* Flag to indicate we are going to kexec a new kernel */ bool kexec_in_progress = false; - -/* Location of the reserved area for the crash kernel */ -struct resource crashk_res = { - .name = "Crash kernel", - .start = 0, - .end = 0, - .flags = IORESOURCE_BUSY | IORESOURCE_SYSTEM_RAM, - .desc = IORES_DESC_CRASH_KERNEL -}; -struct resource crashk_low_res = { - .name = "Crash kernel", - .start = 0, - .end = 0, - .flags = IORESOURCE_BUSY | IORESOURCE_SYSTEM_RAM, - .desc = IORES_DESC_CRASH_KERNEL -}; - int kexec_should_crash(struct task_struct *p) { /* From patchwork Sun Jun 28 08:34:55 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: chenzhou X-Patchwork-Id: 11629945 
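[Editorial note before patch 2: the stand-alone sketch below illustrates the fallback
sizing policy that the x86_64 path of the consolidated reserve_crashkernel_low() applies
when no crashkernel=Y,low value is given: take the larger of "swiotlb footprint + 8M" and
256M. The 64M swiotlb figure and the helper names here are assumptions for illustration
only; this is not kernel API.]

/*
 * Stand-alone illustration (not kernel code) of the low-memory sizing rule:
 * when crashkernel=Y,low is absent, reserve max(swiotlb + 8M, 256M).
 * swiotlb_default_bytes is an assumed example value, not queried from a
 * real system.
 */
#include <stdio.h>

#define MB(x) ((unsigned long long)(x) << 20)

static unsigned long long pick_low_size(unsigned long long requested,
					unsigned long long swiotlb_bytes)
{
	if (requested)		/* crashkernel=Y,low was given */
		return requested;
	/* no explicit request: swiotlb plus 8M of slack, but at least 256M */
	return (swiotlb_bytes + MB(8)) > MB(256) ? swiotlb_bytes + MB(8) : MB(256);
}

int main(void)
{
	unsigned long long swiotlb_default_bytes = MB(64);	/* assumed default */

	printf("no ,low option     : %lluM\n",
	       pick_low_size(0, swiotlb_default_bytes) >> 20);
	printf("crashkernel=512M,low: %lluM\n",
	       pick_low_size(MB(512), swiotlb_default_bytes) >> 20);
	return 0;
}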
From: Chen Zhou
Subject: [PATCH v9 2/5] arm64: kdump: reserve crashkernel above 4G for crash dump kernel
Date: Sun, 28 Jun 2020 16:34:55 +0800
Message-ID: <20200628083458.40066-3-chenzhou10@huawei.com>
In-Reply-To: <20200628083458.40066-1-chenzhou10@huawei.com>
References: <20200628083458.40066-1-chenzhou10@huawei.com>
Cc: xiexiuqi@huawei.com, chenzhou10@huawei.com, linux-doc@vger.kernel.org,
 kexec@lists.infradead.org, linux-kernel@vger.kernel.org,
 huawei.libin@huawei.com, guohanjun@huawei.com,
 linux-arm-kernel@lists.infradead.org

crashkernel=X tries to reserve memory for the crash dump kernel under 4G.

As suggested by James, introduce crashkernel=X,low on arm64: if
crashkernel=X,low is specified as well, reserve the specified amount of low
memory for crash dump kernel devices first, and then reserve the main region
above 4G. This keeps the scheme much simpler.

Signed-off-by: Chen Zhou
Tested-by: John Donnelly
Tested-by: Prabhakar Kushwaha
---
 arch/arm64/kernel/setup.c |  8 +++++++-
 arch/arm64/mm/init.c      | 31 +++++++++++++++++++++++++++++--
 2 files changed, 36 insertions(+), 3 deletions(-)

diff --git a/arch/arm64/kernel/setup.c b/arch/arm64/kernel/setup.c
index 93b3844cf442..4dc51a2ac012 100644
--- a/arch/arm64/kernel/setup.c
+++ b/arch/arm64/kernel/setup.c
@@ -238,7 +238,13 @@ static void __init request_standard_resources(void)
 		    kernel_data.end <= res->end)
 			request_resource(res, &kernel_data);
 #ifdef CONFIG_KEXEC_CORE
-		/* Userspace will find "Crash kernel" region in /proc/iomem. */
+		/*
+		 * Userspace will find "Crash kernel" region in /proc/iomem.
+		 * Note: the low region is renamed as Crash kernel (low).
+		 */
+		if (crashk_low_res.end && crashk_low_res.start >= res->start &&
+		    crashk_low_res.end <= res->end)
+			request_resource(res, &crashk_low_res);
 		if (crashk_res.end && crashk_res.start >= res->start &&
 		    crashk_res.end <= res->end)
 			request_resource(res, &crashk_res);
diff --git a/arch/arm64/mm/init.c b/arch/arm64/mm/init.c
index 1e93cfc7c47a..ce7ced85f5fb 100644
--- a/arch/arm64/mm/init.c
+++ b/arch/arm64/mm/init.c
@@ -81,6 +81,7 @@ static void __init reserve_crashkernel(void)
 {
 	unsigned long long crash_base, crash_size;
 	int ret;
+	phys_addr_t crash_max = arm64_dma32_phys_limit;
 
 	ret = parse_crashkernel(boot_command_line, memblock_phys_mem_size(),
 				&crash_size, &crash_base);
@@ -88,12 +89,38 @@ static void __init reserve_crashkernel(void)
 	if (ret || !crash_size)
 		return;
 
+	ret = reserve_crashkernel_low();
+	if (!ret && crashk_low_res.end) {
+		/*
+		 * If crashkernel=X,low is specified, there may be two regions,
+		 * and we need to make the following changes:
+		 *
+		 * 1. rename the low region as "Crash kernel (low)"
+		 * In order to distinguish it from the high region and not
+		 * affect the use of existing kexec-tools, rename the low
+		 * region as "Crash kernel (low)".
+		 *
+		 * 2. change the upper bound for crash memory
+		 * Set MEMBLOCK_ALLOC_ACCESSIBLE upper bound for crash memory.
+		 *
+		 * 3. mark the low region as "nomap"
+		 * The low region is intended to be used for crash dump kernel
+		 * devices, so just mark the low region as "nomap".
+ */ + const char *rename = "Crash kernel (low)"; + + crashk_low_res.name = rename; + crash_max = MEMBLOCK_ALLOC_ACCESSIBLE; + memblock_mark_nomap(crashk_low_res.start, + resource_size(&crashk_low_res)); + } + crash_size = PAGE_ALIGN(crash_size); if (crash_base == 0) { /* Current arm64 boot protocol requires 2MB alignment */ - crash_base = memblock_find_in_range(0, arm64_dma32_phys_limit, - crash_size, SZ_2M); + crash_base = memblock_find_in_range(0, crash_max, crash_size, + SZ_2M); if (crash_base == 0) { pr_warn("cannot allocate crashkernel (size:0x%llx)\n", crash_size); From patchwork Sun Jun 28 08:34:56 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: chenzhou X-Patchwork-Id: 11629943 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 8FCED1392 for ; Sun, 28 Jun 2020 08:32:49 +0000 (UTC) Received: from merlin.infradead.org (merlin.infradead.org [205.233.59.134]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mail.kernel.org (Postfix) with ESMTPS id 685DB20724 for ; Sun, 28 Jun 2020 08:32:49 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=pass (2048-bit key) header.d=lists.infradead.org header.i=@lists.infradead.org header.b="oE40p/U8" DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org 685DB20724 Authentication-Results: mail.kernel.org; dmarc=none (p=none dis=none) header.from=huawei.com Authentication-Results: mail.kernel.org; spf=none smtp.mailfrom=linux-arm-kernel-bounces+patchwork-linux-arm=patchwork.kernel.org@lists.infradead.org DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=merlin.20170209; h=Sender:Content-Transfer-Encoding: Content-Type:Cc:List-Subscribe:List-Help:List-Post:List-Archive: List-Unsubscribe:List-Id:MIME-Version:References:In-Reply-To:Message-ID:Date: Subject:To:From:Reply-To:Content-ID:Content-Description:Resent-Date: Resent-From:Resent-Sender:Resent-To:Resent-Cc:Resent-Message-ID:List-Owner; bh=bDv4XTC7Gs7O8liCKuicJus2oDKKPwL/aCuRbWwb+EE=; b=oE40p/U8+TAjqaZHZt4NZDgIz NcmQMz70y3QVd4zA/ePurIeSEhOINpnw3wVlCBjkYQ0doLOBrfB/DiH4bisXDRvmIPCeaVKCKT/uH SJCRXrxF9qUlYeqRTztlZWk9dBbnOvH/uuUriaXuPUUhLAul0wDX93FBkOK3JCGXaXCMqh+Nh+trE F8Ftj7aF2VrRZnPjbFB6WdLEhlBN2m8AHlRIlVV6+0+IACXCxpW3aGiN8SdTdv6jfV39gk0+MBHxR /cFde0hXr3K/E//NFI+RTagJdV14MPBeBiFGisKeQWMtgOJgwB/tgaDkW6/6Ie4JbNcAkVg4CdgkP PaG50EbOQ==; Received: from localhost ([::1] helo=merlin.infradead.org) by merlin.infradead.org with esmtp (Exim 4.92.3 #3 (Red Hat Linux)) id 1jpSiN-00005S-Jd; Sun, 28 Jun 2020 08:31:03 +0000 Received: from szxga06-in.huawei.com ([45.249.212.32] helo=huawei.com) by merlin.infradead.org with esmtps (Exim 4.92.3 #3 (Red Hat Linux)) id 1jpSi7-0008ST-Lv; Sun, 28 Jun 2020 08:30:48 +0000 Received: from DGGEMS407-HUB.china.huawei.com (unknown [172.30.72.58]) by Forcepoint Email with ESMTP id 1A0F180360D7FB836549; Sun, 28 Jun 2020 16:30:45 +0800 (CST) Received: from localhost.localdomain.localdomain (10.175.113.25) by DGGEMS407-HUB.china.huawei.com (10.3.19.207) with Microsoft SMTP Server id 14.3.487.0; Sun, 28 Jun 2020 16:30:35 +0800 From: Chen Zhou To: , , , , , , , , , , , , , , Subject: [PATCH v9 3/5] arm64: kdump: add memory for devices by DT property linux, usable-memory-range Date: Sun, 28 Jun 2020 16:34:56 +0800 Message-ID: <20200628083458.40066-4-chenzhou10@huawei.com> X-Mailer: git-send-email 
2.20.1 In-Reply-To: <20200628083458.40066-1-chenzhou10@huawei.com> References: <20200628083458.40066-1-chenzhou10@huawei.com> MIME-Version: 1.0 X-Originating-IP: [10.175.113.25] X-CFilter-Loop: Reflected X-Spam-Note: CRM114 invocation failed X-Spam-Score: -2.3 (--) X-Spam-Report: SpamAssassin version 3.4.4 on merlin.infradead.org summary: Content analysis details: (-2.3 points) pts rule name description ---- ---------------------- -------------------------------------------------- -2.3 RCVD_IN_DNSWL_MED RBL: Sender listed at https://www.dnswl.org/, medium trust [45.249.212.32 listed in list.dnswl.org] -0.0 SPF_PASS SPF: sender matches SPF record -0.0 SPF_HELO_PASS SPF: HELO matches SPF record -0.0 RCVD_IN_MSPIKE_H4 RBL: Very Good reputation (+4) [45.249.212.32 listed in wl.mailspike.net] -0.0 RCVD_IN_MSPIKE_WL Mailspike good senders X-BeenThere: linux-arm-kernel@lists.infradead.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Cc: xiexiuqi@huawei.com, chenzhou10@huawei.com, linux-doc@vger.kernel.org, kexec@lists.infradead.org, linux-kernel@vger.kernel.org, huawei.libin@huawei.com, guohanjun@huawei.com, linux-arm-kernel@lists.infradead.org Sender: "linux-arm-kernel" Errors-To: linux-arm-kernel-bounces+patchwork-linux-arm=patchwork.kernel.org@lists.infradead.org If we want to reserve crashkernel above 4G, we could use parameters "crashkernel=X crashkernel=Y,low", in this case, specified size low memory is reserved for crash dump kernel devices and never mapped by the first kernel. This memory range is advertised to crash dump kernel via DT property under /chosen, linux,usable-memory-range = We reused the DT property linux,usable-memory-range and made the low memory region as the second range "BASE2 SIZE2", which keeps compatibility with existing user-space and older kdump kernels. Crash dump kernel reads this property at boot time and call memblock_add() to add the low memory region after memblock_cap_memory_range() has been called. Signed-off-by: Chen Zhou Tested-by: John Donnelly Tested-by: Prabhakar Kushwaha --- arch/arm64/mm/init.c | 43 +++++++++++++++++++++++++++++++++---------- 1 file changed, 33 insertions(+), 10 deletions(-) diff --git a/arch/arm64/mm/init.c b/arch/arm64/mm/init.c index ce7ced85f5fb..f5b31e8f1f34 100644 --- a/arch/arm64/mm/init.c +++ b/arch/arm64/mm/init.c @@ -69,6 +69,15 @@ EXPORT_SYMBOL(vmemmap); phys_addr_t arm64_dma_phys_limit __ro_after_init; static phys_addr_t arm64_dma32_phys_limit __ro_after_init; +/* + * The main usage of linux,usable-memory-range is for crash dump kernel. + * Originally, the number of usable-memory regions is one. Now crash dump + * kernel support at most two regions, low region and high region. + * To make compatibility with existing user-space and older kdump, the low + * region is always the last range of linux,usable-memory-range if exist. 
+ */
+#define MAX_USABLE_RANGES	2
+
 #ifdef CONFIG_KEXEC_CORE
 /*
  * reserve_crashkernel() - reserves memory for crash kernel
@@ -272,9 +281,9 @@ early_param("mem", early_mem);
 static int __init early_init_dt_scan_usablemem(unsigned long node,
 		const char *uname, int depth, void *data)
 {
-	struct memblock_region *usablemem = data;
-	const __be32 *reg;
-	int len;
+	struct memblock_region *usable_rgns = data;
+	const __be32 *reg, *endp;
+	int len, nr = 0;
 
 	if (depth != 1 || strcmp(uname, "chosen") != 0)
 		return 0;
@@ -283,22 +292,36 @@ static int __init early_init_dt_scan_usablemem(unsigned long node,
 	if (!reg || (len < (dt_root_addr_cells + dt_root_size_cells)))
 		return 1;
 
-	usablemem->base = dt_mem_next_cell(dt_root_addr_cells, &reg);
-	usablemem->size = dt_mem_next_cell(dt_root_size_cells, &reg);
+	endp = reg + (len / sizeof(__be32));
+	while ((endp - reg) >= (dt_root_addr_cells + dt_root_size_cells)) {
+		usable_rgns[nr].base = dt_mem_next_cell(dt_root_addr_cells, &reg);
+		usable_rgns[nr].size = dt_mem_next_cell(dt_root_size_cells, &reg);
+
+		if (++nr >= MAX_USABLE_RANGES)
+			break;
+	}
 
 	return 1;
 }
 
 static void __init fdt_enforce_memory_region(void)
 {
-	struct memblock_region reg = {
-		.size = 0,
+	struct memblock_region usable_rgns[MAX_USABLE_RANGES] = {
+		{ .size = 0 },
+		{ .size = 0 }
 	};
 
-	of_scan_flat_dt(early_init_dt_scan_usablemem, &reg);
+	of_scan_flat_dt(early_init_dt_scan_usablemem, &usable_rgns);
 
-	if (reg.size)
-		memblock_cap_memory_range(reg.base, reg.size);
+	/*
+	 * The first range of usable-memory regions is for the crash dump
+	 * kernel with only one region, or for the high region with two;
+	 * the second range is dedicated to the low region if it exists.
+	 */
+	if (usable_rgns[0].size)
+		memblock_cap_memory_range(usable_rgns[0].base, usable_rgns[0].size);
+	if (usable_rgns[1].size)
+		memblock_add(usable_rgns[1].base, usable_rgns[1].size);
 }
 
 void __init arm64_memblock_init(void)
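[Editorial note before patch 4: a stand-alone sketch of the two-range walk that patch 3
performs over linux,usable-memory-range. The property is a flat array of big-endian
<base size> cell pairs, and at most MAX_USABLE_RANGES pairs are consumed. The cell
counts (2 address + 2 size cells) and the sample property values below are assumptions
for illustration, not taken from a real device tree.]

/*
 * Stand-alone sketch (not kernel code) of walking a two-entry
 * linux,usable-memory-range property.
 */
#include <stdint.h>
#include <stdio.h>
#include <arpa/inet.h>	/* htonl()/ntohl() */

#define MAX_USABLE_RANGES 2

struct range { uint64_t base, size; };

static uint64_t next_cell64(const uint32_t **p)
{
	uint64_t hi = ntohl((*p)[0]), lo = ntohl((*p)[1]);

	*p += 2;	/* assume #address-cells = #size-cells = 2 */
	return (hi << 32) | lo;
}

int main(void)
{
	/* example property: 256M high range above 4G, 256M low range below 4G */
	const uint32_t prop[] = {
		htonl(0x20), htonl(0x00000000), htonl(0x0), htonl(0x10000000),
		htonl(0x00), htonl(0x60000000), htonl(0x0), htonl(0x10000000),
	};
	const uint32_t *reg = prop, *end = prop + sizeof(prop) / sizeof(prop[0]);
	struct range usable[MAX_USABLE_RANGES] = { { 0, 0 }, { 0, 0 } };
	int nr = 0;

	/* consume <base size> pairs until the property or the table is full */
	while (end - reg >= 4 && nr < MAX_USABLE_RANGES) {
		usable[nr].base = next_cell64(&reg);
		usable[nr].size = next_cell64(&reg);
		nr++;
	}

	for (int i = 0; i < nr; i++)
		printf("range %d: base=0x%llx size=0x%llx\n", i,
		       (unsigned long long)usable[i].base,
		       (unsigned long long)usable[i].size);
	return 0;
}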
From patchwork Sun Jun 28 08:34:57 2020
X-Patchwork-Submitter: chenzhou
X-Patchwork-Id: 11629947
From: Chen Zhou
Subject: [PATCH v9 4/5] arm64: kdump: fix kdump broken with ZONE_DMA reintroduced
Date: Sun, 28 Jun 2020 16:34:57 +0800
Message-ID: <20200628083458.40066-5-chenzhou10@huawei.com>
In-Reply-To: <20200628083458.40066-1-chenzhou10@huawei.com>
References: <20200628083458.40066-1-chenzhou10@huawei.com>
Cc: xiexiuqi@huawei.com, chenzhou10@huawei.com, linux-doc@vger.kernel.org,
 kexec@lists.infradead.org, linux-kernel@vger.kernel.org,
 huawei.libin@huawei.com, guohanjun@huawei.com,
 linux-arm-kernel@lists.infradead.org

commit 1a8e1cef7603 ("arm64: use both ZONE_DMA and ZONE_DMA32") broke arm64
kdump: if the memory reserved for the crash dump kernel falls in ZONE_DMA32,
allocations from ZONE_DMA made by devices in the crash dump kernel will fail.

This patch addresses the issue on top of "reserving crashkernel above 4G".
Low memory is already reserved below 4G; now just adjust the upper limit to
arm64_dma_phys_limit in reserve_crashkernel_low() if ZONE_DMA is enabled.
That is, if there are devices that need ZONE_DMA in the crash dump kernel,
"crashkernel=X crashkernel=Y,low" is a good choice.
Signed-off-by: Chen Zhou
---
 kernel/crash_core.c | 7 ++++++-
 1 file changed, 6 insertions(+), 1 deletion(-)

diff --git a/kernel/crash_core.c b/kernel/crash_core.c
index a7580d291c37..e8ecbbc761a3 100644
--- a/kernel/crash_core.c
+++ b/kernel/crash_core.c
@@ -320,6 +320,7 @@ int __init reserve_crashkernel_low(void)
 	unsigned long long base, low_base = 0, low_size = 0;
 	unsigned long total_low_mem;
 	int ret;
+	phys_addr_t crash_max = 1ULL << 32;
 
 	total_low_mem = memblock_mem_size(1UL << (32 - PAGE_SHIFT));
 
@@ -352,7 +353,11 @@ int __init reserve_crashkernel_low(void)
 			return 0;
 	}
 
-	low_base = memblock_find_in_range(0, 1ULL << 32, low_size, CRASH_ALIGN);
+#ifdef CONFIG_ARM64
+	if (IS_ENABLED(CONFIG_ZONE_DMA))
+		crash_max = arm64_dma_phys_limit;
+#endif
+	low_base = memblock_find_in_range(0, crash_max, low_size, CRASH_ALIGN);
 	if (!low_base) {
 		pr_err("Cannot reserve %ldMB crashkernel low memory, please try smaller size.\n",
 			(unsigned long)(low_size >> 20));
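[Editorial note: a stand-alone sketch of the bound this patch introduces for the low-region
search: cap the search at the ZONE_DMA limit when ZONE_DMA is enabled, otherwise at 4G. The
toy memory map, the 1G ZONE_DMA limit and the top-down fit below are assumptions for
illustration; this is not the memblock implementation.]

/*
 * Stand-alone illustration (not kernel code): find a 2M-aligned block of the
 * requested size below "limit", scanning an assumed memory map top-down.
 */
#include <stdbool.h>
#include <stdio.h>

#define SZ_2M	0x200000ULL
#define SZ_1G	0x40000000ULL
#define SZ_4G	0x100000000ULL

struct region { unsigned long long base, size; };

/* toy system RAM map: 2G starting at 512M, 4G starting at 1T (illustrative) */
static const struct region ram[] = {
	{ 0x20000000ULL,    2 * SZ_1G },
	{ 0x10000000000ULL, 4 * SZ_1G },
};

static unsigned long long find_below(unsigned long long size,
				     unsigned long long limit)
{
	for (int i = sizeof(ram) / sizeof(ram[0]) - 1; i >= 0; i--) {
		unsigned long long end = ram[i].base + ram[i].size;

		if (end > limit)
			end = limit;
		end &= ~(SZ_2M - 1);		/* 2M alignment */
		if (end > ram[i].base && end - ram[i].base >= size)
			return end - size;	/* top-down fit */
	}
	return 0;				/* nothing found below limit */
}

int main(void)
{
	bool zone_dma_enabled = true;		/* assumed configuration */
	unsigned long long crash_max = zone_dma_enabled ? SZ_1G : SZ_4G;
	unsigned long long low_size = 256ULL << 20;
	unsigned long long low_base = find_below(low_size, crash_max);

	printf("low region: base=0x%llx size=%lluM (limit 0x%llx)\n",
	       low_base, low_size >> 20, crash_max);
	return 0;
}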
From patchwork Sun Jun 28 08:34:58 2020
X-Patchwork-Submitter: chenzhou
X-Patchwork-Id: 11629951
From: Chen Zhou
Subject: [PATCH v9 5/5] kdump: update Documentation about crashkernel on arm64
Date: Sun, 28 Jun 2020 16:34:58 +0800
Message-ID: <20200628083458.40066-6-chenzhou10@huawei.com>
In-Reply-To: <20200628083458.40066-1-chenzhou10@huawei.com>
References: <20200628083458.40066-1-chenzhou10@huawei.com>
Cc: xiexiuqi@huawei.com, chenzhou10@huawei.com, linux-doc@vger.kernel.org,
 kexec@lists.infradead.org, linux-kernel@vger.kernel.org,
 huawei.libin@huawei.com, guohanjun@huawei.com,
 linux-arm-kernel@lists.infradead.org

Now that crashkernel=X[,low] is supported on arm64, update the documentation.
The parameters "crashkernel=X crashkernel=Y,low" can be used to reserve
memory above 4G.

Signed-off-by: Chen Zhou
Tested-by: John Donnelly
Tested-by: Prabhakar Kushwaha
---
 Documentation/admin-guide/kdump/kdump.rst       | 13 +++++++++++--
 Documentation/admin-guide/kernel-parameters.txt | 17 +++++++++++++++--
 2 files changed, 26 insertions(+), 4 deletions(-)

diff --git a/Documentation/admin-guide/kdump/kdump.rst b/Documentation/admin-guide/kdump/kdump.rst
index 2da65fef2a1c..6ba294d425c9 100644
--- a/Documentation/admin-guide/kdump/kdump.rst
+++ b/Documentation/admin-guide/kdump/kdump.rst
@@ -299,7 +299,13 @@ Boot into System Kernel
    "crashkernel=64M@16M" tells the system kernel to reserve 64 MB of memory
    starting at physical address 0x01000000 (16MB) for the dump-capture kernel.
 
-   On x86 and x86_64, use "crashkernel=64M@16M".
+   On x86 use "crashkernel=64M@16M".
+
+   On x86_64, use "crashkernel=Y[@X]" to select a region under 4G first, and
+   fall back to reserving a region above 4G when '@offset' hasn't been
+   specified. "crashkernel=X,high" can also be used to select a region above
+   4G, which also tries to allocate at least 256M below 4G automatically, and
+   "crashkernel=Y,low" can be used to allocate a specified amount of low memory.
 
    On ppc64, use "crashkernel=128M@32M".
 
@@ -316,8 +322,11 @@ Boot into System Kernel
    kernel will automatically locate the crash kernel image within the
    first 512MB of RAM if X is not given.
 
-   On arm64, use "crashkernel=Y[@X]". Note that the start address of
+   On arm64, use "crashkernel=Y[@X]". Note that the start address of
    the kernel, X if explicitly specified, must be aligned to 2MiB (0x200000).
+   If crashkernel=Z,low is specified simultaneously, reserve the specified
+   amount of low memory for crash dump kernel devices first, and then
+   reserve memory above 4G.
 
 Load the Dump-capture Kernel
 ============================
 
diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt
index fb95fad81c79..335431a351c0 100644
--- a/Documentation/admin-guide/kernel-parameters.txt
+++ b/Documentation/admin-guide/kernel-parameters.txt
@@ -722,6 +722,9 @@
 			[KNL, x86_64] select a region under 4G first, and
 			fall back to reserve region above 4G when '@offset'
 			hasn't been specified.
+			[KNL, arm64] If crashkernel=X,low is specified, reserve
+			the specified amount of low memory for crash dump kernel
+			devices first, and then reserve memory above 4G.
 			See Documentation/admin-guide/kdump/kdump.rst for further details.
 
 	crashkernel=range1:size1[,range2:size2,...][@offset]
@@ -746,13 +749,23 @@
 			requires at least 64M+32K low memory, also enough extra
 			low memory is needed to make sure DMA buffers for 32-bit
 			devices won't run out. Kernel would try to allocate at
-			at least 256M below 4G automatically.
+			least 256M below 4G automatically.
 			This one let user to specify own low range under 4G
 			for second kernel instead.
 			0: to disable low allocation.
 			It will be ignored when crashkernel=X,high is not used
 			or memory reserved is below 4G.
-
+			[KNL, arm64] range under 4G.
+			This one lets the user specify a low range under 4G
+			for the crash dump kernel instead.
+			Unlike x86_64, the kernel allocates the specified size
+			of physical memory only when this parameter is given,
+			instead of trying to allocate at least 256M below 4G
+			automatically.
+			This parameter is used along with crashkernel=X when
+			we want to reserve crashkernel above 4G. It is also a
+			good choice if there are devices that need to use
+			ZONE_DMA in the crash dump kernel.
 
 	cryptomgr.notests
 			[KNL] Disable crypto self-tests
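[Editorial note to close the series: a stand-alone sketch of how a boot command line
combining "crashkernel=X" and "crashkernel=Y,low" can be split into the two sizes the
documentation above describes. The helper names and parsing rules here are illustrative
assumptions, not the kernel's parse_crashkernel() implementation.]

/*
 * Stand-alone sketch (not the kernel's parser): pick the "crashkernel="
 * entry with a given suffix (",low", ",high" or none) out of a command line
 * and turn its size into bytes.
 */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

static unsigned long long parse_size(const char *s)
{
	char *end;
	unsigned long long val = strtoull(s, &end, 0);

	switch (*end) {
	case 'G': val <<= 10;	/* fall through */
	case 'M': val <<= 10;	/* fall through */
	case 'K': val <<= 10;
	}
	return val;
}

/* return the size of the "crashkernel=<size><suffix>" entry, or 0 if absent */
static unsigned long long crashkernel_with_suffix(const char *cmdline,
						  const char *suffix)
{
	const char *p = cmdline;
	size_t slen = strlen(suffix);

	while ((p = strstr(p, "crashkernel=")) != NULL) {
		const char *val = p + strlen("crashkernel=");
		const char *end = val + strcspn(val, " ");
		int has_comma = memchr(val, ',', end - val) != NULL;

		/* empty suffix matches the plain entry, ",low"/",high" match theirs */
		if (slen == 0 ? !has_comma
			      : (size_t)(end - val) > slen &&
				!strncmp(end - slen, suffix, slen))
			return parse_size(val);
		p = end;
	}
	return 0;
}

int main(void)
{
	const char *cmdline = "console=ttyAMA0 crashkernel=1G crashkernel=256M,low";

	printf("main/high region: %lluM\n", crashkernel_with_suffix(cmdline, "") >> 20);
	printf("low region      : %lluM\n", crashkernel_with_suffix(cmdline, ",low") >> 20);
	return 0;
}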