From patchwork Fri Aug 30 07:11:57 2019
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: chenzhou
X-Patchwork-Id: 11123037
From: Chen Zhou <chenzhou10@huawei.com>
Subject: [PATCH v6 1/4] x86: kdump: move reserve_crashkernel_low() into crash_core.c
Date: Fri, 30 Aug 2019 15:11:57 +0800
Message-ID: <20190830071200.56169-2-chenzhou10@huawei.com>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20190830071200.56169-1-chenzhou10@huawei.com>
References: <20190830071200.56169-1-chenzhou10@huawei.com>
Cc: Chen Zhou <chenzhou10@huawei.com>, kexec@lists.infradead.org,
    linux-kernel@vger.kernel.org, horms@verge.net.au, guohanjun@huawei.com,
    linux-arm-kernel@lists.infradead.org

In preparation for supporting reserve_crashkernel_low() on arm64 as x86_64
does, move reserve_crashkernel_low() into kernel/crash_core.c.

Note: on arm64, low memory is reserved if and only if crashkernel=X,low is
specified. Unlike x86_64, it is not reserved automatically.

Signed-off-by: Chen Zhou <chenzhou10@huawei.com>
---
 arch/x86/include/asm/kexec.h |  3 ++
 arch/x86/kernel/setup.c      | 65 ++++----------------------------------------
 include/linux/crash_core.h   |  4 +++
 include/linux/kexec.h        |  1 -
 kernel/crash_core.c          | 65 ++++++++++++++++++++++++++++++++++++++++++++
 5 files changed, 78 insertions(+), 60 deletions(-)
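(Aside, not part of the commit message: the behaviour described above is driven
entirely by the kernel command line. A hypothetical arm64 boot that wants a
crash kernel above 4G plus an explicit low region for 32-bit DMA would pass
something like

    crashkernel=512M,high crashkernel=256M,low

whereas on x86_64 the ",low" part can be omitted and a default low size is
derived from the swiotlb requirements, as the code below shows. The sizes here
are made-up examples.)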
diff --git a/arch/x86/include/asm/kexec.h b/arch/x86/include/asm/kexec.h
index 5e7d6b4..bf21d6c 100644
--- a/arch/x86/include/asm/kexec.h
+++ b/arch/x86/include/asm/kexec.h
@@ -18,6 +18,9 @@
 
 # define KEXEC_CONTROL_CODE_MAX_SIZE 2048
 
+/* 16M alignment for crash kernel regions */
+#define CRASH_ALIGN             SZ_16M
+
 #ifndef __ASSEMBLY__
 
 #include
diff --git a/arch/x86/kernel/setup.c b/arch/x86/kernel/setup.c
index bbe35bf..2b437c4 100644
--- a/arch/x86/kernel/setup.c
+++ b/arch/x86/kernel/setup.c
@@ -449,9 +449,6 @@ static void __init memblock_x86_reserve_range_setup_data(void)
 
 #ifdef CONFIG_KEXEC_CORE
 
-/* 16M alignment for crash kernel regions */
-#define CRASH_ALIGN             SZ_16M
-
 /*
  * Keep the crash kernel below this limit.
  *
@@ -473,59 +470,6 @@ static void __init memblock_x86_reserve_range_setup_data(void)
 # define CRASH_ADDR_HIGH_MAX    SZ_64T
 #endif
 
-static int __init reserve_crashkernel_low(void)
-{
-#ifdef CONFIG_X86_64
-        unsigned long long base, low_base = 0, low_size = 0;
-        unsigned long total_low_mem;
-        int ret;
-
-        total_low_mem = memblock_mem_size(1UL << (32 - PAGE_SHIFT));
-
-        /* crashkernel=Y,low */
-        ret = parse_crashkernel_low(boot_command_line, total_low_mem, &low_size, &base);
-        if (ret) {
-                /*
-                 * two parts from lib/swiotlb.c:
-                 * -swiotlb size: user-specified with swiotlb= or default.
-                 *
-                 * -swiotlb overflow buffer: now hardcoded to 32k. We round it
-                 * to 8M for other buffers that may need to stay low too. Also
-                 * make sure we allocate enough extra low memory so that we
-                 * don't run out of DMA buffers for 32-bit devices.
-                 */
-                low_size = max(swiotlb_size_or_default() + (8UL << 20), 256UL << 20);
-        } else {
-                /* passed with crashkernel=0,low ? */
-                if (!low_size)
-                        return 0;
-        }
-
-        low_base = memblock_find_in_range(0, 1ULL << 32, low_size, CRASH_ALIGN);
-        if (!low_base) {
-                pr_err("Cannot reserve %ldMB crashkernel low memory, please try smaller size.\n",
-                       (unsigned long)(low_size >> 20));
-                return -ENOMEM;
-        }
-
-        ret = memblock_reserve(low_base, low_size);
-        if (ret) {
-                pr_err("%s: Error reserving crashkernel low memblock.\n", __func__);
-                return ret;
-        }
-
-        pr_info("Reserving %ldMB of low memory at %ldMB for crashkernel (System low RAM: %ldMB)\n",
-                (unsigned long)(low_size >> 20),
-                (unsigned long)(low_base >> 20),
-                (unsigned long)(total_low_mem >> 20));
-
-        crashk_low_res.start = low_base;
-        crashk_low_res.end   = low_base + low_size - 1;
-        insert_resource(&iomem_resource, &crashk_low_res);
-#endif
-        return 0;
-}
-
 static void __init reserve_crashkernel(void)
 {
         unsigned long long crash_size, crash_base, total_mem;
@@ -589,9 +533,12 @@ static void __init reserve_crashkernel(void)
                         return;
                 }
 
-        if (crash_base >= (1ULL << 32) && reserve_crashkernel_low()) {
-                memblock_free(crash_base, crash_size);
-                return;
+        if (crash_base >= (1ULL << 32)) {
+                if (reserve_crashkernel_low()) {
+                        memblock_free(crash_base, crash_size);
+                        return;
+                }
+                insert_resource(&iomem_resource, &crashk_low_res);
         }
 
         pr_info("Reserving %ldMB of memory at %ldMB for crashkernel (System RAM: %ldMB)\n",
diff --git a/include/linux/crash_core.h b/include/linux/crash_core.h
index 525510a..9192e43 100644
--- a/include/linux/crash_core.h
+++ b/include/linux/crash_core.h
@@ -6,6 +6,8 @@
 #include
 #include
 
+#include
+
 #define CRASH_CORE_NOTE_NAME       "CORE"
 #define CRASH_CORE_NOTE_HEAD_BYTES ALIGN(sizeof(struct elf_note), 4)
 #define CRASH_CORE_NOTE_NAME_BYTES ALIGN(sizeof(CRASH_CORE_NOTE_NAME), 4)
@@ -63,6 +65,7 @@ phys_addr_t paddr_vmcoreinfo_note(void);
 extern unsigned char *vmcoreinfo_data;
 extern size_t vmcoreinfo_size;
 extern u32 *vmcoreinfo_note;
+extern struct resource crashk_low_res;
 
 Elf_Word *append_elf_note(Elf_Word *buf, char *name, unsigned int type,
                           void *data, size_t data_len);
@@ -74,5 +77,6 @@ int parse_crashkernel_high(char *cmdline, unsigned long long system_ram,
                 unsigned long long *crash_size, unsigned long long *crash_base);
 int parse_crashkernel_low(char *cmdline, unsigned long long system_ram,
                 unsigned long long *crash_size, unsigned long long *crash_base);
+int __init reserve_crashkernel_low(void);
 
 #endif /* LINUX_CRASH_CORE_H */
diff --git a/include/linux/kexec.h b/include/linux/kexec.h
index b9b1bc5..62330aa 100644
--- a/include/linux/kexec.h
+++ b/include/linux/kexec.h
@@ -306,7 +306,6 @@ extern int kexec_load_disabled;
 /* Location of a reserved region to hold the crash kernel.
  */
 extern struct resource crashk_res;
-extern struct resource crashk_low_res;
 extern note_buf_t __percpu *crash_notes;
 
 /* flag to track if kexec reboot is in progress */
diff --git a/kernel/crash_core.c b/kernel/crash_core.c
index 9f1557b..13cdf35 100644
--- a/kernel/crash_core.c
+++ b/kernel/crash_core.c
@@ -7,6 +7,8 @@
 #include
 #include
 #include
+#include
+#include
 
 #include
 #include
@@ -292,6 +294,69 @@ int __init parse_crashkernel_low(char *cmdline,
                         "crashkernel=", suffix_tbl[SUFFIX_LOW]);
 }
 
+int __init reserve_crashkernel_low(void)
+{
+#if defined(CONFIG_X86_64) || defined(CONFIG_ARM64)
+        unsigned long long base, low_base = 0, low_size = 0;
+        unsigned long total_low_mem;
+        int ret;
+
+        total_low_mem = memblock_mem_size(1UL << (32 - PAGE_SHIFT));
+
+        /* crashkernel=Y,low */
+        ret = parse_crashkernel_low(boot_command_line, total_low_mem, &low_size,
+                        &base);
+        if (ret) {
+#ifdef CONFIG_X86_64
+                /*
+                 * two parts from lib/swiotlb.c:
+                 * -swiotlb size: user-specified with swiotlb= or default.
+                 *
+                 * -swiotlb overflow buffer: now hardcoded to 32k. We round it
+                 * to 8M for other buffers that may need to stay low too. Also
+                 * make sure we allocate enough extra low memory so that we
+                 * don't run out of DMA buffers for 32-bit devices.
+                 */
+                low_size = max(swiotlb_size_or_default() + (8UL << 20),
+                                256UL << 20);
+#else
+                /*
+                 * in arm64, reserve low memory if and only if crashkernel=X,low
+                 * specified.
+                 */
+                return -EINVAL;
+#endif
+        } else {
+                /* passed with crashkernel=0,low ? */
+                if (!low_size)
+                        return 0;
+        }
+
+        low_base = memblock_find_in_range(0, 1ULL << 32, low_size, CRASH_ALIGN);
+        if (!low_base) {
+                pr_err("Cannot reserve %ldMB crashkernel low memory, please try smaller size.\n",
+                                (unsigned long)(low_size >> 20));
+                return -ENOMEM;
+        }
+
+        ret = memblock_reserve(low_base, low_size);
+        if (ret) {
+                pr_err("%s: Error reserving crashkernel low memblock.\n",
+                                __func__);
+                return ret;
+        }
+
+        pr_info("Reserving %ldMB of low memory at %ldMB for crashkernel (System low RAM: %ldMB)\n",
+                        (unsigned long)(low_size >> 20),
+                        (unsigned long)(low_base >> 20),
+                        (unsigned long)(total_low_mem >> 20));
+
+        crashk_low_res.start = low_base;
+        crashk_low_res.end = low_base + low_size - 1;
+#endif
+        return 0;
+}
+
 Elf_Word *append_elf_note(Elf_Word *buf, char *name, unsigned int type,
                           void *data, size_t data_len)
 {
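For readers following the series: the setup.c hunk above shows the calling
pattern the shared helper expects. As a rough sketch only, not taken from these
patches (the real arm64 wiring lands later in the series and its error handling
may differ), an architecture's crash-kernel reservation code could consume the
helper along these lines:

/*
 * Illustrative sketch only, mirroring the x86 pattern above.
 * The function name is made up; it is not part of this patch.
 */
static void __init example_reserve_crashkernel_high(unsigned long long crash_base,
                                                    unsigned long long crash_size)
{
        if (crash_base >= (1ULL << 32)) {
                /* A reservation above 4G needs a companion low region for 32-bit DMA. */
                if (reserve_crashkernel_low()) {
                        /* No usable low region: give the high reservation back. */
                        memblock_free(crash_base, crash_size);
                        return;
                }
                /* reserve_crashkernel_low() has filled in crashk_low_res. */
                insert_resource(&iomem_resource, &crashk_low_res);
        }

        crashk_res.start = crash_base;
        crashk_res.end   = crash_base + crash_size - 1;
        insert_resource(&iomem_resource, &crashk_res);
}

Everything the sketch calls (reserve_crashkernel_low(), memblock_free(),
insert_resource(), crashk_res, crashk_low_res) exists in the tree once this
patch is applied.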