From patchwork Wed Oct 20 02:03:08 2021
From: Zhen Lei
To: Thomas Gleixner, Ingo Molnar, Borislav Petkov, "H. Peter Anvin",
Peter Anvin" , , Dave Young , Baoquan He , Vivek Goyal , Eric Biederman , , Catalin Marinas , "Will Deacon" , , Rob Herring , Frank Rowand , , Jonathan Corbet , CC: Zhen Lei , Randy Dunlap , Nicolas Saenz Julienne , Feng Zhou , Kefeng Wang Subject: [PATCH v15 01/10] x86: kdump: replace the hard-coded alignment with macro CRASH_ALIGN Date: Wed, 20 Oct 2021 10:03:08 +0800 Message-ID: <20211020020317.1220-2-thunder.leizhen@huawei.com> X-Mailer: git-send-email 2.26.0.windows.1 In-Reply-To: <20211020020317.1220-1-thunder.leizhen@huawei.com> References: <20211020020317.1220-1-thunder.leizhen@huawei.com> MIME-Version: 1.0 X-Originating-IP: [10.174.178.55] X-ClientProxiedBy: dggems704-chm.china.huawei.com (10.3.19.181) To dggpemm500006.china.huawei.com (7.185.36.236) X-CFilter-Loop: Reflected X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20211019_191440_358775_91E7C046 X-CRM114-Status: GOOD ( 11.87 ) X-BeenThere: linux-arm-kernel@lists.infradead.org X-Mailman-Version: 2.1.34 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Sender: "linux-arm-kernel" Errors-To: linux-arm-kernel-bounces+linux-arm-kernel=archiver.kernel.org@lists.infradead.org From: Chen Zhou Move CRASH_ALIGN to header asm/kexec.h for later use. Suggested-by: Dave Young Suggested-by: Baoquan He Signed-off-by: Chen Zhou Tested-by: John Donnelly --- arch/x86/include/asm/kexec.h | 3 +++ arch/x86/kernel/setup.c | 3 --- 2 files changed, 3 insertions(+), 3 deletions(-) diff --git a/arch/x86/include/asm/kexec.h b/arch/x86/include/asm/kexec.h index 0a6e34b07017586..5f63ad6b6e74b15 100644 --- a/arch/x86/include/asm/kexec.h +++ b/arch/x86/include/asm/kexec.h @@ -18,6 +18,9 @@ # define KEXEC_CONTROL_CODE_MAX_SIZE 2048 +/* 16M alignment for crash kernel regions */ +#define CRASH_ALIGN SZ_16M + #ifndef __ASSEMBLY__ #include diff --git a/arch/x86/kernel/setup.c b/arch/x86/kernel/setup.c index 40ed44ead063128..3d394127dc03d20 100644 --- a/arch/x86/kernel/setup.c +++ b/arch/x86/kernel/setup.c @@ -392,9 +392,6 @@ static void __init memblock_x86_reserve_range_setup_data(void) #ifdef CONFIG_KEXEC_CORE -/* 16M alignment for crash kernel regions */ -#define CRASH_ALIGN SZ_16M - /* * Keep the crash kernel below this limit. 
  *

From patchwork Wed Oct 20 02:03:09 2021
From: Zhen Lei
To: Thomas Gleixner, Ingo Molnar, Borislav Petkov, "H. Peter Anvin",
Peter Anvin" , , Dave Young , Baoquan He , Vivek Goyal , Eric Biederman , , Catalin Marinas , "Will Deacon" , , Rob Herring , Frank Rowand , , Jonathan Corbet , CC: Zhen Lei , Randy Dunlap , Nicolas Saenz Julienne , Feng Zhou , Kefeng Wang Subject: [PATCH v15 02/10] x86: kdump: make the lower bound of crash kernel reservation consistent Date: Wed, 20 Oct 2021 10:03:09 +0800 Message-ID: <20211020020317.1220-3-thunder.leizhen@huawei.com> X-Mailer: git-send-email 2.26.0.windows.1 In-Reply-To: <20211020020317.1220-1-thunder.leizhen@huawei.com> References: <20211020020317.1220-1-thunder.leizhen@huawei.com> MIME-Version: 1.0 X-Originating-IP: [10.174.178.55] X-ClientProxiedBy: dggems704-chm.china.huawei.com (10.3.19.181) To dggpemm500006.china.huawei.com (7.185.36.236) X-CFilter-Loop: Reflected X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20211019_191440_040048_9B98521C X-CRM114-Status: GOOD ( 10.42 ) X-BeenThere: linux-arm-kernel@lists.infradead.org X-Mailman-Version: 2.1.34 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Sender: "linux-arm-kernel" Errors-To: linux-arm-kernel-bounces+linux-arm-kernel=archiver.kernel.org@lists.infradead.org From: Chen Zhou The lower bounds of crash kernel reservation and crash kernel low reservation are different, use the consistent value CRASH_ALIGN. Suggested-by: Dave Young Signed-off-by: Chen Zhou Tested-by: John Donnelly --- arch/x86/kernel/setup.c | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-) diff --git a/arch/x86/kernel/setup.c b/arch/x86/kernel/setup.c index 3d394127dc03d20..5bebd46c7ce81f5 100644 --- a/arch/x86/kernel/setup.c +++ b/arch/x86/kernel/setup.c @@ -441,7 +441,8 @@ static int __init reserve_crashkernel_low(void) return 0; } - low_base = memblock_phys_alloc_range(low_size, CRASH_ALIGN, 0, CRASH_ADDR_LOW_MAX); + low_base = memblock_phys_alloc_range(low_size, CRASH_ALIGN, CRASH_ALIGN, + CRASH_ADDR_LOW_MAX); if (!low_base) { pr_err("Cannot reserve %ldMB crashkernel low memory, please try smaller size.\n", (unsigned long)(low_size >> 20)); From patchwork Wed Oct 20 02:03:10 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Leizhen (ThunderTown)" X-Patchwork-Id: 12571535 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 079CBC433EF for ; Wed, 20 Oct 2021 02:18:25 +0000 (UTC) Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mail.kernel.org (Postfix) with ESMTPS id C8092611EF for ; Wed, 20 Oct 2021 02:18:24 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.4.1 mail.kernel.org C8092611EF Authentication-Results: mail.kernel.org; dmarc=fail (p=none dis=none) header.from=huawei.com Authentication-Results: mail.kernel.org; spf=none smtp.mailfrom=lists.infradead.org DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=bombadil.20210309; h=Sender: Content-Transfer-Encoding:Content-Type:List-Subscribe:List-Help:List-Post: List-Archive:List-Unsubscribe:List-Id:MIME-Version:References:In-Reply-To: Message-ID:Date:Subject:CC:To:From:Reply-To:Content-ID:Content-Description: 
From: Zhen Lei
To: Thomas Gleixner, Ingo Molnar, Borislav Petkov, "H. Peter Anvin",
    Dave Young, Baoquan He, Vivek Goyal, Eric Biederman, Catalin Marinas,
    Will Deacon, Rob Herring, Frank Rowand, Jonathan Corbet
Cc: Zhen Lei, Randy Dunlap, Nicolas Saenz Julienne, Feng Zhou, Kefeng Wang
Subject: [PATCH v15 03/10] x86: kdump: use macro CRASH_ADDR_LOW_MAX in functions reserve_crashkernel()
Date: Wed, 20 Oct 2021 10:03:10 +0800
Message-ID: <20211020020317.1220-4-thunder.leizhen@huawei.com>

From: Chen Zhou

To make the function reserve_crashkernel() more generic, replace some
hard-coded numbers with the macro CRASH_ADDR_LOW_MAX.
Signed-off-by: Chen Zhou
Tested-by: John Donnelly
Acked-by: Baoquan He
---
 arch/x86/kernel/setup.c | 7 ++++---
 1 file changed, 4 insertions(+), 3 deletions(-)

diff --git a/arch/x86/kernel/setup.c b/arch/x86/kernel/setup.c
index 5bebd46c7ce81f5..1b2c9f5c71a870e 100644
--- a/arch/x86/kernel/setup.c
+++ b/arch/x86/kernel/setup.c
@@ -489,8 +489,9 @@ static void __init reserve_crashkernel(void)
 	if (!crash_base) {
 		/*
 		 * Set CRASH_ADDR_LOW_MAX upper bound for crash memory,
-		 * crashkernel=x,high reserves memory over 4G, also allocates
-		 * 256M extra low memory for DMA buffers and swiotlb.
+		 * crashkernel=x,high reserves memory over CRASH_ADDR_LOW_MAX,
+		 * also allocates 256M extra low memory for DMA buffers
+		 * and swiotlb.
 		 * But the extra memory is not required for all machines.
 		 * So try low memory first and fall back to high memory
 		 * unless "crashkernel=size[KMG],high" is specified.
@@ -518,7 +519,7 @@ static void __init reserve_crashkernel(void)
 		}
 	}
 
-	if (crash_base >= (1ULL << 32) && reserve_crashkernel_low()) {
+	if (crash_base >= CRASH_ADDR_LOW_MAX && reserve_crashkernel_low()) {
 		memblock_free(crash_base, crash_size);
 		return;
 	}

From patchwork Wed Oct 20 02:03:11 2021
From: Zhen Lei
To: Thomas Gleixner, Ingo Molnar, Borislav Petkov, "H. Peter Anvin",
    Dave Young, Baoquan He, Vivek Goyal, Eric Biederman, Catalin Marinas,
    Will Deacon, Rob Herring, Frank Rowand, Jonathan Corbet
Cc: Zhen Lei, Randy Dunlap, Nicolas Saenz Julienne, Feng Zhou, Kefeng Wang
Subject: [PATCH v15 04/10] x86: kdump: move xen_pv_domain() check and insert_resource() to setup_arch()
Date: Wed, 20 Oct 2021 10:03:11 +0800
Message-ID: <20211020020317.1220-5-thunder.leizhen@huawei.com>

From: Chen Zhou

We will make reserve_crashkernel() generic. The xen_pv_domain() check in
reserve_crashkernel() is relevant only to x86, and the same is true of
insert_resource() in reserve_crashkernel[_low](). So move the
xen_pv_domain() check and the insert_resource() calls into setup_arch()
to keep them x86-specific.

Suggested-by: Mike Rapoport
Signed-off-by: Chen Zhou
Tested-by: John Donnelly
Acked-by: Baoquan He
---
 arch/x86/kernel/setup.c | 19 +++++++++++--------
 1 file changed, 11 insertions(+), 8 deletions(-)

diff --git a/arch/x86/kernel/setup.c b/arch/x86/kernel/setup.c
index 1b2c9f5c71a870e..657ec7fb62da37c 100644
--- a/arch/x86/kernel/setup.c
+++ b/arch/x86/kernel/setup.c
@@ -456,7 +456,6 @@ static int __init reserve_crashkernel_low(void)
 
 	crashk_low_res.start = low_base;
 	crashk_low_res.end   = low_base + low_size - 1;
-	insert_resource(&iomem_resource, &crashk_low_res);
 #endif
 	return 0;
 }
@@ -480,11 +479,6 @@ static void __init reserve_crashkernel(void)
 		high = true;
 	}
 
-	if (xen_pv_domain()) {
-		pr_info("Ignoring crashkernel for a Xen PV domain\n");
-		return;
-	}
-
 	/* 0 means: find the address automatically */
 	if (!crash_base) {
 		/*
@@ -531,7 +525,6 @@ static void __init reserve_crashkernel(void)
 
 	crashk_res.start = crash_base;
 	crashk_res.end   = crash_base + crash_size - 1;
-	insert_resource(&iomem_resource, &crashk_res);
 }
 #else
 static void __init reserve_crashkernel(void)
@@ -1131,7 +1124,17 @@ void __init setup_arch(char **cmdline_p)
 	 * Reserve memory for crash kernel after SRAT is parsed so that it
 	 * won't consume hotpluggable memory.
	 */
-	reserve_crashkernel();
+	if (xen_pv_domain())
+		pr_info("Ignoring crashkernel for a Xen PV domain\n");
+	else {
+		reserve_crashkernel();
+#ifdef CONFIG_KEXEC_CORE
+		if (crashk_res.end > crashk_res.start)
+			insert_resource(&iomem_resource, &crashk_res);
+		if (crashk_low_res.end > crashk_low_res.start)
+			insert_resource(&iomem_resource, &crashk_low_res);
+#endif
+	}
 
 	memblock_find_dma_reserve();

From patchwork Wed Oct 20 02:03:12 2021
From: Zhen Lei
To: Thomas Gleixner, Ingo Molnar, Borislav Petkov, "H. Peter Anvin",
Peter Anvin" , , Dave Young , Baoquan He , Vivek Goyal , Eric Biederman , , Catalin Marinas , "Will Deacon" , , Rob Herring , Frank Rowand , , Jonathan Corbet , CC: Zhen Lei , Randy Dunlap , Nicolas Saenz Julienne , Feng Zhou , Kefeng Wang Subject: [PATCH v15 05/10] x86: kdump: move reserve_crashkernel[_low]() into crash_core.c Date: Wed, 20 Oct 2021 10:03:12 +0800 Message-ID: <20211020020317.1220-6-thunder.leizhen@huawei.com> X-Mailer: git-send-email 2.26.0.windows.1 In-Reply-To: <20211020020317.1220-1-thunder.leizhen@huawei.com> References: <20211020020317.1220-1-thunder.leizhen@huawei.com> MIME-Version: 1.0 X-Originating-IP: [10.174.178.55] X-ClientProxiedBy: dggems704-chm.china.huawei.com (10.3.19.181) To dggpemm500006.china.huawei.com (7.185.36.236) X-CFilter-Loop: Reflected X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20211019_191440_426540_990D8BAE X-CRM114-Status: GOOD ( 31.64 ) X-BeenThere: linux-arm-kernel@lists.infradead.org X-Mailman-Version: 2.1.34 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Sender: "linux-arm-kernel" Errors-To: linux-arm-kernel-bounces+linux-arm-kernel=archiver.kernel.org@lists.infradead.org From: Chen Zhou Make the functions reserve_crashkernel[_low]() as generic. Arm64 will use these to reimplement crashkernel=X. Signed-off-by: Chen Zhou Tested-by: John Donnelly --- arch/x86/include/asm/elf.h | 3 + arch/x86/include/asm/kexec.h | 28 +++++- arch/x86/kernel/setup.c | 143 +------------------------------ include/linux/crash_core.h | 3 + include/linux/kexec.h | 2 - kernel/crash_core.c | 159 +++++++++++++++++++++++++++++++++++ kernel/kexec_core.c | 17 ---- 7 files changed, 192 insertions(+), 163 deletions(-) diff --git a/arch/x86/include/asm/elf.h b/arch/x86/include/asm/elf.h index 29fea180a6658e8..7a6c36cff8331f5 100644 --- a/arch/x86/include/asm/elf.h +++ b/arch/x86/include/asm/elf.h @@ -94,6 +94,9 @@ extern unsigned int vdso32_enabled; #define elf_check_arch(x) elf_check_arch_ia32(x) +/* We can also handle crash dumps from 64 bit kernel. */ +# define vmcore_elf_check_arch_cross(x) ((x)->e_machine == EM_X86_64) + /* SVR4/i386 ABI (pages 3-31, 3-32) says that when the program starts %edx contains a pointer to a function which might be registered using `atexit'. This provides a mean for the dynamic linker to call DT_FINI functions for diff --git a/arch/x86/include/asm/kexec.h b/arch/x86/include/asm/kexec.h index 5f63ad6b6e74b15..3533ede83b42158 100644 --- a/arch/x86/include/asm/kexec.h +++ b/arch/x86/include/asm/kexec.h @@ -21,6 +21,27 @@ /* 16M alignment for crash kernel regions */ #define CRASH_ALIGN SZ_16M +/* + * Keep the crash kernel below this limit. + * + * Earlier 32-bits kernels would limit the kernel to the low 512 MB range + * due to mapping restrictions. + * + * 64-bit kdump kernels need to be restricted to be under 64 TB, which is + * the upper limit of system RAM in 4-level paging mode. Since the kdump + * jump could be from 5-level paging to 4-level paging, the jump will fail if + * the kernel is put above 64 TB, and during the 1st kernel bootup there's + * no good way to detect the paging mode of the target kernel which will be + * loaded for dumping. 
+ */
+#ifdef CONFIG_X86_32
+# define CRASH_ADDR_LOW_MAX	SZ_512M
+# define CRASH_ADDR_HIGH_MAX	SZ_512M
+#else
+# define CRASH_ADDR_LOW_MAX	SZ_4G
+# define CRASH_ADDR_HIGH_MAX	SZ_64T
+#endif
+
 #ifndef __ASSEMBLY__
 
 #include
@@ -51,9 +72,6 @@ struct kimage;
 
 /* The native architecture */
 # define KEXEC_ARCH KEXEC_ARCH_386
-
-/* We can also handle crash dumps from 64 bit kernel. */
-# define vmcore_elf_check_arch_cross(x) ((x)->e_machine == EM_X86_64)
 #else
 /* Maximum physical address we can use pages from */
 # define KEXEC_SOURCE_MEMORY_LIMIT      (MAXMEM-1)
@@ -195,6 +213,10 @@ typedef void crash_vmclear_fn(void);
 extern crash_vmclear_fn __rcu *crash_vmclear_loaded_vmcss;
 extern void kdump_nmi_shootdown_cpus(void);
 
+#ifdef CONFIG_KEXEC_CORE
+extern void __init reserve_crashkernel(void);
+#endif
+
 #endif /* __ASSEMBLY__ */
 
 #endif /* _ASM_X86_KEXEC_H */
diff --git a/arch/x86/kernel/setup.c b/arch/x86/kernel/setup.c
index 657ec7fb62da37c..bef6340e0e32441 100644
--- a/arch/x86/kernel/setup.c
+++ b/arch/x86/kernel/setup.c
@@ -39,6 +39,7 @@
 #include
 #include
 #include
+#include
 #include
 #include
 #include
@@ -386,147 +387,7 @@ static void __init memblock_x86_reserve_range_setup_data(void)
 	}
 }
 
-/*
- * --------- Crashkernel reservation ------------------------------
- */
-
-#ifdef CONFIG_KEXEC_CORE
-
-/*
- * Keep the crash kernel below this limit.
- *
- * Earlier 32-bits kernels would limit the kernel to the low 512 MB range
- * due to mapping restrictions.
- *
- * 64-bit kdump kernels need to be restricted to be under 64 TB, which is
- * the upper limit of system RAM in 4-level paging mode. Since the kdump
- * jump could be from 5-level paging to 4-level paging, the jump will fail if
- * the kernel is put above 64 TB, and during the 1st kernel bootup there's
- * no good way to detect the paging mode of the target kernel which will be
- * loaded for dumping.
- */
-#ifdef CONFIG_X86_32
-# define CRASH_ADDR_LOW_MAX	SZ_512M
-# define CRASH_ADDR_HIGH_MAX	SZ_512M
-#else
-# define CRASH_ADDR_LOW_MAX	SZ_4G
-# define CRASH_ADDR_HIGH_MAX	SZ_64T
-#endif
-
-static int __init reserve_crashkernel_low(void)
-{
-#ifdef CONFIG_X86_64
-	unsigned long long base, low_base = 0, low_size = 0;
-	unsigned long low_mem_limit;
-	int ret;
-
-	low_mem_limit = min(memblock_phys_mem_size(), CRASH_ADDR_LOW_MAX);
-
-	/* crashkernel=Y,low */
-	ret = parse_crashkernel_low(boot_command_line, low_mem_limit, &low_size, &base);
-	if (ret) {
-		/*
-		 * two parts from kernel/dma/swiotlb.c:
-		 * -swiotlb size: user-specified with swiotlb= or default.
-		 *
-		 * -swiotlb overflow buffer: now hardcoded to 32k. We round it
-		 * to 8M for other buffers that may need to stay low too. Also
-		 * make sure we allocate enough extra low memory so that we
-		 * don't run out of DMA buffers for 32-bit devices.
-		 */
-		low_size = max(swiotlb_size_or_default() + (8UL << 20), 256UL << 20);
-	} else {
-		/* passed with crashkernel=0,low ?
		 */
-		if (!low_size)
-			return 0;
-	}
-
-	low_base = memblock_phys_alloc_range(low_size, CRASH_ALIGN, CRASH_ALIGN,
-					     CRASH_ADDR_LOW_MAX);
-	if (!low_base) {
-		pr_err("Cannot reserve %ldMB crashkernel low memory, please try smaller size.\n",
-		       (unsigned long)(low_size >> 20));
-		return -ENOMEM;
-	}
-
-	pr_info("Reserving %ldMB of low memory at %ldMB for crashkernel (low RAM limit: %ldMB)\n",
-		(unsigned long)(low_size >> 20),
-		(unsigned long)(low_base >> 20),
-		(unsigned long)(low_mem_limit >> 20));
-
-	crashk_low_res.start = low_base;
-	crashk_low_res.end   = low_base + low_size - 1;
-#endif
-	return 0;
-}
-
-static void __init reserve_crashkernel(void)
-{
-	unsigned long long crash_size, crash_base, total_mem;
-	bool high = false;
-	int ret;
-
-	total_mem = memblock_phys_mem_size();
-
-	/* crashkernel=XM */
-	ret = parse_crashkernel(boot_command_line, total_mem, &crash_size, &crash_base);
-	if (ret != 0 || crash_size <= 0) {
-		/* crashkernel=X,high */
-		ret = parse_crashkernel_high(boot_command_line, total_mem,
-					     &crash_size, &crash_base);
-		if (ret != 0 || crash_size <= 0)
-			return;
-		high = true;
-	}
-
-	/* 0 means: find the address automatically */
-	if (!crash_base) {
-		/*
-		 * Set CRASH_ADDR_LOW_MAX upper bound for crash memory,
-		 * crashkernel=x,high reserves memory over CRASH_ADDR_LOW_MAX,
-		 * also allocates 256M extra low memory for DMA buffers
-		 * and swiotlb.
-		 * But the extra memory is not required for all machines.
-		 * So try low memory first and fall back to high memory
-		 * unless "crashkernel=size[KMG],high" is specified.
-		 */
-		if (!high)
-			crash_base = memblock_phys_alloc_range(crash_size,
-						CRASH_ALIGN, CRASH_ALIGN,
-						CRASH_ADDR_LOW_MAX);
-		if (!crash_base)
-			crash_base = memblock_phys_alloc_range(crash_size,
-						CRASH_ALIGN, CRASH_ALIGN,
-						CRASH_ADDR_HIGH_MAX);
-		if (!crash_base) {
-			pr_info("crashkernel reservation failed - No suitable area found.\n");
-			return;
-		}
-	} else {
-		unsigned long long start;
-
-		start = memblock_phys_alloc_range(crash_size, SZ_1M, crash_base,
-						  crash_base + crash_size);
-		if (start != crash_base) {
-			pr_info("crashkernel reservation failed - memory is in use.\n");
-			return;
-		}
-	}
-
-	if (crash_base >= CRASH_ADDR_LOW_MAX && reserve_crashkernel_low()) {
-		memblock_free(crash_base, crash_size);
-		return;
-	}
-
-	pr_info("Reserving %ldMB of memory at %ldMB for crashkernel (System RAM: %ldMB)\n",
-		(unsigned long)(crash_size >> 20),
-		(unsigned long)(crash_base >> 20),
-		(unsigned long)(total_mem >> 20));
-
-	crashk_res.start = crash_base;
-	crashk_res.end   = crash_base + crash_size - 1;
-}
-#else
+#ifndef CONFIG_KEXEC_CORE
 static void __init reserve_crashkernel(void)
 {
 }
diff --git a/include/linux/crash_core.h b/include/linux/crash_core.h
index de62a722431e7db..f6b99da4ed08ecf 100644
--- a/include/linux/crash_core.h
+++ b/include/linux/crash_core.h
@@ -73,6 +73,9 @@ extern unsigned char *vmcoreinfo_data;
 extern size_t vmcoreinfo_size;
 extern u32 *vmcoreinfo_note;
 
+extern struct resource crashk_res;
+extern struct resource crashk_low_res;
+
 Elf_Word *append_elf_note(Elf_Word *buf, char *name, unsigned int type,
 			  void *data, size_t data_len);
 void final_note(Elf_Word *buf);
diff --git a/include/linux/kexec.h b/include/linux/kexec.h
index 0c994ae37729e1e..cd744d962f6f417 100644
--- a/include/linux/kexec.h
+++ b/include/linux/kexec.h
@@ -352,8 +352,6 @@ extern int kexec_load_disabled;
 
 /* Location of a reserved region to hold the crash kernel.
  */
-extern struct resource crashk_res;
-extern struct resource crashk_low_res;
 extern note_buf_t __percpu *crash_notes;
 
 /* flag to track if kexec reboot is in progress */
diff --git a/kernel/crash_core.c b/kernel/crash_core.c
index eb53f5ec62c900f..21105942df72897 100644
--- a/kernel/crash_core.c
+++ b/kernel/crash_core.c
@@ -8,6 +8,12 @@
 #include
 #include
 #include
+#include
+#include
+
+#ifdef CONFIG_KEXEC_CORE
+#include
+#endif
 
 #include
 #include
@@ -22,6 +28,22 @@ u32 *vmcoreinfo_note;
 /* trusted vmcoreinfo, e.g. we can make a copy in the crash memory */
 static unsigned char *vmcoreinfo_data_safecopy;
 
+/* Location of the reserved area for the crash kernel */
+struct resource crashk_res = {
+	.name  = "Crash kernel",
+	.start = 0,
+	.end   = 0,
+	.flags = IORESOURCE_BUSY | IORESOURCE_SYSTEM_RAM,
+	.desc  = IORES_DESC_CRASH_KERNEL
+};
+struct resource crashk_low_res = {
+	.name  = "Crash kernel",
+	.start = 0,
+	.end   = 0,
+	.flags = IORESOURCE_BUSY | IORESOURCE_SYSTEM_RAM,
+	.desc  = IORES_DESC_CRASH_KERNEL
+};
+
 /*
  * parsing the "crashkernel" commandline
  *
@@ -295,6 +317,143 @@ int __init parse_crashkernel_low(char *cmdline,
 		"crashkernel=", suffix_tbl[SUFFIX_LOW]);
 }
 
+/*
+ * --------- Crashkernel reservation ------------------------------
+ */
+
+#ifdef CONFIG_KEXEC_CORE
+
+#ifdef CONFIG_X86
+static int __init reserve_crashkernel_low(void)
+{
+#ifdef CONFIG_X86_64
+	unsigned long long base, low_base = 0, low_size = 0;
+	unsigned long low_mem_limit;
+	int ret;
+
+	low_mem_limit = min(memblock_phys_mem_size(), CRASH_ADDR_LOW_MAX);
+
+	/* crashkernel=Y,low */
+	ret = parse_crashkernel_low(boot_command_line, low_mem_limit, &low_size, &base);
+	if (ret) {
+		/*
+		 * two parts from kernel/dma/swiotlb.c:
+		 * -swiotlb size: user-specified with swiotlb= or default.
+		 *
+		 * -swiotlb overflow buffer: now hardcoded to 32k. We round it
+		 * to 8M for other buffers that may need to stay low too. Also
+		 * make sure we allocate enough extra low memory so that we
+		 * don't run out of DMA buffers for 32-bit devices.
+		 */
+		low_size = max(swiotlb_size_or_default() + (8UL << 20), 256UL << 20);
+	} else {
+		/* passed with crashkernel=0,low ? */
+		if (!low_size)
+			return 0;
+	}
+
+	low_base = memblock_phys_alloc_range(low_size, CRASH_ALIGN, CRASH_ALIGN,
+					     CRASH_ADDR_LOW_MAX);
+	if (!low_base) {
+		pr_err("Cannot reserve %ldMB crashkernel low memory, please try smaller size.\n",
+		       (unsigned long)(low_size >> 20));
+		return -ENOMEM;
+	}
+
+	pr_info("Reserving %ldMB of low memory at %ldMB for crashkernel (low RAM limit: %ldMB)\n",
+		(unsigned long)(low_size >> 20),
+		(unsigned long)(low_base >> 20),
+		(unsigned long)(low_mem_limit >> 20));
+
+	crashk_low_res.start = low_base;
+	crashk_low_res.end   = low_base + low_size - 1;
+#endif
+	return 0;
+}
+
+/*
+ * reserve_crashkernel() - reserves memory for crash kernel
+ *
+ * This function reserves memory area given in "crashkernel=" kernel command
+ * line parameter. The memory reserved is used by dump capture kernel when
+ * primary kernel is crashing.
+ */
+void __init reserve_crashkernel(void)
+{
+	unsigned long long crash_size, crash_base, total_mem;
+	bool high = false;
+	int ret;
+
+	total_mem = memblock_phys_mem_size();
+
+	/* crashkernel=XM */
+	ret = parse_crashkernel(boot_command_line, total_mem, &crash_size, &crash_base);
+	if (ret != 0 || crash_size <= 0) {
+		/* crashkernel=X,high */
+		ret = parse_crashkernel_high(boot_command_line, total_mem,
+					     &crash_size, &crash_base);
+		if (ret != 0 || crash_size <= 0)
+			return;
+		high = true;
+	}
+
+	/* 0 means: find the address automatically */
+	if (!crash_base) {
+		/*
+		 * Set CRASH_ADDR_LOW_MAX upper bound for crash memory,
+		 * crashkernel=x,high reserves memory over CRASH_ADDR_LOW_MAX,
+		 * also allocates 256M extra low memory for DMA buffers
+		 * and swiotlb.
+		 * But the extra memory is not required for all machines.
+		 * So try low memory first and fall back to high memory
+		 * unless "crashkernel=size[KMG],high" is specified.
+		 */
+		if (!high)
+			crash_base = memblock_phys_alloc_range(crash_size,
+						CRASH_ALIGN, CRASH_ALIGN,
+						CRASH_ADDR_LOW_MAX);
+		if (!crash_base)
+			crash_base = memblock_phys_alloc_range(crash_size,
+						CRASH_ALIGN, CRASH_ALIGN,
+						CRASH_ADDR_HIGH_MAX);
+		if (!crash_base) {
+			pr_info("crashkernel reservation failed - No suitable area found.\n");
+			return;
+		}
+	} else {
+		/* User specifies base address explicitly. */
+		unsigned long long start;
+
+		if (!IS_ALIGNED(crash_base, CRASH_ALIGN)) {
+			pr_warn("cannot reserve crashkernel: base address is not %ldMB aligned\n",
+				(unsigned long)CRASH_ALIGN >> 20);
+			return;
+		}
+
+		start = memblock_phys_alloc_range(crash_size, SZ_1M, crash_base,
+						  crash_base + crash_size);
+		if (start != crash_base) {
+			pr_info("crashkernel reservation failed - memory is in use.\n");
+			return;
+		}
+	}
+
+	if (crash_base >= CRASH_ADDR_LOW_MAX && reserve_crashkernel_low()) {
+		memblock_free(crash_base, crash_size);
+		return;
+	}
+
+	pr_info("Reserving %ldMB of memory at %ldMB for crashkernel (System RAM: %ldMB)\n",
+		(unsigned long)(crash_size >> 20),
+		(unsigned long)(crash_base >> 20),
+		(unsigned long)(total_mem >> 20));
+
+	crashk_res.start = crash_base;
+	crashk_res.end   = crash_base + crash_size - 1;
+}
+#endif /* CONFIG_X86 */
+#endif /* CONFIG_KEXEC_CORE */
+
 Elf_Word *append_elf_note(Elf_Word *buf, char *name, unsigned int type,
 			  void *data, size_t data_len)
 {
diff --git a/kernel/kexec_core.c b/kernel/kexec_core.c
index 5a5d192a89ac307..1e0d4909bbb6b77 100644
--- a/kernel/kexec_core.c
+++ b/kernel/kexec_core.c
@@ -54,23 +54,6 @@ note_buf_t __percpu *crash_notes;
 /* Flag to indicate we are going to kexec a new kernel */
 bool kexec_in_progress = false;
 
-
-/* Location of the reserved area for the crash kernel */
-struct resource crashk_res = {
-	.name  = "Crash kernel",
-	.start = 0,
-	.end   = 0,
-	.flags = IORESOURCE_BUSY | IORESOURCE_SYSTEM_RAM,
-	.desc  = IORES_DESC_CRASH_KERNEL
-};
-struct resource crashk_low_res = {
-	.name  = "Crash kernel",
-	.start = 0,
-	.end   = 0,
-	.flags = IORESOURCE_BUSY | IORESOURCE_SYSTEM_RAM,
-	.desc  = IORES_DESC_CRASH_KERNEL
-};
-
 int kexec_should_crash(struct task_struct *p)
 {
 	/*

From patchwork Wed Oct 20 02:03:13 2021
From: Zhen Lei
To: Thomas Gleixner, Ingo Molnar, Borislav Petkov, "H. Peter Anvin",
Peter Anvin" , , Dave Young , Baoquan He , Vivek Goyal , Eric Biederman , , Catalin Marinas , "Will Deacon" , , Rob Herring , Frank Rowand , , Jonathan Corbet , CC: Zhen Lei , Randy Dunlap , Nicolas Saenz Julienne , Feng Zhou , Kefeng Wang Subject: [PATCH v15 06/10] arm64: kdump: introduce some macros for crash kernel reservation Date: Wed, 20 Oct 2021 10:03:13 +0800 Message-ID: <20211020020317.1220-7-thunder.leizhen@huawei.com> X-Mailer: git-send-email 2.26.0.windows.1 In-Reply-To: <20211020020317.1220-1-thunder.leizhen@huawei.com> References: <20211020020317.1220-1-thunder.leizhen@huawei.com> MIME-Version: 1.0 X-Originating-IP: [10.174.178.55] X-ClientProxiedBy: dggems704-chm.china.huawei.com (10.3.19.181) To dggpemm500006.china.huawei.com (7.185.36.236) X-CFilter-Loop: Reflected X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20211019_191449_380525_F9CE873C X-CRM114-Status: GOOD ( 12.21 ) X-BeenThere: linux-arm-kernel@lists.infradead.org X-Mailman-Version: 2.1.34 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Sender: "linux-arm-kernel" Errors-To: linux-arm-kernel-bounces+linux-arm-kernel=archiver.kernel.org@lists.infradead.org From: Chen Zhou Introduce macro CRASH_ALIGN for alignment, macro CRASH_ADDR_LOW_MAX for upper bound of low crash memory, macro CRASH_ADDR_HIGH_MAX for upper bound of high crash memory, use macros instead. Besides, keep consistent with x86, use CRASH_ALIGN as the lower bound of crash kernel reservation. Signed-off-by: Chen Zhou Tested-by: John Donnelly --- arch/arm64/include/asm/kexec.h | 6 ++++++ arch/arm64/mm/init.c | 4 ++-- 2 files changed, 8 insertions(+), 2 deletions(-) diff --git a/arch/arm64/include/asm/kexec.h b/arch/arm64/include/asm/kexec.h index 00dbcc71aeb2918..b51ceb143cbbdb0 100644 --- a/arch/arm64/include/asm/kexec.h +++ b/arch/arm64/include/asm/kexec.h @@ -25,6 +25,12 @@ #define KEXEC_ARCH KEXEC_ARCH_AARCH64 +/* 2M alignment for crash kernel regions */ +#define CRASH_ALIGN SZ_2M + +#define CRASH_ADDR_LOW_MAX arm64_dma_phys_limit +#define CRASH_ADDR_HIGH_MAX MEMBLOCK_ALLOC_ACCESSIBLE + #ifndef __ASSEMBLY__ /** diff --git a/arch/arm64/mm/init.c b/arch/arm64/mm/init.c index 37a81754d9b61f7..2c94ae13b160834 100644 --- a/arch/arm64/mm/init.c +++ b/arch/arm64/mm/init.c @@ -75,7 +75,7 @@ phys_addr_t arm64_dma_phys_limit __ro_after_init; static void __init reserve_crashkernel(void) { unsigned long long crash_base, crash_size; - unsigned long long crash_max = arm64_dma_phys_limit; + unsigned long long crash_max = CRASH_ADDR_LOW_MAX; int ret; ret = parse_crashkernel(boot_command_line, memblock_phys_mem_size(), @@ -91,7 +91,7 @@ static void __init reserve_crashkernel(void) crash_max = crash_base + crash_size; /* Current arm64 boot protocol requires 2MB alignment */ - crash_base = memblock_phys_alloc_range(crash_size, SZ_2M, + crash_base = memblock_phys_alloc_range(crash_size, CRASH_ALIGN, crash_base, crash_max); if (!crash_base) { pr_warn("cannot allocate crashkernel (size:0x%llx)\n", From patchwork Wed Oct 20 02:03:14 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Leizhen (ThunderTown)" X-Patchwork-Id: 12571543 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id C0258C433EF for ; Wed, 20 Oct 2021 02:21:46 +0000 
From: Zhen Lei
To: Thomas Gleixner, Ingo Molnar, Borislav Petkov, "H. Peter Anvin",
Peter Anvin" , , Dave Young , Baoquan He , Vivek Goyal , Eric Biederman , , Catalin Marinas , "Will Deacon" , , Rob Herring , Frank Rowand , , Jonathan Corbet , CC: Zhen Lei , Randy Dunlap , Nicolas Saenz Julienne , Feng Zhou , Kefeng Wang Subject: [PATCH v15 07/10] arm64: kdump: reimplement crashkernel=X Date: Wed, 20 Oct 2021 10:03:14 +0800 Message-ID: <20211020020317.1220-8-thunder.leizhen@huawei.com> X-Mailer: git-send-email 2.26.0.windows.1 In-Reply-To: <20211020020317.1220-1-thunder.leizhen@huawei.com> References: <20211020020317.1220-1-thunder.leizhen@huawei.com> MIME-Version: 1.0 X-Originating-IP: [10.174.178.55] X-ClientProxiedBy: dggems704-chm.china.huawei.com (10.3.19.181) To dggpemm500006.china.huawei.com (7.185.36.236) X-CFilter-Loop: Reflected X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20211019_191454_630120_D1429AAE X-CRM114-Status: GOOD ( 24.84 ) X-BeenThere: linux-arm-kernel@lists.infradead.org X-Mailman-Version: 2.1.34 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Sender: "linux-arm-kernel" Errors-To: linux-arm-kernel-bounces+linux-arm-kernel=archiver.kernel.org@lists.infradead.org From: Chen Zhou There are following issues in arm64 kdump: 1. We use crashkernel=X to reserve crashkernel below 4G, which will fail when there is no enough low memory. 2. If reserving crashkernel above 4G, in this case, crash dump kernel will boot failure because there is no low memory available for allocation. To solve these issues, change the behavior of crashkernel=X and introduce crashkernel=X,[high,low]. crashkernel=X tries low allocation in DMA zone, and fall back to high allocation if it fails. We can also use "crashkernel=X,high" to select a region above DMA zone, which also tries to allocate at least 256M in DMA zone automatically. "crashkernel=Y,low" can be used to allocate specified size low memory. Another minor change, there may be two regions reserved for crash dump kernel, in order to distinct from the high region and make no effect to the use of existing kexec-tools, rename the low region as "Crash kernel (low)". 
Signed-off-by: Chen Zhou
Tested-by: John Donnelly
---
 arch/arm64/include/asm/kexec.h         |  4 ++
 arch/arm64/kernel/machine_kexec_file.c | 12 +++++-
 arch/arm64/kernel/setup.c              | 13 +++++-
 arch/arm64/mm/init.c                   | 59 +++++---------------------
 kernel/crash_core.c                    |  6 +--
 5 files changed, 40 insertions(+), 54 deletions(-)

diff --git a/arch/arm64/include/asm/kexec.h b/arch/arm64/include/asm/kexec.h
index b51ceb143cbbdb0..fa17fc8a5a2701b 100644
--- a/arch/arm64/include/asm/kexec.h
+++ b/arch/arm64/include/asm/kexec.h
@@ -96,6 +96,10 @@ static inline void crash_prepare_suspend(void) {}
 static inline void crash_post_resume(void) {}
 #endif
 
+#ifdef CONFIG_KEXEC_CORE
+extern void __init reserve_crashkernel(void);
+#endif
+
 #define ARCH_HAS_KIMAGE_ARCH
 
 struct kimage_arch {
diff --git a/arch/arm64/kernel/machine_kexec_file.c b/arch/arm64/kernel/machine_kexec_file.c
index 63634b4d72c158f..6f3fa059ca4e816 100644
--- a/arch/arm64/kernel/machine_kexec_file.c
+++ b/arch/arm64/kernel/machine_kexec_file.c
@@ -65,10 +65,18 @@ static int prepare_elf_headers(void **addr, unsigned long *sz)
 
 	/* Exclude crashkernel region */
 	ret = crash_exclude_mem_range(cmem, crashk_res.start, crashk_res.end);
+	if (ret)
+		goto out;
+
+	if (crashk_low_res.end) {
+		ret = crash_exclude_mem_range(cmem, crashk_low_res.start, crashk_low_res.end);
+		if (ret)
+			goto out;
+	}
 
-	if (!ret)
-		ret = crash_prepare_elf64_headers(cmem, true, addr, sz);
+	ret = crash_prepare_elf64_headers(cmem, true, addr, sz);
 
+out:
 	kfree(cmem);
 	return ret;
 }
diff --git a/arch/arm64/kernel/setup.c b/arch/arm64/kernel/setup.c
index be5f85b0a24de69..4bb2e55366be64d 100644
--- a/arch/arm64/kernel/setup.c
+++ b/arch/arm64/kernel/setup.c
@@ -248,7 +248,18 @@ static void __init request_standard_resources(void)
 		    kernel_data.end <= res->end)
 			request_resource(res, &kernel_data);
 #ifdef CONFIG_KEXEC_CORE
-		/* Userspace will find "Crash kernel" region in /proc/iomem. */
+		/*
+		 * Userspace will find "Crash kernel" or "Crash kernel (low)"
+		 * region in /proc/iomem.
+		 * In order to distinct from the high region and make no effect
+		 * to the use of existing kexec-tools, rename the low region as
+		 * "Crash kernel (low)".
+		 */
+		if (crashk_low_res.end && crashk_low_res.start >= res->start &&
+		    crashk_low_res.end <= res->end) {
+			crashk_low_res.name = "Crash kernel (low)";
+			request_resource(res, &crashk_low_res);
+		}
 		if (crashk_res.end && crashk_res.start >= res->start &&
 		    crashk_res.end <= res->end)
 			request_resource(res, &crashk_res);
diff --git a/arch/arm64/mm/init.c b/arch/arm64/mm/init.c
index 2c94ae13b160834..cde26d49f76cfa0 100644
--- a/arch/arm64/mm/init.c
+++ b/arch/arm64/mm/init.c
@@ -36,6 +36,7 @@
 #include
 #include
 #include
+#include
 #include
 #include
 #include
@@ -64,57 +65,11 @@ EXPORT_SYMBOL(memstart_addr);
  */
 phys_addr_t arm64_dma_phys_limit __ro_after_init;
 
-#ifdef CONFIG_KEXEC_CORE
-/*
- * reserve_crashkernel() - reserves memory for crash kernel
- *
- * This function reserves memory area given in "crashkernel=" kernel command
- * line parameter. The memory reserved is used by dump capture kernel when
- * primary kernel is crashing.
- */
+#ifndef CONFIG_KEXEC_CORE
 static void __init reserve_crashkernel(void)
 {
-	unsigned long long crash_base, crash_size;
-	unsigned long long crash_max = CRASH_ADDR_LOW_MAX;
-	int ret;
-
-	ret = parse_crashkernel(boot_command_line, memblock_phys_mem_size(),
-				&crash_size, &crash_base);
-	/* no crashkernel= or invalid value specified */
-	if (ret || !crash_size)
-		return;
-
-	crash_size = PAGE_ALIGN(crash_size);
-
-	/* User specifies base address explicitly. */
-	if (crash_base)
-		crash_max = crash_base + crash_size;
-
-	/* Current arm64 boot protocol requires 2MB alignment */
-	crash_base = memblock_phys_alloc_range(crash_size, CRASH_ALIGN,
-					       crash_base, crash_max);
-	if (!crash_base) {
-		pr_warn("cannot allocate crashkernel (size:0x%llx)\n",
-			crash_size);
-		return;
-	}
-
-	pr_info("crashkernel reserved: 0x%016llx - 0x%016llx (%lld MB)\n",
-		crash_base, crash_base + crash_size, crash_size >> 20);
-
-	/*
-	 * The crashkernel memory will be removed from the kernel linear
-	 * map. Inform kmemleak so that it won't try to access it.
-	 */
-	kmemleak_ignore_phys(crash_base);
-	crashk_res.start = crash_base;
-	crashk_res.end = crash_base + crash_size - 1;
 }
-#else
-static void __init reserve_crashkernel(void)
-{
-}
-#endif /* CONFIG_KEXEC_CORE */
+#endif
 
 /*
  * Return the maximum physical address for a zone accessible by the given bits
@@ -399,6 +354,14 @@ void __init bootmem_init(void)
 	 * reserved, so do it here.
 	 */
 	reserve_crashkernel();
+#ifdef CONFIG_KEXEC_CORE
+	/*
+	 * The low region is intended to be used for crash dump kernel devices,
+	 * just mark the low region as "nomap" simply.
+	 */
+	if (crashk_low_res.end)
+		memblock_mark_nomap(crashk_low_res.start, resource_size(&crashk_low_res));
+#endif
 
 	memblock_dump_all();
 }
diff --git a/kernel/crash_core.c b/kernel/crash_core.c
index 21105942df72897..4d81b9ff42db88b 100644
--- a/kernel/crash_core.c
+++ b/kernel/crash_core.c
@@ -323,10 +323,10 @@ int __init parse_crashkernel_low(char *cmdline,
 
 #ifdef CONFIG_KEXEC_CORE
 
-#ifdef CONFIG_X86
+#if defined(CONFIG_X86) || defined(CONFIG_ARM64)
 static int __init reserve_crashkernel_low(void)
 {
-#ifdef CONFIG_X86_64
+#ifdef CONFIG_64BIT
 	unsigned long long base, low_base = 0, low_size = 0;
 	unsigned long low_mem_limit;
 	int ret;
@@ -451,7 +451,7 @@ void __init reserve_crashkernel(void)
 	crashk_res.start = crash_base;
 	crashk_res.end = crash_base + crash_size - 1;
 }
-#endif /* CONFIG_X86 */
+#endif
 #endif /* CONFIG_KEXEC_CORE */
 
 Elf_Word *append_elf_note(Elf_Word *buf, char *name, unsigned int type,

From patchwork Wed Oct 20 02:03:15 2021
From patchwork Wed Oct 20 02:03:15 2021
X-Patchwork-Submitter: "Leizhen (ThunderTown)"
X-Patchwork-Id: 12571539
From: Zhen Lei
Subject: [PATCH v15 08/10] x86, arm64: Add ARCH_WANT_RESERVE_CRASH_KERNEL config
Date: Wed, 20 Oct 2021 10:03:15 +0800
Message-ID: <20211020020317.1220-9-thunder.leizhen@huawei.com>
In-Reply-To: <20211020020317.1220-1-thunder.leizhen@huawei.com>

From: Chen Zhou

We make the functions reserve_crashkernel[_low]() generic for x86 and arm64.
Since the reserve_crashkernel[_low]() implementations on other architectures
are quite similar as well, more architectures can switch over to the generic
code later. So add CONFIG_ARCH_WANT_RESERVE_CRASH_KERNEL to arch/Kconfig and
select it from X86 and ARM64.
Suggested-by: Mike Rapoport
Signed-off-by: Chen Zhou
Acked-by: Baoquan He
---
 arch/Kconfig        | 3 +++
 arch/arm64/Kconfig  | 1 +
 arch/x86/Kconfig    | 2 ++
 kernel/crash_core.c | 7 ++-----
 4 files changed, 8 insertions(+), 5 deletions(-)

diff --git a/arch/Kconfig b/arch/Kconfig
index 8df1c71026435df..d0585ce1b81b9cb 100644
--- a/arch/Kconfig
+++ b/arch/Kconfig
@@ -24,6 +24,9 @@ config KEXEC_ELF
 config HAVE_IMA_KEXEC
         bool
 
+config ARCH_WANT_RESERVE_CRASH_KERNEL
+        bool
+
 config SET_FS
         bool
 
diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index fee914c716aa262..0ddf06afe625584 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -94,6 +94,7 @@ config ARM64
         select ARCH_WANT_FRAME_POINTERS
         select ARCH_WANT_HUGE_PMD_SHARE if ARM64_4K_PAGES || (ARM64_16K_PAGES && !ARM64_VA_BITS_36)
         select ARCH_WANT_LD_ORPHAN_WARN
+        select ARCH_WANT_RESERVE_CRASH_KERNEL if KEXEC_CORE
         select ARCH_WANTS_NO_INSTR
         select ARCH_HAS_UBSAN_SANITIZE_ALL
         select ARM_AMBA
diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
index d9830e7e1060f7c..66eb5d088695c77 100644
--- a/arch/x86/Kconfig
+++ b/arch/x86/Kconfig
@@ -12,6 +12,7 @@ config X86_32
         depends on !64BIT
         # Options that are inherently 32-bit kernel only:
         select ARCH_WANT_IPC_PARSE_VERSION
+        select ARCH_WANT_RESERVE_CRASH_KERNEL if KEXEC_CORE
         select CLKSRC_I8253
         select CLONE_BACKWARDS
         select GENERIC_VDSO_32
@@ -28,6 +29,7 @@ config X86_64
         select ARCH_HAS_GIGANTIC_PAGE
         select ARCH_SUPPORTS_INT128 if CC_HAS_INT128
         select ARCH_USE_CMPXCHG_LOCKREF
+        select ARCH_WANT_RESERVE_CRASH_KERNEL if KEXEC_CORE
         select HAVE_ARCH_SOFT_DIRTY
         select MODULES_USE_ELF_RELA
         select NEED_DMA_MAP_STATE
diff --git a/kernel/crash_core.c b/kernel/crash_core.c
index 4d81b9ff42db88b..4d5bf55ed71c253 100644
--- a/kernel/crash_core.c
+++ b/kernel/crash_core.c
@@ -321,9 +321,7 @@ int __init parse_crashkernel_low(char *cmdline,
  * --------- Crashkernel reservation ------------------------------
  */
 
-#ifdef CONFIG_KEXEC_CORE
-
-#if defined(CONFIG_X86) || defined(CONFIG_ARM64)
+#ifdef CONFIG_ARCH_WANT_RESERVE_CRASH_KERNEL
 static int __init reserve_crashkernel_low(void)
 {
 #ifdef CONFIG_64BIT
@@ -451,8 +449,7 @@ void __init reserve_crashkernel(void)
         crashk_res.start = crash_base;
         crashk_res.end = crash_base + crash_size - 1;
 }
-#endif
-#endif /* CONFIG_KEXEC_CORE */
+#endif /* CONFIG_ARCH_WANT_RESERVE_CRASH_KERNEL */
 
 Elf_Word *append_elf_note(Elf_Word *buf, char *name, unsigned int type,
                           void *data, size_t data_len)
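For orientation, the effect of the new Kconfig symbol can be pictured as plain compile-time selection: architectures that select ARCH_WANT_RESERVE_CRASH_KERNEL get the shared reserve_crashkernel[_low]() from kernel/crash_core.c, and everyone else keeps their own. The toy translation unit below illustrates only that gating idea; the hand-written #define stands in for what Kconfig would normally generate, and the function bodies are placeholders, not code from this series.

/*
 * Illustration only: how a Kconfig bool gates the shared implementation.
 * CONFIG_ARCH_WANT_RESERVE_CRASH_KERNEL is normally generated by Kconfig;
 * it is defined by hand here so the example stands alone.
 */
#include <stdio.h>

#define CONFIG_ARCH_WANT_RESERVE_CRASH_KERNEL 1     /* as if select'ed */

#ifdef CONFIG_ARCH_WANT_RESERVE_CRASH_KERNEL
/* shared version, standing in for the one in kernel/crash_core.c */
static void reserve_crashkernel(void)
{
        printf("generic reserve_crashkernel()\n");
}
#else
/* architectures that do not opt in keep their own (or an empty stub) */
static void reserve_crashkernel(void)
{
        printf("arch-specific reserve_crashkernel()\n");
}
#endif

int main(void)
{
        reserve_crashkernel();
        return 0;
}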
From patchwork Wed Oct 20 02:03:16 2021
X-Patchwork-Submitter: "Leizhen (ThunderTown)"
X-Patchwork-Id: 12571541
From: Zhen Lei
Subject: [PATCH v15 09/10] of: fdt: Add memory for devices by DT property
 "linux,usable-memory-range"
Date: Wed, 20 Oct 2021 10:03:16 +0800
Message-ID: <20211020020317.1220-10-thunder.leizhen@huawei.com>
In-Reply-To: <20211020020317.1220-1-thunder.leizhen@huawei.com>

From: Chen Zhou

When reserving crashkernel in high memory, some low memory is reserved
for crash dump kernel devices and never mapped by the first kernel.
This memory range is advertised to the crash dump kernel via the DT property
under /chosen:

        linux,usable-memory-range = <BASE1 SIZE1 [BASE2 SIZE2]>

We reused the DT property linux,usable-memory-range and made the low memory
region the second range "BASE2 SIZE2", which keeps compatibility with existing
user-space and older kdump kernels.

The crash dump kernel reads this property at boot time and calls memblock_add()
to add the low memory region after memblock_cap_memory_range() has been called.

Signed-off-by: Chen Zhou
Signed-off-by: Zhen Lei
---
 drivers/of/fdt.c | 47 ++++++++++++++++++++++++++++++++++++-----------
 1 file changed, 36 insertions(+), 11 deletions(-)

diff --git a/drivers/of/fdt.c b/drivers/of/fdt.c
index 4546572af24bbf1..cf59c847b2c28a5 100644
--- a/drivers/of/fdt.c
+++ b/drivers/of/fdt.c
@@ -969,8 +969,16 @@ static void __init early_init_dt_check_for_elfcorehdr(unsigned long node)
                  elfcorehdr_addr, elfcorehdr_size);
 }
 
-static phys_addr_t cap_mem_addr;
-static phys_addr_t cap_mem_size;
+/*
+ * The main usage of linux,usable-memory-range is for crash dump kernel.
+ * Originally, the number of usable-memory regions is one. Now there may
+ * be two regions, low region and high region.
+ * To make compatibility with existing user-space and older kdump, the low
+ * region is always the last range of linux,usable-memory-range if exist.
+ */
+#define MAX_USABLE_RANGES 2
+
+static struct memblock_region cap_mem_regions[MAX_USABLE_RANGES];
 
 /**
  * early_init_dt_check_for_usable_mem_range - Decode usable memory range
@@ -979,20 +987,30 @@ static phys_addr_t cap_mem_size;
  */
 static void __init early_init_dt_check_for_usable_mem_range(unsigned long node)
 {
-        const __be32 *prop;
-        int len;
+        const __be32 *prop, *endp;
+        int len, nr = 0;
+        struct memblock_region *rgn = &cap_mem_regions[0];
 
         pr_debug("Looking for usable-memory-range property... ");
"); prop = of_get_flat_dt_prop(node, "linux,usable-memory-range", &len); - if (!prop || (len < (dt_root_addr_cells + dt_root_size_cells))) + if (!prop) return; - cap_mem_addr = dt_mem_next_cell(dt_root_addr_cells, &prop); - cap_mem_size = dt_mem_next_cell(dt_root_size_cells, &prop); + endp = prop + (len / sizeof(__be32)); + while ((endp - prop) >= (dt_root_addr_cells + dt_root_size_cells)) { + rgn->base = dt_mem_next_cell(dt_root_addr_cells, &prop); + rgn->size = dt_mem_next_cell(dt_root_size_cells, &prop); + + pr_debug("cap_mem_regions[%d]: base=%pa, size=%pa\n", + nr, &rgn->base, &rgn->size); + + if (++nr >= MAX_USABLE_RANGES) + break; + + rgn++; + } - pr_debug("cap_mem_start=%pa cap_mem_size=%pa\n", &cap_mem_addr, - &cap_mem_size); } #ifdef CONFIG_SERIAL_EARLYCON @@ -1265,7 +1283,8 @@ bool __init early_init_dt_verify(void *params) void __init early_init_dt_scan_nodes(void) { - int rc = 0; + int i, rc = 0; + struct memblock_region *rgn = &cap_mem_regions[0]; /* Initialize {size,address}-cells info */ of_scan_flat_dt(early_init_dt_scan_root, NULL); @@ -1279,7 +1298,13 @@ void __init early_init_dt_scan_nodes(void) of_scan_flat_dt(early_init_dt_scan_memory, NULL); /* Handle linux,usable-memory-range property */ - memblock_cap_memory_range(cap_mem_addr, cap_mem_size); + memblock_cap_memory_range(rgn->base, rgn->size); + for (i = 1; i < MAX_USABLE_RANGES; i++) { + rgn++; + + if (rgn->size) + memblock_add(rgn->base, rgn->size); + } } bool __init early_init_dt_scan(void *params) From patchwork Wed Oct 20 02:03:17 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Leizhen (ThunderTown)" X-Patchwork-Id: 12571545 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id BA346C433F5 for ; Wed, 20 Oct 2021 02:22:23 +0000 (UTC) Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mail.kernel.org (Postfix) with ESMTPS id 7DE2360241 for ; Wed, 20 Oct 2021 02:22:23 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.4.1 mail.kernel.org 7DE2360241 Authentication-Results: mail.kernel.org; dmarc=fail (p=none dis=none) header.from=huawei.com Authentication-Results: mail.kernel.org; spf=none smtp.mailfrom=lists.infradead.org DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=bombadil.20210309; h=Sender: Content-Transfer-Encoding:Content-Type:List-Subscribe:List-Help:List-Post: List-Archive:List-Unsubscribe:List-Id:MIME-Version:References:In-Reply-To: Message-ID:Date:Subject:CC:To:From:Reply-To:Content-ID:Content-Description: Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc:Resent-Message-ID: List-Owner; bh=N5dYLT/yoNh9xZNrZz4l2SGRRqRZHTmZWAXVJ4203M4=; b=It6jSVz7ORYyuN wjznCOA6kKqDNGC1mv4K32taKNddIU+zZEqAZy0cJSfnc2c7KgR+066IT4asmfesa83H84LE5gFQ/ 9Z8Q29tU8iJC7nmBqK50ZFkW8NVGkapToDaaD3HOZCvRtvnT2EY4B+DxzLGE5SQF3ozgG9bd1EYmy 3M9s8ncHzxxnXcKgzmTFZboS4fBvEjo43K5yZanrYeNBPc7M/Wuh79o4aDB5ZSUTUjbtSYv4vGJxD ubwqNxqEnodRCvx/FdFxzO7rJPjx6Z9Bo6l9IfeEFtbAV6JpeiSSvKNBNVdk33LoWI8iuNap3L/Fn 6lquB20oTJXML4CWk+4g==; Received: from localhost ([::1] helo=bombadil.infradead.org) by bombadil.infradead.org with esmtp (Exim 4.94.2 #2 (Red Hat Linux)) id 1md1Df-003E2l-OY; Wed, 20 Oct 2021 02:20:44 +0000 Received: from 
From patchwork Wed Oct 20 02:03:17 2021
X-Patchwork-Submitter: "Leizhen (ThunderTown)"
X-Patchwork-Id: 12571545
From: Zhen Lei
Subject: [PATCH v15 10/10] kdump: update Documentation about crashkernel
Date: Wed, 20 Oct 2021 10:03:17 +0800
Message-ID: <20211020020317.1220-11-thunder.leizhen@huawei.com>
In-Reply-To: <20211020020317.1220-1-thunder.leizhen@huawei.com>

From: Chen Zhou

For arm64, the behavior of crashkernel=X has been changed: it tries a low
allocation in the DMA zone and falls back to a high allocation if that fails.
We can also use "crashkernel=X,high" to select a high region above the DMA
zone, which also tries to allocate at least 256M of low memory in the DMA zone
automatically, and "crashkernel=Y,low" can be used to allocate a specified
amount of low memory. So update the Documentation.

Signed-off-by: Chen Zhou
---
 Documentation/admin-guide/kdump/kdump.rst       | 11 +++++++++--
 Documentation/admin-guide/kernel-parameters.txt | 11 +++++++++--
 2 files changed, 18 insertions(+), 4 deletions(-)

diff --git a/Documentation/admin-guide/kdump/kdump.rst b/Documentation/admin-guide/kdump/kdump.rst
index cb30ca3df27c9b2..d4c287044be0c70 100644
--- a/Documentation/admin-guide/kdump/kdump.rst
+++ b/Documentation/admin-guide/kdump/kdump.rst
@@ -361,8 +361,15 @@ Boot into System Kernel
    kernel will automatically locate the crash kernel image within the
    first 512MB of RAM if X is not given.
 
-   On arm64, use "crashkernel=Y[@X]".  Note that the start address of
-   the kernel, X if explicitly specified, must be aligned to 2MiB (0x200000).
+   On arm64, use "crashkernel=X" to try low allocation in DMA zone and
+   fall back to high allocation if it fails.
+   We can also use "crashkernel=X,high" to select a high region above
+   DMA zone, which also tries to allocate at least 256M low memory in
+   DMA zone automatically.
+   "crashkernel=Y,low" can be used to allocate specified size low memory.
+   Use "crashkernel=Y@X" if you really have to reserve memory from
+   specified start address X. Note that the start address of the kernel,
+   X if explicitly specified, must be aligned to 2MiB (0x200000).
 
 Load the Dump-capture Kernel
 ============================
 
diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt
index 43dc35fe5bc038e..98b87e82321413b 100644
--- a/Documentation/admin-guide/kernel-parameters.txt
+++ b/Documentation/admin-guide/kernel-parameters.txt
@@ -783,6 +783,9 @@
                         [KNL, X86-64] Select a region under 4G first, and
                         fall back to reserve region above 4G when '@offset'
                         hasn't been specified.
+                        [KNL, ARM64] Try low allocation in DMA zone and fall back
+                        to high allocation if it fails when '@offset' hasn't been
+                        specified.
                         See Documentation/admin-guide/kdump/kdump.rst for further details.
 
         crashkernel=range1:size1[,range2:size2,...][@offset]
@@ -799,6 +802,8 @@
                         Otherwise memory region will be allocated below 4G, if
                         available.
                         It will be ignored if crashkernel=X is specified.
+                        [KNL, ARM64] range in high memory.
+                        Allow kernel to allocate physical memory region from top.
         crashkernel=size[KMG],low
                         [KNL, X86-64] range under 4G. When crashkernel=X,high
                         is passed, kernel could allocate physical memory region
@@ -807,13 +812,15 @@
                         requires at least 64M+32K low memory, also enough extra
                         low memory is needed to make sure DMA buffers for 32-bit
                         devices won't run out. Kernel would try to allocate at
-                        at least 256M below 4G automatically.
+                        least 256M below 4G automatically.
                         This one let user to specify own low range under 4G
                         for second kernel instead.
                         0: to disable low allocation.
                         It will be ignored when crashkernel=X,high is not used
                         or memory reserved is below 4G.
-
+                        [KNL, ARM64] range in low memory.
+                        This one let user to specify a low range in DMA zone for
+                        crash dump kernel.
 
         cryptomgr.notests
                         [KNL] Disable crypto self-tests
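To tie the documented forms together, the small user-space sketch below parses the arm64 spellings described above and prints what each one asks for. The grammar is intentionally reduced (no range1:size1 list, only K/M/G suffixes), and parse_size() plus the policy strings are this example's own simplifications, not the kernel's parse_crashkernel() implementation.

/*
 * Illustration of the documented "crashkernel=" forms, parsed with a
 * deliberately simplified grammar: size[KMG][,high|,low][@offset[KMG]].
 */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

static unsigned long long parse_size(const char *s, char **end)
{
        unsigned long long v = strtoull(s, end, 0);

        switch (**end) {
        case 'K': v <<= 10; (*end)++; break;
        case 'M': v <<= 20; (*end)++; break;
        case 'G': v <<= 30; (*end)++; break;
        }
        return v;
}

int main(void)
{
        const char *examples[] = {
                "crashkernel=512M", "crashkernel=2G,high",
                "crashkernel=256M,low", "crashkernel=512M@0x10000000",
        };

        for (size_t i = 0; i < sizeof(examples) / sizeof(examples[0]); i++) {
                const char *arg = examples[i] + strlen("crashkernel=");
                char *p;
                unsigned long long size = parse_size(arg, &p);
                unsigned long long offset = 0;
                const char *policy = "low first, fall back to high";

                if (!strncmp(p, ",high", 5))
                        policy = "high region, plus a default low block";
                else if (!strncmp(p, ",low", 4))
                        policy = "explicit low size";
                if ((p = strchr(p, '@')))
                        offset = parse_size(p + 1, &p);

                printf("%-28s size=0x%llx offset=0x%llx policy=%s\n",
                       examples[i], size, offset, policy);
        }
        return 0;
}

The policy strings only restate the arm64 behaviour documented in this patch: plain crashkernel=X tries the DMA zone first, ",high" goes above it while still keeping a default low block, and ",low" sizes that low block explicitly.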