From patchwork Sat Oct 31 07:44:36 2020
X-Patchwork-Submitter: chenzhou
X-Patchwork-Id: 11871155
From: Chen Zhou <chenzhou10@huawei.com>
Subject: [PATCH v13 7/8] arm64: kdump: add memory for devices by DT property
 linux,usable-memory-range
Date: Sat, 31 Oct 2020 15:44:36 +0800
Message-ID: <20201031074437.168008-8-chenzhou10@huawei.com>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20201031074437.168008-1-chenzhou10@huawei.com>
References: <20201031074437.168008-1-chenzhou10@huawei.com>
Cc: John Donnelly, wangkefeng.wang@huawei.com, arnd@arndb.de,
 linux-doc@vger.kernel.org, chenzhou10@huawei.com, xiexiuqi@huawei.com,
 kexec@lists.infradead.org, linux-kernel@vger.kernel.org,
 robh+dt@kernel.org, horms@verge.net.au, james.morse@arm.com,
 linux-arm-kernel@lists.infradead.org, huawei.libin@huawei.com,
 guohanjun@huawei.com, nsaenzjulienne@suse.de

When the crashkernel is reserved in high memory, some low memory is
reserved as well for crash dump kernel devices and is never mapped by
the first kernel. This memory range is advertised to the crash dump
kernel via a DT property under /chosen:

	linux,usable-memory-range = <BASE1 SIZE1 [BASE2 SIZE2]>

We reuse the existing DT property linux,usable-memory-range and pass the
low memory region as the second range "BASE2 SIZE2", which keeps
compatibility with existing user-space and older kdump kernels.

The crash dump kernel reads this property at boot time and calls
memblock_add() to add the low memory region after
memblock_cap_memory_range() has been called.

Signed-off-by: Chen Zhou <chenzhou10@huawei.com>
Tested-by: John Donnelly
---
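Not part of the patch, just an illustration of the property layout described
above: a rough sketch of how a kexec-style loader could pack the two ranges
into /chosen with libfdt. The helper name and the values a caller would pass
are made up, and it assumes the arm64 case of 2 address cells and 2 size
cells; kexec-tools may well do this differently.

#include <stdint.h>
#include <libfdt.h>

/* Hypothetical helper: advertise a high + low usable range to the crash kernel. */
static int set_usable_memory_range(void *fdt,
				   uint64_t high_base, uint64_t high_size,
				   uint64_t low_base, uint64_t low_size)
{
	fdt64_t cells[4];
	int node;

	node = fdt_path_offset(fdt, "/chosen");
	if (node < 0)
		return node;

	cells[0] = cpu_to_fdt64(high_base);	/* BASE1: high region */
	cells[1] = cpu_to_fdt64(high_size);	/* SIZE1 */
	cells[2] = cpu_to_fdt64(low_base);	/* BASE2: low region, always last */
	cells[3] = cpu_to_fdt64(low_size);	/* SIZE2 */

	/* 2 address cells + 2 size cells per range -> one fdt64_t per value. */
	return fdt_setprop(fdt, node, "linux,usable-memory-range",
			   cells, sizeof(cells));
}

The low region deliberately goes last so that a kernel or tool that only
parses the first range keeps behaving as before.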
 arch/arm64/mm/init.c | 43 +++++++++++++++++++++++++++++++++----------
 1 file changed, 33 insertions(+), 10 deletions(-)

diff --git a/arch/arm64/mm/init.c b/arch/arm64/mm/init.c
index 888c4f7eadc3..794f992cb200 100644
--- a/arch/arm64/mm/init.c
+++ b/arch/arm64/mm/init.c
@@ -69,6 +69,15 @@ static void __init reserve_crashkernel(void)
 }
 #endif
 
+/*
+ * The main usage of linux,usable-memory-range is for crash dump kernel.
+ * Originally, the number of usable-memory regions is one. Now there may
+ * be two regions, low region and high region.
+ * To make compatibility with existing user-space and older kdump, the low
+ * region is always the last range of linux,usable-memory-range if exist.
+ */
+#define MAX_USABLE_RANGES	2
+
 #ifdef CONFIG_CRASH_DUMP
 static int __init early_init_dt_scan_elfcorehdr(unsigned long node,
 		const char *uname, int depth, void *data)
@@ -184,9 +193,9 @@ early_param("mem", early_mem);
 static int __init early_init_dt_scan_usablemem(unsigned long node,
 		const char *uname, int depth, void *data)
 {
-	struct memblock_region *usablemem = data;
-	const __be32 *reg;
-	int len;
+	struct memblock_region *usable_rgns = data;
+	const __be32 *reg, *endp;
+	int len, nr = 0;
 
 	if (depth != 1 || strcmp(uname, "chosen") != 0)
 		return 0;
@@ -195,22 +204,36 @@ static int __init early_init_dt_scan_usablemem(unsigned long node,
 	if (!reg || (len < (dt_root_addr_cells + dt_root_size_cells)))
 		return 1;
 
-	usablemem->base = dt_mem_next_cell(dt_root_addr_cells, &reg);
-	usablemem->size = dt_mem_next_cell(dt_root_size_cells, &reg);
+	endp = reg + (len / sizeof(__be32));
+	while ((endp - reg) >= (dt_root_addr_cells + dt_root_size_cells)) {
+		usable_rgns[nr].base = dt_mem_next_cell(dt_root_addr_cells, &reg);
+		usable_rgns[nr].size = dt_mem_next_cell(dt_root_size_cells, &reg);
+
+		if (++nr >= MAX_USABLE_RANGES)
+			break;
+	}
 
 	return 1;
 }
 
 static void __init fdt_enforce_memory_region(void)
 {
-	struct memblock_region reg = {
-		.size = 0,
+	struct memblock_region usable_rgns[MAX_USABLE_RANGES] = {
+		{ .size = 0 },
+		{ .size = 0 }
 	};
 
-	of_scan_flat_dt(early_init_dt_scan_usablemem, &reg);
+	of_scan_flat_dt(early_init_dt_scan_usablemem, &usable_rgns);
 
-	if (reg.size)
-		memblock_cap_memory_range(reg.base, reg.size);
+	/*
+	 * The first range of usable-memory regions is for crash dump
+	 * kernel with only one region or for high region with two regions,
+	 * the second range is dedicated for low region if exist.
+	 */
+	if (usable_rgns[0].size)
+		memblock_cap_memory_range(usable_rgns[0].base, usable_rgns[0].size);
+	if (usable_rgns[1].size)
+		memblock_add(usable_rgns[1].base, usable_rgns[1].size);
 }
 
 void __init arm64_memblock_init(void)
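Also not part of the patch: one way to sanity-check what the crash dump
kernel will see is to read the property back from a copy of the flattened
tree (for example /sys/firmware/fdt) with libfdt. Again only a sketch under
the same 2 address cells / 2 size cells assumption, with a made-up function
name:

#include <stdio.h>
#include <libfdt.h>

/* Hypothetical helper: print every base/size pair found in the property. */
static void dump_usable_memory_range(const void *fdt)
{
	const fdt64_t *cells;
	int node, len, i;

	node = fdt_path_offset(fdt, "/chosen");
	if (node < 0)
		return;

	cells = fdt_getprop(fdt, node, "linux,usable-memory-range", &len);
	if (!cells)
		return;

	/* Each range is a 64-bit base followed by a 64-bit size. */
	for (i = 0; (size_t)(i + 1) * 2 * sizeof(fdt64_t) <= (size_t)len; i++)
		printf("range %d: base 0x%llx size 0x%llx\n", i,
		       (unsigned long long)fdt64_to_cpu(cells[2 * i]),
		       (unsigned long long)fdt64_to_cpu(cells[2 * i + 1]));
}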