From patchwork Thu Feb 6 13:27:41 2025
X-Patchwork-Submitter: Mike Rapoport
X-Patchwork-Id: 13963094
From: Mike Rapoport
To: linux-kernel@vger.kernel.org
Cc: Alexander Graf, Andrew Morton, Andy Lutomirski, Anthony Yznaga,
    Arnd Bergmann, Ashish Kalra, Benjamin Herrenschmidt, Borislav Petkov,
    Catalin Marinas, Dave Hansen, David Woodhouse, Eric Biederman,
    Ingo Molnar, James Gowans, Jonathan Corbet, Krzysztof Kozlowski,
    Mark Rutland, Mike Rapoport, Paolo Bonzini, Pasha Tatashin,
    "H. Peter Anvin", Peter Zijlstra, Pratyush Yadav, Rob Herring,
    Saravana Kannan, Stanislav Kinsburskii, Steven Rostedt,
    Thomas Gleixner, Tom Lendacky, Usama Arif, Will Deacon,
    devicetree@vger.kernel.org, kexec@lists.infradead.org,
    linux-arm-kernel@lists.infradead.org, linux-doc@vger.kernel.org,
    linux-mm@kvack.org, x86@kernel.org
Subject: [PATCH v4 01/14] mm/mm_init: rename init_reserved_page to init_deferred_page
Date: Thu, 6 Feb 2025 15:27:41 +0200
Message-ID: <20250206132754.2596694-2-rppt@kernel.org>
In-Reply-To: <20250206132754.2596694-1-rppt@kernel.org>
References: <20250206132754.2596694-1-rppt@kernel.org>
MIME-Version: 1.0
From: "Mike Rapoport (Microsoft)"

When CONFIG_DEFERRED_STRUCT_PAGE_INIT is enabled, the init_reserved_page()
function initializes a struct page whose initialization would otherwise have
been deferred. Rename it to init_deferred_page() to better reflect what the
function does.
Signed-off-by: Mike Rapoport (Microsoft)
---
 mm/mm_init.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/mm/mm_init.c b/mm/mm_init.c
index 2630cc30147e..c4b425125bad 100644
--- a/mm/mm_init.c
+++ b/mm/mm_init.c
@@ -705,7 +705,7 @@ defer_init(int nid, unsigned long pfn, unsigned long end_pfn)
 	return false;
 }
 
-static void __meminit init_reserved_page(unsigned long pfn, int nid)
+static void __meminit init_deferred_page(unsigned long pfn, int nid)
 {
 	pg_data_t *pgdat;
 	int zid;
@@ -739,7 +739,7 @@ static inline bool defer_init(int nid, unsigned long pfn, unsigned long end_pfn)
 	return false;
 }
 
-static inline void init_reserved_page(unsigned long pfn, int nid)
+static inline void init_deferred_page(unsigned long pfn, int nid)
 {
 }
 #endif /* CONFIG_DEFERRED_STRUCT_PAGE_INIT */
@@ -760,7 +760,7 @@ void __meminit reserve_bootmem_region(phys_addr_t start,
 	if (pfn_valid(start_pfn)) {
 		struct page *page = pfn_to_page(start_pfn);
 
-		init_reserved_page(start_pfn, nid);
+		init_deferred_page(start_pfn, nid);
 
 		/*
 		 * no need for atomic set_bit because the struct

From patchwork Thu Feb 6 13:27:42 2025
X-Patchwork-Submitter: Mike Rapoport
X-Patchwork-Id: 13963095
From: Mike Rapoport
To: linux-kernel@vger.kernel.org
Subject: [PATCH v4 02/14] memblock: add MEMBLOCK_RSRV_KERN flag
Date: Thu, 6 Feb 2025 15:27:42 +0200
Message-ID: <20250206132754.2596694-3-rppt@kernel.org>
In-Reply-To: <20250206132754.2596694-1-rppt@kernel.org>
References: <20250206132754.2596694-1-rppt@kernel.org>
MIME-Version: 1.0
From: "Mike Rapoport (Microsoft)"

Add the MEMBLOCK_RSRV_KERN flag to denote areas that were reserved for
kernel use either directly with memblock_reserve_kern() or via memblock
allocations.

Signed-off-by: Mike Rapoport (Microsoft)
---
 include/linux/memblock.h | 16 +++++++++++++++-
 mm/memblock.c            | 32 ++++++++++++++++++++++++--------
 2 files changed, 39 insertions(+), 9 deletions(-)

diff --git a/include/linux/memblock.h b/include/linux/memblock.h
index e79eb6ac516f..65e274550f5d 100644
--- a/include/linux/memblock.h
+++ b/include/linux/memblock.h
@@ -50,6 +50,7 @@ enum memblock_flags {
 	MEMBLOCK_NOMAP = 0x4,		/* don't add to kernel direct mapping */
 	MEMBLOCK_DRIVER_MANAGED = 0x8,	/* always detected via a driver */
 	MEMBLOCK_RSRV_NOINIT = 0x10,	/* don't initialize struct pages */
+	MEMBLOCK_RSRV_KERN = 0x20,	/* memory reserved for kernel use */
 };
 
 /**
@@ -116,7 +117,19 @@ int memblock_add_node(phys_addr_t base, phys_addr_t size, int nid,
 int memblock_add(phys_addr_t base, phys_addr_t size);
 int memblock_remove(phys_addr_t base, phys_addr_t size);
 int memblock_phys_free(phys_addr_t base, phys_addr_t size);
-int memblock_reserve(phys_addr_t base, phys_addr_t size);
+int __memblock_reserve(phys_addr_t base, phys_addr_t size, int nid,
+		       enum memblock_flags flags);
+
+static __always_inline int memblock_reserve(phys_addr_t base, phys_addr_t size)
+{
+	return __memblock_reserve(base, size, NUMA_NO_NODE, 0);
+}
+
+static __always_inline int memblock_reserve_kern(phys_addr_t base, phys_addr_t size)
+{
+	return __memblock_reserve(base, size, NUMA_NO_NODE, MEMBLOCK_RSRV_KERN);
+}
+
 #ifdef CONFIG_HAVE_MEMBLOCK_PHYS_MAP
 int memblock_physmem_add(phys_addr_t base, phys_addr_t size);
 #endif
@@ -477,6 +490,7 @@ static inline __init_memblock bool memblock_bottom_up(void)
 
 phys_addr_t memblock_phys_mem_size(void);
 phys_addr_t memblock_reserved_size(void);
+phys_addr_t memblock_reserved_kern_size(int nid);
 unsigned long memblock_estimated_nr_free_pages(void);
 phys_addr_t memblock_start_of_DRAM(void);
 phys_addr_t memblock_end_of_DRAM(void);
diff --git a/mm/memblock.c b/mm/memblock.c
index 95af35fd1389..4c33baf4d97c 100644
--- a/mm/memblock.c
+++ b/mm/memblock.c
@@ -491,7 +491,7 @@ static int __init_memblock memblock_double_array(struct memblock_type *type,
 	 * needn't do it
 	 */
 	if (!use_slab)
-		BUG_ON(memblock_reserve(addr, new_alloc_size));
+		BUG_ON(memblock_reserve_kern(addr, new_alloc_size));
 
 	/* Update slab flag */
 	*in_slab = use_slab;
@@ -641,7 +641,7 @@ static int __init_memblock memblock_add_range(struct memblock_type *type,
 #ifdef CONFIG_NUMA
 			WARN_ON(nid != memblock_get_region_node(rgn));
 #endif
-			WARN_ON(flags != rgn->flags);
+			WARN_ON(flags != MEMBLOCK_NONE && flags != rgn->flags);
 			nr_new++;
 			if (insert) {
 				if (start_rgn == -1)
@@ -901,14 +901,15 @@ int __init_memblock memblock_phys_free(phys_addr_t base, phys_addr_t size)
 	return memblock_remove_range(&memblock.reserved, base, size);
 }
 
-int __init_memblock memblock_reserve(phys_addr_t base, phys_addr_t size)
+int __init_memblock __memblock_reserve(phys_addr_t base, phys_addr_t size,
+				       int nid, enum memblock_flags flags)
 {
 	phys_addr_t end = base + size - 1;
 
-	memblock_dbg("%s: [%pa-%pa] %pS\n", __func__,
-		     &base, &end, (void *)_RET_IP_);
+	memblock_dbg("%s: [%pa-%pa] nid=%d flags=%x %pS\n", __func__,
+		     &base, &end, nid, flags, (void *)_RET_IP_);
 
-	return memblock_add_range(&memblock.reserved, base, size, MAX_NUMNODES, 0);
+	return memblock_add_range(&memblock.reserved, base, size, nid, flags);
 }
 
 #ifdef CONFIG_HAVE_MEMBLOCK_PHYS_MAP
@@ -1459,14 +1460,14 @@ phys_addr_t __init memblock_alloc_range_nid(phys_addr_t size,
 again:
 	found = memblock_find_in_range_node(size, align, start, end, nid,
 					    flags);
-	if (found && !memblock_reserve(found, size))
+	if (found && !__memblock_reserve(found, size, nid, MEMBLOCK_RSRV_KERN))
 		goto done;
 
 	if (numa_valid_node(nid) && !exact_nid) {
 		found = memblock_find_in_range_node(size, align, start,
 						    end, NUMA_NO_NODE,
 						    flags);
-		if (found && !memblock_reserve(found, size))
+		if (found && !memblock_reserve_kern(found, size))
 			goto done;
 	}
 
@@ -1751,6 +1752,20 @@ phys_addr_t __init_memblock memblock_reserved_size(void)
 	return memblock.reserved.total_size;
 }
 
+phys_addr_t __init_memblock memblock_reserved_kern_size(int nid)
+{
+	struct memblock_region *r;
+	phys_addr_t total = 0;
+
+	for_each_reserved_mem_region(r) {
+		if (nid == memblock_get_region_node(r) || !numa_valid_node(nid))
+			if (r->flags & MEMBLOCK_RSRV_KERN)
+				total += r->size;
+	}
+
+	return total;
+}
+
 /**
  * memblock_estimated_nr_free_pages - return estimated number of free pages
  * from memblock point of view
@@ -2397,6 +2412,7 @@ static const char * const flagname[] = {
 	[ilog2(MEMBLOCK_NOMAP)] = "NOMAP",
 	[ilog2(MEMBLOCK_DRIVER_MANAGED)] = "DRV_MNG",
 	[ilog2(MEMBLOCK_RSRV_NOINIT)] = "RSV_NIT",
+	[ilog2(MEMBLOCK_RSRV_KERN)] = "RSV_KERN",
 };
 
 static int memblock_debug_show(struct seq_file *m, void *private)

From patchwork Thu Feb 6 13:27:43 2025
X-Patchwork-Submitter: Mike Rapoport
X-Patchwork-Id: 13963077
From: Mike Rapoport
To: linux-kernel@vger.kernel.org
Subject: [PATCH v4 03/14] memblock: Add support for scratch memory
Date: Thu, 6 Feb 2025 15:27:43 +0200
Message-ID: <20250206132754.2596694-4-rppt@kernel.org>
In-Reply-To: <20250206132754.2596694-1-rppt@kernel.org>
References: <20250206132754.2596694-1-rppt@kernel.org>
MIME-Version: 1.0
From: Alexander Graf

With KHO (Kexec HandOver), we need a way to ensure that the new kernel
does not allocate memory on top of any memory regions that the previous
kernel was handing over. But to know where those are, we need to include
them in the memblock.reserved array, which may not be big enough to hold
all the ranges that need to be persisted across kexec. To resize the
array, we need to allocate memory. That brings us into a catch-22
situation.

The solution is to limit memblock allocations to "scratch regions": safe
regions to operate in when there is memory that should remain intact
across kexec. KHO provides several scratch regions as part of its
metadata. These scratch regions are contiguous memory blocks that are
known not to contain any memory that should be persisted across kexec.
They should be large enough to accommodate all memblock allocations done
by the kexeced kernel.

We introduce a new memblock_set_scratch_only() function that allows KHO
to indicate that any memblock allocation must happen from the scratch
regions.

Later, we may want to perform another KHO kexec. For that, we reuse the
same scratch regions. To ensure that no data that will eventually be
handed over gets allocated inside a scratch region, we flip the semantics
of the scratch regions with memblock_clear_scratch_only(): after that
call, no allocations may happen from scratch memblock regions. We will
lift that restriction in the next patch.
Signed-off-by: Alexander Graf Co-developed-by: Mike Rapoport (Microsoft) Signed-off-by: Mike Rapoport (Microsoft) --- include/linux/memblock.h | 20 +++++++++++++ mm/Kconfig | 4 +++ mm/memblock.c | 61 ++++++++++++++++++++++++++++++++++++++++ 3 files changed, 85 insertions(+) diff --git a/include/linux/memblock.h b/include/linux/memblock.h index 65e274550f5d..14e4c6b73e2c 100644 --- a/include/linux/memblock.h +++ b/include/linux/memblock.h @@ -42,6 +42,11 @@ extern unsigned long long max_possible_pfn; * kernel resource tree. * @MEMBLOCK_RSRV_NOINIT: memory region for which struct pages are * not initialized (only for reserved regions). + * @MEMBLOCK_KHO_SCRATCH: memory region that kexec can pass to the next + * kernel in handover mode. During early boot, we do not know about all + * memory reservations yet, so we get scratch memory from the previous + * kernel that we know is good to use. It is the only memory that + * allocations may happen from in this phase. */ enum memblock_flags { MEMBLOCK_NONE = 0x0, /* No special request */ @@ -51,6 +56,7 @@ enum memblock_flags { MEMBLOCK_DRIVER_MANAGED = 0x8, /* always detected via a driver */ MEMBLOCK_RSRV_NOINIT = 0x10, /* don't initialize struct pages */ MEMBLOCK_RSRV_KERN = 0x20, /* memory reserved for kernel use */ + MEMBLOCK_KHO_SCRATCH = 0x40, /* scratch memory for kexec handover */ }; /** @@ -145,6 +151,8 @@ int memblock_mark_mirror(phys_addr_t base, phys_addr_t size); int memblock_mark_nomap(phys_addr_t base, phys_addr_t size); int memblock_clear_nomap(phys_addr_t base, phys_addr_t size); int memblock_reserved_mark_noinit(phys_addr_t base, phys_addr_t size); +int memblock_mark_kho_scratch(phys_addr_t base, phys_addr_t size); +int memblock_clear_kho_scratch(phys_addr_t base, phys_addr_t size); void memblock_free_all(void); void memblock_free(void *ptr, size_t size); @@ -289,6 +297,11 @@ static inline bool memblock_is_driver_managed(struct memblock_region *m) return m->flags & MEMBLOCK_DRIVER_MANAGED; } +static inline 
bool memblock_is_kho_scratch(struct memblock_region *m) +{ + return m->flags & MEMBLOCK_KHO_SCRATCH; +} + int memblock_search_pfn_nid(unsigned long pfn, unsigned long *start_pfn, unsigned long *end_pfn); void __next_mem_pfn_range(int *idx, int nid, unsigned long *out_start_pfn, @@ -617,5 +630,12 @@ static inline void early_memtest(phys_addr_t start, phys_addr_t end) { } static inline void memtest_report_meminfo(struct seq_file *m) { } #endif +#ifdef CONFIG_MEMBLOCK_KHO_SCRATCH +void memblock_set_kho_scratch_only(void); +void memblock_clear_kho_scratch_only(void); +#else +static inline void memblock_set_kho_scratch_only(void) { } +static inline void memblock_clear_kho_scratch_only(void) { } +#endif #endif /* _LINUX_MEMBLOCK_H */ diff --git a/mm/Kconfig b/mm/Kconfig index 1b501db06417..550bbafe5c0b 100644 --- a/mm/Kconfig +++ b/mm/Kconfig @@ -506,6 +506,10 @@ config HAVE_GUP_FAST depends on MMU bool +# Enable memblock support for scratch memory which is needed for kexec handover +config MEMBLOCK_KHO_SCRATCH + bool + # Don't discard allocated memory used to track "memory" and "reserved" memblocks # after early boot, so it can still be used to test for validity of memory. # Also, memblocks are updated with memory hot(un)plug. 
diff --git a/mm/memblock.c b/mm/memblock.c index 4c33baf4d97c..3d68b1fc2bd2 100644 --- a/mm/memblock.c +++ b/mm/memblock.c @@ -106,6 +106,13 @@ unsigned long min_low_pfn; unsigned long max_pfn; unsigned long long max_possible_pfn; +#ifdef CONFIG_MEMBLOCK_KHO_SCRATCH +/* When set to true, only allocate from MEMBLOCK_KHO_SCRATCH ranges */ +static bool kho_scratch_only; +#else +#define kho_scratch_only false +#endif + static struct memblock_region memblock_memory_init_regions[INIT_MEMBLOCK_MEMORY_REGIONS] __initdata_memblock; static struct memblock_region memblock_reserved_init_regions[INIT_MEMBLOCK_RESERVED_REGIONS] __initdata_memblock; #ifdef CONFIG_HAVE_MEMBLOCK_PHYS_MAP @@ -165,6 +172,10 @@ bool __init_memblock memblock_has_mirror(void) static enum memblock_flags __init_memblock choose_memblock_flags(void) { + /* skip non-scratch memory for kho early boot allocations */ + if (kho_scratch_only) + return MEMBLOCK_KHO_SCRATCH; + return system_has_some_mirror ? MEMBLOCK_MIRROR : MEMBLOCK_NONE; } @@ -924,6 +935,18 @@ int __init_memblock memblock_physmem_add(phys_addr_t base, phys_addr_t size) } #endif +#ifdef CONFIG_MEMBLOCK_KHO_SCRATCH +__init_memblock void memblock_set_kho_scratch_only(void) +{ + kho_scratch_only = true; +} + +__init_memblock void memblock_clear_kho_scratch_only(void) +{ + kho_scratch_only = false; +} +#endif + /** * memblock_setclr_flag - set or clear flag for a memory region * @type: memblock type to set/clear flag for @@ -1049,6 +1072,36 @@ int __init_memblock memblock_reserved_mark_noinit(phys_addr_t base, phys_addr_t MEMBLOCK_RSRV_NOINIT); } +/** + * memblock_mark_kho_scratch - Mark a memory region as MEMBLOCK_KHO_SCRATCH. + * @base: the base phys addr of the region + * @size: the size of the region + * + * Only memory regions marked with %MEMBLOCK_KHO_SCRATCH will be considered + * for allocations during early boot with kexec handover. + * + * Return: 0 on success, -errno on failure. 
+ */
+int __init_memblock memblock_mark_kho_scratch(phys_addr_t base, phys_addr_t size)
+{
+	return memblock_setclr_flag(&memblock.memory, base, size, 1,
+				    MEMBLOCK_KHO_SCRATCH);
+}
+
+/**
+ * memblock_clear_kho_scratch - Clear MEMBLOCK_KHO_SCRATCH flag for a
+ * specified region.
+ * @base: the base phys addr of the region
+ * @size: the size of the region
+ *
+ * Return: 0 on success, -errno on failure.
+ */
+int __init_memblock memblock_clear_kho_scratch(phys_addr_t base, phys_addr_t size)
+{
+	return memblock_setclr_flag(&memblock.memory, base, size, 0,
+				    MEMBLOCK_KHO_SCRATCH);
+}
+
 static bool should_skip_region(struct memblock_type *type,
			       struct memblock_region *m,
			       int nid, int flags)
@@ -1080,6 +1133,13 @@ static bool should_skip_region(struct memblock_type *type,
 	if (!(flags & MEMBLOCK_DRIVER_MANAGED) && memblock_is_driver_managed(m))
 		return true;
 
+	/*
+	 * In early alloc during kexec handover, we can only consider
+	 * MEMBLOCK_KHO_SCRATCH regions for the allocations
+	 */
+	if ((flags & MEMBLOCK_KHO_SCRATCH) && !memblock_is_kho_scratch(m))
+		return true;
+
 	return false;
 }
 
@@ -2413,6 +2473,7 @@ static const char * const flagname[] = {
 	[ilog2(MEMBLOCK_DRIVER_MANAGED)] = "DRV_MNG",
 	[ilog2(MEMBLOCK_RSRV_NOINIT)] = "RSV_NIT",
 	[ilog2(MEMBLOCK_RSRV_KERN)] = "RSV_KERN",
+	[ilog2(MEMBLOCK_KHO_SCRATCH)] = "KHO_SCRATCH",
 };
 
 static int memblock_debug_show(struct seq_file *m, void *private)

From patchwork Thu Feb 6 13:27:44 2025
X-Patchwork-Submitter: Mike Rapoport
X-Patchwork-Id: 13963079
From: Mike Rapoport
To: linux-kernel@vger.kernel.org
Subject: [PATCH v4 04/14] memblock: introduce memmap_init_kho_scratch()
Date: Thu, 6 Feb 2025 15:27:44 +0200
Message-ID: <20250206132754.2596694-5-rppt@kernel.org>
In-Reply-To: <20250206132754.2596694-1-rppt@kernel.org>
From: "Mike Rapoport (Microsoft)"

With deferred initialization of struct page it will be necessary to
initialize memory map for KHO scratch regions early.

Add memmap_init_kho_scratch() method that will allow such initialization
in upcoming patches.

Signed-off-by: Mike Rapoport (Microsoft)
---
 include/linux/memblock.h |  2 ++
 mm/internal.h            |  2 ++
 mm/memblock.c            | 22 ++++++++++++++++++++++
 mm/mm_init.c             | 11 ++++++++---
 4 files changed, 34 insertions(+), 3 deletions(-)

diff --git a/include/linux/memblock.h b/include/linux/memblock.h
index 14e4c6b73e2c..20887e199cdb 100644
--- a/include/linux/memblock.h
+++ b/include/linux/memblock.h
@@ -633,9 +633,11 @@ static inline void memtest_report_meminfo(struct seq_file *m) { }
 #ifdef CONFIG_MEMBLOCK_KHO_SCRATCH
 void memblock_set_kho_scratch_only(void);
 void memblock_clear_kho_scratch_only(void);
+void memmap_init_kho_scratch_pages(void);
 #else
 static inline void memblock_set_kho_scratch_only(void) { }
 static inline void memblock_clear_kho_scratch_only(void) { }
+static inline void memmap_init_kho_scratch_pages(void) {}
 #endif
 
 #endif /* _LINUX_MEMBLOCK_H */
diff --git a/mm/internal.h b/mm/internal.h
index 109ef30fee11..986ad9c2a8b2 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -1053,6 +1053,8 @@ DECLARE_STATIC_KEY_TRUE(deferred_pages);
 bool __init deferred_grow_zone(struct zone *zone, unsigned int order);
 #endif /* CONFIG_DEFERRED_STRUCT_PAGE_INIT */
 
+void init_deferred_page(unsigned long pfn, int nid);
+
 enum mminit_level {
 	MMINIT_WARNING,
 	MMINIT_VERIFY,
diff --git a/mm/memblock.c b/mm/memblock.c
index 3d68b1fc2bd2..54bd95745381 100644
--- a/mm/memblock.c
+++ b/mm/memblock.c
@@ -945,6 +945,28 @@ __init_memblock void memblock_clear_kho_scratch_only(void)
 {
 	kho_scratch_only = false;
 }
+
+void __init_memblock memmap_init_kho_scratch_pages(void)
+{
+	phys_addr_t start, end;
+	unsigned long pfn;
+	int nid;
+	u64 i;
+
+	if (!IS_ENABLED(CONFIG_DEFERRED_STRUCT_PAGE_INIT))
+		return;
+
+	/*
+	 * Initialize struct pages for free scratch memory.
+	 * The struct pages for reserved scratch memory will be set up in
+	 * reserve_bootmem_region()
+	 */
+	__for_each_mem_range(i, &memblock.memory, NULL, NUMA_NO_NODE,
+			     MEMBLOCK_KHO_SCRATCH, &start, &end, &nid) {
+		for (pfn = PFN_UP(start); pfn < PFN_DOWN(end); pfn++)
+			init_deferred_page(pfn, nid);
+	}
+}
 #endif
 
 /**
diff --git a/mm/mm_init.c b/mm/mm_init.c
index c4b425125bad..04441c258b05 100644
--- a/mm/mm_init.c
+++ b/mm/mm_init.c
@@ -705,7 +705,7 @@ defer_init(int nid, unsigned long pfn, unsigned long end_pfn)
 	return false;
 }
 
-static void __meminit init_deferred_page(unsigned long pfn, int nid)
+static void __meminit __init_deferred_page(unsigned long pfn, int nid)
 {
 	pg_data_t *pgdat;
 	int zid;
@@ -739,11 +739,16 @@ static inline bool defer_init(int nid, unsigned long pfn, unsigned long end_pfn)
 	return false;
 }
 
-static inline void init_deferred_page(unsigned long pfn, int nid)
+static inline void __init_deferred_page(unsigned long pfn, int nid)
 {
 }
 #endif /* CONFIG_DEFERRED_STRUCT_PAGE_INIT */
 
+void __meminit init_deferred_page(unsigned long pfn, int nid)
+{
+	__init_deferred_page(pfn, nid);
+}
+
 /*
  * Initialised pages do not have PageReserved set.
 * This function is
 * called for each range allocated by the bootmem allocator and
@@ -760,7 +765,7 @@ void __meminit reserve_bootmem_region(phys_addr_t start,
 	if (pfn_valid(start_pfn)) {
 		struct page *page = pfn_to_page(start_pfn);
 
-		init_deferred_page(start_pfn, nid);
+		__init_deferred_page(start_pfn, nid);
 
 		/*
 		 * no need for atomic set_bit because the struct

From patchwork Thu Feb 6 13:27:45 2025
X-Patchwork-Submitter: Mike Rapoport
X-Patchwork-Id: 13963078
From: Mike Rapoport
To: linux-kernel@vger.kernel.org
Subject: [PATCH v4 05/14] kexec: Add Kexec HandOver (KHO) generation helpers
Date: Thu, 6 Feb 2025 15:27:45 +0200
Message-ID: <20250206132754.2596694-6-rppt@kernel.org>
In-Reply-To: <20250206132754.2596694-1-rppt@kernel.org>
From: Alexander Graf

This patch adds the core infrastructure to generate Kexec HandOver
metadata. Kexec HandOver is a mechanism that allows Linux to preserve
state - arbitrary properties as well as memory locations - across kexec.
It does so using 2 concepts:

1) Device Tree - Every KHO kexec carries a KHO specific flattened device
   tree blob that describes the state of the system. Device drivers can
   register to KHO to serialize their state before kexec.

2) Scratch Regions - CMA regions that we allocate in the first kernel.
   CMA gives us the guarantee that no handover pages land in those
   regions, because handover pages must be at a static physical memory
   location. We use these regions as the place to load future kexec
   images so that they won't collide with any handover data.

Signed-off-by: Alexander Graf
Co-developed-by: Mike Rapoport (Microsoft)
Signed-off-by: Mike Rapoport (Microsoft)
---
 Documentation/ABI/testing/sysfs-kernel-kho |  53 +++
 .../admin-guide/kernel-parameters.txt      |  24 +
 MAINTAINERS                                |   1 +
 include/linux/cma.h                        |   2 +
 include/linux/kexec.h                      |  18 +
 include/linux/kexec_handover.h             |  10 +
 kernel/Makefile                            |   1 +
 kernel/kexec_handover.c                    | 450 ++++++++++++++++++
 mm/internal.h                              |   3 -
 mm/mm_init.c                               |   8 +
 10 files changed, 567 insertions(+), 3 deletions(-)
 create mode 100644 Documentation/ABI/testing/sysfs-kernel-kho
 create mode 100644 include/linux/kexec_handover.h
 create mode 100644 kernel/kexec_handover.c

diff --git a/Documentation/ABI/testing/sysfs-kernel-kho b/Documentation/ABI/testing/sysfs-kernel-kho
new file mode 100644
index 000000000000..f13b252bc303
--- /dev/null
+++ b/Documentation/ABI/testing/sysfs-kernel-kho
@@ -0,0 +1,53 @@
+What:		/sys/kernel/kho/active
+Date:		December 2023
+Contact:	Alexander Graf
+Description:
+		Kexec HandOver (KHO) allows Linux to transition the state of
+		compatible drivers into the next kexec'ed kernel. To do so,
+		device drivers will serialize their current state into a DT.
+		While the state is serialized, they are unable to perform
+		any modifications to state that was serialized, such as
+		handed over memory allocations.
+
+		When this file contains "1", the system is in the transition
+		state. When it contains "0", it is not.
To switch between the + two states, echo the respective number into this file. + +What: /sys/kernel/kho/dt_max +Date: December 2023 +Contact: Alexander Graf +Description: + KHO needs to allocate a buffer for the DT that gets + generated before it knows the final size. By default, it + will allocate 10 MiB for it. You can write to this file + to modify the size of that allocation. + +What: /sys/kernel/kho/dt +Date: December 2023 +Contact: Alexander Graf +Description: + When KHO is active, the kernel exposes the generated DT that + carries its current KHO state in this file. Kexec user space + tooling can use this as input file for the KHO payload image. + +What: /sys/kernel/kho/scratch_len +Date: December 2023 +Contact: Alexander Graf +Description: + To support continuous KHO kexecs, we need to reserve + physically contiguous memory regions that will always stay + available for future kexec allocations. This file describes + the length of these memory regions. Kexec user space tooling + can use this to determine where it should place its payload + images. + +What: /sys/kernel/kho/scratch_phys +Date: December 2023 +Contact: Alexander Graf +Description: + To support continuous KHO kexecs, we need to reserve + physically contiguous memory regions that will always stay + available for future kexec allocations. This file describes + the physical location of these memory regions. Kexec user space + tooling can use this to determine where it should place its + payload images. diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt index fb8752b42ec8..ed656e2fb05e 100644 --- a/Documentation/admin-guide/kernel-parameters.txt +++ b/Documentation/admin-guide/kernel-parameters.txt @@ -2698,6 +2698,30 @@ kgdbwait [KGDB,EARLY] Stop kernel execution and enter the kernel debugger at the earliest opportunity. + kho= [KEXEC,EARLY] + Format: { "0" | "1" | "off" | "on" | "y" | "n" } + Enables or disables Kexec HandOver. 
+ "0" | "off" | "n" - kexec handover is disabled + "1" | "on" | "y" - kexec handover is enabled + + kho_scratch= [KEXEC,EARLY] + Format: nn[KMG],mm[KMG] | nn% + Defines the size of the KHO scratch region. The KHO + scratch regions are physically contiguous memory + ranges that can only be used for non-kernel + allocations. That way, even when memory is heavily + fragmented with handed over memory, the kexeced + kernel will always have enough contiguous ranges to + bootstrap itself. + + It is possible to specify the exact amount of + memory in the form of "nn[KMG],mm[KMG]" where the + first parameter defines the size of a global + scratch area and the second parameter defines the + size of additional per-node scratch areas. + The form "nn%" defines scale factor (in percents) + of memory that was used during boot. + kmac= [MIPS] Korina ethernet MAC address. Configure the RouterBoard 532 series on-chip Ethernet adapter MAC address. diff --git a/MAINTAINERS b/MAINTAINERS index 896a307fa065..8327795e8899 100644 --- a/MAINTAINERS +++ b/MAINTAINERS @@ -12826,6 +12826,7 @@ M: Eric Biederman L: kexec@lists.infradead.org S: Maintained W: http://kernel.org/pub/linux/utils/kernel/kexec/ +F: Documentation/ABI/testing/sysfs-kernel-kho F: include/linux/kexec.h F: include/uapi/linux/kexec.h F: kernel/kexec* diff --git a/include/linux/cma.h b/include/linux/cma.h index d15b64f51336..828a3c17504b 100644 --- a/include/linux/cma.h +++ b/include/linux/cma.h @@ -56,6 +56,8 @@ extern void cma_reserve_pages_on_error(struct cma *cma); #ifdef CONFIG_CMA struct folio *cma_alloc_folio(struct cma *cma, int order, gfp_t gfp); bool cma_free_folio(struct cma *cma, const struct folio *folio); +/* Free whole pageblock and set its migration type to MIGRATE_CMA. 
*/ +void init_cma_reserved_pageblock(struct page *page); #else static inline struct folio *cma_alloc_folio(struct cma *cma, int order, gfp_t gfp) { diff --git a/include/linux/kexec.h b/include/linux/kexec.h index f0e9f8eda7a3..ef5c90abafd1 100644 --- a/include/linux/kexec.h +++ b/include/linux/kexec.h @@ -483,6 +483,24 @@ void set_kexec_sig_enforced(void); static inline void set_kexec_sig_enforced(void) {} #endif +/* KHO Notifier index */ +enum kho_event { + KEXEC_KHO_DUMP = 0, + KEXEC_KHO_ABORT = 1, +}; + +struct notifier_block; + +#ifdef CONFIG_KEXEC_HANDOVER +int register_kho_notifier(struct notifier_block *nb); +int unregister_kho_notifier(struct notifier_block *nb); +void kho_memory_init(void); +#else +static inline int register_kho_notifier(struct notifier_block *nb) { return 0; } +static inline int unregister_kho_notifier(struct notifier_block *nb) { return 0; } +static inline void kho_memory_init(void) {} +#endif /* CONFIG_KEXEC_HANDOVER */ + #endif /* !defined(__ASSEBMLY__) */ #endif /* LINUX_KEXEC_H */ diff --git a/include/linux/kexec_handover.h b/include/linux/kexec_handover.h new file mode 100644 index 000000000000..c4b0aab823dc --- /dev/null +++ b/include/linux/kexec_handover.h @@ -0,0 +1,10 @@ +/* SPDX-License-Identifier: GPL-2.0 */ +#ifndef LINUX_KEXEC_HANDOVER_H +#define LINUX_KEXEC_HANDOVER_H + +struct kho_mem { + phys_addr_t addr; + phys_addr_t size; +}; + +#endif /* LINUX_KEXEC_HANDOVER_H */ diff --git a/kernel/Makefile b/kernel/Makefile index 87866b037fbe..cef5377c25cd 100644 --- a/kernel/Makefile +++ b/kernel/Makefile @@ -75,6 +75,7 @@ obj-$(CONFIG_CRASH_DUMP) += crash_core.o obj-$(CONFIG_KEXEC) += kexec.o obj-$(CONFIG_KEXEC_FILE) += kexec_file.o obj-$(CONFIG_KEXEC_ELF) += kexec_elf.o +obj-$(CONFIG_KEXEC_HANDOVER) += kexec_handover.o obj-$(CONFIG_BACKTRACE_SELF_TEST) += backtracetest.o obj-$(CONFIG_COMPAT) += compat.o obj-$(CONFIG_CGROUPS) += cgroup/ diff --git a/kernel/kexec_handover.c b/kernel/kexec_handover.c new file mode 100644 index 
000000000000..eccfe3a25798 --- /dev/null +++ b/kernel/kexec_handover.c @@ -0,0 +1,450 @@ +// SPDX-License-Identifier: GPL-2.0-only +/* + * kexec_handover.c - kexec handover metadata processing + * Copyright (C) 2023 Alexander Graf + * Copyright (C) 2025 Microsoft Corporation, Mike Rapoport + */ + +#define pr_fmt(fmt) "KHO: " fmt + +#include +#include +#include +#include +#include +#include +#include +#include + +static bool kho_enable __ro_after_init; + +static int __init kho_parse_enable(char *p) +{ + return kstrtobool(p, &kho_enable); +} +early_param("kho", kho_parse_enable); + +/* + * With KHO enabled, memory can become fragmented because KHO regions may + * be anywhere in physical address space. The scratch regions give us a + * safe zones that we will never see KHO allocations from. This is where we + * can later safely load our new kexec images into and then use the scratch + * area for early allocations that happen before page allocator is + * initialized. + */ +static struct kho_mem *kho_scratch; +static unsigned int kho_scratch_cnt; + +struct kho_out { + struct blocking_notifier_head chain_head; + struct kobject *kobj; + struct mutex lock; + void *dt; + u64 dt_len; + u64 dt_max; + bool active; +}; + +static struct kho_out kho_out = { + .chain_head = BLOCKING_NOTIFIER_INIT(kho_out.chain_head), + .lock = __MUTEX_INITIALIZER(kho_out.lock), + .dt_max = 10 * SZ_1M, +}; + +int register_kho_notifier(struct notifier_block *nb) +{ + return blocking_notifier_chain_register(&kho_out.chain_head, nb); +} +EXPORT_SYMBOL_GPL(register_kho_notifier); + +int unregister_kho_notifier(struct notifier_block *nb) +{ + return blocking_notifier_chain_unregister(&kho_out.chain_head, nb); +} +EXPORT_SYMBOL_GPL(unregister_kho_notifier); + +static ssize_t dt_read(struct file *file, struct kobject *kobj, + struct bin_attribute *attr, char *buf, + loff_t pos, size_t count) +{ + mutex_lock(&kho_out.lock); + memcpy(buf, attr->private + pos, count); + mutex_unlock(&kho_out.lock); + + 
+	return count;
+}
+
+struct bin_attribute bin_attr_dt_kern = __BIN_ATTR(dt, 0400, dt_read, NULL, 0);
+
+static int kho_expose_dt(void *fdt)
+{
+	long fdt_len = fdt_totalsize(fdt);
+	int err;
+
+	kho_out.dt = fdt;
+	kho_out.dt_len = fdt_len;
+
+	bin_attr_dt_kern.size = fdt_totalsize(fdt);
+	bin_attr_dt_kern.private = fdt;
+	err = sysfs_create_bin_file(kho_out.kobj, &bin_attr_dt_kern);
+
+	return err;
+}
+
+static void kho_abort(void)
+{
+	if (!kho_out.active)
+		return;
+
+	sysfs_remove_bin_file(kho_out.kobj, &bin_attr_dt_kern);
+
+	kvfree(kho_out.dt);
+	kho_out.dt = NULL;
+	kho_out.dt_len = 0;
+
+	blocking_notifier_call_chain(&kho_out.chain_head, KEXEC_KHO_ABORT, NULL);
+
+	kho_out.active = false;
+}
+
+static int kho_serialize(void)
+{
+	void *fdt = NULL;
+	int err = -ENOMEM;
+
+	fdt = kvmalloc(kho_out.dt_max, GFP_KERNEL);
+	if (!fdt)
+		goto out;
+
+	if (fdt_create(fdt, kho_out.dt_max)) {
+		err = -EINVAL;
+		goto out;
+	}
+
+	err = fdt_finish_reservemap(fdt);
+	if (err)
+		goto out;
+
+	err = fdt_begin_node(fdt, "");
+	if (err)
+		goto out;
+
+	err = fdt_property_string(fdt, "compatible", "kho-v1");
+	if (err)
+		goto out;
+
+	/* Loop through all kho dump functions */
+	err = blocking_notifier_call_chain(&kho_out.chain_head, KEXEC_KHO_DUMP, fdt);
+	err = notifier_to_errno(err);
+	if (err)
+		goto out;
+
+	/* Close / */
+	err = fdt_end_node(fdt);
+	if (err)
+		goto out;
+
+	err = fdt_finish(fdt);
+	if (err)
+		goto out;
+
+	if (WARN_ON(fdt_check_header(fdt))) {
+		err = -EINVAL;
+		goto out;
+	}
+
+	err = kho_expose_dt(fdt);
+
+out:
+	if (err) {
+		pr_err("failed to serialize state: %d", err);
+		kho_abort();
+	}
+	return err;
+}
+
+/* Handling for /sys/kernel/kho */
+
+#define KHO_ATTR_RO(_name) \
+	static struct kobj_attribute _name##_attr = __ATTR_RO_MODE(_name, 0400)
+#define KHO_ATTR_RW(_name) \
+	static struct kobj_attribute _name##_attr = __ATTR_RW_MODE(_name, 0600)
+
+static ssize_t active_store(struct kobject *dev, struct kobj_attribute *attr,
+			    const char *buf, size_t size)
+{
+	ssize_t retsize = size;
+	bool val = false;
+	int ret;
+
+	if (kstrtobool(buf, &val) < 0)
+		return -EINVAL;
+
+	if (!kho_enable)
+		return -EOPNOTSUPP;
+	if (!kho_scratch_cnt)
+		return -ENOMEM;
+
+	mutex_lock(&kho_out.lock);
+	if (val != kho_out.active) {
+		if (val) {
+			ret = kho_serialize();
+			if (ret) {
+				retsize = -EINVAL;
+				goto out;
+			}
+			kho_out.active = true;
+		} else {
+			kho_abort();
+		}
+	}
+
+out:
+	mutex_unlock(&kho_out.lock);
+	return retsize;
+}
+
+static ssize_t active_show(struct kobject *dev, struct kobj_attribute *attr,
+			   char *buf)
+{
+	ssize_t ret;
+
+	mutex_lock(&kho_out.lock);
+	ret = sysfs_emit(buf, "%d\n", kho_out.active);
+	mutex_unlock(&kho_out.lock);
+
+	return ret;
+}
+KHO_ATTR_RW(active);
+
+static ssize_t dt_max_store(struct kobject *dev, struct kobj_attribute *attr,
+			    const char *buf, size_t size)
+{
+	u64 val;
+
+	if (kstrtoull(buf, 0, &val))
+		return -EINVAL;
+
+	/* FDT already exists, it's too late to change dt_max */
+	if (kho_out.dt_len)
+		return -EBUSY;
+
+	kho_out.dt_max = val;
+
+	return size;
+}
+
+static ssize_t dt_max_show(struct kobject *dev, struct kobj_attribute *attr,
+			   char *buf)
+{
+	return sysfs_emit(buf, "0x%llx\n", kho_out.dt_max);
+}
+KHO_ATTR_RW(dt_max);
+
+static ssize_t scratch_len_show(struct kobject *dev, struct kobj_attribute *attr,
+				char *buf)
+{
+	ssize_t count = 0;
+
+	for (int i = 0; i < kho_scratch_cnt; i++)
+		count += sysfs_emit_at(buf, count, "0x%llx\n", kho_scratch[i].size);
+
+	return count;
+}
+KHO_ATTR_RO(scratch_len);
+
+static ssize_t scratch_phys_show(struct kobject *dev, struct kobj_attribute *attr,
+				 char *buf)
+{
+	ssize_t count = 0;
+
+	for (int i = 0; i < kho_scratch_cnt; i++)
+		count += sysfs_emit_at(buf, count, "0x%llx\n", kho_scratch[i].addr);
+
+	return count;
+}
+KHO_ATTR_RO(scratch_phys);
+
+static const struct attribute *kho_out_attrs[] = {
+	&active_attr.attr,
+	&dt_max_attr.attr,
+	&scratch_phys_attr.attr,
+	&scratch_len_attr.attr,
+	NULL,
+};
+
+static __init int kho_out_sysfs_init(void)
+{
+	int err;
+
+	kho_out.kobj = kobject_create_and_add("kho", kernel_kobj);
+	if (!kho_out.kobj)
+		return -ENOMEM;
+
+	err = sysfs_create_files(kho_out.kobj, kho_out_attrs);
+	if (err)
+		goto err_put_kobj;
+
+	return 0;
+
+err_put_kobj:
+	kobject_put(kho_out.kobj);
+	return err;
+}
+
+static __init int kho_init(void)
+{
+	int err;
+
+	if (!kho_enable)
+		return -EINVAL;
+
+	err = kho_out_sysfs_init();
+	if (err)
+		return err;
+
+	for (int i = 0; i < kho_scratch_cnt; i++) {
+		unsigned long base_pfn = PHYS_PFN(kho_scratch[i].addr);
+		unsigned long count = kho_scratch[i].size >> PAGE_SHIFT;
+		unsigned long pfn;
+
+		for (pfn = base_pfn; pfn < base_pfn + count;
+		     pfn += pageblock_nr_pages)
+			init_cma_reserved_pageblock(pfn_to_page(pfn));
+	}
+
+	return 0;
+}
+late_initcall(kho_init);
+
+/*
+ * The scratch areas are scaled by default as percent of memory allocated from
+ * memblock. A user can override the scale with command line parameter:
+ *
+ * kho_scratch=N%
+ *
+ * It is also possible to explicitly define size for a global and per-node
+ * scratch areas:
+ *
+ * kho_scratch=n[KMG],m[KMG]
+ *
+ * The explicit size definition takes precedence over scale definition.
+ */
+static unsigned int scratch_scale __initdata = 200;
+static phys_addr_t scratch_size_global __initdata;
+static phys_addr_t scratch_size_pernode __initdata;
+
+static int __init kho_parse_scratch_size(char *p)
+{
+	unsigned long size, size_pernode;
+	char *endptr, *oldp = p;
+
+	if (!p)
+		return -EINVAL;
+
+	size = simple_strtoul(p, &endptr, 0);
+	if (*endptr == '%') {
+		scratch_scale = size;
+		pr_notice("scratch scale is %d percent\n", scratch_scale);
+	} else {
+		size = memparse(p, &p);
+		if (!size || p == oldp)
+			return -EINVAL;
+
+		if (*p != ',')
+			return -EINVAL;
+
+		size_pernode = memparse(p + 1, &p);
+		if (!size_pernode)
+			return -EINVAL;
+
+		scratch_size_global = size;
+		scratch_size_pernode = size_pernode;
+		scratch_scale = 0;
+
+		pr_notice("scratch areas: global: %lluMB pernode: %lluMB\n",
+			  (u64)(scratch_size_global >> 20),
+			  (u64)(scratch_size_pernode >> 20));
+	}
+
+	return 0;
+}
+early_param("kho_scratch", kho_parse_scratch_size);
+
+static phys_addr_t __init scratch_size(int nid)
+{
+	phys_addr_t size;
+
+	if (scratch_scale) {
+		size = memblock_reserved_kern_size(nid) * scratch_scale / 100;
+	} else {
+		if (numa_valid_node(nid))
+			size = scratch_size_pernode;
+		else
+			size = scratch_size_global;
+	}
+
+	return round_up(size, CMA_MIN_ALIGNMENT_BYTES);
+}
+
+/**
+ * kho_reserve_scratch - Reserve a contiguous chunk of memory for kexec
+ *
+ * With KHO we can preserve arbitrary pages in the system. To ensure we still
+ * have a large contiguous region of memory when we search the physical address
+ * space for target memory, let's make sure we always have a large CMA region
+ * active. This CMA region will only be used for movable pages which are not a
+ * problem for us during KHO because we can just move them somewhere else.
+ */
+static void kho_reserve_scratch(void)
+{
+	phys_addr_t addr, size;
+	int nid, i = 1;
+
+	if (!kho_enable)
+		return;
+
+	/* FIXME: deal with node hot-plug/remove */
+	kho_scratch_cnt = num_online_nodes() + 1;
+	size = kho_scratch_cnt * sizeof(*kho_scratch);
+	kho_scratch = memblock_alloc(size, PAGE_SIZE);
+	if (!kho_scratch)
+		goto err_disable_kho;
+
+	/* reserve large contiguous area for allocations without nid */
+	size = scratch_size(NUMA_NO_NODE);
+	addr = memblock_phys_alloc(size, CMA_MIN_ALIGNMENT_BYTES);
+	if (!addr)
+		goto err_free_scratch_desc;
+
+	kho_scratch[0].addr = addr;
+	kho_scratch[0].size = size;
+
+	for_each_online_node(nid) {
+		size = scratch_size(nid);
+		addr = memblock_alloc_range_nid(size, CMA_MIN_ALIGNMENT_BYTES,
+						0, MEMBLOCK_ALLOC_ACCESSIBLE,
+						nid, true);
+		if (!addr)
+			goto err_free_scratch_areas;
+
+		kho_scratch[i].addr = addr;
+		kho_scratch[i].size = size;
+		i++;
+	}
+
+	return;
+
+err_free_scratch_areas:
+	for (i--; i >= 0; i--)
+		memblock_phys_free(kho_scratch[i].addr, kho_scratch[i].size);
+err_free_scratch_desc:
+	memblock_free(kho_scratch, kho_scratch_cnt * sizeof(*kho_scratch));
+err_disable_kho:
+	kho_enable = false;
+}
+
+void __init kho_memory_init(void)
+{
+	kho_reserve_scratch();
+}

diff --git a/mm/internal.h b/mm/internal.h
index 986ad9c2a8b2..fdd379fddf6d 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -841,9 +841,6 @@ int isolate_migratepages_range(struct compact_control *cc,
 			unsigned long low_pfn, unsigned long end_pfn);
 
-/* Free whole pageblock and set its migration type to MIGRATE_CMA. */
-void init_cma_reserved_pageblock(struct page *page);
-
 #endif /* CONFIG_COMPACTION || CONFIG_CMA */
 
 int find_suitable_fallback(struct free_area *area, unsigned int order,

diff --git a/mm/mm_init.c b/mm/mm_init.c
index 04441c258b05..60f08930e434 100644
--- a/mm/mm_init.c
+++ b/mm/mm_init.c
@@ -30,6 +30,7 @@
 #include
 #include
 #include
+#include
 #include "internal.h"
 #include "slab.h"
 #include "shuffle.h"
@@ -2661,6 +2662,13 @@ void __init mm_core_init(void)
 	report_meminit();
 	kmsan_init_shadow();
 	stack_depot_early_init();
+
+	/*
+	 * KHO memory setup must happen while memblock is still active, but
+	 * as close as possible to buddy initialization
+	 */
+	kho_memory_init();
+
 	mem_init();
 	kmem_cache_init();
 	/*

From patchwork Thu Feb 6 13:27:46 2025
From: Mike Rapoport
To: linux-kernel@vger.kernel.org
Subject: [PATCH v4 06/14] kexec: Add KHO parsing support
Date: Thu, 6 Feb 2025 15:27:46 +0200
Message-ID: <20250206132754.2596694-7-rppt@kernel.org>
In-Reply-To: <20250206132754.2596694-1-rppt@kernel.org>
References: <20250206132754.2596694-1-rppt@kernel.org>

From: Alexander Graf

When we have a KHO kexec, we get a device tree and scratch region to
populate the state of the system. Provide helper functions that allow
architecture code to easily handle memory reservations based on them
and give device drivers visibility into the KHO DT and memory
reservations so they can recover their own state.
Signed-off-by: Alexander Graf
Co-developed-by: Mike Rapoport (Microsoft)
Signed-off-by: Mike Rapoport (Microsoft)
---
 Documentation/ABI/testing/sysfs-firmware-kho |   9 +
 MAINTAINERS                                  |   1 +
 include/linux/kexec.h                        |  12 +
 kernel/kexec_handover.c                      | 268 ++++++++++++++++++-
 mm/memblock.c                                |   1 +
 5 files changed, 290 insertions(+), 1 deletion(-)
 create mode 100644 Documentation/ABI/testing/sysfs-firmware-kho

diff --git a/Documentation/ABI/testing/sysfs-firmware-kho b/Documentation/ABI/testing/sysfs-firmware-kho
new file mode 100644
index 000000000000..e4ed2cb7c810
--- /dev/null
+++ b/Documentation/ABI/testing/sysfs-firmware-kho
@@ -0,0 +1,9 @@
+What:		/sys/firmware/kho/dt
+Date:		December 2023
+Contact:	Alexander Graf
+Description:
+		When the kernel was booted with Kexec HandOver (KHO),
+		the device tree that carries metadata about the previous
+		kernel's state is in this file. This file may disappear
+		when all consumers of it have finished interpreting their
+		metadata.

diff --git a/MAINTAINERS b/MAINTAINERS
index 8327795e8899..e1e01b2a3727 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -12826,6 +12826,7 @@ M:	Eric Biederman
 L:	kexec@lists.infradead.org
 S:	Maintained
 W:	http://kernel.org/pub/linux/utils/kernel/kexec/
+F:	Documentation/ABI/testing/sysfs-firmware-kho
 F:	Documentation/ABI/testing/sysfs-kernel-kho
 F:	include/linux/kexec.h
 F:	include/uapi/linux/kexec.h

diff --git a/include/linux/kexec.h b/include/linux/kexec.h
index ef5c90abafd1..4fdf5ee27144 100644
--- a/include/linux/kexec.h
+++ b/include/linux/kexec.h
@@ -490,12 +490,24 @@ enum kho_event {
 };
 
 struct notifier_block;
+struct kho_mem;
 
 #ifdef CONFIG_KEXEC_HANDOVER
+void kho_populate(phys_addr_t dt_phys, phys_addr_t scratch_phys,
+		  u64 scratch_len);
+const void *kho_get_fdt(void);
+void kho_return_mem(const struct kho_mem *mem);
+void *kho_claim_mem(const struct kho_mem *mem);
 int register_kho_notifier(struct notifier_block *nb);
 int unregister_kho_notifier(struct notifier_block *nb);
 void kho_memory_init(void);
 #else
+static inline void kho_populate(phys_addr_t dt_phys, phys_addr_t scratch_phys,
+				u64 scratch_len) {}
+static inline const void *kho_get_fdt(void) { return NULL; }
+static inline void kho_return_mem(const struct kho_mem *mem) { }
+static inline void *kho_claim_mem(const struct kho_mem *mem) { return NULL; }
+
 static inline int register_kho_notifier(struct notifier_block *nb) { return 0; }
 static inline int unregister_kho_notifier(struct notifier_block *nb) { return 0; }
 static inline void kho_memory_init(void) {}

diff --git a/kernel/kexec_handover.c b/kernel/kexec_handover.c
index eccfe3a25798..3b360e3a6057 100644
--- a/kernel/kexec_handover.c
+++ b/kernel/kexec_handover.c
@@ -51,6 +51,15 @@ static struct kho_out kho_out = {
 	.dt_max = 10 * SZ_1M,
 };
 
+struct kho_in {
+	struct kobject *kobj;
+	phys_addr_t kho_scratch_phys;
+	phys_addr_t handover_phys;
+	u32 handover_len;
+};
+
+static struct kho_in kho_in;
+
 int register_kho_notifier(struct notifier_block *nb)
 {
 	return blocking_notifier_chain_register(&kho_out.chain_head, nb);
@@ -63,6 +72,89 @@ int unregister_kho_notifier(struct notifier_block *nb)
 }
 EXPORT_SYMBOL_GPL(unregister_kho_notifier);
 
+const void *kho_get_fdt(void)
+{
+	if (!kho_in.handover_phys)
+		return NULL;
+
+	return __va(kho_in.handover_phys);
+}
+EXPORT_SYMBOL_GPL(kho_get_fdt);
+
+static void kho_return_pfn(ulong pfn)
+{
+	struct page *page = pfn_to_online_page(pfn);
+
+	if (WARN_ON(!page))
+		return;
+	__free_page(page);
+}
+
+/**
+ * kho_return_mem - Notify the kernel that initially reserved memory is no
+ * longer needed.
+ * @mem: memory range that was preserved during kexec handover
+ *
+ * When the last consumer of a page returns their memory, kho returns the
+ * page to the buddy allocator as a free page.
+ */
+void kho_return_mem(const struct kho_mem *mem)
+{
+	unsigned long start_pfn, end_pfn, pfn;
+
+	start_pfn = PFN_DOWN(mem->addr);
+	end_pfn = PFN_UP(mem->addr + mem->size);
+
+	for (pfn = start_pfn; pfn < end_pfn; pfn++)
+		kho_return_pfn(pfn);
+}
+EXPORT_SYMBOL_GPL(kho_return_mem);
+
+static int kho_claim_pfn(ulong pfn)
+{
+	struct page *page = pfn_to_online_page(pfn);
+
+	if (!page)
+		return -ENOMEM;
+
+	/* almost as free_reserved_page(), just don't free the page */
+	ClearPageReserved(page);
+	init_page_count(page);
+	adjust_managed_page_count(page, 1);
+
+	return 0;
+}
+
+/**
+ * kho_claim_mem - Notify the kernel that a handed over memory range is now
+ * in use
+ * @mem: memory range that was preserved during kexec handover
+ *
+ * A kernel subsystem preserved that range during handover and it is going
+ * to reuse this range after kexec. The pages in the range are treated as
+ * allocated, but not %PG_reserved.
+ *
+ * Return: virtual address of the preserved memory range
+ */
+void *kho_claim_mem(const struct kho_mem *mem)
+{
+	unsigned long start_pfn, end_pfn, pfn;
+	void *va = __va(mem->addr);
+
+	start_pfn = PFN_DOWN(mem->addr);
+	end_pfn = PFN_UP(mem->addr + mem->size);
+
+	for (pfn = start_pfn; pfn < end_pfn; pfn++) {
+		int err = kho_claim_pfn(pfn);
+
+		if (err)
+			return NULL;
+	}
+
+	return va;
+}
+EXPORT_SYMBOL_GPL(kho_claim_mem);
+
 static ssize_t dt_read(struct file *file, struct kobject *kobj,
 		       struct bin_attribute *attr, char *buf,
 		       loff_t pos, size_t count)
@@ -273,6 +365,30 @@ static const struct attribute *kho_out_attrs[] = {
 	NULL,
 };
 
+/* Handling for /sys/firmware/kho */
+static BIN_ATTR_SIMPLE_RO(dt_fw);
+
+static __init int kho_in_sysfs_init(const void *fdt)
+{
+	int err;
+
+	kho_in.kobj = kobject_create_and_add("kho", firmware_kobj);
+	if (!kho_in.kobj)
+		return -ENOMEM;
+
+	bin_attr_dt_fw.size = fdt_totalsize(fdt);
+	bin_attr_dt_fw.private = (void *)fdt;
+	err = sysfs_create_bin_file(kho_in.kobj, &bin_attr_dt_fw);
+	if (err)
+		goto err_put_kobj;
+
+	return 0;
+
+err_put_kobj:
+	kobject_put(kho_in.kobj);
+	return err;
+}
+
 static __init int kho_out_sysfs_init(void)
 {
 	int err;
@@ -294,6 +410,7 @@ static __init int kho_out_sysfs_init(void)
 
 static __init int kho_init(void)
 {
+	const void *fdt = kho_get_fdt();
 	int err;
 
 	if (!kho_enable)
@@ -303,6 +420,21 @@ static __init int kho_init(void)
 	if (err)
 		return err;
 
+	if (fdt) {
+		err = kho_in_sysfs_init(fdt);
+		/*
+		 * Failure to create /sys/firmware/kho/dt does not prevent
+		 * reviving state from KHO and setting up KHO for the next
+		 * kexec.
+		 */
+		if (err)
+			pr_err("failed exposing handover FDT in sysfs\n");
+
+		kho_scratch = __va(kho_in.kho_scratch_phys);
+
+		return 0;
+	}
+
 	for (int i = 0; i < kho_scratch_cnt; i++) {
 		unsigned long base_pfn = PHYS_PFN(kho_scratch[i].addr);
 		unsigned long count = kho_scratch[i].size >> PAGE_SHIFT;
@@ -444,7 +576,141 @@ static void kho_reserve_scratch(void)
 	kho_enable = false;
 }
 
+/*
+ * Scan the DT for any memory ranges and make sure they are reserved in
+ * memblock, otherwise they will end up in a weird state on free lists.
+ */
+static void kho_init_reserved_pages(void)
+{
+	const void *fdt = kho_get_fdt();
+	int offset = 0, depth = 0, initial_depth = 0, len;
+
+	if (!fdt)
+		return;
+
+	/* Go through the mem list and add 1 for each reference */
+	for (offset = 0;
+	     offset >= 0 && depth >= initial_depth;
+	     offset = fdt_next_node(fdt, offset, &depth)) {
+		const struct kho_mem *mems;
+		u32 i;
+
+		mems = fdt_getprop(fdt, offset, "mem", &len);
+		if (!mems || len & (sizeof(*mems) - 1))
+			continue;
+
+		for (i = 0; i < len / sizeof(*mems); i++) {
+			const struct kho_mem *mem = &mems[i];
+
+			memblock_reserve(mem->addr, mem->size);
+		}
+	}
+}
+
+static void __init kho_release_scratch(void)
+{
+	phys_addr_t start, end;
+	u64 i;
+
+	memmap_init_kho_scratch_pages();
+
+	/*
+	 * Mark scratch mem as CMA before we return it. That way we
+	 * ensure that no kernel allocations happen on it. That means
+	 * we can reuse it as scratch memory again later.
+	 */
+	__for_each_mem_range(i, &memblock.memory, NULL, NUMA_NO_NODE,
+			     MEMBLOCK_KHO_SCRATCH, &start, &end, NULL) {
+		ulong start_pfn = pageblock_start_pfn(PFN_DOWN(start));
+		ulong end_pfn = pageblock_align(PFN_UP(end));
+		ulong pfn;
+
+		for (pfn = start_pfn; pfn < end_pfn; pfn += pageblock_nr_pages)
+			set_pageblock_migratetype(pfn_to_page(pfn), MIGRATE_CMA);
+	}
+}
+
 void __init kho_memory_init(void)
 {
-	kho_reserve_scratch();
+	if (!kho_get_fdt()) {
+		kho_reserve_scratch();
+	} else {
+		kho_init_reserved_pages();
+		kho_release_scratch();
+	}
+}
+
+void __init kho_populate(phys_addr_t handover_dt_phys, phys_addr_t scratch_phys,
+			 u64 scratch_len)
+{
+	void *handover_dt;
+	struct kho_mem *scratch;
+
+	/* Determine the real size of the DT */
+	handover_dt = early_memremap(handover_dt_phys, sizeof(struct fdt_header));
+	if (!handover_dt) {
+		pr_warn("setup: failed to memremap kexec FDT (0x%llx)\n", handover_dt_phys);
+		return;
+	}
+
+	if (fdt_check_header(handover_dt)) {
+		pr_warn("setup: kexec handover FDT is invalid (0x%llx)\n", handover_dt_phys);
+		early_memunmap(handover_dt, sizeof(struct fdt_header));
+		return;
+	}
+
+	kho_in.handover_len = fdt_totalsize(handover_dt);
+	kho_in.handover_phys = handover_dt_phys;
+
+	early_memunmap(handover_dt, sizeof(struct fdt_header));
+
+	/* Reserve the DT so we can still access it in late boot */
+	memblock_reserve(kho_in.handover_phys, kho_in.handover_len);
+
+	kho_in.kho_scratch_phys = scratch_phys;
+	kho_scratch_cnt = scratch_len / sizeof(*kho_scratch);
+	scratch = early_memremap(scratch_phys, scratch_len);
+	if (!scratch) {
+		pr_warn("setup: failed to memremap kexec scratch (0x%llx)\n", scratch_phys);
+		return;
+	}
+
+	/*
+	 * We pass safe contiguous blocks of memory to use for early boot
+	 * purposes from the previous kernel so that we can resize the
+	 * memblock array as needed.
+	 */
+	for (int i = 0; i < kho_scratch_cnt; i++) {
+		struct kho_mem *area = &scratch[i];
+		u64 size = area->size;
+
+		memblock_add(area->addr, size);
+
+		if (WARN_ON(memblock_mark_kho_scratch(area->addr, size))) {
+			pr_err("Kexec failed to mark the scratch region. Disabling KHO revival.");
+			kho_in.handover_len = 0;
+			kho_in.handover_phys = 0;
+			early_memunmap(scratch, scratch_len);
+			scratch = NULL;
+			break;
+		}
+		pr_debug("Marked 0x%pa+0x%pa as scratch", &area->addr, &size);
+	}
+
+	if (!scratch)
+		return;
+
+	early_memunmap(scratch, scratch_len);
+
+	memblock_reserve(scratch_phys, scratch_len);
+
+	/*
+	 * Now that we have a viable region of scratch memory, let's tell
+	 * the memblock allocator to only use that for any allocations.
+	 * That way we ensure that nothing scribbles over in-use data while
+	 * we initialize the page tables which we will need to ingest all
+	 * memory reservations from the previous kernel.
+	 */
+	memblock_set_kho_scratch_only();
+
+	pr_info("setup: Found kexec handover data. Will skip init for some devices\n");
 }

diff --git a/mm/memblock.c b/mm/memblock.c
index 54bd95745381..84df96efca62 100644
--- a/mm/memblock.c
+++ b/mm/memblock.c
@@ -2366,6 +2366,7 @@ void __init memblock_free_all(void)
 	free_unused_memmap();
 	reset_all_zones_managed_pages();
 
+	memblock_clear_kho_scratch_only();
 	pages = free_low_memory_core_early();
 	totalram_pages_add(pages);
 }

From patchwork Thu Feb 6 13:27:47 2025
From: Mike Rapoport
To: linux-kernel@vger.kernel.org
Subject: [PATCH v4 07/14] kexec: Add KHO support to kexec file loads
Date: Thu, 6 Feb 2025 15:27:47 +0200
Message-ID: <20250206132754.2596694-8-rppt@kernel.org>
In-Reply-To: <20250206132754.2596694-1-rppt@kernel.org>
References: <20250206132754.2596694-1-rppt@kernel.org>

From: Alexander Graf

Kexec has 2 modes: a user space driven mode and a kernel driven mode.
For the kernel driven mode, kernel code determines the physical
addresses of all target buffers that the payload gets copied into.

With KHO, we can only safely copy payloads into the "scratch area".
Teach the kexec file loader about it, so it only allocates for that
area. In addition, enlighten it with support to ask the KHO subsystem
for its respective payloads to copy into target memory. Also teach the
KHO subsystem how to fill the images for file loads.
Signed-off-by: Alexander Graf
Co-developed-by: Mike Rapoport (Microsoft)
Signed-off-by: Mike Rapoport (Microsoft)
---
 include/linux/kexec.h   |  7 ++++
 kernel/kexec_file.c     | 19 +++++++++
 kernel/kexec_handover.c | 92 +++++++++++++++++++++++++++++++++++++++++
 kernel/kexec_internal.h | 16 +++++++
 4 files changed, 134 insertions(+)

diff --git a/include/linux/kexec.h b/include/linux/kexec.h
index 4fdf5ee27144..c5e851717089 100644
--- a/include/linux/kexec.h
+++ b/include/linux/kexec.h
@@ -364,6 +364,13 @@ struct kimage {
 	size_t ima_buffer_size;
 #endif
 
+#ifdef CONFIG_KEXEC_HANDOVER
+	struct {
+		struct kexec_buf dt;
+		struct kexec_buf scratch;
+	} kho;
+#endif
+
 	/* Core ELF header buffer */
 	void *elf_headers;
 	unsigned long elf_headers_sz;

diff --git a/kernel/kexec_file.c b/kernel/kexec_file.c
index 3eedb8c226ad..d28d23bc1cf4 100644
--- a/kernel/kexec_file.c
+++ b/kernel/kexec_file.c
@@ -113,6 +113,12 @@ void kimage_file_post_load_cleanup(struct kimage *image)
 	image->ima_buffer = NULL;
 #endif /* CONFIG_IMA_KEXEC */
 
+#ifdef CONFIG_KEXEC_HANDOVER
+	kvfree(image->kho.dt.buffer);
+	image->kho.dt = (struct kexec_buf) {};
+	image->kho.scratch = (struct kexec_buf) {};
+#endif
+
 	/* See if architecture has anything to cleanup post load */
 	arch_kimage_file_post_load_cleanup(image);
@@ -253,6 +259,11 @@ kimage_file_prepare_segments(struct kimage *image, int kernel_fd, int initrd_fd,
 	/* IMA needs to pass the measurement list to the next kernel. */
 	ima_add_kexec_buffer(image);
 
+	/* If KHO is active, add its images to the list */
+	ret = kho_fill_kimage(image);
+	if (ret)
+		goto out;
+
 	/* Call image load handler */
 	ldata = kexec_image_load_default(image);
@@ -636,6 +647,14 @@ int kexec_locate_mem_hole(struct kexec_buf *kbuf)
 	if (kbuf->mem != KEXEC_BUF_MEM_UNKNOWN)
 		return 0;
 
+	/*
+	 * If KHO is active, only use KHO scratch memory. All other memory
+	 * could potentially be handed over.
+	 */
+	ret = kho_locate_mem_hole(kbuf, locate_mem_hole_callback);
+	if (ret <= 0)
+		return ret;
+
 	if (!IS_ENABLED(CONFIG_ARCH_KEEP_MEMBLOCK))
 		ret = kexec_walk_resources(kbuf, locate_mem_hole_callback);
 	else

diff --git a/kernel/kexec_handover.c b/kernel/kexec_handover.c
index 3b360e3a6057..c26753d613cb 100644
--- a/kernel/kexec_handover.c
+++ b/kernel/kexec_handover.c
@@ -16,6 +16,8 @@
 #include
 #include
 
+#include "kexec_internal.h"
+
 static bool kho_enable __ro_after_init;
 
 static int __init kho_parse_enable(char *p)
@@ -155,6 +157,96 @@ void *kho_claim_mem(const struct kho_mem *mem)
 }
 EXPORT_SYMBOL_GPL(kho_claim_mem);
 
+int kho_fill_kimage(struct kimage *image)
+{
+	ssize_t scratch_size;
+	int err = 0;
+	void *dt;
+
+	mutex_lock(&kho_out.lock);
+
+	if (!kho_out.active)
+		goto out;
+
+	/*
+	 * Create a kexec copy of the DT here. We need this because lifetime
+	 * may be different between kho.dt and the kimage
+	 */
+	dt = kvmemdup(kho_out.dt, kho_out.dt_len, GFP_KERNEL);
+	if (!dt) {
+		err = -ENOMEM;
+		goto out;
+	}
+
+	/* Allocate target memory for kho dt */
+	image->kho.dt = (struct kexec_buf) {
+		.image = image,
+		.buffer = dt,
+		.bufsz = kho_out.dt_len,
+		.mem = KEXEC_BUF_MEM_UNKNOWN,
+		.memsz = kho_out.dt_len,
+		.buf_align = SZ_64K, /* Makes it easier to map */
+		.buf_max = ULONG_MAX,
+		.top_down = true,
+	};
+	err = kexec_add_buffer(&image->kho.dt);
+	if (err) {
+		pr_info("===> %s: kexec_add_buffer\n", __func__);
+		goto out;
+	}
+
+	scratch_size = sizeof(*kho_scratch) * kho_scratch_cnt;
+	image->kho.scratch = (struct kexec_buf) {
+		.image = image,
+		.buffer = kho_scratch,
+		.bufsz = scratch_size,
+		.mem = KEXEC_BUF_MEM_UNKNOWN,
+		.memsz = scratch_size,
+		.buf_align = SZ_64K, /* Makes it easier to map */
+		.buf_max = ULONG_MAX,
+		.top_down = true,
+	};
+	err = kexec_add_buffer(&image->kho.scratch);
+
+out:
+	mutex_unlock(&kho_out.lock);
+	return err;
+}
+
+static int kho_walk_scratch(struct kexec_buf *kbuf,
+			    int (*func)(struct resource *, void *))
+{
+	int ret = 0;
+	int i;
+
+	for (i = 0; i < kho_scratch_cnt; i++) {
+		struct resource res = {
+			.start = kho_scratch[i].addr,
+			.end = kho_scratch[i].addr + kho_scratch[i].size - 1,
+		};
+
+		/* Try to fit the kimage into our KHO scratch region */
+		ret = func(&res, kbuf);
+		if (ret)
+			break;
+	}
+
+	return ret;
+}
+
+int kho_locate_mem_hole(struct kexec_buf *kbuf,
+			int (*func)(struct resource *, void *))
+{
+	int ret;
+
+	if (!kho_out.active || kbuf->image->type == KEXEC_TYPE_CRASH)
+		return 1;
+
+	ret = kho_walk_scratch(kbuf, func);
+
+	return ret == 1 ? 0 : -EADDRNOTAVAIL;
+}
+
 static ssize_t dt_read(struct file *file, struct kobject *kobj,
			struct bin_attribute *attr, char *buf,
			loff_t pos, size_t count)
diff --git a/kernel/kexec_internal.h b/kernel/kexec_internal.h
index d35d9792402d..c535dbd3b5bd 100644
--- a/kernel/kexec_internal.h
+++ b/kernel/kexec_internal.h
@@ -39,4 +39,20 @@ extern size_t kexec_purgatory_size;
 #else /* CONFIG_KEXEC_FILE */
 static inline void kimage_file_post_load_cleanup(struct kimage *image) { }
 #endif /* CONFIG_KEXEC_FILE */
+
+struct kexec_buf;
+
+#ifdef CONFIG_KEXEC_HANDOVER
+int kho_locate_mem_hole(struct kexec_buf *kbuf,
+			int (*func)(struct resource *, void *));
+int kho_fill_kimage(struct kimage *image);
+#else
+static inline int kho_locate_mem_hole(struct kexec_buf *kbuf,
+				      int (*func)(struct resource *, void *))
+{
+	return 0;
+}
+
+static inline int kho_fill_kimage(struct kimage *image) { return 0; }
+#endif
 #endif /* LINUX_KEXEC_INTERNAL_H */

From patchwork Thu Feb 6 13:27:48 2025
From: Mike Rapoport
To: linux-kernel@vger.kernel.org
Subject: [PATCH v4 08/14] kexec: Add config option for KHO
Date: Thu, 6 Feb 2025 15:27:48 +0200
Message-ID: <20250206132754.2596694-9-rppt@kernel.org>
In-Reply-To: <20250206132754.2596694-1-rppt@kernel.org>
References: <20250206132754.2596694-1-rppt@kernel.org>
From: Alexander Graf

We have all generic code in place now to support Kexec with KHO. This
patch adds a config option that depends on architecture support to
enable KHO support.

Signed-off-by: Alexander Graf
Co-developed-by: Mike Rapoport (Microsoft)
Signed-off-by: Mike Rapoport (Microsoft)
---
 kernel/Kconfig.kexec | 13 +++++++++++++
 1 file changed, 13 insertions(+)

diff --git a/kernel/Kconfig.kexec b/kernel/Kconfig.kexec
index 4d111f871951..332824d8d6dc 100644
--- a/kernel/Kconfig.kexec
+++ b/kernel/Kconfig.kexec
@@ -95,6 +95,19 @@ config KEXEC_JUMP
	  Jump between original kernel and kexeced kernel and invoke
	  code in physical address mode via KEXEC
 
+config KEXEC_HANDOVER
+	bool "kexec handover"
+	depends on ARCH_SUPPORTS_KEXEC_HANDOVER && ARCH_SUPPORTS_KEXEC_FILE
+	select MEMBLOCK_KHO_SCRATCH
+	select KEXEC_FILE
+	select LIBFDT
+	select CMA
+	help
+	  Allow kexec to hand over state across kernels by generating and
+	  passing additional metadata to the target kernel. This is useful
+	  to keep data or state alive across the kexec. For this to work,
+	  both source and target kernels need to have this option enabled.
+
 config CRASH_DUMP
	bool "kernel crash dumps"
	default ARCH_DEFAULT_CRASH_DUMP

From patchwork Thu Feb 6 13:27:49 2025
From: Mike Rapoport
To: linux-kernel@vger.kernel.org
Subject: [PATCH v4 09/14] kexec: Add documentation for KHO
Date: Thu, 6 Feb 2025 15:27:49 +0200
Message-ID: <20250206132754.2596694-10-rppt@kernel.org>
In-Reply-To: <20250206132754.2596694-1-rppt@kernel.org>
References: <20250206132754.2596694-1-rppt@kernel.org>
From: Alexander Graf

With KHO in place, let's add documentation that describes what it is and
how to use it.
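The concepts documented in this patch define ``mem`` properties as arrays of ``struct kho_mem`` ranges that the next kernel must mark as reserved. As a hedged, userspace-runnable sketch (the struct layout is taken from the document; the walker itself is illustrative and not part of the series):

```c
#include <stdint.h>
#include <stddef.h>

/* Layout per Documentation/kho/concepts.rst in this series
 * (__u64 fields, native endianness). */
struct kho_mem {
	uint64_t addr;
	uint64_t len;
};

/* Illustrative only: walk a "mem" property (an array of ranges) and
 * return the total number of bytes the next kernel would reserve. */
static uint64_t kho_mem_total(const struct kho_mem *mem, size_t count)
{
	uint64_t total = 0;
	size_t i;

	for (i = 0; i < count; i++)
		total += mem[i].len;
	return total;
}
```

Real consumers would instead use the claim interface from this series (``kho_claim_mem()``) to take ownership of such ranges after boot.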
Signed-off-by: Alexander Graf Co-developed-by: Mike Rapoport (Microsoft) Signed-off-by: Mike Rapoport (Microsoft) --- Documentation/kho/concepts.rst | 80 ++++++++++++++++++++++++++++++++ Documentation/kho/index.rst | 19 ++++++++ Documentation/kho/usage.rst | 60 ++++++++++++++++++++++++ Documentation/subsystem-apis.rst | 1 + MAINTAINERS | 1 + 5 files changed, 161 insertions(+) create mode 100644 Documentation/kho/concepts.rst create mode 100644 Documentation/kho/index.rst create mode 100644 Documentation/kho/usage.rst diff --git a/Documentation/kho/concepts.rst b/Documentation/kho/concepts.rst new file mode 100644 index 000000000000..232bddacc0ef --- /dev/null +++ b/Documentation/kho/concepts.rst @@ -0,0 +1,80 @@ +.. SPDX-License-Identifier: GPL-2.0-or-later + +======================= +Kexec Handover Concepts +======================= + +Kexec HandOver (KHO) is a mechanism that allows Linux to preserve state - +arbitrary properties as well as memory locations - across kexec. + +It introduces multiple concepts: + +KHO Device Tree +--------------- + +Every KHO kexec carries a KHO specific flattened device tree blob that +describes the state of the system. Device drivers can register to KHO to +serialize their state before kexec. After KHO, device drivers can read +the device tree and extract previous state. + +KHO only uses the fdt container format and libfdt library, but does not +adhere to the same property semantics that normal device trees do: Properties +are passed in native endianness and standardized properties like ``regs`` and +``ranges`` do not exist, hence there are no ``#...-cells`` properties. + +KHO introduces a new concept to its device tree: ``mem`` properties. A +``mem`` property can be inside any subnode in the device tree. When present, +it contains an array of physical memory ranges that the new kernel must mark +as reserved on boot. 
It is recommended, but not required, to make these ranges
+as physically contiguous as possible to reduce the number of array elements ::
+
+    struct kho_mem {
+        __u64 addr;
+        __u64 len;
+    };
+
+After boot, drivers can call the kho subsystem to transfer ownership of memory
+that was reserved via a ``mem`` property to themselves to continue using memory
+from the previous execution.
+
+The KHO device tree follows the in-Linux schema requirements. Any element in
+the device tree is documented via device tree schema yamls that explain what
+data gets transferred.
+
+Scratch Regions
+---------------
+
+To boot into kexec, we need to have a physically contiguous memory range that
+contains no handed over memory. Kexec then places the target kernel and initrd
+into that region. The new kernel exclusively uses this region for memory
+allocations during boot, up to the initialization of the page allocator.
+
+We guarantee that we always have such regions through the scratch regions: On
+first boot KHO allocates several physically contiguous memory regions. Since
+after kexec these regions will be used by early memory allocations, there is a
+scratch region per NUMA node plus a scratch region to satisfy allocation
+requests that do not require a particular NUMA node assignment.
+By default, the size of the scratch regions is calculated based on the amount
+of memory allocated during boot. The ``kho_scratch`` kernel command line
+option may be used to explicitly define the size of the scratch regions.
+The scratch regions are declared as CMA when the page allocator is initialized
+so that their memory can be used during system lifetime. CMA gives us the
+guarantee that no handover pages land in that region, because handover pages
+must be at a static physical memory location and CMA enforces that only
+movable pages can be located inside.
+
+After a KHO kexec, we ignore the ``kho_scratch`` kernel command line option
+and instead reuse the exact same region that was originally allocated.
This allows
+us to recursively execute any number of KHO kexecs. Because we used this region
+for boot memory allocations and as target memory for kexec blobs, some parts
+of that memory region may be reserved. These reservations are irrelevant for
+the next KHO, because kexec can overwrite even the original kernel.
+
+KHO active phase
+----------------
+
+To enable the user space based kexec file loader, the kernel needs to be able
+to provide the device tree that describes the previous kernel's state before
+performing the actual kexec. The process of generating that device tree is
+called serialization. When the device tree is generated, some properties
+of the system may become immutable because they are already written down
+in the device tree. That state is called the KHO active phase.
diff --git a/Documentation/kho/index.rst b/Documentation/kho/index.rst
new file mode 100644
index 000000000000..5e7eeeca8520
--- /dev/null
+++ b/Documentation/kho/index.rst
@@ -0,0 +1,19 @@
+.. SPDX-License-Identifier: GPL-2.0-or-later
+
+========================
+Kexec Handover Subsystem
+========================
+
+.. toctree::
+   :maxdepth: 1
+
+   concepts
+   usage
+
+.. only::  subproject and html
+
+
+   Indices
+   =======
+
+   * :ref:`genindex`
diff --git a/Documentation/kho/usage.rst b/Documentation/kho/usage.rst
new file mode 100644
index 000000000000..e7300fbb309c
--- /dev/null
+++ b/Documentation/kho/usage.rst
@@ -0,0 +1,60 @@
+.. SPDX-License-Identifier: GPL-2.0-or-later
+
+====================
+Kexec Handover Usage
+====================
+
+Kexec HandOver (KHO) is a mechanism that allows Linux to preserve state -
+arbitrary properties as well as memory locations - across kexec.
+
+This document expects that you are familiar with the base KHO
+:ref:`Documentation/kho/concepts.rst `. If you have not read
+it yet, please do so now.
+
+Prerequisites
+-------------
+
+KHO is available when the ``CONFIG_KEXEC_HANDOVER`` config option is set to y
+at compile time.
Every KHO producer may have its own config option that you
+need to enable if you would like to preserve its respective state across
+kexec.
+
+To use KHO, please boot the kernel with the ``kho=on`` command line
+parameter. You may use the ``kho_scratch`` parameter to define the size of
+the scratch regions. For example ``kho_scratch=512M,512M`` will reserve 512
+MiB for the global scratch region and 512 MiB scratch regions per NUMA node
+on boot.
+
+Perform a KHO kexec
+-------------------
+
+Before you can perform a KHO kexec, you need to move the system into the
+:ref:`Documentation/kho/concepts.rst ` ::
+
+  $ echo 1 > /sys/kernel/kho/active
+
+After this command, the KHO device tree is available in ``/sys/kernel/kho/dt``.
+
+Next, load the target payload and kexec into it. It is important that you
+use the ``-s`` parameter to use the in-kernel kexec file loader, as user
+space kexec tooling currently has no support for KHO with the user space
+based file loader ::
+
+  # kexec -l Image --initrd=initrd -s
+  # kexec -e
+
+The new kernel will boot up and contain some of the previous kernel's state.
+
+For example, if you used the ``reserve_mem`` command line parameter to create
+an early memory reservation, the new kernel will have that memory at the
+same physical address as the old kernel.
+
+Abort a KHO kexec
+-----------------
+
+You can move the system out of the KHO active phase again by calling ::
+
+  $ echo 0 > /sys/kernel/kho/active
+
+After this command, the KHO device tree is no longer available in
+``/sys/kernel/kho/dt``.
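The activate/load/abort flow described in usage.rst above can be sketched as a small script. Hedged: the sysfs path and the need for ``-s`` are taken from the document; setting ``KHO_RUN=echo`` turns it into a dry run so it can be exercised without a KHO-enabled kernel.

```shell
# Dry-runnable sketch of the KHO kexec flow from usage.rst.
# Assumption: a kernel booted with kho=on exposes /sys/kernel/kho/active.
KHO_RUN="${KHO_RUN:-}"

kho_activate() {
    # Enter the KHO active phase; the serialized device tree then
    # appears in /sys/kernel/kho/dt.
    $KHO_RUN sh -c 'echo 1 > /sys/kernel/kho/active'
}

kho_abort() {
    # Leave the active phase; /sys/kernel/kho/dt disappears again.
    $KHO_RUN sh -c 'echo 0 > /sys/kernel/kho/active'
}

kho_kexec() {
    # -s selects the in-kernel file loader; per the document, the user
    # space loader has no KHO support yet.
    $KHO_RUN kexec -l "$1" --initrd="$2" -s
    $KHO_RUN kexec -e
}

# Dry run:
KHO_RUN=echo
kho_activate
kho_kexec /boot/Image /boot/initrd
```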
diff --git a/Documentation/subsystem-apis.rst b/Documentation/subsystem-apis.rst
index b52ad5b969d4..5fc69d6ff9f0 100644
--- a/Documentation/subsystem-apis.rst
+++ b/Documentation/subsystem-apis.rst
@@ -90,3 +90,4 @@ Other subsystems
    peci/index
    wmi/index
    tee/index
+   kho/index
diff --git a/MAINTAINERS b/MAINTAINERS
index e1e01b2a3727..82c2ef421c00 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -12828,6 +12828,7 @@ S:	Maintained
 W:	http://kernel.org/pub/linux/utils/kernel/kexec/
 F:	Documentation/ABI/testing/sysfs-firmware-kho
 F:	Documentation/ABI/testing/sysfs-kernel-kho
+F:	Documentation/kho/
 F:	include/linux/kexec.h
 F:	include/uapi/linux/kexec.h
 F:	kernel/kexec*

From patchwork Thu Feb 6 13:27:50 2025
From: Mike Rapoport
To: linux-kernel@vger.kernel.org
Subject: [PATCH v4 10/14] arm64: Add KHO support
Date: Thu, 6 Feb 2025 15:27:50 +0200
Message-ID: <20250206132754.2596694-11-rppt@kernel.org>
In-Reply-To: <20250206132754.2596694-1-rppt@kernel.org>
References: <20250206132754.2596694-1-rppt@kernel.org>
From: Alexander Graf

We now have all bits in place to support KHO kexecs. This patch adds
awareness of KHO in the kexec file as well as boot path for arm64 and
adds the respective kconfig option to the architecture so that it can
use KHO successfully.
Signed-off-by: Alexander Graf
Co-developed-by: Mike Rapoport (Microsoft)
Signed-off-by: Mike Rapoport (Microsoft)
---
 arch/arm64/Kconfig |  3 +++
 drivers/of/fdt.c   | 36 ++++++++++++++++++++++++++++++++++++
 drivers/of/kexec.c | 42 ++++++++++++++++++++++++++++++++++++++++++
 3 files changed, 81 insertions(+)

diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index fcdd0ed3eca8..5d9f07cea258 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -1590,6 +1590,9 @@ config ARCH_SUPPORTS_KEXEC_IMAGE_VERIFY_SIG
 config ARCH_DEFAULT_KEXEC_IMAGE_VERIFY_SIG
 	def_bool y
 
+config ARCH_SUPPORTS_KEXEC_HANDOVER
+	def_bool y
+
 config ARCH_SUPPORTS_CRASH_DUMP
 	def_bool y
 
diff --git a/drivers/of/fdt.c b/drivers/of/fdt.c
index aedd0e2dcd89..3178bf9c6bd2 100644
--- a/drivers/of/fdt.c
+++ b/drivers/of/fdt.c
@@ -875,6 +875,39 @@ void __init early_init_dt_check_for_usable_mem_range(void)
 		memblock_add(rgn[i].base, rgn[i].size);
 }
 
+/**
+ * early_init_dt_check_kho - Decode info required for kexec handover from DT
+ */
+static void __init early_init_dt_check_kho(void)
+{
+	unsigned long node = chosen_node_offset;
+	u64 kho_start, scratch_start, scratch_size;
+	const __be32 *p;
+	int l;
+
+	if (!IS_ENABLED(CONFIG_KEXEC_HANDOVER) || (long)node < 0)
+		return;
+
+	p = of_get_flat_dt_prop(node, "linux,kho-dt", &l);
+	if (l != (dt_root_addr_cells + dt_root_size_cells) * sizeof(__be32))
+		return;
+
+	kho_start = dt_mem_next_cell(dt_root_addr_cells, &p);
+
+	p = of_get_flat_dt_prop(node, "linux,kho-scratch", &l);
+	if (l != (dt_root_addr_cells + dt_root_size_cells) * sizeof(__be32))
+		return;
+
+	scratch_start = dt_mem_next_cell(dt_root_addr_cells, &p);
+	scratch_size = dt_mem_next_cell(dt_root_addr_cells, &p);
+
+	p = of_get_flat_dt_prop(node, "linux,kho-mem", &l);
+	if (l != (dt_root_addr_cells + dt_root_size_cells) * sizeof(__be32))
+		return;
+
+	kho_populate(kho_start, scratch_start, scratch_size);
+}
+
 #ifdef CONFIG_SERIAL_EARLYCON
 
 int __init early_init_dt_scan_chosen_stdout(void)
@@ -1169,6 +1202,9 @@ void __init early_init_dt_scan_nodes(void)
 
 	/* Handle linux,usable-memory-range property */
 	early_init_dt_check_for_usable_mem_range();
+
+	/* Handle kexec handover */
+	early_init_dt_check_kho();
 }
 
 bool __init early_init_dt_scan(void *dt_virt, phys_addr_t dt_phys)
diff --git a/drivers/of/kexec.c b/drivers/of/kexec.c
index 5b924597a4de..f6cf0bc13246 100644
--- a/drivers/of/kexec.c
+++ b/drivers/of/kexec.c
@@ -264,6 +264,43 @@ static inline int setup_ima_buffer(const struct kimage *image, void *fdt,
 }
 #endif /* CONFIG_IMA_KEXEC */
 
+static int kho_add_chosen(const struct kimage *image, void *fdt, int chosen_node)
+{
+	void *dt = NULL;
+	phys_addr_t dt_mem = 0;
+	phys_addr_t dt_len = 0;
+	phys_addr_t scratch_mem = 0;
+	phys_addr_t scratch_len = 0;
+	int ret = 0;
+
+#ifdef CONFIG_KEXEC_HANDOVER
+	dt = image->kho.dt.buffer;
+	dt_mem = image->kho.dt.mem;
+	dt_len = image->kho.dt.bufsz;
+
+	scratch_mem = image->kho.scratch.mem;
+	scratch_len = image->kho.scratch.bufsz;
+#endif
+
+	if (!dt)
+		goto out;
+
+	pr_debug("Adding kho metadata to DT");
+
+	ret = fdt_appendprop_addrrange(fdt, 0, chosen_node, "linux,kho-dt",
+				       dt_mem, dt_len);
+	if (ret)
+		goto out;
+
+	ret = fdt_appendprop_addrrange(fdt, 0, chosen_node, "linux,kho-scratch",
+				       scratch_mem, scratch_len);
+	if (ret)
+		goto out;
+
+out:
+	return ret;
+}
+
 /*
  * of_kexec_alloc_and_setup_fdt - Alloc and setup a new Flattened Device Tree
  *
@@ -414,6 +451,11 @@ void *of_kexec_alloc_and_setup_fdt(const struct kimage *image,
 #endif
 	}
 
+	/* Add kho metadata if this is a KHO image */
+	ret = kho_add_chosen(image, fdt, chosen_node);
+	if (ret)
+		goto out;
+
 	/* add bootargs */
 	if (cmdline) {
 		ret = fdt_setprop_string(fdt, chosen_node, "bootargs", cmdline);

From patchwork Thu Feb 6 13:27:51 2025
From: Mike Rapoport
To: linux-kernel@vger.kernel.org
Subject: [PATCH v4 11/14] x86/setup: use memblock_reserve_kern for memory used by kernel
Date: Thu, 6 Feb 2025 15:27:51 +0200
Message-ID: <20250206132754.2596694-12-rppt@kernel.org>
In-Reply-To: <20250206132754.2596694-1-rppt@kernel.org>
References: <20250206132754.2596694-1-rppt@kernel.org>
From: "Mike Rapoport (Microsoft)"

memblock_reserve() does not distinguish memory used by firmware from
memory used by the kernel. The distinction is nice to have for
accounting of early memory allocations and reservations, but it is
essential for kexec handover (KHO), which needs to know how much memory
the kernel consumes during boot.

Use memblock_reserve_kern() to reserve kernel memory, such as the
kernel image, the initrd and setup data.

Signed-off-by: Mike Rapoport (Microsoft)
---
 arch/x86/kernel/setup.c | 14 +++++++-------
 1 file changed, 7 insertions(+), 7 deletions(-)

diff --git a/arch/x86/kernel/setup.c b/arch/x86/kernel/setup.c
index cebee310e200..c80c124af332 100644
--- a/arch/x86/kernel/setup.c
+++ b/arch/x86/kernel/setup.c
@@ -220,8 +220,8 @@ static void __init cleanup_highmap(void)
 static void __init reserve_brk(void)
 {
 	if (_brk_end > _brk_start)
-		memblock_reserve(__pa_symbol(_brk_start),
-				 _brk_end - _brk_start);
+		memblock_reserve_kern(__pa_symbol(_brk_start),
+				      _brk_end - _brk_start);
 
 	/* Mark brk area as locked down and no longer taking any
 	   new allocations */
@@ -294,7 +294,7 @@ static void __init early_reserve_initrd(void)
 	    !ramdisk_image || !ramdisk_size)
 		return;		/* No initrd provided by bootloader */
 
-	memblock_reserve(ramdisk_image, ramdisk_end - ramdisk_image);
+	memblock_reserve_kern(ramdisk_image, ramdisk_end - ramdisk_image);
 }
 
 static void __init reserve_initrd(void)
@@ -347,7 +347,7 @@ static void __init add_early_ima_buffer(u64 phys_addr)
 	}
 
 	if (data->size) {
-		memblock_reserve(data->addr, data->size);
+		memblock_reserve_kern(data->addr, data->size);
 		ima_kexec_buffer_phys = data->addr;
 		ima_kexec_buffer_size = data->size;
 	}
@@ -447,7 +447,7 @@ static void __init memblock_x86_reserve_range_setup_data(void)
 		len = sizeof(*data);
 		pa_next = data->next;
 
-		memblock_reserve(pa_data, sizeof(*data) + data->len);
+		memblock_reserve_kern(pa_data, sizeof(*data) + data->len);
 
 		if (data->type == SETUP_INDIRECT) {
 			len += data->len;
@@ -461,7 +461,7 @@ static void __init memblock_x86_reserve_range_setup_data(void)
 			indirect = (struct setup_indirect *)data->data;
 
 			if (indirect->type != SETUP_INDIRECT)
-				memblock_reserve(indirect->addr, indirect->len);
+				memblock_reserve_kern(indirect->addr, indirect->len);
 		}
 
 		pa_data = pa_next;
@@ -649,7 +649,7 @@ static void __init early_reserve_memory(void)
 	 * __end_of_kernel_reserve symbol must be explicitly reserved with a
 	 * separate memblock_reserve() or they will be discarded.
 	 */
-	memblock_reserve(__pa_symbol(_text),
+	memblock_reserve_kern(__pa_symbol(_text),
 			 (unsigned long)__end_of_kernel_reserve - (unsigned long)_text);
 
 	/*

From patchwork Thu Feb 6 13:27:52 2025
From: Mike Rapoport
To: linux-kernel@vger.kernel.org
Subject: [PATCH v4 12/14] x86: Add KHO support
Date: Thu, 6 Feb 2025 15:27:52 +0200
Message-ID: <20250206132754.2596694-13-rppt@kernel.org>
In-Reply-To: <20250206132754.2596694-1-rppt@kernel.org>
References: <20250206132754.2596694-1-rppt@kernel.org>

From: Alexander Graf

We now have all bits in place to support KHO kexecs. This patch adds
KHO awareness to the x86 kexec file-load and boot paths, and adds the
respective Kconfig option so that the architecture can use KHO
successfully.

In addition, it enlightens the decompression code about KHO so that its
KASLR location finder only considers memory regions that are not
already occupied by KHO memory.
Signed-off-by: Alexander Graf
Co-developed-by: Mike Rapoport (Microsoft)
Signed-off-by: Mike Rapoport (Microsoft)
---
 arch/x86/Kconfig                       |  3 ++
 arch/x86/boot/compressed/kaslr.c       | 52 +++++++++++++++++++++++++-
 arch/x86/include/asm/setup.h           |  4 ++
 arch/x86/include/uapi/asm/setup_data.h | 13 ++++++-
 arch/x86/kernel/e820.c                 | 18 +++++++++
 arch/x86/kernel/kexec-bzimage64.c      | 36 ++++++++++++++++++
 arch/x86/kernel/setup.c                | 25 +++++++++++++
 arch/x86/realmode/init.c               |  2 +
 8 files changed, 151 insertions(+), 2 deletions(-)

diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
index 87198d957e2f..3a2d7b381704 100644
--- a/arch/x86/Kconfig
+++ b/arch/x86/Kconfig
@@ -2090,6 +2090,9 @@ config ARCH_SUPPORTS_KEXEC_BZIMAGE_VERIFY_SIG
 config ARCH_SUPPORTS_KEXEC_JUMP
 	def_bool y
 
+config ARCH_SUPPORTS_KEXEC_HANDOVER
+	def_bool y
+
 config ARCH_SUPPORTS_CRASH_DUMP
 	def_bool X86_64 || (X86_32 && HIGHMEM)
 
diff --git a/arch/x86/boot/compressed/kaslr.c b/arch/x86/boot/compressed/kaslr.c
index f03d59ea6e40..c932a30deb20 100644
--- a/arch/x86/boot/compressed/kaslr.c
+++ b/arch/x86/boot/compressed/kaslr.c
@@ -760,6 +760,55 @@ static void process_e820_entries(unsigned long minimum,
 	}
 }
 
+/*
+ * If KHO is active, only process its scratch areas to ensure we are not
+ * stepping onto preserved memory.
+ */
+#ifdef CONFIG_KEXEC_HANDOVER
+static bool process_kho_entries(unsigned long minimum, unsigned long image_size)
+{
+	struct kho_mem *kho_scratch;
+	struct setup_data *ptr;
+	int i, nr_areas = 0;
+
+	ptr = (struct setup_data *)(unsigned long)boot_params_ptr->hdr.setup_data;
+	while (ptr) {
+		if (ptr->type == SETUP_KEXEC_KHO) {
+			struct kho_data *kho = (struct kho_data *)ptr->data;
+
+			kho_scratch = (void *)kho->scratch_addr;
+			nr_areas = kho->scratch_size / sizeof(*kho_scratch);
+
+			break;
+		}
+
+		ptr = (struct setup_data *)(unsigned long)ptr->next;
+	}
+
+	if (!nr_areas)
+		return false;
+
+	for (i = 0; i < nr_areas; i++) {
+		struct kho_mem *area = &kho_scratch[i];
+		struct mem_vector region = {
+			.start = area->addr,
+			.size = area->size,
+		};
+
+		if (process_mem_region(&region, minimum, image_size))
+			break;
+	}
+
+	return true;
+}
+#else
+static inline bool process_kho_entries(unsigned long minimum,
+				       unsigned long image_size)
+{
+	return false;
+}
+#endif
+
 static unsigned long find_random_phys_addr(unsigned long minimum,
 					   unsigned long image_size)
 {
@@ -775,7 +824,8 @@ static unsigned long find_random_phys_addr(unsigned long minimum,
 		return 0;
 	}
 
-	if (!process_efi_entries(minimum, image_size))
+	if (!process_kho_entries(minimum, image_size) &&
+	    !process_efi_entries(minimum, image_size))
 		process_e820_entries(minimum, image_size);
 
 	phys_addr = slots_fetch_random();
diff --git a/arch/x86/include/asm/setup.h b/arch/x86/include/asm/setup.h
index 85f4fde3515c..70e045321d4b 100644
--- a/arch/x86/include/asm/setup.h
+++ b/arch/x86/include/asm/setup.h
@@ -66,6 +66,10 @@ extern void x86_ce4100_early_setup(void);
 static inline void x86_ce4100_early_setup(void) { }
 #endif
 
+#ifdef CONFIG_KEXEC_HANDOVER
+#include
+#endif
+
 #ifndef _SETUP
 
 #include
diff --git a/arch/x86/include/uapi/asm/setup_data.h b/arch/x86/include/uapi/asm/setup_data.h
index b111b0c18544..c258c37768ee 100644
--- a/arch/x86/include/uapi/asm/setup_data.h
+++ b/arch/x86/include/uapi/asm/setup_data.h
@@ -13,7 +13,8 @@
 #define SETUP_CC_BLOB			7
 #define SETUP_IMA			8
 #define SETUP_RNG_SEED			9
-#define SETUP_ENUM_MAX			SETUP_RNG_SEED
+#define SETUP_KEXEC_KHO			10
+#define SETUP_ENUM_MAX			SETUP_KEXEC_KHO
 
 #define SETUP_INDIRECT			(1<<31)
 #define SETUP_TYPE_MAX			(SETUP_ENUM_MAX | SETUP_INDIRECT)
@@ -78,6 +79,16 @@ struct ima_setup_data {
 	__u64 size;
 } __attribute__((packed));
 
+/*
+ * Locations of kexec handover metadata
+ */
+struct kho_data {
+	__u64 dt_addr;
+	__u64 dt_size;
+	__u64 scratch_addr;
+	__u64 scratch_size;
+} __attribute__((packed));
+
 #endif /* __ASSEMBLY__ */
 
 #endif /* _UAPI_ASM_X86_SETUP_DATA_H */
diff --git a/arch/x86/kernel/e820.c b/arch/x86/kernel/e820.c
index 82b96ed9890a..0b81cd70b02a 100644
--- a/arch/x86/kernel/e820.c
+++ b/arch/x86/kernel/e820.c
@@ -1329,6 +1329,24 @@ void __init e820__memblock_setup(void)
 		memblock_add(entry->addr, entry->size);
 	}
 
+	/*
+	 * At this point with KHO we only allocate from scratch memory.
+	 * At the same time, we configure memblock to only allow
+	 * allocations from memory below ISA_END_ADDRESS which is not
+	 * a natural scratch region, because Linux ignores memory below
+	 * ISA_END_ADDRESS at runtime. Besides very few (if any) early
+	 * allocations, we must allocate the real-mode trampoline below
+	 * ISA_END_ADDRESS.
+	 *
+	 * To make sure that we can actually perform allocations during
+	 * this phase, let's mark memory below ISA_END_ADDRESS as scratch
+	 * so we can allocate from there in a scratch-only world.
+	 *
+	 * After the real-mode trampoline is allocated, we clear the
+	 * scratch marking from the memory below ISA_END_ADDRESS.
+	 */
+	memblock_mark_kho_scratch(0, ISA_END_ADDRESS);
+
 	/* Throw away partial pages: */
 	memblock_trim_memory(PAGE_SIZE);
 
diff --git a/arch/x86/kernel/kexec-bzimage64.c b/arch/x86/kernel/kexec-bzimage64.c
index 68530fad05f7..15fc3c1a92e8 100644
--- a/arch/x86/kernel/kexec-bzimage64.c
+++ b/arch/x86/kernel/kexec-bzimage64.c
@@ -233,6 +233,31 @@ setup_ima_state(const struct kimage *image, struct boot_params *params,
 #endif /* CONFIG_IMA_KEXEC */
 }
 
+static void setup_kho(const struct kimage *image, struct boot_params *params,
+		      unsigned long params_load_addr,
+		      unsigned int setup_data_offset)
+{
+#ifdef CONFIG_KEXEC_HANDOVER
+	struct setup_data *sd = (void *)params + setup_data_offset;
+	struct kho_data *kho = (void *)sd + sizeof(*sd);
+
+	sd->type = SETUP_KEXEC_KHO;
+	sd->len = sizeof(struct kho_data);
+
+	/* Only add if we have all KHO images in place */
+	if (!image->kho.dt.buffer || !image->kho.scratch.buffer)
+		return;
+
+	/* Add setup data */
+	kho->dt_addr = image->kho.dt.mem;
+	kho->dt_size = image->kho.dt.bufsz;
+	kho->scratch_addr = image->kho.scratch.mem;
+	kho->scratch_size = image->kho.scratch.bufsz;
+
+	sd->next = params->hdr.setup_data;
+	params->hdr.setup_data = params_load_addr + setup_data_offset;
+#endif /* CONFIG_KEXEC_HANDOVER */
+}
+
 static int
 setup_boot_parameters(struct kimage *image, struct boot_params *params,
 		      unsigned long params_load_addr,
@@ -312,6 +337,13 @@ setup_boot_parameters(struct kimage *image, struct boot_params *params,
 			       sizeof(struct ima_setup_data);
 	}
 
+	if (IS_ENABLED(CONFIG_KEXEC_HANDOVER)) {
+		/* Setup space to store preservation metadata */
+		setup_kho(image, params, params_load_addr, setup_data_offset);
+		setup_data_offset += sizeof(struct setup_data) +
+				     sizeof(struct kho_data);
+	}
+
 	/* Setup RNG seed */
 	setup_rng_seed(params, params_load_addr, setup_data_offset);
 
@@ -479,6 +511,10 @@ static void *bzImage64_load(struct kimage *image, char *kernel,
 	kbuf.bufsz += sizeof(struct setup_data) +
 		      sizeof(struct ima_setup_data);
 
+	if (IS_ENABLED(CONFIG_KEXEC_HANDOVER))
+		kbuf.bufsz += sizeof(struct setup_data) +
+			      sizeof(struct kho_data);
+
 	params = kzalloc(kbuf.bufsz, GFP_KERNEL);
 	if (!params)
 		return ERR_PTR(-ENOMEM);
diff --git a/arch/x86/kernel/setup.c b/arch/x86/kernel/setup.c
index c80c124af332..e0a89f4bb46f 100644
--- a/arch/x86/kernel/setup.c
+++ b/arch/x86/kernel/setup.c
@@ -385,6 +385,28 @@ int __init ima_get_kexec_buffer(void **addr, size_t *size)
 }
 #endif
 
+static void __init add_kho(u64 phys_addr, u32 data_len)
+{
+#ifdef CONFIG_KEXEC_HANDOVER
+	struct kho_data *kho;
+	u64 addr = phys_addr + sizeof(struct setup_data);
+	u64 size = data_len - sizeof(struct setup_data);
+
+	kho = early_memremap(addr, size);
+	if (!kho) {
+		pr_warn("setup: failed to memremap kho data (0x%llx, 0x%llx)\n",
+			addr, size);
+		return;
+	}
+
+	kho_populate(kho->dt_addr, kho->scratch_addr, kho->scratch_size);
+
+	early_memunmap(kho, size);
+#else
+	pr_warn("Passed KHO data, but CONFIG_KEXEC_HANDOVER not set. Ignoring.\n");
+#endif
+}
+
 static void __init parse_setup_data(void)
 {
 	struct setup_data *data;
@@ -413,6 +435,9 @@ static void __init parse_setup_data(void)
 		case SETUP_IMA:
 			add_early_ima_buffer(pa_data);
 			break;
+		case SETUP_KEXEC_KHO:
+			add_kho(pa_data, data_len);
+			break;
 		case SETUP_RNG_SEED:
 			data = early_memremap(pa_data, data_len);
 			add_bootloader_randomness(data->data, data->len);
diff --git a/arch/x86/realmode/init.c b/arch/x86/realmode/init.c
index f9bc444a3064..9b9f4534086d 100644
--- a/arch/x86/realmode/init.c
+++ b/arch/x86/realmode/init.c
@@ -65,6 +65,8 @@ void __init reserve_real_mode(void)
 	 * setup_arch().
 	 */
 	memblock_reserve(0, SZ_1M);
+
+	memblock_clear_kho_scratch(0, SZ_1M);
 }
 
 static void __init sme_sev_setup_real_mode(struct trampoline_header *th)

From patchwork Thu Feb 6 13:27:53 2025
From: Mike Rapoport
To: linux-kernel@vger.kernel.org
Subject: [PATCH v4 13/14] memblock: Add KHO support for reserve_mem
Date: Thu, 6 Feb 2025 15:27:53 +0200
Message-ID: <20250206132754.2596694-14-rppt@kernel.org>
In-Reply-To: <20250206132754.2596694-1-rppt@kernel.org>
References: <20250206132754.2596694-1-rppt@kernel.org>
From: Alexander Graf

Linux has recently gained support for "reserve_mem": a mechanism to allocate a region of memory early enough in boot that we can cross our fingers and hope it stays at the same location during most boots, so we can store, for example, ftrace buffers in it. Thanks to KASLR, we can never be really sure that "reserve_mem" allocations are static across kexec.
Let's teach it KHO awareness so that it serializes its reservations on kexec exit and deserializes them again on boot, preserving the exact same mapping across kexec.

This is an example user for KHO in the KHO patch set to ensure we have at least one (not very controversial) user in the tree before extending KHO's use to more subsystems.

Signed-off-by: Alexander Graf
Co-developed-by: Mike Rapoport (Microsoft)
Signed-off-by: Mike Rapoport (Microsoft)
---
 mm/memblock.c | 131 ++++++++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 131 insertions(+)

diff --git a/mm/memblock.c b/mm/memblock.c
index 84df96efca62..fdb08b60efc1 100644
--- a/mm/memblock.c
+++ b/mm/memblock.c
@@ -16,6 +16,9 @@
 #include
 #include
 #include
+#include
+#include
+#include
 #include
 #include
@@ -2423,6 +2426,70 @@ int reserve_mem_find_by_name(const char *name, phys_addr_t *start, phys_addr_t *
 }
 EXPORT_SYMBOL_GPL(reserve_mem_find_by_name);
 
+static bool __init reserve_mem_kho_revive(const char *name, phys_addr_t size,
+					  phys_addr_t align)
+{
+	const void *fdt = kho_get_fdt();
+	const char *path = "/reserve_mem";
+	int node, child, err;
+
+	if (!IS_ENABLED(CONFIG_KEXEC_HANDOVER))
+		return false;
+
+	if (!fdt)
+		return false;
+
+	node = fdt_path_offset(fdt, "/reserve_mem");
+	if (node < 0)
+		return false;
+
+	err = fdt_node_check_compatible(fdt, node, "reserve_mem-v1");
+	if (err) {
+		pr_warn("Node '%s' has unknown compatible", path);
+		return false;
+	}
+
+	fdt_for_each_subnode(child, fdt, node) {
+		const struct kho_mem *mem;
+		const char *child_name;
+		int len;
+
+		/* Search for old kernel's reserved_mem with the same name */
+		child_name = fdt_get_name(fdt, child, NULL);
+		if (strcmp(name, child_name))
+			continue;
+
+		err = fdt_node_check_compatible(fdt, child, "reserve_mem_map-v1");
+		if (err) {
+			pr_warn("Node '%s/%s' has unknown compatible", path, name);
+			continue;
+		}
+
+		mem = fdt_getprop(fdt, child, "mem", &len);
+		if (!mem || len != sizeof(*mem))
+			continue;
+
+		if (mem->addr & (align - 1)) {
+			pr_warn("KHO reserved_mem '%s' has wrong alignment (0x%lx, 0x%lx)",
+				name, (long)align, (long)mem->addr);
+			continue;
+		}
+
+		if (mem->size != size) {
+			pr_warn("KHO reserved_mem '%s' has wrong size (0x%lx != 0x%lx)",
+				name, (long)mem->size, (long)size);
+			continue;
+		}
+
+		reserved_mem_add(mem->addr, mem->size, name);
+		pr_info("Revived memory reservation '%s' from KHO", name);
+
+		return true;
+	}
+
+	return false;
+}
+
 /*
  * Parse reserve_mem=nn:align:name
  */
@@ -2478,6 +2545,11 @@ static int __init reserve_mem(char *p)
 	if (reserve_mem_find_by_name(name, &start, &tmp))
 		return -EBUSY;
 
+	/* Pick previous allocations up from KHO if available */
+	if (reserve_mem_kho_revive(name, size, align))
+		return 1;
+
+	/* TODO: Allocation must be outside of scratch region */
 	start = memblock_phys_alloc(size, align);
 	if (!start)
 		return -ENOMEM;
@@ -2488,6 +2560,65 @@ static int __init reserve_mem(char *p)
 }
 __setup("reserve_mem=", reserve_mem);
 
+static int reserve_mem_kho_write_map(void *fdt, struct reserve_mem_table *map)
+{
+	int err = 0;
+	const char compatible[] = "reserve_mem_map-v1";
+	struct kho_mem mem = {
+		.addr = map->start,
+		.size = map->size,
+	};
+
+	err |= fdt_begin_node(fdt, map->name);
+	err |= fdt_property(fdt, "compatible", compatible, sizeof(compatible));
+	err |= fdt_property(fdt, "mem", &mem, sizeof(mem));
+	err |= fdt_end_node(fdt);
+
+	return err;
+}
+
+static int reserve_mem_kho_notifier(struct notifier_block *self,
+				    unsigned long cmd, void *v)
+{
+	const char compatible[] = "reserve_mem-v1";
+	void *fdt = v;
+	int err = 0;
+	int i;
+
+	switch (cmd) {
+	case KEXEC_KHO_ABORT:
+		return NOTIFY_DONE;
+	case KEXEC_KHO_DUMP:
+		/* Handled below */
+		break;
+	default:
+		return NOTIFY_BAD;
+	}
+
+	if (!reserved_mem_count)
+		return NOTIFY_DONE;
+
+	err |= fdt_begin_node(fdt, "reserve_mem");
+	err |= fdt_property(fdt, "compatible", compatible, sizeof(compatible));
+	for (i = 0; i < reserved_mem_count; i++)
+		err |= reserve_mem_kho_write_map(fdt, &reserved_mem_table[i]);
+	err |= fdt_end_node(fdt);
+
+	return err ? NOTIFY_BAD : NOTIFY_DONE;
+}
+
+static struct notifier_block reserve_mem_kho_nb = {
+	.notifier_call = reserve_mem_kho_notifier,
+};
+
+static int __init reserve_mem_init(void)
+{
+	register_kho_notifier(&reserve_mem_kho_nb);
+
+	return 0;
+}
+core_initcall(reserve_mem_init);
+
 #if defined(CONFIG_DEBUG_FS) && defined(CONFIG_ARCH_KEEP_MEMBLOCK)
 static const char * const flagname[] = {
 	[ilog2(MEMBLOCK_HOTPLUG)] = "HOTPLUG",

From patchwork Thu Feb 6 13:27:54 2025
X-Patchwork-Submitter: Mike Rapoport
X-Patchwork-Id: 13963093
From: Mike Rapoport
To: linux-kernel@vger.kernel.org
Cc: Alexander Graf, Andrew Morton, Andy Lutomirski, Anthony Yznaga, Arnd Bergmann, Ashish Kalra, Benjamin Herrenschmidt, Borislav Petkov, Catalin Marinas, Dave Hansen, David Woodhouse, Eric Biederman, Ingo Molnar, James Gowans, Jonathan Corbet, Krzysztof Kozlowski, Mark Rutland, Mike Rapoport, Paolo Bonzini, Pasha Tatashin, "H. Peter Anvin", Peter Zijlstra, Pratyush Yadav, Rob Herring, Saravana Kannan, Stanislav Kinsburskii, Steven Rostedt, Thomas Gleixner, Tom Lendacky, Usama Arif, Will Deacon, devicetree@vger.kernel.org, kexec@lists.infradead.org, linux-arm-kernel@lists.infradead.org, linux-doc@vger.kernel.org, linux-mm@kvack.org, x86@kernel.org
Subject: [PATCH v4 14/14] Documentation: KHO: Add memblock bindings
Date: Thu, 6 Feb 2025 15:27:54 +0200
Message-ID: <20250206132754.2596694-15-rppt@kernel.org>
X-Mailer: git-send-email 2.47.2
In-Reply-To: <20250206132754.2596694-1-rppt@kernel.org>
References: <20250206132754.2596694-1-rppt@kernel.org>
MIME-Version: 1.0
From: "Mike Rapoport (Microsoft)"

We introduced KHO into Linux: a framework that allows Linux to pass metadata and memory across kexec from Linux to Linux. KHO reuses FDT as its file format and shares many of the properties of firmware-to-Linux boot formats: it needs a stable, documented ABI that allows for forward and backward compatibility as well as versioning.
As the first user of KHO, memblock can now preserve memory ranges reserved with the reserve_mem command line option, including their contents, across kexec, so you can use the post-kexec kernel to read traces from the pre-kexec kernel.

This patch adds memblock schemas, similar to device tree bindings, to a new kho bindings directory. This allows us to require contributors to document the data that moves across KHO kexecs and to catch breaking changes during review.

Co-developed-by: Alexander Graf
Signed-off-by: Alexander Graf
Signed-off-by: Mike Rapoport (Microsoft)
---
 .../kho/bindings/memblock/reserve_mem.yaml    | 41 ++++++++++++++++++
 .../bindings/memblock/reserve_mem_map.yaml    | 42 +++++++++++++++++++
 2 files changed, 83 insertions(+)
 create mode 100644 Documentation/kho/bindings/memblock/reserve_mem.yaml
 create mode 100644 Documentation/kho/bindings/memblock/reserve_mem_map.yaml

diff --git a/Documentation/kho/bindings/memblock/reserve_mem.yaml b/Documentation/kho/bindings/memblock/reserve_mem.yaml
new file mode 100644
index 000000000000..7b01791b10b3
--- /dev/null
+++ b/Documentation/kho/bindings/memblock/reserve_mem.yaml
@@ -0,0 +1,41 @@
+# SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
+%YAML 1.2
+---
+$id: http://devicetree.org/schemas/memblock/reserve_mem.yaml#
+$schema: http://devicetree.org/meta-schemas/core.yaml#
+
+title: Memblock reserved memory
+
+maintainers:
+  - Mike Rapoport
+
+description: |
+  Memblock can serialize its current memory reservations created with
+  reserve_mem command line option across kexec through KHO.
+  The post-KHO kernel can then consume these reservations and they are
+  guaranteed to have the same physical address.
+
+properties:
+  compatible:
+    enum:
+      - reserve_mem-v1
+
+patternProperties:
+  "^[0-9a-f_]+$":
+    $ref: reserve_mem_map.yaml#
+    description: reserved memory regions
+
+required:
+  - compatible
+
+additionalProperties: false
+
+examples:
+  - |
+    reserve_mem {
+        compatible = "reserve_mem-v1";
+        r1 {
+            compatible = "reserve_mem_map-v1";
+            mem = <0xc07c 0x2000000 0x01 0x00>;
+        };
+    };

diff --git a/Documentation/kho/bindings/memblock/reserve_mem_map.yaml b/Documentation/kho/bindings/memblock/reserve_mem_map.yaml
new file mode 100644
index 000000000000..09001c5f2124
--- /dev/null
+++ b/Documentation/kho/bindings/memblock/reserve_mem_map.yaml
@@ -0,0 +1,42 @@
+# SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
+%YAML 1.2
+---
+$id: http://devicetree.org/schemas/memblock/reserve_mem_map.yaml#
+$schema: http://devicetree.org/meta-schemas/core.yaml#
+
+title: Memblock reserved memory regions
+
+maintainers:
+  - Mike Rapoport
+
+description: |
+  Memblock can serialize its current memory reservations created with
+  reserve_mem command line option across kexec through KHO.
+  This object describes each such region.
+
+properties:
+  compatible:
+    enum:
+      - reserve_mem_map-v1
+
+  mem:
+    $ref: /schemas/types.yaml#/definitions/uint32-array
+    description: |
+      Array of { u64 phys_addr, u64 len } elements that describe a list of
+      memory ranges.
+
+required:
+  - compatible
+  - mem
+
+additionalProperties: false
+
+examples:
+  - |
+    reserve_mem {
+        compatible = "reserve_mem-v1";
+        r1 {
+            compatible = "reserve_mem_map-v1";
+            mem = <0xc07c 0x2000000 0x01 0x00>;
+        };
+    };