From patchwork Tue Mar 21 17:05:08 2023
X-Patchwork-Submitter: Mike Rapoport
X-Patchwork-Id: 13182971
From: Mike Rapoport
To: Andrew Morton
Cc: David Hildenbrand, Doug Berger, Matthew Wilcox, Mel Gorman,
	Michal Hocko, Mike Rapoport, Thomas Bogendoerfer, Vlastimil Babka,
	linux-kernel@vger.kernel.org, linux-mips@vger.kernel.org,
	linux-mm@kvack.org
Subject: [PATCH v2 09/14] mm: move init_mem_debugging_and_hardening() to mm/mm_init.c
Date: Tue, 21 Mar 2023 19:05:08 +0200
Message-Id: <20230321170513.2401534-10-rppt@kernel.org>
In-Reply-To: <20230321170513.2401534-1-rppt@kernel.org>
References: <20230321170513.2401534-1-rppt@kernel.org>

From: "Mike Rapoport (IBM)"

init_mem_debugging_and_hardening() is only called from mm_core_init().
Move it close to its caller, make it static, and rename it to
mem_debugging_and_hardening_init() for consistency with the surrounding
naming convention.
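[Editor's note: as background, the function being moved consumes boot-time switches registered through early_param(), which the kernel parses from the command line well before mm_core_init() runs. A minimal kernel-style sketch of that pattern follows; it is not part of the patch, and the "demo_flag" and CONFIG_DEMO_DEFAULT_ON names are invented for illustration only.]

	/*
	 * Sketch of the early_param() pattern used by this series;
	 * "demo_flag" and CONFIG_DEMO_DEFAULT_ON are invented names.
	 */
	static bool demo_flag __read_mostly = IS_ENABLED(CONFIG_DEMO_DEFAULT_ON);

	static int __init early_demo_flag(char *buf)
	{
		/* kstrtobool() parses "0"/"1", "y"/"n", "on"/"off" into a bool */
		return kstrtobool(buf, &demo_flag);
	}
	early_param("demo_flag", early_demo_flag);

[Booting with demo_flag=1 flips the bool during early parameter parsing, so by the time mm_core_init() calls the now-static init function, every flag it needs has already been gathered.]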
Signed-off-by: Mike Rapoport (IBM)
Acked-by: David Hildenbrand
Reviewed-by: Vlastimil Babka
---
 include/linux/mm.h |  1 -
 mm/internal.h      |  8 ++++
 mm/mm_init.c       | 91 +++++++++++++++++++++++++++++++++++++++++++-
 mm/page_alloc.c    | 95 ----------------------------------------------
 4 files changed, 98 insertions(+), 97 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index c3c67d8bc833..2fecabb1a328 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -3394,7 +3394,6 @@ extern int apply_to_existing_page_range(struct mm_struct *mm,
 				   unsigned long address, unsigned long size,
 				   pte_fn_t fn, void *data);
 
-extern void __init init_mem_debugging_and_hardening(void);
 #ifdef CONFIG_PAGE_POISONING
 extern void __kernel_poison_pages(struct page *page, int numpages);
 extern void __kernel_unpoison_pages(struct page *page, int numpages);
diff --git a/mm/internal.h b/mm/internal.h
index 2a925de49393..4750e3a7fd0d 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -204,6 +204,14 @@ pmd_t *mm_find_pmd(struct mm_struct *mm, unsigned long address);
 
 extern char * const zone_names[MAX_NR_ZONES];
 
+/* perform sanity checks on struct pages being allocated or freed */
+DECLARE_STATIC_KEY_MAYBE(CONFIG_DEBUG_VM, check_pages_enabled);
+
+static inline bool is_check_pages_enabled(void)
+{
+	return static_branch_unlikely(&check_pages_enabled);
+}
+
 /*
  * Structure for holding the mostly immutable allocation parameters passed
  * between functions involved in allocations, including the alloc_pages*
diff --git a/mm/mm_init.c b/mm/mm_init.c
index f1475413394d..43f6d3ed24ef 100644
--- a/mm/mm_init.c
+++ b/mm/mm_init.c
@@ -2531,6 +2531,95 @@ void __init memblock_free_pages(struct page *page, unsigned long pfn,
 	__free_pages_core(page, order);
 }
 
+static bool _init_on_alloc_enabled_early __read_mostly
+				= IS_ENABLED(CONFIG_INIT_ON_ALLOC_DEFAULT_ON);
+static int __init early_init_on_alloc(char *buf)
+{
+
+	return kstrtobool(buf, &_init_on_alloc_enabled_early);
+}
+early_param("init_on_alloc", early_init_on_alloc);
+
+static bool _init_on_free_enabled_early __read_mostly
+				= IS_ENABLED(CONFIG_INIT_ON_FREE_DEFAULT_ON);
+static int __init early_init_on_free(char *buf)
+{
+	return kstrtobool(buf, &_init_on_free_enabled_early);
+}
+early_param("init_on_free", early_init_on_free);
+
+DEFINE_STATIC_KEY_MAYBE(CONFIG_DEBUG_VM, check_pages_enabled);
+
+/*
+ * Enable static keys related to various memory debugging and hardening options.
+ * Some override others, and depend on early params that are evaluated in the
+ * order of appearance. So we need to first gather the full picture of what was
+ * enabled, and then make decisions.
+ */
+static void __init mem_debugging_and_hardening_init(void)
+{
+	bool page_poisoning_requested = false;
+	bool want_check_pages = false;
+
+#ifdef CONFIG_PAGE_POISONING
+	/*
+	 * Page poisoning is debug page alloc for some arches. If
+	 * either of those options are enabled, enable poisoning.
+	 */
+	if (page_poisoning_enabled() ||
+	     (!IS_ENABLED(CONFIG_ARCH_SUPPORTS_DEBUG_PAGEALLOC) &&
+	      debug_pagealloc_enabled())) {
+		static_branch_enable(&_page_poisoning_enabled);
+		page_poisoning_requested = true;
+		want_check_pages = true;
+	}
+#endif
+
+	if ((_init_on_alloc_enabled_early || _init_on_free_enabled_early) &&
+	    page_poisoning_requested) {
+		pr_info("mem auto-init: CONFIG_PAGE_POISONING is on, "
+			"will take precedence over init_on_alloc and init_on_free\n");
+		_init_on_alloc_enabled_early = false;
+		_init_on_free_enabled_early = false;
+	}
+
+	if (_init_on_alloc_enabled_early) {
+		want_check_pages = true;
+		static_branch_enable(&init_on_alloc);
+	} else {
+		static_branch_disable(&init_on_alloc);
+	}
+
+	if (_init_on_free_enabled_early) {
+		want_check_pages = true;
+		static_branch_enable(&init_on_free);
+	} else {
+		static_branch_disable(&init_on_free);
+	}
+
+	if (IS_ENABLED(CONFIG_KMSAN) &&
+	    (_init_on_alloc_enabled_early || _init_on_free_enabled_early))
+		pr_info("mem auto-init: please make sure init_on_alloc and init_on_free are disabled when running KMSAN\n");
+
+#ifdef CONFIG_DEBUG_PAGEALLOC
+	if (debug_pagealloc_enabled()) {
+		want_check_pages = true;
+		static_branch_enable(&_debug_pagealloc_enabled);
+
+		if (debug_guardpage_minorder())
+			static_branch_enable(&_debug_guardpage_enabled);
+	}
+#endif
+
+	/*
+	 * Any page debugging or hardening option also enables sanity checking
+	 * of struct pages being allocated or freed. With CONFIG_DEBUG_VM it's
+	 * enabled already.
+	 */
+	if (!IS_ENABLED(CONFIG_DEBUG_VM) && want_check_pages)
+		static_branch_enable(&check_pages_enabled);
+}
+
 /* Report memory auto-initialization states for this boot. */
 static void __init report_meminit(void)
 {
@@ -2570,7 +2659,7 @@ void __init mm_core_init(void)
 	 * bigger than MAX_ORDER unless SPARSEMEM.
 	 */
 	page_ext_init_flatmem();
-	init_mem_debugging_and_hardening();
+	mem_debugging_and_hardening_init();
 	kfence_alloc_pool();
 	report_meminit();
 	kmsan_init_shadow();
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index d1276bfe7a30..2f333c26170c 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -240,31 +240,6 @@ EXPORT_SYMBOL(init_on_alloc);
 
 DEFINE_STATIC_KEY_MAYBE(CONFIG_INIT_ON_FREE_DEFAULT_ON, init_on_free);
 EXPORT_SYMBOL(init_on_free);
 
-/* perform sanity checks on struct pages being allocated or freed */
-static DEFINE_STATIC_KEY_MAYBE(CONFIG_DEBUG_VM, check_pages_enabled);
-
-static inline bool is_check_pages_enabled(void)
-{
-	return static_branch_unlikely(&check_pages_enabled);
-}
-
-static bool _init_on_alloc_enabled_early __read_mostly
-				= IS_ENABLED(CONFIG_INIT_ON_ALLOC_DEFAULT_ON);
-static int __init early_init_on_alloc(char *buf)
-{
-
-	return kstrtobool(buf, &_init_on_alloc_enabled_early);
-}
-early_param("init_on_alloc", early_init_on_alloc);
-
-static bool _init_on_free_enabled_early __read_mostly
-				= IS_ENABLED(CONFIG_INIT_ON_FREE_DEFAULT_ON);
-static int __init early_init_on_free(char *buf)
-{
-	return kstrtobool(buf, &_init_on_free_enabled_early);
-}
-early_param("init_on_free", early_init_on_free);
-
 /*
  * A cached value of the page's pageblock's migratetype, used when the page is
  * put on a pcplist. Used to avoid the pageblock migratetype lookup when
@@ -798,76 +773,6 @@ static inline void clear_page_guard(struct zone *zone, struct page *page,
 				unsigned int order, int migratetype) {}
 #endif
 
-/*
- * Enable static keys related to various memory debugging and hardening options.
- * Some override others, and depend on early params that are evaluated in the
- * order of appearance. So we need to first gather the full picture of what was
- * enabled, and then make decisions.
- */
-void __init init_mem_debugging_and_hardening(void)
-{
-	bool page_poisoning_requested = false;
-	bool want_check_pages = false;
-
-#ifdef CONFIG_PAGE_POISONING
-	/*
-	 * Page poisoning is debug page alloc for some arches. If
-	 * either of those options are enabled, enable poisoning.
-	 */
-	if (page_poisoning_enabled() ||
-	     (!IS_ENABLED(CONFIG_ARCH_SUPPORTS_DEBUG_PAGEALLOC) &&
-	      debug_pagealloc_enabled())) {
-		static_branch_enable(&_page_poisoning_enabled);
-		page_poisoning_requested = true;
-		want_check_pages = true;
-	}
-#endif
-
-	if ((_init_on_alloc_enabled_early || _init_on_free_enabled_early) &&
-	    page_poisoning_requested) {
-		pr_info("mem auto-init: CONFIG_PAGE_POISONING is on, "
-			"will take precedence over init_on_alloc and init_on_free\n");
-		_init_on_alloc_enabled_early = false;
-		_init_on_free_enabled_early = false;
-	}
-
-	if (_init_on_alloc_enabled_early) {
-		want_check_pages = true;
-		static_branch_enable(&init_on_alloc);
-	} else {
-		static_branch_disable(&init_on_alloc);
-	}
-
-	if (_init_on_free_enabled_early) {
-		want_check_pages = true;
-		static_branch_enable(&init_on_free);
-	} else {
-		static_branch_disable(&init_on_free);
-	}
-
-	if (IS_ENABLED(CONFIG_KMSAN) &&
-	    (_init_on_alloc_enabled_early || _init_on_free_enabled_early))
-		pr_info("mem auto-init: please make sure init_on_alloc and init_on_free are disabled when running KMSAN\n");
-
-#ifdef CONFIG_DEBUG_PAGEALLOC
-	if (debug_pagealloc_enabled()) {
-		want_check_pages = true;
-		static_branch_enable(&_debug_pagealloc_enabled);
-
-		if (debug_guardpage_minorder())
-			static_branch_enable(&_debug_guardpage_enabled);
-	}
-#endif
-
-	/*
-	 * Any page debugging or hardening option also enables sanity checking
-	 * of struct pages being allocated or freed. With CONFIG_DEBUG_VM it's
-	 * enabled already.
-	 */
-	if (!IS_ENABLED(CONFIG_DEBUG_VM) && want_check_pages)
-		static_branch_enable(&check_pages_enabled);
-}
-
 static inline void set_buddy_order(struct page *page, unsigned int order)
 {
 	set_page_private(page, order);
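
[Editor's note: the moved function implements a small precedence policy: page poisoning wins over the init_on_alloc/init_on_free auto-init flags, and any enabled debugging or hardening option also turns on struct page sanity checks (with CONFIG_DEBUG_VM they are on already). The following is a standalone userspace model of that decision logic, offered as a reading aid only; the names are invented, no kernel APIs are used, and the CONFIG_DEBUG_PAGEALLOC and KMSAN branches are deliberately omitted.]

	#include <stdbool.h>
	#include <stdio.h>

	/* Invented names; models the precedence rules in the patch above. */
	struct debug_state {
		bool poisoning, init_on_alloc, init_on_free, check_pages;
	};

	static struct debug_state resolve(bool want_poison,
					  bool want_init_on_alloc,
					  bool want_init_on_free)
	{
		struct debug_state s = { 0 };

		if (want_poison) {
			s.poisoning = true;
			/* poisoning takes precedence over the auto-init flags */
			want_init_on_alloc = false;
			want_init_on_free = false;
		}
		s.init_on_alloc = want_init_on_alloc;
		s.init_on_free = want_init_on_free;
		/* any debugging/hardening option enables struct page checks */
		s.check_pages = s.poisoning || s.init_on_alloc || s.init_on_free;
		return s;
	}

	int main(void)
	{
		/* poisoning and init_on_alloc both requested: poisoning wins */
		struct debug_state s = resolve(true, true, false);

		printf("poisoning=%d init_on_alloc=%d check_pages=%d\n",
		       s.poisoning, s.init_on_alloc, s.check_pages);
		return 0;
	}

[Running the model prints "poisoning=1 init_on_alloc=0 check_pages=1", mirroring the kernel behaviour where a poisoning request disables the auto-init flags but still enables page sanity checks.]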