From patchwork Wed Aug 24 06:50:58 2022
From: lizhe.67@bytedance.com
To: akpm@linux-foundation.org, mhocko@suse.com, vbabka@suse.cz,
    mhiramat@kernel.org, keescook@chromium.org, Jason@zx2c4.com,
    mark-pk.tsai@mediatek.com, rostedt@goodmis.org, corbet@lwn.net
Cc: linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org,
    linux-mm@kvack.org, lizefan.x@bytedance.com, yuanzhu@bytedance.com,
    lizhe.67@bytedance.com
Subject: [PATCH v2] page_ext: introduce boot parameter 'early_page_ext'
Date: Wed, 24 Aug 2022 14:50:58 +0800
Message-Id: <20220824065058.81051-1-lizhe.67@bytedance.com>

From: Li Zhe

In commit 2f1ee0913ce5 ("Revert "mm: use early_pfn_to_nid in
page_ext_init"")'), page_ext_init() is called after page_alloc_init_late()
to avoid a boot panic. As a consequence, the current kernel cannot track
early page allocations even when struct pages have been initialized early.

This patch introduces a new boot parameter 'early_page_ext' to resolve
the problem. When the parameter is passed on the kernel command line,
page_ext_init() is moved up and the 'deferred initialization of struct
pages' feature is disabled, so early page allocations can be caught.
This is especially useful when the amount of free memory right after
boot differs between otherwise identical kernels.

Changelog:

v1->v2:
- use a command-line parameter to move up page_ext_init() instead of
  using CONFIG_DEFERRED_STRUCT_PAGE_INIT
- fix the OOM problem[1]

v1 patch: https://lore.kernel.org/lkml/Yv3r6Y1vh+6AbY4+@dhcp22.suse.cz/T/
[1]: https://lore.kernel.org/linux-mm/YwHmXLu5txij+p35@xsang-OptiPlex-9020/

Suggested-by: Michal Hocko
Signed-off-by: Li Zhe
---
 .../admin-guide/kernel-parameters.txt |  6 ++++++
 include/linux/page_ext.h              | 14 ++++++++++---
 init/main.c                           |  4 +++-
 mm/page_alloc.c                       |  2 ++
 mm/page_ext.c                         | 21 ++++++++++++++++++-
 5 files changed, 42 insertions(+), 5 deletions(-)

diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt
index d7f30902fda0..7b5726828ac0 100644
--- a/Documentation/admin-guide/kernel-parameters.txt
+++ b/Documentation/admin-guide/kernel-parameters.txt
@@ -1471,6 +1471,12 @@
 			Permit 'security.evm' to be updated regardless of
 			current integrity status.
 
+	early_page_ext	[KNL] Boot-time early page_ext initialization option.
+			This boot parameter disables the deferred initialization
+			of struct pages and moves up page_ext_init() in
+			order to catch early page allocations. Available with
+			CONFIG_PAGE_EXTENSION=y.
+
 	failslab=
 	fail_usercopy=
 	fail_page_alloc=

diff --git a/include/linux/page_ext.h b/include/linux/page_ext.h
index fabb2e1e087f..3e081cf8a1ec 100644
--- a/include/linux/page_ext.h
+++ b/include/linux/page_ext.h
@@ -38,19 +38,22 @@ struct page_ext {
 extern unsigned long page_ext_size;
 
 extern void pgdat_page_ext_init(struct pglist_data *pgdat);
+#ifdef CONFIG_DEFERRED_STRUCT_PAGE_INIT
+extern bool early_page_ext_enable(void);
+#endif
 
 #ifdef CONFIG_SPARSEMEM
 static inline void page_ext_init_flatmem(void)
 {
 }
-extern void page_ext_init(void);
+extern void page_ext_init(bool early);
 static inline void page_ext_init_flatmem_late(void)
 {
 }
 #else
 extern void page_ext_init_flatmem(void);
 extern void page_ext_init_flatmem_late(void);
-static inline void page_ext_init(void)
+static inline void page_ext_init(bool early)
 {
 }
 #endif
@@ -67,6 +70,11 @@ static inline struct page_ext *page_ext_next(struct page_ext *curr)
 #else /* !CONFIG_PAGE_EXTENSION */
 struct page_ext;
 
+static inline bool early_page_ext_enable(void)
+{
+	return false;
+}
+
 static inline void pgdat_page_ext_init(struct pglist_data *pgdat)
 {
 }
@@ -76,7 +84,7 @@ static inline struct page_ext *lookup_page_ext(const struct page *page)
 	return NULL;
 }
 
-static inline void page_ext_init(void)
+static inline void page_ext_init(bool early)
 {
 }
 
diff --git a/init/main.c b/init/main.c
index 91642a4e69be..3760c0326525 100644
--- a/init/main.c
+++ b/init/main.c
@@ -849,6 +849,8 @@ static void __init mm_init(void)
 	pgtable_init();
 	debug_objects_mem_init();
 	vmalloc_init();
+	/* Should be run after vmap initialization */
+	page_ext_init(true);
 	/* Should be run before the first non-init thread is created */
 	init_espfix_bsp();
 	/* Should be run after espfix64 is set up. */
@@ -1606,7 +1608,7 @@ static noinline void __init kernel_init_freeable(void)
 	padata_init();
 	page_alloc_init_late();
 	/* Initialize page ext after all struct pages are initialized.
 	 */
-	page_ext_init();
+	page_ext_init(false);
 
 	do_basic_setup();

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index e5486d47406e..e580b197aa1e 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -482,6 +482,8 @@ defer_init(int nid, unsigned long pfn, unsigned long end_pfn)
 {
 	static unsigned long prev_end_pfn, nr_initialised;
 
+	if (early_page_ext_enable())
+		return false;
 	/*
 	 * prev_end_pfn static that contains the end of previous zone
 	 * No need to protect because called very early in boot before smp_init.

diff --git a/mm/page_ext.c b/mm/page_ext.c
index 3dc715d7ac29..82ba561730ef 100644
--- a/mm/page_ext.c
+++ b/mm/page_ext.c
@@ -85,6 +85,22 @@ unsigned long page_ext_size = sizeof(struct page_ext);
 
 static unsigned long total_usage;
 
+#ifdef CONFIG_DEFERRED_STRUCT_PAGE_INIT
+static bool early_page_ext __meminitdata;
+bool __meminit early_page_ext_enable(void)
+{
+	return early_page_ext;
+}
+#else
+static bool early_page_ext __meminitdata = true;
+#endif
+static int __init setup_early_page_ext(char *str)
+{
+	early_page_ext = true;
+	return 0;
+}
+early_param("early_page_ext", setup_early_page_ext);
+
 static bool __init invoke_need_callbacks(void)
 {
 	int i;
@@ -378,11 +394,14 @@ static int __meminit page_ext_callback(struct notifier_block *self,
 	return notifier_from_errno(ret);
 }
 
-void __init page_ext_init(void)
+void __init page_ext_init(bool early)
 {
 	unsigned long pfn;
 	int nid;
 
+	if (early != early_page_ext)
+		return;
+
 	if (!invoke_need_callbacks())
 		return;