From patchwork Mon Aug 15 12:09:54 2022
X-Patchwork-Submitter: lizhe.67@bytedance.com
X-Patchwork-Id: 12943479
From: lizhe.67@bytedance.com
To: akpm@linux-foundation.org, mhiramat@kernel.org, vbabka@suse.cz, keescook@chromium.org,
	Jason@zx2c4.com, mark-pk.tsai@mediatek.com, rostedt@goodmis.org
Cc: linux-kernel@vger.kernel.org, linux-mm@kvack.org, lizefan.x@bytedance.com,
	yuanzhu@bytedance.com, lizhe.67@bytedance.com
Subject: [PATCH] page_ext: move up page_ext_init() to catch early page allocation if DEFERRED_STRUCT_PAGE_INIT is n
Date: Mon, 15 Aug 2022 20:09:54 +0800
Message-Id: <20220815120954.65957-1-lizhe.67@bytedance.com>
X-Mailer: git-send-email 2.32.0
MIME-Version: 1.0

From: Li Zhe <lizhe.67@bytedance.com>

In commit 2f1ee0913ce5 ("Revert "mm: use early_pfn_to_nid in
page_ext_init""), page_ext_init() was moved to run after
page_alloc_init_late() in order to avoid a boot-time panic. As a result,
the current kernel cannot track early page allocations, even when struct
pages have already been initialized early.

This patch moves page_ext_init() up so that early page allocations are
caught when DEFERRED_STRUCT_PAGE_INIT is n. After this patch, setting
DEFERRED_STRUCT_PAGE_INIT to n is enough to analyze early page
allocations. This is especially useful when the amount of free memory
right after boot differs from one boot to another.
Signed-off-by: Li Zhe <lizhe.67@bytedance.com>
---
 include/linux/page_ext.h | 30 +++++++++++++++++++++++++++---
 init/main.c              |  7 +++++--
 mm/page_ext.c            |  2 +-
 3 files changed, 33 insertions(+), 6 deletions(-)

diff --git a/include/linux/page_ext.h b/include/linux/page_ext.h
index fabb2e1e087f..b77a13689e00 100644
--- a/include/linux/page_ext.h
+++ b/include/linux/page_ext.h
@@ -43,14 +43,34 @@ extern void pgdat_page_ext_init(struct pglist_data *pgdat);
 static inline void page_ext_init_flatmem(void)
 {
 }
-extern void page_ext_init(void);
 static inline void page_ext_init_flatmem_late(void)
 {
 }
+extern void _page_ext_init(void);
+#ifdef CONFIG_DEFERRED_STRUCT_PAGE_INIT
+static inline void page_ext_init_early(void)
+{
+}
+static inline void page_ext_init_late(void)
+{
+	_page_ext_init();
+}
+#else
+static inline void page_ext_init_early(void)
+{
+	_page_ext_init();
+}
+static inline void page_ext_init_late(void)
+{
+}
+#endif /* CONFIG_DEFERRED_STRUCT_PAGE_INIT */
 #else
 extern void page_ext_init_flatmem(void);
 extern void page_ext_init_flatmem_late(void);
-static inline void page_ext_init(void)
+static inline void page_ext_init_early(void)
+{
+}
+static inline void page_ext_init_late(void)
 {
 }
 #endif
@@ -76,7 +96,11 @@ static inline struct page_ext *lookup_page_ext(const struct page *page)
 	return NULL;
 }
 
-static inline void page_ext_init(void)
+static inline void page_ext_init_early(void)
+{
+}
+
+static inline void page_ext_init_late(void)
 {
 }
 
diff --git a/init/main.c b/init/main.c
index 91642a4e69be..7f9533ba527d 100644
--- a/init/main.c
+++ b/init/main.c
@@ -845,6 +845,7 @@ static void __init mm_init(void)
 	 * slab is ready so that stack_depot_init() works properly
 	 */
 	page_ext_init_flatmem_late();
+	page_ext_init_early();
 	kmemleak_init();
 	pgtable_init();
 	debug_objects_mem_init();
@@ -1605,8 +1606,10 @@ static noinline void __init kernel_init_freeable(void)
 
 	padata_init();
 	page_alloc_init_late();
-	/* Initialize page ext after all struct pages are initialized. */
-	page_ext_init();
+	/* Initialize page ext after all struct pages are initialized if
+	 * CONFIG_DEFERRED_STRUCT_PAGE_INIT is enabled
+	 */
+	page_ext_init_late();
 
 	do_basic_setup();
 
diff --git a/mm/page_ext.c b/mm/page_ext.c
index 3dc715d7ac29..50419e7349cb 100644
--- a/mm/page_ext.c
+++ b/mm/page_ext.c
@@ -378,7 +378,7 @@ static int __meminit page_ext_callback(struct notifier_block *self,
 	return notifier_from_errno(ret);
 }
 
-void __init page_ext_init(void)
+void __init _page_ext_init(void)
 {
 	unsigned long pfn;
 	int nid;
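
To make the intended ordering easier to see outside the diff context, below
is a minimal user-space sketch (plain C, not kernel code; the macro name
ILLUSTRATE_DEFERRED_STRUCT_PAGE_INIT and the printf scaffolding are
stand-ins for CONFIG_DEFERRED_STRUCT_PAGE_INIT and the real boot-time
calls). Exactly one of the two wrappers forwards to the real initializer:
page_ext is set up from mm_init() when deferred struct-page initialization
is off, and from kernel_init_freeable() when it is on.

#include <stdio.h>

/* Uncomment to mimic a kernel built with deferred struct-page init (=y). */
/* #define ILLUSTRATE_DEFERRED_STRUCT_PAGE_INIT */

/* Stand-in for the renamed _page_ext_init() in mm/page_ext.c. */
static void _page_ext_init(void)
{
	printf("  page_ext initialized here\n");
}

#ifdef ILLUSTRATE_DEFERRED_STRUCT_PAGE_INIT
/* Deferred case: keep the old behaviour and initialize late. */
static void page_ext_init_early(void) { }
static void page_ext_init_late(void)  { _page_ext_init(); }
#else
/* Non-deferred case: initialize early so early allocations are tracked. */
static void page_ext_init_early(void) { _page_ext_init(); }
static void page_ext_init_late(void)  { }
#endif

int main(void)
{
	/* Mirrors the call added to mm_init() by this patch. */
	printf("mm_init() -> page_ext_init_early()\n");
	page_ext_init_early();

	/* Mirrors the renamed call in kernel_init_freeable(). */
	printf("kernel_init_freeable() -> page_ext_init_late()\n");
	page_ext_init_late();
	return 0;
}

In practice, analyzing the early allocations caught this way would
typically go through the existing page owner facility (CONFIG_PAGE_OWNER
plus the page_owner=on boot parameter), whose data lives in page_ext.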