From patchwork Thu Sep 22 01:12:43 2022
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Zi Yan
X-Patchwork-Id: 12984355
From: Zi Yan
To: linux-mm@kvack.org
Cc: Zi Yan, David Hildenbrand, Matthew Wilcox, Vlastimil Babka,
 "Kirill A. Shutemov", Mike Kravetz, John Hubbard, Yang Shi,
 David Rientjes, James Houghton, Mike Rapoport, Muchun Song,
 Andrew Morton, linux-kernel@vger.kernel.org
Subject: [PATCH v1 03/12] mm: adapt deferred struct page init to new MAX_ORDER.
Date: Wed, 21 Sep 2022 21:12:43 -0400
Message-Id: <20220922011252.2266780-4-zi.yan@sent.com>
X-Mailer: git-send-email 2.35.1
In-Reply-To: <20220922011252.2266780-1-zi.yan@sent.com>
References: <20220922011252.2266780-1-zi.yan@sent.com>
Reply-To: Zi Yan
MIME-Version: 1.0
From: Zi Yan

deferred_init only initializes the first section of a zone and defers
the rest; the deferred portion of the zone is then initialized one
section at a time. When MAX_ORDER grows beyond a section size,
early_page_uninitialised() did not prevent pages beyond the first
section from being initialized, since it only checked the starting pfn
and assumed MAX_ORDER is smaller than a section. In addition,
deferred_init_maxorder() uses MAX_ORDER_NR_PAGES as the initialization
unit, which can cause the initialized chunk of memory to overlap with
other initialization jobs.

For the first issue, make early_page_uninitialised() decrease the order
for non-deferred memory initialization when it is bigger than the first
section. For the second issue, when adjusting the pfn alignment in
deferred_init_maxorder(), make sure the alignment is not bigger than a
section size.
Signed-off-by: Zi Yan
---
 mm/internal.h   |  2 +-
 mm/memblock.c   |  6 ++++--
 mm/page_alloc.c | 28 ++++++++++++++++++++--------
 3 files changed, 25 insertions(+), 11 deletions(-)

diff --git a/mm/internal.h b/mm/internal.h
index 22fb1e6e3541..d688c0320cda 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -361,7 +361,7 @@ extern int __isolate_free_page(struct page *page, unsigned int order);
 extern void __putback_isolated_page(struct page *page, unsigned int order,
 					int mt);
 extern void memblock_free_pages(struct page *page, unsigned long pfn,
-					unsigned int order);
+					unsigned int *order);
 extern void __free_pages_core(struct page *page, unsigned int order);
 extern void prep_compound_page(struct page *page, unsigned int order);
 extern void post_alloc_hook(struct page *page, unsigned int order,
diff --git a/mm/memblock.c b/mm/memblock.c
index acbc77367faf..b957c12a93e7 100644
--- a/mm/memblock.c
+++ b/mm/memblock.c
@@ -1640,7 +1640,9 @@ void __init memblock_free_late(phys_addr_t base, phys_addr_t size)
 	end = PFN_DOWN(base + size);
 
 	for (; cursor < end; cursor++) {
-		memblock_free_pages(pfn_to_page(cursor), cursor, 0);
+		unsigned int order = 0;
+
+		memblock_free_pages(pfn_to_page(cursor), cursor, &order);
 		totalram_pages_inc();
 	}
 }
@@ -2035,7 +2037,7 @@ static void __init __free_pages_memory(unsigned long start, unsigned long end)
 		while (start + (1UL << order) > end)
 			order--;
 
-		memblock_free_pages(pfn_to_page(start), start, order);
+		memblock_free_pages(pfn_to_page(start), start, &order);
 
 		start += (1UL << order);
 	}
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index b3dd5248e63d..e3af87d89ebf 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -464,13 +464,19 @@ static inline bool deferred_pages_enabled(void)
 }
 
 /* Returns true if the struct page for the pfn is uninitialised */
-static inline bool __meminit early_page_uninitialised(unsigned long pfn)
+static inline bool __meminit early_page_uninitialised(unsigned long pfn, unsigned int *order)
 {
 	int nid = early_pfn_to_nid(pfn);
 
 	if (node_online(nid) && pfn >= NODE_DATA(nid)->first_deferred_pfn)
 		return true;
 
+	/* clamp down order to not exceed first_deferred_pfn */
+	if (order)
+		*order = min_t(unsigned int,
+				*order,
+				ilog2(NODE_DATA(nid)->first_deferred_pfn - pfn));
+
 	return false;
 }
 
@@ -518,7 +524,7 @@ static inline bool deferred_pages_enabled(void)
 	return false;
 }
 
-static inline bool early_page_uninitialised(unsigned long pfn)
+static inline bool early_page_uninitialised(unsigned long pfn, unsigned int *order)
 {
 	return false;
 }
@@ -1653,7 +1659,7 @@ static void __meminit init_reserved_page(unsigned long pfn)
 	pg_data_t *pgdat;
 	int nid, zid;
 
-	if (!early_page_uninitialised(pfn))
+	if (!early_page_uninitialised(pfn, NULL))
 		return;
 
 	nid = early_pfn_to_nid(pfn);
@@ -1809,15 +1815,15 @@ int __meminit early_pfn_to_nid(unsigned long pfn)
 #endif /* CONFIG_NUMA */
 
 void __init memblock_free_pages(struct page *page, unsigned long pfn,
-							unsigned int order)
+							unsigned int *order)
 {
-	if (early_page_uninitialised(pfn))
+	if (early_page_uninitialised(pfn, order))
 		return;
-	if (!kmsan_memblock_free_pages(page, order)) {
+	if (!kmsan_memblock_free_pages(page, *order)) {
 		/* KMSAN will take care of these pages. */
 		return;
 	}
-	__free_pages_core(page, order);
+	__free_pages_core(page, *order);
 }
 
 /*
@@ -2036,7 +2042,13 @@ static unsigned long __init
 deferred_init_maxorder(u64 *i, struct zone *zone, unsigned long *start_pfn,
 		       unsigned long *end_pfn)
 {
-	unsigned long mo_pfn = ALIGN(*start_pfn + 1, MAX_ORDER_NR_PAGES);
+	/*
+	 * deferred_init_memmap_chunk gives out jobs with max size to
+	 * PAGES_PER_SECTION. Do not align mo_pfn beyond that.
+	 */
+	unsigned long align = min_t(unsigned long,
+			MAX_ORDER_NR_PAGES, PAGES_PER_SECTION);
+	unsigned long mo_pfn = ALIGN(*start_pfn + 1, align);
 	unsigned long spfn = *start_pfn, epfn = *end_pfn;
 	unsigned long nr_pages = 0;
 	u64 j = *i;