From patchwork Sun Dec 13 15:45:24 2020
X-Patchwork-Submitter: Muchun Song
X-Patchwork-Id: 11970825
From: Muchun Song
To: corbet@lwn.net, mike.kravetz@oracle.com, tglx@linutronix.de, mingo@redhat.com, bp@alien8.de, x86@kernel.org,
hpa@zytor.com, dave.hansen@linux.intel.com, luto@kernel.org, peterz@infradead.org, viro@zeniv.linux.org.uk, akpm@linux-foundation.org, paulmck@kernel.org, mchehab+huawei@kernel.org, pawan.kumar.gupta@linux.intel.com, rdunlap@infradead.org, oneukum@suse.com, anshuman.khandual@arm.com, jroedel@suse.de, almasrymina@google.com, rientjes@google.com, willy@infradead.org, osalvador@suse.de, mhocko@suse.com, song.bao.hua@hisilicon.com, david@redhat.com
Cc: duanxiongchun@bytedance.com, linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org, linux-mm@kvack.org, linux-fsdevel@vger.kernel.org, Muchun Song
Subject: [PATCH v9 01/11] mm/memory_hotplug: Factor out bootmem core functions to bootmem_info.c
Date: Sun, 13 Dec 2020 23:45:24 +0800
Message-Id: <20201213154534.54826-2-songmuchun@bytedance.com>
In-Reply-To: <20201213154534.54826-1-songmuchun@bytedance.com>
References: <20201213154534.54826-1-songmuchun@bytedance.com>

Move the common bootmem info registration API into a separate file, bootmem_info.c. A later patch will use {get,put}_page_bootmem() to initialize the page structs of vmemmap pages and to free vmemmap pages back to the buddy allocator, so these helpers are also moved out from under CONFIG_MEMORY_HOTPLUG_SPARSE. This is pure code movement without any functional change.

Signed-off-by: Muchun Song
Acked-by: Mike Kravetz
Reviewed-by: Oscar Salvador
Reviewed-by: David Hildenbrand
---
 arch/x86/mm/init_64.c          |   3 +-
 include/linux/bootmem_info.h   |  40 ++++++++++++
 include/linux/memory_hotplug.h |  27 --------
 mm/Makefile                    |   1 +
 mm/bootmem_info.c              | 124 +++++++++++++++++++++++++++++++++++
 mm/memory_hotplug.c            | 116 ---------------------------------
 mm/sparse.c                    |   1 +
 7 files changed, 168 insertions(+), 144 deletions(-)
 create mode 100644 include/linux/bootmem_info.h
 create mode 100644 mm/bootmem_info.c

diff --git a/arch/x86/mm/init_64.c b/arch/x86/mm/init_64.c index b5a3fa4033d3..0a45f062826e 100644 --- a/arch/x86/mm/init_64.c +++ b/arch/x86/mm/init_64.c @@ -33,6 +33,7 @@ #include #include #include +#include #include #include @@ -1571,7 +1572,7 @@ int __meminit vmemmap_populate(unsigned long start, unsigned long end, int node, return err; } -#if defined(CONFIG_MEMORY_HOTPLUG_SPARSE) && defined(CONFIG_HAVE_BOOTMEM_INFO_NODE) +#ifdef CONFIG_HAVE_BOOTMEM_INFO_NODE void register_page_bootmem_memmap(unsigned long section_nr, struct page *start_page, unsigned long nr_pages) { diff --git a/include/linux/bootmem_info.h b/include/linux/bootmem_info.h new file mode 100644 index 000000000000..4ed6dee1adc9 --- /dev/null +++ b/include/linux/bootmem_info.h @@ -0,0 +1,40 @@ +/* SPDX-License-Identifier: GPL-2.0 */ +#ifndef __LINUX_BOOTMEM_INFO_H +#define __LINUX_BOOTMEM_INFO_H + +#include + +/* + * Types for free bootmem stored in page->lru.next. These have to be in + * some random range in unsigned long space for debugging purposes.
+ */ +enum { + MEMORY_HOTPLUG_MIN_BOOTMEM_TYPE = 12, + SECTION_INFO = MEMORY_HOTPLUG_MIN_BOOTMEM_TYPE, + MIX_SECTION_INFO, + NODE_INFO, + MEMORY_HOTPLUG_MAX_BOOTMEM_TYPE = NODE_INFO, +}; + +#ifdef CONFIG_HAVE_BOOTMEM_INFO_NODE +void __init register_page_bootmem_info_node(struct pglist_data *pgdat); + +void get_page_bootmem(unsigned long info, struct page *page, + unsigned long type); +void put_page_bootmem(struct page *page); +#else +static inline void register_page_bootmem_info_node(struct pglist_data *pgdat) +{ +} + +static inline void put_page_bootmem(struct page *page) +{ +} + +static inline void get_page_bootmem(unsigned long info, struct page *page, + unsigned long type) +{ +} +#endif + +#endif /* __LINUX_BOOTMEM_INFO_H */ diff --git a/include/linux/memory_hotplug.h b/include/linux/memory_hotplug.h index 15acce5ab106..84590964ad35 100644 --- a/include/linux/memory_hotplug.h +++ b/include/linux/memory_hotplug.h @@ -33,18 +33,6 @@ struct vmem_altmap; ___page; \ }) -/* - * Types for free bootmem stored in page->lru.next. These have to be in - * some random range in unsigned long space for debugging purposes. - */ -enum { - MEMORY_HOTPLUG_MIN_BOOTMEM_TYPE = 12, - SECTION_INFO = MEMORY_HOTPLUG_MIN_BOOTMEM_TYPE, - MIX_SECTION_INFO, - NODE_INFO, - MEMORY_HOTPLUG_MAX_BOOTMEM_TYPE = NODE_INFO, -}; - /* Types for control the zone type of onlined and offlined memory */ enum { /* Offline the memory. */ @@ -222,17 +210,6 @@ static inline void arch_refresh_nodedata(int nid, pg_data_t *pgdat) #endif /* CONFIG_NUMA */ #endif /* CONFIG_HAVE_ARCH_NODEDATA_EXTENSION */ -#ifdef CONFIG_HAVE_BOOTMEM_INFO_NODE -extern void __init register_page_bootmem_info_node(struct pglist_data *pgdat); -#else -static inline void register_page_bootmem_info_node(struct pglist_data *pgdat) -{ -} -#endif -extern void put_page_bootmem(struct page *page); -extern void get_page_bootmem(unsigned long ingo, struct page *page, - unsigned long type); - void get_online_mems(void); void put_online_mems(void); @@ -260,10 +237,6 @@ static inline void zone_span_writelock(struct zone *zone) {} static inline void zone_span_writeunlock(struct zone *zone) {} static inline void zone_seqlock_init(struct zone *zone) {} -static inline void register_page_bootmem_info_node(struct pglist_data *pgdat) -{ -} - static inline int try_online_node(int nid) { return 0; diff --git a/mm/Makefile b/mm/Makefile index a1af02ba8f3f..ed4b88fa0f5e 100644 --- a/mm/Makefile +++ b/mm/Makefile @@ -83,6 +83,7 @@ obj-$(CONFIG_SLUB) += slub.o obj-$(CONFIG_KASAN) += kasan/ obj-$(CONFIG_KFENCE) += kfence/ obj-$(CONFIG_FAILSLAB) += failslab.o +obj-$(CONFIG_HAVE_BOOTMEM_INFO_NODE) += bootmem_info.o obj-$(CONFIG_MEMORY_HOTPLUG) += memory_hotplug.o obj-$(CONFIG_MEMTEST) += memtest.o obj-$(CONFIG_MIGRATION) += migrate.o diff --git a/mm/bootmem_info.c b/mm/bootmem_info.c new file mode 100644 index 000000000000..fcab5a3f8cc0 --- /dev/null +++ b/mm/bootmem_info.c @@ -0,0 +1,124 @@ +// SPDX-License-Identifier: GPL-2.0 +/* + * linux/mm/bootmem_info.c + * + * Copyright (C) + */ +#include +#include +#include +#include +#include + +void get_page_bootmem(unsigned long info, struct page *page, unsigned long type) +{ + page->freelist = (void *)type; + SetPagePrivate(page); + set_page_private(page, info); + page_ref_inc(page); +} + +void put_page_bootmem(struct page *page) +{ + unsigned long type; + + type = (unsigned long) page->freelist; + BUG_ON(type < MEMORY_HOTPLUG_MIN_BOOTMEM_TYPE || + type > MEMORY_HOTPLUG_MAX_BOOTMEM_TYPE); + + if (page_ref_dec_return(page) == 1) { + 
page->freelist = NULL; + ClearPagePrivate(page); + set_page_private(page, 0); + INIT_LIST_HEAD(&page->lru); + free_reserved_page(page); + } +} + +#ifndef CONFIG_SPARSEMEM_VMEMMAP +static void register_page_bootmem_info_section(unsigned long start_pfn) +{ + unsigned long mapsize, section_nr, i; + struct mem_section *ms; + struct page *page, *memmap; + struct mem_section_usage *usage; + + section_nr = pfn_to_section_nr(start_pfn); + ms = __nr_to_section(section_nr); + + /* Get section's memmap address */ + memmap = sparse_decode_mem_map(ms->section_mem_map, section_nr); + + /* + * Get page for the memmap's phys address + * XXX: need more consideration for sparse_vmemmap... + */ + page = virt_to_page(memmap); + mapsize = sizeof(struct page) * PAGES_PER_SECTION; + mapsize = PAGE_ALIGN(mapsize) >> PAGE_SHIFT; + + /* remember memmap's page */ + for (i = 0; i < mapsize; i++, page++) + get_page_bootmem(section_nr, page, SECTION_INFO); + + usage = ms->usage; + page = virt_to_page(usage); + + mapsize = PAGE_ALIGN(mem_section_usage_size()) >> PAGE_SHIFT; + + for (i = 0; i < mapsize; i++, page++) + get_page_bootmem(section_nr, page, MIX_SECTION_INFO); + +} +#else /* CONFIG_SPARSEMEM_VMEMMAP */ +static void register_page_bootmem_info_section(unsigned long start_pfn) +{ + unsigned long mapsize, section_nr, i; + struct mem_section *ms; + struct page *page, *memmap; + struct mem_section_usage *usage; + + section_nr = pfn_to_section_nr(start_pfn); + ms = __nr_to_section(section_nr); + + memmap = sparse_decode_mem_map(ms->section_mem_map, section_nr); + + register_page_bootmem_memmap(section_nr, memmap, PAGES_PER_SECTION); + + usage = ms->usage; + page = virt_to_page(usage); + + mapsize = PAGE_ALIGN(mem_section_usage_size()) >> PAGE_SHIFT; + + for (i = 0; i < mapsize; i++, page++) + get_page_bootmem(section_nr, page, MIX_SECTION_INFO); +} +#endif /* !CONFIG_SPARSEMEM_VMEMMAP */ + +void __init register_page_bootmem_info_node(struct pglist_data *pgdat) +{ + unsigned long i, pfn, end_pfn, nr_pages; + int node = pgdat->node_id; + struct page *page; + + nr_pages = PAGE_ALIGN(sizeof(struct pglist_data)) >> PAGE_SHIFT; + page = virt_to_page(pgdat); + + for (i = 0; i < nr_pages; i++, page++) + get_page_bootmem(node, page, NODE_INFO); + + pfn = pgdat->node_start_pfn; + end_pfn = pgdat_end_pfn(pgdat); + + /* register section info */ + for (; pfn < end_pfn; pfn += PAGES_PER_SECTION) { + /* + * Some platforms can assign the same pfn to multiple nodes - on + * node0 as well as nodeN. To avoid registering a pfn against + * multiple nodes we check that this pfn does not already + * reside in some other nodes. 
+ */ + if (pfn_valid(pfn) && (early_pfn_to_nid(pfn) == node)) + register_page_bootmem_info_section(pfn); + } +} diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c index a8cef4955907..4c4ca99745b7 100644 --- a/mm/memory_hotplug.c +++ b/mm/memory_hotplug.c @@ -141,122 +141,6 @@ static void release_memory_resource(struct resource *res) } #ifdef CONFIG_MEMORY_HOTPLUG_SPARSE -void get_page_bootmem(unsigned long info, struct page *page, - unsigned long type) -{ - page->freelist = (void *)type; - SetPagePrivate(page); - set_page_private(page, info); - page_ref_inc(page); -} - -void put_page_bootmem(struct page *page) -{ - unsigned long type; - - type = (unsigned long) page->freelist; - BUG_ON(type < MEMORY_HOTPLUG_MIN_BOOTMEM_TYPE || - type > MEMORY_HOTPLUG_MAX_BOOTMEM_TYPE); - - if (page_ref_dec_return(page) == 1) { - page->freelist = NULL; - ClearPagePrivate(page); - set_page_private(page, 0); - INIT_LIST_HEAD(&page->lru); - free_reserved_page(page); - } -} - -#ifdef CONFIG_HAVE_BOOTMEM_INFO_NODE -#ifndef CONFIG_SPARSEMEM_VMEMMAP -static void register_page_bootmem_info_section(unsigned long start_pfn) -{ - unsigned long mapsize, section_nr, i; - struct mem_section *ms; - struct page *page, *memmap; - struct mem_section_usage *usage; - - section_nr = pfn_to_section_nr(start_pfn); - ms = __nr_to_section(section_nr); - - /* Get section's memmap address */ - memmap = sparse_decode_mem_map(ms->section_mem_map, section_nr); - - /* - * Get page for the memmap's phys address - * XXX: need more consideration for sparse_vmemmap... - */ - page = virt_to_page(memmap); - mapsize = sizeof(struct page) * PAGES_PER_SECTION; - mapsize = PAGE_ALIGN(mapsize) >> PAGE_SHIFT; - - /* remember memmap's page */ - for (i = 0; i < mapsize; i++, page++) - get_page_bootmem(section_nr, page, SECTION_INFO); - - usage = ms->usage; - page = virt_to_page(usage); - - mapsize = PAGE_ALIGN(mem_section_usage_size()) >> PAGE_SHIFT; - - for (i = 0; i < mapsize; i++, page++) - get_page_bootmem(section_nr, page, MIX_SECTION_INFO); - -} -#else /* CONFIG_SPARSEMEM_VMEMMAP */ -static void register_page_bootmem_info_section(unsigned long start_pfn) -{ - unsigned long mapsize, section_nr, i; - struct mem_section *ms; - struct page *page, *memmap; - struct mem_section_usage *usage; - - section_nr = pfn_to_section_nr(start_pfn); - ms = __nr_to_section(section_nr); - - memmap = sparse_decode_mem_map(ms->section_mem_map, section_nr); - - register_page_bootmem_memmap(section_nr, memmap, PAGES_PER_SECTION); - - usage = ms->usage; - page = virt_to_page(usage); - - mapsize = PAGE_ALIGN(mem_section_usage_size()) >> PAGE_SHIFT; - - for (i = 0; i < mapsize; i++, page++) - get_page_bootmem(section_nr, page, MIX_SECTION_INFO); -} -#endif /* !CONFIG_SPARSEMEM_VMEMMAP */ - -void __init register_page_bootmem_info_node(struct pglist_data *pgdat) -{ - unsigned long i, pfn, end_pfn, nr_pages; - int node = pgdat->node_id; - struct page *page; - - nr_pages = PAGE_ALIGN(sizeof(struct pglist_data)) >> PAGE_SHIFT; - page = virt_to_page(pgdat); - - for (i = 0; i < nr_pages; i++, page++) - get_page_bootmem(node, page, NODE_INFO); - - pfn = pgdat->node_start_pfn; - end_pfn = pgdat_end_pfn(pgdat); - - /* register section info */ - for (; pfn < end_pfn; pfn += PAGES_PER_SECTION) { - /* - * Some platforms can assign the same pfn to multiple nodes - on - * node0 as well as nodeN. To avoid registering a pfn against - * multiple nodes we check that this pfn does not already - * reside in some other nodes. 
- */ - if (pfn_valid(pfn) && (early_pfn_to_nid(pfn) == node)) - register_page_bootmem_info_section(pfn); - } -} -#endif /* CONFIG_HAVE_BOOTMEM_INFO_NODE */ - static int check_pfn_span(unsigned long pfn, unsigned long nr_pages, const char *reason) { diff --git a/mm/sparse.c b/mm/sparse.c index 7bd23f9d6cef..87676bf3af40 100644 --- a/mm/sparse.c +++ b/mm/sparse.c @@ -13,6 +13,7 @@ #include #include #include +#include #include "internal.h" #include
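As an aside, a minimal sketch of how the relocated helpers pair up; the example_* function is hypothetical and not part of the series, only get_page_bootmem()/put_page_bootmem() from the bootmem_info.c above are assumed:

	/* Hypothetical illustration, not part of this patch. */
	#include <linux/bootmem_info.h>

	static void example_pin_and_release(struct page *page, unsigned long section_nr)
	{
		/*
		 * Mark the page as bootmem metadata for @section_nr: the type
		 * is stashed in page->freelist, the info in page_private(),
		 * and the page refcount is raised.
		 */
		get_page_bootmem(section_nr, page, SECTION_INFO);

		/*
		 * Drop the reference again. Once the refcount returns to one,
		 * put_page_bootmem() clears the bootmem state and releases the
		 * page frame to the buddy allocator via free_reserved_page().
		 */
		put_page_bootmem(page);
	}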
From patchwork Sun Dec 13 15:45:25 2020
X-Patchwork-Submitter: Muchun Song
X-Patchwork-Id: 11970827
From: Muchun Song
Subject: [PATCH v9 02/11] mm/hugetlb: Introduce a new config HUGETLB_PAGE_FREE_VMEMMAP
Date: Sun, 13 Dec 2020 23:45:25 +0800
Message-Id: <20201213154534.54826-3-songmuchun@bytedance.com>
In-Reply-To: <20201213154534.54826-1-songmuchun@bytedance.com>
References: <20201213154534.54826-1-songmuchun@bytedance.com>

Introduce HUGETLB_PAGE_FREE_VMEMMAP to configure whether the feature of freeing unused vmemmap pages associated with HugeTLB pages is enabled; for now the option only carries the dependency checks. Only x86-64 is supported at present, because the config depends on HAVE_BOOTMEM_INFO_NODE. Since register_page_bootmem_info() is the function that registers bootmem info, it must also be called when this config is enabled.

Signed-off-by: Muchun Song
---
 arch/x86/mm/init_64.c |  2 +-
 fs/Kconfig            | 15 +++++++++++++++
 2 files changed, 16 insertions(+), 1 deletion(-)

diff --git a/arch/x86/mm/init_64.c b/arch/x86/mm/init_64.c index 0a45f062826e..0435bee2e172 100644 --- a/arch/x86/mm/init_64.c +++ b/arch/x86/mm/init_64.c @@ -1225,7 +1225,7 @@ static struct kcore_list kcore_vsyscall; static void __init register_page_bootmem_info(void) { -#ifdef CONFIG_NUMA +#if defined(CONFIG_NUMA) || defined(CONFIG_HUGETLB_PAGE_FREE_VMEMMAP) int i; for_each_online_node(i) diff --git a/fs/Kconfig b/fs/Kconfig index 976e8b9033c4..4c3a9c614983 100644 --- a/fs/Kconfig +++ b/fs/Kconfig @@ -245,6 +245,21 @@ config HUGETLBFS config HUGETLB_PAGE def_bool HUGETLBFS +config HUGETLB_PAGE_FREE_VMEMMAP + def_bool HUGETLB_PAGE + depends on X86_64 + depends on SPARSEMEM_VMEMMAP + depends on HAVE_BOOTMEM_INFO_NODE + help + When using HUGETLB_PAGE_FREE_VMEMMAP, the system can save some + memory from pre-allocated HugeTLB pages when they are not used: + 6 pages per HugeTLB page of the pmd level mapping and (PAGE_SIZE - 2) + pages per HugeTLB page of the pud level mapping. + + When the pages are going to be used or freed up, the vmemmap array + representing that range needs to be remapped again and the pages + we discarded earlier need to be reallocated again.
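To put numbers on the savings quoted in the help text, a back-of-the-envelope sketch; this is a standalone illustration, not part of the patch, assuming a 4KB base page and a 64-byte struct page as on x86-64:

	#include <stdio.h>

	int main(void)
	{
		const long page_size = 4096;      /* base page size */
		const long sz_struct_page = 64;   /* sizeof(struct page) */
		const long reserved = 2;          /* vmemmap pages kept per HugeTLB page */

		/* 2MB (pmd-level) HugeTLB page: 512 struct pages -> 8 vmemmap pages */
		long pmd_pages = (2L << 20) / page_size * sz_struct_page / page_size;
		/* 1GB (pud-level) HugeTLB page: 262144 struct pages -> 4096 vmemmap pages */
		long pud_pages = (1L << 30) / page_size * sz_struct_page / page_size;

		printf("2MB hugepage: free %ld of %ld vmemmap pages (%ld KB)\n",
		       pmd_pages - reserved, pmd_pages,
		       (pmd_pages - reserved) * page_size / 1024);
		printf("1GB hugepage: free %ld of %ld vmemmap pages (%ld KB)\n",
		       pud_pages - reserved, pud_pages,
		       (pud_pages - reserved) * page_size / 1024);
		return 0;
	}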
+ config MEMFD_CREATE def_bool TMPFS || HUGETLBFS

From patchwork Sun Dec 13 15:45:26 2020
X-Patchwork-Submitter: Muchun Song
X-Patchwork-Id: 11970829
From: Muchun Song
Subject: [PATCH v9 03/11] mm/hugetlb: Free the vmemmap pages associated with each HugeTLB page
Date: Sun, 13 Dec 2020 23:45:26 +0800
Message-Id: <20201213154534.54826-4-songmuchun@bytedance.com>
In-Reply-To: <20201213154534.54826-1-songmuchun@bytedance.com>
References: <20201213154534.54826-1-songmuchun@bytedance.com>

Every HugeTLB page is backed by more than one struct page structure, but we __know__ that only the first 4 (HUGETLB_CGROUP_MIN_ORDER) struct pages are used to store metadata associated with the HugeTLB page. For all tail pages the value of compound_head is the same, so we can reuse the first page of the tail page structures: we remap the virtual addresses of the remaining tail page structures to that first tail page struct and then free the backing page frames. Hence we only need to reserve two pages as vmemmap areas.

When we allocate a HugeTLB page from the buddy allocator, we can free some of the vmemmap pages associated with it; the appropriate place to do that is prep_new_huge_page(). free_vmemmap_pages_per_hpage(), which indicates how many vmemmap pages associated with a HugeTLB page can be freed, returns zero for now, which means the feature is disabled. We will enable it once all the infrastructure is there.

Signed-off-by: Muchun Song
---
 include/linux/bootmem_info.h |  27 +++++-
 include/linux/mm.h           |   2 +
 mm/Makefile                  |   1 +
 mm/hugetlb.c                 |   3 +
 mm/hugetlb_vmemmap.c         | 209 +++++++++++++++++++++++++++++++++++++++++++
 mm/hugetlb_vmemmap.h         |  20 +++++
 mm/sparse-vmemmap.c          | 170 ++++++++++++++++++++++++++++++++++
 7 files changed, 431 insertions(+), 1 deletion(-)
 create mode 100644 mm/hugetlb_vmemmap.c
 create mode 100644 mm/hugetlb_vmemmap.h
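Before the diff, one way to see why the tail struct pages are interchangeable; a hedged sketch where example_* is hypothetical and compound_head() is the stock kernel helper:

	/* Hypothetical illustration, not part of this patch. */
	static bool example_tail_structs_agree(struct page *head)
	{
		struct page *tail_a = head + 1;
		struct page *tail_b = head + 2;

		/*
		 * Every tail page's compound_head field encodes the same head
		 * page, so vmemmap pages that hold only tail struct pages are
		 * byte-for-byte identical and can share one page frame.
		 */
		return compound_head(tail_a) == compound_head(tail_b);
	}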
diff --git a/include/linux/bootmem_info.h b/include/linux/bootmem_info.h index 4ed6dee1adc9..4c80b7be1771 100644 --- a/include/linux/bootmem_info.h +++ b/include/linux/bootmem_info.h @@ -2,7 +2,7 @@ #ifndef __LINUX_BOOTMEM_INFO_H #define __LINUX_BOOTMEM_INFO_H -#include +#include /* * Types for free bootmem stored in page->lru.next. These have to be in @@ -22,6 +22,27 @@ void __init register_page_bootmem_info_node(struct pglist_data *pgdat); void get_page_bootmem(unsigned long info, struct page *page, unsigned long type); void put_page_bootmem(struct page *page); + +/* + * Any memory allocated via the memblock allocator and not via the + * buddy will be marked reserved already in the memmap. For those + * pages, we can call this function to free them to the buddy allocator. + */ +static inline void free_bootmem_page(struct page *page) +{ + unsigned long magic = (unsigned long)page->freelist; + + /* + * The reserve_bootmem_region sets the reserved flag on bootmem + * pages. + */ + VM_WARN_ON(page_ref_count(page) != 2); + + if (magic == SECTION_INFO || magic == MIX_SECTION_INFO) + put_page_bootmem(page); + else + VM_WARN_ON(1); +} #else static inline void register_page_bootmem_info_node(struct pglist_data *pgdat) { @@ -35,6 +56,10 @@ static inline void get_page_bootmem(unsigned long info, struct page *page, unsigned long type) { } + +static inline void free_bootmem_page(struct page *page) +{ +} #endif #endif /* __LINUX_BOOTMEM_INFO_H */ diff --git a/include/linux/mm.h b/include/linux/mm.h index eabe7d9f80d8..ab02e405a979 100644 --- a/include/linux/mm.h +++ b/include/linux/mm.h @@ -3005,6 +3005,8 @@ static inline void print_vma_addr(char *prefix, unsigned long rip) } #endif +void vmemmap_remap_reuse(unsigned long start, unsigned long size); + void *sparse_buffer_alloc(unsigned long size); struct page * __populate_section_memmap(unsigned long pfn, unsigned long nr_pages, int nid, struct vmem_altmap *altmap); diff --git a/mm/Makefile b/mm/Makefile index ed4b88fa0f5e..056801d8daae 100644 --- a/mm/Makefile +++ b/mm/Makefile @@ -71,6 +71,7 @@ obj-$(CONFIG_FRONTSWAP) += frontswap.o obj-$(CONFIG_ZSWAP) += zswap.o obj-$(CONFIG_HAS_DMA) += dmapool.o obj-$(CONFIG_HUGETLBFS) += hugetlb.o +obj-$(CONFIG_HUGETLB_PAGE_FREE_VMEMMAP) += hugetlb_vmemmap.o obj-$(CONFIG_NUMA) += mempolicy.o obj-$(CONFIG_SPARSEMEM) += sparse.o obj-$(CONFIG_SPARSEMEM_VMEMMAP) += sparse-vmemmap.o diff --git a/mm/hugetlb.c b/mm/hugetlb.c index 1f3bf1710b66..140135fc8113 100644 --- a/mm/hugetlb.c +++ b/mm/hugetlb.c @@ -42,6 +42,7 @@ #include #include #include "internal.h" +#include "hugetlb_vmemmap.h" int hugetlb_max_hstate __read_mostly; unsigned int default_hstate_idx; @@ -1497,6 +1498,8 @@ void free_huge_page(struct page *page) static void prep_new_huge_page(struct hstate *h, struct page *page, int nid) { + free_huge_page_vmemmap(h, page); + INIT_LIST_HEAD(&page->lru); set_compound_page_dtor(page, HUGETLB_PAGE_DTOR); set_hugetlb_cgroup(page, NULL); diff --git a/mm/hugetlb_vmemmap.c b/mm/hugetlb_vmemmap.c new file mode 100644 index 000000000000..5a714bd60d6b --- /dev/null +++ b/mm/hugetlb_vmemmap.c @@ -0,0 +1,209 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Free some vmemmap pages of HugeTLB
+ *
+ * Copyright (c) 2020, Bytedance. All rights reserved.
+ *
+ * Author: Muchun Song
+ *
+ * The struct page structures (page structs) are used to describe a physical
+ * page frame. By default, there is a one-to-one mapping from a page frame to
+ * its corresponding page struct.
+ *
+ * HugeTLB pages consist of multiple base page size pages and are supported by
+ * many architectures. See hugetlbpage.rst in the Documentation directory for
+ * more details. On the x86-64 architecture, HugeTLB pages of size 2MB and 1GB
+ * are currently supported. Since the base page size on x86 is 4KB, a 2MB
+ * HugeTLB page consists of 512 base pages and a 1GB HugeTLB page consists of
+ * 262144 base pages. For each base page, there is a corresponding page struct.
+ *
+ * Within the HugeTLB subsystem, only the first 4 page structs are used to
+ * contain unique information about a HugeTLB page. HUGETLB_CGROUP_MIN_ORDER
+ * provides this upper limit. The only 'useful' information in the remaining
+ * page structs is the compound_head field, and this field is the same for all
+ * tail pages.
+ *
+ * By removing redundant page structs for HugeTLB pages, memory can be
+ * returned to the buddy allocator for other uses.
+ *
+ * Different architectures support different HugeTLB pages.
+ * For example, the following table lists the HugeTLB page sizes supported by
+ * the x86 and arm64 architectures. Because arm64 supports 4KB, 16KB, and 64KB
+ * base pages and also supports contiguous entries, it supports many HugeTLB
+ * page sizes.
+ *
+ * +--------------+-----------+-----------------------------------------------+
+ * | Architecture | Page Size |                HugeTLB Page Size              |
+ * +--------------+-----------+-----------+-----------+-----------+-----------+
+ * |    x86-64    |    4KB    |    2MB    |    1GB    |           |           |
+ * +--------------+-----------+-----------+-----------+-----------+-----------+
+ * |              |    4KB    |   64KB    |    2MB    |   32MB    |    1GB    |
+ * |              +-----------+-----------+-----------+-----------+-----------+
+ * |    arm64     |   16KB    |    2MB    |   32MB    |    1GB    |           |
+ * |              +-----------+-----------+-----------+-----------+-----------+
+ * |              |   64KB    |    2MB    |   512MB   |   16GB    |           |
+ * +--------------+-----------+-----------+-----------+-----------+-----------+
+ *
+ * When the system boots up, every HugeTLB page has more than one struct page
+ * struct. The size of all of these structs is (unit: pages):
+ *
+ *    struct_size = HugeTLB_Size / PAGE_SIZE * sizeof(struct page) / PAGE_SIZE
+ *
+ * where HugeTLB_Size is the size of the HugeTLB page. We know that the size
+ * of a HugeTLB page is always n times PAGE_SIZE, which gives the following
+ * relationship:
+ *
+ *    HugeTLB_Size = n * PAGE_SIZE
+ *
+ * Then,
+ *
+ *    struct_size = n * PAGE_SIZE / PAGE_SIZE * sizeof(struct page) / PAGE_SIZE
+ *                = n * sizeof(struct page) / PAGE_SIZE
+ *
+ * We can use huge mappings at the pud/pmd level for the HugeTLB page.
+ *
+ * For a HugeTLB page at the pmd level mapping:
+ *
+ *    struct_size = n * sizeof(struct page) / PAGE_SIZE
+ *                = PAGE_SIZE / sizeof(pte_t) * sizeof(struct page) / PAGE_SIZE
+ *                = sizeof(struct page) / sizeof(pte_t)
+ *                = 64 / 8
+ *                = 8 (pages)
+ *
+ * where n is the number of pte entries that one page can contain, so the
+ * value of n is (PAGE_SIZE / sizeof(pte_t)).
+ *
+ * This optimization only supports 64-bit systems, so the value of sizeof(pte_t)
+ * is 8. The optimization is also applicable only when the size of struct page
+ * is a power of two. In most cases, the size of struct page is 64 (e.g. x86-64
+ * and arm64). So if we use a pmd level mapping for a HugeTLB page, its struct
+ * page structs occupy 8 pages, whose size depends on the size of the base page.
+ *
+ * For a HugeTLB page at the pud level mapping:
+ *
+ *    struct_size = PAGE_SIZE / sizeof(pmd_t) * struct_size(pmd)
+ *                = PAGE_SIZE / 8 * 8 (pages)
+ *                = PAGE_SIZE (pages)
+ *
+ * where struct_size(pmd) is the size of the struct page structs of a HugeTLB
+ * page at the pmd level mapping.
+ *
+ * Next, we take the pmd level mapping of a HugeTLB page as an example to show
+ * the internal implementation of this optimization. There are 8 pages of
+ * struct page structs associated with a pmd-mapped HugeTLB page.
+ *
+ * Here is how things look before optimization.
+ *
+ *    HugeTLB                 struct pages(8 pages)        page frame(8 pages)
+ * +-----------+ ---virt_to_page---> +-----------+   mapping to   +-----------+
+ * |           |                     |     0     | -------------> |     0     |
+ * |           |                     +-----------+                +-----------+
+ * |           |                     |     1     | -------------> |     1     |
+ * |           |                     +-----------+                +-----------+
+ * |           |                     |     2     | -------------> |     2     |
+ * |           |                     +-----------+                +-----------+
+ * |           |                     |     3     | -------------> |     3     |
+ * |           |                     +-----------+                +-----------+
+ * |           |                     |     4     | -------------> |     4     |
+ * |    PMD    |                     +-----------+                +-----------+
+ * |   level   |                     |     5     | -------------> |     5     |
+ * |  mapping  |                     +-----------+                +-----------+
+ * |           |                     |     6     | -------------> |     6     |
+ * |           |                     +-----------+                +-----------+
+ * |           |                     |     7     | -------------> |     7     |
+ * |           |                     +-----------+                +-----------+
+ * |           |
+ * |           |
+ * |           |
+ * +-----------+
+ *
+ * The value of page->compound_head is the same for all tail pages. The first
+ * page of the page structs (page 0) associated with the HugeTLB page contains
+ * the 4 page structs necessary to describe the HugeTLB. The only use of the
+ * remaining pages of page structs (page 1 to page 7) is to point to
+ * page->compound_head. Therefore, we can remap pages 2 to 7 to page 1. Only 2
+ * pages of page structs will be used for each HugeTLB page. This allows us to
+ * free the remaining 6 pages to the buddy allocator.
+ *
+ * Here is how things look after remapping.
+ *
+ *    HugeTLB                 struct pages(8 pages)        page frame(8 pages)
+ * +-----------+ ---virt_to_page---> +-----------+   mapping to   +-----------+
+ * |           |                     |     0     | -------------> |     0     |
+ * |           |                     +-----------+                +-----------+
+ * |           |                     |     1     | -------------> |     1     |
+ * |           |                     +-----------+                +-----------+
+ * |           |                     |     2     | ----------------^ ^ ^ ^ ^ ^
+ * |           |                     +-----------+                   | | | | |
+ * |           |                     |     3     | ------------------+ | | | |
+ * |           |                     +-----------+                     | | | |
+ * |           |                     |     4     | --------------------+ | | |
+ * |    PMD    |                     +-----------+                       | | |
+ * |   level   |                     |     5     | ----------------------+ | |
+ * |  mapping  |                     +-----------+                         | |
+ * |           |                     |     6     | ------------------------+ |
+ * |           |                     +-----------+                           |
+ * |           |                     |     7     | --------------------------+
+ * |           |                     +-----------+
+ * |           |
+ * |           |
+ * |           |
+ * +-----------+
+ *
+ * When a HugeTLB is freed to the buddy system, we should allocate 6 pages for
+ * vmemmap pages and restore the previous mapping relationship.
+ *
+ * For a HugeTLB page at the pud level mapping, it is similar to the former.
+ * We can also use this approach to free (PAGE_SIZE - 2) vmemmap pages.
+ *
+ * Apart from HugeTLB pages at the pmd/pud level mapping, some architectures
+ * (e.g. aarch64) provide a contiguous bit in the translation table entries
+ * that hints to the MMU that the entry is one of a contiguous set of entries
+ * that can be cached in a single TLB entry.
+ *
+ * The contiguous bit is used to increase the mapping size at the pmd and pte
+ * (last) level. So this type of HugeTLB page can be optimized only when the
+ * size of its struct page structs is greater than 2 pages.
+ */
+#define pr_fmt(fmt) "HugeTLB vmemmap: " fmt
+
+#include "hugetlb_vmemmap.h"
+
+/*
+ * There are a lot of struct page structures associated with each HugeTLB
+ * page. For tail pages, the value of compound_head is the same, so we can
+ * reuse the first page of the tail page structures. We map the virtual
+ * addresses of the remaining pages of tail page structures to the first
+ * tail page struct, and then free these page frames. Therefore, we need to
+ * reserve two pages as vmemmap areas.
+ */ +#define RESERVE_VMEMMAP_NR 2U +#define RESERVE_VMEMMAP_SIZE (RESERVE_VMEMMAP_NR << PAGE_SHIFT) + +/* + * How many vmemmap pages associated with a HugeTLB page that can be freed + * to the buddy allocator. + * + * Todo: Returns zero for now, which means the feature is disabled. We will + * enable it once all the infrastructure is there. + */ +static inline unsigned int free_vmemmap_pages_per_hpage(struct hstate *h) +{ + return 0; +} + +static inline unsigned long free_vmemmap_pages_size_per_hpage(struct hstate *h) +{ + return (unsigned long)free_vmemmap_pages_per_hpage(h) << PAGE_SHIFT; +} + +void free_huge_page_vmemmap(struct hstate *h, struct page *head) +{ + unsigned long vmemmap_addr = (unsigned long)head; + + if (!free_vmemmap_pages_per_hpage(h)) + return; + + vmemmap_remap_reuse(vmemmap_addr + RESERVE_VMEMMAP_SIZE, + free_vmemmap_pages_size_per_hpage(h)); +} diff --git a/mm/hugetlb_vmemmap.h b/mm/hugetlb_vmemmap.h new file mode 100644 index 000000000000..6923f03534d5 --- /dev/null +++ b/mm/hugetlb_vmemmap.h @@ -0,0 +1,20 @@ +// SPDX-License-Identifier: GPL-2.0 +/* + * Free some vmemmap pages of HugeTLB + * + * Copyright (c) 2020, Bytedance. All rights reserved. + * + * Author: Muchun Song + */ +#ifndef _LINUX_HUGETLB_VMEMMAP_H +#define _LINUX_HUGETLB_VMEMMAP_H +#include + +#ifdef CONFIG_HUGETLB_PAGE_FREE_VMEMMAP +void free_huge_page_vmemmap(struct hstate *h, struct page *head); +#else +static inline void free_huge_page_vmemmap(struct hstate *h, struct page *head) +{ +} +#endif /* CONFIG_HUGETLB_PAGE_FREE_VMEMMAP */ +#endif /* _LINUX_HUGETLB_VMEMMAP_H */ diff --git a/mm/sparse-vmemmap.c b/mm/sparse-vmemmap.c index 16183d85a7d5..78c527617e8d 100644 --- a/mm/sparse-vmemmap.c +++ b/mm/sparse-vmemmap.c @@ -27,8 +27,178 @@ #include #include #include +#include +#include + #include #include +#include + +/* + * vmemmap_rmap_walk - walk vmemmap page table + * + * @rmap_pte: called for each non-empty PTE (lowest-level) entry. + * @reuse: the page which is reused for the tail vmemmap pages. + * @vmemmap_pages: the list head of the vmemmap pages that can be freed. + */ +struct vmemmap_rmap_walk { + void (*rmap_pte)(pte_t *pte, unsigned long addr, + struct vmemmap_rmap_walk *walk); + struct page *reuse; + struct list_head *vmemmap_pages; +}; + +/* + * The index of the pte page table which is mapped to the tail of the + * vmemmap page. 
+ */ +#define VMEMMAP_TAIL_PAGE_REUSE -1 +static void vmemmap_pte_range(pmd_t *pmd, unsigned long addr, + unsigned long end, struct vmemmap_rmap_walk *walk) +{ + pte_t *pte; + + pte = pte_offset_kernel(pmd, addr); + do { + BUG_ON(pte_none(*pte)); + + if (!walk->reuse) + walk->reuse = pte_page(pte[VMEMMAP_TAIL_PAGE_REUSE]); + + if (walk->rmap_pte) + walk->rmap_pte(pte, addr, walk); + } while (pte++, addr += PAGE_SIZE, addr != end); +} + +static void vmemmap_pmd_range(pud_t *pud, unsigned long addr, + unsigned long end, struct vmemmap_rmap_walk *walk) +{ + pmd_t *pmd; + unsigned long next; + + pmd = pmd_offset(pud, addr); + do { + BUG_ON(pmd_none(*pmd)); + + next = pmd_addr_end(addr, end); + vmemmap_pte_range(pmd, addr, next, walk); + } while (pmd++, addr = next, addr != end); +} + +static void vmemmap_pud_range(p4d_t *p4d, unsigned long addr, + unsigned long end, struct vmemmap_rmap_walk *walk) +{ + pud_t *pud; + unsigned long next; + + pud = pud_offset(p4d, addr); + do { + BUG_ON(pud_none(*pud)); + + next = pud_addr_end(addr, end); + vmemmap_pmd_range(pud, addr, next, walk); + } while (pud++, addr = next, addr != end); +} + +static void vmemmap_p4d_range(pgd_t *pgd, unsigned long addr, + unsigned long end, struct vmemmap_rmap_walk *walk) +{ + p4d_t *p4d; + unsigned long next; + + p4d = p4d_offset(pgd, addr); + do { + BUG_ON(p4d_none(*p4d)); + + next = p4d_addr_end(addr, end); + vmemmap_pud_range(p4d, addr, next, walk); + } while (p4d++, addr = next, addr != end); +} + +static void vmemmap_remap_range(unsigned long start, unsigned long end, + struct vmemmap_rmap_walk *walk) +{ + unsigned long addr = start; + unsigned long next; + pgd_t *pgd; + + VM_BUG_ON(!IS_ALIGNED(start, PAGE_SIZE)); + VM_BUG_ON(!IS_ALIGNED(end, PAGE_SIZE)); + + pgd = pgd_offset_k(addr); + do { + BUG_ON(pgd_none(*pgd)); + + next = pgd_addr_end(addr, end); + vmemmap_p4d_range(pgd, addr, next, walk); + } while (pgd++, addr = next, addr != end); + + flush_tlb_kernel_range(start, end); +} + +/* + * Free a vmemmap page. A vmemmap page can be allocated from the memblock + * allocator or the buddy allocator. If the PG_reserved flag is set, it means + * that it was allocated from the memblock allocator, so just free it via + * free_bootmem_page(). Otherwise, use __free_page(). + */ +static inline void free_vmemmap_page(struct page *page) +{ + if (PageReserved(page)) + free_bootmem_page(page); + else + __free_page(page); +} + +/* Free a list of the vmemmap pages */ +static void free_vmemmap_page_list(struct list_head *list) +{ + struct page *page, *next; + + list_for_each_entry_safe(page, next, list, lru) { + list_del(&page->lru); + free_vmemmap_page(page); + } +} + +static void vmemmap_remap_reuse_pte(pte_t *pte, unsigned long addr, + struct vmemmap_rmap_walk *walk) +{ + /* + * Map the tail pages read-only to catch illegal write + * operations to the tail pages. + */ + pgprot_t pgprot = PAGE_KERNEL_RO; + pte_t entry = mk_pte(walk->reuse, pgprot); + struct page *page; + + page = pte_page(*pte); + list_add(&page->lru, walk->vmemmap_pages); + + set_pte_at(&init_mm, addr, pte, entry); +} +
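The walker above takes a callback per present PTE. A minimal sketch of the contract, with hypothetical example_* names and only the structures and functions introduced in this patch assumed — a walk that merely counts vmemmap PTEs in a range:

	/* Hypothetical illustration, not part of this patch. */
	static unsigned long example_nr_ptes;

	static void example_count_pte(pte_t *pte, unsigned long addr,
				      struct vmemmap_rmap_walk *walk)
	{
		/* Called once for every present PTE in [start, end). */
		example_nr_ptes++;
	}

	static void example_count_range(unsigned long start, unsigned long size)
	{
		struct vmemmap_rmap_walk walk = {
			.rmap_pte = example_count_pte,
		};

		/*
		 * walk.reuse is latched from pte[VMEMMAP_TAIL_PAGE_REUSE]
		 * (i.e. pte[-1], the PTE of [start - PAGE_SIZE, start)) on
		 * the first visited entry; that is how the remap callbacks
		 * find the page that the tail mappings are redirected to.
		 */
		vmemmap_remap_range(start, start + size, &walk);
	}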
+/** + * vmemmap_remap_reuse - remap the vmemmap virtual address range + * [start, start + size) to the page to which + * [start - PAGE_SIZE, start) is mapped. + * @start: start address of the vmemmap virtual address range + * @size: size of the vmemmap virtual address range + */ +void vmemmap_remap_reuse(unsigned long start, unsigned long size) +{ + unsigned long end = start + size; + LIST_HEAD(vmemmap_pages); + + struct vmemmap_rmap_walk walk = { + .rmap_pte = vmemmap_remap_reuse_pte, + .vmemmap_pages = &vmemmap_pages, + }; + + vmemmap_remap_range(start, end, &walk); + free_vmemmap_page_list(&vmemmap_pages); +} /* * Allocate a block of memory to be used to back the virtual memory map
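Concretely, for a pmd-mapped 2MB HugeTLB page on x86-64 the free path boils down to the following; a hedged sketch where example_* is hypothetical and RESERVE_VMEMMAP_NR/RESERVE_VMEMMAP_SIZE are the constants from this patch:

	/* Hypothetical illustration, not part of this patch. */
	static void example_reuse_2mb_page(struct page *head)
	{
		unsigned long vmemmap_addr = (unsigned long)head;

		/*
		 * 8 vmemmap pages describe a 2MB HugeTLB page. The first two
		 * (RESERVE_VMEMMAP_NR) stay in place; the remaining 6 are
		 * remapped read-only onto the reused tail page and their page
		 * frames are freed.
		 */
		vmemmap_remap_reuse(vmemmap_addr + RESERVE_VMEMMAP_SIZE,
				    6UL << PAGE_SHIFT);
	}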
From patchwork Sun Dec 13 15:45:27 2020
X-Patchwork-Submitter: Muchun Song
X-Patchwork-Id: 11970831
From: Muchun Song
Subject: [PATCH v9 04/11] mm/hugetlb: Defer freeing of HugeTLB pages
Date: Sun, 13 Dec 2020 23:45:27 +0800
Message-Id: <20201213154534.54826-5-songmuchun@bytedance.com>
In-Reply-To: <20201213154534.54826-1-songmuchun@bytedance.com>
References: <20201213154534.54826-1-songmuchun@bytedance.com>

A subsequent patch will allocate vmemmap pages when freeing HugeTLB pages. But update_and_free_page() can be called from a non-task context (and with hugetlb_lock held), so defer the actual freeing to a workqueue to avoid having to use GFP_ATOMIC to allocate the vmemmap pages.

Signed-off-by: Muchun Song
Reviewed-by: Mike Kravetz
---
 mm/hugetlb.c         | 77 ++++++++++++++++++++++++++++++++++++++++++----
 mm/hugetlb_vmemmap.c | 12 --------
 mm/hugetlb_vmemmap.h | 17 ++++++++++++
 3 files changed, 88 insertions(+), 18 deletions(-)

diff --git a/mm/hugetlb.c b/mm/hugetlb.c index 140135fc8113..0ff9b90e524f 100644 --- a/mm/hugetlb.c +++ b/mm/hugetlb.c @@ -1292,15 +1292,76 @@ static inline void destroy_compound_gigantic_page(struct page *page, unsigned int order) { } #endif -static void update_and_free_page(struct hstate *h, struct page *page) +static void __free_hugepage(struct hstate *h, struct page *page); + +/* + * As update_and_free_page() can be called from a non-task context (and with + * hugetlb_lock held), we can defer the actual freeing to a workqueue to + * avoid using GFP_ATOMIC to allocate a lot of vmemmap pages. + * + * update_hpage_vmemmap_workfn() locklessly retrieves the linked list of + * pages to be freed and frees them one-by-one. As the page->mapping pointer + * is going to be cleared in update_hpage_vmemmap_workfn() anyway, it is + * reused as the llist_node structure of a lockless linked list of huge + * pages to be freed.
+ */ +static LLIST_HEAD(hpage_update_freelist); + +static void update_hpage_vmemmap_workfn(struct work_struct *work) { - int i; + struct llist_node *node; + struct page *page; + + node = llist_del_all(&hpage_update_freelist); + while (node) { + page = container_of((struct address_space **)node, + struct page, mapping); + node = node->next; + page->mapping = NULL; + __free_hugepage(page_hstate(page), page); + + cond_resched(); + } +} +static DECLARE_WORK(hpage_update_work, update_hpage_vmemmap_workfn); + +static inline void __update_and_free_page(struct hstate *h, struct page *page) +{ + /* No need to allocate vmemmap pages */ + if (!free_vmemmap_pages_per_hpage(h)) { + __free_hugepage(h, page); + return; + } + + /* + * Defer freeing to avoid using GFP_ATOMIC to allocate vmemmap + * pages. + * + * Only call schedule_work() if hpage_update_freelist is previously + * empty. Otherwise, schedule_work() had been called but the workfn + * hasn't retrieved the list yet. + */ + if (llist_add((struct llist_node *)&page->mapping, + &hpage_update_freelist)) + schedule_work(&hpage_update_work); +} + +static void update_and_free_page(struct hstate *h, struct page *page) +{ if (hstate_is_gigantic(h) && !gigantic_page_runtime_supported()) return; h->nr_huge_pages--; h->nr_huge_pages_node[page_to_nid(page)]--; + + __update_and_free_page(h, page); +} + +static void __free_hugepage(struct hstate *h, struct page *page) +{ + int i; + for (i = 0; i < pages_per_huge_page(h); i++) { page[i].flags &= ~(1 << PG_locked | 1 << PG_error | 1 << PG_referenced | 1 << PG_dirty | @@ -1313,13 +1374,17 @@ static void update_and_free_page(struct hstate *h, struct page *page) set_page_refcounted(page); if (hstate_is_gigantic(h)) { /* - * Temporarily drop the hugetlb_lock, because - * we might block in free_gigantic_page(). + * Temporarily drop the hugetlb_lock only when this type of + * HugeTLB page does not support vmemmap optimization (in which + * case this runs from the workqueue and does not hold the + * hugetlb_lock), because we might block in free_gigantic_page(). */ - spin_unlock(&hugetlb_lock); + if (!free_vmemmap_pages_per_hpage(h)) + spin_unlock(&hugetlb_lock); destroy_compound_gigantic_page(page, huge_page_order(h)); free_gigantic_page(page, huge_page_order(h)); - spin_lock(&hugetlb_lock); + if (!free_vmemmap_pages_per_hpage(h)) + spin_lock(&hugetlb_lock); } else { __free_pages(page, huge_page_order(h)); } diff --git a/mm/hugetlb_vmemmap.c b/mm/hugetlb_vmemmap.c index 5a714bd60d6b..6d4e77a2b6c7 100644 --- a/mm/hugetlb_vmemmap.c +++ b/mm/hugetlb_vmemmap.c @@ -180,18 +180,6 @@ #define RESERVE_VMEMMAP_NR 2U #define RESERVE_VMEMMAP_SIZE (RESERVE_VMEMMAP_NR << PAGE_SHIFT) -/* - * How many vmemmap pages associated with a HugeTLB page that can be freed - * to the buddy allocator. - * - * Todo: Returns zero for now, which means the feature is disabled. We will - * enable it once all the infrastructure is there. - */ -static inline unsigned int free_vmemmap_pages_per_hpage(struct hstate *h) -{ - return 0; -} - static inline unsigned long free_vmemmap_pages_size_per_hpage(struct hstate *h) { return (unsigned long)free_vmemmap_pages_per_hpage(h) << PAGE_SHIFT; } diff --git a/mm/hugetlb_vmemmap.h b/mm/hugetlb_vmemmap.h index 6923f03534d5..01f8637adbe0 100644 --- a/mm/hugetlb_vmemmap.h +++ b/mm/hugetlb_vmemmap.h @@ -12,9 +12,26 @@ #ifdef CONFIG_HUGETLB_PAGE_FREE_VMEMMAP void free_huge_page_vmemmap(struct hstate *h, struct page *head); + +/* + * How many vmemmap pages associated with a HugeTLB page that can be freed + * to the buddy allocator.
+ * + * Todo: Returns zero for now, which means the feature is disabled. We will + * enable it once all the infrastructure is there. + */ +static inline unsigned int free_vmemmap_pages_per_hpage(struct hstate *h) +{ + return 0; +} #else static inline void free_huge_page_vmemmap(struct hstate *h, struct page *head) { } + +static inline unsigned int free_vmemmap_pages_per_hpage(struct hstate *h) +{ + return 0; +} #endif /* CONFIG_HUGETLB_PAGE_FREE_VMEMMAP */ #endif /* _LINUX_HUGETLB_VMEMMAP_H */
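The deferral pattern in this patch is self-contained enough to sketch on its own. A hedged illustration — the example_* names are hypothetical, while the llist/workqueue APIs are the stock kernel ones used above:

	/* Hypothetical illustration, not part of this patch. */
	#include <linux/llist.h>
	#include <linux/workqueue.h>

	static LLIST_HEAD(example_freelist);

	static void example_workfn(struct work_struct *work)
	{
		struct llist_node *node = llist_del_all(&example_freelist);

		while (node) {
			struct llist_node *next = node->next;

			/* ... process one deferred item ... */
			node = next;
		}
	}
	static DECLARE_WORK(example_work, example_workfn);

	static void example_defer(struct llist_node *node)
	{
		/*
		 * llist_add() returns true only when the list was empty, so
		 * the work item is scheduled exactly once per batch; later
		 * producers just append to the pending list.
		 */
		if (llist_add(node, &example_freelist))
			schedule_work(&example_work);
	}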
From patchwork Sun Dec 13 15:45:28 2020
From: Muchun Song
Subject: [PATCH v9 05/11] mm/hugetlb: Allocate the vmemmap pages associated
 with each HugeTLB page
Date: Sun, 13 Dec 2020 23:45:28 +0800
Message-Id: <20201213154534.54826-6-songmuchun@bytedance.com>
In-Reply-To: <20201213154534.54826-1-songmuchun@bytedance.com>
References: <20201213154534.54826-1-songmuchun@bytedance.com>

When we free a HugeTLB page to the buddy allocator, we have to allocate the
vmemmap pages associated with it again. We can do that in __free_hugepage()
before the page is released to the buddy allocator.

Signed-off-by: Muchun Song
---
 include/linux/mm.h   |  1 +
 mm/hugetlb.c         |  2 ++
 mm/hugetlb_vmemmap.c | 11 +++++++++
 mm/hugetlb_vmemmap.h |  5 ++++
 mm/sparse-vmemmap.c  | 69 +++++++++++++++++++++++++++++++++++++++++++++++++++-
 5 files changed, 87 insertions(+), 1 deletion(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index ab02e405a979..5b8dc36e4d20 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -3006,6 +3006,7 @@ static inline void print_vma_addr(char *prefix, unsigned long rip)
 #endif
 
 void vmemmap_remap_reuse(unsigned long start, unsigned long size);
+void vmemmap_remap_restore(unsigned long start, unsigned long size);
 
 void *sparse_buffer_alloc(unsigned long size);
 struct page * __populate_section_memmap(unsigned long pfn,

diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 0ff9b90e524f..542e6cb81321 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -1362,6 +1362,8 @@ static void __free_hugepage(struct hstate *h, struct page *page)
 {
 	int i;
 
+	alloc_huge_page_vmemmap(h, page);
+
 	for (i = 0; i < pages_per_huge_page(h); i++) {
 		page[i].flags &= ~(1 << PG_locked | 1 << PG_error |
 				1 << PG_referenced | 1 << PG_dirty |

diff --git a/mm/hugetlb_vmemmap.c b/mm/hugetlb_vmemmap.c
index 6d4e77a2b6c7..02201c2e3dfa 100644
--- a/mm/hugetlb_vmemmap.c
+++ b/mm/hugetlb_vmemmap.c
@@ -185,6 +185,17 @@ static inline unsigned long free_vmemmap_pages_size_per_hpage(struct hstate *h)
 	return (unsigned long)free_vmemmap_pages_per_hpage(h) << PAGE_SHIFT;
 }
 
+void alloc_huge_page_vmemmap(struct hstate *h, struct page *head)
+{
+	unsigned long vmemmap_addr = (unsigned long)head;
+
+	if (!free_vmemmap_pages_per_hpage(h))
+		return;
+
+	vmemmap_remap_restore(vmemmap_addr + RESERVE_VMEMMAP_SIZE,
+			      free_vmemmap_pages_size_per_hpage(h));
+}
+
 void free_huge_page_vmemmap(struct hstate *h, struct page *head)
 {
 	unsigned long vmemmap_addr = (unsigned long)head;

diff --git a/mm/hugetlb_vmemmap.h b/mm/hugetlb_vmemmap.h
index 01f8637adbe0..b2c8d2f11d48 100644
--- a/mm/hugetlb_vmemmap.h
+++ b/mm/hugetlb_vmemmap.h
@@ -11,6 +11,7 @@
 #include 
 
 #ifdef CONFIG_HUGETLB_PAGE_FREE_VMEMMAP
+void alloc_huge_page_vmemmap(struct hstate *h, struct page *head);
 void free_huge_page_vmemmap(struct hstate *h, struct page *head);
 
 /*
@@ -25,6 +26,10 @@ static inline unsigned int free_vmemmap_pages_per_hpage(struct hstate *h)
 	return 0;
 }
 #else
+static inline void alloc_huge_page_vmemmap(struct hstate *h, struct page *head)
+{
+}
+
 static inline void free_huge_page_vmemmap(struct hstate *h, struct page *head)
 {
 }

diff --git a/mm/sparse-vmemmap.c b/mm/sparse-vmemmap.c
index 78c527617e8d..ffcf092c92ed 100644
--- a/mm/sparse-vmemmap.c
+++ b/mm/sparse-vmemmap.c
@@ -29,6 +29,7 @@
 #include 
 #include 
 #include 
+#include 
 
 #include 
 #include 
@@ -39,7 +40,8 @@
 *
 * @rmap_pte:		called for each non-empty PTE (lowest-level) entry.
 * @reuse:		the page which is reused for the tail vmemmap pages.
- * @vmemmap_pages:	the list head of the vmemmap pages that can be freed.
+ * @vmemmap_pages:	the list head of the vmemmap pages that can be freed
+ *			or is mapped from.
 */
 struct vmemmap_rmap_walk {
 	void (*rmap_pte)(pte_t *pte, unsigned long addr,
@@ -54,6 +56,9 @@ struct vmemmap_rmap_walk {
 */
 #define VMEMMAP_TAIL_PAGE_REUSE	-1
 
+/* The GFP flags used to allocate vmemmap pages */
+#define GFP_VMEMMAP_PAGE	(GFP_KERNEL | __GFP_RETRY_MAYFAIL | __GFP_NOWARN)
+
 static void vmemmap_pte_range(pmd_t *pmd, unsigned long addr,
 			      unsigned long end, struct vmemmap_rmap_walk *walk)
 {
@@ -200,6 +205,68 @@ void vmemmap_remap_reuse(unsigned long start, unsigned long size)
 	free_vmemmap_page_list(&vmemmap_pages);
 }
 
+static void vmemmap_remap_restore_pte(pte_t *pte, unsigned long addr,
+				      struct vmemmap_rmap_walk *walk)
+{
+	pgprot_t pgprot = PAGE_KERNEL;
+	struct page *page;
+	void *to;
+
+	BUG_ON(pte_page(*pte) != walk->reuse);
+
+	page = list_first_entry(walk->vmemmap_pages, struct page, lru);
+	list_del(&page->lru);
+	to = page_to_virt(page);
+	copy_page(to, page_to_virt(walk->reuse));
+
+	set_pte_at(&init_mm, addr, pte, mk_pte(page, pgprot));
+}
+
+static void alloc_vmemmap_page_list(struct list_head *list,
+				    unsigned long nr_pages)
+{
+	while (nr_pages--) {
+		struct page *page;
+
+retry:
+		page = alloc_page(GFP_VMEMMAP_PAGE);
+		if (unlikely(!page)) {
+			msleep(100);
+			/*
+			 * We must retry indefinitely because we cannot
+			 * handle allocation failures here. Only once all
+			 * of the vmemmap pages are allocated can the
+			 * HugeTLB page be freed.
+			 */
+			goto retry;
+		}
+		list_add_tail(&page->lru, list);
+	}
+}
+
+/**
+ * vmemmap_remap_restore - remap the vmemmap virtual address range
+ *                         [start, start + size) to the pages taken from
+ *                         @vmemmap_pages
+ * @start:	start address of the vmemmap virtual address range
+ * @size:	size of the vmemmap virtual address range
+ */
+void vmemmap_remap_restore(unsigned long start, unsigned long size)
+{
+	LIST_HEAD(vmemmap_pages);
+	unsigned long end = start + size;
+
+	struct vmemmap_rmap_walk walk = {
+		.rmap_pte	= vmemmap_remap_restore_pte,
+		.vmemmap_pages	= &vmemmap_pages,
+	};
+
+	might_sleep();
+
+	alloc_vmemmap_page_list(&vmemmap_pages, size >> PAGE_SHIFT);
+	vmemmap_remap_range(start, end, &walk);
+}
+
 /*
 * Allocate a block of memory to be used to back the virtual memory map
 * or to back the page tables that are used to create the mapping.
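Note the two-phase split in vmemmap_remap_restore(): every page is allocated
up front, retrying GFP_VMEMMAP_PAGE until it succeeds, and only then is the
page-table walk performed, so the remap itself can no longer fail midway. A
user-space sketch of the same retry-forever, allocate-then-copy policy
(illustrative names, not kernel API):

#include <stdlib.h>
#include <string.h>
#include <unistd.h>

#define TOY_PAGE_SIZE 4096

/* Phase 1: retry until the allocation succeeds; failure is treated as
 * transient because the caller has no way to back out (cf. the
 * msleep(100) loop in alloc_vmemmap_page_list()). */
static void *toy_alloc_page(void)
{
	void *page;

	while (!(page = malloc(TOY_PAGE_SIZE)))
		usleep(100 * 1000);
	return page;
}

/* Phase 2: with all pages in hand, "remapping" (here: copying a template,
 * like copy_page() from the reused head frame) cannot fail partway. */
static void toy_remap_restore(void **pages, unsigned long nr,
			      const void *template)
{
	for (unsigned long i = 0; i < nr; i++) {
		pages[i] = toy_alloc_page();
		memcpy(pages[i], template, TOY_PAGE_SIZE);
	}
}

int main(void)
{
	static char template[TOY_PAGE_SIZE];
	void *pages[6];

	toy_remap_restore(pages, 6, template);
	for (int i = 0; i < 6; i++)
		free(pages[i]);
	return 0;
}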
From patchwork Sun Dec 13 15:45:29 2020
From: Muchun Song
Subject: [PATCH v9 06/11] mm/hugetlb: Set the PageHWPoison to the raw error page
Date: Sun, 13 Dec 2020 23:45:29 +0800
Message-Id: <20201213154534.54826-7-songmuchun@bytedance.com>
In-Reply-To: <20201213154534.54826-1-songmuchun@bytedance.com>
References: <20201213154534.54826-1-songmuchun@bytedance.com>
Because we reuse the first tail vmemmap page frame and remap it read-only,
we cannot set the PageHWPoison flag on a tail page. Instead, we use
head[4].private to record the real error page index and set PageHWPoison
on the raw error page later.

Signed-off-by: Muchun Song
Reviewed-by: Oscar Salvador
---
 mm/hugetlb.c | 48 ++++++++++++++++++++++++++++++++++++++++--------
 1 file changed, 40 insertions(+), 8 deletions(-)

diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 542e6cb81321..29de425f879a 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -1347,6 +1347,43 @@ static inline void __update_and_free_page(struct hstate *h, struct page *page)
 		schedule_work(&hpage_update_work);
 }
 
+static inline void hwpoison_subpage_deliver(struct hstate *h, struct page *head)
+{
+	struct page *page;
+
+	if (!PageHWPoison(head) || !free_vmemmap_pages_per_hpage(h))
+		return;
+
+	page = head + page_private(head + 4);
+
+	/*
+	 * Move PageHWPoison flag from head page to the raw error page,
+	 * which makes any subpages other than the error page reusable.
+	 */
+	if (page != head) {
+		SetPageHWPoison(page);
+		ClearPageHWPoison(head);
+	}
+}
+
+static inline void hwpoison_subpage_set(struct hstate *h, struct page *head,
+					struct page *page)
+{
+	if (!PageHWPoison(head))
+		return;
+
+	if (free_vmemmap_pages_per_hpage(h)) {
+		set_page_private(head + 4, page - head);
+	} else if (page != head) {
+		/*
+		 * Move PageHWPoison flag from head page to the raw error page,
+		 * which makes any subpages other than the error page reusable.
+		 */
+		SetPageHWPoison(page);
+		ClearPageHWPoison(head);
+	}
+}
+
 static void update_and_free_page(struct hstate *h, struct page *page)
 {
 	if (hstate_is_gigantic(h) && !gigantic_page_runtime_supported())
@@ -1363,6 +1400,7 @@ static void __free_hugepage(struct hstate *h, struct page *page)
 	int i;
 
 	alloc_huge_page_vmemmap(h, page);
+	hwpoison_subpage_deliver(h, page);
 
 	for (i = 0; i < pages_per_huge_page(h); i++) {
 		page[i].flags &= ~(1 << PG_locked | 1 << PG_error |
@@ -1840,14 +1878,8 @@ int dissolve_free_huge_page(struct page *page)
 		int nid = page_to_nid(head);
 		if (h->free_huge_pages - h->resv_huge_pages == 0)
 			goto out;
-		/*
-		 * Move PageHWPoison flag from head page to the raw error page,
-		 * which makes any subpages rather than the error page reusable.
-		 */
-		if (PageHWPoison(head) && page != head) {
-			SetPageHWPoison(page);
-			ClearPageHWPoison(head);
-		}
+
+		hwpoison_subpage_set(h, head, page);
 		list_del(&head->lru);
 		h->free_huge_pages--;
 		h->free_huge_pages_node[nid]--;
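The two helpers split one action across time: hwpoison_subpage_set() runs
while the tail struct pages may still be read-only, so it only records the
error page's offset in a spare slot of the head, and hwpoison_subpage_deliver()
replays the flag once the vmemmap has been restored. A toy sketch of that
record/replay idea (struct toy_page and the index value are stand-ins for
kernel structures, chosen only for illustration):

#include <stdbool.h>
#include <stdio.h>

/* Toy stand-in for struct page: only the fields this sketch needs. */
struct toy_page {
	bool hwpoison;
	unsigned long private;	/* spare per-page slot, like page->private */
};

#define INDEX_HWPOISON 4	/* mirrors "head + 4" in the patch above */

/* At error time the tail pages may be read-only, so record on the head. */
static void hwpoison_record(struct toy_page *head, struct toy_page *page)
{
	head->hwpoison = true;
	head[INDEX_HWPOISON].private = page - head;
}

/* Later, once the tail pages are writable again, move the flag. */
static void hwpoison_deliver(struct toy_page *head)
{
	struct toy_page *page = head + head[INDEX_HWPOISON].private;

	if (page != head) {
		page->hwpoison = true;
		head->hwpoison = false;
	}
}

int main(void)
{
	struct toy_page huge[512] = { 0 };

	hwpoison_record(huge, &huge[37]);
	hwpoison_deliver(huge);
	printf("page 37 poisoned: %d, head poisoned: %d\n",
	       huge[37].hwpoison, huge[0].hwpoison);
	return 0;
}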
From patchwork Sun Dec 13 15:45:30 2020
From: Muchun Song
Subject: [PATCH v9 07/11] mm/hugetlb: Flush work when dissolving hugetlb page
Date: Sun, 13 Dec 2020 23:45:30 +0800
Message-Id: <20201213154534.54826-8-songmuchun@bytedance.com>
In-Reply-To: <20201213154534.54826-1-songmuchun@bytedance.com>
References: <20201213154534.54826-1-songmuchun@bytedance.com>

We should flush the work when dissolving a hugetlb page to make sure that
the hugetlb page has actually been freed to the buddy allocator.

Signed-off-by: Muchun Song
---
 mm/hugetlb.c | 18 +++++++++++++++++-
 1 file changed, 17 insertions(+), 1 deletion(-)

diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 29de425f879a..b0847b2ce01d 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -1326,6 +1326,12 @@ static void update_hpage_vmemmap_workfn(struct work_struct *work)
 }
 static DECLARE_WORK(hpage_update_work, update_hpage_vmemmap_workfn);
 
+static inline void flush_hpage_update_work(struct hstate *h)
+{
+	if (free_vmemmap_pages_per_hpage(h))
+		flush_work(&hpage_update_work);
+}
+
 static inline void __update_and_free_page(struct hstate *h, struct page *page)
 {
 	/* No need to allocate vmemmap pages */
@@ -1861,6 +1867,7 @@ static int free_pool_huge_page(struct hstate *h, nodemask_t *nodes_allowed,
 int dissolve_free_huge_page(struct page *page)
 {
 	int rc = -EBUSY;
+	struct hstate *h = NULL;
 
 	/* Not to disrupt normal path by vainly holding hugetlb_lock */
 	if (!PageHuge(page))
@@ -1874,8 +1881,9 @@ int dissolve_free_huge_page(struct page *page)
 
 	if (!page_count(page)) {
 		struct page *head = compound_head(page);
-		struct hstate *h = page_hstate(head);
 		int nid = page_to_nid(head);
+
+		h = page_hstate(head);
 		if (h->free_huge_pages - h->resv_huge_pages == 0)
 			goto out;
 
@@ -1889,6 +1897,14 @@ int dissolve_free_huge_page(struct page *page)
 	}
 out:
 	spin_unlock(&hugetlb_lock);
+
+	/*
+	 * We should flush the work before returning to make sure that
+	 * the HugeTLB page has been freed to the buddy allocator.
+	 */
+	if (!rc && h)
+		flush_hpage_update_work(h);
+
 	return rc;
 }
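flush_work() returns only after work queued before the call has finished; that
is what lets dissolve_free_huge_page() promise the page really reached the
buddy allocator. A user-space approximation of those flush semantics with a
single worker thread (a sketch of the idea, not the workqueue implementation):

#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t more = PTHREAD_COND_INITIALIZER;
static pthread_cond_t done = PTHREAD_COND_INITIALIZER;
static unsigned long queued, completed;
static int stop;

static void *worker(void *arg)
{
	(void)arg;
	pthread_mutex_lock(&lock);
	for (;;) {
		while (completed == queued && !stop)
			pthread_cond_wait(&more, &lock);
		if (completed == queued && stop)
			break;
		pthread_mutex_unlock(&lock);
		usleep(1000);		/* the deferred "__free_hugepage()" */
		pthread_mutex_lock(&lock);
		completed++;
		pthread_cond_broadcast(&done);
	}
	pthread_mutex_unlock(&lock);
	return NULL;
}

static void queue_work(void)
{
	pthread_mutex_lock(&lock);
	queued++;
	pthread_cond_signal(&more);
	pthread_mutex_unlock(&lock);
}

/* Like flush_work(): return only after everything queued so far has run. */
static void flush(void)
{
	pthread_mutex_lock(&lock);
	unsigned long target = queued;

	while (completed < target)
		pthread_cond_wait(&done, &lock);
	pthread_mutex_unlock(&lock);
}

int main(void)
{
	pthread_t t;

	pthread_create(&t, NULL, worker, NULL);
	for (int i = 0; i < 8; i++)
		queue_work();

	flush();	/* as in dissolve: deferred frees are now done */

	pthread_mutex_lock(&lock);
	printf("completed %lu of %lu\n", completed, queued);
	stop = 1;
	pthread_cond_signal(&more);
	pthread_mutex_unlock(&lock);

	pthread_join(t, NULL);
	return 0;
}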
From patchwork Sun Dec 13 15:45:31 2020
From: Muchun Song
Subject: [PATCH v9 08/11] mm/hugetlb: Add a kernel parameter hugetlb_free_vmemmap
Date: Sun, 13 Dec 2020 23:45:31 +0800
Message-Id: <20201213154534.54826-9-songmuchun@bytedance.com>
In-Reply-To: <20201213154534.54826-1-songmuchun@bytedance.com>
References: <20201213154534.54826-1-songmuchun@bytedance.com>

Add a kernel parameter hugetlb_free_vmemmap to enable the feature of
freeing unused vmemmap pages associated with each hugetlb page on boot
(the feature defaults to off).

Signed-off-by: Muchun Song
Reviewed-by: Oscar Salvador
---
 Documentation/admin-guide/kernel-parameters.txt |  9 +++++++++
 Documentation/admin-guide/mm/hugetlbpage.rst    |  3 +++
 arch/x86/mm/init_64.c                           |  8 ++++++--
 include/linux/hugetlb.h                         | 19 +++++++++++++++++++
 mm/hugetlb_vmemmap.c                            | 16 ++++++++++++++++
 5 files changed, 53 insertions(+), 2 deletions(-)

diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt
index 3ae25630a223..9e6854f21d55 100644
--- a/Documentation/admin-guide/kernel-parameters.txt
+++ b/Documentation/admin-guide/kernel-parameters.txt
@@ -1551,6 +1551,15 @@
 			Documentation/admin-guide/mm/hugetlbpage.rst.
 			Format: size[KMG]
 
+	hugetlb_free_vmemmap=
+			[KNL] When CONFIG_HUGETLB_PAGE_FREE_VMEMMAP is set,
+			this controls freeing unused vmemmap pages associated
+			with each HugeTLB page.
+			Format: { on | off (default) }
+
+			on:  enable the feature
+			off: disable the feature
+
 	hung_task_panic=
 			[KNL] Should the hung task detector generate panics.
 			Format: 0 | 1

diff --git a/Documentation/admin-guide/mm/hugetlbpage.rst b/Documentation/admin-guide/mm/hugetlbpage.rst
index f7b1c7462991..3a23c2377acc 100644
--- a/Documentation/admin-guide/mm/hugetlbpage.rst
+++ b/Documentation/admin-guide/mm/hugetlbpage.rst
@@ -145,6 +145,9 @@ default_hugepagesz
 	will all result in 256 2M huge pages being allocated.  Valid default
 	huge page size is architecture dependent.
+hugetlb_free_vmemmap
+	When CONFIG_HUGETLB_PAGE_FREE_VMEMMAP is set, this enables freeing
+	unused vmemmap pages associated with each HugeTLB page.
 
 When multiple huge page sizes are supported, ``/proc/sys/vm/nr_hugepages``
 indicates the current number of pre-allocated huge pages of the default size.
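As a concrete usage example, enabling the feature together with boot-time
reservation of 1 GB pages might look like this on the kernel command line
(illustrative values; the parameter is a no-op unless the kernel was built
with CONFIG_HUGETLB_PAGE_FREE_VMEMMAP):

	hugetlb_free_vmemmap=on default_hugepagesz=1G hugepagesz=1G hugepages=8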
diff --git a/arch/x86/mm/init_64.c b/arch/x86/mm/init_64.c
index 0435bee2e172..1bce5f20e6ca 100644
--- a/arch/x86/mm/init_64.c
+++ b/arch/x86/mm/init_64.c
@@ -34,6 +34,7 @@
 #include 
 #include 
 #include 
+#include 
 
 #include 
 #include 
@@ -1557,7 +1558,8 @@ int __meminit vmemmap_populate(unsigned long start, unsigned long end, int node,
 {
 	int err;
 
-	if (end - start < PAGES_PER_SECTION * sizeof(struct page))
+	if (is_hugetlb_free_vmemmap_enabled() ||
+	    end - start < PAGES_PER_SECTION * sizeof(struct page))
 		err = vmemmap_populate_basepages(start, end, node, NULL);
 	else if (boot_cpu_has(X86_FEATURE_PSE))
 		err = vmemmap_populate_hugepages(start, end, node, altmap);
@@ -1585,6 +1587,8 @@ void register_page_bootmem_memmap(unsigned long section_nr,
 	pmd_t *pmd;
 	unsigned int nr_pmd_pages;
 	struct page *page;
+	bool base_mapping = !boot_cpu_has(X86_FEATURE_PSE) ||
+			    is_hugetlb_free_vmemmap_enabled();
 
 	for (; addr < end; addr = next) {
 		pte_t *pte = NULL;
@@ -1610,7 +1614,7 @@ void register_page_bootmem_memmap(unsigned long section_nr,
 		}
 		get_page_bootmem(section_nr, pud_page(*pud), MIX_SECTION_INFO);
 
-		if (!boot_cpu_has(X86_FEATURE_PSE)) {
+		if (base_mapping) {
 			next = (addr + PAGE_SIZE) & PAGE_MASK;
 			pmd = pmd_offset(pud, addr);
 			if (pmd_none(*pmd))

diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h
index ebca2ef02212..7f47f0eeca3b 100644
--- a/include/linux/hugetlb.h
+++ b/include/linux/hugetlb.h
@@ -770,6 +770,20 @@ static inline void huge_ptep_modify_prot_commit(struct vm_area_struct *vma,
 }
 #endif
 
+#ifdef CONFIG_HUGETLB_PAGE_FREE_VMEMMAP
+extern bool hugetlb_free_vmemmap_enabled;
+
+static inline bool is_hugetlb_free_vmemmap_enabled(void)
+{
+	return hugetlb_free_vmemmap_enabled;
+}
+#else
+static inline bool is_hugetlb_free_vmemmap_enabled(void)
+{
+	return false;
+}
+#endif
+
 #else	/* CONFIG_HUGETLB_PAGE */
 struct hstate {};
 
@@ -923,6 +937,11 @@ static inline void set_huge_swap_pte_at(struct mm_struct *mm, unsigned long addr
 					pte_t *ptep, pte_t pte, unsigned long sz)
 {
 }
+
+static inline bool is_hugetlb_free_vmemmap_enabled(void)
+{
+	return false;
+}
 #endif	/* CONFIG_HUGETLB_PAGE */
 
 static inline spinlock_t *huge_pte_lock(struct hstate *h,

diff --git a/mm/hugetlb_vmemmap.c b/mm/hugetlb_vmemmap.c
index 02201c2e3dfa..64ad929cac61 100644
--- a/mm/hugetlb_vmemmap.c
+++ b/mm/hugetlb_vmemmap.c
@@ -180,6 +180,22 @@
 #define RESERVE_VMEMMAP_NR	2U
 #define RESERVE_VMEMMAP_SIZE	(RESERVE_VMEMMAP_NR << PAGE_SHIFT)
 
+bool hugetlb_free_vmemmap_enabled;
+
+static int __init early_hugetlb_free_vmemmap_param(char *buf)
+{
+	if (!buf)
+		return -EINVAL;
+
+	if (!strcmp(buf, "on"))
+		hugetlb_free_vmemmap_enabled = true;
+	else if (strcmp(buf, "off"))
+		return -EINVAL;
+
+	return 0;
+}
+early_param("hugetlb_free_vmemmap", early_hugetlb_free_vmemmap_param);
+
 static inline unsigned long free_vmemmap_pages_size_per_hpage(struct hstate *h)
 {
 	return (unsigned long)free_vmemmap_pages_per_hpage(h) << PAGE_SHIFT;
From patchwork Sun Dec 13 15:45:32 2020
From: Muchun Song
Subject: [PATCH v9 09/11] mm/hugetlb: Introduce nr_free_vmemmap_pages in the
 struct hstate
Date: Sun, 13 Dec 2020 23:45:32 +0800
Message-Id: <20201213154534.54826-10-songmuchun@bytedance.com>
In-Reply-To: <20201213154534.54826-1-songmuchun@bytedance.com>
References: <20201213154534.54826-1-songmuchun@bytedance.com>

All the infrastructure is ready, so we introduce a nr_free_vmemmap_pages
field in the hstate to indicate how many vmemmap pages associated with a
HugeTLB page can be freed to the buddy allocator, and initialize it in
hugetlb_vmemmap_init(). This patch is the actual enablement of the feature.

Signed-off-by: Muchun Song
Acked-by: Mike Kravetz
Reviewed-by: Oscar Salvador
---
 include/linux/hugetlb.h |  3 +++
 mm/hugetlb.c            |  1 +
 mm/hugetlb_vmemmap.c    | 29 +++++++++++++++++++++++++++++
 mm/hugetlb_vmemmap.h    | 10 ++++++----
 4 files changed, 39 insertions(+), 4 deletions(-)

diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h
index 7f47f0eeca3b..66d82ae7b712 100644
--- a/include/linux/hugetlb.h
+++ b/include/linux/hugetlb.h
@@ -492,6 +492,9 @@ struct hstate {
 	unsigned int nr_huge_pages_node[MAX_NUMNODES];
 	unsigned int free_huge_pages_node[MAX_NUMNODES];
 	unsigned int surplus_huge_pages_node[MAX_NUMNODES];
+#ifdef CONFIG_HUGETLB_PAGE_FREE_VMEMMAP
+	unsigned int nr_free_vmemmap_pages;
+#endif
 #ifdef CONFIG_CGROUP_HUGETLB
 	/* cgroup control files */
 	struct cftype cgroup_files_dfl[7];

diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index b0847b2ce01d..2b45235a70e9 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -3323,6 +3323,7 @@ void __init hugetlb_add_hstate(unsigned int order)
 	h->next_nid_to_free = first_memory_node;
 	snprintf(h->name, HSTATE_NAME_LEN, "hugepages-%lukB",
 					huge_page_size(h)/1024);
+	hugetlb_vmemmap_init(h);
 
 	parsed_hstate = h;
 }

diff --git a/mm/hugetlb_vmemmap.c b/mm/hugetlb_vmemmap.c
index 64ad929cac61..d3b4c39f67c0 100644
--- a/mm/hugetlb_vmemmap.c
+++ b/mm/hugetlb_vmemmap.c
@@ -184,6 +184,10 @@ bool hugetlb_free_vmemmap_enabled;
 
 static int __init early_hugetlb_free_vmemmap_param(char *buf)
 {
+	/* We cannot optimize if a "struct page" crosses page boundaries. */
+	if (!is_power_of_2(sizeof(struct page)))
+		return 0;
+
 	if (!buf)
 		return -EINVAL;
 
@@ -222,3 +226,28 @@ void free_huge_page_vmemmap(struct hstate *h, struct page *head)
 	vmemmap_remap_reuse(vmemmap_addr + RESERVE_VMEMMAP_SIZE,
 			    free_vmemmap_pages_size_per_hpage(h));
 }
+
+void __init hugetlb_vmemmap_init(struct hstate *h)
+{
+	unsigned int nr_pages = pages_per_huge_page(h);
+	unsigned int vmemmap_pages;
+
+	if (!hugetlb_free_vmemmap_enabled)
+		return;
+
+	vmemmap_pages = (nr_pages * sizeof(struct page)) >> PAGE_SHIFT;
+	/*
+	 * The head vmemmap page and the first tail vmemmap page are not
+	 * freed to the buddy allocator; the other tail pages are remapped
+	 * to the first tail page, so only the remaining pages can be freed.
+	 *
+	 * Could RESERVE_VMEMMAP_NR be greater than @vmemmap_pages? It is true
+	 * on some architectures (e.g. aarch64). See Documentation/arm64/
+	 * hugetlbpage.rst for more details.
+	 */
+	if (likely(vmemmap_pages > RESERVE_VMEMMAP_NR))
+		h->nr_free_vmemmap_pages = vmemmap_pages - RESERVE_VMEMMAP_NR;
+
+	pr_info("can free %u vmemmap pages for %s\n", h->nr_free_vmemmap_pages,
+		h->name);
+}

diff --git a/mm/hugetlb_vmemmap.h b/mm/hugetlb_vmemmap.h
index b2c8d2f11d48..8fd9ae113dbd 100644
--- a/mm/hugetlb_vmemmap.h
+++ b/mm/hugetlb_vmemmap.h
@@ -13,17 +13,15 @@
 #ifdef CONFIG_HUGETLB_PAGE_FREE_VMEMMAP
 void alloc_huge_page_vmemmap(struct hstate *h, struct page *head);
 void free_huge_page_vmemmap(struct hstate *h, struct page *head);
+void hugetlb_vmemmap_init(struct hstate *h);
 
 /*
 * How many vmemmap pages associated with a HugeTLB page that can be freed
 * to the buddy allocator.
- *
- * Todo: Returns zero for now, which means the feature is disabled. We will
- * enable it once all the infrastructure is there.
 */
 static inline unsigned int free_vmemmap_pages_per_hpage(struct hstate *h)
 {
-	return 0;
+	return h->nr_free_vmemmap_pages;
 }
 #else
 static inline void alloc_huge_page_vmemmap(struct hstate *h, struct page *head)
@@ -38,5 +36,9 @@ static inline unsigned int free_vmemmap_pages_per_hpage(struct hstate *h)
 {
 	return 0;
 }
+
+static inline void hugetlb_vmemmap_init(struct hstate *h)
+{
+}
 #endif /* CONFIG_HUGETLB_PAGE_FREE_VMEMMAP */
 #endif /* _LINUX_HUGETLB_VMEMMAP_H */
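To make the arithmetic in hugetlb_vmemmap_init() concrete: assuming 4 KiB
base pages and a 64-byte struct page (the common x86_64 case), a 2 MB
HugeTLB page needs 8 vmemmap pages, of which 6 become freeable, and a 1 GB
page needs 4096, of which 4094 become freeable. A self-contained sketch of
the same computation:

#include <stdio.h>

#define PAGE_SHIFT		12
#define PAGE_SIZE		(1UL << PAGE_SHIFT)
#define STRUCT_PAGE_SIZE	64UL	/* assumed, as on x86_64 */
#define RESERVE_VMEMMAP_NR	2UL	/* head page + first tail page */

static unsigned long nr_free_vmemmap(unsigned long hpage_size)
{
	unsigned long nr_pages = hpage_size / PAGE_SIZE;
	unsigned long vmemmap_pages =
		(nr_pages * STRUCT_PAGE_SIZE) >> PAGE_SHIFT;

	/* Guard against vmemmap_pages <= RESERVE_VMEMMAP_NR, as the
	 * likely() branch above does for e.g. some aarch64 page sizes. */
	return vmemmap_pages > RESERVE_VMEMMAP_NR ?
	       vmemmap_pages - RESERVE_VMEMMAP_NR : 0;
}

int main(void)
{
	printf("2M hugepage: %lu freeable vmemmap pages\n",
	       nr_free_vmemmap(2UL << 20));	/* prints 6 */
	printf("1G hugepage: %lu freeable vmemmap pages\n",
	       nr_free_vmemmap(1UL << 30));	/* prints 4094 */
	return 0;
}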
From patchwork Sun Dec 13 15:45:33 2020
From: Muchun Song
Subject: [PATCH v9 10/11] mm/hugetlb: Gather discrete indexes of tail page
Date: Sun, 13 Dec 2020 23:45:33 +0800
Message-Id: <20201213154534.54826-11-songmuchun@bytedance.com>
In-Reply-To: <20201213154534.54826-1-songmuchun@bytedance.com>
References: <20201213154534.54826-1-songmuchun@bytedance.com>

For a HugeTLB page, there is more metadata to save than fits in the head
struct page, so we have to use other tail struct pages to store it. In
order to avoid conflicts caused by subsequent uses of more tail struct
pages, gather these discrete tail page indexes in one enum; this also
makes it easier to add a new tail page index later.

Only (RESERVE_VMEMMAP_SIZE / sizeof(struct page)) struct pages can be used
when CONFIG_HUGETLB_PAGE_FREE_VMEMMAP is set, so add a BUILD_BUG_ON to
catch invalid usage of the tail struct pages.
Signed-off-by: Muchun Song
Reviewed-by: Oscar Salvador
---
 include/linux/hugetlb.h        | 13 +++++++++++++
 include/linux/hugetlb_cgroup.h | 15 +++++++++------
 mm/hugetlb.c                   | 16 ++++++++--------
 mm/hugetlb_vmemmap.c           |  8 ++++++++
 4 files changed, 38 insertions(+), 14 deletions(-)

diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h
index 66d82ae7b712..7295f6b3d55e 100644
--- a/include/linux/hugetlb.h
+++ b/include/linux/hugetlb.h
@@ -28,6 +28,19 @@ typedef struct { unsigned long pd; } hugepd_t;
 #include 
 #include 
 
+enum {
+	SUBPAGE_INDEX_ACTIVE = 1,	/* reuse page flags of PG_private */
+	SUBPAGE_INDEX_TEMPORARY,	/* reuse page->mapping */
+#ifdef CONFIG_CGROUP_HUGETLB
+	SUBPAGE_INDEX_CGROUP = SUBPAGE_INDEX_TEMPORARY,/* reuse page->private */
+	SUBPAGE_INDEX_CGROUP_RSVD,	/* reuse page->private */
+#endif
+#ifdef CONFIG_HUGETLB_PAGE_FREE_VMEMMAP
+	SUBPAGE_INDEX_HWPOISON,		/* reuse page->private */
+#endif
+	NR_USED_SUBPAGE,
+};
+
 struct hugepage_subpool {
 	spinlock_t lock;
 	long count;

diff --git a/include/linux/hugetlb_cgroup.h b/include/linux/hugetlb_cgroup.h
index 2ad6e92f124a..3d3c1c49efe4 100644
--- a/include/linux/hugetlb_cgroup.h
+++ b/include/linux/hugetlb_cgroup.h
@@ -24,8 +24,9 @@ struct file_region;
 /*
 * Minimum page order trackable by hugetlb cgroup.
 * At least 4 pages are necessary for all the tracking information.
- * The second tail page (hpage[2]) is the fault usage cgroup.
- * The third tail page (hpage[3]) is the reservation usage cgroup.
+ * The second tail page (hpage[SUBPAGE_INDEX_CGROUP]) is the fault
+ * usage cgroup. The third tail page (hpage[SUBPAGE_INDEX_CGROUP_RSVD])
+ * is the reservation usage cgroup.
 */
 #define HUGETLB_CGROUP_MIN_ORDER	2
 
@@ -66,9 +67,9 @@ __hugetlb_cgroup_from_page(struct page *page, bool rsvd)
 	if (compound_order(page) < HUGETLB_CGROUP_MIN_ORDER)
 		return NULL;
 	if (rsvd)
-		return (struct hugetlb_cgroup *)page[3].private;
+		return (void *)page_private(page + SUBPAGE_INDEX_CGROUP_RSVD);
 	else
-		return (struct hugetlb_cgroup *)page[2].private;
+		return (void *)page_private(page + SUBPAGE_INDEX_CGROUP);
 }
 
 static inline struct hugetlb_cgroup *hugetlb_cgroup_from_page(struct page *page)
@@ -90,9 +91,11 @@ static inline int __set_hugetlb_cgroup(struct page *page,
 	if (compound_order(page) < HUGETLB_CGROUP_MIN_ORDER)
 		return -1;
 	if (rsvd)
-		page[3].private = (unsigned long)h_cg;
+		set_page_private(page + SUBPAGE_INDEX_CGROUP_RSVD,
+				 (unsigned long)h_cg);
 	else
-		page[2].private = (unsigned long)h_cg;
+		set_page_private(page + SUBPAGE_INDEX_CGROUP,
+				 (unsigned long)h_cg);
 	return 0;
 }
 
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 2b45235a70e9..0e8f13184de0 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -1360,7 +1360,7 @@ static inline void hwpoison_subpage_deliver(struct hstate *h, struct page *head)
 	if (!PageHWPoison(head) || !free_vmemmap_pages_per_hpage(h))
 		return;
 
-	page = head + page_private(head + 4);
+	page = head + page_private(head + SUBPAGE_INDEX_HWPOISON);
 
 	/*
 	 * Move PageHWPoison flag from head page to the raw error page,
@@ -1379,7 +1379,7 @@ static inline void hwpoison_subpage_set(struct hstate *h, struct page *head,
 		return;
 
 	if (free_vmemmap_pages_per_hpage(h)) {
-		set_page_private(head + 4, page - head);
+		set_page_private(head + SUBPAGE_INDEX_HWPOISON, page - head);
 	} else if (page != head) {
 		/*
 		 * Move PageHWPoison flag from head page to the raw error page,
@@ -1456,20 +1456,20 @@ struct hstate *size_to_hstate(unsigned long size)
 bool page_huge_active(struct page *page)
 {
 	VM_BUG_ON_PAGE(!PageHuge(page), page);
-	return PageHead(page) && PagePrivate(&page[1]);
+	return PageHead(page) && PagePrivate(&page[SUBPAGE_INDEX_ACTIVE]);
 }
 
 /* never called for tail page */
 static void set_page_huge_active(struct page *page)
 {
 	VM_BUG_ON_PAGE(!PageHeadHuge(page), page);
-	SetPagePrivate(&page[1]);
+	SetPagePrivate(&page[SUBPAGE_INDEX_ACTIVE]);
 }
 
 static void clear_page_huge_active(struct page *page)
 {
 	VM_BUG_ON_PAGE(!PageHeadHuge(page), page);
-	ClearPagePrivate(&page[1]);
+	ClearPagePrivate(&page[SUBPAGE_INDEX_ACTIVE]);
 }
 
 /*
@@ -1481,17 +1481,17 @@ static inline bool PageHugeTemporary(struct page *page)
 	if (!PageHuge(page))
 		return false;
 
-	return (unsigned long)page[2].mapping == -1U;
+	return (unsigned long)page[SUBPAGE_INDEX_TEMPORARY].mapping == -1U;
 }
 
 static inline void SetPageHugeTemporary(struct page *page)
 {
-	page[2].mapping = (void *)-1U;
+	page[SUBPAGE_INDEX_TEMPORARY].mapping = (void *)-1U;
 }
 
 static inline void ClearPageHugeTemporary(struct page *page)
 {
-	page[2].mapping = NULL;
+	page[SUBPAGE_INDEX_TEMPORARY].mapping = NULL;
 }
 
 static void __free_huge_page(struct page *page)

diff --git a/mm/hugetlb_vmemmap.c b/mm/hugetlb_vmemmap.c
index d3b4c39f67c0..bbcefd5fb7d1 100644
--- a/mm/hugetlb_vmemmap.c
+++ b/mm/hugetlb_vmemmap.c
@@ -232,6 +232,14 @@ void __init hugetlb_vmemmap_init(struct hstate *h)
 	unsigned int nr_pages = pages_per_huge_page(h);
 	unsigned int vmemmap_pages;
 
+	/*
+	 * Only (RESERVE_VMEMMAP_SIZE / sizeof(struct page)) struct pages
+	 * can be used when CONFIG_HUGETLB_PAGE_FREE_VMEMMAP is set, so
+	 * add a BUILD_BUG_ON to catch invalid usage of the tail pages.
+	 */
+	BUILD_BUG_ON(NR_USED_SUBPAGE >=
+		     RESERVE_VMEMMAP_SIZE / sizeof(struct page));
+
 	if (!hugetlb_free_vmemmap_enabled)
 		return;
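The enum replaces scattered magic offsets (page[1], page[2], head + 4) with
named indexes, and the BUILD_BUG_ON pins them inside the struct pages that
survive the vmemmap remap. A compile-time sketch of the same guard in plain
C11 (toy types; the index value 4 and the budget of 128 slots are assumed
values, matching 4 KiB pages and a 64-byte struct page):

#include <assert.h>
#include <stdio.h>

struct toy_page { unsigned long private; };

enum {
	SUBPAGE_INDEX_ACTIVE = 1,	/* was the hard-coded page[1] */
	SUBPAGE_INDEX_TEMPORARY,	/* was page[2] */
	SUBPAGE_INDEX_HWPOISON = 4,	/* was head + 4, with cgroups on */
	NR_USED_SUBPAGE,
};

/* Stand-in for RESERVE_VMEMMAP_SIZE / sizeof(struct page): 8192 / 64. */
#define RESERVED_SLOTS 128
static_assert(NR_USED_SUBPAGE < RESERVED_SLOTS,
	      "metadata must stay within the struct pages that are kept");

int main(void)
{
	struct toy_page huge[512] = { { 0 } };

	huge[SUBPAGE_INDEX_HWPOISON].private = 37;	/* named, greppable */
	printf("error page index: %lu\n",
	       huge[SUBPAGE_INDEX_HWPOISON].private);
	return 0;
}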
From patchwork Sun Dec 13 15:45:34 2020
From: Muchun Song
Subject: [PATCH v9 11/11] mm/hugetlb: Optimize the code with the help of the compiler
Date: Sun, 13 Dec 2020 23:45:34 +0800
Message-Id: <20201213154534.54826-12-songmuchun@bytedance.com>
In-Reply-To: <20201213154534.54826-1-songmuchun@bytedance.com>
References: <20201213154534.54826-1-songmuchun@bytedance.com>

We cannot apply the vmemmap optimization when a "struct page" crosses page
boundaries, i.e. when sizeof(struct page) is not a power of 2. Making that
check visible as a compile-time constant lets the compiler help us: when
free_vmemmap_pages_per_hpage() is known to return zero, most of the related
functions are optimized away entirely.
Signed-off-by: Muchun Song
---
 include/linux/hugetlb.h | 3 ++-
 mm/hugetlb_vmemmap.c    | 7 +++++++
 mm/hugetlb_vmemmap.h    | 5 +++--
 3 files changed, 12 insertions(+), 3 deletions(-)

diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h
index 7295f6b3d55e..adc17765e0e9 100644
--- a/include/linux/hugetlb.h
+++ b/include/linux/hugetlb.h
@@ -791,7 +791,8 @@ extern bool hugetlb_free_vmemmap_enabled;
 
 static inline bool is_hugetlb_free_vmemmap_enabled(void)
 {
-	return hugetlb_free_vmemmap_enabled;
+	return hugetlb_free_vmemmap_enabled &&
+	       is_power_of_2(sizeof(struct page));
 }
 #else
 static inline bool is_hugetlb_free_vmemmap_enabled(void)

diff --git a/mm/hugetlb_vmemmap.c b/mm/hugetlb_vmemmap.c
index bbcefd5fb7d1..e83c48c63a7b 100644
--- a/mm/hugetlb_vmemmap.c
+++ b/mm/hugetlb_vmemmap.c
@@ -240,6 +240,13 @@ void __init hugetlb_vmemmap_init(struct hstate *h)
 	BUILD_BUG_ON(NR_USED_SUBPAGE >=
 		     RESERVE_VMEMMAP_SIZE / sizeof(struct page));
 
+	/*
+	 * If the size of "struct page" is not a power of 2, the compiler
+	 * can turn this whole function into a no-op.
+	 */
+	if (!is_power_of_2(sizeof(struct page)))
+		return;
+
 	if (!hugetlb_free_vmemmap_enabled)
 		return;
 
diff --git a/mm/hugetlb_vmemmap.h b/mm/hugetlb_vmemmap.h
index 8fd9ae113dbd..1a29a80f9fe1 100644
--- a/mm/hugetlb_vmemmap.h
+++ b/mm/hugetlb_vmemmap.h
@@ -17,11 +17,12 @@ void hugetlb_vmemmap_init(struct hstate *h);
 
 /*
 * How many vmemmap pages associated with a HugeTLB page that can be freed
- * to the buddy allocator.
+ * to the buddy allocator. The is_power_of_2() check lets the compiler fold
+ * the result to a compile-time constant zero when it fails.
 */
 static inline unsigned int free_vmemmap_pages_per_hpage(struct hstate *h)
 {
-	return h->nr_free_vmemmap_pages;
+	return is_power_of_2(sizeof(struct page)) ?
+	       h->nr_free_vmemmap_pages : 0;
 }
 #else
 static inline void alloc_huge_page_vmemmap(struct hstate *h, struct page *head)
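The pattern this last patch leans on is that sizeof() is a compile-time
constant: a power-of-2 test on it folds to 0 or 1, so branches guarded by it
become dead code. A standalone illustration (is_power_of_2() is restated
locally for the sketch, not taken from the kernel; the ternary mirrors the
corrected free_vmemmap_pages_per_hpage() above):

#include <stdio.h>

struct toy_page { unsigned long flags, private; void *mapping; };

static inline int is_power_of_2(unsigned long n)
{
	return n != 0 && (n & (n - 1)) == 0;
}

static inline unsigned int free_vmemmap_pages(unsigned int nr)
{
	/* Folds to "return nr" or "return 0" at compile time. */
	return is_power_of_2(sizeof(struct toy_page)) ? nr : 0;
}

int main(void)
{
	/* sizeof(struct toy_page) is 24 here, not a power of 2, so this
	 * prints 0 and the "nr" path is dead code the compiler removes. */
	printf("%u\n", free_vmemmap_pages(6));
	return 0;
}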