From patchwork Sun Jan 17 15:10:42 2021
X-Patchwork-Submitter: Muchun Song
X-Patchwork-Id: 12025469
From: Muchun Song <songmuchun@bytedance.com>
To: corbet@lwn.net, mike.kravetz@oracle.com, tglx@linutronix.de, mingo@redhat.com, bp@alien8.de, x86@kernel.org,
    hpa@zytor.com, dave.hansen@linux.intel.com, luto@kernel.org, peterz@infradead.org, viro@zeniv.linux.org.uk, akpm@linux-foundation.org, paulmck@kernel.org, mchehab+huawei@kernel.org, pawan.kumar.gupta@linux.intel.com, rdunlap@infradead.org, oneukum@suse.com, anshuman.khandual@arm.com, jroedel@suse.de, almasrymina@google.com, rientjes@google.com, willy@infradead.org, osalvador@suse.de, mhocko@suse.com, song.bao.hua@hisilicon.com, david@redhat.com, naoya.horiguchi@nec.com
Cc: duanxiongchun@bytedance.com, linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org, linux-mm@kvack.org, linux-fsdevel@vger.kernel.org, Muchun Song <songmuchun@bytedance.com>
Subject: [PATCH v13 01/12] mm: memory_hotplug: factor out bootmem core functions to bootmem_info.c
Date: Sun, 17 Jan 2021 23:10:42 +0800
Message-Id: <20210117151053.24600-2-songmuchun@bytedance.com>
In-Reply-To: <20210117151053.24600-1-songmuchun@bytedance.com>

Move the common bootmem info registration API into its own file, bootmem_info.c. A later patch will use {get,put}_page_bootmem() to initialize the page structs of vmemmap pages and to free vmemmap pages back to the buddy allocator, so also move these helpers out from under CONFIG_MEMORY_HOTPLUG_SPARSE. This is just code movement without any functional change.

Signed-off-by: Muchun Song <songmuchun@bytedance.com>
Acked-by: Mike Kravetz
Reviewed-by: Oscar Salvador
Reviewed-by: David Hildenbrand
Reviewed-by: Miaohe Lin
---
 arch/x86/mm/init_64.c | 3 +-
 include/linux/bootmem_info.h | 40 +++++++++++++
 include/linux/memory_hotplug.h | 27 ---------
 mm/Makefile | 1 +
 mm/bootmem_info.c | 124 +++++++++++++++++++++++++++++++++++++++++
 mm/memory_hotplug.c | 116 --------------------------------------
 mm/sparse.c | 1 +
 7 files changed, 168 insertions(+), 144 deletions(-)
 create mode 100644 include/linux/bootmem_info.h
 create mode 100644 mm/bootmem_info.c

diff --git a/arch/x86/mm/init_64.c b/arch/x86/mm/init_64.c
index b5a3fa4033d3..0a45f062826e 100644
--- a/arch/x86/mm/init_64.c
+++ b/arch/x86/mm/init_64.c
@@ -33,6 +33,7 @@
 #include
 #include
 #include
+#include <linux/bootmem_info.h>
 #include
 #include
@@ -1571,7 +1572,7 @@ int __meminit vmemmap_populate(unsigned long start, unsigned long end, int node,
 	return err;
 }

-#if defined(CONFIG_MEMORY_HOTPLUG_SPARSE) && defined(CONFIG_HAVE_BOOTMEM_INFO_NODE)
+#ifdef CONFIG_HAVE_BOOTMEM_INFO_NODE
 void register_page_bootmem_memmap(unsigned long section_nr,
 				  struct page *start_page, unsigned long nr_pages)
 {

diff --git a/include/linux/bootmem_info.h b/include/linux/bootmem_info.h
new file mode 100644
index 000000000000..4ed6dee1adc9
--- /dev/null
+++ b/include/linux/bootmem_info.h
@@ -0,0 +1,40 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+#ifndef __LINUX_BOOTMEM_INFO_H
+#define __LINUX_BOOTMEM_INFO_H
+
+#include <linux/mmzone.h>
+
+/*
+ * Types for free bootmem stored in page->lru.next. These have to be in
+ * some random range in unsigned long space for debugging purposes.
+ */
+enum {
+	MEMORY_HOTPLUG_MIN_BOOTMEM_TYPE = 12,
+	SECTION_INFO = MEMORY_HOTPLUG_MIN_BOOTMEM_TYPE,
+	MIX_SECTION_INFO,
+	NODE_INFO,
+	MEMORY_HOTPLUG_MAX_BOOTMEM_TYPE = NODE_INFO,
+};
+
+#ifdef CONFIG_HAVE_BOOTMEM_INFO_NODE
+void __init register_page_bootmem_info_node(struct pglist_data *pgdat);
+
+void get_page_bootmem(unsigned long info, struct page *page,
+		      unsigned long type);
+void put_page_bootmem(struct page *page);
+#else
+static inline void register_page_bootmem_info_node(struct pglist_data *pgdat)
+{
+}
+
+static inline void put_page_bootmem(struct page *page)
+{
+}
+
+static inline void get_page_bootmem(unsigned long info, struct page *page,
+				    unsigned long type)
+{
+}
+#endif
+
+#endif /* __LINUX_BOOTMEM_INFO_H */

diff --git a/include/linux/memory_hotplug.h b/include/linux/memory_hotplug.h
index 15acce5ab106..84590964ad35 100644
--- a/include/linux/memory_hotplug.h
+++ b/include/linux/memory_hotplug.h
@@ -33,18 +33,6 @@ struct vmem_altmap;
 	___page; \
 })

-/*
- * Types for free bootmem stored in page->lru.next. These have to be in
- * some random range in unsigned long space for debugging purposes.
- */
-enum {
-	MEMORY_HOTPLUG_MIN_BOOTMEM_TYPE = 12,
-	SECTION_INFO = MEMORY_HOTPLUG_MIN_BOOTMEM_TYPE,
-	MIX_SECTION_INFO,
-	NODE_INFO,
-	MEMORY_HOTPLUG_MAX_BOOTMEM_TYPE = NODE_INFO,
-};
-
 /* Types for control the zone type of onlined and offlined memory */
 enum {
 	/* Offline the memory. */
@@ -222,17 +210,6 @@ static inline void arch_refresh_nodedata(int nid, pg_data_t *pgdat)
 #endif /* CONFIG_NUMA */
 #endif /* CONFIG_HAVE_ARCH_NODEDATA_EXTENSION */

-#ifdef CONFIG_HAVE_BOOTMEM_INFO_NODE
-extern void __init register_page_bootmem_info_node(struct pglist_data *pgdat);
-#else
-static inline void register_page_bootmem_info_node(struct pglist_data *pgdat)
-{
-}
-#endif
-extern void put_page_bootmem(struct page *page);
-extern void get_page_bootmem(unsigned long ingo, struct page *page,
-			     unsigned long type);
-
 void get_online_mems(void);
 void put_online_mems(void);
@@ -260,10 +237,6 @@ static inline void zone_span_writelock(struct zone *zone) {}
 static inline void zone_span_writeunlock(struct zone *zone) {}
 static inline void zone_seqlock_init(struct zone *zone) {}

-static inline void register_page_bootmem_info_node(struct pglist_data *pgdat)
-{
-}
-
 static inline int try_online_node(int nid)
 {
 	return 0;

diff --git a/mm/Makefile b/mm/Makefile
index a1af02ba8f3f..ed4b88fa0f5e 100644
--- a/mm/Makefile
+++ b/mm/Makefile
@@ -83,6 +83,7 @@ obj-$(CONFIG_SLUB) += slub.o
 obj-$(CONFIG_KASAN) += kasan/
 obj-$(CONFIG_KFENCE) += kfence/
 obj-$(CONFIG_FAILSLAB) += failslab.o
+obj-$(CONFIG_HAVE_BOOTMEM_INFO_NODE) += bootmem_info.o
 obj-$(CONFIG_MEMORY_HOTPLUG) += memory_hotplug.o
 obj-$(CONFIG_MEMTEST) += memtest.o
 obj-$(CONFIG_MIGRATION) += migrate.o

diff --git a/mm/bootmem_info.c b/mm/bootmem_info.c
new file mode 100644
index 000000000000..fcab5a3f8cc0
--- /dev/null
+++ b/mm/bootmem_info.c
@@ -0,0 +1,124 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * linux/mm/bootmem_info.c
+ *
+ * Copyright (C)
+ */
+#include <linux/mm.h>
+#include <linux/compiler.h>
+#include <linux/memblock.h>
+#include <linux/bootmem_info.h>
+#include <linux/memory_hotplug.h>
+
+void get_page_bootmem(unsigned long info, struct page *page, unsigned long type)
+{
+	page->freelist = (void *)type;
+	SetPagePrivate(page);
+	set_page_private(page, info);
+	page_ref_inc(page);
+}
+
+void put_page_bootmem(struct page *page)
+{
+	unsigned long type;
+
+	type = (unsigned long) page->freelist;
+	BUG_ON(type < MEMORY_HOTPLUG_MIN_BOOTMEM_TYPE ||
+	       type > MEMORY_HOTPLUG_MAX_BOOTMEM_TYPE);
+
+	if (page_ref_dec_return(page) == 1) {
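+		/*
+		 * Only the initial reference from the memblock reservation
+		 * remains; untag the page and hand the page frame back to
+		 * the buddy allocator.
+		 */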
+		page->freelist = NULL;
+		ClearPagePrivate(page);
+		set_page_private(page, 0);
+		INIT_LIST_HEAD(&page->lru);
+		free_reserved_page(page);
+	}
+}
+
+#ifndef CONFIG_SPARSEMEM_VMEMMAP
+static void register_page_bootmem_info_section(unsigned long start_pfn)
+{
+	unsigned long mapsize, section_nr, i;
+	struct mem_section *ms;
+	struct page *page, *memmap;
+	struct mem_section_usage *usage;
+
+	section_nr = pfn_to_section_nr(start_pfn);
+	ms = __nr_to_section(section_nr);
+
+	/* Get section's memmap address */
+	memmap = sparse_decode_mem_map(ms->section_mem_map, section_nr);
+
+	/*
+	 * Get page for the memmap's phys address
+	 * XXX: need more consideration for sparse_vmemmap...
+	 */
+	page = virt_to_page(memmap);
+	mapsize = sizeof(struct page) * PAGES_PER_SECTION;
+	mapsize = PAGE_ALIGN(mapsize) >> PAGE_SHIFT;
+
+	/* remember memmap's page */
+	for (i = 0; i < mapsize; i++, page++)
+		get_page_bootmem(section_nr, page, SECTION_INFO);
+
+	usage = ms->usage;
+	page = virt_to_page(usage);
+
+	mapsize = PAGE_ALIGN(mem_section_usage_size()) >> PAGE_SHIFT;
+
+	for (i = 0; i < mapsize; i++, page++)
+		get_page_bootmem(section_nr, page, MIX_SECTION_INFO);
+
+}
+#else /* CONFIG_SPARSEMEM_VMEMMAP */
+static void register_page_bootmem_info_section(unsigned long start_pfn)
+{
+	unsigned long mapsize, section_nr, i;
+	struct mem_section *ms;
+	struct page *page, *memmap;
+	struct mem_section_usage *usage;
+
+	section_nr = pfn_to_section_nr(start_pfn);
+	ms = __nr_to_section(section_nr);
+
+	memmap = sparse_decode_mem_map(ms->section_mem_map, section_nr);
+
+	register_page_bootmem_memmap(section_nr, memmap, PAGES_PER_SECTION);
+
+	usage = ms->usage;
+	page = virt_to_page(usage);
+
+	mapsize = PAGE_ALIGN(mem_section_usage_size()) >> PAGE_SHIFT;
+
+	for (i = 0; i < mapsize; i++, page++)
+		get_page_bootmem(section_nr, page, MIX_SECTION_INFO);
+}
+#endif /* !CONFIG_SPARSEMEM_VMEMMAP */
+
+void __init register_page_bootmem_info_node(struct pglist_data *pgdat)
+{
+	unsigned long i, pfn, end_pfn, nr_pages;
+	int node = pgdat->node_id;
+	struct page *page;
+
+	nr_pages = PAGE_ALIGN(sizeof(struct pglist_data)) >> PAGE_SHIFT;
+	page = virt_to_page(pgdat);
+
+	for (i = 0; i < nr_pages; i++, page++)
+		get_page_bootmem(node, page, NODE_INFO);
+
+	pfn = pgdat->node_start_pfn;
+	end_pfn = pgdat_end_pfn(pgdat);
+
+	/* register section info */
+	for (; pfn < end_pfn; pfn += PAGES_PER_SECTION) {
+		/*
+		 * Some platforms can assign the same pfn to multiple nodes - on
+		 * node0 as well as nodeN. To avoid registering a pfn against
+		 * multiple nodes we check that this pfn does not already
+		 * reside in some other nodes.
+		 */
+		if (pfn_valid(pfn) && (early_pfn_to_nid(pfn) == node))
+			register_page_bootmem_info_section(pfn);
+	}
+}

diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
index a8cef4955907..4c4ca99745b7 100644
--- a/mm/memory_hotplug.c
+++ b/mm/memory_hotplug.c
@@ -141,122 +141,6 @@ static void release_memory_resource(struct resource *res)
 }

 #ifdef CONFIG_MEMORY_HOTPLUG_SPARSE
-void get_page_bootmem(unsigned long info, struct page *page,
-		      unsigned long type)
-{
-	page->freelist = (void *)type;
-	SetPagePrivate(page);
-	set_page_private(page, info);
-	page_ref_inc(page);
-}
-
-void put_page_bootmem(struct page *page)
-{
-	unsigned long type;
-
-	type = (unsigned long) page->freelist;
-	BUG_ON(type < MEMORY_HOTPLUG_MIN_BOOTMEM_TYPE ||
-	       type > MEMORY_HOTPLUG_MAX_BOOTMEM_TYPE);
-
-	if (page_ref_dec_return(page) == 1) {
-		page->freelist = NULL;
-		ClearPagePrivate(page);
-		set_page_private(page, 0);
-		INIT_LIST_HEAD(&page->lru);
-		free_reserved_page(page);
-	}
-}
-
-#ifdef CONFIG_HAVE_BOOTMEM_INFO_NODE
-#ifndef CONFIG_SPARSEMEM_VMEMMAP
-static void register_page_bootmem_info_section(unsigned long start_pfn)
-{
-	unsigned long mapsize, section_nr, i;
-	struct mem_section *ms;
-	struct page *page, *memmap;
-	struct mem_section_usage *usage;
-
-	section_nr = pfn_to_section_nr(start_pfn);
-	ms = __nr_to_section(section_nr);
-
-	/* Get section's memmap address */
-	memmap = sparse_decode_mem_map(ms->section_mem_map, section_nr);
-
-	/*
-	 * Get page for the memmap's phys address
-	 * XXX: need more consideration for sparse_vmemmap...
-	 */
-	page = virt_to_page(memmap);
-	mapsize = sizeof(struct page) * PAGES_PER_SECTION;
-	mapsize = PAGE_ALIGN(mapsize) >> PAGE_SHIFT;
-
-	/* remember memmap's page */
-	for (i = 0; i < mapsize; i++, page++)
-		get_page_bootmem(section_nr, page, SECTION_INFO);
-
-	usage = ms->usage;
-	page = virt_to_page(usage);
-
-	mapsize = PAGE_ALIGN(mem_section_usage_size()) >> PAGE_SHIFT;
-
-	for (i = 0; i < mapsize; i++, page++)
-		get_page_bootmem(section_nr, page, MIX_SECTION_INFO);
-
-}
-#else /* CONFIG_SPARSEMEM_VMEMMAP */
-static void register_page_bootmem_info_section(unsigned long start_pfn)
-{
-	unsigned long mapsize, section_nr, i;
-	struct mem_section *ms;
-	struct page *page, *memmap;
-	struct mem_section_usage *usage;
-
-	section_nr = pfn_to_section_nr(start_pfn);
-	ms = __nr_to_section(section_nr);
-
-	memmap = sparse_decode_mem_map(ms->section_mem_map, section_nr);
-
-	register_page_bootmem_memmap(section_nr, memmap, PAGES_PER_SECTION);
-
-	usage = ms->usage;
-	page = virt_to_page(usage);
-
-	mapsize = PAGE_ALIGN(mem_section_usage_size()) >> PAGE_SHIFT;
-
-	for (i = 0; i < mapsize; i++, page++)
-		get_page_bootmem(section_nr, page, MIX_SECTION_INFO);
-}
-#endif /* !CONFIG_SPARSEMEM_VMEMMAP */
-
-void __init register_page_bootmem_info_node(struct pglist_data *pgdat)
-{
-	unsigned long i, pfn, end_pfn, nr_pages;
-	int node = pgdat->node_id;
-	struct page *page;
-
-	nr_pages = PAGE_ALIGN(sizeof(struct pglist_data)) >> PAGE_SHIFT;
-	page = virt_to_page(pgdat);
-
-	for (i = 0; i < nr_pages; i++, page++)
-		get_page_bootmem(node, page, NODE_INFO);
-
-	pfn = pgdat->node_start_pfn;
-	end_pfn = pgdat_end_pfn(pgdat);
-
-	/* register section info */
-	for (; pfn < end_pfn; pfn += PAGES_PER_SECTION) {
-		/*
-		 * Some platforms can assign the same pfn to multiple nodes - on
-		 * node0 as well as nodeN. To avoid registering a pfn against
-		 * multiple nodes we check that this pfn does not already
-		 * reside in some other nodes.
-		 */
-		if (pfn_valid(pfn) && (early_pfn_to_nid(pfn) == node))
-			register_page_bootmem_info_section(pfn);
-	}
-}
-#endif /* CONFIG_HAVE_BOOTMEM_INFO_NODE */
-
 static int check_pfn_span(unsigned long pfn, unsigned long nr_pages,
 			  const char *reason)
 {

diff --git a/mm/sparse.c b/mm/sparse.c
index 7bd23f9d6cef..87676bf3af40 100644
--- a/mm/sparse.c
+++ b/mm/sparse.c
@@ -13,6 +13,7 @@
 #include
 #include
 #include
+#include <linux/bootmem_info.h>
 #include "internal.h"
 #include
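As a usage note on the API factored out above: the bootmem-info helpers pair a tagged reference with each bootmem page, and the final put returns the page frame to the buddy allocator. A minimal sketch of a caller (illustrative only, not part of the series; the page and section number come from the caller's context, as in register_page_bootmem_info_section()):

	#include <linux/bootmem_info.h>

	static void bootmem_info_sketch(struct page *page, unsigned long section_nr)
	{
		/* Tag @page as carrying section info and take a reference. */
		get_page_bootmem(section_nr, page, SECTION_INFO);

		/*
		 * On teardown, dropping the last bootmem reference frees the
		 * page frame back to the buddy allocator.
		 */
		put_page_bootmem(page);
	}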
From patchwork Sun Jan 17 15:10:43 2021
X-Patchwork-Submitter: Muchun Song
X-Patchwork-Id: 12025471
From: Muchun Song <songmuchun@bytedance.com>
To: corbet@lwn.net, mike.kravetz@oracle.com, tglx@linutronix.de, mingo@redhat.com, bp@alien8.de, x86@kernel.org, hpa@zytor.com, dave.hansen@linux.intel.com, luto@kernel.org, peterz@infradead.org, viro@zeniv.linux.org.uk, akpm@linux-foundation.org, paulmck@kernel.org, mchehab+huawei@kernel.org, pawan.kumar.gupta@linux.intel.com, rdunlap@infradead.org, oneukum@suse.com, anshuman.khandual@arm.com, jroedel@suse.de, almasrymina@google.com, rientjes@google.com, willy@infradead.org, osalvador@suse.de, mhocko@suse.com, song.bao.hua@hisilicon.com, david@redhat.com, naoya.horiguchi@nec.com
Cc: duanxiongchun@bytedance.com, linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org, linux-mm@kvack.org, linux-fsdevel@vger.kernel.org, Muchun Song <songmuchun@bytedance.com>
Subject: [PATCH v13 02/12] mm: hugetlb: introduce a new config HUGETLB_PAGE_FREE_VMEMMAP
Date: Sun, 17 Jan 2021 23:10:43 +0800
Message-Id: <20210117151053.24600-3-songmuchun@bytedance.com>
In-Reply-To: <20210117151053.24600-1-songmuchun@bytedance.com>

The HUGETLB_PAGE_FREE_VMEMMAP option is used to enable the freeing of unnecessary vmemmap associated with HugeTLB pages. The config option is introduced early so that supporting code can be written to depend on the option. The initial version of the code only provides support for x86-64.

Like other code which frees vmemmap, this config option depends on HAVE_BOOTMEM_INFO_NODE. The routine register_page_bootmem_info() is used to register bootmem info. Therefore, make sure the register_page_bootmem_info() path is enabled when HUGETLB_PAGE_FREE_VMEMMAP is defined.

Signed-off-by: Muchun Song <songmuchun@bytedance.com>
Reviewed-by: Oscar Salvador
Acked-by: Mike Kravetz
Reviewed-by: Miaohe Lin
---
 arch/x86/mm/init_64.c | 2 +-
 fs/Kconfig | 18 ++++++++++++++++++
 2 files changed, 19 insertions(+), 1 deletion(-)

diff --git a/arch/x86/mm/init_64.c b/arch/x86/mm/init_64.c
index 0a45f062826e..0435bee2e172 100644
--- a/arch/x86/mm/init_64.c
+++ b/arch/x86/mm/init_64.c
@@ -1225,7 +1225,7 @@ static struct kcore_list kcore_vsyscall;

 static void __init register_page_bootmem_info(void)
 {
-#ifdef CONFIG_NUMA
+#if defined(CONFIG_NUMA) || defined(CONFIG_HUGETLB_PAGE_FREE_VMEMMAP)
 	int i;

 	for_each_online_node(i)

diff --git a/fs/Kconfig b/fs/Kconfig
index 976e8b9033c4..e7c4c2a79311 100644
--- a/fs/Kconfig
+++ b/fs/Kconfig
@@ -245,6 +245,24 @@ config HUGETLBFS

 config HUGETLB_PAGE
 	def_bool HUGETLBFS

+config HUGETLB_PAGE_FREE_VMEMMAP
+	def_bool HUGETLB_PAGE
+	depends on X86_64
+	depends on SPARSEMEM_VMEMMAP
+	depends on HAVE_BOOTMEM_INFO_NODE
+	help
+	  The option HUGETLB_PAGE_FREE_VMEMMAP allows for the freeing of
+	  some vmemmap pages associated with pre-allocated HugeTLB pages.
+	  For example, on X86_64 6 vmemmap pages of size 4KB each can be
+	  saved for each 2MB HugeTLB page. 4094 vmemmap pages of size 4KB
+	  each can be saved for each 1GB HugeTLB page.
+
+	  When a HugeTLB page is allocated or freed, the vmemmap array
+	  representing the range associated with the page will need to be
+	  remapped. When a page is allocated, vmemmap pages are freed
+	  after remapping.
+	  When a page is freed, previously discarded
+	  vmemmap pages must be allocated before remapping.
+
 config MEMFD_CREATE
 	def_bool TMPFS || HUGETLBFS
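As a quick sanity check of the numbers in the help text above (6 pages saved per 2MB page, 4094 per 1GB page): on x86-64 a struct page is 64 bytes and the base page is 4KB. A small userspace sketch, assuming only those two sizes, reproduces the figures:

	#include <stdio.h>

	int main(void)
	{
		const long page_size = 4096;	/* x86-64 base page size */
		const long struct_page = 64;	/* sizeof(struct page) */
		const long reserved = 2;	/* vmemmap pages kept per huge page */
		const long hpage[] = { 2L << 20, 1L << 30 };	/* 2MB, 1GB */

		for (int i = 0; i < 2; i++) {
			long vmemmap = hpage[i] / page_size * struct_page / page_size;

			printf("%10ld-byte huge page: %4ld vmemmap pages, %4ld freeable\n",
			       hpage[i], vmemmap, vmemmap - reserved);
		}
		return 0;	/* prints 8/6 for 2MB and 4096/4094 for 1GB */
	}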
From patchwork Sun Jan 17 15:10:44 2021
X-Patchwork-Submitter: Muchun Song
X-Patchwork-Id: 12025473
From: Muchun Song <songmuchun@bytedance.com>
To: corbet@lwn.net, mike.kravetz@oracle.com, tglx@linutronix.de, mingo@redhat.com, bp@alien8.de, x86@kernel.org, hpa@zytor.com, dave.hansen@linux.intel.com, luto@kernel.org, peterz@infradead.org, viro@zeniv.linux.org.uk, akpm@linux-foundation.org, paulmck@kernel.org, mchehab+huawei@kernel.org, pawan.kumar.gupta@linux.intel.com, rdunlap@infradead.org, oneukum@suse.com, anshuman.khandual@arm.com, jroedel@suse.de, almasrymina@google.com, rientjes@google.com, willy@infradead.org, osalvador@suse.de, mhocko@suse.com, song.bao.hua@hisilicon.com, david@redhat.com, naoya.horiguchi@nec.com
Cc: duanxiongchun@bytedance.com, linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org, linux-mm@kvack.org, linux-fsdevel@vger.kernel.org, Muchun Song <songmuchun@bytedance.com>
Subject: [PATCH v13 03/12] mm: hugetlb: free the vmemmap pages associated with each HugeTLB page
Date: Sun, 17 Jan 2021 23:10:44 +0800
Message-Id: <20210117151053.24600-4-songmuchun@bytedance.com>
In-Reply-To: <20210117151053.24600-1-songmuchun@bytedance.com>

Every HugeTLB page is described by more than one struct page. We know that only the first 4 (HUGETLB_CGROUP_MIN_ORDER) struct pages are used to store metadata associated with each HugeTLB page. For all tail pages the value of compound_head is the same, so we can reuse the first tail page: we map the virtual addresses of the remaining tail page structs to the first tail page struct and then free those page frames. Therefore, we only need to reserve two pages as vmemmap areas.

When we allocate a HugeTLB page from the buddy allocator, we can free some of the vmemmap pages associated with it, and prep_new_huge_page() is the most appropriate place to do so. free_vmemmap_pages_per_hpage(), which indicates how many vmemmap pages associated with a HugeTLB page can be freed, returns zero for now, which means the feature is disabled. We will enable it once all the infrastructure is there.

Signed-off-by: Muchun Song <songmuchun@bytedance.com>
---
 include/linux/bootmem_info.h | 27 +++++-
 include/linux/mm.h | 3 +
 mm/Makefile | 1 +
 mm/hugetlb.c | 3 +
 mm/hugetlb_vmemmap.c | 211 +++++++++++++++++++++++++++++++++++++++++++
 mm/hugetlb_vmemmap.h | 20 ++++
 mm/sparse-vmemmap.c | 198 ++++++++++++++++++++++++++++++++++++++++
 7 files changed, 462 insertions(+), 1 deletion(-)
 create mode 100644 mm/hugetlb_vmemmap.c
 create mode 100644 mm/hugetlb_vmemmap.h

diff --git a/include/linux/bootmem_info.h b/include/linux/bootmem_info.h
index 4ed6dee1adc9..ec03a624dfa2 100644
--- a/include/linux/bootmem_info.h
+++ b/include/linux/bootmem_info.h
@@ -2,7 +2,7 @@
 #ifndef __LINUX_BOOTMEM_INFO_H
 #define __LINUX_BOOTMEM_INFO_H

-#include <linux/mmzone.h>
+#include <linux/mm.h>

 /*
  * Types for free bootmem stored in page->lru.next. These have to be in
@@ -22,6 +22,27 @@ void __init register_page_bootmem_info_node(struct pglist_data *pgdat);
 void get_page_bootmem(unsigned long info, struct page *page,
 		      unsigned long type);
 void put_page_bootmem(struct page *page);
+
+/*
+ * Any memory allocated via the memblock allocator and not via the
+ * buddy will be marked reserved already in the memmap. For those
+ * pages, we can call this function to free them to the buddy allocator.
+ */
+static inline void free_bootmem_page(struct page *page)
+{
+	unsigned long magic = (unsigned long)page->freelist;
+
+	/*
+	 * reserve_bootmem_region() sets the reserved flag on bootmem
+	 * pages.
+	 */
+	VM_BUG_ON_PAGE(page_ref_count(page) != 2, page);
+
+	if (magic == SECTION_INFO || magic == MIX_SECTION_INFO)
+		put_page_bootmem(page);
+	else
+		VM_BUG_ON_PAGE(1, page);
+}
 #else
 static inline void register_page_bootmem_info_node(struct pglist_data *pgdat)
 {
@@ -35,6 +56,10 @@ static inline void get_page_bootmem(unsigned long info, struct page *page,
 				    unsigned long type)
 {
 }
+
+static inline void free_bootmem_page(struct page *page)
+{
+}
 #endif

 #endif /* __LINUX_BOOTMEM_INFO_H */

diff --git a/include/linux/mm.h b/include/linux/mm.h
index eabe7d9f80d8..f928994ed273 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -3005,6 +3005,9 @@ static inline void print_vma_addr(char *prefix, unsigned long rip)
 }
 #endif

+void vmemmap_remap_free(unsigned long start, unsigned long end,
+			unsigned long reuse);
+
 void *sparse_buffer_alloc(unsigned long size);
 struct page * __populate_section_memmap(unsigned long pfn,
 		unsigned long nr_pages, int nid, struct vmem_altmap *altmap);

diff --git a/mm/Makefile b/mm/Makefile
index ed4b88fa0f5e..056801d8daae 100644
--- a/mm/Makefile
+++ b/mm/Makefile
@@ -71,6 +71,7 @@ obj-$(CONFIG_FRONTSWAP) += frontswap.o
 obj-$(CONFIG_ZSWAP) += zswap.o
 obj-$(CONFIG_HAS_DMA) += dmapool.o
 obj-$(CONFIG_HUGETLBFS) += hugetlb.o
+obj-$(CONFIG_HUGETLB_PAGE_FREE_VMEMMAP) += hugetlb_vmemmap.o
 obj-$(CONFIG_NUMA) += mempolicy.o
 obj-$(CONFIG_SPARSEMEM) += sparse.o
 obj-$(CONFIG_SPARSEMEM_VMEMMAP) += sparse-vmemmap.o

diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 1f3bf1710b66..140135fc8113 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -42,6 +42,7 @@
 #include
 #include
 #include
 #include "internal.h"
+#include "hugetlb_vmemmap.h"

 int hugetlb_max_hstate __read_mostly;
 unsigned int default_hstate_idx;
@@ -1497,6 +1498,8 @@ void free_huge_page(struct page *page)

 static void prep_new_huge_page(struct hstate *h, struct page *page, int nid)
 {
+	free_huge_page_vmemmap(h, page);
+
 	INIT_LIST_HEAD(&page->lru);
 	set_compound_page_dtor(page, HUGETLB_PAGE_DTOR);
 	set_hugetlb_cgroup(page, NULL);

diff --git a/mm/hugetlb_vmemmap.c b/mm/hugetlb_vmemmap.c
new file mode 100644
index 000000000000..4ffa2a4ae2a8
--- /dev/null
+++ b/mm/hugetlb_vmemmap.c
@@ -0,0 +1,211 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Free some vmemmap pages of HugeTLB
+ *
+ * Copyright (c) 2020, Bytedance. All rights reserved.
+ *
+ *     Author: Muchun Song <songmuchun@bytedance.com>
+ *
+ * The struct page structures (page structs) are used to describe a physical
+ * page frame. By default, there is a one-to-one mapping from a page frame to
+ * its corresponding page struct.
+ *
+ * HugeTLB pages consist of multiple base page size pages and are supported
+ * by many architectures. See hugetlbpage.rst in the Documentation directory
+ * for more details. On the x86-64 architecture, HugeTLB pages of size 2MB and
+ * 1GB are currently supported. Since the base page size on x86 is 4KB, a 2MB
+ * HugeTLB page consists of 512 base pages and a 1GB HugeTLB page consists of
+ * 262144 base pages. For each base page, there is a corresponding page struct.
+ *
+ * Within the HugeTLB subsystem, only the first 4 page structs are used to
+ * contain unique information about a HugeTLB page. HUGETLB_CGROUP_MIN_ORDER
+ * provides this upper limit.
+ * The only 'useful' information in the remaining
+ * page structs is the compound_head field, and this field is the same for all
+ * tail pages.
+ *
+ * By removing redundant page structs for HugeTLB pages, memory can be returned
+ * to the buddy allocator for other uses.
+ *
+ * Different architectures support different HugeTLB pages. For example, the
+ * following table lists the HugeTLB page sizes supported by the x86 and arm64
+ * architectures. Because arm64 supports 4k, 16k, and 64k base pages as well
+ * as contiguous entries, it supports many HugeTLB page sizes.
+ *
+ * +--------------+-----------+-----------------------------------------------+
+ * | Architecture | Page Size |                HugeTLB Page Size              |
+ * +--------------+-----------+-----------+-----------+-----------+-----------+
+ * |    x86-64    |    4KB    |    2MB    |    1GB    |           |           |
+ * +--------------+-----------+-----------+-----------+-----------+-----------+
+ * |              |    4KB    |   64KB    |    2MB    |   32MB    |    1GB    |
+ * |              +-----------+-----------+-----------+-----------+-----------+
+ * |    arm64     |   16KB    |    2MB    |   32MB    |    1GB    |           |
+ * |              +-----------+-----------+-----------+-----------+-----------+
+ * |              |   64KB    |    2MB    |   512MB   |   16GB    |           |
+ * +--------------+-----------+-----------+-----------+-----------+-----------+
+ *
+ * When the system boots up, every HugeTLB page has more than one struct page
+ * struct, together occupying (unit: pages):
+ *
+ *    struct_size = HugeTLB_Size / PAGE_SIZE * sizeof(struct page) / PAGE_SIZE
+ *
+ * where HugeTLB_Size is the size of the HugeTLB page. We know that the size
+ * of the HugeTLB page is always n times PAGE_SIZE, so we can get the following
+ * relationship.
+ *
+ *    HugeTLB_Size = n * PAGE_SIZE
+ *
+ * Then,
+ *
+ *    struct_size = n * PAGE_SIZE / PAGE_SIZE * sizeof(struct page) / PAGE_SIZE
+ *                = n * sizeof(struct page) / PAGE_SIZE
+ *
+ * We can use huge mappings at the pud/pmd level for the HugeTLB page.
+ *
+ * For a HugeTLB page mapped at the pmd level:
+ *
+ *    struct_size = n * sizeof(struct page) / PAGE_SIZE
+ *                = PAGE_SIZE / sizeof(pte_t) * sizeof(struct page) / PAGE_SIZE
+ *                = sizeof(struct page) / sizeof(pte_t)
+ *                = 64 / 8
+ *                = 8 (pages)
+ *
+ * where n is the number of pte entries one page can contain, i.e.
+ * n = PAGE_SIZE / sizeof(pte_t).
+ *
+ * This optimization only supports 64-bit systems, so sizeof(pte_t) is 8. It is
+ * also applicable only when the size of struct page is a power of two. In most
+ * cases, the size of struct page is 64 (e.g. x86-64 and arm64). So if we use a
+ * pmd level mapping for a HugeTLB page, its struct page structs occupy 8
+ * pages, where the page size is the base page size.
+ *
+ * For a HugeTLB page mapped at the pud level:
+ *
+ *    struct_size = PAGE_SIZE / sizeof(pmd_t) * struct_size(pmd)
+ *                = PAGE_SIZE / 8 * 8 (pages)
+ *                = PAGE_SIZE (pages)
+ *
+ * where struct_size(pmd) is the size of the struct page structs of a
+ * HugeTLB page mapped at the pmd level.
+ *
+ * Next, we take the pmd level mapping of a HugeTLB page as an example to
+ * show the internal implementation of this optimization. There are 8 pages of
+ * struct page structs associated with a HugeTLB page which is pmd mapped.
+ *
+ * Here is how things look before optimization.
+ *
+ *    HugeTLB                  struct pages(8 pages)         page frame(8 pages)
+ * +-----------+ ---virt_to_page---> +-----------+   mapping to   +-----------+
+ * |           |                     |     0     | -------------> |     0     |
+ * |           |                     +-----------+                +-----------+
+ * |           |                     |     1     | -------------> |     1     |
+ * |           |                     +-----------+                +-----------+
+ * |           |                     |     2     | -------------> |     2     |
+ * |           |                     +-----------+                +-----------+
+ * |           |                     |     3     | -------------> |     3     |
+ * |           |                     +-----------+                +-----------+
+ * |           |                     |     4     | -------------> |     4     |
+ * |    PMD    |                     +-----------+                +-----------+
+ * |   level   |                     |     5     | -------------> |     5     |
+ * |  mapping  |                     +-----------+                +-----------+
+ * |           |                     |     6     | -------------> |     6     |
+ * |           |                     +-----------+                +-----------+
+ * |           |                     |     7     | -------------> |     7     |
+ * |           |                     +-----------+                +-----------+
+ * |           |
+ * |           |
+ * |           |
+ * +-----------+
+ *
+ * The value of page->compound_head is the same for all tail pages. The first
+ * page of page structs (page 0) associated with the HugeTLB page contains the 4
+ * page structs necessary to describe the HugeTLB. The only use of the remaining
+ * pages of page structs (page 1 to page 7) is to point to page->compound_head.
+ * Therefore, we can remap pages 2 to 7 to page 1. Only 2 pages of page structs
+ * will be used for each HugeTLB page. This will allow us to free the remaining
+ * 6 pages to the buddy allocator.
+ *
+ * Here is how things look after remapping.
+ *
+ *    HugeTLB                  struct pages(8 pages)         page frame(8 pages)
+ * +-----------+ ---virt_to_page---> +-----------+   mapping to   +-----------+
+ * |           |                     |     0     | -------------> |     0     |
+ * |           |                     +-----------+                +-----------+
+ * |           |                     |     1     | -------------> |     1     |
+ * |           |                     +-----------+                +-----------+
+ * |           |                     |     2     | ----------------^ ^ ^ ^ ^ ^
+ * |           |                     +-----------+                   | | | | |
+ * |           |                     |     3     | ------------------+ | | | |
+ * |           |                     +-----------+                     | | | |
+ * |           |                     |     4     | --------------------+ | | |
+ * |    PMD    |                     +-----------+                       | | |
+ * |   level   |                     |     5     | ----------------------+ | |
+ * |  mapping  |                     +-----------+                         | |
+ * |           |                     |     6     | ------------------------+ |
+ * |           |                     +-----------+                           |
+ * |           |                     |     7     | --------------------------+
+ * |           |                     +-----------+
+ * |           |
+ * |           |
+ * |           |
+ * +-----------+
+ *
+ * When a HugeTLB page is freed to the buddy system, we should allocate 6
+ * vmemmap pages and restore the previous mapping relationship.
+ *
+ * For a HugeTLB page mapped at the pud level, the approach is similar: there
+ * we can free (PAGE_SIZE - 2) vmemmap pages.
+ *
+ * Apart from HugeTLB pages mapped at the pmd/pud level, some architectures
+ * (e.g. aarch64) provide a contiguous bit in the translation table entries
+ * that hints to the MMU that the entry is one of a contiguous set of entries
+ * that can be cached in a single TLB entry.
+ *
+ * The contiguous bit is used to increase the mapping size at the pmd and pte
+ * (last) level. So this type of HugeTLB page can be optimized only when the
+ * size of its struct page structs is greater than 2 pages.
+ */
+#include "hugetlb_vmemmap.h"
+
+/*
+ * There are a lot of struct page structures associated with each HugeTLB page.
+ * For tail pages, the value of compound_head is the same, so we can reuse the
+ * first tail page. We map the virtual addresses of the remaining tail page
+ * structs to the first tail page struct, and then free those page frames.
+ * Therefore, we need to reserve two pages as vmemmap areas.
+ */
+#define RESERVE_VMEMMAP_NR	2U
+#define RESERVE_VMEMMAP_SIZE	(RESERVE_VMEMMAP_NR << PAGE_SHIFT)

+/*
+ * How many vmemmap pages associated with a HugeTLB page can be freed
+ * to the buddy allocator.
+ *
+ * Todo: Returns zero for now, which means the feature is disabled. We will
+ * enable it once all the infrastructure is there.
+ */
+static inline unsigned int free_vmemmap_pages_per_hpage(struct hstate *h)
+{
+	return 0;
+}
+
+static inline unsigned long free_vmemmap_pages_size_per_hpage(struct hstate *h)
+{
+	return (unsigned long)free_vmemmap_pages_per_hpage(h) << PAGE_SHIFT;
+}
+
+void free_huge_page_vmemmap(struct hstate *h, struct page *head)
+{
+	unsigned long vmemmap_addr = (unsigned long)head;
+	unsigned long vmemmap_end, vmemmap_reuse;
+
+	if (!free_vmemmap_pages_per_hpage(h))
+		return;
+
+	vmemmap_addr += RESERVE_VMEMMAP_SIZE;
+	vmemmap_end = vmemmap_addr + free_vmemmap_pages_size_per_hpage(h);
+	vmemmap_reuse = vmemmap_addr - PAGE_SIZE;
+
+	vmemmap_remap_free(vmemmap_addr, vmemmap_end, vmemmap_reuse);
+}

diff --git a/mm/hugetlb_vmemmap.h b/mm/hugetlb_vmemmap.h
new file mode 100644
index 000000000000..6923f03534d5
--- /dev/null
+++ b/mm/hugetlb_vmemmap.h
@@ -0,0 +1,20 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Free some vmemmap pages of HugeTLB
+ *
+ * Copyright (c) 2020, Bytedance. All rights reserved.
+ *
+ *     Author: Muchun Song <songmuchun@bytedance.com>
+ */
+#ifndef _LINUX_HUGETLB_VMEMMAP_H
+#define _LINUX_HUGETLB_VMEMMAP_H
+#include

+#ifdef CONFIG_HUGETLB_PAGE_FREE_VMEMMAP
+void free_huge_page_vmemmap(struct hstate *h, struct page *head);
+#else
+static inline void free_huge_page_vmemmap(struct hstate *h, struct page *head)
+{
+}
+#endif /* CONFIG_HUGETLB_PAGE_FREE_VMEMMAP */
+#endif /* _LINUX_HUGETLB_VMEMMAP_H */

diff --git a/mm/sparse-vmemmap.c b/mm/sparse-vmemmap.c
index 16183d85a7d5..ce4be1fa93c2 100644
--- a/mm/sparse-vmemmap.c
+++ b/mm/sparse-vmemmap.c
@@ -27,8 +27,206 @@
 #include
 #include
 #include
+#include <linux/pgtable.h>
+#include <linux/bootmem_info.h>
+
 #include
 #include
+#include <asm/tlbflush.h>

+/**
+ * vmemmap_remap_walk - walk vmemmap page table
+ *
+ * @remap_pte:		called for each non-empty PTE (lowest-level) entry.
+ * @reuse_page:		the page which is reused for the tail vmemmap pages.
+ * @reuse_addr:		the virtual address of the @reuse_page page.
+ * @vmemmap_pages:	the list head of the vmemmap pages that can be freed.
+ */
+struct vmemmap_remap_walk {
+	void (*remap_pte)(pte_t *pte, unsigned long addr,
+			  struct vmemmap_remap_walk *walk);
+	struct page *reuse_page;
+	unsigned long reuse_addr;
+	struct list_head *vmemmap_pages;
+};
+
+static void vmemmap_pte_range(pmd_t *pmd, unsigned long addr,
+			      unsigned long end,
+			      struct vmemmap_remap_walk *walk)
+{
+	pte_t *pte;
+
+	pte = pte_offset_kernel(pmd, addr);
+
+	/*
+	 * The reuse_page is found 'first' in the table walk, before we start
+	 * remapping (which is calling @walk->remap_pte).
+	 */
+	if (walk->reuse_addr == addr) {
+		BUG_ON(pte_none(*pte));
+
+		walk->reuse_page = pte_page(*pte++);
+		/*
+		 * Because the reuse address is part of the range that we are
+		 * walking, skip the reuse address range.
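+		 * The caller guarantees this layout; see the
+		 * BUG_ON(start - reuse != PAGE_SIZE) check in
+		 * vmemmap_remap_free().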
+		 */
+		addr += PAGE_SIZE;
+	}
+
+	for (; addr != end; addr += PAGE_SIZE, pte++) {
+		BUG_ON(pte_none(*pte));
+
+		walk->remap_pte(pte, addr, walk);
+	}
+}
+
+static void vmemmap_pmd_range(pud_t *pud, unsigned long addr,
+			      unsigned long end,
+			      struct vmemmap_remap_walk *walk)
+{
+	pmd_t *pmd;
+	unsigned long next;
+
+	pmd = pmd_offset(pud, addr);
+	do {
+		BUG_ON(pmd_none(*pmd));
+
+		next = pmd_addr_end(addr, end);
+		vmemmap_pte_range(pmd, addr, next, walk);
+	} while (pmd++, addr = next, addr != end);
+}
+
+static void vmemmap_pud_range(p4d_t *p4d, unsigned long addr,
+			      unsigned long end,
+			      struct vmemmap_remap_walk *walk)
+{
+	pud_t *pud;
+	unsigned long next;
+
+	pud = pud_offset(p4d, addr);
+	do {
+		BUG_ON(pud_none(*pud));
+
+		next = pud_addr_end(addr, end);
+		vmemmap_pmd_range(pud, addr, next, walk);
+	} while (pud++, addr = next, addr != end);
+}
+
+static void vmemmap_p4d_range(pgd_t *pgd, unsigned long addr,
+			      unsigned long end,
+			      struct vmemmap_remap_walk *walk)
+{
+	p4d_t *p4d;
+	unsigned long next;
+
+	p4d = p4d_offset(pgd, addr);
+	do {
+		BUG_ON(p4d_none(*p4d));
+
+		next = p4d_addr_end(addr, end);
+		vmemmap_pud_range(p4d, addr, next, walk);
+	} while (p4d++, addr = next, addr != end);
+}
+
+static void vmemmap_remap_range(unsigned long start, unsigned long end,
+				struct vmemmap_remap_walk *walk)
+{
+	unsigned long addr = start;
+	unsigned long next;
+	pgd_t *pgd;
+
+	VM_BUG_ON(!IS_ALIGNED(start, PAGE_SIZE));
+	VM_BUG_ON(!IS_ALIGNED(end, PAGE_SIZE));
+
+	pgd = pgd_offset_k(addr);
+	do {
+		BUG_ON(pgd_none(*pgd));
+
+		next = pgd_addr_end(addr, end);
+		vmemmap_p4d_range(pgd, addr, next, walk);
+	} while (pgd++, addr = next, addr != end);
+
+	/*
+	 * We do not change the mapping of the vmemmap virtual address range
+	 * [@start, @start + PAGE_SIZE), which belongs to the reuse range,
+	 * so we do not need to flush the TLB for it.
+	 */
+	flush_tlb_kernel_range(start + PAGE_SIZE, end);
+}
+
+/*
+ * Free a vmemmap page. A vmemmap page can be allocated from the memblock
+ * allocator or the buddy allocator. If the PG_reserved flag is set, the page
+ * was allocated from the memblock allocator; free it via free_bootmem_page().
+ * Otherwise, use __free_page().
+ */
+static inline void free_vmemmap_page(struct page *page)
+{
+	if (PageReserved(page))
+		free_bootmem_page(page);
+	else
+		__free_page(page);
+}
+
+/* Free a list of the vmemmap pages */
+static void free_vmemmap_page_list(struct list_head *list)
+{
+	struct page *page, *next;
+
+	list_for_each_entry_safe(page, next, list, lru) {
+		list_del(&page->lru);
+		free_vmemmap_page(page);
+	}
+}
+
+static void vmemmap_remap_pte(pte_t *pte, unsigned long addr,
+			      struct vmemmap_remap_walk *walk)
+{
+	/*
+	 * Remap the tail pages as read-only to catch illegal write operations
+	 * to the tail pages.
+	 */
+	pgprot_t pgprot = PAGE_KERNEL_RO;
+	pte_t entry = mk_pte(walk->reuse_page, pgprot);
+	struct page *page = pte_page(*pte);
+
+	list_add(&page->lru, walk->vmemmap_pages);
+	set_pte_at(&init_mm, addr, pte, entry);
+}
+
+/**
+ * vmemmap_remap_free - remap the vmemmap virtual address range [@start, @end)
+ *                      to the page to which @reuse is mapped, then free the
+ *                      vmemmap pages.
+ * @start:	start address of the vmemmap virtual address range.
+ * @end:	end address of the vmemmap virtual address range.
+ * @reuse:	reuse address.
+ */
+void vmemmap_remap_free(unsigned long start, unsigned long end,
+			unsigned long reuse)
+{
+	LIST_HEAD(vmemmap_pages);
+	struct vmemmap_remap_walk walk = {
+		.remap_pte	= vmemmap_remap_pte,
+		.reuse_addr	= reuse,
+		.vmemmap_pages	= &vmemmap_pages,
+	};
+
+	/*
+	 * To make the remapping routine most efficient for huge pages, the
+	 * vmemmap page table walk obeys the following rules (see
+	 * vmemmap_pte_range() for more details):
+	 *
+	 * - The @reuse address is part of the range that we are walking.
+	 * - The @reuse address is the first in the complete range.
+	 *
+	 * So we need to make sure that @start and @reuse meet the above rules.
+	 */
+	BUG_ON(start - reuse != PAGE_SIZE);
+
+	vmemmap_remap_range(reuse, end, &walk);
+	free_vmemmap_page_list(&vmemmap_pages);
+}

 /*
  * Allocate a block of memory to be used to back the virtual memory map
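To make the address arithmetic concrete: for a 2MB HugeTLB page on x86-64 (8 vmemmap pages per huge page), once free_vmemmap_pages_per_hpage() returns 6, free_huge_page_vmemmap() boils down to the following call. A hypothetical worked sketch only, not part of the patch:

	/* Worked example for one 2MB HugeTLB page on x86-64; illustration only. */
	static void free_huge_page_vmemmap_2mb_sketch(struct page *head)
	{
		/* The huge page's vmemmap spans [head, head + 8 * PAGE_SIZE). */
		unsigned long vmemmap_addr  = (unsigned long)head + 2 * PAGE_SIZE;	/* RESERVE_VMEMMAP_SIZE */
		unsigned long vmemmap_end   = vmemmap_addr + 6 * PAGE_SIZE;	/* pages 2..7 are freed */
		unsigned long vmemmap_reuse = vmemmap_addr - PAGE_SIZE;	/* page 1, the reused tail */

		vmemmap_remap_free(vmemmap_addr, vmemmap_end, vmemmap_reuse);
	}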
From patchwork Sun Jan 17 15:10:45 2021
X-Patchwork-Submitter: Muchun Song
X-Patchwork-Id: 12025521
From: Muchun Song <songmuchun@bytedance.com>
To: corbet@lwn.net, mike.kravetz@oracle.com, tglx@linutronix.de, mingo@redhat.com, bp@alien8.de, x86@kernel.org, hpa@zytor.com, dave.hansen@linux.intel.com, luto@kernel.org, peterz@infradead.org, viro@zeniv.linux.org.uk, akpm@linux-foundation.org, paulmck@kernel.org, mchehab+huawei@kernel.org, pawan.kumar.gupta@linux.intel.com, rdunlap@infradead.org, oneukum@suse.com, anshuman.khandual@arm.com, jroedel@suse.de, almasrymina@google.com, rientjes@google.com, willy@infradead.org, osalvador@suse.de, mhocko@suse.com, song.bao.hua@hisilicon.com, david@redhat.com, naoya.horiguchi@nec.com
Cc: duanxiongchun@bytedance.com, linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org, linux-mm@kvack.org, linux-fsdevel@vger.kernel.org, Muchun Song <songmuchun@bytedance.com>
Subject: [PATCH v13 04/12] mm: hugetlb: defer freeing of HugeTLB pages
Date: Sun, 17 Jan 2021 23:10:45 +0800
Message-Id: <20210117151053.24600-5-songmuchun@bytedance.com>
In-Reply-To: <20210117151053.24600-1-songmuchun@bytedance.com>

In a subsequent patch, we will allocate the vmemmap pages when freeing HugeTLB pages. But update_and_free_page() is always called while holding hugetlb_lock, so we cannot use GFP_KERNEL to allocate vmemmap pages there. However, we can defer the actual freeing to a kworker so that we do not have to allocate the vmemmap pages with GFP_ATOMIC. update_hpage_vmemmap_workfn() is where the call to allocate vmemmap pages will be inserted.

Signed-off-by: Muchun Song <songmuchun@bytedance.com>
Reviewed-by: Mike Kravetz
---
 mm/hugetlb.c | 74 ++++++++++++++++++++++++++++++++++++++++++++++++++--
 mm/hugetlb_vmemmap.c | 12 ---------
 mm/hugetlb_vmemmap.h | 17 ++++++++++++
 3 files changed, 89 insertions(+), 14 deletions(-)

diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 140135fc8113..c165186ec2cf 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -1292,15 +1292,85 @@ static inline void destroy_compound_gigantic_page(struct page *page,
 						  unsigned int order) { }
 #endif

-static void update_and_free_page(struct hstate *h, struct page *page)
+static void __free_hugepage(struct hstate *h, struct page *page);
+
+/*
+ * Because update_and_free_page() is always called while holding hugetlb_lock,
+ * we cannot use GFP_KERNEL to allocate vmemmap pages there. However, we can
+ * defer the actual freeing to a workqueue so that we do not have to allocate
+ * the vmemmap pages with GFP_ATOMIC.
+ *
+ * update_hpage_vmemmap_workfn() is where the call to allocate vmemmap pages
+ * will be inserted.
+ *
+ * update_hpage_vmemmap_workfn() locklessly retrieves the linked list of pages
+ * to be freed and frees them one-by-one.
+ * As the page->mapping pointer is going
+ * to be cleared in update_hpage_vmemmap_workfn() anyway, it is reused as the
+ * llist_node structure of a lockless linked list of huge pages to be freed.
+ */
+static LLIST_HEAD(hpage_update_freelist);
+
+static void update_hpage_vmemmap_workfn(struct work_struct *work)
 {
-	int i;
+	struct llist_node *node;
+
+	node = llist_del_all(&hpage_update_freelist);
+
+	while (node) {
+		struct page *page;
+		struct hstate *h;
+
+		page = container_of((struct address_space **)node,
+				    struct page, mapping);
+		node = node->next;
+		page->mapping = NULL;
+		h = page_hstate(page);
+
+		spin_lock(&hugetlb_lock);
+		__free_hugepage(h, page);
+		spin_unlock(&hugetlb_lock);
+
+		cond_resched();
+	}
+}
+static DECLARE_WORK(hpage_update_work, update_hpage_vmemmap_workfn);
+
+static inline void __update_and_free_page(struct hstate *h, struct page *page)
+{
+	/* No need to allocate vmemmap pages */
+	if (!free_vmemmap_pages_per_hpage(h)) {
+		__free_hugepage(h, page);
+		return;
+	}
+
+	/*
+	 * Defer freeing to avoid using GFP_ATOMIC to allocate vmemmap
+	 * pages.
+	 *
+	 * Only call schedule_work() if hpage_update_freelist was previously
+	 * empty. Otherwise, schedule_work() has already been called but the
+	 * workfn hasn't retrieved the list yet.
+	 */
+	if (llist_add((struct llist_node *)&page->mapping,
+		      &hpage_update_freelist))
+		schedule_work(&hpage_update_work);
+}
+
+static void update_and_free_page(struct hstate *h, struct page *page)
+{
 	if (hstate_is_gigantic(h) && !gigantic_page_runtime_supported())
 		return;

 	h->nr_huge_pages--;
 	h->nr_huge_pages_node[page_to_nid(page)]--;
+
+	__update_and_free_page(h, page);
+}
+
+static void __free_hugepage(struct hstate *h, struct page *page)
+{
+	int i;
+
 	for (i = 0; i < pages_per_huge_page(h); i++) {
 		page[i].flags &= ~(1 << PG_locked | 1 << PG_error |
 				1 << PG_referenced | 1 << PG_dirty |

diff --git a/mm/hugetlb_vmemmap.c b/mm/hugetlb_vmemmap.c
index 4ffa2a4ae2a8..19f1898aaede 100644
--- a/mm/hugetlb_vmemmap.c
+++ b/mm/hugetlb_vmemmap.c
@@ -178,18 +178,6 @@
 #define RESERVE_VMEMMAP_NR	2U
 #define RESERVE_VMEMMAP_SIZE	(RESERVE_VMEMMAP_NR << PAGE_SHIFT)

-/*
- * How many vmemmap pages associated with a HugeTLB page can be freed
- * to the buddy allocator.
- *
- * Todo: Returns zero for now, which means the feature is disabled. We will
- * enable it once all the infrastructure is there.
- */
-static inline unsigned int free_vmemmap_pages_per_hpage(struct hstate *h)
-{
-	return 0;
-}
-
 static inline unsigned long free_vmemmap_pages_size_per_hpage(struct hstate *h)
 {
 	return (unsigned long)free_vmemmap_pages_per_hpage(h) << PAGE_SHIFT;

diff --git a/mm/hugetlb_vmemmap.h b/mm/hugetlb_vmemmap.h
index 6923f03534d5..01f8637adbe0 100644
--- a/mm/hugetlb_vmemmap.h
+++ b/mm/hugetlb_vmemmap.h
@@ -12,9 +12,26 @@

 #ifdef CONFIG_HUGETLB_PAGE_FREE_VMEMMAP
 void free_huge_page_vmemmap(struct hstate *h, struct page *head);
+
+/*
+ * How many vmemmap pages associated with a HugeTLB page can be freed
+ * to the buddy allocator.
+ *
+ * Todo: Returns zero for now, which means the feature is disabled. We will
+ * enable it once all the infrastructure is there.
+ */
+static inline unsigned int free_vmemmap_pages_per_hpage(struct hstate *h)
+{
+	return 0;
+}
 #else
 static inline void free_huge_page_vmemmap(struct hstate *h, struct page *head)
 {
 }
+
+static inline unsigned int free_vmemmap_pages_per_hpage(struct hstate *h)
+{
+	return 0;
+}
 #endif /* CONFIG_HUGETLB_PAGE_FREE_VMEMMAP */
 #endif /* _LINUX_HUGETLB_VMEMMAP_H */
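The deferral scheme above is a generic lockless-list pattern: producers push nodes and only kick the worker on the empty-to-non-empty transition, and the worker drains everything at once. A self-contained sketch of the same protocol (hypothetical names, not from the series):

	#include <linux/llist.h>
	#include <linux/workqueue.h>

	struct deferred_item {
		struct llist_node node;
	};

	static LLIST_HEAD(deferred_list);

	static void drain_workfn(struct work_struct *work);
	static DECLARE_WORK(drain_work, drain_workfn);

	static void defer_item(struct deferred_item *item)
	{
		/* llist_add() returns true only when the list was empty. */
		if (llist_add(&item->node, &deferred_list))
			schedule_work(&drain_work);
	}

	static void drain_workfn(struct work_struct *work)
	{
		struct llist_node *node = llist_del_all(&deferred_list);

		while (node) {
			struct deferred_item *item =
				container_of(node, struct deferred_item, node);

			node = node->next;
			/* ... process @item here ... */
		}
	}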
From: Muchun Song
Subject: [PATCH v13 05/12] mm: hugetlb: allocate the vmemmap pages associated with each HugeTLB page
Date: Sun, 17 Jan 2021 23:10:46 +0800
Message-Id: <20210117151053.24600-6-songmuchun@bytedance.com>
In-Reply-To: <20210117151053.24600-1-songmuchun@bytedance.com>

When we free a HugeTLB page to the buddy allocator, we should allocate
the vmemmap pages associated with it. We can do that in
__free_hugepage() before the page is freed to the buddy allocator.
(The retry-until-success allocation pattern this relies on is sketched
after the patch.)

Signed-off-by: Muchun Song
---
 include/linux/mm.h   |  2 ++
 mm/hugetlb.c         |  2 ++
 mm/hugetlb_vmemmap.c | 15 ++++++++++
 mm/hugetlb_vmemmap.h |  5 ++++
 mm/sparse-vmemmap.c  | 77 +++++++++++++++++++++++++++++++++++++++++++++++++++-
 5 files changed, 100 insertions(+), 1 deletion(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index f928994ed273..16b55d13b0ab 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -3007,6 +3007,8 @@ static inline void print_vma_addr(char *prefix, unsigned long rip)

 void vmemmap_remap_free(unsigned long start, unsigned long end,
                        unsigned long reuse);
+void vmemmap_remap_alloc(unsigned long start, unsigned long end,
+                         unsigned long reuse);

 void *sparse_buffer_alloc(unsigned long size);
 struct page * __populate_section_memmap(unsigned long pfn,
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index c165186ec2cf..d11c32fcdb38 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -1326,6 +1326,8 @@ static void update_hpage_vmemmap_workfn(struct work_struct *work)
        page->mapping = NULL;
        h = page_hstate(page);

+       alloc_huge_page_vmemmap(h, page);
+
        spin_lock(&hugetlb_lock);
        __free_hugepage(h, page);
        spin_unlock(&hugetlb_lock);
diff --git a/mm/hugetlb_vmemmap.c b/mm/hugetlb_vmemmap.c
index 19f1898aaede..6108ae80314f 100644
--- a/mm/hugetlb_vmemmap.c
+++ b/mm/hugetlb_vmemmap.c
@@ -183,6 +183,21 @@ static inline unsigned long free_vmemmap_pages_size_per_hpage(struct hstate *h)
        return (unsigned long)free_vmemmap_pages_per_hpage(h) << PAGE_SHIFT;
 }

+void alloc_huge_page_vmemmap(struct hstate *h, struct page *head)
+{
+       unsigned long vmemmap_addr = (unsigned long)head;
+       unsigned long vmemmap_end, vmemmap_reuse;
+
+       if (!free_vmemmap_pages_per_hpage(h))
+               return;
+
+       vmemmap_addr += RESERVE_VMEMMAP_SIZE;
+       vmemmap_end = vmemmap_addr + free_vmemmap_pages_size_per_hpage(h);
+       vmemmap_reuse = vmemmap_addr - PAGE_SIZE;
+
+       vmemmap_remap_alloc(vmemmap_addr, vmemmap_end, vmemmap_reuse);
+}
+
 void free_huge_page_vmemmap(struct hstate *h, struct page *head)
 {
        unsigned long vmemmap_addr = (unsigned long)head;
diff --git a/mm/hugetlb_vmemmap.h b/mm/hugetlb_vmemmap.h
index 01f8637adbe0..b2c8d2f11d48 100644
--- a/mm/hugetlb_vmemmap.h
+++ b/mm/hugetlb_vmemmap.h
@@ -11,6 +11,7 @@
 #include

 #ifdef CONFIG_HUGETLB_PAGE_FREE_VMEMMAP
+void alloc_huge_page_vmemmap(struct hstate *h, struct page *head);
 void free_huge_page_vmemmap(struct hstate *h, struct page *head);

 /*
@@ -25,6 +26,10 @@ static inline unsigned int free_vmemmap_pages_per_hpage(struct hstate *h)
        return 0;
 }
 #else
+static inline void alloc_huge_page_vmemmap(struct hstate *h, struct page *head)
+{
+}
+
 static inline void free_huge_page_vmemmap(struct hstate *h, struct page *head)
 {
 }
diff --git a/mm/sparse-vmemmap.c b/mm/sparse-vmemmap.c
index ce4be1fa93c2..3b146d5949f3 100644
--- a/mm/sparse-vmemmap.c
+++ b/mm/sparse-vmemmap.c
@@ -29,6 +29,7 @@
 #include
 #include
 #include
+#include
 #include
 #include

@@ -40,7 +41,8 @@
  * @remap_pte: called for each non-empty PTE (lowest-level) entry.
  * @reuse_page: the page which is reused for the tail vmemmap pages.
  * @reuse_addr: the virtual address of the @reuse_page page.
- * @vmemmap_pages: the list head of the vmemmap pages that can be freed.
+ * @vmemmap_pages: the list head of the vmemmap pages that can be freed
+ *     or is mapped from.
  */
 struct vmemmap_remap_walk {
        void (*remap_pte)(pte_t *pte, unsigned long addr,
@@ -50,6 +52,10 @@ struct vmemmap_remap_walk {
        struct list_head *vmemmap_pages;
 };

+/* The gfp mask of allocating vmemmap page */
+#define GFP_VMEMMAP_PAGE               \
+       (GFP_KERNEL | __GFP_RETRY_MAYFAIL | __GFP_NOWARN | __GFP_THISNODE)
+
 static void vmemmap_pte_range(pmd_t *pmd, unsigned long addr,
                              unsigned long end,
                              struct vmemmap_remap_walk *walk)
@@ -228,6 +234,75 @@ void vmemmap_remap_free(unsigned long start, unsigned long end,
        free_vmemmap_page_list(&vmemmap_pages);
 }

+static void vmemmap_restore_pte(pte_t *pte, unsigned long addr,
+                               struct vmemmap_remap_walk *walk)
+{
+       pgprot_t pgprot = PAGE_KERNEL;
+       struct page *page;
+       void *to;
+
+       BUG_ON(pte_page(*pte) != walk->reuse_page);
+
+       page = list_first_entry(walk->vmemmap_pages, struct page, lru);
+       list_del(&page->lru);
+       to = page_to_virt(page);
+       copy_page(to, (void *)walk->reuse_addr);
+
+       set_pte_at(&init_mm, addr, pte, mk_pte(page, pgprot));
+}
+
+static void alloc_vmemmap_page_list(struct list_head *list,
+                                   unsigned long start, unsigned long end)
+{
+       unsigned long addr;
+
+       for (addr = start; addr < end; addr += PAGE_SIZE) {
+               struct page *page;
+               int nid = page_to_nid((const void *)addr);
+
+retry:
+               page = alloc_pages_node(nid, GFP_VMEMMAP_PAGE, 0);
+               if (unlikely(!page)) {
+                       msleep(100);
+                       /*
+                        * We should retry infinitely, because we cannot
+                        * handle allocation failures. Once we allocate
+                        * vmemmap pages successfully, then we can free
+                        * a HugeTLB page.
+                        */
+                       goto retry;
+               }
+               list_add_tail(&page->lru, list);
+       }
+}
+
+/**
+ * vmemmap_remap_alloc - remap the vmemmap virtual address range [@start, end)
+ *                      to the page which is from the @vmemmap_pages
+ *                      respectively.
+ * @start: start address of the vmemmap virtual address range.
+ * @end: end address of the vmemmap virtual address range.
+ * @reuse: reuse address.
+ */
+void vmemmap_remap_alloc(unsigned long start, unsigned long end,
+                        unsigned long reuse)
+{
+       LIST_HEAD(vmemmap_pages);
+       struct vmemmap_remap_walk walk = {
+               .remap_pte      = vmemmap_restore_pte,
+               .reuse_addr     = reuse,
+               .vmemmap_pages  = &vmemmap_pages,
+       };
+
+       might_sleep();
+
+       /* See the comment in the vmemmap_remap_free(). */
+       BUG_ON(start - reuse != PAGE_SIZE);
+
+       alloc_vmemmap_page_list(&vmemmap_pages, start, end);
+       vmemmap_remap_range(reuse, end, &walk);
+}
+
 /*
  * Allocate a block of memory to be used to back the virtual memory map
  * or to back the page tables that are used to create the mapping.
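A minimal, self-contained userspace sketch of the retry-until-success
pattern that alloc_vmemmap_page_list() above relies on; all names are
illustrative and none of this is kernel code:

    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>

    /* The caller cannot tolerate allocation failure, so it backs off
     * briefly and retries forever, like alloc_vmemmap_page_list(). */
    static void *alloc_retry(size_t size)
    {
            void *p;

            while (!(p = malloc(size)))
                    usleep(100 * 1000);     /* mirrors msleep(100) */
            return p;
    }

    int main(void)
    {
            char *buf = alloc_retry(4096);  /* stands in for one vmemmap page */

            printf("got %p\n", (void *)buf);
            free(buf);
            return 0;
    }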
From patchwork Sun Jan 17 15:10:47 2021
X-Patchwork-Submitter: Muchun Song
X-Patchwork-Id: 12025501
From: Muchun Song
Subject: [PATCH v13 06/12] mm: hugetlb: set the PageHWPoison to the raw error page
Date: Sun, 17 Jan 2021 23:10:47 +0800
Message-Id: <20210117151053.24600-7-songmuchun@bytedance.com>
In-Reply-To: <20210117151053.24600-1-songmuchun@bytedance.com>

Because we reuse the first tail vmemmap page frame and remap it
read-only, we cannot set PageHWPoison on a tail page. Instead, use
head[4].private to record the index of the real error page, and set
PageHWPoison on that raw error page later. (A toy sketch of this
bookkeeping follows the patch.)

Signed-off-by: Muchun Song
Reviewed-by: Oscar Salvador
Acked-by: David Rientjes
---
 mm/hugetlb.c | 69 +++++++++++++++++++++++++++++++++++++++++++++++++++++------
 1 file changed, 61 insertions(+), 8 deletions(-)

diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index d11c32fcdb38..6caaa7e5dd2a 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -1358,6 +1358,63 @@ static inline void __update_and_free_page(struct hstate *h, struct page *page)
        schedule_work(&hpage_update_work);
 }

+#ifdef CONFIG_HUGETLB_PAGE_FREE_VMEMMAP
+static inline void hwpoison_subpage_deliver(struct hstate *h, struct page *head)
+{
+       struct page *page;
+
+       if (!PageHWPoison(head) || !free_vmemmap_pages_per_hpage(h))
+               return;
+
+       page = head + page_private(head + 4);
+
+       /*
+        * Move PageHWPoison flag from head page to the raw error page,
+        * which makes any subpages rather than the error page reusable.
+        */
+       if (page != head) {
+               SetPageHWPoison(page);
+               ClearPageHWPoison(head);
+       }
+}
+
+static inline void hwpoison_subpage_set(struct hstate *h, struct page *head,
+                                       struct page *page)
+{
+       if (!PageHWPoison(head))
+               return;
+
+       if (free_vmemmap_pages_per_hpage(h)) {
+               set_page_private(head + 4, page - head);
+       } else if (page != head) {
+               /*
+                * Move PageHWPoison flag from head page to the raw error page,
+                * which makes any subpages rather than the error page reusable.
+                */
+               SetPageHWPoison(page);
+               ClearPageHWPoison(head);
+       }
+}
+
+#else
+static inline void hwpoison_subpage_deliver(struct hstate *h, struct page *head)
+{
+}
+
+static inline void hwpoison_subpage_set(struct hstate *h, struct page *head,
+                                       struct page *page)
+{
+       if (PageHWPoison(head) && page != head) {
+               /*
+                * Move PageHWPoison flag from head page to the raw error page,
+                * which makes any subpages rather than the error page reusable.
+                */
+               SetPageHWPoison(page);
+               ClearPageHWPoison(head);
+       }
+}
+#endif
+
 static void update_and_free_page(struct hstate *h, struct page *page)
 {
        if (hstate_is_gigantic(h) && !gigantic_page_runtime_supported())
@@ -1373,6 +1430,8 @@ static void __free_hugepage(struct hstate *h, struct page *page)
 {
        int i;

+       hwpoison_subpage_deliver(h, page);
+
        for (i = 0; i < pages_per_huge_page(h); i++) {
                page[i].flags &= ~(1 << PG_locked | 1 << PG_error |
                                1 << PG_referenced | 1 << PG_dirty |
@@ -1845,14 +1904,8 @@ int dissolve_free_huge_page(struct page *page)
                int nid = page_to_nid(head);

                if (h->free_huge_pages - h->resv_huge_pages == 0)
                        goto out;
-               /*
-                * Move PageHWPoison flag from head page to the raw error page,
-                * which makes any subpages rather than the error page reusable.
-                */
-               if (PageHWPoison(head) && page != head) {
-                       SetPageHWPoison(page);
-                       ClearPageHWPoison(head);
-               }
+
+               hwpoison_subpage_set(h, head, page);
+
                list_del(&head->lru);
                h->free_huge_pages--;
                h->free_huge_pages_node[nid]--;
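To make the head[4].private bookkeeping above concrete, here is a toy
userspace sketch (illustrative types and field names, not kernel code)
of recording an error index in spare per-group metadata and delivering
it later:

    #include <assert.h>
    #include <stdbool.h>

    /* Toy stand-in for struct page: one flag plus a private slot. */
    struct toy_page {
            bool hwpoison;
            unsigned long private;
    };

    /* Record which subpage is bad; only head metadata is writable. */
    static void subpage_set(struct toy_page *head, struct toy_page *page)
    {
            head->hwpoison = true;
            head[4].private = (unsigned long)(page - head); /* like head[4].private */
    }

    /* Later, move the poison flag onto the recorded raw error page. */
    static void subpage_deliver(struct toy_page *head)
    {
            struct toy_page *page = head + head[4].private;

            if (page != head) {
                    page->hwpoison = true;
                    head->hwpoison = false;
            }
    }

    int main(void)
    {
            struct toy_page hpage[512] = { 0 }; /* a 2MB huge page's subpages */

            subpage_set(hpage, &hpage[37]);
            subpage_deliver(hpage);
            assert(hpage[37].hwpoison && !hpage[0].hwpoison);
            return 0;
    }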
From patchwork Sun Jan 17 15:10:48 2021
X-Patchwork-Submitter: Muchun Song
X-Patchwork-Id: 12025507
From: Muchun Song
Subject: [PATCH v13 07/12] mm: hugetlb: flush work when dissolving a HugeTLB page
Date: Sun, 17 Jan 2021 23:10:48 +0800
Message-Id: <20210117151053.24600-8-songmuchun@bytedance.com>
In-Reply-To: <20210117151053.24600-1-songmuchun@bytedance.com>

We should flush the work when dissolving a HugeTLB page to make sure
that the HugeTLB page really has been freed to the buddy allocator,
because the caller of dissolve_free_huge_pages() relies on that
guarantee. (A schematic example of the defer-then-flush pattern follows
the patch.)
Signed-off-by: Muchun Song
Reviewed-by: Oscar Salvador
---
 mm/hugetlb.c | 18 +++++++++++++++++-
 1 file changed, 17 insertions(+), 1 deletion(-)

diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 6caaa7e5dd2a..3222bad8b112 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -1337,6 +1337,12 @@ static void update_hpage_vmemmap_workfn(struct work_struct *work)
 }
 static DECLARE_WORK(hpage_update_work, update_hpage_vmemmap_workfn);

+static inline void flush_hpage_update_work(struct hstate *h)
+{
+       if (free_vmemmap_pages_per_hpage(h))
+               flush_work(&hpage_update_work);
+}
+
 static inline void __update_and_free_page(struct hstate *h, struct page *page)
 {
        /* No need to allocate vmemmap pages */
@@ -1887,6 +1893,7 @@ static int free_pool_huge_page(struct hstate *h, nodemask_t *nodes_allowed,
 int dissolve_free_huge_page(struct page *page)
 {
        int rc = -EBUSY;
+       struct hstate *h = NULL;

        /* Not to disrupt normal path by vainly holding hugetlb_lock */
        if (!PageHuge(page))
@@ -1900,8 +1907,9 @@ int dissolve_free_huge_page(struct page *page)

        if (!page_count(page)) {
                struct page *head = compound_head(page);
-               struct hstate *h = page_hstate(head);
                int nid = page_to_nid(head);
+
+               h = page_hstate(head);

                if (h->free_huge_pages - h->resv_huge_pages == 0)
                        goto out;
@@ -1915,6 +1923,14 @@ int dissolve_free_huge_page(struct page *page)
        }
 out:
        spin_unlock(&hugetlb_lock);
+
+       /*
+        * We should flush work before return to make sure that
+        * the HugeTLB page is freed to the buddy.
+        */
+       if (!rc && h)
+               flush_hpage_update_work(h);
+
        return rc;
 }
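A schematic kernel-module sketch of the defer-then-flush pattern used
above (illustrative names; not part of the series, and simplified down
to the two workqueue calls that matter):

    // SPDX-License-Identifier: GPL-2.0
    #include <linux/module.h>
    #include <linux/workqueue.h>

    static void deferred_free_workfn(struct work_struct *work)
    {
            pr_info("deferred work ran; page is now back in the buddy\n");
    }
    static DECLARE_WORK(deferred_free_work, deferred_free_workfn);

    static int __init demo_init(void)
    {
            /* Defer the heavy part, like __update_and_free_page(). */
            schedule_work(&deferred_free_work);

            /*
             * Before relying on the page being in the buddy allocator,
             * wait for the deferred work, like dissolve_free_huge_page().
             */
            flush_work(&deferred_free_work);
            return 0;
    }
    module_init(demo_init);

    static void __exit demo_exit(void) { }
    module_exit(demo_exit);
    MODULE_LICENSE("GPL");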
From patchwork Sun Jan 17 15:10:49 2021
X-Patchwork-Submitter: Muchun Song
X-Patchwork-Id: 12025513
From: Muchun Song
Subject: [PATCH v13 08/12] mm: hugetlb: introduce PageHugeInflight
Date: Sun, 17 Jan 2021 23:10:49 +0800
Message-Id: <20210117151053.24600-9-songmuchun@bytedance.com>
In-Reply-To: <20210117151053.24600-1-songmuchun@bytedance.com>

When we free a HugeTLB page whose vmemmap pages can be optimized, it is
freed to the buddy allocator through a kworker, and its refcount is
already zero at that point. So if we dissolve the page before the
kworker has freed it to the buddy allocator, it can be freed twice. To
avoid this, introduce PageHugeInflight to indicate that the HugeTLB
page has already been freed from the hugepage pool but not yet handed
to the buddy allocator. When we hit an inflight page, we just need to
flush the work. (A toy sketch of how the flag closes the double-free
window follows the patch.)
Signed-off-by: Muchun Song
---
 mm/hugetlb.c | 38 +++++++++++++++++++++++++++++++++++++-
 1 file changed, 37 insertions(+), 1 deletion(-)

diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 3222bad8b112..14549204ddcb 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -1343,6 +1343,36 @@ static inline void flush_hpage_update_work(struct hstate *h)
                flush_work(&hpage_update_work);
 }

+#ifdef CONFIG_HUGETLB_PAGE_FREE_VMEMMAP
+static inline bool PageHugeInflight(struct page *head)
+{
+       return page_private(head + 5) == -1UL;
+}
+
+static inline void SetPageHugeInflight(struct page *head)
+{
+       set_page_private(head + 5, -1UL);
+}
+
+static inline void ClearPageHugeInflight(struct page *head)
+{
+       set_page_private(head + 5, 0);
+}
+#else
+static inline bool PageHugeInflight(struct page *head)
+{
+       return false;
+}
+
+static inline void SetPageHugeInflight(struct page *head)
+{
+}
+
+static inline void ClearPageHugeInflight(struct page *head)
+{
+}
+#endif
+
 static inline void __update_and_free_page(struct hstate *h, struct page *page)
 {
        /* No need to allocate vmemmap pages */
@@ -1351,6 +1381,8 @@ static inline void __update_and_free_page(struct hstate *h, struct page *page)
                return;
        }

+       SetPageHugeInflight(page);
+
        /*
         * Defer freeing to avoid using GFP_ATOMIC to allocate vmemmap
         * pages.
@@ -1637,6 +1669,7 @@ static void prep_new_huge_page(struct hstate *h, struct page *page, int nid)
 {
        free_huge_page_vmemmap(h, page);
+       ClearPageHugeInflight(page);

        INIT_LIST_HEAD(&page->lru);
        set_compound_page_dtor(page, HUGETLB_PAGE_DTOR);
        set_hugetlb_cgroup(page, NULL);
@@ -1913,13 +1946,16 @@ int dissolve_free_huge_page(struct page *page)
                if (h->free_huge_pages - h->resv_huge_pages == 0)
                        goto out;

+               rc = 0;
                hwpoison_subpage_set(h, head, page);
+               if (PageHugeInflight(head))
+                       goto out;
+
                list_del(&head->lru);
                h->free_huge_pages--;
                h->free_huge_pages_node[nid]--;
                h->max_huge_pages--;
                update_and_free_page(h, head);
-               rc = 0;
        }
 out:
        spin_unlock(&hugetlb_lock);
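A toy userspace sketch of the inflight flag closing the double-free
window (illustrative names; the "flush" is modeled as finishing the
worker's job synchronously):

    #include <stdbool.h>
    #include <stdlib.h>

    /* A page is "inflight" between leaving the pool and reaching
     * the buddy allocator. */
    struct toy_hpage {
            bool inflight;
            void *mem;
    };

    static void start_deferred_free(struct toy_hpage *p)
    {
            p->inflight = true;     /* like SetPageHugeInflight() */
            /* ... a worker will free p->mem later ... */
    }

    static void worker_finish_free(struct toy_hpage *p)
    {
            free(p->mem);
            p->mem = NULL;
            p->inflight = false;    /* like ClearPageHugeInflight() */
    }

    static void dissolve(struct toy_hpage *p)
    {
            if (p->inflight) {
                    /* Someone is already freeing it: just wait (flush). */
                    worker_finish_free(p);
                    return;
            }
            free(p->mem);           /* safe: freed exactly once */
            p->mem = NULL;
    }

    int main(void)
    {
            struct toy_hpage p = { .mem = malloc(4096) };

            start_deferred_free(&p);
            dissolve(&p);           /* no double free thanks to the flag */
            return 0;
    }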
From patchwork Sun Jan 17 15:10:50 2021
X-Patchwork-Submitter: Muchun Song
X-Patchwork-Id: 12025503
From: Muchun Song
Subject: [PATCH v13 09/12] mm: hugetlb: add a kernel parameter hugetlb_free_vmemmap
Date: Sun, 17 Jan 2021 23:10:50 +0800
Message-Id: <20210117151053.24600-10-songmuchun@bytedance.com>
In-Reply-To: <20210117151053.24600-1-songmuchun@bytedance.com>

Add a kernel parameter, hugetlb_free_vmemmap, to enable at boot the
feature of freeing unused vmemmap pages associated with each HugeTLB
page. (A usage example follows the patch.)
Signed-off-by: Muchun Song
Reviewed-by: Oscar Salvador
Reviewed-by: Barry Song
---
 Documentation/admin-guide/kernel-parameters.txt | 14 ++++++++++++++
 Documentation/admin-guide/mm/hugetlbpage.rst    |  3 +++
 arch/x86/mm/init_64.c                           |  8 ++++++--
 include/linux/hugetlb.h                         | 19 +++++++++++++++++++
 mm/hugetlb_vmemmap.c                            | 24 ++++++++++++++++++++++++
 5 files changed, 66 insertions(+), 2 deletions(-)

diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt
index 3ae25630a223..44dde9be7e00 100644
--- a/Documentation/admin-guide/kernel-parameters.txt
+++ b/Documentation/admin-guide/kernel-parameters.txt
@@ -1551,6 +1551,20 @@
                        Documentation/admin-guide/mm/hugetlbpage.rst.
                        Format: size[KMG]

+       hugetlb_free_vmemmap=
+                       [KNL] When CONFIG_HUGETLB_PAGE_FREE_VMEMMAP is set,
+                       this controls freeing unused vmemmap pages associated
+                       with each HugeTLB page. When this option is enabled,
+                       we disable PMD/huge page mapping of vmemmap pages which
+                       increase page table pages. So if a user/sysadmin only
+                       uses a small number of HugeTLB pages (as a percentage
+                       of system memory), they could end up using more memory
+                       with hugetlb_free_vmemmap on as opposed to off.
+                       Format: { on | off (default) }
+
+                       on:  enable the feature
+                       off: disable the feature
+
        hung_task_panic=
                        [KNL] Should the hung task detector generate panics.
                        Format: 0 | 1

diff --git a/Documentation/admin-guide/mm/hugetlbpage.rst b/Documentation/admin-guide/mm/hugetlbpage.rst
index f7b1c7462991..3a23c2377acc 100644
--- a/Documentation/admin-guide/mm/hugetlbpage.rst
+++ b/Documentation/admin-guide/mm/hugetlbpage.rst
@@ -145,6 +145,9 @@ default_hugepagesz
        will all result in 256 2M huge pages being allocated.  Valid default
        huge page size is architecture dependent.

+hugetlb_free_vmemmap
+       When CONFIG_HUGETLB_PAGE_FREE_VMEMMAP is set, this enables freeing
+       unused vmemmap pages associated with each HugeTLB page.

 When multiple huge page sizes are supported, ``/proc/sys/vm/nr_hugepages``
 indicates the current number of pre-allocated huge pages of the default size.
diff --git a/arch/x86/mm/init_64.c b/arch/x86/mm/init_64.c
index 0435bee2e172..1bce5f20e6ca 100644
--- a/arch/x86/mm/init_64.c
+++ b/arch/x86/mm/init_64.c
@@ -34,6 +34,7 @@
 #include
 #include
 #include
+#include
 #include
 #include

@@ -1557,7 +1558,8 @@ int __meminit vmemmap_populate(unsigned long start, unsigned long end, int node,
 {
        int err;

-       if (end - start < PAGES_PER_SECTION * sizeof(struct page))
+       if (is_hugetlb_free_vmemmap_enabled() ||
+           end - start < PAGES_PER_SECTION * sizeof(struct page))
                err = vmemmap_populate_basepages(start, end, node, NULL);
        else if (boot_cpu_has(X86_FEATURE_PSE))
                err = vmemmap_populate_hugepages(start, end, node, altmap);
@@ -1585,6 +1587,8 @@ void register_page_bootmem_memmap(unsigned long section_nr,
        pmd_t *pmd;
        unsigned int nr_pmd_pages;
        struct page *page;
+       bool base_mapping = !boot_cpu_has(X86_FEATURE_PSE) ||
+                           is_hugetlb_free_vmemmap_enabled();

        for (; addr < end; addr = next) {
                pte_t *pte = NULL;
@@ -1610,7 +1614,7 @@ void register_page_bootmem_memmap(unsigned long section_nr,
                }
                get_page_bootmem(section_nr, pud_page(*pud), MIX_SECTION_INFO);

-               if (!boot_cpu_has(X86_FEATURE_PSE)) {
+               if (base_mapping) {
                        next = (addr + PAGE_SIZE) & PAGE_MASK;
                        pmd = pmd_offset(pud, addr);
                        if (pmd_none(*pmd))

diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h
index ebca2ef02212..7f47f0eeca3b 100644
--- a/include/linux/hugetlb.h
+++ b/include/linux/hugetlb.h
@@ -770,6 +770,20 @@ static inline void huge_ptep_modify_prot_commit(struct vm_area_struct *vma,
 }
 #endif

+#ifdef CONFIG_HUGETLB_PAGE_FREE_VMEMMAP
+extern bool hugetlb_free_vmemmap_enabled;
+
+static inline bool is_hugetlb_free_vmemmap_enabled(void)
+{
+       return hugetlb_free_vmemmap_enabled;
+}
+#else
+static inline bool is_hugetlb_free_vmemmap_enabled(void)
+{
+       return false;
+}
+#endif
+
 #else  /* CONFIG_HUGETLB_PAGE */
 struct hstate {};
@@ -923,6 +937,11 @@ static inline void set_huge_swap_pte_at(struct mm_struct *mm, unsigned long addr
                                        pte_t *ptep, pte_t pte, unsigned long sz)
 {
 }
+
+static inline bool is_hugetlb_free_vmemmap_enabled(void)
+{
+       return false;
+}
 #endif /* CONFIG_HUGETLB_PAGE */

 static inline spinlock_t *huge_pte_lock(struct hstate *h,

diff --git a/mm/hugetlb_vmemmap.c b/mm/hugetlb_vmemmap.c
index 6108ae80314f..8206978d1679 100644
--- a/mm/hugetlb_vmemmap.c
+++ b/mm/hugetlb_vmemmap.c
@@ -166,6 +166,8 @@
  * (last) level. So this type of HugeTLB page can be optimized only when its
  * size of the struct page structs is greater than 2 pages.
  */
+#define pr_fmt(fmt)    "HugeTLB: " fmt
+
 #include "hugetlb_vmemmap.h"

 /*
@@ -178,6 +180,28 @@
 #define RESERVE_VMEMMAP_NR             2U
 #define RESERVE_VMEMMAP_SIZE           (RESERVE_VMEMMAP_NR << PAGE_SHIFT)

+bool hugetlb_free_vmemmap_enabled;
+
+static int __init early_hugetlb_free_vmemmap_param(char *buf)
+{
+       /* We cannot optimize if a "struct page" crosses page boundaries. */
+       if ((!is_power_of_2(sizeof(struct page)))) {
+               pr_warn("cannot free vmemmap pages because \"struct page\" crosses page boundaries\n");
+               return 0;
+       }
+
+       if (!buf)
+               return -EINVAL;
+
+       if (!strcmp(buf, "on"))
+               hugetlb_free_vmemmap_enabled = true;
+       else if (strcmp(buf, "off"))
+               return -EINVAL;
+
+       return 0;
+}
+early_param("hugetlb_free_vmemmap", early_hugetlb_free_vmemmap_param);
+
 static inline unsigned long free_vmemmap_pages_size_per_hpage(struct hstate *h)
 {
        return (unsigned long)free_vmemmap_pages_per_hpage(h) << PAGE_SHIFT;
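As a usage example (hypothetical command line; only the
hugetlb_free_vmemmap=on token comes from this patch), the feature is
enabled by appending the parameter to the kernel command line at boot:

    linux /vmlinuz ... hugetlb_free_vmemmap=on

With the parameter absent, or set to off, the vmemmap is mapped exactly
as before and the optimization stays disabled.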
From patchwork Sun Jan 17 15:10:51 2021
X-Patchwork-Submitter: Muchun Song
X-Patchwork-Id: 12025505
From: Muchun Song
Subject: [PATCH v13 10/12] mm: hugetlb: introduce nr_free_vmemmap_pages in the struct hstate
Date: Sun, 17 Jan 2021 23:10:51 +0800
Message-Id: <20210117151053.24600-11-songmuchun@bytedance.com>
In-Reply-To: <20210117151053.24600-1-songmuchun@bytedance.com>

All the infrastructure is ready, so we introduce the
nr_free_vmemmap_pages field in the hstate to indicate how many vmemmap
pages associated with a HugeTLB page can be freed to the buddy
allocator, and initialize it in hugetlb_vmemmap_init(). This patch is
the actual enablement of the feature. (A worked example of the
accounting follows the patch.)
Signed-off-by: Muchun Song
Acked-by: Mike Kravetz
Reviewed-by: Oscar Salvador
---
 include/linux/hugetlb.h |  3 +++
 mm/hugetlb.c            |  1 +
 mm/hugetlb_vmemmap.c    | 25 +++++++++++++++++++++++++
 mm/hugetlb_vmemmap.h    | 10 ++++++----
 4 files changed, 35 insertions(+), 4 deletions(-)

diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h
index 7f47f0eeca3b..66d82ae7b712 100644
--- a/include/linux/hugetlb.h
+++ b/include/linux/hugetlb.h
@@ -492,6 +492,9 @@ struct hstate {
        unsigned int nr_huge_pages_node[MAX_NUMNODES];
        unsigned int free_huge_pages_node[MAX_NUMNODES];
        unsigned int surplus_huge_pages_node[MAX_NUMNODES];
+#ifdef CONFIG_HUGETLB_PAGE_FREE_VMEMMAP
+       unsigned int nr_free_vmemmap_pages;
+#endif
 #ifdef CONFIG_CGROUP_HUGETLB
        /* cgroup control files */
        struct cftype cgroup_files_dfl[7];

diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 14549204ddcb..0e14fad63823 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -3385,6 +3385,7 @@ void __init hugetlb_add_hstate(unsigned int order)
        h->next_nid_to_free = first_memory_node;
        snprintf(h->name, HSTATE_NAME_LEN, "hugepages-%lukB",
                                        huge_page_size(h)/1024);
+       hugetlb_vmemmap_init(h);

        parsed_hstate = h;
 }

diff --git a/mm/hugetlb_vmemmap.c b/mm/hugetlb_vmemmap.c
index 8206978d1679..7dcb4aa1e512 100644
--- a/mm/hugetlb_vmemmap.c
+++ b/mm/hugetlb_vmemmap.c
@@ -236,3 +236,28 @@ void free_huge_page_vmemmap(struct hstate *h, struct page *head)

        vmemmap_remap_free(vmemmap_addr, vmemmap_end, vmemmap_reuse);
 }
+
+void __init hugetlb_vmemmap_init(struct hstate *h)
+{
+       unsigned int nr_pages = pages_per_huge_page(h);
+       unsigned int vmemmap_pages;
+
+       if (!hugetlb_free_vmemmap_enabled)
+               return;
+
+       vmemmap_pages = (nr_pages * sizeof(struct page)) >> PAGE_SHIFT;
+       /*
+        * The head page and the first tail page are not to be freed to buddy
+        * allocator, the other pages will map to the first tail page, so they
+        * can be freed.
+        *
+        * Could RESERVE_VMEMMAP_NR be greater than @vmemmap_pages? It is true
+        * on some architectures (e.g. aarch64). See Documentation/arm64/
+        * hugetlbpage.rst for more details.
+        */
+       if (likely(vmemmap_pages > RESERVE_VMEMMAP_NR))
+               h->nr_free_vmemmap_pages = vmemmap_pages - RESERVE_VMEMMAP_NR;
+
+       pr_info("can free %d vmemmap pages for %s\n", h->nr_free_vmemmap_pages,
+               h->name);
+}

diff --git a/mm/hugetlb_vmemmap.h b/mm/hugetlb_vmemmap.h
index b2c8d2f11d48..8fd9ae113dbd 100644
--- a/mm/hugetlb_vmemmap.h
+++ b/mm/hugetlb_vmemmap.h
@@ -13,17 +13,15 @@
 #ifdef CONFIG_HUGETLB_PAGE_FREE_VMEMMAP
 void alloc_huge_page_vmemmap(struct hstate *h, struct page *head);
 void free_huge_page_vmemmap(struct hstate *h, struct page *head);
+void hugetlb_vmemmap_init(struct hstate *h);

 /*
  * How many vmemmap pages associated with a HugeTLB page that can be freed
  * to the buddy allocator.
- *
- * Todo: Returns zero for now, which means the feature is disabled. We will
- * enable it once all the infrastructure is there.
  */
 static inline unsigned int free_vmemmap_pages_per_hpage(struct hstate *h)
 {
-       return 0;
+       return h->nr_free_vmemmap_pages;
 }
 #else
 static inline void alloc_huge_page_vmemmap(struct hstate *h, struct page *head)
@@ -38,5 +36,9 @@ static inline unsigned int free_vmemmap_pages_per_hpage(struct hstate *h)
 {
        return 0;
 }
+
+static inline void hugetlb_vmemmap_init(struct hstate *h)
+{
+}
 #endif /* CONFIG_HUGETLB_PAGE_FREE_VMEMMAP */
 #endif /* _LINUX_HUGETLB_VMEMMAP_H */
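A worked example of the accounting hugetlb_vmemmap_init() performs
above, assuming the common configuration of 4KB base pages and
sizeof(struct page) == 64 (both vary by architecture and config):

    #include <stdio.h>

    #define PAGE_SIZE           4096UL  /* assumption */
    #define STRUCT_PAGE_SIZE    64UL    /* assumption */
    #define RESERVE_VMEMMAP_NR  2UL     /* from the series */

    static unsigned long nr_free_vmemmap(unsigned long hpage_order)
    {
            unsigned long nr_pages = 1UL << hpage_order;        /* subpages */
            unsigned long vmemmap_pages =
                    nr_pages * STRUCT_PAGE_SIZE / PAGE_SIZE;

            return vmemmap_pages > RESERVE_VMEMMAP_NR ?
                   vmemmap_pages - RESERVE_VMEMMAP_NR : 0;
    }

    int main(void)
    {
            /* 2MB huge page: order 9 -> 8 vmemmap pages, 6 freeable
             * (24KB saved per huge page). */
            printf("2MB: %lu\n", nr_free_vmemmap(9));
            /* 1GB huge page: order 18 -> 4096 vmemmap pages, 4094
             * freeable (~16MB saved per huge page). */
            printf("1GB: %lu\n", nr_free_vmemmap(18));
            return 0;
    }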
From patchwork Sun Jan 17 15:10:52 2021
X-Patchwork-Submitter: Muchun Song
X-Patchwork-Id: 12025511
From: Muchun Song
Subject: [PATCH v13 11/12] mm: hugetlb: gather discrete indexes of tail page
Date: Sun, 17 Jan 2021 23:10:52 +0800
Message-Id: <20210117151053.24600-12-songmuchun@bytedance.com>
In-Reply-To: <20210117151053.24600-1-songmuchun@bytedance.com>

For a HugeTLB page, there is more metadata to save in the struct page
than the head struct page can hold, so we have to use other tail struct
pages to store it. To avoid conflicts caused by subsequent use of more
tail struct pages, gather these discrete tail-page indexes into one
enum; that way it is easier to add a new tail page index later (the
sketch following this patch illustrates the resulting layout). Only
(RESERVE_VMEMMAP_SIZE / sizeof(struct page)) struct page structs can be
used when CONFIG_HUGETLB_PAGE_FREE_VMEMMAP is set, so add a
BUILD_BUG_ON to catch invalid usage of the tail struct pages.

Signed-off-by: Muchun Song
Reviewed-by: Oscar Salvador
---
 include/linux/hugetlb.h        | 14 ++++++++++++++
 include/linux/hugetlb_cgroup.h | 15 +++++++++------
 mm/hugetlb.c                   | 25 ++++++++++++-------------
 mm/hugetlb_vmemmap.c           |  8 ++++++++
 4 files changed, 43 insertions(+), 19 deletions(-)

diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h
index 66d82ae7b712..05fd2db09b78 100644
--- a/include/linux/hugetlb.h
+++ b/include/linux/hugetlb.h
@@ -28,6 +28,20 @@ typedef struct { unsigned long pd; } hugepd_t;
 #include
 #include

+enum {
+       SUBPAGE_INDEX_ACTIVE = 1,       /* reuse page flags of PG_private */
+       SUBPAGE_INDEX_TEMPORARY,        /* reuse page->mapping */
+#ifdef CONFIG_CGROUP_HUGETLB
+       SUBPAGE_INDEX_CGROUP = SUBPAGE_INDEX_TEMPORARY,/* reuse page->private */
+       SUBPAGE_INDEX_CGROUP_RSVD,      /* reuse page->private */
+#endif
+#ifdef CONFIG_HUGETLB_PAGE_FREE_VMEMMAP
+       SUBPAGE_INDEX_HWPOISON,         /* reuse page->private */
+       SUBPAGE_INDEX_INFLIGHT,         /* reuse page->private */
+#endif
+       NR_USED_SUBPAGE,
+};
+
 struct hugepage_subpool {
        spinlock_t lock;
        long count;

diff --git a/include/linux/hugetlb_cgroup.h b/include/linux/hugetlb_cgroup.h
index 2ad6e92f124a..3d3c1c49efe4 100644
--- a/include/linux/hugetlb_cgroup.h
+++ b/include/linux/hugetlb_cgroup.h
@@ -24,8 +24,9 @@ struct file_region;

 /*
  * Minimum page order trackable by hugetlb cgroup.
  * At least 4 pages are necessary for all the tracking information.
- * The second tail page (hpage[2]) is the fault usage cgroup.
- * The third tail page (hpage[3]) is the reservation usage cgroup.
+ * The second tail page (hpage[SUBPAGE_INDEX_CGROUP]) is the fault
+ * usage cgroup. The third tail page (hpage[SUBPAGE_INDEX_CGROUP_RSVD])
+ * is the reservation usage cgroup.
  */
 #define HUGETLB_CGROUP_MIN_ORDER       2

@@ -66,9 +67,9 @@ __hugetlb_cgroup_from_page(struct page *page, bool rsvd)
        if (compound_order(page) < HUGETLB_CGROUP_MIN_ORDER)
                return NULL;
        if (rsvd)
-               return (struct hugetlb_cgroup *)page[3].private;
+               return (void *)page_private(page + SUBPAGE_INDEX_CGROUP_RSVD);
        else
-               return (struct hugetlb_cgroup *)page[2].private;
+               return (void *)page_private(page + SUBPAGE_INDEX_CGROUP);
 }

 static inline struct hugetlb_cgroup *hugetlb_cgroup_from_page(struct page *page)
@@ -90,9 +91,11 @@ static inline int __set_hugetlb_cgroup(struct page *page,
        if (compound_order(page) < HUGETLB_CGROUP_MIN_ORDER)
                return -1;
        if (rsvd)
-               page[3].private = (unsigned long)h_cg;
+               set_page_private(page + SUBPAGE_INDEX_CGROUP_RSVD,
+                                (unsigned long)h_cg);
        else
-               page[2].private = (unsigned long)h_cg;
+               set_page_private(page + SUBPAGE_INDEX_CGROUP,
+                                (unsigned long)h_cg);

        return 0;
 }

diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 0e14fad63823..fdabc1d0ef98 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -1346,17 +1346,17 @@ static inline void flush_hpage_update_work(struct hstate *h)
 #ifdef CONFIG_HUGETLB_PAGE_FREE_VMEMMAP
 static inline bool PageHugeInflight(struct page *head)
 {
-       return page_private(head + 5) == -1UL;
+       return page_private(head + SUBPAGE_INDEX_INFLIGHT) == -1UL;
 }

 static inline void SetPageHugeInflight(struct page *head)
 {
-       set_page_private(head + 5, -1UL);
+       set_page_private(head + SUBPAGE_INDEX_INFLIGHT, -1UL);
 }

 static inline void ClearPageHugeInflight(struct page *head)
 {
-       set_page_private(head + 5, 0);
+       set_page_private(head + SUBPAGE_INDEX_INFLIGHT, 0);
 }
 #else
 static inline bool PageHugeInflight(struct page *head)
@@ -1404,7 +1404,7 @@ static inline void hwpoison_subpage_deliver(struct hstate *h, struct page *head)
        if (!PageHWPoison(head) || !free_vmemmap_pages_per_hpage(h))
                return;

-       page = head + page_private(head + 4);
+       page = head + page_private(head + SUBPAGE_INDEX_HWPOISON);

        /*
         * Move PageHWPoison flag from head page to the raw error page,
@@ -1423,7 +1423,7 @@ static inline void hwpoison_subpage_set(struct hstate *h, struct page *head,
                return;

        if (free_vmemmap_pages_per_hpage(h)) {
-               set_page_private(head + 4, page - head);
+               set_page_private(head + SUBPAGE_INDEX_HWPOISON, page - head);
        } else if (page != head) {
                /*
                 * Move PageHWPoison flag from head page to the raw error page,
@@ -1433,7 +1433,6 @@ static inline void hwpoison_subpage_set(struct hstate *h, struct page *head,
                ClearPageHWPoison(head);
        }
 }
-
 #else
 static inline void hwpoison_subpage_deliver(struct hstate *h, struct page *head)
 {
@@ -1514,20 +1513,20 @@ struct hstate *size_to_hstate(unsigned long size)
 bool page_huge_active(struct page *page)
 {
        VM_BUG_ON_PAGE(!PageHuge(page), page);
-       return PageHead(page) && PagePrivate(&page[1]);
+       return PageHead(page) && PagePrivate(&page[SUBPAGE_INDEX_ACTIVE]);
 }

 /* never called for tail page */
 static void set_page_huge_active(struct page *page)
 {
        VM_BUG_ON_PAGE(!PageHeadHuge(page), page);
-       SetPagePrivate(&page[1]);
+       SetPagePrivate(&page[SUBPAGE_INDEX_ACTIVE]);
 }

 static void clear_page_huge_active(struct page *page)
 {
        VM_BUG_ON_PAGE(!PageHeadHuge(page), page);
-       ClearPagePrivate(&page[1]);
+       ClearPagePrivate(&page[SUBPAGE_INDEX_ACTIVE]);
 }
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 0e14fad63823..fdabc1d0ef98 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -1346,17 +1346,17 @@ static inline void flush_hpage_update_work(struct hstate *h)
 #ifdef CONFIG_HUGETLB_PAGE_FREE_VMEMMAP
 static inline bool PageHugeInflight(struct page *head)
 {
-	return page_private(head + 5) == -1UL;
+	return page_private(head + SUBPAGE_INDEX_INFLIGHT) == -1UL;
 }
 
 static inline void SetPageHugeInflight(struct page *head)
 {
-	set_page_private(head + 5, -1UL);
+	set_page_private(head + SUBPAGE_INDEX_INFLIGHT, -1UL);
 }
 
 static inline void ClearPageHugeInflight(struct page *head)
 {
-	set_page_private(head + 5, 0);
+	set_page_private(head + SUBPAGE_INDEX_INFLIGHT, 0);
 }
 #else
 static inline bool PageHugeInflight(struct page *head)
@@ -1404,7 +1404,7 @@ static inline void hwpoison_subpage_deliver(struct hstate *h, struct page *head)
 	if (!PageHWPoison(head) || !free_vmemmap_pages_per_hpage(h))
 		return;
 
-	page = head + page_private(head + 4);
+	page = head + page_private(head + SUBPAGE_INDEX_HWPOISON);
 
 	/*
 	 * Move PageHWPoison flag from head page to the raw error page,
@@ -1423,7 +1423,7 @@ static inline void hwpoison_subpage_set(struct hstate *h, struct page *head,
 		return;
 
 	if (free_vmemmap_pages_per_hpage(h)) {
-		set_page_private(head + 4, page - head);
+		set_page_private(head + SUBPAGE_INDEX_HWPOISON, page - head);
 	} else if (page != head) {
 		/*
 		 * Move PageHWPoison flag from head page to the raw error page,
@@ -1433,7 +1433,6 @@ static inline void hwpoison_subpage_set(struct hstate *h, struct page *head,
 		ClearPageHWPoison(head);
 	}
 }
-
 #else
 static inline void hwpoison_subpage_deliver(struct hstate *h, struct page *head)
 {
@@ -1514,20 +1513,20 @@ struct hstate *size_to_hstate(unsigned long size)
 bool page_huge_active(struct page *page)
 {
 	VM_BUG_ON_PAGE(!PageHuge(page), page);
-	return PageHead(page) && PagePrivate(&page[1]);
+	return PageHead(page) && PagePrivate(&page[SUBPAGE_INDEX_ACTIVE]);
 }
 
 /* never called for tail page */
 static void set_page_huge_active(struct page *page)
 {
 	VM_BUG_ON_PAGE(!PageHeadHuge(page), page);
-	SetPagePrivate(&page[1]);
+	SetPagePrivate(&page[SUBPAGE_INDEX_ACTIVE]);
 }
 
 static void clear_page_huge_active(struct page *page)
 {
 	VM_BUG_ON_PAGE(!PageHeadHuge(page), page);
-	ClearPagePrivate(&page[1]);
+	ClearPagePrivate(&page[SUBPAGE_INDEX_ACTIVE]);
 }
 
 /*
@@ -1539,17 +1538,17 @@ static inline bool PageHugeTemporary(struct page *page)
 	if (!PageHuge(page))
 		return false;
 
-	return (unsigned long)page[2].mapping == -1U;
+	return (unsigned long)page[SUBPAGE_INDEX_TEMPORARY].mapping == -1U;
 }
 
 static inline void SetPageHugeTemporary(struct page *page)
 {
-	page[2].mapping = (void *)-1U;
+	page[SUBPAGE_INDEX_TEMPORARY].mapping = (void *)-1U;
 }
 
 static inline void ClearPageHugeTemporary(struct page *page)
 {
-	page[2].mapping = NULL;
+	page[SUBPAGE_INDEX_TEMPORARY].mapping = NULL;
 }
 
 static void __free_huge_page(struct page *page)
@@ -3374,7 +3373,7 @@ void __init hugetlb_add_hstate(unsigned int order)
 		return;
 	}
 	BUG_ON(hugetlb_max_hstate >= HUGE_MAX_HSTATE);
-	BUG_ON(order == 0);
+	BUG_ON((1U << order) < NR_USED_SUBPAGE);
 	h = &hstates[hugetlb_max_hstate++];
 	h->order = order;
 	h->mask = ~((1ULL << (order + PAGE_SHIFT)) - 1);
diff --git a/mm/hugetlb_vmemmap.c b/mm/hugetlb_vmemmap.c
index 7dcb4aa1e512..6b8f7bb2273e 100644
--- a/mm/hugetlb_vmemmap.c
+++ b/mm/hugetlb_vmemmap.c
@@ -242,6 +242,14 @@ void __init hugetlb_vmemmap_init(struct hstate *h)
 	unsigned int nr_pages = pages_per_huge_page(h);
 	unsigned int vmemmap_pages;
 
+	/*
+	 * There are only (RESERVE_VMEMMAP_SIZE / sizeof(struct page)) struct
+	 * pages usable when CONFIG_HUGETLB_PAGE_FREE_VMEMMAP is enabled, so
+	 * add a BUILD_BUG_ON to catch invalid usage of the tail struct pages.
+	 */
+	BUILD_BUG_ON(NR_USED_SUBPAGE >=
+		     RESERVE_VMEMMAP_SIZE / sizeof(struct page));
+
 	if (!hugetlb_free_vmemmap_enabled)
 		return;
 
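[Editor's note: for a feel of the headroom the new BUILD_BUG_ON enforces,
the sketch below plugs in assumed values: a RESERVE_VMEMMAP_SIZE of two
4 KiB pages (as defined earlier in this series) and a 64-byte struct page
(typical on x86_64). Neither number appears in this email. Under those
assumptions the bound is 8192 / 64 = 128 usable struct pages, against
NR_USED_SUBPAGE == 6.]

#include <assert.h>

#define PAGE_SHIFT		12	/* 4 KiB pages (assumed) */
#define RESERVE_VMEMMAP_NR	2UL	/* assumed from earlier in the series */
#define RESERVE_VMEMMAP_SIZE	(RESERVE_VMEMMAP_NR << PAGE_SHIFT)

struct fake_page { unsigned long words[8]; };	/* 64 bytes, like x86_64's struct page */

#define NR_USED_SUBPAGE 6	/* terminator value of the enum added by this patch */

/* mirrors BUILD_BUG_ON(NR_USED_SUBPAGE >= RESERVE_VMEMMAP_SIZE / sizeof(struct page)) */
static_assert(NR_USED_SUBPAGE < RESERVE_VMEMMAP_SIZE / sizeof(struct fake_page),
	      "named tail-page slots exceed the reserved vmemmap area");

int main(void) { return 0; }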
From patchwork Sun Jan 17 15:10:53 2021
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Muchun Song
X-Patchwork-Id: 12025509
From: Muchun Song
To: corbet@lwn.net, mike.kravetz@oracle.com, tglx@linutronix.de,
 mingo@redhat.com, bp@alien8.de, x86@kernel.org, hpa@zytor.com,
 dave.hansen@linux.intel.com, luto@kernel.org, peterz@infradead.org,
 viro@zeniv.linux.org.uk, akpm@linux-foundation.org, paulmck@kernel.org,
 mchehab+huawei@kernel.org, pawan.kumar.gupta@linux.intel.com,
 rdunlap@infradead.org, oneukum@suse.com, anshuman.khandual@arm.com,
 jroedel@suse.de, almasrymina@google.com, rientjes@google.com,
 willy@infradead.org, osalvador@suse.de, mhocko@suse.com,
 song.bao.hua@hisilicon.com, david@redhat.com, naoya.horiguchi@nec.com
Cc: duanxiongchun@bytedance.com, linux-doc@vger.kernel.org,
 linux-kernel@vger.kernel.org, linux-mm@kvack.org,
 linux-fsdevel@vger.kernel.org, Muchun Song
Subject: [PATCH v13 12/12] mm: hugetlb: optimize the code with the help of the compiler
Date: Sun, 17 Jan 2021 23:10:53 +0800
Message-Id: <20210117151053.24600-13-songmuchun@bytedance.com>
X-Mailer: git-send-email 2.21.0 (Apple Git-122)
In-Reply-To: <20210117151053.24600-1-songmuchun@bytedance.com>
References: <20210117151053.24600-1-songmuchun@bytedance.com>

The vmemmap optimization cannot be applied when a "struct page" crosses
page boundaries, i.e. when sizeof(struct page) is not a power of 2.
Make that condition visible to the compiler as a compile-time constant:
free_vmemmap_pages_per_hpage() then returns zero in the non-power-of-2
case, and the compiler optimizes most of the affected functions away.

Signed-off-by: Muchun Song
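[Editor's note: below is a standalone sketch of the constant-folding
trick this patch relies on; the fake_page type and function names are
illustrative, not kernel code. Because sizeof() is a compile-time
constant, is_power_of_2(sizeof(...)) folds to true or false at build
time, so with optimizations enabled (e.g. gcc -O2) the guarded body
becomes dead code and the helper folds to the constant 0.]

#include <stdbool.h>
#include <stdio.h>

/* same test the kernel's is_power_of_2() performs */
static inline bool is_power_of_2(unsigned long n)
{
	return n != 0 && (n & (n - 1)) == 0;
}

struct fake_page { unsigned long words[7]; };	/* 56 bytes: deliberately not a power of 2 */

/* analogue of free_vmemmap_pages_per_hpage(): folds to the constant 0 here */
static inline unsigned int free_vmemmap_pages(unsigned int nr_free)
{
	return is_power_of_2(sizeof(struct fake_page)) ? nr_free : 0;
}

static void vmemmap_setup(void)
{
	/*
	 * is_power_of_2(56) is the constant false, so this always returns
	 * early and the compiler deletes everything past it.
	 */
	if (!is_power_of_2(sizeof(struct fake_page)))
		return;

	puts("remapping vmemmap...");	/* dead code for a 56-byte fake_page */
}

int main(void)
{
	vmemmap_setup();
	printf("freeable vmemmap pages: %u\n", free_vmemmap_pages(512));	/* prints 0 */
	return 0;
}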
---
 include/linux/hugetlb.h | 3 ++-
 mm/hugetlb_vmemmap.c    | 7 +++++++
 mm/hugetlb_vmemmap.h    | 5 +++--
 3 files changed, 12 insertions(+), 3 deletions(-)

diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h
index 05fd2db09b78..b685bc4d79d5 100644
--- a/include/linux/hugetlb.h
+++ b/include/linux/hugetlb.h
@@ -792,7 +792,8 @@ extern bool hugetlb_free_vmemmap_enabled;
 
 static inline bool is_hugetlb_free_vmemmap_enabled(void)
 {
-	return hugetlb_free_vmemmap_enabled;
+	return hugetlb_free_vmemmap_enabled &&
+	       is_power_of_2(sizeof(struct page));
 }
 #else
 static inline bool is_hugetlb_free_vmemmap_enabled(void)
diff --git a/mm/hugetlb_vmemmap.c b/mm/hugetlb_vmemmap.c
index 6b8f7bb2273e..5ea12c7507a6 100644
--- a/mm/hugetlb_vmemmap.c
+++ b/mm/hugetlb_vmemmap.c
@@ -250,6 +250,13 @@ void __init hugetlb_vmemmap_init(struct hstate *h)
 	BUILD_BUG_ON(NR_USED_SUBPAGE >=
 		     RESERVE_VMEMMAP_SIZE / sizeof(struct page));
 
+	/*
+	 * The compiler can optimize this function away entirely when the
+	 * size of struct page is not a power of 2.
+	 */
+	if (!is_power_of_2(sizeof(struct page)))
+		return;
+
 	if (!hugetlb_free_vmemmap_enabled)
 		return;
 
diff --git a/mm/hugetlb_vmemmap.h b/mm/hugetlb_vmemmap.h
index 8fd9ae113dbd..e8de41295d4d 100644
--- a/mm/hugetlb_vmemmap.h
+++ b/mm/hugetlb_vmemmap.h
@@ -17,11 +17,12 @@ void hugetlb_vmemmap_init(struct hstate *h);
 
 /*
  * How many vmemmap pages associated with a HugeTLB page that can be freed
- * to the buddy allocator.
+ * to the buddy allocator. The is_power_of_2() check lets the compiler
+ * optimize the code away as much as possible.
  */
 static inline unsigned int free_vmemmap_pages_per_hpage(struct hstate *h)
 {
-	return h->nr_free_vmemmap_pages;
+	return is_power_of_2(sizeof(struct page)) ? h->nr_free_vmemmap_pages : 0;
 }
 #else
 static inline void alloc_huge_page_vmemmap(struct hstate *h, struct page *head)