From patchwork Mon Mar 15 09:20:08 2021
X-Patchwork-Submitter: Muchun Song
X-Patchwork-Id: 12138777
From: Muchun Song
To: corbet@lwn.net, mike.kravetz@oracle.com, tglx@linutronix.de, mingo@redhat.com, bp@alien8.de, x86@kernel.org,
hpa@zytor.com, dave.hansen@linux.intel.com, luto@kernel.org, peterz@infradead.org, viro@zeniv.linux.org.uk, akpm@linux-foundation.org, paulmck@kernel.org, mchehab+huawei@kernel.org, pawan.kumar.gupta@linux.intel.com, rdunlap@infradead.org, oneukum@suse.com, anshuman.khandual@arm.com, jroedel@suse.de, almasrymina@google.com, rientjes@google.com, willy@infradead.org, osalvador@suse.de, mhocko@suse.com, song.bao.hua@hisilicon.com, david@redhat.com, naoya.horiguchi@nec.com, joao.m.martins@oracle.com Cc: duanxiongchun@bytedance.com, linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org, linux-mm@kvack.org, linux-fsdevel@vger.kernel.org, Muchun Song , Miaohe Lin , Chen Huang , Bodeddula Balasubramaniam Subject: [PATCH v19 1/8] mm: memory_hotplug: factor out bootmem core functions to bootmem_info.c Date: Mon, 15 Mar 2021 17:20:08 +0800 Message-Id: <20210315092015.35396-2-songmuchun@bytedance.com> X-Mailer: git-send-email 2.21.0 (Apple Git-122) In-Reply-To: <20210315092015.35396-1-songmuchun@bytedance.com> References: <20210315092015.35396-1-songmuchun@bytedance.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-fsdevel@vger.kernel.org Move bootmem info registration common API to individual bootmem_info.c. And we will use {get,put}_page_bootmem() to initialize the page for the vmemmap pages or free the vmemmap pages to buddy in the later patch. So move them out of CONFIG_MEMORY_HOTPLUG_SPARSE. This is just code movement without any functional change. Signed-off-by: Muchun Song Acked-by: Mike Kravetz Reviewed-by: Oscar Salvador Reviewed-by: David Hildenbrand Reviewed-by: Miaohe Lin Tested-by: Chen Huang Tested-by: Bodeddula Balasubramaniam --- arch/sparc/mm/init_64.c | 1 + arch/x86/mm/init_64.c | 3 +- include/linux/bootmem_info.h | 40 +++++++++++++ include/linux/memory_hotplug.h | 27 --------- mm/Makefile | 1 + mm/bootmem_info.c | 127 +++++++++++++++++++++++++++++++++++++++++ mm/memory_hotplug.c | 116 ------------------------------------- mm/sparse.c | 1 + 8 files changed, 172 insertions(+), 144 deletions(-) create mode 100644 include/linux/bootmem_info.h create mode 100644 mm/bootmem_info.c diff --git a/arch/sparc/mm/init_64.c b/arch/sparc/mm/init_64.c index 182bb7bdaa0a..c709b72e81bf 100644 --- a/arch/sparc/mm/init_64.c +++ b/arch/sparc/mm/init_64.c @@ -27,6 +27,7 @@ #include #include #include +#include #include #include diff --git a/arch/x86/mm/init_64.c b/arch/x86/mm/init_64.c index b5a3fa4033d3..0a45f062826e 100644 --- a/arch/x86/mm/init_64.c +++ b/arch/x86/mm/init_64.c @@ -33,6 +33,7 @@ #include #include #include +#include #include #include @@ -1571,7 +1572,7 @@ int __meminit vmemmap_populate(unsigned long start, unsigned long end, int node, return err; } -#if defined(CONFIG_MEMORY_HOTPLUG_SPARSE) && defined(CONFIG_HAVE_BOOTMEM_INFO_NODE) +#ifdef CONFIG_HAVE_BOOTMEM_INFO_NODE void register_page_bootmem_memmap(unsigned long section_nr, struct page *start_page, unsigned long nr_pages) { diff --git a/include/linux/bootmem_info.h b/include/linux/bootmem_info.h new file mode 100644 index 000000000000..4ed6dee1adc9 --- /dev/null +++ b/include/linux/bootmem_info.h @@ -0,0 +1,40 @@ +/* SPDX-License-Identifier: GPL-2.0 */ +#ifndef __LINUX_BOOTMEM_INFO_H +#define __LINUX_BOOTMEM_INFO_H + +#include + +/* + * Types for free bootmem stored in page->lru.next. These have to be in + * some random range in unsigned long space for debugging purposes. 
+ */ +enum { + MEMORY_HOTPLUG_MIN_BOOTMEM_TYPE = 12, + SECTION_INFO = MEMORY_HOTPLUG_MIN_BOOTMEM_TYPE, + MIX_SECTION_INFO, + NODE_INFO, + MEMORY_HOTPLUG_MAX_BOOTMEM_TYPE = NODE_INFO, +}; + +#ifdef CONFIG_HAVE_BOOTMEM_INFO_NODE +void __init register_page_bootmem_info_node(struct pglist_data *pgdat); + +void get_page_bootmem(unsigned long info, struct page *page, + unsigned long type); +void put_page_bootmem(struct page *page); +#else +static inline void register_page_bootmem_info_node(struct pglist_data *pgdat) +{ +} + +static inline void put_page_bootmem(struct page *page) +{ +} + +static inline void get_page_bootmem(unsigned long info, struct page *page, + unsigned long type) +{ +} +#endif + +#endif /* __LINUX_BOOTMEM_INFO_H */ diff --git a/include/linux/memory_hotplug.h b/include/linux/memory_hotplug.h index 7288aa5ef73b..96659a8b9d02 100644 --- a/include/linux/memory_hotplug.h +++ b/include/linux/memory_hotplug.h @@ -18,18 +18,6 @@ struct vmem_altmap; #ifdef CONFIG_MEMORY_HOTPLUG struct page *pfn_to_online_page(unsigned long pfn); -/* - * Types for free bootmem stored in page->lru.next. These have to be in - * some random range in unsigned long space for debugging purposes. - */ -enum { - MEMORY_HOTPLUG_MIN_BOOTMEM_TYPE = 12, - SECTION_INFO = MEMORY_HOTPLUG_MIN_BOOTMEM_TYPE, - MIX_SECTION_INFO, - NODE_INFO, - MEMORY_HOTPLUG_MAX_BOOTMEM_TYPE = NODE_INFO, -}; - /* Types for control the zone type of onlined and offlined memory */ enum { /* Offline the memory. */ @@ -210,17 +198,6 @@ static inline void arch_refresh_nodedata(int nid, pg_data_t *pgdat) #endif /* CONFIG_NUMA */ #endif /* CONFIG_HAVE_ARCH_NODEDATA_EXTENSION */ -#ifdef CONFIG_HAVE_BOOTMEM_INFO_NODE -extern void __init register_page_bootmem_info_node(struct pglist_data *pgdat); -#else -static inline void register_page_bootmem_info_node(struct pglist_data *pgdat) -{ -} -#endif -extern void put_page_bootmem(struct page *page); -extern void get_page_bootmem(unsigned long ingo, struct page *page, - unsigned long type); - void get_online_mems(void); void put_online_mems(void); @@ -248,10 +225,6 @@ static inline void zone_span_writelock(struct zone *zone) {} static inline void zone_span_writeunlock(struct zone *zone) {} static inline void zone_seqlock_init(struct zone *zone) {} -static inline void register_page_bootmem_info_node(struct pglist_data *pgdat) -{ -} - static inline int try_online_node(int nid) { return 0; diff --git a/mm/Makefile b/mm/Makefile index 72227b24a616..daabf86d7da8 100644 --- a/mm/Makefile +++ b/mm/Makefile @@ -83,6 +83,7 @@ obj-$(CONFIG_SLUB) += slub.o obj-$(CONFIG_KASAN) += kasan/ obj-$(CONFIG_KFENCE) += kfence/ obj-$(CONFIG_FAILSLAB) += failslab.o +obj-$(CONFIG_HAVE_BOOTMEM_INFO_NODE) += bootmem_info.o obj-$(CONFIG_MEMORY_HOTPLUG) += memory_hotplug.o obj-$(CONFIG_MEMTEST) += memtest.o obj-$(CONFIG_MIGRATION) += migrate.o diff --git a/mm/bootmem_info.c b/mm/bootmem_info.c new file mode 100644 index 000000000000..5b152dba7344 --- /dev/null +++ b/mm/bootmem_info.c @@ -0,0 +1,127 @@ +// SPDX-License-Identifier: GPL-2.0 +/* + * Bootmem core functions. + * + * Copyright (c) 2020, Bytedance. 
+ * + * Author: Muchun Song + * + */ +#include +#include +#include +#include +#include + +void get_page_bootmem(unsigned long info, struct page *page, unsigned long type) +{ + page->freelist = (void *)type; + SetPagePrivate(page); + set_page_private(page, info); + page_ref_inc(page); +} + +void put_page_bootmem(struct page *page) +{ + unsigned long type; + + type = (unsigned long) page->freelist; + BUG_ON(type < MEMORY_HOTPLUG_MIN_BOOTMEM_TYPE || + type > MEMORY_HOTPLUG_MAX_BOOTMEM_TYPE); + + if (page_ref_dec_return(page) == 1) { + page->freelist = NULL; + ClearPagePrivate(page); + set_page_private(page, 0); + INIT_LIST_HEAD(&page->lru); + free_reserved_page(page); + } +} + +#ifndef CONFIG_SPARSEMEM_VMEMMAP +static void register_page_bootmem_info_section(unsigned long start_pfn) +{ + unsigned long mapsize, section_nr, i; + struct mem_section *ms; + struct page *page, *memmap; + struct mem_section_usage *usage; + + section_nr = pfn_to_section_nr(start_pfn); + ms = __nr_to_section(section_nr); + + /* Get section's memmap address */ + memmap = sparse_decode_mem_map(ms->section_mem_map, section_nr); + + /* + * Get page for the memmap's phys address + * XXX: need more consideration for sparse_vmemmap... + */ + page = virt_to_page(memmap); + mapsize = sizeof(struct page) * PAGES_PER_SECTION; + mapsize = PAGE_ALIGN(mapsize) >> PAGE_SHIFT; + + /* remember memmap's page */ + for (i = 0; i < mapsize; i++, page++) + get_page_bootmem(section_nr, page, SECTION_INFO); + + usage = ms->usage; + page = virt_to_page(usage); + + mapsize = PAGE_ALIGN(mem_section_usage_size()) >> PAGE_SHIFT; + + for (i = 0; i < mapsize; i++, page++) + get_page_bootmem(section_nr, page, MIX_SECTION_INFO); + +} +#else /* CONFIG_SPARSEMEM_VMEMMAP */ +static void register_page_bootmem_info_section(unsigned long start_pfn) +{ + unsigned long mapsize, section_nr, i; + struct mem_section *ms; + struct page *page, *memmap; + struct mem_section_usage *usage; + + section_nr = pfn_to_section_nr(start_pfn); + ms = __nr_to_section(section_nr); + + memmap = sparse_decode_mem_map(ms->section_mem_map, section_nr); + + register_page_bootmem_memmap(section_nr, memmap, PAGES_PER_SECTION); + + usage = ms->usage; + page = virt_to_page(usage); + + mapsize = PAGE_ALIGN(mem_section_usage_size()) >> PAGE_SHIFT; + + for (i = 0; i < mapsize; i++, page++) + get_page_bootmem(section_nr, page, MIX_SECTION_INFO); +} +#endif /* !CONFIG_SPARSEMEM_VMEMMAP */ + +void __init register_page_bootmem_info_node(struct pglist_data *pgdat) +{ + unsigned long i, pfn, end_pfn, nr_pages; + int node = pgdat->node_id; + struct page *page; + + nr_pages = PAGE_ALIGN(sizeof(struct pglist_data)) >> PAGE_SHIFT; + page = virt_to_page(pgdat); + + for (i = 0; i < nr_pages; i++, page++) + get_page_bootmem(node, page, NODE_INFO); + + pfn = pgdat->node_start_pfn; + end_pfn = pgdat_end_pfn(pgdat); + + /* register section info */ + for (; pfn < end_pfn; pfn += PAGES_PER_SECTION) { + /* + * Some platforms can assign the same pfn to multiple nodes - on + * node0 as well as nodeN. To avoid registering a pfn against + * multiple nodes we check that this pfn does not already + * reside in some other nodes. 
+ */ + if (pfn_valid(pfn) && (early_pfn_to_nid(pfn) == node)) + register_page_bootmem_info_section(pfn); + } +} diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c index 5ba51a8bdaeb..a2a72b617040 100644 --- a/mm/memory_hotplug.c +++ b/mm/memory_hotplug.c @@ -144,122 +144,6 @@ static void release_memory_resource(struct resource *res) } #ifdef CONFIG_MEMORY_HOTPLUG_SPARSE -void get_page_bootmem(unsigned long info, struct page *page, - unsigned long type) -{ - page->freelist = (void *)type; - SetPagePrivate(page); - set_page_private(page, info); - page_ref_inc(page); -} - -void put_page_bootmem(struct page *page) -{ - unsigned long type; - - type = (unsigned long) page->freelist; - BUG_ON(type < MEMORY_HOTPLUG_MIN_BOOTMEM_TYPE || - type > MEMORY_HOTPLUG_MAX_BOOTMEM_TYPE); - - if (page_ref_dec_return(page) == 1) { - page->freelist = NULL; - ClearPagePrivate(page); - set_page_private(page, 0); - INIT_LIST_HEAD(&page->lru); - free_reserved_page(page); - } -} - -#ifdef CONFIG_HAVE_BOOTMEM_INFO_NODE -#ifndef CONFIG_SPARSEMEM_VMEMMAP -static void register_page_bootmem_info_section(unsigned long start_pfn) -{ - unsigned long mapsize, section_nr, i; - struct mem_section *ms; - struct page *page, *memmap; - struct mem_section_usage *usage; - - section_nr = pfn_to_section_nr(start_pfn); - ms = __nr_to_section(section_nr); - - /* Get section's memmap address */ - memmap = sparse_decode_mem_map(ms->section_mem_map, section_nr); - - /* - * Get page for the memmap's phys address - * XXX: need more consideration for sparse_vmemmap... - */ - page = virt_to_page(memmap); - mapsize = sizeof(struct page) * PAGES_PER_SECTION; - mapsize = PAGE_ALIGN(mapsize) >> PAGE_SHIFT; - - /* remember memmap's page */ - for (i = 0; i < mapsize; i++, page++) - get_page_bootmem(section_nr, page, SECTION_INFO); - - usage = ms->usage; - page = virt_to_page(usage); - - mapsize = PAGE_ALIGN(mem_section_usage_size()) >> PAGE_SHIFT; - - for (i = 0; i < mapsize; i++, page++) - get_page_bootmem(section_nr, page, MIX_SECTION_INFO); - -} -#else /* CONFIG_SPARSEMEM_VMEMMAP */ -static void register_page_bootmem_info_section(unsigned long start_pfn) -{ - unsigned long mapsize, section_nr, i; - struct mem_section *ms; - struct page *page, *memmap; - struct mem_section_usage *usage; - - section_nr = pfn_to_section_nr(start_pfn); - ms = __nr_to_section(section_nr); - - memmap = sparse_decode_mem_map(ms->section_mem_map, section_nr); - - register_page_bootmem_memmap(section_nr, memmap, PAGES_PER_SECTION); - - usage = ms->usage; - page = virt_to_page(usage); - - mapsize = PAGE_ALIGN(mem_section_usage_size()) >> PAGE_SHIFT; - - for (i = 0; i < mapsize; i++, page++) - get_page_bootmem(section_nr, page, MIX_SECTION_INFO); -} -#endif /* !CONFIG_SPARSEMEM_VMEMMAP */ - -void __init register_page_bootmem_info_node(struct pglist_data *pgdat) -{ - unsigned long i, pfn, end_pfn, nr_pages; - int node = pgdat->node_id; - struct page *page; - - nr_pages = PAGE_ALIGN(sizeof(struct pglist_data)) >> PAGE_SHIFT; - page = virt_to_page(pgdat); - - for (i = 0; i < nr_pages; i++, page++) - get_page_bootmem(node, page, NODE_INFO); - - pfn = pgdat->node_start_pfn; - end_pfn = pgdat_end_pfn(pgdat); - - /* register section info */ - for (; pfn < end_pfn; pfn += PAGES_PER_SECTION) { - /* - * Some platforms can assign the same pfn to multiple nodes - on - * node0 as well as nodeN. To avoid registering a pfn against - * multiple nodes we check that this pfn does not already - * reside in some other nodes. 
+ */ - if (pfn_valid(pfn) && (early_pfn_to_nid(pfn) == node)) - register_page_bootmem_info_section(pfn); - } -} -#endif /* CONFIG_HAVE_BOOTMEM_INFO_NODE */ - static int check_pfn_span(unsigned long pfn, unsigned long nr_pages, const char *reason) {
diff --git a/mm/sparse.c b/mm/sparse.c index 7bd23f9d6cef..87676bf3af40 100644 --- a/mm/sparse.c +++ b/mm/sparse.c @@ -13,6 +13,7 @@ #include #include #include +#include #include "internal.h" #include
From patchwork Mon Mar 15 09:20:09 2021
X-Patchwork-Submitter: Muchun Song
X-Patchwork-Id: 12138779
From: Muchun Song
To: corbet@lwn.net, mike.kravetz@oracle.com, tglx@linutronix.de, mingo@redhat.com, bp@alien8.de, x86@kernel.org, hpa@zytor.com, dave.hansen@linux.intel.com, luto@kernel.org, peterz@infradead.org, viro@zeniv.linux.org.uk, akpm@linux-foundation.org, paulmck@kernel.org, mchehab+huawei@kernel.org, pawan.kumar.gupta@linux.intel.com, rdunlap@infradead.org, oneukum@suse.com, anshuman.khandual@arm.com, jroedel@suse.de, almasrymina@google.com, rientjes@google.com, willy@infradead.org, osalvador@suse.de, mhocko@suse.com, song.bao.hua@hisilicon.com, david@redhat.com, naoya.horiguchi@nec.com, joao.m.martins@oracle.com
Cc: duanxiongchun@bytedance.com, linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org, linux-mm@kvack.org, linux-fsdevel@vger.kernel.org, Muchun Song , Miaohe Lin , Chen Huang , Bodeddula Balasubramaniam , Balbir Singh
Subject: [PATCH v19 2/8] mm: hugetlb: introduce a new config HUGETLB_PAGE_FREE_VMEMMAP
Date: Mon, 15 Mar 2021 17:20:09 +0800
Message-Id: <20210315092015.35396-3-songmuchun@bytedance.com>
In-Reply-To: <20210315092015.35396-1-songmuchun@bytedance.com>
References: <20210315092015.35396-1-songmuchun@bytedance.com>

The option HUGETLB_PAGE_FREE_VMEMMAP allows for the freeing of some vmemmap pages associated with pre-allocated HugeTLB pages. For example, on X86_64, 6 vmemmap pages of size 4KB each can be saved for each 2MB HugeTLB page, and 4094 vmemmap pages of size 4KB each can be saved for each 1GB HugeTLB page.

When a HugeTLB page is allocated or freed, the vmemmap array representing the range associated with the page will need to be remapped. When a page is allocated, vmemmap pages are freed after remapping. When a page is freed, previously discarded vmemmap pages must be allocated before remapping.

The config option is introduced early so that supporting code can be written to depend on the option. The initial version of the code only provides support for x86-64.

If config HAVE_BOOTMEM_INFO_NODE is enabled, the code that frees vmemmap pages depends on it to free them. Otherwise, free_reserved_page() is used to free the vmemmap pages.

The routine register_page_bootmem_info() is used to register bootmem info. Therefore, make sure register_page_bootmem_info is enabled if HUGETLB_PAGE_FREE_VMEMMAP is defined.
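The 6 and 4094 figures above follow from simple struct page bookkeeping. The sketch below is an illustration added for clarity, not code from this series; it assumes 4 KB base pages, a 64-byte struct page, and the two vmemmap pages per HugeTLB page that must stay mapped (as described later in the series).

/*
 * Rough sketch of the arithmetic behind the vmemmap savings quoted above.
 * All three constants are assumptions for a typical x86-64 configuration.
 */
#include <stdio.h>

#define BASE_PAGE_SIZE   4096UL  /* 4 KB base page (assumption) */
#define STRUCT_PAGE_SIZE 64UL    /* sizeof(struct page) (assumption) */
#define RESERVED_VMEMMAP 2UL     /* vmemmap pages that must remain mapped */

static unsigned long freed_vmemmap_pages(unsigned long hugepage_size)
{
	unsigned long nr_struct_pages = hugepage_size / BASE_PAGE_SIZE;
	unsigned long vmemmap_bytes = nr_struct_pages * STRUCT_PAGE_SIZE;
	unsigned long vmemmap_pages = vmemmap_bytes / BASE_PAGE_SIZE;

	return vmemmap_pages - RESERVED_VMEMMAP;
}

int main(void)
{
	/* 2 MB HugeTLB page: 512 struct pages -> 8 vmemmap pages -> 6 freed */
	printf("2MB: %lu vmemmap pages freed\n", freed_vmemmap_pages(2UL << 20));
	/* 1 GB HugeTLB page: 262144 struct pages -> 4096 vmemmap pages -> 4094 freed */
	printf("1GB: %lu vmemmap pages freed\n", freed_vmemmap_pages(1UL << 30));
	return 0;
}

Compiled and run, this prints 6 for the 2 MB case and 4094 for the 1 GB case, matching the savings quoted in the changelog.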
Signed-off-by: Muchun Song Reviewed-by: Oscar Salvador Acked-by: Mike Kravetz Reviewed-by: Miaohe Lin Tested-by: Chen Huang Tested-by: Bodeddula Balasubramaniam Reviewed-by: Balbir Singh --- arch/x86/mm/init_64.c | 2 +- fs/Kconfig | 5 +++++ 2 files changed, 6 insertions(+), 1 deletion(-) diff --git a/arch/x86/mm/init_64.c b/arch/x86/mm/init_64.c index 0a45f062826e..0435bee2e172 100644 --- a/arch/x86/mm/init_64.c +++ b/arch/x86/mm/init_64.c @@ -1225,7 +1225,7 @@ static struct kcore_list kcore_vsyscall; static void __init register_page_bootmem_info(void) { -#ifdef CONFIG_NUMA +#if defined(CONFIG_NUMA) || defined(CONFIG_HUGETLB_PAGE_FREE_VMEMMAP) int i; for_each_online_node(i) diff --git a/fs/Kconfig b/fs/Kconfig index eccbcf1e3f2e..46fef1f721c0 100644 --- a/fs/Kconfig +++ b/fs/Kconfig @@ -237,6 +237,11 @@ config HUGETLBFS config HUGETLB_PAGE def_bool HUGETLBFS +config HUGETLB_PAGE_FREE_VMEMMAP + def_bool HUGETLB_PAGE + depends on X86_64 + depends on SPARSEMEM_VMEMMAP + config MEMFD_CREATE def_bool TMPFS || HUGETLBFS From patchwork Mon Mar 15 09:20:10 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Muchun Song X-Patchwork-Id: 12138781 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-16.8 required=3.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,USER_AGENT_GIT autolearn=unavailable autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 34529C4332D for ; Mon, 15 Mar 2021 09:27:32 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id E1E7664E90 for ; Mon, 15 Mar 2021 09:27:31 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S229735AbhCOJ1B (ORCPT ); Mon, 15 Mar 2021 05:27:01 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:42236 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S229749AbhCOJ0l (ORCPT ); Mon, 15 Mar 2021 05:26:41 -0400 Received: from mail-pj1-x1033.google.com (mail-pj1-x1033.google.com [IPv6:2607:f8b0:4864:20::1033]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 5213FC061762 for ; Mon, 15 Mar 2021 02:26:41 -0700 (PDT) Received: by mail-pj1-x1033.google.com with SMTP id a22-20020a17090aa516b02900c1215e9b33so14017839pjq.5 for ; Mon, 15 Mar 2021 02:26:41 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=bytedance-com.20150623.gappssmtp.com; s=20150623; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=h2/ODMtgLnweb1N5hCFVuD0E57uJQlZA+87DYa7fEos=; b=fWqFlVpLDZVvabLTqY6XWqIm5EPe9ESjAJ65lwHImJuxbe6oi0UTTL5++R8iGfCNsn rLoc43/C1rfgUtH3bGELhMC3P/tjwKuoyKOTZYklNssCqsCh7a4ShaabLwxV0mb2zC5P iwxyaO+Nw/P4QkOrK75T/YF5xiwQS5OOZZxVcy1kmnGWesurCAKSBVceRc2ilGKgftmI pznCl127tMcy1ByP+QBUYNnKsfwH9HPQL5EcmqlGacHmQ9V6ahNdxhbXGsnmrB/ZWnZY 4tF7cV2wvQJwNzfGAzk9grb7AI3HY/LZ5O8xcf15QlN7nQMbTtRUAja/pzRN348G6fbc BPyA== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=h2/ODMtgLnweb1N5hCFVuD0E57uJQlZA+87DYa7fEos=; 
b=r0xONe0hIm1xkscymQiAcrPQ21CzwP3rmW+H2flTIS03ZbduifZ3hck0YSbUPqhxnk QECsYCSzT8eJaBOkF8om33L6ENrSBQ0FU543GDoKYQF+XL2iL8UoU3ftQps2Wgi4IlLX Zdz/X5wNSSXC6Q33Btx/THJxMBHdmALzgqzWUQf7zlBHxf1dcGI5d3xvHZCyZB47D+tl /OPUdtgWYSr3e0gzKNL8iVJvLXupvFcXhhV9e1jO721pa4HiHCvboWiwv1DDbA/613aN QdQB1Mla2TQbQI/xvl+xe9apF14TF7JNGJdiPmbPwCmY9GYEZX1P7wYGO5nK/Da+wPGa f05w== X-Gm-Message-State: AOAM532rv7sRsvOuGBMCZg30ge/sjgopmOImzwNM/Rm4TD9n2KbnPhWa Mio/jutQxcCLV38K2sobnCrvJg== X-Google-Smtp-Source: ABdhPJzN98itLq+BobCymLYOZG6KeM6JPt1UFnCPke4TlN6xln50a6pC7P2034I9S10auzTeMIb7Sw== X-Received: by 2002:a17:903:4093:b029:e5:b933:fab7 with SMTP id z19-20020a1709034093b02900e5b933fab7mr10455889plc.11.1615800400893; Mon, 15 Mar 2021 02:26:40 -0700 (PDT) Received: from localhost.localdomain ([139.177.225.231]) by smtp.gmail.com with ESMTPSA id gm10sm10607883pjb.4.2021.03.15.02.26.29 (version=TLS1_2 cipher=ECDHE-ECDSA-AES128-GCM-SHA256 bits=128/128); Mon, 15 Mar 2021 02:26:40 -0700 (PDT) From: Muchun Song To: corbet@lwn.net, mike.kravetz@oracle.com, tglx@linutronix.de, mingo@redhat.com, bp@alien8.de, x86@kernel.org, hpa@zytor.com, dave.hansen@linux.intel.com, luto@kernel.org, peterz@infradead.org, viro@zeniv.linux.org.uk, akpm@linux-foundation.org, paulmck@kernel.org, mchehab+huawei@kernel.org, pawan.kumar.gupta@linux.intel.com, rdunlap@infradead.org, oneukum@suse.com, anshuman.khandual@arm.com, jroedel@suse.de, almasrymina@google.com, rientjes@google.com, willy@infradead.org, osalvador@suse.de, mhocko@suse.com, song.bao.hua@hisilicon.com, david@redhat.com, naoya.horiguchi@nec.com, joao.m.martins@oracle.com Cc: duanxiongchun@bytedance.com, linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org, linux-mm@kvack.org, linux-fsdevel@vger.kernel.org, Muchun Song , Miaohe Lin , Chen Huang , Bodeddula Balasubramaniam Subject: [PATCH v19 3/8] mm: hugetlb: gather discrete indexes of tail page Date: Mon, 15 Mar 2021 17:20:10 +0800 Message-Id: <20210315092015.35396-4-songmuchun@bytedance.com> X-Mailer: git-send-email 2.21.0 (Apple Git-122) In-Reply-To: <20210315092015.35396-1-songmuchun@bytedance.com> References: <20210315092015.35396-1-songmuchun@bytedance.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-fsdevel@vger.kernel.org For HugeTLB page, there are more metadata to save in the struct page. But the head struct page cannot meet our needs, so we have to abuse other tail struct page to store the metadata. In order to avoid conflicts caused by subsequent use of more tail struct pages, we can gather these discrete indexes of tail struct page. In this case, it will be easier to add a new tail page index later. Signed-off-by: Muchun Song Reviewed-by: Oscar Salvador Reviewed-by: Miaohe Lin Tested-by: Chen Huang Tested-by: Bodeddula Balasubramaniam Acked-by: Michal Hocko --- include/linux/hugetlb.h | 21 +++++++++++++++++++-- include/linux/hugetlb_cgroup.h | 19 +++++++++++-------- 2 files changed, 30 insertions(+), 10 deletions(-) diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h index ce6533584eb7..09421f5f35e2 100644 --- a/include/linux/hugetlb.h +++ b/include/linux/hugetlb.h @@ -28,6 +28,23 @@ typedef struct { unsigned long pd; } hugepd_t; #include #include +/* + * For HugeTLB page, there are more metadata to save in the struct page. But + * the head struct page cannot meet our needs, so we have to abuse other tail + * struct page to store the metadata. In order to avoid conflicts caused by + * subsequent use of more tail struct pages, we gather these discrete indexes + * of tail struct page here. 
+ */ +enum { + SUBPAGE_INDEX_SUBPOOL = 1, /* reuse page->private */ +#ifdef CONFIG_CGROUP_HUGETLB + SUBPAGE_INDEX_CGROUP, /* reuse page->private */ + SUBPAGE_INDEX_CGROUP_RSVD, /* reuse page->private */ + __MAX_CGROUP_SUBPAGE_INDEX = SUBPAGE_INDEX_CGROUP_RSVD, +#endif + __NR_USED_SUBPAGE, +}; + struct hugepage_subpool { spinlock_t lock; long count; @@ -604,13 +621,13 @@ extern unsigned int default_hstate_idx; */ static inline struct hugepage_subpool *hugetlb_page_subpool(struct page *hpage) { - return (struct hugepage_subpool *)(hpage+1)->private; + return (void *)page_private(hpage + SUBPAGE_INDEX_SUBPOOL); } static inline void hugetlb_set_page_subpool(struct page *hpage, struct hugepage_subpool *subpool) { - set_page_private(hpage+1, (unsigned long)subpool); + set_page_private(hpage + SUBPAGE_INDEX_SUBPOOL, (unsigned long)subpool); } static inline struct hstate *hstate_file(struct file *f) diff --git a/include/linux/hugetlb_cgroup.h b/include/linux/hugetlb_cgroup.h index 2ad6e92f124a..54ec689e3c9c 100644 --- a/include/linux/hugetlb_cgroup.h +++ b/include/linux/hugetlb_cgroup.h @@ -21,15 +21,16 @@ struct hugetlb_cgroup; struct resv_map; struct file_region; +#ifdef CONFIG_CGROUP_HUGETLB /* * Minimum page order trackable by hugetlb cgroup. * At least 4 pages are necessary for all the tracking information. - * The second tail page (hpage[2]) is the fault usage cgroup. - * The third tail page (hpage[3]) is the reservation usage cgroup. + * The second tail page (hpage[SUBPAGE_INDEX_CGROUP]) is the fault + * usage cgroup. The third tail page (hpage[SUBPAGE_INDEX_CGROUP_RSVD]) + * is the reservation usage cgroup. */ -#define HUGETLB_CGROUP_MIN_ORDER 2 +#define HUGETLB_CGROUP_MIN_ORDER order_base_2(__MAX_CGROUP_SUBPAGE_INDEX + 1) -#ifdef CONFIG_CGROUP_HUGETLB enum hugetlb_memory_event { HUGETLB_MAX, HUGETLB_NR_MEMORY_EVENTS, @@ -66,9 +67,9 @@ __hugetlb_cgroup_from_page(struct page *page, bool rsvd) if (compound_order(page) < HUGETLB_CGROUP_MIN_ORDER) return NULL; if (rsvd) - return (struct hugetlb_cgroup *)page[3].private; + return (void *)page_private(page + SUBPAGE_INDEX_CGROUP_RSVD); else - return (struct hugetlb_cgroup *)page[2].private; + return (void *)page_private(page + SUBPAGE_INDEX_CGROUP); } static inline struct hugetlb_cgroup *hugetlb_cgroup_from_page(struct page *page) @@ -90,9 +91,11 @@ static inline int __set_hugetlb_cgroup(struct page *page, if (compound_order(page) < HUGETLB_CGROUP_MIN_ORDER) return -1; if (rsvd) - page[3].private = (unsigned long)h_cg; + set_page_private(page + SUBPAGE_INDEX_CGROUP_RSVD, + (unsigned long)h_cg); else - page[2].private = (unsigned long)h_cg; + set_page_private(page + SUBPAGE_INDEX_CGROUP, + (unsigned long)h_cg); return 0; } From patchwork Mon Mar 15 09:20:11 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Muchun Song X-Patchwork-Id: 12138783 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-16.8 required=3.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,USER_AGENT_GIT autolearn=unavailable autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 42A0EC43331 for ; Mon, 15 Mar 2021 09:27:32 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org 
From: Muchun Song
To: corbet@lwn.net, mike.kravetz@oracle.com, tglx@linutronix.de, mingo@redhat.com, bp@alien8.de, x86@kernel.org, hpa@zytor.com, dave.hansen@linux.intel.com, luto@kernel.org, peterz@infradead.org, viro@zeniv.linux.org.uk, akpm@linux-foundation.org, paulmck@kernel.org, mchehab+huawei@kernel.org, pawan.kumar.gupta@linux.intel.com, rdunlap@infradead.org, oneukum@suse.com, anshuman.khandual@arm.com, jroedel@suse.de, almasrymina@google.com, rientjes@google.com, willy@infradead.org, osalvador@suse.de, mhocko@suse.com, song.bao.hua@hisilicon.com, david@redhat.com, naoya.horiguchi@nec.com, joao.m.martins@oracle.com
Cc: duanxiongchun@bytedance.com, linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org, linux-mm@kvack.org, linux-fsdevel@vger.kernel.org, Muchun Song , Chen Huang , Bodeddula Balasubramaniam
Subject: [PATCH v19 4/8] mm: hugetlb: free the vmemmap pages associated with each HugeTLB page
Date: Mon, 15 Mar 2021 17:20:11 +0800
Message-Id: <20210315092015.35396-5-songmuchun@bytedance.com>
X-Mailer: git-send-email 2.21.0 (Apple Git-122) In-Reply-To: <20210315092015.35396-1-songmuchun@bytedance.com> References: <20210315092015.35396-1-songmuchun@bytedance.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-fsdevel@vger.kernel.org Every HugeTLB has more than one struct page structure. We __know__ that we only use the first 4 (__NR_USED_SUBPAGE) struct page structures to store metadata associated with each HugeTLB. There are a lot of struct page structures associated with each HugeTLB page. For tail pages, the value of compound_head is the same. So we can reuse first page of tail page structures. We map the virtual addresses of the remaining pages of tail page structures to the first tail page struct, and then free these page frames. Therefore, we need to reserve two pages as vmemmap areas. When we allocate a HugeTLB page from the buddy, we can free some vmemmap pages associated with each HugeTLB page. It is more appropriate to do it in the prep_new_huge_page(). The free_vmemmap_pages_per_hpage(), which indicates how many vmemmap pages associated with a HugeTLB page can be freed, returns zero for now, which means the feature is disabled. We will enable it once all the infrastructure is there. Signed-off-by: Muchun Song Reviewed-by: Oscar Salvador Tested-by: Chen Huang Tested-by: Bodeddula Balasubramaniam Acked-by: Michal Hocko --- include/linux/bootmem_info.h | 28 +++++- include/linux/mm.h | 3 + mm/Makefile | 1 + mm/hugetlb.c | 3 + mm/hugetlb_vmemmap.c | 219 +++++++++++++++++++++++++++++++++++++++++++ mm/hugetlb_vmemmap.h | 20 ++++ mm/sparse-vmemmap.c | 194 ++++++++++++++++++++++++++++++++++++++ 7 files changed, 467 insertions(+), 1 deletion(-) create mode 100644 mm/hugetlb_vmemmap.c create mode 100644 mm/hugetlb_vmemmap.h diff --git a/include/linux/bootmem_info.h b/include/linux/bootmem_info.h index 4ed6dee1adc9..2bc8b1f69c93 100644 --- a/include/linux/bootmem_info.h +++ b/include/linux/bootmem_info.h @@ -2,7 +2,7 @@ #ifndef __LINUX_BOOTMEM_INFO_H #define __LINUX_BOOTMEM_INFO_H -#include +#include /* * Types for free bootmem stored in page->lru.next. These have to be in @@ -22,6 +22,27 @@ void __init register_page_bootmem_info_node(struct pglist_data *pgdat); void get_page_bootmem(unsigned long info, struct page *page, unsigned long type); void put_page_bootmem(struct page *page); + +/* + * Any memory allocated via the memblock allocator and not via the + * buddy will be marked reserved already in the memmap. For those + * pages, we can call this function to free it to buddy allocator. + */ +static inline void free_bootmem_page(struct page *page) +{ + unsigned long magic = (unsigned long)page->freelist; + + /* + * The reserve_bootmem_region sets the reserved flag on bootmem + * pages. 
+ */ + VM_BUG_ON_PAGE(page_ref_count(page) != 2, page); + + if (magic == SECTION_INFO || magic == MIX_SECTION_INFO) + put_page_bootmem(page); + else + VM_BUG_ON_PAGE(1, page); +} #else static inline void register_page_bootmem_info_node(struct pglist_data *pgdat) { @@ -35,6 +56,11 @@ static inline void get_page_bootmem(unsigned long info, struct page *page, unsigned long type) { } + +static inline void free_bootmem_page(struct page *page) +{ + free_reserved_page(page); +} #endif #endif /* __LINUX_BOOTMEM_INFO_H */ diff --git a/include/linux/mm.h b/include/linux/mm.h index 77e64e3eac80..4ddfc31f21c6 100644 --- a/include/linux/mm.h +++ b/include/linux/mm.h @@ -2971,6 +2971,9 @@ static inline void print_vma_addr(char *prefix, unsigned long rip) } #endif +void vmemmap_remap_free(unsigned long start, unsigned long end, + unsigned long reuse); + void *sparse_buffer_alloc(unsigned long size); struct page * __populate_section_memmap(unsigned long pfn, unsigned long nr_pages, int nid, struct vmem_altmap *altmap); diff --git a/mm/Makefile b/mm/Makefile index daabf86d7da8..3d7d57e3b55b 100644 --- a/mm/Makefile +++ b/mm/Makefile @@ -71,6 +71,7 @@ obj-$(CONFIG_FRONTSWAP) += frontswap.o obj-$(CONFIG_ZSWAP) += zswap.o obj-$(CONFIG_HAS_DMA) += dmapool.o obj-$(CONFIG_HUGETLBFS) += hugetlb.o +obj-$(CONFIG_HUGETLB_PAGE_FREE_VMEMMAP) += hugetlb_vmemmap.o obj-$(CONFIG_NUMA) += mempolicy.o obj-$(CONFIG_SPARSEMEM) += sparse.o obj-$(CONFIG_SPARSEMEM_VMEMMAP) += sparse-vmemmap.o diff --git a/mm/hugetlb.c b/mm/hugetlb.c index c232cb67dda2..43fed6785322 100644 --- a/mm/hugetlb.c +++ b/mm/hugetlb.c @@ -42,6 +42,7 @@ #include #include #include "internal.h" +#include "hugetlb_vmemmap.h" int hugetlb_max_hstate __read_mostly; unsigned int default_hstate_idx; @@ -1463,6 +1464,8 @@ void free_huge_page(struct page *page) static void prep_new_huge_page(struct hstate *h, struct page *page, int nid) { + free_huge_page_vmemmap(h, page); + INIT_LIST_HEAD(&page->lru); set_compound_page_dtor(page, HUGETLB_PAGE_DTOR); set_hugetlb_cgroup(page, NULL); diff --git a/mm/hugetlb_vmemmap.c b/mm/hugetlb_vmemmap.c new file mode 100644 index 000000000000..0209b736e0b4 --- /dev/null +++ b/mm/hugetlb_vmemmap.c @@ -0,0 +1,219 @@ +// SPDX-License-Identifier: GPL-2.0 +/* + * Free some vmemmap pages of HugeTLB + * + * Copyright (c) 2020, Bytedance. All rights reserved. + * + * Author: Muchun Song + * + * The struct page structures (page structs) are used to describe a physical + * page frame. By default, there is a one-to-one mapping from a page frame to + * it's corresponding page struct. + * + * HugeTLB pages consist of multiple base page size pages and is supported by + * many architectures. See hugetlbpage.rst in the Documentation directory for + * more details. On the x86-64 architecture, HugeTLB pages of size 2MB and 1GB + * are currently supported. Since the base page size on x86 is 4KB, a 2MB + * HugeTLB page consists of 512 base pages and a 1GB HugeTLB page consists of + * 4096 base pages. For each base page, there is a corresponding page struct. + * + * Within the HugeTLB subsystem, only the first 4 page structs are used to + * contain unique information about a HugeTLB page. HUGETLB_CGROUP_MIN_ORDER + * provides this upper limit. The only 'useful' information in the remaining + * page structs is the compound_head field, and this field is the same for all + * tail pages. + * + * By removing redundant page structs for HugeTLB pages, memory can be returned + * to the buddy allocator for other uses. 
+ * + * Different architectures support different HugeTLB pages. For example, the + * following table is the HugeTLB page size supported by x86 and arm64 + * architectures. Because arm64 supports 4k, 16k, and 64k base pages and + * supports contiguous entries, so it supports many kinds of sizes of HugeTLB + * page. + * + * +--------------+-----------+-----------------------------------------------+ + * | Architecture | Page Size | HugeTLB Page Size | + * +--------------+-----------+-----------+-----------+-----------+-----------+ + * | x86-64 | 4KB | 2MB | 1GB | | | + * +--------------+-----------+-----------+-----------+-----------+-----------+ + * | | 4KB | 64KB | 2MB | 32MB | 1GB | + * | +-----------+-----------+-----------+-----------+-----------+ + * | arm64 | 16KB | 2MB | 32MB | 1GB | | + * | +-----------+-----------+-----------+-----------+-----------+ + * | | 64KB | 2MB | 512MB | 16GB | | + * +--------------+-----------+-----------+-----------+-----------+-----------+ + * + * When the system boot up, every HugeTLB page has more than one struct page + * structs which size is (unit: pages): + * + * struct_size = HugeTLB_Size / PAGE_SIZE * sizeof(struct page) / PAGE_SIZE + * + * Where HugeTLB_Size is the size of the HugeTLB page. We know that the size + * of the HugeTLB page is always n times PAGE_SIZE. So we can get the following + * relationship. + * + * HugeTLB_Size = n * PAGE_SIZE + * + * Then, + * + * struct_size = n * PAGE_SIZE / PAGE_SIZE * sizeof(struct page) / PAGE_SIZE + * = n * sizeof(struct page) / PAGE_SIZE + * + * We can use huge mapping at the pud/pmd level for the HugeTLB page. + * + * For the HugeTLB page of the pmd level mapping, then + * + * struct_size = n * sizeof(struct page) / PAGE_SIZE + * = PAGE_SIZE / sizeof(pte_t) * sizeof(struct page) / PAGE_SIZE + * = sizeof(struct page) / sizeof(pte_t) + * = 64 / 8 + * = 8 (pages) + * + * Where n is how many pte entries which one page can contains. So the value of + * n is (PAGE_SIZE / sizeof(pte_t)). + * + * This optimization only supports 64-bit system, so the value of sizeof(pte_t) + * is 8. And this optimization also applicable only when the size of struct page + * is a power of two. In most cases, the size of struct page is 64 bytes (e.g. + * x86-64 and arm64). So if we use pmd level mapping for a HugeTLB page, the + * size of struct page structs of it is 8 page frames which size depends on the + * size of the base page. + * + * For the HugeTLB page of the pud level mapping, then + * + * struct_size = PAGE_SIZE / sizeof(pmd_t) * struct_size(pmd) + * = PAGE_SIZE / 8 * 8 (pages) + * = PAGE_SIZE (pages) + * + * Where the struct_size(pmd) is the size of the struct page structs of a + * HugeTLB page of the pmd level mapping. + * + * E.g.: A 2MB HugeTLB page on x86_64 consists in 8 page frames while 1GB + * HugeTLB page consists in 4096. + * + * Next, we take the pmd level mapping of the HugeTLB page as an example to + * show the internal implementation of this optimization. There are 8 pages + * struct page structs associated with a HugeTLB page which is pmd mapped. + * + * Here is how things look before optimization. 
+ * + * HugeTLB struct pages(8 pages) page frame(8 pages) + * +-----------+ ---virt_to_page---> +-----------+ mapping to +-----------+ + * | | | 0 | -------------> | 0 | + * | | +-----------+ +-----------+ + * | | | 1 | -------------> | 1 | + * | | +-----------+ +-----------+ + * | | | 2 | -------------> | 2 | + * | | +-----------+ +-----------+ + * | | | 3 | -------------> | 3 | + * | | +-----------+ +-----------+ + * | | | 4 | -------------> | 4 | + * | PMD | +-----------+ +-----------+ + * | level | | 5 | -------------> | 5 | + * | mapping | +-----------+ +-----------+ + * | | | 6 | -------------> | 6 | + * | | +-----------+ +-----------+ + * | | | 7 | -------------> | 7 | + * | | +-----------+ +-----------+ + * | | + * | | + * | | + * +-----------+ + * + * The value of page->compound_head is the same for all tail pages. The first + * page of page structs (page 0) associated with the HugeTLB page contains the 4 + * page structs necessary to describe the HugeTLB. The only use of the remaining + * pages of page structs (page 1 to page 7) is to point to page->compound_head. + * Therefore, we can remap pages 2 to 7 to page 1. Only 2 pages of page structs + * will be used for each HugeTLB page. This will allow us to free the remaining + * 6 pages to the buddy allocator. + * + * Here is how things look after remapping. + * + * HugeTLB struct pages(8 pages) page frame(8 pages) + * +-----------+ ---virt_to_page---> +-----------+ mapping to +-----------+ + * | | | 0 | -------------> | 0 | + * | | +-----------+ +-----------+ + * | | | 1 | -------------> | 1 | + * | | +-----------+ +-----------+ + * | | | 2 | ----------------^ ^ ^ ^ ^ ^ + * | | +-----------+ | | | | | + * | | | 3 | ------------------+ | | | | + * | | +-----------+ | | | | + * | | | 4 | --------------------+ | | | + * | PMD | +-----------+ | | | + * | level | | 5 | ----------------------+ | | + * | mapping | +-----------+ | | + * | | | 6 | ------------------------+ | + * | | +-----------+ | + * | | | 7 | --------------------------+ + * | | +-----------+ + * | | + * | | + * | | + * +-----------+ + * + * When a HugeTLB is freed to the buddy system, we should allocate 6 pages for + * vmemmap pages and restore the previous mapping relationship. + * + * For the HugeTLB page of the pud level mapping. It is similar to the former. + * We also can use this approach to free (PAGE_SIZE - 2) vmemmap pages. + * + * Apart from the HugeTLB page of the pmd/pud level mapping, some architectures + * (e.g. aarch64) provides a contiguous bit in the translation table entries + * that hints to the MMU to indicate that it is one of a contiguous set of + * entries that can be cached in a single TLB entry. + * + * The contiguous bit is used to increase the mapping size at the pmd and pte + * (last) level. So this type of HugeTLB page can be optimized only when its + * size of the struct page structs is greater than 2 pages. + */ +#include "hugetlb_vmemmap.h" + +/* + * There are a lot of struct page structures associated with each HugeTLB page. + * For tail pages, the value of compound_head is the same. So we can reuse first + * page of tail page structures. We map the virtual addresses of the remaining + * pages of tail page structures to the first tail page struct, and then free + * these page frames. Therefore, we need to reserve two pages as vmemmap areas. 
+ */ +#define RESERVE_VMEMMAP_NR 2U +#define RESERVE_VMEMMAP_SIZE (RESERVE_VMEMMAP_NR << PAGE_SHIFT) + +/* + * How many vmemmap pages associated with a HugeTLB page that can be freed + * to the buddy allocator. + * + * Todo: Returns zero for now, which means the feature is disabled. We will + * enable it once all the infrastructure is there. + */ +static inline unsigned int free_vmemmap_pages_per_hpage(struct hstate *h) +{ + return 0; +} + +static inline unsigned long free_vmemmap_pages_size_per_hpage(struct hstate *h) +{ + return (unsigned long)free_vmemmap_pages_per_hpage(h) << PAGE_SHIFT; +} + +void free_huge_page_vmemmap(struct hstate *h, struct page *head) +{ + unsigned long vmemmap_addr = (unsigned long)head; + unsigned long vmemmap_end, vmemmap_reuse; + + if (!free_vmemmap_pages_per_hpage(h)) + return; + + vmemmap_addr += RESERVE_VMEMMAP_SIZE; + vmemmap_end = vmemmap_addr + free_vmemmap_pages_size_per_hpage(h); + vmemmap_reuse = vmemmap_addr - PAGE_SIZE; + + /* + * Remap the vmemmap virtual address range [@vmemmap_addr, @vmemmap_end) + * to the page which @vmemmap_reuse is mapped to, then free the pages + * which the range [@vmemmap_addr, @vmemmap_end] is mapped to. + */ + vmemmap_remap_free(vmemmap_addr, vmemmap_end, vmemmap_reuse); +} diff --git a/mm/hugetlb_vmemmap.h b/mm/hugetlb_vmemmap.h new file mode 100644 index 000000000000..6923f03534d5 --- /dev/null +++ b/mm/hugetlb_vmemmap.h @@ -0,0 +1,20 @@ +// SPDX-License-Identifier: GPL-2.0 +/* + * Free some vmemmap pages of HugeTLB + * + * Copyright (c) 2020, Bytedance. All rights reserved. + * + * Author: Muchun Song + */ +#ifndef _LINUX_HUGETLB_VMEMMAP_H +#define _LINUX_HUGETLB_VMEMMAP_H +#include + +#ifdef CONFIG_HUGETLB_PAGE_FREE_VMEMMAP +void free_huge_page_vmemmap(struct hstate *h, struct page *head); +#else +static inline void free_huge_page_vmemmap(struct hstate *h, struct page *head) +{ +} +#endif /* CONFIG_HUGETLB_PAGE_FREE_VMEMMAP */ +#endif /* _LINUX_HUGETLB_VMEMMAP_H */ diff --git a/mm/sparse-vmemmap.c b/mm/sparse-vmemmap.c index 16183d85a7d5..7d40b5bd7046 100644 --- a/mm/sparse-vmemmap.c +++ b/mm/sparse-vmemmap.c @@ -27,8 +27,202 @@ #include #include #include +#include +#include + #include #include +#include + +/** + * vmemmap_remap_walk - walk vmemmap page table + * + * @remap_pte: called for each lowest-level entry (PTE). + * @reuse_page: the page which is reused for the tail vmemmap pages. + * @reuse_addr: the virtual address of the @reuse_page page. + * @vmemmap_pages: the list head of the vmemmap pages that can be freed. + */ +struct vmemmap_remap_walk { + void (*remap_pte)(pte_t *pte, unsigned long addr, + struct vmemmap_remap_walk *walk); + struct page *reuse_page; + unsigned long reuse_addr; + struct list_head *vmemmap_pages; +}; + +static void vmemmap_pte_range(pmd_t *pmd, unsigned long addr, + unsigned long end, + struct vmemmap_remap_walk *walk) +{ + pte_t *pte = pte_offset_kernel(pmd, addr); + + /* + * The reuse_page is found 'first' in table walk before we start + * remapping (which is calling @walk->remap_pte). + */ + if (!walk->reuse_page) { + walk->reuse_page = pte_page(*pte); + /* + * Because the reuse address is part of the range that we are + * walking, skip the reuse address range. 
+ */ + addr += PAGE_SIZE; + pte++; + } + + for (; addr != end; addr += PAGE_SIZE, pte++) + walk->remap_pte(pte, addr, walk); +} + +static void vmemmap_pmd_range(pud_t *pud, unsigned long addr, + unsigned long end, + struct vmemmap_remap_walk *walk) +{ + pmd_t *pmd; + unsigned long next; + + pmd = pmd_offset(pud, addr); + do { + BUG_ON(pmd_leaf(*pmd)); + + next = pmd_addr_end(addr, end); + vmemmap_pte_range(pmd, addr, next, walk); + } while (pmd++, addr = next, addr != end); +} + +static void vmemmap_pud_range(p4d_t *p4d, unsigned long addr, + unsigned long end, + struct vmemmap_remap_walk *walk) +{ + pud_t *pud; + unsigned long next; + + pud = pud_offset(p4d, addr); + do { + next = pud_addr_end(addr, end); + vmemmap_pmd_range(pud, addr, next, walk); + } while (pud++, addr = next, addr != end); +} + +static void vmemmap_p4d_range(pgd_t *pgd, unsigned long addr, + unsigned long end, + struct vmemmap_remap_walk *walk) +{ + p4d_t *p4d; + unsigned long next; + + p4d = p4d_offset(pgd, addr); + do { + next = p4d_addr_end(addr, end); + vmemmap_pud_range(p4d, addr, next, walk); + } while (p4d++, addr = next, addr != end); +} + +static void vmemmap_remap_range(unsigned long start, unsigned long end, + struct vmemmap_remap_walk *walk) +{ + unsigned long addr = start; + unsigned long next; + pgd_t *pgd; + + VM_BUG_ON(!IS_ALIGNED(start, PAGE_SIZE)); + VM_BUG_ON(!IS_ALIGNED(end, PAGE_SIZE)); + + pgd = pgd_offset_k(addr); + do { + next = pgd_addr_end(addr, end); + vmemmap_p4d_range(pgd, addr, next, walk); + } while (pgd++, addr = next, addr != end); + + /* + * We only change the mapping of the vmemmap virtual address range + * [@start + PAGE_SIZE, end), so we only need to flush the TLB which + * belongs to the range. + */ + flush_tlb_kernel_range(start + PAGE_SIZE, end); +} + +/* + * Free a vmemmap page. A vmemmap page can be allocated from the memblock + * allocator or buddy allocator. If the PG_reserved flag is set, it means + * that it allocated from the memblock allocator, just free it via the + * free_bootmem_page(). Otherwise, use __free_page(). + */ +static inline void free_vmemmap_page(struct page *page) +{ + if (PageReserved(page)) + free_bootmem_page(page); + else + __free_page(page); +} + +/* Free a list of the vmemmap pages */ +static void free_vmemmap_page_list(struct list_head *list) +{ + struct page *page, *next; + + list_for_each_entry_safe(page, next, list, lru) { + list_del(&page->lru); + free_vmemmap_page(page); + } +} + +static void vmemmap_remap_pte(pte_t *pte, unsigned long addr, + struct vmemmap_remap_walk *walk) +{ + /* + * Remap the tail pages as read-only to catch illegal write operation + * to the tail pages. + */ + pgprot_t pgprot = PAGE_KERNEL_RO; + pte_t entry = mk_pte(walk->reuse_page, pgprot); + struct page *page = pte_page(*pte); + + list_add(&page->lru, walk->vmemmap_pages); + set_pte_at(&init_mm, addr, pte, entry); +} + +/** + * vmemmap_remap_free - remap the vmemmap virtual address range [@start, @end) + * to the page which @reuse is mapped to, then free vmemmap + * which the range are mapped to. + * @start: start address of the vmemmap virtual address range that we want + * to remap. + * @end: end address of the vmemmap virtual address range that we want to + * remap. + * @reuse: reuse address. + * + * Note: This function depends on vmemmap being base page mapped. Please make + * sure that we disable PMD mapping of vmemmap pages when calling this function. 
+ */ +void vmemmap_remap_free(unsigned long start, unsigned long end, + unsigned long reuse) +{ + LIST_HEAD(vmemmap_pages); + struct vmemmap_remap_walk walk = { + .remap_pte = vmemmap_remap_pte, + .reuse_addr = reuse, + .vmemmap_pages = &vmemmap_pages, + }; + + /* + * In order to make remapping routine most efficient for the huge pages, + * the routine of vmemmap page table walking has the following rules + * (see more details from the vmemmap_pte_range()): + * + * - The range [@start, @end) and the range [@reuse, @reuse + PAGE_SIZE) + * should be continuous. + * - The @reuse address is part of the range [@reuse, @end) that we are + * walking which is passed to vmemmap_remap_range(). + * - The @reuse address is the first in the complete range. + * + * So we need to make sure that @start and @reuse meet the above rules. + */ + BUG_ON(start - reuse != PAGE_SIZE); + + vmemmap_remap_range(reuse, end, &walk); + free_vmemmap_page_list(&vmemmap_pages); +} /* * Allocate a block of memory to be used to back the virtual memory map From patchwork Mon Mar 15 09:20:12 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Muchun Song X-Patchwork-Id: 12138785 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-16.8 required=3.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,USER_AGENT_GIT autolearn=unavailable autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 003C6C433DB for ; Mon, 15 Mar 2021 09:28:02 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id A9B6464E21 for ; Mon, 15 Mar 2021 09:28:02 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S229624AbhCOJ1c (ORCPT ); Mon, 15 Mar 2021 05:27:32 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:42330 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S229774AbhCOJ1E (ORCPT ); Mon, 15 Mar 2021 05:27:04 -0400 Received: from mail-pl1-x635.google.com (mail-pl1-x635.google.com [IPv6:2607:f8b0:4864:20::635]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 4DFD7C06175F for ; Mon, 15 Mar 2021 02:27:04 -0700 (PDT) Received: by mail-pl1-x635.google.com with SMTP id a8so8480854plp.13 for ; Mon, 15 Mar 2021 02:27:04 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=bytedance-com.20150623.gappssmtp.com; s=20150623; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=JyHfqAIZLhrT0oCSnpDGn99/WbkxHK6FrGf+RKUyWk0=; b=x1DwkcphMcCiItUFNcV57RePeAyPh5bxPZbvyZsP4nwhSbXM7feXYwbfeI+czsVU3s 3ZC9vTnyGHbYhDI4QS3yTwOgBzkS6nWCGIGUesNnMod2fqLT2ZqWVgXKxzlCsA4j0LJ0 gQpOjXQp/IXXMIUCpvxmJAy2TaGf0IR4XnhXJUMs+mYkHbEsz1l63UlQmrzsnf+09xFZ ELHMbW5OoqyBZyJTlqBUDzrhWGtqiPDLSAPEChQrJz3JDtxG+wqp93gs7GhemzyBss29 iRCfN2Jhscy+1MVDgKqyLxgq2KkXTy/tiB761mk0KY+6oJHpYmmdHHc09sRUza/vtrd8 AKTg== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=JyHfqAIZLhrT0oCSnpDGn99/WbkxHK6FrGf+RKUyWk0=; b=sIJUAPObT3sqQsLTIgXVmywUZazHBkQ/psugYvWJ6aHXhy+NDD1gU3mBlNnsk79enS 
From: Muchun Song
To: corbet@lwn.net, mike.kravetz@oracle.com, tglx@linutronix.de, mingo@redhat.com, bp@alien8.de, x86@kernel.org, hpa@zytor.com, dave.hansen@linux.intel.com, luto@kernel.org, peterz@infradead.org, viro@zeniv.linux.org.uk, akpm@linux-foundation.org, paulmck@kernel.org, mchehab+huawei@kernel.org, pawan.kumar.gupta@linux.intel.com, rdunlap@infradead.org, oneukum@suse.com, anshuman.khandual@arm.com, jroedel@suse.de, almasrymina@google.com, rientjes@google.com, willy@infradead.org, osalvador@suse.de, mhocko@suse.com, song.bao.hua@hisilicon.com, david@redhat.com, naoya.horiguchi@nec.com, joao.m.martins@oracle.com
Cc: duanxiongchun@bytedance.com, linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org, linux-mm@kvack.org, linux-fsdevel@vger.kernel.org, Muchun Song, Chen Huang, Bodeddula Balasubramaniam
Subject: [PATCH v19 5/8] mm: hugetlb: alloc the vmemmap pages associated with each HugeTLB page
Date: Mon, 15 Mar 2021 17:20:12 +0800
Message-Id: <20210315092015.35396-6-songmuchun@bytedance.com>
In-Reply-To: <20210315092015.35396-1-songmuchun@bytedance.com>
References: <20210315092015.35396-1-songmuchun@bytedance.com>
List-ID: X-Mailing-List: linux-fsdevel@vger.kernel.org

When we free a HugeTLB page to the buddy allocator, we need to allocate
the vmemmap pages associated with it. However, we may not be able to
allocate the vmemmap pages when the system is under memory pressure. In
this case, we just refuse to free the HugeTLB page. This changes behavior
in some corner cases as listed below:

 1) Failing to free a huge page triggered by the user (decrease nr_pages).
    User needs to try again later.

 2) Failing to free a surplus huge page when freed by the application.
    Try again later when freeing a huge page next time.

 3) Failing to dissolve a free huge page on ZONE_MOVABLE via
    offline_pages(). This can happen when we have plenty of ZONE_MOVABLE
    memory, but not enough kernel memory to allocate vmemmap pages. We may
    even be able to migrate huge page contents, but will not be able to
    dissolve the source huge page. This will prevent an offline operation
    and is unfortunate as memory offlining is expected to succeed on
    movable zones. Users that depend on memory hotplug to succeed for
    movable zones should carefully consider whether the memory savings
    gained from this feature are worth the risk of possibly not being
    able to offline memory in certain situations.

 4) Failing to dissolve a huge page on CMA/ZONE_MOVABLE via
    alloc_contig_range() - once we have that handling in place. Mainly
    affects CMA and virtio-mem. Similar to 3).
virito-mem will handle migration errors gracefully. CMA might be able to fallback on other free areas within the CMA region. Vmemmap pages are allocated from the page freeing context. In order for those allocations to be not disruptive (e.g. trigger oom killer) __GFP_NORETRY is used. hugetlb_lock is dropped for the allocation because a non sleeping allocation would be too fragile and it could fail too easily under memory pressure. GFP_ATOMIC or other modes to access memory reserves is not used because we want to prevent consuming reserves under heavy hugetlb freeing. Signed-off-by: Muchun Song Tested-by: Chen Huang Tested-by: Bodeddula Balasubramaniam Reviewed-by: Oscar Salvador --- Documentation/admin-guide/mm/hugetlbpage.rst | 8 +++ Documentation/admin-guide/mm/memory-hotplug.rst | 13 +++++ include/linux/mm.h | 2 + mm/hugetlb.c | 76 ++++++++++++++++++++----- mm/hugetlb_vmemmap.c | 43 +++++++++----- mm/hugetlb_vmemmap.h | 23 ++++++++ mm/sparse-vmemmap.c | 75 +++++++++++++++++++++++- 7 files changed, 211 insertions(+), 29 deletions(-) diff --git a/Documentation/admin-guide/mm/hugetlbpage.rst b/Documentation/admin-guide/mm/hugetlbpage.rst index f7b1c7462991..6988895d09a8 100644 --- a/Documentation/admin-guide/mm/hugetlbpage.rst +++ b/Documentation/admin-guide/mm/hugetlbpage.rst @@ -60,6 +60,10 @@ HugePages_Surp the pool above the value in ``/proc/sys/vm/nr_hugepages``. The maximum number of surplus huge pages is controlled by ``/proc/sys/vm/nr_overcommit_hugepages``. + Note: When the feature of freeing unused vmemmap pages associated + with each hugetlb page is enabled, the number of surplus huge pages + may be temporarily larger than the maximum number of surplus huge + pages when the system is under memory pressure. Hugepagesize is the default hugepage size (in Kb). Hugetlb @@ -80,6 +84,10 @@ returned to the huge page pool when freed by a task. A user with root privileges can dynamically allocate more or free some persistent huge pages by increasing or decreasing the value of ``nr_hugepages``. +Note: When the feature of freeing unused vmemmap pages associated with each +hugetlb page is enabled, we can fail to free the huge pages triggered by +the user when ths system is under memory pressure. Please try again later. + Pages that are used as huge pages are reserved inside the kernel and cannot be used for other purposes. Huge pages cannot be swapped out under memory pressure. diff --git a/Documentation/admin-guide/mm/memory-hotplug.rst b/Documentation/admin-guide/mm/memory-hotplug.rst index 5307f90738aa..05b2316983d6 100644 --- a/Documentation/admin-guide/mm/memory-hotplug.rst +++ b/Documentation/admin-guide/mm/memory-hotplug.rst @@ -357,6 +357,19 @@ creates ZONE_MOVABLE as following. Unfortunately, there is no information to show which memory block belongs to ZONE_MOVABLE. This is TBD. + Memory offlining can fail when dissolving a free huge page on ZONE_MOVABLE + and the feature of freeing unused vmemmap pages associated with each hugetlb + page is enabled. + + This can happen when we have plenty of ZONE_MOVABLE memory, but not enough + kernel memory to allocate vmemmmap pages. We may even be able to migrate + huge page contents, but will not be able to dissolve the source huge page. + This will prevent an offline operation and is unfortunate as memory offlining + is expected to succeed on movable zones. 
Users that depend on memory hotplug + to succeed for movable zones should carefully consider whether the memory + savings gained from this feature are worth the risk of possibly not being + able to offline memory in certain situations. + .. _memory_hotplug_how_to_offline_memory: How to offline memory diff --git a/include/linux/mm.h b/include/linux/mm.h index 4ddfc31f21c6..77693c944a36 100644 --- a/include/linux/mm.h +++ b/include/linux/mm.h @@ -2973,6 +2973,8 @@ static inline void print_vma_addr(char *prefix, unsigned long rip) void vmemmap_remap_free(unsigned long start, unsigned long end, unsigned long reuse); +int vmemmap_remap_alloc(unsigned long start, unsigned long end, + unsigned long reuse, gfp_t gfp_mask); void *sparse_buffer_alloc(unsigned long size); struct page * __populate_section_memmap(unsigned long pfn, diff --git a/mm/hugetlb.c b/mm/hugetlb.c index 43fed6785322..e42b19337a8f 100644 --- a/mm/hugetlb.c +++ b/mm/hugetlb.c @@ -1304,16 +1304,53 @@ static inline void destroy_compound_gigantic_page(struct page *page, unsigned int order) { } #endif -static void update_and_free_page(struct hstate *h, struct page *page) +static int update_and_free_page_surplus(struct hstate *h, struct page *page, + bool acct_surplus) + __releases(&hugetlb_lock) __acquires(&hugetlb_lock) { int i; struct page *subpage = page; + int nid = page_to_nid(page); if (hstate_is_gigantic(h) && !gigantic_page_runtime_supported()) - return; + return 0; h->nr_huge_pages--; - h->nr_huge_pages_node[page_to_nid(page)]--; + h->nr_huge_pages_node[nid]--; + + /* + * If the vmemmap pages associated with the HugeTLB page can be + * optimized, we might block in alloc_huge_page_vmemmap(), so + * drop the hugetlb_lock. + */ + if (free_vmemmap_pages_per_hpage(h)) + spin_unlock(&hugetlb_lock); + + if (alloc_huge_page_vmemmap(h, page)) { + spin_lock(&hugetlb_lock); + INIT_LIST_HEAD(&page->lru); + h->nr_huge_pages++; + h->nr_huge_pages_node[nid]++; + + /* + * If we cannot allocate vmemmap pages, just refuse to free the + * page and put the page back on the hugetlb free list and treat + * as a surplus page. + */ + if (acct_surplus) { + h->surplus_huge_pages++; + h->surplus_huge_pages_node[nid]++; + } + + arch_clear_hugepage_flags(page); + enqueue_huge_page(h, page); + + return -ENOMEM; + } + + if (free_vmemmap_pages_per_hpage(h)) + spin_lock(&hugetlb_lock); + for (i = 0; i < pages_per_huge_page(h); i++, subpage = mem_map_next(subpage, page, i)) { subpage->flags &= ~(1 << PG_locked | 1 << PG_error | @@ -1337,6 +1374,13 @@ static void update_and_free_page(struct hstate *h, struct page *page) } else { __free_pages(page, huge_page_order(h)); } + + return 0; +} + +static inline int update_and_free_page(struct hstate *h, struct page *page) +{ + return update_and_free_page_surplus(h, page, true); } struct hstate *size_to_hstate(unsigned long size) @@ -1404,9 +1448,9 @@ static void __free_huge_page(struct page *page) } else if (h->surplus_huge_pages_node[nid]) { /* remove the page from active list */ list_del(&page->lru); - update_and_free_page(h, page); h->surplus_huge_pages--; h->surplus_huge_pages_node[nid]--; + update_and_free_page(h, page); } else { arch_clear_hugepage_flags(page); enqueue_huge_page(h, page); @@ -1447,7 +1491,7 @@ void free_huge_page(struct page *page) /* * Defer freeing if in non-task context to avoid hugetlb_lock deadlock. */ - if (!in_task()) { + if (in_atomic()) { /* * Only call schedule_work() if hpage_freelist is previously * empty. 
Otherwise, schedule_work() had been called but the @@ -1693,14 +1737,14 @@ static int free_pool_huge_page(struct hstate *h, nodemask_t *nodes_allowed, list_entry(h->hugepage_freelists[node].next, struct page, lru); list_del(&page->lru); + ClearHPageFreed(page); h->free_huge_pages--; h->free_huge_pages_node[node]--; if (acct_surplus) { h->surplus_huge_pages--; h->surplus_huge_pages_node[node]--; } - update_and_free_page(h, page); - ret = 1; + ret = !update_and_free_page(h, page); break; } } @@ -1713,10 +1757,14 @@ static int free_pool_huge_page(struct hstate *h, nodemask_t *nodes_allowed, * nothing for in-use hugepages and non-hugepages. * This function returns values like below: * - * -EBUSY: failed to dissolved free hugepages or the hugepage is in-use - * (allocated or reserved.) - * 0: successfully dissolved free hugepages or the page is not a - * hugepage (considered as already dissolved) + * -ENOMEM: failed to allocate vmemmap pages to free the freed hugepages + * when the system is under memory pressure and the feature of + * freeing unused vmemmap pages associated with each hugetlb page + * is enabled. + * -EBUSY: failed to dissolved free hugepages or the hugepage is in-use + * (allocated or reserved.) + * 0: successfully dissolved free hugepages or the page is not a + * hugepage (considered as already dissolved) */ int dissolve_free_huge_page(struct page *page) { @@ -1768,11 +1816,13 @@ int dissolve_free_huge_page(struct page *page) ClearPageHWPoison(head); } list_del(&head->lru); + ClearHPageFreed(page); h->free_huge_pages--; h->free_huge_pages_node[nid]--; h->max_huge_pages--; - update_and_free_page(h, head); - rc = 0; + rc = update_and_free_page_surplus(h, head, false); + if (rc) + h->max_huge_pages++; } out: spin_unlock(&hugetlb_lock); diff --git a/mm/hugetlb_vmemmap.c b/mm/hugetlb_vmemmap.c index 0209b736e0b4..0e6835264da3 100644 --- a/mm/hugetlb_vmemmap.c +++ b/mm/hugetlb_vmemmap.c @@ -18,10 +18,9 @@ * 4096 base pages. For each base page, there is a corresponding page struct. * * Within the HugeTLB subsystem, only the first 4 page structs are used to - * contain unique information about a HugeTLB page. HUGETLB_CGROUP_MIN_ORDER - * provides this upper limit. The only 'useful' information in the remaining - * page structs is the compound_head field, and this field is the same for all - * tail pages. + * contain unique information about a HugeTLB page. __NR_USED_SUBPAGE provides + * this upper limit. The only 'useful' information in the remaining page structs + * is the compound_head field, and this field is the same for all tail pages. * * By removing redundant page structs for HugeTLB pages, memory can be returned * to the buddy allocator for other uses. @@ -181,21 +180,35 @@ #define RESERVE_VMEMMAP_NR 2U #define RESERVE_VMEMMAP_SIZE (RESERVE_VMEMMAP_NR << PAGE_SHIFT) -/* - * How many vmemmap pages associated with a HugeTLB page that can be freed - * to the buddy allocator. - * - * Todo: Returns zero for now, which means the feature is disabled. We will - * enable it once all the infrastructure is there. - */ -static inline unsigned int free_vmemmap_pages_per_hpage(struct hstate *h) +static inline unsigned long free_vmemmap_pages_size_per_hpage(struct hstate *h) { - return 0; + return (unsigned long)free_vmemmap_pages_per_hpage(h) << PAGE_SHIFT; } -static inline unsigned long free_vmemmap_pages_size_per_hpage(struct hstate *h) +/* + * Previously discarded vmemmap pages will be allocated and remapping + * after this function returns. 
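+ * The allocation uses GFP_KERNEL | __GFP_NORETRY | __GFP_THISNODE, so it
+ * may sleep and may fail with -ENOMEM under memory pressure.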
+ */ +int alloc_huge_page_vmemmap(struct hstate *h, struct page *head) { - return (unsigned long)free_vmemmap_pages_per_hpage(h) << PAGE_SHIFT; + unsigned long vmemmap_addr = (unsigned long)head; + unsigned long vmemmap_end, vmemmap_reuse; + + if (!free_vmemmap_pages_per_hpage(h)) + return 0; + + vmemmap_addr += RESERVE_VMEMMAP_SIZE; + vmemmap_end = vmemmap_addr + free_vmemmap_pages_size_per_hpage(h); + vmemmap_reuse = vmemmap_addr - PAGE_SIZE; + /* + * The pages which the vmemmap virtual address range [@vmemmap_addr, + * @vmemmap_end) are mapped to are freed to the buddy allocator, and + * the range is mapped to the page which @vmemmap_reuse is mapped to. + * When a HugeTLB page is freed to the buddy allocator, previously + * discarded vmemmap pages must be allocated and remapping. + */ + return vmemmap_remap_alloc(vmemmap_addr, vmemmap_end, vmemmap_reuse, + GFP_KERNEL | __GFP_NORETRY | __GFP_THISNODE); } void free_huge_page_vmemmap(struct hstate *h, struct page *head) diff --git a/mm/hugetlb_vmemmap.h b/mm/hugetlb_vmemmap.h index 6923f03534d5..a37771b0b82a 100644 --- a/mm/hugetlb_vmemmap.h +++ b/mm/hugetlb_vmemmap.h @@ -11,10 +11,33 @@ #include #ifdef CONFIG_HUGETLB_PAGE_FREE_VMEMMAP +int alloc_huge_page_vmemmap(struct hstate *h, struct page *head); void free_huge_page_vmemmap(struct hstate *h, struct page *head); + +/* + * How many vmemmap pages associated with a HugeTLB page that can be freed + * to the buddy allocator. + * + * Todo: Returns zero for now, which means the feature is disabled. We will + * enable it once all the infrastructure is there. + */ +static inline unsigned int free_vmemmap_pages_per_hpage(struct hstate *h) +{ + return 0; +} #else +static inline int alloc_huge_page_vmemmap(struct hstate *h, struct page *head) +{ + return 0; +} + static inline void free_huge_page_vmemmap(struct hstate *h, struct page *head) { } + +static inline unsigned int free_vmemmap_pages_per_hpage(struct hstate *h) +{ + return 0; +} #endif /* CONFIG_HUGETLB_PAGE_FREE_VMEMMAP */ #endif /* _LINUX_HUGETLB_VMEMMAP_H */ diff --git a/mm/sparse-vmemmap.c b/mm/sparse-vmemmap.c index 7d40b5bd7046..693de0aec7a8 100644 --- a/mm/sparse-vmemmap.c +++ b/mm/sparse-vmemmap.c @@ -40,7 +40,8 @@ * @remap_pte: called for each lowest-level entry (PTE). * @reuse_page: the page which is reused for the tail vmemmap pages. * @reuse_addr: the virtual address of the @reuse_page page. - * @vmemmap_pages: the list head of the vmemmap pages that can be freed. + * @vmemmap_pages: the list head of the vmemmap pages that can be freed + * or is mapped from. 
*/ struct vmemmap_remap_walk { void (*remap_pte)(pte_t *pte, unsigned long addr, @@ -224,6 +225,78 @@ void vmemmap_remap_free(unsigned long start, unsigned long end, free_vmemmap_page_list(&vmemmap_pages); } +static void vmemmap_restore_pte(pte_t *pte, unsigned long addr, + struct vmemmap_remap_walk *walk) +{ + pgprot_t pgprot = PAGE_KERNEL; + struct page *page; + void *to; + + BUG_ON(pte_page(*pte) != walk->reuse_page); + + page = list_first_entry(walk->vmemmap_pages, struct page, lru); + list_del(&page->lru); + to = page_to_virt(page); + copy_page(to, (void *)walk->reuse_addr); + + set_pte_at(&init_mm, addr, pte, mk_pte(page, pgprot)); +} + +static int alloc_vmemmap_page_list(unsigned long start, unsigned long end, + gfp_t gfp_mask, struct list_head *list) +{ + unsigned long nr_pages = (end - start) >> PAGE_SHIFT; + int nid = page_to_nid((struct page *)start); + struct page *page, *next; + + while (nr_pages--) { + page = alloc_pages_node(nid, gfp_mask, 0); + if (!page) + goto out; + list_add_tail(&page->lru, list); + } + + return 0; +out: + list_for_each_entry_safe(page, next, list, lru) + __free_pages(page, 0); + return -ENOMEM; +} + +/** + * vmemmap_remap_alloc - remap the vmemmap virtual address range [@start, end) + * to the page which is from the @vmemmap_pages + * respectively. + * @start: start address of the vmemmap virtual address range that we want + * to remap. + * @end: end address of the vmemmap virtual address range that we want to + * remap. + * @reuse: reuse address. + * @gpf_mask: GFP flag for allocating vmemmap pages. + */ +int vmemmap_remap_alloc(unsigned long start, unsigned long end, + unsigned long reuse, gfp_t gfp_mask) +{ + LIST_HEAD(vmemmap_pages); + struct vmemmap_remap_walk walk = { + .remap_pte = vmemmap_restore_pte, + .reuse_addr = reuse, + .vmemmap_pages = &vmemmap_pages, + }; + + /* See the comment in the vmemmap_remap_free(). */ + BUG_ON(start - reuse != PAGE_SIZE); + + might_sleep_if(gfpflags_allow_blocking(gfp_mask)); + + if (alloc_vmemmap_page_list(start, end, gfp_mask, &vmemmap_pages)) + return -ENOMEM; + + vmemmap_remap_range(reuse, end, &walk); + + return 0; +} + /* * Allocate a block of memory to be used to back the virtual memory map * or to back the page tables that are used to create the mapping. 
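
To put the savings and the re-allocation cost above in perspective, the
following stand-alone sketch (not part of the series) works through the
vmemmap arithmetic, assuming the common x86-64 configuration of 4 KiB base
pages and a 64-byte struct page, with RESERVE_VMEMMAP_NR = 2 pages kept per
HugeTLB page as in this series:

#include <stdio.h>

#define PAGE_SIZE          4096UL
#define STRUCT_PAGE_SIZE     64UL  /* assumed sizeof(struct page) */
#define RESERVE_VMEMMAP_NR    2UL  /* vmemmap pages kept per HugeTLB page */

/* One struct page per base page, packed into whole vmemmap pages. */
static unsigned long vmemmap_pages_per_hpage(unsigned long hpage_size)
{
	return hpage_size / PAGE_SIZE * STRUCT_PAGE_SIZE / PAGE_SIZE;
}

int main(void)
{
	unsigned long sizes[] = { 2UL << 20, 1UL << 30 };  /* 2 MiB and 1 GiB */

	for (int i = 0; i < 2; i++) {
		unsigned long total = vmemmap_pages_per_hpage(sizes[i]);
		unsigned long freed = total > RESERVE_VMEMMAP_NR ?
				      total - RESERVE_VMEMMAP_NR : 0;

		printf("%5lu MiB HugeTLB page: %lu vmemmap pages, %lu freed "
		       "when optimized (%lu KiB saved), %lu pages to "
		       "re-allocate before the page can return to buddy\n",
		       sizes[i] >> 20, total, freed,
		       freed * PAGE_SIZE >> 10, freed);
	}
	return 0;
}

For a 2 MiB page this prints 8 vmemmap pages with 6 freed (24 KiB saved),
matching the "6 * PAGE_SIZE for each 2MB hugetlb page" figure quoted later
in the series; those same 6 pages are what vmemmap_remap_alloc() has to
allocate again when the HugeTLB page is released.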
From patchwork Mon Mar 15 09:20:13 2021
X-Patchwork-Submitter: Muchun Song
X-Patchwork-Id: 12138789
From: Muchun Song
To: corbet@lwn.net, mike.kravetz@oracle.com, tglx@linutronix.de, mingo@redhat.com, bp@alien8.de, x86@kernel.org,
hpa@zytor.com, dave.hansen@linux.intel.com, luto@kernel.org, peterz@infradead.org, viro@zeniv.linux.org.uk, akpm@linux-foundation.org, paulmck@kernel.org, mchehab+huawei@kernel.org, pawan.kumar.gupta@linux.intel.com, rdunlap@infradead.org, oneukum@suse.com, anshuman.khandual@arm.com, jroedel@suse.de, almasrymina@google.com, rientjes@google.com, willy@infradead.org, osalvador@suse.de, mhocko@suse.com, song.bao.hua@hisilicon.com, david@redhat.com, naoya.horiguchi@nec.com, joao.m.martins@oracle.com Cc: duanxiongchun@bytedance.com, linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org, linux-mm@kvack.org, linux-fsdevel@vger.kernel.org, Muchun Song , Chen Huang , Bodeddula Balasubramaniam Subject: [PATCH v19 6/8] mm: hugetlb: set the PageHWPoison to the raw error page Date: Mon, 15 Mar 2021 17:20:13 +0800 Message-Id: <20210315092015.35396-7-songmuchun@bytedance.com> X-Mailer: git-send-email 2.21.0 (Apple Git-122) In-Reply-To: <20210315092015.35396-1-songmuchun@bytedance.com> References: <20210315092015.35396-1-songmuchun@bytedance.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-fsdevel@vger.kernel.org Because we reuse the first tail vmemmap page frame and remap it with read-only, we cannot set the PageHWPosion on some tail pages. So we can use the head[4].private (There are at least 128 struct page structures associated with the optimized HugeTLB page, so using head[4].private is safe) to record the real error page index and set the raw error page PageHWPoison later. We cannot have more poisoned tail pages. So a single slot is sufficient. Why? memory_failure() if (PageHuge(page)) memory_failure_hugetlb() head = compound_head(page) if (TestSetPageHWPoison(head)) return Because we do not clear the HWPoison of the head page, we cannot poison another tail page. Note: some pages might miss their poisoning (even without this patch). Signed-off-by: Muchun Song Reviewed-by: Oscar Salvador Acked-by: David Rientjes Tested-by: Chen Huang Tested-by: Bodeddula Balasubramaniam --- include/linux/hugetlb.h | 3 ++ mm/hugetlb.c | 81 +++++++++++++++++++++++++++++++++++++++++++------ 2 files changed, 75 insertions(+), 9 deletions(-) diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h index 09421f5f35e2..7f7a0e3405ae 100644 --- a/include/linux/hugetlb.h +++ b/include/linux/hugetlb.h @@ -42,6 +42,9 @@ enum { SUBPAGE_INDEX_CGROUP_RSVD, /* reuse page->private */ __MAX_CGROUP_SUBPAGE_INDEX = SUBPAGE_INDEX_CGROUP_RSVD, #endif +#ifdef CONFIG_HUGETLB_PAGE_FREE_VMEMMAP + SUBPAGE_INDEX_HWPOISON, /* reuse page->private */ +#endif __NR_USED_SUBPAGE, }; diff --git a/mm/hugetlb.c b/mm/hugetlb.c index e42b19337a8f..53f239818293 100644 --- a/mm/hugetlb.c +++ b/mm/hugetlb.c @@ -1304,6 +1304,74 @@ static inline void destroy_compound_gigantic_page(struct page *page, unsigned int order) { } #endif +#ifdef CONFIG_HUGETLB_PAGE_FREE_VMEMMAP +static inline void hwpoison_subpage_deliver(struct hstate *h, struct page *head) +{ + struct page *page; + + if (!PageHWPoison(head) || !free_vmemmap_pages_per_hpage(h)) + return; + + page = head + page_private(head + SUBPAGE_INDEX_HWPOISON); + + /* + * Move PageHWPoison flag from head page to the raw error page, + * which makes any subpages rather than the error page reusable. 
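+ * The index of the raw error page was stashed in
+ * page_private(head + SUBPAGE_INDEX_HWPOISON) by hwpoison_subpage_set().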
+ */ + if (page != head) { + SetPageHWPoison(page); + ClearPageHWPoison(head); + } +} + +static inline void hwpoison_subpage_set(struct hstate *h, struct page *head, + struct page *page) +{ + if (!PageHWPoison(head)) + return; + + if (free_vmemmap_pages_per_hpage(h)) { + set_page_private(head + SUBPAGE_INDEX_HWPOISON, page - head); + } else if (page != head) { + /* + * Move PageHWPoison flag from head page to the raw error page, + * which makes any subpages rather than the error page reusable. + */ + SetPageHWPoison(page); + ClearPageHWPoison(head); + } +} + +static inline void hwpoison_subpage_clear(struct hstate *h, struct page *head) +{ + if (!PageHWPoison(head) || !free_vmemmap_pages_per_hpage(h)) + return; + + set_page_private(head + SUBPAGE_INDEX_HWPOISON, 0); +} +#else +static inline void hwpoison_subpage_deliver(struct hstate *h, struct page *head) +{ +} + +static inline void hwpoison_subpage_set(struct hstate *h, struct page *head, + struct page *page) +{ + if (PageHWPoison(head) && page != head) { + /* + * Move PageHWPoison flag from head page to the raw error page, + * which makes any subpages rather than the error page reusable. + */ + SetPageHWPoison(page); + ClearPageHWPoison(head); + } +} + +static inline void hwpoison_subpage_clear(struct hstate *h, struct page *head) +{ +} +#endif + static int update_and_free_page_surplus(struct hstate *h, struct page *page, bool acct_surplus) __releases(&hugetlb_lock) __acquires(&hugetlb_lock) @@ -1807,22 +1875,17 @@ int dissolve_free_huge_page(struct page *page) goto retry; } - /* - * Move PageHWPoison flag from head page to the raw error page, - * which makes any subpages rather than the error page reusable. - */ - if (PageHWPoison(head) && page != head) { - SetPageHWPoison(page); - ClearPageHWPoison(head); - } + hwpoison_subpage_set(h, head, page); list_del(&head->lru); ClearHPageFreed(page); h->free_huge_pages--; h->free_huge_pages_node[nid]--; h->max_huge_pages--; rc = update_and_free_page_surplus(h, head, false); - if (rc) + if (rc) { h->max_huge_pages++; + hwpoison_subpage_clear(h, head); + } } out: spin_unlock(&hugetlb_lock); From patchwork Mon Mar 15 09:20:14 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Muchun Song X-Patchwork-Id: 12138787 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-16.8 required=3.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 4F8BDC43381 for ; Mon, 15 Mar 2021 09:28:03 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 154BD64E90 for ; Mon, 15 Mar 2021 09:28:03 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S229818AbhCOJ1i (ORCPT ); Mon, 15 Mar 2021 05:27:38 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:42426 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S229686AbhCOJ11 (ORCPT ); Mon, 15 Mar 2021 05:27:27 -0400 Received: from mail-pg1-x533.google.com (mail-pg1-x533.google.com [IPv6:2607:f8b0:4864:20::533]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id A9CF7C06175F for ; Mon, 15 
Mar 2021 02:27:27 -0700 (PDT)
From: Muchun Song
To: corbet@lwn.net, mike.kravetz@oracle.com, tglx@linutronix.de, mingo@redhat.com, bp@alien8.de, x86@kernel.org, hpa@zytor.com, dave.hansen@linux.intel.com, luto@kernel.org, peterz@infradead.org, viro@zeniv.linux.org.uk, akpm@linux-foundation.org, paulmck@kernel.org, mchehab+huawei@kernel.org, pawan.kumar.gupta@linux.intel.com, rdunlap@infradead.org, oneukum@suse.com, anshuman.khandual@arm.com, jroedel@suse.de, almasrymina@google.com, rientjes@google.com, willy@infradead.org, osalvador@suse.de, mhocko@suse.com, song.bao.hua@hisilicon.com, david@redhat.com, naoya.horiguchi@nec.com, joao.m.martins@oracle.com
Cc: duanxiongchun@bytedance.com, linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org, linux-mm@kvack.org, linux-fsdevel@vger.kernel.org, Muchun Song, Miaohe Lin, Chen Huang, Bodeddula Balasubramaniam
Subject: [PATCH v19 7/8] mm: hugetlb: add a kernel parameter hugetlb_free_vmemmap
Date: Mon, 15 Mar 2021 17:20:14 +0800
Message-Id: <20210315092015.35396-8-songmuchun@bytedance.com>
In-Reply-To: <20210315092015.35396-1-songmuchun@bytedance.com>
References: <20210315092015.35396-1-songmuchun@bytedance.com>
List-ID: X-Mailing-List: linux-fsdevel@vger.kernel.org

Add a kernel parameter hugetlb_free_vmemmap to enable the feature of
freeing unused vmemmap pages associated with each hugetlb page on boot.

We disable PMD mapping of vmemmap pages for the x86-64 arch when this
feature is enabled.
Because vmemmap_remap_free() depends on vmemmap being base page mapped. Signed-off-by: Muchun Song Reviewed-by: Oscar Salvador Reviewed-by: Barry Song Reviewed-by: Miaohe Lin Tested-by: Chen Huang Tested-by: Bodeddula Balasubramaniam --- Documentation/admin-guide/kernel-parameters.txt | 17 +++++++++++++++++ Documentation/admin-guide/mm/hugetlbpage.rst | 3 +++ arch/x86/mm/init_64.c | 8 ++++++-- include/linux/hugetlb.h | 19 +++++++++++++++++++ mm/hugetlb_vmemmap.c | 24 ++++++++++++++++++++++++ 5 files changed, 69 insertions(+), 2 deletions(-) diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt index 04545725f187..2e6b57207a3d 100644 --- a/Documentation/admin-guide/kernel-parameters.txt +++ b/Documentation/admin-guide/kernel-parameters.txt @@ -1557,6 +1557,23 @@ Documentation/admin-guide/mm/hugetlbpage.rst. Format: size[KMG] + hugetlb_free_vmemmap= + [KNL] Reguires CONFIG_HUGETLB_PAGE_FREE_VMEMMAP + enabled. + Allows heavy hugetlb users to free up some more + memory (6 * PAGE_SIZE for each 2MB hugetlb page). + This feauture is not free though. Large page + tables are not used to back vmemmap pages which + can lead to a performance degradation for some + workloads. Also there will be memory allocation + required when hugetlb pages are freed from the + pool which can lead to corner cases under heavy + memory pressure. + Format: { on | off (default) } + + on: enable the feature + off: disable the feature + hung_task_panic= [KNL] Should the hung task detector generate panics. Format: 0 | 1 diff --git a/Documentation/admin-guide/mm/hugetlbpage.rst b/Documentation/admin-guide/mm/hugetlbpage.rst index 6988895d09a8..8abaeb144e44 100644 --- a/Documentation/admin-guide/mm/hugetlbpage.rst +++ b/Documentation/admin-guide/mm/hugetlbpage.rst @@ -153,6 +153,9 @@ default_hugepagesz will all result in 256 2M huge pages being allocated. Valid default huge page size is architecture dependent. +hugetlb_free_vmemmap + When CONFIG_HUGETLB_PAGE_FREE_VMEMMAP is set, this enables freeing + unused vmemmap pages associated with each HugeTLB page. When multiple huge page sizes are supported, ``/proc/sys/vm/nr_hugepages`` indicates the current number of pre-allocated huge pages of the default size. 
diff --git a/arch/x86/mm/init_64.c b/arch/x86/mm/init_64.c index 0435bee2e172..39f88c5faadc 100644 --- a/arch/x86/mm/init_64.c +++ b/arch/x86/mm/init_64.c @@ -34,6 +34,7 @@ #include #include #include +#include #include #include @@ -1557,7 +1558,8 @@ int __meminit vmemmap_populate(unsigned long start, unsigned long end, int node, { int err; - if (end - start < PAGES_PER_SECTION * sizeof(struct page)) + if ((is_hugetlb_free_vmemmap_enabled() && !altmap) || + end - start < PAGES_PER_SECTION * sizeof(struct page)) err = vmemmap_populate_basepages(start, end, node, NULL); else if (boot_cpu_has(X86_FEATURE_PSE)) err = vmemmap_populate_hugepages(start, end, node, altmap); @@ -1585,6 +1587,8 @@ void register_page_bootmem_memmap(unsigned long section_nr, pmd_t *pmd; unsigned int nr_pmd_pages; struct page *page; + bool base_mapping = !boot_cpu_has(X86_FEATURE_PSE) || + is_hugetlb_free_vmemmap_enabled(); for (; addr < end; addr = next) { pte_t *pte = NULL; @@ -1610,7 +1614,7 @@ void register_page_bootmem_memmap(unsigned long section_nr, } get_page_bootmem(section_nr, pud_page(*pud), MIX_SECTION_INFO); - if (!boot_cpu_has(X86_FEATURE_PSE)) { + if (base_mapping) { next = (addr + PAGE_SIZE) & PAGE_MASK; pmd = pmd_offset(pud, addr); if (pmd_none(*pmd)) diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h index 7f7a0e3405ae..3efc6b9b23f2 100644 --- a/include/linux/hugetlb.h +++ b/include/linux/hugetlb.h @@ -872,6 +872,20 @@ static inline void huge_ptep_modify_prot_commit(struct vm_area_struct *vma, } #endif +#ifdef CONFIG_HUGETLB_PAGE_FREE_VMEMMAP +extern bool hugetlb_free_vmemmap_enabled; + +static inline bool is_hugetlb_free_vmemmap_enabled(void) +{ + return hugetlb_free_vmemmap_enabled; +} +#else +static inline bool is_hugetlb_free_vmemmap_enabled(void) +{ + return false; +} +#endif + #else /* CONFIG_HUGETLB_PAGE */ struct hstate {}; @@ -1025,6 +1039,11 @@ static inline void set_huge_swap_pte_at(struct mm_struct *mm, unsigned long addr pte_t *ptep, pte_t pte, unsigned long sz) { } + +static inline bool is_hugetlb_free_vmemmap_enabled(void) +{ + return false; +} #endif /* CONFIG_HUGETLB_PAGE */ static inline spinlock_t *huge_pte_lock(struct hstate *h, diff --git a/mm/hugetlb_vmemmap.c b/mm/hugetlb_vmemmap.c index 0e6835264da3..721258beeb94 100644 --- a/mm/hugetlb_vmemmap.c +++ b/mm/hugetlb_vmemmap.c @@ -168,6 +168,8 @@ * (last) level. So this type of HugeTLB page can be optimized only when its * size of the struct page structs is greater than 2 pages. */ +#define pr_fmt(fmt) "HugeTLB: " fmt + #include "hugetlb_vmemmap.h" /* @@ -180,6 +182,28 @@ #define RESERVE_VMEMMAP_NR 2U #define RESERVE_VMEMMAP_SIZE (RESERVE_VMEMMAP_NR << PAGE_SHIFT) +bool hugetlb_free_vmemmap_enabled; + +static int __init early_hugetlb_free_vmemmap_param(char *buf) +{ + /* We cannot optimize if a "struct page" crosses page boundaries. 
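+	 * A power-of-2 sizeof(struct page) guarantees that every vmemmap
+	 * page contains a whole number of struct pages.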
*/ + if ((!is_power_of_2(sizeof(struct page)))) { + pr_warn("cannot free vmemmap pages because \"struct page\" crosses page boundaries\n"); + return 0; + } + + if (!buf) + return -EINVAL; + + if (!strcmp(buf, "on")) + hugetlb_free_vmemmap_enabled = true; + else if (strcmp(buf, "off")) + return -EINVAL; + + return 0; +} +early_param("hugetlb_free_vmemmap", early_hugetlb_free_vmemmap_param); + static inline unsigned long free_vmemmap_pages_size_per_hpage(struct hstate *h) { return (unsigned long)free_vmemmap_pages_per_hpage(h) << PAGE_SHIFT; From patchwork Mon Mar 15 09:20:15 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Muchun Song X-Patchwork-Id: 12138791 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-16.8 required=3.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,USER_AGENT_GIT autolearn=unavailable autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id E3063C433DB for ; Mon, 15 Mar 2021 09:28:34 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 9D00564E33 for ; Mon, 15 Mar 2021 09:28:34 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S229672AbhCOJ2C (ORCPT ); Mon, 15 Mar 2021 05:28:02 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:42478 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S229830AbhCOJ1l (ORCPT ); Mon, 15 Mar 2021 05:27:41 -0400 Received: from mail-pf1-x42a.google.com (mail-pf1-x42a.google.com [IPv6:2607:f8b0:4864:20::42a]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id D3307C061762 for ; Mon, 15 Mar 2021 02:27:40 -0700 (PDT) Received: by mail-pf1-x42a.google.com with SMTP id a188so6194993pfb.4 for ; Mon, 15 Mar 2021 02:27:40 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=bytedance-com.20150623.gappssmtp.com; s=20150623; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=cszE7wMfBbYNxizL5sc0ajOqCGxq2ZshpdXC0HOV9L4=; b=km/CjZkCx6zyRgKYOGEoxWIjcIx1unZZquR8skWzd/pW6clIuXtwQGxiajPex0ATz2 P2wRVzt9l8BuviGhbq75n2agvjNjBaM/+hcKfgT8oClqlRtNdn/Apw1gj8ewV0jNmlxa zTdGDKZxTmalwPV9uQWrR10edynmv54Hgj6Omv9jkDnilNrBJ1B/ZwQXuW3L7nCBkdLC siDsQsYkTSH4fdAwnqQl7snYd/m1RG9Yy33+j6CuEKeRc64uxcxzXb9EMgpJjacVeavb 0PQT5RmteXVMLjtHYX+F4gVQzT3CT7yaEcTcn+A8fSICz6t22z82/cCKOR2nBV3yDgx3 ayWQ== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=cszE7wMfBbYNxizL5sc0ajOqCGxq2ZshpdXC0HOV9L4=; b=EJ40P4F6/iimqq/LjNWRSLRpQ59rL9+c3pAynpP84XiTWcY09aMzP0d3t/K6ebxorw CKC/mUb8JeNC+ReYmTBRHWhUdC7Pi4F7WSqZw6WE3MwJXRpqFpTUbokwe/xrm07o8sWy PP3yLoYYlG2MbbA8hvJds1Mhm1BuPxDXIzayX7G6oEr4VK/AdOp+urIjWHBacELGGpV+ 66t0QWYfmKXimMTrsilxn+NkUUQsSgbsjgtV1G1L5RMUGDb5Rcq9yj0erzmAA5xa4Bkx yw7YLnS8xkM77HLfPExPZa0tm4rS95yTMfbf3NgSURbMtbrExFVwzTJ+TmdkYVwMocvz 4lmw== X-Gm-Message-State: AOAM531tfnluyHVQOTE0td5cPLW98uITMQGvDmj4Z2zUQSKpKcjN5BaZ hliZjFfc7S/TzWQCi/powyWjow== X-Google-Smtp-Source: ABdhPJxPEVdT+G36Se/9qGYicEGixno2NzopXRjEZUuaXgjgAEFZkNn1DOLUc0BnnJ58P4wMWFOvBA== 
From: Muchun Song
To: corbet@lwn.net, mike.kravetz@oracle.com, tglx@linutronix.de, mingo@redhat.com, bp@alien8.de, x86@kernel.org, hpa@zytor.com, dave.hansen@linux.intel.com, luto@kernel.org, peterz@infradead.org, viro@zeniv.linux.org.uk, akpm@linux-foundation.org, paulmck@kernel.org, mchehab+huawei@kernel.org, pawan.kumar.gupta@linux.intel.com, rdunlap@infradead.org, oneukum@suse.com, anshuman.khandual@arm.com, jroedel@suse.de, almasrymina@google.com, rientjes@google.com, willy@infradead.org, osalvador@suse.de, mhocko@suse.com, song.bao.hua@hisilicon.com, david@redhat.com, naoya.horiguchi@nec.com, joao.m.martins@oracle.com
Cc: duanxiongchun@bytedance.com, linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org, linux-mm@kvack.org, linux-fsdevel@vger.kernel.org, Muchun Song, Miaohe Lin, Chen Huang, Bodeddula Balasubramaniam
Subject: [PATCH v19 8/8] mm: hugetlb: introduce nr_free_vmemmap_pages in the struct hstate
Date: Mon, 15 Mar 2021 17:20:15 +0800
Message-Id: <20210315092015.35396-9-songmuchun@bytedance.com>
In-Reply-To: <20210315092015.35396-1-songmuchun@bytedance.com>
References: <20210315092015.35396-1-songmuchun@bytedance.com>
List-ID: X-Mailing-List: linux-fsdevel@vger.kernel.org

All the infrastructure is ready, so we introduce the nr_free_vmemmap_pages
field in struct hstate to indicate how many vmemmap pages associated with
a HugeTLB page can be freed to the buddy allocator, and initialize it in
hugetlb_vmemmap_init(). This patch is the actual enablement of the feature.

There are only (RESERVE_VMEMMAP_SIZE / sizeof(struct page)) struct page
structs that can be used when CONFIG_HUGETLB_PAGE_FREE_VMEMMAP is set, so
add a BUILD_BUG_ON to catch invalid usage of the tail struct page.
Signed-off-by: Muchun Song Acked-by: Mike Kravetz Reviewed-by: Oscar Salvador Reviewed-by: Miaohe Lin Tested-by: Chen Huang Tested-by: Bodeddula Balasubramaniam --- include/linux/hugetlb.h | 3 +++ mm/hugetlb.c | 1 + mm/hugetlb_vmemmap.c | 33 +++++++++++++++++++++++++++++++++ mm/hugetlb_vmemmap.h | 10 ++++++---- 4 files changed, 43 insertions(+), 4 deletions(-) diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h index 3efc6b9b23f2..c70421e26189 100644 --- a/include/linux/hugetlb.h +++ b/include/linux/hugetlb.h @@ -580,6 +580,9 @@ struct hstate { unsigned int nr_huge_pages_node[MAX_NUMNODES]; unsigned int free_huge_pages_node[MAX_NUMNODES]; unsigned int surplus_huge_pages_node[MAX_NUMNODES]; +#ifdef CONFIG_HUGETLB_PAGE_FREE_VMEMMAP + unsigned int nr_free_vmemmap_pages; +#endif #ifdef CONFIG_CGROUP_HUGETLB /* cgroup control files */ struct cftype cgroup_files_dfl[7]; diff --git a/mm/hugetlb.c b/mm/hugetlb.c index 53f239818293..37c06e0e3660 100644 --- a/mm/hugetlb.c +++ b/mm/hugetlb.c @@ -3315,6 +3315,7 @@ void __init hugetlb_add_hstate(unsigned int order) h->next_nid_to_free = first_memory_node; snprintf(h->name, HSTATE_NAME_LEN, "hugepages-%lukB", huge_page_size(h)/1024); + hugetlb_vmemmap_init(h); parsed_hstate = h; } diff --git a/mm/hugetlb_vmemmap.c b/mm/hugetlb_vmemmap.c index 721258beeb94..13e7e57a1327 100644 --- a/mm/hugetlb_vmemmap.c +++ b/mm/hugetlb_vmemmap.c @@ -254,3 +254,36 @@ void free_huge_page_vmemmap(struct hstate *h, struct page *head) */ vmemmap_remap_free(vmemmap_addr, vmemmap_end, vmemmap_reuse); } + +void __init hugetlb_vmemmap_init(struct hstate *h) +{ + unsigned int nr_pages = pages_per_huge_page(h); + unsigned int vmemmap_pages; + + /* + * There are only (RESERVE_VMEMMAP_SIZE / sizeof(struct page)) struct + * page structs that can be used when CONFIG_HUGETLB_PAGE_FREE_VMEMMAP, + * so add a BUILD_BUG_ON to catch invalid usage of the tail struct page. + */ + BUILD_BUG_ON(__NR_USED_SUBPAGE >= + RESERVE_VMEMMAP_SIZE / sizeof(struct page)); + + if (!hugetlb_free_vmemmap_enabled) + return; + + vmemmap_pages = (nr_pages * sizeof(struct page)) >> PAGE_SHIFT; + /* + * The head page and the first tail page are not to be freed to buddy + * allocator, the other pages will map to the first tail page, so they + * can be freed. + * + * Could RESERVE_VMEMMAP_NR be greater than @vmemmap_pages? It is true + * on some architectures (e.g. aarch64). See Documentation/arm64/ + * hugetlbpage.rst for more details. + */ + if (likely(vmemmap_pages > RESERVE_VMEMMAP_NR)) + h->nr_free_vmemmap_pages = vmemmap_pages - RESERVE_VMEMMAP_NR; + + pr_info("can free %d vmemmap pages for %s\n", h->nr_free_vmemmap_pages, + h->name); +} diff --git a/mm/hugetlb_vmemmap.h b/mm/hugetlb_vmemmap.h index a37771b0b82a..cb2bef8f9e73 100644 --- a/mm/hugetlb_vmemmap.h +++ b/mm/hugetlb_vmemmap.h @@ -13,17 +13,15 @@ #ifdef CONFIG_HUGETLB_PAGE_FREE_VMEMMAP int alloc_huge_page_vmemmap(struct hstate *h, struct page *head); void free_huge_page_vmemmap(struct hstate *h, struct page *head); +void hugetlb_vmemmap_init(struct hstate *h); /* * How many vmemmap pages associated with a HugeTLB page that can be freed * to the buddy allocator. - * - * Todo: Returns zero for now, which means the feature is disabled. We will - * enable it once all the infrastructure is there. 
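+ *
+ * Returns zero unless "hugetlb_free_vmemmap=on" was specified on the
+ * command line and the vmemmap of the hstate spans more than
+ * RESERVE_VMEMMAP_NR pages.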
*/ static inline unsigned int free_vmemmap_pages_per_hpage(struct hstate *h) { - return 0; + return h->nr_free_vmemmap_pages; } #else static inline int alloc_huge_page_vmemmap(struct hstate *h, struct page *head) @@ -35,6 +33,10 @@ static inline void free_huge_page_vmemmap(struct hstate *h, struct page *head) { } +static inline void hugetlb_vmemmap_init(struct hstate *h) +{ +} + static inline unsigned int free_vmemmap_pages_per_hpage(struct hstate *h) { return 0;