From patchwork Mon Mar 8 10:28:05 2021
X-Patchwork-Submitter: Muchun Song
X-Patchwork-Id: 12121929
From: Muchun Song
To: corbet@lwn.net, mike.kravetz@oracle.com, tglx@linutronix.de,
 mingo@redhat.com, bp@alien8.de, x86@kernel.org,
 hpa@zytor.com, dave.hansen@linux.intel.com, luto@kernel.org,
 peterz@infradead.org, viro@zeniv.linux.org.uk, akpm@linux-foundation.org,
 paulmck@kernel.org, mchehab+huawei@kernel.org,
 pawan.kumar.gupta@linux.intel.com, rdunlap@infradead.org, oneukum@suse.com,
 anshuman.khandual@arm.com, jroedel@suse.de, almasrymina@google.com,
 rientjes@google.com, willy@infradead.org, osalvador@suse.de, mhocko@suse.com,
 song.bao.hua@hisilicon.com, david@redhat.com, naoya.horiguchi@nec.com,
 joao.m.martins@oracle.com
Cc: duanxiongchun@bytedance.com, linux-doc@vger.kernel.org,
 linux-kernel@vger.kernel.org, linux-mm@kvack.org,
 linux-fsdevel@vger.kernel.org, Muchun Song, Miaohe Lin, Chen Huang,
 Bodeddula Balasubramaniam
Subject: [PATCH v18 7/9] mm: hugetlb: introduce nr_free_vmemmap_pages in the
 struct hstate
Date: Mon, 8 Mar 2021 18:28:05 +0800
Message-Id: <20210308102807.59745-8-songmuchun@bytedance.com>
X-Mailer: git-send-email 2.21.0 (Apple Git-122)
In-Reply-To: <20210308102807.59745-1-songmuchun@bytedance.com>
References: <20210308102807.59745-1-songmuchun@bytedance.com>
MIME-Version: 1.0

All the infrastructure is now in place, so introduce a nr_free_vmemmap_pages
field in struct hstate to record how many vmemmap pages associated with a
HugeTLB page can be freed to the buddy allocator, and initialize it in
hugetlb_vmemmap_init(). This patch is the actual enablement of the feature.

Signed-off-by: Muchun Song
Acked-by: Mike Kravetz
Reviewed-by: Oscar Salvador
Reviewed-by: Miaohe Lin
Tested-by: Chen Huang
Tested-by: Bodeddula Balasubramaniam
---
 include/linux/hugetlb.h |  3 +++
 mm/hugetlb.c            |  1 +
 mm/hugetlb_vmemmap.c    | 25 +++++++++++++++++++++++++
 mm/hugetlb_vmemmap.h    | 10 ++++++----
 4 files changed, 35 insertions(+), 4 deletions(-)

diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h
index 78934e9aeab6..a4d80f7263fc 100644
--- a/include/linux/hugetlb.h
+++ b/include/linux/hugetlb.h
@@ -560,6 +560,9 @@ struct hstate {
 	unsigned int nr_huge_pages_node[MAX_NUMNODES];
 	unsigned int free_huge_pages_node[MAX_NUMNODES];
 	unsigned int surplus_huge_pages_node[MAX_NUMNODES];
+#ifdef CONFIG_HUGETLB_PAGE_FREE_VMEMMAP
+	unsigned int nr_free_vmemmap_pages;
+#endif
 #ifdef CONFIG_CGROUP_HUGETLB
 	/* cgroup control files */
 	struct cftype cgroup_files_dfl[7];
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index c0c1b7635ca9..c221b937be17 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -3312,6 +3312,7 @@ void __init hugetlb_add_hstate(unsigned int order)
 	h->next_nid_to_free = first_memory_node;
 	snprintf(h->name, HSTATE_NAME_LEN, "hugepages-%lukB",
 					huge_page_size(h)/1024);
+	hugetlb_vmemmap_init(h);
 
 	parsed_hstate = h;
 }
diff --git a/mm/hugetlb_vmemmap.c b/mm/hugetlb_vmemmap.c
index 7807ed6678e0..b65f0d5189bd 100644
--- a/mm/hugetlb_vmemmap.c
+++ b/mm/hugetlb_vmemmap.c
@@ -251,3 +251,28 @@ void free_huge_page_vmemmap(struct hstate *h, struct page *head)
 	 */
 	vmemmap_remap_free(vmemmap_addr, vmemmap_end, vmemmap_reuse);
 }
+
+void __init hugetlb_vmemmap_init(struct hstate *h)
+{
+	unsigned int nr_pages = pages_per_huge_page(h);
+	unsigned int vmemmap_pages;
+
+	if (!hugetlb_free_vmemmap_enabled)
+		return;
+
+	vmemmap_pages = (nr_pages * sizeof(struct page)) >> PAGE_SHIFT;
+	/*
+	 * The head page and the first tail page are not to be freed to buddy
+	 * allocator, the other pages will map to the first tail page, so they
+	 * can be freed.
+	 *
+	 * Could RESERVE_VMEMMAP_NR be greater than @vmemmap_pages? It is true
+	 * on some architectures (e.g. aarch64).
+	 * See Documentation/arm64/hugetlbpage.rst for more details.
+	 */
+	if (likely(vmemmap_pages > RESERVE_VMEMMAP_NR))
+		h->nr_free_vmemmap_pages = vmemmap_pages - RESERVE_VMEMMAP_NR;
+
+	pr_info("can free %d vmemmap pages for %s\n", h->nr_free_vmemmap_pages,
+		h->name);
+}
diff --git a/mm/hugetlb_vmemmap.h b/mm/hugetlb_vmemmap.h
index a37771b0b82a..cb2bef8f9e73 100644
--- a/mm/hugetlb_vmemmap.h
+++ b/mm/hugetlb_vmemmap.h
@@ -13,17 +13,15 @@
 #ifdef CONFIG_HUGETLB_PAGE_FREE_VMEMMAP
 int alloc_huge_page_vmemmap(struct hstate *h, struct page *head);
 void free_huge_page_vmemmap(struct hstate *h, struct page *head);
+void hugetlb_vmemmap_init(struct hstate *h);
 
 /*
  * How many vmemmap pages associated with a HugeTLB page that can be freed
  * to the buddy allocator.
- *
- * Todo: Returns zero for now, which means the feature is disabled. We will
- * enable it once all the infrastructure is there.
  */
 static inline unsigned int free_vmemmap_pages_per_hpage(struct hstate *h)
 {
-	return 0;
+	return h->nr_free_vmemmap_pages;
 }
 #else
 static inline int alloc_huge_page_vmemmap(struct hstate *h, struct page *head)
@@ -35,6 +33,10 @@ static inline void free_huge_page_vmemmap(struct hstate *h, struct page *head)
 {
 }
 
+static inline void hugetlb_vmemmap_init(struct hstate *h)
+{
+}
+
 static inline unsigned int free_vmemmap_pages_per_hpage(struct hstate *h)
 {
 	return 0;
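
The arithmetic hugetlb_vmemmap_init() performs is easy to sanity-check in
userspace. The sketch below is illustrative only and not part of the patch;
it assumes typical x86_64 values (4 KiB base pages, a 64-byte struct page,
2 MiB HugeTLB pages) and that RESERVE_VMEMMAP_NR is 2, i.e. the vmemmap pages
backing the head page and the first tail page are kept:

/*
 * Illustrative userspace sketch (not part of the patch): mirrors the
 * computation done by hugetlb_vmemmap_init(). Assumes x86_64 with 4 KiB
 * base pages, a 64-byte struct page, and RESERVE_VMEMMAP_NR == 2.
 */
#include <stdio.h>

#define PAGE_SHIFT          12   /* 4 KiB base pages */
#define STRUCT_PAGE_SIZE    64   /* assumed sizeof(struct page) */
#define RESERVE_VMEMMAP_NR  2U   /* vmemmap pages kept for head + first tail */

int main(void)
{
	unsigned int nr_pages = 512;    /* pages_per_huge_page() for 2 MiB */
	unsigned int vmemmap_pages = (nr_pages * STRUCT_PAGE_SIZE) >> PAGE_SHIFT;
	unsigned int nr_free = 0;

	if (vmemmap_pages > RESERVE_VMEMMAP_NR)
		nr_free = vmemmap_pages - RESERVE_VMEMMAP_NR;

	/* prints: 8 vmemmap pages per HugeTLB page, 6 freeable */
	printf("%u vmemmap pages per HugeTLB page, %u freeable\n",
	       vmemmap_pages, nr_free);
	return 0;
}

Under the same assumptions, a 1 GiB HugeTLB page has 4096 vmemmap pages, of
which 4094 would be freeable.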