From patchwork Fri Nov 20 06:43:23 2020
X-Patchwork-Submitter: Muchun Song <songmuchun@bytedance.com>
X-Patchwork-Id: 11919599
From: Muchun Song <songmuchun@bytedance.com>
To: corbet@lwn.net, mike.kravetz@oracle.com, tglx@linutronix.de,
 mingo@redhat.com, bp@alien8.de, x86@kernel.org, hpa@zytor.com,
 dave.hansen@linux.intel.com, luto@kernel.org, peterz@infradead.org,
 viro@zeniv.linux.org.uk, akpm@linux-foundation.org, paulmck@kernel.org,
 mchehab+huawei@kernel.org, pawan.kumar.gupta@linux.intel.com,
 rdunlap@infradead.org, oneukum@suse.com, anshuman.khandual@arm.com,
 jroedel@suse.de, almasrymina@google.com, rientjes@google.com,
 willy@infradead.org, osalvador@suse.de, mhocko@suse.com,
 song.bao.hua@hisilicon.com
Cc: duanxiongchun@bytedance.com, linux-doc@vger.kernel.org,
 linux-kernel@vger.kernel.org, linux-mm@kvack.org,
 linux-fsdevel@vger.kernel.org, Muchun Song <songmuchun@bytedance.com>
Subject: [PATCH v5 19/21] mm/hugetlb: Gather discrete indexes of tail page
Date: Fri, 20 Nov 2020 14:43:23 +0800
Message-Id: <20201120064325.34492-20-songmuchun@bytedance.com>
X-Mailer: git-send-email 2.21.0 (Apple Git-122)
In-Reply-To: <20201120064325.34492-1-songmuchun@bytedance.com>
References: <20201120064325.34492-1-songmuchun@bytedance.com>
MIME-Version: 1.0

For a HugeTLB page, there is more metadata to save than fits in the
head struct page, so we have to abuse fields of the tail struct pages
to store it. To avoid conflicts caused by subsequent uses of more tail
struct pages, gather these discrete indexes of tail struct pages into
one enum. This also makes it easier to add a new tail page index
later.
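To illustrate (a sketch built only from the accessors already used in
the diff below, not new API; "head" here names the head page of the
huge page): with the enum in place, the store and load sides of each
metadata item stay symmetric, e.g. for the hugetlb cgroup pointer:

	set_page_private(head + SUBPAGE_INDEX_CGROUP, (unsigned long)h_cg);
	h_cg = (struct hugetlb_cgroup *)page_private(head + SUBPAGE_INDEX_CGROUP);

Adding a new piece of metadata then means adding one enumerator before
NR_USED_SUBPAGE instead of auditing every open-coded page[2]/page[3]
offset.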
Signed-off-by: Muchun Song <songmuchun@bytedance.com>
---
 include/linux/hugetlb.h        | 13 +++++++++++++
 include/linux/hugetlb_cgroup.h | 15 +++++++++------
 mm/hugetlb.c                   | 12 ++++++------
 mm/hugetlb_vmemmap.h           |  4 ++--
 4 files changed, 30 insertions(+), 14 deletions(-)

diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h
index da18fc9ed152..fa9d38a3ac6f 100644
--- a/include/linux/hugetlb.h
+++ b/include/linux/hugetlb.h
@@ -28,6 +28,19 @@ typedef struct { unsigned long pd; } hugepd_t;
 #include <linux/shm.h>
 #include <asm/tlbflush.h>
 
+enum {
+	SUBPAGE_INDEX_ACTIVE = 1,	/* reuse page flags of PG_private */
+	SUBPAGE_INDEX_TEMPORARY,	/* reuse page->mapping */
+#ifdef CONFIG_CGROUP_HUGETLB
+	SUBPAGE_INDEX_CGROUP = SUBPAGE_INDEX_TEMPORARY,/* reuse page->private */
+	SUBPAGE_INDEX_CGROUP_RSVD,	/* reuse page->private */
+#endif
+#ifdef CONFIG_HUGETLB_PAGE_FREE_VMEMMAP
+	SUBPAGE_INDEX_HWPOISON,		/* reuse page->private */
+#endif
+	NR_USED_SUBPAGE,
+};
+
 struct hugepage_subpool {
 	spinlock_t lock;
 	long count;
diff --git a/include/linux/hugetlb_cgroup.h b/include/linux/hugetlb_cgroup.h
index 2ad6e92f124a..3d3c1c49efe4 100644
--- a/include/linux/hugetlb_cgroup.h
+++ b/include/linux/hugetlb_cgroup.h
@@ -24,8 +24,9 @@ struct file_region;
 /*
  * Minimum page order trackable by hugetlb cgroup.
  * At least 4 pages are necessary for all the tracking information.
- * The second tail page (hpage[2]) is the fault usage cgroup.
- * The third tail page (hpage[3]) is the reservation usage cgroup.
+ * The second tail page (hpage[SUBPAGE_INDEX_CGROUP]) is the fault
+ * usage cgroup. The third tail page (hpage[SUBPAGE_INDEX_CGROUP_RSVD])
+ * is the reservation usage cgroup.
  */
 
 #define HUGETLB_CGROUP_MIN_ORDER 2
@@ -66,9 +67,9 @@ __hugetlb_cgroup_from_page(struct page *page, bool rsvd)
 	if (compound_order(page) < HUGETLB_CGROUP_MIN_ORDER)
 		return NULL;
 	if (rsvd)
-		return (struct hugetlb_cgroup *)page[3].private;
+		return (void *)page_private(page + SUBPAGE_INDEX_CGROUP_RSVD);
 	else
-		return (struct hugetlb_cgroup *)page[2].private;
+		return (void *)page_private(page + SUBPAGE_INDEX_CGROUP);
 }
 
 static inline struct hugetlb_cgroup *hugetlb_cgroup_from_page(struct page *page)
@@ -90,9 +91,11 @@ static inline int __set_hugetlb_cgroup(struct page *page,
 	if (compound_order(page) < HUGETLB_CGROUP_MIN_ORDER)
 		return -1;
 	if (rsvd)
-		page[3].private = (unsigned long)h_cg;
+		set_page_private(page + SUBPAGE_INDEX_CGROUP_RSVD,
+				 (unsigned long)h_cg);
 	else
-		page[2].private = (unsigned long)h_cg;
+		set_page_private(page + SUBPAGE_INDEX_CGROUP,
+				 (unsigned long)h_cg);
 	return 0;
 }
 
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 9aad0b63d369..dfa982f4b525 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -1429,20 +1429,20 @@ struct hstate *size_to_hstate(unsigned long size)
 bool page_huge_active(struct page *page)
 {
 	VM_BUG_ON_PAGE(!PageHuge(page), page);
-	return PageHead(page) && PagePrivate(&page[1]);
+	return PageHead(page) && PagePrivate(&page[SUBPAGE_INDEX_ACTIVE]);
 }
 
 /* never called for tail page */
 static void set_page_huge_active(struct page *page)
 {
 	VM_BUG_ON_PAGE(!PageHeadHuge(page), page);
-	SetPagePrivate(&page[1]);
+	SetPagePrivate(&page[SUBPAGE_INDEX_ACTIVE]);
 }
 
 static void clear_page_huge_active(struct page *page)
 {
 	VM_BUG_ON_PAGE(!PageHeadHuge(page), page);
-	ClearPagePrivate(&page[1]);
+	ClearPagePrivate(&page[SUBPAGE_INDEX_ACTIVE]);
 }
 
 /*
@@ -1454,17 +1454,17 @@ static inline bool PageHugeTemporary(struct page *page)
 	if (!PageHuge(page))
 		return false;
 
-	return (unsigned long)page[2].mapping == -1U;
+	return (unsigned long)page[SUBPAGE_INDEX_TEMPORARY].mapping == -1U;
 }
 
 static inline void SetPageHugeTemporary(struct page *page)
 {
-	page[2].mapping = (void *)-1U;
+	page[SUBPAGE_INDEX_TEMPORARY].mapping = (void *)-1U;
 }
 
 static inline void ClearPageHugeTemporary(struct page *page)
 {
-	page[2].mapping = NULL;
+	page[SUBPAGE_INDEX_TEMPORARY].mapping = NULL;
 }
 
 static void __free_huge_page(struct page *page)
diff --git a/mm/hugetlb_vmemmap.h b/mm/hugetlb_vmemmap.h
index 65e94436ffff..d9c1f45e93ae 100644
--- a/mm/hugetlb_vmemmap.h
+++ b/mm/hugetlb_vmemmap.h
@@ -25,7 +25,7 @@ static inline void subpage_hwpoison_deliver(struct page *head)
 	struct page *page = head;
 
 	if (PageHWPoison(head))
-		page = head + page_private(head + 4);
+		page = head + page_private(head + SUBPAGE_INDEX_HWPOISON);
 
 	/*
 	 * Move PageHWPoison flag from head page to the raw error page,
@@ -40,7 +40,7 @@ static inline void subpage_hwpoison_deliver(struct page *head)
 static inline void set_subpage_hwpoison(struct page *head, struct page *page)
 {
 	if (PageHWPoison(head))
-		set_page_private(head + 4, page - head);
+		set_page_private(head + SUBPAGE_INDEX_HWPOISON, page - head);
 }
 
 static inline unsigned int free_vmemmap_pages_per_hpage(struct hstate *h)
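
A worked check of the HUGETLB_CGROUP_MIN_ORDER comment above, for
illustration only: an order-2 compound page has 1 << 2 = 4 subpages,
hpage[0] through hpage[3], which is just enough for the largest cgroup
index, SUBPAGE_INDEX_CGROUP_RSVD == 3. A hypothetical build-time guard
(not part of this series) could make that relationship explicit:

	/* hypothetical guard, not in this patch */
	BUILD_BUG_ON(SUBPAGE_INDEX_CGROUP_RSVD + 1 >
		     (1 << HUGETLB_CGROUP_MIN_ORDER));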