From patchwork Thu Nov 17 21:02:49 2022
From: Sidhartha Kumar
To: linux-kernel@vger.kernel.org, linux-mm@kvack.org
Cc: akpm@linux-foundation.org, songmuchun@bytedance.com, mike.kravetz@oracle.com,
    willy@infradead.org, almasrymina@google.com, linmiaohe@huawei.com,
    hughd@google.com, Sidhartha Kumar
Subject: [PATCH mm-unstable v2 01/10] mm: add folio dtor and order setter functions
Date: Thu, 17 Nov 2022 13:02:49 -0800
Message-Id: <20221117210258.12732-2-sidhartha.kumar@oracle.com>
In-Reply-To: <20221117210258.12732-1-sidhartha.kumar@oracle.com>
References: <20221117210258.12732-1-sidhartha.kumar@oracle.com>

Add folio equivalents for set_compound_order() and set_compound_page_dtor().

Also remove extra new-lines introduced by "mm/hugetlb: convert
move_hugetlb_state() to folios" and "mm/hugetlb_cgroup: convert
hugetlb_cgroup_uncharge_page() to folios".
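For illustration only (not part of the patch), a minimal sketch of how a caller
could use the two new setters when preparing a freshly allocated compound folio.
prep_compound_folio_example() is a hypothetical helper name, not an existing
kernel function:

	/*
	 * Hypothetical example: record the destructor and the order through
	 * the folio instead of poking the page[1] fields directly.
	 */
	static void prep_compound_folio_example(struct folio *folio,
						unsigned int order)
	{
		folio_set_compound_dtor(folio, COMPOUND_PAGE_DTOR);
		/* Also fills _folio_nr_pages on CONFIG_64BIT builds. */
		folio_set_compound_order(folio, order);
	}

The point of the helpers is that callers stop open-coding writes to
folio->_folio_dtor and folio->_folio_order, as the hugetlb hunk below shows.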
Suggested-by: Mike Kravetz
Suggested-by: Muchun Song
Signed-off-by: Sidhartha Kumar
---
 include/linux/mm.h | 16 ++++++++++++++++
 mm/hugetlb.c       |  4 +---
 2 files changed, 17 insertions(+), 3 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index a48c5ad16a5e..47ba2b2b22ae 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -972,6 +972,13 @@ static inline void set_compound_page_dtor(struct page *page,
 	page[1].compound_dtor = compound_dtor;
 }
 
+static inline void folio_set_compound_dtor(struct folio *folio,
+		enum compound_dtor_id compound_dtor)
+{
+	VM_BUG_ON_FOLIO(compound_dtor >= NR_COMPOUND_DTORS, folio);
+	folio->_folio_dtor = compound_dtor;
+}
+
 void destroy_large_folio(struct folio *folio);
 
 static inline int head_compound_pincount(struct page *head)
@@ -987,6 +994,15 @@ static inline void set_compound_order(struct page *page, unsigned int order)
 #endif
 }
 
+static inline void folio_set_compound_order(struct folio *folio,
+		unsigned int order)
+{
+	folio->_folio_order = order;
+#ifdef CONFIG_64BIT
+	folio->_folio_nr_pages = 1U << order;
+#endif
+}
+
 /* Returns the number of pages in this potentially compound page. */
 static inline unsigned long compound_nr(struct page *page)
 {
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 002337cc27b5..157f2392c64f 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -1780,7 +1780,7 @@ static void __prep_new_hugetlb_folio(struct hstate *h, struct folio *folio)
 {
 	hugetlb_vmemmap_optimize(h, &folio->page);
 	INIT_LIST_HEAD(&folio->lru);
-	folio->_folio_dtor = HUGETLB_PAGE_DTOR;
+	folio_set_compound_dtor(folio, HUGETLB_PAGE_DTOR);
 	hugetlb_set_folio_subpool(folio, NULL);
 	set_hugetlb_cgroup(folio, NULL);
 	set_hugetlb_cgroup_rsvd(folio, NULL);
@@ -2936,7 +2936,6 @@ struct page *alloc_huge_page(struct vm_area_struct *vma,
 	 * a reservation exists for the allocation.
 	 */
 	page = dequeue_huge_page_vma(h, vma, addr, avoid_reserve, gbl_chg);
-
 	if (!page) {
 		spin_unlock_irq(&hugetlb_lock);
 		page = alloc_buddy_huge_page_with_mpol(h, vma, addr);
@@ -7349,7 +7348,6 @@ void move_hugetlb_state(struct folio *old_folio, struct folio *new_folio, int re
 	int old_nid = folio_nid(old_folio);
 	int new_nid = folio_nid(new_folio);
 
-
 	folio_set_hugetlb_temporary(old_folio);
 	folio_clear_hugetlb_temporary(new_folio);

From patchwork Thu Nov 17 21:02:50 2022
From: Sidhartha Kumar
To: linux-kernel@vger.kernel.org, linux-mm@kvack.org
Cc: akpm@linux-foundation.org, songmuchun@bytedance.com, mike.kravetz@oracle.com,
    willy@infradead.org, almasrymina@google.com, linmiaohe@huawei.com,
    hughd@google.com, Sidhartha Kumar
Subject: [PATCH mm-unstable v2 02/10] mm/hugetlb: convert destroy_compound_gigantic_page() to folios
Date: Thu, 17 Nov 2022 13:02:50 -0800
Message-Id: <20221117210258.12732-3-sidhartha.kumar@oracle.com>
In-Reply-To: <20221117210258.12732-1-sidhartha.kumar@oracle.com>
References: <20221117210258.12732-1-sidhartha.kumar@oracle.com>

Convert page operations within __destroy_compound_gigantic_page() to the
corresponding folio operations.
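As an illustrative sketch only (assuming the folio helpers used in the diff
below), a caller that still holds a head page converts it once and then passes
the folio down; example_free_gigantic() is a hypothetical wrapper, not code
from this series:

	/* Hypothetical wrapper: convert once, then operate on the folio. */
	static void example_free_gigantic(struct page *page, unsigned int order)
	{
		struct folio *folio = page_folio(page);

		destroy_compound_gigantic_folio(folio, order);
		free_gigantic_page(page, order);
	}

This mirrors the calling pattern __update_and_free_page() adopts in the hunks
that follow.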
Signed-off-by: Sidhartha Kumar
---
 mm/hugetlb.c | 43 +++++++++++++++++++++----------------------
 1 file changed, 21 insertions(+), 22 deletions(-)

diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 157f2392c64f..5edb81541ede 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -1325,43 +1325,40 @@ static int hstate_next_node_to_free(struct hstate *h, nodemask_t *nodes_allowed)
 		nr_nodes--)
 
 /* used to demote non-gigantic_huge pages as well */
-static void __destroy_compound_gigantic_page(struct page *page,
+static void __destroy_compound_gigantic_folio(struct folio *folio,
 					unsigned int order, bool demote)
 {
 	int i;
 	int nr_pages = 1 << order;
 	struct page *p;
 
-	atomic_set(compound_mapcount_ptr(page), 0);
-	atomic_set(subpages_mapcount_ptr(page), 0);
-	atomic_set(compound_pincount_ptr(page), 0);
+	atomic_set(folio_mapcount_ptr(folio), 0);
+	atomic_set(folio_subpages_mapcount_ptr(folio), 0);
+	atomic_set(folio_pincount_ptr(folio), 0);
 
 	for (i = 1; i < nr_pages; i++) {
-		p = nth_page(page, i);
+		p = folio_page(folio, i);
 		p->mapping = NULL;
 		clear_compound_head(p);
 		if (!demote)
 			set_page_refcounted(p);
 	}
 
-	set_compound_order(page, 0);
-#ifdef CONFIG_64BIT
-	page[1].compound_nr = 0;
-#endif
-	__ClearPageHead(page);
+	folio_set_compound_order(folio, 0);
+	folio_clear_head(folio);
 }
 
-static void destroy_compound_hugetlb_page_for_demote(struct page *page,
+static void destroy_compound_hugetlb_folio_for_demote(struct folio *folio,
 					unsigned int order)
 {
-	__destroy_compound_gigantic_page(page, order, true);
+	__destroy_compound_gigantic_folio(folio, order, true);
 }
 
 #ifdef CONFIG_ARCH_HAS_GIGANTIC_PAGE
-static void destroy_compound_gigantic_page(struct page *page,
+static void destroy_compound_gigantic_folio(struct folio *folio,
 					unsigned int order)
 {
-	__destroy_compound_gigantic_page(page, order, false);
+	__destroy_compound_gigantic_folio(folio, order, false);
 }
 
 static void free_gigantic_page(struct page *page, unsigned int order)
@@ -1430,7 +1427,7 @@ static struct page *alloc_gigantic_page(struct hstate *h, gfp_t gfp_mask,
 	return NULL;
 }
 static inline void free_gigantic_page(struct page *page, unsigned int order) { }
-static inline void destroy_compound_gigantic_page(struct page *page,
+static inline void destroy_compound_gigantic_folio(struct folio *folio,
 						unsigned int order) { }
 #endif
 
@@ -1477,8 +1474,8 @@ static void __remove_hugetlb_page(struct hstate *h, struct page *page,
 	 *
 	 * For gigantic pages set the destructor to the null dtor.  This
 	 * destructor will never be called.  Before freeing the gigantic
-	 * page destroy_compound_gigantic_page will turn the compound page
-	 * into a simple group of pages.  After this the destructor does not
+	 * page destroy_compound_gigantic_folio will turn the folio into a
+	 * simple group of pages.  After this the destructor does not
 	 * apply.
 	 *
 	 * This handles the case where more than one ref is held when and
@@ -1559,6 +1556,7 @@ static void add_hugetlb_page(struct hstate *h, struct page *page,
 static void __update_and_free_page(struct hstate *h, struct page *page)
 {
 	int i;
+	struct folio *folio = page_folio(page);
 	struct page *subpage;
 
 	if (hstate_is_gigantic(h) && !gigantic_page_runtime_supported())
@@ -1587,8 +1585,8 @@ static void __update_and_free_page(struct hstate *h, struct page *page)
 	 * Move PageHWPoison flag from head page to the raw error pages,
 	 * which makes any healthy subpages reusable.
 	 */
-	if (unlikely(PageHWPoison(page)))
-		hugetlb_clear_page_hwpoison(page);
+	if (unlikely(folio_test_hwpoison(folio)))
+		hugetlb_clear_page_hwpoison(&folio->page);
 
 	for (i = 0; i < pages_per_huge_page(h); i++) {
 		subpage = nth_page(page, i);
@@ -1604,7 +1602,7 @@ static void __update_and_free_page(struct hstate *h, struct page *page)
 	 */
 	if (hstate_is_gigantic(h) ||
 	    hugetlb_cma_page(page, huge_page_order(h))) {
-		destroy_compound_gigantic_page(page, huge_page_order(h));
+		destroy_compound_gigantic_folio(folio, huge_page_order(h));
 		free_gigantic_page(page, huge_page_order(h));
 	} else {
 		__free_pages(page, huge_page_order(h));
@@ -3435,6 +3433,7 @@ static int demote_free_huge_page(struct hstate *h, struct page *page)
 {
 	int i, nid = page_to_nid(page);
 	struct hstate *target_hstate;
+	struct folio *folio = page_folio(page);
 	struct page *subpage;
 	int rc = 0;
 
@@ -3453,10 +3452,10 @@ static int demote_free_huge_page(struct hstate *h, struct page *page)
 	}
 
 	/*
-	 * Use destroy_compound_hugetlb_page_for_demote for all huge page
+	 * Use destroy_compound_hugetlb_folio_for_demote for all huge page
 	 * sizes as it will not ref count pages.
 	 */
-	destroy_compound_hugetlb_page_for_demote(page, huge_page_order(h));
+	destroy_compound_hugetlb_folio_for_demote(folio, huge_page_order(h));
 
 	/*
 	 * Taking target hstate mutex synchronizes with set_max_huge_pages.

From patchwork Thu Nov 17 21:02:51 2022
From: Sidhartha Kumar
To: linux-kernel@vger.kernel.org, linux-mm@kvack.org
Cc: akpm@linux-foundation.org, songmuchun@bytedance.com, mike.kravetz@oracle.com,
    willy@infradead.org, almasrymina@google.com, linmiaohe@huawei.com,
    hughd@google.com, Sidhartha Kumar
Subject: [PATCH mm-unstable v2 03/10] mm/hugetlb: convert dissolve_free_huge_page() to folios
Date: Thu, 17 Nov 2022 13:02:51 -0800
Message-Id: <20221117210258.12732-4-sidhartha.kumar@oracle.com>
In-Reply-To: <20221117210258.12732-1-sidhartha.kumar@oracle.com>
References: <20221117210258.12732-1-sidhartha.kumar@oracle.com>
Removes compound_head() call by using a folio rather than a head page.

Signed-off-by: Sidhartha Kumar
---
 mm/hugetlb.c | 20 ++++++++++----------
 1 file changed, 10 insertions(+), 10 deletions(-)

diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 5edb81541ede..ff609efe5497 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -2126,21 +2126,21 @@ static struct page *remove_pool_huge_page(struct hstate *h,
 int dissolve_free_huge_page(struct page *page)
 {
 	int rc = -EBUSY;
+	struct folio *folio = page_folio(page);
 
 retry:
 	/* Not to disrupt normal path by vainly holding hugetlb_lock */
-	if (!PageHuge(page))
+	if (!folio_test_hugetlb(folio))
 		return 0;
 
 	spin_lock_irq(&hugetlb_lock);
-	if (!PageHuge(page)) {
+	if (!folio_test_hugetlb(folio)) {
 		rc = 0;
 		goto out;
 	}
 
-	if (!page_count(page)) {
-		struct page *head = compound_head(page);
-		struct hstate *h = page_hstate(head);
+	if (!folio_ref_count(folio)) {
+		struct hstate *h = folio_hstate(folio);
 
 		if (!available_huge_pages(h))
 			goto out;
@@ -2148,7 +2148,7 @@ int dissolve_free_huge_page(struct page *page)
 		 * We should make sure that the page is already on the free list
 		 * when it is dissolved.
 		 */
-		if (unlikely(!HPageFreed(head))) {
+		if (unlikely(!folio_test_hugetlb_freed(folio))) {
 			spin_unlock_irq(&hugetlb_lock);
 			cond_resched();
 
@@ -2163,7 +2163,7 @@ int dissolve_free_huge_page(struct page *page)
 			goto retry;
 		}
 
-		remove_hugetlb_page(h, head, false);
+		remove_hugetlb_page(h, &folio->page, false);
 		h->max_huge_pages--;
 		spin_unlock_irq(&hugetlb_lock);
 
@@ -2175,12 +2175,12 @@ int dissolve_free_huge_page(struct page *page)
 		 * Attempt to allocate vmemmmap here so that we can take
 		 * appropriate action on failure.
 		 */
-		rc = hugetlb_vmemmap_restore(h, head);
+		rc = hugetlb_vmemmap_restore(h, &folio->page);
 		if (!rc) {
-			update_and_free_page(h, head, false);
+			update_and_free_page(h, &folio->page, false);
 		} else {
 			spin_lock_irq(&hugetlb_lock);
-			add_hugetlb_page(h, head, false);
+			add_hugetlb_page(h, &folio->page, false);
 			h->max_huge_pages++;
 			spin_unlock_irq(&hugetlb_lock);
 		}

From patchwork Thu Nov 17 21:02:52 2022
2AHL37Fj032582; Thu, 17 Nov 2022 21:03:13 GMT Received: from sid-dell.us.oracle.com (dhcp-10-132-95-73.usdhcp.oraclecorp.com [10.132.95.73]) by iadpaimrmta03.imrmtpd1.prodappiadaev1.oraclevcn.com (PPS) with ESMTP id 3ku3kagy08-5; Thu, 17 Nov 2022 21:03:13 +0000 From: Sidhartha Kumar To: linux-kernel@vger.kernel.org, linux-mm@kvack.org Cc: akpm@linux-foundation.org, songmuchun@bytedance.com, mike.kravetz@oracle.com, willy@infradead.org, almasrymina@google.com, linmiaohe@huawei.com, hughd@google.com, Sidhartha Kumar Subject: [PATCH mm-unstable v2 04/10] mm/hugetlb: convert remove_hugetlb_page() to folios Date: Thu, 17 Nov 2022 13:02:52 -0800 Message-Id: <20221117210258.12732-5-sidhartha.kumar@oracle.com> X-Mailer: git-send-email 2.38.1 In-Reply-To: <20221117210258.12732-1-sidhartha.kumar@oracle.com> References: <20221117210258.12732-1-sidhartha.kumar@oracle.com> MIME-Version: 1.0 X-Proofpoint-Virus-Version: vendor=baseguard engine=ICAP:2.0.219,Aquarius:18.0.895,Hydra:6.0.545,FMLib:17.11.122.1 definitions=2022-11-17_06,2022-11-17_01,2022-06-22_01 X-Proofpoint-Spam-Details: rule=notspam policy=default score=0 bulkscore=0 mlxscore=0 adultscore=0 phishscore=0 malwarescore=0 suspectscore=0 mlxlogscore=999 spamscore=0 classifier=spam adjust=0 reason=mlx scancount=1 engine=8.12.0-2210170000 definitions=main-2211170150 X-Proofpoint-ORIG-GUID: O_u6e90ysPysUcIJo01hnKRH-XJFbJse X-Proofpoint-GUID: O_u6e90ysPysUcIJo01hnKRH-XJFbJse ARC-Authentication-Results: i=1; imf27.hostedemail.com; dkim=pass header.d=oracle.com header.s=corp-2022-7-12 header.b=VWxlmb0m; spf=pass (imf27.hostedemail.com: domain of sidhartha.kumar@oracle.com designates 205.220.165.32 as permitted sender) smtp.mailfrom=sidhartha.kumar@oracle.com; dmarc=pass (policy=none) header.from=oracle.com ARC-Seal: i=1; s=arc-20220608; d=hostedemail.com; t=1668719012; a=rsa-sha256; cv=none; b=ggkeDkq90YkuKxJXYBM9Oo1f/13UCaQEBLMml5BrLLPCXIkn81OFy80XfN5TTVfU219Yok KRw6XO7wo2L5IdtRcCx3QISHyJxD3drF//SZvVmhZJZps4FLOEZW4AdjvI6sLl0ga/WBGn E5OAwzDR9LHqAy8NWLUBPQ0zDRNFWo0= ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=hostedemail.com; s=arc-20220608; t=1668719012; h=from:from:sender:reply-to:subject:subject:date:date: message-id:message-id:to:to:cc:cc:mime-version:mime-version: content-type:content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references:dkim-signature; bh=mZHrU3BK7yqZ4Vo8Pufvz4V3D37bYSkveHg4q16NVVk=; b=qnNhstXAtqrHJLjZcqERWQrUIQePZR/pGvl2/3Ei0tzI3DeJEUJqshHzEc/iNQV/hXlPcc vgh1CddSEjjvScH5r3b65hQFWVGUKoazU6vJrVXY0L1RownD1ib0VzUhepO0lSPY7E1cy/ AZ29+ZtCX2xyqdHJXOPQ1JWYEBs6VyY= Authentication-Results: imf27.hostedemail.com; dkim=pass header.d=oracle.com header.s=corp-2022-7-12 header.b=VWxlmb0m; spf=pass (imf27.hostedemail.com: domain of sidhartha.kumar@oracle.com designates 205.220.165.32 as permitted sender) smtp.mailfrom=sidhartha.kumar@oracle.com; dmarc=pass (policy=none) header.from=oracle.com X-Rspamd-Server: rspam02 X-Rspam-User: X-Stat-Signature: j8umtaarrqe46xoqg8e35fhqpywqyy1w X-Rspamd-Queue-Id: B59CF4000F X-HE-Tag: 1668719012-854336 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: Removes page_folio() call by converting callers to directly pass a folio into __remove_hugetlb_page(). 
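For illustration (not part of the patch), the calling convention this change
establishes: convert to a folio once in the caller and pass it down, as the
dissolve_free_huge_page() hunk in the diff below does. The surrounding function
is left out here, so this is only a fragment sketch:

	/* Hypothetical caller: convert once, then remove the folio under the lock. */
	struct folio *folio = page_folio(page);

	spin_lock_irq(&hugetlb_lock);
	remove_hugetlb_folio(h, folio, false);
	spin_unlock_irq(&hugetlb_lock);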
Signed-off-by: Sidhartha Kumar
---
 mm/hugetlb.c | 48 +++++++++++++++++++++++++-----------------------
 1 file changed, 25 insertions(+), 23 deletions(-)

diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index ff609efe5497..d9604c0dac54 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -1432,19 +1432,18 @@ static inline void destroy_compound_gigantic_folio(struct folio *folio,
 #endif
 
 /*
- * Remove hugetlb page from lists, and update dtor so that page appears
+ * Remove hugetlb folio from lists, and update dtor so that the folio appears
  * as just a compound page.
  *
- * A reference is held on the page, except in the case of demote.
+ * A reference is held on the folio, except in the case of demote.
  *
  * Must be called with hugetlb lock held.
  */
-static void __remove_hugetlb_page(struct hstate *h, struct page *page,
+static void __remove_hugetlb_folio(struct hstate *h, struct folio *folio,
 							bool adjust_surplus, bool demote)
 {
-	int nid = page_to_nid(page);
-	struct folio *folio = page_folio(page);
+	int nid = folio_nid(folio);
 
 	VM_BUG_ON_FOLIO(hugetlb_cgroup_from_folio(folio), folio);
 	VM_BUG_ON_FOLIO(hugetlb_cgroup_from_folio_rsvd(folio), folio);
@@ -1453,9 +1452,9 @@ static void __remove_hugetlb_page(struct hstate *h, struct page *page,
 	if (hstate_is_gigantic(h) && !gigantic_page_runtime_supported())
 		return;
 
-	list_del(&page->lru);
+	list_del(&folio->lru);
 
-	if (HPageFreed(page)) {
+	if (folio_test_hugetlb_freed(folio)) {
 		h->free_huge_pages--;
 		h->free_huge_pages_node[nid]--;
 	}
@@ -1485,26 +1484,26 @@ static void __remove_hugetlb_page(struct hstate *h, struct page *page,
 	 * be turned into a page of smaller size.
 	 */
 	if (!demote)
-		set_page_refcounted(page);
+		folio_ref_unfreeze(folio, 1);
 	if (hstate_is_gigantic(h))
-		set_compound_page_dtor(page, NULL_COMPOUND_DTOR);
+		folio_set_compound_dtor(folio, NULL_COMPOUND_DTOR);
 	else
-		set_compound_page_dtor(page, COMPOUND_PAGE_DTOR);
+		folio_set_compound_dtor(folio, COMPOUND_PAGE_DTOR);
 
 	h->nr_huge_pages--;
 	h->nr_huge_pages_node[nid]--;
 }
 
-static void remove_hugetlb_page(struct hstate *h, struct page *page,
+static void remove_hugetlb_folio(struct hstate *h, struct folio *folio,
 							bool adjust_surplus)
 {
-	__remove_hugetlb_page(h, page, adjust_surplus, false);
+	__remove_hugetlb_folio(h, folio, adjust_surplus, false);
 }
 
-static void remove_hugetlb_page_for_demote(struct hstate *h, struct page *page,
+static void remove_hugetlb_folio_for_demote(struct hstate *h, struct folio *folio,
 							bool adjust_surplus)
 {
-	__remove_hugetlb_page(h, page, adjust_surplus, true);
+	__remove_hugetlb_folio(h, folio, adjust_surplus, true);
 }
 
 static void add_hugetlb_page(struct hstate *h, struct page *page,
@@ -1639,8 +1638,9 @@ static void free_hpage_workfn(struct work_struct *work)
 	/*
 	 * The VM_BUG_ON_PAGE(!PageHuge(page), page) in page_hstate()
 	 * is going to trigger because a previous call to
-	 * remove_hugetlb_page() will set_compound_page_dtor(page,
-	 * NULL_COMPOUND_DTOR), so do not use page_hstate() directly.
+	 * remove_hugetlb_folio() will call folio_set_compound_dtor
+	 * (folio, NULL_COMPOUND_DTOR), so do not use page_hstate()
+	 * directly.
 	 */
 	h = size_to_hstate(page_size(page));
 
@@ -1749,12 +1749,12 @@ void free_huge_page(struct page *page)
 		h->resv_huge_pages++;
 
 	if (folio_test_hugetlb_temporary(folio)) {
-		remove_hugetlb_page(h, page, false);
+		remove_hugetlb_folio(h, folio, false);
 		spin_unlock_irqrestore(&hugetlb_lock, flags);
 		update_and_free_page(h, page, true);
 	} else if (h->surplus_huge_pages_node[nid]) {
 		/* remove the page from active list */
-		remove_hugetlb_page(h, page, true);
+		remove_hugetlb_folio(h, folio, true);
 		spin_unlock_irqrestore(&hugetlb_lock, flags);
 		update_and_free_page(h, page, true);
 	} else {
@@ -2090,6 +2090,7 @@ static struct page *remove_pool_huge_page(struct hstate *h,
 {
 	int nr_nodes, node;
 	struct page *page = NULL;
+	struct folio *folio;
 
 	lockdep_assert_held(&hugetlb_lock);
 	for_each_node_mask_to_free(h, nr_nodes, node, nodes_allowed) {
@@ -2101,7 +2102,8 @@ static struct page *remove_pool_huge_page(struct hstate *h,
 		    !list_empty(&h->hugepage_freelists[node])) {
 			page = list_entry(h->hugepage_freelists[node].next,
 					  struct page, lru);
-			remove_hugetlb_page(h, page, acct_surplus);
+			folio = page_folio(page);
+			remove_hugetlb_folio(h, folio, acct_surplus);
 			break;
 		}
 	}
@@ -2163,7 +2165,7 @@ int dissolve_free_huge_page(struct page *page)
 			goto retry;
 		}
 
-		remove_hugetlb_page(h, &folio->page, false);
+		remove_hugetlb_folio(h, folio, false);
 		h->max_huge_pages--;
 		spin_unlock_irq(&hugetlb_lock);
@@ -2801,7 +2803,7 @@ static int alloc_and_dissolve_huge_page(struct hstate *h, struct page *old_page,
 	 * and enqueue_huge_page() for new_page. The counters will remain
 	 * stable since this happens under the lock.
 	 */
-	remove_hugetlb_page(h, old_page, false);
+	remove_hugetlb_folio(h, old_folio, false);
 
 	/*
 	 * Ref count on new page is already zero as it was dropped
@@ -3228,7 +3230,7 @@ static void try_to_free_low(struct hstate *h, unsigned long count,
 				goto out;
 			if (PageHighMem(page))
 				continue;
-			remove_hugetlb_page(h, page, false);
+			remove_hugetlb_folio(h, page_folio(page), false);
 			list_add(&page->lru, &page_list);
 		}
 	}
@@ -3439,7 +3441,7 @@ static int demote_free_huge_page(struct hstate *h, struct page *page)
 
 	target_hstate = size_to_hstate(PAGE_SIZE << h->demote_order);
 
-	remove_hugetlb_page_for_demote(h, page, false);
+	remove_hugetlb_folio_for_demote(h, folio, false);
 	spin_unlock_irq(&hugetlb_lock);
 
 	rc = hugetlb_vmemmap_restore(h, page);

From patchwork Thu Nov 17 21:02:53 2022
From: Sidhartha Kumar
To: linux-kernel@vger.kernel.org, linux-mm@kvack.org
Cc: akpm@linux-foundation.org, songmuchun@bytedance.com, mike.kravetz@oracle.com,
    willy@infradead.org, almasrymina@google.com, linmiaohe@huawei.com,
    hughd@google.com, Sidhartha Kumar
Subject: [PATCH mm-unstable v2 05/10] mm/hugetlb: convert update_and_free_page() to folios
Date: Thu, 17 Nov 2022 13:02:53 -0800
Message-Id: <20221117210258.12732-6-sidhartha.kumar@oracle.com>
In-Reply-To: <20221117210258.12732-1-sidhartha.kumar@oracle.com>
References: <20221117210258.12732-1-sidhartha.kumar@oracle.com>
Make more progress on converting the free_huge_page() destructor to operate
on folios by converting update_and_free_page() to folios.

Signed-off-by: Sidhartha Kumar
---
 mm/hugetlb.c | 30 ++++++++++++++++--------------
 1 file changed, 16 insertions(+), 14 deletions(-)

diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index d9604c0dac54..80301fab56d8 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -1478,7 +1478,7 @@ static void __remove_hugetlb_folio(struct hstate *h, struct folio *folio,
 	 * apply.
 	 *
 	 * This handles the case where more than one ref is held when and
-	 * after update_and_free_page is called.
+	 * after update_and_free_hugetlb_folio is called.
 	 *
 	 * In the case of demote we do not ref count the page as it will soon
 	 * be turned into a page of smaller size.
@@ -1609,7 +1609,7 @@ static void __update_and_free_page(struct hstate *h, struct page *page)
 }
 
 /*
- * As update_and_free_page() can be called under any context, so we cannot
+ * As update_and_free_hugetlb_folio() can be called under any context, so we cannot
 * use GFP_KERNEL to allocate vmemmap pages. However, we can defer the
 * actual freeing in a workqueue to prevent from using GFP_ATOMIC to allocate
 * the vmemmap pages.
@@ -1657,11 +1657,11 @@ static inline void flush_free_hpage_work(struct hstate *h)
 	flush_work(&free_hpage_work);
 }
 
-static void update_and_free_page(struct hstate *h, struct page *page,
+static void update_and_free_hugetlb_folio(struct hstate *h, struct folio *folio,
 				 bool atomic)
 {
-	if (!HPageVmemmapOptimized(page) || !atomic) {
-		__update_and_free_page(h, page);
+	if (!folio_test_hugetlb_vmemmap_optimized(folio) || !atomic) {
+		__update_and_free_page(h, &folio->page);
 		return;
 	}
 
@@ -1672,16 +1672,18 @@ static void update_and_free_page(struct hstate *h, struct page *page,
 	 * empty. Otherwise, schedule_work() had been called but the workfn
 	 * hasn't retrieved the list yet.
 	 */
-	if (llist_add((struct llist_node *)&page->mapping, &hpage_freelist))
+	if (llist_add((struct llist_node *)&folio->mapping, &hpage_freelist))
 		schedule_work(&free_hpage_work);
 }
 
 static void update_and_free_pages_bulk(struct hstate *h, struct list_head *list)
 {
 	struct page *page, *t_page;
+	struct folio *folio;
 
 	list_for_each_entry_safe(page, t_page, list, lru) {
-		update_and_free_page(h, page, false);
+		folio = page_folio(page);
+		update_and_free_hugetlb_folio(h, folio, false);
 		cond_resched();
 	}
 }
@@ -1751,12 +1753,12 @@ void free_huge_page(struct page *page)
 	if (folio_test_hugetlb_temporary(folio)) {
 		remove_hugetlb_folio(h, folio, false);
 		spin_unlock_irqrestore(&hugetlb_lock, flags);
-		update_and_free_page(h, page, true);
+		update_and_free_hugetlb_folio(h, folio, true);
 	} else if (h->surplus_huge_pages_node[nid]) {
 		/* remove the page from active list */
 		remove_hugetlb_folio(h, folio, true);
 		spin_unlock_irqrestore(&hugetlb_lock, flags);
-		update_and_free_page(h, page, true);
+		update_and_free_hugetlb_folio(h, folio, true);
 	} else {
 		arch_clear_hugepage_flags(page);
 		enqueue_huge_page(h, page);
@@ -2170,8 +2172,8 @@ int dissolve_free_huge_page(struct page *page)
 	spin_unlock_irq(&hugetlb_lock);
 
 	/*
-	 * Normally update_and_free_page will allocate required vmemmmap
-	 * before freeing the page. update_and_free_page will fail to
+	 * Normally update_and_free_hugtlb_folio will allocate required vmemmmap
+	 * before freeing the page. update_and_free_hugtlb_folio will fail to
 	 * free the page if it can not allocate required vmemmap. We
 	 * need to adjust max_huge_pages if the page is not freed.
 	 * Attempt to allocate vmemmmap here so that we can take
@@ -2179,7 +2181,7 @@ int dissolve_free_huge_page(struct page *page)
 	 */
 	rc = hugetlb_vmemmap_restore(h, &folio->page);
 	if (!rc) {
-		update_and_free_page(h, &folio->page, false);
+		update_and_free_hugetlb_folio(h, folio, false);
 	} else {
 		spin_lock_irq(&hugetlb_lock);
 		add_hugetlb_page(h, &folio->page, false);
 		h->max_huge_pages++;
 		spin_unlock_irq(&hugetlb_lock);
 	}
@@ -2816,7 +2818,7 @@ static int alloc_and_dissolve_huge_page(struct hstate *h, struct page *old_page,
 	 * Pages have been replaced, we can safely free the old one.
 	 */
 	spin_unlock_irq(&hugetlb_lock);
-	update_and_free_page(h, old_page, false);
+	update_and_free_hugetlb_folio(h, old_folio, false);
 	}
 
 	return ret;
 
@@ -2825,7 +2827,7 @@ static int alloc_and_dissolve_huge_page(struct hstate *h, struct page *old_page,
 	spin_unlock_irq(&hugetlb_lock);
 	/* Page has a zero ref count, but needs a ref to be freed */
 	folio_ref_unfreeze(new_folio, 1);
-	update_and_free_page(h, new_page, false);
+	update_and_free_hugetlb_folio(h, new_folio, false);
 
 	return ret;
 }

From patchwork Thu Nov 17 21:02:54 2022
From: Sidhartha Kumar
To: linux-kernel@vger.kernel.org, linux-mm@kvack.org
Cc: akpm@linux-foundation.org, songmuchun@bytedance.com, mike.kravetz@oracle.com,
    willy@infradead.org, almasrymina@google.com, linmiaohe@huawei.com,
    hughd@google.com, Sidhartha Kumar
Subject: [PATCH mm-unstable v2 06/10] mm/hugetlb: convert add_hugetlb_page() to folios and add hugetlb_cma_folio()
Date: Thu, 17 Nov 2022 13:02:54 -0800
Message-Id: <20221117210258.12732-7-sidhartha.kumar@oracle.com>
In-Reply-To: <20221117210258.12732-1-sidhartha.kumar@oracle.com>
References: <20221117210258.12732-1-sidhartha.kumar@oracle.com>

Convert add_hugetlb_page() to take in a folio; also convert hugetlb_cma_page()
to take in a folio.
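A hedged illustration of the error-path pattern that results (this mirrors the
dissolve_free_huge_page() hunk in the diff below rather than adding anything
new; the snippet is a fragment, not a complete function):

	/* Sketch: if vmemmap pages cannot be restored, put the folio back. */
	if (hugetlb_vmemmap_restore(h, &folio->page)) {
		spin_lock_irq(&hugetlb_lock);
		add_hugetlb_folio(h, folio, false);
		h->max_huge_pages++;
		spin_unlock_irq(&hugetlb_lock);
	}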
Signed-off-by: Sidhartha Kumar --- mm/hugetlb.c | 40 ++++++++++++++++++++-------------------- 1 file changed, 20 insertions(+), 20 deletions(-) diff --git a/mm/hugetlb.c b/mm/hugetlb.c index 80301fab56d8..bf36aa8e6072 100644 --- a/mm/hugetlb.c +++ b/mm/hugetlb.c @@ -54,13 +54,13 @@ struct hstate hstates[HUGE_MAX_HSTATE]; #ifdef CONFIG_CMA static struct cma *hugetlb_cma[MAX_NUMNODES]; static unsigned long hugetlb_cma_size_in_node[MAX_NUMNODES] __initdata; -static bool hugetlb_cma_page(struct page *page, unsigned int order) +static bool hugetlb_cma_folio(struct folio *folio, unsigned int order) { - return cma_pages_valid(hugetlb_cma[page_to_nid(page)], page, + return cma_pages_valid(hugetlb_cma[folio_nid(folio)], &folio->page, 1 << order); } #else -static bool hugetlb_cma_page(struct page *page, unsigned int order) +static bool hugetlb_cma_folio(struct folio *folio, unsigned int order) { return false; } @@ -1506,17 +1506,17 @@ static void remove_hugetlb_folio_for_demote(struct hstate *h, struct folio *foli __remove_hugetlb_folio(h, folio, adjust_surplus, true); } -static void add_hugetlb_page(struct hstate *h, struct page *page, +static void add_hugetlb_folio(struct hstate *h, struct folio *folio, bool adjust_surplus) { int zeroed; - int nid = page_to_nid(page); + int nid = folio_nid(folio); - VM_BUG_ON_PAGE(!HPageVmemmapOptimized(page), page); + VM_BUG_ON_FOLIO(!folio_test_hugetlb_vmemmap_optimized(folio), folio); lockdep_assert_held(&hugetlb_lock); - INIT_LIST_HEAD(&page->lru); + INIT_LIST_HEAD(&folio->lru); h->nr_huge_pages++; h->nr_huge_pages_node[nid]++; @@ -1525,21 +1525,21 @@ static void add_hugetlb_page(struct hstate *h, struct page *page, h->surplus_huge_pages_node[nid]++; } - set_compound_page_dtor(page, HUGETLB_PAGE_DTOR); - set_page_private(page, 0); + folio_set_compound_dtor(folio, HUGETLB_PAGE_DTOR); + folio_change_private(folio, 0); /* * We have to set HPageVmemmapOptimized again as above - * set_page_private(page, 0) cleared it. + * folio_change_private(folio, 0) cleared it. */ - SetHPageVmemmapOptimized(page); + folio_set_hugetlb_vmemmap_optimized(folio); /* - * This page is about to be managed by the hugetlb allocator and + * This folio is about to be managed by the hugetlb allocator and * should have no users. Drop our reference, and check for others * just in case. */ - zeroed = put_page_testzero(page); - if (!zeroed) + zeroed = folio_put_testzero(folio); + if (unlikely(!zeroed)) /* * It is VERY unlikely soneone else has taken a ref on * the page. In this case, we simply return as the @@ -1548,8 +1548,8 @@ static void add_hugetlb_page(struct hstate *h, struct page *page, */ return; - arch_clear_hugepage_flags(page); - enqueue_huge_page(h, page); + arch_clear_hugepage_flags(&folio->page); + enqueue_huge_page(h, &folio->page); } static void __update_and_free_page(struct hstate *h, struct page *page) @@ -1575,7 +1575,7 @@ static void __update_and_free_page(struct hstate *h, struct page *page) * page and put the page back on the hugetlb free list and treat * as a surplus page. */ - add_hugetlb_page(h, page, true); + add_hugetlb_folio(h, page_folio(page), true); spin_unlock_irq(&hugetlb_lock); return; } @@ -1600,7 +1600,7 @@ static void __update_and_free_page(struct hstate *h, struct page *page) * need to be given back to CMA in free_gigantic_page. 
*/ if (hstate_is_gigantic(h) || - hugetlb_cma_page(page, huge_page_order(h))) { + hugetlb_cma_folio(folio, huge_page_order(h))) { destroy_compound_gigantic_folio(folio, huge_page_order(h)); free_gigantic_page(page, huge_page_order(h)); } else { @@ -2184,7 +2184,7 @@ int dissolve_free_huge_page(struct page *page) update_and_free_hugetlb_folio(h, folio, false); } else { spin_lock_irq(&hugetlb_lock); - add_hugetlb_page(h, &folio->page, false); + add_hugetlb_folio(h, folio, false); h->max_huge_pages++; spin_unlock_irq(&hugetlb_lock); } @@ -3451,7 +3451,7 @@ static int demote_free_huge_page(struct hstate *h, struct page *page) /* Allocation of vmemmmap failed, we can not demote page */ spin_lock_irq(&hugetlb_lock); set_page_refcounted(page); - add_hugetlb_page(h, page, false); + add_hugetlb_folio(h, page_folio(page), false); return rc; } From patchwork Thu Nov 17 21:02:55 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Sidhartha Kumar X-Patchwork-Id: 13047290 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by smtp.lore.kernel.org (Postfix) with ESMTP id 994DAC433FE for ; Thu, 17 Nov 2022 21:03:38 +0000 (UTC) Received: by kanga.kvack.org (Postfix) id 52FB28E0002; Thu, 17 Nov 2022 16:03:33 -0500 (EST) Received: by kanga.kvack.org (Postfix, from userid 40) id 2749B6B0073; Thu, 17 Nov 2022 16:03:33 -0500 (EST) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id C5DEA6B0075; Thu, 17 Nov 2022 16:03:32 -0500 (EST) X-Delivered-To: linux-mm@kvack.org Received: from relay.hostedemail.com (smtprelay0016.hostedemail.com [216.40.44.16]) by kanga.kvack.org (Postfix) with ESMTP id 942708E0002 for ; Thu, 17 Nov 2022 16:03:32 -0500 (EST) Received: from smtpin12.hostedemail.com (a10.router.float.18 [10.200.18.1]) by unirelay09.hostedemail.com (Postfix) with ESMTP id 74A7380377 for ; Thu, 17 Nov 2022 21:03:32 +0000 (UTC) X-FDA: 80144160264.12.6C71191 Received: from mx0a-00069f02.pphosted.com (mx0a-00069f02.pphosted.com [205.220.165.32]) by imf20.hostedemail.com (Postfix) with ESMTP id A8AB71C0008 for ; Thu, 17 Nov 2022 21:03:31 +0000 (UTC) Received: from pps.filterd (m0246627.ppops.net [127.0.0.1]) by mx0b-00069f02.pphosted.com (8.17.1.19/8.17.1.19) with ESMTP id 2AHKOOsX002460; Thu, 17 Nov 2022 21:03:23 GMT DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=oracle.com; h=from : to : cc : subject : date : message-id : in-reply-to : references : mime-version : content-transfer-encoding; s=corp-2022-7-12; bh=DrYH6pRWKnBgFEybzVkbkIij9mLZr9efXDcEx32Vsb4=; b=VDGIwT5M5SD802U1EjVYgYvMvY6sHlKd5pcfMjemg07SH41xeDz6wbz9VvtpZ9PAq16W 99G+Y+gBnPly26Utio0czONIJgtOc+G3uEBR31tdl4IGcrpaXXKJnZP9eCh/a1QrkuV1 gkbpUU2RMnI8CM6MBwy8qcWSV8i0rBNPVGz1u5e1CRzcsPUxvq4BOKpJMwkTEjCbccFZ I8633KcHf1pPqlp1uVWvxA14rJnEj8ozBBy9aM7IHWMEQjg45ga3NR3QEY3AascCcC0Z zaJvmBEcOrZUzLnxnbBI5XwYF/bs26xKBkbgoNU3Nq4C2v95ClHCg6hfSujpz/QV6DtW uw== Received: from iadpaimrmta03.imrmtpd1.prodappiadaev1.oraclevcn.com (iadpaimrmta03.appoci.oracle.com [130.35.103.27]) by mx0b-00069f02.pphosted.com (PPS) with ESMTPS id 3kv8yktgn4-1 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=OK); Thu, 17 Nov 2022 21:03:22 +0000 Received: from pps.filterd (iadpaimrmta03.imrmtpd1.prodappiadaev1.oraclevcn.com [127.0.0.1]) by iadpaimrmta03.imrmtpd1.prodappiadaev1.oraclevcn.com (8.17.1.5/8.17.1.5) with ESMTP 
id 2AHKXMtJ011016; Thu, 17 Nov 2022 21:03:19 GMT Received: from pps.reinject (localhost [127.0.0.1]) by iadpaimrmta03.imrmtpd1.prodappiadaev1.oraclevcn.com (PPS) with ESMTPS id 3ku3kagy9j-1 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=OK); Thu, 17 Nov 2022 21:03:18 +0000 Received: from iadpaimrmta03.imrmtpd1.prodappiadaev1.oraclevcn.com (iadpaimrmta03.imrmtpd1.prodappiadaev1.oraclevcn.com [127.0.0.1]) by pps.reinject (8.17.1.5/8.17.1.5) with ESMTP id 2AHL37Fp032582; Thu, 17 Nov 2022 21:03:18 GMT Received: from sid-dell.us.oracle.com (dhcp-10-132-95-73.usdhcp.oraclecorp.com [10.132.95.73]) by iadpaimrmta03.imrmtpd1.prodappiadaev1.oraclevcn.com (PPS) with ESMTP id 3ku3kagy08-8; Thu, 17 Nov 2022 21:03:18 +0000 From: Sidhartha Kumar To: linux-kernel@vger.kernel.org, linux-mm@kvack.org Cc: akpm@linux-foundation.org, songmuchun@bytedance.com, mike.kravetz@oracle.com, willy@infradead.org, almasrymina@google.com, linmiaohe@huawei.com, hughd@google.com, Sidhartha Kumar Subject: [PATCH mm-unstable v2 07/10] mm/hugetlb: convert enqueue_huge_page() to folios Date: Thu, 17 Nov 2022 13:02:55 -0800 Message-Id: <20221117210258.12732-8-sidhartha.kumar@oracle.com> X-Mailer: git-send-email 2.38.1 In-Reply-To: <20221117210258.12732-1-sidhartha.kumar@oracle.com> References: <20221117210258.12732-1-sidhartha.kumar@oracle.com> MIME-Version: 1.0 X-Proofpoint-Virus-Version: vendor=baseguard engine=ICAP:2.0.219,Aquarius:18.0.895,Hydra:6.0.545,FMLib:17.11.122.1 definitions=2022-11-17_06,2022-11-17_01,2022-06-22_01 X-Proofpoint-Spam-Details: rule=notspam policy=default score=0 bulkscore=0 mlxscore=0 adultscore=0 phishscore=0 malwarescore=0 suspectscore=0 mlxlogscore=999 spamscore=0 classifier=spam adjust=0 reason=mlx scancount=1 engine=8.12.0-2210170000 definitions=main-2211170150 X-Proofpoint-GUID: WCB2r5V_hKEQ7qtIE9ysF_dPdvWcN3qx X-Proofpoint-ORIG-GUID: WCB2r5V_hKEQ7qtIE9ysF_dPdvWcN3qx ARC-Seal: i=1; s=arc-20220608; d=hostedemail.com; t=1668719011; a=rsa-sha256; cv=none; b=fIBdh0F+kjfYbT5/49fhayqh2Qs9DuL3nsqWy2FzlZUtHlixsXQxASc+KV3TBxqIdJHCoa lsc3JI77rXoLDt5zmFWZD+dpAP/H7lXf5zucnbcF9S2kKkU4b+gHf4+K2LEd3Z4X+J9EYk VjLFrt0SK/FjoGoeKS97k1DWHnu937Y= ARC-Authentication-Results: i=1; imf20.hostedemail.com; dkim=pass header.d=oracle.com header.s=corp-2022-7-12 header.b=VDGIwT5M; spf=pass (imf20.hostedemail.com: domain of sidhartha.kumar@oracle.com designates 205.220.165.32 as permitted sender) smtp.mailfrom=sidhartha.kumar@oracle.com; dmarc=pass (policy=none) header.from=oracle.com ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=hostedemail.com; s=arc-20220608; t=1668719011; h=from:from:sender:reply-to:subject:subject:date:date: message-id:message-id:to:to:cc:cc:mime-version:mime-version: content-type:content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references:dkim-signature; bh=DrYH6pRWKnBgFEybzVkbkIij9mLZr9efXDcEx32Vsb4=; b=eKEPgRa3kl0hiiYoiuYnaPwRmi1JMkOPZsuid6nn1w9jKw1lIxaGYvH93gsMwXagU5DikH XvXTHNbUXplfofIycqHCVAIiFBHBIV2sPfCYHBwGZJipFas7NFN5jqcfWRfJ/virAhM7xB /3fNJqxODU5QcrueRH9UJy/M9fumjDE= Authentication-Results: imf20.hostedemail.com; dkim=pass header.d=oracle.com header.s=corp-2022-7-12 header.b=VDGIwT5M; spf=pass (imf20.hostedemail.com: domain of sidhartha.kumar@oracle.com designates 205.220.165.32 as permitted sender) smtp.mailfrom=sidhartha.kumar@oracle.com; dmarc=pass (policy=none) header.from=oracle.com X-Rspamd-Server: rspam05 X-Rspamd-Queue-Id: A8AB71C0008 X-Rspam-User: X-Stat-Signature: jzf5o3jhyjz54pwmw1jahii1ks76fm4u X-HE-Tag: 
1668719011-329705 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: Convert callers of enqueue_huge_page() to pass in a folio, function is renamed to enqueue_hugetlb_folio(). Signed-off-by: Sidhartha Kumar --- mm/hugetlb.c | 22 +++++++++++----------- 1 file changed, 11 insertions(+), 11 deletions(-) diff --git a/mm/hugetlb.c b/mm/hugetlb.c index bf36aa8e6072..4eb6f3d6f46e 100644 --- a/mm/hugetlb.c +++ b/mm/hugetlb.c @@ -1127,17 +1127,17 @@ static bool vma_has_reserves(struct vm_area_struct *vma, long chg) return false; } -static void enqueue_huge_page(struct hstate *h, struct page *page) +static void enqueue_hugetlb_folio(struct hstate *h, struct folio *folio) { - int nid = page_to_nid(page); + int nid = folio_nid(folio); lockdep_assert_held(&hugetlb_lock); - VM_BUG_ON_PAGE(page_count(page), page); + VM_BUG_ON_FOLIO(folio_ref_count(folio), folio); - list_move(&page->lru, &h->hugepage_freelists[nid]); + list_move(&folio->lru, &h->hugepage_freelists[nid]); h->free_huge_pages++; h->free_huge_pages_node[nid]++; - SetHPageFreed(page); + folio_set_hugetlb_freed(folio); } static struct page *dequeue_huge_page_node_exact(struct hstate *h, int nid) @@ -1549,7 +1549,7 @@ static void add_hugetlb_folio(struct hstate *h, struct folio *folio, return; arch_clear_hugepage_flags(&folio->page); - enqueue_huge_page(h, &folio->page); + enqueue_hugetlb_folio(h, folio); } static void __update_and_free_page(struct hstate *h, struct page *page) @@ -1761,7 +1761,7 @@ void free_huge_page(struct page *page) update_and_free_hugetlb_folio(h, folio, true); } else { arch_clear_hugepage_flags(page); - enqueue_huge_page(h, page); + enqueue_hugetlb_folio(h, folio); spin_unlock_irqrestore(&hugetlb_lock, flags); } } @@ -2436,7 +2436,7 @@ static int gather_surplus_pages(struct hstate *h, long delta) if ((--needed) < 0) break; /* Add the page to the hugetlb allocator */ - enqueue_huge_page(h, page); + enqueue_hugetlb_folio(h, page_folio(page)); } free: spin_unlock_irq(&hugetlb_lock); @@ -2802,8 +2802,8 @@ static int alloc_and_dissolve_huge_page(struct hstate *h, struct page *old_page, * Ok, old_page is still a genuine free hugepage. Remove it from * the freelist and decrease the counters. These will be * incremented again when calling __prep_account_new_huge_page() - * and enqueue_huge_page() for new_page. The counters will remain - * stable since this happens under the lock. + * and enqueue_hugetlb_folio() for new_folio. The counters will + * remain stable since this happens under the lock. */ remove_hugetlb_folio(h, old_folio, false); @@ -2812,7 +2812,7 @@ static int alloc_and_dissolve_huge_page(struct hstate *h, struct page *old_page, * earlier. It can be directly added to the pool free list. */ __prep_account_new_huge_page(h, nid); - enqueue_huge_page(h, new_page); + enqueue_hugetlb_folio(h, new_folio); /* * Pages have been replaced, we can safely free the old one. 
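The enqueue conversion above follows the same shape at every call site: the function now takes the folio, checks that it has no references, and links its lru entry onto the per-node free list, while callers that still hold a bare page wrap it with page_folio() at the boundary. A rough, self-contained sketch of that shape, with a toy list and toy counters rather than the hugetlb internals, might look like:

#include <stdio.h>

#define TOY_MAX_NODES 2

struct toy_folio {
	int nid;
	int refcount;
	struct toy_folio *lru_next;	/* stand-in for the folio's lru link */
};

static struct toy_folio *toy_freelist[TOY_MAX_NODES];
static int toy_free_count[TOY_MAX_NODES];

/* Shaped like enqueue_hugetlb_folio(): the folio itself is handed over. */
static void toy_enqueue_folio(struct toy_folio *folio)
{
	if (folio->refcount != 0) {
		fprintf(stderr, "refusing to enqueue folio with refcount %d\n",
			folio->refcount);
		return;
	}
	folio->lru_next = toy_freelist[folio->nid];
	toy_freelist[folio->nid] = folio;
	toy_free_count[folio->nid]++;
}

int main(void)
{
	struct toy_folio folio = { .nid = 1, .refcount = 0, .lru_next = NULL };

	toy_enqueue_folio(&folio);
	printf("free folios on node 1: %d\n", toy_free_count[1]);
	return 0;
}

The refcount check stands in for the VM_BUG_ON_FOLIO() in the real function; the point of the shape is that the node and the list linkage both come from the folio argument rather than from a separate page pointer.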
From patchwork Thu Nov 17 21:02:56 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Sidhartha Kumar X-Patchwork-Id: 13047294 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by smtp.lore.kernel.org (Postfix) with ESMTP id E6BC9C433FE for ; Thu, 17 Nov 2022 21:03:45 +0000 (UTC) Received: by kanga.kvack.org (Postfix) id B9D128E0003; Thu, 17 Nov 2022 16:03:34 -0500 (EST) Received: by kanga.kvack.org (Postfix, from userid 40) id ADDAE8E0009; Thu, 17 Nov 2022 16:03:34 -0500 (EST) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 835C78E0003; Thu, 17 Nov 2022 16:03:34 -0500 (EST) X-Delivered-To: linux-mm@kvack.org Received: from relay.hostedemail.com (smtprelay0015.hostedemail.com [216.40.44.15]) by kanga.kvack.org (Postfix) with ESMTP id 47DE98E0007 for ; Thu, 17 Nov 2022 16:03:34 -0500 (EST) Received: from smtpin19.hostedemail.com (a10.router.float.18 [10.200.18.1]) by unirelay02.hostedemail.com (Postfix) with ESMTP id 17A5E120787 for ; Thu, 17 Nov 2022 21:03:34 +0000 (UTC) X-FDA: 80144160348.19.76DEEC2 Received: from mx0a-00069f02.pphosted.com (mx0a-00069f02.pphosted.com [205.220.165.32]) by imf10.hostedemail.com (Postfix) with ESMTP id 8714DC0005 for ; Thu, 17 Nov 2022 21:03:33 +0000 (UTC) Received: from pps.filterd (m0246627.ppops.net [127.0.0.1]) by mx0b-00069f02.pphosted.com (8.17.1.19/8.17.1.19) with ESMTP id 2AHKOOsY002460; Thu, 17 Nov 2022 21:03:23 GMT DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=oracle.com; h=from : to : cc : subject : date : message-id : in-reply-to : references : mime-version : content-transfer-encoding; s=corp-2022-7-12; bh=cwkUHEJT5bDkOzVSGa3m03Vwd59Go/cano1leZnsjBU=; b=f7crwLu4zaBU+jwvzBHOsYEFrBd4NcUw3Xj6RAYVogF3jVfljo5ZOF49cyR2Rs6gokzL ce8BkqUGJEKvSg1Zyaw7J1/kBeDGQLJgLcavHmRYc/cQ40zYGwB4wnl/cvfh4hb+rXqs BFks4NnwRTL/vlUoHndVl/bpYLUrd2/x4d7jWSDI3HvR0ZYhxibUbQUp4vmFW0oJ/yjC bbuneS01PvTsn7kgov1+08G8WT+OzeiqdKG3DMxaIU7LDxX/qH3i2AMXjCQ/gpTWTTTO QLeh5FnCY2d+fwdptTy8TZesQYyUzdaX4UZI4JkZ+fQRB6W80AfrZYIIb51ncyjYXjn2 cA== Received: from iadpaimrmta03.imrmtpd1.prodappiadaev1.oraclevcn.com (iadpaimrmta03.appoci.oracle.com [130.35.103.27]) by mx0b-00069f02.pphosted.com (PPS) with ESMTPS id 3kv8yktgn8-1 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=OK); Thu, 17 Nov 2022 21:03:22 +0000 Received: from pps.filterd (iadpaimrmta03.imrmtpd1.prodappiadaev1.oraclevcn.com [127.0.0.1]) by iadpaimrmta03.imrmtpd1.prodappiadaev1.oraclevcn.com (8.17.1.5/8.17.1.5) with ESMTP id 2AHKXQTe010907; Thu, 17 Nov 2022 21:03:20 GMT Received: from pps.reinject (localhost [127.0.0.1]) by iadpaimrmta03.imrmtpd1.prodappiadaev1.oraclevcn.com (PPS) with ESMTPS id 3ku3kagyax-1 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=OK); Thu, 17 Nov 2022 21:03:20 +0000 Received: from iadpaimrmta03.imrmtpd1.prodappiadaev1.oraclevcn.com (iadpaimrmta03.imrmtpd1.prodappiadaev1.oraclevcn.com [127.0.0.1]) by pps.reinject (8.17.1.5/8.17.1.5) with ESMTP id 2AHL37Fr032582; Thu, 17 Nov 2022 21:03:19 GMT Received: from sid-dell.us.oracle.com (dhcp-10-132-95-73.usdhcp.oraclecorp.com [10.132.95.73]) by iadpaimrmta03.imrmtpd1.prodappiadaev1.oraclevcn.com (PPS) with ESMTP id 3ku3kagy08-9; Thu, 17 Nov 2022 21:03:19 +0000 From: Sidhartha Kumar To: linux-kernel@vger.kernel.org, linux-mm@kvack.org Cc: akpm@linux-foundation.org, 
songmuchun@bytedance.com, mike.kravetz@oracle.com, willy@infradead.org, almasrymina@google.com, linmiaohe@huawei.com, hughd@google.com, Sidhartha Kumar Subject: [PATCH mm-unstable v2 08/10] mm/hugetlb: convert free_gigantic_page() to folios Date: Thu, 17 Nov 2022 13:02:56 -0800 Message-Id: <20221117210258.12732-9-sidhartha.kumar@oracle.com> X-Mailer: git-send-email 2.38.1 In-Reply-To: <20221117210258.12732-1-sidhartha.kumar@oracle.com> References: <20221117210258.12732-1-sidhartha.kumar@oracle.com> MIME-Version: 1.0 X-Proofpoint-Virus-Version: vendor=baseguard engine=ICAP:2.0.219,Aquarius:18.0.895,Hydra:6.0.545,FMLib:17.11.122.1 definitions=2022-11-17_06,2022-11-17_01,2022-06-22_01 X-Proofpoint-Spam-Details: rule=notspam policy=default score=0 bulkscore=0 mlxscore=0 adultscore=0 phishscore=0 malwarescore=0 suspectscore=0 mlxlogscore=999 spamscore=0 classifier=spam adjust=0 reason=mlx scancount=1 engine=8.12.0-2210170000 definitions=main-2211170150 X-Proofpoint-GUID: AbH47GrRIcaFuj3YJ6J0Owjvw9P9qaQ9 X-Proofpoint-ORIG-GUID: AbH47GrRIcaFuj3YJ6J0Owjvw9P9qaQ9 ARC-Seal: i=1; s=arc-20220608; d=hostedemail.com; t=1668719013; a=rsa-sha256; cv=none; b=7ltmKA3q6zPhLImrQZMkln3gLRmj0wZJBrKufEMK+uazT9sBTYJ1Uwy98INas7h+G6VgDO 2k6FTvXNYnbY13l06ShlJyRMzb4pkTkDJnFy66sGxRPtpQu9XOwyGoiizQhQzXQbHT5pBF P8J+TcAB/ut8cHfqYT9Fno/EpLGmvl4= ARC-Authentication-Results: i=1; imf10.hostedemail.com; dkim=pass header.d=oracle.com header.s=corp-2022-7-12 header.b=f7crwLu4; dmarc=pass (policy=none) header.from=oracle.com; spf=pass (imf10.hostedemail.com: domain of sidhartha.kumar@oracle.com designates 205.220.165.32 as permitted sender) smtp.mailfrom=sidhartha.kumar@oracle.com ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=hostedemail.com; s=arc-20220608; t=1668719013; h=from:from:sender:reply-to:subject:subject:date:date: message-id:message-id:to:to:cc:cc:mime-version:mime-version: content-type:content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references:dkim-signature; bh=cwkUHEJT5bDkOzVSGa3m03Vwd59Go/cano1leZnsjBU=; b=wpTH++ufRGO3rlNCVG+YwCvU8tL3JL1YK+VTl7l3gPh0IHICN7uySkCK1YbIr01UQsyDwZ zWlrFP0DvpSpmBXYfZnkUa5InTD0iYTewz15FhEUtjW/W2BIoOB5jYzkpU3ZE7B8vryy8Z m2VQTSZN9MTE+3hadtqwYfiFFjStEF4= X-Stat-Signature: otck54wmqo7c1tqi5wrcfo9nbs7z6i36 X-Rspamd-Queue-Id: 8714DC0005 X-Rspam-User: Authentication-Results: imf10.hostedemail.com; dkim=pass header.d=oracle.com header.s=corp-2022-7-12 header.b=f7crwLu4; dmarc=pass (policy=none) header.from=oracle.com; spf=pass (imf10.hostedemail.com: domain of sidhartha.kumar@oracle.com designates 205.220.165.32 as permitted sender) smtp.mailfrom=sidhartha.kumar@oracle.com X-Rspamd-Server: rspam09 X-HE-Tag: 1668719013-975326 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: Convert callers of free_gigantic_page() to use folios, function is then renamed to free_gigantic_folio(). 
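The freeing side keeps its existing two-stage release logic, try the CMA area for the folio's node first, otherwise hand the range back to the page allocator, and only the lookups switch to folio helpers. A minimal control-flow sketch of that split, with invented toy names standing in for the real cma_release()/free_contig_range() calls, could look like:

#include <stdbool.h>
#include <stdio.h>

struct toy_folio { int nid; unsigned long pfn; };

/* Stand-in for a CMA release: returns false if the range is not CMA-backed. */
static bool toy_cma_release(int nid, unsigned long pfn, unsigned long nr)
{
	(void)nid; (void)pfn; (void)nr;
	return false;	/* pretend this folio was not a CMA allocation */
}

/* Stand-in for releasing a contiguous range back to the allocator. */
static void toy_free_contig_range(unsigned long pfn, unsigned long nr)
{
	printf("released %lu pages starting at pfn %lu\n", nr, pfn);
}

static void toy_free_gigantic_folio(struct toy_folio *folio, unsigned int order)
{
	unsigned long nr = 1UL << order;

	/* Same shape as the patched function: CMA first, then contig range. */
	if (toy_cma_release(folio->nid, folio->pfn, nr))
		return;
	toy_free_contig_range(folio->pfn, nr);
}

int main(void)
{
	struct toy_folio f = { .nid = 0, .pfn = 4096 };

	toy_free_gigantic_folio(&f, 9);
	return 0;
}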
Signed-off-by: Sidhartha Kumar --- mm/hugetlb.c | 29 +++++++++++++++++------------ 1 file changed, 17 insertions(+), 12 deletions(-) diff --git a/mm/hugetlb.c b/mm/hugetlb.c index 4eb6f3d6f46e..38c5ca015363 100644 --- a/mm/hugetlb.c +++ b/mm/hugetlb.c @@ -1361,18 +1361,20 @@ static void destroy_compound_gigantic_folio(struct folio *folio, __destroy_compound_gigantic_folio(folio, order, false); } -static void free_gigantic_page(struct page *page, unsigned int order) +static void free_gigantic_folio(struct folio *folio, unsigned int order) { /* * If the page isn't allocated using the cma allocator, * cma_release() returns false. */ #ifdef CONFIG_CMA - if (cma_release(hugetlb_cma[page_to_nid(page)], page, 1 << order)) + int nid = folio_nid(folio); + + if (cma_release(hugetlb_cma[nid], &folio->page, 1 << order)) return; #endif - free_contig_range(page_to_pfn(page), 1 << order); + free_contig_range(folio_pfn(folio), 1 << order); } #ifdef CONFIG_CONTIG_ALLOC @@ -1426,7 +1428,8 @@ static struct page *alloc_gigantic_page(struct hstate *h, gfp_t gfp_mask, { return NULL; } -static inline void free_gigantic_page(struct page *page, unsigned int order) { } +static inline void free_gigantic_folio(struct folio *folio, + unsigned int order) { } static inline void destroy_compound_gigantic_folio(struct folio *folio, unsigned int order) { } #endif @@ -1565,7 +1568,7 @@ static void __update_and_free_page(struct hstate *h, struct page *page) * If we don't know which subpages are hwpoisoned, we can't free * the hugepage, so it's leaked intentionally. */ - if (HPageRawHwpUnreliable(page)) + if (folio_test_hugetlb_raw_hwp_unreliable(folio)) return; if (hugetlb_vmemmap_restore(h, page)) { @@ -1575,7 +1578,7 @@ static void __update_and_free_page(struct hstate *h, struct page *page) * page and put the page back on the hugetlb free list and treat * as a surplus page. */ - add_hugetlb_folio(h, page_folio(page), true); + add_hugetlb_folio(h, folio, true); spin_unlock_irq(&hugetlb_lock); return; } @@ -1588,7 +1591,7 @@ static void __update_and_free_page(struct hstate *h, struct page *page) hugetlb_clear_page_hwpoison(&folio->page); for (i = 0; i < pages_per_huge_page(h); i++) { - subpage = nth_page(page, i); + subpage = folio_page(folio, i); subpage->flags &= ~(1 << PG_locked | 1 << PG_error | 1 << PG_referenced | 1 << PG_dirty | 1 << PG_active | 1 << PG_private | @@ -1597,12 +1600,12 @@ static void __update_and_free_page(struct hstate *h, struct page *page) /* * Non-gigantic pages demoted from CMA allocated gigantic pages - * need to be given back to CMA in free_gigantic_page. + * need to be given back to CMA in free_gigantic_folio. */ if (hstate_is_gigantic(h) || hugetlb_cma_folio(folio, huge_page_order(h))) { destroy_compound_gigantic_folio(folio, huge_page_order(h)); - free_gigantic_page(page, huge_page_order(h)); + free_gigantic_folio(folio, huge_page_order(h)); } else { __free_pages(page, huge_page_order(h)); } @@ -2023,6 +2026,7 @@ static struct page *alloc_fresh_huge_page(struct hstate *h, nodemask_t *node_alloc_noretry) { struct page *page; + struct folio *folio; bool retry = false; retry: @@ -2033,14 +2037,14 @@ static struct page *alloc_fresh_huge_page(struct hstate *h, nid, nmask, node_alloc_noretry); if (!page) return NULL; - + folio = page_folio(page); if (hstate_is_gigantic(h)) { if (!prep_compound_gigantic_page(page, huge_page_order(h))) { /* * Rare failure to convert pages to compound page. * Free pages and try again - ONCE! 
*/ - free_gigantic_page(page, huge_page_order(h)); + free_gigantic_folio(folio, huge_page_order(h)); if (!retry) { retry = true; goto retry; @@ -3048,6 +3052,7 @@ static void __init gather_bootmem_prealloc(void) list_for_each_entry(m, &huge_boot_pages, list) { struct page *page = virt_to_page(m); + struct folio *folio = page_folio(page); struct hstate *h = m->hstate; VM_BUG_ON(!hstate_is_gigantic(h)); @@ -3058,7 +3063,7 @@ static void __init gather_bootmem_prealloc(void) free_huge_page(page); /* add to the hugepage allocator */ } else { /* VERY unlikely inflated ref count on a tail page */ - free_gigantic_page(page, huge_page_order(h)); + free_gigantic_folio(folio, huge_page_order(h)); } /* From patchwork Thu Nov 17 21:02:57 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Sidhartha Kumar X-Patchwork-Id: 13047293 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by smtp.lore.kernel.org (Postfix) with ESMTP id 9F773C4332F for ; Thu, 17 Nov 2022 21:03:43 +0000 (UTC) Received: by kanga.kvack.org (Postfix) id 8B1B88E0007; Thu, 17 Nov 2022 16:03:34 -0500 (EST) Received: by kanga.kvack.org (Postfix, from userid 40) id 66CD48E000A; Thu, 17 Nov 2022 16:03:34 -0500 (EST) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 384C28E0003; Thu, 17 Nov 2022 16:03:34 -0500 (EST) X-Delivered-To: linux-mm@kvack.org Received: from relay.hostedemail.com (smtprelay0017.hostedemail.com [216.40.44.17]) by kanga.kvack.org (Postfix) with ESMTP id 1D9118E0007 for ; Thu, 17 Nov 2022 16:03:34 -0500 (EST) Received: from smtpin24.hostedemail.com (a10.router.float.18 [10.200.18.1]) by unirelay08.hostedemail.com (Postfix) with ESMTP id E8EE514125E for ; Thu, 17 Nov 2022 21:03:33 +0000 (UTC) X-FDA: 80144160306.24.0FF1ACF Received: from mx0a-00069f02.pphosted.com (mx0a-00069f02.pphosted.com [205.220.165.32]) by imf06.hostedemail.com (Postfix) with ESMTP id 63492180002 for ; Thu, 17 Nov 2022 21:03:33 +0000 (UTC) Received: from pps.filterd (m0246629.ppops.net [127.0.0.1]) by mx0b-00069f02.pphosted.com (8.17.1.19/8.17.1.19) with ESMTP id 2AHKOKut004819; Thu, 17 Nov 2022 21:03:26 GMT DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=oracle.com; h=from : to : cc : subject : date : message-id : in-reply-to : references : mime-version : content-transfer-encoding; s=corp-2022-7-12; bh=MCK/rXGUTZNQyhTZ+P51qmllsIFD0ccOv0nuQ2VWPKM=; b=MfRqQmhDx6K5xV8shmyIJBd7uIX0lygdVT7WzN3v1ea1XOIxeXlnPBzwtByaUk73rfZv dcuu9WvbCG603vi0uF8G3v6h8Cl7bUZpZUqOe3GdOYi3W0IMpxuHR9QupKzKWbTj419C sZGiW33H9HNqkzJETyoSi9LaeGcOjbp22YRbXktCUsl9gzMk9HWh8DdZj9Ge005pqkXK zlc6D4+Q14y2U6lywcTJXVYb8pmJsSXviuywNfezJzC7J43YykM5f5ur/HJnGeB1BUa5 5ytmiPQS1GTPwaaWjCUq3QDaQDOcNGI091bMv/8QdqwbB6xB88aBDDEJuaC4e05KyJzQ mw== Received: from iadpaimrmta03.imrmtpd1.prodappiadaev1.oraclevcn.com (iadpaimrmta03.appoci.oracle.com [130.35.103.27]) by mx0b-00069f02.pphosted.com (PPS) with ESMTPS id 3kv3jste28-1 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=OK); Thu, 17 Nov 2022 21:03:24 +0000 Received: from pps.filterd (iadpaimrmta03.imrmtpd1.prodappiadaev1.oraclevcn.com [127.0.0.1]) by iadpaimrmta03.imrmtpd1.prodappiadaev1.oraclevcn.com (8.17.1.5/8.17.1.5) with ESMTP id 2AHKZSqx010894; Thu, 17 Nov 2022 21:03:22 GMT Received: from pps.reinject (localhost [127.0.0.1]) by 
iadpaimrmta03.imrmtpd1.prodappiadaev1.oraclevcn.com (PPS) with ESMTPS id 3ku3kagycb-1 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=OK); Thu, 17 Nov 2022 21:03:21 +0000 Received: from iadpaimrmta03.imrmtpd1.prodappiadaev1.oraclevcn.com (iadpaimrmta03.imrmtpd1.prodappiadaev1.oraclevcn.com [127.0.0.1]) by pps.reinject (8.17.1.5/8.17.1.5) with ESMTP id 2AHL37Ft032582; Thu, 17 Nov 2022 21:03:21 GMT Received: from sid-dell.us.oracle.com (dhcp-10-132-95-73.usdhcp.oraclecorp.com [10.132.95.73]) by iadpaimrmta03.imrmtpd1.prodappiadaev1.oraclevcn.com (PPS) with ESMTP id 3ku3kagy08-10; Thu, 17 Nov 2022 21:03:21 +0000 From: Sidhartha Kumar To: linux-kernel@vger.kernel.org, linux-mm@kvack.org Cc: akpm@linux-foundation.org, songmuchun@bytedance.com, mike.kravetz@oracle.com, willy@infradead.org, almasrymina@google.com, linmiaohe@huawei.com, hughd@google.com, Sidhartha Kumar Subject: [PATCH mm-unstable v2 09/10] mm/hugetlb: convert hugetlb prep functions to folios Date: Thu, 17 Nov 2022 13:02:57 -0800 Message-Id: <20221117210258.12732-10-sidhartha.kumar@oracle.com> X-Mailer: git-send-email 2.38.1 In-Reply-To: <20221117210258.12732-1-sidhartha.kumar@oracle.com> References: <20221117210258.12732-1-sidhartha.kumar@oracle.com> MIME-Version: 1.0 X-Proofpoint-Virus-Version: vendor=baseguard engine=ICAP:2.0.219,Aquarius:18.0.895,Hydra:6.0.545,FMLib:17.11.122.1 definitions=2022-11-17_06,2022-11-17_01,2022-06-22_01 X-Proofpoint-Spam-Details: rule=notspam policy=default score=0 bulkscore=0 mlxscore=0 adultscore=0 phishscore=0 malwarescore=0 suspectscore=0 mlxlogscore=999 spamscore=0 classifier=spam adjust=0 reason=mlx scancount=1 engine=8.12.0-2210170000 definitions=main-2211170150 X-Proofpoint-ORIG-GUID: lMDZlMUHC4wS5IWV71oJIP6jr0QEoaU7 X-Proofpoint-GUID: lMDZlMUHC4wS5IWV71oJIP6jr0QEoaU7 ARC-Seal: i=1; s=arc-20220608; d=hostedemail.com; t=1668719013; a=rsa-sha256; cv=none; b=Eemkl01ztaXBnwL40huvIn2CtNZ52F+jJC3oJ6y3PMBSY0TG8VeIrMNwo6UuCqONkbvlEO weEZTSuGB5h+qyHLtAPVpKOdXfpGjo0LT0q/q1W2ntSWy9wkH9kwVWCN15n221fT2lmqy3 QxPx9mifJkEBW9W6TBH3m7iiPNWo7Ro= ARC-Authentication-Results: i=1; imf06.hostedemail.com; dkim=pass header.d=oracle.com header.s=corp-2022-7-12 header.b=MfRqQmhD; spf=pass (imf06.hostedemail.com: domain of sidhartha.kumar@oracle.com designates 205.220.165.32 as permitted sender) smtp.mailfrom=sidhartha.kumar@oracle.com; dmarc=pass (policy=none) header.from=oracle.com ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=hostedemail.com; s=arc-20220608; t=1668719013; h=from:from:sender:reply-to:subject:subject:date:date: message-id:message-id:to:to:cc:cc:mime-version:mime-version: content-type:content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references:dkim-signature; bh=MCK/rXGUTZNQyhTZ+P51qmllsIFD0ccOv0nuQ2VWPKM=; b=4iu7+iioULqL06VGhWc24wwGLSdB9JyfB7Zsp+g4Rp9wSS05E28wAQ5A2KQWa3txjPwTTT v14vkk2BK0kP7OoHw5iB0mM9q92IczEoun0wKd5M09RYII+utTovc2XkC/4INrfQCSFdkl q4TxyEAswKGzmgoXhwG17/RMMldCBDs= X-Stat-Signature: 1qmz88tetngg7g6bsx97gw555os3gia5 X-Rspamd-Queue-Id: 63492180002 Authentication-Results: imf06.hostedemail.com; dkim=pass header.d=oracle.com header.s=corp-2022-7-12 header.b=MfRqQmhD; spf=pass (imf06.hostedemail.com: domain of sidhartha.kumar@oracle.com designates 205.220.165.32 as permitted sender) smtp.mailfrom=sidhartha.kumar@oracle.com; dmarc=pass (policy=none) header.from=oracle.com X-Rspam-User: X-Rspamd-Server: rspam10 X-HE-Tag: 1668719013-706831 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: 
owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: Convert prep_new_huge_page() and __prep_compound_gigantic_page() to folios. Signed-off-by: Sidhartha Kumar --- mm/hugetlb.c | 61 +++++++++++++++++++++++++--------------------------- 1 file changed, 29 insertions(+), 32 deletions(-) diff --git a/mm/hugetlb.c b/mm/hugetlb.c index 38c5ca015363..6ecf9874c521 100644 --- a/mm/hugetlb.c +++ b/mm/hugetlb.c @@ -1789,28 +1789,26 @@ static void __prep_new_hugetlb_folio(struct hstate *h, struct folio *folio) set_hugetlb_cgroup_rsvd(folio, NULL); } -static void prep_new_huge_page(struct hstate *h, struct page *page, int nid) +static void prep_new_hugetlb_folio(struct hstate *h, struct folio *folio, int nid) { - struct folio *folio = page_folio(page); - __prep_new_hugetlb_folio(h, folio); spin_lock_irq(&hugetlb_lock); __prep_account_new_huge_page(h, nid); spin_unlock_irq(&hugetlb_lock); } -static bool __prep_compound_gigantic_page(struct page *page, unsigned int order, - bool demote) +static bool __prep_compound_gigantic_folio(struct folio *folio, + unsigned int order, bool demote) { int i, j; int nr_pages = 1 << order; struct page *p; - /* we rely on prep_new_huge_page to set the destructor */ - set_compound_order(page, order); - __SetPageHead(page); + /* we rely on prep_new_hugetlb_folio to set the destructor */ + folio_set_compound_order(folio, order); + __SetPageHead(&folio->page); for (i = 0; i < nr_pages; i++) { - p = nth_page(page, i); + p = folio_page(folio, i); /* * For gigantic hugepages allocated through bootmem at @@ -1851,43 +1849,41 @@ static bool __prep_compound_gigantic_page(struct page *page, unsigned int order, VM_BUG_ON_PAGE(page_count(p), p); } if (i != 0) - set_compound_head(p, page); + set_compound_head(p, &folio->page); } - atomic_set(compound_mapcount_ptr(page), -1); - atomic_set(subpages_mapcount_ptr(page), 0); - atomic_set(compound_pincount_ptr(page), 0); + atomic_set(folio_mapcount_ptr(folio), -1); + atomic_set(folio_subpages_mapcount_ptr(folio), 0); + atomic_set(folio_pincount_ptr(folio), 0); return true; out_error: /* undo page modifications made above */ for (j = 0; j < i; j++) { - p = nth_page(page, j); + p = folio_page(folio, j); if (j != 0) clear_compound_head(p); set_page_refcounted(p); } /* need to clear PG_reserved on remaining tail pages */ for (; j < nr_pages; j++) { - p = nth_page(page, j); + p = folio_page(folio, j); __ClearPageReserved(p); } - set_compound_order(page, 0); -#ifdef CONFIG_64BIT - page[1].compound_nr = 0; -#endif - __ClearPageHead(page); + folio_set_compound_order(folio, 0); + folio_clear_head(folio); return false; } -static bool prep_compound_gigantic_page(struct page *page, unsigned int order) +static bool prep_compound_gigantic_folio(struct folio *folio, + unsigned int order) { - return __prep_compound_gigantic_page(page, order, false); + return __prep_compound_gigantic_folio(folio, order, false); } -static bool prep_compound_gigantic_page_for_demote(struct page *page, +static bool prep_compound_gigantic_folio_for_demote(struct folio *folio, unsigned int order) { - return __prep_compound_gigantic_page(page, order, true); + return __prep_compound_gigantic_folio(folio, order, true); } /* @@ -2039,7 +2035,7 @@ static struct page *alloc_fresh_huge_page(struct hstate *h, return NULL; folio = page_folio(page); if (hstate_is_gigantic(h)) { - if (!prep_compound_gigantic_page(page, huge_page_order(h))) { + if (!prep_compound_gigantic_folio(folio, huge_page_order(h))) { /* * Rare failure to convert pages to compound page. 
* Free pages and try again - ONCE! @@ -2052,7 +2048,7 @@ static struct page *alloc_fresh_huge_page(struct hstate *h, return NULL; } } - prep_new_huge_page(h, page, page_to_nid(page)); + prep_new_hugetlb_folio(h, folio, folio_nid(folio)); return page; } @@ -3056,10 +3052,10 @@ static void __init gather_bootmem_prealloc(void) struct hstate *h = m->hstate; VM_BUG_ON(!hstate_is_gigantic(h)); - WARN_ON(page_count(page) != 1); - if (prep_compound_gigantic_page(page, huge_page_order(h))) { - WARN_ON(PageReserved(page)); - prep_new_huge_page(h, page, page_to_nid(page)); + WARN_ON(folio_ref_count(folio) != 1); + if (prep_compound_gigantic_folio(folio, huge_page_order(h))) { + WARN_ON(folio_test_reserved(folio)); + prep_new_hugetlb_folio(h, folio, folio_nid(folio)); free_huge_page(page); /* add to the hugepage allocator */ } else { /* VERY unlikely inflated ref count on a tail page */ @@ -3478,13 +3474,14 @@ static int demote_free_huge_page(struct hstate *h, struct page *page) for (i = 0; i < pages_per_huge_page(h); i += pages_per_huge_page(target_hstate)) { subpage = nth_page(page, i); + folio = page_folio(subpage); if (hstate_is_gigantic(target_hstate)) - prep_compound_gigantic_page_for_demote(subpage, + prep_compound_gigantic_folio_for_demote(folio, target_hstate->order); else prep_compound_page(subpage, target_hstate->order); set_page_private(subpage, 0); - prep_new_huge_page(target_hstate, subpage, nid); + prep_new_hugetlb_folio(target_hstate, folio, nid); free_huge_page(subpage); } mutex_unlock(&target_hstate->resize_lock); From patchwork Thu Nov 17 21:02:58 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Sidhartha Kumar X-Patchwork-Id: 13047292 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by smtp.lore.kernel.org (Postfix) with ESMTP id 8FE90C433FE for ; Thu, 17 Nov 2022 21:03:41 +0000 (UTC) Received: by kanga.kvack.org (Postfix) id 5361C8E0008; Thu, 17 Nov 2022 16:03:34 -0500 (EST) Received: by kanga.kvack.org (Postfix, from userid 40) id 4BC6D8E0009; Thu, 17 Nov 2022 16:03:34 -0500 (EST) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 2C2248E0008; Thu, 17 Nov 2022 16:03:34 -0500 (EST) X-Delivered-To: linux-mm@kvack.org Received: from relay.hostedemail.com (smtprelay0015.hostedemail.com [216.40.44.15]) by kanga.kvack.org (Postfix) with ESMTP id 11C2B8E0003 for ; Thu, 17 Nov 2022 16:03:34 -0500 (EST) Received: from smtpin22.hostedemail.com (a10.router.float.18 [10.200.18.1]) by unirelay04.hostedemail.com (Postfix) with ESMTP id E5C591A0784 for ; Thu, 17 Nov 2022 21:03:33 +0000 (UTC) X-FDA: 80144160306.22.708F688 Received: from mx0a-00069f02.pphosted.com (mx0a-00069f02.pphosted.com [205.220.165.32]) by imf03.hostedemail.com (Postfix) with ESMTP id 76B222000D for ; Thu, 17 Nov 2022 21:03:33 +0000 (UTC) Received: from pps.filterd (m0246627.ppops.net [127.0.0.1]) by mx0b-00069f02.pphosted.com (8.17.1.19/8.17.1.19) with ESMTP id 2AHKOOse002460; Thu, 17 Nov 2022 21:03:26 GMT DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=oracle.com; h=from : to : cc : subject : date : message-id : in-reply-to : references : mime-version : content-transfer-encoding; s=corp-2022-7-12; bh=oiF9VDlWy9y8mIee/ulZOw0MitvLoK+ljs9NDxhkktU=; b=VvDg14FEWYbX1LeHqxnrB+23hs03FodxFJd+LnUisyoPukL26JOD2XDqt6fDADYXPe2R 
/bhUwENwMks+z2fAOcsC/MeIi7UKsVEEhDEFoRwMCVRUnXvPjRJ6YKnamC0H9i48RSWE od8QAxDt6dddyCnq5/GEALy2G1hzKl8AyjjBJExhmmy6fgIU4BltV2jJAMxqOBtdcyqC uYoNwUGyBifYB46VzUqb4hoOGueYV38TsEFl7e4ZnQFu2lZ2K7LnsfWk9r74siCADTTF ZrArSscj/JF2ruguWlOECO1wKjxbG64TLx458PCq9KJGhRmIL+Tdnkwm3pCYjGQKr1Lt fw== Received: from iadpaimrmta03.imrmtpd1.prodappiadaev1.oraclevcn.com (iadpaimrmta03.appoci.oracle.com [130.35.103.27]) by mx0b-00069f02.pphosted.com (PPS) with ESMTPS id 3kv8yktgnn-1 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=OK); Thu, 17 Nov 2022 21:03:25 +0000 Received: from pps.filterd (iadpaimrmta03.imrmtpd1.prodappiadaev1.oraclevcn.com [127.0.0.1]) by iadpaimrmta03.imrmtpd1.prodappiadaev1.oraclevcn.com (8.17.1.5/8.17.1.5) with ESMTP id 2AHKX0SZ010906; Thu, 17 Nov 2022 21:03:23 GMT Received: from pps.reinject (localhost [127.0.0.1]) by iadpaimrmta03.imrmtpd1.prodappiadaev1.oraclevcn.com (PPS) with ESMTPS id 3ku3kagyd6-1 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=OK); Thu, 17 Nov 2022 21:03:23 +0000 Received: from iadpaimrmta03.imrmtpd1.prodappiadaev1.oraclevcn.com (iadpaimrmta03.imrmtpd1.prodappiadaev1.oraclevcn.com [127.0.0.1]) by pps.reinject (8.17.1.5/8.17.1.5) with ESMTP id 2AHL37Fv032582; Thu, 17 Nov 2022 21:03:22 GMT Received: from sid-dell.us.oracle.com (dhcp-10-132-95-73.usdhcp.oraclecorp.com [10.132.95.73]) by iadpaimrmta03.imrmtpd1.prodappiadaev1.oraclevcn.com (PPS) with ESMTP id 3ku3kagy08-11; Thu, 17 Nov 2022 21:03:22 +0000 From: Sidhartha Kumar To: linux-kernel@vger.kernel.org, linux-mm@kvack.org Cc: akpm@linux-foundation.org, songmuchun@bytedance.com, mike.kravetz@oracle.com, willy@infradead.org, almasrymina@google.com, linmiaohe@huawei.com, hughd@google.com, Sidhartha Kumar Subject: [PATCH mm-unstable v2 10/10] mm/hugetlb: change hugetlb allocation functions to return a folio Date: Thu, 17 Nov 2022 13:02:58 -0800 Message-Id: <20221117210258.12732-11-sidhartha.kumar@oracle.com> X-Mailer: git-send-email 2.38.1 In-Reply-To: <20221117210258.12732-1-sidhartha.kumar@oracle.com> References: <20221117210258.12732-1-sidhartha.kumar@oracle.com> MIME-Version: 1.0 X-Proofpoint-Virus-Version: vendor=baseguard engine=ICAP:2.0.219,Aquarius:18.0.895,Hydra:6.0.545,FMLib:17.11.122.1 definitions=2022-11-17_06,2022-11-17_01,2022-06-22_01 X-Proofpoint-Spam-Details: rule=notspam policy=default score=0 bulkscore=0 mlxscore=0 adultscore=0 phishscore=0 malwarescore=0 suspectscore=0 mlxlogscore=999 spamscore=0 classifier=spam adjust=0 reason=mlx scancount=1 engine=8.12.0-2210170000 definitions=main-2211170150 X-Proofpoint-GUID: DOBrNUfYvOVgJV39WwrLF5eaa6twuBCS X-Proofpoint-ORIG-GUID: DOBrNUfYvOVgJV39WwrLF5eaa6twuBCS ARC-Seal: i=1; s=arc-20220608; d=hostedemail.com; t=1668719013; a=rsa-sha256; cv=none; b=Mif0STkexXUapDCAmrZsOom1mVuXr2RJ8FLlYitLGCE3BE5VZstiNbvKZ2UVMnnb/szih2 n9sLObQFIY64yQCllhTdeP38GJYVQIewwipGjdO6xPeBXfPpAvEbAKPSY0qI7BFdGtgyrA QqsjOLM/qcLar2pTkTU8DnslYWeeawk= ARC-Authentication-Results: i=1; imf03.hostedemail.com; dkim=pass header.d=oracle.com header.s=corp-2022-7-12 header.b=VvDg14FE; dmarc=pass (policy=none) header.from=oracle.com; spf=pass (imf03.hostedemail.com: domain of sidhartha.kumar@oracle.com designates 205.220.165.32 as permitted sender) smtp.mailfrom=sidhartha.kumar@oracle.com ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=hostedemail.com; s=arc-20220608; t=1668719013; h=from:from:sender:reply-to:subject:subject:date:date: message-id:message-id:to:to:cc:cc:mime-version:mime-version: 
content-type:content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references:dkim-signature; bh=oiF9VDlWy9y8mIee/ulZOw0MitvLoK+ljs9NDxhkktU=; b=4fBakD1MXqkQqcB0Ly5ukG0WNuXmRz5bNm1s62XYIlVeCTYlkqMQW1XGqWHe9xAXTjhP0r UYjpVXU+TVg35fhbeyAMflmUbzJ0fY1B5tWf/PNjJhMLNGhL6oFdJVMEYSmHdi4HGKiTlv Gs4ZB53IAhbLT+3iBZVfkDUQO4CDUuM= X-Rspam-User: X-Rspamd-Server: rspam08 X-Rspamd-Queue-Id: 76B222000D Authentication-Results: imf03.hostedemail.com; dkim=pass header.d=oracle.com header.s=corp-2022-7-12 header.b=VvDg14FE; dmarc=pass (policy=none) header.from=oracle.com; spf=pass (imf03.hostedemail.com: domain of sidhartha.kumar@oracle.com designates 205.220.165.32 as permitted sender) smtp.mailfrom=sidhartha.kumar@oracle.com X-Stat-Signature: i4jgdsyxgnisoq3wf5nz5as1ojrp1ngs X-HE-Tag: 1668719013-545340 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: Many hugetlb allocation helper functions have now been converting to folios, update their higher level callers to be compatible with folios. Signed-off-by: Sidhartha Kumar --- mm/hugetlb.c | 98 ++++++++++++++++++++++++---------------------------- 1 file changed, 46 insertions(+), 52 deletions(-) diff --git a/mm/hugetlb.c b/mm/hugetlb.c index 6ecf9874c521..aa0a388bded4 100644 --- a/mm/hugetlb.c +++ b/mm/hugetlb.c @@ -1378,7 +1378,7 @@ static void free_gigantic_folio(struct folio *folio, unsigned int order) } #ifdef CONFIG_CONTIG_ALLOC -static struct page *alloc_gigantic_page(struct hstate *h, gfp_t gfp_mask, +static struct folio *alloc_gigantic_folio(struct hstate *h, gfp_t gfp_mask, int nid, nodemask_t *nodemask) { unsigned long nr_pages = pages_per_huge_page(h); @@ -1394,7 +1394,7 @@ static struct page *alloc_gigantic_page(struct hstate *h, gfp_t gfp_mask, page = cma_alloc(hugetlb_cma[nid], nr_pages, huge_page_order(h), true); if (page) - return page; + return page_folio(page); } if (!(gfp_mask & __GFP_THISNODE)) { @@ -1405,17 +1405,16 @@ static struct page *alloc_gigantic_page(struct hstate *h, gfp_t gfp_mask, page = cma_alloc(hugetlb_cma[node], nr_pages, huge_page_order(h), true); if (page) - return page; + return page_folio(page); } } } #endif - - return alloc_contig_pages(nr_pages, gfp_mask, nid, nodemask); + return page_folio(alloc_contig_pages(nr_pages, gfp_mask, nid, nodemask)); } #else /* !CONFIG_CONTIG_ALLOC */ -static struct page *alloc_gigantic_page(struct hstate *h, gfp_t gfp_mask, +static struct folio *alloc_gigantic_folio(struct hstate *h, gfp_t gfp_mask, int nid, nodemask_t *nodemask) { return NULL; @@ -1423,7 +1422,7 @@ static struct page *alloc_gigantic_page(struct hstate *h, gfp_t gfp_mask, #endif /* CONFIG_CONTIG_ALLOC */ #else /* !CONFIG_ARCH_HAS_GIGANTIC_PAGE */ -static struct page *alloc_gigantic_page(struct hstate *h, gfp_t gfp_mask, +static struct folio *alloc_gigantic_folio(struct hstate *h, gfp_t gfp_mask, int nid, nodemask_t *nodemask) { return NULL; @@ -1948,7 +1947,7 @@ pgoff_t hugetlb_basepage_index(struct page *page) return (index << compound_order(page_head)) + compound_idx; } -static struct page *alloc_buddy_huge_page(struct hstate *h, +static struct folio *alloc_buddy_hugetlb_folio(struct hstate *h, gfp_t gfp_mask, int nid, nodemask_t *nmask, nodemask_t *node_alloc_noretry) { @@ -2007,7 +2006,7 @@ static struct page *alloc_buddy_huge_page(struct hstate *h, if (node_alloc_noretry && !page && alloc_try_hard) node_set(nid, *node_alloc_noretry); - return page; + return page_folio(page); 
} /* @@ -2017,23 +2016,21 @@ static struct page *alloc_buddy_huge_page(struct hstate *h, * Note that returned page is 'frozen': ref count of head page and all tail * pages is zero. */ -static struct page *alloc_fresh_huge_page(struct hstate *h, +static struct folio *alloc_fresh_hugetlb_folio(struct hstate *h, gfp_t gfp_mask, int nid, nodemask_t *nmask, nodemask_t *node_alloc_noretry) { - struct page *page; struct folio *folio; bool retry = false; retry: if (hstate_is_gigantic(h)) - page = alloc_gigantic_page(h, gfp_mask, nid, nmask); + folio = alloc_gigantic_folio(h, gfp_mask, nid, nmask); else - page = alloc_buddy_huge_page(h, gfp_mask, + folio = alloc_buddy_hugetlb_folio(h, gfp_mask, nid, nmask, node_alloc_noretry); - if (!page) + if (!folio) return NULL; - folio = page_folio(page); if (hstate_is_gigantic(h)) { if (!prep_compound_gigantic_folio(folio, huge_page_order(h))) { /* @@ -2050,7 +2047,7 @@ static struct page *alloc_fresh_huge_page(struct hstate *h, } prep_new_hugetlb_folio(h, folio, folio_nid(folio)); - return page; + return folio; } /* @@ -2060,21 +2057,21 @@ static struct page *alloc_fresh_huge_page(struct hstate *h, static int alloc_pool_huge_page(struct hstate *h, nodemask_t *nodes_allowed, nodemask_t *node_alloc_noretry) { - struct page *page; + struct folio *folio; int nr_nodes, node; gfp_t gfp_mask = htlb_alloc_mask(h) | __GFP_THISNODE; for_each_node_mask_to_alloc(h, nr_nodes, node, nodes_allowed) { - page = alloc_fresh_huge_page(h, gfp_mask, node, nodes_allowed, - node_alloc_noretry); - if (page) + folio = alloc_fresh_hugetlb_folio(h, gfp_mask, node, + nodes_allowed, node_alloc_noretry); + if (folio) break; } - if (!page) + if (!folio) return 0; - free_huge_page(page); /* free it into the hugepage allocator */ + free_huge_page(&folio->page); /* free it into the hugepage allocator */ return 1; } @@ -2235,7 +2232,7 @@ int dissolve_free_huge_pages(unsigned long start_pfn, unsigned long end_pfn) static struct page *alloc_surplus_huge_page(struct hstate *h, gfp_t gfp_mask, int nid, nodemask_t *nmask) { - struct page *page = NULL; + struct folio *folio = NULL; if (hstate_is_gigantic(h)) return NULL; @@ -2245,8 +2242,8 @@ static struct page *alloc_surplus_huge_page(struct hstate *h, gfp_t gfp_mask, goto out_unlock; spin_unlock_irq(&hugetlb_lock); - page = alloc_fresh_huge_page(h, gfp_mask, nid, nmask, NULL); - if (!page) + folio = alloc_fresh_hugetlb_folio(h, gfp_mask, nid, nmask, NULL); + if (!folio) return NULL; spin_lock_irq(&hugetlb_lock); @@ -2258,43 +2255,42 @@ static struct page *alloc_surplus_huge_page(struct hstate *h, gfp_t gfp_mask, * codeflow */ if (h->surplus_huge_pages >= h->nr_overcommit_huge_pages) { - SetHPageTemporary(page); + folio_set_hugetlb_temporary(folio); spin_unlock_irq(&hugetlb_lock); - free_huge_page(page); + free_huge_page(&folio->page); return NULL; } h->surplus_huge_pages++; - h->surplus_huge_pages_node[page_to_nid(page)]++; + h->surplus_huge_pages_node[folio_nid(folio)]++; out_unlock: spin_unlock_irq(&hugetlb_lock); - return page; + return &folio->page; } static struct page *alloc_migrate_huge_page(struct hstate *h, gfp_t gfp_mask, int nid, nodemask_t *nmask) { - struct page *page; + struct folio *folio; if (hstate_is_gigantic(h)) return NULL; - page = alloc_fresh_huge_page(h, gfp_mask, nid, nmask, NULL); - if (!page) + folio = alloc_fresh_hugetlb_folio(h, gfp_mask, nid, nmask, NULL); + if (!folio) return NULL; /* fresh huge pages are frozen */ - set_page_refcounted(page); - + folio_ref_unfreeze(folio, 1); /* * We do not account these pages as 
surplus because they are only * temporary and will be released properly on the last reference */ - SetHPageTemporary(page); + folio_set_hugetlb_temporary(folio); - return page; + return &folio->page; } /* @@ -2743,19 +2739,18 @@ void restore_reserve_on_error(struct hstate *h, struct vm_area_struct *vma, } /* - * alloc_and_dissolve_huge_page - Allocate a new page and dissolve the old one + * alloc_and_dissolve_hugetlb_folio - Allocate a new folio and dissolve + * the old one * @h: struct hstate old page belongs to * @old_page: Old page to dissolve * @list: List to isolate the page in case we need to * Returns 0 on success, otherwise negated error. */ -static int alloc_and_dissolve_huge_page(struct hstate *h, struct page *old_page, - struct list_head *list) +static int alloc_and_dissolve_hugetlb_folio(struct hstate *h, + struct folio *old_folio, struct list_head *list) { gfp_t gfp_mask = htlb_alloc_mask(h) | __GFP_THISNODE; - struct folio *old_folio = page_folio(old_page); int nid = folio_nid(old_folio); - struct page *new_page; struct folio *new_folio; int ret = 0; @@ -2766,26 +2761,25 @@ static int alloc_and_dissolve_huge_page(struct hstate *h, struct page *old_page, * the pool. This simplifies and let us do most of the processing * under the lock. */ - new_page = alloc_buddy_huge_page(h, gfp_mask, nid, NULL, NULL); - if (!new_page) + new_folio = alloc_buddy_hugetlb_folio(h, gfp_mask, nid, NULL, NULL); + if (!new_folio) return -ENOMEM; - new_folio = page_folio(new_page); __prep_new_hugetlb_folio(h, new_folio); retry: spin_lock_irq(&hugetlb_lock); if (!folio_test_hugetlb(old_folio)) { /* - * Freed from under us. Drop new_page too. + * Freed from under us. Drop new_folio too. */ goto free_new; } else if (folio_ref_count(old_folio)) { /* - * Someone has grabbed the page, try to isolate it here. + * Someone has grabbed the folio, try to isolate it here. * Fail with -EBUSY if not possible. */ spin_unlock_irq(&hugetlb_lock); - ret = isolate_hugetlb(old_page, list); + ret = isolate_hugetlb(&old_folio->page, list); spin_lock_irq(&hugetlb_lock); goto free_new; } else if (!folio_test_hugetlb_freed(old_folio)) { @@ -2863,7 +2857,7 @@ int isolate_or_dissolve_huge_page(struct page *page, struct list_head *list) if (folio_ref_count(folio) && !isolate_hugetlb(&folio->page, list)) ret = 0; else if (!folio_ref_count(folio)) - ret = alloc_and_dissolve_huge_page(h, &folio->page, list); + ret = alloc_and_dissolve_hugetlb_folio(h, folio, list); return ret; } @@ -3081,14 +3075,14 @@ static void __init hugetlb_hstate_alloc_pages_onenode(struct hstate *h, int nid) if (!alloc_bootmem_huge_page(h, nid)) break; } else { - struct page *page; + struct folio *folio; gfp_t gfp_mask = htlb_alloc_mask(h) | __GFP_THISNODE; - page = alloc_fresh_huge_page(h, gfp_mask, nid, + folio = alloc_fresh_hugetlb_folio(h, gfp_mask, nid, &node_states[N_MEMORY], NULL); - if (!page) + if (!folio) break; - free_huge_page(page); /* free it into the hugepage allocator */ + free_huge_page(&folio->page); /* free it into the hugepage allocator */ } cond_resched(); }
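Taken together, the series converges on one boundary convention: the allocation helpers traffic in folios internally and return a folio, and any caller that has not yet been converted receives the head page via &folio->page (or wraps a page with page_folio() when going the other way). The small sketch below models that interim boundary with toy types; the names and layout are assumptions for illustration, not the kernel's definitions.

#include <stdio.h>
#include <stdlib.h>

struct toy_page { int nid; };

struct toy_folio { struct toy_page page; };	/* folio wraps the head page */

/* Converted allocator: works in folios and returns one. */
static struct toy_folio *toy_alloc_fresh_folio(int nid)
{
	struct toy_folio *folio = malloc(sizeof(*folio));

	if (!folio)
		return NULL;
	folio->page.nid = nid;
	return folio;
}

/* Unconverted caller: still wants a page at the boundary. */
static struct toy_page *toy_alloc_page_wrapper(int nid)
{
	struct toy_folio *folio = toy_alloc_fresh_folio(nid);

	return folio ? &folio->page : NULL;	/* mirrors "return &folio->page" */
}

int main(void)
{
	struct toy_page *page = toy_alloc_page_wrapper(0);

	if (page) {
		printf("allocated on node %d\n", page->nid);
		free(page);	/* head page address equals the folio address here */
	}
	return 0;
}

Once the remaining callers are converted, the unwrap in the wrapper disappears and the folio flows all the way up, which is the stated end state of this conversion series.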