From patchwork Fri Nov 18 22:19:53 2022
X-Patchwork-Submitter: Sidhartha Kumar
X-Patchwork-Id: 13048850
From: Sidhartha Kumar
To: linux-kernel@vger.kernel.org, linux-mm@kvack.org
Cc: akpm@linux-foundation.org, songmuchun@bytedance.com, mike.kravetz@oracle.com, willy@infradead.org, almasrymina@google.com, linmiaohe@huawei.com, hughd@google.com, Sidhartha Kumar
Subject: [PATCH mm-unstable v4 01/10] mm: add folio dtor and order setter functions
Date: Fri, 18 Nov 2022 14:19:53 -0800
Message-Id: <20221118222002.82588-2-sidhartha.kumar@oracle.com>
In-Reply-To: <20221118222002.82588-1-sidhartha.kumar@oracle.com>
References: <20221118222002.82588-1-sidhartha.kumar@oracle.com>

Add folio equivalents for set_compound_order() and set_compound_page_dtor().

Also remove extra new-lines introduced by "mm/hugetlb: convert
move_hugetlb_state() to folios" and "mm/hugetlb_cgroup: convert
hugetlb_cgroup_uncharge_page() to folios".
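
As a usage sketch only (example_prep_folio() is an invented name, not part of
this series), a prep path can now initialize a folio's order and destructor
through the new helpers instead of writing the page[1]/_folio_* fields directly:

static void example_prep_folio(struct folio *folio, unsigned int order)
{
	/* records _folio_order and, on 64-bit, _folio_nr_pages */
	folio_set_compound_order(folio, order);
	/* HUGETLB_PAGE_DTOR selects the hugetlb compound destructor */
	folio_set_compound_dtor(folio, HUGETLB_PAGE_DTOR);
	INIT_LIST_HEAD(&folio->lru);
}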
Suggested-by: Mike Kravetz
Suggested-by: Muchun Song
Signed-off-by: Sidhartha Kumar
---
 include/linux/mm.h | 16 ++++++++++++++++
 mm/hugetlb.c       |  4 +---
 2 files changed, 17 insertions(+), 3 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index a48c5ad16a5e..2bdef8a5298a 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -972,6 +972,13 @@ static inline void set_compound_page_dtor(struct page *page,
 	page[1].compound_dtor = compound_dtor;
 }

+static inline void folio_set_compound_dtor(struct folio *folio,
+		enum compound_dtor_id compound_dtor)
+{
+	VM_BUG_ON_FOLIO(compound_dtor >= NR_COMPOUND_DTORS, folio);
+	folio->_folio_dtor = compound_dtor;
+}
+
 void destroy_large_folio(struct folio *folio);

 static inline int head_compound_pincount(struct page *head)
@@ -987,6 +994,15 @@ static inline void set_compound_order(struct page *page, unsigned int order)
 #endif
 }

+static inline void folio_set_compound_order(struct folio *folio,
+		unsigned int order)
+{
+	folio->_folio_order = order;
+#ifdef CONFIG_64BIT
+	folio->_folio_nr_pages = order ? 1U << order : 0;
+#endif
+}
+
 /* Returns the number of pages in this potentially compound page. */
 static inline unsigned long compound_nr(struct page *page)
 {
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 4c2fe7fec475..6390de8975c5 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -1780,7 +1780,7 @@ static void __prep_new_hugetlb_folio(struct hstate *h, struct folio *folio)
 {
 	hugetlb_vmemmap_optimize(h, &folio->page);
 	INIT_LIST_HEAD(&folio->lru);
-	folio->_folio_dtor = HUGETLB_PAGE_DTOR;
+	folio_set_compound_dtor(folio, HUGETLB_PAGE_DTOR);
 	hugetlb_set_folio_subpool(folio, NULL);
 	set_hugetlb_cgroup(folio, NULL);
 	set_hugetlb_cgroup_rsvd(folio, NULL);
@@ -2938,7 +2938,6 @@ struct page *alloc_huge_page(struct vm_area_struct *vma,
 	 * a reservation exists for the allocation.
 	 */
 	page = dequeue_huge_page_vma(h, vma, addr, avoid_reserve, gbl_chg);
-
 	if (!page) {
 		spin_unlock_irq(&hugetlb_lock);
 		page = alloc_buddy_huge_page_with_mpol(h, vma, addr);
@@ -7351,7 +7350,6 @@ void move_hugetlb_state(struct folio *old_folio, struct folio *new_folio, int re
 	int old_nid = folio_nid(old_folio);
 	int new_nid = folio_nid(new_folio);
-
 	folio_set_hugetlb_temporary(old_folio);
 	folio_clear_hugetlb_temporary(new_folio);

From patchwork Fri Nov 18 22:19:54 2022
X-Patchwork-Submitter: Sidhartha Kumar
X-Patchwork-Id: 13048846
From: Sidhartha Kumar
To: linux-kernel@vger.kernel.org, linux-mm@kvack.org
Cc: akpm@linux-foundation.org, songmuchun@bytedance.com, mike.kravetz@oracle.com, willy@infradead.org, almasrymina@google.com, linmiaohe@huawei.com, hughd@google.com, Sidhartha Kumar
Subject: [PATCH mm-unstable v4 02/10] mm/hugetlb: convert destroy_compound_gigantic_page() to folios
Date: Fri, 18 Nov 2022 14:19:54 -0800
Message-Id: <20221118222002.82588-3-sidhartha.kumar@oracle.com>
In-Reply-To: <20221118222002.82588-1-sidhartha.kumar@oracle.com>
References: <20221118222002.82588-1-sidhartha.kumar@oracle.com>

Convert page operations within __destroy_compound_gigantic_page() to the
corresponding folio operations.
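
As a sketch of the conversion pattern this patch applies (illustrative only;
example_destroy_folio() is an invented name), tail pages are reached with
folio_page() instead of nth_page(), and head state is cleared through the
folio helpers:

static void example_destroy_folio(struct folio *folio, unsigned int order)
{
	int i;
	struct page *p;

	for (i = 1; i < (1 << order); i++) {
		p = folio_page(folio, i);	/* i-th constituent page */
		p->mapping = NULL;
		clear_compound_head(p);
	}
	folio_set_compound_order(folio, 0);	/* back to order-0 bookkeeping */
	__folio_clear_head(folio);
}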
Signed-off-by: Sidhartha Kumar
---
 mm/hugetlb.c | 43 +++++++++++++++++++++----------------------
 1 file changed, 21 insertions(+), 22 deletions(-)

diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 6390de8975c5..f6f791675c38 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -1325,43 +1325,40 @@ static int hstate_next_node_to_free(struct hstate *h, nodemask_t *nodes_allowed)
 		nr_nodes--)

 /* used to demote non-gigantic_huge pages as well */
-static void __destroy_compound_gigantic_page(struct page *page,
+static void __destroy_compound_gigantic_folio(struct folio *folio,
 					unsigned int order, bool demote)
 {
 	int i;
 	int nr_pages = 1 << order;
 	struct page *p;

-	atomic_set(compound_mapcount_ptr(page), 0);
-	atomic_set(subpages_mapcount_ptr(page), 0);
-	atomic_set(compound_pincount_ptr(page), 0);
+	atomic_set(folio_mapcount_ptr(folio), 0);
+	atomic_set(folio_subpages_mapcount_ptr(folio), 0);
+	atomic_set(folio_pincount_ptr(folio), 0);

 	for (i = 1; i < nr_pages; i++) {
-		p = nth_page(page, i);
+		p = folio_page(folio, i);
 		p->mapping = NULL;
 		clear_compound_head(p);
 		if (!demote)
 			set_page_refcounted(p);
 	}

-	set_compound_order(page, 0);
-#ifdef CONFIG_64BIT
-	page[1].compound_nr = 0;
-#endif
-	__ClearPageHead(page);
+	folio_set_compound_order(folio, 0);
+	__folio_clear_head(folio);
 }

-static void destroy_compound_hugetlb_page_for_demote(struct page *page,
+static void destroy_compound_hugetlb_folio_for_demote(struct folio *folio,
 					unsigned int order)
 {
-	__destroy_compound_gigantic_page(page, order, true);
+	__destroy_compound_gigantic_folio(folio, order, true);
 }

 #ifdef CONFIG_ARCH_HAS_GIGANTIC_PAGE
-static void destroy_compound_gigantic_page(struct page *page,
+static void destroy_compound_gigantic_folio(struct folio *folio,
 					unsigned int order)
 {
-	__destroy_compound_gigantic_page(page, order, false);
+	__destroy_compound_gigantic_folio(folio, order, false);
 }

 static void free_gigantic_page(struct page *page, unsigned int order)
@@ -1430,7 +1427,7 @@ static struct page *alloc_gigantic_page(struct hstate *h, gfp_t gfp_mask,
 	return NULL;
 }
 static inline void free_gigantic_page(struct page *page, unsigned int order) { }
-static inline void destroy_compound_gigantic_page(struct page *page,
+static inline void destroy_compound_gigantic_folio(struct folio *folio,
 						unsigned int order) { }
 #endif

@@ -1477,8 +1474,8 @@ static void __remove_hugetlb_page(struct hstate *h, struct page *page,
 	 *
 	 * For gigantic pages set the destructor to the null dtor. This
 	 * destructor will never be called. Before freeing the gigantic
-	 * page destroy_compound_gigantic_page will turn the compound page
-	 * into a simple group of pages. After this the destructor does not
+	 * page destroy_compound_gigantic_folio will turn the folio into a
+	 * simple group of pages. After this the destructor does not
 	 * apply.
 	 *
 	 * This handles the case where more than one ref is held when and
@@ -1559,6 +1556,7 @@ static void add_hugetlb_page(struct hstate *h, struct page *page,
 static void __update_and_free_page(struct hstate *h, struct page *page)
 {
 	int i;
+	struct folio *folio = page_folio(page);
 	struct page *subpage;

 	if (hstate_is_gigantic(h) && !gigantic_page_runtime_supported())
@@ -1587,8 +1585,8 @@ static void __update_and_free_page(struct hstate *h, struct page *page)
 	 * Move PageHWPoison flag from head page to the raw error pages,
 	 * which makes any healthy subpages reusable.
 	 */
-	if (unlikely(PageHWPoison(page)))
-		hugetlb_clear_page_hwpoison(page);
+	if (unlikely(folio_test_hwpoison(folio)))
+		hugetlb_clear_page_hwpoison(&folio->page);

 	for (i = 0; i < pages_per_huge_page(h); i++) {
 		subpage = nth_page(page, i);
@@ -1604,7 +1602,7 @@ static void __update_and_free_page(struct hstate *h, struct page *page)
 	 */
 	if (hstate_is_gigantic(h) ||
 	    hugetlb_cma_page(page, huge_page_order(h))) {
-		destroy_compound_gigantic_page(page, huge_page_order(h));
+		destroy_compound_gigantic_folio(folio, huge_page_order(h));
 		free_gigantic_page(page, huge_page_order(h));
 	} else {
 		__free_pages(page, huge_page_order(h));
@@ -3437,6 +3435,7 @@ static int demote_free_huge_page(struct hstate *h, struct page *page)
 {
 	int i, nid = page_to_nid(page);
 	struct hstate *target_hstate;
+	struct folio *folio = page_folio(page);
 	struct page *subpage;
 	int rc = 0;

@@ -3455,10 +3454,10 @@ static int demote_free_huge_page(struct hstate *h, struct page *page)
 	}

 	/*
-	 * Use destroy_compound_hugetlb_page_for_demote for all huge page
+	 * Use destroy_compound_hugetlb_folio_for_demote for all huge page
 	 * sizes as it will not ref count pages.
 	 */
-	destroy_compound_hugetlb_page_for_demote(page, huge_page_order(h));
+	destroy_compound_hugetlb_folio_for_demote(folio, huge_page_order(h));

 	/*
 	 * Taking target hstate mutex synchronizes with set_max_huge_pages.

From patchwork Fri Nov 18 22:19:55 2022
X-Patchwork-Submitter: Sidhartha Kumar
X-Patchwork-Id: 13048861
From: Sidhartha Kumar
To: linux-kernel@vger.kernel.org, linux-mm@kvack.org
Cc: akpm@linux-foundation.org, songmuchun@bytedance.com, mike.kravetz@oracle.com, willy@infradead.org, almasrymina@google.com, linmiaohe@huawei.com, hughd@google.com, Sidhartha Kumar
Subject: [PATCH mm-unstable v4 03/10] mm/hugetlb: convert dissolve_free_huge_page() to folios
Date: Fri, 18 Nov 2022 14:19:55 -0800
Message-Id: <20221118222002.82588-4-sidhartha.kumar@oracle.com>
In-Reply-To: <20221118222002.82588-1-sidhartha.kumar@oracle.com>
References: <20221118222002.82588-1-sidhartha.kumar@oracle.com>

Removes compound_head() call by using a folio rather than a head page.

Signed-off-by: Sidhartha Kumar
---
 mm/hugetlb.c | 20 ++++++++++----------
 1 file changed, 10 insertions(+), 10 deletions(-)

diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index f6f791675c38..d0acabbf3025 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -2128,21 +2128,21 @@ static struct page *remove_pool_huge_page(struct hstate *h,
 int dissolve_free_huge_page(struct page *page)
 {
 	int rc = -EBUSY;
+	struct folio *folio = page_folio(page);

 retry:
 	/* Not to disrupt normal path by vainly holding hugetlb_lock */
-	if (!PageHuge(page))
+	if (!folio_test_hugetlb(folio))
 		return 0;

 	spin_lock_irq(&hugetlb_lock);
-	if (!PageHuge(page)) {
+	if (!folio_test_hugetlb(folio)) {
 		rc = 0;
 		goto out;
 	}

-	if (!page_count(page)) {
-		struct page *head = compound_head(page);
-		struct hstate *h = page_hstate(head);
+	if (!folio_ref_count(folio)) {
+		struct hstate *h = folio_hstate(folio);
 		if (!available_huge_pages(h))
 			goto out;

@@ -2150,7 +2150,7 @@ int dissolve_free_huge_page(struct page *page)
 		 * We should make sure that the page is already on the free list
 		 * when it is dissolved.
 		 */
-		if (unlikely(!HPageFreed(head))) {
+		if (unlikely(!folio_test_hugetlb_freed(folio))) {
 			spin_unlock_irq(&hugetlb_lock);
 			cond_resched();

@@ -2165,7 +2165,7 @@ int dissolve_free_huge_page(struct page *page)
 			goto retry;
 		}

-		remove_hugetlb_page(h, head, false);
+		remove_hugetlb_page(h, &folio->page, false);
 		h->max_huge_pages--;
 		spin_unlock_irq(&hugetlb_lock);

@@ -2177,12 +2177,12 @@ int dissolve_free_huge_page(struct page *page)
 		 * Attempt to allocate vmemmmap here so that we can take
 		 * appropriate action on failure.
 		 */
-		rc = hugetlb_vmemmap_restore(h, head);
+		rc = hugetlb_vmemmap_restore(h, &folio->page);
 		if (!rc) {
-			update_and_free_page(h, head, false);
+			update_and_free_page(h, &folio->page, false);
 		} else {
 			spin_lock_irq(&hugetlb_lock);
-			add_hugetlb_page(h, head, false);
+			add_hugetlb_page(h, &folio->page, false);
 			h->max_huge_pages++;
 			spin_unlock_irq(&hugetlb_lock);
 		}

From patchwork Fri Nov 18 22:19:56 2022
X-Patchwork-Submitter: Sidhartha Kumar
X-Patchwork-Id: 13049433
From: Sidhartha Kumar
To: linux-kernel@vger.kernel.org, linux-mm@kvack.org
Cc: akpm@linux-foundation.org, songmuchun@bytedance.com, mike.kravetz@oracle.com, willy@infradead.org, almasrymina@google.com, linmiaohe@huawei.com, hughd@google.com, Sidhartha Kumar
Subject: [PATCH mm-unstable v4 04/10] mm/hugetlb: convert remove_hugetlb_page() to folios
Date: Fri, 18 Nov 2022 14:19:56 -0800
Message-Id: <20221118222002.82588-5-sidhartha.kumar@oracle.com>
In-Reply-To: <20221118222002.82588-1-sidhartha.kumar@oracle.com>
References: <20221118222002.82588-1-sidhartha.kumar@oracle.com>

Removes page_folio() call by converting callers to directly pass a folio
into __remove_hugetlb_page().
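
As a caller-side sketch (example_remove_one() is an invented name; hugetlb_lock
is assumed to be held, as it was for remove_hugetlb_page()), the pattern after
this patch is to derive the folio once and pass it down:

static void example_remove_one(struct hstate *h, struct page *page)
{
	struct folio *folio = page_folio(page);	/* convert once at the boundary */

	/* was: remove_hugetlb_page(h, page, false) */
	remove_hugetlb_folio(h, folio, false);
}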
Signed-off-by: Sidhartha Kumar
---
 mm/hugetlb.c | 48 +++++++++++++++++++++++++-----------------------
 1 file changed, 25 insertions(+), 23 deletions(-)

diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index d0acabbf3025..90ba01a76f87 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -1432,19 +1432,18 @@ static inline void destroy_compound_gigantic_folio(struct folio *folio,
 #endif

 /*
- * Remove hugetlb page from lists, and update dtor so that page appears
+ * Remove hugetlb folio from lists, and update dtor so that the folio appears
  * as just a compound page.
  *
- * A reference is held on the page, except in the case of demote.
+ * A reference is held on the folio, except in the case of demote.
  *
  * Must be called with hugetlb lock held.
  */
-static void __remove_hugetlb_page(struct hstate *h, struct page *page,
+static void __remove_hugetlb_folio(struct hstate *h, struct folio *folio,
 							bool adjust_surplus,
 							bool demote)
 {
-	int nid = page_to_nid(page);
-	struct folio *folio = page_folio(page);
+	int nid = folio_nid(folio);

 	VM_BUG_ON_FOLIO(hugetlb_cgroup_from_folio(folio), folio);
 	VM_BUG_ON_FOLIO(hugetlb_cgroup_from_folio_rsvd(folio), folio);
@@ -1453,9 +1452,9 @@ static void __remove_hugetlb_page(struct hstate *h, struct page *page,
 	if (hstate_is_gigantic(h) && !gigantic_page_runtime_supported())
 		return;

-	list_del(&page->lru);
+	list_del(&folio->lru);

-	if (HPageFreed(page)) {
+	if (folio_test_hugetlb_freed(folio)) {
 		h->free_huge_pages--;
 		h->free_huge_pages_node[nid]--;
 	}
@@ -1485,26 +1484,26 @@ static void __remove_hugetlb_page(struct hstate *h, struct page *page,
 	 * be turned into a page of smaller size.
 	 */
 	if (!demote)
-		set_page_refcounted(page);
+		folio_ref_unfreeze(folio, 1);
 	if (hstate_is_gigantic(h))
-		set_compound_page_dtor(page, NULL_COMPOUND_DTOR);
+		folio_set_compound_dtor(folio, NULL_COMPOUND_DTOR);
 	else
-		set_compound_page_dtor(page, COMPOUND_PAGE_DTOR);
+		folio_set_compound_dtor(folio, COMPOUND_PAGE_DTOR);

 	h->nr_huge_pages--;
 	h->nr_huge_pages_node[nid]--;
 }

-static void remove_hugetlb_page(struct hstate *h, struct page *page,
+static void remove_hugetlb_folio(struct hstate *h, struct folio *folio,
 							bool adjust_surplus)
 {
-	__remove_hugetlb_page(h, page, adjust_surplus, false);
+	__remove_hugetlb_folio(h, folio, adjust_surplus, false);
 }

-static void remove_hugetlb_page_for_demote(struct hstate *h, struct page *page,
+static void remove_hugetlb_folio_for_demote(struct hstate *h, struct folio *folio,
 							bool adjust_surplus)
 {
-	__remove_hugetlb_page(h, page, adjust_surplus, true);
+	__remove_hugetlb_folio(h, folio, adjust_surplus, true);
 }

 static void add_hugetlb_page(struct hstate *h, struct page *page,
@@ -1639,8 +1638,9 @@ static void free_hpage_workfn(struct work_struct *work)
 		/*
 		 * The VM_BUG_ON_PAGE(!PageHuge(page), page) in page_hstate()
 		 * is going to trigger because a previous call to
-		 * remove_hugetlb_page() will set_compound_page_dtor(page,
-		 * NULL_COMPOUND_DTOR), so do not use page_hstate() directly.
+		 * remove_hugetlb_folio() will call folio_set_compound_dtor
+		 * (folio, NULL_COMPOUND_DTOR), so do not use page_hstate()
+		 * directly.
 		 */
 		h = size_to_hstate(page_size(page));
@@ -1749,12 +1749,12 @@ void free_huge_page(struct page *page)
 		h->resv_huge_pages++;

 	if (folio_test_hugetlb_temporary(folio)) {
-		remove_hugetlb_page(h, page, false);
+		remove_hugetlb_folio(h, folio, false);
 		spin_unlock_irqrestore(&hugetlb_lock, flags);
 		update_and_free_page(h, page, true);
 	} else if (h->surplus_huge_pages_node[nid]) {
 		/* remove the page from active list */
-		remove_hugetlb_page(h, page, true);
+		remove_hugetlb_folio(h, folio, true);
 		spin_unlock_irqrestore(&hugetlb_lock, flags);
 		update_and_free_page(h, page, true);
 	} else {
@@ -2092,6 +2092,7 @@ static struct page *remove_pool_huge_page(struct hstate *h,
 {
 	int nr_nodes, node;
 	struct page *page = NULL;
+	struct folio *folio;

 	lockdep_assert_held(&hugetlb_lock);
 	for_each_node_mask_to_free(h, nr_nodes, node, nodes_allowed) {
@@ -2103,7 +2104,8 @@ static struct page *remove_pool_huge_page(struct hstate *h,
 		    !list_empty(&h->hugepage_freelists[node])) {
 			page = list_entry(h->hugepage_freelists[node].next,
 					  struct page, lru);
-			remove_hugetlb_page(h, page, acct_surplus);
+			folio = page_folio(page);
+			remove_hugetlb_folio(h, folio, acct_surplus);
 			break;
 		}
 	}
@@ -2165,7 +2167,7 @@ int dissolve_free_huge_page(struct page *page)
 			goto retry;
 		}

-		remove_hugetlb_page(h, &folio->page, false);
+		remove_hugetlb_folio(h, folio, false);
 		h->max_huge_pages--;
 		spin_unlock_irq(&hugetlb_lock);

@@ -2803,7 +2805,7 @@ static int alloc_and_dissolve_huge_page(struct hstate *h, struct page *old_page,
 	 * and enqueue_huge_page() for new_page. The counters will remain
 	 * stable since this happens under the lock.
 	 */
-	remove_hugetlb_page(h, old_page, false);
+	remove_hugetlb_folio(h, old_folio, false);

 	/*
 	 * Ref count on new page is already zero as it was dropped
@@ -3230,7 +3232,7 @@ static void try_to_free_low(struct hstate *h, unsigned long count,
 				goto out;
 			if (PageHighMem(page))
 				continue;
-			remove_hugetlb_page(h, page, false);
+			remove_hugetlb_folio(h, page_folio(page), false);
 			list_add(&page->lru, &page_list);
 		}
 	}
@@ -3441,7 +3443,7 @@ static int demote_free_huge_page(struct hstate *h, struct page *page)

 	target_hstate = size_to_hstate(PAGE_SIZE << h->demote_order);

-	remove_hugetlb_page_for_demote(h, page, false);
+	remove_hugetlb_folio_for_demote(h, folio, false);
 	spin_unlock_irq(&hugetlb_lock);

 	rc = hugetlb_vmemmap_restore(h, page);

From patchwork Fri Nov 18 22:19:57 2022
X-Patchwork-Submitter: Sidhartha Kumar
X-Patchwork-Id: 13048844
From: Sidhartha Kumar
To: linux-kernel@vger.kernel.org, linux-mm@kvack.org
Cc: akpm@linux-foundation.org, songmuchun@bytedance.com, mike.kravetz@oracle.com, willy@infradead.org, almasrymina@google.com, linmiaohe@huawei.com, hughd@google.com, Sidhartha Kumar
Subject: [PATCH mm-unstable v4 05/10] mm/hugetlb: convert update_and_free_page() to folios
Date: Fri, 18 Nov 2022 14:19:57 -0800
Message-Id: <20221118222002.82588-6-sidhartha.kumar@oracle.com>
In-Reply-To: <20221118222002.82588-1-sidhartha.kumar@oracle.com>
References: <20221118222002.82588-1-sidhartha.kumar@oracle.com>

Make more progress on converting the free_huge_page() destructor to operate
on folios by converting update_and_free_page() to folios.

Signed-off-by: Sidhartha Kumar
---
 mm/hugetlb.c | 30 ++++++++++++++++--------------
 1 file changed, 16 insertions(+), 14 deletions(-)

diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 90ba01a76f87..83777d1ccbf3 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -1478,7 +1478,7 @@ static void __remove_hugetlb_folio(struct hstate *h, struct folio *folio,
 	 * apply.
 	 *
 	 * This handles the case where more than one ref is held when and
-	 * after update_and_free_page is called.
+	 * after update_and_free_hugetlb_folio is called.
 	 *
 	 * In the case of demote we do not ref count the page as it will soon
 	 * be turned into a page of smaller size.
@@ -1609,7 +1609,7 @@ static void __update_and_free_page(struct hstate *h, struct page *page)
 }

 /*
- * As update_and_free_page() can be called under any context, so we cannot
+ * As update_and_free_hugetlb_folio() can be called under any context, so we cannot
  * use GFP_KERNEL to allocate vmemmap pages. However, we can defer the
  * actual freeing in a workqueue to prevent from using GFP_ATOMIC to allocate
  * the vmemmap pages.
@@ -1657,11 +1657,11 @@ static inline void flush_free_hpage_work(struct hstate *h)
 		flush_work(&free_hpage_work);
 }

-static void update_and_free_page(struct hstate *h, struct page *page,
+static void update_and_free_hugetlb_folio(struct hstate *h, struct folio *folio,
 				 bool atomic)
 {
-	if (!HPageVmemmapOptimized(page) || !atomic) {
-		__update_and_free_page(h, page);
+	if (!folio_test_hugetlb_vmemmap_optimized(folio) || !atomic) {
+		__update_and_free_page(h, &folio->page);
 		return;
 	}

@@ -1672,16 +1672,18 @@ static void update_and_free_page(struct hstate *h, struct page *page,
 	 * empty. Otherwise, schedule_work() had been called but the workfn
 	 * hasn't retrieved the list yet.
 	 */
-	if (llist_add((struct llist_node *)&page->mapping, &hpage_freelist))
+	if (llist_add((struct llist_node *)&folio->mapping, &hpage_freelist))
 		schedule_work(&free_hpage_work);
 }

 static void update_and_free_pages_bulk(struct hstate *h, struct list_head *list)
 {
 	struct page *page, *t_page;
+	struct folio *folio;

 	list_for_each_entry_safe(page, t_page, list, lru) {
-		update_and_free_page(h, page, false);
+		folio = page_folio(page);
+		update_and_free_hugetlb_folio(h, folio, false);
 		cond_resched();
 	}
 }
@@ -1751,12 +1753,12 @@ void free_huge_page(struct page *page)
 	if (folio_test_hugetlb_temporary(folio)) {
 		remove_hugetlb_folio(h, folio, false);
 		spin_unlock_irqrestore(&hugetlb_lock, flags);
-		update_and_free_page(h, page, true);
+		update_and_free_hugetlb_folio(h, folio, true);
 	} else if (h->surplus_huge_pages_node[nid]) {
 		/* remove the page from active list */
 		remove_hugetlb_folio(h, folio, true);
 		spin_unlock_irqrestore(&hugetlb_lock, flags);
-		update_and_free_page(h, page, true);
+		update_and_free_hugetlb_folio(h, folio, true);
 	} else {
 		arch_clear_hugepage_flags(page);
 		enqueue_huge_page(h, page);
@@ -2172,8 +2174,8 @@ int dissolve_free_huge_page(struct page *page)
 	spin_unlock_irq(&hugetlb_lock);

 	/*
-	 * Normally update_and_free_page will allocate required vmemmmap
-	 * before freeing the page. update_and_free_page will fail to
+	 * Normally update_and_free_hugtlb_folio will allocate required vmemmmap
+	 * before freeing the page. update_and_free_hugtlb_folio will fail to
 	 * free the page if it can not allocate required vmemmap. We
 	 * need to adjust max_huge_pages if the page is not freed.
 	 * Attempt to allocate vmemmmap here so that we can take
@@ -2181,7 +2183,7 @@
 	 */
 	rc = hugetlb_vmemmap_restore(h, &folio->page);
 	if (!rc) {
-		update_and_free_page(h, &folio->page, false);
+		update_and_free_hugetlb_folio(h, folio, false);
 	} else {
 		spin_lock_irq(&hugetlb_lock);
 		add_hugetlb_page(h, &folio->page, false);
@@ -2818,7 +2820,7 @@ static int alloc_and_dissolve_huge_page(struct hstate *h, struct page *old_page,
 	 * Pages have been replaced, we can safely free the old one.
 	 */
 	spin_unlock_irq(&hugetlb_lock);
-	update_and_free_page(h, old_page, false);
+	update_and_free_hugetlb_folio(h, old_folio, false);
 	}

 	return ret;
@@ -2827,7 +2829,7 @@ static int alloc_and_dissolve_huge_page(struct hstate *h, struct page *old_page,
 	spin_unlock_irq(&hugetlb_lock);
 	/* Page has a zero ref count, but needs a ref to be freed */
 	folio_ref_unfreeze(new_folio, 1);
-	update_and_free_page(h, new_page, false);
+	update_and_free_hugetlb_folio(h, new_folio, false);

 	return ret;
 }

From patchwork Fri Nov 18 22:19:58 2022
X-Patchwork-Submitter: Sidhartha Kumar
X-Patchwork-Id: 13048848
From: Sidhartha Kumar
To: linux-kernel@vger.kernel.org, linux-mm@kvack.org
Cc: akpm@linux-foundation.org, songmuchun@bytedance.com, mike.kravetz@oracle.com, willy@infradead.org, almasrymina@google.com, linmiaohe@huawei.com, hughd@google.com, Sidhartha Kumar
Subject: [PATCH mm-unstable v4 06/10] mm/hugetlb: convert add_hugetlb_page() to folios and add hugetlb_cma_folio()
Date: Fri, 18 Nov 2022 14:19:58 -0800
Message-Id: <20221118222002.82588-7-sidhartha.kumar@oracle.com>
In-Reply-To: <20221118222002.82588-1-sidhartha.kumar@oracle.com>
References: <20221118222002.82588-1-sidhartha.kumar@oracle.com>

Convert add_hugetlb_page() to take in a folio, and also convert
hugetlb_cma_page() to take in a folio.
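
As a sketch of the resulting callers (example_requeue_surplus() is an invented
name; hugetlb_lock is assumed to be held, as it was for add_hugetlb_page()):

static void example_requeue_surplus(struct hstate *h, struct page *page)
{
	struct folio *folio = page_folio(page);

	/* CMA membership is now checked through the folio */
	if (hugetlb_cma_folio(folio, huge_page_order(h)))
		pr_debug("hugetlb folio resides in the CMA area\n");

	/* was: add_hugetlb_page(h, page, true) */
	add_hugetlb_folio(h, folio, true);
}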
Signed-off-by: Sidhartha Kumar --- mm/hugetlb.c | 40 ++++++++++++++++++++-------------------- 1 file changed, 20 insertions(+), 20 deletions(-) diff --git a/mm/hugetlb.c b/mm/hugetlb.c index 83777d1ccbf3..6acaa61571be 100644 --- a/mm/hugetlb.c +++ b/mm/hugetlb.c @@ -54,13 +54,13 @@ struct hstate hstates[HUGE_MAX_HSTATE]; #ifdef CONFIG_CMA static struct cma *hugetlb_cma[MAX_NUMNODES]; static unsigned long hugetlb_cma_size_in_node[MAX_NUMNODES] __initdata; -static bool hugetlb_cma_page(struct page *page, unsigned int order) +static bool hugetlb_cma_folio(struct folio *folio, unsigned int order) { - return cma_pages_valid(hugetlb_cma[page_to_nid(page)], page, + return cma_pages_valid(hugetlb_cma[folio_nid(folio)], &folio->page, 1 << order); } #else -static bool hugetlb_cma_page(struct page *page, unsigned int order) +static bool hugetlb_cma_folio(struct folio *folio, unsigned int order) { return false; } @@ -1506,17 +1506,17 @@ static void remove_hugetlb_folio_for_demote(struct hstate *h, struct folio *foli __remove_hugetlb_folio(h, folio, adjust_surplus, true); } -static void add_hugetlb_page(struct hstate *h, struct page *page, +static void add_hugetlb_folio(struct hstate *h, struct folio *folio, bool adjust_surplus) { int zeroed; - int nid = page_to_nid(page); + int nid = folio_nid(folio); - VM_BUG_ON_PAGE(!HPageVmemmapOptimized(page), page); + VM_BUG_ON_FOLIO(!folio_test_hugetlb_vmemmap_optimized(folio), folio); lockdep_assert_held(&hugetlb_lock); - INIT_LIST_HEAD(&page->lru); + INIT_LIST_HEAD(&folio->lru); h->nr_huge_pages++; h->nr_huge_pages_node[nid]++; @@ -1525,21 +1525,21 @@ static void add_hugetlb_page(struct hstate *h, struct page *page, h->surplus_huge_pages_node[nid]++; } - set_compound_page_dtor(page, HUGETLB_PAGE_DTOR); - set_page_private(page, 0); + folio_set_compound_dtor(folio, HUGETLB_PAGE_DTOR); + folio_change_private(folio, 0); /* * We have to set HPageVmemmapOptimized again as above - * set_page_private(page, 0) cleared it. + * folio_change_private(folio, 0) cleared it. */ - SetHPageVmemmapOptimized(page); + folio_set_hugetlb_vmemmap_optimized(folio); /* - * This page is about to be managed by the hugetlb allocator and + * This folio is about to be managed by the hugetlb allocator and * should have no users. Drop our reference, and check for others * just in case. */ - zeroed = put_page_testzero(page); - if (!zeroed) + zeroed = folio_put_testzero(folio); + if (unlikely(!zeroed)) /* * It is VERY unlikely soneone else has taken a ref on * the page. In this case, we simply return as the @@ -1548,8 +1548,8 @@ static void add_hugetlb_page(struct hstate *h, struct page *page, */ return; - arch_clear_hugepage_flags(page); - enqueue_huge_page(h, page); + arch_clear_hugepage_flags(&folio->page); + enqueue_huge_page(h, &folio->page); } static void __update_and_free_page(struct hstate *h, struct page *page) @@ -1575,7 +1575,7 @@ static void __update_and_free_page(struct hstate *h, struct page *page) * page and put the page back on the hugetlb free list and treat * as a surplus page. */ - add_hugetlb_page(h, page, true); + add_hugetlb_folio(h, page_folio(page), true); spin_unlock_irq(&hugetlb_lock); return; } @@ -1600,7 +1600,7 @@ static void __update_and_free_page(struct hstate *h, struct page *page) * need to be given back to CMA in free_gigantic_page. 
*/ if (hstate_is_gigantic(h) || - hugetlb_cma_page(page, huge_page_order(h))) { + hugetlb_cma_folio(folio, huge_page_order(h))) { destroy_compound_gigantic_folio(folio, huge_page_order(h)); free_gigantic_page(page, huge_page_order(h)); } else { @@ -2186,7 +2186,7 @@ int dissolve_free_huge_page(struct page *page) update_and_free_hugetlb_folio(h, folio, false); } else { spin_lock_irq(&hugetlb_lock); - add_hugetlb_page(h, &folio->page, false); + add_hugetlb_folio(h, folio, false); h->max_huge_pages++; spin_unlock_irq(&hugetlb_lock); } @@ -3453,7 +3453,7 @@ static int demote_free_huge_page(struct hstate *h, struct page *page) /* Allocation of vmemmmap failed, we can not demote page */ spin_lock_irq(&hugetlb_lock); set_page_refcounted(page); - add_hugetlb_page(h, page, false); + add_hugetlb_folio(h, page_folio(page), false); return rc; }

From patchwork Fri Nov 18 22:19:59 2022
X-Patchwork-Submitter: Sidhartha Kumar
X-Patchwork-Id: 13048847
From: Sidhartha Kumar
To: linux-kernel@vger.kernel.org, linux-mm@kvack.org
Cc: akpm@linux-foundation.org, songmuchun@bytedance.com, mike.kravetz@oracle.com, willy@infradead.org, almasrymina@google.com, linmiaohe@huawei.com, hughd@google.com, Sidhartha Kumar
Subject: [PATCH mm-unstable v4 07/10] mm/hugetlb: convert enqueue_huge_page() to folios
Date: Fri, 18 Nov 2022 14:19:59 -0800
Message-Id: <20221118222002.82588-8-sidhartha.kumar@oracle.com>
In-Reply-To: <20221118222002.82588-1-sidhartha.kumar@oracle.com>
References: <20221118222002.82588-1-sidhartha.kumar@oracle.com>

Convert callers of enqueue_huge_page() to pass in a folio; the function is renamed to enqueue_hugetlb_folio().

Signed-off-by: Sidhartha Kumar --- mm/hugetlb.c | 22 +++++++++++----------- 1 file changed, 11 insertions(+), 11 deletions(-) diff --git a/mm/hugetlb.c b/mm/hugetlb.c index 6acaa61571be..36ae789009dc 100644 --- a/mm/hugetlb.c +++ b/mm/hugetlb.c @@ -1127,17 +1127,17 @@ static bool vma_has_reserves(struct vm_area_struct *vma, long chg) return false; } -static void enqueue_huge_page(struct hstate *h, struct page *page) +static void enqueue_hugetlb_folio(struct hstate *h, struct folio *folio) { - int nid = page_to_nid(page); + int nid = folio_nid(folio); lockdep_assert_held(&hugetlb_lock); - VM_BUG_ON_PAGE(page_count(page), page); + VM_BUG_ON_FOLIO(folio_ref_count(folio), folio); - list_move(&page->lru, &h->hugepage_freelists[nid]); + list_move(&folio->lru, &h->hugepage_freelists[nid]); h->free_huge_pages++; h->free_huge_pages_node[nid]++; - SetHPageFreed(page); + folio_set_hugetlb_freed(folio); } static struct page *dequeue_huge_page_node_exact(struct hstate *h, int nid) @@ -1549,7 +1549,7 @@ static void add_hugetlb_folio(struct hstate *h, struct folio *folio, return; arch_clear_hugepage_flags(&folio->page); - enqueue_huge_page(h, &folio->page); + enqueue_hugetlb_folio(h, folio); } static void __update_and_free_page(struct hstate *h, struct page *page) @@ -1761,7 +1761,7 @@ void free_huge_page(struct page *page) update_and_free_hugetlb_folio(h, folio, true); } else { arch_clear_hugepage_flags(page); - enqueue_huge_page(h, page); + enqueue_hugetlb_folio(h, folio); spin_unlock_irqrestore(&hugetlb_lock, flags); } } @@ -2438,7 +2438,7 @@ static int gather_surplus_pages(struct hstate *h, long delta) if ((--needed) < 0) break; /* Add the page to the hugetlb allocator */ - enqueue_huge_page(h, page); + enqueue_hugetlb_folio(h, page_folio(page)); } free: spin_unlock_irq(&hugetlb_lock); @@ -2804,8 +2804,8 @@ static int alloc_and_dissolve_huge_page(struct hstate *h, struct page *old_page, * Ok, old_page is still a genuine free hugepage. Remove it from * the freelist and decrease the counters. These will be * incremented again when calling __prep_account_new_huge_page() - * and enqueue_huge_page() for new_page. The counters will remain - * stable since this happens under the lock. + * and enqueue_hugetlb_folio() for new_folio. The counters will + * remain stable since this happens under the lock. */ remove_hugetlb_folio(h, old_folio, false); @@ -2814,7 +2814,7 @@ static int alloc_and_dissolve_huge_page(struct hstate *h, struct page *old_page, * earlier. It can be directly added to the pool free list. */ __prep_account_new_huge_page(h, nid); - enqueue_huge_page(h, new_page); + enqueue_hugetlb_folio(h, new_folio); /* * Pages have been replaced, we can safely free the old one.
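The call-site side of the conversion follows the same pattern as the diff above: a caller that already has a folio passes it straight through, while a caller that still holds a struct page converts at the boundary with page_folio(). A minimal sketch follows, assuming toy definitions in which the folio simply wraps its head page; enqueue_folio() and the caller names are illustrative only, not the kernel's API.

#include <stdio.h>

/*
 * Toy userspace model, not the kernel's definitions: because the folio's
 * first member is its head page, the toy page_folio() is a plain cast.
 */
struct page { int nid; };
struct folio { struct page page; };

static struct folio *page_folio(struct page *page)
{
        return (struct folio *)page;    /* valid for head pages in this model */
}

/* The folio-based replacement for a page-based helper. */
static void enqueue_folio(struct folio *folio)
{
        printf("enqueue folio on node %d\n", folio->page.nid);
}

/* A caller that still holds a struct page converts at the boundary... */
static void caller_with_page(struct page *page)
{
        enqueue_folio(page_folio(page));
}

/* ...while a caller that already has the folio passes it straight through. */
static void caller_with_folio(struct folio *folio)
{
        enqueue_folio(folio);
}

int main(void)
{
        struct folio f = { .page = { .nid = 0 } };

        caller_with_page(&f.page);
        caller_with_folio(&f);
        return 0;
}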
From patchwork Fri Nov 18 22:20:00 2022
X-Patchwork-Submitter: Sidhartha Kumar
X-Patchwork-Id: 13048849
From: Sidhartha Kumar
To: linux-kernel@vger.kernel.org, linux-mm@kvack.org
Cc: akpm@linux-foundation.org, songmuchun@bytedance.com, mike.kravetz@oracle.com, willy@infradead.org, almasrymina@google.com, linmiaohe@huawei.com, hughd@google.com, Sidhartha Kumar
Subject: [PATCH mm-unstable v4 08/10] mm/hugetlb: convert free_gigantic_page() to folios
Date: Fri, 18 Nov 2022 14:20:00 -0800
Message-Id: <20221118222002.82588-9-sidhartha.kumar@oracle.com>
In-Reply-To: <20221118222002.82588-1-sidhartha.kumar@oracle.com>
References: <20221118222002.82588-1-sidhartha.kumar@oracle.com>

Convert callers of free_gigantic_page() to use folios; the function is then renamed to free_gigantic_folio().
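For context on the function being converted in the diff below: free_gigantic_page(), and now free_gigantic_folio(), first offers the range back to CMA and falls back to free_contig_range() when cma_release() reports that the range is not CMA-managed. A minimal sketch of that try-then-fall-back shape follows; the toy_* names and the pfn threshold are invented for illustration and carry none of the kernel's actual behavior.

#include <stdbool.h>
#include <stdio.h>

/*
 * Toy model of the release path only; the threshold below is an invented
 * stand-in for "this range belongs to the CMA area".
 */
static bool toy_cma_release(unsigned long pfn, unsigned long nr_pages)
{
        (void)nr_pages;
        return pfn >= 4096;     /* pretend only high pfns are CMA-backed */
}

static void toy_free_contig_range(unsigned long pfn, unsigned long nr_pages)
{
        printf("free_contig_range: pfn %lu, %lu pages\n", pfn, nr_pages);
}

/* Shape of the converted helper: try CMA first, else the generic free. */
static void toy_free_gigantic_folio(unsigned long head_pfn, unsigned int order)
{
        unsigned long nr_pages = 1UL << order;

        if (toy_cma_release(head_pfn, nr_pages)) {
                printf("cma_release: pfn %lu, %lu pages\n", head_pfn, nr_pages);
                return;
        }
        toy_free_contig_range(head_pfn, nr_pages);
}

int main(void)
{
        toy_free_gigantic_folio(8192, 10);      /* CMA-backed in this toy */
        toy_free_gigantic_folio(512, 10);       /* falls back to contig free */
        return 0;
}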
Signed-off-by: Sidhartha Kumar --- mm/hugetlb.c | 29 +++++++++++++++++------------ 1 file changed, 17 insertions(+), 12 deletions(-) diff --git a/mm/hugetlb.c b/mm/hugetlb.c index 36ae789009dc..c28d3c67bc0b 100644 --- a/mm/hugetlb.c +++ b/mm/hugetlb.c @@ -1361,18 +1361,20 @@ static void destroy_compound_gigantic_folio(struct folio *folio, __destroy_compound_gigantic_folio(folio, order, false); } -static void free_gigantic_page(struct page *page, unsigned int order) +static void free_gigantic_folio(struct folio *folio, unsigned int order) { /* * If the page isn't allocated using the cma allocator, * cma_release() returns false. */ #ifdef CONFIG_CMA - if (cma_release(hugetlb_cma[page_to_nid(page)], page, 1 << order)) + int nid = folio_nid(folio); + + if (cma_release(hugetlb_cma[nid], &folio->page, 1 << order)) return; #endif - free_contig_range(page_to_pfn(page), 1 << order); + free_contig_range(folio_pfn(folio), 1 << order); } #ifdef CONFIG_CONTIG_ALLOC @@ -1426,7 +1428,8 @@ static struct page *alloc_gigantic_page(struct hstate *h, gfp_t gfp_mask, { return NULL; } -static inline void free_gigantic_page(struct page *page, unsigned int order) { } +static inline void free_gigantic_folio(struct folio *folio, + unsigned int order) { } static inline void destroy_compound_gigantic_folio(struct folio *folio, unsigned int order) { } #endif @@ -1565,7 +1568,7 @@ static void __update_and_free_page(struct hstate *h, struct page *page) * If we don't know which subpages are hwpoisoned, we can't free * the hugepage, so it's leaked intentionally. */ - if (HPageRawHwpUnreliable(page)) + if (folio_test_hugetlb_raw_hwp_unreliable(folio)) return; if (hugetlb_vmemmap_restore(h, page)) { @@ -1575,7 +1578,7 @@ static void __update_and_free_page(struct hstate *h, struct page *page) * page and put the page back on the hugetlb free list and treat * as a surplus page. */ - add_hugetlb_folio(h, page_folio(page), true); + add_hugetlb_folio(h, folio, true); spin_unlock_irq(&hugetlb_lock); return; } @@ -1588,7 +1591,7 @@ static void __update_and_free_page(struct hstate *h, struct page *page) hugetlb_clear_page_hwpoison(&folio->page); for (i = 0; i < pages_per_huge_page(h); i++) { - subpage = nth_page(page, i); + subpage = folio_page(folio, i); subpage->flags &= ~(1 << PG_locked | 1 << PG_error | 1 << PG_referenced | 1 << PG_dirty | 1 << PG_active | 1 << PG_private | @@ -1597,12 +1600,12 @@ static void __update_and_free_page(struct hstate *h, struct page *page) /* * Non-gigantic pages demoted from CMA allocated gigantic pages - * need to be given back to CMA in free_gigantic_page. + * need to be given back to CMA in free_gigantic_folio. */ if (hstate_is_gigantic(h) || hugetlb_cma_folio(folio, huge_page_order(h))) { destroy_compound_gigantic_folio(folio, huge_page_order(h)); - free_gigantic_page(page, huge_page_order(h)); + free_gigantic_folio(folio, huge_page_order(h)); } else { __free_pages(page, huge_page_order(h)); } @@ -2025,6 +2028,7 @@ static struct page *alloc_fresh_huge_page(struct hstate *h, nodemask_t *node_alloc_noretry) { struct page *page; + struct folio *folio; bool retry = false; retry: @@ -2035,14 +2039,14 @@ static struct page *alloc_fresh_huge_page(struct hstate *h, nid, nmask, node_alloc_noretry); if (!page) return NULL; - + folio = page_folio(page); if (hstate_is_gigantic(h)) { if (!prep_compound_gigantic_page(page, huge_page_order(h))) { /* * Rare failure to convert pages to compound page. * Free pages and try again - ONCE! 
*/ - free_gigantic_page(page, huge_page_order(h)); + free_gigantic_folio(folio, huge_page_order(h)); if (!retry) { retry = true; goto retry; @@ -3050,6 +3054,7 @@ static void __init gather_bootmem_prealloc(void) list_for_each_entry(m, &huge_boot_pages, list) { struct page *page = virt_to_page(m); + struct folio *folio = page_folio(page); struct hstate *h = m->hstate; VM_BUG_ON(!hstate_is_gigantic(h)); @@ -3060,7 +3065,7 @@ static void __init gather_bootmem_prealloc(void) free_huge_page(page); /* add to the hugepage allocator */ } else { /* VERY unlikely inflated ref count on a tail page */ - free_gigantic_page(page, huge_page_order(h)); + free_gigantic_folio(folio, huge_page_order(h)); } /*

From patchwork Fri Nov 18 22:20:01 2022
X-Patchwork-Submitter: Sidhartha Kumar
X-Patchwork-Id: 13048851
From: Sidhartha Kumar
To: linux-kernel@vger.kernel.org, linux-mm@kvack.org
Cc: akpm@linux-foundation.org, songmuchun@bytedance.com, mike.kravetz@oracle.com, willy@infradead.org, almasrymina@google.com, linmiaohe@huawei.com, hughd@google.com, Sidhartha Kumar
Subject: [PATCH mm-unstable v4 09/10] mm/hugetlb: convert hugetlb prep functions to folios
Date: Fri, 18 Nov 2022 14:20:01 -0800
Message-Id: <20221118222002.82588-10-sidhartha.kumar@oracle.com>
In-Reply-To: <20221118222002.82588-1-sidhartha.kumar@oracle.com>
References: <20221118222002.82588-1-sidhartha.kumar@oracle.com>

Convert prep_new_huge_page() and __prep_compound_gigantic_page() to folios.

Signed-off-by: Sidhartha Kumar --- mm/hugetlb.c | 63 +++++++++++++++++++++++++--------------------------- 1 file changed, 30 insertions(+), 33 deletions(-) diff --git a/mm/hugetlb.c b/mm/hugetlb.c index c28d3c67bc0b..b690ea7aaa00 100644 --- a/mm/hugetlb.c +++ b/mm/hugetlb.c @@ -1789,29 +1789,27 @@ static void __prep_new_hugetlb_folio(struct hstate *h, struct folio *folio) set_hugetlb_cgroup_rsvd(folio, NULL); } -static void prep_new_huge_page(struct hstate *h, struct page *page, int nid) +static void prep_new_hugetlb_folio(struct hstate *h, struct folio *folio, int nid) { - struct folio *folio = page_folio(page); - __prep_new_hugetlb_folio(h, folio); spin_lock_irq(&hugetlb_lock); __prep_account_new_huge_page(h, nid); spin_unlock_irq(&hugetlb_lock); } -static bool __prep_compound_gigantic_page(struct page *page, unsigned int order, - bool demote) +static bool __prep_compound_gigantic_folio(struct folio *folio, + unsigned int order, bool demote) { int i, j; int nr_pages = 1 << order; struct page *p; - /* we rely on prep_new_huge_page to set the destructor */ - set_compound_order(page, order); - __ClearPageReserved(page); - __SetPageHead(page); + /* we rely on prep_new_hugetlb_folio to set the destructor */ + folio_set_compound_order(folio, order); + __folio_clear_reserved(folio); + __folio_set_head(folio); for (i = 0; i < nr_pages; i++) { - p = nth_page(page, i); + p = folio_page(folio, i); /* * For gigantic hugepages allocated through bootmem at @@ -1853,43 +1851,41 @@ static bool __prep_compound_gigantic_page(struct page *page, unsigned int order, VM_BUG_ON_PAGE(page_count(p), p); } if (i != 0) - set_compound_head(p, page); + set_compound_head(p, &folio->page); } - atomic_set(compound_mapcount_ptr(page), -1); - atomic_set(subpages_mapcount_ptr(page), 0); - atomic_set(compound_pincount_ptr(page), 0); + atomic_set(folio_mapcount_ptr(folio), -1); + atomic_set(folio_subpages_mapcount_ptr(folio), 0); + atomic_set(folio_pincount_ptr(folio), 0); return true; out_error: /* undo page modifications made above */ for (j = 0; j < i; j++) { - p = nth_page(page, j); + p = folio_page(folio, j); if (j != 0) clear_compound_head(p); set_page_refcounted(p); } /* need to clear PG_reserved on remaining tail pages */ for (; j < nr_pages; j++) { - p = nth_page(page, j); + p = folio_page(folio, j); __ClearPageReserved(p); } - set_compound_order(page, 0); -#ifdef CONFIG_64BIT - page[1].compound_nr = 0; -#endif - __ClearPageHead(page); + folio_set_compound_order(folio, 0); + __folio_clear_head(folio); return false; } -static bool prep_compound_gigantic_page(struct page *page, unsigned int order) +static bool prep_compound_gigantic_folio(struct folio *folio, + unsigned int order) { - return __prep_compound_gigantic_page(page, order, false); + return __prep_compound_gigantic_folio(folio, order, false); } -static bool prep_compound_gigantic_page_for_demote(struct page *page, +static bool prep_compound_gigantic_folio_for_demote(struct folio *folio, unsigned int order) { - return __prep_compound_gigantic_page(page, order, true); + return __prep_compound_gigantic_folio(folio, order, true); } /* @@ -2041,7 +2037,7 @@ static struct page *alloc_fresh_huge_page(struct hstate *h, return NULL; folio = page_folio(page); if (hstate_is_gigantic(h)) { - if (!prep_compound_gigantic_page(page, huge_page_order(h))) { + if (!prep_compound_gigantic_folio(folio,
huge_page_order(h))) { /* * Rare failure to convert pages to compound page. * Free pages and try again - ONCE! @@ -2054,7 +2050,7 @@ static struct page *alloc_fresh_huge_page(struct hstate *h, return NULL; } } - prep_new_huge_page(h, page, page_to_nid(page)); + prep_new_hugetlb_folio(h, folio, folio_nid(folio)); return page; } @@ -3058,10 +3054,10 @@ static void __init gather_bootmem_prealloc(void) struct hstate *h = m->hstate; VM_BUG_ON(!hstate_is_gigantic(h)); - WARN_ON(page_count(page) != 1); - if (prep_compound_gigantic_page(page, huge_page_order(h))) { - WARN_ON(PageReserved(page)); - prep_new_huge_page(h, page, page_to_nid(page)); + WARN_ON(folio_ref_count(folio) != 1); + if (prep_compound_gigantic_folio(folio, huge_page_order(h))) { + WARN_ON(folio_test_reserved(folio)); + prep_new_hugetlb_folio(h, folio, folio_nid(folio)); free_huge_page(page); /* add to the hugepage allocator */ } else { /* VERY unlikely inflated ref count on a tail page */ @@ -3480,13 +3476,14 @@ static int demote_free_huge_page(struct hstate *h, struct page *page) for (i = 0; i < pages_per_huge_page(h); i += pages_per_huge_page(target_hstate)) { subpage = nth_page(page, i); + folio = page_folio(subpage); if (hstate_is_gigantic(target_hstate)) - prep_compound_gigantic_page_for_demote(subpage, + prep_compound_gigantic_folio_for_demote(folio, target_hstate->order); else prep_compound_page(subpage, target_hstate->order); set_page_private(subpage, 0); - prep_new_huge_page(target_hstate, subpage, nid); + prep_new_hugetlb_folio(target_hstate, folio, nid); free_huge_page(subpage); } mutex_unlock(&target_hstate->resize_lock);

From patchwork Fri Nov 18 22:20:02 2022
X-Patchwork-Submitter: Sidhartha Kumar
X-Patchwork-Id: 13048852
From: Sidhartha Kumar
To: linux-kernel@vger.kernel.org, linux-mm@kvack.org
Cc: akpm@linux-foundation.org, songmuchun@bytedance.com, mike.kravetz@oracle.com, willy@infradead.org, almasrymina@google.com, linmiaohe@huawei.com, hughd@google.com, Sidhartha Kumar
Subject: [PATCH mm-unstable v4 10/10] mm/hugetlb: change hugetlb allocation functions to return a folio
Date: Fri, 18 Nov 2022 14:20:02 -0800
Message-Id: <20221118222002.82588-11-sidhartha.kumar@oracle.com>
In-Reply-To: <20221118222002.82588-1-sidhartha.kumar@oracle.com>
References: <20221118222002.82588-1-sidhartha.kumar@oracle.com>

Many hugetlb allocation helper functions have now been converted to folios; update their higher-level callers to be compatible with folios.

Signed-off-by: Sidhartha Kumar --- mm/hugetlb.c | 98 ++++++++++++++++++++++++---------------------------- 1 file changed, 46 insertions(+), 52 deletions(-) diff --git a/mm/hugetlb.c b/mm/hugetlb.c index b690ea7aaa00..2de3e5ccbc3f 100644 --- a/mm/hugetlb.c +++ b/mm/hugetlb.c @@ -1378,7 +1378,7 @@ static void free_gigantic_folio(struct folio *folio, unsigned int order) } #ifdef CONFIG_CONTIG_ALLOC -static struct page *alloc_gigantic_page(struct hstate *h, gfp_t gfp_mask, +static struct folio *alloc_gigantic_folio(struct hstate *h, gfp_t gfp_mask, int nid, nodemask_t *nodemask) { unsigned long nr_pages = pages_per_huge_page(h); @@ -1394,7 +1394,7 @@ static struct page *alloc_gigantic_page(struct hstate *h, gfp_t gfp_mask, page = cma_alloc(hugetlb_cma[nid], nr_pages, huge_page_order(h), true); if (page) - return page; + return page_folio(page); } if (!(gfp_mask & __GFP_THISNODE)) { @@ -1405,17 +1405,16 @@ static struct page *alloc_gigantic_page(struct hstate *h, gfp_t gfp_mask, page = cma_alloc(hugetlb_cma[node], nr_pages, huge_page_order(h), true); if (page) - return page; + return page_folio(page); } } } #endif - - return alloc_contig_pages(nr_pages, gfp_mask, nid, nodemask); + return page_folio(alloc_contig_pages(nr_pages, gfp_mask, nid, nodemask)); } #else /* !CONFIG_CONTIG_ALLOC */ -static struct page *alloc_gigantic_page(struct hstate *h, gfp_t gfp_mask, +static struct folio *alloc_gigantic_folio(struct hstate *h, gfp_t gfp_mask, int nid, nodemask_t *nodemask) { return NULL; @@ -1423,7 +1422,7 @@ static struct page *alloc_gigantic_page(struct hstate *h, gfp_t gfp_mask, #endif /* CONFIG_CONTIG_ALLOC */ #else /* !CONFIG_ARCH_HAS_GIGANTIC_PAGE */ -static struct page *alloc_gigantic_page(struct hstate *h, gfp_t gfp_mask, +static struct folio *alloc_gigantic_folio(struct hstate *h, gfp_t gfp_mask, int nid, nodemask_t *nodemask) { return NULL; @@ -1950,7 +1949,7 @@ pgoff_t hugetlb_basepage_index(struct page *page) return (index << compound_order(page_head)) + compound_idx; } -static struct page *alloc_buddy_huge_page(struct hstate *h, +static struct folio *alloc_buddy_hugetlb_folio(struct hstate *h, gfp_t gfp_mask, int nid, nodemask_t *nmask, nodemask_t *node_alloc_noretry) { @@ -2009,7 +2008,7 @@ static struct page *alloc_buddy_huge_page(struct hstate *h, if (node_alloc_noretry && !page && alloc_try_hard) node_set(nid,
*node_alloc_noretry); - return page; + return page_folio(page); } /* @@ -2019,23 +2018,21 @@ static struct page *alloc_buddy_huge_page(struct hstate *h, * Note that returned page is 'frozen': ref count of head page and all tail * pages is zero. */ -static struct page *alloc_fresh_huge_page(struct hstate *h, +static struct folio *alloc_fresh_hugetlb_folio(struct hstate *h, gfp_t gfp_mask, int nid, nodemask_t *nmask, nodemask_t *node_alloc_noretry) { - struct page *page; struct folio *folio; bool retry = false; retry: if (hstate_is_gigantic(h)) - page = alloc_gigantic_page(h, gfp_mask, nid, nmask); + folio = alloc_gigantic_folio(h, gfp_mask, nid, nmask); else - page = alloc_buddy_huge_page(h, gfp_mask, + folio = alloc_buddy_hugetlb_folio(h, gfp_mask, nid, nmask, node_alloc_noretry); - if (!page) + if (!folio) return NULL; - folio = page_folio(page); if (hstate_is_gigantic(h)) { if (!prep_compound_gigantic_folio(folio, huge_page_order(h))) { /* @@ -2052,7 +2049,7 @@ static struct page *alloc_fresh_huge_page(struct hstate *h, } prep_new_hugetlb_folio(h, folio, folio_nid(folio)); - return page; + return folio; } /* @@ -2062,21 +2059,21 @@ static struct page *alloc_fresh_huge_page(struct hstate *h, static int alloc_pool_huge_page(struct hstate *h, nodemask_t *nodes_allowed, nodemask_t *node_alloc_noretry) { - struct page *page; + struct folio *folio; int nr_nodes, node; gfp_t gfp_mask = htlb_alloc_mask(h) | __GFP_THISNODE; for_each_node_mask_to_alloc(h, nr_nodes, node, nodes_allowed) { - page = alloc_fresh_huge_page(h, gfp_mask, node, nodes_allowed, - node_alloc_noretry); - if (page) + folio = alloc_fresh_hugetlb_folio(h, gfp_mask, node, + nodes_allowed, node_alloc_noretry); + if (folio) break; } - if (!page) + if (!folio) return 0; - free_huge_page(page); /* free it into the hugepage allocator */ + free_huge_page(&folio->page); /* free it into the hugepage allocator */ return 1; } @@ -2237,7 +2234,7 @@ int dissolve_free_huge_pages(unsigned long start_pfn, unsigned long end_pfn) static struct page *alloc_surplus_huge_page(struct hstate *h, gfp_t gfp_mask, int nid, nodemask_t *nmask) { - struct page *page = NULL; + struct folio *folio = NULL; if (hstate_is_gigantic(h)) return NULL; @@ -2247,8 +2244,8 @@ static struct page *alloc_surplus_huge_page(struct hstate *h, gfp_t gfp_mask, goto out_unlock; spin_unlock_irq(&hugetlb_lock); - page = alloc_fresh_huge_page(h, gfp_mask, nid, nmask, NULL); - if (!page) + folio = alloc_fresh_hugetlb_folio(h, gfp_mask, nid, nmask, NULL); + if (!folio) return NULL; spin_lock_irq(&hugetlb_lock); @@ -2260,43 +2257,42 @@ static struct page *alloc_surplus_huge_page(struct hstate *h, gfp_t gfp_mask, * codeflow */ if (h->surplus_huge_pages >= h->nr_overcommit_huge_pages) { - SetHPageTemporary(page); + folio_set_hugetlb_temporary(folio); spin_unlock_irq(&hugetlb_lock); - free_huge_page(page); + free_huge_page(&folio->page); return NULL; } h->surplus_huge_pages++; - h->surplus_huge_pages_node[page_to_nid(page)]++; + h->surplus_huge_pages_node[folio_nid(folio)]++; out_unlock: spin_unlock_irq(&hugetlb_lock); - return page; + return &folio->page; } static struct page *alloc_migrate_huge_page(struct hstate *h, gfp_t gfp_mask, int nid, nodemask_t *nmask) { - struct page *page; + struct folio *folio; if (hstate_is_gigantic(h)) return NULL; - page = alloc_fresh_huge_page(h, gfp_mask, nid, nmask, NULL); - if (!page) + folio = alloc_fresh_hugetlb_folio(h, gfp_mask, nid, nmask, NULL); + if (!folio) return NULL; /* fresh huge pages are frozen */ - set_page_refcounted(page); - + 
folio_ref_unfreeze(folio, 1); /* * We do not account these pages as surplus because they are only * temporary and will be released properly on the last reference */ - SetHPageTemporary(page); + folio_set_hugetlb_temporary(folio); - return page; + return &folio->page; } /* @@ -2745,19 +2741,18 @@ void restore_reserve_on_error(struct hstate *h, struct vm_area_struct *vma, } /* - * alloc_and_dissolve_huge_page - Allocate a new page and dissolve the old one + * alloc_and_dissolve_hugetlb_folio - Allocate a new folio and dissolve + * the old one * @h: struct hstate old page belongs to * @old_page: Old page to dissolve * @list: List to isolate the page in case we need to * Returns 0 on success, otherwise negated error. */ -static int alloc_and_dissolve_huge_page(struct hstate *h, struct page *old_page, - struct list_head *list) +static int alloc_and_dissolve_hugetlb_folio(struct hstate *h, + struct folio *old_folio, struct list_head *list) { gfp_t gfp_mask = htlb_alloc_mask(h) | __GFP_THISNODE; - struct folio *old_folio = page_folio(old_page); int nid = folio_nid(old_folio); - struct page *new_page; struct folio *new_folio; int ret = 0; @@ -2768,26 +2763,25 @@ static int alloc_and_dissolve_huge_page(struct hstate *h, struct page *old_page, * the pool. This simplifies and let us do most of the processing * under the lock. */ - new_page = alloc_buddy_huge_page(h, gfp_mask, nid, NULL, NULL); - if (!new_page) + new_folio = alloc_buddy_hugetlb_folio(h, gfp_mask, nid, NULL, NULL); + if (!new_folio) return -ENOMEM; - new_folio = page_folio(new_page); __prep_new_hugetlb_folio(h, new_folio); retry: spin_lock_irq(&hugetlb_lock); if (!folio_test_hugetlb(old_folio)) { /* - * Freed from under us. Drop new_page too. + * Freed from under us. Drop new_folio too. */ goto free_new; } else if (folio_ref_count(old_folio)) { /* - * Someone has grabbed the page, try to isolate it here. + * Someone has grabbed the folio, try to isolate it here. * Fail with -EBUSY if not possible. */ spin_unlock_irq(&hugetlb_lock); - ret = isolate_hugetlb(old_page, list); + ret = isolate_hugetlb(&old_folio->page, list); spin_lock_irq(&hugetlb_lock); goto free_new; } else if (!folio_test_hugetlb_freed(old_folio)) { @@ -2865,7 +2859,7 @@ int isolate_or_dissolve_huge_page(struct page *page, struct list_head *list) if (folio_ref_count(folio) && !isolate_hugetlb(&folio->page, list)) ret = 0; else if (!folio_ref_count(folio)) - ret = alloc_and_dissolve_huge_page(h, &folio->page, list); + ret = alloc_and_dissolve_hugetlb_folio(h, folio, list); return ret; } @@ -3083,14 +3077,14 @@ static void __init hugetlb_hstate_alloc_pages_onenode(struct hstate *h, int nid) if (!alloc_bootmem_huge_page(h, nid)) break; } else { - struct page *page; + struct folio *folio; gfp_t gfp_mask = htlb_alloc_mask(h) | __GFP_THISNODE; - page = alloc_fresh_huge_page(h, gfp_mask, nid, + folio = alloc_fresh_hugetlb_folio(h, gfp_mask, nid, &node_states[N_MEMORY], NULL); - if (!page) + if (!folio) break; - free_huge_page(page); /* free it into the hugepage allocator */ + free_huge_page(&folio->page); /* free it into the hugepage allocator */ } cond_resched(); }
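Taken together with the earlier patches, the allocation helpers now hand a folio back to their callers, and the few remaining page-based interfaces are reached through &folio->page. A closing sketch of that calling convention follows, again with toy types and invented toy_* names rather than the kernel's definitions.

#include <stddef.h>
#include <stdio.h>

/*
 * Toy userspace model, not the kernel's definitions; toy_alloc_fresh_folio()
 * and toy_free_huge_page() stand in for the converted allocator and for a
 * page-based interface that still remains.
 */
struct page { int nid; };
struct folio { struct page page; };

/* After the series, the fresh-allocation helper hands back a folio... */
static struct folio *toy_alloc_fresh_folio(int nid)
{
        static struct folio f;          /* placeholder for a real allocation */

        f.page.nid = nid;
        return &f;
}

/* ...and remaining page-based interfaces are bridged with &folio->page. */
static void toy_free_huge_page(struct page *page)
{
        printf("free huge page on node %d\n", page->nid);
}

int main(void)
{
        struct folio *folio = toy_alloc_fresh_folio(0);

        if (folio == NULL)
                return 1;
        toy_free_huge_page(&folio->page);
        return 0;
}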