From patchwork Tue Apr 2 20:06:54 2024
X-Patchwork-Submitter: Matthew Wilcox
X-Patchwork-Id: 13614552
From: "Matthew Wilcox (Oracle)"
To: Andrew Morton
Cc: "Matthew Wilcox (Oracle)", linux-mm@kvack.org, Muchun Song
Subject: [PATCH] hugetlb: Convert alloc_buddy_hugetlb_folio to use a folio
Date: Tue, 2 Apr 2024 21:06:54 +0100
Message-ID: <20240402200656.913841-1-willy@infradead.org>
While this function returned a folio, it was still using __alloc_pages()
and __free_pages().  Use __folio_alloc() and folio_put() instead.  This
actually removes a call to compound_head(), but more importantly, it
prepares us for the move to memdescs.
Signed-off-by: Matthew Wilcox (Oracle)
Reviewed-by: Sidhartha Kumar
Reviewed-by: Oscar Salvador
Reviewed-by: Muchun Song
---
 mm/hugetlb.c | 33 ++++++++++++++++-----------------
 1 file changed, 16 insertions(+), 17 deletions(-)

diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index a3bffa8debde..5f1e0b1a0d57 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -2177,13 +2177,13 @@ static struct folio *alloc_buddy_hugetlb_folio(struct hstate *h,
 		nodemask_t *node_alloc_noretry)
 {
 	int order = huge_page_order(h);
-	struct page *page;
+	struct folio *folio;
 	bool alloc_try_hard = true;
 	bool retry = true;
 
 	/*
-	 * By default we always try hard to allocate the page with
-	 * __GFP_RETRY_MAYFAIL flag.  However, if we are allocating pages in
+	 * By default we always try hard to allocate the folio with
+	 * __GFP_RETRY_MAYFAIL flag.  However, if we are allocating folios in
 	 * a loop (to adjust global huge page counts) and previous allocation
 	 * failed, do not continue to try hard on the same node.  Use the
 	 * node_alloc_noretry bitmap to manage this state information.
@@ -2196,43 +2196,42 @@ static struct folio *alloc_buddy_hugetlb_folio(struct hstate *h,
 	if (nid == NUMA_NO_NODE)
 		nid = numa_mem_id();
 retry:
-	page = __alloc_pages(gfp_mask, order, nid, nmask);
+	folio = __folio_alloc(gfp_mask, order, nid, nmask);
 
-	/* Freeze head page */
-	if (page && !page_ref_freeze(page, 1)) {
-		__free_pages(page, order);
+	if (folio && !folio_ref_freeze(folio, 1)) {
+		folio_put(folio);
 		if (retry) {	/* retry once */
 			retry = false;
 			goto retry;
 		}
 		/* WOW!  twice in a row.  */
-		pr_warn("HugeTLB head page unexpected inflated ref count\n");
-		page = NULL;
+		pr_warn("HugeTLB unexpected inflated folio ref count\n");
+		folio = NULL;
 	}
 
 	/*
-	 * If we did not specify __GFP_RETRY_MAYFAIL, but still got a page this
-	 * indicates an overall state change.  Clear bit so that we resume
-	 * normal 'try hard' allocations.
+	 * If we did not specify __GFP_RETRY_MAYFAIL, but still got a
+	 * folio this indicates an overall state change.  Clear bit so
+	 * that we resume normal 'try hard' allocations.
	 */
-	if (node_alloc_noretry && page && !alloc_try_hard)
+	if (node_alloc_noretry && folio && !alloc_try_hard)
 		node_clear(nid, *node_alloc_noretry);
 
 	/*
-	 * If we tried hard to get a page but failed, set bit so that
+	 * If we tried hard to get a folio but failed, set bit so that
 	 * subsequent attempts will not try as hard until there is an
 	 * overall state change.
 	 */
-	if (node_alloc_noretry && !page && alloc_try_hard)
+	if (node_alloc_noretry && !folio && alloc_try_hard)
 		node_set(nid, *node_alloc_noretry);
 
-	if (!page) {
+	if (!folio) {
 		__count_vm_event(HTLB_BUDDY_PGALLOC_FAIL);
 		return NULL;
 	}
 	__count_vm_event(HTLB_BUDDY_PGALLOC);
-	return page_folio(page);
+	return folio;
 }
 
 static struct folio *__alloc_fresh_hugetlb_folio(struct hstate *h,