From patchwork Tue Mar 26 20:28:25 2024
X-Patchwork-Submitter: Matthew Wilcox
X-Patchwork-Id: 13605068
From: "Matthew Wilcox (Oracle)"
To: Andrew Morton
Cc: "Matthew Wilcox (Oracle)", linux-mm@kvack.org
Subject: [PATCH 5/8] mm: Convert huge_zero_page to huge_zero_folio
Date: Tue, 26 Mar 2024 20:28:25 +0000
Message-ID: <20240326202833.523759-6-willy@infradead.org>
X-Mailer: git-send-email 2.44.0
In-Reply-To: <20240326202833.523759-1-willy@infradead.org>
References: <20240326202833.523759-1-willy@infradead.org>
MIME-Version: 1.0

With all callers of is_huge_zero_page() converted, we can now switch the
huge_zero_page itself from being a compound page to a folio.
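As a quick caller-side illustration (not part of this patch; the two helpers
below are hypothetical, only is_huge_zero_folio(), mm_get_huge_zero_folio()
and the new mm_get_huge_zero_page() wrapper come from this series):

	/* Hypothetical callers, illustration only. */
	static bool my_check_zero(struct folio *folio)
	{
		/* Previously: is_huge_zero_page(&folio->page) */
		return is_huge_zero_folio(folio);
	}

	/* Code that still wants a struct page keeps working via the new
	 * wrapper, which returns &mm_get_huge_zero_folio(mm)->page. */
	static struct page *my_get_zero_page(struct mm_struct *mm)
	{
		return mm_get_huge_zero_page(mm);
	}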
Signed-off-by: Matthew Wilcox (Oracle)
---
 include/linux/huge_mm.h | 21 ++++++++-------------
 mm/huge_memory.c        | 28 ++++++++++++++--------------
 2 files changed, 22 insertions(+), 27 deletions(-)

diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h
index 600c6008262b..7ba59ba36354 100644
--- a/include/linux/huge_mm.h
+++ b/include/linux/huge_mm.h
@@ -348,17 +348,12 @@ struct page *follow_devmap_pud(struct vm_area_struct *vma, unsigned long addr,
 
 vm_fault_t do_huge_pmd_numa_page(struct vm_fault *vmf);
 
-extern struct page *huge_zero_page;
+extern struct folio *huge_zero_folio;
 extern unsigned long huge_zero_pfn;
 
-static inline bool is_huge_zero_page(const struct page *page)
-{
-	return READ_ONCE(huge_zero_page) == page;
-}
-
 static inline bool is_huge_zero_folio(const struct folio *folio)
 {
-	return READ_ONCE(huge_zero_page) == &folio->page;
+	return READ_ONCE(huge_zero_folio) == folio;
 }
 
 static inline bool is_huge_zero_pmd(pmd_t pmd)
@@ -371,9 +366,14 @@ static inline bool is_huge_zero_pud(pud_t pud)
 	return false;
 }
 
-struct page *mm_get_huge_zero_page(struct mm_struct *mm);
+struct folio *mm_get_huge_zero_folio(struct mm_struct *mm);
 void mm_put_huge_zero_page(struct mm_struct *mm);
 
+static inline struct page *mm_get_huge_zero_page(struct mm_struct *mm)
+{
+	return &mm_get_huge_zero_folio(mm)->page;
+}
+
 #define mk_huge_pmd(page, prot) pmd_mkhuge(mk_pmd(page, prot))
 
 static inline bool thp_migration_supported(void)
@@ -485,11 +485,6 @@ static inline vm_fault_t do_huge_pmd_numa_page(struct vm_fault *vmf)
 	return 0;
 }
 
-static inline bool is_huge_zero_page(const struct page *page)
-{
-	return false;
-}
-
 static inline bool is_huge_zero_folio(const struct folio *folio)
 {
 	return false;
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 8ee09bfdfdb7..91eb5de3c728 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -74,7 +74,7 @@ static unsigned long deferred_split_scan(struct shrinker *shrink,
 					 struct shrink_control *sc);
 
 static atomic_t huge_zero_refcount;
-struct page *huge_zero_page __read_mostly;
+struct folio *huge_zero_folio __read_mostly;
 unsigned long huge_zero_pfn __read_mostly = ~0UL;
 unsigned long huge_anon_orders_always __read_mostly;
 unsigned long huge_anon_orders_madvise __read_mostly;
@@ -192,24 +192,24 @@ unsigned long __thp_vma_allowable_orders(struct vm_area_struct *vma,
 
 static bool get_huge_zero_page(void)
 {
-	struct page *zero_page;
+	struct folio *zero_folio;
 retry:
 	if (likely(atomic_inc_not_zero(&huge_zero_refcount)))
 		return true;
 
-	zero_page = alloc_pages((GFP_TRANSHUGE | __GFP_ZERO) & ~__GFP_MOVABLE,
+	zero_folio = folio_alloc((GFP_TRANSHUGE | __GFP_ZERO) & ~__GFP_MOVABLE,
 			HPAGE_PMD_ORDER);
-	if (!zero_page) {
+	if (!zero_folio) {
 		count_vm_event(THP_ZERO_PAGE_ALLOC_FAILED);
 		return false;
 	}
 	preempt_disable();
-	if (cmpxchg(&huge_zero_page, NULL, zero_page)) {
+	if (cmpxchg(&huge_zero_folio, NULL, zero_folio)) {
 		preempt_enable();
-		__free_pages(zero_page, compound_order(zero_page));
+		folio_put(zero_folio);
 		goto retry;
 	}
-	WRITE_ONCE(huge_zero_pfn, page_to_pfn(zero_page));
+	WRITE_ONCE(huge_zero_pfn, folio_pfn(zero_folio));
 
 	/* We take additional reference here. It will be put back by shrinker */
 	atomic_set(&huge_zero_refcount, 2);
@@ -227,10 +227,10 @@ static void put_huge_zero_page(void)
 	BUG_ON(atomic_dec_and_test(&huge_zero_refcount));
 }
 
-struct page *mm_get_huge_zero_page(struct mm_struct *mm)
+struct folio *mm_get_huge_zero_folio(struct mm_struct *mm)
 {
 	if (test_bit(MMF_HUGE_ZERO_PAGE, &mm->flags))
-		return READ_ONCE(huge_zero_page);
+		return READ_ONCE(huge_zero_folio);
 
 	if (!get_huge_zero_page())
 		return NULL;
@@ -238,7 +238,7 @@ struct page *mm_get_huge_zero_page(struct mm_struct *mm)
 	if (test_and_set_bit(MMF_HUGE_ZERO_PAGE, &mm->flags))
 		put_huge_zero_page();
 
-	return READ_ONCE(huge_zero_page);
+	return READ_ONCE(huge_zero_folio);
 }
 
 void mm_put_huge_zero_page(struct mm_struct *mm)
@@ -258,10 +258,10 @@ static unsigned long shrink_huge_zero_page_scan(struct shrinker *shrink,
 				       struct shrink_control *sc)
 {
 	if (atomic_cmpxchg(&huge_zero_refcount, 1, 0) == 1) {
-		struct page *zero_page = xchg(&huge_zero_page, NULL);
-		BUG_ON(zero_page == NULL);
+		struct folio *zero_folio = xchg(&huge_zero_folio, NULL);
+		BUG_ON(zero_folio == NULL);
 		WRITE_ONCE(huge_zero_pfn, ~0UL);
-		__free_pages(zero_page, compound_order(zero_page));
+		folio_put(zero_folio);
 		return HPAGE_PMD_NR;
 	}
 
@@ -1340,7 +1340,7 @@ int copy_huge_pmd(struct mm_struct *dst_mm, struct mm_struct *src_mm,
 		 * since we already have a zero page to copy. It just takes a
 		 * reference.
 		 */
-		mm_get_huge_zero_page(dst_mm);
+		mm_get_huge_zero_folio(dst_mm);
 		goto out_zero_page;
 	}
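
For readers new to the huge zero page lifecycle, here is a simplified,
userspace-style model of the allocation scheme the hunks above preserve
(illustrative only; get_zero(), zero_folio and zero_refcount are stand-in
names using C11 atomics, not kernel code): a lock-free fast path bumps the
refcount; otherwise a candidate is allocated and installed with a
compare-and-swap, losers free their copy and retry (the patch's folio_put()
on that path), and the winner sets the count to 2 so the shrinker can drop
the extra reference later.

	#include <stdatomic.h>
	#include <stdbool.h>
	#include <stdlib.h>

	static _Atomic(void *) zero_folio;	/* stands in for huge_zero_folio */
	static atomic_int zero_refcount;	/* stands in for huge_zero_refcount */

	static bool get_zero(void)
	{
		for (;;) {
			/* Fast path: atomic_inc_not_zero() equivalent. */
			int old = atomic_load(&zero_refcount);
			while (old != 0) {
				if (atomic_compare_exchange_weak(&zero_refcount,
								 &old, old + 1))
					return true;
			}

			/* Slow path: allocate a candidate "zero folio". */
			void *folio = calloc(1, 2u << 20);
			if (!folio)
				return false;

			/* Only one thread may install it (the cmpxchg above);
			 * losers drop their copy (the folio_put() path) and retry. */
			void *expected = NULL;
			if (atomic_compare_exchange_strong(&zero_folio, &expected,
							   folio)) {
				/* One reference for the caller plus one held back
				 * for the shrinker to release later. */
				atomic_store(&zero_refcount, 2);
				return true;
			}
			free(folio);
		}
	}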