From patchwork Wed Apr 3 17:18:34 2024
X-Patchwork-Submitter: Matthew Wilcox <willy@infradead.org>
X-Patchwork-Id: 13616535
From: "Matthew Wilcox (Oracle)" <willy@infradead.org>
To: Andrew Morton
Cc: "Matthew Wilcox (Oracle)" <willy@infradead.org>,
	linux-mm@kvack.org
Subject: [PATCH 5/7] khugepaged: Remove hpage from collapse_file()
Date: Wed, 3 Apr 2024 18:18:34 +0100
Message-ID: <20240403171838.1445826-6-willy@infradead.org>
X-Mailer: git-send-email 2.44.0
In-Reply-To: <20240403171838.1445826-1-willy@infradead.org>
References: <20240403171838.1445826-1-willy@infradead.org>

Use new_folio throughout where we had been using hpage.

Signed-off-by: Matthew Wilcox (Oracle)
Reviewed-by: Vishal Moola (Oracle)
---
 include/trace/events/huge_memory.h |  6 +--
 mm/khugepaged.c                    | 77 +++++++++++++++---------------
 2 files changed, 42 insertions(+), 41 deletions(-)

diff --git a/include/trace/events/huge_memory.h b/include/trace/events/huge_memory.h
index 6e2ef1d4b002..dc6eeef2d3da 100644
--- a/include/trace/events/huge_memory.h
+++ b/include/trace/events/huge_memory.h
@@ -207,10 +207,10 @@ TRACE_EVENT(mm_khugepaged_scan_file,
 );
 
 TRACE_EVENT(mm_khugepaged_collapse_file,
-	TP_PROTO(struct mm_struct *mm, struct page *hpage, pgoff_t index,
+	TP_PROTO(struct mm_struct *mm, struct folio *new_folio, pgoff_t index,
 		 bool is_shmem, unsigned long addr, struct file *file,
 		 int nr, int result),
-	TP_ARGS(mm, hpage, index, addr, is_shmem, file, nr, result),
+	TP_ARGS(mm, new_folio, index, addr, is_shmem, file, nr, result),
 
 	TP_STRUCT__entry(
 		__field(struct mm_struct *, mm)
 		__field(unsigned long, hpfn)
@@ -224,7 +224,7 @@ TRACE_EVENT(mm_khugepaged_collapse_file,
 
 	TP_fast_assign(
 		__entry->mm = mm;
-		__entry->hpfn = hpage ? page_to_pfn(hpage) : -1;
+		__entry->hpfn = new_folio ? folio_pfn(new_folio) : -1;
 		__entry->index = index;
 		__entry->addr = addr;
 		__entry->is_shmem = is_shmem;

diff --git a/mm/khugepaged.c b/mm/khugepaged.c
index 71a4119ce3a8..d44584b5e004 100644
--- a/mm/khugepaged.c
+++ b/mm/khugepaged.c
@@ -1780,30 +1780,27 @@ static int collapse_file(struct mm_struct *mm, unsigned long addr,
 			 struct collapse_control *cc)
 {
 	struct address_space *mapping = file->f_mapping;
-	struct page *hpage;
 	struct page *page;
-	struct page *tmp;
+	struct page *tmp, *dst;
 	struct folio *folio, *new_folio;
 	pgoff_t index = 0, end = start + HPAGE_PMD_NR;
 	LIST_HEAD(pagelist);
 	XA_STATE_ORDER(xas, &mapping->i_pages, start, HPAGE_PMD_ORDER);
 	int nr_none = 0, result = SCAN_SUCCEED;
 	bool is_shmem = shmem_file(file);
-	int nr = 0;
 
 	VM_BUG_ON(!IS_ENABLED(CONFIG_READ_ONLY_THP_FOR_FS) && !is_shmem);
 	VM_BUG_ON(start & (HPAGE_PMD_NR - 1));
 
 	result = alloc_charge_folio(&new_folio, mm, cc);
-	hpage = &new_folio->page;
 	if (result != SCAN_SUCCEED)
 		goto out;
 
-	__SetPageLocked(hpage);
+	__folio_set_locked(new_folio);
 	if (is_shmem)
-		__SetPageSwapBacked(hpage);
-	hpage->index = start;
-	hpage->mapping = mapping;
+		__folio_set_swapbacked(new_folio);
+	new_folio->index = start;
+	new_folio->mapping = mapping;
 
 	/*
 	 * Ensure we have slots for all the pages in the range.  This is
@@ -2036,20 +2033,24 @@ static int collapse_file(struct mm_struct *mm, unsigned long addr,
 	 * The old pages are locked, so they won't change anymore.
 	 */
 	index = start;
+	dst = folio_page(new_folio, 0);
 	list_for_each_entry(page, &pagelist, lru) {
 		while (index < page->index) {
-			clear_highpage(hpage + (index % HPAGE_PMD_NR));
+			clear_highpage(dst);
 			index++;
+			dst++;
 		}
-		if (copy_mc_highpage(hpage + (page->index % HPAGE_PMD_NR), page) > 0) {
+		if (copy_mc_highpage(dst, page) > 0) {
 			result = SCAN_COPY_MC;
 			goto rollback;
 		}
 		index++;
+		dst++;
 	}
 	while (index < end) {
-		clear_highpage(hpage + (index % HPAGE_PMD_NR));
+		clear_highpage(dst);
 		index++;
+		dst++;
 	}
 
 	if (nr_none) {
@@ -2077,16 +2078,17 @@ static int collapse_file(struct mm_struct *mm, unsigned long addr,
 	}
 
 	/*
-	 * If userspace observed a missing page in a VMA with a MODE_MISSING
-	 * userfaultfd, then it might expect a UFFD_EVENT_PAGEFAULT for that
-	 * page. If so, we need to roll back to avoid suppressing such an
-	 * event. Since wp/minor userfaultfds don't give userspace any
-	 * guarantees that the kernel doesn't fill a missing page with a zero
-	 * page, so they don't matter here.
+	 * If userspace observed a missing page in a VMA with
+	 * a MODE_MISSING userfaultfd, then it might expect a
+	 * UFFD_EVENT_PAGEFAULT for that page. If so, we need to
+	 * roll back to avoid suppressing such an event. Since
+	 * wp/minor userfaultfds don't give userspace any
+	 * guarantees that the kernel doesn't fill a missing
+	 * page with a zero page, so they don't matter here.
 	 *
-	 * Any userfaultfds registered after this point will not be able to
-	 * observe any missing pages due to the previously inserted retry
-	 * entries.
+	 * Any userfaultfds registered after this point will
+	 * not be able to observe any missing pages due to the
+	 * previously inserted retry entries.
 	 */
 	vma_interval_tree_foreach(vma, &mapping->i_mmap, start, end) {
 		if (userfaultfd_missing(vma)) {
@@ -2111,33 +2113,32 @@ static int collapse_file(struct mm_struct *mm, unsigned long addr,
 		xas_lock_irq(&xas);
 	}
 
-	folio = page_folio(hpage);
-	nr = folio_nr_pages(folio);
 	if (is_shmem)
-		__lruvec_stat_mod_folio(folio, NR_SHMEM_THPS, nr);
+		__lruvec_stat_mod_folio(new_folio, NR_SHMEM_THPS, HPAGE_PMD_NR);
 	else
-		__lruvec_stat_mod_folio(folio, NR_FILE_THPS, nr);
+		__lruvec_stat_mod_folio(new_folio, NR_FILE_THPS, HPAGE_PMD_NR);
 
 	if (nr_none) {
-		__lruvec_stat_mod_folio(folio, NR_FILE_PAGES, nr_none);
+		__lruvec_stat_mod_folio(new_folio, NR_FILE_PAGES, nr_none);
 		/* nr_none is always 0 for non-shmem. */
-		__lruvec_stat_mod_folio(folio, NR_SHMEM, nr_none);
+		__lruvec_stat_mod_folio(new_folio, NR_SHMEM, nr_none);
 	}
 
 	/*
-	 * Mark hpage as uptodate before inserting it into the page cache so
-	 * that it isn't mistaken for an fallocated but unwritten page.
+	 * Mark new_folio as uptodate before inserting it into the
+	 * page cache so that it isn't mistaken for an fallocated but
+	 * unwritten page.
 	 */
-	folio_mark_uptodate(folio);
-	folio_ref_add(folio, HPAGE_PMD_NR - 1);
+	folio_mark_uptodate(new_folio);
+	folio_ref_add(new_folio, HPAGE_PMD_NR - 1);
 	if (is_shmem)
-		folio_mark_dirty(folio);
-	folio_add_lru(folio);
+		folio_mark_dirty(new_folio);
+	folio_add_lru(new_folio);
 
 	/* Join all the small entries into a single multi-index entry. */
 	xas_set_order(&xas, start, HPAGE_PMD_ORDER);
-	xas_store(&xas, folio);
+	xas_store(&xas, new_folio);
 	WARN_ON_ONCE(xas_error(&xas));
 	xas_unlock_irq(&xas);
@@ -2148,7 +2149,7 @@ static int collapse_file(struct mm_struct *mm, unsigned long addr,
 	retract_page_tables(mapping, start);
 	if (cc && !cc->is_khugepaged)
 		result = SCAN_PTE_MAPPED_HUGEPAGE;
-	folio_unlock(folio);
+	folio_unlock(new_folio);
 
 	/*
 	 * The collapse has succeeded, so free the old pages.
@@ -2193,13 +2194,13 @@ static int collapse_file(struct mm_struct *mm, unsigned long addr,
 		smp_mb();
 	}
 
-	hpage->mapping = NULL;
+	new_folio->mapping = NULL;
 
-	unlock_page(hpage);
-	put_page(hpage);
+	folio_unlock(new_folio);
+	folio_put(new_folio);
 out:
 	VM_BUG_ON(!list_empty(&pagelist));
-	trace_mm_khugepaged_collapse_file(mm, hpage, index, is_shmem, addr, file, nr, result);
+	trace_mm_khugepaged_collapse_file(mm, new_folio, index, is_shmem, addr, file, HPAGE_PMD_NR, result);
 	return result;
 }
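
A note for readers following the series: the core structural change here is
replacing the "hpage + (index % HPAGE_PMD_NR)" addressing in the copy loop
with a dst cursor that starts at folio_page(new_folio, 0) and advances in
lockstep with index. The user-space sketch below shows only that loop shape;
everything in it (NPAGES, struct small_page, clear_dst(), copy_dst()) is an
invented stand-in for the kernel's HPAGE_PMD_NR, struct page,
clear_highpage() and copy_mc_highpage(), and the SCAN_COPY_MC rollback path
is omitted.

/* Stand-alone sketch of the dst-cursor copy loop; not kernel code. */
#include <stdio.h>

#define NPAGES 8			/* stands in for HPAGE_PMD_NR */

struct small_page {
	long index;			/* offset of this page in the file */
	char data;			/* one byte stands in for a page of data */
};

static void clear_dst(char *dst)	/* clear_highpage() stand-in */
{
	*dst = 0;
}

static void copy_dst(char *dst, const struct small_page *src)
{					/* copy_mc_highpage() stand-in */
	*dst = src->data;
}

int main(void)
{
	/* Pages present in the cache; indices 1 and 4 are holes. */
	struct small_page pages[] = {
		{ 0, 'a' }, { 2, 'c' }, { 3, 'd' }, { 5, 'f' },
		{ 6, 'g' }, { 7, 'h' },
	};
	char huge[NPAGES];		/* the new folio's memory */
	long start = 0, end = start + NPAGES, index = start;
	char *dst = huge;		/* folio_page(new_folio, 0) */
	size_t i;

	for (i = 0; i < sizeof(pages) / sizeof(pages[0]); i++) {
		/* Zero-fill any hole before the next present page. */
		while (index < pages[i].index) {
			clear_dst(dst);
			index++;
			dst++;
		}
		copy_dst(dst, &pages[i]);
		index++;
		dst++;
	}
	/* Zero-fill the tail of the range. */
	while (index < end) {
		clear_dst(dst);
		index++;
		dst++;
	}

	for (i = 0; i < NPAGES; i++)	/* prints "a.cd.fgh" */
		putchar(huge[i] ? huge[i] : '.');
	putchar('\n');
	return 0;
}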
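Besides dropping the hpage alias, the cursor removes the per-iteration
modulo arithmetic, and retiring the nr local along with it lets the stat
updates and the final trace call use the constant HPAGE_PMD_NR directly,
since collapse_file() only ever works on a PMD-sized folio.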