From patchwork Wed Apr 3 17:18:30 2024
X-Patchwork-Submitter: Matthew Wilcox
X-Patchwork-Id: 13616532
From: "Matthew Wilcox (Oracle)"
To: Andrew Morton
Cc: "Matthew Wilcox (Oracle)", linux-mm@kvack.org
Subject: [PATCH 1/7] khugepaged: Inline hpage_collapse_alloc_folio()
Date: Wed, 3 Apr 2024 18:18:30 +0100
Message-ID: <20240403171838.1445826-2-willy@infradead.org>
In-Reply-To: <20240403171838.1445826-1-willy@infradead.org>
References: <20240403171838.1445826-1-willy@infradead.org>

This function has one caller, and the combined function is simpler to
read, reason about and modify.

Signed-off-by: Matthew Wilcox (Oracle)
Reviewed-by: Vishal Moola (Oracle)
---
 mm/khugepaged.c | 19 ++++---------------
 1 file changed, 4 insertions(+), 15 deletions(-)

diff --git a/mm/khugepaged.c b/mm/khugepaged.c
index 38830174608f..ad16dd8b26a8 100644
--- a/mm/khugepaged.c
+++ b/mm/khugepaged.c
@@ -891,20 +891,6 @@ static int hpage_collapse_find_target_node(struct collapse_control *cc)
 }
 #endif
 
-static bool hpage_collapse_alloc_folio(struct folio **folio, gfp_t gfp, int node,
-				      nodemask_t *nmask)
-{
-	*folio = __folio_alloc(gfp, HPAGE_PMD_ORDER, node, nmask);
-
-	if (unlikely(!*folio)) {
-		count_vm_event(THP_COLLAPSE_ALLOC_FAILED);
-		return false;
-	}
-
-	count_vm_event(THP_COLLAPSE_ALLOC);
-	return true;
-}
-
 /*
  * If mmap_lock temporarily dropped, revalidate vma
  * before taking mmap_lock.
@@ -1067,11 +1053,14 @@ static int alloc_charge_hpage(struct page **hpage, struct mm_struct *mm,
 	int node = hpage_collapse_find_target_node(cc);
 	struct folio *folio;
 
-	if (!hpage_collapse_alloc_folio(&folio, gfp, node, &cc->alloc_nmask)) {
+	folio = __folio_alloc(gfp, HPAGE_PMD_ORDER, node, &cc->alloc_nmask);
+	if (!folio) {
 		*hpage = NULL;
+		count_vm_event(THP_COLLAPSE_ALLOC_FAILED);
 		return SCAN_ALLOC_HUGE_PAGE_FAIL;
 	}
 
+	count_vm_event(THP_COLLAPSE_ALLOC);
 	if (unlikely(mem_cgroup_charge(folio, mm, gfp))) {
 		folio_put(folio);
 		*hpage = NULL;
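For reference, the allocation path in alloc_charge_hpage() after this change reads roughly as below. This is a sketch assembled from the hunks above, not a verbatim excerpt of the tree; the GFP_TRANSHUGE fallback mask and the tail of the function are assumed from surrounding context rather than shown in this patch.

	static int alloc_charge_hpage(struct page **hpage, struct mm_struct *mm,
				      struct collapse_control *cc)
	{
		gfp_t gfp = (cc->is_khugepaged ? alloc_hugepage_khugepaged_gfpmask() :
			     GFP_TRANSHUGE);	/* fallback mask assumed, not shown in the hunk */
		int node = hpage_collapse_find_target_node(cc);
		struct folio *folio;

		/* Former hpage_collapse_alloc_folio() body, now inlined. */
		folio = __folio_alloc(gfp, HPAGE_PMD_ORDER, node, &cc->alloc_nmask);
		if (!folio) {
			*hpage = NULL;
			count_vm_event(THP_COLLAPSE_ALLOC_FAILED);
			return SCAN_ALLOC_HUGE_PAGE_FAIL;
		}
		count_vm_event(THP_COLLAPSE_ALLOC);

		if (unlikely(mem_cgroup_charge(folio, mm, gfp))) {
			folio_put(folio);
			*hpage = NULL;
			return SCAN_CGROUP_CHARGE_FAIL;
		}

		count_memcg_folio_events(folio, THP_COLLAPSE_ALLOC, 1);
		*hpage = folio_page(folio, 0);
		return SCAN_SUCCEED;
	}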
From patchwork Wed Apr 3 17:18:31 2024
X-Patchwork-Submitter: Matthew Wilcox
X-Patchwork-Id: 13616537
From: "Matthew Wilcox (Oracle)"
To: Andrew Morton
Cc: "Matthew Wilcox (Oracle)", linux-mm@kvack.org
Subject: [PATCH 2/7] khugepaged: Convert alloc_charge_hpage to alloc_charge_folio
Date: Wed, 3 Apr 2024 18:18:31 +0100
Message-ID: <20240403171838.1445826-3-willy@infradead.org>
In-Reply-To: <20240403171838.1445826-1-willy@infradead.org>
References: <20240403171838.1445826-1-willy@infradead.org>

Both callers want to deal with a folio, so return a folio from this
function.

Signed-off-by: Matthew Wilcox (Oracle)
---
 mm/khugepaged.c | 17 +++++++++--------
 1 file changed, 9 insertions(+), 8 deletions(-)

diff --git a/mm/khugepaged.c b/mm/khugepaged.c
index ad16dd8b26a8..2f1dacd65d12 100644
--- a/mm/khugepaged.c
+++ b/mm/khugepaged.c
@@ -1045,7 +1045,7 @@ static int __collapse_huge_page_swapin(struct mm_struct *mm,
 	return result;
 }
 
-static int alloc_charge_hpage(struct page **hpage, struct mm_struct *mm,
+static int alloc_charge_folio(struct folio **foliop, struct mm_struct *mm,
 			      struct collapse_control *cc)
 {
 	gfp_t gfp = (cc->is_khugepaged ? alloc_hugepage_khugepaged_gfpmask() :
@@ -1055,7 +1055,7 @@ static int alloc_charge_hpage(struct page **hpage, struct mm_struct *mm,
 	folio = __folio_alloc(gfp, HPAGE_PMD_ORDER, node, &cc->alloc_nmask);
 	if (!folio) {
-		*hpage = NULL;
+		*foliop = NULL;
 		count_vm_event(THP_COLLAPSE_ALLOC_FAILED);
 		return SCAN_ALLOC_HUGE_PAGE_FAIL;
 	}
@@ -1063,13 +1063,13 @@ static int alloc_charge_hpage(struct page **hpage, struct mm_struct *mm,
 	count_vm_event(THP_COLLAPSE_ALLOC);
 	if (unlikely(mem_cgroup_charge(folio, mm, gfp))) {
 		folio_put(folio);
-		*hpage = NULL;
+		*foliop = NULL;
 		return SCAN_CGROUP_CHARGE_FAIL;
 	}
 
 	count_memcg_folio_events(folio, THP_COLLAPSE_ALLOC, 1);
 
-	*hpage = folio_page(folio, 0);
+	*foliop = folio;
 
 	return SCAN_SUCCEED;
 }
@@ -1098,7 +1098,8 @@ static int collapse_huge_page(struct mm_struct *mm, unsigned long address,
 	 */
 	mmap_read_unlock(mm);
 
-	result = alloc_charge_hpage(&hpage, mm, cc);
+	result = alloc_charge_folio(&folio, mm, cc);
+	hpage = &folio->page;
 	if (result != SCAN_SUCCEED)
 		goto out_nolock;
 
@@ -1204,7 +1205,6 @@ static int collapse_huge_page(struct mm_struct *mm, unsigned long address,
 	if (unlikely(result != SCAN_SUCCEED))
 		goto out_up_write;
 
-	folio = page_folio(hpage);
 	/*
 	 * The smp_wmb() inside __folio_mark_uptodate() ensures the
 	 * copy_huge_page writes become visible before the set_pmd_at()
@@ -1789,7 +1789,7 @@ static int collapse_file(struct mm_struct *mm, unsigned long addr,
 	struct page *hpage;
 	struct page *page;
 	struct page *tmp;
-	struct folio *folio;
+	struct folio *folio, *new_folio;
 	pgoff_t index = 0, end = start + HPAGE_PMD_NR;
 	LIST_HEAD(pagelist);
 	XA_STATE_ORDER(xas, &mapping->i_pages, start, HPAGE_PMD_ORDER);
@@ -1800,7 +1800,8 @@ static int collapse_file(struct mm_struct *mm, unsigned long addr,
 	VM_BUG_ON(!IS_ENABLED(CONFIG_READ_ONLY_THP_FOR_FS) && !is_shmem);
 	VM_BUG_ON(start & (HPAGE_PMD_NR - 1));
 
-	result = alloc_charge_hpage(&hpage, mm, cc);
+	result = alloc_charge_folio(&new_folio, mm, cc);
+	hpage = &new_folio->page;
 	if (result != SCAN_SUCCEED)
 		goto out;
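Both call sites keep a temporary struct page alias until the later patches in the series drop it; the transitional caller-side pattern, condensed from the hunks above (a sketch, not a verbatim excerpt), is:

	/* Caller side (collapse_huge_page() and collapse_file()), condensed: */
	struct folio *folio;
	struct page *hpage;
	int result;

	result = alloc_charge_folio(&folio, mm, cc);
	hpage = &folio->page;		/* temporary page alias; dropped by patches 3 and 5 */
	if (result != SCAN_SUCCEED)
		goto out_nolock;	/* error label as in the real callers */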
From patchwork Wed Apr 3 17:18:32 2024
X-Patchwork-Submitter: Matthew Wilcox
X-Patchwork-Id: 13616533
From: "Matthew Wilcox (Oracle)"
To: Andrew Morton
Cc: "Matthew Wilcox (Oracle)", linux-mm@kvack.org
Subject: [PATCH 3/7] khugepaged: Remove hpage from collapse_huge_page()
Date: Wed, 3 Apr 2024 18:18:32 +0100
Message-ID: <20240403171838.1445826-4-willy@infradead.org>
In-Reply-To: <20240403171838.1445826-1-willy@infradead.org>
References: <20240403171838.1445826-1-willy@infradead.org>
Work purely in terms of the folio.  Removes a call to compound_head()
in put_page().

Signed-off-by: Matthew Wilcox (Oracle)
Reviewed-by: Vishal Moola (Oracle)
---
 mm/khugepaged.c | 12 +++++-------
 1 file changed, 5 insertions(+), 7 deletions(-)

diff --git a/mm/khugepaged.c b/mm/khugepaged.c
index 2f1dacd65d12..1c99d18602e5 100644
--- a/mm/khugepaged.c
+++ b/mm/khugepaged.c
@@ -1082,7 +1082,6 @@ static int collapse_huge_page(struct mm_struct *mm, unsigned long address,
 	pte_t *pte;
 	pgtable_t pgtable;
 	struct folio *folio;
-	struct page *hpage;
 	spinlock_t *pmd_ptl, *pte_ptl;
 	int result = SCAN_FAIL;
 	struct vm_area_struct *vma;
@@ -1099,7 +1098,6 @@ static int collapse_huge_page(struct mm_struct *mm, unsigned long address,
 	mmap_read_unlock(mm);
 
 	result = alloc_charge_folio(&folio, mm, cc);
-	hpage = &folio->page;
 	if (result != SCAN_SUCCEED)
 		goto out_nolock;
 
@@ -1198,7 +1196,7 @@ static int collapse_huge_page(struct mm_struct *mm, unsigned long address,
 	 */
 	anon_vma_unlock_write(vma->anon_vma);
 
-	result = __collapse_huge_page_copy(pte, hpage, pmd, _pmd,
+	result = __collapse_huge_page_copy(pte, &folio->page, pmd, _pmd,
 					   vma, address, pte_ptl,
 					   &compound_pagelist);
 	pte_unmap(pte);
@@ -1213,7 +1211,7 @@ static int collapse_huge_page(struct mm_struct *mm, unsigned long address,
 	__folio_mark_uptodate(folio);
 	pgtable = pmd_pgtable(_pmd);
 
-	_pmd = mk_huge_pmd(hpage, vma->vm_page_prot);
+	_pmd = mk_huge_pmd(&folio->page, vma->vm_page_prot);
 	_pmd = maybe_pmd_mkwrite(pmd_mkdirty(_pmd), vma);
 
 	spin_lock(pmd_ptl);
@@ -1225,14 +1223,14 @@ static int collapse_huge_page(struct mm_struct *mm, unsigned long address,
 	update_mmu_cache_pmd(vma, address, pmd);
 	spin_unlock(pmd_ptl);
 
-	hpage = NULL;
+	folio = NULL;
 
 	result = SCAN_SUCCEED;
 out_up_write:
 	mmap_write_unlock(mm);
 out_nolock:
-	if (hpage)
-		put_page(hpage);
+	if (folio)
+		folio_put(folio);
 	trace_mm_collapse_huge_page(mm, result == SCAN_SUCCEED, result);
 	return result;
 }
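The net effect on ownership handling at the end of collapse_huge_page(), condensed from the hunks above (a sketch, not a verbatim excerpt): the folio pointer itself now doubles as the "do we still own the allocation?" flag.

	/* On success the page tables own the folio, so forget our reference. */
	folio = NULL;
	result = SCAN_SUCCEED;
out_up_write:
	mmap_write_unlock(mm);
out_nolock:
	if (folio)			/* non-NULL only on failure paths */
		folio_put(folio);	/* no compound_head() lookup, unlike put_page() */
	trace_mm_collapse_huge_page(mm, result == SCAN_SUCCEED, result);
	return result;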
From patchwork Wed Apr 3 17:18:33 2024
X-Patchwork-Submitter: Matthew Wilcox
X-Patchwork-Id: 13616534
From: "Matthew Wilcox (Oracle)"
To: Andrew Morton
Cc: "Matthew Wilcox (Oracle)", linux-mm@kvack.org
Subject: [PATCH 4/7] khugepaged: Pass a folio to __collapse_huge_page_copy()
Date: Wed, 3 Apr 2024 18:18:33 +0100
Message-ID: <20240403171838.1445826-5-willy@infradead.org>
In-Reply-To: <20240403171838.1445826-1-willy@infradead.org>
References: <20240403171838.1445826-1-willy@infradead.org>

Simplify the body of __collapse_huge_page_copy() while I'm looking at
it.

Signed-off-by: Matthew Wilcox (Oracle)
Reviewed-by: Vishal Moola (Oracle)
---
 mm/khugepaged.c | 34 +++++++++++++++-------------------
 1 file changed, 15 insertions(+), 19 deletions(-)

diff --git a/mm/khugepaged.c b/mm/khugepaged.c
index 1c99d18602e5..71a4119ce3a8 100644
--- a/mm/khugepaged.c
+++ b/mm/khugepaged.c
@@ -767,7 +767,7 @@ static void __collapse_huge_page_copy_failed(pte_t *pte,
  * Returns SCAN_SUCCEED if copying succeeds, otherwise returns SCAN_COPY_MC.
  *
  * @pte: starting of the PTEs to copy from
- * @page: the new hugepage to copy contents to
+ * @folio: the new hugepage to copy contents to
  * @pmd: pointer to the new hugepage's PMD
  * @orig_pmd: the original raw pages' PMD
  * @vma: the original raw pages' virtual memory area
@@ -775,33 +775,29 @@ static void __collapse_huge_page_copy_failed(pte_t *pte,
  * @ptl: lock on raw pages' PTEs
  * @compound_pagelist: list that stores compound pages
  */
-static int __collapse_huge_page_copy(pte_t *pte,
-				     struct page *page,
-				     pmd_t *pmd,
-				     pmd_t orig_pmd,
-				     struct vm_area_struct *vma,
-				     unsigned long address,
-				     spinlock_t *ptl,
-				     struct list_head *compound_pagelist)
+static int __collapse_huge_page_copy(pte_t *pte, struct folio *folio,
+		pmd_t *pmd, pmd_t orig_pmd, struct vm_area_struct *vma,
+		unsigned long address, spinlock_t *ptl,
+		struct list_head *compound_pagelist)
 {
-	struct page *src_page;
-	pte_t *_pte;
-	pte_t pteval;
-	unsigned long _address;
+	unsigned int i;
 	int result = SCAN_SUCCEED;
 
 	/*
	 * Copying pages' contents is subject to memory poison at any iteration.
	 */
-	for (_pte = pte, _address = address; _pte < pte + HPAGE_PMD_NR;
-	     _pte++, page++, _address += PAGE_SIZE) {
-		pteval = ptep_get(_pte);
+	for (i = 0; i < HPAGE_PMD_NR; i++) {
+		pte_t pteval = ptep_get(pte + i);
+		struct page *page = folio_page(folio, i);
+		unsigned long src_addr = address + i * PAGE_SIZE;
+		struct page *src_page;
+
 		if (pte_none(pteval) || is_zero_pfn(pte_pfn(pteval))) {
-			clear_user_highpage(page, _address);
+			clear_user_highpage(page, src_addr);
 			continue;
 		}
 		src_page = pte_page(pteval);
-		if (copy_mc_user_highpage(page, src_page, _address, vma) > 0) {
+		if (copy_mc_user_highpage(page, src_page, src_addr, vma) > 0) {
 			result = SCAN_COPY_MC;
 			break;
 		}
@@ -1196,7 +1192,7 @@ static int collapse_huge_page(struct mm_struct *mm, unsigned long address,
 	 */
 	anon_vma_unlock_write(vma->anon_vma);
 
-	result = __collapse_huge_page_copy(pte, &folio->page, pmd, _pmd,
+	result = __collapse_huge_page_copy(pte, folio, pmd, _pmd,
 					   vma, address, pte_ptl,
 					   &compound_pagelist);
 	pte_unmap(pte);
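For reference, the shape of the rewritten copy loop, assembled from the hunks above (a sketch, not a verbatim excerpt). pte, folio, address and vma are the function's parameters, and the old _pte/_address cursors are replaced by a single index i.

	unsigned int i;
	int result = SCAN_SUCCEED;

	for (i = 0; i < HPAGE_PMD_NR; i++) {
		pte_t pteval = ptep_get(pte + i);
		struct page *page = folio_page(folio, i);	/* i-th subpage of the new folio */
		unsigned long src_addr = address + i * PAGE_SIZE;
		struct page *src_page;

		if (pte_none(pteval) || is_zero_pfn(pte_pfn(pteval))) {
			/* Hole or zero page in the source: just clear the destination. */
			clear_user_highpage(page, src_addr);
			continue;
		}
		src_page = pte_page(pteval);
		if (copy_mc_user_highpage(page, src_page, src_addr, vma) > 0) {
			/* Machine-check-safe copy tripped over poisoned memory. */
			result = SCAN_COPY_MC;
			break;
		}
	}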
From patchwork Wed Apr 3 17:18:34 2024
X-Patchwork-Submitter: Matthew Wilcox
X-Patchwork-Id: 13616535
From: "Matthew Wilcox (Oracle)"
To: Andrew Morton
Cc: "Matthew Wilcox (Oracle)", linux-mm@kvack.org
Subject: [PATCH 5/7] khugepaged: Remove hpage from collapse_file()
Date: Wed, 3 Apr 2024 18:18:34 +0100
Message-ID: <20240403171838.1445826-6-willy@infradead.org>
In-Reply-To: <20240403171838.1445826-1-willy@infradead.org>
References: <20240403171838.1445826-1-willy@infradead.org>

Use new_folio throughout where we had been using hpage.
Signed-off-by: Matthew Wilcox (Oracle)
Reviewed-by: Vishal Moola (Oracle)
---
 include/trace/events/huge_memory.h |  6 +--
 mm/khugepaged.c                    | 77 +++++++++++++++---------------
 2 files changed, 42 insertions(+), 41 deletions(-)

diff --git a/include/trace/events/huge_memory.h b/include/trace/events/huge_memory.h
index 6e2ef1d4b002..dc6eeef2d3da 100644
--- a/include/trace/events/huge_memory.h
+++ b/include/trace/events/huge_memory.h
@@ -207,10 +207,10 @@ TRACE_EVENT(mm_khugepaged_scan_file,
 );
 
 TRACE_EVENT(mm_khugepaged_collapse_file,
-	TP_PROTO(struct mm_struct *mm, struct page *hpage, pgoff_t index,
+	TP_PROTO(struct mm_struct *mm, struct folio *new_folio, pgoff_t index,
 			bool is_shmem, unsigned long addr, struct file *file,
 			int nr, int result),
-	TP_ARGS(mm, hpage, index, addr, is_shmem, file, nr, result),
+	TP_ARGS(mm, new_folio, index, addr, is_shmem, file, nr, result),
 	TP_STRUCT__entry(
 		__field(struct mm_struct *, mm)
 		__field(unsigned long, hpfn)
@@ -224,7 +224,7 @@ TRACE_EVENT(mm_khugepaged_collapse_file,
 
 	TP_fast_assign(
 		__entry->mm = mm;
-		__entry->hpfn = hpage ? page_to_pfn(hpage) : -1;
+		__entry->hpfn = new_folio ? folio_pfn(new_folio) : -1;
 		__entry->index = index;
 		__entry->addr = addr;
 		__entry->is_shmem = is_shmem;
diff --git a/mm/khugepaged.c b/mm/khugepaged.c
index 71a4119ce3a8..d44584b5e004 100644
--- a/mm/khugepaged.c
+++ b/mm/khugepaged.c
@@ -1780,30 +1780,27 @@ static int collapse_file(struct mm_struct *mm, unsigned long addr,
 			 struct collapse_control *cc)
 {
 	struct address_space *mapping = file->f_mapping;
-	struct page *hpage;
 	struct page *page;
-	struct page *tmp;
+	struct page *tmp, *dst;
 	struct folio *folio, *new_folio;
 	pgoff_t index = 0, end = start + HPAGE_PMD_NR;
 	LIST_HEAD(pagelist);
 	XA_STATE_ORDER(xas, &mapping->i_pages, start, HPAGE_PMD_ORDER);
 	int nr_none = 0, result = SCAN_SUCCEED;
 	bool is_shmem = shmem_file(file);
-	int nr = 0;
 
 	VM_BUG_ON(!IS_ENABLED(CONFIG_READ_ONLY_THP_FOR_FS) && !is_shmem);
 	VM_BUG_ON(start & (HPAGE_PMD_NR - 1));
 
 	result = alloc_charge_folio(&new_folio, mm, cc);
-	hpage = &new_folio->page;
 	if (result != SCAN_SUCCEED)
 		goto out;
 
-	__SetPageLocked(hpage);
+	__folio_set_locked(new_folio);
 	if (is_shmem)
-		__SetPageSwapBacked(hpage);
-	hpage->index = start;
-	hpage->mapping = mapping;
+		__folio_set_swapbacked(new_folio);
+	new_folio->index = start;
+	new_folio->mapping = mapping;
 
 	/*
	 * Ensure we have slots for all the pages in the range.  This is
@@ -2036,20 +2033,24 @@ static int collapse_file(struct mm_struct *mm, unsigned long addr,
	 * The old pages are locked, so they won't change anymore.
	 */
 	index = start;
+	dst = folio_page(new_folio, 0);
 	list_for_each_entry(page, &pagelist, lru) {
 		while (index < page->index) {
-			clear_highpage(hpage + (index % HPAGE_PMD_NR));
+			clear_highpage(dst);
 			index++;
+			dst++;
 		}
-		if (copy_mc_highpage(hpage + (page->index % HPAGE_PMD_NR), page) > 0) {
+		if (copy_mc_highpage(dst, page) > 0) {
 			result = SCAN_COPY_MC;
 			goto rollback;
 		}
 		index++;
+		dst++;
 	}
 	while (index < end) {
-		clear_highpage(hpage + (index % HPAGE_PMD_NR));
+		clear_highpage(dst);
 		index++;
+		dst++;
 	}
 
 	if (nr_none) {
@@ -2077,16 +2078,17 @@ static int collapse_file(struct mm_struct *mm, unsigned long addr,
 	}
 
 	/*
-	 * If userspace observed a missing page in a VMA with a MODE_MISSING
-	 * userfaultfd, then it might expect a UFFD_EVENT_PAGEFAULT for that
-	 * page. If so, we need to roll back to avoid suppressing such an
-	 * event. Since wp/minor userfaultfds don't give userspace any
-	 * guarantees that the kernel doesn't fill a missing page with a zero
-	 * page, so they don't matter here.
+	 * If userspace observed a missing page in a VMA with
+	 * a MODE_MISSING userfaultfd, then it might expect a
+	 * UFFD_EVENT_PAGEFAULT for that page. If so, we need to
+	 * roll back to avoid suppressing such an event. Since
+	 * wp/minor userfaultfds don't give userspace any
+	 * guarantees that the kernel doesn't fill a missing
+	 * page with a zero page, so they don't matter here.
	 *
-	 * Any userfaultfds registered after this point will not be able to
-	 * observe any missing pages due to the previously inserted retry
-	 * entries.
+	 * Any userfaultfds registered after this point will
+	 * not be able to observe any missing pages due to the
+	 * previously inserted retry entries.
	 */
 	vma_interval_tree_foreach(vma, &mapping->i_mmap, start, end) {
 		if (userfaultfd_missing(vma)) {
@@ -2111,33 +2113,32 @@ static int collapse_file(struct mm_struct *mm, unsigned long addr,
 		xas_lock_irq(&xas);
 	}
 
-	folio = page_folio(hpage);
-	nr = folio_nr_pages(folio);
 	if (is_shmem)
-		__lruvec_stat_mod_folio(folio, NR_SHMEM_THPS, nr);
+		__lruvec_stat_mod_folio(new_folio, NR_SHMEM_THPS, HPAGE_PMD_NR);
 	else
-		__lruvec_stat_mod_folio(folio, NR_FILE_THPS, nr);
+		__lruvec_stat_mod_folio(new_folio, NR_FILE_THPS, HPAGE_PMD_NR);
 
 	if (nr_none) {
-		__lruvec_stat_mod_folio(folio, NR_FILE_PAGES, nr_none);
+		__lruvec_stat_mod_folio(new_folio, NR_FILE_PAGES, nr_none);
 		/* nr_none is always 0 for non-shmem. */
-		__lruvec_stat_mod_folio(folio, NR_SHMEM, nr_none);
+		__lruvec_stat_mod_folio(new_folio, NR_SHMEM, nr_none);
 	}
 
 	/*
-	 * Mark hpage as uptodate before inserting it into the page cache so
-	 * that it isn't mistaken for an fallocated but unwritten page.
+	 * Mark new_folio as uptodate before inserting it into the
+	 * page cache so that it isn't mistaken for an fallocated but
+	 * unwritten page.
	 */
-	folio_mark_uptodate(folio);
-	folio_ref_add(folio, HPAGE_PMD_NR - 1);
+	folio_mark_uptodate(new_folio);
+	folio_ref_add(new_folio, HPAGE_PMD_NR - 1);
 	if (is_shmem)
-		folio_mark_dirty(folio);
-	folio_add_lru(folio);
+		folio_mark_dirty(new_folio);
+	folio_add_lru(new_folio);
 
 	/* Join all the small entries into a single multi-index entry. */
 	xas_set_order(&xas, start, HPAGE_PMD_ORDER);
-	xas_store(&xas, folio);
+	xas_store(&xas, new_folio);
 	WARN_ON_ONCE(xas_error(&xas));
 	xas_unlock_irq(&xas);
 
@@ -2148,7 +2149,7 @@ static int collapse_file(struct mm_struct *mm, unsigned long addr,
 	retract_page_tables(mapping, start);
 	if (cc && !cc->is_khugepaged)
 		result = SCAN_PTE_MAPPED_HUGEPAGE;
-	folio_unlock(folio);
+	folio_unlock(new_folio);
 
 	/*
	 * The collapse has succeeded, so free the old pages.
@@ -2193,13 +2194,13 @@ static int collapse_file(struct mm_struct *mm, unsigned long addr,
 		smp_mb();
 	}
 
-	hpage->mapping = NULL;
+	new_folio->mapping = NULL;
 
-	unlock_page(hpage);
-	put_page(hpage);
+	folio_unlock(new_folio);
+	folio_put(new_folio);
 out:
 	VM_BUG_ON(!list_empty(&pagelist));
-	trace_mm_khugepaged_collapse_file(mm, hpage, index, is_shmem, addr, file, nr, result);
+	trace_mm_khugepaged_collapse_file(mm, new_folio, index, is_shmem, addr, file, HPAGE_PMD_NR, result);
 	return result;
 }
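The resulting copy loop in collapse_file(), for reference; a sketch assembled from the hunks above rather than a verbatim excerpt. dst walks the subpages of new_folio directly instead of being recomputed from hpage on every iteration.

	index = start;
	dst = folio_page(new_folio, 0);
	list_for_each_entry(page, &pagelist, lru) {
		/* Zero-fill any hole in the range before this old page. */
		while (index < page->index) {
			clear_highpage(dst);
			index++;
			dst++;
		}
		/* Machine-check-safe copy of one old page into the huge folio. */
		if (copy_mc_highpage(dst, page) > 0) {
			result = SCAN_COPY_MC;
			goto rollback;
		}
		index++;
		dst++;
	}
	/* Zero-fill the tail of the range after the last old page. */
	while (index < end) {
		clear_highpage(dst);
		index++;
		dst++;
	}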
From patchwork Wed Apr 3 17:18:35 2024
X-Patchwork-Submitter: Matthew Wilcox
X-Patchwork-Id: 13616538
From: "Matthew Wilcox (Oracle)"
To: Andrew Morton
Cc: "Matthew Wilcox (Oracle)", linux-mm@kvack.org
Subject: [PATCH 6/7] khugepaged: Use a folio throughout collapse_file()
Date: Wed, 3 Apr 2024 18:18:35 +0100
Message-ID: <20240403171838.1445826-7-willy@infradead.org>
In-Reply-To: <20240403171838.1445826-1-willy@infradead.org>
References: <20240403171838.1445826-1-willy@infradead.org>

Pull folios from the page cache instead of pages.  Half of this work
had been done already, but we were still operating on pages for a
large chunk of this function.  There is no attempt in this patch to
handle large folios that are smaller than a THP; that will have to
wait for a future patch.
Signed-off-by: Matthew Wilcox (Oracle)
---
 mm/khugepaged.c | 113 +++++++++++++++++++++++-------------------
 1 file changed, 54 insertions(+), 59 deletions(-)

diff --git a/mm/khugepaged.c b/mm/khugepaged.c
index d44584b5e004..0b0053fb30c0 100644
--- a/mm/khugepaged.c
+++ b/mm/khugepaged.c
@@ -1780,9 +1780,8 @@ static int collapse_file(struct mm_struct *mm, unsigned long addr,
 			 struct collapse_control *cc)
 {
 	struct address_space *mapping = file->f_mapping;
-	struct page *page;
-	struct page *tmp, *dst;
-	struct folio *folio, *new_folio;
+	struct page *dst;
+	struct folio *folio, *tmp, *new_folio;
 	pgoff_t index = 0, end = start + HPAGE_PMD_NR;
 	LIST_HEAD(pagelist);
 	XA_STATE_ORDER(xas, &mapping->i_pages, start, HPAGE_PMD_ORDER);
@@ -1820,11 +1819,11 @@ static int collapse_file(struct mm_struct *mm, unsigned long addr,
 
 	for (index = start; index < end; index++) {
 		xas_set(&xas, index);
-		page = xas_load(&xas);
+		folio = xas_load(&xas);
 
 		VM_BUG_ON(index != xas.xa_index);
 		if (is_shmem) {
-			if (!page) {
+			if (!folio) {
 				/*
				 * Stop if extent has been truncated or
				 * hole-punched, and is now completely
@@ -1840,7 +1839,7 @@ static int collapse_file(struct mm_struct *mm, unsigned long addr,
 				continue;
 			}
 
-			if (xa_is_value(page) || !PageUptodate(page)) {
+			if (xa_is_value(folio) || !folio_test_uptodate(folio)) {
 				xas_unlock_irq(&xas);
 				/* swap in or instantiate fallocated page */
 				if (shmem_get_folio(mapping->host, index,
@@ -1850,28 +1849,27 @@ static int collapse_file(struct mm_struct *mm, unsigned long addr,
 				}
 				/* drain lru cache to help isolate_lru_page() */
 				lru_add_drain();
-				page = folio_file_page(folio, index);
-			} else if (trylock_page(page)) {
-				get_page(page);
+			} else if (folio_trylock(folio)) {
+				folio_get(folio);
 				xas_unlock_irq(&xas);
 			} else {
 				result = SCAN_PAGE_LOCK;
 				goto xa_locked;
 			}
 		} else {	/* !is_shmem */
-			if (!page || xa_is_value(page)) {
+			if (!folio || xa_is_value(folio)) {
 				xas_unlock_irq(&xas);
 				page_cache_sync_readahead(mapping, &file->f_ra,
							  file, index,
							  end - index);
 				/* drain lru cache to help isolate_lru_page() */
 				lru_add_drain();
-				page = find_lock_page(mapping, index);
-				if (unlikely(page == NULL)) {
+				folio = filemap_lock_folio(mapping, index);
+				if (unlikely(folio == NULL)) {
 					result = SCAN_FAIL;
 					goto xa_unlocked;
 				}
-			} else if (PageDirty(page)) {
+			} else if (folio_test_dirty(folio)) {
 				/*
				 * khugepaged only works on read-only fd,
				 * so this page is dirty because it hasn't
@@ -1889,12 +1887,12 @@ static int collapse_file(struct mm_struct *mm, unsigned long addr,
 				filemap_flush(mapping);
 				result = SCAN_FAIL;
 				goto xa_unlocked;
-			} else if (PageWriteback(page)) {
+			} else if (folio_test_writeback(folio)) {
 				xas_unlock_irq(&xas);
 				result = SCAN_FAIL;
 				goto xa_unlocked;
-			} else if (trylock_page(page)) {
-				get_page(page);
+			} else if (folio_trylock(folio)) {
+				folio_get(folio);
 				xas_unlock_irq(&xas);
 			} else {
 				result = SCAN_PAGE_LOCK;
@@ -1903,35 +1901,31 @@ static int collapse_file(struct mm_struct *mm, unsigned long addr,
 		}
 
 		/*
-		 * The page must be locked, so we can drop the i_pages lock
+		 * The folio must be locked, so we can drop the i_pages lock
		 * without racing with truncate.
		 */
-		VM_BUG_ON_PAGE(!PageLocked(page), page);
+		VM_BUG_ON_FOLIO(!folio_test_locked(folio), folio);
 
-		/* make sure the page is up to date */
-		if (unlikely(!PageUptodate(page))) {
+		/* make sure the folio is up to date */
+		if (unlikely(!folio_test_uptodate(folio))) {
 			result = SCAN_FAIL;
 			goto out_unlock;
 		}
 
 		/*
		 * If file was truncated then extended, or hole-punched, before
-		 * we locked the first page, then a THP might be there already.
+		 * we locked the first folio, then a THP might be there already.
		 * This will be discovered on the first iteration.
		 */
-		if (PageTransCompound(page)) {
-			struct page *head = compound_head(page);
-
-			result = compound_order(head) == HPAGE_PMD_ORDER &&
-					head->index == start
+		if (folio_test_large(folio)) {
+			result = folio_order(folio) == HPAGE_PMD_ORDER &&
+					folio->index == start
					/* Maybe PMD-mapped */
					? SCAN_PTE_MAPPED_HUGEPAGE
					: SCAN_PAGE_COMPOUND;
 			goto out_unlock;
 		}
 
-		folio = page_folio(page);
-
 		if (folio_mapping(folio) != mapping) {
 			result = SCAN_TRUNCATED;
 			goto out_unlock;
@@ -1941,7 +1935,7 @@ static int collapse_file(struct mm_struct *mm, unsigned long addr,
			     folio_test_writeback(folio))) {
			/*
			 * khugepaged only works on read-only fd, so this
-			 * page is dirty because it hasn't been flushed
+			 * folio is dirty because it hasn't been flushed
			 * since first write.
			 */
 			result = SCAN_FAIL;
@@ -1965,33 +1959,34 @@ static int collapse_file(struct mm_struct *mm, unsigned long addr,
 
 		xas_lock_irq(&xas);
 
-		VM_BUG_ON_PAGE(page != xa_load(xas.xa, index), page);
+		VM_BUG_ON_FOLIO(folio != xa_load(xas.xa, index), folio);
 
 		/*
-		 * We control three references to the page:
+		 * We control three references to the folio:
		 *  - we hold a pin on it;
		 *  - one reference from page cache;
-		 *  - one from isolate_lru_page;
-		 * If those are the only references, then any new usage of the
-		 * page will have to fetch it from the page cache. That requires
-		 * locking the page to handle truncate, so any new usage will be
-		 * blocked until we unlock page after collapse/during rollback.
+		 *  - one from lru_isolate_folio;
+		 * If those are the only references, then any new usage
+		 * of the folio will have to fetch it from the page
+		 * cache. That requires locking the folio to handle
+		 * truncate, so any new usage will be blocked until we
+		 * unlock folio after collapse/during rollback.
		 */
-		if (page_count(page) != 3) {
+		if (folio_ref_count(folio) != 3) {
 			result = SCAN_PAGE_COUNT;
 			xas_unlock_irq(&xas);
-			putback_lru_page(page);
+			folio_putback_lru(folio);
 			goto out_unlock;
 		}
 
 		/*
-		 * Accumulate the pages that are being collapsed.
+		 * Accumulate the folios that are being collapsed.
		 */
-		list_add_tail(&page->lru, &pagelist);
+		list_add_tail(&folio->lru, &pagelist);
 		continue;
out_unlock:
-		unlock_page(page);
-		put_page(page);
+		folio_unlock(folio);
+		folio_put(folio);
 		goto xa_unlocked;
 	}
 
@@ -2030,17 +2025,17 @@ static int collapse_file(struct mm_struct *mm, unsigned long addr,
 	}
 
 	/*
-	 * The old pages are locked, so they won't change anymore.
+	 * The old folios are locked, so they won't change anymore.
	 */
 	index = start;
 	dst = folio_page(new_folio, 0);
-	list_for_each_entry(page, &pagelist, lru) {
-		while (index < page->index) {
+	list_for_each_entry(folio, &pagelist, lru) {
+		while (index < folio->index) {
 			clear_highpage(dst);
 			index++;
 			dst++;
 		}
-		if (copy_mc_highpage(dst, page) > 0) {
+		if (copy_mc_highpage(dst, folio_page(folio, 0)) > 0) {
 			result = SCAN_COPY_MC;
 			goto rollback;
 		}
@@ -2152,15 +2147,15 @@ static int collapse_file(struct mm_struct *mm, unsigned long addr,
 	folio_unlock(new_folio);
 
 	/*
-	 * The collapse has succeeded, so free the old pages.
+	 * The collapse has succeeded, so free the old folios.
	 */
-	list_for_each_entry_safe(page, tmp, &pagelist, lru) {
-		list_del(&page->lru);
-		page->mapping = NULL;
-		ClearPageActive(page);
-		ClearPageUnevictable(page);
-		unlock_page(page);
-		folio_put_refs(page_folio(page), 3);
+	list_for_each_entry_safe(folio, tmp, &pagelist, lru) {
+		list_del(&folio->lru);
+		folio->mapping = NULL;
+		folio_clear_active(folio);
+		folio_clear_unevictable(folio);
+		folio_unlock(folio);
+		folio_put_refs(folio, 3);
 	}
 
 	goto out;
@@ -2174,11 +2169,11 @@ static int collapse_file(struct mm_struct *mm, unsigned long addr,
 		shmem_uncharge(mapping->host, nr_none);
 	}
 
-	list_for_each_entry_safe(page, tmp, &pagelist, lru) {
-		list_del(&page->lru);
-		unlock_page(page);
-		putback_lru_page(page);
-		put_page(page);
+	list_for_each_entry_safe(folio, tmp, &pagelist, lru) {
+		list_del(&folio->lru);
+		folio_unlock(folio);
+		folio_putback_lru(folio);
+		folio_put(folio);
 	}
 	/*
	 * Undo the updates of filemap_nr_thps_inc for non-SHMEM
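The reference-count check that gates isolation now reads in folio terms; condensed from the hunks above (a sketch, not a verbatim excerpt):

	/*
	 * We expect exactly three references: our pin, the page cache,
	 * and the LRU isolation done earlier.  Anything else means a
	 * new user appeared and the collapse must not proceed.
	 */
	if (folio_ref_count(folio) != 3) {
		result = SCAN_PAGE_COUNT;
		xas_unlock_irq(&xas);
		folio_putback_lru(folio);
		goto out_unlock;
	}
	list_add_tail(&folio->lru, &pagelist);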
From: "Matthew Wilcox (Oracle)"
To: Andrew Morton
Cc: "Matthew Wilcox (Oracle)", linux-mm@kvack.org
Subject: [PATCH 7/7] khugepaged: Use a folio throughout hpage_collapse_scan_file()
Date: Wed, 3 Apr 2024 18:18:36 +0100
Message-ID: <20240403171838.1445826-8-willy@infradead.org>
X-Mailer: git-send-email 2.44.0
In-Reply-To: <20240403171838.1445826-1-willy@infradead.org>
References: <20240403171838.1445826-1-willy@infradead.org>
MIME-Version: 1.0
Replace the use of pages with folios.  Saves a few calls to
compound_head() and removes some uses of obsolete functions.

Signed-off-by: Matthew Wilcox (Oracle)
Reviewed-by: Vishal Moola (Oracle)
---
 include/trace/events/huge_memory.h |  6 +++---
 mm/khugepaged.c                    | 33 +++++++++++++++---------------
 2 files changed, 19 insertions(+), 20 deletions(-)

diff --git a/include/trace/events/huge_memory.h b/include/trace/events/huge_memory.h
index dc6eeef2d3da..ab576898a126 100644
--- a/include/trace/events/huge_memory.h
+++ b/include/trace/events/huge_memory.h
@@ -174,10 +174,10 @@ TRACE_EVENT(mm_collapse_huge_page_swapin,
 
 TRACE_EVENT(mm_khugepaged_scan_file,
 
-        TP_PROTO(struct mm_struct *mm, struct page *page, struct file *file,
+        TP_PROTO(struct mm_struct *mm, struct folio *folio, struct file *file,
                  int present, int swap, int result),
 
-        TP_ARGS(mm, page, file, present, swap, result),
+        TP_ARGS(mm, folio, file, present, swap, result),
 
         TP_STRUCT__entry(
                 __field(struct mm_struct *, mm)
@@ -190,7 +190,7 @@ TRACE_EVENT(mm_khugepaged_scan_file,
 
         TP_fast_assign(
                 __entry->mm = mm;
-                __entry->pfn = page ? page_to_pfn(page) : -1;
+                __entry->pfn = folio ? folio_pfn(folio) : -1;
                 __assign_str(filename, file->f_path.dentry->d_iname);
                 __entry->present = present;
                 __entry->swap = swap;
diff --git a/mm/khugepaged.c b/mm/khugepaged.c
index 0b0053fb30c0..ef2871aaeb43 100644
--- a/mm/khugepaged.c
+++ b/mm/khugepaged.c
@@ -2203,7 +2203,7 @@ static int hpage_collapse_scan_file(struct mm_struct *mm, unsigned long addr,
                                     struct file *file, pgoff_t start,
                                     struct collapse_control *cc)
 {
-        struct page *page = NULL;
+        struct folio *folio = NULL;
         struct address_space *mapping = file->f_mapping;
         XA_STATE(xas, &mapping->i_pages, start);
         int present, swap;
@@ -2215,11 +2215,11 @@ static int hpage_collapse_scan_file(struct mm_struct *mm, unsigned long addr,
         memset(cc->node_load, 0, sizeof(cc->node_load));
         nodes_clear(cc->alloc_nmask);
         rcu_read_lock();
-        xas_for_each(&xas, page, start + HPAGE_PMD_NR - 1) {
-                if (xas_retry(&xas, page))
+        xas_for_each(&xas, folio, start + HPAGE_PMD_NR - 1) {
+                if (xas_retry(&xas, folio))
                         continue;
 
-                if (xa_is_value(page)) {
+                if (xa_is_value(folio)) {
                         ++swap;
                         if (cc->is_khugepaged &&
                             swap > khugepaged_max_ptes_swap) {
@@ -2234,11 +2234,9 @@ static int hpage_collapse_scan_file(struct mm_struct *mm, unsigned long addr,
                  * TODO: khugepaged should compact smaller compound pages
                  * into a PMD sized page
                  */
-                if (PageTransCompound(page)) {
-                        struct page *head = compound_head(page);
-
-                        result = compound_order(head) == HPAGE_PMD_ORDER &&
-                                        head->index == start
+                if (folio_test_large(folio)) {
+                        result = folio_order(folio) == HPAGE_PMD_ORDER &&
+                                        folio->index == start
                                         /* Maybe PMD-mapped */
                                         ? SCAN_PTE_MAPPED_HUGEPAGE
                                         : SCAN_PAGE_COMPOUND;
@@ -2251,28 +2249,29 @@ static int hpage_collapse_scan_file(struct mm_struct *mm, unsigned long addr,
                         break;
                 }
 
-                node = page_to_nid(page);
+                node = folio_nid(folio);
                 if (hpage_collapse_scan_abort(node, cc)) {
                         result = SCAN_SCAN_ABORT;
                         break;
                 }
                 cc->node_load[node]++;
 
-                if (!PageLRU(page)) {
+                if (!folio_test_lru(folio)) {
                         result = SCAN_PAGE_LRU;
                         break;
                 }
 
-                if (page_count(page) !=
-                    1 + page_mapcount(page) + page_has_private(page)) {
+                if (folio_ref_count(folio) !=
+                    1 + folio_mapcount(folio) + folio_test_private(folio)) {
                         result = SCAN_PAGE_COUNT;
                         break;
                 }
 
                 /*
-                 * We probably should check if the page is referenced here, but
-                 * nobody would transfer pte_young() to PageReferenced() for us.
-                 * And rmap walk here is just too costly...
+                 * We probably should check if the folio is referenced
+                 * here, but nobody would transfer pte_young() to
+                 * folio_test_referenced() for us. And rmap walk here
+                 * is just too costly...
                  */
                 present++;
 
@@ -2294,7 +2293,7 @@ static int hpage_collapse_scan_file(struct mm_struct *mm, unsigned long addr,
                 }
         }
 
-        trace_mm_khugepaged_scan_file(mm, page, file, present, swap, result);
+        trace_mm_khugepaged_scan_file(mm, folio, file, present, swap, result);
         return result;
 }
 #else
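
[Editor's note] As background for the commit message's claim that the
conversion "saves a few calls to compound_head()", here is a minimal
sketch, not taken from the series; the helper names are made up for
illustration, while the page and folio calls are the real interfaces
being exchanged by these patches.

/* Illustrative sketch only; collapse_ready_page()/collapse_ready_folio()
 * are hypothetical helpers, not functions in mm/khugepaged.c. */
#include <linux/mm.h>
#include <linux/page-flags.h>

/*
 * Page-based form: PageLocked() and PageUptodate() may be handed a tail
 * page, so each test resolves the head page before reading the flag, and
 * page_count() likewise goes through page_folio() to find the head.
 */
static bool collapse_ready_page(struct page *page)
{
        VM_BUG_ON_PAGE(!PageLocked(page), page);
        if (!PageUptodate(page))
                return false;
        /* pin + page cache + LRU isolation, as in collapse_file() */
        return page_count(page) == 3;
}

/*
 * Folio-based form: a folio never refers to a tail page, so the head
 * lookup happened once, when the folio was obtained, and the tests
 * below read the flags and refcount directly.
 */
static bool collapse_ready_folio(struct folio *folio)
{
        VM_BUG_ON_FOLIO(!folio_test_locked(folio), folio);
        if (!folio_test_uptodate(folio))
                return false;
        return folio_ref_count(folio) == 3;
}

The saving is exactly this repeated head-page resolution: once the caller
holds a struct folio, every subsequent test operates on it directly
instead of re-deriving the head from a possibly-tail struct page.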