From patchwork Tue Feb 13 21:55:14 2024
X-Patchwork-Submitter: Zi Yan
X-Patchwork-Id: 13555716
From: Zi Yan
To: "Pankaj Raghav (Samsung)", linux-mm@kvack.org
Cc: Zi Yan, "Matthew Wilcox (Oracle)", David Hildenbrand, Yang Shi, Yu Zhao,
    "Kirill A. Shutemov", Ryan Roberts, Michal Koutný, Roman Gushchin,
    "Zach O'Keefe", Hugh Dickins, Mcgrof Chamberlain, Andrew Morton,
    linux-kernel@vger.kernel.org, cgroups@vger.kernel.org,
    linux-fsdevel@vger.kernel.org, linux-kselftest@vger.kernel.org
Subject: [PATCH v4 1/7] mm/memcg: use order instead of nr in split_page_memcg()
Date: Tue, 13 Feb 2024 16:55:14 -0500
Message-ID: <20240213215520.1048625-2-zi.yan@sent.com>
In-Reply-To: <20240213215520.1048625-1-zi.yan@sent.com>
References: <20240213215520.1048625-1-zi.yan@sent.com>

From: Zi Yan

We do not have non-power-of-two pages, so using nr is error prone if nr is
not a power of two. Use the page order instead.
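[Editorial illustration, not part of the patch: a standalone userspace sketch of why passing the order is the safer interface. The helper name below is made up; with an order, the subpage count is derived inside the callee and is a power of two by construction, whereas a raw nr argument could be any value a caller passes.]

/* Hypothetical sketch, not kernel code. */
#include <assert.h>

static unsigned int subpages_from_order(int order)
{
	/* Derived count is always a power of two. */
	return 1U << order;
}

int main(void)
{
	assert(subpages_from_order(0) == 1);	/* order-0: a single page */
	assert(subpages_from_order(9) == 512);	/* e.g. a PMD-sized THP with 4KB pages */
	return 0;
}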
Signed-off-by: Zi Yan Acked-by: David Hildenbrand --- include/linux/memcontrol.h | 4 ++-- mm/huge_memory.c | 3 ++- mm/memcontrol.c | 3 ++- mm/page_alloc.c | 4 ++-- 4 files changed, 8 insertions(+), 6 deletions(-) diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h index 4e4caeaea404..173bbb53c1ec 100644 --- a/include/linux/memcontrol.h +++ b/include/linux/memcontrol.h @@ -1163,7 +1163,7 @@ static inline void memcg_memory_event_mm(struct mm_struct *mm, rcu_read_unlock(); } -void split_page_memcg(struct page *head, unsigned int nr); +void split_page_memcg(struct page *head, int order); unsigned long mem_cgroup_soft_limit_reclaim(pg_data_t *pgdat, int order, gfp_t gfp_mask, @@ -1621,7 +1621,7 @@ void count_memcg_event_mm(struct mm_struct *mm, enum vm_event_item idx) { } -static inline void split_page_memcg(struct page *head, unsigned int nr) +static inline void split_page_memcg(struct page *head, int order) { } diff --git a/mm/huge_memory.c b/mm/huge_memory.c index 016e20bd813e..0cd5fba0923c 100644 --- a/mm/huge_memory.c +++ b/mm/huge_memory.c @@ -2877,9 +2877,10 @@ static void __split_huge_page(struct page *page, struct list_head *list, unsigned long offset = 0; unsigned int nr = thp_nr_pages(head); int i, nr_dropped = 0; + int order = folio_order(folio); /* complete memcg works before add pages to LRU */ - split_page_memcg(head, nr); + split_page_memcg(head, order); if (folio_test_anon(folio) && folio_test_swapcache(folio)) { offset = swp_offset(folio->swap); diff --git a/mm/memcontrol.c b/mm/memcontrol.c index 93ad8640b741..404e529644c0 100644 --- a/mm/memcontrol.c +++ b/mm/memcontrol.c @@ -3608,11 +3608,12 @@ void obj_cgroup_uncharge(struct obj_cgroup *objcg, size_t size) /* * Because page_memcg(head) is not set on tails, set it now. 
*/ -void split_page_memcg(struct page *head, unsigned int nr) +void split_page_memcg(struct page *head, int order) { struct folio *folio = page_folio(head); struct mem_cgroup *memcg = folio_memcg(folio); int i; + unsigned int nr = 1 << order; if (mem_cgroup_disabled() || !memcg) return; diff --git a/mm/page_alloc.c b/mm/page_alloc.c index 7ae4b74c9e5c..7c927b84e16c 100644 --- a/mm/page_alloc.c +++ b/mm/page_alloc.c @@ -2653,7 +2653,7 @@ void split_page(struct page *page, unsigned int order) for (i = 1; i < (1 << order); i++) set_page_refcounted(page + i); split_page_owner(page, 1 << order); - split_page_memcg(page, 1 << order); + split_page_memcg(page, order); } EXPORT_SYMBOL_GPL(split_page); @@ -4838,7 +4838,7 @@ static void *make_alloc_exact(unsigned long addr, unsigned int order, struct page *last = page + nr; split_page_owner(page, 1 << order); - split_page_memcg(page, 1 << order); + split_page_memcg(page, order); while (page < --last) set_page_refcounted(last);

From patchwork Tue Feb 13 21:55:15 2024
X-Patchwork-Submitter: Zi Yan
X-Patchwork-Id: 13555717
From: Zi Yan
Subject: [PATCH v4 2/7] mm/page_owner: use order instead of nr in split_page_owner()
Date: Tue, 13 Feb 2024 16:55:15 -0500
Message-ID: <20240213215520.1048625-3-zi.yan@sent.com>
In-Reply-To: <20240213215520.1048625-1-zi.yan@sent.com>

From: Zi Yan

We do not have non-power-of-two pages, so using nr is error prone if nr is
not a power of two. Use the page order instead.
Signed-off-by: Zi Yan Acked-by: David Hildenbrand --- include/linux/page_owner.h | 8 ++++---- mm/huge_memory.c | 2 +- mm/page_alloc.c | 4 ++-- mm/page_owner.c | 3 ++- 4 files changed, 9 insertions(+), 8 deletions(-) diff --git a/include/linux/page_owner.h b/include/linux/page_owner.h index 119a0c9d2a8b..d7878523adfc 100644 --- a/include/linux/page_owner.h +++ b/include/linux/page_owner.h @@ -11,7 +11,7 @@ extern struct page_ext_operations page_owner_ops; extern void __reset_page_owner(struct page *page, unsigned short order); extern void __set_page_owner(struct page *page, unsigned short order, gfp_t gfp_mask); -extern void __split_page_owner(struct page *page, unsigned int nr); +extern void __split_page_owner(struct page *page, int order); extern void __folio_copy_owner(struct folio *newfolio, struct folio *old); extern void __set_page_owner_migrate_reason(struct page *page, int reason); extern void __dump_page_owner(const struct page *page); @@ -31,10 +31,10 @@ static inline void set_page_owner(struct page *page, __set_page_owner(page, order, gfp_mask); } -static inline void split_page_owner(struct page *page, unsigned int nr) +static inline void split_page_owner(struct page *page, int order) { if (static_branch_unlikely(&page_owner_inited)) - __split_page_owner(page, nr); + __split_page_owner(page, order); } static inline void folio_copy_owner(struct folio *newfolio, struct folio *old) { @@ -60,7 +60,7 @@ static inline void set_page_owner(struct page *page, { } static inline void split_page_owner(struct page *page, - unsigned short order) + int order) { } static inline void folio_copy_owner(struct folio *newfolio, struct folio *folio) diff --git a/mm/huge_memory.c b/mm/huge_memory.c index 0cd5fba0923c..f079b02f1f59 100644 --- a/mm/huge_memory.c +++ b/mm/huge_memory.c @@ -2919,7 +2919,7 @@ static void __split_huge_page(struct page *page, struct list_head *list, unlock_page_lruvec(lruvec); /* Caller disabled irqs, so they are still disabled here */ - split_page_owner(head, nr); + split_page_owner(head, order); /* See comment in __split_huge_page_tail() */ if (PageAnon(head)) { diff --git a/mm/page_alloc.c b/mm/page_alloc.c index 7c927b84e16c..b6e8fe6fed67 100644 --- a/mm/page_alloc.c +++ b/mm/page_alloc.c @@ -2652,7 +2652,7 @@ void split_page(struct page *page, unsigned int order) for (i = 1; i < (1 << order); i++) set_page_refcounted(page + i); - split_page_owner(page, 1 << order); + split_page_owner(page, order); split_page_memcg(page, order); } EXPORT_SYMBOL_GPL(split_page); @@ -4837,7 +4837,7 @@ static void *make_alloc_exact(unsigned long addr, unsigned int order, struct page *page = virt_to_page((void *)addr); struct page *last = page + nr; - split_page_owner(page, 1 << order); + split_page_owner(page, order); split_page_memcg(page, order); while (page < --last) set_page_refcounted(last); diff --git a/mm/page_owner.c b/mm/page_owner.c index c4f9e5506e93..1319e402c2cf 100644 --- a/mm/page_owner.c +++ b/mm/page_owner.c @@ -292,11 +292,12 @@ void __set_page_owner_migrate_reason(struct page *page, int reason) page_ext_put(page_ext); } -void __split_page_owner(struct page *page, unsigned int nr) +void __split_page_owner(struct page *page, int order) { int i; struct page_ext *page_ext = page_ext_get(page); struct page_owner *page_owner; + unsigned int nr = 1 << order; if (unlikely(!page_ext)) return; From patchwork Tue Feb 13 21:55:16 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Zi Yan X-Patchwork-Id: 13555718 
From: Zi Yan
Subject: [PATCH v4 3/7] mm: memcg: make memcg huge page split support any order split.
Date: Tue, 13 Feb 2024 16:55:16 -0500
Message-ID: <20240213215520.1048625-4-zi.yan@sent.com>
In-Reply-To: <20240213215520.1048625-1-zi.yan@sent.com>

From: Zi Yan

split_page_memcg() sets the memcg information on the pages after a split.
Add a new parameter, new_order, to tell it the order of the split pages; it
is always 0 for now. This prepares for upcoming changes that allow splitting
a huge page to any lower order.
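[Editorial illustration, not part of the patch: a standalone userspace sketch of the bookkeeping this change enables, with made-up order values. Splitting an order-9 folio into order-2 folios means one new folio head every new_nr subpages and old_nr / new_nr - 1 extra memcg (or objcg) references, mirroring the loop and reference counts in the diff below.]

/* Standalone sketch of the any-order split bookkeeping; not kernel code. */
#include <stdio.h>

int main(void)
{
	int old_order = 9, new_order = 2;	/* hypothetical example values */
	unsigned int old_nr = 1u << old_order;	/* 512 subpages in the old folio */
	unsigned int new_nr = 1u << new_order;	/* 4 subpages per new folio */
	unsigned int new_heads = 0;

	/* One new folio head every new_nr subpages, skipping the original head. */
	for (unsigned int i = new_nr; i < old_nr; i += new_nr)
		new_heads++;

	printf("tail folios that get memcg_data copied: %u\n", new_heads);	/* 127 */
	printf("extra references to take: %u\n", old_nr / new_nr - 1);		/* 127 */
	return 0;
}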
Signed-off-by: Zi Yan Acked-by: David Hildenbrand --- include/linux/memcontrol.h | 4 ++-- mm/huge_memory.c | 2 +- mm/memcontrol.c | 11 ++++++----- mm/page_alloc.c | 4 ++-- 4 files changed, 11 insertions(+), 10 deletions(-) diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h index 173bbb53c1ec..9a2dea92be0e 100644 --- a/include/linux/memcontrol.h +++ b/include/linux/memcontrol.h @@ -1163,7 +1163,7 @@ static inline void memcg_memory_event_mm(struct mm_struct *mm, rcu_read_unlock(); } -void split_page_memcg(struct page *head, int order); +void split_page_memcg(struct page *head, int old_order, int new_order); unsigned long mem_cgroup_soft_limit_reclaim(pg_data_t *pgdat, int order, gfp_t gfp_mask, @@ -1621,7 +1621,7 @@ void count_memcg_event_mm(struct mm_struct *mm, enum vm_event_item idx) { } -static inline void split_page_memcg(struct page *head, int order) +static inline void split_page_memcg(struct page *head, int old_order, int new_order) { } diff --git a/mm/huge_memory.c b/mm/huge_memory.c index f079b02f1f59..3d30eccd3a7f 100644 --- a/mm/huge_memory.c +++ b/mm/huge_memory.c @@ -2880,7 +2880,7 @@ static void __split_huge_page(struct page *page, struct list_head *list, int order = folio_order(folio); /* complete memcg works before add pages to LRU */ - split_page_memcg(head, order); + split_page_memcg(head, order, 0); if (folio_test_anon(folio) && folio_test_swapcache(folio)) { offset = swp_offset(folio->swap); diff --git a/mm/memcontrol.c b/mm/memcontrol.c index 404e529644c0..27d53715d8dc 100644 --- a/mm/memcontrol.c +++ b/mm/memcontrol.c @@ -3608,23 +3608,24 @@ void obj_cgroup_uncharge(struct obj_cgroup *objcg, size_t size) /* * Because page_memcg(head) is not set on tails, set it now. */ -void split_page_memcg(struct page *head, int order) +void split_page_memcg(struct page *head, int old_order, int new_order) { struct folio *folio = page_folio(head); struct mem_cgroup *memcg = folio_memcg(folio); int i; - unsigned int nr = 1 << order; + unsigned int old_nr = 1 << old_order; + unsigned int new_nr = 1 << new_order; if (mem_cgroup_disabled() || !memcg) return; - for (i = 1; i < nr; i++) + for (i = new_nr; i < old_nr; i += new_nr) folio_page(folio, i)->memcg_data = folio->memcg_data; if (folio_memcg_kmem(folio)) - obj_cgroup_get_many(__folio_objcg(folio), nr - 1); + obj_cgroup_get_many(__folio_objcg(folio), old_nr / new_nr - 1); else - css_get_many(&memcg->css, nr - 1); + css_get_many(&memcg->css, old_nr / new_nr - 1); } #ifdef CONFIG_SWAP diff --git a/mm/page_alloc.c b/mm/page_alloc.c index b6e8fe6fed67..9d4dd41d0647 100644 --- a/mm/page_alloc.c +++ b/mm/page_alloc.c @@ -2653,7 +2653,7 @@ void split_page(struct page *page, unsigned int order) for (i = 1; i < (1 << order); i++) set_page_refcounted(page + i); split_page_owner(page, order); - split_page_memcg(page, order); + split_page_memcg(page, order, 0); } EXPORT_SYMBOL_GPL(split_page); @@ -4838,7 +4838,7 @@ static void *make_alloc_exact(unsigned long addr, unsigned int order, struct page *last = page + nr; split_page_owner(page, order); - split_page_memcg(page, order); + split_page_memcg(page, order, 0); while (page < --last) set_page_refcounted(last); From patchwork Tue Feb 13 21:55:17 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Zi Yan X-Patchwork-Id: 13555719 Received: from out5-smtp.messagingengine.com (out5-smtp.messagingengine.com [66.111.4.29]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate 
From: Zi Yan
Subject: [PATCH v4 4/7] mm: page_owner: add support for splitting to any order in split page_owner.
Date: Tue, 13 Feb 2024 16:55:17 -0500
Message-ID: <20240213215520.1048625-5-zi.yan@sent.com>
In-Reply-To: <20240213215520.1048625-1-zi.yan@sent.com>

From: Zi Yan

Add a new_order parameter to set the new page order in page owner. This
prepares for upcoming changes that allow splitting a huge page to any lower
order.

Signed-off-by: Zi Yan --- include/linux/page_owner.h | 10 +++++----- mm/huge_memory.c | 2 +- mm/page_alloc.c | 4 ++-- mm/page_owner.c | 9 +++++---- 4 files changed, 13 insertions(+), 12 deletions(-) diff --git a/include/linux/page_owner.h b/include/linux/page_owner.h index d7878523adfc..a784ba69f67f 100644 --- a/include/linux/page_owner.h +++ b/include/linux/page_owner.h @@ -11,7 +11,7 @@ extern struct page_ext_operations page_owner_ops; extern void __reset_page_owner(struct page *page, unsigned short order); extern void __set_page_owner(struct page *page, unsigned short order, gfp_t gfp_mask); -extern void __split_page_owner(struct page *page, int order); +extern void __split_page_owner(struct page *page, int old_order, int new_order); extern void __folio_copy_owner(struct folio *newfolio, struct folio *old); extern void __set_page_owner_migrate_reason(struct page *page, int reason); extern void __dump_page_owner(const struct page *page); @@ -31,10 +31,10 @@ static inline void set_page_owner(struct page *page, __set_page_owner(page, order, gfp_mask); } -static inline void split_page_owner(struct page *page, int order) +static inline void split_page_owner(struct page *page, int old_order, int new_order) { if (static_branch_unlikely(&page_owner_inited)) - __split_page_owner(page, order); + __split_page_owner(page, old_order, new_order); } static inline void folio_copy_owner(struct folio *newfolio, struct folio *old) { @@ -56,11 +56,11 @@ static inline void reset_page_owner(struct page *page, unsigned short order) { } static inline void set_page_owner(struct page *page, - unsigned int order, gfp_t gfp_mask) + unsigned short order, gfp_t gfp_mask) { } static inline void split_page_owner(struct page *page, - int order) + int old_order, int new_order) { } static inline void folio_copy_owner(struct folio *newfolio, struct folio *folio) diff --git a/mm/huge_memory.c
b/mm/huge_memory.c index 3d30eccd3a7f..ad7133c97428 100644 --- a/mm/huge_memory.c +++ b/mm/huge_memory.c @@ -2919,7 +2919,7 @@ static void __split_huge_page(struct page *page, struct list_head *list, unlock_page_lruvec(lruvec); /* Caller disabled irqs, so they are still disabled here */ - split_page_owner(head, order); + split_page_owner(head, order, 0); /* See comment in __split_huge_page_tail() */ if (PageAnon(head)) { diff --git a/mm/page_alloc.c b/mm/page_alloc.c index 9d4dd41d0647..e0f107b21c98 100644 --- a/mm/page_alloc.c +++ b/mm/page_alloc.c @@ -2652,7 +2652,7 @@ void split_page(struct page *page, unsigned int order) for (i = 1; i < (1 << order); i++) set_page_refcounted(page + i); - split_page_owner(page, order); + split_page_owner(page, order, 0); split_page_memcg(page, order, 0); } EXPORT_SYMBOL_GPL(split_page); @@ -4837,7 +4837,7 @@ static void *make_alloc_exact(unsigned long addr, unsigned int order, struct page *page = virt_to_page((void *)addr); struct page *last = page + nr; - split_page_owner(page, order); + split_page_owner(page, order, 0); split_page_memcg(page, order, 0); while (page < --last) set_page_refcounted(last); diff --git a/mm/page_owner.c b/mm/page_owner.c index 1319e402c2cf..ebbffa0501db 100644 --- a/mm/page_owner.c +++ b/mm/page_owner.c @@ -292,19 +292,20 @@ void __set_page_owner_migrate_reason(struct page *page, int reason) page_ext_put(page_ext); } -void __split_page_owner(struct page *page, int order) +void __split_page_owner(struct page *page, int old_order, int new_order) { int i; struct page_ext *page_ext = page_ext_get(page); struct page_owner *page_owner; - unsigned int nr = 1 << order; + unsigned int old_nr = 1 << old_order; + unsigned int new_nr = 1 << new_order; if (unlikely(!page_ext)) return; - for (i = 0; i < nr; i++) { + for (i = 0; i < old_nr; i += new_nr) { page_owner = get_page_owner(page_ext); - page_owner->order = 0; + page_owner->order = new_order; page_ext = page_ext_next(page_ext); } page_ext_put(page_ext);

From patchwork Tue Feb 13 21:55:18 2024
X-Patchwork-Submitter: Zi Yan
X-Patchwork-Id: 13555720
From: Zi Yan
Subject: [PATCH v4 5/7] mm: thp: split huge page to any lower order pages (except order-1).
Date: Tue, 13 Feb 2024 16:55:18 -0500
Message-ID: <20240213215520.1048625-6-zi.yan@sent.com>
In-Reply-To: <20240213215520.1048625-1-zi.yan@sent.com>

From: Zi Yan

To split a THP to any lower order (except order-1) pages, we need to re-form
the THPs on the subpages at the given order and add page refcounts based on
the new page order. We also need to reinitialize page_deferred_list after
removing the page from the split_queue; otherwise a subsequent split will see
list corruption when checking page_deferred_list again.

This has many uses, such as minimizing the number of pages after truncating a
huge pagecache page. For anonymous THPs, we can only split them to order-0 as
before, until we add support for any-size anonymous THPs.

An order-1 folio is not supported because _deferred_list, which is used by
partially mapped folios, is stored in subpage 2, and an order-1 folio only
has subpages 0 and 1.

Signed-off-by: Zi Yan --- include/linux/huge_mm.h | 21 +++++--- mm/huge_memory.c | 114 +++++++++++++++++++++++++++++++--------- 2 files changed, 101 insertions(+), 34 deletions(-) diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h index 5adb86af35fc..de0c89105076 100644 --- a/include/linux/huge_mm.h +++ b/include/linux/huge_mm.h @@ -265,10 +265,11 @@ unsigned long thp_get_unmapped_area(struct file *filp, unsigned long addr, void folio_prep_large_rmappable(struct folio *folio); bool can_split_folio(struct folio *folio, int *pextra_pins); -int split_huge_page_to_list(struct page *page, struct list_head *list); +int split_huge_page_to_list_to_order(struct page *page, struct list_head *list, + unsigned int new_order); static inline int split_huge_page(struct page *page) { - return split_huge_page_to_list(page, NULL); + return split_huge_page_to_list_to_order(page, NULL, 0); } void deferred_split_folio(struct folio *folio); @@ -422,7 +423,8 @@ can_split_folio(struct folio *folio, int *pextra_pins) return false; } static inline int -split_huge_page_to_list(struct page *page, struct list_head *list) +split_huge_page_to_list_to_order(struct page *page, struct list_head *list, + unsigned int new_order) { return 0; } @@ -519,17 +521,20 @@ static inline bool thp_migration_supported(void) } #endif /* CONFIG_TRANSPARENT_HUGEPAGE */ -static inline int split_folio_to_list(struct folio *folio, - struct list_head *list) +static inline int split_folio_to_list_to_order(struct folio *folio, + struct list_head *list, int new_order) { - return split_huge_page_to_list(&folio->page, list); + return split_huge_page_to_list_to_order(&folio->page, list, new_order); } -static inline int split_folio(struct folio *folio) +static inline int split_folio_to_order(struct folio *folio, int new_order) { - return split_folio_to_list(folio, NULL); + return split_folio_to_list_to_order(folio, NULL, new_order); } +#define split_folio_to_list(f, l) split_folio_to_list_to_order(f, l, 0) +#define split_folio(f) split_folio_to_order(f, 0) + /* * archs that select ARCH_WANTS_THP_SWAP but don't support THP_SWP due to * limitations in the implementation like arm64 MTE can override this to diff --git a/mm/huge_memory.c b/mm/huge_memory.c index ad7133c97428..d0e555a8ea98 100644 --- a/mm/huge_memory.c +++ b/mm/huge_memory.c @@ -2718,11 +2718,14 @@ void vma_adjust_trans_huge(struct vm_area_struct *vma, static void
unmap_folio(struct folio *folio) { - enum ttu_flags ttu_flags = TTU_RMAP_LOCKED | TTU_SPLIT_HUGE_PMD | - TTU_SYNC | TTU_BATCH_FLUSH; + enum ttu_flags ttu_flags = TTU_RMAP_LOCKED | TTU_SYNC | + TTU_BATCH_FLUSH; VM_BUG_ON_FOLIO(!folio_test_large(folio), folio); + if (folio_test_pmd_mappable(folio)) + ttu_flags |= TTU_SPLIT_HUGE_PMD; + /* * Anon pages need migration entries to preserve them, but file * pages can simply be left unmapped, then faulted back on demand. @@ -2756,7 +2759,6 @@ static void lru_add_page_tail(struct page *head, struct page *tail, struct lruvec *lruvec, struct list_head *list) { VM_BUG_ON_PAGE(!PageHead(head), head); - VM_BUG_ON_PAGE(PageCompound(tail), head); VM_BUG_ON_PAGE(PageLRU(tail), head); lockdep_assert_held(&lruvec->lru_lock); @@ -2777,7 +2779,8 @@ static void lru_add_page_tail(struct page *head, struct page *tail, } static void __split_huge_page_tail(struct folio *folio, int tail, - struct lruvec *lruvec, struct list_head *list) + struct lruvec *lruvec, struct list_head *list, + unsigned int new_order) { struct page *head = &folio->page; struct page *page_tail = head + tail; @@ -2847,10 +2850,15 @@ static void __split_huge_page_tail(struct folio *folio, int tail, * which needs correct compound_head(). */ clear_compound_head(page_tail); + if (new_order) { + prep_compound_page(page_tail, new_order); + folio_prep_large_rmappable(page_folio(page_tail)); + } /* Finally unfreeze refcount. Additional reference from page cache. */ - page_ref_unfreeze(page_tail, 1 + (!folio_test_anon(folio) || - folio_test_swapcache(folio))); + page_ref_unfreeze(page_tail, + 1 + ((!folio_test_anon(folio) || folio_test_swapcache(folio)) ? + folio_nr_pages(page_folio(page_tail)) : 0)); if (folio_test_young(folio)) folio_set_young(new_folio); @@ -2868,7 +2876,7 @@ static void __split_huge_page_tail(struct folio *folio, int tail, } static void __split_huge_page(struct page *page, struct list_head *list, - pgoff_t end) + pgoff_t end, unsigned int new_order) { struct folio *folio = page_folio(page); struct page *head = &folio->page; @@ -2877,10 +2885,11 @@ static void __split_huge_page(struct page *page, struct list_head *list, unsigned long offset = 0; unsigned int nr = thp_nr_pages(head); int i, nr_dropped = 0; + unsigned int new_nr = 1 << new_order; int order = folio_order(folio); /* complete memcg works before add pages to LRU */ - split_page_memcg(head, order, 0); + split_page_memcg(head, order, new_order); if (folio_test_anon(folio) && folio_test_swapcache(folio)) { offset = swp_offset(folio->swap); @@ -2893,8 +2902,8 @@ static void __split_huge_page(struct page *page, struct list_head *list, ClearPageHasHWPoisoned(head); - for (i = nr - 1; i >= 1; i--) { - __split_huge_page_tail(folio, i, lruvec, list); + for (i = nr - new_nr; i >= new_nr; i -= new_nr) { + __split_huge_page_tail(folio, i, lruvec, list, new_order); /* Some pages can be beyond EOF: drop them from page cache */ if (head[i].index >= end) { struct folio *tail = page_folio(head + i); @@ -2910,29 +2919,41 @@ static void __split_huge_page(struct page *page, struct list_head *list, __xa_store(&head->mapping->i_pages, head[i].index, head + i, 0); } else if (swap_cache) { + /* + * split anonymous THPs (including swapped out ones) to + * non-zero order not supported + */ + VM_WARN_ONCE(new_order, + "Split swap-cached anon folio to non-0 order not supported"); __xa_store(&swap_cache->i_pages, offset + i, head + i, 0); } } - ClearPageCompound(head); + if (!new_order) + ClearPageCompound(head); + else { + struct folio *new_folio = 
(struct folio *)head; + + folio_set_order(new_folio, new_order); + } unlock_page_lruvec(lruvec); /* Caller disabled irqs, so they are still disabled here */ - split_page_owner(head, order, 0); + split_page_owner(head, order, new_order); /* See comment in __split_huge_page_tail() */ if (PageAnon(head)) { /* Additional pin to swap cache */ if (PageSwapCache(head)) { - page_ref_add(head, 2); + page_ref_add(head, 1 + new_nr); xa_unlock(&swap_cache->i_pages); } else { page_ref_inc(head); } } else { /* Additional pin to page cache */ - page_ref_add(head, 2); + page_ref_add(head, 1 + new_nr); xa_unlock(&head->mapping->i_pages); } local_irq_enable(); @@ -2944,7 +2965,15 @@ static void __split_huge_page(struct page *page, struct list_head *list, if (folio_test_swapcache(folio)) split_swap_cluster(folio->swap); - for (i = 0; i < nr; i++) { + /* + * set page to its compound_head when split to non order-0 pages, so + * we can skip unlocking it below, since PG_locked is transferred to + * the compound_head of the page and the caller will unlock it. + */ + if (new_order) + page = compound_head(page); + + for (i = 0; i < nr; i += new_nr) { struct page *subpage = head + i; if (subpage == page) continue; @@ -2978,29 +3007,35 @@ bool can_split_folio(struct folio *folio, int *pextra_pins) } /* - * This function splits huge page into normal pages. @page can point to any - * subpage of huge page to split. Split doesn't change the position of @page. + * This function splits huge page into pages in @new_order. @page can point to + * any subpage of huge page to split. Split doesn't change the position of + * @page. + * + * NOTE: order-1 folio is not supported because _deferred_list, which is used + * by partially mapped folios, is stored in subpage 2 and an order-1 folio + * only has subpage 0 and 1. * * Only caller must hold pin on the @page, otherwise split fails with -EBUSY. * The huge page must be locked. * * If @list is null, tail pages will be added to LRU list, otherwise, to @list. * - * Both head page and tail pages will inherit mapping, flags, and so on from - * the hugepage. + * Pages in new_order will inherit mapping, flags, and so on from the hugepage. * - * GUP pin and PG_locked transferred to @page. Rest subpages can be freed if - * they are not mapped. + * GUP pin and PG_locked transferred to @page or the compound page @page belongs + * to. Rest subpages can be freed if they are not mapped. * * Returns 0 if the hugepage is split successfully. * Returns -EBUSY if the page is pinned or if anon_vma disappeared from under * us. 
*/ -int split_huge_page_to_list(struct page *page, struct list_head *list) +int split_huge_page_to_list_to_order(struct page *page, struct list_head *list, + unsigned int new_order) { struct folio *folio = page_folio(page); struct deferred_split *ds_queue = get_deferred_split_queue(folio); - XA_STATE(xas, &folio->mapping->i_pages, folio->index); + /* reset xarray order to new order after split */ + XA_STATE_ORDER(xas, &folio->mapping->i_pages, folio->index, new_order); struct anon_vma *anon_vma = NULL; struct address_space *mapping = NULL; int extra_pins, ret; @@ -3010,6 +3045,26 @@ int split_huge_page_to_list(struct page *page, struct list_head *list) VM_BUG_ON_FOLIO(!folio_test_locked(folio), folio); VM_BUG_ON_FOLIO(!folio_test_large(folio), folio); + /* Cannot split THP to order-1 (no order-1 THPs) */ + if (new_order == 1) { + VM_WARN_ONCE(1, "Cannot split to order-1 folio"); + return -EINVAL; + } + + if (new_order) { + /* Split shmem folio to non-zero order not supported */ + if (shmem_mapping(folio->mapping)) { + VM_WARN_ONCE(1, "Split shmem folio to non-0 order not support"); + return -EINVAL; + } + /* No split if the file system does not support large folio */ + if (!mapping_large_folio_support(folio->mapping)) { + VM_WARN_ONCE(1, "Split file folio to non-0 order not support"); + return -EINVAL; + } + } + + is_hzp = is_huge_zero_page(&folio->page); if (is_hzp) { pr_warn_ratelimited("Called split_huge_page for huge zero page\n"); @@ -3105,14 +3160,21 @@ int split_huge_page_to_list(struct page *page, struct list_head *list) if (folio_ref_freeze(folio, 1 + extra_pins)) { if (!list_empty(&folio->_deferred_list)) { ds_queue->split_queue_len--; - list_del(&folio->_deferred_list); + /* + * Reinitialize page_deferred_list after removing the + * page from the split_queue, otherwise a subsequent + * split will see list corruption when checking the + * page_deferred_list. 
+ */ + list_del_init(&folio->_deferred_list); } spin_unlock(&ds_queue->split_queue_lock); if (mapping) { int nr = folio_nr_pages(folio); xas_split(&xas, folio, folio_order(folio)); - if (folio_test_pmd_mappable(folio)) { + if (folio_test_pmd_mappable(folio) && + new_order < HPAGE_PMD_ORDER) { if (folio_test_swapbacked(folio)) { __lruvec_stat_mod_folio(folio, NR_SHMEM_THPS, -nr); @@ -3124,7 +3186,7 @@ int split_huge_page_to_list(struct page *page, struct list_head *list) } } - __split_huge_page(page, list, end); + __split_huge_page(page, list, end, new_order); ret = 0; } else { spin_unlock(&ds_queue->split_queue_lock); From patchwork Tue Feb 13 21:55:19 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Zi Yan X-Patchwork-Id: 13555721 Received: from out5-smtp.messagingengine.com (out5-smtp.messagingengine.com [66.111.4.29]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 89E3462A1C; Tue, 13 Feb 2024 21:55:52 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=66.111.4.29 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1707861354; cv=none; b=ug7aSTNkCSZhsdrM5R1OC2lsd8BIMRQSa3iBIDZBfNQQsWGcIhPolL2Y9ugeJKKLnGRtvIk8NqawxqfCw4Ym0zqxYF3+QDiZ5+LMdMmDxSpz+TDKPE3fWvMnc/FEhqODs4a9Z88W+Howsc1rngKQUg541sgS4S0m4BWLcxXf2KQ= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1707861354; c=relaxed/simple; bh=OvAIGuwlGYeYczWW+K0KagfhK17treRqEU1nj29vKyk=; h=From:To:Cc:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version:Content-Type; b=LhRI4XBX0SBYLpXHkIIWuC6bPHBVbi4SFa1mAR9BcpkdHP2dzPXoDCfq+s9dq685hPCJlSJ9bAZ2krr0nNtnOnlmerLAmV7Q4xNdZzDS2TeC97dun6KoP0whU5UyK1rgYrqkJfaR7KtKBUyhJBDc380OTkNYjkfWb40j6t8j2jY= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=sent.com; spf=pass smtp.mailfrom=sent.com; dkim=pass (2048-bit key) header.d=sent.com header.i=@sent.com header.b=WvLMGzG8; dkim=pass (2048-bit key) header.d=messagingengine.com header.i=@messagingengine.com header.b=sd8v5LS0; arc=none smtp.client-ip=66.111.4.29 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=sent.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=sent.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=sent.com header.i=@sent.com header.b="WvLMGzG8"; dkim=pass (2048-bit key) header.d=messagingengine.com header.i=@messagingengine.com header.b="sd8v5LS0" Received: from compute6.internal (compute6.nyi.internal [10.202.2.47]) by mailout.nyi.internal (Postfix) with ESMTP id 7B87D5C0117; Tue, 13 Feb 2024 16:55:51 -0500 (EST) Received: from mailfrontend2 ([10.202.2.163]) by compute6.internal (MEProxy); Tue, 13 Feb 2024 16:55:51 -0500 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=sent.com; h=cc :cc:content-transfer-encoding:content-type:content-type:date :date:from:from:in-reply-to:in-reply-to:message-id:mime-version :references:reply-to:reply-to:subject:subject:to:to; s=fm3; t= 1707861351; x=1707947751; bh=oTNlS8iSN/pL1arvPd/5QTJjaetwy6e9fk4 UTiJDrr4=; b=WvLMGzG8wXGQ5WLFCyib254XrZQHLxadtOA5IFge31vgtZsUNbw FPxOtPftSxp73UoR9iEg0kRZSjAt3xVySIfSZPJZ+naXMvtREPI0U3qhxpCwOnqu wwMAdeHW4ZkZaro6be6Dqd0SALeZhxUxx/qjy5Bkw/ko2pogmrCh5Gh0XPG4e4kd dya0nrayzU3kjfIG4G00T9cPLYiLhzzsAz8mnQFVCyxAZ65g7Af73FntZ1IvA56I 
From: Zi Yan
Subject: [PATCH v4 6/7] mm: truncate: split huge page cache page to a non-zero order if possible.
Date: Tue, 13 Feb 2024 16:55:19 -0500
Message-ID: <20240213215520.1048625-7-zi.yan@sent.com>
In-Reply-To: <20240213215520.1048625-1-zi.yan@sent.com>

From: Zi Yan

To minimize the number of pages after a huge page cache page is truncated, we
do not need to split it all the way down to order-0. The huge page has at
most three parts: the part before the offset, the part to be truncated, and
the part remaining at the end. Compute the greatest common divisor of their
sizes and derive the new page order from it, so we can split the huge page to
that order and keep the remaining pages as large and as few as possible.
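[Editorial illustration, not part of the patch: a standalone userspace sketch of the order computation with a made-up folio geometry; gcd() and ilog2() are reimplemented locally rather than taken from the kernel headers, and round_up() is approximated.]

/* Sketch of the order calculation in truncate_inode_partial_folio(); not kernel code. */
#include <stdio.h>

#define PAGE_SIZE 4096u

static unsigned int gcd(unsigned int a, unsigned int b)
{
	while (b) {
		unsigned int t = a % b;
		a = b;
		b = t;
	}
	return a;
}

static unsigned int ilog2(unsigned int x)
{
	unsigned int r = 0;
	while (x >>= 1)
		r++;
	return r;
}

int main(void)
{
	unsigned int folio_size = 512 * PAGE_SIZE;	/* an order-9, 2MB folio */
	unsigned int offset    = 64 * PAGE_SIZE;	/* part kept before the truncated range */
	unsigned int length    = 128 * PAGE_SIZE;	/* part being truncated */
	unsigned int remaining = folio_size - offset - length;	/* part kept at the end */

	unsigned int g = gcd(gcd(offset, length), remaining);
	if (g < PAGE_SIZE)	/* stand-in for round_up(), so ilog2() never sees 0 */
		g = PAGE_SIZE;

	unsigned int new_order = ilog2(g / PAGE_SIZE);
	if (new_order == 1)	/* order-1 THP not supported, fall back to order-0 */
		new_order = 0;

	printf("split the folio to order %u\n", new_order);	/* prints 6 here */
	return 0;
}

With these example numbers, gcd(256KB, 512KB, 1280KB) is 256KB, i.e. 64 pages, so the folio would be split into order-6 pieces instead of 512 order-0 pages.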
Signed-off-by: Zi Yan
---
 mm/truncate.c | 21 +++++++++++++++++++--
 1 file changed, 19 insertions(+), 2 deletions(-)

diff --git a/mm/truncate.c b/mm/truncate.c
index 725b150e47ac..49ddbbf7a617 100644
--- a/mm/truncate.c
+++ b/mm/truncate.c
@@ -21,6 +21,7 @@
 #include
 #include
 #include
+#include
 #include "internal.h"

 /*
@@ -210,7 +211,8 @@ int truncate_inode_folio(struct address_space *mapping, struct folio *folio)
 bool truncate_inode_partial_folio(struct folio *folio, loff_t start, loff_t end)
 {
 	loff_t pos = folio_pos(folio);
-	unsigned int offset, length;
+	unsigned int offset, length, remaining;
+	unsigned int new_order = folio_order(folio);

 	if (pos < start)
 		offset = start - pos;
@@ -221,6 +223,7 @@ bool truncate_inode_partial_folio(struct folio *folio, loff_t start, loff_t end)
 		length = length - offset;
 	else
 		length = end + 1 - pos - offset;
+	remaining = folio_size(folio) - offset - length;

 	folio_wait_writeback(folio);
 	if (length == folio_size(folio)) {
@@ -235,11 +238,25 @@ bool truncate_inode_partial_folio(struct folio *folio, loff_t start, loff_t end)
 	 */
 	folio_zero_range(folio, offset, length);

+	/*
+	 * Use the greatest common divisor of offset, length, and remaining
+	 * as the smallest page size and compute the new order from it. So we
+	 * can truncate a subpage as large as possible. Round up gcd to
+	 * PAGE_SIZE, otherwise ilog2 can give -1 when gcd/PAGE_SIZE is 0.
+	 */
+	new_order = ilog2(round_up(gcd(gcd(offset, length), remaining),
+				   PAGE_SIZE) / PAGE_SIZE);
+
+	/* order-1 THP not supported, downgrade to order-0 */
+	if (new_order == 1)
+		new_order = 0;
+
+
 	if (folio_has_private(folio))
 		folio_invalidate(folio, offset, length);
 	if (!folio_test_large(folio))
 		return true;
-	if (split_folio(folio) == 0)
+	if (split_huge_page_to_list_to_order(&folio->page, NULL, new_order) == 0)
 		return true;
 	if (folio_test_dirty(folio))
 		return false;

From patchwork Tue Feb 13 21:55:20 2024
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Zi Yan
X-Patchwork-Id: 13555722
From: Zi Yan
To: "Pankaj Raghav (Samsung)", linux-mm@kvack.org
Cc: Zi Yan, "Matthew Wilcox (Oracle)", David Hildenbrand, Yang Shi, Yu Zhao,
 "Kirill A . Shutemov", Ryan Roberts, Michal Koutný, Roman Gushchin,
 "Zach O'Keefe", Hugh Dickins, Mcgrof Chamberlain, Andrew Morton,
 linux-kernel@vger.kernel.org, cgroups@vger.kernel.org,
 linux-fsdevel@vger.kernel.org, linux-kselftest@vger.kernel.org
Subject: [PATCH v4 7/7] mm: huge_memory: enable debugfs to split huge pages
 to any order.
Date: Tue, 13 Feb 2024 16:55:20 -0500
Message-ID: <20240213215520.1048625-8-zi.yan@sent.com>
X-Mailer: git-send-email 2.43.0
In-Reply-To: <20240213215520.1048625-1-zi.yan@sent.com>
References: <20240213215520.1048625-1-zi.yan@sent.com>
Reply-To: Zi Yan
Precedence: bulk
X-Mailing-List: linux-kselftest@vger.kernel.org
MIME-Version: 1.0

From: Zi Yan

Extend the split_huge_pages debugfs interface to take an optional new
order, so it can be used to test split_huge_page_to_list_to_order() on
pagecache THPs. Also add test cases for split_huge_page_to_list_to_order()
via debugfs, by truncating a file, and by punching holes in a file.
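For reference, the debugfs input formats gain an optional trailing
new_order field, "<pid>,<vaddr_start>,<vaddr_end>[,<new_order>]" and
"<path>,<off_start>,<off_end>[,<new_order>]", defaulting to 0 when omitted.
Below is a minimal userspace sketch of driving the file-path form
(illustrative only, not part of the patch; the path, page offsets, and
order are made up):

	/*
	 * Illustrative only (not part of the patch): ask the kernel to split
	 * the pagecache folios of a file in a page-offset range down to
	 * order 4 via the extended debugfs interface.
	 */
	#include <fcntl.h>
	#include <stdio.h>
	#include <string.h>
	#include <unistd.h>

	int main(void)
	{
		char cmd[256];
		int fd = open("/sys/kernel/debug/split_huge_pages", O_WRONLY);

		if (fd < 0) {
			perror("open split_huge_pages");
			return 1;
		}
		/* <path>,<off_start>,<off_end>[,<new_order>], offsets in pages */
		snprintf(cmd, sizeof(cmd), "%s,0x%lx,0x%lx,%d",
			 "/mnt/thp_fs/test", 0x0UL, 0x200UL, 4);
		if (write(fd, cmd, strlen(cmd)) != (ssize_t)strlen(cmd))
			perror("write split_huge_pages");
		close(fd);
		return 0;
	}

The pid form is written the same way, with the two hexadecimal values
taken as virtual addresses instead of page offsets.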
Signed-off-by: Zi Yan
---
 mm/huge_memory.c                              |  34 ++-
 .../selftests/mm/split_huge_page_test.c       | 223 +++++++++++++++++-
 2 files changed, 239 insertions(+), 18 deletions(-)

diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index d0e555a8ea98..0564b007cbd1 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -3399,7 +3399,7 @@ static inline bool vma_not_suitable_for_thp_split(struct vm_area_struct *vma)
 }

 static int split_huge_pages_pid(int pid, unsigned long vaddr_start,
-				unsigned long vaddr_end)
+				unsigned long vaddr_end, unsigned int new_order)
 {
 	int ret = 0;
 	struct task_struct *task;
@@ -3463,13 +3463,19 @@ static int split_huge_pages_pid(int pid, unsigned long vaddr_start,
 			goto next;

 		total++;
-		if (!can_split_folio(folio, NULL))
+		/*
+		 * For folios with private, split_huge_page_to_list_to_order()
+		 * will try to drop it before split and then check if the folio
+		 * can be split or not. So skip the check here.
+		 */
+		if (!folio_test_private(folio) &&
+		    !can_split_folio(folio, NULL))
 			goto next;

 		if (!folio_trylock(folio))
 			goto next;

-		if (!split_folio(folio))
+		if (!split_folio_to_order(folio, new_order))
 			split++;

 		folio_unlock(folio);
@@ -3487,7 +3493,7 @@ static int split_huge_pages_pid(int pid, unsigned long vaddr_start,
 }

 static int split_huge_pages_in_file(const char *file_path, pgoff_t off_start,
-				pgoff_t off_end)
+				pgoff_t off_end, unsigned int new_order)
 {
 	struct filename *file;
 	struct file *candidate;
@@ -3526,7 +3532,7 @@ static int split_huge_pages_in_file(const char *file_path, pgoff_t off_start,
 		if (!folio_trylock(folio))
 			goto next;

-		if (!split_folio(folio))
+		if (!split_folio_to_order(folio, new_order))
 			split++;

 		folio_unlock(folio);
@@ -3551,10 +3557,14 @@ static ssize_t split_huge_pages_write(struct file *file, const char __user *buf,
 {
 	static DEFINE_MUTEX(split_debug_mutex);
 	ssize_t ret;
-	/* hold pid, start_vaddr, end_vaddr or file_path, off_start, off_end */
+	/*
+	 * hold pid, start_vaddr, end_vaddr, new_order or
+	 * file_path, off_start, off_end, new_order
+	 */
 	char input_buf[MAX_INPUT_BUF_SZ];
 	int pid;
 	unsigned long vaddr_start, vaddr_end;
+	unsigned int new_order = 0;

 	ret = mutex_lock_interruptible(&split_debug_mutex);
 	if (ret)
@@ -3583,29 +3593,29 @@ static ssize_t split_huge_pages_write(struct file *file, const char __user *buf,
 			goto out;
 		}

-		ret = sscanf(buf, "0x%lx,0x%lx", &off_start, &off_end);
-		if (ret != 2) {
+		ret = sscanf(buf, "0x%lx,0x%lx,%d", &off_start, &off_end, &new_order);
+		if (ret != 2 && ret != 3) {
 			ret = -EINVAL;
 			goto out;
 		}
-		ret = split_huge_pages_in_file(file_path, off_start, off_end);
+		ret = split_huge_pages_in_file(file_path, off_start, off_end, new_order);
 		if (!ret)
 			ret = input_len;

 		goto out;
 	}

-	ret = sscanf(input_buf, "%d,0x%lx,0x%lx", &pid, &vaddr_start, &vaddr_end);
+	ret = sscanf(input_buf, "%d,0x%lx,0x%lx,%d", &pid, &vaddr_start, &vaddr_end, &new_order);
 	if (ret == 1 && pid == 1) {
 		split_huge_pages_all();
 		ret = strlen(input_buf);
 		goto out;
-	} else if (ret != 3) {
+	} else if (ret != 3 && ret != 4) {
 		ret = -EINVAL;
 		goto out;
 	}

-	ret = split_huge_pages_pid(pid, vaddr_start, vaddr_end);
+	ret = split_huge_pages_pid(pid, vaddr_start, vaddr_end, new_order);
 	if (!ret)
 		ret = strlen(input_buf);
 out:
diff --git a/tools/testing/selftests/mm/split_huge_page_test.c b/tools/testing/selftests/mm/split_huge_page_test.c
index 7b698a848bab..ffed5ae24566 100644
--- a/tools/testing/selftests/mm/split_huge_page_test.c
+++ b/tools/testing/selftests/mm/split_huge_page_test.c
@@ -16,6 +16,7 @@
 #include
 #include
 #include
+#include
 #include "vm_util.h"
 #include "../kselftest.h"
@@ -24,10 +25,12 @@ unsigned int pageshift;
 uint64_t pmd_pagesize;

 #define SPLIT_DEBUGFS "/sys/kernel/debug/split_huge_pages"
+#define SMAP_PATH "/proc/self/smaps"
+#define THP_FS_PATH "/mnt/thp_fs"
 #define INPUT_MAX 80

-#define PID_FMT "%d,0x%lx,0x%lx"
-#define PATH_FMT "%s,0x%lx,0x%lx"
+#define PID_FMT "%d,0x%lx,0x%lx,%d"
+#define PATH_FMT "%s,0x%lx,0x%lx,%d"

 #define PFN_MASK ((1UL<<55)-1)
 #define KPF_THP (1UL<<22)
@@ -102,7 +105,7 @@ void split_pmd_thp(void)

 	/* split all THPs */
 	write_debugfs(PID_FMT, getpid(), (uint64_t)one_page,
-		      (uint64_t)one_page + len);
+		      (uint64_t)one_page + len, 0);

 	for (i = 0; i < len; i++)
 		if (one_page[i] != (char)i)
@@ -177,7 +180,7 @@ void split_pte_mapped_thp(void)

 	/* split all remapped THPs */
 	write_debugfs(PID_FMT, getpid(), (uint64_t)pte_mapped,
-		      (uint64_t)pte_mapped + pagesize * 4);
+		      (uint64_t)pte_mapped + pagesize * 4, 0);

 	/* smap does not show THPs after mremap, use kpageflags instead */
 	thp_size = 0;
@@ -237,7 +240,7 @@ void split_file_backed_thp(void)
 	}

 	/* split the file-backed THP */
-	write_debugfs(PATH_FMT, testfile, pgoff_start, pgoff_end);
+	write_debugfs(PATH_FMT, testfile, pgoff_start, pgoff_end, 0);

 	status = unlink(testfile);
 	if (status) {
@@ -265,8 +268,188 @@ void split_file_backed_thp(void)
 	ksft_exit_fail_msg("Error occurred\n");
 }

+void create_pagecache_thp_and_fd(const char *testfile, size_t fd_size, int *fd, char **addr)
+{
+	size_t i;
+	int dummy;
+
+	srand(time(NULL));
+
+	*fd = open(testfile, O_CREAT | O_RDWR, 0664);
+	if (*fd == -1)
+		ksft_exit_fail_msg("Failed to create a file at "THP_FS_PATH);
+
+	for (i = 0; i < fd_size; i++) {
+		unsigned char byte = (unsigned char)i;
+
+		write(*fd, &byte, sizeof(byte));
+	}
+	close(*fd);
+	sync();
+	*fd = open("/proc/sys/vm/drop_caches", O_WRONLY);
+	if (*fd == -1) {
+		ksft_perror("open drop_caches");
+		goto err_out_unlink;
+	}
+	if (write(*fd, "3", 1) != 1) {
+		ksft_perror("write to drop_caches");
+		goto err_out_unlink;
+	}
+	close(*fd);
+
+	*fd = open(testfile, O_RDWR);
+	if (*fd == -1) {
+		ksft_perror("Failed to open a file at "THP_FS_PATH);
+		goto err_out_unlink;
+	}
+
+	*addr = mmap(NULL, fd_size, PROT_READ|PROT_WRITE, MAP_SHARED, *fd, 0);
+	if (*addr == (char *)-1) {
+		ksft_perror("cannot mmap");
+		goto err_out_close;
+	}
+	madvise(*addr, fd_size, MADV_HUGEPAGE);
+
+	for (size_t i = 0; i < fd_size; i++)
+		dummy += *(*addr + i);
+
+	if (!check_huge_file(*addr, fd_size / pmd_pagesize, pmd_pagesize)) {
+		ksft_print_msg("No large pagecache folio generated, please mount a filesystem supporting large folio at "THP_FS_PATH"\n");
+		goto err_out_close;
+	}
+	return;
+err_out_close:
+	close(*fd);
+err_out_unlink:
+	unlink(testfile);
+	ksft_exit_fail_msg("Failed to create large pagecache folios\n");
+}
+
+void split_thp_in_pagecache_to_order(size_t fd_size, int order)
+{
+	int fd;
+	char *addr;
+	size_t i;
+	const char testfile[] = THP_FS_PATH "/test";
+	int err = 0;
+
+	create_pagecache_thp_and_fd(testfile, fd_size, &fd, &addr);
+
+	write_debugfs(PID_FMT, getpid(), (uint64_t)addr, (uint64_t)addr + fd_size, order);
+
+	for (i = 0; i < fd_size; i++)
+		if (*(addr + i) != (char)i) {
+			ksft_print_msg("%lu byte corrupted in the file\n", i);
+			err = EXIT_FAILURE;
+			goto out;
+		}
+
+	if (!check_huge_file(addr, 0, pmd_pagesize)) {
+		ksft_print_msg("Still FilePmdMapped not split\n");
+		err = EXIT_FAILURE;
+		goto out;
+	}
+
+out:
+	close(fd);
+	unlink(testfile);
+	if (err)
+		ksft_exit_fail_msg("Split PMD-mapped pagecache folio to order %d failed\n", order);
+	ksft_test_result_pass("Split PMD-mapped pagecache folio to order %d passed\n", order);
+}
+
+void truncate_thp_in_pagecache_to_order(size_t fd_size, int order)
+{
+	int fd;
+	char *addr;
+	size_t i;
+	const char testfile[] = THP_FS_PATH "/test";
+	int err = 0;
+
+	create_pagecache_thp_and_fd(testfile, fd_size, &fd, &addr);
+
+	ftruncate(fd, pagesize << order);
+
+	for (i = 0; i < (pagesize << order); i++)
+		if (*(addr + i) != (char)i) {
+			ksft_print_msg("%lu byte corrupted in the file\n", i);
+			err = EXIT_FAILURE;
+			goto out;
+		}
+
+	if (!check_huge_file(addr, 0, pmd_pagesize)) {
+		ksft_print_msg("Still FilePmdMapped not split after truncate\n");
+		err = EXIT_FAILURE;
+		goto out;
+	}
+
+out:
+	close(fd);
+	unlink(testfile);
+	if (err)
+		ksft_exit_fail_msg("Truncate PMD-mapped pagecache folio to order %d failed\n", order);
+	ksft_test_result_pass("Truncate PMD-mapped pagecache folio to order %d passed\n", order);
+}
+
+void punch_hole_in_pagecache_thp(size_t fd_size, off_t offset[], off_t len[],
+		int n, int num_left_thps)
+{
+	int fd, j;
+	char *addr;
+	size_t i;
+	const char testfile[] = THP_FS_PATH "/test";
+	int err = 0;
+
+	create_pagecache_thp_and_fd(testfile, fd_size, &fd, &addr);
+
+	for (j = 0; j < n; j++) {
+		ksft_print_msg("punch a hole to %ld kB PMD-mapped pagecache page at addr: %lx, offset %ld, and len %ld ...\n",
+			fd_size >> 10, (unsigned long)addr, offset[j], len[j]);
+		fallocate(fd, FALLOC_FL_PUNCH_HOLE|FALLOC_FL_KEEP_SIZE, offset[j], len[j]);
+	}
+
+	for (i = 0; i < fd_size; i++) {
+		int in_hole = 0;
+
+		for (j = 0; j < n; j++)
+			if (i >= offset[j] && i < (offset[j] + len[j])) {
+				in_hole = 1;
+				break;
+			}
+
+		if (in_hole) {
+			if (*(addr + i)) {
+				ksft_print_msg("%lu byte non-zero after punch\n", i);
+				err = EXIT_FAILURE;
+				goto out;
+			}
+			continue;
+		}
+		if (*(addr + i) != (char)i) {
+			ksft_print_msg("%lu byte corrupted in the file\n", i);
+			err = EXIT_FAILURE;
+			goto out;
+		}
+	}
+
+	if (!check_huge_file(addr, num_left_thps, pmd_pagesize)) {
+		ksft_print_msg("Still FilePmdMapped not split after punch\n");
+		goto out;
+	}
+out:
+	close(fd);
+	unlink(testfile);
+	if (err)
+		ksft_exit_fail_msg("Punch holes in PMD-mapped pagecache folio failed\n");
+	ksft_test_result_pass("Punch holes PMD-mapped pagecache folio passed\n");
+}
+
 int main(int argc, char **argv)
 {
+	int i;
+	size_t fd_size;
+	off_t offset[2], len[2];
+
 	ksft_print_header();

 	if (geteuid() != 0) {
@@ -274,7 +457,7 @@ int main(int argc, char **argv)
 		ksft_finished();
 	}

-	ksft_set_plan(3);
+	ksft_set_plan(3+8+9+2);

 	pagesize = getpagesize();
 	pageshift = ffs(pagesize) - 1;
@@ -282,9 +465,37 @@
 	if (!pmd_pagesize)
 		ksft_exit_fail_msg("Reading PMD pagesize failed\n");

+	fd_size = 2 * pmd_pagesize;
+
 	split_pmd_thp();
 	split_pte_mapped_thp();
 	split_file_backed_thp();

+	for (i = 8; i >= 0; i--)
+		if (i != 1)
+			split_thp_in_pagecache_to_order(fd_size, i);
+
+	/*
+	 * for i is 1, truncate code in the kernel should create order-0 pages
+	 * instead of order-1 THPs, since order-1 THP is not supported. No error
+	 * is expected.
+	 */
+	for (i = 8; i >= 0; i--)
+		truncate_thp_in_pagecache_to_order(fd_size, i);
+
+	offset[0] = 123;
+	offset[1] = 4 * pagesize;
+	len[0] = 200 * pagesize;
+	len[1] = 16 * pagesize;
+	punch_hole_in_pagecache_thp(fd_size, offset, len, 2, 1);
+
+	offset[0] = 259 * pagesize + pagesize / 2;
+	offset[1] = 33 * pagesize;
+	len[0] = 129 * pagesize;
+	len[1] = 16 * pagesize;
+	punch_hole_in_pagecache_thp(fd_size, offset, len, 2, 1);
+
 	ksft_finished();
+
+	return 0;
 }