From patchwork Tue Feb 13 09:37:08 2024
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: "Pankaj Raghav (Samsung)"
X-Patchwork-Id: 13554790
From: "Pankaj Raghav (Samsung)"
To: linux-xfs@vger.kernel.org, linux-fsdevel@vger.kernel.org
Cc: mcgrof@kernel.org, gost.dev@samsung.com, akpm@linux-foundation.org,
	kbusch@kernel.org, djwong@kernel.org, chandan.babu@oracle.com,
	p.raghav@samsung.com, linux-kernel@vger.kernel.org, hare@suse.de,
	willy@infradead.org, linux-mm@kvack.org, david@fromorbit.com
Subject: [RFC v2 09/14] mm: Support order-1 folios in the page cache
Date: Tue, 13 Feb 2024 10:37:08 +0100
Message-ID: <20240213093713.1753368-10-kernel@pankajraghav.com>
In-Reply-To: <20240213093713.1753368-1-kernel@pankajraghav.com>
References: <20240213093713.1753368-1-kernel@pankajraghav.com>
From: "Matthew Wilcox (Oracle)"

Folios of order 1 have no space to store the deferred list. This is
not a problem for the page cache as file-backed folios are never
placed on the deferred list. All we need to do is prevent the core
MM from touching the deferred list for order 1 folios and remove the
code which prevented us from allocating order 1 folios.

Link: https://lore.kernel.org/linux-mm/90344ea7-4eec-47ee-5996-0c22f42d6a6a@google.com/
Signed-off-by: Matthew Wilcox (Oracle)
Reviewed-by: Hannes Reinecke
---
 include/linux/huge_mm.h |  7 +++++--
 mm/filemap.c            |  2 --
 mm/huge_memory.c        | 23 ++++++++++++++++++-----
 mm/internal.h           |  4 +---
 mm/readahead.c          |  3 ---
 5 files changed, 24 insertions(+), 15 deletions(-)

diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h
index 5adb86af35fc..916a2a539517 100644
--- a/include/linux/huge_mm.h
+++ b/include/linux/huge_mm.h
@@ -263,7 +263,7 @@ unsigned long thp_vma_allowable_orders(struct vm_area_struct *vma,
 unsigned long thp_get_unmapped_area(struct file *filp, unsigned long addr,
 		unsigned long len, unsigned long pgoff, unsigned long flags);
 
-void folio_prep_large_rmappable(struct folio *folio);
+struct folio *folio_prep_large_rmappable(struct folio *folio);
 bool can_split_folio(struct folio *folio, int *pextra_pins);
 int split_huge_page_to_list(struct page *page, struct list_head *list);
 static inline int split_huge_page(struct page *page)
@@ -410,7 +410,10 @@ static inline unsigned long thp_vma_allowable_orders(struct vm_area_struct *vma,
 	return 0;
 }
 
-static inline void folio_prep_large_rmappable(struct folio *folio) {}
+static inline struct folio *folio_prep_large_rmappable(struct folio *folio)
+{
+	return folio;
+}
 
 #define transparent_hugepage_flags 0UL
 
diff --git a/mm/filemap.c b/mm/filemap.c
index 7a6e15c47150..c8205a534532 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -1922,8 +1922,6 @@ struct folio *__filemap_get_folio(struct address_space *mapping, pgoff_t index,
 			gfp_t alloc_gfp = gfp;
 
 			err = -ENOMEM;
-			if (order == 1)
-				order = 0;
 			if (order < min_order)
 				order = min_order;
 			if (order > 0)
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index d897efc51025..6ec3417638a1 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -788,11 +788,15 @@ struct deferred_split *get_deferred_split_queue(struct folio *folio)
 }
 #endif
 
-void folio_prep_large_rmappable(struct folio *folio)
+struct folio *folio_prep_large_rmappable(struct folio *folio)
 {
-	VM_BUG_ON_FOLIO(folio_order(folio) < 2, folio);
-	INIT_LIST_HEAD(&folio->_deferred_list);
+	if (!folio || !folio_test_large(folio))
+		return folio;
+	if (folio_order(folio) > 1)
+		INIT_LIST_HEAD(&folio->_deferred_list);
 	folio_set_large_rmappable(folio);
+
+	return folio;
 }
 
 static inline bool is_transparent_hugepage(struct folio *folio)
@@ -3095,7 +3099,8 @@ int split_huge_page_to_list(struct page *page, struct list_head *list)
 	/* Prevent deferred_split_scan() touching ->_refcount */
 	spin_lock(&ds_queue->split_queue_lock);
 	if (folio_ref_freeze(folio, 1 + extra_pins)) {
-		if (!list_empty(&folio->_deferred_list)) {
+		if (folio_order(folio) > 1 &&
+		    !list_empty(&folio->_deferred_list)) {
 			ds_queue->split_queue_len--;
 			list_del(&folio->_deferred_list);
 		}
@@ -3146,6 +3151,9 @@ void folio_undo_large_rmappable(struct folio *folio)
 	struct deferred_split *ds_queue;
 	unsigned long flags;
 
+	if (folio_order(folio) <= 1)
+		return;
+
 	/*
 	 * At this point, there is no one trying to add the folio to
 	 * deferred_list. If folio is not in deferred_list, it's safe
@@ -3171,7 +3179,12 @@ void deferred_split_folio(struct folio *folio)
 #endif
 	unsigned long flags;
 
-	VM_BUG_ON_FOLIO(folio_order(folio) < 2, folio);
+	/*
+	 * Order 1 folios have no space for a deferred list, but we also
+	 * won't waste much memory by not adding them to the deferred list.
+	 */
+	if (folio_order(folio) <= 1)
+		return;
 
 	/*
 	 * The try_to_unmap() in page reclaim path might reach here too,
diff --git a/mm/internal.h b/mm/internal.h
index f309a010d50f..5174b5b0c344 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -419,9 +419,7 @@ static inline struct folio *page_rmappable_folio(struct page *page)
 {
 	struct folio *folio = (struct folio *)page;
 
-	if (folio && folio_order(folio) > 1)
-		folio_prep_large_rmappable(folio);
-	return folio;
+	return folio_prep_large_rmappable(folio);
 }
 
 static inline void prep_compound_head(struct page *page, unsigned int order)
diff --git a/mm/readahead.c b/mm/readahead.c
index a361fba18674..7d5f6a8792a8 100644
--- a/mm/readahead.c
+++ b/mm/readahead.c
@@ -560,9 +560,6 @@ void page_cache_ra_order(struct readahead_control *ractl,
 	/* Don't allocate pages past EOF */
 	while (order > min_order && index + (1UL << order) - 1 > limit)
 		order--;
 
-	/* THP machinery does not support order-1 */
-	if (order == 1)
-		order = 0;
 	if (order < min_order)
 		order = min_order;