
[v2] mm: Remove folio from deferred split list before uncharging it

Message ID 20240311191835.312162-1-willy@infradead.org (mailing list archive)
State New
Series [v2] mm: Remove folio from deferred split list before uncharging it

Commit Message

Matthew Wilcox (Oracle) March 11, 2024, 7:18 p.m. UTC
When freeing a large folio, we must remove it from the deferred split
list before we uncharge it. Each memcg has its own deferred split list
(with its own lock); removing a folio from the deferred split list
while holding the wrong lock corrupts that list and causes various
related problems.
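To make the locking hazard concrete, below is a minimal user-space
model of the bug. All names in it (memcg, folio, uncharge,
undo_deferred_split) are illustrative stand-ins, not the kernel's
API: the point is only that the lock is chosen through the folio's
memcg pointer, so the removal must happen while that pointer still
identifies the queue the folio was added to.

/*
 * Minimal, compilable user-space model of the bug.  Hypothetical
 * types and function names; not kernel code.
 */
#include <stdio.h>

struct memcg {
	const char *name;
	int locked;			/* stand-in for the per-memcg spinlock */
};

struct folio {
	struct memcg *memcg;		/* owner; severed by uncharge */
	int on_deferred_list;
};

static struct memcg root  = { "root",  0 };
static struct memcg child = { "child", 0 };

/* Models uncharging: afterwards the folio no longer points at the
 * memcg whose deferred split list it may still sit on. */
static void uncharge(struct folio *f)
{
	f->memcg = &root;
}

/* Models removal from the deferred split list: the lock is picked via
 * f->memcg, so it must still name the queue the folio was queued on. */
static void undo_deferred_split(struct folio *f, struct memcg *queued_on)
{
	struct memcg *locked = f->memcg;

	locked->locked = 1;		/* "spin_lock" */
	if (locked != queued_on)
		printf("BUG: removing from %s's list under %s's lock\n",
		       queued_on->name, locked->name);
	f->on_deferred_list = 0;
	locked->locked = 0;		/* "spin_unlock" */
}

int main(void)
{
	struct folio f = { .memcg = &child, .on_deferred_list = 1 };
	struct folio g = { .memcg = &child, .on_deferred_list = 1 };

	/* Buggy order: uncharge first, then remove -> wrong lock taken. */
	uncharge(&f);
	undo_deferred_split(&f, &child);

	/* Fixed order (what this patch enforces): remove, then uncharge. */
	undo_deferred_split(&g, &child);
	uncharge(&g);

	return 0;
}

The patch applies exactly this reordering at three sites:
folio_undo_large_rmappable() is now called while the folio still
points at its memcg, before mem_cgroup_uncharge_folios() severs that
link.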

Link: https://lore.kernel.org/linux-mm/367a14f7-340e-4b29-90ae-bc3fcefdd5f4@arm.com/
Fixes: f77171d241e3 ("mm: allow non-hugetlb large folios to be batch processed")
Fixes: 29f3843026cf ("mm: free folios directly in move_folios_to_lru()")
Fixes: bc2ff4cbc329 ("mm: free folios in a batch in shrink_folio_list()")
Debugged-by: Ryan Roberts <ryan.roberts@arm.com>
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Tested-by: Ryan Roberts <ryan.roberts@arm.com>
---
v2: Add two more places Ryan spotted
 mm/swap.c   | 3 +++
 mm/vmscan.c | 6 ++++++
 2 files changed, 9 insertions(+)

Comments

David Hildenbrand March 13, 2024, 12:32 p.m. UTC | #1
On 11.03.24 20:18, Matthew Wilcox (Oracle) wrote:
> When freeing a large folio, we must remove it from the deferred split
> list before we uncharge it. Each memcg has its own deferred split list
> (with its own lock); removing a folio from the deferred split list
> while holding the wrong lock corrupts that list and causes various
> related problems.
> 
> Link: https://lore.kernel.org/linux-mm/367a14f7-340e-4b29-90ae-bc3fcefdd5f4@arm.com/
> Fixes: f77171d241e3 ("mm: allow non-hugetlb large folios to be batch processed")
> Fixes: 29f3843026cf ("mm: free folios directly in move_folios_to_lru()")
> Fixes: bc2ff4cbc329 ("mm: free folios in a batch in shrink_folio_list()")
> Debugged-by: Ryan Roberts <ryan.roberts@arm.com>
> Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
> Tested-by: Ryan Roberts <ryan.roberts@arm.com>
> ---

Reviewed-by: David Hildenbrand <david@redhat.com>

Patch

diff --git a/mm/swap.c b/mm/swap.c
index eaadbacf7f19..500a09a48dfd 100644
--- a/mm/swap.c
+++ b/mm/swap.c
@@ -1012,6 +1012,9 @@ void folios_put_refs(struct folio_batch *folios, unsigned int *refs)
 			free_huge_folio(folio);
 			continue;
 		}
+		if (folio_test_large(folio) &&
+		    folio_test_large_rmappable(folio))
+			folio_undo_large_rmappable(folio);
 
 		__page_cache_release(folio, &lruvec, &flags);
 
diff --git a/mm/vmscan.c b/mm/vmscan.c
index a0e53999a865..766da7de9e27 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -1436,6 +1436,9 @@ static unsigned int shrink_folio_list(struct list_head *folio_list,
 		 */
 		nr_reclaimed += nr_pages;
 
+		if (folio_test_large(folio) &&
+		    folio_test_large_rmappable(folio))
+			folio_undo_large_rmappable(folio);
 		if (folio_batch_add(&free_folios, folio) == 0) {
 			mem_cgroup_uncharge_folios(&free_folios);
 			try_to_unmap_flush();
@@ -1842,6 +1845,9 @@ static unsigned int move_folios_to_lru(struct lruvec *lruvec,
 		if (unlikely(folio_put_testzero(folio))) {
 			__folio_clear_lru_flags(folio);
 
+			if (folio_test_large(folio) &&
+			    folio_test_large_rmappable(folio))
+				folio_undo_large_rmappable(folio);
 			if (folio_batch_add(&free_folios, folio) == 0) {
 				spin_unlock_irq(&lruvec->lru_lock);
 				mem_cgroup_uncharge_folios(&free_folios);