[v2] mm/shmem: Fix race in shmem_undo_range w/THP

Message ID 20230418084031.3439795-1-stevensd@google.com
State New

Commit Message

David Stevens April 18, 2023, 8:40 a.m. UTC
From: David Stevens <stevensd@chromium.org>

Split folios during the second loop of shmem_undo_range(). It's not
sufficient to split folios only when dealing with partial pages, since
a THP can be faulted in after the partial pages have been handled.
Calling truncate_inode_folio() on such a THP can throw away data
outside of the range being targeted.
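
A hypothetical userspace sketch of the race (illustration only, not
part of this patch): one thread repeatedly punches a hole smaller than
a THP in a shmem file while a second thread keeps faulting the range
back in, which may instantiate a fresh large folio for the hole punch
to trip over. The memfd name, sizes, and iteration count below are
arbitrary choices, not from the original report:

/* Hypothetical reproducer sketch, not part of the patch: race
 * FALLOC_FL_PUNCH_HOLE against concurrent faults that may bring the
 * punched range back in as a THP. With the bug, a byte outside the
 * punched range could read back as zero. */
#define _GNU_SOURCE
#include <fcntl.h>
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

#define FILE_SIZE (4UL << 20)	/* 4 MiB shmem file */
#define HOLE_OFF  (1UL << 20)	/* hole inside a potential 2 MiB THP */
#define HOLE_LEN  (64UL << 10)	/* smaller than the THP */

static volatile int stop;

/* Keep touching every page so shmem can fault the range back in,
 * possibly as THPs, while the main thread punches holes. */
static void *faulter(void *arg)
{
	volatile char *map = arg;

	while (!stop)
		for (size_t i = 0; i < FILE_SIZE; i += 4096)
			(void)map[i];
	return NULL;
}

int main(void)
{
	pthread_t thr;
	char *map;
	int fd;

	fd = memfd_create("thp-race", 0);
	if (fd < 0 || ftruncate(fd, FILE_SIZE))
		return 1;
	map = mmap(NULL, FILE_SIZE, PROT_READ | PROT_WRITE, MAP_SHARED,
		   fd, 0);
	if (map == MAP_FAILED)
		return 1;
	madvise(map, FILE_SIZE, MADV_HUGEPAGE);
	memset(map, 0xaa, FILE_SIZE);

	pthread_create(&thr, NULL, faulter, map);
	for (int i = 0; i < 100000; i++) {
		if (fallocate(fd, FALLOC_FL_PUNCH_HOLE | FALLOC_FL_KEEP_SIZE,
			      HOLE_OFF, HOLE_LEN))
			return 1;
		/* A byte one page past the hole must survive the punch. */
		if (map[HOLE_OFF + HOLE_LEN + 4096] != (char)0xaa) {
			fprintf(stderr, "data lost outside punched range\n");
			return 2;
		}
		memset(map + HOLE_OFF, 0xaa, HOLE_LEN);	/* refill hole */
	}
	stop = 1;
	pthread_join(thr, NULL);
	return 0;
}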

Fixes: b9a8a4195c7d ("truncate,shmem: Handle truncates that split large folios")
Cc: stable@vger.kernel.org
Signed-off-by: David Stevens <stevensd@chromium.org>
---
v1 -> v2:
 - Actually drop pages after splitting a THP

 mm/shmem.c | 17 ++++++++++++++++-
 1 file changed, 16 insertions(+), 1 deletion(-)

Patch

diff --git a/mm/shmem.c b/mm/shmem.c
index 9218c955f482..226c94a257b1 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -1033,7 +1033,22 @@ static void shmem_undo_range(struct inode *inode, loff_t lstart, loff_t lend,
 				}
 				VM_BUG_ON_FOLIO(folio_test_writeback(folio),
 						folio);
-				truncate_inode_folio(mapping, folio);
+
+				if (!folio_test_large(folio)) {
+					truncate_inode_folio(mapping, folio);
+				} else if (truncate_inode_partial_folio(folio, lstart, lend)) {
+					/*
+					 * If we split the folio, reset the loop so that we
+					 * pick up the new subpages. Otherwise the THP
+					 * was entirely dropped or the target range was
+					 * zeroed, so just continue the loop as is.
+					 */
+					if (!folio_test_large(folio)) {
+						folio_unlock(folio);
+						index = start;
+						break;
+					}
+				}
 			}
 			folio_unlock(folio);
 		}