
[02/10] mm/huge_memory: splitting set mapping+index before unfreeze

Message ID alpine.LSU.2.11.1811261516380.2275@eggly.anvils (mailing list archive)
State New, archived
Series huge_memory,khugepaged tmpfs split/collapse fixes

Commit Message

Hugh Dickins Nov. 26, 2018, 11:19 p.m. UTC
Huge tmpfs stress testing has occasionally hit shmem_undo_range()'s
VM_BUG_ON_PAGE(page_to_pgoff(page) != index, page).

Move the setting of mapping and index up before the page_ref_unfreeze()
in __split_huge_page_tail() to fix this: so that a page cache lookup
cannot get a reference while the tail's mapping and index are unstable.

In fact, might as well move them up before the smp_wmb(): I don't see
an actual need for that, but if I'm missing something, this way round
is safer than the other, and no less efficient.

You might argue that VM_BUG_ON_PAGE(page_to_pgoff(page) != index, page)
is misplaced, and should be left until after the trylock_page(); but
left as is, it has not crashed since, and gives more stringent assurance.

Fixes: e9b61f19858a5 ("thp: reintroduce split_huge_page()")
Requires: 605ca5ede764 ("mm/huge_memory.c: reorder operations in __split_huge_page_tail()")
Signed-off-by: Hugh Dickins <hughd@google.com>
Cc: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Cc: Konstantin Khlebnikov <khlebnikov@yandex-team.ru>
Cc: stable@vger.kernel.org # 4.8+
---
 mm/huge_memory.c | 12 ++++++------
 1 file changed, 6 insertions(+), 6 deletions(-)

Comments

Kirill A. Shutemov Nov. 27, 2018, 7:26 a.m. UTC | #1
On Mon, Nov 26, 2018 at 03:19:57PM -0800, Hugh Dickins wrote:
> Huge tmpfs stress testing has occasionally hit shmem_undo_range()'s
> VM_BUG_ON_PAGE(page_to_pgoff(page) != index, page).
> 
> Move the setting of mapping and index up before the page_ref_unfreeze()
> in __split_huge_page_tail() to fix this: so that a page cache lookup
> cannot get a reference while the tail's mapping and index are unstable.
> 
> In fact, might as well move them up before the smp_wmb(): I don't see
> an actual need for that, but if I'm missing something, this way round
> is safer than the other, and no less efficient.
> 
> You might argue that VM_BUG_ON_PAGE(page_to_pgoff(page) != index, page)
> is misplaced, and should be left until after the trylock_page(); but
> left as is, it has not crashed since, and gives more stringent assurance.
> 
> Fixes: e9b61f19858a5 ("thp: reintroduce split_huge_page()")
> Requires: 605ca5ede764 ("mm/huge_memory.c: reorder operations in __split_huge_page_tail()")
> Signed-off-by: Hugh Dickins <hughd@google.com>
> Cc: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
> Cc: Konstantin Khlebnikov <khlebnikov@yandex-team.ru>
> Cc: stable@vger.kernel.org # 4.8+

Acked-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>

Patch

diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 30100fac2341..cef2c256e7c4 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -2402,6 +2402,12 @@ static void __split_huge_page_tail(struct page *head, int tail,
 			 (1L << PG_unevictable) |
 			 (1L << PG_dirty)));
 
+	/* ->mapping in first tail page is compound_mapcount */
+	VM_BUG_ON_PAGE(tail > 2 && page_tail->mapping != TAIL_MAPPING,
+			page_tail);
+	page_tail->mapping = head->mapping;
+	page_tail->index = head->index + tail;
+
 	/* Page flags must be visible before we make the page non-compound. */
 	smp_wmb();
 
@@ -2422,12 +2428,6 @@ static void __split_huge_page_tail(struct page *head, int tail,
 	if (page_is_idle(head))
 		set_page_idle(page_tail);
 
-	/* ->mapping in first tail page is compound_mapcount */
-	VM_BUG_ON_PAGE(tail > 2 && page_tail->mapping != TAIL_MAPPING,
-			page_tail);
-	page_tail->mapping = head->mapping;
-
-	page_tail->index = head->index + tail;
 	page_cpupid_xchg_last(page_tail, page_cpupid_last(head));
 
 	/*