
[v3,8/8] x86, mm, gup: prevent get_page() race with munmap in paravirt guest

Message ID 1576529149-14269-9-git-send-email-akaher@vmware.com (mailing list archive)
State New, archived
Series Backported fixes for 4.4 stable tree

Commit Message

Ajay Kaher Dec. 16, 2019, 8:45 p.m. UTC
From: Vlastimil Babka <vbabka@suse.cz>

The x86 version of get_user_pages_fast() relies on disabled interrupts to
synchronize gup_pte_range() between gup_get_pte(ptep) and get_page() against
a parallel munmap. The munmap side nulls the pte, then flushes TLBs, then
releases the page. As the TLB flush is done synchronously via IPI, disabling
interrupts blocks the page release, and get_page(), which assumes an existing
reference on the page, is thus safe.
However, when the TLB flush is done by a hypercall, e.g. in a Xen PV guest,
disabled interrupts provide no such blocking, and get_page() can succeed on a
page that was already freed or even reused.
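
Schematically, the problematic interleaving looks like this (an illustrative
timeline only, not code taken from the kernel):

    /*
     * gup_pte_range()                  munmap() path
     * ---------------                  -------------
     * local_irq_disable()
     * pte = gup_get_pte(ptep)
     *                                  pte_clear(...)
     *                                  flush_tlb_*()  - a hypercall, so it does
     *                                                   not wait for the
     *                                                   IRQ-disabled CPU
     *                                  put_page()     - page is freed
     * get_page(page)                   <- reference taken on a freed page
     */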

We have recently seen this happen with our 4.4 and 4.12 based kernels, with
userspace (java) that exits a thread, where mm_release() performs a futex_wake()
on tsk->clear_child_tid, and another thread in parallel unmaps the page that
tsk->clear_child_tid points to. The spurious get_page() succeeds, but futex code
immediately releases the page again, while it's already on a freelist. Symptoms
include a bad page state warning, general protection faults accessing a poisoned
list prev/next pointer in the freelist, or the free page pcplists of two CPUs
joined together in a single list. Oscar has also reproduced this scenario, with a
patch inserting delays before the get_page() to make the race window larger.

Fix this by removing the dependency on TLB flush interrupts in the same way as
the generic get_user_pages_fast() code, by using page_cache_get_speculative()
and revalidating the PTE contents after pinning the page. Mainline is safe since
4.13, where the x86 gup code was removed in favor of the common code. Accessing
the page table itself safely also relies on disabled interrupts and the TLB flush
IPIs that don't happen with hypercalls, which was acknowledged in commit
9e52fc2b50de ("x86/mm: Enable RCU based page table freeing
(CONFIG_HAVE_RCU_TABLE_FREE=y)"). That commit with its follow-ups should also be
backported for full safety, although our reproducer didn't hit a problem
without that backport.
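
In outline, the new pinning sequence is the following (a condensed sketch of the
pattern used by the diff below, wrapped in a hypothetical helper purely for
readability; the real hunk open-codes it inside gup_pte_range()):

    /* Hypothetical helper illustrating the pattern; not part of the patch. */
    static int pin_page_speculative(pte_t pte, pte_t *ptep, struct page *page)
    {
            /* Refuse pages whose refcount has already gone bad. */
            if (WARN_ON_ONCE(page_ref_count(page) < 0))
                    return 0;

            /*
             * Take a reference only if the refcount is still non-zero, i.e.
             * the page has not been freed under us; this no longer relies on
             * the munmap side being blocked by disabled interrupts.
             */
            if (!page_cache_get_speculative(page))
                    return 0;

            /* Re-read the PTE; if it changed, the pinned page may be stale. */
            if (unlikely(pte_val(pte) != pte_val(*ptep))) {
                    put_page(page);
                    return 0;
            }

            return 1;       /* page safely pinned */
    }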

Reproduced-by: Oscar Salvador <osalvador@suse.de>
Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Juergen Gross <jgross@suse.com>
Cc: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Cc: Vitaly Kuznetsov <vkuznets@redhat.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Dave Hansen <dave.hansen@linux.intel.com>
Cc: Andy Lutomirski <luto@kernel.org>

Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
---
 arch/x86/mm/gup.c | 16 +++++++++++++++-
 1 file changed, 15 insertions(+), 1 deletion(-)

Comments

Peter Zijlstra Dec. 16, 2019, 1:04 p.m. UTC | #1
On Tue, Dec 17, 2019 at 02:15:48AM +0530, Ajay Kaher wrote:
> From: Vlastimil Babka <vbabka@suse.cz>
> 
> The x86 version of get_user_pages_fast() relies on disabled interrupts to
> synchronize gup_pte_range() between gup_get_pte(ptep); and get_page() against
> a parallel munmap. The munmap side nulls the pte, then flushes TLBs, then
> releases the page. As TLB flush is done synchronously via IPI disabling
> interrupts blocks the page release, and get_page(), which assumes existing
> reference on page, is thus safe.
> However when TLB flush is done by a hypercall, e.g. in a Xen PV guest, there is
> no blocking thanks to disabled interrupts, and get_page() can succeed on a page
> that was already freed or even reused.
> 
> We have recently seen this happen with our 4.4 and 4.12 based kernels, with
> userspace (java) that exits a thread, where mm_release() performs a futex_wake()
> on tsk->clear_child_tid, and another thread in parallel unmaps the page where
> tsk->clear_child_tid points to. The spurious get_page() succeeds, but futex code
> immediately releases the page again, while it's already on a freelist. Symptoms
> include a bad page state warning, general protection faults accessing a poisoned
> list prev/next pointer in the freelist, or free page pcplists of two cpus joined
> together in a single list. Oscar has also reproduced this scenario, with a
> patch inserting delays before the get_page() to make the race window larger.
> 
> Fix this by removing the dependency on TLB flush interrupts the same way as the

This is supposed to be fixed by:

arch/x86/Kconfig:       select HAVE_RCU_TABLE_FREE              if PARAVIRT
Vitaly Kuznetsov Dec. 16, 2019, 1:30 p.m. UTC | #2
Peter Zijlstra <peterz@infradead.org> writes:

> On Tue, Dec 17, 2019 at 02:15:48AM +0530, Ajay Kaher wrote:
>> From: Vlastimil Babka <vbabka@suse.cz>
>> 
>> The x86 version of get_user_pages_fast() relies on disabled interrupts to
>> synchronize gup_pte_range() between gup_get_pte(ptep); and get_page() against
>> a parallel munmap. The munmap side nulls the pte, then flushes TLBs, then
>> releases the page. As TLB flush is done synchronously via IPI disabling
>> interrupts blocks the page release, and get_page(), which assumes existing
>> reference on page, is thus safe.
>> However when TLB flush is done by a hypercall, e.g. in a Xen PV guest, there is
>> no blocking thanks to disabled interrupts, and get_page() can succeed on a page
>> that was already freed or even reused.
>> 
>> We have recently seen this happen with our 4.4 and 4.12 based kernels, with
>> userspace (java) that exits a thread, where mm_release() performs a futex_wake()
>> on tsk->clear_child_tid, and another thread in parallel unmaps the page where
>> tsk->clear_child_tid points to. The spurious get_page() succeeds, but futex code
>> immediately releases the page again, while it's already on a freelist. Symptoms
>> include a bad page state warning, general protection faults accessing a poisoned
>> list prev/next pointer in the freelist, or free page pcplists of two cpus joined
>> together in a single list. Oscar has also reproduced this scenario, with a
>> patch inserting delays before the get_page() to make the race window larger.
>> 
>> Fix this by removing the dependency on TLB flush interrupts the same way as the
>
> This is supposed to be fixed by:
>
> arch/x86/Kconfig:       select HAVE_RCU_TABLE_FREE              if PARAVIRT
>

Yes,

but HAVE_RCU_TABLE_FREE was enabled on x86 only in 4.14:

commit 9e52fc2b50de3a1c08b44f94c610fbe998c0031a
Author: Vitaly Kuznetsov <vkuznets@redhat.com>
Date:   Mon Aug 28 10:22:51 2017 +0200

    x86/mm: Enable RCU based page table freeing (CONFIG_HAVE_RCU_TABLE_FREE=y)

and, if I understood correctly, Ajay is suggesting the patch for older
stable kernels (4.9 and 4.4 I would guess).
Peter Zijlstra Dec. 16, 2019, 1:47 p.m. UTC | #3
On Mon, Dec 16, 2019 at 02:30:44PM +0100, Vitaly Kuznetsov wrote:
> Peter Zijlstra <peterz@infradead.org> writes:
> 
> > On Tue, Dec 17, 2019 at 02:15:48AM +0530, Ajay Kaher wrote:
> >> From: Vlastimil Babka <vbabka@suse.cz>
> >> 
> >> The x86 version of get_user_pages_fast() relies on disabled interrupts to
> >> synchronize gup_pte_range() between gup_get_pte(ptep); and get_page() against
> >> a parallel munmap. The munmap side nulls the pte, then flushes TLBs, then
> >> releases the page. As TLB flush is done synchronously via IPI disabling
> >> interrupts blocks the page release, and get_page(), which assumes existing
> >> reference on page, is thus safe.
> >> However when TLB flush is done by a hypercall, e.g. in a Xen PV guest, there is
> >> no blocking thanks to disabled interrupts, and get_page() can succeed on a page
> >> that was already freed or even reused.
> >> 
> >> We have recently seen this happen with our 4.4 and 4.12 based kernels, with
> >> userspace (java) that exits a thread, where mm_release() performs a futex_wake()
> >> on tsk->clear_child_tid, and another thread in parallel unmaps the page where
> >> tsk->clear_child_tid points to. The spurious get_page() succeeds, but futex code
> >> immediately releases the page again, while it's already on a freelist. Symptoms
> >> include a bad page state warning, general protection faults accessing a poisoned
> >> list prev/next pointer in the freelist, or free page pcplists of two cpus joined
> >> together in a single list. Oscar has also reproduced this scenario, with a
> >> patch inserting delays before the get_page() to make the race window larger.
> >> 
> >> Fix this by removing the dependency on TLB flush interrupts the same way as the
> >
> > This is supposed to be fixed by:
> >
> > arch/x86/Kconfig:       select HAVE_RCU_TABLE_FREE              if PARAVIRT
> >
> 
> Yes,
> 
> but HAVE_RCU_TABLE_FREE was enabled on x86 only in 4.14:
> 
> commit 9e52fc2b50de3a1c08b44f94c610fbe998c0031a
> Author: Vitaly Kuznetsov <vkuznets@redhat.com>
> Date:   Mon Aug 28 10:22:51 2017 +0200
> 
>     x86/mm: Enable RCU based page table freeing (CONFIG_HAVE_RCU_TABLE_FREE=y)
> 
> and, if I understood correctly, Ajay is suggesting the patch for older
> stable kernels (4.9 and 4.4 I would guess).

It wasn't at all clear this was targeted at old kernels (I only got this
one patch).

And why can't those necro kernels do backports of the upstream solution?
Vlastimil Babka Dec. 16, 2019, 3:11 p.m. UTC | #4
On 12/16/19 2:47 PM, Peter Zijlstra wrote:
> On Mon, Dec 16, 2019 at 02:30:44PM +0100, Vitaly Kuznetsov wrote:
>> Peter Zijlstra <peterz@infradead.org> writes:
>>
>>> On Tue, Dec 17, 2019 at 02:15:48AM +0530, Ajay Kaher wrote:
>>>> From: Vlastimil Babka <vbabka@suse.cz>
>>>>
>>>> The x86 version of get_user_pages_fast() relies on disabled interrupts to
>>>> synchronize gup_pte_range() between gup_get_pte(ptep); and get_page() against
>>>> a parallel munmap. The munmap side nulls the pte, then flushes TLBs, then
>>>> releases the page. As TLB flush is done synchronously via IPI disabling
>>>> interrupts blocks the page release, and get_page(), which assumes existing
>>>> reference on page, is thus safe.
>>>> However when TLB flush is done by a hypercall, e.g. in a Xen PV guest, there is
>>>> no blocking thanks to disabled interrupts, and get_page() can succeed on a page
>>>> that was already freed or even reused.
>>>>
>>>> We have recently seen this happen with our 4.4 and 4.12 based kernels, with
>>>> userspace (java) that exits a thread, where mm_release() performs a futex_wake()
>>>> on tsk->clear_child_tid, and another thread in parallel unmaps the page where
>>>> tsk->clear_child_tid points to. The spurious get_page() succeeds, but futex code
>>>> immediately releases the page again, while it's already on a freelist. Symptoms
>>>> include a bad page state warning, general protection faults accessing a poisoned
>>>> list prev/next pointer in the freelist, or free page pcplists of two cpus joined
>>>> together in a single list. Oscar has also reproduced this scenario, with a
>>>> patch inserting delays before the get_page() to make the race window larger.
>>>>
>>>> Fix this by removing the dependency on TLB flush interrupts the same way as the
>>>
>>> This is supposed to be fixed by:
>>>
>>> arch/x86/Kconfig:       select HAVE_RCU_TABLE_FREE              if PARAVIRT
>>>
>>
>> Yes,

Well, that commit fixes the "page table can be freed under us" part. But
this patch is about the "get_page() will succeed on a page that's being
freed" part. Upstream fixed that unknowingly in 4.13 by a gup.c
refactoring that would be too risky to backport fully.

>> but HAVE_RCU_TABLE_FREE was enabled on x86 only in 4.14:
>>
>> commit 9e52fc2b50de3a1c08b44f94c610fbe998c0031a
>> Author: Vitaly Kuznetsov <vkuznets@redhat.com>
>> Date:   Mon Aug 28 10:22:51 2017 +0200
>>
>>     x86/mm: Enable RCU based page table freeing (CONFIG_HAVE_RCU_TABLE_FREE=y)
>>
>> and, if I understood correctly, Ajay is suggesting the patch for older
>> stable kernels (4.9 and 4.4 I would guess).
> 
> It wasn't at all clear this was targeted at old kernels (I only got this
> one patch).
> 
> And why can't those necro kernels do backports of the upstream solution?
>
Vitaly Kuznetsov Dec. 16, 2019, 4:08 p.m. UTC | #5
Vlastimil Babka <vbabka@suse.cz> writes:

> On 12/16/19 2:47 PM, Peter Zijlstra wrote:
>> On Mon, Dec 16, 2019 at 02:30:44PM +0100, Vitaly Kuznetsov wrote:
>>> Peter Zijlstra <peterz@infradead.org> writes:
>>>
>>>> On Tue, Dec 17, 2019 at 02:15:48AM +0530, Ajay Kaher wrote:
>>>>> From: Vlastimil Babka <vbabka@suse.cz>
>>>>>
>>>>> The x86 version of get_user_pages_fast() relies on disabled interrupts to
>>>>> synchronize gup_pte_range() between gup_get_pte(ptep); and get_page() against
>>>>> a parallel munmap. The munmap side nulls the pte, then flushes TLBs, then
>>>>> releases the page. As TLB flush is done synchronously via IPI disabling
>>>>> interrupts blocks the page release, and get_page(), which assumes existing
>>>>> reference on page, is thus safe.
>>>>> However when TLB flush is done by a hypercall, e.g. in a Xen PV guest, there is
>>>>> no blocking thanks to disabled interrupts, and get_page() can succeed on a page
>>>>> that was already freed or even reused.
>>>>>
>>>>> We have recently seen this happen with our 4.4 and 4.12 based kernels, with
>>>>> userspace (java) that exits a thread, where mm_release() performs a futex_wake()
>>>>> on tsk->clear_child_tid, and another thread in parallel unmaps the page where
>>>>> tsk->clear_child_tid points to. The spurious get_page() succeeds, but futex code
>>>>> immediately releases the page again, while it's already on a freelist. Symptoms
>>>>> include a bad page state warning, general protection faults accessing a poisoned
>>>>> list prev/next pointer in the freelist, or free page pcplists of two cpus joined
>>>>> together in a single list. Oscar has also reproduced this scenario, with a
>>>>> patch inserting delays before the get_page() to make the race window larger.
>>>>>
>>>>> Fix this by removing the dependency on TLB flush interrupts the same way as the
>>>>
>>>> This is supposed to be fixed by:
>>>>
>>>> arch/x86/Kconfig:       select HAVE_RCU_TABLE_FREE              if PARAVIRT
>>>>
>>>
>>> Yes,
>
> Well, that commit fixes the "page table can be freed under us" part. But
> this patch is about the "get_page() will succeed on a page that's being
> freed" part. Upstream fixed that unknowingly in 4.13 by a gup.c
> refactoring that would be too risky to backport fully.
>

(I also dislike receiving only this patch of the series, next time
please send the whole thing, it's only 8 patches, our mailfolders will
survive that)

When I was adding Hyper-V PV TLB flush to RHEL7 - which is 3.10 based -
in addition to adding page_cache_get_speculative() to
gup_get_pte()/gup_huge_pmd()/gup_huge_pud(), I also had to synchronize
huge PMD split against gup_fast with the following hack:

+static void do_nothing(void *unused)
+{
+
+}
+
+static void serialize_against_pte_lookup(struct mm_struct *mm)
+{
+       smp_mb();
+       smp_call_function_many(mm_cpumask(mm), do_nothing, NULL, 1);
+}
+
 void pmdp_splitting_flush(struct vm_area_struct *vma,
                          unsigned long address, pmd_t *pmdp)
 {
@@ -434,9 +473,10 @@ void pmdp_splitting_flush(struct vm_area_struct *vma,
        set = !test_and_set_bit(_PAGE_BIT_SPLITTING,
                                (unsigned long *)pmdp);
        if (set) {
                /* need tlb flush only to serialize against gup-fast */
                flush_tlb_range(vma, address, address + HPAGE_PMD_SIZE);
+               if (pv_mmu_ops.flush_tlb_others != native_flush_tlb_others)
+                       serialize_against_pte_lookup(vma->vm_mm);
        }
 }

I'm not sure which stable kernel you're targeting (and whether you addressed
this with other patches in the series, whether this is needed at all, ...), so JFYI.
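
For reference, the huge-PMD side of the same idea looks roughly like this (a
sketch modelled on the common code's gup_huge_pmd(); not the literal RHEL7 or
4.4 change):

    head = pmd_page(pmd);
    page = head + ((addr & ~PMD_MASK) >> PAGE_SHIFT);
    refs = 0;
    do {
            pages[*nr] = page;
            (*nr)++;
            page++;
            refs++;
    } while (addr += PAGE_SIZE, addr != end);

    /* Take all references at once, only if the head page is still live. */
    if (!page_cache_add_speculative(head, refs)) {
            *nr -= refs;
            return 0;
    }

    /* Revalidate the PMD; a concurrent split or unmap makes the pin stale. */
    if (unlikely(pmd_val(pmd) != pmd_val(*pmdp))) {
            *nr -= refs;
            while (refs--)
                    put_page(head);
            return 0;
    }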
Ajay Kaher Dec. 17, 2019, 4:13 a.m. UTC | #6
On 16/12/19, 9:38 PM, "Vitaly Kuznetsov" <vkuznets@redhat.com> wrote:

> (I also dislike receiving only this patch of the series, next time
> please send the whole thing, it's only 8 patches, our mailfolders will
> survive that)

Thanks for pointing this out, I will take care of it.

> I'm not sure which stable kernel you're targeting (and if you addressed this
> with other patches in the series, if this is needed,...) so JFYI.

This series is for 4.4.y; please refer to the following link for the complete series:
https://lore.kernel.org/stable/1576529149-14269-1-git-send-email-akaher@vmware.com/

Yes, this 'Patch v3 8/8' could be merged separately, if it's unsafe to merge at this moment.

- Ajay
Ajay Kaher Jan. 31, 2020, 12:51 p.m. UTC | #7
On 16/12/19, 9:38 PM, "Vitaly Kuznetsov" <vkuznets@redhat.com> wrote:

>> (I also dislike receiving only this patch of the series, next time
>> please send the whole thing, it's only 8 patches, our mailfolders will
>> survive that)
>
> Thanks for pointing this out, I will take care of it.
>
>> I'm not sure which stable kernel you're targeting (and if you addressed this
>> with other patches in the series, if this is needed,...) so JFYI.
>
> This series is for 4.4.y; please refer to the following link for the complete series:
> https://lore.kernel.org/stable/1576529149-14269-1-git-send-email-akaher@vmware.com/
>
> Yes, this 'Patch v3 8/8' could be merged separately, if it's unsafe to merge at this moment.
>
> - Ajay

Greg, last month I posted this series [1] for v4.4.y (including Vlastimil's change).

There was some discussion of [PATCH v3 8/8] [2], and no further discussion once I specified it is for 4.4.y.
Please suggest whether I need to repost this series.

- Ajay

[1] https://lore.kernel.org/stable/1576529149-14269-1-git-send-email-akaher@vmware.com/
[2] https://lore.kernel.org/stable/1576529149-14269-9-git-send-email-akaher@vmware.com/

Patch

diff --git a/arch/x86/mm/gup.c b/arch/x86/mm/gup.c
index 6612d532e42e..6379a4883c0a 100644
--- a/arch/x86/mm/gup.c
+++ b/arch/x86/mm/gup.c
@@ -9,6 +9,7 @@ 
 #include <linux/vmstat.h>
 #include <linux/highmem.h>
 #include <linux/swap.h>
+#include <linux/pagemap.h>
 
 #include <asm/pgtable.h>
 
@@ -95,10 +96,23 @@  static noinline int gup_pte_range(pmd_t pmd, unsigned long addr,
 		}
 		VM_BUG_ON(!pfn_valid(pte_pfn(pte)));
 		page = pte_page(pte);
-		if (unlikely(!try_get_page(page))) {
+
+		if (WARN_ON_ONCE(page_ref_count(page) < 0)) {
+			pte_unmap(ptep);
+			return 0;
+		}
+
+		if (!page_cache_get_speculative(page)) {
 			pte_unmap(ptep);
 			return 0;
 		}
+
+		if (unlikely(pte_val(pte) != pte_val(*ptep))) {
+			put_page(page);
+			pte_unmap(ptep);
+			return 0;
+		}
+
 		SetPageReferenced(page);
 		pages[*nr] = page;
 		(*nr)++;