From patchwork Mon Dec 16 20:45:48 2019
X-Patchwork-Submitter: Ajay Kaher
X-Patchwork-Id: 11294125
From: Ajay Kaher <akaher@vmware.com>
Cc: Vlastimil Babka, Oscar Salvador, Thomas Gleixner, Ingo Molnar, Peter Zijlstra, Juergen Gross, Vitaly Kuznetsov, Borislav Petkov, Dave Hansen, Andy Lutomirski
Subject: [PATCH v3 8/8] x86, mm, gup: prevent get_page() race with munmap in paravirt guest
Date: Tue, 17 Dec 2019 02:15:48 +0530
Message-ID: <1576529149-14269-9-git-send-email-akaher@vmware.com>
In-Reply-To:
 <1576529149-14269-1-git-send-email-akaher@vmware.com>
References: <1576529149-14269-1-git-send-email-akaher@vmware.com>

From: Vlastimil Babka <vbabka@suse.cz>

The x86 version of get_user_pages_fast() relies on disabled interrupts to
synchronize gup_pte_range() between gup_get_pte(ptep); and get_page()
against a parallel munmap. The munmap side nulls the pte, then flushes
TLBs, then releases the page. As the TLB flush is done synchronously via
IPI, disabling interrupts blocks the page release, and get_page(), which
assumes an existing reference on the page, is thus safe.

However, when the TLB flush is done by a hypercall, e.g. in a Xen PV
guest, there is no blocking thanks to disabled interrupts, and get_page()
can succeed on a page that was already freed or even reused.

We have recently seen this happen with our 4.4 and 4.12 based kernels,
with userspace (java) that exits a thread: mm_release() performs a
futex_wake() on tsk->clear_child_tid, and another thread in parallel
unmaps the page that tsk->clear_child_tid points to. The spurious
get_page() succeeds, but the futex code immediately releases the page
again, while it's already on a freelist. Symptoms include a bad page state
warning, general protection faults accessing a poisoned list prev/next
pointer in the freelist, or free page pcplists of two CPUs joined together
in a single list. Oscar has also reproduced this scenario, with a patch
inserting delays before the get_page() to make the race window larger.

Fix this by removing the dependency on TLB flush interrupts the same way
as the generic get_user_pages_fast() code does, by using
page_cache_get_speculative() and revalidating the PTE contents after
pinning the page.
Mainline is safe since 4.13, where the x86 gup code was removed in favor
of the common code. Accessing the page table itself safely also relies on
disabled interrupts and TLB flush IPIs that don't happen with hypercalls,
which was acknowledged in commit 9e52fc2b50de ("x86/mm: Enable RCU based
page table freeing (CONFIG_HAVE_RCU_TABLE_FREE=y)"). That commit with
followups should also be backported for full safety, although our
reproducer didn't hit a problem without that backport.

Reproduced-by: Oscar Salvador
Signed-off-by: Vlastimil Babka
Cc: Thomas Gleixner
Cc: Ingo Molnar
Cc: Peter Zijlstra
Cc: Juergen Gross
Cc: Kirill A. Shutemov
Cc: Vitaly Kuznetsov
Cc: Linus Torvalds
Cc: Borislav Petkov
Cc: Dave Hansen
Cc: Andy Lutomirski
Signed-off-by: Vlastimil Babka
---
 arch/x86/mm/gup.c | 16 +++++++++++++++-
 1 file changed, 15 insertions(+), 1 deletion(-)

diff --git a/arch/x86/mm/gup.c b/arch/x86/mm/gup.c
index 6612d532e42e..6379a4883c0a 100644
--- a/arch/x86/mm/gup.c
+++ b/arch/x86/mm/gup.c
@@ -9,6 +9,7 @@
 #include
 #include
 #include
+#include
 #include

@@ -95,10 +96,23 @@ static noinline int gup_pte_range(pmd_t pmd, unsigned long addr,
 		}
 		VM_BUG_ON(!pfn_valid(pte_pfn(pte)));
 		page = pte_page(pte);
-		if (unlikely(!try_get_page(page))) {
+
+		if (WARN_ON_ONCE(page_ref_count(page) < 0)) {
+			pte_unmap(ptep);
+			return 0;
+		}
+
+		if (!page_cache_get_speculative(page)) {
 			pte_unmap(ptep);
 			return 0;
 		}
+
+		if (unlikely(pte_val(pte) != pte_val(*ptep))) {
+			put_page(page);
+			pte_unmap(ptep);
+			return 0;
+		}
+
 		SetPageReferenced(page);
 		pages[*nr] = page;
 		(*nr)++;
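The pinning sequence in gup_pte_range() above can be illustrated with a
minimal userspace sketch in plain C11 atomics. This is not kernel code:
the struct page model and the speculative_get()/pin_fast() names are
hypothetical stand-ins for page_cache_get_speculative() and the gup fast
path, showing only the pattern — snapshot the PTE, take a reference only
while the refcount is still positive, then re-read the PTE and undo the
pin if a parallel munmap changed it in between.

```c
#include <assert.h>
#include <stdatomic.h>
#include <stdbool.h>
#include <stddef.h>

/* Hypothetical model: a "page" with a refcount, and a "PTE" that either
 * points at the mapped page or is NULL once munmap has cleared it. */
struct page { atomic_int refcount; };
typedef _Atomic(struct page *) pte_t;

/* Mirrors the idea of page_cache_get_speculative(): take a reference
 * only if the count is still positive; never resurrect a page whose
 * refcount already dropped to zero (i.e. a page being freed). */
static bool speculative_get(struct page *page)
{
	int ref = atomic_load(&page->refcount);

	while (ref > 0)
		if (atomic_compare_exchange_weak(&page->refcount, &ref, ref + 1))
			return true;
	return false;	/* page is already on its way to the freelist */
}

/* The fast-path pattern from the hunk above: snapshot the PTE, pin the
 * page speculatively, then revalidate the PTE; if munmap raced with us
 * and the PTE no longer matches, drop the pin and bail out. */
static struct page *pin_fast(pte_t *ptep)
{
	struct page *page = atomic_load(ptep);		/* gup_get_pte() */

	if (page == NULL || !speculative_get(page))
		return NULL;

	if (atomic_load(ptep) != page) {		/* PTE changed: lost the race */
		atomic_fetch_sub(&page->refcount, 1);	/* put_page() */
		return NULL;
	}
	return page;					/* safely pinned */
}
```

The revalidation step is what removes the dependency on TLB flush IPIs:
even if the unmap side cannot be blocked by disabled interrupts, a pin
taken on a stale PTE is detected and released before anyone uses it.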