From patchwork Fri Nov 29 09:03:49 2019
X-Patchwork-Submitter: Vlastimil Babka <vbabka@suse.cz>
X-Patchwork-Id: 11266783
From: Vlastimil Babka <vbabka@suse.cz>
To: stable@vger.kernel.org
Cc: Ben Hutchings <ben.hutchings@codethink.co.uk>,
	Matthew Wilcox <willy@infradead.org>,
	Ajay Kaher <akaher@vmware.com>,
	linux-mm@kvack.org,
	linux-kernel@vger.kernel.org,
	Vlastimil Babka <vbabka@suse.cz>
Subject: [PATCH STABLE 4.9 1/1] mm, gup: add missing refcount overflow checks on x86 and s390
Date: Fri, 29 Nov 2019 10:03:49 +0100
Message-Id: <20191129090351.3507-2-vbabka@suse.cz>
X-Mailer: git-send-email 2.24.0
In-Reply-To: <20191129090351.3507-1-vbabka@suse.cz>
References: <20191129090351.3507-1-vbabka@suse.cz>

The mainline commit 8fde12ca79af ("mm: prevent get_user_pages() from
overflowing page refcount") was backported to 4.9.y stable as commit
2ed768cfd895.
The backport however missed that in 4.9 there are several arch-specific
gup.c files with their own fast gup implementations, so those still do
not prevent refcount overflow. This was partially fixed for x86 by the
stable-only commit d73af79742e7 ("x86, mm, gup: prevent get_page() race
with munmap in paravirt guest").

This stable-only commit adds the missing parts to the x86 version, as
well as to the s390 version, both taken from the SUSE SLES/openSUSE
4.12-based kernels.

The remaining architectures with their own gup.c are sparc, mips and
sh. It's unlikely that the known overflow scenario, which is based on
FUSE and needs 140GB of RAM, is a problem for those architectures, and
I don't feel confident enough to patch them.

Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
---
 arch/s390/mm/gup.c |  9 ++++++---
 arch/x86/mm/gup.c  | 10 ++++++++--
 2 files changed, 14 insertions(+), 5 deletions(-)

diff --git a/arch/s390/mm/gup.c b/arch/s390/mm/gup.c
index 97fc449a7470..33a940389a6d 100644
--- a/arch/s390/mm/gup.c
+++ b/arch/s390/mm/gup.c
@@ -38,7 +38,8 @@ static inline int gup_pte_range(pmd_t *pmdp, pmd_t pmd, unsigned long addr,
 		VM_BUG_ON(!pfn_valid(pte_pfn(pte)));
 		page = pte_page(pte);
 		head = compound_head(page);
-		if (!page_cache_get_speculative(head))
+		if (unlikely(WARN_ON_ONCE(page_ref_count(head) < 0)
+		    || !page_cache_get_speculative(head)))
 			return 0;
 		if (unlikely(pte_val(pte) != pte_val(*ptep))) {
 			put_page(head);
@@ -76,7 +77,8 @@ static inline int gup_huge_pmd(pmd_t *pmdp, pmd_t pmd, unsigned long addr,
 		refs++;
 	} while (addr += PAGE_SIZE, addr != end);
 
-	if (!page_cache_add_speculative(head, refs)) {
+	if (unlikely(WARN_ON_ONCE(page_ref_count(head) < 0)
+	    || !page_cache_add_speculative(head, refs))) {
 		*nr -= refs;
 		return 0;
 	}
@@ -150,7 +152,8 @@ static int gup_huge_pud(pud_t *pudp, pud_t pud, unsigned long addr,
 		refs++;
 	} while (addr += PAGE_SIZE, addr != end);
 
-	if (!page_cache_add_speculative(head, refs)) {
+	if (unlikely(WARN_ON_ONCE(page_ref_count(head) < 0)
+	    || !page_cache_add_speculative(head, refs))) {
 		*nr -= refs;
 		return 0;
 	}
diff --git a/arch/x86/mm/gup.c b/arch/x86/mm/gup.c
index d7db45bdfb3b..551fc7fea046 100644
--- a/arch/x86/mm/gup.c
+++ b/arch/x86/mm/gup.c
@@ -202,10 +202,12 @@ static int __gup_device_huge_pmd(pmd_t pmd, unsigned long addr,
 			undo_dev_pagemap(nr, nr_start, pages);
 			return 0;
 		}
+		if (unlikely(!try_get_page(page))) {
+			put_dev_pagemap(pgmap);
+			return 0;
+		}
 		SetPageReferenced(page);
 		pages[*nr] = page;
-		get_page(page);
-		put_dev_pagemap(pgmap);
 		(*nr)++;
 		pfn++;
 	} while (addr += PAGE_SIZE, addr != end);
@@ -230,6 +232,8 @@ static noinline int gup_huge_pmd(pmd_t pmd, unsigned long addr,
 
 	refs = 0;
 	head = pmd_page(pmd);
+	if (WARN_ON_ONCE(page_ref_count(head) <= 0))
+		return 0;
 	page = head + ((addr & ~PMD_MASK) >> PAGE_SHIFT);
 	do {
 		VM_BUG_ON_PAGE(compound_head(page) != head, page);
@@ -289,6 +293,8 @@ static noinline int gup_huge_pud(pud_t pud, unsigned long addr,
 
 	refs = 0;
 	head = pud_page(pud);
+	if (WARN_ON_ONCE(page_ref_count(head) <= 0))
+		return 0;
 	page = head + ((addr & ~PUD_MASK) >> PAGE_SHIFT);
 	do {
 		VM_BUG_ON_PAGE(compound_head(page) != head, page);
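
(Note, not part of the patch: for readers unfamiliar with the refcount
overflow issue, here is a minimal user-space sketch of the idea behind the
checks added above. The names ref_inc() and try_get_ref() are invented for
illustration only; the kernel side uses page_ref_count(),
page_cache_get_speculative() and try_get_page(). The guard refuses further
references once the 32-bit count has already wrapped into the negative
range; the rest of the protection comes from the fact that getting there at
all needs an enormous number of pins, hence the ~140GB-of-RAM FUSE scenario
mentioned above.)

#include <limits.h>
#include <stdbool.h>
#include <stdio.h>

/* Mimic the kernel's 32-bit page refcount: an unguarded increment simply
 * wraps a huge positive count into a negative one.  The detour through
 * unsigned arithmetic avoids undefined signed overflow; the conversion
 * back wraps (two's complement) on the relevant compilers. */
static void ref_inc(int *r)
{
	*r = (int)((unsigned int)*r + 1u);
}

/* Same idea as the WARN_ON_ONCE(page_ref_count(head) < 0) guards above:
 * if the count has already overflowed into the negative range, refuse to
 * take another reference instead of pinning the page. */
static bool try_get_ref(int *r)
{
	if (*r < 0)
		return false;
	ref_inc(r);
	return true;
}

int main(void)
{
	int refcount = INT_MAX - 2;	/* pretend ~2^31 pins already exist */
	int i;

	for (i = 0; i < 5; i++)
		printf("pin attempt %d: %s, refcount now %d\n", i,
		       try_get_ref(&refcount) ? "granted" : "rejected",
		       refcount);
	return 0;
}

Building this with "cc -o refcount-demo refcount-demo.c" and running it shows
the third attempt wrapping the count negative and every later attempt being
rejected, which is the failure mode the added checks are meant to catch.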