From patchwork Fri Nov 8 09:38:09 2019
X-Patchwork-Submitter: Vlastimil Babka <vbabka@suse.cz>
X-Patchwork-Id: 11234417
From: Vlastimil Babka <vbabka@suse.cz>
To: stable@vger.kernel.org
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org,
 Ajay Kaher <akaher@vmware.com>,
 Linus Torvalds <torvalds@linux-foundation.org>,
 Matthew Wilcox <willy@infradead.org>,
 Jann Horn <jannh@google.com>,
 stable@kernel.org,
 Vlastimil Babka <vbabka@suse.cz>
Subject: [PATCH STABLE 4.4 3/8] mm: make page ref count overflow check
 tighter and more explicit
Date: Fri, 8 Nov 2019 10:38:09 +0100
Message-Id: <20191108093814.16032-4-vbabka@suse.cz>
X-Mailer: git-send-email 2.23.0
In-Reply-To: <20191108093814.16032-1-vbabka@suse.cz>
References: <20191108093814.16032-1-vbabka@suse.cz>

From: Linus Torvalds <torvalds@linux-foundation.org>

commit f958d7b528b1b40c44cfda5eabe2d82760d868c3 upstream.

[ 4.4 backport: page_ref_count() doesn't exist, introduce it to reduce churn.
  Also change two similar checks in mm/internal.h ]

We have a VM_BUG_ON() to check that the page reference count doesn't
underflow (or get close to overflow) by checking the sign of the count.

That's all fine, but we actually want to allow people to use a "get page
ref unless it's already very high" helper function, and we want that one
to use the sign of the page ref (without triggering this VM_BUG_ON).

Change the VM_BUG_ON to only check for small underflows (or _very_
close to overflowing), and ignore overflows which have strayed into
negative territory.

Acked-by: Matthew Wilcox <willy@infradead.org>
Cc: Jann Horn <jannh@google.com>
Cc: stable@kernel.org
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
---
 include/linux/mm.h | 11 ++++++++++-
 mm/internal.h      |  5 +++--
 2 files changed, 13 insertions(+), 3 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index ed653ba47c46..997edfcb0a30 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -488,6 +488,15 @@ static inline void get_huge_page_tail(struct page *page)
 
 extern bool __get_page_tail(struct page *page);
 
+static inline int page_ref_count(struct page *page)
+{
+	return atomic_read(&page->_count);
+}
+
+/* 127: arbitrary random number, small enough to assemble well */
+#define page_ref_zero_or_close_to_overflow(page) \
+	((unsigned int) page_ref_count(page) + 127u <= 127u)
+
 static inline void get_page(struct page *page)
 {
 	if (unlikely(PageTail(page)))
@@ -497,7 +506,7 @@ static inline void get_page(struct page *page)
 	 * Getting a normal page or the head of a compound page
 	 * requires to already have an elevated page->_count.
 	 */
-	VM_BUG_ON_PAGE(atomic_read(&page->_count) <= 0, page);
+	VM_BUG_ON_PAGE(page_ref_zero_or_close_to_overflow(page), page);
 	atomic_inc(&page->_count);
 }
 
diff --git a/mm/internal.h b/mm/internal.h
index f63f4393d633..a6639c72780a 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -81,7 +81,8 @@ static inline void __get_page_tail_foll(struct page *page,
 	 * speculative page access (like in
 	 * page_cache_get_speculative()) on tail pages.
 	 */
-	VM_BUG_ON_PAGE(atomic_read(&compound_head(page)->_count) <= 0, page);
+	VM_BUG_ON_PAGE(page_ref_zero_or_close_to_overflow(compound_head(page)),
+		       page);
 	if (get_page_head)
 		atomic_inc(&compound_head(page)->_count);
 	get_huge_page_tail(page);
@@ -106,7 +107,7 @@ static inline void get_page_foll(struct page *page)
 	 * Getting a normal page or the head of a compound page
 	 * requires to already have an elevated page->_count.
 	 */
-	VM_BUG_ON_PAGE(atomic_read(&page->_count) <= 0, page);
+	VM_BUG_ON_PAGE(page_ref_zero_or_close_to_overflow(page), page);
 	atomic_inc(&page->_count);
 	}
 }
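
An aside on the arithmetic, for readers of the new macro: casting the
count to unsigned int and adding 127u maps exactly the values -127..0
into the range 0..127, so the single comparison fires for a zero/underflowed
count and for a count one small step away from wrapping past zero, while a
count that has only just overflowed deep into negative territory (near
INT_MIN) is deliberately let through so its sign stays usable. A minimal
standalone userspace sketch of that check, with a plain int standing in for
the kernel's atomic page->_count and a made-up helper name:

#include <limits.h>
#include <stdio.h>

/* Userspace model of page_ref_zero_or_close_to_overflow(): the cast to
 * unsigned int makes count + 127 land in 0..127 exactly when count is
 * in -127..0, catching underflow and near-wrap in one comparison.
 */
static int zero_or_close_to_overflow(int count)
{
	return (unsigned int)count + 127u <= 127u;
}

int main(void)
{
	const int samples[] = { INT_MIN, -128, -127, -1, 0, 1, 1000, INT_MAX };
	size_t i;

	for (i = 0; i < sizeof(samples) / sizeof(samples[0]); i++)
		printf("count = %11d -> %s\n", samples[i],
		       zero_or_close_to_overflow(samples[i])
		       ? "VM_BUG_ON would fire" : "allowed");
	return 0;
}

Running this shows only -127..0 tripping the check; -128, INT_MIN and all
positive counts pass, matching the "ignore overflows which have strayed
into negative territory" behavior described above.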