From patchwork Tue Aug 4 18:39:43 2020
X-Patchwork-Submitter: John Hubbard
X-Patchwork-Id: 11700793
From: John Hubbard <jhubbard@nvidia.com>
CC: "Kirill A. Shutemov"
Subject: [PATCH] mm, dump_page: do not crash with bad compound_page()
Date: Tue, 4 Aug 2020 11:39:43 -0700
Message-ID: <20200804183943.1244828-1-jhubbard@nvidia.com>
In-Reply-To: <20200804161414.GG23808@casper.infradead.org>
References: <20200804161414.GG23808@casper.infradead.org>

If a compound page is being split while dump_page() is being run on that
page, we can end up calling compound_mapcount() on a page that is no
longer compound. This leads to a crash (already seen at least once in
the field), due to the VM_BUG_ON_PAGE() assertion inside
compound_mapcount().

(The above is from Matthew Wilcox's analysis of Qian Cai's bug report.)

In order to avoid this kind of crash, make dump_page() slightly more
robust, by providing a version of compound_mapcount() that doesn't
assert, but just warns the first time that such a thing happens. And the
first time is usually enough.
For debug tools, we don't want to go *too* far in this direction, but
this is a simple small fix, and the crash has already been seen, so it's
a good trade-off.

Reported-by: Qian Cai
Cc: Matthew Wilcox
Cc: Vlastimil Babka
Cc: Kirill A. Shutemov
Signed-off-by: John Hubbard
---
 include/linux/mm.h | 17 +++++++++++++++--
 mm/debug.c         |  4 ++--
 2 files changed, 17 insertions(+), 4 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index dc7b87310c10..e3991fbb42c0 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -779,6 +779,13 @@ static inline void *kvcalloc(size_t n, size_t size, gfp_t flags)
 
 extern void kvfree(const void *addr);
 extern void kvfree_sensitive(const void *addr, size_t len);
+
+static inline int __compound_mapcount(struct page *page)
+{
+	page = compound_head(page);
+	return atomic_read(compound_mapcount_ptr(page)) + 1;
+}
+
 /*
  * Mapcount of compound page as a whole, does not include mapped sub-pages.
  *
@@ -787,8 +794,14 @@ extern void kvfree_sensitive(const void *addr, size_t len);
 static inline int compound_mapcount(struct page *page)
 {
 	VM_BUG_ON_PAGE(!PageCompound(page), page);
-	page = compound_head(page);
-	return atomic_read(compound_mapcount_ptr(page)) + 1;
+	return __compound_mapcount(page);
+}
+
+static inline int dump_page_compound_mapcount(struct page *page)
+{
+	if (WARN_ON_ONCE(!PageCompound(page)))
+		return 0;
+	return __compound_mapcount(page);
 }
 
 /*
diff --git a/mm/debug.c b/mm/debug.c
index 4f376514744d..eab4244aabd8 100644
--- a/mm/debug.c
+++ b/mm/debug.c
@@ -91,7 +91,7 @@ void __dump_page(struct page *page, const char *reason)
 			"compound_mapcount:%d compound_pincount:%d\n",
 			page, page_ref_count(head), mapcount,
 			mapping, page_to_pgoff(page), head,
-			compound_order(head), compound_mapcount(page),
+			compound_order(head), dump_page_compound_mapcount(page),
 			compound_pincount(page));
 		} else {
 			pr_warn("page:%px refcount:%d mapcount:%d mapping:%p "
@@ -99,7 +99,7 @@ void __dump_page(struct page *page, const char *reason)
 			"compound_mapcount:%d\n",
 			page, page_ref_count(head), mapcount,
 			mapping, page_to_pgoff(page), head,
-			compound_order(head), compound_mapcount(page));
+			compound_order(head), dump_page_compound_mapcount(page));
 	} else
 		pr_warn("page:%px refcount:%d mapcount:%d mapping:%p index:%#lx\n",