From patchwork Mon Jun 3 14:34:49 2019
X-Patchwork-Submitter: Vlastimil Babka
X-Patchwork-Id: 10972975
From: Vlastimil Babka <vbabka@suse.cz>
To: linux-mm@kvack.org
Cc: linux-kernel@vger.kernel.org, Andrew Morton, "Kirill A. Shutemov", Michal Hocko, Vlastimil Babka, Joonsoo Kim, Matthew Wilcox, Mel Gorman
Subject: [PATCH 1/3] mm, debug_pagealloc: use static keys to enable debugging
Date: Mon, 3 Jun 2019 16:34:49 +0200
Message-Id: <20190603143451.27353-2-vbabka@suse.cz>
In-Reply-To: <20190603143451.27353-1-vbabka@suse.cz>
References: <20190603143451.27353-1-vbabka@suse.cz>

CONFIG_DEBUG_PAGEALLOC has been redesigned by commit 031bc5743f15
("mm/debug-pagealloc: make debug-pagealloc boottime configurable") so that it
can be always enabled in a distro kernel, while performing its expensive
functionality only when booted with debug_pagealloc=on. We can further reduce
the overhead when the feature is not boot-enabled (including in page allocator
fast paths) by using static keys. This patch introduces one static key for the
debug_pagealloc core functionality, and another for the optional guard page
functionality (enabled by booting with debug_guardpage_minorder=X).
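For readers unfamiliar with the mechanism: a static key compiles the disabled
branch down to a runtime-patched no-op rather than a memory load plus test.
A minimal sketch of the pattern this patch adopts, using the generic
jump-label API (the names here are illustrative placeholders, not symbols
from the patch):

#include <linux/init.h>
#include <linux/jump_label.h>

/* Key defaults to false, so the debug branch costs only a patched no-op. */
DEFINE_STATIC_KEY_FALSE(my_debug_enabled);

static void my_expensive_checks(void)
{
	/* placeholder for the actual debugging work */
}

/* The boot parameter flips every branch site at once by patching the code. */
static int __init my_debug_setup(char *buf)
{
	static_branch_enable(&my_debug_enabled);
	return 0;
}
early_param("my_debug", my_debug_setup);

void my_hot_path(void)
{
	/* Disabled case: a no-op jump, not a load and test of a bool. */
	if (static_branch_unlikely(&my_debug_enabled))
		my_expensive_checks();
}

Compared with testing a bool __read_mostly, this removes even the load from
the disabled case, which is what matters in the allocator fast paths; the
diff below gives _debug_pagealloc_enabled and _debug_guardpage_enabled
exactly this shape.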
Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
Cc: Joonsoo Kim
---
 include/linux/mm.h | 15 +++++++++++----
 mm/page_alloc.c    | 23 +++++++++++++++++------
 2 files changed, 28 insertions(+), 10 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index 0e8834ac32b7..c71ed22769f3 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -2685,11 +2685,18 @@ static inline void kernel_poison_pages(struct page *page, int numpages,
 					int enable) { }
 #endif
 
-extern bool _debug_pagealloc_enabled;
+#ifdef CONFIG_DEBUG_PAGEALLOC_ENABLE_DEFAULT
+DECLARE_STATIC_KEY_TRUE(_debug_pagealloc_enabled);
+#else
+DECLARE_STATIC_KEY_FALSE(_debug_pagealloc_enabled);
+#endif
 
 static inline bool debug_pagealloc_enabled(void)
 {
-	return IS_ENABLED(CONFIG_DEBUG_PAGEALLOC) && _debug_pagealloc_enabled;
+	if (!IS_ENABLED(CONFIG_DEBUG_PAGEALLOC))
+		return false;
+
+	return static_branch_unlikely(&_debug_pagealloc_enabled);
 }
 
 #if defined(CONFIG_DEBUG_PAGEALLOC) || defined(CONFIG_ARCH_HAS_SET_DIRECT_MAP)
@@ -2843,7 +2850,7 @@ extern struct page_ext_operations debug_guardpage_ops;
 
 #ifdef CONFIG_DEBUG_PAGEALLOC
 extern unsigned int _debug_guardpage_minorder;
-extern bool _debug_guardpage_enabled;
+DECLARE_STATIC_KEY_FALSE(_debug_guardpage_enabled);
 
 static inline unsigned int debug_guardpage_minorder(void)
 {
@@ -2852,7 +2859,7 @@ static inline unsigned int debug_guardpage_minorder(void)
 
 static inline bool debug_guardpage_enabled(void)
 {
-	return _debug_guardpage_enabled;
+	return static_branch_unlikely(&_debug_guardpage_enabled);
 }
 
 static inline bool page_is_guard(struct page *page)
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index d66bc8abe0af..639f1f9e74c5 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -646,16 +646,27 @@ void prep_compound_page(struct page *page, unsigned int order)
 
 #ifdef CONFIG_DEBUG_PAGEALLOC
 unsigned int _debug_guardpage_minorder;
-bool _debug_pagealloc_enabled __read_mostly
-			= IS_ENABLED(CONFIG_DEBUG_PAGEALLOC_ENABLE_DEFAULT);
+
+#ifdef CONFIG_DEBUG_PAGEALLOC_ENABLE_DEFAULT
+DEFINE_STATIC_KEY_TRUE(_debug_pagealloc_enabled);
+#else
+DEFINE_STATIC_KEY_FALSE(_debug_pagealloc_enabled);
+#endif
 EXPORT_SYMBOL(_debug_pagealloc_enabled);
-bool _debug_guardpage_enabled __read_mostly;
+
+DEFINE_STATIC_KEY_FALSE(_debug_guardpage_enabled);
 
 static int __init early_debug_pagealloc(char *buf)
 {
-	if (!buf)
+	bool enable = false;
+
+	if (kstrtobool(buf, &enable))
 		return -EINVAL;
-	return kstrtobool(buf, &_debug_pagealloc_enabled);
+
+	if (enable)
+		static_branch_enable(&_debug_pagealloc_enabled);
+
+	return 0;
 }
 early_param("debug_pagealloc", early_debug_pagealloc);
 
@@ -679,7 +690,7 @@ static void init_debug_guardpage(void)
 	if (!debug_guardpage_minorder())
 		return;
 
-	_debug_guardpage_enabled = true;
+	static_branch_enable(&_debug_guardpage_enabled);
 }
 
 struct page_ext_operations debug_guardpage_ops = {
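As a usage sketch (exact bootloader syntax varies), a kernel built with
CONFIG_DEBUG_PAGEALLOC then enables the expensive functionality per boot from
the kernel command line, optionally together with guard pages, e.g.:

	debug_pagealloc=on debug_guardpage_minorder=1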
From patchwork Mon Jun 3 14:34:50 2019
X-Patchwork-Submitter: Vlastimil Babka
X-Patchwork-Id: 10972977
From: Vlastimil Babka <vbabka@suse.cz>
To: linux-mm@kvack.org
Cc: linux-kernel@vger.kernel.org, Andrew Morton, "Kirill A. Shutemov", Michal Hocko, Vlastimil Babka, Mel Gorman, Joonsoo Kim, Matthew Wilcox
Subject: [PATCH 2/3] mm, page_alloc: more extensive free page checking with debug_pagealloc
Date: Mon, 3 Jun 2019 16:34:50 +0200
Message-Id: <20190603143451.27353-3-vbabka@suse.cz>
In-Reply-To: <20190603143451.27353-1-vbabka@suse.cz>
References: <20190603143451.27353-1-vbabka@suse.cz>

The page allocator checks struct pages for expected state (mapcount, flags
etc.) as pages are being allocated (check_new_page()) and freed
(free_pages_check()), to provide some defense against errors in page
allocator users. Prior to commits 479f854a207c ("mm, page_alloc: defer
debugging checks of pages allocated from the PCP") and 4db7548ccbd9 ("mm,
page_alloc: defer debugging checks of freed pages until a PCP drain"), this
happened for order-0 pages as they were allocated from or freed to the
per-cpu caches (pcplists). Since those are fast paths, the checks are now
performed only when pages are moved between pcplists and global free lists.
This, however, lowers the chances of catching errors soon enough.

To increase the chances of the checks catching errors, the kernel currently
has to be rebuilt with CONFIG_DEBUG_VM, which also enables multiple other
internal debug checks (VM_BUG_ON() etc.). That is suboptimal when the goal is
to catch errors in mm users, not in mm code itself.

To catch some wrong users of the page allocator, we already have
CONFIG_DEBUG_PAGEALLOC, which is designed to have virtually no overhead
unless enabled at boot time. Memory corruptions from writing to freed pages
often have the same underlying causes (use-after-free, double free) as
corruptions of the corresponding struct pages, so this existing debugging
functionality is a good fit to extend by also performing the struct page
checks at least as often as they would run with CONFIG_DEBUG_VM enabled.

Specifically, after this patch, when debug_pagealloc is enabled on boot and
CONFIG_DEBUG_VM is disabled, pages are checked when allocated from or freed
to the pcplists *in addition* to when moved between pcplists and free lists.
When both debug_pagealloc and CONFIG_DEBUG_VM are enabled, pages are checked
when being moved between pcplists and free lists *in addition* to when
allocated from or freed to the pcplists. When debug_pagealloc is not enabled
on boot, the overhead in fast paths should be virtually none, thanks to the
use of a static key.
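In table form, the order-0 checking behavior described above (dp =
debug_pagealloc boot-enabled):

                          at pcplist alloc/free   at pcplist <-> freelist move
  DEBUG_VM=n, dp off               no                        yes
  DEBUG_VM=n, dp on                yes                       yes
  DEBUG_VM=y, dp off               yes                       no
  DEBUG_VM=y, dp on                yes                       yes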
Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
Cc: Mel Gorman
---
 mm/Kconfig.debug | 13 ++++++++----
 mm/page_alloc.c  | 53 +++++++++++++++++++++++++++++++++++++++---------
 2 files changed, 52 insertions(+), 14 deletions(-)

diff --git a/mm/Kconfig.debug b/mm/Kconfig.debug
index fa6d79281368..a35ab6c55192 100644
--- a/mm/Kconfig.debug
+++ b/mm/Kconfig.debug
@@ -19,12 +19,17 @@ config DEBUG_PAGEALLOC
 	  Depending on runtime enablement, this results in a small or large
 	  slowdown, but helps to find certain types of memory corruption.
 
+	  Also, the state of page tracking structures is checked more often as
+	  pages are being allocated and freed, as unexpected state changes
+	  often happen for same reasons as memory corruption (e.g. double free,
+	  use-after-free).
+
 	  For architectures which don't enable ARCH_SUPPORTS_DEBUG_PAGEALLOC,
 	  fill the pages with poison patterns after free_pages() and verify
-	  the patterns before alloc_pages(). Additionally,
-	  this option cannot be enabled in combination with hibernation as
-	  that would result in incorrect warnings of memory corruption after
-	  a resume because free pages are not saved to the suspend image.
+	  the patterns before alloc_pages(). Additionally, this option cannot
+	  be enabled in combination with hibernation as that would result in
+	  incorrect warnings of memory corruption after a resume because free
+	  pages are not saved to the suspend image.
 
 	  By default this option will have a small overhead, e.g. by not
 	  allowing the kernel mapping to be backed by large pages on some
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 639f1f9e74c5..e6248e391358 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -1162,19 +1162,36 @@ static __always_inline bool free_pages_prepare(struct page *page,
 }
 
 #ifdef CONFIG_DEBUG_VM
-static inline bool free_pcp_prepare(struct page *page)
+/*
+ * With DEBUG_VM enabled, order-0 pages are checked immediately when being
+ * freed to pcp lists. With debug_pagealloc also enabled, they are also
+ * rechecked when moved from pcp lists to free lists.
+ */
+static bool free_pcp_prepare(struct page *page)
 {
 	return free_pages_prepare(page, 0, true);
 }
 
-static inline bool bulkfree_pcp_prepare(struct page *page)
+static bool bulkfree_pcp_prepare(struct page *page)
 {
-	return false;
+	if (debug_pagealloc_enabled())
+		return free_pages_check(page);
+	else
+		return false;
 }
 #else
+/*
+ * With DEBUG_VM disabled, order-0 pages being freed are checked only when
+ * moving from pcp lists to free list in order to reduce overhead. With
+ * debug_pagealloc enabled, they are checked also immediately when being
+ * freed to the pcp lists.
+ */
 static bool free_pcp_prepare(struct page *page)
 {
-	return free_pages_prepare(page, 0, false);
+	if (debug_pagealloc_enabled())
+		return free_pages_prepare(page, 0, true);
+	else
+		return free_pages_prepare(page, 0, false);
 }
 
 static bool bulkfree_pcp_prepare(struct page *page)
@@ -2036,23 +2053,39 @@ static inline bool free_pages_prezeroed(void)
 }
 
 #ifdef CONFIG_DEBUG_VM
-static bool check_pcp_refill(struct page *page)
+/*
+ * With DEBUG_VM enabled, order-0 pages are checked for expected state when
+ * being allocated from pcp lists. With debug_pagealloc also enabled, they are
+ * also checked when pcp lists are refilled from the free lists.
+ */
+static inline bool check_pcp_refill(struct page *page)
 {
-	return false;
+	if (debug_pagealloc_enabled())
+		return check_new_page(page);
+	else
+		return false;
 }
 
-static bool check_new_pcp(struct page *page)
+static inline bool check_new_pcp(struct page *page)
 {
 	return check_new_page(page);
 }
 #else
-static bool check_pcp_refill(struct page *page)
+/*
+ * With DEBUG_VM disabled, free order-0 pages are checked for expected state
+ * when pcp lists are being refilled from the free lists. With debug_pagealloc
+ * enabled, they are also checked when being allocated from the pcp lists.
+ */
+static inline bool check_pcp_refill(struct page *page)
 {
 	return check_new_page(page);
 }
 
-static bool check_new_pcp(struct page *page)
+static inline bool check_new_pcp(struct page *page)
 {
-	return false;
+	if (debug_pagealloc_enabled())
+		return check_new_page(page);
+	else
+		return false;
 }
 #endif /* CONFIG_DEBUG_VM */
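To illustrate the class of caller bug these checks target, consider a
hypothetical double free of an order-0 page (a sketch, not code from the
patch); with debug_pagealloc=on, the bad struct page state can now be flagged
at the pcplist operation itself rather than only on a later pcplist drain:

#include <linux/gfp.h>

/* Hypothetical buggy caller, for illustration only. */
static void buggy_double_free(void)
{
	struct page *page = alloc_page(GFP_KERNEL);

	if (!page)
		return;

	__free_page(page);
	/*
	 * Double free: by the second call the struct page is in an
	 * unexpected state (e.g. a bad _refcount), which the checks in
	 * free_pcp_prepare() can now report immediately.
	 */
	__free_page(page);
}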
From patchwork Mon Jun 3 14:34:51 2019
X-Patchwork-Submitter: Vlastimil Babka
X-Patchwork-Id: 10972971
Shutemov" , Michal Hocko , Vlastimil Babka , Joonsoo Kim , Matthew Wilcox , Mel Gorman Subject: [PATCH 3/3] mm, debug_pagealloc: use a page type instead of page_ext flag Date: Mon, 3 Jun 2019 16:34:51 +0200 Message-Id: <20190603143451.27353-4-vbabka@suse.cz> X-Mailer: git-send-email 2.21.0 In-Reply-To: <20190603143451.27353-1-vbabka@suse.cz> References: <20190603143451.27353-1-vbabka@suse.cz> MIME-Version: 1.0 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: X-Virus-Scanned: ClamAV using ClamSMTP When debug_pagealloc is enabled, we currently allocate the page_ext array to mark guard pages with the PAGE_EXT_DEBUG_GUARD flag. Now that we have the page_type field in struct page, we can use that instead, as guard pages are neither PageSlab nor mapped to userspace. This reduces memory overhead when debug_pagealloc is enabled and there are no other features requiring the page_ext array. Signed-off-by: Vlastimil Babka Cc: Joonsoo Kim Cc: Matthew Wilcox --- .../admin-guide/kernel-parameters.txt | 10 ++--- include/linux/mm.h | 10 +---- include/linux/page-flags.h | 6 +++ include/linux/page_ext.h | 1 - mm/Kconfig.debug | 1 - mm/page_alloc.c | 40 +++---------------- mm/page_ext.c | 3 -- 7 files changed, 17 insertions(+), 54 deletions(-) diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt index 138f6664b2e2..32003e76ba3b 100644 --- a/Documentation/admin-guide/kernel-parameters.txt +++ b/Documentation/admin-guide/kernel-parameters.txt @@ -805,12 +805,10 @@ tracking down these problems. debug_pagealloc= - [KNL] When CONFIG_DEBUG_PAGEALLOC is set, this - parameter enables the feature at boot time. In - default, it is disabled. We can avoid allocating huge - chunk of memory for debug pagealloc if we don't enable - it at boot time and the system will work mostly same - with the kernel built without CONFIG_DEBUG_PAGEALLOC. + [KNL] When CONFIG_DEBUG_PAGEALLOC is set, this parameter + enables the feature at boot time. By default, it is + disabled and the system will work mostly the same as a + kernel built without CONFIG_DEBUG_PAGEALLOC. 
Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
Cc: Joonsoo Kim
Cc: Matthew Wilcox
---
 .../admin-guide/kernel-parameters.txt | 10 ++---
 include/linux/mm.h                    | 10 +----
 include/linux/page-flags.h            |  6 +++
 include/linux/page_ext.h              |  1 -
 mm/Kconfig.debug                      |  1 -
 mm/page_alloc.c                       | 40 +++----------------
 mm/page_ext.c                         |  3 --
 7 files changed, 17 insertions(+), 54 deletions(-)

diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt
index 138f6664b2e2..32003e76ba3b 100644
--- a/Documentation/admin-guide/kernel-parameters.txt
+++ b/Documentation/admin-guide/kernel-parameters.txt
@@ -805,12 +805,10 @@
 			tracking down these problems.
 
 	debug_pagealloc=
-			[KNL] When CONFIG_DEBUG_PAGEALLOC is set, this
-			parameter enables the feature at boot time. In
-			default, it is disabled. We can avoid allocating huge
-			chunk of memory for debug pagealloc if we don't enable
-			it at boot time and the system will work mostly same
-			with the kernel built without CONFIG_DEBUG_PAGEALLOC.
+			[KNL] When CONFIG_DEBUG_PAGEALLOC is set, this parameter
+			enables the feature at boot time. By default, it is
+			disabled and the system will work mostly the same as a
+			kernel built without CONFIG_DEBUG_PAGEALLOC.
 			on: enable the feature
 
 	debugpat	[X86] Enable PAT debugging
 
diff --git a/include/linux/mm.h b/include/linux/mm.h
index c71ed22769f3..2ba991e687db 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -2846,8 +2846,6 @@ extern long copy_huge_page_from_user(struct page *dst_page,
 				bool allow_pagefault);
 #endif /* CONFIG_TRANSPARENT_HUGEPAGE || CONFIG_HUGETLBFS */
 
-extern struct page_ext_operations debug_guardpage_ops;
-
 #ifdef CONFIG_DEBUG_PAGEALLOC
 extern unsigned int _debug_guardpage_minorder;
 DECLARE_STATIC_KEY_FALSE(_debug_guardpage_enabled);
@@ -2864,16 +2862,10 @@ static inline bool debug_guardpage_enabled(void)
 
 static inline bool page_is_guard(struct page *page)
 {
-	struct page_ext *page_ext;
-
 	if (!debug_guardpage_enabled())
 		return false;
 
-	page_ext = lookup_page_ext(page);
-	if (unlikely(!page_ext))
-		return false;
-
-	return test_bit(PAGE_EXT_DEBUG_GUARD, &page_ext->flags);
+	return PageGuard(page);
 }
 #else
 static inline unsigned int debug_guardpage_minorder(void) { return 0; }
diff --git a/include/linux/page-flags.h b/include/linux/page-flags.h
index 9f8712a4b1a5..b848517da64c 100644
--- a/include/linux/page-flags.h
+++ b/include/linux/page-flags.h
@@ -703,6 +703,7 @@ PAGEFLAG_FALSE(DoubleMap)
 #define PG_offline	0x00000100
 #define PG_kmemcg	0x00000200
 #define PG_table	0x00000400
+#define PG_guard	0x00000800
 
 #define PageType(page, flag)						\
 	((page->page_type & (PAGE_TYPE_BASE | flag)) == PAGE_TYPE_BASE)
@@ -754,6 +755,11 @@ PAGE_TYPE_OPS(Kmemcg, kmemcg)
  */
 PAGE_TYPE_OPS(Table, table)
 
+/*
+ * Marks guardpages used with debug_pagealloc.
+ */
+PAGE_TYPE_OPS(Guard, guard)
+
 extern bool is_free_buddy_page(struct page *page);
 
 __PAGEFLAG(Isolated, isolated, PF_ANY);
diff --git a/include/linux/page_ext.h b/include/linux/page_ext.h
index f84f167ec04c..09592951725c 100644
--- a/include/linux/page_ext.h
+++ b/include/linux/page_ext.h
@@ -17,7 +17,6 @@ struct page_ext_operations {
 #ifdef CONFIG_PAGE_EXTENSION
 
 enum page_ext_flags {
-	PAGE_EXT_DEBUG_GUARD,
 	PAGE_EXT_OWNER,
 #if defined(CONFIG_IDLE_PAGE_TRACKING) && !defined(CONFIG_64BIT)
 	PAGE_EXT_YOUNG,
diff --git a/mm/Kconfig.debug b/mm/Kconfig.debug
index a35ab6c55192..82b6a20898bd 100644
--- a/mm/Kconfig.debug
+++ b/mm/Kconfig.debug
@@ -12,7 +12,6 @@ config DEBUG_PAGEALLOC
 	bool "Debug page memory allocations"
 	depends on DEBUG_KERNEL
 	depends on !HIBERNATION || ARCH_SUPPORTS_DEBUG_PAGEALLOC && !PPC && !SPARC
-	select PAGE_EXTENSION
 	select PAGE_POISONING if !ARCH_SUPPORTS_DEBUG_PAGEALLOC
 	---help---
 	  Unmap pages from the kernel linear mapping after free_pages().
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index e6248e391358..b178f297df68 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -50,7 +50,6 @@
 #include <linux/backing-dev.h>
 #include <linux/fault-inject.h>
 #include <linux/page-isolation.h>
-#include <linux/page_ext.h>
 #include <linux/debugobjects.h>
 #include <linux/kmemleak.h>
 #include <linux/compaction.h>
@@ -670,18 +669,6 @@ static int __init early_debug_pagealloc(char *buf)
 }
 early_param("debug_pagealloc", early_debug_pagealloc);
 
-static bool need_debug_guardpage(void)
-{
-	/* If we don't use debug_pagealloc, we don't need guard page */
-	if (!debug_pagealloc_enabled())
-		return false;
-
-	if (!debug_guardpage_minorder())
-		return false;
-
-	return true;
-}
-
 static void init_debug_guardpage(void)
 {
 	if (!debug_pagealloc_enabled())
@@ -693,11 +680,6 @@ static void init_debug_guardpage(void)
 	if (!debug_guardpage_minorder())
 		return;
 
 	static_branch_enable(&_debug_guardpage_enabled);
 }
 
-struct page_ext_operations debug_guardpage_ops = {
-	.need = need_debug_guardpage,
-	.init = init_debug_guardpage,
-};
-
 static int __init debug_guardpage_minorder_setup(char *buf)
 {
 	unsigned long res;
@@ -715,20 +697,13 @@ early_param("debug_guardpage_minorder", debug_guardpage_minorder_setup);
 static inline bool set_page_guard(struct zone *zone, struct page *page,
 				unsigned int order, int migratetype)
 {
-	struct page_ext *page_ext;
-
 	if (!debug_guardpage_enabled())
 		return false;
 
 	if (order >= debug_guardpage_minorder())
 		return false;
 
-	page_ext = lookup_page_ext(page);
-	if (unlikely(!page_ext))
-		return false;
-
-	__set_bit(PAGE_EXT_DEBUG_GUARD, &page_ext->flags);
-
+	__SetPageGuard(page);
 	INIT_LIST_HEAD(&page->lru);
 	set_page_private(page, order);
 	/* Guard pages are not available for any usage */
@@ -740,23 +715,16 @@ static inline bool set_page_guard(struct zone *zone, struct page *page,
 static inline void clear_page_guard(struct zone *zone, struct page *page,
 				unsigned int order, int migratetype)
 {
-	struct page_ext *page_ext;
-
 	if (!debug_guardpage_enabled())
 		return;
 
-	page_ext = lookup_page_ext(page);
-	if (unlikely(!page_ext))
-		return;
-
-	__clear_bit(PAGE_EXT_DEBUG_GUARD, &page_ext->flags);
+	__ClearPageGuard(page);
 
 	set_page_private(page, 0);
 	if (!is_migrate_isolate(migratetype))
 		__mod_zone_freepage_state(zone, (1 << order), migratetype);
 }
 #else
-struct page_ext_operations debug_guardpage_ops;
 static inline bool set_page_guard(struct zone *zone, struct page *page,
 			unsigned int order, int migratetype) { return false; }
 static inline void clear_page_guard(struct zone *zone, struct page *page,
@@ -1931,6 +1899,10 @@ void __init page_alloc_init_late(void)
 
 	for_each_populated_zone(zone)
 		set_zone_contiguous(zone);
+
+#ifdef CONFIG_DEBUG_PAGEALLOC
+	init_debug_guardpage();
+#endif
 }
 
 #ifdef CONFIG_CMA
diff --git a/mm/page_ext.c b/mm/page_ext.c
index d8f1aca4ad43..5f5769c7db3b 100644
--- a/mm/page_ext.c
+++ b/mm/page_ext.c
@@ -59,9 +59,6 @@
  */
 
 static struct page_ext_operations *page_ext_ops[] = {
-#ifdef CONFIG_DEBUG_PAGEALLOC
-	&debug_guardpage_ops,
-#endif
 #ifdef CONFIG_PAGE_OWNER
 	&page_owner_ops,
 #endif