From patchwork Sat Mar 6 00:05:58 2021
X-Patchwork-Submitter: Andrey Konovalov
X-Patchwork-Id: 12119603
Date: Sat, 6 Mar 2021 01:05:58 +0100
Subject: [PATCH v3 2/2] mm, kasan: don't poison boot memory with tag-based modes
From: Andrey Konovalov
To: Andrew Morton, Alexander Potapenko
Cc: Catalin Marinas, Will Deacon, Vincenzo Frascino, Dmitry Vyukov,
 Andrey Ryabinin, Marco Elver, Peter Collingbourne, Evgenii Stepanov,
 Branislav Rankov, Kevin Brodsky, kasan-dev@googlegroups.com,
 linux-arm-kernel@lists.infradead.org, linux-mm@kvack.org,
 linux-kernel@vger.kernel.org, Andrey Konovalov

During boot, all non-reserved memblock memory is exposed to page_alloc
via memblock_free_pages->__free_pages_core(). This results in
kasan_free_pages() being called, which poisons that memory.

Poisoning all that memory lengthens boot time. The most noticeable
effect is observed with the HW_TAGS mode. The boot-time impact may also
be noticeable on systems with a large amount of RAM.

This patch changes the tag-based KASAN modes to not poison the memory
during the memblock->page_alloc transition.

An exception is made for KASAN_GENERIC. Since it marks all new memory
as accessible, not poisoning the memory released from memblock would
cause KASAN to miss invalid boot-time accesses to that memory.

KASAN_SW_TAGS uses the invalid 0xFE tag as the default tag for all
memory, so it won't miss bad boot-time accesses even with the poisoning
of memblock memory removed.

With KASAN_HW_TAGS, the default memory tag values are unspecified.
Therefore, if memblock poisoning is removed, this KASAN mode will miss
the mentioned type of boot-time bugs with a 1/16 probability (MTE tags
are 4-bit, so an access through an arbitrarily tagged pointer matches
any given granule's tag 1 time in 16). This is taken as an acceptable
trade-off.

Internally, the poisoning is removed as follows. __free_pages_core() is
used when exposing fresh memory during system boot and when onlining
memory during hotplug. This patch adds a new FPI_SKIP_KASAN_POISON flag
and passes it to __free_pages_ok() through free_pages_prepare() from
__free_pages_core(). If FPI_SKIP_KASAN_POISON is set,
kasan_free_pages() is not called.

All memory allocated normally once boot is over keeps getting poisoned
as usual.

Reviewed-by: Catalin Marinas
Signed-off-by: Andrey Konovalov
---

Changes v2->v3:
- Rebased onto v3 of "kasan, mm: fix crash with HW_TAGS and
  DEBUG_PAGEALLOC".
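For reviewers skimming the diff below: the skip condition is open-coded
in both definitions of kasan_free_nondeferred_pages(). Read in
isolation, it boils down to the helper sketched here (illustrative
only; the helper name is not part of the patch):

	/* Sketch of the skip decision, not a verbatim excerpt. */
	static inline bool should_skip_kasan_poison(fpi_t fpi_flags)
	{
		/*
		 * Generic KASAN marks fresh memory as accessible, so it
		 * must keep poisoning freed memblock memory to catch
		 * invalid boot-time accesses; only the tag-based modes
		 * can rely on the default memory tags instead.
		 */
		return !IS_ENABLED(CONFIG_KASAN_GENERIC) &&
		       (fpi_flags & FPI_SKIP_KASAN_POISON);
	}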
---
 mm/page_alloc.c | 45 ++++++++++++++++++++++++++++++++++-----------
 1 file changed, 34 insertions(+), 11 deletions(-)

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index c89ee1ba7034..0efb07b5907c 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -108,6 +108,17 @@ typedef int __bitwise fpi_t;
  */
 #define FPI_TO_TAIL		((__force fpi_t)BIT(1))
 
+/*
+ * Don't poison memory with KASAN (only for the tag-based modes).
+ * During boot, all non-reserved memblock memory is exposed to page_alloc.
+ * Poisoning all that memory lengthens boot time, especially on systems with
+ * large amount of RAM. This flag is used to skip that poisoning.
+ * This is only done for the tag-based KASAN modes, as those are able to
+ * detect memory corruptions with the memory tags assigned by default.
+ * All memory allocated normally after boot gets poisoned as usual.
+ */
+#define FPI_SKIP_KASAN_POISON	((__force fpi_t)BIT(2))
+
 /* prevent >1 _updater_ of zone percpu pageset ->high and ->batch fields */
 static DEFINE_MUTEX(pcp_batch_high_lock);
 #define MIN_PERCPU_PAGELIST_FRACTION	(8)
@@ -384,10 +395,15 @@ static DEFINE_STATIC_KEY_TRUE(deferred_pages);
  * on-demand allocation and then freed again before the deferred pages
  * initialization is done, but this is not likely to happen.
  */
-static inline void kasan_free_nondeferred_pages(struct page *page, int order)
+static inline void kasan_free_nondeferred_pages(struct page *page, int order,
+						fpi_t fpi_flags)
 {
-	if (!static_branch_unlikely(&deferred_pages))
-		kasan_free_pages(page, order);
+	if (static_branch_unlikely(&deferred_pages))
+		return;
+	if (!IS_ENABLED(CONFIG_KASAN_GENERIC) &&
+			(fpi_flags & FPI_SKIP_KASAN_POISON))
+		return;
+	kasan_free_pages(page, order);
 }
 
 /* Returns true if the struct page for the pfn is uninitialised */
@@ -438,7 +454,14 @@ defer_init(int nid, unsigned long pfn, unsigned long end_pfn)
 	return false;
 }
 #else
-#define kasan_free_nondeferred_pages(p, o)	kasan_free_pages(p, o)
+static inline void kasan_free_nondeferred_pages(struct page *page, int order,
+						fpi_t fpi_flags)
+{
+	if (!IS_ENABLED(CONFIG_KASAN_GENERIC) &&
+			(fpi_flags & FPI_SKIP_KASAN_POISON))
+		return;
+	kasan_free_pages(page, order);
+}
 
 static inline bool early_page_uninitialised(unsigned long pfn)
 {
@@ -1216,7 +1239,7 @@ static void kernel_init_free_pages(struct page *page, int numpages)
 }
 
 static __always_inline bool free_pages_prepare(struct page *page,
-					unsigned int order, bool check_free)
+			unsigned int order, bool check_free, fpi_t fpi_flags)
 {
 	int bad = 0;
 
@@ -1285,7 +1308,7 @@ static __always_inline bool free_pages_prepare(struct page *page,
 	 * With hardware tag-based KASAN, memory tags must be set before the
 	 * page becomes unavailable via debug_pagealloc or arch_free_page.
 	 */
-	kasan_free_nondeferred_pages(page, order);
+	kasan_free_nondeferred_pages(page, order, fpi_flags);
 
 	/*
 	 * arch_free_page() can make the page's contents inaccessible.  s390
@@ -1307,7 +1330,7 @@ static __always_inline bool free_pages_prepare(struct page *page,
  */
 static bool free_pcp_prepare(struct page *page)
 {
-	return free_pages_prepare(page, 0, true);
+	return free_pages_prepare(page, 0, true, FPI_NONE);
 }
 
 static bool bulkfree_pcp_prepare(struct page *page)
@@ -1327,9 +1350,9 @@ static bool bulkfree_pcp_prepare(struct page *page)
 static bool free_pcp_prepare(struct page *page)
 {
 	if (debug_pagealloc_enabled_static())
-		return free_pages_prepare(page, 0, true);
+		return free_pages_prepare(page, 0, true, FPI_NONE);
 	else
-		return free_pages_prepare(page, 0, false);
+		return free_pages_prepare(page, 0, false, FPI_NONE);
 }
 
 static bool bulkfree_pcp_prepare(struct page *page)
@@ -1537,7 +1560,7 @@ static void __free_pages_ok(struct page *page, unsigned int order,
 	int migratetype;
 	unsigned long pfn = page_to_pfn(page);
 
-	if (!free_pages_prepare(page, order, true))
+	if (!free_pages_prepare(page, order, true, fpi_flags))
 		return;
 
 	migratetype = get_pfnblock_migratetype(page, pfn);
@@ -1574,7 +1597,7 @@ void __free_pages_core(struct page *page, unsigned int order)
 	 * Bypass PCP and place fresh pages right to the tail, primarily
 	 * relevant for memory onlining.
 	 */
-	__free_pages_ok(page, order, FPI_TO_TAIL);
+	__free_pages_ok(page, order, FPI_TO_TAIL | FPI_SKIP_KASAN_POISON);
 }
 
 #ifdef CONFIG_NEED_MULTIPLE_NODES
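For context, the resulting boot-time call chain is (paraphrased from
the commit message and diff, not a literal excerpt):

	memblock_free_pages()
	  -> __free_pages_core(page, order)
	       -> __free_pages_ok(page, order,
				  FPI_TO_TAIL | FPI_SKIP_KASAN_POISON)
	            -> free_pages_prepare(page, order, true, fpi_flags)
	                 -> kasan_free_nondeferred_pages(page, order,
							 fpi_flags)
	                      /* poisoning skipped for tag-based modes */

Runtime frees (free_pcp_prepare() and friends) pass FPI_NONE instead,
so all memory freed after boot keeps getting poisoned as usual.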