From patchwork Sat Mar 6 00:05:58 2021
X-Patchwork-Submitter: Andrey Konovalov
X-Patchwork-Id: 12119609
Date: Sat, 6 Mar 2021 01:05:58 +0100
X-Mailer: git-send-email 2.30.1.766.gb4fecdf3b7-goog
Subject: [PATCH v3 2/2] mm, kasan: don't poison boot memory with tag-based modes
From: Andrey Konovalov
To: Andrew Morton, Alexander Potapenko
Cc: Catalin Marinas, Will Deacon, Vincenzo Frascino, Dmitry Vyukov,
 Andrey Ryabinin, Marco Elver, Peter Collingbourne, Evgenii Stepanov,
 Branislav Rankov, Kevin Brodsky, kasan-dev@googlegroups.com,
 linux-arm-kernel@lists.infradead.org, linux-mm@kvack.org,
 linux-kernel@vger.kernel.org, Andrey Konovalov

During boot, all non-reserved memblock memory is exposed to page_alloc via
memblock_free_pages->__free_pages_core(). This results in kasan_free_pages()
being called, which poisons that memory.

Poisoning all that memory lengthens boot time. The most noticeable effect is
observed with the HW_TAGS mode. A boot-time impact may potentially also affect
systems with a large amount of RAM.

This patch changes the tag-based modes to not poison the memory during the
memblock->page_alloc transition.

An exception is made for KASAN_GENERIC. Since it marks all new memory as
accessible, not poisoning the memory released from memblock will lead to KASAN
missing invalid boot-time accesses to that memory.

With KASAN_SW_TAGS, as it uses the invalid 0xFE tag as the default tag for all
memory, it won't miss bad boot-time accesses even if the poisoning of memblock
memory is removed.

With KASAN_HW_TAGS, the default memory tag values are unspecified. Therefore,
if memblock poisoning is removed, this KASAN mode will miss the mentioned type
of boot-time bug with a 1/16 probability. This is taken as an acceptable
trade-off.

Internally, the poisoning is removed as follows. __free_pages_core() is used
when exposing fresh memory during system boot and when onlining memory during
hotplug. This patch adds a new FPI_SKIP_KASAN_POISON flag and passes it to
__free_pages_ok() through free_pages_prepare() from __free_pages_core(). If
FPI_SKIP_KASAN_POISON is set, kasan_free_pages() is not called.

All memory allocated normally when the boot is over keeps getting poisoned as
usual.
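For illustration only, and not part of the patch itself: the standalone C
sketch below models the skip decision described above. The helper
should_poison_on_free() and the kasan_generic_enabled variable are
hypothetical stand-ins for kasan_free_nondeferred_pages() and
IS_ENABLED(CONFIG_KASAN_GENERIC); only the FPI_* flag names mirror the patch.

	/*
	 * Illustrative userspace sketch -- not kernel code. It models when
	 * KASAN poisoning is skipped on page free: only when the mode is
	 * tag-based (i.e. not generic KASAN) and FPI_SKIP_KASAN_POISON was
	 * passed by the caller, as __free_pages_core() does during boot.
	 */
	#include <stdbool.h>
	#include <stdio.h>

	typedef unsigned int fpi_t;

	#define FPI_NONE		((fpi_t)0)
	#define FPI_TO_TAIL		((fpi_t)(1u << 1))
	#define FPI_SKIP_KASAN_POISON	((fpi_t)(1u << 2))

	/* Hypothetical stand-in for IS_ENABLED(CONFIG_KASAN_GENERIC). */
	static bool kasan_generic_enabled = false;

	static bool should_poison_on_free(fpi_t fpi_flags)
	{
		/* Generic KASAN always poisons; tag-based modes may skip. */
		if (!kasan_generic_enabled &&
		    (fpi_flags & FPI_SKIP_KASAN_POISON))
			return false;
		return true;
	}

	int main(void)
	{
		/* Boot-time free: __free_pages_core() passes the skip flag. */
		printf("boot free, tag-based: poison=%d\n",
		       should_poison_on_free(FPI_TO_TAIL | FPI_SKIP_KASAN_POISON));

		/* Regular runtime free: FPI_NONE, poisoning happens as usual. */
		printf("runtime free, tag-based: poison=%d\n",
		       should_poison_on_free(FPI_NONE));

		/* Generic KASAN ignores the flag and always poisons. */
		kasan_generic_enabled = true;
		printf("boot free, generic: poison=%d\n",
		       should_poison_on_free(FPI_TO_TAIL | FPI_SKIP_KASAN_POISON));
		return 0;
	}

The intent matches the description: only boot-time frees that carry
FPI_SKIP_KASAN_POISON under a tag-based mode skip poisoning; every other path
keeps the existing behavior.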
Reviewed-by: Catalin Marinas
Signed-off-by: Andrey Konovalov
---

Changes v2->v3:
- Rebased onto v3 of "kasan, mm: fix crash with HW_TAGS and DEBUG_PAGEALLOC".

---
 mm/page_alloc.c | 45 ++++++++++++++++++++++++++++++++++-----------
 1 file changed, 34 insertions(+), 11 deletions(-)

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index c89ee1ba7034..0efb07b5907c 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -108,6 +108,17 @@ typedef int __bitwise fpi_t;
  */
 #define FPI_TO_TAIL		((__force fpi_t)BIT(1))
 
+/*
+ * Don't poison memory with KASAN (only for the tag-based modes).
+ * During boot, all non-reserved memblock memory is exposed to page_alloc.
+ * Poisoning all that memory lengthens boot time, especially on systems with
+ * large amount of RAM. This flag is used to skip that poisoning.
+ * This is only done for the tag-based KASAN modes, as those are able to
+ * detect memory corruptions with the memory tags assigned by default.
+ * All memory allocated normally after boot gets poisoned as usual.
+ */
+#define FPI_SKIP_KASAN_POISON	((__force fpi_t)BIT(2))
+
 /* prevent >1 _updater_ of zone percpu pageset ->high and ->batch fields */
 static DEFINE_MUTEX(pcp_batch_high_lock);
 #define MIN_PERCPU_PAGELIST_FRACTION	(8)
@@ -384,10 +395,15 @@ static DEFINE_STATIC_KEY_TRUE(deferred_pages);
  * on-demand allocation and then freed again before the deferred pages
  * initialization is done, but this is not likely to happen.
  */
-static inline void kasan_free_nondeferred_pages(struct page *page, int order)
+static inline void kasan_free_nondeferred_pages(struct page *page, int order,
+						fpi_t fpi_flags)
 {
-	if (!static_branch_unlikely(&deferred_pages))
-		kasan_free_pages(page, order);
+	if (static_branch_unlikely(&deferred_pages))
+		return;
+	if (!IS_ENABLED(CONFIG_KASAN_GENERIC) &&
+	    (fpi_flags & FPI_SKIP_KASAN_POISON))
+		return;
+	kasan_free_pages(page, order);
 }
 
 /* Returns true if the struct page for the pfn is uninitialised */
@@ -438,7 +454,14 @@ defer_init(int nid, unsigned long pfn, unsigned long end_pfn)
 	return false;
 }
 #else
-#define kasan_free_nondeferred_pages(p, o)	kasan_free_pages(p, o)
+static inline void kasan_free_nondeferred_pages(struct page *page, int order,
+						fpi_t fpi_flags)
+{
+	if (!IS_ENABLED(CONFIG_KASAN_GENERIC) &&
+	    (fpi_flags & FPI_SKIP_KASAN_POISON))
+		return;
+	kasan_free_pages(page, order);
+}
 
 static inline bool early_page_uninitialised(unsigned long pfn)
 {
@@ -1216,7 +1239,7 @@ static void kernel_init_free_pages(struct page *page, int numpages)
 }
 
 static __always_inline bool free_pages_prepare(struct page *page,
-					unsigned int order, bool check_free)
+			unsigned int order, bool check_free, fpi_t fpi_flags)
 {
 	int bad = 0;
 
@@ -1285,7 +1308,7 @@ static __always_inline bool free_pages_prepare(struct page *page,
 	 * With hardware tag-based KASAN, memory tags must be set before the
 	 * page becomes unavailable via debug_pagealloc or arch_free_page.
 	 */
-	kasan_free_nondeferred_pages(page, order);
+	kasan_free_nondeferred_pages(page, order, fpi_flags);
 
 	/*
 	 * arch_free_page() can make the page's contents inaccessible.  s390
@@ -1307,7 +1330,7 @@ static __always_inline bool free_pages_prepare(struct page *page,
  */
 static bool free_pcp_prepare(struct page *page)
 {
-	return free_pages_prepare(page, 0, true);
+	return free_pages_prepare(page, 0, true, FPI_NONE);
 }
 
 static bool bulkfree_pcp_prepare(struct page *page)
@@ -1327,9 +1350,9 @@ static bool bulkfree_pcp_prepare(struct page *page)
 static bool free_pcp_prepare(struct page *page)
 {
 	if (debug_pagealloc_enabled_static())
-		return free_pages_prepare(page, 0, true);
+		return free_pages_prepare(page, 0, true, FPI_NONE);
 	else
-		return free_pages_prepare(page, 0, false);
+		return free_pages_prepare(page, 0, false, FPI_NONE);
 }
 
 static bool bulkfree_pcp_prepare(struct page *page)
@@ -1537,7 +1560,7 @@ static void __free_pages_ok(struct page *page, unsigned int order,
 	int migratetype;
 	unsigned long pfn = page_to_pfn(page);
 
-	if (!free_pages_prepare(page, order, true))
+	if (!free_pages_prepare(page, order, true, fpi_flags))
 		return;
 
 	migratetype = get_pfnblock_migratetype(page, pfn);
@@ -1574,7 +1597,7 @@ void __free_pages_core(struct page *page, unsigned int order)
 	 * Bypass PCP and place fresh pages right to the tail, primarily
 	 * relevant for memory onlining.
 	 */
-	__free_pages_ok(page, order, FPI_TO_TAIL);
+	__free_pages_ok(page, order, FPI_TO_TAIL | FPI_SKIP_KASAN_POISON);
 }
 
 #ifdef CONFIG_NEED_MULTIPLE_NODES