From patchwork Wed Feb 17 20:59:24 2021
X-Patchwork-Submitter: Andrey Konovalov
X-Patchwork-Id: 12092331
Date: Wed, 17 Feb 2021 21:59:24 +0100
Message-Id: <8d79640cdab4608c454310881b6c771e856dbd2e.1613595522.git.andreyknvl@google.com>
Subject: [PATCH RESEND] mm, kasan: don't poison boot memory
From: Andrey Konovalov
To: Andrew Morton, Catalin Marinas, Vincenzo Frascino
Cc: linux-arm-kernel@lists.infradead.org, Marco Elver, Andrey Konovalov,
    Kevin Brodsky, Will Deacon, Branislav Rankov, kasan-dev@googlegroups.com,
    linux-kernel@vger.kernel.org, Christoph Hellwig, linux-mm@kvack.org,
    Alexander Potapenko, Evgenii Stepanov, Andrey Ryabinin,
    Peter Collingbourne, Dmitry Vyukov

During boot, all non-reserved memblock memory is exposed to the buddy
allocator. Poisoning all that memory with KASAN lengthens boot time,
especially on systems with a large amount of RAM. This patch makes
page_alloc not call kasan_free_pages() on all new memory.

__free_pages_core() is used when exposing fresh memory during system boot
and when onlining memory during hotplug. This patch adds a new
FPI_SKIP_KASAN_POISON flag and passes it to __free_pages_ok() through
free_pages_prepare() from __free_pages_core().

This has little impact on KASAN memory tracking. Assuming that there are
no references to newly exposed pages before they are ever allocated, there
won't be any intended (but buggy) accesses to that memory that KASAN would
normally detect. However, with this patch, KASAN stops detecting wild and
large out-of-bounds accesses that happen to land on a fresh memory page
that was never allocated. This is considered an acceptable trade-off. All
memory allocated normally once boot is over keeps getting poisoned as
usual.

Signed-off-by: Andrey Konovalov
Reviewed-by: Catalin Marinas
---

Resending with Change-Id dropped.

---
 mm/page_alloc.c | 43 ++++++++++++++++++++++++++++++++-----------
 1 file changed, 32 insertions(+), 11 deletions(-)

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 0b55c9c95364..f10966e3b4a5 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -108,6 +108,17 @@ typedef int __bitwise fpi_t;
  */
 #define FPI_TO_TAIL		((__force fpi_t)BIT(1))
 
+/*
+ * Don't poison memory with KASAN.
+ * During boot, all non-reserved memblock memory is exposed to the buddy
+ * allocator. Poisoning all that memory lengthens boot time, especially on
+ * systems with a large amount of RAM. This flag is used to skip that poisoning.
+ * Assuming that there are no references to those newly exposed pages before
+ * they are ever allocated, this has little effect on KASAN memory tracking.
+ * All memory allocated normally after boot gets poisoned as usual.
+ */
+#define FPI_SKIP_KASAN_POISON	((__force fpi_t)BIT(2))
+
 /* prevent >1 _updater_ of zone percpu pageset ->high and ->batch fields */
 static DEFINE_MUTEX(pcp_batch_high_lock);
 #define MIN_PERCPU_PAGELIST_FRACTION	(8)
@@ -384,10 +395,14 @@ static DEFINE_STATIC_KEY_TRUE(deferred_pages);
  * on-demand allocation and then freed again before the deferred pages
  * initialization is done, but this is not likely to happen.
  */
-static inline void kasan_free_nondeferred_pages(struct page *page, int order)
+static inline void kasan_free_nondeferred_pages(struct page *page, int order,
+						fpi_t fpi_flags)
 {
-	if (!static_branch_unlikely(&deferred_pages))
-		kasan_free_pages(page, order);
+	if (static_branch_unlikely(&deferred_pages))
+		return;
+	if (fpi_flags & FPI_SKIP_KASAN_POISON)
+		return;
+	kasan_free_pages(page, order);
 }
 
 /* Returns true if the struct page for the pfn is uninitialised */
@@ -438,7 +453,13 @@ defer_init(int nid, unsigned long pfn, unsigned long end_pfn)
 	return false;
 }
 #else
-#define kasan_free_nondeferred_pages(p, o)	kasan_free_pages(p, o)
+static inline void kasan_free_nondeferred_pages(struct page *page, int order,
+						fpi_t fpi_flags)
+{
+	if (fpi_flags & FPI_SKIP_KASAN_POISON)
+		return;
+	kasan_free_pages(page, order);
+}
 
 static inline bool early_page_uninitialised(unsigned long pfn)
 {
@@ -1216,7 +1237,7 @@ static void kernel_init_free_pages(struct page *page, int numpages)
 }
 
 static __always_inline bool free_pages_prepare(struct page *page,
-			unsigned int order, bool check_free)
+			unsigned int order, bool check_free, fpi_t fpi_flags)
 {
 	int bad = 0;
 
@@ -1290,7 +1311,7 @@ static __always_inline bool free_pages_prepare(struct page *page,
 
 	debug_pagealloc_unmap_pages(page, 1 << order);
 
-	kasan_free_nondeferred_pages(page, order);
+	kasan_free_nondeferred_pages(page, order, fpi_flags);
 
 	return true;
 }
@@ -1303,7 +1324,7 @@ static __always_inline bool free_pages_prepare(struct page *page,
  */
 static bool free_pcp_prepare(struct page *page)
 {
-	return free_pages_prepare(page, 0, true);
+	return free_pages_prepare(page, 0, true, FPI_NONE);
 }
 
 static bool bulkfree_pcp_prepare(struct page *page)
@@ -1323,9 +1344,9 @@ static bool bulkfree_pcp_prepare(struct page *page)
 static bool free_pcp_prepare(struct page *page)
 {
 	if (debug_pagealloc_enabled_static())
-		return free_pages_prepare(page, 0, true);
+		return free_pages_prepare(page, 0, true, FPI_NONE);
 	else
-		return free_pages_prepare(page, 0, false);
+		return free_pages_prepare(page, 0, false, FPI_NONE);
 }
 
 static bool bulkfree_pcp_prepare(struct page *page)
@@ -1533,7 +1554,7 @@ static void __free_pages_ok(struct page *page, unsigned int order,
 	int migratetype;
 	unsigned long pfn = page_to_pfn(page);
 
-	if (!free_pages_prepare(page, order, true))
+	if (!free_pages_prepare(page, order, true, fpi_flags))
 		return;
 
 	migratetype = get_pfnblock_migratetype(page, pfn);
@@ -1570,7 +1591,7 @@ void __free_pages_core(struct page *page, unsigned int order)
 	 * Bypass PCP and place fresh pages right to the tail, primarily
 	 * relevant for memory onlining.
 	 */
-	__free_pages_ok(page, order, FPI_TO_TAIL);
+	__free_pages_ok(page, order, FPI_TO_TAIL | FPI_SKIP_KASAN_POISON);
 }
 
 #ifdef CONFIG_NEED_MULTIPLE_NODES
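
[Editor's note] For readers tracing the control flow: stripped of the
page_alloc details, the patch reduces to gating one expensive step behind a
bit in a flags word that callers pass down the free path. The sketch below is
a minimal, self-contained userspace illustration of that pattern, not kernel
code: fpi_t is simplified to a plain unsigned int, and mock_poison() is a
hypothetical stand-in for kasan_free_pages().

#include <stdio.h>

/* Simplified stand-in for the kernel's fpi_t (int __bitwise). */
typedef unsigned int fpi_t;

#define FPI_NONE		((fpi_t)0)
#define FPI_TO_TAIL		((fpi_t)(1U << 1))
#define FPI_SKIP_KASAN_POISON	((fpi_t)(1U << 2))

/* Hypothetical stand-in for kasan_free_pages(): in the kernel this marks
 * the freed range inaccessible, which is the expensive step being skipped. */
static void mock_poison(void *page, int order)
{
	printf("poisoning %d page(s) at %p\n", 1 << order, page);
}

/* Mirrors the patched kasan_free_nondeferred_pages(): the new flag check
 * short-circuits the poisoning before it starts. */
static void free_pages_prepare(void *page, int order, fpi_t fpi_flags)
{
	if (fpi_flags & FPI_SKIP_KASAN_POISON)
		return;	/* fresh boot/hotplug memory: skip the expensive pass */
	mock_poison(page, order);
}

int main(void)
{
	char boot_page[4096], runtime_page[4096];

	/* __free_pages_core() path: boot/hotplug memory, poisoning skipped. */
	free_pages_prepare(boot_page, 0, FPI_TO_TAIL | FPI_SKIP_KASAN_POISON);

	/* Normal runtime free path: poisoned as usual. */
	free_pages_prepare(runtime_page, 0, FPI_NONE);
	return 0;
}

Usage mirrors the patch: __free_pages_core() ORs FPI_SKIP_KASAN_POISON into
the existing FPI_TO_TAIL flags, so only the boot/hotplug path skips
poisoning, while every other free path passes FPI_NONE and is unchanged.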