From patchwork Sat Mar 6 00:15:52 2021
X-Patchwork-Submitter: Andrey Konovalov
X-Patchwork-Id: 12119617
Date: Sat, 6 Mar 2021 01:15:52 +0100
Subject: [PATCH 3/5] kasan, mm: integrate page_alloc init with HW_TAGS
From: Andrey Konovalov
To: Alexander Potapenko
Cc: Andrew Morton, Catalin Marinas, Will Deacon, Vincenzo Frascino,
 Dmitry Vyukov, Andrey Ryabinin, Marco Elver, Peter Collingbourne,
 Evgenii Stepanov, Branislav Rankov, Kevin Brodsky,
 kasan-dev@googlegroups.com, linux-arm-kernel@lists.infradead.org,
 linux-mm@kvack.org, linux-kernel@vger.kernel.org, Andrey Konovalov

This change uses the previously added memory initialization feature of
HW_TAGS KASAN routines for page_alloc memory when init_on_alloc/free is
enabled.

With this change, kernel_init_free_pages() is no longer called when both
HW_TAGS KASAN and init_on_alloc/free are enabled. Instead, memory is
initialized by the KASAN runtime.

To avoid discrepancies in which memory gets initialized that could be
introduced by future changes, the KASAN and kernel_init_free_pages()
hooks are placed together and a warning comment is added.

This patch changes the order in which the memory initialization and page
poisoning hooks are called. This has no side effects, as memory
initialization is disabled whenever page poisoning is enabled.

Combining the setting of allocation tags with memory initialization
improves HW_TAGS KASAN performance when init_on_alloc/free is enabled.
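As an illustration of the resulting behavior (this sketch is not part of
the patch; it is a standalone userspace model, and the two flags below are
hypothetical stand-ins for IS_ENABLED(CONFIG_KASAN_HW_TAGS) and
want_init_on_free()), the free-path division of labor can be modeled as:

/*
 * Standalone model (not kernel code) of who initializes page memory
 * on the free path after this patch.
 */
#include <stdbool.h>
#include <stdio.h>

static bool hw_tags;       /* stand-in for IS_ENABLED(CONFIG_KASAN_HW_TAGS) */
static bool init_on_free;  /* stand-in for want_init_on_free() */

static void free_path_model(void)
{
	bool init = init_on_free;

	/* kernel_init_free_pages() now only runs when HW_TAGS KASAN
	 * cannot perform the initialization itself. */
	if (init && !hw_tags)
		printf("initialized by kernel_init_free_pages()\n");

	/* kasan_free_pages(page, order, init): with HW_TAGS, setting
	 * memory tags and initializing memory happen in one pass. */
	if (hw_tags)
		printf("poisoned%s by the KASAN runtime\n",
		       init ? " and initialized" : "");
}

int main(void)
{
	hw_tags = true;
	init_on_free = true;
	free_path_model();
	return 0;
}

The allocation path is symmetric: init is computed from
want_init_on_alloc()/want_init_on_free() and passed to
kasan_alloc_pages(), which initializes memory while setting tags.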
Signed-off-by: Andrey Konovalov
---
 include/linux/kasan.h | 16 ++++++++--------
 mm/kasan/common.c     |  8 ++++----
 mm/mempool.c          |  4 ++--
 mm/page_alloc.c       | 37 ++++++++++++++++++++++++++-----------
 4 files changed, 40 insertions(+), 25 deletions(-)

diff --git a/include/linux/kasan.h b/include/linux/kasan.h
index 1d89b8175027..4c0f414a893b 100644
--- a/include/linux/kasan.h
+++ b/include/linux/kasan.h
@@ -120,20 +120,20 @@ static __always_inline void kasan_unpoison_range(const void *addr, size_t size)
 		__kasan_unpoison_range(addr, size);
 }
 
-void __kasan_alloc_pages(struct page *page, unsigned int order);
+void __kasan_alloc_pages(struct page *page, unsigned int order, bool init);
 static __always_inline void kasan_alloc_pages(struct page *page,
-						unsigned int order)
+						unsigned int order, bool init)
 {
 	if (kasan_enabled())
-		__kasan_alloc_pages(page, order);
+		__kasan_alloc_pages(page, order, init);
 }
 
-void __kasan_free_pages(struct page *page, unsigned int order);
+void __kasan_free_pages(struct page *page, unsigned int order, bool init);
 static __always_inline void kasan_free_pages(struct page *page,
-						unsigned int order)
+						unsigned int order, bool init)
 {
 	if (kasan_enabled())
-		__kasan_free_pages(page, order);
+		__kasan_free_pages(page, order, init);
 }
 
 void __kasan_cache_create(struct kmem_cache *cache, unsigned int *size,
@@ -282,8 +282,8 @@ static inline slab_flags_t kasan_never_merge(void)
 	return 0;
 }
 static inline void kasan_unpoison_range(const void *address, size_t size) {}
-static inline void kasan_alloc_pages(struct page *page, unsigned int order) {}
-static inline void kasan_free_pages(struct page *page, unsigned int order) {}
+static inline void kasan_alloc_pages(struct page *page, unsigned int order, bool init) {}
+static inline void kasan_free_pages(struct page *page, unsigned int order, bool init) {}
 static inline void kasan_cache_create(struct kmem_cache *cache,
 				      unsigned int *size,
 				      slab_flags_t *flags) {}
diff --git a/mm/kasan/common.c b/mm/kasan/common.c
index 316f7f8cd8e6..6107c795611f 100644
--- a/mm/kasan/common.c
+++ b/mm/kasan/common.c
@@ -97,7 +97,7 @@ slab_flags_t __kasan_never_merge(void)
 	return 0;
 }
 
-void __kasan_alloc_pages(struct page *page, unsigned int order)
+void __kasan_alloc_pages(struct page *page, unsigned int order, bool init)
 {
 	u8 tag;
 	unsigned long i;
@@ -108,14 +108,14 @@ void __kasan_alloc_pages(struct page *page, unsigned int order)
 	tag = kasan_random_tag();
 	for (i = 0; i < (1 << order); i++)
 		page_kasan_tag_set(page + i, tag);
-	kasan_unpoison(page_address(page), PAGE_SIZE << order, false);
+	kasan_unpoison(page_address(page), PAGE_SIZE << order, init);
 }
 
-void __kasan_free_pages(struct page *page, unsigned int order)
+void __kasan_free_pages(struct page *page, unsigned int order, bool init)
 {
 	if (likely(!PageHighMem(page)))
 		kasan_poison(page_address(page), PAGE_SIZE << order,
-			     KASAN_FREE_PAGE, false);
+			     KASAN_FREE_PAGE, init);
 }
 
 /*
diff --git a/mm/mempool.c b/mm/mempool.c
index 79959fac27d7..fe19d290a301 100644
--- a/mm/mempool.c
+++ b/mm/mempool.c
@@ -106,7 +106,7 @@ static __always_inline void kasan_poison_element(mempool_t *pool, void *element)
 	if (pool->alloc == mempool_alloc_slab || pool->alloc == mempool_kmalloc)
 		kasan_slab_free_mempool(element);
 	else if (pool->alloc == mempool_alloc_pages)
-		kasan_free_pages(element, (unsigned long)pool->pool_data);
+		kasan_free_pages(element, (unsigned long)pool->pool_data, false);
 }
 
 static void kasan_unpoison_element(mempool_t *pool, void *element)
@@ -114,7 +114,7 @@ static void kasan_unpoison_element(mempool_t *pool, void *element)
 	if (pool->alloc == mempool_alloc_slab || pool->alloc == mempool_kmalloc)
 		kasan_unpoison_range(element, __ksize(element));
 	else if (pool->alloc == mempool_alloc_pages)
-		kasan_alloc_pages(element, (unsigned long)pool->pool_data);
+		kasan_alloc_pages(element, (unsigned long)pool->pool_data, false);
 }
 
 static __always_inline void add_element(mempool_t *pool, void *element)
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 0efb07b5907c..175bdb36d113 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -396,14 +396,14 @@ static DEFINE_STATIC_KEY_TRUE(deferred_pages);
  * initialization is done, but this is not likely to happen.
  */
 static inline void kasan_free_nondeferred_pages(struct page *page, int order,
-						fpi_t fpi_flags)
+						bool init, fpi_t fpi_flags)
 {
 	if (static_branch_unlikely(&deferred_pages))
 		return;
 	if (!IS_ENABLED(CONFIG_KASAN_GENERIC) &&
 			(fpi_flags & FPI_SKIP_KASAN_POISON))
 		return;
-	kasan_free_pages(page, order);
+	kasan_free_pages(page, order, init);
 }
 
 /* Returns true if the struct page for the pfn is uninitialised */
@@ -455,12 +455,12 @@ defer_init(int nid, unsigned long pfn, unsigned long end_pfn)
 }
 #else
 static inline void kasan_free_nondeferred_pages(struct page *page, int order,
-						fpi_t fpi_flags)
+						bool init, fpi_t fpi_flags)
 {
 	if (!IS_ENABLED(CONFIG_KASAN_GENERIC) &&
 			(fpi_flags & FPI_SKIP_KASAN_POISON))
 		return;
-	kasan_free_pages(page, order);
+	kasan_free_pages(page, order, init);
 }
 
 static inline bool early_page_uninitialised(unsigned long pfn)
@@ -1242,6 +1242,7 @@ static __always_inline bool free_pages_prepare(struct page *page,
 			unsigned int order, bool check_free, fpi_t fpi_flags)
 {
 	int bad = 0;
+	bool init;
 
 	VM_BUG_ON_PAGE(PageTail(page), page);
 
@@ -1299,16 +1300,21 @@ static __always_inline bool free_pages_prepare(struct page *page,
 		debug_check_no_obj_freed(page_address(page),
 					   PAGE_SIZE << order);
 	}
-	if (want_init_on_free())
-		kernel_init_free_pages(page, 1 << order);
 
 	kernel_poison_pages(page, 1 << order);
 
 	/*
+	 * As memory initialization is integrated with hardware tag-based
+	 * KASAN, kasan_free_pages and kernel_init_free_pages must be
+	 * kept together to avoid discrepancies in behavior.
+	 *
 	 * With hardware tag-based KASAN, memory tags must be set before the
 	 * page becomes unavailable via debug_pagealloc or arch_free_page.
 	 */
-	kasan_free_nondeferred_pages(page, order, fpi_flags);
+	init = want_init_on_free();
+	if (init && !IS_ENABLED(CONFIG_KASAN_HW_TAGS))
+		kernel_init_free_pages(page, 1 << order);
+	kasan_free_nondeferred_pages(page, order, init, fpi_flags);
 
 	/*
 	 * arch_free_page() can make the page's contents inaccessible.  s390
@@ -2315,17 +2321,26 @@ static bool check_new_pages(struct page *page, unsigned int order)
 inline void post_alloc_hook(struct page *page, unsigned int order,
 				gfp_t gfp_flags)
 {
+	bool init;
+
 	set_page_private(page, 0);
 	set_page_refcounted(page);
 
 	arch_alloc_page(page, order);
 	debug_pagealloc_map_pages(page, 1 << order);
-	kasan_alloc_pages(page, order);
-	kernel_unpoison_pages(page, 1 << order);
-	set_page_owner(page, order, gfp_flags);
 
-	if (!want_init_on_free() && want_init_on_alloc(gfp_flags))
+	/*
+	 * As memory initialization is integrated with hardware tag-based
+	 * KASAN, kasan_alloc_pages and kernel_init_free_pages must be
+	 * kept together to avoid discrepancies in behavior.
+	 */
+	init = !want_init_on_free() && want_init_on_alloc(gfp_flags);
+	kasan_alloc_pages(page, order, init);
+	if (init && !IS_ENABLED(CONFIG_KASAN_HW_TAGS))
 		kernel_init_free_pages(page, 1 << order);
+
+	kernel_unpoison_pages(page, 1 << order);
+	set_page_owner(page, order, gfp_flags);
 }
 
 static void prep_new_page(struct page *page, unsigned int order, gfp_t gfp_flags,