From patchwork Tue May 11 23:54:24 2021
Date: Tue, 11 May 2021 16:54:24 -0700
Message-Id: <441e1c22990205618e1bbff216e0bb83e3b13bae.1620777151.git.pcc@google.com>
Subject: [PATCH v2 1/3] kasan: use separate (un)poison implementation for integrated init
From: Peter Collingbourne
To: Andrey Konovalov, Alexander Potapenko, Catalin Marinas, Vincenzo Frascino, Andrew Morton
Cc: Peter Collingbourne, Evgenii Stepanov, linux-mm@kvack.org, linux-arm-kernel@lists.infradead.org

Currently, with integrated init, page_alloc.c needs to know whether
kasan_alloc_pages() will zero-initialize memory, but this will become
more complicated once we start adding tag initialization support for
user pages. To avoid page_alloc.c needing to know more details of what
integrated init will do, move the unpoisoning logic for integrated init
into the HW tags implementation. Currently the logic is identical, but
it will diverge in subsequent patches.

For symmetry, do the same for poisoning, although this logic will be
unaffected by subsequent patches.
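In outline, after this patch the allocator's two hooks have the
following shape (a condensed sketch of the post_alloc_hook() and
free_pages_prepare() hunks below, with the deferred-init and
FPI_SKIP_KASAN_POISON checks elided; the *_sketch names are ours, not
kernel symbols):

	static void alloc_hook_sketch(struct page *page, unsigned int order,
				      gfp_t gfp_flags)
	{
		if (kasan_has_integrated_init()) {
			/* HW tags decides the init policy internally. */
			kasan_alloc_pages(page, order, gfp_flags);
		} else {
			bool init = !want_init_on_free() &&
				    want_init_on_alloc(gfp_flags);

			kasan_unpoison_pages(page, order, init);
			if (init)
				kernel_init_free_pages(page, 1 << order);
		}
	}

	static void free_hook_sketch(struct page *page, unsigned int order)
	{
		if (kasan_has_integrated_init()) {
			kasan_free_pages(page, order);
		} else {
			bool init = want_init_on_free();

			if (init)
				kernel_init_free_pages(page, 1 << order);
			kasan_poison_pages(page, order, init);
		}
	}

Either way, page_alloc.c no longer computes what "init" means for the
HW tags case; it only chooses between the integrated and
non-integrated paths.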
Signed-off-by: Peter Collingbourne
Link: https://linux-review.googlesource.com/id/I2c550234c6c4a893c48c18ff0c6ce658c7c67056
---
v2:
- fix build with KASAN disabled

 include/linux/kasan.h | 66 +++++++++++++++++++++++++++----------------
 mm/kasan/common.c     |  4 +--
 mm/kasan/hw_tags.c    | 14 +++++++++
 mm/mempool.c          |  6 ++--
 mm/page_alloc.c       | 56 +++++++++++++++++++-----------------
 5 files changed, 91 insertions(+), 55 deletions(-)

diff --git a/include/linux/kasan.h b/include/linux/kasan.h
index b1678a61e6a7..e35fa301d3db 100644
--- a/include/linux/kasan.h
+++ b/include/linux/kasan.h
@@ -2,6 +2,7 @@
 #ifndef _LINUX_KASAN_H
 #define _LINUX_KASAN_H
 
+#include <linux/bug.h>
 #include <linux/static_key.h>
 #include <linux/types.h>
 
@@ -79,14 +80,6 @@ static inline void kasan_disable_current(void) {}
 
 #endif /* CONFIG_KASAN_GENERIC || CONFIG_KASAN_SW_TAGS */
 
-#ifdef CONFIG_KASAN
-
-struct kasan_cache {
-	int alloc_meta_offset;
-	int free_meta_offset;
-	bool is_kmalloc;
-};
-
 #ifdef CONFIG_KASAN_HW_TAGS
 
 DECLARE_STATIC_KEY_FALSE(kasan_flag_enabled);
@@ -101,11 +94,18 @@ static inline bool kasan_has_integrated_init(void)
 	return kasan_enabled();
 }
 
+void kasan_alloc_pages(struct page *page, unsigned int order, gfp_t flags);
+void kasan_free_pages(struct page *page, unsigned int order);
+
 #else /* CONFIG_KASAN_HW_TAGS */
 
 static inline bool kasan_enabled(void)
 {
+#ifdef CONFIG_KASAN
 	return true;
+#else
+	return false;
+#endif
 }
 
 static inline bool kasan_has_integrated_init(void)
@@ -113,8 +113,30 @@ static inline bool kasan_has_integrated_init(void)
 	return false;
 }
 
+static __always_inline void kasan_alloc_pages(struct page *page,
+					      unsigned int order, gfp_t flags)
+{
+	/* Only available for integrated init. */
+	BUG();
+}
+
+static __always_inline void kasan_free_pages(struct page *page,
+					     unsigned int order)
+{
+	/* Only available for integrated init. */
+	BUG();
+}
+
 #endif /* CONFIG_KASAN_HW_TAGS */
 
+#ifdef CONFIG_KASAN
+
+struct kasan_cache {
+	int alloc_meta_offset;
+	int free_meta_offset;
+	bool is_kmalloc;
+};
+
 slab_flags_t __kasan_never_merge(void);
 static __always_inline slab_flags_t kasan_never_merge(void)
 {
@@ -130,20 +152,20 @@ static __always_inline void kasan_unpoison_range(const void *addr, size_t size)
 	__kasan_unpoison_range(addr, size);
 }
 
-void __kasan_alloc_pages(struct page *page, unsigned int order, bool init);
-static __always_inline void kasan_alloc_pages(struct page *page,
+void __kasan_poison_pages(struct page *page, unsigned int order, bool init);
+static __always_inline void kasan_poison_pages(struct page *page,
 						unsigned int order, bool init)
 {
 	if (kasan_enabled())
-		__kasan_alloc_pages(page, order, init);
+		__kasan_poison_pages(page, order, init);
 }
 
-void __kasan_free_pages(struct page *page, unsigned int order, bool init);
-static __always_inline void kasan_free_pages(struct page *page,
-					     unsigned int order, bool init)
+void __kasan_unpoison_pages(struct page *page, unsigned int order, bool init);
+static __always_inline void kasan_unpoison_pages(struct page *page,
+						 unsigned int order, bool init)
 {
 	if (kasan_enabled())
-		__kasan_free_pages(page, order, init);
+		__kasan_unpoison_pages(page, order, init);
 }
 
 void __kasan_cache_create(struct kmem_cache *cache, unsigned int *size,
@@ -285,21 +307,15 @@ void kasan_restore_multi_shot(bool enabled);
 
 #else /* CONFIG_KASAN */
 
-static inline bool kasan_enabled(void)
-{
-	return false;
-}
-static inline bool kasan_has_integrated_init(void)
-{
-	return false;
-}
 static inline slab_flags_t kasan_never_merge(void)
 {
 	return 0;
 }
 static inline void kasan_unpoison_range(const void *address, size_t size) {}
-static inline void kasan_alloc_pages(struct page *page, unsigned int order, bool init) {}
-static inline void kasan_free_pages(struct page *page, unsigned int order, bool init) {}
+static inline void kasan_poison_pages(struct page *page, unsigned int order,
+				      bool init) {}
+static inline void kasan_unpoison_pages(struct page *page, unsigned int order,
+					bool init) {}
 static inline void kasan_cache_create(struct kmem_cache *cache,
 				      unsigned int *size,
 				      slab_flags_t *flags) {}
diff --git a/mm/kasan/common.c b/mm/kasan/common.c
index 6bb87f2acd4e..0ecd293af344 100644
--- a/mm/kasan/common.c
+++ b/mm/kasan/common.c
@@ -97,7 +97,7 @@ slab_flags_t __kasan_never_merge(void)
 	return 0;
 }
 
-void __kasan_alloc_pages(struct page *page, unsigned int order, bool init)
+void __kasan_unpoison_pages(struct page *page, unsigned int order, bool init)
 {
 	u8 tag;
 	unsigned long i;
@@ -111,7 +111,7 @@ void __kasan_alloc_pages(struct page *page, unsigned int order, bool init)
 	kasan_unpoison(page_address(page), PAGE_SIZE << order, init);
 }
 
-void __kasan_free_pages(struct page *page, unsigned int order, bool init)
+void __kasan_poison_pages(struct page *page, unsigned int order, bool init)
 {
 	if (likely(!PageHighMem(page)))
 		kasan_poison(page_address(page), PAGE_SIZE << order,
diff --git a/mm/kasan/hw_tags.c b/mm/kasan/hw_tags.c
index 4004388b4e4b..45e552cb9172 100644
--- a/mm/kasan/hw_tags.c
+++ b/mm/kasan/hw_tags.c
@@ -238,6 +238,20 @@ struct kasan_track *kasan_get_free_track(struct kmem_cache *cache,
 	return &alloc_meta->free_track[0];
 }
 
+void kasan_alloc_pages(struct page *page, unsigned int order, gfp_t flags)
+{
+	bool init = !want_init_on_free() && want_init_on_alloc(flags);
+
+	kasan_unpoison_pages(page, order, init);
+}
+
+void kasan_free_pages(struct page *page, unsigned int order)
+{
+	bool init = want_init_on_free();
+
+	kasan_poison_pages(page, order, init);
+}
+
 #if IS_ENABLED(CONFIG_KASAN_KUNIT_TEST)
 
 void kasan_set_tagging_report_once(bool state)
diff --git a/mm/mempool.c b/mm/mempool.c
index a258cf4de575..0b8afbec3e35 100644
--- a/mm/mempool.c
+++ b/mm/mempool.c
@@ -106,7 +106,8 @@ static __always_inline void kasan_poison_element(mempool_t *pool, void *element)
 	if (pool->alloc == mempool_alloc_slab || pool->alloc == mempool_kmalloc)
 		kasan_slab_free_mempool(element);
 	else if (pool->alloc == mempool_alloc_pages)
-		kasan_free_pages(element, (unsigned long)pool->pool_data, false);
+		kasan_poison_pages(element, (unsigned long)pool->pool_data,
+				   false);
 }
 
 static void kasan_unpoison_element(mempool_t *pool, void *element)
@@ -114,7 +115,8 @@ static void kasan_unpoison_element(mempool_t *pool, void *element)
 	if (pool->alloc == mempool_alloc_slab || pool->alloc == mempool_kmalloc)
 		kasan_unpoison_range(element, __ksize(element));
 	else if (pool->alloc == mempool_alloc_pages)
-		kasan_alloc_pages(element, (unsigned long)pool->pool_data, false);
+		kasan_unpoison_pages(element, (unsigned long)pool->pool_data,
+				     false);
 }
 
 static __always_inline void add_element(mempool_t *pool, void *element)
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index aaa1655cf682..6e82a7f6fd6f 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -382,7 +382,7 @@ int page_group_by_mobility_disabled __read_mostly;
 static DEFINE_STATIC_KEY_TRUE(deferred_pages);
 
 /*
- * Calling kasan_free_pages() only after deferred memory initialization
+ * Calling kasan_poison_pages() only after deferred memory initialization
  * has completed. Poisoning pages during deferred memory init will greatly
  * lengthen the process and cause problem in large memory systems as the
  * deferred pages initialization is done with interrupt disabled.
@@ -394,15 +394,11 @@ static DEFINE_STATIC_KEY_TRUE(deferred_pages);
  * on-demand allocation and then freed again before the deferred pages
  * initialization is done, but this is not likely to happen.
  */
-static inline void kasan_free_nondeferred_pages(struct page *page, int order,
-						bool init, fpi_t fpi_flags)
+static inline bool should_skip_kasan_poison(fpi_t fpi_flags)
 {
-	if (static_branch_unlikely(&deferred_pages))
-		return;
-	if (!IS_ENABLED(CONFIG_KASAN_GENERIC) &&
-	    (fpi_flags & FPI_SKIP_KASAN_POISON))
-		return;
-	kasan_free_pages(page, order, init);
+	return static_branch_unlikely(&deferred_pages) ||
+	       (!IS_ENABLED(CONFIG_KASAN_GENERIC) &&
+		(fpi_flags & FPI_SKIP_KASAN_POISON));
 }
 
 /* Returns true if the struct page for the pfn is uninitialised */
@@ -453,13 +449,10 @@ defer_init(int nid, unsigned long pfn, unsigned long end_pfn)
 	return false;
 }
 #else
-static inline void kasan_free_nondeferred_pages(struct page *page, int order,
-						bool init, fpi_t fpi_flags)
+static inline bool should_skip_kasan_poison(fpi_t fpi_flags)
 {
-	if (!IS_ENABLED(CONFIG_KASAN_GENERIC) &&
-	    (fpi_flags & FPI_SKIP_KASAN_POISON))
-		return;
-	kasan_free_pages(page, order, init);
+	return (!IS_ENABLED(CONFIG_KASAN_GENERIC) &&
+		(fpi_flags & FPI_SKIP_KASAN_POISON));
 }
 
 static inline bool early_page_uninitialised(unsigned long pfn)
@@ -1245,7 +1238,7 @@ static __always_inline bool free_pages_prepare(struct page *page,
 					unsigned int order, bool check_free, fpi_t fpi_flags)
 {
 	int bad = 0;
-	bool init;
+	bool skip_kasan_poison = should_skip_kasan_poison(fpi_flags);
 
 	VM_BUG_ON_PAGE(PageTail(page), page);
 
@@ -1314,10 +1307,17 @@ static __always_inline bool free_pages_prepare(struct page *page,
 	 * With hardware tag-based KASAN, memory tags must be set before the
 	 * page becomes unavailable via debug_pagealloc or arch_free_page.
 	 */
-	init = want_init_on_free();
-	if (init && !kasan_has_integrated_init())
-		kernel_init_free_pages(page, 1 << order);
-	kasan_free_nondeferred_pages(page, order, init, fpi_flags);
+	if (kasan_has_integrated_init()) {
+		if (!skip_kasan_poison)
+			kasan_free_pages(page, order);
+	} else {
+		bool init = want_init_on_free();
+
+		if (init)
+			kernel_init_free_pages(page, 1 << order);
+		if (!skip_kasan_poison)
+			kasan_poison_pages(page, order, init);
+	}
 
 	/*
 	 * arch_free_page() can make the page's contents inaccessible. s390
@@ -2324,8 +2324,6 @@ static bool check_new_pages(struct page *page, unsigned int order)
 inline void post_alloc_hook(struct page *page, unsigned int order,
 				gfp_t gfp_flags)
 {
-	bool init;
-
 	set_page_private(page, 0);
 	set_page_refcounted(page);
 
@@ -2344,10 +2342,16 @@ inline void post_alloc_hook(struct page *page, unsigned int order,
 	 * kasan_alloc_pages and kernel_init_free_pages must be
 	 * kept together to avoid discrepancies in behavior.
 	 */
-	init = !want_init_on_free() && want_init_on_alloc(gfp_flags);
-	kasan_alloc_pages(page, order, init);
-	if (init && !kasan_has_integrated_init())
-		kernel_init_free_pages(page, 1 << order);
+	if (kasan_has_integrated_init()) {
+		kasan_alloc_pages(page, order, gfp_flags);
+	} else {
+		bool init =
+			!want_init_on_free() && want_init_on_alloc(gfp_flags);
+
+		kasan_unpoison_pages(page, order, init);
+		if (init)
+			kernel_init_free_pages(page, 1 << order);
+	}
 
 	set_page_owner(page, order, gfp_flags);
 }
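A note on the BUG() stubs added to include/linux/kasan.h above: outside
CONFIG_KASAN_HW_TAGS, kasan_alloc_pages() and kasan_free_pages() must
never be reached, because callers are expected to branch on
kasan_has_integrated_init() first, as in this minimal sketch of the
expected caller pattern (illustrative, not a specific kernel call
site):

	if (kasan_has_integrated_init())
		kasan_free_pages(page, order);	/* real code only under HW tags */
	else
		kasan_poison_pages(page, order, want_init_on_free());

Since kasan_has_integrated_init() is a constant false in those
configurations, the compiler can drop the BUG() branch entirely; the
stubs exist only so the callers keep compiling.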
From patchwork Tue May 11 23:54:25 2021
Date: Tue, 11 May 2021 16:54:25 -0700
Message-Id: <69431af7c7d5a0688ef2aacc9e51949415df8325.1620777151.git.pcc@google.com>
Subject: [PATCH v2 2/3] arm64: mte: handle tags zeroing at page allocation time
From: Peter Collingbourne
To: Andrey Konovalov, Alexander Potapenko, Catalin Marinas, Vincenzo Frascino, Andrew Morton
Cc: Peter Collingbourne, Evgenii Stepanov, linux-mm@kvack.org, linux-arm-kernel@lists.infradead.org

Currently, on an anonymous page fault, the kernel allocates a zeroed
page and maps it in user space. If the mapping is tagged (PROT_MTE),
set_pte_at() additionally clears the tags. It is, however, more
efficient to clear the tags at the same time as zeroing the data on
allocation. To avoid clearing the tags on any page (which may not be
mapped as tagged), only do this if the vma flags contain VM_MTE. This
requires introducing a new GFP flag that is used to determine whether
to clear the tags.

The DC GZVA instruction with a 0 top byte (and 0 tag) requires
top-byte-ignore. Set the TCR_EL1.{TBI1,TBID1} bits irrespective of
whether KASAN_HW is enabled.
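For readers not steeped in MTE: DC GZVA behaves like DC ZVA, zeroing
one naturally aligned block of memory, but it also sets the block's
allocation tags from the logical tag of the address used, which is why
the assembly below first clears the tag bits of x0. A C rendering of
the mte_zero_clear_page_tags() loop in the mte.S hunk (read_dczid_el0()
and dc_gzva() are hypothetical stand-ins for the mrs and dc
instructions):

	void mte_zero_clear_page_tags_sketch(void *addr)
	{
		/* DCZID_EL0.BS is log2 of the block size in words, so bytes = 4 << BS. */
		unsigned long blksz = 4UL << (read_dczid_el0() & 0xf);
		/* Keep only bits below MTE_TAG_SHIFT: logical tag (and top byte) = 0. */
		unsigned long p = (unsigned long)addr & ((1UL << MTE_TAG_SHIFT) - 1);

		do {
			dc_gzva(p);		/* zero one block of data plus its tags */
			p += blksz;
		} while (p & (PAGE_SIZE - 1));	/* stop at the next page boundary */
	}

Clearing the top byte of a kernel (TTBR1) address makes it
non-canonical unless top-byte-ignore is in effect, which is why the
proc.S hunk below sets TCR_EL1.{TBI1,TBID1} even when KASAN_HW is not
configured.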
Signed-off-by: Peter Collingbourne
Co-developed-by: Catalin Marinas
Signed-off-by: Catalin Marinas
Link: https://linux-review.googlesource.com/id/Id46dc94e30fe11474f7e54f5d65e7658dbdddb26
Reviewed-by: Catalin Marinas
---
v2:
- remove want_zero_tags_on_free()

 arch/arm64/include/asm/mte.h  |  4 ++++
 arch/arm64/include/asm/page.h |  9 +++++++--
 arch/arm64/lib/mte.S          | 20 ++++++++++++++++++++
 arch/arm64/mm/fault.c         | 25 +++++++++++++++++++++++++
 arch/arm64/mm/proc.S          | 10 +++++++---
 include/linux/gfp.h           |  9 +++++++--
 include/linux/highmem.h       |  8 ++++++++
 mm/kasan/hw_tags.c            |  9 ++++++++-
 mm/page_alloc.c               | 13 ++++++++++---
 9 files changed, 96 insertions(+), 11 deletions(-)

diff --git a/arch/arm64/include/asm/mte.h b/arch/arm64/include/asm/mte.h
index bc88a1ced0d7..67bf259ae768 100644
--- a/arch/arm64/include/asm/mte.h
+++ b/arch/arm64/include/asm/mte.h
@@ -37,6 +37,7 @@ void mte_free_tag_storage(char *storage);
 /* track which pages have valid allocation tags */
 #define PG_mte_tagged	PG_arch_2
 
+void mte_zero_clear_page_tags(void *addr);
 void mte_sync_tags(pte_t *ptep, pte_t pte);
 void mte_copy_page_tags(void *kto, const void *kfrom);
 void mte_thread_init_user(void);
@@ -53,6 +54,9 @@ int mte_ptrace_copy_tags(struct task_struct *child, long request,
 /* unused if !CONFIG_ARM64_MTE, silence the compiler */
 #define PG_mte_tagged	0
 
+static inline void mte_zero_clear_page_tags(void *addr)
+{
+}
 static inline void mte_sync_tags(pte_t *ptep, pte_t pte)
 {
 }
diff --git a/arch/arm64/include/asm/page.h b/arch/arm64/include/asm/page.h
index 012cffc574e8..448e14071d13 100644
--- a/arch/arm64/include/asm/page.h
+++ b/arch/arm64/include/asm/page.h
@@ -13,6 +13,7 @@
 #ifndef __ASSEMBLY__
 
 #include <linux/personality.h> /* for READ_IMPLIES_EXEC */
+#include <linux/types.h> /* for gfp_t */
 #include <asm/pgtable-types.h>
 
 struct page;
@@ -28,10 +29,14 @@ void copy_user_highpage(struct page *to, struct page *from,
 void copy_highpage(struct page *to, struct page *from);
 #define __HAVE_ARCH_COPY_HIGHPAGE
 
-#define __alloc_zeroed_user_highpage(movableflags, vma, vaddr) \
-	alloc_page_vma(GFP_HIGHUSER | __GFP_ZERO | movableflags, vma, vaddr)
+struct page *__alloc_zeroed_user_highpage(gfp_t movableflags,
+					  struct vm_area_struct *vma,
+					  unsigned long vaddr);
 #define __HAVE_ARCH_ALLOC_ZEROED_USER_HIGHPAGE
 
+void tag_clear_highpage(struct page *to);
+#define __HAVE_ARCH_TAG_CLEAR_HIGHPAGE
+
 #define clear_user_page(page, vaddr, pg)	clear_page(page)
 #define copy_user_page(to, from, vaddr, pg)	copy_page(to, from)
 
diff --git a/arch/arm64/lib/mte.S b/arch/arm64/lib/mte.S
index 351537c12f36..e83643b3995f 100644
--- a/arch/arm64/lib/mte.S
+++ b/arch/arm64/lib/mte.S
@@ -36,6 +36,26 @@ SYM_FUNC_START(mte_clear_page_tags)
 	ret
 SYM_FUNC_END(mte_clear_page_tags)
 
+/*
+ * Zero the page and tags at the same time
+ *
+ * Parameters:
+ *	x0 - address to the beginning of the page
+ */
+SYM_FUNC_START(mte_zero_clear_page_tags)
+	mrs	x1, dczid_el0
+	and	w1, w1, #0xf
+	mov	x2, #4
+	lsl	x1, x2, x1
+	and	x0, x0, #(1 << MTE_TAG_SHIFT) - 1	// clear the tag
+
+1:	dc	gzva, x0
+	add	x0, x0, x1
+	tst	x0, #(PAGE_SIZE - 1)
+	b.ne	1b
+	ret
+SYM_FUNC_END(mte_zero_clear_page_tags)
+
 /*
  * Copy the tags from the source page to the destination one
  *	x0 - address of the destination page
diff --git a/arch/arm64/mm/fault.c b/arch/arm64/mm/fault.c
index 871c82ab0a30..8127e0c0b8fb 100644
--- a/arch/arm64/mm/fault.c
+++ b/arch/arm64/mm/fault.c
@@ -921,3 +921,28 @@ void do_debug_exception(unsigned long addr_if_watchpoint, unsigned int esr,
 	debug_exception_exit(regs);
 }
 NOKPROBE_SYMBOL(do_debug_exception);
+
+/*
+ * Used during anonymous page fault handling.
+ */
+struct page *__alloc_zeroed_user_highpage(gfp_t flags,
+					  struct vm_area_struct *vma,
+					  unsigned long vaddr)
+{
+	/*
+	 * If the page is mapped with PROT_MTE, initialise the tags at the
+	 * point of allocation and page zeroing as this is usually faster than
+	 * separate DC ZVA and STGM.
+	 */
+	if (vma->vm_flags & VM_MTE)
+		flags |= __GFP_ZEROTAGS;
+
+	return alloc_page_vma(GFP_HIGHUSER | __GFP_ZERO | flags, vma, vaddr);
+}
+
+void tag_clear_highpage(struct page *page)
+{
+	mte_zero_clear_page_tags(page_address(page));
+	page_kasan_tag_reset(page);
+	set_bit(PG_mte_tagged, &page->flags);
+}
diff --git a/arch/arm64/mm/proc.S b/arch/arm64/mm/proc.S
index 0a48191534ff..a27c77dbe91c 100644
--- a/arch/arm64/mm/proc.S
+++ b/arch/arm64/mm/proc.S
@@ -46,9 +46,13 @@
 #endif
 
 #ifdef CONFIG_KASAN_HW_TAGS
-#define TCR_KASAN_HW_FLAGS SYS_TCR_EL1_TCMA1 | TCR_TBI1 | TCR_TBID1
+#define TCR_MTE_FLAGS SYS_TCR_EL1_TCMA1 | TCR_TBI1 | TCR_TBID1
 #else
-#define TCR_KASAN_HW_FLAGS 0
+/*
+ * The mte_zero_clear_page_tags() implementation uses DC GZVA, which relies on
+ * TBI being enabled at EL1.
+ */
+#define TCR_MTE_FLAGS TCR_TBI1 | TCR_TBID1
 #endif
 
 /*
@@ -452,7 +456,7 @@ SYM_FUNC_START(__cpu_setup)
 	msr_s	SYS_TFSRE0_EL1, xzr
 
 	/* set the TCR_EL1 bits */
-	mov_q	x10, TCR_KASAN_HW_FLAGS
+	mov_q	x10, TCR_MTE_FLAGS
 	orr	tcr, tcr, x10
 1:
 #endif
diff --git a/include/linux/gfp.h b/include/linux/gfp.h
index 11da8af06704..68ba237365dc 100644
--- a/include/linux/gfp.h
+++ b/include/linux/gfp.h
@@ -53,8 +53,9 @@ struct vm_area_struct;
 #define ___GFP_HARDWALL		0x100000u
 #define ___GFP_THISNODE		0x200000u
 #define ___GFP_ACCOUNT		0x400000u
+#define ___GFP_ZEROTAGS		0x800000u
 #ifdef CONFIG_LOCKDEP
-#define ___GFP_NOLOCKDEP	0x800000u
+#define ___GFP_NOLOCKDEP	0x1000000u
 #else
 #define ___GFP_NOLOCKDEP	0
 #endif
@@ -229,16 +230,20 @@ struct vm_area_struct;
  * %__GFP_COMP address compound page metadata.
  *
  * %__GFP_ZERO returns a zeroed page on success.
+ *
+ * %__GFP_ZEROTAGS returns a page with zeroed memory tags on success, if
+ * __GFP_ZERO is set.
  */
 #define __GFP_NOWARN	((__force gfp_t)___GFP_NOWARN)
 #define __GFP_COMP	((__force gfp_t)___GFP_COMP)
 #define __GFP_ZERO	((__force gfp_t)___GFP_ZERO)
+#define __GFP_ZEROTAGS	((__force gfp_t)___GFP_ZEROTAGS)
 
 /* Disable lockdep for GFP context tracking */
 #define __GFP_NOLOCKDEP ((__force gfp_t)___GFP_NOLOCKDEP)
 
 /* Room for N __GFP_FOO bits */
-#define __GFP_BITS_SHIFT (23 + IS_ENABLED(CONFIG_LOCKDEP))
+#define __GFP_BITS_SHIFT (24 + IS_ENABLED(CONFIG_LOCKDEP))
 #define __GFP_BITS_MASK ((__force gfp_t)((1 << __GFP_BITS_SHIFT) - 1))
 
 /**
diff --git a/include/linux/highmem.h b/include/linux/highmem.h
index 832b49b50c7b..caaa62e1dd24 100644
--- a/include/linux/highmem.h
+++ b/include/linux/highmem.h
@@ -204,6 +204,14 @@ static inline void clear_highpage(struct page *page)
 	kunmap_atomic(kaddr);
 }
 
+#ifndef __HAVE_ARCH_TAG_CLEAR_HIGHPAGE
+
+static inline void tag_clear_highpage(struct page *page)
+{
+}
+
+#endif
+
 /*
  * If we pass in a base or tail page, we can zero up to PAGE_SIZE.
  * If we pass in a head page, we can zero up to the size of the compound page.
diff --git a/mm/kasan/hw_tags.c b/mm/kasan/hw_tags.c
index 45e552cb9172..34362c8d0955 100644
--- a/mm/kasan/hw_tags.c
+++ b/mm/kasan/hw_tags.c
@@ -242,7 +242,14 @@ void kasan_alloc_pages(struct page *page, unsigned int order, gfp_t flags)
 {
 	bool init = !want_init_on_free() && want_init_on_alloc(flags);
 
-	kasan_unpoison_pages(page, order, init);
+	if (flags & __GFP_ZEROTAGS) {
+		int i;
+
+		for (i = 0; i != 1 << order; ++i)
+			tag_clear_highpage(page + i);
+	} else {
+		kasan_unpoison_pages(page, order, init);
+	}
 }
 
 void kasan_free_pages(struct page *page, unsigned int order)
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 6e82a7f6fd6f..24e6f668ef73 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -1219,10 +1219,16 @@ static int free_tail_pages_check(struct page *head_page, struct page *page)
 	return ret;
 }
 
-static void kernel_init_free_pages(struct page *page, int numpages)
+static void kernel_init_free_pages(struct page *page, int numpages, bool zero_tags)
 {
 	int i;
 
+	if (zero_tags) {
+		for (i = 0; i < numpages; i++)
+			tag_clear_highpage(page + i);
+		return;
+	}
+
 	/* s390's use of memset() could override KASAN redzones. */
 	kasan_disable_current();
 	for (i = 0; i < numpages; i++) {
@@ -1314,7 +1320,7 @@ static __always_inline bool free_pages_prepare(struct page *page,
 		bool init = want_init_on_free();
 
 		if (init)
-			kernel_init_free_pages(page, 1 << order);
+			kernel_init_free_pages(page, 1 << order, false);
 		if (!skip_kasan_poison)
 			kasan_poison_pages(page, order, init);
 	}
@@ -2350,7 +2356,8 @@ inline void post_alloc_hook(struct page *page, unsigned int order,
 		kasan_unpoison_pages(page, order, init);
 		if (init)
-			kernel_init_free_pages(page, 1 << order);
+			kernel_init_free_pages(page, 1 << order,
+					       gfp_flags & __GFP_ZEROTAGS);
 	}
 
 	set_page_owner(page, order, gfp_flags);
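One subtlety in patch 2's tag_clear_highpage() is worth spelling out:
after mte_zero_clear_page_tags() the page's allocation tags in memory
are all zero, so the KASAN tag recorded in page->flags would be stale
and has to be reset, and PG_mte_tagged must be set so later code treats
the page's tags as valid, much as set_pte_at() would have done. The
same three lines, annotated (comments ours):

	void tag_clear_highpage(struct page *page)
	{
		/* Zero the data and the allocation tags in one DC GZVA pass. */
		mte_zero_clear_page_tags(page_address(page));
		/* The in-memory tags changed; reset the tag stored in page->flags. */
		page_kasan_tag_reset(page);
		/* Record that this page now carries valid MTE tags. */
		set_bit(PG_mte_tagged, &page->flags);
	}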
From patchwork Tue May 11 23:54:26 2021
Date: Tue, 11 May 2021 16:54:26 -0700
Subject: [PATCH v2 3/3] kasan: allow freed user page poisoning to be disabled with HW tags
From: Peter Collingbourne
To: Andrey Konovalov, Alexander Potapenko, Catalin Marinas, Vincenzo Frascino, Andrew Morton
Cc: Peter Collingbourne, Evgenii Stepanov, linux-mm@kvack.org, linux-arm-kernel@lists.infradead.org

Poisoning freed pages protects against kernel use-after-free. The
likelihood of such a bug involving kernel pages is significantly higher
than that for user pages. At the same time, poisoning freed pages can
impose a significant performance cost, which cannot always be justified
for user pages given the lower probability of finding a bug. Therefore,
make it possible to configure the kernel to disable freed user page
poisoning when using HW tags via the new
kasan.skip_user_poison_on_free command line option.
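The mechanism is easiest to see end to end. The sketch below stitches
together the hunks that follow; the function is illustrative, not
kernel code:

	/* Path of a user page when booted with kasan.skip_user_poison_on_free=1: */
	static void user_page_flow_sketch(void)
	{
		/*
		 * GFP_USER (and therefore GFP_HIGHUSER) now carries
		 * __GFP_SKIP_KASAN_POISON, so kasan_alloc_pages() calls
		 * SetPageSkipKASanPoison() on the page.
		 */
		struct page *page = alloc_page(GFP_HIGHUSER | __GFP_ZERO);

		/*
		 * On free, should_skip_kasan_poison() now also checks
		 * PageSkipKASanPoison(), so the poisoning step is skipped.
		 */
		__free_page(page);
	}

Kernel-internal allocations that do not use GFP_USER are unaffected and
keep full freed-page poisoning.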
Signed-off-by: Peter Collingbourne
Link: https://linux-review.googlesource.com/id/I716846e2de8ef179f44e835770df7e6307be96c9
---
 include/linux/gfp.h            | 13 ++++++++++---
 include/linux/page-flags.h     |  9 +++++++++
 include/trace/events/mmflags.h |  9 ++++++++-
 mm/kasan/hw_tags.c             | 10 ++++++++++
 mm/page_alloc.c                | 12 +++++++-----
 5 files changed, 44 insertions(+), 9 deletions(-)

diff --git a/include/linux/gfp.h b/include/linux/gfp.h
index 68ba237365dc..9a77e5660b07 100644
--- a/include/linux/gfp.h
+++ b/include/linux/gfp.h
@@ -54,8 +54,9 @@ struct vm_area_struct;
 #define ___GFP_THISNODE		0x200000u
 #define ___GFP_ACCOUNT		0x400000u
 #define ___GFP_ZEROTAGS		0x800000u
+#define ___GFP_SKIP_KASAN_POISON	0x1000000u
 #ifdef CONFIG_LOCKDEP
-#define ___GFP_NOLOCKDEP	0x1000000u
+#define ___GFP_NOLOCKDEP	0x2000000u
 #else
 #define ___GFP_NOLOCKDEP	0
 #endif
@@ -233,17 +234,22 @@ struct vm_area_struct;
  *
  * %__GFP_ZEROTAGS returns a page with zeroed memory tags on success, if
  * __GFP_ZERO is set.
+ *
+ * %__GFP_SKIP_KASAN_POISON returns a page which does not need to be poisoned
+ * on deallocation. Typically used for userspace pages. Currently only has an
+ * effect in HW tags mode, and only if a command line option is set.
  */
 #define __GFP_NOWARN	((__force gfp_t)___GFP_NOWARN)
 #define __GFP_COMP	((__force gfp_t)___GFP_COMP)
 #define __GFP_ZERO	((__force gfp_t)___GFP_ZERO)
 #define __GFP_ZEROTAGS	((__force gfp_t)___GFP_ZEROTAGS)
+#define __GFP_SKIP_KASAN_POISON	((__force gfp_t)___GFP_SKIP_KASAN_POISON)
 
 /* Disable lockdep for GFP context tracking */
 #define __GFP_NOLOCKDEP ((__force gfp_t)___GFP_NOLOCKDEP)
 
 /* Room for N __GFP_FOO bits */
-#define __GFP_BITS_SHIFT (24 + IS_ENABLED(CONFIG_LOCKDEP))
+#define __GFP_BITS_SHIFT (25 + IS_ENABLED(CONFIG_LOCKDEP))
 #define __GFP_BITS_MASK ((__force gfp_t)((1 << __GFP_BITS_SHIFT) - 1))
 
 /**
@@ -320,7 +326,8 @@ struct vm_area_struct;
 #define GFP_NOWAIT	(__GFP_KSWAPD_RECLAIM)
 #define GFP_NOIO	(__GFP_RECLAIM)
 #define GFP_NOFS	(__GFP_RECLAIM | __GFP_IO)
-#define GFP_USER	(__GFP_RECLAIM | __GFP_IO | __GFP_FS | __GFP_HARDWALL)
+#define GFP_USER	(__GFP_RECLAIM | __GFP_IO | __GFP_FS | \
+			 __GFP_HARDWALL | __GFP_SKIP_KASAN_POISON)
 #define GFP_DMA		__GFP_DMA
 #define GFP_DMA32	__GFP_DMA32
 #define GFP_HIGHUSER	(GFP_USER | __GFP_HIGHMEM)
diff --git a/include/linux/page-flags.h b/include/linux/page-flags.h
index 04a34c08e0a6..40e2c5000585 100644
--- a/include/linux/page-flags.h
+++ b/include/linux/page-flags.h
@@ -137,6 +137,9 @@ enum pageflags {
 #endif
 #ifdef CONFIG_64BIT
 	PG_arch_2,
+#endif
+#ifdef CONFIG_KASAN_HW_TAGS
+	PG_skip_kasan_poison,
 #endif
 	__NR_PAGEFLAGS,
 
@@ -443,6 +446,12 @@ TESTCLEARFLAG(Young, young, PF_ANY)
 PAGEFLAG(Idle, idle, PF_ANY)
 #endif
 
+#ifdef CONFIG_KASAN_HW_TAGS
+PAGEFLAG(SkipKASanPoison, skip_kasan_poison, PF_HEAD)
+#else
+PAGEFLAG_FALSE(SkipKASanPoison)
+#endif
+
 /*
  * PageReported() is used to track reported free pages within the Buddy
  * allocator. We can use the non-atomic version of the test and set
diff --git a/include/trace/events/mmflags.h b/include/trace/events/mmflags.h
index 629c7a0eaff2..390270e00a1d 100644
--- a/include/trace/events/mmflags.h
+++ b/include/trace/events/mmflags.h
@@ -85,6 +85,12 @@
 #define IF_HAVE_PG_ARCH_2(flag,string)
 #endif
 
+#ifdef CONFIG_KASAN_HW_TAGS
+#define IF_HAVE_PG_SKIP_KASAN_POISON(flag,string) ,{1UL << flag, string}
+#else
+#define IF_HAVE_PG_SKIP_KASAN_POISON(flag,string)
+#endif
+
 #define __def_pageflag_names						\
 	{1UL << PG_locked,		"locked"	},		\
 	{1UL << PG_waiters,		"waiters"	},		\
@@ -112,7 +118,8 @@ IF_HAVE_PG_UNCACHED(PG_uncached,	"uncached"	)		\
 IF_HAVE_PG_HWPOISON(PG_hwpoison,	"hwpoison"	)		\
 IF_HAVE_PG_IDLE(PG_young,		"young"		)		\
 IF_HAVE_PG_IDLE(PG_idle,		"idle"		)		\
-IF_HAVE_PG_ARCH_2(PG_arch_2,		"arch_2"	)
+IF_HAVE_PG_ARCH_2(PG_arch_2,		"arch_2"	)		\
+IF_HAVE_PG_SKIP_KASAN_POISON(PG_skip_kasan_poison, "skip_kasan_poison")
 
 #define show_page_flags(flags)						\
 	(flags) ? __print_flags(flags, "|",				\
diff --git a/mm/kasan/hw_tags.c b/mm/kasan/hw_tags.c
index 34362c8d0955..954d5c2f7683 100644
--- a/mm/kasan/hw_tags.c
+++ b/mm/kasan/hw_tags.c
@@ -238,10 +238,20 @@ struct kasan_track *kasan_get_free_track(struct kmem_cache *cache,
 	return &alloc_meta->free_track[0];
 }
 
+static bool skip_user_poison_on_free;
+static int __init skip_user_poison_on_free_param(char *buf)
+{
+	return kstrtobool(buf, &skip_user_poison_on_free);
+}
+early_param("kasan.skip_user_poison_on_free", skip_user_poison_on_free_param);
+
 void kasan_alloc_pages(struct page *page, unsigned int order, gfp_t flags)
 {
 	bool init = !want_init_on_free() && want_init_on_alloc(flags);
 
+	if (skip_user_poison_on_free && (flags & __GFP_SKIP_KASAN_POISON))
+		SetPageSkipKASanPoison(page);
+
 	if (flags & __GFP_ZEROTAGS) {
 		int i;
 
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 24e6f668ef73..2c3ac15ddd54 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -394,11 +394,12 @@ static DEFINE_STATIC_KEY_TRUE(deferred_pages);
  * on-demand allocation and then freed again before the deferred pages
  * initialization is done, but this is not likely to happen.
  */
-static inline bool should_skip_kasan_poison(fpi_t fpi_flags)
+static inline bool should_skip_kasan_poison(struct page *page, fpi_t fpi_flags)
 {
 	return static_branch_unlikely(&deferred_pages) ||
 	       (!IS_ENABLED(CONFIG_KASAN_GENERIC) &&
-		(fpi_flags & FPI_SKIP_KASAN_POISON));
+		(fpi_flags & FPI_SKIP_KASAN_POISON)) ||
+	       PageSkipKASanPoison(page);
 }
 
 /* Returns true if the struct page for the pfn is uninitialised */
@@ -449,10 +450,11 @@ defer_init(int nid, unsigned long pfn, unsigned long end_pfn)
 	return false;
 }
 #else
-static inline bool should_skip_kasan_poison(fpi_t fpi_flags)
+static inline bool should_skip_kasan_poison(struct page *page, fpi_t fpi_flags)
 {
 	return (!IS_ENABLED(CONFIG_KASAN_GENERIC) &&
-		(fpi_flags & FPI_SKIP_KASAN_POISON));
+		(fpi_flags & FPI_SKIP_KASAN_POISON)) ||
+	       PageSkipKASanPoison(page);
 }
 
 static inline bool early_page_uninitialised(unsigned long pfn)
@@ -1244,7 +1246,7 @@ static __always_inline bool free_pages_prepare(struct page *page,
 					unsigned int order, bool check_free, fpi_t fpi_flags)
 {
 	int bad = 0;
-	bool skip_kasan_poison = should_skip_kasan_poison(fpi_flags);
+	bool skip_kasan_poison = should_skip_kasan_poison(page, fpi_flags);
 
 	VM_BUG_ON_PAGE(PageTail(page), page);
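A closing note on the early_param() handler in the hw_tags.c hunk
above: early_param() callbacks return 0 on success and non-zero for a
malformed value, and kstrtobool() follows the same convention, so its
result can be returned directly. An equivalent, more explicit form,
shown only to spell the convention out (the _sketch name is ours):

	static int __init skip_user_poison_on_free_param_sketch(char *buf)
	{
		bool val;
		int err = kstrtobool(buf, &val);	/* accepts 0/1, y/n, on/off */

		if (err)
			return err;	/* reported as a malformed parameter */

		skip_user_poison_on_free = val;
		return 0;
	}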