From patchwork Fri Jul 2 03:19:22 2021
X-Patchwork-Submitter: Peter Collingbourne
X-Patchwork-Id: 12355425
Date: Thu, 1 Jul 2021 20:19:22 -0700
Message-Id: <20210702031922.1291398-1-pcc@google.com>
Subject: [PATCH] arm64: mte: switch GCR_EL1 on task switch rather than entry/exit
From: Peter Collingbourne
To: Catalin Marinas, Vincenzo Frascino, Will Deacon, Andrey Konovalov
Cc: Peter Collingbourne, Evgenii Stepanov, Szabolcs Nagy, Tejas Belagod,
 linux-arm-kernel@lists.infradead.org

Accessing GCR_EL1 and issuing an ISB can be expensive on some
microarchitectures. To avoid taking this performance hit on every kernel
entry/exit, switch GCR_EL1 on task switch rather than entry/exit. This is
essentially a revert of commit bad1e1c663e0 ("arm64: mte: switch GCR_EL1
in kernel entry and exit").

This requires changing how we generate random tags for HW tag-based
KASAN, since at this point IRG would use the user's exclusion mask, which
may not be suitable for kernel use. In this patch I chose to take
CNTVCT_EL0 modulo the number of available tags; however, alternative
approaches are possible.

Signed-off-by: Peter Collingbourne
Link: https://linux-review.googlesource.com/id/I560a190a74176ca4cc5191dad08f77f6b1577c75
---
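Note, not part of the patch: the new tag generation scheme is easy to
model in userspace. The sketch below is illustrative only -- the model_*
names are hypothetical stand-ins for the kernel's mte_init_tags() and
mte_get_random_tag(), and the 0xFD input assumes KASAN continues to pass
KASAN_TAG_MAX and reserve tags 0xFE/0xFF for itself.

    /* Userspace model of the tag generation scheme introduced below. */
    #include <stdint.h>
    #include <stdio.h>

    static uint64_t mte_tag_mod;

    /* Mirrors mte_init_tags(): max_tag 0xFD yields mte_tag_mod == 14. */
    static void model_mte_init_tags(uint64_t max_tag)
    {
            mte_tag_mod = (max_tag & 0xF) + 1;
    }

    /* Mirrors mte_get_random_tag(): "counter" stands in for CNTVCT_EL0. */
    static uint8_t model_mte_get_random_tag(uint64_t counter)
    {
            return 0xF0 | (counter % mte_tag_mod);
    }

    int main(void)
    {
            model_mte_init_tags(0xFD);      /* 0xFE/0xFF stay reserved */

            /* Every counter value lands in the 0xF0..0xFD range. */
            for (uint64_t c = 0; c < 16; c++)
                    printf("counter=%2llu -> tag=0x%02X\n",
                           (unsigned long long)c,
                           model_mte_get_random_tag(c));
            return 0;
    }

Tags 0xFE and 0xFF can never come out of the modulus, matching the
exclusion that the old IRG-based path enforced through gcr_kernel_excl.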
 arch/arm64/include/asm/mte-kasan.h | 15 ++++--
 arch/arm64/include/asm/mte.h       |  6 ---
 arch/arm64/kernel/entry.S          | 41 ----------------
 arch/arm64/kernel/mte.c            | 76 ++++++++++++------------------
 4 files changed, 40 insertions(+), 98 deletions(-)

diff --git a/arch/arm64/include/asm/mte-kasan.h b/arch/arm64/include/asm/mte-kasan.h
index ddd4d17cf9a0..e9b3c1bdbba3 100644
--- a/arch/arm64/include/asm/mte-kasan.h
+++ b/arch/arm64/include/asm/mte-kasan.h
@@ -13,6 +13,8 @@

 #ifdef CONFIG_ARM64_MTE

+extern u64 mte_tag_mod;
+
 /*
  * These functions are meant to be only used from KASAN runtime through
  * the arch_*() interface defined in asm/memory.h.
@@ -37,15 +39,18 @@ static inline u8 mte_get_mem_tag(void *addr)
         return mte_get_ptr_tag(addr);
 }

-/* Generate a random tag. */
+/*
+ * Generate a random tag. We can't use IRG because the user's GCR_EL1 is still
+ * installed for performance reasons. Instead, take the modulus of the
+ * architected timer which should be random enough for our purposes.
+ */
 static inline u8 mte_get_random_tag(void)
 {
-        void *addr;
+        u64 cntvct;

-        asm(__MTE_PREAMBLE "irg %0, %0"
-            : "=r" (addr));
+        asm("mrs %0, cntvct_el0" : "=r"(cntvct));

-        return mte_get_ptr_tag(addr);
+        return 0xF0 | (cntvct % mte_tag_mod);
 }

 /*
diff --git a/arch/arm64/include/asm/mte.h b/arch/arm64/include/asm/mte.h
index 719687412798..412b94efcb11 100644
--- a/arch/arm64/include/asm/mte.h
+++ b/arch/arm64/include/asm/mte.h
@@ -16,8 +16,6 @@

 #include <asm/pgtable-types.h>

-extern u64 gcr_kernel_excl;
-
 void mte_clear_page_tags(void *addr);
 unsigned long mte_copy_tags_from_user(void *to, const void __user *from,
                                       unsigned long n);
@@ -40,7 +38,6 @@ void mte_free_tag_storage(char *storage);
 void mte_sync_tags(pte_t *ptep, pte_t pte);
 void mte_copy_page_tags(void *kto, const void *kfrom);
 void mte_thread_init_user(void);
-void mte_update_sctlr_user(struct task_struct *task);
 void mte_thread_switch(struct task_struct *next);
 void mte_suspend_enter(void);
 void mte_suspend_exit(void);
@@ -63,9 +60,6 @@ static inline void mte_copy_page_tags(void *kto, const void *kfrom)
 static inline void mte_thread_init_user(void)
 {
 }
-static inline void mte_update_sctlr_user(struct task_struct *task)
-{
-}
 static inline void mte_thread_switch(struct task_struct *next)
 {
 }
diff --git a/arch/arm64/kernel/entry.S b/arch/arm64/kernel/entry.S
index ce59280355c5..c95bfe145639 100644
--- a/arch/arm64/kernel/entry.S
+++ b/arch/arm64/kernel/entry.S
@@ -175,43 +175,6 @@ alternative_else_nop_endif
 #endif
         .endm

-        .macro mte_set_gcr, tmp, tmp2
-#ifdef CONFIG_ARM64_MTE
-        /*
-         * Calculate and set the exclude mask preserving
-         * the RRND (bit[16]) setting.
-         */
-        mrs_s   \tmp2, SYS_GCR_EL1
-        bfxil   \tmp2, \tmp, #MTE_CTRL_GCR_USER_EXCL_SHIFT, #16
-        msr_s   SYS_GCR_EL1, \tmp2
-#endif
-        .endm
-
-        .macro mte_set_kernel_gcr, tmp, tmp2
-#ifdef CONFIG_KASAN_HW_TAGS
-alternative_if_not ARM64_MTE
-        b       1f
-alternative_else_nop_endif
-        ldr_l   \tmp, gcr_kernel_excl
-
-        mte_set_gcr     \tmp, \tmp2
-        isb
-1:
-#endif
-        .endm
-
-        .macro mte_set_user_gcr, tsk, tmp, tmp2
-#ifdef CONFIG_ARM64_MTE
-alternative_if_not ARM64_MTE
-        b       1f
-alternative_else_nop_endif
-        ldr     \tmp, [\tsk, #THREAD_MTE_CTRL]
-
-        mte_set_gcr     \tmp, \tmp2
-1:
-#endif
-        .endm
-
         .macro kernel_entry, el, regsize = 64
         .if     \regsize == 32
         mov     w0, w0                          // zero upper 32 bits of x0
@@ -273,8 +236,6 @@ alternative_if ARM64_HAS_ADDRESS_AUTH
 alternative_else_nop_endif
 #endif

-        mte_set_kernel_gcr x22, x23
-
         scs_load tsk, x20
         .else
         add     x21, sp, #PT_REGS_SIZE
@@ -398,8 +359,6 @@ alternative_if ARM64_HAS_ADDRESS_AUTH
 alternative_else_nop_endif
 #endif

-        mte_set_user_gcr tsk, x0, x1
-
         apply_ssbd 0, x0, x1
         .endif
diff --git a/arch/arm64/kernel/mte.c b/arch/arm64/kernel/mte.c
index 9c82e27b30f9..b8d3e0b20702 100644
--- a/arch/arm64/kernel/mte.c
+++ b/arch/arm64/kernel/mte.c
@@ -23,7 +23,7 @@
 #include <asm/ptrace.h>
 #include <asm/sysreg.h>

-u64 gcr_kernel_excl __ro_after_init;
+u64 mte_tag_mod __ro_after_init;

 static bool report_fault_once = true;

@@ -98,22 +98,7 @@ int memcmp_pages(struct page *page1, struct page *page2)

 void mte_init_tags(u64 max_tag)
 {
-        static bool gcr_kernel_excl_initialized;
-
-        if (!gcr_kernel_excl_initialized) {
-                /*
-                 * The format of the tags in KASAN is 0xFF and in MTE is 0xF.
-                 * This conversion extracts an MTE tag from a KASAN tag.
-                 */
-                u64 incl = GENMASK(FIELD_GET(MTE_TAG_MASK >> MTE_TAG_SHIFT,
-                                             max_tag), 0);
-
-                gcr_kernel_excl = ~incl & SYS_GCR_EL1_EXCL_MASK;
-                gcr_kernel_excl_initialized = true;
-        }
-
-        /* Enable the kernel exclude mask for random tags generation. */
-        write_sysreg_s(SYS_GCR_EL1_RRND | gcr_kernel_excl, SYS_GCR_EL1);
+        mte_tag_mod = (max_tag & 0xF) + 1;
 }

 static inline void __mte_enable_kernel(const char *mode, unsigned long tcf)
@@ -188,8 +173,25 @@ void mte_check_tfsr_el1(void)
 }
 #endif

-static void update_gcr_el1_excl(u64 excl)
+static void mte_sync_ctrl(struct task_struct *task)
 {
+        /*
+         * This can only be called on the current or next task since the CPU
+         * must match where the thread is going to run.
+         */
+        unsigned long sctlr = task->thread.sctlr_user;
+        unsigned long mte_ctrl = task->thread.mte_ctrl;
+        unsigned long pref, resolved_mte_tcf;
+
+        preempt_disable();
+        pref = __this_cpu_read(mte_tcf_preferred);
+        resolved_mte_tcf = (mte_ctrl & pref) ? pref : mte_ctrl;
+        sctlr &= ~SCTLR_EL1_TCF0_MASK;
+        if (resolved_mte_tcf & MTE_CTRL_TCF_ASYNC)
+                sctlr |= SCTLR_EL1_TCF0_ASYNC;
+        else if (resolved_mte_tcf & MTE_CTRL_TCF_SYNC)
+                sctlr |= SCTLR_EL1_TCF0_SYNC;
+        task->thread.sctlr_user = sctlr;

         /*
          * Note that the mask controlled by the user via prctl() is an
@@ -197,7 +199,11 @@ static void update_gcr_el1_excl(u64 excl)
          * No need for ISB since this only affects EL0 currently, implicit
          * with ERET.
          */
-        sysreg_clear_set_s(SYS_GCR_EL1, SYS_GCR_EL1_EXCL_MASK, excl);
+        sysreg_clear_set_s(SYS_GCR_EL1, SYS_GCR_EL1_EXCL_MASK,
+                           (mte_ctrl & MTE_CTRL_GCR_USER_EXCL_MASK) >>
+                           MTE_CTRL_GCR_USER_EXCL_SHIFT);
+
+        preempt_enable();
 }

 void mte_thread_init_user(void)
@@ -211,35 +217,13 @@
         clear_thread_flag(TIF_MTE_ASYNC_FAULT);
         /* disable tag checking and reset tag generation mask */
         current->thread.mte_ctrl = MTE_CTRL_GCR_USER_EXCL_MASK;
-        mte_update_sctlr_user(current);
+        mte_sync_ctrl(current);
         set_task_sctlr_el1(current->thread.sctlr_user);
 }

-void mte_update_sctlr_user(struct task_struct *task)
-{
-        /*
-         * This can only be called on the current or next task since the CPU
-         * must match where the thread is going to run.
-         */
-        unsigned long sctlr = task->thread.sctlr_user;
-        unsigned long mte_ctrl = task->thread.mte_ctrl;
-        unsigned long pref, resolved_mte_tcf;
-
-        preempt_disable();
-        pref = __this_cpu_read(mte_tcf_preferred);
-        resolved_mte_tcf = (mte_ctrl & pref) ? pref : mte_ctrl;
-        sctlr &= ~SCTLR_EL1_TCF0_MASK;
-        if (resolved_mte_tcf & MTE_CTRL_TCF_ASYNC)
-                sctlr |= SCTLR_EL1_TCF0_ASYNC;
-        else if (resolved_mte_tcf & MTE_CTRL_TCF_SYNC)
-                sctlr |= SCTLR_EL1_TCF0_SYNC;
-        task->thread.sctlr_user = sctlr;
-        preempt_enable();
-}
-
 void mte_thread_switch(struct task_struct *next)
 {
-        mte_update_sctlr_user(next);
+        mte_sync_ctrl(next);

         /*
          * Check if an async tag exception occurred at EL1.
@@ -273,7 +257,7 @@ void mte_suspend_exit(void)
         if (!system_supports_mte())
                 return;

-        update_gcr_el1_excl(gcr_kernel_excl);
+        mte_sync_ctrl(current);
 }

 long set_mte_ctrl(struct task_struct *task, unsigned long arg)
@@ -291,7 +275,7 @@ long set_mte_ctrl(struct task_struct *task, unsigned long arg)

         task->thread.mte_ctrl = mte_ctrl;
         if (task == current) {
-                mte_update_sctlr_user(task);
+                mte_sync_ctrl(task);
                 set_task_sctlr_el1(task->thread.sctlr_user);
         }

@@ -467,7 +451,7 @@ static ssize_t mte_tcf_preferred_show(struct device *dev,

 static void sync_sctlr(void *arg)
 {
-        mte_update_sctlr_user(current);
+        mte_sync_ctrl(current);
         set_task_sctlr_el1(current->thread.sctlr_user);
 }
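For reference, also not part of the patch: set_mte_ctrl() above is
reached from userspace via prctl(PR_SET_TAGGED_ADDR_CTRL, ...). A
minimal caller looks roughly like the sketch below; after the prctl()
the kernel recomputes SCTLR_EL1.TCF0 and the GCR_EL1 exclude mask for
the thread (with this patch applied, via mte_sync_ctrl()). The fallback
defines are only needed when the system headers predate the MTE prctl
flags.

    #include <stdio.h>
    #include <sys/prctl.h>

    #ifndef PR_SET_TAGGED_ADDR_CTRL
    #define PR_SET_TAGGED_ADDR_CTRL 55
    #define PR_TAGGED_ADDR_ENABLE   (1UL << 0)
    #endif
    #ifndef PR_MTE_TCF_SYNC
    #define PR_MTE_TCF_SYNC         (1UL << 1)
    #define PR_MTE_TAG_SHIFT        3
    #endif

    int main(void)
    {
            /* Tagged addresses on, synchronous checking, include tags 0-7. */
            unsigned long ctrl = PR_TAGGED_ADDR_ENABLE | PR_MTE_TCF_SYNC |
                                 (0xffUL << PR_MTE_TAG_SHIFT);

            if (prctl(PR_SET_TAGGED_ADDR_CTRL, ctrl, 0, 0, 0)) {
                    perror("PR_SET_TAGGED_ADDR_CTRL");
                    return 1;
            }
            puts("MTE synchronous tag checking enabled for this thread");
            return 0;
    }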