From patchwork Sat Jan 22 01:02:50 2022
X-Patchwork-Submitter: Peter Collingbourne <pcc@google.com>
X-Patchwork-Id: 12720390
Date: Fri, 21 Jan 2022 17:02:50 -0800
Message-Id: <20220122010250.251885-1-pcc@google.com>
Subject: [PATCH v3] arm64: mte: avoid clearing PSTATE.TCO on entry unless necessary
From: Peter Collingbourne <pcc@google.com>
To: Catalin Marinas, Vincenzo Frascino, Will Deacon, Andrey Konovalov, Mark Rutland
Cc: Peter Collingbourne, Evgenii Stepanov, linux-arm-kernel@lists.infradead.org

On some microarchitectures, clearing PSTATE.TCO is expensive. Clearing
TCO is only necessary if in-kernel MTE is enabled, or if MTE is enabled
in the userspace process in synchronous (or, soon, asymmetric) mode,
because we do not report uaccess faults to userspace in none or
asynchronous modes. Therefore, adjust the kernel entry code to clear TCO
only if necessary.

Because it is now possible to switch to a task in which TCO needs to be
clear from a task in which TCO is set, we also need to do the same thing
on task switch.

Signed-off-by: Peter Collingbourne <pcc@google.com>
Link: https://linux-review.googlesource.com/id/I52d82a580bd0500d420be501af2c35fa8c90729e
Reviewed-by: Catalin Marinas
---
v3:
- switch to a C implementation

v2:
- do the same thing in cpu_switch_to()

 arch/arm64/include/asm/mte.h     | 19 +++++++++++++++++++
 arch/arm64/kernel/entry-common.c |  3 +++
 arch/arm64/kernel/entry.S        |  7 -------
 arch/arm64/kernel/mte.c          |  1 +
 4 files changed, 23 insertions(+), 7 deletions(-)

diff --git a/arch/arm64/include/asm/mte.h b/arch/arm64/include/asm/mte.h
index 075539f5f1c8..5352db4c0f45 100644
--- a/arch/arm64/include/asm/mte.h
+++ b/arch/arm64/include/asm/mte.h
@@ -11,7 +11,9 @@
 #ifndef __ASSEMBLY__
 
 #include <linux/bitfield.h>
+#include <linux/kasan-enabled.h>
 #include <linux/page-flags.h>
+#include <linux/sched.h>
 #include <linux/types.h>
 
 #include <asm/pgtable-types.h>
@@ -86,6 +88,23 @@ static inline int mte_ptrace_copy_tags(struct task_struct *child,
 
 #endif /* CONFIG_ARM64_MTE */
 
+static inline void mte_disable_tco_entry(struct task_struct *task)
+{
+	/*
+	 * Re-enable tag checking (TCO set on exception entry). This is only
+	 * necessary if MTE is enabled in either the kernel or the userspace
+	 * task in synchronous mode. With MTE disabled in the kernel and
+	 * disabled or asynchronous in userspace, tag check faults (including in
+	 * uaccesses) are not reported, therefore there is no need to re-enable
+	 * checking. This is beneficial on microarchitectures where re-enabling
+	 * TCO is expensive.
+	 */
+	if (kasan_hw_tags_enabled() ||
+	    (system_supports_mte() &&
+	     (task->thread.sctlr_user & (1UL << SCTLR_EL1_TCF0_SHIFT))))
+		asm volatile(SET_PSTATE_TCO(0));
+}
+
 #ifdef CONFIG_KASAN_HW_TAGS
 /* Whether the MTE asynchronous mode is enabled. */
 DECLARE_STATIC_KEY_FALSE(mte_async_or_asymm_mode);
diff --git a/arch/arm64/kernel/entry-common.c b/arch/arm64/kernel/entry-common.c
index ef7fcefb96bd..7093b578e325 100644
--- a/arch/arm64/kernel/entry-common.c
+++ b/arch/arm64/kernel/entry-common.c
@@ -6,6 +6,7 @@
  */
 
 #include <linux/context_tracking.h>
+#include <linux/kasan.h>
 #include <linux/linkage.h>
 #include <linux/lockdep.h>
 #include <linux/ptrace.h>
@@ -56,6 +57,7 @@ static void noinstr enter_from_kernel_mode(struct pt_regs *regs)
 {
 	__enter_from_kernel_mode(regs);
 	mte_check_tfsr_entry();
+	mte_disable_tco_entry(current);
 }
 
 /*
@@ -103,6 +105,7 @@ static __always_inline void __enter_from_user_mode(void)
 	CT_WARN_ON(ct_state() != CONTEXT_USER);
 	user_exit_irqoff();
 	trace_hardirqs_off_finish();
+	mte_disable_tco_entry(current);
 }
 
 static __always_inline void enter_from_user_mode(struct pt_regs *regs)
diff --git a/arch/arm64/kernel/entry.S b/arch/arm64/kernel/entry.S
index 772ec2ecf488..e1013a83d4f0 100644
--- a/arch/arm64/kernel/entry.S
+++ b/arch/arm64/kernel/entry.S
@@ -308,13 +308,6 @@ alternative_if ARM64_HAS_IRQ_PRIO_MASKING
 	msr_s	SYS_ICC_PMR_EL1, x20
 alternative_else_nop_endif
 
-	/* Re-enable tag checking (TCO set on exception entry) */
-#ifdef CONFIG_ARM64_MTE
-alternative_if ARM64_MTE
-	SET_PSTATE_TCO(0)
-alternative_else_nop_endif
-#endif
-
 	/*
 	 * Registers that may be useful after this macro is invoked:
 	 *
diff --git a/arch/arm64/kernel/mte.c b/arch/arm64/kernel/mte.c
index f418ebc65f95..5345587f3384 100644
--- a/arch/arm64/kernel/mte.c
+++ b/arch/arm64/kernel/mte.c
@@ -252,6 +252,7 @@ void mte_thread_switch(struct task_struct *next)
 
 	mte_update_sctlr_user(next);
 	mte_update_gcr_excl(next);
+	mte_disable_tco_entry(next);
 
 	/*
	 * Check if an async tag exception occurred at EL1.
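
Context note (not part of the patch): the TCF0 check added above only fires for
tasks that have opted in to synchronous tag checking. As a rough illustration,
a minimal userspace sketch using the documented PR_SET_TAGGED_ADDR_CTRL prctl()
interface (constants mirror include/uapi/linux/prctl.h; the program itself is
hypothetical) shows how a task reaches that state, after which
mte_disable_tco_entry() will keep clearing PSTATE.TCO on each kernel entry for
that task:

/*
 * Illustrative userspace sketch, not part of this patch: opt the calling
 * task in to synchronous MTE tag checking. Once this prctl() succeeds,
 * the task's SCTLR_EL1.TCF0 field is non-zero, so the kernel's
 * mte_disable_tco_entry() will clear PSTATE.TCO on each entry in order
 * to report tag check faults in uaccesses.
 */
#include <stdio.h>
#include <sys/prctl.h>

#ifndef PR_MTE_TCF_SYNC
/* Fallback definitions matching include/uapi/linux/prctl.h */
#define PR_SET_TAGGED_ADDR_CTRL	55
#define PR_TAGGED_ADDR_ENABLE	(1UL << 0)
#define PR_MTE_TCF_SYNC		(1UL << 1)
#define PR_MTE_TAG_SHIFT	3
#endif

int main(void)
{
	/* Enable tagged addresses, synchronous tag checks, tags 1-15. */
	unsigned long ctrl = PR_TAGGED_ADDR_ENABLE | PR_MTE_TCF_SYNC |
			     (0xfffeUL << PR_MTE_TAG_SHIFT);

	if (prctl(PR_SET_TAGGED_ADDR_CTRL, ctrl, 0, 0, 0)) {
		perror("PR_SET_TAGGED_ADDR_CTRL");
		return 1;
	}
	return 0;
}

Tasks that never issue this prctl(), or that select PR_MTE_TCF_ASYNC, keep
TCF0 clear in thread.sctlr_user, so with in-kernel MTE disabled they now skip
the SET_PSTATE_TCO(0) on entry, which is the cost this patch avoids.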