From patchwork Mon Nov 2 16:03:48 2020
X-Patchwork-Submitter: Andrey Konovalov
X-Patchwork-Id: 11874357
Date: Mon, 2 Nov 2020 17:03:48 +0100
Message-Id: <46a1454e0cadea1da73a9f8c1222c1aa3742d4e6.1604333009.git.andreyknvl@google.com>
Subject: [PATCH v7 08/41] arm64: mte: Switch GCR_EL1 in kernel entry and exit
From: Andrey Konovalov
To: Catalin Marinas, Will Deacon
Cc: linux-arm-kernel@lists.infradead.org, Marco Elver, Elena Petrova,
    Andrey Konovalov, Kevin Brodsky, Branislav Rankov,
    kasan-dev@googlegroups.com, linux-kernel@vger.kernel.org,
    linux-mm@kvack.org, Alexander Potapenko, Evgenii Stepanov,
    Andrey Ryabinin, Andrew Morton, Vincenzo Frascino, Dmitry Vyukov

From: Vincenzo Frascino

When MTE is present, the GCR_EL1 register contains the tag mask that
allows tags to be excluded from random generation via the IRG
instruction. With the introduction of the new Tag-Based KASAN API,
which provides a mechanism to reserve tags for special purposes, the
MTE implementation has to make sure that the GCR_EL1 setting for the
kernel does not affect userspace processes and vice versa.

Save and restore the kernel/user mask in GCR_EL1 on kernel entry and
exit.
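To make the exclusion mechanism concrete, below is a minimal user-space
sketch (not part of this patch) of how a GCR_EL1-style exclude mask
constrains the 4-bit tags that IRG may return: bit n of the low 16 bits
excludes tag n from random selection. The helper name irg_model and the
example mask are illustrative assumptions only.

```c
/* Illustrative model only; assumes at least one tag is not excluded. */
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

#define GCR_EXCL_MASK	0xffffULL	/* low 16 bits hold the exclude mask */

/* Pick a random 4-bit tag whose bit is clear in the exclude mask. */
static uint8_t irg_model(uint64_t gcr_excl)
{
	uint8_t tag;

	do {
		tag = rand() & 0xf;
	} while (gcr_excl & (1ULL << tag));

	return tag;
}

int main(void)
{
	/*
	 * Assumed example mask: exclude only tag 0xf; the actual kernel
	 * mask is computed by mte_init_tags() in this patch.
	 */
	uint64_t kernel_excl = 0x8000 & GCR_EXCL_MASK;
	int i;

	for (i = 0; i < 8; i++)
		printf("generated tag: %#x\n",
		       (unsigned int)irg_model(kernel_excl));

	return 0;
}
```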
Signed-off-by: Vincenzo Frascino
Signed-off-by: Andrey Konovalov
Reviewed-by: Catalin Marinas
---
Change-Id: I0081cba5ace27a9111bebb239075c9a466af4c84
---
 arch/arm64/include/asm/mte-def.h   |  1 -
 arch/arm64/include/asm/mte-kasan.h |  6 +++++
 arch/arm64/include/asm/mte.h       |  2 ++
 arch/arm64/kernel/asm-offsets.c    |  3 +++
 arch/arm64/kernel/cpufeature.c     |  3 +++
 arch/arm64/kernel/entry.S          | 41 ++++++++++++++++++++++++++++++
 arch/arm64/kernel/mte.c            | 22 +++++++++++++---
 7 files changed, 74 insertions(+), 4 deletions(-)

diff --git a/arch/arm64/include/asm/mte-def.h b/arch/arm64/include/asm/mte-def.h
index 8401ac5840c7..2d73a1612f09 100644
--- a/arch/arm64/include/asm/mte-def.h
+++ b/arch/arm64/include/asm/mte-def.h
@@ -10,6 +10,5 @@
 #define MTE_TAG_SHIFT		56
 #define MTE_TAG_SIZE		4
 #define MTE_TAG_MASK		GENMASK((MTE_TAG_SHIFT + (MTE_TAG_SIZE - 1)), MTE_TAG_SHIFT)
-#define MTE_TAG_MAX		(MTE_TAG_MASK >> MTE_TAG_SHIFT)
 
 #endif /* __ASM_MTE_DEF_H */
diff --git a/arch/arm64/include/asm/mte-kasan.h b/arch/arm64/include/asm/mte-kasan.h
index 3a70fb1807fd..a4c61b926d4a 100644
--- a/arch/arm64/include/asm/mte-kasan.h
+++ b/arch/arm64/include/asm/mte-kasan.h
@@ -29,6 +29,8 @@ u8 mte_get_mem_tag(void *addr);
 u8 mte_get_random_tag(void);
 void *mte_set_mem_tag_range(void *addr, size_t size, u8 tag);
 
+void mte_init_tags(u64 max_tag);
+
 #else /* CONFIG_ARM64_MTE */
 
 static inline u8 mte_get_ptr_tag(void *ptr)
@@ -49,6 +51,10 @@ static inline void *mte_set_mem_tag_range(void *addr, size_t size, u8 tag)
 	return addr;
 }
 
+static inline void mte_init_tags(u64 max_tag)
+{
+}
+
 #endif /* CONFIG_ARM64_MTE */
 
 #endif /* __ASSEMBLY__ */
diff --git a/arch/arm64/include/asm/mte.h b/arch/arm64/include/asm/mte.h
index cf1cd181dcb2..d02aff9f493d 100644
--- a/arch/arm64/include/asm/mte.h
+++ b/arch/arm64/include/asm/mte.h
@@ -18,6 +18,8 @@
 
 #include
 
+extern u64 gcr_kernel_excl;
+
 void mte_clear_page_tags(void *addr);
 unsigned long mte_copy_tags_from_user(void *to, const void __user *from,
 				      unsigned long n);
diff --git a/arch/arm64/kernel/asm-offsets.c b/arch/arm64/kernel/asm-offsets.c
index 7d32fc959b1a..dfe6ed8446ac 100644
--- a/arch/arm64/kernel/asm-offsets.c
+++ b/arch/arm64/kernel/asm-offsets.c
@@ -47,6 +47,9 @@ int main(void)
 #ifdef CONFIG_ARM64_PTR_AUTH
   DEFINE(THREAD_KEYS_USER,	offsetof(struct task_struct, thread.keys_user));
   DEFINE(THREAD_KEYS_KERNEL,	offsetof(struct task_struct, thread.keys_kernel));
+#endif
+#ifdef CONFIG_ARM64_MTE
+  DEFINE(THREAD_GCR_EL1_USER,	offsetof(struct task_struct, thread.gcr_user_excl));
 #endif
   BLANK();
   DEFINE(S_X0,			offsetof(struct pt_regs, regs[0]));
diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
index c61f201042b2..8f83042726ff 100644
--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -1707,6 +1707,9 @@ static void cpu_enable_mte(struct arm64_cpu_capabilities const *cap)
 
 	/* Enable in-kernel MTE only if KASAN_HW_TAGS is enabled */
 	if (IS_ENABLED(CONFIG_KASAN_HW_TAGS)) {
+		/* Enable the kernel exclude mask for random tags generation */
+		write_sysreg_s(SYS_GCR_EL1_RRND | gcr_kernel_excl, SYS_GCR_EL1);
+
 		/* Enable MTE Sync Mode for EL1 */
 		sysreg_clear_set(sctlr_el1, SCTLR_ELx_TCF_MASK, SCTLR_ELx_TCF_SYNC);
 		isb();
diff --git a/arch/arm64/kernel/entry.S b/arch/arm64/kernel/entry.S
index b295fb912b12..07646ef4f184 100644
--- a/arch/arm64/kernel/entry.S
+++ b/arch/arm64/kernel/entry.S
@@ -173,6 +173,43 @@ alternative_else_nop_endif
 #endif
 	.endm
 
+	.macro mte_set_gcr, tmp, tmp2
+#ifdef CONFIG_ARM64_MTE
+	/*
+	 * Calculate and set the exclude mask preserving
+	 * the RRND (bit[16]) setting.
+	 */
+	mrs_s	\tmp2, SYS_GCR_EL1
+	bfi	\tmp2, \tmp, #0, #16
+	msr_s	SYS_GCR_EL1, \tmp2
+	isb
+#endif
+	.endm
+
+	.macro mte_set_kernel_gcr, tmp, tmp2
+#ifdef CONFIG_KASAN_HW_TAGS
+alternative_if_not ARM64_MTE
+	b	1f
+alternative_else_nop_endif
+	ldr_l	\tmp, gcr_kernel_excl
+
+	mte_set_gcr	\tmp, \tmp2
+1:
+#endif
+	.endm
+
+	.macro mte_set_user_gcr, tsk, tmp, tmp2
+#ifdef CONFIG_ARM64_MTE
+alternative_if_not ARM64_MTE
+	b	1f
+alternative_else_nop_endif
+	ldr	\tmp, [\tsk, #THREAD_GCR_EL1_USER]
+
+	mte_set_gcr	\tmp, \tmp2
+1:
+#endif
+	.endm
+
 	.macro	kernel_entry, el, regsize = 64
 	.if	\regsize == 32
 	mov	w0, w0				// zero upper 32 bits of x0
@@ -212,6 +249,8 @@ alternative_else_nop_endif
 
 	ptrauth_keys_install_kernel tsk, x20, x22, x23
 
+	mte_set_kernel_gcr x22, x23
+
 	scs_load tsk, x20
 	.else
 	add	x21, sp, #S_FRAME_SIZE
@@ -330,6 +369,8 @@ alternative_else_nop_endif
 
 	/* No kernel C function calls after this as user keys are set. */
 	ptrauth_keys_install_user tsk, x0, x1, x2
 
+	mte_set_user_gcr tsk, x0, x1
+
 	apply_ssbd 0, x0, x1
 	.endif
diff --git a/arch/arm64/kernel/mte.c b/arch/arm64/kernel/mte.c
index a9f03be75cef..ca8206b7f9a6 100644
--- a/arch/arm64/kernel/mte.c
+++ b/arch/arm64/kernel/mte.c
@@ -23,6 +23,8 @@
 #include
 #include
 
+u64 gcr_kernel_excl __ro_after_init;
+
 static void mte_sync_page_tags(struct page *page, pte_t *ptep, bool check_swap)
 {
 	pte_t old_pte = READ_ONCE(*ptep);
@@ -121,6 +123,17 @@ void *mte_set_mem_tag_range(void *addr, size_t size, u8 tag)
 	return ptr;
 }
 
+void mte_init_tags(u64 max_tag)
+{
+	/*
+	 * The format of the tags in KASAN is 0xFF and in MTE is 0xF.
+	 * This conversion is required to extract the MTE tag from a KASAN one.
+	 */
+	u64 incl = GENMASK(FIELD_GET(MTE_TAG_MASK >> MTE_TAG_SHIFT, max_tag), 0);
+
+	gcr_kernel_excl = ~incl & SYS_GCR_EL1_EXCL_MASK;
+}
+
 static void update_sctlr_el1_tcf0(u64 tcf0)
 {
 	/* ISB required for the kernel uaccess routines */
@@ -156,7 +169,11 @@ static void update_gcr_el1_excl(u64 excl)
 static void set_gcr_el1_excl(u64 excl)
 {
 	current->thread.gcr_user_excl = excl;
-	update_gcr_el1_excl(excl);
+
+	/*
+	 * SYS_GCR_EL1 will be set to current->thread.gcr_user_excl value
+	 * by mte_set_user_gcr() in kernel_exit,
+	 */
 }
 
 void flush_mte_state(void)
@@ -182,7 +199,6 @@ void mte_thread_switch(struct task_struct *next)
 	/* avoid expensive SCTLR_EL1 accesses if no change */
 	if (current->thread.sctlr_tcf0 != next->thread.sctlr_tcf0)
 		update_sctlr_el1_tcf0(next->thread.sctlr_tcf0);
-	update_gcr_el1_excl(next->thread.gcr_user_excl);
 }
 
 void mte_suspend_exit(void)
@@ -190,7 +206,7 @@ void mte_suspend_exit(void)
 	if (!system_supports_mte())
 		return;
 
-	update_gcr_el1_excl(current->thread.gcr_user_excl);
+	update_gcr_el1_excl(gcr_kernel_excl);
 }
 
 long set_mte_ctrl(struct task_struct *task, unsigned long arg)
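As a sanity check of the mte_init_tags() conversion above, here is a small
stand-alone user-space sketch that reproduces the FIELD_GET/GENMASK
arithmetic with plain shifts. The input value 0xFE is an assumed example of
a KASAN-style maximum tag, not something taken from this patch; for that
input the computed exclude mask is 0x8000, i.e. only tag 0xF is excluded.

```c
#include <stdint.h>
#include <stdio.h>

#define SYS_GCR_EL1_EXCL_MASK	0xffffULL	/* one exclude bit per tag 0x0..0xf */

/* Mirror of the mte_init_tags() computation using plain shifts. */
static uint64_t kernel_excl_from_max_tag(uint64_t max_tag)
{
	uint64_t mte_max = max_tag & 0xf;		/* FIELD_GET(0xf, max_tag) */
	uint64_t incl = (1ULL << (mte_max + 1)) - 1;	/* GENMASK(mte_max, 0)     */

	return ~incl & SYS_GCR_EL1_EXCL_MASK;
}

int main(void)
{
	/* 0xFE is an assumed example of a KASAN-style maximum tag. */
	uint64_t excl = kernel_excl_from_max_tag(0xfe);

	printf("gcr_kernel_excl = %#llx\n", (unsigned long long)excl);	/* 0x8000 */
	return 0;
}
```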