From patchwork Tue Jun 22 05:12:04 2021
X-Patchwork-Submitter: Peter Collingbourne
X-Patchwork-Id: 12336435
Date: Mon, 21 Jun 2021 22:12:04 -0700
Message-Id: <20210622051204.3682580-1-pcc@google.com>
Subject: [PATCH v2] arm64: allow TCR_EL1.TBID0 to be configured
From: Peter Collingbourne
To: Catalin Marinas, Evgenii Stepanov, Kostya Serebryany, Vincenzo Frascino,
    Dave Martin, Will Deacon, Szabolcs Nagy
Cc: Peter Collingbourne, Linux ARM, linux-api@vger.kernel.org
List-Id: linux-arm-kernel@lists.infradead.org

Introduce a command line flag that controls whether TCR_EL1.TBID0 is set
at boot time. Since this is a change to the userspace ABI, the option
defaults to off for now, although it seems likely that we will be able
to change the default at some future point.

Setting TCR_EL1.TBID0 increases the number of signature bits used by the
pointer authentication instructions for instruction addresses by 8,
which improves the security of pointer authentication. However, it also
changes the operation of branch instructions: they no longer ignore the
top byte of the target address, and instead fault if it is non-zero.
Signed-off-by: Peter Collingbourne
Link: https://linux-review.googlesource.com/id/Ife724ad708142bc475f42e8c1d9609124994bbbd
---
v2:
- rebase to linux-next
- make it a command line flag

 arch/arm64/include/asm/compiler.h      | 19 ++++++++----
 arch/arm64/include/asm/memory.h        |  2 ++
 arch/arm64/include/asm/pgtable-hwdef.h |  1 +
 arch/arm64/include/asm/pointer_auth.h  |  2 +-
 arch/arm64/include/asm/processor.h     |  2 ++
 arch/arm64/kernel/pointer_auth.c       | 12 +++++++
 arch/arm64/kernel/process.c            | 43 ++++++++++++++++++++++++++
 arch/arm64/kernel/ptrace.c             |  8 ++---
 arch/arm64/mm/fault.c                  | 14 ++++++++-
 arch/arm64/mm/proc.S                   | 29 +----------------
 10 files changed, 91 insertions(+), 41 deletions(-)

diff --git a/arch/arm64/include/asm/compiler.h b/arch/arm64/include/asm/compiler.h
index 6fb2e6bcc392..3c2c7a1a2abf 100644
--- a/arch/arm64/include/asm/compiler.h
+++ b/arch/arm64/include/asm/compiler.h
@@ -8,19 +8,26 @@
 #define ARM64_ASM_PREAMBLE
 #endif
 
+/* Open-code TCR_TBID0 value to avoid circular dependency. */
+#define tcr_tbid0_enabled()	(init_tcr & (1UL << 51))
+
 /*
  * The EL0/EL1 pointer bits used by a pointer authentication code.
  * This is dependent on TBI0/TBI1 being enabled, or bits 63:56 would also apply.
  */
-#define ptrauth_user_pac_mask()	GENMASK_ULL(54, vabits_actual)
+#define ptrauth_user_insn_pac_mask() \
+	(tcr_tbid0_enabled() ? GENMASK_ULL(63, vabits_actual) : \
+			       GENMASK_ULL(54, vabits_actual))
+#define ptrauth_user_data_pac_mask()	GENMASK_ULL(54, vabits_actual)
 #define ptrauth_kernel_pac_mask()	GENMASK_ULL(63, vabits_actual)
 
 /* Valid for EL0 TTBR0 and EL1 TTBR1 instruction pointers */
-#define ptrauth_clear_pac(ptr) \
-	((ptr & BIT_ULL(55)) ? (ptr | ptrauth_kernel_pac_mask()) : \
-			       (ptr & ~ptrauth_user_pac_mask()))
+#define ptrauth_clear_insn_pac(ptr) \
+	((ptr & BIT_ULL(55)) ? (ptr | ptrauth_kernel_pac_mask()) : \
+			       (ptr & ~ptrauth_user_insn_pac_mask()))
 
-#define __builtin_return_address(val) \
-	(void *)(ptrauth_clear_pac((unsigned long)__builtin_return_address(val)))
+#define __builtin_return_address(val) \
+	((void *)(ptrauth_clear_insn_pac( \
+		(unsigned long)__builtin_return_address(val))))
 
 #endif /* __ASM_COMPILER_H */
diff --git a/arch/arm64/include/asm/memory.h b/arch/arm64/include/asm/memory.h
index 87b90dc27a43..e0d8b8443ca6 100644
--- a/arch/arm64/include/asm/memory.h
+++ b/arch/arm64/include/asm/memory.h
@@ -191,6 +191,8 @@ extern u64 kimage_vaddr;
 /* the offset between the kernel virtual and physical mappings */
 extern u64 kimage_voffset;
 
+extern u64 init_tcr;
+
 static inline unsigned long kaslr_offset(void)
 {
 	return kimage_vaddr - KIMAGE_VADDR;
diff --git a/arch/arm64/include/asm/pgtable-hwdef.h b/arch/arm64/include/asm/pgtable-hwdef.h
index b82575a33f8b..31fc7c4b75d4 100644
--- a/arch/arm64/include/asm/pgtable-hwdef.h
+++ b/arch/arm64/include/asm/pgtable-hwdef.h
@@ -275,6 +275,7 @@
 #define TCR_TBI1		(UL(1) << 38)
 #define TCR_HA			(UL(1) << 39)
 #define TCR_HD			(UL(1) << 40)
+#define TCR_TBID0		(UL(1) << 51)
 #define TCR_TBID1		(UL(1) << 52)
 #define TCR_NFD0		(UL(1) << 53)
 #define TCR_NFD1		(UL(1) << 54)
diff --git a/arch/arm64/include/asm/pointer_auth.h b/arch/arm64/include/asm/pointer_auth.h
index d50416be99be..1bb1b022e5ee 100644
--- a/arch/arm64/include/asm/pointer_auth.h
+++ b/arch/arm64/include/asm/pointer_auth.h
@@ -92,7 +92,7 @@ extern int ptrauth_get_enabled_keys(struct task_struct *tsk);
 
 static inline unsigned long ptrauth_strip_insn_pac(unsigned long ptr)
 {
-	return ptrauth_clear_pac(ptr);
+	return ptrauth_clear_insn_pac(ptr);
 }
 
 static __always_inline void ptrauth_enable(void)
diff --git a/arch/arm64/include/asm/processor.h b/arch/arm64/include/asm/processor.h
index 9df3feeee890..b2a575359a9c 100644
--- a/arch/arm64/include/asm/processor.h
+++ b/arch/arm64/include/asm/processor.h
@@ -253,6 +253,8 @@ unsigned long get_wchan(struct task_struct *p);
 
 void set_task_sctlr_el1(u64 sctlr);
 
+void enable_tcr(u64 tcr);
+
 /* Thread switching */
 extern struct task_struct *cpu_switch_to(struct task_struct *prev,
 					 struct task_struct *next);
diff --git a/arch/arm64/kernel/pointer_auth.c b/arch/arm64/kernel/pointer_auth.c
index 60901ab0a7fe..9ac2fc2b4e46 100644
--- a/arch/arm64/kernel/pointer_auth.c
+++ b/arch/arm64/kernel/pointer_auth.c
@@ -109,3 +109,15 @@ int ptrauth_get_enabled_keys(struct task_struct *tsk)
 
 	return retval;
 }
+
+static int __init tbi_data(char *arg)
+{
+	bool tbi_data;
+
+	if (kstrtobool(arg, &tbi_data))
+		return -EINVAL;
+	if (tbi_data)
+		enable_tcr(TCR_TBID0);
+	return 0;
+}
+early_param("tbi_data", tbi_data);
diff --git a/arch/arm64/kernel/process.c b/arch/arm64/kernel/process.c
index b4bb67f17a2c..d7a2f6cb833e 100644
--- a/arch/arm64/kernel/process.c
+++ b/arch/arm64/kernel/process.c
@@ -644,6 +644,49 @@ void arch_setup_new_exec(void)
 	}
 }
 
+#ifdef CONFIG_ARM64_64K_PAGES
+#define TCR_TG_FLAGS	(TCR_TG0_64K | TCR_TG1_64K)
+#elif defined(CONFIG_ARM64_16K_PAGES)
+#define TCR_TG_FLAGS	(TCR_TG0_16K | TCR_TG1_16K)
+#else /* CONFIG_ARM64_4K_PAGES */
+#define TCR_TG_FLAGS	(TCR_TG0_4K | TCR_TG1_4K)
+#endif
+
+#ifdef CONFIG_RANDOMIZE_BASE
+#define TCR_KASLR_FLAGS	TCR_NFD1
+#else
+#define TCR_KASLR_FLAGS	0
+#endif
+
+#define TCR_SMP_FLAGS	TCR_SHARED
+
+/* PTWs cacheable, inner/outer WBWA */
+#define TCR_CACHE_FLAGS	(TCR_IRGN_WBWA | TCR_ORGN_WBWA)
+
+#ifdef CONFIG_KASAN_SW_TAGS
+#define TCR_KASAN_SW_FLAGS (TCR_TBI1 | TCR_TBID1)
+#else
+#define TCR_KASAN_SW_FLAGS 0
+#endif
+
+u64 __section(".mmuoff.data.read") init_tcr =
+	TCR_TxSZ(VA_BITS) | TCR_CACHE_FLAGS | TCR_SMP_FLAGS | TCR_TG_FLAGS
+	| TCR_KASLR_FLAGS | TCR_ASID16 | TCR_TBI0 | TCR_A1 | TCR_KASAN_SW_FLAGS;
+EXPORT_SYMBOL(init_tcr);
+
+void __init enable_tcr(u64 tcr)
+{
+	u64 tmp;
+
+	init_tcr |= tcr;
+	__asm__ __volatile__(
+		"mrs %0, tcr_el1\n"
+		"orr %0, %0, %1\n"
+		"msr tcr_el1, %0\n"
+		"tlbi vmalle1\n"
+		: "=&r"(tmp) : "r"(tcr));
+}
+
 #ifdef CONFIG_ARM64_TAGGED_ADDR_ABI
 /*
  * Control the relaxed ABI allowing tagged user addresses into the kernel.
diff --git a/arch/arm64/kernel/ptrace.c b/arch/arm64/kernel/ptrace.c
index eb2f73939b7b..4d86870ed348 100644
--- a/arch/arm64/kernel/ptrace.c
+++ b/arch/arm64/kernel/ptrace.c
@@ -894,13 +894,11 @@ static int pac_mask_get(struct task_struct *target,
 {
 	/*
 	 * The PAC bits can differ across data and instruction pointers
-	 * depending on TCR_EL1.TBID*, which we may make use of in future, so
-	 * we expose separate masks.
+	 * depending on TCR_EL1.TBID0, so we expose separate masks.
 	 */
-	unsigned long mask = ptrauth_user_pac_mask();
 	struct user_pac_mask uregs = {
-		.data_mask = mask,
-		.insn_mask = mask,
+		.data_mask = ptrauth_user_data_pac_mask(),
+		.insn_mask = ptrauth_user_insn_pac_mask(),
 	};
 
 	if (!system_supports_address_auth())
diff --git a/arch/arm64/mm/fault.c b/arch/arm64/mm/fault.c
index 871c82ab0a30..9ee32afe121c 100644
--- a/arch/arm64/mm/fault.c
+++ b/arch/arm64/mm/fault.c
@@ -529,11 +529,23 @@ static int __kprobes do_page_fault(unsigned long far, unsigned int esr,
 	vm_fault_t fault;
 	unsigned long vm_flags;
 	unsigned int mm_flags = FAULT_FLAG_DEFAULT;
-	unsigned long addr = untagged_addr(far);
+	unsigned long addr;
 
 	if (kprobe_page_fault(regs, esr))
 		return 0;
 
+	/*
+	 * If TBID0 is set then we may get an IABT with a tagged address here as
+	 * a result of branching to a tagged address. In this case we want to
+	 * avoid untagging the address, let the VMA lookup fail and get a
+	 * SIGSEGV. Leaving the address as is will also work if TBID0 is clear
+	 * or unsupported because the tag bits of FAR_EL1 will be clear.
+	 */
+	if (is_el0_instruction_abort(esr))
+		addr = far;
+	else
+		addr = untagged_addr(far);
+
 	/*
 	 * If we're in an interrupt or have no user context, we must not take
 	 * the fault.
diff --git a/arch/arm64/mm/proc.S b/arch/arm64/mm/proc.S
index 97d7bcd8d4f2..bae9476e6c2a 100644
--- a/arch/arm64/mm/proc.S
+++ b/arch/arm64/mm/proc.S
@@ -20,31 +20,6 @@
 #include
 #include
 
-#ifdef CONFIG_ARM64_64K_PAGES
-#define TCR_TG_FLAGS	TCR_TG0_64K | TCR_TG1_64K
-#elif defined(CONFIG_ARM64_16K_PAGES)
-#define TCR_TG_FLAGS	TCR_TG0_16K | TCR_TG1_16K
-#else /* CONFIG_ARM64_4K_PAGES */
-#define TCR_TG_FLAGS	TCR_TG0_4K | TCR_TG1_4K
-#endif
-
-#ifdef CONFIG_RANDOMIZE_BASE
-#define TCR_KASLR_FLAGS	TCR_NFD1
-#else
-#define TCR_KASLR_FLAGS	0
-#endif
-
-#define TCR_SMP_FLAGS	TCR_SHARED
-
-/* PTWs cacheable, inner/outer WBWA */
-#define TCR_CACHE_FLAGS	TCR_IRGN_WBWA | TCR_ORGN_WBWA
-
-#ifdef CONFIG_KASAN_SW_TAGS
-#define TCR_KASAN_SW_FLAGS	TCR_TBI1 | TCR_TBID1
-#else
-#define TCR_KASAN_SW_FLAGS	0
-#endif
-
 #ifdef CONFIG_KASAN_HW_TAGS
 #define TCR_KASAN_HW_FLAGS SYS_TCR_EL1_TCMA1 | TCR_TBI1 | TCR_TBID1
 #else
@@ -425,9 +400,7 @@ SYM_FUNC_START(__cpu_setup)
 	mair	.req	x17
 	tcr	.req	x16
 	mov_q	mair, MAIR_EL1_SET
-	mov_q	tcr, TCR_TxSZ(VA_BITS) | TCR_CACHE_FLAGS | TCR_SMP_FLAGS | \
-			TCR_TG_FLAGS | TCR_KASLR_FLAGS | TCR_ASID16 | \
-			TCR_TBI0 | TCR_A1 | TCR_KASAN_SW_FLAGS
+	ldr_l	tcr, init_tcr
 #ifdef CONFIG_ARM64_MTE
 	/*