From patchwork Tue Jul 13 23:47:59 2021
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Peter Collingbourne
X-Patchwork-Id: 12375407
Date: Tue, 13 Jul 2021 16:47:59 -0700
In-Reply-To: <20210713234801.3858018-1-pcc@google.com>
Message-Id: <20210713234801.3858018-4-pcc@google.com>
Mime-Version: 1.0
References: <20210713234801.3858018-1-pcc@google.com>
X-Mailer: git-send-email 2.32.0.93.g670b81a890-goog
Subject: [PATCH v10 3/5] arm64: move preemption disablement to prctl handlers
From: Peter Collingbourne
To: Catalin Marinas, Vincenzo Frascino, Will Deacon
Cc: Peter Collingbourne, Evgenii Stepanov, Szabolcs Nagy, Tejas Belagod,
 linux-arm-kernel@lists.infradead.org, Greg Kroah-Hartman

In the next patch, we will start reading sctlr_user from
mte_update_sctlr_user and subsequently writing a new value based on the
task's TCF setting and potentially the per-CPU TCF preference. This
means that we need to be careful to disable preemption around any code
sequences that read from sctlr_user and subsequently write to
sctlr_user and/or SCTLR_EL1, so that we don't end up writing a stale
value (based on the previous CPU's TCF preference) to either of them.

We currently have four such sequences, in the prctl handlers for
PR_SET_TAGGED_ADDR_CTRL and PR_PAC_SET_ENABLED_KEYS, as well as in the
task initialization code that resets the prctl settings. Change the
prctl handlers to disable preemption in the handlers themselves rather
than the functions that they call, and change the task initialization
code to call the respective prctl handlers instead of setting
sctlr_user directly.

As a result of this change, we no longer need the helper function
set_task_sctlr_el1, nor does its behavior make sense any more, so
remove it.
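For illustration, the pattern that each of these sequences follows
after this change is roughly the following (a simplified sketch based
on the set_mte_ctrl() hunk below):

	preempt_disable();
	/*
	 * Recompute sctlr_user; after the next patch this will also read
	 * the per-CPU TCF preference, so the task must not migrate here.
	 */
	mte_update_sctlr_user(task);
	/* Write the freshly computed value to SCTLR_EL1 on this CPU. */
	update_sctlr_el1(task->thread.sctlr_user);
	preempt_enable();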
Signed-off-by: Peter Collingbourne
Link: https://linux-review.googlesource.com/id/Ic0e8a0c00bb47d786c1e8011df0b7fe99bee4bb5
Acked-by: Will Deacon
---
 arch/arm64/include/asm/pointer_auth.h | 12 ++++++------
 arch/arm64/include/asm/processor.h    |  2 +-
 arch/arm64/kernel/mte.c               |  8 ++++----
 arch/arm64/kernel/pointer_auth.c      | 10 ++++++----
 arch/arm64/kernel/process.c           | 21 +++++++--------------
 5 files changed, 24 insertions(+), 29 deletions(-)

diff --git a/arch/arm64/include/asm/pointer_auth.h b/arch/arm64/include/asm/pointer_auth.h
index d50416be99be..592968f0bc22 100644
--- a/arch/arm64/include/asm/pointer_auth.h
+++ b/arch/arm64/include/asm/pointer_auth.h
@@ -10,6 +10,9 @@
 #include
 #include
 
+#define PR_PAC_ENABLED_KEYS_MASK \
+	(PR_PAC_APIAKEY | PR_PAC_APIBKEY | PR_PAC_APDAKEY | PR_PAC_APDBKEY)
+
 #ifdef CONFIG_ARM64_PTR_AUTH
 /*
  * Each key is a 128-bit quantity which is split across a pair of 64-bit
@@ -113,9 +116,9 @@ static __always_inline void ptrauth_enable(void)
 									\
 		/* enable all keys */					\
 		if (system_supports_address_auth())			\
-			set_task_sctlr_el1(current->thread.sctlr_user |	\
-					   SCTLR_ELx_ENIA | SCTLR_ELx_ENIB | \
-					   SCTLR_ELx_ENDA | SCTLR_ELx_ENDB); \
+			ptrauth_set_enabled_keys(current,		\
+						 PR_PAC_ENABLED_KEYS_MASK, \
+						 PR_PAC_ENABLED_KEYS_MASK); \
 	} while (0)
 
 #define ptrauth_thread_switch_user(tsk) \
@@ -139,7 +142,4 @@ static __always_inline void ptrauth_enable(void)
 #define ptrauth_thread_switch_kernel(tsk)
 #endif /* CONFIG_ARM64_PTR_AUTH */
 
-#define PR_PAC_ENABLED_KEYS_MASK \
-	(PR_PAC_APIAKEY | PR_PAC_APIBKEY | PR_PAC_APDAKEY | PR_PAC_APDBKEY)
-
 #endif /* __ASM_POINTER_AUTH_H */
diff --git a/arch/arm64/include/asm/processor.h b/arch/arm64/include/asm/processor.h
index 80ceb9cbdd60..ebb3b1aefed7 100644
--- a/arch/arm64/include/asm/processor.h
+++ b/arch/arm64/include/asm/processor.h
@@ -257,7 +257,7 @@ extern void release_thread(struct task_struct *);
 
 unsigned long get_wchan(struct task_struct *p);
 
-void set_task_sctlr_el1(u64 sctlr);
+void update_sctlr_el1(u64 sctlr);
 
 /* Thread switching */
 extern struct task_struct *cpu_switch_to(struct task_struct *prev,
diff --git a/arch/arm64/kernel/mte.c b/arch/arm64/kernel/mte.c
index 53d89915029d..432d9b641e9c 100644
--- a/arch/arm64/kernel/mte.c
+++ b/arch/arm64/kernel/mte.c
@@ -222,9 +222,7 @@ void mte_thread_init_user(void)
 	write_sysreg_s(0, SYS_TFSRE0_EL1);
 	clear_thread_flag(TIF_MTE_ASYNC_FAULT);
 	/* disable tag checking and reset tag generation mask */
-	current->thread.mte_ctrl = MTE_CTRL_GCR_USER_EXCL_MASK;
-	mte_update_sctlr_user(current);
-	set_task_sctlr_el1(current->thread.sctlr_user);
+	set_mte_ctrl(current, 0);
 }
 
 void mte_thread_switch(struct task_struct *next)
@@ -281,8 +279,10 @@ long set_mte_ctrl(struct task_struct *task, unsigned long arg)
 
 	task->thread.mte_ctrl = mte_ctrl;
 	if (task == current) {
+		preempt_disable();
 		mte_update_sctlr_user(task);
-		set_task_sctlr_el1(task->thread.sctlr_user);
+		update_sctlr_el1(task->thread.sctlr_user);
+		preempt_enable();
 	}
 
 	return 0;
diff --git a/arch/arm64/kernel/pointer_auth.c b/arch/arm64/kernel/pointer_auth.c
index 60901ab0a7fe..2708b620b4ae 100644
--- a/arch/arm64/kernel/pointer_auth.c
+++ b/arch/arm64/kernel/pointer_auth.c
@@ -67,7 +67,7 @@ static u64 arg_to_enxx_mask(unsigned long arg)
 int ptrauth_set_enabled_keys(struct task_struct *tsk, unsigned long keys,
 			     unsigned long enabled)
 {
-	u64 sctlr = tsk->thread.sctlr_user;
+	u64 sctlr;
 
 	if (!system_supports_address_auth())
 		return -EINVAL;
@@ -78,12 +78,14 @@ int ptrauth_set_enabled_keys(struct task_struct *tsk, unsigned long keys,
 	if ((keys & ~PR_PAC_ENABLED_KEYS_MASK) || (enabled & ~keys))
 		return -EINVAL;
 
+	preempt_disable();
+	sctlr = tsk->thread.sctlr_user;
 	sctlr &= ~arg_to_enxx_mask(keys);
 	sctlr |= arg_to_enxx_mask(enabled);
+	tsk->thread.sctlr_user = sctlr;
 	if (tsk == current)
-		set_task_sctlr_el1(sctlr);
-	else
-		tsk->thread.sctlr_user = sctlr;
+		update_sctlr_el1(sctlr);
+	preempt_enable();
 
 	return 0;
 }
diff --git a/arch/arm64/kernel/process.c b/arch/arm64/kernel/process.c
index b4bb67f17a2c..c548eec87810 100644
--- a/arch/arm64/kernel/process.c
+++ b/arch/arm64/kernel/process.c
@@ -527,7 +527,13 @@ static void erratum_1418040_thread_switch(struct task_struct *prev,
 	write_sysreg(val, cntkctl_el1);
 }
 
-static void update_sctlr_el1(u64 sctlr)
+/*
+ * __switch_to() checks current->thread.sctlr_user as an optimisation. Therefore
+ * this function must be called with preemption disabled and the update to
+ * sctlr_user must be made in the same preemption disabled block so that
+ * __switch_to() does not see the variable update before the SCTLR_EL1 one.
+ */
+void update_sctlr_el1(u64 sctlr)
 {
 	/*
 	 * EnIA must not be cleared while in the kernel as this is necessary for
@@ -539,19 +545,6 @@ static void update_sctlr_el1(u64 sctlr)
 	isb();
 }
 
-void set_task_sctlr_el1(u64 sctlr)
-{
-	/*
-	 * __switch_to() checks current->thread.sctlr as an
-	 * optimisation. Disable preemption so that it does not see
-	 * the variable update before the SCTLR_EL1 one.
-	 */
-	preempt_disable();
-	current->thread.sctlr_user = sctlr;
-	update_sctlr_el1(sctlr);
-	preempt_enable();
-}
-
 /*
  * Thread switching.
  */