From patchwork Tue Feb 6 12:38:48 2024
X-Patchwork-Submitter: Mark Rutland <mark.rutland@arm.com>
X-Patchwork-Id: 13547212
From: Mark Rutland <mark.rutland@arm.com>
To: linux-arm-kernel@lists.infradead.org
Cc: broonie@kernel.org, catalin.marinas@arm.com, james.morse@arm.com,
    mark.rutland@arm.com, will@kernel.org
Subject: [PATCH 3/3] arm64: Unmask Debug + SError in do_notify_resume()
Date: Tue, 6 Feb 2024 12:38:48 +0000
Message-Id: <20240206123848.1696480-4-mark.rutland@arm.com>
In-Reply-To: <20240206123848.1696480-1-mark.rutland@arm.com>
References: <20240206123848.1696480-1-mark.rutland@arm.com>

When returning to a user context, the arm64 entry code masks all DAIF
exceptions before handling pending work in exit_to_user_mode_prepare()
and do_notify_resume(), where it will transiently unmask all DAIF
exceptions.
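
For reference, the pre-patch flow looks roughly like the sketch below.
This is simplified from arch/arm64/kernel/entry-common.c (the
individual work handlers are elided), and the comments paraphrase the
helper semantics rather than quoting the implementation:

	static __always_inline void exit_to_user_mode_prepare(struct pt_regs *regs)
	{
		unsigned long flags;

		/* Mask all of D (Debug), A (SError), I (IRQ) and F (FIQ) */
		local_daif_mask();

		flags = read_thread_flags();
		/*
		 * do_notify_resume() loops over the pending work; each
		 * iteration transiently unmasks all DAIF exceptions via
		 * local_daif_restore(DAIF_PROCCTX) and re-masks them via
		 * local_daif_mask().
		 */
		if (unlikely(flags & _TIF_WORK_MASK))
			do_notify_resume(regs, flags);

		lockdep_sys_exit();
	}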
This is a holdover from the old entry assembly, which conservatively
masked all DAIF exceptions, and it's only necessary to mask interrupts
at this point during the exception return path, so long as we
subsequently mask all DAIF exceptions before the actual exception
return.

While most DAIF manipulation follows a save...restore sequence, the
manipulation in do_notify_resume() is the other way around, unmasking
all DAIF exceptions before masking them again. This is unfortunate as
we unnecessarily mask Debug and SError exceptions, and it would be
nice to remove this special case to make DAIF manipulation simpler and
more consistent.

This patch changes exit_to_user_mode_prepare() and do_notify_resume()
to only mask interrupts while handling pending work, masking other
DAIF exceptions after this has completed. This removes the unusual
DAIF manipulation and allows Debug and SError exceptions to be taken
for a slightly longer window during the exception return path.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: James Morse <james.morse@arm.com>
Cc: Mark Brown <broonie@kernel.org>
Cc: Will Deacon <will@kernel.org>
Reviewed-by: Mark Brown <broonie@kernel.org>
---
 arch/arm64/kernel/entry-common.c | 8 +++++---
 1 file changed, 5 insertions(+), 3 deletions(-)

diff --git a/arch/arm64/kernel/entry-common.c b/arch/arm64/kernel/entry-common.c
index 3c849ad03bf83..b77a15955f28b 100644
--- a/arch/arm64/kernel/entry-common.c
+++ b/arch/arm64/kernel/entry-common.c
@@ -130,7 +130,7 @@ static __always_inline void __exit_to_user_mode(void)
 static void do_notify_resume(struct pt_regs *regs, unsigned long thread_flags)
 {
 	do {
-		local_daif_restore(DAIF_PROCCTX);
+		local_irq_enable();
 
 		if (thread_flags & _TIF_NEED_RESCHED)
 			schedule();
@@ -153,7 +153,7 @@ static void do_notify_resume(struct pt_regs *regs, unsigned long thread_flags)
 		if (thread_flags & _TIF_FOREIGN_FPSTATE)
 			fpsimd_restore_current_state();
 
-		local_daif_mask();
+		local_irq_disable();
 		thread_flags = read_thread_flags();
 	} while (thread_flags & _TIF_WORK_MASK);
 }
@@ -162,12 +162,14 @@ static __always_inline void exit_to_user_mode_prepare(struct pt_regs *regs)
 {
 	unsigned long flags;
 
-	local_daif_mask();
+	local_irq_disable();
 
 	flags = read_thread_flags();
 	if (unlikely(flags & _TIF_WORK_MASK))
 		do_notify_resume(regs, flags);
 
+	local_daif_mask();
+
 	lockdep_sys_exit();
 }
 
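
For completeness, with the hunks above applied the resulting flow is
roughly the following (reconstructed from the diff; surrounding code
elided):

	static __always_inline void exit_to_user_mode_prepare(struct pt_regs *regs)
	{
		unsigned long flags;

		/* Only IRQs (and FIQs) are masked while checking for work */
		local_irq_disable();

		flags = read_thread_flags();
		if (unlikely(flags & _TIF_WORK_MASK))
			do_notify_resume(regs, flags);

		/*
		 * All DAIF exceptions are masked only once the pending work
		 * has been handled, before the actual exception return.
		 */
		local_daif_mask();

		lockdep_sys_exit();
	}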