From patchwork Wed May 11 06:05:20 2022
X-Patchwork-Submitter: Sumit Garg
X-Patchwork-Id: 12845826
From: Sumit Garg
To: daniel.thompson@linaro.org, dianders@chromium.org, will@kernel.org, liwei391@huawei.com
Cc: catalin.marinas@arm.com, mark.rutland@arm.com, mhiramat@kernel.org, jason.wessel@windriver.com, maz@kernel.org, linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org, Sumit Garg
Subject: [PATCH v3 1/2] arm64: entry: Skip single stepping into interrupt handlers
Date: Wed, 11 May 2022 11:35:20 +0530
Message-Id: <20220511060521.465744-2-sumit.garg@linaro.org>
In-Reply-To: <20220511060521.465744-1-sumit.garg@linaro.org>
References: <20220511060521.465744-1-sumit.garg@linaro.org>

Currently, on systems where the timer interrupt (or any other
fast-at-human-scale periodic interrupt) is active, it is impossible to
single step any code with interrupts unmasked: we always end up stepping
into the timer interrupt handler instead of the code being stepped. The
common expectation when single stepping is that the system stops at PC+4,
or at PC+I for a taken branch, relative to the instruction being stepped.

Fix the broken single step implementation by skipping the step into
interrupt handlers. The approach is: when an interrupt is taken from EL1,
check whether we are single stepping (pstate.SS). If so, save MDSCR_EL1.SS
and clear that bit if it was set, then unmask only D and leave I set. On
return from the interrupt, set D and restore MDSCR_EL1.SS. Along with
this, skip the reschedule if we were stepping.
Suggested-by: Will Deacon
Signed-off-by: Sumit Garg
---
 arch/arm64/kernel/entry-common.c | 18 +++++++++++++++++-
 1 file changed, 17 insertions(+), 1 deletion(-)

diff --git a/arch/arm64/kernel/entry-common.c b/arch/arm64/kernel/entry-common.c
index 878c65aa7206..dd2d3af615de 100644
--- a/arch/arm64/kernel/entry-common.c
+++ b/arch/arm64/kernel/entry-common.c
@@ -458,19 +458,35 @@ static __always_inline void __el1_irq(struct pt_regs *regs,
 	do_interrupt_handler(regs, handler);
 	irq_exit_rcu();
 
-	arm64_preempt_schedule_irq();
+	/* Don't reschedule in case we are single stepping */
+	if (!(regs->pstate & DBG_SPSR_SS))
+		arm64_preempt_schedule_irq();
 
 	exit_to_kernel_mode(regs);
 }
+
 static void noinstr el1_interrupt(struct pt_regs *regs,
 				  void (*handler)(struct pt_regs *))
 {
+	unsigned long reg;
+
+	/* Disable single stepping within interrupt handler */
+	if (regs->pstate & DBG_SPSR_SS) {
+		reg = read_sysreg(mdscr_el1);
+		write_sysreg(reg & ~DBG_MDSCR_SS, mdscr_el1);
+	}
+
 	write_sysreg(DAIF_PROCCTX_NOIRQ, daif);
 
 	if (IS_ENABLED(CONFIG_ARM64_PSEUDO_NMI) && !interrupts_enabled(regs))
 		__el1_pnmi(regs, handler);
 	else
 		__el1_irq(regs, handler);
+
+	if (regs->pstate & DBG_SPSR_SS) {
+		write_sysreg(DAIF_PROCCTX_NOIRQ | PSR_D_BIT, daif);
+		write_sysreg(reg, mdscr_el1);
+	}
 }
 
 asmlinkage void noinstr el1h_64_irq_handler(struct pt_regs *regs)
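
For reviewers less familiar with the arm64 single-step machinery, the bits
tested above are armed by the debugger core before it resumes the stepped
context (the real logic lives in kernel_enable_single_step() in
arch/arm64/kernel/debug-monitors.c). Below is a minimal sketch of that
arming step, not part of this patch; the helper name is made up and the
details are simplified, it is only meant to show where DBG_SPSR_SS and
DBG_MDSCR_SS come from:

/*
 * Illustrative sketch only (not part of this patch).
 *
 * MDSCR_EL1.SS enables the hardware step state machine, while SPSR.SS
 * (DBG_SPSR_SS in regs->pstate) marks the interrupted context as "execute
 * one more instruction, then take a step exception".  el1_interrupt()
 * above keys off the latter bit to park the step machinery for the
 * duration of the interrupt handler.
 */
static void arm_kernel_single_step(struct pt_regs *regs)
{
	/* Mark the stepped context: set SPSR.SS in the saved pstate. */
	regs->pstate |= DBG_SPSR_SS;

	/* Enable the step state machine in MDSCR_EL1. */
	write_sysreg(read_sysreg(mdscr_el1) | DBG_MDSCR_SS, mdscr_el1);
}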