From patchwork Thu Feb 10 05:49:42 2022
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Anup Patel
X-Patchwork-Id: 12741348
From: Anup Patel
To: Palmer Dabbelt, Paul Walmsley, Albert Ou, Daniel Lezcano, Ulf Hansson,
 "Rafael J . Wysocki", Pavel Machek, Rob Herring
Cc: Sandeep Tripathy, Atish Patra, Alistair Francis, Liush, Anup Patel,
 devicetree@vger.kernel.org, linux-riscv@lists.infradead.org,
 linux-kernel@vger.kernel.org, linux-pm@vger.kernel.org,
 linux-arm-kernel@lists.infradead.org, kvm-riscv@lists.infradead.org,
 Anup Patel, Guo Ren
Subject: [PATCH v11 3/8] RISC-V: Add arch functions for non-retentive suspend entry/exit
Date: Thu, 10 Feb 2022 11:19:42 +0530
Message-Id: <20220210054947.170134-4-apatel@ventanamicro.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20220210054947.170134-1-apatel@ventanamicro.com>
References: <20220210054947.170134-1-apatel@ventanamicro.com>
MIME-Version: 1.0

From: Anup Patel

The hart registers and CSRs are not preserved in non-retentive suspend
state, so we provide arch-specific helper functions which save/restore
the hart context upon entry to and exit from non-retentive suspend.
These helper functions can be used by cpuidle drivers for non-retentive
suspend entry/exit.

Signed-off-by: Anup Patel
Signed-off-by: Anup Patel
Reviewed-by: Guo Ren
---
 arch/riscv/include/asm/asm.h      |  27 +++++++
 arch/riscv/include/asm/suspend.h  |  36 +++++++++
 arch/riscv/kernel/Makefile        |   2 +
 arch/riscv/kernel/asm-offsets.c   |   3 +
 arch/riscv/kernel/head.S          |  21 -----
 arch/riscv/kernel/suspend.c       |  87 +++++++++++++++++++++
 arch/riscv/kernel/suspend_entry.S | 124 ++++++++++++++++++++++++++++++
 7 files changed, 279 insertions(+), 21 deletions(-)
 create mode 100644 arch/riscv/include/asm/suspend.h
 create mode 100644 arch/riscv/kernel/suspend.c
 create mode 100644 arch/riscv/kernel/suspend_entry.S

diff --git a/arch/riscv/include/asm/asm.h b/arch/riscv/include/asm/asm.h
index 618d7c5af1a2..48b4baa4d706 100644
--- a/arch/riscv/include/asm/asm.h
+++ b/arch/riscv/include/asm/asm.h
@@ -67,4 +67,31 @@
 #error "Unexpected __SIZEOF_SHORT__"
 #endif
 
+#ifdef __ASSEMBLY__
+
+/* Common assembly source macros */
+
+#ifdef CONFIG_XIP_KERNEL
+.macro XIP_FIXUP_OFFSET reg
+	REG_L t0, _xip_fixup
+	add \reg, \reg, t0
+.endm
+.macro XIP_FIXUP_FLASH_OFFSET reg
+	la t1, __data_loc
+	li t0, XIP_OFFSET_MASK
+	and t1, t1, t0
+	li t1, XIP_OFFSET
+	sub t0, t0, t1
+	sub \reg, \reg, t0
+.endm
+_xip_fixup: .dword CONFIG_PHYS_RAM_BASE - CONFIG_XIP_PHYS_ADDR - XIP_OFFSET
+#else
+.macro XIP_FIXUP_OFFSET reg
+.endm
+.macro XIP_FIXUP_FLASH_OFFSET reg
+.endm
+#endif /* CONFIG_XIP_KERNEL */
+
+#endif /* __ASSEMBLY__ */
+
 #endif /* _ASM_RISCV_ASM_H */
diff --git a/arch/riscv/include/asm/suspend.h b/arch/riscv/include/asm/suspend.h
new file mode 100644
index 000000000000..8be391c2aecb
--- /dev/null
+++ b/arch/riscv/include/asm/suspend.h
@@ -0,0 +1,36 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/*
+ * Copyright (c) 2021 Western Digital Corporation or its affiliates.
+ * Copyright (c) 2022 Ventana Micro Systems Inc.
+ */
+
+#ifndef _ASM_RISCV_SUSPEND_H
+#define _ASM_RISCV_SUSPEND_H
+
+#include <asm/ptrace.h>
+
+struct suspend_context {
+	/* Saved and restored by low-level functions */
+	struct pt_regs regs;
+	/* Saved and restored by high-level functions */
+	unsigned long scratch;
+	unsigned long tvec;
+	unsigned long ie;
+#ifdef CONFIG_MMU
+	unsigned long satp;
+#endif
+};
+
+/* Low-level CPU suspend entry function */
+int __cpu_suspend_enter(struct suspend_context *context);
+
+/* High-level CPU suspend which will save context and call finish() */
+int cpu_suspend(unsigned long arg,
+		int (*finish)(unsigned long arg,
+			      unsigned long entry,
+			      unsigned long context));
+
+/* Low-level CPU resume entry function */
+int __cpu_resume_enter(unsigned long hartid, unsigned long context);
+
+#endif
diff --git a/arch/riscv/kernel/Makefile b/arch/riscv/kernel/Makefile
index 612556faa527..13fa5733f5e7 100644
--- a/arch/riscv/kernel/Makefile
+++ b/arch/riscv/kernel/Makefile
@@ -48,6 +48,8 @@ obj-$(CONFIG_RISCV_BOOT_SPINWAIT) += cpu_ops_spinwait.o
 obj-$(CONFIG_MODULES)		+= module.o
 obj-$(CONFIG_MODULE_SECTIONS)	+= module-sections.o
 
+obj-$(CONFIG_CPU_PM)		+= suspend_entry.o suspend.o
+
 obj-$(CONFIG_FUNCTION_TRACER)	+= mcount.o ftrace.o
 obj-$(CONFIG_DYNAMIC_FTRACE)	+= mcount-dyn.o
diff --git a/arch/riscv/kernel/asm-offsets.c b/arch/riscv/kernel/asm-offsets.c
index df0519a64eaf..df9444397908 100644
--- a/arch/riscv/kernel/asm-offsets.c
+++ b/arch/riscv/kernel/asm-offsets.c
@@ -13,6 +13,7 @@
 #include
 #include
 #include
+#include <asm/suspend.h>
 
 void asm_offsets(void);
@@ -113,6 +114,8 @@
 	OFFSET(PT_BADADDR, pt_regs, badaddr);
 	OFFSET(PT_CAUSE, pt_regs, cause);
 
+	OFFSET(SUSPEND_CONTEXT_REGS, suspend_context, regs);
+
 	OFFSET(KVM_ARCH_GUEST_ZERO, kvm_vcpu_arch, guest_context.zero);
 	OFFSET(KVM_ARCH_GUEST_RA, kvm_vcpu_arch, guest_context.ra);
 	OFFSET(KVM_ARCH_GUEST_SP, kvm_vcpu_arch, guest_context.sp);
diff --git a/arch/riscv/kernel/head.S b/arch/riscv/kernel/head.S
index 5f4c6b6c4974..893b8bb69391 100644
--- a/arch/riscv/kernel/head.S
+++ b/arch/riscv/kernel/head.S
@@ -16,27 +16,6 @@
 #include
 #include "efi-header.S"
 
-#ifdef CONFIG_XIP_KERNEL
-.macro XIP_FIXUP_OFFSET reg
-	REG_L t0, _xip_fixup
-	add \reg, \reg, t0
-.endm
-.macro XIP_FIXUP_FLASH_OFFSET reg
-	la t1, __data_loc
-	li t0, XIP_OFFSET_MASK
-	and t1, t1, t0
-	li t1, XIP_OFFSET
-	sub t0, t0, t1
-	sub \reg, \reg, t0
-.endm
-_xip_fixup: .dword CONFIG_PHYS_RAM_BASE - CONFIG_XIP_PHYS_ADDR - XIP_OFFSET
-#else
-.macro XIP_FIXUP_OFFSET reg
-.endm
-.macro XIP_FIXUP_FLASH_OFFSET reg
-.endm
-#endif /* CONFIG_XIP_KERNEL */
-
 __HEAD
 ENTRY(_start)
 	/*
diff --git a/arch/riscv/kernel/suspend.c b/arch/riscv/kernel/suspend.c
new file mode 100644
index 000000000000..9ba24fb8cc93
--- /dev/null
+++ b/arch/riscv/kernel/suspend.c
@@ -0,0 +1,87 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * Copyright (c) 2021 Western Digital Corporation or its affiliates.
+ * Copyright (c) 2022 Ventana Micro Systems Inc.
+ */
+
+#include <linux/ftrace.h>
+#include <asm/csr.h>
+#include <asm/suspend.h>
+
+static void suspend_save_csrs(struct suspend_context *context)
+{
+	context->scratch = csr_read(CSR_SCRATCH);
+	context->tvec = csr_read(CSR_TVEC);
+	context->ie = csr_read(CSR_IE);
+
+	/*
+	 * No need to save/restore IP CSR (i.e. MIP or SIP) because:
+	 *
+	 * 1. For no-MMU (M-mode) kernel, the bits in MIP are set by
+	 *    external devices (such as interrupt controller, timer, etc).
+	 * 2. For MMU (S-mode) kernel, the bits in SIP are set by
+	 *    M-mode firmware and external devices (such as interrupt
+	 *    controller, etc).
+	 */
+
+#ifdef CONFIG_MMU
+	context->satp = csr_read(CSR_SATP);
+#endif
+}
+
+static void suspend_restore_csrs(struct suspend_context *context)
+{
+	csr_write(CSR_SCRATCH, context->scratch);
+	csr_write(CSR_TVEC, context->tvec);
+	csr_write(CSR_IE, context->ie);
+
+#ifdef CONFIG_MMU
+	csr_write(CSR_SATP, context->satp);
+#endif
+}
+
+int cpu_suspend(unsigned long arg,
+		int (*finish)(unsigned long arg,
+			      unsigned long entry,
+			      unsigned long context))
+{
+	int rc = 0;
+	struct suspend_context context = { 0 };
+
+	/* Finisher should be non-NULL */
+	if (!finish)
+		return -EINVAL;
+
+	/* Save additional CSRs */
+	suspend_save_csrs(&context);
+
+	/*
+	 * Function graph tracer state gets inconsistent when the kernel
+	 * calls functions that never return (aka finishers) hence disable
+	 * graph tracing during their execution.
+	 */
+	pause_graph_tracing();
+
+	/* Save context on stack */
+	if (__cpu_suspend_enter(&context)) {
+		/* Call the finisher */
+		rc = finish(arg, __pa_symbol(__cpu_resume_enter),
+			    (ulong)&context);
+
+		/*
+		 * Should never reach here, unless the suspend finisher
+		 * fails. Successful cpu_suspend() should return from
+		 * __cpu_resume_enter().
+		 */
+		if (!rc)
+			rc = -EOPNOTSUPP;
+	}
+
+	/* Enable function graph tracer */
+	unpause_graph_tracing();
+
+	/* Restore additional CSRs */
+	suspend_restore_csrs(&context);
+
+	return rc;
+}
diff --git a/arch/riscv/kernel/suspend_entry.S b/arch/riscv/kernel/suspend_entry.S
new file mode 100644
index 000000000000..4b07b809a2b8
--- /dev/null
+++ b/arch/riscv/kernel/suspend_entry.S
@@ -0,0 +1,124 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/*
+ * Copyright (c) 2021 Western Digital Corporation or its affiliates.
+ * Copyright (c) 2022 Ventana Micro Systems Inc.
+ */
+
+#include <linux/linkage.h>
+#include <asm/asm.h>
+#include <asm/asm-offsets.h>
+#include <asm/csr.h>
+
+	.text
+	.altmacro
+	.option norelax
+
+ENTRY(__cpu_suspend_enter)
+	/* Save registers (except A0 and T0-T6) */
+	REG_S	ra, (SUSPEND_CONTEXT_REGS + PT_RA)(a0)
+	REG_S	sp, (SUSPEND_CONTEXT_REGS + PT_SP)(a0)
+	REG_S	gp, (SUSPEND_CONTEXT_REGS + PT_GP)(a0)
+	REG_S	tp, (SUSPEND_CONTEXT_REGS + PT_TP)(a0)
+	REG_S	s0, (SUSPEND_CONTEXT_REGS + PT_S0)(a0)
+	REG_S	s1, (SUSPEND_CONTEXT_REGS + PT_S1)(a0)
+	REG_S	a1, (SUSPEND_CONTEXT_REGS + PT_A1)(a0)
+	REG_S	a2, (SUSPEND_CONTEXT_REGS + PT_A2)(a0)
+	REG_S	a3, (SUSPEND_CONTEXT_REGS + PT_A3)(a0)
+	REG_S	a4, (SUSPEND_CONTEXT_REGS + PT_A4)(a0)
+	REG_S	a5, (SUSPEND_CONTEXT_REGS + PT_A5)(a0)
+	REG_S	a6, (SUSPEND_CONTEXT_REGS + PT_A6)(a0)
+	REG_S	a7, (SUSPEND_CONTEXT_REGS + PT_A7)(a0)
+	REG_S	s2, (SUSPEND_CONTEXT_REGS + PT_S2)(a0)
+	REG_S	s3, (SUSPEND_CONTEXT_REGS + PT_S3)(a0)
+	REG_S	s4, (SUSPEND_CONTEXT_REGS + PT_S4)(a0)
+	REG_S	s5, (SUSPEND_CONTEXT_REGS + PT_S5)(a0)
+	REG_S	s6, (SUSPEND_CONTEXT_REGS + PT_S6)(a0)
+	REG_S	s7, (SUSPEND_CONTEXT_REGS + PT_S7)(a0)
+	REG_S	s8, (SUSPEND_CONTEXT_REGS + PT_S8)(a0)
+	REG_S	s9, (SUSPEND_CONTEXT_REGS + PT_S9)(a0)
+	REG_S	s10, (SUSPEND_CONTEXT_REGS + PT_S10)(a0)
+	REG_S	s11, (SUSPEND_CONTEXT_REGS + PT_S11)(a0)
+
+	/* Save CSRs */
+	csrr	t0, CSR_EPC
+	REG_S	t0, (SUSPEND_CONTEXT_REGS + PT_EPC)(a0)
+	csrr	t0, CSR_STATUS
+	REG_S	t0, (SUSPEND_CONTEXT_REGS + PT_STATUS)(a0)
+	csrr	t0, CSR_TVAL
+	REG_S	t0, (SUSPEND_CONTEXT_REGS + PT_BADADDR)(a0)
+	csrr	t0, CSR_CAUSE
+	REG_S	t0, (SUSPEND_CONTEXT_REGS + PT_CAUSE)(a0)
+
+	/* Return non-zero value */
+	li	a0, 1
+
+	/* Return to C code */
+	ret
+END(__cpu_suspend_enter)
+
+ENTRY(__cpu_resume_enter)
+	/* Load the global pointer */
+	.option push
+	.option norelax
+	la	gp, __global_pointer$
+	.option pop
+
+#ifdef CONFIG_MMU
+	/* Save A0 and A1 */
+	add	t0, a0, zero
+	add	t1, a1, zero
+
+	/* Enable MMU */
+	la	a0, swapper_pg_dir
+	XIP_FIXUP_OFFSET a0
+	call	relocate_enable_mmu
+
+	/* Restore A0 and A1 */
+	add	a0, t0, zero
+	add	a1, t1, zero
+#endif
+
+	/* Make A0 point to suspend context */
+	add	a0, a1, zero
+
+	/* Restore CSRs */
+	REG_L	t0, (SUSPEND_CONTEXT_REGS + PT_EPC)(a0)
+	csrw	CSR_EPC, t0
+	REG_L	t0, (SUSPEND_CONTEXT_REGS + PT_STATUS)(a0)
+	csrw	CSR_STATUS, t0
+	REG_L	t0, (SUSPEND_CONTEXT_REGS + PT_BADADDR)(a0)
+	csrw	CSR_TVAL, t0
+	REG_L	t0, (SUSPEND_CONTEXT_REGS + PT_CAUSE)(a0)
+	csrw	CSR_CAUSE, t0
+
+	/* Restore registers (except A0 and T0-T6) */
+	REG_L	ra, (SUSPEND_CONTEXT_REGS + PT_RA)(a0)
+	REG_L	sp, (SUSPEND_CONTEXT_REGS + PT_SP)(a0)
+	REG_L	gp, (SUSPEND_CONTEXT_REGS + PT_GP)(a0)
+	REG_L	tp, (SUSPEND_CONTEXT_REGS + PT_TP)(a0)
+	REG_L	s0, (SUSPEND_CONTEXT_REGS + PT_S0)(a0)
+	REG_L	s1, (SUSPEND_CONTEXT_REGS + PT_S1)(a0)
+	REG_L	a1, (SUSPEND_CONTEXT_REGS + PT_A1)(a0)
+	REG_L	a2, (SUSPEND_CONTEXT_REGS + PT_A2)(a0)
+	REG_L	a3, (SUSPEND_CONTEXT_REGS + PT_A3)(a0)
+	REG_L	a4, (SUSPEND_CONTEXT_REGS + PT_A4)(a0)
+	REG_L	a5, (SUSPEND_CONTEXT_REGS + PT_A5)(a0)
+	REG_L	a6, (SUSPEND_CONTEXT_REGS + PT_A6)(a0)
+	REG_L	a7, (SUSPEND_CONTEXT_REGS + PT_A7)(a0)
+	REG_L	s2, (SUSPEND_CONTEXT_REGS + PT_S2)(a0)
+	REG_L	s3, (SUSPEND_CONTEXT_REGS + PT_S3)(a0)
+	REG_L	s4, (SUSPEND_CONTEXT_REGS + PT_S4)(a0)
+	REG_L	s5, (SUSPEND_CONTEXT_REGS + PT_S5)(a0)
+	REG_L	s6, (SUSPEND_CONTEXT_REGS + PT_S6)(a0)
+	REG_L	s7, (SUSPEND_CONTEXT_REGS + PT_S7)(a0)
+	REG_L	s8, (SUSPEND_CONTEXT_REGS + PT_S8)(a0)
+	REG_L	s9, (SUSPEND_CONTEXT_REGS + PT_S9)(a0)
+	REG_L	s10, (SUSPEND_CONTEXT_REGS + PT_S10)(a0)
+	REG_L	s11, (SUSPEND_CONTEXT_REGS + PT_S11)(a0)
+
+	/* Return zero value */
+	add	a0, zero, zero
+
+	/* Return to C code */
+	ret
+END(__cpu_resume_enter)
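
For context, a minimal sketch (not part of this patch) of how a cpuidle
driver might use cpu_suspend() with a suspend finisher. The
SBI_EXT_HSM_HART_SUSPEND define is assumed to come from the SBI HSM
patches earlier in this series, and the example_* names are purely
illustrative; the actual SBI cpuidle driver is added by a later patch.

#include <asm/sbi.h>
#include <asm/suspend.h>

/* Illustrative finisher: ask M-mode firmware to suspend the hart */
static int example_suspend_finisher(unsigned long suspend_type,
				    unsigned long resume_addr,
				    unsigned long opaque)
{
	struct sbiret ret;

	/*
	 * For a non-retentive suspend type, the hart resumes execution
	 * at resume_addr (here: __cpu_resume_enter) with the opaque
	 * value (the saved suspend context) passed back in A1.
	 */
	ret = sbi_ecall(SBI_EXT_HSM, SBI_EXT_HSM_HART_SUSPEND,
			suspend_type, resume_addr, opaque, 0, 0, 0);

	return ret.error ? sbi_err_map_linux_errno(ret.error) : 0;
}

static int example_enter_non_retentive_state(unsigned long suspend_type)
{
	/*
	 * cpu_suspend() saves the hart context, calls the finisher with
	 * __cpu_resume_enter as the resume entry point, and restores
	 * the saved CSRs after the hart wakes up.
	 */
	return cpu_suspend(suspend_type, example_suspend_finisher);
}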