From patchwork Thu Apr 1 13:34:24 2021
X-Patchwork-Submitter: Anup Patel
X-Patchwork-Id: 12178089
From: Anup Patel
To: Palmer Dabbelt, Paul Walmsley, Albert Ou, Paolo Bonzini
Cc: Alexander Graf, Atish Patra, Alistair Francis, Damien Le Moal, Anup Patel,
    kvm@vger.kernel.org, kvm-riscv@lists.infradead.org,
    linux-riscv@lists.infradead.org, linux-kernel@vger.kernel.org
Subject: [PATCH v17 06/17] RISC-V: KVM: Implement VCPU world-switch
Date: Thu, 1 Apr 2021 19:04:24 +0530
Message-Id: <20210401133435.383959-7-anup.patel@wdc.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20210401133435.383959-1-anup.patel@wdc.com>
References: <20210401133435.383959-1-anup.patel@wdc.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 7bit
This patch implements the VCPU world-switch for KVM RISC-V. The KVM
RISC-V world-switch (i.e. __kvm_riscv_switch_to()) mostly switches the
general-purpose registers and the SSTATUS, STVEC, SSCRATCH, and HSTATUS
CSRs. Other CSRs are switched through the vcpu_load() and vcpu_put()
interface, i.e. in the kvm_arch_vcpu_load() and kvm_arch_vcpu_put()
functions respectively.

Signed-off-by: Anup Patel
Acked-by: Paolo Bonzini
Reviewed-by: Paolo Bonzini
Reviewed-by: Alexander Graf
---
 arch/riscv/include/asm/kvm_host.h |  10 +-
 arch/riscv/kernel/asm-offsets.c   |  78 ++++++++++++
 arch/riscv/kvm/Makefile           |   2 +-
 arch/riscv/kvm/vcpu.c             |  30 ++++-
 arch/riscv/kvm/vcpu_switch.S      | 203 ++++++++++++++++++++++++++++++
 5 files changed, 319 insertions(+), 4 deletions(-)
 create mode 100644 arch/riscv/kvm/vcpu_switch.S

diff --git a/arch/riscv/include/asm/kvm_host.h b/arch/riscv/include/asm/kvm_host.h
index 1bf660b1a9d8..ca9b8dfcd406 100644
--- a/arch/riscv/include/asm/kvm_host.h
+++ b/arch/riscv/include/asm/kvm_host.h
@@ -120,6 +120,14 @@ struct kvm_vcpu_arch {
 	/* ISA feature bits (similar to MISA) */
 	unsigned long isa;
 
+	/* SSCRATCH, STVEC, and SCOUNTEREN of Host */
+	unsigned long host_sscratch;
+	unsigned long host_stvec;
+	unsigned long host_scounteren;
+
+	/* CPU context of Host */
+	struct kvm_cpu_context host_context;
+
 	/* CPU context of Guest VCPU */
 	struct kvm_cpu_context guest_context;
 
@@ -169,7 +177,7 @@ int kvm_riscv_vcpu_mmio_return(struct kvm_vcpu *vcpu, struct kvm_run *run);
 int kvm_riscv_vcpu_exit(struct kvm_vcpu *vcpu, struct kvm_run *run,
 			struct kvm_cpu_trap *trap);
 
-static inline void __kvm_riscv_switch_to(struct kvm_vcpu_arch *vcpu_arch) {}
+void __kvm_riscv_switch_to(struct kvm_vcpu_arch *vcpu_arch);
 
 int kvm_riscv_vcpu_set_interrupt(struct kvm_vcpu *vcpu, unsigned int irq);
 int kvm_riscv_vcpu_unset_interrupt(struct kvm_vcpu *vcpu, unsigned int irq);
diff --git a/arch/riscv/kernel/asm-offsets.c b/arch/riscv/kernel/asm-offsets.c
index 9ef33346853c..21f867d35b65 100644
--- a/arch/riscv/kernel/asm-offsets.c
+++ b/arch/riscv/kernel/asm-offsets.c
@@ -7,7 +7,9 @@
 #define GENERATING_ASM_OFFSETS
 
 #include
+#include
 #include
+#include
 #include
 #include
 
@@ -111,6 +113,82 @@ void asm_offsets(void)
 	OFFSET(PT_BADADDR, pt_regs, badaddr);
 	OFFSET(PT_CAUSE, pt_regs, cause);
 
+	OFFSET(KVM_ARCH_GUEST_ZERO, kvm_vcpu_arch, guest_context.zero);
+	OFFSET(KVM_ARCH_GUEST_RA, kvm_vcpu_arch, guest_context.ra);
+	OFFSET(KVM_ARCH_GUEST_SP, kvm_vcpu_arch, guest_context.sp);
+	OFFSET(KVM_ARCH_GUEST_GP, kvm_vcpu_arch, guest_context.gp);
+	OFFSET(KVM_ARCH_GUEST_TP, kvm_vcpu_arch, guest_context.tp);
+	OFFSET(KVM_ARCH_GUEST_T0, kvm_vcpu_arch, guest_context.t0);
+	OFFSET(KVM_ARCH_GUEST_T1, kvm_vcpu_arch, guest_context.t1);
+	OFFSET(KVM_ARCH_GUEST_T2, kvm_vcpu_arch, guest_context.t2);
+	OFFSET(KVM_ARCH_GUEST_S0, kvm_vcpu_arch, guest_context.s0);
+	OFFSET(KVM_ARCH_GUEST_S1, kvm_vcpu_arch, guest_context.s1);
+	OFFSET(KVM_ARCH_GUEST_A0, kvm_vcpu_arch, guest_context.a0);
+	OFFSET(KVM_ARCH_GUEST_A1, kvm_vcpu_arch, guest_context.a1);
+	OFFSET(KVM_ARCH_GUEST_A2, kvm_vcpu_arch, guest_context.a2);
+	OFFSET(KVM_ARCH_GUEST_A3, kvm_vcpu_arch, guest_context.a3);
+	OFFSET(KVM_ARCH_GUEST_A4, kvm_vcpu_arch, guest_context.a4);
+	OFFSET(KVM_ARCH_GUEST_A5, kvm_vcpu_arch, guest_context.a5);
+	OFFSET(KVM_ARCH_GUEST_A6, kvm_vcpu_arch, guest_context.a6);
+	OFFSET(KVM_ARCH_GUEST_A7, kvm_vcpu_arch, guest_context.a7);
+	OFFSET(KVM_ARCH_GUEST_S2, kvm_vcpu_arch, guest_context.s2);
+	OFFSET(KVM_ARCH_GUEST_S3, kvm_vcpu_arch, guest_context.s3);
+	OFFSET(KVM_ARCH_GUEST_S4, kvm_vcpu_arch, guest_context.s4);
+	OFFSET(KVM_ARCH_GUEST_S5, kvm_vcpu_arch, guest_context.s5);
+	OFFSET(KVM_ARCH_GUEST_S6, kvm_vcpu_arch, guest_context.s6);
+	OFFSET(KVM_ARCH_GUEST_S7, kvm_vcpu_arch, guest_context.s7);
+	OFFSET(KVM_ARCH_GUEST_S8, kvm_vcpu_arch, guest_context.s8);
+	OFFSET(KVM_ARCH_GUEST_S9, kvm_vcpu_arch, guest_context.s9);
+	OFFSET(KVM_ARCH_GUEST_S10, kvm_vcpu_arch, guest_context.s10);
+	OFFSET(KVM_ARCH_GUEST_S11, kvm_vcpu_arch, guest_context.s11);
+	OFFSET(KVM_ARCH_GUEST_T3, kvm_vcpu_arch, guest_context.t3);
+	OFFSET(KVM_ARCH_GUEST_T4, kvm_vcpu_arch, guest_context.t4);
+	OFFSET(KVM_ARCH_GUEST_T5, kvm_vcpu_arch, guest_context.t5);
+	OFFSET(KVM_ARCH_GUEST_T6, kvm_vcpu_arch, guest_context.t6);
+	OFFSET(KVM_ARCH_GUEST_SEPC, kvm_vcpu_arch, guest_context.sepc);
+	OFFSET(KVM_ARCH_GUEST_SSTATUS, kvm_vcpu_arch, guest_context.sstatus);
+	OFFSET(KVM_ARCH_GUEST_HSTATUS, kvm_vcpu_arch, guest_context.hstatus);
+	OFFSET(KVM_ARCH_GUEST_SCOUNTEREN, kvm_vcpu_arch, guest_csr.scounteren);
+
+	OFFSET(KVM_ARCH_HOST_ZERO, kvm_vcpu_arch, host_context.zero);
+	OFFSET(KVM_ARCH_HOST_RA, kvm_vcpu_arch, host_context.ra);
+	OFFSET(KVM_ARCH_HOST_SP, kvm_vcpu_arch, host_context.sp);
+	OFFSET(KVM_ARCH_HOST_GP, kvm_vcpu_arch, host_context.gp);
+	OFFSET(KVM_ARCH_HOST_TP, kvm_vcpu_arch, host_context.tp);
+	OFFSET(KVM_ARCH_HOST_T0, kvm_vcpu_arch, host_context.t0);
+	OFFSET(KVM_ARCH_HOST_T1, kvm_vcpu_arch, host_context.t1);
+	OFFSET(KVM_ARCH_HOST_T2, kvm_vcpu_arch, host_context.t2);
+	OFFSET(KVM_ARCH_HOST_S0, kvm_vcpu_arch, host_context.s0);
+	OFFSET(KVM_ARCH_HOST_S1, kvm_vcpu_arch, host_context.s1);
+	OFFSET(KVM_ARCH_HOST_A0, kvm_vcpu_arch, host_context.a0);
+	OFFSET(KVM_ARCH_HOST_A1, kvm_vcpu_arch, host_context.a1);
+	OFFSET(KVM_ARCH_HOST_A2, kvm_vcpu_arch, host_context.a2);
+	OFFSET(KVM_ARCH_HOST_A3, kvm_vcpu_arch, host_context.a3);
+	OFFSET(KVM_ARCH_HOST_A4, kvm_vcpu_arch, host_context.a4);
+	OFFSET(KVM_ARCH_HOST_A5, kvm_vcpu_arch, host_context.a5);
+	OFFSET(KVM_ARCH_HOST_A6, kvm_vcpu_arch, host_context.a6);
+	OFFSET(KVM_ARCH_HOST_A7, kvm_vcpu_arch, host_context.a7);
+	OFFSET(KVM_ARCH_HOST_S2, kvm_vcpu_arch, host_context.s2);
+	OFFSET(KVM_ARCH_HOST_S3, kvm_vcpu_arch, host_context.s3);
+	OFFSET(KVM_ARCH_HOST_S4, kvm_vcpu_arch, host_context.s4);
+	OFFSET(KVM_ARCH_HOST_S5, kvm_vcpu_arch, host_context.s5);
+	OFFSET(KVM_ARCH_HOST_S6, kvm_vcpu_arch, host_context.s6);
+	OFFSET(KVM_ARCH_HOST_S7, kvm_vcpu_arch, host_context.s7);
+	OFFSET(KVM_ARCH_HOST_S8, kvm_vcpu_arch, host_context.s8);
+	OFFSET(KVM_ARCH_HOST_S9, kvm_vcpu_arch, host_context.s9);
+	OFFSET(KVM_ARCH_HOST_S10, kvm_vcpu_arch, host_context.s10);
+	OFFSET(KVM_ARCH_HOST_S11, kvm_vcpu_arch, host_context.s11);
+	OFFSET(KVM_ARCH_HOST_T3, kvm_vcpu_arch, host_context.t3);
+	OFFSET(KVM_ARCH_HOST_T4, kvm_vcpu_arch, host_context.t4);
+	OFFSET(KVM_ARCH_HOST_T5, kvm_vcpu_arch, host_context.t5);
+	OFFSET(KVM_ARCH_HOST_T6, kvm_vcpu_arch, host_context.t6);
+	OFFSET(KVM_ARCH_HOST_SEPC, kvm_vcpu_arch, host_context.sepc);
+	OFFSET(KVM_ARCH_HOST_SSTATUS, kvm_vcpu_arch, host_context.sstatus);
+	OFFSET(KVM_ARCH_HOST_HSTATUS, kvm_vcpu_arch, host_context.hstatus);
+	OFFSET(KVM_ARCH_HOST_SSCRATCH, kvm_vcpu_arch, host_sscratch);
+	OFFSET(KVM_ARCH_HOST_STVEC, kvm_vcpu_arch, host_stvec);
+	OFFSET(KVM_ARCH_HOST_SCOUNTEREN, kvm_vcpu_arch, host_scounteren);
+
 	/*
 	 * THREAD_{F,X}* might be larger than a S-type offset can handle, but
 	 * these are used in performance-sensitive assembly so we can't resort
diff --git a/arch/riscv/kvm/Makefile b/arch/riscv/kvm/Makefile
index 37b5a59d4f4f..845579273727 100644
--- a/arch/riscv/kvm/Makefile
+++ b/arch/riscv/kvm/Makefile
@@ -8,6 +8,6 @@ ccflags-y := -Ivirt/kvm -Iarch/riscv/kvm
 
 kvm-objs := $(common-objs-y)
 
-kvm-objs += main.o vm.o mmu.o vcpu.o vcpu_exit.o
+kvm-objs += main.o vm.o mmu.o vcpu.o vcpu_exit.o vcpu_switch.o
 
 obj-$(CONFIG_KVM) += kvm.o
diff --git a/arch/riscv/kvm/vcpu.c b/arch/riscv/kvm/vcpu.c
index 551359c9136c..bf42afad9d9d 100644
--- a/arch/riscv/kvm/vcpu.c
+++ b/arch/riscv/kvm/vcpu.c
@@ -565,14 +565,40 @@ int kvm_arch_vcpu_ioctl_set_guest_debug(struct kvm_vcpu *vcpu,
 
 void kvm_arch_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
 {
-	/* TODO: */
+	struct kvm_vcpu_csr *csr = &vcpu->arch.guest_csr;
+
+	csr_write(CSR_VSSTATUS, csr->vsstatus);
+	csr_write(CSR_HIE, csr->hie);
+	csr_write(CSR_VSTVEC, csr->vstvec);
+	csr_write(CSR_VSSCRATCH, csr->vsscratch);
+	csr_write(CSR_VSEPC, csr->vsepc);
+	csr_write(CSR_VSCAUSE, csr->vscause);
+	csr_write(CSR_VSTVAL, csr->vstval);
+	csr_write(CSR_HVIP, csr->hvip);
+	csr_write(CSR_VSATP, csr->vsatp);
 
 	kvm_riscv_stage2_update_hgatp(vcpu);
+
+	vcpu->cpu = cpu;
 }
 
 void kvm_arch_vcpu_put(struct kvm_vcpu *vcpu)
 {
-	/* TODO: */
+	struct kvm_vcpu_csr *csr = &vcpu->arch.guest_csr;
+
+	vcpu->cpu = -1;
+
+	csr_write(CSR_HGATP, 0);
+
+	csr->vsstatus = csr_read(CSR_VSSTATUS);
+	csr->hie = csr_read(CSR_HIE);
+	csr->vstvec = csr_read(CSR_VSTVEC);
+	csr->vsscratch = csr_read(CSR_VSSCRATCH);
+	csr->vsepc = csr_read(CSR_VSEPC);
+	csr->vscause = csr_read(CSR_VSCAUSE);
+	csr->vstval = csr_read(CSR_VSTVAL);
+	csr->hvip = csr_read(CSR_HVIP);
+	csr->vsatp = csr_read(CSR_VSATP);
 }
 
 static void kvm_riscv_check_vcpu_requests(struct kvm_vcpu *vcpu)
diff --git a/arch/riscv/kvm/vcpu_switch.S b/arch/riscv/kvm/vcpu_switch.S
new file mode 100644
index 000000000000..5174b025ff4e
--- /dev/null
+++ b/arch/riscv/kvm/vcpu_switch.S
@@ -0,0 +1,203 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (C) 2019 Western Digital Corporation or its affiliates.
+ *
+ * Authors:
+ *	Anup Patel
+ */
+
+#include
+#include
+#include
+#include
+
+	.text
+	.altmacro
+	.option norelax
+
+ENTRY(__kvm_riscv_switch_to)
+	/* Save Host GPRs (except A0 and T0-T6) */
+	REG_S	ra, (KVM_ARCH_HOST_RA)(a0)
+	REG_S	sp, (KVM_ARCH_HOST_SP)(a0)
+	REG_S	gp, (KVM_ARCH_HOST_GP)(a0)
+	REG_S	tp, (KVM_ARCH_HOST_TP)(a0)
+	REG_S	s0, (KVM_ARCH_HOST_S0)(a0)
+	REG_S	s1, (KVM_ARCH_HOST_S1)(a0)
+	REG_S	a1, (KVM_ARCH_HOST_A1)(a0)
+	REG_S	a2, (KVM_ARCH_HOST_A2)(a0)
+	REG_S	a3, (KVM_ARCH_HOST_A3)(a0)
+	REG_S	a4, (KVM_ARCH_HOST_A4)(a0)
+	REG_S	a5, (KVM_ARCH_HOST_A5)(a0)
+	REG_S	a6, (KVM_ARCH_HOST_A6)(a0)
+	REG_S	a7, (KVM_ARCH_HOST_A7)(a0)
+	REG_S	s2, (KVM_ARCH_HOST_S2)(a0)
+	REG_S	s3, (KVM_ARCH_HOST_S3)(a0)
+	REG_S	s4, (KVM_ARCH_HOST_S4)(a0)
+	REG_S	s5, (KVM_ARCH_HOST_S5)(a0)
+	REG_S	s6, (KVM_ARCH_HOST_S6)(a0)
+	REG_S	s7, (KVM_ARCH_HOST_S7)(a0)
+	REG_S	s8, (KVM_ARCH_HOST_S8)(a0)
+	REG_S	s9, (KVM_ARCH_HOST_S9)(a0)
+	REG_S	s10, (KVM_ARCH_HOST_S10)(a0)
+	REG_S	s11, (KVM_ARCH_HOST_S11)(a0)
+
+	/* Save Host and Restore Guest SSTATUS */
+	REG_L	t0, (KVM_ARCH_GUEST_SSTATUS)(a0)
+	csrrw	t0, CSR_SSTATUS, t0
+	REG_S	t0, (KVM_ARCH_HOST_SSTATUS)(a0)
+
+	/* Save Host and Restore Guest HSTATUS */
+	REG_L	t1, (KVM_ARCH_GUEST_HSTATUS)(a0)
+	csrrw	t1, CSR_HSTATUS, t1
+	REG_S	t1, (KVM_ARCH_HOST_HSTATUS)(a0)
+
+	/* Save Host and Restore Guest SCOUNTEREN */
+	REG_L	t2, (KVM_ARCH_GUEST_SCOUNTEREN)(a0)
+	csrrw	t2, CSR_SCOUNTEREN, t2
+	REG_S	t2, (KVM_ARCH_HOST_SCOUNTEREN)(a0)
+
+	/* Save Host SSCRATCH and change it to struct kvm_vcpu_arch pointer */
+	csrrw	t3, CSR_SSCRATCH, a0
+	REG_S	t3, (KVM_ARCH_HOST_SSCRATCH)(a0)
+
+	/* Save Host STVEC and change it to return path */
+	la	t4, __kvm_switch_return
+	csrrw	t4, CSR_STVEC, t4
+	REG_S	t4, (KVM_ARCH_HOST_STVEC)(a0)
+
+	/* Restore Guest SEPC */
+	REG_L	t0, (KVM_ARCH_GUEST_SEPC)(a0)
+	csrw	CSR_SEPC, t0
+
+	/* Restore Guest GPRs (except A0) */
+	REG_L	ra, (KVM_ARCH_GUEST_RA)(a0)
+	REG_L	sp, (KVM_ARCH_GUEST_SP)(a0)
+	REG_L	gp, (KVM_ARCH_GUEST_GP)(a0)
+	REG_L	tp, (KVM_ARCH_GUEST_TP)(a0)
+	REG_L	t0, (KVM_ARCH_GUEST_T0)(a0)
+	REG_L	t1, (KVM_ARCH_GUEST_T1)(a0)
+	REG_L	t2, (KVM_ARCH_GUEST_T2)(a0)
+	REG_L	s0, (KVM_ARCH_GUEST_S0)(a0)
+	REG_L	s1, (KVM_ARCH_GUEST_S1)(a0)
+	REG_L	a1, (KVM_ARCH_GUEST_A1)(a0)
+	REG_L	a2, (KVM_ARCH_GUEST_A2)(a0)
+	REG_L	a3, (KVM_ARCH_GUEST_A3)(a0)
+	REG_L	a4, (KVM_ARCH_GUEST_A4)(a0)
+	REG_L	a5, (KVM_ARCH_GUEST_A5)(a0)
+	REG_L	a6, (KVM_ARCH_GUEST_A6)(a0)
+	REG_L	a7, (KVM_ARCH_GUEST_A7)(a0)
+	REG_L	s2, (KVM_ARCH_GUEST_S2)(a0)
+	REG_L	s3, (KVM_ARCH_GUEST_S3)(a0)
+	REG_L	s4, (KVM_ARCH_GUEST_S4)(a0)
+	REG_L	s5, (KVM_ARCH_GUEST_S5)(a0)
+	REG_L	s6, (KVM_ARCH_GUEST_S6)(a0)
+	REG_L	s7, (KVM_ARCH_GUEST_S7)(a0)
+	REG_L	s8, (KVM_ARCH_GUEST_S8)(a0)
+	REG_L	s9, (KVM_ARCH_GUEST_S9)(a0)
+	REG_L	s10, (KVM_ARCH_GUEST_S10)(a0)
+	REG_L	s11, (KVM_ARCH_GUEST_S11)(a0)
+	REG_L	t3, (KVM_ARCH_GUEST_T3)(a0)
+	REG_L	t4, (KVM_ARCH_GUEST_T4)(a0)
+	REG_L	t5, (KVM_ARCH_GUEST_T5)(a0)
+	REG_L	t6, (KVM_ARCH_GUEST_T6)(a0)
+
+	/* Restore Guest A0 */
+	REG_L	a0, (KVM_ARCH_GUEST_A0)(a0)
+
+	/* Resume Guest */
+	sret
+
+	/* Back to Host */
+	.align 2
+__kvm_switch_return:
+	/* Swap Guest A0 with SSCRATCH */
+	csrrw	a0, CSR_SSCRATCH, a0
+
+	/* Save Guest GPRs (except A0) */
+	REG_S	ra, (KVM_ARCH_GUEST_RA)(a0)
+	REG_S	sp, (KVM_ARCH_GUEST_SP)(a0)
+	REG_S	gp, (KVM_ARCH_GUEST_GP)(a0)
+	REG_S	tp, (KVM_ARCH_GUEST_TP)(a0)
+	REG_S	t0, (KVM_ARCH_GUEST_T0)(a0)
+	REG_S	t1, (KVM_ARCH_GUEST_T1)(a0)
+	REG_S	t2, (KVM_ARCH_GUEST_T2)(a0)
+	REG_S	s0, (KVM_ARCH_GUEST_S0)(a0)
+	REG_S	s1, (KVM_ARCH_GUEST_S1)(a0)
+	REG_S	a1, (KVM_ARCH_GUEST_A1)(a0)
+	REG_S	a2, (KVM_ARCH_GUEST_A2)(a0)
+	REG_S	a3, (KVM_ARCH_GUEST_A3)(a0)
+	REG_S	a4, (KVM_ARCH_GUEST_A4)(a0)
+	REG_S	a5, (KVM_ARCH_GUEST_A5)(a0)
+	REG_S	a6, (KVM_ARCH_GUEST_A6)(a0)
+	REG_S	a7, (KVM_ARCH_GUEST_A7)(a0)
+	REG_S	s2, (KVM_ARCH_GUEST_S2)(a0)
+	REG_S	s3, (KVM_ARCH_GUEST_S3)(a0)
+	REG_S	s4, (KVM_ARCH_GUEST_S4)(a0)
+	REG_S	s5, (KVM_ARCH_GUEST_S5)(a0)
+	REG_S	s6, (KVM_ARCH_GUEST_S6)(a0)
+	REG_S	s7, (KVM_ARCH_GUEST_S7)(a0)
+	REG_S	s8, (KVM_ARCH_GUEST_S8)(a0)
+	REG_S	s9, (KVM_ARCH_GUEST_S9)(a0)
+	REG_S	s10, (KVM_ARCH_GUEST_S10)(a0)
+	REG_S	s11, (KVM_ARCH_GUEST_S11)(a0)
+	REG_S	t3, (KVM_ARCH_GUEST_T3)(a0)
+	REG_S	t4, (KVM_ARCH_GUEST_T4)(a0)
+	REG_S	t5, (KVM_ARCH_GUEST_T5)(a0)
+	REG_S	t6, (KVM_ARCH_GUEST_T6)(a0)
+
+	/* Save Guest SEPC */
+	csrr	t0, CSR_SEPC
+	REG_S	t0, (KVM_ARCH_GUEST_SEPC)(a0)
+
+	/* Restore Host STVEC */
+	REG_L	t1, (KVM_ARCH_HOST_STVEC)(a0)
+	csrw	CSR_STVEC, t1
+
+	/* Save Guest A0 and Restore Host SSCRATCH */
+	REG_L	t2, (KVM_ARCH_HOST_SSCRATCH)(a0)
+	csrrw	t2, CSR_SSCRATCH, t2
+	REG_S	t2, (KVM_ARCH_GUEST_A0)(a0)
+
+	/* Save Guest and Restore Host SCOUNTEREN */
+	REG_L	t3, (KVM_ARCH_HOST_SCOUNTEREN)(a0)
+	csrrw	t3, CSR_SCOUNTEREN, t3
+	REG_S	t3, (KVM_ARCH_GUEST_SCOUNTEREN)(a0)
+
+	/* Save Guest and Restore Host HSTATUS */
+	REG_L	t4, (KVM_ARCH_HOST_HSTATUS)(a0)
+	csrrw	t4, CSR_HSTATUS, t4
+	REG_S	t4, (KVM_ARCH_GUEST_HSTATUS)(a0)
+
+	/* Save Guest and Restore Host SSTATUS */
+	REG_L	t5, (KVM_ARCH_HOST_SSTATUS)(a0)
+	csrrw	t5, CSR_SSTATUS, t5
+	REG_S	t5, (KVM_ARCH_GUEST_SSTATUS)(a0)
+
+	/* Restore Host GPRs (except A0 and T0-T6) */
+	REG_L	ra, (KVM_ARCH_HOST_RA)(a0)
+	REG_L	sp, (KVM_ARCH_HOST_SP)(a0)
+	REG_L	gp, (KVM_ARCH_HOST_GP)(a0)
+	REG_L	tp, (KVM_ARCH_HOST_TP)(a0)
+	REG_L	s0, (KVM_ARCH_HOST_S0)(a0)
+	REG_L	s1, (KVM_ARCH_HOST_S1)(a0)
+	REG_L	a1, (KVM_ARCH_HOST_A1)(a0)
+	REG_L	a2, (KVM_ARCH_HOST_A2)(a0)
+	REG_L	a3, (KVM_ARCH_HOST_A3)(a0)
+	REG_L	a4, (KVM_ARCH_HOST_A4)(a0)
+	REG_L	a5, (KVM_ARCH_HOST_A5)(a0)
+	REG_L	a6, (KVM_ARCH_HOST_A6)(a0)
+	REG_L	a7, (KVM_ARCH_HOST_A7)(a0)
+	REG_L	s2, (KVM_ARCH_HOST_S2)(a0)
+	REG_L	s3, (KVM_ARCH_HOST_S3)(a0)
+	REG_L	s4, (KVM_ARCH_HOST_S4)(a0)
+	REG_L	s5, (KVM_ARCH_HOST_S5)(a0)
+	REG_L	s6, (KVM_ARCH_HOST_S6)(a0)
+	REG_L	s7, (KVM_ARCH_HOST_S7)(a0)
+	REG_L	s8, (KVM_ARCH_HOST_S8)(a0)
+	REG_L	s9, (KVM_ARCH_HOST_S9)(a0)
+	REG_L	s10, (KVM_ARCH_HOST_S10)(a0)
+	REG_L	s11, (KVM_ARCH_HOST_S11)(a0)
+
+	/* Return to C code */
+	ret
ENDPROC(__kvm_riscv_switch_to)
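
A note for readers following the series: below is a minimal, illustrative
sketch of how this world-switch routine is expected to be driven from C.
Only __kvm_riscv_switch_to() and struct kvm_vcpu_arch come from this patch;
the kvm_riscv_vcpu_enter_exit() wrapper and its placement in the VCPU run
loop are assumptions made for illustration (the actual run-loop wiring is
added by a later patch in this series).

	/* Illustrative sketch only -- not part of this patch. */
	#include <linux/kvm_host.h>

	static void kvm_riscv_vcpu_enter_exit(struct kvm_vcpu *vcpu)
	{
		/*
		 * __kvm_riscv_switch_to() saves the host GPRs and the
		 * SSTATUS/HSTATUS/SCOUNTEREN/SSCRATCH/STVEC CSRs, loads the
		 * guest context, and returns here after the next guest trap
		 * lands on __kvm_switch_return with the host context restored.
		 */
		__kvm_riscv_switch_to(&vcpu->arch);
	}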