From patchwork Mon Aug 5 13:43:18 2019
X-Patchwork-Submitter: Anup Patel
X-Patchwork-Id: 11076945
From: Anup Patel
To: Palmer Dabbelt, Paul Walmsley, Paolo Bonzini, Radim K
Subject: [PATCH v3 07/19] RISC-V: KVM: Implement KVM_GET_ONE_REG/KVM_SET_ONE_REG ioctls
Date: Mon, 5 Aug 2019 13:43:18 +0000
Message-ID: <20190805134201.2814-8-anup.patel@wdc.com>
References: <20190805134201.2814-1-anup.patel@wdc.com>
In-Reply-To: <20190805134201.2814-1-anup.patel@wdc.com>
X-Mailer: git-send-email 2.17.1
Cc: Damien Le Moal, Anup Patel, kvm@vger.kernel.org, Daniel Lezcano,
 linux-kernel@vger.kernel.org, Christoph Hellwig, Atish Patra,
 Alistair Francis, Thomas Gleixner, linux-riscv@lists.infradead.org

For KVM RISC-V, we use the KVM_GET_ONE_REG/KVM_SET_ONE_REG ioctls to
access VCPU config and registers from user-space.

We have three types of VCPU registers:
1. CONFIG - these are VCPU config and capabilities
2. CORE   - these are VCPU general-purpose registers
3. CSR    - these are VCPU control and status registers

The CONFIG registers available to user-space are ISA and TIMEBASE. Of
these, TIMEBASE is a read-only register which informs user-space about
the VCPU timer base frequency. The ISA register is a read/write
register, but user-space can only write the desired VCPU ISA
capabilities before running the VCPU.
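For example, on RV64 user-space could read the ISA CONFIG register with
a KVM_GET_ONE_REG call roughly as in the sketch below. This is
illustrative only and not part of this patch; the vcpu_fd (from
KVM_CREATE_VCPU) and the KVM_REG_RISCV and KVM_REG_SIZE_U64 identifiers
from <linux/kvm.h> are assumptions here.

  #include <sys/ioctl.h>
  #include <linux/kvm.h>

  /* Illustrative sketch: read the ISA CONFIG register of a VCPU fd. */
  static int get_vcpu_isa(int vcpu_fd, unsigned long *isa)
  {
          struct kvm_one_reg reg = {
                  /* arch | size | register type | register index */
                  .id   = KVM_REG_RISCV | KVM_REG_SIZE_U64 |
                          KVM_REG_RISCV_CONFIG | KVM_REG_RISCV_CONFIG_ISA,
                  .addr = (unsigned long)isa,
          };

          return ioctl(vcpu_fd, KVM_GET_ONE_REG, &reg);
  }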
The CORE registers available to user-space are PC, RA, SP, GP, TP,
A0-A7, T0-T6, S0-S11 and MODE. Most of these are RISC-V general
registers, except PC and MODE. The PC register represents the program
counter whereas the MODE register represents the VCPU privilege mode
(i.e. S/U-mode).

The CSRs available to user-space are SSTATUS, SIE, STVEC, SSCRATCH,
SEPC, SCAUSE, STVAL, SIP, and SATP. All of these are read/write
registers.

In the future, more VCPU register types will be added (such as FP) for
the KVM_GET_ONE_REG/KVM_SET_ONE_REG ioctls.

(An illustrative user-space sketch of writing a CORE register follows
the patch below.)

Signed-off-by: Anup Patel
---
 arch/riscv/include/uapi/asm/kvm.h |  40 ++++-
 arch/riscv/kvm/vcpu.c             | 235 +++++++++++++++++++++++++++++-
 2 files changed, 272 insertions(+), 3 deletions(-)

diff --git a/arch/riscv/include/uapi/asm/kvm.h b/arch/riscv/include/uapi/asm/kvm.h
index 6dbc056d58ba..024f220eb17e 100644
--- a/arch/riscv/include/uapi/asm/kvm.h
+++ b/arch/riscv/include/uapi/asm/kvm.h
@@ -23,8 +23,15 @@
 /* for KVM_GET_REGS and KVM_SET_REGS */
 struct kvm_regs {
+	/* out (KVM_GET_REGS) / in (KVM_SET_REGS) */
+	struct user_regs_struct regs;
+	unsigned long mode;
 };
 
+/* Possible privilege modes for kvm_regs */
+#define KVM_RISCV_MODE_S	1
+#define KVM_RISCV_MODE_U	0
+
 /* for KVM_GET_FPU and KVM_SET_FPU */
 struct kvm_fpu {
 };
@@ -41,10 +48,41 @@ struct kvm_guest_debug_arch {
 struct kvm_sync_regs {
 };
 
-/* dummy definition */
+/* for KVM_GET_SREGS and KVM_SET_SREGS */
 struct kvm_sregs {
+	unsigned long sstatus;
+	unsigned long sie;
+	unsigned long stvec;
+	unsigned long sscratch;
+	unsigned long sepc;
+	unsigned long scause;
+	unsigned long stval;
+	unsigned long sip;
+	unsigned long satp;
 };
 
+#define KVM_REG_SIZE(id)		\
+	(1U << (((id) & KVM_REG_SIZE_MASK) >> KVM_REG_SIZE_SHIFT))
+
+/* If you need to interpret the index values, here is the key: */
+#define KVM_REG_RISCV_TYPE_MASK		0x00000000FF000000
+#define KVM_REG_RISCV_TYPE_SHIFT	24
+
+/* Config registers are mapped as type 1 */
+#define KVM_REG_RISCV_CONFIG		(0x01 << KVM_REG_RISCV_TYPE_SHIFT)
+#define KVM_REG_RISCV_CONFIG_ISA	0x0
+#define KVM_REG_RISCV_CONFIG_TIMEBASE	0x1
+
+/* Core registers are mapped as type 2 */
+#define KVM_REG_RISCV_CORE		(0x02 << KVM_REG_RISCV_TYPE_SHIFT)
+#define KVM_REG_RISCV_CORE_REG(name)	\
+		(offsetof(struct kvm_regs, name) / sizeof(unsigned long))
+
+/* Control and status registers are mapped as type 3 */
+#define KVM_REG_RISCV_CSR		(0x03 << KVM_REG_RISCV_TYPE_SHIFT)
+#define KVM_REG_RISCV_CSR_REG(name)	\
+		(offsetof(struct kvm_sregs, name) / sizeof(unsigned long))
+
 #endif
 
 #endif /* __LINUX_KVM_RISCV_H */
diff --git a/arch/riscv/kvm/vcpu.c b/arch/riscv/kvm/vcpu.c
index 455b0f40832b..e22aabaf32e7 100644
--- a/arch/riscv/kvm/vcpu.c
+++ b/arch/riscv/kvm/vcpu.c
@@ -163,6 +163,215 @@ vm_fault_t kvm_arch_vcpu_fault(struct kvm_vcpu *vcpu, struct vm_fault *vmf)
 	return VM_FAULT_SIGBUS;
 }
 
+static int kvm_riscv_vcpu_get_reg_config(struct kvm_vcpu *vcpu,
+					 const struct kvm_one_reg *reg)
+{
+	unsigned long __user *uaddr =
+			(unsigned long __user *)(unsigned long)reg->addr;
+	unsigned long reg_num = reg->id & ~(KVM_REG_ARCH_MASK |
+					    KVM_REG_SIZE_MASK |
+					    KVM_REG_RISCV_CONFIG);
+	unsigned long reg_val;
+
+	if (KVM_REG_SIZE(reg->id) != sizeof(unsigned long))
+		return -EINVAL;
+
+	switch (reg_num) {
+	case KVM_REG_RISCV_CONFIG_ISA:
+		reg_val = vcpu->arch.isa;
+		break;
+	case KVM_REG_RISCV_CONFIG_TIMEBASE:
+		reg_val = riscv_timebase;
+		break;
+	default:
+		return -EINVAL;
+	};
+
+	if (copy_to_user(uaddr, &reg_val, KVM_REG_SIZE(reg->id)))
+		return -EFAULT;
+
+	return 0;
+}
+
+static int kvm_riscv_vcpu_set_reg_config(struct kvm_vcpu *vcpu,
+					 const struct kvm_one_reg *reg)
+{
+	unsigned long __user *uaddr =
+			(unsigned long __user *)(unsigned long)reg->addr;
+	unsigned long reg_num = reg->id & ~(KVM_REG_ARCH_MASK |
+					    KVM_REG_SIZE_MASK |
+					    KVM_REG_RISCV_CONFIG);
+	unsigned long reg_val;
+
+	if (KVM_REG_SIZE(reg->id) != sizeof(unsigned long))
+		return -EINVAL;
+
+	if (copy_from_user(&reg_val, uaddr, KVM_REG_SIZE(reg->id)))
+		return -EFAULT;
+
+	switch (reg_num) {
+	case KVM_REG_RISCV_CONFIG_ISA:
+		if (!vcpu->arch.ran_atleast_once) {
+			vcpu->arch.isa = reg_val;
+			vcpu->arch.isa &= riscv_isa;
+			vcpu->arch.isa &= KVM_RISCV_ISA_ALLOWED;
+		} else {
+			return -ENOTSUPP;
+		}
+		break;
+	case KVM_REG_RISCV_CONFIG_TIMEBASE:
+		return -ENOTSUPP;
+	default:
+		return -EINVAL;
+	};
+
+	return 0;
+}
+
+static int kvm_riscv_vcpu_get_reg_core(struct kvm_vcpu *vcpu,
+				       const struct kvm_one_reg *reg)
+{
+	struct kvm_cpu_context *cntx = &vcpu->arch.guest_context;
+	unsigned long __user *uaddr =
+			(unsigned long __user *)(unsigned long)reg->addr;
+	unsigned long reg_num = reg->id & ~(KVM_REG_ARCH_MASK |
+					    KVM_REG_SIZE_MASK |
+					    KVM_REG_RISCV_CORE);
+	unsigned long reg_val;
+
+	if (KVM_REG_SIZE(reg->id) != sizeof(unsigned long))
+		return -EINVAL;
+
+	if (reg_num == KVM_REG_RISCV_CORE_REG(regs.pc))
+		reg_val = cntx->sepc;
+	else if (KVM_REG_RISCV_CORE_REG(regs.pc) < reg_num &&
+		 reg_num <= KVM_REG_RISCV_CORE_REG(regs.t6))
+		reg_val = ((unsigned long *)cntx)[reg_num];
+	else if (reg_num == KVM_REG_RISCV_CORE_REG(mode))
+		reg_val = (cntx->sstatus & SR_SPP) ?
+				KVM_RISCV_MODE_S : KVM_RISCV_MODE_U;
+	else
+		return -EINVAL;
+
+	if (copy_to_user(uaddr, &reg_val, KVM_REG_SIZE(reg->id)))
+		return -EFAULT;
+
+	return 0;
+}
+
+static int kvm_riscv_vcpu_set_reg_core(struct kvm_vcpu *vcpu,
+				       const struct kvm_one_reg *reg)
+{
+	struct kvm_cpu_context *cntx = &vcpu->arch.guest_context;
+	unsigned long __user *uaddr =
+			(unsigned long __user *)(unsigned long)reg->addr;
+	unsigned long reg_num = reg->id & ~(KVM_REG_ARCH_MASK |
+					    KVM_REG_SIZE_MASK |
+					    KVM_REG_RISCV_CORE);
+	unsigned long reg_val;
+
+	if (KVM_REG_SIZE(reg->id) != sizeof(unsigned long))
+		return -EINVAL;
+
+	if (copy_from_user(&reg_val, uaddr, KVM_REG_SIZE(reg->id)))
+		return -EFAULT;
+
+	if (reg_num == KVM_REG_RISCV_CORE_REG(regs.pc))
+		cntx->sepc = reg_val;
+	else if (KVM_REG_RISCV_CORE_REG(regs.pc) < reg_num &&
+		 reg_num <= KVM_REG_RISCV_CORE_REG(regs.t6))
+		((unsigned long *)cntx)[reg_num] = reg_val;
+	else if (reg_num == KVM_REG_RISCV_CORE_REG(mode)) {
+		if (reg_val == KVM_RISCV_MODE_S)
+			cntx->sstatus |= SR_SPP;
+		else
+			cntx->sstatus &= ~SR_SPP;
+	} else
+		return -EINVAL;
+
+	return 0;
+}
+
+static int kvm_riscv_vcpu_get_reg_csr(struct kvm_vcpu *vcpu,
+				      const struct kvm_one_reg *reg)
+{
+	struct kvm_vcpu_csr *csr = &vcpu->arch.guest_csr;
+	unsigned long __user *uaddr =
+			(unsigned long __user *)(unsigned long)reg->addr;
+	unsigned long reg_num = reg->id & ~(KVM_REG_ARCH_MASK |
+					    KVM_REG_SIZE_MASK |
+					    KVM_REG_RISCV_CSR);
+	unsigned long reg_val;
+
+	if (KVM_REG_SIZE(reg->id) != sizeof(unsigned long))
+		return -EINVAL;
+	if (reg_num >= sizeof(struct kvm_sregs) / sizeof(unsigned long))
+		return -EINVAL;
+
+	if (reg_num == KVM_REG_RISCV_CSR_REG(sip))
+		kvm_riscv_vcpu_flush_interrupts(vcpu);
+
+	reg_val = ((unsigned long *)csr)[reg_num];
+
+	if (copy_to_user(uaddr, &reg_val, KVM_REG_SIZE(reg->id)))
+		return -EFAULT;
+
+	return 0;
+}
+
+static int kvm_riscv_vcpu_set_reg_csr(struct kvm_vcpu *vcpu,
+				      const struct kvm_one_reg *reg)
+{
+	struct kvm_vcpu_csr *csr = &vcpu->arch.guest_csr;
+	unsigned long __user *uaddr =
+			(unsigned long __user *)(unsigned long)reg->addr;
+	unsigned long reg_num = reg->id & ~(KVM_REG_ARCH_MASK |
+					    KVM_REG_SIZE_MASK |
+					    KVM_REG_RISCV_CSR);
+	unsigned long reg_val;
+
+	if (KVM_REG_SIZE(reg->id) != sizeof(unsigned long))
+		return -EINVAL;
+	if (reg_num >= sizeof(struct kvm_sregs) / sizeof(unsigned long))
+		return -EINVAL;
+
+	if (copy_from_user(&reg_val, uaddr, KVM_REG_SIZE(reg->id)))
+		return -EFAULT;
+
+	((unsigned long *)csr)[reg_num] = reg_val;
+
+	if (reg_num == KVM_REG_RISCV_CSR_REG(sip))
+		WRITE_ONCE(vcpu->arch.irqs_pending_mask, 0);
+
+	return 0;
+}
+
+static int kvm_riscv_vcpu_set_reg(struct kvm_vcpu *vcpu,
+				  const struct kvm_one_reg *reg)
+{
+	if ((reg->id & KVM_REG_RISCV_TYPE_MASK) == KVM_REG_RISCV_CONFIG)
+		return kvm_riscv_vcpu_set_reg_config(vcpu, reg);
+	else if ((reg->id & KVM_REG_RISCV_TYPE_MASK) == KVM_REG_RISCV_CORE)
+		return kvm_riscv_vcpu_set_reg_core(vcpu, reg);
+	else if ((reg->id & KVM_REG_RISCV_TYPE_MASK) == KVM_REG_RISCV_CSR)
+		return kvm_riscv_vcpu_set_reg_csr(vcpu, reg);
+
+	return -EINVAL;
+}
+
+static int kvm_riscv_vcpu_get_reg(struct kvm_vcpu *vcpu,
+				  const struct kvm_one_reg *reg)
+{
+	if ((reg->id & KVM_REG_RISCV_TYPE_MASK) == KVM_REG_RISCV_CONFIG)
+		return kvm_riscv_vcpu_get_reg_config(vcpu, reg);
+	else if ((reg->id & KVM_REG_RISCV_TYPE_MASK) == KVM_REG_RISCV_CORE)
+		return kvm_riscv_vcpu_get_reg_core(vcpu, reg);
+	else if ((reg->id & KVM_REG_RISCV_TYPE_MASK) == KVM_REG_RISCV_CSR)
+		return kvm_riscv_vcpu_get_reg_csr(vcpu, reg);
+
+	return -EINVAL;
+}
+
 long kvm_arch_vcpu_async_ioctl(struct file *filp,
 			       unsigned int ioctl, unsigned long arg)
 {
@@ -187,8 +396,30 @@ long kvm_arch_vcpu_async_ioctl(struct file *filp,
 long kvm_arch_vcpu_ioctl(struct file *filp,
 			 unsigned int ioctl, unsigned long arg)
 {
-	/* TODO: */
-	return -EINVAL;
+	struct kvm_vcpu *vcpu = filp->private_data;
+	void __user *argp = (void __user *)arg;
+	long r = -EINVAL;
+
+	switch (ioctl) {
+	case KVM_SET_ONE_REG:
+	case KVM_GET_ONE_REG: {
+		struct kvm_one_reg reg;
+
+		r = -EFAULT;
+		if (copy_from_user(&reg, argp, sizeof(reg)))
+			break;
+
+		if (ioctl == KVM_SET_ONE_REG)
+			r = kvm_riscv_vcpu_set_reg(vcpu, &reg);
+		else
+			r = kvm_riscv_vcpu_get_reg(vcpu, &reg);
+		break;
+	}
+	default:
+		break;
+	}
+
+	return r;
 }
 
 int kvm_arch_vcpu_ioctl_get_sregs(struct kvm_vcpu *vcpu,
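As mentioned above, here is an illustrative user-space sketch (not part
of this patch) of writing a CORE register: it sets the guest program
counter via KVM_SET_ONE_REG before the VCPU runs for the first time.
The vcpu_fd and the KVM_REG_RISCV and KVM_REG_SIZE_U64 identifiers from
<linux/kvm.h> are assumptions here, and <stddef.h> is needed because
KVM_REG_RISCV_CORE_REG() expands to offsetof().

  #include <stddef.h>      /* offsetof(), used by KVM_REG_RISCV_CORE_REG() */
  #include <sys/ioctl.h>
  #include <linux/kvm.h>

  /* Illustrative sketch: set the guest PC of a VCPU fd before its first run. */
  static int set_guest_pc(int vcpu_fd, unsigned long entry)
  {
          unsigned long val = entry;
          struct kvm_one_reg reg = {
                  .id   = KVM_REG_RISCV | KVM_REG_SIZE_U64 |
                          KVM_REG_RISCV_CORE |
                          KVM_REG_RISCV_CORE_REG(regs.pc),
                  .addr = (unsigned long)&val,
          };

          return ioctl(vcpu_fd, KVM_SET_ONE_REG, &reg);
  }

On the kernel side, kvm_riscv_vcpu_set_reg_core() above maps this write
of regs.pc onto the sepc field of the guest context.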