From patchwork Mon Nov 9 11:32:32 2020
X-Patchwork-Submitter: David Brazdil
X-Patchwork-Id: 11891317
From: David Brazdil
To: kvmarm@lists.cs.columbia.edu
Cc: Mark Rutland, kernel-team@android.com, Lorenzo Pieralisi, Andrew Walbran,
    Suzuki K Poulose, Marc Zyngier, Quentin Perret, linux-kernel@vger.kernel.org,
    James Morse, linux-arm-kernel@lists.infradead.org, Catalin Marinas,
    Tejun Heo, Dennis Zhou, Christoph Lameter, David Brazdil, Will Deacon,
    Julien Thierry, Andrew Scull
Subject: [PATCH v1 23/24] kvm: arm64: Trap host SMCs in protected mode.
Date: Mon, 9 Nov 2020 11:32:32 +0000
Message-Id: <20201109113233.9012-24-dbrazdil@google.com>
In-Reply-To: <20201109113233.9012-1-dbrazdil@google.com>
References: <20201109113233.9012-1-dbrazdil@google.com>

While protected nVHE KVM is installed, start trapping all host SMCs. By
default, these are simply forwarded to EL3, but PSCI SMCs are validated
first.

Create a new constant, HCR_HOST_NVHE_PROTECTED_FLAGS, with the set of HCR
flags to use while the nVHE vector is installed on a kernel booted with
the protected flag enabled. Switch back to the default HCR flags when the
stub vector is reinstalled.
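For readers unfamiliar with the dispatch described above, the standalone C
sketch below only models the decision: PSCI calls, which occupy the SMCCC
standard-service function-ID ranges, are validated by the hypervisor, while
every other host SMC is forwarded to EL3 unchanged. The helper names and the
printf-based reporting are illustrative only; they are not part of this patch
or of the kernel API.

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* PSCI function IDs occupy the SMCCC standard-service ranges. */
static bool is_psci_fid(uint32_t fid)
{
	return (fid >= 0x84000000u && fid <= 0x8400001fu) ||	/* SMC32 */
	       (fid >= 0xc4000000u && fid <= 0xc400001fu);	/* SMC64 */
}

/* Hypothetical dispatch for an SMC trapped from the host via HCR_EL2.TSC. */
static void handle_host_smc(uint32_t fid)
{
	if (is_psci_fid(fid))
		printf("0x%08x: PSCI call, validated before being acted on\n", fid);
	else
		printf("0x%08x: forwarded unchanged to EL3\n", fid);
}

int main(void)
{
	handle_host_smc(0x84000000u);	/* PSCI_VERSION */
	handle_host_smc(0xc4000003u);	/* PSCI CPU_ON (SMC64) */
	handle_host_smc(0x82000001u);	/* some SiP-specific call */
	return 0;
}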
Signed-off-by: David Brazdil
Reported-by: kernel test robot
---
 arch/arm64/include/asm/kvm_arm.h   |  1 +
 arch/arm64/kernel/image-vars.h     |  4 ++++
 arch/arm64/kvm/arm.c               | 35 ++++++++++++++++++++++++++++++
 arch/arm64/kvm/hyp/nvhe/hyp-init.S |  8 +++++++
 arch/arm64/kvm/hyp/nvhe/switch.c   |  5 ++++-
 5 files changed, 52 insertions(+), 1 deletion(-)

diff --git a/arch/arm64/include/asm/kvm_arm.h b/arch/arm64/include/asm/kvm_arm.h
index 64ce29378467..4e90c2debf70 100644
--- a/arch/arm64/include/asm/kvm_arm.h
+++ b/arch/arm64/include/asm/kvm_arm.h
@@ -80,6 +80,7 @@
 			 HCR_FMO | HCR_IMO | HCR_PTW )
 #define HCR_VIRT_EXCP_MASK (HCR_VSE | HCR_VI | HCR_VF)
 #define HCR_HOST_NVHE_FLAGS (HCR_RW | HCR_API | HCR_APK | HCR_ATA)
+#define HCR_HOST_NVHE_PROTECTED_FLAGS (HCR_HOST_NVHE_FLAGS | HCR_TSC)
 #define HCR_HOST_VHE_FLAGS (HCR_RW | HCR_TGE | HCR_E2H)

 /* TCR_EL2 Registers bits */
diff --git a/arch/arm64/kernel/image-vars.h b/arch/arm64/kernel/image-vars.h
index 78a42a7cdb72..75cda51674f4 100644
--- a/arch/arm64/kernel/image-vars.h
+++ b/arch/arm64/kernel/image-vars.h
@@ -62,9 +62,13 @@ __efistub__ctype		= _ctype;
  */

 /* Alternative callbacks for init-time patching of nVHE hyp code. */
+KVM_NVHE_ALIAS(kvm_patch_hcr_flags);
 KVM_NVHE_ALIAS(kvm_patch_vector_branch);
 KVM_NVHE_ALIAS(kvm_update_va_mask);

+/* Static key enabled when the user opted into nVHE protected mode. */
+KVM_NVHE_ALIAS(kvm_protected_mode);
+
 /* Global kernel state accessed by nVHE hyp code. */
 KVM_NVHE_ALIAS(kvm_vgic_global_state);
diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
index 574aa2d026e6..c09b95cfa00a 100644
--- a/arch/arm64/kvm/arm.c
+++ b/arch/arm64/kvm/arm.c
@@ -1861,6 +1861,41 @@ void kvm_arch_exit(void)
 	kvm_perf_teardown();
 }

+static inline u32 __init __gen_mov_hcr_insn(u64 hcr, u32 rd, int i)
+{
+	int shift = 48 - (i * 16);
+	u16 imm = (hcr >> shift) & GENMASK(16, 0);
+
+	return aarch64_insn_gen_movewide(rd, imm, shift,
+					 AARCH64_INSN_VARIANT_64BIT,
+					 (i == 0) ? AARCH64_INSN_MOVEWIDE_ZERO
+						  : AARCH64_INSN_MOVEWIDE_KEEP);
+}
+
+void __init kvm_patch_hcr_flags(struct alt_instr *alt,
+				__le32 *origptr, __le32 *updptr, int nr_inst)
+{
+	int i;
+	u32 rd;
+
+	BUG_ON(nr_inst != 4);
+
+	/* Skip for VHE and unprotected nVHE modes. */
+	if (!is_kvm_protected_mode())
+		return;
+
+	rd = aarch64_insn_decode_register(AARCH64_INSN_REGTYPE_RD,
+					  le32_to_cpu(origptr[0]));
+
+	for (i = 0; i < nr_inst; i++) {
+		u32 oinsn = __gen_mov_hcr_insn(HCR_HOST_NVHE_FLAGS, rd, i);
+		u32 insn = __gen_mov_hcr_insn(HCR_HOST_NVHE_PROTECTED_FLAGS, rd, i);
+
+		BUG_ON(oinsn != le32_to_cpu(origptr[i]));
+		updptr[i] = cpu_to_le32(insn);
+	}
+}
+
 static int __init early_kvm_protected_cfg(char *buf)
 {
 	bool val;
diff --git a/arch/arm64/kvm/hyp/nvhe/hyp-init.S b/arch/arm64/kvm/hyp/nvhe/hyp-init.S
index f999a35b2c8c..bbe6c5f558e0 100644
--- a/arch/arm64/kvm/hyp/nvhe/hyp-init.S
+++ b/arch/arm64/kvm/hyp/nvhe/hyp-init.S
@@ -88,6 +88,12 @@ SYM_CODE_END(__kvm_hyp_init)
  * x0: struct kvm_nvhe_init_params PA
  */
 SYM_CODE_START(___kvm_hyp_init)
+alternative_cb kvm_patch_hcr_flags
+	mov_q	x1, HCR_HOST_NVHE_FLAGS
+alternative_cb_end
+	msr	hcr_el2, x1
+	isb
+
 	ldr	x1, [x0, #NVHE_INIT_TPIDR_EL2]
 	msr	tpidr_el2, x1

@@ -220,6 +226,8 @@ reset:
 	bic	x5, x5, x6		// Clear SCTL_M and etc
 	pre_disable_mmu_workaround
 	msr	sctlr_el2, x5
+	mov_q	x5, HCR_HOST_NVHE_FLAGS
+	msr	hcr_el2, x5
 	isb

 	/* Install stub vectors */
diff --git a/arch/arm64/kvm/hyp/nvhe/switch.c b/arch/arm64/kvm/hyp/nvhe/switch.c
index 8ae8160bc93a..f605b25a9afc 100644
--- a/arch/arm64/kvm/hyp/nvhe/switch.c
+++ b/arch/arm64/kvm/hyp/nvhe/switch.c
@@ -96,7 +96,10 @@ static void __deactivate_traps(struct kvm_vcpu *vcpu)
 	mdcr_el2 |= MDCR_EL2_E2PB_MASK << MDCR_EL2_E2PB_SHIFT;

 	write_sysreg(mdcr_el2, mdcr_el2);
-	write_sysreg(HCR_HOST_NVHE_FLAGS, hcr_el2);
+	if (is_kvm_protected_mode())
+		write_sysreg(HCR_HOST_NVHE_PROTECTED_FLAGS, hcr_el2);
+	else
+		write_sysreg(HCR_HOST_NVHE_FLAGS, hcr_el2);
 	write_sysreg(CPTR_EL2_DEFAULT, cptr_el2);
 	write_sysreg(__kvm_hyp_host_vector, vbar_el2);
 }
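
As a side note on the alternative callback above: kvm_patch_hcr_flags rewrites
a four-instruction movz/movk sequence so that, in protected mode, HCR_EL2 is
loaded with HCR_HOST_NVHE_PROTECTED_FLAGS instead of HCR_HOST_NVHE_FLAGS. The
standalone sketch below only illustrates how a 64-bit value splits into the
four 16-bit chunks such a sequence encodes, mirroring the shift order used by
__gen_mov_hcr_insn; it uses an arbitrary example value and plain printing, not
the kernel's insn helpers.

#include <stdint.h>
#include <stdio.h>

/*
 * Print the movz/movk sequence that materialises a 64-bit value in
 * register xN, using the same shift order as __gen_mov_hcr_insn:
 * i == 0 emits MOVZ for bits [63:48], later iterations emit MOVK.
 */
static void show_mov_sequence(uint64_t val, int rd)
{
	for (int i = 0; i < 4; i++) {
		int shift = 48 - (i * 16);
		uint16_t imm = (val >> shift) & 0xffff;

		printf("%s\tx%d, #0x%04x, lsl #%d\n",
		       i == 0 ? "movz" : "movk", rd, (unsigned int)imm, shift);
	}
}

int main(void)
{
	/* Arbitrary example value; the real code patches in the HCR flag sets. */
	show_mov_sequence(0x0123456789abcdefull, 1);
	return 0;
}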