From patchwork Mon Nov 16 20:43:15 2020
X-Patchwork-Submitter: David Brazdil
X-Patchwork-Id: 11910827
From: David Brazdil
To: kvmarm@lists.cs.columbia.edu
Subject: [PATCH v2 21/24] kvm: arm64: Add kvm-arm.protected early kernel parameter
Date: Mon, 16 Nov 2020 20:43:15 +0000
Message-Id: <20201116204318.63987-22-dbrazdil@google.com>
X-Mailer: git-send-email 2.29.2
In-Reply-To: <20201116204318.63987-1-dbrazdil@google.com>
References: <20201116204318.63987-1-dbrazdil@google.com>
Cc: Mark Rutland, kernel-team@android.com, Lorenzo Pieralisi,
	Andrew Walbran, Suzuki K Poulose, Marc Zyngier, Quentin Perret,
	linux-kernel@vger.kernel.org, James Morse,
	linux-arm-kernel@lists.infradead.org, Catalin Marinas, Tejun Heo,
	Dennis Zhou, Christoph Lameter, David Brazdil, Will Deacon,
	Julien Thierry, Andrew Scull

Add an early parameter that allows users to opt into protected KVM mode
when using the nVHE hypervisor. In this mode, guest state will be kept
private from the host. This will primarily involve enabling stage-2
address translation for the host, restricting DMA to host memory, and
filtering host SMCs.

Capability ARM64_PROTECTED_KVM is set if the param is passed, CONFIG_KVM
is enabled and the kernel was not booted with VHE.
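For illustration (exact command lines are firmware/bootloader specific):
since early_protected_kvm_cfg() parses its argument with strtobool(),
protected mode is requested on a non-VHE boot by appending something like

    kvm-arm.protected=1

(or '=y'; '=n'/'=0' keep the default, i.e. disabled) to the kernel
command line.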
Signed-off-by: David Brazdil
---
 arch/arm64/include/asm/cpucaps.h |  3 ++-
 arch/arm64/include/asm/virt.h    |  8 ++++++++
 arch/arm64/kernel/cpufeature.c   | 29 +++++++++++++++++++++++++++++
 arch/arm64/kvm/arm.c             | 10 +++++++++-
 4 files changed, 48 insertions(+), 2 deletions(-)

diff --git a/arch/arm64/include/asm/cpucaps.h b/arch/arm64/include/asm/cpucaps.h
index e7d98997c09c..ac075f70b2e4 100644
--- a/arch/arm64/include/asm/cpucaps.h
+++ b/arch/arm64/include/asm/cpucaps.h
@@ -66,7 +66,8 @@
 #define ARM64_HAS_TLB_RANGE			56
 #define ARM64_MTE				57
 #define ARM64_WORKAROUND_1508412		58
+#define ARM64_PROTECTED_KVM			59
 
-#define ARM64_NCAPS				59
+#define ARM64_NCAPS				60
 
 #endif /* __ASM_CPUCAPS_H */
diff --git a/arch/arm64/include/asm/virt.h b/arch/arm64/include/asm/virt.h
index 6069be50baf9..2fde1186b962 100644
--- a/arch/arm64/include/asm/virt.h
+++ b/arch/arm64/include/asm/virt.h
@@ -97,6 +97,14 @@ static __always_inline bool has_vhe(void)
 	return cpus_have_final_cap(ARM64_HAS_VIRT_HOST_EXTN);
 }
 
+static __always_inline bool is_protected_kvm_enabled(void)
+{
+	if (is_vhe_hyp_code())
+		return false;
+	else
+		return cpus_have_final_cap(ARM64_PROTECTED_KVM);
+}
+
 #endif /* __ASSEMBLY__ */
 
 #endif /* ! __ASM__VIRT_H */
diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
index 6f36c4f62f69..dd5bc0f0cf0d 100644
--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -1709,6 +1709,29 @@ static void cpu_enable_mte(struct arm64_cpu_capabilities const *cap)
 }
 #endif /* CONFIG_ARM64_MTE */
 
+#ifdef CONFIG_KVM
+static bool enable_protected_kvm;
+
+static bool has_protected_kvm(const struct arm64_cpu_capabilities *entry, int __unused)
+{
+	if (!enable_protected_kvm)
+		return false;
+
+	if (is_kernel_in_hyp_mode()) {
+		pr_warn("Protected KVM not available with VHE\n");
+		return false;
+	}
+
+	return true;
+}
+
+static int __init early_protected_kvm_cfg(char *buf)
+{
+	return strtobool(buf, &enable_protected_kvm);
+}
+early_param("kvm-arm.protected", early_protected_kvm_cfg);
+#endif /* CONFIG_KVM */
+
 /* Internal helper functions to match cpu capability type */
 static bool
 cpucap_late_cpu_optional(const struct arm64_cpu_capabilities *cap)
@@ -1822,6 +1845,12 @@ static const struct arm64_cpu_capabilities arm64_features[] = {
 		.field_pos = ID_AA64PFR0_EL1_SHIFT,
 		.min_field_value = ID_AA64PFR0_EL1_32BIT_64BIT,
 	},
+	{
+		.desc = "Protected KVM",
+		.capability = ARM64_PROTECTED_KVM,
+		.type = ARM64_CPUCAP_SYSTEM_FEATURE,
+		.matches = has_protected_kvm,
+	},
 #endif
 	{
 		.desc = "Kernel page table isolation (KPTI)",
diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
index c76a8e5bd19c..49d2474f2a80 100644
--- a/arch/arm64/kvm/arm.c
+++ b/arch/arm64/kvm/arm.c
@@ -1796,6 +1796,12 @@ int kvm_arch_init(void *opaque)
 		return -ENODEV;
 	}
 
+	/* The PROTECTED_KVM cap should not have been enabled for VHE. */
+	if (in_hyp_mode && is_protected_kvm_enabled()) {
+		kvm_pr_unimpl("VHE protected mode unsupported, not initializing\n");
+		return -ENODEV;
+	}
+
 	if (cpus_have_final_cap(ARM64_WORKAROUND_DEVICE_LOAD_ACQUIRE) ||
 	    cpus_have_final_cap(ARM64_WORKAROUND_1508412))
 		kvm_info("Guests without required CPU erratum workarounds can deadlock system!\n" \
@@ -1827,7 +1833,9 @@ int kvm_arch_init(void *opaque)
 	if (err)
 		goto out_hyp;
 
-	if (in_hyp_mode)
+	if (is_protected_kvm_enabled())
+		kvm_info("Protected nVHE mode initialized successfully\n");
+	else if (in_hyp_mode)
 		kvm_info("VHE mode initialized successfully\n");
 	else
 		kvm_info("Hyp mode initialized successfully\n");
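
As an aside for reviewers, a minimal hypothetical sketch of a caller of the
new helper (the function below is made up for illustration and is not
introduced by this series):

/* Hypothetical caller, for illustration only. */
static int example_init_host_protection(void)
{
	/*
	 * is_protected_kvm_enabled() compiles to false in VHE hyp code and
	 * otherwise reads the finalized ARM64_PROTECTED_KVM capability.
	 */
	if (!is_protected_kvm_enabled())
		return 0;	/* protected mode was not requested at boot */

	/* ... enable host stage-2, restrict DMA, filter host SMCs ... */
	return 0;
}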