From patchwork Thu Oct 21 15:41:01 2021
X-Patchwork-Submitter: Tianyu Lan
X-Patchwork-Id: 12575613
From: Tianyu Lan
To: kys@microsoft.com, haiyangz@microsoft.com, sthemmin@microsoft.com,
    wei.liu@kernel.org, decui@microsoft.com, tglx@linutronix.de, mingo@redhat.com,
    bp@alien8.de, x86@kernel.org, hpa@zytor.com, dave.hansen@linux.intel.com,
    luto@kernel.org, peterz@infradead.org, davem@davemloft.net, kuba@kernel.org,
    gregkh@linuxfoundation.org, arnd@arndb.de, brijesh.singh@amd.com,
    jroedel@suse.de, Tianyu.Lan@microsoft.com, thomas.lendacky@amd.com,
    rientjes@google.com, pgonda@google.com, akpm@linux-foundation.org,
    kirill.shutemov@linux.intel.com, rppt@kernel.org, saravanand@fb.com,
    aneesh.kumar@linux.ibm.com, hannes@cmpxchg.org, tj@kernel.org,
    michael.h.kelley@microsoft.com
Cc: linux-arch@vger.kernel.org, linux-hyperv@vger.kernel.org,
    linux-kernel@vger.kernel.org, netdev@vger.kernel.org, vkuznets@redhat.com,
    konrad.wilk@oracle.com, hch@lst.de, robin.murphy@arm.com, joro@8bytes.org,
    parri.andrea@gmail.com, dave.hansen@intel.com
Subject: [PATCH V8 1/9] x86/hyperv: Initialize GHCB page in Isolation VM
Date: Thu, 21 Oct 2021 11:41:01 -0400
Message-Id: <20211021154110.3734294-2-ltykernel@gmail.com>
In-Reply-To: <20211021154110.3734294-1-ltykernel@gmail.com>
References: <20211021154110.3734294-1-ltykernel@gmail.com>

From: Tianyu Lan

Hyper-V exposes the GHCB page via the SEV-ES GHCB MSR so that an SNP guest
can communicate with the hypervisor. Map the GHCB page for all CPUs so they
can read/write MSRs and submit hvcall requests via the GHCB page.

Signed-off-by: Tianyu Lan
---
Change since v4:
	* Fix typo in comment.

Change since v3:
	* Rename ghcb_base to hv_ghcb_pg and move it out of struct ms_hyperv_info.
	* Allocate hv_ghcb_pg before cpuhp_setup_state() and leverage
	  hv_cpu_init() to initialize the GHCB page.
---
 arch/x86/hyperv/hv_init.c       | 68 +++++++++++++++++++++++++++++----
 arch/x86/include/asm/mshyperv.h |  4 ++
 arch/x86/kernel/cpu/mshyperv.c  |  3 ++
 include/asm-generic/mshyperv.h  |  6 +++
 4 files changed, 74 insertions(+), 7 deletions(-)

diff --git a/arch/x86/hyperv/hv_init.c b/arch/x86/hyperv/hv_init.c
index 708a2712a516..a7e922755ad1 100644
--- a/arch/x86/hyperv/hv_init.c
+++ b/arch/x86/hyperv/hv_init.c
@@ -20,6 +20,7 @@
 #include
 #include
 #include
+#include
 #include
 #include
 #include
@@ -36,12 +37,42 @@ EXPORT_SYMBOL_GPL(hv_current_partition_id);
 void *hv_hypercall_pg;
 EXPORT_SYMBOL_GPL(hv_hypercall_pg);
 
+void __percpu **hv_ghcb_pg;
+
 /* Storage to save the hypercall page temporarily for hibernation */
 static void *hv_hypercall_pg_saved;
 
 struct hv_vp_assist_page **hv_vp_assist_page;
 EXPORT_SYMBOL_GPL(hv_vp_assist_page);
 
+static int hyperv_init_ghcb(void)
+{
+	u64 ghcb_gpa;
+	void *ghcb_va;
+	void **ghcb_base;
+
+	if (!hv_isolation_type_snp())
+		return 0;
+
+	if (!hv_ghcb_pg)
+		return -EINVAL;
+
+	/*
+	 * GHCB page is allocated by paravisor. The address
+	 * returned by MSR_AMD64_SEV_ES_GHCB is above shared
+	 * memory boundary and map it here.
+	 */
+	rdmsrl(MSR_AMD64_SEV_ES_GHCB, ghcb_gpa);
+	ghcb_va = memremap(ghcb_gpa, HV_HYP_PAGE_SIZE, MEMREMAP_WB);
+	if (!ghcb_va)
+		return -ENOMEM;
+
+	ghcb_base = (void **)this_cpu_ptr(hv_ghcb_pg);
+	*ghcb_base = ghcb_va;
+
+	return 0;
+}
+
 static int hv_cpu_init(unsigned int cpu)
 {
 	union hv_vp_assist_msr_contents msr = { 0 };
@@ -85,7 +116,7 @@ static int hv_cpu_init(unsigned int cpu)
 		}
 	}
 
-	return 0;
+	return hyperv_init_ghcb();
 }
 
 static void (*hv_reenlightenment_cb)(void);
@@ -177,6 +208,14 @@ static int hv_cpu_die(unsigned int cpu)
 {
 	struct hv_reenlightenment_control re_ctrl;
 	unsigned int new_cpu;
+	void **ghcb_va;
+
+	if (hv_ghcb_pg) {
+		ghcb_va = (void **)this_cpu_ptr(hv_ghcb_pg);
+		if (*ghcb_va)
+			memunmap(*ghcb_va);
+		*ghcb_va = NULL;
+	}
 
 	hv_common_cpu_die(cpu);
 
@@ -366,10 +405,16 @@ void __init hyperv_init(void)
 		goto common_free;
 	}
 
+	if (hv_isolation_type_snp()) {
+		hv_ghcb_pg = alloc_percpu(void *);
+		if (!hv_ghcb_pg)
+			goto free_vp_assist_page;
+	}
+
 	cpuhp = cpuhp_setup_state(CPUHP_AP_ONLINE_DYN, "x86/hyperv_init:online",
 				  hv_cpu_init, hv_cpu_die);
 	if (cpuhp < 0)
-		goto free_vp_assist_page;
+		goto free_ghcb_page;
 
 	/*
 	 * Setup the hypercall page and enable hypercalls.
@@ -383,10 +428,8 @@ void __init hyperv_init(void)
 			VMALLOC_END, GFP_KERNEL, PAGE_KERNEL_ROX,
 			VM_FLUSH_RESET_PERMS, NUMA_NO_NODE,
 			__builtin_return_address(0));
-	if (hv_hypercall_pg == NULL) {
-		wrmsrl(HV_X64_MSR_GUEST_OS_ID, 0);
-		goto remove_cpuhp_state;
-	}
+	if (hv_hypercall_pg == NULL)
+		goto clean_guest_os_id;
 
 	rdmsrl(HV_X64_MSR_HYPERCALL, hypercall_msr.as_uint64);
 	hypercall_msr.enable = 1;
@@ -456,8 +499,11 @@ void __init hyperv_init(void)
 	hv_query_ext_cap(0);
 	return;
 
-remove_cpuhp_state:
+clean_guest_os_id:
+	wrmsrl(HV_X64_MSR_GUEST_OS_ID, 0);
 	cpuhp_remove_state(cpuhp);
+free_ghcb_page:
+	free_percpu(hv_ghcb_pg);
 free_vp_assist_page:
 	kfree(hv_vp_assist_page);
 	hv_vp_assist_page = NULL;
@@ -559,3 +605,11 @@ bool hv_is_isolation_supported(void)
 {
 	return hv_get_isolation_type() != HV_ISOLATION_TYPE_NONE;
 }
+
+DEFINE_STATIC_KEY_FALSE(isolation_type_snp);
+
+bool hv_isolation_type_snp(void)
+{
+	return static_branch_unlikely(&isolation_type_snp);
+}
+EXPORT_SYMBOL_GPL(hv_isolation_type_snp);
diff --git a/arch/x86/include/asm/mshyperv.h b/arch/x86/include/asm/mshyperv.h
index adccbc209169..37739a277ac6 100644
--- a/arch/x86/include/asm/mshyperv.h
+++ b/arch/x86/include/asm/mshyperv.h
@@ -11,6 +11,8 @@
 #include
 #include
 
+DECLARE_STATIC_KEY_FALSE(isolation_type_snp);
+
 typedef int (*hyperv_fill_flush_list_func)(
 		struct hv_guest_mapping_flush_list *flush,
 		void *data);
@@ -39,6 +41,8 @@ extern void *hv_hypercall_pg;
 
 extern u64 hv_current_partition_id;
 
+extern void __percpu **hv_ghcb_pg;
+
 int hv_call_deposit_pages(int node, u64 partition_id, u32 num_pages);
 int hv_call_add_logical_proc(int node, u32 lp_index, u32 acpi_id);
 int hv_call_create_vp(int node, u64 partition_id, u32 vp_index, u32 flags);
diff --git a/arch/x86/kernel/cpu/mshyperv.c b/arch/x86/kernel/cpu/mshyperv.c
index e095c28d27ae..b09ade389040 100644
--- a/arch/x86/kernel/cpu/mshyperv.c
+++ b/arch/x86/kernel/cpu/mshyperv.c
@@ -316,6 +316,9 @@ static void __init ms_hyperv_init_platform(void)
 
 		pr_info("Hyper-V: Isolation Config: Group A 0x%x, Group B 0x%x\n",
 			ms_hyperv.isolation_config_a, ms_hyperv.isolation_config_b);
+
+		if (hv_get_isolation_type() == HV_ISOLATION_TYPE_SNP)
+			static_branch_enable(&isolation_type_snp);
 	}
 
 	if (hv_max_functions_eax >= HYPERV_CPUID_NESTED_FEATURES) {
diff --git a/include/asm-generic/mshyperv.h b/include/asm-generic/mshyperv.h
index d3eae6cdbacb..2d88aa855f7e 100644
--- a/include/asm-generic/mshyperv.h
+++ b/include/asm-generic/mshyperv.h
@@ -254,12 +254,18 @@ bool hv_is_hyperv_initialized(void);
 bool hv_is_hibernation_supported(void);
 enum hv_isolation_type hv_get_isolation_type(void);
 bool hv_is_isolation_supported(void);
+bool hv_isolation_type_snp(void);
 void hyperv_cleanup(void);
 bool hv_query_ext_cap(u64 cap_query);
 #else /* CONFIG_HYPERV */
 static inline bool hv_is_hyperv_initialized(void) { return false; }
 static inline bool hv_is_hibernation_supported(void) { return false; }
 static inline void hyperv_cleanup(void) {}
+static inline bool hv_is_isolation_supported(void) { return false; }
+static inline enum hv_isolation_type hv_get_isolation_type(void)
+{
+	return HV_ISOLATION_TYPE_NONE;
+}
 #endif /* CONFIG_HYPERV */
 
 #endif
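The per-CPU GHCB lifecycle wired up above (hyperv_init_ghcb() on CPU online, the
memunmap in hv_cpu_die() on CPU offline) can be modelled in a few lines. The
sketch below is an illustrative user-space rendering, not kernel code:
read_ghcb_msr(), map_page() and unmap_page() are made-up stand-ins for
rdmsrl(MSR_AMD64_SEV_ES_GHCB, ...), memremap() and memunmap(), and NR_CPUS is
arbitrary.

/*
 * User-space model of the per-CPU GHCB bookkeeping: ghcb_pg[] stands in for
 * the per-CPU hv_ghcb_pg slots, init_ghcb() maps the paravisor-allocated
 * page for the current CPU, and cpu_die() unmaps it again.
 */
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

#define NR_CPUS 4

static void *ghcb_pg[NR_CPUS];			/* models per-CPU hv_ghcb_pg */

static uint64_t read_ghcb_msr(int cpu) { return 0x1000000ull + cpu * 0x1000; }
static void *map_page(uint64_t gpa)    { (void)gpa; return malloc(4096); }
static void unmap_page(void *va)       { free(va); }

static int init_ghcb(int cpu)
{
	uint64_t ghcb_gpa = read_ghcb_msr(cpu);	/* GPA chosen by the paravisor */
	void *ghcb_va = map_page(ghcb_gpa);

	if (!ghcb_va)
		return -1;			/* -ENOMEM in the patch */
	ghcb_pg[cpu] = ghcb_va;
	printf("cpu%d: GHCB gpa 0x%llx mapped\n",
	       cpu, (unsigned long long)ghcb_gpa);
	return 0;
}

static void cpu_die(int cpu)
{
	if (ghcb_pg[cpu]) {
		unmap_page(ghcb_pg[cpu]);
		ghcb_pg[cpu] = NULL;
	}
}

int main(void)
{
	for (int cpu = 0; cpu < NR_CPUS; cpu++)
		if (init_ghcb(cpu))
			return 1;
	for (int cpu = 0; cpu < NR_CPUS; cpu++)
		cpu_die(cpu);
	return 0;
}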
From patchwork Thu Oct 21 15:41:02 2021
X-Patchwork-Submitter: Tianyu Lan
X-Patchwork-Id: 12575615
From: Tianyu Lan
Subject: [PATCH V8 2/9] x86/hyperv: Initialize shared memory boundary in the Isolation VM.
Date: Thu, 21 Oct 2021 11:41:02 -0400
Message-Id: <20211021154110.3734294-3-ltykernel@gmail.com>
In-Reply-To: <20211021154110.3734294-1-ltykernel@gmail.com>
References: <20211021154110.3734294-1-ltykernel@gmail.com>

From: Tianyu Lan

Hyper-V exposes the shared memory boundary via the HYPERV_CPUID_ISOLATION_CONFIG
cpuid leaf and stores it in the shared_gpa_boundary field of struct ms_hyperv.
This prepares for sharing memory with the host in SNP guests.

Signed-off-by: Tianyu Lan
---
Change since v4:
	* Rename the reserved field.
Change since v3:
	* Use BIT_ULL to get shared_gpa_boundary.
	* Rename the Reserved* fields to reserved.
---
 arch/x86/kernel/cpu/mshyperv.c |  2 ++
 include/asm-generic/mshyperv.h | 12 +++++++++++-
 2 files changed, 13 insertions(+), 1 deletion(-)

diff --git a/arch/x86/kernel/cpu/mshyperv.c b/arch/x86/kernel/cpu/mshyperv.c
index b09ade389040..4794b716ec79 100644
--- a/arch/x86/kernel/cpu/mshyperv.c
+++ b/arch/x86/kernel/cpu/mshyperv.c
@@ -313,6 +313,8 @@ static void __init ms_hyperv_init_platform(void)
 	if (ms_hyperv.priv_high & HV_ISOLATION) {
 		ms_hyperv.isolation_config_a = cpuid_eax(HYPERV_CPUID_ISOLATION_CONFIG);
 		ms_hyperv.isolation_config_b = cpuid_ebx(HYPERV_CPUID_ISOLATION_CONFIG);
+		ms_hyperv.shared_gpa_boundary =
+			BIT_ULL(ms_hyperv.shared_gpa_boundary_bits);
 
 		pr_info("Hyper-V: Isolation Config: Group A 0x%x, Group B 0x%x\n",
 			ms_hyperv.isolation_config_a, ms_hyperv.isolation_config_b);
diff --git a/include/asm-generic/mshyperv.h b/include/asm-generic/mshyperv.h
index 2d88aa855f7e..a8ac497167d2 100644
--- a/include/asm-generic/mshyperv.h
+++ b/include/asm-generic/mshyperv.h
@@ -35,7 +35,17 @@ struct ms_hyperv_info {
 	u32 max_vp_index;
 	u32 max_lp_index;
 	u32 isolation_config_a;
-	u32 isolation_config_b;
+	union {
+		u32 isolation_config_b;
+		struct {
+			u32 cvm_type : 4;
+			u32 reserved1 : 1;
+			u32 shared_gpa_boundary_active : 1;
+			u32 shared_gpa_boundary_bits : 6;
+			u32 reserved2 : 20;
+		};
+	};
+	u64 shared_gpa_boundary;
 };
 
 extern struct ms_hyperv_info ms_hyperv;
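For reference, the EBX bit layout that the union above decodes can be exercised
with ordinary shifts and masks. The following stand-alone sketch uses an
arbitrary sample EBX value (boundary bit 39, boundary active) purely for
illustration; a real isolation config reports whatever the hypervisor chooses.

/*
 * Decode isolation_config_b (CPUID HYPERV_CPUID_ISOLATION_CONFIG, EBX) into
 * the shared GPA boundary, mirroring the bit-field union added above.
 */
#include <stdint.h>
#include <stdio.h>

int main(void)
{
	uint32_t ebx = (39u << 6) | (1u << 5);	/* sample: boundary bit 39, active */

	uint32_t cvm_type                   =  ebx        & 0xf;	/* bits 0-3  */
	uint32_t shared_gpa_boundary_active = (ebx >> 5)  & 0x1;	/* bit 5     */
	uint32_t shared_gpa_boundary_bits   = (ebx >> 6)  & 0x3f;	/* bits 6-11 */

	/* BIT_ULL(shared_gpa_boundary_bits): here 1 << 39 = 512 GiB boundary */
	uint64_t shared_gpa_boundary = 1ULL << shared_gpa_boundary_bits;

	printf("cvm_type=%u active=%u shared_gpa_boundary=0x%llx\n",
	       cvm_type, shared_gpa_boundary_active,
	       (unsigned long long)shared_gpa_boundary);
	return 0;
}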
From patchwork Thu Oct 21 15:41:03 2021
X-Patchwork-Submitter: Tianyu Lan
X-Patchwork-Id: 12575617
From: Tianyu Lan
Subject: [PATCH V8 3/9] x86/hyperv: Add new hvcall guest address host visibility support
Date: Thu, 21 Oct 2021 11:41:03 -0400
Message-Id: <20211021154110.3734294-4-ltykernel@gmail.com>
In-Reply-To: <20211021154110.3734294-1-ltykernel@gmail.com>
References: <20211021154110.3734294-1-ltykernel@gmail.com>

From: Tianyu Lan

Add support for the new hvcall that sets guest address host visibility, which
marks memory as visible to the host. Call it inside set_memory_decrypted()/
set_memory_encrypted(). Add a HYPERVISOR feature check in
hv_is_isolation_supported() to optimize for non-virtualized environments.

Acked-by: Dave Hansen
Signed-off-by: Tianyu Lan
---
Change since v6:
	* Add a hv_set_mem_host_visibility() stub when CONFIG_HYPERV is not set,
	  to fix a compile error.
	* Add a comment describing __set_memory_enc_pgtable().

Change since v4:
	* Fix a typo in the comment.
	* Make hv_mark_gpa_visibility() a static function.
	* Merge __hv_set_mem_host_visibility() and hv_set_mem_host_visibility().

Change since v3:
	* Fix error code handling in __hv_set_mem_host_visibility().
	* Move HvCallModifySparseGpaPageHostVisibility next to
	  enum hv_mem_host_visibility.

Change since v2:
	* Rework __set_memory_enc_dec() and call the Hyper-V or AMD function
	  according to a platform check.

Change since v1:
	* Use a new static call x86_set_memory_enc to avoid adding a Hyper-V
	  specific check in the set_memory code.
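The hypercall added by this patch is a rep hypercall whose input, struct
hv_gpa_range_for_visibility (defined further down in the patch), fills exactly
one hypervisor page: an 8-byte partition ID, two further 4-byte words, and
(PAGE_SIZE / sizeof(u64)) - 2 = 510 GPA page numbers. A quick stand-alone
check, assuming 4 KiB pages:

/*
 * Verify that the HvCallModifySparseGpaPageHostVisibility input layout from
 * this patch occupies exactly one 4096-byte page. Expect: rep count = 510,
 * sizeof = 4096.
 */
#include <stdint.h>
#include <stdio.h>

#define PAGE_SIZE 4096
#define HV_MAX_MODIFY_GPA_REP_COUNT ((PAGE_SIZE / sizeof(uint64_t)) - 2)

struct hv_gpa_range_for_visibility {
	uint64_t partition_id;
	uint32_t host_visibility:2;
	uint32_t reserved0:30;
	uint32_t reserved1;
	uint64_t gpa_page_list[HV_MAX_MODIFY_GPA_REP_COUNT];
} __attribute__((packed));

int main(void)
{
	printf("rep count = %zu, sizeof = %zu\n",
	       HV_MAX_MODIFY_GPA_REP_COUNT,
	       sizeof(struct hv_gpa_range_for_visibility));
	return 0;
}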
--- arch/x86/hyperv/Makefile | 2 +- arch/x86/hyperv/hv_init.c | 6 ++ arch/x86/hyperv/ivm.c | 105 +++++++++++++++++++++++++++++ arch/x86/include/asm/hyperv-tlfs.h | 17 +++++ arch/x86/include/asm/mshyperv.h | 7 +- arch/x86/mm/pat/set_memory.c | 23 +++++-- include/asm-generic/hyperv-tlfs.h | 1 + 7 files changed, 154 insertions(+), 7 deletions(-) create mode 100644 arch/x86/hyperv/ivm.c diff --git a/arch/x86/hyperv/Makefile b/arch/x86/hyperv/Makefile index 48e2c51464e8..5d2de10809ae 100644 --- a/arch/x86/hyperv/Makefile +++ b/arch/x86/hyperv/Makefile @@ -1,5 +1,5 @@ # SPDX-License-Identifier: GPL-2.0-only -obj-y := hv_init.o mmu.o nested.o irqdomain.o +obj-y := hv_init.o mmu.o nested.o irqdomain.o ivm.o obj-$(CONFIG_X86_64) += hv_apic.o hv_proc.o ifdef CONFIG_X86_64 diff --git a/arch/x86/hyperv/hv_init.c b/arch/x86/hyperv/hv_init.c index a7e922755ad1..d57df6825527 100644 --- a/arch/x86/hyperv/hv_init.c +++ b/arch/x86/hyperv/hv_init.c @@ -603,6 +603,12 @@ EXPORT_SYMBOL_GPL(hv_get_isolation_type); bool hv_is_isolation_supported(void) { + if (!cpu_feature_enabled(X86_FEATURE_HYPERVISOR)) + return false; + + if (!hypervisor_is_type(X86_HYPER_MS_HYPERV)) + return false; + return hv_get_isolation_type() != HV_ISOLATION_TYPE_NONE; } diff --git a/arch/x86/hyperv/ivm.c b/arch/x86/hyperv/ivm.c new file mode 100644 index 000000000000..79e7fb83472a --- /dev/null +++ b/arch/x86/hyperv/ivm.c @@ -0,0 +1,105 @@ +// SPDX-License-Identifier: GPL-2.0 +/* + * Hyper-V Isolation VM interface with paravisor and hypervisor + * + * Author: + * Tianyu Lan + */ + +#include +#include +#include +#include +#include +#include + +/* + * hv_mark_gpa_visibility - Set pages visible to host via hvcall. + * + * In Isolation VM, all guest memory is encrypted from host and guest + * needs to set memory visible to host via hvcall before sharing memory + * with host. + */ +static int hv_mark_gpa_visibility(u16 count, const u64 pfn[], + enum hv_mem_host_visibility visibility) +{ + struct hv_gpa_range_for_visibility **input_pcpu, *input; + u16 pages_processed; + u64 hv_status; + unsigned long flags; + + /* no-op if partition isolation is not enabled */ + if (!hv_is_isolation_supported()) + return 0; + + if (count > HV_MAX_MODIFY_GPA_REP_COUNT) { + pr_err("Hyper-V: GPA count:%d exceeds supported:%lu\n", count, + HV_MAX_MODIFY_GPA_REP_COUNT); + return -EINVAL; + } + + local_irq_save(flags); + input_pcpu = (struct hv_gpa_range_for_visibility **) + this_cpu_ptr(hyperv_pcpu_input_arg); + input = *input_pcpu; + if (unlikely(!input)) { + local_irq_restore(flags); + return -EINVAL; + } + + input->partition_id = HV_PARTITION_ID_SELF; + input->host_visibility = visibility; + input->reserved0 = 0; + input->reserved1 = 0; + memcpy((void *)input->gpa_page_list, pfn, count * sizeof(*pfn)); + hv_status = hv_do_rep_hypercall( + HVCALL_MODIFY_SPARSE_GPA_PAGE_HOST_VISIBILITY, count, + 0, input, &pages_processed); + local_irq_restore(flags); + + if (hv_result_success(hv_status)) + return 0; + else + return -EFAULT; +} + +/* + * hv_set_mem_host_visibility - Set specified memory visible to host. + * + * In Isolation VM, all guest memory is encrypted from host and guest + * needs to set memory visible to host via hvcall before sharing memory + * with host. This function works as wrap of hv_mark_gpa_visibility() + * with memory base and size. + */ +int hv_set_mem_host_visibility(unsigned long kbuffer, int pagecount, bool visible) +{ + enum hv_mem_host_visibility visibility = visible ? 
+ VMBUS_PAGE_VISIBLE_READ_WRITE : VMBUS_PAGE_NOT_VISIBLE; + u64 *pfn_array; + int ret = 0; + int i, pfn; + + if (!hv_is_isolation_supported() || !hv_hypercall_pg) + return 0; + + pfn_array = kmalloc(HV_HYP_PAGE_SIZE, GFP_KERNEL); + if (!pfn_array) + return -ENOMEM; + + for (i = 0, pfn = 0; i < pagecount; i++) { + pfn_array[pfn] = virt_to_hvpfn((void *)kbuffer + i * HV_HYP_PAGE_SIZE); + pfn++; + + if (pfn == HV_MAX_MODIFY_GPA_REP_COUNT || i == pagecount - 1) { + ret = hv_mark_gpa_visibility(pfn, pfn_array, + visibility); + if (ret) + goto err_free_pfn_array; + pfn = 0; + } + } + + err_free_pfn_array: + kfree(pfn_array); + return ret; +} diff --git a/arch/x86/include/asm/hyperv-tlfs.h b/arch/x86/include/asm/hyperv-tlfs.h index 2322d6bd5883..381e88122a5f 100644 --- a/arch/x86/include/asm/hyperv-tlfs.h +++ b/arch/x86/include/asm/hyperv-tlfs.h @@ -276,6 +276,23 @@ enum hv_isolation_type { #define HV_X64_MSR_TIME_REF_COUNT HV_REGISTER_TIME_REF_COUNT #define HV_X64_MSR_REFERENCE_TSC HV_REGISTER_REFERENCE_TSC +/* Hyper-V memory host visibility */ +enum hv_mem_host_visibility { + VMBUS_PAGE_NOT_VISIBLE = 0, + VMBUS_PAGE_VISIBLE_READ_ONLY = 1, + VMBUS_PAGE_VISIBLE_READ_WRITE = 3 +}; + +/* HvCallModifySparseGpaPageHostVisibility hypercall */ +#define HV_MAX_MODIFY_GPA_REP_COUNT ((PAGE_SIZE / sizeof(u64)) - 2) +struct hv_gpa_range_for_visibility { + u64 partition_id; + u32 host_visibility:2; + u32 reserved0:30; + u32 reserved1; + u64 gpa_page_list[HV_MAX_MODIFY_GPA_REP_COUNT]; +} __packed; + /* * Declare the MSR used to setup pages used to communicate with the hypervisor. */ diff --git a/arch/x86/include/asm/mshyperv.h b/arch/x86/include/asm/mshyperv.h index 37739a277ac6..f3154ca41ac4 100644 --- a/arch/x86/include/asm/mshyperv.h +++ b/arch/x86/include/asm/mshyperv.h @@ -192,7 +192,7 @@ struct irq_domain *hv_create_pci_msi_domain(void); int hv_map_ioapic_interrupt(int ioapic_id, bool level, int vcpu, int vector, struct hv_interrupt_entry *entry); int hv_unmap_ioapic_interrupt(int ioapic_id, struct hv_interrupt_entry *entry); - +int hv_set_mem_host_visibility(unsigned long addr, int numpages, bool visible); #else /* CONFIG_HYPERV */ static inline void hyperv_init(void) {} static inline void hyperv_setup_mmu_ops(void) {} @@ -209,6 +209,11 @@ static inline int hyperv_flush_guest_mapping_range(u64 as, { return -1; } +static inline int hv_set_mem_host_visibility(unsigned long addr, int numpages, + bool visible) +{ + return -1; +} #endif /* CONFIG_HYPERV */ diff --git a/arch/x86/mm/pat/set_memory.c b/arch/x86/mm/pat/set_memory.c index ad8a5c586a35..525f682ab150 100644 --- a/arch/x86/mm/pat/set_memory.c +++ b/arch/x86/mm/pat/set_memory.c @@ -29,6 +29,8 @@ #include #include #include +#include +#include #include "../mm_internal.h" @@ -1980,15 +1982,15 @@ int set_memory_global(unsigned long addr, int numpages) __pgprot(_PAGE_GLOBAL), 0); } -static int __set_memory_enc_dec(unsigned long addr, int numpages, bool enc) +/* + * __set_memory_enc_pgtable() is used for the hypervisors that get + * informed about "encryption" status via page tables. 
+ */
+static int __set_memory_enc_pgtable(unsigned long addr, int numpages, bool enc)
 {
 	struct cpa_data cpa;
 	int ret;
 
-	/* Nothing to do if memory encryption is not active */
-	if (!mem_encrypt_active())
-		return 0;
-
 	/* Should not be working on unaligned addresses */
 	if (WARN_ONCE(addr & ~PAGE_MASK, "misaligned address: %#lx\n", addr))
 		addr &= PAGE_MASK;
@@ -2023,6 +2025,17 @@ static int __set_memory_enc_dec(unsigned long addr, int numpages, bool enc)
 	return ret;
 }
 
+static int __set_memory_enc_dec(unsigned long addr, int numpages, bool enc)
+{
+	if (hv_is_isolation_supported())
+		return hv_set_mem_host_visibility(addr, numpages, !enc);
+
+	if (mem_encrypt_active())
+		return __set_memory_enc_pgtable(addr, numpages, enc);
+
+	return 0;
+}
+
 int set_memory_encrypted(unsigned long addr, int numpages)
 {
 	return __set_memory_enc_dec(addr, numpages, true);
diff --git a/include/asm-generic/hyperv-tlfs.h b/include/asm-generic/hyperv-tlfs.h
index 56348a541c50..8ed6733d5146 100644
--- a/include/asm-generic/hyperv-tlfs.h
+++ b/include/asm-generic/hyperv-tlfs.h
@@ -158,6 +158,7 @@ struct ms_hyperv_tsc_page {
 #define HVCALL_RETARGET_INTERRUPT		0x007e
 #define HVCALL_FLUSH_GUEST_PHYSICAL_ADDRESS_SPACE 0x00af
 #define HVCALL_FLUSH_GUEST_PHYSICAL_ADDRESS_LIST 0x00b0
+#define HVCALL_MODIFY_SPARSE_GPA_PAGE_HOST_VISIBILITY 0x00db
 
 /* Extended hypercalls */
 #define HV_EXT_CALL_QUERY_CAPABILITIES 0x8001
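hv_set_mem_host_visibility() in this patch batches page numbers into that
one-page hypercall input and re-issues the rep hypercall whenever the batch is
full or the last page has been added. A small user-space model of just that
loop; mark_visible() is a printf stand-in for hv_mark_gpa_visibility(), and the
PFN values are arbitrary.

/*
 * Batch page numbers into chunks of at most MAX_REP_COUNT and flush each
 * chunk, mirroring the loop in hv_set_mem_host_visibility().
 */
#include <stdint.h>
#include <stdio.h>

#define MAX_REP_COUNT 510	/* HV_MAX_MODIFY_GPA_REP_COUNT for 4 KiB pages */

static int mark_visible(int count, const uint64_t pfns[])
{
	printf("hypercall with %d pfns (first=%llu)\n",
	       count, (unsigned long long)pfns[0]);
	return 0;
}

static int set_host_visibility(uint64_t first_pfn, int pagecount)
{
	uint64_t pfn_array[MAX_REP_COUNT];
	int i, pfn, ret = 0;

	for (i = 0, pfn = 0; i < pagecount; i++) {
		pfn_array[pfn++] = first_pfn + i;

		/* Flush a full batch, or the final partial batch. */
		if (pfn == MAX_REP_COUNT || i == pagecount - 1) {
			ret = mark_visible(pfn, pfn_array);
			if (ret)
				return ret;
			pfn = 0;
		}
	}
	return ret;
}

int main(void)
{
	return set_host_visibility(0x1000, 1200);	/* 3 calls: 510+510+180 */
}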
From patchwork Thu Oct 21 15:41:04 2021
X-Patchwork-Submitter: Tianyu Lan
X-Patchwork-Id: 12575619
X-Patchwork-Delegate: kuba@kernel.org
From: Tianyu Lan
Subject: [PATCH V8 4/9] Drivers: hv: vmbus: Mark vmbus ring buffer visible to host in Isolation VM
Date: Thu, 21 Oct 2021 11:41:04 -0400
Message-Id: <20211021154110.3734294-5-ltykernel@gmail.com>
In-Reply-To: <20211021154110.3734294-1-ltykernel@gmail.com>
References: <20211021154110.3734294-1-ltykernel@gmail.com>

From: Tianyu Lan

Mark the vmbus ring buffer visible to the host with set_memory_decrypted()
when establishing the GPADL handle.

Signed-off-by: Tianyu Lan
---
Change since v5:
	* Replace HVPFN_UP() with PFN_UP() in __vmbus_establish_gpadl().
	* Remove the unused gpadl variable in __vmbus_open() and
	  vmbus_close_internal().
	* Clear gpadl_handle in vmbus_teardown_gpadl().

Change since v4:
	* Change the gpadl handle in the netvsc and uio drivers from u32 to
	  struct vmbus_gpadl.
	* Change vmbus_establish_gpadl()'s gpadl_handle parameter to the
	  vmbus_gpadl data structure.

Change since v3:
	* Change the vmbus_teardown_gpadl() parameter and put the gpadl handle,
	  buffer and buffer size in struct vmbus_gpadl.
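The central data-structure change in this patch is struct vmbus_gpadl (added to
include/linux/hyperv.h further down), which keeps the buffer pointer and size
next to the GPADL handle so that teardown can re-encrypt exactly what establish
decrypted. The rough user-space sketch below shows only that pairing;
establish_gpadl(), teardown_gpadl() and the set_decrypted()/set_encrypted()
helpers are simplified stand-ins, not the kernel API.

/*
 * Model of the decrypt-on-establish / encrypt-on-teardown pairing that the
 * struct vmbus_gpadl bookkeeping makes possible. The decrypt/encrypt helpers
 * are printf stand-ins for set_memory_decrypted()/set_memory_encrypted().
 */
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

struct vmbus_gpadl {
	uint32_t gpadl_handle;
	uint32_t size;
	void *buffer;
};

static int set_decrypted(void *buf, unsigned int pages)
{ printf("decrypt %u page(s) at %p\n", pages, buf); return 0; }
static int set_encrypted(void *buf, unsigned int pages)
{ printf("encrypt %u page(s) at %p\n", pages, buf); return 0; }

static int establish_gpadl(void *kbuffer, uint32_t size, struct vmbus_gpadl *gpadl)
{
	unsigned int pages = (size + 4095) / 4096;	/* PFN_UP(size) */

	if (set_decrypted(kbuffer, pages))
		return -1;

	/* ... GPADL header/body messages would be sent to the host here ... */
	gpadl->gpadl_handle = 42;	/* handle returned by the host */
	gpadl->buffer = kbuffer;
	gpadl->size = size;
	return 0;
}

static void teardown_gpadl(struct vmbus_gpadl *gpadl)
{
	/* ... GPADL teardown message would be sent to the host here ... */
	gpadl->gpadl_handle = 0;
	set_encrypted(gpadl->buffer, (gpadl->size + 4095) / 4096);
}

int main(void)
{
	struct vmbus_gpadl gpadl;
	void *ring = malloc(8192);

	if (!establish_gpadl(ring, 8192, &gpadl))
		teardown_gpadl(&gpadl);
	free(ring);
	return 0;
}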
--- drivers/hv/channel.c | 53 +++++++++++++++++++++++---------- drivers/net/hyperv/hyperv_net.h | 5 ++-- drivers/net/hyperv/netvsc.c | 15 +++++----- drivers/uio/uio_hv_generic.c | 18 +++++------ include/linux/hyperv.h | 12 ++++++-- 5 files changed, 65 insertions(+), 38 deletions(-) diff --git a/drivers/hv/channel.c b/drivers/hv/channel.c index f3761c73b074..b37ff4a39224 100644 --- a/drivers/hv/channel.c +++ b/drivers/hv/channel.c @@ -17,6 +17,7 @@ #include #include #include +#include #include #include @@ -456,7 +457,7 @@ static int create_gpadl_header(enum hv_gpadl_type type, void *kbuffer, static int __vmbus_establish_gpadl(struct vmbus_channel *channel, enum hv_gpadl_type type, void *kbuffer, u32 size, u32 send_offset, - u32 *gpadl_handle) + struct vmbus_gpadl *gpadl) { struct vmbus_channel_gpadl_header *gpadlmsg; struct vmbus_channel_gpadl_body *gpadl_body; @@ -474,6 +475,15 @@ static int __vmbus_establish_gpadl(struct vmbus_channel *channel, if (ret) return ret; + ret = set_memory_decrypted((unsigned long)kbuffer, + PFN_UP(size)); + if (ret) { + dev_warn(&channel->device_obj->device, + "Failed to set host visibility for new GPADL %d.\n", + ret); + return ret; + } + init_completion(&msginfo->waitevent); msginfo->waiting_channel = channel; @@ -537,7 +547,10 @@ static int __vmbus_establish_gpadl(struct vmbus_channel *channel, } /* At this point, we received the gpadl created msg */ - *gpadl_handle = gpadlmsg->gpadl; + gpadl->gpadl_handle = gpadlmsg->gpadl; + gpadl->buffer = kbuffer; + gpadl->size = size; + cleanup: spin_lock_irqsave(&vmbus_connection.channelmsg_lock, flags); @@ -549,6 +562,11 @@ static int __vmbus_establish_gpadl(struct vmbus_channel *channel, } kfree(msginfo); + + if (ret) + set_memory_encrypted((unsigned long)kbuffer, + PFN_UP(size)); + return ret; } @@ -561,10 +579,10 @@ static int __vmbus_establish_gpadl(struct vmbus_channel *channel, * @gpadl_handle: some funky thing */ int vmbus_establish_gpadl(struct vmbus_channel *channel, void *kbuffer, - u32 size, u32 *gpadl_handle) + u32 size, struct vmbus_gpadl *gpadl) { return __vmbus_establish_gpadl(channel, HV_GPADL_BUFFER, kbuffer, size, - 0U, gpadl_handle); + 0U, gpadl); } EXPORT_SYMBOL_GPL(vmbus_establish_gpadl); @@ -675,7 +693,7 @@ static int __vmbus_open(struct vmbus_channel *newchannel, goto error_clean_ring; /* Establish the gpadl for the ring buffer */ - newchannel->ringbuffer_gpadlhandle = 0; + newchannel->ringbuffer_gpadlhandle.gpadl_handle = 0; err = __vmbus_establish_gpadl(newchannel, HV_GPADL_RING, page_address(newchannel->ringbuffer_page), @@ -701,7 +719,8 @@ static int __vmbus_open(struct vmbus_channel *newchannel, open_msg->header.msgtype = CHANNELMSG_OPENCHANNEL; open_msg->openid = newchannel->offermsg.child_relid; open_msg->child_relid = newchannel->offermsg.child_relid; - open_msg->ringbuffer_gpadlhandle = newchannel->ringbuffer_gpadlhandle; + open_msg->ringbuffer_gpadlhandle + = newchannel->ringbuffer_gpadlhandle.gpadl_handle; /* * The unit of ->downstream_ringbuffer_pageoffset is HV_HYP_PAGE and * the unit of ->ringbuffer_send_offset (i.e. 
send_pages) is PAGE, so @@ -759,8 +778,7 @@ static int __vmbus_open(struct vmbus_channel *newchannel, error_free_info: kfree(open_info); error_free_gpadl: - vmbus_teardown_gpadl(newchannel, newchannel->ringbuffer_gpadlhandle); - newchannel->ringbuffer_gpadlhandle = 0; + vmbus_teardown_gpadl(newchannel, &newchannel->ringbuffer_gpadlhandle); error_clean_ring: hv_ringbuffer_cleanup(&newchannel->outbound); hv_ringbuffer_cleanup(&newchannel->inbound); @@ -806,7 +824,7 @@ EXPORT_SYMBOL_GPL(vmbus_open); /* * vmbus_teardown_gpadl -Teardown the specified GPADL handle */ -int vmbus_teardown_gpadl(struct vmbus_channel *channel, u32 gpadl_handle) +int vmbus_teardown_gpadl(struct vmbus_channel *channel, struct vmbus_gpadl *gpadl) { struct vmbus_channel_gpadl_teardown *msg; struct vmbus_channel_msginfo *info; @@ -825,7 +843,7 @@ int vmbus_teardown_gpadl(struct vmbus_channel *channel, u32 gpadl_handle) msg->header.msgtype = CHANNELMSG_GPADL_TEARDOWN; msg->child_relid = channel->offermsg.child_relid; - msg->gpadl = gpadl_handle; + msg->gpadl = gpadl->gpadl_handle; spin_lock_irqsave(&vmbus_connection.channelmsg_lock, flags); list_add_tail(&info->msglistentry, @@ -845,6 +863,8 @@ int vmbus_teardown_gpadl(struct vmbus_channel *channel, u32 gpadl_handle) wait_for_completion(&info->waitevent); + gpadl->gpadl_handle = 0; + post_msg_err: /* * If the channel has been rescinded; @@ -859,6 +879,12 @@ int vmbus_teardown_gpadl(struct vmbus_channel *channel, u32 gpadl_handle) spin_unlock_irqrestore(&vmbus_connection.channelmsg_lock, flags); kfree(info); + + ret = set_memory_encrypted((unsigned long)gpadl->buffer, + PFN_UP(gpadl->size)); + if (ret) + pr_warn("Fail to set mem host visibility in GPADL teardown %d.\n", ret); + return ret; } EXPORT_SYMBOL_GPL(vmbus_teardown_gpadl); @@ -933,9 +959,8 @@ static int vmbus_close_internal(struct vmbus_channel *channel) } /* Tear down the gpadl for the channel's ring buffer */ - else if (channel->ringbuffer_gpadlhandle) { - ret = vmbus_teardown_gpadl(channel, - channel->ringbuffer_gpadlhandle); + else if (channel->ringbuffer_gpadlhandle.gpadl_handle) { + ret = vmbus_teardown_gpadl(channel, &channel->ringbuffer_gpadlhandle); if (ret) { pr_err("Close failed: teardown gpadl return %d\n", ret); /* @@ -943,8 +968,6 @@ static int vmbus_close_internal(struct vmbus_channel *channel) * it is perhaps better to leak memory. 
*/ } - - channel->ringbuffer_gpadlhandle = 0; } if (!ret) diff --git a/drivers/net/hyperv/hyperv_net.h b/drivers/net/hyperv/hyperv_net.h index bc48855dff10..315278a7cf88 100644 --- a/drivers/net/hyperv/hyperv_net.h +++ b/drivers/net/hyperv/hyperv_net.h @@ -1075,14 +1075,15 @@ struct netvsc_device { /* Receive buffer allocated by us but manages by NetVSP */ void *recv_buf; u32 recv_buf_size; /* allocated bytes */ - u32 recv_buf_gpadl_handle; + struct vmbus_gpadl recv_buf_gpadl_handle; u32 recv_section_cnt; u32 recv_section_size; u32 recv_completion_cnt; /* Send buffer allocated by us */ void *send_buf; - u32 send_buf_gpadl_handle; + u32 send_buf_size; + struct vmbus_gpadl send_buf_gpadl_handle; u32 send_section_cnt; u32 send_section_size; unsigned long *send_section_map; diff --git a/drivers/net/hyperv/netvsc.c b/drivers/net/hyperv/netvsc.c index 7bd935412853..396bc1c204e6 100644 --- a/drivers/net/hyperv/netvsc.c +++ b/drivers/net/hyperv/netvsc.c @@ -278,9 +278,9 @@ static void netvsc_teardown_recv_gpadl(struct hv_device *device, { int ret; - if (net_device->recv_buf_gpadl_handle) { + if (net_device->recv_buf_gpadl_handle.gpadl_handle) { ret = vmbus_teardown_gpadl(device->channel, - net_device->recv_buf_gpadl_handle); + &net_device->recv_buf_gpadl_handle); /* If we failed here, we might as well return and have a leak * rather than continue and a bugchk @@ -290,7 +290,6 @@ static void netvsc_teardown_recv_gpadl(struct hv_device *device, "unable to teardown receive buffer's gpadl\n"); return; } - net_device->recv_buf_gpadl_handle = 0; } } @@ -300,9 +299,9 @@ static void netvsc_teardown_send_gpadl(struct hv_device *device, { int ret; - if (net_device->send_buf_gpadl_handle) { + if (net_device->send_buf_gpadl_handle.gpadl_handle) { ret = vmbus_teardown_gpadl(device->channel, - net_device->send_buf_gpadl_handle); + &net_device->send_buf_gpadl_handle); /* If we failed here, we might as well return and have a leak * rather than continue and a bugchk @@ -312,7 +311,6 @@ static void netvsc_teardown_send_gpadl(struct hv_device *device, "unable to teardown send buffer's gpadl\n"); return; } - net_device->send_buf_gpadl_handle = 0; } } @@ -380,7 +378,7 @@ static int netvsc_init_buf(struct hv_device *device, memset(init_packet, 0, sizeof(struct nvsp_message)); init_packet->hdr.msg_type = NVSP_MSG1_TYPE_SEND_RECV_BUF; init_packet->msg.v1_msg.send_recv_buf. - gpadl_handle = net_device->recv_buf_gpadl_handle; + gpadl_handle = net_device->recv_buf_gpadl_handle.gpadl_handle; init_packet->msg.v1_msg. send_recv_buf.id = NETVSC_RECEIVE_BUFFER_ID; @@ -463,6 +461,7 @@ static int netvsc_init_buf(struct hv_device *device, ret = -ENOMEM; goto cleanup; } + net_device->send_buf_size = buf_size; /* Establish the gpadl handle for this buffer on this * channel. 
Note: This call uses the vmbus connection rather @@ -482,7 +481,7 @@ static int netvsc_init_buf(struct hv_device *device, memset(init_packet, 0, sizeof(struct nvsp_message)); init_packet->hdr.msg_type = NVSP_MSG1_TYPE_SEND_SEND_BUF; init_packet->msg.v1_msg.send_send_buf.gpadl_handle = - net_device->send_buf_gpadl_handle; + net_device->send_buf_gpadl_handle.gpadl_handle; init_packet->msg.v1_msg.send_send_buf.id = NETVSC_SEND_BUFFER_ID; trace_nvsp_send(ndev, init_packet); diff --git a/drivers/uio/uio_hv_generic.c b/drivers/uio/uio_hv_generic.c index 652fe2547587..c08a6cfd119f 100644 --- a/drivers/uio/uio_hv_generic.c +++ b/drivers/uio/uio_hv_generic.c @@ -58,11 +58,11 @@ struct hv_uio_private_data { atomic_t refcnt; void *recv_buf; - u32 recv_gpadl; + struct vmbus_gpadl recv_gpadl; char recv_name[32]; /* "recv_4294967295" */ void *send_buf; - u32 send_gpadl; + struct vmbus_gpadl send_gpadl; char send_name[32]; }; @@ -179,15 +179,13 @@ hv_uio_new_channel(struct vmbus_channel *new_sc) static void hv_uio_cleanup(struct hv_device *dev, struct hv_uio_private_data *pdata) { - if (pdata->send_gpadl) { - vmbus_teardown_gpadl(dev->channel, pdata->send_gpadl); - pdata->send_gpadl = 0; + if (pdata->send_gpadl.gpadl_handle) { + vmbus_teardown_gpadl(dev->channel, &pdata->send_gpadl); vfree(pdata->send_buf); } - if (pdata->recv_gpadl) { - vmbus_teardown_gpadl(dev->channel, pdata->recv_gpadl); - pdata->recv_gpadl = 0; + if (pdata->recv_gpadl.gpadl_handle) { + vmbus_teardown_gpadl(dev->channel, &pdata->recv_gpadl); vfree(pdata->recv_buf); } } @@ -303,7 +301,7 @@ hv_uio_probe(struct hv_device *dev, /* put Global Physical Address Label in name */ snprintf(pdata->recv_name, sizeof(pdata->recv_name), - "recv:%u", pdata->recv_gpadl); + "recv:%u", pdata->recv_gpadl.gpadl_handle); pdata->info.mem[RECV_BUF_MAP].name = pdata->recv_name; pdata->info.mem[RECV_BUF_MAP].addr = (uintptr_t)pdata->recv_buf; @@ -324,7 +322,7 @@ hv_uio_probe(struct hv_device *dev, } snprintf(pdata->send_name, sizeof(pdata->send_name), - "send:%u", pdata->send_gpadl); + "send:%u", pdata->send_gpadl.gpadl_handle); pdata->info.mem[SEND_BUF_MAP].name = pdata->send_name; pdata->info.mem[SEND_BUF_MAP].addr = (uintptr_t)pdata->send_buf; diff --git a/include/linux/hyperv.h b/include/linux/hyperv.h index ddc8713ce57b..a9e0bc3b1511 100644 --- a/include/linux/hyperv.h +++ b/include/linux/hyperv.h @@ -803,6 +803,12 @@ struct vmbus_device { #define VMBUS_DEFAULT_MAX_PKT_SIZE 4096 +struct vmbus_gpadl { + u32 gpadl_handle; + u32 size; + void *buffer; +}; + struct vmbus_channel { struct list_head listentry; @@ -822,7 +828,7 @@ struct vmbus_channel { bool rescind_ref; /* got rescind msg, got channel reference */ struct completion rescind_event; - u32 ringbuffer_gpadlhandle; + struct vmbus_gpadl ringbuffer_gpadlhandle; /* Allocated memory for ring buffer */ struct page *ringbuffer_page; @@ -1192,10 +1198,10 @@ extern int vmbus_sendpacket_mpb_desc(struct vmbus_channel *channel, extern int vmbus_establish_gpadl(struct vmbus_channel *channel, void *kbuffer, u32 size, - u32 *gpadl_handle); + struct vmbus_gpadl *gpadl); extern int vmbus_teardown_gpadl(struct vmbus_channel *channel, - u32 gpadl_handle); + struct vmbus_gpadl *gpadl); void vmbus_reset_channel_cb(struct vmbus_channel *channel); From patchwork Thu Oct 21 15:41:05 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Tianyu Lan X-Patchwork-Id: 12575621 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on 
From: Tianyu Lan
Subject: [PATCH V8 5/9] x86/sev-es: Expose sev_es_ghcb_hv_call() to call ghcb hv call out of sev code
Date: Thu, 21 Oct 2021 11:41:05 -0400
Message-Id: <20211021154110.3734294-6-ltykernel@gmail.com>
In-Reply-To: <20211021154110.3734294-1-ltykernel@gmail.com>
References: <20211021154110.3734294-1-ltykernel@gmail.com>

From: Tianyu Lan

Hyper-V needs to make GHCB hv calls to read/write MSRs in Isolation VMs, so
expose sev_es_ghcb_hv_call() for use by the Hyper-V code.

Hyper-V Isolation VMs are unenlightened guests that run a paravisor in VMPL0
for communication, and the GHCB pages are allocated and set up by that
paravisor. Linux gets the GHCB page PA via MSR_AMD64_SEV_ES_GHCB from the
paravisor and should not change it. Add a set_ghcb_msr parameter to
sev_es_ghcb_hv_call() and do not set the GHCB page PA when it is false.

Signed-off-by: Tianyu Lan
---
 arch/x86/include/asm/sev.h   | 12 ++++++++++++
 arch/x86/kernel/sev-shared.c | 26 +++++++++++++++++---------
 arch/x86/kernel/sev.c        | 13 +++++++------
 3 files changed, 36 insertions(+), 15 deletions(-)

diff --git a/arch/x86/include/asm/sev.h b/arch/x86/include/asm/sev.h
index fa5cd05d3b5b..5b7f7e2b81f7 100644
--- a/arch/x86/include/asm/sev.h
+++ b/arch/x86/include/asm/sev.h
@@ -81,12 +81,24 @@ static __always_inline void sev_es_nmi_complete(void)
 		__sev_es_nmi_complete();
 }
 extern int __init sev_es_efi_map_ghcbs(pgd_t *pgd);
+extern enum es_result sev_es_ghcb_hv_call(struct ghcb *ghcb,
+					  bool set_ghcb_msr,
+					  struct es_em_ctxt *ctxt,
+					  u64 exit_code, u64 exit_info_1,
+					  u64 exit_info_2);
 #else
 static inline void sev_es_ist_enter(struct pt_regs *regs) { }
 static inline void sev_es_ist_exit(void) { }
 static inline int sev_es_setup_ap_jump_table(struct real_mode_header *rmh) { return 0; }
 static inline void sev_es_nmi_complete(void) { }
 static inline int sev_es_efi_map_ghcbs(pgd_t *pgd) { return 0; }
+static inline enum
+es_result sev_es_ghcb_hv_call(struct ghcb *ghcb,
+			      bool set_ghcb_msr, u64 exit_code,
+			      u64 exit_info_1, u64 exit_info_2)
+{
+	return ES_VMM_ERROR;
+}
 #endif
 
 #endif
diff --git a/arch/x86/kernel/sev-shared.c b/arch/x86/kernel/sev-shared.c
index ea9abd69237e..368ed36971e3 100644
--- a/arch/x86/kernel/sev-shared.c
+++ b/arch/x86/kernel/sev-shared.c
@@ -124,10 +124,9 @@ static enum es_result verify_exception_info(struct ghcb *ghcb, struct es_em_ctxt
 	return ES_VMM_ERROR;
 }
 
-static enum es_result sev_es_ghcb_hv_call(struct ghcb *ghcb,
-					  struct es_em_ctxt *ctxt,
-					  u64 exit_code, u64 exit_info_1,
-					  u64 exit_info_2)
+enum es_result sev_es_ghcb_hv_call(struct ghcb *ghcb, bool set_ghcb_msr,
+				   struct es_em_ctxt *ctxt, u64 exit_code,
+				   u64 exit_info_1, u64 exit_info_2)
 {
 	/* Fill in protocol and format specifiers */
 	ghcb->protocol_version = GHCB_PROTOCOL_MAX;
@@ -137,7 +136,15 @@ static enum es_result sev_es_ghcb_hv_call(struct ghcb *ghcb,
 	ghcb_set_sw_exit_info_1(ghcb, exit_info_1);
 	ghcb_set_sw_exit_info_2(ghcb, exit_info_2);
 
-	sev_es_wr_ghcb_msr(__pa(ghcb));
+	/*
+	 * Hyper-V unenlightened guests use a paravisor for communicating and
+	 * GHCB pages are being allocated and set up by that paravisor. Linux
+	 * should not change ghcb page pa in such case and so add set_ghcb_msr
+	 * parameter.
+ */ + if (set_ghcb_msr) + sev_es_wr_ghcb_msr(__pa(ghcb)); + VMGEXIT(); return verify_exception_info(ghcb, ctxt); @@ -417,7 +424,7 @@ static enum es_result vc_handle_ioio(struct ghcb *ghcb, struct es_em_ctxt *ctxt) */ sw_scratch = __pa(ghcb) + offsetof(struct ghcb, shared_buffer); ghcb_set_sw_scratch(ghcb, sw_scratch); - ret = sev_es_ghcb_hv_call(ghcb, ctxt, SVM_EXIT_IOIO, + ret = sev_es_ghcb_hv_call(ghcb, true, ctxt, SVM_EXIT_IOIO, exit_info_1, exit_info_2); if (ret != ES_OK) return ret; @@ -459,7 +466,8 @@ static enum es_result vc_handle_ioio(struct ghcb *ghcb, struct es_em_ctxt *ctxt) ghcb_set_rax(ghcb, rax); - ret = sev_es_ghcb_hv_call(ghcb, ctxt, SVM_EXIT_IOIO, exit_info_1, 0); + ret = sev_es_ghcb_hv_call(ghcb, true, ctxt, + SVM_EXIT_IOIO, exit_info_1, 0); if (ret != ES_OK) return ret; @@ -490,7 +498,7 @@ static enum es_result vc_handle_cpuid(struct ghcb *ghcb, /* xgetbv will cause #GP - use reset value for xcr0 */ ghcb_set_xcr0(ghcb, 1); - ret = sev_es_ghcb_hv_call(ghcb, ctxt, SVM_EXIT_CPUID, 0, 0); + ret = sev_es_ghcb_hv_call(ghcb, true, ctxt, SVM_EXIT_CPUID, 0, 0); if (ret != ES_OK) return ret; @@ -515,7 +523,7 @@ static enum es_result vc_handle_rdtsc(struct ghcb *ghcb, bool rdtscp = (exit_code == SVM_EXIT_RDTSCP); enum es_result ret; - ret = sev_es_ghcb_hv_call(ghcb, ctxt, exit_code, 0, 0); + ret = sev_es_ghcb_hv_call(ghcb, true, ctxt, exit_code, 0, 0); if (ret != ES_OK) return ret; diff --git a/arch/x86/kernel/sev.c b/arch/x86/kernel/sev.c index a6895e440bc3..bb62a1d15d6c 100644 --- a/arch/x86/kernel/sev.c +++ b/arch/x86/kernel/sev.c @@ -648,7 +648,8 @@ static enum es_result vc_handle_msr(struct ghcb *ghcb, struct es_em_ctxt *ctxt) ghcb_set_rdx(ghcb, regs->dx); } - ret = sev_es_ghcb_hv_call(ghcb, ctxt, SVM_EXIT_MSR, exit_info_1, 0); + ret = sev_es_ghcb_hv_call(ghcb, true, ctxt, SVM_EXIT_MSR, + exit_info_1, 0); if ((ret == ES_OK) && (!exit_info_1)) { regs->ax = ghcb->save.rax; @@ -867,7 +868,7 @@ static enum es_result vc_do_mmio(struct ghcb *ghcb, struct es_em_ctxt *ctxt, ghcb_set_sw_scratch(ghcb, ghcb_pa + offsetof(struct ghcb, shared_buffer)); - return sev_es_ghcb_hv_call(ghcb, ctxt, exit_code, exit_info_1, exit_info_2); + return sev_es_ghcb_hv_call(ghcb, true, ctxt, exit_code, exit_info_1, exit_info_2); } static enum es_result vc_handle_mmio_twobyte_ops(struct ghcb *ghcb, @@ -1117,7 +1118,7 @@ static enum es_result vc_handle_dr7_write(struct ghcb *ghcb, /* Using a value of 0 for ExitInfo1 means RAX holds the value */ ghcb_set_rax(ghcb, val); - ret = sev_es_ghcb_hv_call(ghcb, ctxt, SVM_EXIT_WRITE_DR7, 0, 0); + ret = sev_es_ghcb_hv_call(ghcb, true, ctxt, SVM_EXIT_WRITE_DR7, 0, 0); if (ret != ES_OK) return ret; @@ -1147,7 +1148,7 @@ static enum es_result vc_handle_dr7_read(struct ghcb *ghcb, static enum es_result vc_handle_wbinvd(struct ghcb *ghcb, struct es_em_ctxt *ctxt) { - return sev_es_ghcb_hv_call(ghcb, ctxt, SVM_EXIT_WBINVD, 0, 0); + return sev_es_ghcb_hv_call(ghcb, true, ctxt, SVM_EXIT_WBINVD, 0, 0); } static enum es_result vc_handle_rdpmc(struct ghcb *ghcb, struct es_em_ctxt *ctxt) @@ -1156,7 +1157,7 @@ static enum es_result vc_handle_rdpmc(struct ghcb *ghcb, struct es_em_ctxt *ctxt ghcb_set_rcx(ghcb, ctxt->regs->cx); - ret = sev_es_ghcb_hv_call(ghcb, ctxt, SVM_EXIT_RDPMC, 0, 0); + ret = sev_es_ghcb_hv_call(ghcb, true, ctxt, SVM_EXIT_RDPMC, 0, 0); if (ret != ES_OK) return ret; @@ -1197,7 +1198,7 @@ static enum es_result vc_handle_vmmcall(struct ghcb *ghcb, if (x86_platform.hyper.sev_es_hcall_prepare) x86_platform.hyper.sev_es_hcall_prepare(ghcb, ctxt->regs); - ret = 
sev_es_ghcb_hv_call(ghcb, ctxt, SVM_EXIT_VMMCALL, 0, 0); + ret = sev_es_ghcb_hv_call(ghcb, true, ctxt, SVM_EXIT_VMMCALL, 0, 0); if (ret != ES_OK) return ret; From patchwork Thu Oct 21 15:41:06 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Tianyu Lan X-Patchwork-Id: 12575623 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id AE286C433F5 for ; Thu, 21 Oct 2021 15:41:38 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 9590F611C7 for ; Thu, 21 Oct 2021 15:41:38 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S231891AbhJUPnw (ORCPT ); Thu, 21 Oct 2021 11:43:52 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:56712 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S231890AbhJUPnl (ORCPT ); Thu, 21 Oct 2021 11:43:41 -0400 Received: from mail-pj1-x1033.google.com (mail-pj1-x1033.google.com [IPv6:2607:f8b0:4864:20::1033]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id E0F65C061225; Thu, 21 Oct 2021 08:41:25 -0700 (PDT) Received: by mail-pj1-x1033.google.com with SMTP id ls18so783217pjb.3; Thu, 21 Oct 2021 08:41:25 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20210112; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=1ngx5jo+Jj3c2MNTpGRB/73rHcT6sBOx1capK3E8/Ok=; b=i+X+Eqqrev/dVrvcyvBx+xB0Cq4YhH2TLYFYF5j2kSusBdJBArEdi5KsM/yH1SOzZU b3XDEN1CWlhBAyTYiN9hOc/eZLdqYRvo9OEdsldJrHyCO27pieOrMm/ekw3DdBd5CFAX 9L4TWcWp6XL6aI/8EJRQKHabxLKlwI9nNI882CMjFt0i4YtZ0iLbytZQJTgETiqu2nLI TCi9oAj6E0refFyPrl0m7n4+658QwExmxtJtNKzGjTPKTrhM4KEhvt+vAwXHdRDwHVbB SgrlsWnkvdhGMQqyEt4uWciymFMv1DF25cq53PgqevHm6Qo+e7suc92g6SfsPGb4nGti I7VQ== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=1ngx5jo+Jj3c2MNTpGRB/73rHcT6sBOx1capK3E8/Ok=; b=rKEPYtYuJrzxrnu9ipzUq1UW/E8G2sA3kzpv0BT/3AK8lwIc5aGC/RNR+uK4rLj18t YbDzTALZb/JPMKKMRx+QAhQ2+jg75KZwWMT/2MapZoOkwWjN2k9O5fmHAsiFtOHw/uUo nKC7KKrSj5+WDxiMY84Gi2sokkNeZB8H6SmDX5qsNgxFpqA/Xi3uZVABGSpZLNu+vbP1 lra5Mm0wKiRT8W3W4TK1f2RneFhsVUwfLNXabxfiMw74A7UzrL4xvv6J7/WXr5YskJ7F ZIl/VVqh9LJPkoTV5HIqPbesboctGuWKqg/Hlg1LlH5sELhEKghmlfDdY8kX8Op6VFMP x6gA== X-Gm-Message-State: AOAM530a2eOgo4urWnZ0cOgFGnq1oAZuQWp3vx0kT2xE3brg4AbpvDVJ woEQPS5Hxr3uW3FksOHwZUI= X-Google-Smtp-Source: ABdhPJyHTKoyZxTU48NztOZqFaz4rllSDFSD1+Jypvm3BDhEwcZ/SZqSqt0KaS0KftHokXS1w13k8A== X-Received: by 2002:a17:90b:1d86:: with SMTP id pf6mr1708382pjb.20.1634830885327; Thu, 21 Oct 2021 08:41:25 -0700 (PDT) Received: from ubuntu-Virtual-Machine.corp.microsoft.com ([2001:4898:80e8:f:a76d:53a5:b89f:c2a0]) by smtp.gmail.com with ESMTPSA id p9sm6384130pfn.7.2021.10.21.08.41.24 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Thu, 21 Oct 2021 08:41:25 -0700 (PDT) From: Tianyu Lan To: kys@microsoft.com, haiyangz@microsoft.com, sthemmin@microsoft.com, wei.liu@kernel.org, decui@microsoft.com, tglx@linutronix.de, mingo@redhat.com, bp@alien8.de, x86@kernel.org, hpa@zytor.com, dave.hansen@linux.intel.com, luto@kernel.org, peterz@infradead.org, davem@davemloft.net, 
kuba@kernel.org, gregkh@linuxfoundation.org, arnd@arndb.de, brijesh.singh@amd.com, jroedel@suse.de, Tianyu.Lan@microsoft.com, thomas.lendacky@amd.com, rientjes@google.com, pgonda@google.com, akpm@linux-foundation.org, kirill.shutemov@linux.intel.com, rppt@kernel.org, saravanand@fb.com, aneesh.kumar@linux.ibm.com, hannes@cmpxchg.org, tj@kernel.org, michael.h.kelley@microsoft.com Cc: linux-arch@vger.kernel.org, linux-hyperv@vger.kernel.org, linux-kernel@vger.kernel.org, netdev@vger.kernel.org, vkuznets@redhat.com, konrad.wilk@oracle.com, hch@lst.de, robin.murphy@arm.com, joro@8bytes.org, parri.andrea@gmail.com, dave.hansen@intel.com Subject: [PATCH V8 6/9] x86/hyperv: Add Write/Read MSR registers via ghcb page Date: Thu, 21 Oct 2021 11:41:06 -0400 Message-Id: <20211021154110.3734294-7-ltykernel@gmail.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20211021154110.3734294-1-ltykernel@gmail.com> References: <20211021154110.3734294-1-ltykernel@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: netdev@vger.kernel.org From: Tianyu Lan Hyperv provides GHCB protocol to write Synthetic Interrupt Controller MSR registers in Isolation VM with AMD SEV SNP and these registers are emulated by hypervisor directly. Hyperv requires to write SINTx MSR registers twice. First writes MSR via GHCB page to communicate with hypervisor and then writes wrmsr instruction to talk with paravisor which runs in VMPL0. Guest OS ID MSR also needs to be set via GHCB page. Signed-off-by: Tianyu Lan --- Change since v6: * Spilt sev-es code into separate patch * Add hv_get/set_register() dummy function under CONFIG_HYPERV is not selected to fix compile error. Change since v5: * Adjust change layout in the asm/mshyperv.h to make hv_is_synic_reg(), hv_get_register() and hv_set_register() ahead of the #include of asm-generic/mshyperv.h * Remove Spurious blank line Change since v4: * Remove hv_get_simp(), hv_get_siefp() hv_get_synint_*() helper function. Move the logic into hv_get/set_register(). Change since v3: * Pass old_msg_type to hv_signal_eom() as parameter. * Use HV_REGISTER_* marcro instead of HV_X64_MSR_* * Add hv_isolation_type_snp() weak function. * Add maros to set syinc register in ARM code. Change since v1: * Introduce sev_es_ghcb_hv_call_simple() and share code between SEV and Hyper-V code. --- arch/x86/hyperv/hv_init.c | 36 +++-------- arch/x86/hyperv/ivm.c | 107 ++++++++++++++++++++++++++++++++ arch/x86/include/asm/mshyperv.h | 57 ++++++++++++----- drivers/hv/hv.c | 74 +++++++++++++++++----- drivers/hv/hv_common.c | 6 ++ include/asm-generic/mshyperv.h | 1 + 6 files changed, 222 insertions(+), 59 deletions(-) diff --git a/arch/x86/hyperv/hv_init.c b/arch/x86/hyperv/hv_init.c index d57df6825527..a16a83e46a30 100644 --- a/arch/x86/hyperv/hv_init.c +++ b/arch/x86/hyperv/hv_init.c @@ -37,7 +37,7 @@ EXPORT_SYMBOL_GPL(hv_current_partition_id); void *hv_hypercall_pg; EXPORT_SYMBOL_GPL(hv_hypercall_pg); -void __percpu **hv_ghcb_pg; +union hv_ghcb __percpu **hv_ghcb_pg; /* Storage to save the hypercall page temporarily for hibernation */ static void *hv_hypercall_pg_saved; @@ -406,7 +406,7 @@ void __init hyperv_init(void) } if (hv_isolation_type_snp()) { - hv_ghcb_pg = alloc_percpu(void *); + hv_ghcb_pg = alloc_percpu(union hv_ghcb *); if (!hv_ghcb_pg) goto free_vp_assist_page; } @@ -424,6 +424,9 @@ void __init hyperv_init(void) guest_id = generate_guest_id(0, LINUX_VERSION_CODE, 0); wrmsrl(HV_X64_MSR_GUEST_OS_ID, guest_id); + /* Hyper-V requires to write guest os id via ghcb in SNP IVM. 
*/ + hv_ghcb_msr_write(HV_X64_MSR_GUEST_OS_ID, guest_id); + hv_hypercall_pg = __vmalloc_node_range(PAGE_SIZE, 1, VMALLOC_START, VMALLOC_END, GFP_KERNEL, PAGE_KERNEL_ROX, VM_FLUSH_RESET_PERMS, NUMA_NO_NODE, @@ -501,6 +504,7 @@ void __init hyperv_init(void) clean_guest_os_id: wrmsrl(HV_X64_MSR_GUEST_OS_ID, 0); + hv_ghcb_msr_write(HV_X64_MSR_GUEST_OS_ID, 0); cpuhp_remove_state(cpuhp); free_ghcb_page: free_percpu(hv_ghcb_pg); @@ -522,6 +526,7 @@ void hyperv_cleanup(void) /* Reset our OS id */ wrmsrl(HV_X64_MSR_GUEST_OS_ID, 0); + hv_ghcb_msr_write(HV_X64_MSR_GUEST_OS_ID, 0); /* * Reset hypercall page reference before reset the page, @@ -592,30 +597,3 @@ bool hv_is_hyperv_initialized(void) return hypercall_msr.enable; } EXPORT_SYMBOL_GPL(hv_is_hyperv_initialized); - -enum hv_isolation_type hv_get_isolation_type(void) -{ - if (!(ms_hyperv.priv_high & HV_ISOLATION)) - return HV_ISOLATION_TYPE_NONE; - return FIELD_GET(HV_ISOLATION_TYPE, ms_hyperv.isolation_config_b); -} -EXPORT_SYMBOL_GPL(hv_get_isolation_type); - -bool hv_is_isolation_supported(void) -{ - if (!cpu_feature_enabled(X86_FEATURE_HYPERVISOR)) - return false; - - if (!hypervisor_is_type(X86_HYPER_MS_HYPERV)) - return false; - - return hv_get_isolation_type() != HV_ISOLATION_TYPE_NONE; -} - -DEFINE_STATIC_KEY_FALSE(isolation_type_snp); - -bool hv_isolation_type_snp(void) -{ - return static_branch_unlikely(&isolation_type_snp); -} -EXPORT_SYMBOL_GPL(hv_isolation_type_snp); diff --git a/arch/x86/hyperv/ivm.c b/arch/x86/hyperv/ivm.c index 79e7fb83472a..925cbc3e7601 100644 --- a/arch/x86/hyperv/ivm.c +++ b/arch/x86/hyperv/ivm.c @@ -6,12 +6,119 @@ * Tianyu Lan */ +#include +#include #include #include #include #include +#include +#include #include #include +#include + +union hv_ghcb { + struct ghcb ghcb; +} __packed __aligned(HV_HYP_PAGE_SIZE); + +void hv_ghcb_msr_write(u64 msr, u64 value) +{ + union hv_ghcb *hv_ghcb; + void **ghcb_base; + unsigned long flags; + struct es_em_ctxt ctxt; + + if (!hv_ghcb_pg) + return; + + WARN_ON(in_nmi()); + + local_irq_save(flags); + ghcb_base = (void **)this_cpu_ptr(hv_ghcb_pg); + hv_ghcb = (union hv_ghcb *)*ghcb_base; + if (!hv_ghcb) { + local_irq_restore(flags); + return; + } + + ghcb_set_rcx(&hv_ghcb->ghcb, msr); + ghcb_set_rax(&hv_ghcb->ghcb, lower_32_bits(value)); + ghcb_set_rdx(&hv_ghcb->ghcb, upper_32_bits(value)); + + if (sev_es_ghcb_hv_call(&hv_ghcb->ghcb, false, &ctxt, + SVM_EXIT_MSR, 1, 0)) + pr_warn("Fail to write msr via ghcb %llx.\n", msr); + + local_irq_restore(flags); +} + +void hv_ghcb_msr_read(u64 msr, u64 *value) +{ + union hv_ghcb *hv_ghcb; + void **ghcb_base; + unsigned long flags; + struct es_em_ctxt ctxt; + + /* Check size of union hv_ghcb here. 
*/ + BUILD_BUG_ON(sizeof(union hv_ghcb) != HV_HYP_PAGE_SIZE); + + if (!hv_ghcb_pg) + return; + + WARN_ON(in_nmi()); + + local_irq_save(flags); + ghcb_base = (void **)this_cpu_ptr(hv_ghcb_pg); + hv_ghcb = (union hv_ghcb *)*ghcb_base; + if (!hv_ghcb) { + local_irq_restore(flags); + return; + } + + ghcb_set_rcx(&hv_ghcb->ghcb, msr); + if (sev_es_ghcb_hv_call(&hv_ghcb->ghcb, false, &ctxt, + SVM_EXIT_MSR, 0, 0)) + pr_warn("Fail to read msr via ghcb %llx.\n", msr); + else + *value = (u64)lower_32_bits(hv_ghcb->ghcb.save.rax) + | ((u64)lower_32_bits(hv_ghcb->ghcb.save.rdx) << 32); + local_irq_restore(flags); +} + +enum hv_isolation_type hv_get_isolation_type(void) +{ + if (!(ms_hyperv.priv_high & HV_ISOLATION)) + return HV_ISOLATION_TYPE_NONE; + return FIELD_GET(HV_ISOLATION_TYPE, ms_hyperv.isolation_config_b); +} +EXPORT_SYMBOL_GPL(hv_get_isolation_type); + +/* + * hv_is_isolation_supported - Check system runs in the Hyper-V + * isolation VM. + */ +bool hv_is_isolation_supported(void) +{ + if (!cpu_feature_enabled(X86_FEATURE_HYPERVISOR)) + return false; + + if (!hypervisor_is_type(X86_HYPER_MS_HYPERV)) + return false; + + return hv_get_isolation_type() != HV_ISOLATION_TYPE_NONE; +} + +DEFINE_STATIC_KEY_FALSE(isolation_type_snp); + +/* + * hv_isolation_type_snp - Check system runs in the AMD SEV-SNP based + * isolation VM. + */ +bool hv_isolation_type_snp(void) +{ + return static_branch_unlikely(&isolation_type_snp); +} /* * hv_mark_gpa_visibility - Set pages visible to host via hvcall. diff --git a/arch/x86/include/asm/mshyperv.h b/arch/x86/include/asm/mshyperv.h index f3154ca41ac4..eb1ba33da113 100644 --- a/arch/x86/include/asm/mshyperv.h +++ b/arch/x86/include/asm/mshyperv.h @@ -11,25 +11,14 @@ #include #include +union hv_ghcb; + DECLARE_STATIC_KEY_FALSE(isolation_type_snp); typedef int (*hyperv_fill_flush_list_func)( struct hv_guest_mapping_flush_list *flush, void *data); -static inline void hv_set_register(unsigned int reg, u64 value) -{ - wrmsrl(reg, value); -} - -static inline u64 hv_get_register(unsigned int reg) -{ - u64 value; - - rdmsrl(reg, value); - return value; -} - #define hv_get_raw_timer() rdtsc_ordered() void hyperv_vector_handler(struct pt_regs *regs); @@ -41,7 +30,7 @@ extern void *hv_hypercall_pg; extern u64 hv_current_partition_id; -extern void __percpu **hv_ghcb_pg; +extern union hv_ghcb __percpu **hv_ghcb_pg; int hv_call_deposit_pages(int node, u64 partition_id, u32 num_pages); int hv_call_add_logical_proc(int node, u32 lp_index, u32 acpi_id); @@ -193,6 +182,44 @@ int hv_map_ioapic_interrupt(int ioapic_id, bool level, int vcpu, int vector, struct hv_interrupt_entry *entry); int hv_unmap_ioapic_interrupt(int ioapic_id, struct hv_interrupt_entry *entry); int hv_set_mem_host_visibility(unsigned long addr, int numpages, bool visible); +void hv_ghcb_msr_write(u64 msr, u64 value); +void hv_ghcb_msr_read(u64 msr, u64 *value); + +extern bool hv_isolation_type_snp(void); + +static inline bool hv_is_synic_reg(unsigned int reg) +{ + if ((reg >= HV_REGISTER_SCONTROL) && + (reg <= HV_REGISTER_SINT15)) + return true; + return false; +} + +static inline u64 hv_get_register(unsigned int reg) +{ + u64 value; + + if (hv_is_synic_reg(reg) && hv_isolation_type_snp()) + hv_ghcb_msr_read(reg, &value); + else + rdmsrl(reg, value); + return value; +} + +static inline void hv_set_register(unsigned int reg, u64 value) +{ + if (hv_is_synic_reg(reg) && hv_isolation_type_snp()) { + hv_ghcb_msr_write(reg, value); + + /* Write proxy bit via wrmsl instruction */ + if (reg >= HV_REGISTER_SINT0 && + reg 
<= HV_REGISTER_SINT15) + wrmsrl(reg, value | 1 << 20); + } else { + wrmsrl(reg, value); + } +} + #else /* CONFIG_HYPERV */ static inline void hyperv_init(void) {} static inline void hyperv_setup_mmu_ops(void) {} @@ -209,6 +236,8 @@ static inline int hyperv_flush_guest_mapping_range(u64 as, { return -1; } +static inline void hv_set_register(unsigned int reg, u64 value) { } +static inline u64 hv_get_register(unsigned int reg) { return 0; } static inline int hv_set_mem_host_visibility(unsigned long addr, int numpages, bool visible) { diff --git a/drivers/hv/hv.c b/drivers/hv/hv.c index e83507f49676..943392db9e8a 100644 --- a/drivers/hv/hv.c +++ b/drivers/hv/hv.c @@ -8,6 +8,7 @@ */ #define pr_fmt(fmt) KBUILD_MODNAME ": " fmt +#include #include #include #include @@ -136,17 +137,24 @@ int hv_synic_alloc(void) tasklet_init(&hv_cpu->msg_dpc, vmbus_on_msg_dpc, (unsigned long) hv_cpu); - hv_cpu->synic_message_page = - (void *)get_zeroed_page(GFP_ATOMIC); - if (hv_cpu->synic_message_page == NULL) { - pr_err("Unable to allocate SYNIC message page\n"); - goto err; - } + /* + * Synic message and event pages are allocated by paravisor. + * Skip these pages allocation here. + */ + if (!hv_isolation_type_snp()) { + hv_cpu->synic_message_page = + (void *)get_zeroed_page(GFP_ATOMIC); + if (hv_cpu->synic_message_page == NULL) { + pr_err("Unable to allocate SYNIC message page\n"); + goto err; + } - hv_cpu->synic_event_page = (void *)get_zeroed_page(GFP_ATOMIC); - if (hv_cpu->synic_event_page == NULL) { - pr_err("Unable to allocate SYNIC event page\n"); - goto err; + hv_cpu->synic_event_page = + (void *)get_zeroed_page(GFP_ATOMIC); + if (hv_cpu->synic_event_page == NULL) { + pr_err("Unable to allocate SYNIC event page\n"); + goto err; + } } hv_cpu->post_msg_page = (void *)get_zeroed_page(GFP_ATOMIC); @@ -201,16 +209,35 @@ void hv_synic_enable_regs(unsigned int cpu) /* Setup the Synic's message page */ simp.as_uint64 = hv_get_register(HV_REGISTER_SIMP); simp.simp_enabled = 1; - simp.base_simp_gpa = virt_to_phys(hv_cpu->synic_message_page) - >> HV_HYP_PAGE_SHIFT; + + if (hv_isolation_type_snp()) { + hv_cpu->synic_message_page + = memremap(simp.base_simp_gpa << HV_HYP_PAGE_SHIFT, + HV_HYP_PAGE_SIZE, MEMREMAP_WB); + if (!hv_cpu->synic_message_page) + pr_err("Fail to map syinc message page.\n"); + } else { + simp.base_simp_gpa = virt_to_phys(hv_cpu->synic_message_page) + >> HV_HYP_PAGE_SHIFT; + } hv_set_register(HV_REGISTER_SIMP, simp.as_uint64); /* Setup the Synic's event page */ siefp.as_uint64 = hv_get_register(HV_REGISTER_SIEFP); siefp.siefp_enabled = 1; - siefp.base_siefp_gpa = virt_to_phys(hv_cpu->synic_event_page) - >> HV_HYP_PAGE_SHIFT; + + if (hv_isolation_type_snp()) { + hv_cpu->synic_event_page = + memremap(siefp.base_siefp_gpa << HV_HYP_PAGE_SHIFT, + HV_HYP_PAGE_SIZE, MEMREMAP_WB); + + if (!hv_cpu->synic_event_page) + pr_err("Fail to map syinc event page.\n"); + } else { + siefp.base_siefp_gpa = virt_to_phys(hv_cpu->synic_event_page) + >> HV_HYP_PAGE_SHIFT; + } hv_set_register(HV_REGISTER_SIEFP, siefp.as_uint64); @@ -257,6 +284,8 @@ int hv_synic_init(unsigned int cpu) */ void hv_synic_disable_regs(unsigned int cpu) { + struct hv_per_cpu_context *hv_cpu + = per_cpu_ptr(hv_context.cpu_context, cpu); union hv_synic_sint shared_sint; union hv_synic_simp simp; union hv_synic_siefp siefp; @@ -273,14 +302,27 @@ void hv_synic_disable_regs(unsigned int cpu) shared_sint.as_uint64); simp.as_uint64 = hv_get_register(HV_REGISTER_SIMP); + /* + * In Isolation VM, sim and sief pages are allocated by + * paravisor. 
These pages also will be used by kdump + * kernel. So just reset enable bit here and keep page + * addresses. + */ simp.simp_enabled = 0; - simp.base_simp_gpa = 0; + if (hv_isolation_type_snp()) + memunmap(hv_cpu->synic_message_page); + else + simp.base_simp_gpa = 0; hv_set_register(HV_REGISTER_SIMP, simp.as_uint64); siefp.as_uint64 = hv_get_register(HV_REGISTER_SIEFP); siefp.siefp_enabled = 0; - siefp.base_siefp_gpa = 0; + + if (hv_isolation_type_snp()) + memunmap(hv_cpu->synic_event_page); + else + siefp.base_siefp_gpa = 0; hv_set_register(HV_REGISTER_SIEFP, siefp.as_uint64); diff --git a/drivers/hv/hv_common.c b/drivers/hv/hv_common.c index c0d9048a4112..1fc82d237161 100644 --- a/drivers/hv/hv_common.c +++ b/drivers/hv/hv_common.c @@ -249,6 +249,12 @@ bool __weak hv_is_isolation_supported(void) } EXPORT_SYMBOL_GPL(hv_is_isolation_supported); +bool __weak hv_isolation_type_snp(void) +{ + return false; +} +EXPORT_SYMBOL_GPL(hv_isolation_type_snp); + void __weak hv_setup_vmbus_handler(void (*handler)(void)) { } diff --git a/include/asm-generic/mshyperv.h b/include/asm-generic/mshyperv.h index a8ac497167d2..6d3ba902ebb0 100644 --- a/include/asm-generic/mshyperv.h +++ b/include/asm-generic/mshyperv.h @@ -54,6 +54,7 @@ extern void __percpu **hyperv_pcpu_output_arg; extern u64 hv_do_hypercall(u64 control, void *inputaddr, void *outputaddr); extern u64 hv_do_fast_hypercall8(u16 control, u64 input8); +extern bool hv_isolation_type_snp(void); /* Helper functions that provide a consistent pattern for checking Hyper-V hypercall status. */ static inline int hv_result(u64 status) From patchwork Thu Oct 21 15:41:07 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Tianyu Lan X-Patchwork-Id: 12575625 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 17F45C433F5 for ; Thu, 21 Oct 2021 15:41:41 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 03D036135A for ; Thu, 21 Oct 2021 15:41:40 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S231844AbhJUPnz (ORCPT ); Thu, 21 Oct 2021 11:43:55 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:56726 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S231925AbhJUPno (ORCPT ); Thu, 21 Oct 2021 11:43:44 -0400 Received: from mail-pl1-x62c.google.com (mail-pl1-x62c.google.com [IPv6:2607:f8b0:4864:20::62c]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 459DCC061226; Thu, 21 Oct 2021 08:41:27 -0700 (PDT) Received: by mail-pl1-x62c.google.com with SMTP id n11so698576plf.4; Thu, 21 Oct 2021 08:41:27 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20210112; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=YOBH9dydviDXfIFLYCaslNXDxefpefD14WizJYisjA4=; b=MpZyWB24GtgCcGVENA/WXQGZ6vOWxZyN+P/KQ3/7ARBHMGs0YjpQX5AM5iNG0jK5SV msAycoTuvP2yakUDRojp325U6rrYPt1aGZCz1XhKOA4RF6i/c5uOOZeDO51Ox5jTME24 zLT44ID7yrHiv7h/4vtfRJ52yxDLafa2y1n4qp1Ro2gm0ndBYZSRutQmrrFofbFyxL6x PWV7sFWsFQJ7GhBEV+5WsMOTtQ4XsSlWGJKn6SeMVWspChM5NZv2BAzxTuopnIU6/vyL txuEqCYkeoI/AY/gdGGP72pLiVuwBynC57w7HE7WT2tjV4LEmEmwcxlShsyw1RAgwwBe pQWg== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; 
s=20210112; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=YOBH9dydviDXfIFLYCaslNXDxefpefD14WizJYisjA4=; b=NR2WoLZ5tJYs4oYfUPs+w/jBNlD5MHDZwNN29+ZNC5DGRC1N9bhXgPVaYC0yMzfcR/ Ww+99HKTi3KbREGzp4+scF4nbMRHC9FbXBZBecbLuFF1OHK1dCG6+2+nya427g1VMb3P 9e1Yzz0Oelh+ss1nUbTXny6N3yrgTnmthvYDKnyH3k0wqXC/RvfjzXqkgl5a3cLHxojW ZjjE+oAJ6oirVKht9+FYiQZiC3vCvsduoV46bBWadme1UXCcbRClCApyrLOYWdztCqSY QHU/vmSVLssanyfHFODmjKvjzrOu9lat/UExvyJJ+OphxermC0am0awiisY+7JhHz25R 8MeA== X-Gm-Message-State: AOAM532N7D6eqosdlV3U6O2HbzaFpvZXfxlcebD4YG9u+1EvCv2nU2yT QnRVclUdr0VfE3rIEcLP+e4= X-Google-Smtp-Source: ABdhPJwHGiIgIMa/rdh9HHGAfLamb/tEBWm77DajD1Mersx5PBRy8hoDTZ1zjhNrUDJ2zS+Kd37V4Q== X-Received: by 2002:a17:90b:3852:: with SMTP id nl18mr7356512pjb.94.1634830886819; Thu, 21 Oct 2021 08:41:26 -0700 (PDT) Received: from ubuntu-Virtual-Machine.corp.microsoft.com ([2001:4898:80e8:f:a76d:53a5:b89f:c2a0]) by smtp.gmail.com with ESMTPSA id p9sm6384130pfn.7.2021.10.21.08.41.25 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Thu, 21 Oct 2021 08:41:26 -0700 (PDT) From: Tianyu Lan To: kys@microsoft.com, haiyangz@microsoft.com, sthemmin@microsoft.com, wei.liu@kernel.org, decui@microsoft.com, tglx@linutronix.de, mingo@redhat.com, bp@alien8.de, x86@kernel.org, hpa@zytor.com, dave.hansen@linux.intel.com, luto@kernel.org, peterz@infradead.org, davem@davemloft.net, kuba@kernel.org, gregkh@linuxfoundation.org, arnd@arndb.de, brijesh.singh@amd.com, jroedel@suse.de, Tianyu.Lan@microsoft.com, thomas.lendacky@amd.com, rientjes@google.com, pgonda@google.com, akpm@linux-foundation.org, kirill.shutemov@linux.intel.com, rppt@kernel.org, saravanand@fb.com, aneesh.kumar@linux.ibm.com, hannes@cmpxchg.org, tj@kernel.org, michael.h.kelley@microsoft.com Cc: linux-arch@vger.kernel.org, linux-hyperv@vger.kernel.org, linux-kernel@vger.kernel.org, netdev@vger.kernel.org, vkuznets@redhat.com, konrad.wilk@oracle.com, hch@lst.de, robin.murphy@arm.com, joro@8bytes.org, parri.andrea@gmail.com, dave.hansen@intel.com Subject: [PATCH V8 7/9] x86/hyperv: Add ghcb hvcall support for SNP VM Date: Thu, 21 Oct 2021 11:41:07 -0400 Message-Id: <20211021154110.3734294-8-ltykernel@gmail.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20211021154110.3734294-1-ltykernel@gmail.com> References: <20211021154110.3734294-1-ltykernel@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: netdev@vger.kernel.org From: Tianyu Lan hyperv provides ghcb hvcall to handle VMBus HVCALL_SIGNAL_EVENT and HVCALL_POST_MESSAGE msg in SNP Isolation VM. Add such support. Signed-off-by: Tianyu Lan --- Change since v3: * Add hv_ghcb_hypercall() stub function to avoid compile error for ARM. 
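For context, the sketch below condenses how a VMBus caller ends up using the new GHCB hvcall path once this patch is applied: on an SNP Isolation VM the hypercall input goes through hv_ghcb_hypercall() (copied into the GHCB page and handed over via VMGEXIT), while other guests keep using the normal hypercall page. This is only an illustrative sketch and not part of the patch; hv_post_message_sketch() is a hypothetical wrapper, whereas hv_ghcb_hypercall(), hv_do_hypercall(), hv_isolation_type_snp() and hv_result() are the helpers added or already present in this series.

#include <asm/mshyperv.h>	/* hv_do_hypercall(), hv_ghcb_hypercall(), hv_result() */
#include <asm/hyperv-tlfs.h>	/* HVCALL_POST_MESSAGE, struct hv_input_post_message */

/* Hypothetical wrapper: route a hypercall via the GHCB page on SNP, else via the hypercall page. */
static int hv_post_message_sketch(struct hv_input_post_message *msg)
{
	u64 status;

	if (hv_isolation_type_snp())
		status = hv_ghcb_hypercall(HVCALL_POST_MESSAGE, msg,
					   NULL, sizeof(*msg));
	else
		status = hv_do_hypercall(HVCALL_POST_MESSAGE, msg, NULL);

	/* Both paths report an HV_STATUS_* code in the low 16 bits. */
	return hv_result(status);
}
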
--- arch/x86/hyperv/ivm.c | 74 ++++++++++++++++++++++++++++++++++ drivers/hv/connection.c | 6 ++- drivers/hv/hv.c | 8 +++- drivers/hv/hv_common.c | 6 +++ include/asm-generic/mshyperv.h | 1 + 5 files changed, 93 insertions(+), 2 deletions(-) diff --git a/arch/x86/hyperv/ivm.c b/arch/x86/hyperv/ivm.c index 925cbc3e7601..260fcba97914 100644 --- a/arch/x86/hyperv/ivm.c +++ b/arch/x86/hyperv/ivm.c @@ -18,10 +18,84 @@ #include #include +#define GHCB_USAGE_HYPERV_CALL 1 + union hv_ghcb { struct ghcb ghcb; + struct { + u64 hypercalldata[509]; + u64 outputgpa; + union { + union { + struct { + u32 callcode : 16; + u32 isfast : 1; + u32 reserved1 : 14; + u32 isnested : 1; + u32 countofelements : 12; + u32 reserved2 : 4; + u32 repstartindex : 12; + u32 reserved3 : 4; + }; + u64 asuint64; + } hypercallinput; + union { + struct { + u16 callstatus; + u16 reserved1; + u32 elementsprocessed : 12; + u32 reserved2 : 20; + }; + u64 asunit64; + } hypercalloutput; + }; + u64 reserved2; + } hypercall; } __packed __aligned(HV_HYP_PAGE_SIZE); +u64 hv_ghcb_hypercall(u64 control, void *input, void *output, u32 input_size) +{ + union hv_ghcb *hv_ghcb; + void **ghcb_base; + unsigned long flags; + u64 status; + + if (!hv_ghcb_pg) + return -EFAULT; + + WARN_ON(in_nmi()); + + local_irq_save(flags); + ghcb_base = (void **)this_cpu_ptr(hv_ghcb_pg); + hv_ghcb = (union hv_ghcb *)*ghcb_base; + if (!hv_ghcb) { + local_irq_restore(flags); + return -EFAULT; + } + + hv_ghcb->ghcb.protocol_version = GHCB_PROTOCOL_MAX; + hv_ghcb->ghcb.ghcb_usage = GHCB_USAGE_HYPERV_CALL; + + hv_ghcb->hypercall.outputgpa = (u64)output; + hv_ghcb->hypercall.hypercallinput.asuint64 = 0; + hv_ghcb->hypercall.hypercallinput.callcode = control; + + if (input_size) + memcpy(hv_ghcb->hypercall.hypercalldata, input, input_size); + + VMGEXIT(); + + hv_ghcb->ghcb.ghcb_usage = 0xffffffff; + memset(hv_ghcb->ghcb.save.valid_bitmap, 0, + sizeof(hv_ghcb->ghcb.save.valid_bitmap)); + + status = hv_ghcb->hypercall.hypercalloutput.callstatus; + + local_irq_restore(flags); + + return status; +} + void hv_ghcb_msr_write(u64 msr, u64 value) { union hv_ghcb *hv_ghcb; diff --git a/drivers/hv/connection.c b/drivers/hv/connection.c index 5e479d54918c..8820ae68f20f 100644 --- a/drivers/hv/connection.c +++ b/drivers/hv/connection.c @@ -447,6 +447,10 @@ void vmbus_set_event(struct vmbus_channel *channel) ++channel->sig_events; - hv_do_fast_hypercall8(HVCALL_SIGNAL_EVENT, channel->sig_event); + if (hv_isolation_type_snp()) + hv_ghcb_hypercall(HVCALL_SIGNAL_EVENT, &channel->sig_event, + NULL, sizeof(channel->sig_event)); + else + hv_do_fast_hypercall8(HVCALL_SIGNAL_EVENT, channel->sig_event); } EXPORT_SYMBOL_GPL(vmbus_set_event); diff --git a/drivers/hv/hv.c b/drivers/hv/hv.c index 943392db9e8a..4d6480d57546 100644 --- a/drivers/hv/hv.c +++ b/drivers/hv/hv.c @@ -98,7 +98,13 @@ int hv_post_message(union hv_connection_id connection_id, aligned_msg->payload_size = payload_size; memcpy((void *)aligned_msg->payload, payload, payload_size); - status = hv_do_hypercall(HVCALL_POST_MESSAGE, aligned_msg, NULL); + if (hv_isolation_type_snp()) + status = hv_ghcb_hypercall(HVCALL_POST_MESSAGE, + (void *)aligned_msg, NULL, + sizeof(*aligned_msg)); + else + status = hv_do_hypercall(HVCALL_POST_MESSAGE, + aligned_msg, NULL); /* Preemption must remain disabled until after the hypercall * so some other thread can't get scheduled onto this cpu and diff --git a/drivers/hv/hv_common.c b/drivers/hv/hv_common.c index 1fc82d237161..7be173a99f27 100644 --- a/drivers/hv/hv_common.c +++ 
b/drivers/hv/hv_common.c @@ -289,3 +289,9 @@ void __weak hyperv_cleanup(void) { } EXPORT_SYMBOL_GPL(hyperv_cleanup); + +u64 __weak hv_ghcb_hypercall(u64 control, void *input, void *output, u32 input_size) +{ + return HV_STATUS_INVALID_PARAMETER; +} +EXPORT_SYMBOL_GPL(hv_ghcb_hypercall); diff --git a/include/asm-generic/mshyperv.h b/include/asm-generic/mshyperv.h index 6d3ba902ebb0..3e2248ac328e 100644 --- a/include/asm-generic/mshyperv.h +++ b/include/asm-generic/mshyperv.h @@ -266,6 +266,7 @@ bool hv_is_hibernation_supported(void); enum hv_isolation_type hv_get_isolation_type(void); bool hv_is_isolation_supported(void); bool hv_isolation_type_snp(void); +u64 hv_ghcb_hypercall(u64 control, void *input, void *output, u32 input_size); void hyperv_cleanup(void); bool hv_query_ext_cap(u64 cap_query); #else /* CONFIG_HYPERV */ From patchwork Thu Oct 21 15:41:08 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Tianyu Lan X-Patchwork-Id: 12575627 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 28789C43219 for ; Thu, 21 Oct 2021 15:41:42 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 16AA3611C7 for ; Thu, 21 Oct 2021 15:41:42 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S232077AbhJUPn4 (ORCPT ); Thu, 21 Oct 2021 11:43:56 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:56698 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S231937AbhJUPnp (ORCPT ); Thu, 21 Oct 2021 11:43:45 -0400 Received: from mail-pl1-x629.google.com (mail-pl1-x629.google.com [IPv6:2607:f8b0:4864:20::629]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id B08E5C061227; Thu, 21 Oct 2021 08:41:28 -0700 (PDT) Received: by mail-pl1-x629.google.com with SMTP id t11so675658plq.11; Thu, 21 Oct 2021 08:41:28 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20210112; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=XBEgOEIOcem5paCQuIdhsRwNpRP9LBEy4PyqxTKlwF8=; b=CrnBHAUG3BDrMl+e2VQoOpT0AY4qsX/Lr/6VfKE8AspTxyMbSKV4vCQaAUazUUWRLH dbQTsA+ONcnfT3Yym8ewGSKBMUXp4rO2G049ALh34CbO28l7UEFNflNNyc/oDrLQaK5T qPzmcgNOe2ruR582TFutHt7uqcH7RkmWDjNIGW3hvV9xuyGMfoDJyQHvzfb0mpMhnMoH QTI+82hu9QFdpF2eshiT7yNxIZrfJGN+exgWGDA7oZ//BmhBAKNDNiIV22ehINxtiNQp tW0oj6cqi8/SD4paHOCd5qMroHVrCcFSSJ6CYB6zUOWGDl8myQVo6LyMdR5JNS+O4xOz Q56Q== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=XBEgOEIOcem5paCQuIdhsRwNpRP9LBEy4PyqxTKlwF8=; b=LWTKVoq8JxNy1YHgVFO91vqaXd3AW1G2dKiYt93cv6iiI34nDJVZcW9od8cVYrozAy 3QEg7F1GhxSMzZiRGyqo192ipQM37Pt2bhUGX6E24Jkw9O8OpxTQVp2QLDFXw2s4Fwbx unzjBjV0GYwYwAt2B6NqNP2PzKtzvjErKnggmFTIDltOboWv18D4gKDtnf84O3Twe7uC GpwqgaVxhm6Igd5zGdszG+F/8yZVMwoOTmEXiSTSyUl+ZSIs5PpGhkSFH0opYxbnI+Y3 MmhbS+0nvSxA8Z/8ET3wF51JgkVWJZ7GnQT8+W299VGt0izP7iUKoX8nB2TzLprR+x/x 6soA== X-Gm-Message-State: AOAM530TeSDwFZa6vunsFYQqYfZHx1KmxC6M/E+93Ht7W4QAHPCTYYng MK9nVpry3iUUGk7u0XXLJgw= X-Google-Smtp-Source: ABdhPJwHJX5TfcqeKvrCygCz9sA0V+fwxElw+DOvSd7hZJ7AstIS9ZahBzfDEi8lFvvJ+csh1C7JlQ== X-Received: by 
2002:a17:90a:9744:: with SMTP id i4mr7606458pjw.241.1634830888270; Thu, 21 Oct 2021 08:41:28 -0700 (PDT) Received: from ubuntu-Virtual-Machine.corp.microsoft.com ([2001:4898:80e8:f:a76d:53a5:b89f:c2a0]) by smtp.gmail.com with ESMTPSA id p9sm6384130pfn.7.2021.10.21.08.41.27 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Thu, 21 Oct 2021 08:41:28 -0700 (PDT) From: Tianyu Lan To: kys@microsoft.com, haiyangz@microsoft.com, sthemmin@microsoft.com, wei.liu@kernel.org, decui@microsoft.com, tglx@linutronix.de, mingo@redhat.com, bp@alien8.de, x86@kernel.org, hpa@zytor.com, dave.hansen@linux.intel.com, luto@kernel.org, peterz@infradead.org, davem@davemloft.net, kuba@kernel.org, gregkh@linuxfoundation.org, arnd@arndb.de, brijesh.singh@amd.com, jroedel@suse.de, Tianyu.Lan@microsoft.com, thomas.lendacky@amd.com, rientjes@google.com, pgonda@google.com, akpm@linux-foundation.org, kirill.shutemov@linux.intel.com, rppt@kernel.org, saravanand@fb.com, aneesh.kumar@linux.ibm.com, hannes@cmpxchg.org, tj@kernel.org, michael.h.kelley@microsoft.com Cc: linux-arch@vger.kernel.org, linux-hyperv@vger.kernel.org, linux-kernel@vger.kernel.org, netdev@vger.kernel.org, vkuznets@redhat.com, konrad.wilk@oracle.com, hch@lst.de, robin.murphy@arm.com, joro@8bytes.org, parri.andrea@gmail.com, dave.hansen@intel.com Subject: [PATCH V8 8/9] Drivers: hv: vmbus: Add SNP support for VMbus channel initiate message Date: Thu, 21 Oct 2021 11:41:08 -0400 Message-Id: <20211021154110.3734294-9-ltykernel@gmail.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20211021154110.3734294-1-ltykernel@gmail.com> References: <20211021154110.3734294-1-ltykernel@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: netdev@vger.kernel.org From: Tianyu Lan The monitor pages in the CHANNELMSG_INITIATE_CONTACT msg are shared with the host in an Isolation VM, so it is necessary to use a hvcall to make them visible to the host. In an Isolation VM with AMD SEV-SNP, the access address must be in the extra address space above the shared gpa boundary, so remap these pages at the extra address (pa + shared_gpa_boundary). Introduce monitor_pages_original[] in struct vmbus_connection to store the monitor page virtual addresses returned by hv_alloc_hyperv_zeroed_page(), and free the monitor pages via monitor_pages_original in vmbus_disconnect(). monitor_pages[] is used to access the monitor pages and is initialized equal to monitor_pages_original; in the Isolation VM it is overridden with the VA of the extra address mapping. Introduce monitor_pages_pa[] to store the monitor pages' physical addresses and use them to populate the pa fields in the initiate msg. Signed-off-by: Tianyu Lan --- Change since v6: * Add comment about calling memunmap() in the non-SNP IVM. Change since v5: * Change vmbus_connection.monitor_pages_pa type from unsigned long to phys_addr_t * Add ms_hyperv.shared_gpa_boundary to vmbus_connection.monitor_pages_pa only in the IVM with AMD SEV. Change since v4: * Introduce monitor_pages_pa[] to store monitor pages' physical address and use it to populate pa in the initiate msg. * Move code of mapping monitor pages in extra address into vmbus_connect(). Change since v3: * Rename monitor_pages_va to monitor_pages_original * Free monitor page via monitor_pages_original and monitor_pages is used to access monitor page. Change since v1: * Do not remap monitor pages in the non-SNP isolation VM. 
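The flow described above can be condensed into the following sketch for a single monitor page. It is illustrative only: setup_monitor_page_sketch() is a hypothetical helper and error handling is omitted, but the functions and fields it uses (hv_alloc_hyperv_zeroed_page(), set_memory_decrypted(), memremap(), ms_hyperv.shared_gpa_boundary, HV_HYP_PAGE_SIZE) are the ones the diff below relies on.

#include <linux/io.h>		/* memremap(), virt_to_phys() */
#include <linux/set_memory.h>	/* set_memory_decrypted() */
#include <asm/mshyperv.h>	/* ms_hyperv, hv_isolation_type_snp(), hv_alloc_hyperv_zeroed_page() */

/* Hypothetical helper sketching the handling of one monitor page; no error handling. */
static void *setup_monitor_page_sketch(phys_addr_t *pa)
{
	void *va = hv_alloc_hyperv_zeroed_page();	/* original VA, kept for freeing */

	*pa = virt_to_phys(va);
	set_memory_decrypted((unsigned long)va, 1);	/* make the page host-visible */

	if (hv_isolation_type_snp()) {
		/* The host-visible alias sits above the shared GPA boundary. */
		*pa += ms_hyperv.shared_gpa_boundary;
		va = memremap(*pa, HV_HYP_PAGE_SIZE, MEMREMAP_WB);
	}

	/* The visibility hvcall may scribble on the page, so clear it. */
	memset(va, 0x00, HV_HYP_PAGE_SIZE);
	return va;					/* VA actually used for access */
}

In the patch itself, the original VA goes into monitor_pages_original[], the (possibly boundary-offset) PA into monitor_pages_pa[], and the returned VA into monitor_pages[].
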
--- drivers/hv/connection.c | 95 ++++++++++++++++++++++++++++++++++++--- drivers/hv/hyperv_vmbus.h | 2 + 2 files changed, 91 insertions(+), 6 deletions(-) diff --git a/drivers/hv/connection.c b/drivers/hv/connection.c index 8820ae68f20f..a3d8be8d6cfb 100644 --- a/drivers/hv/connection.c +++ b/drivers/hv/connection.c @@ -19,6 +19,8 @@ #include #include #include +#include +#include #include #include "hyperv_vmbus.h" @@ -102,8 +104,9 @@ int vmbus_negotiate_version(struct vmbus_channel_msginfo *msginfo, u32 version) vmbus_connection.msg_conn_id = VMBUS_MESSAGE_CONNECTION_ID; } - msg->monitor_page1 = virt_to_phys(vmbus_connection.monitor_pages[0]); - msg->monitor_page2 = virt_to_phys(vmbus_connection.monitor_pages[1]); + msg->monitor_page1 = vmbus_connection.monitor_pages_pa[0]; + msg->monitor_page2 = vmbus_connection.monitor_pages_pa[1]; + msg->target_vcpu = hv_cpu_number_to_vp_number(VMBUS_CONNECT_CPU); /* @@ -216,6 +219,65 @@ int vmbus_connect(void) goto cleanup; } + vmbus_connection.monitor_pages_original[0] + = vmbus_connection.monitor_pages[0]; + vmbus_connection.monitor_pages_original[1] + = vmbus_connection.monitor_pages[1]; + vmbus_connection.monitor_pages_pa[0] + = virt_to_phys(vmbus_connection.monitor_pages[0]); + vmbus_connection.monitor_pages_pa[1] + = virt_to_phys(vmbus_connection.monitor_pages[1]); + + if (hv_is_isolation_supported()) { + ret = set_memory_decrypted((unsigned long) + vmbus_connection.monitor_pages[0], + 1); + ret |= set_memory_decrypted((unsigned long) + vmbus_connection.monitor_pages[1], + 1); + if (ret) + goto cleanup; + + /* + * Isolation VM with AMD SNP needs to access monitor page via + * address space above shared gpa boundary. + */ + if (hv_isolation_type_snp()) { + vmbus_connection.monitor_pages_pa[0] += + ms_hyperv.shared_gpa_boundary; + vmbus_connection.monitor_pages_pa[1] += + ms_hyperv.shared_gpa_boundary; + + vmbus_connection.monitor_pages[0] + = memremap(vmbus_connection.monitor_pages_pa[0], + HV_HYP_PAGE_SIZE, + MEMREMAP_WB); + if (!vmbus_connection.monitor_pages[0]) { + ret = -ENOMEM; + goto cleanup; + } + + vmbus_connection.monitor_pages[1] + = memremap(vmbus_connection.monitor_pages_pa[1], + HV_HYP_PAGE_SIZE, + MEMREMAP_WB); + if (!vmbus_connection.monitor_pages[1]) { + ret = -ENOMEM; + goto cleanup; + } + } + + /* + * Set memory host visibility hvcall smears memory + * and so zero monitor pages here. + */ + memset(vmbus_connection.monitor_pages[0], 0x00, + HV_HYP_PAGE_SIZE); + memset(vmbus_connection.monitor_pages[1], 0x00, + HV_HYP_PAGE_SIZE); + + } + msginfo = kzalloc(sizeof(*msginfo) + sizeof(struct vmbus_channel_initiate_contact), GFP_KERNEL); @@ -303,10 +365,31 @@ void vmbus_disconnect(void) vmbus_connection.int_page = NULL; } - hv_free_hyperv_page((unsigned long)vmbus_connection.monitor_pages[0]); - hv_free_hyperv_page((unsigned long)vmbus_connection.monitor_pages[1]); - vmbus_connection.monitor_pages[0] = NULL; - vmbus_connection.monitor_pages[1] = NULL; + if (hv_is_isolation_supported()) { + /* + * memunmap() checks input address is ioremap address or not + * inside. It doesn't unmap any thing in the non-SNP CVM and + * so not check CVM type here. 
+ */ + memunmap(vmbus_connection.monitor_pages[0]); + memunmap(vmbus_connection.monitor_pages[1]); + + set_memory_encrypted((unsigned long) + vmbus_connection.monitor_pages_original[0], + 1); + set_memory_encrypted((unsigned long) + vmbus_connection.monitor_pages_original[1], + 1); + } + + hv_free_hyperv_page((unsigned long) + vmbus_connection.monitor_pages_original[0]); + hv_free_hyperv_page((unsigned long) + vmbus_connection.monitor_pages_original[1]); + vmbus_connection.monitor_pages_original[0] = + vmbus_connection.monitor_pages[0] = NULL; + vmbus_connection.monitor_pages_original[1] = + vmbus_connection.monitor_pages[1] = NULL; } /* diff --git a/drivers/hv/hyperv_vmbus.h b/drivers/hv/hyperv_vmbus.h index 42f3d9d123a1..d0a5232a1c3e 100644 --- a/drivers/hv/hyperv_vmbus.h +++ b/drivers/hv/hyperv_vmbus.h @@ -240,6 +240,8 @@ struct vmbus_connection { * is child->parent notification */ struct hv_monitor_page *monitor_pages[2]; + void *monitor_pages_original[2]; + phys_addr_t monitor_pages_pa[2]; struct list_head chn_msg_list; spinlock_t channelmsg_lock; From patchwork Thu Oct 21 15:41:09 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Tianyu Lan X-Patchwork-Id: 12575629 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 1699FC4332F for ; Thu, 21 Oct 2021 15:41:49 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id F1D47611C7 for ; Thu, 21 Oct 2021 15:41:48 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S232051AbhJUPoD (ORCPT ); Thu, 21 Oct 2021 11:44:03 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:56704 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S229550AbhJUPnq (ORCPT ); Thu, 21 Oct 2021 11:43:46 -0400 Received: from mail-pf1-x42d.google.com (mail-pf1-x42d.google.com [IPv6:2607:f8b0:4864:20::42d]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 54189C061243; Thu, 21 Oct 2021 08:41:30 -0700 (PDT) Received: by mail-pf1-x42d.google.com with SMTP id d5so1022790pfu.1; Thu, 21 Oct 2021 08:41:30 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20210112; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=cUs+nJBemKyqHW1/+tBNnIBzfXId6r+3EciJyXUHzJs=; b=mb0JzLn6mSIZIpkjv2tkYN3XoDwClOvTasP/YSQ5DSjukGGRylBXYo7vOaPWHhniUm /nl8o14LxOnWMymuDnq+kkYlmRpcZrzVeZThes4LI+n6tlqlPQeHvlUxM/+xHl0OBV62 3JKfF7STiZwdRKtuuQwCFyoBu8VC8sJinNoTmws3JyElVs6ys2pLgkaufurRUIP10Cf6 Pu0zh3ZnjjN7UfSsPF9PQf7ei3jHwsY4ER6K7zDlC2eRBIQkniR8uwSYY2PvCnEi0Y5d RWT1Bkuf4naKZkQk+O+GW6pfSxIvMh4xZvLwgGCgk/Ii96vKhGYRkIVnaepKtOSOFSXX Vzww== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=cUs+nJBemKyqHW1/+tBNnIBzfXId6r+3EciJyXUHzJs=; b=rPvRA6rgDyVAPCTcvv37iK7BFaGV6YvT8vDF6M5PU5M6GLEM0SqQ8iS+351aM6aAWO oephZ3ifjwCTmNCbpNu59TfZQA+7VRMfCJ6CvMTVk0bAmJTy+C+gZjpKQjixZyg5Nvnq 5oB43ffWvo9lKFfGLQJUNTZcrONCPSZ9HDL1Ve/zR8Y4qRcYCDp9jXeUqiv/SyDX36zP Ysdp+DXbsXyt0rpuTXBJfCCz7uA308vJl46+fqI9FKXRFwz6csB83Hd5K58htqfhHq2p BfuNO3l0y2+ApmbYVK81F/nBsEdZ8O/sZ2X72r1x9lqjo3SrrdupkhQRhur9H8xlsqRp t3JA== 
X-Gm-Message-State: AOAM530OBSYbcb7h4j4kf5ilnZJFTtloi3VcsHYb98DP0SV5tP0gaoGw tMOnchcPzr55fzB/tk5y3l8= X-Google-Smtp-Source: ABdhPJyOBIXB9R2rYwI+Yj9DVi0eXOZaWxDY9mU8rpQ+yFLMs5jEPhcbNIajbwq+oramkw81DaqX5g== X-Received: by 2002:a05:6a00:1801:b0:44c:aab8:a5ba with SMTP id y1-20020a056a00180100b0044caab8a5bamr6444801pfa.32.1634830889653; Thu, 21 Oct 2021 08:41:29 -0700 (PDT) Received: from ubuntu-Virtual-Machine.corp.microsoft.com ([2001:4898:80e8:f:a76d:53a5:b89f:c2a0]) by smtp.gmail.com with ESMTPSA id p9sm6384130pfn.7.2021.10.21.08.41.28 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Thu, 21 Oct 2021 08:41:29 -0700 (PDT) From: Tianyu Lan To: kys@microsoft.com, haiyangz@microsoft.com, sthemmin@microsoft.com, wei.liu@kernel.org, decui@microsoft.com, tglx@linutronix.de, mingo@redhat.com, bp@alien8.de, x86@kernel.org, hpa@zytor.com, dave.hansen@linux.intel.com, luto@kernel.org, peterz@infradead.org, davem@davemloft.net, kuba@kernel.org, gregkh@linuxfoundation.org, arnd@arndb.de, brijesh.singh@amd.com, jroedel@suse.de, Tianyu.Lan@microsoft.com, thomas.lendacky@amd.com, rientjes@google.com, pgonda@google.com, akpm@linux-foundation.org, kirill.shutemov@linux.intel.com, rppt@kernel.org, saravanand@fb.com, aneesh.kumar@linux.ibm.com, hannes@cmpxchg.org, tj@kernel.org, michael.h.kelley@microsoft.com Cc: linux-arch@vger.kernel.org, linux-hyperv@vger.kernel.org, linux-kernel@vger.kernel.org, netdev@vger.kernel.org, vkuznets@redhat.com, konrad.wilk@oracle.com, hch@lst.de, robin.murphy@arm.com, joro@8bytes.org, parri.andrea@gmail.com, dave.hansen@intel.com Subject: [PATCH V8 9/9] Drivers: hv : vmbus: Initialize VMbus ring buffer for Isolation VM Date: Thu, 21 Oct 2021 11:41:09 -0400 Message-Id: <20211021154110.3734294-10-ltykernel@gmail.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20211021154110.3734294-1-ltykernel@gmail.com> References: <20211021154110.3734294-1-ltykernel@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: netdev@vger.kernel.org From: Tianyu Lan VMbus ring buffer are shared with host and it's need to be accessed via extra address space of Isolation VM with AMD SNP support. This patch is to map the ring buffer address in extra address space via vmap_pfn(). Hyperv set memory host visibility hvcall smears data in the ring buffer and so reset the ring buffer memory to zero after mapping. Signed-off-by: Tianyu Lan --- Change since v4: * Use PFN_DOWN instead of HVPFN_DOWN in the hv_ringbuffer_init() Change since v3: * Remove hv_ringbuffer_post_init(), merge map operation for Isolation VM into hv_ringbuffer_init() * Call hv_ringbuffer_init() after __vmbus_establish_gpadl(). --- drivers/hv/Kconfig | 1 + drivers/hv/channel.c | 19 +++++++------- drivers/hv/ring_buffer.c | 55 ++++++++++++++++++++++++++++++---------- 3 files changed, 53 insertions(+), 22 deletions(-) diff --git a/drivers/hv/Kconfig b/drivers/hv/Kconfig index d1123ceb38f3..dd12af20e467 100644 --- a/drivers/hv/Kconfig +++ b/drivers/hv/Kconfig @@ -8,6 +8,7 @@ config HYPERV || (ARM64 && !CPU_BIG_ENDIAN)) select PARAVIRT select X86_HV_CALLBACK_VECTOR if X86 + select VMAP_PFN help Select this option to run Linux as a Hyper-V client operating system. 
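The new "select VMAP_PFN" above is needed because vmap() works on struct page pointers, and the ring-buffer alias above the shared GPA boundary has no struct page backing it; it can only be mapped by raw PFN with vmap_pfn(). The snippet below is a minimal sketch of that idea, assuming a contiguous mapping and ignoring the wraparound double-mapping that the real hv_ringbuffer_init() in the diff below performs; map_ring_above_boundary_sketch() is a hypothetical name.

#include <linux/vmalloc.h>	/* vmap_pfn() */
#include <linux/slab.h>		/* kcalloc(), kfree() */
#include <linux/pfn.h>		/* PFN_DOWN() */
#include <asm/mshyperv.h>	/* ms_hyperv */

/* Hypothetical sketch: map nr_pages ring-buffer pages at their host-visible alias. */
static void *map_ring_above_boundary_sketch(struct page *page, unsigned int nr_pages)
{
	unsigned long *pfns;
	unsigned long base = page_to_pfn(page) +
			     PFN_DOWN(ms_hyperv.shared_gpa_boundary);
	unsigned int i;
	void *va;

	pfns = kcalloc(nr_pages, sizeof(*pfns), GFP_KERNEL);
	if (!pfns)
		return NULL;

	for (i = 0; i < nr_pages; i++)
		pfns[i] = base + i;	/* no struct page exists for these PFNs */

	va = vmap_pfn(pfns, nr_pages, PAGE_KERNEL);
	kfree(pfns);
	return va;
}
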
diff --git a/drivers/hv/channel.c b/drivers/hv/channel.c index b37ff4a39224..dc5c35210c16 100644 --- a/drivers/hv/channel.c +++ b/drivers/hv/channel.c @@ -683,15 +683,6 @@ static int __vmbus_open(struct vmbus_channel *newchannel, if (!newchannel->max_pkt_size) newchannel->max_pkt_size = VMBUS_DEFAULT_MAX_PKT_SIZE; - err = hv_ringbuffer_init(&newchannel->outbound, page, send_pages, 0); - if (err) - goto error_clean_ring; - - err = hv_ringbuffer_init(&newchannel->inbound, &page[send_pages], - recv_pages, newchannel->max_pkt_size); - if (err) - goto error_clean_ring; - /* Establish the gpadl for the ring buffer */ newchannel->ringbuffer_gpadlhandle.gpadl_handle = 0; @@ -703,6 +694,16 @@ static int __vmbus_open(struct vmbus_channel *newchannel, if (err) goto error_clean_ring; + err = hv_ringbuffer_init(&newchannel->outbound, + page, send_pages, 0); + if (err) + goto error_free_gpadl; + + err = hv_ringbuffer_init(&newchannel->inbound, &page[send_pages], + recv_pages, newchannel->max_pkt_size); + if (err) + goto error_free_gpadl; + /* Create and init the channel open message */ open_info = kzalloc(sizeof(*open_info) + sizeof(struct vmbus_channel_open_channel), diff --git a/drivers/hv/ring_buffer.c b/drivers/hv/ring_buffer.c index 314015d9e912..931802ae985c 100644 --- a/drivers/hv/ring_buffer.c +++ b/drivers/hv/ring_buffer.c @@ -17,6 +17,8 @@ #include #include #include +#include +#include #include "hyperv_vmbus.h" @@ -183,8 +185,10 @@ void hv_ringbuffer_pre_init(struct vmbus_channel *channel) int hv_ringbuffer_init(struct hv_ring_buffer_info *ring_info, struct page *pages, u32 page_cnt, u32 max_pkt_size) { - int i; struct page **pages_wraparound; + unsigned long *pfns_wraparound; + u64 pfn; + int i; BUILD_BUG_ON((sizeof(struct hv_ring_buffer) != PAGE_SIZE)); @@ -192,23 +196,48 @@ int hv_ringbuffer_init(struct hv_ring_buffer_info *ring_info, * First page holds struct hv_ring_buffer, do wraparound mapping for * the rest. */ - pages_wraparound = kcalloc(page_cnt * 2 - 1, sizeof(struct page *), - GFP_KERNEL); - if (!pages_wraparound) - return -ENOMEM; + if (hv_isolation_type_snp()) { + pfn = page_to_pfn(pages) + + PFN_DOWN(ms_hyperv.shared_gpa_boundary); + + pfns_wraparound = kcalloc(page_cnt * 2 - 1, + sizeof(unsigned long), GFP_KERNEL); + if (!pfns_wraparound) + return -ENOMEM; + + pfns_wraparound[0] = pfn; + for (i = 0; i < 2 * (page_cnt - 1); i++) + pfns_wraparound[i + 1] = pfn + i % (page_cnt - 1) + 1; - pages_wraparound[0] = pages; - for (i = 0; i < 2 * (page_cnt - 1); i++) - pages_wraparound[i + 1] = &pages[i % (page_cnt - 1) + 1]; + ring_info->ring_buffer = (struct hv_ring_buffer *) + vmap_pfn(pfns_wraparound, page_cnt * 2 - 1, + PAGE_KERNEL); + kfree(pfns_wraparound); - ring_info->ring_buffer = (struct hv_ring_buffer *) - vmap(pages_wraparound, page_cnt * 2 - 1, VM_MAP, PAGE_KERNEL); + if (!ring_info->ring_buffer) + return -ENOMEM; + + /* Zero ring buffer after setting memory host visibility. 
*/ + memset(ring_info->ring_buffer, 0x00, PAGE_SIZE * page_cnt); + } else { + pages_wraparound = kcalloc(page_cnt * 2 - 1, + sizeof(struct page *), + GFP_KERNEL); + + pages_wraparound[0] = pages; + for (i = 0; i < 2 * (page_cnt - 1); i++) + pages_wraparound[i + 1] = + &pages[i % (page_cnt - 1) + 1]; - kfree(pages_wraparound); + ring_info->ring_buffer = (struct hv_ring_buffer *) + vmap(pages_wraparound, page_cnt * 2 - 1, VM_MAP, + PAGE_KERNEL); + kfree(pages_wraparound); + if (!ring_info->ring_buffer) + return -ENOMEM; + } - if (!ring_info->ring_buffer) - return -ENOMEM; ring_info->ring_buffer->read_index = ring_info->ring_buffer->write_index = 0;
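	/*
	 * Worked example of the wraparound mapping set up above, for
	 * page_cnt = 4 (one control page plus three data pages):
	 *
	 *   virtual slot :  0  1  2  3  4  5  6
	 *   backing page :  0  1  2  3  1  2  3
	 *
	 * Slot 0 holds struct hv_ring_buffer and is mapped once; the data
	 * pages are mapped twice, back to back, via
	 * slot[i + 1] = pages[i % (page_cnt - 1) + 1] for i < 2 * (page_cnt - 1),
	 * so a packet that crosses the end of the ring can be read or written
	 * with a single linear access instead of being split at the boundary.
	 * The SNP path builds the same layout with PFNs offset by
	 * PFN_DOWN(ms_hyperv.shared_gpa_boundary) and vmap_pfn() instead of vmap().
	 */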