From patchwork Mon Oct 25 12:21:06 2021
X-Patchwork-Submitter: Tianyu Lan
X-Patchwork-Id: 12581699
From: Tianyu Lan
To: kys@microsoft.com, haiyangz@microsoft.com, sthemmin@microsoft.com, wei.liu@kernel.org, decui@microsoft.com, tglx@linutronix.de, mingo@redhat.com, bp@alien8.de, x86@kernel.org, hpa@zytor.com, dave.hansen@linux.intel.com, luto@kernel.org, peterz@infradead.org, davem@davemloft.net, kuba@kernel.org, gregkh@linuxfoundation.org, arnd@arndb.de, brijesh.singh@amd.com, jroedel@suse.de, mikelley@microsoft.com, Tianyu.Lan@microsoft.com,
thomas.lendacky@amd.com, pgonda@google.com, akpm@linux-foundation.org, rppt@kernel.org, kirill.shutemov@linux.intel.com, saravanand@fb.com, aneesh.kumar@linux.ibm.com, sfr@canb.auug.org.au, david@redhat.com, michael.h.kelley@microsoft.com
Cc: linux-arch@vger.kernel.org, linux-hyperv@vger.kernel.org, linux-kernel@vger.kernel.org, netdev@vger.kernel.org, vkuznets@redhat.com, konrad.wilk@oracle.com, hch@lst.de, robin.murphy@arm.com, joro@8bytes.org, parri.andrea@gmail.com, dave.hansen@intel.com
Subject: [PATCH V9 1/9] x86/hyperv: Initialize GHCB page in Isolation VM
Date: Mon, 25 Oct 2021 08:21:06 -0400
Message-Id: <20211025122116.264793-2-ltykernel@gmail.com>
In-Reply-To: <20211025122116.264793-1-ltykernel@gmail.com>
References: <20211025122116.264793-1-ltykernel@gmail.com>

From: Tianyu Lan

Hyper-V exposes a GHCB page via the SEV-ES GHCB MSR so that an SNP guest
can communicate with the hypervisor. Map the GHCB page for all CPUs so it
can be used to read/write MSR registers and to submit hvcall requests.

Reviewed-by: Michael Kelley
Signed-off-by: Tianyu Lan
---
Change since v4:
	* Fix typo in comment
Change since v3:
	* Rename ghcb_base to hv_ghcb_pg and move it out of struct ms_hyperv_info.
	* Allocate hv_ghcb_pg before cpuhp_setup_state() and leverage hv_cpu_init() to initialize the ghcb page.
---
 arch/x86/hyperv/hv_init.c       | 68 +++++++++++++++++++++++++++++----
 arch/x86/include/asm/mshyperv.h |  4 ++
 arch/x86/kernel/cpu/mshyperv.c  |  3 ++
 include/asm-generic/mshyperv.h  |  6 +++
 4 files changed, 74 insertions(+), 7 deletions(-)

diff --git a/arch/x86/hyperv/hv_init.c b/arch/x86/hyperv/hv_init.c
index 708a2712a516..a7e922755ad1 100644
--- a/arch/x86/hyperv/hv_init.c
+++ b/arch/x86/hyperv/hv_init.c
@@ -20,6 +20,7 @@
 #include
 #include
 #include
+#include
 #include
 #include
 #include
@@ -36,12 +37,42 @@ EXPORT_SYMBOL_GPL(hv_current_partition_id);
 void *hv_hypercall_pg;
 EXPORT_SYMBOL_GPL(hv_hypercall_pg);
 
+void __percpu **hv_ghcb_pg;
+
 /* Storage to save the hypercall page temporarily for hibernation */
 static void *hv_hypercall_pg_saved;
 
 struct hv_vp_assist_page **hv_vp_assist_page;
 EXPORT_SYMBOL_GPL(hv_vp_assist_page);
 
+static int hyperv_init_ghcb(void)
+{
+	u64 ghcb_gpa;
+	void *ghcb_va;
+	void **ghcb_base;
+
+	if (!hv_isolation_type_snp())
+		return 0;
+
+	if (!hv_ghcb_pg)
+		return -EINVAL;
+
+	/*
+	 * GHCB page is allocated by paravisor. The address
+	 * returned by MSR_AMD64_SEV_ES_GHCB is above shared
+	 * memory boundary and map it here.
+ */ + rdmsrl(MSR_AMD64_SEV_ES_GHCB, ghcb_gpa); + ghcb_va = memremap(ghcb_gpa, HV_HYP_PAGE_SIZE, MEMREMAP_WB); + if (!ghcb_va) + return -ENOMEM; + + ghcb_base = (void **)this_cpu_ptr(hv_ghcb_pg); + *ghcb_base = ghcb_va; + + return 0; +} + static int hv_cpu_init(unsigned int cpu) { union hv_vp_assist_msr_contents msr = { 0 }; @@ -85,7 +116,7 @@ static int hv_cpu_init(unsigned int cpu) } } - return 0; + return hyperv_init_ghcb(); } static void (*hv_reenlightenment_cb)(void); @@ -177,6 +208,14 @@ static int hv_cpu_die(unsigned int cpu) { struct hv_reenlightenment_control re_ctrl; unsigned int new_cpu; + void **ghcb_va; + + if (hv_ghcb_pg) { + ghcb_va = (void **)this_cpu_ptr(hv_ghcb_pg); + if (*ghcb_va) + memunmap(*ghcb_va); + *ghcb_va = NULL; + } hv_common_cpu_die(cpu); @@ -366,10 +405,16 @@ void __init hyperv_init(void) goto common_free; } + if (hv_isolation_type_snp()) { + hv_ghcb_pg = alloc_percpu(void *); + if (!hv_ghcb_pg) + goto free_vp_assist_page; + } + cpuhp = cpuhp_setup_state(CPUHP_AP_ONLINE_DYN, "x86/hyperv_init:online", hv_cpu_init, hv_cpu_die); if (cpuhp < 0) - goto free_vp_assist_page; + goto free_ghcb_page; /* * Setup the hypercall page and enable hypercalls. @@ -383,10 +428,8 @@ void __init hyperv_init(void) VMALLOC_END, GFP_KERNEL, PAGE_KERNEL_ROX, VM_FLUSH_RESET_PERMS, NUMA_NO_NODE, __builtin_return_address(0)); - if (hv_hypercall_pg == NULL) { - wrmsrl(HV_X64_MSR_GUEST_OS_ID, 0); - goto remove_cpuhp_state; - } + if (hv_hypercall_pg == NULL) + goto clean_guest_os_id; rdmsrl(HV_X64_MSR_HYPERCALL, hypercall_msr.as_uint64); hypercall_msr.enable = 1; @@ -456,8 +499,11 @@ void __init hyperv_init(void) hv_query_ext_cap(0); return; -remove_cpuhp_state: +clean_guest_os_id: + wrmsrl(HV_X64_MSR_GUEST_OS_ID, 0); cpuhp_remove_state(cpuhp); +free_ghcb_page: + free_percpu(hv_ghcb_pg); free_vp_assist_page: kfree(hv_vp_assist_page); hv_vp_assist_page = NULL; @@ -559,3 +605,11 @@ bool hv_is_isolation_supported(void) { return hv_get_isolation_type() != HV_ISOLATION_TYPE_NONE; } + +DEFINE_STATIC_KEY_FALSE(isolation_type_snp); + +bool hv_isolation_type_snp(void) +{ + return static_branch_unlikely(&isolation_type_snp); +} +EXPORT_SYMBOL_GPL(hv_isolation_type_snp); diff --git a/arch/x86/include/asm/mshyperv.h b/arch/x86/include/asm/mshyperv.h index adccbc209169..37739a277ac6 100644 --- a/arch/x86/include/asm/mshyperv.h +++ b/arch/x86/include/asm/mshyperv.h @@ -11,6 +11,8 @@ #include #include +DECLARE_STATIC_KEY_FALSE(isolation_type_snp); + typedef int (*hyperv_fill_flush_list_func)( struct hv_guest_mapping_flush_list *flush, void *data); @@ -39,6 +41,8 @@ extern void *hv_hypercall_pg; extern u64 hv_current_partition_id; +extern void __percpu **hv_ghcb_pg; + int hv_call_deposit_pages(int node, u64 partition_id, u32 num_pages); int hv_call_add_logical_proc(int node, u32 lp_index, u32 acpi_id); int hv_call_create_vp(int node, u64 partition_id, u32 vp_index, u32 flags); diff --git a/arch/x86/kernel/cpu/mshyperv.c b/arch/x86/kernel/cpu/mshyperv.c index e095c28d27ae..b09ade389040 100644 --- a/arch/x86/kernel/cpu/mshyperv.c +++ b/arch/x86/kernel/cpu/mshyperv.c @@ -316,6 +316,9 @@ static void __init ms_hyperv_init_platform(void) pr_info("Hyper-V: Isolation Config: Group A 0x%x, Group B 0x%x\n", ms_hyperv.isolation_config_a, ms_hyperv.isolation_config_b); + + if (hv_get_isolation_type() == HV_ISOLATION_TYPE_SNP) + static_branch_enable(&isolation_type_snp); } if (hv_max_functions_eax >= HYPERV_CPUID_NESTED_FEATURES) { diff --git a/include/asm-generic/mshyperv.h b/include/asm-generic/mshyperv.h index 
d3eae6cdbacb..2d88aa855f7e 100644
--- a/include/asm-generic/mshyperv.h
+++ b/include/asm-generic/mshyperv.h
@@ -254,12 +254,18 @@ bool hv_is_hyperv_initialized(void);
 bool hv_is_hibernation_supported(void);
 enum hv_isolation_type hv_get_isolation_type(void);
 bool hv_is_isolation_supported(void);
+bool hv_isolation_type_snp(void);
 void hyperv_cleanup(void);
 bool hv_query_ext_cap(u64 cap_query);
 #else /* CONFIG_HYPERV */
 static inline bool hv_is_hyperv_initialized(void) { return false; }
 static inline bool hv_is_hibernation_supported(void) { return false; }
 static inline void hyperv_cleanup(void) {}
+static inline bool hv_is_isolation_supported(void) { return false; }
+static inline enum hv_isolation_type hv_get_isolation_type(void)
+{
+	return HV_ISOLATION_TYPE_NONE;
+}
 #endif /* CONFIG_HYPERV */
 #endif

From patchwork Mon Oct 25 12:21:07 2021
X-Patchwork-Submitter: Tianyu Lan
X-Patchwork-Id: 12581701
From: Tianyu Lan
Subject: [PATCH V9 2/9] x86/hyperv: Initialize shared memory boundary in the Isolation VM.
Date: Mon, 25 Oct 2021 08:21:07 -0400
Message-Id: <20211025122116.264793-3-ltykernel@gmail.com>
In-Reply-To: <20211025122116.264793-1-ltykernel@gmail.com>
References: <20211025122116.264793-1-ltykernel@gmail.com>

From: Tianyu Lan

Hyper-V exposes the shared memory boundary via the HYPERV_CPUID_ISOLATION_CONFIG
cpuid leaf and stores it in the shared_gpa_boundary field of struct ms_hyperv.
This prepares for sharing memory with the host in SNP guests.

Reviewed-by: Michael Kelley
Signed-off-by: Tianyu Lan
---
Change since v4:
	* Rename reserve field.
Change since v3:
	* Use BIT_ULL to get shared_gpa_boundary
	* Rename field Reserved* to reserved
---
 arch/x86/kernel/cpu/mshyperv.c |  2 ++
 include/asm-generic/mshyperv.h | 12 +++++++++++-
 2 files changed, 13 insertions(+), 1 deletion(-)

diff --git a/arch/x86/kernel/cpu/mshyperv.c b/arch/x86/kernel/cpu/mshyperv.c
index b09ade389040..4794b716ec79 100644
--- a/arch/x86/kernel/cpu/mshyperv.c
+++ b/arch/x86/kernel/cpu/mshyperv.c
@@ -313,6 +313,8 @@ static void __init ms_hyperv_init_platform(void)
 	if (ms_hyperv.priv_high & HV_ISOLATION) {
 		ms_hyperv.isolation_config_a = cpuid_eax(HYPERV_CPUID_ISOLATION_CONFIG);
 		ms_hyperv.isolation_config_b = cpuid_ebx(HYPERV_CPUID_ISOLATION_CONFIG);
+		ms_hyperv.shared_gpa_boundary =
+			BIT_ULL(ms_hyperv.shared_gpa_boundary_bits);
 
 		pr_info("Hyper-V: Isolation Config: Group A 0x%x, Group B 0x%x\n",
 			ms_hyperv.isolation_config_a, ms_hyperv.isolation_config_b);
diff --git a/include/asm-generic/mshyperv.h b/include/asm-generic/mshyperv.h
index 2d88aa855f7e..a8ac497167d2 100644
--- a/include/asm-generic/mshyperv.h
+++ b/include/asm-generic/mshyperv.h
@@ -35,7 +35,17 @@ struct ms_hyperv_info {
 	u32 max_vp_index;
 	u32 max_lp_index;
 	u32 isolation_config_a;
-	u32 isolation_config_b;
+	union {
+		u32 isolation_config_b;
+		struct {
+			u32 cvm_type : 4;
+			u32 reserved1 : 1;
+			u32 shared_gpa_boundary_active : 1;
+			u32 shared_gpa_boundary_bits : 6;
+			u32 reserved2 : 20;
+		};
+	};
+	u64 shared_gpa_boundary;
 };
 
 extern struct ms_hyperv_info ms_hyperv;

From patchwork Mon Oct 25 12:21:08 2021
X-Patchwork-Submitter: Tianyu Lan
X-Patchwork-Id: 12581703
From: Tianyu Lan
Subject: [PATCH V9 3/9] x86/hyperv: Add new hvcall guest address host visibility support
Date: Mon, 25 Oct 2021 08:21:08 -0400
Message-Id: <20211025122116.264793-4-ltykernel@gmail.com>
In-Reply-To: <20211025122116.264793-1-ltykernel@gmail.com>
References: <20211025122116.264793-1-ltykernel@gmail.com>

From: Tianyu Lan

Add a new hvcall for guest address host visibility, and use it to mark
memory visible to the host. Call it from set_memory_decrypted()/encrypted().
Add a HYPERVISOR feature check in hv_is_isolation_supported() to optimize
the non-virtualized case.

Acked-by: Dave Hansen
Reviewed-by: Michael Kelley
Signed-off-by: Tianyu Lan
---
Change since v6:
	* Add a hv_set_mem_host_visibility() stub when CONFIG_HYPERV is not set, to fix a compile error.
	* Add comment to describe __set_memory_enc_pgtable().
Change since v4:
	* Fix typo in the comment
	* Make hv_mark_gpa_visibility() a static function
	* Merge __hv_set_mem_host_visibility() and hv_set_mem_host_visibility()
Change since v3:
	* Fix error code handling in __hv_set_mem_host_visibility().
	* Move HvCallModifySparseGpaPageHostVisibility near enum hv_mem_host_visibility.
Change since v2:
	* Rework __set_memory_enc_dec() and call the Hyper-V or AMD function according to a platform check.
Change since v1:
	* Use a new static call x86_set_memory_enc to avoid adding a Hyper-V specific check in the set_memory code.
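As a reading aid, the call flow introduced here can be followed in the diff below: set_memory_decrypted()/encrypted() dispatch to hv_set_mem_host_visibility(), which batches the PFNs and issues HVCALL_MODIFY_SPARSE_GPA_PAGE_HOST_VISIBILITY. The minimal sketch that follows is not part of the patch; the example_* helper and its buffer are hypothetical and only illustrate how a driver would share a buffer with the host once this series is applied (size is assumed page aligned):

#include <linux/mm.h>
#include <linux/vmalloc.h>
#include <linux/set_memory.h>

/* Hypothetical helper, for illustration only. */
static void *example_alloc_host_visible_buf(size_t size)
{
	void *buf = vzalloc(size);

	if (!buf)
		return NULL;

	/*
	 * In a Hyper-V Isolation VM this routes to
	 * hv_set_mem_host_visibility(); on other platforms it falls back
	 * to the page-table based __set_memory_enc_pgtable() path.
	 */
	if (set_memory_decrypted((unsigned long)buf, size / PAGE_SIZE)) {
		vfree(buf);
		return NULL;
	}

	return buf;
}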
--- arch/x86/hyperv/Makefile | 2 +- arch/x86/hyperv/hv_init.c | 6 ++ arch/x86/hyperv/ivm.c | 105 +++++++++++++++++++++++++++++ arch/x86/include/asm/hyperv-tlfs.h | 17 +++++ arch/x86/include/asm/mshyperv.h | 7 +- arch/x86/mm/pat/set_memory.c | 23 +++++-- include/asm-generic/hyperv-tlfs.h | 1 + 7 files changed, 154 insertions(+), 7 deletions(-) create mode 100644 arch/x86/hyperv/ivm.c diff --git a/arch/x86/hyperv/Makefile b/arch/x86/hyperv/Makefile index 48e2c51464e8..5d2de10809ae 100644 --- a/arch/x86/hyperv/Makefile +++ b/arch/x86/hyperv/Makefile @@ -1,5 +1,5 @@ # SPDX-License-Identifier: GPL-2.0-only -obj-y := hv_init.o mmu.o nested.o irqdomain.o +obj-y := hv_init.o mmu.o nested.o irqdomain.o ivm.o obj-$(CONFIG_X86_64) += hv_apic.o hv_proc.o ifdef CONFIG_X86_64 diff --git a/arch/x86/hyperv/hv_init.c b/arch/x86/hyperv/hv_init.c index a7e922755ad1..d57df6825527 100644 --- a/arch/x86/hyperv/hv_init.c +++ b/arch/x86/hyperv/hv_init.c @@ -603,6 +603,12 @@ EXPORT_SYMBOL_GPL(hv_get_isolation_type); bool hv_is_isolation_supported(void) { + if (!cpu_feature_enabled(X86_FEATURE_HYPERVISOR)) + return false; + + if (!hypervisor_is_type(X86_HYPER_MS_HYPERV)) + return false; + return hv_get_isolation_type() != HV_ISOLATION_TYPE_NONE; } diff --git a/arch/x86/hyperv/ivm.c b/arch/x86/hyperv/ivm.c new file mode 100644 index 000000000000..79e7fb83472a --- /dev/null +++ b/arch/x86/hyperv/ivm.c @@ -0,0 +1,105 @@ +// SPDX-License-Identifier: GPL-2.0 +/* + * Hyper-V Isolation VM interface with paravisor and hypervisor + * + * Author: + * Tianyu Lan + */ + +#include +#include +#include +#include +#include +#include + +/* + * hv_mark_gpa_visibility - Set pages visible to host via hvcall. + * + * In Isolation VM, all guest memory is encrypted from host and guest + * needs to set memory visible to host via hvcall before sharing memory + * with host. + */ +static int hv_mark_gpa_visibility(u16 count, const u64 pfn[], + enum hv_mem_host_visibility visibility) +{ + struct hv_gpa_range_for_visibility **input_pcpu, *input; + u16 pages_processed; + u64 hv_status; + unsigned long flags; + + /* no-op if partition isolation is not enabled */ + if (!hv_is_isolation_supported()) + return 0; + + if (count > HV_MAX_MODIFY_GPA_REP_COUNT) { + pr_err("Hyper-V: GPA count:%d exceeds supported:%lu\n", count, + HV_MAX_MODIFY_GPA_REP_COUNT); + return -EINVAL; + } + + local_irq_save(flags); + input_pcpu = (struct hv_gpa_range_for_visibility **) + this_cpu_ptr(hyperv_pcpu_input_arg); + input = *input_pcpu; + if (unlikely(!input)) { + local_irq_restore(flags); + return -EINVAL; + } + + input->partition_id = HV_PARTITION_ID_SELF; + input->host_visibility = visibility; + input->reserved0 = 0; + input->reserved1 = 0; + memcpy((void *)input->gpa_page_list, pfn, count * sizeof(*pfn)); + hv_status = hv_do_rep_hypercall( + HVCALL_MODIFY_SPARSE_GPA_PAGE_HOST_VISIBILITY, count, + 0, input, &pages_processed); + local_irq_restore(flags); + + if (hv_result_success(hv_status)) + return 0; + else + return -EFAULT; +} + +/* + * hv_set_mem_host_visibility - Set specified memory visible to host. + * + * In Isolation VM, all guest memory is encrypted from host and guest + * needs to set memory visible to host via hvcall before sharing memory + * with host. This function works as wrap of hv_mark_gpa_visibility() + * with memory base and size. + */ +int hv_set_mem_host_visibility(unsigned long kbuffer, int pagecount, bool visible) +{ + enum hv_mem_host_visibility visibility = visible ? 
+ VMBUS_PAGE_VISIBLE_READ_WRITE : VMBUS_PAGE_NOT_VISIBLE; + u64 *pfn_array; + int ret = 0; + int i, pfn; + + if (!hv_is_isolation_supported() || !hv_hypercall_pg) + return 0; + + pfn_array = kmalloc(HV_HYP_PAGE_SIZE, GFP_KERNEL); + if (!pfn_array) + return -ENOMEM; + + for (i = 0, pfn = 0; i < pagecount; i++) { + pfn_array[pfn] = virt_to_hvpfn((void *)kbuffer + i * HV_HYP_PAGE_SIZE); + pfn++; + + if (pfn == HV_MAX_MODIFY_GPA_REP_COUNT || i == pagecount - 1) { + ret = hv_mark_gpa_visibility(pfn, pfn_array, + visibility); + if (ret) + goto err_free_pfn_array; + pfn = 0; + } + } + + err_free_pfn_array: + kfree(pfn_array); + return ret; +} diff --git a/arch/x86/include/asm/hyperv-tlfs.h b/arch/x86/include/asm/hyperv-tlfs.h index 2322d6bd5883..381e88122a5f 100644 --- a/arch/x86/include/asm/hyperv-tlfs.h +++ b/arch/x86/include/asm/hyperv-tlfs.h @@ -276,6 +276,23 @@ enum hv_isolation_type { #define HV_X64_MSR_TIME_REF_COUNT HV_REGISTER_TIME_REF_COUNT #define HV_X64_MSR_REFERENCE_TSC HV_REGISTER_REFERENCE_TSC +/* Hyper-V memory host visibility */ +enum hv_mem_host_visibility { + VMBUS_PAGE_NOT_VISIBLE = 0, + VMBUS_PAGE_VISIBLE_READ_ONLY = 1, + VMBUS_PAGE_VISIBLE_READ_WRITE = 3 +}; + +/* HvCallModifySparseGpaPageHostVisibility hypercall */ +#define HV_MAX_MODIFY_GPA_REP_COUNT ((PAGE_SIZE / sizeof(u64)) - 2) +struct hv_gpa_range_for_visibility { + u64 partition_id; + u32 host_visibility:2; + u32 reserved0:30; + u32 reserved1; + u64 gpa_page_list[HV_MAX_MODIFY_GPA_REP_COUNT]; +} __packed; + /* * Declare the MSR used to setup pages used to communicate with the hypervisor. */ diff --git a/arch/x86/include/asm/mshyperv.h b/arch/x86/include/asm/mshyperv.h index 37739a277ac6..f3154ca41ac4 100644 --- a/arch/x86/include/asm/mshyperv.h +++ b/arch/x86/include/asm/mshyperv.h @@ -192,7 +192,7 @@ struct irq_domain *hv_create_pci_msi_domain(void); int hv_map_ioapic_interrupt(int ioapic_id, bool level, int vcpu, int vector, struct hv_interrupt_entry *entry); int hv_unmap_ioapic_interrupt(int ioapic_id, struct hv_interrupt_entry *entry); - +int hv_set_mem_host_visibility(unsigned long addr, int numpages, bool visible); #else /* CONFIG_HYPERV */ static inline void hyperv_init(void) {} static inline void hyperv_setup_mmu_ops(void) {} @@ -209,6 +209,11 @@ static inline int hyperv_flush_guest_mapping_range(u64 as, { return -1; } +static inline int hv_set_mem_host_visibility(unsigned long addr, int numpages, + bool visible) +{ + return -1; +} #endif /* CONFIG_HYPERV */ diff --git a/arch/x86/mm/pat/set_memory.c b/arch/x86/mm/pat/set_memory.c index ad8a5c586a35..525f682ab150 100644 --- a/arch/x86/mm/pat/set_memory.c +++ b/arch/x86/mm/pat/set_memory.c @@ -29,6 +29,8 @@ #include #include #include +#include +#include #include "../mm_internal.h" @@ -1980,15 +1982,15 @@ int set_memory_global(unsigned long addr, int numpages) __pgprot(_PAGE_GLOBAL), 0); } -static int __set_memory_enc_dec(unsigned long addr, int numpages, bool enc) +/* + * __set_memory_enc_pgtable() is used for the hypervisors that get + * informed about "encryption" status via page tables. 
+ */ +static int __set_memory_enc_pgtable(unsigned long addr, int numpages, bool enc) { struct cpa_data cpa; int ret; - /* Nothing to do if memory encryption is not active */ - if (!mem_encrypt_active()) - return 0; - /* Should not be working on unaligned addresses */ if (WARN_ONCE(addr & ~PAGE_MASK, "misaligned address: %#lx\n", addr)) addr &= PAGE_MASK; @@ -2023,6 +2025,17 @@ static int __set_memory_enc_dec(unsigned long addr, int numpages, bool enc) return ret; } +static int __set_memory_enc_dec(unsigned long addr, int numpages, bool enc) +{ + if (hv_is_isolation_supported()) + return hv_set_mem_host_visibility(addr, numpages, !enc); + + if (mem_encrypt_active()) + return __set_memory_enc_pgtable(addr, numpages, enc); + + return 0; +} + int set_memory_encrypted(unsigned long addr, int numpages) { return __set_memory_enc_dec(addr, numpages, true); diff --git a/include/asm-generic/hyperv-tlfs.h b/include/asm-generic/hyperv-tlfs.h index 56348a541c50..8ed6733d5146 100644 --- a/include/asm-generic/hyperv-tlfs.h +++ b/include/asm-generic/hyperv-tlfs.h @@ -158,6 +158,7 @@ struct ms_hyperv_tsc_page { #define HVCALL_RETARGET_INTERRUPT 0x007e #define HVCALL_FLUSH_GUEST_PHYSICAL_ADDRESS_SPACE 0x00af #define HVCALL_FLUSH_GUEST_PHYSICAL_ADDRESS_LIST 0x00b0 +#define HVCALL_MODIFY_SPARSE_GPA_PAGE_HOST_VISIBILITY 0x00db /* Extended hypercalls */ #define HV_EXT_CALL_QUERY_CAPABILITIES 0x8001 From patchwork Mon Oct 25 12:21:09 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Tianyu Lan X-Patchwork-Id: 12581705 X-Patchwork-Delegate: kuba@kernel.org Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id BEC3EC433EF for ; Mon, 25 Oct 2021 12:21:59 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id A804060FDA for ; Mon, 25 Oct 2021 12:21:59 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S233376AbhJYMYS (ORCPT ); Mon, 25 Oct 2021 08:24:18 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:38728 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S233157AbhJYMXr (ORCPT ); Mon, 25 Oct 2021 08:23:47 -0400 Received: from mail-pg1-x535.google.com (mail-pg1-x535.google.com [IPv6:2607:f8b0:4864:20::535]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id BD4FBC061224; Mon, 25 Oct 2021 05:21:25 -0700 (PDT) Received: by mail-pg1-x535.google.com with SMTP id t7so10736938pgl.9; Mon, 25 Oct 2021 05:21:25 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20210112; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=AIegeKeliv32sYVrHOZHNxjTa99Pk59fKPbxUT7zswg=; b=jJQhZ3FyZXo5ZuKKgcIIQmRjl5jJwbGfFr+kooGVUwgty/LhbCVFH0yd+Xd8Mmazi4 6/vBbJS8yXjA8DkmX7P4nfz6gVTCZM0hGz7SJnwyoGAD4UvX0XgpIv7HhoXlEV4kStAm KrvzmGDM6ev3tCLM7IVy0123fHP7mx65V38vOVgH9GxjHFZ8YvUEVUqO6TAsURczhSqb cCQXbfhXRLjcRigpVV2+S4kGXQe5iGGb+oIJFp+VOSJjCaaaMZ7M9Ydl+KZF5qwdGrqE B1vpsQC3Q1jJVrTQU41/X69OsOnSNiSiQL4qxK5QOpDV0c75ohwDLTXuIEuxWmhHeyXH XKLA== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; 
From: Tianyu Lan
Subject: [PATCH V9 4/9] Drivers: hv: vmbus: Mark vmbus ring buffer visible to host in Isolation VM
Date: Mon, 25 Oct 2021 08:21:09 -0400
Message-Id: <20211025122116.264793-5-ltykernel@gmail.com>
In-Reply-To: <20211025122116.264793-1-ltykernel@gmail.com>
References: <20211025122116.264793-1-ltykernel@gmail.com>

From: Tianyu Lan

Mark the vmbus ring buffer visible to the host with set_memory_decrypted()
when establishing the GPADL handle.

Reviewed-by: Michael Kelley
Signed-off-by: Tianyu Lan
---
Change since v5:
	* Replace HVPFN_UP() with PFN_UP() in __vmbus_establish_gpadl()
	* Remove the unused gpadl variable in __vmbus_open() and vmbus_close_internal()
	* Clear gpadl_handle in vmbus_teardown_gpadl().
Change since v4:
	* Change the gpadl handle in the netvsc and uio drivers from u32 to struct vmbus_gpadl.
	* Change vmbus_establish_gpadl()'s gpadl_handle parameter to the vmbus_gpadl data structure.
Change since v3:
	* Change the vmbus_teardown_gpadl() parameter and put the gpadl handle, buffer and buffer size in struct vmbus_gpadl.
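For context, after this change a GPADL is described by struct vmbus_gpadl (handle, buffer and size) rather than a bare u32 handle. The sketch below shows the resulting calling convention; the example_* helpers and the vzalloc'd buffer are hypothetical, and only vmbus_establish_gpadl(), vmbus_teardown_gpadl() and struct vmbus_gpadl come from this patch:

#include <linux/hyperv.h>
#include <linux/vmalloc.h>

/* Hypothetical driver helpers, for illustration only. */
static int example_setup_gpadl(struct vmbus_channel *chan, u32 size,
			       struct vmbus_gpadl *gpadl, void **buf)
{
	int ret;

	*buf = vzalloc(size);
	if (!*buf)
		return -ENOMEM;

	/* The handle, buffer pointer and size are all kept in *gpadl. */
	ret = vmbus_establish_gpadl(chan, *buf, size, gpadl);
	if (ret) {
		vfree(*buf);
		*buf = NULL;
		return ret;
	}

	pr_info("established GPADL handle %u\n", gpadl->gpadl_handle);
	return 0;
}

static void example_teardown_gpadl(struct vmbus_channel *chan,
				   struct vmbus_gpadl *gpadl, void *buf)
{
	/* Tears down the GPADL, re-encrypts the buffer and clears the handle. */
	vmbus_teardown_gpadl(chan, gpadl);
	vfree(buf);
}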
--- drivers/hv/channel.c | 53 +++++++++++++++++++++++---------- drivers/net/hyperv/hyperv_net.h | 5 ++-- drivers/net/hyperv/netvsc.c | 15 +++++----- drivers/uio/uio_hv_generic.c | 18 +++++------ include/linux/hyperv.h | 12 ++++++-- 5 files changed, 65 insertions(+), 38 deletions(-) diff --git a/drivers/hv/channel.c b/drivers/hv/channel.c index f3761c73b074..b37ff4a39224 100644 --- a/drivers/hv/channel.c +++ b/drivers/hv/channel.c @@ -17,6 +17,7 @@ #include #include #include +#include #include #include @@ -456,7 +457,7 @@ static int create_gpadl_header(enum hv_gpadl_type type, void *kbuffer, static int __vmbus_establish_gpadl(struct vmbus_channel *channel, enum hv_gpadl_type type, void *kbuffer, u32 size, u32 send_offset, - u32 *gpadl_handle) + struct vmbus_gpadl *gpadl) { struct vmbus_channel_gpadl_header *gpadlmsg; struct vmbus_channel_gpadl_body *gpadl_body; @@ -474,6 +475,15 @@ static int __vmbus_establish_gpadl(struct vmbus_channel *channel, if (ret) return ret; + ret = set_memory_decrypted((unsigned long)kbuffer, + PFN_UP(size)); + if (ret) { + dev_warn(&channel->device_obj->device, + "Failed to set host visibility for new GPADL %d.\n", + ret); + return ret; + } + init_completion(&msginfo->waitevent); msginfo->waiting_channel = channel; @@ -537,7 +547,10 @@ static int __vmbus_establish_gpadl(struct vmbus_channel *channel, } /* At this point, we received the gpadl created msg */ - *gpadl_handle = gpadlmsg->gpadl; + gpadl->gpadl_handle = gpadlmsg->gpadl; + gpadl->buffer = kbuffer; + gpadl->size = size; + cleanup: spin_lock_irqsave(&vmbus_connection.channelmsg_lock, flags); @@ -549,6 +562,11 @@ static int __vmbus_establish_gpadl(struct vmbus_channel *channel, } kfree(msginfo); + + if (ret) + set_memory_encrypted((unsigned long)kbuffer, + PFN_UP(size)); + return ret; } @@ -561,10 +579,10 @@ static int __vmbus_establish_gpadl(struct vmbus_channel *channel, * @gpadl_handle: some funky thing */ int vmbus_establish_gpadl(struct vmbus_channel *channel, void *kbuffer, - u32 size, u32 *gpadl_handle) + u32 size, struct vmbus_gpadl *gpadl) { return __vmbus_establish_gpadl(channel, HV_GPADL_BUFFER, kbuffer, size, - 0U, gpadl_handle); + 0U, gpadl); } EXPORT_SYMBOL_GPL(vmbus_establish_gpadl); @@ -675,7 +693,7 @@ static int __vmbus_open(struct vmbus_channel *newchannel, goto error_clean_ring; /* Establish the gpadl for the ring buffer */ - newchannel->ringbuffer_gpadlhandle = 0; + newchannel->ringbuffer_gpadlhandle.gpadl_handle = 0; err = __vmbus_establish_gpadl(newchannel, HV_GPADL_RING, page_address(newchannel->ringbuffer_page), @@ -701,7 +719,8 @@ static int __vmbus_open(struct vmbus_channel *newchannel, open_msg->header.msgtype = CHANNELMSG_OPENCHANNEL; open_msg->openid = newchannel->offermsg.child_relid; open_msg->child_relid = newchannel->offermsg.child_relid; - open_msg->ringbuffer_gpadlhandle = newchannel->ringbuffer_gpadlhandle; + open_msg->ringbuffer_gpadlhandle + = newchannel->ringbuffer_gpadlhandle.gpadl_handle; /* * The unit of ->downstream_ringbuffer_pageoffset is HV_HYP_PAGE and * the unit of ->ringbuffer_send_offset (i.e. 
send_pages) is PAGE, so @@ -759,8 +778,7 @@ static int __vmbus_open(struct vmbus_channel *newchannel, error_free_info: kfree(open_info); error_free_gpadl: - vmbus_teardown_gpadl(newchannel, newchannel->ringbuffer_gpadlhandle); - newchannel->ringbuffer_gpadlhandle = 0; + vmbus_teardown_gpadl(newchannel, &newchannel->ringbuffer_gpadlhandle); error_clean_ring: hv_ringbuffer_cleanup(&newchannel->outbound); hv_ringbuffer_cleanup(&newchannel->inbound); @@ -806,7 +824,7 @@ EXPORT_SYMBOL_GPL(vmbus_open); /* * vmbus_teardown_gpadl -Teardown the specified GPADL handle */ -int vmbus_teardown_gpadl(struct vmbus_channel *channel, u32 gpadl_handle) +int vmbus_teardown_gpadl(struct vmbus_channel *channel, struct vmbus_gpadl *gpadl) { struct vmbus_channel_gpadl_teardown *msg; struct vmbus_channel_msginfo *info; @@ -825,7 +843,7 @@ int vmbus_teardown_gpadl(struct vmbus_channel *channel, u32 gpadl_handle) msg->header.msgtype = CHANNELMSG_GPADL_TEARDOWN; msg->child_relid = channel->offermsg.child_relid; - msg->gpadl = gpadl_handle; + msg->gpadl = gpadl->gpadl_handle; spin_lock_irqsave(&vmbus_connection.channelmsg_lock, flags); list_add_tail(&info->msglistentry, @@ -845,6 +863,8 @@ int vmbus_teardown_gpadl(struct vmbus_channel *channel, u32 gpadl_handle) wait_for_completion(&info->waitevent); + gpadl->gpadl_handle = 0; + post_msg_err: /* * If the channel has been rescinded; @@ -859,6 +879,12 @@ int vmbus_teardown_gpadl(struct vmbus_channel *channel, u32 gpadl_handle) spin_unlock_irqrestore(&vmbus_connection.channelmsg_lock, flags); kfree(info); + + ret = set_memory_encrypted((unsigned long)gpadl->buffer, + PFN_UP(gpadl->size)); + if (ret) + pr_warn("Fail to set mem host visibility in GPADL teardown %d.\n", ret); + return ret; } EXPORT_SYMBOL_GPL(vmbus_teardown_gpadl); @@ -933,9 +959,8 @@ static int vmbus_close_internal(struct vmbus_channel *channel) } /* Tear down the gpadl for the channel's ring buffer */ - else if (channel->ringbuffer_gpadlhandle) { - ret = vmbus_teardown_gpadl(channel, - channel->ringbuffer_gpadlhandle); + else if (channel->ringbuffer_gpadlhandle.gpadl_handle) { + ret = vmbus_teardown_gpadl(channel, &channel->ringbuffer_gpadlhandle); if (ret) { pr_err("Close failed: teardown gpadl return %d\n", ret); /* @@ -943,8 +968,6 @@ static int vmbus_close_internal(struct vmbus_channel *channel) * it is perhaps better to leak memory. 
*/ } - - channel->ringbuffer_gpadlhandle = 0; } if (!ret) diff --git a/drivers/net/hyperv/hyperv_net.h b/drivers/net/hyperv/hyperv_net.h index bc48855dff10..315278a7cf88 100644 --- a/drivers/net/hyperv/hyperv_net.h +++ b/drivers/net/hyperv/hyperv_net.h @@ -1075,14 +1075,15 @@ struct netvsc_device { /* Receive buffer allocated by us but manages by NetVSP */ void *recv_buf; u32 recv_buf_size; /* allocated bytes */ - u32 recv_buf_gpadl_handle; + struct vmbus_gpadl recv_buf_gpadl_handle; u32 recv_section_cnt; u32 recv_section_size; u32 recv_completion_cnt; /* Send buffer allocated by us */ void *send_buf; - u32 send_buf_gpadl_handle; + u32 send_buf_size; + struct vmbus_gpadl send_buf_gpadl_handle; u32 send_section_cnt; u32 send_section_size; unsigned long *send_section_map; diff --git a/drivers/net/hyperv/netvsc.c b/drivers/net/hyperv/netvsc.c index 7bd935412853..396bc1c204e6 100644 --- a/drivers/net/hyperv/netvsc.c +++ b/drivers/net/hyperv/netvsc.c @@ -278,9 +278,9 @@ static void netvsc_teardown_recv_gpadl(struct hv_device *device, { int ret; - if (net_device->recv_buf_gpadl_handle) { + if (net_device->recv_buf_gpadl_handle.gpadl_handle) { ret = vmbus_teardown_gpadl(device->channel, - net_device->recv_buf_gpadl_handle); + &net_device->recv_buf_gpadl_handle); /* If we failed here, we might as well return and have a leak * rather than continue and a bugchk @@ -290,7 +290,6 @@ static void netvsc_teardown_recv_gpadl(struct hv_device *device, "unable to teardown receive buffer's gpadl\n"); return; } - net_device->recv_buf_gpadl_handle = 0; } } @@ -300,9 +299,9 @@ static void netvsc_teardown_send_gpadl(struct hv_device *device, { int ret; - if (net_device->send_buf_gpadl_handle) { + if (net_device->send_buf_gpadl_handle.gpadl_handle) { ret = vmbus_teardown_gpadl(device->channel, - net_device->send_buf_gpadl_handle); + &net_device->send_buf_gpadl_handle); /* If we failed here, we might as well return and have a leak * rather than continue and a bugchk @@ -312,7 +311,6 @@ static void netvsc_teardown_send_gpadl(struct hv_device *device, "unable to teardown send buffer's gpadl\n"); return; } - net_device->send_buf_gpadl_handle = 0; } } @@ -380,7 +378,7 @@ static int netvsc_init_buf(struct hv_device *device, memset(init_packet, 0, sizeof(struct nvsp_message)); init_packet->hdr.msg_type = NVSP_MSG1_TYPE_SEND_RECV_BUF; init_packet->msg.v1_msg.send_recv_buf. - gpadl_handle = net_device->recv_buf_gpadl_handle; + gpadl_handle = net_device->recv_buf_gpadl_handle.gpadl_handle; init_packet->msg.v1_msg. send_recv_buf.id = NETVSC_RECEIVE_BUFFER_ID; @@ -463,6 +461,7 @@ static int netvsc_init_buf(struct hv_device *device, ret = -ENOMEM; goto cleanup; } + net_device->send_buf_size = buf_size; /* Establish the gpadl handle for this buffer on this * channel. 
Note: This call uses the vmbus connection rather @@ -482,7 +481,7 @@ static int netvsc_init_buf(struct hv_device *device, memset(init_packet, 0, sizeof(struct nvsp_message)); init_packet->hdr.msg_type = NVSP_MSG1_TYPE_SEND_SEND_BUF; init_packet->msg.v1_msg.send_send_buf.gpadl_handle = - net_device->send_buf_gpadl_handle; + net_device->send_buf_gpadl_handle.gpadl_handle; init_packet->msg.v1_msg.send_send_buf.id = NETVSC_SEND_BUFFER_ID; trace_nvsp_send(ndev, init_packet); diff --git a/drivers/uio/uio_hv_generic.c b/drivers/uio/uio_hv_generic.c index 652fe2547587..c08a6cfd119f 100644 --- a/drivers/uio/uio_hv_generic.c +++ b/drivers/uio/uio_hv_generic.c @@ -58,11 +58,11 @@ struct hv_uio_private_data { atomic_t refcnt; void *recv_buf; - u32 recv_gpadl; + struct vmbus_gpadl recv_gpadl; char recv_name[32]; /* "recv_4294967295" */ void *send_buf; - u32 send_gpadl; + struct vmbus_gpadl send_gpadl; char send_name[32]; }; @@ -179,15 +179,13 @@ hv_uio_new_channel(struct vmbus_channel *new_sc) static void hv_uio_cleanup(struct hv_device *dev, struct hv_uio_private_data *pdata) { - if (pdata->send_gpadl) { - vmbus_teardown_gpadl(dev->channel, pdata->send_gpadl); - pdata->send_gpadl = 0; + if (pdata->send_gpadl.gpadl_handle) { + vmbus_teardown_gpadl(dev->channel, &pdata->send_gpadl); vfree(pdata->send_buf); } - if (pdata->recv_gpadl) { - vmbus_teardown_gpadl(dev->channel, pdata->recv_gpadl); - pdata->recv_gpadl = 0; + if (pdata->recv_gpadl.gpadl_handle) { + vmbus_teardown_gpadl(dev->channel, &pdata->recv_gpadl); vfree(pdata->recv_buf); } } @@ -303,7 +301,7 @@ hv_uio_probe(struct hv_device *dev, /* put Global Physical Address Label in name */ snprintf(pdata->recv_name, sizeof(pdata->recv_name), - "recv:%u", pdata->recv_gpadl); + "recv:%u", pdata->recv_gpadl.gpadl_handle); pdata->info.mem[RECV_BUF_MAP].name = pdata->recv_name; pdata->info.mem[RECV_BUF_MAP].addr = (uintptr_t)pdata->recv_buf; @@ -324,7 +322,7 @@ hv_uio_probe(struct hv_device *dev, } snprintf(pdata->send_name, sizeof(pdata->send_name), - "send:%u", pdata->send_gpadl); + "send:%u", pdata->send_gpadl.gpadl_handle); pdata->info.mem[SEND_BUF_MAP].name = pdata->send_name; pdata->info.mem[SEND_BUF_MAP].addr = (uintptr_t)pdata->send_buf; diff --git a/include/linux/hyperv.h b/include/linux/hyperv.h index ddc8713ce57b..a9e0bc3b1511 100644 --- a/include/linux/hyperv.h +++ b/include/linux/hyperv.h @@ -803,6 +803,12 @@ struct vmbus_device { #define VMBUS_DEFAULT_MAX_PKT_SIZE 4096 +struct vmbus_gpadl { + u32 gpadl_handle; + u32 size; + void *buffer; +}; + struct vmbus_channel { struct list_head listentry; @@ -822,7 +828,7 @@ struct vmbus_channel { bool rescind_ref; /* got rescind msg, got channel reference */ struct completion rescind_event; - u32 ringbuffer_gpadlhandle; + struct vmbus_gpadl ringbuffer_gpadlhandle; /* Allocated memory for ring buffer */ struct page *ringbuffer_page; @@ -1192,10 +1198,10 @@ extern int vmbus_sendpacket_mpb_desc(struct vmbus_channel *channel, extern int vmbus_establish_gpadl(struct vmbus_channel *channel, void *kbuffer, u32 size, - u32 *gpadl_handle); + struct vmbus_gpadl *gpadl); extern int vmbus_teardown_gpadl(struct vmbus_channel *channel, - u32 gpadl_handle); + struct vmbus_gpadl *gpadl); void vmbus_reset_channel_cb(struct vmbus_channel *channel); From patchwork Mon Oct 25 12:21:10 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Tianyu Lan X-Patchwork-Id: 12581707 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on 
From: Tianyu Lan
Subject: [PATCH V9 5/9] x86/sev-es: Expose sev_es_ghcb_hv_call() to call ghcb hv call out of sev code
Date: Mon, 25 Oct 2021 08:21:10 -0400
Message-Id: <20211025122116.264793-6-ltykernel@gmail.com>
In-Reply-To: <20211025122116.264793-1-ltykernel@gmail.com>
References: <20211025122116.264793-1-ltykernel@gmail.com>

From: Tianyu Lan

Hyper-V needs to make GHCB hypervisor calls to write and read MSRs in an
Isolation VM, so expose sev_es_ghcb_hv_call() so it can be used by the
Hyper-V code. Hyper-V Isolation VMs are unenlightened guests that run a
paravisor in VMPL0 for communication, and the GHCB pages are allocated and
set up by that paravisor. Linux gets the GHCB page's physical address from
the paravisor via MSR_AMD64_SEV_ES_GHCB and should not change it. Add a
set_ghcb_msr parameter to sev_es_ghcb_hv_call() and do not set the GHCB
page's physical address when it is false.

Reviewed-by: Michael Kelley
Signed-off-by: Tianyu Lan
---
Change since v8:
	- Remove sev_es_ghcb_hv_call() stub function.

 arch/x86/include/asm/sev.h   |  5 +++++
 arch/x86/kernel/sev-shared.c | 25 ++++++++++++++++---------
 arch/x86/kernel/sev.c        | 13 +++++++------
 3 files changed, 28 insertions(+), 15 deletions(-)

diff --git a/arch/x86/include/asm/sev.h b/arch/x86/include/asm/sev.h
index fa5cd05d3b5b..e3576916215d 100644
--- a/arch/x86/include/asm/sev.h
+++ b/arch/x86/include/asm/sev.h
@@ -81,6 +81,11 @@ static __always_inline void sev_es_nmi_complete(void)
 		__sev_es_nmi_complete();
 }
 extern int __init sev_es_efi_map_ghcbs(pgd_t *pgd);
+extern enum es_result sev_es_ghcb_hv_call(struct ghcb *ghcb,
+					  bool set_ghcb_msr,
+					  struct es_em_ctxt *ctxt,
+					  u64 exit_code, u64 exit_info_1,
+					  u64 exit_info_2);
 #else
 static inline void sev_es_ist_enter(struct pt_regs *regs) { }
 static inline void sev_es_ist_exit(void) { }
diff --git a/arch/x86/kernel/sev-shared.c b/arch/x86/kernel/sev-shared.c
index ea9abd69237e..e1863a6d76b8 100644
--- a/arch/x86/kernel/sev-shared.c
+++ b/arch/x86/kernel/sev-shared.c
@@ -124,10 +124,9 @@ static enum es_result verify_exception_info(struct ghcb *ghcb, struct es_em_ctxt
 	return ES_VMM_ERROR;
 }
 
-static enum es_result sev_es_ghcb_hv_call(struct ghcb *ghcb,
-					  struct es_em_ctxt *ctxt,
-					  u64 exit_code, u64 exit_info_1,
-					  u64 exit_info_2)
+enum es_result sev_es_ghcb_hv_call(struct ghcb *ghcb, bool set_ghcb_msr,
+				   struct es_em_ctxt *ctxt, u64 exit_code,
+				   u64 exit_info_1, u64 exit_info_2)
 {
 	/* Fill in protocol and format specifiers */
 	ghcb->protocol_version = GHCB_PROTOCOL_MAX;
@@ -137,7 +136,14 @@ static enum es_result sev_es_ghcb_hv_call(struct ghcb *ghcb,
 	ghcb_set_sw_exit_info_1(ghcb, exit_info_1);
 	ghcb_set_sw_exit_info_2(ghcb, exit_info_2);
 
-	sev_es_wr_ghcb_msr(__pa(ghcb));
+	/*
+	 * Hyper-V unenlightened guests use a paravisor for communicating and
+	 * GHCB pages are being allocated and set up by that paravisor. Linux
+ */ + if (set_ghcb_msr) + sev_es_wr_ghcb_msr(__pa(ghcb)); + VMGEXIT(); return verify_exception_info(ghcb, ctxt); @@ -417,7 +423,7 @@ static enum es_result vc_handle_ioio(struct ghcb *ghcb, struct es_em_ctxt *ctxt) */ sw_scratch = __pa(ghcb) + offsetof(struct ghcb, shared_buffer); ghcb_set_sw_scratch(ghcb, sw_scratch); - ret = sev_es_ghcb_hv_call(ghcb, ctxt, SVM_EXIT_IOIO, + ret = sev_es_ghcb_hv_call(ghcb, true, ctxt, SVM_EXIT_IOIO, exit_info_1, exit_info_2); if (ret != ES_OK) return ret; @@ -459,7 +465,8 @@ static enum es_result vc_handle_ioio(struct ghcb *ghcb, struct es_em_ctxt *ctxt) ghcb_set_rax(ghcb, rax); - ret = sev_es_ghcb_hv_call(ghcb, ctxt, SVM_EXIT_IOIO, exit_info_1, 0); + ret = sev_es_ghcb_hv_call(ghcb, true, ctxt, + SVM_EXIT_IOIO, exit_info_1, 0); if (ret != ES_OK) return ret; @@ -490,7 +497,7 @@ static enum es_result vc_handle_cpuid(struct ghcb *ghcb, /* xgetbv will cause #GP - use reset value for xcr0 */ ghcb_set_xcr0(ghcb, 1); - ret = sev_es_ghcb_hv_call(ghcb, ctxt, SVM_EXIT_CPUID, 0, 0); + ret = sev_es_ghcb_hv_call(ghcb, true, ctxt, SVM_EXIT_CPUID, 0, 0); if (ret != ES_OK) return ret; @@ -515,7 +522,7 @@ static enum es_result vc_handle_rdtsc(struct ghcb *ghcb, bool rdtscp = (exit_code == SVM_EXIT_RDTSCP); enum es_result ret; - ret = sev_es_ghcb_hv_call(ghcb, ctxt, exit_code, 0, 0); + ret = sev_es_ghcb_hv_call(ghcb, true, ctxt, exit_code, 0, 0); if (ret != ES_OK) return ret; diff --git a/arch/x86/kernel/sev.c b/arch/x86/kernel/sev.c index a6895e440bc3..bb62a1d15d6c 100644 --- a/arch/x86/kernel/sev.c +++ b/arch/x86/kernel/sev.c @@ -648,7 +648,8 @@ static enum es_result vc_handle_msr(struct ghcb *ghcb, struct es_em_ctxt *ctxt) ghcb_set_rdx(ghcb, regs->dx); } - ret = sev_es_ghcb_hv_call(ghcb, ctxt, SVM_EXIT_MSR, exit_info_1, 0); + ret = sev_es_ghcb_hv_call(ghcb, true, ctxt, SVM_EXIT_MSR, + exit_info_1, 0); if ((ret == ES_OK) && (!exit_info_1)) { regs->ax = ghcb->save.rax; @@ -867,7 +868,7 @@ static enum es_result vc_do_mmio(struct ghcb *ghcb, struct es_em_ctxt *ctxt, ghcb_set_sw_scratch(ghcb, ghcb_pa + offsetof(struct ghcb, shared_buffer)); - return sev_es_ghcb_hv_call(ghcb, ctxt, exit_code, exit_info_1, exit_info_2); + return sev_es_ghcb_hv_call(ghcb, true, ctxt, exit_code, exit_info_1, exit_info_2); } static enum es_result vc_handle_mmio_twobyte_ops(struct ghcb *ghcb, @@ -1117,7 +1118,7 @@ static enum es_result vc_handle_dr7_write(struct ghcb *ghcb, /* Using a value of 0 for ExitInfo1 means RAX holds the value */ ghcb_set_rax(ghcb, val); - ret = sev_es_ghcb_hv_call(ghcb, ctxt, SVM_EXIT_WRITE_DR7, 0, 0); + ret = sev_es_ghcb_hv_call(ghcb, true, ctxt, SVM_EXIT_WRITE_DR7, 0, 0); if (ret != ES_OK) return ret; @@ -1147,7 +1148,7 @@ static enum es_result vc_handle_dr7_read(struct ghcb *ghcb, static enum es_result vc_handle_wbinvd(struct ghcb *ghcb, struct es_em_ctxt *ctxt) { - return sev_es_ghcb_hv_call(ghcb, ctxt, SVM_EXIT_WBINVD, 0, 0); + return sev_es_ghcb_hv_call(ghcb, true, ctxt, SVM_EXIT_WBINVD, 0, 0); } static enum es_result vc_handle_rdpmc(struct ghcb *ghcb, struct es_em_ctxt *ctxt) @@ -1156,7 +1157,7 @@ static enum es_result vc_handle_rdpmc(struct ghcb *ghcb, struct es_em_ctxt *ctxt ghcb_set_rcx(ghcb, ctxt->regs->cx); - ret = sev_es_ghcb_hv_call(ghcb, ctxt, SVM_EXIT_RDPMC, 0, 0); + ret = sev_es_ghcb_hv_call(ghcb, true, ctxt, SVM_EXIT_RDPMC, 0, 0); if (ret != ES_OK) return ret; @@ -1197,7 +1198,7 @@ static enum es_result vc_handle_vmmcall(struct ghcb *ghcb, if (x86_platform.hyper.sev_es_hcall_prepare) x86_platform.hyper.sev_es_hcall_prepare(ghcb, ctxt->regs); - ret = 
sev_es_ghcb_hv_call(ghcb, ctxt, SVM_EXIT_VMMCALL, 0, 0); + ret = sev_es_ghcb_hv_call(ghcb, true, ctxt, SVM_EXIT_VMMCALL, 0, 0); if (ret != ES_OK) return ret; From patchwork Mon Oct 25 12:21:11 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Tianyu Lan X-Patchwork-Id: 12581709 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 2413AC4332F for ; Mon, 25 Oct 2021 12:22:06 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 0A6A96023B for ; Mon, 25 Oct 2021 12:22:06 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S233420AbhJYMYZ (ORCPT ); Mon, 25 Oct 2021 08:24:25 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:38774 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S233249AbhJYMXu (ORCPT ); Mon, 25 Oct 2021 08:23:50 -0400 Received: from mail-pg1-x530.google.com (mail-pg1-x530.google.com [IPv6:2607:f8b0:4864:20::530]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 84E60C06122A; Mon, 25 Oct 2021 05:21:28 -0700 (PDT) Received: by mail-pg1-x530.google.com with SMTP id s136so10759046pgs.4; Mon, 25 Oct 2021 05:21:28 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20210112; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=SWIg/hrN+y5LzQZxoTnC5dUnrWv9jnHFLm+y0Jv7Wjc=; b=kgkQjaC7y4nZDoZodIZECB2KvISRkrUJ+y4EAtqehgdvpU0kxZTVQu/Q2/eu4rwLnw RG0ZdARfBFtIP9hRkpdbzU7Fh+qKtHE0LyN+tdUC9Z8NeIP23VaLzzlQElixMimKQyc5 wzevU1VqeY8ZXx1PxLiKzBraCuEMO/Gmt9glPSDernSAv9ZsR1fH/kRyl0A9YUy6HiFK x7IFdbCjeIdozimeVBZSudT16T8gRXXvp8JM+YP6Dqsrlno2tTSfkLDRoLUV48/cZVVI twebrWVXqKZ6f6oWz+6lMOahX+XceDjKtC8u/hWzpoF3JBKlez5fTbCro1Qe7fE5plDp Pt1A== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=SWIg/hrN+y5LzQZxoTnC5dUnrWv9jnHFLm+y0Jv7Wjc=; b=Obvg+RosG7Mmiq11uYbHRxtqQz3XdJI6DItjWyV5gJnoxi+pOfrH7032AnVCUxbdSl Y27Toz1zCPRgbMZh6DL+EzsAYZnccLEJ1hStXHuFhoTLkZMavEbnJ3Or43irbDkx7QSI L2xYRXIUibUNTWzzhtZeEYyAwbQLd+TEYP9C376tfC6gtdIDMR58/kc/9EkVbif8jNdg nSdsauqcMpszvzi1+T5/yCPJ6Hmf7TKFnGJpTUIPkFUmolc4uSiZ2T+ltDrpOSlP0n7t Om9BkO/iLlAfT5ST2KIqcPdsKP+21CPJAFcXhd26X2xP2oChfLQuCYcY6/c3ZE+ZYTFV D8xA== X-Gm-Message-State: AOAM532WvZdfFeP8z4y/67kykV+AVB9t9BYMvKVut25d3gqmoLN4wpKi MjJJK4vSnW3qhwDnWdUeWKE= X-Google-Smtp-Source: ABdhPJyXMFMNKnr3NRfBhJfns0lGIby4fTg+rEel6baS9rnld8COYRaXppv4s1pXj13K1NM45Z+7BQ== X-Received: by 2002:aa7:90d0:0:b0:44d:b8a:8837 with SMTP id k16-20020aa790d0000000b0044d0b8a8837mr18454115pfk.47.1635164488026; Mon, 25 Oct 2021 05:21:28 -0700 (PDT) Received: from ubuntu-Virtual-Machine.corp.microsoft.com ([2001:4898:80e8:8:bcf6:9813:137f:2b6]) by smtp.gmail.com with ESMTPSA id mi11sm2786166pjb.5.2021.10.25.05.21.27 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Mon, 25 Oct 2021 05:21:27 -0700 (PDT) From: Tianyu Lan To: kys@microsoft.com, haiyangz@microsoft.com, sthemmin@microsoft.com, wei.liu@kernel.org, decui@microsoft.com, tglx@linutronix.de, mingo@redhat.com, bp@alien8.de, x86@kernel.org, hpa@zytor.com, dave.hansen@linux.intel.com, luto@kernel.org, 
peterz@infradead.org, davem@davemloft.net, kuba@kernel.org, gregkh@linuxfoundation.org, arnd@arndb.de, brijesh.singh@amd.com, jroedel@suse.de, mikelley@microsoft.com, Tianyu.Lan@microsoft.com, thomas.lendacky@amd.com, pgonda@google.com, akpm@linux-foundation.org, rppt@kernel.org, kirill.shutemov@linux.intel.com, saravanand@fb.com, aneesh.kumar@linux.ibm.com, sfr@canb.auug.org.au, david@redhat.com, michael.h.kelley@microsoft.com Cc: linux-arch@vger.kernel.org, linux-hyperv@vger.kernel.org, linux-kernel@vger.kernel.org, netdev@vger.kernel.org, vkuznets@redhat.com, konrad.wilk@oracle.com, hch@lst.de, robin.murphy@arm.com, joro@8bytes.org, parri.andrea@gmail.com, dave.hansen@intel.com Subject: [PATCH V9 6/9] x86/hyperv: Add Write/Read MSR registers via ghcb page Date: Mon, 25 Oct 2021 08:21:11 -0400 Message-Id: <20211025122116.264793-7-ltykernel@gmail.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20211025122116.264793-1-ltykernel@gmail.com> References: <20211025122116.264793-1-ltykernel@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: netdev@vger.kernel.org From: Tianyu Lan Hyperv provides GHCB protocol to write Synthetic Interrupt Controller MSR registers in Isolation VM with AMD SEV SNP and these registers are emulated by hypervisor directly. Hyperv requires to write SINTx MSR registers twice. First writes MSR via GHCB page to communicate with hypervisor and then writes wrmsr instruction to talk with paravisor which runs in VMPL0. Guest OS ID MSR also needs to be set via GHCB page. Reviewed-by: Michael Kelley Signed-off-by: Tianyu Lan --- Change since v8: * Add AMD SEV option check around ghcb function in ivm.c * Add hv_ghcb_msr_write/read() stub function. Change since v6: * Spilt sev-es code into separate patch * Add hv_get/set_register() dummy function under CONFIG_HYPERV is not selected to fix compile error. Change since v5: * Adjust change layout in the asm/mshyperv.h to make hv_is_synic_reg(), hv_get_register() and hv_set_register() ahead of the #include of asm-generic/mshyperv.h * Remove Spurious blank line Change since v4: * Remove hv_get_simp(), hv_get_siefp() hv_get_synint_*() helper function. Move the logic into hv_get/set_register(). Change since v3: * Pass old_msg_type to hv_signal_eom() as parameter. * Use HV_REGISTER_* marcro instead of HV_X64_MSR_* * Add hv_isolation_type_snp() weak function. * Add maros to set syinc register in ARM code. Change since v1: * Introduce sev_es_ghcb_hv_call_simple() and share code between SEV and Hyper-V code. 
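To make the double-write requirement above concrete, here is a condensed sketch of the register write path this patch introduces (illustrative only; it mirrors the hv_set_register() and hv_ghcb_msr_write() hunks below and is not meant to be applied as a diff):

/*
 * Sketch: writing a SynIC MSR in an SNP Isolation VM.
 * The GHCB write reaches the hypervisor; the wrmsr with the
 * proxy bit set reaches the paravisor running in VMPL0.
 */
static void sketch_set_synic_reg(unsigned int reg, u64 value)
{
	if (hv_is_synic_reg(reg) && hv_isolation_type_snp()) {
		/* First write: via the GHCB page, emulated by the hypervisor. */
		hv_ghcb_msr_write(reg, value);

		/* Second write: plain wrmsr with the proxy bit, for the paravisor. */
		if (reg >= HV_REGISTER_SINT0 && reg <= HV_REGISTER_SINT15)
			wrmsrl(reg, value | (1 << 20));
	} else {
		wrmsrl(reg, value);
	}
}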
--- arch/x86/hyperv/hv_init.c | 36 ++--------- arch/x86/hyperv/ivm.c | 111 ++++++++++++++++++++++++++++++++ arch/x86/include/asm/mshyperv.h | 63 ++++++++++++++---- drivers/hv/hv.c | 74 ++++++++++++++++----- drivers/hv/hv_common.c | 6 ++ include/asm-generic/mshyperv.h | 1 + 6 files changed, 232 insertions(+), 59 deletions(-) diff --git a/arch/x86/hyperv/hv_init.c b/arch/x86/hyperv/hv_init.c index d57df6825527..a16a83e46a30 100644 --- a/arch/x86/hyperv/hv_init.c +++ b/arch/x86/hyperv/hv_init.c @@ -37,7 +37,7 @@ EXPORT_SYMBOL_GPL(hv_current_partition_id); void *hv_hypercall_pg; EXPORT_SYMBOL_GPL(hv_hypercall_pg); -void __percpu **hv_ghcb_pg; +union hv_ghcb __percpu **hv_ghcb_pg; /* Storage to save the hypercall page temporarily for hibernation */ static void *hv_hypercall_pg_saved; @@ -406,7 +406,7 @@ void __init hyperv_init(void) } if (hv_isolation_type_snp()) { - hv_ghcb_pg = alloc_percpu(void *); + hv_ghcb_pg = alloc_percpu(union hv_ghcb *); if (!hv_ghcb_pg) goto free_vp_assist_page; } @@ -424,6 +424,9 @@ void __init hyperv_init(void) guest_id = generate_guest_id(0, LINUX_VERSION_CODE, 0); wrmsrl(HV_X64_MSR_GUEST_OS_ID, guest_id); + /* Hyper-V requires to write guest os id via ghcb in SNP IVM. */ + hv_ghcb_msr_write(HV_X64_MSR_GUEST_OS_ID, guest_id); + hv_hypercall_pg = __vmalloc_node_range(PAGE_SIZE, 1, VMALLOC_START, VMALLOC_END, GFP_KERNEL, PAGE_KERNEL_ROX, VM_FLUSH_RESET_PERMS, NUMA_NO_NODE, @@ -501,6 +504,7 @@ void __init hyperv_init(void) clean_guest_os_id: wrmsrl(HV_X64_MSR_GUEST_OS_ID, 0); + hv_ghcb_msr_write(HV_X64_MSR_GUEST_OS_ID, 0); cpuhp_remove_state(cpuhp); free_ghcb_page: free_percpu(hv_ghcb_pg); @@ -522,6 +526,7 @@ void hyperv_cleanup(void) /* Reset our OS id */ wrmsrl(HV_X64_MSR_GUEST_OS_ID, 0); + hv_ghcb_msr_write(HV_X64_MSR_GUEST_OS_ID, 0); /* * Reset hypercall page reference before reset the page, @@ -592,30 +597,3 @@ bool hv_is_hyperv_initialized(void) return hypercall_msr.enable; } EXPORT_SYMBOL_GPL(hv_is_hyperv_initialized); - -enum hv_isolation_type hv_get_isolation_type(void) -{ - if (!(ms_hyperv.priv_high & HV_ISOLATION)) - return HV_ISOLATION_TYPE_NONE; - return FIELD_GET(HV_ISOLATION_TYPE, ms_hyperv.isolation_config_b); -} -EXPORT_SYMBOL_GPL(hv_get_isolation_type); - -bool hv_is_isolation_supported(void) -{ - if (!cpu_feature_enabled(X86_FEATURE_HYPERVISOR)) - return false; - - if (!hypervisor_is_type(X86_HYPER_MS_HYPERV)) - return false; - - return hv_get_isolation_type() != HV_ISOLATION_TYPE_NONE; -} - -DEFINE_STATIC_KEY_FALSE(isolation_type_snp); - -bool hv_isolation_type_snp(void) -{ - return static_branch_unlikely(&isolation_type_snp); -} -EXPORT_SYMBOL_GPL(hv_isolation_type_snp); diff --git a/arch/x86/hyperv/ivm.c b/arch/x86/hyperv/ivm.c index 79e7fb83472a..9c48d6e2d8b2 100644 --- a/arch/x86/hyperv/ivm.c +++ b/arch/x86/hyperv/ivm.c @@ -6,12 +6,123 @@ * Tianyu Lan */ +#include +#include #include #include #include #include +#include +#include #include #include +#include + +#ifdef CONFIG_AMD_MEM_ENCRYPT +union hv_ghcb { + struct ghcb ghcb; +} __packed __aligned(HV_HYP_PAGE_SIZE); + +void hv_ghcb_msr_write(u64 msr, u64 value) +{ + union hv_ghcb *hv_ghcb; + void **ghcb_base; + unsigned long flags; + struct es_em_ctxt ctxt; + + if (!hv_ghcb_pg) + return; + + WARN_ON(in_nmi()); + + local_irq_save(flags); + ghcb_base = (void **)this_cpu_ptr(hv_ghcb_pg); + hv_ghcb = (union hv_ghcb *)*ghcb_base; + if (!hv_ghcb) { + local_irq_restore(flags); + return; + } + + ghcb_set_rcx(&hv_ghcb->ghcb, msr); + ghcb_set_rax(&hv_ghcb->ghcb, lower_32_bits(value)); + 
ghcb_set_rdx(&hv_ghcb->ghcb, upper_32_bits(value)); + + if (sev_es_ghcb_hv_call(&hv_ghcb->ghcb, false, &ctxt, + SVM_EXIT_MSR, 1, 0)) + pr_warn("Fail to write msr via ghcb %llx.\n", msr); + + local_irq_restore(flags); +} +EXPORT_SYMBOL_GPL(hv_ghcb_msr_write); + +void hv_ghcb_msr_read(u64 msr, u64 *value) +{ + union hv_ghcb *hv_ghcb; + void **ghcb_base; + unsigned long flags; + struct es_em_ctxt ctxt; + + /* Check size of union hv_ghcb here. */ + BUILD_BUG_ON(sizeof(union hv_ghcb) != HV_HYP_PAGE_SIZE); + + if (!hv_ghcb_pg) + return; + + WARN_ON(in_nmi()); + + local_irq_save(flags); + ghcb_base = (void **)this_cpu_ptr(hv_ghcb_pg); + hv_ghcb = (union hv_ghcb *)*ghcb_base; + if (!hv_ghcb) { + local_irq_restore(flags); + return; + } + + ghcb_set_rcx(&hv_ghcb->ghcb, msr); + if (sev_es_ghcb_hv_call(&hv_ghcb->ghcb, false, &ctxt, + SVM_EXIT_MSR, 0, 0)) + pr_warn("Fail to read msr via ghcb %llx.\n", msr); + else + *value = (u64)lower_32_bits(hv_ghcb->ghcb.save.rax) + | ((u64)lower_32_bits(hv_ghcb->ghcb.save.rdx) << 32); + local_irq_restore(flags); +} +EXPORT_SYMBOL_GPL(hv_ghcb_msr_read); +#endif + +enum hv_isolation_type hv_get_isolation_type(void) +{ + if (!(ms_hyperv.priv_high & HV_ISOLATION)) + return HV_ISOLATION_TYPE_NONE; + return FIELD_GET(HV_ISOLATION_TYPE, ms_hyperv.isolation_config_b); +} +EXPORT_SYMBOL_GPL(hv_get_isolation_type); + +/* + * hv_is_isolation_supported - Check system runs in the Hyper-V + * isolation VM. + */ +bool hv_is_isolation_supported(void) +{ + if (!cpu_feature_enabled(X86_FEATURE_HYPERVISOR)) + return false; + + if (!hypervisor_is_type(X86_HYPER_MS_HYPERV)) + return false; + + return hv_get_isolation_type() != HV_ISOLATION_TYPE_NONE; +} + +DEFINE_STATIC_KEY_FALSE(isolation_type_snp); + +/* + * hv_isolation_type_snp - Check system runs in the AMD SEV-SNP based + * isolation VM. + */ +bool hv_isolation_type_snp(void) +{ + return static_branch_unlikely(&isolation_type_snp); +} /* * hv_mark_gpa_visibility - Set pages visible to host via hvcall. 
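A short usage note on the helpers above: for SVM_EXIT_MSR the MSR index is passed in RCX, exit_info_1 selects the direction (0 = read, 1 = write), and the value travels in RAX/RDX. A hedged usage sketch follows (not part of this patch; HV_X64_MSR_TIME_REF_COUNT is used purely as an example register):

/* Sketch: read a synthetic MSR through the new GHCB helper. */
static u64 sketch_read_time_ref_count(void)
{
	u64 val = 0;

	if (hv_isolation_type_snp())
		hv_ghcb_msr_read(HV_X64_MSR_TIME_REF_COUNT, &val);
	else
		rdmsrl(HV_X64_MSR_TIME_REF_COUNT, val);

	return val;
}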
diff --git a/arch/x86/include/asm/mshyperv.h b/arch/x86/include/asm/mshyperv.h index f3154ca41ac4..da3972fe5a7a 100644 --- a/arch/x86/include/asm/mshyperv.h +++ b/arch/x86/include/asm/mshyperv.h @@ -11,25 +11,14 @@ #include #include +union hv_ghcb; + DECLARE_STATIC_KEY_FALSE(isolation_type_snp); typedef int (*hyperv_fill_flush_list_func)( struct hv_guest_mapping_flush_list *flush, void *data); -static inline void hv_set_register(unsigned int reg, u64 value) -{ - wrmsrl(reg, value); -} - -static inline u64 hv_get_register(unsigned int reg) -{ - u64 value; - - rdmsrl(reg, value); - return value; -} - #define hv_get_raw_timer() rdtsc_ordered() void hyperv_vector_handler(struct pt_regs *regs); @@ -41,7 +30,7 @@ extern void *hv_hypercall_pg; extern u64 hv_current_partition_id; -extern void __percpu **hv_ghcb_pg; +extern union hv_ghcb __percpu **hv_ghcb_pg; int hv_call_deposit_pages(int node, u64 partition_id, u32 num_pages); int hv_call_add_logical_proc(int node, u32 lp_index, u32 acpi_id); @@ -193,6 +182,50 @@ int hv_map_ioapic_interrupt(int ioapic_id, bool level, int vcpu, int vector, struct hv_interrupt_entry *entry); int hv_unmap_ioapic_interrupt(int ioapic_id, struct hv_interrupt_entry *entry); int hv_set_mem_host_visibility(unsigned long addr, int numpages, bool visible); + +#ifdef CONFIG_AMD_MEM_ENCRYPT +void hv_ghcb_msr_write(u64 msr, u64 value); +void hv_ghcb_msr_read(u64 msr, u64 *value); +#else +static inline void hv_ghcb_msr_write(u64 msr, u64 value) {} +static inline void hv_ghcb_msr_read(u64 msr, u64 *value) {} +#endif + +extern bool hv_isolation_type_snp(void); + +static inline bool hv_is_synic_reg(unsigned int reg) +{ + if ((reg >= HV_REGISTER_SCONTROL) && + (reg <= HV_REGISTER_SINT15)) + return true; + return false; +} + +static inline u64 hv_get_register(unsigned int reg) +{ + u64 value; + + if (hv_is_synic_reg(reg) && hv_isolation_type_snp()) + hv_ghcb_msr_read(reg, &value); + else + rdmsrl(reg, value); + return value; +} + +static inline void hv_set_register(unsigned int reg, u64 value) +{ + if (hv_is_synic_reg(reg) && hv_isolation_type_snp()) { + hv_ghcb_msr_write(reg, value); + + /* Write proxy bit via wrmsl instruction */ + if (reg >= HV_REGISTER_SINT0 && + reg <= HV_REGISTER_SINT15) + wrmsrl(reg, value | 1 << 20); + } else { + wrmsrl(reg, value); + } +} + #else /* CONFIG_HYPERV */ static inline void hyperv_init(void) {} static inline void hyperv_setup_mmu_ops(void) {} @@ -209,6 +242,8 @@ static inline int hyperv_flush_guest_mapping_range(u64 as, { return -1; } +static inline void hv_set_register(unsigned int reg, u64 value) { } +static inline u64 hv_get_register(unsigned int reg) { return 0; } static inline int hv_set_mem_host_visibility(unsigned long addr, int numpages, bool visible) { diff --git a/drivers/hv/hv.c b/drivers/hv/hv.c index e83507f49676..943392db9e8a 100644 --- a/drivers/hv/hv.c +++ b/drivers/hv/hv.c @@ -8,6 +8,7 @@ */ #define pr_fmt(fmt) KBUILD_MODNAME ": " fmt +#include #include #include #include @@ -136,17 +137,24 @@ int hv_synic_alloc(void) tasklet_init(&hv_cpu->msg_dpc, vmbus_on_msg_dpc, (unsigned long) hv_cpu); - hv_cpu->synic_message_page = - (void *)get_zeroed_page(GFP_ATOMIC); - if (hv_cpu->synic_message_page == NULL) { - pr_err("Unable to allocate SYNIC message page\n"); - goto err; - } + /* + * Synic message and event pages are allocated by paravisor. + * Skip these pages allocation here. 
+ */ + if (!hv_isolation_type_snp()) { + hv_cpu->synic_message_page = + (void *)get_zeroed_page(GFP_ATOMIC); + if (hv_cpu->synic_message_page == NULL) { + pr_err("Unable to allocate SYNIC message page\n"); + goto err; + } - hv_cpu->synic_event_page = (void *)get_zeroed_page(GFP_ATOMIC); - if (hv_cpu->synic_event_page == NULL) { - pr_err("Unable to allocate SYNIC event page\n"); - goto err; + hv_cpu->synic_event_page = + (void *)get_zeroed_page(GFP_ATOMIC); + if (hv_cpu->synic_event_page == NULL) { + pr_err("Unable to allocate SYNIC event page\n"); + goto err; + } } hv_cpu->post_msg_page = (void *)get_zeroed_page(GFP_ATOMIC); @@ -201,16 +209,35 @@ void hv_synic_enable_regs(unsigned int cpu) /* Setup the Synic's message page */ simp.as_uint64 = hv_get_register(HV_REGISTER_SIMP); simp.simp_enabled = 1; - simp.base_simp_gpa = virt_to_phys(hv_cpu->synic_message_page) - >> HV_HYP_PAGE_SHIFT; + + if (hv_isolation_type_snp()) { + hv_cpu->synic_message_page + = memremap(simp.base_simp_gpa << HV_HYP_PAGE_SHIFT, + HV_HYP_PAGE_SIZE, MEMREMAP_WB); + if (!hv_cpu->synic_message_page) + pr_err("Fail to map syinc message page.\n"); + } else { + simp.base_simp_gpa = virt_to_phys(hv_cpu->synic_message_page) + >> HV_HYP_PAGE_SHIFT; + } hv_set_register(HV_REGISTER_SIMP, simp.as_uint64); /* Setup the Synic's event page */ siefp.as_uint64 = hv_get_register(HV_REGISTER_SIEFP); siefp.siefp_enabled = 1; - siefp.base_siefp_gpa = virt_to_phys(hv_cpu->synic_event_page) - >> HV_HYP_PAGE_SHIFT; + + if (hv_isolation_type_snp()) { + hv_cpu->synic_event_page = + memremap(siefp.base_siefp_gpa << HV_HYP_PAGE_SHIFT, + HV_HYP_PAGE_SIZE, MEMREMAP_WB); + + if (!hv_cpu->synic_event_page) + pr_err("Fail to map syinc event page.\n"); + } else { + siefp.base_siefp_gpa = virt_to_phys(hv_cpu->synic_event_page) + >> HV_HYP_PAGE_SHIFT; + } hv_set_register(HV_REGISTER_SIEFP, siefp.as_uint64); @@ -257,6 +284,8 @@ int hv_synic_init(unsigned int cpu) */ void hv_synic_disable_regs(unsigned int cpu) { + struct hv_per_cpu_context *hv_cpu + = per_cpu_ptr(hv_context.cpu_context, cpu); union hv_synic_sint shared_sint; union hv_synic_simp simp; union hv_synic_siefp siefp; @@ -273,14 +302,27 @@ void hv_synic_disable_regs(unsigned int cpu) shared_sint.as_uint64); simp.as_uint64 = hv_get_register(HV_REGISTER_SIMP); + /* + * In Isolation VM, sim and sief pages are allocated by + * paravisor. These pages also will be used by kdump + * kernel. So just reset enable bit here and keep page + * addresses. 
+ */ simp.simp_enabled = 0; - simp.base_simp_gpa = 0; + if (hv_isolation_type_snp()) + memunmap(hv_cpu->synic_message_page); + else + simp.base_simp_gpa = 0; hv_set_register(HV_REGISTER_SIMP, simp.as_uint64); siefp.as_uint64 = hv_get_register(HV_REGISTER_SIEFP); siefp.siefp_enabled = 0; - siefp.base_siefp_gpa = 0; + + if (hv_isolation_type_snp()) + memunmap(hv_cpu->synic_event_page); + else + siefp.base_siefp_gpa = 0; hv_set_register(HV_REGISTER_SIEFP, siefp.as_uint64); diff --git a/drivers/hv/hv_common.c b/drivers/hv/hv_common.c index c0d9048a4112..1fc82d237161 100644 --- a/drivers/hv/hv_common.c +++ b/drivers/hv/hv_common.c @@ -249,6 +249,12 @@ bool __weak hv_is_isolation_supported(void) } EXPORT_SYMBOL_GPL(hv_is_isolation_supported); +bool __weak hv_isolation_type_snp(void) +{ + return false; +} +EXPORT_SYMBOL_GPL(hv_isolation_type_snp); + void __weak hv_setup_vmbus_handler(void (*handler)(void)) { } diff --git a/include/asm-generic/mshyperv.h b/include/asm-generic/mshyperv.h index a8ac497167d2..6d3ba902ebb0 100644 --- a/include/asm-generic/mshyperv.h +++ b/include/asm-generic/mshyperv.h @@ -54,6 +54,7 @@ extern void __percpu **hyperv_pcpu_output_arg; extern u64 hv_do_hypercall(u64 control, void *inputaddr, void *outputaddr); extern u64 hv_do_fast_hypercall8(u16 control, u64 input8); +extern bool hv_isolation_type_snp(void); /* Helper functions that provide a consistent pattern for checking Hyper-V hypercall status. */ static inline int hv_result(u64 status) From patchwork Mon Oct 25 12:21:12 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Tianyu Lan X-Patchwork-Id: 12581715 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 61850C433FE for ; Mon, 25 Oct 2021 12:22:20 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 4604D603E8 for ; Mon, 25 Oct 2021 12:22:20 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S233216AbhJYMYj (ORCPT ); Mon, 25 Oct 2021 08:24:39 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:38820 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S233183AbhJYMYC (ORCPT ); Mon, 25 Oct 2021 08:24:02 -0400 Received: from mail-pl1-x630.google.com (mail-pl1-x630.google.com [IPv6:2607:f8b0:4864:20::630]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id C500CC06122E; Mon, 25 Oct 2021 05:21:29 -0700 (PDT) Received: by mail-pl1-x630.google.com with SMTP id f4so5066402plt.3; Mon, 25 Oct 2021 05:21:29 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20210112; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=rfS0g7vb96rwV7A8IuQpnrVMO7EVsHi2UQpE9Aj3wKo=; b=CRGmA+vVXZqglJj6JMO9qpnImTaM90q51Q9XAmExNLSqchfVOdMVJLr5aqg9txl02q DGtmI6TFgyPQZtCyO8n+i4rqPWJPThvevlfLVw5/E33WWTPsPYWpqgOzf1XT2SgVBDh8 nHfu0JP4zu++HN5ybyrgKW60m20dCSzIoJuxJMRSoWd1GecQVDT3MiQYIBSvAq62wngL xHcajGJo1vE0Nuug5i4yCIw1Gf5pEwyvOAJamGp81WZ61K1qRzQk+MZHgBnR4gYa+0Rk 7BU7GLY5ga7eNy1g5oPnKfP9fleqgQzK3LKn9duxp5B+zy4HGqDRvEzG71r1Ak/5qbVu +dbA== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to 
:references:mime-version:content-transfer-encoding; bh=rfS0g7vb96rwV7A8IuQpnrVMO7EVsHi2UQpE9Aj3wKo=; b=qTdwqikeU3BFkpTirAbs3WNXjtpYSrTjh1BmIlrhhxW5TRYsYJPAiS4qtogI6bhA38 iRWgi3Et0uzGpcj8iANXJJ+mwSVwf/7T84wQ8VlRl9MtSPA7LGgmuKV7OQ+PzUDyZvAJ Qq203VffR72ox3khf2u6lPs896z3MkSS4v7pw+CfId2wjwvrMz9LcdfMmxEbFR2G4Cc3 Nuwgy0hlTT1m8T8brF4orFHLraHsOFHlY7i/5YC8xX2zNdV7bGSc0Iow6LWnGPJmlk5L k4BQR3aKib4tdQPbk/uOZts4yJtg6FoxvcmocY5v3lj4hPyEMoywwqM0MRGog8MMseFE tcjQ== X-Gm-Message-State: AOAM533TS5s8twl4AARbsY0UOwobhkw3k6gmH0wzGLBh5tiiuWF54BPE EI0OGrZVpq3ecVmhoP325LY= X-Google-Smtp-Source: ABdhPJxRo/vNJGpqgYNRQjK3zWJesZkpVcLskKYCNTUsIoj07wZqq4XSdM7XVCvF8z2trY4ab6ucqw== X-Received: by 2002:a17:90a:39c5:: with SMTP id k5mr35590609pjf.211.1635164489391; Mon, 25 Oct 2021 05:21:29 -0700 (PDT) Received: from ubuntu-Virtual-Machine.corp.microsoft.com ([2001:4898:80e8:8:bcf6:9813:137f:2b6]) by smtp.gmail.com with ESMTPSA id mi11sm2786166pjb.5.2021.10.25.05.21.28 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Mon, 25 Oct 2021 05:21:29 -0700 (PDT) From: Tianyu Lan To: kys@microsoft.com, haiyangz@microsoft.com, sthemmin@microsoft.com, wei.liu@kernel.org, decui@microsoft.com, tglx@linutronix.de, mingo@redhat.com, bp@alien8.de, x86@kernel.org, hpa@zytor.com, dave.hansen@linux.intel.com, luto@kernel.org, peterz@infradead.org, davem@davemloft.net, kuba@kernel.org, gregkh@linuxfoundation.org, arnd@arndb.de, brijesh.singh@amd.com, jroedel@suse.de, mikelley@microsoft.com, Tianyu.Lan@microsoft.com, thomas.lendacky@amd.com, pgonda@google.com, akpm@linux-foundation.org, rppt@kernel.org, kirill.shutemov@linux.intel.com, saravanand@fb.com, aneesh.kumar@linux.ibm.com, sfr@canb.auug.org.au, david@redhat.com, michael.h.kelley@microsoft.com Cc: linux-arch@vger.kernel.org, linux-hyperv@vger.kernel.org, linux-kernel@vger.kernel.org, netdev@vger.kernel.org, vkuznets@redhat.com, konrad.wilk@oracle.com, hch@lst.de, robin.murphy@arm.com, joro@8bytes.org, parri.andrea@gmail.com, dave.hansen@intel.com Subject: [PATCH V9 7/9] x86/hyperv: Add ghcb hvcall support for SNP VM Date: Mon, 25 Oct 2021 08:21:12 -0400 Message-Id: <20211025122116.264793-8-ltykernel@gmail.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20211025122116.264793-1-ltykernel@gmail.com> References: <20211025122116.264793-1-ltykernel@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: netdev@vger.kernel.org From: Tianyu Lan hyperv provides ghcb hvcall to handle VMBus HVCALL_SIGNAL_EVENT and HVCALL_POST_MESSAGE msg in SNP Isolation VM. Add such support. Reviewed-by: Michael Kelley Signed-off-by: Tianyu Lan --- Change since v3: * Add hv_ghcb_hypercall() stub function to avoid compile error for ARM. 
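For illustration, the call-site pattern that the hunks below introduce can be summarized as follows (a sketch only, condensing the hv_post_message() change; hv_do_hypercall(), HVCALL_POST_MESSAGE and hv_isolation_type_snp() are existing symbols, while hv_ghcb_hypercall() is added by this patch):

/*
 * Sketch: issue a Hyper-V hypercall from an SNP Isolation VM via
 * the GHCB page, falling back to the regular hypercall page otherwise.
 */
static u64 sketch_post_message(struct hv_input_post_message *msg)
{
	if (hv_isolation_type_snp())
		return hv_ghcb_hypercall(HVCALL_POST_MESSAGE, msg,
					 NULL, sizeof(*msg));

	return hv_do_hypercall(HVCALL_POST_MESSAGE, msg, NULL);
}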
--- arch/x86/hyperv/ivm.c | 75 ++++++++++++++++++++++++++++++++++ drivers/hv/connection.c | 6 ++- drivers/hv/hv.c | 8 +++- drivers/hv/hv_common.c | 6 +++ include/asm-generic/mshyperv.h | 1 + 5 files changed, 94 insertions(+), 2 deletions(-) diff --git a/arch/x86/hyperv/ivm.c b/arch/x86/hyperv/ivm.c index 9c48d6e2d8b2..4d012fd9d95d 100644 --- a/arch/x86/hyperv/ivm.c +++ b/arch/x86/hyperv/ivm.c @@ -19,10 +19,85 @@ #include #ifdef CONFIG_AMD_MEM_ENCRYPT + +#define GHCB_USAGE_HYPERV_CALL 1 + union hv_ghcb { struct ghcb ghcb; + struct { + u64 hypercalldata[509]; + u64 outputgpa; + union { + union { + struct { + u32 callcode : 16; + u32 isfast : 1; + u32 reserved1 : 14; + u32 isnested : 1; + u32 countofelements : 12; + u32 reserved2 : 4; + u32 repstartindex : 12; + u32 reserved3 : 4; + }; + u64 asuint64; + } hypercallinput; + union { + struct { + u16 callstatus; + u16 reserved1; + u32 elementsprocessed : 12; + u32 reserved2 : 20; + }; + u64 asunit64; + } hypercalloutput; + }; + u64 reserved2; + } hypercall; } __packed __aligned(HV_HYP_PAGE_SIZE); +u64 hv_ghcb_hypercall(u64 control, void *input, void *output, u32 input_size) +{ + union hv_ghcb *hv_ghcb; + void **ghcb_base; + unsigned long flags; + u64 status; + + if (!hv_ghcb_pg) + return -EFAULT; + + WARN_ON(in_nmi()); + + local_irq_save(flags); + ghcb_base = (void **)this_cpu_ptr(hv_ghcb_pg); + hv_ghcb = (union hv_ghcb *)*ghcb_base; + if (!hv_ghcb) { + local_irq_restore(flags); + return -EFAULT; + } + + hv_ghcb->ghcb.protocol_version = GHCB_PROTOCOL_MAX; + hv_ghcb->ghcb.ghcb_usage = GHCB_USAGE_HYPERV_CALL; + + hv_ghcb->hypercall.outputgpa = (u64)output; + hv_ghcb->hypercall.hypercallinput.asuint64 = 0; + hv_ghcb->hypercall.hypercallinput.callcode = control; + + if (input_size) + memcpy(hv_ghcb->hypercall.hypercalldata, input, input_size); + + VMGEXIT(); + + hv_ghcb->ghcb.ghcb_usage = 0xffffffff; + memset(hv_ghcb->ghcb.save.valid_bitmap, 0, + sizeof(hv_ghcb->ghcb.save.valid_bitmap)); + + status = hv_ghcb->hypercall.hypercalloutput.callstatus; + + local_irq_restore(flags); + + return status; +} + void hv_ghcb_msr_write(u64 msr, u64 value) { union hv_ghcb *hv_ghcb; diff --git a/drivers/hv/connection.c b/drivers/hv/connection.c index 5e479d54918c..8820ae68f20f 100644 --- a/drivers/hv/connection.c +++ b/drivers/hv/connection.c @@ -447,6 +447,10 @@ void vmbus_set_event(struct vmbus_channel *channel) ++channel->sig_events; - hv_do_fast_hypercall8(HVCALL_SIGNAL_EVENT, channel->sig_event); + if (hv_isolation_type_snp()) + hv_ghcb_hypercall(HVCALL_SIGNAL_EVENT, &channel->sig_event, + NULL, sizeof(channel->sig_event)); + else + hv_do_fast_hypercall8(HVCALL_SIGNAL_EVENT, channel->sig_event); } EXPORT_SYMBOL_GPL(vmbus_set_event); diff --git a/drivers/hv/hv.c b/drivers/hv/hv.c index 943392db9e8a..4d6480d57546 100644 --- a/drivers/hv/hv.c +++ b/drivers/hv/hv.c @@ -98,7 +98,13 @@ int hv_post_message(union hv_connection_id connection_id, aligned_msg->payload_size = payload_size; memcpy((void *)aligned_msg->payload, payload, payload_size); - status = hv_do_hypercall(HVCALL_POST_MESSAGE, aligned_msg, NULL); + if (hv_isolation_type_snp()) + status = hv_ghcb_hypercall(HVCALL_POST_MESSAGE, + (void *)aligned_msg, NULL, + sizeof(*aligned_msg)); + else + status = hv_do_hypercall(HVCALL_POST_MESSAGE, + aligned_msg, NULL); /* Preemption must remain disabled until after the hypercall * so some other thread can't get scheduled onto this cpu and diff --git a/drivers/hv/hv_common.c b/drivers/hv/hv_common.c index 1fc82d237161..7be173a99f27 100644 --- a/drivers/hv/hv_common.c 
+++ b/drivers/hv/hv_common.c @@ -289,3 +289,9 @@ void __weak hyperv_cleanup(void) { } EXPORT_SYMBOL_GPL(hyperv_cleanup); + +u64 __weak hv_ghcb_hypercall(u64 control, void *input, void *output, u32 input_size) +{ + return HV_STATUS_INVALID_PARAMETER; +} +EXPORT_SYMBOL_GPL(hv_ghcb_hypercall); diff --git a/include/asm-generic/mshyperv.h b/include/asm-generic/mshyperv.h index 6d3ba902ebb0..3e2248ac328e 100644 --- a/include/asm-generic/mshyperv.h +++ b/include/asm-generic/mshyperv.h @@ -266,6 +266,7 @@ bool hv_is_hibernation_supported(void); enum hv_isolation_type hv_get_isolation_type(void); bool hv_is_isolation_supported(void); bool hv_isolation_type_snp(void); +u64 hv_ghcb_hypercall(u64 control, void *input, void *output, u32 input_size); void hyperv_cleanup(void); bool hv_query_ext_cap(u64 cap_query); #else /* CONFIG_HYPERV */ From patchwork Mon Oct 25 12:21:13 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Tianyu Lan X-Patchwork-Id: 12581711 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 8812EC433FE for ; Mon, 25 Oct 2021 12:22:15 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 6A9036023D for ; Mon, 25 Oct 2021 12:22:15 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S233336AbhJYMYg (ORCPT ); Mon, 25 Oct 2021 08:24:36 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:38824 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S233310AbhJYMYD (ORCPT ); Mon, 25 Oct 2021 08:24:03 -0400 Received: from mail-pl1-x632.google.com (mail-pl1-x632.google.com [IPv6:2607:f8b0:4864:20::632]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id D60B3C061230; Mon, 25 Oct 2021 05:21:34 -0700 (PDT) Received: by mail-pl1-x632.google.com with SMTP id r5so3215858pls.1; Mon, 25 Oct 2021 05:21:34 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20210112; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=+e9i1j4UbbSysykywCgKGACAoEqAJklSJx0zDl7vqfw=; b=AvjvC/YLJB4k7ntGp94FVgt0fLUESxLJ4KTQTaxtjwxNzVa/zrhAwzfPY4Az7qecjI 5y5yXpQ2Q2QKHDteoTsYG7Oss+OiBQmgtwKRlT7u+nzQpfAKDuAQ3t6+qWY+RpTh0LBm 1K9hVJpbVX0MNErosSTGGTmogLBP9f9RpIqm0xmPkz7iMgZ2en2bsZdj082yViS0t1Of l5u5rA8HFwyU9/kLLdb0lkFzAr+KjlSGGkQ4pl5YqycWT9yDcUZfyY9RhAGRe0EPPQoy 2cySmO/vUQE0acz7BCc+lTDMwnmcyF7aT3p5K8lIhdLIvv28ovW0c5zhl7qTCbdh5Faz +ZUg== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=+e9i1j4UbbSysykywCgKGACAoEqAJklSJx0zDl7vqfw=; b=hcy1fgsdP4LPSiV4P8ag8TB/tW9V63mVwbP/tdavj4QLWDRoAeaOGFEE2cUS9bFbsP bXsYfoN5lqHdK3afZswZJJewmoZ1TSpl4dCf4U4DVW3VAhPSoufsdy9fqPdZAPNm8Ot7 mUNbNToYbaTSRJu/XMluJjaY/+C9/aRMASY4qHy6eMDegVAtXA3VcPmQRVXG9N3pV42G qLflZ/bx4C7jlYC6x7uqq3tc5vXHoYxB2fGRlhIAZqhYtxujpUWekTPXHvBhwMjz9Odv 14MGGZsPO9/Kc1QSpRUvZSRmQT5AuOFhbYtn5ce0e/e5dq/i16kfycWYir4rsV34iXCv m+Ug== X-Gm-Message-State: AOAM532K1zjemr0yMCAM6xFva4KhtyynURscvsnLWn1cb4zaIz5FRkAx mQeLuNPLoiJAit2sCAhsPEQ= X-Google-Smtp-Source: ABdhPJx1/AR0qB0jpQn88IE7C7QJNFUjA2YUmooYb7IMlzFN/02ccxrCutamhrXMaAcr0hqpY+P1Uw== X-Received: by 
2002:a17:90a:ba03:: with SMTP id s3mr19102903pjr.116.1635164491957; Mon, 25 Oct 2021 05:21:31 -0700 (PDT) Received: from ubuntu-Virtual-Machine.corp.microsoft.com ([2001:4898:80e8:8:bcf6:9813:137f:2b6]) by smtp.gmail.com with ESMTPSA id mi11sm2786166pjb.5.2021.10.25.05.21.31 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Mon, 25 Oct 2021 05:21:31 -0700 (PDT) From: Tianyu Lan To: kys@microsoft.com, haiyangz@microsoft.com, sthemmin@microsoft.com, wei.liu@kernel.org, decui@microsoft.com, tglx@linutronix.de, mingo@redhat.com, bp@alien8.de, x86@kernel.org, hpa@zytor.com, dave.hansen@linux.intel.com, luto@kernel.org, peterz@infradead.org, davem@davemloft.net, kuba@kernel.org, gregkh@linuxfoundation.org, arnd@arndb.de, brijesh.singh@amd.com, jroedel@suse.de, mikelley@microsoft.com, Tianyu.Lan@microsoft.com, thomas.lendacky@amd.com, pgonda@google.com, akpm@linux-foundation.org, rppt@kernel.org, kirill.shutemov@linux.intel.com, saravanand@fb.com, aneesh.kumar@linux.ibm.com, sfr@canb.auug.org.au, david@redhat.com, michael.h.kelley@microsoft.com Cc: linux-arch@vger.kernel.org, linux-hyperv@vger.kernel.org, linux-kernel@vger.kernel.org, netdev@vger.kernel.org, vkuznets@redhat.com, konrad.wilk@oracle.com, hch@lst.de, robin.murphy@arm.com, joro@8bytes.org, parri.andrea@gmail.com, dave.hansen@intel.com Subject: [PATCH V9 8/9] Drivers: hv: vmbus: Add SNP support for VMbus channel initiate message Date: Mon, 25 Oct 2021 08:21:13 -0400 Message-Id: <20211025122116.264793-9-ltykernel@gmail.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20211025122116.264793-1-ltykernel@gmail.com> References: <20211025122116.264793-1-ltykernel@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: netdev@vger.kernel.org From: Tianyu Lan The monitor pages in the CHANNELMSG_INITIATE_CONTACT msg are shared with host in Isolation VM and so it's necessary to use hvcall to set them visible to host. In Isolation VM with AMD SEV SNP, the access address should be in the extra space which is above shared gpa boundary. So remap these pages into the extra address(pa + shared_gpa_boundary). Introduce monitor_pages_original[] in the struct vmbus_connection to store monitor page virtual address returned by hv_alloc_hyperv_ zeroed_page() and free monitor page via monitor_pages_original in the vmbus_disconnect(). The monitor_pages[] is to used to access monitor page and it is initialized to be equal with monitor_pages_ original. The monitor_pages[] will be overridden in the isolation VM with va of extra address. Introduce monitor_pages_pa[] to store monitor pages' physical address and use it to populate pa in the initiate msg. Reviewed-by: Michael Kelley Signed-off-by: Tianyu Lan --- Change since v6: * Add comment about calling memunmap() in the non-snp IVM. Change since v5: * change vmbus_connection.monitor_pages_pa type from unsigned long to phys_addr_t * Plus vmbus_connection.monitor_pages_pa with ms_hyperv. shared_gpa_boundary only in the IVM with AMD SEV. Change since v4: * Introduce monitor_pages_pa[] to store monitor pages' physical address and use it to populate pa in the initiate msg. * Move code of mapping moniter pages in extra address into vmbus_connect(). Change since v3: * Rename monitor_pages_va with monitor_pages_original * free monitor page via monitor_pages_original and monitor_pages is used to access monitor page. Change since v1: * Not remap monitor pages in the non-SNP isolation VM. 
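The decrypt-and-remap pattern described above, reduced to its essentials (an illustrative sketch that mirrors the vmbus_connect() hunk below; it assumes ms_hyperv.shared_gpa_boundary and the host-visibility hvcall wired up earlier in this series):

/*
 * Sketch: share one page with the host and, on SNP, obtain a mapping
 * in the extra address space above the shared GPA boundary.
 */
static void *sketch_map_monitor_page(void *va, phys_addr_t *shared_pa)
{
	*shared_pa = virt_to_phys(va);

	if (set_memory_decrypted((unsigned long)va, 1))
		return NULL;

	if (!hv_isolation_type_snp())
		return va;	/* direct map is usable outside SNP */

	*shared_pa += ms_hyperv.shared_gpa_boundary;
	return memremap(*shared_pa, HV_HYP_PAGE_SIZE, MEMREMAP_WB);
}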
--- drivers/hv/connection.c | 95 ++++++++++++++++++++++++++++++++++++--- drivers/hv/hyperv_vmbus.h | 2 + 2 files changed, 91 insertions(+), 6 deletions(-) diff --git a/drivers/hv/connection.c b/drivers/hv/connection.c index 8820ae68f20f..a3d8be8d6cfb 100644 --- a/drivers/hv/connection.c +++ b/drivers/hv/connection.c @@ -19,6 +19,8 @@ #include #include #include +#include +#include #include #include "hyperv_vmbus.h" @@ -102,8 +104,9 @@ int vmbus_negotiate_version(struct vmbus_channel_msginfo *msginfo, u32 version) vmbus_connection.msg_conn_id = VMBUS_MESSAGE_CONNECTION_ID; } - msg->monitor_page1 = virt_to_phys(vmbus_connection.monitor_pages[0]); - msg->monitor_page2 = virt_to_phys(vmbus_connection.monitor_pages[1]); + msg->monitor_page1 = vmbus_connection.monitor_pages_pa[0]; + msg->monitor_page2 = vmbus_connection.monitor_pages_pa[1]; + msg->target_vcpu = hv_cpu_number_to_vp_number(VMBUS_CONNECT_CPU); /* @@ -216,6 +219,65 @@ int vmbus_connect(void) goto cleanup; } + vmbus_connection.monitor_pages_original[0] + = vmbus_connection.monitor_pages[0]; + vmbus_connection.monitor_pages_original[1] + = vmbus_connection.monitor_pages[1]; + vmbus_connection.monitor_pages_pa[0] + = virt_to_phys(vmbus_connection.monitor_pages[0]); + vmbus_connection.monitor_pages_pa[1] + = virt_to_phys(vmbus_connection.monitor_pages[1]); + + if (hv_is_isolation_supported()) { + ret = set_memory_decrypted((unsigned long) + vmbus_connection.monitor_pages[0], + 1); + ret |= set_memory_decrypted((unsigned long) + vmbus_connection.monitor_pages[1], + 1); + if (ret) + goto cleanup; + + /* + * Isolation VM with AMD SNP needs to access monitor page via + * address space above shared gpa boundary. + */ + if (hv_isolation_type_snp()) { + vmbus_connection.monitor_pages_pa[0] += + ms_hyperv.shared_gpa_boundary; + vmbus_connection.monitor_pages_pa[1] += + ms_hyperv.shared_gpa_boundary; + + vmbus_connection.monitor_pages[0] + = memremap(vmbus_connection.monitor_pages_pa[0], + HV_HYP_PAGE_SIZE, + MEMREMAP_WB); + if (!vmbus_connection.monitor_pages[0]) { + ret = -ENOMEM; + goto cleanup; + } + + vmbus_connection.monitor_pages[1] + = memremap(vmbus_connection.monitor_pages_pa[1], + HV_HYP_PAGE_SIZE, + MEMREMAP_WB); + if (!vmbus_connection.monitor_pages[1]) { + ret = -ENOMEM; + goto cleanup; + } + } + + /* + * Set memory host visibility hvcall smears memory + * and so zero monitor pages here. + */ + memset(vmbus_connection.monitor_pages[0], 0x00, + HV_HYP_PAGE_SIZE); + memset(vmbus_connection.monitor_pages[1], 0x00, + HV_HYP_PAGE_SIZE); + + } + msginfo = kzalloc(sizeof(*msginfo) + sizeof(struct vmbus_channel_initiate_contact), GFP_KERNEL); @@ -303,10 +365,31 @@ void vmbus_disconnect(void) vmbus_connection.int_page = NULL; } - hv_free_hyperv_page((unsigned long)vmbus_connection.monitor_pages[0]); - hv_free_hyperv_page((unsigned long)vmbus_connection.monitor_pages[1]); - vmbus_connection.monitor_pages[0] = NULL; - vmbus_connection.monitor_pages[1] = NULL; + if (hv_is_isolation_supported()) { + /* + * memunmap() checks input address is ioremap address or not + * inside. It doesn't unmap any thing in the non-SNP CVM and + * so not check CVM type here. 
+ */ + memunmap(vmbus_connection.monitor_pages[0]); + memunmap(vmbus_connection.monitor_pages[1]); + + set_memory_encrypted((unsigned long) + vmbus_connection.monitor_pages_original[0], + 1); + set_memory_encrypted((unsigned long) + vmbus_connection.monitor_pages_original[1], + 1); + } + + hv_free_hyperv_page((unsigned long) + vmbus_connection.monitor_pages_original[0]); + hv_free_hyperv_page((unsigned long) + vmbus_connection.monitor_pages_original[1]); + vmbus_connection.monitor_pages_original[0] = + vmbus_connection.monitor_pages[0] = NULL; + vmbus_connection.monitor_pages_original[1] = + vmbus_connection.monitor_pages[1] = NULL; } /* diff --git a/drivers/hv/hyperv_vmbus.h b/drivers/hv/hyperv_vmbus.h index 42f3d9d123a1..d0a5232a1c3e 100644 --- a/drivers/hv/hyperv_vmbus.h +++ b/drivers/hv/hyperv_vmbus.h @@ -240,6 +240,8 @@ struct vmbus_connection { * is child->parent notification */ struct hv_monitor_page *monitor_pages[2]; + void *monitor_pages_original[2]; + phys_addr_t monitor_pages_pa[2]; struct list_head chn_msg_list; spinlock_t channelmsg_lock; From patchwork Mon Oct 25 12:21:14 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Tianyu Lan X-Patchwork-Id: 12581713 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 8D3B4C43219 for ; Mon, 25 Oct 2021 12:22:17 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 764DF603E8 for ; Mon, 25 Oct 2021 12:22:17 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S233141AbhJYMYi (ORCPT ); Mon, 25 Oct 2021 08:24:38 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:38708 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S232582AbhJYMYD (ORCPT ); Mon, 25 Oct 2021 08:24:03 -0400 Received: from mail-pj1-x1030.google.com (mail-pj1-x1030.google.com [IPv6:2607:f8b0:4864:20::1030]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id B9CCFC06122F; Mon, 25 Oct 2021 05:21:33 -0700 (PDT) Received: by mail-pj1-x1030.google.com with SMTP id oa4so8145376pjb.2; Mon, 25 Oct 2021 05:21:33 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20210112; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=rjA9Km5RNAIhCojwd6r90HC+YkWUUuhzZ0o+qCDoGUI=; b=aUlauNk/3VsTu/qMuSkUtPFZSBYL6eZyvUnXjJcfWEkIDTWH46fzXBCoX+xIvlnIMm fzZgiGEeaovRwCvGXd8/Et/qXqNU+EtO3WUI83mde/FmLukxsYwOtaHAVmVtwIULk/pG t5sEVBSoPMhfaLcHw3bDw3Ih/Kf51TMibtDH4HQUP6Z0OZfmdT6CQnycaOcgLj44N71X QIJXfhFqW2TO3DC30Zf2vcdznx+PEFBXXVPIqZM/GifOVwMEN0ErvTm9Tcp6m9x7V8hA jjkJtJIiTsRo50fbpTVDMmse2X6O66Tw+QIBzYLTV2RBjtwKN/NYVtDYzYBKHGH6wXkU +OkQ== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=rjA9Km5RNAIhCojwd6r90HC+YkWUUuhzZ0o+qCDoGUI=; b=FxGlZT328oTqiy6kDgJsFoOGa06fmjEdTM7gp0wiARLSNIwuOC6lEGRfsH5dQGU+AV nJcTP3T6Ed4cyi4EMDgJQUmgRD7EyXxcGVDGWJvKmlhcTRD477KQDnnhmEdBvblGQEfH yp1Eg6prVY/lHL/L3iGfIWiG+gZS3yzlcgiS2whyJRTDNqP7XWoKG835fHRFmJmPCjCr 2fWhEtZ1zPx+vjPeGYaBWDjJoHngM3EYRrKD0rLSOG3oH8kKgrwZizxJ2aQ+O7TxVeRd LVjCG3I9IaaXiMC154FLmPcCNlipChS9Bqefyy0v7fvmWMr9OYNnBQFJuX/SvJv/hXlc 
HjZg== X-Gm-Message-State: AOAM530uekeXXNINF81xq1MkovrErmYdYk6EO8WgKrNtDEdoueEAQx39 SXHw0ZKvAjlVBEeAcricdog= X-Google-Smtp-Source: ABdhPJyOzrw1VPMh6YoqOsIE8iJK4EM/aaEHO1BJ9JQuZb+0QAX5HCakbZX03Q29M+VUsjeyQBU3NQ== X-Received: by 2002:a17:90b:3ecc:: with SMTP id rm12mr20355750pjb.48.1635164493237; Mon, 25 Oct 2021 05:21:33 -0700 (PDT) Received: from ubuntu-Virtual-Machine.corp.microsoft.com ([2001:4898:80e8:8:bcf6:9813:137f:2b6]) by smtp.gmail.com with ESMTPSA id mi11sm2786166pjb.5.2021.10.25.05.21.32 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Mon, 25 Oct 2021 05:21:32 -0700 (PDT) From: Tianyu Lan To: kys@microsoft.com, haiyangz@microsoft.com, sthemmin@microsoft.com, wei.liu@kernel.org, decui@microsoft.com, tglx@linutronix.de, mingo@redhat.com, bp@alien8.de, x86@kernel.org, hpa@zytor.com, dave.hansen@linux.intel.com, luto@kernel.org, peterz@infradead.org, davem@davemloft.net, kuba@kernel.org, gregkh@linuxfoundation.org, arnd@arndb.de, brijesh.singh@amd.com, jroedel@suse.de, mikelley@microsoft.com, Tianyu.Lan@microsoft.com, thomas.lendacky@amd.com, pgonda@google.com, akpm@linux-foundation.org, rppt@kernel.org, kirill.shutemov@linux.intel.com, saravanand@fb.com, aneesh.kumar@linux.ibm.com, sfr@canb.auug.org.au, david@redhat.com, michael.h.kelley@microsoft.com Cc: linux-arch@vger.kernel.org, linux-hyperv@vger.kernel.org, linux-kernel@vger.kernel.org, netdev@vger.kernel.org, vkuznets@redhat.com, konrad.wilk@oracle.com, hch@lst.de, robin.murphy@arm.com, joro@8bytes.org, parri.andrea@gmail.com, dave.hansen@intel.com Subject: [PATCH V9 9/9] Drivers: hv : vmbus: Initialize VMbus ring buffer for Isolation VM Date: Mon, 25 Oct 2021 08:21:14 -0400 Message-Id: <20211025122116.264793-10-ltykernel@gmail.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20211025122116.264793-1-ltykernel@gmail.com> References: <20211025122116.264793-1-ltykernel@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: netdev@vger.kernel.org From: Tianyu Lan VMbus ring buffer are shared with host and it's need to be accessed via extra address space of Isolation VM with AMD SNP support. This patch is to map the ring buffer address in extra address space via vmap_pfn(). Hyperv set memory host visibility hvcall smears data in the ring buffer and so reset the ring buffer memory to zero after mapping. Reviewed-by: Michael Kelley Signed-off-by: Tianyu Lan --- Change since v4: * Use PFN_DOWN instead of HVPFN_DOWN in the hv_ringbuffer_init() Change since v3: * Remove hv_ringbuffer_post_init(), merge map operation for Isolation VM into hv_ringbuffer_init() * Call hv_ringbuffer_init() after __vmbus_establish_gpadl(). --- drivers/hv/Kconfig | 1 + drivers/hv/channel.c | 19 +++++++------- drivers/hv/ring_buffer.c | 55 ++++++++++++++++++++++++++++++---------- 3 files changed, 53 insertions(+), 22 deletions(-) diff --git a/drivers/hv/Kconfig b/drivers/hv/Kconfig index d1123ceb38f3..dd12af20e467 100644 --- a/drivers/hv/Kconfig +++ b/drivers/hv/Kconfig @@ -8,6 +8,7 @@ config HYPERV || (ARM64 && !CPU_BIG_ENDIAN)) select PARAVIRT select X86_HV_CALLBACK_VECTOR if X86 + select VMAP_PFN help Select this option to run Linux as a Hyper-V client operating system. 
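The reason the ring needs a doubled page list is worth spelling out: every data page (all pages except the first header page) is mapped twice back-to-back so that a packet wrapping past the end of the ring stays virtually contiguous, and in an SNP Isolation VM the PFNs are additionally offset above the shared GPA boundary, which is why VMAP_PFN is selected above. A condensed sketch of the SNP branch added to hv_ringbuffer_init() below (illustrative, not the exact hunk):

/*
 * Sketch: build the wrap-around PFN list and map it with vmap_pfn(),
 * with each PFN offset above the shared GPA boundary.
 */
static void *sketch_map_ring_snp(struct page *pages, u32 page_cnt)
{
	u64 pfn = page_to_pfn(pages) + PFN_DOWN(ms_hyperv.shared_gpa_boundary);
	unsigned long *pfns;
	void *ring;
	int i;

	pfns = kcalloc(page_cnt * 2 - 1, sizeof(*pfns), GFP_KERNEL);
	if (!pfns)
		return NULL;

	pfns[0] = pfn;				/* header page, mapped once */
	for (i = 0; i < 2 * (page_cnt - 1); i++)	/* data pages, mapped twice */
		pfns[i + 1] = pfn + i % (page_cnt - 1) + 1;

	ring = vmap_pfn(pfns, page_cnt * 2 - 1, PAGE_KERNEL);
	kfree(pfns);
	return ring;	/* caller zeroes it after host visibility is set */
}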
diff --git a/drivers/hv/channel.c b/drivers/hv/channel.c index b37ff4a39224..dc5c35210c16 100644 --- a/drivers/hv/channel.c +++ b/drivers/hv/channel.c @@ -683,15 +683,6 @@ static int __vmbus_open(struct vmbus_channel *newchannel, if (!newchannel->max_pkt_size) newchannel->max_pkt_size = VMBUS_DEFAULT_MAX_PKT_SIZE; - err = hv_ringbuffer_init(&newchannel->outbound, page, send_pages, 0); - if (err) - goto error_clean_ring; - - err = hv_ringbuffer_init(&newchannel->inbound, &page[send_pages], - recv_pages, newchannel->max_pkt_size); - if (err) - goto error_clean_ring; - /* Establish the gpadl for the ring buffer */ newchannel->ringbuffer_gpadlhandle.gpadl_handle = 0; @@ -703,6 +694,16 @@ static int __vmbus_open(struct vmbus_channel *newchannel, if (err) goto error_clean_ring; + err = hv_ringbuffer_init(&newchannel->outbound, + page, send_pages, 0); + if (err) + goto error_free_gpadl; + + err = hv_ringbuffer_init(&newchannel->inbound, &page[send_pages], + recv_pages, newchannel->max_pkt_size); + if (err) + goto error_free_gpadl; + /* Create and init the channel open message */ open_info = kzalloc(sizeof(*open_info) + sizeof(struct vmbus_channel_open_channel), diff --git a/drivers/hv/ring_buffer.c b/drivers/hv/ring_buffer.c index 314015d9e912..931802ae985c 100644 --- a/drivers/hv/ring_buffer.c +++ b/drivers/hv/ring_buffer.c @@ -17,6 +17,8 @@ #include #include #include +#include +#include #include "hyperv_vmbus.h" @@ -183,8 +185,10 @@ void hv_ringbuffer_pre_init(struct vmbus_channel *channel) int hv_ringbuffer_init(struct hv_ring_buffer_info *ring_info, struct page *pages, u32 page_cnt, u32 max_pkt_size) { - int i; struct page **pages_wraparound; + unsigned long *pfns_wraparound; + u64 pfn; + int i; BUILD_BUG_ON((sizeof(struct hv_ring_buffer) != PAGE_SIZE)); @@ -192,23 +196,48 @@ int hv_ringbuffer_init(struct hv_ring_buffer_info *ring_info, * First page holds struct hv_ring_buffer, do wraparound mapping for * the rest. */ - pages_wraparound = kcalloc(page_cnt * 2 - 1, sizeof(struct page *), - GFP_KERNEL); - if (!pages_wraparound) - return -ENOMEM; + if (hv_isolation_type_snp()) { + pfn = page_to_pfn(pages) + + PFN_DOWN(ms_hyperv.shared_gpa_boundary); + + pfns_wraparound = kcalloc(page_cnt * 2 - 1, + sizeof(unsigned long), GFP_KERNEL); + if (!pfns_wraparound) + return -ENOMEM; + + pfns_wraparound[0] = pfn; + for (i = 0; i < 2 * (page_cnt - 1); i++) + pfns_wraparound[i + 1] = pfn + i % (page_cnt - 1) + 1; - pages_wraparound[0] = pages; - for (i = 0; i < 2 * (page_cnt - 1); i++) - pages_wraparound[i + 1] = &pages[i % (page_cnt - 1) + 1]; + ring_info->ring_buffer = (struct hv_ring_buffer *) + vmap_pfn(pfns_wraparound, page_cnt * 2 - 1, + PAGE_KERNEL); + kfree(pfns_wraparound); - ring_info->ring_buffer = (struct hv_ring_buffer *) - vmap(pages_wraparound, page_cnt * 2 - 1, VM_MAP, PAGE_KERNEL); + if (!ring_info->ring_buffer) + return -ENOMEM; + + /* Zero ring buffer after setting memory host visibility. 
*/ + memset(ring_info->ring_buffer, 0x00, PAGE_SIZE * page_cnt); + } else { + pages_wraparound = kcalloc(page_cnt * 2 - 1, + sizeof(struct page *), + GFP_KERNEL); + + pages_wraparound[0] = pages; + for (i = 0; i < 2 * (page_cnt - 1); i++) + pages_wraparound[i + 1] = + &pages[i % (page_cnt - 1) + 1]; - kfree(pages_wraparound); + ring_info->ring_buffer = (struct hv_ring_buffer *) + vmap(pages_wraparound, page_cnt * 2 - 1, VM_MAP, + PAGE_KERNEL); + kfree(pages_wraparound); + if (!ring_info->ring_buffer) + return -ENOMEM; + } - if (!ring_info->ring_buffer) - return -ENOMEM; ring_info->ring_buffer->read_index = ring_info->ring_buffer->write_index = 0;