[RFC,v8,53/56] KVM: SVM: Make VMSAVE target area memory allocation SNP safe

Message ID 20230220183847.59159-54-michael.roth@amd.com (mailing list archive)
State New, archived
Series Add AMD Secure Nested Paging (SEV-SNP) Hypervisor Support

Commit Message

Michael Roth Feb. 20, 2023, 6:38 p.m. UTC
From: Ashish Kalra <ashish.kalra@amd.com>

Implement a workaround for an SNP erratum where the CPU will incorrectly
signal an RMP violation #PF if a hugepage (2MB or 1GB) collides with the
RMP entry of the VMSAVE target page.

When SEV-SNP is globally enabled, the CPU marks the VMSAVE target page
as "InUse" while the VMSAVE instruction is executing. If another
CPU writes to a different page in the same 2MB region while the VMSAVE
is executing, the CPU will throw an RMP violation #PF.

Use the SNP-safe generic allocator, which is already used for allocating
the VMCB, VMSA, and AVIC backing pages, to allocate the VMSAVE target
page; it ensures that the returned page is not 2MB-aligned, so its RMP
entry cannot collide with that of a hugepage.

Co-developed-by: Marc Orr <marcorr@google.com>
Signed-off-by: Marc Orr <marcorr@google.com>
Reported-by: Alper Gun <alpergun@google.com>
Signed-off-by: Ashish Kalra <ashish.kalra@amd.com>
Signed-off-by: Michael Roth <michael.roth@amd.com>
---
 arch/x86/kvm/svm/svm.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
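
For reference, the workaround relies on the VMSAVE target page never being
2MB-aligned, so the page whose RMP entry the CPU marks "InUse" during VMSAVE
cannot be the 2MB-aligned page involved in the hugepage collision described
above. The sketch below illustrates the assumed "allocate one extra page,
keep the non-2MB-aligned one" strategy; the function name and the
cpu_feature_enabled(X86_FEATURE_SEV_SNP) check are illustrative only, and
the real snp_safe_alloc_page() (which also takes a vcpu argument, hence the
NULL at the call site) is added in an earlier patch of this series.

static struct page *snp_safe_alloc_page_sketch(void)
{
	unsigned long pfn;
	struct page *p;

	/* Without SNP enabled there is no erratum to work around. */
	if (!cpu_feature_enabled(X86_FEATURE_SEV_SNP))
		return alloc_page(GFP_KERNEL_ACCOUNT | __GFP_ZERO);

	/*
	 * Allocate one extra page, keep whichever page of the order-1
	 * pair is not 2MB-aligned and free the other; keeping a page
	 * that is not 2MB-aligned avoids the RMP-entry collision
	 * described in the commit message.
	 */
	p = alloc_pages(GFP_KERNEL_ACCOUNT | __GFP_ZERO, 1);
	if (!p)
		return NULL;

	/* Turn the order-1 allocation into two independently freeable pages. */
	split_page(p, 1);

	pfn = page_to_pfn(p);
	if (IS_ALIGNED(pfn, PTRS_PER_PMD))
		__free_page(p++);	/* first page is 2MB-aligned: keep the second */
	else
		__free_page(p + 1);	/* first page is safe: free the second */

	return p;
}

Under such a scheme the per-CPU save_area allocated in svm_cpu_init() never
lands on a 2MB-aligned PFN, at the cost of transiently allocating one extra
page per call.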

Comments

Zhi Wang March 1, 2023, 9:23 p.m. UTC | #1
On Mon, 20 Feb 2023 12:38:44 -0600
Michael Roth <michael.roth@amd.com> wrote:

> From: Ashish Kalra <ashish.kalra@amd.com>
> 
> Implement a workaround for an SNP erratum where the CPU will incorrectly
> signal an RMP violation #PF if a hugepage (2MB or 1GB) collides with the
> RMP entry of the VMSAVE target page.
> 
> When SEV-SNP is globally enabled, the CPU marks the VMSAVE target page
> as "InUse" while the VMSAVE instruction is executing. If another
> CPU writes to a different page in the same 2MB region while the VMSAVE
> is executing, the CPU will throw an RMP violation #PF.
> 
> Use the SNP-safe generic allocator, which is already used for allocating
> the VMCB, VMSA, and AVIC backing pages, to allocate the VMSAVE target
> page; it ensures that the returned page is not 2MB-aligned, so its RMP
> entry cannot collide with that of a hugepage.
> 

This should be merged with the patch that implements snp_safe_alloc_page().

> Co-developed-by: Marc Orr <marcorr@google.com>
> Signed-off-by: Marc Orr <marcorr@google.com>
> Reported-by: Alper Gun <alpergun@google.com>
> Signed-off-by: Ashish Kalra <ashish.kalra@amd.com>
> Signed-off-by: Michael Roth <michael.roth@amd.com>
> ---
>  arch/x86/kvm/svm/svm.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
> index 3fe5f13b5f3a..8bda31a61757 100644
> --- a/arch/x86/kvm/svm/svm.c
> +++ b/arch/x86/kvm/svm/svm.c
> @@ -665,7 +665,7 @@ static int svm_cpu_init(int cpu)
>  	int ret = -ENOMEM;
>  
>  	memset(sd, 0, sizeof(struct svm_cpu_data));
> -	sd->save_area = alloc_page(GFP_KERNEL | __GFP_ZERO);
> +	sd->save_area = snp_safe_alloc_page(NULL);
>  	if (!sd->save_area)
>  		return ret;
>

Patch

diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
index 3fe5f13b5f3a..8bda31a61757 100644
--- a/arch/x86/kvm/svm/svm.c
+++ b/arch/x86/kvm/svm/svm.c
@@ -665,7 +665,7 @@ static int svm_cpu_init(int cpu)
 	int ret = -ENOMEM;
 
 	memset(sd, 0, sizeof(struct svm_cpu_data));
-	sd->save_area = alloc_page(GFP_KERNEL | __GFP_ZERO);
+	sd->save_area = snp_safe_alloc_page(NULL);
 	if (!sd->save_area)
 		return ret;