
[RFC,v7,07/64] KVM: SEV: Handle KVM_HC_MAP_GPA_RANGE hypercall

Message ID: 20221214194056.161492-8-michael.roth@amd.com (mailing list archive)
State: Not Applicable
Delegated to: Herbert Xu
Series: Add AMD Secure Nested Paging (SEV-SNP) Hypervisor Support

Commit Message

Michael Roth Dec. 14, 2022, 7:39 p.m. UTC
From: Nikunj A Dadhania <nikunj@amd.com>

The KVM_HC_MAP_GPA_RANGE hypercall is used by the SEV guest to notify the
hypervisor of a change in the page encryption status.

The hypercall exits to userspace with the KVM_EXIT_HYPERCALL exit code;
currently this is used for explicit memory conversion between shared and
private for memfd-based private memory.

Signed-off-by: Nikunj A Dadhania <nikunj@amd.com>
Signed-off-by: Michael Roth <michael.roth@amd.com>
---
 arch/x86/kvm/x86.c  | 8 ++++++++
 virt/kvm/kvm_main.c | 1 +
 2 files changed, 9 insertions(+)
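
[Editor's note] For context, on the guest side Linux issues this hypercall
(roughly kvm_hypercall3(KVM_HC_MAP_GPA_RANGE, gpa, npages, attrs) from its
set_memory_encrypted()/decrypted() path) when it flips pages between encrypted
and decrypted. Below is a minimal sketch, not taken from this series, of how a
userspace VMM might consume the resulting exit. It assumes the exit has already
been enabled via KVM_CAP_EXIT_HYPERCALL (see the snippet at the end of the
page), and the conversion steps are placeholders since the actual mechanism
depends on which private-memory (UPM/guest_memfd) API revision is in use:

	#include <stdint.h>
	#include <linux/kvm.h>
	#include <linux/kvm_para.h>

	/* Handle a KVM_HC_MAP_GPA_RANGE request forwarded to userspace. */
	static void handle_map_gpa_range(struct kvm_run *run)
	{
		uint64_t gpa    = run->hypercall.args[0];
		uint64_t npages = run->hypercall.args[1];
		uint64_t attrs  = run->hypercall.args[2];

		if (attrs & KVM_MAP_GPA_RANGE_ENCRYPTED) {
			/* Guest wants the npages pages starting at gpa treated
			 * as private: switch the range to its private backing. */
		} else {
			/* Guest wants the range shared: release the private
			 * backing and serve it from ordinary host memory. */
		}

		run->hypercall.ret = 0;		/* report success to the guest */
	}

	/* In the vcpu run loop, after ioctl(vcpu_fd, KVM_RUN, 0): */
	static void handle_exit(struct kvm_run *run)
	{
		if (run->exit_reason == KVM_EXIT_HYPERCALL &&
		    run->hypercall.nr == KVM_HC_MAP_GPA_RANGE)
			handle_map_gpa_range(run);
	}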

Comments

Borislav Petkov Jan. 13, 2023, 4 p.m. UTC | #1
On Wed, Dec 14, 2022 at 01:39:59PM -0600, Michael Roth wrote:
> From: Nikunj A Dadhania <nikunj@amd.com>
> 
> KVM_HC_MAP_GPA_RANGE hypercall is used by the SEV guest to notify a
> change in the page encryption status to the hypervisor.
> 
> The hypercall exits to userspace with KVM_EXIT_HYPERCALL exit code,
> currently this is used for explicit memory conversion between
> shared/private for memfd based private memory.

So Tom and I spent a while to figure out what this is doing...

Please explain in more detail what that is. Like the hypercall gets ignored for
memslots which cannot be private...?

And what's the story with supporting UPM with SEV{,-ES} guests?

In general, this text needs more background and why this is being done.

Thx.
Sean Christopherson Jan. 13, 2023, 4:17 p.m. UTC | #2
On Fri, Jan 13, 2023, Borislav Petkov wrote:
> On Wed, Dec 14, 2022 at 01:39:59PM -0600, Michael Roth wrote:
> > From: Nikunj A Dadhania <nikunj@amd.com>
> > 
> > KVM_HC_MAP_GPA_RANGE hypercall is used by the SEV guest to notify a
> > change in the page encryption status to the hypervisor.
> > 
> > The hypercall exits to userspace with KVM_EXIT_HYPERCALL exit code,
> > currently this is used for explicit memory conversion between
> > shared/private for memfd based private memory.
> 
> So Tom and I spent a while to figure out what this is doing...
> 
> Please explain in more detail what that is. Like the hypercall gets ignored for
> memslots which cannot be private...?

Don't bother, just drop the patch.  It's perfectly legal for userspace to create
the private memslot in response to a guest request.
Nikunj A. Dadhania Jan. 16, 2023, 7:56 a.m. UTC | #3
On 13/01/23 21:47, Sean Christopherson wrote:
> On Fri, Jan 13, 2023, Borislav Petkov wrote:
>> On Wed, Dec 14, 2022 at 01:39:59PM -0600, Michael Roth wrote:
>>> From: Nikunj A Dadhania <nikunj@amd.com>
>>>
>>> KVM_HC_MAP_GPA_RANGE hypercall is used by the SEV guest to notify a
>>> change in the page encryption status to the hypervisor.
>>>
>>> The hypercall exits to userspace with KVM_EXIT_HYPERCALL exit code,
>>> currently this is used for explicit memory conversion between
>>> shared/private for memfd based private memory.
>>
>> So Tom and I spent a while to figure out what this is doing...
>>
>> Please explain in more detail what that is. Like the hypercall gets ignored for
>> memslots which cannot be private...?

This was required when we were using a per-memslot bitmap for storing the
private information; mem_attr_array is not dependent on the memslot anymore.

> 
> Don't bother, just drop the patch.  

Agree, we can drop this. I have tested SEV without this patch.

> It's perfectly legal for userspace to create the private memslot in response
> to a guest request.

Sean, I did not understand this part: how could a memslot be created in response to a guest request?

Regards
Nikunj
Sean Christopherson Jan. 17, 2023, 5:19 p.m. UTC | #4
On Mon, Jan 16, 2023, Nikunj A. Dadhania wrote:
> On 13/01/23 21:47, Sean Christopherson wrote:
> > It's perfectly legal for userspace to create the private memslot in response
> > to a guest request.
> 
> Sean, I did not understand this part: how could a memslot be created in response to a guest request?

KVM_HC_MAP_GPA_RANGE gets routed to host userspace, at that point userspace can
take any action it wants to satisfy the guest request.  E.g. a userspace+guest
setup could define memory as shared by default, and only create KVM_MEM_PRIVATE
memslots for memory that the guest explicitly requests to be mapped private.

I don't anticipate any real world use cases actually doing something like that,
but I also don't see any value in going out of our way to disallow it.  Normally
I like to be conservative when it comes to KVM's uAPI, e.g. allow the minimum
needed to support known use cases, but restricting KVM_HC_MAP_GPA_RANGE doesn't
actually achieve anything and just makes things more complex for KVM.  E.g. the
behavior is non-deterministic from KVM's perspective if a userspace memslot
update is in progress.
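
[Editor's note] To make Sean's scenario concrete, the sketch below creates a
private-capable memslot from the userspace exit handler, on demand, for the
range the guest asked to map private. It is written against the guest_memfd
uAPI that eventually landed upstream (KVM_CREATE_GUEST_MEMFD,
KVM_SET_USER_MEMORY_REGION2, KVM_MEM_GUEST_MEMFD) rather than the
KVM_MEM_PRIVATE/restrictedmem API this RFC builds on, and the slot number and
backing layout are made up for illustration:

	#include <stdint.h>
	#include <sys/ioctl.h>
	#include <linux/kvm.h>

	/* Hypothetical: back [gpa, gpa + size) with a freshly created
	 * guest_memfd so the range can be mapped private. */
	static int create_private_slot(int vm_fd, uint32_t slot, uint64_t gpa,
				       uint64_t size, void *shared_va)
	{
		struct kvm_create_guest_memfd gmem = { .size = size };
		int gmem_fd = ioctl(vm_fd, KVM_CREATE_GUEST_MEMFD, &gmem);

		if (gmem_fd < 0)
			return -1;

		struct kvm_userspace_memory_region2 region = {
			.slot            = slot,
			.flags           = KVM_MEM_GUEST_MEMFD,
			.guest_phys_addr = gpa,
			.memory_size     = size,
			/* shared backing for the same range */
			.userspace_addr  = (uint64_t)(uintptr_t)shared_va,
			/* private backing */
			.guest_memfd     = (uint32_t)gmem_fd,
		};

		return ioctl(vm_fd, KVM_SET_USER_MEMORY_REGION2, &region);
	}

Whether the slot already exists or is created at this point, KVM only needs to
forward the hypercall, which is why filtering the exit on the current memslot
layout, as this patch does, buys nothing.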
Jeremi Piotrowski Jan. 27, 2023, 4:35 p.m. UTC | #5
On Wed, Dec 14, 2022 at 01:39:59PM -0600, Michael Roth wrote:
> From: Nikunj A Dadhania <nikunj@amd.com>
> 
> KVM_HC_MAP_GPA_RANGE hypercall is used by the SEV guest to notify a
> change in the page encryption status to the hypervisor.
> 
> The hypercall exits to userspace with KVM_EXIT_HYPERCALL exit code,
> currently this is used for explicit memory conversion between
> shared/private for memfd based private memory.
> 
> Signed-off-by: Nikunj A Dadhania <nikunj@amd.com>
> Signed-off-by: Michael Roth <michael.roth@amd.com>
> ---
>  arch/x86/kvm/x86.c  | 8 ++++++++
>  virt/kvm/kvm_main.c | 1 +
>  2 files changed, 9 insertions(+)
> 
> diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
> index bb6adb216054..732f9cbbadb5 100644
> --- a/arch/x86/kvm/x86.c
> +++ b/arch/x86/kvm/x86.c
> @@ -9649,6 +9649,7 @@ int kvm_emulate_hypercall(struct kvm_vcpu *vcpu)

Couldn't find a better commit to comment on:
when the guest has the ptp-kvm module, it will issue a KVM_HC_CLOCK_PAIRING
hypercall. This will pass sev_es_validate_vmgexit validation and end up in this
function where kvm_pv_clock_pairing() is called, and that calls
kvm_write_guest(). This results in a CPU soft-lockup, at least in my testing.

Are there any emulated hypercalls that make sense for snp guests? We should
block at least the ones that definitely don't work.

Jeremi

>  		break;
>  	case KVM_HC_MAP_GPA_RANGE: {
>  		u64 gpa = a0, npages = a1, attrs = a2;
> +		struct kvm_memory_slot *slot;
>  
>  		ret = -KVM_ENOSYS;
>  		if (!(vcpu->kvm->arch.hypercall_exit_enabled & (1 << KVM_HC_MAP_GPA_RANGE)))
> @@ -9660,6 +9661,13 @@ int kvm_emulate_hypercall(struct kvm_vcpu *vcpu)
>  			break;
>  		}
>  
> +		slot = kvm_vcpu_gfn_to_memslot(vcpu, gpa_to_gfn(gpa));
> +		if (!vcpu->kvm->arch.upm_mode ||
> +		    !kvm_slot_can_be_private(slot)) {
> +			ret = 0;
> +			break;
> +		}
> +
>  		vcpu->run->exit_reason        = KVM_EXIT_HYPERCALL;
>  		vcpu->run->hypercall.nr       = KVM_HC_MAP_GPA_RANGE;
>  		vcpu->run->hypercall.args[0]  = gpa;
> diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
> index d2daa049e94a..73bf0bdedb59 100644
> --- a/virt/kvm/kvm_main.c
> +++ b/virt/kvm/kvm_main.c
> @@ -2646,6 +2646,7 @@ struct kvm_memory_slot *kvm_vcpu_gfn_to_memslot(struct kvm_vcpu *vcpu, gfn_t gfn
>  
>  	return NULL;
>  }
> +EXPORT_SYMBOL_GPL(kvm_vcpu_gfn_to_memslot);
>  
>  bool kvm_is_visible_gfn(struct kvm *kvm, gfn_t gfn)
>  {
> -- 
> 2.25.1
>
Jeremi Piotrowski Jan. 31, 2023, 2:15 p.m. UTC | #6
On Fri, Jan 27, 2023 at 08:35:58AM -0800, Jeremi Piotrowski wrote:
> On Wed, Dec 14, 2022 at 01:39:59PM -0600, Michael Roth wrote:
> > From: Nikunj A Dadhania <nikunj@amd.com>
> > 
> > KVM_HC_MAP_GPA_RANGE hypercall is used by the SEV guest to notify a
> > change in the page encryption status to the hypervisor.
> > 
> > The hypercall exits to userspace with KVM_EXIT_HYPERCALL exit code,
> > currently this is used for explicit memory conversion between
> > shared/private for memfd based private memory.
> > 
> > Signed-off-by: Nikunj A Dadhania <nikunj@amd.com>
> > Signed-off-by: Michael Roth <michael.roth@amd.com>
> > ---
> >  arch/x86/kvm/x86.c  | 8 ++++++++
> >  virt/kvm/kvm_main.c | 1 +
> >  2 files changed, 9 insertions(+)
> > 
> > diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
> > index bb6adb216054..732f9cbbadb5 100644
> > --- a/arch/x86/kvm/x86.c
> > +++ b/arch/x86/kvm/x86.c
> > @@ -9649,6 +9649,7 @@ int kvm_emulate_hypercall(struct kvm_vcpu *vcpu)
> 
> Couldn't find a better commit to comment on:
> when the guest has the ptp-kvm module, it will issue a KVM_HC_CLOCK_PAIRING
> hypercall. This will pass sev_es_validate_vmgexit validation and end up in this
> function where kvm_pv_clock_pairing() is called, and that calls
> kvm_write_guest(). This results in a CPU soft-lockup, at least in my testing.
> 
> Are there any emulated hypercalls that make sense for snp guests? We should
> block at least the ones that definitely don't work.
> 
> Jeremi

So it turns out the soft-lockup is a nested virtualization issue (details here
for those interested: [^1]), but the question still stands of whether we should
block kvm_write_guest() (and similar) explicitly or rely on the RMP fault.

[^1]: https://github.com/jepio/linux/commit/6c3bdf552e93664ae172660e24ceceed60fd4df5
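
[Editor's note] An explicit guard along the lines Jeremi is suggesting could
look roughly like the fragment below, inside kvm_emulate_hypercall(). This is
only a sketch, not part of the posted patch; it reuses the upm_mode flag
introduced elsewhere in this series as a stand-in for "this guest may have
private memory", and an upstream version would likely key off a more general
predicate:

	#ifdef CONFIG_X86_64
		case KVM_HC_CLOCK_PAIRING:
			/*
			 * Sketch: kvm_pv_clock_pairing() writes the pairing data
			 * into guest memory with kvm_write_guest(), which cannot
			 * work if the target page is private, so refuse the
			 * hypercall for guests running with private (UPM) memory.
			 */
			if (vcpu->kvm->arch.upm_mode) {
				ret = -KVM_ENOSYS;
				break;
			}
			ret = kvm_pv_clock_pairing(vcpu, a0, a1);
			break;
	#endif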

Patch

diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index bb6adb216054..732f9cbbadb5 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -9649,6 +9649,7 @@  int kvm_emulate_hypercall(struct kvm_vcpu *vcpu)
 		break;
 	case KVM_HC_MAP_GPA_RANGE: {
 		u64 gpa = a0, npages = a1, attrs = a2;
+		struct kvm_memory_slot *slot;
 
 		ret = -KVM_ENOSYS;
 		if (!(vcpu->kvm->arch.hypercall_exit_enabled & (1 << KVM_HC_MAP_GPA_RANGE)))
@@ -9660,6 +9661,13 @@  int kvm_emulate_hypercall(struct kvm_vcpu *vcpu)
 			break;
 		}
 
+		slot = kvm_vcpu_gfn_to_memslot(vcpu, gpa_to_gfn(gpa));
+		if (!vcpu->kvm->arch.upm_mode ||
+		    !kvm_slot_can_be_private(slot)) {
+			ret = 0;
+			break;
+		}
+
 		vcpu->run->exit_reason        = KVM_EXIT_HYPERCALL;
 		vcpu->run->hypercall.nr       = KVM_HC_MAP_GPA_RANGE;
 		vcpu->run->hypercall.args[0]  = gpa;
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index d2daa049e94a..73bf0bdedb59 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -2646,6 +2646,7 @@  struct kvm_memory_slot *kvm_vcpu_gfn_to_memslot(struct kvm_vcpu *vcpu, gfn_t gfn
 
 	return NULL;
 }
+EXPORT_SYMBOL_GPL(kvm_vcpu_gfn_to_memslot);
 
 bool kvm_is_visible_gfn(struct kvm *kvm, gfn_t gfn)
 {
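
[Editor's note] For completeness, the hypercall_exit_enabled check above means
none of this triggers unless userspace has opted in to forwarding the
hypercall. A VMM would typically do that once at VM setup, roughly as in this
sketch (error handling omitted):

	#include <sys/ioctl.h>
	#include <linux/kvm.h>
	#include <linux/kvm_para.h>

	/* Ask KVM to forward KVM_HC_MAP_GPA_RANGE to userspace as
	 * KVM_EXIT_HYPERCALL instead of returning -KVM_ENOSYS to the guest. */
	static int enable_map_gpa_range_exit(int vm_fd)
	{
		struct kvm_enable_cap cap = {
			.cap = KVM_CAP_EXIT_HYPERCALL,
			.args[0] = 1ULL << KVM_HC_MAP_GPA_RANGE,
		};

		return ioctl(vm_fd, KVM_ENABLE_CAP, &cap);
	}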