Message ID | 20200522125214.31348-11-kirill.shutemov@linux.intel.com
---|---
State | New, archived
Series | KVM protected memory extension
"Kirill A. Shutemov" <kirill@shutemov.name> writes: > Wire up hypercalls for the feature and define VM_KVM_PROTECTED. > > Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com> > --- > arch/x86/Kconfig | 1 + > arch/x86/kvm/cpuid.c | 3 +++ > arch/x86/kvm/x86.c | 9 +++++++++ > include/linux/mm.h | 4 ++++ > 4 files changed, 17 insertions(+) > > diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig > index 58dd44a1b92f..420e3947f0c6 100644 > --- a/arch/x86/Kconfig > +++ b/arch/x86/Kconfig > @@ -801,6 +801,7 @@ config KVM_GUEST > select ARCH_CPUIDLE_HALTPOLL > select X86_MEM_ENCRYPT_COMMON > select SWIOTLB > + select ARCH_USES_HIGH_VMA_FLAGS > default y > ---help--- > This option enables various optimizations for running under the KVM > diff --git a/arch/x86/kvm/cpuid.c b/arch/x86/kvm/cpuid.c > index 901cd1fdecd9..94cc5e45467e 100644 > --- a/arch/x86/kvm/cpuid.c > +++ b/arch/x86/kvm/cpuid.c > @@ -714,6 +714,9 @@ static inline int __do_cpuid_func(struct kvm_cpuid_array *array, u32 function) > (1 << KVM_FEATURE_POLL_CONTROL) | > (1 << KVM_FEATURE_PV_SCHED_YIELD); > > + if (VM_KVM_PROTECTED) > + entry->eax |=(1 << KVM_FEATURE_MEM_PROTECTED); Nit: missing space. > + > if (sched_info_on()) > entry->eax |= (1 << KVM_FEATURE_STEAL_TIME); > > diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c > index c17e6eb9ad43..acba0ac07f61 100644 > --- a/arch/x86/kvm/x86.c > +++ b/arch/x86/kvm/x86.c > @@ -7598,6 +7598,15 @@ int kvm_emulate_hypercall(struct kvm_vcpu *vcpu) > kvm_sched_yield(vcpu->kvm, a0); > ret = 0; > break; > + case KVM_HC_ENABLE_MEM_PROTECTED: > + ret = kvm_protect_all_memory(vcpu->kvm); > + break; > + case KVM_HC_MEM_SHARE: > + ret = kvm_protect_memory(vcpu->kvm, a0, a1, false); > + break; > + case KVM_HC_MEM_UNSHARE: > + ret = kvm_protect_memory(vcpu->kvm, a0, a1, true); > + break; > default: > ret = -KVM_ENOSYS; > break; > diff --git a/include/linux/mm.h b/include/linux/mm.h > index 4f7195365cc0..6eb771c14968 100644 > --- a/include/linux/mm.h > +++ b/include/linux/mm.h > @@ -329,7 +329,11 @@ extern unsigned int kobjsize(const void *objp); > # define VM_MAPPED_COPY VM_ARCH_1 /* T if mapped copy of data (nommu mmap) */ > #endif > > +#if defined(CONFIG_X86_64) && defined(CONFIG_KVM) > +#define VM_KVM_PROTECTED VM_HIGH_ARCH_4 > +#else > #define VM_KVM_PROTECTED 0 > +#endif > > #ifndef VM_GROWSUP > # define VM_GROWSUP VM_NONE
On Fri, May 22, 2020 at 03:52:08PM +0300, Kirill A. Shutemov wrote:
> Wire up hypercalls for the feature and define VM_KVM_PROTECTED.
>
> Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>

[...]

> diff --git a/include/linux/mm.h b/include/linux/mm.h
> index 4f7195365cc0..6eb771c14968 100644
> --- a/include/linux/mm.h
> +++ b/include/linux/mm.h
> @@ -329,7 +329,11 @@ extern unsigned int kobjsize(const void *objp);
>  # define VM_MAPPED_COPY	VM_ARCH_1	/* T if mapped copy of data (nommu mmap) */
>  #endif
>
> +#if defined(CONFIG_X86_64) && defined(CONFIG_KVM)

This would be better spelled as ARCH_WANTS_PROTECTED_MEMORY, IMHO.

> +#define VM_KVM_PROTECTED VM_HIGH_ARCH_4

Maybe this should be VM_HIGH_ARCH_5 so that powerpc could enable this
feature eventually?

> +#else
>  #define VM_KVM_PROTECTED 0
> +#endif
>
>  #ifndef VM_GROWSUP
>  # define VM_GROWSUP	VM_NONE
> --
> 2.26.2
On Tue, May 26, 2020 at 09:16:09AM +0300, Mike Rapoport wrote:
> On Fri, May 22, 2020 at 03:52:08PM +0300, Kirill A. Shutemov wrote:
> > Wire up hypercalls for the feature and define VM_KVM_PROTECTED.

[...]

> > +#if defined(CONFIG_X86_64) && defined(CONFIG_KVM)
>
> This would be better spelled as ARCH_WANTS_PROTECTED_MEMORY, IMHO.

Sure. I thought it's good enough for RFC :)

> > +#define VM_KVM_PROTECTED VM_HIGH_ARCH_4
>
> Maybe this should be VM_HIGH_ARCH_5 so that powerpc could enable this
> feature eventually?

Okay-okay.
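Taken together, the two suggestions point at a hunk along these lines. This is a sketch only: the ARCH_WANTS_PROTECTED_MEMORY Kconfig symbol (selected by architectures that implement the feature) and the move to VM_HIGH_ARCH_5 follow from this exchange and are not part of the posted series.

/*
 * Hypothetical revision of the include/linux/mm.h hunk following the
 * review above: gate the flag on a dedicated Kconfig symbol instead of
 * CONFIG_X86_64 && CONFIG_KVM, and use VM_HIGH_ARCH_5 so that
 * VM_HIGH_ARCH_4 stays free for other architectures such as powerpc.
 */
#ifdef CONFIG_ARCH_WANTS_PROTECTED_MEMORY
#define VM_KVM_PROTECTED VM_HIGH_ARCH_5
#else
#define VM_KVM_PROTECTED 0
#endif

Keeping the fallback definition as 0 preserves the behaviour of the posted patch for configurations that do not select the symbol.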
diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
index 58dd44a1b92f..420e3947f0c6 100644
--- a/arch/x86/Kconfig
+++ b/arch/x86/Kconfig
@@ -801,6 +801,7 @@ config KVM_GUEST
 	select ARCH_CPUIDLE_HALTPOLL
 	select X86_MEM_ENCRYPT_COMMON
 	select SWIOTLB
+	select ARCH_USES_HIGH_VMA_FLAGS
 	default y
 	---help---
 	  This option enables various optimizations for running under the KVM
diff --git a/arch/x86/kvm/cpuid.c b/arch/x86/kvm/cpuid.c
index 901cd1fdecd9..94cc5e45467e 100644
--- a/arch/x86/kvm/cpuid.c
+++ b/arch/x86/kvm/cpuid.c
@@ -714,6 +714,9 @@ static inline int __do_cpuid_func(struct kvm_cpuid_array *array, u32 function)
 			     (1 << KVM_FEATURE_POLL_CONTROL) |
 			     (1 << KVM_FEATURE_PV_SCHED_YIELD);

+		if (VM_KVM_PROTECTED)
+			entry->eax |=(1 << KVM_FEATURE_MEM_PROTECTED);
+
 		if (sched_info_on())
 			entry->eax |= (1 << KVM_FEATURE_STEAL_TIME);

diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index c17e6eb9ad43..acba0ac07f61 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -7598,6 +7598,15 @@ int kvm_emulate_hypercall(struct kvm_vcpu *vcpu)
 		kvm_sched_yield(vcpu->kvm, a0);
 		ret = 0;
 		break;
+	case KVM_HC_ENABLE_MEM_PROTECTED:
+		ret = kvm_protect_all_memory(vcpu->kvm);
+		break;
+	case KVM_HC_MEM_SHARE:
+		ret = kvm_protect_memory(vcpu->kvm, a0, a1, false);
+		break;
+	case KVM_HC_MEM_UNSHARE:
+		ret = kvm_protect_memory(vcpu->kvm, a0, a1, true);
+		break;
 	default:
 		ret = -KVM_ENOSYS;
 		break;
diff --git a/include/linux/mm.h b/include/linux/mm.h
index 4f7195365cc0..6eb771c14968 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -329,7 +329,11 @@ extern unsigned int kobjsize(const void *objp);
 # define VM_MAPPED_COPY	VM_ARCH_1	/* T if mapped copy of data (nommu mmap) */
 #endif

+#if defined(CONFIG_X86_64) && defined(CONFIG_KVM)
+#define VM_KVM_PROTECTED VM_HIGH_ARCH_4
+#else
 #define VM_KVM_PROTECTED 0
+#endif

 #ifndef VM_GROWSUP
 # define VM_GROWSUP	VM_NONE
Wire up hypercalls for the feature and define VM_KVM_PROTECTED.

Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
---
 arch/x86/Kconfig     | 1 +
 arch/x86/kvm/cpuid.c | 3 +++
 arch/x86/kvm/x86.c   | 9 +++++++++
 include/linux/mm.h   | 4 ++++
 4 files changed, 17 insertions(+)
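For context, a minimal guest-side sketch of how these hypercalls could be issued. It assumes the kvm_hypercall0()/kvm_hypercall2() helpers from asm/kvm_para.h and the KVM_HC_* numbers introduced elsewhere in this series; the reading of the two arguments as a start GFN and a page count is likewise an assumption based on the rest of the series, not something defined by this patch.

/*
 * Illustrative guest-side wrappers, not part of this patch.  The
 * KVM_HC_* numbers and the meaning of the two arguments (start gfn,
 * number of pages) are assumed from the rest of the series.
 */
#include <linux/kvm_para.h>

static long kvm_mem_protect_all(void)
{
	/* Ask the host to make all guest memory inaccessible to it. */
	return kvm_hypercall0(KVM_HC_ENABLE_MEM_PROTECTED);
}

static long kvm_mem_share(unsigned long gfn, unsigned long npages)
{
	/* Make a range accessible to the host again, e.g. for DMA. */
	return kvm_hypercall2(KVM_HC_MEM_SHARE, gfn, npages);
}

static long kvm_mem_unshare(unsigned long gfn, unsigned long npages)
{
	/* Revoke host access to a previously shared range. */
	return kvm_hypercall2(KVM_HC_MEM_UNSHARE, gfn, npages);
}

A guest would presumably check kvm_para_has_feature(KVM_FEATURE_MEM_PROTECTED) before issuing any of these. On the host side, the kvm_emulate_hypercall() hunk above simply forwards a0/a1 to kvm_protect_memory()/kvm_protect_all_memory(), which this patch relies on being added earlier in the series.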