Message ID | 158880254122.11615.156420638099504288.stgit@naples-babu.amd.com (mailing list archive)
---|---
State | New, archived
Series | arch/x86: Enable MPK feature on AMD
On Wed, May 06, 2020 at 05:02:21PM -0500, Babu Moger wrote:
>  static __init int svm_hardware_setup(void)
> @@ -1300,6 +1304,8 @@ static void svm_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
>  		indirect_branch_prediction_barrier();
>  	}
>  	avic_vcpu_load(vcpu, cpu);
> +
> +	svm->host_pkru = read_pkru();

Move vcpu_vmx's host_pkru to kvm_vcpu_arch instead of duplicating it in SVM.

And I'm 99% certain "vcpu->arch.host_pkru = read_pkru()" can be moved to
kvm_arch_vcpu_load().  The only direct calls to vmx_vcpu_load() are to get
the right VMCS loaded.  Actually, those calls shouldn't be using
vmx_vcpu_load(), especially since that'll trigger IBPB.  I'll send a patch
for that.

>  }
>
>  static void svm_vcpu_put(struct kvm_vcpu *vcpu)
> @@ -3318,6 +3324,12 @@ static void svm_vcpu_run(struct kvm_vcpu *vcpu)
>  	clgi();
>  	kvm_load_guest_xsave_state(vcpu);
>
> +	/* Load the guest pkru state */
> +	if (static_cpu_has(X86_FEATURE_PKU) &&
> +	    kvm_read_cr4_bits(vcpu, X86_CR4_PKE) &&
> +	    vcpu->arch.pkru != svm->host_pkru)
> +		__write_pkru(vcpu->arch.pkru);

This and the restoration should be moved to common x86 helpers, at a glance
they look identical.

In short, pretty much all of this belongs in common x86.

> +
>  	if (lapic_in_kernel(vcpu) &&
>  	    vcpu->arch.apic->lapic_timer.timer_advance_ns)
>  		kvm_wait_lapic_expire(vcpu);
> @@ -3371,6 +3383,14 @@ static void svm_vcpu_run(struct kvm_vcpu *vcpu)
>  	if (unlikely(svm->vmcb->control.exit_code == SVM_EXIT_NMI))
>  		kvm_before_interrupt(&svm->vcpu);
>
> +	/* Save the guest pkru state and restore the host pkru state back */
> +	if (static_cpu_has(X86_FEATURE_PKU) &&
> +	    kvm_read_cr4_bits(vcpu, X86_CR4_PKE)) {
> +		vcpu->arch.pkru = rdpkru();
> +		if (vcpu->arch.pkru != svm->host_pkru)
> +			__write_pkru(svm->host_pkru);
> +	}
> +
>  	kvm_load_host_xsave_state(vcpu);
>  	stgi();
>
> diff --git a/arch/x86/kvm/svm/svm.h b/arch/x86/kvm/svm/svm.h
> index df3474f4fb02..5d20a28c1b0e 100644
> --- a/arch/x86/kvm/svm/svm.h
> +++ b/arch/x86/kvm/svm/svm.h
> @@ -158,6 +158,8 @@ struct vcpu_svm {
>  	u64 *avic_physical_id_cache;
>  	bool avic_is_running;
>
> +	u32 host_pkru;
> +
>  	/*
>  	 * Per-vcpu list of struct amd_svm_iommu_ir:
>  	 * This is used mainly to store interrupt remapping information used
>
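[Editor's note: a minimal sketch of what Sean's suggestion might look like, assuming host_pkru is lifted out of struct vcpu_vmx into kvm_vcpu_arch and the snapshot is taken in the common kvm_arch_vcpu_load(). The field placement and hook are illustrative assumptions, not the final upstream layout.]

	/* arch/x86/include/asm/kvm_host.h */
	struct kvm_vcpu_arch {
		/* ... existing fields ... */
		u32 pkru;
		u32 host_pkru;	/* shared by VMX and SVM instead of per-vendor copies */
		/* ... */
	};

	/* arch/x86/kvm/x86.c */
	void kvm_arch_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
	{
		/* ... existing vendor-specific vcpu_load ... */

		/* Snapshot the host's PKRU once per load, for both vendors. */
		vcpu->arch.host_pkru = read_pkru();
	}

With this, the svm_vcpu_load() hunk above and the equivalent VMX code would collapse into a single common assignment.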
On 5/6/20 3:02 PM, Babu Moger wrote:
> --- a/arch/x86/kvm/svm/svm.c
> +++ b/arch/x86/kvm/svm/svm.c
> @@ -818,6 +818,10 @@ static __init void svm_set_cpu_caps(void)
>  	if (boot_cpu_has(X86_FEATURE_LS_CFG_SSBD) ||
>  	    boot_cpu_has(X86_FEATURE_AMD_SSBD))
>  		kvm_cpu_cap_set(X86_FEATURE_VIRT_SSBD);
> +
> +	/* PKU is not yet implemented for shadow paging. */
> +	if (npt_enabled && boot_cpu_has(X86_FEATURE_OSPKE))
> +		kvm_cpu_cap_check_and_set(X86_FEATURE_PKU);
>  }

Reviewed-by: Dave Hansen <dave.hansen@intel.com>

But, I'll also say that we probably shouldn't have put the other code into
arch/x86/kvm/vmx/vmx.c in the first place.  It would be much nicer if this
refactored the current code into a common spot rather than copying.  But, I
do understand the impulse to do it the way it was done.
On 07/05/20 00:26, Sean Christopherson wrote:
>> +	/* Load the guest pkru state */
>> +	if (static_cpu_has(X86_FEATURE_PKU) &&
>> +	    kvm_read_cr4_bits(vcpu, X86_CR4_PKE) &&
>> +	    vcpu->arch.pkru != svm->host_pkru)
>> +		__write_pkru(vcpu->arch.pkru);
> This and the restoration should be moved to common x86 helpers, at a glance
> they look identical.
>
> In short, pretty much all of this belongs in common x86.
>

We could stick this in kvm_load_guest_xsave_state and
kvm_load_host_xsave_state.  It's not a perfect match, since the code itself
proves that PKRU can be loaded without XSAVE; but it's close enough, and it
sits at exactly the right point in vmx_vcpu_run and svm_vcpu_run.

Paolo
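[Editor's note: a rough sketch of Paolo's idea, assuming host_pkru has been moved into kvm_vcpu_arch as suggested above. The PKRU switch piggybacks on the existing kvm_load_guest_xsave_state()/kvm_load_host_xsave_state() helpers in arch/x86/kvm/x86.c so VMX and SVM share one copy; the guards mirror the SVM hunks in this patch and are illustrative only.]

	void kvm_load_guest_xsave_state(struct kvm_vcpu *vcpu)
	{
		/* ... existing XCR0/XSS switching ... */

		/* Load the guest PKRU only when the guest can actually use it. */
		if (static_cpu_has(X86_FEATURE_PKU) &&
		    kvm_read_cr4_bits(vcpu, X86_CR4_PKE) &&
		    vcpu->arch.pkru != vcpu->arch.host_pkru)
			__write_pkru(vcpu->arch.pkru);
	}

	void kvm_load_host_xsave_state(struct kvm_vcpu *vcpu)
	{
		/* Save the guest PKRU and restore the host value on the way out. */
		if (static_cpu_has(X86_FEATURE_PKU) &&
		    kvm_read_cr4_bits(vcpu, X86_CR4_PKE)) {
			vcpu->arch.pkru = rdpkru();
			if (vcpu->arch.pkru != vcpu->arch.host_pkru)
				__write_pkru(vcpu->arch.host_pkru);
		}

		/* ... existing XCR0/XSS switching ... */
	}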
diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
index 2f379bacbb26..de327f02470f 100644
--- a/arch/x86/kvm/svm/svm.c
+++ b/arch/x86/kvm/svm/svm.c
@@ -818,6 +818,10 @@ static __init void svm_set_cpu_caps(void)
 	if (boot_cpu_has(X86_FEATURE_LS_CFG_SSBD) ||
 	    boot_cpu_has(X86_FEATURE_AMD_SSBD))
 		kvm_cpu_cap_set(X86_FEATURE_VIRT_SSBD);
+
+	/* PKU is not yet implemented for shadow paging. */
+	if (npt_enabled && boot_cpu_has(X86_FEATURE_OSPKE))
+		kvm_cpu_cap_check_and_set(X86_FEATURE_PKU);
 }
 
 static __init int svm_hardware_setup(void)
@@ -1300,6 +1304,8 @@ static void svm_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
 		indirect_branch_prediction_barrier();
 	}
 	avic_vcpu_load(vcpu, cpu);
+
+	svm->host_pkru = read_pkru();
 }
 
 static void svm_vcpu_put(struct kvm_vcpu *vcpu)
@@ -3318,6 +3324,12 @@ static void svm_vcpu_run(struct kvm_vcpu *vcpu)
 	clgi();
 	kvm_load_guest_xsave_state(vcpu);
 
+	/* Load the guest pkru state */
+	if (static_cpu_has(X86_FEATURE_PKU) &&
+	    kvm_read_cr4_bits(vcpu, X86_CR4_PKE) &&
+	    vcpu->arch.pkru != svm->host_pkru)
+		__write_pkru(vcpu->arch.pkru);
+
 	if (lapic_in_kernel(vcpu) &&
 	    vcpu->arch.apic->lapic_timer.timer_advance_ns)
 		kvm_wait_lapic_expire(vcpu);
@@ -3371,6 +3383,14 @@ static void svm_vcpu_run(struct kvm_vcpu *vcpu)
 	if (unlikely(svm->vmcb->control.exit_code == SVM_EXIT_NMI))
 		kvm_before_interrupt(&svm->vcpu);
 
+	/* Save the guest pkru state and restore the host pkru state back */
+	if (static_cpu_has(X86_FEATURE_PKU) &&
+	    kvm_read_cr4_bits(vcpu, X86_CR4_PKE)) {
+		vcpu->arch.pkru = rdpkru();
+		if (vcpu->arch.pkru != svm->host_pkru)
+			__write_pkru(svm->host_pkru);
+	}
+
 	kvm_load_host_xsave_state(vcpu);
 	stgi();
 
diff --git a/arch/x86/kvm/svm/svm.h b/arch/x86/kvm/svm/svm.h
index df3474f4fb02..5d20a28c1b0e 100644
--- a/arch/x86/kvm/svm/svm.h
+++ b/arch/x86/kvm/svm/svm.h
@@ -158,6 +158,8 @@ struct vcpu_svm {
 	u64 *avic_physical_id_cache;
 	bool avic_is_running;
 
+	u32 host_pkru;
+
 	/*
 	 * Per-vcpu list of struct amd_svm_iommu_ir:
 	 * This is used mainly to store interrupt remapping information used
The Memory Protection Key (MPK) feature provides a way for applications to
impose page-based data access protections (read/write, read-only or no
access) without requiring modification of page tables and the subsequent TLB
invalidations when the application changes protection domains.

This feature is already available on Intel platforms. Now enable it on AMD
platforms as well. The host PKRU state needs to be saved/restored across
guest/host switches in SVM; the remaining changes are already handled by the
common PKRU code.

AMD documentation for the MPK feature is available in "AMD64 Architecture
Programmer’s Manual Volume 2: System Programming, Pub. 24593 Rev. 3.34,
Section 5.6.6 Memory Protection Keys (MPK) Bit". The documentation can be
obtained at the link below.

Link: https://bugzilla.kernel.org/show_bug.cgi?id=206537
Signed-off-by: Babu Moger <babu.moger@amd.com>
---
 arch/x86/kvm/svm/svm.c |   20 ++++++++++++++++++++
 arch/x86/kvm/svm/svm.h |    2 ++
 2 files changed, 22 insertions(+)
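[Editor's note: for context (not part of the patch), a small userspace sketch of how an application uses MPK. Access rights are changed by rewriting PKRU through the pkey_* wrappers, with no mprotect() page-table update or TLB shootdown on each domain switch. It assumes a CPU and kernel with OSPKE enabled and glibc 2.27+ for pkey_alloc()/pkey_mprotect()/pkey_set().]

	#define _GNU_SOURCE
	#include <stdio.h>
	#include <string.h>
	#include <sys/mman.h>

	int main(void)
	{
		size_t len = 4096;
		char *buf = mmap(NULL, len, PROT_READ | PROT_WRITE,
				 MAP_ANONYMOUS | MAP_PRIVATE, -1, 0);
		if (buf == MAP_FAILED)
			return 1;

		int pkey = pkey_alloc(0, 0);		/* new key, all access allowed */
		if (pkey < 0 || pkey_mprotect(buf, len, PROT_READ | PROT_WRITE, pkey))
			return 1;

		strcpy(buf, "writable");		/* write allowed */

		pkey_set(pkey, PKEY_DISABLE_WRITE);	/* PKRU update only, no TLB flush */
		/* A write to buf here would now fault with SEGV_PKUERR. */

		pkey_set(pkey, 0);			/* re-enable writes */
		printf("%s\n", buf);

		pkey_free(pkey);
		munmap(buf, len);
		return 0;
	}

With this series applied, the same program behaves identically inside a KVM guest on an AMD host with NPT, since PKRU is now switched across VMRUN.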