Message ID | CAPm50aKwbZGeXPK5uig18Br8CF1hOS71CE2j_dLX+ub7oJdpGg@mail.gmail.com (mailing list archive)
---|---
State | New, archived
Series | [RESEND] KVM: X86: Reduce size of kvm_vcpu_arch structure when CONFIG_KVM_XEN=n
On Tue, 05 Sep 2023 09:07:09 +0800, Hao Peng wrote:
> When CONFIG_KVM_XEN=n, the size of kvm_vcpu_arch can be reduced
> from 5100+ to 4400+ by adding macro control.

Applied to kvm-x86 misc. Please fix whatever mail client you're using to send
patches, the patch was heavily whitespace damaged. I fixed up this one because
it was easy to fix and a straightforward patch.

[1/1] KVM: X86: Reduce size of kvm_vcpu_arch structure when CONFIG_KVM_XEN=n
      https://github.com/kvm-x86/linux/commit/fd00e095a031

--
https://github.com/kvm-x86/linux/tree/next
On Thu, Sep 28, 2023, Sean Christopherson wrote:
> On Tue, 05 Sep 2023 09:07:09 +0800, Hao Peng wrote:
> > When CONFIG_KVM_XEN=n, the size of kvm_vcpu_arch can be reduced
> > from 5100+ to 4400+ by adding macro control.
>
> Applied to kvm-x86 misc. Please fix whatever mail client you're using to send
> patches, the patch was heavily whitespace damaged. I fixed up this one because
> it was easy to fix and a straightforward patch.
>
> [1/1] KVM: X86: Reduce size of kvm_vcpu_arch structure when CONFIG_KVM_XEN=n
>       https://github.com/kvm-x86/linux/commit/fd00e095a031

FYI, I've moved this to "kvm-x86 xen". There are enough Xen patches coming in
that I didn't want to dump them all in "misc", and I also didn't want to have
one lone Xen patch in a different pull request.

[1/1] KVM: X86: Reduce size of kvm_vcpu_arch structure when CONFIG_KVM_XEN=n
      https://github.com/kvm-x86/linux/commit/ee11ab6bb04e
diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 1a4def36d5bb..9320019708f9 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -680,6 +680,7 @@ struct kvm_hypervisor_cpuid {
 	u32 limit;
 };
 
+#ifdef CONFIG_KVM_XEN
 /* Xen HVM per vcpu emulation context */
 struct kvm_vcpu_xen {
 	u64 hypercall_rip;
@@ -702,6 +703,7 @@ struct kvm_vcpu_xen {
 	struct timer_list poll_timer;
 	struct kvm_hypervisor_cpuid cpuid;
 };
+#endif
 
 struct kvm_queued_exception {
 	bool pending;
@@ -930,8 +932,9 @@ struct kvm_vcpu_arch {
 	bool hyperv_enabled;
 	struct kvm_vcpu_hv *hyperv;
 
+#ifdef CONFIG_KVM_XEN
 	struct kvm_vcpu_xen xen;
-
+#endif
 	cpumask_var_t wbinvd_dirty_mask;
 
 	unsigned long last_retry_eip;
diff --git a/arch/x86/kvm/cpuid.c b/arch/x86/kvm/cpuid.c
index 0544e30b4946..48f5308c4556 100644
--- a/arch/x86/kvm/cpuid.c
+++ b/arch/x86/kvm/cpuid.c
@@ -456,7 +456,9 @@ static int kvm_set_cpuid(struct kvm_vcpu *vcpu, struct kvm_cpuid_entry2 *e2,
 
 	vcpu->arch.cpuid_nent = nent;
 	vcpu->arch.kvm_cpuid = kvm_get_hypervisor_cpuid(vcpu, KVM_SIGNATURE);
+#ifdef CONFIG_KVM_XEN
 	vcpu->arch.xen.cpuid = kvm_get_hypervisor_cpuid(vcpu, XEN_SIGNATURE);
+#endif
 	kvm_vcpu_after_set_cpuid(vcpu);
 
 	return 0;
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 6c9c81e82e65..4fd08a5e0e98 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -3232,11 +3232,13 @@ static int kvm_guest_time_update(struct kvm_vcpu *v)
 
 	if (vcpu->pv_time.active)
 		kvm_setup_guest_pvclock(v, &vcpu->pv_time, 0);
+#ifdef CONFIG_KVM_XEN
 	if (vcpu->xen.vcpu_info_cache.active)
 		kvm_setup_guest_pvclock(v, &vcpu->xen.vcpu_info_cache,
 					offsetof(struct compat_vcpu_info, time));
 	if (vcpu->xen.vcpu_time_info_cache.active)
 		kvm_setup_guest_pvclock(v, &vcpu->xen.vcpu_time_info_cache, 0);
+#endif
 	kvm_hv_setup_tsc_page(v->kvm, &vcpu->hv_clock);
 
 	return 0;
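For readers wondering why compiling out the embedded member is enough to shrink the
containing structure, below is a minimal, self-contained sketch of the same
#ifdef-around-a-member pattern. It is illustrative only: the names (CONFIG_DEMO_XEN,
demo_xen_ctx, demo_vcpu_arch) are invented for the demo and are not part of KVM, and
the real size delta depends on the actual layout of struct kvm_vcpu_xen.

/*
 * Illustrative sketch only, not part of the patch: guarding an embedded
 * struct member with a config macro removes its full size from the outer
 * structure when the feature is compiled out. All names here are invented.
 */
#include <stdio.h>

/* #define CONFIG_DEMO_XEN 1 */		/* toggle and rebuild to compare sizes */

struct demo_xen_ctx {
	unsigned long long hypercall_rip;
	unsigned long long timer_expires;
	unsigned char pad[680];		/* stand-in for the remaining Xen state */
};

struct demo_vcpu_arch {
	unsigned long regs[16];
#ifdef CONFIG_DEMO_XEN
	struct demo_xen_ctx xen;	/* embedded by value, so it always costs sizeof(demo_xen_ctx) */
#endif
	unsigned long last_retry_eip;
};

int main(void)
{
	/* With CONFIG_DEMO_XEN defined the struct grows by sizeof(struct demo_xen_ctx). */
	printf("sizeof(struct demo_vcpu_arch) = %zu\n", sizeof(struct demo_vcpu_arch));
	return 0;
}

Building this once with the define commented out and once with it enabled shows, in
miniature, the same kind of drop the commit message reports for the real structure
(5100+ down to 4400+ bytes when CONFIG_KVM_XEN=n).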