Message ID: 20240419112952.15598-5-wei.w.wang@intel.com
State:      New, archived
Series:     KVM/x86: Enhancements to static calls
On Fri, Apr 19, 2024, Wei Wang wrote:
> KVM_X86_OP and KVM_X86_OP_OPTIONAL were utilized to define and execute
> static_call_update() calls on mandatory and optional hooks, respectively.
> Mandatory hooks were invoked via static_call() and necessitated definition
> due to the presumption that an undefined hook (i.e., NULL) would cause
> static_call() to fail. This assumption no longer holds true as
> static_call() has been updated to treat a "NULL" hook as a NOP on x86.
> Consequently, the so-called mandatory hooks are no longer required to be
> defined, rendering them non-mandatory.

This is wrong. They absolutely are mandatory. The fact that static_call()
doesn't blow up doesn't make them optional. If a vendor neglects to implement
a mandatory hook, KVM *will* break, just not immediately on the static_call().

The static_call() behavior is actually unfortunate, as KVM at least would
prefer that it does explode on a NULL pointer. I.e. better to crash the kernel
(hopefully before getting to production) than to have a lurking bug just
waiting to cause problems.

> This eliminates the need to differentiate between mandatory and optional
> hooks, allowing a single KVM_X86_OP to suffice.
>
> So KVM_X86_OP_OPTIONAL and the WARN_ON() associated with KVM_X86_OP are
> removed to simplify usage,

Just in case it isn't clear, I am very strongly opposed to removing
KVM_X86_OP_OPTIONAL() and the WARN_ON() protection to ensure mandatory ops
are implemented.
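To make the mandatory/optional distinction concrete, here is a minimal
userspace sketch of the scheme being defended above. Plain function pointers
and illustrative hook names stand in for KVM's actual static-call machinery;
only the shape of the check mirrors the WARN_ON() that KVM_X86_OP emits in
kvm_ops_update().

/*
 * Userspace sketch: a mandatory hook that a vendor forgot to set is flagged
 * when the ops are registered, instead of silently becoming a NOP at every
 * call site. Hook names are illustrative, not KVM's real layout.
 */
#include <stdio.h>

struct vendor_ops {
	void (*vcpu_create)(void);	/* mandatory: every vendor must set it */
	void (*vm_destroy)(void);	/* optional: may legitimately be NULL  */
};

/* "kvm-x86-ops.h"-style list: the user defines OP()/OP_OPTIONAL() first. */
#define VENDOR_OPS		\
	OP(vcpu_create)		\
	OP_OPTIONAL(vm_destroy)

static struct vendor_ops live_ops;

static void ops_update(const struct vendor_ops *ops)
{
	live_ops = *ops;

	/* Mandatory hooks get a sanity check; optional hooks do not. */
#define OP(func)							\
	do {								\
		if (!live_ops.func)					\
			fprintf(stderr, "WARN: mandatory hook %s is NULL\n", #func); \
	} while (0);
#define OP_OPTIONAL(func)
	VENDOR_OPS
#undef OP
#undef OP_OPTIONAL
}

static void svm_like_vcpu_create(void)
{
	puts("vendor vcpu_create()");
}

int main(void)
{
	/* A vendor that implements vcpu_create() but, legitimately, no vm_destroy(). */
	struct vendor_ops svm_like = { .vcpu_create = svm_like_vcpu_create };

	ops_update(&svm_like);
	if (live_ops.vcpu_create)
		live_ops.vcpu_create();
	return 0;
}

If the vendor had also left vcpu_create() NULL, the warning would fire at
registration time, which is exactly the early failure the WARN_ON() is meant
to provide.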
On Friday, April 19, 2024 9:42 PM, Sean Christopherson wrote:
> On Fri, Apr 19, 2024, Wei Wang wrote:
> > KVM_X86_OP and KVM_X86_OP_OPTIONAL were utilized to define and execute
> > static_call_update() calls on mandatory and optional hooks, respectively.
> > Mandatory hooks were invoked via static_call() and necessitated definition
> > due to the presumption that an undefined hook (i.e., NULL) would cause
> > static_call() to fail. This assumption no longer holds true as
> > static_call() has been updated to treat a "NULL" hook as a NOP on x86.
> > Consequently, the so-called mandatory hooks are no longer required to be
> > defined, rendering them non-mandatory.
>
> This is wrong. They absolutely are mandatory. The fact that static_call()
> doesn't blow up doesn't make them optional. If a vendor neglects to implement
> a mandatory hook, KVM *will* break, just not immediately on the static_call().
>
> The static_call() behavior is actually unfortunate, as KVM at least would
> prefer that it does explode on a NULL pointer. I.e. better to crash the kernel
> (hopefully before getting to production) than to have a lurking bug just
> waiting to cause problems.
>
> > This eliminates the need to differentiate between mandatory and optional
> > hooks, allowing a single KVM_X86_OP to suffice.
> >
> > So KVM_X86_OP_OPTIONAL and the WARN_ON() associated with KVM_X86_OP are
> > removed to simplify usage,
>
> Just in case it isn't clear, I am very strongly opposed to removing
> KVM_X86_OP_OPTIONAL() and the WARN_ON() protection to ensure mandatory ops
> are implemented.

OK, we can drop patch 4 and 5.

Btw, may I know what the boundary is between mandatory and optional hooks?
For example, when adding a new hook, what criteria should we use to determine
whether it's mandatory, thereby requiring both SVM and VMX to implement it
(and it seems they would need to be merged together?)
(I searched a bit, but didn't find it)
On Fri, Apr 19, 2024, Wei W Wang wrote:
> On Friday, April 19, 2024 9:42 PM, Sean Christopherson wrote:
> > On Fri, Apr 19, 2024, Wei Wang wrote:
> > > KVM_X86_OP and KVM_X86_OP_OPTIONAL were utilized to define and execute
> > > static_call_update() calls on mandatory and optional hooks, respectively.
> > > Mandatory hooks were invoked via static_call() and necessitated definition
> > > due to the presumption that an undefined hook (i.e., NULL) would cause
> > > static_call() to fail. This assumption no longer holds true as
> > > static_call() has been updated to treat a "NULL" hook as a NOP on x86.
> > > Consequently, the so-called mandatory hooks are no longer required to be
> > > defined, rendering them non-mandatory.
> >
> > This is wrong. They absolutely are mandatory. The fact that static_call()
> > doesn't blow up doesn't make them optional. If a vendor neglects to implement
> > a mandatory hook, KVM *will* break, just not immediately on the static_call().
> >
> > The static_call() behavior is actually unfortunate, as KVM at least would
> > prefer that it does explode on a NULL pointer. I.e. better to crash the kernel
> > (hopefully before getting to production) than to have a lurking bug just
> > waiting to cause problems.
> >
> > > This eliminates the need to differentiate between mandatory and optional
> > > hooks, allowing a single KVM_X86_OP to suffice.
> > >
> > > So KVM_X86_OP_OPTIONAL and the WARN_ON() associated with KVM_X86_OP are
> > > removed to simplify usage,
> >
> > Just in case it isn't clear, I am very strongly opposed to removing
> > KVM_X86_OP_OPTIONAL() and the WARN_ON() protection to ensure mandatory ops
> > are implemented.
>
> OK, we can drop patch 4 and 5.
>
> Btw, may I know what the boundary is between mandatory and optional hooks?
> For example, when adding a new hook, what criteria should we use to determine
> whether it's mandatory, thereby requiring both SVM and VMX to implement it
> (and it seems they would need to be merged together?)
> (I searched a bit, but didn't find it)

It's a fairly simple rule: is the hook required for functional correctness, at
all times?

E.g. post_set_cr3() is unique to SEV-ES+ guests, and so it's optional for both
VMX and SVM (because SEV-ES might not be enabled).

All of the APICv related hooks are optional, because APICv support isn't
guaranteed.

set_tss_addr() and set_identity_map_addr() are unique to old Intel hardware.

The mem_enc ops are unique to SEV+ (and at some point TDX), which again isn't
guaranteed to be supported and enabled.

For something like vcpu_precreate(), it's an arbitrary judgment call: is it
cleaner to make the hook optional, or to have SVM implement a nop? Thankfully,
there are very few of these.

Heh, vm_destroy() should be non-optional, we should clean that up.
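For the RET0 flavor mentioned above (guest_apic_has_interrupt(),
set_tss_addr(), etc.), the same idea can be sketched in userspace: when the
vendor leaves the hook NULL, the updater wires in a default that returns 0,
which is what __static_call_return0 provides in the kernel. As before, plain
function pointers stand in for static calls and the names are illustrative.

/*
 * Userspace sketch of the "_RET0" pattern: an optional hook left NULL falls
 * back to a return-0 stub, so call sites never need a NULL check.
 */
#include <stdio.h>

struct vendor_ops {
	/* Optional, e.g. only meaningful when APICv-style support exists. */
	int (*guest_apic_has_interrupt)(void);
};

static int (*call_guest_apic_has_interrupt)(void);

static int return0(void)	/* stand-in for __static_call_return0 */
{
	return 0;
}

static void ops_update(const struct vendor_ops *ops)
{
	/* KVM_X86_OP_RET0-style update: fall back to return0 when NULL. */
	call_guest_apic_has_interrupt = ops->guest_apic_has_interrupt ?
					ops->guest_apic_has_interrupt : return0;
}

int main(void)
{
	struct vendor_ops vendor_without_hook = { 0 };

	ops_update(&vendor_without_hook);
	/* The call site stays unconditional and simply sees "no interrupt". */
	printf("guest_apic_has_interrupt() -> %d\n",
	       call_guest_apic_has_interrupt());
	return 0;
}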
On Friday, April 19, 2024 11:58 PM, Sean Christopherson wrote:
> On Fri, Apr 19, 2024, Wei W Wang wrote:
> > On Friday, April 19, 2024 9:42 PM, Sean Christopherson wrote:
> > > On Fri, Apr 19, 2024, Wei Wang wrote:
> > > > KVM_X86_OP and KVM_X86_OP_OPTIONAL were utilized to define and execute
> > > > static_call_update() calls on mandatory and optional hooks, respectively.
> > > > Mandatory hooks were invoked via static_call() and necessitated definition
> > > > due to the presumption that an undefined hook (i.e., NULL) would cause
> > > > static_call() to fail. This assumption no longer holds true as
> > > > static_call() has been updated to treat a "NULL" hook as a NOP on x86.
> > > > Consequently, the so-called mandatory hooks are no longer required to be
> > > > defined, rendering them non-mandatory.
> > >
> > > This is wrong. They absolutely are mandatory. The fact that static_call()
> > > doesn't blow up doesn't make them optional. If a vendor neglects to implement
> > > a mandatory hook, KVM *will* break, just not immediately on the static_call().
> > >
> > > The static_call() behavior is actually unfortunate, as KVM at least would
> > > prefer that it does explode on a NULL pointer. I.e. better to crash the kernel
> > > (hopefully before getting to production) than to have a lurking bug just
> > > waiting to cause problems.
> > >
> > > > This eliminates the need to differentiate between mandatory and optional
> > > > hooks, allowing a single KVM_X86_OP to suffice.
> > > >
> > > > So KVM_X86_OP_OPTIONAL and the WARN_ON() associated with KVM_X86_OP are
> > > > removed to simplify usage,
> > >
> > > Just in case it isn't clear, I am very strongly opposed to removing
> > > KVM_X86_OP_OPTIONAL() and the WARN_ON() protection to ensure mandatory ops
> > > are implemented.
> >
> > OK, we can drop patch 4 and 5.
> >
> > Btw, may I know what the boundary is between mandatory and optional hooks?
> > For example, when adding a new hook, what criteria should we use to determine
> > whether it's mandatory, thereby requiring both SVM and VMX to implement it
> > (and it seems they would need to be merged together?)
> > (I searched a bit, but didn't find it)
>
> It's a fairly simple rule: is the hook required for functional correctness, at
> all times?
>
> E.g. post_set_cr3() is unique to SEV-ES+ guests, and so it's optional for both
> VMX and SVM (because SEV-ES might not be enabled).
>
> All of the APICv related hooks are optional, because APICv support isn't
> guaranteed.
>
> set_tss_addr() and set_identity_map_addr() are unique to old Intel hardware.
>
> The mem_enc ops are unique to SEV+ (and at some point TDX), which again isn't
> guaranteed to be supported and enabled.
>
> For something like vcpu_precreate(), it's an arbitrary judgment call: is it
> cleaner to make the hook optional, or to have SVM implement a nop? Thankfully,
> there are very few of these.
>
> Heh, vm_destroy() should be non-optional, we should clean that up.

I think determining whether a hook is optional is easy, but classifying a hook
as mandatory might be challenging due to the multiple options available to
achieve functional correctness. Take the vm_destroy() example you mentioned:
it could be debatable to say it's mandatory, e.g. the VMX code could be
adjusted by incorporating vmx_vm_destroy() into the vcpu_free() hook, invoking
it upon the first vcpu to be freed. It could be even harder, at the time the
first user (e.g. SVM) adds the hook, to classify vm_destroy() as mandatory.
(not trying to argue for anything, just want to gain a comprehensive understanding of the rules)
diff --git a/arch/x86/include/asm/kvm-x86-ops.h b/arch/x86/include/asm/kvm-x86-ops.h
index 110d7f29ca9a..306c9820e373 100644
--- a/arch/x86/include/asm/kvm-x86-ops.h
+++ b/arch/x86/include/asm/kvm-x86-ops.h
@@ -1,18 +1,15 @@
 /* SPDX-License-Identifier: GPL-2.0 */
-#if !defined(KVM_X86_OP) || !defined(KVM_X86_OP_OPTIONAL)
+#if !defined(KVM_X86_OP)
 BUILD_BUG_ON(1)
 #endif
 
 /*
- * KVM_X86_OP() and KVM_X86_OP_OPTIONAL() are used to help generate
- * both DECLARE/DEFINE_STATIC_CALL() invocations and
- * "static_call_update()" calls.
- *
- * KVM_X86_OP_OPTIONAL() can be used for those functions that can have
- * a NULL definition, for example if "static_call_cond()" will be used
- * at the call sites.  KVM_X86_OP_OPTIONAL_RET0() can be used likewise
- * to make a definition optional, but in this case the default will
- * be __static_call_return0.
+ * KVM_X86_OP() is used to help generate both DECLARE/DEFINE_STATIC_CALL()
+ * invocations and static_call_update() calls. All the hooks defined can be
+ * optional now, as invocation to an undefined hook (i.e.) has now been treated
+ * as a NOP by static_call().
+ * KVM_X86_OP_RET0() can be used to make an undefined operation be
+ * __static_call_return0.
  */
 KVM_X86_OP(check_processor_compatibility)
 KVM_X86_OP(hardware_enable)
@@ -21,8 +18,8 @@ KVM_X86_OP(hardware_unsetup)
 KVM_X86_OP(has_emulated_msr)
 KVM_X86_OP(vcpu_after_set_cpuid)
 KVM_X86_OP(vm_init)
-KVM_X86_OP_OPTIONAL(vm_destroy)
-KVM_X86_OP_OPTIONAL_RET0(vcpu_precreate)
+KVM_X86_OP(vm_destroy)
+KVM_X86_OP_RET0(vcpu_precreate)
 KVM_X86_OP(vcpu_create)
 KVM_X86_OP(vcpu_free)
 KVM_X86_OP(vcpu_reset)
@@ -39,7 +36,7 @@ KVM_X86_OP(set_segment)
 KVM_X86_OP(get_cs_db_l_bits)
 KVM_X86_OP(is_valid_cr0)
 KVM_X86_OP(set_cr0)
-KVM_X86_OP_OPTIONAL(post_set_cr3)
+KVM_X86_OP(post_set_cr3)
 KVM_X86_OP(is_valid_cr4)
 KVM_X86_OP(set_cr4)
 KVM_X86_OP(set_efer)
@@ -56,8 +53,8 @@ KVM_X86_OP(get_if_flag)
 KVM_X86_OP(flush_tlb_all)
 KVM_X86_OP(flush_tlb_current)
 #if IS_ENABLED(CONFIG_HYPERV)
-KVM_X86_OP_OPTIONAL(flush_remote_tlbs)
-KVM_X86_OP_OPTIONAL(flush_remote_tlbs_range)
+KVM_X86_OP(flush_remote_tlbs)
+KVM_X86_OP(flush_remote_tlbs_range)
 #endif
 KVM_X86_OP(flush_tlb_gva)
 KVM_X86_OP(flush_tlb_guest)
@@ -65,14 +62,14 @@ KVM_X86_OP(vcpu_pre_run)
 KVM_X86_OP(vcpu_run)
 KVM_X86_OP(handle_exit)
 KVM_X86_OP(skip_emulated_instruction)
-KVM_X86_OP_OPTIONAL(update_emulated_instruction)
+KVM_X86_OP(update_emulated_instruction)
 KVM_X86_OP(set_interrupt_shadow)
 KVM_X86_OP(get_interrupt_shadow)
 KVM_X86_OP(patch_hypercall)
 KVM_X86_OP(inject_irq)
 KVM_X86_OP(inject_nmi)
-KVM_X86_OP_OPTIONAL_RET0(is_vnmi_pending)
-KVM_X86_OP_OPTIONAL_RET0(set_vnmi_pending)
+KVM_X86_OP_RET0(is_vnmi_pending)
+KVM_X86_OP_RET0(set_vnmi_pending)
 KVM_X86_OP(inject_exception)
 KVM_X86_OP(cancel_injection)
 KVM_X86_OP(interrupt_allowed)
@@ -81,19 +78,19 @@ KVM_X86_OP(get_nmi_mask)
 KVM_X86_OP(set_nmi_mask)
 KVM_X86_OP(enable_nmi_window)
 KVM_X86_OP(enable_irq_window)
-KVM_X86_OP_OPTIONAL(update_cr8_intercept)
+KVM_X86_OP(update_cr8_intercept)
 KVM_X86_OP(refresh_apicv_exec_ctrl)
-KVM_X86_OP_OPTIONAL(hwapic_irr_update)
-KVM_X86_OP_OPTIONAL(hwapic_isr_update)
-KVM_X86_OP_OPTIONAL_RET0(guest_apic_has_interrupt)
-KVM_X86_OP_OPTIONAL(load_eoi_exitmap)
-KVM_X86_OP_OPTIONAL(set_virtual_apic_mode)
-KVM_X86_OP_OPTIONAL(set_apic_access_page_addr)
+KVM_X86_OP(hwapic_irr_update)
+KVM_X86_OP(hwapic_isr_update)
+KVM_X86_OP_RET0(guest_apic_has_interrupt)
+KVM_X86_OP(load_eoi_exitmap)
+KVM_X86_OP(set_virtual_apic_mode)
+KVM_X86_OP(set_apic_access_page_addr)
 KVM_X86_OP(deliver_interrupt)
-KVM_X86_OP_OPTIONAL(sync_pir_to_irr)
-KVM_X86_OP_OPTIONAL_RET0(set_tss_addr)
-KVM_X86_OP_OPTIONAL_RET0(set_identity_map_addr)
-KVM_X86_OP_OPTIONAL_RET0(get_mt_mask)
+KVM_X86_OP(sync_pir_to_irr)
+KVM_X86_OP_RET0(set_tss_addr)
+KVM_X86_OP_RET0(set_identity_map_addr)
+KVM_X86_OP_RET0(get_mt_mask)
 KVM_X86_OP(load_mmu_pgd)
 KVM_X86_OP(has_wbinvd_exit)
 KVM_X86_OP(get_l2_tsc_offset)
@@ -104,16 +101,16 @@ KVM_X86_OP(get_exit_info)
 KVM_X86_OP(check_intercept)
 KVM_X86_OP(handle_exit_irqoff)
 KVM_X86_OP(sched_in)
-KVM_X86_OP_OPTIONAL(update_cpu_dirty_logging)
-KVM_X86_OP_OPTIONAL(vcpu_blocking)
-KVM_X86_OP_OPTIONAL(vcpu_unblocking)
-KVM_X86_OP_OPTIONAL(pi_update_irte)
-KVM_X86_OP_OPTIONAL(pi_start_assignment)
-KVM_X86_OP_OPTIONAL(apicv_pre_state_restore)
-KVM_X86_OP_OPTIONAL(apicv_post_state_restore)
-KVM_X86_OP_OPTIONAL_RET0(dy_apicv_has_pending_interrupt)
-KVM_X86_OP_OPTIONAL(set_hv_timer)
-KVM_X86_OP_OPTIONAL(cancel_hv_timer)
+KVM_X86_OP(update_cpu_dirty_logging)
+KVM_X86_OP(vcpu_blocking)
+KVM_X86_OP(vcpu_unblocking)
+KVM_X86_OP(pi_update_irte)
+KVM_X86_OP(pi_start_assignment)
+KVM_X86_OP(apicv_pre_state_restore)
+KVM_X86_OP(apicv_post_state_restore)
+KVM_X86_OP_RET0(dy_apicv_has_pending_interrupt)
+KVM_X86_OP(set_hv_timer)
+KVM_X86_OP(cancel_hv_timer)
 KVM_X86_OP(setup_mce)
 #ifdef CONFIG_KVM_SMM
 KVM_X86_OP(smi_allowed)
@@ -121,24 +118,23 @@ KVM_X86_OP(enter_smm)
 KVM_X86_OP(leave_smm)
 KVM_X86_OP(enable_smi_window)
 #endif
-KVM_X86_OP_OPTIONAL(mem_enc_ioctl)
-KVM_X86_OP_OPTIONAL(mem_enc_register_region)
-KVM_X86_OP_OPTIONAL(mem_enc_unregister_region)
-KVM_X86_OP_OPTIONAL(vm_copy_enc_context_from)
-KVM_X86_OP_OPTIONAL(vm_move_enc_context_from)
-KVM_X86_OP_OPTIONAL(guest_memory_reclaimed)
+KVM_X86_OP(mem_enc_ioctl)
+KVM_X86_OP(mem_enc_register_region)
+KVM_X86_OP(mem_enc_unregister_region)
+KVM_X86_OP(vm_copy_enc_context_from)
+KVM_X86_OP(vm_move_enc_context_from)
+KVM_X86_OP(guest_memory_reclaimed)
 KVM_X86_OP(get_msr_feature)
 KVM_X86_OP(check_emulate_instruction)
 KVM_X86_OP(apic_init_signal_blocked)
-KVM_X86_OP_OPTIONAL(enable_l2_tlb_flush)
-KVM_X86_OP_OPTIONAL(migrate_timers)
+KVM_X86_OP(enable_l2_tlb_flush)
+KVM_X86_OP(migrate_timers)
 KVM_X86_OP(msr_filter_changed)
 KVM_X86_OP(complete_emulated_msr)
 KVM_X86_OP(vcpu_deliver_sipi_vector)
-KVM_X86_OP_OPTIONAL_RET0(vcpu_get_apicv_inhibit_reasons);
-KVM_X86_OP_OPTIONAL(get_untagged_addr)
-KVM_X86_OP_OPTIONAL(alloc_apic_backing_page)
+KVM_X86_OP_RET0(vcpu_get_apicv_inhibit_reasons);
+KVM_X86_OP(get_untagged_addr)
+KVM_X86_OP(alloc_apic_backing_page)
 
 #undef KVM_X86_OP
-#undef KVM_X86_OP_OPTIONAL
-#undef KVM_X86_OP_OPTIONAL_RET0
+#undef KVM_X86_OP_RET0
diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 3a2e42bb6969..647e7ec7c381 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -1857,8 +1857,7 @@ extern struct kvm_x86_ops kvm_x86_ops;
 
 #define KVM_X86_OP(func) \
 	DECLARE_STATIC_CALL(kvm_x86_##func, *(((struct kvm_x86_ops *)0)->func));
-#define KVM_X86_OP_OPTIONAL KVM_X86_OP
-#define KVM_X86_OP_OPTIONAL_RET0 KVM_X86_OP
+#define KVM_X86_OP_RET0 KVM_X86_OP
 #include <asm/kvm-x86-ops.h>
 
 int kvm_x86_vendor_init(struct kvm_x86_init_ops *ops);
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 2467e053cb35..58c13bf68057 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -138,8 +138,7 @@ struct kvm_x86_ops kvm_x86_ops __read_mostly;
 #define KVM_X86_OP(func)					     \
 	DEFINE_STATIC_CALL_NULL(kvm_x86_##func,			     \
 				*(((struct kvm_x86_ops *)0)->func));
-#define KVM_X86_OP_OPTIONAL KVM_X86_OP
-#define KVM_X86_OP_OPTIONAL_RET0 KVM_X86_OP
+#define KVM_X86_OP_RET0 KVM_X86_OP
 #include <asm/kvm-x86-ops.h>
 EXPORT_STATIC_CALL_GPL(kvm_x86_get_cs_db_l_bits);
 EXPORT_STATIC_CALL_GPL(kvm_x86_cache_reg);
@@ -9656,16 +9655,12 @@ static inline void kvm_ops_update(struct kvm_x86_init_ops *ops)
 {
 	memcpy(&kvm_x86_ops, ops->runtime_ops, sizeof(kvm_x86_ops));
 
-#define __KVM_X86_OP(func) \
-	static_call_update(kvm_x86_##func, kvm_x86_ops.func);
 #define KVM_X86_OP(func) \
-	WARN_ON(!kvm_x86_ops.func); __KVM_X86_OP(func)
-#define KVM_X86_OP_OPTIONAL __KVM_X86_OP
-#define KVM_X86_OP_OPTIONAL_RET0(func) \
+	static_call_update(kvm_x86_##func, kvm_x86_ops.func);
+#define KVM_X86_OP_RET0(func) \
 	static_call_update(kvm_x86_##func, (void *)kvm_x86_ops.func ? : \
 			   (void *)__static_call_return0);
 #include <asm/kvm-x86-ops.h>
-#undef __KVM_X86_OP
 
 	kvm_pmu_ops_update(ops->pmu_ops);
 }
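To make the X-macro mechanism in the hunks above concrete, this is roughly
what two representative entries expand to after preprocessing (hand-expanded
for illustration only; not meant to build on its own):

/* KVM_X86_OP(vm_destroy), with the kvm_host.h definition above, becomes: */
DECLARE_STATIC_CALL(kvm_x86_vm_destroy,
		    *(((struct kvm_x86_ops *)0)->vm_destroy));

/* ...and, with the kvm_ops_update() definition, becomes: */
static_call_update(kvm_x86_vm_destroy, kvm_x86_ops.vm_destroy);

/* KVM_X86_OP_RET0(vcpu_precreate) falls back to a return-0 stub when the
 * vendor leaves the hook NULL: */
static_call_update(kvm_x86_vcpu_precreate,
		   (void *)kvm_x86_ops.vcpu_precreate ? :
		   (void *)__static_call_return0);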
KVM_X86_OP and KVM_X86_OP_OPTIONAL were utilized to define and execute
static_call_update() calls on mandatory and optional hooks, respectively.
Mandatory hooks were invoked via static_call() and necessitated definition
due to the presumption that an undefined hook (i.e., NULL) would cause
static_call() to fail. This assumption no longer holds true as
static_call() has been updated to treat a "NULL" hook as a NOP on x86.
Consequently, the so-called mandatory hooks are no longer required to be
defined, rendering them non-mandatory.

This eliminates the need to differentiate between mandatory and optional
hooks, allowing a single KVM_X86_OP to suffice.

So KVM_X86_OP_OPTIONAL and the WARN_ON() associated with KVM_X86_OP are
removed to simplify usage, and KVM_X86_OP_OPTIONAL_RET0 is renamed to
KVM_X86_OP_RET0, as the term "optional" is now redundant (every hook can
be optional).

Signed-off-by: Wei Wang <wei.w.wang@intel.com>
---
 arch/x86/include/asm/kvm-x86-ops.h | 100 ++++++++++++++---------------
 arch/x86/include/asm/kvm_host.h    |   3 +-
 arch/x86/kvm/x86.c                 |  11 +---
 3 files changed, 52 insertions(+), 62 deletions(-)
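For context on what the call sites generated from this list look like
(illustrative, based on the static_call() API discussed in the thread; the
specific hooks shown are examples and not part of this patch):

/* A mandatory hook is invoked unconditionally through its static call: */
static_call(kvm_x86_vcpu_reset)(vcpu, init_event);

/* Optional void hooks have historically gone through static_call_cond(),
 * which, much like the NULL-as-NOP behavior debated above, does nothing
 * when the vendor did not provide an implementation: */
static_call_cond(kvm_x86_update_cpu_dirty_logging)(vcpu);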