Message ID | 20211113012234.1443009-3-rananta@google.com (mailing list archive) |
---|---|
State | New, archived |
Series | KVM: arm64: Add support for hypercall services selection |
On Sat, 13 Nov 2021 01:22:25 +0000,
Raghavendra Rao Ananta <rananta@google.com> wrote:
>
> Architectures such as arm64 and riscv use vcpu variables
> such as has_run_once and ran_atleast_once, respectively,
> to mark if the vCPU has started running. Since these are
> architecture-agnostic variables, introduce
> kvm_vcpu_has_run_once() as a core kvm functionality and
> use this instead of the architecture-defined variables.
>
> No functional change intended.
>
> Signed-off-by: Raghavendra Rao Ananta <rananta@google.com>

arm64 is moving away from this, see [1]. You also don't need any new
state, as vcpu->pid gives you exactly what you need.

Happy to queue additional patches on top if you want to deal with
riscv.

Thanks,

	M.

[1] https://lore.kernel.org/all/20211018211158.3050779-1-maz@kernel.org/
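For reference, the vcpu->pid-based check Marc is pointing at needs no new per-vCPU state. The helper below is a minimal sketch under that assumption (the name vcpu_has_run_once() and its exact form in [1] are not quoted here); it relies on vcpu->pid only being assigned once a thread issues KVM_RUN for the vCPU.

#include <linux/kvm_host.h>
#include <linux/rcupdate.h>

/*
 * Minimal sketch (assumption, not necessarily the exact helper in [1]):
 * vcpu->pid is a struct pid __rcu * that starts out NULL and is only
 * assigned in kvm_vcpu_ioctl() when a thread first issues KVM_RUN, so a
 * non-NULL pid already means the vCPU has entered its run loop at least
 * once.
 */
static inline bool vcpu_has_run_once(struct kvm_vcpu *vcpu)
{
	return !!rcu_access_pointer(vcpu->pid);
}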
Hello Marc,

On Mon, Nov 22, 2021 at 8:27 AM Marc Zyngier <maz@kernel.org> wrote:
>
> On Sat, 13 Nov 2021 01:22:25 +0000,
> Raghavendra Rao Ananta <rananta@google.com> wrote:
> >
> > Architectures such as arm64 and riscv use vcpu variables
> > such as has_run_once and ran_atleast_once, respectively,
> > to mark if the vCPU has started running. Since these are
> > architecture-agnostic variables, introduce
> > kvm_vcpu_has_run_once() as a core kvm functionality and
> > use this instead of the architecture-defined variables.
> >
> > No functional change intended.
> >
> > Signed-off-by: Raghavendra Rao Ananta <rananta@google.com>
>
> arm64 is moving away from this, see [1]. You also don't need any new
> state, as vcpu->pid gives you exactly what you need.
Thanks for the pointer. I can directly use this!

> Happy to queue additional patches on top if you want to deal with
> riscv.
>
Just to clarify, do you mean getting the KVM support for riscv onto
kvmarm-next? If yes, then sure, I can make the changes in riscv to use
vcpu_has_run_once() from your series.

Regards,
Raghavendra

> Thanks,
>
> M.
>
> [1] https://lore.kernel.org/all/20211018211158.3050779-1-maz@kernel.org/
>
> --
> Without deviation from the norm, progress is not possible.
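To illustrate the riscv conversion being discussed, the ISA config register write could be gated on a pid-based helper instead of ran_atleast_once. The snippet below is a simplified sketch reusing the vcpu_has_run_once() sketch above; the function name, the -EBUSY return, and the reduced masking are illustrative, not the actual code of either series.

#include <linux/errno.h>
#include <linux/kvm_host.h>	/* on riscv, pulls in vcpu->arch.isa and KVM_RISCV_ISA_ALLOWED */

/*
 * Simplified, illustrative sketch: the guest-visible ISA must not change
 * once the vCPU has run, so the write is refused after the first KVM_RUN.
 */
static int example_riscv_set_isa(struct kvm_vcpu *vcpu, unsigned long reg_val)
{
	if (vcpu_has_run_once(vcpu))
		return -EBUSY;	/* error code chosen for illustration */

	vcpu->arch.isa = reg_val & KVM_RISCV_ISA_ALLOWED;
	return 0;
}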
diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index 4be8486042a7..02dffe50a20c 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -367,9 +367,6 @@ struct kvm_vcpu_arch {
 	int target;
 	DECLARE_BITMAP(features, KVM_VCPU_MAX_FEATURES);
 
-	/* Detect first run of a vcpu */
-	bool has_run_once;
-
 	/* Virtual SError ESR to restore when HCR_EL2.VSE is set */
 	u64 vsesr_el2;
 
diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
index f5490afe1ebf..0cc148211b4e 100644
--- a/arch/arm64/kvm/arm.c
+++ b/arch/arm64/kvm/arm.c
@@ -344,7 +344,7 @@ void kvm_arch_vcpu_postcreate(struct kvm_vcpu *vcpu)
 
 void kvm_arch_vcpu_destroy(struct kvm_vcpu *vcpu)
 {
-	if (vcpu->arch.has_run_once && unlikely(!irqchip_in_kernel(vcpu->kvm)))
+	if (kvm_vcpu_has_run_once(vcpu) && unlikely(!irqchip_in_kernel(vcpu->kvm)))
 		static_branch_dec(&userspace_irqchip_in_use);
 
 	kvm_mmu_free_memory_cache(&vcpu->arch.mmu_page_cache);
@@ -582,13 +582,13 @@ static int kvm_vcpu_first_run_init(struct kvm_vcpu *vcpu)
 	struct kvm *kvm = vcpu->kvm;
 	int ret = 0;
 
-	if (likely(vcpu->arch.has_run_once))
+	if (likely(kvm_vcpu_has_run_once(vcpu)))
 		return 0;
 
 	if (!kvm_arm_vcpu_is_finalized(vcpu))
 		return -EPERM;
 
-	vcpu->arch.has_run_once = true;
+	vcpu->has_run_once = true;
 
 	kvm_arm_vcpu_init_debug(vcpu);
 
@@ -1116,7 +1116,7 @@ static int kvm_arch_vcpu_ioctl_vcpu_init(struct kvm_vcpu *vcpu,
 	 * need to invalidate the I-cache though, as FWB does *not*
 	 * imply CTR_EL0.DIC.
 	 */
-	if (vcpu->arch.has_run_once) {
+	if (kvm_vcpu_has_run_once(vcpu)) {
 		if (!cpus_have_final_cap(ARM64_HAS_STAGE2_FWB))
 			stage2_unmap_vm(vcpu->kvm);
 		else
diff --git a/arch/arm64/kvm/vgic/vgic-init.c b/arch/arm64/kvm/vgic/vgic-init.c
index 0a06d0648970..6fb41097880b 100644
--- a/arch/arm64/kvm/vgic/vgic-init.c
+++ b/arch/arm64/kvm/vgic/vgic-init.c
@@ -91,7 +91,7 @@ int kvm_vgic_create(struct kvm *kvm, u32 type)
 		return ret;
 
 	kvm_for_each_vcpu(i, vcpu, kvm) {
-		if (vcpu->arch.has_run_once)
+		if (kvm_vcpu_has_run_once(vcpu))
 			goto out_unlock;
 	}
 	ret = 0;
diff --git a/arch/riscv/include/asm/kvm_host.h b/arch/riscv/include/asm/kvm_host.h
index 25ba21f98504..645e95f61d47 100644
--- a/arch/riscv/include/asm/kvm_host.h
+++ b/arch/riscv/include/asm/kvm_host.h
@@ -147,9 +147,6 @@ struct kvm_vcpu_csr {
 };
 
 struct kvm_vcpu_arch {
-	/* VCPU ran at least once */
-	bool ran_atleast_once;
-
 	/* ISA feature bits (similar to MISA) */
 	unsigned long isa;
 
diff --git a/arch/riscv/kvm/vcpu.c b/arch/riscv/kvm/vcpu.c
index e3d3aed46184..18cbc8b0c03d 100644
--- a/arch/riscv/kvm/vcpu.c
+++ b/arch/riscv/kvm/vcpu.c
@@ -75,9 +75,6 @@ int kvm_arch_vcpu_create(struct kvm_vcpu *vcpu)
 {
 	struct kvm_cpu_context *cntx;
 
-	/* Mark this VCPU never ran */
-	vcpu->arch.ran_atleast_once = false;
-
 	/* Setup ISA features available to VCPU */
 	vcpu->arch.isa = riscv_isa_extension_base(NULL) & KVM_RISCV_ISA_ALLOWED;
 
@@ -190,7 +187,7 @@ static int kvm_riscv_vcpu_set_reg_config(struct kvm_vcpu *vcpu,
 
 	switch (reg_num) {
 	case KVM_REG_RISCV_CONFIG_REG(isa):
-		if (!vcpu->arch.ran_atleast_once) {
+		if (!kvm_vcpu_has_run_once(vcpu)) {
 			vcpu->arch.isa = reg_val;
 			vcpu->arch.isa &= riscv_isa_extension_base(NULL);
 			vcpu->arch.isa &= KVM_RISCV_ISA_ALLOWED;
@@ -682,7 +679,7 @@ int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu)
 	struct kvm_run *run = vcpu->run;
 
 	/* Mark this VCPU ran at least once */
-	vcpu->arch.ran_atleast_once = true;
+	vcpu->has_run_once = true;
 
 	vcpu->arch.srcu_idx = srcu_read_lock(&vcpu->kvm->srcu);
 
diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index 60a35d9fe259..b373929c71eb 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -360,6 +360,8 @@ struct kvm_vcpu {
 	 * it is a valid slot.
 	 */
 	int last_used_slot;
+
+	bool has_run_once;
 };
 
 /* must be called with irqs disabled */
@@ -1847,4 +1849,9 @@ static inline void kvm_handle_signal_exit(struct kvm_vcpu *vcpu)
 /* Max number of entries allowed for each kvm dirty ring */
 #define KVM_DIRTY_RING_MAX_ENTRIES 65536
 
+static inline bool kvm_vcpu_has_run_once(struct kvm_vcpu *vcpu)
+{
+	return vcpu->has_run_once;
+}
+
 #endif
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index 3f6d450355f0..1ec8a8e959b2 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -433,6 +433,7 @@ static void kvm_vcpu_init(struct kvm_vcpu *vcpu, struct kvm *kvm, unsigned id)
 	vcpu->ready = false;
 	preempt_notifier_init(&vcpu->preempt_notifier, &kvm_preempt_ops);
 	vcpu->last_used_slot = 0;
+	vcpu->has_run_once = false;
 }
 
 void kvm_vcpu_destroy(struct kvm_vcpu *vcpu)
Architectures such as arm64 and riscv use vcpu variables
such as has_run_once and ran_atleast_once, respectively,
to mark if the vCPU has started running. Since these are
architecture-agnostic variables, introduce
kvm_vcpu_has_run_once() as a core kvm functionality and
use this instead of the architecture-defined variables.

No functional change intended.

Signed-off-by: Raghavendra Rao Ananta <rananta@google.com>
---
 arch/arm64/include/asm/kvm_host.h | 3 ---
 arch/arm64/kvm/arm.c              | 8 ++++----
 arch/arm64/kvm/vgic/vgic-init.c   | 2 +-
 arch/riscv/include/asm/kvm_host.h | 3 ---
 arch/riscv/kvm/vcpu.c             | 7 ++-----
 include/linux/kvm_host.h          | 7 +++++++
 virt/kvm/kvm_main.c               | 1 +
 7 files changed, 15 insertions(+), 16 deletions(-)
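To summarize the contract this patch establishes: kvm_vcpu_init() clears the generic flag, each architecture's run or first-run path sets it, and common or arch code can then test kvm_vcpu_has_run_once(). The consumer below is a hypothetical sketch (the function name is made up for illustration and is not part of the patch).

#include <linux/kvm_host.h>

/*
 * Hypothetical consumer of the generic flag added by this patch:
 * skip one-time setup once the vCPU has already run, otherwise perform
 * it and mark the vCPU as having started.
 */
static int example_arch_first_run_init(struct kvm_vcpu *vcpu)
{
	if (kvm_vcpu_has_run_once(vcpu))
		return 0;	/* one-time setup already done */

	/* ... per-vCPU first-run setup would go here ... */

	vcpu->has_run_once = true;
	return 0;
}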