Message ID: 1537032727-29397-3-git-send-email-gengdongjiu@huawei.com (mailing list archive)
State:      New, archived
Series:     add support for VCPU event states
Hi Peter/Shannon,

When you are free, I hope you can have a look at this patch series. Thanks a lot for your time in advance.

On 2018/9/16 1:32, Dongjiu Geng wrote:
> This patch extends the qemu-kvm state sync logic with support for
> KVM_GET/SET_VCPU_EVENTS, giving access to the yet-missing SError exception,
> and it also supports migration of that exception state.
>
> The SError exception state consists of the SError pending state and the ESR
> value; kvm_put_vcpu_events()/kvm_get_vcpu_events() are called when the system
> registers are set or read. On migration, if the source machine has an SError
> pending, QEMU migrates it regardless of whether the target machine supports
> specifying a guest ESR value, because if the target does not support that, it
> can still inject the SError with a zero ESR value.
>
> Signed-off-by: Dongjiu Geng <gengdongjiu@huawei.com>
> [...]
On Sat, Sep 15, 2018 at 01:32:07PM -0400, Dongjiu Geng wrote:
> This patch extends the qemu-kvm state sync logic with support for
> KVM_GET/SET_VCPU_EVENTS, giving access to the yet-missing SError exception,
> and it also supports migration of that exception state.
> [...]
> +static int kvm_put_vcpu_events(ARMCPU *cpu)
> +{
> +    CPUARMState *env = &cpu->env;
> +    struct kvm_vcpu_events events = {};
> +    int ret;
> +
> +    if (!kvm_has_vcpu_events()) {
> +        return 0;
> +    }
> +
> +    memset(&events, 0, sizeof(events));

nit: this memset is redundant with the '= {}' above. I'd probably keep it
and remove the '= {}' though.

> +    events.exception.serror_pending = env->serror.pending;
> [...]
> +static int kvm_get_vcpu_events(ARMCPU *cpu)
> +{
> +    CPUARMState *env = &cpu->env;
> +    struct kvm_vcpu_events events;
> +    int ret;
> +
> +    if (!kvm_has_vcpu_events()) {
> +        return 0;
> +    }
> +
> +    memset(&events, 0, sizeof(events));
> +    ret = kvm_vcpu_ioctl(CPU(cpu), KVM_GET_VCPU_EVENTS, &events);
> +

nit: no need for this blank line

> +    if (ret) {
> +        error_report("failed to get vcpu events");
> +        return ret;
> +    }
> [...]
> @@ -726,6 +747,7 @@ const VMStateDescription vmstate_arm_cpu = {
>  #ifdef TARGET_AARCH64
>      &vmstate_sve,
>  #endif
> +    &vmstate_serror,
>      NULL
>      }
>  };
> --
> 1.8.3.1
>

Hmm. kvm_vcpu_events is defined for both 32-bit and 64-bit, and the vmstate
isn't inside '#ifdef TARGET_AARCH64' like vmstate_sve, yet only the
target/arm/kvm64.c file is getting updated. Shouldn't we put
kvm_get/put_vcpu_events() in target/arm/kvm.c and then also update
target/arm/kvm32.c?

Thanks,
drew
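For illustration, here is a minimal sketch of what drew's suggestion might look like: the SError event sync moved into target/arm/kvm.c so that both kvm32.c and kvm64.c can share it, with the nit about the redundant '= {}' folded in. This is not part of the posted patch; the function name kvm_arm_init_serror_injection() and the variable cap_has_inject_serror_esr are hypothetical, and the usual target/arm/kvm.c includes and a declaration in kvm_arm.h are assumed.

```c
/* target/arm/kvm.c -- sketch only, shared by 32-bit and 64-bit hosts */

static bool cap_has_inject_serror_esr;      /* hypothetical, probed once per vCPU init */

void kvm_arm_init_serror_injection(CPUState *cs)
{
    cap_has_inject_serror_esr = kvm_check_extension(cs->kvm_state,
                                                    KVM_CAP_ARM_INJECT_SERROR_ESR);
}

int kvm_put_vcpu_events(ARMCPU *cpu)
{
    CPUARMState *env = &cpu->env;
    struct kvm_vcpu_events events;
    int ret;

    if (!kvm_has_vcpu_events()) {
        return 0;
    }

    /* Keep the memset and drop the '= {}' initializer, per the nit above. */
    memset(&events, 0, sizeof(events));
    events.exception.serror_pending = env->serror.pending;

    /* Only pass a syndrome if the host kernel can use it; otherwise the
     * kernel injects the SError with a zero ESR.
     */
    if (cap_has_inject_serror_esr) {
        events.exception.serror_has_esr = env->serror.has_esr;
        events.exception.serror_esr = env->serror.esr;
    }

    ret = kvm_vcpu_ioctl(CPU(cpu), KVM_SET_VCPU_EVENTS, &events);
    if (ret) {
        error_report("failed to put vcpu events");
    }
    return ret;
}

int kvm_get_vcpu_events(ARMCPU *cpu)
{
    CPUARMState *env = &cpu->env;
    struct kvm_vcpu_events events;
    int ret;

    if (!kvm_has_vcpu_events()) {
        return 0;
    }

    memset(&events, 0, sizeof(events));
    ret = kvm_vcpu_ioctl(CPU(cpu), KVM_GET_VCPU_EVENTS, &events);
    if (ret) {
        error_report("failed to get vcpu events");
        return ret;
    }

    env->serror.pending = events.exception.serror_pending;
    env->serror.has_esr = events.exception.serror_has_esr;
    env->serror.esr = events.exception.serror_esr;
    return 0;
}
```

With such a split, kvm_arch_init_vcpu() in both kvm32.c and kvm64.c would call kvm_arm_init_serror_injection() instead of probing the capability locally, and kvm_arch_put_registers()/kvm_arch_get_registers() in both files would call the shared helpers.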
diff --git a/target/arm/cpu.h b/target/arm/cpu.h
index 62c36b4..034b035 100644
--- a/target/arm/cpu.h
+++ b/target/arm/cpu.h
@@ -530,6 +530,13 @@ typedef struct CPUARMState {
          */
     } exception;
 
+    /* Information associated with an SError */
+    struct {
+        uint8_t pending;
+        uint8_t has_esr;
+        uint64_t esr;
+    } serror;
+
     /* Thumb-2 EE state. */
     uint32_t teecr;
     uint32_t teehbr;
diff --git a/target/arm/kvm64.c b/target/arm/kvm64.c
index e0b8246..e8705e2 100644
--- a/target/arm/kvm64.c
+++ b/target/arm/kvm64.c
@@ -29,6 +29,7 @@
 #include "hw/arm/arm.h"
 
 static bool have_guest_debug;
+static bool have_inject_serror_esr;
 
 /*
  * Although the ARM implementation of hardware assisted debugging
@@ -546,6 +547,10 @@ int kvm_arch_init_vcpu(CPUState *cs)
 
     kvm_arm_init_debug(cs);
 
+    /* Check whether userspace can specify guest syndrome value */
+    have_inject_serror_esr = kvm_check_extension(cs->kvm_state,
+                                                 KVM_CAP_ARM_INJECT_SERROR_ESR);
+
     return kvm_arm_init_cpreg_list(cpu);
 }
 
@@ -600,6 +605,60 @@ int kvm_arm_cpreg_level(uint64_t regidx)
 #define AARCH64_SIMD_CTRL_REG(x) (KVM_REG_ARM64 | KVM_REG_SIZE_U32 | \
                  KVM_REG_ARM_CORE | KVM_REG_ARM_CORE_REG(x))
 
+static int kvm_put_vcpu_events(ARMCPU *cpu)
+{
+    CPUARMState *env = &cpu->env;
+    struct kvm_vcpu_events events = {};
+    int ret;
+
+    if (!kvm_has_vcpu_events()) {
+        return 0;
+    }
+
+    memset(&events, 0, sizeof(events));
+    events.exception.serror_pending = env->serror.pending;
+
+    /* Inject SError to guest with specified syndrome if host kernel
+     * supports it, otherwise inject SError without syndrome.
+     */
+    if (have_inject_serror_esr) {
+        events.exception.serror_has_esr = env->serror.has_esr;
+        events.exception.serror_esr = env->serror.esr;
+    }
+
+    ret = kvm_vcpu_ioctl(CPU(cpu), KVM_SET_VCPU_EVENTS, &events);
+    if (ret) {
+        error_report("failed to put vcpu events");
+    }
+
+    return ret;
+}
+
+static int kvm_get_vcpu_events(ARMCPU *cpu)
+{
+    CPUARMState *env = &cpu->env;
+    struct kvm_vcpu_events events;
+    int ret;
+
+    if (!kvm_has_vcpu_events()) {
+        return 0;
+    }
+
+    memset(&events, 0, sizeof(events));
+    ret = kvm_vcpu_ioctl(CPU(cpu), KVM_GET_VCPU_EVENTS, &events);
+
+    if (ret) {
+        error_report("failed to get vcpu events");
+        return ret;
+    }
+
+    env->serror.pending = events.exception.serror_pending;
+    env->serror.has_esr = events.exception.serror_has_esr;
+    env->serror.esr = events.exception.serror_esr;
+
+    return 0;
+}
+
 int kvm_arch_put_registers(CPUState *cs, int level)
 {
     struct kvm_one_reg reg;
@@ -727,6 +786,11 @@ int kvm_arch_put_registers(CPUState *cs, int level)
         return ret;
     }
 
+    ret = kvm_put_vcpu_events(cpu);
+    if (ret) {
+        return ret;
+    }
+
     if (!write_list_to_kvmstate(cpu, level)) {
         return EINVAL;
     }
@@ -863,6 +927,11 @@ int kvm_arch_get_registers(CPUState *cs)
     }
     vfp_set_fpcr(env, fpr);
 
+    ret = kvm_get_vcpu_events(cpu);
+    if (ret) {
+        return ret;
+    }
+
     if (!write_kvmstate_to_list(cpu)) {
         return EINVAL;
     }
diff --git a/target/arm/machine.c b/target/arm/machine.c
index ff4ec22..32bcde0 100644
--- a/target/arm/machine.c
+++ b/target/arm/machine.c
@@ -172,6 +172,27 @@ static const VMStateDescription vmstate_sve = {
 };
 #endif /* AARCH64 */
 
+static bool serror_needed(void *opaque)
+{
+    ARMCPU *cpu = opaque;
+    CPUARMState *env = &cpu->env;
+
+    return env->serror.pending != 0;
+}
+
+static const VMStateDescription vmstate_serror = {
+    .name = "cpu/serror",
+    .version_id = 1,
+    .minimum_version_id = 1,
+    .needed = serror_needed,
+    .fields = (VMStateField[]) {
+        VMSTATE_UINT8(env.serror.pending, ARMCPU),
+        VMSTATE_UINT8(env.serror.has_esr, ARMCPU),
+        VMSTATE_UINT64(env.serror.esr, ARMCPU),
+        VMSTATE_END_OF_LIST()
+    }
+};
+
 static bool m_needed(void *opaque)
 {
     ARMCPU *cpu = opaque;
@@ -726,6 +747,7 @@ const VMStateDescription vmstate_arm_cpu = {
 #ifdef TARGET_AARCH64
     &vmstate_sve,
 #endif
+    &vmstate_serror,
     NULL
     }
 };
This patch extends the qemu-kvm state sync logic with support for
KVM_GET/SET_VCPU_EVENTS, giving access to the yet-missing SError exception,
and it also supports migration of that exception state.

The SError exception state consists of the SError pending state and the ESR
value; kvm_put_vcpu_events()/kvm_get_vcpu_events() are called when the system
registers are set or read. On migration, if the source machine has an SError
pending, QEMU migrates it regardless of whether the target machine supports
specifying a guest ESR value, because if the target does not support that, it
can still inject the SError with a zero ESR value.

Signed-off-by: Dongjiu Geng <gengdongjiu@huawei.com>
---
Change since v8:
1. Update the commit message

Change since v7:
1. Change "pending" and "has_esr" from uint32_t to uint8_t in CPUARMState
2. Add error_report() in kvm_get_vcpu_events()

Change since v6:
1. Add a cover letter
2. Rename "cpu/ras" to "cpu/serror"
3. Add some comments and check the ioctl return value in kvm_put_vcpu_events()

Change since v5 (address Peter's comments):
1. Move the serror struct before "end_reset_fields" in CPUARMState
2. Remove ARM_FEATURE_RAS_EXT and add a variable have_inject_serror_esr
3. Use the variable have_inject_serror_esr to track whether the kernel has state we need to migrate
4. Remove printf() in kvm_arch_put_registers()
5. Rename ras_needed/vmstate_ras to serror_needed/vmstate_serror
6. Use "return env.serror.pending != 0" instead of "arm_feature(env, ARM_FEATURE_RAS_EXT)" in the needed() check

Change since v4:
1. Rebase the code onto the latest tree
---
 target/arm/cpu.h     |  7 ++++++
 target/arm/kvm64.c   | 69 ++++++++++++++++++++++++++++++++++++++++++++++++++++
 target/arm/machine.c | 22 +++++++++++++++++
 3 files changed, 98 insertions(+)
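To make the migration fallback described in the commit message concrete, here is the destination-side behaviour of kvm_put_vcpu_events() from this patch, annotated with comments; the migration scenario in the comments is illustrative, not additional code.

```c
/* Destination side of a migration, inside kvm_put_vcpu_events():
 * the incoming "cpu/serror" vmstate said an SError was pending on the source.
 */
events.exception.serror_pending = env->serror.pending;   /* always restored */

if (have_inject_serror_esr) {
    /* Destination kernel has KVM_CAP_ARM_INJECT_SERROR_ESR:
     * restore the exact syndrome recorded on the source.
     */
    events.exception.serror_has_esr = env->serror.has_esr;
    events.exception.serror_esr = env->serror.esr;
}
/* else: the capability is missing, so only the pending bit is set and the
 * guest receives an SError with a zero ESR. This is why the source does not
 * block migration on the capability being present at the destination.
 */
```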