Message ID | 20190125114155.32062-5-vkuznets@redhat.com (mailing list archive)
---|---
State | New, archived
Series | i386/kvm/hyper-v: refactor and implement 'hv-stimer-direct' and 'hv-all' enlightenments
On Fri, Jan 25, 2019 at 12:41:51PM +0100, Vitaly Kuznetsov wrote:
> In many cases we just want to give Windows guests all currently supported
> Hyper-V enlightenments, and that's where this new mode may come in handy.
> We pass through what was returned by KVM_GET_SUPPORTED_HV_CPUID.

How is compatibility ensured on migration between kernels reporting
different feature sets?

Roman.
Roman Kagan <rkagan@virtuozzo.com> writes:

> On Fri, Jan 25, 2019 at 12:41:51PM +0100, Vitaly Kuznetsov wrote:
>> In many cases we just want to give Windows guests all currently supported
>> Hyper-V enlightenments, and that's where this new mode may come in handy.
>> We pass through what was returned by KVM_GET_SUPPORTED_HV_CPUID.
>
> How is compatibility ensured on migration between kernels reporting
> different feature sets?

AFAIU we don't change anything in this regard (or at least my intention was
not to change anything): hv-all is converted into the individual hv-*
properties -- hv_cpuid_check_and_set() actually sets the cpu->hyperv_* flags
according to what's supported by the kernel -- so when we migrate we will
require all these features to be supported on the destination. I'll verify
that my expectations actually match reality, thanks for the reminder!
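To make the conversion concrete, here is a minimal, self-contained sketch of
the logic described above. The HvProp type and its fields are illustrative
stand-ins for QEMU's kvm_hyperv_properties table and the cpu->hyperv_* flags;
this is not the patch's actual code.

    #include <stdbool.h>
    #include <stddef.h>
    #include <stdio.h>

    /* Illustrative stand-ins for kvm_hyperv_properties / cpu->hyperv_*. */
    typedef struct {
        const char *name;     /* e.g. "hv-relaxed" */
        bool requested;       /* explicit hv-* flag from the command line */
        bool host_supported;  /* reported by KVM_GET_SUPPORTED_HV_CPUID */
    } HvProp;

    /*
     * Expand hv-all: every host-supported feature becomes an explicit
     * request, so a migration destination must support it as well.
     * An explicitly requested but unsupported feature remains an error.
     */
    static int expand_hv_all(HvProp *p, size_t n, bool hv_all)
    {
        for (size_t i = 0; i < n; i++) {
            if (p[i].requested && !p[i].host_supported) {
                fprintf(stderr, "'%s' is not supported by kernel\n",
                        p[i].name);
                return -1;
            }
            if (hv_all && p[i].host_supported) {
                p[i].requested = true;
            }
        }
        return 0;
    }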
On Fri, Jan 25, 2019 at 02:46:42PM +0100, Vitaly Kuznetsov wrote:
> AFAIU we don't change anything in this regard (or at least my intention
> was not to change anything): hv-all is converted into the individual hv-*
> properties -- hv_cpuid_check_and_set() actually sets the cpu->hyperv_*
> flags according to what's supported by the kernel -- so when we migrate we
> will require all these features to be supported on the destination.

Migration relies on the upper layer to run the destination QEMU with a
command line identical (except for -incoming) to the source's, and QEMU is
then supposed to set up an identical environment in the target VM to the one
in the source, or refuse to start if that's impossible. (If I'm
misunderstanding this, Dave (cc'd) may want to correct me.)

AFAICS this hv-all attribute will enable different feature sets depending on
the kernel it's run on, so a migration between different kernels will appear
to succeed, but the guest may suddenly encounter an incompatible change in
its environment.

Roman.
Roman Kagan <rkagan@virtuozzo.com> writes:

> Migration relies on the upper layer to run the destination QEMU with a
> command line identical (except for -incoming) to the source's, and QEMU is
> then supposed to set up an identical environment in the target VM to the
> one in the source, or refuse to start if that's impossible.
>
> AFAICS this hv-all attribute will enable different feature sets depending
> on the kernel it's run on, so a migration between different kernels will
> appear to succeed, but the guest may suddenly encounter an incompatible
> change in its environment.

With 'hv-all' I'm trying to achieve behavior similar to '-cpu host', and
AFAIK such VMs are migratable 'at your own risk' (if you do it directly from
qemu). Libvirt (or whatever upper layer), however, does a CPU feature
comparison, and if you have fewer features on the destination host than you
had on the source, it will forbid the migration. I think if this also works
for Hyper-V features then we're fine.

Dave, feel free to tell me I'm completely wrong with my assumptions)
* Vitaly Kuznetsov (vkuznets@redhat.com) wrote:
> With 'hv-all' I'm trying to achieve behavior similar to '-cpu host', and
> AFAIK such VMs are migratable 'at your own risk' (if you do it directly
> from qemu). Libvirt (or whatever upper layer), however, does a CPU feature
> comparison, and if you have fewer features on the destination host than
> you had on the source, it will forbid the migration. I think if this also
> works for Hyper-V features then we're fine.
>
> Dave, feel free to tell me I'm completely wrong with my assumptions)

It does sound like -cpu host, but -cpu host does come with a health warning
and we often get subtle screwups where it doesn't quite behave the same on
the two sides; also, qemu now warns (and with 'enforce' enforces) a check at
its level rather than relying on libvirt.

So hmm, yes, it sounds like -cpu host, but I'd generally say it's not a
great thing to copy unless you're really, really careful. For example, in
the -cpu host world people might have two machines they think are the same,
but then they find out one has HT disabled or nesting enabled, so they're
not actually the same.

I'm not sure what the equivalent bear traps are in the Hyper-V world, but
I'd be surprised if there weren't any; for example, what happens when
someone upgrades one of their hosts to some minor version that adds/removes
a feature?

Also, how does libvirt figure out that the features are actually the same -
does it need a bunch of detection code?

Dave

--
Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK
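For reference, the 'enforce' check mentioned here boils down to a
mask-and-compare on feature words. An illustrative sketch of that idea only,
not QEMU's actual implementation (which lives in x86_cpu_filter_features()
and friends):

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    /*
     * requested: the feature bits the VM is configured with;
     * supported: the bits the local kernel actually offers.
     * With enforce=true a mismatch is fatal, mirroring -cpu ...,enforce.
     */
    static bool check_feature_word(uint32_t requested, uint32_t supported,
                                   bool enforce)
    {
        uint32_t missing = requested & ~supported;

        if (missing) {
            fprintf(stderr, "warning: host lacks feature bits 0x%08x\n",
                    missing);
            if (enforce) {
                return false;  /* refuse to start the VM */
            }
        }
        return true;
    }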
On Mon, Jan 28, 2019 at 06:22:30PM +0000, Dr. David Alan Gilbert wrote:
> It does sound like -cpu host, but -cpu host does come with a health
> warning and we often get subtle screwups where it doesn't quite behave the
> same on the two sides; also, qemu now warns (and with 'enforce' enforces)
> a check at its level rather than relying on libvirt.
>
> So hmm, yes, it sounds like -cpu host, but I'd generally say it's not a
> great thing to copy unless you're really, really careful.

If libvirt is involved, it's much simpler and safer to use something like
<cpu mode="host-model">, which generates a migration-safe CPU configuration
based on the current host. Live migration support with "-cpu host" is only
useful for experiments and carefully controlled environments.

Is there a real need to make hv-all migratable? What would be the use case,
exactly? If there's no clear use case, I would recommend making it a
migration blocker.
"Dr. David Alan Gilbert" <dgilbert@redhat.com> writes: > I'm not sure what the equivalent bear traps are in the Hyper-V world, > but I'd be surprised if there weren't any; for example what happens > when someone upgrades one of their hosts to some minor version that > adds/removes a feature? Here we're talking about Hyper-V emulation in KVM, features only get added there, but even if it gets removed it will be detected by libvirt ... > > Also, how does libvirt figure out that the features are actually the > same - does it need a bunch of detection code? ... as I *think* it compares Feature CPUID words (and all Hyper-V features which we enable with hv-all are there).
Eduardo Habkost <ehabkost@redhat.com> writes:

> If libvirt is involved, it's much simpler and safer to use something like
> <cpu mode="host-model">, which generates a migration-safe CPU
> configuration based on the current host. Live migration support with
> "-cpu host" is only useful for experiments and carefully controlled
> environments.
>
> Is there a real need to make hv-all migratable? What would be the use
> case, exactly? If there's no clear use case, I would recommend making it
> a migration blocker.

There's no clear use case; I noticed that we keep adding Hyper-V
enlightenments, and since these make Windows' life on KVM easier we
recommend enabling them all (and, with the exception of hv-evmcs, which I
also don't enable with hv-all, I'm unaware of cases which would require
disabling certain Hyper-V enlightenments). hv-all is mostly a convenience
feature.

I plan to take a look at 'host-model' to see if we can borrow some ideas
from there (that would actually be ideal - build the set of 'hv-*'
enlightenments based on the capabilities of the current host), but I'm also
not totally against keeping it the way it is and making it a migration
blocker for the time being (and making it a 'developer-only' feature).
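For what the migration-blocker option might look like: a sketch against the
QEMU tree, following the pattern already used for the invtsc blocker in
target/i386/kvm.c. The hv_all_migration_blocker variable and the
hv_all_block_migration() helper are hypothetical names, not part of the
posted patch:

    #include "qemu/osdep.h"
    #include "qapi/error.h"
    #include "migration/blocker.h"
    #include "cpu.h"

    static Error *hv_all_migration_blocker;

    /* Hypothetical helper: register a blocker once 'hv-all' is enabled. */
    static int hv_all_block_migration(X86CPU *cpu, Error **errp)
    {
        if (cpu->hyperv_all && !hv_all_migration_blocker) {
            error_setg(&hv_all_migration_blocker,
                       "'hv-all' enables a host-dependent Hyper-V feature "
                       "set and is not migration-safe");
            if (migrate_add_blocker(hv_all_migration_blocker, errp) < 0) {
                error_free(hv_all_migration_blocker);
                hv_all_migration_blocker = NULL;
                return -1;
            }
        }
        return 0;
    }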
* Vitaly Kuznetsov (vkuznets@redhat.com) wrote:
> Here we're talking about Hyper-V emulation in KVM, where features only get
> added; but even if one were removed, that would be detected by libvirt ...

OK, but then you do get the same behaviour: upgrade a host to a new
kernel/qemu and gain a new enlightenment, and you can't migrate back to the
older one (possibly with no warning).

> ... as I *think* it compares the Feature CPUID words (and all the Hyper-V
> features we enable with hv-all are there).

Not too bad if it does, but also look at the scary command lines we get
generated, full of -cpu ...,+feature,+feature,-feature,...

Dave

--
Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK
On Mon, Jan 28, 2019 at 06:22:30PM +0000, Dr. David Alan Gilbert wrote:
> So hmm, yes, it sounds like -cpu host, but I'd generally say it's not a
> great thing to copy unless you're really, really careful.
>
> Also, how does libvirt figure out that the features are actually the same
> - does it need a bunch of detection code?

We support "-cpu host" with libvirt because there is a genuine functional
reason to want to use it: you can't get precisely the same result any other
way. That isn't the case with 'hv-all', as it doesn't offer any feature that
you can't already achieve by listing all the hv-XXX features explicitly.

As such I don't expect libvirt to use 'hv-all' at all. There's no reason why
QEMU can't support it anyway, though, for the convenience of people
launching QEMU manually. We just need to document the migration caveat: it's
only safe if the QEMU and kernel versions match on both sides.

Regards,
Daniel
diff --git a/target/i386/cpu.c b/target/i386/cpu.c
index 2f5412592d..b776be5223 100644
--- a/target/i386/cpu.c
+++ b/target/i386/cpu.c
@@ -5771,6 +5771,7 @@ static Property x86_cpu_properties[] = {
     DEFINE_PROP_BOOL("hv-tlbflush", X86CPU, hyperv_tlbflush, false),
     DEFINE_PROP_BOOL("hv-evmcs", X86CPU, hyperv_evmcs, false),
     DEFINE_PROP_BOOL("hv-ipi", X86CPU, hyperv_ipi, false),
+    DEFINE_PROP_BOOL("hv-all", X86CPU, hyperv_all, false),
     DEFINE_PROP_BOOL("check", X86CPU, check_cpuid, true),
     DEFINE_PROP_BOOL("enforce", X86CPU, enforce_cpuid, false),
     DEFINE_PROP_BOOL("kvm", X86CPU, expose_kvm, true),
diff --git a/target/i386/cpu.h b/target/i386/cpu.h
index 59656a70e6..9b5c2715cc 100644
--- a/target/i386/cpu.h
+++ b/target/i386/cpu.h
@@ -1397,6 +1397,7 @@ struct X86CPU {
     bool hyperv_tlbflush;
     bool hyperv_evmcs;
     bool hyperv_ipi;
+    bool hyperv_all;
     bool check_cpuid;
     bool enforce_cpuid;
     bool expose_kvm;
diff --git a/target/i386/kvm.c b/target/i386/kvm.c
index ed55040d9e..b373b4ac06 100644
--- a/target/i386/kvm.c
+++ b/target/i386/kvm.c
@@ -647,7 +647,8 @@ static bool hyperv_enabled(X86CPU *cpu)
             cpu->hyperv_stimer ||
             cpu->hyperv_reenlightenment ||
             cpu->hyperv_tlbflush ||
-            cpu->hyperv_ipi);
+            cpu->hyperv_ipi ||
+            cpu->hyperv_all);
 }
 
 static int kvm_arch_set_tsc_khz(CPUState *cs)
@@ -995,14 +996,15 @@ static int hv_cpuid_get_fw(struct kvm_cpuid2 *cpuid, int fw, uint32_t *r)
 }
 
 static int hv_cpuid_check_and_set(CPUState *cs, struct kvm_cpuid2 *cpuid,
-                                  const char *name, bool flag)
+                                  const char *name, bool *flag)
 {
     X86CPU *cpu = X86_CPU(cs);
     CPUX86State *env = &cpu->env;
     uint32_t r, fw, bits;
     int i, j;
+    bool present;
 
-    if (!flag) {
+    if (!*flag && !cpu->hyperv_all) {
         return 0;
     }
 
@@ -1011,6 +1013,7 @@ static int hv_cpuid_check_and_set(CPUState *cs, struct kvm_cpuid2 *cpuid,
             continue;
         }
 
+        present = true;
         for (j = 0; j < ARRAY_SIZE(kvm_hyperv_properties[i].flags); j++) {
             fw = kvm_hyperv_properties[i].flags[j].fw;
             bits = kvm_hyperv_properties[i].flags[j].bits;
@@ -1020,17 +1023,26 @@ static int hv_cpuid_check_and_set(CPUState *cs, struct kvm_cpuid2 *cpuid,
             }
 
             if (hv_cpuid_get_fw(cpuid, fw, &r) || (r & bits) != bits) {
-                fprintf(stderr,
-                        "Hyper-V %s (requested by '%s' cpu flag) "
-                        "is not supported by kernel\n",
-                        kvm_hyperv_properties[i].desc,
-                        kvm_hyperv_properties[i].name);
-                return 1;
+                if (*flag) {
+                    fprintf(stderr,
+                            "Hyper-V %s (requested by '%s' cpu flag) "
+                            "is not supported by kernel\n",
+                            kvm_hyperv_properties[i].desc,
+                            kvm_hyperv_properties[i].name);
+                    return 1;
+                } else {
+                    present = false;
+                    break;
+                }
             }
 
             env->features[fw] |= bits;
         }
 
+        if (cpu->hyperv_all && present) {
+            *flag = true;
+        }
+
         return 0;
     }
 
@@ -1038,6 +1050,43 @@ static int hv_cpuid_check_and_set(CPUState *cs, struct kvm_cpuid2 *cpuid,
     return 1;
 }
 
+static int hv_report_missing_dep(X86CPU *cpu, const char *name,
+                                 const char *dep_name)
+{
+    int i, j, nprops = ARRAY_SIZE(kvm_hyperv_properties);
+
+    for (i = 0; i < nprops; i++) {
+        if (!strcmp(kvm_hyperv_properties[i].name, name)) {
+            break;
+        }
+    }
+    for (j = 0; j < nprops; j++) {
+        if (!strcmp(kvm_hyperv_properties[j].name, dep_name)) {
+            break;
+        }
+    }
+
+    /*
+     * Internal error: either the feature or its dependency is not in
+     * kvm_hyperv_properties!
+     */
+    if (i == nprops || j == nprops) {
+        return 1;
+    }
+
+    if (cpu->hyperv_all) {
+        fprintf(stderr, "Hyper-V %s (requested by 'hv-all' cpu flag) "
+                "requires %s (is not supported by kernel)\n",
+                kvm_hyperv_properties[i].desc, kvm_hyperv_properties[j].desc);
+    } else {
+        fprintf(stderr, "Hyper-V %s (requested by '%s' cpu flag) "
+                "requires %s ('%s')\n", kvm_hyperv_properties[i].desc,
+                name, kvm_hyperv_properties[j].desc, dep_name);
+    }
+
+    return 1;
+}
+
 /*
  * Fill in Hyper-V CPUIDs. Returns the number of entries filled in cpuid_ent in
  * case of success, errno < 0 in case of failure and 0 when no Hyper-V
@@ -1077,32 +1126,54 @@ static int hyperv_handle_properties(CPUState *cs,
         cpuid = get_supported_hv_cpuid_legacy(cs);
     }
 
+    if (cpu->hyperv_all) {
+        memcpy(cpuid_ent, &cpuid->entries[0],
+               cpuid->nent * sizeof(cpuid->entries[0]));
+
+        c = cpuid_find_entry(cpuid, HV_CPUID_FEATURES, 0);
+        if (c) {
+            env->features[FEAT_HYPERV_EAX] = c->eax;
+            env->features[FEAT_HYPERV_EBX] = c->ebx;
+            env->features[FEAT_HYPERV_EDX] = c->edx;
+        }
+        c = cpuid_find_entry(cpuid, HV_CPUID_ENLIGHTMENT_INFO, 0);
+        if (c) {
+            env->features[FEAT_HV_RECOMM_EAX] = c->eax;
+
+            /* hv-spinlocks may have been overridden */
+            if (cpu->hyperv_spinlock_attempts != HYPERV_SPINLOCK_NEVER_RETRY) {
+                c->ebx = cpu->hyperv_spinlock_attempts;
+            }
+        }
+        c = cpuid_find_entry(cpuid, HV_CPUID_NESTED_FEATURES, 0);
+        if (c) {
+            env->features[FEAT_HV_NESTED_EAX] = c->eax;
+        }
+    }
+
     /* Features */
     r |= hv_cpuid_check_and_set(cs, cpuid, "hv-relaxed",
-                                cpu->hyperv_relaxed_timing);
-    r |= hv_cpuid_check_and_set(cs, cpuid, "hv-vapic", cpu->hyperv_vapic);
-    r |= hv_cpuid_check_and_set(cs, cpuid, "hv-time", cpu->hyperv_time);
+                                &cpu->hyperv_relaxed_timing);
+    r |= hv_cpuid_check_and_set(cs, cpuid, "hv-vapic", &cpu->hyperv_vapic);
+    r |= hv_cpuid_check_and_set(cs, cpuid, "hv-time", &cpu->hyperv_time);
     r |= hv_cpuid_check_and_set(cs, cpuid, "hv-frequencies",
-                                cpu->hyperv_frequencies);
-    r |= hv_cpuid_check_and_set(cs, cpuid, "hv-crash", cpu->hyperv_crash);
+                                &cpu->hyperv_frequencies);
+    r |= hv_cpuid_check_and_set(cs, cpuid, "hv-crash", &cpu->hyperv_crash);
     r |= hv_cpuid_check_and_set(cs, cpuid, "hv-reenlightenment",
-                                cpu->hyperv_reenlightenment);
-    r |= hv_cpuid_check_and_set(cs, cpuid, "hv-reset", cpu->hyperv_reset);
-    r |= hv_cpuid_check_and_set(cs, cpuid, "hv-vpindex", cpu->hyperv_vpindex);
-    r |= hv_cpuid_check_and_set(cs, cpuid, "hv-runtime", cpu->hyperv_runtime);
-    r |= hv_cpuid_check_and_set(cs, cpuid, "hv-synic", cpu->hyperv_synic);
-    r |= hv_cpuid_check_and_set(cs, cpuid, "hv-stimer", cpu->hyperv_stimer);
-    r |= hv_cpuid_check_and_set(cs, cpuid, "hv-tlbflush", cpu->hyperv_tlbflush);
-    r |= hv_cpuid_check_and_set(cs, cpuid, "hv-ipi", cpu->hyperv_ipi);
+                                &cpu->hyperv_reenlightenment);
+    r |= hv_cpuid_check_and_set(cs, cpuid, "hv-reset", &cpu->hyperv_reset);
+    r |= hv_cpuid_check_and_set(cs, cpuid, "hv-vpindex", &cpu->hyperv_vpindex);
+    r |= hv_cpuid_check_and_set(cs, cpuid, "hv-runtime", &cpu->hyperv_runtime);
+    r |= hv_cpuid_check_and_set(cs, cpuid, "hv-synic", &cpu->hyperv_synic);
+    r |= hv_cpuid_check_and_set(cs, cpuid, "hv-stimer", &cpu->hyperv_stimer);
+    r |= hv_cpuid_check_and_set(cs, cpuid, "hv-tlbflush",
+                                &cpu->hyperv_tlbflush);
+    r |= hv_cpuid_check_and_set(cs, cpuid, "hv-ipi", &cpu->hyperv_ipi);
 
     /* Dependencies */
     if (cpu->hyperv_synic && !cpu->hyperv_synic_kvm_only &&
-        !cpu->hyperv_vpindex) {
-        fprintf(stderr, "Hyper-V SynIC "
-                "(requested by 'hv-synic' cpu flag) "
-                "requires Hyper-V VP_INDEX ('hv-vpindex')\n");
-        r |= 1;
-    }
+        !cpu->hyperv_vpindex)
+        r |= hv_report_missing_dep(cpu, "hv-synic", "hv-vpindex");
 
     /* Not exposed by KVM but needed to make CPU hotplug in Windows work */
     env->features[FEAT_HYPERV_EDX] |= HV_CPU_DYNAMIC_PARTITIONING_AVAILABLE;
@@ -1112,6 +1183,12 @@ static int hyperv_handle_properties(CPUState *cs,
         goto free;
     }
 
+    if (cpu->hyperv_all) {
+        /* We already copied all feature words from KVM as is */
+        r = cpuid->nent;
+        goto free;
+    }
+
     c = &cpuid_ent[cpuid_i++];
     c->function = HV_CPUID_VENDOR_AND_MAX_FUNCTIONS;
     if (!cpu->hyperv_vendor_id) {
In many cases we just want to give Windows guests all currently supported
Hyper-V enlightenments, and that's where this new mode may come in handy. We
pass through what was returned by KVM_GET_SUPPORTED_HV_CPUID.

hv_cpuid_check_and_set() is modified to also set the cpu->hyperv_* flags, as
we may want to check them later (and we actually do for hv_runtime,
hv_synic, ...).

Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com>
---
 target/i386/cpu.c |   1 +
 target/i386/cpu.h |   1 +
 target/i386/kvm.c | 133 ++++++++++++++++++++++++++++++++++++----------
 3 files changed, 107 insertions(+), 28 deletions(-)