Message ID | 20190826182320.9089-1-tony.luck@intel.com (mailing list archive)
---|---
State | New, archived |
Series | KVM: x86: Only print persistent reasons for kvm disabled once
Tony Luck <tony.luck@intel.com> writes:

> When I boot my server I'm treated to a console log with:
>
> [ 40.520510] kvm: disabled by bios
> [ 40.551234] kvm: disabled by bios
> [ 40.607987] kvm: disabled by bios
> [ 40.659701] kvm: disabled by bios
> [ 40.691224] kvm: disabled by bios
> [ 40.718786] kvm: disabled by bios
> [ 40.750122] kvm: disabled by bios
> [ 40.797170] kvm: disabled by bios
> [ 40.828408] kvm: disabled by bios
>
> ... many, many more lines, one for every logical CPU

(If I didn't miss anything) we have the following code path:

  __init vmx_init()
    kvm_init()
      kvm_arch_init()

and we bail on the first error, so there should be only one message per
module load attempt. The question I have is: who is trying to load
kvm-intel (or kvm-amd, which is no different) for each CPU, and why? Is
it udev? Can this be changed?

In particular, I'm worried about eVMCS enablement in vmx_init(): we will
also get a bunch of "KVM: vmx: using Hyper-V Enlightened VMCS" messages
if the subsequent kvm_arch_init() fails.

> Since it isn't likely that BIOS is going to suddenly enable
> KVM between bringing up one CPU and the next, we might as
> well just print this once.
>
> Same for a few other unchanging reasons that might keep
> kvm from being initialized.
>
> Signed-off-by: Tony Luck <tony.luck@intel.com>
> ---
>  arch/x86/kvm/x86.c | 8 ++++----
>  1 file changed, 4 insertions(+), 4 deletions(-)
>
> diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
> index 93b0bd45ac73..56d4a43dd2db 100644
> --- a/arch/x86/kvm/x86.c
> +++ b/arch/x86/kvm/x86.c
> @@ -7007,18 +7007,18 @@ int kvm_arch_init(void *opaque)
>  	struct kvm_x86_ops *ops = opaque;
>  
>  	if (kvm_x86_ops) {
> -		printk(KERN_ERR "kvm: already loaded the other module\n");
> +		pr_err_once("kvm: already loaded the other module\n");
>  		r = -EEXIST;
>  		goto out;
>  	}
>  
>  	if (!ops->cpu_has_kvm_support()) {
> -		printk(KERN_ERR "kvm: no hardware support\n");
> +		pr_err_once("kvm: no hardware support\n");
>  		r = -EOPNOTSUPP;
>  		goto out;
>  	}
>  	if (ops->disabled_by_bios()) {
> -		printk(KERN_ERR "kvm: disabled by bios\n");
> +		pr_err_once("kvm: disabled by bios\n");
>  		r = -EOPNOTSUPP;
>  		goto out;
>  	}
> @@ -7029,7 +7029,7 @@ int kvm_arch_init(void *opaque)
>  	 * vCPU's FPU state as a fxregs_state struct.
>  	 */
>  	if (!boot_cpu_has(X86_FEATURE_FPU) || !boot_cpu_has(X86_FEATURE_FXSR)) {
> -		printk(KERN_ERR "kvm: inadequate fpu\n");
> +		pr_err_once("kvm: inadequate fpu\n");
>  		r = -EOPNOTSUPP;
>  		goto out;
>  	}

The messages themselves may need some love, but that is irrelevant to
this patch, so

Reviewed-by: Vitaly Kuznetsov <vkuznets@redhat.com>
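For reference, the _once variants suppress repeats with a static flag
generated per call site. A simplified sketch of the printk_once()
pattern (abridged; the real macro in include/linux/printk.h of that era
has a few more details):

	/*
	 * Simplified sketch, not the verbatim kernel macro: one static
	 * bool is instantiated per call site, so the message prints only
	 * the first time that particular statement executes, for as long
	 * as the static storage itself survives.
	 */
	#define printk_once(fmt, ...)				\
	({							\
		static bool __print_once;			\
								\
		if (!__print_once) {				\
			__print_once = true;			\
			printk(fmt, ##__VA_ARGS__);		\
		}						\
	})

	#define pr_err_once(fmt, ...) \
		printk_once(KERN_ERR pr_fmt(fmt), ##__VA_ARGS__)

Because the flag is ordinary static storage, the suppression only holds
while that storage exists, which matters for the module-lifetime
objection raised later in the thread.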
> and we bail on the first error, so there should be only one message per
> module load attempt. The question I have is: who is trying to load
> kvm-intel (or kvm-amd, which is no different) for each CPU, and why? Is
> it udev? Can this be changed?

No idea where to look. The system has a RHEL8 user space install.
Kernel is latest from Linus (v5.3-rc6).

-Tony
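One plausible mechanism, stated here as an assumption rather than
something confirmed in this thread: kvm-intel declares a CPU-feature
module alias, and udev's coldplug pass processes one device node per
logical CPU, each of which can trigger its own modprobe. The
declaration (roughly as it appears in arch/x86/kvm/vmx/vmx.c around
v5.3) looks like:

	/*
	 * kvm-intel advertises itself for any CPU with the VMX feature
	 * flag; the x86cpu bus then exposes a matching MODALIAS on every
	 * logical CPU device, which udev may modprobe one by one.
	 */
	static const struct x86_cpu_id vmx_cpu_id[] = {
		X86_FEATURE_MATCH(X86_FEATURE_VMX),
		{}
	};
	MODULE_DEVICE_TABLE(x86cpu, vmx_cpu_id);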
2019-08-27 08:27+0200, Vitaly Kuznetsov:
> Tony Luck <tony.luck@intel.com> writes:
>
> > When I boot my server I'm treated to a console log with:
> >
> > [ 40.520510] kvm: disabled by bios
> > [ 40.551234] kvm: disabled by bios
> > [ 40.607987] kvm: disabled by bios
> > [ 40.659701] kvm: disabled by bios
> > [ 40.691224] kvm: disabled by bios
> > [ 40.718786] kvm: disabled by bios
> > [ 40.750122] kvm: disabled by bios
> > [ 40.797170] kvm: disabled by bios
> > [ 40.828408] kvm: disabled by bios
> >
> > ... many, many more lines, one for every logical CPU
>
> (If I didn't miss anything) we have the following code path:
>
>   __init vmx_init()
>     kvm_init()
>       kvm_arch_init()
>
> and we bail on the first error, so there should be only one message per
> module load attempt. The question I have is: who is trying to load
> kvm-intel (or kvm-amd, which is no different) for each CPU, and why? Is
> it udev? Can this be changed?

I agree that this is highly suspicious behavior. It would be really
helpful if we found out what is causing it; so far, this patch seems to
be working around a userspace bug.

> In particular, I'm worried about eVMCS enablement in vmx_init(): we will
> also get a bunch of "KVM: vmx: using Hyper-V Enlightened VMCS" messages
> if the subsequent kvm_arch_init() fails.

And we can't get rid of that one through the printk_once trick, because
the code lives in the kvm_intel module and therefore gets unloaded, and
its "once" flag reset, on every failure.

I am also not inclined to apply the patch, as we will likely merge the
kvm and kvm_{svm,intel} modules in the future to take full advantage of
link time optimizations, and this patch would stop working after that.

Thanks.
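A minimal toy module (hypothetical, for illustration only) showing why
pr_err_once() cannot deduplicate across load attempts: the flag is
module data and starts out false on each load.

	#include <linux/module.h>
	#include <linux/printk.h>

	/*
	 * Hypothetical demo: the static "once" flag inside pr_err_once()
	 * is re-initialized on every module load, so a failing init that
	 * causes the module to be unloaded and re-probed prints the
	 * message again each time.
	 */
	static int __init once_demo_init(void)
	{
		pr_err_once("once_demo: printed once per load, not once per boot\n");
		return -EOPNOTSUPP;	/* init fails, module is unloaded */
	}
	module_init(once_demo_init);

	MODULE_LICENSE("GPL");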
On Tue, Aug 27, 2019 at 09:08:10PM +0200, Radim Krčmář wrote:
> I am also not inclined to apply the patch, as we will likely merge the
> kvm and kvm_{svm,intel} modules in the future to take full advantage of
> link time optimizations, and this patch would stop working after that.

Any chance you can provide additional details on the plan for merging
modules? E.g. I assume there would still be kvm_intel and kvm_svm, just
no vanilla kvm?
On 27/08/19 21:24, Sean Christopherson wrote:
> On Tue, Aug 27, 2019 at 09:08:10PM +0200, Radim Krčmář wrote:
>> I am also not inclined to apply the patch, as we will likely merge the
>> kvm and kvm_{svm,intel} modules in the future to take full advantage of
>> link time optimizations, and this patch would stop working after that.
>
> Any chance you can provide additional details on the plan for merging
> modules? E.g. I assume there would still be kvm_intel and kvm_svm, just
> no vanilla kvm?

Yes, that is the idea. Basically we trade disk space for performance:
kvm_intel and kvm_amd are never loaded together, so kvm never has more
than one user.

Paolo
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 93b0bd45ac73..56d4a43dd2db 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -7007,18 +7007,18 @@ int kvm_arch_init(void *opaque)
 	struct kvm_x86_ops *ops = opaque;
 
 	if (kvm_x86_ops) {
-		printk(KERN_ERR "kvm: already loaded the other module\n");
+		pr_err_once("kvm: already loaded the other module\n");
 		r = -EEXIST;
 		goto out;
 	}
 
 	if (!ops->cpu_has_kvm_support()) {
-		printk(KERN_ERR "kvm: no hardware support\n");
+		pr_err_once("kvm: no hardware support\n");
 		r = -EOPNOTSUPP;
 		goto out;
 	}
 	if (ops->disabled_by_bios()) {
-		printk(KERN_ERR "kvm: disabled by bios\n");
+		pr_err_once("kvm: disabled by bios\n");
 		r = -EOPNOTSUPP;
 		goto out;
 	}
@@ -7029,7 +7029,7 @@ int kvm_arch_init(void *opaque)
 	 * vCPU's FPU state as a fxregs_state struct.
 	 */
 	if (!boot_cpu_has(X86_FEATURE_FPU) || !boot_cpu_has(X86_FEATURE_FXSR)) {
-		printk(KERN_ERR "kvm: inadequate fpu\n");
+		pr_err_once("kvm: inadequate fpu\n");
 		r = -EOPNOTSUPP;
 		goto out;
 	}
When I boot my server I'm treated to a console log with:

[ 40.520510] kvm: disabled by bios
[ 40.551234] kvm: disabled by bios
[ 40.607987] kvm: disabled by bios
[ 40.659701] kvm: disabled by bios
[ 40.691224] kvm: disabled by bios
[ 40.718786] kvm: disabled by bios
[ 40.750122] kvm: disabled by bios
[ 40.797170] kvm: disabled by bios
[ 40.828408] kvm: disabled by bios

... many, many more lines, one for every logical CPU

Since it isn't likely that BIOS is going to suddenly enable
KVM between bringing up one CPU and the next, we might as
well just print this once.

Same for a few other unchanging reasons that might keep
kvm from being initialized.

Signed-off-by: Tony Luck <tony.luck@intel.com>
---
 arch/x86/kvm/x86.c | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)