Message ID | 20221102231911.3107438-1-seanjc@google.com (mailing list archive)
---|---
Series | KVM: Rework kvm_init() and hardware enabling

On 03.11.22 at 00:18, Sean Christopherson wrote:
> Non-x86 folks, please test on hardware when possible. I made a _lot_ of mistakes when moving code around. Thankfully, x86 was the trickiest code to deal with, and I'm fairly confident that I found all the bugs I introduced via testing. But the number of mistakes I made and found on x86 makes me more than a bit worried that I screwed something up in other arch code.
>
> This is a continuation of Chao's series to do x86 CPU compatibility checks during virtualization hardware enabling[1], and of Isaku's series to try and clean up the hardware enabling paths so that x86 (Intel specifically) can temporarily enable hardware during module initialization without causing undue pain for other architectures[2]. It also includes one patch from another mini-series from Isaku that provides the less controversial patches[3].
>
> The main theme of this series is to kill off kvm_arch_init(), kvm_arch_hardware_(un)setup(), and kvm_arch_check_processor_compat(), which all originated in x86 code from way back when, and needlessly complicate both common KVM code and architecture code. E.g. many architectures don't mark functions/data as __init/__ro_after_init purely because kvm_init() isn't marked __init to support x86's separate vendor modules.
>
> The idea/hope is that with those hooks gone (moved to arch code), it will be easier for x86 (and other architectures) to modify their module init sequences as needed without having to fight common KVM code. E.g. I'm hoping that ARM can build on this to simplify its hardware enabling logic, especially the pKVM side of things.
>
> There are bug fixes throughout this series. They are more scattered than I would usually prefer, but getting the sequencing correct was a gigantic pain for many of the x86 fixes due to needing to fix common code in order for the x86 fix to have any meaning. And while the bugs are often fatal, they aren't all that interesting for most users as they either require a malicious admin or broken hardware, i.e. aren't likely to be encountered by the vast majority of KVM users. So unless someone _really_ wants a particular fix isolated for backporting, I'm not planning on shuffling patches.
>
> Tested on x86. Lightly tested on arm64. Compile tested only on all other architectures.

Some sniff tests seem to work ok on s390.

On 11/3/22 13:08, Christian Borntraeger wrote:
>> There are bug fixes throughout this series. They are more scattered than I would usually prefer, but getting the sequencing correct was a gigantic pain for many of the x86 fixes due to needing to fix common code in order for the x86 fix to have any meaning. And while the bugs are often fatal, they aren't all that interesting for most users as they either require a malicious admin or broken hardware, i.e. aren't likely to be encountered by the vast majority of KVM users. So unless someone _really_ wants a particular fix isolated for backporting, I'm not planning on shuffling patches.
>>
>> Tested on x86. Lightly tested on arm64. Compile tested only on all other architectures.
>
> Some sniff tests seem to work ok on s390.

Thanks. There are just a couple nits, and MIPS/PPC/RISC-V have very small changes. Feel free to send me a pull request once Marc acks.

Paolo

On Wed, Nov 02, 2022 at 11:18:27PM +0000, Sean Christopherson <seanjc@google.com> wrote:
> Non-x86 folks, please test on hardware when possible. I made a _lot_ of mistakes when moving code around. Thankfully, x86 was the trickiest code to deal with, and I'm fairly confident that I found all the bugs I introduced via testing. But the number of mistakes I made and found on x86 makes me more than a bit worried that I screwed something up in other arch code.
>
> This is a continuation of Chao's series to do x86 CPU compatibility checks during virtualization hardware enabling[1], and of Isaku's series to try and clean up the hardware enabling paths so that x86 (Intel specifically) can temporarily enable hardware during module initialization without causing undue pain for other architectures[2]. It also includes one patch from another mini-series from Isaku that provides the less controversial patches[3].
>
> The main theme of this series is to kill off kvm_arch_init(), kvm_arch_hardware_(un)setup(), and kvm_arch_check_processor_compat(), which all originated in x86 code from way back when, and needlessly complicate both common KVM code and architecture code. E.g. many architectures don't mark functions/data as __init/__ro_after_init purely because kvm_init() isn't marked __init to support x86's separate vendor modules.
>
> The idea/hope is that with those hooks gone (moved to arch code), it will be easier for x86 (and other architectures) to modify their module init sequences as needed without having to fight common KVM code. E.g. I'm hoping that ARM can build on this to simplify its hardware enabling logic, especially the pKVM side of things.
>
> There are bug fixes throughout this series. They are more scattered than I would usually prefer, but getting the sequencing correct was a gigantic pain for many of the x86 fixes due to needing to fix common code in order for the x86 fix to have any meaning. And while the bugs are often fatal, they aren't all that interesting for most users as they either require a malicious admin or broken hardware, i.e. aren't likely to be encountered by the vast majority of KVM users. So unless someone _really_ wants a particular fix isolated for backporting, I'm not planning on shuffling patches.
>
> Tested on x86. Lightly tested on arm64. Compile tested only on all other architectures.

Thanks for the patch series. I rebased the TDX KVM patch series and it worked. Since CPU offline needs to be rejected in some cases (to keep at least one CPU online per package), an arch hook for CPU offline is needed. I can keep it in the TDX KVM patch series.

diff --git a/arch/x86/include/asm/kvm-x86-ops.h b/arch/x86/include/asm/kvm-x86-ops.h
index 23c0f4bc63f1..ef7bcb845d42 100644
--- a/arch/x86/include/asm/kvm-x86-ops.h
+++ b/arch/x86/include/asm/kvm-x86-ops.h
@@ -17,6 +17,7 @@ BUILD_BUG_ON(1)
 KVM_X86_OP(hardware_enable)
 KVM_X86_OP(hardware_disable)
 KVM_X86_OP(hardware_unsetup)
+KVM_X86_OP_OPTIONAL_RET0(offline_cpu)
 KVM_X86_OP(has_emulated_msr)
 KVM_X86_OP(vcpu_after_set_cpuid)
 KVM_X86_OP(is_vm_type_supported)
diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 496c7c6eaff9..c420409aa96f 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -1468,6 +1468,7 @@ struct kvm_x86_ops {
         int (*hardware_enable)(void);
         void (*hardware_disable)(void);
         void (*hardware_unsetup)(void);
+        int (*offline_cpu)(void);
         bool (*has_emulated_msr)(struct kvm *kvm, u32 index);
         void (*vcpu_after_set_cpuid)(struct kvm_vcpu *vcpu);
 
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 2ed5a017f7bc..17c5d6a76c93 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -12039,6 +12039,11 @@ void kvm_arch_hardware_disable(void)
         drop_user_return_notifiers();
 }
 
+int kvm_arch_offline_cpu(unsigned int cpu)
+{
+        return static_call(kvm_x86_offline_cpu)();
+}
+
 bool kvm_vcpu_is_reset_bsp(struct kvm_vcpu *vcpu)
 {
         return vcpu->kvm->arch.bsp_vcpu_id == vcpu->vcpu_id;
diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index 620489b9aa93..4df79443fd11 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -1460,6 +1460,7 @@ static inline void kvm_create_vcpu_debugfs(struct kvm_vcpu *vcpu) {}
 int kvm_arch_hardware_enable(void);
 void kvm_arch_hardware_disable(void);
 #endif
+int kvm_arch_offline_cpu(unsigned int cpu);
 int kvm_arch_vcpu_runnable(struct kvm_vcpu *vcpu);
 bool kvm_arch_vcpu_in_kernel(struct kvm_vcpu *vcpu);
 int kvm_arch_vcpu_should_kick(struct kvm_vcpu *vcpu);
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index f6b6dcedaa0a..f770fdc662d0 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -5396,16 +5396,24 @@ static void hardware_disable_nolock(void *junk)
         __this_cpu_write(hardware_enabled, false);
 }
 
+__weak int kvm_arch_offline_cpu(unsigned int cpu)
+{
+        return 0;
+}
+
 static int kvm_offline_cpu(unsigned int cpu)
 {
+        int r = 0;
+
         mutex_lock(&kvm_lock);
-        if (kvm_usage_count) {
+        r = kvm_arch_offline_cpu(cpu);
+        if (!r && kvm_usage_count) {
                 preempt_disable();
                 hardware_disable_nolock(NULL);
                 preempt_enable();
         }
         mutex_unlock(&kvm_lock);
-        return 0;
+        return r;
 }
 
 static void hardware_disable_all_nolock(void)

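For illustration only (this is not part of the patch above): because the hook is declared with KVM_X86_OP_OPTIONAL_RET0(), a vendor module that never sets .offline_cpu gets "return 0" behavior for free, and one that does care simply fills in the new kvm_x86_ops member. A hedged sketch with made-up names:

/* Hypothetical vendor-side wiring; vt_offline_cpu() is illustrative only. */
static int vt_offline_cpu(void)
{
        /*
         * Return 0 to let the CPU go offline, or a negative errno such as
         * -EBUSY to veto it, e.g. when this is the last online CPU of its
         * package while TDs exist.
         */
        return 0;
}

static struct kvm_x86_ops vt_x86_ops __initdata = {
        .offline_cpu = vt_offline_cpu,
        /* all other callbacks elided for brevity */
};

kvm_arch_offline_cpu() then reaches the vendor implementation through the kvm_x86_offline_cpu static call, as the x86.c hunk above shows.
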
On 11/4/22 08:17, Isaku Yamahata wrote:
> On Wed, Nov 02, 2022 at 11:18:27PM +0000, Sean Christopherson <seanjc@google.com> wrote:
>
>> Non-x86 folks, please test on hardware when possible. I made a _lot_ of mistakes when moving code around. Thankfully, x86 was the trickiest code to deal with, and I'm fairly confident that I found all the bugs I introduced via testing. But the number of mistakes I made and found on x86 makes me more than a bit worried that I screwed something up in other arch code.
>>
>> This is a continuation of Chao's series to do x86 CPU compatibility checks during virtualization hardware enabling[1], and of Isaku's series to try and clean up the hardware enabling paths so that x86 (Intel specifically) can temporarily enable hardware during module initialization without causing undue pain for other architectures[2]. It also includes one patch from another mini-series from Isaku that provides the less controversial patches[3].
>>
>> The main theme of this series is to kill off kvm_arch_init(), kvm_arch_hardware_(un)setup(), and kvm_arch_check_processor_compat(), which all originated in x86 code from way back when, and needlessly complicate both common KVM code and architecture code. E.g. many architectures don't mark functions/data as __init/__ro_after_init purely because kvm_init() isn't marked __init to support x86's separate vendor modules.
>>
>> The idea/hope is that with those hooks gone (moved to arch code), it will be easier for x86 (and other architectures) to modify their module init sequences as needed without having to fight common KVM code. E.g. I'm hoping that ARM can build on this to simplify its hardware enabling logic, especially the pKVM side of things.
>>
>> There are bug fixes throughout this series. They are more scattered than I would usually prefer, but getting the sequencing correct was a gigantic pain for many of the x86 fixes due to needing to fix common code in order for the x86 fix to have any meaning. And while the bugs are often fatal, they aren't all that interesting for most users as they either require a malicious admin or broken hardware, i.e. aren't likely to be encountered by the vast majority of KVM users. So unless someone _really_ wants a particular fix isolated for backporting, I'm not planning on shuffling patches.
>>
>> Tested on x86. Lightly tested on arm64. Compile tested only on all other architectures.
>
> Thanks for the patch series. I rebased the TDX KVM patch series and it worked. Since CPU offline needs to be rejected in some cases (to keep at least one CPU online per package), an arch hook for CPU offline is needed. I can keep it in the TDX KVM patch series.

Yes, this patch looks good.

Paolo

> diff --git a/arch/x86/include/asm/kvm-x86-ops.h b/arch/x86/include/asm/kvm-x86-ops.h
> index 23c0f4bc63f1..ef7bcb845d42 100644
> --- a/arch/x86/include/asm/kvm-x86-ops.h
> +++ b/arch/x86/include/asm/kvm-x86-ops.h
> @@ -17,6 +17,7 @@ BUILD_BUG_ON(1)
>  KVM_X86_OP(hardware_enable)
>  KVM_X86_OP(hardware_disable)
>  KVM_X86_OP(hardware_unsetup)
> +KVM_X86_OP_OPTIONAL_RET0(offline_cpu)
>  KVM_X86_OP(has_emulated_msr)
>  KVM_X86_OP(vcpu_after_set_cpuid)
>  KVM_X86_OP(is_vm_type_supported)
> diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
> index 496c7c6eaff9..c420409aa96f 100644
> --- a/arch/x86/include/asm/kvm_host.h
> +++ b/arch/x86/include/asm/kvm_host.h
> @@ -1468,6 +1468,7 @@ struct kvm_x86_ops {
>          int (*hardware_enable)(void);
>          void (*hardware_disable)(void);
>          void (*hardware_unsetup)(void);
> +        int (*offline_cpu)(void);
>          bool (*has_emulated_msr)(struct kvm *kvm, u32 index);
>          void (*vcpu_after_set_cpuid)(struct kvm_vcpu *vcpu);
> 
> diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
> index 2ed5a017f7bc..17c5d6a76c93 100644
> --- a/arch/x86/kvm/x86.c
> +++ b/arch/x86/kvm/x86.c
> @@ -12039,6 +12039,11 @@ void kvm_arch_hardware_disable(void)
>          drop_user_return_notifiers();
>  }
> 
> +int kvm_arch_offline_cpu(unsigned int cpu)
> +{
> +        return static_call(kvm_x86_offline_cpu)();
> +}
> +
>  bool kvm_vcpu_is_reset_bsp(struct kvm_vcpu *vcpu)
>  {
>          return vcpu->kvm->arch.bsp_vcpu_id == vcpu->vcpu_id;
> diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
> index 620489b9aa93..4df79443fd11 100644
> --- a/include/linux/kvm_host.h
> +++ b/include/linux/kvm_host.h
> @@ -1460,6 +1460,7 @@ static inline void kvm_create_vcpu_debugfs(struct kvm_vcpu *vcpu) {}
>  int kvm_arch_hardware_enable(void);
>  void kvm_arch_hardware_disable(void);
>  #endif
> +int kvm_arch_offline_cpu(unsigned int cpu);
>  int kvm_arch_vcpu_runnable(struct kvm_vcpu *vcpu);
>  bool kvm_arch_vcpu_in_kernel(struct kvm_vcpu *vcpu);
>  int kvm_arch_vcpu_should_kick(struct kvm_vcpu *vcpu);
> diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
> index f6b6dcedaa0a..f770fdc662d0 100644
> --- a/virt/kvm/kvm_main.c
> +++ b/virt/kvm/kvm_main.c
> @@ -5396,16 +5396,24 @@ static void hardware_disable_nolock(void *junk)
>          __this_cpu_write(hardware_enabled, false);
>  }
> 
> +__weak int kvm_arch_offline_cpu(unsigned int cpu)
> +{
> +        return 0;
> +}
> +
>  static int kvm_offline_cpu(unsigned int cpu)
>  {
> +        int r = 0;
> +
>          mutex_lock(&kvm_lock);
> -        if (kvm_usage_count) {
> +        r = kvm_arch_offline_cpu(cpu);
> +        if (!r && kvm_usage_count) {
>                  preempt_disable();
>                  hardware_disable_nolock(NULL);
>                  preempt_enable();
>          }
>          mutex_unlock(&kvm_lock);
> -        return 0;
> +        return r;
>  }
> 
>  static void hardware_disable_all_nolock(void)
>

On Fri, Nov 04, 2022, Isaku Yamahata wrote:
> Thanks for the patch series. I rebased the TDX KVM patch series and it worked. Since CPU offline needs to be rejected in some cases (to keep at least one CPU online per package), an arch hook for CPU offline is needed.

I hate to bring this up because I doubt there's a real use case for SUSPEND with TDX, but the CPU offline path isn't just for true offlining of CPUs. When the system enters SUSPEND, only the initiating CPU goes through kvm_suspend()+kvm_resume(); all responding CPUs go through CPU offline+online. I.e. disallowing all CPUs from going "offline" will prevent suspending the system.

I don't see anything in the TDX series or the specs that suggests suspend+resume is disallowed when TDX is enabled, so blocking that seems just as wrong as preventing software from soft-offlining CPUs.

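For context on why the offline path is hit during suspend at all: KVM's per-CPU enable/disable runs through CPU hotplug callbacks, which every responding CPU executes on the way down and back up, while syscore suspend/resume runs only once on the CPU driving the transition. A rough sketch of that registration pattern (the CPUHP state, the "kvm/cpu:online" string, and kvm_online_cpu are assumptions for illustration, not quotes from the series):

#include <linux/cpuhotplug.h>
#include <linux/syscore_ops.h>

static struct syscore_ops kvm_syscore_ops = {
        .suspend = kvm_suspend,        /* runs once, on the initiating CPU */
        .resume  = kvm_resume,
};

static int __init kvm_cpuhp_init(void)
{
        int r;

        /*
         * Per-CPU callbacks: every responding CPU passes through the
         * teardown/startup pair during system suspend and resume, too.
         */
        r = cpuhp_setup_state_nocalls(CPUHP_AP_KVM_ONLINE, "kvm/cpu:online",
                                      kvm_online_cpu, kvm_offline_cpu);
        if (r)
                return r;

        register_syscore_ops(&kvm_syscore_ops);
        return 0;
}
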
On Fri, Nov 04, 2022 at 08:27:14PM +0000, Sean Christopherson <seanjc@google.com> wrote:
> On Fri, Nov 04, 2022, Isaku Yamahata wrote:
> > Thanks for the patch series. I rebased the TDX KVM patch series and it worked. Since CPU offline needs to be rejected in some cases (to keep at least one CPU online per package), an arch hook for CPU offline is needed.
>
> I hate to bring this up because I doubt there's a real use case for SUSPEND with TDX, but the CPU offline path isn't just for true offlining of CPUs. When the system enters SUSPEND, only the initiating CPU goes through kvm_suspend()+kvm_resume(); all responding CPUs go through CPU offline+online. I.e. disallowing all CPUs from going "offline" will prevent suspending the system.

The current TDX KVM implementation disallows taking a whole CPU package offline only while TDs are running. If no TD is running, CPU offline is allowed. So before SUSPEND, TDs need to be killed via systemd or something. After killing the TDs, the system can enter the SUSPEND state.

> I don't see anything in the TDX series or the specs that suggests suspend+resume is disallowed when TDX is enabled, so blocking that seems just as wrong as preventing software from soft-offlining CPUs.

When it comes to SUSPEND, it means suspend-to-idle, ACPI S1, S3, or S4. Suspend-to-idle doesn't require CPU offline.

Although the CPU-related spec doesn't mention S3, the ACPI spec says:

  7.4.2.2 System _S1 State (Sleeping with Processor Context Maintained)
  The processor-complex context is maintained.

  7.4.2.4 System _S3 State or 7.4.2.5 System _S4 State
  The processor-complex context is not maintained.

It's safe to say the TDX-related processor state is part of the processor-complex context, I think.

Let me summarize the situation. What do you think?

- While no TD is running:
  No additional limitation on CPU offline.

- On TD creation:
  If any whole CPU package is software-offlined, TD creation fails.
  Alternative: forcibly online the necessary CPUs, create the TD, and offline the CPUs again.

- TD running:
  Although it's not required to keep all CPU packages online, keep CPU packages from offlining for the sake of TD destruction.

- TD destruction:
  If any whole CPU package is software-offlined, TD destruction fails. The current implementation prevents any CPU package from offlining while a TD is running.
  Alternatives:
  - Forcibly online the necessary CPUs, destruct the TD, offline the CPUs again, and allow the CPU package to offline.
  - Stash the TDX resources somewhere. When the CPU packages are onlined again, release those resources.

- On SUSPEND:
  TODO: Allow CPU offline if S1 is requested.
  - suspend-to-idle: nothing to do because CPU offline isn't required.
  - ACPI S1: need to allow offlining CPUs. This can be implemented by checking whether suspend_state_t pm_suspend_target_state is PM_SUSPEND_STANDBY.
  - ACPI S3/S4: refuse CPU offline. The system needs to kill all TDs before starting the SUSPEND process. This is what is implemented.

Thanks,

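A minimal sketch of the S1/suspend-to-idle carve-out described in the list above. pm_suspend_target_state and the PM_SUSPEND_* values are real kernel symbols; tdx_any_td_running() and the overall policy are illustrative assumptions, and the per-package bookkeeping is elided:

#include <linux/errno.h>
#include <linux/suspend.h>

/* Hypothetical helper: true if any TD currently exists. */
extern bool tdx_any_td_running(void);

static int tdx_offline_cpu_allowed(void)
{
        /* Nothing to protect if no TD exists. */
        if (!tdx_any_td_running())
                return 0;

        /*
         * Suspend-to-idle and ACPI S1 keep the processor-complex context,
         * so letting the responding CPUs go "offline" is harmless there.
         */
        if (pm_suspend_target_state == PM_SUSPEND_TO_IDLE ||
            pm_suspend_target_state == PM_SUSPEND_STANDBY)
                return 0;

        /* True offline, or S3/S4 entry, while TDs are active: refuse. */
        return -EBUSY;
}
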
On Mon, 2022-11-07 at 13:46 -0800, Isaku Yamahata wrote:
> > On Fri, Nov 04, 2022, Isaku Yamahata wrote:
> > > Thanks for the patch series. I rebased the TDX KVM patch series and it worked. Since CPU offline needs to be rejected in some cases (to keep at least one CPU online per package), an arch hook for CPU offline is needed.
> >
> > I hate to bring this up because I doubt there's a real use case for SUSPEND with TDX, but the CPU offline path isn't just for true offlining of CPUs. When the system enters SUSPEND, only the initiating CPU goes through kvm_suspend()+kvm_resume(); all responding CPUs go through CPU offline+online. I.e. disallowing all CPUs from going "offline" will prevent suspending the system.
>
> The current TDX KVM implementation disallows taking a whole CPU package offline only while TDs are running. If no TD is running, CPU offline is allowed. So before SUSPEND, TDs need to be killed via systemd or something. After killing the TDs, the system can enter the SUSPEND state.

This seems not correct. You need at least one CPU in each package to be online in order to create a TD as well, as TDH.MNG.KEY.CONFIG needs to be called on all packages, correct?

On Tue, Nov 08, 2022 at 01:09:27AM +0000, "Huang, Kai" <kai.huang@intel.com> wrote:
> On Mon, 2022-11-07 at 13:46 -0800, Isaku Yamahata wrote:
> > > On Fri, Nov 04, 2022, Isaku Yamahata wrote:
> > > > Thanks for the patch series. I rebased the TDX KVM patch series and it worked. Since CPU offline needs to be rejected in some cases (to keep at least one CPU online per package), an arch hook for CPU offline is needed.
> > >
> > > I hate to bring this up because I doubt there's a real use case for SUSPEND with TDX, but the CPU offline path isn't just for true offlining of CPUs. When the system enters SUSPEND, only the initiating CPU goes through kvm_suspend()+kvm_resume(); all responding CPUs go through CPU offline+online. I.e. disallowing all CPUs from going "offline" will prevent suspending the system.
> >
> > The current TDX KVM implementation disallows taking a whole CPU package offline only while TDs are running. If no TD is running, CPU offline is allowed. So before SUSPEND, TDs need to be killed via systemd or something. After killing the TDs, the system can enter the SUSPEND state.
>
> This seems not correct. You need at least one CPU in each package to be online in order to create a TD as well, as TDH.MNG.KEY.CONFIG needs to be called on all packages, correct?

That's correct. In that case, the creation of the TD fails. TD creation checks that at least one CPU is online on every CPU package. If not, it errors out.

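For illustration, that per-package condition can be expressed with the generic topology helpers; this naive sketch is not code from the TDX series, and a real implementation would track packages more efficiently:

#include <linux/cpumask.h>
#include <linux/topology.h>

/* Illustrative: true iff every package with a possible CPU has an online CPU. */
static bool all_packages_have_online_cpu(void)
{
        int cpu, online;

        for_each_possible_cpu(cpu) {
                int pkg = topology_physical_package_id(cpu);
                bool found = false;

                for_each_online_cpu(online) {
                        if (topology_physical_package_id(online) == pkg) {
                                found = true;
                                break;
                        }
                }
                if (!found)
                        return false;
        }
        return true;
}

TD creation (and, per the discussion above, destruction) would bail out with an error when such a check returns false.
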
On Mon, 2022-11-07 at 21:43 -0800, Isaku Yamahata wrote:
> On Tue, Nov 08, 2022 at 01:09:27AM +0000, "Huang, Kai" <kai.huang@intel.com> wrote:
>
> > On Mon, 2022-11-07 at 13:46 -0800, Isaku Yamahata wrote:
> > > > On Fri, Nov 04, 2022, Isaku Yamahata wrote:
> > > > > Thanks for the patch series. I rebased the TDX KVM patch series and it worked. Since CPU offline needs to be rejected in some cases (to keep at least one CPU online per package), an arch hook for CPU offline is needed.
> > > >
> > > > I hate to bring this up because I doubt there's a real use case for SUSPEND with TDX, but the CPU offline path isn't just for true offlining of CPUs. When the system enters SUSPEND, only the initiating CPU goes through kvm_suspend()+kvm_resume(); all responding CPUs go through CPU offline+online. I.e. disallowing all CPUs from going "offline" will prevent suspending the system.
> > >
> > > The current TDX KVM implementation disallows taking a whole CPU package offline only while TDs are running. If no TD is running, CPU offline is allowed. So before SUSPEND, TDs need to be killed via systemd or something. After killing the TDs, the system can enter the SUSPEND state.
> >
> > This seems not correct. You need at least one CPU in each package to be online in order to create a TD as well, as TDH.MNG.KEY.CONFIG needs to be called on all packages, correct?
>
> That's correct. In that case, the creation of the TD fails. TD creation checks that at least one CPU is online on every CPU package. If not, it errors out.

I think we can just always refuse to offline the last CPU of each package when TDX is enabled. It's simpler, I guess.

On Tue, 2022-11-08 at 08:56 +0000, Huang, Kai wrote:
> On Mon, 2022-11-07 at 21:43 -0800, Isaku Yamahata wrote:
> > On Tue, Nov 08, 2022 at 01:09:27AM +0000, "Huang, Kai" <kai.huang@intel.com> wrote:
> >
> > > On Mon, 2022-11-07 at 13:46 -0800, Isaku Yamahata wrote:
> > > > > On Fri, Nov 04, 2022, Isaku Yamahata wrote:
> > > > > > Thanks for the patch series. I rebased the TDX KVM patch series and it worked. Since CPU offline needs to be rejected in some cases (to keep at least one CPU online per package), an arch hook for CPU offline is needed.
> > > > >
> > > > > I hate to bring this up because I doubt there's a real use case for SUSPEND with TDX, but the CPU offline path isn't just for true offlining of CPUs. When the system enters SUSPEND, only the initiating CPU goes through kvm_suspend()+kvm_resume(); all responding CPUs go through CPU offline+online. I.e. disallowing all CPUs from going "offline" will prevent suspending the system.
> > > >
> > > > The current TDX KVM implementation disallows taking a whole CPU package offline only while TDs are running. If no TD is running, CPU offline is allowed. So before SUSPEND, TDs need to be killed via systemd or something. After killing the TDs, the system can enter the SUSPEND state.
> > >
> > > This seems not correct. You need at least one CPU in each package to be online in order to create a TD as well, as TDH.MNG.KEY.CONFIG needs to be called on all packages, correct?
> >
> > That's correct. In that case, the creation of the TD fails. TD creation checks that at least one CPU is online on every CPU package. If not, it errors out.
>
> I think we can just always refuse to offline the last CPU of each package when TDX is enabled. It's simpler, I guess.

Sorry, I wasn't reading carefully. Please ignore. We need to support suspend :)

On Mon, Nov 07, 2022, Isaku Yamahata wrote:
> On Fri, Nov 04, 2022 at 08:27:14PM +0000, Sean Christopherson <seanjc@google.com> wrote:
>
> > On Fri, Nov 04, 2022, Isaku Yamahata wrote:
> > > Thanks for the patch series. I rebased the TDX KVM patch series and it worked. Since CPU offline needs to be rejected in some cases (to keep at least one CPU online per package), an arch hook for CPU offline is needed.
> >
> > I hate to bring this up because I doubt there's a real use case for SUSPEND with TDX, but the CPU offline path isn't just for true offlining of CPUs. When the system enters SUSPEND, only the initiating CPU goes through kvm_suspend()+kvm_resume(); all responding CPUs go through CPU offline+online. I.e. disallowing all CPUs from going "offline" will prevent suspending the system.
>
> The current TDX KVM implementation disallows taking a whole CPU package offline only while TDs are running. If no TD is running, CPU offline is allowed. So before SUSPEND, TDs need to be killed via systemd or something. After killing the TDs, the system can enter the SUSPEND state.

Ah, I assumed offlining was disallowed if TDX was enabled.

> > I don't see anything in the TDX series or the specs that suggests suspend+resume is disallowed when TDX is enabled, so blocking that seems just as wrong as preventing software from soft-offlining CPUs.
>
> When it comes to SUSPEND, it means suspend-to-idle, ACPI S1, S3, or S4. Suspend-to-idle doesn't require CPU offline.
>
> Although the CPU-related spec doesn't mention S3, the ACPI spec says:
>
>   7.4.2.2 System _S1 State (Sleeping with Processor Context Maintained)
>   The processor-complex context is maintained.
>
>   7.4.2.4 System _S3 State or 7.4.2.5 System _S4 State
>   The processor-complex context is not maintained.
>
> It's safe to say the TDX-related processor state is part of the processor-complex context, I think.
>
> Let me summarize the situation. What do you think?
>
> - While no TD is running:
>   No additional limitation on CPU offline.
>
> - On TD creation:
>   If any whole CPU package is software-offlined, TD creation fails.
>   Alternative: forcibly online the necessary CPUs, create the TD, and offline the CPUs again.

The alternative isn't really viable because there's no way the kernel can guarantee a CPU can be onlined, i.e. the kernel would need to fall back to disallowing TD creation anyways.

> - TD running:
>   Although it's not required to keep all CPU packages online, keep CPU packages from offlining for the sake of TD destruction.
>
> - TD destruction:
>   If any whole CPU package is software-offlined, TD destruction fails. The current implementation prevents any CPU package from offlining while a TD is running.
>   Alternatives:
>   - Forcibly online the necessary CPUs, destruct the TD, offline the CPUs again, and allow the CPU package to offline.
>   - Stash the TDX resources somewhere. When the CPU packages are onlined again, release those resources.
>
> - On SUSPEND:
>   TODO: Allow CPU offline if S1 is requested.

Is this actually a TODO? I assume the kernel doesn't actually try to offline CPUs in this case, i.e. it Just Works.

>   - suspend-to-idle: nothing to do because CPU offline isn't required.
>   - ACPI S1: need to allow offlining CPUs. This can be implemented by checking whether suspend_state_t pm_suspend_target_state is PM_SUSPEND_STANDBY.
>   - ACPI S3/S4: refuse CPU offline. The system needs to kill all TDs before starting the SUSPEND process. This is what is implemented.

Looks good, disallowing SUSPEND with active TDs is a reasonable tradeoff. As above, I highly doubt anyone actually cares.