Message ID: 20250114175143.81438-23-vschneid@redhat.com (mailing list archive)
State: Not Applicable
Series: context_tracking,x86: Defer some IPIs until a user->kernel transition

Checks:
  netdev/tree_selection: success (Not a local patch)
On Tue, Jan 14, 2025 at 06:51:35PM +0100, Valentin Schneider wrote:
> ct_nmi_{enter, exit}() only touches the RCU watching counter and doesn't
> modify the actual CT state part of context_tracking.state. This means that
> upon receiving an IRQ when idle, the CT_STATE_IDLE->CT_STATE_KERNEL
> transition only happens in ct_idle_exit().
>
> One can note that ct_nmi_enter() can only ever be entered with the CT state
> as either CT_STATE_KERNEL or CT_STATE_IDLE, as an IRQ/NMI happening in the
> CT_STATE_USER or CT_STATE_GUEST states will be routed down to ct_user_exit().

Are you sure? An NMI can fire between guest_state_enter_irqoff() and
__svm_vcpu_run(). And NMIs interrupting userspace don't call
enter_from_user_mode(). In fact they don't call irqentry_enter_from_user_mode()
like regular IRQs, but irqentry_nmi_enter() instead. Well, that's for archs
implementing the common entry code; I can't speak for the others.

Unifying the behaviour between user and idle such that IRQs/NMIs exit the
CT_STATE can be interesting, but I fear this may not come for free. You would
need to save the old state on IRQ/NMI entry and restore it on exit.

Do we really need it?

Thanks.
On Wed, Jan 22, 2025, Frederic Weisbecker wrote:
> On Tue, Jan 14, 2025 at 06:51:35PM +0100, Valentin Schneider wrote:
> > ct_nmi_{enter, exit}() only touches the RCU watching counter and doesn't
> > modify the actual CT state part of context_tracking.state. This means that
> > upon receiving an IRQ when idle, the CT_STATE_IDLE->CT_STATE_KERNEL
> > transition only happens in ct_idle_exit().
> >
> > One can note that ct_nmi_enter() can only ever be entered with the CT state
> > as either CT_STATE_KERNEL or CT_STATE_IDLE, as an IRQ/NMI happening in the
> > CT_STATE_USER or CT_STATE_GUEST states will be routed down to ct_user_exit().
>
> Are you sure? An NMI can fire between guest_state_enter_irqoff() and
> __svm_vcpu_run().

Heh, technically, they can't. On SVM, KVM clears GIF prior to
svm_vcpu_enter_exit(), and restores GIF=1 only after it returns. I.e. NMIs
are fully blocked _on SVM_.

VMX unfortunately doesn't provide GIF, and so NMIs can arrive at any time.
It's infeasible for software to prevent them, so we're stuck with that.
[In theory, KVM could deliberately generate an NMI and not do IRET so that
NMIs are blocked, but that would be beyond crazy.]

> And NMIs interrupting userspace don't call enter_from_user_mode(). In fact
> they don't call irqentry_enter_from_user_mode() like regular IRQs but
> irqentry_nmi_enter() instead. Well, that's for archs implementing the common
> entry code; I can't speak for the others.
>
> Unifying the behaviour between user and idle such that IRQs/NMIs exit the
> CT_STATE can be interesting, but I fear this may not come for free. You would
> need to save the old state on IRQ/NMI entry and restore it on exit.
>
> Do we really need it?
>
> Thanks.
On 22/01/25 01:22, Frederic Weisbecker wrote:
> On Tue, Jan 14, 2025 at 06:51:35PM +0100, Valentin Schneider wrote:
>> One can note that ct_nmi_enter() can only ever be entered with the CT state
>> as either CT_STATE_KERNEL or CT_STATE_IDLE, as an IRQ/NMI happening in the
>> CT_STATE_USER or CT_STATE_GUEST states will be routed down to ct_user_exit().
>
> Are you sure? An NMI can fire between guest_state_enter_irqoff() and
> __svm_vcpu_run().

Urgh, you're quite right.

> And NMIs interrupting userspace don't call
> enter_from_user_mode(). In fact they don't call irqentry_enter_from_user_mode()
> like regular IRQs but irqentry_nmi_enter() instead. Well, that's for archs
> implementing the common entry code; I can't speak for the others.

That I didn't realize, so thank you for pointing it out. Having another
look now, I mistook DEFINE_IDTENTRY_RAW(exc_int3) for the general case
when it really isn't :(

> Unifying the behaviour between user and idle such that IRQs/NMIs exit the
> CT_STATE can be interesting, but I fear this may not come for free. You would
> need to save the old state on IRQ/NMI entry and restore it on exit.

That's what I tried to avoid, but it sounds like there's no nice way around it.

> Do we really need it?

Well, my problem with not doing IDLE->KERNEL transitions on IRQ/NMI is that
this leads the IPI deferral logic to observe a technically-out-of-sync state
for remote CPUs. Consider:

  CPUx                                   CPUy
                                         state := CT_STATE_IDLE
                                         ...
                                         ~> IRQ
                                           ct_nmi_enter()
                                         ...
                                         [in the kernel proper by now]
  text_poke_bp_batch()
    ct_set_cpu_work(CPUy, CT_WORK_SYNC)
      READ CPUy ct->state
      `-> CT_STATE_IDLE
        `-> defer IPI

I thought this meant I would need to throw out the "defer IPIs if CPU is
idle" part, but AIUI this also affects CT_STATE_USER and CT_STATE_GUEST,
which is a bummer :(
On 27/01/25 12:17, Valentin Schneider wrote:
> On 22/01/25 01:22, Frederic Weisbecker wrote:
>> And NMIs interrupting userspace don't call
>> enter_from_user_mode(). In fact they don't call irqentry_enter_from_user_mode()
>> like regular IRQs but irqentry_nmi_enter() instead. Well, that's for archs
>> implementing the common entry code; I can't speak for the others.
>
> That I didn't realize, so thank you for pointing it out. Having another
> look now, I mistook DEFINE_IDTENTRY_RAW(exc_int3) for the general case
> when it really isn't :(
>
>> Unifying the behaviour between user and idle such that IRQs/NMIs exit the
>> CT_STATE can be interesting, but I fear this may not come for free. You would
>> need to save the old state on IRQ/NMI entry and restore it on exit.
>
> That's what I tried to avoid, but it sounds like there's no nice way around it.
>
>> Do we really need it?
>
> Well, my problem with not doing IDLE->KERNEL transitions on IRQ/NMI is that
> this leads the IPI deferral logic to observe a technically-out-of-sync state
> for remote CPUs.
[...]
> I thought this meant I would need to throw out the "defer IPIs if CPU is
> idle" part, but AIUI this also affects CT_STATE_USER and CT_STATE_GUEST,
> which is a bummer :(

Soooo I've been thinking...

Isn't

  (context_tracking.state & CT_RCU_WATCHING)

pretty much a proxy for knowing whether a CPU is executing in kernelspace,
including NMIs?

  NMI interrupts userspace/VM/idle -> ct_nmi_enter()   -> it becomes true
  IRQ interrupts idle              -> ct_irq_enter()   -> it becomes true
  IRQ interrupts userspace         -> __ct_user_exit() -> it becomes true
  IRQ interrupts VM                -> __ct_user_exit() -> it becomes true

IOW, if I gate setting deferred work by checking for this instead of
explicitly CT_STATE_KERNEL, "it should work" and prevent the
aforementioned issue? Or should I be out drinking instead? :-)
On Fri, Feb 07, 2025 at 06:06:45PM +0100, Valentin Schneider wrote:
> On 27/01/25 12:17, Valentin Schneider wrote:
[...]
> Soooo I've been thinking...
>
> Isn't
>
>   (context_tracking.state & CT_RCU_WATCHING)
>
> pretty much a proxy for knowing whether a CPU is executing in kernelspace,
> including NMIs?

You got it!

> NMI interrupts userspace/VM/idle -> ct_nmi_enter()   -> it becomes true
> IRQ interrupts idle              -> ct_irq_enter()   -> it becomes true
> IRQ interrupts userspace         -> __ct_user_exit() -> it becomes true
> IRQ interrupts VM                -> __ct_user_exit() -> it becomes true
>
> IOW, if I gate setting deferred work by checking for this instead of
> explicitly CT_STATE_KERNEL, "it should work" and prevent the
> aforementioned issue? Or should I be out drinking instead? :-)

Exactly, it should work! Now that doesn't mean you can't go out for a
drink :-)

Thanks.
On 07/02/25 19:37, Frederic Weisbecker wrote:
> On Fri, Feb 07, 2025 at 06:06:45PM +0100, Valentin Schneider wrote:
>> Soooo I've been thinking...
>>
>> Isn't
>>
>>   (context_tracking.state & CT_RCU_WATCHING)
>>
>> pretty much a proxy for knowing whether a CPU is executing in kernelspace,
>> including NMIs?
>
> You got it!

Yay!

>> IOW, if I gate setting deferred work by checking for this instead of
>> explicitly CT_STATE_KERNEL, "it should work" and prevent the
>> aforementioned issue? Or should I be out drinking instead? :-)
>
> Exactly, it should work! Now that doesn't mean you can't go out for a
> drink :-)

Well, drinks were had very shortly after sending this email :D

> Thanks.
diff --git a/kernel/context_tracking.c b/kernel/context_tracking.c
index a61498a8425e2..15f10ddec8cbe 100644
--- a/kernel/context_tracking.c
+++ b/kernel/context_tracking.c
@@ -236,7 +236,9 @@ void noinstr ct_nmi_exit(void)
 	instrumentation_end();
 
 	// RCU is watching here ...
-	ct_kernel_exit_state(CT_RCU_WATCHING);
+	ct_kernel_exit_state(CT_RCU_WATCHING -
+			     CT_STATE_KERNEL +
+			     CT_STATE_IDLE);
 	// ... but is no longer watching here.
 
 	if (!in_nmi())
@@ -259,6 +261,7 @@ void noinstr ct_nmi_enter(void)
 {
 	long incby = 2;
 	struct context_tracking *ct = this_cpu_ptr(&context_tracking);
+	int curr_state;
 
 	/* Complain about underflow. */
 	WARN_ON_ONCE(ct_nmi_nesting() < 0);
@@ -271,13 +274,26 @@ void noinstr ct_nmi_enter(void)
 	 * to be in the outermost NMI handler that interrupted an RCU-idle
 	 * period (observation due to Andy Lutomirski).
 	 */
-	if (!rcu_is_watching_curr_cpu()) {
+	curr_state = raw_atomic_read(this_cpu_ptr(&context_tracking.state));
+	if (!(curr_state & CT_RCU_WATCHING)) {
 
 		if (!in_nmi())
 			rcu_task_enter();
 
+		/*
+		 * RCU isn't watching, so we're one of
+		 *   CT_STATE_IDLE
+		 *   CT_STATE_USER
+		 *   CT_STATE_GUEST
+		 * guest/user entry is handled by ct_user_enter(), so this has
+		 * to be idle entry.
+		 */
+		WARN_ON_ONCE((curr_state & CT_STATE_MASK) != CT_STATE_IDLE);
+
 		// RCU is not watching here ...
-		ct_kernel_enter_state(CT_RCU_WATCHING);
+		ct_kernel_enter_state(CT_RCU_WATCHING +
+				      CT_STATE_KERNEL -
+				      CT_STATE_IDLE);
 		// ... but is watching here.
 
 		instrumentation_begin();
ct_nmi_{enter, exit}() only touches the RCU watching counter and doesn't
modify the actual CT state part of context_tracking.state. This means that
upon receiving an IRQ when idle, the CT_STATE_IDLE->CT_STATE_KERNEL
transition only happens in ct_idle_exit().

One can note that ct_nmi_enter() can only ever be entered with the CT state
as either CT_STATE_KERNEL or CT_STATE_IDLE, as an IRQ/NMI happening in the
CT_STATE_USER or CT_STATE_GUEST states will be routed down to ct_user_exit().

Add/remove CT_STATE_IDLE from the context tracking state as needed in
ct_nmi_{enter, exit}().

Note that this leaves the following window where the CPU is executing code
in kernelspace, but the context tracking state is CT_STATE_IDLE:

  ~> IRQ
  ct_nmi_enter()
    state = state + CT_STATE_KERNEL - CT_STATE_IDLE

  [...]

  ct_nmi_exit()
    state = state - CT_STATE_KERNEL + CT_STATE_IDLE

  [...] /!\ CT_STATE_IDLE here while we're really in kernelspace! /!\

  ct_cpuidle_exit()
    state = state + CT_STATE_KERNEL - CT_STATE_IDLE

Signed-off-by: Valentin Schneider <vschneid@redhat.com>
---
 kernel/context_tracking.c | 22 +++++++++++++++++++---
 1 file changed, 19 insertions(+), 3 deletions(-)