Message ID | 20201113141734.096224353@linutronix.de (mailing list archive)
---|---
State | New, archived
Series | softirq: Cleanups and RT awareness
On Fri, Nov 13, 2020 at 03:02:19PM +0100, Thomas Gleixner wrote:
> RT requires the softirq to be preemptible and uses a per CPU local lock to
> protect BH disabled sections and softirq processing. Therefore RT cannot
> use the preempt counter to keep track of BH disabled/serving.
>
> Add a RT only counter to task struct and adjust the relevant macros in
> preempt.h.

You may want to describe a bit the reason for this per task counter.
It's not intuitive at this stage.

Thanks.
On Thu, Nov 19 2020 at 13:18, Frederic Weisbecker wrote:
> On Fri, Nov 13, 2020 at 03:02:19PM +0100, Thomas Gleixner wrote:
>> RT requires the softirq to be preemptible and uses a per CPU local lock to
>> protect BH disabled sections and softirq processing. Therefore RT cannot
>> use the preempt counter to keep track of BH disabled/serving.
>>
>> Add a RT only counter to task struct and adjust the relevant macros in
>> preempt.h.
>
> You may want to describe a bit the reason for this per task counter.
> It's not intuitive at this stage.

Something like this:

RT requires the softirq processing and local bottom half disabled regions
to be preemptible. Using the normal preempt count based serialization is
therefore not possible because this implicitly disables preemption.

RT kernels use a per CPU local lock to serialize bottom halves. As
local_bh_disable() can nest, the lock can only be acquired on the
outermost invocation of local_bh_disable() and released when the nest
count becomes zero. Tasks which hold the local lock can be preempted, so
it's required to keep track of the nest count per task.

Add a RT only counter to task struct and adjust the relevant macros in
preempt.h.

Thanks,

        tglx
On Thu, Nov 19, 2020 at 07:34:13PM +0100, Thomas Gleixner wrote:
> On Thu, Nov 19 2020 at 13:18, Frederic Weisbecker wrote:
> > On Fri, Nov 13, 2020 at 03:02:19PM +0100, Thomas Gleixner wrote:
> >> RT requires the softirq to be preemptible and uses a per CPU local lock to
> >> protect BH disabled sections and softirq processing. Therefore RT cannot
> >> use the preempt counter to keep track of BH disabled/serving.
> >>
> >> Add a RT only counter to task struct and adjust the relevant macros in
> >> preempt.h.
> >
> > You may want to describe a bit the reason for this per task counter.
> > It's not intuitive at this stage.
>
> Something like this:
>
> RT requires the softirq processing and local bottom half disabled regions
> to be preemptible. Using the normal preempt count based serialization is
> therefore not possible because this implicitly disables preemption.
>
> RT kernels use a per CPU local lock to serialize bottom halves. As
> local_bh_disable() can nest, the lock can only be acquired on the
> outermost invocation of local_bh_disable() and released when the nest
> count becomes zero. Tasks which hold the local lock can be preempted, so
> it's required to keep track of the nest count per task.
>
> Add a RT only counter to task struct and adjust the relevant macros in
> preempt.h.
>
> Thanks,

Very good, thanks!
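To make the scheme described in this exchange concrete, the following is a minimal C sketch of how an RT flavour of local_bh_disable()/local_bh_enable() could maintain the per task nest count: the per CPU local lock is taken only by the outermost invocation and dropped again when the count returns to zero. This is an illustration of the idea, not the implementation added later in this series; the lock name bh_lock and the rt_local_bh_*() function names are invented for the example, and the real code additionally has to process pending softirqs and deal with non-task contexts.

#include <linux/local_lock.h>
#include <linux/percpu.h>
#include <linux/preempt.h>
#include <linux/sched.h>

/*
 * Hypothetical per CPU lock serializing BH disabled sections and softirq
 * processing; the actual lock lives in kernel/softirq.c in later patches.
 */
static DEFINE_PER_CPU(local_lock_t, bh_lock) = INIT_LOCAL_LOCK(bh_lock);

static inline void rt_local_bh_disable(void)		/* illustrative name */
{
	/*
	 * Outermost invocation: take the per CPU local lock. The task can
	 * still be preempted while holding it, which is why the nest count
	 * is kept in task_struct rather than in preempt_count().
	 */
	if (!current->softirq_disable_cnt)
		local_lock(&bh_lock);

	current->softirq_disable_cnt += SOFTIRQ_DISABLE_OFFSET;
}

static inline void rt_local_bh_enable(void)		/* illustrative name */
{
	current->softirq_disable_cnt -= SOFTIRQ_DISABLE_OFFSET;

	/* Drop the lock only when the outermost BH disabled section ends. */
	if (!current->softirq_disable_cnt)
		local_unlock(&bh_lock);
}

Counting in units of SOFTIRQ_DISABLE_OFFSET keeps the RT variant of softirq_count() in the patch below, which masks the counter with SOFTIRQ_MASK, compatible with the !RT semantics.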
--- a/include/linux/hardirq.h
+++ b/include/linux/hardirq.h
@@ -6,6 +6,7 @@
 #include <linux/preempt.h>
 #include <linux/lockdep.h>
 #include <linux/ftrace_irq.h>
+#include <linux/sched.h>
 #include <linux/vtime.h>
 #include <asm/hardirq.h>
--- a/include/linux/preempt.h
+++ b/include/linux/preempt.h
@@ -79,7 +79,11 @@
 #define nmi_count()		(preempt_count() & NMI_MASK)
 #define hardirq_count()		(preempt_count() & HARDIRQ_MASK)
-#define softirq_count()		(preempt_count() & SOFTIRQ_MASK)
+#ifdef CONFIG_PREEMPT_RT
+# define softirq_count()	(current->softirq_disable_cnt & SOFTIRQ_MASK)
+#else
+# define softirq_count()	(preempt_count() & SOFTIRQ_MASK)
+#endif
 #define irq_count()		(nmi_count() | hardirq_count() | softirq_count())

 /*
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -1004,6 +1004,9 @@ struct task_struct {
	int				softirq_context;
	int				irq_config;
 #endif
+#ifdef CONFIG_PREEMPT_RT
+	int				softirq_disable_cnt;
+#endif
 #ifdef CONFIG_LOCKDEP
 # define MAX_LOCK_DEPTH			48UL
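For context, softirq_count() is not only used on its own; the context check macros in include/linux/preempt.h are layered on top of it, so routing it through the per task counter on RT keeps these checks reporting the state of the current task even though that task can be preempted while it has BH disabled. At the time of this patch they look roughly like this (unchanged by the patch):

#define in_softirq()		(softirq_count())
#define in_serving_softirq()	(softirq_count() & SOFTIRQ_OFFSET)
#define in_interrupt()		(irq_count())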
RT requires the softirq to be preemptible and uses a per CPU local lock to
protect BH disabled sections and softirq processing. Therefore RT cannot
use the preempt counter to keep track of BH disabled/serving.

Add a RT only counter to task struct and adjust the relevant macros in
preempt.h.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
---
 include/linux/hardirq.h |    1 +
 include/linux/preempt.h |    6 +++++-
 include/linux/sched.h   |    3 +++
 3 files changed, 9 insertions(+), 1 deletion(-)
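From the point of view of callers nothing changes: code keeps using the regular BH APIs and context checks, and whether softirq_count() reads preempt_count() or current->softirq_disable_cnt stays hidden behind the macro. A small, hypothetical usage example (example_update() is invented; the APIs are the regular kernel ones):

#include <linux/bottom_half.h>
#include <linux/bug.h>
#include <linux/preempt.h>

static void example_update(void)	/* hypothetical caller */
{
	local_bh_disable();

	/*
	 * softirq_count() is non-zero here on both RT and !RT kernels,
	 * so the usual context checks keep working unchanged.
	 */
	WARN_ON_ONCE(!in_softirq());

	/* ... access data shared with softirq handlers on this CPU ... */

	local_bh_enable();
}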