Message ID: 20211223163500.2625491-1-bigeasy@linutronix.de (mailing list archive)
State: New, archived
Series: [REPOST,REPOST,v2] fscache: Use only one fscache_object_cong_wait.
Thanks, but this is gone in the upcoming fscache rewrite. I'm hoping that
will get in the next merge window.

David
On 2021-12-23 17:17:09 [+0000], David Howells wrote:
> Thanks, but this is gone in the upcoming fscache rewrite. I'm hoping that
> will get in the next merge window.

Yes, I noticed that. What about the current tree, v5.16-rc6 and earlier?
Shouldn't this be addressed?

> David

Sebastian
On Thu, 23 Dec 2021 19:15:09 +0100 Sebastian Andrzej Siewior <bigeasy@linutronix.de> wrote:

> On 2021-12-23 17:17:09 [+0000], David Howells wrote:
> > Thanks, but this is gone in the upcoming fscache rewrite. I'm hoping that
> > will get in the next merge window.
>
> Yes, I noticed that. What about the current tree, v5.16-rc6 and earlier?
> Shouldn't this be addressed?

If the bug is serious enough to justify a -stable backport then yes, we
should merge a fix such as this ahead of the fscache rewrite, so we have
something suitable for backporting.

Is the bug serious enough?

Or is the bug in a not-yet-noticed state? In other words, is it possible
that four years from now, someone will hit this bug in a 5.15-based
kernel and will then wish we'd backported a fix?
On 2021-12-26 16:20:30 [-0800], Andrew Morton wrote:
> On Thu, 23 Dec 2021 19:15:09 +0100 Sebastian Andrzej Siewior <bigeasy@linutronix.de> wrote:
>
> > On 2021-12-23 17:17:09 [+0000], David Howells wrote:
> > > Thanks, but this is gone in the upcoming fscache rewrite. I'm hoping that
> > > will get in the next merge window.
> >
> > Yes, I noticed that. What about the current tree, v5.16-rc6 and earlier?
> > Shouldn't this be addressed?
>
> If the bug is serious enough to justify a -stable backport then yes, we
> should merge a fix such as this ahead of the fscache rewrite, so we
> have something suitable for backporting.
>
> Is the bug serious enough?
>
> Or is the bug in a not-yet-noticed state? In other words, is it
> possible that four years from now, someone will hit this bug in a
> 5.15-based kernel and will then wish we'd backported a fix?

I can't answer how serious it is, but:

- with CONFIG_DEBUG_PREEMPT enabled there has to be a visible backtrace
  due to the this_cpu_ptr() usage.

- because of schedule_timeout(60 * HZ) there is no visible hang. The
  waiter should either be woken up properly (via the waitqueue) or
  after a minute due to the timeout.

Neither of these looks good in general.

Sebastian
diff --git a/fs/fscache/internal.h b/fs/fscache/internal.h
index c3e4804b8fcbf..9edb87e11680b 100644
--- a/fs/fscache/internal.h
+++ b/fs/fscache/internal.h
@@ -81,7 +81,6 @@ extern unsigned fscache_debug;
 extern struct kobject *fscache_root;
 extern struct workqueue_struct *fscache_object_wq;
 extern struct workqueue_struct *fscache_op_wq;
-DECLARE_PER_CPU(wait_queue_head_t, fscache_object_cong_wait);
 
 extern unsigned int fscache_hash(unsigned int salt, unsigned int *data, unsigned int n);
 
diff --git a/fs/fscache/main.c b/fs/fscache/main.c
index 4207f98e405fd..85f8cf3a323d5 100644
--- a/fs/fscache/main.c
+++ b/fs/fscache/main.c
@@ -41,8 +41,6 @@ struct kobject *fscache_root;
 struct workqueue_struct *fscache_object_wq;
 struct workqueue_struct *fscache_op_wq;
 
-DEFINE_PER_CPU(wait_queue_head_t, fscache_object_cong_wait);
-
 /* these values serve as lower bounds, will be adjusted in fscache_init() */
 static unsigned fscache_object_max_active = 4;
 static unsigned fscache_op_max_active = 2;
@@ -138,7 +136,6 @@ unsigned int fscache_hash(unsigned int salt, unsigned int *data, unsigned int n)
 static int __init fscache_init(void)
 {
 	unsigned int nr_cpus = num_possible_cpus();
-	unsigned int cpu;
 	int ret;
 
 	fscache_object_max_active =
@@ -161,9 +158,6 @@ static int __init fscache_init(void)
 	if (!fscache_op_wq)
 		goto error_op_wq;
 
-	for_each_possible_cpu(cpu)
-		init_waitqueue_head(&per_cpu(fscache_object_cong_wait, cpu));
-
 	ret = fscache_proc_init();
 	if (ret < 0)
 		goto error_proc;
diff --git a/fs/fscache/object.c b/fs/fscache/object.c
index 6a675652129b2..7a972d144b546 100644
--- a/fs/fscache/object.c
+++ b/fs/fscache/object.c
@@ -798,6 +798,8 @@ void fscache_object_destroy(struct fscache_object *object)
 }
 EXPORT_SYMBOL(fscache_object_destroy);
 
+static DECLARE_WAIT_QUEUE_HEAD(fscache_object_cong_wait);
+
 /*
  * enqueue an object for metadata-type processing
  */
@@ -806,16 +808,12 @@ void fscache_enqueue_object(struct fscache_object *object)
 {
 	_enter("{OBJ%x}", object->debug_id);
 
 	if (fscache_get_object(object, fscache_obj_get_queue) >= 0) {
-		wait_queue_head_t *cong_wq =
-			&get_cpu_var(fscache_object_cong_wait);
 
 		if (queue_work(fscache_object_wq, &object->work)) {
 			if (fscache_object_congested())
-				wake_up(cong_wq);
+				wake_up(&fscache_object_cong_wait);
 		} else
 			fscache_put_object(object, fscache_obj_put_queue);
-
-		put_cpu_var(fscache_object_cong_wait);
 	}
 }
@@ -833,16 +831,15 @@ void fscache_enqueue_object(struct fscache_object *object)
  */
 bool fscache_object_sleep_till_congested(signed long *timeoutp)
 {
-	wait_queue_head_t *cong_wq = this_cpu_ptr(&fscache_object_cong_wait);
 	DEFINE_WAIT(wait);
 
 	if (fscache_object_congested())
 		return true;
 
-	add_wait_queue_exclusive(cong_wq, &wait);
+	add_wait_queue_exclusive(&fscache_object_cong_wait, &wait);
 	if (!fscache_object_congested())
 		*timeoutp = schedule_timeout(*timeoutp);
-	finish_wait(cong_wq, &wait);
+	finish_wait(&fscache_object_cong_wait, &wait);
 
 	return fscache_object_congested();
 }
In the commit mentioned below, fscache was converted from slow-work to
workqueue. slow_work_enqueue() and slow_work_sleep_till_thread_needed()
did not use a per-CPU workqueue. They chose between two global
waitqueues depending on the SLOW_WORK_VERY_SLOW bit, which was not set,
so it was always the same waitqueue.

I can't find out how it is ensured that a waiter on a certain CPU is
woken up by the other side. My guess is that the timeout in
schedule_timeout() ensures that it does not wait forever (or it relies
on a random wakeup).

fscache_object_sleep_till_congested() must be invoked from preemptible
context in order for schedule() to work. In this case this_cpu_ptr()
should complain with CONFIG_DEBUG_PREEMPT enabled, unless the thread is
bound to one CPU.

wake_up() wakes only one waiter and I'm not sure if it is guaranteed
that only one waiter exists.

Replace the per-CPU waitqueue with one global waitqueue.

Fixes: 8b8edefa2fffb ("fscache: convert object to use workqueue instead of slow-work")
Reported-by: Gregor Beck <gregor.beck@gmail.com>
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
---
v2: https://lore.kernel.org/all/20211029083839.xwwt7jgzru3kcpii@linutronix.de/
Repost: https://lore.kernel.org/all/20211118165442.hekmz7xgisdzsyuh@linutronix.de/
Ping 1: https://lore.kernel.org/all/20211202205240.giqxuxqemlxxoobw@linutronix.de/
  |I noticed that -next gained commit
  |  608bfec640edb ("fscache: Remove the contents of the fscache driver, pending rewrite")
  |
  |which removes slow_work_sleep_till_thread_needed() and the per-CPU
  |variable. Since it looks like a bug, what happens stable wise?
Ping 2: https://lore.kernel.org/all/YbdiYN+wU1RN9mWo@linutronix.de/

 fs/fscache/internal.h |  1 -
 fs/fscache/main.c     |  6 ------
 fs/fscache/object.c   | 13 +++++--------
 3 files changed, 5 insertions(+), 15 deletions(-)