Message ID | 20200501164543.24423-2-james.morse@arm.com (mailing list archive)
---|---
State | New, archived
Series | ACPI / APEI: Kick the memory_failure() queue for synchronous errors
On Friday, May 1, 2020 6:45:41 PM CEST James Morse wrote:
> The GHES code calls memory_failure_queue() from IRQ context to schedule
> work on the current CPU so that memory_failure() can sleep.
>
> For synchronous memory errors the arch code needs to know any signals
> that memory_failure() will trigger are pending before it returns to
> user-space, possibly when exiting from the IRQ.
>
> Add a helper to kick the memory failure queue, to ensure the scheduled
> work has happened. This has to be called from process context, so may
> have been migrated from the original cpu. Pass the cpu the work was
> queued on.
>
> Change memory_failure_work_func() to permit being called on the 'wrong'
> cpu.
>
> Signed-off-by: James Morse <james.morse@arm.com>
> Tested-by: Tyler Baicar <baicar@os.amperecomputing.com>
> ---
>  include/linux/mm.h  |  1 +
>  mm/memory-failure.c | 15 ++++++++++++++-
>  2 files changed, 15 insertions(+), 1 deletion(-)
>
> diff --git a/include/linux/mm.h b/include/linux/mm.h
> index 5a323422d783..c606dbbfa5e1 100644
> --- a/include/linux/mm.h
> +++ b/include/linux/mm.h
> @@ -3012,6 +3012,7 @@ enum mf_flags {
>  };
>  extern int memory_failure(unsigned long pfn, int flags);
>  extern void memory_failure_queue(unsigned long pfn, int flags);
> +extern void memory_failure_queue_kick(int cpu);
>  extern int unpoison_memory(unsigned long pfn);
>  extern int get_hwpoison_page(struct page *page);
>  #define put_hwpoison_page(page)	put_page(page)
> diff --git a/mm/memory-failure.c b/mm/memory-failure.c
> index a96364be8ab4..c4afb407bf0f 100644
> --- a/mm/memory-failure.c
> +++ b/mm/memory-failure.c
> @@ -1493,7 +1493,7 @@ static void memory_failure_work_func(struct work_struct *work)
>  	unsigned long proc_flags;
>  	int gotten;
>
> -	mf_cpu = this_cpu_ptr(&memory_failure_cpu);
> +	mf_cpu = container_of(work, struct memory_failure_cpu, work);
>  	for (;;) {
>  		spin_lock_irqsave(&mf_cpu->lock, proc_flags);
>  		gotten = kfifo_get(&mf_cpu->fifo, &entry);
> @@ -1507,6 +1507,19 @@ static void memory_failure_work_func(struct work_struct *work)
>  	}
>  }
>
> +/*
> + * Process memory_failure work queued on the specified CPU.
> + * Used to avoid return-to-userspace racing with the memory_failure workqueue.
> + */
> +void memory_failure_queue_kick(int cpu)
> +{
> +	struct memory_failure_cpu *mf_cpu;
> +
> +	mf_cpu = &per_cpu(memory_failure_cpu, cpu);
> +	cancel_work_sync(&mf_cpu->work);
> +	memory_failure_work_func(&mf_cpu->work);
> +}
> +
>  static int __init memory_failure_init(void)
>  {
>  	struct memory_failure_cpu *mf_cpu;
>

I could apply this provided an ACK from the mm people.

Thanks!
On Mon, 18 May 2020 14:45:05 +0200 "Rafael J. Wysocki" <rjw@rjwysocki.net> wrote:

> On Friday, May 1, 2020 6:45:41 PM CEST James Morse wrote:
> > The GHES code calls memory_failure_queue() from IRQ context to schedule
> > work on the current CPU so that memory_failure() can sleep.
> >
> > For synchronous memory errors the arch code needs to know any signals
> > that memory_failure() will trigger are pending before it returns to
> > user-space, possibly when exiting from the IRQ.
> >
> > Add a helper to kick the memory failure queue, to ensure the scheduled
> > work has happened. This has to be called from process context, so may
> > have been migrated from the original cpu. Pass the cpu the work was
> > queued on.
> >
> > Change memory_failure_work_func() to permit being called on the 'wrong'
> > cpu.
> >
> > --- a/include/linux/mm.h
> > +++ b/include/linux/mm.h
> > @@ -3012,6 +3012,7 @@ enum mf_flags {
> >  };
> >  extern int memory_failure(unsigned long pfn, int flags);
> >  extern void memory_failure_queue(unsigned long pfn, int flags);
> > +extern void memory_failure_queue_kick(int cpu);
> >  extern int unpoison_memory(unsigned long pfn);
> >  extern int get_hwpoison_page(struct page *page);
> >  #define put_hwpoison_page(page)	put_page(page)
> > diff --git a/mm/memory-failure.c b/mm/memory-failure.c
> > index a96364be8ab4..c4afb407bf0f 100644
> > --- a/mm/memory-failure.c
> > +++ b/mm/memory-failure.c
> > @@ -1493,7 +1493,7 @@ static void memory_failure_work_func(struct work_struct *work)
> >  	unsigned long proc_flags;
> >  	int gotten;
> >
> > -	mf_cpu = this_cpu_ptr(&memory_failure_cpu);
> > +	mf_cpu = container_of(work, struct memory_failure_cpu, work);
> >  	for (;;) {
> >  		spin_lock_irqsave(&mf_cpu->lock, proc_flags);
> >  		gotten = kfifo_get(&mf_cpu->fifo, &entry);
> > @@ -1507,6 +1507,19 @@ static void memory_failure_work_func(struct work_struct *work)
> >  	}
> >  }
> >
> > +/*
> > + * Process memory_failure work queued on the specified CPU.
> > + * Used to avoid return-to-userspace racing with the memory_failure workqueue.
> > + */
> > +void memory_failure_queue_kick(int cpu)
> > +{
> > +	struct memory_failure_cpu *mf_cpu;
> > +
> > +	mf_cpu = &per_cpu(memory_failure_cpu, cpu);
> > +	cancel_work_sync(&mf_cpu->work);
> > +	memory_failure_work_func(&mf_cpu->work);
> > +}
> > +
> >  static int __init memory_failure_init(void)
> >  {
> >  	struct memory_failure_cpu *mf_cpu;
> >
>
> I could apply this provided an ACK from the mm people.
>

Naoya Horiguchi is the memory-failure.c person.  A review would be
appreciated please?

I'm struggling with it a bit.  memory_failure_queue_kick() should be
called on the cpu which is identified by arg `cpu', yes?
memory_failure_work_func() appears to assume this.

If that's right then a) why bother passing in the `cpu' arg?  and b)
what keeps this thread pinned to that CPU?  cancel_work_sync() can
schedule.
On Mon, May 18, 2020 at 12:58:28PM -0700, Andrew Morton wrote:
> On Mon, 18 May 2020 14:45:05 +0200 "Rafael J. Wysocki" <rjw@rjwysocki.net> wrote:
>
> > On Friday, May 1, 2020 6:45:41 PM CEST James Morse wrote:
> > > The GHES code calls memory_failure_queue() from IRQ context to schedule
> > > work on the current CPU so that memory_failure() can sleep.
> > >
> > > For synchronous memory errors the arch code needs to know any signals
> > > that memory_failure() will trigger are pending before it returns to
> > > user-space, possibly when exiting from the IRQ.
> > >
> > > Add a helper to kick the memory failure queue, to ensure the scheduled
> > > work has happened. This has to be called from process context, so may
> > > have been migrated from the original cpu. Pass the cpu the work was
> > > queued on.
> > >
> > > Change memory_failure_work_func() to permit being called on the 'wrong'
> > > cpu.
> > >
> > > --- a/include/linux/mm.h
> > > +++ b/include/linux/mm.h
> > > @@ -3012,6 +3012,7 @@ enum mf_flags {
> > >  };
> > >  extern int memory_failure(unsigned long pfn, int flags);
> > >  extern void memory_failure_queue(unsigned long pfn, int flags);
> > > +extern void memory_failure_queue_kick(int cpu);
> > >  extern int unpoison_memory(unsigned long pfn);
> > >  extern int get_hwpoison_page(struct page *page);
> > >  #define put_hwpoison_page(page)	put_page(page)
> > > diff --git a/mm/memory-failure.c b/mm/memory-failure.c
> > > index a96364be8ab4..c4afb407bf0f 100644
> > > --- a/mm/memory-failure.c
> > > +++ b/mm/memory-failure.c
> > > @@ -1493,7 +1493,7 @@ static void memory_failure_work_func(struct work_struct *work)
> > >  	unsigned long proc_flags;
> > >  	int gotten;
> > >
> > > -	mf_cpu = this_cpu_ptr(&memory_failure_cpu);
> > > +	mf_cpu = container_of(work, struct memory_failure_cpu, work);
> > >  	for (;;) {
> > >  		spin_lock_irqsave(&mf_cpu->lock, proc_flags);
> > >  		gotten = kfifo_get(&mf_cpu->fifo, &entry);
> > > @@ -1507,6 +1507,19 @@ static void memory_failure_work_func(struct work_struct *work)
> > >  	}
> > >  }
> > >
> > > +/*
> > > + * Process memory_failure work queued on the specified CPU.
> > > + * Used to avoid return-to-userspace racing with the memory_failure workqueue.
> > > + */
> > > +void memory_failure_queue_kick(int cpu)
> > > +{
> > > +	struct memory_failure_cpu *mf_cpu;
> > > +
> > > +	mf_cpu = &per_cpu(memory_failure_cpu, cpu);
> > > +	cancel_work_sync(&mf_cpu->work);
> > > +	memory_failure_work_func(&mf_cpu->work);
> > > +}
> > > +
> > >  static int __init memory_failure_init(void)
> > >  {
> > >  	struct memory_failure_cpu *mf_cpu;
> > >
> >
> > I could apply this provided an ACK from the mm people.
> >
>
> Naoya Horiguchi is the memory-failure.c person.  A review would be
> appreciated please?
>
> I'm struggling with it a bit.  memory_failure_queue_kick() should be
> called on the cpu which is identified by arg `cpu', yes?
> memory_failure_work_func() appears to assume this.
>
> If that's right then a) why bother passing in the `cpu' arg?  and b)
> what keeps this thread pinned to that CPU?  cancel_work_sync() can
> schedule.

If I read correctly, memory_failure work is queued on the CPU on which the
user process ran when it touched the corrupted memory, and the process can
be scheduled on another CPU by the time the kernel returns to userspace
after handling the GHES event.  So we need to remember where the
memory_failure event was queued in order to flush the proper work queue.
So I feel that this properly implements it.
Considering the effect on the other caller: memory_failure_queue() currently
has two callers, ghes_handle_memory_failure() and cec_add_elem().  The former
is the one we are changing here.  The latter performs soft offlining (which is
related to corrected, non-fatal errors), so it is not affected by the reported
issue.  So I don't think this change breaks the other caller.

So I'm fine with the suggested change.

Acked-by: Naoya Horiguchi <naoya.horiguchi@nec.com>

Thanks,
Naoya Horiguchi
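For readers following along, the per-CPU queue Naoya refers to looks roughly
like the sketch below.  This is a simplified paraphrase of the v5.7-era
definitions in mm/memory-failure.c, so sizes and field details may differ
slightly from the actual file; it is shown only to make the container_of()
change concrete.  Because the work item is embedded in the per-CPU structure,
the work function can recover the owning queue from the work pointer itself,
which is what lets memory_failure_queue_kick() hand it
&per_cpu(memory_failure_cpu, cpu).work from any CPU.

/* Simplified sketch, paraphrased from mm/memory-failure.c (v5.7 era). */
#include <linux/kfifo.h>
#include <linux/percpu-defs.h>
#include <linux/spinlock.h>
#include <linux/workqueue.h>

#define MEMORY_FAILURE_FIFO_ORDER	4	/* assumed value for the sketch */
#define MEMORY_FAILURE_FIFO_SIZE	(1 << MEMORY_FAILURE_FIFO_ORDER)

struct memory_failure_entry {
	unsigned long pfn;
	int flags;
};

struct memory_failure_cpu {
	DECLARE_KFIFO(fifo, struct memory_failure_entry,
		      MEMORY_FAILURE_FIFO_SIZE);
	spinlock_t lock;
	struct work_struct work;	/* embedded, so container_of() can find the owner */
};

static DEFINE_PER_CPU(struct memory_failure_cpu, memory_failure_cpu);

/*
 * Old lookup: only correct when the work function runs on the owning CPU.
 *	mf_cpu = this_cpu_ptr(&memory_failure_cpu);
 * New lookup: correct on any CPU, because the work item identifies its queue.
 *	mf_cpu = container_of(work, struct memory_failure_cpu, work);
 */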
On Tue, May 19, 2020 at 5:15 AM HORIGUCHI NAOYA(堀口 直也) <naoya.horiguchi@nec.com> wrote:
>
> On Mon, May 18, 2020 at 12:58:28PM -0700, Andrew Morton wrote:
> > On Mon, 18 May 2020 14:45:05 +0200 "Rafael J. Wysocki" <rjw@rjwysocki.net> wrote:
> >
> > > On Friday, May 1, 2020 6:45:41 PM CEST James Morse wrote:
> > > > The GHES code calls memory_failure_queue() from IRQ context to schedule
> > > > work on the current CPU so that memory_failure() can sleep.
> > > >
> > > > For synchronous memory errors the arch code needs to know any signals
> > > > that memory_failure() will trigger are pending before it returns to
> > > > user-space, possibly when exiting from the IRQ.
> > > >
> > > > Add a helper to kick the memory failure queue, to ensure the scheduled
> > > > work has happened. This has to be called from process context, so may
> > > > have been migrated from the original cpu. Pass the cpu the work was
> > > > queued on.
> > > >
> > > > Change memory_failure_work_func() to permit being called on the 'wrong'
> > > > cpu.
> > > >
> > > > --- a/include/linux/mm.h
> > > > +++ b/include/linux/mm.h
> > > > @@ -3012,6 +3012,7 @@ enum mf_flags {
> > > >  };
> > > >  extern int memory_failure(unsigned long pfn, int flags);
> > > >  extern void memory_failure_queue(unsigned long pfn, int flags);
> > > > +extern void memory_failure_queue_kick(int cpu);
> > > >  extern int unpoison_memory(unsigned long pfn);
> > > >  extern int get_hwpoison_page(struct page *page);
> > > >  #define put_hwpoison_page(page)	put_page(page)
> > > > diff --git a/mm/memory-failure.c b/mm/memory-failure.c
> > > > index a96364be8ab4..c4afb407bf0f 100644
> > > > --- a/mm/memory-failure.c
> > > > +++ b/mm/memory-failure.c
> > > > @@ -1493,7 +1493,7 @@ static void memory_failure_work_func(struct work_struct *work)
> > > >  	unsigned long proc_flags;
> > > >  	int gotten;
> > > >
> > > > -	mf_cpu = this_cpu_ptr(&memory_failure_cpu);
> > > > +	mf_cpu = container_of(work, struct memory_failure_cpu, work);
> > > >  	for (;;) {
> > > >  		spin_lock_irqsave(&mf_cpu->lock, proc_flags);
> > > >  		gotten = kfifo_get(&mf_cpu->fifo, &entry);
> > > > @@ -1507,6 +1507,19 @@ static void memory_failure_work_func(struct work_struct *work)
> > > >  	}
> > > >  }
> > > >
> > > > +/*
> > > > + * Process memory_failure work queued on the specified CPU.
> > > > + * Used to avoid return-to-userspace racing with the memory_failure workqueue.
> > > > + */
> > > > +void memory_failure_queue_kick(int cpu)
> > > > +{
> > > > +	struct memory_failure_cpu *mf_cpu;
> > > > +
> > > > +	mf_cpu = &per_cpu(memory_failure_cpu, cpu);
> > > > +	cancel_work_sync(&mf_cpu->work);
> > > > +	memory_failure_work_func(&mf_cpu->work);
> > > > +}
> > > > +
> > > >  static int __init memory_failure_init(void)
> > > >  {
> > > >  	struct memory_failure_cpu *mf_cpu;
> > > >
> > >
> > > I could apply this provided an ACK from the mm people.
> > >
> >
> > Naoya Horiguchi is the memory-failure.c person.  A review would be
> > appreciated please?
> >
> > I'm struggling with it a bit.  memory_failure_queue_kick() should be
> > called on the cpu which is identified by arg `cpu', yes?
> > memory_failure_work_func() appears to assume this.
> >
> > If that's right then a) why bother passing in the `cpu' arg?  and b)
> > what keeps this thread pinned to that CPU?  cancel_work_sync() can
> > schedule.
>
> If I read correctly, memory_failure work is queued on the CPU on which the
> user process ran when it touched the corrupted memory, and the process can
> be scheduled on another CPU by the time the kernel returns to userspace
> after handling the GHES event.  So we need to remember where the
> memory_failure event was queued in order to flush the proper work queue.
> So I feel that this properly implements it.
>
> Considering the effect on the other caller: memory_failure_queue() currently
> has two callers, ghes_handle_memory_failure() and cec_add_elem().  The former
> is the one we are changing here.  The latter performs soft offlining (which is
> related to corrected, non-fatal errors), so it is not affected by the reported
> issue.  So I don't think this change breaks the other caller.
>
> So I'm fine with the suggested change.
>
> Acked-by: Naoya Horiguchi <naoya.horiguchi@nec.com>

OK, thanks!

So because patch [1/3] has been ACKed already, I'm applying this series as
5.8 material.

Thanks everyone!
diff --git a/include/linux/mm.h b/include/linux/mm.h
index 5a323422d783..c606dbbfa5e1 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -3012,6 +3012,7 @@ enum mf_flags {
 };
 extern int memory_failure(unsigned long pfn, int flags);
 extern void memory_failure_queue(unsigned long pfn, int flags);
+extern void memory_failure_queue_kick(int cpu);
 extern int unpoison_memory(unsigned long pfn);
 extern int get_hwpoison_page(struct page *page);
 #define put_hwpoison_page(page)	put_page(page)
diff --git a/mm/memory-failure.c b/mm/memory-failure.c
index a96364be8ab4..c4afb407bf0f 100644
--- a/mm/memory-failure.c
+++ b/mm/memory-failure.c
@@ -1493,7 +1493,7 @@ static void memory_failure_work_func(struct work_struct *work)
 	unsigned long proc_flags;
 	int gotten;
 
-	mf_cpu = this_cpu_ptr(&memory_failure_cpu);
+	mf_cpu = container_of(work, struct memory_failure_cpu, work);
 	for (;;) {
 		spin_lock_irqsave(&mf_cpu->lock, proc_flags);
 		gotten = kfifo_get(&mf_cpu->fifo, &entry);
@@ -1507,6 +1507,19 @@ static void memory_failure_work_func(struct work_struct *work)
 	}
 }
 
+/*
+ * Process memory_failure work queued on the specified CPU.
+ * Used to avoid return-to-userspace racing with the memory_failure workqueue.
+ */
+void memory_failure_queue_kick(int cpu)
+{
+	struct memory_failure_cpu *mf_cpu;
+
+	mf_cpu = &per_cpu(memory_failure_cpu, cpu);
+	cancel_work_sync(&mf_cpu->work);
+	memory_failure_work_func(&mf_cpu->work);
+}
+
 static int __init memory_failure_init(void)
 {
 	struct memory_failure_cpu *mf_cpu;
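The new helper is only half of the story: a caller still has to record which
CPU the work was queued on and kick that queue from process context before
the faulting task returns to user-space.  The sketch below illustrates that
usage pattern only; the struct and function names (mf_kick_work,
mf_queue_and_arm_kick, mf_kick_func) are made up for illustration and are not
the code the rest of this series adds to the GHES driver, and the
three-argument bool form of task_work_add() shown is the v5.7/v5.8-era
signature.

/* Illustrative sketch only -- not the code added by this series. */
#include <linux/kernel.h>
#include <linux/mm.h>
#include <linux/sched.h>
#include <linux/slab.h>
#include <linux/smp.h>
#include <linux/task_work.h>

struct mf_kick_work {
	struct callback_head twork;
	int cpu;			/* CPU the memory_failure work was queued on */
};

/*
 * Runs in process context just before returning to user-space, possibly on
 * a different CPU than the one that took the error.
 */
static void mf_kick_func(struct callback_head *twork)
{
	struct mf_kick_work *kw = container_of(twork, struct mf_kick_work, twork);

	memory_failure_queue_kick(kw->cpu);	/* drain the original CPU's queue */
	kfree(kw);
}

/* Called from the (IRQ-like) error handler, still on the faulting CPU. */
static int mf_queue_and_arm_kick(unsigned long pfn, int flags)
{
	struct mf_kick_work *kw = kzalloc(sizeof(*kw), GFP_ATOMIC);

	if (!kw)
		return -ENOMEM;

	kw->cpu = smp_processor_id();		/* remember where the work lands */
	memory_failure_queue(pfn, flags);

	init_task_work(&kw->twork, mf_kick_func);
	/* v5.7/v5.8-era signature: bool notify as the third argument. */
	return task_work_add(current, &kw->twork, true);
}

The kick has to happen in process context because both cancel_work_sync() and
memory_failure() can sleep, which is also why deferring it with task_work
(rather than doing it directly in the error handler) fits this pattern.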