| Message ID | 20220708144406.GJ27531@techsingularity.net (mailing list archive) |
| --- | --- |
| State | New |
| Series | mm/page_alloc: replace local_lock with normal spinlock -fix -fix |
On 7/8/22 16:44, Mel Gorman wrote:
> pcpu_spin_unlock and pcpu_spin_unlock_irqrestore both unlock
> pcp->lock and then enable preemption. This lacks symmetry against
> both the pcpu_spin helpers and differs from how local_unlock_* is
> implemented. While this is harmless, it's unnecessary and it's
> generally better to unwind locks and preemption state in the reverse
> of the order in which they were acquired.

Hm, I'm confused: it seems it's done in reverse order (which I agree
with) before this -fix-fix, but not after it?

Before, pcpu_spin_lock() (and variants) do pcpu_task_pin() and then
spin_lock() (or a variant), and pcpu_spin_unlock() does spin_unlock()
and then pcpu_task_unpin(). That seems symmetrical, i.e. reverse order,
to me? And it seems to match what the local_lock family does too.

> This is a fix on top of the mm-unstable patch
> mm-page_alloc-replace-local_lock-with-normal-spinlock-fix.patch
>
> Signed-off-by: Mel Gorman <mgorman@techsingularity.net>
> ---
>  mm/page_alloc.c | 4 ++--
>  1 file changed, 2 insertions(+), 2 deletions(-)
>
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index 934d1b5a5449..d0141e51e613 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -192,14 +192,14 @@ static DEFINE_MUTEX(pcp_batch_high_lock);
>
>  #define pcpu_spin_unlock(member, ptr)				\
>  ({								\
> -	spin_unlock(&ptr->member);				\
>  	pcpu_task_unpin();					\
> +	spin_unlock(&ptr->member);				\
>  })
>
>  #define pcpu_spin_unlock_irqrestore(member, ptr, flags)	\
>  ({								\
> -	spin_unlock_irqrestore(&ptr->member, flags);		\
>  	pcpu_task_unpin();					\
> +	spin_unlock_irqrestore(&ptr->member, flags);		\
>  })
>
>  /* struct per_cpu_pages specific helpers. */
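[Editor's note: for context, a condensed sketch of the lock-side helpers
Vlastimil describes, reconstructed from the series as posted; the exact
macro arguments and comments in mm-unstable may differ slightly. The
point is that the pin happens before the lock is taken, so the symmetric
unwind is unlock first, unpin second.]

/*
 * On !PREEMPT_RT the "pin" is preempt_disable(); on PREEMPT_RT it is
 * migrate_disable(), since the spinlock is a sleeping lock there.
 */
#ifndef CONFIG_PREEMPT_RT
#define pcpu_task_pin()		preempt_disable()
#define pcpu_task_unpin()	preempt_enable()
#else
#define pcpu_task_pin()		migrate_disable()
#define pcpu_task_unpin()	migrate_enable()
#endif

/* Pin the task to the current CPU, then take the per-cpu lock. */
#define pcpu_spin_lock(type, member, ptr)			\
({								\
	type *_ret;						\
	pcpu_task_pin();					\
	_ret = ptr;						\
	spin_lock(&_ret->member);				\
	_ret;							\
})

Given that acquire order (pin, then lock), the pre-fix-fix unlock order
(spin_unlock(), then pcpu_task_unpin()) is already the mirror image.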
On Fri, Jul 08, 2022 at 04:54:47PM +0200, Vlastimil Babka wrote:
> On 7/8/22 16:44, Mel Gorman wrote:
> > pcpu_spin_unlock and pcpu_spin_unlock_irqrestore both unlock
> > pcp->lock and then enable preemption. This lacks symmetry against
> > both the pcpu_spin helpers and differs from how local_unlock_* is
> > implemented. While this is harmless, it's unnecessary and it's
> > generally better to unwind locks and preemption state in the reverse
> > of the order in which they were acquired.
>
> Hm, I'm confused: it seems it's done in reverse order (which I agree
> with) before this -fix-fix, but not after it?
>
> Before, pcpu_spin_lock() (and variants) do pcpu_task_pin() and then
> spin_lock() (or a variant), and pcpu_spin_unlock() does spin_unlock()
> and then pcpu_task_unpin(). That seems symmetrical, i.e. reverse order,
> to me? And it seems to match what the local_lock family does too.

You're not confused, I am. The patch and the changelog are outright
brain damage from excessive context switching and a sign that it's time
for the weekend to start. Sorry for this absolute misfortune.
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 934d1b5a5449..d0141e51e613 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -192,14 +192,14 @@ static DEFINE_MUTEX(pcp_batch_high_lock);
 
 #define pcpu_spin_unlock(member, ptr)				\
 ({								\
-	spin_unlock(&ptr->member);				\
 	pcpu_task_unpin();					\
+	spin_unlock(&ptr->member);				\
 })
 
 #define pcpu_spin_unlock_irqrestore(member, ptr, flags)	\
 ({								\
-	spin_unlock_irqrestore(&ptr->member, flags);		\
 	pcpu_task_unpin();					\
+	spin_unlock_irqrestore(&ptr->member, flags);		\
 })
 
 /* struct per_cpu_pages specific helpers. */
pcpu_spin_unlock and pcpu_spin_unlock_irqrestore both unlock pcp->lock
and then enable preemption. This lacks symmetry against both the
pcpu_spin helpers and differs from how local_unlock_* is implemented.
While this is harmless, it's unnecessary and it's generally better to
unwind locks and preemption state in the reverse of the order in which
they were acquired.

This is a fix on top of the mm-unstable patch
mm-page_alloc-replace-local_lock-with-normal-spinlock-fix.patch

Signed-off-by: Mel Gorman <mgorman@techsingularity.net>
---
 mm/page_alloc.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)
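[Editor's note: for comparison, this is approximately how the
local_lock family orders its operations in the !PREEMPT_RT case,
paraphrased from include/linux/local_lock_internal.h of that era, with
lockdep annotations abbreviated. local_unlock releases the lock before
re-enabling preemption, i.e. the same order the pcpu helpers used
before this -fix-fix, which is what Vlastimil's review points out.]

/* Lock: disable preemption first, then acquire the per-CPU lock. */
#define __local_lock(lock)					\
	do {							\
		preempt_disable();				\
		local_lock_acquire(this_cpu_ptr(lock));		\
	} while (0)

/*
 * Unlock: release the lock first, then re-enable preemption --
 * the exact reverse of the acquire order above.
 */
#define __local_unlock(lock)					\
	do {							\
		local_lock_release(this_cpu_ptr(lock));		\
		preempt_enable();				\
	} while (0)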