Message ID | 20210909011014.JJu-mAZB6%akpm@linux-foundation.org (mailing list archive)
State      | New
Series     | [1/8] mm/hmm: bypass devmap pte when all pfn requested flags are fulfilled
Andrew, I sent a v3 of this patch with a better description, as suggested by Vlastimil Babka and Steven Rostedt. I also forgot to add the Reviewed-bys and Acked-bys from v2, as Steven Rostedt pointed out. It's probably best to look at the email message [1].

1. https://lore.kernel.org/linux-mm/20210907162537.27cbf082@gandalf.local.home/

Thanks,
Liam

* Andrew Morton <akpm@linux-foundation.org> [210908 21:10]:
> From: Liam Howlett <liam.howlett@oracle.com>
> Subject: mmap_lock: change trace and locking order
>
> Print to the trace log before releasing the lock to avoid racing with
> other trace log printers of the same lock type.
>
> Link: https://lkml.kernel.org/r/20210903022041.1843024-1-Liam.Howlett@oracle.com
> Signed-off-by: Liam R. Howlett <Liam.Howlett@oracle.com>
> Suggested-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
> Reviewed-by: Matthew Wilcox (Oracle) <willy@infradead.org>
> Cc: Michel Lespinasse <walken.cr@gmail.com>
> Cc: Vlastimil Babka <vbabka@suse.cz>
> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
> ---
>
>  include/linux/mmap_lock.h |    8 ++++----
>  1 file changed, 4 insertions(+), 4 deletions(-)
>
> --- a/include/linux/mmap_lock.h~mmap_lock-change-trace-and-locking-order
> +++ a/include/linux/mmap_lock.h
> @@ -101,14 +101,14 @@ static inline bool mmap_write_trylock(st
>
>  static inline void mmap_write_unlock(struct mm_struct *mm)
>  {
> -	up_write(&mm->mmap_lock);
>  	__mmap_lock_trace_released(mm, true);
> +	up_write(&mm->mmap_lock);
>  }
>
>  static inline void mmap_write_downgrade(struct mm_struct *mm)
>  {
> -	downgrade_write(&mm->mmap_lock);
>  	__mmap_lock_trace_acquire_returned(mm, false, true);
> +	downgrade_write(&mm->mmap_lock);
>  }
>
>  static inline void mmap_read_lock(struct mm_struct *mm)
> @@ -140,8 +140,8 @@ static inline bool mmap_read_trylock(str
>
>  static inline void mmap_read_unlock(struct mm_struct *mm)
>  {
> -	up_read(&mm->mmap_lock);
>  	__mmap_lock_trace_released(mm, false);
> +	up_read(&mm->mmap_lock);
>  }
>
>  static inline bool mmap_read_trylock_non_owner(struct mm_struct *mm)
> @@ -155,8 +155,8 @@ static inline bool mmap_read_trylock_non
>
>  static inline void mmap_read_unlock_non_owner(struct mm_struct *mm)
>  {
> -	up_read_non_owner(&mm->mmap_lock);
>  	__mmap_lock_trace_released(mm, false);
> +	up_read_non_owner(&mm->mmap_lock);
>  }
>
>  static inline void mmap_assert_locked(struct mm_struct *mm)
> _