Message ID | 20201207220949.830352-8-yuzhao@google.com
---|---
State | New, archived
Series | mm: lru related cleanups
On Mon, Dec 07, 2020 at 03:09:45PM -0700, Yu Zhao wrote:
> Move scattered VM_BUG_ONs to two essential places that cover all
> lru list additions and deletions.

I'd like to see these converted into VM_BUG_ON_PGFLAGS so you have
to take that extra CONFIG step to enable checking them.
On Mon, Dec 07, 2020 at 10:24:29PM +0000, Matthew Wilcox wrote:
> On Mon, Dec 07, 2020 at 03:09:45PM -0700, Yu Zhao wrote:
> > Move scattered VM_BUG_ONs to two essential places that cover all
> > lru list additions and deletions.
>
> I'd like to see these converted into VM_BUG_ON_PGFLAGS so you have
> to take that extra CONFIG step to enable checking them.

Right. I'll make sure it won't slip my mind again in v2.
On Tue, Dec 15, 2020 at 05:54:51PM -0700, Yu Zhao wrote:
> On Mon, Dec 07, 2020 at 10:24:29PM +0000, Matthew Wilcox wrote:
> > On Mon, Dec 07, 2020 at 03:09:45PM -0700, Yu Zhao wrote:
> > > Move scattered VM_BUG_ONs to two essential places that cover all
> > > lru list additions and deletions.
> >
> > I'd like to see these converted into VM_BUG_ON_PGFLAGS so you have
> > to take that extra CONFIG step to enable checking them.
>
> Right. I'll make sure it won't slip my mind again in v2.

Hugh has enlightened me that VM_BUG_ON_PGFLAGS() should not be used
for this purpose. Sorry for the bad recommendation.
diff --git a/include/linux/mm_inline.h b/include/linux/mm_inline.h
index ef3fd79222e5..6d907a4dd6ad 100644
--- a/include/linux/mm_inline.h
+++ b/include/linux/mm_inline.h
@@ -66,6 +66,8 @@ static inline enum lru_list page_lru_base_type(struct page *page)
  */
 static __always_inline void __clear_page_lru_flags(struct page *page)
 {
+	VM_BUG_ON_PAGE(!PageLRU(page), page);
+
 	__ClearPageLRU(page);

 	/* this shouldn't happen, so leave the flags to bad_page() */
@@ -87,6 +89,8 @@ static __always_inline enum lru_list page_lru(struct page *page)
 {
 	enum lru_list lru;

+	VM_BUG_ON_PAGE(PageActive(page) && PageUnevictable(page), page);
+
 	if (PageUnevictable(page))
 		lru = LRU_UNEVICTABLE;
 	else {
diff --git a/mm/swap.c b/mm/swap.c
index a37c896a32b0..09c4a48e0bcd 100644
--- a/mm/swap.c
+++ b/mm/swap.c
@@ -83,7 +83,6 @@ static void __page_cache_release(struct page *page)
 		unsigned long flags;

 		lruvec = lock_page_lruvec_irqsave(page, &flags);
-		VM_BUG_ON_PAGE(!PageLRU(page), page);
 		del_page_from_lru_list(page, lruvec);
 		__clear_page_lru_flags(page);
 		unlock_page_lruvec_irqrestore(lruvec, flags);
@@ -909,7 +908,6 @@ void release_pages(struct page **pages, int nr)
 			if (prev_lruvec != lruvec)
 				lock_batch = 0;

-			VM_BUG_ON_PAGE(!PageLRU(page), page);
 			del_page_from_lru_list(page, lruvec);
 			__clear_page_lru_flags(page);
 		}
diff --git a/mm/vmscan.c b/mm/vmscan.c
index e6bdfdfa2da1..95e581c9d9af 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -4279,7 +4279,6 @@ void check_move_unevictable_pages(struct pagevec *pvec)
 		lruvec = relock_page_lruvec_irq(page, lruvec);
 		if (page_evictable(page) && PageUnevictable(page)) {
-			VM_BUG_ON_PAGE(PageActive(page), page);
 			del_page_from_lru_list(page, lruvec);
 			ClearPageUnevictable(page);
 			add_page_to_lru_list(page, lruvec);
Move scattered VM_BUG_ONs to two essential places that cover all
lru list additions and deletions.

Signed-off-by: Yu Zhao <yuzhao@google.com>
---
 include/linux/mm_inline.h | 4 ++++
 mm/swap.c                 | 2 --
 mm/vmscan.c               | 1 -
 3 files changed, 4 insertions(+), 3 deletions(-)