Message ID: 20220830123604.25763-3-linmiaohe@huawei.com (mailing list archive)
State: New
Series: A few cleanup patches for memory-failure
Hi Miaohe,

On Tue, Aug 30, 2022 at 08:36:00PM +0800, Miaohe Lin wrote:
> It's more recommended to use __PageMovable() to detect non-lru movable
> pages. We can avoid bumping page refcnt via isolate_movable_page() for
> the isolated lru pages. Also if pages become PageLRU just after they're
> checked but before trying to isolate them, isolate_lru_page() will be
> called to do the right work.

Good point, non-lru movable page is currently handled by isolate_lru_page(),
which always fails. This means that we lost the chance of soft-offlining
for any non-lru movable page. So this patch improves the situation.

>
> Signed-off-by: Miaohe Lin <linmiaohe@huawei.com>
> ---
>  mm/memory-failure.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/mm/memory-failure.c b/mm/memory-failure.c
> index a923a6dde871..3966fa6abe03 100644
> --- a/mm/memory-failure.c
> +++ b/mm/memory-failure.c
> @@ -2404,7 +2404,7 @@ EXPORT_SYMBOL(unpoison_memory);
>  static bool isolate_page(struct page *page, struct list_head *pagelist)
>  {
>  	bool isolated = false;
> -	bool lru = PageLRU(page);
> +	bool lru = !__PageMovable(page);

It seems that PAGE_MAPPING_MOVABLE is not set for hugetlb pages, so
lru becomes true for them. Then, if isolate_hugetlb() succeeds,
inc_node_page_state() is called for hugetlb pages, maybe that's not expected.

>
> 	if (PageHuge(page)) {
> 		isolated = !isolate_hugetlb(page, pagelist);
	} else {
		if (lru)
			isolated = !isolate_lru_page(page);
		else
			isolated = !isolate_movable_page(page, ISOLATE_UNEVICTABLE);

		if (isolated)
			list_add(&page->lru, pagelist);
	}

	if (isolated && lru)
		inc_node_page_state(page, NR_ISOLATED_ANON +
				    page_is_file_lru(page));

so, how about moving this if block into the above else block?
Then, the automatic variable lru can be moved into the else block.

Thanks,
Naoya Horiguchi
On 2022/9/5 13:22, HORIGUCHI NAOYA(堀口 直也) wrote:
> Hi Miaohe,
>
> On Tue, Aug 30, 2022 at 08:36:00PM +0800, Miaohe Lin wrote:
>> It's more recommended to use __PageMovable() to detect non-lru movable
>> pages. We can avoid bumping page refcnt via isolate_movable_page() for
>> the isolated lru pages. Also if pages become PageLRU just after they're
>> checked but before trying to isolate them, isolate_lru_page() will be
>> called to do the right work.
>
> Good point, non-lru movable page is currently handled by isolate_lru_page(),
> which always fails. This means that we lost the chance of soft-offlining
> for any non-lru movable page. So this patch improves the situation.

Non-lru movable pages will still be handled by isolate_movable_page() before
the code change, as they don't have PageLRU set. The current situation is that
isolated LRU pages are passed to isolate_movable_page() incorrectly. This might
not hurt. But the chance that pages become un-isolated and thus available just
after checking could be seized with this patch.

>
>>
>> Signed-off-by: Miaohe Lin <linmiaohe@huawei.com>
>> ---
>>  mm/memory-failure.c | 2 +-
>>  1 file changed, 1 insertion(+), 1 deletion(-)
>>
>> diff --git a/mm/memory-failure.c b/mm/memory-failure.c
>> index a923a6dde871..3966fa6abe03 100644
>> --- a/mm/memory-failure.c
>> +++ b/mm/memory-failure.c
>> @@ -2404,7 +2404,7 @@ EXPORT_SYMBOL(unpoison_memory);
>>  static bool isolate_page(struct page *page, struct list_head *pagelist)
>>  {
>>  	bool isolated = false;
>> -	bool lru = PageLRU(page);
>> +	bool lru = !__PageMovable(page);
>
> It seems that PAGE_MAPPING_MOVABLE is not set for hugetlb pages, so
> lru becomes true for them. Then, if isolate_hugetlb() succeeds,
> inc_node_page_state() is called for hugetlb pages, maybe that's not expected.

Yes, that's unexpected. Thanks for pointing this out.
>
>>
>> 	if (PageHuge(page)) {
>> 		isolated = !isolate_hugetlb(page, pagelist);
> 	} else {
> 		if (lru)
> 			isolated = !isolate_lru_page(page);
> 		else
> 			isolated = !isolate_movable_page(page, ISOLATE_UNEVICTABLE);
>
> 		if (isolated)
> 			list_add(&page->lru, pagelist);
> 	}
>
> 	if (isolated && lru)
> 		inc_node_page_state(page, NR_ISOLATED_ANON +
> 				    page_is_file_lru(page));
>
> so, how about moving this if block into the above else block?
> Then, the automatic variable lru can be moved into the else block.

Do you mean something like below?

diff --git a/mm/memory-failure.c b/mm/memory-failure.c
index df3bf266eebf..48780f3a61d3 100644
--- a/mm/memory-failure.c
+++ b/mm/memory-failure.c
@@ -2404,24 +2404,25 @@ EXPORT_SYMBOL(unpoison_memory);
 static bool isolate_page(struct page *page, struct list_head *pagelist)
 {
 	bool isolated = false;
-	bool lru = !__PageMovable(page);
 
 	if (PageHuge(page)) {
 		isolated = !isolate_hugetlb(page, pagelist);
 	} else {
+		bool lru = !__PageMovable(page);
+
 		if (lru)
 			isolated = !isolate_lru_page(page);
 		else
 			isolated = !isolate_movable_page(page, ISOLATE_UNEVICTABLE);
 
-		if (isolated)
+		if (isolated) {
 			list_add(&page->lru, pagelist);
+			if (lru)
+				inc_node_page_state(page, NR_ISOLATED_ANON +
+						    page_is_file_lru(page));
+		}
 	}
 
-	if (isolated && lru)
-		inc_node_page_state(page, NR_ISOLATED_ANON +
-				    page_is_file_lru(page));
-
 	/*
 	 * If we succeed to isolate the page, we grabbed another refcount on
 	 * the page, so we can safely drop the one we got from get_any_pages().

>
> Thanks,
> Naoya Horiguchi

Thanks a lot for your review and comment on this series, Naoya.

Thanks,
Miaohe Lin
On Mon, Sep 05, 2022 at 02:53:41PM +0800, Miaohe Lin wrote:
> On 2022/9/5 13:22, HORIGUCHI NAOYA(堀口 直也) wrote:
> > Hi Miaohe,
> >
> > On Tue, Aug 30, 2022 at 08:36:00PM +0800, Miaohe Lin wrote:
> >> It's more recommended to use __PageMovable() to detect non-lru movable
> >> pages. We can avoid bumping page refcnt via isolate_movable_page() for
> >> the isolated lru pages. Also if pages become PageLRU just after they're
> >> checked but before trying to isolate them, isolate_lru_page() will be
> >> called to do the right work.
> >
> > Good point, non-lru movable page is currently handled by isolate_lru_page(),
> > which always fails. This means that we lost the chance of soft-offlining
> > for any non-lru movable page. So this patch improves the situation.
>
> Non-lru movable pages will still be handled by isolate_movable_page() before
> the code change, as they don't have PageLRU set. The current situation is that
> isolated LRU pages are passed to isolate_movable_page() incorrectly. This might
> not hurt. But the chance that pages become un-isolated and thus available just
> after checking could be seized with this patch.

OK, thank you for correcting me.

>
> >
> >>
> >> Signed-off-by: Miaohe Lin <linmiaohe@huawei.com>
> >> ---
> >>  mm/memory-failure.c | 2 +-
> >>  1 file changed, 1 insertion(+), 1 deletion(-)
> >>
> >> diff --git a/mm/memory-failure.c b/mm/memory-failure.c
> >> index a923a6dde871..3966fa6abe03 100644
> >> --- a/mm/memory-failure.c
> >> +++ b/mm/memory-failure.c
> >> @@ -2404,7 +2404,7 @@ EXPORT_SYMBOL(unpoison_memory);
> >>  static bool isolate_page(struct page *page, struct list_head *pagelist)
> >>  {
> >>  	bool isolated = false;
> >> -	bool lru = PageLRU(page);
> >> +	bool lru = !__PageMovable(page);
> >
> > It seems that PAGE_MAPPING_MOVABLE is not set for hugetlb pages, so
> > lru becomes true for them. Then, if isolate_hugetlb() succeeds,
> > inc_node_page_state() is called for hugetlb pages, maybe that's not expected.
>
> Yes, that's unexpected.
> Thanks for pointing this out.
>
> >
> >>
> >> 	if (PageHuge(page)) {
> >> 		isolated = !isolate_hugetlb(page, pagelist);
> > 	} else {
> > 		if (lru)
> > 			isolated = !isolate_lru_page(page);
> > 		else
> > 			isolated = !isolate_movable_page(page, ISOLATE_UNEVICTABLE);
> >
> > 		if (isolated)
> > 			list_add(&page->lru, pagelist);
> > 	}
> >
> > 	if (isolated && lru)
> > 		inc_node_page_state(page, NR_ISOLATED_ANON +
> > 				    page_is_file_lru(page));
> >
> > so, how about moving this if block into the above else block?
> > Then, the automatic variable lru can be moved into the else block.
>
> Do you mean something like below?
>
> diff --git a/mm/memory-failure.c b/mm/memory-failure.c
> index df3bf266eebf..48780f3a61d3 100644
> --- a/mm/memory-failure.c
> +++ b/mm/memory-failure.c
> @@ -2404,24 +2404,25 @@ EXPORT_SYMBOL(unpoison_memory);
>  static bool isolate_page(struct page *page, struct list_head *pagelist)
>  {
>  	bool isolated = false;
> -	bool lru = !__PageMovable(page);
>  
>  	if (PageHuge(page)) {
>  		isolated = !isolate_hugetlb(page, pagelist);
>  	} else {
> +		bool lru = !__PageMovable(page);
> +
>  		if (lru)
>  			isolated = !isolate_lru_page(page);
>  		else
>  			isolated = !isolate_movable_page(page, ISOLATE_UNEVICTABLE);
>  
> -		if (isolated)
> +		if (isolated) {
>  			list_add(&page->lru, pagelist);
> +			if (lru)
> +				inc_node_page_state(page, NR_ISOLATED_ANON +
> +						    page_is_file_lru(page));
> +		}
>  	}
>  
> -	if (isolated && lru)
> -		inc_node_page_state(page, NR_ISOLATED_ANON +
> -				    page_is_file_lru(page));
> -
>  	/*
>  	 * If we succeed to isolate the page, we grabbed another refcount on
>  	 * the page, so we can safely drop the one we got from get_any_pages().
>

Yes, that's exactly what I thought of.

Thanks,
Naoya Horiguchi
On 2022/9/5 15:15, HORIGUCHI NAOYA(堀口 直也) wrote:
> On Mon, Sep 05, 2022 at 02:53:41PM +0800, Miaohe Lin wrote:
>> On 2022/9/5 13:22, HORIGUCHI NAOYA(堀口 直也) wrote:
>>> Hi Miaohe,
>>>
>>> On Tue, Aug 30, 2022 at 08:36:00PM +0800, Miaohe Lin wrote:
>>>> It's more recommended to use __PageMovable() to detect non-lru movable
>>>> pages. We can avoid bumping page refcnt via isolate_movable_page() for
>>>> the isolated lru pages. Also if pages become PageLRU just after they're
>>>> checked but before trying to isolate them, isolate_lru_page() will be
>>>> called to do the right work.
>>>
>>> Good point, non-lru movable page is currently handled by isolate_lru_page(),
>>> which always fails. This means that we lost the chance of soft-offlining
>>> for any non-lru movable page. So this patch improves the situation.
>>
>> Non-lru movable pages will still be handled by isolate_movable_page() before
>> the code change, as they don't have PageLRU set. The current situation is that
>> isolated LRU pages are passed to isolate_movable_page() incorrectly. This might
>> not hurt. But the chance that pages become un-isolated and thus available just
>> after checking could be seized with this patch.
>
> OK, thank you for correcting me.
>
>>
>>>
>>>>
>>>> Signed-off-by: Miaohe Lin <linmiaohe@huawei.com>
>>>> ---
>>>>  mm/memory-failure.c | 2 +-
>>>>  1 file changed, 1 insertion(+), 1 deletion(-)
>>>>
>>>> diff --git a/mm/memory-failure.c b/mm/memory-failure.c
>>>> index a923a6dde871..3966fa6abe03 100644
>>>> --- a/mm/memory-failure.c
>>>> +++ b/mm/memory-failure.c
>>>> @@ -2404,7 +2404,7 @@ EXPORT_SYMBOL(unpoison_memory);
>>>>  static bool isolate_page(struct page *page, struct list_head *pagelist)
>>>>  {
>>>>  	bool isolated = false;
>>>> -	bool lru = PageLRU(page);
>>>> +	bool lru = !__PageMovable(page);
>>>
>>> It seems that PAGE_MAPPING_MOVABLE is not set for hugetlb pages, so
>>> lru becomes true for them.
>>> Then, if isolate_hugetlb() succeeds,
>>> inc_node_page_state() is called for hugetlb pages, maybe that's not expected.
>>
>> Yes, that's unexpected. Thanks for pointing this out.
>>
>>>
>>>>
>>>> 	if (PageHuge(page)) {
>>>> 		isolated = !isolate_hugetlb(page, pagelist);
>>> 	} else {
>>> 		if (lru)
>>> 			isolated = !isolate_lru_page(page);
>>> 		else
>>> 			isolated = !isolate_movable_page(page, ISOLATE_UNEVICTABLE);
>>>
>>> 		if (isolated)
>>> 			list_add(&page->lru, pagelist);
>>> 	}
>>>
>>> 	if (isolated && lru)
>>> 		inc_node_page_state(page, NR_ISOLATED_ANON +
>>> 				    page_is_file_lru(page));
>>>
>>> so, how about moving this if block into the above else block?
>>> Then, the automatic variable lru can be moved into the else block.
>>
>> Do you mean something like below?
>>
>> diff --git a/mm/memory-failure.c b/mm/memory-failure.c
>> index df3bf266eebf..48780f3a61d3 100644
>> --- a/mm/memory-failure.c
>> +++ b/mm/memory-failure.c
>> @@ -2404,24 +2404,25 @@ EXPORT_SYMBOL(unpoison_memory);
>>  static bool isolate_page(struct page *page, struct list_head *pagelist)
>>  {
>>  	bool isolated = false;
>> -	bool lru = !__PageMovable(page);
>>  
>>  	if (PageHuge(page)) {
>>  		isolated = !isolate_hugetlb(page, pagelist);
>>  	} else {
>> +		bool lru = !__PageMovable(page);
>> +
>>  		if (lru)
>>  			isolated = !isolate_lru_page(page);
>>  		else
>>  			isolated = !isolate_movable_page(page, ISOLATE_UNEVICTABLE);
>>  
>> -		if (isolated)
>> +		if (isolated) {
>>  			list_add(&page->lru, pagelist);
>> +			if (lru)
>> +				inc_node_page_state(page, NR_ISOLATED_ANON +
>> +						    page_is_file_lru(page));
>> +		}
>>  	}
>>  
>> -	if (isolated && lru)
>> -		inc_node_page_state(page, NR_ISOLATED_ANON +
>> -				    page_is_file_lru(page));
>> -
>>  	/*
>>  	 * If we succeed to isolate the page, we grabbed another refcount on
>>  	 * the page, so we can safely drop the one we got from get_any_pages().
>>
>
> Yes, that's exactly what I thought of.

Hi Andrew:

The above code change could be applied to the mm-tree directly. Or should I
resend the v2 series?
Which one is more convenient for you? They're all fine to me. ;)

Many thanks to you both.

Thanks,
Miaohe Lin
On Mon, 5 Sep 2022 15:29:34 +0800 Miaohe Lin <linmiaohe@huawei.com> wrote:

> The above code change could be applied to the mm-tree directly. Or should I resend
> the v2 series? Which one is more convenient for you? They're all fine to me. ;)

I got it, thanks.

From: Miaohe Lin <linmiaohe@huawei.com>
Subject: mm-hwpoison-use-__pagemovable-to-detect-non-lru-movable-pages-fix
Date: Mon, 5 Sep 2022 14:53:41 +0800

fixes per Naoya Horiguchi

Link: https://lkml.kernel.org/r/1f7ee86e-7d28-0d8c-e0de-b7a5a94519e8@huawei.com
Cc: Naoya Horiguchi <naoya.horiguchi@nec.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
---

 mm/memory-failure.c | 16 +++++++++-------
 1 file changed, 9 insertions(+), 7 deletions(-)

--- a/mm/memory-failure.c~mm-hwpoison-use-__pagemovable-to-detect-non-lru-movable-pages-fix
+++ a/mm/memory-failure.c
@@ -2404,24 +2404,26 @@ EXPORT_SYMBOL(unpoison_memory);
 static bool isolate_page(struct page *page, struct list_head *pagelist)
 {
 	bool isolated = false;
-	bool lru = !__PageMovable(page);
 
 	if (PageHuge(page)) {
 		isolated = !isolate_hugetlb(page, pagelist);
 	} else {
+		bool lru = !__PageMovable(page);
+
 		if (lru)
 			isolated = !isolate_lru_page(page);
 		else
-			isolated = !isolate_movable_page(page, ISOLATE_UNEVICTABLE);
+			isolated = !isolate_movable_page(page,
+							 ISOLATE_UNEVICTABLE);
 
-		if (isolated)
+		if (isolated) {
 			list_add(&page->lru, pagelist);
+			if (lru)
+				inc_node_page_state(page, NR_ISOLATED_ANON +
+						    page_is_file_lru(page));
+		}
 	}
 
-	if (isolated && lru)
-	 	inc_node_page_state(page, NR_ISOLATED_ANON +
-				    page_is_file_lru(page));
-
 	/*
 	 * If we succeed to isolate the page, we grabbed another refcount on
 	 * the page, so we can safely drop the one we got from get_any_pages().
On 2022/9/6 5:53, Andrew Morton wrote:
> On Mon, 5 Sep 2022 15:29:34 +0800 Miaohe Lin <linmiaohe@huawei.com> wrote:
>
>> The above code change could be applied to the mm-tree directly. Or should I resend
>> the v2 series? Which one is more convenient for you? They're all fine to me. ;)
>
> I got it, thanks.

Many thanks for doing this. That's very kind of you.

Thanks,
Miaohe Lin

>
> From: Miaohe Lin <linmiaohe@huawei.com>
> Subject: mm-hwpoison-use-__pagemovable-to-detect-non-lru-movable-pages-fix
> Date: Mon, 5 Sep 2022 14:53:41 +0800
>
> fixes per Naoya Horiguchi
>
> Link: https://lkml.kernel.org/r/1f7ee86e-7d28-0d8c-e0de-b7a5a94519e8@huawei.com
> Cc: Naoya Horiguchi <naoya.horiguchi@nec.com>
> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
> ---
>
>  mm/memory-failure.c | 16 +++++++++-------
>  1 file changed, 9 insertions(+), 7 deletions(-)
>
> --- a/mm/memory-failure.c~mm-hwpoison-use-__pagemovable-to-detect-non-lru-movable-pages-fix
> +++ a/mm/memory-failure.c
> @@ -2404,24 +2404,26 @@ EXPORT_SYMBOL(unpoison_memory);
>  static bool isolate_page(struct page *page, struct list_head *pagelist)
>  {
>  	bool isolated = false;
> -	bool lru = !__PageMovable(page);
>  
>  	if (PageHuge(page)) {
>  		isolated = !isolate_hugetlb(page, pagelist);
>  	} else {
> +		bool lru = !__PageMovable(page);
> +
>  		if (lru)
>  			isolated = !isolate_lru_page(page);
>  		else
> -			isolated = !isolate_movable_page(page, ISOLATE_UNEVICTABLE);
> +			isolated = !isolate_movable_page(page,
> +							 ISOLATE_UNEVICTABLE);
>  
> -		if (isolated)
> +		if (isolated) {
>  			list_add(&page->lru, pagelist);
> +			if (lru)
> +				inc_node_page_state(page, NR_ISOLATED_ANON +
> +						    page_is_file_lru(page));
> +		}
>  	}
>  
> -	if (isolated && lru)
> -		inc_node_page_state(page, NR_ISOLATED_ANON +
> -				    page_is_file_lru(page));
> -
>  	/*
>  	 * If we succeed to isolate the page, we grabbed another refcount on
>  	 * the page, so we can safely drop the one we got from get_any_pages().
> _