Message ID | 20221118101714.19590-2-mgorman@techsingularity.net
---|---
State | New
Series | Leave IRQs enabled for per-cpu page allocations
On 11/18/22 11:17, Mel Gorman wrote:
> free_unref_page_list() has neglected to remove pages properly from the
> list of pages to free since forever. It works by coincidence because
> list_add happened to do the right thing adding the pages to just the
> PCP lists. However, a later patch added pages to either the PCP list or
> the zone list but only properly deleted the page from the list in one
> path leading to list corruption and a subsequent failure. As a
> preparation patch, always delete the pages from one list properly
> before adding to another. On its own, this fixes nothing although it
> adds a fractional amount of overhead but is critical to the next patch.
>
> Reported-by: Hugh Dickins <hughd@google.com>
> Signed-off-by: Mel Gorman <mgorman@techsingularity.net>

Reviewed-by: Vlastimil Babka <vbabka@suse.cz>

> ---
>  mm/page_alloc.c | 2 ++
>  1 file changed, 2 insertions(+)
>
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index 218b28ee49ed..1ec54173b8d4 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -3546,6 +3546,8 @@ void free_unref_page_list(struct list_head *list)
> 	list_for_each_entry_safe(page, next, list, lru) {
> 		struct zone *zone = page_zone(page);
>
> +		list_del(&page->lru);
> +
> 		/* Different zone, different pcp lock. */
> 		if (zone != locked_zone) {
> 			if (pcp)