Message ID | 1660034138397b82a0a8b6ae51cbe96bd583d89e.1700821416.git.quic_charante@quicinc.com (mailing list archive)
---|---
State | New
Series | mm: page_alloc: fixes for high atomic reserve calculations
On Fri, 24 Nov 2023, Charan Teja Kalla wrote:

> reserve_highatomic_pageblock() aims to reserve 1% of the managed
> pages of a zone, which is used for high-order atomic allocations.
>
> It uses the below calculation to reserve:
> static void reserve_highatomic_pageblock(struct page *page, ....) {
>
>    .......
>    max_managed = (zone_managed_pages(zone) / 100) + pageblock_nr_pages;
>
>    if (zone->nr_reserved_highatomic >= max_managed)
>        goto out;
>
>    zone->nr_reserved_highatomic += pageblock_nr_pages;
>    set_pageblock_migratetype(page, MIGRATE_HIGHATOMIC);
>    move_freepages_block(zone, page, MIGRATE_HIGHATOMIC, NULL);
>
> out:
>    ....
> }
>
> Since we are always appending pageblock_nr_pages to the 1% of the zone
> managed pages count, the minimum it turns into is 2 pageblocks, as
> nr_reserved_highatomic is incremented/decremented in pageblock sizes.
>
> Encountered a system (actually a VM running on the Linux kernel) with the
> below zone configuration:
> Normal free:7728kB boost:0kB min:804kB low:1004kB high:1204kB
> reserved_highatomic:8192KB managed:49224kB
>
> The existing calculation makes it reserve 8MB (with a pageblock size
> of 4MB), i.e. 16% of the zone managed memory. Reserving such a high
> amount of memory can easily exert memory pressure on the system and
> may lead to unnecessary reclaims until the high atomic reserves are
> unreserved.
>
> Since high atomic reserves are managed in pageblock-size granules, as
> MIGRATE_HIGHATOMIC is set for such pageblocks, fix the calculation of
> high atomic reserves so that the minimum is one pageblock and the
> maximum is approximately 1% of the zone managed pages.
>
> Acked-by: Mel Gorman <mgorman@techsingularity.net>
> Signed-off-by: Charan Teja Kalla <quic_charante@quicinc.com>

Acked-by: David Rientjes <rientjes@google.com>
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 733732e..a789dfd 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -1884,10 +1884,11 @@ static void reserve_highatomic_pageblock(struct page *page, struct zone *zone)
 	unsigned long max_managed, flags;
 
 	/*
-	 * Limit the number reserved to 1 pageblock or roughly 1% of a zone.
+	 * The number reserved as: minimum is 1 pageblock, maximum is
+	 * roughly 1% of a zone.
 	 * Check is race-prone but harmless.
 	 */
-	max_managed = (zone_managed_pages(zone) / 100) + pageblock_nr_pages;
+	max_managed = ALIGN((zone_managed_pages(zone) / 100), pageblock_nr_pages);
 
 	if (zone->nr_reserved_highatomic >= max_managed)
 		return;