| Message ID | 20220207133643.23427-3-linmiaohe@huawei.com |
|---|---|
| State | New |
| Series | A few cleanup patches around memory_hotplug |
On 07.02.22 14:36, Miaohe Lin wrote:
> If zid reaches ZONE_NORMAL, the caller will always get the NORMAL zone no
> matter what zone_intersects() returns. So we can save some possible cpu
> cycles by avoid calling zone_intersects() for ZONE_NORMAL.
>
> Signed-off-by: Miaohe Lin <linmiaohe@huawei.com>
> ---
>  mm/memory_hotplug.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
> index cbc67c27e0dd..140809e60e9a 100644
> --- a/mm/memory_hotplug.c
> +++ b/mm/memory_hotplug.c
> @@ -826,7 +826,7 @@ static struct zone *default_kernel_zone_for_pfn(int nid, unsigned long start_pfn
>  	struct pglist_data *pgdat = NODE_DATA(nid);
>  	int zid;
>
> -	for (zid = 0; zid <= ZONE_NORMAL; zid++) {
> +	for (zid = 0; zid < ZONE_NORMAL; zid++) {
>  		struct zone *zone = &pgdat->node_zones[zid];
>
>  		if (zone_intersects(zone, start_pfn, nr_pages))

Makes sense, although we just don't care about the CPU cycles at that point.

Reviewed-by: David Hildenbrand <david@redhat.com>
On Mon, Feb 07, 2022 at 09:36:41PM +0800, Miaohe Lin wrote:
> If zid reaches ZONE_NORMAL, the caller will always get the NORMAL zone no
> matter what zone_intersects() returns. So we can save some possible cpu
> cycles by avoid calling zone_intersects() for ZONE_NORMAL.
>
> Signed-off-by: Miaohe Lin <linmiaohe@huawei.com>

Reviewed-by: Oscar Salvador <osalvador@suse.de>

> ---
>  mm/memory_hotplug.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
> index cbc67c27e0dd..140809e60e9a 100644
> --- a/mm/memory_hotplug.c
> +++ b/mm/memory_hotplug.c
> @@ -826,7 +826,7 @@ static struct zone *default_kernel_zone_for_pfn(int nid, unsigned long start_pfn
>  	struct pglist_data *pgdat = NODE_DATA(nid);
>  	int zid;
>
> -	for (zid = 0; zid <= ZONE_NORMAL; zid++) {
> +	for (zid = 0; zid < ZONE_NORMAL; zid++) {
>  		struct zone *zone = &pgdat->node_zones[zid];
>
> 		if (zone_intersects(zone, start_pfn, nr_pages))
> --
> 2.23.0
```diff
diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
index cbc67c27e0dd..140809e60e9a 100644
--- a/mm/memory_hotplug.c
+++ b/mm/memory_hotplug.c
@@ -826,7 +826,7 @@ static struct zone *default_kernel_zone_for_pfn(int nid, unsigned long start_pfn
 	struct pglist_data *pgdat = NODE_DATA(nid);
 	int zid;
 
-	for (zid = 0; zid <= ZONE_NORMAL; zid++) {
+	for (zid = 0; zid < ZONE_NORMAL; zid++) {
 		struct zone *zone = &pgdat->node_zones[zid];
 
 		if (zone_intersects(zone, start_pfn, nr_pages))
```
If zid reaches ZONE_NORMAL, the caller will always get the NORMAL zone no
matter what zone_intersects() returns. So we can save some possible cpu
cycles by avoiding calls to zone_intersects() for ZONE_NORMAL.

Signed-off-by: Miaohe Lin <linmiaohe@huawei.com>
---
 mm/memory_hotplug.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)