Message ID | 20181102120001.4526-1-bsingharora@gmail.com (mailing list archive) |
---|---|
State | New, archived |
Series | mm/hotplug: Optimize clear_hwpoisoned_pages |
On Fri 02-11-18 23:00:01, Balbir Singh wrote:
> In hot remove, we try to clear poisoned pages, but
> a small optimization to check if num_poisoned_pages
> is 0 helps remove the iteration through nr_pages.
>
> Signed-off-by: Balbir Singh <bsingharora@gmail.com>

Makes sense to me. It would be great to actually have some numbers, but
the optimization for the normal case is quite obvious.

Acked-by: Michal Hocko <mhocko@suse.com>

> ---
>  mm/sparse.c | 10 ++++++++++
>  1 file changed, 10 insertions(+)
>
> diff --git a/mm/sparse.c b/mm/sparse.c
> index 33307fc05c4d..16219c7ddb5f 100644
> --- a/mm/sparse.c
> +++ b/mm/sparse.c
> @@ -724,6 +724,16 @@ static void clear_hwpoisoned_pages(struct page *memmap, int nr_pages)
>  	if (!memmap)
>  		return;
>
> +	/*
> +	 * A further optimization is to have per section
> +	 * ref counted num_poisoned_pages, but that is going
> +	 * to need more space per memmap, for now just do
> +	 * a quick global check, this should speed up this
> +	 * routine in the absence of bad pages.
> +	 */
> +	if (atomic_long_read(&num_poisoned_pages) == 0)
> +		return;
> +
>  	for (i = 0; i < nr_pages; i++) {
>  		if (PageHWPoison(&memmap[i])) {
>  			atomic_long_sub(1, &num_poisoned_pages);
> --
> 2.17.1
>
On Fri, Nov 02, 2018 at 11:00:01PM +1100, Balbir Singh wrote:
> In hot remove, we try to clear poisoned pages, but
> a small optimization to check if num_poisoned_pages
> is 0 helps remove the iteration through nr_pages.
>
> Signed-off-by: Balbir Singh <bsingharora@gmail.com>

Acked-by: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>

Thanks!
diff --git a/mm/sparse.c b/mm/sparse.c
index 33307fc05c4d..16219c7ddb5f 100644
--- a/mm/sparse.c
+++ b/mm/sparse.c
@@ -724,6 +724,16 @@ static void clear_hwpoisoned_pages(struct page *memmap, int nr_pages)
 	if (!memmap)
 		return;
 
+	/*
+	 * A further optimization is to have per section
+	 * ref counted num_poisoned_pages, but that is going
+	 * to need more space per memmap, for now just do
+	 * a quick global check, this should speed up this
+	 * routine in the absence of bad pages.
+	 */
+	if (atomic_long_read(&num_poisoned_pages) == 0)
+		return;
+
 	for (i = 0; i < nr_pages; i++) {
 		if (PageHWPoison(&memmap[i])) {
 			atomic_long_sub(1, &num_poisoned_pages);
In hot remove, we try to clear poisoned pages, but a small optimization
to check if num_poisoned_pages is 0 helps remove the iteration through
nr_pages.

Signed-off-by: Balbir Singh <bsingharora@gmail.com>
---
 mm/sparse.c | 10 ++++++++++
 1 file changed, 10 insertions(+)
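
For reference, here is a sketch of how clear_hwpoisoned_pages() in mm/sparse.c
reads once the hunk above is applied. The ClearPageHWPoison() call and the
CONFIG_MEMORY_FAILURE guard are reconstructed from the surrounding upstream
code of that era and are not shown in the quoted diff, so treat this as an
illustrative reconstruction rather than an authoritative copy of the file:

#ifdef CONFIG_MEMORY_FAILURE
static void clear_hwpoisoned_pages(struct page *memmap, int nr_pages)
{
	int i;

	/* Nothing to do for a section without a backing memmap. */
	if (!memmap)
		return;

	/*
	 * Early exit added by this patch: if the global counter of
	 * hwpoisoned pages is zero, no page in this section can be
	 * PageHWPoison, so the O(nr_pages) scan below is skipped.
	 */
	if (atomic_long_read(&num_poisoned_pages) == 0)
		return;

	/* Otherwise walk the section and drop the per-page poison state. */
	for (i = 0; i < nr_pages; i++) {
		if (PageHWPoison(&memmap[i])) {
			atomic_long_sub(1, &num_poisoned_pages);
			ClearPageHWPoison(&memmap[i]);
		}
	}
}
#endif /* CONFIG_MEMORY_FAILURE */

As the in-code comment notes, a per-section poisoned-page counter would make
the check exact per section, but at the cost of extra space per memmap, which
is why the patch settles for the single global check.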