Message ID: 20220310131253.30970-1-linmiaohe@huawei.com (mailing list archive)
State: New
Series: mm/huge_memory: remove unneeded local variable follflags
Hi Miaohe,

On 3/10/22 18:42, Miaohe Lin wrote:
> We can pass FOLL_GET | FOLL_DUMP to follow_page directly to simplify
> the code a bit.
>
> Signed-off-by: Miaohe Lin <linmiaohe@huawei.com>
> ---
>  mm/huge_memory.c | 4 +---
>  1 file changed, 1 insertion(+), 3 deletions(-)
>
> diff --git a/mm/huge_memory.c b/mm/huge_memory.c
> index 3557aabe86fe..418d077da246 100644
> --- a/mm/huge_memory.c
> +++ b/mm/huge_memory.c
> @@ -2838,7 +2838,6 @@ static int split_huge_pages_pid(int pid, unsigned long vaddr_start,
>  	 */
>  	for (addr = vaddr_start; addr < vaddr_end; addr += PAGE_SIZE) {
>  		struct vm_area_struct *vma = find_vma(mm, addr);
> -		unsigned int follflags;
>  		struct page *page;
>
>  		if (!vma || addr < vma->vm_start)
> @@ -2851,8 +2850,7 @@ static int split_huge_pages_pid(int pid, unsigned long vaddr_start,
>  		}
>
>  		/* FOLL_DUMP to ignore special (like zero) pages */
> -		follflags = FOLL_GET | FOLL_DUMP;
> -		page = follow_page(vma, addr, follflags);
> +		page = follow_page(vma, addr, FOLL_GET | FOLL_DUMP);
>
>  		if (IS_ERR(page))
>  			continue;

LGTM, but there is another similar instance in add_page_for_migration()
inside mm/migrate.c, requiring this exact clean up. Hence with that
change in place.

Reviewed-by: Anshuman Khandual <anshuman.khandual@arm.com>
On 2022/3/11 12:51, Anshuman Khandual wrote:
> Hi Miaohe,
>
> On 3/10/22 18:42, Miaohe Lin wrote:
>> We can pass FOLL_GET | FOLL_DUMP to follow_page directly to simplify
>> the code a bit.
>>
>> Signed-off-by: Miaohe Lin <linmiaohe@huawei.com>
>> ---
>>  mm/huge_memory.c | 4 +---
>>  1 file changed, 1 insertion(+), 3 deletions(-)
>>
>> diff --git a/mm/huge_memory.c b/mm/huge_memory.c
>> index 3557aabe86fe..418d077da246 100644
>> --- a/mm/huge_memory.c
>> +++ b/mm/huge_memory.c
>> @@ -2838,7 +2838,6 @@ static int split_huge_pages_pid(int pid, unsigned long vaddr_start,
>>  	 */
>>  	for (addr = vaddr_start; addr < vaddr_end; addr += PAGE_SIZE) {
>>  		struct vm_area_struct *vma = find_vma(mm, addr);
>> -		unsigned int follflags;
>>  		struct page *page;
>>
>>  		if (!vma || addr < vma->vm_start)
>> @@ -2851,8 +2850,7 @@ static int split_huge_pages_pid(int pid, unsigned long vaddr_start,
>>  		}
>>
>>  		/* FOLL_DUMP to ignore special (like zero) pages */
>> -		follflags = FOLL_GET | FOLL_DUMP;
>> -		page = follow_page(vma, addr, follflags);
>> +		page = follow_page(vma, addr, FOLL_GET | FOLL_DUMP);
>>
>>  		if (IS_ERR(page))
>>  			continue;
>
> LGTM, but there is another similar instance in add_page_for_migration()
> inside mm/migrate.c, requiring this exact clean up.
>

Thanks for comment. That similar case is done in my previous patch series[1]
aimed at migration cleanup and fixup. It might be more suitable to do that
clean up in that specialized series?

[1]: https://lore.kernel.org/linux-mm/20220304093409.25829-4-linmiaohe@huawei.com/

> Hence with that change in place.
>
> Reviewed-by: Anshuman Khandual <anshuman.khandual@arm.com>

Thanks again.

> .
>
On 3/11/22 11:56, Miaohe Lin wrote:
> On 2022/3/11 12:51, Anshuman Khandual wrote:
>> Hi Miaohe,
>>
>> On 3/10/22 18:42, Miaohe Lin wrote:
>>> We can pass FOLL_GET | FOLL_DUMP to follow_page directly to simplify
>>> the code a bit.
>>>
>>> Signed-off-by: Miaohe Lin <linmiaohe@huawei.com>
>>> ---
>>>  mm/huge_memory.c | 4 +---
>>>  1 file changed, 1 insertion(+), 3 deletions(-)
>>>
>>> diff --git a/mm/huge_memory.c b/mm/huge_memory.c
>>> index 3557aabe86fe..418d077da246 100644
>>> --- a/mm/huge_memory.c
>>> +++ b/mm/huge_memory.c
>>> @@ -2838,7 +2838,6 @@ static int split_huge_pages_pid(int pid, unsigned long vaddr_start,
>>>  	 */
>>>  	for (addr = vaddr_start; addr < vaddr_end; addr += PAGE_SIZE) {
>>>  		struct vm_area_struct *vma = find_vma(mm, addr);
>>> -		unsigned int follflags;
>>>  		struct page *page;
>>>
>>>  		if (!vma || addr < vma->vm_start)
>>> @@ -2851,8 +2850,7 @@ static int split_huge_pages_pid(int pid, unsigned long vaddr_start,
>>>  		}
>>>
>>>  		/* FOLL_DUMP to ignore special (like zero) pages */
>>> -		follflags = FOLL_GET | FOLL_DUMP;
>>> -		page = follow_page(vma, addr, follflags);
>>> +		page = follow_page(vma, addr, FOLL_GET | FOLL_DUMP);
>>>
>>>  		if (IS_ERR(page))
>>>  			continue;
>>
>> LGTM, but there is another similar instance in add_page_for_migration()
>> inside mm/migrate.c, requiring this exact clean up.
>>
>
> Thanks for comment. That similar case is done in my previous patch series[1]
> aimed at migration cleanup and fixup. It might be more suitable to do that
> clean up in that specialized series?

Both these similar scenarios i.e the one proposed here and other one in the
migration series, should be folded into a separate single patch, either here
or in the series itself.

>
> [1]: https://lore.kernel.org/linux-mm/20220304093409.25829-4-linmiaohe@huawei.com/
>
>> Hence with that change in place.
>>
>> Reviewed-by: Anshuman Khandual <anshuman.khandual@arm.com>
>
> Thanks again.
>
>> .
>>
>
On 2022/3/11 14:39, Anshuman Khandual wrote:
>
>
> On 3/11/22 11:56, Miaohe Lin wrote:
>> On 2022/3/11 12:51, Anshuman Khandual wrote:
>>> Hi Miaohe,
>>>
>>> On 3/10/22 18:42, Miaohe Lin wrote:
>>>> We can pass FOLL_GET | FOLL_DUMP to follow_page directly to simplify
>>>> the code a bit.
>>>>
>>>> Signed-off-by: Miaohe Lin <linmiaohe@huawei.com>
>>>> ---
>>>>  mm/huge_memory.c | 4 +---
>>>>  1 file changed, 1 insertion(+), 3 deletions(-)
>>>>
>>>> diff --git a/mm/huge_memory.c b/mm/huge_memory.c
>>>> index 3557aabe86fe..418d077da246 100644
>>>> --- a/mm/huge_memory.c
>>>> +++ b/mm/huge_memory.c
>>>> @@ -2838,7 +2838,6 @@ static int split_huge_pages_pid(int pid, unsigned long vaddr_start,
>>>>  	 */
>>>>  	for (addr = vaddr_start; addr < vaddr_end; addr += PAGE_SIZE) {
>>>>  		struct vm_area_struct *vma = find_vma(mm, addr);
>>>> -		unsigned int follflags;
>>>>  		struct page *page;
>>>>
>>>>  		if (!vma || addr < vma->vm_start)
>>>> @@ -2851,8 +2850,7 @@ static int split_huge_pages_pid(int pid, unsigned long vaddr_start,
>>>>  		}
>>>>
>>>>  		/* FOLL_DUMP to ignore special (like zero) pages */
>>>> -		follflags = FOLL_GET | FOLL_DUMP;
>>>> -		page = follow_page(vma, addr, follflags);
>>>> +		page = follow_page(vma, addr, FOLL_GET | FOLL_DUMP);
>>>>
>>>>  		if (IS_ERR(page))
>>>>  			continue;
>>>
>>> LGTM, but there is another similar instance in add_page_for_migration()
>>> inside mm/migrate.c, requiring this exact clean up.
>>>
>>
>> Thanks for comment. That similar case is done in my previous patch series[1]
>> aimed at migration cleanup and fixup. It might be more suitable to do that
>> clean up in that specialized series?
>
> Both these similar scenarios i.e the one proposed here and other one in the
> migration series, should be folded into a separate single patch, either here
> or in the series itself.

Looks fine to me. Will do. Thanks.

>
>>
>> [1]: https://lore.kernel.org/linux-mm/20220304093409.25829-4-linmiaohe@huawei.com/
>>
>>> Hence with that change in place.
>>>
>>> Reviewed-by: Anshuman Khandual <anshuman.khandual@arm.com>
>>
>> Thanks again.
>>
>>> .
>>>
>>
> .
>
On 10.03.22 14:12, Miaohe Lin wrote:
> We can pass FOLL_GET | FOLL_DUMP to follow_page directly to simplify
> the code a bit.
>
> Signed-off-by: Miaohe Lin <linmiaohe@huawei.com>
> ---
>  mm/huge_memory.c | 4 +---
>  1 file changed, 1 insertion(+), 3 deletions(-)
>
> diff --git a/mm/huge_memory.c b/mm/huge_memory.c
> index 3557aabe86fe..418d077da246 100644
> --- a/mm/huge_memory.c
> +++ b/mm/huge_memory.c
> @@ -2838,7 +2838,6 @@ static int split_huge_pages_pid(int pid, unsigned long vaddr_start,
>  	 */
>  	for (addr = vaddr_start; addr < vaddr_end; addr += PAGE_SIZE) {
>  		struct vm_area_struct *vma = find_vma(mm, addr);
> -		unsigned int follflags;
>  		struct page *page;
>
>  		if (!vma || addr < vma->vm_start)
> @@ -2851,8 +2850,7 @@ static int split_huge_pages_pid(int pid, unsigned long vaddr_start,
>  		}
>
>  		/* FOLL_DUMP to ignore special (like zero) pages */
> -		follflags = FOLL_GET | FOLL_DUMP;
> -		page = follow_page(vma, addr, follflags);
> +		page = follow_page(vma, addr, FOLL_GET | FOLL_DUMP);
>
>  		if (IS_ERR(page))
>  			continue;

Reviewed-by: David Hildenbrand <david@redhat.com>
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 3557aabe86fe..418d077da246 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -2838,7 +2838,6 @@ static int split_huge_pages_pid(int pid, unsigned long vaddr_start,
 	 */
 	for (addr = vaddr_start; addr < vaddr_end; addr += PAGE_SIZE) {
 		struct vm_area_struct *vma = find_vma(mm, addr);
-		unsigned int follflags;
 		struct page *page;

 		if (!vma || addr < vma->vm_start)
@@ -2851,8 +2850,7 @@ static int split_huge_pages_pid(int pid, unsigned long vaddr_start,
 		}

 		/* FOLL_DUMP to ignore special (like zero) pages */
-		follflags = FOLL_GET | FOLL_DUMP;
-		page = follow_page(vma, addr, follflags);
+		page = follow_page(vma, addr, FOLL_GET | FOLL_DUMP);

 		if (IS_ERR(page))
 			continue;
We can pass FOLL_GET | FOLL_DUMP to follow_page directly to simplify
the code a bit.

Signed-off-by: Miaohe Lin <linmiaohe@huawei.com>
---
 mm/huge_memory.c | 4 +---
 1 file changed, 1 insertion(+), 3 deletions(-)