Message ID | 20221205140327.72304-1-wangkefeng.wang@huawei.com
State      | New
Series     | mm: add cond_resched() in swapin_walk_pmd_entry()
On Mon, 5 Dec 2022 22:03:27 +0800 Kefeng Wang <wangkefeng.wang@huawei.com> wrote:

> When handling MADV_WILLNEED in madvise(), a softlockup may occur in
> swapin_walk_pmd_entry() when swapping in lots of memory on a slow
> device, so add a cond_resched() into it to avoid the possible
> softlockup.
>
> ...
>
> --- a/mm/madvise.c
> +++ b/mm/madvise.c
> @@ -226,6 +226,7 @@ static int swapin_walk_pmd_entry(pmd_t *pmd, unsigned long start,
>  		put_page(page);
>  	}
>  	swap_read_unplug(splug);
> +	cond_resched();
>
>  	return 0;
>  }

I wonder if this would be better in walk_pmd_range(), to address other
very large walk attempts.
On 2022/12/6 5:03, Andrew Morton wrote:
> On Mon, 5 Dec 2022 22:03:27 +0800 Kefeng Wang <wangkefeng.wang@huawei.com> wrote:
>
>> When handling MADV_WILLNEED in madvise(), a softlockup may occur in
>> swapin_walk_pmd_entry() when swapping in lots of memory on a slow
>> device, so add a cond_resched() into it to avoid the possible
>> softlockup.
>>
>> ...
>>
>> --- a/mm/madvise.c
>> +++ b/mm/madvise.c
>> @@ -226,6 +226,7 @@ static int swapin_walk_pmd_entry(pmd_t *pmd, unsigned long start,
>>  		put_page(page);
>>  	}
>>  	swap_read_unplug(splug);
>> +	cond_resched();
>>
>>  	return 0;
>>  }
> I wonder if this would be better in walk_pmd_range(), to address other
> very large walk attempts.

mm/madvise.c:287:	walk_page_range(vma->vm_mm, start, end, &swapin_walk_ops, vma);
mm/madvise.c:514:	walk_page_range(vma->vm_mm, addr, end, &cold_walk_ops, &walk_private);
mm/madvise.c:762:	walk_page_range(vma->vm_mm, range.start, range.end,
mm/madvise.c-763-				&madvise_free_walk_ops, &tlb);

cold_walk_ops and madvise_free_walk_ops already call cond_resched() in
their pmd_entry walks, so there may be no need to add a precautionary
cond_resched() to walk_pmd_range() for now.
diff --git a/mm/madvise.c b/mm/madvise.c
index b913ba6efc10..fea589d8a2fb 100644
--- a/mm/madvise.c
+++ b/mm/madvise.c
@@ -226,6 +226,7 @@ static int swapin_walk_pmd_entry(pmd_t *pmd, unsigned long start,
 		put_page(page);
 	}
 	swap_read_unplug(splug);
+	cond_resched();
 
 	return 0;
 }
When handling MADV_WILLNEED in madvise(), a softlockup may occur in
swapin_walk_pmd_entry() when swapping in lots of memory on a slow
device, so add a cond_resched() into it to avoid the possible
softlockup.

Fixes: 1998cc048901 ("mm: make madvise(MADV_WILLNEED) support swap file prefetch")
Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
---
 mm/madvise.c | 1 +
 1 file changed, 1 insertion(+)