Message ID | 20220618090527.37843-1-linmiaohe@huawei.com (mailing list archive)
---|---
State | New
Series | mm/madvise: minor cleanup for swapin_walk_pmd_entry()
On Sat, 18 Jun 2022 17:05:27 +0800 Miaohe Lin <linmiaohe@huawei.com> wrote:

> Passing index to pte_offset_map_lock() directly so the below calculation
> can be avoided. Rename orig_pte to ptep as it's not changed. Also use
> helper is_swap_pte() to improve the readability. No functional change
> intended.
>
> ...
>
> --- a/mm/madvise.c
> +++ b/mm/madvise.c
> @@ -195,7 +195,7 @@ static int madvise_update_vma(struct vm_area_struct *vma,
>  static int swapin_walk_pmd_entry(pmd_t *pmd, unsigned long start,
>  	unsigned long end, struct mm_walk *walk)
>  {
> -	pte_t *orig_pte;
> +	pte_t *ptep;
>  	struct vm_area_struct *vma = walk->private;
>  	unsigned long index;
>  	struct swap_iocb *splug = NULL;
> @@ -209,11 +209,11 @@ static int swapin_walk_pmd_entry(pmd_t *pmd, unsigned long start,
>  		struct page *page;
>  		spinlock_t *ptl;
>
> -		orig_pte = pte_offset_map_lock(vma->vm_mm, pmd, start, &ptl);
> -		pte = *(orig_pte + ((index - start) / PAGE_SIZE));
> -		pte_unmap_unlock(orig_pte, ptl);
> +		ptep = pte_offset_map_lock(vma->vm_mm, pmd, index, &ptl);
> +		pte = *ptep;
> +		pte_unmap_unlock(ptep, ptl);
>
> -		if (pte_present(pte) || pte_none(pte))
> +		if (!is_swap_pte(pte))
>  			continue;
>  		entry = pte_to_swp_entry(pte);
>  		if (unlikely(non_swap_entry(entry)))

Also...

From: Andrew Morton <akpm@linux-foundation.org>
Subject: mm-madvise-minor-cleanup-for-swapin_walk_pmd_entry-fix
Date: Sat Jun 18 11:58:03 AM PDT 2022

reduce scope of `ptep'

Cc: Miaohe Lin <linmiaohe@huawei.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
---

 mm/madvise.c |    2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

--- a/mm/madvise.c~mm-madvise-minor-cleanup-for-swapin_walk_pmd_entry-fix
+++ a/mm/madvise.c
@@ -195,7 +195,6 @@ success:
 static int swapin_walk_pmd_entry(pmd_t *pmd, unsigned long start,
 	unsigned long end, struct mm_walk *walk)
 {
-	pte_t *ptep;
 	struct vm_area_struct *vma = walk->private;
 	unsigned long index;
 	struct swap_iocb *splug = NULL;
@@ -208,6 +207,7 @@ static int swapin_walk_pmd_entry(pmd_t *
 		swp_entry_t entry;
 		struct page *page;
 		spinlock_t *ptl;
+		pte_t *ptep;

 		ptep = pte_offset_map_lock(vma->vm_mm, pmd, index, &ptl);
 		pte = *ptep;
On 2022/6/19 2:59, Andrew Morton wrote:
> On Sat, 18 Jun 2022 17:05:27 +0800 Miaohe Lin <linmiaohe@huawei.com> wrote:
>
>> Passing index to pte_offset_map_lock() directly so the below calculation
>> can be avoided. Rename orig_pte to ptep as it's not changed. Also use
>> helper is_swap_pte() to improve the readability. No functional change
>> intended.
>>
>> ...
>>
>> --- a/mm/madvise.c
>> +++ b/mm/madvise.c
>> @@ -195,7 +195,7 @@ static int madvise_update_vma(struct vm_area_struct *vma,
>>  static int swapin_walk_pmd_entry(pmd_t *pmd, unsigned long start,
>>  	unsigned long end, struct mm_walk *walk)
>>  {
>> -	pte_t *orig_pte;
>> +	pte_t *ptep;
>>  	struct vm_area_struct *vma = walk->private;
>>  	unsigned long index;
>>  	struct swap_iocb *splug = NULL;
>> @@ -209,11 +209,11 @@ static int swapin_walk_pmd_entry(pmd_t *pmd, unsigned long start,
>>  		struct page *page;
>>  		spinlock_t *ptl;
>>
>> -		orig_pte = pte_offset_map_lock(vma->vm_mm, pmd, start, &ptl);
>> -		pte = *(orig_pte + ((index - start) / PAGE_SIZE));
>> -		pte_unmap_unlock(orig_pte, ptl);
>> +		ptep = pte_offset_map_lock(vma->vm_mm, pmd, index, &ptl);
>> +		pte = *ptep;
>> +		pte_unmap_unlock(ptep, ptl);
>>
>> -		if (pte_present(pte) || pte_none(pte))
>> +		if (!is_swap_pte(pte))
>>  			continue;
>>  		entry = pte_to_swp_entry(pte);
>>  		if (unlikely(non_swap_entry(entry)))
>
> Also...
>
> From: Andrew Morton <akpm@linux-foundation.org>
> Subject: mm-madvise-minor-cleanup-for-swapin_walk_pmd_entry-fix
> Date: Sat Jun 18 11:58:03 AM PDT 2022
>
> reduce scope of `ptep'

Looks good to me. Thanks for doing this. :)

> Cc: Miaohe Lin <linmiaohe@huawei.com>
> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
> ---
>
>  mm/madvise.c |    2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
>
> --- a/mm/madvise.c~mm-madvise-minor-cleanup-for-swapin_walk_pmd_entry-fix
> +++ a/mm/madvise.c
> @@ -195,7 +195,6 @@ success:
>  static int swapin_walk_pmd_entry(pmd_t *pmd, unsigned long start,
>  	unsigned long end, struct mm_walk *walk)
>  {
> -	pte_t *ptep;
>  	struct vm_area_struct *vma = walk->private;
>  	unsigned long index;
>  	struct swap_iocb *splug = NULL;
> @@ -208,6 +207,7 @@ static int swapin_walk_pmd_entry(pmd_t *
>  		swp_entry_t entry;
>  		struct page *page;
>  		spinlock_t *ptl;
> +		pte_t *ptep;
>
>  		ptep = pte_offset_map_lock(vma->vm_mm, pmd, index, &ptl);
>  		pte = *ptep;
> _
>
> .
>
diff --git a/mm/madvise.c b/mm/madvise.c
index 7a8af04069b3..cf49e123991c 100644
--- a/mm/madvise.c
+++ b/mm/madvise.c
@@ -195,7 +195,7 @@ static int madvise_update_vma(struct vm_area_struct *vma,
 static int swapin_walk_pmd_entry(pmd_t *pmd, unsigned long start,
 	unsigned long end, struct mm_walk *walk)
 {
-	pte_t *orig_pte;
+	pte_t *ptep;
 	struct vm_area_struct *vma = walk->private;
 	unsigned long index;
 	struct swap_iocb *splug = NULL;
@@ -209,11 +209,11 @@ static int swapin_walk_pmd_entry(pmd_t *pmd, unsigned long start,
 		struct page *page;
 		spinlock_t *ptl;

-		orig_pte = pte_offset_map_lock(vma->vm_mm, pmd, start, &ptl);
-		pte = *(orig_pte + ((index - start) / PAGE_SIZE));
-		pte_unmap_unlock(orig_pte, ptl);
+		ptep = pte_offset_map_lock(vma->vm_mm, pmd, index, &ptl);
+		pte = *ptep;
+		pte_unmap_unlock(ptep, ptl);

-		if (pte_present(pte) || pte_none(pte))
+		if (!is_swap_pte(pte))
 			continue;
 		entry = pte_to_swp_entry(pte);
 		if (unlikely(non_swap_entry(entry)))
Passing index to pte_offset_map_lock() directly so the below calculation
can be avoided. Rename orig_pte to ptep as it's not changed. Also use
helper is_swap_pte() to improve the readability. No functional change
intended.

Signed-off-by: Miaohe Lin <linmiaohe@huawei.com>
---
 mm/madvise.c | 10 +++++-----
 1 file changed, 5 insertions(+), 5 deletions(-)