
mm/mempolicy: clean up the code logic in queue_pages_pte_range

Message ID 20220419122234.45083-1-linmiaohe@huawei.com (mailing list archive)
State New
Series mm/mempolicy: clean up the code logic in queue_pages_pte_range

Commit Message

Miaohe Lin April 19, 2022, 12:22 p.m. UTC
Since commit e5947d23edd8 ("mm: mempolicy: don't have to split pmd for
huge zero page"), THP is never split in queue_pages_pmd(), so 2 is
never returned now. Remove the unnecessary ret != 2 check and clean up
the related comment. This is a minor readability improvement.

Signed-off-by: Miaohe Lin <linmiaohe@huawei.com>
---
 mm/mempolicy.c | 12 +++---------
 1 file changed, 3 insertions(+), 9 deletions(-)
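
For context, here is a simplified sketch of queue_pages_pmd() as it reads
after commit e5947d23edd8 (a paraphrase for illustration, not the verbatim
kernel source; minor details may differ between releases). The point is
that the huge zero page is now skipped via walk->action = ACTION_CONTINUE
instead of being split, so every remaining path returns 0, 1 or -EIO and
nothing can return 2:

/* Simplified sketch of queue_pages_pmd() after e5947d23edd8 (not verbatim). */
static int queue_pages_pmd(pmd_t *pmd, spinlock_t *ptl, unsigned long addr,
			   unsigned long end, struct mm_walk *walk)
	__releases(ptl)
{
	struct queue_pages *qp = walk->private;
	struct page *page;
	int ret = 0;

	if (unlikely(is_pmd_migration_entry(*pmd))) {
		ret = -EIO;			/* migration entry */
		goto unlock;
	}
	page = pmd_page(*pmd);
	if (is_huge_zero_page(page)) {
		/* Skip the huge zero page instead of splitting the PMD. */
		walk->action = ACTION_CONTINUE;
		goto unlock;			/* ret == 0 */
	}
	if (!queue_pages_required(page, qp))
		goto unlock;			/* ret == 0 */

	if (qp->flags & (MPOL_MF_MOVE | MPOL_MF_MOVE_ALL)) {
		if (!vma_migratable(walk->vma) ||
		    migrate_page_add(page, qp->pagelist, qp->flags))
			ret = 1;		/* unmovable page */
	} else {
		ret = -EIO;			/* only MPOL_MF_STRICT given */
	}
unlock:
	spin_unlock(ptl);
	return ret;
}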

Comments

Yang Shi April 21, 2022, 12:44 a.m. UTC | #1
On Tue, Apr 19, 2022 at 5:22 AM Miaohe Lin <linmiaohe@huawei.com> wrote:
>
> Since commit e5947d23edd8 ("mm: mempolicy: don't have to split pmd for
> huge zero page"), THP is never split in queue_pages_pmd(), so 2 is
> never returned now. Remove the unnecessary ret != 2 check and clean up
> the related comment. This is a minor readability improvement.

Nice catch. Yeah, it was missed when I worked on that commit.

Reviewed-by: Yang Shi <shy828301@gmail.com>

>
> Signed-off-by: Miaohe Lin <linmiaohe@huawei.com>
> ---
>  mm/mempolicy.c | 12 +++---------
>  1 file changed, 3 insertions(+), 9 deletions(-)
>
> diff --git a/mm/mempolicy.c b/mm/mempolicy.c
> index 75a8b247f631..3934476fb708 100644
> --- a/mm/mempolicy.c
> +++ b/mm/mempolicy.c
> @@ -441,12 +441,11 @@ static inline bool queue_pages_required(struct page *page,
>  }
>
>  /*
> - * queue_pages_pmd() has four possible return values:
> + * queue_pages_pmd() has three possible return values:
>   * 0 - pages are placed on the right node or queued successfully, or
>   *     special page is met, i.e. huge zero page.
>   * 1 - there is unmovable page, and MPOL_MF_MOVE* & MPOL_MF_STRICT were
>   *     specified.
> - * 2 - THP was split.
>   * -EIO - is migration entry or only MPOL_MF_STRICT was specified and an
>   *        existing page was already on a node that does not follow the
>   *        policy.
> @@ -508,18 +507,13 @@ static int queue_pages_pte_range(pmd_t *pmd, unsigned long addr,
>         struct page *page;
>         struct queue_pages *qp = walk->private;
>         unsigned long flags = qp->flags;
> -       int ret;
>         bool has_unmovable = false;
>         pte_t *pte, *mapped_pte;
>         spinlock_t *ptl;
>
>         ptl = pmd_trans_huge_lock(pmd, vma);
> -       if (ptl) {
> -               ret = queue_pages_pmd(pmd, ptl, addr, end, walk);
> -               if (ret != 2)
> -                       return ret;
> -       }
> -       /* THP was split, fall through to pte walk */
> +       if (ptl)
> +               return queue_pages_pmd(pmd, ptl, addr, end, walk);
>
>         if (pmd_trans_unstable(pmd))
>                 return 0;
> --
> 2.23.0
>
>
Michal Hocko April 22, 2022, 11:48 a.m. UTC | #2
On Tue 19-04-22 20:22:34, Miaohe Lin wrote:
> Since commit e5947d23edd8 ("mm: mempolicy: don't have to split pmd for
> huge zero page"), THP is never split in queue_pages_pmd(), so 2 is
> never returned now. Remove the unnecessary ret != 2 check and clean up
> the related comment. This is a minor readability improvement.
> 
> Signed-off-by: Miaohe Lin <linmiaohe@huawei.com>

Acked-by: Michal Hocko <mhocko@suse.com>
Thanks!

> ---
>  mm/mempolicy.c | 12 +++---------
>  1 file changed, 3 insertions(+), 9 deletions(-)
> 
> diff --git a/mm/mempolicy.c b/mm/mempolicy.c
> index 75a8b247f631..3934476fb708 100644
> --- a/mm/mempolicy.c
> +++ b/mm/mempolicy.c
> @@ -441,12 +441,11 @@ static inline bool queue_pages_required(struct page *page,
>  }
>  
>  /*
> - * queue_pages_pmd() has four possible return values:
> + * queue_pages_pmd() has three possible return values:
>   * 0 - pages are placed on the right node or queued successfully, or
>   *     special page is met, i.e. huge zero page.
>   * 1 - there is unmovable page, and MPOL_MF_MOVE* & MPOL_MF_STRICT were
>   *     specified.
> - * 2 - THP was split.
>   * -EIO - is migration entry or only MPOL_MF_STRICT was specified and an
>   *        existing page was already on a node that does not follow the
>   *        policy.
> @@ -508,18 +507,13 @@ static int queue_pages_pte_range(pmd_t *pmd, unsigned long addr,
>  	struct page *page;
>  	struct queue_pages *qp = walk->private;
>  	unsigned long flags = qp->flags;
> -	int ret;
>  	bool has_unmovable = false;
>  	pte_t *pte, *mapped_pte;
>  	spinlock_t *ptl;
>  
>  	ptl = pmd_trans_huge_lock(pmd, vma);
> -	if (ptl) {
> -		ret = queue_pages_pmd(pmd, ptl, addr, end, walk);
> -		if (ret != 2)
> -			return ret;
> -	}
> -	/* THP was split, fall through to pte walk */
> +	if (ptl)
> +		return queue_pages_pmd(pmd, ptl, addr, end, walk);
>  
>  	if (pmd_trans_unstable(pmd))
>  		return 0;
> -- 
> 2.23.0
David Rientjes April 24, 2022, 7:23 p.m. UTC | #3
On Tue, 19 Apr 2022, Miaohe Lin wrote:

> Since commit e5947d23edd8 ("mm: mempolicy: don't have to split pmd for
> huge zero page"), THP is never split in queue_pages_pmd(), so 2 is
> never returned now. Remove the unnecessary ret != 2 check and clean up
> the related comment. This is a minor readability improvement.
> 
> Signed-off-by: Miaohe Lin <linmiaohe@huawei.com>

Acked-by: David Rientjes <rientjes@google.com>

Patch

diff --git a/mm/mempolicy.c b/mm/mempolicy.c
index 75a8b247f631..3934476fb708 100644
--- a/mm/mempolicy.c
+++ b/mm/mempolicy.c
@@ -441,12 +441,11 @@  static inline bool queue_pages_required(struct page *page,
 }
 
 /*
- * queue_pages_pmd() has four possible return values:
+ * queue_pages_pmd() has three possible return values:
  * 0 - pages are placed on the right node or queued successfully, or
  *     special page is met, i.e. huge zero page.
  * 1 - there is unmovable page, and MPOL_MF_MOVE* & MPOL_MF_STRICT were
  *     specified.
- * 2 - THP was split.
  * -EIO - is migration entry or only MPOL_MF_STRICT was specified and an
  *        existing page was already on a node that does not follow the
  *        policy.
@@ -508,18 +507,13 @@  static int queue_pages_pte_range(pmd_t *pmd, unsigned long addr,
 	struct page *page;
 	struct queue_pages *qp = walk->private;
 	unsigned long flags = qp->flags;
-	int ret;
 	bool has_unmovable = false;
 	pte_t *pte, *mapped_pte;
 	spinlock_t *ptl;
 
 	ptl = pmd_trans_huge_lock(pmd, vma);
-	if (ptl) {
-		ret = queue_pages_pmd(pmd, ptl, addr, end, walk);
-		if (ret != 2)
-			return ret;
-	}
-	/* THP was split, fall through to pte walk */
+	if (ptl)
+		return queue_pages_pmd(pmd, ptl, addr, end, walk);
 
 	if (pmd_trans_unstable(pmd))
 		return 0;
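
For reference, after this patch the huge-PMD handling at the top of
queue_pages_pte_range() reduces to the following (reconstructed from the
hunk above; surrounding declarations elided):

	/* A huge PMD is fully handled (and ptl dropped) in queue_pages_pmd(). */
	ptl = pmd_trans_huge_lock(pmd, vma);
	if (ptl)
		return queue_pages_pmd(pmd, ptl, addr, end, walk);

	/* An unstable pmd is skipped; otherwise fall through to the pte walk. */
	if (pmd_trans_unstable(pmd))
		return 0;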