diff mbox series

[4/7] mm/khugepaged: minor cleanup for collapse_file

Message ID 20220611084731.55155-5-linmiaohe@huawei.com (mailing list archive)
State New
Headers show
Series A few cleanup patches for khugepaged | expand

Commit Message

Miaohe Lin June 11, 2022, 8:47 a.m. UTC
nr_none is always 0 in the non-shmem case because missing pages can be
read in from the backing store. So when nr_none != 0, we must be in the
is_shmem case. Also, only adjust nrpages and uncharge shmem when
nr_none != 0 to save CPU cycles.

Signed-off-by: Miaohe Lin <linmiaohe@huawei.com>
---
 mm/khugepaged.c | 9 ++++-----
 1 file changed, 4 insertions(+), 5 deletions(-)

Comments

Zach O'Keefe June 15, 2022, 3:54 p.m. UTC | #1
On 11 Jun 16:47, Miaohe Lin wrote:
> nr_none is always 0 in the non-shmem case because missing pages can be
> read in from the backing store. So when nr_none != 0, we must be in the
> is_shmem case. Also, only adjust nrpages and uncharge shmem when
> nr_none != 0 to save CPU cycles.
> 
> Signed-off-by: Miaohe Lin <linmiaohe@huawei.com>
> ---
>  mm/khugepaged.c | 9 ++++-----
>  1 file changed, 4 insertions(+), 5 deletions(-)
> 
> diff --git a/mm/khugepaged.c b/mm/khugepaged.c
> index 1b5dd3820eac..8e6fad7c7bd9 100644
> --- a/mm/khugepaged.c
> +++ b/mm/khugepaged.c
> @@ -1885,8 +1885,7 @@ static void collapse_file(struct mm_struct *mm,
>  
>  	if (nr_none) {
>  		__mod_lruvec_page_state(new_page, NR_FILE_PAGES, nr_none);
> -		if (is_shmem)
> -			__mod_lruvec_page_state(new_page, NR_SHMEM, nr_none);
> +		__mod_lruvec_page_state(new_page, NR_SHMEM, nr_none);
>  	}


Might be worth a small comment here - even though folks can see in the
above code that this is only incremented in the shmem path, it might be
nice to say why it's always 0 for non-shmem (or conversely, why it can
only be nonzero for shmem).

>  
>  	/* Join all the small entries into a single multi-index entry */
> @@ -1950,10 +1949,10 @@ static void collapse_file(struct mm_struct *mm,
>  
>  		/* Something went wrong: roll back page cache changes */
>  		xas_lock_irq(&xas);
> -		mapping->nrpages -= nr_none;
> -
> -		if (is_shmem)
> +		if (nr_none) {
> +			mapping->nrpages -= nr_none;
>  			shmem_uncharge(mapping->host, nr_none);
> +		}
>  
>  		xas_set(&xas, start);
>  		xas_for_each(&xas, page, end - 1) {
> -- 
> 2.23.0
> 
> 

Otherwise,

Reviewed-by: Zach O'Keefe <zokeefe@google.com>

Yang Shi June 15, 2022, 6:18 p.m. UTC | #2
On Wed, Jun 15, 2022 at 8:55 AM Zach O'Keefe <zokeefe@google.com> wrote:
>
> On 11 Jun 16:47, Miaohe Lin wrote:
> > nr_none is always 0 in the non-shmem case because missing pages can be
> > read in from the backing store. So when nr_none != 0, we must be in the
> > is_shmem case. Also, only adjust nrpages and uncharge shmem when
> > nr_none != 0 to save CPU cycles.
> >
> > Signed-off-by: Miaohe Lin <linmiaohe@huawei.com>
> > ---
> >  mm/khugepaged.c | 9 ++++-----
> >  1 file changed, 4 insertions(+), 5 deletions(-)
> >
> > diff --git a/mm/khugepaged.c b/mm/khugepaged.c
> > index 1b5dd3820eac..8e6fad7c7bd9 100644
> > --- a/mm/khugepaged.c
> > +++ b/mm/khugepaged.c
> > @@ -1885,8 +1885,7 @@ static void collapse_file(struct mm_struct *mm,
> >
> >       if (nr_none) {
> >               __mod_lruvec_page_state(new_page, NR_FILE_PAGES, nr_none);
> > -             if (is_shmem)
> > -                     __mod_lruvec_page_state(new_page, NR_SHMEM, nr_none);
> > +             __mod_lruvec_page_state(new_page, NR_SHMEM, nr_none);
> >       }
>
>
> Might be worth a small comment here - even though folks can see in the
> above code that this is only incremented in the shmem path, it might be
> nice to say why it's always 0 for non-shmem (or conversely, why it can
> only be nonzero for shmem).

Agreed, better to have some comments in the code.

>
> >
> >       /* Join all the small entries into a single multi-index entry */
> > @@ -1950,10 +1949,10 @@ static void collapse_file(struct mm_struct *mm,
> >
> >               /* Something went wrong: roll back page cache changes */
> >               xas_lock_irq(&xas);
> > -             mapping->nrpages -= nr_none;
> > -
> > -             if (is_shmem)
> > +             if (nr_none) {
> > +                     mapping->nrpages -= nr_none;
> >                       shmem_uncharge(mapping->host, nr_none);
> > +             }
> >
> >               xas_set(&xas, start);
> >               xas_for_each(&xas, page, end - 1) {
> > --
> > 2.23.0
> >
> >
>
> Otherwise,
>
> Reviewed-by: Zach O'Keefe <zokeefe@google.com>
> 

Miaohe Lin June 16, 2022, 6:10 a.m. UTC | #3
On 2022/6/16 2:18, Yang Shi wrote:
> On Wed, Jun 15, 2022 at 8:55 AM Zach O'Keefe <zokeefe@google.com> wrote:
>>
>> On 11 Jun 16:47, Miaohe Lin wrote:
>>> nr_none is always 0 in the non-shmem case because missing pages can be
>>> read in from the backing store. So when nr_none != 0, we must be in the
>>> is_shmem case. Also, only adjust nrpages and uncharge shmem when
>>> nr_none != 0 to save CPU cycles.
>>>
>>> Signed-off-by: Miaohe Lin <linmiaohe@huawei.com>
>>> ---
>>>  mm/khugepaged.c | 9 ++++-----
>>>  1 file changed, 4 insertions(+), 5 deletions(-)
>>>
>>> diff --git a/mm/khugepaged.c b/mm/khugepaged.c
>>> index 1b5dd3820eac..8e6fad7c7bd9 100644
>>> --- a/mm/khugepaged.c
>>> +++ b/mm/khugepaged.c
>>> @@ -1885,8 +1885,7 @@ static void collapse_file(struct mm_struct *mm,
>>>
>>>       if (nr_none) {
>>>               __mod_lruvec_page_state(new_page, NR_FILE_PAGES, nr_none);
>>> -             if (is_shmem)
>>> -                     __mod_lruvec_page_state(new_page, NR_SHMEM, nr_none);
>>> +             __mod_lruvec_page_state(new_page, NR_SHMEM, nr_none);
>>>       }
>>
>>
>> Might be worth a small comment here - even though folks can see in the
>> above code that this is only incremented in the shmem path, it might be
>> nice to say why it's always 0 for non-shmem (or conversely, why it can
>> only be nonzero for shmem).
> 
> Agreed, better to have some comments in the code.

Will try to add comments in next version. Thanks both!

> 
>>
>>>
>>>       /* Join all the small entries into a single multi-index entry */
>>> @@ -1950,10 +1949,10 @@ static void collapse_file(struct mm_struct *mm,
>>>
>>>               /* Something went wrong: roll back page cache changes */
>>>               xas_lock_irq(&xas);
>>> -             mapping->nrpages -= nr_none;
>>> -
>>> -             if (is_shmem)
>>> +             if (nr_none) {
>>> +                     mapping->nrpages -= nr_none;
>>>                       shmem_uncharge(mapping->host, nr_none);
>>> +             }
>>>
>>>               xas_set(&xas, start);
>>>               xas_for_each(&xas, page, end - 1) {
>>> --
>>> 2.23.0
>>>
>>>
>>
>> Otherwise,
>>
>> Reviewed-by: Zach O'Keefe <zokeefe@google.com>
>>
> .
>

Patch

diff --git a/mm/khugepaged.c b/mm/khugepaged.c
index 1b5dd3820eac..8e6fad7c7bd9 100644
--- a/mm/khugepaged.c
+++ b/mm/khugepaged.c
@@ -1885,8 +1885,7 @@ static void collapse_file(struct mm_struct *mm,
 
 	if (nr_none) {
 		__mod_lruvec_page_state(new_page, NR_FILE_PAGES, nr_none);
-		if (is_shmem)
-			__mod_lruvec_page_state(new_page, NR_SHMEM, nr_none);
+		__mod_lruvec_page_state(new_page, NR_SHMEM, nr_none);
 	}
 
 	/* Join all the small entries into a single multi-index entry */
@@ -1950,10 +1949,10 @@ static void collapse_file(struct mm_struct *mm,
 
 		/* Something went wrong: roll back page cache changes */
 		xas_lock_irq(&xas);
-		mapping->nrpages -= nr_none;
-
-		if (is_shmem)
+		if (nr_none) {
+			mapping->nrpages -= nr_none;
 			shmem_uncharge(mapping->host, nr_none);
+		}
 
 		xas_set(&xas, start);
 		xas_for_each(&xas, page, end - 1) {