[v2,3/9] mm/swap: avoid doing extra unlock error checks for direct swapin

Message ID: 20240102175338.62012-4-ryncsn@gmail.com
State: New
Series: swapin refactor for optimization and unified readahead

Commit Message

Kairui Song Jan. 2, 2024, 5:53 p.m. UTC
From: Kairui Song <kasong@tencent.com>

When swapping in a page, mem_cgroup_swapin_charge_folio() is called
for the newly allocated folio; nothing else is referencing the folio,
so there is no need to set the lock bit early. This avoids an extra
unlock check on the error path.

Signed-off-by: Kairui Song <kasong@tencent.com>
---
 mm/swap_state.c | 13 ++++++-------
 1 file changed, 6 insertions(+), 7 deletions(-)
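
For context, the reordering this patch makes in swapin_direct() can be
condensed as below. This is a sketch based only on the hunk in this
patch (surrounding code omitted), not a verbatim copy of the function:

	/* Before: the lock bit is set while the folio is still private,
	 * so a charge failure has to clear it again before the folio
	 * can be dropped. */
	__folio_set_locked(folio);
	__folio_set_swapbacked(folio);
	if (mem_cgroup_swapin_charge_folio(folio, vma->vm_mm,
					   GFP_KERNEL, entry)) {
		folio_unlock(folio);	/* only needed to undo the early lock */
		folio_put(folio);
		return NULL;
	}

	/* After: charge first; nothing else references the folio yet,
	 * so the nonatomic __folio_set_*() helpers are still safe after
	 * a successful charge, and the error path is a bare folio_put(). */
	if (mem_cgroup_swapin_charge_folio(folio, vma->vm_mm,
					   GFP_KERNEL, entry)) {
		folio_put(folio);
		return NULL;
	}
	__folio_set_locked(folio);
	__folio_set_swapbacked(folio);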

Comments

Huang, Ying Jan. 4, 2024, 8:10 a.m. UTC | #1
Kairui Song <ryncsn@gmail.com> writes:

> From: Kairui Song <kasong@tencent.com>
>
> When swapping in a page, mem_cgroup_swapin_charge_folio() is called
> for the newly allocated folio; nothing else is referencing the folio,
> so there is no need to set the lock bit early. This avoids an extra
> unlock check on the error path.
>
> Signed-off-by: Kairui Song <kasong@tencent.com>
> ---
>  mm/swap_state.c | 13 ++++++-------
>  1 file changed, 6 insertions(+), 7 deletions(-)
>
> diff --git a/mm/swap_state.c b/mm/swap_state.c
> index 24cb93ed5081..6130de8d5226 100644
> --- a/mm/swap_state.c
> +++ b/mm/swap_state.c
> @@ -881,16 +881,15 @@ struct folio *swapin_direct(swp_entry_t entry, gfp_t gfp_mask,
>  	folio = vma_alloc_folio(GFP_HIGHUSER_MOVABLE, 0,
>  				vma, vmf->address, false);
>  	if (folio) {
> -		__folio_set_locked(folio);
> -		__folio_set_swapbacked(folio);
> -
> -		if (mem_cgroup_swapin_charge_folio(folio,
> -					vma->vm_mm, GFP_KERNEL,
> -					entry)) {
> -			folio_unlock(folio);
> +		if (mem_cgroup_swapin_charge_folio(folio, vma->vm_mm,
> +						   GFP_KERNEL, entry)) {
>  			folio_put(folio);
>  			return NULL;
>  		}
> +
> +		__folio_set_locked(folio);
> +		__folio_set_swapbacked(folio);
> +
>  		mem_cgroup_swapin_uncharge_swap(entry);
>  
>  		shadow = get_shadow_from_swap_cache(entry);

I don't find any issue with the patch.  But the other caller of
mem_cgroup_swapin_charge_folio(), in __read_swap_cache_async(), sets up
the newly allocated folio the same way this code did before the change.
Better to keep them the same?  The benefit of the change is small too.
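
For reference, the corresponding setup in __read_swap_cache_async()
looks roughly like this (quoting the base tree from memory, so the
details may differ slightly):

	/* Folio is set up before the charge; the error path has to
	 * unlock it again. */
	__folio_set_locked(folio);
	__folio_set_swapbacked(folio);

	if (mem_cgroup_swapin_charge_folio(folio, NULL, gfp_mask, entry))
		goto fail_unlock;

	if (add_to_swap_cache(folio, entry, gfp_mask & GFP_RECLAIM_MASK, &shadow))
		goto fail_unlock;
	...
fail_unlock:
	put_swap_folio(folio, entry);
	folio_unlock(folio);
	folio_put(folio);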

--
Best Regards,
Huang, Ying
Kairui Song Jan. 9, 2024, 9:38 a.m. UTC | #2
On Thu, Jan 4, 2024 at 4:12 PM Huang, Ying <ying.huang@intel.com> wrote:
>
> Kairui Song <ryncsn@gmail.com> writes:
>
> > From: Kairui Song <kasong@tencent.com>
> >
> > When swapping in a page, mem_cgroup_swapin_charge_folio() is called
> > for the newly allocated folio; nothing else is referencing the folio,
> > so there is no need to set the lock bit early. This avoids an extra
> > unlock check on the error path.
> >
> > Signed-off-by: Kairui Song <kasong@tencent.com>
> > ---
> >  mm/swap_state.c | 13 ++++++-------
> >  1 file changed, 6 insertions(+), 7 deletions(-)
> >
> > diff --git a/mm/swap_state.c b/mm/swap_state.c
> > index 24cb93ed5081..6130de8d5226 100644
> > --- a/mm/swap_state.c
> > +++ b/mm/swap_state.c
> > @@ -881,16 +881,15 @@ struct folio *swapin_direct(swp_entry_t entry, gfp_t gfp_mask,
> >       folio = vma_alloc_folio(GFP_HIGHUSER_MOVABLE, 0,
> >                               vma, vmf->address, false);
> >       if (folio) {
> > -             __folio_set_locked(folio);
> > -             __folio_set_swapbacked(folio);
> > -
> > -             if (mem_cgroup_swapin_charge_folio(folio,
> > -                                     vma->vm_mm, GFP_KERNEL,
> > -                                     entry)) {
> > -                     folio_unlock(folio);
> > +             if (mem_cgroup_swapin_charge_folio(folio, vma->vm_mm,
> > +                                                GFP_KERNEL, entry)) {
> >                       folio_put(folio);
> >                       return NULL;
> >               }
> > +
> > +             __folio_set_locked(folio);
> > +             __folio_set_swapbacked(folio);
> > +
> >               mem_cgroup_swapin_uncharge_swap(entry);
> >
> >               shadow = get_shadow_from_swap_cache(entry);
>
> I don't find any issue with the patch.  But the other caller of
> mem_cgroup_swapin_charge_folio(), in __read_swap_cache_async(), sets up
> the newly allocated folio the same way this code did before the change.
> Better to keep them the same?  The benefit of the change is small too.

OK, this is just a trivial optimization; I can drop it.

Patch

diff --git a/mm/swap_state.c b/mm/swap_state.c
index 24cb93ed5081..6130de8d5226 100644
--- a/mm/swap_state.c
+++ b/mm/swap_state.c
@@ -881,16 +881,15 @@  struct folio *swapin_direct(swp_entry_t entry, gfp_t gfp_mask,
 	folio = vma_alloc_folio(GFP_HIGHUSER_MOVABLE, 0,
 				vma, vmf->address, false);
 	if (folio) {
-		__folio_set_locked(folio);
-		__folio_set_swapbacked(folio);
-
-		if (mem_cgroup_swapin_charge_folio(folio,
-					vma->vm_mm, GFP_KERNEL,
-					entry)) {
-			folio_unlock(folio);
+		if (mem_cgroup_swapin_charge_folio(folio, vma->vm_mm,
+						   GFP_KERNEL, entry)) {
 			folio_put(folio);
 			return NULL;
 		}
+
+		__folio_set_locked(folio);
+		__folio_set_swapbacked(folio);
+
 		mem_cgroup_swapin_uncharge_swap(entry);
 
 		shadow = get_shadow_from_swap_cache(entry);