
[4/8] fs: shrink only (SB_ACTIVE|SB_BORN) superblocks in super_cache_scan()

Message ID 20230531095742.2480623-5-qi.zheng@linux.dev (mailing list archive)
State Under Review
Series make unregistration of super_block shrinker more faster

Commit Message

Qi Zheng May 31, 2023, 9:57 a.m. UTC
From: Kirill Tkhai <tkhai@ya.ru>

This patch prepares the superblock shrinker for delayed unregistering.
It makes super_cache_scan() avoid shrinking superblocks that are not
active, using SB_ACTIVE as the indicator. If a superblock is not
active, super_cache_scan() simply exits with SHRINK_STOP as the result.

Note that SB_ACTIVE is cleared in generic_shutdown_super(), and that
is done under the write lock of s_umount. super_cache_scan() takes the
read lock of s_umount, so it cannot miss the cleared flag.

The SB_BORN check is added to super_cache_scan() purely for uniformity
with super_cache_count(), while super_cache_count() receives the
SB_ACTIVE check purely for uniformity with super_cache_scan().

After this patch, super_cache_scan() ignores unregistering superblocks,
so the function is compatible with splitting unregister_shrinker().
The next patches prepare super_cache_count() to follow the same path.

Signed-off-by: Kirill Tkhai <tkhai@ya.ru>
Signed-off-by: Qi Zheng <zhengqi.arch@bytedance.com>
---
 fs/super.c | 8 +++++++-
 1 file changed, 7 insertions(+), 1 deletion(-)

Comments

Dave Chinner May 31, 2023, 11:12 p.m. UTC | #1
On Wed, May 31, 2023 at 09:57:38AM +0000, Qi Zheng wrote:
> diff --git a/fs/super.c b/fs/super.c
> index 2ce4c72720f3..2ce54561e82e 100644
> --- a/fs/super.c
> +++ b/fs/super.c
> @@ -79,6 +79,11 @@ static unsigned long super_cache_scan(struct shrinker *shrink,
>  	if (!trylock_super(sb))
>  		return SHRINK_STOP;
>  
> +	if ((sb->s_flags & (SB_BORN|SB_ACTIVE)) != (SB_BORN|SB_ACTIVE)) {
> +		freed = SHRINK_STOP;
> +		goto unlock;
> +	}

This should not be here - the check to determine if the shrinker
should run is done in the ->count method. If we removed the SB_ACTIVE
flag between ->count and ->scan, then the superblock should be
locked and the trylock_super() call above should fail....

Indeed, the unregister_shrinker() call in deactivate_locked_super()
is done with the sb->s_umount held exclusively, and this happens
before we clear SB_ACTIVE in the ->kill_sb() -> kill_block_super()
-> generic_shutdown_super() path after the shrinker is unregistered.

Hence we can't get to this check without SB_ACTIVE being set - the
trylock will fail and then the unregister_shrinker() call will do
its thing to ensure the shrinker is never called again.

If the change to the shrinker code allows the shrinker to still be
actively running when we call ->kill_sb(), then that needs to be
fixed. The superblock shrinker must be stopped completely and never
run again before we call ->kill_sb().

>  	if (sb->s_op->nr_cached_objects)
>  		fs_objects = sb->s_op->nr_cached_objects(sb, sc);
>  
> @@ -110,6 +115,7 @@ static unsigned long super_cache_scan(struct shrinker *shrink,
>  		freed += sb->s_op->free_cached_objects(sb, sc);
>  	}
>  
> +unlock:
>  	up_read(&sb->s_umount);
>  	return freed;
>  }
> @@ -136,7 +142,7 @@ static unsigned long super_cache_count(struct shrinker *shrink,
>  	 * avoid this situation, so do the same here. The memory barrier is
>  	 * matched with the one in mount_fs() as we don't hold locks here.
>  	 */
> -	if (!(sb->s_flags & SB_BORN))
> +	if ((sb->s_flags & (SB_BORN|SB_ACTIVE)) != (SB_BORN|SB_ACTIVE))
>  		return 0;

This is fine because it's an unlocked check, but I don't think it's
actually necessary given the above. Indeed, if you are adding this,
you need to expand the comment above on why SB_ACTIVE needs
checking, and why the memory barrier doesn't actually apply to that
part of the check....

-Dave.

Patch

diff --git a/fs/super.c b/fs/super.c
index 2ce4c72720f3..2ce54561e82e 100644
--- a/fs/super.c
+++ b/fs/super.c
@@ -79,6 +79,11 @@  static unsigned long super_cache_scan(struct shrinker *shrink,
 	if (!trylock_super(sb))
 		return SHRINK_STOP;
 
+	if ((sb->s_flags & (SB_BORN|SB_ACTIVE)) != (SB_BORN|SB_ACTIVE)) {
+		freed = SHRINK_STOP;
+		goto unlock;
+	}
+
 	if (sb->s_op->nr_cached_objects)
 		fs_objects = sb->s_op->nr_cached_objects(sb, sc);
 
@@ -110,6 +115,7 @@  static unsigned long super_cache_scan(struct shrinker *shrink,
 		freed += sb->s_op->free_cached_objects(sb, sc);
 	}
 
+unlock:
 	up_read(&sb->s_umount);
 	return freed;
 }
@@ -136,7 +142,7 @@  static unsigned long super_cache_count(struct shrinker *shrink,
 	 * avoid this situation, so do the same here. The memory barrier is
 	 * matched with the one in mount_fs() as we don't hold locks here.
 	 */
-	if (!(sb->s_flags & SB_BORN))
+	if ((sb->s_flags & (SB_BORN|SB_ACTIVE)) != (SB_BORN|SB_ACTIVE))
 		return 0;
 	smp_rmb();