
[f2fs-dev] f2fs: Add a threshold for the FG_GC of zone UFS

Message ID 20240807110715.6541-1-liaoyuanhong@vivo.com (mailing list archive)
State New

Commit Message

Liao Yuanhong Aug. 7, 2024, 11:07 a.m. UTC
Currently, when a zoned UFS device is close to running out of free space and
enters FG_GC, the system keeps executing FG_GC even when only a small amount
of dirty space is available for reclamation. This can cause other operations
to slow down or hang.
Since the function that checks remaining space (has_enough_free_secs)
operates on sections as the smallest unit, it only makes sense to reclaim
space in multiples of sections. Additionally, the larger the zone size, the
slower reclamation becomes. If the total size of dirty blocks is less than
one section, not only is reclamation efficiency very poor, but the system
will stay stuck in FG_GC.
Therefore, add a threshold: if the amount of dirty data is below it, skip
FG_GC, since GC at that point is too slow and does not free enough space to
be worthwhile.

Signed-off-by: Liao Yuanhong <liaoyuanhong@vivo.com>
Signed-off-by: Wu Bo <bo.wu@vivo.com>
---
 fs/f2fs/f2fs.h    | 6 ++++++
 fs/f2fs/segment.c | 8 ++++++++
 2 files changed, 14 insertions(+)

Patch

diff --git a/fs/f2fs/f2fs.h b/fs/f2fs/f2fs.h
index 51fd5063a69c..aeff0d2a644f 100644
--- a/fs/f2fs/f2fs.h
+++ b/fs/f2fs/f2fs.h
@@ -133,6 +133,12 @@  typedef u32 nid_t;
 
 #define COMPRESS_EXT_NUM		16
 
+/*
+ * Avoid entering the FG-GC when the total size
+ * of dirty blocks is below this value
+ */
+#define FOREGROUND_GC_THRESHOLD	2
+
 enum blkzone_allocation_policy {
 	BLKZONE_ALLOC_PRIOR_SEQ,	/* Prioritize writing to sequential zones */
 	BLKZONE_ALLOC_ONLY_SEQ,		/* Only allow writing to sequential zones */
diff --git a/fs/f2fs/segment.c b/fs/f2fs/segment.c
index 741e46f9d0fd..5ad7b5362079 100644
--- a/fs/f2fs/segment.c
+++ b/fs/f2fs/segment.c
@@ -418,6 +418,8 @@  int f2fs_commit_atomic_write(struct inode *inode)
  */
 void f2fs_balance_fs(struct f2fs_sb_info *sbi, bool need)
 {
+	block_t invalid_user_blocks = sbi->user_block_count - written_block_count(sbi);
+
 	if (f2fs_cp_error(sbi))
 		return;
 
@@ -438,6 +440,12 @@  void f2fs_balance_fs(struct f2fs_sb_info *sbi, bool need)
 	if (has_enough_free_secs(sbi, 0, 0))
 		return;
 
+	/*
+	 * If invalid blocks are fewer than the threshold, FG_GC is not worthwhile.
+	 */
+	if (invalid_user_blocks < FOREGROUND_GC_THRESHOLD * BLKS_PER_SEC(sbi))
+		return;
+
 	if (test_opt(sbi, GC_MERGE) && sbi->gc_thread &&
 				sbi->gc_thread->f2fs_gc_task) {
 		DEFINE_WAIT(wait);