From patchwork Mon Feb  2 23:29:25 2015
From: Jaegeuk Kim
Date: Mon, 2 Feb 2015 15:29:25 -0800
To: Chao Yu
Cc: linux-kernel@vger.kernel.org, linux-fsdevel@vger.kernel.org,
    linux-f2fs-devel@lists.sourceforge.net
Subject: Re: [PATCH 5/5 v2] f2fs: introduce a batched trim
Message-ID: <20150202232803.GB34575@jaegeuk-mac02.mot.com>
In-Reply-To: <006701d03c4b$a62ec0c0$f28c4240$@samsung.com>
References: <1422401503-4769-1-git-send-email-jaegeuk@kernel.org>
 <1422401503-4769-5-git-send-email-jaegeuk@kernel.org>
 <003b01d03bc0$a5f23d20$f1d6b760$@samsung.com>
 <20150129214117.GB17521@jaegeuk-mac02>
 <006701d03c4b$a62ec0c0$f28c4240$@samsung.com>

Change log from v1:
 o add description
 o change the # of batched segments as suggested by Chao
 o make the # of batched segments consistent

This patch introduces a batched trimming feature, which submits split
discard commands. This is to avoid long latency due to huge trim
commands. If fstrim were triggered over the range from 0 to the end of
the device, we would have to hold all the checkpoint-related mutexes
for the entire trim, resulting in very long latency.
Signed-off-by: Jaegeuk Kim
---
 fs/f2fs/f2fs.h    |  2 ++
 fs/f2fs/segment.c | 16 +++++++++++-----
 2 files changed, 13 insertions(+), 5 deletions(-)

diff --git a/fs/f2fs/f2fs.h b/fs/f2fs/f2fs.h
index 8231a59..ec5e66f 100644
--- a/fs/f2fs/f2fs.h
+++ b/fs/f2fs/f2fs.h
@@ -105,6 +105,8 @@ enum {
 	CP_DISCARD,
 };
 
+#define BATCHED_TRIM_SEGMENTS(sbi)	(((sbi)->segs_per_sec) << 5)
+
 struct cp_control {
 	int reason;
 	__u64 trim_start;
diff --git a/fs/f2fs/segment.c b/fs/f2fs/segment.c
index 5ea57ec..b85bb97 100644
--- a/fs/f2fs/segment.c
+++ b/fs/f2fs/segment.c
@@ -1066,14 +1066,20 @@ int f2fs_trim_fs(struct f2fs_sb_info *sbi, struct fstrim_range *range)
 	end_segno = (end >= MAX_BLKADDR(sbi)) ? MAIN_SEGS(sbi) - 1 :
 						GET_SEGNO(sbi, end);
 	cpc.reason = CP_DISCARD;
-	cpc.trim_start = start_segno;
-	cpc.trim_end = end_segno;
 	cpc.trim_minlen = range->minlen >> sbi->log_blocksize;
 
 	/* do checkpoint to issue discard commands safely */
-	mutex_lock(&sbi->gc_mutex);
-	write_checkpoint(sbi, &cpc);
-	mutex_unlock(&sbi->gc_mutex);
+	for (; start_segno <= end_segno;
+			start_segno += BATCHED_TRIM_SEGMENTS(sbi)) {
+		cpc.trim_start = start_segno;
+		cpc.trim_end = min_t(unsigned int,
+			start_segno + BATCHED_TRIM_SEGMENTS(sbi) - 1,
+			end_segno);
+
+		mutex_lock(&sbi->gc_mutex);
+		write_checkpoint(sbi, &cpc);
+		mutex_unlock(&sbi->gc_mutex);
+	}
 out:
 	range->len = cpc.trimmed << sbi->log_blocksize;
 	return 0;
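
As a side note for anyone who wants to exercise this path: below is a
minimal userspace sketch (not part of the patch) that issues the
whole-device trim described in the changelog, using the standard FITRIM
ioctl and struct fstrim_range from <linux/fs.h>. fstrim(8) does
essentially the same thing; the mount point "/mnt/f2fs" is only a
placeholder.

/*
 * Illustrative only: trigger a trim from 0 to the end of the device,
 * the worst case the changelog describes. "/mnt/f2fs" is a placeholder.
 */
#include <fcntl.h>
#include <limits.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <linux/fs.h>		/* FITRIM, struct fstrim_range */

int main(void)
{
	struct fstrim_range range = {
		.start = 0,
		.len = ULLONG_MAX,	/* from 0 to the end of the device */
		.minlen = 0,
	};
	int fd = open("/mnt/f2fs", O_RDONLY);

	if (fd < 0) {
		perror("open");
		return 1;
	}
	if (ioctl(fd, FITRIM, &range) < 0) {
		perror("FITRIM");
		close(fd);
		return 1;
	}
	/* f2fs_trim_fs() reports the trimmed byte count back in range.len */
	printf("trimmed %llu bytes\n", (unsigned long long)range.len);
	close(fd);
	return 0;
}

With BATCHED_TRIM_SEGMENTS(sbi) defined as segs_per_sec << 5, such a
request is now served as a series of checkpoints covering 32 sections
each, instead of a single checkpoint that holds the mutexes for the
whole range.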