From patchwork Thu Oct 31 23:45:51 2019
X-Patchwork-Submitter: Dave Chinner
X-Patchwork-Id: 11222215
From: Dave Chinner
To: linux-xfs@vger.kernel.org
Cc: linux-fsdevel@vger.kernel.org, linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: [PATCH 01/28] xfs: Lower CIL flush limit for large logs
Date: Fri, 1 Nov 2019 10:45:51 +1100
Message-Id: <20191031234618.15403-2-david@fromorbit.com>
In-Reply-To: <20191031234618.15403-1-david@fromorbit.com>
References: <20191031234618.15403-1-david@fromorbit.com>

From: Dave Chinner

The current CIL size aggregation limit is 1/8th the log size. This
means for large logs we might be aggregating at least 250MB of dirty
objects in memory before the CIL is flushed to the journal. With CIL
shadow buffers sitting around, this means the CIL is often consuming
>500MB of temporary memory that is all allocated under GFP_NOFS
conditions.

Flushing the CIL can take some time to do if there is other IO
ongoing, and can introduce substantial log force latency by itself.
It also pins the memory until the objects are in the AIL and can be
written back and reclaimed by shrinkers. Hence this threshold also
tends to determine the minimum amount of memory XFS can operate in
under heavy modification without triggering the OOM killer.

Modify the CIL space limit to prevent such huge amounts of pinned
metadata from aggregating. We can have 2MB of log IO in flight at
once, so limit aggregation to 16x this size. This threshold was
chosen as it has little impact on performance (on 16-way fsmark) or
log traffic, but pins a lot less memory on large logs, especially
under heavy memory pressure.
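To make the new bound concrete, here is a small standalone sketch of the
arithmetic (illustration only, not part of the patch; the 2MB iclog window
and the 16x multiplier are taken from the description above):

#include <stdio.h>

#define ICLOG_WINDOW	(2L * 1024 * 1024)	/* 2MB of log IO in flight */

static long cil_space_limit(long logsize)
{
	long cap = 16L * ICLOG_WINDOW;		/* 16x window = 32MB cap */
	long frac = logsize >> 3;		/* old limit: 1/8th of the log */

	return frac < cap ? frac : cap;
}

int main(void)
{
	long sizes[] = { 64L << 20, 512L << 20, 2048L << 20 };

	for (int i = 0; i < 3; i++)
		printf("log %5ldMB -> CIL push limit %3ldMB\n",
		       sizes[i] >> 20, cil_space_limit(sizes[i]) >> 20);
	return 0;
}

For a 64MB log nothing changes (8MB limit), while a 2GB log drops from a
256MB aggregation limit to the 32MB cap.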
An aggregation limit of 8x showed a 5-10% performance degradation and a
50% increase in log throughput for the same workload, so that was
clearly too small for highly concurrent workloads on large logs.

This was found via trace analysis of AIL behaviour, e.g. insertion
from a single CIL flush:

xfs_ail_insert: old lsn 0/0 new lsn 1/3033090 type XFS_LI_INODE flags IN_AIL

$ grep xfs_ail_insert /mnt/scratch/s.t | grep "new lsn 1/3033090" | wc -l
1721823
$

So there were 1.7 million objects inserted into the AIL from this CIL
checkpoint, the first at 2323.392108, the last at 2325.667566, which
was the end of the trace (i.e. it hadn't finished). Clearly a major
problem.

Signed-off-by: Dave Chinner
Reviewed-by: Brian Foster
Reviewed-by: Darrick J. Wong
---
 fs/xfs/xfs_log_priv.h | 29 +++++++++++++++++++++++------
 1 file changed, 23 insertions(+), 6 deletions(-)

diff --git a/fs/xfs/xfs_log_priv.h b/fs/xfs/xfs_log_priv.h
index 4f19375f6592..abd382cfffe3 100644
--- a/fs/xfs/xfs_log_priv.h
+++ b/fs/xfs/xfs_log_priv.h
@@ -318,13 +318,30 @@ struct xfs_cil {
  * tries to keep 25% of the log free, so we need to keep below that limit or we
  * risk running out of free log space to start any new transactions.
  *
- * In order to keep background CIL push efficient, we will set a lower
- * threshold at which background pushing is attempted without blocking current
- * transaction commits. A separate, higher bound defines when CIL pushes are
- * enforced to ensure we stay within our maximum checkpoint size bounds.
- * threshold, yet give us plenty of space for aggregation on large logs.
+ * In order to keep background CIL push efficient, we only need to ensure the
+ * CIL is large enough to maintain sufficient in-memory relogging to avoid
+ * repeated physical writes of frequently modified metadata. If we allow the CIL
+ * to grow to a substantial fraction of the log, then we may be pinning hundreds
+ * of megabytes of metadata in memory until the CIL flushes. This can cause
+ * issues when we are running low on memory - pinned memory cannot be reclaimed,
+ * and the CIL consumes a lot of memory. Hence we need to set an upper physical
+ * size limit for the CIL that limits the maximum amount of memory pinned by the
+ * CIL but does not limit performance by reducing relogging efficiency
+ * significantly.
+ *
+ * As such, the CIL push threshold ends up being the smaller of two thresholds:
+ * - a threshold large enough that it allows the CIL to be pushed and progress
+ *   to be made without excessive blocking of incoming transaction commits.
+ *   This is defined to be 12.5% of the log space - half the 25% push threshold
+ *   of the AIL.
+ * - small enough that it doesn't pin excessive amounts of memory but maintains
+ *   close to peak relogging efficiency. This is defined to be 16x the iclog
+ *   buffer window (32MB) as measurements have shown this to be roughly the
+ *   point of diminishing performance increases under highly concurrent
+ *   modification workloads.
  */
-#define XLOG_CIL_SPACE_LIMIT(log)	(log->l_logsize >> 3)
+#define XLOG_CIL_SPACE_LIMIT(log)	\
+	min_t(int, (log)->l_logsize >> 3, BBTOB(XLOG_TOTAL_REC_SHIFT(log)) << 4)
 
 /*
  * ticket grant locks, queues and accounting have their own cachelines

From patchwork Thu Oct 31 23:45:52 2019
X-Patchwork-Submitter: Dave Chinner
X-Patchwork-Id: 11222211
From: Dave Chinner
To: linux-xfs@vger.kernel.org
Cc: linux-fsdevel@vger.kernel.org, linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: [PATCH 02/28] xfs: Throttle commits on delayed background CIL push
Date: Fri, 1 Nov 2019 10:45:52 +1100
Message-Id: <20191031234618.15403-3-david@fromorbit.com>
In-Reply-To: <20191031234618.15403-1-david@fromorbit.com>
References: <20191031234618.15403-1-david@fromorbit.com>

From: Dave Chinner

In certain situations the background CIL push can be indefinitely
delayed. While we have workarounds for the obvious cases now, they
don't solve the underlying issue: there is no upper limit on the CIL
at which we will either force or wait for a background push to start,
so the CIL can grow without bound until it consumes all log space.

To fix this, add a new wait queue to the CIL which allows background
pushes to wait for the CIL context to be switched out. This happens
when the push starts, so it will allow us to block incoming
transaction commit completion until the push has started. This will
only affect processes that are running modifications, and only when
the CIL threshold has been significantly overrun.

This has no apparent impact on performance, and doesn't even trigger
until over 45 million inodes had been created in a 16-way fsmark test
on a 2GB log.
That was limiting at 64MB of log space used, so the active CIL size
is only about 3% of the total log in that case. The concurrent
removal of those files did not trigger the background sleep at all.

Signed-off-by: Dave Chinner
Reviewed-by: Brian Foster
---
 fs/xfs/xfs_log_cil.c  | 37 +++++++++++++++++++++++++++++++++----
 fs/xfs/xfs_log_priv.h | 24 ++++++++++++++++++++++++
 fs/xfs/xfs_trace.h    |  1 +
 3 files changed, 58 insertions(+), 4 deletions(-)

diff --git a/fs/xfs/xfs_log_cil.c b/fs/xfs/xfs_log_cil.c
index a1204424a938..fc3f8e849dec 100644
--- a/fs/xfs/xfs_log_cil.c
+++ b/fs/xfs/xfs_log_cil.c
@@ -670,6 +670,11 @@ xlog_cil_push(
 	push_seq = cil->xc_push_seq;
 	ASSERT(push_seq <= ctx->sequence);
 
+	/*
+	 * Wake up any background push waiters now that this context is being
+	 * pushed.
+	 */
+	wake_up_all(&ctx->push_wait);
+
 	/*
 	 * Check if we've anything to push. If there is nothing, then we don't
 	 * move on to a new sequence number and so we have to be able to push
@@ -746,6 +751,7 @@ xlog_cil_push(
 	 */
 	INIT_LIST_HEAD(&new_ctx->committing);
 	INIT_LIST_HEAD(&new_ctx->busy_extents);
+	init_waitqueue_head(&new_ctx->push_wait);
 	new_ctx->sequence = ctx->sequence + 1;
 	new_ctx->cil = cil;
 	cil->xc_ctx = new_ctx;
@@ -900,7 +906,7 @@ xlog_cil_push_work(
  */
 static void
 xlog_cil_push_background(
-	struct xlog	*log)
+	struct xlog	*log) __releases(cil->xc_ctx_lock)
 {
 	struct xfs_cil	*cil = log->l_cilp;
 
@@ -914,14 +920,36 @@ xlog_cil_push_background(
 	 * don't do a background push if we haven't used up all the
 	 * space available yet.
 	 */
-	if (cil->xc_ctx->space_used < XLOG_CIL_SPACE_LIMIT(log))
+	if (cil->xc_ctx->space_used < XLOG_CIL_SPACE_LIMIT(log)) {
+		up_read(&cil->xc_ctx_lock);
 		return;
+	}
 
 	spin_lock(&cil->xc_push_lock);
 	if (cil->xc_push_seq < cil->xc_current_sequence) {
 		cil->xc_push_seq = cil->xc_current_sequence;
 		queue_work(log->l_mp->m_cil_workqueue, &cil->xc_push_work);
 	}
+
+	/*
+	 * Drop the context lock now, we can't hold that if we need to sleep
+	 * because we are over the blocking threshold. The push_lock is still
+	 * held, so blocking threshold sleep/wakeup is still correctly
+	 * serialised here.
+	 */
+	up_read(&cil->xc_ctx_lock);
+
+	/*
+	 * If we are well over the space limit, throttle the work that is being
+	 * done until the push work on this context has begun.
+	 */
+	if (cil->xc_ctx->space_used >= XLOG_CIL_BLOCKING_SPACE_LIMIT(log)) {
+		trace_xfs_log_cil_wait(log, cil->xc_ctx->ticket);
+		ASSERT(cil->xc_ctx->space_used < log->l_logsize);
+		xlog_wait(&cil->xc_ctx->push_wait, &cil->xc_push_lock);
+		return;
+	}
+
 	spin_unlock(&cil->xc_push_lock);
 }
 
@@ -1038,9 +1066,9 @@ xfs_log_commit_cil(
 		if (lip->li_ops->iop_committing)
 			lip->li_ops->iop_committing(lip, xc_commit_lsn);
 	}
-	xlog_cil_push_background(log);
-
-	up_read(&cil->xc_ctx_lock);
+	/* xlog_cil_push_background() releases cil->xc_ctx_lock */
+	xlog_cil_push_background(log);
 }
 
 /*
@@ -1199,6 +1227,7 @@ xlog_cil_init(
 
 	INIT_LIST_HEAD(&ctx->committing);
 	INIT_LIST_HEAD(&ctx->busy_extents);
+	init_waitqueue_head(&ctx->push_wait);
 	ctx->sequence = 1;
 	ctx->cil = cil;
 	cil->xc_ctx = ctx;
diff --git a/fs/xfs/xfs_log_priv.h b/fs/xfs/xfs_log_priv.h
index abd382cfffe3..1cc5333a3f6a 100644
--- a/fs/xfs/xfs_log_priv.h
+++ b/fs/xfs/xfs_log_priv.h
@@ -242,6 +242,7 @@ struct xfs_cil_ctx {
 	struct xfs_log_vec	*lv_chain;	/* logvecs being pushed */
 	struct list_head	iclog_entry;
 	struct list_head	committing;	/* ctx committing list */
+	wait_queue_head_t	push_wait;	/* background push throttle */
 	struct work_struct	discard_endio_work;
 };
 
@@ -339,10 +340,33 @@ struct xfs_cil {
  *   buffer window (32MB) as measurements have shown this to be roughly the
  *   point of diminishing performance increases under highly concurrent
  *   modification workloads.
+ *
+ * To prevent the CIL from overflowing upper commit size bounds, we introduce a
+ * new threshold at which we block committing transactions until the background
+ * CIL commit commences and switches to a new context. While this is not a hard
+ * limit, it forces the process committing a transaction to the CIL to block and
+ * yield the CPU, giving the CIL push work a chance to be scheduled and start
+ * work. This prevents a process running lots of transactions from overfilling
+ * the CIL because it is not yielding the CPU. We set the blocking limit at
+ * twice the background push space threshold so we keep in line with the AIL
+ * push thresholds.
+ *
+ * Note: this is not a -hard- limit as blocking is applied after the transaction
+ * is inserted into the CIL and the push has been triggered. It is largely a
+ * throttling mechanism that allows the CIL push to be scheduled and run. A hard
+ * limit will be difficult to implement without introducing global serialisation
+ * in the CIL commit fast path, and it's not at all clear that we actually need
+ * such hard limits given the ~7 years we've run without a hard limit before
+ * finding the first situation where a checkpoint size overflow actually
+ * occurred. Hence the simple throttle, and an ASSERT check to tell us that
+ * we've overrun the max size.
  */
 #define XLOG_CIL_SPACE_LIMIT(log)	\
 	min_t(int, (log)->l_logsize >> 3, BBTOB(XLOG_TOTAL_REC_SHIFT(log)) << 4)
+#define XLOG_CIL_BLOCKING_SPACE_LIMIT(log)	\
+	(XLOG_CIL_SPACE_LIMIT(log) * 2)
+
 /*
  * ticket grant locks, queues and accounting have their own cachelines
  * as these are quite hot and can be operated on concurrently.
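Schematically, the two thresholds now interact like the fragment below
(sketch only; queue_background_push() and wait_for_push_to_start() are
hypothetical stand-ins for the queue_work() and xlog_wait() calls in the
diff above):

	if (space_used < XLOG_CIL_SPACE_LIMIT(log))
		return;				/* under 1x: nothing to do */
	queue_background_push();		/* over 1x: kick the push work */
	if (space_used >= XLOG_CIL_BLOCKING_SPACE_LIMIT(log))
		wait_for_push_to_start();	/* over 2x: throttle committers */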
diff --git a/fs/xfs/xfs_trace.h b/fs/xfs/xfs_trace.h
index c13bb3655e48..d3635d1e3de6 100644
--- a/fs/xfs/xfs_trace.h
+++ b/fs/xfs/xfs_trace.h
@@ -1011,6 +1011,7 @@ DEFINE_LOGGRANT_EVENT(xfs_log_regrant_reserve_sub);
 DEFINE_LOGGRANT_EVENT(xfs_log_ungrant_enter);
 DEFINE_LOGGRANT_EVENT(xfs_log_ungrant_exit);
 DEFINE_LOGGRANT_EVENT(xfs_log_ungrant_sub);
+DEFINE_LOGGRANT_EVENT(xfs_log_cil_wait);
 
 DECLARE_EVENT_CLASS(xfs_log_item_class,
 	TP_PROTO(struct xfs_log_item *lip),

From patchwork Thu Oct 31 23:45:53 2019
X-Patchwork-Submitter: Dave Chinner
X-Patchwork-Id: 11222221
From: Dave Chinner
To: linux-xfs@vger.kernel.org
Cc: linux-fsdevel@vger.kernel.org, linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: [PATCH 03/28] xfs: don't allow log IO to be throttled
Date: Fri, 1 Nov 2019 10:45:53 +1100
Message-Id: <20191031234618.15403-4-david@fromorbit.com>
In-Reply-To: <20191031234618.15403-1-david@fromorbit.com>
References: <20191031234618.15403-1-david@fromorbit.com>

From: Dave Chinner

Running metadata intensive workloads, I've been seeing AIL pushing
getting stuck on pinned buffers and triggering log forces. The log
force is taking a long time to run because the log IO is getting
throttled by wbt_wait() - the block layer writeback throttle. It's
being throttled because there is a huge amount of metadata writeback
going on which is filling the request queue. IOWs, we have a priority
inversion problem here.

Mark the log IO bios with REQ_IDLE so they don't get throttled by the
block layer writeback throttle. When we are forcing the CIL, we are
likely to need to do tens of log IOs, and they are issued as fast as
they can be built and the IO completed. Hence REQ_IDLE is appropriate
- it's an indication that more IO will follow shortly.
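For reference, the block layer side of this works roughly as modelled
below (a simplified paraphrase of the wbt decision in block/blk-wbt.c of
this era, not the actual kernel function; the MODEL_* constants stand in
for the real REQ_* flags):

#include <stdbool.h>

#define MODEL_REQ_SYNC	(1u << 0)
#define MODEL_REQ_IDLE	(1u << 1)

static bool wbt_should_throttle_write(unsigned int opf)
{
	/*
	 * Writes marked both SYNC and IDLE (O_DIRECT style, and log IO
	 * after this patch) bypass the throttle; background writeback
	 * and SYNC-only writes remain throttled.
	 */
	if ((opf & (MODEL_REQ_SYNC | MODEL_REQ_IDLE)) ==
	    (MODEL_REQ_SYNC | MODEL_REQ_IDLE))
		return false;
	return true;
}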
And because we also set REQ_SYNC, the writeback throttle will now
treat log IO the same way it treats direct IO writes - it will not
throttle them at all. Hence we solve the priority inversion problem
caused by the writeback throttle being unable to distinguish between
high priority log IO and background metadata writeback.

Signed-off-by: Dave Chinner
Reviewed-by: Brian Foster
Reviewed-by: Christoph Hellwig
Reviewed-by: Darrick J. Wong
---
 fs/xfs/xfs_log.c | 10 +++++++++-
 1 file changed, 9 insertions(+), 1 deletion(-)

diff --git a/fs/xfs/xfs_log.c b/fs/xfs/xfs_log.c
index 33fb38864207..e7441a983b3b 100644
--- a/fs/xfs/xfs_log.c
+++ b/fs/xfs/xfs_log.c
@@ -1792,7 +1792,15 @@ xlog_write_iclog(
 	iclog->ic_bio.bi_iter.bi_sector = log->l_logBBstart + bno;
 	iclog->ic_bio.bi_end_io = xlog_bio_end_io;
 	iclog->ic_bio.bi_private = iclog;
-	iclog->ic_bio.bi_opf = REQ_OP_WRITE | REQ_META | REQ_SYNC | REQ_FUA;
+
+	/*
+	 * We use REQ_SYNC | REQ_IDLE here to tell the block layer that there
+	 * are more IOs coming immediately after this one. This prevents the
+	 * block layer writeback throttle from throttling log writes behind
+	 * background metadata writeback and causing priority inversions.
+	 */
+	iclog->ic_bio.bi_opf = REQ_OP_WRITE | REQ_META | REQ_SYNC |
+				REQ_IDLE | REQ_FUA;
 	if (need_flush)
 		iclog->ic_bio.bi_opf |= REQ_PREFLUSH;

From patchwork Thu Oct 31 23:45:54 2019
X-Patchwork-Submitter: Dave Chinner
X-Patchwork-Id: 11222201
From: Dave Chinner
To: linux-xfs@vger.kernel.org
Cc: linux-fsdevel@vger.kernel.org, linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: [PATCH 04/28] xfs: Improve metadata buffer reclaim accountability
Date: Fri, 1 Nov 2019 10:45:54 +1100
Message-Id: <20191031234618.15403-5-david@fromorbit.com>
In-Reply-To: <20191031234618.15403-1-david@fromorbit.com>
References: <20191031234618.15403-1-david@fromorbit.com>
From: Dave Chinner

The buffer cache shrinker frees more than just the xfs_buf slab
objects - it also frees the pages attached to the buffers. Make sure
the memory reclaim code accounts for this memory being freed
correctly, similar to how the inode shrinker accounts for pages freed
from the page cache due to mapping invalidation.

We also need to make sure that the mm subsystem knows these are
reclaimable objects. We provide the memory reclaim subsystem with a
shrinker to reclaim xfs_bufs, so we should really mark the slab that
way. We also have a lot of xfs_bufs in a busy system, so spread them
around like we do inodes.

Signed-off-by: Dave Chinner
Reviewed-by: Brian Foster
---
 fs/xfs/xfs_buf.c | 6 +++++-
 1 file changed, 5 insertions(+), 1 deletion(-)

diff --git a/fs/xfs/xfs_buf.c b/fs/xfs/xfs_buf.c
index 1e63dd3d1257..d34e5d2edacd 100644
--- a/fs/xfs/xfs_buf.c
+++ b/fs/xfs/xfs_buf.c
@@ -324,6 +324,9 @@ xfs_buf_free(
 			__free_page(page);
 		}
+		if (current->reclaim_state)
+			current->reclaim_state->reclaimed_slab +=
+							bp->b_page_count;
 	} else if (bp->b_flags & _XBF_KMEM)
 		kmem_free(bp->b_addr);
 	_xfs_buf_free_pages(bp);
@@ -2061,7 +2064,8 @@ int __init
 xfs_buf_init(void)
 {
 	xfs_buf_zone = kmem_zone_init_flags(sizeof(xfs_buf_t), "xfs_buf",
-						KM_ZONE_HWALIGN, NULL);
+			KM_ZONE_HWALIGN | KM_ZONE_SPREAD | KM_ZONE_RECLAIM,
+			NULL);
 
 	if (!xfs_buf_zone)
 		goto out;

From patchwork Thu Oct 31 23:45:55 2019
X-Patchwork-Submitter: Dave Chinner
X-Patchwork-Id: 11222199
From: Dave Chinner
To: linux-xfs@vger.kernel.org
Cc: linux-fsdevel@vger.kernel.org, linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: [PATCH 05/28] xfs: correctly account for reclaimable slabs
Date: Fri, 1 Nov 2019 10:45:55 +1100
Message-Id: <20191031234618.15403-6-david@fromorbit.com>
In-Reply-To: <20191031234618.15403-1-david@fromorbit.com>
References: <20191031234618.15403-1-david@fromorbit.com>
From: Dave Chinner

The XFS inode item slab is actually reclaimed by inode shrinker
callbacks from the memory reclaim subsystem. These should be marked
as reclaimable so the mm subsystem has the full picture of how much
memory it can actually reclaim from the XFS slab caches.

Signed-off-by: Dave Chinner
Reviewed-by: Brian Foster
Reviewed-by: Darrick J. Wong
---
 fs/xfs/xfs_super.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/fs/xfs/xfs_super.c b/fs/xfs/xfs_super.c
index bcb1575a5652..ebe2ccd36127 100644
--- a/fs/xfs/xfs_super.c
+++ b/fs/xfs/xfs_super.c
@@ -1876,7 +1876,7 @@ xfs_init_zones(void)
 
 	xfs_ili_zone = kmem_zone_init_flags(sizeof(xfs_inode_log_item_t),
 						"xfs_ili",
-						KM_ZONE_SPREAD, NULL);
+					KM_ZONE_SPREAD | KM_ZONE_RECLAIM, NULL);
 	if (!xfs_ili_zone)
 		goto out_destroy_inode_zone;
 	xfs_icreate_zone = kmem_zone_init(sizeof(struct xfs_icreate_item),

From patchwork Thu Oct 31 23:45:56 2019
X-Patchwork-Submitter: Dave Chinner
X-Patchwork-Id: 11222181
From: Dave Chinner
To: linux-xfs@vger.kernel.org
Cc: linux-fsdevel@vger.kernel.org, linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: [PATCH 06/28] xfs: factor common AIL item deletion code
Date: Fri, 1 Nov 2019 10:45:56 +1100
Message-Id: <20191031234618.15403-7-david@fromorbit.com>
In-Reply-To: <20191031234618.15403-1-david@fromorbit.com>
References: <20191031234618.15403-1-david@fromorbit.com>

From: Dave Chinner

Factor the common AIL deletion code that does all the wakeups into a
helper so we only have one copy of this somewhat tricky code to
interface with all the wakeups necessary when the LSN of the log tail
changes.
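After this change the calling convention at the call sites becomes the
following (condensed sketch of the diff below - enter with ail_lock held,
and the helper is responsible for dropping it and doing the wakeups):

	spin_lock(&ailp->ail_lock);
	mlip_changed = xfs_ail_delete_one(ailp, lip);
	xfs_ail_update_finish(ailp, mlip_changed);	/* drops ail_lock */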
Signed-off-by: Dave Chinner
Reviewed-by: Christoph Hellwig
---
 fs/xfs/xfs_inode_item.c | 12 +----------
 fs/xfs/xfs_trans_ail.c  | 48 ++++++++++++++++++++++-------------------
 fs/xfs/xfs_trans_priv.h |  4 +++-
 3 files changed, 30 insertions(+), 34 deletions(-)

diff --git a/fs/xfs/xfs_inode_item.c b/fs/xfs/xfs_inode_item.c
index bb8f076805b9..ab12e526540a 100644
--- a/fs/xfs/xfs_inode_item.c
+++ b/fs/xfs/xfs_inode_item.c
@@ -743,17 +743,7 @@ xfs_iflush_done(
 			xfs_clear_li_failed(blip);
 		}
 	}
-
-	if (mlip_changed) {
-		if (!XFS_FORCED_SHUTDOWN(ailp->ail_mount))
-			xlog_assign_tail_lsn_locked(ailp->ail_mount);
-		if (list_empty(&ailp->ail_head))
-			wake_up_all(&ailp->ail_empty);
-	}
-	spin_unlock(&ailp->ail_lock);
-
-	if (mlip_changed)
-		xfs_log_space_wake(ailp->ail_mount);
+	xfs_ail_update_finish(ailp, mlip_changed);
 }
 
 /*
diff --git a/fs/xfs/xfs_trans_ail.c b/fs/xfs/xfs_trans_ail.c
index 6ccfd75d3c24..656819523bbd 100644
--- a/fs/xfs/xfs_trans_ail.c
+++ b/fs/xfs/xfs_trans_ail.c
@@ -678,6 +678,27 @@ xfs_ail_push_all_sync(
 	finish_wait(&ailp->ail_empty, &wait);
 }
 
+void
+xfs_ail_update_finish(
+	struct xfs_ail		*ailp,
+	bool			do_tail_update) __releases(ailp->ail_lock)
+{
+	struct xfs_mount	*mp = ailp->ail_mount;
+
+	if (!do_tail_update) {
+		spin_unlock(&ailp->ail_lock);
+		return;
+	}
+
+	if (!XFS_FORCED_SHUTDOWN(mp))
+		xlog_assign_tail_lsn_locked(mp);
+
+	if (list_empty(&ailp->ail_head))
+		wake_up_all(&ailp->ail_empty);
+	spin_unlock(&ailp->ail_lock);
+	xfs_log_space_wake(mp);
+}
+
 /*
  * xfs_trans_ail_update - bulk AIL insertion operation.
  *
@@ -737,15 +758,7 @@ xfs_trans_ail_update_bulk(
 	if (!list_empty(&tmp))
 		xfs_ail_splice(ailp, cur, &tmp, lsn);
 
-	if (mlip_changed) {
-		if (!XFS_FORCED_SHUTDOWN(ailp->ail_mount))
-			xlog_assign_tail_lsn_locked(ailp->ail_mount);
-		spin_unlock(&ailp->ail_lock);
-
-		xfs_log_space_wake(ailp->ail_mount);
-	} else {
-		spin_unlock(&ailp->ail_lock);
-	}
+	xfs_ail_update_finish(ailp, mlip_changed);
 }
 
 bool
@@ -789,10 +802,10 @@ void
 xfs_trans_ail_delete(
 	struct xfs_ail		*ailp,
 	struct xfs_log_item	*lip,
-	int			shutdown_type) __releases(ailp->ail_lock)
+	int			shutdown_type)
 {
 	struct xfs_mount	*mp = ailp->ail_mount;
-	bool			mlip_changed;
+	bool			need_update;
 
 	if (!test_bit(XFS_LI_IN_AIL, &lip->li_flags)) {
 		spin_unlock(&ailp->ail_lock);
@@ -805,17 +818,8 @@ xfs_trans_ail_delete(
 		return;
 	}
 
-	mlip_changed = xfs_ail_delete_one(ailp, lip);
-	if (mlip_changed) {
-		if (!XFS_FORCED_SHUTDOWN(mp))
-			xlog_assign_tail_lsn_locked(mp);
-		if (list_empty(&ailp->ail_head))
-			wake_up_all(&ailp->ail_empty);
-	}
-
-	spin_unlock(&ailp->ail_lock);
-	if (mlip_changed)
-		xfs_log_space_wake(ailp->ail_mount);
+	need_update = xfs_ail_delete_one(ailp, lip);
+	xfs_ail_update_finish(ailp, need_update);
 }
 
 int
diff --git a/fs/xfs/xfs_trans_priv.h b/fs/xfs/xfs_trans_priv.h
index 2e073c1c4614..64ffa746730e 100644
--- a/fs/xfs/xfs_trans_priv.h
+++ b/fs/xfs/xfs_trans_priv.h
@@ -92,8 +92,10 @@ xfs_trans_ail_update(
 }
 
 bool xfs_ail_delete_one(struct xfs_ail *ailp, struct xfs_log_item *lip);
+void xfs_ail_update_finish(struct xfs_ail *ailp, bool do_tail_update)
+			__releases(ailp->ail_lock);
 void xfs_trans_ail_delete(struct xfs_ail *ailp, struct xfs_log_item *lip,
-		int shutdown_type) __releases(ailp->ail_lock);
+		int shutdown_type);
 
 static inline void
 xfs_trans_ail_remove(

From patchwork Thu Oct 31 23:45:57 2019
X-Patchwork-Submitter: Dave Chinner
X-Patchwork-Id: 11222197
From: Dave Chinner
To: linux-xfs@vger.kernel.org
Cc: linux-fsdevel@vger.kernel.org, linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: [PATCH 07/28] xfs: tail updates only need to occur when LSN changes
Date: Fri, 1 Nov 2019 10:45:57 +1100
Message-Id: <20191031234618.15403-8-david@fromorbit.com>
In-Reply-To: <20191031234618.15403-1-david@fromorbit.com>
References: <20191031234618.15403-1-david@fromorbit.com>

From: Dave Chinner

We currently wake anything waiting on the log tail to move whenever
the log item at the tail of the log is removed. Historically this was
fine behaviour because there were very few items at any given LSN.
But with delayed logging, there may be thousands of items at any
given LSN, and we can't move the tail until they are all gone.

Hence if we are removing them in near tail-first order, we might be
waking up processes waiting on the tail LSN to change (e.g. log space
waiters) repeatedly without them being able to make progress. This
also occurs with the new sync push waiters, and can result in
thousands of spurious wakeups every second when under heavy direct
reclaim pressure.

To fix this, check that the tail LSN has actually changed on the AIL
before triggering wakeups. This will reduce the number of spurious
wakeups when doing bulk AIL removal and make this code much more
efficient.

Signed-off-by: Dave Chinner
Reviewed-by: Christoph Hellwig
Reviewed-by: Brian Foster
Reviewed-by: Darrick J. Wong
---
 fs/xfs/xfs_inode_item.c | 18 ++++++++++----
 fs/xfs/xfs_trans_ail.c  | 52 ++++++++++++++++++++++++++++-------------
 fs/xfs/xfs_trans_priv.h |  4 ++--
 3 files changed, 51 insertions(+), 23 deletions(-)

diff --git a/fs/xfs/xfs_inode_item.c b/fs/xfs/xfs_inode_item.c
index ab12e526540a..79ffe6dff115 100644
--- a/fs/xfs/xfs_inode_item.c
+++ b/fs/xfs/xfs_inode_item.c
@@ -731,19 +731,27 @@ xfs_iflush_done(
 	 * holding the lock before removing the inode from the AIL.
 	 */
 	if (need_ail) {
-		bool			mlip_changed = false;
+		xfs_lsn_t		tail_lsn = 0;
 
 		/* this is an opencoded batch version of xfs_trans_ail_delete */
 		spin_lock(&ailp->ail_lock);
 		list_for_each_entry(blip, &tmp, li_bio_list) {
 			if (INODE_ITEM(blip)->ili_logged &&
-			    blip->li_lsn == INODE_ITEM(blip)->ili_flush_lsn)
-				mlip_changed |= xfs_ail_delete_one(ailp, blip);
-			else {
+			    blip->li_lsn == INODE_ITEM(blip)->ili_flush_lsn) {
+				/*
+				 * xfs_ail_update_finish() only cares about the
+				 * lsn of the first tail item removed, any
+				 * others will be at the same or higher lsn so
+				 * we just ignore them.
+				 */
+				xfs_lsn_t lsn = xfs_ail_delete_one(ailp, blip);
+				if (!tail_lsn && lsn)
+					tail_lsn = lsn;
+			} else {
 				xfs_clear_li_failed(blip);
 			}
 		}
-		xfs_ail_update_finish(ailp, mlip_changed);
+		xfs_ail_update_finish(ailp, tail_lsn);
 	}
 
 	/*
diff --git a/fs/xfs/xfs_trans_ail.c b/fs/xfs/xfs_trans_ail.c
index 656819523bbd..685a21cd24c0 100644
--- a/fs/xfs/xfs_trans_ail.c
+++ b/fs/xfs/xfs_trans_ail.c
@@ -108,17 +108,25 @@ xfs_ail_next(
  * We need the AIL lock in order to get a coherent read of the lsn of the last
  * item in the AIL.
  */
+static xfs_lsn_t
+__xfs_ail_min_lsn(
+	struct xfs_ail		*ailp)
+{
+	struct xfs_log_item	*lip = xfs_ail_min(ailp);
+
+	if (lip)
+		return lip->li_lsn;
+	return 0;
+}
+
 xfs_lsn_t
 xfs_ail_min_lsn(
 	struct xfs_ail		*ailp)
 {
-	xfs_lsn_t		lsn = 0;
-	struct xfs_log_item	*lip;
+	xfs_lsn_t		lsn;
 
 	spin_lock(&ailp->ail_lock);
-	lip = xfs_ail_min(ailp);
-	if (lip)
-		lsn = lip->li_lsn;
+	lsn = __xfs_ail_min_lsn(ailp);
 	spin_unlock(&ailp->ail_lock);
 
 	return lsn;
@@ -681,11 +689,12 @@ xfs_ail_push_all_sync(
 void
 xfs_ail_update_finish(
 	struct xfs_ail		*ailp,
-	bool			do_tail_update) __releases(ailp->ail_lock)
+	xfs_lsn_t		old_lsn) __releases(ailp->ail_lock)
{
 	struct xfs_mount	*mp = ailp->ail_mount;
 
-	if (!do_tail_update) {
+	/* if the tail lsn hasn't changed, don't do updates or wakeups. */
+	if (!old_lsn || old_lsn == __xfs_ail_min_lsn(ailp)) {
 		spin_unlock(&ailp->ail_lock);
 		return;
 	}
@@ -730,7 +739,7 @@ xfs_trans_ail_update_bulk(
 	xfs_lsn_t		lsn) __releases(ailp->ail_lock)
 {
 	struct xfs_log_item	*mlip;
-	int			mlip_changed = 0;
+	xfs_lsn_t		tail_lsn = 0;
 	int			i;
 	LIST_HEAD(tmp);
 
@@ -745,9 +754,10 @@ xfs_trans_ail_update_bulk(
 				continue;
 
 			trace_xfs_ail_move(lip, lip->li_lsn, lsn);
+			if (mlip == lip && !tail_lsn)
+				tail_lsn = lip->li_lsn;
+
 			xfs_ail_delete(ailp, lip);
-			if (mlip == lip)
-				mlip_changed = 1;
 		} else {
 			trace_xfs_ail_insert(lip, 0, lsn);
 		}
@@ -758,15 +768,23 @@ xfs_trans_ail_update_bulk(
 	if (!list_empty(&tmp))
 		xfs_ail_splice(ailp, cur, &tmp, lsn);
 
-	xfs_ail_update_finish(ailp, mlip_changed);
+	xfs_ail_update_finish(ailp, tail_lsn);
 }
 
-bool
+/*
+ * Delete one log item from the AIL.
+ *
+ * If this item was at the tail of the AIL, return the LSN of the log item so
+ * that we can use it to check if the LSN of the tail of the log has moved
+ * when finishing up the AIL delete process in xfs_ail_update_finish().
+ */
+xfs_lsn_t
 xfs_ail_delete_one(
 	struct xfs_ail		*ailp,
 	struct xfs_log_item	*lip)
 {
 	struct xfs_log_item	*mlip = xfs_ail_min(ailp);
+	xfs_lsn_t		lsn = lip->li_lsn;
 
 	trace_xfs_ail_delete(lip, mlip->li_lsn, lip->li_lsn);
 	xfs_ail_delete(ailp, lip);
@@ -774,7 +792,9 @@ xfs_ail_delete_one(
 	clear_bit(XFS_LI_IN_AIL, &lip->li_flags);
 	lip->li_lsn = 0;
 
-	return mlip == lip;
+	if (mlip == lip)
+		return lsn;
+	return 0;
 }
 
 /**
@@ -805,7 +825,7 @@ xfs_trans_ail_delete(
 	int			shutdown_type)
 {
 	struct xfs_mount	*mp = ailp->ail_mount;
-	bool			need_update;
+	xfs_lsn_t		tail_lsn;
 
 	if (!test_bit(XFS_LI_IN_AIL, &lip->li_flags)) {
 		spin_unlock(&ailp->ail_lock);
@@ -818,8 +838,8 @@ xfs_trans_ail_delete(
 		return;
 	}
 
-	need_update = xfs_ail_delete_one(ailp, lip);
-	xfs_ail_update_finish(ailp, need_update);
+	tail_lsn = xfs_ail_delete_one(ailp, lip);
+	xfs_ail_update_finish(ailp, tail_lsn);
 }
 
 int
diff --git a/fs/xfs/xfs_trans_priv.h b/fs/xfs/xfs_trans_priv.h
index 64ffa746730e..35655eac01a6 100644
--- a/fs/xfs/xfs_trans_priv.h
+++ b/fs/xfs/xfs_trans_priv.h
@@ -91,8 +91,8 @@ xfs_trans_ail_update(
 	xfs_trans_ail_update_bulk(ailp, NULL, &lip, 1, lsn);
 }
 
-bool xfs_ail_delete_one(struct xfs_ail *ailp, struct xfs_log_item *lip);
-void xfs_ail_update_finish(struct xfs_ail *ailp, bool do_tail_update)
+xfs_lsn_t xfs_ail_delete_one(struct xfs_ail *ailp, struct xfs_log_item *lip);
+void xfs_ail_update_finish(struct xfs_ail *ailp, xfs_lsn_t old_lsn)
 			__releases(ailp->ail_lock);
 void xfs_trans_ail_delete(struct xfs_ail *ailp, struct xfs_log_item *lip,
 		int shutdown_type);

From patchwork Thu Oct 31 23:45:58 2019
X-Patchwork-Submitter: Dave Chinner
X-Patchwork-Id: 11222217
From: Dave Chinner
To: linux-xfs@vger.kernel.org
Cc: linux-fsdevel@vger.kernel.org, linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: [PATCH 08/28] xfs: factor inode lookup from xfs_ifree_cluster
Date: Fri, 1 Nov 2019 10:45:58 +1100
Message-Id: <20191031234618.15403-9-david@fromorbit.com>
In-Reply-To: <20191031234618.15403-1-david@fromorbit.com>
References: <20191031234618.15403-1-david@fromorbit.com>
From: Dave Chinner

There's a lot of indent in this code which makes it a bit hard to
follow. We are also going to completely rework the inode lookup code
as part of the inode reclaim rework, so factor out the inode lookup
code from the inode cluster freeing code.

Based on prototype code from Christoph Hellwig.

Signed-off-by: Dave Chinner
Reviewed-by: Brian Foster
Reviewed-by: Darrick J. Wong
---
 fs/xfs/xfs_inode.c | 152 +++++++++++++++++++++++++--------------------
 1 file changed, 84 insertions(+), 68 deletions(-)

diff --git a/fs/xfs/xfs_inode.c b/fs/xfs/xfs_inode.c
index e9e4f444f8ce..33edb18098ca 100644
--- a/fs/xfs/xfs_inode.c
+++ b/fs/xfs/xfs_inode.c
@@ -2516,6 +2516,88 @@ xfs_iunlink_remove(
 	return error;
 }
 
+/*
+ * Look up the inode number specified and mark it stale if it is found. If it
+ * is dirty, return the inode so it can be attached to the cluster buffer so it
+ * can be processed appropriately when the cluster free transaction completes.
+ */
+static struct xfs_inode *
+xfs_ifree_get_one_inode(
+	struct xfs_perag	*pag,
+	struct xfs_inode	*free_ip,
+	int			inum)
+{
+	struct xfs_mount	*mp = pag->pag_mount;
+	struct xfs_inode	*ip;
+
+retry:
+	rcu_read_lock();
+	ip = radix_tree_lookup(&pag->pag_ici_root, XFS_INO_TO_AGINO(mp, inum));
+
+	/* Inode not in memory, nothing to do */
+	if (!ip)
+		goto out_rcu_unlock;
+
+	/*
+	 * because this is an RCU protected lookup, we could find a recently
+	 * freed or even reallocated inode during the lookup. We need to check
+	 * under the i_flags_lock for a valid inode here. Skip it if it is not
+	 * valid, the wrong inode or stale.
+	 */
+	spin_lock(&ip->i_flags_lock);
+	if (ip->i_ino != inum || __xfs_iflags_test(ip, XFS_ISTALE)) {
+		spin_unlock(&ip->i_flags_lock);
+		goto out_rcu_unlock;
+	}
+	spin_unlock(&ip->i_flags_lock);
+
+	/*
+	 * Don't try to lock/unlock the current inode, but we _cannot_ skip the
+	 * other inodes that we did not find in the list attached to the buffer
+	 * and are not already marked stale. If we can't lock it, back off and
+	 * retry.
+	 */
+	if (ip != free_ip) {
+		if (!xfs_ilock_nowait(ip, XFS_ILOCK_EXCL)) {
+			rcu_read_unlock();
+			delay(1);
+			goto retry;
+		}
+
+		/*
+		 * Check the inode number again in case we're racing with
+		 * freeing in xfs_reclaim_inode(). See the comments in that
+		 * function for more information as to why the initial check is
+		 * not sufficient.
+		 */
+		if (ip->i_ino != inum) {
+			xfs_iunlock(ip, XFS_ILOCK_EXCL);
+			goto out_rcu_unlock;
+		}
+	}
+	rcu_read_unlock();
+
+	xfs_iflock(ip);
+	xfs_iflags_set(ip, XFS_ISTALE);
+
+	/*
+	 * We don't need to attach clean inodes or those only with unlogged
+	 * changes (which we throw away, anyway).
+	 */
+	if (!ip->i_itemp || xfs_inode_clean(ip)) {
+		ASSERT(ip != free_ip);
+		xfs_ifunlock(ip);
+		xfs_iunlock(ip, XFS_ILOCK_EXCL);
+		goto out_no_inode;
+	}
+	return ip;
+
+out_rcu_unlock:
+	rcu_read_unlock();
+out_no_inode:
+	return NULL;
+}
+
 /*
  * A big issue when freeing the inode cluster is that we _cannot_ skip any
  * inodes that are in memory - they all must be marked stale and attached to
@@ -2616,77 +2698,11 @@ xfs_ifree_cluster(
 		 * even trying to lock them.
 		 */
 		for (i = 0; i < igeo->inodes_per_cluster; i++) {
-retry:
-			rcu_read_lock();
-			ip = radix_tree_lookup(&pag->pag_ici_root,
-					XFS_INO_TO_AGINO(mp, (inum + i)));
-
-			/* Inode not in memory, nothing to do */
-			if (!ip) {
-				rcu_read_unlock();
+			ip = xfs_ifree_get_one_inode(pag, free_ip, inum + i);
+			if (!ip)
 				continue;
-			}
-
-			/*
-			 * because this is an RCU protected lookup, we could
-			 * find a recently freed or even reallocated inode
-			 * during the lookup. We need to check under the
-			 * i_flags_lock for a valid inode here. Skip it if it
-			 * is not valid, the wrong inode or stale.
-			 */
-			spin_lock(&ip->i_flags_lock);
-			if (ip->i_ino != inum + i ||
-			    __xfs_iflags_test(ip, XFS_ISTALE)) {
-				spin_unlock(&ip->i_flags_lock);
-				rcu_read_unlock();
-				continue;
-			}
-			spin_unlock(&ip->i_flags_lock);
-
-			/*
-			 * Don't try to lock/unlock the current inode, but we
-			 * _cannot_ skip the other inodes that we did not find
-			 * in the list attached to the buffer and are not
-			 * already marked stale. If we can't lock it, back off
-			 * and retry.
-			 */
-			if (ip != free_ip) {
-				if (!xfs_ilock_nowait(ip, XFS_ILOCK_EXCL)) {
-					rcu_read_unlock();
-					delay(1);
-					goto retry;
-				}
-
-				/*
-				 * Check the inode number again in case we're
-				 * racing with freeing in xfs_reclaim_inode().
-				 * See the comments in that function for more
-				 * information as to why the initial check is
-				 * not sufficient.
-				 */
-				if (ip->i_ino != inum + i) {
-					xfs_iunlock(ip, XFS_ILOCK_EXCL);
-					rcu_read_unlock();
-					continue;
-				}
-			}
-			rcu_read_unlock();
-
-			xfs_iflock(ip);
-			xfs_iflags_set(ip, XFS_ISTALE);
-
-			/*
-			 * we don't need to attach clean inodes or those only
-			 * with unlogged changes (which we throw away, anyway).
-			 */
 			iip = ip->i_itemp;
-			if (!iip || xfs_inode_clean(ip)) {
-				ASSERT(ip != free_ip);
-				xfs_ifunlock(ip);
-				xfs_iunlock(ip, XFS_ILOCK_EXCL);
-				continue;
-			}
-
 			iip->ili_last_fields = iip->ili_fields;
 			iip->ili_fields = 0;
 			iip->ili_fsync_fields = 0;

From patchwork Thu Oct 31 23:45:59 2019
X-Patchwork-Submitter: Dave Chinner
X-Patchwork-Id: 11222225
From: Dave Chinner
To: linux-xfs@vger.kernel.org
Cc: linux-fsdevel@vger.kernel.org, linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: [PATCH 09/28] mm: directed shrinker work deferral
Date: Fri, 1 Nov 2019 10:45:59 +1100
Message-Id: <20191031234618.15403-10-david@fromorbit.com>
In-Reply-To: <20191031234618.15403-1-david@fromorbit.com>
References: <20191031234618.15403-1-david@fromorbit.com>

From: Dave Chinner

Introduce a mechanism for ->count_objects() to indicate to the
shrinker infrastructure that the reclaim context will not allow
scanning work to be done and so the work it decides is necessary
needs to be deferred.

This simplifies the code by separating out the accounting of deferred
work from the actual doing of the work, and allows better decisions
to be made by the shrinker control logic on what action it can take.

Signed-off-by: Dave Chinner
---
 include/linux/shrinker.h | 7 +++++++
 mm/vmscan.c              | 8 ++++++++
 2 files changed, 15 insertions(+)

diff --git a/include/linux/shrinker.h b/include/linux/shrinker.h
index 0f80123650e2..3405c39ab92c 100644
--- a/include/linux/shrinker.h
+++ b/include/linux/shrinker.h
@@ -31,6 +31,13 @@ struct shrink_control {
 
 	/* current memcg being shrunk (for memcg aware shrinkers) */
 	struct mem_cgroup *memcg;
+
+	/*
+	 * set by ->count_objects if reclaim context prevents reclaim from
+	 * occurring. This allows the shrinker to immediately defer all the
+	 * work and not even attempt to scan the cache.
+	 */
+	bool defer_work;
 };
 
 #define SHRINK_STOP (~0UL)
diff --git a/mm/vmscan.c b/mm/vmscan.c
index ee4eecc7e1c2..a215d71d9d4b 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -536,6 +536,13 @@ static unsigned long do_shrink_slab(struct shrink_control *shrinkctl,
 	trace_mm_shrink_slab_start(shrinker, shrinkctl, nr, freeable,
 				   delta, total_scan, priority);
 
+	/*
+	 * If the shrinker can't run (e.g. due to gfp_mask constraints), then
+	 * defer the work to a context that can scan the cache.
+	 */
+	if (shrinkctl->defer_work)
+		goto done;
+
 	/*
 	 * Normally, we should not scan less than batch_size objects in one
 	 * pass to avoid too frequent shrinker calls, but if the slab has less
@@ -570,6 +577,7 @@ static unsigned long do_shrink_slab(struct shrink_control *shrinkctl,
 		cond_resched();
 	}
 
+done:
 	if (next_deferred >= scanned)
 		next_deferred -= scanned;
 	else

From patchwork Thu Oct 31 23:46:00 2019
X-Patchwork-Submitter: Dave Chinner
X-Patchwork-Id: 11222177
From: Dave Chinner
To: linux-xfs@vger.kernel.org
Cc: linux-fsdevel@vger.kernel.org, linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: [PATCH 10/28] shrinkers: use defer_work for GFP_NOFS sensitive shrinkers
Date: Fri, 1 Nov 2019 10:46:00 +1100
Message-Id: <20191031234618.15403-11-david@fromorbit.com>
In-Reply-To: <20191031234618.15403-1-david@fromorbit.com>
References: <20191031234618.15403-1-david@fromorbit.com>

From: Dave Chinner

For shrinkers that currently avoid scanning when called under GFP_NOFS
contexts, convert them to use the new ->defer_work flag rather than
checking and returning errors during scans. This makes it very clear
that these shrinkers are not doing any work because of the context
limitations, not because there is no work that can be done.
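The conversion pattern is mechanical. A schematic before/after for a
hypothetical "my_cache" shrinker (my_cache_free_objects() and
my_cache_object_count() are made-up helpers, not any of the call sites
below):

/* Before: the scan callback bails out, hiding the deferred work. */
static unsigned long my_cache_scan(struct shrinker *shrink,
				   struct shrink_control *sc)
{
	if (!(sc->gfp_mask & __GFP_FS))
		return SHRINK_STOP;	/* work is silently dropped */
	return my_cache_free_objects(sc->nr_to_scan);
}

/* After: the count callback flags the deferral instead. */
static unsigned long my_cache_count(struct shrinker *shrink,
				    struct shrink_control *sc)
{
	if (!(sc->gfp_mask & __GFP_FS))
		sc->defer_work = true;	/* defer the work, don't discard it */
	return my_cache_object_count();
}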
Signed-off-by: Dave Chinner
---
 drivers/staging/android/ashmem.c |  8 ++++----
 fs/gfs2/glock.c                  |  5 +++--
 fs/gfs2/quota.c                  |  6 +++---
 fs/nfs/dir.c                     |  6 +++---
 fs/super.c                       |  6 +++---
 fs/xfs/xfs_qm.c                  | 11 ++++++++---
 net/sunrpc/auth.c                |  5 ++---
 7 files changed, 26 insertions(+), 21 deletions(-)

diff --git a/drivers/staging/android/ashmem.c b/drivers/staging/android/ashmem.c
index 74d497d39c5a..0b80149f0ac5 100644
--- a/drivers/staging/android/ashmem.c
+++ b/drivers/staging/android/ashmem.c
@@ -438,10 +438,6 @@ ashmem_shrink_scan(struct shrinker *shrink, struct shrink_control *sc)
 {
 	unsigned long freed = 0;
 
-	/* We might recurse into filesystem code, so bail out if necessary */
-	if (!(sc->gfp_mask & __GFP_FS))
-		return SHRINK_STOP;
-
 	if (!mutex_trylock(&ashmem_mutex))
 		return -1;
 
@@ -478,6 +474,10 @@ ashmem_shrink_scan(struct shrinker *shrink, struct shrink_control *sc)
 static unsigned long
 ashmem_shrink_count(struct shrinker *shrink, struct shrink_control *sc)
 {
+	/* We might recurse into filesystem code, so bail out if necessary */
+	if (!(sc->gfp_mask & __GFP_FS))
+		sc->defer_work = true;
+
 	/*
 	 * note that lru_count is count of pages on the lru, not a count of
 	 * objects on the list. This means the scan function needs to return the
diff --git a/fs/gfs2/glock.c b/fs/gfs2/glock.c
index 0290a22ebccf..a25161b93f96 100644
--- a/fs/gfs2/glock.c
+++ b/fs/gfs2/glock.c
@@ -1614,14 +1614,15 @@ static long gfs2_scan_glock_lru(int nr)
 static unsigned long gfs2_glock_shrink_scan(struct shrinker *shrink,
 					    struct shrink_control *sc)
 {
-	if (!(sc->gfp_mask & __GFP_FS))
-		return SHRINK_STOP;
 	return gfs2_scan_glock_lru(sc->nr_to_scan);
 }
 
 static unsigned long gfs2_glock_shrink_count(struct shrinker *shrink,
 					     struct shrink_control *sc)
 {
+	if (!(sc->gfp_mask & __GFP_FS))
+		sc->defer_work = true;
+
 	return vfs_pressure_ratio(atomic_read(&lru_count));
 }
 
diff --git a/fs/gfs2/quota.c b/fs/gfs2/quota.c
index 7c016a082aa6..661189b42c31 100644
--- a/fs/gfs2/quota.c
+++ b/fs/gfs2/quota.c
@@ -166,9 +166,6 @@ static unsigned long gfs2_qd_shrink_scan(struct shrinker *shrink,
 	LIST_HEAD(dispose);
 	unsigned long freed;
 
-	if (!(sc->gfp_mask & __GFP_FS))
-		return SHRINK_STOP;
-
 	freed = list_lru_shrink_walk(&gfs2_qd_lru, sc,
 				     gfs2_qd_isolate, &dispose);
 
@@ -180,6 +177,9 @@ static unsigned long gfs2_qd_shrink_scan(struct shrinker *shrink,
 static unsigned long gfs2_qd_shrink_count(struct shrinker *shrink,
 					  struct shrink_control *sc)
 {
+	if (!(sc->gfp_mask & __GFP_FS))
+		sc->defer_work = true;
+
 	return vfs_pressure_ratio(list_lru_shrink_count(&gfs2_qd_lru, sc));
 }
 
diff --git a/fs/nfs/dir.c b/fs/nfs/dir.c
index e180033e35cf..fd4a70479790 100644
--- a/fs/nfs/dir.c
+++ b/fs/nfs/dir.c
@@ -2211,10 +2211,7 @@ unsigned long
 nfs_access_cache_scan(struct shrinker *shrink, struct shrink_control *sc)
 {
 	int nr_to_scan = sc->nr_to_scan;
-	gfp_t gfp_mask = sc->gfp_mask;
 
-	if ((gfp_mask & GFP_KERNEL) != GFP_KERNEL)
-		return SHRINK_STOP;
 	return nfs_do_access_cache_scan(nr_to_scan);
 }
 
@@ -2222,6 +2219,9 @@ nfs_access_cache_scan(struct shrinker *shrink, struct shrink_control *sc)
 unsigned long
 nfs_access_cache_count(struct shrinker *shrink, struct shrink_control *sc)
 {
+	if ((sc->gfp_mask & GFP_KERNEL) != GFP_KERNEL)
+		sc->defer_work = true;
+
 	return vfs_pressure_ratio(atomic_long_read(&nfs_access_nr_entries));
 }
 
diff --git a/fs/super.c b/fs/super.c
index cfadab2cbf35..6dcab2a92454 100644
--- a/fs/super.c
+++ b/fs/super.c
@@ -74,9 +74,6 @@ static unsigned long super_cache_scan(struct shrinker *shrink,
 	 * Deadlock avoidance. We may hold various FS locks, and we don't want
We may hold various FS locks, and we don't want * to recurse into the FS that called us in clear_inode() and friends.. */ - if (!(sc->gfp_mask & __GFP_FS)) - return SHRINK_STOP; - if (!trylock_super(sb)) return SHRINK_STOP; @@ -141,6 +138,9 @@ static unsigned long super_cache_count(struct shrinker *shrink, return 0; smp_rmb(); + if (!(sc->gfp_mask & __GFP_FS)) + sc->defer_work = true; + if (sb->s_op && sb->s_op->nr_cached_objects) total_objects = sb->s_op->nr_cached_objects(sb, sc); diff --git a/fs/xfs/xfs_qm.c b/fs/xfs/xfs_qm.c index ecd8ce152ab1..aa03f2448145 100644 --- a/fs/xfs/xfs_qm.c +++ b/fs/xfs/xfs_qm.c @@ -502,9 +502,6 @@ xfs_qm_shrink_scan( unsigned long freed; int error; - if ((sc->gfp_mask & (__GFP_FS|__GFP_DIRECT_RECLAIM)) != (__GFP_FS|__GFP_DIRECT_RECLAIM)) - return 0; - INIT_LIST_HEAD(&isol.buffers); INIT_LIST_HEAD(&isol.dispose); @@ -534,6 +531,14 @@ xfs_qm_shrink_count( struct xfs_quotainfo *qi = container_of(shrink, struct xfs_quotainfo, qi_shrinker); + /* + * __GFP_DIRECT_RECLAIM is used here to avoid blocking kswapd + */ + if ((sc->gfp_mask & (__GFP_FS|__GFP_DIRECT_RECLAIM)) != + (__GFP_FS|__GFP_DIRECT_RECLAIM)) { + sc->defer_work = true; + } + return list_lru_shrink_count(&qi->qi_lru, sc); } diff --git a/net/sunrpc/auth.c b/net/sunrpc/auth.c index cdb05b48de44..7d11a7034fee 100644 --- a/net/sunrpc/auth.c +++ b/net/sunrpc/auth.c @@ -527,9 +527,6 @@ static unsigned long rpcauth_cache_shrink_scan(struct shrinker *shrink, struct shrink_control *sc) { - if ((sc->gfp_mask & GFP_KERNEL) != GFP_KERNEL) - return SHRINK_STOP; - /* nothing left, don't come back */ if (list_empty(&cred_unused)) return SHRINK_STOP; @@ -541,6 +538,8 @@ static unsigned long rpcauth_cache_shrink_count(struct shrinker *shrink, struct shrink_control *sc) { + if ((sc->gfp_mask & GFP_KERNEL) != GFP_KERNEL) + sc->defer_work = true; return number_cred_unused * sysctl_vfs_cache_pressure / 100; } From patchwork Thu Oct 31 23:46:01 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Dave Chinner X-Patchwork-Id: 11222133 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 66518912 for ; Thu, 31 Oct 2019 23:47:21 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by mail.kernel.org (Postfix) with ESMTP id 4E068216F4 for ; Thu, 31 Oct 2019 23:47:21 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1729248AbfJaXrI (ORCPT ); Thu, 31 Oct 2019 19:47:08 -0400 Received: from mail105.syd.optusnet.com.au ([211.29.132.249]:56355 "EHLO mail105.syd.optusnet.com.au" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1728726AbfJaXqe (ORCPT ); Thu, 31 Oct 2019 19:46:34 -0400 Received: from dread.disaster.area (pa49-180-67-183.pa.nsw.optusnet.com.au [49.180.67.183]) by mail105.syd.optusnet.com.au (Postfix) with ESMTPS id CF9383A28AC; Fri, 1 Nov 2019 10:46:25 +1100 (AEDT) Received: from discord.disaster.area ([192.168.253.110]) by dread.disaster.area with esmtp (Exim 4.92.3) (envelope-from ) id 1iQK8x-0007CG-Bo; Fri, 01 Nov 2019 10:46:19 +1100 Received: from dave by discord.disaster.area with local (Exim 4.92.3) (envelope-from ) id 1iQK8x-00041g-9R; Fri, 01 Nov 2019 10:46:19 +1100 From: Dave Chinner To: linux-xfs@vger.kernel.org Cc: linux-fsdevel@vger.kernel.org, linux-mm@kvack.org, linux-kernel@vger.kernel.org Subject: [PATCH 11/28] mm: factor shrinker work calculations Date: 
Fri, 1 Nov 2019 10:46:01 +1100 Message-Id: <20191031234618.15403-12-david@fromorbit.com> X-Mailer: git-send-email 2.24.0.rc0 In-Reply-To: <20191031234618.15403-1-david@fromorbit.com> References: <20191031234618.15403-1-david@fromorbit.com> MIME-Version: 1.0 X-Optus-CM-Score: 0 X-Optus-CM-Analysis: v=2.2 cv=P6RKvmIu c=1 sm=1 tr=0 a=3wLbm4YUAFX2xaPZIabsgw==:117 a=3wLbm4YUAFX2xaPZIabsgw==:17 a=jpOVt7BSZ2e4Z31A5e1TngXxSK0=:19 a=MeAgGD-zjQ4A:10 a=20KFwNOVAAAA:8 a=rQ5XUWRrd7byYprixfUA:9 a=NH2oGUoCzA-B3QL-:21 a=DSgJKIrDGFEhNJPj:21 Sender: linux-fsdevel-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-fsdevel@vger.kernel.org From: Dave Chinner Start to clean up the shrinker code by factoring out the calculation that determines how much work to do. This separates the calculation from clamping and other adjustments that are done before the shrinker work is run. Document the scan batch size calculation better while we are there. Also convert the calculation for the amount of work to be done to use 64 bit logic so we don't have to keep jumping through hoops to keep calculations within 32 bits on 32 bit systems. Signed-off-by: Dave Chinner --- mm/vmscan.c | 97 ++++++++++++++++++++++++++++++++++++++--------------- 1 file changed, 70 insertions(+), 27 deletions(-) diff --git a/mm/vmscan.c b/mm/vmscan.c index a215d71d9d4b..2d39ec37c04d 100644 --- a/mm/vmscan.c +++ b/mm/vmscan.c @@ -459,13 +459,68 @@ EXPORT_SYMBOL(unregister_shrinker); #define SHRINK_BATCH 128 +/* + * Calculate the number of new objects to scan this time around. Return + * the work to be done. If there are freeable objects, return that number in + * @freeable_objects. + */ +static int64_t shrink_scan_count(struct shrink_control *shrinkctl, + struct shrinker *shrinker, int priority, + int64_t *freeable_objects) +{ + int64_t delta; + int64_t freeable; + + freeable = shrinker->count_objects(shrinker, shrinkctl); + if (freeable == 0 || freeable == SHRINK_EMPTY) + return freeable; + + if (shrinker->seeks) { + /* + * shrinker->seeks is a measure of how much IO is required to + * reinstantiate the object in memory. The default value is 2 + * which is typical for a cold inode requiring a directory read + * and an inode read to re-instantiate. + * + * The scan batch size is defined by the shrinker priority, but + * to be able to bias the reclaim we increase the default batch + * size by 4. Hence we end up with a scan batch multiplier that + * scales like so: + * + * ->seeks scan batch multiplier + * 1 4.00x + * 2 2.00x + * 3 1.33x + * 4 1.00x + * 8 0.50x + * + * IOWs, the more seeks it takes to pull the item into cache, + * the smaller the reclaim scan batch. Hence we put more reclaim + * pressure on caches that are fast to repopulate and to keep a + * rough balance between caches that have different costs. + */ + delta = freeable >> (priority - 2); + do_div(delta, shrinker->seeks); + } else { + /* + * These objects don't require any IO to create. Trim them + * aggressively under memory pressure to keep them from causing + * refetches in the IO caches. + */ + delta = freeable / 2; + } + + *freeable_objects = freeable; + return delta > 0 ?
delta : 0; +} + static unsigned long do_shrink_slab(struct shrink_control *shrinkctl, struct shrinker *shrinker, int priority) { unsigned long freed = 0; - unsigned long long delta; long total_scan; - long freeable; + int64_t freeable_objects = 0; + int64_t scan_count; long nr; long new_nr; int nid = shrinkctl->nid; @@ -476,9 +531,10 @@ static unsigned long do_shrink_slab(struct shrink_control *shrinkctl, if (!(shrinker->flags & SHRINKER_NUMA_AWARE)) nid = 0; - freeable = shrinker->count_objects(shrinker, shrinkctl); - if (freeable == 0 || freeable == SHRINK_EMPTY) - return freeable; + scan_count = shrink_scan_count(shrinkctl, shrinker, priority, + &freeable_objects); + if (scan_count == 0 || scan_count == SHRINK_EMPTY) + return scan_count; /* * copy the current shrinker scan count into a local variable @@ -487,25 +543,11 @@ static unsigned long do_shrink_slab(struct shrink_control *shrinkctl, */ nr = atomic_long_xchg(&shrinker->nr_deferred[nid], 0); - total_scan = nr; - if (shrinker->seeks) { - delta = freeable >> priority; - delta *= 4; - do_div(delta, shrinker->seeks); - } else { - /* - * These objects don't require any IO to create. Trim - * them aggressively under memory pressure to keep - * them from causing refetches in the IO caches. - */ - delta = freeable / 2; - } - - total_scan += delta; + total_scan = nr + scan_count; if (total_scan < 0) { pr_err("shrink_slab: %pS negative objects to delete nr=%ld\n", shrinker->scan_objects, total_scan); - total_scan = freeable; + total_scan = scan_count; next_deferred = nr; } else next_deferred = total_scan; @@ -522,19 +564,20 @@ static unsigned long do_shrink_slab(struct shrink_control *shrinkctl, * Hence only allow the shrinker to scan the entire cache when * a large delta change is calculated directly. */ - if (delta < freeable / 4) - total_scan = min(total_scan, freeable / 2); + if (scan_count < freeable_objects / 4) + total_scan = min_t(long, total_scan, freeable_objects / 2); /* * Avoid risking looping forever due to too large nr value: * never try to free more than twice the estimate number of * freeable entries. */ - if (total_scan > freeable * 2) - total_scan = freeable * 2; + if (total_scan > freeable_objects * 2) + total_scan = freeable_objects * 2; trace_mm_shrink_slab_start(shrinker, shrinkctl, nr, - freeable, delta, total_scan, priority); + freeable_objects, scan_count, + total_scan, priority); /* * If the shrinker can't run (e.g. due to gfp_mask constraints), then @@ -559,7 +602,7 @@ static unsigned long do_shrink_slab(struct shrink_control *shrinkctl, * possible. 
*/ while (total_scan >= batch_size || - total_scan >= freeable) { + total_scan >= freeable_objects) { unsigned long ret; unsigned long nr_to_scan = min(batch_size, total_scan); From patchwork Thu Oct 31 23:46:02 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Dave Chinner X-Patchwork-Id: 11222067 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 039141515 for ; Thu, 31 Oct 2019 23:46:33 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by mail.kernel.org (Postfix) with ESMTP id D52F620873 for ; Thu, 31 Oct 2019 23:46:32 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1728755AbfJaXqc (ORCPT ); Thu, 31 Oct 2019 19:46:32 -0400 Received: from mail105.syd.optusnet.com.au ([211.29.132.249]:55491 "EHLO mail105.syd.optusnet.com.au" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1728520AbfJaXqa (ORCPT ); Thu, 31 Oct 2019 19:46:30 -0400 Received: from dread.disaster.area (pa49-180-67-183.pa.nsw.optusnet.com.au [49.180.67.183]) by mail105.syd.optusnet.com.au (Postfix) with ESMTPS id 627573A2001; Fri, 1 Nov 2019 10:46:25 +1100 (AEDT) Received: from discord.disaster.area ([192.168.253.110]) by dread.disaster.area with esmtp (Exim 4.92.3) (envelope-from ) id 1iQK8x-0007CI-DJ; Fri, 01 Nov 2019 10:46:19 +1100 Received: from dave by discord.disaster.area with local (Exim 4.92.3) (envelope-from ) id 1iQK8x-00041j-Ac; Fri, 01 Nov 2019 10:46:19 +1100 From: Dave Chinner To: linux-xfs@vger.kernel.org Cc: linux-fsdevel@vger.kernel.org, linux-mm@kvack.org, linux-kernel@vger.kernel.org Subject: [PATCH 12/28] shrinker: defer work only to kswapd Date: Fri, 1 Nov 2019 10:46:02 +1100 Message-Id: <20191031234618.15403-13-david@fromorbit.com> X-Mailer: git-send-email 2.24.0.rc0 In-Reply-To: <20191031234618.15403-1-david@fromorbit.com> References: <20191031234618.15403-1-david@fromorbit.com> MIME-Version: 1.0 X-Optus-CM-Score: 0 X-Optus-CM-Analysis: v=2.2 cv=G6BsK5s5 c=1 sm=1 tr=0 a=3wLbm4YUAFX2xaPZIabsgw==:117 a=3wLbm4YUAFX2xaPZIabsgw==:17 a=jpOVt7BSZ2e4Z31A5e1TngXxSK0=:19 a=MeAgGD-zjQ4A:10 a=20KFwNOVAAAA:8 a=DY0x1sV2QIg9a-hhTtsA:9 a=P5PVAFAxUgfNbYNS:21 a=F-z_de4VdKWNkWDm:21 Sender: linux-fsdevel-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-fsdevel@vger.kernel.org From: Dave Chinner Right now deferred work is picked up by whatever GFP_KERNEL reclaim context wins the race to empty the node's deferred work counter. However, if there are lots of direct reclaimers, that work might be continually picked up by contexts that can't do any work and so the opportunities to do the work are missed by contexts that could do them. A further problem with the current code is that the deferred work can be picked up by a random direct reclaimer, resulting in that specific process having to do all the deferred reclaim work and hence incurring extremely long latencies if the reclaim work blocks regularly. This is not good for direct reclaim fairness or for minimising long tail latency events. To avoid these problems, simply limit deferred work to kswapd contexts. We know kswapd is a context that can always do reclaim work, and hence deferring work to kswapd allows the deferred work to be done in the background and not adversely affect any specific process context doing direct reclaim.
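In outline, the deferral policy this adds to do_shrink_slab() looks like the sketch below (a simplified restatement of the hunk that follows, with the do_div() handling of a zero priority and the tracing elided; the variable names are the ones used in the diff):

	/* Only kswapd picks up the deferred work backlog. */
	if (current_is_kswapd()) {
		deferred_count = atomic64_xchg(&shrinker->nr_deferred[nid], 0);

		/*
		 * Scale the backlog by reclaim priority: at the default
		 * starting priority of DEF_PRIORITY (12) this adds roughly
		 * 8% of the backlog to this scan, approaching the whole
		 * backlog as reclaim gets desperate (priority -> 1).
		 */
		scan_count += deferred_count / priority;

		/* Bound the carry-over so the backlog cannot wind up. */
		deferred_count = min(deferred_count, freeable_objects * 2);
	}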
The advantage of this is that the amount of work to be done in direct reclaim is now bound and predictable - it is entirely based on the cache's freeable objects and the reclaim priority. Hence all direct reclaimers running at the same time should be doing relatively equal amounts of work, thereby reducing the incidence of long tail latencies due to uneven reclaim workloads. Note that we use signed integers for everything except the freed count as the returns from the shrinker callouts cannot be guaranteed untainted. Indeed, the shrinkers can return scan counts larger than what was fed in, so we need scan counts to underflow in a detectable manner to terminate loops. This is necessary to prevent a misbehaving shrinker from triggering endless scanning loops. Signed-off-by: Dave Chinner --- include/linux/shrinker.h | 2 +- mm/vmscan.c | 100 ++++++++++++++++++++------------------- 2 files changed, 53 insertions(+), 49 deletions(-) diff --git a/include/linux/shrinker.h b/include/linux/shrinker.h index 3405c39ab92c..30c10f42109f 100644 --- a/include/linux/shrinker.h +++ b/include/linux/shrinker.h @@ -81,7 +81,7 @@ struct shrinker { int id; #endif /* objs pending delete, per node */ - atomic_long_t *nr_deferred; + atomic64_t *nr_deferred; }; #define DEFAULT_SEEKS 2 /* A good number if you don't know better. */ diff --git a/mm/vmscan.c b/mm/vmscan.c index 2d39ec37c04d..c0e2bf656e3f 100644 --- a/mm/vmscan.c +++ b/mm/vmscan.c @@ -517,16 +517,16 @@ static unsigned long do_shrink_slab(struct shrink_control *shrinkctl, struct shrinker *shrinker, int priority) { - unsigned long freed = 0; - long total_scan; + uint64_t freed = 0; int64_t freeable_objects = 0; int64_t scan_count; - long nr; - long new_nr; + int64_t scanned_objects = 0; + int64_t next_deferred = 0; + int64_t deferred_count = 0; + int64_t new_nr; int nid = shrinkctl->nid; long batch_size = shrinker->batch ? shrinker->batch : SHRINK_BATCH; - long scanned = 0, next_deferred; if (!(shrinker->flags & SHRINKER_NUMA_AWARE)) nid = 0; @@ -537,47 +537,51 @@ static unsigned long do_shrink_slab(struct shrink_control *shrinkctl, return scan_count; /*
+ * By doing the deferred work only in kswapd, we can schedule the work + * according to the reclaim priority - low priority reclaim will do + * less deferred work, hence we'll do more of the deferred work the more + * desperate we become for free memory. This avoids deferred work + * windup, as low amounts of memory pressure won't excessively trim + * caches anymore. */ - if (scan_count < freeable_objects / 4) - total_scan = min_t(long, total_scan, freeable_objects / 2); + if (current_is_kswapd()) { + int64_t deferred_scan; + + deferred_count = atomic64_xchg(&shrinker->nr_deferred[nid], 0); + + /* we want to scan 5-10% of the deferred work here at minimum */ + deferred_scan = deferred_count; + if (priority) + do_div(deferred_scan, priority); + scan_count += deferred_scan; + + /* + * If there is more deferred work than the number of freeable + * items in the cache, limit the amount of work we will carry + * over to the next kswapd run on this cache. This prevents + * deferred work windup. + */ + deferred_count = min(deferred_count, freeable_objects * 2); + + } /* * Avoid risking looping forever due to too large nr value: * never try to free more than twice the estimate number of * freeable entries. */ - if (total_scan > freeable_objects * 2) - total_scan = freeable_objects * 2; + scan_count = min(scan_count, freeable_objects * 2); - trace_mm_shrink_slab_start(shrinker, shrinkctl, nr, + trace_mm_shrink_slab_start(shrinker, shrinkctl, deferred_count, freeable_objects, scan_count, - total_scan, priority); + scan_count, priority); /* * If the shrinker can't run (e.g. due to gfp_mask constraints), then @@ -601,10 +605,10 @@ static unsigned long do_shrink_slab(struct shrink_control *shrinkctl, * scanning at high prio and therefore should try to reclaim as much as * possible. */ - while (total_scan >= batch_size || - total_scan >= freeable_objects) { + while (scan_count >= batch_size || + scan_count >= freeable_objects) { unsigned long ret; - unsigned long nr_to_scan = min(batch_size, total_scan); + unsigned long nr_to_scan = min_t(long, batch_size, scan_count); shrinkctl->nr_to_scan = nr_to_scan; shrinkctl->nr_scanned = nr_to_scan; @@ -614,29 +618,29 @@ static unsigned long do_shrink_slab(struct shrink_control *shrinkctl, freed += ret; count_vm_events(SLABS_SCANNED, shrinkctl->nr_scanned); - total_scan -= shrinkctl->nr_scanned; - scanned += shrinkctl->nr_scanned; + scan_count -= shrinkctl->nr_scanned; + scanned_objects += shrinkctl->nr_scanned; cond_resched(); } - done: - if (next_deferred >= scanned) - next_deferred -= scanned; + if (deferred_count) + next_deferred = deferred_count - scanned_objects; else - next_deferred = 0; + next_deferred = scan_count; /* * move the unused scan count back into the shrinker in a * manner that handles concurrent updates. If we exhausted the * scan, there is no need to do an update.
*/ if (next_deferred > 0) - new_nr = atomic_long_add_return(next_deferred, + new_nr = atomic64_add_return(next_deferred, &shrinker->nr_deferred[nid]); else - new_nr = atomic_long_read(&shrinker->nr_deferred[nid]); + new_nr = atomic64_read(&shrinker->nr_deferred[nid]); - trace_mm_shrink_slab_end(shrinker, nid, freed, nr, new_nr, total_scan); + trace_mm_shrink_slab_end(shrinker, nid, freed, deferred_count, new_nr, + scan_count); return freed; } From patchwork Thu Oct 31 23:46:03 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Dave Chinner X-Patchwork-Id: 11222191 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id C384814E5 for ; Thu, 31 Oct 2019 23:48:27 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by mail.kernel.org (Postfix) with ESMTP id A19062087F for ; Thu, 31 Oct 2019 23:48:27 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1729714AbfJaXsU (ORCPT ); Thu, 31 Oct 2019 19:48:20 -0400 Received: from mail104.syd.optusnet.com.au ([211.29.132.246]:40223 "EHLO mail104.syd.optusnet.com.au" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1728397AbfJaXq2 (ORCPT ); Thu, 31 Oct 2019 19:46:28 -0400 Received: from dread.disaster.area (pa49-180-67-183.pa.nsw.optusnet.com.au [49.180.67.183]) by mail104.syd.optusnet.com.au (Postfix) with ESMTPS id E37D17EA89D; Fri, 1 Nov 2019 10:46:24 +1100 (AEDT) Received: from discord.disaster.area ([192.168.253.110]) by dread.disaster.area with esmtp (Exim 4.92.3) (envelope-from ) id 1iQK8x-0007CK-Ee; Fri, 01 Nov 2019 10:46:19 +1100 Received: from dave by discord.disaster.area with local (Exim 4.92.3) (envelope-from ) id 1iQK8x-00041m-CB; Fri, 01 Nov 2019 10:46:19 +1100 From: Dave Chinner To: linux-xfs@vger.kernel.org Cc: linux-fsdevel@vger.kernel.org, linux-mm@kvack.org, linux-kernel@vger.kernel.org Subject: [PATCH 13/28] shrinker: clean up variable types and tracepoints Date: Fri, 1 Nov 2019 10:46:03 +1100 Message-Id: <20191031234618.15403-14-david@fromorbit.com> X-Mailer: git-send-email 2.24.0.rc0 In-Reply-To: <20191031234618.15403-1-david@fromorbit.com> References: <20191031234618.15403-1-david@fromorbit.com> MIME-Version: 1.0 X-Optus-CM-Score: 0 X-Optus-CM-Analysis: v=2.2 cv=D+Q3ErZj c=1 sm=1 tr=0 a=3wLbm4YUAFX2xaPZIabsgw==:117 a=3wLbm4YUAFX2xaPZIabsgw==:17 a=jpOVt7BSZ2e4Z31A5e1TngXxSK0=:19 a=MeAgGD-zjQ4A:10 a=20KFwNOVAAAA:8 a=dfQxWFgAP5TgkvwPFjsA:9 Sender: linux-fsdevel-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-fsdevel@vger.kernel.org From: Dave Chinner The tracepoint information in the shrinker code doesn't make a lot of sense anymore and contains redundant information as a result of the changes in the patchset. Refine the information passed to the tracepoints so they expose the operation of the shrinkers more precisely and clean up the remaining code and variables in the shrinker code so it all makes sense.
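With these changes, the start/end tracepoint pair for a single do_shrink_slab() invocation renders along these lines (hypothetical values for illustration only; the layout comes straight from the TP_printk() format strings in the diff below):

    mm_shrink_slab_start: super_cache_scan+0x0/0x190 ffff888234aa9800: nid: 0 scan count 768 freeable items 12288 deferred count 4096 priority 12 gfp_flags GFP_KERNEL
    mm_shrink_slab_end: super_cache_scan+0x0/0x190 ffff888234aa9800: nid: 0 freed objects 512 scanned objects 768, deferred scan 3328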
Signed-off-by: Dave Chinner Reviewed-by: Brian Foster --- include/trace/events/vmscan.h | 69 ++++++++++++++++------------------- mm/vmscan.c | 24 +++++------- 2 files changed, 41 insertions(+), 52 deletions(-) diff --git a/include/trace/events/vmscan.h b/include/trace/events/vmscan.h index a5ab2973e8dc..110637d9efa5 100644 --- a/include/trace/events/vmscan.h +++ b/include/trace/events/vmscan.h @@ -184,84 +184,77 @@ DEFINE_EVENT(mm_vmscan_direct_reclaim_end_template, mm_vmscan_memcg_softlimit_re TRACE_EVENT(mm_shrink_slab_start, TP_PROTO(struct shrinker *shr, struct shrink_control *sc, - long nr_objects_to_shrink, unsigned long cache_items, - unsigned long long delta, unsigned long total_scan, - int priority), + int64_t deferred_count, int64_t freeable_objects, + int64_t scan_count, int priority), - TP_ARGS(shr, sc, nr_objects_to_shrink, cache_items, delta, total_scan, + TP_ARGS(shr, sc, deferred_count, freeable_objects, scan_count, priority), TP_STRUCT__entry( __field(struct shrinker *, shr) __field(void *, shrink) __field(int, nid) - __field(long, nr_objects_to_shrink) - __field(gfp_t, gfp_flags) - __field(unsigned long, cache_items) - __field(unsigned long long, delta) - __field(unsigned long, total_scan) + __field(int64_t, deferred_count) + __field(int64_t, freeable_objects) + __field(int64_t, scan_count) __field(int, priority) + __field(gfp_t, gfp_flags) ), TP_fast_assign( __entry->shr = shr; __entry->shrink = shr->scan_objects; __entry->nid = sc->nid; - __entry->nr_objects_to_shrink = nr_objects_to_shrink; - __entry->gfp_flags = sc->gfp_mask; - __entry->cache_items = cache_items; - __entry->delta = delta; - __entry->total_scan = total_scan; + __entry->deferred_count = deferred_count; + __entry->freeable_objects = freeable_objects; + __entry->scan_count = scan_count; __entry->priority = priority; + __entry->gfp_flags = sc->gfp_mask; ), - TP_printk("%pS %p: nid: %d objects to shrink %ld gfp_flags %s cache items %ld delta %lld total_scan %ld priority %d", + TP_printk("%pS %p: nid: %d scan count %lld freeable items %lld deferred count %lld priority %d gfp_flags %s", __entry->shrink, __entry->shr, __entry->nid, - __entry->nr_objects_to_shrink, - show_gfp_flags(__entry->gfp_flags), - __entry->cache_items, - __entry->delta, - __entry->total_scan, - __entry->priority) + __entry->scan_count, + __entry->freeable_objects, + __entry->deferred_count, + __entry->priority, + show_gfp_flags(__entry->gfp_flags)) ); TRACE_EVENT(mm_shrink_slab_end, - TP_PROTO(struct shrinker *shr, int nid, int shrinker_retval, - long unused_scan_cnt, long new_scan_cnt, long total_scan), + TP_PROTO(struct shrinker *shr, int nid, int64_t freed_objects, + int64_t scanned_objects, int64_t deferred_scan), - TP_ARGS(shr, nid, shrinker_retval, unused_scan_cnt, new_scan_cnt, - total_scan), + TP_ARGS(shr, nid, freed_objects, scanned_objects, + deferred_scan), TP_STRUCT__entry( __field(struct shrinker *, shr) __field(int, nid) __field(void *, shrink) - __field(long, unused_scan) - __field(long, new_scan) - __field(int, retval) - __field(long, total_scan) + __field(long long, freed_objects) + __field(long long, scanned_objects) + __field(long long, deferred_scan) ), TP_fast_assign( __entry->shr = shr; __entry->nid = nid; __entry->shrink = shr->scan_objects; - __entry->unused_scan = unused_scan_cnt; - __entry->new_scan = new_scan_cnt; - __entry->retval = shrinker_retval; - __entry->total_scan = total_scan; + __entry->freed_objects = freed_objects; + __entry->scanned_objects = scanned_objects; + __entry->deferred_scan = 
deferred_scan; ), - TP_printk("%pS %p: nid: %d unused scan count %ld new scan count %ld total_scan %ld last shrinker return val %d", + TP_printk("%pS %p: nid: %d freed objects %lld scanned objects %lld, deferred scan %lld", __entry->shrink, __entry->shr, __entry->nid, - __entry->unused_scan, - __entry->new_scan, - __entry->total_scan, - __entry->retval) + __entry->freed_objects, + __entry->scanned_objects, + __entry->deferred_scan) ); TRACE_EVENT(mm_vmscan_lru_isolate, diff --git a/mm/vmscan.c b/mm/vmscan.c index c0e2bf656e3f..7a8256322150 100644 --- a/mm/vmscan.c +++ b/mm/vmscan.c @@ -523,7 +523,6 @@ static unsigned long do_shrink_slab(struct shrink_control *shrinkctl, int64_t scanned_objects = 0; int64_t next_deferred = 0; int64_t deferred_count = 0; - int64_t new_nr; int nid = shrinkctl->nid; long batch_size = shrinker->batch ? shrinker->batch : SHRINK_BATCH; @@ -580,8 +579,7 @@ static unsigned long do_shrink_slab(struct shrink_control *shrinkctl, scan_count = min(scan_count, freeable_objects * 2); trace_mm_shrink_slab_start(shrinker, shrinkctl, deferred_count, - freeable_objects, scan_count, - scan_count, priority); + freeable_objects, scan_count, priority); /* * If the shrinker can't run (e.g. due to gfp_mask constraints), then @@ -624,23 +622,21 @@ static unsigned long do_shrink_slab(struct shrink_control *shrinkctl, cond_resched(); } done: + /* + * Calculate the remaining work that we need to defer to kswapd, and + * store it in a manner that handles concurrent updates. If we exhausted + * the scan, there is no need to do an update. + */ if (deferred_count) next_deferred = deferred_count - scanned_objects; else next_deferred = scan_count; - /* - * move the unused scan count back into the shrinker in a - * manner that handles concurrent updates. If we exhausted the - * scan, there is no need to do an update. 
- */ + if (next_deferred > 0) - new_nr = atomic64_add_return(next_deferred, - &shrinker->nr_deferred[nid]); - else - new_nr = atomic64_read(&shrinker->nr_deferred[nid]); + atomic64_add(next_deferred, &shrinker->nr_deferred[nid]); - trace_mm_shrink_slab_end(shrinker, nid, freed, deferred_count, new_nr, - scan_count); + trace_mm_shrink_slab_end(shrinker, nid, freed, scanned_objects, + next_deferred); return freed; } From patchwork Thu Oct 31 23:46:04 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Dave Chinner X-Patchwork-Id: 11222165 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id D7F4E912 for ; Thu, 31 Oct 2019 23:47:56 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by mail.kernel.org (Postfix) with ESMTP id B6BB320873 for ; Thu, 31 Oct 2019 23:47:56 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1729621AbfJaXrv (ORCPT ); Thu, 31 Oct 2019 19:47:51 -0400 Received: from mail104.syd.optusnet.com.au ([211.29.132.246]:40221 "EHLO mail104.syd.optusnet.com.au" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1728559AbfJaXqc (ORCPT ); Thu, 31 Oct 2019 19:46:32 -0400 Received: from dread.disaster.area (pa49-180-67-183.pa.nsw.optusnet.com.au [49.180.67.183]) by mail104.syd.optusnet.com.au (Postfix) with ESMTPS id 749747EA8F3; Fri, 1 Nov 2019 10:46:25 +1100 (AEDT) Received: from discord.disaster.area ([192.168.253.110]) by dread.disaster.area with esmtp (Exim 4.92.3) (envelope-from ) id 1iQK8x-0007CM-FY; Fri, 01 Nov 2019 10:46:19 +1100 Received: from dave by discord.disaster.area with local (Exim 4.92.3) (envelope-from ) id 1iQK8x-00041p-DJ; Fri, 01 Nov 2019 10:46:19 +1100 From: Dave Chinner To: linux-xfs@vger.kernel.org Cc: linux-fsdevel@vger.kernel.org, linux-mm@kvack.org, linux-kernel@vger.kernel.org Subject: [PATCH 14/28] mm: reclaim_state records pages reclaimed, not slabs Date: Fri, 1 Nov 2019 10:46:04 +1100 Message-Id: <20191031234618.15403-15-david@fromorbit.com> X-Mailer: git-send-email 2.24.0.rc0 In-Reply-To: <20191031234618.15403-1-david@fromorbit.com> References: <20191031234618.15403-1-david@fromorbit.com> MIME-Version: 1.0 X-Optus-CM-Score: 0 X-Optus-CM-Analysis: v=2.2 cv=G6BsK5s5 c=1 sm=1 tr=0 a=3wLbm4YUAFX2xaPZIabsgw==:117 a=3wLbm4YUAFX2xaPZIabsgw==:17 a=jpOVt7BSZ2e4Z31A5e1TngXxSK0=:19 a=MeAgGD-zjQ4A:10 a=20KFwNOVAAAA:8 a=avOCubNBGnevqGq14jMA:9 Sender: linux-fsdevel-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-fsdevel@vger.kernel.org From: Dave Chinner Add a wrapper to account for page freeing in shrinker reclaim so that the high level scanning accounts for all the memory freed during a shrinker scan. No logic changes, just replacing open coded checks with a simple wrapper. 
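The wrapper collapses the open-coded pattern visible in every hunk below; schematically (nr_pages stands for whatever page count the call site has):

	/* Before: each site checks and bumps the counter by hand. */
	if (current->reclaim_state)
		current->reclaim_state->reclaimed_slab += nr_pages;

	/* After: one helper, with the field renamed to reclaimed_pages. */
	current_reclaim_account_pages(nr_pages);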
Signed-off-by: Dave Chinner Reviewed-by: Brian Foster --- fs/inode.c | 3 +-- fs/xfs/xfs_buf.c | 4 +--- include/linux/swap.h | 20 ++++++++++++++++++-- mm/slab.c | 3 +-- mm/slob.c | 4 +--- mm/slub.c | 3 +-- mm/vmscan.c | 4 ++-- 7 files changed, 25 insertions(+), 16 deletions(-) diff --git a/fs/inode.c b/fs/inode.c index fef457a42882..a77caf216659 100644 --- a/fs/inode.c +++ b/fs/inode.c @@ -764,8 +764,7 @@ static enum lru_status inode_lru_isolate(struct list_head *item, __count_vm_events(KSWAPD_INODESTEAL, reap); else __count_vm_events(PGINODESTEAL, reap); - if (current->reclaim_state) - current->reclaim_state->reclaimed_slab += reap; + current_reclaim_account_pages(reap); } iput(inode); spin_lock(lru_lock); diff --git a/fs/xfs/xfs_buf.c b/fs/xfs/xfs_buf.c index d34e5d2edacd..55b082bc53b3 100644 --- a/fs/xfs/xfs_buf.c +++ b/fs/xfs/xfs_buf.c @@ -324,9 +324,7 @@ xfs_buf_free( __free_page(page); } - if (current->reclaim_state) - current->reclaim_state->reclaimed_slab += - bp->b_page_count; + current_reclaim_account_pages(bp->b_page_count); } else if (bp->b_flags & _XBF_KMEM) kmem_free(bp->b_addr); _xfs_buf_free_pages(bp); diff --git a/include/linux/swap.h b/include/linux/swap.h index 063c0c1e112b..72b855fe20b0 100644 --- a/include/linux/swap.h +++ b/include/linux/swap.h @@ -126,12 +126,28 @@ union swap_header { /* * current->reclaim_state points to one of these when a task is running - * memory reclaim + * memory reclaim. It is typically used by shrinkers to return reclaim + * information back to the main vmscan loop. */ struct reclaim_state { - unsigned long reclaimed_slab; + unsigned long reclaimed_pages; /* pages freed by shrinkers */ }; +/* + * When code frees a page that may be run from a memory reclaim context, it + * needs to account for the pages it frees so memory reclaim can track them. + * Slab memory that is freed is accounted via this mechanism, so this is not + * necessary for slab or heap memory being freed. However, if the object being + * freed frees pages directly, then those pages should be accounted as well when + * in memory reclaim. This helper function takes care of accounting for the + * pages being reclaimed when it is required.
+ */ +static inline void current_reclaim_account_pages(int nr_pages) +{ + if (current->reclaim_state) + current->reclaim_state->reclaimed_pages += nr_pages; +} + #ifdef __KERNEL__ struct address_space; diff --git a/mm/slab.c b/mm/slab.c index 66e5d8032bae..419be005f41a 100644 --- a/mm/slab.c +++ b/mm/slab.c @@ -1395,8 +1395,7 @@ static void kmem_freepages(struct kmem_cache *cachep, struct page *page) page_mapcount_reset(page); page->mapping = NULL; - if (current->reclaim_state) - current->reclaim_state->reclaimed_slab += 1 << order; + current_reclaim_account_pages(1 << order); uncharge_slab_page(page, order, cachep); __free_pages(page, order); } diff --git a/mm/slob.c b/mm/slob.c index fa53e9f73893..c54a7eeee86d 100644 --- a/mm/slob.c +++ b/mm/slob.c @@ -211,9 +211,7 @@ static void slob_free_pages(void *b, int order) { struct page *sp = virt_to_page(b); - if (current->reclaim_state) - current->reclaim_state->reclaimed_slab += 1 << order; - + current_reclaim_account_pages(1 << order); mod_node_page_state(page_pgdat(sp), NR_SLAB_UNRECLAIMABLE, -(1 << order)); __free_pages(sp, order); diff --git a/mm/slub.c b/mm/slub.c index b25c807a111f..478554082079 100644 --- a/mm/slub.c +++ b/mm/slub.c @@ -1746,8 +1746,7 @@ static void __free_slab(struct kmem_cache *s, struct page *page) __ClearPageSlab(page); page->mapping = NULL; - if (current->reclaim_state) - current->reclaim_state->reclaimed_slab += pages; + current_reclaim_account_pages(pages); uncharge_slab_page(page, order, s); __free_pages(page, order); } diff --git a/mm/vmscan.c b/mm/vmscan.c index 7a8256322150..967e3d3c7748 100644 --- a/mm/vmscan.c +++ b/mm/vmscan.c @@ -2870,8 +2870,8 @@ static bool shrink_node(pg_data_t *pgdat, struct scan_control *sc) } while ((memcg = mem_cgroup_iter(root, memcg, NULL))); if (reclaim_state) { - sc->nr_reclaimed += reclaim_state->reclaimed_slab; - reclaim_state->reclaimed_slab = 0; + sc->nr_reclaimed += reclaim_state->reclaimed_pages; + reclaim_state->reclaimed_pages = 0; } /* Record the subtree's reclaim efficiency */ From patchwork Thu Oct 31 23:46:05 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Dave Chinner X-Patchwork-Id: 11222171 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 0BD8E912 for ; Thu, 31 Oct 2019 23:47:58 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by mail.kernel.org (Postfix) with ESMTP id E8D992173E for ; Thu, 31 Oct 2019 23:47:57 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1728857AbfJaXr5 (ORCPT ); Thu, 31 Oct 2019 19:47:57 -0400 Received: from mail105.syd.optusnet.com.au ([211.29.132.249]:55729 "EHLO mail105.syd.optusnet.com.au" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1728672AbfJaXqb (ORCPT ); Thu, 31 Oct 2019 19:46:31 -0400 Received: from dread.disaster.area (pa49-180-67-183.pa.nsw.optusnet.com.au [49.180.67.183]) by mail105.syd.optusnet.com.au (Postfix) with ESMTPS id ADDED3A289E; Fri, 1 Nov 2019 10:46:25 +1100 (AEDT) Received: from discord.disaster.area ([192.168.253.110]) by dread.disaster.area with esmtp (Exim 4.92.3) (envelope-from ) id 1iQK8x-0007CO-GU; Fri, 01 Nov 2019 10:46:19 +1100 Received: from dave by discord.disaster.area with local (Exim 4.92.3) (envelope-from ) id 1iQK8x-00041s-ET; Fri, 01 Nov 2019 10:46:19 +1100 From: Dave Chinner To: linux-xfs@vger.kernel.org Cc: 
linux-fsdevel@vger.kernel.org, linux-mm@kvack.org, linux-kernel@vger.kernel.org Subject: [PATCH 15/28] mm: back off direct reclaim on excessive shrinker deferral Date: Fri, 1 Nov 2019 10:46:05 +1100 Message-Id: <20191031234618.15403-16-david@fromorbit.com> X-Mailer: git-send-email 2.24.0.rc0 In-Reply-To: <20191031234618.15403-1-david@fromorbit.com> References: <20191031234618.15403-1-david@fromorbit.com> MIME-Version: 1.0 X-Optus-CM-Score: 0 X-Optus-CM-Analysis: v=2.2 cv=P6RKvmIu c=1 sm=1 tr=0 a=3wLbm4YUAFX2xaPZIabsgw==:117 a=3wLbm4YUAFX2xaPZIabsgw==:17 a=jpOVt7BSZ2e4Z31A5e1TngXxSK0=:19 a=MeAgGD-zjQ4A:10 a=20KFwNOVAAAA:8 a=c3jh6I83BcSAbW0NpfQA:9 Sender: linux-fsdevel-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-fsdevel@vger.kernel.org From: Dave Chinner When the majority of possible shrinker reclaim work is deferred by the shrinkers (e.g. due to GFP_NOFS context), and there is more work deferred than LRU pages were scanned, back off reclaim if there are large amounts of IO in progress. This tends to occur when there are inode cache heavy workloads that have little page cache or application memory pressure on filesystems like XFS. Inode cache heavy workloads involve lots of IO, so if we are getting device congestion it is indicative of memory reclaim running up against an IO throughput limitation. In this situation we need to throttle direct reclaim as we need to wait for kswapd to get some of the deferred work done. However, if there is no device congestion, then the system is keeping up with both the workload and memory reclaim and so there's no need to throttle. Hence we should only back off scanning for a bit if we see this condition and there is block device congestion present. Signed-off-by: Dave Chinner --- include/linux/swap.h | 2 ++ mm/vmscan.c | 30 +++++++++++++++++++++++++++++- 2 files changed, 31 insertions(+), 1 deletion(-) diff --git a/include/linux/swap.h b/include/linux/swap.h index 72b855fe20b0..da0913e14bb9 100644 --- a/include/linux/swap.h +++ b/include/linux/swap.h @@ -131,6 +131,8 @@ union swap_header { */ struct reclaim_state { unsigned long reclaimed_pages; /* pages freed by shrinkers */ + unsigned long scanned_objects; /* quantity of work done */ + unsigned long deferred_objects; /* work that wasn't done */ }; /* diff --git a/mm/vmscan.c b/mm/vmscan.c index 967e3d3c7748..13c11e10c9c5 100644 --- a/mm/vmscan.c +++ b/mm/vmscan.c @@ -570,6 +570,8 @@ static unsigned long do_shrink_slab(struct shrink_control *shrinkctl, deferred_count = min(deferred_count, freeable_objects * 2); } + if (current->reclaim_state) + current->reclaim_state->scanned_objects += scanned_objects; /* * Avoid risking looping forever due to too large nr value: @@ -585,8 +587,11 @@ static unsigned long do_shrink_slab(struct shrink_control *shrinkctl, * If the shrinker can't run (e.g. due to gfp_mask constraints), then * defer the work to a context that can scan the cache.
*/ - if (shrinkctl->defer_work) + if (shrinkctl->defer_work) { + if (current->reclaim_state) + current->reclaim_state->deferred_objects += scan_count; goto done; + } /* * Normally, we should not scan less than batch_size objects in one @@ -2871,7 +2876,30 @@ static bool shrink_node(pg_data_t *pgdat, struct scan_control *sc) if (reclaim_state) { sc->nr_reclaimed += reclaim_state->reclaimed_pages; + + /* + * If we are deferring more work than we are actually + * doing in the shrinkers, and we are scanning more + * objects than pages, then we have a large amount + * of slab caches we are deferring work to kswapd for. + * We better back off here for a while, otherwise + * we risk priority windup, swap storms and OOM kills + * once we empty the page lists but still can't make + * progress on the shrinker memory. + * + * kswapd won't ever defer work as it's run under a + * GFP_KERNEL context and can always do work. + */ + if ((reclaim_state->deferred_objects > + sc->nr_scanned - nr_scanned) && + (reclaim_state->deferred_objects > + reclaim_state->scanned_objects)) { + wait_iff_congested(BLK_RW_ASYNC, HZ/50); + } + reclaim_state->reclaimed_pages = 0; + reclaim_state->deferred_objects = 0; + reclaim_state->scanned_objects = 0; } /* Record the subtree's reclaim efficiency */ From patchwork Thu Oct 31 23:46:06 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Dave Chinner X-Patchwork-Id: 11222141 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id E3AF414E5 for ; Thu, 31 Oct 2019 23:47:27 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by mail.kernel.org (Postfix) with ESMTP id CAAD62087F for ; Thu, 31 Oct 2019 23:47:27 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1729117AbfJaXrX (ORCPT ); Thu, 31 Oct 2019 19:47:23 -0400 Received: from mail105.syd.optusnet.com.au ([211.29.132.249]:56405 "EHLO mail105.syd.optusnet.com.au" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1728692AbfJaXqd (ORCPT ); Thu, 31 Oct 2019 19:46:33 -0400 Received: from dread.disaster.area (pa49-180-67-183.pa.nsw.optusnet.com.au [49.180.67.183]) by mail105.syd.optusnet.com.au (Postfix) with ESMTPS id B8F183A28A0; Fri, 1 Nov 2019 10:46:25 +1100 (AEDT) Received: from discord.disaster.area ([192.168.253.110]) by dread.disaster.area with esmtp (Exim 4.92.3) (envelope-from ) id 1iQK8x-0007CQ-Hd; Fri, 01 Nov 2019 10:46:19 +1100 Received: from dave by discord.disaster.area with local (Exim 4.92.3) (envelope-from ) id 1iQK8x-00041v-FL; Fri, 01 Nov 2019 10:46:19 +1100 From: Dave Chinner To: linux-xfs@vger.kernel.org Cc: linux-fsdevel@vger.kernel.org, linux-mm@kvack.org, linux-kernel@vger.kernel.org Subject: [PATCH 16/28] mm: kswapd backoff for shrinkers Date: Fri, 1 Nov 2019 10:46:06 +1100 Message-Id: <20191031234618.15403-17-david@fromorbit.com> X-Mailer: git-send-email 2.24.0.rc0 In-Reply-To: <20191031234618.15403-1-david@fromorbit.com> References: <20191031234618.15403-1-david@fromorbit.com> MIME-Version: 1.0 X-Optus-CM-Score: 0 X-Optus-CM-Analysis: v=2.2 cv=D+Q3ErZj c=1 sm=1 tr=0 a=3wLbm4YUAFX2xaPZIabsgw==:117 a=3wLbm4YUAFX2xaPZIabsgw==:17 a=jpOVt7BSZ2e4Z31A5e1TngXxSK0=:19 a=MeAgGD-zjQ4A:10 a=20KFwNOVAAAA:8 a=NNMOctoXzqbiiAOzY8AA:9 Sender: linux-fsdevel-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-fsdevel@vger.kernel.org From: Dave Chinner
When kswapd reaches the end of the page LRU and starts hitting dirty pages, the logic in shrink_node() allows it to back off and wait for IO to complete, thereby preventing kswapd from scanning excessively and driving the system into swap thrashing and OOM conditions. When we have inode cache heavy workloads on XFS, we have exactly the same problem with reclaiming inodes. The non-blocking kswapd reclaim will keep putting pressure onto the inode cache which is unable to make progress. When the system gets to the point where there are no pages in the LRU to free, there is no swap left and there are no clean inodes that can be freed, it will OOM. This has a specific signature in OOM: [ 110.841987] Mem-Info: [ 110.842816] active_anon:241 inactive_anon:82 isolated_anon:1 active_file:168 inactive_file:143 isolated_file:0 unevictable:2621523 dirty:1 writeback:8 unstable:0 slab_reclaimable:564445 slab_unreclaimable:420046 mapped:1042 shmem:11 pagetables:6509 bounce:0 free:77626 free_pcp:2 free_cma:0 In this case, we have about 500-600 pages left in the LRUs, but we have ~565000 reclaimable slab pages still available for reclaim. Unfortunately, they are mostly dirty inodes, and so we really need to be able to throttle kswapd when shrinker progress is limited due to reaching the dirty end of the LRU... So, add a flag into the reclaim_state so if the shrinker decides it needs kswapd to back off and wait for a while (for whatever reason) it can do so. Signed-off-by: Dave Chinner --- include/linux/swap.h | 1 + mm/vmscan.c | 10 +++++++++- 2 files changed, 10 insertions(+), 1 deletion(-) diff --git a/include/linux/swap.h b/include/linux/swap.h index da0913e14bb9..76fc28f0e483 100644 --- a/include/linux/swap.h +++ b/include/linux/swap.h @@ -133,6 +133,7 @@ struct reclaim_state { unsigned long reclaimed_pages; /* pages freed by shrinkers */ unsigned long scanned_objects; /* quantity of work done */ unsigned long deferred_objects; /* work that wasn't done */ + bool need_backoff; /* tell kswapd to slow down */ }; /* diff --git a/mm/vmscan.c b/mm/vmscan.c index 13c11e10c9c5..0f7d35820057 100644 --- a/mm/vmscan.c +++ b/mm/vmscan.c @@ -2949,8 +2949,16 @@ static bool shrink_node(pg_data_t *pgdat, struct scan_control *sc) * implies that pages are cycling through the LRU * faster than they are written so also forcibly stall.
*/ - if (sc->nr.immediate) + if (sc->nr.immediate) { congestion_wait(BLK_RW_ASYNC, HZ/10); + } else if (reclaim_state && reclaim_state->need_backoff) { + /* + * Ditto, but it's a slab cache that is cycling + * through the LRU faster than it is written + */ + congestion_wait(BLK_RW_ASYNC, HZ/10); + reclaim_state->need_backoff = false; + } } /* From patchwork Thu Oct 31 23:46:07 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Dave Chinner X-Patchwork-Id: 11222183 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 71B68912 for ; Thu, 31 Oct 2019 23:48:15 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by mail.kernel.org (Postfix) with ESMTP id 593C321734 for ; Thu, 31 Oct 2019 23:48:15 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1729689AbfJaXsO (ORCPT ); Thu, 31 Oct 2019 19:48:14 -0400 Received: from mail105.syd.optusnet.com.au ([211.29.132.249]:55729 "EHLO mail105.syd.optusnet.com.au" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1728398AbfJaXq3 (ORCPT ); Thu, 31 Oct 2019 19:46:29 -0400 Received: from dread.disaster.area (pa49-180-67-183.pa.nsw.optusnet.com.au [49.180.67.183]) by mail105.syd.optusnet.com.au (Postfix) with ESMTPS id 178993A288C; Fri, 1 Nov 2019 10:46:25 +1100 (AEDT) Received: from discord.disaster.area ([192.168.253.110]) by dread.disaster.area with esmtp (Exim 4.92.3) (envelope-from ) id 1iQK8x-0007CS-Ie; Fri, 01 Nov 2019 10:46:19 +1100 Received: from dave by discord.disaster.area with local (Exim 4.92.3) (envelope-from ) id 1iQK8x-00041y-Gp; Fri, 01 Nov 2019 10:46:19 +1100 From: Dave Chinner To: linux-xfs@vger.kernel.org Cc: linux-fsdevel@vger.kernel.org, linux-mm@kvack.org, linux-kernel@vger.kernel.org Subject: [PATCH 17/28] xfs: synchronous AIL pushing Date: Fri, 1 Nov 2019 10:46:07 +1100 Message-Id: <20191031234618.15403-18-david@fromorbit.com> X-Mailer: git-send-email 2.24.0.rc0 In-Reply-To: <20191031234618.15403-1-david@fromorbit.com> References: <20191031234618.15403-1-david@fromorbit.com> MIME-Version: 1.0 X-Optus-CM-Score: 0 X-Optus-CM-Analysis: v=2.2 cv=G6BsK5s5 c=1 sm=1 tr=0 a=3wLbm4YUAFX2xaPZIabsgw==:117 a=3wLbm4YUAFX2xaPZIabsgw==:17 a=jpOVt7BSZ2e4Z31A5e1TngXxSK0=:19 a=MeAgGD-zjQ4A:10 a=20KFwNOVAAAA:8 a=Yw-OB7zFZuFGJ-GQUAcA:9 Sender: linux-fsdevel-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-fsdevel@vger.kernel.org From: Dave Chinner Provide an interface to push the AIL to a target LSN and wait for the tail of the log to move past that LSN. This is used to wait for all items older than a specific LSN to either be cleaned (written back) or relogged to a higher LSN in the AIL. The primary use for this is to allow IO-less inode reclaim throttling. Factor the common AIL deletion code that does all the wakeups into a helper so we only have one copy of this somewhat tricky code to interface with all the wakeups necessary when the LSN of the log tail changes. xfs_ail_push_sync() is temporary infrastructure to facilitate non-blocking, IO-less inode reclaim throttling that allows further structural changes to be made. Once those structural changes are made, the need for this function goes away and it is removed. In essence, it is only provided to ensure git bisects don't break while the changes to the reclaim algorithms are in progress.
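For context, a minimal sketch of how a caller would use the new primitive (a hypothetical caller for illustration; the real users of xfs_ail_push_sync() arrive later in this series):

	/*
	 * Hypothetical caller: block until everything in the AIL at or
	 * below this log item's LSN has been written back or relogged
	 * past it. xfs_ail_update_finish() wakes all sync pushers each
	 * time the log tail moves so the threshold can be re-checked.
	 */
	static void
	example_throttle_on_item(
		struct xfs_mount	*mp,
		struct xfs_log_item	*lip)
	{
		if (lip->li_lsn)
			xfs_ail_push_sync(mp->m_ail, lip->li_lsn);
	}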
Signed-off-by: Dave Chinner Reviewed-by: Brian Foster --- fs/xfs/xfs_trans_ail.c | 32 ++++++++++++++++++++++++++++++++ fs/xfs/xfs_trans_priv.h | 2 ++ 2 files changed, 34 insertions(+) diff --git a/fs/xfs/xfs_trans_ail.c b/fs/xfs/xfs_trans_ail.c index 685a21cd24c0..3e1d0e1439e2 100644 --- a/fs/xfs/xfs_trans_ail.c +++ b/fs/xfs/xfs_trans_ail.c @@ -662,6 +662,36 @@ xfs_ail_push_all( xfs_ail_push(ailp, threshold_lsn); } +/* + * Push the AIL to a specific lsn and wait for it to complete. + */ +void +xfs_ail_push_sync( + struct xfs_ail *ailp, + xfs_lsn_t threshold_lsn) +{ + struct xfs_log_item *lip; + DEFINE_WAIT(wait); + + spin_lock(&ailp->ail_lock); + while ((lip = xfs_ail_min(ailp)) != NULL) { + prepare_to_wait(&ailp->ail_push, &wait, TASK_UNINTERRUPTIBLE); + if (XFS_FORCED_SHUTDOWN(ailp->ail_mount) || + XFS_LSN_CMP(threshold_lsn, lip->li_lsn) < 0) + break; + if (XFS_LSN_CMP(threshold_lsn, ailp->ail_target) > 0) + ailp->ail_target = threshold_lsn; + wake_up_process(ailp->ail_task); + spin_unlock(&ailp->ail_lock); + schedule(); + spin_lock(&ailp->ail_lock); + } + spin_unlock(&ailp->ail_lock); + + finish_wait(&ailp->ail_push, &wait); +} + + /* * Push out all items in the AIL immediately and wait until the AIL is empty. */ @@ -702,6 +732,7 @@ xfs_ail_update_finish( if (!XFS_FORCED_SHUTDOWN(mp)) xlog_assign_tail_lsn_locked(mp); + wake_up_all(&ailp->ail_push); if (list_empty(&ailp->ail_head)) wake_up_all(&ailp->ail_empty); spin_unlock(&ailp->ail_lock); @@ -858,6 +889,7 @@ xfs_trans_ail_init( spin_lock_init(&ailp->ail_lock); INIT_LIST_HEAD(&ailp->ail_buf_list); init_waitqueue_head(&ailp->ail_empty); + init_waitqueue_head(&ailp->ail_push); ailp->ail_task = kthread_run(xfsaild, ailp, "xfsaild/%s", ailp->ail_mount->m_fsname); diff --git a/fs/xfs/xfs_trans_priv.h b/fs/xfs/xfs_trans_priv.h index 35655eac01a6..1b6f4bbd47c0 100644 --- a/fs/xfs/xfs_trans_priv.h +++ b/fs/xfs/xfs_trans_priv.h @@ -61,6 +61,7 @@ struct xfs_ail { int ail_log_flush; struct list_head ail_buf_list; wait_queue_head_t ail_empty; + wait_queue_head_t ail_push; }; /* @@ -113,6 +114,7 @@ xfs_trans_ail_remove( } void xfs_ail_push(struct xfs_ail *, xfs_lsn_t); +void xfs_ail_push_sync(struct xfs_ail *, xfs_lsn_t); void xfs_ail_push_all(struct xfs_ail *); void xfs_ail_push_all_sync(struct xfs_ail *); struct xfs_log_item *xfs_ail_min(struct xfs_ail *ailp); From patchwork Thu Oct 31 23:46:08 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Dave Chinner X-Patchwork-Id: 11222159 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 98C10912 for ; Thu, 31 Oct 2019 23:47:50 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by mail.kernel.org (Postfix) with ESMTP id 80F6820873 for ; Thu, 31 Oct 2019 23:47:50 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1729559AbfJaXrl (ORCPT ); Thu, 31 Oct 2019 19:47:41 -0400 Received: from mail104.syd.optusnet.com.au ([211.29.132.246]:40131 "EHLO mail104.syd.optusnet.com.au" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1728696AbfJaXqc (ORCPT ); Thu, 31 Oct 2019 19:46:32 -0400 Received: from dread.disaster.area (pa49-180-67-183.pa.nsw.optusnet.com.au [49.180.67.183]) by mail104.syd.optusnet.com.au (Postfix) with ESMTPS id 61A077EA3C0; Fri, 1 Nov 2019 10:46:26 +1100 (AEDT) Received: from discord.disaster.area ([192.168.253.110]) by dread.disaster.area 
with esmtp (Exim 4.92.3) (envelope-from ) id 1iQK8x-0007CU-Jz; Fri, 01 Nov 2019 10:46:19 +1100 Received: from dave by discord.disaster.area with local (Exim 4.92.3) (envelope-from ) id 1iQK8x-000421-Hp; Fri, 01 Nov 2019 10:46:19 +1100 From: Dave Chinner To: linux-xfs@vger.kernel.org Cc: linux-fsdevel@vger.kernel.org, linux-mm@kvack.org, linux-kernel@vger.kernel.org Subject: [PATCH 18/28] xfs: don't block kswapd in inode reclaim Date: Fri, 1 Nov 2019 10:46:08 +1100 Message-Id: <20191031234618.15403-19-david@fromorbit.com> X-Mailer: git-send-email 2.24.0.rc0 In-Reply-To: <20191031234618.15403-1-david@fromorbit.com> References: <20191031234618.15403-1-david@fromorbit.com> MIME-Version: 1.0 X-Optus-CM-Score: 0 X-Optus-CM-Analysis: v=2.2 cv=D+Q3ErZj c=1 sm=1 tr=0 a=3wLbm4YUAFX2xaPZIabsgw==:117 a=3wLbm4YUAFX2xaPZIabsgw==:17 a=jpOVt7BSZ2e4Z31A5e1TngXxSK0=:19 a=MeAgGD-zjQ4A:10 a=20KFwNOVAAAA:8 a=x2yLRZ3W9cWBXR4-ZBgA:9 Sender: linux-fsdevel-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-fsdevel@vger.kernel.org From: Dave Chinner We have a number of reasons for blocking kswapd in XFS inode reclaim, mainly all to do with the fact that memory reclaim has no feedback mechanisms to throttle on dirty slab objects that need IO to reclaim. As a result, we currently throttle inode reclaim by issuing IO in the reclaim context. The unfortunate side effect of this is that it can cause long tail latencies in reclaim and for some workloads this can be a problem. Now that the shrinkers finally have a method of telling kswapd to back off, we can start the process of making inode reclaim in XFS non-blocking. The first thing we need to do is not block kswapd, but so that doesn't cause immediate serious problems, make sure inode writeback is always underway when kswapd is running. As we don't block kswapd now, we don't have to worry about reclaim scans taking long delays due to IO being issued and waited for. Hence while direct reclaim gets delayed by IO, kswapd will not and so it will keep pushing the AIL to clean inodes. Hence direct reclaim doesn't need to push the AIL anymore as kswapd will do it reliably now. Signed-off-by: Dave Chinner Reviewed-by: Brian Foster --- fs/xfs/xfs_icache.c | 17 ++++++++++++++--- 1 file changed, 14 insertions(+), 3 deletions(-) diff --git a/fs/xfs/xfs_icache.c b/fs/xfs/xfs_icache.c index 944add5ff8e0..edcc3f6bb3bf 100644 --- a/fs/xfs/xfs_icache.c +++ b/fs/xfs/xfs_icache.c @@ -1378,11 +1378,22 @@ xfs_reclaim_inodes_nr( struct xfs_mount *mp, int nr_to_scan) { - /* kick background reclaimer and push the AIL */ + int sync_mode = SYNC_TRYLOCK; + + /* kick background reclaimer */ xfs_reclaim_work_queue(mp); - xfs_ail_push_all(mp->m_ail); - return xfs_reclaim_inodes_ag(mp, SYNC_TRYLOCK | SYNC_WAIT, &nr_to_scan); + /* + * For kswapd, we kick background inode writeback. For direct + * reclaim, we issue and wait on inode writeback to throttle + * reclaim rates and avoid shouty OOM-death. 
+ */ + if (current_is_kswapd()) + xfs_ail_push_all(mp->m_ail); + else + sync_mode |= SYNC_WAIT; + + return xfs_reclaim_inodes_ag(mp, sync_mode, &nr_to_scan); } /* From patchwork Thu Oct 31 23:46:09 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Dave Chinner X-Patchwork-Id: 11222131 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 1237D912 for ; Thu, 31 Oct 2019 23:47:20 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by mail.kernel.org (Postfix) with ESMTP id EE466216F4 for ; Thu, 31 Oct 2019 23:47:19 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1729274AbfJaXrI (ORCPT ); Thu, 31 Oct 2019 19:47:08 -0400 Received: from mail105.syd.optusnet.com.au ([211.29.132.249]:55491 "EHLO mail105.syd.optusnet.com.au" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1728774AbfJaXqe (ORCPT ); Thu, 31 Oct 2019 19:46:34 -0400 Received: from dread.disaster.area (pa49-180-67-183.pa.nsw.optusnet.com.au [49.180.67.183]) by mail105.syd.optusnet.com.au (Postfix) with ESMTPS id 283A53A2815; Fri, 1 Nov 2019 10:46:26 +1100 (AEDT) Received: from discord.disaster.area ([192.168.253.110]) by dread.disaster.area with esmtp (Exim 4.92.3) (envelope-from ) id 1iQK8x-0007CW-Kq; Fri, 01 Nov 2019 10:46:19 +1100 Received: from dave by discord.disaster.area with local (Exim 4.92.3) (envelope-from ) id 1iQK8x-000424-Iy; Fri, 01 Nov 2019 10:46:19 +1100 From: Dave Chinner To: linux-xfs@vger.kernel.org Cc: linux-fsdevel@vger.kernel.org, linux-mm@kvack.org, linux-kernel@vger.kernel.org Subject: [PATCH 19/28] xfs: reduce kswapd blocking on inode locking. Date: Fri, 1 Nov 2019 10:46:09 +1100 Message-Id: <20191031234618.15403-20-david@fromorbit.com> X-Mailer: git-send-email 2.24.0.rc0 In-Reply-To: <20191031234618.15403-1-david@fromorbit.com> References: <20191031234618.15403-1-david@fromorbit.com> MIME-Version: 1.0 X-Optus-CM-Score: 0 X-Optus-CM-Analysis: v=2.2 cv=D+Q3ErZj c=1 sm=1 tr=0 a=3wLbm4YUAFX2xaPZIabsgw==:117 a=3wLbm4YUAFX2xaPZIabsgw==:17 a=jpOVt7BSZ2e4Z31A5e1TngXxSK0=:19 a=MeAgGD-zjQ4A:10 a=20KFwNOVAAAA:8 a=KE6An8oM74Ymw0apzXAA:9 Sender: linux-fsdevel-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-fsdevel@vger.kernel.org From: Dave Chinner When doing async node reclaiming, we grab a batch of inodes that we are likely able to reclaim and ignore those that are already flushing. However, when we actually go to reclaim them, the first thing we do is lock the inode. If we are racing with something else reclaiming the inode or flushing it because it is dirty, we block on the inode lock. Hence we can still block kswapd here. Further, if we flush an inode, we also cluster all the other dirty inodes in that cluster into the same IO, flush locking them all. However, if the workload is operating on sequential inodes (e.g. created by a tarball extraction) most of these inodes will be sequential in the cache and so in the same batch we've already grabbed for reclaim scanning. As a result, it is common for all the inodes in the batch to be dirty and it is common for the first inode flushed to also flush all the inodes in the reclaim batch. In which case, they are now all going to be flush locked and we do not want to block on them.
This will be necessary for the upcoming conversion to LRU lists for inode reclaim tracking. Found via tracing, which showed big batches of repeated lock/unlock runs on inodes that we had just flushed by write clustering during reclaim. Signed-off-by: Dave Chinner Reviewed-by: Christoph Hellwig Reviewed-by: Brian Foster --- fs/xfs/xfs_icache.c | 23 ++++++++++++++++++----- 1 file changed, 18 insertions(+), 5 deletions(-) diff --git a/fs/xfs/xfs_icache.c b/fs/xfs/xfs_icache.c index edcc3f6bb3bf..189cf423fe8f 100644 --- a/fs/xfs/xfs_icache.c +++ b/fs/xfs/xfs_icache.c @@ -1104,11 +1104,23 @@ xfs_reclaim_inode( restart: error = 0; - xfs_ilock(ip, XFS_ILOCK_EXCL); - if (!xfs_iflock_nowait(ip)) { - if (!(sync_mode & SYNC_WAIT)) + /* + * Don't try to flush the inode if another inode in this cluster has + * already flushed it after we did the initial checks in + * xfs_reclaim_inode_grab(). + */ + if (sync_mode & SYNC_TRYLOCK) { + if (!xfs_ilock_nowait(ip, XFS_ILOCK_EXCL)) goto out; - xfs_iflock(ip); + if (!xfs_iflock_nowait(ip)) + goto out_unlock; + } else { + xfs_ilock(ip, XFS_ILOCK_EXCL); + if (!xfs_iflock_nowait(ip)) { + if (!(sync_mode & SYNC_WAIT)) + goto out_unlock; + xfs_iflock(ip); + } } if (XFS_FORCED_SHUTDOWN(ip->i_mount)) { @@ -1215,9 +1227,10 @@ xfs_reclaim_inode( out_ifunlock: xfs_ifunlock(ip); +out_unlock: + xfs_iunlock(ip, XFS_ILOCK_EXCL); out: xfs_iflags_clear(ip, XFS_IRECLAIM); - xfs_iunlock(ip, XFS_ILOCK_EXCL); /* * We could return -EAGAIN here to make reclaim rescan the inode tree in * a short while. However, this just burns CPU time scanning the tree From patchwork Thu Oct 31 23:46:10 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Dave Chinner X-Patchwork-Id: 11222173 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 9C9641515 for ; Thu, 31 Oct 2019 23:47:58 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by mail.kernel.org (Postfix) with ESMTP id 7BAEE208C0 for ; Thu, 31 Oct 2019 23:47:58 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1729641AbfJaXr5 (ORCPT ); Thu, 31 Oct 2019 19:47:57 -0400 Received: from mail105.syd.optusnet.com.au ([211.29.132.249]:56309 "EHLO mail105.syd.optusnet.com.au" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1728634AbfJaXqb (ORCPT ); Thu, 31 Oct 2019 19:46:31 -0400 Received: from dread.disaster.area (pa49-180-67-183.pa.nsw.optusnet.com.au [49.180.67.183]) by mail105.syd.optusnet.com.au (Postfix) with ESMTPS id BF8403A28A1; Fri, 1 Nov 2019 10:46:25 +1100 (AEDT) Received: from discord.disaster.area ([192.168.253.110]) by dread.disaster.area with esmtp (Exim 4.92.3) (envelope-from ) id 1iQK8x-0007CY-Lq; Fri, 01 Nov 2019 10:46:19 +1100 Received: from dave by discord.disaster.area with local (Exim 4.92.3) (envelope-from ) id 1iQK8x-000427-Jw; Fri, 01 Nov 2019 10:46:19 +1100 From: Dave Chinner To: linux-xfs@vger.kernel.org Cc: linux-fsdevel@vger.kernel.org, linux-mm@kvack.org, linux-kernel@vger.kernel.org Subject: [PATCH 20/28] xfs: kill background reclaim work Date: Fri, 1 Nov 2019 10:46:10 +1100 Message-Id: <20191031234618.15403-21-david@fromorbit.com> X-Mailer: git-send-email 2.24.0.rc0 In-Reply-To:
<20191031234618.15403-1-david@fromorbit.com> References: <20191031234618.15403-1-david@fromorbit.com> MIME-Version: 1.0 X-Optus-CM-Score: 0 X-Optus-CM-Analysis: v=2.2 cv=G6BsK5s5 c=1 sm=1 tr=0 a=3wLbm4YUAFX2xaPZIabsgw==:117 a=3wLbm4YUAFX2xaPZIabsgw==:17 a=jpOVt7BSZ2e4Z31A5e1TngXxSK0=:19 a=MeAgGD-zjQ4A:10 a=20KFwNOVAAAA:8 a=1xoyCpcK-Ekt5S4qF2sA:9 Sender: linux-fsdevel-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-fsdevel@vger.kernel.org From: Dave Chinner This functionality is now entirely provided by kswapd, so we don't need the worker thread to do async reclaim anymore. Signed-off-by: Dave Chinner Reviewed-by: Christoph Hellwig Reviewed-by: Brian Foster --- fs/xfs/xfs_icache.c | 44 -------------------------------------------- fs/xfs/xfs_icache.h | 2 -- fs/xfs/xfs_mount.c | 2 -- fs/xfs/xfs_mount.h | 2 -- fs/xfs/xfs_super.c | 11 +---------- 5 files changed, 1 insertion(+), 60 deletions(-) diff --git a/fs/xfs/xfs_icache.c b/fs/xfs/xfs_icache.c index 189cf423fe8f..7e175304e146 100644 --- a/fs/xfs/xfs_icache.c +++ b/fs/xfs/xfs_icache.c @@ -138,44 +138,6 @@ xfs_inode_free( __xfs_inode_free(ip); } -/* - * Queue a new inode reclaim pass if there are reclaimable inodes and there - * isn't a reclaim pass already in progress. By default it runs every 5s based - * on the xfs periodic sync default of 30s. Perhaps this should have it's own - * tunable, but that can be done if this method proves to be ineffective or too - * aggressive. - */ -static void -xfs_reclaim_work_queue( - struct xfs_mount *mp) -{ - - rcu_read_lock(); - if (radix_tree_tagged(&mp->m_perag_tree, XFS_ICI_RECLAIM_TAG)) { - queue_delayed_work(mp->m_reclaim_workqueue, &mp->m_reclaim_work, - msecs_to_jiffies(xfs_syncd_centisecs / 6 * 10)); - } - rcu_read_unlock(); -} - -/* - * This is a fast pass over the inode cache to try to get reclaim moving on as - * many inodes as possible in a short period of time. It kicks itself every few - * seconds, as well as being kicked by the inode cache shrinker when memory - * goes low. It scans as quickly as possible avoiding locked inodes or those - * already being flushed, and once done schedules a future pass. - */ -void -xfs_reclaim_worker( - struct work_struct *work) -{ - struct xfs_mount *mp = container_of(to_delayed_work(work), - struct xfs_mount, m_reclaim_work); - - xfs_reclaim_inodes(mp, SYNC_TRYLOCK); - xfs_reclaim_work_queue(mp); -} - static void xfs_perag_set_reclaim_tag( struct xfs_perag *pag) @@ -192,9 +154,6 @@ xfs_perag_set_reclaim_tag( XFS_ICI_RECLAIM_TAG); spin_unlock(&mp->m_perag_lock); - /* schedule periodic background inode reclaim */ - xfs_reclaim_work_queue(mp); - trace_xfs_perag_set_reclaim(mp, pag->pag_agno, -1, _RET_IP_); } @@ -1393,9 +1352,6 @@ xfs_reclaim_inodes_nr( { int sync_mode = SYNC_TRYLOCK; - /* kick background reclaimer */ - xfs_reclaim_work_queue(mp); - /* * For kswapd, we kick background inode writeback.
For direct * reclaim, we issue and wait on inode writeback to throttle diff --git a/fs/xfs/xfs_icache.h b/fs/xfs/xfs_icache.h index 48f1fd2bb6ad..4c0d8920cc54 100644 --- a/fs/xfs/xfs_icache.h +++ b/fs/xfs/xfs_icache.h @@ -49,8 +49,6 @@ int xfs_iget(struct xfs_mount *mp, struct xfs_trans *tp, xfs_ino_t ino, struct xfs_inode * xfs_inode_alloc(struct xfs_mount *mp, xfs_ino_t ino); void xfs_inode_free(struct xfs_inode *ip); -void xfs_reclaim_worker(struct work_struct *work); - int xfs_reclaim_inodes(struct xfs_mount *mp, int mode); int xfs_reclaim_inodes_count(struct xfs_mount *mp); long xfs_reclaim_inodes_nr(struct xfs_mount *mp, int nr_to_scan); diff --git a/fs/xfs/xfs_mount.c b/fs/xfs/xfs_mount.c index 3e8eedf01eb2..8f76c2add18b 100644 --- a/fs/xfs/xfs_mount.c +++ b/fs/xfs/xfs_mount.c @@ -952,7 +952,6 @@ xfs_mountfs( * qm_unmount_quotas and therefore rely on qm_unmount to release the * quota inodes. */ - cancel_delayed_work_sync(&mp->m_reclaim_work); xfs_reclaim_inodes(mp, SYNC_WAIT); xfs_health_unmount(mp); out_log_dealloc: @@ -1035,7 +1034,6 @@ xfs_unmountfs( * reclaim just to be sure. We can stop background inode reclaim * here as well if it is still running. */ - cancel_delayed_work_sync(&mp->m_reclaim_work); xfs_reclaim_inodes(mp, SYNC_WAIT); xfs_health_unmount(mp); diff --git a/fs/xfs/xfs_mount.h b/fs/xfs/xfs_mount.h index a46cb3fd24b1..8c6885d3b085 100644 --- a/fs/xfs/xfs_mount.h +++ b/fs/xfs/xfs_mount.h @@ -163,7 +163,6 @@ typedef struct xfs_mount { uint m_chsize; /* size of next field */ atomic_t m_active_trans; /* number trans frozen */ struct xfs_mru_cache *m_filestream; /* per-mount filestream data */ - struct delayed_work m_reclaim_work; /* background inode reclaim */ struct delayed_work m_eofblocks_work; /* background eof blocks trimming */ struct delayed_work m_cowblocks_work; /* background cow blocks @@ -180,7 +179,6 @@ typedef struct xfs_mount { struct workqueue_struct *m_buf_workqueue; struct workqueue_struct *m_unwritten_workqueue; struct workqueue_struct *m_cil_workqueue; - struct workqueue_struct *m_reclaim_workqueue; struct workqueue_struct *m_eofblocks_workqueue; struct workqueue_struct *m_sync_workqueue; diff --git a/fs/xfs/xfs_super.c b/fs/xfs/xfs_super.c index ebe2ccd36127..a4fe679207ef 100644 --- a/fs/xfs/xfs_super.c +++ b/fs/xfs/xfs_super.c @@ -794,15 +794,10 @@ xfs_init_mount_workqueues( if (!mp->m_cil_workqueue) goto out_destroy_unwritten; - mp->m_reclaim_workqueue = alloc_workqueue("xfs-reclaim/%s", - WQ_MEM_RECLAIM|WQ_FREEZABLE, 0, mp->m_fsname); - if (!mp->m_reclaim_workqueue) - goto out_destroy_cil; - mp->m_eofblocks_workqueue = alloc_workqueue("xfs-eofblocks/%s", WQ_MEM_RECLAIM|WQ_FREEZABLE, 0, mp->m_fsname); if (!mp->m_eofblocks_workqueue) - goto out_destroy_reclaim; + goto out_destroy_cil; mp->m_sync_workqueue = alloc_workqueue("xfs-sync/%s", WQ_FREEZABLE, 0, mp->m_fsname); @@ -813,8 +808,6 @@ xfs_init_mount_workqueues( out_destroy_eofb: destroy_workqueue(mp->m_eofblocks_workqueue); -out_destroy_reclaim: - destroy_workqueue(mp->m_reclaim_workqueue); out_destroy_cil: destroy_workqueue(mp->m_cil_workqueue); out_destroy_unwritten: @@ -831,7 +824,6 @@ xfs_destroy_mount_workqueues( { destroy_workqueue(mp->m_sync_workqueue); destroy_workqueue(mp->m_eofblocks_workqueue); - destroy_workqueue(mp->m_reclaim_workqueue); destroy_workqueue(mp->m_cil_workqueue); destroy_workqueue(mp->m_unwritten_workqueue); destroy_workqueue(mp->m_buf_workqueue); @@ -1520,7 +1512,6 @@ xfs_mount_alloc( spin_lock_init(&mp->m_perag_lock); mutex_init(&mp->m_growlock); 
atomic_set(&mp->m_active_trans, 0); - INIT_DELAYED_WORK(&mp->m_reclaim_work, xfs_reclaim_worker); INIT_DELAYED_WORK(&mp->m_eofblocks_work, xfs_eofblocks_worker); INIT_DELAYED_WORK(&mp->m_cowblocks_work, xfs_cowblocks_worker); mp->m_kobj.kobject.kset = xfs_kset; From patchwork Thu Oct 31 23:46:11 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Dave Chinner X-Patchwork-Id: 11222145 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id D1F6E912 for ; Thu, 31 Oct 2019 23:47:30 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by mail.kernel.org (Postfix) with ESMTP id B01EF216F4 for ; Thu, 31 Oct 2019 23:47:30 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1729465AbfJaXr3 (ORCPT ); Thu, 31 Oct 2019 19:47:29 -0400 Received: from mail104.syd.optusnet.com.au ([211.29.132.246]:40529 "EHLO mail104.syd.optusnet.com.au" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1728627AbfJaXqc (ORCPT ); Thu, 31 Oct 2019 19:46:32 -0400 Received: from dread.disaster.area (pa49-180-67-183.pa.nsw.optusnet.com.au [49.180.67.183]) by mail104.syd.optusnet.com.au (Postfix) with ESMTPS id 39CB87EA8C1; Fri, 1 Nov 2019 10:46:25 +1100 (AEDT) Received: from discord.disaster.area ([192.168.253.110]) by dread.disaster.area with esmtp (Exim 4.92.3) (envelope-from ) id 1iQK8x-0007Ca-NF; Fri, 01 Nov 2019 10:46:19 +1100 Received: from dave by discord.disaster.area with local (Exim 4.92.3) (envelope-from ) id 1iQK8x-00042A-Kx; Fri, 01 Nov 2019 10:46:19 +1100 From: Dave Chinner To: linux-xfs@vger.kernel.org Cc: linux-fsdevel@vger.kernel.org, linux-mm@kvack.org, linux-kernel@vger.kernel.org Subject: [PATCH 21/28] xfs: use AIL pushing for inode reclaim IO Date: Fri, 1 Nov 2019 10:46:11 +1100 Message-Id: <20191031234618.15403-22-david@fromorbit.com> X-Mailer: git-send-email 2.24.0.rc0 In-Reply-To: <20191031234618.15403-1-david@fromorbit.com> References: <20191031234618.15403-1-david@fromorbit.com> MIME-Version: 1.0 X-Optus-CM-Score: 0 X-Optus-CM-Analysis: v=2.2 cv=D+Q3ErZj c=1 sm=1 tr=0 a=3wLbm4YUAFX2xaPZIabsgw==:117 a=3wLbm4YUAFX2xaPZIabsgw==:17 a=jpOVt7BSZ2e4Z31A5e1TngXxSK0=:19 a=MeAgGD-zjQ4A:10 a=20KFwNOVAAAA:8 a=1ffGU6BmlR_CJC1Va-MA:9 a=-VguHsaQu4ztqAO3:21 a=lelhI7Dy3mXGfK96:21 Sender: linux-fsdevel-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-fsdevel@vger.kernel.org From: Dave Chinner Inode reclaim currently issues its own inode IO when it comes across dirty inodes. This is used to throttle direct reclaim down to the rate at which we can reclaim dirty inodes. Failure to throttle in this manner results in the OOM killer being trivial to trigger even when there is lots of free memory available. However, having direct reclaimers issue IO causes an amount of IO thrashing to occur. We can have up to the number of AGs in the filesystem concurrently issuing IO, plus the AIL pushing thread as well. This means we can have many competing sources of IO, and they all end up thrashing and competing for the request slots in the block device. Similar to dirty page throttling and the BDI flusher thread, we can use the AIL pushing thread as the sole place we issue inode writeback from, and everything else waits for it to make progress. To do this, reclaim will skip over dirty inodes, but in doing so will record the lowest LSN of all the dirty inodes it skips. It will then push the AIL to this LSN and wait for it to complete that work. In doing so, we block direct reclaim on the completion of at least one IO, thereby providing some level of throttling for when we encounter dirty inodes.
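Condensed from the diff below, the throttling then boils down to (surrounding loop elided):

	/* dirty inode: record its position in the AIL and skip it */
	if (!xfs_inode_clean(ip)) {
		*lsn = ip->i_itemp->ili_item.li_lsn;
		return false;
	}
	...
	if (lsn && (lowest_lsn == NULLCOMMITLSN ||
		    XFS_LSN_CMP(lsn, lowest_lsn) < 0))
		lowest_lsn = lsn;
	...
	/* direct reclaim waits for the AIL to clean the oldest skipped inode */
	if ((flags & SYNC_WAIT) && lowest_lsn != NULLCOMMITLSN)
		xfs_ail_push_sync(mp->m_ail, lowest_lsn);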
However, we gain the ability to scan and reclaim clean inodes in a non-blocking fashion. Hence direct reclaim will be throttled directly by the rate at which dirty inodes are cleaned by AIL pushing, rather than by delays caused by competing IO submissions. This allows us to reduce the locking that limits direct reclaim concurrency to just protecting the reclaim cursor state, hence greatly simplifying the inode reclaim code as it now just skips dirty inodes. Note: this patch by itself isn't completely able to throttle direct reclaim sufficiently to prevent OOM killer madness. We can't do that until we change the way we index reclaimable inodes in the next patch and can feed back state to the mm core sanely. However, we can't change the way we index reclaimable inodes until we have IO-less non-blocking reclaim for both direct reclaim and kswapd reclaim. Catch-22... Signed-off-by: Dave Chinner Reviewed-by: Brian Foster --- fs/xfs/xfs_icache.c | 218 ++++++++++++++++++-------------------- 1 file changed, 89 insertions(+), 129 deletions(-) diff --git a/fs/xfs/xfs_icache.c b/fs/xfs/xfs_icache.c index 7e175304e146..ff8ae32614a6 100644 --- a/fs/xfs/xfs_icache.c +++ b/fs/xfs/xfs_icache.c @@ -22,6 +22,7 @@ #include "xfs_dquot_item.h" #include "xfs_dquot.h" #include "xfs_reflink.h" +#include "xfs_log.h" #include <linux/iversion.h> @@ -967,28 +968,42 @@ xfs_inode_ag_iterator_tag( } /* - * Grab the inode for reclaim exclusively. - * Return 0 if we grabbed it, non-zero otherwise. + * Grab the inode for reclaim. + * + * Return false if we aren't going to reclaim it, true if it is a reclaim + * candidate. + * + * If the inode is clean or unreclaimable, return a zero lsn to tell the caller + * it does not require flushing. Otherwise return the log item lsn of the inode + * so the caller can determine its inode flush target. If we get the clean/dirty + * state wrong then it will be sorted in xfs_reclaim_inode() once we have locks + * held. */ -STATIC int +STATIC bool xfs_reclaim_inode_grab( struct xfs_inode *ip, - int flags) + int flags, + xfs_lsn_t *lsn) { ASSERT(rcu_read_lock_held()); + *lsn = 0; /* quick check for stale RCU freed inode */ if (!ip->i_ino) - return 1; + return false; /* - * If we are asked for non-blocking operation, do unlocked checks to - * see if the inode already is being flushed or in reclaim to avoid - * lock traffic. + * Do unlocked checks to see if the inode already is being flushed or in + * reclaim to avoid lock traffic. If the inode is not clean, return the + * position in the AIL for the caller to push to. */ - if ((flags & SYNC_TRYLOCK) && - __xfs_iflags_test(ip, XFS_IFLOCK | XFS_IRECLAIM)) - return 1; + if (!xfs_inode_clean(ip)) { + *lsn = ip->i_itemp->ili_item.li_lsn; + return false; + } + + if (__xfs_iflags_test(ip, XFS_IFLOCK | XFS_IRECLAIM)) + return false; /* * The radix tree lock here protects a thread in xfs_iget from racing @@ -1005,11 +1020,11 @@ xfs_reclaim_inode_grab( __xfs_iflags_test(ip, XFS_IRECLAIM)) { /* not a reclaim candidate.
*/ spin_unlock(&ip->i_flags_lock); - return 1; + return false; } __xfs_iflags_set(ip, XFS_IRECLAIM); spin_unlock(&ip->i_flags_lock); - return 0; + return true; } /* @@ -1050,92 +1065,61 @@ xfs_reclaim_inode_grab( * clean => reclaim * dirty, async => requeue * dirty, sync => flush, wait and reclaim + * + * Returns true if the inode was reclaimed, false otherwise. */ -STATIC int +STATIC bool xfs_reclaim_inode( struct xfs_inode *ip, struct xfs_perag *pag, - int sync_mode) + xfs_lsn_t *lsn) { - struct xfs_buf *bp = NULL; - xfs_ino_t ino = ip->i_ino; /* for radix_tree_delete */ - int error; + xfs_ino_t ino; + + *lsn = 0; -restart: - error = 0; /* * Don't try to flush the inode if another inode in this cluster has * already flushed it after we did the initial checks in * xfs_reclaim_inode_grab(). */ - if (sync_mode & SYNC_TRYLOCK) { - if (!xfs_ilock_nowait(ip, XFS_ILOCK_EXCL)) - goto out; - if (!xfs_iflock_nowait(ip)) - goto out_unlock; - } else { - xfs_ilock(ip, XFS_ILOCK_EXCL); - if (!xfs_iflock_nowait(ip)) { - if (!(sync_mode & SYNC_WAIT)) - goto out_unlock; - xfs_iflock(ip); - } - } + if (!xfs_ilock_nowait(ip, XFS_ILOCK_EXCL)) + goto out; + if (!xfs_iflock_nowait(ip)) + goto out_unlock; + /* If we are in shutdown, we don't care about blocking. */ if (XFS_FORCED_SHUTDOWN(ip->i_mount)) { xfs_iunpin_wait(ip); /* xfs_iflush_abort() drops the flush lock */ xfs_iflush_abort(ip, false); goto reclaim; } + + /* Can't do anything to pinned inodes without blocking, skip it. */ if (xfs_ipincount(ip)) { - if (!(sync_mode & SYNC_WAIT)) - goto out_ifunlock; - xfs_iunpin_wait(ip); - } - if (xfs_iflags_test(ip, XFS_ISTALE) || xfs_inode_clean(ip)) { - xfs_ifunlock(ip); - goto reclaim; + *lsn = ip->i_itemp->ili_item.li_lsn; + goto out_ifunlock; } /* - * Never flush out dirty data during non-blocking reclaim, as it would - * just contend with AIL pushing trying to do the same job. + * Dirty inode we didn't catch, skip it. */ - if (!(sync_mode & SYNC_WAIT)) + if (!xfs_inode_clean(ip) && !xfs_iflags_test(ip, XFS_ISTALE)) { + *lsn = ip->i_itemp->ili_item.li_lsn; goto out_ifunlock; + } /* - * Now we have an inode that needs flushing. - * - * Note that xfs_iflush will never block on the inode buffer lock, as - * xfs_ifree_cluster() can lock the inode buffer before it locks the - * ip->i_lock, and we are doing the exact opposite here. As a result, - * doing a blocking xfs_imap_to_bp() to get the cluster buffer would - * result in an ABBA deadlock with xfs_ifree_cluster(). - * - * As xfs_ifree_cluser() must gather all inodes that are active in the - * cache to mark them stale, if we hit this case we don't actually want - * to do IO here - we want the inode marked stale so we can simply - * reclaim it. Hence if we get an EAGAIN error here, just unlock the - * inode, back off and try again. Hopefully the next pass through will - * see the stale flag set on the inode. + * It's clean, we have it locked, we can now drop the flush lock + * and reclaim it. */ - error = xfs_iflush(ip, &bp); - if (error == -EAGAIN) { - xfs_iunlock(ip, XFS_ILOCK_EXCL); - /* backoff longer than in xfs_ifree_cluster */ - delay(2); - goto restart; - } - - if (!error) { - error = xfs_bwrite(bp); - xfs_buf_relse(bp); - } + xfs_ifunlock(ip); reclaim: ASSERT(!xfs_isiflocked(ip)); + ASSERT(xfs_inode_clean(ip) || xfs_iflags_test(ip, XFS_ISTALE)); + ASSERT(ip->i_ino != 0); /* * Because we use RCU freeing we need to ensure the inode always appears @@ -1148,6 +1132,7 @@ xfs_reclaim_inode( * will see an invalid inode that it can skip. 
*/ spin_lock(&ip->i_flags_lock); + ino = ip->i_ino; /* for radix_tree_delete */ ip->i_flags = XFS_IRECLAIM; ip->i_ino = 0; spin_unlock(&ip->i_flags_lock); @@ -1182,7 +1167,7 @@ xfs_reclaim_inode( xfs_iunlock(ip, XFS_ILOCK_EXCL); __xfs_inode_free(ip); - return error; + return true; out_ifunlock: xfs_ifunlock(ip); @@ -1190,14 +1175,7 @@ xfs_reclaim_inode( xfs_iunlock(ip, XFS_ILOCK_EXCL); out: xfs_iflags_clear(ip, XFS_IRECLAIM); - /* - * We could return -EAGAIN here to make reclaim rescan the inode tree in - * a short while. However, this just burns CPU time scanning the tree - * waiting for IO to complete and the reclaim work never goes back to - * the idle state. Instead, return 0 to let the next scheduled - * background reclaim attempt to reclaim the inode again. - */ - return 0; + return false; } /* @@ -1205,55 +1183,42 @@ xfs_reclaim_inode( * corrupted, we still want to try to reclaim all the inodes. If we don't, * then a shut down during filesystem unmount reclaim walk leak all the * unreclaimed inodes. + * + * Return the number of inodes freed. */ STATIC int xfs_reclaim_inodes_ag( struct xfs_mount *mp, int flags, - int *nr_to_scan) + int nr_to_scan) { struct xfs_perag *pag; - int error = 0; - int last_error = 0; xfs_agnumber_t ag; - int trylock = flags & SYNC_TRYLOCK; - int skipped; + xfs_lsn_t lsn, lowest_lsn = NULLCOMMITLSN; + long freed = 0; -restart: ag = 0; - skipped = 0; while ((pag = xfs_perag_get_tag(mp, ag, XFS_ICI_RECLAIM_TAG))) { unsigned long first_index = 0; int done = 0; int nr_found = 0; ag = pag->pag_agno + 1; - - if (trylock) { - if (!mutex_trylock(&pag->pag_ici_reclaim_lock)) { - skipped++; - xfs_perag_put(pag); - continue; - } - first_index = pag->pag_ici_reclaim_cursor; - } else - mutex_lock(&pag->pag_ici_reclaim_lock); - do { struct xfs_inode *batch[XFS_LOOKUP_BATCH]; int i; + mutex_lock(&pag->pag_ici_reclaim_lock); + first_index = pag->pag_ici_reclaim_cursor; + rcu_read_lock(); nr_found = radix_tree_gang_lookup_tag( &pag->pag_ici_root, (void **)batch, first_index, XFS_LOOKUP_BATCH, XFS_ICI_RECLAIM_TAG); - if (!nr_found) { + if (!nr_found) done = 1; - rcu_read_unlock(); - break; - } /* * Grab the inodes before we drop the lock. if we found @@ -1262,9 +1227,13 @@ xfs_reclaim_inodes_ag( for (i = 0; i < nr_found; i++) { struct xfs_inode *ip = batch[i]; - if (done || xfs_reclaim_inode_grab(ip, flags)) + if (done || + !xfs_reclaim_inode_grab(ip, flags, &lsn)) batch[i] = NULL; + if (lsn && XFS_LSN_CMP(lsn, lowest_lsn) < 0) + lowest_lsn = lsn; + /* * Update the index for the next lookup. Catch * overflows into the next AG range which can @@ -1289,41 +1258,34 @@ xfs_reclaim_inodes_ag( /* unlock now we've grabbed the inodes. 
*/ rcu_read_unlock(); + if (!done) + pag->pag_ici_reclaim_cursor = first_index; + else + pag->pag_ici_reclaim_cursor = 0; + mutex_unlock(&pag->pag_ici_reclaim_lock); for (i = 0; i < nr_found; i++) { if (!batch[i]) continue; - error = xfs_reclaim_inode(batch[i], pag, flags); - if (error && last_error != -EFSCORRUPTED) - last_error = error; + if (xfs_reclaim_inode(batch[i], pag, &lsn)) + freed++; + if (lsn && (lowest_lsn == NULLCOMMITLSN || + XFS_LSN_CMP(lsn, lowest_lsn) < 0)) + lowest_lsn = lsn; } - *nr_to_scan -= XFS_LOOKUP_BATCH; - + nr_to_scan -= XFS_LOOKUP_BATCH; cond_resched(); - } while (nr_found && !done && *nr_to_scan > 0); + } while (nr_found && !done && nr_to_scan > 0); - if (trylock && !done) - pag->pag_ici_reclaim_cursor = first_index; - else - pag->pag_ici_reclaim_cursor = 0; - mutex_unlock(&pag->pag_ici_reclaim_lock); xfs_perag_put(pag); } - /* - * if we skipped any AG, and we still have scan count remaining, do - * another pass this time using blocking reclaim semantics (i.e - * waiting on the reclaim locks and ignoring the reclaim cursors). This - * ensure that when we get more reclaimers than AGs we block rather - * than spin trying to execute reclaim. - */ - if (skipped && (flags & SYNC_WAIT) && *nr_to_scan > 0) { - trylock = 0; - goto restart; - } - return last_error; + if ((flags & SYNC_WAIT) && lowest_lsn != NULLCOMMITLSN) + xfs_ail_push_sync(mp->m_ail, lowest_lsn); + + return freed; } int @@ -1331,9 +1293,7 @@ xfs_reclaim_inodes( xfs_mount_t *mp, int mode) { - int nr_to_scan = INT_MAX; - - return xfs_reclaim_inodes_ag(mp, mode, &nr_to_scan); + return xfs_reclaim_inodes_ag(mp, mode, INT_MAX); } /* @@ -1350,7 +1310,7 @@ xfs_reclaim_inodes_nr( struct xfs_mount *mp, int nr_to_scan) { - int sync_mode = SYNC_TRYLOCK; + int sync_mode = 0; /* * For kswapd, we kick background inode writeback. 
For direct @@ -1362,7 +1322,7 @@ xfs_reclaim_inodes_nr( else sync_mode |= SYNC_WAIT; - return xfs_reclaim_inodes_ag(mp, sync_mode, &nr_to_scan); + return xfs_reclaim_inodes_ag(mp, sync_mode, nr_to_scan); } /* From patchwork Thu Oct 31 23:46:12 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Dave Chinner X-Patchwork-Id: 11222137 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 5EC57912 for ; Thu, 31 Oct 2019 23:47:23 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by mail.kernel.org (Postfix) with ESMTP id 47E2A208E3 for ; Thu, 31 Oct 2019 23:47:23 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1728918AbfJaXqd (ORCPT ); Thu, 31 Oct 2019 19:46:33 -0400 Received: from mail104.syd.optusnet.com.au ([211.29.132.246]:40531 "EHLO mail104.syd.optusnet.com.au" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1728637AbfJaXqc (ORCPT ); Thu, 31 Oct 2019 19:46:32 -0400 Received: from dread.disaster.area (pa49-180-67-183.pa.nsw.optusnet.com.au [49.180.67.183]) by mail104.syd.optusnet.com.au (Postfix) with ESMTPS id B172F7EA8AA; Fri, 1 Nov 2019 10:46:25 +1100 (AEDT) Received: from discord.disaster.area ([192.168.253.110]) by dread.disaster.area with esmtp (Exim 4.92.3) (envelope-from ) id 1iQK8x-0007Cb-O2; Fri, 01 Nov 2019 10:46:19 +1100 Received: from dave by discord.disaster.area with local (Exim 4.92.3) (envelope-from ) id 1iQK8x-00042E-M1; Fri, 01 Nov 2019 10:46:19 +1100 From: Dave Chinner To: linux-xfs@vger.kernel.org Cc: linux-fsdevel@vger.kernel.org, linux-mm@kvack.org, linux-kernel@vger.kernel.org Subject: [PATCH 22/28] xfs: remove mode from xfs_reclaim_inodes() Date: Fri, 1 Nov 2019 10:46:12 +1100 Message-Id: <20191031234618.15403-23-david@fromorbit.com> X-Mailer: git-send-email 2.24.0.rc0 In-Reply-To: <20191031234618.15403-1-david@fromorbit.com> References: <20191031234618.15403-1-david@fromorbit.com> MIME-Version: 1.0 X-Optus-CM-Score: 0 X-Optus-CM-Analysis: v=2.2 cv=D+Q3ErZj c=1 sm=1 tr=0 a=3wLbm4YUAFX2xaPZIabsgw==:117 a=3wLbm4YUAFX2xaPZIabsgw==:17 a=jpOVt7BSZ2e4Z31A5e1TngXxSK0=:19 a=MeAgGD-zjQ4A:10 a=20KFwNOVAAAA:8 a=rBCjN8xBrULXB8iKm2EA:9 Sender: linux-fsdevel-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-fsdevel@vger.kernel.org From: Dave Chinner Because it's always SYNC_WAIT now. Rename it to xfs_reclaim_all_inodes() to make it clear how it is different to the other similarly named reclaim functions. 
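For illustration, the quiesce-time call site collapses from a two-pass reclaim to a single blocking call (condensed from the diff below):

	/* before: a non-blocking pass followed by a blocking pass */
	xfs_reclaim_inodes(mp, 0);
	xfs_reclaim_inodes(mp, SYNC_WAIT);

	/* after: one call that reclaims everything */
	xfs_reclaim_all_inodes(mp);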
Signed-off-by: Dave Chinner Reviewed-by: Brian Foster --- fs/xfs/xfs_icache.c | 9 ++++----- fs/xfs/xfs_icache.h | 2 +- fs/xfs/xfs_mount.c | 4 ++-- fs/xfs/xfs_super.c | 3 +-- 4 files changed, 8 insertions(+), 10 deletions(-) diff --git a/fs/xfs/xfs_icache.c b/fs/xfs/xfs_icache.c index ff8ae32614a6..048f7f1b54ff 100644 --- a/fs/xfs/xfs_icache.c +++ b/fs/xfs/xfs_icache.c @@ -1288,12 +1288,11 @@ xfs_reclaim_inodes_ag( return freed; } -int -xfs_reclaim_inodes( - xfs_mount_t *mp, - int mode) +void +xfs_reclaim_all_inodes( + struct xfs_mount *mp) { - return xfs_reclaim_inodes_ag(mp, mode, INT_MAX); + xfs_reclaim_inodes_ag(mp, SYNC_WAIT, INT_MAX); } /* diff --git a/fs/xfs/xfs_icache.h b/fs/xfs/xfs_icache.h index 4c0d8920cc54..37cd33741bee 100644 --- a/fs/xfs/xfs_icache.h +++ b/fs/xfs/xfs_icache.h @@ -49,7 +49,7 @@ int xfs_iget(struct xfs_mount *mp, struct xfs_trans *tp, xfs_ino_t ino, struct xfs_inode * xfs_inode_alloc(struct xfs_mount *mp, xfs_ino_t ino); void xfs_inode_free(struct xfs_inode *ip); -int xfs_reclaim_inodes(struct xfs_mount *mp, int mode); +void xfs_reclaim_all_inodes(struct xfs_mount *mp); int xfs_reclaim_inodes_count(struct xfs_mount *mp); long xfs_reclaim_inodes_nr(struct xfs_mount *mp, int nr_to_scan); diff --git a/fs/xfs/xfs_mount.c b/fs/xfs/xfs_mount.c index 8f76c2add18b..5f3fd1d8f63f 100644 --- a/fs/xfs/xfs_mount.c +++ b/fs/xfs/xfs_mount.c @@ -952,7 +952,7 @@ xfs_mountfs( * qm_unmount_quotas and therefore rely on qm_unmount to release the * quota inodes. */ - xfs_reclaim_inodes(mp, SYNC_WAIT); + xfs_reclaim_all_inodes(mp); xfs_health_unmount(mp); out_log_dealloc: mp->m_flags |= XFS_MOUNT_UNMOUNTING; @@ -1034,7 +1034,7 @@ xfs_unmountfs( * reclaim just to be sure. We can stop background inode reclaim * here as well if it is still running. 
*/ - xfs_reclaim_inodes(mp, SYNC_WAIT); + xfs_reclaim_all_inodes(mp); xfs_health_unmount(mp); xfs_qm_unmount(mp); diff --git a/fs/xfs/xfs_super.c b/fs/xfs/xfs_super.c index a4fe679207ef..456a398aad82 100644 --- a/fs/xfs/xfs_super.c +++ b/fs/xfs/xfs_super.c @@ -1151,8 +1151,7 @@ xfs_quiesce_attr( xfs_log_force(mp, XFS_LOG_SYNC); /* reclaim inodes to do any IO before the freeze completes */ - xfs_reclaim_inodes(mp, 0); - xfs_reclaim_inodes(mp, SYNC_WAIT); + xfs_reclaim_all_inodes(mp); /* Push the superblock and write an unmount record */ error = xfs_log_sbcount(mp); From patchwork Thu Oct 31 23:46:13 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Dave Chinner X-Patchwork-Id: 11222155 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 59CA614E5 for ; Thu, 31 Oct 2019 23:47:49 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by mail.kernel.org (Postfix) with ESMTP id 37CA02173E for ; Thu, 31 Oct 2019 23:47:49 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1729573AbfJaXrl (ORCPT ); Thu, 31 Oct 2019 19:47:41 -0400 Received: from mail105.syd.optusnet.com.au ([211.29.132.249]:55489 "EHLO mail105.syd.optusnet.com.au" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1728618AbfJaXqc (ORCPT ); Thu, 31 Oct 2019 19:46:32 -0400 Received: from dread.disaster.area (pa49-180-67-183.pa.nsw.optusnet.com.au [49.180.67.183]) by mail105.syd.optusnet.com.au (Postfix) with ESMTPS id D0FFA3A28AD; Fri, 1 Nov 2019 10:46:25 +1100 (AEDT) Received: from discord.disaster.area ([192.168.253.110]) by dread.disaster.area with esmtp (Exim 4.92.3) (envelope-from ) id 1iQK8x-0007Ce-Ol; Fri, 01 Nov 2019 10:46:19 +1100 Received: from dave by discord.disaster.area with local (Exim 4.92.3) (envelope-from ) id 1iQK8x-00042H-Mz; Fri, 01 Nov 2019 10:46:19 +1100 From: Dave Chinner To: linux-xfs@vger.kernel.org Cc: linux-fsdevel@vger.kernel.org, linux-mm@kvack.org, linux-kernel@vger.kernel.org Subject: [PATCH 23/28] xfs: track reclaimable inodes using a LRU list Date: Fri, 1 Nov 2019 10:46:13 +1100 Message-Id: <20191031234618.15403-24-david@fromorbit.com> X-Mailer: git-send-email 2.24.0.rc0 In-Reply-To: <20191031234618.15403-1-david@fromorbit.com> References: <20191031234618.15403-1-david@fromorbit.com> MIME-Version: 1.0 X-Optus-CM-Score: 0 X-Optus-CM-Analysis: v=2.2 cv=G6BsK5s5 c=1 sm=1 tr=0 a=3wLbm4YUAFX2xaPZIabsgw==:117 a=3wLbm4YUAFX2xaPZIabsgw==:17 a=jpOVt7BSZ2e4Z31A5e1TngXxSK0=:19 a=MeAgGD-zjQ4A:10 a=20KFwNOVAAAA:8 a=1_kZ1czCliemkzvEU9sA:9 Sender: linux-fsdevel-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-fsdevel@vger.kernel.org From: Dave Chinner Now that we don't do IO from the inode reclaim code, there is no need to optimise inode scanning order for IO characteristics. The AIL takes care of that for us, so now reclaim can focus on selecting the best inodes to reclaim. Hence we can change the inode reclaim algorithm to a real LRU and remove the need to use the radix tree to track and walk inodes under reclaim. This frees up a radix tree bit and simplifies the code that marks inodes as reclaim candidates. It also simplifies the reclaim code - we don't need batching anymore and all the reclaim logic can be added to the LRU isolation callback.
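In sketch form, the parallel tracking added by this patch is just (condensed from the diff below):

	/* inode becomes reclaimable: tag it and add it to the mount LRU */
	__xfs_iflags_set(ip, XFS_IRECLAIMABLE);
	list_lru_add(&mp->m_inode_lru, &VFS_I(ip)->i_lru);

	/* cache hit recycles the inode: take it straight back off the LRU */
	list_lru_del(&mp->m_inode_lru, &inode->i_lru);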
Further, we get node aware reclaim at the xfs_inode level, which should help the per-node reclaim code free relevant inodes faster. We can re-use the VFS inode lru pointers - once the inode has been reclaimed from the VFS, we can use these pointers ourselves. Hence we don't need to grow the inode to change the way we index reclaimable inodes. Start by adding the list_lru tracking in parallel with the existing reclaim code. This makes it easier to see the LRU infrastructure separate to the reclaim algorithm changes. Especially the locking order, which is ip->i_flags_lock -> list_lru lock. Signed-off-by: Dave Chinner Reviewed-by: Brian Foster --- fs/xfs/xfs_icache.c | 32 ++++++++------------------------ fs/xfs/xfs_icache.h | 1 - fs/xfs/xfs_mount.h | 1 + fs/xfs/xfs_super.c | 30 ++++++++++++++++++++++-------- 4 files changed, 31 insertions(+), 33 deletions(-) diff --git a/fs/xfs/xfs_icache.c b/fs/xfs/xfs_icache.c index 048f7f1b54ff..350f42e7730b 100644 --- a/fs/xfs/xfs_icache.c +++ b/fs/xfs/xfs_icache.c @@ -198,6 +198,8 @@ xfs_inode_set_reclaim_tag( xfs_perag_set_reclaim_tag(pag); __xfs_iflags_set(ip, XFS_IRECLAIMABLE); + list_lru_add(&mp->m_inode_lru, &VFS_I(ip)->i_lru); + spin_unlock(&ip->i_flags_lock); spin_unlock(&pag->pag_ici_lock); xfs_perag_put(pag); @@ -370,12 +372,10 @@ xfs_iget_cache_hit( /* * We need to set XFS_IRECLAIM to prevent xfs_reclaim_inode - * from stomping over us while we recycle the inode. We can't - * clear the radix tree reclaimable tag yet as it requires - * pag_ici_lock to be held exclusive. + * from stomping over us while we recycle the inode. Remove it + * from the LRU straight away so we can re-init the VFS inode. */ ip->i_flags |= XFS_IRECLAIM; - spin_unlock(&ip->i_flags_lock); rcu_read_unlock(); @@ -407,6 +407,7 @@ xfs_iget_cache_hit( */ ip->i_flags &= ~XFS_IRECLAIM_RESET_FLAGS; ip->i_flags |= XFS_INEW; + list_lru_del(&mp->m_inode_lru, &inode->i_lru); xfs_inode_clear_reclaim_tag(pag, ip->i_ino); inode->i_state = I_NEW; ip->i_sick = 0; @@ -1135,6 +1136,9 @@ xfs_reclaim_inode( ino = ip->i_ino; /* for radix_tree_delete */ ip->i_flags = XFS_IRECLAIM; ip->i_ino = 0; + + /* XXX: temporary until lru based reclaim */ + list_lru_del(&pag->pag_mount->m_inode_lru, &VFS_I(ip)->i_lru); spin_unlock(&ip->i_flags_lock); xfs_iunlock(ip, XFS_ILOCK_EXCL); @@ -1324,26 +1328,6 @@ xfs_reclaim_inodes_nr( return xfs_reclaim_inodes_ag(mp, sync_mode, nr_to_scan); } -/* - * Return the number of reclaimable inodes in the filesystem for - * the shrinker to determine how much to reclaim. 
- */ -int -xfs_reclaim_inodes_count( - struct xfs_mount *mp) -{ - struct xfs_perag *pag; - xfs_agnumber_t ag = 0; - int reclaimable = 0; - - while ((pag = xfs_perag_get_tag(mp, ag, XFS_ICI_RECLAIM_TAG))) { - ag = pag->pag_agno + 1; - reclaimable += pag->pag_ici_reclaimable; - xfs_perag_put(pag); - } - return reclaimable; -} - STATIC int xfs_inode_match_id( struct xfs_inode *ip, diff --git a/fs/xfs/xfs_icache.h b/fs/xfs/xfs_icache.h index 37cd33741bee..afd692b06c13 100644 --- a/fs/xfs/xfs_icache.h +++ b/fs/xfs/xfs_icache.h @@ -50,7 +50,6 @@ struct xfs_inode * xfs_inode_alloc(struct xfs_mount *mp, xfs_ino_t ino); void xfs_inode_free(struct xfs_inode *ip); void xfs_reclaim_all_inodes(struct xfs_mount *mp); -int xfs_reclaim_inodes_count(struct xfs_mount *mp); long xfs_reclaim_inodes_nr(struct xfs_mount *mp, int nr_to_scan); void xfs_inode_set_reclaim_tag(struct xfs_inode *ip); diff --git a/fs/xfs/xfs_mount.h b/fs/xfs/xfs_mount.h index 8c6885d3b085..4f153ee17e18 100644 --- a/fs/xfs/xfs_mount.h +++ b/fs/xfs/xfs_mount.h @@ -75,6 +75,7 @@ typedef struct xfs_mount { uint8_t m_rt_sick; struct xfs_ail *m_ail; /* fs active log item list */ + struct list_lru m_inode_lru; struct xfs_sb m_sb; /* copy of fs superblock */ spinlock_t m_sb_lock; /* sb counter lock */ diff --git a/fs/xfs/xfs_super.c b/fs/xfs/xfs_super.c index 456a398aad82..98ffbe42f8ae 100644 --- a/fs/xfs/xfs_super.c +++ b/fs/xfs/xfs_super.c @@ -891,28 +891,31 @@ xfs_fs_destroy_inode( struct inode *inode) { struct xfs_inode *ip = XFS_I(inode); + struct xfs_mount *mp = ip->i_mount; trace_xfs_destroy_inode(ip); ASSERT(!rwsem_is_locked(&inode->i_rwsem)); - XFS_STATS_INC(ip->i_mount, vn_rele); - XFS_STATS_INC(ip->i_mount, vn_remove); + XFS_STATS_INC(mp, vn_rele); + XFS_STATS_INC(mp, vn_remove); xfs_inactive(ip); - if (!XFS_FORCED_SHUTDOWN(ip->i_mount) && ip->i_delayed_blks) { + if (!XFS_FORCED_SHUTDOWN(mp) && ip->i_delayed_blks) { xfs_check_delalloc(ip, XFS_DATA_FORK); xfs_check_delalloc(ip, XFS_COW_FORK); ASSERT(0); } - XFS_STATS_INC(ip->i_mount, vn_reclaim); + XFS_STATS_INC(mp, vn_reclaim); /* * We should never get here with one of the reclaim flags already set. */ - ASSERT_ALWAYS(!xfs_iflags_test(ip, XFS_IRECLAIMABLE)); - ASSERT_ALWAYS(!xfs_iflags_test(ip, XFS_IRECLAIM)); + spin_lock(&ip->i_flags_lock); + ASSERT_ALWAYS(!__xfs_iflags_test(ip, XFS_IRECLAIMABLE)); + ASSERT_ALWAYS(!__xfs_iflags_test(ip, XFS_IRECLAIM)); + spin_unlock(&ip->i_flags_lock); /* * We always use background reclaim here because even if the @@ -1544,9 +1547,16 @@ xfs_fs_fill_super( goto out; sb->s_fs_info = mp; + /* + * The inode lru needs to be associated with the superblock shrinker, + * and like the rest of the superblock shrinker, it's memcg aware. 
+ */ + if (list_lru_init_memcg(&mp->m_inode_lru, &sb->s_shrink)) + goto out_free_fsname; + error = xfs_parseargs(mp, (char *)data); if (error) - goto out_free_fsname; + goto out_free_lru; sb_min_blocksize(sb, BBSIZE); sb->s_xattr = xfs_xattr_handlers; @@ -1710,6 +1720,8 @@ xfs_fs_fill_super( xfs_destroy_mount_workqueues(mp); out_close_devices: xfs_close_devices(mp); + out_free_lru: + list_lru_destroy(&mp->m_inode_lru); out_free_fsname: sb->s_fs_info = NULL; xfs_free_fsname(mp); @@ -1743,6 +1755,7 @@ xfs_fs_put_super( xfs_destroy_mount_workqueues(mp); xfs_close_devices(mp); + list_lru_destroy(&mp->m_inode_lru); sb->s_fs_info = NULL; xfs_free_fsname(mp); kfree(mp); @@ -1766,7 +1779,8 @@ xfs_fs_nr_cached_objects( /* Paranoia: catch incorrect calls during mount setup or teardown */ if (WARN_ON_ONCE(!sb->s_fs_info)) return 0; - return xfs_reclaim_inodes_count(XFS_M(sb)); + + return list_lru_shrink_count(&XFS_M(sb)->m_inode_lru, sc); } static long From patchwork Thu Oct 31 23:46:14 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Dave Chinner X-Patchwork-Id: 11222109 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 626D4912 for ; Thu, 31 Oct 2019 23:47:06 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by mail.kernel.org (Postfix) with ESMTP id 3617C217D9 for ; Thu, 31 Oct 2019 23:47:06 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1729091AbfJaXqt (ORCPT ); Thu, 31 Oct 2019 19:46:49 -0400 Received: from mail105.syd.optusnet.com.au ([211.29.132.249]:56547 "EHLO mail105.syd.optusnet.com.au" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1728729AbfJaXqf (ORCPT ); Thu, 31 Oct 2019 19:46:35 -0400 Received: from dread.disaster.area (pa49-180-67-183.pa.nsw.optusnet.com.au [49.180.67.183]) by mail105.syd.optusnet.com.au (Postfix) with ESMTPS id D03C03A23B1; Fri, 1 Nov 2019 10:46:25 +1100 (AEDT) Received: from discord.disaster.area ([192.168.253.110]) by dread.disaster.area with esmtp (Exim 4.92.3) (envelope-from ) id 1iQK8x-0007Cg-Ps; Fri, 01 Nov 2019 10:46:19 +1100 Received: from dave by discord.disaster.area with local (Exim 4.92.3) (envelope-from ) id 1iQK8x-00042K-Nv; Fri, 01 Nov 2019 10:46:19 +1100 From: Dave Chinner To: linux-xfs@vger.kernel.org Cc: linux-fsdevel@vger.kernel.org, linux-mm@kvack.org, linux-kernel@vger.kernel.org Subject: [PATCH 24/28] xfs: reclaim inodes from the LRU Date: Fri, 1 Nov 2019 10:46:14 +1100 Message-Id: <20191031234618.15403-25-david@fromorbit.com> X-Mailer: git-send-email 2.24.0.rc0 In-Reply-To: <20191031234618.15403-1-david@fromorbit.com> References: <20191031234618.15403-1-david@fromorbit.com> MIME-Version: 1.0 X-Optus-CM-Score: 0 X-Optus-CM-Analysis: v=2.2 cv=D+Q3ErZj c=1 sm=1 tr=0 a=3wLbm4YUAFX2xaPZIabsgw==:117 a=3wLbm4YUAFX2xaPZIabsgw==:17 a=jpOVt7BSZ2e4Z31A5e1TngXxSK0=:19 a=MeAgGD-zjQ4A:10 a=20KFwNOVAAAA:8 a=CWdGKF79LG_jA4S27w8A:9 a=3olIEYc3oiszChvJ:21 a=oXdPoYE73i3MdTZ7:21 Sender: linux-fsdevel-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-fsdevel@vger.kernel.org From: Dave Chinner Replace the AG radix tree walking reclaim code with a list_lru walker, giving us both node-aware and memcg-aware inode reclaim at the XFS level. 
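The overall shape of the new walker, condensed from the shrinker hook added below:

	struct xfs_ireclaim_args ra;
	long freed;

	xfs_ireclaim_args_init(&ra);
	freed = list_lru_shrink_walk(&mp->m_inode_lru, sc,
				     xfs_inode_reclaim_isolate, &ra);
	xfs_dispose_inodes(&ra.freeable);

	/* dirty inodes were skipped; push the AIL to the oldest one */
	if (ra.lowest_lsn != NULLCOMMITLSN)
		xfs_ail_push(mp->m_ail, ra.lowest_lsn);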
This requires adding an inode isolation function to determine if the inode can be reclaimed, and a list walker to dispose of the inodes that were isolated. We want the isolation function to be non-blocking. If we can't grab an inode then we either skip it or rotate it. If it's clean then we skip it, if it's dirty then we rotate to give it time to be cleaned before it is scanned again. This congregates the dirty inodes at the tail of the LRU, which means that if we start hitting a majority of dirty inodes either there are lots of unlinked inodes in the reclaim list or we've reclaimed all the clean inodes and we're looped back on the dirty inodes. Either way, this is an indication we should tell kswapd to back off. The non-blocking isolation function introduces a complexity for the filesystem shutdown case. When the filesystem is shut down, we want to free the inode even if it is dirty, and this may require blocking. We already hold the locks needed to do this blocking, so what we do is leave inodes locked - both the ILOCK and the flush lock - while they are sitting on the dispose list to be freed after the LRU walk completes. This allows us to process the shutdown state outside the LRU walk where we can block safely. Because we are now reclaiming inodes from the context that needs the memory (memcg and/or node), direct reclaim throttling within the high level reclaim code is now much more effective. Hence we don't wait on IO for either kswapd or direct reclaim. However, we have to tell kswapd to back off if we start hitting too many dirty inodes. This implies we've wrapped around the LRU and don't have many clean inodes left to reclaim, so it needs to wait a while for the AIL pushing to clean some of the remaining reclaimable inodes. Keep in mind we don't have to care about inode lock order or blocking with inode locks held here because a) we are using trylocks, and b) once marked with XFS_IRECLAIM they can't be found via the LRU and inode cache lookups will abort and retry. Hence nobody will try to lock them in any other context that might also be holding other inode locks. Also convert xfs_reclaim_all_inodes() to use a LRU walk to free all the reclaimable inodes in the filesystem. Signed-off-by: Dave Chinner --- fs/xfs/xfs_icache.c | 404 +++++++++++++------------------------- fs/xfs/xfs_icache.h | 18 +- fs/xfs/xfs_inode.h | 18 ++ fs/xfs/xfs_super.c | 46 ++++- 4 files changed, 190 insertions(+), 296 deletions(-) diff --git a/fs/xfs/xfs_icache.c b/fs/xfs/xfs_icache.c index 350f42e7730b..05dd292bfdb6 100644 --- a/fs/xfs/xfs_icache.c +++ b/fs/xfs/xfs_icache.c @@ -968,160 +968,110 @@ xfs_inode_ag_iterator_tag( return last_error; } -/* - * Grab the inode for reclaim. - * - * Return false if we aren't going to reclaim it, true if it is a reclaim - * candidate. - * - * If the inode is clean or unreclaimable, return a zero lsn to tell the caller - * it does not require flushing. Otherwise return the log item lsn of the inode - * so the caller can determine its inode flush target. If we get the clean/dirty - * state wrong then it will be sorted in xfs_reclaim_inode() once we have locks - * held.
- */ -STATIC bool -xfs_reclaim_inode_grab( - struct xfs_inode *ip, - int flags, - xfs_lsn_t *lsn) +enum lru_status +xfs_inode_reclaim_isolate( + struct list_head *item, + struct list_lru_one *lru, + spinlock_t *lru_lock, + void *arg) { - ASSERT(rcu_read_lock_held()); - *lsn = 0; + struct xfs_ireclaim_args *ra = arg; + struct inode *inode = container_of(item, struct inode, + i_lru); + struct xfs_inode *ip = XFS_I(inode); + enum lru_status ret; + xfs_lsn_t lsn = 0; + + /* Careful: inversion of iflags_lock and everything else here */ + if (!spin_trylock(&ip->i_flags_lock)) + return LRU_SKIP; + + /* if we are in shutdown, we'll reclaim it even if dirty */ + ret = LRU_ROTATE; + if (!xfs_inode_clean(ip) && !__xfs_iflags_test(ip, XFS_ISTALE) && + !XFS_FORCED_SHUTDOWN(ip->i_mount)) { + lsn = ip->i_itemp->ili_item.li_lsn; + ra->dirty_skipped++; + goto out_unlock_flags; + } - /* quick check for stale RCU freed inode */ - if (!ip->i_ino) - return false; + ret = LRU_SKIP; + if (!xfs_ilock_nowait(ip, XFS_ILOCK_EXCL)) + goto out_unlock_flags; - /* - * Do unlocked checks to see if the inode already is being flushed or in - * reclaim to avoid lock traffic. If the inode is not clean, return the - * position in the AIL for the caller to push to. - */ - if (!xfs_inode_clean(ip)) { - *lsn = ip->i_itemp->ili_item.li_lsn; - return false; + if (!__xfs_iflock_nowait(ip)) { + lsn = ip->i_itemp->ili_item.li_lsn; + ra->dirty_skipped++; + goto out_unlock_inode; } - if (__xfs_iflags_test(ip, XFS_IFLOCK | XFS_IRECLAIM)) - return false; + if (XFS_FORCED_SHUTDOWN(ip->i_mount)) + goto reclaim; /* - * The radix tree lock here protects a thread in xfs_iget from racing - * with us starting reclaim on the inode. Once we have the - * XFS_IRECLAIM flag set it will not touch us. - * - * Due to RCU lookup, we may find inodes that have been freed and only - * have XFS_IRECLAIM set. Indeed, we may see reallocated inodes that - * aren't candidates for reclaim at all, so we must check the - * XFS_IRECLAIMABLE is set first before proceeding to reclaim. + * Now the inode is locked, we can actually determine if it is dirty + * without racing with anything. */ - spin_lock(&ip->i_flags_lock); - if (!__xfs_iflags_test(ip, XFS_IRECLAIMABLE) || - __xfs_iflags_test(ip, XFS_IRECLAIM)) { - /* not a reclaim candidate. */ - spin_unlock(&ip->i_flags_lock); - return false; + ret = LRU_ROTATE; + if (xfs_ipincount(ip)) { + ra->dirty_skipped++; + goto out_ifunlock; + } + if (!xfs_inode_clean(ip) && !__xfs_iflags_test(ip, XFS_ISTALE)) { + lsn = ip->i_itemp->ili_item.li_lsn; + ra->dirty_skipped++; + goto out_ifunlock; } + +reclaim: + /* + * Once we mark the inode with XFS_IRECLAIM, no-one will grab it again. + * RCU lookups will still find the inode, but they'll stop when they set + * the IRECLAIM flag. Hence we can leave the inode locked as we move it + * to the dispose list so we can deal with shutdown cleanup there + * outside the LRU lock context. + */ __xfs_iflags_set(ip, XFS_IRECLAIM); + list_lru_isolate_move(lru, &inode->i_lru, &ra->freeable); spin_unlock(&ip->i_flags_lock); - return true; + return LRU_REMOVED; + +out_ifunlock: + __xfs_ifunlock(ip); +out_unlock_inode: + xfs_iunlock(ip, XFS_ILOCK_EXCL); +out_unlock_flags: + spin_unlock(&ip->i_flags_lock); + + if (lsn && XFS_LSN_CMP(lsn, ra->lowest_lsn) < 0) + ra->lowest_lsn = lsn; + return ret; } -/* - * Inodes in different states need to be treated differently. 
The following - * table lists the inode states and the reclaim actions necessary: - * - * inode state iflush ret required action - * --------------- ---------- --------------- - * bad - reclaim - * shutdown EIO unpin and reclaim - * clean, unpinned 0 reclaim - * stale, unpinned 0 reclaim - * clean, pinned(*) 0 requeue - * stale, pinned EAGAIN requeue - * dirty, async - requeue - * dirty, sync 0 reclaim - * - * (*) dgc: I don't think the clean, pinned state is possible but it gets - * handled anyway given the order of checks implemented. - * - * Also, because we get the flush lock first, we know that any inode that has - * been flushed delwri has had the flush completed by the time we check that - * the inode is clean. - * - * Note that because the inode is flushed delayed write by AIL pushing, the - * flush lock may already be held here and waiting on it can result in very - * long latencies. Hence for sync reclaims, where we wait on the flush lock, - * the caller should push the AIL first before trying to reclaim inodes to - * minimise the amount of time spent waiting. For background relaim, we only - * bother to reclaim clean inodes anyway. - * - * Hence the order of actions after gaining the locks should be: - * bad => reclaim - * shutdown => unpin and reclaim - * pinned, async => requeue - * pinned, sync => unpin - * stale => reclaim - * clean => reclaim - * dirty, async => requeue - * dirty, sync => flush, wait and reclaim - * - * Returns true if the inode was reclaimed, false otherwise. - */ -STATIC bool -xfs_reclaim_inode( - struct xfs_inode *ip, - struct xfs_perag *pag, - xfs_lsn_t *lsn) +static void +xfs_dispose_inode( + struct xfs_inode *ip) { + struct xfs_mount *mp = ip->i_mount; + struct xfs_perag *pag; xfs_ino_t ino; - *lsn = 0; + ASSERT(xfs_isiflocked(ip)); + ASSERT(xfs_inode_clean(ip) || xfs_iflags_test(ip, XFS_ISTALE) || + XFS_FORCED_SHUTDOWN(mp)); + ASSERT(ip->i_ino != 0); /* - * Don't try to flush the inode if another inode in this cluster has - * already flushed it after we did the initial checks in - * xfs_reclaim_inode_grab(). + * Process the shutdown reclaim work we deferred from the LRU isolation + * callback before we go any further. */ - if (!xfs_ilock_nowait(ip, XFS_ILOCK_EXCL)) - goto out; - if (!xfs_iflock_nowait(ip)) - goto out_unlock; - - /* If we are in shutdown, we don't care about blocking. */ - if (XFS_FORCED_SHUTDOWN(ip->i_mount)) { + if (XFS_FORCED_SHUTDOWN(mp)) { xfs_iunpin_wait(ip); - /* xfs_iflush_abort() drops the flush lock */ xfs_iflush_abort(ip, false); - goto reclaim; - } - - /* Can't do anything to pinned inodes without blocking, skip it. */ - if (xfs_ipincount(ip)) { - *lsn = ip->i_itemp->ili_item.li_lsn; - goto out_ifunlock; - } - - /* - * Dirty inode we didn't catch, skip it. - */ - if (!xfs_inode_clean(ip) && !xfs_iflags_test(ip, XFS_ISTALE)) { - *lsn = ip->i_itemp->ili_item.li_lsn; - goto out_ifunlock; + } else { + xfs_ifunlock(ip); } - /* - * It's clean, we have it locked, we can now drop the flush lock - * and reclaim it. - */ - xfs_ifunlock(ip); - -reclaim: - ASSERT(!xfs_isiflocked(ip)); - ASSERT(xfs_inode_clean(ip) || xfs_iflags_test(ip, XFS_ISTALE)); - ASSERT(ip->i_ino != 0); - /* * Because we use RCU freeing we need to ensure the inode always appears * to be reclaimed with an invalid inode number when in the free state. 
@@ -1136,27 +1086,20 @@ xfs_reclaim_inode( ino = ip->i_ino; /* for radix_tree_delete */ ip->i_flags = XFS_IRECLAIM; ip->i_ino = 0; - - /* XXX: temporary until lru based reclaim */ - list_lru_del(&pag->pag_mount->m_inode_lru, &VFS_I(ip)->i_lru); spin_unlock(&ip->i_flags_lock); - xfs_iunlock(ip, XFS_ILOCK_EXCL); - XFS_STATS_INC(ip->i_mount, xs_ig_reclaims); + XFS_STATS_INC(mp, xs_ig_reclaims); + /* * Remove the inode from the per-AG radix tree. - * - * Because radix_tree_delete won't complain even if the item was never - * added to the tree assert that it's been there before to catch - * problems with the inode life time early on. */ + pag = xfs_perag_get(mp, XFS_INO_TO_AGNO(mp, ino)); spin_lock(&pag->pag_ici_lock); - if (!radix_tree_delete(&pag->pag_ici_root, - XFS_INO_TO_AGINO(ip->i_mount, ino))) + if (!radix_tree_delete(&pag->pag_ici_root, XFS_INO_TO_AGINO(mp, ino))) ASSERT(0); - xfs_perag_clear_reclaim_tag(pag); spin_unlock(&pag->pag_ici_lock); + xfs_perag_put(pag); /* * Here we do an (almost) spurious inode lock in order to coordinate @@ -1165,167 +1108,52 @@ xfs_reclaim_inode( * * We make that OK here by ensuring that we wait until the inode is * unlocked after the lookup before we go ahead and free it. + * + * XXX: need to check this is still true. Not sure it is. */ xfs_ilock(ip, XFS_ILOCK_EXCL); xfs_qm_dqdetach(ip); xfs_iunlock(ip, XFS_ILOCK_EXCL); __xfs_inode_free(ip); - return true; - -out_ifunlock: - xfs_ifunlock(ip); -out_unlock: - xfs_iunlock(ip, XFS_ILOCK_EXCL); -out: - xfs_iflags_clear(ip, XFS_IRECLAIM); - return false; } -/* - * Walk the AGs and reclaim the inodes in them. Even if the filesystem is - * corrupted, we still want to try to reclaim all the inodes. If we don't, - * then a shut down during filesystem unmount reclaim walk leak all the - * unreclaimed inodes. - * - * Return the number of inodes freed. - */ -STATIC int -xfs_reclaim_inodes_ag( - struct xfs_mount *mp, - int flags, - int nr_to_scan) +void +xfs_dispose_inodes( + struct list_head *freeable) { - struct xfs_perag *pag; - xfs_agnumber_t ag; - xfs_lsn_t lsn, lowest_lsn = NULLCOMMITLSN; - long freed = 0; - - ag = 0; - while ((pag = xfs_perag_get_tag(mp, ag, XFS_ICI_RECLAIM_TAG))) { - unsigned long first_index = 0; - int done = 0; - int nr_found = 0; - - ag = pag->pag_agno + 1; - do { - struct xfs_inode *batch[XFS_LOOKUP_BATCH]; - int i; - - mutex_lock(&pag->pag_ici_reclaim_lock); - first_index = pag->pag_ici_reclaim_cursor; + struct inode *inode; - rcu_read_lock(); - nr_found = radix_tree_gang_lookup_tag( - &pag->pag_ici_root, - (void **)batch, first_index, - XFS_LOOKUP_BATCH, - XFS_ICI_RECLAIM_TAG); - if (!nr_found) - done = 1; - - /* - * Grab the inodes before we drop the lock. if we found - * nothing, nr == 0 and the loop will be skipped. - */ - for (i = 0; i < nr_found; i++) { - struct xfs_inode *ip = batch[i]; - - if (done || - !xfs_reclaim_inode_grab(ip, flags, &lsn)) - batch[i] = NULL; - - if (lsn && XFS_LSN_CMP(lsn, lowest_lsn) < 0) - lowest_lsn = lsn; - - /* - * Update the index for the next lookup. Catch - * overflows into the next AG range which can - * occur if we have inodes in the last block of - * the AG and we are currently pointing to the - * last inode. - * - * Because we may see inodes that are from the - * wrong AG due to RCU freeing and - * reallocation, only update the index if it - * lies in this AG. It was a race that lead us - * to see this inode, so another lookup from - * the same index will not find it again. 
- */ - if (XFS_INO_TO_AGNO(mp, ip->i_ino) != - pag->pag_agno) - continue; - first_index = XFS_INO_TO_AGINO(mp, ip->i_ino + 1); - if (first_index < XFS_INO_TO_AGINO(mp, ip->i_ino)) - done = 1; - } - - /* unlock now we've grabbed the inodes. */ - rcu_read_unlock(); - if (!done) - pag->pag_ici_reclaim_cursor = first_index; - else - pag->pag_ici_reclaim_cursor = 0; - mutex_unlock(&pag->pag_ici_reclaim_lock); - - for (i = 0; i < nr_found; i++) { - if (!batch[i]) - continue; - if (xfs_reclaim_inode(batch[i], pag, &lsn)) - freed++; - if (lsn && (lowest_lsn == NULLCOMMITLSN || - XFS_LSN_CMP(lsn, lowest_lsn) < 0)) - lowest_lsn = lsn; - } - - nr_to_scan -= XFS_LOOKUP_BATCH; - cond_resched(); - - } while (nr_found && !done && nr_to_scan > 0); - - xfs_perag_put(pag); + while ((inode = list_first_entry_or_null(freeable, + struct inode, i_lru))) { + list_del_init(&inode->i_lru); + xfs_dispose_inode(XFS_I(inode)); + cond_resched(); } - - if ((flags & SYNC_WAIT) && lowest_lsn != NULLCOMMITLSN) - xfs_ail_push_sync(mp->m_ail, lowest_lsn); - - return freed; } - void xfs_reclaim_all_inodes( struct xfs_mount *mp) { - xfs_reclaim_inodes_ag(mp, SYNC_WAIT, INT_MAX); -} - -/* - * Scan a certain number of inodes for reclaim. - * - * When called we make sure that there is a background (fast) inode reclaim in - * progress, while we will throttle the speed of reclaim via doing synchronous - * reclaim of inodes. That means if we come across dirty inodes, we wait for - * them to be cleaned, which we hope will not be very long due to the - * background walker having already kicked the IO off on those dirty inodes. - */ -long -xfs_reclaim_inodes_nr( - struct xfs_mount *mp, - int nr_to_scan) -{ - int sync_mode = 0; - - /* - * For kswapd, we kick background inode writeback. For direct - * reclaim, we issue and wait on inode writeback to throttle - * reclaim rates and avoid shouty OOM-death. 
- */ - if (current_is_kswapd()) - xfs_ail_push_all(mp->m_ail); - else - sync_mode |= SYNC_WAIT; - - return xfs_reclaim_inodes_ag(mp, sync_mode, nr_to_scan); + while (list_lru_count(&mp->m_inode_lru)) { + struct xfs_ireclaim_args ra; + long freed, to_free; + + xfs_ireclaim_args_init(&ra); + + to_free = list_lru_count(&mp->m_inode_lru); + freed = list_lru_walk(&mp->m_inode_lru, + xfs_inode_reclaim_isolate, &ra, to_free); + xfs_dispose_inodes(&ra.freeable); + + if (freed == 0) { + xfs_log_force(mp, XFS_LOG_SYNC); + xfs_ail_push_all(mp->m_ail); + } else if (ra.lowest_lsn != NULLCOMMITLSN) { + xfs_ail_push_sync(mp->m_ail, ra.lowest_lsn); + } + cond_resched(); + } } STATIC int diff --git a/fs/xfs/xfs_icache.h b/fs/xfs/xfs_icache.h index afd692b06c13..86e858e4a281 100644 --- a/fs/xfs/xfs_icache.h +++ b/fs/xfs/xfs_icache.h @@ -49,8 +49,24 @@ int xfs_iget(struct xfs_mount *mp, struct xfs_trans *tp, xfs_ino_t ino, struct xfs_inode * xfs_inode_alloc(struct xfs_mount *mp, xfs_ino_t ino); void xfs_inode_free(struct xfs_inode *ip); +struct xfs_ireclaim_args { + struct list_head freeable; + xfs_lsn_t lowest_lsn; + unsigned long dirty_skipped; +}; + +static inline void +xfs_ireclaim_args_init(struct xfs_ireclaim_args *ra) +{ + INIT_LIST_HEAD(&ra->freeable); + ra->lowest_lsn = NULLCOMMITLSN; + ra->dirty_skipped = 0; +} + +enum lru_status xfs_inode_reclaim_isolate(struct list_head *item, + struct list_lru_one *lru, spinlock_t *lru_lock, void *arg); +void xfs_dispose_inodes(struct list_head *freeable); void xfs_reclaim_all_inodes(struct xfs_mount *mp); -long xfs_reclaim_inodes_nr(struct xfs_mount *mp, int nr_to_scan); void xfs_inode_set_reclaim_tag(struct xfs_inode *ip); diff --git a/fs/xfs/xfs_inode.h b/fs/xfs/xfs_inode.h index bcfb35a9c5ca..00145debf820 100644 --- a/fs/xfs/xfs_inode.h +++ b/fs/xfs/xfs_inode.h @@ -270,6 +270,15 @@ static inline int xfs_isiflocked(struct xfs_inode *ip) extern void __xfs_iflock(struct xfs_inode *ip); +static inline int __xfs_iflock_nowait(struct xfs_inode *ip) +{ + lockdep_assert_held(&ip->i_flags_lock); + if (ip->i_flags & XFS_IFLOCK) + return false; + ip->i_flags |= XFS_IFLOCK; + return true; +} + static inline int xfs_iflock_nowait(struct xfs_inode *ip) { return !xfs_iflags_test_and_set(ip, XFS_IFLOCK); @@ -281,6 +290,15 @@ static inline void xfs_iflock(struct xfs_inode *ip) __xfs_iflock(ip); } +static inline void __xfs_ifunlock(struct xfs_inode *ip) +{ + lockdep_assert_held(&ip->i_flags_lock); + ASSERT(ip->i_flags & XFS_IFLOCK); + ip->i_flags &= ~XFS_IFLOCK; + smp_mb(); + wake_up_bit(&ip->i_flags, __XFS_IFLOCK_BIT); +} + static inline void xfs_ifunlock(struct xfs_inode *ip) { ASSERT(xfs_isiflocked(ip)); diff --git a/fs/xfs/xfs_super.c b/fs/xfs/xfs_super.c index 98ffbe42f8ae..096ae31b5436 100644 --- a/fs/xfs/xfs_super.c +++ b/fs/xfs/xfs_super.c @@ -17,6 +17,7 @@ #include "xfs_alloc.h" #include "xfs_fsops.h" #include "xfs_trans.h" +#include "xfs_trans_priv.h" #include "xfs_buf_item.h" #include "xfs_log.h" #include "xfs_log_priv.h" @@ -1772,23 +1773,54 @@ xfs_fs_mount( } static long -xfs_fs_nr_cached_objects( +xfs_fs_free_cached_objects( struct super_block *sb, struct shrink_control *sc) { - /* Paranoia: catch incorrect calls during mount setup or teardown */ - if (WARN_ON_ONCE(!sb->s_fs_info)) - return 0; + struct xfs_mount *mp = XFS_M(sb); + struct xfs_ireclaim_args ra; + long freed; - return list_lru_shrink_count(&XFS_M(sb)->m_inode_lru, sc); + xfs_ireclaim_args_init(&ra); + + freed = list_lru_shrink_walk(&mp->m_inode_lru, sc, + xfs_inode_reclaim_isolate, &ra); + 
xfs_dispose_inodes(&ra.freeable); + + /* + * Deal with dirty inodes. We will have the LSN of + * the oldest dirty inode in our reclaim args if we skipped any. + * + * For kswapd, if we skipped too many dirty inodes (i.e. more dirty than + * we freed) then we need kswapd to back off once its scan has been + * completed. That way it will have some clean inodes once it comes back + * and can make progress, but make sure we have inode cleaning in + * progress. + * + * Direct reclaim will be throttled by the caller as it winds the + * priority up. All we need to do is keep pushing on dirty inodes + * in the background so when we come back progress will be made. + */ + if (current_is_kswapd() && ra.dirty_skipped >= freed) { + if (current->reclaim_state) + current->reclaim_state->need_backoff = true; + } + if (ra.lowest_lsn != NULLCOMMITLSN) + xfs_ail_push(mp->m_ail, ra.lowest_lsn); + + return freed; } static long -xfs_fs_free_cached_objects( +xfs_fs_nr_cached_objects( struct super_block *sb, struct shrink_control *sc) { - return xfs_reclaim_inodes_nr(XFS_M(sb), sc->nr_to_scan); + /* Paranoia: catch incorrect calls during mount setup or teardown */ + if (WARN_ON_ONCE(!sb->s_fs_info)) + return 0; + + return list_lru_shrink_count(&XFS_M(sb)->m_inode_lru, sc); } static const struct super_operations xfs_super_operations = { From patchwork Thu Oct 31 23:46:15 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Dave Chinner X-Patchwork-Id: 11222149 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id F0EDE14E5 for ; Thu, 31 Oct 2019 23:47:36 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by mail.kernel.org (Postfix) with ESMTP id CDC9321734 for ; Thu, 31 Oct 2019 23:47:36 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1729507AbfJaXre (ORCPT ); Thu, 31 Oct 2019 19:47:34 -0400 Received: from mail105.syd.optusnet.com.au ([211.29.132.249]:56153 "EHLO mail105.syd.optusnet.com.au" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1728648AbfJaXqc (ORCPT ); Thu, 31 Oct 2019 19:46:32 -0400 Received: from dread.disaster.area (pa49-180-67-183.pa.nsw.optusnet.com.au [49.180.67.183]) by mail105.syd.optusnet.com.au (Postfix) with ESMTPS id 547253A01A1; Fri, 1 Nov 2019 10:46:25 +1100 (AEDT) Received: from discord.disaster.area ([192.168.253.110]) by dread.disaster.area with esmtp (Exim 4.92.3) (envelope-from ) id 1iQK8x-0007Ci-RT; Fri, 01 Nov 2019 10:46:19 +1100 Received: from dave by discord.disaster.area with local (Exim 4.92.3) (envelope-from ) id 1iQK8x-00042N-Oz; Fri, 01 Nov 2019 10:46:19 +1100 From: Dave Chinner To: linux-xfs@vger.kernel.org Cc: linux-fsdevel@vger.kernel.org, linux-mm@kvack.org, linux-kernel@vger.kernel.org Subject: [PATCH 25/28] xfs: remove unused old inode reclaim code Date: Fri, 1 Nov 2019 10:46:15 +1100 Message-Id: <20191031234618.15403-26-david@fromorbit.com> X-Mailer: git-send-email 2.24.0.rc0 In-Reply-To: <20191031234618.15403-1-david@fromorbit.com> References: <20191031234618.15403-1-david@fromorbit.com> MIME-Version: 1.0 X-Optus-CM-Score: 0 X-Optus-CM-Analysis: v=2.2 cv=D+Q3ErZj c=1 sm=1 tr=0 a=3wLbm4YUAFX2xaPZIabsgw==:117 a=3wLbm4YUAFX2xaPZIabsgw==:17 a=jpOVt7BSZ2e4Z31A5e1TngXxSK0=:19 a=MeAgGD-zjQ4A:10 a=20KFwNOVAAAA:8 a=Mk_DjsQrHyw8X3MnIygA:9 Sender: linux-fsdevel-owner@vger.kernel.org Precedence: bulk List-ID:
X-Mailing-List: linux-fsdevel@vger.kernel.org From: Dave Chinner Now that the custom AG radix tree walker has been replaced and removed, we don't need the radix tree tags anymore, nor the reclaim cursors or the locks that protect them. Remove all remaining traces of these things. Signed-off-by: Dave Chinner Reviewed-by: Brian Foster --- fs/xfs/xfs_icache.c | 82 +-------------------------------------- fs/xfs/xfs_icache.h | 7 ++-- fs/xfs/xfs_mount.c | 4 --- fs/xfs/xfs_mount.h | 3 -- fs/xfs/xfs_super.c | 5 +-- 5 files changed, 6 insertions(+), 95 deletions(-) diff --git a/fs/xfs/xfs_icache.c b/fs/xfs/xfs_icache.c index 05dd292bfdb6..71a729e29260 100644 --- a/fs/xfs/xfs_icache.c +++ b/fs/xfs/xfs_icache.c @@ -139,83 +139,6 @@ xfs_inode_free( __xfs_inode_free(ip); } -static void -xfs_perag_set_reclaim_tag( - struct xfs_perag *pag) -{ - struct xfs_mount *mp = pag->pag_mount; - - lockdep_assert_held(&pag->pag_ici_lock); - if (pag->pag_ici_reclaimable++) - return; - - /* propagate the reclaim tag up into the perag radix tree */ - spin_lock(&mp->m_perag_lock); - radix_tree_tag_set(&mp->m_perag_tree, pag->pag_agno, - XFS_ICI_RECLAIM_TAG); - spin_unlock(&mp->m_perag_lock); - - trace_xfs_perag_set_reclaim(mp, pag->pag_agno, -1, _RET_IP_); -} - -static void -xfs_perag_clear_reclaim_tag( - struct xfs_perag *pag) -{ - struct xfs_mount *mp = pag->pag_mount; - - lockdep_assert_held(&pag->pag_ici_lock); - if (--pag->pag_ici_reclaimable) - return; - - /* clear the reclaim tag from the perag radix tree */ - spin_lock(&mp->m_perag_lock); - radix_tree_tag_clear(&mp->m_perag_tree, pag->pag_agno, - XFS_ICI_RECLAIM_TAG); - spin_unlock(&mp->m_perag_lock); - trace_xfs_perag_clear_reclaim(mp, pag->pag_agno, -1, _RET_IP_); -} - - -/* - * We set the inode flag atomically with the radix tree tag. - * Once we get tag lookups on the radix tree, this inode flag - * can go away. - */ -void -xfs_inode_set_reclaim_tag( - struct xfs_inode *ip) -{ - struct xfs_mount *mp = ip->i_mount; - struct xfs_perag *pag; - - pag = xfs_perag_get(mp, XFS_INO_TO_AGNO(mp, ip->i_ino)); - spin_lock(&pag->pag_ici_lock); - spin_lock(&ip->i_flags_lock); - - radix_tree_tag_set(&pag->pag_ici_root, XFS_INO_TO_AGINO(mp, ip->i_ino), - XFS_ICI_RECLAIM_TAG); - xfs_perag_set_reclaim_tag(pag); - __xfs_iflags_set(ip, XFS_IRECLAIMABLE); - - list_lru_add(&mp->m_inode_lru, &VFS_I(ip)->i_lru); - - spin_unlock(&ip->i_flags_lock); - spin_unlock(&pag->pag_ici_lock); - xfs_perag_put(pag); -} - -STATIC void -xfs_inode_clear_reclaim_tag( - struct xfs_perag *pag, - xfs_ino_t ino) -{ - radix_tree_tag_clear(&pag->pag_ici_root, - XFS_INO_TO_AGINO(pag->pag_mount, ino), - XFS_ICI_RECLAIM_TAG); - xfs_perag_clear_reclaim_tag(pag); -} - static void xfs_inew_wait( struct xfs_inode *ip) @@ -397,18 +320,16 @@ xfs_iget_cache_hit( goto out_error; } - spin_lock(&pag->pag_ici_lock); - spin_lock(&ip->i_flags_lock); /* * Clear the per-lifetime state in the inode as we are now * effectively a new inode and need to return to the initial * state before reuse occurs. */ + spin_lock(&ip->i_flags_lock); ip->i_flags &= ~XFS_IRECLAIM_RESET_FLAGS; ip->i_flags |= XFS_INEW; list_lru_del(&mp->m_inode_lru, &inode->i_lru); - xfs_inode_clear_reclaim_tag(pag, ip->i_ino); inode->i_state = I_NEW; ip->i_sick = 0; ip->i_checked = 0; @@ -417,7 +338,6 @@ xfs_iget_cache_hit( init_rwsem(&inode->i_rwsem); spin_unlock(&ip->i_flags_lock); - spin_unlock(&pag->pag_ici_lock); } else { /* If the VFS inode is being torn down, pause and try again.
*/ if (!igrab(inode)) { diff --git a/fs/xfs/xfs_icache.h b/fs/xfs/xfs_icache.h index 86e858e4a281..ec646b9e88b7 100644 --- a/fs/xfs/xfs_icache.h +++ b/fs/xfs/xfs_icache.h @@ -25,9 +25,8 @@ struct xfs_eofblocks { */ #define XFS_ICI_NO_TAG (-1) /* special flag for an untagged lookup in xfs_inode_ag_iterator */ -#define XFS_ICI_RECLAIM_TAG 0 /* inode is to be reclaimed */ -#define XFS_ICI_EOFBLOCKS_TAG 1 /* inode has blocks beyond EOF */ -#define XFS_ICI_COWBLOCKS_TAG 2 /* inode can have cow blocks to gc */ +#define XFS_ICI_EOFBLOCKS_TAG 0 /* inode has blocks beyond EOF */ +#define XFS_ICI_COWBLOCKS_TAG 1 /* inode can have cow blocks to gc */ /* * Flags for xfs_iget() @@ -68,8 +67,6 @@ enum lru_status xfs_inode_reclaim_isolate(struct list_head *item, void xfs_dispose_inodes(struct list_head *freeable); void xfs_reclaim_all_inodes(struct xfs_mount *mp); -void xfs_inode_set_reclaim_tag(struct xfs_inode *ip); - void xfs_inode_set_eofblocks_tag(struct xfs_inode *ip); void xfs_inode_clear_eofblocks_tag(struct xfs_inode *ip); int xfs_icache_free_eofblocks(struct xfs_mount *, struct xfs_eofblocks *); diff --git a/fs/xfs/xfs_mount.c b/fs/xfs/xfs_mount.c index 5f3fd1d8f63f..9d60a4e033a0 100644 --- a/fs/xfs/xfs_mount.c +++ b/fs/xfs/xfs_mount.c @@ -148,7 +148,6 @@ xfs_free_perag( ASSERT(atomic_read(&pag->pag_ref) == 0); xfs_iunlink_destroy(pag); xfs_buf_hash_destroy(pag); - mutex_destroy(&pag->pag_ici_reclaim_lock); call_rcu(&pag->rcu_head, __xfs_free_perag); } } @@ -200,7 +199,6 @@ xfs_initialize_perag( pag->pag_agno = index; pag->pag_mount = mp; spin_lock_init(&pag->pag_ici_lock); - mutex_init(&pag->pag_ici_reclaim_lock); INIT_RADIX_TREE(&pag->pag_ici_root, GFP_ATOMIC); if (xfs_buf_hash_init(pag)) goto out_free_pag; @@ -242,7 +240,6 @@ xfs_initialize_perag( out_hash_destroy: xfs_buf_hash_destroy(pag); out_free_pag: - mutex_destroy(&pag->pag_ici_reclaim_lock); kmem_free(pag); out_unwind_new_pags: /* unwind any prior newly initialized pags */ @@ -252,7 +249,6 @@ xfs_initialize_perag( break; xfs_buf_hash_destroy(pag); xfs_iunlink_destroy(pag); - mutex_destroy(&pag->pag_ici_reclaim_lock); kmem_free(pag); } return error; diff --git a/fs/xfs/xfs_mount.h b/fs/xfs/xfs_mount.h index 4f153ee17e18..dea05cd867bf 100644 --- a/fs/xfs/xfs_mount.h +++ b/fs/xfs/xfs_mount.h @@ -343,9 +343,6 @@ typedef struct xfs_perag { spinlock_t pag_ici_lock; /* incore inode cache lock */ struct radix_tree_root pag_ici_root; /* incore inode cache root */ - int pag_ici_reclaimable; /* reclaimable inodes */ - struct mutex pag_ici_reclaim_lock; /* serialisation point */ - unsigned long pag_ici_reclaim_cursor; /* reclaim restart point */ /* buffer cache index */ spinlock_t pag_buf_lock; /* lock for pag_buf_hash */ diff --git a/fs/xfs/xfs_super.c b/fs/xfs/xfs_super.c index 096ae31b5436..d2200fbce139 100644 --- a/fs/xfs/xfs_super.c +++ b/fs/xfs/xfs_super.c @@ -916,7 +916,6 @@ xfs_fs_destroy_inode( spin_lock(&ip->i_flags_lock); ASSERT_ALWAYS(!__xfs_iflags_test(ip, XFS_IRECLAIMABLE)); ASSERT_ALWAYS(!__xfs_iflags_test(ip, XFS_IRECLAIM)); - spin_unlock(&ip->i_flags_lock); /* * We always use background reclaim here because even if the @@ -925,7 +924,9 @@ xfs_fs_destroy_inode( * this more efficiently than we can here, so simply let background * reclaim tear down all inodes. 
*/ - xfs_inode_set_reclaim_tag(ip); + __xfs_iflags_set(ip, XFS_IRECLAIMABLE); + list_lru_add(&mp->m_inode_lru, &VFS_I(ip)->i_lru); + spin_unlock(&ip->i_flags_lock); } static void From patchwork Thu Oct 31 23:46:16 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Dave Chinner X-Patchwork-Id: 11222069 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id DEB0C912 for ; Thu, 31 Oct 2019 23:46:33 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by mail.kernel.org (Postfix) with ESMTP id C322621734 for ; Thu, 31 Oct 2019 23:46:33 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1728857AbfJaXqc (ORCPT ); Thu, 31 Oct 2019 19:46:32 -0400 Received: from mail105.syd.optusnet.com.au ([211.29.132.249]:55593 "EHLO mail105.syd.optusnet.com.au" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1728572AbfJaXqb (ORCPT ); Thu, 31 Oct 2019 19:46:31 -0400 Received: from dread.disaster.area (pa49-180-67-183.pa.nsw.optusnet.com.au [49.180.67.183]) by mail105.syd.optusnet.com.au (Postfix) with ESMTPS id 898E73A2897; Fri, 1 Nov 2019 10:46:25 +1100 (AEDT) Received: from discord.disaster.area ([192.168.253.110]) by dread.disaster.area with esmtp (Exim 4.92.3) (envelope-from ) id 1iQK8x-0007Ck-SB; Fri, 01 Nov 2019 10:46:19 +1100 Received: from dave by discord.disaster.area with local (Exim 4.92.3) (envelope-from ) id 1iQK8x-00042Q-Q9; Fri, 01 Nov 2019 10:46:19 +1100 From: Dave Chinner To: linux-xfs@vger.kernel.org Cc: linux-fsdevel@vger.kernel.org, linux-mm@kvack.org, linux-kernel@vger.kernel.org Subject: [PATCH 26/28] xfs: use xfs_ail_push_all in xfs_reclaim_inodes Date: Fri, 1 Nov 2019 10:46:16 +1100 Message-Id: <20191031234618.15403-27-david@fromorbit.com> X-Mailer: git-send-email 2.24.0.rc0 In-Reply-To: <20191031234618.15403-1-david@fromorbit.com> References: <20191031234618.15403-1-david@fromorbit.com> MIME-Version: 1.0 X-Optus-CM-Score: 0 X-Optus-CM-Analysis: v=2.2 cv=G6BsK5s5 c=1 sm=1 tr=0 a=3wLbm4YUAFX2xaPZIabsgw==:117 a=3wLbm4YUAFX2xaPZIabsgw==:17 a=jpOVt7BSZ2e4Z31A5e1TngXxSK0=:19 a=MeAgGD-zjQ4A:10 a=20KFwNOVAAAA:8 a=WMbJ51up1JB3X8GliIAA:9 Sender: linux-fsdevel-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-fsdevel@vger.kernel.org From: Dave Chinner If we are reclaiming all inodes, it is likely we need to flush the entire AIL to do that. We have mechanisms to do that without needing to push to a specific LSN. Convert xfs_reclaim_all_inodes() to use the xfs_ail_push_all() variant so we can get rid of the hacky xfs_ail_push_sync() scaffolding we used to support the intermediate stages of the non-blocking reclaim changeset. Signed-off-by: Dave Chinner --- fs/xfs/xfs_icache.c | 17 +++++++++++------ fs/xfs/xfs_trans_ail.c | 32 -------------------------------- fs/xfs/xfs_trans_priv.h | 2 -- 3 files changed, 11 insertions(+), 40 deletions(-) diff --git a/fs/xfs/xfs_icache.c b/fs/xfs/xfs_icache.c index 71a729e29260..11bf4768d491 100644 --- a/fs/xfs/xfs_icache.c +++ b/fs/xfs/xfs_icache.c @@ -25,6 +25,7 @@ #include "xfs_log.h" #include <linux/iversion.h> +#include <linux/backing-dev.h> /* for congestion_wait() */ /* * Allocate and initialise an xfs_inode. */ @@ -1051,6 +1052,10 @@ xfs_dispose_inodes( cond_resched(); } } + +/* + * Reclaim all unused inodes in the filesystem.
+ */ void xfs_reclaim_all_inodes( struct xfs_mount *mp) @@ -1059,6 +1064,9 @@ xfs_reclaim_all_inodes( struct xfs_ireclaim_args ra; long freed, to_free; + /* push the AIL to clean dirty reclaimable inodes */ + xfs_ail_push_all(mp->m_ail); + xfs_ireclaim_args_init(&ra); to_free = list_lru_count(&mp->m_inode_lru); @@ -1066,13 +1074,10 @@ xfs_reclaim_all_inodes( xfs_inode_reclaim_isolate, &ra, to_free); xfs_dispose_inodes(&ra.freeable); - if (freed == 0) { + if (freed == 0) xfs_log_force(mp, XFS_LOG_SYNC); - xfs_ail_push_all(mp->m_ail); - } else if (ra.lowest_lsn != NULLCOMMITLSN) { - xfs_ail_push_sync(mp->m_ail, ra.lowest_lsn); - } - cond_resched(); + else if (ra.dirty_skipped) + congestion_wait(BLK_RW_ASYNC, HZ/10); } } diff --git a/fs/xfs/xfs_trans_ail.c b/fs/xfs/xfs_trans_ail.c index 3e1d0e1439e2..685a21cd24c0 100644 --- a/fs/xfs/xfs_trans_ail.c +++ b/fs/xfs/xfs_trans_ail.c @@ -662,36 +662,6 @@ xfs_ail_push_all( xfs_ail_push(ailp, threshold_lsn); } -/* - * Push the AIL to a specific lsn and wait for it to complete. - */ -void -xfs_ail_push_sync( - struct xfs_ail *ailp, - xfs_lsn_t threshold_lsn) -{ - struct xfs_log_item *lip; - DEFINE_WAIT(wait); - - spin_lock(&ailp->ail_lock); - while ((lip = xfs_ail_min(ailp)) != NULL) { - prepare_to_wait(&ailp->ail_push, &wait, TASK_UNINTERRUPTIBLE); - if (XFS_FORCED_SHUTDOWN(ailp->ail_mount) || - XFS_LSN_CMP(threshold_lsn, lip->li_lsn) < 0) - break; - if (XFS_LSN_CMP(threshold_lsn, ailp->ail_target) > 0) - ailp->ail_target = threshold_lsn; - wake_up_process(ailp->ail_task); - spin_unlock(&ailp->ail_lock); - schedule(); - spin_lock(&ailp->ail_lock); - } - spin_unlock(&ailp->ail_lock); - - finish_wait(&ailp->ail_push, &wait); -} - - /* * Push out all items in the AIL immediately and wait until the AIL is empty. */ @@ -732,7 +702,6 @@ xfs_ail_update_finish( if (!XFS_FORCED_SHUTDOWN(mp)) xlog_assign_tail_lsn_locked(mp); - wake_up_all(&ailp->ail_push); if (list_empty(&ailp->ail_head)) wake_up_all(&ailp->ail_empty); spin_unlock(&ailp->ail_lock); @@ -889,7 +858,6 @@ xfs_trans_ail_init( spin_lock_init(&ailp->ail_lock); INIT_LIST_HEAD(&ailp->ail_buf_list); init_waitqueue_head(&ailp->ail_empty); - init_waitqueue_head(&ailp->ail_push); ailp->ail_task = kthread_run(xfsaild, ailp, "xfsaild/%s", ailp->ail_mount->m_fsname); diff --git a/fs/xfs/xfs_trans_priv.h b/fs/xfs/xfs_trans_priv.h index 1b6f4bbd47c0..35655eac01a6 100644 --- a/fs/xfs/xfs_trans_priv.h +++ b/fs/xfs/xfs_trans_priv.h @@ -61,7 +61,6 @@ struct xfs_ail { int ail_log_flush; struct list_head ail_buf_list; wait_queue_head_t ail_empty; - wait_queue_head_t ail_push; }; /* @@ -114,7 +113,6 @@ xfs_trans_ail_remove( } void xfs_ail_push(struct xfs_ail *, xfs_lsn_t); -void xfs_ail_push_sync(struct xfs_ail *, xfs_lsn_t); void xfs_ail_push_all(struct xfs_ail *); void xfs_ail_push_all_sync(struct xfs_ail *); struct xfs_log_item *xfs_ail_min(struct xfs_ail *ailp); From patchwork Thu Oct 31 23:46:17 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Dave Chinner X-Patchwork-Id: 11222125 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 0DC58912 for ; Thu, 31 Oct 2019 23:47:19 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by mail.kernel.org (Postfix) with ESMTP id E5266208E3 for ; Thu, 31 Oct 2019 23:47:18 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id 
S1729288AbfJaXrI (ORCPT ); Thu, 31 Oct 2019 19:47:08 -0400 Received: from mail105.syd.optusnet.com.au ([211.29.132.249]:55729 "EHLO mail105.syd.optusnet.com.au" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1728812AbfJaXqe (ORCPT ); Thu, 31 Oct 2019 19:46:34 -0400 Received: from dread.disaster.area (pa49-180-67-183.pa.nsw.optusnet.com.au [49.180.67.183]) by mail105.syd.optusnet.com.au (Postfix) with ESMTPS id 20C713A2710; Fri, 1 Nov 2019 10:46:25 +1100 (AEDT) Received: from discord.disaster.area ([192.168.253.110]) by dread.disaster.area with esmtp (Exim 4.92.3) (envelope-from ) id 1iQK8x-0007Cm-TS; Fri, 01 Nov 2019 10:46:19 +1100 Received: from dave by discord.disaster.area with local (Exim 4.92.3) (envelope-from ) id 1iQK8x-00042T-RH; Fri, 01 Nov 2019 10:46:19 +1100 From: Dave Chinner To: linux-xfs@vger.kernel.org Cc: linux-fsdevel@vger.kernel.org, linux-mm@kvack.org, linux-kernel@vger.kernel.org Subject: [PATCH 27/28] rwsem: introduce down/up_write_non_owner Date: Fri, 1 Nov 2019 10:46:17 +1100 Message-Id: <20191031234618.15403-28-david@fromorbit.com> X-Mailer: git-send-email 2.24.0.rc0 In-Reply-To: <20191031234618.15403-1-david@fromorbit.com> References: <20191031234618.15403-1-david@fromorbit.com> MIME-Version: 1.0 X-Optus-CM-Score: 0 X-Optus-CM-Analysis: v=2.2 cv=G6BsK5s5 c=1 sm=1 tr=0 a=3wLbm4YUAFX2xaPZIabsgw==:117 a=3wLbm4YUAFX2xaPZIabsgw==:17 a=jpOVt7BSZ2e4Z31A5e1TngXxSK0=:19 a=MeAgGD-zjQ4A:10 a=20KFwNOVAAAA:8 a=PQ_rIYW8ek-NpEnsdwgA:9 Sender: linux-fsdevel-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-fsdevel@vger.kernel.org From: Dave Chinner To serialise freeing of inodes against unreferenced lookups, XFS wants to hold the inode locked from the reclaim context that queues it for RCU freeing until the grace period that actually frees the inode. This means the inode is being unlocked by a context that didn't lock it, and that makes lockdep unhappy. This is a very special use case - inodes can be found once marked for reclaim because of lockless RCU lookups, so we need some synchronisation that will prevent such inodes from being locked. To access an unreferenced inode we need to take the ILOCK rwsem without blocking and still under rcu_read_lock() to hold off reclaim of the inode. If the inode has been reclaimed and is queued for freeing, holding the ILOCK rwsem until the RCU grace period expires means no lookup that finds it in that grace period will be able to lock it and use it. Once the grace period expires we are guaranteed that nothing will ever find the inode again, and we can unlock it and free it. This requires down_write_trylock_non_owner() in the reclaim context before we mark the inode as reclaimed and run call_rcu() to free it. It requires up_write_non_owner() in the RCU callback before we free the inode.
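To illustrate the intended lock handoff, here is a minimal sketch of how these primitives pair up across contexts. The struct obj, obj_free_rcu() and obj_reclaim() names are hypothetical stand-ins, not part of this patch; only the rwsem calls are the API introduced here:

#include <linux/kernel.h>
#include <linux/rcupdate.h>
#include <linux/rwsem.h>
#include <linux/slab.h>
#include <linux/types.h>

/* Hypothetical RCU-freed object standing in for the XFS inode. */
struct obj {
	struct rw_semaphore	lock;
	struct rcu_head		rcu;
};

static void obj_free_rcu(struct rcu_head *head)
{
	struct obj *o = container_of(head, struct obj, rcu);

	/* Unlock from the RCU callback - a context that never took the lock. */
	up_write_non_owner(&o->lock);
	kfree(o);
}

/* Reclaim context: returns true if the object was queued for freeing. */
static bool obj_reclaim(struct obj *o)
{
	/* Lookups hold the lock to pin the object; fail fast if so. */
	if (!down_write_trylock_non_owner(&o->lock))
		return false;

	/*
	 * The lock is now held across the RCU grace period, so any lookup
	 * that finds the object under rcu_read_lock() will fail its
	 * trylock and skip the object.
	 */
	call_rcu(&o->rcu, obj_free_rcu);
	return true;
}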
Signed-off-by: Dave Chinner --- include/linux/rwsem.h | 6 ++++++ kernel/locking/rwsem.c | 23 +++++++++++++++++++++++ 2 files changed, 29 insertions(+) diff --git a/include/linux/rwsem.h b/include/linux/rwsem.h index 00d6054687dd..e557bd994d0e 100644 --- a/include/linux/rwsem.h +++ b/include/linux/rwsem.h @@ -191,6 +191,9 @@ do { \ */ extern void down_read_non_owner(struct rw_semaphore *sem); extern void up_read_non_owner(struct rw_semaphore *sem); +extern void down_write_non_owner(struct rw_semaphore *sem); +extern int down_write_trylock_non_owner(struct rw_semaphore *sem); +extern void up_write_non_owner(struct rw_semaphore *sem); #else # define down_read_nested(sem, subclass) down_read(sem) # define down_write_nest_lock(sem, nest_lock) down_write(sem) @@ -198,6 +201,9 @@ extern void up_read_non_owner(struct rw_semaphore *sem); # define down_write_killable_nested(sem, subclass) down_write_killable(sem) # define down_read_non_owner(sem) down_read(sem) # define up_read_non_owner(sem) up_read(sem) +# define down_write_non_owner(sem) down_write(sem) +# define down_write_trylock_non_owner(sem) down_write_trylock(sem) +# define up_write_non_owner(sem) up_write(sem) #endif #endif /* _LINUX_RWSEM_H */ diff --git a/kernel/locking/rwsem.c b/kernel/locking/rwsem.c index eef04551eae7..36162d42fe09 100644 --- a/kernel/locking/rwsem.c +++ b/kernel/locking/rwsem.c @@ -1654,4 +1654,27 @@ void up_read_non_owner(struct rw_semaphore *sem) } EXPORT_SYMBOL(up_read_non_owner); +void down_write_non_owner(struct rw_semaphore *sem) +{ + might_sleep(); + __down_write(sem); +} +EXPORT_SYMBOL(down_write_non_owner); + +/* + * trylock for writing -- returns 1 if successful, 0 if contention + */ +int down_write_trylock_non_owner(struct rw_semaphore *sem) +{ + return __down_write_trylock(sem); +} +EXPORT_SYMBOL(down_write_trylock_non_owner); + +void up_write_non_owner(struct rw_semaphore *sem) +{ + rwsem_set_owner(sem); + __up_write(sem); +} +EXPORT_SYMBOL(up_write_non_owner); + #endif From patchwork Thu Oct 31 23:46:18 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Dave Chinner X-Patchwork-Id: 11222107 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id A138814E5 for ; Thu, 31 Oct 2019 23:47:05 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by mail.kernel.org (Postfix) with ESMTP id 6AA1D208C0 for ; Thu, 31 Oct 2019 23:47:05 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1729099AbfJaXqt (ORCPT ); Thu, 31 Oct 2019 19:46:49 -0400 Received: from mail104.syd.optusnet.com.au ([211.29.132.246]:40133 "EHLO mail104.syd.optusnet.com.au" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1728578AbfJaXqe (ORCPT ); Thu, 31 Oct 2019 19:46:34 -0400 Received: from dread.disaster.area (pa49-180-67-183.pa.nsw.optusnet.com.au [49.180.67.183]) by mail104.syd.optusnet.com.au (Postfix) with ESMTPS id ED9677EA903; Fri, 1 Nov 2019 10:46:25 +1100 (AEDT) Received: from discord.disaster.area ([192.168.253.110]) by dread.disaster.area with esmtp (Exim 4.92.3) (envelope-from ) id 1iQK8x-0007Co-V0; Fri, 01 Nov 2019 10:46:19 +1100 Received: from dave by discord.disaster.area with local (Exim 4.92.3) (envelope-from ) id 1iQK8x-00042W-SH; Fri, 01 Nov 2019 10:46:19 +1100 From: Dave Chinner To: linux-xfs@vger.kernel.org Cc: linux-fsdevel@vger.kernel.org, linux-mm@kvack.org, 
linux-kernel@vger.kernel.org Subject: [PATCH 28/28] xfs: rework unreferenced inode lookups Date: Fri, 1 Nov 2019 10:46:18 +1100 Message-Id: <20191031234618.15403-29-david@fromorbit.com> X-Mailer: git-send-email 2.24.0.rc0 In-Reply-To: <20191031234618.15403-1-david@fromorbit.com> References: <20191031234618.15403-1-david@fromorbit.com> MIME-Version: 1.0 X-Optus-CM-Score: 0 X-Optus-CM-Analysis: v=2.2 cv=G6BsK5s5 c=1 sm=1 tr=0 a=3wLbm4YUAFX2xaPZIabsgw==:117 a=3wLbm4YUAFX2xaPZIabsgw==:17 a=jpOVt7BSZ2e4Z31A5e1TngXxSK0=:19 a=MeAgGD-zjQ4A:10 a=20KFwNOVAAAA:8 a=r6joNmVve4b8u8DPqOAA:9 a=cpms68JvJ2FCVZZY:21 a=fhZphxW6ybyfpQKv:21 Sender: linux-fsdevel-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-fsdevel@vger.kernel.org From: Dave Chinner Looking up an unreferenced inode in the inode cache is a bit hairy. We do this for inode invalidation and writeback clustering purposes, which is all invisible to the VFS. Hence we can't take reference counts to the inode and so must be very careful how we do it. There are several different places that all do the lookups and checks slightly differently. Fundamentally, though, they are all racy and inode reclaim has to block waiting for the inode lock if it loses the race. This is not very optimal given all the work we've already done to make reclaim non-blocking. We can make the reclaim process non-blocking with a couple of simple changes. If we define the unreferenced lookup process in a way that will either always grab an inode in a way that reclaim will notice and skip, or will notice a reclaim has grabbed the inode so it can skip the inode, then there is no need for reclaim to cycle the inode ILOCK at all. Selecting an inode for reclaim is already non-blocking, so if the ILOCK is held the inode will be skipped. If we ensure that reclaim holds the ILOCK until the inode is freed, then we can do the same thing in the unreferenced lookup to avoid inodes in reclaim. We can do this simply by holding the ILOCK until the RCU grace period expires and the inode freeing callback is run. As all unreferenced lookups have to hold the rcu_read_lock(), we are guaranteed that a reclaimed inode will be noticed as the trylock will fail. Additional research notes on final reclaim locking before free -------------------------------------------------------------- 2016: 1f2dcfe89eda ("xfs: xfs_inode_free() isn't RCU safe") Fixes situation where the inode is found during RCU lookup within the freeing grace period, but critical structures have already been freed. Lookup code that has this problem is stuff like xfs_iflush_cluster. 2008: 455486b9ccdd ("[XFS] avoid all reclaimable inodes in xfs_sync_inodes_ag") Prior to this commit, the flushing of inodes required serialisation with xfs_ireclaim(), which did this lock/unlock thingy to ensure that it waited for flushing in xfs_sync_inodes_ag() to complete before freeing the inode: /* - * If we can't get a reference on the VFS_I, the inode must be - * in reclaim. If we can get the inode lock without blocking, - * it is safe to flush the inode because we hold the tree lock - * and xfs_iextract will block right now. Hence if we lock the - * inode while holding the tree lock, xfs_ireclaim() is - * guaranteed to block on the inode lock we now hold and hence - * it is safe to reference the inode until we drop the inode - * locks completely. + * If we can't get a reference on the inode, it must be + * in reclaim. Leave it for the reclaim code to flush. */ This case is completely gone from the modern code.
lock/unlock exists at start of git era. Switching to archive tree. This xfs_sync() functionality goes back to 1994 when inode writeback was first introduced by: 47ac6d60 ("Add support to xfs_ireclaim() needed for xfs_sync().") So it has been there forever - let's see if we can get rid of it. State of existing code: - xfs_iflush_cluster() does not check for the XFS_IRECLAIM inode flag while holding rcu_read_lock()/i_flags_lock, so it doesn't avoid reclaimable inodes or inodes that are in the process of being reclaimed. Inodes at this point of reclaim are clean, so if xfs_iflush_cluster wins the race to the ILOCK, then inode reclaim has to wait for the lock to be dropped by xfs_iflush_cluster() once it detects the inode is clean. - xfs_ifree_cluster() has similar logic based around XFS_ISTALE, resulting in similar race conditions that require inode reclaim to cycle the ILOCK to serialise against. - xfs_inode_ag_walk() uses xfs_inode_ag_walk_grab(), and it checks XFS_IRECLAIM under RCU. It then tries to take a reference to the VFS inode via igrab(), which will fail if the inode is either XFS_IRECLAIMABLE | XFS_IRECLAIM, and if it races then igrab() will fail because the inode has I_FREEING still set, so it's protected against reclaim races. That leaves xfs_iflush_cluster() + xfs_ifree_cluster() to be modified to do reclaim-safe lookups. W.r.t. new inode reclaim LRU isolate function: 1. inode can be referenced while rcu_read_lock() is held. 2. XFS_IRECLAIM means the inode has been fully locked down and placed on the dispose list, and will be freed soon. - ilock_nowait() will fail once IRECLAIM is set due to lock order in isolation code. 3. ip->i_ino == 0 means it's been removed from the dispose list and is about to or has been removed from the radix tree and may have already been queued on the rcu freeing list to be freed at the end of the current grace period. - the old xfs_ireclaim() code will have dropped the ILOCK here, and so there's a race between checking IRECLAIM, grabbing ilock_nowait() and reclaim freeing the inode. - this is what the spurious lock/unlock avoids. 4. If xfs_ilock_nowait() fails before the rcu grace period expires, it doesn't matter if we race between checking IRECLAIM and failing the lock attempt. In fact, we don't even have to check XFS_IRECLAIM - just failing xfs_ilock_nowait() is sufficient to avoid inodes being reclaimed. Hence when xfs_ilock_nowait() fails, we can either drop the rcu_read_lock at that point and restart the inode lookup, or we just skip the inode altogether. If we raced with reclaim, the retry will not find the inode in reclaim again. If we raced with some other lock holder, then we'll find the inode and try to lock it again. - Requires holding ILOCK into rcu freeing callback and dropping it there. i.e. the inode to be reclaimed remains locked until the grace period expires. - No window at all between IRECLAIM being set and visible to other CPUs and the inode being removed from the cache and freed where ilock_nowait will succeed. - simple, effective, reliable.
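Expressed as code, the lookup rules above reduce to the following pattern. This is an illustrative sketch only - the helper name xfs_grab_unreferenced() is hypothetical, and the surrounding radix tree walk and error handling are elided:

/*
 * Sketch of a reclaim-safe unreferenced inode lookup. Returns true
 * with the ILOCK held, pinning the inode against reclaim until the
 * caller drops the lock; false means skip (or retry) this inode.
 * Caller must hold rcu_read_lock().
 */
static bool
xfs_grab_unreferenced(
	struct xfs_inode	*ip,
	xfs_ino_t		inum)
{
	bool			locked = false;

	spin_lock(&ip->i_flags_lock);
	if (!ip->i_ino || ip->i_ino != inum ||
	    __xfs_iflags_test(ip, XFS_IRECLAIM))
		goto out_unlock;

	/*
	 * Reclaim holds the ILOCK from isolation until the RCU callback
	 * frees the inode, so a failed trylock here always means the
	 * inode is (or is about to be) reclaimed and must be skipped.
	 */
	if (xfs_ilock_nowait(ip, XFS_ILOCK_SHARED))
		locked = true;
out_unlock:
	spin_unlock(&ip->i_flags_lock);
	return locked;
}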
Signed-off-by: Dave Chinner --- fs/xfs/mrlock.h | 27 +++++++++ fs/xfs/xfs_icache.c | 88 +++++++++++++++++++++-------- fs/xfs/xfs_inode.c | 131 +++++++++++++++++++++----------------------- 3 files changed, 153 insertions(+), 93 deletions(-) diff --git a/fs/xfs/mrlock.h b/fs/xfs/mrlock.h index 79155eec341b..1752a2592bcc 100644 --- a/fs/xfs/mrlock.h +++ b/fs/xfs/mrlock.h @@ -75,4 +75,31 @@ static inline void mrdemote(mrlock_t *mrp) downgrade_write(&mrp->mr_lock); } +/* special locking cases for inode reclaim */ +static inline void mrupdate_non_owner(mrlock_t *mrp) +{ + down_write_non_owner(&mrp->mr_lock); +#if defined(DEBUG) || defined(XFS_WARN) + mrp->mr_writer = 1; +#endif +} + +static inline int mrtryupdate_non_owner(mrlock_t *mrp) +{ + if (!down_write_trylock_non_owner(&mrp->mr_lock)) + return 0; +#if defined(DEBUG) || defined(XFS_WARN) + mrp->mr_writer = 1; +#endif + return 1; +} + +static inline void mrunlock_excl_non_owner(mrlock_t *mrp) +{ +#if defined(DEBUG) || defined(XFS_WARN) + mrp->mr_writer = 0; +#endif + up_write_non_owner(&mrp->mr_lock); +} + #endif /* __XFS_SUPPORT_MRLOCK_H__ */ diff --git a/fs/xfs/xfs_icache.c b/fs/xfs/xfs_icache.c index 11bf4768d491..45ee3b5cd873 100644 --- a/fs/xfs/xfs_icache.c +++ b/fs/xfs/xfs_icache.c @@ -106,6 +106,7 @@ xfs_inode_free_callback( ip->i_itemp = NULL; } + mrunlock_excl_non_owner(&ip->i_lock); kmem_zone_free(xfs_inode_zone, ip); } @@ -132,6 +133,7 @@ xfs_inode_free( * free state. The ip->i_flags_lock provides the barrier against lookup * races. */ + mrupdate_non_owner(&ip->i_lock); spin_lock(&ip->i_flags_lock); ip->i_flags = XFS_IRECLAIM; ip->i_ino = 0; @@ -295,11 +297,24 @@ xfs_iget_cache_hit( } /* - * We need to set XFS_IRECLAIM to prevent xfs_reclaim_inode - * from stomping over us while we recycle the inode. Remove it - * from the LRU straight away so we can re-init the VFS inode. + * Before we reinitialise the inode, we need to make sure + * reclaim does not pull it out from underneath us. We already + * hold the i_flags_lock, and because the XFS_IRECLAIM is not + * set we know the inode is still on the LRU. However, the LRU + * code may have just selected this inode to reclaim, so we need + * to ensure we hold the i_flags_lock long enough for the + * trylock in xfs_inode_reclaim_isolate() to fail. We do this by + * removing the inode from the LRU, which will spin on the LRU + * list locks until reclaim stops walking, at which point we + * know there is no possible race between reclaim isolation and + * this lookup. + * + * We also set the XFS_IRECLAIM flag here while trying to do the + * re-initialisation to prevent multiple racing lookups on this + * inode from all landing here at the same time. */ ip->i_flags |= XFS_IRECLAIM; + list_lru_del(&mp->m_inode_lru, &inode->i_lru); spin_unlock(&ip->i_flags_lock); rcu_read_unlock(); @@ -314,6 +329,7 @@ xfs_iget_cache_hit( spin_lock(&ip->i_flags_lock); wake = !!__xfs_iflags_test(ip, XFS_INEW); ip->i_flags &= ~(XFS_INEW | XFS_IRECLAIM); + list_lru_add(&mp->m_inode_lru, &inode->i_lru); if (wake) wake_up_bit(&ip->i_flags, __XFS_INEW_BIT); ASSERT(ip->i_flags & XFS_IRECLAIMABLE); @@ -330,7 +346,6 @@ xfs_iget_cache_hit( spin_lock(&ip->i_flags_lock); ip->i_flags &= ~XFS_IRECLAIM_RESET_FLAGS; ip->i_flags |= XFS_INEW; - list_lru_del(&mp->m_inode_lru, &inode->i_lru); inode->i_state = I_NEW; ip->i_sick = 0; ip->i_checked = 0; @@ -610,8 +625,7 @@ xfs_icache_inode_is_allocated( /* * The inode lookup is done in batches to keep the amount of lock traffic and * radix tree lookups to a minimum. 
The batch size is a trade off between - lookup reduction and stack usage. This is in the reclaim path, so we can't - be too greedy. + lookup reduction and stack usage. */ #define XFS_LOOKUP_BATCH 32 @@ -916,8 +930,13 @@ xfs_inode_reclaim_isolate( goto out_unlock_flags; } + /* + * If we are going to reclaim this inode, it will be unlocked by the + * RCU callback and so is in a different context. Hence we need to use + * special non-owner trylock semantics for XFS_ILOCK_EXCL here. + */ ret = LRU_SKIP; - if (!xfs_ilock_nowait(ip, XFS_ILOCK_EXCL)) + if (!mrtryupdate_non_owner(&ip->i_lock)) goto out_unlock_flags; if (!__xfs_iflock_nowait(ip)) { @@ -960,7 +979,7 @@ xfs_inode_reclaim_isolate( out_ifunlock: __xfs_ifunlock(ip); out_unlock_inode: - xfs_iunlock(ip, XFS_ILOCK_EXCL); + mrunlock_excl_non_owner(&ip->i_lock); out_unlock_flags: spin_unlock(&ip->i_flags_lock); @@ -969,6 +988,41 @@ xfs_inode_reclaim_isolate( return ret; } +/* + * We are passed a locked inode to dispose of. + * + * To avoid race conditions with lookups that don't take references, we do + * not drop the XFS_ILOCK_EXCL until the RCU callback that frees the inode. + * This means that any attempt to lock the inode during the current RCU grace + * period will fail, and hence we do not need any synchronisation here to wait + * for code that pins unreferenced inodes with the XFS_ILOCK to drain. + * + * This requires code that needs such pins to do the following under a single + * rcu_read_lock() context: + * + * - rcu_read_lock + * - find the inode via radix tree lookup + * - take the ip->i_flags_lock + * - check ip->i_ino != 0 + * - check XFS_IRECLAIM is not set + * - call xfs_ilock_nowait(ip, XFS_ILOCK_[SHARED|EXCL]) to lock the inode + * - drop ip->i_flags_lock + * - rcu_read_unlock() + * + * Only if all this succeeds does the caller have the inode locked and protected + * against it being freed until the ilock is released. If the XFS_IRECLAIM flag + * is set or xfs_ilock_nowait() fails, then the caller must either skip the + * inode and move on to the next inode (gang lookup) or drop the rcu_read_lock + * and start the entire inode lookup process again (individual lookup). + * + * This works because i_flags_lock serialises against + * xfs_inode_reclaim_isolate() - if the lookup wins the race on i_flags_lock and + * XFS_IRECLAIM is not set, then it will be able to lock the inode and hold off + * reclaim. If the isolate function wins the race, it will lock the inode and + * set the XFS_IRECLAIM flag if it is going to free the inode and this will + * prevent the lookup callers from succeeding in getting an unreferenced pin via + * the ILOCK. + */ static void xfs_dispose_inode( struct xfs_inode *ip) @@ -977,11 +1031,14 @@ xfs_dispose_inode( struct xfs_perag *pag; xfs_ino_t ino; + ASSERT(xfs_isilocked(ip, XFS_ILOCK_EXCL)); ASSERT(xfs_isiflocked(ip)); ASSERT(xfs_inode_clean(ip) || xfs_iflags_test(ip, XFS_ISTALE) || XFS_FORCED_SHUTDOWN(mp)); ASSERT(ip->i_ino != 0); + XFS_STATS_INC(mp, xs_ig_reclaims); + /* * Process the shutdown reclaim work we deferred from the LRU isolation * callback before we go any further. */ @@ -1008,9 +1065,6 @@ xfs_dispose_inode( ip->i_flags = XFS_IRECLAIM; ip->i_ino = 0; spin_unlock(&ip->i_flags_lock); - xfs_iunlock(ip, XFS_ILOCK_EXCL); - - XFS_STATS_INC(mp, xs_ig_reclaims); /* * Remove the inode from the per-AG radix tree.
@@ -1022,19 +1076,7 @@ xfs_dispose_inode( spin_unlock(&pag->pag_ici_lock); xfs_perag_put(pag); - /* - * Here we do an (almost) spurious inode lock in order to coordinate - * with inode cache radix tree lookups. This is because the lookup - * can reference the inodes in the cache without taking references. - * - * We make that OK here by ensuring that we wait until the inode is - * unlocked after the lookup before we go ahead and free it. - * - * XXX: need to check this is still true. Not sure it is. - */ - xfs_ilock(ip, XFS_ILOCK_EXCL); xfs_qm_dqdetach(ip); - xfs_iunlock(ip, XFS_ILOCK_EXCL); __xfs_inode_free(ip); } diff --git a/fs/xfs/xfs_inode.c b/fs/xfs/xfs_inode.c index 33edb18098ca..5c0be82195fc 100644 --- a/fs/xfs/xfs_inode.c +++ b/fs/xfs/xfs_inode.c @@ -2538,60 +2538,63 @@ xfs_ifree_get_one_inode( if (!ip) goto out_rcu_unlock; + + spin_lock(&ip->i_flags_lock); + if (!ip->i_ino || ip->i_ino != inum || + __xfs_iflags_test(ip, XFS_IRECLAIM)) + goto out_iflags_unlock; + /* - * because this is an RCU protected lookup, we could find a recently - * freed or even reallocated inode during the lookup. We need to check - * under the i_flags_lock for a valid inode here. Skip it if it is not - * valid, the wrong inode or stale. + * We've got the right inode and it isn't in reclaim but it might be + * locked by someone else. In that case, we retry the inode rather than + * skipping it completely as we have to process it with the cluster + * being freed. */ - spin_lock(&ip->i_flags_lock); - if (ip->i_ino != inum || __xfs_iflags_test(ip, XFS_ISTALE)) { + if (ip != free_ip && !xfs_ilock_nowait(ip, XFS_ILOCK_EXCL)) { spin_unlock(&ip->i_flags_lock); - goto out_rcu_unlock; + rcu_read_unlock(); + delay(1); + goto retry; } - spin_unlock(&ip->i_flags_lock); /* - * Don't try to lock/unlock the current inode, but we _cannot_ skip the - * other inodes that we did not find in the list attached to the buffer - * and are not already marked stale. If we can't lock it, back off and - * retry. + * Inode is now pinned against reclaim until we unlock it. If the inode + * is already marked stale, then it has already been flush locked and + * attached to the buffer so we don't need to do anything more here. */ - if (ip != free_ip) { - if (!xfs_ilock_nowait(ip, XFS_ILOCK_EXCL)) { - rcu_read_unlock(); - delay(1); - goto retry; - } - - /* - * Check the inode number again in case we're racing with - * freeing in xfs_reclaim_inode(). See the comments in that - * function for more information as to why the initial check is - * not sufficient. - */ - if (ip->i_ino != inum) { + if (__xfs_iflags_test(ip, XFS_ISTALE)) { + if (ip != free_ip) xfs_iunlock(ip, XFS_ILOCK_EXCL); - goto out_rcu_unlock; - } + goto out_iflags_unlock; } + __xfs_iflags_set(ip, XFS_ISTALE); + spin_unlock(&ip->i_flags_lock); rcu_read_unlock(); + /* + * The flush lock will now hold off inode reclaim until the buffer + * completion routine runs the xfs_istale_done callback and unlocks the + * flush lock. Hence the caller can safely drop the ILOCK when it is + * done attaching the inode to the cluster buffer. + */ xfs_iflock(ip); - xfs_iflags_set(ip, XFS_ISTALE); /* - * We don't need to attach clean inodes or those only with unlogged - * changes (which we throw away, anyway). + * We don't need to attach clean inodes to the buffer - they are marked + * stale in memory now and will need to be re-initialised by inode + * allocation before they can be reused. 
*/ if (!ip->i_itemp || xfs_inode_clean(ip)) { ASSERT(ip != free_ip); xfs_ifunlock(ip); - xfs_iunlock(ip, XFS_ILOCK_EXCL); + if (ip != free_ip) + xfs_iunlock(ip, XFS_ILOCK_EXCL); goto out_no_inode; } return ip; +out_iflags_unlock: + spin_unlock(&ip->i_flags_lock); out_rcu_unlock: rcu_read_unlock(); out_no_inode: @@ -3519,44 +3522,40 @@ xfs_iflush_cluster( continue; /* - * because this is an RCU protected lookup, we could find a - * recently freed or even reallocated inode during the lookup. - * We need to check under the i_flags_lock for a valid inode - * here. Skip it if it is not valid or the wrong inode. + * See xfs_dispose_inode() for an explanation of the + * tests here to avoid inode reclaim races. */ spin_lock(&cip->i_flags_lock); if (!cip->i_ino || - __xfs_iflags_test(cip, XFS_ISTALE)) { + __xfs_iflags_test(cip, XFS_IRECLAIM)) { spin_unlock(&cip->i_flags_lock); continue; } - /* - * Once we fall off the end of the cluster, no point checking - * any more inodes in the list because they will also all be - * outside the cluster. - */ + /* ILOCK will pin the inode against reclaim */ + if (!xfs_ilock_nowait(cip, XFS_ILOCK_SHARED)) { + spin_unlock(&cip->i_flags_lock); + continue; + } + + if (__xfs_iflags_test(cip, XFS_ISTALE)) { + xfs_iunlock(cip, XFS_ILOCK_SHARED); + spin_unlock(&cip->i_flags_lock); + continue; + } + + /* Lookup can find inodes outside the cluster being flushed. */ if ((XFS_INO_TO_AGINO(mp, cip->i_ino) & mask) != first_index) { + xfs_iunlock(cip, XFS_ILOCK_SHARED); spin_unlock(&cip->i_flags_lock); break; } spin_unlock(&cip->i_flags_lock); /* - * Do an un-protected check to see if the inode is dirty and - * is a candidate for flushing. These checks will be repeated - * later after the appropriate locks are acquired. - */ - if (xfs_inode_clean(cip) && xfs_ipincount(cip) == 0) - continue; - - /* - * Try to get locks. If any are unavailable or it is pinned, + * If we can't get the flush lock now or the inode is pinned, * then this inode cannot be flushed and is skipped. */ - - if (!xfs_ilock_nowait(cip, XFS_ILOCK_SHARED)) - continue; if (!xfs_iflock_nowait(cip)) { xfs_iunlock(cip, XFS_ILOCK_SHARED); continue; @@ -3567,22 +3566,9 @@ xfs_iflush_cluster( continue; } - /* - * Check the inode number again, just to be certain we are not - * racing with freeing in xfs_reclaim_inode(). See the comments - * in that function for more information as to why the initial - * check is not sufficient. - */ - if (!cip->i_ino) { - xfs_ifunlock(cip); - xfs_iunlock(cip, XFS_ILOCK_SHARED); - continue; - } - - /* - * arriving here means that this inode can be flushed. First - * re-check that it's dirty before flushing. + * Arriving here means that this inode can be flushed. First + * check that it's dirty before flushing. */ if (!xfs_inode_clean(cip)) { int error; @@ -3596,6 +3582,7 @@ xfs_iflush_cluster( xfs_ifunlock(cip); } xfs_iunlock(cip, XFS_ILOCK_SHARED); + /* unsafe to reference cip from here */ } if (clcount) { @@ -3634,7 +3621,11 @@ xfs_iflush_cluster( xfs_force_shutdown(mp, SHUTDOWN_CORRUPT_INCORE); - /* abort the corrupt inode, as it was not attached to the buffer */ + /* + * Abort the corrupt inode, as it was not attached to the buffer. It is + * unlocked, but still pinned against reclaim by the flush lock so it is + * safe to reference here until after the flush abort completes. + */ xfs_iflush_abort(cip, false); kmem_free(cilist); xfs_perag_put(pag);
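With the whole series applied, unmount-time inode reclaim reduces to the list_lru isolate/dispose pattern. As a reading aid, the final shape of xfs_reclaim_all_inodes() assembled from the hunks in patches 25 and 26 above looks roughly like this (condensed, not a verbatim copy of the resulting tree):

void
xfs_reclaim_all_inodes(
	struct xfs_mount	*mp)
{
	/* push the AIL to clean dirty reclaimable inodes */
	xfs_ail_push_all(mp->m_ail);

	while (list_lru_count(&mp->m_inode_lru)) {
		struct xfs_ireclaim_args ra;
		long freed, to_free;

		xfs_ireclaim_args_init(&ra);

		/* isolate under the LRU lock, then dispose without it */
		to_free = list_lru_count(&mp->m_inode_lru);
		freed = list_lru_walk(&mp->m_inode_lru,
				xfs_inode_reclaim_isolate, &ra, to_free);
		xfs_dispose_inodes(&ra.freeable);

		/* nothing freed: wait for dirty inodes to be cleaned */
		if (freed == 0)
			xfs_log_force(mp, XFS_LOG_SYNC);
		else if (ra.dirty_skipped)
			congestion_wait(BLK_RW_ASYNC, HZ / 10);
	}
}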