Message ID: 20210603005517.1403689-6-guro@fb.com (mailing list archive)
State: New, archived
Series: cgroup, blkcg: prevent dirty inodes to pin dying memory cgroups
On Wed 02-06-21 17:55:17, Roman Gushchin wrote:
> Asynchronously try to release dying cgwbs by switching attached inodes
> to the bdi's wb. It helps to get rid of per-cgroup writeback
> structures themselves and of pinned memory and block cgroups, which
> are significantly larger structures (mostly due to large per-cpu
> statistics data). This prevents memory waste and helps to avoid
> different scalability problems caused by large piles of dying cgroups.
>
> Reuse the existing mechanism of inode switching used for foreign inode
> detection. To speed things up batch up to 115 inode switching in a
> single operation (the maximum number is selected so that the resulting
> struct inode_switch_wbs_context can fit into 1024 bytes). Because
> every switching consists of two steps divided by an RCU grace period,
> it would be too slow without batching. Please note that the whole
> batch counts as a single operation (when increasing/decreasing
> isw_nr_in_flight). This allows to keep umounting working (flush the
> switching queue), however prevents cleanups from consuming the whole
> switching quota and effectively blocking the frn switching.
>
> A cgwb cleanup operation can fail due to different reasons (e.g. not
> enough memory, the cgwb has an in-flight/pending io, an attached inode
> in a wrong state, etc). In this case the next scheduled cleanup will
> make a new attempt. An attempt is made each time a new cgwb is offlined
> (in other words a memcg and/or a blkcg is deleted by a user). In the
> future an additional attempt scheduled by a timer can be implemented.
>
> Signed-off-by: Roman Gushchin <guro@fb.com>

I think we are getting close :). Some comments are below.

> ---
>  fs/fs-writeback.c                | 68 ++++++++++++++++++++++++++++++++
>  include/linux/backing-dev-defs.h |  1 +
>  include/linux/writeback.h        |  1 +
>  mm/backing-dev.c                 | 58 ++++++++++++++++++++++++++-
>  4 files changed, 126 insertions(+), 2 deletions(-)
>
> diff --git a/fs/fs-writeback.c b/fs/fs-writeback.c
> index 49d7b23a7cfe..e8517ad677eb 100644
> --- a/fs/fs-writeback.c
> +++ b/fs/fs-writeback.c
> @@ -225,6 +225,8 @@ void wb_wait_for_completion(struct wb_completion *done)
>                                  /* one round can affect upto 5 slots */
>  #define WB_FRN_MAX_IN_FLIGHT    1024    /* don't queue too many concurrently */
>
> +#define WB_MAX_INODES_PER_ISW   116     /* maximum inodes per isw */
> +

Why this number? Please add an explanation here...

>  static atomic_t isw_nr_in_flight = ATOMIC_INIT(0);
>  static struct workqueue_struct *isw_wq;
>
> @@ -552,6 +554,72 @@ static void inode_switch_wbs(struct inode *inode, int new_wb_id)
>  	kfree(isw);
>  }
>
> +/**
> + * cleanup_offline_cgwb - detach associated inodes
> + * @wb: target wb
> + *
> + * Switch all inodes attached to @wb to the bdi's root wb in order to eventually
> + * release the dying @wb. Returns %true if not all inodes were switched and
> + * the function has to be restarted.
> + */
> +bool cleanup_offline_cgwb(struct bdi_writeback *wb)
> +{
> +        struct inode_switch_wbs_context *isw;
> +        struct inode *inode;
> +        int nr;
> +        bool restart = false;
> +
> +        isw = kzalloc(sizeof(*isw) + WB_MAX_INODES_PER_ISW *
> +                      sizeof(struct inode *), GFP_KERNEL);
> +        if (!isw)
> +                return restart;
> +
> +        /* no need to call wb_get() here: bdi's root wb is not refcounted */
> +        isw->new_wb = &wb->bdi->wb;
> +
> +        nr = 0;
> +        spin_lock(&wb->list_lock);
> +        list_for_each_entry(inode, &wb->b_attached, i_io_list) {
> +                spin_lock(&inode->i_lock);
> +                if (!(inode->i_sb->s_flags & SB_ACTIVE) ||
> +                    inode->i_state & (I_WB_SWITCH | I_FREEING) ||
> +                    inode_to_wb(inode) == isw->new_wb) {
> +                        spin_unlock(&inode->i_lock);
> +                        continue;
> +                }
> +                inode->i_state |= I_WB_SWITCH;
> +                __iget(inode);
> +                spin_unlock(&inode->i_lock);

This hunk is identical with the one in inode_switch_wbs(). Maybe create a
helper for it like inode_prepare_wb_switch() or something like that. Also
we need to check for I_WILL_FREE flag as well as I_FREEING (see the code in
iput_final()) - that's actually a bug in inode_switch_wbs() as well so
probably a separate fix for that should come earlier in the series.

> +
> +                isw->inodes[nr++] = inode;

At first it seemed a bit silly to allocate an array of inode pointers when
we have them in the list. But after some thought I agree that dealing with
other switching being triggered from other sources in parallel would be
really difficult so your decision makes sense. Just maybe add an
explanation in a comment somewhere about this design decision.

> +
> +                if (nr >= WB_MAX_INODES_PER_ISW - 1) {
> +                        restart = true;
> +                        break;
> +                }
> +        }
> +        spin_unlock(&wb->list_lock);

...

> +static void cleanup_offline_cgwbs_workfn(struct work_struct *work)
> +{
> +        struct bdi_writeback *wb;
> +        LIST_HEAD(processed);
> +
> +        spin_lock_irq(&cgwb_lock);
> +
> +        while (!list_empty(&offline_cgwbs)) {
> +                wb = list_first_entry(&offline_cgwbs, struct bdi_writeback,
> +                                      offline_node);
> +                list_move(&wb->offline_node, &processed);
> +
> +                if (wb_has_dirty_io(wb))
> +                        continue;

Maybe explain in a comment why skipping wbs with dirty inodes is fine?
Because honestly, I'm not sure... I guess the rationale is that inodes
should get cleaned eventually and if they are getting redirtied, they will
be switched to another wb anyway?

> +
> +                if (!wb_tryget(wb))
> +                        continue;
> +
> +                spin_unlock_irq(&cgwb_lock);
> +                while ((cleanup_offline_cgwb(wb)))
> +                        cond_resched();
> +                spin_lock_irq(&cgwb_lock);
> +
> +                wb_put(wb);
> +        }
> +
> +        if (!list_empty(&processed))
> +                list_splice_tail(&processed, &offline_cgwbs);
> +
> +        spin_unlock_irq(&cgwb_lock);
> +}
> +

								Honza
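For illustration, a helper along the lines suggested above might look like the
sketch below. The name inode_prepare_wb_switch() is only the reviewer's
suggestion, and the body is just the quoted hunk with the proposed I_WILL_FREE
check folded in; it is an untested, kernel-internal sketch (it only builds in
the kernel tree), not the code that was eventually merged.

/*
 * Illustrative only: per-inode checks shared by inode_switch_wbs() and
 * cleanup_offline_cgwb().  Returns true with I_WB_SWITCH set and an inode
 * reference held if the inode can be switched to @new_wb, false otherwise.
 */
static bool inode_prepare_wb_switch(struct inode *inode,
                                    struct bdi_writeback *new_wb)
{
        spin_lock(&inode->i_lock);
        if (!(inode->i_sb->s_flags & SB_ACTIVE) ||
            inode->i_state & (I_WB_SWITCH | I_FREEING | I_WILL_FREE) ||
            inode_to_wb(inode) == new_wb) {
                spin_unlock(&inode->i_lock);
                return false;           /* not a candidate for switching */
        }
        inode->i_state |= I_WB_SWITCH;  /* claim the inode for this switch */
        __iget(inode);                  /* pin it until the switch completes */
        spin_unlock(&inode->i_lock);

        return true;
}

Both call sites would then reduce to something like
"if (!inode_prepare_wb_switch(inode, isw->new_wb)) continue;" inside their
respective loops.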
On Thu, Jun 03, 2021 at 12:02:33PM +0200, Jan Kara wrote:
> On Wed 02-06-21 17:55:17, Roman Gushchin wrote:
> > Asynchronously try to release dying cgwbs by switching attached inodes
> > to the bdi's wb. It helps to get rid of per-cgroup writeback
> > structures themselves and of pinned memory and block cgroups, which
> > are significantly larger structures (mostly due to large per-cpu
> > statistics data). This prevents memory waste and helps to avoid
> > different scalability problems caused by large piles of dying cgroups.
> >
> > Reuse the existing mechanism of inode switching used for foreign inode
> > detection. To speed things up batch up to 115 inode switching in a
> > single operation (the maximum number is selected so that the resulting
> > struct inode_switch_wbs_context can fit into 1024 bytes). Because
> > every switching consists of two steps divided by an RCU grace period,
> > it would be too slow without batching. Please note that the whole
> > batch counts as a single operation (when increasing/decreasing
> > isw_nr_in_flight). This allows to keep umounting working (flush the
> > switching queue), however prevents cleanups from consuming the whole
> > switching quota and effectively blocking the frn switching.
> >
> > A cgwb cleanup operation can fail due to different reasons (e.g. not
> > enough memory, the cgwb has an in-flight/pending io, an attached inode
> > in a wrong state, etc). In this case the next scheduled cleanup will
> > make a new attempt. An attempt is made each time a new cgwb is offlined
> > (in other words a memcg and/or a blkcg is deleted by a user). In the
> > future an additional attempt scheduled by a timer can be implemented.
> >
> > Signed-off-by: Roman Gushchin <guro@fb.com>
>
> I think we are getting close :). Some comments are below.

Great! Thank for reviewing the code!

>
> > ---
> >  fs/fs-writeback.c                | 68 ++++++++++++++++++++++++++++++++
> >  include/linux/backing-dev-defs.h |  1 +
> >  include/linux/writeback.h        |  1 +
> >  mm/backing-dev.c                 | 58 ++++++++++++++++++++++++++-
> >  4 files changed, 126 insertions(+), 2 deletions(-)
> >
> > diff --git a/fs/fs-writeback.c b/fs/fs-writeback.c
> > index 49d7b23a7cfe..e8517ad677eb 100644
> > --- a/fs/fs-writeback.c
> > +++ b/fs/fs-writeback.c
> > @@ -225,6 +225,8 @@ void wb_wait_for_completion(struct wb_completion *done)
> >                                  /* one round can affect upto 5 slots */
> >  #define WB_FRN_MAX_IN_FLIGHT    1024    /* don't queue too many concurrently */
> >
> > +#define WB_MAX_INODES_PER_ISW   116     /* maximum inodes per isw */
> > +
>
> Why this number? Please add an explanation here...

Added.

>
> >  static atomic_t isw_nr_in_flight = ATOMIC_INIT(0);
> >  static struct workqueue_struct *isw_wq;
> >
> > @@ -552,6 +554,72 @@ static void inode_switch_wbs(struct inode *inode, int new_wb_id)
> >  	kfree(isw);
> >  }
> >
> > +/**
> > + * cleanup_offline_cgwb - detach associated inodes
> > + * @wb: target wb
> > + *
> > + * Switch all inodes attached to @wb to the bdi's root wb in order to eventually
> > + * release the dying @wb. Returns %true if not all inodes were switched and
> > + * the function has to be restarted.
> > + */
> > +bool cleanup_offline_cgwb(struct bdi_writeback *wb)
> > +{
> > +        struct inode_switch_wbs_context *isw;
> > +        struct inode *inode;
> > +        int nr;
> > +        bool restart = false;
> > +
> > +        isw = kzalloc(sizeof(*isw) + WB_MAX_INODES_PER_ISW *
> > +                      sizeof(struct inode *), GFP_KERNEL);
> > +        if (!isw)
> > +                return restart;
> > +
> > +        /* no need to call wb_get() here: bdi's root wb is not refcounted */
> > +        isw->new_wb = &wb->bdi->wb;
> > +
> > +        nr = 0;
> > +        spin_lock(&wb->list_lock);
> > +        list_for_each_entry(inode, &wb->b_attached, i_io_list) {
> > +                spin_lock(&inode->i_lock);
> > +                if (!(inode->i_sb->s_flags & SB_ACTIVE) ||
> > +                    inode->i_state & (I_WB_SWITCH | I_FREEING) ||
> > +                    inode_to_wb(inode) == isw->new_wb) {
> > +                        spin_unlock(&inode->i_lock);
> > +                        continue;
> > +                }
> > +                inode->i_state |= I_WB_SWITCH;
> > +                __iget(inode);
> > +                spin_unlock(&inode->i_lock);
>
> This hunk is identical with the one in inode_switch_wbs(). Maybe create a
> helper for it like inode_prepare_wb_switch() or something like that. Also
> we need to check for I_WILL_FREE flag as well as I_FREEING (see the code in
> iput_final()) - that's actually a bug in inode_switch_wbs() as well so
> probably a separate fix for that should come earlier in the series.

Good point, added in v7.

>
> > +
> > +                isw->inodes[nr++] = inode;
>
> At first it seemed a bit silly to allocate an array of inode pointers when
> we have them in the list. But after some thought I agree that dealing with
> other switching being triggered from other sources in parallel would be
> really difficult so your decision makes sense. Just maybe add an
> explanation in a comment somewhere about this design decision.

Added in v7.

>
> > +
> > +                if (nr >= WB_MAX_INODES_PER_ISW - 1) {
> > +                        restart = true;
> > +                        break;
> > +                }
> > +        }
> > +        spin_unlock(&wb->list_lock);
>
> ...
>
> > +static void cleanup_offline_cgwbs_workfn(struct work_struct *work)
> > +{
> > +        struct bdi_writeback *wb;
> > +        LIST_HEAD(processed);
> > +
> > +        spin_lock_irq(&cgwb_lock);
> > +
> > +        while (!list_empty(&offline_cgwbs)) {
> > +                wb = list_first_entry(&offline_cgwbs, struct bdi_writeback,
> > +                                      offline_node);
> > +                list_move(&wb->offline_node, &processed);
> > +
> > +                if (wb_has_dirty_io(wb))
> > +                        continue;
>
> Maybe explain in a comment why skipping wbs with dirty inodes is fine?
> Because honestly, I'm not sure... I guess the rationale is that inodes
> should get cleaned eventually and if they are getting redirtied, they will
> be switched to another wb anyway?

The main rationale here is that the deletion of a memory/blkcg cgroup by
a user shouldn't affect the io distribution. In other words, the remaining
io shouldn't be performed faster than it could be finished had the cgroup
remain existing.
diff --git a/fs/fs-writeback.c b/fs/fs-writeback.c
index 49d7b23a7cfe..e8517ad677eb 100644
--- a/fs/fs-writeback.c
+++ b/fs/fs-writeback.c
@@ -225,6 +225,8 @@ void wb_wait_for_completion(struct wb_completion *done)
                                 /* one round can affect upto 5 slots */
 #define WB_FRN_MAX_IN_FLIGHT    1024    /* don't queue too many concurrently */
 
+#define WB_MAX_INODES_PER_ISW   116     /* maximum inodes per isw */
+
 static atomic_t isw_nr_in_flight = ATOMIC_INIT(0);
 static struct workqueue_struct *isw_wq;
 
@@ -552,6 +554,72 @@ static void inode_switch_wbs(struct inode *inode, int new_wb_id)
 	kfree(isw);
 }
 
+/**
+ * cleanup_offline_cgwb - detach associated inodes
+ * @wb: target wb
+ *
+ * Switch all inodes attached to @wb to the bdi's root wb in order to eventually
+ * release the dying @wb. Returns %true if not all inodes were switched and
+ * the function has to be restarted.
+ */
+bool cleanup_offline_cgwb(struct bdi_writeback *wb)
+{
+        struct inode_switch_wbs_context *isw;
+        struct inode *inode;
+        int nr;
+        bool restart = false;
+
+        isw = kzalloc(sizeof(*isw) + WB_MAX_INODES_PER_ISW *
+                      sizeof(struct inode *), GFP_KERNEL);
+        if (!isw)
+                return restart;
+
+        /* no need to call wb_get() here: bdi's root wb is not refcounted */
+        isw->new_wb = &wb->bdi->wb;
+
+        nr = 0;
+        spin_lock(&wb->list_lock);
+        list_for_each_entry(inode, &wb->b_attached, i_io_list) {
+                spin_lock(&inode->i_lock);
+                if (!(inode->i_sb->s_flags & SB_ACTIVE) ||
+                    inode->i_state & (I_WB_SWITCH | I_FREEING) ||
+                    inode_to_wb(inode) == isw->new_wb) {
+                        spin_unlock(&inode->i_lock);
+                        continue;
+                }
+                inode->i_state |= I_WB_SWITCH;
+                __iget(inode);
+                spin_unlock(&inode->i_lock);
+
+                isw->inodes[nr++] = inode;
+
+                if (nr >= WB_MAX_INODES_PER_ISW - 1) {
+                        restart = true;
+                        break;
+                }
+        }
+        spin_unlock(&wb->list_lock);
+
+        /* no attached inodes? bail out */
+        if (nr == 0) {
+                kfree(isw);
+                return restart;
+        }
+
+        /*
+         * In addition to synchronizing among switchers, I_WB_SWITCH tells
+         * the RCU protected stat update paths to grab the i_page
+         * lock so that stat transfer can synchronize against them.
+         * Let's continue after I_WB_SWITCH is guaranteed to be visible.
+         */
+        INIT_RCU_WORK(&isw->work, inode_switch_wbs_work_fn);
+        queue_rcu_work(isw_wq, &isw->work);
+
+        atomic_inc(&isw_nr_in_flight);
+
+        return restart;
+}
+
 /**
  * wbc_attach_and_unlock_inode - associate wbc with target inode and unlock it
  * @wbc: writeback_control of interest
diff --git a/include/linux/backing-dev-defs.h b/include/linux/backing-dev-defs.h
index e5dc238ebe4f..07d6b6d6dbdf 100644
--- a/include/linux/backing-dev-defs.h
+++ b/include/linux/backing-dev-defs.h
@@ -155,6 +155,7 @@ struct bdi_writeback {
         struct list_head memcg_node;    /* anchored at memcg->cgwb_list */
         struct list_head blkcg_node;    /* anchored at blkcg->cgwb_list */
         struct list_head b_attached;    /* attached inodes, protected by list_lock */
+        struct list_head offline_node;  /* anchored at offline_cgwbs */
 
         union {
                 struct work_struct release_work;
diff --git a/include/linux/writeback.h b/include/linux/writeback.h
index 8e5c5bb16e2d..95de51c10248 100644
--- a/include/linux/writeback.h
+++ b/include/linux/writeback.h
@@ -221,6 +221,7 @@ void wbc_account_cgroup_owner(struct writeback_control *wbc, struct page *page,
 int cgroup_writeback_by_id(u64 bdi_id, int memcg_id, unsigned long nr_pages,
                            enum wb_reason reason, struct wb_completion *done);
 void cgroup_writeback_umount(void);
+bool cleanup_offline_cgwb(struct bdi_writeback *wb);
 
 /**
  * inode_attach_wb - associate an inode with its wb
diff --git a/mm/backing-dev.c b/mm/backing-dev.c
index 54c5dc4b8c24..f1fc04412bd7 100644
--- a/mm/backing-dev.c
+++ b/mm/backing-dev.c
@@ -371,12 +371,16 @@ static void wb_exit(struct bdi_writeback *wb)
 #include <linux/memcontrol.h>
 
 /*
- * cgwb_lock protects bdi->cgwb_tree, blkcg->cgwb_list, and memcg->cgwb_list.
- * bdi->cgwb_tree is also RCU protected.
+ * cgwb_lock protects bdi->cgwb_tree, blkcg->cgwb_list, offline_cgwbs and
+ * memcg->cgwb_list. bdi->cgwb_tree is also RCU protected.
  */
 static DEFINE_SPINLOCK(cgwb_lock);
 static struct workqueue_struct *cgwb_release_wq;
 
+static LIST_HEAD(offline_cgwbs);
+static void cleanup_offline_cgwbs_workfn(struct work_struct *work);
+static DECLARE_WORK(cleanup_offline_cgwbs_work, cleanup_offline_cgwbs_workfn);
+
 static void cgwb_release_workfn(struct work_struct *work)
 {
         struct bdi_writeback *wb = container_of(work, struct bdi_writeback,
@@ -395,6 +399,11 @@ static void cgwb_release_workfn(struct work_struct *work)
         fprop_local_destroy_percpu(&wb->memcg_completions);
         percpu_ref_exit(&wb->refcnt);
+
+        spin_lock_irq(&cgwb_lock);
+        list_del(&wb->offline_node);
+        spin_unlock_irq(&cgwb_lock);
+
         wb_exit(wb);
         WARN_ON_ONCE(!list_empty(&wb->b_attached));
         kfree_rcu(wb, rcu);
@@ -414,6 +423,7 @@ static void cgwb_kill(struct bdi_writeback *wb)
         WARN_ON(!radix_tree_delete(&wb->bdi->cgwb_tree, wb->memcg_css->id));
         list_del(&wb->memcg_node);
         list_del(&wb->blkcg_node);
+        list_add(&wb->offline_node, &offline_cgwbs);
         percpu_ref_kill(&wb->refcnt);
 }
 
@@ -635,6 +645,48 @@ static void cgwb_bdi_unregister(struct backing_dev_info *bdi)
         mutex_unlock(&bdi->cgwb_release_mutex);
 }
 
+/**
+ * cleanup_offline_cgwbs - try to release dying cgwbs
+ *
+ * Try to release dying cgwbs by switching attached inodes to the wb
+ * belonging to the root memory cgroup. Processed wbs are placed at the
+ * end of the list to guarantee the forward progress.
+ *
+ * Should be called with the acquired cgwb_lock lock, which might
+ * be released and re-acquired in the process.
+ */
+static void cleanup_offline_cgwbs_workfn(struct work_struct *work)
+{
+        struct bdi_writeback *wb;
+        LIST_HEAD(processed);
+
+        spin_lock_irq(&cgwb_lock);
+
+        while (!list_empty(&offline_cgwbs)) {
+                wb = list_first_entry(&offline_cgwbs, struct bdi_writeback,
+                                      offline_node);
+                list_move(&wb->offline_node, &processed);
+
+                if (wb_has_dirty_io(wb))
+                        continue;
+
+                if (!wb_tryget(wb))
+                        continue;
+
+                spin_unlock_irq(&cgwb_lock);
+                while ((cleanup_offline_cgwb(wb)))
+                        cond_resched();
+                spin_lock_irq(&cgwb_lock);
+
+                wb_put(wb);
+        }
+
+        if (!list_empty(&processed))
+                list_splice_tail(&processed, &offline_cgwbs);
+
+        spin_unlock_irq(&cgwb_lock);
+}
+
 /**
  * wb_memcg_offline - kill all wb's associated with a memcg being offlined
  * @memcg: memcg being offlined
@@ -651,6 +703,8 @@ void wb_memcg_offline(struct mem_cgroup *memcg)
                 cgwb_kill(wb);
         memcg_cgwb_list->next = NULL;   /* prevent new wb's */
         spin_unlock_irq(&cgwb_lock);
+
+        queue_work(system_unbound_wq, &cleanup_offline_cgwbs_work);
 }
 
 /**
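A small aside on the kzalloc() call in cleanup_offline_cgwb() above: assuming
inodes[] is a flexible array member of struct inode_switch_wbs_context (as the
earlier patches in this series make it), the same allocation is usually
written with the struct_size() helper from <linux/overflow.h>, which folds the
size arithmetic and its overflow check into a single expression. A hedged,
illustrative equivalent of the open-coded arithmetic:

        /* illustrative equivalent of the open-coded size arithmetic above */
        isw = kzalloc(struct_size(isw, inodes, WB_MAX_INODES_PER_ISW),
                      GFP_KERNEL);
        if (!isw)
                return restart;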
Asynchronously try to release dying cgwbs by switching attached inodes
to the bdi's wb. It helps to get rid of per-cgroup writeback
structures themselves and of pinned memory and block cgroups, which
are significantly larger structures (mostly due to large per-cpu
statistics data). This prevents memory waste and helps to avoid
different scalability problems caused by large piles of dying cgroups.

Reuse the existing mechanism of inode switching used for foreign inode
detection. To speed things up batch up to 115 inode switching in a
single operation (the maximum number is selected so that the resulting
struct inode_switch_wbs_context can fit into 1024 bytes). Because
every switching consists of two steps divided by an RCU grace period,
it would be too slow without batching. Please note that the whole
batch counts as a single operation (when increasing/decreasing
isw_nr_in_flight). This allows to keep umounting working (flush the
switching queue), however prevents cleanups from consuming the whole
switching quota and effectively blocking the frn switching.

A cgwb cleanup operation can fail due to different reasons (e.g. not
enough memory, the cgwb has an in-flight/pending io, an attached inode
in a wrong state, etc). In this case the next scheduled cleanup will
make a new attempt. An attempt is made each time a new cgwb is offlined
(in other words a memcg and/or a blkcg is deleted by a user). In the
future an additional attempt scheduled by a timer can be implemented.

Signed-off-by: Roman Gushchin <guro@fb.com>
---
 fs/fs-writeback.c                | 68 ++++++++++++++++++++++++++++++++
 include/linux/backing-dev-defs.h |  1 +
 include/linux/writeback.h        |  1 +
 mm/backing-dev.c                 | 58 ++++++++++++++++++++++++++-
 4 files changed, 126 insertions(+), 2 deletions(-)
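To make the sizing argument above concrete, the sketch below (plain userspace
C, with a stand-in header whose layout does not match the real struct
inode_switch_wbs_context) shows how a batch limit like WB_MAX_INODES_PER_ISW
can be derived from a 1024-byte target allocation. One slot is presumably
left unused so the inodes[] array stays NULL-terminated for the switching
work function, which would explain why the commit message speaks of 115
inodes while the constant is 116.

#include <stdio.h>
#include <stddef.h>

/*
 * Stand-in for the fixed part of struct inode_switch_wbs_context: an
 * rcu_work-sized blob plus the new_wb pointer.  The real header size depends
 * on the kernel configuration, so the numbers printed here are illustrative
 * only, not the exact figures behind the 116/115 values in the patch.
 */
struct mock_isw {
	void *work[7];		/* roughly the size of a struct rcu_work */
	void *new_wb;		/* target bdi_writeback pointer */
	void *inodes[];		/* flexible array of inode pointers */
};

#define TARGET_BYTES	1024	/* desired upper bound for the allocation */

int main(void)
{
	size_t header = sizeof(struct mock_isw);
	size_t slots = (TARGET_BYTES - header) / sizeof(void *);

	/* one slot is reserved as a NULL terminator for the array */
	printf("header: %zu bytes, %zu pointer slots, batch up to %zu inodes\n",
	       header, slots, slots - 1);
	return 0;
}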