Message ID: 20240611101633.507101-2-mjguzik@gmail.com (mailing list archive)
State: New
Series: rcu-based inode lookup for iget*
On Tue 11-06-24 12:16:31, Mateusz Guzik wrote:
> Instantiating a new inode normally takes the global inode hash lock
> twice:
> 1. once to check if it happens to already be present
> 2. once to add it to the hash
>
> The back-to-back lock/unlock pattern is known to degrade performance
> significantly, which is further exacerbated if the hash is heavily
> populated (long chains to walk, extending hold time). Arguably hash
> sizing and hashing algo need to be revisited, but that's beyond the
> scope of this patch.
>
> A long term fix would introduce finer-grained locking. An attempt was
> made several times, most recently in [1], but the effort appears
> stalled.
>
> A simpler idea which solves the majority of the problem and which may
> be good enough for the time being is to use RCU for the initial
> lookup. Basic RCU support is already present in the hash. This being
> a temporary measure I tried to keep the change as small as possible.
>
> iget_locked consumers (notably ext4) get away without any changes
> because the inode comparison method is built-in.
>
> iget5_locked and ilookup5_nowait consumers pass a custom callback.
> Since removal of locking adds more problems (the inode can be
> changing) it's not safe to assume all filesystems happen to cope.
> Thus iget5_locked_rcu, ilookup5_rcu and ilookup5_nowait_rcu get
> added, requiring manual conversion.
>
> In order to reduce code duplication find_inode and find_inode_fast
> grow an argument indicating whether the inode hash lock is held,
> which is passed down should sleeping be necessary. They always
> rcu_read_lock, which is redundant but harmless. Doing it
> conditionally reduces readability for no real gain that I can see.
> RCU-alike restrictions were already put on callbacks due to the hash
> spinlock being held.
>
> There is a real cache-busting workload scanning millions of files in
> parallel (it's a backup server thing), where the initial lookup is
> guaranteed to fail resulting in the 2 lock acquires.
>
> Implemented below is a synthetic benchmark which provides the same
> behavior. [I shall note the workload is not running on Linux, instead
> it was causing trouble elsewhere. The benchmark below was used while
> addressing said problems and was found to adequately represent the
> real workload.]
>
> Total real time fluctuates by 1-2s.
>
> With 20 threads each walking a dedicated 1000 dirs * 1000 files
> directory tree to stat(2) on a 32 core + 24GB RAM vm:
>
> ext4 (needed mkfs.ext4 -N 24000000):
> before: 3.77s user 890.90s system 1939% cpu 46.118 total
> after:  3.24s user 397.73s system 1858% cpu 21.581 total (-53%)
>
> Benchmark can be found here: https://people.freebsd.org/~mjg/fstree.tgz
>
> [1] https://lore.kernel.org/all/20231206060629.2827226-1-david@fromorbit.com/
>
> Signed-off-by: Mateusz Guzik <mjguzik@gmail.com>

Nice speedups and the patch looks good to me. It would be lovely to get
Dave's speedups finished but this is already nice. I've found just two
nits:

> +/**
> + * ilookup5 - search for an inode in the inode cache
       ^^^ ilookup5_rcu
> + * @sb: super block of file system to search
> + * @hashval: hash value (usually inode number) to search for
> + * @test: callback used for comparisons between inodes
> + * @data: opaque data pointer to pass to @test
> + *
> + * This is equivalent to ilookup5, except the @test callback must
> + * tolerate the inode not being stable, including being mid-teardown.
> + */
...
> +struct inode *ilookup5_nowait_rcu(struct super_block *sb, unsigned long hashval,
> +		int (*test)(struct inode *, void *), void *data);

I'd prefer wrapping the above so that it fits into 80 columns.

Otherwise feel free to add:

Reviewed-by: Jan Kara <jack@suse.cz>

								Honza
On Tue, Jun 11, 2024 at 12:50:11PM +0200, Jan Kara wrote:
> On Tue 11-06-24 12:16:31, Mateusz Guzik wrote:
> > +/**
> > + * ilookup5 - search for an inode in the inode cache
> ^^^ ilookup5_rcu

fixed in my branch

> > + * @sb: super block of file system to search
> > + * @hashval: hash value (usually inode number) to search for
> > + * @test: callback used for comparisons between inodes
> > + * @data: opaque data pointer to pass to @test
> > + *
> > + * This is equivalent to ilookup5, except the @test callback must
> > + * tolerate the inode not being stable, including being mid-teardown.
> > + */
> ...
> > +struct inode *ilookup5_nowait_rcu(struct super_block *sb, unsigned long hashval,
> > +		int (*test)(struct inode *, void *), void *data);
>
> I'd prefer wrapping the above so that it fits into 80 columns.

the last comma is precisely at 80, but i can wrap it if you insist

> Otherwise feel free to add:
>
> Reviewed-by: Jan Kara <jack@suse.cz>

thanks

I'm going to wait for more feedback, tweak the commit message to stress
that this goes from 2 hash lock acquires to 1, maybe fix some typos and
submit a v4.

past that if people want something faster they are welcome to implement
or carry it over the finish line themselves.

> 								Honza
> --
> Jan Kara <jack@suse.com>
> SUSE Labs, CR
On Tue, Jun 11, 2024 at 01:40:37PM +0200, Mateusz Guzik wrote:
[...]
> I'm going to wait for more feedback, tweak the commit message to stress
> that this goes from 2 hash lock acquires to 1, maybe fix some typos and
> submit a v4.
>
> past that if people want something faster they are welcome to implement
> or carry it over the finish line themselves.

I'm generally fine with this but I would think that we shouldn't add all
these helpers without any users. I'm not trying to make this a chicken
and egg problem though. Let's get the blessing from Josef to convert
btrfs to that *_rcu variant and then we can add that helper. Additional
helpers can follow as needed? @Jan, thoughts?
On Tue, Jun 11, 2024 at 3:04 PM Christian Brauner <brauner@kernel.org> wrote:
[...]
> I'm generally fine with this but I would think that we shouldn't add all
> these helpers without any users. I'm not trying to make this a chicken
> and egg problem though. Let's get the blessing from Josef to convert
> btrfs to that *_rcu variant and then we can add that helper. Additional
> helpers can follow as needed? @Jan, thoughts?

That's basically v1 of the patch (modulo other changes like
EXPORT_SYMBOL_GPL).

It only has iget5_locked_rcu for btrfs and ilookup5_rcu for bcachefs,
which has since turned out to not use it.

Jan wanted iget5_locked_rcu to follow iget5_locked in style, hence I
ended up with 3 helpers instead of 1.

I am very much in favor of whacking the extra code and making
iget5_locked_rcu internals look like they did in v1.

For reference that's here:
https://lore.kernel.org/linux-fsdevel/20240606140515.216424-1-mjguzik@gmail.com/
On Tue, Jun 11, 2024 at 03:13:45PM +0200, Mateusz Guzik wrote:
> On Tue, Jun 11, 2024 at 3:04 PM Christian Brauner <brauner@kernel.org> wrote:
[...]
> > I'm generally fine with this but I would think that we shouldn't add all
> > these helpers without any users. I'm not trying to make this a chicken
> > and egg problem though. Let's get the blessing from Josef to convert
> > btrfs to that *_rcu variant and then we can add that helper. Additional
> > helpers can follow as needed? @Jan, thoughts?
>
> That's basically v1 of the patch (modulo other changes like
> EXPORT_SYMBOL_GPL).
>
> It only has iget5_locked_rcu for btrfs and ilookup5_rcu for bcachefs,
> which has since turned out to not use it.
>
> Jan wanted iget5_locked_rcu to follow iget5_locked in style, hence I
> ended up with 3 helpers instead of 1.

We don't need any extra APIs if you just convert the inode cache to
using hash-bl rather than just converting lookups to RCU. Everyone
automatically gets all the extra scalability improvements and no new
interfaces are needed at all.

-Dave.
diff --git a/fs/inode.c b/fs/inode.c
index 3a41f83a4ba5..95a093c257ad 100644
--- a/fs/inode.c
+++ b/fs/inode.c
@@ -886,36 +886,45 @@ long prune_icache_sb(struct super_block *sb, struct shrink_control *sc)
 	return freed;
 }
 
-static void __wait_on_freeing_inode(struct inode *inode);
+static void __wait_on_freeing_inode(struct inode *inode, bool locked);
 
 /*
  * Called with the inode lock held.
  */
 static struct inode *find_inode(struct super_block *sb,
 				struct hlist_head *head,
 				int (*test)(struct inode *, void *),
-				void *data)
+				void *data, bool locked)
 {
 	struct inode *inode = NULL;
 
+	if (locked)
+		lockdep_assert_held(&inode_hash_lock);
+	else
+		lockdep_assert_not_held(&inode_hash_lock);
+
+	rcu_read_lock();
 repeat:
-	hlist_for_each_entry(inode, head, i_hash) {
+	hlist_for_each_entry_rcu(inode, head, i_hash) {
 		if (inode->i_sb != sb)
 			continue;
 		if (!test(inode, data))
 			continue;
 		spin_lock(&inode->i_lock);
 		if (inode->i_state & (I_FREEING|I_WILL_FREE)) {
-			__wait_on_freeing_inode(inode);
+			__wait_on_freeing_inode(inode, locked);
 			goto repeat;
 		}
 		if (unlikely(inode->i_state & I_CREATING)) {
 			spin_unlock(&inode->i_lock);
+			rcu_read_unlock();
 			return ERR_PTR(-ESTALE);
 		}
 		__iget(inode);
 		spin_unlock(&inode->i_lock);
+		rcu_read_unlock();
 		return inode;
 	}
+	rcu_read_unlock();
 	return NULL;
 }
 
@@ -924,29 +933,39 @@ static struct inode *find_inode(struct super_block *sb,
  * iget_locked for details.
  */
 static struct inode *find_inode_fast(struct super_block *sb,
-				struct hlist_head *head, unsigned long ino)
+				struct hlist_head *head, unsigned long ino,
+				bool locked)
 {
 	struct inode *inode = NULL;
 
+	if (locked)
+		lockdep_assert_held(&inode_hash_lock);
+	else
+		lockdep_assert_not_held(&inode_hash_lock);
+
+	rcu_read_lock();
 repeat:
-	hlist_for_each_entry(inode, head, i_hash) {
+	hlist_for_each_entry_rcu(inode, head, i_hash) {
 		if (inode->i_ino != ino)
 			continue;
 		if (inode->i_sb != sb)
 			continue;
 		spin_lock(&inode->i_lock);
 		if (inode->i_state & (I_FREEING|I_WILL_FREE)) {
-			__wait_on_freeing_inode(inode);
+			__wait_on_freeing_inode(inode, locked);
 			goto repeat;
 		}
 		if (unlikely(inode->i_state & I_CREATING)) {
 			spin_unlock(&inode->i_lock);
+			rcu_read_unlock();
 			return ERR_PTR(-ESTALE);
 		}
 		__iget(inode);
 		spin_unlock(&inode->i_lock);
+		rcu_read_unlock();
 		return inode;
 	}
+	rcu_read_unlock();
 	return NULL;
 }
 
@@ -1161,7 +1180,7 @@ struct inode *inode_insert5(struct inode *inode, unsigned long hashval,
 again:
 	spin_lock(&inode_hash_lock);
-	old = find_inode(inode->i_sb, head, test, data);
+	old = find_inode(inode->i_sb, head, test, data, true);
 	if (unlikely(old)) {
 		/*
 		 * Uhhuh, somebody else created the same inode under us.
@@ -1245,6 +1264,37 @@ struct inode *iget5_locked(struct super_block *sb, unsigned long hashval,
 }
 EXPORT_SYMBOL(iget5_locked);
 
+/**
+ * iget5_locked_rcu - obtain an inode from a mounted file system
+ * @sb: super block of file system
+ * @hashval: hash value (usually inode number) to get
+ * @test: callback used for comparisons between inodes
+ * @set: callback used to initialize a new struct inode
+ * @data: opaque data pointer to pass to @test and @set
+ *
+ * This is equivalent to iget5, except the @test callback must
+ * tolerate the inode not being stable, including being mid-teardown.
+ */
+struct inode *iget5_locked_rcu(struct super_block *sb, unsigned long hashval,
+		int (*test)(struct inode *, void *),
+		int (*set)(struct inode *, void *), void *data)
+{
+	struct inode *inode = ilookup5_rcu(sb, hashval, test, data);
+
+	if (!inode) {
+		struct inode *new = alloc_inode(sb);
+
+		if (new) {
+			new->i_state = 0;
+			inode = inode_insert5(new, hashval, test, set, data);
+			if (unlikely(inode != new))
+				destroy_inode(new);
+		}
+	}
+	return inode;
+}
+EXPORT_SYMBOL_GPL(iget5_locked_rcu);
+
 /**
  * iget_locked - obtain an inode from a mounted file system
  * @sb: super block of file system
@@ -1263,9 +1313,7 @@ struct inode *iget_locked(struct super_block *sb, unsigned long ino)
 	struct hlist_head *head = inode_hashtable + hash(sb, ino);
 	struct inode *inode;
 again:
-	spin_lock(&inode_hash_lock);
-	inode = find_inode_fast(sb, head, ino);
-	spin_unlock(&inode_hash_lock);
+	inode = find_inode_fast(sb, head, ino, false);
 	if (inode) {
 		if (IS_ERR(inode))
 			return NULL;
@@ -1283,7 +1331,7 @@ struct inode *iget_locked(struct super_block *sb, unsigned long ino)
 
 		spin_lock(&inode_hash_lock);
 		/* We released the lock, so.. */
-		old = find_inode_fast(sb, head, ino);
+		old = find_inode_fast(sb, head, ino, true);
 		if (!old) {
 			inode->i_ino = ino;
 			spin_lock(&inode->i_lock);
@@ -1419,13 +1467,35 @@ struct inode *ilookup5_nowait(struct super_block *sb, unsigned long hashval,
 	struct inode *inode;
 
 	spin_lock(&inode_hash_lock);
-	inode = find_inode(sb, head, test, data);
+	inode = find_inode(sb, head, test, data, true);
 	spin_unlock(&inode_hash_lock);
 
 	return IS_ERR(inode) ? NULL : inode;
 }
 EXPORT_SYMBOL(ilookup5_nowait);
 
+/**
+ * ilookup5_nowait_rcu - search for an inode in the inode cache
+ * @sb: super block of file system to search
+ * @hashval: hash value (usually inode number) to search for
+ * @test: callback used for comparisons between inodes
+ * @data: opaque data pointer to pass to @test
+ *
+ * This is equivalent to ilookup5_nowait, except the @test callback must
+ * tolerate the inode not being stable, including being mid-teardown.
+ */
+struct inode *ilookup5_nowait_rcu(struct super_block *sb, unsigned long hashval,
+		int (*test)(struct inode *, void *), void *data)
+{
+	struct hlist_head *head = inode_hashtable + hash(sb, hashval);
+	struct inode *inode;
+
+	inode = find_inode(sb, head, test, data, false);
+
+	return IS_ERR(inode) ? NULL : inode;
+}
+EXPORT_SYMBOL_GPL(ilookup5_nowait_rcu);
+
 /**
  * ilookup5 - search for an inode in the inode cache
  * @sb: super block of file system to search
@@ -1460,6 +1530,33 @@ struct inode *ilookup5(struct super_block *sb, unsigned long hashval,
 }
 EXPORT_SYMBOL(ilookup5);
 
+/**
+ * ilookup5 - search for an inode in the inode cache
+ * @sb: super block of file system to search
+ * @hashval: hash value (usually inode number) to search for
+ * @test: callback used for comparisons between inodes
+ * @data: opaque data pointer to pass to @test
+ *
+ * This is equivalent to ilookup5, except the @test callback must
+ * tolerate the inode not being stable, including being mid-teardown.
+ */
+struct inode *ilookup5_rcu(struct super_block *sb, unsigned long hashval,
+		int (*test)(struct inode *, void *), void *data)
+{
+	struct inode *inode;
+again:
+	inode = ilookup5_nowait_rcu(sb, hashval, test, data);
+	if (inode) {
+		wait_on_inode(inode);
+		if (unlikely(inode_unhashed(inode))) {
+			iput(inode);
+			goto again;
+		}
+	}
+	return inode;
+}
+EXPORT_SYMBOL_GPL(ilookup5_rcu);
+
 /**
  * ilookup - search for an inode in the inode cache
  * @sb: super block of file system to search
@@ -1474,7 +1571,7 @@ struct inode *ilookup(struct super_block *sb, unsigned long ino)
 	struct inode *inode;
 again:
 	spin_lock(&inode_hash_lock);
-	inode = find_inode_fast(sb, head, ino);
+	inode = find_inode_fast(sb, head, ino, true);
 	spin_unlock(&inode_hash_lock);
 
 	if (inode) {
@@ -2235,17 +2332,21 @@ EXPORT_SYMBOL(inode_needs_sync);
  * wake_up_bit(&inode->i_state, __I_NEW) after removing from the hash list
  * will DTRT.
  */
-static void __wait_on_freeing_inode(struct inode *inode)
+static void __wait_on_freeing_inode(struct inode *inode, bool locked)
 {
 	wait_queue_head_t *wq;
 	DEFINE_WAIT_BIT(wait, &inode->i_state, __I_NEW);
 	wq = bit_waitqueue(&inode->i_state, __I_NEW);
 	prepare_to_wait(wq, &wait.wq_entry, TASK_UNINTERRUPTIBLE);
 	spin_unlock(&inode->i_lock);
-	spin_unlock(&inode_hash_lock);
+	rcu_read_unlock();
+	if (locked)
+		spin_unlock(&inode_hash_lock);
 	schedule();
 	finish_wait(wq, &wait.wq_entry);
-	spin_lock(&inode_hash_lock);
+	if (locked)
+		spin_lock(&inode_hash_lock);
+	rcu_read_lock();
 }
 
 static __initdata unsigned long ihash_entries;
diff --git a/include/linux/fs.h b/include/linux/fs.h
index bfc1e6407bf6..9d4109fd22c9 100644
--- a/include/linux/fs.h
+++ b/include/linux/fs.h
@@ -3037,15 +3037,25 @@ extern void d_mark_dontcache(struct inode *inode);
 extern struct inode *ilookup5_nowait(struct super_block *sb,
 		unsigned long hashval, int (*test)(struct inode *, void *),
 		void *data);
+struct inode *ilookup5_nowait_rcu(struct super_block *sb, unsigned long hashval,
+		int (*test)(struct inode *, void *), void *data);
 extern struct inode *ilookup5(struct super_block *sb, unsigned long hashval,
 		int (*test)(struct inode *, void *), void *data);
+struct inode *ilookup5_rcu(struct super_block *sb, unsigned long hashval,
+		int (*test)(struct inode *, void *), void *data);
+
 extern struct inode *ilookup(struct super_block *sb, unsigned long ino);
 
 extern struct inode *inode_insert5(struct inode *inode, unsigned long hashval,
 		int (*test)(struct inode *, void *),
 		int (*set)(struct inode *, void *), void *data);
-extern struct inode * iget5_locked(struct super_block *, unsigned long, int (*test)(struct inode *, void *), int (*set)(struct inode *, void *), void *);
+struct inode *iget5_locked(struct super_block *, unsigned long,
+		int (*test)(struct inode *, void *),
+		int (*set)(struct inode *, void *), void *);
+struct inode *iget5_locked_rcu(struct super_block *, unsigned long,
+		int (*test)(struct inode *, void *),
+		int (*set)(struct inode *, void *), void *);
 extern struct inode * iget_locked(struct super_block *, unsigned long);
 extern struct inode *find_inode_nowait(struct super_block *, unsigned long,
Instantiating a new inode normally takes the global inode hash lock
twice:
1. once to check if it happens to already be present
2. once to add it to the hash

The back-to-back lock/unlock pattern is known to degrade performance
significantly, which is further exacerbated if the hash is heavily
populated (long chains to walk, extending hold time). Arguably hash
sizing and hashing algo need to be revisited, but that's beyond the
scope of this patch.

A long term fix would introduce finer-grained locking. An attempt was
made several times, most recently in [1], but the effort appears
stalled.

A simpler idea which solves the majority of the problem and which may be
good enough for the time being is to use RCU for the initial lookup.
Basic RCU support is already present in the hash. This being a temporary
measure I tried to keep the change as small as possible.

iget_locked consumers (notably ext4) get away without any changes
because the inode comparison method is built-in.

iget5_locked and ilookup5_nowait consumers pass a custom callback. Since
removal of locking adds more problems (the inode can be changing) it's
not safe to assume all filesystems happen to cope. Thus
iget5_locked_rcu, ilookup5_rcu and ilookup5_nowait_rcu get added,
requiring manual conversion.

In order to reduce code duplication find_inode and find_inode_fast grow
an argument indicating whether the inode hash lock is held, which is
passed down should sleeping be necessary. They always rcu_read_lock,
which is redundant but harmless. Doing it conditionally reduces
readability for no real gain that I can see. RCU-alike restrictions were
already put on callbacks due to the hash spinlock being held.

There is a real cache-busting workload scanning millions of files in
parallel (it's a backup server thing), where the initial lookup is
guaranteed to fail resulting in the 2 lock acquires.

Implemented below is a synthetic benchmark which provides the same
behavior. [I shall note the workload is not running on Linux, instead it
was causing trouble elsewhere. The benchmark below was used while
addressing said problems and was found to adequately represent the real
workload.]

Total real time fluctuates by 1-2s.

With 20 threads each walking a dedicated 1000 dirs * 1000 files
directory tree to stat(2) on a 32 core + 24GB RAM vm:

ext4 (needed mkfs.ext4 -N 24000000):
before: 3.77s user 890.90s system 1939% cpu 46.118 total
after:  3.24s user 397.73s system 1858% cpu 21.581 total (-53%)

Benchmark can be found here: https://people.freebsd.org/~mjg/fstree.tgz

[1] https://lore.kernel.org/all/20231206060629.2827226-1-david@fromorbit.com/

Signed-off-by: Mateusz Guzik <mjguzik@gmail.com>
---
 fs/inode.c         | 135 +++++++++++++++++++++++++++++++++++++++------
 include/linux/fs.h |  12 +++-
 2 files changed, 129 insertions(+), 18 deletions(-)