
namespace: Use atomic64_inc_return() in alloc_mnt_ns()

Message ID 20241007085303.48312-1-ubizjak@gmail.com (mailing list archive)
State New
Series namespace: Use atomic64_inc_return() in alloc_mnt_ns()

Commit Message

Uros Bizjak Oct. 7, 2024, 8:52 a.m. UTC
Use atomic64_inc_return(&ref) instead of atomic64_add_return(1, &ref)
to use the optimized implementation and to ease register pressure around
the primitive on targets that implement an optimized variant.

Signed-off-by: Uros Bizjak <ubizjak@gmail.com>
Cc: Alexander Viro <viro@zeniv.linux.org.uk>
Cc: Christian Brauner <brauner@kernel.org>
Cc: Jan Kara <jack@suse.cz>
---
 fs/namespace.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

Comments

Christian Brauner Oct. 7, 2024, 11:39 a.m. UTC | #1
On Mon, 07 Oct 2024 10:52:37 +0200, Uros Bizjak wrote:
> Use atomic64_inc_return(&ref) instead of atomic64_add_return(1, &ref)
> to use optimized implementation and ease register pressure around
> the primitive for targets that implement optimized variant.
> 
> 

Applied to the vfs.misc branch of the vfs/vfs.git tree.
Patches in the vfs.misc branch should appear in linux-next soon.

Please report any outstanding bugs that were missed during review in a
reply to the original patch series so that we can drop it.

It's encouraged to provide Acked-bys and Reviewed-bys even though the
patch has now been applied. If possible, patch trailers will be updated.

Note that commit hashes shown below are subject to change due to rebase,
trailer updates or similar. If in doubt, please check the listed branch.

tree:   https://git.kernel.org/pub/scm/linux/kernel/git/vfs/vfs.git
branch: vfs.misc

[1/1] namespace: Use atomic64_inc_return() in alloc_mnt_ns()
      https://git.kernel.org/vfs/vfs/c/26bb6d8535e7
Al Viro Oct. 7, 2024, 2:50 p.m. UTC | #2
On Mon, Oct 07, 2024 at 10:52:37AM +0200, Uros Bizjak wrote:
> Use atomic64_inc_return(&ref) instead of atomic64_add_return(1, &ref)
> to use optimized implementation and ease register pressure around
> the primitive for targets that implement optimized variant.
> 
> Signed-off-by: Uros Bizjak <ubizjak@gmail.com>
> Cc: Alexander Viro <viro@zeniv.linux.org.uk>
> Cc: Christian Brauner <brauner@kernel.org>
> Cc: Jan Kara <jack@suse.cz>
> ---
>  fs/namespace.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/fs/namespace.c b/fs/namespace.c
> index 93c377816d75..9a3c251d033d 100644
> --- a/fs/namespace.c
> +++ b/fs/namespace.c
> @@ -3901,7 +3901,7 @@ static struct mnt_namespace *alloc_mnt_ns(struct user_namespace *user_ns, bool a
>  	}
>  	new_ns->ns.ops = &mntns_operations;
>  	if (!anon)
> -		new_ns->seq = atomic64_add_return(1, &mnt_ns_seq);
> +		new_ns->seq = atomic64_inc_return(&mnt_ns_seq);

On which load do you see that path hot enough for the change to
make any difference???

Seriously, if we have something that manages that, I would like
to know - the same load would be a great way to stress a lot of
stuff in fs/namespace.c and fs/pnode.c...
Christian Brauner Oct. 7, 2024, 2:56 p.m. UTC | #3
On Mon, Oct 07, 2024 at 03:50:34PM GMT, Al Viro wrote:
> On Mon, Oct 07, 2024 at 10:52:37AM +0200, Uros Bizjak wrote:
> > Use atomic64_inc_return(&ref) instead of atomic64_add_return(1, &ref)
> > to use optimized implementation and ease register pressure around
> > the primitive for targets that implement optimized variant.
> > 
> > Signed-off-by: Uros Bizjak <ubizjak@gmail.com>
> > Cc: Alexander Viro <viro@zeniv.linux.org.uk>
> > Cc: Christian Brauner <brauner@kernel.org>
> > Cc: Jan Kara <jack@suse.cz>
> > ---
> >  fs/namespace.c | 2 +-
> >  1 file changed, 1 insertion(+), 1 deletion(-)
> > 
> > diff --git a/fs/namespace.c b/fs/namespace.c
> > index 93c377816d75..9a3c251d033d 100644
> > --- a/fs/namespace.c
> > +++ b/fs/namespace.c
> > @@ -3901,7 +3901,7 @@ static struct mnt_namespace *alloc_mnt_ns(struct user_namespace *user_ns, bool a
> >  	}
> >  	new_ns->ns.ops = &mntns_operations;
> >  	if (!anon)
> > -		new_ns->seq = atomic64_add_return(1, &mnt_ns_seq);
> > +		new_ns->seq = atomic64_inc_return(&mnt_ns_seq);
> 
> On which load do you see that path hot enough for the change to
> make any difference???

I don't think that's really an issue. Imho, *inc_return() is just more
straightforward than the add variant. That can easily be reflected in
the commit message when I push out.
Uros Bizjak Oct. 7, 2024, 3:02 p.m. UTC | #4
On Mon, Oct 7, 2024 at 4:50 PM Al Viro <viro@zeniv.linux.org.uk> wrote:
>
> On Mon, Oct 07, 2024 at 10:52:37AM +0200, Uros Bizjak wrote:
> > Use atomic64_inc_return(&ref) instead of atomic64_add_return(1, &ref)
> > to use optimized implementation and ease register pressure around
> > the primitive for targets that implement optimized variant.
> >
> > Signed-off-by: Uros Bizjak <ubizjak@gmail.com>
> > Cc: Alexander Viro <viro@zeniv.linux.org.uk>
> > Cc: Christian Brauner <brauner@kernel.org>
> > Cc: Jan Kara <jack@suse.cz>
> > ---
> >  fs/namespace.c | 2 +-
> >  1 file changed, 1 insertion(+), 1 deletion(-)
> >
> > diff --git a/fs/namespace.c b/fs/namespace.c
> > index 93c377816d75..9a3c251d033d 100644
> > --- a/fs/namespace.c
> > +++ b/fs/namespace.c
> > @@ -3901,7 +3901,7 @@ static struct mnt_namespace *alloc_mnt_ns(struct user_namespace *user_ns, bool a
> >       }
> >       new_ns->ns.ops = &mntns_operations;
> >       if (!anon)
> > -             new_ns->seq = atomic64_add_return(1, &mnt_ns_seq);
> > +             new_ns->seq = atomic64_inc_return(&mnt_ns_seq);
>
> On which load do you see that path hot enough for the change to
> make any difference???

It is not a performance improvement but a code-size improvement, as
stated in the commit message.

The difference on x86_32 (which implements an optimized atomic64_inc_return()) is:

     eeb:    b8 01 00 00 00           mov    $0x1,%eax
     ef0:    31 d2                    xor    %edx,%edx
     ef2:    b9 20 00 00 00           mov    $0x20,%ecx
            ef3: R_386_32    .data
     ef7:    e8 fc ff ff ff           call   ef8 <alloc_mnt_ns+0xd0>
            ef8: R_386_PC32    atomic64_add_return_cx8
     efc:    89 46 20                 mov    %eax,0x20(%esi)
     eff:    89 56 24                 mov    %edx,0x24(%esi)

vs:

     eeb:    be 20 00 00 00           mov    $0x20,%esi
            eec: R_386_32    .data
     ef0:    e8 fc ff ff ff           call   ef1 <alloc_mnt_ns+0xc9>
            ef1: R_386_PC32    atomic64_inc_return_cx8
     ef5:    89 43 20                 mov    %eax,0x20(%ebx)
     ef8:    89 53 24                 mov    %edx,0x24(%ebx)

Uros.

Patch

diff --git a/fs/namespace.c b/fs/namespace.c
index 93c377816d75..9a3c251d033d 100644
--- a/fs/namespace.c
+++ b/fs/namespace.c
@@ -3901,7 +3901,7 @@  static struct mnt_namespace *alloc_mnt_ns(struct user_namespace *user_ns, bool a
 	}
 	new_ns->ns.ops = &mntns_operations;
 	if (!anon)
-		new_ns->seq = atomic64_add_return(1, &mnt_ns_seq);
+		new_ns->seq = atomic64_inc_return(&mnt_ns_seq);
 	refcount_set(&new_ns->ns.count, 1);
 	refcount_set(&new_ns->passive, 1);
 	new_ns->mounts = RB_ROOT;