
fs: use xarray for old mount id

Message ID: 20241217-erhielten-regung-44bb1604ca8f@brauner
State: New
Series: fs: use xarray for old mount id

Commit Message

Christian Brauner Dec. 17, 2024, 12:21 p.m. UTC
While the ida uses an xarray internally, we can use the xarray
explicitly. This allows us to increment the unique 64-bit mount id
under the xa lock and to drop the atomic, since both ids are now
allocated in one go.
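
For illustration, the pattern this moves to looks roughly like the
sketch below (hypothetical struct obj and obj_* names; the actual
change is in the patch further down). Both ids are handed out while
holding the xarray's spinlock, so the 64-bit counter needs no atomic:

    #include <linux/xarray.h>

    struct obj {
            u32 id;
            u64 id_unique;
    };

    static DEFINE_XARRAY_FLAGS(obj_id_xa, XA_FLAGS_ALLOC);
    static u64 obj_id_ctr;  /* serialized by obj_id_xa's lock */

    static int obj_alloc_ids(struct obj *o)
    {
            int res;

            xa_lock(&obj_id_xa);
            /* __xa_alloc() may drop and retake the lock internally
             * to satisfy a GFP_KERNEL allocation. */
            res = __xa_alloc(&obj_id_xa, &o->id, o,
                             XA_LIMIT(1, INT_MAX), GFP_KERNEL);
            if (!res)
                    o->id_unique = ++obj_id_ctr;
            xa_unlock(&obj_id_xa);
            return res;
    }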

Signed-off-by: Christian Brauner <brauner@kernel.org>
---
 fs/namespace.c | 17 +++++++++--------
 1 file changed, 9 insertions(+), 8 deletions(-)

Comments

Miklos Szeredi Dec. 17, 2024, 12:35 p.m. UTC | #1
On Tue, 17 Dec 2024 at 13:23, Christian Brauner <brauner@kernel.org> wrote:

> @@ -270,18 +270,19 @@ static inline struct hlist_head *mp_hash(struct dentry *dentry)
>
>  static int mnt_alloc_id(struct mount *mnt)
>  {
> -       int res = ida_alloc(&mnt_id_ida, GFP_KERNEL);
> +       int res;
>
> -       if (res < 0)
> -               return res;
> -       mnt->mnt_id = res;
> -       mnt->mnt_id_unique = atomic64_inc_return(&mnt_id_ctr);
> +       xa_lock(&mnt_id_xa);
> +       res = __xa_alloc(&mnt_id_xa, &mnt->mnt_id, mnt, XA_LIMIT(1, INT_MAX), GFP_KERNEL);

This uses a different allocation strategy, right?  That would be a
user-visible change, which is somewhat risky.

> +       if (!res)
> +               mnt->mnt_id_unique = ++mnt_id_ctr;
> +       xa_unlock(&mnt_id_xa);
>         return 0;

        return res;

Thanks,
Miklos
Christian Brauner Dec. 17, 2024, 1:52 p.m. UTC | #2
On Tue, Dec 17, 2024 at 01:35:09PM +0100, Miklos Szeredi wrote:
> On Tue, 17 Dec 2024 at 13:23, Christian Brauner <brauner@kernel.org> wrote:
> 
> > @@ -270,18 +270,19 @@ static inline struct hlist_head *mp_hash(struct dentry *dentry)
> >
> >  static int mnt_alloc_id(struct mount *mnt)
> >  {
> > -       int res = ida_alloc(&mnt_id_ida, GFP_KERNEL);
> > +       int res;
> >
> > -       if (res < 0)
> > -               return res;
> > -       mnt->mnt_id = res;
> > -       mnt->mnt_id_unique = atomic64_inc_return(&mnt_id_ctr);
> > +       xa_lock(&mnt_id_xa);
> > +       res = __xa_alloc(&mnt_id_xa, &mnt->mnt_id, mnt, XA_LIMIT(1, INT_MAX), GFP_KERNEL);
> 
> This uses a different allocation strategy, right?  That would be a
> user visible change, which is somewhat risky.

Maybe, but afaict, xa_alloc() just uses the first available key,
similar to ida_alloc(). A while ago I even asked Karel whether he would
mind allocating the old mount id cyclically via ida_alloc_cyclic() and
he said he wouldn't care and it wouldn't matter (to him at least). I
doubt that userspace expects mount ids to be in any specific sequence.
A long time ago we even did cyclic allocation and switched to
non-cyclic allocation. Right now, if I mount and unmount immediately
afterwards and no one managed to get their mount in between, I get the
same id assigned. That's true of xa_alloc() as well from my testing.
So I think we can just risk it.
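
To make the distinction concrete, here is a hypothetical snippet
(demo_xa and demo() are made up, return values ignored for brevity),
using the xarray's real cyclic API, xa_alloc_cyclic(), to stand in for
the cyclic strategy. First-fit reuses the lowest free id immediately,
while the cyclic variant keeps a cursor and moves past ids that were
just freed:

    #include <linux/xarray.h>

    static DEFINE_XARRAY_FLAGS(demo_xa, XA_FLAGS_ALLOC);

    static void demo(void *entry)
    {
            u32 id, cursor = 0;

            /* First-fit: a freed id is handed out again right away. */
            xa_alloc(&demo_xa, &id, entry,
                     XA_LIMIT(1, INT_MAX), GFP_KERNEL);  /* id == 1 */
            xa_erase(&demo_xa, id);
            xa_alloc(&demo_xa, &id, entry,
                     XA_LIMIT(1, INT_MAX), GFP_KERNEL);  /* id == 1 again */
            xa_erase(&demo_xa, id);

            /* Cyclic: the cursor advances past the last id,
             * even though 1 is free again. */
            xa_alloc_cyclic(&demo_xa, &id, entry, XA_LIMIT(1, INT_MAX),
                            &cursor, GFP_KERNEL);  /* id == 1 */
            xa_erase(&demo_xa, id);
            xa_alloc_cyclic(&demo_xa, &id, entry, XA_LIMIT(1, INT_MAX),
                            &cursor, GFP_KERNEL);  /* id == 2, not 1 */
    }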

> 
> > +       if (!res)
> > +               mnt->mnt_id_unique = ++mnt_id_ctr;
> > +       xa_unlock(&mnt_id_xa);
> >         return 0;
> 
>         return res;

Bah, thanks. Fixed.
Miklos Szeredi Dec. 17, 2024, 2:36 p.m. UTC | #3
On Tue, 17 Dec 2024 at 14:52, Christian Brauner <brauner@kernel.org> wrote:

> Right now, if I mount and unmount immediately afterwards and no one
> managed to get their mount in between, I get the same id assigned.
> That's true of xa_alloc() as well from my testing. So I think we can
> just risk it.

Okay.

Maybe worth a mention in the patch header.

Thanks,
Miklos

Patch

diff --git a/fs/namespace.c b/fs/namespace.c
index d99a3c2c5e5c..e102aaaa065c 100644
--- a/fs/namespace.c
+++ b/fs/namespace.c
@@ -65,12 +65,12 @@  static int __init set_mphash_entries(char *str)
 __setup("mphash_entries=", set_mphash_entries);
 
 static u64 event;
-static DEFINE_IDA(mnt_id_ida);
+static DEFINE_XARRAY_FLAGS(mnt_id_xa, XA_FLAGS_ALLOC);
 static DEFINE_IDA(mnt_group_ida);
 
 /* Don't allow confusion with old 32bit mount ID */
 #define MNT_UNIQUE_ID_OFFSET (1ULL << 31)
-static atomic64_t mnt_id_ctr = ATOMIC64_INIT(MNT_UNIQUE_ID_OFFSET);
+static u64 mnt_id_ctr = MNT_UNIQUE_ID_OFFSET;
 
 static struct hlist_head *mount_hashtable __ro_after_init;
 static struct hlist_head *mountpoint_hashtable __ro_after_init;
@@ -270,18 +270,19 @@  static inline struct hlist_head *mp_hash(struct dentry *dentry)
 
 static int mnt_alloc_id(struct mount *mnt)
 {
-	int res = ida_alloc(&mnt_id_ida, GFP_KERNEL);
+	int res;
 
-	if (res < 0)
-		return res;
-	mnt->mnt_id = res;
-	mnt->mnt_id_unique = atomic64_inc_return(&mnt_id_ctr);
+	xa_lock(&mnt_id_xa);
+	res = __xa_alloc(&mnt_id_xa, &mnt->mnt_id, mnt, XA_LIMIT(1, INT_MAX), GFP_KERNEL);
+	if (!res)
+		mnt->mnt_id_unique = ++mnt_id_ctr;
+	xa_unlock(&mnt_id_xa);
 	return 0;
 }
 
 static void mnt_free_id(struct mount *mnt)
 {
-	ida_free(&mnt_id_ida, mnt->mnt_id);
+	xa_erase(&mnt_id_xa, mnt->mnt_id);
 }
 
 /*