
[v5,00/14] ceph: support idmapped mounts

Message ID 20230608154256.562906-1-aleksandr.mikhalitsyn@canonical.com (mailing list archive)

Message

Alexander Mikhalitsyn June 8, 2023, 3:42 p.m. UTC
Dear friends,

This patchset was originally developed by Christian Brauner but I'll continue
to push it forward. Christian allowed me to do that :)

This feature is already actively used and tested by the LXD/LXC projects.

Git tree (based on https://github.com/ceph/ceph-client.git master):
v5: https://github.com/mihalicyn/linux/commits/fs.idmapped.ceph.v5
current: https://github.com/mihalicyn/linux/tree/fs.idmapped.ceph

In version 3 I've changed only two commits:
- fs: export mnt_idmap_get/mnt_idmap_put
- ceph: allow idmapped setattr inode op
and added a new one:
- ceph: pass idmap to __ceph_setattr

In version 4 I've reworked the ("ceph: stash idmapping in mdsc request")
commit. Now we take the idmap refcount right where req->r_mnt_idmap
is filled. It's a safer approach and prevents a possible refcount underflow
on error paths where __register_request wasn't called but ceph_mdsc_release_request
is called.

Changelog for version 5:
- a few commits were squashed into one (as suggested by Xiubo Li)
- started passing the idmapping everywhere possible, so the caller's
UID/GID will be mapped almost everywhere (as suggested by Xiubo Li)

I can confirm that this version passes xfstests.

Links to previous versions:
v1: https://lore.kernel.org/all/20220104140414.155198-1-brauner@kernel.org/
v2: https://lore.kernel.org/lkml/20230524153316.476973-1-aleksandr.mikhalitsyn@canonical.com/
tree: https://github.com/mihalicyn/linux/commits/fs.idmapped.ceph.v2
v3: https://lore.kernel.org/lkml/20230607152038.469739-1-aleksandr.mikhalitsyn@canonical.com/#t
v4: https://lore.kernel.org/lkml/20230607180958.645115-1-aleksandr.mikhalitsyn@canonical.com/#t
tree: https://github.com/mihalicyn/linux/commits/fs.idmapped.ceph.v4

Kind regards,
Alex

Original description from Christian:
========================================================================
This patch series enables cephfs to support idmapped mounts, i.e. the
ability to alter ownership information on a per-mount basis.

Container managers such as LXD support sharing data via cephfs between
the host and unprivileged containers and between unprivileged containers.
They may all use different idmappings. Idmapped mounts can be used to
create mounts with the idmapping used for the container (or a different
one specific to the use-case).

There are in fact more use-cases such as remapping ownership for
mountpoints on the host itself to grant or restrict access to different
users or to make it possible to enforce that programs running as root
will write with a non-zero {g,u}id to disk.

The patch series is simple overall and few changes are needed to cephfs.
There is one cephfs specific issue that I would like to discuss and
solve which I explain in detail in:

[PATCH 02/12] ceph: handle idmapped mounts in create_request_message()

It has to do with how to handle MDS servers which have id-based access
restrictions configured. I would ask you to please take a look at the
explanation in the aforementioned patch.

The patch series passes the vfs and idmapped mount testsuite as part of
xfstests. To run it you will need a config like:

[ceph]
export FSTYP=ceph
export TEST_DIR=/mnt/test
export TEST_DEV=10.103.182.10:6789:/
export TEST_FS_MOUNT_OPTS="-o name=admin,secret=$password"

and then simply call

sudo ./check -g idmapped

========================================================================

Alexander Mikhalitsyn (5):
  fs: export mnt_idmap_get/mnt_idmap_put
  ceph: pass idmap to __ceph_setattr
  ceph: pass idmap to ceph_do_getattr
  ceph: pass idmap to __ceph_setxattr
  ceph: pass idmap to ceph_open/ioctl_set_layout

Christian Brauner (9):
  ceph: stash idmapping in mdsc request
  ceph: handle idmapped mounts in create_request_message()
  ceph: pass an idmapping to mknod/symlink/mkdir/rename
  ceph: allow idmapped getattr inode op
  ceph: allow idmapped permission inode op
  ceph: allow idmapped setattr inode op
  ceph/acl: allow idmapped set_acl inode op
  ceph/file: allow idmapped atomic_open inode op
  ceph: allow idmapped mounts

 fs/ceph/acl.c                 |  8 ++++----
 fs/ceph/addr.c                |  3 ++-
 fs/ceph/caps.c                |  3 ++-
 fs/ceph/dir.c                 |  4 ++++
 fs/ceph/export.c              |  2 +-
 fs/ceph/file.c                | 21 ++++++++++++++-----
 fs/ceph/inode.c               | 38 +++++++++++++++++++++--------------
 fs/ceph/ioctl.c               |  9 +++++++--
 fs/ceph/mds_client.c          | 27 +++++++++++++++++++++----
 fs/ceph/mds_client.h          |  1 +
 fs/ceph/quota.c               |  2 +-
 fs/ceph/super.c               |  6 +++---
 fs/ceph/super.h               | 14 ++++++++-----
 fs/ceph/xattr.c               | 18 +++++++++--------
 fs/mnt_idmapping.c            |  2 ++
 include/linux/mnt_idmapping.h |  3 +++
 16 files changed, 111 insertions(+), 50 deletions(-)

Comments

Xiubo Li June 9, 2023, 1:57 a.m. UTC | #1
On 6/8/23 23:42, Alexander Mikhalitsyn wrote:
> Dear friends,
>
> This patchset was originally developed by Christian Brauner but I'll continue
> to push it forward. Christian allowed me to do that :)
>
> This feature is already actively used/tested with LXD/LXC project.
>
> Git tree (based on https://github.com/ceph/ceph-client.git master):

Could you rebase these patches onto the 'testing' branch?

And you still have missed several places, for example the following cases:


    1    269  fs/ceph/addr.c <<ceph_netfs_issue_op_inline>>
              req = ceph_mdsc_create_request(mdsc, CEPH_MDS_OP_GETATTR, mode);
    2    389  fs/ceph/dir.c <<ceph_readdir>>
              req = ceph_mdsc_create_request(mdsc, op, USE_AUTH_MDS);
    3    789  fs/ceph/dir.c <<ceph_lookup>>
              req = ceph_mdsc_create_request(mdsc, op, USE_ANY_MDS);
    ...


For these requests you also need to set the real idmap.


Thanks

- Xiubo



> v5: https://github.com/mihalicyn/linux/commits/fs.idmapped.ceph.v5
> current: https://github.com/mihalicyn/linux/tree/fs.idmapped.ceph
>
> In the version 3 I've changed only two commits:
> - fs: export mnt_idmap_get/mnt_idmap_put
> - ceph: allow idmapped setattr inode op
> and added a new one:
> - ceph: pass idmap to __ceph_setattr
>
> In the version 4 I've reworked the ("ceph: stash idmapping in mdsc request")
> commit. Now we take idmap refcounter just in place where req->r_mnt_idmap
> is filled. It's more safer approach and prevents possible refcounter underflow
> on error paths where __register_request wasn't called but ceph_mdsc_release_request is
> called.
>
> Changelog for version 5:
> - a few commits were squashed into one (as suggested by Xiubo Li)
> - started passing an idmapping everywhere (if possible), so a caller
> UID/GID-s will be mapped almost everywhere (as suggested by Xiubo Li)
>
> I can confirm that this version passes xfstests.
>
> Links to previous versions:
> v1: https://lore.kernel.org/all/20220104140414.155198-1-brauner@kernel.org/
> v2: https://lore.kernel.org/lkml/20230524153316.476973-1-aleksandr.mikhalitsyn@canonical.com/
> tree: https://github.com/mihalicyn/linux/commits/fs.idmapped.ceph.v2
> v3: https://lore.kernel.org/lkml/20230607152038.469739-1-aleksandr.mikhalitsyn@canonical.com/#t
> v4: https://lore.kernel.org/lkml/20230607180958.645115-1-aleksandr.mikhalitsyn@canonical.com/#t
> tree: https://github.com/mihalicyn/linux/commits/fs.idmapped.ceph.v4
>
> Kind regards,
> Alex
>
> Original description from Christian:
> ========================================================================
> This patch series enables cephfs to support idmapped mounts, i.e. the
> ability to alter ownership information on a per-mount basis.
>
> Container managers such as LXD support sharing data via cephfs between
> the host and unprivileged containers and between unprivileged containers.
> They may all use different idmappings. Idmapped mounts can be used to
> create mounts with the idmapping used for the container (or a different
> one specific to the use-case).
>
> There are in fact more use-cases such as remapping ownership for
> mountpoints on the host itself to grant or restrict access to different
> users or to make it possible to enforce that programs running as root
> will write with a non-zero {g,u}id to disk.
>
> The patch series is simple overall and few changes are needed to cephfs.
> There is one cephfs specific issue that I would like to discuss and
> solve which I explain in detail in:
>
> [PATCH 02/12] ceph: handle idmapped mounts in create_request_message()
>
> It has to do with how to handle MDS servers which have id-based access
> restrictions configured. I would ask you to please take a look at the
> explanation in the aforementioned patch.
>
> The patch series passes the vfs and idmapped mount testsuite as part of
> xfstests. To run it you will need a config like:
>
> [ceph]
> export FSTYP=ceph
> export TEST_DIR=/mnt/test
> export TEST_DEV=10.103.182.10:6789:/
> export TEST_FS_MOUNT_OPTS="-o name=admin,secret=$password"
>
> and then simply call
>
> sudo ./check -g idmapped
>
> ========================================================================
>
> Alexander Mikhalitsyn (5):
>    fs: export mnt_idmap_get/mnt_idmap_put
>    ceph: pass idmap to __ceph_setattr
>    ceph: pass idmap to ceph_do_getattr
>    ceph: pass idmap to __ceph_setxattr
>    ceph: pass idmap to ceph_open/ioctl_set_layout
>
> Christian Brauner (9):
>    ceph: stash idmapping in mdsc request
>    ceph: handle idmapped mounts in create_request_message()
>    ceph: pass an idmapping to mknod/symlink/mkdir/rename
>    ceph: allow idmapped getattr inode op
>    ceph: allow idmapped permission inode op
>    ceph: allow idmapped setattr inode op
>    ceph/acl: allow idmapped set_acl inode op
>    ceph/file: allow idmapped atomic_open inode op
>    ceph: allow idmapped mounts
>
>   fs/ceph/acl.c                 |  8 ++++----
>   fs/ceph/addr.c                |  3 ++-
>   fs/ceph/caps.c                |  3 ++-
>   fs/ceph/dir.c                 |  4 ++++
>   fs/ceph/export.c              |  2 +-
>   fs/ceph/file.c                | 21 ++++++++++++++-----
>   fs/ceph/inode.c               | 38 +++++++++++++++++++++--------------
>   fs/ceph/ioctl.c               |  9 +++++++--
>   fs/ceph/mds_client.c          | 27 +++++++++++++++++++++----
>   fs/ceph/mds_client.h          |  1 +
>   fs/ceph/quota.c               |  2 +-
>   fs/ceph/super.c               |  6 +++---
>   fs/ceph/super.h               | 14 ++++++++-----
>   fs/ceph/xattr.c               | 18 +++++++++--------
>   fs/mnt_idmapping.c            |  2 ++
>   include/linux/mnt_idmapping.h |  3 +++
>   16 files changed, 111 insertions(+), 50 deletions(-)
>
Alexander Mikhalitsyn June 9, 2023, 8:59 a.m. UTC | #2
On Fri, Jun 9, 2023 at 3:57 AM Xiubo Li <xiubli@redhat.com> wrote:
>
>
> On 6/8/23 23:42, Alexander Mikhalitsyn wrote:
> > Dear friends,
> >
> > This patchset was originally developed by Christian Brauner but I'll continue
> > to push it forward. Christian allowed me to do that :)
> >
> > This feature is already actively used/tested with LXD/LXC project.
> >
> > Git tree (based on https://github.com/ceph/ceph-client.git master):

Hi Xiubo!

>
> Could you rebase these patches to 'testing' branch ?

Will do in -v6.

>
> And you still have missed several places, for example the following cases:
>
>
>     1    269  fs/ceph/addr.c <<ceph_netfs_issue_op_inline>>
>               req = ceph_mdsc_create_request(mdsc, CEPH_MDS_OP_GETATTR,
> mode);

+

>     2    389  fs/ceph/dir.c <<ceph_readdir>>
>               req = ceph_mdsc_create_request(mdsc, op, USE_AUTH_MDS);

+

>     3    789  fs/ceph/dir.c <<ceph_lookup>>
>               req = ceph_mdsc_create_request(mdsc, op, USE_ANY_MDS);

We don't have an idmapping passed to lookup from the VFS layer. As I
mentioned before, it's just impossible now.

I've checked all places with ceph_mdsc_create_request and passed
idmapping everywhere if possible (in v6, that I will send soon).

>     ...
>
>
> For this requests you also need to set the real idmap.

Thanks,
Alex

>
>
> Thanks
>
> - Xiubo
>
>
>
> > v5: https://github.com/mihalicyn/linux/commits/fs.idmapped.ceph.v5
> > current: https://github.com/mihalicyn/linux/tree/fs.idmapped.ceph
> >
> > In the version 3 I've changed only two commits:
> > - fs: export mnt_idmap_get/mnt_idmap_put
> > - ceph: allow idmapped setattr inode op
> > and added a new one:
> > - ceph: pass idmap to __ceph_setattr
> >
> > In the version 4 I've reworked the ("ceph: stash idmapping in mdsc request")
> > commit. Now we take idmap refcounter just in place where req->r_mnt_idmap
> > is filled. It's more safer approach and prevents possible refcounter underflow
> > on error paths where __register_request wasn't called but ceph_mdsc_release_request is
> > called.
> >
> > Changelog for version 5:
> > - a few commits were squashed into one (as suggested by Xiubo Li)
> > - started passing an idmapping everywhere (if possible), so a caller
> > UID/GID-s will be mapped almost everywhere (as suggested by Xiubo Li)
> >
> > I can confirm that this version passes xfstests.
> >
> > Links to previous versions:
> > v1: https://lore.kernel.org/all/20220104140414.155198-1-brauner@kernel.org/
> > v2: https://lore.kernel.org/lkml/20230524153316.476973-1-aleksandr.mikhalitsyn@canonical.com/
> > tree: https://github.com/mihalicyn/linux/commits/fs.idmapped.ceph.v2
> > v3: https://lore.kernel.org/lkml/20230607152038.469739-1-aleksandr.mikhalitsyn@canonical.com/#t
> > v4: https://lore.kernel.org/lkml/20230607180958.645115-1-aleksandr.mikhalitsyn@canonical.com/#t
> > tree: https://github.com/mihalicyn/linux/commits/fs.idmapped.ceph.v4
> >
> > Kind regards,
> > Alex
> >
> > Original description from Christian:
> > ========================================================================
> > This patch series enables cephfs to support idmapped mounts, i.e. the
> > ability to alter ownership information on a per-mount basis.
> >
> > Container managers such as LXD support sharing data via cephfs between
> > the host and unprivileged containers and between unprivileged containers.
> > They may all use different idmappings. Idmapped mounts can be used to
> > create mounts with the idmapping used for the container (or a different
> > one specific to the use-case).
> >
> > There are in fact more use-cases such as remapping ownership for
> > mountpoints on the host itself to grant or restrict access to different
> > users or to make it possible to enforce that programs running as root
> > will write with a non-zero {g,u}id to disk.
> >
> > The patch series is simple overall and few changes are needed to cephfs.
> > There is one cephfs specific issue that I would like to discuss and
> > solve which I explain in detail in:
> >
> > [PATCH 02/12] ceph: handle idmapped mounts in create_request_message()
> >
> > It has to do with how to handle MDS servers which have id-based access
> > restrictions configured. I would ask you to please take a look at the
> > explanation in the aforementioned patch.
> >
> > The patch series passes the vfs and idmapped mount testsuite as part of
> > xfstests. To run it you will need a config like:
> >
> > [ceph]
> > export FSTYP=ceph
> > export TEST_DIR=/mnt/test
> > export TEST_DEV=10.103.182.10:6789:/
> > export TEST_FS_MOUNT_OPTS="-o name=admin,secret=$password"
> >
> > and then simply call
> >
> > sudo ./check -g idmapped
> >
> > ========================================================================
> >
> > Alexander Mikhalitsyn (5):
> >    fs: export mnt_idmap_get/mnt_idmap_put
> >    ceph: pass idmap to __ceph_setattr
> >    ceph: pass idmap to ceph_do_getattr
> >    ceph: pass idmap to __ceph_setxattr
> >    ceph: pass idmap to ceph_open/ioctl_set_layout
> >
> > Christian Brauner (9):
> >    ceph: stash idmapping in mdsc request
> >    ceph: handle idmapped mounts in create_request_message()
> >    ceph: pass an idmapping to mknod/symlink/mkdir/rename
> >    ceph: allow idmapped getattr inode op
> >    ceph: allow idmapped permission inode op
> >    ceph: allow idmapped setattr inode op
> >    ceph/acl: allow idmapped set_acl inode op
> >    ceph/file: allow idmapped atomic_open inode op
> >    ceph: allow idmapped mounts
> >
> >   fs/ceph/acl.c                 |  8 ++++----
> >   fs/ceph/addr.c                |  3 ++-
> >   fs/ceph/caps.c                |  3 ++-
> >   fs/ceph/dir.c                 |  4 ++++
> >   fs/ceph/export.c              |  2 +-
> >   fs/ceph/file.c                | 21 ++++++++++++++-----
> >   fs/ceph/inode.c               | 38 +++++++++++++++++++++--------------
> >   fs/ceph/ioctl.c               |  9 +++++++--
> >   fs/ceph/mds_client.c          | 27 +++++++++++++++++++++----
> >   fs/ceph/mds_client.h          |  1 +
> >   fs/ceph/quota.c               |  2 +-
> >   fs/ceph/super.c               |  6 +++---
> >   fs/ceph/super.h               | 14 ++++++++-----
> >   fs/ceph/xattr.c               | 18 +++++++++--------
> >   fs/mnt_idmapping.c            |  2 ++
> >   include/linux/mnt_idmapping.h |  3 +++
> >   16 files changed, 111 insertions(+), 50 deletions(-)
> >
>
Christian Brauner June 9, 2023, 9:59 a.m. UTC | #3
On Fri, Jun 09, 2023 at 10:59:19AM +0200, Aleksandr Mikhalitsyn wrote:
> On Fri, Jun 9, 2023 at 3:57 AM Xiubo Li <xiubli@redhat.com> wrote:
> >
> >
> > On 6/8/23 23:42, Alexander Mikhalitsyn wrote:
> > > Dear friends,
> > >
> > > This patchset was originally developed by Christian Brauner but I'll continue
> > > to push it forward. Christian allowed me to do that :)
> > >
> > > This feature is already actively used/tested with LXD/LXC project.
> > >
> > > Git tree (based on https://github.com/ceph/ceph-client.git master):
> 
> Hi Xiubo!
> 
> >
> > Could you rebase these patches to 'testing' branch ?
> 
> Will do in -v6.
> 
> >
> > And you still have missed several places, for example the following cases:
> >
> >
> >     1    269  fs/ceph/addr.c <<ceph_netfs_issue_op_inline>>
> >               req = ceph_mdsc_create_request(mdsc, CEPH_MDS_OP_GETATTR,
> > mode);
> 
> +
> 
> >     2    389  fs/ceph/dir.c <<ceph_readdir>>
> >               req = ceph_mdsc_create_request(mdsc, op, USE_AUTH_MDS);
> 
> +
> 
> >     3    789  fs/ceph/dir.c <<ceph_lookup>>
> >               req = ceph_mdsc_create_request(mdsc, op, USE_ANY_MDS);
> 
> We don't have an idmapping passed to lookup from the VFS layer. As I
> mentioned before, it's just impossible now.

->lookup() doesn't deal with idmappings and really can't; otherwise you
risk ending up with inode aliasing, which is really not something you
want. IOW, you can't fill in inode->i_{g,u}id based on a mount's
idmapping, as inode->i_{g,u}id absolutely needs to be a filesystem-wide
value. So better not even risk exposing the idmapping in there at all.
Alexander Mikhalitsyn June 9, 2023, 10:12 a.m. UTC | #4
On Fri, Jun 9, 2023 at 12:00 PM Christian Brauner <brauner@kernel.org> wrote:
>
> On Fri, Jun 09, 2023 at 10:59:19AM +0200, Aleksandr Mikhalitsyn wrote:
> > On Fri, Jun 9, 2023 at 3:57 AM Xiubo Li <xiubli@redhat.com> wrote:
> > >
> > >
> > > On 6/8/23 23:42, Alexander Mikhalitsyn wrote:
> > > > Dear friends,
> > > >
> > > > This patchset was originally developed by Christian Brauner but I'll continue
> > > > to push it forward. Christian allowed me to do that :)
> > > >
> > > > This feature is already actively used/tested with LXD/LXC project.
> > > >
> > > > Git tree (based on https://github.com/ceph/ceph-client.git master):
> >
> > Hi Xiubo!
> >
> > >
> > > Could you rebase these patches to 'testing' branch ?
> >
> > Will do in -v6.
> >
> > >
> > > And you still have missed several places, for example the following cases:
> > >
> > >
> > >     1    269  fs/ceph/addr.c <<ceph_netfs_issue_op_inline>>
> > >               req = ceph_mdsc_create_request(mdsc, CEPH_MDS_OP_GETATTR,
> > > mode);
> >
> > +
> >
> > >     2    389  fs/ceph/dir.c <<ceph_readdir>>
> > >               req = ceph_mdsc_create_request(mdsc, op, USE_AUTH_MDS);
> >
> > +
> >
> > >     3    789  fs/ceph/dir.c <<ceph_lookup>>
> > >               req = ceph_mdsc_create_request(mdsc, op, USE_ANY_MDS);
> >
> > We don't have an idmapping passed to lookup from the VFS layer. As I
> > mentioned before, it's just impossible now.
>
> ->lookup() doesn't deal with idmappings and really can't otherwise you
> risk ending up with inode aliasing which is really not something you
> want. IOW, you can't fill in inode->i_{g,u}id based on a mount's
> idmapping as inode->i_{g,u}id absolutely needs to be a filesystem wide
> value. So better not even risk exposing the idmapping in there at all.

Thanks for adding, Christian!

I agree: every time we use an idmapping we need to be careful about what
we map. AFAIU, inode->i_{g,u}id should be based on the filesystem
idmapping (not the mount's), but in this case Xiubo wants
current_fs{u,g}id to be mapped according to an idmapping. Anyway, it's
impossible for now and, IMHO, until we have a practical use case where
UID/GID-based path restriction is combined with idmapped mounts, it's
not worth making such big changes in the VFS layer.

Maybe I'm not right, but UID/GID-based path restriction doesn't seem to
be a widespread feature, and I can hardly imagine it being used with
container workloads (for instance), because it would require always
keeping the MDS permission configuration in sync with the possible
UID/GID ranges on the client. That looks like a nightmare for a sysadmin.
It is useful when cephfs is used as external storage on the host, but if
you share cephfs among a few containers with different user-namespace
idmappings...

Kind regards,
Alex
Xiubo Li June 13, 2023, 1:43 a.m. UTC | #5
On 6/9/23 18:12, Aleksandr Mikhalitsyn wrote:
> On Fri, Jun 9, 2023 at 12:00 PM Christian Brauner <brauner@kernel.org> wrote:
>> On Fri, Jun 09, 2023 at 10:59:19AM +0200, Aleksandr Mikhalitsyn wrote:
>>> On Fri, Jun 9, 2023 at 3:57 AM Xiubo Li <xiubli@redhat.com> wrote:
>>>>
>>>> On 6/8/23 23:42, Alexander Mikhalitsyn wrote:
>>>>> Dear friends,
>>>>>
>>>>> This patchset was originally developed by Christian Brauner but I'll continue
>>>>> to push it forward. Christian allowed me to do that :)
>>>>>
>>>>> This feature is already actively used/tested with LXD/LXC project.
>>>>>
>>>>> Git tree (based on https://github.com/ceph/ceph-client.git master):
>>> Hi Xiubo!
>>>
>>>> Could you rebase these patches to 'testing' branch ?
>>> Will do in -v6.
>>>
>>>> And you still have missed several places, for example the following cases:
>>>>
>>>>
>>>>      1    269  fs/ceph/addr.c <<ceph_netfs_issue_op_inline>>
>>>>                req = ceph_mdsc_create_request(mdsc, CEPH_MDS_OP_GETATTR,
>>>> mode);
>>> +
>>>
>>>>      2    389  fs/ceph/dir.c <<ceph_readdir>>
>>>>                req = ceph_mdsc_create_request(mdsc, op, USE_AUTH_MDS);
>>> +
>>>
>>>>      3    789  fs/ceph/dir.c <<ceph_lookup>>
>>>>                req = ceph_mdsc_create_request(mdsc, op, USE_ANY_MDS);
>>> We don't have an idmapping passed to lookup from the VFS layer. As I
>>> mentioned before, it's just impossible now.
>> ->lookup() doesn't deal with idmappings and really can't otherwise you
>> risk ending up with inode aliasing which is really not something you
>> want. IOW, you can't fill in inode->i_{g,u}id based on a mount's
>> idmapping as inode->i_{g,u}id absolutely needs to be a filesystem wide
>> value. So better not even risk exposing the idmapping in there at all.
> Thanks for adding, Christian!
>
> I agree, every time when we use an idmapping we need to be careful with
> what we map. AFAIU, inode->i_{g,u}id should be based on the filesystem
> idmapping (not mount),
> but in this case, Xiubo want's current_fs{u,g}id to be mapped
> according to an idmapping.
> Anyway, it's impossible at now and IMHO, until we don't have any
> practical use case where
> UID/GID-based path restriction is used in combination with idmapped
> mounts it's not worth to
> make such big changes in the VFS layer.
>
> May be I'm not right, but it seems like UID/GID-based path restriction
> is not a widespread
> feature and I can hardly imagine it to be used with the container
> workloads (for instance),
> because it will require to always keep in sync MDS permissions
> configuration with the
> possible UID/GID ranges on the client. It looks like a nightmare for sysadmin.
> It is useful when cephfs is used as an external storage on the host, but if you
> share cephfs with a few containers with different user namespaces idmapping...

Hmm, this will break the MDS permission check in cephfs in the lookup
case. If we really can't support it, we should make lookup skip the
check anyway, or some OPs may fail and won't work as expected.

@Greg

For lookup requests we can't get the mapped UID/GID from the idmapping
the way we do for all the other requests, yet the MDS permission check
needs it. Is it okay to disable the check in this case? I am afraid this
will break the MDS permissions logic.

Any idea?

Thanks

- Xiubo


> Kind regards,
> Alex
>
Alexander Mikhalitsyn June 13, 2023, 12:46 p.m. UTC | #6
On Tue, Jun 13, 2023 at 3:43 AM Xiubo Li <xiubli@redhat.com> wrote:
>
>
> On 6/9/23 18:12, Aleksandr Mikhalitsyn wrote:
> > On Fri, Jun 9, 2023 at 12:00 PM Christian Brauner <brauner@kernel.org> wrote:
> >> On Fri, Jun 09, 2023 at 10:59:19AM +0200, Aleksandr Mikhalitsyn wrote:
> >>> On Fri, Jun 9, 2023 at 3:57 AM Xiubo Li <xiubli@redhat.com> wrote:
> >>>>
> >>>> On 6/8/23 23:42, Alexander Mikhalitsyn wrote:
> >>>>> Dear friends,
> >>>>>
> >>>>> This patchset was originally developed by Christian Brauner but I'll continue
> >>>>> to push it forward. Christian allowed me to do that :)
> >>>>>
> >>>>> This feature is already actively used/tested with LXD/LXC project.
> >>>>>
> >>>>> Git tree (based on https://github.com/ceph/ceph-client.git master):
> >>> Hi Xiubo!
> >>>
> >>>> Could you rebase these patches to 'testing' branch ?
> >>> Will do in -v6.
> >>>
> >>>> And you still have missed several places, for example the following cases:
> >>>>
> >>>>
> >>>>      1    269  fs/ceph/addr.c <<ceph_netfs_issue_op_inline>>
> >>>>                req = ceph_mdsc_create_request(mdsc, CEPH_MDS_OP_GETATTR,
> >>>> mode);
> >>> +
> >>>
> >>>>      2    389  fs/ceph/dir.c <<ceph_readdir>>
> >>>>                req = ceph_mdsc_create_request(mdsc, op, USE_AUTH_MDS);
> >>> +
> >>>
> >>>>      3    789  fs/ceph/dir.c <<ceph_lookup>>
> >>>>                req = ceph_mdsc_create_request(mdsc, op, USE_ANY_MDS);
> >>> We don't have an idmapping passed to lookup from the VFS layer. As I
> >>> mentioned before, it's just impossible now.
> >> ->lookup() doesn't deal with idmappings and really can't otherwise you
> >> risk ending up with inode aliasing which is really not something you
> >> want. IOW, you can't fill in inode->i_{g,u}id based on a mount's
> >> idmapping as inode->i_{g,u}id absolutely needs to be a filesystem wide
> >> value. So better not even risk exposing the idmapping in there at all.
> > Thanks for adding, Christian!
> >
> > I agree, every time when we use an idmapping we need to be careful with
> > what we map. AFAIU, inode->i_{g,u}id should be based on the filesystem
> > idmapping (not mount),
> > but in this case, Xiubo want's current_fs{u,g}id to be mapped
> > according to an idmapping.
> > Anyway, it's impossible at now and IMHO, until we don't have any
> > practical use case where
> > UID/GID-based path restriction is used in combination with idmapped
> > mounts it's not worth to
> > make such big changes in the VFS layer.
> >
> > May be I'm not right, but it seems like UID/GID-based path restriction
> > is not a widespread
> > feature and I can hardly imagine it to be used with the container
> > workloads (for instance),
> > because it will require to always keep in sync MDS permissions
> > configuration with the
> > possible UID/GID ranges on the client. It looks like a nightmare for sysadmin.
> > It is useful when cephfs is used as an external storage on the host, but if you
> > share cephfs with a few containers with different user namespaces idmapping...
>
> Hmm, while this will break the MDS permission check in cephfs then in
> lookup case. If we really couldn't support it we should make it to
> escape the check anyway or some OPs may fail and won't work as expected.

Hi Xiubo!

Disabling UID/GID checks on the MDS side looks reasonable. IMHO the
most important checks are:
- open
- mknod/mkdir/symlink/rename
and for these checks we already have an idmapping.

Also, I want to add that it's a little unusual for permission checks to
be done against the caller's UID/GID. Usually, if we have opened a file
descriptor and, for instance, passed it through a unix socket, the file
descriptor holder will be able to use it in accordance with the open
flags (O_RDONLY, O_RDWR, ...). We also have ->f_cred on struct file,
which contains the credentials of the file's opener, and permission
checks are usually based on those. But cephfs always uses the syscall
caller's credentials, which makes a cephfs file descriptor "not
transferable" in terms of permission checks.

Kind regards,
Alex

>
> @Greg
>
> For the lookup requests the idmapping couldn't get the mapped UID/GID
> just like all the other requests, which is needed by the MDS permission
> check. Is that okay to make it disable the check for this case ? I am
> afraid this will break the MDS permssions logic.
>
> Any idea ?
>
> Thanks
>
> - Xiubo
>
>
> > Kind regards,
> > Alex
> >
>
Gregory Farnum June 13, 2023, 2:53 p.m. UTC | #7
On Mon, Jun 12, 2023 at 6:43 PM Xiubo Li <xiubli@redhat.com> wrote:
>
>
> On 6/9/23 18:12, Aleksandr Mikhalitsyn wrote:
> > On Fri, Jun 9, 2023 at 12:00 PM Christian Brauner <brauner@kernel.org> wrote:
> >> On Fri, Jun 09, 2023 at 10:59:19AM +0200, Aleksandr Mikhalitsyn wrote:
> >>> On Fri, Jun 9, 2023 at 3:57 AM Xiubo Li <xiubli@redhat.com> wrote:
> >>>>
> >>>> On 6/8/23 23:42, Alexander Mikhalitsyn wrote:
> >>>>> Dear friends,
> >>>>>
> >>>>> This patchset was originally developed by Christian Brauner but I'll continue
> >>>>> to push it forward. Christian allowed me to do that :)
> >>>>>
> >>>>> This feature is already actively used/tested with LXD/LXC project.
> >>>>>
> >>>>> Git tree (based on https://github.com/ceph/ceph-client.git master):
> >>> Hi Xiubo!
> >>>
> >>>> Could you rebase these patches to 'testing' branch ?
> >>> Will do in -v6.
> >>>
> >>>> And you still have missed several places, for example the following cases:
> >>>>
> >>>>
> >>>>      1    269  fs/ceph/addr.c <<ceph_netfs_issue_op_inline>>
> >>>>                req = ceph_mdsc_create_request(mdsc, CEPH_MDS_OP_GETATTR,
> >>>> mode);
> >>> +
> >>>
> >>>>      2    389  fs/ceph/dir.c <<ceph_readdir>>
> >>>>                req = ceph_mdsc_create_request(mdsc, op, USE_AUTH_MDS);
> >>> +
> >>>
> >>>>      3    789  fs/ceph/dir.c <<ceph_lookup>>
> >>>>                req = ceph_mdsc_create_request(mdsc, op, USE_ANY_MDS);
> >>> We don't have an idmapping passed to lookup from the VFS layer. As I
> >>> mentioned before, it's just impossible now.
> >> ->lookup() doesn't deal with idmappings and really can't otherwise you
> >> risk ending up with inode aliasing which is really not something you
> >> want. IOW, you can't fill in inode->i_{g,u}id based on a mount's
> >> idmapping as inode->i_{g,u}id absolutely needs to be a filesystem wide
> >> value. So better not even risk exposing the idmapping in there at all.
> > Thanks for adding, Christian!
> >
> > I agree, every time when we use an idmapping we need to be careful with
> > what we map. AFAIU, inode->i_{g,u}id should be based on the filesystem
> > idmapping (not mount),
> > but in this case, Xiubo want's current_fs{u,g}id to be mapped
> > according to an idmapping.
> > Anyway, it's impossible at now and IMHO, until we don't have any
> > practical use case where
> > UID/GID-based path restriction is used in combination with idmapped
> > mounts it's not worth to
> > make such big changes in the VFS layer.
> >
> > May be I'm not right, but it seems like UID/GID-based path restriction
> > is not a widespread
> > feature and I can hardly imagine it to be used with the container
> > workloads (for instance),
> > because it will require to always keep in sync MDS permissions
> > configuration with the
> > possible UID/GID ranges on the client. It looks like a nightmare for sysadmin.
> > It is useful when cephfs is used as an external storage on the host, but if you
> > share cephfs with a few containers with different user namespaces idmapping...
>
> Hmm, while this will break the MDS permission check in cephfs then in
> lookup case. If we really couldn't support it we should make it to
> escape the check anyway or some OPs may fail and won't work as expected.

I don't pretend to know the details of the VFS (or even our linux
client implementation), but I'm confused that this is apparently so
hard. It looks to me like we currently always fill in the "caller_uid"
with "from_kuid(&init_user_ns, req->r_cred->fsuid))". Is this actually
valid to begin with? If it is, why can't the uid mapping be applied on
that?

As both the client and the server share authority over the inode's
state (including things like mode bits and owners), and need to do
permission checking, being able to tell the server the relevant actor
is inherently necessary. We also let admins restrict keys to
particular UID/GID combinations as they wish, and it's not the most
popular feature but it does get deployed. I would really expect a user
of UID mapping to be one of the *most* likely to employ such a
facility...maybe not with containers, but certainly end-user homedirs
and shared spaces.

Disabling the MDS auth checks is really not an option. I guess we
could require any user employing idmapping to not be uid-restricted,
and set the anonymous UID (does that work, Xiubo, or was it the broken
one? In which case we'd have to default to root?). But that seems a
bit janky to me.
-Greg

> @Greg
>
> For the lookup requests the idmapping couldn't get the mapped UID/GID
> just like all the other requests, which is needed by the MDS permission
> check. Is that okay to make it disable the check for this case ? I am
> afraid this will break the MDS permssions logic.
>
> Any idea ?
>
> Thanks
>
> - Xiubo
>
>
> > Kind regards,
> > Alex
> >
>
Alexander Mikhalitsyn June 13, 2023, 4:27 p.m. UTC | #8
On Tue, Jun 13, 2023 at 4:54 PM Gregory Farnum <gfarnum@redhat.com> wrote:
>
> On Mon, Jun 12, 2023 at 6:43 PM Xiubo Li <xiubli@redhat.com> wrote:
> >
> >
> > On 6/9/23 18:12, Aleksandr Mikhalitsyn wrote:
> > > On Fri, Jun 9, 2023 at 12:00 PM Christian Brauner <brauner@kernel.org> wrote:
> > >> On Fri, Jun 09, 2023 at 10:59:19AM +0200, Aleksandr Mikhalitsyn wrote:
> > >>> On Fri, Jun 9, 2023 at 3:57 AM Xiubo Li <xiubli@redhat.com> wrote:
> > >>>>
> > >>>> On 6/8/23 23:42, Alexander Mikhalitsyn wrote:
> > >>>>> Dear friends,
> > >>>>>
> > >>>>> This patchset was originally developed by Christian Brauner but I'll continue
> > >>>>> to push it forward. Christian allowed me to do that :)
> > >>>>>
> > >>>>> This feature is already actively used/tested with LXD/LXC project.
> > >>>>>
> > >>>>> Git tree (based on https://github.com/ceph/ceph-client.git master):
> > >>> Hi Xiubo!
> > >>>
> > >>>> Could you rebase these patches to 'testing' branch ?
> > >>> Will do in -v6.
> > >>>
> > >>>> And you still have missed several places, for example the following cases:
> > >>>>
> > >>>>
> > >>>>      1    269  fs/ceph/addr.c <<ceph_netfs_issue_op_inline>>
> > >>>>                req = ceph_mdsc_create_request(mdsc, CEPH_MDS_OP_GETATTR,
> > >>>> mode);
> > >>> +
> > >>>
> > >>>>      2    389  fs/ceph/dir.c <<ceph_readdir>>
> > >>>>                req = ceph_mdsc_create_request(mdsc, op, USE_AUTH_MDS);
> > >>> +
> > >>>
> > >>>>      3    789  fs/ceph/dir.c <<ceph_lookup>>
> > >>>>                req = ceph_mdsc_create_request(mdsc, op, USE_ANY_MDS);
> > >>> We don't have an idmapping passed to lookup from the VFS layer. As I
> > >>> mentioned before, it's just impossible now.
> > >> ->lookup() doesn't deal with idmappings and really can't otherwise you
> > >> risk ending up with inode aliasing which is really not something you
> > >> want. IOW, you can't fill in inode->i_{g,u}id based on a mount's
> > >> idmapping as inode->i_{g,u}id absolutely needs to be a filesystem wide
> > >> value. So better not even risk exposing the idmapping in there at all.
> > > Thanks for adding, Christian!
> > >
> > > I agree, every time when we use an idmapping we need to be careful with
> > > what we map. AFAIU, inode->i_{g,u}id should be based on the filesystem
> > > idmapping (not mount),
> > > but in this case, Xiubo want's current_fs{u,g}id to be mapped
> > > according to an idmapping.
> > > Anyway, it's impossible at now and IMHO, until we don't have any
> > > practical use case where
> > > UID/GID-based path restriction is used in combination with idmapped
> > > mounts it's not worth to
> > > make such big changes in the VFS layer.
> > >
> > > May be I'm not right, but it seems like UID/GID-based path restriction
> > > is not a widespread
> > > feature and I can hardly imagine it to be used with the container
> > > workloads (for instance),
> > > because it will require to always keep in sync MDS permissions
> > > configuration with the
> > > possible UID/GID ranges on the client. It looks like a nightmare for sysadmin.
> > > It is useful when cephfs is used as an external storage on the host, but if you
> > > share cephfs with a few containers with different user namespaces idmapping...
> >
> > Hmm, while this will break the MDS permission check in cephfs then in
> > lookup case. If we really couldn't support it we should make it to
> > escape the check anyway or some OPs may fail and won't work as expected.

Dear Gregory,

Thanks for the fast reply!

>
> I don't pretend to know the details of the VFS (or even our linux
> client implementation), but I'm confused that this is apparently so
> hard. It looks to me like we currently always fill in the "caller_uid"
> with "from_kuid(&init_user_ns, req->r_cred->fsuid))". Is this actually
> valid to begin with? If it is, why can't the uid mapping be applied on
> that?

Applying an idmapping is not hard; it's as simple as replacing
from_kuid(&init_user_ns, req->r_cred->fsuid)
with
from_vfsuid(req->r_mnt_idmap, &init_user_ns, VFSUIDT_INIT(req->r_cred->fsuid))

but the problem is that we don't have req->r_mnt_idmap for all the requests.
For instance, we don't have an idmap argument (coming from the VFS layer)
for the ->lookup operation and many others. There are some reasons for that
(Christian has covered some of them).
So, it's not about my laziness to implement that. It's a real pain ;-)

>
> As both the client and the server share authority over the inode's
> state (including things like mode bits and owners), and need to do
> permission checking, being able to tell the server the relevant actor
> is inherently necessary. We also let admins restrict keys to
> particular UID/GID combinations as they wish, and it's not the most
> popular feature but it does get deployed. I would really expect a user
> of UID mapping to be one of the *most* likely to employ such a
> facility...maybe not with containers, but certainly end-user homedirs
> and shared spaces.
>
> Disabling the MDS auth checks is really not an option. I guess we
> could require any user employing idmapping to not be uid-restricted,
> and set the anonymous UID (does that work, Xiubo, or was it the broken
> one? In which case we'd have to default to root?). But that seems a
> bit janky to me.

That's an interesting point about the anonymous UID, but at the same time,
we use the caller's fs UID/GID values as the owner's UID/GID for newly
created inodes. It means that we can't use the anonymous UID everywhere in
this case, otherwise all new files/directories will be owned by the
anonymous user.

> -Greg

Kind regards,
Alex

>
> > @Greg
> >
> > For the lookup requests the idmapping couldn't get the mapped UID/GID
> > just like all the other requests, which is needed by the MDS permission
> > check. Is that okay to make it disable the check for this case ? I am
> > afraid this will break the MDS permssions logic.
> >
> > Any idea ?
> >
> > Thanks
> >
> > - Xiubo
> >
> >
> > > Kind regards,
> > > Alex
> > >
> >
>
Xiubo Li June 14, 2023, 1:52 a.m. UTC | #9
On 6/13/23 22:53, Gregory Farnum wrote:
> On Mon, Jun 12, 2023 at 6:43 PM Xiubo Li <xiubli@redhat.com> wrote:
>>
>> On 6/9/23 18:12, Aleksandr Mikhalitsyn wrote:
>>> On Fri, Jun 9, 2023 at 12:00 PM Christian Brauner <brauner@kernel.org> wrote:
>>>> On Fri, Jun 09, 2023 at 10:59:19AM +0200, Aleksandr Mikhalitsyn wrote:
>>>>> On Fri, Jun 9, 2023 at 3:57 AM Xiubo Li <xiubli@redhat.com> wrote:
>>>>>> On 6/8/23 23:42, Alexander Mikhalitsyn wrote:
>>>>>>> Dear friends,
>>>>>>>
>>>>>>> This patchset was originally developed by Christian Brauner but I'll continue
>>>>>>> to push it forward. Christian allowed me to do that :)
>>>>>>>
>>>>>>> This feature is already actively used/tested with LXD/LXC project.
>>>>>>>
>>>>>>> Git tree (based on https://github.com/ceph/ceph-client.git master):
>>>>> Hi Xiubo!
>>>>>
>>>>>> Could you rebase these patches to 'testing' branch ?
>>>>> Will do in -v6.
>>>>>
>>>>>> And you still have missed several places, for example the following cases:
>>>>>>
>>>>>>
>>>>>>       1    269  fs/ceph/addr.c <<ceph_netfs_issue_op_inline>>
>>>>>>                 req = ceph_mdsc_create_request(mdsc, CEPH_MDS_OP_GETATTR,
>>>>>> mode);
>>>>> +
>>>>>
>>>>>>       2    389  fs/ceph/dir.c <<ceph_readdir>>
>>>>>>                 req = ceph_mdsc_create_request(mdsc, op, USE_AUTH_MDS);
>>>>> +
>>>>>
>>>>>>       3    789  fs/ceph/dir.c <<ceph_lookup>>
>>>>>>                 req = ceph_mdsc_create_request(mdsc, op, USE_ANY_MDS);
>>>>> We don't have an idmapping passed to lookup from the VFS layer. As I
>>>>> mentioned before, it's just impossible now.
>>>> ->lookup() doesn't deal with idmappings and really can't otherwise you
>>>> risk ending up with inode aliasing which is really not something you
>>>> want. IOW, you can't fill in inode->i_{g,u}id based on a mount's
>>>> idmapping as inode->i_{g,u}id absolutely needs to be a filesystem wide
>>>> value. So better not even risk exposing the idmapping in there at all.
>>> Thanks for adding, Christian!
>>>
>>> I agree, every time when we use an idmapping we need to be careful with
>>> what we map. AFAIU, inode->i_{g,u}id should be based on the filesystem
>>> idmapping (not mount),
>>> but in this case, Xiubo want's current_fs{u,g}id to be mapped
>>> according to an idmapping.
>>> Anyway, it's impossible at now and IMHO, until we don't have any
>>> practical use case where
>>> UID/GID-based path restriction is used in combination with idmapped
>>> mounts it's not worth to
>>> make such big changes in the VFS layer.
>>>
>>> May be I'm not right, but it seems like UID/GID-based path restriction
>>> is not a widespread
>>> feature and I can hardly imagine it to be used with the container
>>> workloads (for instance),
>>> because it will require to always keep in sync MDS permissions
>>> configuration with the
>>> possible UID/GID ranges on the client. It looks like a nightmare for sysadmin.
>>> It is useful when cephfs is used as an external storage on the host, but if you
>>> share cephfs with a few containers with different user namespaces idmapping...
>> Hmm, while this will break the MDS permission check in cephfs then in
>> lookup case. If we really couldn't support it we should make it to
>> escape the check anyway or some OPs may fail and won't work as expected.
> I don't pretend to know the details of the VFS (or even our linux
> client implementation), but I'm confused that this is apparently so
> hard. It looks to me like we currently always fill in the "caller_uid"
> with "from_kuid(&init_user_ns, req->r_cred->fsuid))". Is this actually
> valid to begin with? If it is, why can't the uid mapping be applied on
> that?
>
> As both the client and the server share authority over the inode's
> state (including things like mode bits and owners), and need to do
> permission checking, being able to tell the server the relevant actor
> is inherently necessary. We also let admins restrict keys to
> particular UID/GID combinations as they wish, and it's not the most
> popular feature but it does get deployed. I would really expect a user
> of UID mapping to be one of the *most* likely to employ such a
> facility...maybe not with containers, but certainly end-user homedirs
> and shared spaces.
>
> Disabling the MDS auth checks is really not an option. I guess we
> could require any user employing idmapping to not be uid-restricted,
> and set the anonymous UID (does that work, Xiubo, or was it the broken
> one? In which case we'd have to default to root?). But that seems a
> bit janky to me.

Yeah, this also seems risky.

Instead of disabling the MDS auth checks, there is another option: we can
prevent the kclient from being mounted or the idmapping from being applied.
But this still has issues, such as: what if admins set the MDS auth caps
after the idmap has been applied to the kclients?

IMO there are 2 options: the best way is to fix this in VFS if possible.
Otherwise, add an option to disable the corresponding MDS auth caps in ceph
if users want to support the idmap feature.

Thanks

- Xiubo

> -Greg
>
>> @Greg
>>
>> For the lookup requests the idmapping couldn't get the mapped UID/GID
>> just like all the other requests, which is needed by the MDS permission
>> check. Is that okay to make it disable the check for this case ? I am
>> afraid this will break the MDS permssions logic.
>>
>> Any idea ?
>>
>> Thanks
>>
>> - Xiubo
>>
>>
>>> Kind regards,
>>> Alex
>>>
Christian Brauner June 14, 2023, 9:45 a.m. UTC | #10
On Tue, Jun 13, 2023 at 02:46:02PM +0200, Aleksandr Mikhalitsyn wrote:
> On Tue, Jun 13, 2023 at 3:43 AM Xiubo Li <xiubli@redhat.com> wrote:
> >
> >
> > On 6/9/23 18:12, Aleksandr Mikhalitsyn wrote:
> > > On Fri, Jun 9, 2023 at 12:00 PM Christian Brauner <brauner@kernel.org> wrote:
> > >> On Fri, Jun 09, 2023 at 10:59:19AM +0200, Aleksandr Mikhalitsyn wrote:
> > >>> On Fri, Jun 9, 2023 at 3:57 AM Xiubo Li <xiubli@redhat.com> wrote:
> > >>>>
> > >>>> On 6/8/23 23:42, Alexander Mikhalitsyn wrote:
> > >>>>> Dear friends,
> > >>>>>
> > >>>>> This patchset was originally developed by Christian Brauner but I'll continue
> > >>>>> to push it forward. Christian allowed me to do that :)
> > >>>>>
> > >>>>> This feature is already actively used/tested with LXD/LXC project.
> > >>>>>
> > >>>>> Git tree (based on https://github.com/ceph/ceph-client.git master):
> > >>> Hi Xiubo!
> > >>>
> > >>>> Could you rebase these patches to 'testing' branch ?
> > >>> Will do in -v6.
> > >>>
> > >>>> And you still have missed several places, for example the following cases:
> > >>>>
> > >>>>
> > >>>>      1    269  fs/ceph/addr.c <<ceph_netfs_issue_op_inline>>
> > >>>>                req = ceph_mdsc_create_request(mdsc, CEPH_MDS_OP_GETATTR,
> > >>>> mode);
> > >>> +
> > >>>
> > >>>>      2    389  fs/ceph/dir.c <<ceph_readdir>>
> > >>>>                req = ceph_mdsc_create_request(mdsc, op, USE_AUTH_MDS);
> > >>> +
> > >>>
> > >>>>      3    789  fs/ceph/dir.c <<ceph_lookup>>
> > >>>>                req = ceph_mdsc_create_request(mdsc, op, USE_ANY_MDS);
> > >>> We don't have an idmapping passed to lookup from the VFS layer. As I
> > >>> mentioned before, it's just impossible now.
> > >> ->lookup() doesn't deal with idmappings and really can't otherwise you
> > >> risk ending up with inode aliasing which is really not something you
> > >> want. IOW, you can't fill in inode->i_{g,u}id based on a mount's
> > >> idmapping as inode->i_{g,u}id absolutely needs to be a filesystem wide
> > >> value. So better not even risk exposing the idmapping in there at all.
> > > Thanks for adding, Christian!
> > >
> > > I agree, every time when we use an idmapping we need to be careful with
> > > what we map. AFAIU, inode->i_{g,u}id should be based on the filesystem
> > > idmapping (not mount),
> > > but in this case, Xiubo want's current_fs{u,g}id to be mapped
> > > according to an idmapping.
> > > Anyway, it's impossible at now and IMHO, until we don't have any
> > > practical use case where
> > > UID/GID-based path restriction is used in combination with idmapped
> > > mounts it's not worth to
> > > make such big changes in the VFS layer.
> > >
> > > May be I'm not right, but it seems like UID/GID-based path restriction
> > > is not a widespread
> > > feature and I can hardly imagine it to be used with the container
> > > workloads (for instance),
> > > because it will require to always keep in sync MDS permissions
> > > configuration with the
> > > possible UID/GID ranges on the client. It looks like a nightmare for sysadmin.
> > > It is useful when cephfs is used as an external storage on the host, but if you
> > > share cephfs with a few containers with different user namespaces idmapping...
> >
> > Hmm, while this will break the MDS permission check in cephfs then in
> > lookup case. If we really couldn't support it we should make it to
> > escape the check anyway or some OPs may fail and won't work as expected.
> 
> Hi Xiubo!
> 
> Disabling UID/GID checks on the MDS side looks reasonable. IMHO the
> most important checks are:
> - open
> - mknod/mkdir/symlink/rename
> and for these checks we already have an idmapping.
> 
> Also, I want to add that it's a little bit unusual when permission
> checks are done against the caller UID/GID.

The server side permission checking based on the sender's fs{g,u}id is
rather esoteric imho. So I would just disable it for idmapped mounts.

> Usually, if we have opened a file descriptor and, for instance, passed
> this file descriptor through a unix socket then
> file descriptor holder will be able to use it in accordance with the
> flags (O_RDONLY, O_RDWR, ...).
> We also have ->f_cred on the struct file that contains credentials of
> the file opener and permission checks are usually done
> based on this. But in cephfs we are always using syscall caller's
> credentials. It makes cephfs file descriptor "not transferable"
> in terms of permission checks.

Yeah, that's another good point.
Alexander Mikhalitsyn June 14, 2023, 12:39 p.m. UTC | #11
On Wed, Jun 14, 2023 at 3:53 AM Xiubo Li <xiubli@redhat.com> wrote:
>
>
> On 6/13/23 22:53, Gregory Farnum wrote:
> > On Mon, Jun 12, 2023 at 6:43 PM Xiubo Li <xiubli@redhat.com> wrote:
> >>
> >> On 6/9/23 18:12, Aleksandr Mikhalitsyn wrote:
> >>> On Fri, Jun 9, 2023 at 12:00 PM Christian Brauner <brauner@kernel.org> wrote:
> >>>> On Fri, Jun 09, 2023 at 10:59:19AM +0200, Aleksandr Mikhalitsyn wrote:
> >>>>> On Fri, Jun 9, 2023 at 3:57 AM Xiubo Li <xiubli@redhat.com> wrote:
> >>>>>> On 6/8/23 23:42, Alexander Mikhalitsyn wrote:
> >>>>>>> Dear friends,
> >>>>>>>
> >>>>>>> This patchset was originally developed by Christian Brauner but I'll continue
> >>>>>>> to push it forward. Christian allowed me to do that :)
> >>>>>>>
> >>>>>>> This feature is already actively used/tested with LXD/LXC project.
> >>>>>>>
> >>>>>>> Git tree (based on https://github.com/ceph/ceph-client.git master):
> >>>>> Hi Xiubo!
> >>>>>
> >>>>>> Could you rebase these patches to 'testing' branch ?
> >>>>> Will do in -v6.
> >>>>>
> >>>>>> And you still have missed several places, for example the following cases:
> >>>>>>
> >>>>>>
> >>>>>>       1    269  fs/ceph/addr.c <<ceph_netfs_issue_op_inline>>
> >>>>>>                 req = ceph_mdsc_create_request(mdsc, CEPH_MDS_OP_GETATTR,
> >>>>>> mode);
> >>>>> +
> >>>>>
> >>>>>>       2    389  fs/ceph/dir.c <<ceph_readdir>>
> >>>>>>                 req = ceph_mdsc_create_request(mdsc, op, USE_AUTH_MDS);
> >>>>> +
> >>>>>
> >>>>>>       3    789  fs/ceph/dir.c <<ceph_lookup>>
> >>>>>>                 req = ceph_mdsc_create_request(mdsc, op, USE_ANY_MDS);
> >>>>> We don't have an idmapping passed to lookup from the VFS layer. As I
> >>>>> mentioned before, it's just impossible now.
> >>>> ->lookup() doesn't deal with idmappings and really can't otherwise you
> >>>> risk ending up with inode aliasing which is really not something you
> >>>> want. IOW, you can't fill in inode->i_{g,u}id based on a mount's
> >>>> idmapping as inode->i_{g,u}id absolutely needs to be a filesystem wide
> >>>> value. So better not even risk exposing the idmapping in there at all.
> >>> Thanks for adding, Christian!
> >>>
> >>> I agree, every time when we use an idmapping we need to be careful with
> >>> what we map. AFAIU, inode->i_{g,u}id should be based on the filesystem
> >>> idmapping (not mount),
> >>> but in this case, Xiubo want's current_fs{u,g}id to be mapped
> >>> according to an idmapping.
> >>> Anyway, it's impossible at now and IMHO, until we don't have any
> >>> practical use case where
> >>> UID/GID-based path restriction is used in combination with idmapped
> >>> mounts it's not worth to
> >>> make such big changes in the VFS layer.
> >>>
> >>> May be I'm not right, but it seems like UID/GID-based path restriction
> >>> is not a widespread
> >>> feature and I can hardly imagine it to be used with the container
> >>> workloads (for instance),
> >>> because it will require to always keep in sync MDS permissions
> >>> configuration with the
> >>> possible UID/GID ranges on the client. It looks like a nightmare for sysadmin.
> >>> It is useful when cephfs is used as an external storage on the host, but if you
> >>> share cephfs with a few containers with different user namespaces idmapping...
> >> Hmm, while this will break the MDS permission check in cephfs then in
> >> lookup case. If we really couldn't support it we should make it to
> >> escape the check anyway or some OPs may fail and won't work as expected.
> > I don't pretend to know the details of the VFS (or even our linux
> > client implementation), but I'm confused that this is apparently so
> > hard. It looks to me like we currently always fill in the "caller_uid"
> > with "from_kuid(&init_user_ns, req->r_cred->fsuid))". Is this actually
> > valid to begin with? If it is, why can't the uid mapping be applied on
> > that?
> >
> > As both the client and the server share authority over the inode's
> > state (including things like mode bits and owners), and need to do
> > permission checking, being able to tell the server the relevant actor
> > is inherently necessary. We also let admins restrict keys to
> > particular UID/GID combinations as they wish, and it's not the most
> > popular feature but it does get deployed. I would really expect a user
> > of UID mapping to be one of the *most* likely to employ such a
> > facility...maybe not with containers, but certainly end-user homedirs
> > and shared spaces.
> >
> > Disabling the MDS auth checks is really not an option. I guess we
> > could require any user employing idmapping to not be uid-restricted,
> > and set the anonymous UID (does that work, Xiubo, or was it the broken
> > one? In which case we'd have to default to root?). But that seems a
> > bit janky to me.
>
> Yeah, this also seems risky.
>
> Instead disabling the MDS auth checks there is another option, which is
> we can prevent  the kclient to be mounted or the idmapping to be
> applied. But this still have issues, such as what if admins set the MDS
> auth caps after idmap applied to the kclients ?

Hi Xiubo,

I thought about this too and came to the same conclusion: a UID/GID-based
restriction can be applied dynamically, so detecting it at mount time
doesn't help much.

>
> IMO there have 2 options: the best way is to fix this in VFS if
> possible. Else to add one option to disable the corresponding MDS auth
> caps in ceph if users want to support the idmap feature.

Dear colleagues,
Dear Xiubo,

Let me try to summarize the previous discussions about cephfs idmapped
mount support.

This discussion about the need to map the caller's UID/GID started with the
first version of this patchset, in this [1] thread. Let me quote Christian here:
> Since the idmapping is a property of the mount and not a property of the
> caller the caller's fs{g,u}id aren't mapped. What is mapped are the
> inode's i{g,u}id when accessed from a particular mount.
>
> The fs{g,u}id are only ever mapped when a new filesystem object is
> created. So if I have an idmapped mount that makes it so that files
> owned by 1000 on-disk appear to be owned by uid 0 then a user with uid 0
> creating a new file will create files with uid 1000 on-disk when going
> through that mount. For cephfs that'd be the uid we would be sending
> with creation requests as I've currently written it.

This is a key part of this discussion. Idmapped mounts are not a way to
proxy the caller's UID/GID; idmapped mounts are designed to perform UID/GID
mapping of an inode's owner UID/GID. Yes, these concepts look really close,
and at first glance they seem equivalent. But they are not.

From my understanding, if someone wants to verify the caller's UID/GID then
they should take the unmapped UID/GID and verify that. It doesn't matter
whether the caller goes through an idmapped mount or not;
from_kuid(&init_user_ns, req->r_cred->fsuid) is literally "the UID of the
caller in the root user namespace". But a cephfs mount can be used from any
user namespace (yes, cephfs can't be mounted in a user namespace, but a
mount can be inherited during CLONE_NEWNS, or used as a detached mount with
open_tree/move_mount). What I want to say with this example is that even
now, without idmapped mounts, we have a rather similar problem: UID/GID-based
restrictions are checked against the host's (!) root-user-namespace UID/GIDs
even if the caller sits inside a user namespace. And we don't care, right?
So why is it a problem with idmapped mounts? If someone wants to control the
caller's UID/GID on the MDS side, they just need to take the host UID/GIDs
and use them in the permission rules. That's it.

Next point: technically, idmapped mounts don't break anything. If someone
starts using idmapped mounts with UID/GID-based restrictions, they will get
-EACCES. Why is this a problem? A user will check their configuration, read
the clarification about idmapped mounts in the cephfs documentation, and
find a warning that these two things are not fully compatible right now.

IMHO, there is only one real problem (which makes UID/GID-based restrictions
not fully compatible with idmapped mounts): we have to map the caller's
UID/GID according to the mount's idmapping when we create a new inode
(mknod, mkdir, symlink, open(O_CREAT)). But that's only because the caller's
UID/GIDs are used as the owner's UID/GID for the newly created inode.
Ideally, we would have two fields in a ceph request: one for the caller's
UID/GID and another for the inode owner's UID/GID. But this requires a
cephfs protocol modification (yes, it's a bit painful. But global VFS
changes are painful too!). As Christian pointed out, this is the reason why
he went this way in the first version of the patchset.

Maybe I'm not right, but both options to properly fix this (VFS API changes
or a cephfs protocol modification) are too expensive until we have real
requestors with a good use case for idmapped mounts + UID/GID-based
permissions. We already have a real and good use case for idmapped mounts
in cephfs for LXD/LXC. IMHO, it's better to move this forward step by step,
because VFS API/cephfs protocol changes will take a really big amount of
time, it's not obvious that they're worth it, and moreover it's not even
clear that a VFS API change is the right way to deal with this problem. It
seems to me that a cephfs protocol change is the more proper way here. At
the same time, I fully understand that you are not happy about this option.

Just to conclude: we don't have any kind of cephfs degradation here. All
users without idmapping will not be affected, and all users who start using
mount idmappings with cephfs will be aware of this limitation.

[1] https://lore.kernel.org/all/20220105141023.vrrbfhti5apdvkz7@wittgenstein/

Kind regards,
Alex

>
> Thanks
>
> - Xiubo
>
> > -Greg
> >
> >> @Greg
> >>
> >> For the lookup requests the idmapping couldn't get the mapped UID/GID
> >> just like all the other requests, which is needed by the MDS permission
> >> check. Is that okay to make it disable the check for this case ? I am
> >> afraid this will break the MDS permssions logic.
> >>
> >> Any idea ?
> >>
> >> Thanks
> >>
> >> - Xiubo
> >>
> >>
> >>> Kind regards,
> >>> Alex
> >>>
>
Xiubo Li June 15, 2023, 5:08 a.m. UTC | #12
On 6/14/23 20:34, Aleksandr Mikhalitsyn wrote:
> On Wed, Jun 14, 2023 at 3:53 AM Xiubo Li <xiubli@redhat.com> wrote:
> >
> >
> > On 6/13/23 22:53, Gregory Farnum wrote:
> > > On Mon, Jun 12, 2023 at 6:43 PM Xiubo Li <xiubli@redhat.com> wrote:
> > >>
> > >> On 6/9/23 18:12, Aleksandr Mikhalitsyn wrote:
> > >>> On Fri, Jun 9, 2023 at 12:00 PM Christian Brauner 
> <brauner@kernel.org> wrote:
> > >>>> On Fri, Jun 09, 2023 at 10:59:19AM +0200, Aleksandr Mikhalitsyn 
> wrote:
> > >>>>> On Fri, Jun 9, 2023 at 3:57 AM Xiubo Li <xiubli@redhat.com> wrote:
> > >>>>>> On 6/8/23 23:42, Alexander Mikhalitsyn wrote:
> > >>>>>>> Dear friends,
> > >>>>>>>
> > >>>>>>> This patchset was originally developed by Christian Brauner 
> but I'll continue
> > >>>>>>> to push it forward. Christian allowed me to do that :)
> > >>>>>>>
> > >>>>>>> This feature is already actively used/tested with LXD/LXC 
> project.
> > >>>>>>>
> > >>>>>>> Git tree (based on https://github.com/ceph/ceph-client.git 
> master):
> > >>>>> Hi Xiubo!
> > >>>>>
> > >>>>>> Could you rebase these patches to 'testing' branch ?
> > >>>>> Will do in -v6.
> > >>>>>
> > >>>>>> And you still have missed several places, for example the 
> following cases:
> > >>>>>>
> > >>>>>>
> > >>>>>>       1    269  fs/ceph/addr.c <<ceph_netfs_issue_op_inline>>
> > >>>>>>                 req = ceph_mdsc_create_request(mdsc, 
> CEPH_MDS_OP_GETATTR,
> > >>>>>> mode);
> > >>>>> +
> > >>>>>
> > >>>>>>       2    389  fs/ceph/dir.c <<ceph_readdir>>
> > >>>>>>                 req = ceph_mdsc_create_request(mdsc, op, 
> USE_AUTH_MDS);
> > >>>>> +
> > >>>>>
> > >>>>>>       3    789  fs/ceph/dir.c <<ceph_lookup>>
> > >>>>>>                 req = ceph_mdsc_create_request(mdsc, op, 
> USE_ANY_MDS);
> > >>>>> We don't have an idmapping passed to lookup from the VFS 
> layer. As I
> > >>>>> mentioned before, it's just impossible now.
> > >>>> ->lookup() doesn't deal with idmappings and really can't 
> otherwise you
> > >>>> risk ending up with inode aliasing which is really not 
> something you
> > >>>> want. IOW, you can't fill in inode->i_{g,u}id based on a mount's
> > >>>> idmapping as inode->i_{g,u}id absolutely needs to be a 
> filesystem wide
> > >>>> value. So better not even risk exposing the idmapping in there 
> at all.
> > >>> Thanks for adding, Christian!
> > >>>
> > >>> I agree, every time when we use an idmapping we need to be 
> careful with
> > >>> what we map. AFAIU, inode->i_{g,u}id should be based on the 
> filesystem
> > >>> idmapping (not mount),
> > >>> but in this case, Xiubo want's current_fs{u,g}id to be mapped
> > >>> according to an idmapping.
> > >>> Anyway, it's impossible at now and IMHO, until we don't have any
> > >>> practical use case where
> > >>> UID/GID-based path restriction is used in combination with idmapped
> > >>> mounts it's not worth to
> > >>> make such big changes in the VFS layer.
> > >>>
> > >>> Maybe I'm wrong, but UID/GID-based path restriction doesn't seem to
> > >>> be a widespread feature, and I can hardly imagine it being used with
> > >>> container workloads (for instance), because it would require always
> > >>> keeping the MDS permissions configuration in sync with the possible
> > >>> UID/GID ranges on the client. That looks like a nightmare for a
> > >>> sysadmin. It is useful when cephfs is used as external storage on the
> > >>> host, but not if you share cephfs with a few containers with different
> > >>> user namespace idmappings...
> > >> Hmm, but this will break the MDS permission check in cephfs in the
> > >> lookup case. If we really can't support it, we should make it skip the
> > >> check anyway, or some OPs may fail and not work as expected.
> > > I don't pretend to know the details of the VFS (or even our linux
> > > client implementation), but I'm confused that this is apparently so
> > > hard. It looks to me like we currently always fill in the "caller_uid"
> > > with "from_kuid(&init_user_ns, req->r_cred->fsuid))". Is this actually
> > > valid to begin with? If it is, why can't the uid mapping be applied on
> > > that?
> > >
> > > As both the client and the server share authority over the inode's
> > > state (including things like mode bits and owners), and need to do
> > > permission checking, being able to tell the server the relevant actor
> > > is inherently necessary. We also let admins restrict keys to
> > > particular UID/GID combinations as they wish, and it's not the most
> > > popular feature but it does get deployed. I would really expect a user
> > > of UID mapping to be one of the *most* likely to employ such a
> > > facility...maybe not with containers, but certainly end-user homedirs
> > > and shared spaces.
> > >
> > > Disabling the MDS auth checks is really not an option. I guess we
> > > could require any user employing idmapping to not be uid-restricted,
> > > and set the anonymous UID (does that work, Xiubo, or was it the broken
> > > one? In which case we'd have to default to root?). But that seems a
> > > bit janky to me.
> >
> > Yeah, this also seems risky.
> >
> > Instead of disabling the MDS auth checks there is another option: we
> > can prevent the kclient from being mounted or the idmapping from being
> > applied. But this still has issues, such as: what if admins set the MDS
> > auth caps after the idmap has been applied to the kclients?
>
> Hi Xiubo,
>
> I thought about this too and came to the same conclusion: UID/GID-based
> restrictions can be applied dynamically, so detecting them at mount time
> doesn't help much.
>
For this, please raise a PR against ceph first to support it; in that PR
we can discuss the MDS auth caps further. And after the PR gets merged,
this patch series needs to check the corresponding option or flag to
determine whether the idmap mount can succeed.

Thanks

- Xiubo


> >
> > IMO there have 2 options: the best way is to fix this in VFS if
> > possible. Else to add one option to disable the corresponding MDS auth
> > caps in ceph if users want to support the idmap feature.
>
> Dear colleagues,
> Dear Xiubo,
>
> Let me try to summarize the previous discussions about cephfs idmapped
> mount support.
>
> The discussion about the need to map the caller's UID/GID started with
> the first version of this patchset, in this [1] thread. Let me quote
> Christian here:
> > Since the idmapping is a property of the mount and not a property of the
> > caller the caller's fs{g,u}id aren't mapped. What is mapped are the
> > inode's i{g,u}id when accessed from a particular mount.
> >
> > The fs{g,u}id are only ever mapped when a new filesystem object is
> > created. So if I have an idmapped mount that makes it so that files
> > owned by 1000 on-disk appear to be owned by uid 0 then a user with uid 0
> > creating a new file will create files with uid 1000 on-disk when going
> > through that mount. For cephfs that'd be the uid we would be sending
> > with creation requests as I've currently written it.
>
> This is a key part of this discussion. Idmapped mounts are not a way to
> proxy the caller's UID/GID; they are designed to map the UID/GID of the
> inode's owner. Yes, these concepts look really close, and at first glance
> they seem equivalent. But they are not.
>
> From my understanding, anyone who wants to verify the caller's UID/GID
> should take the unmapped UID/GID and verify that. It doesn't matter
> whether the caller goes through an idmapped mount or not:
> from_kuid(&init_user_ns, req->r_cred->fsuid) is literally "the UID of
> the caller in the root user namespace". But a cephfs mount can be used
> from any user namespace (yes, cephfs can't be mounted in user
> namespaces, but it can be inherited during CLONE_NEWNS, or used as a
> detached mount with open_tree/move_mount).
> What I want to show with this example is that even now, without idmapped
> mounts, we have a closely related problem: UID/GID-based restrictions
> are based on the host's (!) root-user-namespace UID/GIDs even if the
> caller sits inside a user namespace. And we don't care, right? So why is
> it a problem with idmapped mounts? If someone wants to control the
> caller's UID/GID on the MDS side, they just need to take the host
> UID/GIDs and use them in permission rules. That's it.
>
> The next point is that, technically, idmapped mounts don't break
> anything: if someone starts using idmapped mounts with UID/GID-based
> restrictions, they will get -EACCES. Why is this a problem? The user
> will check the configuration, read the clarification about idmapped
> mounts in the cephfs documentation, and find a warning that the two are
> not fully compatible right now.
>
> IMHO, there is only one real problem (which makes UID/GID-based
> restrictions not fully compatible with idmapped mounts): we have to map
> the caller's UID/GID according to the mount's idmapping when we create a
> new inode (mknod, mkdir, symlink, open(O_CREAT)). But that's only
> because the caller's UID/GIDs are used as the owner's UID/GID for the
> newly created inode. Ideally, we would have two fields in the ceph
> request, one for the caller's UID/GID and another for the inode owner's
> UID/GID. But this requires a cephfs protocol modification (yes, it's a
> bit painful, but global VFS changes are painful too!). As Christian
> pointed out, this is why he went this way in the first version of the
> patchset.
>
> Maybe I'm wrong, but both options for fixing this properly (VFS API
> changes or a cephfs protocol modification) are too expensive as long as
> we have no real requesters with a good use case for idmapped mounts +
> UID/GID-based permissions. We already have a real and good use case for
> idmapped mounts in cephfs: LXD/LXC.
> IMHO, it's better to move this forward step by step, because VFS
> API/cephfs protocol changes will take a really long time, it's not
> obvious they are worth it, and it's not even clear that a VFS API change
> is the right way to deal with this problem. It seems to me that a cephfs
> protocol change is the more proper way here. At the same time, I fully
> understand that you are not happy about this option.
>
> Just to conclude: we don't have any cephfs degradation here. Users
> without idmapping will not be affected, and users who start using mount
> idmappings with cephfs will be aware of this limitation.
>
> [1] https://lore.kernel.org/all/20220105141023.vrrbfhti5apdvkz7@wittgenstein/
>
> Kind regards,
> Alex
>
> >
> > Thanks
> >
> > - Xiubo
> >
> > > -Greg
> > >
> > >> @Greg
> > >>
> > >> For lookup requests we can't get the mapped UID/GID from the idmapping
> > >> as we do for all the other requests, which the MDS permission check
> > >> needs. Is it okay to disable the check for this case? I am afraid this
> > >> will break the MDS permissions logic.
> > >>
> > >> Any idea ?
> > >>
> > >> Thanks
> > >>
> > >> - Xiubo
> > >>
> > >>
> > >>> Kind regards,
> > >>> Alex
> > >>>
> >
Alexander Mikhalitsyn June 15, 2023, 11:05 a.m. UTC | #13
On Thu, Jun 15, 2023 at 7:08 AM Xiubo Li <xiubli@redhat.com> wrote:
>
>
> On 6/14/23 20:34, Aleksandr Mikhalitsyn wrote:
> > On Wed, Jun 14, 2023 at 3:53 AM Xiubo Li <xiubli@redhat.com> wrote:
> > >
> > >
> > > On 6/13/23 22:53, Gregory Farnum wrote:
> > > > On Mon, Jun 12, 2023 at 6:43 PM Xiubo Li <xiubli@redhat.com> wrote:
> > > >>
> > > >> On 6/9/23 18:12, Aleksandr Mikhalitsyn wrote:
> > > >>> On Fri, Jun 9, 2023 at 12:00 PM Christian Brauner
> > <brauner@kernel.org> wrote:
> > > >>>> On Fri, Jun 09, 2023 at 10:59:19AM +0200, Aleksandr Mikhalitsyn
> > wrote:
> > > >>>>> On Fri, Jun 9, 2023 at 3:57 AM Xiubo Li <xiubli@redhat.com> wrote:
> > > >>>>>> On 6/8/23 23:42, Alexander Mikhalitsyn wrote:
> > > >>>>>>> Dear friends,
> > > >>>>>>>
> > > >>>>>>> This patchset was originally developed by Christian Brauner
> > but I'll continue
> > > >>>>>>> to push it forward. Christian allowed me to do that :)
> > > >>>>>>>
> > > >>>>>>> This feature is already actively used/tested with LXD/LXC
> > project.
> > > >>>>>>>
> > > >>>>>>> Git tree (based on https://github.com/ceph/ceph-client.git
> > master):
> > > >>>>> Hi Xiubo!
> > > >>>>>
> > > >>>>>> Could you rebase these patches to 'testing' branch ?
> > > >>>>> Will do in -v6.
> > > >>>>>
> > > >>>>>> And you still have missed several places, for example the
> > following cases:
> > > >>>>>>
> > > >>>>>>
> > > >>>>>>       1    269  fs/ceph/addr.c <<ceph_netfs_issue_op_inline>>
> > > >>>>>>                 req = ceph_mdsc_create_request(mdsc,
> > CEPH_MDS_OP_GETATTR,
> > > >>>>>> mode);
> > > >>>>> +
> > > >>>>>
> > > >>>>>>       2    389  fs/ceph/dir.c <<ceph_readdir>>
> > > >>>>>>                 req = ceph_mdsc_create_request(mdsc, op,
> > USE_AUTH_MDS);
> > > >>>>> +
> > > >>>>>
> > > >>>>>>       3    789  fs/ceph/dir.c <<ceph_lookup>>
> > > >>>>>>                 req = ceph_mdsc_create_request(mdsc, op,
> > USE_ANY_MDS);
> > > >>>>> We don't have an idmapping passed to lookup from the VFS
> > layer. As I
> > > >>>>> mentioned before, it's just impossible now.
> > > >>>> ->lookup() doesn't deal with idmappings and really can't
> > otherwise you
> > > >>>> risk ending up with inode aliasing which is really not
> > something you
> > > >>>> want. IOW, you can't fill in inode->i_{g,u}id based on a mount's
> > > >>>> idmapping as inode->i_{g,u}id absolutely needs to be a
> > filesystem wide
> > > >>>> value. So better not even risk exposing the idmapping in there
> > at all.
> > > >>> Thanks for adding, Christian!
> > > >>>
> > > >>> I agree, every time when we use an idmapping we need to be
> > careful with
> > > >>> what we map. AFAIU, inode->i_{g,u}id should be based on the
> > filesystem
> > > >>> idmapping (not mount),
> > > >>> but in this case, Xiubo want's current_fs{u,g}id to be mapped
> > > >>> according to an idmapping.
> > > >>> Anyway, it's impossible at now and IMHO, until we don't have any
> > > >>> practical use case where
> > > >>> UID/GID-based path restriction is used in combination with idmapped
> > > >>> mounts it's not worth to
> > > >>> make such big changes in the VFS layer.
> > > >>>
> > > >>> May be I'm not right, but it seems like UID/GID-based path
> > restriction
> > > >>> is not a widespread
> > > >>> feature and I can hardly imagine it to be used with the container
> > > >>> workloads (for instance),
> > > >>> because it will require to always keep in sync MDS permissions
> > > >>> configuration with the
> > > >>> possible UID/GID ranges on the client. It looks like a nightmare
> > for sysadmin.
> > > >>> It is useful when cephfs is used as an external storage on the
> > host, but if you
> > > >>> share cephfs with a few containers with different user
> > namespaces idmapping...
> > > >> Hmm, while this will break the MDS permission check in cephfs then in
> > > >> lookup case. If we really couldn't support it we should make it to
> > > >> escape the check anyway or some OPs may fail and won't work as
> > expected.
> > > > I don't pretend to know the details of the VFS (or even our linux
> > > > client implementation), but I'm confused that this is apparently so
> > > > hard. It looks to me like we currently always fill in the "caller_uid"
> > > > with "from_kuid(&init_user_ns, req->r_cred->fsuid))". Is this actually
> > > > valid to begin with? If it is, why can't the uid mapping be applied on
> > > > that?
> > > >
> > > > As both the client and the server share authority over the inode's
> > > > state (including things like mode bits and owners), and need to do
> > > > permission checking, being able to tell the server the relevant actor
> > > > is inherently necessary. We also let admins restrict keys to
> > > > particular UID/GID combinations as they wish, and it's not the most
> > > > popular feature but it does get deployed. I would really expect a user
> > > > of UID mapping to be one of the *most* likely to employ such a
> > > > facility...maybe not with containers, but certainly end-user homedirs
> > > > and shared spaces.
> > > >
> > > > Disabling the MDS auth checks is really not an option. I guess we
> > > > could require any user employing idmapping to not be uid-restricted,
> > > > and set the anonymous UID (does that work, Xiubo, or was it the broken
> > > > one? In which case we'd have to default to root?). But that seems a
> > > > bit janky to me.
> > >
> > > Yeah, this also seems risky.
> > >
> > > Instead disabling the MDS auth checks there is another option, which is
> > > we can prevent  the kclient to be mounted or the idmapping to be
> > > applied. But this still have issues, such as what if admins set the MDS
> > > auth caps after idmap applied to the kclients ?
> >
> > Hi Xiubo,
> >
> > I thought about this too and came to the same conclusion, that UID/GID
> > based
> > restriction can be applied dynamically, so detecting it on mount-time
> > helps not so much.
> >
> For this you please raise one PR to ceph first to support this, and in
> the PR we can discuss more for the MDS auth caps. And after the PR
> getting merged then in this patch series you need to check the
> corresponding option or flag to determine whether could the idmap
> mounting succeed.

I'm sorry, but I don't understand what we want to support here. Do we
want to add some new ceph request that allows checking whether
UID/GID-based permissions are applied for a particular ceph client user?

Thanks,
Alex

>
> Thanks
>
> - Xiubo
>
>
> > >
> > > IMO there have 2 options: the best way is to fix this in VFS if
> > > possible. Else to add one option to disable the corresponding MDS auth
> > > caps in ceph if users want to support the idmap feature.
> >
> > Dear colleagues,
> > Dear Xiubo,
> >
> > Let me try to summarize the previous discussions about cephfs idmapped
> > mount support.
> >
> > This discussion about the need of caller's UID/GID mapping is started
> > from the first
> > version of this patchset in this [1] thread. Let'me quote Christian here:
> > > Since the idmapping is a property of the mount and not a property of the
> > > caller the caller's fs{g,u}id aren't mapped. What is mapped are the
> > > inode's i{g,u}id when accessed from a particular mount.
> > >
> > > The fs{g,u}id are only ever mapped when a new filesystem object is
> > > created. So if I have an idmapped mount that makes it so that files
> > > owned by 1000 on-disk appear to be owned by uid 0 then a user with uid 0
> > > creating a new file will create files with uid 1000 on-disk when going
> > > through that mount. For cephfs that'd be the uid we would be sending
> > > with creation requests as I've currently written it.
> >
> > This is a key part of this discussion. Idmapped mounts is not a way to
> > proxify
> > caller's UID/GID, but idmapped mounts are designed to perform UID/GID
> > mapping
> > of inode's owner's UID/GID. Yes, these concepts look really-really
> > close and from
> > the first glance it looks like it's just an equivalent thing. But they
> > are not.
> >
> > From my understanding, if someone wants to verify caller UID/GID then
> > he should
> > take an unmapped UID/GID and verify it. It's not important if the
> > caller does something
> > through an idmapped mount or not, from_kuid(&init_user_ns,
> > req->r_cred->fsuid))
> > literally "UID of the caller in a root user namespace". But cephfs
> > mount can be used
> > from any user namespace (yes, cephfs can't be mounted in user
> > namespaces, but it
> > can be inherited during CLONE_NEWNS, or used as a detached mount with
> > open_tree/move_mount).
> > What I want to say by providing this example is that even now, without
> > idmapped mounts
> > we have kinda close problem, that UID/GID based restriction will be
> > based on the host's (!),
> > root user namespace, UID/GID-s even if the caller sits inside the user
> > namespace. And we don't care,
> > right? Why it's a problem with an idmapped mounts? If someone wants to
> > control caller's UID/GID
> > on the MDS side he just needs to take hosts UID/GIDs and use them in
> > permission rules. That's it.
> >
> > Next point is that technically idmapped mounts don't break anything,
> > if someone starts using
> > idmapped mounts with UID/GID-based restrictions he will get -EACCESS.
> > Why is this a problem?
> > A user will check configuration, read the clarification in the
> > documentation about idmapped mounts
> > in cephfs and find a warning that these are not fully compatible
> > things right now.
> >
> > IMHO, there is only one real problem (which makes UID/GID-based
> > restrictions is not fully compatible with
> > an idmapped mounts). Is that we have to map caller's UID/GID according
> > to a mount idmapping when we
> > creating a new inode (mknod, mkdir, symlink, open(O_CREAT)). But it's
> > only because the caller's UID/GIDs are
> > used as the owner's UID/GID for newly created inode. Ideally, we need
> > to have two fields in ceph request,
> > one for a caller's UID/GID and another one for inode owner UID/GID.
> > But this requires cephfs protocol modification
> > (yes, it's a bit painful. But global VFS changes are painful too!). As
> > Christian pointed this is a reason why
> > he went this way in the first patchset version.
> >
> > Maybe I'm not right, but both options to properly fix that VFS API
> > changes or cephfs protocol modification
> > are too expensive until we don't have a real requestors with a good
> > use case for idmapped mounts + UID/GID
> > based permissions. We already have a real and good use case for
> > idmapped mounts in Cephfs for LXD/LXC.
> > IMHO, it's better to move this thing forward step by step, because VFS
> > API/cephfs protocol changes will
> > take a really big amount of time and it's not obvious that it's worth
> > it, moreover it's not even clear that VFS API
> > change is the right way to deal with this problem. It seems to me that
> > Cephfs protocol change seems like a
> > more proper way here. At the same time I fully understand that you are
> > not happy about this option.
> >
> > Just to conclude, we don't have any kind of cephfs degradation here,
> > all users without idmapping will not be affected,
> > all users who start using mount idmappings with cephfs will be aware
> > of this limitation.
> >
> > [1]
> > https://lore.kernel.org/all/20220105141023.vrrbfhti5apdvkz7@wittgenstein/
> >
> > Kind regards,
> > Alex
> >
> > >
> > > Thanks
> > >
> > > - Xiubo
> > >
> > > > -Greg
> > > >
> > > >> @Greg
> > > >>
> > > >> For the lookup requests the idmapping couldn't get the mapped UID/GID
> > > >> just like all the other requests, which is needed by the MDS
> > permission
> > > >> check. Is that okay to make it disable the check for this case ? I am
> > > >> afraid this will break the MDS permssions logic.
> > > >>
> > > >> Any idea ?
> > > >>
> > > >> Thanks
> > > >>
> > > >> - Xiubo
> > > >>
> > > >>
> > > >>> Kind regards,
> > > >>> Alex
> > > >>>
> > >
>
Xiubo Li June 15, 2023, 12:29 p.m. UTC | #14
[...]

 > > >
 > > > I thought about this too and came to the same conclusion, that 
UID/GID
 > > > based
 > > > restriction can be applied dynamically, so detecting it on mount-time
 > > > helps not so much.
 > > >
 > > For this you please raise one PR to ceph first to support this, and in
 > > the PR we can discuss more for the MDS auth caps. And after the PR
 > > getting merged then in this patch series you need to check the
 > > corresponding option or flag to determine whether could the idmap
 > > mounting succeed.
 >
 > I'm sorry but I don't understand what we want to support here. Do we 
want to
 > add some new ceph request that allows to check if UID/GID-based
 > permissions are applied for
 > a particular ceph client user?

IMO we should prevent users from setting UID/GID-based MDS auth caps on
the ceph side, and users should know what has happened.

Once users want to use idmapped mounts, they should know that the MDS
auth caps won't work anymore.

Thanks

- Xiubo
Alexander Mikhalitsyn June 15, 2023, 12:54 p.m. UTC | #15
On Thu, Jun 15, 2023 at 2:29 PM Xiubo Li <xiubli@redhat.com> wrote:
>
> [...]
>
>  > > >
>  > > > I thought about this too and came to the same conclusion, that
> UID/GID
>  > > > based
>  > > > restriction can be applied dynamically, so detecting it on mount-time
>  > > > helps not so much.
>  > > >
>  > > For this you please raise one PR to ceph first to support this, and in
>  > > the PR we can discuss more for the MDS auth caps. And after the PR
>  > > getting merged then in this patch series you need to check the
>  > > corresponding option or flag to determine whether could the idmap
>  > > mounting succeed.
>  >
>  > I'm sorry but I don't understand what we want to support here. Do we
> want to
>  > add some new ceph request that allows to check if UID/GID-based
>  > permissions are applied for
>  > a particular ceph client user?
>
> IMO we should prevent users to set UID/GID-based MDS auth caps from ceph
> side. And users should know what has happened.

OK, so we want to restrict setting UID/GID-based permissions if there is
an idmapped mount on the client. IMHO, idmapped mounts are truly a
client-side feature, and a server-side modification looks a bit strange
to me.

>
> Once users want to support the idmap mounts they should know that the
> MDS auth caps won't work anymore.

They will work, but the permission rule configuration should include
non-mapped UID/GIDs.
As I mentioned here [1], that's already the case even without mount
idmappings.

It would be great to discuss this as a concept and synchronize our
understanding of it before going into server-side modifications.

[1] https://lore.kernel.org/lkml/CAEivzxcBBJV6DOGzy5S7=TUjrXZfVaGaJX5z7WFzYq1w4MdtiA@mail.gmail.com/

Kind regards,
Alex

>
> Thanks
>
> - Xiubo
>
Alexander Mikhalitsyn June 21, 2023, 4:55 p.m. UTC | #16
On Thu, Jun 15, 2023 at 2:54 PM Aleksandr Mikhalitsyn
<aleksandr.mikhalitsyn@canonical.com> wrote:
>
> On Thu, Jun 15, 2023 at 2:29 PM Xiubo Li <xiubli@redhat.com> wrote:
> >
> > [...]
> >
> >  > > >
> >  > > > I thought about this too and came to the same conclusion, that
> > UID/GID
> >  > > > based
> >  > > > restriction can be applied dynamically, so detecting it on mount-time
> >  > > > helps not so much.
> >  > > >
> >  > > For this you please raise one PR to ceph first to support this, and in
> >  > > the PR we can discuss more for the MDS auth caps. And after the PR
> >  > > getting merged then in this patch series you need to check the
> >  > > corresponding option or flag to determine whether could the idmap
> >  > > mounting succeed.
> >  >
> >  > I'm sorry but I don't understand what we want to support here. Do we
> > want to
> >  > add some new ceph request that allows to check if UID/GID-based
> >  > permissions are applied for
> >  > a particular ceph client user?
> >
> > IMO we should prevent users to set UID/GID-based MDS auth caps from ceph
> > side. And users should know what has happened.
>
> ok, we want to restrict setting of UID/GID-based permissions if there is an
> idmapped mount on the client. IMHO, idmapping mounts is truly a
> client-side feature
> and server modification looks a bit strange to me.
>
> >
> > Once users want to support the idmap mounts they should know that the
> > MDS auth caps won't work anymore.
>
> They will work, but permission rule configuration should include
> non-mapped UID/GID-s.
> As I mentioned here [1] it's already the case even without mount idmappings.
>
> It would be great to discuss this thing as a concept and synchronize
> our understanding of this
> before going into modification of a server side.

Hi everyone,

I've spent some extra time analyzing the interaction between the
UID/GID-based path restriction feature and idmapped mounts, and I'm
still fully convinced that we have two ways here:
I. Extend the cephfs protocol (add new fields to the request arguments
in "union ceph_mds_request_args").
There should be 2 new fields, for the file/directory owner's UID and
GID respectively. With the help of these new fields, we will be able to
separate the permission check logic (which is based on the caller's
UID/GID and should not be affected by the mount's idmapping at all!)
from the file owner concept, which does involve the mount's idmapping.
II. Ignore this issue as non-critical, because:
- idmapped mounts can be created only by privileged users
(CAP_SYS_ADMIN in the superblock owner's user namespace (currently,
it's always the initial user namespace!))
- the surface of the problem is really small (the combination of an
idmapped mount + UID/GID path-based restriction)
- the problem *can* be worked around by an appropriate permission
configuration (UID/GID permissions should be configured to include both
the mount's idmapped UIDs/GIDs and the host ones).

Earlier I highlighted some existing problems with this UID/GID
path-based restriction feature:
- [kernel client] UID/GIDs are always sent to the server from the
initial user namespace (while the caller can be inside a container with
a non-initial user namespace)
- [fuse client] UID/GIDs are always mapped into the fuse mount's
superblock user namespace
(https://github.com/ceph/ceph-client/blob/409e873ea3c1fd3079909718bbeb06ac1ec7f38b/fs/fuse/dev.c#L138)
This means we already have an analogous inconsistency between the two
clients (userspace and kernel).
- [kernel client] We always take the current user's credentials instead
of using (struct file)->f_cred, as is usually done for other filesystems
Please don't get me wrong: I'm not saying that we should be lazy and
ignore the issue entirely. I'm saying that this issue is not local and
is not caused by idmapped mounts; if there is something to do on the
cephfs side, we need to extend the protocol, and it's not obvious that
it is worth it. My understanding is that we need to document this
limitation in the cephfs kernel client documentation and explain how to
configure UID/GID path-based permissions with idmapped mounts to work
around it.
And if we get requests from users who are interested in having this
supported in the right way, then we can do all of this crazy stuff by
extending the ceph protocol. Btw, I've checked when "union
ceph_mds_request_args" was last extended. It was 6 (!) years ago :)

Kind regards,
Alex

>
> [1] https://lore.kernel.org/lkml/CAEivzxcBBJV6DOGzy5S7=TUjrXZfVaGaJX5z7WFzYq1w4MdtiA@mail.gmail.com/
>
> Kind regards,
> Alex
>
> >
> > Thanks
> >
> > - Xiubo
> >
Xiubo Li June 24, 2023, 1:36 a.m. UTC | #17
[...]

 > > >
 > > > I thought about this too and came to the same conclusion, that 
UID/GID
 > > > based
 > > > restriction can be applied dynamically, so detecting it on mount-time
 > > > helps not so much.
 > > >
 > > For this you please raise one PR to ceph first to support this, and in
 > > the PR we can discuss more for the MDS auth caps. And after the PR
 > > getting merged then in this patch series you need to check the
 > > corresponding option or flag to determine whether could the idmap
 > > mounting succeed.
 >
 > I'm sorry but I don't understand what we want to support here. Do we 
want to
 > add some new ceph request that allows to check if UID/GID-based
 > permissions are applied for
 > a particular ceph client user?

IMO we should prevent users from setting UID/GID-based permission caps
on the ceph side.

As far as I know, there is currently no way to prevent users from
setting MDS auth caps; IMO, on the ceph side we need at least a flag or
option to disable this once users want this fs cluster to serve the
idmapped mounts use case.

Thanks

- Xiubo
Alexander Mikhalitsyn June 24, 2023, 7:11 a.m. UTC | #18
On Sat, Jun 24, 2023 at 3:37 AM Xiubo Li <xiubli@redhat.com> wrote:
>
> [...]
>
>  > > >
>  > > > I thought about this too and came to the same conclusion, that
> UID/GID
>  > > > based
>  > > > restriction can be applied dynamically, so detecting it on mount-time
>  > > > helps not so much.
>  > > >
>  > > For this you please raise one PR to ceph first to support this, and in
>  > > the PR we can discuss more for the MDS auth caps. And after the PR
>  > > getting merged then in this patch series you need to check the
>  > > corresponding option or flag to determine whether could the idmap
>  > > mounting succeed.
>  >
>  > I'm sorry but I don't understand what we want to support here. Do we
> want to
>  > add some new ceph request that allows to check if UID/GID-based
>  > permissions are applied for
>  > a particular ceph client user?
>
> IMO we should prevent user to set UID/GID-based permisions caps from
> ceph side.
>
> As I know currently there is no way to prevent users to set MDS auth
> caps, IMO in ceph side at least we need one flag or option to disable
> this once users want this fs cluster sever for idmap mounts use case.

How should this be visible from the user side? We introduce a new
kernel client mount option, like "nomdscaps", then pass a flag to the
MDS, and the MDS checks that no MDS auth permissions are applied (at
mount time) and prevents them from being applied later while the
session is active. Like that?

At the same time, I'm thinking about a protocol extension that adds 2
additional fields for the UID/GID. This would allow handling everything
correctly. I wanted to avoid any changes to the protocol or the server
side. But if we want to change the MDS side anyway, maybe it's better
to go this way?

Thanks,
Alex

>
> Thanks
>
> - Xiubo
>
Xiubo Li June 26, 2023, 1:04 a.m. UTC | #19
On 6/15/23 20:54, Aleksandr Mikhalitsyn wrote:
> On Thu, Jun 15, 2023 at 2:29 PM Xiubo Li <xiubli@redhat.com> wrote:
>> [...]
>>
>>   > > >
>>   > > > I thought about this too and came to the same conclusion, that
>> UID/GID
>>   > > > based
>>   > > > restriction can be applied dynamically, so detecting it on mount-time
>>   > > > helps not so much.
>>   > > >
>>   > > For this you please raise one PR to ceph first to support this, and in
>>   > > the PR we can discuss more for the MDS auth caps. And after the PR
>>   > > getting merged then in this patch series you need to check the
>>   > > corresponding option or flag to determine whether could the idmap
>>   > > mounting succeed.
>>   >
>>   > I'm sorry but I don't understand what we want to support here. Do we
>> want to
>>   > add some new ceph request that allows to check if UID/GID-based
>>   > permissions are applied for
>>   > a particular ceph client user?
>>
>> IMO we should prevent users to set UID/GID-based MDS auth caps from ceph
>> side. And users should know what has happened.
> ok, we want to restrict setting of UID/GID-based permissions if there is an
> idmapped mount on the client. IMHO, idmapping mounts is truly a
> client-side feature
> and server modification looks a bit strange to me.

Yeah, agreed.

But without fixing the lookup issue on the kclient side this will be
buggy and may make some tests fail too.

We need to support this more smoothly.

Thanks

- Xiubo

>> Once users want to support the idmap mounts they should know that the
>> MDS auth caps won't work anymore.
> They will work, but permission rule configuration should include
> non-mapped UID/GID-s.
> As I mentioned here [1] it's already the case even without mount idmappings.
>
> It would be great to discuss this thing as a concept and synchronize
> our understanding of this
> before going into modification of a server side.
>
> [1] https://lore.kernel.org/lkml/CAEivzxcBBJV6DOGzy5S7=TUjrXZfVaGaJX5z7WFzYq1w4MdtiA@mail.gmail.com/
>
> Kind regards,
> Alex
>
>> Thanks
>>
>> - Xiubo
>>
Xiubo Li June 26, 2023, 2:12 a.m. UTC | #20
On 6/24/23 15:11, Aleksandr Mikhalitsyn wrote:
> On Sat, Jun 24, 2023 at 3:37 AM Xiubo Li <xiubli@redhat.com> wrote:
>> [...]
>>
>>   > > >
>>   > > > I thought about this too and came to the same conclusion, that
>> UID/GID
>>   > > > based
>>   > > > restriction can be applied dynamically, so detecting it on mount-time
>>   > > > helps not so much.
>>   > > >
>>   > > For this you please raise one PR to ceph first to support this, and in
>>   > > the PR we can discuss more for the MDS auth caps. And after the PR
>>   > > getting merged then in this patch series you need to check the
>>   > > corresponding option or flag to determine whether could the idmap
>>   > > mounting succeed.
>>   >
>>   > I'm sorry but I don't understand what we want to support here. Do we
>> want to
>>   > add some new ceph request that allows to check if UID/GID-based
>>   > permissions are applied for
>>   > a particular ceph client user?
>>
>> IMO we should prevent user to set UID/GID-based permisions caps from
>> ceph side.
>>
>> As I know currently there is no way to prevent users to set MDS auth
>> caps, IMO in ceph side at least we need one flag or option to disable
>> this once users want this fs cluster sever for idmap mounts use case.
> How this should be visible from the user side? We introducing a new
> kernel client mount option,
> like "nomdscaps", then pass flag to the MDS and MDS should check that
> MDS auth permissions
> are not applied (on the mount time) and prevent them from being
> applied later while session is active. Like that?
>
> At the same time I'm thinking about protocol extension that adds 2
> additional fields for UID/GID. This will allow to correctly
> handle everything. I wanted to avoid any changes to the protocol or
> server-side things. But if we want to change MDS side,
> maybe it's better then to go this way?

There is another way:

Each client will have a dedicated set of client auth caps, something like:

client.foo
   key: *key*
   caps: [mds] allow r, allow rw path=/bar
   caps: [mon] allow r
   caps: [osd] allow rw tag cephfs data=cephfs_a

When mounting with this client and idmap enabled, we can just check the 
above [mds] caps: if any UID/GID-based permissions are set, then 
fail the mounting.

That means this kind of client couldn't be mounted with idmap enabled.

We also need to make sure that once there is a mount with idmap enabled, 
the corresponding client caps couldn't have UID/GID-based permissions 
appended. This needs a patch in ceph anyway IMO.
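The mount-time check described above could look roughly like this sketch (Python for brevity; the `uid=`/`gids=` qualifier spelling follows ceph's MDS caps syntax, and the function names are made up for illustration):

```python
def mds_caps_use_uid_gid(mds_caps: str) -> bool:
    """Return True if any grant in an [mds] caps string carries a
    UID/GID-based qualifier (ceph spells these 'uid=' / 'gids=')."""
    return any(
        token.startswith(("uid=", "gids="))
        for grant in mds_caps.split(",")
        for token in grant.split()
    )

def may_mount_idmapped(mds_caps: str) -> bool:
    # The proposed rule: refuse an idmapped mount whenever the
    # client's MDS caps contain UID/GID-based restrictions.
    return not mds_caps_use_uid_gid(mds_caps)

# The example caps from the thread carry no UID/GID qualifiers ...
assert may_mount_idmapped("allow r, allow rw path=/bar")
# ... while a UID-restricted grant would block the idmapped mount.
assert not may_mount_idmapped("allow rw path=/bar uid=1000 gids=1000")
```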

Thanks

- Xiubo





>
> Thanks,
> Alex
>
>> Thanks
>>
>> - Xiubo
>>
Alexander Mikhalitsyn June 26, 2023, 11:23 a.m. UTC | #21
On Mon, Jun 26, 2023 at 4:12 AM Xiubo Li <xiubli@redhat.com> wrote:
>
>
> On 6/24/23 15:11, Aleksandr Mikhalitsyn wrote:
> > On Sat, Jun 24, 2023 at 3:37 AM Xiubo Li <xiubli@redhat.com> wrote:
> >> [...]
> >>
> >>   > > >
> >>   > > > I thought about this too and came to the same conclusion, that
> >> UID/GID
> >>   > > > based
> >>   > > > restriction can be applied dynamically, so detecting it on mount-time
> >>   > > > helps not so much.
> >>   > > >
> >>   > > For this you please raise one PR to ceph first to support this, and in
> >>   > > the PR we can discuss more for the MDS auth caps. And after the PR
> >>   > > getting merged then in this patch series you need to check the
> >>   > > corresponding option or flag to determine whether could the idmap
> >>   > > mounting succeed.
> >>   >
> >>   > I'm sorry but I don't understand what we want to support here. Do we
> >> want to
> >>   > add some new ceph request that allows to check if UID/GID-based
> >>   > permissions are applied for
> >>   > a particular ceph client user?
> >>
> >> IMO we should prevent user to set UID/GID-based permisions caps from
> >> ceph side.
> >>
> >> As I know currently there is no way to prevent users to set MDS auth
> >> caps, IMO in ceph side at least we need one flag or option to disable
> >> this once users want this fs cluster sever for idmap mounts use case.
> > How this should be visible from the user side? We introducing a new
> > kernel client mount option,
> > like "nomdscaps", then pass flag to the MDS and MDS should check that
> > MDS auth permissions
> > are not applied (on the mount time) and prevent them from being
> > applied later while session is active. Like that?
> >
> > At the same time I'm thinking about protocol extension that adds 2
> > additional fields for UID/GID. This will allow to correctly
> > handle everything. I wanted to avoid any changes to the protocol or
> > server-side things. But if we want to change MDS side,
> > maybe it's better then to go this way?

Hi Xiubo,

>
> There is another way:
>
> For each client it will have a dedicated client auth caps, something like:
>
> client.foo
>    key: *key*
>    caps: [mds] allow r, allow rw path=/bar
>    caps: [mon] allow r
>    caps: [osd] allow rw tag cephfs data=cephfs_a

Do we have any infrastructure to get this caps list on the client side
right now?
(I've taken a quick look through the code and can't find anything
related to this.)

>
> When mounting this client with idmap enabled, then we can just check the
> above [mds] caps, if there has any UID/GID based permissions set, then
> fail the mounting.

understood

>
> That means this kind client couldn't be mounted with idmap enabled.
>
> Also we need to make sure that once there is a mount with idmap enabled,
> the corresponding client caps couldn't be append the UID/GID based
> permissions. This need a patch in ceph anyway IMO.

So, yeah, we would need to effectively block cephx permission changes
while there is a client with an active idmapped mount. That sounds like
something that requires massive changes on the server side.

At the same time this will just block users from using idmapped mounts
along with UID/GID restrictions.

If you want me to change the server side anyway, isn't it better just to
extend the cephfs protocol to properly handle UID/GIDs with idmapped
mounts? (This was originally proposed by Christian.)
What we need to do here is add separate UID/GID fields to the ceph
requests that create new inodes (like mknod, symlink, etc.).

Kind regards,
Alex

>
> Thanks
>
> - Xiubo
>
>
>
>
>
> >
> > Thanks,
> > Alex
> >
> >> Thanks
> >>
> >> - Xiubo
> >>
>
Alexander Mikhalitsyn June 26, 2023, 11:49 a.m. UTC | #22
On Mon, Jun 26, 2023 at 1:23 PM Aleksandr Mikhalitsyn
<aleksandr.mikhalitsyn@canonical.com> wrote:
>
> On Mon, Jun 26, 2023 at 4:12 AM Xiubo Li <xiubli@redhat.com> wrote:
> >
> >
> > On 6/24/23 15:11, Aleksandr Mikhalitsyn wrote:
> > > On Sat, Jun 24, 2023 at 3:37 AM Xiubo Li <xiubli@redhat.com> wrote:
> > >> [...]
> > >>
> > >>   > > >
> > >>   > > > I thought about this too and came to the same conclusion, that
> > >> UID/GID
> > >>   > > > based
> > >>   > > > restriction can be applied dynamically, so detecting it on mount-time
> > >>   > > > helps not so much.
> > >>   > > >
> > >>   > > For this you please raise one PR to ceph first to support this, and in
> > >>   > > the PR we can discuss more for the MDS auth caps. And after the PR
> > >>   > > getting merged then in this patch series you need to check the
> > >>   > > corresponding option or flag to determine whether could the idmap
> > >>   > > mounting succeed.
> > >>   >
> > >>   > I'm sorry but I don't understand what we want to support here. Do we
> > >> want to
> > >>   > add some new ceph request that allows to check if UID/GID-based
> > >>   > permissions are applied for
> > >>   > a particular ceph client user?
> > >>
> > >> IMO we should prevent user to set UID/GID-based permisions caps from
> > >> ceph side.
> > >>
> > >> As I know currently there is no way to prevent users to set MDS auth
> > >> caps, IMO in ceph side at least we need one flag or option to disable
> > >> this once users want this fs cluster sever for idmap mounts use case.
> > > How this should be visible from the user side? We introducing a new
> > > kernel client mount option,
> > > like "nomdscaps", then pass flag to the MDS and MDS should check that
> > > MDS auth permissions
> > > are not applied (on the mount time) and prevent them from being
> > > applied later while session is active. Like that?
> > >
> > > At the same time I'm thinking about protocol extension that adds 2
> > > additional fields for UID/GID. This will allow to correctly
> > > handle everything. I wanted to avoid any changes to the protocol or
> > > server-side things. But if we want to change MDS side,
> > > maybe it's better then to go this way?
>
> Hi Xiubo,
>
> >
> > There is another way:
> >
> > For each client it will have a dedicated client auth caps, something like:
> >
> > client.foo
> >    key: *key*
> >    caps: [mds] allow r, allow rw path=/bar
> >    caps: [mon] allow r
> >    caps: [osd] allow rw tag cephfs data=cephfs_a
>
> Do we have any infrastructure to get this caps list on the client side
> right now?
> (I've taken a quick look through the code and can't find anything
> related to this.)

I've found your PR that looks related: https://github.com/ceph/ceph/pull/48027

>
> >
> > When mounting this client with idmap enabled, then we can just check the
> > above [mds] caps, if there has any UID/GID based permissions set, then
> > fail the mounting.
>
> understood
>
> >
> > That means this kind client couldn't be mounted with idmap enabled.
> >
> > Also we need to make sure that once there is a mount with idmap enabled,
> > the corresponding client caps couldn't be append the UID/GID based
> > permissions. This need a patch in ceph anyway IMO.
>
> So, yeah we will need to effectively block cephx permission changes if
> there is a client mounted with
> an active idmapped mount. Sounds as something that require massive
> changes on the server side.
>
> At the same time this will just block users from using idmapped mounts
> along with UID/GID restrictions.
>
> If you want me to change server-side anyways, isn't it better just to
> extend cephfs protocol to properly
> handle UID/GIDs with idmapped mounts? (It was originally proposed by Christian.)
> What we need to do here is to add a separate UID/GID fields for ceph
> requests those are creating a new inodes
> (like mknod, symlink, etc).
>
> Kind regards,
> Alex
>
> >
> > Thanks
> >
> > - Xiubo
> >
> >
> >
> >
> >
> > >
> > > Thanks,
> > > Alex
> > >
> > >> Thanks
> > >>
> > >> - Xiubo
> > >>
> >
Xiubo Li July 4, 2023, 1:08 a.m. UTC | #23
Sorry, not sure why my last reply wasn't sent out.

Sending it again.


On 6/26/23 19:23, Aleksandr Mikhalitsyn wrote:
> On Mon, Jun 26, 2023 at 4:12 AM Xiubo Li<xiubli@redhat.com>  wrote:
>> On 6/24/23 15:11, Aleksandr Mikhalitsyn wrote:
>>> On Sat, Jun 24, 2023 at 3:37 AM Xiubo Li<xiubli@redhat.com>  wrote:
>>>> [...]
>>>>
>>>>    > > >
>>>>    > > > I thought about this too and came to the same conclusion, that
>>>> UID/GID
>>>>    > > > based
>>>>    > > > restriction can be applied dynamically, so detecting it on mount-time
>>>>    > > > helps not so much.
>>>>    > > >
>>>>    > > For this you please raise one PR to ceph first to support this, and in
>>>>    > > the PR we can discuss more for the MDS auth caps. And after the PR
>>>>    > > getting merged then in this patch series you need to check the
>>>>    > > corresponding option or flag to determine whether could the idmap
>>>>    > > mounting succeed.
>>>>    >
>>>>    > I'm sorry but I don't understand what we want to support here. Do we
>>>> want to
>>>>    > add some new ceph request that allows to check if UID/GID-based
>>>>    > permissions are applied for
>>>>    > a particular ceph client user?
>>>>
>>>> IMO we should prevent user to set UID/GID-based permisions caps from
>>>> ceph side.
>>>>
>>>> As I know currently there is no way to prevent users to set MDS auth
>>>> caps, IMO in ceph side at least we need one flag or option to disable
>>>> this once users want this fs cluster sever for idmap mounts use case.
>>> How this should be visible from the user side? We introducing a new
>>> kernel client mount option,
>>> like "nomdscaps", then pass flag to the MDS and MDS should check that
>>> MDS auth permissions
>>> are not applied (on the mount time) and prevent them from being
>>> applied later while session is active. Like that?
>>>
>>> At the same time I'm thinking about protocol extension that adds 2
>>> additional fields for UID/GID. This will allow to correctly
>>> handle everything. I wanted to avoid any changes to the protocol or
>>> server-side things. But if we want to change MDS side,
>>> maybe it's better then to go this way?
> Hi Xiubo,
>
>> There is another way:
>>
>> For each client it will have a dedicated client auth caps, something like:
>>
>> client.foo
>>     key: *key*
>>     caps: [mds] allow r, allow rw path=/bar
>>     caps: [mon] allow r
>>     caps: [osd] allow rw tag cephfs data=cephfs_a
> Do we have any infrastructure to get this caps list on the client side
> right now?
> (I've taken a quick look through the code and can't find anything
> related to this.)

I am afraid there is not.

But once the following ceph PR gets merged, it will be easy to do this:

https://github.com/ceph/ceph/pull/48027

This is still under testing.

>> When mounting this client with idmap enabled, then we can just check the
>> above [mds] caps, if there has any UID/GID based permissions set, then
>> fail the mounting.
> understood
>
>> That means this kind client couldn't be mounted with idmap enabled.
>>
>> Also we need to make sure that once there is a mount with idmap enabled,
>> the corresponding client caps couldn't be append the UID/GID based
>> permissions. This need a patch in ceph anyway IMO.
> So, yeah we will need to effectively block cephx permission changes if
> there is a client mounted with
> an active idmapped mount. Sounds as something that require massive
> changes on the server side.

Maybe not that much is needed; it should be simple IMO. But I am not 100% sure.

> At the same time this will just block users from using idmapped mounts
> along with UID/GID restrictions.
>
> If you want me to change server-side anyways, isn't it better just to
> extend cephfs protocol to properly
> handle UID/GIDs with idmapped mounts? (It was originally proposed by Christian.)
> What we need to do here is to add a separate UID/GID fields for ceph
> requests those are creating a new inodes
> (like mknod, symlink, etc).

BTW, could you explain it more? How could this resolve the issue we are 
discussing here?

Thanks

- Xiubo


>
> Kind regards,
> Alex
>
>> Thanks
>>
>> - Xiubo
>>
>>
>>
>>
>>
>>> Thanks,
>>> Alex
>>>
>>>> Thanks
>>>>
>>>> - Xiubo
>>>>
Xiubo Li July 4, 2023, 1:10 a.m. UTC | #24
On 6/26/23 19:49, Aleksandr Mikhalitsyn wrote:
> On Mon, Jun 26, 2023 at 1:23 PM Aleksandr Mikhalitsyn
> <aleksandr.mikhalitsyn@canonical.com> wrote:
>> On Mon, Jun 26, 2023 at 4:12 AM Xiubo Li <xiubli@redhat.com> wrote:
>>>
>>> On 6/24/23 15:11, Aleksandr Mikhalitsyn wrote:
>>>> On Sat, Jun 24, 2023 at 3:37 AM Xiubo Li <xiubli@redhat.com> wrote:
>>>>> [...]
>>>>>
>>>>>    > > >
>>>>>    > > > I thought about this too and came to the same conclusion, that
>>>>> UID/GID
>>>>>    > > > based
>>>>>    > > > restriction can be applied dynamically, so detecting it on mount-time
>>>>>    > > > helps not so much.
>>>>>    > > >
>>>>>    > > For this you please raise one PR to ceph first to support this, and in
>>>>>    > > the PR we can discuss more for the MDS auth caps. And after the PR
>>>>>    > > getting merged then in this patch series you need to check the
>>>>>    > > corresponding option or flag to determine whether could the idmap
>>>>>    > > mounting succeed.
>>>>>    >
>>>>>    > I'm sorry but I don't understand what we want to support here. Do we
>>>>> want to
>>>>>    > add some new ceph request that allows to check if UID/GID-based
>>>>>    > permissions are applied for
>>>>>    > a particular ceph client user?
>>>>>
>>>>> IMO we should prevent user to set UID/GID-based permisions caps from
>>>>> ceph side.
>>>>>
>>>>> As I know currently there is no way to prevent users to set MDS auth
>>>>> caps, IMO in ceph side at least we need one flag or option to disable
>>>>> this once users want this fs cluster sever for idmap mounts use case.
>>>> How this should be visible from the user side? We introducing a new
>>>> kernel client mount option,
>>>> like "nomdscaps", then pass flag to the MDS and MDS should check that
>>>> MDS auth permissions
>>>> are not applied (on the mount time) and prevent them from being
>>>> applied later while session is active. Like that?
>>>>
>>>> At the same time I'm thinking about protocol extension that adds 2
>>>> additional fields for UID/GID. This will allow to correctly
>>>> handle everything. I wanted to avoid any changes to the protocol or
>>>> server-side things. But if we want to change MDS side,
>>>> maybe it's better then to go this way?
>> Hi Xiubo,
>>
>>> There is another way:
>>>
>>> For each client it will have a dedicated client auth caps, something like:
>>>
>>> client.foo
>>>     key: *key*
>>>     caps: [mds] allow r, allow rw path=/bar
>>>     caps: [mon] allow r
>>>     caps: [osd] allow rw tag cephfs data=cephfs_a
>> Do we have any infrastructure to get this caps list on the client side
>> right now?
>> (I've taken a quick look through the code and can't find anything
>> related to this.)
> I've found your PR that looks related https://github.com/ceph/ceph/pull/48027

Yeah, after this we need to do some extra work in the kclient, and then 
it will be easy to check the caps, I think.

Thanks

- Xiubo

>>> When mounting this client with idmap enabled, then we can just check the
>>> above [mds] caps, if there has any UID/GID based permissions set, then
>>> fail the mounting.
>> understood
>>
>>> That means this kind client couldn't be mounted with idmap enabled.
>>>
>>> Also we need to make sure that once there is a mount with idmap enabled,
>>> the corresponding client caps couldn't be append the UID/GID based
>>> permissions. This need a patch in ceph anyway IMO.
>> So, yeah we will need to effectively block cephx permission changes if
>> there is a client mounted with
>> an active idmapped mount. Sounds as something that require massive
>> changes on the server side.
>>
>> At the same time this will just block users from using idmapped mounts
>> along with UID/GID restrictions.
>>
>> If you want me to change server-side anyways, isn't it better just to
>> extend cephfs protocol to properly
>> handle UID/GIDs with idmapped mounts? (It was originally proposed by Christian.)
>> What we need to do here is to add a separate UID/GID fields for ceph
>> requests those are creating a new inodes
>> (like mknod, symlink, etc).
>>
>> Kind regards,
>> Alex
>>
>>> Thanks
>>>
>>> - Xiubo
>>>
>>>
>>>
>>>
>>>
>>>> Thanks,
>>>> Alex
>>>>
>>>>> Thanks
>>>>>
>>>>> - Xiubo
>>>>>
Alexander Mikhalitsyn July 14, 2023, 12:57 p.m. UTC | #25
On Tue, Jul 4, 2023 at 3:09 AM Xiubo Li <xiubli@redhat.com> wrote:
>
> Sorry, not sure, why my last reply wasn't sent out.
>
> Do it again.
>
>
> On 6/26/23 19:23, Aleksandr Mikhalitsyn wrote:
> > On Mon, Jun 26, 2023 at 4:12 AM Xiubo Li<xiubli@redhat.com>  wrote:
> >> On 6/24/23 15:11, Aleksandr Mikhalitsyn wrote:
> >>> On Sat, Jun 24, 2023 at 3:37 AM Xiubo Li<xiubli@redhat.com>  wrote:
> >>>> [...]
> >>>>
> >>>>    > > >
> >>>>    > > > I thought about this too and came to the same conclusion, that
> >>>> UID/GID
> >>>>    > > > based
> >>>>    > > > restriction can be applied dynamically, so detecting it on mount-time
> >>>>    > > > helps not so much.
> >>>>    > > >
> >>>>    > > For this you please raise one PR to ceph first to support this, and in
> >>>>    > > the PR we can discuss more for the MDS auth caps. And after the PR
> >>>>    > > getting merged then in this patch series you need to check the
> >>>>    > > corresponding option or flag to determine whether could the idmap
> >>>>    > > mounting succeed.
> >>>>    >
> >>>>    > I'm sorry but I don't understand what we want to support here. Do we
> >>>> want to
> >>>>    > add some new ceph request that allows to check if UID/GID-based
> >>>>    > permissions are applied for
> >>>>    > a particular ceph client user?
> >>>>
> >>>> IMO we should prevent user to set UID/GID-based permisions caps from
> >>>> ceph side.
> >>>>
> >>>> As I know currently there is no way to prevent users to set MDS auth
> >>>> caps, IMO in ceph side at least we need one flag or option to disable
> >>>> this once users want this fs cluster sever for idmap mounts use case.
> >>> How this should be visible from the user side? We introducing a new
> >>> kernel client mount option,
> >>> like "nomdscaps", then pass flag to the MDS and MDS should check that
> >>> MDS auth permissions
> >>> are not applied (on the mount time) and prevent them from being
> >>> applied later while session is active. Like that?
> >>>
> >>> At the same time I'm thinking about protocol extension that adds 2
> >>> additional fields for UID/GID. This will allow to correctly
> >>> handle everything. I wanted to avoid any changes to the protocol or
> >>> server-side things. But if we want to change MDS side,
> >>> maybe it's better then to go this way?
> > Hi Xiubo,
> >
> >> There is another way:
> >>
> >> For each client it will have a dedicated client auth caps, something like:
> >>
> >> client.foo
> >>     key: *key*
> >>     caps: [mds] allow r, allow rw path=/bar
> >>     caps: [mon] allow r
> >>     caps: [osd] allow rw tag cephfs data=cephfs_a
> > Do we have any infrastructure to get this caps list on the client side
> > right now?
> > (I've taken a quick look through the code and can't find anything
> > related to this.)
>
> I am afraid there is no.
>
> But just after the following ceph PR gets merged it will be easy to do this:
>
> https://github.com/ceph/ceph/pull/48027
>
> This is still under testing.
>
> >> When mounting this client with idmap enabled, then we can just check the
> >> above [mds] caps, if there has any UID/GID based permissions set, then
> >> fail the mounting.
> > understood
> >
> >> That means this kind client couldn't be mounted with idmap enabled.
> >>
> >> Also we need to make sure that once there is a mount with idmap enabled,
> >> the corresponding client caps couldn't be append the UID/GID based
> >> permissions. This need a patch in ceph anyway IMO.
> > So, yeah we will need to effectively block cephx permission changes if
> > there is a client mounted with
> > an active idmapped mount. Sounds as something that require massive
> > changes on the server side.
>
> Maybe no need much, it should be simple IMO. But I am not 100% sure.
>
> > At the same time this will just block users from using idmapped mounts
> > along with UID/GID restrictions.
> >
> > If you want me to change server-side anyways, isn't it better just to
> > extend cephfs protocol to properly
> > handle UID/GIDs with idmapped mounts? (It was originally proposed by Christian.)
> > What we need to do here is to add a separate UID/GID fields for ceph
> > requests those are creating a new inodes
> > (like mknod, symlink, etc).

Dear Xiubo,

I'm sorry for the delayed reply, I missed this message accidentally.

>
> BTW, could you explain it more ? How could this resolve the issue we are
> discussing here ?

This was briefly mentioned by Christian here:
https://lore.kernel.org/all/20220105141023.vrrbfhti5apdvkz7@wittgenstein/#t
Let me describe it in detail.

In the current approach we apply the mount idmapping to the
head->caller_{uid,gid} fields so that mkdir/mknod/symlink operations
set the proper inode owner UID/GID in accordance with the idmapping.

This creates a problem for the path-based UID/GID restriction mechanism,
because it uses the head->caller_{uid,gid} fields to check whether a
UID/GID is permitted or not.

So the problem is that we have one field in the ceph request serving two
different needs: controlling permissions and setting the inode owner.
Christian pointed out that the saner way is to modify the ceph protocol
and add separate fields to store the inode owner UID/GID; only these
fields would be idmapped, while head->caller_{uid,gid} would stay
untouched.

With this approach, idmapped mounts would not affect UID/GID-based
permission rules at all.
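The difference between the two approaches can be modelled with a tiny sketch; the idmapping, the UID numbers, and the helper names are all illustrative only:

```python
# A toy model of the two approaches. The idmapping maps UID 1000 on
# the host to UID 0 inside the mount; ALLOWED_UIDS stands in for a
# UID-based MDS auth cap on the server.
IDMAP = {1000: 0}          # host uid -> mapped uid
ALLOWED_UIDS = {1000}      # UIDs permitted by the MDS auth caps

def check_perm(caller_uid: int) -> bool:
    # Server-side cephx check against head->caller_uid.
    return caller_uid in ALLOWED_UIDS

# Current approach: the idmapping is applied to caller_uid itself,
# so the cephx permission check sees the mapped value and fails.
mapped_caller = IDMAP[1000]
assert not check_perm(mapped_caller)

# Proposed approach: caller_uid stays unmapped (the permission check
# passes) while a separate owner_uid field carries the mapped value
# that the newly created inode should be owned by.
caller_uid, owner_uid = 1000, IDMAP[1000]
assert check_perm(caller_uid)
assert owner_uid == 0
```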

Kind regards,
Alex

>
> Thanks
>
> - Xiubo
>
>
> >
> > Kind regards,
> > Alex
> >
> >> Thanks
> >>
> >> - Xiubo
> >>
> >>
> >>
> >>
> >>
> >>> Thanks,
> >>> Alex
> >>>
> >>>> Thanks
> >>>>
> >>>> - Xiubo
> >>>>
>
Xiubo Li July 18, 2023, 1:44 a.m. UTC | #26
On 7/14/23 20:57, Aleksandr Mikhalitsyn wrote:
> On Tue, Jul 4, 2023 at 3:09 AM Xiubo Li <xiubli@redhat.com> wrote:
>> Sorry, not sure, why my last reply wasn't sent out.
>>
>> Do it again.
>>
>>
>> On 6/26/23 19:23, Aleksandr Mikhalitsyn wrote:
>>> On Mon, Jun 26, 2023 at 4:12 AM Xiubo Li<xiubli@redhat.com>  wrote:
>>>> On 6/24/23 15:11, Aleksandr Mikhalitsyn wrote:
>>>>> On Sat, Jun 24, 2023 at 3:37 AM Xiubo Li<xiubli@redhat.com>  wrote:
>>>>>> [...]
>>>>>>
>>>>>>     > > >
>>>>>>     > > > I thought about this too and came to the same conclusion, that
>>>>>> UID/GID
>>>>>>     > > > based
>>>>>>     > > > restriction can be applied dynamically, so detecting it on mount-time
>>>>>>     > > > helps not so much.
>>>>>>     > > >
>>>>>>     > > For this you please raise one PR to ceph first to support this, and in
>>>>>>     > > the PR we can discuss more for the MDS auth caps. And after the PR
>>>>>>     > > getting merged then in this patch series you need to check the
>>>>>>     > > corresponding option or flag to determine whether could the idmap
>>>>>>     > > mounting succeed.
>>>>>>     >
>>>>>>     > I'm sorry but I don't understand what we want to support here. Do we
>>>>>> want to
>>>>>>     > add some new ceph request that allows to check if UID/GID-based
>>>>>>     > permissions are applied for
>>>>>>     > a particular ceph client user?
>>>>>>
>>>>>> IMO we should prevent user to set UID/GID-based permisions caps from
>>>>>> ceph side.
>>>>>>
>>>>>> As I know currently there is no way to prevent users to set MDS auth
>>>>>> caps, IMO in ceph side at least we need one flag or option to disable
>>>>>> this once users want this fs cluster sever for idmap mounts use case.
>>>>> How this should be visible from the user side? We introducing a new
>>>>> kernel client mount option,
>>>>> like "nomdscaps", then pass flag to the MDS and MDS should check that
>>>>> MDS auth permissions
>>>>> are not applied (on the mount time) and prevent them from being
>>>>> applied later while session is active. Like that?
>>>>>
>>>>> At the same time I'm thinking about protocol extension that adds 2
>>>>> additional fields for UID/GID. This will allow to correctly
>>>>> handle everything. I wanted to avoid any changes to the protocol or
>>>>> server-side things. But if we want to change MDS side,
>>>>> maybe it's better then to go this way?
>>> Hi Xiubo,
>>>
>>>> There is another way:
>>>>
>>>> For each client it will have a dedicated client auth caps, something like:
>>>>
>>>> client.foo
>>>>      key: *key*
>>>>      caps: [mds] allow r, allow rw path=/bar
>>>>      caps: [mon] allow r
>>>>      caps: [osd] allow rw tag cephfs data=cephfs_a
>>> Do we have any infrastructure to get this caps list on the client side
>>> right now?
>>> (I've taken a quick look through the code and can't find anything
>>> related to this.)
>> I am afraid there is no.
>>
>> But just after the following ceph PR gets merged it will be easy to do this:
>>
>> https://github.com/ceph/ceph/pull/48027
>>
>> This is still under testing.
>>
>>>> When mounting this client with idmap enabled, then we can just check the
>>>> above [mds] caps, if there has any UID/GID based permissions set, then
>>>> fail the mounting.
>>> understood
>>>
>>>> That means this kind client couldn't be mounted with idmap enabled.
>>>>
>>>> Also we need to make sure that once there is a mount with idmap enabled,
>>>> the corresponding client caps couldn't be append the UID/GID based
>>>> permissions. This need a patch in ceph anyway IMO.
>>> So, yeah we will need to effectively block cephx permission changes if
>>> there is a client mounted with
>>> an active idmapped mount. Sounds as something that require massive
>>> changes on the server side.
>> Maybe no need much, it should be simple IMO. But I am not 100% sure.
>>
>>> At the same time this will just block users from using idmapped mounts
>>> along with UID/GID restrictions.
>>>
>>> If you want me to change server-side anyways, isn't it better just to
>>> extend cephfs protocol to properly
>>> handle UID/GIDs with idmapped mounts? (It was originally proposed by Christian.)
>>> What we need to do here is to add a separate UID/GID fields for ceph
>>> requests those are creating a new inodes
>>> (like mknod, symlink, etc).
> Dear Xiubo,
>
> I'm sorry for delay with reply, I've missed this message accidentally.
>
>> BTW, could you explain it more ? How could this resolve the issue we are
>> discussing here ?
> This was briefly mentioned here
> https://lore.kernel.org/all/20220105141023.vrrbfhti5apdvkz7@wittgenstein/#t
> by Christian. Let me describe it in detail.
>
> In the current approach we apply mount idmapping to
> head->caller_{uid,gid} fields
> to make mkdir/mknod/symlink operations set a proper inode owner
> uid/gid in according with an idmapping.

Sorry for the late reply.

I still can't see how this would resolve the lookup case.

For a lookup request the caller_{uid,gid} will still be the mapped 
{uid,gid}, right? And the same goes for the other non-create requests. 
If so, this would be incorrect for the cephx permission checks IMO.

Thanks

- Xiubo


> This makes a problem with path-based UID/GID restriction mechanism,
> because it uses head->caller_{uid,gid} fields
> to check if UID/GID is permitted or not.
>
> So, the problem is that we have one field in ceph request for two
> different needs - to control permissions and to set inode owner.
> Christian pointed that the most saner way is to modify ceph protocol
> and add a separate field to store inode owner UID/GID,
> and only this fields should be idmapped, but head->caller_{uid,gid}
> will be untouched.
>
> With this approach, we will not affect UID/GID-based permission rules
> with an idmapped mounts at all.
>
> Kind regards,
> Alex
>
>> Thanks
>>
>> - Xiubo
>>
>>
>>> Kind regards,
>>> Alex
>>>
>>>> Thanks
>>>>
>>>> - Xiubo
>>>>
>>>>
>>>>
>>>>
>>>>
>>>>> Thanks,
>>>>> Alex
>>>>>
>>>>>> Thanks
>>>>>>
>>>>>> - Xiubo
>>>>>>
Alexander Mikhalitsyn July 18, 2023, 2:49 p.m. UTC | #27
On Tue, Jul 18, 2023 at 3:45 AM Xiubo Li <xiubli@redhat.com> wrote:
>
>
> On 7/14/23 20:57, Aleksandr Mikhalitsyn wrote:
> > On Tue, Jul 4, 2023 at 3:09 AM Xiubo Li <xiubli@redhat.com> wrote:
> >> Sorry, not sure, why my last reply wasn't sent out.
> >>
> >> Do it again.
> >>
> >>
> >> On 6/26/23 19:23, Aleksandr Mikhalitsyn wrote:
> >>> On Mon, Jun 26, 2023 at 4:12 AM Xiubo Li<xiubli@redhat.com>  wrote:
> >>>> On 6/24/23 15:11, Aleksandr Mikhalitsyn wrote:
> >>>>> On Sat, Jun 24, 2023 at 3:37 AM Xiubo Li<xiubli@redhat.com>  wrote:
> >>>>>> [...]
> >>>>>>
> >>>>>>     > > >
> >>>>>>     > > > I thought about this too and came to the same conclusion, that
> >>>>>> UID/GID
> >>>>>>     > > > based
> >>>>>>     > > > restriction can be applied dynamically, so detecting it on mount-time
> >>>>>>     > > > helps not so much.
> >>>>>>     > > >
> >>>>>>     > > For this you please raise one PR to ceph first to support this, and in
> >>>>>>     > > the PR we can discuss more for the MDS auth caps. And after the PR
> >>>>>>     > > getting merged then in this patch series you need to check the
> >>>>>>     > > corresponding option or flag to determine whether could the idmap
> >>>>>>     > > mounting succeed.
> >>>>>>     >
> >>>>>>     > I'm sorry but I don't understand what we want to support here. Do we
> >>>>>> want to
> >>>>>>     > add some new ceph request that allows to check if UID/GID-based
> >>>>>>     > permissions are applied for
> >>>>>>     > a particular ceph client user?
> >>>>>>
> >>>>>> IMO we should prevent user to set UID/GID-based permisions caps from
> >>>>>> ceph side.
> >>>>>>
> >>>>>> As I know currently there is no way to prevent users to set MDS auth
> >>>>>> caps, IMO in ceph side at least we need one flag or option to disable
> >>>>>> this once users want this fs cluster sever for idmap mounts use case.
> >>>>> How this should be visible from the user side? We introducing a new
> >>>>> kernel client mount option,
> >>>>> like "nomdscaps", then pass flag to the MDS and MDS should check that
> >>>>> MDS auth permissions
> >>>>> are not applied (on the mount time) and prevent them from being
> >>>>> applied later while session is active. Like that?
> >>>>>
> >>>>> At the same time I'm thinking about protocol extension that adds 2
> >>>>> additional fields for UID/GID. This will allow to correctly
> >>>>> handle everything. I wanted to avoid any changes to the protocol or
> >>>>> server-side things. But if we want to change MDS side,
> >>>>> maybe it's better then to go this way?
> >>> Hi Xiubo,
> >>>
> >>>> There is another way:
> >>>>
> >>>> For each client it will have a dedicated client auth caps, something like:
> >>>>
> >>>> client.foo
> >>>>      key: *key*
> >>>>      caps: [mds] allow r, allow rw path=/bar
> >>>>      caps: [mon] allow r
> >>>>      caps: [osd] allow rw tag cephfs data=cephfs_a
> >>> Do we have any infrastructure to get this caps list on the client side
> >>> right now?
> >>> (I've taken a quick look through the code and can't find anything
> >>> related to this.)
> >> I am afraid there is no.
> >>
> >> But just after the following ceph PR gets merged it will be easy to do this:
> >>
> >> https://github.com/ceph/ceph/pull/48027
> >>
> >> This is still under testing.
> >>
> >>>> When mounting this client with idmap enabled, then we can just check the
> >>>> above [mds] caps, if there has any UID/GID based permissions set, then
> >>>> fail the mounting.
> >>> understood
> >>>
> >>>> That means this kind client couldn't be mounted with idmap enabled.
> >>>>
> >>>> Also we need to make sure that once there is a mount with idmap enabled,
> >>>> the corresponding client caps couldn't be append the UID/GID based
> >>>> permissions. This need a patch in ceph anyway IMO.
> >>> So, yeah we will need to effectively block cephx permission changes if
> >>> there is a client mounted with
> >>> an active idmapped mount. Sounds as something that require massive
> >>> changes on the server side.
> >> Maybe no need much, it should be simple IMO. But I am not 100% sure.
> >>
> >>> At the same time this will just block users from using idmapped mounts
> >>> along with UID/GID restrictions.
> >>>
> >>> If you want me to change server-side anyways, isn't it better just to
> >>> extend cephfs protocol to properly
> >>> handle UID/GIDs with idmapped mounts? (It was originally proposed by Christian.)
> >>> What we need to do here is to add a separate UID/GID fields for ceph
> >>> requests those are creating a new inodes
> >>> (like mknod, symlink, etc).
> > Dear Xiubo,
> >
> > I'm sorry for delay with reply, I've missed this message accidentally.
> >
> >> BTW, could you explain it more ? How could this resolve the issue we are
> >> discussing here ?
> > This was briefly mentioned here
> > https://lore.kernel.org/all/20220105141023.vrrbfhti5apdvkz7@wittgenstein/#t
> > by Christian. Let me describe it in detail.
> >
> > In the current approach we apply mount idmapping to
> > head->caller_{uid,gid} fields
> > to make mkdir/mknod/symlink operations set a proper inode owner
> > uid/gid in according with an idmapping.
>
> Sorry for late.
>
> I still couldn't get how this could resolve the lookup case.
>
> For a lookup request the caller_{uid, gid} still will be the mapped
> {uid, gid}, right ?

No, the idea is to stop mapping caller_{uid, gid} and to add new
fields like
inode_owner_{uid, gid} which will be idmapped (these fields will be specific
only to those operations that create a new inode).

>
> And also the same for other non-create requests. If
> so this will be incorrect for the cephx perm checks IMO.

Thanks,
Alex

>
> Thanks
>
> - Xiubo
>
>
> > This makes a problem with path-based UID/GID restriction mechanism,
> > because it uses head->caller_{uid,gid} fields
> > to check if UID/GID is permitted or not.
> >
> > So, the problem is that we have one field in ceph request for two
> > different needs - to control permissions and to set inode owner.
> > Christian pointed that the most saner way is to modify ceph protocol
> > and add a separate field to store inode owner UID/GID,
> > and only this fields should be idmapped, but head->caller_{uid,gid}
> > will be untouched.
> >
> > With this approach, we will not affect UID/GID-based permission rules
> > with an idmapped mounts at all.
> >
> > Kind regards,
> > Alex
> >
> >> Thanks
> >>
> >> - Xiubo
> >>
> >>
> >>> Kind regards,
> >>> Alex
> >>>
> >>>> Thanks
> >>>>
> >>>> - Xiubo
> >>>>
> >>>>
> >>>>
> >>>>
> >>>>
> >>>>> Thanks,
> >>>>> Alex
> >>>>>
> >>>>>> Thanks
> >>>>>>
> >>>>>> - Xiubo
> >>>>>>
>
Alexander Mikhalitsyn July 19, 2023, 11:57 a.m. UTC | #28
On Tue, Jul 18, 2023 at 4:49 PM Aleksandr Mikhalitsyn
<aleksandr.mikhalitsyn@canonical.com> wrote:
>
> On Tue, Jul 18, 2023 at 3:45 AM Xiubo Li <xiubli@redhat.com> wrote:
> >
> >
> > On 7/14/23 20:57, Aleksandr Mikhalitsyn wrote:
> > > On Tue, Jul 4, 2023 at 3:09 AM Xiubo Li <xiubli@redhat.com> wrote:
> > >> Sorry, not sure, why my last reply wasn't sent out.
> > >>
> > >> Do it again.
> > >>
> > >>
> > >> On 6/26/23 19:23, Aleksandr Mikhalitsyn wrote:
> > >>> On Mon, Jun 26, 2023 at 4:12 AM Xiubo Li<xiubli@redhat.com>  wrote:
> > >>>> On 6/24/23 15:11, Aleksandr Mikhalitsyn wrote:
> > >>>>> On Sat, Jun 24, 2023 at 3:37 AM Xiubo Li<xiubli@redhat.com>  wrote:
> > >>>>>> [...]
> > >>>>>>
> > >>>>>>     > > >
> > >>>>>>     > > > I thought about this too and came to the same conclusion, that
> > >>>>>> UID/GID
> > >>>>>>     > > > based
> > >>>>>>     > > > restriction can be applied dynamically, so detecting it on mount-time
> > >>>>>>     > > > helps not so much.
> > >>>>>>     > > >
> > >>>>>>     > > For this you please raise one PR to ceph first to support this, and in
> > >>>>>>     > > the PR we can discuss more for the MDS auth caps. And after the PR
> > >>>>>>     > > getting merged then in this patch series you need to check the
> > >>>>>>     > > corresponding option or flag to determine whether could the idmap
> > >>>>>>     > > mounting succeed.
> > >>>>>>     >
> > >>>>>>     > I'm sorry but I don't understand what we want to support here. Do we
> > >>>>>> want to
> > >>>>>>     > add some new ceph request that allows to check if UID/GID-based
> > >>>>>>     > permissions are applied for
> > >>>>>>     > a particular ceph client user?
> > >>>>>>
> > >>>>>> IMO we should prevent user to set UID/GID-based permisions caps from
> > >>>>>> ceph side.
> > >>>>>>
> > >>>>>> As I know currently there is no way to prevent users to set MDS auth
> > >>>>>> caps, IMO in ceph side at least we need one flag or option to disable
> > >>>>>> this once users want this fs cluster sever for idmap mounts use case.
> > >>>>> How this should be visible from the user side? We introducing a new
> > >>>>> kernel client mount option,
> > >>>>> like "nomdscaps", then pass flag to the MDS and MDS should check that
> > >>>>> MDS auth permissions
> > >>>>> are not applied (on the mount time) and prevent them from being
> > >>>>> applied later while session is active. Like that?
> > >>>>>
> > >>>>> At the same time I'm thinking about protocol extension that adds 2
> > >>>>> additional fields for UID/GID. This will allow to correctly
> > >>>>> handle everything. I wanted to avoid any changes to the protocol or
> > >>>>> server-side things. But if we want to change MDS side,
> > >>>>> maybe it's better then to go this way?
> > >>> Hi Xiubo,
> > >>>
> > >>>> There is another way:
> > >>>>
> > >>>> For each client it will have a dedicated client auth caps, something like:
> > >>>>
> > >>>> client.foo
> > >>>>      key: *key*
> > >>>>      caps: [mds] allow r, allow rw path=/bar
> > >>>>      caps: [mon] allow r
> > >>>>      caps: [osd] allow rw tag cephfs data=cephfs_a
> > >>> Do we have any infrastructure to get this caps list on the client side
> > >>> right now?
> > >>> (I've taken a quick look through the code and can't find anything
> > >>> related to this.)
> > >> I am afraid there is no.
> > >>
> > >> But just after the following ceph PR gets merged it will be easy to do this:
> > >>
> > >> https://github.com/ceph/ceph/pull/48027
> > >>
> > >> This is still under testing.
> > >>
> > >>>> When mounting this client with idmap enabled, then we can just check the
> > >>>> above [mds] caps, if there has any UID/GID based permissions set, then
> > >>>> fail the mounting.
> > >>> understood
> > >>>
> > >>>> That means this kind client couldn't be mounted with idmap enabled.
> > >>>>
> > >>>> Also we need to make sure that once there is a mount with idmap enabled,
> > >>>> the corresponding client caps couldn't be append the UID/GID based
> > >>>> permissions. This need a patch in ceph anyway IMO.
> > >>> So, yeah we will need to effectively block cephx permission changes if
> > >>> there is a client mounted with
> > >>> an active idmapped mount. Sounds as something that require massive
> > >>> changes on the server side.
> > >> Maybe no need much, it should be simple IMO. But I am not 100% sure.
> > >>
> > >>> At the same time this will just block users from using idmapped mounts
> > >>> along with UID/GID restrictions.
> > >>>
> > >>> If you want me to change server-side anyways, isn't it better just to
> > >>> extend cephfs protocol to properly
> > >>> handle UID/GIDs with idmapped mounts? (It was originally proposed by Christian.)
> > >>> What we need to do here is to add a separate UID/GID fields for ceph
> > >>> requests those are creating a new inodes
> > >>> (like mknod, symlink, etc).
> > > Dear Xiubo,
> > >
> > > I'm sorry for delay with reply, I've missed this message accidentally.
> > >
> > >> BTW, could you explain it more ? How could this resolve the issue we are
> > >> discussing here ?
> > > This was briefly mentioned here
> > > https://lore.kernel.org/all/20220105141023.vrrbfhti5apdvkz7@wittgenstein/#t
> > > by Christian. Let me describe it in detail.
> > >
> > > In the current approach we apply mount idmapping to
> > > head->caller_{uid,gid} fields
> > > to make mkdir/mknod/symlink operations set a proper inode owner
> > > uid/gid in according with an idmapping.
> >
> > Sorry for late.
> >
> > I still couldn't get how this could resolve the lookup case.
> >
> > For a lookup request the caller_{uid, gid} still will be the mapped
> > {uid, gid}, right ?
>
> No, the idea is to stop mapping a caller_{uid, gid}. And to add a new
> fields like
> inode_owner_{uid, gid} which will be idmapped (this field will be specific only
> for those operations that create a new inode).

I've decided to write a summary of the different approaches and
elaborate on the tricky places.

Current implementation.

We map the head->caller_{uid,gid} fields in accordance with
the mount's idmapping. But since we don't have the mount's idmapping
available in all call stacks (the ->lookup case, for example), we
cannot always map them, and they are left untouched in those cases.

This leads to an inconsistency between different inode_operations,
for example ->lookup (which has no access to an idmapping) and
->mkdir (which takes an idmapping as an argument).

This inconsistency is absolutely harmless if the user does not
use UID/GID-based restrictions, because in that case the
head->caller_{uid,gid} fields are used *only* to set the inode owner
UID/GID in the inode_operations that create inodes.

Conclusion 1. head->caller_{uid,gid} fields have two meanings
- UID/GID-based permission checks
- inode owner information

Solution 0. Ignore the issue with UID/GID-based restrictions and idmapped mounts
until users start complaining ;-)

As far as I can see you are not happy with this way. :-)

Solution 1. Add the mount's idmapping as an argument to all inode_operations
and always map the head->caller_{uid,gid} fields.

Not the best idea, because:
- it is a big modification of the VFS layer
- it is ideologically incorrect: ->lookup, for instance, should not care
or know *anything* about the mount's idmapping, because ->lookup does not
work at the mount level (it's not important who triggered the ->lookup,
or through which mount). Imagine that the dentry cache is filled and you
call open(...); in this case ->lookup may not be called at all. But if the
user was not lucky enough to have the cache filled, then open(...) will
trigger the lookup, and the ->lookup results will depend on the mount's
idmapping. That is an incorrect and unobvious consequence of introducing
such a parameter to the ->lookup operation.
To summarize, ->lookup is about filling the dentry cache, and the dentry
cache is a superblock-level thing, not a mount-level one.

Solution 2. Add extra checks to the ceph client and the ceph server to
detect that mount idmappings are used together with UID/GID-based
restrictions, and refuse such mounts.

Not ideal either, because it's not a fix, it's a limitation, and this
limitation is not cheap to implement (we need heavy changes on both the
ceph server side and the client side). Btw, the current VFS API is also
not ready for that, because we can't decide at runtime whether idmapped
mounts are allowed: it's a static property declared with the
FS_ALLOW_IDMAP flag in (struct file_system_type)->fs_flags. Not a
big deal, but...

Solution 3. Add new UID/GID fields to the ceph request structure, in
addition to head->caller_{uid,gid}, to store the inode owner (only for
the inode_operations that create inodes).

How does this solve the problem?
With these new fields we can leave head->caller_{uid,gid} untouched by
the idmapped mounts code, which means that UID/GID-based restrictions
will continue to work as intended.

At the same time, the new fields (let's say "inode_owner_{uid,gid}")
will be mapped in accordance with the mount's idmapping.

This solution seems ideal because it is philosophically correct: it
makes cephfs idmapped mounts work in the same manner as idmapped mounts
work for any other filesystem, like ext4.

But yes, this requires cephfs protocol changes...

I personally still believe that the "Solution 0" approach is optimal
and that we can go the "Solution 3" way as the next iteration.

Kind regards,
Alex

>
> >
> > And also the same for other non-create requests. If
> > so this will be incorrect for the cephx perm checks IMO.
>
> Thanks,
> Alex
>
> >
> > Thanks
> >
> > - Xiubo
> >
> >
> > > This makes a problem with path-based UID/GID restriction mechanism,
> > > because it uses head->caller_{uid,gid} fields
> > > to check if UID/GID is permitted or not.
> > >
> > > So, the problem is that we have one field in ceph request for two
> > > different needs - to control permissions and to set inode owner.
> > > Christian pointed that the most saner way is to modify ceph protocol
> > > and add a separate field to store inode owner UID/GID,
> > > and only this fields should be idmapped, but head->caller_{uid,gid}
> > > will be untouched.
> > >
> > > With this approach, we will not affect UID/GID-based permission rules
> > > with an idmapped mounts at all.
> > >
> > > Kind regards,
> > > Alex
> > >
> > >> Thanks
> > >>
> > >> - Xiubo
> > >>
> > >>
> > >>> Kind regards,
> > >>> Alex
> > >>>
> > >>>> Thanks
> > >>>>
> > >>>> - Xiubo
> > >>>>
> > >>>>
> > >>>>
> > >>>>
> > >>>>
> > >>>>> Thanks,
> > >>>>> Alex
> > >>>>>
> > >>>>>> Thanks
> > >>>>>>
> > >>>>>> - Xiubo
> > >>>>>>
> >
Xiubo Li July 20, 2023, 6:36 a.m. UTC | #29
On 7/19/23 19:57, Aleksandr Mikhalitsyn wrote:
> On Tue, Jul 18, 2023 at 4:49 PM Aleksandr Mikhalitsyn
> <aleksandr.mikhalitsyn@canonical.com> wrote:
>> On Tue, Jul 18, 2023 at 3:45 AM Xiubo Li <xiubli@redhat.com> wrote:
[...]
>> No, the idea is to stop mapping a caller_{uid, gid}. And to add a new
>> fields like
>> inode_owner_{uid, gid} which will be idmapped (this field will be specific only
>> for those operations that create a new inode).
> I've decided to write some summary of different approaches and
> elaborate tricky places.
>
> Current implementation.
>
> We have head->caller_{uid,gid} fields mapped in according
> to the mount's idmapping. But as we don't have information about
> mount's idmapping in all call stacks (like ->lookup case), we
> are not able to map it always and they are left untouched in these cases.
>
> This tends to an inconsistency between different inode_operations,
> for example ->lookup (don't have an access to an idmapping) and
> ->mkdir (have an idmapping as an argument).
>
> This inconsistency is absolutely harmless if the user does not
> use UID/GID-based restrictions. Because in this use case head->caller_{uid,gid}
> fields used *only* to set inode owner UID/GID during the inode_operations
> which create inodes.
>
> Conclusion 1. head->caller_{uid,gid} fields have two meanings
> - UID/GID-based permission checks
> - inode owner information
>
> Solution 0. Ignore the issue with UID/GID-based restrictions and idmapped mounts
> until we are not blamed by users ;-)
>
> As far as I can see you are not happy about this way. :-)
>
> Solution 1. Let's add mount's idmapping argument to all inode_operations
> and always map head->caller_{uid,gid} fields.
>
> Not a best idea, because:
> - big modification of VFS layer
> - ideologically incorrect, for instance ->lookup should not care
> and know *anything* about mount's idmapping, because ->lookup works
> not on the mount level (it's not important who and through which mount
> triggered the ->lookup). Imagine that you've dentry cache filled and call
> open(...) in this case ->lookup can be uncalled. But if the user was not lucky
> enough to have cache filled then open(..) will trigger the lookup and
> then ->lookup results will be dependent on the mount's idmapping. It
> seems incorrect
> and unobvious consequence of introducing such a parameter to ->lookup operation.
> To summarize, ->lookup is about filling dentry cache and dentry cache
> is superblock-level
> thing, not mount-level.
>
> Solution 2. Add some kind of extra checks to ceph-client and ceph
> server to detect that
> mount idmappings used with UID/GID-based restrictions and restrict such mounts.
>
> Seems not ideal to me too. Because it's not a fix, it's a limitation
> and this limitation is
> not cheap from the implementation perspective (we need heavy changes
> in ceph server side and
> client side too). Btw, currently VFS API is also not ready for that,
> because we can't
> decide if idmapped mounts are allowed or not in runtime. It's a static
> thing that should be declared
> with FS_ALLOW_IDMAP flag in (struct file_system_type)->fs_flags. Not a
> big deal, but...
>
> Solution 3. Add a new UID/GID fields to ceph request structure in
> addition to head->caller_{uid,gid}
> to store information about inode owners (only for inode_operations
> which create inodes).
>
> How does it solves the problem?
> With these new fields we can leave head->caller_{uid,gid} untouched
> with an idmapped mounts code.
> It means that UID/GID-based restrictions will continue to work as intended.
>
> At the same time, new fields (let say "inode_owner_{uid,gid}") will be
> mapped in accordance with
> a mount's idmapping.
>
> This solution seems ideal, because it is philosophically correct, it
> makes cephfs idmapped mounts to work
> in the same manner and way as idmapped mounts work for any other
> filesystem like ext4.

Okay, this approach sounds more reasonable to me. But you will need to do
some extra work to make it compatible between {old,new} kernels and
{old,new} cephs.

So then the caller uid/gid will always be the user uid/gid issuing the
request, as now.

Thanks

- Xiubo


> But yes, this requires cephfs protocol changes...
>
> I personally still believe that the "Solution 0" approach is optimal
> and we can go with "Solution 3" way
> as the next iteration.
>
> Kind regards,
> Alex
>
>>> And also the same for other non-create requests. If
>>> so this will be incorrect for the cephx perm checks IMO.
>> Thanks,
>> Alex
>>
>>> Thanks
>>>
>>> - Xiubo
>>>
>>>
>>>> This makes a problem with path-based UID/GID restriction mechanism,
>>>> because it uses head->caller_{uid,gid} fields
>>>> to check if UID/GID is permitted or not.
>>>>
>>>> So, the problem is that we have one field in ceph request for two
>>>> different needs - to control permissions and to set inode owner.
>>>> Christian pointed that the most saner way is to modify ceph protocol
>>>> and add a separate field to store inode owner UID/GID,
>>>> and only this fields should be idmapped, but head->caller_{uid,gid}
>>>> will be untouched.
>>>>
>>>> With this approach, we will not affect UID/GID-based permission rules
>>>> with an idmapped mounts at all.
>>>>
>>>> Kind regards,
>>>> Alex
>>>>
>>>>> Thanks
>>>>>
>>>>> - Xiubo
>>>>>
>>>>>
>>>>>> Kind regards,
>>>>>> Alex
>>>>>>
>>>>>>> Thanks
>>>>>>>
>>>>>>> - Xiubo
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>>> Thanks,
>>>>>>>> Alex
>>>>>>>>
>>>>>>>>> Thanks
>>>>>>>>>
>>>>>>>>> - Xiubo
>>>>>>>>>
Alexander Mikhalitsyn July 20, 2023, 6:41 a.m. UTC | #30
On Thu, Jul 20, 2023 at 8:36 AM Xiubo Li <xiubli@redhat.com> wrote:
>
>
> On 7/19/23 19:57, Aleksandr Mikhalitsyn wrote:
> > On Tue, Jul 18, 2023 at 4:49 PM Aleksandr Mikhalitsyn
> > <aleksandr.mikhalitsyn@canonical.com> wrote:
> >> On Tue, Jul 18, 2023 at 3:45 AM Xiubo Li <xiubli@redhat.com> wrote:
> [...]
> >> No, the idea is to stop mapping a caller_{uid, gid}. And to add a new
> >> fields like
> >> inode_owner_{uid, gid} which will be idmapped (this field will be specific only
> >> for those operations that create a new inode).
> > I've decided to write some summary of different approaches and
> > elaborate tricky places.
> >
> > Current implementation.
> >
> > We have head->caller_{uid,gid} fields mapped in according
> > to the mount's idmapping. But as we don't have information about
> > mount's idmapping in all call stacks (like ->lookup case), we
> > are not able to map it always and they are left untouched in these cases.
> >
> > This tends to an inconsistency between different inode_operations,
> > for example ->lookup (don't have an access to an idmapping) and
> > ->mkdir (have an idmapping as an argument).
> >
> > This inconsistency is absolutely harmless if the user does not
> > use UID/GID-based restrictions. Because in this use case head->caller_{uid,gid}
> > fields used *only* to set inode owner UID/GID during the inode_operations
> > which create inodes.
> >
> > Conclusion 1. head->caller_{uid,gid} fields have two meanings
> > - UID/GID-based permission checks
> > - inode owner information
> >
> > Solution 0. Ignore the issue with UID/GID-based restrictions and idmapped mounts
> > until we are not blamed by users ;-)
> >
> > As far as I can see you are not happy about this way. :-)
> >
> > Solution 1. Let's add mount's idmapping argument to all inode_operations
> > and always map head->caller_{uid,gid} fields.
> >
> > Not a best idea, because:
> > - big modification of VFS layer
> > - ideologically incorrect, for instance ->lookup should not care
> > and know *anything* about mount's idmapping, because ->lookup works
> > not on the mount level (it's not important who and through which mount
> > triggered the ->lookup). Imagine that you've dentry cache filled and call
> > open(...) in this case ->lookup can be uncalled. But if the user was not lucky
> > enough to have cache filled then open(..) will trigger the lookup and
> > then ->lookup results will be dependent on the mount's idmapping. It
> > seems incorrect
> > and unobvious consequence of introducing such a parameter to ->lookup operation.
> > To summarize, ->lookup is about filling dentry cache and dentry cache
> > is superblock-level
> > thing, not mount-level.
> >
> > Solution 2. Add some kind of extra checks to ceph-client and ceph
> > server to detect that
> > mount idmappings used with UID/GID-based restrictions and restrict such mounts.
> >
> > Seems not ideal to me too. Because it's not a fix, it's a limitation
> > and this limitation is
> > not cheap from the implementation perspective (we need heavy changes
> > in ceph server side and
> > client side too). Btw, currently VFS API is also not ready for that,
> > because we can't
> > decide if idmapped mounts are allowed or not in runtime. It's a static
> > thing that should be declared
> > with FS_ALLOW_IDMAP flag in (struct file_system_type)->fs_flags. Not a
> > big deal, but...
> >
> > Solution 3. Add a new UID/GID fields to ceph request structure in
> > addition to head->caller_{uid,gid}
> > to store information about inode owners (only for inode_operations
> > which create inodes).
> >
> > How does it solves the problem?
> > With these new fields we can leave head->caller_{uid,gid} untouched
> > with an idmapped mounts code.
> > It means that UID/GID-based restrictions will continue to work as intended.
> >
> > At the same time, new fields (let say "inode_owner_{uid,gid}") will be
> > mapped in accordance with
> > a mount's idmapping.
> >
> > This solution seems ideal, because it is philosophically correct, it
> > makes cephfs idmapped mounts to work
> > in the same manner and way as idmapped mounts work for any other
> > filesystem like ext4.
>
> Okay, this approach sounds more reasonable to me. But you need to do
> some extra work to make it to be compatible between {old,new} kernels
> and  {old,new} cephs.

Sure. Then I'll start implementing this.

Kind regards,
Alex

>
> So then the caller uid/gid will always be the user uid/gid issuing the
> requests as now.
>
> Thanks
>
> - Xiubo
>
>
> > But yes, this requires cephfs protocol changes...
> >
> > I personally still believe that the "Solution 0" approach is optimal
> > and we can go with "Solution 3" way
> > as the next iteration.
> >
> > Kind regards,
> > Alex
> >
> >>> And also the same for other non-create requests. If
> >>> so this will be incorrect for the cephx perm checks IMO.
> >> Thanks,
> >> Alex
> >>
> >>> Thanks
> >>>
> >>> - Xiubo
> >>>
> >>>
> >>>> This makes a problem with path-based UID/GID restriction mechanism,
> >>>> because it uses head->caller_{uid,gid} fields
> >>>> to check if UID/GID is permitted or not.
> >>>>
> >>>> So, the problem is that we have one field in ceph request for two
> >>>> different needs - to control permissions and to set inode owner.
> >>>> Christian pointed that the most saner way is to modify ceph protocol
> >>>> and add a separate field to store inode owner UID/GID,
> >>>> and only this fields should be idmapped, but head->caller_{uid,gid}
> >>>> will be untouched.
> >>>>
> >>>> With this approach, we will not affect UID/GID-based permission rules
> >>>> with an idmapped mounts at all.
> >>>>
> >>>> Kind regards,
> >>>> Alex
> >>>>
> >>>>> Thanks
> >>>>>
> >>>>> - Xiubo
> >>>>>
> >>>>>
> >>>>>> Kind regards,
> >>>>>> Alex
> >>>>>>
> >>>>>>> Thanks
> >>>>>>>
> >>>>>>> - Xiubo
> >>>>>>>
> >>>>>>>
> >>>>>>>
> >>>>>>>
> >>>>>>>
> >>>>>>>> Thanks,
> >>>>>>>> Alex
> >>>>>>>>
> >>>>>>>>> Thanks
> >>>>>>>>>
> >>>>>>>>> - Xiubo
> >>>>>>>>>
>
Alexander Mikhalitsyn July 21, 2023, 3:43 p.m. UTC | #31
On Thu, Jul 20, 2023 at 8:36 AM Xiubo Li <xiubli@redhat.com> wrote:
>
>
> On 7/19/23 19:57, Aleksandr Mikhalitsyn wrote:
> > On Tue, Jul 18, 2023 at 4:49 PM Aleksandr Mikhalitsyn
> > <aleksandr.mikhalitsyn@canonical.com> wrote:
> >> On Tue, Jul 18, 2023 at 3:45 AM Xiubo Li <xiubli@redhat.com> wrote:
> [...]
> >> No, the idea is to stop mapping a caller_{uid, gid}. And to add a new
> >> fields like
> >> inode_owner_{uid, gid} which will be idmapped (this field will be specific only
> >> for those operations that create a new inode).
> > I've decided to write some summary of different approaches and
> > elaborate tricky places.
> >
> > Current implementation.
> >
> > We have head->caller_{uid,gid} fields mapped in according
> > to the mount's idmapping. But as we don't have information about
> > mount's idmapping in all call stacks (like ->lookup case), we
> > are not able to map it always and they are left untouched in these cases.
> >
> > This tends to an inconsistency between different inode_operations,
> > for example ->lookup (don't have an access to an idmapping) and
> > ->mkdir (have an idmapping as an argument).
> >
> > This inconsistency is absolutely harmless if the user does not
> > use UID/GID-based restrictions. Because in this use case head->caller_{uid,gid}
> > fields used *only* to set inode owner UID/GID during the inode_operations
> > which create inodes.
> >
> > Conclusion 1. head->caller_{uid,gid} fields have two meanings
> > - UID/GID-based permission checks
> > - inode owner information
> >
> > Solution 0. Ignore the issue with UID/GID-based restrictions and idmapped mounts
> > until we are not blamed by users ;-)
> >
> > As far as I can see you are not happy about this way. :-)
> >
> > Solution 1. Let's add mount's idmapping argument to all inode_operations
> > and always map head->caller_{uid,gid} fields.
> >
> > Not a best idea, because:
> > - big modification of VFS layer
> > - ideologically incorrect, for instance ->lookup should not care
> > and know *anything* about mount's idmapping, because ->lookup works
> > not on the mount level (it's not important who and through which mount
> > triggered the ->lookup). Imagine that you've dentry cache filled and call
> > open(...) in this case ->lookup can be uncalled. But if the user was not lucky
> > enough to have cache filled then open(..) will trigger the lookup and
> > then ->lookup results will be dependent on the mount's idmapping. It
> > seems incorrect
> > and unobvious consequence of introducing such a parameter to ->lookup operation.
> > To summarize, ->lookup is about filling dentry cache and dentry cache
> > is superblock-level
> > thing, not mount-level.
> >
> > Solution 2. Add some kind of extra checks to ceph-client and ceph
> > server to detect that
> > mount idmappings used with UID/GID-based restrictions and restrict such mounts.
> >
> > Seems not ideal to me too. Because it's not a fix, it's a limitation
> > and this limitation is
> > not cheap from the implementation perspective (we need heavy changes
> > in ceph server side and
> > client side too). Btw, currently VFS API is also not ready for that,
> > because we can't
> > decide if idmapped mounts are allowed or not in runtime. It's a static
> > thing that should be declared
> > with FS_ALLOW_IDMAP flag in (struct file_system_type)->fs_flags. Not a
> > big deal, but...
> >
> > Solution 3. Add new UID/GID fields to the ceph request structure, in
> > addition to head->caller_{uid,gid}, to store information about inode
> > owners (only for the inode_operations which create inodes).
> >
> > How does it solve the problem?
> > With these new fields, the idmapped mounts code can leave
> > head->caller_{uid,gid} untouched. It means that UID/GID-based
> > restrictions will continue to work as intended.
> >
> > At the same time, the new fields (let's say "inode_owner_{uid,gid}")
> > will be mapped in accordance with the mount's idmapping.
> >
> > This solution seems ideal, because it is philosophically correct: it
> > makes cephfs idmapped mounts work in the same manner as idmapped
> > mounts work for any other filesystem, like ext4.
>
> Okay, this approach sounds more reasonable to me. But you need to do
> some extra work to make it compatible between {old,new} kernels and
> {old,new} cephs.
>
> So then the caller uid/gid will always be the uid/gid of the user
> issuing the request, as it is now.

Dear Xiubo,

I've posted a PR https://github.com/ceph/ceph/pull/52575

Kind regards,
Alex

>
> Thanks
>
> - Xiubo
>
>
> > But yes, this requires cephfs protocol changes...
> >
> > I personally still believe that the "Solution 0" approach is optimal,
> > and that we can go the "Solution 3" way as the next iteration.
> >
> > Kind regards,
> > Alex
> >
> >> And also the same for other non-create requests. If
> >>> so this will be incorrect for the cephx perm checks IMO.
> >> Thanks,
> >> Alex
> >>
> >>> Thanks
> >>>
> >>> - Xiubo
> >>>
> >>>
> >>>> This creates a problem for the path-based UID/GID restriction
> >>>> mechanism, because it uses the head->caller_{uid,gid} fields to
> >>>> check whether a UID/GID is permitted or not.
> >>>>
> >>>> So, the problem is that we have one field in the ceph request for
> >>>> two different needs: to control permissions and to set the inode
> >>>> owner. Christian pointed out that the sanest way is to modify the
> >>>> ceph protocol and add a separate field to store the inode owner
> >>>> UID/GID; only these fields should be idmapped, while
> >>>> head->caller_{uid,gid} will be left untouched.
> >>>>
> >>>> With this approach, we will not affect the UID/GID-based permission
> >>>> rules with idmapped mounts at all.
> >>>>
> >>>> Kind regards,
> >>>> Alex
> >>>>
> >>>>> Thanks
> >>>>>
> >>>>> - Xiubo
> >>>>>
> >>>>>
> >>>>>> Kind regards,
> >>>>>> Alex
> >>>>>>
> >>>>>>> Thanks
> >>>>>>>
> >>>>>>> - Xiubo
> >>>>>>>
> >>>>>>>
> >>>>>>>
> >>>>>>>
> >>>>>>>
> >>>>>>>> Thanks,
> >>>>>>>> Alex
> >>>>>>>>
> >>>>>>>>> Thanks
> >>>>>>>>>
> >>>>>>>>> - Xiubo
> >>>>>>>>>
>
Xiubo Li July 24, 2023, 1:02 a.m. UTC | #32
On 7/21/23 23:43, Aleksandr Mikhalitsyn wrote:
> On Thu, Jul 20, 2023 at 8:36 AM Xiubo Li <xiubli@redhat.com> wrote:
>>
>> On 7/19/23 19:57, Aleksandr Mikhalitsyn wrote:
>>> On Tue, Jul 18, 2023 at 4:49 PM Aleksandr Mikhalitsyn
>>> <aleksandr.mikhalitsyn@canonical.com> wrote:
>>>> On Tue, Jul 18, 2023 at 3:45 AM Xiubo Li <xiubli@redhat.com> wrote:
>> [...]
>>>> No, the idea is to stop mapping caller_{uid,gid}, and to add new
>>>> fields like inode_owner_{uid,gid} which will be idmapped (these
>>>> fields will be specific only to those operations that create a new
>>>> inode).
>>> I've decided to write some summary of different approaches and
>>> elaborate tricky places.
>>>
>>> Current implementation.
>>>
>>> We have the head->caller_{uid,gid} fields mapped according to the
>>> mount's idmapping. But since we don't have information about the
>>> mount's idmapping in all call stacks (like the ->lookup case), we are
>>> not able to map them always, and they are left untouched in those cases.
>>>
>>> This leads to an inconsistency between different inode_operations,
>>> for example ->lookup (which doesn't have access to an idmapping) and
>>> ->mkdir (which takes an idmapping as an argument).
>>>
>>> This inconsistency is absolutely harmless if the user does not use
>>> UID/GID-based restrictions, because in that case the
>>> head->caller_{uid,gid} fields are used *only* to set the inode owner
>>> UID/GID during the inode_operations which create inodes.
>>>
>>> Conclusion 1. head->caller_{uid,gid} fields have two meanings
>>> - UID/GID-based permission checks
>>> - inode owner information
>>>
>>> Solution 0. Ignore the issue with UID/GID-based restrictions and
>>> idmapped mounts until users start blaming us ;-)
>>>
>>> As far as I can see, you are not happy with this way. :-)
>>>
>>> Solution 1. Let's add a mount's idmapping argument to all
>>> inode_operations and always map the head->caller_{uid,gid} fields.
>>>
>>> Not the best idea, because:
>>> - it's a big modification of the VFS layer
>>> - it's ideologically incorrect: for instance, ->lookup should not
>>> care about or know *anything* about the mount's idmapping, because
>>> ->lookup works not on the mount level (it does not matter who
>>> triggered the ->lookup, or through which mount). Imagine that the
>>> dentry cache is already filled and you call open(...); in this case
>>> ->lookup may not be called at all. But if the user was not lucky
>>> enough to have the cache filled, open(...) will trigger the lookup,
>>> and the ->lookup results would then depend on the mount's idmapping.
>>> That seems like an incorrect and unobvious consequence of introducing
>>> such a parameter to the ->lookup operation. To summarize, ->lookup is
>>> about filling the dentry cache, and the dentry cache is a
>>> superblock-level thing, not a mount-level one.
>>>
>>> Solution 2. Add some kind of extra checks to the ceph client and the
>>> ceph server to detect that mount idmappings are used together with
>>> UID/GID-based restrictions, and restrict such mounts.
>>>
>>> That does not seem ideal to me either, because it's not a fix, it's a
>>> limitation, and this limitation is not cheap from the implementation
>>> perspective (we would need heavy changes on both the ceph server side
>>> and the client side). Btw, the current VFS API is also not ready for
>>> that, because we can't decide at runtime whether idmapped mounts are
>>> allowed or not. It's a static thing that has to be declared with the
>>> FS_ALLOW_IDMAP flag in (struct file_system_type)->fs_flags. Not a
>>> big deal, but...
>>>
>>> Solution 3. Add new UID/GID fields to the ceph request structure, in
>>> addition to head->caller_{uid,gid}, to store information about inode
>>> owners (only for the inode_operations which create inodes).
>>>
>>> How does it solve the problem?
>>> With these new fields, the idmapped mounts code can leave
>>> head->caller_{uid,gid} untouched. It means that UID/GID-based
>>> restrictions will continue to work as intended.
>>>
>>> At the same time, the new fields (let's say "inode_owner_{uid,gid}")
>>> will be mapped in accordance with the mount's idmapping.
>>>
>>> This solution seems ideal, because it is philosophically correct: it
>>> makes cephfs idmapped mounts work in the same manner as idmapped
>>> mounts work for any other filesystem, like ext4.
>> Okay, this approach sounds more reasonable to me. But you need to do
>> some extra work to make it compatible between {old,new} kernels and
>> {old,new} cephs.
>>
>> So then the caller uid/gid will always be the uid/gid of the user
>> issuing the request, as it is now.
> Dear Xiubo,
>
> I've posted a PR https://github.com/ceph/ceph/pull/52575

Sure. Will check.

Thanks

- Xiubo

> Kind regards,
> Alex
>
>> Thanks
>>
>> - Xiubo
>>
>>
>>> But yes, this requires cephfs protocol changes...
>>>
>>> I personally still believe that the "Solution 0" approach is optimal,
>>> and that we can go the "Solution 3" way as the next iteration.
>>>
>>> Kind regards,
>>> Alex
>>>
>>>> And also the same for other non-create requests. If
>>>>> so this will be incorrect for the cephx perm checks IMO.
>>>> Thanks,
>>>> Alex
>>>>
>>>>> Thanks
>>>>>
>>>>> - Xiubo
>>>>>
>>>>>
>>>>>> This creates a problem for the path-based UID/GID restriction
>>>>>> mechanism, because it uses the head->caller_{uid,gid} fields to
>>>>>> check whether a UID/GID is permitted or not.
>>>>>>
>>>>>> So, the problem is that we have one field in the ceph request for
>>>>>> two different needs: to control permissions and to set the inode
>>>>>> owner. Christian pointed out that the sanest way is to modify the
>>>>>> ceph protocol and add a separate field to store the inode owner
>>>>>> UID/GID; only these fields should be idmapped, while
>>>>>> head->caller_{uid,gid} will be left untouched.
>>>>>>
>>>>>> With this approach, we will not affect the UID/GID-based
>>>>>> permission rules with idmapped mounts at all.
>>>>>>
>>>>>> Kind regards,
>>>>>> Alex
>>>>>>
>>>>>>> Thanks
>>>>>>>
>>>>>>> - Xiubo
>>>>>>>
>>>>>>>
>>>>>>>> Kind regards,
>>>>>>>> Alex
>>>>>>>>
>>>>>>>>> Thanks
>>>>>>>>>
>>>>>>>>> - Xiubo
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>> Thanks,
>>>>>>>>>> Alex
>>>>>>>>>>
>>>>>>>>>>> Thanks
>>>>>>>>>>>
>>>>>>>>>>> - Xiubo
>>>>>>>>>>>