| Message ID | 20230411155921.14716-8-casey@schaufler-ca.com (mailing list archive) |
|---|---|
| State | Changes Requested |
| Delegated to | Paul Moore |
| Series | LSM: Three basic syscalls |
On Tue, Apr 11, 2023 at 12:02 PM Casey Schaufler <casey@schaufler-ca.com> wrote:
>
> Add lsm_name_to_attr(), which translates a text string to a
> LSM_ATTR value if one is available.
>
> Add lsm_fill_user_ctx(), which fills a struct lsm_ctx, including
> the trailing attribute value. The .len value is padded to a multiple
> of the size of the structure for alignment.
>
> All are used in module specific components of LSM system calls.
>
> Signed-off-by: Casey Schaufler <casey@schaufler-ca.com>
> ---
>  include/linux/security.h | 13 +++++++++++
>  security/lsm_syscalls.c  | 24 ++++++++++++++++++++
>  security/security.c      | 48 ++++++++++++++++++++++++++++++++++++++++
>  3 files changed, 85 insertions(+)

...

> diff --git a/security/lsm_syscalls.c b/security/lsm_syscalls.c
> index 6efbe244d304..67106f642422 100644
> --- a/security/lsm_syscalls.c
> +++ b/security/lsm_syscalls.c
> @@ -17,6 +17,30 @@
>  #include <linux/lsm_hooks.h>
>  #include <uapi/linux/lsm.h>
>
> +/**
> + * lsm_name_to_attr - map an LSM attribute name to its ID
> + * @name: name of the attribute
> + *
> + * Returns the LSM attribute value associated with @name, or 0 if
> + * there is no mapping.
> + */
> +u64 lsm_name_to_attr(const char *name)
> +{
> +	if (!strcmp(name, "current"))
> +		return LSM_ATTR_CURRENT;
> +	if (!strcmp(name, "exec"))
> +		return LSM_ATTR_EXEC;
> +	if (!strcmp(name, "fscreate"))
> +		return LSM_ATTR_FSCREATE;
> +	if (!strcmp(name, "keycreate"))
> +		return LSM_ATTR_KEYCREATE;
> +	if (!strcmp(name, "prev"))
> +		return LSM_ATTR_PREV;
> +	if (!strcmp(name, "sockcreate"))
> +		return LSM_ATTR_SOCKCREATE;
> +	return 0;
> +}

Thank you :)

> /**
>  * sys_lsm_set_self_attr - Set current task's security module attribute
>  * @attr: which attribute to set
> diff --git a/security/security.c b/security/security.c
> index bfe9a1a426b2..453f3ff591ec 100644
> --- a/security/security.c
> +++ b/security/security.c
> @@ -752,6 +752,54 @@ static int lsm_superblock_alloc(struct super_block *sb)
>  	return 0;
>  }
>
> +/**
> + * lsm_fill_user_ctx - Fill a user space lsm_ctx structure
> + * @ctx: an LSM context to be filled
> + * @context: the new context value
> + * @context_size: the size of the new context value
> + * @id: LSM id
> + * @flags: LSM defined flags
> + *
> + * Fill all of the fields in a user space lsm_ctx structure.
> + * Caller is assumed to have verified that @ctx has enough space
> + * for @context.
> + *
> + * The total length is padded to an integral number of lsm_ctx.

Considering that lsm_ctx is variable length I'm not sure that makes a
lot of sense, how about we pad the total length so that the @ctx entry
is a multiple of 64-bits?  If needed we can always change this later
as the lsm_ctx struct is inherently variable in length and userspace
will need to deal with the buffer regardless of alignment.

> + * Returns 0 on success, -EFAULT on a copyout error.
> + */
> +int lsm_fill_user_ctx(struct lsm_ctx __user *ctx, void *context,
> +		      size_t context_size, u64 id, u64 flags)
> +{
> +	struct lsm_ctx *lctx;
> +	size_t locallen;
> +	u8 *composite;
> +	int rc = 0;
> +
> +	locallen = sizeof(*ctx);
> +	if (context_size)
> +		locallen += sizeof(*ctx) * ((context_size / sizeof(*ctx)) + 1);

It seems cleaner to use the kernel's ALIGN() macro:

/* ensure the lsm_ctx length is a multiple of 64-bits */
locallen = ALIGN(sizeof(*ctx) + context_size, 8);
lctx = kzalloc(locallen, GFP_KERNEL)
if (!lctx)
	return -ENOMEM;

> +	composite = kzalloc(locallen, GFP_KERNEL);
> +	if (composite == NULL)
> +		return -ENOMEM;
> +
> +	lctx = (struct lsm_ctx *)composite;
> +	lctx->id = id;
> +	lctx->flags = flags;
> +	lctx->ctx_len = context_size;
> +	lctx->len = locallen;
> +
> +	memcpy(composite + sizeof(*lctx), context, context_size);

Is there a problem with doing `memcpy(lctx->ctx, context,
context_size)` in place of the memcpy above?  That is easier to read
and we can get rid of @composite.

> +	if (copy_to_user(ctx, composite, locallen))
> +		rc = -EFAULT;
> +
> +	kfree(composite);
> +
> +	return rc;
> +}

I understand Mickaël asked you to do a single copy_to_user(), but I'm
not sure it is worth it if we have to add a temporary buffer
allocation like that.  How about something like below (v7 with some
tweaks/padding)?  You could be a bit more clever with the memset if
you want, I was just typing this up quickly ...

int lsm_fill_user_ctx(...)
{
	struct lsm_ctx lctx;

	/* ensure the lctx length is a multiple of 64-bits */
	lctx.len = ALIGN(sizeof(lctx) + context_size, 8);

	lctx.id = id;
	lctx.flags = flags;
	lctx.ctx_len = context_size;

	memset(ctx, 0, lctx.len);
	if (copy_to_user(ctx, &lctx, sizeof(lctx))
		return -EFAULT;
	if (copy_to_user(&ctx[1], context, context_size)
		return -EFAULT;

	return 0;
}

--
paul-moore.com
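For readers double-checking the padding arithmetic being proposed, the standalone program below (userspace, not kernel code) shows what an 8-byte ALIGN() does to the header-plus-value length. The `lsm_ctx_sketch` struct is a stand-in that mirrors the lsm_ctx layout implied by the patch, and ALIGN() is reimplemented here with the kernel macro's round-up semantics.

```c
/*
 * Standalone illustration (userspace, not kernel code).  ALIGN() is
 * reimplemented with the kernel's round-up semantics and the struct is
 * a stand-in mirroring the lsm_ctx layout implied by the patch.
 */
#include <stdint.h>
#include <stdio.h>

#define ALIGN(x, a) (((x) + ((a) - 1)) & ~((size_t)(a) - 1))

struct lsm_ctx_sketch {
	uint64_t id;
	uint64_t flags;
	uint64_t len;
	uint64_t ctx_len;
	/* uint8_t ctx[];  variable-length attribute value follows */
};

int main(void)
{
	size_t context_size;

	for (context_size = 0; context_size <= 16; context_size += 5) {
		size_t padded = ALIGN(sizeof(struct lsm_ctx_sketch) +
				      context_size, 8);
		printf("ctx_len %2zu -> total len %2zu\n",
		       context_size, padded);
	}
	return 0;
}
```

With the 32-byte header above, context sizes of 0, 5, 10 and 15 bytes pad out to total lengths of 32, 40, 48 and 48.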
On 4/18/2023 2:51 PM, Paul Moore wrote:
> On Tue, Apr 11, 2023 at 12:02 PM Casey Schaufler <casey@schaufler-ca.com> wrote:
>> Add lsm_name_to_attr(), which translates a text string to a
>> LSM_ATTR value if one is available.
>>
>> Add lsm_fill_user_ctx(), which fills a struct lsm_ctx, including
>> the trailing attribute value. The .len value is padded to a multiple
>> of the size of the structure for alignment.
>>
>> All are used in module specific components of LSM system calls.
>>
>> Signed-off-by: Casey Schaufler <casey@schaufler-ca.com>
>> ---
>>  include/linux/security.h | 13 +++++++++++
>>  security/lsm_syscalls.c  | 24 ++++++++++++++++++++
>>  security/security.c      | 48 ++++++++++++++++++++++++++++++++++++++++
>>  3 files changed, 85 insertions(+)
> ..
>
>> diff --git a/security/lsm_syscalls.c b/security/lsm_syscalls.c
>> index 6efbe244d304..67106f642422 100644
>> --- a/security/lsm_syscalls.c
>> +++ b/security/lsm_syscalls.c
>> @@ -17,6 +17,30 @@
>>  #include <linux/lsm_hooks.h>
>>  #include <uapi/linux/lsm.h>
>>
>> +/**
>> + * lsm_name_to_attr - map an LSM attribute name to its ID
>> + * @name: name of the attribute
>> + *
>> + * Returns the LSM attribute value associated with @name, or 0 if
>> + * there is no mapping.
>> + */
>> +u64 lsm_name_to_attr(const char *name)
>> +{
>> +	if (!strcmp(name, "current"))
>> +		return LSM_ATTR_CURRENT;
>> +	if (!strcmp(name, "exec"))
>> +		return LSM_ATTR_EXEC;
>> +	if (!strcmp(name, "fscreate"))
>> +		return LSM_ATTR_FSCREATE;
>> +	if (!strcmp(name, "keycreate"))
>> +		return LSM_ATTR_KEYCREATE;
>> +	if (!strcmp(name, "prev"))
>> +		return LSM_ATTR_PREV;
>> +	if (!strcmp(name, "sockcreate"))
>> +		return LSM_ATTR_SOCKCREATE;
>> +	return 0;
>> +}
> Thank you :)

It didn't hurt all that badly.

>
>> /**
>>  * sys_lsm_set_self_attr - Set current task's security module attribute
>>  * @attr: which attribute to set
>> diff --git a/security/security.c b/security/security.c
>> index bfe9a1a426b2..453f3ff591ec 100644
>> --- a/security/security.c
>> +++ b/security/security.c
>> @@ -752,6 +752,54 @@ static int lsm_superblock_alloc(struct super_block *sb)
>>  	return 0;
>>  }
>>
>> +/**
>> + * lsm_fill_user_ctx - Fill a user space lsm_ctx structure
>> + * @ctx: an LSM context to be filled
>> + * @context: the new context value
>> + * @context_size: the size of the new context value
>> + * @id: LSM id
>> + * @flags: LSM defined flags
>> + *
>> + * Fill all of the fields in a user space lsm_ctx structure.
>> + * Caller is assumed to have verified that @ctx has enough space
>> + * for @context.
>> + *
>> + * The total length is padded to an integral number of lsm_ctx.
> Considering that lsm_ctx is variable length I'm not sure that makes a
> lot of sense, how about we pad the total length so that the @ctx entry
> is a multiple of 64-bits?

64 is fine.

> If needed we can always change this later
> as the lsm_ctx struct is inherently variable in length and userspace
> will need to deal with the buffer regardless of alignment.
>
>> + * Returns 0 on success, -EFAULT on a copyout error.
>> + */
>> +int lsm_fill_user_ctx(struct lsm_ctx __user *ctx, void *context,
>> +		      size_t context_size, u64 id, u64 flags)
>> +{
>> +	struct lsm_ctx *lctx;
>> +	size_t locallen;
>> +	u8 *composite;
>> +	int rc = 0;
>> +
>> +	locallen = sizeof(*ctx);
>> +	if (context_size)
>> +		locallen += sizeof(*ctx) * ((context_size / sizeof(*ctx)) + 1);
> It seems cleaner to use the kernel's ALIGN() macro:

Indeed. I'll do it.

>
> /* ensure the lsm_ctx length is a multiple of 64-bits */
> locallen = ALIGN(sizeof(*ctx) + context_size, 8);
> lctx = kzalloc(locallen, GFP_KERNEL)
> if (!lctx)
> 	return -ENOMEM;
>
>> +	composite = kzalloc(locallen, GFP_KERNEL);
>> +	if (composite == NULL)
>> +		return -ENOMEM;
>> +
>> +	lctx = (struct lsm_ctx *)composite;
>> +	lctx->id = id;
>> +	lctx->flags = flags;
>> +	lctx->ctx_len = context_size;
>> +	lctx->len = locallen;
>> +
>> +	memcpy(composite + sizeof(*lctx), context, context_size);
> Is there a problem with doing `memcpy(lctx->ctx, context,
> context_size)` in place of the memcpy above?

Nope.

> That is easier to read
> and we can get rid of @composite.

Point.

>> +	if (copy_to_user(ctx, composite, locallen))
>> +		rc = -EFAULT;
>> +
>> +	kfree(composite);
>> +
>> +	return rc;
>> +}
> I understand Mickaël asked you to do a single copy_to_user(), but I'm
> not sure it is worth it if we have to add a temporary buffer
> allocation like that. How about something like below (v7 with some
> tweaks/padding)? You could be a bit more clever with the memset if
> you want, I was just typing this up quickly ...

I prefer two copies to the allocation myself. I'll incorporate this.

>
> int lsm_fill_user_ctx(...)
> {
> 	struct lsm_ctx lctx;
>
> 	/* ensure the lctx length is a multiple of 64-bits */
> 	lctx.len = ALIGN(sizeof(lctx) + context_size, 8);
>
> 	lctx.id = id;
> 	lctx.flags = flags;
> 	lctx.ctx_len = context_size;
>
> 	memset(ctx, 0, lctx.len);
> 	if (copy_to_user(ctx, &lctx, sizeof(lctx))
> 		return -EFAULT;
> 	if (copy_to_user(&ctx[1], context, context_size)
> 		return -EFAULT;
>
> 	return 0;
> }
>
> --
> paul-moore.com
On 4/18/2023 3:43 PM, Casey Schaufler wrote:
> On 4/18/2023 2:51 PM, Paul Moore wrote:
>> On Tue, Apr 11, 2023 at 12:02 PM Casey Schaufler <casey@schaufler-ca.com> wrote:
>>> Add lsm_name_to_attr(), which translates a text string to a
>>> LSM_ATTR value if one is available.
>>>
>>> Add lsm_fill_user_ctx(), which fills a struct lsm_ctx, including
>>> the trailing attribute value. The .len value is padded to a multiple
>>> of the size of the structure for alignment.
>>>
>>> All are used in module specific components of LSM system calls.
>>>
>>> Signed-off-by: Casey Schaufler <casey@schaufler-ca.com>
>>> ---
>>>  include/linux/security.h | 13 +++++++++++
>>>  security/lsm_syscalls.c  | 24 ++++++++++++++++++++
>>>  security/security.c      | 48 ++++++++++++++++++++++++++++++++++++++++
>>>  3 files changed, 85 insertions(+)
>> ..
>>
>>> diff --git a/security/lsm_syscalls.c b/security/lsm_syscalls.c
>>> index 6efbe244d304..67106f642422 100644
>>> --- a/security/lsm_syscalls.c
>>> +++ b/security/lsm_syscalls.c
>>> @@ -17,6 +17,30 @@
>>>  #include <linux/lsm_hooks.h>
>>>  #include <uapi/linux/lsm.h>
>>>
>>> +/**
>>> + * lsm_name_to_attr - map an LSM attribute name to its ID
>>> + * @name: name of the attribute
>>> + *
>>> + * Returns the LSM attribute value associated with @name, or 0 if
>>> + * there is no mapping.
>>> + */
>>> +u64 lsm_name_to_attr(const char *name)
>>> +{
>>> +	if (!strcmp(name, "current"))
>>> +		return LSM_ATTR_CURRENT;
>>> +	if (!strcmp(name, "exec"))
>>> +		return LSM_ATTR_EXEC;
>>> +	if (!strcmp(name, "fscreate"))
>>> +		return LSM_ATTR_FSCREATE;
>>> +	if (!strcmp(name, "keycreate"))
>>> +		return LSM_ATTR_KEYCREATE;
>>> +	if (!strcmp(name, "prev"))
>>> +		return LSM_ATTR_PREV;
>>> +	if (!strcmp(name, "sockcreate"))
>>> +		return LSM_ATTR_SOCKCREATE;
>>> +	return 0;
>>> +}
>> Thank you :)
> It didn't hurt all that badly.
>
>>
>>> /**
>>>  * sys_lsm_set_self_attr - Set current task's security module attribute
>>>  * @attr: which attribute to set
>>> diff --git a/security/security.c b/security/security.c
>>> index bfe9a1a426b2..453f3ff591ec 100644
>>> --- a/security/security.c
>>> +++ b/security/security.c
>>> @@ -752,6 +752,54 @@ static int lsm_superblock_alloc(struct super_block *sb)
>>>  	return 0;
>>>  }
>>>
>>> +/**
>>> + * lsm_fill_user_ctx - Fill a user space lsm_ctx structure
>>> + * @ctx: an LSM context to be filled
>>> + * @context: the new context value
>>> + * @context_size: the size of the new context value
>>> + * @id: LSM id
>>> + * @flags: LSM defined flags
>>> + *
>>> + * Fill all of the fields in a user space lsm_ctx structure.
>>> + * Caller is assumed to have verified that @ctx has enough space
>>> + * for @context.
>>> + *
>>> + * The total length is padded to an integral number of lsm_ctx.
>> Considering that lsm_ctx is variable length I'm not sure that makes a
>> lot of sense, how about we pad the total length so that the @ctx entry
>> is a multiple of 64-bits?
> 64 is fine.
>
>> If needed we can always change this later
>> as the lsm_ctx struct is inherently variable in length and userspace
>> will need to deal with the buffer regardless of alignment.
>>
>>> + * Returns 0 on success, -EFAULT on a copyout error.
>>> + */
>>> +int lsm_fill_user_ctx(struct lsm_ctx __user *ctx, void *context,
>>> +		      size_t context_size, u64 id, u64 flags)
>>> +{
>>> +	struct lsm_ctx *lctx;
>>> +	size_t locallen;
>>> +	u8 *composite;
>>> +	int rc = 0;
>>> +
>>> +	locallen = sizeof(*ctx);
>>> +	if (context_size)
>>> +		locallen += sizeof(*ctx) * ((context_size / sizeof(*ctx)) + 1);
>> It seems cleaner to use the kernel's ALIGN() macro:
> Indeed. I'll do it.
>
>> /* ensure the lsm_ctx length is a multiple of 64-bits */
>> locallen = ALIGN(sizeof(*ctx) + context_size, 8);
>> lctx = kzalloc(locallen, GFP_KERNEL)
>> if (!lctx)
>> 	return -ENOMEM;
>>
>>> +	composite = kzalloc(locallen, GFP_KERNEL);
>>> +	if (composite == NULL)
>>> +		return -ENOMEM;
>>> +
>>> +	lctx = (struct lsm_ctx *)composite;
>>> +	lctx->id = id;
>>> +	lctx->flags = flags;
>>> +	lctx->ctx_len = context_size;
>>> +	lctx->len = locallen;
>>> +
>>> +	memcpy(composite + sizeof(*lctx), context, context_size);
>> Is there a problem with doing `memcpy(lctx->ctx, context,
>> context_size)` in place of the memcpy above?
> Nope.
>
>> That is easier to read
>> and we can get rid of @composite.
> Point.
>
>>> +	if (copy_to_user(ctx, composite, locallen))
>>> +		rc = -EFAULT;
>>> +
>>> +	kfree(composite);
>>> +
>>> +	return rc;
>>> +}
>> I understand Mickaël asked you to do a single copy_to_user(), but I'm
>> not sure it is worth it if we have to add a temporary buffer
>> allocation like that. How about something like below (v7 with some
>> tweaks/padding)? You could be a bit more clever with the memset if
>> you want, I was just typing this up quickly ...
> I prefer two copies to the allocation myself. I'll incorporate this.

After further review ... The tweaks required for padding aren't as
clean as all that, and memset() isn't going to work for __user memory.
I'm having trouble coming up with a way to deal with the padding that
doesn't require either allocation or a third copy_to_user(). The
inclusion of padding makes the kzalloc() and single copy_to_user()
pretty attractive.

>
>> int lsm_fill_user_ctx(...)
>> {
>> 	struct lsm_ctx lctx;
>>
>> 	/* ensure the lctx length is a multiple of 64-bits */
>> 	lctx.len = ALIGN(sizeof(lctx) + context_size, 8);
>>
>> 	lctx.id = id;
>> 	lctx.flags = flags;
>> 	lctx.ctx_len = context_size;
>>
>> 	memset(ctx, 0, lctx.len);
>> 	if (copy_to_user(ctx, &lctx, sizeof(lctx))
>> 		return -EFAULT;
>> 	if (copy_to_user(&ctx[1], context, context_size)
>> 		return -EFAULT;
>>
>> 	return 0;
>> }
>>
>> --
>> paul-moore.com
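Pulling the review threads together, here is a minimal sketch of the shape the helper appears to be heading toward: ALIGN() padding, a single zeroed allocation, a memcpy() into the lsm_ctx flexible array, and one copy_to_user(). It assumes the lsm_ctx layout from the patch and only sketches the direction discussed above; it is not the code that was eventually posted.

```c
/*
 * Sketch only: one way to combine the suggestions above.  Assumes the
 * struct lsm_ctx from the patch (with its ctx[] flexible array); this is
 * not the final version of the helper.
 */
int lsm_fill_user_ctx(struct lsm_ctx __user *uctx, void *context,
		      size_t context_size, u64 id, u64 flags)
{
	struct lsm_ctx *lctx;
	size_t total;
	int rc = 0;

	/* pad so the reported length stays a multiple of 64 bits */
	total = ALIGN(sizeof(*lctx) + context_size, 8);

	/* kzalloc() zero-fills, so the padding bytes go out as zeros */
	lctx = kzalloc(total, GFP_KERNEL);
	if (!lctx)
		return -ENOMEM;

	lctx->id = id;
	lctx->flags = flags;
	lctx->len = total;
	lctx->ctx_len = context_size;
	memcpy(lctx->ctx, context, context_size);

	/* one copyout covers the header, the value and the padding */
	if (copy_to_user(uctx, lctx, total))
		rc = -EFAULT;

	kfree(lctx);
	return rc;
}
```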
diff --git a/include/linux/security.h b/include/linux/security.h
index f7292890b6a2..c96fb56159e3 100644
--- a/include/linux/security.h
+++ b/include/linux/security.h
@@ -263,6 +263,7 @@ int unregister_blocking_lsm_notifier(struct notifier_block *nb);
 /* prototypes */
 extern int security_init(void);
 extern int early_security_init(void);
+extern u64 lsm_name_to_attr(const char *name);
 
 /* Security operations */
 int security_binder_set_context_mgr(const struct cred *mgr);
@@ -491,6 +492,8 @@ int security_inode_notifysecctx(struct inode *inode, void *ctx, u32 ctxlen);
 int security_inode_setsecctx(struct dentry *dentry, void *ctx, u32 ctxlen);
 int security_inode_getsecctx(struct inode *inode, void **ctx, u32 *ctxlen);
 int security_locked_down(enum lockdown_reason what);
+int lsm_fill_user_ctx(struct lsm_ctx __user *ctx, void *context,
+		      size_t context_size, u64 id, u64 flags);
 
 #else /* CONFIG_SECURITY */
 
 static inline int call_blocking_lsm_notifier(enum lsm_event event, void *data)
@@ -508,6 +511,11 @@ static inline int unregister_blocking_lsm_notifier(struct notifier_block *nb)
 	return 0;
 }
 
+static inline u64 lsm_name_to_attr(const char *name)
+{
+	return 0;
+}
+
 static inline void security_free_mnt_opts(void **mnt_opts)
 {
 }
@@ -1420,6 +1428,11 @@ static inline int security_locked_down(enum lockdown_reason what)
 {
 	return 0;
 }
+static inline int lsm_fill_user_ctx(struct lsm_ctx __user *ctx, void *context,
+				    size_t context_size, u64 id, u64 flags)
+{
+	return 0;
+}
 
 #endif /* CONFIG_SECURITY */
 
 #if defined(CONFIG_SECURITY) && defined(CONFIG_WATCH_QUEUE)
diff --git a/security/lsm_syscalls.c b/security/lsm_syscalls.c
index 6efbe244d304..67106f642422 100644
--- a/security/lsm_syscalls.c
+++ b/security/lsm_syscalls.c
@@ -17,6 +17,30 @@
 #include <linux/lsm_hooks.h>
 #include <uapi/linux/lsm.h>
 
+/**
+ * lsm_name_to_attr - map an LSM attribute name to its ID
+ * @name: name of the attribute
+ *
+ * Returns the LSM attribute value associated with @name, or 0 if
+ * there is no mapping.
+ */
+u64 lsm_name_to_attr(const char *name)
+{
+	if (!strcmp(name, "current"))
+		return LSM_ATTR_CURRENT;
+	if (!strcmp(name, "exec"))
+		return LSM_ATTR_EXEC;
+	if (!strcmp(name, "fscreate"))
+		return LSM_ATTR_FSCREATE;
+	if (!strcmp(name, "keycreate"))
+		return LSM_ATTR_KEYCREATE;
+	if (!strcmp(name, "prev"))
+		return LSM_ATTR_PREV;
+	if (!strcmp(name, "sockcreate"))
+		return LSM_ATTR_SOCKCREATE;
+	return 0;
+}
+
 /**
  * sys_lsm_set_self_attr - Set current task's security module attribute
  * @attr: which attribute to set
diff --git a/security/security.c b/security/security.c
index bfe9a1a426b2..453f3ff591ec 100644
--- a/security/security.c
+++ b/security/security.c
@@ -752,6 +752,54 @@ static int lsm_superblock_alloc(struct super_block *sb)
 	return 0;
 }
 
+/**
+ * lsm_fill_user_ctx - Fill a user space lsm_ctx structure
+ * @ctx: an LSM context to be filled
+ * @context: the new context value
+ * @context_size: the size of the new context value
+ * @id: LSM id
+ * @flags: LSM defined flags
+ *
+ * Fill all of the fields in a user space lsm_ctx structure.
+ * Caller is assumed to have verified that @ctx has enough space
+ * for @context.
+ *
+ * The total length is padded to an integral number of lsm_ctx.
+ *
+ * Returns 0 on success, -EFAULT on a copyout error.
+ */
+int lsm_fill_user_ctx(struct lsm_ctx __user *ctx, void *context,
+		      size_t context_size, u64 id, u64 flags)
+{
+	struct lsm_ctx *lctx;
+	size_t locallen;
+	u8 *composite;
+	int rc = 0;
+
+	locallen = sizeof(*ctx);
+	if (context_size)
+		locallen += sizeof(*ctx) * ((context_size / sizeof(*ctx)) + 1);
+
+	composite = kzalloc(locallen, GFP_KERNEL);
+	if (composite == NULL)
+		return -ENOMEM;
+
+	lctx = (struct lsm_ctx *)composite;
+	lctx->id = id;
+	lctx->flags = flags;
+	lctx->ctx_len = context_size;
+	lctx->len = locallen;
+
+	memcpy(composite + sizeof(*lctx), context, context_size);
+
+	if (copy_to_user(ctx, composite, locallen))
+		rc = -EFAULT;
+
+	kfree(composite);
+
+	return rc;
+}
+
 /*
  * The default value of the LSM hook is defined in linux/lsm_hook_defs.h and
  * can be accessed with:
Add lsm_name_to_attr(), which translates a text string to a
LSM_ATTR value if one is available.

Add lsm_fill_user_ctx(), which fills a struct lsm_ctx, including
the trailing attribute value. The .len value is padded to a multiple
of the size of the structure for alignment.

All are used in module specific components of LSM system calls.

Signed-off-by: Casey Schaufler <casey@schaufler-ca.com>
---
 include/linux/security.h | 13 +++++++++++
 security/lsm_syscalls.c  | 24 ++++++++++++++++++++
 security/security.c      | 48 ++++++++++++++++++++++++++++++++++++++++
 3 files changed, 85 insertions(+)
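To show how a security module might consume these helpers, here is an illustrative, deliberately simplified caller. The example_getselfattr() shape, the example_label value and the LSM_ID_EXAMPLE constant are placeholders invented for this sketch; the real hook signatures, return conventions and per-LSM wiring live in the later patches of the series.

```c
#include <linux/align.h>
#include <linux/errno.h>
#include <linux/security.h>
#include <linux/string.h>
#include <uapi/linux/lsm.h>

/* placeholder LSM id for this sketch; real modules use their LSM_ID_* value */
#define LSM_ID_EXAMPLE 200

/* Illustration only: a hypothetical module exposing one "current" value. */
static char example_label[] = "unconfined";

static int example_getselfattr(unsigned int attr, struct lsm_ctx __user *ctx,
			       size_t *size, u32 flags)
{
	size_t value_len = strlen(example_label) + 1;
	/* allow for the 64-bit padding discussed in the review above */
	size_t total = ALIGN(sizeof(struct lsm_ctx) + value_len, 8);

	if (attr != LSM_ATTR_CURRENT)
		return -EOPNOTSUPP;

	if (*size < total) {
		*size = total;
		return -E2BIG;
	}
	*size = total;

	/* header, value and padding are all written by the common helper;
	 * the real hooks also report attribute counts, which is elided here */
	return lsm_fill_user_ctx(ctx, example_label, value_len,
				 LSM_ID_EXAMPLE, 0);
}
```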