
[v3,3/7] mm, security: Fix missed security_task_movememory()

Message ID 20231201094636.19770-4-laoar.shao@gmail.com (mailing list archive)
State New
Series mm, security, bpf: Fine-grained control over memory policy adjustments with lsm bpf

Commit Message

Yafang Shao Dec. 1, 2023, 9:46 a.m. UTC
Since set_mempolicy(2) with MPOL_F_NUMA_BALANCING, or mbind(2) with either
MPOL_MF_MOVE or MPOL_MF_MOVE_ALL, is capable of moving memory, it's
essential to include security_task_movememory() to cover these paths as
well. This was identified during a code review.

Signed-off-by: Yafang Shao <laoar.shao@gmail.com>
---
 mm/mempolicy.c | 14 +++++++++++++-
 1 file changed, 13 insertions(+), 1 deletion(-)

Comments

Serge E. Hallyn Dec. 1, 2023, 8:50 p.m. UTC | #1
On Fri, Dec 01, 2023 at 09:46:32AM +0000, Yafang Shao wrote:
> Since set_mempolicy(2) with MPOL_F_NUMA_BALANCING, or mbind(2) with either
> MPOL_MF_MOVE or MPOL_MF_MOVE_ALL, is capable of moving memory, it's
> essential to include security_task_movememory() to cover these paths as
> well. This was identified during a code review.

Hm - this doesn't have any bad side effects for you when using selinux?
The selinux_task_movememory() hook checks for PROCESS__SETSCHED privs.
The two existing security_task_movememory() calls are in cases where we
expect the caller to be affecting another task identified by pid, so
that makes sense.  Is an MPOL_MF_MOVE to move your own pages actually
analogous to that?

Much like the concern you mentioned in your intro about requiring
CAP_SYS_NICE and thereby expanding its use, it seems that here you
will be regressing some mbind users unless the granting of PROCESS__SETSCHED
is widened.

> Signed-off-by: Yafang Shao <laoar.shao@gmail.com>
> ---
>  mm/mempolicy.c | 14 +++++++++++++-
>  1 file changed, 13 insertions(+), 1 deletion(-)
> 
> diff --git a/mm/mempolicy.c b/mm/mempolicy.c
> index 10a590ee1c89..1eafe81d782e 100644
> --- a/mm/mempolicy.c
> +++ b/mm/mempolicy.c
> @@ -1259,8 +1259,15 @@ static long do_mbind(unsigned long start, unsigned long len,
>  	if (!new)
>  		flags |= MPOL_MF_DISCONTIG_OK;
>  
> -	if (flags & (MPOL_MF_MOVE | MPOL_MF_MOVE_ALL))
> +	if (flags & (MPOL_MF_MOVE | MPOL_MF_MOVE_ALL)) {

MPOL_MF_MOVE_ALL already has a CAP_SYS_NICE check.  Does that
suffice for that one?

> +		err = security_task_movememory(current);
> +		if (err) {
> +			mpol_put(new);
> +			return err;
> +		}
>  		lru_cache_disable();
> +	}
> +
>  	{
>  		NODEMASK_SCRATCH(scratch);
>  		if (scratch) {
> @@ -1450,6 +1457,8 @@ static int copy_nodes_to_user(unsigned long __user *mask, unsigned long maxnode,
>  /* Basic parameter sanity check used by both mbind() and set_mempolicy() */
>  static inline int sanitize_mpol_flags(int *mode, unsigned short *flags)
>  {
> +	int err;
> +
>  	*flags = *mode & MPOL_MODE_FLAGS;
>  	*mode &= ~MPOL_MODE_FLAGS;
>  
> @@ -1460,6 +1469,9 @@ static inline int sanitize_mpol_flags(int *mode, unsigned short *flags)
>  	if (*flags & MPOL_F_NUMA_BALANCING) {
>  		if (*mode != MPOL_BIND)
>  			return -EINVAL;
> +		err = security_task_movememory(current);
> +		if (err)
> +			return err;
>  		*flags |= (MPOL_F_MOF | MPOL_F_MORON);
>  	}
>  	return 0;
> -- 
> 2.30.1 (Apple Git-130)
Yafang Shao Dec. 3, 2023, 2:57 a.m. UTC | #2
On Sat, Dec 2, 2023 at 4:50 AM Serge E. Hallyn <serge@hallyn.com> wrote:
>
> On Fri, Dec 01, 2023 at 09:46:32AM +0000, Yafang Shao wrote:
> > Since set_mempolicy(2) with MPOL_F_NUMA_BALANCING, or mbind(2) with either
> > MPOL_MF_MOVE or MPOL_MF_MOVE_ALL, is capable of moving memory, it's
> > essential to include security_task_movememory() to cover these paths as
> > well. This was identified during a code review.
>
> Hm - this doesn't have any bad side effects for you when using selinux?
> The selinux_task_movememory() hook checks for PROCESS__SETSCHED privs.
> The two existing security_task_movememory() calls are in cases where we
> expect the caller to be affecting another task identified by pid, so
> that makes sense.  Is an MPOL_MF_MOVE to move your own pages actually
> analogous to that?
>
> Much like the concern you mentioned in your intro about requiring
> CAP_SYS_NICE and thereby expanding its use, it seems that here you
> will be regressing some mbind users unless the granting of PROCESS__SETSCHED
> is widened.

Ah, it appears that this change might lead to a regression. I overlooked
its association with the PROCESS__SETSCHED privilege. I'll exclude
this patch from the next version.
Thanks for your review.

Patch

diff --git a/mm/mempolicy.c b/mm/mempolicy.c
index 10a590ee1c89..1eafe81d782e 100644
--- a/mm/mempolicy.c
+++ b/mm/mempolicy.c
@@ -1259,8 +1259,15 @@ static long do_mbind(unsigned long start, unsigned long len,
 	if (!new)
 		flags |= MPOL_MF_DISCONTIG_OK;
 
-	if (flags & (MPOL_MF_MOVE | MPOL_MF_MOVE_ALL))
+	if (flags & (MPOL_MF_MOVE | MPOL_MF_MOVE_ALL)) {
+		err = security_task_movememory(current);
+		if (err) {
+			mpol_put(new);
+			return err;
+		}
 		lru_cache_disable();
+	}
+
 	{
 		NODEMASK_SCRATCH(scratch);
 		if (scratch) {
@@ -1450,6 +1457,8 @@ static int copy_nodes_to_user(unsigned long __user *mask, unsigned long maxnode,
 /* Basic parameter sanity check used by both mbind() and set_mempolicy() */
 static inline int sanitize_mpol_flags(int *mode, unsigned short *flags)
 {
+	int err;
+
 	*flags = *mode & MPOL_MODE_FLAGS;
 	*mode &= ~MPOL_MODE_FLAGS;
 
@@ -1460,6 +1469,9 @@ static inline int sanitize_mpol_flags(int *mode, unsigned short *flags)
 	if (*flags & MPOL_F_NUMA_BALANCING) {
 		if (*mode != MPOL_BIND)
 			return -EINVAL;
+		err = security_task_movememory(current);
+		if (err)
+			return err;
 		*flags |= (MPOL_F_MOF | MPOL_F_MORON);
 	}
 	return 0;