
[bpf-next,RESEND,v2,2/2] bpf: Take return from set_memory_rox() into account with bpf_jit_binary_lock_ro()

Message ID 9c0b7a2e25f05c7128adcd04cf33a2df64c82627.1709279661.git.christophe.leroy@csgroup.eu (mailing list archive)
State Not Applicable
Series [bpf-next,RESEND,v2,1/2] bpf: Take return from set_memory_ro() into account with bpf_prog_lock_ro()

Commit Message

Christophe Leroy March 1, 2024, 7:57 a.m. UTC
set_memory_rox() can fail, leaving memory unprotected.

Check return and bail out when bpf_jit_binary_lock_ro() returns
an error.

Link: https://github.com/KSPP/linux/issues/7
Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Cc: linux-hardening@vger.kernel.org <linux-hardening@vger.kernel.org>
Reviewed-by: Kees Cook <keescook@chromium.org>
Reviewed-by: Puranjay Mohan <puranjay12@gmail.com>
Reviewed-by: Ilya Leoshkevich <iii@linux.ibm.com>  # s390x
Acked-by: Tiezhu Yang <yangtiezhu@loongson.cn>  # LoongArch
Reviewed-by: Johan Almbladh <johan.almbladh@anyfinetworks.com> # MIPS Part
---
Sorry for the resend; I forgot to flag patch 2 as bpf-next

The previous patch introduces a dependency on this one because it modifies bpf_prog_lock_ro(), but the two are otherwise independent.
This patch can be applied standalone by resolving the trivial conflict with the unmodified bpf_prog_lock_ro().

v2:
- Dropped arm64 change following commit 1dad391daef1 ("bpf, arm64: use bpf_prog_pack for memory management")
- Fixed too long lines reported by checkpatch
---
 arch/arm/net/bpf_jit_32.c        | 25 ++++++++++++-------------
 arch/loongarch/net/bpf_jit.c     | 22 ++++++++++++++++------
 arch/mips/net/bpf_jit_comp.c     |  3 ++-
 arch/parisc/net/bpf_jit_core.c   |  8 +++++++-
 arch/s390/net/bpf_jit_comp.c     |  6 +++++-
 arch/sparc/net/bpf_jit_comp_64.c |  6 +++++-
 arch/x86/net/bpf_jit_comp32.c    |  3 +--
 include/linux/filter.h           |  5 +++--
 8 files changed, 51 insertions(+), 27 deletions(-)
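
For quick reference, a condensed sketch of the shape of the change, assembled from the include/linux/filter.h and arch hunks below (not a standalone build unit):

/* include/linux/filter.h: the helper now propagates set_memory_rox() failure */
static inline int __must_check
bpf_jit_binary_lock_ro(struct bpf_binary_header *hdr)
{
	set_vm_flush_reset_perms(hdr);
	return set_memory_rox((unsigned long)hdr, hdr->size >> PAGE_SHIFT);
}

/* Arch JIT callers check the result and discard the image on failure,
 * falling back to the interpreter, e.g. (sparc64 shape):
 */
	if (bpf_jit_binary_lock_ro(header)) {
		bpf_jit_binary_free(header);
		prog = orig_prog;
		goto out_off;
	}

The __must_check annotation means any caller that still ignores the return value now triggers a compiler warning, which is how unconverted call sites would be caught.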

Comments

Christophe Leroy March 1, 2024, 11:27 a.m. UTC | #1
On 01/03/2024 at 08:57, Christophe Leroy wrote:
> set_memory_rox() can fail, leaving memory unprotected.
> 
> Check return and bail out when bpf_jit_binary_lock_ro() returns
> an error.
> 

Definitely not a good day for me. I mixed up the maintainer lists between patch 1 
and patch 2, sorry.

Adding hengqi.chen@gmail.com, svens@linux.ibm.com, dsahern@kernel.org, 
mingo@redhat.com, James.Bottomley@HansenPartnership.com, 
tsbogend@alpha.franken.de, x86@kernel.org, tglx@linutronix.de, 
paulburton@kernel.org, hca@linux.ibm.com, udknight@gmail.com, 
bp@alien8.de, chenhuacai@kernel.org, hpa@zytor.com, gor@linux.ibm.com, 
linux@armlinux.org.uk, kernel@xen0n.name, dave.hansen@linux.intel.com, 
borntraeger@linux.ibm.com, agordeev@linux.ibm.com, deller@gmx.de

Tell me if I need to resend the series once more.

Thanks
Christophe


Patch

diff --git a/arch/arm/net/bpf_jit_32.c b/arch/arm/net/bpf_jit_32.c
index 1d672457d02f..01516f83a95a 100644
--- a/arch/arm/net/bpf_jit_32.c
+++ b/arch/arm/net/bpf_jit_32.c
@@ -2222,28 +2222,21 @@  struct bpf_prog *bpf_int_jit_compile(struct bpf_prog *prog)
 	/* If building the body of the JITed code fails somehow,
 	 * we fall back to the interpretation.
 	 */
-	if (build_body(&ctx) < 0) {
-		image_ptr = NULL;
-		bpf_jit_binary_free(header);
-		prog = orig_prog;
-		goto out_imms;
-	}
+	if (build_body(&ctx) < 0)
+		goto out_free;
 	build_epilogue(&ctx);
 
 	/* 3.) Extra pass to validate JITed Code */
-	if (validate_code(&ctx)) {
-		image_ptr = NULL;
-		bpf_jit_binary_free(header);
-		prog = orig_prog;
-		goto out_imms;
-	}
+	if (validate_code(&ctx))
+		goto out_free;
 	flush_icache_range((u32)header, (u32)(ctx.target + ctx.idx));
 
 	if (bpf_jit_enable > 1)
 		/* there are 2 passes here */
 		bpf_jit_dump(prog->len, image_size, 2, ctx.target);
 
-	bpf_jit_binary_lock_ro(header);
+	if (bpf_jit_binary_lock_ro(header))
+		goto out_free;
 	prog->bpf_func = (void *)ctx.target;
 	prog->jited = 1;
 	prog->jited_len = image_size;
@@ -2260,5 +2253,11 @@  struct bpf_prog *bpf_int_jit_compile(struct bpf_prog *prog)
 		bpf_jit_prog_release_other(prog, prog == orig_prog ?
 					   tmp : orig_prog);
 	return prog;
+
+out_free:
+	image_ptr = NULL;
+	bpf_jit_binary_free(header);
+	prog = orig_prog;
+	goto out_imms;
 }
 
diff --git a/arch/loongarch/net/bpf_jit.c b/arch/loongarch/net/bpf_jit.c
index e73323d759d0..7dbefd4ba210 100644
--- a/arch/loongarch/net/bpf_jit.c
+++ b/arch/loongarch/net/bpf_jit.c
@@ -1294,16 +1294,19 @@  struct bpf_prog *bpf_int_jit_compile(struct bpf_prog *prog)
 	flush_icache_range((unsigned long)header, (unsigned long)(ctx.image + ctx.idx));
 
 	if (!prog->is_func || extra_pass) {
+		int err;
+
 		if (extra_pass && ctx.idx != jit_data->ctx.idx) {
 			pr_err_once("multi-func JIT bug %d != %d\n",
 				    ctx.idx, jit_data->ctx.idx);
-			bpf_jit_binary_free(header);
-			prog->bpf_func = NULL;
-			prog->jited = 0;
-			prog->jited_len = 0;
-			goto out_offset;
+			goto out_free;
+		}
+		err = bpf_jit_binary_lock_ro(header);
+		if (err) {
+			pr_err_once("bpf_jit_binary_lock_ro() returned %d\n",
+				    err);
+			goto out_free;
 		}
-		bpf_jit_binary_lock_ro(header);
 	} else {
 		jit_data->ctx = ctx;
 		jit_data->image = image_ptr;
@@ -1334,6 +1337,13 @@  struct bpf_prog *bpf_int_jit_compile(struct bpf_prog *prog)
 	out_offset = -1;
 
 	return prog;
+
+out_free:
+	bpf_jit_binary_free(header);
+	prog->bpf_func = NULL;
+	prog->jited = 0;
+	prog->jited_len = 0;
+	goto out_offset;
 }
 
 /* Indicate the JIT backend supports mixing bpf2bpf and tailcalls. */
diff --git a/arch/mips/net/bpf_jit_comp.c b/arch/mips/net/bpf_jit_comp.c
index a40d926b6513..e355dfca4400 100644
--- a/arch/mips/net/bpf_jit_comp.c
+++ b/arch/mips/net/bpf_jit_comp.c
@@ -1012,7 +1012,8 @@  struct bpf_prog *bpf_int_jit_compile(struct bpf_prog *prog)
 	bpf_prog_fill_jited_linfo(prog, &ctx.descriptors[1]);
 
 	/* Set as read-only exec and flush instruction cache */
-	bpf_jit_binary_lock_ro(header);
+	if (bpf_jit_binary_lock_ro(header))
+		goto out_err;
 	flush_icache_range((unsigned long)header,
 			   (unsigned long)&ctx.target[ctx.jit_index]);
 
diff --git a/arch/parisc/net/bpf_jit_core.c b/arch/parisc/net/bpf_jit_core.c
index d6ee2fd45550..979f45d4d1fb 100644
--- a/arch/parisc/net/bpf_jit_core.c
+++ b/arch/parisc/net/bpf_jit_core.c
@@ -167,7 +167,13 @@  struct bpf_prog *bpf_int_jit_compile(struct bpf_prog *prog)
 	bpf_flush_icache(jit_data->header, ctx->insns + ctx->ninsns);
 
 	if (!prog->is_func || extra_pass) {
-		bpf_jit_binary_lock_ro(jit_data->header);
+		if (bpf_jit_binary_lock_ro(jit_data->header)) {
+			bpf_jit_binary_free(jit_data->header);
+			prog->bpf_func = NULL;
+			prog->jited = 0;
+			prog->jited_len = 0;
+			goto out_offset;
+		}
 		prologue_len = ctx->epilogue_offset - ctx->body_len;
 		for (i = 0; i < prog->len; i++)
 			ctx->offset[i] += prologue_len;
diff --git a/arch/s390/net/bpf_jit_comp.c b/arch/s390/net/bpf_jit_comp.c
index b418333bb086..e613eebfd349 100644
--- a/arch/s390/net/bpf_jit_comp.c
+++ b/arch/s390/net/bpf_jit_comp.c
@@ -2111,7 +2111,11 @@  struct bpf_prog *bpf_int_jit_compile(struct bpf_prog *fp)
 		print_fn_code(jit.prg_buf, jit.size_prg);
 	}
 	if (!fp->is_func || extra_pass) {
-		bpf_jit_binary_lock_ro(header);
+		if (bpf_jit_binary_lock_ro(header)) {
+			bpf_jit_binary_free(header);
+			fp = orig_fp;
+			goto free_addrs;
+		}
 	} else {
 		jit_data->header = header;
 		jit_data->ctx = jit;
diff --git a/arch/sparc/net/bpf_jit_comp_64.c b/arch/sparc/net/bpf_jit_comp_64.c
index fa0759bfe498..73bf0aea8baf 100644
--- a/arch/sparc/net/bpf_jit_comp_64.c
+++ b/arch/sparc/net/bpf_jit_comp_64.c
@@ -1602,7 +1602,11 @@  struct bpf_prog *bpf_int_jit_compile(struct bpf_prog *prog)
 	bpf_flush_icache(header, (u8 *)header + header->size);
 
 	if (!prog->is_func || extra_pass) {
-		bpf_jit_binary_lock_ro(header);
+		if (bpf_jit_binary_lock_ro(header)) {
+			bpf_jit_binary_free(header);
+			prog = orig_prog;
+			goto out_off;
+		}
 	} else {
 		jit_data->ctx = ctx;
 		jit_data->image = image_ptr;
diff --git a/arch/x86/net/bpf_jit_comp32.c b/arch/x86/net/bpf_jit_comp32.c
index b18ce19981ec..f2be1dcf3b24 100644
--- a/arch/x86/net/bpf_jit_comp32.c
+++ b/arch/x86/net/bpf_jit_comp32.c
@@ -2600,8 +2600,7 @@  struct bpf_prog *bpf_int_jit_compile(struct bpf_prog *prog)
 	if (bpf_jit_enable > 1)
 		bpf_jit_dump(prog->len, proglen, pass + 1, image);
 
-	if (image) {
-		bpf_jit_binary_lock_ro(header);
+	if (image && !bpf_jit_binary_lock_ro(header)) {
 		prog->bpf_func = (void *)image;
 		prog->jited = 1;
 		prog->jited_len = proglen;
diff --git a/include/linux/filter.h b/include/linux/filter.h
index 7dd59bccaeec..fc42dcfdbd62 100644
--- a/include/linux/filter.h
+++ b/include/linux/filter.h
@@ -895,10 +895,11 @@  static inline int __must_check bpf_prog_lock_ro(struct bpf_prog *fp)
 	return 0;
 }
 
-static inline void bpf_jit_binary_lock_ro(struct bpf_binary_header *hdr)
+static inline int __must_check
+bpf_jit_binary_lock_ro(struct bpf_binary_header *hdr)
 {
 	set_vm_flush_reset_perms(hdr);
-	set_memory_rox((unsigned long)hdr, hdr->size >> PAGE_SHIFT);
+	return set_memory_rox((unsigned long)hdr, hdr->size >> PAGE_SHIFT);
 }
 
 int sk_filter_trim_cap(struct sock *sk, struct sk_buff *skb, unsigned int cap);