
[bpf-next,3/8] bpf: factor out fetching bpf_map from FD and adding it to used_maps list

Message ID: 20240813230300.915127-4-andrii@kernel.org (mailing list archive)
State: New, archived
Delegated to: BPF
Series: BPF follow ups to struct fd refactorings

Checks

Context Check Description
netdev/tree_selection success Clearly marked for bpf-next, async
netdev/apply fail Patch does not apply to bpf-next-0
bpf/vmtest-bpf-next-PR success PR summary
bpf/vmtest-bpf-next-VM_Test-0 success Logs for Lint
bpf/vmtest-bpf-next-VM_Test-1 success Logs for ShellCheck
bpf/vmtest-bpf-next-VM_Test-2 success Logs for Unittests
bpf/vmtest-bpf-next-VM_Test-3 success Logs for Validate matrix.py
bpf/vmtest-bpf-next-VM_Test-4 success Logs for aarch64-gcc / build / build for aarch64 with gcc
bpf/vmtest-bpf-next-VM_Test-5 success Logs for aarch64-gcc / build-release
bpf/vmtest-bpf-next-VM_Test-6 success Logs for aarch64-gcc / test (test_maps, false, 360) / test_maps on aarch64 with gcc
bpf/vmtest-bpf-next-VM_Test-7 success Logs for aarch64-gcc / test (test_progs, false, 360) / test_progs on aarch64 with gcc
bpf/vmtest-bpf-next-VM_Test-8 success Logs for aarch64-gcc / test (test_progs_no_alu32, false, 360) / test_progs_no_alu32 on aarch64 with gcc
bpf/vmtest-bpf-next-VM_Test-9 success Logs for aarch64-gcc / test (test_verifier, false, 360) / test_verifier on aarch64 with gcc
bpf/vmtest-bpf-next-VM_Test-10 success Logs for aarch64-gcc / veristat
bpf/vmtest-bpf-next-VM_Test-11 success Logs for s390x-gcc / build / build for s390x with gcc
bpf/vmtest-bpf-next-VM_Test-12 success Logs for s390x-gcc / build-release
bpf/vmtest-bpf-next-VM_Test-13 success Logs for s390x-gcc / test (test_progs, false, 360) / test_progs on s390x with gcc
bpf/vmtest-bpf-next-VM_Test-14 success Logs for s390x-gcc / test (test_progs_no_alu32, false, 360) / test_progs_no_alu32 on s390x with gcc
bpf/vmtest-bpf-next-VM_Test-15 success Logs for s390x-gcc / test (test_verifier, false, 360) / test_verifier on s390x with gcc
bpf/vmtest-bpf-next-VM_Test-16 success Logs for s390x-gcc / veristat
bpf/vmtest-bpf-next-VM_Test-17 success Logs for set-matrix
bpf/vmtest-bpf-next-VM_Test-18 success Logs for x86_64-gcc / build / build for x86_64 with gcc
bpf/vmtest-bpf-next-VM_Test-19 success Logs for x86_64-gcc / build-release
bpf/vmtest-bpf-next-VM_Test-20 success Logs for x86_64-gcc / test (test_maps, false, 360) / test_maps on x86_64 with gcc
bpf/vmtest-bpf-next-VM_Test-21 success Logs for x86_64-gcc / test (test_progs, false, 360) / test_progs on x86_64 with gcc
bpf/vmtest-bpf-next-VM_Test-22 success Logs for x86_64-gcc / test (test_progs_no_alu32, false, 360) / test_progs_no_alu32 on x86_64 with gcc
bpf/vmtest-bpf-next-VM_Test-23 success Logs for x86_64-gcc / test (test_progs_no_alu32_parallel, true, 30) / test_progs_no_alu32_parallel on x86_64 with gcc
bpf/vmtest-bpf-next-VM_Test-24 success Logs for x86_64-gcc / test (test_progs_parallel, true, 30) / test_progs_parallel on x86_64 with gcc
bpf/vmtest-bpf-next-VM_Test-25 success Logs for x86_64-gcc / test (test_verifier, false, 360) / test_verifier on x86_64 with gcc
bpf/vmtest-bpf-next-VM_Test-26 success Logs for x86_64-gcc / veristat / veristat on x86_64 with gcc
bpf/vmtest-bpf-next-VM_Test-27 success Logs for x86_64-llvm-17 / build / build for x86_64 with llvm-17
bpf/vmtest-bpf-next-VM_Test-28 success Logs for x86_64-llvm-17 / build-release / build for x86_64 with llvm-17-O2
bpf/vmtest-bpf-next-VM_Test-29 success Logs for x86_64-llvm-17 / test (test_maps, false, 360) / test_maps on x86_64 with llvm-17
bpf/vmtest-bpf-next-VM_Test-30 success Logs for x86_64-llvm-17 / test (test_progs, false, 360) / test_progs on x86_64 with llvm-17
bpf/vmtest-bpf-next-VM_Test-31 success Logs for x86_64-llvm-17 / test (test_progs_no_alu32, false, 360) / test_progs_no_alu32 on x86_64 with llvm-17
bpf/vmtest-bpf-next-VM_Test-32 success Logs for x86_64-llvm-17 / test (test_verifier, false, 360) / test_verifier on x86_64 with llvm-17
bpf/vmtest-bpf-next-VM_Test-33 success Logs for x86_64-llvm-17 / veristat
bpf/vmtest-bpf-next-VM_Test-34 success Logs for x86_64-llvm-18 / build / build for x86_64 with llvm-18
bpf/vmtest-bpf-next-VM_Test-35 success Logs for x86_64-llvm-18 / build-release / build for x86_64 with llvm-18-O2
bpf/vmtest-bpf-next-VM_Test-36 success Logs for x86_64-llvm-18 / test (test_maps, false, 360) / test_maps on x86_64 with llvm-18
bpf/vmtest-bpf-next-VM_Test-37 success Logs for x86_64-llvm-18 / test (test_progs, false, 360) / test_progs on x86_64 with llvm-18
bpf/vmtest-bpf-next-VM_Test-38 success Logs for x86_64-llvm-18 / test (test_progs_cpuv4, false, 360) / test_progs_cpuv4 on x86_64 with llvm-18
bpf/vmtest-bpf-next-VM_Test-39 success Logs for x86_64-llvm-18 / test (test_progs_no_alu32, false, 360) / test_progs_no_alu32 on x86_64 with llvm-18
bpf/vmtest-bpf-next-VM_Test-40 success Logs for x86_64-llvm-18 / test (test_verifier, false, 360) / test_verifier on x86_64 with llvm-18
bpf/vmtest-bpf-next-VM_Test-41 success Logs for x86_64-llvm-18 / veristat

Commit Message

Andrii Nakryiko Aug. 13, 2024, 11:02 p.m. UTC
Factor out the logic to extract a bpf_map instance from an FD embedded in
bpf_insns and add it to the list of used_maps (unless it's already
there, in which case the map's existing index is just reused). This
simplifies the logic in resolve_pseudo_ldimm64(), especially around
`struct fd` handling, as all of that is now neatly contained in the
helper and doesn't leak into a dozen error handling paths.

Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
---
 kernel/bpf/verifier.c | 115 ++++++++++++++++++++++++------------------
 1 file changed, 66 insertions(+), 49 deletions(-)
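
At a glance, the caller side in resolve_pseudo_ldimm64() reduces to roughly the
following pattern (condensed from the diff below; the verbose() messages and the
per-map-type checks are elided):

	int map_idx;
	bool reused;

	map_idx = add_used_map_from_fd(env, fd, &reused);
	if (map_idx < 0)
		return map_idx;
	map = env->used_maps[map_idx];

	aux = &env->insn_aux_data[i];
	aux->map_index = map_idx;

	err = check_map_prog_compatibility(env, map, env->prog);
	if (err)
		return err;

	/* ... ld_imm64 address rewriting ... */

	/* the extra per-map checks run only the first time a map is added */
	if (reused)
		goto next_insn;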

Comments

Jiri Olsa Aug. 14, 2024, 9:39 p.m. UTC | #1
On Tue, Aug 13, 2024 at 04:02:55PM -0700, Andrii Nakryiko wrote:
> Factor out the logic to extract bpf_map instances from FD embedded in
> bpf_insns, adding it to the list of used_maps (unless it's already
> there, in which case we just reuse map's index). This simplifies the
> logic in resolve_pseudo_ldimm64(), especially around `struct fd`
> handling, as all that is now neatly contained in the helper and doesn't
> leak into a dozen error handling paths.
> 
> Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
> ---
>  kernel/bpf/verifier.c | 115 ++++++++++++++++++++++++------------------
>  1 file changed, 66 insertions(+), 49 deletions(-)
> 
> diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
> index df3be12096cf..14e4ef687a59 100644
> --- a/kernel/bpf/verifier.c
> +++ b/kernel/bpf/verifier.c
> @@ -18865,6 +18865,58 @@ static bool bpf_map_is_cgroup_storage(struct bpf_map *map)
>  		map->map_type == BPF_MAP_TYPE_PERCPU_CGROUP_STORAGE);
>  }
>  
> +/* Add map behind fd to used maps list, if it's not already there, and return
> + * its index. Also set *reused to true if this map was already in the list of
> + * used maps.
> + * Returns <0 on error, or >= 0 index, on success.
> + */
> +static int add_used_map_from_fd(struct bpf_verifier_env *env, int fd, bool *reused)
> +{
> +	struct fd f = fdget(fd);

using 'CLASS(fd, f)(fd)' would remove few fdput lines below?

jirka

> +	struct bpf_map *map;
> +	int i;
> +
> +	map = __bpf_map_get(f);
> +	if (IS_ERR(map)) {
> +		verbose(env, "fd %d is not pointing to valid bpf_map\n", fd);
> +		return PTR_ERR(map);
> +	}
> +
> +	/* check whether we recorded this map already */
> +	for (i = 0; i < env->used_map_cnt; i++) {
> +		if (env->used_maps[i] == map) {
> +			*reused = true;
> +			fdput(f);
> +			return i;
> +		}
> +	}
> +
> +	if (env->used_map_cnt >= MAX_USED_MAPS) {
> +		verbose(env, "The total number of maps per program has reached the limit of %u\n",
> +			MAX_USED_MAPS);
> +		fdput(f);
> +		return -E2BIG;
> +	}
> +
> +	if (env->prog->sleepable)
> +		atomic64_inc(&map->sleepable_refcnt);
> +
> +	/* hold the map. If the program is rejected by verifier,
> +	 * the map will be released by release_maps() or it
> +	 * will be used by the valid program until it's unloaded
> +	 * and all maps are released in bpf_free_used_maps()
> +	 */
> +	bpf_map_inc(map);
> +
> +	*reused = false;
> +	env->used_maps[env->used_map_cnt++] = map;
> +
> +	fdput(f);
> +
> +	return env->used_map_cnt - 1;
> +
> +}
> +
>  /* find and rewrite pseudo imm in ld_imm64 instructions:
>   *
>   * 1. if it accesses map FD, replace it with actual map pointer.
> @@ -18876,7 +18928,7 @@ static int resolve_pseudo_ldimm64(struct bpf_verifier_env *env)
>  {
>  	struct bpf_insn *insn = env->prog->insnsi;
>  	int insn_cnt = env->prog->len;
> -	int i, j, err;
> +	int i, err;
>  
>  	err = bpf_prog_calc_tag(env->prog);
>  	if (err)
> @@ -18893,9 +18945,10 @@ static int resolve_pseudo_ldimm64(struct bpf_verifier_env *env)
>  		if (insn[0].code == (BPF_LD | BPF_IMM | BPF_DW)) {
>  			struct bpf_insn_aux_data *aux;
>  			struct bpf_map *map;
> -			struct fd f;
> +			int map_idx;
>  			u64 addr;
>  			u32 fd;
> +			bool reused;
>  
>  			if (i == insn_cnt - 1 || insn[1].code != 0 ||
>  			    insn[1].dst_reg != 0 || insn[1].src_reg != 0 ||
> @@ -18956,20 +19009,18 @@ static int resolve_pseudo_ldimm64(struct bpf_verifier_env *env)
>  				break;
>  			}
>  
> -			f = fdget(fd);
> -			map = __bpf_map_get(f);
> -			if (IS_ERR(map)) {
> -				verbose(env, "fd %d is not pointing to valid bpf_map\n", fd);
> -				return PTR_ERR(map);
> -			}
> +			map_idx = add_used_map_from_fd(env, fd, &reused);
> +			if (map_idx < 0)
> +				return map_idx;
> +			map = env->used_maps[map_idx];
> +
> +			aux = &env->insn_aux_data[i];
> +			aux->map_index = map_idx;
>  
>  			err = check_map_prog_compatibility(env, map, env->prog);
> -			if (err) {
> -				fdput(f);
> +			if (err)
>  				return err;
> -			}
>  
> -			aux = &env->insn_aux_data[i];
>  			if (insn[0].src_reg == BPF_PSEUDO_MAP_FD ||
>  			    insn[0].src_reg == BPF_PSEUDO_MAP_IDX) {
>  				addr = (unsigned long)map;
> @@ -18978,13 +19029,11 @@ static int resolve_pseudo_ldimm64(struct bpf_verifier_env *env)
>  
>  				if (off >= BPF_MAX_VAR_OFF) {
>  					verbose(env, "direct value offset of %u is not allowed\n", off);
> -					fdput(f);
>  					return -EINVAL;
>  				}
>  
>  				if (!map->ops->map_direct_value_addr) {
>  					verbose(env, "no direct value access support for this map type\n");
> -					fdput(f);
>  					return -EINVAL;
>  				}
>  
> @@ -18992,7 +19041,6 @@ static int resolve_pseudo_ldimm64(struct bpf_verifier_env *env)
>  				if (err) {
>  					verbose(env, "invalid access to map value pointer, value_size=%u off=%u\n",
>  						map->value_size, off);
> -					fdput(f);
>  					return err;
>  				}
>  
> @@ -19003,70 +19051,39 @@ static int resolve_pseudo_ldimm64(struct bpf_verifier_env *env)
>  			insn[0].imm = (u32)addr;
>  			insn[1].imm = addr >> 32;
>  
> -			/* check whether we recorded this map already */
> -			for (j = 0; j < env->used_map_cnt; j++) {
> -				if (env->used_maps[j] == map) {
> -					aux->map_index = j;
> -					fdput(f);
> -					goto next_insn;
> -				}
> -			}
> -
> -			if (env->used_map_cnt >= MAX_USED_MAPS) {
> -				verbose(env, "The total number of maps per program has reached the limit of %u\n",
> -					MAX_USED_MAPS);
> -				fdput(f);
> -				return -E2BIG;
> -			}
> -
> -			if (env->prog->sleepable)
> -				atomic64_inc(&map->sleepable_refcnt);
> -			/* hold the map. If the program is rejected by verifier,
> -			 * the map will be released by release_maps() or it
> -			 * will be used by the valid program until it's unloaded
> -			 * and all maps are released in bpf_free_used_maps()
> -			 */
> -			bpf_map_inc(map);
> -
> -			aux->map_index = env->used_map_cnt;
> -			env->used_maps[env->used_map_cnt++] = map;
> +			/* proceed with extra checks only if it's a newly added used map */
> +			if (reused)
> +				goto next_insn;
>  
>  			if (bpf_map_is_cgroup_storage(map) &&
>  			    bpf_cgroup_storage_assign(env->prog->aux, map)) {
>  				verbose(env, "only one cgroup storage of each type is allowed\n");
> -				fdput(f);
>  				return -EBUSY;
>  			}
>  			if (map->map_type == BPF_MAP_TYPE_ARENA) {
>  				if (env->prog->aux->arena) {
>  					verbose(env, "Only one arena per program\n");
> -					fdput(f);
>  					return -EBUSY;
>  				}
>  				if (!env->allow_ptr_leaks || !env->bpf_capable) {
>  					verbose(env, "CAP_BPF and CAP_PERFMON are required to use arena\n");
> -					fdput(f);
>  					return -EPERM;
>  				}
>  				if (!env->prog->jit_requested) {
>  					verbose(env, "JIT is required to use arena\n");
> -					fdput(f);
>  					return -EOPNOTSUPP;
>  				}
>  				if (!bpf_jit_supports_arena()) {
>  					verbose(env, "JIT doesn't support arena\n");
> -					fdput(f);
>  					return -EOPNOTSUPP;
>  				}
>  				env->prog->aux->arena = (void *)map;
>  				if (!bpf_arena_get_user_vm_start(env->prog->aux->arena)) {
>  					verbose(env, "arena's user address must be set via map_extra or mmap()\n");
> -					fdput(f);
>  					return -EINVAL;
>  				}
>  			}
>  
> -			fdput(f);
>  next_insn:
>  			insn++;
>  			i++;
> -- 
> 2.43.5
> 
>
Andrii Nakryiko Aug. 14, 2024, 11:17 p.m. UTC | #2
On Wed, Aug 14, 2024 at 2:39 PM Jiri Olsa <olsajiri@gmail.com> wrote:
>
> On Tue, Aug 13, 2024 at 04:02:55PM -0700, Andrii Nakryiko wrote:
> > Factor out the logic to extract bpf_map instances from FD embedded in
> > bpf_insns, adding it to the list of used_maps (unless it's already
> > there, in which case we just reuse map's index). This simplifies the
> > logic in resolve_pseudo_ldimm64(), especially around `struct fd`
> > handling, as all that is now neatly contained in the helper and doesn't
> > leak into a dozen error handling paths.
> >
> > Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
> > ---
> >  kernel/bpf/verifier.c | 115 ++++++++++++++++++++++++------------------
> >  1 file changed, 66 insertions(+), 49 deletions(-)
> >
> > diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
> > index df3be12096cf..14e4ef687a59 100644
> > --- a/kernel/bpf/verifier.c
> > +++ b/kernel/bpf/verifier.c
> > @@ -18865,6 +18865,58 @@ static bool bpf_map_is_cgroup_storage(struct bpf_map *map)
> >               map->map_type == BPF_MAP_TYPE_PERCPU_CGROUP_STORAGE);
> >  }
> >
> > +/* Add map behind fd to used maps list, if it's not already there, and return
> > + * its index. Also set *reused to true if this map was already in the list of
> > + * used maps.
> > + * Returns <0 on error, or >= 0 index, on success.
> > + */
> > +static int add_used_map_from_fd(struct bpf_verifier_env *env, int fd, bool *reused)
> > +{
> > +     struct fd f = fdget(fd);
>
> using 'CLASS(fd, f)(fd)' would remove few fdput lines below?

That's done in the next patch once we change __bpf_map_get() behavior
to allow usage of CLASS(fd, ...)

>
> jirka
>
> > +     struct bpf_map *map;
> > +     int i;
> > +
> > +     map = __bpf_map_get(f);
> > +     if (IS_ERR(map)) {
> > +             verbose(env, "fd %d is not pointing to valid bpf_map\n", fd);
> > +             return PTR_ERR(map);
> > +     }
> > +

[...]
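
For context, CLASS(fd, f)(fd) is the scope-based guard from <linux/file.h>
(built on the cleanup helpers in <linux/cleanup.h>) that runs fdput(f)
automatically when f goes out of scope. A minimal sketch of the helper using
it, assuming __bpf_map_get() has first been changed so that it no longer drops
the fd reference itself on failure (the follow-up change Andrii refers to
above), could look like:

static int add_used_map_from_fd(struct bpf_verifier_env *env, int fd, bool *reused)
{
	CLASS(fd, f)(fd);	/* fdput(f) runs automatically on every return path */
	struct bpf_map *map;
	int i;

	/* assumption: __bpf_map_get() no longer calls fdput() on its error path */
	map = __bpf_map_get(f);
	if (IS_ERR(map)) {
		verbose(env, "fd %d is not pointing to valid bpf_map\n", fd);
		return PTR_ERR(map);
	}

	/* check whether we recorded this map already */
	for (i = 0; i < env->used_map_cnt; i++) {
		if (env->used_maps[i] == map) {
			*reused = true;
			return i;
		}
	}

	if (env->used_map_cnt >= MAX_USED_MAPS) {
		verbose(env, "The total number of maps per program has reached the limit of %u\n",
			MAX_USED_MAPS);
		return -E2BIG;
	}

	if (env->prog->sleepable)
		atomic64_inc(&map->sleepable_refcnt);

	/* hold the map; released by release_maps() or bpf_free_used_maps() */
	bpf_map_inc(map);

	*reused = false;
	env->used_maps[env->used_map_cnt++] = map;

	return env->used_map_cnt - 1;
}

The change is purely mechanical: every explicit fdput(f) before a return
disappears, which is exactly what the early-return structure of this helper
benefits from.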

Patch

diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index df3be12096cf..14e4ef687a59 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -18865,6 +18865,58 @@  static bool bpf_map_is_cgroup_storage(struct bpf_map *map)
 		map->map_type == BPF_MAP_TYPE_PERCPU_CGROUP_STORAGE);
 }
 
+/* Add map behind fd to used maps list, if it's not already there, and return
+ * its index. Also set *reused to true if this map was already in the list of
+ * used maps.
+ * Returns <0 on error, or >= 0 index, on success.
+ */
+static int add_used_map_from_fd(struct bpf_verifier_env *env, int fd, bool *reused)
+{
+	struct fd f = fdget(fd);
+	struct bpf_map *map;
+	int i;
+
+	map = __bpf_map_get(f);
+	if (IS_ERR(map)) {
+		verbose(env, "fd %d is not pointing to valid bpf_map\n", fd);
+		return PTR_ERR(map);
+	}
+
+	/* check whether we recorded this map already */
+	for (i = 0; i < env->used_map_cnt; i++) {
+		if (env->used_maps[i] == map) {
+			*reused = true;
+			fdput(f);
+			return i;
+		}
+	}
+
+	if (env->used_map_cnt >= MAX_USED_MAPS) {
+		verbose(env, "The total number of maps per program has reached the limit of %u\n",
+			MAX_USED_MAPS);
+		fdput(f);
+		return -E2BIG;
+	}
+
+	if (env->prog->sleepable)
+		atomic64_inc(&map->sleepable_refcnt);
+
+	/* hold the map. If the program is rejected by verifier,
+	 * the map will be released by release_maps() or it
+	 * will be used by the valid program until it's unloaded
+	 * and all maps are released in bpf_free_used_maps()
+	 */
+	bpf_map_inc(map);
+
+	*reused = false;
+	env->used_maps[env->used_map_cnt++] = map;
+
+	fdput(f);
+
+	return env->used_map_cnt - 1;
+
+}
+
 /* find and rewrite pseudo imm in ld_imm64 instructions:
  *
  * 1. if it accesses map FD, replace it with actual map pointer.
@@ -18876,7 +18928,7 @@  static int resolve_pseudo_ldimm64(struct bpf_verifier_env *env)
 {
 	struct bpf_insn *insn = env->prog->insnsi;
 	int insn_cnt = env->prog->len;
-	int i, j, err;
+	int i, err;
 
 	err = bpf_prog_calc_tag(env->prog);
 	if (err)
@@ -18893,9 +18945,10 @@  static int resolve_pseudo_ldimm64(struct bpf_verifier_env *env)
 		if (insn[0].code == (BPF_LD | BPF_IMM | BPF_DW)) {
 			struct bpf_insn_aux_data *aux;
 			struct bpf_map *map;
-			struct fd f;
+			int map_idx;
 			u64 addr;
 			u32 fd;
+			bool reused;
 
 			if (i == insn_cnt - 1 || insn[1].code != 0 ||
 			    insn[1].dst_reg != 0 || insn[1].src_reg != 0 ||
@@ -18956,20 +19009,18 @@  static int resolve_pseudo_ldimm64(struct bpf_verifier_env *env)
 				break;
 			}
 
-			f = fdget(fd);
-			map = __bpf_map_get(f);
-			if (IS_ERR(map)) {
-				verbose(env, "fd %d is not pointing to valid bpf_map\n", fd);
-				return PTR_ERR(map);
-			}
+			map_idx = add_used_map_from_fd(env, fd, &reused);
+			if (map_idx < 0)
+				return map_idx;
+			map = env->used_maps[map_idx];
+
+			aux = &env->insn_aux_data[i];
+			aux->map_index = map_idx;
 
 			err = check_map_prog_compatibility(env, map, env->prog);
-			if (err) {
-				fdput(f);
+			if (err)
 				return err;
-			}
 
-			aux = &env->insn_aux_data[i];
 			if (insn[0].src_reg == BPF_PSEUDO_MAP_FD ||
 			    insn[0].src_reg == BPF_PSEUDO_MAP_IDX) {
 				addr = (unsigned long)map;
@@ -18978,13 +19029,11 @@  static int resolve_pseudo_ldimm64(struct bpf_verifier_env *env)
 
 				if (off >= BPF_MAX_VAR_OFF) {
 					verbose(env, "direct value offset of %u is not allowed\n", off);
-					fdput(f);
 					return -EINVAL;
 				}
 
 				if (!map->ops->map_direct_value_addr) {
 					verbose(env, "no direct value access support for this map type\n");
-					fdput(f);
 					return -EINVAL;
 				}
 
@@ -18992,7 +19041,6 @@  static int resolve_pseudo_ldimm64(struct bpf_verifier_env *env)
 				if (err) {
 					verbose(env, "invalid access to map value pointer, value_size=%u off=%u\n",
 						map->value_size, off);
-					fdput(f);
 					return err;
 				}
 
@@ -19003,70 +19051,39 @@  static int resolve_pseudo_ldimm64(struct bpf_verifier_env *env)
 			insn[0].imm = (u32)addr;
 			insn[1].imm = addr >> 32;
 
-			/* check whether we recorded this map already */
-			for (j = 0; j < env->used_map_cnt; j++) {
-				if (env->used_maps[j] == map) {
-					aux->map_index = j;
-					fdput(f);
-					goto next_insn;
-				}
-			}
-
-			if (env->used_map_cnt >= MAX_USED_MAPS) {
-				verbose(env, "The total number of maps per program has reached the limit of %u\n",
-					MAX_USED_MAPS);
-				fdput(f);
-				return -E2BIG;
-			}
-
-			if (env->prog->sleepable)
-				atomic64_inc(&map->sleepable_refcnt);
-			/* hold the map. If the program is rejected by verifier,
-			 * the map will be released by release_maps() or it
-			 * will be used by the valid program until it's unloaded
-			 * and all maps are released in bpf_free_used_maps()
-			 */
-			bpf_map_inc(map);
-
-			aux->map_index = env->used_map_cnt;
-			env->used_maps[env->used_map_cnt++] = map;
+			/* proceed with extra checks only if it's a newly added used map */
+			if (reused)
+				goto next_insn;
 
 			if (bpf_map_is_cgroup_storage(map) &&
 			    bpf_cgroup_storage_assign(env->prog->aux, map)) {
 				verbose(env, "only one cgroup storage of each type is allowed\n");
-				fdput(f);
 				return -EBUSY;
 			}
 			if (map->map_type == BPF_MAP_TYPE_ARENA) {
 				if (env->prog->aux->arena) {
 					verbose(env, "Only one arena per program\n");
-					fdput(f);
 					return -EBUSY;
 				}
 				if (!env->allow_ptr_leaks || !env->bpf_capable) {
 					verbose(env, "CAP_BPF and CAP_PERFMON are required to use arena\n");
-					fdput(f);
 					return -EPERM;
 				}
 				if (!env->prog->jit_requested) {
 					verbose(env, "JIT is required to use arena\n");
-					fdput(f);
 					return -EOPNOTSUPP;
 				}
 				if (!bpf_jit_supports_arena()) {
 					verbose(env, "JIT doesn't support arena\n");
-					fdput(f);
 					return -EOPNOTSUPP;
 				}
 				env->prog->aux->arena = (void *)map;
 				if (!bpf_arena_get_user_vm_start(env->prog->aux->arena)) {
 					verbose(env, "arena's user address must be set via map_extra or mmap()\n");
-					fdput(f);
 					return -EINVAL;
 				}
 			}
 
-			fdput(f);
 next_insn:
 			insn++;
 			i++;