From patchwork Wed Nov 27 21:35:29 2024
X-Patchwork-Submitter: Kumar Kartikeya Dwivedi
X-Patchwork-Id: 13887389
From: Kumar Kartikeya Dwivedi <memxor@gmail.com>
To: bpf@vger.kernel.org
Cc: kkd@meta.com, Alexei Starovoitov, Andrii Nakryiko, Daniel Borkmann,
 Martin KaFai Lau, Eduard Zingerman, kernel-team@fb.com
Subject: [PATCH bpf-next v4 1/7] bpf: Consolidate locks and reference state
 in verifier state
Date: Wed, 27 Nov 2024 13:35:29 -0800
Message-ID: <20241127213535.3657472-2-memxor@gmail.com>
In-Reply-To: <20241127213535.3657472-1-memxor@gmail.com>
References: <20241127213535.3657472-1-memxor@gmail.com>

Currently, state for RCU read locks and preemption is in
bpf_verifier_state, while locks and pointer reference state remains in
bpf_func_state. There is no particular reason to keep the latter in
bpf_func_state. Additionally, it is copied into a new frame's state and
copied back to the caller frame's state every time the verifier
processes a pseudo call instruction. This is a bit wasteful, given this
state is global for a given verification state / path.

Move all resource and reference related state into the
bpf_verifier_state structure in this patch, in preparation for
introducing new reference state types in the future.
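In outline, the move looks like this (a condensed sketch with stub types,
illustrative struct names, and unrelated fields elided; the diff below is
authoritative):

	typedef unsigned int u32;
	struct bpf_reference_state;

	/* Before: per-frame copy, duplicated on every pseudo call. */
	struct func_state_refs_before {
		int acquired_refs;
		int active_locks;
		struct bpf_reference_state *refs;
	};

	/* After: one copy per verification state / path, shared by all
	 * frames; active_preempt_lock is also renamed to
	 * active_preempt_locks along the way.
	 */
	struct verifier_state_refs_after {
		struct bpf_reference_state *refs;
		u32 acquired_refs;
		u32 active_locks;
		u32 active_preempt_locks;
	};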
Signed-off-by: Kumar Kartikeya Dwivedi <memxor@gmail.com>
---
 include/linux/bpf_verifier.h |  11 ++--
 kernel/bpf/log.c             |  11 ++--
 kernel/bpf/verifier.c        | 112 ++++++++++++++++------------------
 3 files changed, 64 insertions(+), 70 deletions(-)

diff --git a/include/linux/bpf_verifier.h b/include/linux/bpf_verifier.h
index f4290c179bee..af64b5415df8 100644
--- a/include/linux/bpf_verifier.h
+++ b/include/linux/bpf_verifier.h
@@ -315,9 +315,6 @@ struct bpf_func_state {
 	u32 callback_depth;
 
 	/* The following fields should be last. See copy_func_state() */
-	int acquired_refs;
-	int active_locks;
-	struct bpf_reference_state *refs;
 	/* The state of the stack. Each element of the array describes BPF_REG_SIZE
 	 * (i.e. 8) bytes worth of stack memory.
 	 * stack[0] represents bytes [*(r10-8)..*(r10-1)]
@@ -419,9 +416,13 @@ struct bpf_verifier_state {
 	u32 insn_idx;
 	u32 curframe;
 
-	bool speculative;
+	struct bpf_reference_state *refs;
+	u32 acquired_refs;
+	u32 active_locks;
+	u32 active_preempt_locks;
 	bool active_rcu_lock;
-	u32 active_preempt_lock;
+
+	bool speculative;
 
 	/* If this state was ever pointed-to by other state's loop_entry field
 	 * this flag would be set to true. Used to avoid freeing such states
 	 * while they are still in use.
diff --git a/kernel/bpf/log.c b/kernel/bpf/log.c
index 4a858fdb6476..8b52e5b7504c 100644
--- a/kernel/bpf/log.c
+++ b/kernel/bpf/log.c
@@ -756,6 +756,7 @@ static void print_reg_state(struct bpf_verifier_env *env,
 void print_verifier_state(struct bpf_verifier_env *env, const struct bpf_func_state *state,
 			  bool print_all)
 {
+	struct bpf_verifier_state *vstate = env->cur_state;
 	const struct bpf_reg_state *reg;
 	int i;
 
@@ -843,11 +844,11 @@ void print_verifier_state(struct bpf_verifier_env *env, const struct bpf_func_state *state,
 			break;
 		}
 	}
-	if (state->acquired_refs && state->refs[0].id) {
-		verbose(env, " refs=%d", state->refs[0].id);
-		for (i = 1; i < state->acquired_refs; i++)
-			if (state->refs[i].id)
-				verbose(env, ",%d", state->refs[i].id);
+	if (vstate->acquired_refs && vstate->refs[0].id) {
+		verbose(env, " refs=%d", vstate->refs[0].id);
+		for (i = 1; i < vstate->acquired_refs; i++)
+			if (vstate->refs[i].id)
+				verbose(env, ",%d", vstate->refs[i].id);
 	}
 	if (state->in_callback_fn)
 		verbose(env, " cb");
diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index 1c4ebb326785..f8313e95eb8e 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -1279,15 +1279,17 @@ static void *realloc_array(void *arr, size_t old_n, size_t new_n, size_t size)
 	return arr ? arr : ZERO_SIZE_PTR;
 }
 
-static int copy_reference_state(struct bpf_func_state *dst, const struct bpf_func_state *src)
+static int copy_reference_state(struct bpf_verifier_state *dst, const struct bpf_verifier_state *src)
 {
 	dst->refs = copy_array(dst->refs, src->refs, src->acquired_refs,
 			       sizeof(struct bpf_reference_state), GFP_KERNEL);
 	if (!dst->refs)
 		return -ENOMEM;
 
-	dst->active_locks = src->active_locks;
 	dst->acquired_refs = src->acquired_refs;
+	dst->active_locks = src->active_locks;
+	dst->active_preempt_locks = src->active_preempt_locks;
+	dst->active_rcu_lock = src->active_rcu_lock;
 	return 0;
 }
 
@@ -1304,7 +1306,7 @@ static int copy_stack_state(struct bpf_func_state *dst, const struct bpf_func_state *src)
 	return 0;
 }
 
-static int resize_reference_state(struct bpf_func_state *state, size_t n)
+static int resize_reference_state(struct bpf_verifier_state *state, size_t n)
 {
 	state->refs = realloc_array(state->refs, state->acquired_refs, n,
 				    sizeof(struct bpf_reference_state));
@@ -1349,7 +1351,7 @@ static int grow_stack_state(struct bpf_verifier_env *env, struct bpf_func_state *state)
  */
 static int acquire_reference_state(struct bpf_verifier_env *env, int insn_idx)
 {
-	struct bpf_func_state *state = cur_func(env);
+	struct bpf_verifier_state *state = env->cur_state;
 	int new_ofs = state->acquired_refs;
 	int id, err;
 
@@ -1367,7 +1369,7 @@ static int acquire_reference_state(struct bpf_verifier_env *env, int insn_idx)
 static int acquire_lock_state(struct bpf_verifier_env *env, int insn_idx, enum ref_state_type type,
 			      int id, void *ptr)
 {
-	struct bpf_func_state *state = cur_func(env);
+	struct bpf_verifier_state *state = env->cur_state;
 	int new_ofs = state->acquired_refs;
 	int err;
 
@@ -1384,7 +1386,7 @@ static int acquire_lock_state(struct bpf_verifier_env *env, int insn_idx, enum ref_state_type type,
 }
 
 /* release function corresponding to acquire_reference_state(). Idempotent. */
-static int release_reference_state(struct bpf_func_state *state, int ptr_id)
+static int release_reference_state(struct bpf_verifier_state *state, int ptr_id)
 {
 	int i, last_idx;
 
@@ -1404,7 +1406,7 @@ static int release_reference_state(struct bpf_func_state *state, int ptr_id)
 	return -EINVAL;
 }
 
-static int release_lock_state(struct bpf_func_state *state, int type, int id, void *ptr)
+static int release_lock_state(struct bpf_verifier_state *state, int type, int id, void *ptr)
 {
 	int i, last_idx;
 
@@ -1425,10 +1427,9 @@ static int release_lock_state(struct bpf_func_state *state, int type, int id, void *ptr)
 	return -EINVAL;
 }
 
-static struct bpf_reference_state *find_lock_state(struct bpf_verifier_env *env, enum ref_state_type type,
+static struct bpf_reference_state *find_lock_state(struct bpf_verifier_state *state, enum ref_state_type type,
 						   int id, void *ptr)
 {
-	struct bpf_func_state *state = cur_func(env);
 	int i;
 
 	for (i = 0; i < state->acquired_refs; i++) {
@@ -1447,7 +1448,6 @@ static void free_func_state(struct bpf_func_state *state)
 {
 	if (!state)
 		return;
-	kfree(state->refs);
 	kfree(state->stack);
 	kfree(state);
 }
@@ -1461,6 +1461,7 @@ static void free_verifier_state(struct bpf_verifier_state *state,
 		free_func_state(state->frame[i]);
 		state->frame[i] = NULL;
 	}
+	kfree(state->refs);
 	if (free_self)
 		kfree(state);
 }
@@ -1471,12 +1472,7 @@ static void free_verifier_state(struct bpf_verifier_state *state,
 static int copy_func_state(struct bpf_func_state *dst,
 			   const struct bpf_func_state *src)
 {
-	int err;
-
-	memcpy(dst, src, offsetof(struct bpf_func_state, acquired_refs));
-	err = copy_reference_state(dst, src);
-	if (err)
-		return err;
+	memcpy(dst, src, offsetof(struct bpf_func_state, stack));
 	return copy_stack_state(dst, src);
 }
 
@@ -1493,9 +1489,10 @@ static int copy_verifier_state(struct bpf_verifier_state *dst_state,
 		free_func_state(dst_state->frame[i]);
 		dst_state->frame[i] = NULL;
 	}
+	err = copy_reference_state(dst_state, src);
+	if (err)
+		return err;
 	dst_state->speculative = src->speculative;
-	dst_state->active_rcu_lock = src->active_rcu_lock;
-	dst_state->active_preempt_lock = src->active_preempt_lock;
 	dst_state->in_sleepable = src->in_sleepable;
 	dst_state->curframe = src->curframe;
 	dst_state->branches = src->branches;
@@ -5496,7 +5493,7 @@ static bool in_sleepable(struct bpf_verifier_env *env)
 static bool in_rcu_cs(struct bpf_verifier_env *env)
 {
 	return env->cur_state->active_rcu_lock ||
-	       cur_func(env)->active_locks ||
+	       env->cur_state->active_locks ||
 	       !in_sleepable(env);
 }
 
@@ -7850,15 +7847,15 @@ static int check_kfunc_mem_size_reg(struct bpf_verifier_env *env, struct bpf_reg_state *reg,
  * Since only one bpf_spin_lock is allowed the checks are simpler than
  * reg_is_refcounted() logic. The verifier needs to remember only
  * one spin_lock instead of array of acquired_refs.
- * cur_func(env)->active_locks remembers which map value element or allocated
+ * env->cur_state->active_locks remembers which map value element or allocated
  * object got locked and clears it after bpf_spin_unlock.
  */
 static int process_spin_lock(struct bpf_verifier_env *env, int regno,
 			     bool is_lock)
 {
 	struct bpf_reg_state *regs = cur_regs(env), *reg = &regs[regno];
+	struct bpf_verifier_state *cur = env->cur_state;
 	bool is_const = tnum_is_const(reg->var_off);
-	struct bpf_func_state *cur = cur_func(env);
 	u64 val = reg->var_off.value;
 	struct bpf_map *map = NULL;
 	struct btf *btf = NULL;
@@ -7925,7 +7922,7 @@ static int process_spin_lock(struct bpf_verifier_env *env, int regno,
 			return -EINVAL;
 		}
 
-		if (release_lock_state(cur_func(env), REF_TYPE_LOCK, reg->id, ptr)) {
+		if (release_lock_state(env->cur_state, REF_TYPE_LOCK, reg->id, ptr)) {
 			verbose(env, "bpf_spin_unlock of different lock\n");
 			return -EINVAL;
 		}
@@ -9679,7 +9676,7 @@ static int release_reference(struct bpf_verifier_env *env,
 	struct bpf_reg_state *reg;
 	int err;
 
-	err = release_reference_state(cur_func(env), ref_obj_id);
+	err = release_reference_state(env->cur_state, ref_obj_id);
 	if (err)
 		return err;
 
@@ -9757,9 +9754,7 @@ static int setup_func_entry(struct bpf_verifier_env *env, int subprog, int callsite,
 			callsite,
 			state->curframe + 1 /* frameno within this callchain */,
 			subprog /* subprog number within this prog */);
-	/* Transfer references to the callee */
-	err = copy_reference_state(callee, caller);
-	err = err ?: set_callee_state_cb(env, caller, callee, callsite);
+	err = set_callee_state_cb(env, caller, callee, callsite);
 	if (err)
 		goto err_out;
 
@@ -9992,14 +9987,14 @@ static int check_func_call(struct bpf_verifier_env *env, struct bpf_insn *insn,
 		const char *sub_name = subprog_name(env, subprog);
 
 		/* Only global subprogs cannot be called with a lock held. */
-		if (cur_func(env)->active_locks) {
+		if (env->cur_state->active_locks) {
 			verbose(env, "global function calls are not allowed while holding a lock,\n"
 				     "use static function instead\n");
 			return -EINVAL;
 		}
 
 		/* Only global subprogs cannot be called with preemption disabled. */
-		if (env->cur_state->active_preempt_lock) {
+		if (env->cur_state->active_preempt_locks) {
 			verbose(env, "global function calls are not allowed with preemption disabled,\n"
 				     "use static function instead\n");
 			return -EINVAL;
@@ -10333,11 +10328,6 @@ static int prepare_func_exit(struct bpf_verifier_env *env, int *insn_idx)
 		caller->regs[BPF_REG_0] = *r0;
 	}
 
-	/* Transfer references to the caller */
-	err = copy_reference_state(caller, callee);
-	if (err)
-		return err;
-
 	/* for callbacks like bpf_loop or bpf_for_each_map_elem go back to callsite,
 	 * there function call logic would reschedule callback visit. If iteration
 	 * converges is_state_visited() would prune that visit eventually.
@@ -10502,11 +10492,11 @@ record_func_key(struct bpf_verifier_env *env, struct bpf_call_arg_meta *meta,
 
 static int check_reference_leak(struct bpf_verifier_env *env, bool exception_exit)
 {
-	struct bpf_func_state *state = cur_func(env);
+	struct bpf_verifier_state *state = env->cur_state;
 	bool refs_lingering = false;
 	int i;
 
-	if (!exception_exit && state->frameno)
+	if (!exception_exit && cur_func(env)->frameno)
 		return 0;
 
 	for (i = 0; i < state->acquired_refs; i++) {
@@ -10523,7 +10513,7 @@ static int check_resource_leak(struct bpf_verifier_env *env, bool exception_exit,
 {
 	int err;
 
-	if (check_lock && cur_func(env)->active_locks) {
+	if (check_lock && env->cur_state->active_locks) {
 		verbose(env, "%s cannot be used inside bpf_spin_lock-ed region\n", prefix);
 		return -EINVAL;
 	}
@@ -10539,7 +10529,7 @@ static int check_resource_leak(struct bpf_verifier_env *env, bool exception_exit,
 		return -EINVAL;
 	}
 
-	if (check_lock && env->cur_state->active_preempt_lock) {
+	if (check_lock && env->cur_state->active_preempt_locks) {
 		verbose(env, "%s cannot be used inside bpf_preempt_disable-ed region\n", prefix);
 		return -EINVAL;
 	}
@@ -10727,7 +10717,7 @@ static int check_helper_call(struct bpf_verifier_env *env, struct bpf_insn *insn,
 			env->insn_aux_data[insn_idx].storage_get_func_atomic = true;
 	}
 
-	if (env->cur_state->active_preempt_lock) {
+	if (env->cur_state->active_preempt_locks) {
 		if (fn->might_sleep) {
 			verbose(env, "sleepable helper %s#%d in non-preemptible region\n",
 				func_id_name(func_id), func_id);
@@ -10784,7 +10774,7 @@ static int check_helper_call(struct bpf_verifier_env *env, struct bpf_insn *insn,
 		struct bpf_func_state *state;
 		struct bpf_reg_state *reg;
 
-		err = release_reference_state(cur_func(env), ref_obj_id);
+		err = release_reference_state(env->cur_state, ref_obj_id);
 		if (!err) {
 			bpf_for_each_reg_in_vstate(env->cur_state, state, reg, ({
 				if (reg->ref_obj_id == ref_obj_id) {
@@ -11746,7 +11736,7 @@ static int ref_set_non_owning(struct bpf_verifier_env *env, struct bpf_reg_state *reg)
 {
 	struct btf_record *rec = reg_btf_record(reg);
 
-	if (!cur_func(env)->active_locks) {
+	if (!env->cur_state->active_locks) {
 		verbose(env, "verifier internal error: ref_set_non_owning w/o active lock\n");
 		return -EFAULT;
 	}
@@ -11765,12 +11755,11 @@ static int ref_set_non_owning(struct bpf_verifier_env *env, struct bpf_reg_state *reg)
 
 static int ref_convert_owning_non_owning(struct bpf_verifier_env *env, u32 ref_obj_id)
 {
-	struct bpf_func_state *state, *unused;
+	struct bpf_verifier_state *state = env->cur_state;
+	struct bpf_func_state *unused;
 	struct bpf_reg_state *reg;
 	int i;
 
-	state = cur_func(env);
-
 	if (!ref_obj_id) {
 		verbose(env, "verifier internal error: ref_obj_id is zero for "
 			     "owning -> non-owning conversion\n");
@@ -11860,9 +11849,9 @@ static int check_reg_allocation_locked(struct bpf_verifier_env *env, struct bpf_reg_state *reg)
 	}
 
 	id = reg->id;
-	if (!cur_func(env)->active_locks)
+	if (!env->cur_state->active_locks)
 		return -EINVAL;
-	s = find_lock_state(env, REF_TYPE_LOCK, id, ptr);
+	s = find_lock_state(env->cur_state, REF_TYPE_LOCK, id, ptr);
 	if (!s) {
 		verbose(env, "held lock and object are not in the same allocation\n");
 		return -EINVAL;
 	}
@@ -12789,17 +12778,17 @@ static int check_kfunc_call(struct bpf_verifier_env *env, struct bpf_insn *insn,
 			return -EINVAL;
 		}
 
-		if (env->cur_state->active_preempt_lock) {
+		if (env->cur_state->active_preempt_locks) {
 			if (preempt_disable) {
-				env->cur_state->active_preempt_lock++;
+				env->cur_state->active_preempt_locks++;
 			} else if (preempt_enable) {
-				env->cur_state->active_preempt_lock--;
+				env->cur_state->active_preempt_locks--;
 			} else if (sleepable) {
 				verbose(env, "kernel func %s is sleepable within non-preemptible region\n",
 					func_name);
 				return -EACCES;
 			}
 		} else if (preempt_disable) {
-			env->cur_state->active_preempt_lock++;
+			env->cur_state->active_preempt_locks++;
 		} else if (preempt_enable) {
 			verbose(env, "unmatched attempt to enable preemption (kernel function %s)\n", func_name);
 			return -EINVAL;
@@ -15398,7 +15387,7 @@ static void mark_ptr_or_null_regs(struct bpf_verifier_state *vstate, u32 regno,
 		 * No one could have freed the reference state before
		 * doing the NULL check.
		 */
-		WARN_ON_ONCE(release_reference_state(state, id));
+		WARN_ON_ONCE(release_reference_state(vstate, id));
 
 	bpf_for_each_reg_in_vstate(vstate, state, reg, ({
 		mark_ptr_or_null_reg(state, reg, id, is_null);
@@ -17750,7 +17739,7 @@ static bool stacksafe(struct bpf_verifier_env *env, struct bpf_func_state *old,
 	return true;
 }
 
-static bool refsafe(struct bpf_func_state *old, struct bpf_func_state *cur,
+static bool refsafe(struct bpf_verifier_state *old, struct bpf_verifier_state *cur,
 		    struct bpf_idmap *idmap)
 {
 	int i;
@@ -17758,6 +17747,15 @@ static bool refsafe(struct bpf_func_state *old, struct bpf_func_state *cur,
 	if (old->acquired_refs != cur->acquired_refs)
 		return false;
 
+	if (old->active_locks != cur->active_locks)
+		return false;
+
+	if (old->active_preempt_locks != cur->active_preempt_locks)
+		return false;
+
+	if (old->active_rcu_lock != cur->active_rcu_lock)
+		return false;
+
 	for (i = 0; i < old->acquired_refs; i++) {
 		if (!check_ids(old->refs[i].id, cur->refs[i].id, idmap) ||
 		    old->refs[i].type != cur->refs[i].type)
@@ -17820,9 +17818,6 @@ static bool func_states_equal(struct bpf_verifier_env *env, struct bpf_func_state *old,
 	if (!stacksafe(env, old, cur, &env->idmap_scratch, exact))
 		return false;
 
-	if (!refsafe(old, cur, &env->idmap_scratch))
-		return false;
-
 	return true;
 }
 
@@ -17850,13 +17845,10 @@ static bool states_equal(struct bpf_verifier_env *env,
 	if (old->speculative && !cur->speculative)
 		return false;
 
-	if (old->active_rcu_lock != cur->active_rcu_lock)
-		return false;
-
-	if (old->active_preempt_lock != cur->active_preempt_lock)
+	if (old->in_sleepable != cur->in_sleepable)
 		return false;
 
-	if (old->in_sleepable != cur->in_sleepable)
+	if (!refsafe(old, cur, &env->idmap_scratch))
 		return false;
 
 	/* for states to be equal callsites have to be the same
@@ -18751,7 +18743,7 @@ static int do_check(struct bpf_verifier_env *env)
 					return -EINVAL;
 				}
 
-				if (cur_func(env)->active_locks) {
+				if (env->cur_state->active_locks) {
 					if ((insn->src_reg == BPF_REG_0 && insn->imm != BPF_FUNC_spin_unlock) ||
 					    (insn->src_reg == BPF_PSEUDO_KFUNC_CALL &&
 					     (insn->off != 0 || !is_bpf_graph_api_kfunc(insn->imm)))) {
From patchwork Wed Nov 27 21:35:30 2024
X-Patchwork-Submitter: Kumar Kartikeya Dwivedi
X-Patchwork-Id: 13887388
From: Kumar Kartikeya Dwivedi <memxor@gmail.com>
To: bpf@vger.kernel.org
Cc: kkd@meta.com, Alexei Starovoitov, Andrii Nakryiko, Daniel Borkmann,
 Martin KaFai Lau, Eduard Zingerman, kernel-team@fb.com
Subject: [PATCH bpf-next v4 2/7] bpf: Refactor {acquire,release}_reference_state
Date: Wed, 27 Nov 2024 13:35:30 -0800
Message-ID: <20241127213535.3657472-3-memxor@gmail.com>
In-Reply-To: <20241127213535.3657472-1-memxor@gmail.com>
References: <20241127213535.3657472-1-memxor@gmail.com>

In preparation for introducing support for more reference types which
have to add and remove reference state, refactor the
acquire_reference_state and release_reference_state functions to share
common logic.

The acquire_reference_state function now simply handles growing the
acquired_refs array and returning a pointer to the new uninitialized
element, which can be filled in by the caller. The
release_reference_state function simply erases a reference state entry
in the acquired_refs array and shrinks it. The callers are responsible
for finding the suitable element by matching on various fields of the
reference state and requesting deletion through this function; it is not
supposed to be called directly otherwise.

Existing callers of release_reference_state were using it to find and
remove state for a given ref_obj_id without scrubbing the associated
registers in the verifier state. Introduce release_reference_nomark to
provide this functionality and convert callers. We now use this new
release_reference_nomark function within release_reference as well. It
needs to operate on a verifier state instead of taking the verifier env,
as mark_ptr_or_null_regs requires operating on the verifier state of the
two branches of a NULL condition check; therefore, env->cur_state cannot
be used directly.
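The erase-and-shrink deletion described above is a plain swap-with-last
scheme: overwrite the released slot with the final element, clear the
final slot, and decrement the count. A minimal, self-contained userspace
sketch of the same idea (illustrative names, not the kernel code):

	#include <stdio.h>
	#include <string.h>

	struct ref { int type, id; };

	/* remove entry idx in O(1), without preserving order */
	static void release_at(struct ref *refs, int *nr, int idx)
	{
		int last = *nr - 1;

		if (last && idx != last)
			memcpy(&refs[idx], &refs[last], sizeof(refs[idx]));
		memset(&refs[last], 0, sizeof(refs[last]));
		(*nr)--;
	}

	int main(void)
	{
		struct ref refs[4] = { {1, 10}, {1, 11}, {1, 12}, {1, 13} };
		int nr = 4;

		release_at(refs, &nr, 1);	/* id 13 moves into slot 1 */
		for (int i = 0; i < nr; i++)	/* prints ids 10, 13, 12 */
			printf("refs[%d].id = %d\n", i, refs[i].id);
		return 0;
	}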
Signed-off-by: Kumar Kartikeya Dwivedi <memxor@gmail.com>
---
 kernel/bpf/verifier.c | 113 +++++++++++++++++++++++-------------------
 1 file changed, 63 insertions(+), 50 deletions(-)

diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index f8313e95eb8e..474cca3e8f66 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -196,7 +196,8 @@ struct bpf_verifier_stack_elem {
 
 #define BPF_PRIV_STACK_MIN_SIZE		64
 
-static int acquire_reference_state(struct bpf_verifier_env *env, int insn_idx);
+static int acquire_reference(struct bpf_verifier_env *env, int insn_idx);
+static int release_reference_nomark(struct bpf_verifier_state *state, int ref_obj_id);
 static int release_reference(struct bpf_verifier_env *env, int ref_obj_id);
 static void invalidate_non_owning_refs(struct bpf_verifier_env *env);
 static bool in_rbtree_lock_required_cb(struct bpf_verifier_env *env);
@@ -771,7 +772,7 @@ static int mark_stack_slots_dynptr(struct bpf_verifier_env *env, struct bpf_reg_state *reg,
 		if (clone_ref_obj_id)
 			id = clone_ref_obj_id;
 		else
-			id = acquire_reference_state(env, insn_idx);
+			id = acquire_reference(env, insn_idx);
 
 		if (id < 0)
 			return id;
@@ -1033,7 +1034,7 @@ static int mark_stack_slots_iter(struct bpf_verifier_env *env,
 	if (spi < 0)
 		return spi;
 
-	id = acquire_reference_state(env, insn_idx);
+	id = acquire_reference(env, insn_idx);
 	if (id < 0)
 		return id;
 
@@ -1349,77 +1350,69 @@ static int grow_stack_state(struct bpf_verifier_env *env, struct bpf_func_state *state)
  * On success, returns a valid pointer id to associate with the register
  * On failure, returns a negative errno.
  */
-static int acquire_reference_state(struct bpf_verifier_env *env, int insn_idx)
+static struct bpf_reference_state *acquire_reference_state(struct bpf_verifier_env *env, int insn_idx, bool gen_id)
 {
 	struct bpf_verifier_state *state = env->cur_state;
 	int new_ofs = state->acquired_refs;
-	int id, err;
+	int err;
 
 	err = resize_reference_state(state, state->acquired_refs + 1);
 	if (err)
-		return err;
-	id = ++env->id_gen;
-	state->refs[new_ofs].type = REF_TYPE_PTR;
-	state->refs[new_ofs].id = id;
+		return NULL;
+	if (gen_id)
+		state->refs[new_ofs].id = ++env->id_gen;
 	state->refs[new_ofs].insn_idx = insn_idx;
 
-	return id;
+	return &state->refs[new_ofs];
+}
+
+static int acquire_reference(struct bpf_verifier_env *env, int insn_idx)
+{
+	struct bpf_reference_state *s;
+
+	s = acquire_reference_state(env, insn_idx, true);
+	if (!s)
+		return -ENOMEM;
+	s->type = REF_TYPE_PTR;
+	return s->id;
 }
 
 static int acquire_lock_state(struct bpf_verifier_env *env, int insn_idx, enum ref_state_type type,
 			      int id, void *ptr)
 {
 	struct bpf_verifier_state *state = env->cur_state;
-	int new_ofs = state->acquired_refs;
-	int err;
+	struct bpf_reference_state *s;
 
-	err = resize_reference_state(state, state->acquired_refs + 1);
-	if (err)
-		return err;
-	state->refs[new_ofs].type = type;
-	state->refs[new_ofs].id = id;
-	state->refs[new_ofs].insn_idx = insn_idx;
-	state->refs[new_ofs].ptr = ptr;
+	s = acquire_reference_state(env, insn_idx, false);
+	s->type = type;
+	s->id = id;
+	s->ptr = ptr;
 
 	state->active_locks++;
 	return 0;
 }
 
-/* release function corresponding to acquire_reference_state(). Idempotent. */
-static int release_reference_state(struct bpf_verifier_state *state, int ptr_id)
+static void release_reference_state(struct bpf_verifier_state *state, int idx)
 {
-	int i, last_idx;
+	int last_idx;
 
 	last_idx = state->acquired_refs - 1;
-	for (i = 0; i < state->acquired_refs; i++) {
-		if (state->refs[i].type != REF_TYPE_PTR)
-			continue;
-		if (state->refs[i].id == ptr_id) {
-			if (last_idx && i != last_idx)
-				memcpy(&state->refs[i], &state->refs[last_idx],
-				       sizeof(*state->refs));
-			memset(&state->refs[last_idx], 0, sizeof(*state->refs));
-			state->acquired_refs--;
-			return 0;
-		}
-	}
-	return -EINVAL;
+	if (last_idx && idx != last_idx)
+		memcpy(&state->refs[idx], &state->refs[last_idx], sizeof(*state->refs));
+	memset(&state->refs[last_idx], 0, sizeof(*state->refs));
+	state->acquired_refs--;
+	return;
 }
 
 static int release_lock_state(struct bpf_verifier_state *state, int type, int id, void *ptr)
 {
-	int i, last_idx;
+	int i;
 
-	last_idx = state->acquired_refs - 1;
 	for (i = 0; i < state->acquired_refs; i++) {
 		if (state->refs[i].type != type)
 			continue;
 		if (state->refs[i].id == id && state->refs[i].ptr == ptr) {
-			if (last_idx && i != last_idx)
-				memcpy(&state->refs[i], &state->refs[last_idx],
-				       sizeof(*state->refs));
-			memset(&state->refs[last_idx], 0, sizeof(*state->refs));
-			state->acquired_refs--;
+			release_reference_state(state, i);
 			state->active_locks--;
 			return 0;
 		}
@@ -9666,21 +9659,41 @@ static void mark_pkt_end(struct bpf_verifier_state *vstate, int regn, bool range_open)
 		reg->range = AT_PKT_END;
 }
 
+static int release_reference_nomark(struct bpf_verifier_state *state, int ref_obj_id)
+{
+	int i;
+
+	for (i = 0; i < state->acquired_refs; i++) {
+		if (state->refs[i].type != REF_TYPE_PTR)
+			continue;
+		if (state->refs[i].id == ref_obj_id) {
+			release_reference_state(state, i);
+			return 0;
+		}
+	}
+	return -EINVAL;
+}
+
 /* The pointer with the specified id has released its reference to kernel
  * resources. Identify all copies of the same pointer and clear the reference.
+ *
+ * This is the release function corresponding to acquire_reference(). Idempotent.
+ * The 'mark' boolean is used to optionally skip scrubbing registers matching
+ * the ref_obj_id, in case they need to be switched to some other type instead
+ * of havoc scalar value.
 */
-static int release_reference(struct bpf_verifier_env *env,
-			     int ref_obj_id)
+static int release_reference(struct bpf_verifier_env *env, int ref_obj_id)
 {
+	struct bpf_verifier_state *vstate = env->cur_state;
 	struct bpf_func_state *state;
 	struct bpf_reg_state *reg;
 	int err;
 
-	err = release_reference_state(env->cur_state, ref_obj_id);
+	err = release_reference_nomark(vstate, ref_obj_id);
 	if (err)
 		return err;
 
-	bpf_for_each_reg_in_vstate(env->cur_state, state, reg, ({
+	bpf_for_each_reg_in_vstate(vstate, state, reg, ({
 		if (reg->ref_obj_id == ref_obj_id)
 			mark_reg_invalid(env, reg);
 	}));
@@ -10774,7 +10787,7 @@ static int check_helper_call(struct bpf_verifier_env *env, struct bpf_insn *insn,
 		struct bpf_func_state *state;
 		struct bpf_reg_state *reg;
 
-		err = release_reference_state(env->cur_state, ref_obj_id);
+		err = release_reference_nomark(env->cur_state, ref_obj_id);
 		if (!err) {
 			bpf_for_each_reg_in_vstate(env->cur_state, state, reg, ({
 				if (reg->ref_obj_id == ref_obj_id) {
@@ -11107,7 +11120,7 @@ static int check_helper_call(struct bpf_verifier_env *env, struct bpf_insn *insn,
 		/* For release_reference() */
 		regs[BPF_REG_0].ref_obj_id = meta.ref_obj_id;
 	} else if (is_acquire_function(func_id, meta.map_ptr)) {
-		int id = acquire_reference_state(env, insn_idx);
+		int id = acquire_reference(env, insn_idx);
 
 		if (id < 0)
 			return id;
@@ -13087,7 +13100,7 @@ static int check_kfunc_call(struct bpf_verifier_env *env, struct bpf_insn *insn,
 	}
 	mark_btf_func_reg_size(env, BPF_REG_0, sizeof(void *));
 	if (is_kfunc_acquire(&meta)) {
-		int id = acquire_reference_state(env, insn_idx);
+		int id = acquire_reference(env, insn_idx);
 
 		if (id < 0)
 			return id;
@@ -15387,7 +15400,7 @@ static void mark_ptr_or_null_regs(struct bpf_verifier_state *vstate, u32 regno,
 		 * No one could have freed the reference state before
 		 * doing the NULL check.
		 */
-		WARN_ON_ONCE(release_reference_state(vstate, id));
+		WARN_ON_ONCE(release_reference_nomark(vstate, id));
 
 	bpf_for_each_reg_in_vstate(vstate, state, reg, ({
 		mark_ptr_or_null_reg(state, reg, id, is_null);

From patchwork Wed Nov 27 21:35:31 2024
X-Patchwork-Submitter: Kumar Kartikeya Dwivedi
X-Patchwork-Id: 13887390
From: Kumar Kartikeya Dwivedi <memxor@gmail.com>
To: bpf@vger.kernel.org
Cc: kkd@meta.com, Eduard Zingerman, Alexei Starovoitov, Andrii Nakryiko,
 Daniel Borkmann, Martin KaFai Lau, kernel-team@fb.com
Subject: [PATCH bpf-next v4 3/7] bpf: Refactor mark_{dynptr,iter}_read
Date: Wed, 27 Nov 2024 13:35:31 -0800
Message-ID: <20241127213535.3657472-4-memxor@gmail.com>
In-Reply-To: <20241127213535.3657472-1-memxor@gmail.com>
References: <20241127213535.3657472-1-memxor@gmail.com>

There is a possibility of sharing code between mark_dynptr_read and
mark_iter_read for updating the liveness information of their stack
slots. Consolidate the common logic into a mark_stack_slot_obj_read
function, in preparation for the next patch, which needs the same logic
for its own stack slots.
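Reduced to a runnable toy, the consolidation looks like this
(illustrative names and slot counts; the real logic is in the diff
below):

	#include <stdio.h>

	#define DYNPTR_NR_SLOTS 2	/* stand-in for BPF_DYNPTR_NR_SLOTS */

	/* shared helper: walk nr_slots slots downward from spi */
	static int mark_slot_obj_read(int spi, int nr_slots)
	{
		for (int i = 0; i < nr_slots; i++)
			printf("mark slot %d as read\n", spi - i);
		return 0;
	}

	static int mark_dynptr_read(int spi)
	{
		/* dynptrs always span a fixed number of slots */
		return mark_slot_obj_read(spi, DYNPTR_NR_SLOTS);
	}

	static int mark_iter_read(int spi, int nr_slots)
	{
		/* iterators pass a caller-supplied slot count through */
		return mark_slot_obj_read(spi, nr_slots);
	}

	int main(void)
	{
		mark_dynptr_read(5);
		mark_iter_read(9, 3);
		return 0;
	}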
Acked-by: Eduard Zingerman
Signed-off-by: Kumar Kartikeya Dwivedi <memxor@gmail.com>
---
 kernel/bpf/verifier.c | 43 +++++++++++++++++++++----------------------
 1 file changed, 21 insertions(+), 22 deletions(-)

diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index 474cca3e8f66..be2365a9794a 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -3192,10 +3192,27 @@ static int mark_reg_read(struct bpf_verifier_env *env,
 	return 0;
 }
 
-static int mark_dynptr_read(struct bpf_verifier_env *env, struct bpf_reg_state *reg)
+static int mark_stack_slot_obj_read(struct bpf_verifier_env *env, struct bpf_reg_state *reg,
+				    int spi, int nr_slots)
 {
 	struct bpf_func_state *state = func(env, reg);
-	int spi, ret;
+	int err, i;
+
+	for (i = 0; i < nr_slots; i++) {
+		struct bpf_reg_state *st = &state->stack[spi - i].spilled_ptr;
+
+		err = mark_reg_read(env, st, st->parent, REG_LIVE_READ64);
+		if (err)
+			return err;
+
+		mark_stack_slot_scratched(env, spi - i);
+	}
+	return 0;
+}
+
+static int mark_dynptr_read(struct bpf_verifier_env *env, struct bpf_reg_state *reg)
+{
+	int spi;
 
 	/* For CONST_PTR_TO_DYNPTR, it must have already been done by
 	 * check_reg_arg in check_helper_call and mark_btf_func_reg_size in
@@ -3210,31 +3227,13 @@ static int mark_dynptr_read(struct bpf_verifier_env *env, struct bpf_reg_state *reg)
 	 * bounds and spi is the first dynptr slot. Simply mark stack slot as
 	 * read.
 	 */
-	ret = mark_reg_read(env, &state->stack[spi].spilled_ptr,
-			    state->stack[spi].spilled_ptr.parent, REG_LIVE_READ64);
-	if (ret)
-		return ret;
-	return mark_reg_read(env, &state->stack[spi - 1].spilled_ptr,
-			     state->stack[spi - 1].spilled_ptr.parent, REG_LIVE_READ64);
+	return mark_stack_slot_obj_read(env, reg, spi, BPF_DYNPTR_NR_SLOTS);
 }
 
 static int mark_iter_read(struct bpf_verifier_env *env, struct bpf_reg_state *reg,
 			  int spi, int nr_slots)
 {
-	struct bpf_func_state *state = func(env, reg);
-	int err, i;
-
-	for (i = 0; i < nr_slots; i++) {
-		struct bpf_reg_state *st = &state->stack[spi - i].spilled_ptr;
-
-		err = mark_reg_read(env, st, st->parent, REG_LIVE_READ64);
-		if (err)
-			return err;
-
-		mark_stack_slot_scratched(env, spi - i);
-	}
-
-	return 0;
+	return mark_stack_slot_obj_read(env, reg, spi, nr_slots);
 }
 
 /* This function is supposed to be used by the following 32-bit optimization

From patchwork Wed Nov 27 21:35:32 2024
X-Patchwork-Submitter: Kumar Kartikeya Dwivedi
X-Patchwork-Id: 13887391
From: Kumar Kartikeya Dwivedi <memxor@gmail.com>
To: bpf@vger.kernel.org
Cc: kkd@meta.com, Eduard Zingerman, Alexei Starovoitov, Andrii Nakryiko,
 Daniel Borkmann, Martin KaFai Lau, kernel-team@fb.com
Subject: [PATCH bpf-next v4 4/7] bpf: Introduce support for bpf_local_irq_{save,restore}
Date: Wed, 27 Nov 2024 13:35:32 -0800
Message-ID: <20241127213535.3657472-5-memxor@gmail.com>
In-Reply-To: <20241127213535.3657472-1-memxor@gmail.com>
References: <20241127213535.3657472-1-memxor@gmail.com>

Teach the verifier about IRQ-disabled sections through the introduction
of two new kfuncs: bpf_local_irq_save, to save IRQ state and disable
IRQs, and bpf_local_irq_restore, to restore IRQ state and enable IRQs
again.

For the purposes of tracking the saved IRQ state, the verifier is taught
about a new special object on the stack of type STACK_IRQ_FLAG. This is
an 8-byte value which saves the IRQ flags which are to be passed back to
the IRQ restore kfunc.

Renumber the enums for REF_TYPE_* to simplify the check in
find_lock_state; filtering out non-lock types as they grow will become
cumbersome and is unnecessary.

To track a dynamic number of IRQ-disabled regions and their associated
saved states, a new reference type REF_TYPE_IRQ is introduced, along
with its state management functions acquire_irq_state and
release_irq_state, which take advantage of the refactoring and clean-ups
made in earlier commits.

One notable requirement of the kernel's IRQ save and restore API is that
they cannot happen out of order. For this purpose, when releasing a
reference we keep track of the prev_id we saw with REF_TYPE_IRQ. Since
reference states are inserted in increasing order of the index, this is
used to remember the ordering of acquisitions of IRQ saved states, so
that we maintain a logical stack in acquisition order of resource
identities, and can enforce LIFO ordering when restoring IRQ state. The
top of the stack is maintained using bpf_verifier_state's active_irq_id.
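From the BPF program side, the intended usage and the LIFO restore rule
just described look roughly like this (a hypothetical sketch: the
program name, section, and scaffolding are illustrative; only the kfunc
prototypes and the ordering rule come from this patch):

	#include <vmlinux.h>
	#include <bpf/bpf_helpers.h>

	extern void bpf_local_irq_save(unsigned long *flags__irq_flag) __ksym;
	extern void bpf_local_irq_restore(unsigned long *flags__irq_flag) __ksym;

	char _license[] SEC("license") = "GPL";

	SEC("tc")
	int irq_flag_usage(struct __sk_buff *ctx)
	{
		unsigned long f1, f2;	/* each becomes a STACK_IRQ_FLAG slot */

		bpf_local_irq_save(&f1);
		bpf_local_irq_save(&f2);	/* f2 is now top of the stack */
		/* ... work with IRQs disabled ... */
		bpf_local_irq_restore(&f2);	/* must come first */
		bpf_local_irq_restore(&f1);	/* restoring f1 before f2 would
						 * be rejected as out-of-order */
		return 0;
	}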
The logic to detect initialized and uninitialized irq flag slots, and to
mark and unmark them, is similar to how it's done for iterators. No
additional checks are needed in refsafe for REF_TYPE_IRQ, apart from the
usual check_ids satisfiability check on the ref[i].id. We have to
perform the same check_ids check on state->active_irq_id as well.

The kfuncs themselves are plain wrappers over the local_irq_save and
local_irq_restore macros.

Acked-by: Eduard Zingerman
Signed-off-by: Kumar Kartikeya Dwivedi <memxor@gmail.com>
---
 include/linux/bpf_verifier.h |   8 +-
 kernel/bpf/helpers.c         |  17 +++
 kernel/bpf/log.c             |   1 +
 kernel/bpf/verifier.c        | 279 ++++++++++++++++++++++++++++++++++-
 4 files changed, 302 insertions(+), 3 deletions(-)

diff --git a/include/linux/bpf_verifier.h b/include/linux/bpf_verifier.h
index af64b5415df8..3da7ae6c7bba 100644
--- a/include/linux/bpf_verifier.h
+++ b/include/linux/bpf_verifier.h
@@ -233,6 +233,7 @@ enum bpf_stack_slot_type {
 	 */
 	STACK_DYNPTR,
 	STACK_ITER,
+	STACK_IRQ_FLAG,
 };
 
 #define BPF_REG_SIZE 8	/* size of eBPF register in bytes */
@@ -254,8 +255,10 @@ struct bpf_reference_state {
 	 * default to pointer reference on zero initialization of a state.
 	 */
 	enum ref_state_type {
-		REF_TYPE_PTR = 0,
-		REF_TYPE_LOCK,
+		REF_TYPE_PTR  = 1,
+		REF_TYPE_IRQ  = 2,
+
+		REF_TYPE_LOCK = 3,
 	} type;
 	/* Track each reference created with a unique id, even if the same
 	 * instruction creates the reference multiple times (eg, via CALL).
@@ -420,6 +423,7 @@ struct bpf_verifier_state {
 	u32 acquired_refs;
 	u32 active_locks;
 	u32 active_preempt_locks;
+	u32 active_irq_id;
 	bool active_rcu_lock;
 
 	bool speculative;
diff --git a/kernel/bpf/helpers.c b/kernel/bpf/helpers.c
index 751c150f9e1c..532ea74d4850 100644
--- a/kernel/bpf/helpers.c
+++ b/kernel/bpf/helpers.c
@@ -3057,6 +3057,21 @@ __bpf_kfunc int bpf_copy_from_user_str(void *dst, u32 dst__sz, const void __user *unsafe_ptr__ign, u64 flags)
 	return ret + 1;
 }
 
+/* Keep unsigned long in prototype so that kfunc is usable when emitted to
+ * vmlinux.h in BPF programs directly, but note that while in BPF prog, the
+ * unsigned long always points to 8-byte region on stack, the kernel may only
+ * read and write the 4-bytes on 32-bit.
+ */
+__bpf_kfunc void bpf_local_irq_save(unsigned long *flags__irq_flag)
+{
+	local_irq_save(*flags__irq_flag);
+}
+
+__bpf_kfunc void bpf_local_irq_restore(unsigned long *flags__irq_flag)
+{
+	local_irq_restore(*flags__irq_flag);
+}
+
 __bpf_kfunc_end_defs();
 
 BTF_KFUNCS_START(generic_btf_ids)
@@ -3149,6 +3164,8 @@ BTF_ID_FLAGS(func, bpf_get_kmem_cache)
 BTF_ID_FLAGS(func, bpf_iter_kmem_cache_new, KF_ITER_NEW | KF_SLEEPABLE)
 BTF_ID_FLAGS(func, bpf_iter_kmem_cache_next, KF_ITER_NEXT | KF_RET_NULL | KF_SLEEPABLE)
 BTF_ID_FLAGS(func, bpf_iter_kmem_cache_destroy, KF_ITER_DESTROY | KF_SLEEPABLE)
+BTF_ID_FLAGS(func, bpf_local_irq_save)
+BTF_ID_FLAGS(func, bpf_local_irq_restore)
 BTF_KFUNCS_END(common_btf_ids)
 
 static const struct btf_kfunc_id_set common_kfunc_set = {
diff --git a/kernel/bpf/log.c b/kernel/bpf/log.c
index 8b52e5b7504c..434fc320ba1d 100644
--- a/kernel/bpf/log.c
+++ b/kernel/bpf/log.c
@@ -537,6 +537,7 @@ static char slot_type_char[] = {
 	[STACK_ZERO]	= '0',
 	[STACK_DYNPTR]	= 'd',
 	[STACK_ITER]	= 'i',
+	[STACK_IRQ_FLAG] = 'f'
 };
 
 static void print_liveness(struct bpf_verifier_env *env,
diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index be2365a9794a..c6b40da49835 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -661,6 +661,11 @@ static int iter_get_spi(struct bpf_verifier_env *env, struct bpf_reg_state *reg, int nr_slots)
 	return stack_slot_obj_get_spi(env, reg, "iter", nr_slots);
 }
 
+static int irq_flag_get_spi(struct bpf_verifier_env *env, struct bpf_reg_state *reg)
+{
+	return stack_slot_obj_get_spi(env, reg, "irq_flag", 1);
+}
+
 static enum bpf_dynptr_type arg_to_dynptr_type(enum bpf_arg_type arg_type)
 {
 	switch (arg_type & DYNPTR_TYPE_FLAG_MASK) {
@@ -1156,10 +1161,126 @@ static int is_iter_reg_valid_init(struct bpf_verifier_env *env, struct bpf_reg_state *reg,
 	return 0;
 }
 
+static int acquire_irq_state(struct bpf_verifier_env *env, int insn_idx);
+static int release_irq_state(struct bpf_verifier_state *state, int id);
+
+static int mark_stack_slot_irq_flag(struct bpf_verifier_env *env,
+				    struct bpf_kfunc_call_arg_meta *meta,
+				    struct bpf_reg_state *reg, int insn_idx)
+{
+	struct bpf_func_state *state = func(env, reg);
+	struct bpf_stack_state *slot;
+	struct bpf_reg_state *st;
+	int spi, i, id;
+
+	spi = irq_flag_get_spi(env, reg);
+	if (spi < 0)
+		return spi;
+
+	id = acquire_irq_state(env, insn_idx);
+	if (id < 0)
+		return id;
+
+	slot = &state->stack[spi];
+	st = &slot->spilled_ptr;
+
+	__mark_reg_known_zero(st);
+	st->type = PTR_TO_STACK; /* we don't have dedicated reg type */
+	st->live |= REG_LIVE_WRITTEN;
+	st->ref_obj_id = id;
+
+	for (i = 0; i < BPF_REG_SIZE; i++)
+		slot->slot_type[i] = STACK_IRQ_FLAG;
+
+	mark_stack_slot_scratched(env, spi);
+	return 0;
+}
+
+static int unmark_stack_slot_irq_flag(struct bpf_verifier_env *env, struct bpf_reg_state *reg)
+{
+	struct bpf_func_state *state = func(env, reg);
+	struct bpf_stack_state *slot;
+	struct bpf_reg_state *st;
+	int spi, i, err;
+
+	spi = irq_flag_get_spi(env, reg);
+	if (spi < 0)
+		return spi;
+
+	slot = &state->stack[spi];
+	st = &slot->spilled_ptr;
+
+	err = release_irq_state(env->cur_state, st->ref_obj_id);
+	WARN_ON_ONCE(err && err != -EACCES);
+	if (err) {
+		verbose(env, "cannot restore irq state out of order\n");
+		return err;
+	}
+
+	__mark_reg_not_init(env, st);
+
+	/* see unmark_stack_slots_dynptr() for why we need to set REG_LIVE_WRITTEN */
+	st->live |= REG_LIVE_WRITTEN;
+
+	for (i = 0; i < BPF_REG_SIZE; i++)
+		slot->slot_type[i] = STACK_INVALID;
+
+	mark_stack_slot_scratched(env, spi);
+	return 0;
+}
+
+static bool is_irq_flag_reg_valid_uninit(struct bpf_verifier_env *env, struct bpf_reg_state *reg)
+{
+	struct bpf_func_state *state = func(env, reg);
+	struct bpf_stack_state *slot;
+	int spi, i;
+
+	/* For -ERANGE (i.e. spi not falling into allocated stack slots), we
+	 * will do check_mem_access to check and update stack bounds later, so
+	 * return true for that case.
+	 */
+	spi = irq_flag_get_spi(env, reg);
+	if (spi == -ERANGE)
+		return true;
+	if (spi < 0)
+		return false;
+
+	slot = &state->stack[spi];
+
+	for (i = 0; i < BPF_REG_SIZE; i++)
+		if (slot->slot_type[i] == STACK_IRQ_FLAG)
+			return false;
+	return true;
+}
+
+static int is_irq_flag_reg_valid_init(struct bpf_verifier_env *env, struct bpf_reg_state *reg)
+{
+	struct bpf_func_state *state = func(env, reg);
+	struct bpf_stack_state *slot;
+	struct bpf_reg_state *st;
+	int spi, i;
+
+	spi = irq_flag_get_spi(env, reg);
+	if (spi < 0)
+		return -EINVAL;
+
+	slot = &state->stack[spi];
+	st = &slot->spilled_ptr;
+
+	if (!st->ref_obj_id)
+		return -EINVAL;
+
+	for (i = 0; i < BPF_REG_SIZE; i++)
+		if (slot->slot_type[i] != STACK_IRQ_FLAG)
+			return -EINVAL;
+	return 0;
+}
+
 /* Check if given stack slot is "special":
  *   - spilled register state (STACK_SPILL);
  *   - dynptr state (STACK_DYNPTR);
  *   - iter state (STACK_ITER).
+ *   - irq flag state (STACK_IRQ_FLAG)
  */
 static bool is_stack_slot_special(const struct bpf_stack_state *stack)
 {
@@ -1169,6 +1290,7 @@ static bool is_stack_slot_special(const struct bpf_stack_state *stack)
 	case STACK_SPILL:
 	case STACK_DYNPTR:
 	case STACK_ITER:
+	case STACK_IRQ_FLAG:
 		return true;
 	case STACK_INVALID:
 	case STACK_MISC:
@@ -1291,6 +1413,7 @@ static int copy_reference_state(struct bpf_verifier_state *dst, const struct bpf_verifier_state *src)
 	dst->active_locks = src->active_locks;
 	dst->active_preempt_locks = src->active_preempt_locks;
 	dst->active_rcu_lock = src->active_rcu_lock;
+	dst->active_irq_id = src->active_irq_id;
 	return 0;
 }
 
@@ -1392,6 +1515,20 @@ static int acquire_lock_state(struct bpf_verifier_env *env, int insn_idx, enum ref_state_type type,
 	return 0;
 }
 
+static int acquire_irq_state(struct bpf_verifier_env *env, int insn_idx)
+{
+	struct bpf_verifier_state *state = env->cur_state;
+	struct bpf_reference_state *s;
+
+	s = acquire_reference_state(env, insn_idx, true);
+	if (!s)
+		return -ENOMEM;
+	s->type = REF_TYPE_IRQ;
+
+	state->active_irq_id = s->id;
+	return s->id;
+}
+
 static void release_reference_state(struct bpf_verifier_state *state, int idx)
 {
 	int last_idx;
@@ -1420,6 +1557,28 @@ static int release_lock_state(struct bpf_verifier_state *state, int type, int id, void *ptr)
 	return -EINVAL;
 }
 
+static int release_irq_state(struct bpf_verifier_state *state, int id)
+{
+	u32 prev_id = 0;
+	int i;
+
+	if (id != state->active_irq_id)
+		return -EACCES;
+
+	for (i = 0; i < state->acquired_refs; i++) {
+		if (state->refs[i].type != REF_TYPE_IRQ)
+			continue;
+		if (state->refs[i].id == id) {
+			release_reference_state(state, i);
+			state->active_irq_id = prev_id;
+			return 0;
+		} else {
+			prev_id = state->refs[i].id;
+		}
+	}
+	return -EINVAL;
+}
+
 static struct bpf_reference_state *find_lock_state(struct bpf_verifier_state *state, enum ref_state_type type,
 						   int id, void *ptr)
 {
@@ -1428,7 +1587,7 @@ static struct bpf_reference_state *find_lock_state(struct bpf_verifier_state *state, enum ref_state_type type,
 	for (i = 0; i < state->acquired_refs; i++) {
 		struct bpf_reference_state *s = &state->refs[i];
 
-		if (s->type == REF_TYPE_PTR || s->type != type)
+		if (s->type != type)
 			continue;
 
 		if (s->id == id && s->ptr == ptr)
@@ -3236,6 +3395,16 @@ static int mark_iter_read(struct bpf_verifier_env *env, struct bpf_reg_state *reg,
 	return mark_stack_slot_obj_read(env, reg, spi, nr_slots);
 }
 
+static int mark_irq_flag_read(struct bpf_verifier_env *env, struct bpf_reg_state *reg)
+{
+	int spi;
+
+	spi = irq_flag_get_spi(env, reg);
+	if (spi < 0)
+		return spi;
+	return mark_stack_slot_obj_read(env, reg, spi, 1);
+}
+
 /* This function is supposed to be used by the following 32-bit optimization
 * code only. It returns TRUE if the source or destination register operates
 * on 64-bit, otherwise return FALSE.
@@ -10012,6 +10181,12 @@ static int check_func_call(struct bpf_verifier_env *env, struct bpf_insn *insn,
 			return -EINVAL;
 		}
 
+		if (env->cur_state->active_irq_id) {
+			verbose(env, "global function calls are not allowed with IRQs disabled,\n"
+				     "use static function instead\n");
+			return -EINVAL;
+		}
+
 		if (err) {
 			verbose(env, "Caller passes invalid args into func#%d ('%s')\n",
 				subprog, sub_name);
@@ -10536,6 +10711,11 @@ static int check_resource_leak(struct bpf_verifier_env *env, bool exception_exit,
 		return err;
 	}
 
+	if (check_lock && env->cur_state->active_irq_id) {
+		verbose(env, "%s cannot be used inside bpf_local_irq_save-ed region\n", prefix);
+		return -EINVAL;
+	}
+
 	if (check_lock && env->cur_state->active_rcu_lock) {
 		verbose(env, "%s cannot be used inside bpf_rcu_read_lock-ed region\n", prefix);
 		return -EINVAL;
 	}
@@ -10740,6 +10920,17 @@ static int check_helper_call(struct bpf_verifier_env *env, struct bpf_insn *insn,
 			env->insn_aux_data[insn_idx].storage_get_func_atomic = true;
 	}
 
+	if (env->cur_state->active_irq_id) {
+		if (fn->might_sleep) {
+			verbose(env, "sleepable helper %s#%d in IRQ-disabled region\n",
+				func_id_name(func_id), func_id);
+			return -EINVAL;
+		}
+
+		if (in_sleepable(env) && is_storage_get_function(func_id))
+			env->insn_aux_data[insn_idx].storage_get_func_atomic = true;
+	}
+
 	meta.func_id = func_id;
 	/* check args */
 	for (i = 0; i < MAX_BPF_FUNC_REG_ARGS; i++) {
@@ -11301,6 +11492,11 @@ static bool is_kfunc_arg_const_str(const struct btf *btf, const struct btf_param *arg)
 	return btf_param_match_suffix(btf, arg, "__str");
 }
 
+static bool is_kfunc_arg_irq_flag(const struct btf *btf, const struct btf_param *arg)
+{
+	return btf_param_match_suffix(btf, arg, "__irq_flag");
+}
+
 static bool is_kfunc_arg_scalar_with_name(const struct btf *btf,
 					  const struct btf_param *arg,
 					  const char *name)
@@ -11454,6 +11650,7 @@ enum kfunc_ptr_arg_type {
 	KF_ARG_PTR_TO_CONST_STR,
 	KF_ARG_PTR_TO_MAP,
 	KF_ARG_PTR_TO_WORKQUEUE,
+	KF_ARG_PTR_TO_IRQ_FLAG,
 };
 
 enum special_kfunc_type {
@@ -11485,6 +11682,8 @@ enum special_kfunc_type {
 	KF_bpf_iter_css_task_new,
 	KF_bpf_session_cookie,
 	KF_bpf_get_kmem_cache,
+	KF_bpf_local_irq_save,
+	KF_bpf_local_irq_restore,
 };
 
 BTF_SET_START(special_kfunc_set)
@@ -11551,6 +11750,8 @@ BTF_ID(func, bpf_session_cookie)
 BTF_ID_UNUSED
 #endif
 BTF_ID(func, bpf_get_kmem_cache)
+BTF_ID(func, bpf_local_irq_save)
+BTF_ID(func, bpf_local_irq_restore)
 
 static bool is_kfunc_ret_null(struct bpf_kfunc_call_arg_meta *meta)
 {
@@ -11641,6 +11842,9 @@ get_kfunc_ptr_arg_type(struct bpf_verifier_env *env,
 	if (is_kfunc_arg_wq(meta->btf, &args[argno]))
 		return KF_ARG_PTR_TO_WORKQUEUE;
 
+	if (is_kfunc_arg_irq_flag(meta->btf, &args[argno]))
+		return KF_ARG_PTR_TO_IRQ_FLAG;
+
 	if ((base_type(reg->type) == PTR_TO_BTF_ID || reg2btf_ids[base_type(reg->type)])) {
 		if (!btf_type_is_struct(ref_t)) {
 			verbose(env, "kernel function %s args#%d pointer type %s %s is not supported\n",
@@ -11744,6 +11948,54 @@ static int process_kf_arg_ptr_to_btf_id(struct bpf_verifier_env *env,
 	return 0;
 }
 
+static int process_irq_flag(struct
bpf_verifier_env *env, int regno, + struct bpf_kfunc_call_arg_meta *meta) +{ + struct bpf_reg_state *regs = cur_regs(env), *reg = ®s[regno]; + bool irq_save; + int err; + + if (meta->func_id == special_kfunc_list[KF_bpf_local_irq_save]) { + irq_save = true; + } else if (meta->func_id == special_kfunc_list[KF_bpf_local_irq_restore]) { + irq_save = false; + } else { + verbose(env, "verifier internal error: unknown irq flags kfunc\n"); + return -EFAULT; + } + + if (irq_save) { + if (!is_irq_flag_reg_valid_uninit(env, reg)) { + verbose(env, "expected uninitialized irq flag as arg#%d\n", regno); + return -EINVAL; + } + + err = check_mem_access(env, env->insn_idx, regno, 0, BPF_DW, BPF_WRITE, -1, false, false); + if (err) + return err; + + err = mark_stack_slot_irq_flag(env, meta, reg, env->insn_idx); + if (err) + return err; + } else { + err = is_irq_flag_reg_valid_init(env, reg); + if (err) { + verbose(env, "expected an initialized irq flag as arg#%d\n", regno); + return err; + } + + err = mark_irq_flag_read(env, reg); + if (err) + return err; + + err = unmark_stack_slot_irq_flag(env, reg); + if (err) + return err; + } + return 0; +} + + static int ref_set_non_owning(struct bpf_verifier_env *env, struct bpf_reg_state *reg) { struct btf_record *rec = reg_btf_record(reg); @@ -12332,6 +12584,7 @@ static int check_kfunc_args(struct bpf_verifier_env *env, struct bpf_kfunc_call_ case KF_ARG_PTR_TO_REFCOUNTED_KPTR: case KF_ARG_PTR_TO_CONST_STR: case KF_ARG_PTR_TO_WORKQUEUE: + case KF_ARG_PTR_TO_IRQ_FLAG: break; default: WARN_ON_ONCE(1); @@ -12626,6 +12879,15 @@ static int check_kfunc_args(struct bpf_verifier_env *env, struct bpf_kfunc_call_ if (ret < 0) return ret; break; + case KF_ARG_PTR_TO_IRQ_FLAG: + if (reg->type != PTR_TO_STACK) { + verbose(env, "arg#%d doesn't point to an irq flag on stack\n", i); + return -EINVAL; + } + ret = process_irq_flag(env, regno, meta); + if (ret < 0) + return ret; + break; } } @@ -12806,6 +13068,11 @@ static int check_kfunc_call(struct bpf_verifier_env *env, struct bpf_insn *insn, return -EINVAL; } + if (env->cur_state->active_irq_id && sleepable) { + verbose(env, "kernel func %s is sleepable within IRQ-disabled region\n", func_name); + return -EACCES; + } + /* In case of release function, we get register number of refcounted * PTR_TO_BTF_ID in bpf_kfunc_arg_meta, do the release now. 
*/ @@ -17739,6 +18006,12 @@ static bool stacksafe(struct bpf_verifier_env *env, struct bpf_func_state *old, !check_ids(old_reg->ref_obj_id, cur_reg->ref_obj_id, idmap)) return false; break; + case STACK_IRQ_FLAG: + old_reg = &old->stack[spi].spilled_ptr; + cur_reg = &cur->stack[spi].spilled_ptr; + if (!check_ids(old_reg->ref_obj_id, cur_reg->ref_obj_id, idmap)) + return false; + break; case STACK_MISC: case STACK_ZERO: case STACK_INVALID: @@ -17768,12 +18041,16 @@ static bool refsafe(struct bpf_verifier_state *old, struct bpf_verifier_state *c if (old->active_rcu_lock != cur->active_rcu_lock) return false; + if (!check_ids(old->active_irq_id, cur->active_irq_id, idmap)) + return false; + for (i = 0; i < old->acquired_refs; i++) { if (!check_ids(old->refs[i].id, cur->refs[i].id, idmap) || old->refs[i].type != cur->refs[i].type) return false; switch (old->refs[i].type) { case REF_TYPE_PTR: + case REF_TYPE_IRQ: break; case REF_TYPE_LOCK: if (old->refs[i].ptr != cur->refs[i].ptr) return false;
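For orientation before the follow-up patches, here is a minimal usage sketch of the two new kfuncs; the program and section names are illustrative, and the extern declarations mirror the selftests added later in this series. The flags argument must point to an 8-byte slot on the BPF program stack, and every save must be paired with an in-order restore before BPF_EXIT:

#include <vmlinux.h>
#include <bpf/bpf_helpers.h>

extern void bpf_local_irq_save(unsigned long *flags__irq_flag) __weak __ksym;
extern void bpf_local_irq_restore(unsigned long *flags__irq_flag) __weak __ksym;

SEC("tc")
int irq_critical_section(struct __sk_buff *ctx)
{
	unsigned long flags; /* stack slot the verifier tracks as STACK_IRQ_FLAG */

	bpf_local_irq_save(&flags);	/* acquires a REF_TYPE_IRQ reference */
	/* ... non-sleepable work with IRQs disabled on this CPU ... */
	bpf_local_irq_restore(&flags);	/* must restore the most recent save */
	return 0;
}

char _license[] SEC("license") = "GPL";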
From patchwork Wed Nov 27 21:35:33 2024 X-Patchwork-Submitter: Kumar Kartikeya Dwivedi X-Patchwork-Id: 13887392 From: Kumar Kartikeya Dwivedi To: bpf@vger.kernel.org Cc: kkd@meta.com, Eduard Zingerman, Alexei Starovoitov, Andrii Nakryiko, Daniel Borkmann, Martin KaFai Lau, kernel-team@fb.com Subject: [PATCH bpf-next v4 5/7] bpf: Improve verifier log for resource leak on exit Date: Wed, 27 Nov 2024 13:35:33 -0800 Message-ID: <20241127213535.3657472-6-memxor@gmail.com> In-Reply-To: <20241127213535.3657472-1-memxor@gmail.com> The verifier
log when leaking resources on BPF_EXIT may be a bit confusing, as it is only a problem when finally exiting from the main prog, not from any of the subprogs. Hence, update the verifier error string and the corresponding selftests matching on it. Suggested-by: Eduard Zingerman Signed-off-by: Kumar Kartikeya Dwivedi --- kernel/bpf/verifier.c | 2 +- .../testing/selftests/bpf/progs/exceptions_fail.c | 4 ++-- tools/testing/selftests/bpf/progs/preempt_lock.c | 14 +++++++------- .../selftests/bpf/progs/verifier_spin_lock.c | 2 +- 4 files changed, 11 insertions(+), 11 deletions(-) diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c index c6b40da49835..b9fdb7e362ca 100644 --- a/kernel/bpf/verifier.c +++ b/kernel/bpf/verifier.c @@ -19088,7 +19088,7 @@ static int do_check(struct bpf_verifier_env *env) * match caller reference state when it exits. */ err = check_resource_leak(env, exception_exit, !env->cur_state->curframe, - "BPF_EXIT instruction"); + "BPF_EXIT instruction in main prog"); if (err) return err; diff --git a/tools/testing/selftests/bpf/progs/exceptions_fail.c b/tools/testing/selftests/bpf/progs/exceptions_fail.c index fe0f3fa5aab6..8a0fdff89927 100644 --- a/tools/testing/selftests/bpf/progs/exceptions_fail.c +++ b/tools/testing/selftests/bpf/progs/exceptions_fail.c @@ -131,7 +131,7 @@ int reject_subprog_with_lock(void *ctx) } SEC("?tc") -__failure __msg("BPF_EXIT instruction cannot be used inside bpf_rcu_read_lock-ed region") +__failure __msg("BPF_EXIT instruction in main prog cannot be used inside bpf_rcu_read_lock-ed region") int reject_with_rcu_read_lock(void *ctx) { bpf_rcu_read_lock(); @@ -147,7 +147,7 @@ __noinline static int throwing_subprog(struct __sk_buff *ctx) } SEC("?tc") -__failure __msg("BPF_EXIT instruction cannot be used inside bpf_rcu_read_lock-ed region") +__failure __msg("BPF_EXIT instruction in main prog cannot be used inside bpf_rcu_read_lock-ed region") int reject_subprog_with_rcu_read_lock(void *ctx) { bpf_rcu_read_lock(); diff --git a/tools/testing/selftests/bpf/progs/preempt_lock.c b/tools/testing/selftests/bpf/progs/preempt_lock.c index 885377e83607..5269571cf7b5 100644 --- a/tools/testing/selftests/bpf/progs/preempt_lock.c +++ b/tools/testing/selftests/bpf/progs/preempt_lock.c @@ -6,7 +6,7 @@ #include "bpf_experimental.h" SEC("?tc") -__failure __msg("BPF_EXIT instruction cannot be used inside bpf_preempt_disable-ed region") +__failure __msg("BPF_EXIT instruction in main prog cannot be used inside bpf_preempt_disable-ed region") int preempt_lock_missing_1(struct __sk_buff *ctx) { bpf_preempt_disable(); @@ -14,7 +14,7 @@ int preempt_lock_missing_1(struct __sk_buff *ctx) } SEC("?tc") -__failure __msg("BPF_EXIT instruction cannot be used inside bpf_preempt_disable-ed region") +__failure __msg("BPF_EXIT instruction in main prog cannot be used inside bpf_preempt_disable-ed region") int preempt_lock_missing_2(struct __sk_buff *ctx) { bpf_preempt_disable(); @@ -23,7 +23,7 @@ int preempt_lock_missing_2(struct __sk_buff *ctx) } SEC("?tc") -__failure __msg("BPF_EXIT instruction cannot be used inside bpf_preempt_disable-ed region") +__failure __msg("BPF_EXIT instruction in main prog cannot be used inside bpf_preempt_disable-ed region") int preempt_lock_missing_3(struct __sk_buff *ctx) { bpf_preempt_disable(); @@ -33,7 +33,7 @@ int preempt_lock_missing_3(struct __sk_buff *ctx) } SEC("?tc") -__failure __msg("BPF_EXIT instruction cannot be used inside bpf_preempt_disable-ed region") +__failure __msg("BPF_EXIT instruction in main prog cannot be used inside
bpf_preempt_disable-ed region") int preempt_lock_missing_3_minus_2(struct __sk_buff *ctx) { bpf_preempt_disable(); @@ -55,7 +55,7 @@ static __noinline void preempt_enable(void) } SEC("?tc") -__failure __msg("BPF_EXIT instruction cannot be used inside bpf_preempt_disable-ed region") +__failure __msg("BPF_EXIT instruction in main prog cannot be used inside bpf_preempt_disable-ed region") int preempt_lock_missing_1_subprog(struct __sk_buff *ctx) { preempt_disable(); @@ -63,7 +63,7 @@ int preempt_lock_missing_1_subprog(struct __sk_buff *ctx) } SEC("?tc") -__failure __msg("BPF_EXIT instruction cannot be used inside bpf_preempt_disable-ed region") +__failure __msg("BPF_EXIT instruction in main prog cannot be used inside bpf_preempt_disable-ed region") int preempt_lock_missing_2_subprog(struct __sk_buff *ctx) { preempt_disable(); @@ -72,7 +72,7 @@ int preempt_lock_missing_2_subprog(struct __sk_buff *ctx) } SEC("?tc") -__failure __msg("BPF_EXIT instruction cannot be used inside bpf_preempt_disable-ed region") +__failure __msg("BPF_EXIT instruction in main prog cannot be used inside bpf_preempt_disable-ed region") int preempt_lock_missing_2_minus_1_subprog(struct __sk_buff *ctx) { preempt_disable(); diff --git a/tools/testing/selftests/bpf/progs/verifier_spin_lock.c b/tools/testing/selftests/bpf/progs/verifier_spin_lock.c index 3f679de73229..25599eac9a70 100644 --- a/tools/testing/selftests/bpf/progs/verifier_spin_lock.c +++ b/tools/testing/selftests/bpf/progs/verifier_spin_lock.c @@ -187,7 +187,7 @@ l0_%=: r6 = r0; \ SEC("cgroup/skb") __description("spin_lock: test6 missing unlock") -__failure __msg("BPF_EXIT instruction cannot be used inside bpf_spin_lock-ed region") +__failure __msg("BPF_EXIT instruction in main prog cannot be used inside bpf_spin_lock-ed region") __failure_unpriv __msg_unpriv("") __naked void spin_lock_test6_missing_unlock(void) { From patchwork Wed Nov 27 21:35:34 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Kumar Kartikeya Dwivedi X-Patchwork-Id: 13887393 X-Patchwork-Delegate: bpf@iogearbox.net Received: from mail-wr1-f67.google.com (mail-wr1-f67.google.com [209.85.221.67]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 1C7C7202F8F for ; Wed, 27 Nov 2024 21:35:47 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=209.85.221.67 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1732743349; cv=none; b=Dx5EF/BgWk6jJgbIVFaPSTrur6oz2CyJUjHclaRQ74FGa+hMOGwctV6FkRlJxtc7eR/0DkSZi2ovW8UVVgNeCIIYeZWmRvttKs9NkA9qrRs1DsQoXTK+T5mk95oggp518Cpf2YlYdLT5BXFrZ6AGDhDlihkV7H6RZJL5riYg6A4= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1732743349; c=relaxed/simple; bh=XI8afY+EEgzwCChq5uiRY5b/c1t5IWFqhWe47+0fWFE=; h=From:To:Cc:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version; b=I85gIDeEb9LT21G+D/R40GVJIeCz84oWyuo/L34bDLXP+NF6Oq9kasljb9kDeOlvqvrKJQmT+9X83DiAmJQsV3DFJtz9psEJHvbFu4DDOYB5NkBnTgu9Yml4oB350rkfGPvn1oc6A+HSuzUmqrI+FJrYTO/sD1uSLP571a+TP1w= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=gmail.com; spf=pass smtp.mailfrom=gmail.com; dkim=pass (2048-bit key) header.d=gmail.com header.i=@gmail.com header.b=GfeYd4pI; arc=none smtp.client-ip=209.85.221.67 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=gmail.com 
From patchwork Wed Nov 27 21:35:34 2024 X-Patchwork-Submitter: Kumar Kartikeya Dwivedi X-Patchwork-Id: 13887393 From: Kumar Kartikeya Dwivedi To: bpf@vger.kernel.org Cc: kkd@meta.com, Eduard Zingerman, Alexei Starovoitov, Andrii Nakryiko, Daniel Borkmann, Martin KaFai Lau, kernel-team@fb.com Subject: [PATCH bpf-next v4 6/7] selftests/bpf: Expand coverage of preempt tests to sleepable kfunc Date: Wed, 27 Nov 2024 13:35:34 -0800 Message-ID: <20241127213535.3657472-7-memxor@gmail.com> In-Reply-To: <20241127213535.3657472-1-memxor@gmail.com> For preemption-related kfuncs, we don't test their interaction with sleepable kfuncs (we do test helpers) even though the verifier has code to protect against such a pattern. Expand coverage of the selftest to include this case.
Acked-by: Eduard Zingerman Signed-off-by: Kumar Kartikeya Dwivedi --- tools/testing/selftests/bpf/progs/preempt_lock.c | 14 ++++++++++++++ 1 file changed, 14 insertions(+) diff --git a/tools/testing/selftests/bpf/progs/preempt_lock.c b/tools/testing/selftests/bpf/progs/preempt_lock.c index 5269571cf7b5..6c5797bf0ead 100644 --- a/tools/testing/selftests/bpf/progs/preempt_lock.c +++ b/tools/testing/selftests/bpf/progs/preempt_lock.c @@ -5,6 +5,8 @@ #include "bpf_misc.h" #include "bpf_experimental.h" +extern int bpf_copy_from_user_str(void *dst, u32 dst__sz, const void *unsafe_ptr__ign, u64 flags) __weak __ksym; + SEC("?tc") __failure __msg("BPF_EXIT instruction in main prog cannot be used inside bpf_preempt_disable-ed region") int preempt_lock_missing_1(struct __sk_buff *ctx) @@ -113,6 +115,18 @@ int preempt_sleepable_helper(void *ctx) return 0; } +SEC("?fentry.s/" SYS_PREFIX "sys_getpgid") +__failure __msg("kernel func bpf_copy_from_user_str is sleepable within non-preemptible region") +int preempt_sleepable_kfunc(void *ctx) +{ + u32 data; + + bpf_preempt_disable(); + bpf_copy_from_user_str(&data, sizeof(data), NULL, 0); + bpf_preempt_enable(); + return 0; +} + int __noinline preempt_global_subprog(void) { preempt_balance_subprog();
From patchwork Wed Nov 27 21:35:35 2024 X-Patchwork-Submitter: Kumar Kartikeya Dwivedi X-Patchwork-Id: 13887394 From: Kumar Kartikeya Dwivedi To: bpf@vger.kernel.org Cc: kkd@meta.com, Eduard Zingerman, Alexei Starovoitov, Andrii Nakryiko, Daniel Borkmann, Martin KaFai Lau, kernel-team@fb.com Subject: [PATCH bpf-next v4 7/7] selftests/bpf: Add IRQ save/restore tests Date: Wed, 27 Nov 2024 13:35:35 -0800 Message-ID: <20241127213535.3657472-8-memxor@gmail.com> In-Reply-To: <20241127213535.3657472-1-memxor@gmail.com> Include tests that check for rejection in erroneous cases, like unbalanced IRQ-disabled counts, within and across subprogs, invalid IRQ flag state or input to kfuncs, behavior upon overwriting IRQ saved state on stack, interaction with sleepable kfuncs/helpers, global functions, and out of order restore. Include some success scenarios as well to demonstrate usage.
#128/1 irq/irq_save_bad_arg:OK #128/2 irq/irq_restore_bad_arg:OK #128/3 irq/irq_restore_missing_2:OK #128/4 irq/irq_restore_missing_3:OK #128/5 irq/irq_restore_missing_3_minus_2:OK #128/6 irq/irq_restore_missing_1_subprog:OK #128/7 irq/irq_restore_missing_2_subprog:OK #128/8 irq/irq_restore_missing_3_subprog:OK #128/9 irq/irq_restore_missing_3_minus_2_subprog:OK #128/10 irq/irq_balance:OK #128/11 irq/irq_balance_n:OK #128/12 irq/irq_balance_subprog:OK #128/13 irq/irq_global_subprog:OK #128/14 irq/irq_restore_ooo:OK #128/15 irq/irq_restore_ooo_3:OK #128/16 irq/irq_restore_3_subprog:OK #128/17 irq/irq_restore_4_subprog:OK #128/18 irq/irq_restore_ooo_3_subprog:OK #128/19 irq/irq_restore_invalid:OK #128/20 irq/irq_save_invalid:OK #128/21 irq/irq_restore_iter:OK #128/22 irq/irq_save_iter:OK #128/23 irq/irq_flag_overwrite:OK #128/24 irq/irq_flag_overwrite_partial:OK #128/25 irq/irq_sleepable_helper:OK #128/26 irq/irq_sleepable_kfunc:OK #128 irq:OK Summary: 1/26 PASSED, 0 SKIPPED, 0 FAILED Acked-by: Eduard Zingerman Signed-off-by: Kumar Kartikeya Dwivedi --- .../selftests/bpf/prog_tests/verifier.c | 2 + tools/testing/selftests/bpf/progs/irq.c | 397 ++++++++++++++++++ 2 files changed, 399 insertions(+) create mode 100644 tools/testing/selftests/bpf/progs/irq.c diff --git a/tools/testing/selftests/bpf/prog_tests/verifier.c b/tools/testing/selftests/bpf/prog_tests/verifier.c index d9f65adb456b..b1b4d69c407a 100644 --- a/tools/testing/selftests/bpf/prog_tests/verifier.c +++ b/tools/testing/selftests/bpf/prog_tests/verifier.c @@ -98,6 +98,7 @@ #include "verifier_xdp_direct_packet_access.skel.h" #include "verifier_bits_iter.skel.h" #include "verifier_lsm.skel.h" +#include "irq.skel.h" #define MAX_ENTRIES 11 @@ -225,6 +226,7 @@ void test_verifier_xdp(void) { RUN(verifier_xdp); } void test_verifier_xdp_direct_packet_access(void) { RUN(verifier_xdp_direct_packet_access); } void test_verifier_bits_iter(void) { RUN(verifier_bits_iter); } void test_verifier_lsm(void) { RUN(verifier_lsm); } +void test_irq(void) { RUN(irq); } void test_verifier_mtu(void) { diff --git a/tools/testing/selftests/bpf/progs/irq.c b/tools/testing/selftests/bpf/progs/irq.c new file mode 100644 index 000000000000..b5056ac17384 --- /dev/null +++ b/tools/testing/selftests/bpf/progs/irq.c @@ -0,0 +1,397 @@ +// SPDX-License-Identifier: GPL-2.0 +/* Copyright (c) 2024 Meta Platforms, Inc. and affiliates. 
*/ +#include <vmlinux.h> +#include <bpf/bpf_helpers.h> +#include "bpf_misc.h" + +unsigned long global_flags; + +extern void bpf_local_irq_save(unsigned long *) __weak __ksym; +extern void bpf_local_irq_restore(unsigned long *) __weak __ksym; +extern int bpf_copy_from_user_str(void *dst, u32 dst__sz, const void *unsafe_ptr__ign, u64 flags) __weak __ksym; + +SEC("?tc") +__failure __msg("arg#0 doesn't point to an irq flag on stack") +int irq_save_bad_arg(struct __sk_buff *ctx) +{ + bpf_local_irq_save(&global_flags); + return 0; +} + +SEC("?tc") +__failure __msg("arg#0 doesn't point to an irq flag on stack") +int irq_restore_bad_arg(struct __sk_buff *ctx) +{ + bpf_local_irq_restore(&global_flags); + return 0; +} + +SEC("?tc") +__failure __msg("BPF_EXIT instruction in main prog cannot be used inside bpf_local_irq_save-ed region") +int irq_restore_missing_2(struct __sk_buff *ctx) +{ + unsigned long flags1; + unsigned long flags2; + + bpf_local_irq_save(&flags1); + bpf_local_irq_save(&flags2); + return 0; +} + +SEC("?tc") +__failure __msg("BPF_EXIT instruction in main prog cannot be used inside bpf_local_irq_save-ed region") +int irq_restore_missing_3(struct __sk_buff *ctx) +{ + unsigned long flags1; + unsigned long flags2; + unsigned long flags3; + + bpf_local_irq_save(&flags1); + bpf_local_irq_save(&flags2); + bpf_local_irq_save(&flags3); + return 0; +} + +SEC("?tc") +__failure __msg("BPF_EXIT instruction in main prog cannot be used inside bpf_local_irq_save-ed region") +int irq_restore_missing_3_minus_2(struct __sk_buff *ctx) +{ + unsigned long flags1; + unsigned long flags2; + unsigned long flags3; + + bpf_local_irq_save(&flags1); + bpf_local_irq_save(&flags2); + bpf_local_irq_save(&flags3); + bpf_local_irq_restore(&flags3); + bpf_local_irq_restore(&flags2); + return 0; +} + +static __noinline void local_irq_save(unsigned long *flags) +{ + bpf_local_irq_save(flags); +} + +static __noinline void local_irq_restore(unsigned long *flags) +{ + bpf_local_irq_restore(flags); +} + +SEC("?tc") +__failure __msg("BPF_EXIT instruction in main prog cannot be used inside bpf_local_irq_save-ed region") +int irq_restore_missing_1_subprog(struct __sk_buff *ctx) +{ + unsigned long flags; + + local_irq_save(&flags); + return 0; +} + +SEC("?tc") +__failure __msg("BPF_EXIT instruction in main prog cannot be used inside bpf_local_irq_save-ed region") +int irq_restore_missing_2_subprog(struct __sk_buff *ctx) +{ + unsigned long flags1; + unsigned long flags2; + + local_irq_save(&flags1); + local_irq_save(&flags2); + return 0; +} + +SEC("?tc") +__failure __msg("BPF_EXIT instruction in main prog cannot be used inside bpf_local_irq_save-ed region") +int irq_restore_missing_3_subprog(struct __sk_buff *ctx) +{ + unsigned long flags1; + unsigned long flags2; + unsigned long flags3; + + local_irq_save(&flags1); + local_irq_save(&flags2); + local_irq_save(&flags3); + return 0; +} + +SEC("?tc") +__failure __msg("BPF_EXIT instruction in main prog cannot be used inside bpf_local_irq_save-ed region") +int irq_restore_missing_3_minus_2_subprog(struct __sk_buff *ctx) +{ + unsigned long flags1; + unsigned long flags2; + unsigned long flags3; + + local_irq_save(&flags1); + local_irq_save(&flags2); + local_irq_save(&flags3); + local_irq_restore(&flags3); + local_irq_restore(&flags2); + return 0; +} + +SEC("?tc") +__success +int irq_balance(struct __sk_buff *ctx) +{ + unsigned long flags; + + local_irq_save(&flags); + local_irq_restore(&flags); + return 0; +} + +SEC("?tc") +__success +int irq_balance_n(struct __sk_buff *ctx) +{ + unsigned long flags1; + unsigned
long flags2; + unsigned long flags3; + + local_irq_save(&flags1); + local_irq_save(&flags2); + local_irq_save(&flags3); + local_irq_restore(&flags3); + local_irq_restore(&flags2); + local_irq_restore(&flags1); + return 0; +} + +static __noinline void local_irq_balance(void) +{ + unsigned long flags; + + local_irq_save(&flags); + local_irq_restore(&flags); +} + +static __noinline void local_irq_balance_n(void) +{ + unsigned long flags1; + unsigned long flags2; + unsigned long flags3; + + local_irq_save(&flags1); + local_irq_save(&flags2); + local_irq_save(&flags3); + local_irq_restore(&flags3); + local_irq_restore(&flags2); + local_irq_restore(&flags1); +} + +SEC("?tc") +__success +int irq_balance_subprog(struct __sk_buff *ctx) +{ + local_irq_balance(); + return 0; +} + +SEC("?fentry.s/" SYS_PREFIX "sys_getpgid") +__failure __msg("sleepable helper bpf_copy_from_user#") +int irq_sleepable_helper(void *ctx) +{ + unsigned long flags; + u32 data; + + local_irq_save(&flags); + bpf_copy_from_user(&data, sizeof(data), NULL); + local_irq_restore(&flags); + return 0; +} + +SEC("?fentry.s/" SYS_PREFIX "sys_getpgid") +__failure __msg("kernel func bpf_copy_from_user_str is sleepable within IRQ-disabled region") +int irq_sleepable_kfunc(void *ctx) +{ + unsigned long flags; + u32 data; + + local_irq_save(&flags); + bpf_copy_from_user_str(&data, sizeof(data), NULL, 0); + local_irq_restore(&flags); + return 0; +} + +int __noinline global_local_irq_balance(void) +{ + local_irq_balance_n(); + return 0; +} + +SEC("?tc") +__failure __msg("global function calls are not allowed with IRQs disabled") +int irq_global_subprog(struct __sk_buff *ctx) +{ + unsigned long flags; + + bpf_local_irq_save(&flags); + global_local_irq_balance(); + bpf_local_irq_restore(&flags); + return 0; +} + +SEC("?tc") +__failure __msg("cannot restore irq state out of order") +int irq_restore_ooo(struct __sk_buff *ctx) +{ + unsigned long flags1; + unsigned long flags2; + + bpf_local_irq_save(&flags1); + bpf_local_irq_save(&flags2); + bpf_local_irq_restore(&flags1); + bpf_local_irq_restore(&flags2); + return 0; +} + +SEC("?tc") +__failure __msg("cannot restore irq state out of order") +int irq_restore_ooo_3(struct __sk_buff *ctx) +{ + unsigned long flags1; + unsigned long flags2; + unsigned long flags3; + + bpf_local_irq_save(&flags1); + bpf_local_irq_save(&flags2); + bpf_local_irq_restore(&flags2); + bpf_local_irq_save(&flags3); + bpf_local_irq_restore(&flags1); + bpf_local_irq_restore(&flags3); + return 0; +} + +static __noinline void local_irq_save_3(unsigned long *flags1, unsigned long *flags2, + unsigned long *flags3) +{ + local_irq_save(flags1); + local_irq_save(flags2); + local_irq_save(flags3); +} + +SEC("?tc") +__success +int irq_restore_3_subprog(struct __sk_buff *ctx) +{ + unsigned long flags1; + unsigned long flags2; + unsigned long flags3; + + local_irq_save_3(&flags1, &flags2, &flags3); + bpf_local_irq_restore(&flags3); + bpf_local_irq_restore(&flags2); + bpf_local_irq_restore(&flags1); + return 0; +} + +SEC("?tc") +__failure __msg("cannot restore irq state out of order") +int irq_restore_4_subprog(struct __sk_buff *ctx) +{ + unsigned long flags1; + unsigned long flags2; + unsigned long flags3; + unsigned long flags4; + + local_irq_save_3(&flags1, &flags2, &flags3); + bpf_local_irq_restore(&flags3); + bpf_local_irq_save(&flags4); + bpf_local_irq_restore(&flags4); + bpf_local_irq_restore(&flags1); + return 0; +} + +SEC("?tc") +__failure __msg("cannot restore irq state out of order") +int irq_restore_ooo_3_subprog(struct 
__sk_buff *ctx) +{ + unsigned long flags1; + unsigned long flags2; + unsigned long flags3; + + local_irq_save_3(&flags1, &flags2, &flags3); + bpf_local_irq_restore(&flags3); + bpf_local_irq_restore(&flags2); + bpf_local_irq_save(&flags3); + bpf_local_irq_restore(&flags1); + return 0; +} + +SEC("?tc") +__failure __msg("expected an initialized") +int irq_restore_invalid(struct __sk_buff *ctx) +{ + unsigned long flags1; + unsigned long flags = 0xfaceb00c; + + bpf_local_irq_save(&flags1); + bpf_local_irq_restore(&flags); + return 0; +} + +SEC("?tc") +__failure __msg("expected uninitialized") +int irq_save_invalid(struct __sk_buff *ctx) +{ + unsigned long flags1; + + bpf_local_irq_save(&flags1); + bpf_local_irq_save(&flags1); + return 0; +} + +SEC("?tc") +__failure __msg("expected an initialized") +int irq_restore_iter(struct __sk_buff *ctx) +{ + struct bpf_iter_num it; + + bpf_iter_num_new(&it, 0, 42); + bpf_local_irq_restore((unsigned long *)&it); + return 0; +} + +SEC("?tc") +__failure __msg("Unreleased reference id=1") +int irq_save_iter(struct __sk_buff *ctx) +{ + struct bpf_iter_num it; + + /* Ensure same sized slot has st->ref_obj_id set, so we reject based on + * slot_type != STACK_IRQ_FLAG... + */ + _Static_assert(sizeof(it) == sizeof(unsigned long), "broken iterator size"); + + bpf_iter_num_new(&it, 0, 42); + bpf_local_irq_save((unsigned long *)&it); + bpf_local_irq_restore((unsigned long *)&it); + return 0; +} + +SEC("?tc") +__failure __msg("expected an initialized") +int irq_flag_overwrite(struct __sk_buff *ctx) +{ + unsigned long flags; + + bpf_local_irq_save(&flags); + flags = 0xdeadbeef; + bpf_local_irq_restore(&flags); + return 0; +} + +SEC("?tc") +__failure __msg("expected an initialized") +int irq_flag_overwrite_partial(struct __sk_buff *ctx) +{ + unsigned long flags; + + bpf_local_irq_save(&flags); + *(((char *)&flags) + 1) = 0xff; + bpf_local_irq_restore(&flags); + return 0; +} + +char _license[] SEC("license") = "GPL";
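A closing note on the out-of-order cases exercised above: the verifier tracks saved IRQ flags as a stack, with state->active_irq_id always naming the most recent save, so restores must happen in reverse order of saves. A minimal annotated sketch of the rejected pattern follows; the comments are editorial, mapping the irq_restore_ooo selftest above to the verifier internals from the kfunc patch earlier in this series, and the declarations are as in progs/irq.c:

SEC("tc")
int irq_restore_out_of_order(struct __sk_buff *ctx)
{
	unsigned long flags1;
	unsigned long flags2;

	bpf_local_irq_save(&flags1);	/* verifier: active_irq_id = id1 */
	bpf_local_irq_save(&flags2);	/* verifier: active_irq_id = id2 */
	bpf_local_irq_restore(&flags1);	/* id1 != active_irq_id, so
					 * release_irq_state() returns -EACCES:
					 * "cannot restore irq state out of order"
					 */
	bpf_local_irq_restore(&flags2);
	return 0;
}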