From patchwork Tue Mar 5 04:52:16 2024
X-Patchwork-Submitter: Alexei Starovoitov
X-Patchwork-Id: 13581607
X-Patchwork-Delegate: bpf@iogearbox.net
From: Alexei Starovoitov
To: bpf@vger.kernel.org
Cc: daniel@iogearbox.net, andrii@kernel.org, martin.lau@kernel.org,
 memxor@gmail.com, eddyz87@gmail.com, john.fastabend@gmail.com,
 kernel-team@fb.com
Subject: [PATCH v5 bpf-next 1/4] bpf: Introduce may_goto instruction
Date: Mon, 4 Mar 2024 20:52:16 -0800
Message-Id: <20240305045219.66142-2-alexei.starovoitov@gmail.com>
In-Reply-To: <20240305045219.66142-1-alexei.starovoitov@gmail.com>
References: <20240305045219.66142-1-alexei.starovoitov@gmail.com>

From: Alexei Starovoitov

Introduce the may_goto instruction, which acts on a hidden bpf_iter_num, so
that bpf_iter_num_new() and bpf_iter_num_destroy() don't need to be called
explicitly. It can be used in any normal "for" or "while" loop, like

  for (i = zero; i < cnt; cond_break, i++) {

The verifier recognizes that may_goto is used in the program, reserves an
additional 8 bytes of stack, initializes them in the subprog prologue, and
replaces the may_goto instruction with:

  aux_reg = *(u64 *)(fp - 40)
  if aux_reg == 0 goto pc+off
  aux_reg -= 1
  *(u64 *)(fp - 40) = aux_reg

The may_goto instruction can be used by LLVM to implement __builtin_memcpy
and __builtin_strcmp.

may_goto is not a full substitute for the bpf_for() macro. bpf_for() doesn't
have an induction variable that the verifier sees, so 'i' in
bpf_for(i, 0, 100) is seen as imprecise and bounded. But when the code is
written as:

  for (i = 0; i < 100; cond_break, i++)

the verifier sees 'i' as a precise constant zero, hence cond_break (aka
may_goto) doesn't help the loop to converge. A static or global variable can
be used as a workaround:

  static int zero = 0;

  for (i = zero; i < 100; cond_break, i++) // works!
may_goto works well with arena pointers that don't need to be bounds-checked
on every iteration. Loads/stores from the arena return imprecise unbounded
scalars.

Reserve the new opcode BPF_JMP | BPF_JMA for the may_goto insn. JMA stands
for "jump maybe", "jump multipurpose", and "jump multi always". Since the
goto_or_nop insn was proposed, it may use the same opcode; may_goto vs
goto_or_nop can be distinguished by src_reg:

  code = BPF_JMP | BPF_JMA:
  src_reg = 0 - may_goto
  src_reg = 1 - goto_or_nop

We could have reused BPF_JMP | BPF_JA like:

  src_reg = 0 - normal goto
  src_reg = 1 - may_goto
  src_reg = 2 - goto_or_nop

but JA is a real insn and it's unconditional, while may_goto and goto_or_nop
are pseudo instructions, and both are conditional. Hence it's better to have
a different opcode for them. Hence BPF_JMA.

Signed-off-by: Alexei Starovoitov
Acked-by: Andrii Nakryiko
---
 include/linux/bpf_verifier.h   |   2 +
 include/uapi/linux/bpf.h       |   1 +
 kernel/bpf/core.c              |   1 +
 kernel/bpf/disasm.c            |   3 +
 kernel/bpf/verifier.c          | 156 ++++++++++++++++++++++++++-------
 tools/include/uapi/linux/bpf.h |   1 +
 6 files changed, 134 insertions(+), 30 deletions(-)

diff --git a/include/linux/bpf_verifier.h b/include/linux/bpf_verifier.h
index 84365e6dd85d..917ca603059b 100644
--- a/include/linux/bpf_verifier.h
+++ b/include/linux/bpf_verifier.h
@@ -449,6 +449,7 @@ struct bpf_verifier_state {
 	u32 jmp_history_cnt;
 	u32 dfs_depth;
 	u32 callback_unroll_depth;
+	u32 may_goto_cnt;
 };

 #define bpf_get_spilled_reg(slot, frame, mask) \
@@ -619,6 +620,7 @@ struct bpf_subprog_info {
 	u32 start; /* insn idx of function entry point */
 	u32 linfo_idx; /* The idx to the main_prog->aux->linfo */
 	u16 stack_depth; /* max. stack depth used by this function */
+	u16 stack_extra;
 	bool has_tail_call: 1;
 	bool tail_call_reachable: 1;
 	bool has_ld_abs: 1;
diff --git a/include/uapi/linux/bpf.h b/include/uapi/linux/bpf.h
index a241f407c234..932ffef0dc88 100644
--- a/include/uapi/linux/bpf.h
+++ b/include/uapi/linux/bpf.h
@@ -42,6 +42,7 @@
 #define BPF_JSGE	0x70	/* SGE is signed '>=', GE in x86 */
 #define BPF_JSLT	0xc0	/* SLT is signed, '<' */
 #define BPF_JSLE	0xd0	/* SLE is signed, '<=' */
+#define BPF_JMA		0xe0	/* may_goto */

 #define BPF_CALL	0x80	/* function call */
 #define BPF_EXIT	0x90	/* function return */
diff --git a/kernel/bpf/core.c b/kernel/bpf/core.c
index 71c459a51d9e..ba6101447b49 100644
--- a/kernel/bpf/core.c
+++ b/kernel/bpf/core.c
@@ -1675,6 +1675,7 @@ bool bpf_opcode_in_insntable(u8 code)
 	[BPF_LD | BPF_IND | BPF_B] = true,
 	[BPF_LD | BPF_IND | BPF_H] = true,
 	[BPF_LD | BPF_IND | BPF_W] = true,
+	[BPF_JMP | BPF_JMA] = true,
 	};
 #undef BPF_INSN_3_TBL
 #undef BPF_INSN_2_TBL
diff --git a/kernel/bpf/disasm.c b/kernel/bpf/disasm.c
index 49940c26a227..598cd38af84c 100644
--- a/kernel/bpf/disasm.c
+++ b/kernel/bpf/disasm.c
@@ -322,6 +322,9 @@ void print_bpf_insn(const struct bpf_insn_cbs *cbs,
 	} else if (insn->code == (BPF_JMP | BPF_JA)) {
 		verbose(cbs->private_data, "(%02x) goto pc%+d\n",
 			insn->code, insn->off);
+	} else if (insn->code == (BPF_JMP | BPF_JMA)) {
+		verbose(cbs->private_data, "(%02x) may_goto pc%+d\n",
+			insn->code, insn->off);
 	} else if (insn->code == (BPF_JMP32 | BPF_JA)) {
 		verbose(cbs->private_data, "(%02x) gotol pc%+d\n",
 			insn->code, insn->imm);
diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index 4dd84e13bbfe..226bb65f9c2c 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -1429,6 +1429,7 @@ static int copy_verifier_state(struct bpf_verifier_state *dst_state,
 	dst_state->dfs_depth = src->dfs_depth;
 	dst_state->callback_unroll_depth = src->callback_unroll_depth;
 	dst_state->used_as_loop_entry = src->used_as_loop_entry;
+	dst_state->may_goto_cnt = src->may_goto_cnt;
 	for (i = 0; i <= src->curframe; i++) {
 		dst = dst_state->frame[i];
 		if (!dst) {
@@ -7880,6 +7881,11 @@ static int widen_imprecise_scalars(struct bpf_verifier_env *env,
 	return 0;
 }

+static bool is_may_goto_insn(struct bpf_verifier_env *env, int insn_idx)
+{
+	return env->prog->insnsi[insn_idx].code == (BPF_JMP | BPF_JMA);
+}
+
 /* process_iter_next_call() is called when verifier gets to iterator's next
  * "method" (e.g., bpf_iter_num_next() for numbers iterator) call. We'll refer
  * to it as just "iter_next()" in comments below.
@@ -14871,11 +14877,35 @@ static int check_cond_jmp_op(struct bpf_verifier_env *env,
 	int err;

 	/* Only conditional jumps are expected to reach here. */
-	if (opcode == BPF_JA || opcode > BPF_JSLE) {
+	if (opcode == BPF_JA || opcode > BPF_JMA) {
 		verbose(env, "invalid BPF_JMP/JMP32 opcode %x\n", opcode);
 		return -EINVAL;
 	}

+	if (opcode == BPF_JMA) {
+		struct bpf_verifier_state *cur_st = env->cur_state, *queued_st, *prev_st;
+		int idx = *insn_idx;
+
+		if (insn->code != (BPF_JMP | BPF_JMA) ||
+		    insn->src_reg || insn->dst_reg || insn->imm || insn->off == 0) {
+			verbose(env, "invalid may_goto off %d imm %d\n",
+				insn->off, insn->imm);
+			return -EINVAL;
+		}
+		prev_st = find_prev_entry(env, cur_st->parent, idx);
+
+		/* branch out 'fallthrough' insn as a new state to explore */
+		queued_st = push_stack(env, idx + 1, idx, false);
+		if (!queued_st)
+			return -ENOMEM;
+
+		queued_st->may_goto_cnt++;
+		if (prev_st)
+			widen_imprecise_scalars(env, prev_st, queued_st);
+		*insn_idx += insn->off;
+		return 0;
+	}
+
 	/* check src2 operand */
 	err = check_reg_arg(env, insn->dst_reg, SRC_OP);
 	if (err)
@@ -15659,6 +15689,8 @@ static int visit_insn(int t, struct bpf_verifier_env *env)
 	default:
 		/* conditional jump with two edges */
 		mark_prune_point(env, t);
+		if (insn->code == (BPF_JMP | BPF_JMA))
+			mark_force_checkpoint(env, t);

 		ret = push_insn(t, t + 1, FALLTHROUGH, env);
 		if (ret)
@@ -17135,6 +17167,13 @@ static int is_state_visited(struct bpf_verifier_env *env, int insn_idx)
 				}
 				goto skip_inf_loop_check;
 			}
+			if (is_may_goto_insn(env, insn_idx)) {
+				if (states_equal(env, &sl->state, cur, true)) {
+					update_loop_entry(cur, &sl->state);
+					goto hit;
+				}
+				goto skip_inf_loop_check;
+			}
 			if (calls_callback(env, insn_idx)) {
 				if (states_equal(env, &sl->state, cur, true))
 					goto hit;
@@ -17144,6 +17183,7 @@ static int is_state_visited(struct bpf_verifier_env *env, int insn_idx)
 			if (states_maybe_looping(&sl->state, cur) &&
 			    states_equal(env, &sl->state, cur, true) &&
 			    !iter_active_depths_differ(&sl->state, cur) &&
+			    sl->state.may_goto_cnt == cur->may_goto_cnt &&
 			    sl->state.callback_unroll_depth == cur->callback_unroll_depth) {
 				verbose_linfo(env, insn_idx, "; ");
 				verbose(env, "infinite loop detected at insn %d\n", insn_idx);
@@ -19408,7 +19448,10 @@ static int do_misc_fixups(struct bpf_verifier_env *env)
 	struct bpf_insn insn_buf[16];
 	struct bpf_prog *new_prog;
 	struct bpf_map *map_ptr;
-	int i, ret, cnt, delta = 0;
+	int i, ret, cnt, delta = 0, cur_subprog = 0;
+	struct bpf_subprog_info *subprogs = env->subprog_info;
+	u16 stack_depth = subprogs[cur_subprog].stack_depth;
+	u16 stack_depth_extra = 0;

 	if (env->seen_exception && !env->exception_callback_subprog) {
 		struct bpf_insn patch[] = {
@@ -19428,7 +19471,7 @@ static int do_misc_fixups(struct bpf_verifier_env *env)
 		mark_subprog_exc_cb(env, env->exception_callback_subprog);
 	}

-	for (i = 0; i < insn_cnt; i++, insn++) {
+	for (i = 0; i < insn_cnt;) {
 		/* Make divide-by-zero exceptions impossible. */
 		if (insn->code == (BPF_ALU64 | BPF_MOD | BPF_X) ||
 		    insn->code == (BPF_ALU64 | BPF_DIV | BPF_X) ||
@@ -19467,7 +19510,7 @@ static int do_misc_fixups(struct bpf_verifier_env *env)
 			delta += cnt - 1;
 			env->prog = prog = new_prog;
 			insn = new_prog->insnsi + i + delta;
-			continue;
+			goto next_insn;
 		}

 		/* Implement LD_ABS and LD_IND with a rewrite, if supported by the program type. */
@@ -19487,7 +19530,7 @@ static int do_misc_fixups(struct bpf_verifier_env *env)
 			delta += cnt - 1;
 			env->prog = prog = new_prog;
 			insn = new_prog->insnsi + i + delta;
-			continue;
+			goto next_insn;
 		}

 		/* Rewrite pointer arithmetic to mitigate speculation attacks. */
@@ -19502,7 +19545,7 @@ static int do_misc_fixups(struct bpf_verifier_env *env)
 			aux = &env->insn_aux_data[i + delta];
 			if (!aux->alu_state ||
 			    aux->alu_state == BPF_ALU_NON_POINTER)
-				continue;
+				goto next_insn;

 			isneg = aux->alu_state & BPF_ALU_NEG_VALUE;
 			issrc = (aux->alu_state & BPF_ALU_SANITIZE) ==
@@ -19540,19 +19583,39 @@ static int do_misc_fixups(struct bpf_verifier_env *env)
 			delta += cnt - 1;
 			env->prog = prog = new_prog;
 			insn = new_prog->insnsi + i + delta;
-			continue;
+			goto next_insn;
+		}
+
+		if (insn->code == (BPF_JMP | BPF_JMA)) {
+			int stack_off = -stack_depth - 8;
+
+			stack_depth_extra = 8;
+			insn_buf[0] = BPF_LDX_MEM(BPF_DW, BPF_REG_AX, BPF_REG_10, stack_off);
+			insn_buf[1] = BPF_JMP_IMM(BPF_JEQ, BPF_REG_AX, 0, insn->off + 2);
+			insn_buf[2] = BPF_ALU64_IMM(BPF_SUB, BPF_REG_AX, 1);
+			insn_buf[3] = BPF_STX_MEM(BPF_DW, BPF_REG_10, BPF_REG_AX, stack_off);
+			cnt = 4;
+
+			new_prog = bpf_patch_insn_data(env, i + delta, insn_buf, cnt);
+			if (!new_prog)
+				return -ENOMEM;
+
+			delta += cnt - 1;
+			env->prog = prog = new_prog;
+			insn = new_prog->insnsi + i + delta;
+			goto next_insn;
 		}

 		if (insn->code != (BPF_JMP | BPF_CALL))
-			continue;
+			goto next_insn;
 		if (insn->src_reg == BPF_PSEUDO_CALL)
-			continue;
+			goto next_insn;
 		if (insn->src_reg == BPF_PSEUDO_KFUNC_CALL) {
 			ret = fixup_kfunc_call(env, insn, insn_buf, i + delta, &cnt);
 			if (ret)
 				return ret;
 			if (cnt == 0)
-				continue;
+				goto next_insn;

 			new_prog = bpf_patch_insn_data(env, i + delta, insn_buf, cnt);
 			if (!new_prog)
@@ -19561,7 +19624,7 @@ static int do_misc_fixups(struct bpf_verifier_env *env)
 			delta += cnt - 1;
 			env->prog = prog = new_prog;
 			insn = new_prog->insnsi + i + delta;
-			continue;
+			goto next_insn;
 		}

 		if (insn->imm == BPF_FUNC_get_route_realm)
@@ -19609,11 +19672,11 @@ static int do_misc_fixups(struct bpf_verifier_env *env)
 			}

 			insn->imm = ret + 1;
-			continue;
+			goto next_insn;
 		}

 		if (!bpf_map_ptr_unpriv(aux))
-			continue;
+			goto next_insn;

 		/* instead of changing every JIT dealing with tail_call
 		 * emit two extra insns:
@@ -19642,7 +19705,7 @@ static int do_misc_fixups(struct bpf_verifier_env *env)
 			delta += cnt - 1;
 			env->prog = prog = new_prog;
 			insn = new_prog->insnsi + i + delta;
-			continue;
+			goto next_insn;
 		}

 		if (insn->imm == BPF_FUNC_timer_set_callback) {
@@ -19754,7 +19817,7 @@ static int do_misc_fixups(struct bpf_verifier_env *env)
 			delta += cnt - 1;
 			env->prog = prog = new_prog;
 			insn = new_prog->insnsi + i + delta;
-			continue;
+			goto next_insn;
 		}

 		BUILD_BUG_ON(!__same_type(ops->map_lookup_elem,
@@ -19785,31 +19848,31 @@ static int do_misc_fixups(struct bpf_verifier_env *env)
 			switch (insn->imm) {
 			case BPF_FUNC_map_lookup_elem:
 				insn->imm = BPF_CALL_IMM(ops->map_lookup_elem);
-				continue;
+				goto next_insn;
 			case BPF_FUNC_map_update_elem:
 				insn->imm = BPF_CALL_IMM(ops->map_update_elem);
-				continue;
+				goto next_insn;
 			case BPF_FUNC_map_delete_elem:
 				insn->imm = BPF_CALL_IMM(ops->map_delete_elem);
-				continue;
+				goto next_insn;
 			case BPF_FUNC_map_push_elem:
 				insn->imm = BPF_CALL_IMM(ops->map_push_elem);
-				continue;
+				goto next_insn;
 			case BPF_FUNC_map_pop_elem:
 				insn->imm = BPF_CALL_IMM(ops->map_pop_elem);
-				continue;
+				goto next_insn;
 			case BPF_FUNC_map_peek_elem:
 				insn->imm = BPF_CALL_IMM(ops->map_peek_elem);
-				continue;
+				goto next_insn;
 			case BPF_FUNC_redirect_map:
 				insn->imm = BPF_CALL_IMM(ops->map_redirect);
-				continue;
+				goto next_insn;
 			case BPF_FUNC_for_each_map_elem:
 				insn->imm = BPF_CALL_IMM(ops->map_for_each_callback);
-				continue;
+				goto next_insn;
 			case BPF_FUNC_map_lookup_percpu_elem:
 				insn->imm = BPF_CALL_IMM(ops->map_lookup_percpu_elem);
-				continue;
+				goto next_insn;
 			}

 			goto patch_call_imm;
@@ -19837,7 +19900,7 @@ static int do_misc_fixups(struct bpf_verifier_env *env)
 			delta += cnt - 1;
 			env->prog = prog = new_prog;
 			insn = new_prog->insnsi + i + delta;
-			continue;
+			goto next_insn;
 		}

 		/* Implement bpf_get_func_arg inline. */
@@ -19862,7 +19925,7 @@ static int do_misc_fixups(struct bpf_verifier_env *env)
 			delta += cnt - 1;
 			env->prog = prog = new_prog;
 			insn = new_prog->insnsi + i + delta;
-			continue;
+			goto next_insn;
 		}

 		/* Implement bpf_get_func_ret inline. */
@@ -19890,7 +19953,7 @@ static int do_misc_fixups(struct bpf_verifier_env *env)
 			delta += cnt - 1;
 			env->prog = prog = new_prog;
 			insn = new_prog->insnsi + i + delta;
-			continue;
+			goto next_insn;
 		}

 		/* Implement get_func_arg_cnt inline. */
@@ -19905,7 +19968,7 @@ static int do_misc_fixups(struct bpf_verifier_env *env)
 			env->prog = prog = new_prog;
 			insn = new_prog->insnsi + i + delta;
-			continue;
+			goto next_insn;
 		}

 		/* Implement bpf_get_func_ip inline. */
@@ -19920,7 +19983,7 @@ static int do_misc_fixups(struct bpf_verifier_env *env)
 			env->prog = prog = new_prog;
 			insn = new_prog->insnsi + i + delta;
-			continue;
+			goto next_insn;
 		}

 		/* Implement bpf_kptr_xchg inline */
@@ -19938,7 +20001,7 @@ static int do_misc_fixups(struct bpf_verifier_env *env)
 			delta += cnt - 1;
 			env->prog = prog = new_prog;
 			insn = new_prog->insnsi + i + delta;
-			continue;
+			goto next_insn;
 		}
patch_call_imm:
 		fn = env->ops->get_func_proto(insn->imm, env->prog);
@@ -19952,6 +20015,39 @@ static int do_misc_fixups(struct bpf_verifier_env *env)
 			return -EFAULT;
 		}
 		insn->imm = fn->func - __bpf_call_base;
+next_insn:
+		if (subprogs[cur_subprog + 1].start == i + delta + 1) {
+			subprogs[cur_subprog].stack_depth += stack_depth_extra;
+			subprogs[cur_subprog].stack_extra = stack_depth_extra;
+			cur_subprog++;
+			stack_depth = subprogs[cur_subprog].stack_depth;
+			stack_depth_extra = 0;
+		}
+		i++; insn++;
 	}

+	env->prog->aux->stack_depth = subprogs[0].stack_depth;
+	for (i = 0; i < env->subprog_cnt; i++) {
+		int subprog_start = subprogs[i].start, j;
+		int stack_slots = subprogs[i].stack_extra / 8;
+
+		if (stack_slots >= ARRAY_SIZE(insn_buf)) {
+			verbose(env, "verifier bug: stack_extra is too large\n");
+			return -EFAULT;
+		}
+
+		/* Add insns to subprog prologue to init extra stack */
+		for (j = 0; j < stack_slots; j++)
+			insn_buf[j] = BPF_ST_MEM(BPF_DW, BPF_REG_FP,
+						 -subprogs[i].stack_depth + j * 8, BPF_MAX_LOOPS);
+		if (j) {
+			insn_buf[j] = env->prog->insnsi[subprog_start];
+
+			new_prog = bpf_patch_insn_data(env, subprog_start, insn_buf, j + 1);
+			if (!new_prog)
+				return -ENOMEM;
+			env->prog = prog = new_prog;
+		}
+	}

 	/* Since poke tab is now finalized, publish aux to tracker. */
diff --git a/tools/include/uapi/linux/bpf.h b/tools/include/uapi/linux/bpf.h
index a241f407c234..932ffef0dc88 100644
--- a/tools/include/uapi/linux/bpf.h
+++ b/tools/include/uapi/linux/bpf.h
@@ -42,6 +42,7 @@
 #define BPF_JSGE	0x70	/* SGE is signed '>=', GE in x86 */
 #define BPF_JSLT	0xc0	/* SLT is signed, '<' */
 #define BPF_JSLE	0xd0	/* SLE is signed, '<=' */
+#define BPF_JMA		0xe0	/* may_goto */

 #define BPF_CALL	0x80	/* function call */
 #define BPF_EXIT	0x90	/* function return */

From patchwork Tue Mar 5 04:52:17 2024
X-Patchwork-Submitter: Alexei Starovoitov
X-Patchwork-Id: 13581608
X-Patchwork-Delegate: bpf@iogearbox.net
From: Alexei Starovoitov
To: bpf@vger.kernel.org
Cc: daniel@iogearbox.net, andrii@kernel.org, martin.lau@kernel.org,
 memxor@gmail.com, eddyz87@gmail.com, john.fastabend@gmail.com,
 kernel-team@fb.com
Subject: [PATCH v5 bpf-next 2/4] bpf: Recognize that two registers are safe
 when their ranges match
Date: Mon, 4 Mar 2024 20:52:17 -0800
Message-Id: <20240305045219.66142-3-alexei.starovoitov@gmail.com>
In-Reply-To: <20240305045219.66142-1-alexei.starovoitov@gmail.com>
References: <20240305045219.66142-1-alexei.starovoitov@gmail.com>

From: Alexei Starovoitov

When open-coded iterators, bpf_loop, or may_goto are used, the following two
states are equivalent and it is safe to prune the search:

cur state:
  fp-8_w=scalar(id=3,smin=umin=smin32=umin32=2,smax=umax=smax32=umax32=11,var_off=(0x0; 0xf))
old state:
  fp-8_rw=scalar(id=2,smin=umin=smin32=umin32=1,smax=umax=smax32=umax32=11,var_off=(0x0; 0xf))

In other words, an "exact" state match should ignore liveness and precision
marks, since the open-coded iterator logic didn't complete their propagation,
but the range_within logic that applies to scalars, ptr_to_mem, map_value,
and pkt_ptr is safe to rely on.

Avoid doing such a comparison when the regular infinite-loop detection logic
is used, otherwise the bounded-loop logic will declare such an "infinite
loop" a false positive. An example is not_an_inifinite_loop() in
progs/verifier_loops1.c.

Signed-off-by: Alexei Starovoitov
---
 kernel/bpf/verifier.c | 39 ++++++++++++++++++++++-----------------
 1 file changed, 22 insertions(+), 17 deletions(-)

diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index 226bb65f9c2c..74b55d5571c7 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -16254,8 +16254,8 @@ static int check_btf_info(struct bpf_verifier_env *env,
 }

 /* check %cur's range satisfies %old's */
-static bool range_within(struct bpf_reg_state *old,
-			 struct bpf_reg_state *cur)
+static bool range_within(const struct bpf_reg_state *old,
+			 const struct bpf_reg_state *cur)
 {
 	return old->umin_value <= cur->umin_value &&
 	       old->umax_value >= cur->umax_value &&
@@ -16419,21 +16419,26 @@ static bool regs_exact(const struct bpf_reg_state *rold,
 	       check_ids(rold->ref_obj_id, rcur->ref_obj_id, idmap);
 }

+enum exact_level {
+	NOT_EXACT,
+	EXACT,
+	RANGE_WITHIN
+};
+
 /* Returns true if (rold safe implies rcur safe) */
 static bool regsafe(struct bpf_verifier_env *env, struct bpf_reg_state *rold,
-		    struct bpf_reg_state *rcur, struct bpf_idmap *idmap, bool exact)
+		    struct bpf_reg_state *rcur, struct bpf_idmap *idmap,
+		    enum exact_level exact)
 {
-	if (exact)
+	if (exact == EXACT)
 		return regs_exact(rold, rcur, idmap);

-	if (!(rold->live & REG_LIVE_READ))
+	if (!(rold->live & REG_LIVE_READ) && exact != RANGE_WITHIN)
 		/* explored state didn't use this */
 		return true;
-	if (rold->type == NOT_INIT)
+	if (rold->type == NOT_INIT && exact != RANGE_WITHIN)
 		/* explored state can't have used this */
 		return true;
-	if (rcur->type == NOT_INIT)
-		return false;
 	/* Enforce that register types have to match exactly, including their
 	 * modifiers (like PTR_MAYBE_NULL, MEM_RDONLY, etc), as a general
@@ -16468,7 +16473,7 @@ static bool regsafe(struct bpf_verifier_env *env, struct bpf_reg_state *rold,
 		return memcmp(rold, rcur, offsetof(struct bpf_reg_state, id)) == 0 &&
 		       check_scalar_ids(rold->id, rcur->id, idmap);
 	}
-	if (!rold->precise)
+	if (!rold->precise && exact != RANGE_WITHIN)
 		return true;
 	/* Why check_ids() for scalar registers?
 	 *
@@ -16579,7 +16584,7 @@ static struct bpf_reg_state *scalar_reg_for_stack(struct bpf_verifier_env *env,
 }

 static bool stacksafe(struct bpf_verifier_env *env, struct bpf_func_state *old,
-		      struct bpf_func_state *cur, struct bpf_idmap *idmap, bool exact)
+		      struct bpf_func_state *cur, struct bpf_idmap *idmap, enum exact_level exact)
 {
 	int i, spi;
@@ -16743,7 +16748,7 @@ static bool refsafe(struct bpf_func_state *old, struct bpf_func_state *cur,
  * the current state will reach 'bpf_exit' instruction safely
  */
 static bool func_states_equal(struct bpf_verifier_env *env, struct bpf_func_state *old,
-			      struct bpf_func_state *cur, bool exact)
+			      struct bpf_func_state *cur, enum exact_level exact)
 {
 	int i;
@@ -16770,7 +16775,7 @@ static void reset_idmap_scratch(struct bpf_verifier_env *env)
 static bool states_equal(struct bpf_verifier_env *env,
 			 struct bpf_verifier_state *old,
 			 struct bpf_verifier_state *cur,
-			 bool exact)
+			 enum exact_level exact)
 {
 	int i;
@@ -17144,7 +17149,7 @@ static int is_state_visited(struct bpf_verifier_env *env, int insn_idx)
 			 * => unsafe memory access at 11 would not be caught.
 			 */
 			if (is_iter_next_insn(env, insn_idx)) {
-				if (states_equal(env, &sl->state, cur, true)) {
+				if (states_equal(env, &sl->state, cur, RANGE_WITHIN)) {
 					struct bpf_func_state *cur_frame;
 					struct bpf_reg_state *iter_state, *iter_reg;
 					int spi;
@@ -17168,20 +17173,20 @@ static int is_state_visited(struct bpf_verifier_env *env, int insn_idx)
 				goto skip_inf_loop_check;
 			}
 			if (is_may_goto_insn(env, insn_idx)) {
-				if (states_equal(env, &sl->state, cur, true)) {
+				if (states_equal(env, &sl->state, cur, RANGE_WITHIN)) {
 					update_loop_entry(cur, &sl->state);
 					goto hit;
 				}
 				goto skip_inf_loop_check;
 			}
 			if (calls_callback(env, insn_idx)) {
-				if (states_equal(env, &sl->state, cur, true))
+				if (states_equal(env, &sl->state, cur, RANGE_WITHIN))
 					goto hit;
 				goto skip_inf_loop_check;
 			}
 			/* attempt to detect infinite loop to avoid unnecessary doomed work */
 			if (states_maybe_looping(&sl->state, cur) &&
-			    states_equal(env, &sl->state, cur, true) &&
+			    states_equal(env, &sl->state, cur, EXACT) &&
 			    !iter_active_depths_differ(&sl->state, cur) &&
 			    sl->state.may_goto_cnt == cur->may_goto_cnt &&
 			    sl->state.callback_unroll_depth == cur->callback_unroll_depth) {
@@ -17239,7 +17244,7 @@ static int is_state_visited(struct bpf_verifier_env *env, int insn_idx)
 		 */
 		loop_entry = get_loop_entry(&sl->state);
 		force_exact = loop_entry && loop_entry->branches > 0;
-		if (states_equal(env, &sl->state, cur, force_exact)) {
+		if (states_equal(env, &sl->state, cur, force_exact ? RANGE_WITHIN : NOT_EXACT)) {
 			if (force_exact)
 				update_loop_entry(cur, loop_entry);
hit:

From patchwork Tue Mar 5 04:52:18 2024
X-Patchwork-Submitter: Alexei Starovoitov
X-Patchwork-Id: 13581609
X-Patchwork-Delegate: bpf@iogearbox.net
From: Alexei Starovoitov
To: bpf@vger.kernel.org
Cc: daniel@iogearbox.net, andrii@kernel.org, martin.lau@kernel.org,
 memxor@gmail.com, eddyz87@gmail.com, john.fastabend@gmail.com,
 kernel-team@fb.com
Subject: [PATCH v5 bpf-next 3/4] bpf: Add cond_break macro
Date: Mon, 4 Mar 2024 20:52:18 -0800
Message-Id: <20240305045219.66142-4-alexei.starovoitov@gmail.com>
In-Reply-To: <20240305045219.66142-1-alexei.starovoitov@gmail.com>
References: <20240305045219.66142-1-alexei.starovoitov@gmail.com>

From: Alexei Starovoitov

Use the may_goto instruction to implement the cond_break macro.

Ideally the macro would be written as:

  asm volatile goto(".byte 0xe5; .byte 0; .short %l[l_break] ... .long 0;

but LLVM doesn't support a 2-byte PC-relative fixup yet. Hence use:

  asm volatile goto(".byte 0xe5; .byte 0; .long %l[l_break] ... .short 0;

which produces correct asm on little endian.
Signed-off-by: Alexei Starovoitov
---
 tools/testing/selftests/bpf/bpf_experimental.h | 12 ++++++++++++
 1 file changed, 12 insertions(+)

diff --git a/tools/testing/selftests/bpf/bpf_experimental.h b/tools/testing/selftests/bpf/bpf_experimental.h
index 0d749006d107..bc9a0832ae72 100644
--- a/tools/testing/selftests/bpf/bpf_experimental.h
+++ b/tools/testing/selftests/bpf/bpf_experimental.h
@@ -326,6 +326,18 @@ l_true:								\
 })
 #endif
 
+#define cond_break					\
+	({ __label__ l_break, l_continue;		\
+	asm volatile goto("1:.byte 0xe5;		\
+		      .byte 0;				\
+		      .long ((%l[l_break] - 1b - 8) / 8) & 0xffff;	\
+		      .short 0"				\
+		      :::: l_break);			\
+	goto l_continue;				\
+	l_break: break;					\
+	l_continue:;					\
+	})
+
 #ifndef bpf_nop_mov
 #define bpf_nop_mov(var)				\
 	asm volatile("%[reg]=%[reg]"::[reg]"r"((short)var))

From patchwork Tue Mar 5 04:52:19 2024
From: Alexei Starovoitov
To: bpf@vger.kernel.org
Cc: daniel@iogearbox.net, andrii@kernel.org, martin.lau@kernel.org,
 memxor@gmail.com, eddyz87@gmail.com, john.fastabend@gmail.com,
 kernel-team@fb.com
Subject: [PATCH v5 bpf-next 4/4] selftests/bpf: Test may_goto
Date: Mon, 4 Mar 2024 20:52:19 -0800
Message-Id: <20240305045219.66142-5-alexei.starovoitov@gmail.com>
In-Reply-To: <20240305045219.66142-1-alexei.starovoitov@gmail.com>
References: <20240305045219.66142-1-alexei.starovoitov@gmail.com>

From: Alexei Starovoitov

Add tests for the may_goto instruction via the cond_break macro.
Signed-off-by: Alexei Starovoitov
---
 tools/testing/selftests/bpf/DENYLIST.s390x    |   1 +
 .../bpf/progs/verifier_iterating_callbacks.c  | 103 +++++++++++++++++-
 2 files changed, 101 insertions(+), 3 deletions(-)

diff --git a/tools/testing/selftests/bpf/DENYLIST.s390x b/tools/testing/selftests/bpf/DENYLIST.s390x
index 1a63996c0304..cb810a98e78f 100644
--- a/tools/testing/selftests/bpf/DENYLIST.s390x
+++ b/tools/testing/selftests/bpf/DENYLIST.s390x
@@ -3,3 +3,4 @@ exceptions	# JIT does not support calling kfunc bpf_throw (exceptions)
 get_stack_raw_tp	# user_stack corrupted user stack (no backchain userspace)
 stacktrace_build_id	# compare_map_keys stackid_hmap vs. stackmap err -2 errno 2 (?)
+verifier_iterating_callbacks
diff --git a/tools/testing/selftests/bpf/progs/verifier_iterating_callbacks.c b/tools/testing/selftests/bpf/progs/verifier_iterating_callbacks.c
index 5905e036e0ea..04cdbce4652f 100644
--- a/tools/testing/selftests/bpf/progs/verifier_iterating_callbacks.c
+++ b/tools/testing/selftests/bpf/progs/verifier_iterating_callbacks.c
@@ -1,8 +1,6 @@
 // SPDX-License-Identifier: GPL-2.0
-
-#include
-#include
 #include "bpf_misc.h"
+#include "bpf_experimental.h"
 
 struct {
 	__uint(type, BPF_MAP_TYPE_ARRAY);
@@ -239,4 +237,103 @@ int bpf_loop_iter_limit_nested(void *unused)
 	return 1000 * a + b + c;
 }
 
+#define ARR_SZ 1000000
+int zero;
+char arr[ARR_SZ];
+
+SEC("socket")
+__success __retval(0xd495cdc0)
+int cond_break1(const void *ctx)
+{
+	unsigned long i;
+	unsigned int sum = 0;
+
+	for (i = zero; i < ARR_SZ; cond_break, i++)
+		sum += i;
+	for (i = zero; i < ARR_SZ; i++) {
+		barrier_var(i);
+		sum += i + arr[i];
+		cond_break;
+	}
+
+	return sum;
+}
+
+SEC("socket")
+__success __retval(999000000)
+int cond_break2(const void *ctx)
+{
+	int i, j;
+	int sum = 0;
+
+	for (i = zero; i < 1000; cond_break, i++)
+		for (j = zero; j < 1000; j++) {
+			sum += i + j;
+			cond_break;
+		}
+
+	return sum;
+}
+
+static __noinline int loop(void)
+{
+	int i, sum = 0;
+
+	for (i = zero; i <= 1000000; i++, cond_break)
+		sum += i;
+
+	return sum;
+}
+
+SEC("socket")
+__success __retval(0x6a5a2920)
+int cond_break3(const void *ctx)
+{
+	return loop();
+}
+
+SEC("socket")
+__success __retval(1)
+int cond_break4(const void *ctx)
+{
+	int cnt = zero;
+
+	for (;;) {
+		/* should eventually break out of the loop */
+		cond_break;
+		cnt++;
+	}
+	/* if we looped a bit, it's a success */
+	return cnt > 1 ? 1 : 0;
+}
+
+static __noinline int static_subprog(void)
+{
+	int cnt = zero;
+
+	for (;;) {
+		cond_break;
+		cnt++;
+	}
+
+	return cnt;
+}
+
+SEC("socket")
+__success __retval(1)
+int cond_break5(const void *ctx)
+{
+	int cnt1 = zero, cnt2;
+
+	for (;;) {
+		cond_break;
+		cnt1++;
+	}
+
+	cnt2 = static_subprog();
+
+	/* main and subprog have to loop a bit */
+	return cnt1 > 1 && cnt2 > 1 ? 1 : 0;
+}
+
 char _license[] SEC("license") = "GPL";