From patchwork Fri Mar 1 03:37:31 2024
X-Patchwork-Submitter: Alexei Starovoitov
X-Patchwork-Id: 13577928
X-Patchwork-Delegate: bpf@iogearbox.net
From: Alexei Starovoitov
To: bpf@vger.kernel.org
Cc: daniel@iogearbox.net, andrii@kernel.org, martin.lau@kernel.org,
    memxor@gmail.com, eddyz87@gmail.com, kernel-team@fb.com
Subject: [PATCH v3 bpf-next 1/4] bpf: Introduce may_goto instruction
Date: Thu, 29 Feb 2024 19:37:31 -0800
Message-Id: <20240301033734.95939-2-alexei.starovoitov@gmail.com>
In-Reply-To: <20240301033734.95939-1-alexei.starovoitov@gmail.com>
References: <20240301033734.95939-1-alexei.starovoitov@gmail.com>

From: Alexei Starovoitov

Introduce the may_goto instruction, which acts on a hidden bpf_iter_num,
so that bpf_iter_num_new() and bpf_iter_num_destroy() don't need to be
called explicitly. It can be used in any normal "for" or "while" loop, like

  for (i = zero; i < cnt; cond_break, i++) {

The verifier recognizes that may_goto is used in the program, reserves an
additional 8 bytes of stack, initializes them in the subprog prologue, and
replaces the may_goto instruction with:

  aux_reg = *(u64 *)(fp - 40)
  if aux_reg == 0 goto pc+off
  aux_reg -= 1
  *(u64 *)(fp - 40) = aux_reg

The may_goto instruction can be used by LLVM to implement __builtin_memcpy
and __builtin_strcmp.

may_goto is not a full substitute for the bpf_for() macro. bpf_for()
doesn't have an induction variable that the verifier sees as precise, so
'i' in bpf_for(i, 0, 100) is seen as imprecise and bounded. But when the
code is written as:

  for (i = 0; i < 100; cond_break, i++)

the verifier sees 'i' as a precise constant zero, hence cond_break (aka
may_goto) doesn't help the loop converge. A static or global variable can
be used as a workaround:

  static int zero = 0;
  for (i = zero; i < 100; cond_break, i++) // works!
may_goto works well with arena pointers that don't need to be
bounds-checked on every iteration. Load/store from arena returns
imprecise unbounded scalars.

Signed-off-by: Alexei Starovoitov
---
 include/linux/bpf_verifier.h   |   2 +
 include/uapi/linux/bpf.h       |   1 +
 kernel/bpf/core.c              |   1 +
 kernel/bpf/disasm.c            |   3 +
 kernel/bpf/verifier.c          | 235 +++++++++++++++++++++++++--------
 tools/include/uapi/linux/bpf.h |   1 +
 6 files changed, 189 insertions(+), 54 deletions(-)

diff --git a/include/linux/bpf_verifier.h b/include/linux/bpf_verifier.h
index 84365e6dd85d..8bd8bb32bb28 100644
--- a/include/linux/bpf_verifier.h
+++ b/include/linux/bpf_verifier.h
@@ -449,6 +449,7 @@ struct bpf_verifier_state {
 	u32 jmp_history_cnt;
 	u32 dfs_depth;
 	u32 callback_unroll_depth;
+	struct bpf_reg_state may_goto_reg;
 };
 
 #define bpf_get_spilled_reg(slot, frame, mask) \
@@ -619,6 +620,7 @@ struct bpf_subprog_info {
 	u32 start; /* insn idx of function entry point */
 	u32 linfo_idx; /* The idx to the main_prog->aux->linfo */
 	u16 stack_depth; /* max. stack depth used by this function */
+	u16 stack_extra;
 	bool has_tail_call: 1;
 	bool tail_call_reachable: 1;
 	bool has_ld_abs: 1;

diff --git a/include/uapi/linux/bpf.h b/include/uapi/linux/bpf.h
index d2e6c5fcec01..8cf86566ad6d 100644
--- a/include/uapi/linux/bpf.h
+++ b/include/uapi/linux/bpf.h
@@ -42,6 +42,7 @@
 #define BPF_JSGE	0x70	/* SGE is signed '>=', GE in x86 */
 #define BPF_JSLT	0xc0	/* SLT is signed, '<' */
 #define BPF_JSLE	0xd0	/* SLE is signed, '<=' */
+#define BPF_JMA	0xe0	/* may_goto */
 
 #define BPF_CALL	0x80	/* function call */
 #define BPF_EXIT	0x90	/* function return */

diff --git a/kernel/bpf/core.c b/kernel/bpf/core.c
index 71c459a51d9e..ba6101447b49 100644
--- a/kernel/bpf/core.c
+++ b/kernel/bpf/core.c
@@ -1675,6 +1675,7 @@ bool bpf_opcode_in_insntable(u8 code)
 		[BPF_LD | BPF_IND | BPF_B] = true,
 		[BPF_LD | BPF_IND | BPF_H] = true,
 		[BPF_LD | BPF_IND | BPF_W] = true,
+		[BPF_JMP | BPF_JMA] = true,
 	};
 #undef BPF_INSN_3_TBL
 #undef BPF_INSN_2_TBL

diff --git a/kernel/bpf/disasm.c b/kernel/bpf/disasm.c
index 49940c26a227..598cd38af84c 100644
--- a/kernel/bpf/disasm.c
+++ b/kernel/bpf/disasm.c
@@ -322,6 +322,9 @@ void print_bpf_insn(const struct bpf_insn_cbs *cbs,
 	} else if (insn->code == (BPF_JMP | BPF_JA)) {
 		verbose(cbs->private_data, "(%02x) goto pc%+d\n",
 			insn->code, insn->off);
+	} else if (insn->code == (BPF_JMP | BPF_JMA)) {
+		verbose(cbs->private_data, "(%02x) may_goto pc%+d\n",
+			insn->code, insn->off);
 	} else if (insn->code == (BPF_JMP32 | BPF_JA)) {
 		verbose(cbs->private_data, "(%02x) gotol pc%+d\n",
 			insn->code, insn->imm);

diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index 1c34b91b9583..a50395872d58 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -1441,6 +1441,7 @@ static int copy_verifier_state(struct bpf_verifier_state *dst_state,
 		if (err)
 			return err;
 	}
+	dst_state->may_goto_reg = src->may_goto_reg;
 	return 0;
 }
 
@@ -7878,6 +7879,43 @@ static int widen_imprecise_scalars(struct bpf_verifier_env *env,
 	return 0;
 }
+static bool is_may_goto_insn(struct bpf_verifier_env *env, int insn_idx)
+{
+	return env->prog->insnsi[insn_idx].code == (BPF_JMP | BPF_JMA);
+}
+
+static struct bpf_reg_state *get_iter_reg_meta(struct bpf_verifier_state *st,
+					       struct bpf_kfunc_call_arg_meta *meta)
+{
+	int iter_frameno = meta->iter.frameno;
+	int iter_spi = meta->iter.spi;
+
+	return &st->frame[iter_frameno]->stack[iter_spi].spilled_ptr;
+}
+
+static struct bpf_reg_state *get_iter_reg(struct bpf_verifier_env *env,
+					  struct bpf_verifier_state *st, int insn_idx)
+{
+	struct bpf_reg_state *iter_reg;
+	struct bpf_func_state *frame;
+	int spi;
+
+	if (is_may_goto_insn(env, insn_idx))
+		return &st->may_goto_reg;
+
+	frame = st->frame[st->curframe];
+	/* btf_check_iter_kfuncs() enforces that
+	 * iter state pointer is always the first arg
+	 */
+	iter_reg = &frame->regs[BPF_REG_1];
+	/* current state is valid due to states_equal(),
+	 * so we can assume valid iter and reg state,
+	 * no need for extra (re-)validations
+	 */
+	spi = __get_spi(iter_reg->off + (s32)iter_reg->var_off.value);
+	return &st->frame[iter_reg->frameno]->stack[spi].spilled_ptr;
+}
+
 /* process_iter_next_call() is called when verifier gets to iterator's next
  * "method" (e.g., bpf_iter_num_next() for numbers iterator) call. We'll refer
  * to it as just "iter_next()" in comments below.
@@ -7957,17 +7995,18 @@ static int widen_imprecise_scalars(struct bpf_verifier_env *env,
  *	bpf_iter_num_destroy(&it);
  */
 static int process_iter_next_call(struct bpf_verifier_env *env, int insn_idx,
-				  struct bpf_kfunc_call_arg_meta *meta)
+				  struct bpf_kfunc_call_arg_meta *meta, bool may_goto)
 {
 	struct bpf_verifier_state *cur_st = env->cur_state, *queued_st, *prev_st;
 	struct bpf_func_state *cur_fr = cur_st->frame[cur_st->curframe], *queued_fr;
 	struct bpf_reg_state *cur_iter, *queued_iter;
-	int iter_frameno = meta->iter.frameno;
-	int iter_spi = meta->iter.spi;
 
 	BTF_TYPE_EMIT(struct bpf_iter);
 
-	cur_iter = &env->cur_state->frame[iter_frameno]->stack[iter_spi].spilled_ptr;
+	if (may_goto)
+		cur_iter = &cur_st->may_goto_reg;
+	else
+		cur_iter = get_iter_reg_meta(cur_st, meta);
 
 	if (cur_iter->iter.state != BPF_ITER_STATE_ACTIVE &&
 	    cur_iter->iter.state != BPF_ITER_STATE_DRAINED) {
@@ -7990,25 +8029,32 @@ static int process_iter_next_call(struct bpf_verifier_env *env, int insn_idx,
 		 * right at this instruction.
 		 */
 		prev_st = find_prev_entry(env, cur_st->parent, insn_idx);
+		/* branch out active iter state */
 		queued_st = push_stack(env, insn_idx + 1, insn_idx, false);
 		if (!queued_st)
 			return -ENOMEM;
 
-		queued_iter = &queued_st->frame[iter_frameno]->stack[iter_spi].spilled_ptr;
+		if (may_goto)
+			queued_iter = &queued_st->may_goto_reg;
+		else
+			queued_iter = get_iter_reg_meta(queued_st, meta);
 		queued_iter->iter.state = BPF_ITER_STATE_ACTIVE;
 		queued_iter->iter.depth++;
 		if (prev_st)
 			widen_imprecise_scalars(env, prev_st, queued_st);
 
-		queued_fr = queued_st->frame[queued_st->curframe];
-		mark_ptr_not_null_reg(&queued_fr->regs[BPF_REG_0]);
+		if (!may_goto) {
+			queued_fr = queued_st->frame[queued_st->curframe];
+			mark_ptr_not_null_reg(&queued_fr->regs[BPF_REG_0]);
+		}
 	}
 
 	/* switch to DRAINED state, but keep the depth unchanged */
 	/* mark current iter state as drained and assume returned NULL */
 	cur_iter->iter.state = BPF_ITER_STATE_DRAINED;
-	__mark_reg_const_zero(env, &cur_fr->regs[BPF_REG_0]);
+	if (!may_goto)
+		__mark_reg_const_zero(env, &cur_fr->regs[BPF_REG_0]);
 
 	return 0;
 }
@@ -12433,7 +12479,7 @@ static int check_kfunc_call(struct bpf_verifier_env *env, struct bpf_insn *insn,
 	}
 
 	if (is_iter_next_kfunc(&meta)) {
-		err = process_iter_next_call(env, insn_idx, &meta);
+		err = process_iter_next_call(env, insn_idx, &meta, false);
 		if (err)
 			return err;
 	}
@@ -14869,11 +14915,24 @@ static int check_cond_jmp_op(struct bpf_verifier_env *env,
 	int err;
 
 	/* Only conditional jumps are expected to reach here.
 	 */
-	if (opcode == BPF_JA || opcode > BPF_JSLE) {
+	if (opcode == BPF_JA || opcode > BPF_JMA) {
 		verbose(env, "invalid BPF_JMP/JMP32 opcode %x\n", opcode);
 		return -EINVAL;
 	}
 
+	if (opcode == BPF_JMA) {
+		if (insn->code != (BPF_JMP | BPF_JMA) ||
+		    insn->src_reg || insn->dst_reg) {
+			verbose(env, "invalid may_goto\n");
+			return -EINVAL;
+		}
+		err = process_iter_next_call(env, *insn_idx, NULL, true);
+		if (err)
+			return err;
+		*insn_idx += insn->off;
+		return 0;
+	}
+
 	/* check src2 operand */
 	err = check_reg_arg(env, insn->dst_reg, SRC_OP);
 	if (err)
@@ -15657,6 +15716,8 @@ static int visit_insn(int t, struct bpf_verifier_env *env)
 	default:
 		/* conditional jump with two edges */
 		mark_prune_point(env, t);
+		if (insn->code == (BPF_JMP | BPF_JMA))
+			mark_force_checkpoint(env, t);
 
 		ret = push_insn(t, t + 1, FALLTHROUGH, env);
 		if (ret)
@@ -16767,6 +16828,9 @@ static bool states_equal(struct bpf_verifier_env *env,
 	if (old->active_rcu_lock != cur->active_rcu_lock)
 		return false;
 
+	if (old->may_goto_reg.iter.state != cur->may_goto_reg.iter.state)
+		return false;
+
 	/* for states to be equal callsites have to be the same
 	 * and all frame states need to be equivalent
 	 */
@@ -17005,6 +17069,9 @@ static bool iter_active_depths_differ(struct bpf_verifier_state *old, struct bpf
 	struct bpf_func_state *state;
 	int i, fr;
 
+	if (old->may_goto_reg.iter.depth != cur->may_goto_reg.iter.depth)
+		return true;
+
 	for (fr = old->curframe; fr >= 0; fr--) {
 		state = old->frame[fr];
 		for (i = 0; i < state->allocated_stack / BPF_REG_SIZE; i++) {
@@ -17109,23 +17176,11 @@ static int is_state_visited(struct bpf_verifier_env *env, int insn_idx)
 			 * comparison would discard current state with r7=-32
 			 * => unsafe memory access at 11 would not be caught.
 			 */
-			if (is_iter_next_insn(env, insn_idx)) {
+			if (is_iter_next_insn(env, insn_idx) || is_may_goto_insn(env, insn_idx)) {
 				if (states_equal(env, &sl->state, cur, true)) {
-					struct bpf_func_state *cur_frame;
-					struct bpf_reg_state *iter_state, *iter_reg;
-					int spi;
+					struct bpf_reg_state *iter_state;
 
-					cur_frame = cur->frame[cur->curframe];
-					/* btf_check_iter_kfuncs() enforces that
-					 * iter state pointer is always the first arg
-					 */
-					iter_reg = &cur_frame->regs[BPF_REG_1];
-					/* current state is valid due to states_equal(),
-					 * so we can assume valid iter and reg state,
-					 * no need for extra (re-)validations
-					 */
-					spi = __get_spi(iter_reg->off + iter_reg->var_off.value);
-					iter_state = &func(env, iter_reg)->stack[spi].spilled_ptr;
+					iter_state = get_iter_reg(env, cur, insn_idx);
 					if (iter_state->iter.state == BPF_ITER_STATE_ACTIVE) {
 						update_loop_entry(cur, &sl->state);
 						goto hit;
@@ -19406,7 +19461,10 @@ static int do_misc_fixups(struct bpf_verifier_env *env)
 	struct bpf_insn insn_buf[16];
 	struct bpf_prog *new_prog;
 	struct bpf_map *map_ptr;
-	int i, ret, cnt, delta = 0;
+	int i, ret, cnt, delta = 0, cur_subprog = 0;
+	struct bpf_subprog_info *subprogs = env->subprog_info;
+	u16 stack_depth = subprogs[cur_subprog].stack_depth;
+	u16 stack_depth_extra = 0;
 
 	if (env->seen_exception && !env->exception_callback_subprog) {
 		struct bpf_insn patch[] = {
@@ -19426,7 +19484,7 @@ static int do_misc_fixups(struct bpf_verifier_env *env)
 		mark_subprog_exc_cb(env, env->exception_callback_subprog);
 	}
 
-	for (i = 0; i < insn_cnt; i++, insn++) {
+	for (i = 0; i < insn_cnt;) {
 		/* Make divide-by-zero exceptions impossible.
 		 */
 		if (insn->code == (BPF_ALU64 | BPF_MOD | BPF_X) ||
 		    insn->code == (BPF_ALU64 | BPF_DIV | BPF_X) ||
@@ -19465,7 +19523,7 @@ static int do_misc_fixups(struct bpf_verifier_env *env)
 			delta    += cnt - 1;
 			env->prog = prog = new_prog;
 			insn      = new_prog->insnsi + i + delta;
-			continue;
+			goto next_insn;
 		}
 
 		/* Implement LD_ABS and LD_IND with a rewrite, if supported by the program type. */
@@ -19485,7 +19543,7 @@ static int do_misc_fixups(struct bpf_verifier_env *env)
 			delta    += cnt - 1;
 			env->prog = prog = new_prog;
 			insn      = new_prog->insnsi + i + delta;
-			continue;
+			goto next_insn;
 		}
 
 		/* Rewrite pointer arithmetic to mitigate speculation attacks. */
@@ -19500,7 +19558,7 @@ static int do_misc_fixups(struct bpf_verifier_env *env)
 			aux = &env->insn_aux_data[i + delta];
 			if (!aux->alu_state ||
 			    aux->alu_state == BPF_ALU_NON_POINTER)
-				continue;
+				goto next_insn;
 
 			isneg = aux->alu_state & BPF_ALU_NEG_VALUE;
 			issrc = (aux->alu_state & BPF_ALU_SANITIZE) ==
@@ -19538,19 +19596,39 @@ static int do_misc_fixups(struct bpf_verifier_env *env)
 			delta    += cnt - 1;
 			env->prog = prog = new_prog;
 			insn      = new_prog->insnsi + i + delta;
-			continue;
+			goto next_insn;
+		}
+
+		if (insn->code == (BPF_JMP | BPF_JMA)) {
+			int stack_off = -stack_depth - 8;
+
+			stack_depth_extra = 8;
+			insn_buf[0] = BPF_LDX_MEM(BPF_DW, BPF_REG_AX, BPF_REG_10, stack_off);
+			insn_buf[1] = BPF_JMP_IMM(BPF_JEQ, BPF_REG_AX, 0, insn->off + 2);
+			insn_buf[2] = BPF_ALU64_IMM(BPF_SUB, BPF_REG_AX, 1);
+			insn_buf[3] = BPF_STX_MEM(BPF_DW, BPF_REG_10, BPF_REG_AX, stack_off);
+			cnt = 4;
+
+			new_prog = bpf_patch_insn_data(env, i + delta, insn_buf, cnt);
+			if (!new_prog)
+				return -ENOMEM;
+
+			delta += cnt - 1;
+			env->prog = prog = new_prog;
+			insn = new_prog->insnsi + i + delta;
+			goto next_insn;
 		}
 
 		if (insn->code != (BPF_JMP | BPF_CALL))
-			continue;
+			goto next_insn;
 		if (insn->src_reg == BPF_PSEUDO_CALL)
-			continue;
+			goto next_insn;
 		if (insn->src_reg == BPF_PSEUDO_KFUNC_CALL) {
 			ret = fixup_kfunc_call(env, insn, insn_buf, i + delta, &cnt);
 			if (ret)
 				return ret;
 			if (cnt == 0)
-				continue;
+				goto next_insn;
 
 			new_prog = bpf_patch_insn_data(env, i + delta, insn_buf, cnt);
 			if (!new_prog)
@@ -19559,7 +19637,7 @@ static int do_misc_fixups(struct bpf_verifier_env *env)
 			delta	 += cnt - 1;
 			env->prog = prog = new_prog;
 			insn	  = new_prog->insnsi + i + delta;
-			continue;
+			goto next_insn;
 		}
 
 		if (insn->imm == BPF_FUNC_get_route_realm)
@@ -19607,11 +19685,11 @@ static int do_misc_fixups(struct bpf_verifier_env *env)
 			}
 
 			insn->imm = ret + 1;
-			continue;
+			goto next_insn;
 		}
 
 		if (!bpf_map_ptr_unpriv(aux))
-			continue;
+			goto next_insn;
 
 		/* instead of changing every JIT dealing with tail_call
 		 * emit two extra insns:
@@ -19640,7 +19718,7 @@ static int do_misc_fixups(struct bpf_verifier_env *env)
 			delta    += cnt - 1;
 			env->prog = prog = new_prog;
 			insn      = new_prog->insnsi + i + delta;
-			continue;
+			goto next_insn;
 		}
 
 		if (insn->imm == BPF_FUNC_timer_set_callback) {
@@ -19752,7 +19830,7 @@ static int do_misc_fixups(struct bpf_verifier_env *env)
 			delta    += cnt - 1;
 			env->prog = prog = new_prog;
 			insn      = new_prog->insnsi + i + delta;
-			continue;
+			goto next_insn;
 		}
 
 		BUILD_BUG_ON(!__same_type(ops->map_lookup_elem,
@@ -19783,31 +19861,31 @@ static int do_misc_fixups(struct bpf_verifier_env *env)
 		switch (insn->imm) {
 		case BPF_FUNC_map_lookup_elem:
 			insn->imm = BPF_CALL_IMM(ops->map_lookup_elem);
-			continue;
+			goto next_insn;
 		case BPF_FUNC_map_update_elem:
 			insn->imm = BPF_CALL_IMM(ops->map_update_elem);
-			continue;
+			goto next_insn;
 		case BPF_FUNC_map_delete_elem:
 			insn->imm = BPF_CALL_IMM(ops->map_delete_elem);
-			continue;
+			goto next_insn;
 		case BPF_FUNC_map_push_elem:
 			insn->imm = BPF_CALL_IMM(ops->map_push_elem);
-			continue;
+			goto next_insn;
 		case BPF_FUNC_map_pop_elem:
 			insn->imm = BPF_CALL_IMM(ops->map_pop_elem);
-			continue;
+			goto next_insn;
 		case BPF_FUNC_map_peek_elem:
 			insn->imm = BPF_CALL_IMM(ops->map_peek_elem);
-			continue;
+			goto next_insn;
 		case BPF_FUNC_redirect_map:
 			insn->imm = BPF_CALL_IMM(ops->map_redirect);
-			continue;
+			goto next_insn;
 		case
 BPF_FUNC_for_each_map_elem:
 			insn->imm = BPF_CALL_IMM(ops->map_for_each_callback);
-			continue;
+			goto next_insn;
 		case BPF_FUNC_map_lookup_percpu_elem:
 			insn->imm = BPF_CALL_IMM(ops->map_lookup_percpu_elem);
-			continue;
+			goto next_insn;
 		}
 
 		goto patch_call_imm;
@@ -19835,7 +19913,7 @@ static int do_misc_fixups(struct bpf_verifier_env *env)
 			delta    += cnt - 1;
 			env->prog = prog = new_prog;
 			insn      = new_prog->insnsi + i + delta;
-			continue;
+			goto next_insn;
 		}
 
 		/* Implement bpf_get_func_arg inline. */
@@ -19860,7 +19938,7 @@ static int do_misc_fixups(struct bpf_verifier_env *env)
 			delta    += cnt - 1;
 			env->prog = prog = new_prog;
 			insn      = new_prog->insnsi + i + delta;
-			continue;
+			goto next_insn;
 		}
 
 		/* Implement bpf_get_func_ret inline. */
@@ -19888,7 +19966,7 @@ static int do_misc_fixups(struct bpf_verifier_env *env)
 			delta    += cnt - 1;
 			env->prog = prog = new_prog;
 			insn      = new_prog->insnsi + i + delta;
-			continue;
+			goto next_insn;
 		}
 
 		/* Implement get_func_arg_cnt inline. */
@@ -19903,7 +19981,7 @@ static int do_misc_fixups(struct bpf_verifier_env *env)
 			env->prog = prog = new_prog;
 			insn      = new_prog->insnsi + i + delta;
-			continue;
+			goto next_insn;
 		}
 
 		/* Implement bpf_get_func_ip inline.
 		 */
@@ -19918,7 +19996,7 @@ static int do_misc_fixups(struct bpf_verifier_env *env)
 			env->prog = prog = new_prog;
 			insn      = new_prog->insnsi + i + delta;
-			continue;
+			goto next_insn;
 		}
 
 		/* Implement bpf_kptr_xchg inline */
@@ -19936,7 +20014,7 @@ static int do_misc_fixups(struct bpf_verifier_env *env)
 			delta    += cnt - 1;
 			env->prog = prog = new_prog;
 			insn      = new_prog->insnsi + i + delta;
-			continue;
+			goto next_insn;
 		}
 patch_call_imm:
 		fn = env->ops->get_func_proto(insn->imm, env->prog);
@@ -19950,6 +20028,39 @@ static int do_misc_fixups(struct bpf_verifier_env *env)
 			return -EFAULT;
 		}
 		insn->imm = fn->func - __bpf_call_base;
+next_insn:
+		if (subprogs[cur_subprog + 1].start == i + delta + 1) {
+			subprogs[cur_subprog].stack_depth += stack_depth_extra;
+			subprogs[cur_subprog].stack_extra = stack_depth_extra;
+			cur_subprog++;
+			stack_depth = subprogs[cur_subprog].stack_depth;
+			stack_depth_extra = 0;
+		}
+		i++; insn++;
 	}
 
+	env->prog->aux->stack_depth = subprogs[0].stack_depth;
+	for (i = 0; i < env->subprog_cnt; i++) {
+		int subprog_start = subprogs[i].start, j;
+		int stack_slots = subprogs[i].stack_extra / 8;
+
+		if (stack_slots >= ARRAY_SIZE(insn_buf)) {
+			verbose(env, "verifier bug: stack_extra is too large\n");
+			return -EFAULT;
+		}
+
+		/* Add insns to subprog prologue to init extra stack */
+		for (j = 0; j < stack_slots; j++)
+			insn_buf[j] = BPF_ST_MEM(BPF_DW, BPF_REG_FP,
+						 -subprogs[i].stack_depth + j * 8,
+						 BPF_MAX_LOOPS);
+		if (j) {
+			insn_buf[j] = env->prog->insnsi[subprog_start];
+
+			new_prog = bpf_patch_insn_data(env, subprog_start, insn_buf, j + 1);
+			if (!new_prog)
+				return -ENOMEM;
+			env->prog = prog = new_prog;
+		}
+	}
 
 	/* Since poke tab is now finalized, publish aux to tracker.
 	 */
@@ -20140,6 +20251,21 @@ static void free_states(struct bpf_verifier_env *env)
 	}
 }
 
+static void init_may_goto_reg(struct bpf_reg_state *st)
+{
+	__mark_reg_known_zero(st);
+	st->type = PTR_TO_STACK;
+	st->live |= REG_LIVE_WRITTEN;
+	st->ref_obj_id = 0;
+	st->iter.btf = NULL;
+	st->iter.btf_id = 0;
+	/* Init register state to sane values.
+	 * Only iter.state and iter.depth are used during verification.
+	 */
+	st->iter.state = BPF_ITER_STATE_ACTIVE;
+	st->iter.depth = 0;
+}
+
 static int do_check_common(struct bpf_verifier_env *env, int subprog)
 {
 	bool pop_log = !(env->log.level & BPF_LOG_LEVEL2);
@@ -20157,6 +20283,7 @@ static int do_check_common(struct bpf_verifier_env *env, int subprog)
 	state->curframe = 0;
 	state->speculative = false;
 	state->branches = 1;
+	init_may_goto_reg(&state->may_goto_reg);
 	state->frame[0] = kzalloc(sizeof(struct bpf_func_state), GFP_KERNEL);
 	if (!state->frame[0]) {
 		kfree(state);

diff --git a/tools/include/uapi/linux/bpf.h b/tools/include/uapi/linux/bpf.h
index d2e6c5fcec01..8cf86566ad6d 100644
--- a/tools/include/uapi/linux/bpf.h
+++ b/tools/include/uapi/linux/bpf.h
@@ -42,6 +42,7 @@
 #define BPF_JSGE	0x70	/* SGE is signed '>=', GE in x86 */
 #define BPF_JSLT	0xc0	/* SLT is signed, '<' */
 #define BPF_JSLE	0xd0	/* SLE is signed, '<=' */
+#define BPF_JMA	0xe0	/* may_goto */
 
 #define BPF_CALL	0x80	/* function call */
 #define BPF_EXIT	0x90	/* function return */

From patchwork Fri Mar 1 03:37:32 2024
X-Patchwork-Submitter: Alexei Starovoitov
X-Patchwork-Id: 13577929
X-Patchwork-Delegate: bpf@iogearbox.net
From: Alexei Starovoitov
To: bpf@vger.kernel.org
Cc: daniel@iogearbox.net, andrii@kernel.org, martin.lau@kernel.org,
    memxor@gmail.com, eddyz87@gmail.com, kernel-team@fb.com
Subject: [PATCH v3 bpf-next 2/4] bpf: Recognize that two registers are safe when their ranges match
Date: Thu, 29 Feb 2024 19:37:32 -0800
Message-Id: <20240301033734.95939-3-alexei.starovoitov@gmail.com>
In-Reply-To:
<20240301033734.95939-1-alexei.starovoitov@gmail.com>
References: <20240301033734.95939-1-alexei.starovoitov@gmail.com>

From: Alexei Starovoitov

When open-coded iterators, bpf_loop, or may_goto are used, the following
two states are equivalent and it is safe to prune the search:

  cur state: fp-8_w=scalar(id=3,smin=umin=smin32=umin32=2,smax=umax=smax32=umax32=11,var_off=(0x0; 0xf))
  old state: fp-8_rw=scalar(id=2,smin=umin=smin32=umin32=1,smax=umax=smax32=umax32=11,var_off=(0x0; 0xf))

In other words, the "exact" state match should ignore liveness and
precision marks, since the open-coded iterator logic didn't complete
their propagation, but the range_within logic that applies to scalars,
ptr_to_mem, map_value, and pkt_ptr is safe to rely on.

Avoid doing such a comparison when the regular infinite-loop detection
logic is used, otherwise the bounded-loop logic will declare such an
"infinite loop" as a false positive. One such example is
not_an_inifinite_loop() in progs/verifier_loops1.c.
Signed-off-by: Alexei Starovoitov
---
 kernel/bpf/verifier.c | 32 +++++++++++++++++++-------------
 1 file changed, 19 insertions(+), 13 deletions(-)

diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index a50395872d58..f3b1ffc66ee6 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -7830,6 +7830,11 @@ static struct bpf_verifier_state *find_prev_entry(struct bpf_verifier_env *env,
 }
 
 static void reset_idmap_scratch(struct bpf_verifier_env *env);
+enum exact_level {
+	NOT_EXACT,
+	EXACT,
+	RANGE_WITHIN
+};
 static bool regs_exact(const struct bpf_reg_state *rold,
 		       const struct bpf_reg_state *rcur,
 		       struct bpf_idmap *idmap);
@@ -16281,8 +16286,8 @@ static int check_btf_info(struct bpf_verifier_env *env,
 }
 
 /* check %cur's range satisfies %old's */
-static bool range_within(struct bpf_reg_state *old,
-			 struct bpf_reg_state *cur)
+static bool range_within(const struct bpf_reg_state *old,
+			 const struct bpf_reg_state *cur)
 {
 	return old->umin_value <= cur->umin_value &&
 	       old->umax_value >= cur->umax_value &&
@@ -16448,12 +16453,13 @@ static bool regs_exact(const struct bpf_reg_state *rold,
 
 /* Returns true if (rold safe implies rcur safe) */
 static bool regsafe(struct bpf_verifier_env *env, struct bpf_reg_state *rold,
-		    struct bpf_reg_state *rcur, struct bpf_idmap *idmap, bool exact)
+		    struct bpf_reg_state *rcur, struct bpf_idmap *idmap,
+		    enum exact_level exact)
 {
-	if (exact)
+	if (exact == EXACT)
 		return regs_exact(rold, rcur, idmap);
 
-	if (!(rold->live & REG_LIVE_READ))
+	if (!(rold->live & REG_LIVE_READ) && exact != RANGE_WITHIN)
 		/* explored state didn't use this */
 		return true;
 	if (rold->type == NOT_INIT)
@@ -16495,7 +16501,7 @@ static bool regsafe(struct bpf_verifier_env *env, struct bpf_reg_state *rold,
 			return memcmp(rold, rcur, offsetof(struct bpf_reg_state, id)) == 0 &&
 			       check_scalar_ids(rold->id, rcur->id, idmap);
 		}
-		if (!rold->precise)
+		if (!rold->precise && exact != RANGE_WITHIN)
 			return true;
 		/* Why check_ids() for scalar registers?
* @@ -16606,7 +16612,7 @@ static struct bpf_reg_state *scalar_reg_for_stack(struct bpf_verifier_env *env, } static bool stacksafe(struct bpf_verifier_env *env, struct bpf_func_state *old, - struct bpf_func_state *cur, struct bpf_idmap *idmap, bool exact) + struct bpf_func_state *cur, struct bpf_idmap *idmap, enum exact_level exact) { int i, spi; @@ -16770,7 +16776,7 @@ static bool refsafe(struct bpf_func_state *old, struct bpf_func_state *cur, * the current state will reach 'bpf_exit' instruction safely */ static bool func_states_equal(struct bpf_verifier_env *env, struct bpf_func_state *old, - struct bpf_func_state *cur, bool exact) + struct bpf_func_state *cur, enum exact_level exact) { int i; @@ -16797,7 +16803,7 @@ static void reset_idmap_scratch(struct bpf_verifier_env *env) static bool states_equal(struct bpf_verifier_env *env, struct bpf_verifier_state *old, struct bpf_verifier_state *cur, - bool exact) + enum exact_level exact) { int i; @@ -17177,7 +17183,7 @@ static int is_state_visited(struct bpf_verifier_env *env, int insn_idx) * => unsafe memory access at 11 would not be caught. 
*/ if (is_iter_next_insn(env, insn_idx) || is_may_goto_insn(env, insn_idx)) { - if (states_equal(env, &sl->state, cur, true)) { + if (states_equal(env, &sl->state, cur, RANGE_WITHIN)) { struct bpf_reg_state *iter_state; iter_state = get_iter_reg(env, cur, insn_idx); @@ -17189,13 +17195,13 @@ static int is_state_visited(struct bpf_verifier_env *env, int insn_idx) goto skip_inf_loop_check; } if (calls_callback(env, insn_idx)) { - if (states_equal(env, &sl->state, cur, true)) + if (states_equal(env, &sl->state, cur, RANGE_WITHIN)) goto hit; goto skip_inf_loop_check; } /* attempt to detect infinite loop to avoid unnecessary doomed work */ if (states_maybe_looping(&sl->state, cur) && - states_equal(env, &sl->state, cur, true) && + states_equal(env, &sl->state, cur, EXACT) && !iter_active_depths_differ(&sl->state, cur) && sl->state.callback_unroll_depth == cur->callback_unroll_depth) { verbose_linfo(env, insn_idx, "; "); @@ -17252,7 +17258,7 @@ static int is_state_visited(struct bpf_verifier_env *env, int insn_idx) */ loop_entry = get_loop_entry(&sl->state); force_exact = loop_entry && loop_entry->branches > 0; - if (states_equal(env, &sl->state, cur, force_exact)) { + if (states_equal(env, &sl->state, cur, force_exact ? 
+						 EXACT : NOT_EXACT)) {
 			if (force_exact)
 				update_loop_entry(cur, loop_entry);
hit:

From patchwork Fri Mar 1 03:37:33 2024
X-Patchwork-Submitter: Alexei Starovoitov
X-Patchwork-Id: 13577930
X-Patchwork-Delegate: bpf@iogearbox.net
From: Alexei Starovoitov
To: bpf@vger.kernel.org
Cc: daniel@iogearbox.net, andrii@kernel.org, martin.lau@kernel.org, memxor@gmail.com, eddyz87@gmail.com, kernel-team@fb.com
Subject: [PATCH v3 bpf-next 3/4] bpf: Add cond_break macro
Date: Thu, 29 Feb 2024 19:37:33 -0800
Message-Id: <20240301033734.95939-4-alexei.starovoitov@gmail.com>
In-Reply-To: <20240301033734.95939-1-alexei.starovoitov@gmail.com>
References: <20240301033734.95939-1-alexei.starovoitov@gmail.com>

Use the may_goto instruction to implement the cond_break macro.

Ideally the macro would be written as:

  asm volatile goto(".byte 0xe5;
                     .byte 0;
                     .short (%l[l_break] - . - 4) / 8;
                     .long 0;

but LLVM doesn't recognize a fixup of a 2-byte PC-relative offset yet.
Hence use:

  asm volatile goto(".byte 0xe5;
                     .byte 0;
                     .long (%l[l_break] - . - 4) / 8;
                     .short 0;

which produces correct asm on little endian.

Signed-off-by: Alexei Starovoitov
---
 tools/testing/selftests/bpf/bpf_experimental.h | 12 ++++++++++++
 1 file changed, 12 insertions(+)

diff --git a/tools/testing/selftests/bpf/bpf_experimental.h b/tools/testing/selftests/bpf/bpf_experimental.h
index 0d749006d107..2d408d8b9b70 100644
--- a/tools/testing/selftests/bpf/bpf_experimental.h
+++ b/tools/testing/selftests/bpf/bpf_experimental.h
@@ -326,6 +326,18 @@ l_true:							\
 })
 #endif
 
+#define cond_break					\
+	({ __label__ l_break, l_continue;		\
+	 asm volatile goto(".byte 0xe5;			\
+			    .byte 0;			\
+			    .long (%l[l_break] - .
 - 4) / 8;	\
+			    .short 0"			\
+			   :::: l_break);		\
+	 goto l_continue;				\
+	 l_break: break;				\
+	 l_continue:;					\
+	 })
+
 #ifndef bpf_nop_mov
 #define bpf_nop_mov(var)				\
 	asm volatile("%[reg]=%[reg]"::[reg]"r"((short)var))

From patchwork Fri Mar 1 03:37:34 2024
X-Patchwork-Submitter: Alexei Starovoitov
X-Patchwork-Id: 13577931
X-Patchwork-Delegate: bpf@iogearbox.net
From: Alexei Starovoitov
To: bpf@vger.kernel.org
Cc: daniel@iogearbox.net, andrii@kernel.org, martin.lau@kernel.org, memxor@gmail.com, eddyz87@gmail.com, kernel-team@fb.com
Subject: [PATCH v3 bpf-next 4/4] selftests/bpf: Test may_goto
Date: Thu, 29 Feb 2024 19:37:34 -0800
Message-Id: <20240301033734.95939-5-alexei.starovoitov@gmail.com>
In-Reply-To: <20240301033734.95939-1-alexei.starovoitov@gmail.com>
References: <20240301033734.95939-1-alexei.starovoitov@gmail.com>

Add tests for the may_goto instruction via the cond_break macro.

Signed-off-by: Alexei Starovoitov
---
 tools/testing/selftests/bpf/DENYLIST.s390x    |  1 +
 .../bpf/progs/verifier_iterating_callbacks.c  | 72 ++++++++++++++++++-
 2 files changed, 70 insertions(+), 3 deletions(-)

diff --git a/tools/testing/selftests/bpf/DENYLIST.s390x b/tools/testing/selftests/bpf/DENYLIST.s390x
index 1a63996c0304..c6c31b960810 100644
--- a/tools/testing/selftests/bpf/DENYLIST.s390x
+++ b/tools/testing/selftests/bpf/DENYLIST.s390x
@@ -3,3 +3,4 @@ exceptions		# JIT does not support calling kfunc bpf_throw (exceptions)
 get_stack_raw_tp	# user_stack corrupted user stack (no backchain userspace)
 stacktrace_build_id	# compare_map_keys stackid_hmap vs. stackmap err -2 errno 2 (?)
+verifier_iter/cond_break
diff --git a/tools/testing/selftests/bpf/progs/verifier_iterating_callbacks.c b/tools/testing/selftests/bpf/progs/verifier_iterating_callbacks.c
index 5905e036e0ea..8476dc47623f 100644
--- a/tools/testing/selftests/bpf/progs/verifier_iterating_callbacks.c
+++ b/tools/testing/selftests/bpf/progs/verifier_iterating_callbacks.c
@@ -1,8 +1,6 @@
 // SPDX-License-Identifier: GPL-2.0
-
-#include
-#include
 #include "bpf_misc.h"
+#include "bpf_experimental.h"
 
 struct {
 	__uint(type, BPF_MAP_TYPE_ARRAY);
@@ -239,4 +237,72 @@ int bpf_loop_iter_limit_nested(void *unused)
 	return 1000 * a + b + c;
 }
 
+#define ARR_SZ 1000000
+int zero;
+char arr[ARR_SZ];
+
+SEC("socket")
+__success __retval(0xd495cdc0)
+int cond_break1(const void *ctx)
+{
+	unsigned int i;
+	unsigned int sum = 0;
+
+	for (i = zero; i < ARR_SZ; cond_break, i++)
+		sum += i;
+	for (i = zero; i < ARR_SZ; i++) {
+		barrier_var(i);
+		sum += i + arr[i];
+		cond_break;
+	}
+
+	return sum;
+}
+
+SEC("socket")
+__success __retval(999000000)
+int cond_break2(const void *ctx)
+{
+	int i, j;
+	int sum = 0;
+
+	for (i = zero; i < 1000; cond_break, i++)
+		for (j = zero; j < 1000; j++) {
+			sum += i + j;
+			cond_break;
+		}
+
+	return sum;
+}
+
+static __noinline int loop(void)
+{
+	int i, sum = 0;
+
+	for (i = zero; i <= 1000000; i++, cond_break)
+		sum += i;
+
+	return sum;
+}
+
+SEC("socket")
+__success __retval(0x6a5a2920)
+int cond_break3(const void *ctx)
+{
+	return loop();
+}
+
+SEC("socket")
+__success __retval(0x800000) /* BPF_MAX_LOOPS */
+int cond_break4(const void *ctx)
+{
+	int cnt = 0;
+
+	for (;;) {
+		cond_break;
+		cnt++;
+	}
+	return cnt;
+}
+
 char _license[] SEC("license") = "GPL";