From patchwork Mon Jun 12 16:07:58 2023
X-Patchwork-Submitter: Eduard Zingerman
X-Patchwork-Id: 13276941
X-Patchwork-Delegate: bpf@iogearbox.net
From: Eduard Zingerman
To: bpf@vger.kernel.org, ast@kernel.org
Cc: andrii@kernel.org, daniel@iogearbox.net, martin.lau@linux.dev,
	kernel-team@fb.com, yhs@fb.com, Eduard Zingerman
Subject: [PATCH bpf-next v5 1/4] bpf: use scalar ids in mark_chain_precision()
Date: Mon, 12 Jun 2023 19:07:58 +0300
Message-Id: <20230612160801.2804666-2-eddyz87@gmail.com>
In-Reply-To: <20230612160801.2804666-1-eddyz87@gmail.com>
References: <20230612160801.2804666-1-eddyz87@gmail.com>

Change mark_chain_precision() to track precision in situations like
below:

    r2 = unknown value
    ...
    --- state #0 ---
    ...
    r1 = r2                 // r1 and r2 now share the same ID
    ...
    --- state #1 {r1.id = A, r2.id = A} ---
    ...
    if (r2 > 10) goto exit; // find_equal_scalars() assigns range to r1
    ...
    --- state #2 {r1.id = A, r2.id = A} ---
    r3 = r10
    r3 += r1                // need to mark both r1 and r2

At the beginning of the processing of each state, ensure that if a
register with a scalar ID is marked as precise, all registers sharing
this ID are also marked as precise. This property would be used by a
follow-up change in regsafe().
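To illustrate the idea outside of verifier internals, a minimal
standalone C sketch of this two-pass scheme follows (names and types
are illustrative, not the kernel's; the actual implementation is
mark_precise_scalar_ids() in the diff below):

    #include <stdbool.h>

    #define MAX_IDS 64

    struct toy_reg {
    	unsigned int id;  /* 0 means "no ID assigned" */
    	bool precise;
    };

    /* Pass 1: collect IDs of registers already marked precise.
     * Pass 2: mark every register carrying one of those IDs.
     */
    static void propagate_precision_by_id(struct toy_reg *regs, int n)
    {
    	unsigned int ids[MAX_IDS];
    	int count = 0, i, j;

    	for (i = 0; i < n; i++)
    		if (regs[i].precise && regs[i].id && count < MAX_IDS)
    			ids[count++] = regs[i].id;

    	for (i = 0; i < n; i++)
    		for (j = 0; j < count; j++)
    			if (regs[i].id == ids[j])
    				regs[i].precise = true;
    }

The kernel version additionally walks all call frames and spilled
stack slots, and stores the ID set in per-env scratch space.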
Acked-by: Andrii Nakryiko
Signed-off-by: Eduard Zingerman
---
 include/linux/bpf_verifier.h                  |  10 +-
 kernel/bpf/verifier.c                         | 115 ++++++++++++++++++
 .../testing/selftests/bpf/verifier/precise.c  |   8 +-
 3 files changed, 128 insertions(+), 5 deletions(-)

diff --git a/include/linux/bpf_verifier.h b/include/linux/bpf_verifier.h
index 5fe589e11ac8..73a98f6240fd 100644
--- a/include/linux/bpf_verifier.h
+++ b/include/linux/bpf_verifier.h
@@ -559,6 +559,11 @@ struct backtrack_state {
 	u64 stack_masks[MAX_CALL_FRAMES];
 };
 
+struct bpf_idset {
+	u32 count;
+	u32 ids[BPF_ID_MAP_SIZE];
+};
+
 /* single container for all structs
  * one verifier_env per bpf_check() call
  */
@@ -590,7 +595,10 @@ struct bpf_verifier_env {
 	const struct bpf_line_info *prev_linfo;
 	struct bpf_verifier_log log;
 	struct bpf_subprog_info subprog_info[BPF_MAX_SUBPROGS + 1];
-	struct bpf_id_pair idmap_scratch[BPF_ID_MAP_SIZE];
+	union {
+		struct bpf_id_pair idmap_scratch[BPF_ID_MAP_SIZE];
+		struct bpf_idset idset_scratch;
+	};
 	struct {
 		int *insn_state;
 		int *insn_stack;
diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index ed79a93398f8..9b5f2433194f 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -3787,6 +3787,96 @@ static void mark_all_scalars_imprecise(struct bpf_verifier_env *env, struct bpf_
 	}
 }
 
+static bool idset_contains(struct bpf_idset *s, u32 id)
+{
+	u32 i;
+
+	for (i = 0; i < s->count; ++i)
+		if (s->ids[i] == id)
+			return true;
+
+	return false;
+}
+
+static int idset_push(struct bpf_idset *s, u32 id)
+{
+	if (WARN_ON_ONCE(s->count >= ARRAY_SIZE(s->ids)))
+		return -1;
+	s->ids[s->count++] = id;
+	return 0;
+}
+
+static void idset_reset(struct bpf_idset *s)
+{
+	s->count = 0;
+}
+
+/* Collect a set of IDs for all registers currently marked as precise in env->bt.
+ * Mark all registers with these IDs as precise.
+ */
+static int mark_precise_scalar_ids(struct bpf_verifier_env *env, struct bpf_verifier_state *st)
+{
+	struct bpf_idset *precise_ids = &env->idset_scratch;
+	struct backtrack_state *bt = &env->bt;
+	struct bpf_func_state *func;
+	struct bpf_reg_state *reg;
+	DECLARE_BITMAP(mask, 64);
+	int i, fr;
+
+	idset_reset(precise_ids);
+
+	for (fr = bt->frame; fr >= 0; fr--) {
+		func = st->frame[fr];
+
+		bitmap_from_u64(mask, bt_frame_reg_mask(bt, fr));
+		for_each_set_bit(i, mask, 32) {
+			reg = &func->regs[i];
+			if (!reg->id || reg->type != SCALAR_VALUE)
+				continue;
+			if (idset_push(precise_ids, reg->id))
+				return -EFAULT;
+		}
+
+		bitmap_from_u64(mask, bt_frame_stack_mask(bt, fr));
+		for_each_set_bit(i, mask, 64) {
+			if (i >= func->allocated_stack / BPF_REG_SIZE)
+				break;
+			if (!is_spilled_scalar_reg(&func->stack[i]))
+				continue;
+			reg = &func->stack[i].spilled_ptr;
+			if (!reg->id)
+				continue;
+			if (idset_push(precise_ids, reg->id))
+				return -EFAULT;
+		}
+	}
+
+	for (fr = 0; fr <= st->curframe; ++fr) {
+		func = st->frame[fr];
+
+		for (i = BPF_REG_0; i < BPF_REG_10; ++i) {
+			reg = &func->regs[i];
+			if (!reg->id)
+				continue;
+			if (!idset_contains(precise_ids, reg->id))
+				continue;
+			bt_set_frame_reg(bt, fr, i);
+		}
+		for (i = 0; i < func->allocated_stack / BPF_REG_SIZE; ++i) {
+			if (!is_spilled_scalar_reg(&func->stack[i]))
+				continue;
+			reg = &func->stack[i].spilled_ptr;
+			if (!reg->id)
+				continue;
+			if (!idset_contains(precise_ids, reg->id))
+				continue;
+			bt_set_frame_slot(bt, fr, i);
+		}
+	}
+
+	return 0;
+}
+
 /*
  * __mark_chain_precision() backtracks BPF program instruction sequence and
  * chain of verifier states making sure that register *regno* (if regno >= 0)
@@ -3918,6 +4008,31 @@ static int __mark_chain_precision(struct bpf_verifier_env *env, int regno)
 				bt->frame, last_idx, first_idx, subseq_idx);
 		}
 
+		/* If some register with scalar ID is marked as precise,
+		 * make sure that all registers sharing this ID are also precise.
+		 * This is needed to estimate effect of find_equal_scalars().
+		 * Do this at the last instruction of each state,
+		 * bpf_reg_state::id fields are valid for these instructions.
+		 *
+		 * Allows to track precision in situation like below:
+		 *
+		 *     r2 = unknown value
+		 *     ...
+		 *     --- state #0 ---
+		 *     ...
+		 *     r1 = r2                 // r1 and r2 now share the same ID
+		 *     ...
+		 *     --- state #1 {r1.id = A, r2.id = A} ---
+		 *     ...
+		 *     if (r2 > 10) goto exit; // find_equal_scalars() assigns range to r1
+		 *     ...
+		 *     --- state #2 {r1.id = A, r2.id = A} ---
+		 *     r3 = r10
+		 *     r3 += r1                // need to mark both r1 and r2
+		 */
+		if (mark_precise_scalar_ids(env, st))
+			return -EFAULT;
+
 		if (last_idx < 0) {
 			/* we are at the entry into subprog, which
 			 * is expected for global funcs, but only if
diff --git a/tools/testing/selftests/bpf/verifier/precise.c b/tools/testing/selftests/bpf/verifier/precise.c
index b8c0aae8e7ec..99272bb890da 100644
--- a/tools/testing/selftests/bpf/verifier/precise.c
+++ b/tools/testing/selftests/bpf/verifier/precise.c
@@ -46,7 +46,7 @@
 	mark_precise: frame0: regs=r2 stack= before 20\
 	mark_precise: frame0: parent state regs=r2 stack=:\
 	mark_precise: frame0: last_idx 19 first_idx 10\
-	mark_precise: frame0: regs=r2 stack= before 19\
+	mark_precise: frame0: regs=r2,r9 stack= before 19\
 	mark_precise: frame0: regs=r9 stack= before 18\
 	mark_precise: frame0: regs=r8,r9 stack= before 17\
 	mark_precise: frame0: regs=r0,r9 stack= before 15\
@@ -106,10 +106,10 @@
 	mark_precise: frame0: regs=r2 stack= before 22\
 	mark_precise: frame0: parent state regs=r2 stack=:\
 	mark_precise: frame0: last_idx 20 first_idx 20\
-	mark_precise: frame0: regs=r2 stack= before 20\
-	mark_precise: frame0: parent state regs=r2 stack=:\
+	mark_precise: frame0: regs=r2,r9 stack= before 20\
+	mark_precise: frame0: parent state regs=r2,r9 stack=:\
 	mark_precise: frame0: last_idx 19 first_idx 17\
-	mark_precise: frame0: regs=r2 stack= before 19\
+	mark_precise: frame0: regs=r2,r9 stack= before 19\
 	mark_precise: frame0: regs=r9 stack= before 18\
 	mark_precise: frame0: regs=r8,r9 stack= before 17\
 	mark_precise: frame0: parent state regs= stack=:",

From patchwork Mon Jun 12 16:07:59 2023
X-Patchwork-Submitter: Eduard Zingerman
X-Patchwork-Id: 13276942
X-Patchwork-Delegate: bpf@iogearbox.net
From: Eduard Zingerman
To: bpf@vger.kernel.org, ast@kernel.org
Cc: andrii@kernel.org, daniel@iogearbox.net, martin.lau@linux.dev,
	kernel-team@fb.com, yhs@fb.com, Eduard Zingerman
Subject: [PATCH bpf-next v5 2/4] selftests/bpf: check if mark_chain_precision() follows scalar ids
Date: Mon, 12 Jun 2023 19:07:59 +0300
Message-Id: <20230612160801.2804666-3-eddyz87@gmail.com>
In-Reply-To: <20230612160801.2804666-1-eddyz87@gmail.com>
References: <20230612160801.2804666-1-eddyz87@gmail.com>

Check the __mark_chain_precision() log to verify that scalars with the
same IDs are marked as precise. Use several scenarios to test that
precision marks are propagated through:
- registers of scalar type with the same ID within one state;
- registers of scalar type with the same ID across several states;
- registers of scalar type with the same ID across several stack frames;
- stack slots of scalar type with the same ID;
- multiple scalar IDs tracked independently.
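As a reading aid for the log expectations used throughout these tests:
each mark_precise line names the registers and stack slots that must be
precise just before the given instruction. An annotated example, taken
from the __msg() patterns of the first test below:

    frame0: regs=r0,r1,r2 stack= before 3: (bf) r2 = r0
    ^^^^^^  ^^^^^^^^^^^^^ ^^^^^^ ^^^^^^^^  ^^^^^^^^^^^^
    call    must-be-      must-be- insn    disassembled
    frame   precise regs  precise  idx 3   instruction
                          slots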
Acked-by: Andrii Nakryiko
Signed-off-by: Eduard Zingerman
---
 .../selftests/bpf/prog_tests/verifier.c       |   2 +
 .../selftests/bpf/progs/verifier_scalar_ids.c | 344 ++++++++++++++++++
 2 files changed, 346 insertions(+)
 create mode 100644 tools/testing/selftests/bpf/progs/verifier_scalar_ids.c

diff --git a/tools/testing/selftests/bpf/prog_tests/verifier.c b/tools/testing/selftests/bpf/prog_tests/verifier.c
index 531621adef42..070a13833c3f 100644
--- a/tools/testing/selftests/bpf/prog_tests/verifier.c
+++ b/tools/testing/selftests/bpf/prog_tests/verifier.c
@@ -50,6 +50,7 @@
 #include "verifier_regalloc.skel.h"
 #include "verifier_ringbuf.skel.h"
 #include "verifier_runtime_jit.skel.h"
+#include "verifier_scalar_ids.skel.h"
 #include "verifier_search_pruning.skel.h"
 #include "verifier_sock.skel.h"
 #include "verifier_spill_fill.skel.h"
@@ -150,6 +151,7 @@ void test_verifier_ref_tracking(void) { RUN(verifier_ref_tracking); }
 void test_verifier_regalloc(void) { RUN(verifier_regalloc); }
 void test_verifier_ringbuf(void) { RUN(verifier_ringbuf); }
 void test_verifier_runtime_jit(void) { RUN(verifier_runtime_jit); }
+void test_verifier_scalar_ids(void) { RUN(verifier_scalar_ids); }
 void test_verifier_search_pruning(void) { RUN(verifier_search_pruning); }
 void test_verifier_sock(void) { RUN(verifier_sock); }
 void test_verifier_spill_fill(void) { RUN(verifier_spill_fill); }
diff --git a/tools/testing/selftests/bpf/progs/verifier_scalar_ids.c b/tools/testing/selftests/bpf/progs/verifier_scalar_ids.c
new file mode 100644
index 000000000000..8a5203fb14ca
--- /dev/null
+++ b/tools/testing/selftests/bpf/progs/verifier_scalar_ids.c
@@ -0,0 +1,344 @@
+// SPDX-License-Identifier: GPL-2.0
+
+#include <linux/bpf.h>
+#include <bpf/bpf_helpers.h>
+#include "bpf_misc.h"
+
+/* Check that precision marks propagate through scalar IDs.
+ * Registers r{0,1,2} have the same scalar ID at the moment when r0 is
+ * marked to be precise, this mark is immediately propagated to r{1,2}.
+ */
+SEC("socket")
+__success __log_level(2)
+__msg("frame0: regs=r0,r1,r2 stack= before 4: (bf) r3 = r10")
+__msg("frame0: regs=r0,r1,r2 stack= before 3: (bf) r2 = r0")
+__msg("frame0: regs=r0,r1 stack= before 2: (bf) r1 = r0")
+__msg("frame0: regs=r0 stack= before 1: (57) r0 &= 255")
+__msg("frame0: regs=r0 stack= before 0: (85) call bpf_ktime_get_ns")
+__flag(BPF_F_TEST_STATE_FREQ)
+__naked void precision_same_state(void)
+{
+	asm volatile (
+	/* r0 = random number up to 0xff */
+	"call %[bpf_ktime_get_ns];"
+	"r0 &= 0xff;"
+	/* tie r0.id == r1.id == r2.id */
+	"r1 = r0;"
+	"r2 = r0;"
+	/* force r0 to be precise, this immediately marks r1 and r2 as
+	 * precise as well because of shared IDs
+	 */
+	"r3 = r10;"
+	"r3 += r0;"
+	"r0 = 0;"
+	"exit;"
+	:
+	: __imm(bpf_ktime_get_ns)
+	: __clobber_all);
+}
+
+/* Same as precision_same_state, but mark propagates through state /
+ * parent state boundary.
+ */
+SEC("socket")
+__success __log_level(2)
+__msg("frame0: last_idx 6 first_idx 5 subseq_idx -1")
+__msg("frame0: regs=r0,r1,r2 stack= before 5: (bf) r3 = r10")
+__msg("frame0: parent state regs=r0,r1,r2 stack=:")
+__msg("frame0: regs=r0,r1,r2 stack= before 4: (05) goto pc+0")
+__msg("frame0: regs=r0,r1,r2 stack= before 3: (bf) r2 = r0")
+__msg("frame0: regs=r0,r1 stack= before 2: (bf) r1 = r0")
+__msg("frame0: regs=r0 stack= before 1: (57) r0 &= 255")
+__msg("frame0: parent state regs=r0 stack=:")
+__msg("frame0: regs=r0 stack= before 0: (85) call bpf_ktime_get_ns")
+__flag(BPF_F_TEST_STATE_FREQ)
+__naked void precision_cross_state(void)
+{
+	asm volatile (
+	/* r0 = random number up to 0xff */
+	"call %[bpf_ktime_get_ns];"
+	"r0 &= 0xff;"
+	/* tie r0.id == r1.id == r2.id */
+	"r1 = r0;"
+	"r2 = r0;"
+	/* force checkpoint */
+	"goto +0;"
+	/* force r0 to be precise, this immediately marks r1 and r2 as
+	 * precise as well because of shared IDs
+	 */
+	"r3 = r10;"
+	"r3 += r0;"
+	"r0 = 0;"
+	"exit;"
+	:
+	: __imm(bpf_ktime_get_ns)
+	: __clobber_all);
+}
+
+/* Same as precision_same_state, but break one of the
+ * links, note that r1 is absent from regs=... in __msg below.
+ */
+SEC("socket")
+__success __log_level(2)
+__msg("frame0: regs=r0,r2 stack= before 5: (bf) r3 = r10")
+__msg("frame0: regs=r0,r2 stack= before 4: (b7) r1 = 0")
+__msg("frame0: regs=r0,r2 stack= before 3: (bf) r2 = r0")
+__msg("frame0: regs=r0 stack= before 2: (bf) r1 = r0")
+__msg("frame0: regs=r0 stack= before 1: (57) r0 &= 255")
+__msg("frame0: regs=r0 stack= before 0: (85) call bpf_ktime_get_ns")
+__flag(BPF_F_TEST_STATE_FREQ)
+__naked void precision_same_state_broken_link(void)
+{
+	asm volatile (
+	/* r0 = random number up to 0xff */
+	"call %[bpf_ktime_get_ns];"
+	"r0 &= 0xff;"
+	/* tie r0.id == r1.id == r2.id */
+	"r1 = r0;"
+	"r2 = r0;"
+	/* break link for r1, this is the only line that differs
+	 * compared to the previous test
+	 */
+	"r1 = 0;"
+	/* force r0 to be precise, this immediately marks r2 as
+	 * precise as well because of the shared ID
+	 */
+	"r3 = r10;"
+	"r3 += r0;"
+	"r0 = 0;"
+	"exit;"
+	:
+	: __imm(bpf_ktime_get_ns)
+	: __clobber_all);
+}
+
+/* Same as precision_same_state_broken_link, but with state /
+ * parent state boundary.
+ */
+SEC("socket")
+__success __log_level(2)
+__msg("frame0: regs=r0,r2 stack= before 6: (bf) r3 = r10")
+__msg("frame0: regs=r0,r2 stack= before 5: (b7) r1 = 0")
+__msg("frame0: parent state regs=r0,r2 stack=:")
+__msg("frame0: regs=r0,r1,r2 stack= before 4: (05) goto pc+0")
+__msg("frame0: regs=r0,r1,r2 stack= before 3: (bf) r2 = r0")
+__msg("frame0: regs=r0,r1 stack= before 2: (bf) r1 = r0")
+__msg("frame0: regs=r0 stack= before 1: (57) r0 &= 255")
+__msg("frame0: parent state regs=r0 stack=:")
+__msg("frame0: regs=r0 stack= before 0: (85) call bpf_ktime_get_ns")
+__flag(BPF_F_TEST_STATE_FREQ)
+__naked void precision_cross_state_broken_link(void)
+{
+	asm volatile (
+	/* r0 = random number up to 0xff */
+	"call %[bpf_ktime_get_ns];"
+	"r0 &= 0xff;"
+	/* tie r0.id == r1.id == r2.id */
+	"r1 = r0;"
+	"r2 = r0;"
+	/* force checkpoint; although the link between r1 and r{0,2}
+	 * is broken by the next statement, the current precision
+	 * tracking algorithm can't react to it and propagates the
+	 * mark for r1 to the parent state.
+	 */
+	"goto +0;"
+	/* break link for r1, this is the only line that differs
+	 * compared to precision_cross_state()
+	 */
+	"r1 = 0;"
+	/* force r0 to be precise, this immediately marks r2 as
+	 * precise as well because of the shared ID
+	 */
+	"r3 = r10;"
+	"r3 += r0;"
+	"r0 = 0;"
+	"exit;"
+	:
+	: __imm(bpf_ktime_get_ns)
+	: __clobber_all);
+}
+
+/* Check that precision marks propagate through scalar IDs.
+ * Use the same scalar ID in multiple stack frames, check that
+ * precision information is propagated up the call stack.
+ */
+SEC("socket")
+__success __log_level(2)
+__msg("11: (0f) r2 += r1")
+/* Current state */
+__msg("frame2: last_idx 11 first_idx 10 subseq_idx -1")
+__msg("frame2: regs=r1 stack= before 10: (bf) r2 = r10")
+__msg("frame2: parent state regs=r1 stack=")
+/* frame1.r{6,7} are marked because mark_precise_scalar_ids()
+ * looks for all registers with frame2.r1.id in the current state
+ */
+__msg("frame1: parent state regs=r6,r7 stack=")
+__msg("frame0: parent state regs=r6 stack=")
+/* Parent state */
+__msg("frame2: last_idx 8 first_idx 8 subseq_idx 10")
+__msg("frame2: regs=r1 stack= before 8: (85) call pc+1")
+/* frame1.r1 is marked because of backtracking of call instruction */
+__msg("frame1: parent state regs=r1,r6,r7 stack=")
+__msg("frame0: parent state regs=r6 stack=")
+/* Parent state */
+__msg("frame1: last_idx 7 first_idx 6 subseq_idx 8")
+__msg("frame1: regs=r1,r6,r7 stack= before 7: (bf) r7 = r1")
+__msg("frame1: regs=r1,r6 stack= before 6: (bf) r6 = r1")
+__msg("frame1: parent state regs=r1 stack=")
+__msg("frame0: parent state regs=r6 stack=")
+/* Parent state */
+__msg("frame1: last_idx 4 first_idx 4 subseq_idx 6")
+__msg("frame1: regs=r1 stack= before 4: (85) call pc+1")
+__msg("frame0: parent state regs=r1,r6 stack=")
+/* Parent state */
+__msg("frame0: last_idx 3 first_idx 1 subseq_idx 4")
+__msg("frame0: regs=r0,r1,r6 stack= before 3: (bf) r6 = r0")
+__msg("frame0: regs=r0,r1 stack= before 2: (bf) r1 = r0")
+__msg("frame0: regs=r0 stack= before 1: (57) r0 &= 255")
+__flag(BPF_F_TEST_STATE_FREQ)
+__naked void precision_many_frames(void)
+{
+	asm volatile (
+	/* r0 = random number up to 0xff */
+	"call %[bpf_ktime_get_ns];"
+	"r0 &= 0xff;"
+	/* tie r0.id == r1.id == r6.id */
+	"r1 = r0;"
+	"r6 = r0;"
+	"call precision_many_frames__foo;"
+	"exit;"
+	:
+	: __imm(bpf_ktime_get_ns)
+	: __clobber_all);
+}
+
+static __naked __noinline __used
+void precision_many_frames__foo(void)
+{
+	asm volatile (
+	/* conflate one of the register numbers (r6) with outer frame,
+	 * to verify that those are tracked independently
+	 */
+	"r6 = r1;"
+	"r7 = r1;"
+	"call precision_many_frames__bar;"
+	"exit"
+	::: __clobber_all);
+}
+
+static __naked __noinline __used
+void precision_many_frames__bar(void)
+{
+	asm volatile (
+	/* force r1 to be precise, this immediately marks:
+	 * - bar frame r1
+	 * - foo frame r{1,6,7}
+	 * - main frame r{1,6}
+	 */
+	"r2 = r10;"
+	"r2 += r1;"
+	"r0 = 0;"
+	"exit;"
+	::: __clobber_all);
+}
+
+/* Check that scalars with the same IDs are marked precise on stack as
+ * well as in registers.
+ */
+SEC("socket")
+__success __log_level(2)
+/* foo frame */
+__msg("frame1: regs=r1 stack=-8,-16 before 9: (bf) r2 = r10")
+__msg("frame1: regs=r1 stack=-8,-16 before 8: (7b) *(u64 *)(r10 -16) = r1")
+__msg("frame1: regs=r1 stack=-8 before 7: (7b) *(u64 *)(r10 -8) = r1")
+__msg("frame1: regs=r1 stack= before 4: (85) call pc+2")
+/* main frame */
+__msg("frame0: regs=r0,r1 stack=-8 before 3: (7b) *(u64 *)(r10 -8) = r1")
+__msg("frame0: regs=r0,r1 stack= before 2: (bf) r1 = r0")
+__msg("frame0: regs=r0 stack= before 1: (57) r0 &= 255")
+__flag(BPF_F_TEST_STATE_FREQ)
+__naked void precision_stack(void)
+{
+	asm volatile (
+	/* r0 = random number up to 0xff */
+	"call %[bpf_ktime_get_ns];"
+	"r0 &= 0xff;"
+	/* tie r0.id == r1.id == fp[-8].id */
+	"r1 = r0;"
+	"*(u64*)(r10 - 8) = r1;"
+	"call precision_stack__foo;"
+	"r0 = 0;"
+	"exit;"
+	:
+	: __imm(bpf_ktime_get_ns)
+	: __clobber_all);
+}
+
+static __naked __noinline __used
+void precision_stack__foo(void)
+{
+	asm volatile (
+	/* conflate stack slots of the current frame with the outer
+	 * frame, to verify that those are tracked independently
+	 */
+	"*(u64*)(r10 - 8) = r1;"
+	"*(u64*)(r10 - 16) = r1;"
+	/* force r1 to be precise, this immediately marks:
+	 * - foo frame r1,fp{-8,-16}
+	 * - main frame r1,fp{-8}
+	 */
+	"r2 = r10;"
+	"r2 += r1;"
+	"exit"
+	::: __clobber_all);
+}
+
+/* Use two separate scalar IDs to check that these are propagated
+ * independently.
+ */
+SEC("socket")
+__success __log_level(2)
+/* r{6,7} */
+__msg("11: (0f) r3 += r7")
+__msg("frame0: regs=r6,r7 stack= before 10: (bf) r3 = r10")
+/* ... skip some insns ... */
+__msg("frame0: regs=r6,r7 stack= before 3: (bf) r7 = r0")
+__msg("frame0: regs=r0,r6 stack= before 2: (bf) r6 = r0")
+/* r{8,9} */
+__msg("12: (0f) r3 += r9")
+__msg("frame0: regs=r8,r9 stack= before 11: (0f) r3 += r7")
+/* ... skip some insns ... */
+__msg("frame0: regs=r8,r9 stack= before 7: (bf) r9 = r0")
+__msg("frame0: regs=r0,r8 stack= before 6: (bf) r8 = r0")
+__flag(BPF_F_TEST_STATE_FREQ)
+__naked void precision_two_ids(void)
+{
+	asm volatile (
+	/* r6 = random number up to 0xff
+	 * r6.id == r7.id
+	 */
+	"call %[bpf_ktime_get_ns];"
+	"r0 &= 0xff;"
+	"r6 = r0;"
+	"r7 = r0;"
+	/* same, but for r{8,9} */
+	"call %[bpf_ktime_get_ns];"
+	"r0 &= 0xff;"
+	"r8 = r0;"
+	"r9 = r0;"
+	/* clear r0 id */
+	"r0 = 0;"
+	/* force checkpoint */
+	"goto +0;"
+	"r3 = r10;"
+	/* force r7 to be precise, this also marks r6 */
+	"r3 += r7;"
+	/* force r9 to be precise, this also marks r8 */
+	"r3 += r9;"
+	"exit;"
+	:
+	: __imm(bpf_ktime_get_ns)
+	: __clobber_all);
+}
+
+char _license[] SEC("license") = "GPL";

From patchwork Mon Jun 12 16:08:00 2023
X-Patchwork-Submitter: Eduard Zingerman
X-Patchwork-Id: 13276943
X-Patchwork-Delegate: bpf@iogearbox.net
From: Eduard Zingerman
To: bpf@vger.kernel.org, ast@kernel.org
Cc: andrii@kernel.org, daniel@iogearbox.net, martin.lau@linux.dev,
	kernel-team@fb.com, yhs@fb.com, Eduard Zingerman
Subject: [PATCH bpf-next v5 3/4] bpf: verify scalar ids mapping in regsafe() using check_ids()
Date: Mon, 12 Jun 2023 19:08:00 +0300
Message-Id: <20230612160801.2804666-4-eddyz87@gmail.com>
In-Reply-To: <20230612160801.2804666-1-eddyz87@gmail.com>
References: <20230612160801.2804666-1-eddyz87@gmail.com>

Make sure that the following unsafe example is rejected by the verifier:

    1: r9 = ... some pointer with range X ...
    2: r6 = ... unbound scalar ID=a ...
    3: r7 = ... unbound scalar ID=b ...
    4: if (r6 > r7) goto +1
    5: r6 = r7
    6: if (r6 > X) goto ...
    --- checkpoint ---
    7: r9 += r7
    8: *(u64 *)r9 = Y

This example is unsafe because not all execution paths verify the r7
range. Because of the jump at (4) the verifier arrives at (6) in two
states:
  I.  r6{.id=b}, r7{.id=b} via path 1-6;
  II. r6{.id=a}, r7{.id=b} via path 1-4, 6.

Currently regsafe() does not call check_ids() for scalar registers, so
from the point of view of regsafe() states (I) and (II) are identical.
If path 1-6 is taken by the verifier first and a checkpoint is created
at (6), the path [1-4, 6] is then wrongly considered safe.

Changes in this commit:
- Function check_scalar_ids() is added; it differs from check_ids() in
  the following aspects:
  - it treats ID zero as a unique scalar ID;
  - it disallows mapping the same 'cur_id' to multiple 'old_id'.
- check_scalar_ids() needs to generate temporary unique IDs; field
  'tmp_id_gen' is added to bpf_verifier_env::idmap_scratch to
  facilitate this.
- Function scalar_regs_exact() is added; it differs from regs_exact()
  in the following aspects:
  - it uses check_scalar_ids() for ID comparison;
  - it does not check ref_obj_id, as this field is not used for
    scalar values.
- regsafe() is updated to:
  - use check_scalar_ids() for precise scalar registers;
  - use scalar_regs_exact() for scalar registers, but only for the
    explore_alu_limits branch. This simplifies control flow for the
    scalar case and has no measurable performance impact.
- check_alu_op() is updated to avoid generating bpf_reg_state::id for
  constant scalar values when processing BPF_MOV. An ID is needed to
  propagate range information for identical values, but there is
  nothing to propagate for constants.
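In other words, the old->cur scalar ID mapping must be one-to-one, and
"no ID" must never alias anything. A minimal standalone C model of these
rules follows (illustrative only; the kernel version in the diff below
keeps its state in bpf_verifier_env scratch space):

    #include <stdbool.h>

    #define MAP_SIZE 64

    struct id_pair { unsigned int old, cur; };

    static struct id_pair map[MAP_SIZE];         /* reset between state comparisons (not shown) */
    static unsigned int tmp_id_gen = 1000000;    /* assumed to be above any real ID */

    static bool check_scalar_ids_model(unsigned int old_id, unsigned int cur_id)
    {
    	/* zero means "no ID": substitute a fresh unique one so it
    	 * can never alias another register's ID
    	 */
    	old_id = old_id ? old_id : ++tmp_id_gen;
    	cur_id = cur_id ? cur_id : ++tmp_id_gen;

    	for (int i = 0; i < MAP_SIZE; i++) {
    		if (!map[i].old) {               /* empty slot: record new pair */
    			map[i].old = old_id;
    			map[i].cur = cur_id;
    			return true;
    		}
    		if (map[i].old == old_id)        /* old seen: cur must match */
    			return map[i].cur == cur_id;
    		if (map[i].cur == cur_id)        /* cur already taken: reject */
    			return false;
    	}
    	return false;                            /* out of slots */
    }

Applied to the example above: comparing old state {r6.id=b, r7.id=b}
with current state {r6.id=a, r7.id=b} first records the pair (b, a) for
r6, then fails for r7 because old 'b' is already mapped to 'a' rather
than 'b' — so the states are not equivalent and path [1-4, 6] is
verified in full.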
Fixes: 75748837b7e5 ("bpf: Propagate scalar ranges through register assignments.")
Acked-by: Andrii Nakryiko
Signed-off-by: Eduard Zingerman
---
 include/linux/bpf_verifier.h |  17 ++++--
 kernel/bpf/verifier.c        | 108 ++++++++++++++++++++++++++++-------
 2 files changed, 97 insertions(+), 28 deletions(-)

diff --git a/include/linux/bpf_verifier.h b/include/linux/bpf_verifier.h
index 73a98f6240fd..042b76fe8e29 100644
--- a/include/linux/bpf_verifier.h
+++ b/include/linux/bpf_verifier.h
@@ -313,11 +313,6 @@ struct bpf_idx_pair {
 	u32 idx;
 };
 
-struct bpf_id_pair {
-	u32 old;
-	u32 cur;
-};
-
 #define MAX_CALL_FRAMES 8
 /* Maximum number of register states that can exist at once */
 #define BPF_ID_MAP_SIZE ((MAX_BPF_REG + MAX_BPF_STACK / BPF_REG_SIZE) * MAX_CALL_FRAMES)
@@ -559,6 +554,16 @@ struct backtrack_state {
 	u64 stack_masks[MAX_CALL_FRAMES];
 };
 
+struct bpf_id_pair {
+	u32 old;
+	u32 cur;
+};
+
+struct bpf_idmap {
+	u32 tmp_id_gen;
+	struct bpf_id_pair map[BPF_ID_MAP_SIZE];
+};
+
 struct bpf_idset {
 	u32 count;
 	u32 ids[BPF_ID_MAP_SIZE];
@@ -596,7 +601,7 @@ struct bpf_verifier_env {
 	struct bpf_verifier_log log;
 	struct bpf_subprog_info subprog_info[BPF_MAX_SUBPROGS + 1];
 	union {
-		struct bpf_id_pair idmap_scratch[BPF_ID_MAP_SIZE];
+		struct bpf_idmap idmap_scratch;
 		struct bpf_idset idset_scratch;
 	};
 	struct {
diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index 9b5f2433194f..b15ebfed207a 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -12942,12 +12942,14 @@ static int check_alu_op(struct bpf_verifier_env *env, struct bpf_insn *insn)
 		if (BPF_SRC(insn->code) == BPF_X) {
 			struct bpf_reg_state *src_reg = regs + insn->src_reg;
 			struct bpf_reg_state *dst_reg = regs + insn->dst_reg;
+			bool need_id = src_reg->type == SCALAR_VALUE && !src_reg->id &&
+				       !tnum_is_const(src_reg->var_off);
 
 			if (BPF_CLASS(insn->code) == BPF_ALU64) {
 				/* case: R1 = R2
 				 * copy register state to dest reg
 				 */
-				if (src_reg->type == SCALAR_VALUE && !src_reg->id)
+				if (need_id)
 					/* Assign src and dst registers the same ID
 					 * that will be used by find_equal_scalars()
 					 * to propagate min/max range.
@@ -12966,7 +12968,7 @@ static int check_alu_op(struct bpf_verifier_env *env, struct bpf_insn *insn)
 			} else if (src_reg->type == SCALAR_VALUE) {
 				bool is_src_reg_u32 = src_reg->umax_value <= U32_MAX;
 
-				if (is_src_reg_u32 && !src_reg->id)
+				if (is_src_reg_u32 && need_id)
 					src_reg->id = ++env->id_gen;
 				copy_register_state(dst_reg, src_reg);
 				/* Make sure ID is cleared if src_reg is not in u32 range otherwise
@@ -15122,8 +15124,9 @@ static bool range_within(struct bpf_reg_state *old,
 * So we look through our idmap to see if this old id has been seen before. If
 * so, we require the new id to match; otherwise, we add the id pair to the map.
 */
-static bool check_ids(u32 old_id, u32 cur_id, struct bpf_id_pair *idmap)
+static bool check_ids(u32 old_id, u32 cur_id, struct bpf_idmap *idmap)
 {
+	struct bpf_id_pair *map = idmap->map;
 	unsigned int i;
 
 	/* either both IDs should be set or both should be zero */
@@ -15134,14 +15137,44 @@ static bool check_ids(u32 old_id, u32 cur_id, struct bpf_idmap *idmap)
 		return true;
 
 	for (i = 0; i < BPF_ID_MAP_SIZE; i++) {
-		if (!idmap[i].old) {
+		if (!map[i].old) {
 			/* Reached an empty slot; haven't seen this id before */
-			idmap[i].old = old_id;
-			idmap[i].cur = cur_id;
+			map[i].old = old_id;
+			map[i].cur = cur_id;
 			return true;
 		}
-		if (idmap[i].old == old_id)
-			return idmap[i].cur == cur_id;
+		if (map[i].old == old_id)
+			return map[i].cur == cur_id;
+	}
+	/* We ran out of idmap slots, which should be impossible */
+	WARN_ON_ONCE(1);
+	return false;
+}
+
+/* Similar to check_ids(), but:
+ * - disallow mapping of different 'old_id' values to same 'cur_id' value;
+ * - for zero 'old_id' or 'cur_id' allocate a unique temporary ID
+ *   to allow pairs like '0 vs unique ID', 'unique ID vs 0'.
+ */
+static bool check_scalar_ids(u32 old_id, u32 cur_id, struct bpf_idmap *idmap)
+{
+	struct bpf_id_pair *map = idmap->map;
+	unsigned int i;
+
+	old_id = old_id ? old_id : ++idmap->tmp_id_gen;
+	cur_id = cur_id ? cur_id : ++idmap->tmp_id_gen;
+
+	for (i = 0; i < BPF_ID_MAP_SIZE; i++) {
+		if (!map[i].old) {
+			/* Reached an empty slot; haven't seen this id before */
+			map[i].old = old_id;
+			map[i].cur = cur_id;
+			return true;
+		}
+		if (map[i].old == old_id)
+			return map[i].cur == cur_id;
+		if (map[i].cur == cur_id)
+			return false;
 	}
 	/* We ran out of idmap slots, which should be impossible */
 	WARN_ON_ONCE(1);
@@ -15246,16 +15279,24 @@ static void clean_live_states(struct bpf_verifier_env *env, int insn,
 
 static bool regs_exact(const struct bpf_reg_state *rold,
 		       const struct bpf_reg_state *rcur,
-		       struct bpf_id_pair *idmap)
+		       struct bpf_idmap *idmap)
 {
 	return memcmp(rold, rcur, offsetof(struct bpf_reg_state, id)) == 0 &&
 	       check_ids(rold->id, rcur->id, idmap) &&
 	       check_ids(rold->ref_obj_id, rcur->ref_obj_id, idmap);
 }
 
+static bool scalar_regs_exact(const struct bpf_reg_state *rold,
+			      const struct bpf_reg_state *rcur,
+			      struct bpf_idmap *idmap)
+{
+	return memcmp(rold, rcur, offsetof(struct bpf_reg_state, id)) == 0 &&
+	       check_scalar_ids(rold->id, rcur->id, idmap);
+}
+
 /* Returns true if (rold safe implies rcur safe) */
 static bool regsafe(struct bpf_verifier_env *env, struct bpf_reg_state *rold,
-		    struct bpf_reg_state *rcur, struct bpf_id_pair *idmap)
+		    struct bpf_reg_state *rcur, struct bpf_idmap *idmap)
 {
 	if (!(rold->live & REG_LIVE_READ))
 		/* explored state didn't use this */
@@ -15292,15 +15333,37 @@ static bool regsafe(struct bpf_verifier_env *env, struct bpf_reg_state *rold,
 
 	switch (base_type(rold->type)) {
 	case SCALAR_VALUE:
-		if (regs_exact(rold, rcur, idmap))
-			return true;
 		if (env->explore_alu_limits)
-			return false;
+			return scalar_regs_exact(rold, rcur, idmap);
 		if (!rold->precise)
 			return true;
-		/* new val must satisfy old val knowledge */
+		/* Why check_ids() for scalar registers?
+		 *
+		 * Consider the following BPF code:
+		 *   1: r6 = ... unbound scalar, ID=a ...
+		 *   2: r7 = ... unbound scalar, ID=b ...
+		 *   3: if (r6 > r7) goto +1
+		 *   4: r6 = r7
+		 *   5: if (r6 > X) goto ...
+		 *   6: ... memory operation using r7 ...
+		 *
+		 * First verification path is [1-6]:
+		 * - at (4) same bpf_reg_state::id (b) would be assigned to r6 and r7;
+		 * - at (5) r6 would be marked <= X, find_equal_scalars() would also mark
+		 *   r7 <= X, because r6 and r7 share same id.
+		 * Next verification path is [1-4, 6].
+		 *
+		 * Instruction (6) would be reached in two states:
+		 *   I.  r6{.id=b}, r7{.id=b} via path 1-6;
+		 *   II. r6{.id=a}, r7{.id=b} via path 1-4, 6.
+		 *
+		 * Use check_ids() to distinguish these states.
+		 * ---
+		 * Also verify that new value satisfies old value range knowledge.
+		 */
 		return range_within(rold, rcur) &&
-		       tnum_in(rold->var_off, rcur->var_off);
+		       tnum_in(rold->var_off, rcur->var_off) &&
+		       check_scalar_ids(rold->id, rcur->id, idmap);
 	case PTR_TO_MAP_KEY:
 	case PTR_TO_MAP_VALUE:
 	case PTR_TO_MEM:
@@ -15346,7 +15409,7 @@ static bool regsafe(struct bpf_verifier_env *env, struct bpf_reg_state *rold,
 }
 
 static bool stacksafe(struct bpf_verifier_env *env, struct bpf_func_state *old,
-		      struct bpf_func_state *cur, struct bpf_id_pair *idmap)
+		      struct bpf_func_state *cur, struct bpf_idmap *idmap)
 {
 	int i, spi;
 
@@ -15449,7 +15512,7 @@ static bool stacksafe(struct bpf_verifier_env *env, struct bpf_func_state *old,
 }
 
 static bool refsafe(struct bpf_func_state *old, struct bpf_func_state *cur,
-		    struct bpf_id_pair *idmap)
+		    struct bpf_idmap *idmap)
 {
 	int i;
 
@@ -15497,13 +15560,13 @@ static bool func_states_equal(struct bpf_verifier_env *env, struct bpf_func_stat
 
 	for (i = 0; i < MAX_BPF_REG; i++)
 		if (!regsafe(env, &old->regs[i], &cur->regs[i],
-			     env->idmap_scratch))
+			     &env->idmap_scratch))
 			return false;
 
-	if (!stacksafe(env, old, cur, env->idmap_scratch))
+	if (!stacksafe(env, old, cur, &env->idmap_scratch))
 		return false;
 
-	if (!refsafe(old, cur, env->idmap_scratch))
+	if (!refsafe(old, cur, &env->idmap_scratch))
 		return false;
 
 	return true;
@@ -15518,7 +15581,8 @@ static bool states_equal(struct bpf_verifier_env *env,
 	if (old->curframe != cur->curframe)
 		return false;
 
-	memset(env->idmap_scratch, 0, sizeof(env->idmap_scratch));
+	env->idmap_scratch.tmp_id_gen = env->id_gen;
+	memset(&env->idmap_scratch.map, 0, sizeof(env->idmap_scratch.map));
 
 	/* Verification state from speculative execution simulation
 	 * must never prune a non-speculative execution one.
@@ -15536,7 +15600,7 @@ static bool states_equal(struct bpf_verifier_env *env,
 		return false;
 
 	if (old->active_lock.id &&
-	    !check_ids(old->active_lock.id, cur->active_lock.id, env->idmap_scratch))
+	    !check_ids(old->active_lock.id, cur->active_lock.id, &env->idmap_scratch))
 		return false;
 
 	if (old->active_rcu_lock != cur->active_rcu_lock)

From patchwork Mon Jun 12 16:08:01 2023
X-Patchwork-Submitter: Eduard Zingerman
X-Patchwork-Id: 13276944
X-Patchwork-Delegate: bpf@iogearbox.net
From: Eduard Zingerman
To: bpf@vger.kernel.org, ast@kernel.org
Cc: andrii@kernel.org, daniel@iogearbox.net, martin.lau@linux.dev,
	kernel-team@fb.com, yhs@fb.com, Eduard Zingerman
Subject: [PATCH bpf-next v5 4/4] selftests/bpf: verify that check_ids() is used for scalars in regsafe()
Date: Mon, 12 Jun 2023 19:08:01 +0300
Message-Id: <20230612160801.2804666-5-eddyz87@gmail.com>
In-Reply-To: <20230612160801.2804666-1-eddyz87@gmail.com>
References: <20230612160801.2804666-1-eddyz87@gmail.com>

Verify that the following example is rejected by the verifier:

    r9 = ... some pointer with range X ...
    r6 = ... unbound scalar ID=a ...
    r7 = ... unbound scalar ID=b ...
    if (r6 > r7) goto +1
    r7 = r6
    if (r7 > X) goto exit
    r9 += r6
    *(u64 *)r9 = Y

Also add test cases to:
- check that check_alu_op() for BPF_MOV instruction does not allocate
  a scalar ID if the source register is a constant;
- check that unique scalar IDs are ignored when a new verifier state is
  compared to a cached verifier state;
- check that two different scalar IDs in a verified state can't be
  mapped to the same scalar ID in the current state.

Signed-off-by: Eduard Zingerman
---
 .../selftests/bpf/progs/verifier_scalar_ids.c | 313 ++++++++++++++++++
 1 file changed, 313 insertions(+)

diff --git a/tools/testing/selftests/bpf/progs/verifier_scalar_ids.c b/tools/testing/selftests/bpf/progs/verifier_scalar_ids.c
index 8a5203fb14ca..5d56e764fe43 100644
--- a/tools/testing/selftests/bpf/progs/verifier_scalar_ids.c
+++ b/tools/testing/selftests/bpf/progs/verifier_scalar_ids.c
@@ -341,4 +341,317 @@ __naked void precision_two_ids(void)
 	: __clobber_all);
 }
 
+/* Verify that check_ids() is used by regsafe() for scalars.
+ *
+ * r9 = ... some pointer with range X ...
+ * r6 = ... unbound scalar ID=a ...
+ * r7 = ... unbound scalar ID=b ...
+ * if (r6 > r7) goto +1
+ * r7 = r6
+ * if (r7 > X) goto exit
+ * r9 += r6
+ * *(u8 *)r9 = Y
+ *
+ * The memory access is safe only if r6 is bounded,
+ * which is true for one branch and not true for another.
+ */
+SEC("socket")
+__failure __msg("register with unbounded min value")
+__flag(BPF_F_TEST_STATE_FREQ)
+__naked void check_ids_in_regsafe(void)
+{
+	asm volatile (
+	/* Bump allocated stack */
+	"r1 = 0;"
+	"*(u64*)(r10 - 8) = r1;"
+	/* r9 = pointer to stack */
+	"r9 = r10;"
+	"r9 += -8;"
+	/* r7 = ktime_get_ns() */
+	"call %[bpf_ktime_get_ns];"
+	"r7 = r0;"
+	/* r6 = ktime_get_ns() */
+	"call %[bpf_ktime_get_ns];"
+	"r6 = r0;"
+	/* if r6 > r7 is an unpredictable jump */
+	"if r6 > r7 goto l1_%=;"
+	"r7 = r6;"
+"l1_%=:"
+	/* if r7 > 4 exit(0) */
+	"if r7 > 4 goto l2_%=;"
+	/* Access memory at r9[r6] */
+	"r9 += r6;"
+	"r0 = *(u8*)(r9 + 0);"
+"l2_%=:"
+	"r0 = 0;"
+	"exit;"
+	:
+	: __imm(bpf_ktime_get_ns)
+	: __clobber_all);
+}
+
+/* Similar to check_ids_in_regsafe.
+ * The l0 could be reached in two states:
+ *
+ *   (1) r6{.id=A}, r7{.id=A}, r8{.id=B}
+ *   (2) r6{.id=B}, r7{.id=A}, r8{.id=B}
+ *
+ * Where (2) is not safe, as "r7 > 4" check won't propagate range for it.
+ * This example would be considered safe without changes to
+ * mark_chain_precision() to track scalar values with equal IDs.
+ */
+SEC("socket")
+__failure __msg("register with unbounded min value")
+__flag(BPF_F_TEST_STATE_FREQ)
+__naked void check_ids_in_regsafe_2(void)
+{
+	asm volatile (
+	/* Bump allocated stack */
+	"r1 = 0;"
+	"*(u64*)(r10 - 8) = r1;"
+	/* r9 = pointer to stack */
+	"r9 = r10;"
+	"r9 += -8;"
+	/* r8 = ktime_get_ns() */
+	"call %[bpf_ktime_get_ns];"
+	"r8 = r0;"
+	/* r7 = ktime_get_ns() */
+	"call %[bpf_ktime_get_ns];"
+	"r7 = r0;"
+	/* r6 = ktime_get_ns() */
+	"call %[bpf_ktime_get_ns];"
+	"r6 = r0;"
+	/* scratch .id from r0 */
+	"r0 = 0;"
+	/* if r6 > r7 is an unpredictable jump */
+	"if r6 > r7 goto l1_%=;"
+	/* tie r6 and r7 .id */
+	"r6 = r7;"
+"l0_%=:"
+	/* if r7 > 4 exit(0) */
+	"if r7 > 4 goto l2_%=;"
+	/* Access memory at r9[r6] */
+	"r9 += r6;"
+	"r0 = *(u8*)(r9 + 0);"
+"l2_%=:"
+	"r0 = 0;"
+	"exit;"
+"l1_%=:"
+	/* tie r6 and r8 .id */
+	"r6 = r8;"
+	"goto l0_%=;"
+	:
+	: __imm(bpf_ktime_get_ns)
+	: __clobber_all);
+}
+
+/* Check that scalar IDs *are not* generated on register to register
+ * assignments if the source register is a constant.
+ *
+ * If such IDs *are* generated the 'l1' below would be reached in
+ * two states:
+ *
+ *   (1) r3{.id=A}, r4{.id=A}
+ *   (2) r3{.id=C}, r4{.id=D}
+ *
+ * Thus forcing 'if r3 == r4' verification twice.
+ */
+SEC("socket")
+__success __log_level(2)
+__msg("11: (1d) if r3 == r4 goto pc+0")
+__msg("frame 0: propagating r3,r4")
+__msg("11: safe")
+__msg("processed 15 insns")
+__flag(BPF_F_TEST_STATE_FREQ)
+__naked void no_scalar_id_for_const(void)
+{
+	asm volatile (
+	"call %[bpf_ktime_get_ns];"
+	/* unpredictable jump */
+	"if r0 > 7 goto l0_%=;"
+	/* possibly generate same scalar ids for r3 and r4 */
+	"r1 = 0;"
+	"r1 = r1;"
+	"r3 = r1;"
+	"r4 = r1;"
+	"goto l1_%=;"
+"l0_%=:"
+	/* possibly generate different scalar ids for r3 and r4 */
+	"r1 = 0;"
+	"r2 = 0;"
+	"r3 = r1;"
+	"r4 = r2;"
+"l1_%=:"
+	/* predictable jump, marks r3 and r4 precise */
+	"if r3 == r4 goto +0;"
+	"r0 = 0;"
+	"exit;"
+	:
+	: __imm(bpf_ktime_get_ns)
+	: __clobber_all);
+}
+
+/* Same as no_scalar_id_for_const() but for 32-bit values */
+SEC("socket")
+__success __log_level(2)
+__msg("11: (1e) if w3 == w4 goto pc+0")
+__msg("frame 0: propagating r3,r4")
+__msg("11: safe")
+__msg("processed 15 insns")
+__flag(BPF_F_TEST_STATE_FREQ)
+__naked void no_scalar_id_for_const32(void)
+{
+	asm volatile (
+	"call %[bpf_ktime_get_ns];"
+	/* unpredictable jump */
+	"if r0 > 7 goto l0_%=;"
+	/* possibly generate same scalar ids for r3 and r4 */
+	"w1 = 0;"
+	"w1 = w1;"
+	"w3 = w1;"
+	"w4 = w1;"
+	"goto l1_%=;"
+"l0_%=:"
+	/* possibly generate different scalar ids for r3 and r4 */
+	"w1 = 0;"
+	"w2 = 0;"
+	"w3 = w1;"
+	"w4 = w2;"
+"l1_%=:"
+	/* predictable jump, marks r3 and r4 precise */
+	"if w3 == w4 goto +0;"
+	"r0 = 0;"
+	"exit;"
+	:
+	: __imm(bpf_ktime_get_ns)
+	: __clobber_all);
+}
+
+/* Check that unique scalar IDs are ignored when new verifier state is
+ * compared to cached verifier state.
+ * For this test:
+ * - cached state has no id on r1;
+ * - new state has a unique id on r1.
+ */
+SEC("socket")
+__success __log_level(2)
+__msg("6: (25) if r6 > 0x7 goto pc+1")
+__msg("7: (57) r1 &= 255")
+__msg("8: (bf) r2 = r10")
+__msg("from 6 to 8: safe")
+__msg("processed 12 insns")
+__flag(BPF_F_TEST_STATE_FREQ)
+__naked void ignore_unique_scalar_ids_cur(void)
+{
+	asm volatile (
+	"call %[bpf_ktime_get_ns];"
+	"r6 = r0;"
+	"call %[bpf_ktime_get_ns];"
+	"r0 &= 0xff;"
+	/* r1.id == r0.id */
+	"r1 = r0;"
+	/* make r1.id unique */
+	"r0 = 0;"
+	"if r6 > 7 goto l0_%=;"
+	/* clear r1 id, but keep the range compatible */
+	"r1 &= 0xff;"
+"l0_%=:"
+	/* get here in two states:
+	 * - first: r1 has no id (cached state)
+	 * - second: r1 has a unique id (should be considered equivalent)
+	 */
+	"r2 = r10;"
+	"r2 += r1;"
+	"exit;"
+	:
+	: __imm(bpf_ktime_get_ns)
+	: __clobber_all);
+}
+
+/* Check that unique scalar IDs are ignored when new verifier state is
+ * compared to cached verifier state. For this test:
+ * - cached state has a unique id on r1;
+ * - new state has no id on r1.
+ */
+SEC("socket")
+__success __log_level(2)
+__msg("6: (25) if r6 > 0x7 goto pc+1")
+__msg("7: (05) goto pc+1")
+__msg("9: (bf) r2 = r10")
+__msg("9: safe")
+__msg("processed 13 insns")
+__flag(BPF_F_TEST_STATE_FREQ)
+__naked void ignore_unique_scalar_ids_old(void)
+{
+	asm volatile (
+	"call %[bpf_ktime_get_ns];"
+	"r6 = r0;"
+	"call %[bpf_ktime_get_ns];"
+	"r0 &= 0xff;"
+	/* r1.id == r0.id */
+	"r1 = r0;"
+	/* make r1.id unique */
+	"r0 = 0;"
+	"if r6 > 7 goto l1_%=;"
+	"goto l0_%=;"
+"l1_%=:"
+	/* clear r1 id, but keep the range compatible */
+	"r1 &= 0xff;"
+"l0_%=:"
+	/* get here in two states:
+	 * - first: r1 has a unique id (cached state)
+	 * - second: r1 has no id (should be considered equivalent)
+	 */
+	"r2 = r10;"
+	"r2 += r1;"
+	"exit;"
+	:
+	: __imm(bpf_ktime_get_ns)
+	: __clobber_all);
+}
+
+/* Check that two different scalar IDs in a verified state can't be
+ * mapped to the same scalar ID in current state.
+ */
+SEC("socket")
+__success __log_level(2)
+/* The exit instruction should be reachable from two states,
+ * use two matches and "processed .. insns" to ensure this.
+ */
+__msg("13: (95) exit")
+__msg("13: (95) exit")
+__msg("processed 18 insns")
+__flag(BPF_F_TEST_STATE_FREQ)
+__naked void two_old_ids_one_cur_id(void)
+{
+	asm volatile (
+	/* Give unique scalar IDs to r{6,7} */
+	"call %[bpf_ktime_get_ns];"
+	"r0 &= 0xff;"
+	"r6 = r0;"
+	"call %[bpf_ktime_get_ns];"
+	"r0 &= 0xff;"
+	"r7 = r0;"
+	"r0 = 0;"
+	/* Maybe make r{6,7} IDs identical */
+	"if r6 > r7 goto l0_%=;"
+	"goto l1_%=;"
+"l0_%=:"
+	"r6 = r7;"
+"l1_%=:"
+	/* Mark r{6,7} precise.
+	 * Get here in two states:
+	 * - first: r6{.id=A}, r7{.id=B} (cached state)
+	 * - second: r6{.id=A}, r7{.id=A}
+	 * Currently we don't want to consider such states equivalent.
+	 * Thus the "exit;" instruction would be verified twice.
+	 */
+	"r2 = r10;"
+	"r2 += r6;"
+	"r2 += r7;"
+	"exit;"
+	:
+	: __imm(bpf_ktime_get_ns)
+	: __clobber_all);
+}
+
 char _license[] SEC("license") = "GPL";