[bpf-next,v2,0/4] verify scalar ids mapping in regsafe()

Message ID: 20230530172739.447290-1-eddyz87@gmail.com

Eduard Zingerman May 30, 2023, 5:27 p.m. UTC
Update regsafe() to use check_ids() for scalar values.
Otherwise the following unsafe pattern is accepted by the verifier:

  1: r9 = ... some pointer with range X ...
  2: r6 = ... unbound scalar ID=a ...
  3: r7 = ... unbound scalar ID=b ...
  4: if (r6 > r7) goto +1
  5: r6 = r7
  6: if (r6 > X) goto ...
  --- checkpoint ---
  7: r9 += r7
  8: *(u64 *)r9 = Y

This example is unsafe because not all execution paths verify r7's range.
Because of the jump at (4) the verifier arrives at (6) in two states:
I.  r6{.id=b}, r7{.id=b} via path 1-6;
II. r6{.id=a}, r7{.id=b} via path 1-4, 6.
In state (I) the range check at (6) constrains both r6 and r7, as they
share the same id; in state (II) it constrains only r6, leaving r7
unbound when it is added to r9 at (7).

Currently regsafe() does not call check_ids() for scalar registers,
thus from the POV of regsafe() states (I) and (II) are identical, and
the unsafe state (II) is incorrectly pruned against the checkpoint
created from state (I).
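
For reference, check_ids() verifies that ids from the old (cached)
state and the current state form a consistent one-to-one mapping.
A simplified sketch of its logic (struct bpf_id_pair and
BPF_ID_MAP_SIZE are as in include/linux/bpf_verifier.h; the actual
function in verifier.c may differ in details):

  /* Ensure that old_id from the cached state consistently maps to
   * cur_id from the current state; record first-seen pairs in idmap.
   */
  static bool check_ids(u32 old_id, u32 cur_id, struct bpf_id_pair *idmap)
  {
  	unsigned int i;

  	for (i = 0; i < BPF_ID_MAP_SIZE; i++) {
  		if (!idmap[i].old) {
  			/* Empty slot: old_id was not seen before,
  			 * record the mapping old_id -> cur_id.
  			 */
  			idmap[i].old = old_id;
  			idmap[i].cur = cur_id;
  			return true;
  		}
  		if (idmap[i].old == old_id)
  			return idmap[i].cur == cur_id;
  	}
  	/* Out of slots, conservatively report a mismatch */
  	return false;
  }

With such a check the example above is rejected: if state (I) is the
cached one, the cached id b would have to map both to a (via r6) and
to b (via r7), so check_ids() reports a mismatch and the states are
not considered equivalent.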

The series is split into two parts:
- patches #1,2:
  - correctness fix for regsafe();
  - test cases.
  Patch #1 causes a significant verification performance hit.
- patches #3,4
  - modification for find_equal_scalars() to save ids of registers
    that gain range via this function;
  - modification for regsafe() to do check_ids() for scalar registers
    only for such ids;
  - test cases.
  Patch #3 restores most of the verification performance.
  
A note on the design decisions behind patch #3. The change in patch #1
is simple but very heavy-handed:

@@ -15151,6 +15151,28 @@ static bool regsafe(struct bpf_verifier_env *env, struct bpf_reg_state *rold,
 
 	switch (base_type(rold->type)) {
 	case SCALAR_VALUE:
+		/* ... */
+		if (!check_ids(rold->id, rcur->id, idmap))
+			return false;

Ideally check_ids() should only be called for an 'rold' that either:
(a) gained range via find_equal_scalars() in some child verification
    path and was then marked as precise;
(b) was a source of range information for some other register via
    find_equal_scalars() in some child verification path, and that
    register was then marked as precise.

While the rold->precise flag could be a proxy for criterion (a), it
unfortunately cannot be a proxy for criterion (b). E.g. for the
example above the precision marks look as follows:

                           Precise registers
  5: r6 = r7              ; r7
  6: if (r6 > X) goto ... ; r7
  7: r9 += r7             ; r7

The jump at (6) cannot be predicted, thus there is no precision mark
on r6. If there is ever a jump to (6), the cached state there will not
have a precise mark for r6, even though in state (I) r6 is the source
of r7's range.

This leaves two options:
- Modification of precision tracking to take find_equal_scalars() into
  account.
- Find a way to track which ids were used for range information
  transfer in find_equal_scalars().
  
The former is a bit complicated, because information about register id
assignments for instructions in the middle of a state is lost.
It is possible to extend bpf_verifier_state::jmp_history to track a
mask for registers / stack slots that gained range via
find_equal_scalars() and use this mask in backtrack_insn().
However, this is a significant complication of already very
non-trivial code; a rough sketch of what it would entail is shown
below.
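
For illustration only (struct bpf_idx_pair is the existing
jmp_history entry type; the mask fields are hypothetical):

  struct bpf_idx_pair {
  	u32 prev_idx;
  	u32 idx;
  	/* Hypothetical additions: registers / stack slots that
  	 * gained range via find_equal_scalars() at this instruction.
  	 * backtrack_insn() would consult these masks to propagate
  	 * precision marks to all registers sharing the relevant id.
  	 */
  	u32 equal_scalars_reg_mask;
  	u64 equal_scalars_stack_mask;
  };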

Thus, in patch #3 I opted for the latter approach: accumulate all ids
that gain range via find_equal_scalars() in a set stored in struct
bpf_verifier_env.
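
With such a set the check from patch #1 can be limited to ids that
actually matter, e.g. along these lines (a sketch: the
env->range_transfer_ids name is taken from the patch #4 subject,
u32_hashset_find() is a hypothetical lookup helper):

  	case SCALAR_VALUE:
  		/* An id that was never used for range transfer cannot
  		 * make two states with otherwise identical ranges
  		 * behave differently, so call check_ids() only for
  		 * ids recorded in env->range_transfer_ids.
  		 */
  		if (u32_hashset_find(env->range_transfer_ids, rold->id) &&
  		    !check_ids(rold->id, rcur->id, idmap))
  			return false;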

To represent this set I use a u32_hashset data structure derived from
tools/lib/bpf/hashmap.h. I tested it locally (see [1]), but I think
that ideally it should be tested using KUnit. However, AFAIK, this
would be the first use of KUnit in the context of the BPF verifier.
If people are ok with this, I will prepare the tests and necessary
CI integration.
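
For context, the interface is a minimal "set of u32" along these
lines (a sketch only; names and layout in the actual
kernel/bpf/u32_hashset.h may differ):

  struct u32_hashset_bucket {
  	u32 cnt;
  	u32 items[];
  };

  struct u32_hashset {
  	struct u32_hashset_bucket **buckets;
  	size_t cap;
  	size_t sz;
  };

  /* Add value to the set, ignoring duplicates;
   * returns 0 on success, -ENOMEM on allocation failure.
   */
  int u32_hashset_add(struct u32_hashset *set, u32 value);
  /* Return true if value is present in the set */
  bool u32_hashset_find(const struct u32_hashset *set, u32 value);
  /* Release all memory owned by the set */
  void u32_hashset_free(struct u32_hashset *set);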

Changelog:
- v1 -> v2:
  - 'rold->precise' and 'rold->id' checks are dropped as unsafe
    (thanks to the discussion with Yonghong);
  - patches #3,4 were added to track ids used for range transfer,
    in order to mitigate the performance impact.
- RFC -> v1:
  - Function verifier.c:mark_equal_scalars_as_read() is dropped,
    as it was an incorrect fix for the problem solved by commit [3].
  - check_ids() is called only for precise scalar values.
  - Test case updated to use inline assembly.

[V1]  https://lore.kernel.org/bpf/20230526184126.3104040-1-eddyz87@gmail.com/
[RFC] https://lore.kernel.org/bpf/20221128163442.280187-1-eddyz87@gmail.com/
[1]   https://gist.github.com/eddyz87/a32ea7e62a27d3c201117c9a39ab4286

Eduard Zingerman (4):
  bpf: verify scalar ids mapping in regsafe() using check_ids()
  selftests/bpf: verify that check_ids() is used for scalars in
    regsafe()
  bpf: filter out scalar ids not used for range transfer in regsafe()
  selftests/bpf: check that env->range_transfer_ids has effect

 include/linux/bpf_verifier.h                  |   4 +
 kernel/bpf/Makefile                           |   1 +
 kernel/bpf/u32_hashset.c                      | 137 +++++++++++
 kernel/bpf/u32_hashset.h                      |  30 +++
 kernel/bpf/verifier.c                         |  66 +++++-
 .../selftests/bpf/prog_tests/verifier.c       |   2 +
 .../selftests/bpf/progs/verifier_scalar_ids.c | 214 ++++++++++++++++++
 7 files changed, 447 insertions(+), 7 deletions(-)
 create mode 100644 kernel/bpf/u32_hashset.c
 create mode 100644 kernel/bpf/u32_hashset.h
 create mode 100644 tools/testing/selftests/bpf/progs/verifier_scalar_ids.c