[v4,bpf-next,0/5] Support bpf_kptr_xchg into local kptr

Message ID 20240813212424.2871455-1-amery.hung@bytedance.com

Message

Amery Hung Aug. 13, 2024, 9:24 p.m. UTC
This revision adds substantial changes to patch 2 to support structures
with a kptr as the only special BTF field type. The test is split into
local_kptr_stash and task_kfunc_success to remove dependencies on
bpf_testmod that would break veristat results.

This series allows stashing a kptr into a local kptr. Currently, kptrs
can only be stashed into a map value with bpf_kptr_xchg(). A motivating
use case of this series is to enable adding a referenced kptr to a
bpf_rbtree or bpf_list by using an allocated object as both the graph
node and the storage for the referenced kptr. For example, a bpf qdisc [0]
enqueuing a referenced kptr to a struct sk_buff into a bpf_list serving
as a FIFO:

    struct skb_node {
            struct sk_buff __kptr *skb;
            struct bpf_list_node node;
    };

    private(A) struct bpf_spin_lock fifo_lock;
    private(A) struct bpf_list_head fifo __contains(skb_node, node);

    /* In Qdisc_ops.enqueue */
    struct skb_node *skbn;

    skbn = bpf_obj_new(typeof(*skbn));
    if (!skbn)
        goto drop;

    /* skb is a referenced kptr to struct sk_buff acquired earlier
     * but not shown in this code snippet.
     */
    skb = bpf_kptr_xchg(&skbn->skb, skb);
    if (skb)
        /* Should not happen; release skb here to satisfy the
         * verifier.
         */
        ...
    
    bpf_spin_lock(&fifo_lock);
    bpf_list_push_back(&fifo, &skbn->node);
    bpf_spin_unlock(&fifo_lock);
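
For reference, the matching dequeue path would unstash the skb the same
way. The sketch below is not taken from the qdisc series above:
bpf_list_pop_front(), bpf_obj_drop() and container_of() are existing
graph kfuncs/selftest helpers, and handing the skb back to the stack is
left out since it depends on the qdisc ops interface.

    /* In Qdisc_ops.dequeue, roughly the reverse: pop a node, take the
     * skb back out with bpf_kptr_xchg(), then free the node.
     */
    struct bpf_list_node *n;
    struct skb_node *skbn;
    struct sk_buff *skb;

    bpf_spin_lock(&fifo_lock);
    n = bpf_list_pop_front(&fifo);
    bpf_spin_unlock(&fifo_lock);
    if (!n)
        /* queue is empty; nothing to dequeue */
        goto empty;

    skbn = container_of(n, struct skb_node, node);
    skb = bpf_kptr_xchg(&skbn->skb, NULL);
    bpf_obj_drop(skbn);

    /* skb (possibly NULL) is now an owned reference to hand back or
     * release.
     */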

The implementation first searches for BPF_KPTR fields when generating
program BTF. Then, we teach the verifier that the destination argument
of bpf_kptr_xchg() can be a local kptr, and use the btf_record in the
program BTF to check it against the source argument.
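
As a rough userspace-only model of that check (every struct and helper
below is a simplified stand-in invented for illustration, not the actual
kernel types or verifier code), the idea is to look up the destination
offset in the owning type's kptr record and compare the recorded BTF
type id with the source pointer's type:

    #include <stdbool.h>
    #include <stdio.h>

    /* Simplified stand-ins for the kernel's btf_record/btf_field. */
    struct kptr_field_model {
            unsigned int offset;    /* byte offset of the __kptr field */
            unsigned int btf_id;    /* BTF type id the field may hold */
    };

    struct btf_record_model {
            unsigned int nr_fields;
            struct kptr_field_model fields[4];
    };

    /* Model of "use the btf_record to check the bpf_kptr_xchg() args". */
    static bool kptr_xchg_args_ok(const struct btf_record_model *rec,
                                  unsigned int dst_off, unsigned int src_btf_id)
    {
            for (unsigned int i = 0; i < rec->nr_fields; i++) {
                    if (rec->fields[i].offset != dst_off)
                            continue;
                    /* Destination is a kptr field; source type must match. */
                    return rec->fields[i].btf_id == src_btf_id;
            }
            return false;   /* destination is not a kptr field at all */
    }

    int main(void)
    {
            /* One kptr field at offset 0 holding BTF type id 42. */
            struct btf_record_model rec = {
                    .nr_fields = 1,
                    .fields = { { .offset = 0, .btf_id = 42 } },
            };

            printf("match: %d\n", kptr_xchg_args_ok(&rec, 0, 42));     /* 1 */
            printf("wrong type: %d\n", kptr_xchg_args_ok(&rec, 0, 7)); /* 0 */
            printf("not a kptr: %d\n", kptr_xchg_args_ok(&rec, 8, 42));/* 0 */
            return 0;
    }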

This series was mostly developed by Dave, who kindly helped and sent me
the patchset. The selftests in the bpf qdisc series (WIP) rely on this
series to work.

[0] https://lore.kernel.org/netdev/20240714175130.4051012-10-amery.hung@bytedance.com/

---
v3 -> v4
  - Allow struct in prog btf w/ kptr as the only special field type
  - Split tests of stashing referenced kptr and local kptr
  - v3: https://lore.kernel.org/bpf/20240809005131.3916464-1-amery.hung@bytedance.com/

v2 -> v3
  - Fix prog btf memory leak
  - Test stashing kptr in prog btf
  - Test unstashing kptrs after stashing into local kptrs
  - v2: https://lore.kernel.org/bpf/20240803001145.635887-1-amery.hung@bytedance.com/

v1 -> v2
  - Fix the document for bpf_kptr_xchg()
  - Add a comment explaining changes in the verifier
  - v1: https://lore.kernel.org/bpf/20240728030115.3970543-1-amery.hung@bytedance.com/

Amery Hung (1):
  bpf: Let callers of btf_parse_kptr() track life cycle of prog btf

Dave Marchevsky (4):
  bpf: Search for kptrs in prog BTF structs
  bpf: Rename ARG_PTR_TO_KPTR -> ARG_KPTR_XCHG_DEST
  bpf: Support bpf_kptr_xchg into local kptr
  selftests/bpf: Test bpf_kptr_xchg stashing into local kptr

 include/linux/bpf.h                           |  2 +-
 include/uapi/linux/bpf.h                      |  9 +--
 kernel/bpf/btf.c                              | 72 ++++++++++++++-----
 kernel/bpf/helpers.c                          |  6 +-
 kernel/bpf/syscall.c                          |  6 +-
 kernel/bpf/verifier.c                         | 48 ++++++++-----
 .../selftests/bpf/progs/local_kptr_stash.c    | 30 +++++++-
 .../selftests/bpf/progs/task_kfunc_success.c  | 26 ++++++-
 8 files changed, 151 insertions(+), 48 deletions(-)

Comments

patchwork-bot+netdevbpf@kernel.org Aug. 23, 2024, 6:50 p.m. UTC | #1
Hello:

This series was applied to bpf/bpf-next.git (master)
by Alexei Starovoitov <ast@kernel.org>:

On Tue, 13 Aug 2024 21:24:19 +0000 you wrote:
> This revision adds substantial changes to patch 2 to support structures
> with a kptr as the only special BTF field type. The test is split into
> local_kptr_stash and task_kfunc_success to remove dependencies on
> bpf_testmod that would break veristat results.
> 
> This series allows stashing a kptr into a local kptr. Currently, kptrs
> can only be stashed into a map value with bpf_kptr_xchg(). A motivating
> use case of this series is to enable adding a referenced kptr to a
> bpf_rbtree or bpf_list by using an allocated object as both the graph
> node and the storage for the referenced kptr. For example, a bpf qdisc [0]
> enqueuing a referenced kptr to a struct sk_buff into a bpf_list serving
> as a FIFO:
> 
> [...]

Here is the summary with links:
  - [v4,bpf-next,1/5] bpf: Let callers of btf_parse_kptr() track life cycle of prog btf
    https://git.kernel.org/bpf/bpf-next/c/c5ef53420f46
  - [v4,bpf-next,2/5] bpf: Search for kptrs in prog BTF structs
    https://git.kernel.org/bpf/bpf-next/c/7a851ecb1806
  - [v4,bpf-next,3/5] bpf: Rename ARG_PTR_TO_KPTR -> ARG_KPTR_XCHG_DEST
    https://git.kernel.org/bpf/bpf-next/c/d59232afb034
  - [v4,bpf-next,4/5] bpf: Support bpf_kptr_xchg into local kptr
    https://git.kernel.org/bpf/bpf-next/c/b0966c724584
  - [v4,bpf-next,5/5] selftests/bpf: Test bpf_kptr_xchg stashing into local kptr
    https://git.kernel.org/bpf/bpf-next/c/91c96842ab1e

You are awesome, thank you!