
[mptcp-next,v3,0/2] add bpf_iter_task

Message ID: cover.1741577149.git.tanggeliang@kylinos.cn

Message

Geliang Tang March 10, 2025, 3:30 a.m. UTC
From: Geliang Tang <tanggeliang@kylinos.cn>

v3:
 - Add bpf_iter_task in mptcp_sock, so there is no need to add back
   struct mptcp_sched_data (see the sketch after this list).
 - Add sk_lock_sock() and sk_release_sock() for struct sock.
 - Set and clear bpf_iter_task for MPTCP BPF cgroup getsockopt
   and setsockopt.

v2:
 - Keep mptcp scheduler API unchanged.
 - Add back struct mptcp_sched_data.
 - Add bpf_iter_task in mptcp_sched_data instead of mptcp_sock.
 - Add a wrapper bpf_iter, mptcp_subflow_sched.
 - Use the mptcp_subflow_sched iterator instead of mptcp_subflow (see
   the usage sketch after this list).
 - https://patchwork.kernel.org/project/mptcp/cover/cover.1741347233.git.tanggeliang@kylinos.cn/
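
For context, this is roughly how a BPF scheduler consumes an
open-coded subflow iterator of this kind. The snippet is a
hypothetical sketch: bpf_for_each() comes from the BPF selftests'
bpf_experimental.h, and the iterator name and program signature are
assumptions based on the changelog, not confirmed interfaces:

#include "vmlinux.h"
#include <bpf/bpf_helpers.h>
#include <bpf/bpf_tracing.h>
#include "bpf_experimental.h"

SEC("struct_ops")
int BPF_PROG(bpf_first_get_subflow, struct mptcp_sock *msk,
	     struct mptcp_sched_data *data)
{
	struct mptcp_subflow_context *subflow;

	/* Walk msk's subflows via the open-coded iterator and
	 * schedule the first one found.
	 */
	bpf_for_each(mptcp_subflow, subflow, msk) {
		mptcp_subflow_set_scheduled(subflow, true);
		break;
	}
	return 0;
}

char _license[] SEC("license") = "GPL";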

v1:
 - https://patchwork.kernel.org/project/mptcp/cover/cover.1740997925.git.tanggeliang@kylinos.cn/

Geliang Tang (2):
  mptcp: add bpf_iter_task for mptcp_sock
  bpf: Customize mptcp's own sock lock

 include/net/sock.h   |  2 ++
 kernel/bpf/cgroup.c  |  8 ++++----
 net/mptcp/bpf.c      |  2 ++
 net/mptcp/protocol.c | 16 ++++++++++++++++
 net/mptcp/protocol.h | 20 ++++++++++++++++++++
 net/mptcp/sched.c    | 15 +++++++++++----
 6 files changed, 55 insertions(+), 8 deletions(-)
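
The include/net/sock.h and kernel/bpf/cgroup.c hunks in the diffstat
suggest the BPF cgroup getsockopt/setsockopt path switches from plain
lock_sock()/release_sock() to the new wrappers. Below is a
hypothetical sketch of what such wrappers could look like; the
dispatch on sk_protocol and the calls into the set/clear helpers
sketched earlier are assumptions, not necessarily how patch 2
implements it:

/* Like lock_sock()/release_sock(), but give MPTCP a chance to mark
 * the current task as the one allowed to iterate the msk's subflows
 * while a BPF sockopt program runs under the socket lock.
 */
static inline void sk_lock_sock(struct sock *sk)
{
	lock_sock(sk);
	if (sk->sk_protocol == IPPROTO_MPTCP)
		mptcp_set_bpf_iter_task(mptcp_sk(sk));
}

static inline void sk_release_sock(struct sock *sk)
{
	if (sk->sk_protocol == IPPROTO_MPTCP)
		mptcp_clear_bpf_iter_task(mptcp_sk(sk));
	release_sock(sk);
}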

Comments

MPTCP CI March 10, 2025, 5:12 a.m. UTC | #1
Hi Geliang,

Thank you for your modifications, that's great!

Our CI did some validations and here is its report:

- KVM Validation: normal: Success! ✅
- KVM Validation: debug: Critical: Global Timeout ❌
- KVM Validation: btf-normal (only bpftest_all): Success! ✅
- KVM Validation: btf-debug (only bpftest_all): Success! ✅
- Task: https://github.com/multipath-tcp/mptcp_net-next/actions/runs/13756425685

Initiator: Patchew Applier
Commits: https://github.com/multipath-tcp/mptcp_net-next/commits/ae5890ca8bfa
Patchwork: https://patchwork.kernel.org/project/mptcp/list/?series=942077


If there are some issues, you can reproduce them using the same environment as
the one used by the CI thanks to a docker image, e.g.:

    $ cd [kernel source code]
    $ docker run -v "${PWD}:${PWD}:rw" -w "${PWD}" --privileged --rm -it \
        --pull always mptcp/mptcp-upstream-virtme-docker:latest \
        auto-normal

For more details:

    https://github.com/multipath-tcp/mptcp-upstream-virtme-docker


Please note that despite all the efforts already made to have a
stable test suite when executed on a public CI like this one, it is
possible that some reported issues are not due to your modifications.
Still, do not hesitate to help us improve that ;-)

Cheers,
MPTCP GH Action bot
Bot operated by Matthieu Baerts (NGI0 Core)