[mptcp-next,v2,00/36] BPF path manager

Message ID cover.1729588019.git.tanggeliang@kylinos.cn (mailing list archive)

Geliang Tang Oct. 22, 2024, 9:14 a.m. UTC
From: Geliang Tang <tanggeliang@kylinos.cn>

v2:
 - add BPF-related code in this set (32-36).

In order to implement a BPF userspace path manager, the path manager
interfaces need to be unified. This set contains some cleanups and
refactoring that unify those interfaces in kernel space. Finally, it
defines a struct mptcp_pm_ops for a userspace path manager like this:

struct mptcp_pm_ops {
        int (*address_announce)(struct mptcp_sock *msk,
                                struct mptcp_pm_addr_entry *local);
        int (*address_remove)(struct mptcp_sock *msk, u8 id);
        int (*subflow_create)(struct mptcp_sock *msk,
                              struct mptcp_pm_addr_entry *local,
                              struct mptcp_addr_info *remote);
        int (*subflow_destroy)(struct mptcp_sock *msk,
                               struct mptcp_pm_addr_entry *local,
                               struct mptcp_addr_info *remote);
        int (*get_local_id)(struct mptcp_sock *msk,
                            struct mptcp_pm_addr_entry *local);
        u8 (*get_flags)(struct mptcp_sock *msk,
                        struct mptcp_addr_info *skc);
        struct mptcp_pm_addr_entry *(*get_addr)(struct mptcp_sock *msk,
                                                u8 id);
        int (*dump_addr)(struct mptcp_sock *msk,
                         struct mptcp_id_bitmap *bitmap);
        int (*set_flags)(struct mptcp_sock *msk,
                         struct mptcp_pm_addr_entry *local,
                         struct mptcp_addr_info *remote);

        u8                      type;
        struct module           *owner;
        struct list_head        list;

        void (*init)(struct mptcp_sock *msk);
        void (*release)(struct mptcp_sock *msk);
} ____cacheline_aligned_in_smp;

Closes: https://github.com/multipath-tcp/mptcp_net-next/issues/74

Depends on:
 - "add mptcp_address bpf_iter" v2

Based-on: <cover.1729582332.git.tanggeliang@kylinos.cn>

Geliang Tang (36):
  mptcp: drop else in mptcp_pm_addr_families_match
  mptcp: use __lookup_addr in pm_netlink
  mptcp: add mptcp_for_each_address macros
  mptcp: use sock_kfree_s instead of kfree
  mptcp: add lookup_addr for userspace pm
  mptcp: add mptcp_userspace_pm_get_sock helper
  mptcp: make three pm wrappers static
  mptcp: drop skb parameter of get_addr
  mptcp: add id parameter for get_addr
  mptcp: add addr parameter for get_addr
  mptcp: reuse sending nlmsg code in get_addr
  mptcp: change info of get_addr as const
  mptcp: add struct mptcp_id_bitmap
  mptcp: refactor dump_addr with id bitmap
  mptcp: refactor dump_addr with get_addr
  mptcp: reuse sending nlmsg code in dump_addr
  mptcp: update local address flags when setting it
  mptcp: change rem type of set_flags
  mptcp: drop skb parameter of set_flags
  mptcp: add loc and rem for set_flags
  mptcp: update address type of get_local_id
  mptcp: change is_backup interfaces as get_flags
  mptcp: drop struct mptcp_pm_local
  mptcp: drop struct mptcp_pm_add_entry
  mptcp: change local type of subflow_destroy
  mptcp: hold pm lock when deleting entry
  mptcp: rename mptcp_pm_remove_addrs
  mptcp: drop free_list for deleting entries
  mptcp: define struct mptcp_pm_ops
  mptcp: implement userspace pm interfaces
  mptcp: register default userspace pm
  bpf: Add mptcp path manager struct_ops
  bpf: Register mptcp struct_ops kfunc set
  Squash to "bpf: Export mptcp packet scheduler helpers"
  selftests/bpf: Add mptcp userspace pm subtest
  selftests/bpf: Add mptcp bpf path manager subtest

 include/net/mptcp.h                           |  32 +
 net/mptcp/bpf.c                               | 377 ++++++++-
 net/mptcp/pm.c                                |  53 +-
 net/mptcp/pm_netlink.c                        | 313 +++----
 net/mptcp/pm_userspace.c                      | 777 ++++++++++--------
 net/mptcp/protocol.c                          |   1 +
 net/mptcp/protocol.h                          |  81 +-
 net/mptcp/subflow.c                           |   2 +-
 tools/testing/selftests/bpf/config            |   1 +
 .../testing/selftests/bpf/prog_tests/mptcp.c  | 305 +++++++
 tools/testing/selftests/bpf/progs/mptcp_bpf.h |  71 ++
 .../bpf/progs/mptcp_bpf_userspace_pm.c        | 409 +++++++++
 12 files changed, 1855 insertions(+), 567 deletions(-)
 create mode 100644 tools/testing/selftests/bpf/progs/mptcp_bpf_userspace_pm.c

Comments

MPTCP CI Oct. 22, 2024, 10:12 a.m. UTC | #1
Hi Geliang,

Thank you for your modifications, that's great!

But sadly, our CI spotted some issues with it when trying to build it.

You can find more details there:

  https://github.com/multipath-tcp/mptcp_net-next/actions/runs/11457329322

Status: failure
Initiator: Patchew Applier
Commits: https://github.com/multipath-tcp/mptcp_net-next/commits/9def5a043f12
Patchwork: https://patchwork.kernel.org/project/mptcp/list/?series=901761

Feel free to reply to this email if you cannot access logs, if you need
some support to fix the error, if this doesn't seem to be caused by your
modifications or if the error is a false positive one.

Cheers,
MPTCP GH Action bot
Bot operated by Matthieu Baerts (NGI0 Core)
MPTCP CI Oct. 22, 2024, 10:24 a.m. UTC | #2
Hi Geliang,

Thank you for your modifications, that's great!

Our CI did some validations and here is its report:

- KVM Validation: normal: Success! ✅
- KVM Validation: debug: Success! ✅
- KVM Validation: btf-normal (only bpftest_all): Success! ✅
- KVM Validation: btf-debug (only bpftest_all): Success! ✅
- Task: https://github.com/multipath-tcp/mptcp_net-next/actions/runs/11457329343

Initiator: Patchew Applier
Commits: https://github.com/multipath-tcp/mptcp_net-next/commits/9def5a043f12
Patchwork: https://patchwork.kernel.org/project/mptcp/list/?series=901761


If there are some issues, you can reproduce them using the same environment as
the one used by the CI thanks to a docker image, e.g.:

    $ cd [kernel source code]
    $ docker run -v "${PWD}:${PWD}:rw" -w "${PWD}" --privileged --rm -it \
        --pull always mptcp/mptcp-upstream-virtme-docker:latest \
        auto-normal

For more details:

    https://github.com/multipath-tcp/mptcp-upstream-virtme-docker


Please note that despite all the efforts already made to keep the test
suite stable when executed on a public CI like this one, it is possible
that some reported issues are not due to your modifications. Still, do
not hesitate to help us improve that ;-)

Cheers,
MPTCP GH Action bot
Bot operated by Matthieu Baerts (NGI0 Core)
Matthieu Baerts (NGI0) Nov. 4, 2024, 7:31 p.m. UTC | #3
Hi Geliang,

On 22/10/2024 11:14, Geliang Tang wrote:
> From: Geliang Tang <tanggeliang@kylinos.cn>
> 
> v2:
>  - add BPF-related code in this set (32-36).

I did a review of the patches up to 28/36 included.

If you want, you can send a v3 without patches 29-36: the series is
very long, and such a long series is hard to review. We can check this
part later. WDYT?

Cheers,
Matt
Geliang Tang Nov. 5, 2024, 10:12 a.m. UTC | #4
On Mon, 2024-11-04 at 20:31 +0100, Matthieu Baerts wrote:
> Hi Geliang,
> 
> On 22/10/2024 11:14, Geliang Tang wrote:
> > From: Geliang Tang <tanggeliang@kylinos.cn>
> > 
> > v2:
> >  - add BPF-related code in this set (32-36).
> 
> I did a review of the patches up to 28/36 included.
> 
> If you want, you can send a v3 without patches 29-36: the series is

Thanks Matt, please send a v3 of "mptcp: pm: use _rcu variant under
rcu_read_lock" first. Then I'll send a v3 of this set based on it.

-Geliang

> very
> long, that's hard to review such long series. We can check this part
> later. WDYT?
> 
> Cheers,
> Matt