Message ID | 20241015151737.4111686-2-matttbe@kernel.org (mailing list archive)
---|---
State | Superseded, archived
Series | [mptcp-next] Squash to "add mptcp_subflow bpf_iter" v9
Context | Check | Description
---|---|---
matttbe/build | success | Build and static analysis OK |
matttbe/checkpatch | success | total: 0 errors, 0 warnings, 0 checks, 81 lines checked |
matttbe/shellcheck | success | MPTCP selftests files have not been modified |
matttbe/KVM_Validation__normal | success | Success! ✅ |
matttbe/KVM_Validation__debug | success | Success! ✅ |
matttbe/KVM_Validation__btf__only_bpftest_all_ | success | Success! ✅ |
Hi Matthieu,

Thank you for your modifications, that's great!

But sadly, our CI spotted some issues with it when trying to build it.

You can find more details there:

  https://github.com/multipath-tcp/mptcp_net-next/actions/runs/11349477339

Status: failure
Initiator: Patchew Applier
Commits: https://github.com/multipath-tcp/mptcp_net-next/commits/3a3642f20e95
Patchwork: https://patchwork.kernel.org/project/mptcp/list/?series=899379

Feel free to reply to this email if you cannot access logs, if you need some
support to fix the error, if this doesn't seem to be caused by your
modifications or if the error is a false positive one.

Cheers,
MPTCP GH Action bot
Bot operated by Matthieu Baerts (NGI0 Core)
Hello,

On 15/10/2024 17:42, MPTCP CI wrote:
> Hi Matthieu,
>
> Thank you for your modifications, that's great!
>
> But sadly, our CI spotted some issues with it when trying to build it.
>
> You can find more details there:
>
> https://github.com/multipath-tcp/mptcp_net-next/actions/runs/11349477339

FYI, the failures are from the previous v9 series this patch is based on.

It looks like this patch fixes the issue, see:

  https://github.com/multipath-tcp/mptcp_net-next/actions/runs/11349477339/job/31565662602#step:7:4568

  make C=1 W=1 (...)
  (...)
    CC      net/mptcp/bpf.o
    CC      scripts/mod/empty.o
  (...)

No error.

Cheers,
Matt
Hi Matthieu,

Thank you for your modifications, that's great!

Our CI did some validations and here is its report:

- KVM Validation: normal: Success! ✅
- KVM Validation: debug: Success! ✅
- KVM Validation: btf (only bpftest_all): Success! ✅
- Task: https://github.com/multipath-tcp/mptcp_net-next/actions/runs/11349477323

Initiator: Patchew Applier
Commits: https://github.com/multipath-tcp/mptcp_net-next/commits/3a3642f20e95
Patchwork: https://patchwork.kernel.org/project/mptcp/list/?series=899379

If there are some issues, you can reproduce them using the same environment as
the one used by the CI thanks to a docker image, e.g.:

  $ cd [kernel source code]
  $ docker run -v "${PWD}:${PWD}:rw" -w "${PWD}" --privileged --rm -it \
      --pull always mptcp/mptcp-upstream-virtme-docker:latest \
      auto-normal

For more details:

  https://github.com/multipath-tcp/mptcp-upstream-virtme-docker

Please note that despite all the efforts that have been already done to have a
stable tests suite when executed on a public CI like here, it is possible some
reported issues are not due to your modifications. Still, do not hesitate to
help us improve that ;-)

Cheers,
MPTCP GH Action bot
Bot operated by Matthieu Baerts (NGI0 Core)
diff --git a/net/mptcp/bpf.c b/net/mptcp/bpf.c
index 9b87eee13955..dfdad3eaedfd 100644
--- a/net/mptcp/bpf.c
+++ b/net/mptcp/bpf.c
@@ -212,20 +212,21 @@ struct bpf_iter_mptcp_subflow_kern {
 
 __bpf_kfunc_start_defs();
 
-__bpf_kfunc struct mptcp_subflow_context *
+__bpf_kfunc static struct mptcp_subflow_context *
 bpf_mptcp_subflow_ctx(const struct sock *sk)
 {
 	return mptcp_subflow_ctx(sk);
 }
 
-__bpf_kfunc struct sock *
+__bpf_kfunc static struct sock *
 bpf_mptcp_subflow_tcp_sock(const struct mptcp_subflow_context *subflow)
 {
 	return mptcp_subflow_tcp_sock(subflow);
 }
 
-__bpf_kfunc int bpf_iter_mptcp_subflow_new(struct bpf_iter_mptcp_subflow *it,
-					   struct mptcp_sock *msk)
+__bpf_kfunc static int
+bpf_iter_mptcp_subflow_new(struct bpf_iter_mptcp_subflow *it,
+			   struct mptcp_sock *msk)
 {
 	struct bpf_iter_mptcp_subflow_kern *kit = (void *)it;
 
@@ -239,7 +240,7 @@ __bpf_kfunc int bpf_iter_mptcp_subflow_new(struct bpf_iter_mptcp_subflow *it,
 	return 0;
 }
 
-__bpf_kfunc struct mptcp_subflow_context *
+__bpf_kfunc static struct mptcp_subflow_context *
 bpf_iter_mptcp_subflow_next(struct bpf_iter_mptcp_subflow *it)
 {
 	struct bpf_iter_mptcp_subflow_kern *kit = (void *)it;
@@ -251,11 +252,13 @@ bpf_iter_mptcp_subflow_next(struct bpf_iter_mptcp_subflow *it)
 	return list_entry(kit->pos, struct mptcp_subflow_context, node);
 }
 
-__bpf_kfunc void bpf_iter_mptcp_subflow_destroy(struct bpf_iter_mptcp_subflow *it)
+__bpf_kfunc static void
+bpf_iter_mptcp_subflow_destroy(struct bpf_iter_mptcp_subflow *it)
 {
 }
 
-__bpf_kfunc struct mptcp_sock *bpf_mptcp_sock_acquire(struct mptcp_sock *msk)
+__bpf_kfunc static struct mptcp_sock *
+bpf_mptcp_sock_acquire(struct mptcp_sock *msk)
 {
 	struct sock *sk = (struct sock *)msk;
 
@@ -264,14 +267,14 @@ __bpf_kfunc struct mptcp_sock *bpf_mptcp_sock_acquire(struct mptcp_sock *msk)
 	return NULL;
 }
 
-__bpf_kfunc void bpf_mptcp_sock_release(struct mptcp_sock *msk)
+__bpf_kfunc static void bpf_mptcp_sock_release(struct mptcp_sock *msk)
 {
 	struct sock *sk = (struct sock *)msk;
 
 	WARN_ON_ONCE(!sk || !refcount_dec_not_one(&sk->sk_refcnt));
 }
 
-__bpf_kfunc struct mptcp_subflow_context *
+__bpf_kfunc static struct mptcp_subflow_context *
 bpf_mptcp_subflow_ctx_by_pos(const struct mptcp_sched_data *data, unsigned int pos)
 {
 	if (pos >= MPTCP_SUBFLOWS_MAX)
@@ -279,7 +282,7 @@ bpf_mptcp_subflow_ctx_by_pos(const struct mptcp_sched_data *data, unsigned int p
 	return data->contexts[pos];
 }
 
-__bpf_kfunc bool bpf_mptcp_subflow_queues_empty(struct sock *sk)
+__bpf_kfunc static bool bpf_mptcp_subflow_queues_empty(struct sock *sk)
 {
 	return tcp_rtx_queue_empty(sk);
 }
diff --git a/net/mptcp/protocol.h b/net/mptcp/protocol.h
index b963e68451b1..7848a1989d17 100644
--- a/net/mptcp/protocol.h
+++ b/net/mptcp/protocol.h
@@ -722,9 +722,6 @@ void mptcp_subflow_queue_clean(struct sock *sk, struct sock *ssk);
 void mptcp_sock_graft(struct sock *sk, struct socket *parent);
 u64 mptcp_wnd_end(const struct mptcp_sock *msk);
 void mptcp_set_timeout(struct sock *sk);
-bool bpf_mptcp_subflow_queues_empty(struct sock *sk);
-struct mptcp_subflow_context *
-bpf_mptcp_subflow_ctx_by_pos(const struct mptcp_sched_data *data, unsigned int pos);
 struct sock *__mptcp_nmpc_sk(struct mptcp_sock *msk);
 bool __mptcp_close(struct sock *sk, long timeout);
 void mptcp_cancel_work(struct sock *sk);
The CI reported these issues with this series:

  net/mptcp/bpf.c:215:42: warning: symbol 'bpf_mptcp_subflow_ctx' was not declared. Should it be static?
  net/mptcp/bpf.c:221:25: warning: symbol 'bpf_mptcp_subflow_tcp_sock' was not declared. Should it be static?
  net/mptcp/bpf.c:227:17: warning: symbol 'bpf_iter_mptcp_subflow_new' was not declared. Should it be static?
  net/mptcp/bpf.c:242:42: warning: symbol 'bpf_iter_mptcp_subflow_next' was not declared. Should it be static?
  net/mptcp/bpf.c:254:18: warning: symbol 'bpf_iter_mptcp_subflow_destroy' was not declared. Should it be static?
  net/mptcp/bpf.c:258:31: warning: symbol 'bpf_mptcp_sock_acquire' was not declared. Should it be static?
  net/mptcp/bpf.c:267:18: warning: symbol 'bpf_mptcp_sock_release' was not declared. Should it be static?

On my side, adding static seems to be enough to fix them locally.

While at it, I also removed the export of two previous kfuncs: it looks like it
is not needed to export them in protocol.h, no? If it is, then I guess all
these new kfuncs should be exported too.

Tested with:

  touch net/mptcp/bpf.c
  make C=1 net/mptcp/bpf.o

@Geliang: if the CI is happy with these modifications, can you include them in
your future v10 please?

Cc: Geliang Tang <geliang@kernel.org>
Signed-off-by: Matthieu Baerts (NGI0) <matttbe@kernel.org>
---
Based-on: <cover.1728466623.git.tanggeliang@kylinos.cn>
---
 net/mptcp/bpf.c      | 23 +++++++++++++----------
 net/mptcp/protocol.h |  3 ---
 2 files changed, 13 insertions(+), 13 deletions(-)