[mptcp-next,0/2] Fixes for "use bpf_iter in bpf schedulers" v8

Message ID: cover.1729738008.git.tanggeliang@kylinos.cn

Message

Geliang Tang Oct. 24, 2024, 2:52 a.m. UTC
From: Geliang Tang <tanggeliang@kylinos.cn>

Fix the mptcp_join.sh (22) and packetdrill errors reported by the CI for
"use bpf_iter in bpf schedulers" (v8).

Depends on:
 - "use bpf_iter in bpf schedulers" v8

Based-on: <cover.1729676320.git.tanggeliang@kylinos.cn>

Geliang Tang (2):
  Squash to "mptcp: check sk_stream_memory_free in loop"
  Squash to "selftests/bpf: Add bpf_burst scheduler & test"

 net/mptcp/protocol.c                                | 5 +++--
 tools/testing/selftests/bpf/progs/mptcp_bpf_burst.c | 5 +++--
 2 files changed, 6 insertions(+), 4 deletions(-)
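
For context, the first squash touches the subflow selection path in
net/mptcp/protocol.c. Below is a minimal sketch of the idea, assuming the
fix is to perform the sk_stream_memory_free() check on each subflow inside
the iteration rather than once up front; the function name is hypothetical
and this is not the actual patched code:

    /*
     * Hypothetical sketch only -- not the squashed patch itself. It assumes
     * the change is to check sk_stream_memory_free() per subflow, inside
     * the loop, so one subflow without free send memory does not stop the
     * whole selection pass.
     */
    static struct sock *pick_subflow_sketch(struct mptcp_sock *msk)
    {
        struct mptcp_subflow_context *subflow;

        mptcp_for_each_subflow(msk, subflow) {
            struct sock *ssk = mptcp_subflow_tcp_sock(subflow);

            /* Per-subflow check: skip only this subflow */
            if (!sk_stream_memory_free(ssk))
                continue;

            if (mptcp_subflow_active(subflow))
                return ssk;
        }

        return NULL;
    }

The second squash presumably mirrors the same per-subflow check in the
bpf_burst selftest scheduler, which would explain the matching diffstat
for mptcp_bpf_burst.c.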

Comments

MPTCP CI Oct. 24, 2024, 4:33 a.m. UTC | #1
Hi Geliang,

Thank you for your modifications, that's great!

Our CI did some validations and here is its report:

- KVM Validation: normal: Success! ✅
- KVM Validation: debug: Critical: 2 Call Trace(s) - Critical: Global Timeout ❌
- KVM Validation: btf-normal (only bpftest_all): Success! ✅
- KVM Validation: btf-debug (only bpftest_all): Success! ✅
- Task: https://github.com/multipath-tcp/mptcp_net-next/actions/runs/11491898749

Initiator: Patchew Applier
Commits: https://github.com/multipath-tcp/mptcp_net-next/commits/d6804cd6fae2
Patchwork: https://patchwork.kernel.org/project/mptcp/list/?series=902508


If there are any issues, you can reproduce them using the same environment as
the one used by the CI, thanks to a Docker image, e.g.:

    $ cd [kernel source code]
    $ docker run -v "${PWD}:${PWD}:rw" -w "${PWD}" --privileged --rm -it \
        --pull always mptcp/mptcp-upstream-virtme-docker:latest \
        auto-normal
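
Since the failing validation here is the debug one, the same image can
presumably run that flavour as well by swapping the last argument
(assuming a matching auto-debug mode is available), e.g.:

    $ docker run -v "${PWD}:${PWD}:rw" -w "${PWD}" --privileged --rm -it \
        --pull always mptcp/mptcp-upstream-virtme-docker:latest \
        auto-debug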

For more details:

    https://github.com/multipath-tcp/mptcp-upstream-virtme-docker


Please note that despite all the efforts already made to have a stable
test suite when executed on a public CI like this one, it is possible
that some reported issues are not due to your modifications. Still, do
not hesitate to help us improve that ;-)

Cheers,
MPTCP GH Action bot
Bot operated by Matthieu Baerts (NGI0 Core)
Geliang Tang Oct. 24, 2024, 7:38 a.m. UTC | #2
On Thu, 2024-10-24 at 04:33 +0000, MPTCP CI wrote:
> Hi Geliang,
> 
> Thank you for your modifications, that's great!
> 
> Our CI did some validations and here is its report:
> 
> - KVM Validation: normal: Success! ✅
> - KVM Validation: debug: Critical: 2 Call Trace(s) - Critical: Global
> Timeout ❌

This error seems to be unrelated to this set. I have tested this set
repeatedly, but there's no such error.

Thanks,
-Geliang

Matthieu Baerts (NGI0) Oct. 24, 2024, 9:39 a.m. UTC | #3
Hi Geliang,

On 24/10/2024 09:38, Geliang Tang wrote:
> On Thu, 2024-10-24 at 04:33 +0000, MPTCP CI wrote:
>> Hi Geliang,
>>
>> Thank you for your modifications, that's great!
>>
>> Our CI did some validations and here is its report:
>>
>> - KVM Validation: normal: Success! ✅
>> - KVM Validation: debug: Critical: 2 Call Trace(s) - Critical: Global
>> Timeout ❌
> 
> This error seems to be unrelated to this set. I have tested this set
> repeatedly, but there's no such error.

Indeed, it looks like it is not related.

I wonder if it is related to an issue with the swap, maybe fixed by:

 https://lore.kernel.org/all/20240922080838.15184-1-aha310510@gmail.com/

Instead of applying this patch in our tree and hoping for the best, I
have just increased the RAM allocated to the VM used by the CI. It might
be enough.

Cheers,
Matt