Message ID | 20240620125601.15755-1-aconole@redhat.com (mailing list archive)
---|---
Series | selftests: net: Switch pmtu.sh to use the internal ovs script.

On Thu, 20 Jun 2024 08:55:54 -0400 Aaron Conole wrote:
> This series enhances the ovs-dpctl utility to provide support for set()
> and tunnel() flow specifiers, better ipv6 handling support, and the
> ability to add tunnel vports, and LWT interfaces. Finally, it modifies
> the pmtu.sh script to call the ovs-dpctl.py utility rather than the
> typical OVS userspace utilities.

Thanks for the work!

Looks like the series no longer applies because of other changes
to the kernel config. Before it stopped applying we got some runs,
here's what I see:

https://netdev-3.bots.linux.dev/vmksft-net/results/648440/3-pmtu-sh/stdout

# Cannot find device "ovs_br0"
# TEST: IPv4, OVS vxlan4: PMTU exceptions [FAIL]
# Cannot find device "ovs_br0"
# TEST: IPv4, OVS vxlan4: PMTU exceptions - nexthop objects [FAIL]
# Cannot find device "ovs_br0"
# TEST: IPv6, OVS vxlan4: PMTU exceptions [FAIL]
# Cannot find device "ovs_br0"
# TEST: IPv6, OVS vxlan4: PMTU exceptions - nexthop objects [FAIL]
# Cannot find device "ovs_br0"
# TEST: IPv4, OVS vxlan6: PMTU exceptions [FAIL]
# Cannot find device "ovs_br0"
# TEST: IPv4, OVS vxlan6: PMTU exceptions - nexthop objects [FAIL]
# Cannot find device "ovs_br0"
# TEST: IPv6, OVS vxlan6: PMTU exceptions [FAIL]
# Cannot find device "ovs_br0"
# TEST: IPv6, OVS vxlan6: PMTU exceptions - nexthop objects [FAIL]
# Cannot find device "ovs_br0"
# TEST: IPv4, OVS geneve4: PMTU exceptions [FAIL]
# Cannot find device "ovs_br0"
# TEST: IPv4, OVS geneve4: PMTU exceptions - nexthop objects [FAIL]
# Cannot find device "ovs_br0"
# TEST: IPv6, OVS geneve4: PMTU exceptions [FAIL]
# Cannot find device "ovs_br0"
# TEST: IPv6, OVS geneve4: PMTU exceptions - nexthop objects [FAIL]
# Cannot find device "ovs_br0"
# TEST: IPv4, OVS geneve6: PMTU exceptions [FAIL]
# Cannot find device "ovs_br0"
# TEST: IPv4, OVS geneve6: PMTU exceptions - nexthop objects [FAIL]
# Cannot find device "ovs_br0"
# TEST: IPv6, OVS geneve6: PMTU exceptions [FAIL]
# Cannot find device "ovs_br0"

Any idea why? Looks like kernel config did include OVS, perhaps we need
explicit modprobe now? I don't see any more details in the logs.

Jakub Kicinski <kuba@kernel.org> writes:

> On Thu, 20 Jun 2024 08:55:54 -0400 Aaron Conole wrote:
>> This series enhances the ovs-dpctl utility to provide support for set()
>> and tunnel() flow specifiers, better ipv6 handling support, and the
>> ability to add tunnel vports, and LWT interfaces. Finally, it modifies
>> the pmtu.sh script to call the ovs-dpctl.py utility rather than the
>> typical OVS userspace utilities.
>
> Thanks for the work!
>
> Looks like the series no longer applies because of other changes
> to the kernel config. Before it stopped applying we got some runs,
> here's what I see:
>
> https://netdev-3.bots.linux.dev/vmksft-net/results/648440/3-pmtu-sh/stdout
>
> # Cannot find device "ovs_br0"
> # TEST: IPv4, OVS vxlan4: PMTU exceptions [FAIL]
> [...]
> # Cannot find device "ovs_br0"
>
> Any idea why? Looks like kernel config did include OVS, perhaps we need
> explicit modprobe now? I don't see any more details in the logs.

Strange. I expected that the module should have automatically been
loaded when attempting to communicate with the OVS genetlink family
type. At least, that is how it had been working previously.

I'll spend some time looking into it and resubmit a rebased version.
Thanks, Jakub!

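One quick way to tell a failed autoload apart from a missing module is to
check the module state right before the bridge setup. This is only an
illustrative sketch, assuming a modular CONFIG_OPENVSWITCH=m build and a
/proc/modules based check; it is not part of the series:

# Illustrative check only - not part of the series. With a modular
# build, the ovs_datapath genetlink request is expected to auto-load
# the module; if it did not, an explicit modprobe shows whether the
# module exists at all.
if ! grep -q '^openvswitch ' /proc/modules; then
	echo "openvswitch is not loaded"
	modprobe openvswitch 2>/dev/null || echo "openvswitch module not available"
fi
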
Aaron Conole <aconole@redhat.com> writes:

> Jakub Kicinski <kuba@kernel.org> writes:
>
>> On Thu, 20 Jun 2024 08:55:54 -0400 Aaron Conole wrote:
>>> This series enhances the ovs-dpctl utility to provide support for set()
>>> and tunnel() flow specifiers, better ipv6 handling support, and the
>>> ability to add tunnel vports, and LWT interfaces. Finally, it modifies
>>> the pmtu.sh script to call the ovs-dpctl.py utility rather than the
>>> typical OVS userspace utilities.
>>
>> Thanks for the work!
>>
>> Looks like the series no longer applies because of other changes
>> to the kernel config. Before it stopped applying we got some runs,
>> here's what I see:
>>
>> https://netdev-3.bots.linux.dev/vmksft-net/results/648440/3-pmtu-sh/stdout
>>
>> # Cannot find device "ovs_br0"
>> # TEST: IPv4, OVS vxlan4: PMTU exceptions [FAIL]
>> [...]
>> # Cannot find device "ovs_br0"
>>
>> Any idea why? Looks like kernel config did include OVS, perhaps we need
>> explicit modprobe now? I don't see any more details in the logs.
>
> Strange. I expected that the module should have automatically been
> loaded when attempting to communicate with the OVS genetlink family
> type. At least, that is how it had been working previously.
>
> I'll spend some time looking into it and resubmit a rebased version.
> Thanks, Jakub!

If the ovs module isn't available, then I see:

# ovs_bridge not supported
# TEST: IPv4, OVS vxlan4: PMTU exceptions [SKIP]

But if it is available, I haven't been able to reproduce such ovs_br0
setup failure - things work. My branch is rebased on
568ebdaba6370c03360860f1524f646ddd5ca523

Additionally, the "Cannot find device ..." text comes from an iproute2
utility output. The only place we actually interact with that is via
the call at pmtu.sh:973:

	run_cmd ip link set ovs_br0 up

Maybe it is possible that the link isn't up (could some port memory
allocation or message be delaying it?) yet in the virtual environment.

To confirm, is it possible to run in the constrained environment, but
put a 5s sleep or something?
I will add the following either as a separate patch (ie 7/8), or I can
fold it into 6/7 (and drop Stefano's ACK waiting for another review):


wait_for_if() {
	if ip link show "$2" >/dev/null 2>&1; then return 0; fi

	for d in `seq 1 30`; do
		sleep 1
		if ip link show "$2" >/dev/null 2>&1; then return 0; fi
	done
	return 1
}

....
	setup_ovs_br_internal || setup_ovs_br_vswitchd || return $ksft_skip
+	wait_for_if "ovs_br0"
	run_cmd ip link set ovs_br0 up
....

Does it make sense or does it seem like I am way off base?

On Mon, 24 Jun 2024 12:53:45 -0400 Aaron Conole wrote:
> Additionally, the "Cannot find device ..." text comes from an iproute2
> utility output. The only place we actually interact with that is via
> the call at pmtu.sh:973:
>
>	run_cmd ip link set ovs_br0 up
>
> Maybe it is possible that the link isn't up (could some port memory
> allocation or message be delaying it?) yet in the virtual environment.

Depends on how the creation is implemented, normally device creation
over netlink is synchronous. Just to be sure have you tried to repro
with vng:

https://github.com/linux-netdev/nipa/wiki/How-to-run-netdev-selftests-CI-style

? It could be the base OS difference, too, but that's harder to confirm.

> To confirm, is it possible to run in the constrained environment, but
> put a 5s sleep or something? I will add the following either as a
> separate patch (ie 7/8), or I can fold it into 6/7 (and drop Stefano's
> ACK waiting for another review):
>
>
> wait_for_if() {
>	if ip link show "$2" >/dev/null 2>&1; then return 0; fi
>
>	for d in `seq 1 30`; do
>		sleep 1
>		if ip link show "$2" >/dev/null 2>&1; then return 0; fi
>	done
>	return 1
> }
>
> ....
>	setup_ovs_br_internal || setup_ovs_br_vswitchd || return $ksft_skip
> +	wait_for_if "ovs_br0"
>	run_cmd ip link set ovs_br0 up
> ....
>
> Does it make sense or does it seem like I am way off base?

sleep 1 is a bit high (sleep does accept fractional numbers!)
but otherwise worth trying, if you can't repro locally.

On Mon, 2024-06-24 at 12:53 -0400, Aaron Conole wrote:
> Aaron Conole <aconole@redhat.com> writes:
>
> > Jakub Kicinski <kuba@kernel.org> writes:
> >
> > > On Thu, 20 Jun 2024 08:55:54 -0400 Aaron Conole wrote:
> > > > This series enhances the ovs-dpctl utility to provide support for set()
> > > > and tunnel() flow specifiers, better ipv6 handling support, and the
> > > > ability to add tunnel vports, and LWT interfaces. Finally, it modifies
> > > > the pmtu.sh script to call the ovs-dpctl.py utility rather than the
> > > > typical OVS userspace utilities.
> > >
> > > Thanks for the work!
> > >
> > > Looks like the series no longer applies because of other changes
> > > to the kernel config. Before it stopped applying we got some runs,
> > > here's what I see:
> > >
> > > https://netdev-3.bots.linux.dev/vmksft-net/results/648440/3-pmtu-sh/stdout
> > >
> > > # Cannot find device "ovs_br0"
> > > # TEST: IPv4, OVS vxlan4: PMTU exceptions [FAIL]
> > > [...]
> > > # Cannot find device "ovs_br0"
> > >
> > > Any idea why? Looks like kernel config did include OVS, perhaps we need
> > > explicit modprobe now? I don't see any more details in the logs.
> >
> > Strange. I expected that the module should have automatically been
> > loaded when attempting to communicate with the OVS genetlink family
> > type. At least, that is how it had been working previously.
> >
> > I'll spend some time looking into it and resubmit a rebased version.
> > Thanks, Jakub!
>
> If the ovs module isn't available, then I see:
>
> # ovs_bridge not supported
> # TEST: IPv4, OVS vxlan4: PMTU exceptions [SKIP]
>
> But if it is available, I haven't been able to reproduce such ovs_br0
> setup failure - things work.

I'm still wondering if the issue is Kconfig-related (plus possibly bad
interaction with vng). I don't see the OVS knob enabled in the
selftests config. If it's implied by some other knob, and ends up being
selected as a module, vng could stumble upon loading the module at
runtime, especially on incremental build (at least I experience that
problem locally). I'm not even sure if the KCI is building
incrementally or not, so all of the above is quite a wild guess.

In any case I think adding an explicit CONFIG_OPENVSWITCH=y to the
selftest config would make the scenario more well defined.

Cheers,

Paolo

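For reference, the change being suggested amounts to a few lines in the
selftest config fragment. The symbols below are an illustrative guess
(CONFIG_OPENVSWITCH plus its tunnel options) and may not match exactly
what patch 7/7 of the series adds:

# Illustrative only; the real change is said to live in patch 7/7.
# Build OVS and the tunnels the test exercises into the kernel, so the
# selftest does not depend on module autoloading inside the VM image.
cat >> tools/testing/selftests/net/config << 'EOF'
CONFIG_OPENVSWITCH=y
CONFIG_OPENVSWITCH_VXLAN=y
CONFIG_OPENVSWITCH_GENEVE=y
EOF
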
On Mon, 24 Jun 2024 15:30:23 -0700
Jakub Kicinski <kuba@kernel.org> wrote:

> On Mon, 24 Jun 2024 12:53:45 -0400 Aaron Conole wrote:
> > Additionally, the "Cannot find device ..." text comes from an iproute2
> > utility output. The only place we actually interact with that is via
> > the call at pmtu.sh:973:
> >
> >	run_cmd ip link set ovs_br0 up
> >
> > Maybe it is possible that the link isn't up (could some port memory
> > allocation or message be delaying it?) yet in the virtual environment.
>
> Depends on how the creation is implemented, normally device creation
> over netlink is synchronous.

It also looks like pyroute2 would keep everything synchronous (unless
you call NetlinkSocket.bind(async_cache=True))... weird.

> Just to be sure have you tried to repro with vng:
>
> https://github.com/linux-netdev/nipa/wiki/How-to-run-netdev-selftests-CI-style
>
> ? It could be the base OS difference, too, but that's harder to confirm.
>
> > To confirm, is it possible to run in the constrained environment, but
> > put a 5s sleep or something? I will add the following either as a
> > separate patch (ie 7/8), or I can fold it into 6/7 (and drop Stefano's
> > ACK waiting for another review):
> >
> >
> > wait_for_if() {
> >	if ip link show "$2" >/dev/null 2>&1; then return 0; fi
> >
> >	for d in `seq 1 30`; do
> >		sleep 1
> >		if ip link show "$2" >/dev/null 2>&1; then return 0; fi
> >	done
> >	return 1
> > }
> >
> > ....
> >	setup_ovs_br_internal || setup_ovs_br_vswitchd || return $ksft_skip
> > +	wait_for_if "ovs_br0"
> >	run_cmd ip link set ovs_br0 up
> > ....
> >
> > Does it make sense or does it seem like I am way off base?
>
> sleep 1 is a bit high (sleep does accept fractional numbers!)

This script was originally (and mostly is) all nice and POSIX (where
sleep doesn't take fractional numbers), so, if you don't mind, I'd
rather prefer "sleep 0.1 || sleep 1". :)

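Putting the two suggestions together, the helper could end up roughly
like the sketch below. The single-argument signature, the iteration
count, and the stderr redirect around the fractional sleep are
assumptions for illustration, not the posted patch:

# Sketch: wait for an interface to show up, preferring a fractional
# sleep and falling back to the POSIX 1-second sleep.
wait_for_if() {
	dev="$1"
	for i in $(seq 1 50); do
		ip link show "${dev}" >/dev/null 2>&1 && return 0
		sleep 0.1 2>/dev/null || sleep 1
	done
	return 1
}

# Intended call site, mirroring the hunk quoted above:
#	setup_ovs_br_internal || setup_ovs_br_vswitchd || return $ksft_skip
#	wait_for_if "ovs_br0" || return $ksft_skip
#	run_cmd ip link set ovs_br0 up
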
Jakub Kicinski <kuba@kernel.org> writes:

> On Mon, 24 Jun 2024 12:53:45 -0400 Aaron Conole wrote:
>> Additionally, the "Cannot find device ..." text comes from an iproute2
>> utility output. The only place we actually interact with that is via
>> the call at pmtu.sh:973:
>>
>>	run_cmd ip link set ovs_br0 up
>>
>> Maybe it is possible that the link isn't up (could some port memory
>> allocation or message be delaying it?) yet in the virtual environment.
>
> Depends on how the creation is implemented, normally device creation
> over netlink is synchronous. Just to be sure have you tried to repro
> with vng:
>
> https://github.com/linux-netdev/nipa/wiki/How-to-run-netdev-selftests-CI-style
>
> ? It could be the base OS difference, too, but that's harder to confirm.

Yes - that's the way I run it. But I didn't try to use any of the
stress inducing options. I'll work on it with that.

>> To confirm, is it possible to run in the constrained environment, but
>> put a 5s sleep or something? I will add the following either as a
>> separate patch (ie 7/8), or I can fold it into 6/7 (and drop Stefano's
>> ACK waiting for another review):
>> [...]
>> Does it make sense or does it seem like I am way off base?
>
> sleep 1 is a bit high (sleep does accept fractional numbers!)
> but otherwise worth trying, if you can't repro locally.

Ack.

Paolo Abeni <pabeni@redhat.com> writes:

> On Mon, 2024-06-24 at 12:53 -0400, Aaron Conole wrote:
>> Aaron Conole <aconole@redhat.com> writes:
>>
>> > Jakub Kicinski <kuba@kernel.org> writes:
>> >
>> > > On Thu, 20 Jun 2024 08:55:54 -0400 Aaron Conole wrote:
>> > > > This series enhances the ovs-dpctl utility to provide support for set()
>> > > > and tunnel() flow specifiers, better ipv6 handling support, and the
>> > > > ability to add tunnel vports, and LWT interfaces. Finally, it modifies
>> > > > the pmtu.sh script to call the ovs-dpctl.py utility rather than the
>> > > > typical OVS userspace utilities.
>> > >
>> > > Thanks for the work!
>> > >
>> > > Looks like the series no longer applies because of other changes
>> > > to the kernel config. Before it stopped applying we got some runs,
>> > > here's what I see:
>> > >
>> > > https://netdev-3.bots.linux.dev/vmksft-net/results/648440/3-pmtu-sh/stdout
>> > >
>> > > # Cannot find device "ovs_br0"
>> > > # TEST: IPv4, OVS vxlan4: PMTU exceptions [FAIL]
>> > > [...]
>> > > # Cannot find device "ovs_br0"
>> > >
>> > > Any idea why? Looks like kernel config did include OVS, perhaps we need
>> > > explicit modprobe now? I don't see any more details in the logs.
>> >
>> > Strange. I expected that the module should have automatically been
>> > loaded when attempting to communicate with the OVS genetlink family
>> > type. At least, that is how it had been working previously.
>> >
>> > I'll spend some time looking into it and resubmit a rebased version.
>> > Thanks, Jakub!
>>
>> If the ovs module isn't available, then I see:
>>
>> # ovs_bridge not supported
>> # TEST: IPv4, OVS vxlan4: PMTU exceptions [SKIP]
>>
>> But if it is available, I haven't been able to reproduce such ovs_br0
>> setup failure - things work.
>
> I'm still wondering if the issue is Kconfig-related (plus possibly bad
> interaction with vng). I don't see the OVS knob enabled in the
> selftests config. If it's implied by some other knob, and ends up being
> selected as a module, vng could stumble upon loading the module at
> runtime, especially on incremental build (at least I experience that
> problem locally). I'm not even sure if the KCI is building
> incrementally or not, so all of the above is quite a wild guess.
>
> In any case I think adding an explicit CONFIG_OPENVSWITCH=y to the
> selftest config would make the scenario more well defined.

That is in 7/7 - but there was a collision with a netfilter knob getting
turned on. I can repost it as-is (just after rebasing) if you think
that is the only issue.

> Cheers,
>
> Paolo

On Tue, 25 Jun 2024 09:20:29 -0400 Aaron Conole wrote:
> > I'm still wondering if the issue is Kconfig-related (plus possibly bad
> > interaction with vng). I don't see the OVS knob enabled in the
> > selftests config. If it's implied by some other knob, and ends up being
> > selected as a module, vng could stumble upon loading the module at
> > runtime, especially on incremental build (at least I experience that
> > problem locally). I'm not even sure if the KCI is building
> > incrementally or not, so all of the above is quite a wild guess.
> >
> > In any case I think adding an explicit CONFIG_OPENVSWITCH=y to the
> > selftest config would make the scenario more well defined.
>
> That is in 7/7 - but there was a collision with a netfilter knob getting
> turned on. I can repost it as-is (just after rebasing) if you think
> that is the only issue.

Sorry for not checking it earlier, looks like the runner was missing
pyroute:

# python3 ./tools/testing/selftests/net/openvswitch/ovs-dpctl.py
Need to install the python pyroute2 package >= 0.6.

I guess run_cmd counter-productively eats the stderr output ? :(

Jakub Kicinski <kuba@kernel.org> writes:

> On Tue, 25 Jun 2024 09:20:29 -0400 Aaron Conole wrote:
>>> I'm still wondering if the issue is Kconfig-related (plus possibly bad
>>> interaction with vng). I don't see the OVS knob enabled in the
>>> selftests config. If it's implied by some other knob, and ends up being
>>> selected as a module, vng could stumble upon loading the module at
>>> runtime, especially on incremental build (at least I experience that
>>> problem locally). I'm not even sure if the KCI is building
>>> incrementally or not, so all of the above is quite a wild guess.
>>>
>>> In any case I think adding an explicit CONFIG_OPENVSWITCH=y to the
>>> selftest config would make the scenario more well defined.
>>
>> That is in 7/7 - but there was a collision with a netfilter knob getting
>> turned on. I can repost it as-is (just after rebasing) if you think
>> that is the only issue.
>
> Sorry for not checking it earlier, looks like the runner was missing
> pyroute:
>
> # python3 ./tools/testing/selftests/net/openvswitch/ovs-dpctl.py
> Need to install the python pyroute2 package >= 0.6.
>
> I guess run_cmd counter-productively eats the stderr output ? :(

Awesome :) I will add a patch to ovs-dpctl that will turn the
sys.exit(0) into sys.exit(1) - that way it should do the skip.

When I previously tested, I put an error in the `try` without reading
the except being specifically for a ModuleNotFound error.

I'll make sure pyroute2 isn't installed when I run it again.

Thanks for your help Jakub and Paolo!

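On the pmtu.sh side, that exit-code change could be consumed roughly as
in the sketch below. The helper names are made up and the bare "show"
invocation used to probe ovs-dpctl.py is a guess about its CLI, so
treat this as illustration only:

# Sketch: turn a broken python environment into a SKIP rather than a FAIL.
ovs_dpctl="./openvswitch/ovs-dpctl.py"

check_ovs_dpctl() {
	# With the proposed sys.exit(1), a missing pyroute2 makes any
	# invocation fail, which can be detected up front.
	python3 "${ovs_dpctl}" show >/dev/null 2>&1
}

setup_ovs_bridge_example() {
	check_ovs_dpctl || return $ksft_skip
	# ... proceed with datapath and bridge creation ...
}
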
On Tue, 25 Jun 2024 10:14:24 -0400 Aaron Conole wrote:
> > Sorry for not checking it earlier, looks like the runner was missing
> > pyroute:
> >
> > # python3 ./tools/testing/selftests/net/openvswitch/ovs-dpctl.py
> > Need to install the python pyroute2 package >= 0.6.
> >
> > I guess run_cmd counter-productively eats the stderr output ? :(
>
> Awesome :) I will add a patch to ovs-dpctl that will turn the
> sys.exit(0) into sys.exit(1) - that way it should do the skip.
>
> When I previously tested, I put an error in the `try` without reading
> the except being specifically for a ModuleNotFound error.
>
> I'll make sure pyroute2 isn't installed when I run it again.
>
> Thanks for your help Jakub and Paolo!

BTW I popped the v2 back into the queue, so the next run (in 20min)
will tell us if that's the only thing we were missing

Jakub Kicinski <kuba@kernel.org> writes:

> On Tue, 25 Jun 2024 10:14:24 -0400 Aaron Conole wrote:
>>> Sorry for not checking it earlier, looks like the runner was missing
>>> pyroute:
>>>
>>> # python3 ./tools/testing/selftests/net/openvswitch/ovs-dpctl.py
>>> Need to install the python pyroute2 package >= 0.6.
>>>
>>> I guess run_cmd counter-productively eats the stderr output ? :(
>>
>> Awesome :) I will add a patch to ovs-dpctl that will turn the
>> sys.exit(0) into sys.exit(1) - that way it should do the skip.
>>
>> When I previously tested, I put an error in the `try` without reading
>> the except being specifically for a ModuleNotFound error.
>>
>> I'll make sure pyroute2 isn't installed when I run it again.
>>
>> Thanks for your help Jakub and Paolo!
>
> BTW I popped the v2 back into the queue, so the next run (in 20min)
> will tell us if that's the only thing we were missing

On Tue, 25 Jun 2024 07:06:54 -0700
Jakub Kicinski <kuba@kernel.org> wrote:

> On Tue, 25 Jun 2024 09:20:29 -0400 Aaron Conole wrote:
> > > I'm still wondering if the issue is Kconfig-related (plus possibly bad
> > > interaction with vng). I don't see the OVS knob enabled in the
> > > selftests config. If it's implied by some other knob, and ends up being
> > > selected as a module, vng could stumble upon loading the module at
> > > runtime, especially on incremental build (at least I experience that
> > > problem locally). I'm not even sure if the KCI is building
> > > incrementally or not, so all of the above is quite a wild guess.
> > >
> > > In any case I think adding an explicit CONFIG_OPENVSWITCH=y to the
> > > selftest config would make the scenario more well defined.
> >
> > That is in 7/7 - but there was a collision with a netfilter knob getting
> > turned on. I can repost it as-is (just after rebasing) if you think
> > that is the only issue.
>
> Sorry for not checking it earlier, looks like the runner was missing
> pyroute:
>
> # python3 ./tools/testing/selftests/net/openvswitch/ovs-dpctl.py
> Need to install the python pyroute2 package >= 0.6.
>
> I guess run_cmd counter-productively eats the stderr output ? :(

Yes, otherwise it's rather noisy, but you can run the thing with
VERBOSE=1, see also 56490b623aa0 ("selftests: Add debugging options to
pmtu.sh").

Before that change, we didn't eat standard error, but in the general
case I guess it's quite an improvement.

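For readers who don't have the script open, run_cmd behaves roughly like
the simplified paraphrase below (not a verbatim copy of pmtu.sh): both
stdout and stderr are captured, and only echoed back when VERBOSE=1.

# Simplified paraphrase of pmtu.sh's run_cmd behaviour.
run_cmd() {
	cmd="$*"
	if [ "$VERBOSE" = "1" ]; then
		printf "    COMMAND: %s\n" "${cmd}"
	fi

	out="$($cmd 2>&1)"	# stderr is folded into the captured output
	rc=$?

	if [ "$VERBOSE" = "1" ] && [ -n "$out" ]; then
		echo "    $out"
	fi
	return $rc
}
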
On Tue, 25 Jun 2024 11:17:14 -0400 Aaron Conole wrote:
> > BTW I popped the v2 back into the queue, so the next run (in 20min)
> > will tell us if that's the only thing we were missing