Message ID: 20250217-kunit-kselftests-v1-0-42b4524c3b0a@linutronix.de (mailing list archive)
Series: kunit: Introduce UAPI testing framework
On Mon, 17 Feb 2025 at 19:00, Thomas Weißschuh <thomas.weissschuh@linutronix.de> wrote:
>
> Currently testing of userspace and in-kernel API use two different
> frameworks. kselftests for the userspace ones and Kunit for the
> in-kernel ones. Besides their different scopes, both have different
> strengths and limitations:
>
> Kunit:
> * Tests are normal kernel code.
> * They use the regular kernel toolchain.
> * They can be packaged and distributed as modules conveniently.
>
> Kselftests:
> * Tests are normal userspace code.
> * They need a userspace toolchain.
>   A kernel cross toolchain is likely not enough.
> * A fair amount of userland is required to run the tests,
>   which means a full distro or handcrafted rootfs.
> * There is no way to conveniently package and run kselftests with a
>   given kernel image.
> * The kselftests makefiles are not as powerful as regular kbuild.
>   For example they are missing proper header dependency tracking or more
>   complex compiler option modifications.
>
> Therefore kunit is much easier to run against different kernel
> configurations and architectures.
>
> This series aims to combine kselftests and kunit, avoiding both their
> limitations. It works by compiling the userspace kselftests as part of
> the regular kernel build, embedding them into the kunit kernel or module
> and executing them from there. If the kernel toolchain is not fit to
> produce userspace because of a missing libc, the kernel's own nolibc can
> be used instead.
>
> The structured TAP output from the kselftest is integrated into the
> kunit KTAP output transparently; the kunit parser can parse the combined
> logs together.

Wow -- this is really neat! Thanks for putting this together.

I haven't had a chance to play with it in detail yet, but here are a
few initial / random thoughts:

- Having support for running things from userspace within a KUnit test
  seems like it's something that could be really useful for testing
  syscalls (and maybe other mm / exec code as well).
- I don't think we can totally combine kselftests and KUnit for all
  tests (some of the selftests definitely require more complicated
  dependencies than I think KUnit would want to reasonably support or
  require).
- The in-kernel KUnit framework doesn't have any knowledge of the
  structure or results of a UAPI test. It'd be nice to at least be able
  to get the process exit status, and bubble up a basic
  'passed'/'skipped'/'failed' so that we're not reporting success for
  failed tests (and so that simple test executables could run without
  needing to output their own KTAP if they only run one test).
- Equally, for some selftests, it's probably a pain to have to write a
  kernel module if there's nothing that needs to be done in the kernel.
  Maybe such tests could still be built with nolibc and a kernel
  toolchain, but be triggered directly from the python tooling (e.g. as
  the 'init' process).
- There still seem to be some increased requirements over plain KUnit
  at the moment: I'm definitely seeing issues from not having the right
  libgcc installed for all architectures. (Though it's working for most
  of them, which is very neat!)
- This is a great example of how having standardised result formats is
  useful!
- If this is going to change or blur the boundary between "this is a
  kselftest" and "this is a KUnit test", we probably will need to update
  Documentation/dev-tools/testing-overview.rst -- it probably needs some
  clarifications there _anyway_, so this is probably a good point to
  ensure everyone's on the same page.

Do you have a particular non-example test you'd like to either write
or port to use this? I think it'd be great to see some real-world
examples of where this'd be most useful.

Either way, I'll keep playing with this a bit over the next few days.
I'd love to hear what Shuah and Rae think, as well, as this involves
kselftest and KTAP a lot.

Cheers,
-- David

<snip>
On Tue, Feb 18, 2025 at 04:20:06PM +0800, David Gow wrote:
> On Mon, 17 Feb 2025 at 19:00, Thomas Weißschuh
> <thomas.weissschuh@linutronix.de> wrote:
> >
> > Currently testing of userspace and in-kernel API use two different
> > frameworks. kselftests for the userspace ones and Kunit for the
> > in-kernel ones. Besides their different scopes, both have different
> > strengths and limitations:
> > <snip>
>
> Wow -- this is really neat! Thanks for putting this together.
>
> I haven't had a chance to play with it in detail yet, but here are a
> few initial / random thoughts:
>
> - Having support for running things from userspace within a KUnit test
>   seems like it's something that could be really useful for testing
>   syscalls (and maybe other mm / exec code as well).

That's the target :-) I'm also looking for more descriptive naming ideas.

> - I don't think we can totally combine kselftests and KUnit for all
>   tests (some of the selftests definitely require more complicated
>   dependencies than I think KUnit would want to reasonably support or
>   require).

Agreed, though I somewhat expect that some complex selftests would be
simplified to work with this scheme, as it should improve test coverage
from the bots.

> - The in-kernel KUnit framework doesn't have any knowledge of the
>   structure or results of a UAPI test. It'd be nice to at least be able
>   to get the process exit status, and bubble up a basic
>   'passed'/'skipped'/'failed' so that we're not reporting success for
>   failed tests (and so that simple test executables could run without
>   needing to output their own KTAP if they only run one test).

Currently any exit code != 0 fails the test. I'll add some proper
handling for exit(KSFT_SKIP).

> - Equally, for some selftests, it's probably a pain to have to write a
>   kernel module if there's nothing that needs to be done in the kernel.
>   Maybe such tests could still be built with nolibc and a kernel
>   toolchain, but be triggered directly from the python tooling (e.g. as
>   the 'init' process).

Some autodiscovery based on linker sections could be done. However that
would not yet define how to group the tests into suites. Having one
explicit reference in a module makes everything easier to understand.
What about a helper macro for the test case definition:
KUNIT_CASE_UAPI(symbol)? All UAPI tests of a subsystem can share the
same module, so the overhead should be limited.

I'd like to keep it usable without needing the python tooling.
Note in case it was not clear: all test executables are available as
normal files in the build directory and can also be executed from there.

> - There still seem to be some increased requirements over plain KUnit
>   at the moment: I'm definitely seeing issues from not having the right
>   libgcc installed for all architectures. (Though it's working for most
>   of them, which is very neat!)

I'll look into that.

> - This is a great example of how having standardised result formats is
>   useful!

Indeed, it was surprisingly compatible.

> - If this is going to change or blur the boundary between "this is a
>   kselftest" and "this is a KUnit test", we probably will need to update
>   Documentation/dev-tools/testing-overview.rst -- it probably needs some
>   clarifications there _anyway_, so this is probably a good point to
>   ensure everyone's on the same page.

Agreed.

> Do you have a particular non-example test you'd like to either write
> or port to use this? I think it'd be great to see some real-world
> examples of where this'd be most useful.

I want to use it for the vDSO selftests. To be usable for that, another
series is necessary [0].
I tested the whole thing locally with one selftest and promptly found a
bug in the selftests [1].

> Either way, I'll keep playing with this a bit over the next few days.
> I'd love to hear what Shuah and Rae think, as well, as this involves
> kselftest and KTAP a lot.

Thanks! I'm also looking forward to their feedback.

Thomas

<snip>

[0] https://lore.kernel.org/lkml/20250203-parse_vdso-nolibc-v1-0-9cb6268d77be@linutronix.de/
[1] https://lore.kernel.org/lkml/20250217-selftests-vdso-s390-gnu-hash-v2-1-f6c2532ffe2a@linutronix.de/
Currently testing of userspace and in-kernel API use two different
frameworks. kselftests for the userspace ones and Kunit for the
in-kernel ones. Besides their different scopes, both have different
strengths and limitations:

Kunit:
* Tests are normal kernel code.
* They use the regular kernel toolchain.
* They can be packaged and distributed as modules conveniently.

Kselftests:
* Tests are normal userspace code.
* They need a userspace toolchain.
  A kernel cross toolchain is likely not enough.
* A fair amount of userland is required to run the tests,
  which means a full distro or handcrafted rootfs.
* There is no way to conveniently package and run kselftests with a
  given kernel image.
* The kselftests makefiles are not as powerful as regular kbuild.
  For example they are missing proper header dependency tracking or more
  complex compiler option modifications.

Therefore kunit is much easier to run against different kernel
configurations and architectures.

This series aims to combine kselftests and kunit, avoiding both their
limitations. It works by compiling the userspace kselftests as part of
the regular kernel build, embedding them into the kunit kernel or module
and executing them from there. If the kernel toolchain is not fit to
produce userspace because of a missing libc, the kernel's own nolibc can
be used instead.

The structured TAP output from the kselftest is integrated into the
kunit KTAP output transparently; the kunit parser can parse the combined
logs together.

Further room for improvements:
* Call each test in its completely dedicated namespace
* Handle additional test files besides the test executable through
  archives. CPIO, cramfs, etc.
* Compatibility with kselftest_harness.h (in progress)
* Expose the blobs in debugfs
* Provide some convenience wrappers around compat userprogs
* Figure out a migration path/coexistence solution for
  kunit UAPI and tools/testing/selftests/

Output from the kunit example testcase; note the output of
"example_uapi_tests":

$ ./tools/testing/kunit/kunit.py run --kunitconfig lib/kunit example
...
Running tests with:
$ .kunit/linux kunit.filter_glob=example kunit.enable=1 mem=1G console=tty kunit_shutdown=halt
[11:53:53] ================== example (10 subtests) ===================
[11:53:53] [PASSED] example_simple_test
[11:53:53] [SKIPPED] example_skip_test
[11:53:53] [SKIPPED] example_mark_skipped_test
[11:53:53] [PASSED] example_all_expect_macros_test
[11:53:53] [PASSED] example_static_stub_test
[11:53:53] [PASSED] example_static_stub_using_fn_ptr_test
[11:53:53] [PASSED] example_priv_test
[11:53:53] =================== example_params_test ===================
[11:53:53] [SKIPPED] example value 3
[11:53:53] [PASSED] example value 2
[11:53:53] [PASSED] example value 1
[11:53:53] [SKIPPED] example value 0
[11:53:53] =============== [PASSED] example_params_test ===============
[11:53:53] [PASSED] example_slow_test
[11:53:53] ======================= (4 subtests) =======================
[11:53:53] [PASSED] procfs
[11:53:53] [PASSED] userspace test 2
[11:53:53] [SKIPPED] userspace test 3: some reason
[11:53:53] [PASSED] userspace test 4
[11:53:53] ================ [PASSED] example_uapi_test ================
[11:53:53] ===================== [PASSED] example =====================
[11:53:53] ============================================================
[11:53:53] Testing complete. Ran 16 tests: passed: 11, skipped: 5
[11:53:53] Elapsed time: 67.543s total, 1.823s configuring, 65.655s building, 0.058s running

Based on v6.14-rc1 and the series
"tools/nolibc: compatibility with -Wmissing-prototypes" [0].
For compatibility with LLVM/clang another series is needed [1].

[0] https://lore.kernel.org/lkml/20250123-nolibc-prototype-v1-0-e1afc5c1999a@weissschuh.net/
[1] https://lore.kernel.org/lkml/20250213-kbuild-userprog-fixes-v1-0-f255fb477d98@linutronix.de/

Signed-off-by: Thomas Weißschuh <thomas.weissschuh@linutronix.de>
---
Thomas Weißschuh (12):
      kconfig: implement CONFIG_HEADERS_INSTALL for Usermode Linux
      kconfig: introduce CONFIG_ARCH_HAS_NOLIBC
      kbuild: userprogs: respect CONFIG_WERROR
      kbuild: userprogs: add nolibc support
      kbuild: introduce blob framework
      kunit: tool: Add test for nested test result reporting
      kunit: tool: Don't overwrite test status based on subtest counts
      kunit: tool: Parse skipped tests from kselftest.h
      kunit: Introduce UAPI testing framework
      kunit: uapi: Add example for UAPI tests
      kunit: uapi: Introduce preinit executable
      kunit: uapi: Validate usability of /proc

 Documentation/kbuild/makefiles.rst                  |  12 +
 Makefile                                            |   5 +-
 include/kunit/uapi.h                                |  17 ++
 include/linux/blob.h                                |  21 ++
 init/Kconfig                                        |   2 +
 lib/Kconfig.debug                                   |   1 -
 lib/kunit/Kconfig                                   |   9 +
 lib/kunit/Makefile                                  |  17 +-
 lib/kunit/kunit-example-test.c                      |  17 ++
 lib/kunit/kunit-uapi-example.c                      |  58 +++++
 lib/kunit/uapi-preinit.c                            |  61 +++++
 lib/kunit/uapi.c                                    | 250 +++++++++++++++++++++
 scripts/Makefile.blobs                              |  19 ++
 scripts/Makefile.build                              |   6 +
 scripts/Makefile.clean                              |   2 +-
 scripts/Makefile.userprogs                          |  18 +-
 scripts/blob-wrap.c                                 |  27 +++
 tools/include/nolibc/Kconfig.nolibc                 |  18 ++
 tools/testing/kunit/kunit_parser.py                 |  13 +-
 tools/testing/kunit/kunit_tool_test.py              |   9 +
 .../test_is_test_passed-failure-nested.log          |  10 +
 .../test_data/test_is_test_passed-kselftest.log     |   3 +-
 22 files changed, 584 insertions(+), 11 deletions(-)
---
base-commit: 20e952894066214a80793404c9578d72ef89c5e0
change-id: 20241015-kunit-kselftests-56273bc40442

Best regards,
--
Thomas Weißschuh <thomas.weissschuh@linutronix.de>