Message ID | 20200130230812.142642-3-brendanhiggins@google.com (mailing list archive) |
---|---|
State | Superseded, archived |
Series | kunit: create a centralized executor to dispatch all KUnit tests |
On 1/30/20 5:08 PM, Brendan Higgins wrote:
> Add a linker section to UML where KUnit can put references to its test
> suites. This patch is an early step in transitioning to dispatching all
> KUnit tests from a centralized executor rather than having each as its
> own separate late_initcall.

All architectures please.

The early versions of KUnit documented reliance on UML. Discussion led to
the conclusion that real architectures and real hardware would be supported.

Things like this are what make me reluctant to move devicetree unittests to
KUnit.

Can you please add a section to the KUnit documentation that lists things
like the expectations, requirements, limitations, etc. for a test case that
is run by KUnit? Some examples that pop to mind from recent discussions
and my own experiences:

  - Each test case is invoked after late_init is complete.
    + Exception: the possible value of being able to run a unit test
      at a specific runlevel has been expressed. If an actual unit
      test can be shown to require running earlier, this restriction
      will be revisited.

  - Each test case must be idempotent. Each test case may be called
    multiple times, and must generate the same result each time it
    is called.
    + Exception 1: a test case can be declared to not be idempotent
      [[ mechanism TBD ]], in which case KUnit will not call the
      test case a second time without the kernel rebooting.
    + Exception 2: hardware may not be deterministic, so a test that
      always passes or fails when run under UML may not always do
      so on real hardware. <--- sentence copied from
      Documentation/dev-tools/kunit/usage.rst
      [[ This item and 1st exception do not exist yet, but will exist
      in some form if the proposed proc filesystem interface is
      added. ]]

  - KUnit provides a helpful wrapper to simplify building a UML kernel
    containing the KUnit test cases, booting the UML kernel, and
    formatting the output from the test cases. This wrapper MUST NOT
    be required to run the test cases or to determine a test result.
    The formatting may provide additional analysis and improve
    readability of a test result.

  - .... There is more that belongs here, but I'm getting side tracked
    here, when I'm trying to instead convert devicetree unittests to
    KUnit and want to get back to that.

-Frank

>
> Signed-off-by: Brendan Higgins <brendanhiggins@google.com>
> Reviewed-by: Stephen Boyd <sboyd@kernel.org>
> ---
>  arch/um/include/asm/common.lds.S | 4 ++++
>  1 file changed, 4 insertions(+)
>
> diff --git a/arch/um/include/asm/common.lds.S b/arch/um/include/asm/common.lds.S
> index 7145ce6999822..eab9ceb450efd 100644
> --- a/arch/um/include/asm/common.lds.S
> +++ b/arch/um/include/asm/common.lds.S
> @@ -52,6 +52,10 @@
>  	CON_INITCALL
>    }
>
> +  .kunit_test_suites : {
> +	KUNIT_TEST_SUITES
> +  }
> +
>    .exitcall : {
>  	__exitcall_begin = .;
>  	*(.exitcall.exit)
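As background for the API under discussion, a minimal KUnit suite declaration looks roughly like the sketch below. All names here are illustrative, and the registration macro spelling has varied across kernel versions (kunit_test_suite() in 5.5, the plural kunit_test_suites() later); the point of the series is that this registration ends up as a reference in the .kunit_test_suites linker section for a central executor, rather than a per-suite late_initcall. The test body also illustrates Frank's idempotence requirement: it depends only on its inputs, so repeated runs give the same result.

#include <kunit/test.h>

/*
 * Idempotent by construction: no persistent state is touched, so running
 * the case any number of times produces the same result.
 */
static void example_add_test(struct kunit *test)
{
	KUNIT_EXPECT_EQ(test, 3, 1 + 2);
}

static struct kunit_case example_test_cases[] = {
	KUNIT_CASE(example_add_test),
	{}
};

static struct kunit_suite example_test_suite = {
	.name = "example",
	.test_cases = example_test_cases,
};

/*
 * With this series, the suite reference registered here is collected in the
 * .kunit_test_suites section for the centralized executor to dispatch.
 */
kunit_test_suites(&example_test_suite);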
On Tue, Feb 4, 2020 at 1:59 PM Frank Rowand <frowand.list@gmail.com> wrote:
>
> On 1/30/20 5:08 PM, Brendan Higgins wrote:
> > Add a linker section to UML where KUnit can put references to its test
> > suites. This patch is an early step in transitioning to dispatching all
> > KUnit tests from a centralized executor rather than having each as its
> > own separate late_initcall.
>
> All architectures please.

I *am* supporting all architectures with this patchset.

The first patch in this series adds support to all architectures except UML
(admittedly I only tried x86 and ARM, 32 bit and 64 bit for both, but I am
pretty sure someone tried it for POWER and something else, so maybe I should
try it with others before submission). A patch specific to UML, this patch,
was needed because UML is a special snowflake and has a bunch of special
linker scripts that don't make the change in vmlinux.lds.h (the previous
patch) sufficient.

> The early versions of KUnit documented reliance on UML. Discussion led to
> the conclusion that real architectures and real hardware would be supported.

I am *very* aware.

I would never intentionally break support for other architectures. I know
it is very important to you, Alan, and others.

> Things like this are what make me reluctant to move devicetree unittests
> to KUnit.

Hopefully I can reassure you then:

With Alan as a regular contributor who cares very much about non-UML
architectures, it would be very unlikely for me to accidentally break
support for other architectures without us finding out before a release.

I also periodically test KUnit on linux-next on x86-64. I have gotten bug
reports for other architectures from Arnd Bergmann and one of the m68k
maintainers who seems to play around with it as well.

So yeah, other people care about this too, and I would really not want to
make any of them unhappy.

> Can you please add a section to the KUnit documentation that lists things
> like the expectations, requirements, limitations, etc. for a test case
> that is run by KUnit? Some examples that pop to mind from recent
> discussions and my own experiences:
>
>   - Each test case is invoked after late_init is complete.
>     + Exception: the possible value of being able to run a unit test
>       at a specific runlevel has been expressed. If an actual unit
>       test can be shown to require running earlier, this restriction
>       will be revisited.
>
>   - Each test case must be idempotent. Each test case may be called
>     multiple times, and must generate the same result each time it
>     is called.
>     + Exception 1: a test case can be declared to not be idempotent
>       [[ mechanism TBD ]], in which case KUnit will not call the
>       test case a second time without the kernel rebooting.
>     + Exception 2: hardware may not be deterministic, so a test that
>       always passes or fails when run under UML may not always do
>       so on real hardware. <--- sentence copied from
>       Documentation/dev-tools/kunit/usage.rst
>       [[ This item and 1st exception do not exist yet, but will exist
>       in some form if the proposed proc filesystem interface is
>       added. ]]
>
>   - KUnit provides a helpful wrapper to simplify building a UML kernel
>     containing the KUnit test cases, booting the UML kernel, and
>     formatting the output from the test cases. This wrapper MUST NOT
>     be required to run the test cases or to determine a test result.
>     The formatting may provide additional analysis and improve
>     readability of a test result.
>
>   - .... There is more that belongs here, but I'm getting side tracked
>     here, when I'm trying to instead convert devicetree unittests to
>     KUnit and want to get back to that.

Sure, I think that's a great start! Thanks for that. I hope you don't mind
if I copy and paste some of it.

It kind of sounds like you are talking about more of a requirements doc
than the design doc I was imagining in my reply to you on the cover letter,
which is fine. The documentation is primarily for people other than me, so
whatever you and others think is useful, I will do.
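To make the "centralized executor" idea described above concrete: the linker gathers pointers to every registered suite between two boundary symbols, and a single runner can then walk that array once at boot. The sketch below is only an illustration of that shape; the boundary symbol names, the runner function, and the late_initcall hook are assumptions rather than the code from this series, while kunit_run_tests() is the existing KUnit entry point for running a single suite.

#include <kunit/test.h>
#include <linux/init.h>

/*
 * Assumed boundary symbols exported by the .kunit_test_suites linker
 * section; the exact names are illustrative.
 */
extern struct kunit_suite * const __kunit_suites_start[];
extern struct kunit_suite * const __kunit_suites_end[];

/*
 * Walk every suite reference the linker gathered and run it once, in place
 * of each suite registering its own late_initcall.
 */
static int __init kunit_run_all_suites(void)
{
	struct kunit_suite * const *suite;

	for (suite = __kunit_suites_start; suite < __kunit_suites_end; suite++)
		kunit_run_tests(*suite);

	return 0;
}
late_initcall(kunit_run_all_suites);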
On 2/4/20 4:30 PM, Brendan Higgins wrote:
> On Tue, Feb 4, 2020 at 1:59 PM Frank Rowand <frowand.list@gmail.com> wrote:
>>
>> On 1/30/20 5:08 PM, Brendan Higgins wrote:
>>> Add a linker section to UML where KUnit can put references to its test
>>> suites. This patch is an early step in transitioning to dispatching all
>>> KUnit tests from a centralized executor rather than having each as its
>>> own separate late_initcall.
>>
>> All architectures please.
>
> I *am* supporting all architectures with this patchset.
>
> The first patch in this series adds support to all architectures
> except UML (admittedly I only tried x86 and ARM, 32 bit and 64 bit for

Right you are. My mind did not span from patch 1 to patch 2. Apologies for
the noise.

> both, but I am pretty sure someone tried it for POWER and something
> else, so maybe I should try it with others before submission). A patch
> specific to UML, this patch, was needed because UML is a special
> snowflake and has a bunch of special linker scripts that don't make
> the change in vmlinux.lds.h (the previous patch) sufficient.
>
>> The early versions of KUnit documented reliance on UML. Discussion led to
>> the conclusion that real architectures and real hardware would be
>> supported.
>
> I am *very* aware.
>
> I would never intentionally break support for other architectures. I
> know it is very important to you, Alan, and others.
>
>> Things like this are what make me reluctant to move devicetree unittests
>> to KUnit.
>
> Hopefully I can reassure you then:
>
> With Alan as a regular contributor who cares very much about non-UML
> architectures, it would be very unlikely for me to accidentally break
> support for other architectures without us finding out before a
> release.
>
> I also periodically test KUnit on linux-next on x86-64. I have gotten
> bug reports for other architectures from Arnd Bergmann and one of the
> m68k maintainers who seems to play around with it as well.
>
> So yeah, other people care about this too, and I would really not want
> to make any of them unhappy.

Thanks for the extra reassurance.

>> Can you please add a section to the KUnit documentation that lists things
>> like the expectations, requirements, limitations, etc. for a test case
>> that is run by KUnit? Some examples that pop to mind from recent
>> discussions and my own experiences:
>>
>>   - Each test case is invoked after late_init is complete.
>>     + Exception: the possible value of being able to run a unit test
>>       at a specific runlevel has been expressed. If an actual unit
>>       test can be shown to require running earlier, this restriction
>>       will be revisited.
>>
>>   - Each test case must be idempotent. Each test case may be called
>>     multiple times, and must generate the same result each time it
>>     is called.
>>     + Exception 1: a test case can be declared to not be idempotent
>>       [[ mechanism TBD ]], in which case KUnit will not call the
>>       test case a second time without the kernel rebooting.
>>     + Exception 2: hardware may not be deterministic, so a test that
>>       always passes or fails when run under UML may not always do
>>       so on real hardware. <--- sentence copied from
>>       Documentation/dev-tools/kunit/usage.rst
>>       [[ This item and 1st exception do not exist yet, but will exist
>>       in some form if the proposed proc filesystem interface is
>>       added. ]]
>>
>>   - KUnit provides a helpful wrapper to simplify building a UML kernel
>>     containing the KUnit test cases, booting the UML kernel, and
>>     formatting the output from the test cases. This wrapper MUST NOT
>>     be required to run the test cases or to determine a test result.
>>     The formatting may provide additional analysis and improve
>>     readability of a test result.
>>
>>   - .... There is more that belongs here, but I'm getting side tracked
>>     here, when I'm trying to instead convert devicetree unittests to
>>     KUnit and want to get back to that.
>
> Sure, I think that's a great start! Thanks for that. I hope you don't
> mind if I copy and paste some of it.

Please do. And no need to credit me.

> It kind of sounds like you are talking about more of a requirements
> doc than the design doc I was imagining in my reply to you on the
> cover letter, which is fine. The documentation is primarily for people
> other than me, so whatever you and others think is useful, I will do.
>

I wasn't really sure what to label it as. My inspiration was based a little
bit on reading through the Linux 5.5 KUnit source and documentation, and
trying to understand the expectations of the KUnit framework and what the
test cases have to either obey or can expect.

I think there is a lot of history that you know, but is only visible to
test implementors if they read through the past couple of years' email
threads.

-Frank
On Tue, Feb 4, 2020 at 3:17 PM Frank Rowand <frowand.list@gmail.com> wrote:
>
> On 2/4/20 4:30 PM, Brendan Higgins wrote:
> > On Tue, Feb 4, 2020 at 1:59 PM Frank Rowand <frowand.list@gmail.com> wrote:
> >>
> >> On 1/30/20 5:08 PM, Brendan Higgins wrote:
> >>> Add a linker section to UML where KUnit can put references to its test
> >>> suites. This patch is an early step in transitioning to dispatching all
> >>> KUnit tests from a centralized executor rather than having each as its
> >>> own separate late_initcall.
> >>
> >> All architectures please.
> >
> > I *am* supporting all architectures with this patchset.
> >
> > The first patch in this series adds support to all architectures
> > except UML (admittedly I only tried x86 and ARM, 32 bit and 64 bit for
>
> Right you are. My mind did not span from patch 1 to patch 2. Apologies
> for the noise.
>
> > both, but I am pretty sure someone tried it for POWER and something
> > else, so maybe I should try it with others before submission). A patch
> > specific to UML, this patch, was needed because UML is a special
> > snowflake and has a bunch of special linker scripts that don't make
> > the change in vmlinux.lds.h (the previous patch) sufficient.
> >
> >> The early versions of KUnit documented reliance on UML. Discussion led
> >> to the conclusion that real architectures and real hardware would be
> >> supported.
> >
> > I am *very* aware.
> >
> > I would never intentionally break support for other architectures. I
> > know it is very important to you, Alan, and others.
> >
> >> Things like this are what make me reluctant to move devicetree
> >> unittests to KUnit.
> >
> > Hopefully I can reassure you then:
> >
> > With Alan as a regular contributor who cares very much about non-UML
> > architectures, it would be very unlikely for me to accidentally break
> > support for other architectures without us finding out before a
> > release.
> >
> > I also periodically test KUnit on linux-next on x86-64. I have gotten
> > bug reports for other architectures from Arnd Bergmann and one of the
> > m68k maintainers who seems to play around with it as well.
> >
> > So yeah, other people care about this too, and I would really not want
> > to make any of them unhappy.
>
> Thanks for the extra reassurance.

No worries. This actually makes me think that we should probably have some
kind of automated test that at least makes sure that popular KUnit
architectures are not broken. Someone is currently adding KUnit support to
KernelCI; maybe we can have KernelCI run multiple architecture tests after
the initial support is complete.

> >> Can you please add a section to the KUnit documentation that lists
> >> things like the expectations, requirements, limitations, etc. for a
> >> test case that is run by KUnit? Some examples that pop to mind from
> >> recent discussions and my own experiences:
> >>
> >>   - Each test case is invoked after late_init is complete.
> >>     + Exception: the possible value of being able to run a unit test
> >>       at a specific runlevel has been expressed. If an actual unit
> >>       test can be shown to require running earlier, this restriction
> >>       will be revisited.
> >>
> >>   - Each test case must be idempotent. Each test case may be called
> >>     multiple times, and must generate the same result each time it
> >>     is called.
> >>     + Exception 1: a test case can be declared to not be idempotent
> >>       [[ mechanism TBD ]], in which case KUnit will not call the
> >>       test case a second time without the kernel rebooting.
> >>     + Exception 2: hardware may not be deterministic, so a test that
> >>       always passes or fails when run under UML may not always do
> >>       so on real hardware. <--- sentence copied from
> >>       Documentation/dev-tools/kunit/usage.rst
> >>       [[ This item and 1st exception do not exist yet, but will exist
> >>       in some form if the proposed proc filesystem interface is
> >>       added. ]]
> >>
> >>   - KUnit provides a helpful wrapper to simplify building a UML kernel
> >>     containing the KUnit test cases, booting the UML kernel, and
> >>     formatting the output from the test cases. This wrapper MUST NOT
> >>     be required to run the test cases or to determine a test result.
> >>     The formatting may provide additional analysis and improve
> >>     readability of a test result.
> >>
> >>   - .... There is more that belongs here, but I'm getting side tracked
> >>     here, when I'm trying to instead convert devicetree unittests to
> >>     KUnit and want to get back to that.
> >
> > Sure, I think that's a great start! Thanks for that. I hope you don't
> > mind if I copy and paste some of it.
>
> Please do. And no need to credit me.
>
> > It kind of sounds like you are talking about more of a requirements
> > doc than the design doc I was imagining in my reply to you on the
> > cover letter, which is fine. The documentation is primarily for people
> > other than me, so whatever you and others think is useful, I will do.
>
> I wasn't really sure what to label it as. My inspiration was based
> a little bit on reading through the Linux 5.5 KUnit source and
> documentation, and trying to understand the expectations of the
> KUnit framework and what the test cases have to either obey or
> can expect.
>
> I think there is a lot of history that you know, but is only visible
> to test implementors if they read through the past couple of years'
> email threads.

Yeah, that's no good. We need to provide a better experience than that.
David has gotten deeply involved relatively recently: I suspect that he
might have some good insight on this.

David, you mentioned offline that there are some philosophical changes in
how we think about KUnit that have happened that you feel have never quite
been captured in the docs. Do you think this is part of what Frank has
pointed out here? If not, do you have any thoughts about what we should
call this documentation section?

Shuah's first KUnit PR seemed to imply that KUnit was primarily for UML, or
only fully supported under UML. So I think I might be the odd one out in
thinking that that has changed and that the documentation properly conveys
that.
On Tue, Feb 4, 2020 at 5:17 PM 'Brendan Higgins' via KUnit Development
<kunit-dev@googlegroups.com> wrote:
>
> On Tue, Feb 4, 2020 at 3:17 PM Frank Rowand <frowand.list@gmail.com> wrote:
> >
> > On 2/4/20 4:30 PM, Brendan Higgins wrote:
> > > On Tue, Feb 4, 2020 at 1:59 PM Frank Rowand <frowand.list@gmail.com> wrote:
> > >>
> > >> Can you please add a section to the KUnit documentation that lists
> > >> things like the expectations, requirements, limitations, etc. for a
> > >> test case that is run by KUnit? Some examples that pop to mind from
> > >> recent discussions and my own experiences:
> > >>
> > >>   - Each test case is invoked after late_init is complete.
> > >>     + Exception: the possible value of being able to run a unit test
> > >>       at a specific runlevel has been expressed. If an actual unit
> > >>       test can be shown to require running earlier, this restriction
> > >>       will be revisited.
> > >>
> > >>   - Each test case must be idempotent. Each test case may be called
> > >>     multiple times, and must generate the same result each time it
> > >>     is called.
> > >>     + Exception 1: a test case can be declared to not be idempotent
> > >>       [[ mechanism TBD ]], in which case KUnit will not call the
> > >>       test case a second time without the kernel rebooting.
> > >>     + Exception 2: hardware may not be deterministic, so a test that
> > >>       always passes or fails when run under UML may not always do
> > >>       so on real hardware. <--- sentence copied from
> > >>       Documentation/dev-tools/kunit/usage.rst
> > >>       [[ This item and 1st exception do not exist yet, but will exist
> > >>       in some form if the proposed proc filesystem interface is
> > >>       added. ]]
> > >>
> > >>   - KUnit provides a helpful wrapper to simplify building a UML kernel
> > >>     containing the KUnit test cases, booting the UML kernel, and
> > >>     formatting the output from the test cases. This wrapper MUST NOT
> > >>     be required to run the test cases or to determine a test result.
> > >>     The formatting may provide additional analysis and improve
> > >>     readability of a test result.
> > >>
> > >>   - .... There is more that belongs here, but I'm getting side tracked
> > >>     here, when I'm trying to instead convert devicetree unittests to
> > >>     KUnit and want to get back to that.
> > >
> > > Sure, I think that's a great start! Thanks for that. I hope you don't
> > > mind if I copy and paste some of it.
> >
> > Please do. And no need to credit me.
> >
> > > It kind of sounds like you are talking about more of a requirements
> > > doc than the design doc I was imagining in my reply to you on the
> > > cover letter, which is fine. The documentation is primarily for people
> > > other than me, so whatever you and others think is useful, I will do.
> >
> > I wasn't really sure what to label it as. My inspiration was based
> > a little bit on reading through the Linux 5.5 KUnit source and
> > documentation, and trying to understand the expectations of the
> > KUnit framework and what the test cases have to either obey or
> > can expect.
> >
> > I think there is a lot of history that you know, but is only visible
> > to test implementors if they read through the past couple of years'
> > email threads.
>
> Yeah, that's no good. We need to provide a better experience than
> that. David has gotten deeply involved relatively recently: I suspect
> that he might have some good insight on this.
>
> David, you mentioned offline that there are some philosophical changes
> in how we think about KUnit that have happened that you feel have never
> quite been captured in the docs. Do you think this is part of what
> Frank has pointed out here? If not, do you have any thoughts about
> what we should call this documentation section?
>
> Shuah's first KUnit PR seemed to imply that KUnit was primarily for
> UML, or only fully supported under UML. So I think I might be the odd
> one out in thinking that that has changed and that the documentation
> properly conveys that.

Yeah: I think the documentation could do with some improvements on these
fronts: there are a few places which imply that KUnit requires UML, which
is definitely not the case. We still want to encourage people to try UML:
it's usually the quickest way of running tests, but KUnit itself should
remain architecture-independent, as should as many tests as possible.

I think there are probably (at least) two different things that need doing
to make these sorts of miscommunications less likely.

The bulk of the documentation needs to stop referring to KUnit as something
built upon or using UML (while still making it clear that some of our
tooling defaults to or requires UML at the moment). This shows up in a lot
of the "What is KUnit/Why use KUnit" sections in both the kernel docs and
the KUnit website.

Secondly, we need to document the test environment (possibly alongside some
test style best practices, possibly separately). Given that work like the
debugfs stuff and how to support tests which need __init data or need to
only run once is still ongoing, I'm not sure if we can definitively state
what the solutions there will be yet, but noting that tests should not
depend on a specific architecture like UML (unless they're testing
architecture-specific code in a way that can't reasonably be made
architecture-independent) is worth doing sooner rather than later.

I'll see if I can put a few doc patches together to update/fix at least
some of the most egregious examples.

Cheers,
-- David
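As an illustration of the architecture-independence point above, a test can be written so its default expectations hold on every architecture, with any unavoidable architecture-specific checks confined behind a config guard. This is a generic kernel idiom rather than a KUnit-mandated pattern, and every name below is made up for the sketch.

#include <kunit/test.h>
#include <linux/kernel.h>

static void format_roundtrip_test(struct kunit *test)
{
	char buf[16];

	/* Behaviour that is identical on every architecture. */
	snprintf(buf, sizeof(buf), "%d", 42);
	KUNIT_EXPECT_STREQ(test, buf, "42");

#ifdef CONFIG_UML
	/*
	 * Anything genuinely UML-only would be confined here, and only if
	 * it cannot reasonably be made architecture-independent.
	 */
#endif
}

static struct kunit_case format_test_cases[] = {
	KUNIT_CASE(format_roundtrip_test),
	{}
};

static struct kunit_suite format_test_suite = {
	.name = "format-example",
	.test_cases = format_test_cases,
};
kunit_test_suites(&format_test_suite);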
diff --git a/arch/um/include/asm/common.lds.S b/arch/um/include/asm/common.lds.S
index 7145ce6999822..eab9ceb450efd 100644
--- a/arch/um/include/asm/common.lds.S
+++ b/arch/um/include/asm/common.lds.S
@@ -52,6 +52,10 @@
 	CON_INITCALL
   }
 
+  .kunit_test_suites : {
+	KUNIT_TEST_SUITES
+  }
+
   .exitcall : {
 	__exitcall_begin = .;
 	*(.exitcall.exit)
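The KUNIT_TEST_SUITES helper used in this hunk is provided by the previous patch in the series for the generic linker script header (include/asm-generic/vmlinux.lds.h). Its job is to gather the suite pointers and expose boundary symbols for the executor; the sketch below shows the rough shape that implies, but the macro body and symbol names are assumptions based on the series description, not a verbatim copy of the patch.

/*
 * Plausible expansion of KUNIT_TEST_SUITES in vmlinux.lds.h; symbol names
 * are assumptions. KEEP() prevents --gc-sections from discarding the
 * otherwise unreferenced suite pointers.
 */
#define KUNIT_TEST_SUITES						\
		. = ALIGN(8);						\
		__kunit_suites_start = .;				\
		KEEP(*(.kunit_test_suites))				\
		__kunit_suites_end = .;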