
[3/3] t-ctype: do one test per class and char

Message ID 20240225112722.89221-4-l.s.r@web.de (mailing list archive)
State New, archived
Series: t-ctype: simplify unit test definitions

Commit Message

René Scharfe Feb. 25, 2024, 11:27 a.m. UTC
Simplify TEST_CHAR_CLASS by using TEST for each character separately.
This increases the number of tests to 3598, but avoids the need for
using internal functions and test_msg() for custom messages.  The
resulting macro has minimal test setup overhead.

Signed-off-by: René Scharfe <l.s.r@web.de>
---
 t/unit-tests/t-ctype.c | 15 +++++----------
 1 file changed, 5 insertions(+), 10 deletions(-)

--
2.44.0

Comments

Christian Couder Feb. 26, 2024, 9:28 a.m. UTC | #1
On Sun, Feb 25, 2024 at 12:27 PM René Scharfe <l.s.r@web.de> wrote:
>
> Simplify TEST_CHAR_CLASS by using TEST for each character separately.
> This increases the number of tests to 3598,

Does this mean that when all the tests pass there will be 3598 lines
of output on the terminal instead of 14 before this patch?

If that's the case, I don't like this.

> but avoids the need for
> using internal functions and test_msg() for custom messages.  The
> resulting macro has minimal test setup overhead.

Yeah, the code definitely looks cleaner, but clean output is important too.

Thanks!
René Scharfe Feb. 26, 2024, 5:26 p.m. UTC | #2
On 26.02.24 at 10:28, Christian Couder wrote:
> On Sun, Feb 25, 2024 at 12:27 PM René Scharfe <l.s.r@web.de> wrote:
>>
>> Simplify TEST_CHAR_CLASS by using TEST for each character separately.
>> This increases the number of tests to 3598,
>
> Does this mean that when all the tests pass there will be 3598 lines
> of output on the terminal instead of 14 before this patch?

Yes.

> If that's the case, I don't like this.
>
>> but avoids the need for
>> using internal functions and test_msg() for custom messages.  The
>> resulting macro has minimal test setup overhead.
>
> Yeah, the code definitely looks cleaner, but clean output is important too.

The output is clean as well, but there's a lot of it.  Perhaps too much.
The success messages are boring, though, and if all checks pass then the
only useful information is the status code.  A TAP harness like prove
summarizes that nicely:

   $ prove t/unit-tests/bin/t-ctype
   t/unit-tests/bin/t-ctype .. ok
   All tests successful.
   Files=1, Tests=3598,  0 wallclock secs ( 0.08 usr +  0.00 sys =  0.08 CPU)
   Result: PASS

Filtering out passing checks e.g. with "| grep -v ^ok" would help when
debugging a test failure. I vaguely miss the --immediate switch from the
regular test library, however.
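
For example, when running the binary directly, something like this
(untested) would leave only the plan line, diagnostics and any failures
on screen:

   $ t/unit-tests/bin/t-ctype | grep -v '^ok'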

René
Junio C Hamano Feb. 26, 2024, 5:44 p.m. UTC | #3
René Scharfe <l.s.r@web.de> writes:

> Filtering out passing checks e.g. with "| grep -v ^ok" would help when
> debugging a test failure. I vaguely miss the --immediate switch from the
> regular test library, however.

Yeah, "-i" is one of the most useful switches during debugging
tests.
Josh Steadmon Feb. 26, 2024, 6:58 p.m. UTC | #4
On 2024.02.26 18:26, René Scharfe wrote:
> On 26.02.24 at 10:28, Christian Couder wrote:
> > On Sun, Feb 25, 2024 at 12:27 PM René Scharfe <l.s.r@web.de> wrote:
> >>
> >> Simplify TEST_CHAR_CLASS by using TEST for each character separately.
> >> This increases the number of tests to 3598,
> >
> > Does this mean that when all the tests pass there will be 3598 lines
> > of output on the terminal instead of 14 before this patch?
> 
> Yes.
> 
> > If that's the case, I don't like this.
> >
> >> but avoids the need for
> >> using internal functions and test_msg() for custom messages.  The
> >> resulting macro has minimal test setup overhead.
> >
> > Yeah, the code definitely looks cleaner, but clean output is important too.
> 
> The output is clean as well, but there's a lot of it.  Perhaps too much.
> The success messages are boring, though, and if all checks pass then the
> only useful information is the status code.  A TAP harness like prove
> summarizes that nicely:
> 
>    $ prove t/unit-tests/bin/t-ctype
>    t/unit-tests/bin/t-ctype .. ok
>    All tests successful.
>    Files=1, Tests=3598,  0 wallclock secs ( 0.08 usr +  0.00 sys =  0.08 CPU)
>    Result: PASS
> 
> Filtering out passing checks e.g. with "| grep -v ^ok" would help when
> debugging a test failure. I vaguely miss the --immediate switch from the
> regular test library, however.

Yeah, I agree here. It's a lot of output but it's almost always going to
be consumed by a test harness rather than a human, and it's easy to
filter out the noise if someone does need to do some manual debugging.
Christian Couder Feb. 27, 2024, 10:04 a.m. UTC | #5
On Mon, Feb 26, 2024 at 7:58 PM Josh Steadmon <steadmon@google.com> wrote:
>
> On 2024.02.26 18:26, René Scharfe wrote:

> > The output is clean as well, but there's a lot of it.  Perhaps too much.
> > The success messages are boring, though, and if all checks pass then the
> > only useful information is the status code.  A TAP harness like prove
> > summarizes that nicely:
> >
> >    $ prove t/unit-tests/bin/t-ctype
> >    t/unit-tests/bin/t-ctype .. ok
> >    All tests successful.
> >    Files=1, Tests=3598,  0 wallclock secs ( 0.08 usr +  0.00 sys =  0.08 CPU)
> >    Result: PASS
> >
> > Filtering out passing checks e.g. with "| grep -v ^ok" would help when
> > debugging a test failure. I vaguely miss the --immediate switch from the
> > regular test library, however.
>
> Yeah, I agree here. It's a lot of output but it's almost always going to
> be consumed by a test harness rather than a human, and it's easy to
> filter out the noise if someone does need to do some manual debugging.

Yeah, I know about TAP harnesses like prove, but the most
straightforward way to run the unit tests is still `make unit-tests`
in the t/ directory. Also when you add or change some tests, it's a
good idea to run `make unit-tests` to see what the output is, so you
still have to see that output quite often when you work on tests, and
going through 3598 lines of mostly useless output instead of just 14
isn't nice.
René Scharfe March 2, 2024, 10 p.m. UTC | #6
On 27.02.24 at 11:04, Christian Couder wrote:
> On Mon, Feb 26, 2024 at 7:58 PM Josh Steadmon <steadmon@google.com> wrote:
>>
>> On 2024.02.26 18:26, René Scharfe wrote:
>
>>> The output is clean as well, but there's a lot of it.  Perhaps too much.
>>> The success messages are boring, though, and if all checks pass then the
>>> only useful information is the status code.  A TAP harness like prove
>>> summarizes that nicely:
>>>
>>>    $ prove t/unit-tests/bin/t-ctype
>>>    t/unit-tests/bin/t-ctype .. ok
>>>    All tests successful.
>>>    Files=1, Tests=3598,  0 wallclock secs ( 0.08 usr +  0.00 sys =  0.08 CPU)
>>>    Result: PASS
>>>
>>> Filtering out passing checks e.g. with "| grep -v ^ok" would help when
>>> debugging a test failure. I vaguely miss the --immediate switch from the
>>> regular test library, however.
>>
>> Yeah, I agree here. It's a lot of output but it's almost always going to
>> be consumed by a test harness rather than a human, and it's easy to
>> filter out the noise if someone does need to do some manual debugging.
>
> Yeah, I know about TAP harnesses like prove, but the most
> straightforward way to run the unit tests is still `make unit-tests`
> in the t/ directory. Also when you add or change some tests, it's a
> good idea to run `make unit-tests` to see what the output is, so you
> still have to see that output quite often when you work on tests, and
> going through 3598 lines of mostly useless output instead of just 14
> isn't nice.

I was starting the programs from t/unit-tests/bin/ individually because
I didn't know 'make unit-tests' exists.  This is much nicer, thank you!
Especially after adding 'DEFAULT_UNIT_TEST_TARGET = unit-tests-prove' to
config.mak to complement the 'DEFAULT_TEST_TARGET = prove' I added long
ago.  It would be even nicer if the former was the default when the
latter is set.
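
In other words, my config.mak now has both:

   DEFAULT_TEST_TARGET = prove
   DEFAULT_UNIT_TEST_TARGET = unit-tests-prove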

As unit tests are added, their output is surely going to grow to
multiple screens with or without prove, no?  So someone writing or
debugging tests will still go back to starting them individually
eventually.

The size of the output in itself is not a problem, I assume, but that
most of it is useless -- details of successful tests are uninteresting.
A test harness can aggregate the output, but prove annoyed me when used
with the regular tests by also aggregating error output and only showing
the numbers of failed tests.  Finding their names involved running the
test script again without prove.  Turns out it has an option for that.
Added 'GIT_PROVE_OPTS = --failures' to config.mak as well, will see if
it helps.

Is it too much to ask developers to use a test harness?  Perhaps: It's
yet another dependency and not enabled by default.

What's the right level of aggregation and how do we achieve it?
Grouping by class is natural and follows the test definition.  We
could stop after patch 2.  Dunno.

René
Christian Couder March 4, 2024, 10 a.m. UTC | #7
On Sat, Mar 2, 2024 at 11:00 PM René Scharfe <l.s.r@web.de> wrote:
>
> On 27.02.24 at 11:04, Christian Couder wrote:

> > Yeah, I know about TAP harnesses like prove, but the most
> > straightforward way to run the unit tests is still `make unit-tests`
> > in the t/ directory. Also when you add or change some tests, it's a
> > good idea to run `make unit-tests` to see what the output is, so you
> > still have to see that output quite often when you work on tests, and
> > going through 3598 lines of mostly useless output instead of just 14
> > isn't nice.
>
> I was starting the programs from t/unit-tests/bin/ individually because
> I didn't know 'make unit-tests' exists.  This is much nicer, thank you!
> Especially after adding 'DEFAULT_UNIT_TEST_TARGET = unit-tests-prove' to
> config.mak to complement the 'DEFAULT_TEST_TARGET = prove' I added long
> ago.  It would be even nicer if the former was the default when the
> latter is set.
>
> As unit tests are added, their output is surely going to grow to
> multiple screens with or without prove, no?  So someone writing or
> debugging tests will still go back to starting them individually
> eventually.

When t-ctype is run individually from t/unit-tests/bin/, for
example when adding or debugging ctype tests, it would still be better
if it produced only 14 lines of output rather than 3598.

> The size of the output in itself is not a problem, I assume, but that
> most of it is useless -- details of successful tests are uninteresting.
> A test harness can aggregate the output, but prove annoyed me when used
> with the regular tests by also aggregating error output and only showing
> the numbers of failed tests.  Finding their names involved running the
> test script again without prove.  Turns out it has an option for that.
> Added 'GIT_PROVE_OPTS = --failures' to config.mak as well, will see if
> it helps.
>
> Is it too much to ask developers to use a test harness?  Perhaps: It's
> yet another dependency and not enabled by default.

Yeah, it's a dependency, and when running CI tests, it's sometimes
better and simpler to have the canonical output rather than having the
output processed by a test harness.

Also if we add some verbose or immediate modes (like -v and -i with
the shell test framework) or perhaps other kinds of modes (checking
for memory leaks or other errors using existing tools for example),
these modes might not interact nicely with test harnesses but still be
useful. Requiring to always use a test harness might restrict us for
no good reason.

And anyway it doesn't make sense to have meaningful messages as second
arguments to the TEST() macros if we always want to use a test harness
that just discards them. Either:

  - we decide that we will always use some test harness, and then we
might just want to remove that second argument and yeah we can have
thousands of test output lines from a single binary, or
  - we acknowledge that we don't always want to use a test harness,
and then we want a relatively short and meaningful output from a
single binary.

> What's the right level of aggregation and how do we achieve it?
> Grouping by class is natural and follows the test definition.  We
> could stop after patch 2.  Dunno.

I am OK with just removing this patch, as you did in v2. Thanks.
Josh Steadmon March 4, 2024, 6:35 p.m. UTC | #8
On 2024.03.02 23:00, René Scharfe wrote:
> On 27.02.24 at 11:04, Christian Couder wrote:
> > On Mon, Feb 26, 2024 at 7:58 PM Josh Steadmon <steadmon@google.com> wrote:
> >>
> >> On 2024.02.26 18:26, René Scharfe wrote:
> >
> >>> The output is clean as well, but there's a lot of it.  Perhaps too much.
> >>> The success messages are boring, though, and if all checks pass then the
> >>> only useful information is the status code.  A TAP harness like prove
> >>> summarizes that nicely:
> >>>
> >>>    $ prove t/unit-tests/bin/t-ctype
> >>>    t/unit-tests/bin/t-ctype .. ok
> >>>    All tests successful.
> >>>    Files=1, Tests=3598,  0 wallclock secs ( 0.08 usr +  0.00 sys =  0.08 CPU)
> >>>    Result: PASS
> >>>
> >>> Filtering out passing checks e.g. with "| grep -v ^ok" would help when
> >>> debugging a test failure. I vaguely miss the --immediate switch from the
> >>> regular test library, however.
> >>
> >> Yeah, I agree here. It's a lot of output but it's almost always going to
> >> be consumed by a test harness rather than a human, and it's easy to
> >> filter out the noise if someone does need to do some manual debugging.
> >
> > Yeah, I know about TAP harnesses like prove, but the most
> > straightforward way to run the unit tests is still `make unit-tests`
> > in the t/ directory. Also when you add or change some tests, it's a
> > good idea to run `make unit-tests` to see what the output is, so you
> > still have to see that output quite often when you work on tests, and
> > going through 3598 lines of mostly useless output instead of just 14
> > isn't nice.
> 
> I was starting the programs from t/unit-tests/bin/ individually because
> I didn't know 'make unit-tests' exists.  This is much nicer, thank you!
> Especially after adding 'DEFAULT_UNIT_TEST_TARGET = unit-tests-prove' to
> config.mak to complement the 'DEFAULT_TEST_TARGET = prove' I added long
> ago.  It would be even nicer if the former was the default when the
> latter is set.

After js/unit-test-suite-runner [1] is merged, using
'DEFAULT_TEST_TARGET = prove' will also run the unit tests alongside the
shell test suite.

[1] https://lore.kernel.org/git/cover.1708728717.git.steadmon@google.com/
Junio C Hamano March 4, 2024, 6:46 p.m. UTC | #9
Josh Steadmon <steadmon@google.com> writes:

> On 2024.03.02 23:00, René Scharfe wrote:
> ...
>> I was starting the programs from t/unit-tests/bin/ individually because
>> I didn't know 'make unit-tests' exists.  This is much nicer, thank you!
>> Especially after adding 'DEFAULT_UNIT_TEST_TARGET = unit-tests-prove' to
>> config.mak to complement the 'DEFAULT_TEST_TARGET = prove' I added long
>> ago.  It would be even nicer if the former was the default when the
>> latter is set.
>
> After js/unit-test-suite-runner [1] is merged, then using
> 'DEFAULT_TEST_TARGET = prove' will also run the unit tests alongside the
> shell test suite.
>
> [1] https://lore.kernel.org/git/cover.1708728717.git.steadmon@google.com/

Nice ;-).
René Scharfe March 6, 2024, 6:16 p.m. UTC | #10
On 04.03.24 at 11:00, Christian Couder wrote:
> And anyway it doesn't make sense to have meaningful messages as second
> arguments to the TEST() macros if we always want to use a test harness
> that just discards them. Either:
>
>   - we decide that we will always use some test harness, and then we
> might just want to remove that second argument and yeah we can have
> thousands of test output lines from a single binary, or
>   - we acknowledge that we don't always want to use a test harness,
> and then we want a relatively short and meaningful output from a
> single binary.

Different situations require different levels of detail.  If all checks
pass, we just need that one bit of information.  If some fail, we need
as much helpful context as we can get.  Success messages are interesting
for someone who added a new test and as a progress indicator, but are
useless otherwise (to me at least).

Perhaps it would make sense to show ok messages only after a delay when
writing to a terminal, similar to what progress.c does?  This would
effectively silence them, since our unit tests currently take only a
fraction of a second to finish.
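
Rough sketch of what I mean, completely untested and with made-up names
rather than the actual test-lib.c interface:

   #include <stdio.h>
   #include <time.h>
   #include <unistd.h>

   static struct timespec start_time;        /* set once at program start */
   static const double show_ok_delay = 2.0;  /* arbitrary threshold */

   /* print an "ok" line only when piped, or once the run has taken a while */
   static void maybe_print_ok(int number, const char *description)
   {
           struct timespec now;
           double elapsed;

           clock_gettime(CLOCK_MONOTONIC, &now);
           elapsed = (now.tv_sec - start_time.tv_sec) +
                     (now.tv_nsec - start_time.tv_nsec) / 1e9;
           if (!isatty(1) || elapsed >= show_ok_delay)
                   printf("ok %d - %s\n", number, description);
   }

A fast run on a terminal would then print no ok lines at all, while a
harness reading from a pipe would still get the full TAP stream.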

René

Patch

diff --git a/t/unit-tests/t-ctype.c b/t/unit-tests/t-ctype.c
index 56dfefb68e..aa728175a6 100644
--- a/t/unit-tests/t-ctype.c
+++ b/t/unit-tests/t-ctype.c
@@ -1,17 +1,12 @@ 
 #include "test-lib.h"

 #define TEST_CHAR_CLASS(class, string) do { \
-	int skip = test__run_begin(); \
-	if (!skip) { \
-		for (int i = 0; i < 256; i++) { \
-			int expect = !!memchr(string, i, sizeof(string) - 1); \
-			if (!check_int(class(i), ==, expect)) \
-				test_msg("       i: 0x%02x", i); \
-		} \
-		if (!check(!class(EOF))) \
-			test_msg("      i: 0x%02x (EOF)", EOF); \
+	for (int i = 0; i < 256; i++) { \
+		int expect = !!memchr(string, i, sizeof(string) - 1); \
+		TEST(check_int(class(i), ==, expect), \
+		     #class "(0x%02x) works", i); \
 	} \
-	test__run_end(!skip, TEST_LOCATION(), #class " works"); \
+	TEST(check(!class(EOF)), #class "(EOF) works"); \
 } while (0)

 #define DIGIT "0123456789"