Message ID: 1464366473-14046-1-git-send-email-marius.c.vlad@intel.com (mailing list archive)
State: New, archived
On Fri, May 27, 2016 at 07:27:46PM +0300, Marius Vlad wrote:

You are lacking an explanation. Please tell me what this test is about
and why it is not suitable criteria for a basic acceptance test.
-Chris
The explanation is the same as in the previous series: the GEM tests are
taking too long. Either I move them under the nightly runs or decrease the
runtime. As new tests are added, it will take too long for BAT to provide
meaningful output. There are platforms that already reach the timeout (of
15 minutes), and the slowest platform dictates the runtime of the entire
CI system, as we wait and collect the results from all of them. Not being
able to complete in under 5 minutes makes BAT useless. And more than once
you have mentioned that, even at their current runtime, some GEM tests
need a higher runtime to trigger relevant bugs, which in your own words
makes it sensible to run them for (some) extended period of time. Even
though running once per night has some disadvantages, if the tests are
reliable it won't be that hard to catch the regression(s).

On Fri, May 27, 2016 at 05:37:15PM +0100, Chris Wilson wrote:
> On Fri, May 27, 2016 at 07:27:46PM +0300, Marius Vlad wrote:
>
> You are lacking an explanation. Please tell me what this test is about
> and why it is not suitable criteria for a basic acceptance test.
> -Chris
>
> --
> Chris Wilson, Intel Open Source Technology Centre
On Mon, May 30, 2016 at 01:44:52PM +0300, Marius Vlad wrote:
> The explanation is the same as in the previous series: the GEM tests are
> taking too long. Either I sack them under nightly runs or decrease the
> runtime. As new tests are added, it will take too long to provide
> meaningful output from BAT. There are platforms that reach the timeout
> (of 15minutes), and the slowest platform is the one that provides the
> runtime for the entire CI system (as we wait and collect the results
> from all of them).

I am all for improving BAT; dropping tests is not acceptable imo.
Replacing them with equivalent-or-better tests that run quickly should be
the goal. Otherwise, you get into the same situation as with the nightly
runs (which already exclude gem_concurrent_blit despite it being one of
the few tools that catch basic errors in GEM).
-Chris
On Mon, May 30, 2016 at 11:50:00AM +0100, Chris Wilson wrote:
> On Mon, May 30, 2016 at 01:44:52PM +0300, Marius Vlad wrote:
> > The explanation is the same as in the previous series: the GEM tests are
> > taking too long. Either I sack them under nightly runs or decrease the
> > runtime. As new tests are added, it will take too long to provide
> > meaningful output from BAT. There are platforms that reach the timeout
> > (of 15minutes), and the slowest platform is the one that provides the
> > runtime for the entire CI system (as we wait and collect the results
> > from all of them).
>
> I am all for improving BAT, dropping tests is not acceptable imo.
> Replacing them with equivalent-or-better that run quickly should be the
> goal. Otherwise, you get into the same situation with the nightly runs
> (that already exclude gem_concurrent_blit despite it being one of the
> few tools that catch basic errors in GEM).

I'm far from being the expert here to work on improving them, hence my
suggestion to tune down the execution time as a first approach. The
consensus is to have some sort of workaround within an acceptable time
frame. Improving them, on different platforms, is an endeavour in itself.

I have another suggestion, similar to the nightly runs: add an extended
namespace for GEM tests that require a longer runtime, and run them
before Nightly kicks in. Basically, have a shorter runtime for BAT and a
longer runtime (under that extended name) for extended runs.

Atm I'm timing gem_concurrent_blit on a couple of platforms to see how
much time it normally takes. If it is under a couple of hours, that would
be acceptable. Hoping that segregating the GEM tests would remove the
noise I'm seeing in the Nightly runs.

> -Chris
>
> --
> Chris Wilson, Intel Open Source Technology Centre
> _______________________________________________
> Intel-gfx mailing list
> Intel-gfx@lists.freedesktop.org
> https://lists.freedesktop.org/mailman/listinfo/intel-gfx
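[Editor's note] The "extended namespace" idea above could be sketched as a
simple prefix-to-time-budget mapping. This is a hypothetical illustration,
not part of any committed IGT change: the "extended-" prefix and the
timeout values are assumptions drawn only from the discussion (BAT under
5 minutes, extended runs up to a couple of hours).

```c
#include <string.h>

/* Hypothetical sketch: pick a time budget for a subtest from its name
 * prefix. "extended-" marks long-running GEM tests that would run before
 * Nightly; "basic-" marks BAT subtests, which must stay well under the
 * 5-minute BAT budget. All values are illustrative assumptions. */
static int subtest_timeout_secs(const char *name)
{
	if (strncmp(name, "extended-", 9) == 0)
		return 2 * 60 * 60;	/* up to a couple of hours */
	if (strncmp(name, "basic-", 6) == 0)
		return 60;		/* keep BAT fast */
	return 5 * 60;			/* default for full runs */
}
```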
diff --git a/tests/gem_exec_flush.c b/tests/gem_exec_flush.c
index d08b843..2f271bb 100644
--- a/tests/gem_exec_flush.c
+++ b/tests/gem_exec_flush.c
@@ -532,18 +532,15 @@ igt_main
 		}
 
 		for (const struct batch *b = batches; b->name; b++) {
-			igt_subtest_f("%sbatch-%s-%s-uc",
-				      b == batches && e->exec_id == 0 ? "basic-" : "",
+			igt_subtest_f("batch-%s-%s-uc",
 				      b->name,
 				      e->name)
 				batch(fd, ring, ncpus, timeout, b->mode, 0);
-			igt_subtest_f("%sbatch-%s-%s-wb",
-				      b == batches && e->exec_id == 0 ? "basic-" : "",
+			igt_subtest_f("batch-%s-%s-wb",
 				      b->name,
 				      e->name)
 				batch(fd, ring, ncpus, timeout, b->mode, COHERENT);
-			igt_subtest_f("%sbatch-%s-%s-cmd",
-				      b == batches && e->exec_id == 0 ? "basic-" : "",
+			igt_subtest_f("batch-%s-%s-cmd",
 				      b->name,
 				      e->name)
 				batch(fd, ring, ncpus, timeout, b->mode,
@@ -551,8 +548,7 @@ igt_main
 		}
 
 		for (const struct mode *m = modes; m->name; m++) {
-			igt_subtest_f("%suc-%s-%s",
-				      (m->flags & BASIC && e->exec_id == 0) ? "basic-" : "",
+			igt_subtest_f("uc-%s-%s",
 				      m->name,
 				      e->name)
 				run(fd, ring, ncpus, timeout,
@@ -564,8 +560,7 @@ igt_main
 				run(fd, ring, ncpus, timeout,
 				    UNCACHED | m->flags |
 				    INTERRUPTIBLE);
-			igt_subtest_f("%swb-%s-%s",
-				      e->exec_id == 0 ? "basic-" : "",
+			igt_subtest_f("wb-%s-%s",
 				      m->name,
 				      e->name)
 				run(fd, ring, ncpus, timeout,
Signed-off-by: Marius Vlad <marius.c.vlad@intel.com>
---
 tests/gem_exec_flush.c | 15 +++++----------
 1 file changed, 5 insertions(+), 10 deletions(-)
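[Editor's note] For context on the diff: the removed ternary prepended
"basic-" to the subtest name only for the first batch mode on the default
ring (exec_id == 0), which is what marked those subtests for BAT. A
stand-alone mirror of that naming logic (plain snprintf, no IGT API; the
"kernel"/"render" names used in the test below are illustrative only):

```c
#include <stdio.h>

/* Sketch of the pre-patch naming scheme: the first batch on the default
 * ring got a "basic-" prefix; every other combination did not. */
static void subtest_name(char *buf, size_t len, int is_first_batch,
			 unsigned int exec_id, const char *batch,
			 const char *ring, const char *cache)
{
	snprintf(buf, len, "%sbatch-%s-%s-%s",
		 is_first_batch && exec_id == 0 ? "basic-" : "",
		 batch, ring, cache);
}
```

The patch drops the prefix entirely, so none of these subtests are
selected as BAT "basic-" tests any more, which is exactly the point of
contention in the thread above.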