
[3/3] tests/acceptance: Add boot linux with kvm test

Message ID 20190628150217.32659-4-wainersm@redhat.com (mailing list archive)
State New, archived
Series Acceptance tests: boot Linux with KVM test

Commit Message

Wainer dos Santos Moschetta June 28, 2019, 3:02 p.m. UTC
Until now the suite of acceptance tests hasn't exercised
QEMU with KVM enabled. So this introduces a simple test
that boots the Linux kernel and checks that it boots
correctly on the accelerator.

Signed-off-by: Wainer dos Santos Moschetta <wainersm@redhat.com>
---
 tests/acceptance/kvm.py | 58 +++++++++++++++++++++++++++++++++++++++++
 1 file changed, 58 insertions(+)
 create mode 100644 tests/acceptance/kvm.py

Comments

Eduardo Habkost June 28, 2019, 8:18 p.m. UTC | #1
On Fri, Jun 28, 2019 at 11:02:17AM -0400, Wainer dos Santos Moschetta wrote:
> Until now the suite of acceptance tests hasn't exercised
> QEMU with KVM enabled. So this introduces a simple test
> that boots the Linux kernel and checks that it boots
> correctly on the accelerator.
> 
> Signed-off-by: Wainer dos Santos Moschetta <wainersm@redhat.com>

Why not just change the existing test_x86_64_pc() test case to
use KVM by default?  We can use "accel=kvm:tcg" to allow it to
fall back to TCG if KVM is not available.
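
For illustration, the change suggested above might look roughly like the
sketch below, assuming the shape of the existing test_x86_64_pc() in
tests/acceptance/boot_linux_console.py; the kernel URL, hash and console
arguments are copied from the patch in this series, while the exact method
body and tags are assumptions, not code from either place:

    # Sketch only: reuse the existing boot test, but ask QEMU to prefer
    # KVM and fall back to TCG via the "accel=kvm:tcg" machine property.
    # Assumes the avocado_qemu.Test class context of the existing test.
    def test_x86_64_pc(self):
        """
        :avocado: tags=arch:x86_64
        :avocado: tags=machine:pc
        """
        kernel_url = ('https://download.fedoraproject.org/pub/fedora/linux/'
                      'releases/29/Everything/x86_64/os/images/pxeboot/vmlinuz')
        kernel_hash = '23bebd2680757891cf7adedb033532163a792495'
        kernel_path = self.fetch_asset(kernel_url, asset_hash=kernel_hash)

        self.vm.set_console()
        self.vm.add_args('-machine', 'pc,accel=kvm:tcg',
                         '-kernel', kernel_path,
                         '-append', 'printk.time=0 console=ttyS0')
        self.vm.launch()
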
Cleber Rosa June 30, 2019, 5:39 p.m. UTC | #2
On Fri, Jun 28, 2019 at 05:18:46PM -0300, Eduardo Habkost wrote:
> On Fri, Jun 28, 2019 at 11:02:17AM -0400, Wainer dos Santos Moschetta wrote:
> > Until now the suite of acceptance tests hasn't exercised
> > QEMU with KVM enabled. So this introduces a simple test
> > that boots the Linux kernel and checks that it boots
> > correctly on the accelerator.
> > 
> > Signed-off-by: Wainer dos Santos Moschetta <wainersm@redhat.com>
> 
> Why not just change the existing test_x86_64_pc() test case to
> use KVM by default?  We can use "accel=kvm:tcg" to allow it to
> fall back to TCG if KVM is not available.
> 
> -- 
> Eduardo

I thought of something similar, but not exactly the same.  An example
can be seen here:

  https://travis-ci.org/clebergnu/qemu/jobs/551437429#L3350

IMO, it's a good practice to be able to briefly describe what a test
does, given its name.  It's also very important for the test to
attempt to exercise the same behavior across executions.

I'm saying that because I don't think we should fall back to TCG if
KVM is not available, but instead have two different tests that each
do a simpler and more predictable set of checks.  This would make it
simpler to find KVM issues when a given test fails but the TCG one
continues to pass.  Tags (and other mechanisms) can still be used to
select the tests that a given job should run.

Regards!
- Cleber.
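
A rough sketch of the "two explicit tests" alternative described above,
with a shared helper to limit duplication; the class name, helper name and
tags are illustrative assumptions, not part of this series:

    # Sketch only: one boot test per accelerator, sharing a helper so
    # that each test name maps to a single, predictable behavior.
    from avocado_qemu import Test


    class BootLinux(Test):

        KERNEL_URL = ('https://download.fedoraproject.org/pub/fedora/linux/'
                      'releases/29/Everything/x86_64/os/images/pxeboot/vmlinuz')
        KERNEL_HASH = '23bebd2680757891cf7adedb033532163a792495'

        def do_boot(self, accel):
            kernel_path = self.fetch_asset(self.KERNEL_URL,
                                           asset_hash=self.KERNEL_HASH)
            self.vm.set_console()
            self.vm.add_args('-machine', 'pc,accel=%s' % accel,
                             '-kernel', kernel_path,
                             '-append', 'printk.time=0 console=ttyS0')
            self.vm.launch()

        def test_x86_64_pc(self):
            """
            :avocado: tags=arch:x86_64
            :avocado: tags=accel:tcg
            """
            self.do_boot('tcg')

        def test_x86_64_pc_kvm(self):
            """
            :avocado: tags=arch:x86_64
            :avocado: tags=accel:kvm
            """
            self.do_boot('kvm')

A CI job could then pick one of them through Avocado's tag filtering, for
example "avocado run -t accel:kvm tests/acceptance/".
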
Eduardo Habkost July 1, 2019, 6:34 p.m. UTC | #3
On Sun, Jun 30, 2019 at 01:39:33PM -0400, Cleber Rosa wrote:
> On Fri, Jun 28, 2019 at 05:18:46PM -0300, Eduardo Habkost wrote:
> > On Fri, Jun 28, 2019 at 11:02:17AM -0400, Wainer dos Santos Moschetta wrote:
> > > Until now the suite of acceptance tests hasn't exercised
> > > QEMU with KVM enabled. So this introduces a simple test
> > > that boots the Linux kernel and checks that it boots
> > > correctly on the accelerator.
> > > 
> > > Signed-off-by: Wainer dos Santos Moschetta <wainersm@redhat.com>
> > 
> > Why not just change the existing test_x86_64_pc() test case to
> > use KVM by default?  We can use "accel=kvm:tcg" to allow it to
> > fall back to TCG if KVM is not available.
> > 
> > -- 
> > Eduardo
> 
> I thought of something similar, but not exactly the same.  An example
> can be seen here:
> 
>   https://travis-ci.org/clebergnu/qemu/jobs/551437429#L3350
> 
> IMO, it's a good practice to be able to briefly describe what a test
> does, given its name.  It's also very important for the test to
> attempt to exercise the same behavior across executions.
> 
> I'm saying that because I don't think we should fall back to TCG if
> KVM is not available, but instead have two different tests that each
> do a simpler and more predictable set of checks.  This would make it
> simpler to find KVM issues when a given test fails but the TCG one
> continues to pass.  Tags (and other mechanisms) can still be used to
> select the tests that a given job should run.

Agreed that the kvm:tcg fallback I suggested isn't a good idea.
However, do we really want to require a separate test method to
be written just because we want to use a different accelerator or
another QEMU option?

This patch may be the simplest solution in the short term, but can we
have something that doesn't require so much code duplication and
boilerplate in the future?
Cleber Rosa July 1, 2019, 8:29 p.m. UTC | #4
On Mon, Jul 01, 2019 at 03:34:51PM -0300, Eduardo Habkost wrote:
> 
> Agreed that the kvm:tcg fallback I suggested isn't a good idea.
> However, do we really want to require a separate test method to
> be written just because we want to use a different accelerator or
> another QEMU option?
>

No, in the short term we want to have tests that can respond to a
number of well-known parameters, such as "accel".  But to actually
have tests (names) that are meaningful enough, we need to:

 1) Add a varianter implementation (or usage)
 2) Drop the duplicate tests

#1 is needed because:

 a) it doesn't feel right to name tests based on simple command-line
    parameters (the ones given with -p; say, "-p accel=kvm" would add
    "accel_kvm" to the test name).

 b) a variant *name* is added to the test ID, which then can be
    kept consistent.

Then we can proceed to #2, and drop the duplicate tests, say:

 - test_x86_64_pc, test_x86_64_pc_kvm => test_x86_64_pc

On a further iteration, it may even make sense to consolidate:

 - test_x86_64_pc, test_x86_64_q35 => test_x86_64

Time will tell.

> This patch may be the simplest solution in the short term, but can we
> have something that doesn't require so much code duplication and
> boilerplate in the future?

Yes, the new implementation of the Varianter CIT is now generally
available in Avocado 70.0, so I'm working on a file that hopefully
will suit the acceptance tests.

> 
> -- 
> Eduardo

Best,
- Cleber.
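
As an illustration of the "well-known parameters" idea above, a single
test method could pick up an "accel" value from the varianter (or from
"-p accel=kvm" on the avocado command line) roughly as below; the
parameter name, its default and the use of "-accel" are assumptions, not
something defined by this series:

    # Sketch only: a single test responding to a well-known "accel"
    # parameter; assumes the avocado_qemu.Test class context.
    def test_boot(self):
        """
        :avocado: tags=arch:x86_64
        """
        accel = self.params.get('accel', default='tcg')
        self.vm.set_machine('pc')
        self.vm.set_console()
        self.vm.add_args('-accel', accel)
        self.vm.launch()
        # The guest-visible accelerator should match what was requested.
        self.assertEqual(self.vm.command('query-kvm')['enabled'],
                         accel == 'kvm')
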
Wainer dos Santos Moschetta July 5, 2019, 3:43 p.m. UTC | #5
On 07/01/2019 05:29 PM, Cleber Rosa wrote:
> On Mon, Jul 01, 2019 at 03:34:51PM -0300, Eduardo Habkost wrote:
>> Agreed that the kvm:tcg fallback I suggested isn't a good idea.
>> However, do we really want to require a separate test method to
>> be written just because we want to use a different accelerator or
>> another QEMU option?
>>
> No, in the short term we want to have tests that can respond to a
> number of well-known parameters, such as "accel".  But to actually
> have tests (names) that are meaningful enough, we need to:
>
>   1) Add a varianter implementation (or usage)
>   2) Drop the duplicate tests
>
> #1 is needed because:
>
>   a) it doesn't feel right to name tests based on simple command-line
>      parameters (the ones given with -p; say, "-p accel=kvm" would add
>      "accel_kvm" to the test name).
>
>   b) a variant *name* is added to the test ID, which then can be
>      kept consistent.
>
> Then we can proceed to #2, and drop the duplicate tests, say:
>
>   - test_x86_64_pc, test_x86_64_pc_kvm => test_x86_64_pc
>
> On a further iteration, it may even make sense to consolidate:
>
>   - test_x86_64_pc, test_x86_64_q35 => test_x86_64
>
> Time will tell.
>
>> This patch may be the simplest solution in the short term, but can we
>> have something that doesn't require so much code duplication and
>> boilerplate in the future?
> Yes, the new implementation of the Varianter CIT is now generally
> available in Avocado 70.0, so I'm working on a file that hopefully
> will suit the acceptance tests.

Cleber, Eduardo, that's a good discussion. I was expecting we would
eventually evolve the acceptance tests to use the Avocado varianter
feature.

Now the question is what to do with this series.

I can see the usefulness of patch 01 when you want to, for example,
start QEMU expecting a crash, or just want to gather information from
the command line (qemu -cpu help, qemu -accel help, etc.).

The implementation in patch 02 to check the availability of
accelerators seems desirable even in this (near, maybe) future where
tests can be automatically derived.

Thus, can we merge only patches 01 and 02? Of course, provided they
pass review.

Thanks!

- Wainer

>
>> -- 
>> Eduardo
> Best,
> - Cleber.
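
For reference, one possible way to implement the accelerator availability
check mentioned above is to parse the binary's "-accel help" output, as in
the sketch below; patch 02's actual approach in this series may well
differ, and the binary path in the usage comment is hypothetical:

    # Sketch only: probe which accelerators a QEMU binary supports by
    # parsing "-accel help"; not taken from patch 02.
    import subprocess

    def list_accels(qemu_bin):
        out = subprocess.check_output([qemu_bin, '-accel', 'help'],
                                      universal_newlines=True)
        # The first line is a header along the lines of
        # "Accelerators supported in QEMU binary:", the rest are names.
        return [line.strip() for line in out.splitlines()[1:] if line.strip()]

    # Example use from a test, cancelling when KVM is not built in:
    #   if 'kvm' not in list_accels('x86_64-softmmu/qemu-system-x86_64'):
    #       self.cancel('KVM accelerator not available')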

Patch

diff --git a/tests/acceptance/kvm.py b/tests/acceptance/kvm.py
new file mode 100644
index 0000000000..aafb865cdb
--- /dev/null
+++ b/tests/acceptance/kvm.py
@@ -0,0 +1,58 @@ 
+# KVM acceptance tests.
+#
+# Copyright (c) 2019 Red Hat, Inc.
+#
+# Author:
+#  Wainer dos Santos Moschetta <wainersm@redhat.com>
+#
+# This work is licensed under the terms of the GNU GPL, version 2 or
+# later.  See the COPYING file in the top-level directory.
+
+import logging
+
+from avocado_qemu import Test
+
+
+class Kvm(Test):
+    """
+    Suite of acceptance tests to check QEMU and KVM integration.
+    """
+
+    def test_boot_linux(self):
+        """
+        Simple Linux boot test with kvm enabled.
+
+        :avocado: tags=arch:x86_64
+        :avocado: tags=accel:kvm
+        """
+        self.vm.add_args('-enable-kvm')
+        kernel_url = ('https://download.fedoraproject.org/pub/fedora/linux/'
+                      'releases/29/Everything/x86_64/os/images/pxeboot/vmlinuz')
+        kernel_hash = '23bebd2680757891cf7adedb033532163a792495'
+        kernel_path = self.fetch_asset(kernel_url, asset_hash=kernel_hash)
+
+        self.vm.set_machine('pc')
+        self.vm.set_console()
+        self.vm.add_args('-kernel', kernel_path,
+                         '-append', 'printk.time=0 console=ttyS0')
+        self.vm.launch()
+
+        query = self.vm.command('query-kvm')
+        self.assertTrue(query['enabled'])
+        self.assertTrue(query['present'])
+
+        console = self.vm.console_socket.makefile()
+        console_logger = logging.getLogger('console')
+        failure_message = 'Kernel panic - not syncing'
+        success_message = 'Booting paravirtualized kernel on KVM'
+
+        while True:
+            msg = console.readline().strip()
+            if not msg:
+                continue
+            console_logger.debug(msg)
+            if success_message in msg:
+                break
+            if failure_message in msg:
+                fail = 'Failure message found in console: %s' % failure_message
+                self.fail(fail)