
[v2,2/2] iotests/149: Skip on unsupported ciphers

Message ID 20211117151707.52549-3-hreitz@redhat.com (mailing list archive)
State New, archived
Series iotests: Fix crypto algorithm failures

Commit Message

Hanna Czenczek Nov. 17, 2021, 3:17 p.m. UTC
Whenever qemu-img or qemu-io reports that some cipher is unsupported,
skip the whole test; the most probable cause is that qemu has been
configured with the gnutls crypto backend.

We could tailor the algorithm list to what gnutls supports, but this is
a test that is run rather rarely anyway (because it requires
password-less sudo), and so it seems better and easier to skip it.  When
this test is intentionally run to check LUKS compatibility, it seems
better not to limit the algorithms but keep the list extensive.

Signed-off-by: Hanna Reitz <hreitz@redhat.com>
---
 tests/qemu-iotests/149 | 23 ++++++++++++++++++-----
 1 file changed, 18 insertions(+), 5 deletions(-)

Comments

Daniel P. Berrangé Nov. 17, 2021, 3:46 p.m. UTC | #1
On Wed, Nov 17, 2021 at 04:17:07PM +0100, Hanna Reitz wrote:
> Whenever qemu-img or qemu-io reports that some cipher is unsupported,
> skip the whole test; the most probable cause is that qemu has been
> configured with the gnutls crypto backend.
> 
> We could tailor the algorithm list to what gnutls supports, but this is
> a test that is run rather rarely anyway (because it requires
> password-less sudo), and so it seems better and easier to skip it.  When
> this test is intentionally run to check LUKS compatibility, it seems
> better not to limit the algorithms but keep the list extensive.

I'd really like to figure out a way to be able to partially run
this test. When I have hit problems in the past, I needed to
run specific tests, but then the expected output always contains
everything.  I've thought of a few options

 - Split it into many standalone tests - eg
     tests/qemu-iotests/tests/luks-host-$ALG

 - Split only the expected output eg
 
     149-$SUBTEST

  and have a way to indicate which of the expected output files
  we need to concatenate for the set of subtests that we
  run.

 - Introduce some template syntax in the expected output
   that can be used to munge the output.

 - Stop comparing expected output entirely and just
   turn this into a normal Python unit test.

 - Insert your idea here ?
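
A minimal sketch of the "149-$SUBTEST" idea, assuming per-subtest
reference files named 149.out.$SUBTEST; the file naming and the helper
below are purely illustrative, not existing iotests infrastructure:

import sys

def build_expected_output(subtests, outdir="tests/qemu-iotests"):
    """Concatenate the per-subtest reference files, in run order, into
    the expected output for this particular invocation."""
    chunks = []
    for subtest in subtests:
        with open(f"{outdir}/149.out.{subtest}", encoding="utf-8") as f:
            chunks.append(f.read())
    return "".join(chunks)

if __name__ == "__main__":
    # e.g.: build_expected.py aes-256-xts twofish-256-xts > 149.out
    sys.stdout.write(build_expected_output(sys.argv[1:]))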

> 
> Signed-off-by: Hanna Reitz <hreitz@redhat.com>
> ---
>  tests/qemu-iotests/149 | 23 ++++++++++++++++++-----
>  1 file changed, 18 insertions(+), 5 deletions(-)

Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>


Regards,
Daniel
Hanna Czenczek Nov. 18, 2021, 3:53 p.m. UTC | #2
On 17.11.21 16:46, Daniel P. Berrangé wrote:
> On Wed, Nov 17, 2021 at 04:17:07PM +0100, Hanna Reitz wrote:
>> Whenever qemu-img or qemu-io reports that some cipher is unsupported,
>> skip the whole test; the most probable cause is that qemu has been
>> configured with the gnutls crypto backend.
>>
>> We could tailor the algorithm list to what gnutls supports, but this is
>> a test that is run rather rarely anyway (because it requires
>> password-less sudo), and so it seems better and easier to skip it.  When
>> this test is intentionally run to check LUKS compatibility, it seems
>> better not to limit the algorithms but keep the list extensive.
> I'd really like to figure out a way to be able to partially run
> this test. When I have hit problems in the past, I needed to
> run specific tests, but then the expected output always contains
> everything.  I've thought of a few options
>
>   - Split it into many standalone tests - eg
>       tests/qemu-iotests/tests/luks-host-$ALG

I wouldn’t hate it, though we should have a common file from which 
shared code can be sourced.

>   - Split only the expected output eg
>   
>       149-$SUBTEST
>
>    and have a way to indicate which of the expected output files
>    we need to concatenate for the set of subtests that we
>    run.

I’d prefer it if the test could verify its own output so that the 
reference output is basically just the usual unittest output of dots, 
“Ran XX tests” and “OK”.

(Two reasons: You can then easily disable some tests with the reference 
output changing only slightly; and it makes reviewing a test much easier 
because then I don’t need to verify the reference output...)
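
For illustration, a self-verifying case in that style could look roughly
like this; the class, test name and cipher choice are made up, and only
the usual iotests helpers (QMPTestCase, qemu_img_pipe, file_path, main)
are assumed:

import iotests
from iotests import qemu_img_pipe

class TestLuksCreate(iotests.QMPTestCase):
    def test_twofish_256_xts(self):
        # The test checks the command output itself instead of relying
        # on a reference output file.
        img = iotests.file_path('twofish.img')
        out = qemu_img_pipe('create', '-f', 'luks',
                            '--object', 'secret,id=sec0,data=123456',
                            '-o', 'key-secret=sec0,cipher-alg=twofish-256,'
                                  'cipher-mode=xts',
                            img, '1M')
        if 'Unsupported cipher algorithm' in out:
            self.skipTest('cipher not supported by this crypto backend')
        # qemu-img prefixes its error messages with "qemu-img:"
        self.assertNotIn('qemu-img:', out)

if __name__ == '__main__':
    iotests.main(supported_fmts=['luks'])

With that, the reference output is indeed just the dots, the "Ran XX
tests" line and "OK".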

>   - Introduce some template syntax in the expected output
>     that can be used to munge the output.
>
>   - Stop comparing expected output entirely and just
>     turn this into a normal Python unit test.

That’s something that might indeed be useful for unittest-style iotests.

Then again, we already allow them to skip any test case and it will be 
counted as success; is that not sufficient?
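
As a reminder, the mechanism referred to here is standard unittest
skipping; a hypothetical example with invented names:

import unittest
import iotests

class TestCiphers(iotests.QMPTestCase):
    @unittest.skip('twofish not supported by the gnutls backend')
    def test_twofish_256_xts(self):
        pass  # never executed; reported as a skip, not a failure

if __name__ == '__main__':
    iotests.main(supported_fmts=['luks'])

The skipped case does not fail the run; it is counted as a success.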

>   - Insert your idea here ?

I personally most prefer unittest-style tests, because with them you can 
just %s/def test_/def xtest_/, then reverse this change for all the 
cases you want to run, and then adjust the reference output to match the 
number of tests run.

So I suppose the best idea I have is to convert this test into unittest 
style, and then it should be more modular when it comes to what subtests 
it wants to run.

I mean, it doesn’t have to truly be an iotests.QMPTestCase.  It would be 
sufficient if the test itself verified the output of every command it 
invokes (instead of leaving that to a separate reference output file) 
and then printed something like “OK” afterwards.  Then we could 
trivially skip some cases just by printing “OK” even if they weren’t run.
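
A rough sketch of what that could look like, using the existing
iotests.log() and iotests.qemu_img_pipe() helpers; the helper name and
the per-case log format are invented for illustration:

import iotests

def run_and_check(config_name, *img_args):
    """Run qemu-img, verify its output inside the test itself, and log
    only a stable per-case line so the reference output stays trivial."""
    out = iotests.qemu_img_pipe(*img_args)
    if 'Unsupported cipher algorithm' in out:
        # A skipped case prints the same line, so the reference output
        # does not change when the crypto backend lacks a cipher.
        iotests.log(f'{config_name}: OK')
        return
    # qemu-img prefixes its error messages with "qemu-img:"
    assert 'qemu-img:' not in out, f'{config_name}: unexpected output: {out}'
    iotests.log(f'{config_name}: OK')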

Hanna

Patch

diff --git a/tests/qemu-iotests/149 b/tests/qemu-iotests/149
index 328fd05a4c..d49646ca60 100755
--- a/tests/qemu-iotests/149
+++ b/tests/qemu-iotests/149
@@ -230,6 +230,18 @@  def create_image(config, size_mb):
         fn.truncate(size_mb * 1024 * 1024)
 
 
+def check_cipher_support(config, output):
+    """Check the output of qemu-img or qemu-io for mention of the respective
+    cipher algorithm being unsupported, and if so, skip this test.
+    (Returns `output` for convenience.)"""
+
+    if 'Unsupported cipher algorithm' in output:
+        iotests.notrun('Unsupported cipher algorithm '
+                       f'{config.cipher}-{config.keylen}-{config.mode}; '
+                       'consider configuring qemu with a different crypto '
+                       'backend')
+    return output
+
 def qemu_img_create(config, size_mb):
     """Create and format a disk image with LUKS using qemu-img"""
 
@@ -253,7 +265,8 @@  def qemu_img_create(config, size_mb):
             "%dM" % size_mb]
 
     iotests.log("qemu-img " + " ".join(args), filters=[iotests.filter_test_dir])
-    iotests.log(iotests.qemu_img_pipe(*args), filters=[iotests.filter_test_dir])
+    iotests.log(check_cipher_support(config, iotests.qemu_img_pipe(*args)),
+                filters=[iotests.filter_test_dir])
 
 def qemu_io_image_args(config, dev=False):
     """Get the args for access an image or device with qemu-io"""
@@ -279,8 +292,8 @@  def qemu_io_write_pattern(config, pattern, offset_mb, size_mb, dev=False):
     args = ["-c", "write -P 0x%x %dM %dM" % (pattern, offset_mb, size_mb)]
     args.extend(qemu_io_image_args(config, dev))
     iotests.log("qemu-io " + " ".join(args), filters=[iotests.filter_test_dir])
-    iotests.log(iotests.qemu_io(*args), filters=[iotests.filter_test_dir,
-                                                 iotests.filter_qemu_io])
+    iotests.log(check_cipher_support(config, iotests.qemu_io(*args)),
+                filters=[iotests.filter_test_dir, iotests.filter_qemu_io])
 
 
 def qemu_io_read_pattern(config, pattern, offset_mb, size_mb, dev=False):
@@ -291,8 +304,8 @@  def qemu_io_read_pattern(config, pattern, offset_mb, size_mb, dev=False):
     args = ["-c", "read -P 0x%x %dM %dM" % (pattern, offset_mb, size_mb)]
     args.extend(qemu_io_image_args(config, dev))
     iotests.log("qemu-io " + " ".join(args), filters=[iotests.filter_test_dir])
-    iotests.log(iotests.qemu_io(*args), filters=[iotests.filter_test_dir,
-                                                 iotests.filter_qemu_io])
+    iotests.log(check_cipher_support(config, iotests.qemu_io(*args)),
+                filters=[iotests.filter_test_dir, iotests.filter_qemu_io])
 
 
 def test_once(config, qemu_img=False):