
[AUTOTEST,2/2] Add ability to call autotest client tests from kvm tests as subtests.

Message ID 1304085561-4774-3-git-send-email-jzupka@redhat.com (mailing list archive)
State New, archived

Commit Message

Jiri Zupka April 29, 2011, 1:59 p.m. UTC
Example: run autotest/client/netperf2 as a server.

test.runsubtest("netperf2", tag="server", server_ip=host_ip,
                client_ip=guest_ip, role='server')

The client part is called in a parallel thread on the virtual machine.

guest = kvm_utils.Thread(kvm_test_utils.run_autotest,
                         (vm, session, control_path, control_args,
                          timeout, outputdir, params))
guest.start()
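
Put together, the host side runs the server half while the guest thread runs the
client half (a sketch; the final join() assumes kvm_utils.Thread behaves like
threading.Thread):

    guest.start()
    # Server half on the host, client half in the guest, in parallel.
    test.runsubtest("netperf2", tag="server", server_ip=host_ip,
                    client_ip=guest_ip, role='server')
    guest.join()    # assumption: wait for the guest-side client to finish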

The guest is required to have the mpstat program installed for the netperf2 test.
The netperf2 test will either be changed or recreated in a new version.

This patch is necessary to avoid creating duplicate versions of tests
(netperf, multicast, etc.).

Also patches tests_base.cfg.sample in the correct way.

Signed-off-by: Jiří Župka <jzupka@redhat.com>
---
 client/bin/client_logging_config.py    |    5 +-
 client/bin/net/net_utils.py            |   16 ++++-
 client/common_lib/base_job.py          |    2 +
 client/common_lib/logging_config.py    |    3 +-
 client/common_lib/test.py              |   21 ++++++-
 client/tests/kvm/tests/subtest.py      |   43 ++++++++++++
 client/tests/kvm/tests_base.cfg.sample |    8 ++
 client/tests/netperf2/netperf2.py      |    3 +-
 client/tools/html_report.py            |  115 ++++++++++++++++++--------------
 client/virt/virt_test_utils.py         |   19 ++++--
 10 files changed, 174 insertions(+), 61 deletions(-)
 create mode 100644 client/tests/kvm/tests/subtest.py

Comments

Lucas Meneghel Rodrigues May 4, 2011, 2:19 a.m. UTC | #1
Hi Jiri, after reviewing the code I have comments, similar to Cleber's:

On Fri, Apr 29, 2011 at 10:59 AM, Jiří Župka <jzupka@redhat.com> wrote:
> Example: run autotest/client/netperf2 as a server.

... snip

> diff --git a/client/tests/kvm/tests/subtest.py b/client/tests/kvm/tests/subtest.py
> new file mode 100644
> index 0000000..3b546dc
> --- /dev/null
> +++ b/client/tests/kvm/tests/subtest.py
> @@ -0,0 +1,43 @@
> +import os, logging
> +from autotest_lib.client.virt import virt_utils, virt_test_utils, kvm_monitor
> +from autotest_lib.client.bin import job
> +from autotest_lib.client.bin.net import net_utils
> +
> +
> +def run_subtest(test, params, env):
> +    """
> +    Run an autotest test inside a guest and a subtest on the host side.
> +    This test should be a substitute for the netperf test in kvm.
> +
> +    @param test: kvm test object.
> +    @param params: Dictionary with test parameters.
> +    @param env: Dictionary with the test environment.
> +    """
> +    vm = env.get_vm(params["main_vm"])
> +    vm.verify_alive()
> +    timeout = int(params.get("login_timeout", 360))
> +    session = vm.wait_for_login(timeout=timeout)
> +
> +    # Collect test parameters
> +    timeout = int(params.get("test_timeout", 300))
> +    control_path = os.path.join(test.bindir, "autotest_control",
> +                                params.get("test_control_file"))
> +    control_args = params.get("test_control_args")
> +    outputdir = test.outputdir
> +
> +    guest_ip = vm.get_address()
> +    host_ip = net_utils.network().get_corespond_local_ip(guest_ip)
> +    if not host_ip is None:
> +        control_args = host_ip + " " + guest_ip
> +
> +        guest = virt_utils.Thread(virt_test_utils.run_autotest,
> +                                 (vm, session, control_path, control_args,
> +                                  timeout, outputdir, params))
> +        guest.start()
> +
> +        test.runsubtest("netperf2", tag="server", server_ip=host_ip,
> +             client_ip=guest_ip, role='server')

^ This really should be made generic, since as Cleber mentioned,
calling this test run_subtest wouldn't cut it for cases where we run
something other than netperf2. So things that started coming to my
mind:

* We could extend the utility function to run autotest tests on a
guest in a way that it can accept a string with the control file
contents, rather than just an existing control file. This way we'd be
more free to run arbitrary control code in guests, while of course
keeping the ability to use existing control files;
* We could actually create an Autotest() class abstraction, very much
like what we have in server control files, such as

auto_vm1 = virt_utils.Autotest(vm1) # This would install autotest in a
VM and wait for further commands

control = "job.run_test('sleeptest')"

auto_vm1.run_control(control) # This would run sleeptest and bring
back the results to the host

It's a matter of seeing how this is modeled for server-side control
files... I believe this could be cleaner and help us a lot...
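
To make it concrete, here is a minimal sketch of the abstraction I have in mind
(the class, its constructor and the temp-file handling are all hypothetical; only
the run_autotest() signature is taken from this patch):

    import os, tempfile
    from autotest_lib.client.virt import virt_test_utils

    class Autotest(object):
        """
        Hypothetical wrapper: install autotest into a VM, run control code
        in it and bring the results back to the host.
        """
        def __init__(self, vm, params, timeout=3600):
            self.vm = vm
            self.params = params
            self.timeout = timeout
            self.session = vm.wait_for_login()
            # Installing the autotest client into the guest would go here.

        def run_control(self, control_text, outputdir="."):
            # Accept a string with the control file contents rather than
            # only a path to an existing control file.
            fd, control_path = tempfile.mkstemp()
            os.write(fd, control_text)
            os.close(fd)
            virt_test_utils.run_autotest(self.vm, self.session, control_path,
                                         None, self.timeout, outputdir,
                                         self.params)

Existing control files keep working (read the file and pass its contents), and
arbitrary control code becomes a one-liner:

    auto_vm1 = Autotest(vm1, params)
    auto_vm1.run_control("job.run_test('sleeptest')")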

In other comments, please use the idiom:

if foo is not None:

Across all places where we compare a variable with None, because it's
easier to understand the intent right away and it's on the
CODING_STYLE document.
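
For example, in subtest.py that would read:

    host_ip = net_utils.network().get_corespond_local_ip(guest_ip)
    if host_ip is not None:    # rather than: if not host_ip is None:
        control_args = host_ip + " " + guest_ip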
Jiri Zupka May 4, 2011, 1:57 p.m. UTC | #2
----- Original Message -----
> Hi Jiri, after reviewing the code I have comments, similar to
> Cleber's:
> 
> On Fri, Apr 29, 2011 at 10:59 AM, Jiří Župka <jzupka@redhat.com>
> wrote:
> > Example: run autotest/client/netperf2 as a server.
> 
> ... snip
> 
> > diff --git a/client/tests/kvm/tests/subtest.py
> > b/client/tests/kvm/tests/subtest.py
> > new file mode 100644
> > index 0000000..3b546dc
> > --- /dev/null
> > +++ b/client/tests/kvm/tests/subtest.py
> > @@ -0,0 +1,43 @@
> > +import os, logging
> > +from autotest_lib.client.virt import virt_utils, virt_test_utils,
> > kvm_monitor
> > +from autotest_lib.client.bin import job
> > +from autotest_lib.client.bin.net import net_utils
> > +
> > +
> > +def run_subtest(test, params, env):
> > + """
> > + Run an autotest test inside a guest and a subtest on the host side.
> > + This test should be a substitute for the netperf test in kvm.
> > +
> > + @param test: kvm test object.
> > + @param params: Dictionary with test parameters.
> > + @param env: Dictionary with the test environment.
> > + """
> > + vm = env.get_vm(params["main_vm"])
> > + vm.verify_alive()
> > + timeout = int(params.get("login_timeout", 360))
> > + session = vm.wait_for_login(timeout=timeout)
> > +
> > + # Collect test parameters
> > + timeout = int(params.get("test_timeout", 300))
> > + control_path = os.path.join(test.bindir, "autotest_control",
> > + params.get("test_control_file"))
> > + control_args = params.get("test_control_args")
> > + outputdir = test.outputdir
> > +
> > + guest_ip = vm.get_address()
> > + host_ip = net_utils.network().get_corespond_local_ip(guest_ip)
> > + if not host_ip is None:
> > + control_args = host_ip + " " + guest_ip
> > +
> > + guest = virt_utils.Thread(virt_test_utils.run_autotest,
> > + (vm, session, control_path, control_args,
> > + timeout, outputdir, params))
> > + guest.start()
> > +
> > + test.runsubtest("netperf2", tag="server", server_ip=host_ip,
> > + client_ip=guest_ip, role='server')
> 
> ^ This really should be made generic, since as Cleber mentioned,
> calling this test run_subtest wouldn't cut it for cases where we run
> something other than netperf2. So things that started coming to my
> mind:

^ Yes, you are right. I wanted to show how to use and configure parameters
in the control file. This shouldn't be a test; it should only be a sample
of the technology. But my integration into tests_base.cfg.sample was wrong.
I will think about tests_base.cfg.sample and do the integration in a better way.

I will fix the subtest and send the patch again.

> 
> * We could extend the utility function to run autotest tests on a
> guest in a way that it can accept a string with the control file
> contents, rather than just an existing control file. This way we'd be
> more free to run arbitrary control code in guests, while of course
> keeping the ability to use existing control files;
> * We could actually create an Autotest() class abstraction, very much
> like what we have in server control files, such as
> 
> auto_vm1 = virt_utils.Autotest(vm1) # This would install autotest in a
> VM and wait for further commands
> 
> control = "job.run_test('sleeptest')"

                                            ^ This should be a standard test in client/tests/,
                                                not a file from client/tests/kvm/autotest_control.

> 
> auto_vm1.run_control(control) # This would run sleeptest and bring
> back the results to the host

> 
> It's a matter of seeing how this is modeled for server-side control
> files... I believe this could be cleaner and help us a lot...

And yes, I agree with this. It sounds good.

> 
> In other comments, please use the idiom:
> 
> if foo is not None:
> 
> Across all places where we compare a variable with None, because it's
> easier to understand the intent right away and it's on the
> CODING_STYLE document.

^^ I will try this.
> 
> --
> Lucas
Jiri Zupka July 19, 2011, 12:32 p.m. UTC | #3
Hi Lucas,

> Hi Jiri, after reviewing the code I have comments, similar to
> Cleber's:
> 
> On Fri, Apr 29, 2011 at 10:59 AM, Jiří Župka <jzupka@redhat.com>
> wrote:
> > Example: run autotest/client/netperf2 as a server.
> 
> ... snip
> 
> > diff --git a/client/tests/kvm/tests/subtest.py
> > b/client/tests/kvm/tests/subtest.py
> > new file mode 100644
> > index 0000000..3b546dc
> > --- /dev/null
> > +++ b/client/tests/kvm/tests/subtest.py
> > @@ -0,0 +1,43 @@
> > +import os, logging
> > +from autotest_lib.client.virt import virt_utils, virt_test_utils,
> > kvm_monitor
> > +from autotest_lib.client.bin import job
> > +from autotest_lib.client.bin.net import net_utils
> > +
> > +
> > +def run_subtest(test, params, env):
> > + """
> > + Run an autotest test inside a guest and a subtest on the host side.
> > + This test should be a substitute for the netperf test in kvm.
> > +
> > + @param test: kvm test object.
> > + @param params: Dictionary with test parameters.
> > + @param env: Dictionary with the test environment.
> > + """
> > + vm = env.get_vm(params["main_vm"])
> > + vm.verify_alive()
> > + timeout = int(params.get("login_timeout", 360))
> > + session = vm.wait_for_login(timeout=timeout)
> > +
> > + # Collect test parameters
> > + timeout = int(params.get("test_timeout", 300))
> > + control_path = os.path.join(test.bindir, "autotest_control",
> > + params.get("test_control_file"))
> > + control_args = params.get("test_control_args")
> > + outputdir = test.outputdir
> > +
> > + guest_ip = vm.get_address()
> > + host_ip = net_utils.network().get_corespond_local_ip(guest_ip)
> > + if not host_ip is None:
> > + control_args = host_ip + " " + guest_ip
> > +
> > + guest = virt_utils.Thread(virt_test_utils.run_autotest,
> > + (vm, session, control_path, control_args,
> > + timeout, outputdir, params))
> > + guest.start()
> > +
> > + test.runsubtest("netperf2", tag="server", server_ip=host_ip,
> > + client_ip=guest_ip, role='server')
> 
> ^ This really should be made generic, since as Cleber mentioned,
> calling this test run_subtest wouldn't cut it for cases where we run
> something other than netperf2. So things that started coming to my
> mind:
> 
> * We could extend the utility function to run autotest tests on a
> guest in a way that it can accept a string with the control file
> contents, rather than just an existing control file. This way we'd be
> more free to run arbitrary control code in guests, while of course
> keeping the ability to use existing control files;
> * We could actually create an Autotest() class abstraction, very much
> like what we have in server control files, such as
> 
> auto_vm1 = virt_utils.Autotest(vm1) # This would install autotest in a
> VM and wait for further commands
> 
> control = "job.run_test('sleeptest')"
> 
> auto_vm1.run_control(control) # This would run sleeptest and bring
> back the results to the host

I'm thinking about this feature and there are some choices:
 1) We can create another separate interface for this feature in autotest/client.
       - I think this is not a good way, because there is too much (useless, duplicated) code.
 2) Move Autotest.py from the server part to the client part and
     a) Write an interface for the hosts package which extends virt_vm.py and aexpect.py
         to work like the hosts part on the server.
           - This isn't generic enough: you can't run a test on a virt machine and another
             bare metal host.
     b) Move the hosts part from the server to the client part. This adds the ability for the
         client part of a test to start autotest tests on other machines.
          - Is this way good? It is the most generic way to do this, but the changes
            would alter the way autotest is used, with the server starting tests on client
            machines (bare metal, virt).

Maybe we should think about the place of virt in the autotest infrastructure, so that a test
which is multi-machine and generic can run on bare metal and on a virt machine
with no problem.
1*) There should be a way to start a virtual machine from the server part,
    but with capabilities like those in the kvm test (because it has a lot of good
    features, like automatic installation of the virtual machine, etc.).
2*) Or we can move some parts from the server to the client part (autotest.py, hosts) to
    allow the client part of a test to start autotest on a virt machine and on a bare
    metal machine.
3*) Or we should think about how tests are written. This means no changes in the autotest
    structure, but changes in the test structure. The client part would only have a test to
    start the virt machine, and the server would control starting tests on this
    infrastructure. This means moving some of the (kvm, virt) tests to the server part
    (virt/tests/multicast, virt/tests/netperf). Maybe there should be a
    /server/tests/virtprepare. This way it is possible to start a kvm machine.

I have done part (2) and about 70% of part (b), but I'm not sure if this is a
good way to do it. It is the most generic, but... I'm going to make the changes in
way (2b), though I think that way (3*) is the cleanest way to do this.
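
For illustration, way (3*) would roughly mean driving everything from a server-side
control file, something like this (a sketch; the classic server API is assumed and
the IPs are placeholders):

    # Hypothetical server control file for way (3*): the client part only
    # brings the virt machine up; the server starts the actual tests.
    host = hosts.create_host(machines[0])        # bare metal or virt guest
    at = autotest.Autotest(host)
    at.run_test('netperf2', server_ip='192.168.122.1',
                client_ip='192.168.122.100', role='client')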

Jiří

> 
> It's a matter of seeing how this is modeled for server-side control
> files... I believe this could be cleaner and help us a lot...
> 
> In other comments, please use the idiom:
> 
> if foo is not None:
> 
> Across all places where we compare a variable with None, because it's
> easier to understand the intent right away and it's on the
> CODING_STYLE document.
> 
> --
> Lucas
Lucas Meneghel Rodrigues July 19, 2011, 2:18 p.m. UTC | #4
On Tue 19 Jul 2011 09:32:51 AM BRT, Jiri Zupka wrote:
>>
>> auto_vm1.run_control(control) # This would run sleeptest and bring
>> back the results to the host
> 
> I'm thinking about this feature and there are some choices:
>   1) We can create another separate interface for this feature in autotest/client.
>         - I think this is not a good way, because there is too much (useless, duplicated) code.
>   2) Move Autotest.py from the server part to the client part and
>       a) Write an interface for the hosts package which extends virt_vm.py and aexpect.py
>           to work like the hosts part on the server.
>             - This isn't generic enough: you can't run a test on a virt machine and another
>               bare metal host.
>       b) Move the hosts part from the server to the client part. This adds the ability for the
>           client part of a test to start autotest tests on other machines.
>            - Is this way good? It is the most generic way to do this, but the changes
>              would alter the way autotest is used, with the server starting tests on client
>              machines (bare metal, virt).

^ Or, in other words, 'merge' the client and the server program, so
we'd have a single, unified API to write tests. This is
something that Martin Bligh wanted to get done in autotest.

However, it is pretty major work. I really like the idea, so
let's evaluate it carefully.
 
> Maybe we should think about the place of virt in the autotest infrastructure, so that a test
> which is multi-machine and generic can run on bare metal and on a virt machine
> with no problem.
> 1*) There should be a way to start a virtual machine from the server part,
>      but with capabilities like those in the kvm test (because it has a lot of good
>      features, like automatic installation of the virtual machine, etc.).
> 2*) Or we can move some parts from the server to the client part (autotest.py, hosts) to
>      allow the client part of a test to start autotest on a virt machine and on a bare
>      metal machine.
> 3*) Or we should think about how tests are written. This means no changes in the autotest
>      structure, but changes in the test structure. The client part would only have a test to
>      start the virt machine, and the server would control starting tests on this
>      infrastructure. This means moving some of the (kvm, virt) tests to the server part
>      (virt/tests/multicast, virt/tests/netperf). Maybe there should be a
>      /server/tests/virtprepare. This way it is possible to start a kvm machine.

Although I might be way off, I saw stuff on beijing's tree (I think it 
is cross_host_utilities or something) that could help to implement this 
option.

> I have done part (2) and about 70% of part (b), but I'm not sure if this is a
> good way to do it. It is the most generic, but... I'm going to make the changes in
> way (2b), though I think that way (3*) is the cleanest way to do this.

My personal preference would be to unify server and client, so 2). 
However, given that it is *major* work, maybe 3) is better.

Patch

diff --git a/client/bin/client_logging_config.py b/client/bin/client_logging_config.py
index a59b078..28c007d 100644
--- a/client/bin/client_logging_config.py
+++ b/client/bin/client_logging_config.py
@@ -12,8 +12,9 @@  class ClientLoggingConfig(logging_config.LoggingConfig):
 
 
     def configure_logging(self, results_dir=None, verbose=False):
-        super(ClientLoggingConfig, self).configure_logging(use_console=True,
-                                                           verbose=verbose)
+        super(ClientLoggingConfig, self).configure_logging(
+                                                  use_console=self.use_console,
+                                                  verbose=verbose)
 
         if results_dir:
             log_dir = os.path.join(results_dir, 'debug')
diff --git a/client/bin/net/net_utils.py b/client/bin/net/net_utils.py
index 868958c..ac9b494 100644
--- a/client/bin/net/net_utils.py
+++ b/client/bin/net/net_utils.py
@@ -5,7 +5,7 @@  This library is to release in the public repository.
 
 import commands, os, re, socket, sys, time, struct
 from autotest_lib.client.common_lib import error
-import utils
+from autotest_lib.client.common_lib import utils
 
 TIMEOUT = 10 # Used for socket timeout and barrier timeout
 
@@ -27,6 +27,20 @@  class network_utils(object):
         utils.system('/sbin/ifconfig -a')
 
 
+    def get_corespond_local_ip(self, query_ip, netmask="24"):
+        """
+        Get an IP address on the local system which can communicate with query_ip.
+
+        @param query_ip: IP of the client which wants to communicate with the autotest machine.
+        @return: IP address which can communicate with query_ip
+        """
+        ip = utils.system_output("ip addr show to %s/%s" % (query_ip, netmask))
+        ip = re.search(r"inet ([0-9.]*)/",ip)
+        if ip is None:
+            return ip
+        return ip.group(1)
+
+
     def disable_ip_local_loopback(self, ignore_status=False):
         utils.system("echo '1' > /proc/sys/net/ipv4/route/no_local_loopback",
                      ignore_status=ignore_status)
diff --git a/client/common_lib/base_job.py b/client/common_lib/base_job.py
index 843c0e8..eef9efc 100644
--- a/client/common_lib/base_job.py
+++ b/client/common_lib/base_job.py
@@ -1117,6 +1117,7 @@  class base_job(object):
         tag_parts = []
 
         # build up the parts of the tag used for the test name
+        master_testpath = dargs.get('master_testpath', "")
         base_tag = dargs.pop('tag', None)
         if base_tag:
             tag_parts.append(str(base_tag))
@@ -1132,6 +1133,7 @@  class base_job(object):
         if subdir_tag:
             tag_parts.append(subdir_tag)
         subdir = '.'.join([testname] + tag_parts)
+        subdir = os.path.join(master_testpath, subdir)
         tag = '.'.join(tag_parts)
 
         return full_testname, subdir, tag
diff --git a/client/common_lib/logging_config.py b/client/common_lib/logging_config.py
index afe754a..9114d7a 100644
--- a/client/common_lib/logging_config.py
+++ b/client/common_lib/logging_config.py
@@ -32,9 +32,10 @@  class LoggingConfig(object):
         fmt='%(asctime)s %(levelname)-5.5s| %(message)s',
         datefmt='%H:%M:%S')
 
-    def __init__(self):
+    def __init__(self, use_console=True):
         self.logger = logging.getLogger()
         self.global_level = logging.DEBUG
+        self.use_console = use_console
 
 
     @classmethod
diff --git a/client/common_lib/test.py b/client/common_lib/test.py
index c55d23b..b1a0904 100644
--- a/client/common_lib/test.py
+++ b/client/common_lib/test.py
@@ -465,6 +465,24 @@  class base_test(object):
                 self.job.enable_warnings("NETWORK")
 
 
+    def runsubtest(self, url, *args, **dargs):
+        """
+        Run a subtest from within the running test.
+
+        @param url: URL of the new test.
+        @param tag: Tag added to the test name.
+        @param args: Args for the subtest.
+        @param dargs: Dictionary args for the subtest.
+        @param iterations: Number of iterations of the subtest.
+        @param profile_only: Profile-only flag passed to the subtest
+                (defaults to True).
+        """
+        dargs["profile_only"] = dargs.get("profile_only", True)
+        test_basepath = self.outputdir[len(self.job.resultdir + "/"):]
+        self.job.run_test(url, master_testpath=test_basepath,
+                          *args, **dargs)
+
+
 def _get_nonstar_args(func):
     """Extract all the (normal) function parameter names.
 
@@ -658,7 +676,8 @@  def runtest(job, url, tag, args, dargs,
         if not bindir:
             raise error.TestError(testname + ': test does not exist')
 
-    outputdir = os.path.join(job.resultdir, testname)
+    subdir = os.path.join(dargs.pop('master_testpath', ""), testname)
+    outputdir = os.path.join(job.resultdir, subdir)
     if tag:
         outputdir += '.' + tag
 
diff --git a/client/tests/kvm/tests/subtest.py b/client/tests/kvm/tests/subtest.py
new file mode 100644
index 0000000..3b546dc
--- /dev/null
+++ b/client/tests/kvm/tests/subtest.py
@@ -0,0 +1,43 @@ 
+import os, logging
+from autotest_lib.client.virt import virt_utils, virt_test_utils, kvm_monitor
+from autotest_lib.client.bin import job
+from autotest_lib.client.bin.net import net_utils
+
+
+def run_subtest(test, params, env):
+    """
+    Run an autotest test inside a guest and a subtest on the host side.
+    This test should be a substitute for the netperf test in kvm.
+
+    @param test: kvm test object.
+    @param params: Dictionary with test parameters.
+    @param env: Dictionary with the test environment.
+    """
+    vm = env.get_vm(params["main_vm"])
+    vm.verify_alive()
+    timeout = int(params.get("login_timeout", 360))
+    session = vm.wait_for_login(timeout=timeout)
+
+    # Collect test parameters
+    timeout = int(params.get("test_timeout", 300))
+    control_path = os.path.join(test.bindir, "autotest_control",
+                                params.get("test_control_file"))
+    control_args = params.get("test_control_args")
+    outputdir = test.outputdir
+
+    guest_ip = vm.get_address()
+    host_ip = net_utils.network().get_corespond_local_ip(guest_ip)
+    if not host_ip is None:
+        control_args = host_ip + " " + guest_ip
+
+        guest = virt_utils.Thread(virt_test_utils.run_autotest,
+                                 (vm, session, control_path, control_args,
+                                  timeout, outputdir, params))
+        guest.start()
+
+        test.runsubtest("netperf2", tag="server", server_ip=host_ip,
+             client_ip=guest_ip, role='server')
+
+    else:
+        logging.error("Host cannot communicate with the client over a"
+                      " normal network connection.")
\ No newline at end of file
diff --git a/client/tests/kvm/tests_base.cfg.sample b/client/tests/kvm/tests_base.cfg.sample
index 810a4bd..c16f2f9 100644
--- a/client/tests/kvm/tests_base.cfg.sample
+++ b/client/tests/kvm/tests_base.cfg.sample
@@ -261,6 +261,14 @@  variants:
             - systemtap:
                 test_control_file = systemtap.control
 
+    - subtest:     install setup unattended_install.cdrom
+        type = subtest
+        test_timeout = 1800
+        variants:
+            - netperf2:
+                test_control_file = netperf2.control
+                nic_mode = tap
+
     - linux_s3:     install setup unattended_install.cdrom
         only Linux
         type = linux_s3
diff --git a/client/tests/netperf2/netperf2.py b/client/tests/netperf2/netperf2.py
index 1b659dd..23d25c5 100644
--- a/client/tests/netperf2/netperf2.py
+++ b/client/tests/netperf2/netperf2.py
@@ -2,6 +2,7 @@  import os, time, re, logging
 from autotest_lib.client.bin import test, utils
 from autotest_lib.client.bin.net import net_utils
 from autotest_lib.client.common_lib import error
+from autotest_lib.client.common_lib import barrier
 
 MPSTAT_IX = 0
 NETPERF_IX = 1
@@ -36,7 +37,7 @@  class netperf2(test.test):
 
     def run_once(self, server_ip, client_ip, role, test = 'TCP_STREAM',
                  test_time = 15, stream_list = [1], test_specific_args = '',
-                 cpu_affinity = '', dev = '', bidi = False, wait_time = 5):
+                 cpu_affinity = '', dev = '', bidi = False, wait_time = 2):
         """
         server_ip: IP address of host running netserver
         client_ip: IP address of host running netperf client(s)
diff --git a/client/tools/html_report.py b/client/tools/html_report.py
index 7b17a75..563a7a9 100755
--- a/client/tools/html_report.py
+++ b/client/tools/html_report.py
@@ -1372,7 +1372,7 @@  function processList(ul) {
 }
 """
 
-stimelist = []
+
 
 
 def make_html_file(metadata, results, tag, host, output_file_name, dirname):
@@ -1430,11 +1430,12 @@  return true;
     total_failed = 0
     total_passed = 0
     for res in results:
-        total_executed += 1
-        if res['status'] == 'GOOD':
-            total_passed += 1
-        else:
-            total_failed += 1
+        if results[res][2] != None:
+            total_executed += 1
+            if results[res][2]['status'] == 'GOOD':
+                total_passed += 1
+            else:
+                total_failed += 1
     stat_str = 'No test cases executed'
     if total_executed > 0:
         failed_perct = int(float(total_failed)/float(total_executed)*100)
@@ -1471,39 +1472,46 @@  id="t1" class="stats table-autosort:4 table-autofilter table-stripeclass:alterna
 <tbody>
 """
     print >> output, result_table_prefix
-    for res in results:
-        print >> output, '<tr>'
-        print >> output, '<td align="left">%s</td>' % res['time']
-        print >> output, '<td align="left">%s</td>' % res['testcase']
-        if res['status'] == 'GOOD':
-            print >> output, '<td align=\"left\"><b><font color="#00CC00">PASS</font></b></td>'
-        elif res['status'] == 'FAIL':
-            print >> output, '<td align=\"left\"><b><font color="red">FAIL</font></b></td>'
-        elif res['status'] == 'ERROR':
-            print >> output, '<td align=\"left\"><b><font color="red">ERROR!</font></b></td>'
-        else:
-            print >> output, '<td align=\"left\">%s</td>' % res['status']
-        # print exec time (seconds)
-        print >> output, '<td align="left">%s</td>' % res['exec_time_sec']
-        # print log only if test failed..
-        if res['log']:
-            #chop all '\n' from log text (to prevent html errors)
-            rx1 = re.compile('(\s+)')
-            log_text = rx1.sub(' ', res['log'])
-
-            # allow only a-zA-Z0-9_ in html title name
-            # (due to bug in MS-explorer)
-            rx2 = re.compile('([^a-zA-Z_0-9])')
-            updated_tag = rx2.sub('_', res['title'])
-
-            html_body_text = '<html><head><title>%s</title></head><body>%s</body></html>' % (str(updated_tag), log_text)
-            print >> output, '<td align=\"left\"><A HREF=\"#\" onClick=\"popup(\'%s\',\'%s\')\">Info</A></td>' % (str(updated_tag), str(html_body_text))
-        else:
-            print >> output, '<td align=\"left\"></td>'
-        # print execution time
-        print >> output, '<td align="left"><A HREF=\"%s\">Debug</A></td>' % os.path.join(dirname, res['title'], "debug")
+    def print_result(result, indent):
+        while result != []:
+            r = result.pop(0)
+            print r
+            res = results[r][2]
+            print >> output, '<tr>'
+            print >> output, '<td align="left">%s</td>' % res['time']
+            print >> output, '<td align="left" style="padding-left:%dpx">%s</td>' % (indent * 20, res['title'])
+            if res['status'] == 'GOOD':
+                print >> output, '<td align=\"left\"><b><font color="#00CC00">PASS</font></b></td>'
+            elif res['status'] == 'FAIL':
+                print >> output, '<td align=\"left\"><b><font color="red">FAIL</font></b></td>'
+            elif res['status'] == 'ERROR':
+                print >> output, '<td align=\"left\"><b><font color="red">ERROR!</font></b></td>'
+            else:
+                print >> output, '<td align=\"left\">%s</td>' % res['status']
+            # print exec time (seconds)
+            print >> output, '<td align="left">%s</td>' % res['exec_time_sec']
+            # print log only if test failed..
+            if res['log']:
+                #chop all '\n' from log text (to prevent html errors)
+                rx1 = re.compile('(\s+)')
+                log_text = rx1.sub(' ', res['log'])
+
+                # allow only a-zA-Z0-9_ in html title name
+                # (due to bug in MS-explorer)
+                rx2 = re.compile('([^a-zA-Z_0-9])')
+                updated_tag = rx2.sub('_', res['title'])
+
+                html_body_text = '<html><head><title>%s</title></head><body>%s</body></html>' % (str(updated_tag), log_text)
+                print >> output, '<td align=\"left\"><A HREF=\"#\" onClick=\"popup(\'%s\',\'%s\')\">Info</A></td>' % (str(updated_tag), str(html_body_text))
+            else:
+                print >> output, '<td align=\"left\"></td>'
+            # print execution time
+            print >> output, '<td align="left"><A HREF=\"%s\">Debug</A></td>' % os.path.join(dirname, res['subdir'], "debug")
 
-        print >> output, '</tr>'
+            print >> output, '</tr>'
+            print_result(results[r][1], indent + 1)
+
+    print_result(results[""][1], 0)
     print >> output, "</tbody></table>"
 
 
@@ -1531,21 +1539,27 @@  id="t1" class="stats table-autosort:4 table-autofilter table-stripeclass:alterna
         output.close()
 
 
-def parse_result(dirname, line):
+def parse_result(dirname, line, results_data):
     """
     Parse job status log line.
 
     @param dirname: Job results dir
     @param line: Status log line.
+    @param results_data: Dictionary for the results.
     """
     parts = line.split()
     if len(parts) < 4:
         return None
-    global stimelist
+    global tests
     if parts[0] == 'START':
         pair = parts[3].split('=')
         stime = int(pair[1])
-        stimelist.append(stime)
+        results_data[parts[1]] = [stime, [], None]
+        try:
+            parent_test = re.findall(r".*/", parts[1])[0][:-1]
+            results_data[parent_test][1].append(parts[1])
+        except IndexError:
+            results_data[""][1].append(parts[1])
 
     elif (parts[0] == 'END'):
         result = {}
@@ -1562,21 +1576,25 @@  def parse_result(dirname, line):
         result['exec_time_sec'] = 'na'
         tag = parts[3]
 
+        result['subdir'] = parts[2]
         # assign actual values
         rx = re.compile('^(\w+)\.(.*)$')
         m1 = rx.findall(parts[3])
-        result['testcase'] = str(tag)
+        if len(m1):
+            result['testcase'] = m1[0][1]
+        else:
+            result['testcase'] = parts[3]
         result['title'] = str(tag)
         result['status'] = parts[1]
         if result['status'] != 'GOOD':
             result['log'] = get_exec_log(dirname, tag)
-        if len(stimelist)>0:
+        if len(results_data)>0:
             pair = parts[4].split('=')
             etime = int(pair[1])
-            stime = stimelist.pop()
+            stime = results_data[parts[2]][0]
             total_exec_time_sec = etime - stime
             result['exec_time_sec'] = total_exec_time_sec
-        return result
+        results_data[parts[2]][2] = result
     return None
 
 
@@ -1699,16 +1717,15 @@  def create_report(dirname, html_path='', output_file_name=None):
     host = get_info_file(os.path.join(sysinfo_dir, 'hostname'))
     rx = re.compile('^\s+[END|START].*$')
     # create the results set dict
-    results_data = []
+    results_data = {}
+    results_data[""] = [0, [], None]
     if os.path.exists(status_file_name):
         f = open(status_file_name, "r")
         lines = f.readlines()
         f.close()
         for line in lines:
             if rx.match(line):
-                result_dict = parse_result(dirname, line)
-                if result_dict:
-                    results_data.append(result_dict)
+                parse_result(dirname, line, results_data)
     # create the meta info dict
     metalist = {
                 'uname': get_info_file(os.path.join(sysinfo_dir, 'uname')),
diff --git a/client/virt/virt_test_utils.py b/client/virt/virt_test_utils.py
index e3a18d2..556d3e5 100644
--- a/client/virt/virt_test_utils.py
+++ b/client/virt/virt_test_utils.py
@@ -430,13 +430,15 @@  def get_memory_info(lvms):
     return meminfo
 
 
-def run_autotest(vm, session, control_path, timeout, outputdir, params):
+def run_autotest(vm, session, control_path, control_args, timeout, outputdir,
+                 params):
     """
     Run an autotest control file inside a guest (linux only utility).
 
     @param vm: VM object.
     @param session: A shell session on the VM provided.
     @param control_path: A path to an autotest control file.
+    @param control_args: Arguments for the control file.
     @param timeout: Timeout under which the autotest control file must complete.
     @param outputdir: Path on host where we should copy the guest autotest
             results to.
@@ -561,6 +563,10 @@  def run_autotest(vm, session, control_path, timeout, outputdir, params):
         pass
     try:
         bg = None
+        if control_args != None:
+            control_args = ' -a "' + control_args + '"'
+        else:
+            control_args = ""
         try:
             logging.info("---------------- Test output ----------------")
             if migrate_background:
@@ -568,7 +574,8 @@  def run_autotest(vm, session, control_path, timeout, outputdir, params):
                 mig_protocol = params.get("migration_protocol", "tcp")
 
                 bg = virt_utils.Thread(session.cmd_output,
-                                      kwargs={'cmd': "bin/autotest control",
+                                      kwargs={'cmd': "bin/autotest control" +
+                                              control_args,
                                               'timeout': timeout,
                                               'print_func': logging.info})
 
@@ -579,8 +586,8 @@  def run_autotest(vm, session, control_path, timeout, outputdir, params):
                                  "migration ...")
                     vm.migrate(timeout=mig_timeout, protocol=mig_protocol)
             else:
-                session.cmd_output("bin/autotest control", timeout=timeout,
-                                   print_func=logging.info)
+                session.cmd_output("bin/autotest control" + control_args,
+                                   timeout=timeout, print_func=logging.info)
         finally:
             logging.info("------------- End of test output ------------")
             if migrate_background and bg:
@@ -624,8 +631,8 @@  def run_autotest(vm, session, control_path, timeout, outputdir, params):
 
 def get_loss_ratio(output):
     """
-    Get the packet loss ratio from the output of ping
-.
+    Get the packet loss ratio from the output of ping.
+
     @param output: Ping output.
     """
     try: