From patchwork Thu Nov 24 16:10:33 2016
X-Patchwork-Submitter: Alex Bennée <alex.bennee@linaro.org>
X-Patchwork-Id: 9446043
From: Alex Bennée <alex.bennee@linaro.org>
To: kvm@vger.kernel.org, linux-arm-kernel@lists.infradead.org,
    kvmarm@lists.cs.columbia.edu, christoffer.dall@linaro.org,
    marc.zyngier@arm.com
Subject: [kvm-unit-tests PATCH v7 11/11] arm/tcg-test: some basic TCG
 exercising tests
Date: Thu, 24 Nov 2016 16:10:33 +0000
Message-Id: <20161124161033.11456-12-alex.bennee@linaro.org>
X-Mailer: git-send-email 2.10.1
In-Reply-To: <20161124161033.11456-1-alex.bennee@linaro.org>
References: <20161124161033.11456-1-alex.bennee@linaro.org>
Cc: mttcg@listserver.greensocs.com, peter.maydell@linaro.org,
    claudio.fontana@huawei.com, nikunj@linux.vnet.ibm.com,
    jan.kiszka@siemens.com, mark.burton@greensocs.com,
    a.rigo@virtualopensystems.com, qemu-devel@nongnu.org, cota@braap.org,
    serge.fdrv@gmail.com, pbonzini@redhat.com, bobby.prani@gmail.com,
    rth@twiddle.net, Alex Bennée <alex.bennee@linaro.org>,
    fred.konrad@greensocs.com

These tests are not really aimed at KVM at all; they exist to stretch
QEMU's TCG code generator. In particular they exercise the ability of
the TCG to:

  * chain TranslationBlocks together (tight)
  * handle heavy usage of the tb_jump_cache (paged)
  * handle the pathological case of computed local jumps (computed)

In addition the tests can be varied by adding IPI IRQs or SMC
sequences into the mix to stress the tcg_exit and invalidation
mechanisms.

To explicitly stress the tb_flush() mechanism you can use the
mod/rounds parameters to force more frequent TB invalidation, combined
with setting -tb-size 1 in QEMU to limit the size of the code
generation buffer.
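For reference, each assembly block below consumes the 32 bit branch
pattern one bit at a time: rotate right by one, then branch on the new
low bit. A minimal C model of that per-block step (illustrative only,
not part of the patch):

    #include <stdint.h>

    /* Mirror of "ror r1, r1, #1; tst r1, #1" in the asm blocks:
     * rotate the pattern and return the bit that selects the
     * successor block. The pattern is the same on every round, so
     * each CPU replays an identical chain of blocks. */
    static uint32_t next_branch_bit(uint32_t *pattern)
    {
        *pattern = (*pattern >> 1) | (*pattern << 31); /* ror #1 */
        return *pattern & 1;
    }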
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>

---
v5
  - added armv8 version of the tcg tests
  - max out at -smp 4 in unittests.cfg
  - add up IRQs sent and delivered for PASS/FAIL
  - take into account error count
  - add "rounds=" parameter
  - tweak smc to tb-size=1
  - printf fmt fix
v7
  - merged in IRQ numerology
  - updated to latest IRQ API
---
 arm/Makefile.arm     |   2 +
 arm/Makefile.arm64   |   2 +
 arm/Makefile.common  |   1 +
 arm/tcg-test-asm.S   | 170 ++++++++++++++++++++++++++
 arm/tcg-test-asm64.S | 169 ++++++++++++++++++++++++++
 arm/tcg-test.c       | 337 +++++++++++++++++++++++++++++++++++++++++++++++++++
 arm/unittests.cfg    |  84 +++++++++++++
 7 files changed, 765 insertions(+)
 create mode 100644 arm/tcg-test-asm.S
 create mode 100644 arm/tcg-test-asm64.S
 create mode 100644 arm/tcg-test.c

diff --git a/arm/Makefile.arm b/arm/Makefile.arm
index 92f3757..7058bd2 100644
--- a/arm/Makefile.arm
+++ b/arm/Makefile.arm
@@ -24,4 +24,6 @@ tests =

 include $(TEST_DIR)/Makefile.common

+$(TEST_DIR)/tcg-test.elf: $(cstart.o) $(TEST_DIR)/tcg-test.o $(TEST_DIR)/tcg-test-asm.o
+
 arch_clean: arm_clean

diff --git a/arm/Makefile.arm64 b/arm/Makefile.arm64
index 0b0761c..678fca4 100644
--- a/arm/Makefile.arm64
+++ b/arm/Makefile.arm64
@@ -16,5 +16,7 @@ tests =

 include $(TEST_DIR)/Makefile.common

+$(TEST_DIR)/tcg-test.elf: $(cstart.o) $(TEST_DIR)/tcg-test.o $(TEST_DIR)/tcg-test-asm64.o
+
 arch_clean: arm_clean
 	$(RM) lib/arm64/.*.d

diff --git a/arm/Makefile.common b/arm/Makefile.common
index a508128..9af758f 100644
--- a/arm/Makefile.common
+++ b/arm/Makefile.common
@@ -17,6 +17,7 @@
 tests-common += $(TEST_DIR)/tlbflush-code.flat
 tests-common += $(TEST_DIR)/tlbflush-data.flat
 tests-common += $(TEST_DIR)/locking-test.flat
 tests-common += $(TEST_DIR)/barrier-litmus-test.flat
+tests-common += $(TEST_DIR)/tcg-test.flat

 all: test_cases

diff --git a/arm/tcg-test-asm.S b/arm/tcg-test-asm.S
new file mode 100644
index 0000000..6e823b7
--- /dev/null
+++ b/arm/tcg-test-asm.S
@@ -0,0 +1,170 @@
+/*
+ * TCG Test assembler functions for armv7 tests.
+ *
+ * Copyright (C) 2016, Linaro Ltd, Alex Bennée <alex.bennee@linaro.org>
+ *
+ * This work is licensed under the terms of the GNU LGPL, version 2.
+ *
+ * These helper functions are written in pure asm to control the size
+ * of the basic blocks and ensure they fit neatly into page
+ * aligned chunks. The pattern of branches they follow is determined by
+ * the 32 bit seed they are passed. It should be the same for each set.
+ *
+ * Calling convention
+ *  - r0, iterations
+ *  - r1, jump pattern
+ *  - r2-r3, scratch
+ *
+ * Returns r0
+ */
+
+.arm
+
+.section .text
+
+/*
+ * Tight - all blocks should quickly be patched together and should
+ * run very fast, unless IRQs or SMC writes get in the way.
+ */
+
+.global tight_start
+tight_start:
+    subs    r0, r0, #1
+    beq     tight_end
+
+    ror     r1, r1, #1
+    tst     r1, #1
+    beq     tightA
+    b       tight_start
+
+tightA:
+    subs    r0, r0, #1
+    beq     tight_end
+
+    ror     r1, r1, #1
+    tst     r1, #1
+    beq     tightB
+    b       tight_start
+
+tightB:
+    subs    r0, r0, #1
+    beq     tight_end
+
+    ror     r1, r1, #1
+    tst     r1, #1
+    beq     tight_start
+    b       tightA
+
+.global tight_end
+tight_end:
+    mov     pc, lr
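The "computed" blocks that follow dispatch via a jump table rather than
fixed branches, so the TCG can never patch a direct block-to-block link
between them. A rough C analogue of that dispatch, using the GNU
computed-goto extension (illustrative only, not part of the patch):

    #include <stdint.h>

    static int computed_model(uint32_t count, uint32_t pattern)
    {
        void *jump_table[2];

        jump_table[0] = &&block_start;
        jump_table[1] = &&block_a;

    block_start:
        if (--count == 0)                  /* the subs/beq exit check */
            return 0;
        pattern = (pattern >> 1) | (pattern << 31);
        goto *jump_table[pattern & 1];     /* table load, like "ldr; mov pc" */

    block_a:
        if (--count == 0)
            return 0;
        pattern = (pattern >> 1) | (pattern << 31);
        goto *jump_table[pattern & 1];
    }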
+/*
+ * Computed jumps cannot be hardwired into the basic blocks, so each
+ * one causes an exit from the main execution loop so the next block
+ * can be looked up.
+ *
+ * There is some caching which should ameliorate the cost a little.
+ */
+
+    /* .align 13 == 2^13 == 8192 byte alignment */
+    .align  13
+    .global computed_start
+computed_start:
+    subs    r0, r0, #1
+    beq     computed_end
+
+    /* Jump table */
+    ror     r1, r1, #1
+    and     r2, r1, #1
+    adr     r3, computed_jump_table
+    ldr     r2, [r3, r2, lsl #2]
+    mov     pc, r2
+
+    b       computed_err
+
+computed_jump_table:
+    .word   computed_start
+    .word   computedA
+
+computedA:
+    subs    r0, r0, #1
+    beq     computed_end
+
+    /* Jump into code */
+    ror     r1, r1, #1
+    and     r2, r1, #1
+    adr     r3, 1f
+    add     r3, r3, r2, lsl #2
+    mov     pc, r3
+1:  b       computed_start
+    b       computedB
+
+    b       computed_err
+
+computedB:
+    subs    r0, r0, #1
+    beq     computed_end
+    ror     r1, r1, #1
+
+    /* Conditional register load */
+    adr     r3, computedA
+    tst     r1, #1
+    adreq   r3, computed_start
+    mov     pc, r3
+
+    b       computed_err
+
+computed_err:
+    mov     r0, #1
+    .global computed_end
+computed_end:
+    mov     pc, lr
+
+
+/*
+ * Page hopping
+ *
+ * Each block is in a different page, hence the blocks never get
+ * chained together.
+ */
+    /* .align 13 == 2^13 == 8192 byte alignment */
+    .align  13
+    .global paged_start
+paged_start:
+    subs    r0, r0, #1
+    beq     paged_end
+
+    ror     r1, r1, #1
+    tst     r1, #1
+    beq     pagedA
+    b       paged_start
+
+    /* .align 13 == 2^13 == 8192 byte alignment */
+    .align  13
+pagedA:
+    subs    r0, r0, #1
+    beq     paged_end
+
+    ror     r1, r1, #1
+    tst     r1, #1
+    beq     pagedB
+    b       paged_start
+
+    /* .align 13 == 2^13 == 8192 byte alignment */
+    .align  13
+pagedB:
+    subs    r0, r0, #1
+    beq     paged_end
+
+    ror     r1, r1, #1
+    tst     r1, #1
+    beq     paged_start
+    b       pagedA
+
+    /* .align 13 == 2^13 == 8192 byte alignment */
+    .align  13
+.global paged_end
+paged_end:
+    mov     pc, lr
+
+.global test_code_end
+test_code_end:

diff --git a/arm/tcg-test-asm64.S b/arm/tcg-test-asm64.S
new file mode 100644
index 0000000..22bcfb4
--- /dev/null
+++ b/arm/tcg-test-asm64.S
@@ -0,0 +1,169 @@
+/*
+ * TCG Test assembler functions for armv8 tests.
+ *
+ * Copyright (C) 2016, Linaro Ltd, Alex Bennée <alex.bennee@linaro.org>
+ *
+ * This work is licensed under the terms of the GNU LGPL, version 2.
+ *
+ * These helper functions are written in pure asm to control the size
+ * of the basic blocks and ensure they fit neatly into page
+ * aligned chunks. The pattern of branches they follow is determined by
+ * the 32 bit seed they are passed. It should be the same for each set.
+ *
+ * Calling convention
+ *  - x0, iterations
+ *  - x1, jump pattern
+ *  - x2-x3, scratch
+ *
+ * Returns x0
+ */
+
+.section .text
+
+/*
+ * Tight - all blocks should quickly be patched together and should
+ * run very fast, unless IRQs or SMC writes get in the way.
+ */
+
+.global tight_start
+tight_start:
+    subs    x0, x0, #1
+    beq     tight_end
+
+    ror     x1, x1, #1
+    tst     x1, #1
+    beq     tightA
+    b       tight_start
+
+tightA:
+    subs    x0, x0, #1
+    beq     tight_end
+
+    ror     x1, x1, #1
+    tst     x1, #1
+    beq     tightB
+    b       tight_start
+
+tightB:
+    subs    x0, x0, #1
+    beq     tight_end
+
+    ror     x1, x1, #1
+    tst     x1, #1
+    beq     tight_start
+    b       tightA
+
+.global tight_end
+tight_end:
+    ret
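Both files place their "paged" blocks with .align 13, i.e. on 2^13 =
8192 byte boundaries, so no two blocks can ever share a 4 KiB page. A
minimal C sanity check of that layout (a sketch using the exported
start/end symbols; not part of the patch):

    #include <assert.h>
    #include <stdint.h>

    extern char paged_start[], paged_end[];

    static void check_paged_layout(void)
    {
        uintptr_t start = (uintptr_t)paged_start;
        uintptr_t end = (uintptr_t)paged_end;

        /* the first and last blocks sit on 8 KiB boundaries ... */
        assert((start & ((1ul << 13) - 1)) == 0);
        assert((end & ((1ul << 13) - 1)) == 0);
        /* ... and therefore never share a 4 KiB page */
        assert((start >> 12) != (end >> 12));
    }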
+/*
+ * Computed jumps cannot be hardwired into the basic blocks, so each
+ * one causes an exit from the main execution loop so the next block
+ * can be looked up.
+ *
+ * There is some caching which should ameliorate the cost a little.
+ */
+
+    /* .align 13 == 2^13 == 8192 byte alignment */
+    .align  13
+    .global computed_start
+computed_start:
+    subs    x0, x0, #1
+    beq     computed_end
+
+    /* Jump table */
+    ror     x1, x1, #1
+    and     x2, x1, #1
+    adr     x3, computed_jump_table
+    ldr     x2, [x3, x2, lsl #3]
+    br      x2
+
+    b       computed_err
+
+computed_jump_table:
+    .quad   computed_start
+    .quad   computedA
+
+computedA:
+    subs    x0, x0, #1
+    beq     computed_end
+
+    /* Jump into code */
+    ror     x1, x1, #1
+    and     x2, x1, #1
+    adr     x3, 1f
+    add     x3, x3, x2, lsl #2
+    br      x3
+1:  b       computed_start
+    b       computedB
+
+    b       computed_err
+
+computedB:
+    subs    x0, x0, #1
+    beq     computed_end
+    ror     x1, x1, #1
+
+    /* Conditional register load */
+    adr     x2, computedA
+    adr     x3, computed_start
+    tst     x1, #1
+    csel    x2, x3, x2, eq
+    br      x2
+
+    b       computed_err
+
+computed_err:
+    mov     x0, #1
+    .global computed_end
+computed_end:
+    ret
+
+
+/*
+ * Page hopping
+ *
+ * Each block is in a different page, hence the blocks never get
+ * chained together.
+ */
+    /* .align 13 == 2^13 == 8192 byte alignment */
+    .align  13
+    .global paged_start
+paged_start:
+    subs    x0, x0, #1
+    beq     paged_end
+
+    ror     x1, x1, #1
+    tst     x1, #1
+    beq     pagedA
+    b       paged_start
+
+    /* .align 13 == 2^13 == 8192 byte alignment */
+    .align  13
+pagedA:
+    subs    x0, x0, #1
+    beq     paged_end
+
+    ror     x1, x1, #1
+    tst     x1, #1
+    beq     pagedB
+    b       paged_start
+
+    /* .align 13 == 2^13 == 8192 byte alignment */
+    .align  13
+pagedB:
+    subs    x0, x0, #1
+    beq     paged_end
+
+    ror     x1, x1, #1
+    tst     x1, #1
+    beq     paged_start
+    b       pagedA
+
+    /* .align 13 == 2^13 == 8192 byte alignment */
+    .align  13
+.global paged_end
+paged_end:
+    ret
+
+.global test_code_end
+test_code_end:

diff --git a/arm/tcg-test.c b/arm/tcg-test.c
new file mode 100644
index 0000000..341dca3
--- /dev/null
+++ b/arm/tcg-test.c
@@ -0,0 +1,337 @@
+/*
+ * ARM TCG Tests
+ *
+ * These tests are explicitly aimed at stretching the QEMU TCG engine.
+ */
+
+#include <libcflat.h>
+#include <asm/smp.h>
+#include <asm/cpumask.h>
+#include <asm/barrier.h>
+#include <asm/processor.h>
+#include <asm/mmu.h>
+#include <asm/gic.h>
+
+#include <prng.h>
+
+#define MAX_CPUS 8
+
+/* These entry points are in the assembly code */
+extern int tight_start(uint32_t count, uint32_t pattern);
+extern int computed_start(uint32_t count, uint32_t pattern);
+extern int paged_start(uint32_t count, uint32_t pattern);
+extern uint32_t tight_end;
+extern uint32_t computed_end;
+extern uint32_t paged_end;
+extern unsigned long test_code_end;
+
+typedef int (*test_fn)(uint32_t count, uint32_t pattern);
+
+typedef struct {
+        const char *test_name;
+        bool should_pass;
+        test_fn start_fn;
+        uint32_t *code_end;
+} test_descr_t;
+
+/* Test array */
+static test_descr_t tests[] = {
+        /*
+         * Tight chain.
+         *
+         * These are a bunch of basic blocks that have fixed branches in
+         * a page aligned space. The branches taken are decided by a
+         * pseudo-random bitmap for each CPU.
+         *
+         * Once the basic blocks have been chained together by the TCG
+         * they should run until they reach their block count. This will
+         * be the most efficient mode in which generated code is run.
+         * The only other exits will be caused by interrupts or TB
+         * invalidation.
+         */
+        { "tight", true, tight_start, &tight_end },
+        /*
+         * Computed jumps.
+         *
+         * A bunch of basic blocks which only do computed jumps, so a
+         * block is never chained to its successor, although they are
+         * all within a page (maybe not required). This will exercise
+         * the jump cache lookup but not new code generation.
+         */
+        { "computed", true, computed_start, &computed_end },
+        /*
+         * Page ping pong.
+         *
+         * The blocks are separated by PAGE_SIZE so they can never be
+         * chained together.
+         */
+        { "paged", true, paged_start, &paged_end }
+};
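For anyone extending the table: a new scenario only needs an entry
point with the (count, pattern) signature plus an end-of-code marker so
the SMC pass knows how much code to write over. A hypothetical entry
(names invented for illustration, not part of this patch):

    /* hypothetical extra test, for illustration only */
    extern int bounce_start(uint32_t count, uint32_t pattern);
    extern uint32_t bounce_end;

    static test_descr_t extra_tests[] = {
        { "bounce", true, bounce_start, &bounce_end },
    };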
+
+static test_descr_t *test = NULL;
+
+static int iterations = 1000000;
+static int rounds = 1000;
+static int mod_freq = 5;
+static uint32_t pattern[MAX_CPUS];
+
+/* control flags */
+static int smc = 0;
+static int irq = 0;
+static int check_irq = 0;
+
+/* IRQ accounting */
+#define MAX_IRQ_IDS 16
+static int irqv;
+static unsigned long irq_sent_ts[MAX_CPUS][MAX_CPUS][MAX_IRQ_IDS];
+
+static int irq_recv[MAX_CPUS];
+static int irq_sent[MAX_CPUS];
+static int irq_overlap[MAX_CPUS];  /* if ts > now, i.e. a race */
+static int irq_slow[MAX_CPUS];     /* if delay > threshold */
+static unsigned long irq_latency[MAX_CPUS]; /* cumulative time */
+
+static int errors[MAX_CPUS];
+
+static cpumask_t smp_test_complete;
+
+static cpumask_t ready;
+
+static void wait_on_ready(void)
+{
+        cpumask_set_cpu(smp_processor_id(), &ready);
+        while (!cpumask_full(&ready))
+                cpu_relax();
+}
+
+/*
+ * This triggers TCG's SMC (self-modifying code) detection by writing
+ * values to the executing code pages. We are not actually modifying
+ * the instructions, so the underlying code remains unchanged, but the
+ * writes should still trigger invalidation of the affected
+ * TranslationBlocks.
+ */
+void trigger_smc_detection(uint32_t *start, uint32_t *end)
+{
+        volatile uint32_t *ptr = start;
+        while (ptr < end) {
+                uint32_t inst = *ptr;
+                *ptr++ = inst;
+        }
+}
+
+/* Handler for receiving IRQs */
+static void irq_handler(struct pt_regs *regs __unused)
+{
+        unsigned long then, now = get_cntvct();
+        int cpu = smp_processor_id();
+        u32 irqstat = gic_read_iar();
+        u32 irqnr = gic_iar_irqnr(irqstat);
+
+        if (irqnr != GICC_INT_SPURIOUS) {
+                unsigned int src_cpu = (irqstat >> 10) & 0x7;
+                gic_write_eoir(irqstat);
+                irq_recv[cpu]++;
+
+                then = irq_sent_ts[src_cpu][cpu][irqnr];
+
+                if (then > now) {
+                        irq_overlap[cpu]++;
+                } else {
+                        unsigned long latency = (now - then);
+                        if (latency > 30000) {
+                                irq_slow[cpu]++;
+                        } else {
+                                irq_latency[cpu] += latency;
+                        }
+                }
+        }
+}
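The handler above classifies anything over 30000 generic-timer ticks
as "slow". To express that threshold in real time units you scale by
the timer frequency; a small conversion sketch (assumes the CNTFRQ
value has already been read into a variable; not part of the patch):

    /* convert generic-timer ticks to microseconds */
    static unsigned long ticks_to_us(unsigned long ticks,
                                     unsigned long cntfrq_hz)
    {
        /* e.g. with a common 62.5 MHz timer, 30000 ticks = 480 us */
        return (ticks * 1000000ul) / cntfrq_hz;
    }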
+
+/*
+ * This triggers cross-CPU IRQs. Each IRQ should cause the basic block
+ * execution to finish and the main run-loop to be entered again.
+ */
+int send_cross_cpu_irqs(int this_cpu, int irq)
+{
+        int cpu, sent = 0;
+        cpumask_t mask;
+
+        cpumask_copy(&mask, &cpu_present_mask);
+
+        for_each_present_cpu(cpu) {
+                if (cpu != this_cpu) {
+                        irq_sent_ts[this_cpu][cpu][irq] = get_cntvct();
+                        sent++;
+                }
+        }
+
+        /* IPI everyone but ourselves */
+        cpumask_clear_cpu(this_cpu, &mask);
+        gic_ipi_send_mask(irq, &mask);
+
+        return sent;
+}
+
+void do_test(void)
+{
+        int cpu = smp_processor_id();
+        int i, irq_id = 0;
+
+        printf("CPU%d: online and setting up with pattern 0x%"PRIx32"\n",
+               cpu, pattern[cpu]);
+
+        if (irq) {
+                gic_enable_defaults();
+#ifdef __arm__
+                install_exception_handler(EXCPTN_IRQ, irq_handler);
+#else
+                install_irq_handler(EL1H_IRQ, irq_handler);
+#endif
+                local_irq_enable();
+
+                wait_on_ready();
+        }
+
+        for (i = 0; i < rounds; i++) {
+                /* run the test code, accumulating any errors */
+                errors[cpu] += test->start_fn(iterations, pattern[cpu]);
+
+                if ((i + cpu) % mod_freq == 0) {
+                        if (smc) {
+                                trigger_smc_detection((uint32_t *) test->start_fn,
+                                                      test->code_end);
+                        }
+                        if (irq) {
+                                irq_sent[cpu] += send_cross_cpu_irqs(cpu, irq_id);
+                                irq_id++;
+                                irq_id = irq_id % 15;
+                        }
+                }
+        }
+
+        smp_wmb();
+
+        cpumask_set_cpu(cpu, &smp_test_complete);
+        if (cpu != 0)
+                halt();
+}
+
+void report_irq_stats(int cpu)
+{
+        int recv = irq_recv[cpu];
+        int race = irq_overlap[cpu];
+        int slow = irq_slow[cpu];
+        int valid = recv - (race + slow);
+        unsigned long avg_latency = valid ? irq_latency[cpu] / valid : 0;
+
+        printf("CPU%d: %d irqs (%d races, %d slow, %ld ticks avg latency)\n",
+               cpu, recv, race, slow, avg_latency);
+}
+
+
+void setup_and_run_tcg_test(void)
+{
+        static const unsigned char seed[] = "tcg-test";
+        struct isaac_ctx prng_context;
+        int cpu;
+        int total_err = 0, total_sent = 0, total_recv = 0;
+
+        isaac_init(&prng_context, &seed[0], sizeof(seed));
+
+        /* boot other CPUs */
+        for_each_present_cpu(cpu) {
+                pattern[cpu] = isaac_next_uint32(&prng_context);
+
+                if (cpu == 0)
+                        continue;
+
+                smp_boot_secondary(cpu, do_test);
+        }
+
+        do_test();
+
+        while (!cpumask_full(&smp_test_complete))
+                cpu_relax();
+
+        smp_mb();
+
+        /* Now total up errors and irqs */
+        for_each_present_cpu(cpu) {
+                total_err += errors[cpu];
+                total_sent += irq_sent[cpu];
+                total_recv += irq_recv[cpu];
+
+                if (check_irq)
+                        report_irq_stats(cpu);
+        }
+
+        if (check_irq) {
+                if (total_sent != total_recv)
+                        report("%d IRQs sent, %d received", false,
+                               total_sent, total_recv);
+                else
+                        report("%d errors, IRQs OK", total_err == 0, total_err);
+        } else {
+                report("%d errors, IRQs not checked", total_err == 0, total_err);
+        }
+}
+
+int main(int argc, char **argv)
+{
+        int i;
+        unsigned int j;
+
+        for (i = 0; i < argc; i++) {
+                /* select the test to run */
+                for (j = 0; j < ARRAY_SIZE(tests); j++) {
+                        if (strcmp(argv[i], tests[j].test_name) == 0)
+                                test = &tests[j];
+                }
+
+                /* test modifiers */
+                if (strcmp(argv[i], "smc") == 0)
+                        smc = 1;
+                if (strcmp(argv[i], "irq") == 0)
+                        irq = 1;
+                if (strcmp(argv[i], "check_irq") == 0)
+                        check_irq = 1;
+                if (strstr(argv[i], "rounds=") == argv[i])
+                        rounds = atol(argv[i] + strlen("rounds="));
+                if (strstr(argv[i], "mod=") == argv[i])
+                        mod_freq = atol(argv[i] + strlen("mod="));
+        }
+
+        if (test)
+                setup_and_run_tcg_test();
+        else
+                report("unknown test", false);
+
+        return report_summary();
+}

diff --git a/arm/unittests.cfg b/arm/unittests.cfg
--- a/arm/unittests.cfg
+++ b/arm/unittests.cfg
+
+[tcg::tight]
+file = tcg-test.flat
+smp = $(($MAX_SMP>4?4:$MAX_SMP))
+extra_params = -append 'tight'
+groups = tcg
+accel = tcg
+
+[tcg::tight-smc]
+file = tcg-test.flat
+smp = $(($MAX_SMP>4?4:$MAX_SMP))
+extra_params = -append 'tight smc' -tb-size 1
+groups = tcg
+accel = tcg
+
+[tcg::tight-irq]
+file = tcg-test.flat
+smp = $(($MAX_SMP>4?4:$MAX_SMP))
+extra_params = -append 'tight irq'
+groups = tcg
+accel = tcg
+
+[tcg::tight-smc-irq]
+file = tcg-test.flat
+smp = $(($MAX_SMP>4?4:$MAX_SMP))
+extra_params = -append 'tight smc irq'
+groups = tcg
+accel = tcg
+
+[tcg::computed]
+file = tcg-test.flat
+smp = $(($MAX_SMP>4?4:$MAX_SMP))
+extra_params = -append 'computed'
+groups = tcg
+accel = tcg
+
+[tcg::computed-smc]
+file = tcg-test.flat
+smp = $(($MAX_SMP>4?4:$MAX_SMP))
+extra_params = -append 'computed smc'
+groups = tcg
+accel = tcg
+
+[tcg::computed-irq]
+file = tcg-test.flat
+smp = $(($MAX_SMP>4?4:$MAX_SMP))
+extra_params = -append 'computed irq'
+groups = tcg
+accel = tcg
+
+[tcg::computed-smc-irq]
+file = tcg-test.flat
+smp = $(($MAX_SMP>4?4:$MAX_SMP))
+extra_params = -append 'computed smc irq'
+groups = tcg
+accel = tcg
+
+[tcg::paged]
+file = tcg-test.flat
+smp = $(($MAX_SMP>4?4:$MAX_SMP))
+extra_params = -append 'paged'
+groups = tcg
+accel = tcg
+
+[tcg::paged-smc]
+file = tcg-test.flat
+smp = $(($MAX_SMP>4?4:$MAX_SMP))
+extra_params = -append 'paged smc'
+groups = tcg
+accel = tcg
+
+[tcg::paged-irq]
+file = tcg-test.flat
+smp = $(($MAX_SMP>4?4:$MAX_SMP))
+extra_params = -append 'paged irq'
+groups = tcg
+accel = tcg
+
+[tcg::paged-smc-irq]
+file = tcg-test.flat
+smp = $(($MAX_SMP>4?4:$MAX_SMP))
+extra_params = -append 'paged smc irq'
+groups = tcg
+accel = tcg
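For ad-hoc runs outside the test harness, the cfg entries above
translate directly into an invocation along the lines of
"./arm/run arm/tcg-test.flat -smp 4 -append 'tight smc irq' -tb-size 1"
(assuming the standard arm/run wrapper), which combines all three
stress mechanisms in a single run.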