From patchwork Thu Dec  2 11:53:52 2021
X-Patchwork-Submitter: Alex Bennée
X-Patchwork-Id: 12694504
From: Alex Bennée <alex.bennee@linaro.org>
To: pbonzini@redhat.com, drjones@redhat.com, thuth@redhat.com
Cc: kvm@vger.kernel.org,
 qemu-arm@nongnu.org, linux-arm-kernel@lists.infradead.org,
 kvmarm@lists.cs.columbia.edu, christoffer.dall@arm.com, maz@kernel.org,
 Alex Bennée <alex.bennee@linaro.org>
Subject: [kvm-unit-tests PATCH v9 9/9] arm/tcg-test: some basic TCG exercising tests
Date: Thu,  2 Dec 2021 11:53:52 +0000
Message-Id: <20211202115352.951548-10-alex.bennee@linaro.org>
X-Mailer: git-send-email 2.30.2
In-Reply-To: <20211202115352.951548-1-alex.bennee@linaro.org>
References: <20211202115352.951548-1-alex.bennee@linaro.org>

These tests are not really aimed at KVM at all but exist to stretch
QEMU's TCG code generator. In particular they exercise the ability of
the TCG to:

  * Chain TranslationBlocks together (tight)
  * Handle heavy usage of the tb_jump_cache (paged)
  * Handle the pathological case of computed local jumps (computed)

In addition the tests can be varied by adding IPI IRQs or SMC sequences
into the mix to stress the tcg_exit and invalidation mechanisms.

To explicitly stress the tb_flush() mechanism you can use the
mod/rounds parameters to force more frequent TB invalidation, combined
with setting -tb-size 1 in QEMU to limit the code generation buffer
size.
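For illustration only (not part of the patch): the seed-driven walk the
"tight" blocks perform can be modelled in a few lines of Python. Block
names mirror the assembly labels; each visit decrements the counter,
rotates the 32-bit pattern right by one, and tests bit 0 to pick the
next block, exactly as the subs/ror/tst/beq sequences below do.

```python
def tight_walk(count, pattern):
    """Model the 'tight' asm loop (tight_start/tightA/tightB).

    Each block decrements the iteration counter, rotates the 32-bit
    pattern right by one, and branches on bit 0: clear takes the
    'beq' target, set falls through to the 'b' target.
    """
    # successor[block] = (target_if_bit0_clear, target_if_bit0_set)
    successor = {
        "start": ("A", "start"),
        "A": ("B", "start"),
        "B": ("start", "A"),
    }
    block, visited = "start", []
    while count > 0:
        visited.append(block)
        count -= 1
        pattern = ((pattern >> 1) | (pattern << 31)) & 0xFFFFFFFF  # ror #1
        block = successor[block][pattern & 1]
    return visited
```

With an all-zero seed the walk cycles start→A→B; with an all-ones seed
it stays pinned on the entry block, so different seeds give each CPU a
different chain of TranslationBlocks.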
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Message-Id: <20211118184650.661575-11-alex.bennee@linaro.org>

---
v9
  - moved back to unittests.cfg
  - fixed some missing accel tags
  - s/printf/report_info/
---
 arm/Makefile.arm     |   2 +
 arm/Makefile.arm64   |   2 +
 arm/Makefile.common  |   1 +
 arm/tcg-test-asm.S   | 171 ++++++++++++++++++++++
 arm/tcg-test-asm64.S | 170 ++++++++++++++++++++++
 arm/tcg-test.c       | 338 +++++++++++++++++++++++++++++++++++++++++++
 arm/unittests.cfg    |  84 +++++++++++
 7 files changed, 768 insertions(+)
 create mode 100644 arm/tcg-test-asm.S
 create mode 100644 arm/tcg-test-asm64.S
 create mode 100644 arm/tcg-test.c

diff --git a/arm/Makefile.arm b/arm/Makefile.arm
index 3a4cc6b2..05e47f10 100644
--- a/arm/Makefile.arm
+++ b/arm/Makefile.arm
@@ -31,4 +31,6 @@ tests =
 
 include $(SRCDIR)/$(TEST_DIR)/Makefile.common
 
+$(TEST_DIR)/tcg-test.elf: $(cstart.o) $(TEST_DIR)/tcg-test.o $(TEST_DIR)/tcg-test-asm.o
+
 arch_clean: arm_clean
diff --git a/arm/Makefile.arm64 b/arm/Makefile.arm64
index e8a38d78..ac94f8ed 100644
--- a/arm/Makefile.arm64
+++ b/arm/Makefile.arm64
@@ -34,5 +34,7 @@ tests += $(TEST_DIR)/cache.flat
 
 include $(SRCDIR)/$(TEST_DIR)/Makefile.common
 
+$(TEST_DIR)/tcg-test.elf: $(cstart.o) $(TEST_DIR)/tcg-test.o $(TEST_DIR)/tcg-test-asm64.o
+
 arch_clean: arm_clean
 	$(RM) lib/arm64/.*.d
diff --git a/arm/Makefile.common b/arm/Makefile.common
index 861e5c7f..abb69489 100644
--- a/arm/Makefile.common
+++ b/arm/Makefile.common
@@ -14,6 +14,7 @@ tests-common += $(TEST_DIR)/pl031.flat
 tests-common += $(TEST_DIR)/tlbflush-code.flat
 tests-common += $(TEST_DIR)/locking-test.flat
 tests-common += $(TEST_DIR)/barrier-litmus-test.flat
+tests-common += $(TEST_DIR)/tcg-test.flat
 
 tests-all = $(tests-common) $(tests)
 all: directories $(tests-all)
diff --git a/arm/tcg-test-asm.S b/arm/tcg-test-asm.S
new file mode 100644
index 00000000..f58fac08
--- /dev/null
+++ b/arm/tcg-test-asm.S
@@ -0,0 +1,171 @@
+/* SPDX-License-Identifier: GPL-2.0-or-later */
+/*
+ * TCG Test assembler functions for armv7 tests.
+ *
+ * Copyright (C) 2016, Linaro Ltd, Alex Bennée <alex.bennee@linaro.org>
+ *
+ * These helper functions are written in pure asm to control the size
+ * of the basic blocks and ensure they fit neatly into page
+ * aligned chunks. The pattern of branches they follow is determined by
+ * the 32-bit seed they are passed. It should be the same for each set.
+ *
+ * Calling convention
+ *  - r0, iterations
+ *  - r1, jump pattern
+ *  - r2-r3, scratch
+ *
+ * Returns r0
+ */
+
+.arm
+
+.section .text
+
+/*
+ * Tight - all blocks should quickly be patched and should run
+ * very fast unless irqs or smc gets in the way
+ */
+
+.global tight_start
+tight_start:
+	subs	r0, r0, #1
+	beq	tight_end
+
+	ror	r1, r1, #1
+	tst	r1, #1
+	beq	tightA
+	b	tight_start
+
+tightA:
+	subs	r0, r0, #1
+	beq	tight_end
+
+	ror	r1, r1, #1
+	tst	r1, #1
+	beq	tightB
+	b	tight_start
+
+tightB:
+	subs	r0, r0, #1
+	beq	tight_end
+
+	ror	r1, r1, #1
+	tst	r1, #1
+	beq	tight_start
+	b	tightA
+
+.global tight_end
+tight_end:
+	mov	pc, lr
+
+/*
+ * Computed jumps cannot be hardwired into the basic blocks so each one
+ * will either cause an exit for the main execution loop or trigger an
+ * inline look up for the next block.
+ *
+ * There is some caching which should ameliorate the cost a little.
+ */
+
+	/* .align 13 == 2^13 == 8192 byte alignment */
+	.align	13
+	.global computed_start
+computed_start:
+	subs	r0, r0, #1
+	beq	computed_end
+
+	/* Jump table */
+	ror	r1, r1, #1
+	and	r2, r1, #1
+	adr	r3, computed_jump_table
+	ldr	r2, [r3, r2, lsl #2]
+	mov	pc, r2
+
+	b	computed_err
+
+computed_jump_table:
+	.word	computed_start
+	.word	computedA
+
+computedA:
+	subs	r0, r0, #1
+	beq	computed_end
+
+	/* Jump into code */
+	ror	r1, r1, #1
+	and	r2, r1, #1
+	adr	r3, 1f
+	add	r3, r3, r2, lsl #2
+	mov	pc, r3
+1:	b	computed_start
+	b	computedB
+
+	b	computed_err
+
+
+computedB:
+	subs	r0, r0, #1
+	beq	computed_end
+	ror	r1, r1, #1
+
+	/* Conditional register load */
+	adr	r3, computedA
+	tst	r1, #1
+	adreq	r3, computed_start
+	mov	pc, r3
+
+	b	computed_err
+
+computed_err:
+	mov	r0, #1
+	.global computed_end
+computed_end:
+	mov	pc, lr
+
+
+/*
+ * Page hopping
+ *
+ * Each block is in a different page, hence the blocks never get joined
+ */
+	/* .align 13 == 2^13 == 8192 byte alignment */
+	.align	13
+	.global paged_start
+paged_start:
+	subs	r0, r0, #1
+	beq	paged_end
+
+	ror	r1, r1, #1
+	tst	r1, #1
+	beq	pagedA
+	b	paged_start
+
+	/* .align 13 == 2^13 == 8192 byte alignment */
+	.align	13
+pagedA:
+	subs	r0, r0, #1
+	beq	paged_end
+
+	ror	r1, r1, #1
+	tst	r1, #1
+	beq	pagedB
+	b	paged_start
+
+	/* .align 13 == 2^13 == 8192 byte alignment */
+	.align	13
+pagedB:
+	subs	r0, r0, #1
+	beq	paged_end
+
+	ror	r1, r1, #1
+	tst	r1, #1
+	beq	paged_start
+	b	pagedA
+
+	/* .align 13 == 2^13 == 8192 byte alignment */
+	.align	13
+.global paged_end
+paged_end:
+	mov	pc, lr
+
+.global test_code_end
+test_code_end:
diff --git a/arm/tcg-test-asm64.S b/arm/tcg-test-asm64.S
new file mode 100644
index 00000000..e69a8c72
--- /dev/null
+++ b/arm/tcg-test-asm64.S
@@ -0,0 +1,170 @@
+/* SPDX-License-Identifier: GPL-2.0-or-later */
+/*
+ * TCG Test assembler functions for armv8 tests.
+ *
+ * Copyright (C) 2016, Linaro Ltd, Alex Bennée <alex.bennee@linaro.org>
+ *
+ * These helper functions are written in pure asm to control the size
+ * of the basic blocks and ensure they fit neatly into page
+ * aligned chunks. The pattern of branches they follow is determined by
+ * the 32-bit seed they are passed. It should be the same for each set.
+ *
+ * Calling convention
+ *  - x0, iterations
+ *  - x1, jump pattern
+ *  - x2-x3, scratch
+ *
+ * Returns x0
+ */
+
+.section .text
+
+/*
+ * Tight - all blocks should quickly be patched and should run
+ * very fast unless irqs or smc gets in the way
+ */
+
+.global tight_start
+tight_start:
+	subs	x0, x0, #1
+	beq	tight_end
+
+	ror	x1, x1, #1
+	tst	x1, #1
+	beq	tightA
+	b	tight_start
+
+tightA:
+	subs	x0, x0, #1
+	beq	tight_end
+
+	ror	x1, x1, #1
+	tst	x1, #1
+	beq	tightB
+	b	tight_start
+
+tightB:
+	subs	x0, x0, #1
+	beq	tight_end
+
+	ror	x1, x1, #1
+	tst	x1, #1
+	beq	tight_start
+	b	tightA
+
+.global tight_end
+tight_end:
+	ret
+
+/*
+ * Computed jumps cannot be hardwired into the basic blocks so each one
+ * will either cause an exit for the main execution loop or trigger an
+ * inline look up for the next block.
+ *
+ * There is some caching which should ameliorate the cost a little.
+ */
+
+	/* .align 13 == 2^13 == 8192 byte alignment */
+	.align	13
+	.global computed_start
+computed_start:
+	subs	x0, x0, #1
+	beq	computed_end
+
+	/* Jump table */
+	ror	x1, x1, #1
+	and	x2, x1, #1
+	adr	x3, computed_jump_table
+	ldr	x2, [x3, x2, lsl #3]
+	br	x2
+
+	b	computed_err
+
+computed_jump_table:
+	.quad	computed_start
+	.quad	computedA
+
+computedA:
+	subs	x0, x0, #1
+	beq	computed_end
+
+	/* Jump into code */
+	ror	x1, x1, #1
+	and	x2, x1, #1
+	adr	x3, 1f
+	add	x3, x3, x2, lsl #2
+	br	x3
+1:	b	computed_start
+	b	computedB
+
+	b	computed_err
+
+
+computedB:
+	subs	x0, x0, #1
+	beq	computed_end
+	ror	x1, x1, #1
+
+	/* Conditional register load */
+	adr	x2, computedA
+	adr	x3, computed_start
+	tst	x1, #1
+	csel	x2, x3, x2, eq
+	br	x2
+
+	b	computed_err
+
+computed_err:
+	mov	x0, #1
+	.global computed_end
+computed_end:
+	ret
+
+
+/*
+ * Page hopping
+ *
+ * Each block is in a different page, hence the blocks never get joined
+ */
+	/* .align 13 == 2^13 == 8192 byte alignment */
+	.align	13
+	.global paged_start
+paged_start:
+	subs	x0, x0, #1
+	beq	paged_end
+
+	ror	x1, x1, #1
+	tst	x1, #1
+	beq	pagedA
+	b	paged_start
+
+	/* .align 13 == 2^13 == 8192 byte alignment */
+	.align	13
+pagedA:
+	subs	x0, x0, #1
+	beq	paged_end
+
+	ror	x1, x1, #1
+	tst	x1, #1
+	beq	pagedB
+	b	paged_start
+
+	/* .align 13 == 2^13 == 8192 byte alignment */
+	.align	13
+pagedB:
+	subs	x0, x0, #1
+	beq	paged_end
+
+	ror	x1, x1, #1
+	tst	x1, #1
+	beq	paged_start
+	b	pagedA
+
+	/* .align 13 == 2^13 == 8192 byte alignment */
+	.align	13
+.global paged_end
+paged_end:
+	ret
+
+.global test_code_end
+test_code_end:
diff --git a/arm/tcg-test.c b/arm/tcg-test.c
new file mode 100644
index 00000000..3efec2b2
--- /dev/null
+++ b/arm/tcg-test.c
@@ -0,0 +1,338 @@
+// SPDX-License-Identifier: GPL-2.0-or-later
+/*
+ * ARM TCG Tests
+ *
+ * These tests are explicitly aimed at stretching the QEMU TCG engine.
+ */
+
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+
+#include
+
+#define MAX_CPUS 8
+
+/* These entry points are in the assembly code */
+extern int tight_start(uint32_t count, uint32_t pattern);
+extern int computed_start(uint32_t count, uint32_t pattern);
+extern int paged_start(uint32_t count, uint32_t pattern);
+extern uint32_t tight_end;
+extern uint32_t computed_end;
+extern uint32_t paged_end;
+extern unsigned long test_code_end;
+
+typedef int (*test_fn)(uint32_t count, uint32_t pattern);
+
+typedef struct {
+	const char *test_name;
+	bool should_pass;
+	test_fn start_fn;
+	uint32_t *code_end;
+} test_descr_t;
+
+/* Test array */
+static test_descr_t tests[] = {
+	/*
+	 * Tight chain.
+	 *
+	 * These are a bunch of basic blocks that have fixed branches in
+	 * a page aligned space. The branches taken are decided by a
+	 * pseudo-random bitmap for each CPU.
+	 *
+	 * Once the basic blocks have been chained together by the TCG they
+	 * should run until they reach their block count. This will be the
+	 * most efficient mode in which generated code is run. The only other
+	 * exits will be caused by interrupts or TB invalidation.
+	 */
+	{ "tight", true, tight_start, &tight_end },
+	/*
+	 * Computed jumps.
+	 *
+	 * A bunch of basic blocks which just do computed jumps so the basic
+	 * block is never chained but they are all within a page (maybe not
+	 * required). This will exercise the cache lookup but not the new
+	 * generation.
+	 */
+	{ "computed", true, computed_start, &computed_end },
+	/*
+	 * Page ping pong.
+	 *
+	 * The blocks are separated by PAGE_SIZE so they can never
+	 * be chained together.
+	 */
+	{ "paged", true, paged_start, &paged_end }
+};
+
+static test_descr_t *test;
+
+static int iterations = 1000000;
+static int rounds = 1000;
+static int mod_freq = 5;
+static uint32_t pattern[MAX_CPUS];
+
+/* control flags */
+static int smc;
+static int irq;
+static int check_irq;
+
+/* IRQ accounting */
+#define MAX_IRQ_IDS 16
+static int irqv;
+static unsigned long irq_sent_ts[MAX_CPUS][MAX_CPUS][MAX_IRQ_IDS];
+
+static int irq_recv[MAX_CPUS];
+static int irq_sent[MAX_CPUS];
+static int irq_overlap[MAX_CPUS];  /* if ts > now, i.e. a race */
+static int irq_slow[MAX_CPUS];     /* if delay > threshold */
+static unsigned long irq_latency[MAX_CPUS]; /* cumulative time */
+
+static int errors[MAX_CPUS];
+
+static cpumask_t smp_test_complete;
+
+static cpumask_t ready;
+
+static void wait_on_ready(void)
+{
+	cpumask_set_cpu(smp_processor_id(), &ready);
+	while (!cpumask_full(&ready))
+		cpu_relax();
+}
+
+/*
+ * This triggers TCG's SMC detection by writing values to the executing
+ * code pages. We are not actually modifying the instructions and the
+ * underlying code will remain unchanged.
+ * However this should trigger invalidation of the Translation Blocks.
+ */
+
+static void trigger_smc_detection(uint32_t *start, uint32_t *end)
+{
+	volatile uint32_t *ptr = start;
+
+	while (ptr < end) {
+		uint32_t inst = *ptr;
+		*ptr++ = inst;
+	}
+}
+
+/* Handler for receiving IRQs */
+
+static void irq_handler(struct pt_regs *regs __unused)
+{
+	unsigned long then, now = get_cntvct();
+	int cpu = smp_processor_id();
+	u32 irqstat = gic_read_iar();
+	u32 irqnr = gic_iar_irqnr(irqstat);
+
+	if (irqnr != GICC_INT_SPURIOUS) {
+		unsigned int src_cpu = (irqstat >> 10) & 0x7;
+
+		gic_write_eoir(irqstat);
+		irq_recv[cpu]++;
+
+		then = irq_sent_ts[src_cpu][cpu][irqnr];
+
+		if (then > now) {
+			irq_overlap[cpu]++;
+		} else {
+			unsigned long latency = (now - then);
+
+			if (latency > 30000)
+				irq_slow[cpu]++;
+			else
+				irq_latency[cpu] += latency;
+		}
+	}
+}
+
+/*
+ * This triggers cross-CPU IRQs. Each IRQ should cause the basic block
+ * execution to finish and the main run-loop to be entered again.
+ */
+static int send_cross_cpu_irqs(int this_cpu, int irq)
+{
+	int cpu, sent = 0;
+	cpumask_t mask;
+
+	cpumask_copy(&mask, &cpu_present_mask);
+	cpumask_clear_cpu(this_cpu, &mask);
+
+	for_each_present_cpu(cpu) {
+		if (cpu != this_cpu) {
+			irq_sent_ts[this_cpu][cpu][irq] = get_cntvct();
+			sent++;
+		}
+	}
+
+	gic_ipi_send_mask(irq, &mask);
+
+	return sent;
+}
+
+static void do_test(void)
+{
+	int cpu = smp_processor_id();
+	int i, irq_id = 0;
+
+	report_info("CPU%d: online and setting up with pattern 0x%"PRIx32,
+		    cpu, pattern[cpu]);
+
+	if (irq) {
+		gic_enable_defaults();
+#ifdef __arm__
+		install_exception_handler(EXCPTN_IRQ, irq_handler);
+#else
+		install_irq_handler(EL1H_IRQ, irq_handler);
+#endif
+		local_irq_enable();
+
+		wait_on_ready();
+	}
+
+	for (i = 0; i < rounds; i++) {
+		/* Enter the blocks */
+		errors[cpu] += test->start_fn(iterations, pattern[cpu]);
+
+		if ((i + cpu) % mod_freq == 0) {
+			if (smc)
+				trigger_smc_detection((uint32_t *) test->start_fn,
+						      test->code_end);
+
+			if (irq) {
+				irq_sent[cpu] += send_cross_cpu_irqs(cpu, irq_id);
+				irq_id++;
+				irq_id = irq_id % 15;
+			}
+		}
+	}
+
+	/* ensure everything complete before we finish */
+	smp_wmb();
+
+	cpumask_set_cpu(cpu, &smp_test_complete);
+	if (cpu != 0)
+		halt();
+}
+
+static void report_irq_stats(int cpu)
+{
+	int recv = irq_recv[cpu];
+	int race = irq_overlap[cpu];
+	int slow = irq_slow[cpu];
+
+	unsigned long avg_latency = irq_latency[cpu] / (recv - (race + slow));
+
+	report_info("CPU%d: %d irqs (%d races, %d slow, %ld ticks avg latency)",
+		    cpu, recv, race, slow, avg_latency);
+}
+
+
+static void setup_and_run_tcg_test(void)
+{
+	static const unsigned char seed[] = "tcg-test";
+	struct isaac_ctx prng_context;
+	int cpu;
+	int total_err = 0, total_sent = 0, total_recv = 0;
+
+	isaac_init(&prng_context, &seed[0], sizeof(seed));
+
+	/* boot other CPUs */
+	for_each_present_cpu(cpu) {
+		pattern[cpu] = isaac_next_uint32(&prng_context);
+
+		if (cpu == 0)
+			continue;
+
+		smp_boot_secondary(cpu, do_test);
+	}
+
+	do_test();
+
+	while (!cpumask_full(&smp_test_complete))
+		cpu_relax();
+
+	/* Ensure everything completes before we check the data */
+	smp_mb();
+
+	/* Now total up errors and irqs */
+	for_each_present_cpu(cpu) {
+		total_err += errors[cpu];
+		total_sent += irq_sent[cpu];
+		total_recv += irq_recv[cpu];
+
+		if (check_irq)
+			report_irq_stats(cpu);
+	}
+
+	if (check_irq)
+		report(total_sent == total_recv && total_err == 0,
+		       "%d IRQs sent, %d received, %d errors",
+		       total_sent, total_recv, total_err);
+	else
+		report(total_err == 0, "%d errors, IRQs not checked", total_err);
+}
+
+int main(int argc, char **argv)
+{
+	int i;
+	unsigned int j;
+
+	for (i = 0; i < argc; i++) {
+		char *arg = argv[i];
+
+		for (j = 0; j < ARRAY_SIZE(tests); j++) {
+			if (strcmp(arg, tests[j].test_name) == 0)
+				test = &tests[j];
+		}
+
+		/* Test modifiers */
+		if (strstr(arg, "mod=") != NULL) {
+			char *p = strstr(arg, "=");
+
+			mod_freq = atol(p+1);
+		}
+
+		if (strstr(arg, "rounds=") != NULL) {
+			char *p = strstr(arg, "=");
+
+			rounds = atol(p+1);
+		}
+
+		if (strcmp(arg, "smc") == 0) {
+			unsigned long test_start = (unsigned long) &tight_start;
+			unsigned long test_end = (unsigned long) &test_code_end;
+
+			smc = 1;
+			mmu_set_range_ptes(mmu_idmap, test_start, test_start,
+					   test_end, __pgprot(PTE_WBWA));
+
+			report_prefix_push("smc");
+		}
+
+		if (strcmp(arg, "irq") == 0) {
+			irq = 1;
+			if (!gic_init())
+				report_abort("No supported gic present!");
+			irqv = gic_version();
+			report_prefix_push("irq");
+		}
+
+		if (strcmp(arg, "check_irq") == 0)
+			check_irq = 1;
+	}
+
+	if (test) {
+		/* ensure args visible to all cores */
+		smp_mb();
+		setup_and_run_tcg_test();
+	} else {
+		report(false, "Unknown test");
+	}
+
+	return report_summary();
+}
diff --git a/arm/unittests.cfg b/arm/unittests.cfg
index 607f5641..8adbebd8 100644
--- a/arm/unittests.cfg
+++ b/arm/unittests.cfg
@@ -327,3 +327,87 @@ smp = 2
 extra_params = -append 'sal_barrier'
 groups = nodefault mttcg barrier
+# TCG Tests
+[tcg::tight]
+file = tcg-test.flat
+smp = $(($MAX_SMP>4?4:$MAX_SMP))
+extra_params = -append 'tight'
+groups = nodefault mttcg
+accel = tcg
+
+[tcg::tight-smc]
+file = tcg-test.flat
+smp = $(($MAX_SMP>4?4:$MAX_SMP))
+extra_params = -append 'tight smc' -accel tcg,tb-size=1
+groups = nodefault mttcg
+accel = tcg
+
+[tcg::tight-irq]
+file = tcg-test.flat
+smp = $(($MAX_SMP>4?4:$MAX_SMP))
+extra_params = -append 'tight irq'
+groups = nodefault mttcg
+accel = tcg
+
+[tcg::tight-smc-irq]
+file = tcg-test.flat
+smp = $(($MAX_SMP>4?4:$MAX_SMP))
+extra_params = -append 'tight smc irq'
+groups = nodefault mttcg
+accel = tcg
+
+[tcg::computed]
+file = tcg-test.flat
+smp = $(($MAX_SMP>4?4:$MAX_SMP))
+extra_params = -append 'computed'
+groups = nodefault mttcg
+accel = tcg
+
+[tcg::computed-smc]
+file = tcg-test.flat
+smp = $(($MAX_SMP>4?4:$MAX_SMP))
+extra_params = -append 'computed smc'
+groups = nodefault mttcg
+accel = tcg
+
+[tcg::computed-irq]
+file = tcg-test.flat
+smp = $(($MAX_SMP>4?4:$MAX_SMP))
+extra_params = -append 'computed irq'
+groups = nodefault mttcg
+accel = tcg
+
+[tcg::computed-smc-irq]
+file = tcg-test.flat
+smp = $(($MAX_SMP>4?4:$MAX_SMP))
+extra_params = -append 'computed smc irq'
+groups = nodefault mttcg
+accel = tcg
+
+[tcg::paged]
+file = tcg-test.flat
+smp = $(($MAX_SMP>4?4:$MAX_SMP))
+extra_params = -append 'paged'
+groups = nodefault mttcg
+accel = tcg
+
+[tcg::paged-smc]
+file = tcg-test.flat
+smp = $(($MAX_SMP>4?4:$MAX_SMP))
+extra_params = -append 'paged smc'
+groups = nodefault mttcg
+accel = tcg
+
+[tcg::paged-irq]
+file = tcg-test.flat
+smp = $(($MAX_SMP>4?4:$MAX_SMP))
+extra_params = -append 'paged irq'
+groups = nodefault mttcg
+accel = tcg
+
+[tcg::paged-smc-irq]
+file = tcg-test.flat
+smp = $(($MAX_SMP>4?4:$MAX_SMP))
+extra_params = -append 'paged smc irq'
+groups = nodefault mttcg
+accel = tcg
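A side note on the config above (illustrative, not part of the patch):
the `smp = $(($MAX_SMP>4?4:$MAX_SMP))` lines rely on POSIX shell
arithmetic expansion with the ternary operator, capping each test's
vCPU count at four regardless of how large $MAX_SMP is:

```shell
# Cap the vCPU count at 4, mirroring the smp lines in unittests.cfg
MAX_SMP=8
smp=$(($MAX_SMP>4?4:$MAX_SMP))
echo "$smp"   # prints 4

MAX_SMP=2
smp=$(($MAX_SMP>4?4:$MAX_SMP))
echo "$smp"   # prints 2
```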