From patchwork Thu Nov 24 16:10:30 2016
X-Patchwork-Submitter: Alex Bennée
X-Patchwork-Id: 9446029
From: Alex Bennée <alex.bennee@linaro.org>
To: kvm@vger.kernel.org, linux-arm-kernel@lists.infradead.org,
 kvmarm@lists.cs.columbia.edu, christoffer.dall@linaro.org,
 marc.zyngier@arm.com
Subject: [kvm-unit-tests PATCH v7 08/11] arm/tlbflush-data: Add TLB flush
 during data writes test
Date: Thu, 24 Nov 2016 16:10:30 +0000
Message-Id: <20161124161033.11456-9-alex.bennee@linaro.org>
In-Reply-To: <20161124161033.11456-1-alex.bennee@linaro.org>
References: <20161124161033.11456-1-alex.bennee@linaro.org>
MIME-Version: 1.0
Cc: mttcg@listserver.greensocs.com, peter.maydell@linaro.org,
 claudio.fontana@huawei.com, nikunj@linux.vnet.ibm.com,
 jan.kiszka@siemens.com, Mark Rutland, mark.burton@greensocs.com,
 a.rigo@virtualopensystems.com, qemu-devel@nongnu.org, cota@braap.org,
 serge.fdrv@gmail.com, pbonzini@redhat.com, bobby.prani@gmail.com,
 rth@twiddle.net, Alex Bennée, fred.konrad@greensocs.com

This test is the cousin of the tlbflush-code test. Instead of flushing
running code it re-maps virtual addresses while a buffer is being
filled. It then audits the results, checking for writes that have
ended up in the wrong place. While tlbflush-code exercises QEMU's
translation invalidation logic, this test stresses the SoftMMU cputlb
code and ensures it is semantically correct.
The test optionally takes two parameters for debugging:

   cycles - change the default number of test iterations
   page   - flush pages individually instead of all

Signed-off-by: Alex Bennée
CC: Mark Rutland
---
 arm/Makefile.common |   2 +
 arm/tlbflush-data.c | 401 ++++++++++++++++++++++++++++++++++++++++++++++++++++
 arm/unittests.cfg   |  12 ++
 3 files changed, 415 insertions(+)
 create mode 100644 arm/tlbflush-data.c

diff --git a/arm/Makefile.common b/arm/Makefile.common
index de99a6e..528166d 100644
--- a/arm/Makefile.common
+++ b/arm/Makefile.common
@@ -14,6 +14,7 @@ tests-common += $(TEST_DIR)/spinlock-test.flat
 tests-common += $(TEST_DIR)/pci-test.flat
 tests-common += $(TEST_DIR)/gic.flat
 tests-common += $(TEST_DIR)/tlbflush-code.flat
+tests-common += $(TEST_DIR)/tlbflush-data.flat
 
 all: test_cases
 
@@ -83,3 +84,4 @@ test_cases: $(generated_files) $(tests-common) $(tests)
 $(TEST_DIR)/selftest.o $(cstart.o): $(asm-offsets)
 
 $(TEST_DIR)/tlbflush-code.elf: $(cstart.o) $(TEST_DIR)/tlbflush-code.o
+$(TEST_DIR)/tlbflush-data.elf: $(cstart.o) $(TEST_DIR)/tlbflush-data.o
diff --git a/arm/tlbflush-data.c b/arm/tlbflush-data.c
new file mode 100644
index 0000000..7920179
--- /dev/null
+++ b/arm/tlbflush-data.c
@@ -0,0 +1,401 @@
+/*
+ * TLB Flush Race Tests
+ *
+ * These tests are designed to test for incorrect TLB flush semantics
+ * under emulation. The initial CPU will set all the others to work
+ * writing to a set of pages. It will then re-map one of the pages
+ * back and forth while recording the timestamps of when each page was
+ * active. The test fails if a write was detected on a page after the
+ * tlbflush switching to a new page should have completed.
+ *
+ * Copyright (C) 2016, Linaro, Alex Bennée
+ *
+ * This work is licensed under the terms of the GNU LGPL, version 2.
+ */
+
+#include <libcflat.h>
+#include <asm/smp.h>
+#include <asm/cpumask.h>
+#include <asm/barrier.h>
+#include <asm/mmu.h>
+
+#define NR_TIMESTAMPS ((PAGE_SIZE/sizeof(u64)) << 2)
+#define NR_AUDIT_RECORDS 16384
+#define NR_DYNAMIC_PAGES 3
+#define MAX_CPUS 8
+
+#define MIN(a, b) ((a) < (b) ? (a) : (b))
+
+typedef struct {
+	u64 timestamps[NR_TIMESTAMPS];
+} write_buffer;
+
+typedef struct {
+	write_buffer *newbuf;
+	u64 time_before_flush;
+	u64 time_after_flush;
+} audit_rec_t;
+
+typedef struct {
+	audit_rec_t records[NR_AUDIT_RECORDS];
+} audit_buffer;
+
+typedef struct {
+	write_buffer *stable_pages;
+	write_buffer *dynamic_pages[NR_DYNAMIC_PAGES];
+	audit_buffer *audit;
+	unsigned int flush_count;
+} test_data_t;
+
+static test_data_t test_data[MAX_CPUS];
+
+static cpumask_t ready;
+static cpumask_t complete;
+
+static bool test_complete;
+static bool flush_verbose;
+static bool flush_by_page;
+static int test_cycles = 3;
+static int secondary_cpus;
+
+static write_buffer * alloc_test_pages(void)
+{
+	write_buffer *pg;
+	pg = calloc(NR_TIMESTAMPS, sizeof(u64));
+	return pg;
+}
+
+static void setup_pages_for_cpu(int cpu)
+{
+	unsigned int i;
+
+	test_data[cpu].stable_pages = alloc_test_pages();
+
+	for (i = 0; i < NR_DYNAMIC_PAGES; i++)
+		test_data[cpu].dynamic_pages[i] = alloc_test_pages();
+
+	/* the audit buffer */
+	test_data[cpu].audit = calloc(NR_AUDIT_RECORDS, sizeof(audit_rec_t));
+}
+
+static audit_rec_t * get_audit_record(audit_buffer *buf, unsigned int record)
+{
+	return &buf->records[record];
+}
+
+/* Sync on a given cpumask */
+static void wait_on(int cpu, cpumask_t *mask)
+{
+	cpumask_set_cpu(cpu, mask);
+	while (!cpumask_full(mask))
+		cpu_relax();
+}
+
+static uint64_t sync_start(void)
+{
+	const uint64_t gate_mask = ~0x7ff;
+	uint64_t gate, now;
+	gate = get_cntvct() & gate_mask;
+	do {
+		now = get_cntvct();
+	} while ((now & gate_mask) == gate);
+
+	return now;
+}
+
+static void do_page_writes(void)
+{
+	unsigned int i, runs = 0;
+	int cpu = smp_processor_id();
+	write_buffer *stable_pages = test_data[cpu].stable_pages;
+	write_buffer *moving_page = test_data[cpu].dynamic_pages[0];
+
+	printf("CPU%d: ready %p/%p @ 0x%08" PRIx64 "\n",
+		cpu, stable_pages, moving_page, get_cntvct());
+
+	while (!test_complete) {
+		u64 run_start, run_end;
+
+		smp_mb();
+		wait_on(cpu, &ready);
+		run_start =
+			sync_start();
+
+		for (i = 0; i < NR_TIMESTAMPS; i++) {
+			u64 ts = get_cntvct();
+			moving_page->timestamps[i] = ts;
+			stable_pages->timestamps[i] = ts;
+		}
+
+		run_end = get_cntvct();
+		printf("CPU%d: run %d 0x%" PRIx64 "->0x%" PRIx64 " (%" PRId64 " cycles)\n",
+			cpu, runs++, run_start, run_end, run_end - run_start);
+
+		/* wait on completion - gets cleared by the main thread */
+		wait_on(cpu, &complete);
+	}
+}
+
+
+/*
+ * This is the core of the test. Timestamps are taken either side of
+ * the updating of the page table and the flush instruction. By
+ * keeping track of when the page mapping is changed we can detect any
+ * writes that shouldn't have made it to the other pages.
+ *
+ * This isn't the recommended way to update the page table. ARM
+ * recommends break-before-make so accesses that are in flight can
+ * trigger faults that can be handled cleanly.
+ */
+
+/* This mimics __flush_tlb_range from the kernel, doing a series of
+ * flush operations and then the dsb() to complete. */
+static void flush_pages(unsigned long start, unsigned long end)
+{
+	unsigned long addr;
+	start = start >> 12;
+	end = end >> 12;
+
+	dsb(ishst);
+	for (addr = start; addr < end; addr += 1 << (PAGE_SHIFT - 12)) {
+#if defined(__aarch64__)
+		asm("tlbi vaae1is, %0" :: "r" (addr));
+#else
+		asm volatile("mcr p15, 0, %0, c8, c7, 3" :: "r" (addr));
+#endif
+	}
+	dsb(ish);
+}
+
+static void remap_one_page(test_data_t *data)
+{
+	u64 ts_before, ts_after;
+	/* cycle through the dynamic pages */
+	int pg = (data->flush_count % NR_DYNAMIC_PAGES);
+	write_buffer *dynamic_pages_vaddr = data->dynamic_pages[0];
+	write_buffer *newbuf_paddr = data->dynamic_pages[pg];
+	write_buffer *end_page_paddr = newbuf_paddr + 1;
+
+	ts_before = get_cntvct();
+	/* update the page table */
+	mmu_set_range_ptes(mmu_idmap,
+			(unsigned long) dynamic_pages_vaddr,
+			(unsigned long) newbuf_paddr,
+			(unsigned long) end_page_paddr,
+			__pgprot(PTE_WBWA));
+	/* until the flush + isb() completes, writes may still go to the
+	 * old address */
+	if (flush_by_page) {
+		flush_pages((unsigned long)dynamic_pages_vaddr,
+			    (unsigned long)(dynamic_pages_vaddr + 1));
+	} else {
+		flush_tlb_all();
+	}
+	ts_after = get_cntvct();
+
+	if (data->flush_count < NR_AUDIT_RECORDS) {
+		audit_rec_t *rec = get_audit_record(data->audit, data->flush_count);
+		rec->newbuf = newbuf_paddr;
+		rec->time_before_flush = ts_before;
+		rec->time_after_flush = ts_after;
+	}
+	data->flush_count++;
+}
+
+static int check_pages(int cpu, char *msg,
+		write_buffer *base_page, write_buffer *test_page,
+		audit_buffer *audit, unsigned int flushes)
+{
+	write_buffer *prev_page = base_page;
+	unsigned int empty = 0, write = 0, late = 0, weird = 0;
+	unsigned int ts_index = 0, audit_index;
+	u64 ts;
+
+	/* For each audit record */
+	for (audit_index = 0; audit_index < MIN(flushes, NR_AUDIT_RECORDS); audit_index++) {
+		audit_rec_t *rec = get_audit_record(audit, audit_index);
+
+		do {
+			/* Work through timestamps until we overtake
+			 * this audit record */
+			ts = test_page->timestamps[ts_index];
+
+			if (ts == 0) {
+				empty++;
+			} else if (ts < rec->time_before_flush) {
+				if (test_page == prev_page) {
+					write++;
+				} else {
+					late++;
+				}
+			} else if (ts >= rec->time_before_flush
+				   && ts <= rec->time_after_flush) {
+				if (test_page == prev_page
+				    || test_page == rec->newbuf) {
+					write++;
+				} else {
+					weird++;
+				}
+			} else if (ts > rec->time_after_flush) {
+				if (test_page == rec->newbuf) {
+					write++;
+				}
+				/* It's possible the ts is way ahead
+				 * of the current record so we can't
+				 * call a non-match weird...
+				 *
+				 * Time to skip to the next audit record
+				 */
+				break;
+			}
+
+			ts = test_page->timestamps[ts_index++];
+		} while (ts <= rec->time_after_flush && ts_index < NR_TIMESTAMPS);
+
+		/* Next record */
+		prev_page = rec->newbuf;
+	} /* for each audit record */
+
+	if (flush_verbose) {
+		printf("CPU%d: %s %p => %p %u/%u/%u/%u (0/OK/L/?)"
+		       " = %u total\n",
+		       cpu, msg, test_page, base_page,
+		       empty, write, late, weird, empty + write + late + weird);
+	}
+
+	return weird;
+}
+
+static int audit_cpu_pages(int cpu, test_data_t *data)
+{
+	unsigned int pg, writes = 0, ts_index = 0;
+	write_buffer *test_page;
+	int errors = 0;
+
+	/* first the stable page */
+	test_page = data->stable_pages;
+	do {
+		if (test_page->timestamps[ts_index++]) {
+			writes++;
+		}
+	} while (ts_index < NR_TIMESTAMPS);
+
+	if (writes != ts_index) {
+		errors += 1;
+	}
+
+	if (flush_verbose) {
+		printf("CPU%d: stable page %p %u writes\n",
+			cpu, test_page, writes);
+	}
+
+	/* Restore the mapping for the dynamic page */
+	test_page = data->dynamic_pages[0];
+
+	mmu_set_range_ptes(mmu_idmap,
+			(unsigned long) test_page,
+			(unsigned long) test_page,
+			(unsigned long) &test_page[1],
+			__pgprot(PTE_WBWA));
+	flush_tlb_all();
+
+	for (pg = 0; pg < NR_DYNAMIC_PAGES; pg++) {
+		errors += check_pages(cpu, "dynamic page", test_page,
+				data->dynamic_pages[pg],
+				data->audit, data->flush_count);
+	}
+
+	/* reset for next run */
+	memset(data->stable_pages, 0, sizeof(write_buffer));
+	for (pg = 0; pg < NR_DYNAMIC_PAGES; pg++) {
+		memset(data->dynamic_pages[pg], 0, sizeof(write_buffer));
+	}
+	memset(data->audit, 0, sizeof(audit_buffer));
+	data->flush_count = 0;
+	smp_mb();
+
+	report("CPU%d: checked, errors: %d", errors == 0, cpu, errors);
+	return errors;
+}
+
+static void do_page_flushes(void)
+{
+	int i, cpu;
+
+	printf("CPU0: ready @ 0x%08" PRIx64 "\n", get_cntvct());
+
+	for (i = 0; i < test_cycles; i++) {
+		unsigned int flushes = 0;
+		u64 run_start, run_end;
+
+		smp_mb();
+		wait_on(0, &ready);
+		run_start = sync_start();
+
+		/* keep re-mapping pages until the writers are done */
+		while (cpumask_weight(&complete) < secondary_cpus) {
+			for_each_present_cpu(cpu) {
+				if (cpu == 0)
+					continue;
+				remap_one_page(&test_data[cpu]);
+				flushes++;
+			}
+		}
+
+		run_end = get_cntvct();
+		printf("CPU0: run %d 0x%" PRIx64 "->0x%" PRIx64 " (%" PRId64 " cycles, %u flushes)\n",
+			i, run_start, run_end, run_end - run_start, flushes);
+
+		/* Reset our ready mask for next cycle */
+		cpumask_clear_cpu(0, &ready);
+		smp_mb();
+		wait_on(0, &complete);
+
+		/* Check for discrepancies */
+		for_each_present_cpu(cpu) {
+			if (cpu == 0)
+				continue;
+			audit_cpu_pages(cpu, &test_data[cpu]);
+		}
+	}
+
+	test_complete = true;
+	smp_mb();
+	cpumask_set_cpu(0, &ready);
+	cpumask_set_cpu(0, &complete);
+}
+
+int main(int argc, char **argv)
+{
+	int cpu, i;
+
+	for (i = 0; i < argc; i++) {
+		char *arg = argv[i];
+
+		if (strcmp(arg, "verbose") == 0)
+			flush_verbose = true;
+
+		if (strcmp(arg, "page") == 0)
+			flush_by_page = true;
+
+		if (strstr(arg, "cycles=") != NULL) {
+			char *p = strstr(arg, "=");
+			test_cycles = atol(p + 1);
+		}
+	}
+
+	for_each_present_cpu(cpu) {
+		if (cpu == 0)
+			continue;
+		setup_pages_for_cpu(cpu);
+		smp_boot_secondary(cpu, do_page_writes);
+		secondary_cpus++;
+	}
+
+	do_page_flushes();
+
+	return report_summary();
+}
diff --git a/arm/unittests.cfg b/arm/unittests.cfg
--- a/arm/unittests.cfg
+++ b/arm/unittests.cfg
 [tlbflush-code::page_self]
 file = tlbflush-code.flat
 smp = $(($MAX_SMP>4?4:$MAX_SMP))
 extra_params = -append 'page self'
 groups = tlbflush
+
+[tlbflush-data::all] +file = tlbflush-data.flat +smp = $(($MAX_SMP>4?4:$MAX_SMP)) +groups = tlbflush + +[tlbflush-data::page] +file = tlbflush-data.flat +smp = $(($MAX_SMP>4?4:$MAX_SMP)) +extra_params = -append "page" +groups = tlbflush +