From patchwork Mon Feb 26 10:12:07 2024
X-Patchwork-Submitter: Nicholas Piggin
X-Patchwork-Id: 13571855
From: Nicholas Piggin <npiggin@gmail.com>
To: Thomas Huth
Cc: Nicholas Piggin, Laurent Vivier, Andrew Jones, Paolo Bonzini,
 Joel Stanley, linuxppc-dev@lists.ozlabs.org, kvm@vger.kernel.org
Subject: [kvm-unit-tests PATCH 21/32] powerpc: Add atomics tests
Date: Mon, 26 Feb 2024 20:12:07 +1000
Message-ID: <20240226101218.1472843-22-npiggin@gmail.com>
X-Mailer: git-send-email 2.42.0
In-Reply-To: <20240226101218.1472843-1-npiggin@gmail.com>
References: <20240226101218.1472843-1-npiggin@gmail.com>

Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
---
 powerpc/Makefile.common |   1 +
 powerpc/atomics.c       | 373 ++++++++++++++++++++++++++++++++++++++++
 powerpc/unittests.cfg   |   9 +
 3 files changed, 383 insertions(+)
 create mode 100644 powerpc/atomics.c

diff --git a/powerpc/Makefile.common b/powerpc/Makefile.common
index 02af54b83..b6f9b3b85 100644
--- a/powerpc/Makefile.common
+++ b/powerpc/Makefile.common
@@ -11,6 +11,7 @@ tests-common = \
 	$(TEST_DIR)/spapr_hcall.elf \
 	$(TEST_DIR)/rtas.elf \
 	$(TEST_DIR)/emulator.elf \
+	$(TEST_DIR)/atomics.elf \
 	$(TEST_DIR)/tm.elf \
 	$(TEST_DIR)/smp.elf \
 	$(TEST_DIR)/sprs.elf \
diff --git a/powerpc/atomics.c b/powerpc/atomics.c
new file mode 100644
index 000000000..d79e70eaa
--- /dev/null
+++ b/powerpc/atomics.c
@@ -0,0 +1,373 @@
+/*
+ * Test some powerpc instructions
+ *
+ * Copyright 2024 Nicholas Piggin, IBM Corp.
+ */
+#include <libcflat.h>
+#include <util.h>
+#include <migrate.h>
+#include <alloc.h>
+#include <asm/handlers.h>
+#include <asm/processor.h>
+#include <asm/time.h>
+#include <asm/barrier.h>
+#include <asm/smp.h>
+
+static bool do_migrate;
+static bool do_record;
+
+#define RSV_SIZE 128
+
+static uint8_t granule[RSV_SIZE] __attribute__((__aligned__(RSV_SIZE)));
+
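+/*
+ * Simple lwarx/stwcx. spinlock. lwarx loads the lock word and sets a
+ * reservation on its granule; stwcx. performs the store only if the
+ * reservation is still held, setting CR0.EQ on success. Retry while
+ * the lock is observed held or the conditional store fails. The
+ * trailing lwsync orders the critical section after the acquiring
+ * load; the lwsync in unlock orders it before the releasing store.
+ */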
(no reservation)"); + + *var = 0; + asm volatile ("li %0,0;" + "lwarx %1,0,%4;" + "stw %3,0(%4);" + "stwcx. %2,0,%4;" + "bne- 1f;" + "li %0,1;" + "1:" + : "=&r"(result), "=&r"(old) + : "r"(1), "r"(2), "r"(var) : "cr0", "memory"); + /* This is implementation specific, so don't fail */ + if (result == 0 && *var == 2) + report(true, "failed stwcx. (intervening store)"); + else + report(true, "succeeded stwcx. (intervening store)"); + + handle_exception(0x600, interrupt_handler, NULL); + handle_exception(0x700, interrupt_handler, NULL); + + /* Implementations may not necessarily invoke the alignment interrupt */ + old = 10; + *var = 0; + asm volatile ( + "lwarx %0,0,%1;" + : "+&r"(old) : "r"((char *)var + 1)); + report(old == 10 && got_interrupt && recorded_regs.trap == 0x600, "unaligned lwarx causes fault"); + got_interrupt = false; + + /* + * Unaligned stwcx. is more difficult to test, at least under QEMU, + * the store does not proceed if there is no matching reservation, so + * the alignment handler does not get invoked. This is okay according + * to the Power ISA (unalignment does not necessarily invoke the + * alignment interrupt). But POWER CPUs do cause alignment interrupt. + */ + *var = 0; + asm volatile ( + "lwarx %0,0,%2;" + "stwcx. %1,0,%3;" + : "=&r"(old) : "r"(1), "r"(var), "r"((char *)var+1) : "cr0", "memory"); + report(old == 0 && *var == 0 && got_interrupt && recorded_regs.trap == 0x600, "unaligned stwcx. causes fault"); + got_interrupt = false; + + handle_exception(0x600, NULL, NULL); + +} + +static void test_lqarx_stqcx(int argc, char *argv[]) +{ + union { + __int128_t var; + struct { +#if __BYTE_ORDER__ == __ORDER_LITTLE_ENDIAN__ + unsigned long var1; + unsigned long var2; +#else + unsigned long var2; + unsigned long var1; +#endif + }; + } var __attribute__((aligned(16))); + register unsigned long new1 asm("r8"); + register unsigned long new2 asm("r9"); + register unsigned long old1 asm("r10"); + register unsigned long old2 asm("r11"); + unsigned int result; + + var.var1 = 1; + var.var2 = 2; + + (void)new2; + (void)old2; + + old1 = 0; + old2 = 0; + new1 = 3; + new2 = 4; + asm volatile ("1:" + "lqarx %0,0,%4;" + "stqcx. %2,0,%4;" + "bne- 1b;" + : "=&r"(old1), "=&r"(old2) + : "r"(new1), "r"(new2), "r"(&var) + : "cr0", "memory"); + + report(old1 == 2 && old2 == 1 && var.var1 == 4 && var.var2 == 3, + "simple update"); + + var.var1 = 1; + var.var2 = 2; + new1 = 3; + new2 = 4; + asm volatile ("li %0,0;" + "stqcx. %1,0,%3;" + "stqcx. %1,0,%3;" + "bne- 1f;" + "li %0,1;" + "1:" + : "=&r"(result) + : "r"(new1), "r"(new2), "r"(&var) + : "cr0", "memory"); + report(result == 0 && var.var1 == 1 && var.var2 == 2, + "failed stqcx. (no reservation)"); + + var.var1 = 1; + var.var2 = 2; + new1 = 3; + new2 = 4; + asm volatile ("li %0,0;" + "lqarx %1,0,%6;" + "std %5,0(%6);" + "stqcx. %3,0,%6;" + "bne- 1f;" + "li %0,1;" + "1:" + : "=&r"(result), "=&r"(old1), "=&r"(old2) + : "r"(new1), "r"(new2), "r"(0), "r"(&var) + : "cr0", "memory"); + /* This is implementation specific, so don't fail */ + if (result == 0 && (var.var1 == 0 || var.var2 == 0)) + report(true, "failed stqcx. (intervening store)"); + else + report(true, "succeeded stqcx. 
(intervening store)"); +} + +static void test_migrate_reserve(int argc, char *argv[]) +{ + unsigned int *var = (unsigned int *)granule; + unsigned int old; + int i; + int succeed = 0; + + if (!do_migrate) + return; + + for (i = 0; i < 10; i++) { + *var = 0x12345; + asm volatile ("lwarx %0,0,%1" : "=&r"(old) : "r"(var) : "memory"); + migrate_quiet(); + asm volatile ("stwcx. %0,0,%1" : : "r"(0xf00d), "r"(var) : "cr0", "memory"); + if (*var == 0xf00d) + succeed++; + } + + if (do_record) { + /* + * Running under TCG record-replay, reservations must not + * be lost by migration + */ + report(succeed > 0, "migrated reservation is not lost"); + } else { + report(succeed == 0, "migrated reservation is lost"); + } + + report_prefix_pop(); +} + +#define ITERS 10000000 +static int test_counter = 0; +static void test_inc_perf(int argc, char *argv[]) +{ + int i; + uint64_t tb1, tb2; + + tb1 = get_tb(); + for (i = 0; i < ITERS; i++) + __atomic_fetch_add(&test_counter, 1, __ATOMIC_RELAXED); + tb2 = get_tb(); + report(true, "atomic add takes %ldns", (tb2 - tb1) * 1000000000 / ITERS / tb_hz); + + tb1 = get_tb(); + for (i = 0; i < ITERS; i++) + __atomic_fetch_add(&test_counter, 1, __ATOMIC_SEQ_CST); + tb2 = get_tb(); + report(true, "sequentially conssistent atomic add takes %ldns", (tb2 - tb1) * 1000000000 / ITERS / tb_hz); +} + +static long smp_inc_counter = 0; +static int smp_inc_started; + +static void smp_inc_fn(int cpu_id) +{ + long i; + + atomic_fetch_inc(&smp_inc_started); + while (smp_inc_started < nr_cpus_present) + cpu_relax(); + + for (i = 0; i < ITERS; i++) + atomic_fetch_inc(&smp_inc_counter); + atomic_fetch_dec(&smp_inc_started); +} + +static void test_smp_inc(int argc, char **argv) +{ + if (nr_cpus_present < 2) + return; + + if (!start_all_cpus(smp_inc_fn)) + report_abort("Failed to start secondary cpus"); + + while (smp_inc_started < nr_cpus_present - 1) + cpu_relax(); + smp_inc_fn(smp_processor_id()); + while (smp_inc_started > 0) + cpu_relax(); + + stop_all_cpus(); + + report(smp_inc_counter == nr_cpus_present * ITERS, "counter lost no increments"); +} + +static long smp_lock_counter __attribute__((aligned(128))) = 0; +static unsigned int smp_lock __attribute__((aligned(128))); +static int smp_lock_started; + +static void smp_lock_fn(int cpu_id) +{ + long i; + + atomic_fetch_inc(&smp_lock_started); + while (smp_lock_started < nr_cpus_present) + cpu_relax(); + + for (i = 0; i < ITERS; i++) { + spin_lock(&smp_lock); + smp_lock_counter++; + spin_unlock(&smp_lock); + } + atomic_fetch_dec(&smp_lock_started); +} + +static void test_smp_lock(int argc, char **argv) +{ + if (nr_cpus_present < 2) + return; + + if (!start_all_cpus(smp_lock_fn)) + report_abort("Failed to start secondary cpus"); + + while (smp_lock_started < nr_cpus_present - 1) + cpu_relax(); + smp_lock_fn(smp_processor_id()); + while (smp_lock_started > 0) + cpu_relax(); + + stop_all_cpus(); + + report(smp_lock_counter == nr_cpus_present * ITERS, "counter lost no increments"); +} + +struct { + const char *name; + void (*func)(int argc, char **argv); +} hctests[] = { + { "lwarx/stwcx", test_lwarx_stwcx }, + { "lqarx/stqcx", test_lqarx_stqcx }, + { "migration", test_migrate_reserve }, + { "performance", test_inc_perf }, + { "SMP-atomic", test_smp_inc }, + { "SMP-lock", test_smp_lock }, + { NULL, NULL } +}; + +int main(int argc, char **argv) +{ + int i; + int all; + + all = argc == 1 || !strcmp(argv[1], "all"); + + for (i = 1; i < argc; i++) { + if (strcmp(argv[i], "-r") == 0) { + do_record = true; + } + if (strcmp(argv[i], "-m") == 0) 
+#define ITERS 10000000
+static int test_counter = 0;
+static void test_inc_perf(int argc, char *argv[])
+{
+	int i;
+	uint64_t tb1, tb2;
+
+	tb1 = get_tb();
+	for (i = 0; i < ITERS; i++)
+		__atomic_fetch_add(&test_counter, 1, __ATOMIC_RELAXED);
+	tb2 = get_tb();
+	report(true, "atomic add takes %ldns",
+	       (tb2 - tb1) * 1000000000 / ITERS / tb_hz);
+
+	tb1 = get_tb();
+	for (i = 0; i < ITERS; i++)
+		__atomic_fetch_add(&test_counter, 1, __ATOMIC_SEQ_CST);
+	tb2 = get_tb();
+	report(true, "sequentially consistent atomic add takes %ldns",
+	       (tb2 - tb1) * 1000000000 / ITERS / tb_hz);
+}
+
+static long smp_inc_counter = 0;
+static int smp_inc_started;
+
+static void smp_inc_fn(int cpu_id)
+{
+	long i;
+
+	atomic_fetch_inc(&smp_inc_started);
+	while (smp_inc_started < nr_cpus_present)
+		cpu_relax();
+
+	for (i = 0; i < ITERS; i++)
+		atomic_fetch_inc(&smp_inc_counter);
+	atomic_fetch_dec(&smp_inc_started);
+}
+
+static void test_smp_inc(int argc, char **argv)
+{
+	if (nr_cpus_present < 2)
+		return;
+
+	if (!start_all_cpus(smp_inc_fn))
+		report_abort("Failed to start secondary cpus");
+
+	while (smp_inc_started < nr_cpus_present - 1)
+		cpu_relax();
+	smp_inc_fn(smp_processor_id());
+	while (smp_inc_started > 0)
+		cpu_relax();
+
+	stop_all_cpus();
+
+	report(smp_inc_counter == nr_cpus_present * ITERS,
+	       "counter lost no increments");
+}
+
+static long smp_lock_counter __attribute__((aligned(128))) = 0;
+static unsigned int smp_lock __attribute__((aligned(128)));
+static int smp_lock_started;
+
+static void smp_lock_fn(int cpu_id)
+{
+	long i;
+
+	atomic_fetch_inc(&smp_lock_started);
+	while (smp_lock_started < nr_cpus_present)
+		cpu_relax();
+
+	for (i = 0; i < ITERS; i++) {
+		spin_lock(&smp_lock);
+		smp_lock_counter++;
+		spin_unlock(&smp_lock);
+	}
+	atomic_fetch_dec(&smp_lock_started);
+}
+
+static void test_smp_lock(int argc, char **argv)
+{
+	if (nr_cpus_present < 2)
+		return;
+
+	if (!start_all_cpus(smp_lock_fn))
+		report_abort("Failed to start secondary cpus");
+
+	while (smp_lock_started < nr_cpus_present - 1)
+		cpu_relax();
+	smp_lock_fn(smp_processor_id());
+	while (smp_lock_started > 0)
+		cpu_relax();
+
+	stop_all_cpus();
+
+	report(smp_lock_counter == nr_cpus_present * ITERS,
+	       "counter lost no increments");
+}
+
+struct {
+	const char *name;
+	void (*func)(int argc, char **argv);
+} hctests[] = {
+	{ "lwarx/stwcx", test_lwarx_stwcx },
+	{ "lqarx/stqcx", test_lqarx_stqcx },
+	{ "migration", test_migrate_reserve },
+	{ "performance", test_inc_perf },
+	{ "SMP-atomic", test_smp_inc },
+	{ "SMP-lock", test_smp_lock },
+	{ NULL, NULL }
+};
+
+int main(int argc, char **argv)
+{
+	int i;
+	int all;
+
+	all = argc == 1 || !strcmp(argv[1], "all");
+
+	for (i = 1; i < argc; i++) {
+		if (strcmp(argv[i], "-r") == 0) {
+			do_record = true;
+		}
+		if (strcmp(argv[i], "-m") == 0) {
+			do_migrate = true;
+		}
+	}
+
+	report_prefix_push("atomics");
+
+	for (i = 0; hctests[i].name != NULL; i++) {
+		if (all || strcmp(argv[1], hctests[i].name) == 0) {
+			report_prefix_push(hctests[i].name);
+			hctests[i].func(argc, argv);
+			report_prefix_pop();
+		}
+	}
+
+	report_prefix_pop();
+
+	return report_summary();
+}
diff --git a/powerpc/unittests.cfg b/powerpc/unittests.cfg
index 915b6a482..f8d43c220 100644
--- a/powerpc/unittests.cfg
+++ b/powerpc/unittests.cfg
@@ -103,6 +103,15 @@ file = smp.elf
 smp = 8,threads=4
 accel = tcg,thread=single
 
+[atomics]
+file = atomics.elf
+
+[atomics-migration]
+file = atomics.elf
+machine = pseries
+extra_params = -append "'migration -m'"
+groups = migration
+
 [h_cede_tm]
 file = tm.elf
 machine = pseries