From patchwork Mon Nov 25 11:54:45 2024
X-Patchwork-Submitter: Clément Léger
X-Patchwork-Id: 13884859
From: Clément Léger
To: kvm@vger.kernel.org, kvm-riscv@lists.infradead.org
Cc: Clément Léger, Andrew Jones, Anup Patel, Atish Patra
Subject: [kvm-unit-tests PATCH v3 1/4] riscv: Add "-deps" handling for tests
Date: Mon, 25 Nov 2024 12:54:45 +0100
Message-ID: <20241125115452.1255745-2-cleger@rivosinc.com>
In-Reply-To: <20241125115452.1255745-1-cleger@rivosinc.com>
References: <20241125115452.1255745-1-cleger@rivosinc.com>

Some tests use additional files that need to be linked into the final
binary. This is the case for sbi-asm.S, which is only used by the sbi
test. Add a per-test "-deps" variable that designates additional .o
files to link.

Signed-off-by: Clément Léger
Reviewed-by: Andrew Jones
---
 riscv/Makefile | 8 +++++---
 1 file changed, 5 insertions(+), 3 deletions(-)

diff --git a/riscv/Makefile b/riscv/Makefile
index 28b04156..5b5e157c 100644
--- a/riscv/Makefile
+++ b/riscv/Makefile
@@ -17,6 +17,8 @@ tests += $(TEST_DIR)/sieve.$(exe)
 
 all: $(tests)
 
+$(TEST_DIR)/sbi-deps = $(TEST_DIR)/sbi-asm.o
+
 # When built for EFI sieve needs extra memory, run with e.g. '-m 256' on QEMU
 $(TEST_DIR)/sieve.$(exe): AUXFLAGS = 0x1
 
@@ -44,7 +46,6 @@ cflatobjs += lib/riscv/timer.o
 ifeq ($(ARCH),riscv32)
 cflatobjs += lib/ldiv32.o
 endif
-cflatobjs += riscv/sbi-asm.o
 
 ########################################
 
@@ -93,6 +94,7 @@ include $(SRCDIR)/scripts/asm-offsets.mak
 	$(CC) $(CFLAGS) -c -o $@ $< \
 		-DPROGNAME=\"$(notdir $(@:.aux.o=.$(exe)))\" -DAUXFLAGS=$(AUXFLAGS)
 
+.SECONDEXPANSION:
 ifeq ($(CONFIG_EFI),y)
 # avoid jump tables before all relocations have been processed
 riscv/efi/reloc_riscv64.o: CFLAGS += -fno-jump-tables
@@ -103,7 +105,7 @@ cflatobjs += lib/efi.o
 .PRECIOUS: %.so
 %.so: EFI_LDFLAGS += -defsym=EFI_SUBSYSTEM=0xa --no-undefined
-%.so: %.o $(FLATLIBS) $(SRCDIR)/riscv/efi/elf_riscv64_efi.lds $(cstart.o) %.aux.o
+%.so: %.o $(FLATLIBS) $(SRCDIR)/riscv/efi/elf_riscv64_efi.lds $(cstart.o) %.aux.o $$($$*-deps)
 	$(LD) $(EFI_LDFLAGS) -o $@ -T $(SRCDIR)/riscv/efi/elf_riscv64_efi.lds \
 		$(filter %.o, $^) $(FLATLIBS) $(EFI_LIBS)
@@ -119,7 +121,7 @@ cflatobjs += lib/efi.o
 		-O binary $^ $@
 else
 %.elf: LDFLAGS += -pie -n -z notext
-%.elf: %.o $(FLATLIBS) $(SRCDIR)/riscv/flat.lds $(cstart.o) %.aux.o
+%.elf: %.o $(FLATLIBS) $(SRCDIR)/riscv/flat.lds $(cstart.o) %.aux.o $$($$*-deps)
 	$(LD) $(LDFLAGS) -o $@ -T $(SRCDIR)/riscv/flat.lds \
 		$(filter %.o, $^) $(FLATLIBS)
 	@chmod a-x $@

From patchwork Mon Nov 25 11:54:46 2024
X-Patchwork-Submitter: Clément Léger
X-Patchwork-Id: 13884860
From: Clément Léger
To: kvm@vger.kernel.org, kvm-riscv@lists.infradead.org
Cc: Clément Léger, Andrew Jones, Anup Patel, Atish Patra
Subject: [kvm-unit-tests PATCH v3 2/4] riscv: lib: Add SBI SSE extension definitions
Date: Mon, 25 Nov 2024 12:54:46 +0100
Message-ID: <20241125115452.1255745-3-cleger@rivosinc.com>
In-Reply-To: <20241125115452.1255745-1-cleger@rivosinc.com>
References: <20241125115452.1255745-1-cleger@rivosinc.com>

Add the SBI SSE extension definitions to sbi.h.

Signed-off-by: Clément Léger
---
 lib/riscv/asm/sbi.h | 83 +++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 83 insertions(+)

diff --git a/lib/riscv/asm/sbi.h b/lib/riscv/asm/sbi.h
index 98a9b097..f2494a50 100644
--- a/lib/riscv/asm/sbi.h
+++ b/lib/riscv/asm/sbi.h
@@ -11,6 +11,11 @@
 #define SBI_ERR_ALREADY_AVAILABLE	-6
 #define SBI_ERR_ALREADY_STARTED		-7
 #define SBI_ERR_ALREADY_STOPPED		-8
+#define SBI_ERR_NO_SHMEM		-9
+#define SBI_ERR_INVALID_STATE		-10
+#define SBI_ERR_BAD_RANGE		-11
+#define SBI_ERR_TIMEOUT			-12
+#define SBI_ERR_IO			-13
 
 #ifndef __ASSEMBLY__
 #include
@@ -23,6 +28,7 @@ enum sbi_ext_id {
 	SBI_EXT_SRST = 0x53525354,
 	SBI_EXT_DBCN = 0x4442434E,
 	SBI_EXT_SUSP = 0x53555350,
+	SBI_EXT_SSE = 0x535345,
 };
 
 enum sbi_ext_base_fid {
@@ -71,6 +77,83 @@ enum sbi_ext_dbcn_fid {
 	SBI_EXT_DBCN_CONSOLE_WRITE_BYTE,
 };
 
+enum sbi_ext_sse_fid {
+	SBI_EXT_SSE_READ_ATTRS = 0,
+	SBI_EXT_SSE_WRITE_ATTRS,
+	SBI_EXT_SSE_REGISTER,
+	SBI_EXT_SSE_UNREGISTER,
+	SBI_EXT_SSE_ENABLE,
+	SBI_EXT_SSE_DISABLE,
+	SBI_EXT_SSE_COMPLETE,
+	SBI_EXT_SSE_INJECT,
+	SBI_EXT_SSE_HART_UNMASK,
+	SBI_EXT_SSE_HART_MASK,
+};
+
+/* SBI SSE Event Attributes. */
+enum sbi_sse_attr_id {
+	SBI_SSE_ATTR_STATUS = 0x00000000,
+	SBI_SSE_ATTR_PRIORITY = 0x00000001,
+	SBI_SSE_ATTR_CONFIG = 0x00000002,
+	SBI_SSE_ATTR_PREFERRED_HART = 0x00000003,
+	SBI_SSE_ATTR_ENTRY_PC = 0x00000004,
+	SBI_SSE_ATTR_ENTRY_ARG = 0x00000005,
+	SBI_SSE_ATTR_INTERRUPTED_SEPC = 0x00000006,
+	SBI_SSE_ATTR_INTERRUPTED_FLAGS = 0x00000007,
+	SBI_SSE_ATTR_INTERRUPTED_A6 = 0x00000008,
+	SBI_SSE_ATTR_INTERRUPTED_A7 = 0x00000009,
+};
+
+#define SBI_SSE_ATTR_STATUS_STATE_OFFSET	0
+#define SBI_SSE_ATTR_STATUS_STATE_MASK		0x3
+#define SBI_SSE_ATTR_STATUS_PENDING_OFFSET	2
+#define SBI_SSE_ATTR_STATUS_INJECT_OFFSET	3
+
+#define SBI_SSE_ATTR_CONFIG_ONESHOT	(1 << 0)
+
+#define SBI_SSE_ATTR_INTERRUPTED_FLAGS_SSTATUS_SPP	BIT(0)
+#define SBI_SSE_ATTR_INTERRUPTED_FLAGS_SSTATUS_SPIE	BIT(1)
+#define SBI_SSE_ATTR_INTERRUPTED_FLAGS_HSTATUS_SPV	BIT(2)
+#define SBI_SSE_ATTR_INTERRUPTED_FLAGS_HSTATUS_SPVP	BIT(3)
+
+enum sbi_sse_state {
+	SBI_SSE_STATE_UNUSED = 0,
+	SBI_SSE_STATE_REGISTERED = 1,
+	SBI_SSE_STATE_ENABLED = 2,
+	SBI_SSE_STATE_RUNNING = 3,
+};
+
+/* SBI SSE Event IDs. */
+#define SBI_SSE_EVENT_LOCAL_RAS			0x00000000
+#define SBI_SSE_EVENT_LOCAL_DOUBLE_TRAP		0x00000001
+#define SBI_SSE_EVENT_LOCAL_PLAT_0_START	0x00004000
+#define SBI_SSE_EVENT_LOCAL_PLAT_0_END		0x00007fff
+
+#define SBI_SSE_EVENT_GLOBAL_RAS		0x00008000
+#define SBI_SSE_EVENT_GLOBAL_PLAT_0_START	0x0000c000
+#define SBI_SSE_EVENT_GLOBAL_PLAT_0_END		0x0000ffff
+
+#define SBI_SSE_EVENT_LOCAL_PMU			0x00010000
+#define SBI_SSE_EVENT_LOCAL_PLAT_1_START	0x00014000
+#define SBI_SSE_EVENT_LOCAL_PLAT_1_END		0x00017fff
+#define SBI_SSE_EVENT_GLOBAL_PLAT_1_START	0x0001c000
+#define SBI_SSE_EVENT_GLOBAL_PLAT_1_END		0x0001ffff
+
+#define SBI_SSE_EVENT_LOCAL_PLAT_2_START	0x00024000
+#define SBI_SSE_EVENT_LOCAL_PLAT_2_END		0x00027fff
+#define SBI_SSE_EVENT_GLOBAL_PLAT_2_START	0x0002c000
+#define SBI_SSE_EVENT_GLOBAL_PLAT_2_END		0x0002ffff
+
+#define SBI_SSE_EVENT_LOCAL_SOFTWARE		0xffff0000
+#define SBI_SSE_EVENT_LOCAL_PLAT_3_START	0xffff4000
+#define SBI_SSE_EVENT_LOCAL_PLAT_3_END		0xffff7fff
+#define SBI_SSE_EVENT_GLOBAL_SOFTWARE		0xffff8000
+#define SBI_SSE_EVENT_GLOBAL_PLAT_3_START	0xffffc000
+#define SBI_SSE_EVENT_GLOBAL_PLAT_3_END		0xffffffff
+
+#define SBI_SSE_EVENT_PLATFORM_BIT	(1 << 14)
+#define SBI_SSE_EVENT_GLOBAL_BIT	(1 << 15)
+
 struct sbiret {
 	long error;
 	long value;

From patchwork Mon Nov 25 11:54:47 2024
X-Patchwork-Submitter: Clément Léger
X-Patchwork-Id: 13884861
From: Clément Léger
To: kvm@vger.kernel.org, kvm-riscv@lists.infradead.org
Cc: Clément Léger, Andrew Jones, Anup Patel, Atish Patra
Subject: [kvm-unit-tests PATCH v3 3/4] riscv: lib: Add SSE assembly entry handling
Date: Mon, 25 Nov 2024 12:54:47 +0100
Message-ID: <20241125115452.1255745-4-cleger@rivosinc.com>
In-Reply-To: <20241125115452.1255745-1-cleger@rivosinc.com>
References: <20241125115452.1255745-1-cleger@rivosinc.com>

Add SSE entry assembly code to handle SSE events. Events must be
registered with a struct sse_handler_arg that provides a valid stack
and a handler function.
Signed-off-by: Clément Léger
---
 riscv/Makefile          |   1 +
 lib/riscv/asm/sse.h     |  16 ++++++
 lib/riscv/sse-entry.S   | 100 ++++++++++++++++++++++++++++++++++++++
 lib/riscv/asm-offsets.c |   9 ++++
 4 files changed, 126 insertions(+)
 create mode 100644 lib/riscv/asm/sse.h
 create mode 100644 lib/riscv/sse-entry.S

diff --git a/riscv/Makefile b/riscv/Makefile
index 5b5e157c..c278ec5c 100644
--- a/riscv/Makefile
+++ b/riscv/Makefile
@@ -41,6 +41,7 @@ cflatobjs += lib/riscv/sbi.o
 cflatobjs += lib/riscv/setjmp.o
 cflatobjs += lib/riscv/setup.o
 cflatobjs += lib/riscv/smp.o
+cflatobjs += lib/riscv/sse-entry.o
 cflatobjs += lib/riscv/stack.o
 cflatobjs += lib/riscv/timer.o
 ifeq ($(ARCH),riscv32)
diff --git a/lib/riscv/asm/sse.h b/lib/riscv/asm/sse.h
new file mode 100644
index 00000000..557f6680
--- /dev/null
+++ b/lib/riscv/asm/sse.h
@@ -0,0 +1,16 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+#ifndef _ASMRISCV_SSE_H_
+#define _ASMRISCV_SSE_H_
+
+typedef void (*sse_handler_fn)(void *data, struct pt_regs *regs, unsigned int hartid);
+
+struct sse_handler_arg {
+	unsigned long reg_tmp;
+	sse_handler_fn handler;
+	void *handler_data;
+	void *stack;
+};
+
+extern void sse_entry(void);
+
+#endif /* _ASMRISCV_SSE_H_ */
diff --git a/lib/riscv/sse-entry.S b/lib/riscv/sse-entry.S
new file mode 100644
index 00000000..f1244e17
--- /dev/null
+++ b/lib/riscv/sse-entry.S
@@ -0,0 +1,100 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/*
+ * SBI SSE entry code
+ *
+ * Copyright (C) 2024, Rivos Inc., Clément Léger
+ */
+#include
+#include
+#include
+
+.global sse_entry
+sse_entry:
+	/* Save stack temporarily */
+	REG_S sp, SSE_REG_TMP(a7)
+	/* Set entry stack */
+	REG_L sp, SSE_HANDLER_STACK(a7)
+
+	addi sp, sp, -(PT_SIZE)
+	REG_S ra, PT_RA(sp)
+	REG_S s0, PT_S0(sp)
+	REG_S s1, PT_S1(sp)
+	REG_S s2, PT_S2(sp)
+	REG_S s3, PT_S3(sp)
+	REG_S s4, PT_S4(sp)
+	REG_S s5, PT_S5(sp)
+	REG_S s6, PT_S6(sp)
+	REG_S s7, PT_S7(sp)
+	REG_S s8, PT_S8(sp)
+	REG_S s9, PT_S9(sp)
+	REG_S s10, PT_S10(sp)
+	REG_S s11, PT_S11(sp)
+	REG_S tp, PT_TP(sp)
+	REG_S t0, PT_T0(sp)
+	REG_S t1, PT_T1(sp)
+	REG_S t2, PT_T2(sp)
+	REG_S t3, PT_T3(sp)
+	REG_S t4, PT_T4(sp)
+	REG_S t5, PT_T5(sp)
+	REG_S t6, PT_T6(sp)
+	REG_S gp, PT_GP(sp)
+	REG_S a0, PT_A0(sp)
+	REG_S a1, PT_A1(sp)
+	REG_S a2, PT_A2(sp)
+	REG_S a3, PT_A3(sp)
+	REG_S a4, PT_A4(sp)
+	REG_S a5, PT_A5(sp)
+	csrr a1, CSR_SEPC
+	REG_S a1, PT_EPC(sp)
+	csrr a2, CSR_SSTATUS
+	REG_S a2, PT_STATUS(sp)
+
+	REG_L a0, SSE_REG_TMP(a7)
+	REG_S a0, PT_SP(sp)
+
+	REG_L t0, SSE_HANDLER(a7)
+	REG_L a0, SSE_HANDLER_DATA(a7)
+	mv a1, sp
+	mv a2, a6
+	jalr t0
+
+	REG_L a1, PT_EPC(sp)
+	REG_L a2, PT_STATUS(sp)
+	csrw CSR_SEPC, a1
+	csrw CSR_SSTATUS, a2
+
+	REG_L ra, PT_RA(sp)
+	REG_L s0, PT_S0(sp)
+	REG_L s1, PT_S1(sp)
+	REG_L s2, PT_S2(sp)
+	REG_L s3, PT_S3(sp)
+	REG_L s4, PT_S4(sp)
+	REG_L s5, PT_S5(sp)
+	REG_L s6, PT_S6(sp)
+	REG_L s7, PT_S7(sp)
+	REG_L s8, PT_S8(sp)
+	REG_L s9, PT_S9(sp)
+	REG_L s10, PT_S10(sp)
+	REG_L s11, PT_S11(sp)
+	REG_L tp, PT_TP(sp)
+	REG_L t0, PT_T0(sp)
+	REG_L t1, PT_T1(sp)
+	REG_L t2, PT_T2(sp)
+	REG_L t3, PT_T3(sp)
+	REG_L t4, PT_T4(sp)
+	REG_L t5, PT_T5(sp)
+	REG_L t6, PT_T6(sp)
+	REG_L gp, PT_GP(sp)
+	REG_L a0, PT_A0(sp)
+	REG_L a1, PT_A1(sp)
+	REG_L a2, PT_A2(sp)
+	REG_L a3, PT_A3(sp)
+	REG_L a4, PT_A4(sp)
+	REG_L a5, PT_A5(sp)
+
+	REG_L sp, PT_SP(sp)
+
+	li a7, ASM_SBI_EXT_SSE
+	li a6, ASM_SBI_EXT_SSE_COMPLETE
+	ecall
diff --git a/lib/riscv/asm-offsets.c b/lib/riscv/asm-offsets.c
index 6c511c14..b3465eeb 100644
--- a/lib/riscv/asm-offsets.c
+++ b/lib/riscv/asm-offsets.c
@@ -3,7 +3,9 @@
 #include
 #include
 #include
+#include
 #include
+#include
 
 int main(void)
 {
@@ -63,5 +65,12 @@ int main(void)
 	OFFSET(THREAD_INFO_HARTID, thread_info, hartid);
 	DEFINE(THREAD_INFO_SIZE, sizeof(struct thread_info));
 
+	OFFSET(SSE_REG_TMP, sse_handler_arg, reg_tmp);
+	OFFSET(SSE_HANDLER, sse_handler_arg, handler);
+	OFFSET(SSE_HANDLER_DATA, sse_handler_arg, handler_data);
+	OFFSET(SSE_HANDLER_STACK, sse_handler_arg, stack);
+	DEFINE(ASM_SBI_EXT_SSE, SBI_EXT_SSE);
+	DEFINE(ASM_SBI_EXT_SSE_COMPLETE, SBI_EXT_SSE_COMPLETE);
+
 	return 0;
 }

From patchwork Mon Nov 25 11:54:48 2024
X-Patchwork-Submitter: Clément Léger
X-Patchwork-Id: 13884862
From: Clément Léger
To: kvm@vger.kernel.org, kvm-riscv@lists.infradead.org
Cc: Clément Léger, Andrew Jones, Anup Patel, Atish Patra
Subject: [kvm-unit-tests PATCH v3 4/4] riscv: sbi: Add SSE extension tests
Date: Mon, 25 Nov 2024 12:54:48 +0100
Message-ID: <20241125115452.1255745-5-cleger@rivosinc.com>
In-Reply-To: <20241125115452.1255745-1-cleger@rivosinc.com>
References: <20241125115452.1255745-1-cleger@rivosinc.com>

Add SBI SSE extension tests for the following features:
- attribute errors (invalid values, read-only attributes, etc.)
- registration errors
- simple events (register, enable, inject)
- events with different priorities
- global events dispatched on different harts
- local events on all harts

Signed-off-by: Clément Léger
---
 riscv/Makefile      |    2 +-
 lib/riscv/asm/csr.h |    2 +
 riscv/sbi-sse.c     | 1043 +++++++++++++++++++++++++++++++++++++++++++
 riscv/sbi.c         |    3 +
 4 files changed, 1049 insertions(+), 1 deletion(-)
 create mode 100644 riscv/sbi-sse.c

diff --git a/riscv/Makefile b/riscv/Makefile
index c278ec5c..81b75ad5 100644
--- a/riscv/Makefile
+++ b/riscv/Makefile
@@ -17,7 +17,7 @@ tests += $(TEST_DIR)/sieve.$(exe)
 
 all: $(tests)
 
-$(TEST_DIR)/sbi-deps = $(TEST_DIR)/sbi-asm.o
+$(TEST_DIR)/sbi-deps = $(TEST_DIR)/sbi-asm.o $(TEST_DIR)/sbi-sse.o
 
 # When built for EFI sieve needs extra memory, run with e.g. '-m 256' on QEMU
 $(TEST_DIR)/sieve.$(exe): AUXFLAGS = 0x1
diff --git a/lib/riscv/asm/csr.h b/lib/riscv/asm/csr.h
index 16f5ddd7..06831380 100644
--- a/lib/riscv/asm/csr.h
+++ b/lib/riscv/asm/csr.h
@@ -21,6 +21,8 @@
 /* Exception cause high bit - is an interrupt if set */
 #define CAUSE_IRQ_FLAG	(_AC(1, UL) << (__riscv_xlen - 1))
 
+#define SSTATUS_SPP	_AC(0x00000100, UL) /* Previously Supervisor */
+
 /* Exception causes */
 #define EXC_INST_MISALIGNED	0
 #define EXC_INST_ACCESS		1
diff --git a/riscv/sbi-sse.c b/riscv/sbi-sse.c
new file mode 100644
index 00000000..a230c600
--- /dev/null
+++ b/riscv/sbi-sse.c
@@ -0,0 +1,1043 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * SBI SSE testsuite
+ *
+ * Copyright (C) 2024, Rivos Inc., Clément Léger
+ */
+#include
+#include
+#include
+#include
+#include
+#include
+
+#include
+#include
+#include
+#include
+#include
+#include
+
+#include "sbi-tests.h"
+
+#define SSE_STACK_SIZE PAGE_SIZE
+
+void check_sse(void);
+
+struct sse_event_info {
+	unsigned long event_id;
+	const char *name;
+	bool can_inject;
+};
+
+static struct sse_event_info sse_event_infos[] = {
+	{
+		.event_id = SBI_SSE_EVENT_LOCAL_RAS,
+		.name = "local_ras",
+	},
+	{
+		.event_id = SBI_SSE_EVENT_LOCAL_DOUBLE_TRAP,
+		.name = "double_trap",
+	},
+	{
+		.event_id = SBI_SSE_EVENT_GLOBAL_RAS,
+		.name = "global_ras",
+	},
+	{
+		.event_id = SBI_SSE_EVENT_LOCAL_PMU,
+		.name = "local_pmu",
+	},
+	{
+		.event_id = SBI_SSE_EVENT_LOCAL_SOFTWARE,
+		.name = "local_software",
+	},
+	{
+		.event_id = SBI_SSE_EVENT_GLOBAL_SOFTWARE,
+		.name = "global_software",
+	},
+};
+
+static const char *const attr_names[] = {
+	[SBI_SSE_ATTR_STATUS] = "status",
+	[SBI_SSE_ATTR_PRIORITY] = "prio",
+	[SBI_SSE_ATTR_CONFIG] = "config",
+	[SBI_SSE_ATTR_PREFERRED_HART] = "preferred_hart",
+	[SBI_SSE_ATTR_ENTRY_PC] = "entry_pc",
+	[SBI_SSE_ATTR_ENTRY_ARG] = "entry_arg",
+	[SBI_SSE_ATTR_INTERRUPTED_SEPC] = "interrupted_pc",
+	[SBI_SSE_ATTR_INTERRUPTED_FLAGS] = "interrupted_flags",
+	[SBI_SSE_ATTR_INTERRUPTED_A6] = "interrupted_a6",
+	[SBI_SSE_ATTR_INTERRUPTED_A7] = "interrupted_a7",
+};
+
+static const unsigned long ro_attrs[] = {
+	SBI_SSE_ATTR_STATUS,
+	SBI_SSE_ATTR_ENTRY_PC,
+	SBI_SSE_ATTR_ENTRY_ARG,
+};
+
+static const unsigned long interrupted_attrs[] = {
+	SBI_SSE_ATTR_INTERRUPTED_FLAGS,
+	SBI_SSE_ATTR_INTERRUPTED_SEPC,
+	SBI_SSE_ATTR_INTERRUPTED_A6,
+	SBI_SSE_ATTR_INTERRUPTED_A7,
+};
+
+static const unsigned long interrupted_flags[] = {
+	SBI_SSE_ATTR_INTERRUPTED_FLAGS_SSTATUS_SPP,
+	SBI_SSE_ATTR_INTERRUPTED_FLAGS_SSTATUS_SPIE,
+	SBI_SSE_ATTR_INTERRUPTED_FLAGS_HSTATUS_SPV,
+	SBI_SSE_ATTR_INTERRUPTED_FLAGS_HSTATUS_SPVP,
+};
+
+static struct sse_event_info *sse_evt_get_infos(unsigned long event_id)
+{
+	int i;
+
+	for (i = 0; i < ARRAY_SIZE(sse_event_infos); i++) {
+		if (sse_event_infos[i].event_id == event_id)
+			return &sse_event_infos[i];
+	}
+
+	assert_msg(false, "Invalid event id: %ld", event_id);
+}
+
+static const char *sse_evt_name(unsigned long event_id)
+{
+	struct sse_event_info *infos = sse_evt_get_infos(event_id);
+
+	return infos->name;
+}
+
+static bool sse_evt_can_inject(unsigned long event_id)
+{
+	struct sse_event_info *infos = sse_evt_get_infos(event_id);
+
+	return infos->can_inject;
+}
+
+static bool sse_event_is_global(unsigned long event_id)
+{
+	return !!(event_id & SBI_SSE_EVENT_GLOBAL_BIT);
+}
+
+static struct sbiret sse_event_get_attr_raw(unsigned long event_id,
+					    unsigned long base_attr_id,
+					    unsigned long attr_count,
+					    unsigned long phys_lo,
+					    unsigned long phys_hi)
+{
+	return sbi_ecall(SBI_EXT_SSE, SBI_EXT_SSE_READ_ATTRS, event_id,
+			 base_attr_id, attr_count, phys_lo, phys_hi, 0);
+}
+
+static unsigned long sse_event_get_attrs(unsigned long event_id, unsigned long attr_id,
+					 unsigned long *values, unsigned int attr_count)
+{
+	struct sbiret ret;
+
+	ret = sse_event_get_attr_raw(event_id, attr_id, attr_count, (unsigned long)values, 0);
+
+	return ret.error;
+}
+
+static unsigned long sse_event_get_attr(unsigned long event_id, unsigned long attr_id,
+					unsigned long *value)
+{
+	return sse_event_get_attrs(event_id, attr_id, value, 1);
+}
+
+static struct sbiret sse_event_set_attr_raw(unsigned long event_id, unsigned long base_attr_id,
+					    unsigned long attr_count, unsigned long phys_lo,
+					    unsigned long phys_hi)
+{
+	return sbi_ecall(SBI_EXT_SSE, SBI_EXT_SSE_WRITE_ATTRS, event_id, base_attr_id, attr_count,
+			 phys_lo, phys_hi, 0);
+}
+
+static unsigned long sse_event_set_attr(unsigned long event_id, unsigned long attr_id,
+					unsigned long value)
+{
+	struct sbiret ret;
+
+	ret = sse_event_set_attr_raw(event_id, attr_id, 1, (unsigned long)&value, 0);
+
+	return ret.error;
+}
+
+static unsigned long sse_event_register_raw(unsigned long event_id, void *entry_pc, void *entry_arg)
+{
+	struct sbiret ret;
+
+	ret = sbi_ecall(SBI_EXT_SSE, SBI_EXT_SSE_REGISTER, event_id, (unsigned long)entry_pc,
+			(unsigned long)entry_arg, 0, 0, 0);
+
+	return ret.error;
+}
+
+static unsigned long sse_event_register(unsigned long event_id, struct sse_handler_arg *arg)
+{
+	struct sbiret ret;
+
+	ret = sbi_ecall(SBI_EXT_SSE, SBI_EXT_SSE_REGISTER, event_id, (unsigned long)sse_entry,
+			(unsigned long)arg, 0, 0, 0);
+
+	return ret.error;
+}
+
+static unsigned long sse_event_unregister(unsigned long event_id)
+{
+	struct sbiret ret;
+
+	ret = sbi_ecall(SBI_EXT_SSE, SBI_EXT_SSE_UNREGISTER, event_id, 0, 0, 0, 0, 0);
+
+	return ret.error;
+}
+
+static unsigned long sse_event_enable(unsigned long event_id)
+{
+	struct sbiret ret;
+
+	ret = sbi_ecall(SBI_EXT_SSE, SBI_EXT_SSE_ENABLE, event_id, 0, 0, 0, 0, 0);
+
+	return ret.error;
+}
+
+static unsigned long sse_hart_mask(void)
+{
+	struct sbiret ret;
+
+	ret = sbi_ecall(SBI_EXT_SSE, SBI_EXT_SSE_HART_MASK, 0, 0, 0, 0, 0, 0);
+
+	return ret.error;
+}
+
+static unsigned long sse_hart_unmask(void)
+{
+	struct
sbiret ret; + + ret = sbi_ecall(SBI_EXT_SSE, SBI_EXT_SSE_HART_UNMASK, 0, 0, 0, 0, 0, 0); + + return ret.error; +} + +static unsigned long sse_event_inject(unsigned long event_id, unsigned long hart_id) +{ + struct sbiret ret; + + ret = sbi_ecall(SBI_EXT_SSE, SBI_EXT_SSE_INJECT, event_id, hart_id, 0, 0, 0, 0); + + return ret.error; +} + +static unsigned long sse_event_disable(unsigned long event_id) +{ + struct sbiret ret; + + ret = sbi_ecall(SBI_EXT_SSE, SBI_EXT_SSE_DISABLE, event_id, 0, 0, 0, 0, 0); + + return ret.error; +} + + +static int sse_get_state(unsigned long event_id, enum sbi_sse_state *state) +{ + int ret; + unsigned long status; + + ret = sse_event_get_attr(event_id, SBI_SSE_ATTR_STATUS, &status); + if (ret) { + report_fail("Failed to get SSE event status"); + return -1; + } + + *state = status & SBI_SSE_ATTR_STATUS_STATE_MASK; + + return 0; +} + +static void sse_global_event_set_current_hart(unsigned long event_id) +{ + int ret; + + if (!sse_event_is_global(event_id)) + return; + + ret = sse_event_set_attr(event_id, SBI_SSE_ATTR_PREFERRED_HART, + current_thread_info()->hartid); + if (ret) + report_abort("set preferred hart failure"); +} + +static int sse_check_state(unsigned long event_id, unsigned long expected_state) +{ + int ret; + enum sbi_sse_state state; + + ret = sse_get_state(event_id, &state); + if (ret) + return 1; + report(state == expected_state, "SSE event status == %ld", expected_state); + + return state != expected_state; +} + +static bool sse_event_pending(unsigned long event_id) +{ + int ret; + unsigned long status; + + ret = sse_event_get_attr(event_id, SBI_SSE_ATTR_STATUS, &status); + if (ret) { + report_fail("Failed to get SSE event status"); + return false; + } + + return !!(status & BIT(SBI_SSE_ATTR_STATUS_PENDING_OFFSET)); +} + +static void *sse_alloc_stack(void) +{ + return (alloc_page() + SSE_STACK_SIZE); +} + +static void sse_free_stack(void *stack) +{ + free_page(stack - SSE_STACK_SIZE); +} + +static void 
sse_test_attr(unsigned long event_id) +{ + unsigned long ret, value = 0; + unsigned long values[ARRAY_SIZE(ro_attrs)]; + struct sbiret sret; + unsigned int i; + + report_prefix_push("attrs"); + + for (i = 0; i < ARRAY_SIZE(ro_attrs); i++) { + ret = sse_event_set_attr(event_id, ro_attrs[i], value); + report(ret == SBI_ERR_BAD_RANGE, "RO attribute %s not writable", + attr_names[ro_attrs[i]]); + } + + for (i = SBI_SSE_ATTR_STATUS; i <= SBI_SSE_ATTR_INTERRUPTED_A7; i++) { + ret = sse_event_get_attr(event_id, i, &value); + report(ret == SBI_SUCCESS, "Read single attribute %s", attr_names[i]); + /* Preferred Hart reset value is defined by SBI vendor and status injectable bit + * also depends on the SBI implementation + */ + if (i != SBI_SSE_ATTR_STATUS && i != SBI_SSE_ATTR_PREFERRED_HART) + report(value == 0, "Attribute %s reset value is 0", attr_names[i]); + } + + ret = sse_event_get_attrs(event_id, SBI_SSE_ATTR_STATUS, values, + SBI_SSE_ATTR_INTERRUPTED_A7 - SBI_SSE_ATTR_STATUS); + report(ret == SBI_SUCCESS, "Read multiple attributes"); + +#if __riscv_xlen > 32 + ret = sse_event_set_attr(event_id, SBI_SSE_ATTR_PRIORITY, 0xFFFFFFFFUL + 1UL); + report(ret == SBI_ERR_INVALID_PARAM, "Write prio > 0xFFFFFFFF error"); +#endif + + ret = sse_event_set_attr(event_id, SBI_SSE_ATTR_CONFIG, ~SBI_SSE_ATTR_CONFIG_ONESHOT); + report(ret == SBI_ERR_INVALID_PARAM, "Write invalid config error"); + + if (sse_event_is_global(event_id)) { + ret = sse_event_set_attr(event_id, SBI_SSE_ATTR_PREFERRED_HART, 0xFFFFFFFFUL); + report(ret == SBI_ERR_INVALID_PARAM, "Set invalid hart id error"); + } else { + /* Set Hart on local event -> RO */ + ret = sse_event_set_attr(event_id, SBI_SSE_ATTR_PREFERRED_HART, + current_thread_info()->hartid); + report(ret == SBI_ERR_BAD_RANGE, "Set hart id on local event error"); + } + + /* Set/get flags, sepc, a6, a7 */ + for (i = 0; i < ARRAY_SIZE(interrupted_attrs); i++) { + ret = sse_event_get_attr(event_id, interrupted_attrs[i], &value); + report(ret == 0, "Get 
interrupted %s no error", attr_names[interrupted_attrs[i]]); + + /* 0x1 is a valid value for all the interrupted attributes */ + ret = sse_event_set_attr(event_id, SBI_SSE_ATTR_INTERRUPTED_FLAGS, 0x1); + report(ret == SBI_ERR_INVALID_STATE, "Set interrupted flags invalid state error"); + } + + /* Attr_count == 0 */ + sret = sse_event_get_attr_raw(event_id, SBI_SSE_ATTR_STATUS, 0, (unsigned long) &value, 0); + report(sret.error == SBI_ERR_INVALID_PARAM, "Read attribute attr_count == 0 error"); + + sret = sse_event_set_attr_raw(event_id, SBI_SSE_ATTR_STATUS, 0, (unsigned long) &value, 0); + report(sret.error == SBI_ERR_INVALID_PARAM, "Write attribute attr_count == 0 error"); + + /* Invalid attribute id */ + ret = sse_event_get_attr(event_id, SBI_SSE_ATTR_INTERRUPTED_A7 + 1, &value); + report(ret == SBI_ERR_BAD_RANGE, "Read invalid attribute error"); + ret = sse_event_set_attr(event_id, SBI_SSE_ATTR_INTERRUPTED_A7 + 1, value); + report(ret == SBI_ERR_BAD_RANGE, "Write invalid attribute error"); + + /* Misaligned phys address */ + sret = sse_event_get_attr_raw(event_id, SBI_SSE_ATTR_STATUS, 1, + ((unsigned long) &value | 0x1), 0); + report(sret.error == SBI_ERR_INVALID_ADDRESS, "Read attribute with invalid address error"); + sret = sse_event_set_attr_raw(event_id, SBI_SSE_ATTR_STATUS, 1, + ((unsigned long) &value | 0x1), 0); + report(sret.error == SBI_ERR_INVALID_ADDRESS, "Write attribute with invalid address error"); + + report_prefix_pop(); +} + +static void sse_test_register_error(unsigned long event_id) +{ + unsigned long ret; + + report_prefix_push("register"); + + ret = sse_event_unregister(event_id); + report(ret == SBI_ERR_INVALID_STATE, "SSE unregister non registered event"); + + ret = sse_event_register_raw(event_id, (void *) 0x1, NULL); + report(ret == SBI_ERR_INVALID_PARAM, "SSE register misaligned entry"); + + ret = sse_event_register_raw(event_id, (void *) sse_entry, NULL); + report(ret == SBI_SUCCESS, "SSE register ok"); + if (ret) + goto done; + + ret = 
sse_event_register_raw(event_id, (void *) sse_entry, NULL); + report(ret == SBI_ERR_INVALID_STATE, "SSE register twice failure"); + if (!ret) + goto done; + + ret = sse_event_unregister(event_id); + report(ret == SBI_SUCCESS, "SSE unregister ok"); + +done: + report_prefix_pop(); +} + +struct sse_simple_test_arg { + bool done; + unsigned long expected_a6; + unsigned long event_id; +}; + +static void sse_simple_handler(void *data, struct pt_regs *regs, unsigned int hartid) +{ + volatile struct sse_simple_test_arg *arg = data; + int ret, i; + const char *attr_name; + unsigned long event_id = arg->event_id, value, prev_value, flags, attr; + const unsigned long regs_len = (SBI_SSE_ATTR_INTERRUPTED_A7 - SBI_SSE_ATTR_INTERRUPTED_A6) + + 1; + unsigned long interrupted_state[regs_len]; + + if ((regs->status & SSTATUS_SPP) == 0) + report_fail("Interrupted S-mode"); + + if (hartid != current_thread_info()->hartid) + report_fail("Hartid correctly passed"); + + sse_check_state(event_id, SBI_SSE_STATE_RUNNING); + if (sse_event_pending(event_id)) + report_fail("Event is not pending"); + + /* Set a6, a7, sepc, flags while running */ + for (i = 0; i < ARRAY_SIZE(interrupted_attrs); i++) { + attr = interrupted_attrs[i]; + attr_name = attr_names[attr]; + + ret = sse_event_get_attr(event_id, attr, &prev_value); + report(ret == 0, "Get attr %s no error", attr_name); + + /* We test SBI_SSE_ATTR_INTERRUPTED_FLAGS below with specific flag values */ + if (attr == SBI_SSE_ATTR_INTERRUPTED_FLAGS) + continue; + + ret = sse_event_set_attr(event_id, attr, 0xDEADBEEF + i); + report(ret == 0, "Set attr %s invalid state no error", attr_name); + + ret = sse_event_get_attr(event_id, attr, &value); + report(ret == 0, "Get attr %s modified value no error", attr_name); + report(value == 0xDEADBEEF + i, "Get attr %s modified value ok", attr_name); + + ret = sse_event_set_attr(event_id, attr, prev_value); + report(ret == 0, "Restore attr %s value no error", attr_name); + } + + /* Test all flags allowed 
for SBI_SSE_ATTR_INTERRUPTED_FLAGS*/ + attr = SBI_SSE_ATTR_INTERRUPTED_FLAGS; + attr_name = attr_names[attr]; + ret = sse_event_get_attr(event_id, attr, &prev_value); + report(ret == 0, "Get attr %s no error", attr_name); + + for (i = 0; i < ARRAY_SIZE(interrupted_flags); i++) { + flags = interrupted_flags[i]; + ret = sse_event_set_attr(event_id, attr, flags); + report(ret == 0, "Set interrupted %s value no error", attr_name); + ret = sse_event_get_attr(event_id, attr, &value); + report(value == flags, "Get attr %s modified value ok", attr_name); + } + + ret = sse_event_set_attr(event_id, attr, prev_value); + report(ret == 0, "Restore attr %s value no error", attr_name); + + /* Try to change HARTID/Priority while running */ + if (sse_event_is_global(event_id)) { + ret = sse_event_set_attr(event_id, SBI_SSE_ATTR_PREFERRED_HART, + current_thread_info()->hartid); + report(ret == SBI_ERR_INVALID_STATE, "Set hart id while running error"); + } + + ret = sse_event_set_attr(event_id, SBI_SSE_ATTR_PRIORITY, 0); + report(ret == SBI_ERR_INVALID_STATE, "Set priority while running error"); + + ret = sse_event_get_attrs(event_id, SBI_SSE_ATTR_INTERRUPTED_A6, interrupted_state, + regs_len); + report(ret == SBI_SUCCESS, "Read interrupted context from SSE handler ok"); + if (interrupted_state[0] != arg->expected_a6) + report_fail("Interrupted state a6 check ok"); + if (interrupted_state[1] != SBI_EXT_SSE) + report_fail("Interrupted state a7 check ok"); + + arg->done = true; +} + +static void sse_test_inject_simple(unsigned long event_id) +{ + unsigned long ret; + struct sse_handler_arg args; + volatile struct sse_simple_test_arg test_arg = {.event_id = event_id, .done = 0}; + + args.handler = sse_simple_handler; + args.handler_data = (void *) &test_arg; + args.stack = sse_alloc_stack(); + + report_prefix_push("simple"); + + ret = sse_check_state(event_id, SBI_SSE_STATE_UNUSED); + if (ret) + goto done; + + ret = sse_event_register(event_id, &args); + report(ret == SBI_SUCCESS, "SSE 
register no error"); + if (ret) + goto done; + + ret = sse_check_state(event_id, SBI_SSE_STATE_REGISTERED); + if (ret) + goto done; + + /* Be sure global events are targeting the current hart */ + sse_global_event_set_current_hart(event_id); + + ret = sse_event_enable(event_id); + report(ret == SBI_SUCCESS, "SSE enable no error"); + if (ret) + goto done; + + ret = sse_check_state(event_id, SBI_SSE_STATE_ENABLED); + if (ret) + goto done; + + ret = sse_hart_mask(); + report(ret == SBI_SUCCESS, "SSE hart mask no error"); + + ret = sse_event_inject(event_id, current_thread_info()->hartid); + report(ret == SBI_SUCCESS, "SSE injection masked no error"); + if (ret) + goto done; + + barrier(); + report(test_arg.done == 0, "SSE event masked not handled"); + + /* + * When unmasking the SSE events, we expect it to be injected + * immediately so a6 should be SBI_EXT_SSE_HART_UNMASK + */ + test_arg.expected_a6 = SBI_EXT_SSE_HART_UNMASK; + ret = sse_hart_unmask(); + report(ret == SBI_SUCCESS, "SSE hart unmask no error"); + + barrier(); + report(test_arg.done == 1, "SSE event unmasked handled"); + test_arg.done = 0; + test_arg.expected_a6 = SBI_EXT_SSE_INJECT; + + /* Set as oneshot and verify it is disabled */ + ret = sse_event_disable(event_id); + report(ret == 0, "Disable event ok"); + ret = sse_event_set_attr(event_id, SBI_SSE_ATTR_CONFIG, SBI_SSE_ATTR_CONFIG_ONESHOT); + report(ret == 0, "Set event attribute as ONESHOT"); + ret = sse_event_enable(event_id); + report(ret == 0, "Enable event ok"); + + ret = sse_event_inject(event_id, current_thread_info()->hartid); + report(ret == SBI_SUCCESS, "SSE injection 2 no error"); + if (ret) + goto done; + + barrier(); + report(test_arg.done == 1, "SSE event handled ok"); + test_arg.done = 0; + + ret = sse_check_state(event_id, SBI_SSE_STATE_REGISTERED); + if (ret) + goto done; + + /* Clear ONESHOT FLAG */ + sse_event_set_attr(event_id, SBI_SSE_ATTR_CONFIG, 0); + + ret = sse_event_unregister(event_id); + report(ret == SBI_SUCCESS, "SSE 
unregister no error"); + if (ret) + goto done; + + sse_check_state(event_id, SBI_SSE_STATE_UNUSED); + +done: + sse_free_stack(args.stack); + report_prefix_pop(); +} + +struct sse_foreign_cpu_test_arg { + bool done; + unsigned int expected_cpu; + unsigned long event_id; +}; + +static void sse_foreign_cpu_handler(void *data, struct pt_regs *regs, unsigned int hartid) +{ + volatile struct sse_foreign_cpu_test_arg *arg = data; + + /* For arg content to be visible */ + smp_rmb(); + if (arg->expected_cpu != current_thread_info()->cpu) + report_fail("Received event on CPU (%d), expected CPU (%d)", + current_thread_info()->cpu, arg->expected_cpu); + + arg->done = true; + /* For arg update to be visible for other CPUs */ + smp_wmb(); +} + +struct sse_local_per_cpu { + struct sse_handler_arg args; + unsigned long ret; +}; + +struct sse_local_data { + unsigned long event_id; + struct sse_local_per_cpu *cpu_args[NR_CPUS]; +}; + +static void sse_register_enable_local(void *data) +{ + struct sse_local_data *local_data = data; + struct sse_local_per_cpu *cpu_arg = local_data->cpu_args[current_thread_info()->cpu]; + + cpu_arg->ret = sse_event_register(local_data->event_id, &cpu_arg->args); + if (cpu_arg->ret) + return; + + cpu_arg->ret = sse_event_enable(local_data->event_id); +} + +static void sse_disable_unregister_local(void *data) +{ + struct sse_local_data *local_data = data; + struct sse_local_per_cpu *cpu_arg = local_data->cpu_args[current_thread_info()->cpu]; + + cpu_arg->ret = sse_event_disable(local_data->event_id); + if (cpu_arg->ret) + return; + + cpu_arg->ret = sse_event_unregister(local_data->event_id); +} + +static void sse_test_inject_local(unsigned long event_id) +{ + int cpu; + unsigned long ret; + struct sse_local_data local_data; + struct sse_local_per_cpu *cpu_arg; + volatile struct sse_foreign_cpu_test_arg test_arg = {.event_id = event_id}; + + report_prefix_push("local_dispatch"); + local_data.event_id = event_id; + + for_each_online_cpu(cpu) { + cpu_arg = 
calloc(1, sizeof(struct sse_handler_arg)); + + cpu_arg->args.stack = sse_alloc_stack(); + cpu_arg->args.handler = sse_foreign_cpu_handler; + cpu_arg->args.handler_data = (void *)&test_arg; + local_data.cpu_args[cpu] = cpu_arg; + } + + on_cpus(sse_register_enable_local, &local_data); + for_each_online_cpu(cpu) { + if (local_data.cpu_args[cpu]->ret) + report_abort("CPU failed to register/enable SSE event"); + + test_arg.expected_cpu = cpu; + /* For test_arg content to be visible for other CPUs */ + smp_wmb(); + ret = sse_event_inject(event_id, cpus[cpu].hartid); + if (ret) + report_abort("CPU failed to register/enable SSE event"); + + while (!test_arg.done) { + /* For test_arg update to be visible */ + smp_rmb(); + } + + test_arg.done = false; + } + + on_cpus(sse_disable_unregister_local, &local_data); + for_each_online_cpu(cpu) { + if (local_data.cpu_args[cpu]->ret) + report_abort("CPU failed to disable/unregister SSE event"); + } + + for_each_online_cpu(cpu) { + cpu_arg = local_data.cpu_args[cpu]; + + sse_free_stack(cpu_arg->args.stack); + } + + report_pass("local event dispatch on all CPUs"); + report_prefix_pop(); + +} + +static void sse_test_inject_global(unsigned long event_id) +{ + unsigned long ret; + unsigned int cpu; + struct sse_handler_arg args; + volatile struct sse_foreign_cpu_test_arg test_arg = {.event_id = event_id}; + enum sbi_sse_state state; + + args.handler = sse_foreign_cpu_handler; + args.handler_data = (void *)&test_arg; + args.stack = sse_alloc_stack(); + + report_prefix_push("global_dispatch"); + + ret = sse_event_register(event_id, &args); + if (ret) + goto done; + + for_each_online_cpu(cpu) { + test_arg.expected_cpu = cpu; + /* For test_arg content to be visible for other CPUs */ + smp_wmb(); + ret = sse_event_set_attr(event_id, SBI_SSE_ATTR_PREFERRED_HART, cpu); + if (ret) { + report_fail("Failed to set preferred hart"); + goto done; + } + + ret = sse_event_enable(event_id); + if (ret) { + report_fail("Failed to enable SSE event"); + goto 
done; + } + + ret = sse_event_inject(event_id, cpu); + if (ret) { + report_fail("Failed to inject event"); + goto done; + } + + while (!test_arg.done) { + /* For shared test_arg structure */ + smp_rmb(); + } + + test_arg.done = false; + + /* Wait for event to be in ENABLED state */ + do { + ret = sse_get_state(event_id, &state); + if (ret) { + report_fail("Failed to get event state"); + goto done; + } + } while (state != SBI_SSE_STATE_ENABLED); + + ret = sse_event_disable(event_id); + if (ret) { + report_fail("Failed to disable SSE event"); + goto done; + } + + report_pass("Global event on CPU %d", cpu); + } + +done: + ret = sse_event_unregister(event_id); + if (ret) + report_fail("Failed to unregister event"); + + sse_free_stack(args.stack); + report_prefix_pop(); +} + +struct priority_test_arg { + unsigned long evt; + bool called; + u32 prio; + struct priority_test_arg *next_evt_arg; + void (*check_func)(struct priority_test_arg *arg); +}; + +static void sse_hi_priority_test_handler(void *arg, struct pt_regs *regs, + unsigned int hartid) +{ + struct priority_test_arg *targ = arg; + struct priority_test_arg *next = targ->next_evt_arg; + + targ->called = 1; + if (next) { + sse_event_inject(next->evt, current_thread_info()->hartid); + if (sse_event_pending(next->evt)) + report_fail("Higher priority event is pending"); + if (!next->called) + report_fail("Higher priority event was not handled"); + } +} + +static void sse_low_priority_test_handler(void *arg, struct pt_regs *regs, + unsigned int hartid) +{ + struct priority_test_arg *targ = arg; + struct priority_test_arg *next = targ->next_evt_arg; + + targ->called = 1; + + if (next) { + sse_event_inject(next->evt, current_thread_info()->hartid); + + if (!sse_event_pending(next->evt)) + report_fail("Lower priority event is pending"); + + if (next->called) + report_fail("Lower priority event %s was handle before %s", + sse_evt_name(next->evt), sse_evt_name(targ->evt)); + } +} + +static void 
sse_test_injection_priority_arg(struct priority_test_arg *in_args, + unsigned int in_args_size, + sse_handler_fn handler, + const char *test_name) +{ + unsigned int i; + int ret; + unsigned long event_id; + struct priority_test_arg *arg; + unsigned int args_size = 0; + struct sse_handler_arg event_args[in_args_size]; + struct priority_test_arg *args[in_args_size]; + void *stack; + struct sse_handler_arg *event_arg; + + report_prefix_push(test_name); + + for (i = 0; i < in_args_size; i++) { + arg = &in_args[i]; + event_id = arg->evt; + if (!sse_evt_can_inject(event_id)) + continue; + + args[args_size] = arg; + args_size++; + } + + if (!args_size) { + report_skip("No event injectable"); + report_prefix_pop(); + goto skip; + } + + for (i = 0; i < args_size; i++) { + arg = args[i]; + event_id = arg->evt; + stack = sse_alloc_stack(); + + event_arg = &event_args[i]; + event_arg->handler = handler; + event_arg->handler_data = (void *)arg; + event_arg->stack = stack; + + if (i < (args_size - 1)) + arg->next_evt_arg = args[i + 1]; + else + arg->next_evt_arg = NULL; + + /* Be sure global events are targeting the current hart */ + sse_global_event_set_current_hart(event_id); + + sse_event_register(event_id, event_arg); + sse_event_set_attr(event_id, SBI_SSE_ATTR_PRIORITY, arg->prio); + sse_event_enable(event_id); + } + + /* Inject first event */ + ret = sse_event_inject(args[0]->evt, current_thread_info()->hartid); + report(ret == SBI_SUCCESS, "SSE injection no error"); + + for (i = 0; i < args_size; i++) { + arg = args[i]; + event_id = arg->evt; + + if (!arg->called) + report_fail("Event %s handler called", sse_evt_name(arg->evt)); + + sse_event_disable(event_id); + sse_event_unregister(event_id); + + event_arg = &event_args[i]; + sse_free_stack(event_arg->stack); + } + +skip: + report_prefix_pop(); +} + +static struct priority_test_arg hi_prio_args[] = { + {.evt = SBI_SSE_EVENT_GLOBAL_SOFTWARE}, + {.evt = SBI_SSE_EVENT_LOCAL_SOFTWARE}, + {.evt = SBI_SSE_EVENT_LOCAL_PMU}, + 
{.evt = SBI_SSE_EVENT_GLOBAL_RAS}, + {.evt = SBI_SSE_EVENT_LOCAL_RAS}, +}; + +static struct priority_test_arg low_prio_args[] = { + {.evt = SBI_SSE_EVENT_LOCAL_RAS}, + {.evt = SBI_SSE_EVENT_GLOBAL_RAS}, + {.evt = SBI_SSE_EVENT_LOCAL_PMU}, + {.evt = SBI_SSE_EVENT_LOCAL_SOFTWARE}, + {.evt = SBI_SSE_EVENT_GLOBAL_SOFTWARE}, +}; + +static struct priority_test_arg prio_args[] = { + {.evt = SBI_SSE_EVENT_GLOBAL_SOFTWARE, .prio = 5}, + {.evt = SBI_SSE_EVENT_LOCAL_SOFTWARE, .prio = 10}, + {.evt = SBI_SSE_EVENT_LOCAL_PMU, .prio = 15}, + {.evt = SBI_SSE_EVENT_GLOBAL_RAS, .prio = 20}, + {.evt = SBI_SSE_EVENT_LOCAL_RAS, .prio = 25}, +}; + +static struct priority_test_arg same_prio_args[] = { + {.evt = SBI_SSE_EVENT_LOCAL_PMU, .prio = 0}, + {.evt = SBI_SSE_EVENT_LOCAL_RAS, .prio = 10}, + {.evt = SBI_SSE_EVENT_LOCAL_SOFTWARE, .prio = 10}, + {.evt = SBI_SSE_EVENT_GLOBAL_SOFTWARE, .prio = 10}, + {.evt = SBI_SSE_EVENT_GLOBAL_RAS, .prio = 20}, +}; + +static void sse_test_injection_priority(void) +{ + report_prefix_push("prio"); + + sse_test_injection_priority_arg(hi_prio_args, ARRAY_SIZE(hi_prio_args), + sse_hi_priority_test_handler, "high"); + + sse_test_injection_priority_arg(low_prio_args, ARRAY_SIZE(low_prio_args), + sse_low_priority_test_handler, "low"); + + sse_test_injection_priority_arg(prio_args, ARRAY_SIZE(prio_args), + sse_low_priority_test_handler, "changed"); + + sse_test_injection_priority_arg(same_prio_args, ARRAY_SIZE(same_prio_args), + sse_low_priority_test_handler, "same_prio_args"); + + report_prefix_pop(); +} + +static bool sse_can_inject(unsigned long event_id) +{ + int ret; + unsigned long status; + + ret = sse_event_get_attr(event_id, SBI_SSE_ATTR_STATUS, &status); + report(ret == 0, "SSE get attr status no error"); + if (ret) + return 0; + + return !!(status & BIT(SBI_SSE_ATTR_STATUS_INJECT_OFFSET)); +} + +static void boot_secondary(void *data) +{ + sse_hart_unmask(); +} + +static void sse_check_mask(void) +{ + int ret; + + /* Upon boot, event are masked, 
check that */ + ret = sse_hart_mask(); + report(ret == SBI_ERR_ALREADY_STARTED, "SSE hart mask at boot time ok"); + + ret = sse_hart_unmask(); + report(ret == SBI_SUCCESS, "SSE hart no error ok"); + ret = sse_hart_unmask(); + report(ret == SBI_ERR_ALREADY_STOPPED, "SSE hart unmask twice error ok"); + + ret = sse_hart_mask(); + report(ret == SBI_SUCCESS, "SSE hart mask no error"); + ret = sse_hart_mask(); + report(ret == SBI_ERR_ALREADY_STARTED, "SSE hart mask twice ok"); +} + +void check_sse(void) +{ + unsigned long i, event; + + report_prefix_push("sse"); + sse_check_mask(); + + /* + * Dummy wakeup of all processors since some of them will be targeted + * by global events without going through the wakeup call as well as + * unmasking all + */ + on_cpus(boot_secondary, NULL); + + if (!sbi_probe(SBI_EXT_SSE)) { + report_skip("SSE extension not available"); + report_prefix_pop(); + return; + } + + for (i = 0; i < ARRAY_SIZE(sse_event_infos); i++) { + event = sse_event_infos[i].event_id; + report_prefix_push(sse_event_infos[i].name); + if (!sse_can_inject(event)) { + report_skip("Event does not support injection"); + report_prefix_pop(); + continue; + } else { + sse_event_infos[i].can_inject = true; + } + sse_test_attr(event); + sse_test_register_error(event); + sse_test_inject_simple(event); + if (sse_event_is_global(event)) + sse_test_inject_global(event); + else + sse_test_inject_local(event); + + report_prefix_pop(); + } + + sse_test_injection_priority(); + + report_prefix_pop(); +} diff --git a/riscv/sbi.c b/riscv/sbi.c index 6f4ddaf1..33d5e40d 100644 --- a/riscv/sbi.c +++ b/riscv/sbi.c @@ -32,6 +32,8 @@ #define HIGH_ADDR_BOUNDARY ((phys_addr_t)1 << 32) +void check_sse(void); + static long __labs(long a) { return __builtin_labs(a); @@ -1451,6 +1453,7 @@ int main(int argc, char **argv) check_hsm(); check_dbcn(); check_susp(); + check_sse(); return report_summary(); }