From patchwork Fri Mar 7 16:15:43 2025
X-Patchwork-Submitter: Clément Léger
X-Patchwork-Id: 14006712
From: Clément Léger
To: kvm@vger.kernel.org, kvm-riscv@lists.infradead.org
Cc: Clément Léger, Andrew Jones, Anup Patel, Atish Patra, Andrew Jones
Subject: [kvm-unit-tests PATCH v8 1/6] kbuild: Allow multiple asm-offsets file to be generated
Date: Fri, 7 Mar 2025 17:15:43 +0100
Message-ID: <20250307161549.1873770-2-cleger@rivosinc.com>
In-Reply-To: <20250307161549.1873770-1-cleger@rivosinc.com>
References: <20250307161549.1873770-1-cleger@rivosinc.com>

In order to allow multiple asm-offsets files to be generated, the include
guard needs to differ between these files. Add an asm_offset_name makefile
macro to obtain an uppercase guard name matching the original asm-offsets
file name.

Signed-off-by: Clément Léger
Reviewed-by: Andrew Jones
---
 scripts/asm-offsets.mak | 22 +++++++++++++++-------
 1 file changed, 15 insertions(+), 7 deletions(-)

diff --git a/scripts/asm-offsets.mak b/scripts/asm-offsets.mak
index 7b64162d..a5fdbf5d 100644
--- a/scripts/asm-offsets.mak
+++ b/scripts/asm-offsets.mak
@@ -15,10 +15,14 @@ define sed-y
	 s:->::; p;}'
 endef
 
+define asm_offset_name
+	$(shell echo $(notdir $(1)) | tr [:lower:]- [:upper:]_)
+endef
+
 define make_asm_offsets
 	(set -e; \
-	 echo "#ifndef __ASM_OFFSETS_H__"; \
-	 echo "#define __ASM_OFFSETS_H__"; \
+	 echo "#ifndef __$(strip $(asm_offset_name))_H__"; \
+	 echo "#define __$(strip $(asm_offset_name))_H__"; \
	 echo "/*"; \
	 echo " * Generated file. DO NOT MODIFY."; \
	 echo " *"; \
@@ -29,12 +33,16 @@ define make_asm_offsets
	 echo "#endif" ) > $@
 endef
 
-$(asm-offsets:.h=.s): $(asm-offsets:.h=.c)
-	$(CC) $(CFLAGS) -fverbose-asm -S -o $@ $<
+define gen_asm_offsets_rules
+$(1).s: $(1).c
+	$(CC) $(CFLAGS) -fverbose-asm -S -o $$@ $$<
+
+$(1).h: $(1).s
+	$$(call make_asm_offsets,$(1))
+	cp -f $$@ lib/generated/
+endef
 
-$(asm-offsets): $(asm-offsets:.h=.s)
-	$(call make_asm_offsets)
-	cp -f $(asm-offsets) lib/generated/
+$(foreach o,$(asm-offsets),$(eval $(call gen_asm_offsets_rules, $(o:.h=))))
 
 OBJDIRS += lib/generated
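[Editor's note: a small illustrative sketch, not part of the patch, of why the
per-file guard matters. The include paths are hypothetical; with the macro
above, lib/riscv/asm-offsets.h keeps the __ASM_OFFSETS_H__ guard while
riscv/sbi-asm-offsets.h (added later in the series) gets __SBI_ASM_OFFSETS_H__.]

	/* Hypothetical consumer of both generated headers (paths illustrative). */
	#include "generated/asm-offsets.h"	/* guard: __ASM_OFFSETS_H__ */
	#include "generated/sbi-asm-offsets.h"	/* guard: __SBI_ASM_OFFSETS_H__ */

	/*
	 * Before this patch both generated files would have used
	 * __ASM_OFFSETS_H__, so the second include above would have been
	 * reduced to nothing by the shared guard and its constants would
	 * never be defined.
	 */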
From patchwork Fri Mar 7 16:15:44 2025
X-Patchwork-Submitter: Clément Léger
X-Patchwork-Id: 14006713
From: Clément Léger
To: kvm@vger.kernel.org, kvm-riscv@lists.infradead.org
Cc: Clément Léger, Andrew Jones, Anup Patel, Atish Patra
Subject: [kvm-unit-tests PATCH v8 2/6] riscv: Set .aux.o files as .PRECIOUS
Date: Fri, 7 Mar 2025 17:15:44 +0100
Message-ID: <20250307161549.1873770-3-cleger@rivosinc.com>
In-Reply-To: <20250307161549.1873770-1-cleger@rivosinc.com>
References: <20250307161549.1873770-1-cleger@rivosinc.com>

When compiling, we need to keep the .aux.o files, otherwise they are
removed after the compilation, which leads to dependent files being
recompiled. Set these files as .PRECIOUS to keep them.
Signed-off-by: Clément Léger
---
 riscv/Makefile | 1 +
 1 file changed, 1 insertion(+)

diff --git a/riscv/Makefile b/riscv/Makefile
index 52718f3f..ae9cf02a 100644
--- a/riscv/Makefile
+++ b/riscv/Makefile
@@ -90,6 +90,7 @@ CFLAGS += -I $(SRCDIR)/lib -I $(SRCDIR)/lib/libfdt -I lib -I $(SRCDIR)/riscv
 asm-offsets = lib/riscv/asm-offsets.h
 include $(SRCDIR)/scripts/asm-offsets.mak
 
+.PRECIOUS: %.aux.o
 %.aux.o: $(SRCDIR)/lib/auxinfo.c
	$(CC) $(CFLAGS) -c -o $@ $< \
		-DPROGNAME=\"$(notdir $(@:.aux.o=.$(exe)))\" -DAUXFLAGS=$(AUXFLAGS)

From patchwork Fri Mar 7 16:15:45 2025
X-Patchwork-Submitter: Clément Léger
X-Patchwork-Id: 14006714
From: Clément Léger
To: kvm@vger.kernel.org, kvm-riscv@lists.infradead.org
Cc: Clément Léger, Andrew Jones, Anup Patel, Atish Patra, Andrew Jones
Subject: [kvm-unit-tests PATCH v8 3/6] riscv: Use asm-offsets to generate SBI_EXT_HSM values
Date: Fri, 7 Mar 2025 17:15:45 +0100
Message-ID: <20250307161549.1873770-4-cleger@rivosinc.com>
In-Reply-To: <20250307161549.1873770-1-cleger@rivosinc.com>
References: <20250307161549.1873770-1-cleger@rivosinc.com>

Replace hardcoded values with generated ones using sbi-asm-offsets. This
allows ASM_SBI_EXT_HSM and ASM_SBI_EXT_HSM_HART_STOP to be used directly
in assembly.
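[Editor's note: an illustrative sketch, not part of the patch, of what the
generated riscv/sbi-asm-offsets.h roughly looks like. The constant values are
the ones previously hardcoded in sbi-asm.S (0x48534d is the HSM extension ID,
1 is SBI_EXT_HSM_HART_STOP); the exact textual format of the generated lines
depends on the sed-y rule in scripts/asm-offsets.mak.]

	/* riscv/sbi-asm-offsets.h -- sketch of the generated output */
	#ifndef __SBI_ASM_OFFSETS_H__
	#define __SBI_ASM_OFFSETS_H__
	/*
	 * Generated file. DO NOT MODIFY.
	 */
	#define ASM_SBI_EXT_HSM 0x48534d	/* SBI_EXT_HSM */
	#define ASM_SBI_EXT_HSM_HART_STOP 1	/* SBI_EXT_HSM_HART_STOP */

	#endif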
Signed-off-by: Clément Léger
Reviewed-by: Andrew Jones
---
 riscv/Makefile          |  2 +-
 riscv/sbi-asm.S         |  6 ++++--
 riscv/sbi-asm-offsets.c | 11 +++++++++++
 riscv/.gitignore        |  1 +
 4 files changed, 17 insertions(+), 3 deletions(-)
 create mode 100644 riscv/sbi-asm-offsets.c
 create mode 100644 riscv/.gitignore

diff --git a/riscv/Makefile b/riscv/Makefile
index ae9cf02a..02d2ac39 100644
--- a/riscv/Makefile
+++ b/riscv/Makefile
@@ -87,7 +87,7 @@ CFLAGS += -ffreestanding
 CFLAGS += -O2
 CFLAGS += -I $(SRCDIR)/lib -I $(SRCDIR)/lib/libfdt -I lib -I $(SRCDIR)/riscv
 
-asm-offsets = lib/riscv/asm-offsets.h
+asm-offsets = lib/riscv/asm-offsets.h riscv/sbi-asm-offsets.h
 include $(SRCDIR)/scripts/asm-offsets.mak
 
 .PRECIOUS: %.aux.o
diff --git a/riscv/sbi-asm.S b/riscv/sbi-asm.S
index f4185496..51f46efd 100644
--- a/riscv/sbi-asm.S
+++ b/riscv/sbi-asm.S
@@ -6,6 +6,8 @@
  */
 #include
 #include
+#include
+#include
 
 #include "sbi-tests.h"
 
@@ -57,8 +59,8 @@ sbi_hsm_check:
 7:	lb	t0, 0(t1)
	pause
	beqz	t0, 7b
-	li	a7, 0x48534d	/* SBI_EXT_HSM */
-	li	a6, 1		/* SBI_EXT_HSM_HART_STOP */
+	li	a7, ASM_SBI_EXT_HSM
+	li	a6, ASM_SBI_EXT_HSM_HART_STOP
	ecall
 8:	pause
	j	8b
diff --git a/riscv/sbi-asm-offsets.c b/riscv/sbi-asm-offsets.c
new file mode 100644
index 00000000..bd37b6a2
--- /dev/null
+++ b/riscv/sbi-asm-offsets.c
@@ -0,0 +1,11 @@
+// SPDX-License-Identifier: GPL-2.0-only
+#include
+#include
+
+int main(void)
+{
+	DEFINE(ASM_SBI_EXT_HSM, SBI_EXT_HSM);
+	DEFINE(ASM_SBI_EXT_HSM_HART_STOP, SBI_EXT_HSM_HART_STOP);
+
+	return 0;
+}
diff --git a/riscv/.gitignore b/riscv/.gitignore
new file mode 100644
index 00000000..0a8c5a36
--- /dev/null
+++ b/riscv/.gitignore
@@ -0,0 +1 @@
+/*-asm-offsets.[hs]

From patchwork Fri Mar 7 16:15:46 2025
X-Patchwork-Submitter: Clément Léger
X-Patchwork-Id: 14006715
From: Clément Léger
To: kvm@vger.kernel.org, kvm-riscv@lists.infradead.org
Cc: Clément Léger, Andrew Jones, Anup Patel, Atish Patra, Andrew Jones
Subject: [kvm-unit-tests PATCH v8 4/6] riscv: lib: Add SBI SSE extension definitions
Date: Fri, 7 Mar 2025 17:15:46 +0100
Message-ID: <20250307161549.1873770-5-cleger@rivosinc.com>
In-Reply-To: <20250307161549.1873770-1-cleger@rivosinc.com>
References: <20250307161549.1873770-1-cleger@rivosinc.com>

Add SBI SSE extension definitions in sbi.h.

Signed-off-by: Clément Léger
Reviewed-by: Andrew Jones
---
 lib/riscv/asm/sbi.h | 106 +++++++++++++++++++++++++++++++++++++++++++-
 1 file changed, 105 insertions(+), 1 deletion(-)

diff --git a/lib/riscv/asm/sbi.h b/lib/riscv/asm/sbi.h
index 2f4d91ef..780c9edd 100644
--- a/lib/riscv/asm/sbi.h
+++ b/lib/riscv/asm/sbi.h
@@ -30,6 +30,7 @@ enum sbi_ext_id {
	SBI_EXT_DBCN = 0x4442434E,
	SBI_EXT_SUSP = 0x53555350,
	SBI_EXT_FWFT = 0x46574654,
+	SBI_EXT_SSE = 0x535345,
 };
 
 enum sbi_ext_base_fid {
@@ -78,7 +79,6 @@ enum sbi_ext_dbcn_fid {
	SBI_EXT_DBCN_CONSOLE_WRITE_BYTE,
 };
 
-
 enum sbi_ext_fwft_fid {
	SBI_EXT_FWFT_SET = 0,
	SBI_EXT_FWFT_GET,
@@ -105,6 +105,110 @@ enum sbi_ext_fwft_fid {
 
 #define SBI_FWFT_SET_FLAG_LOCK	BIT(0)
 
+enum sbi_ext_sse_fid {
+	SBI_EXT_SSE_READ_ATTRS = 0,
+	SBI_EXT_SSE_WRITE_ATTRS,
+	SBI_EXT_SSE_REGISTER,
+	SBI_EXT_SSE_UNREGISTER,
+	SBI_EXT_SSE_ENABLE,
+	SBI_EXT_SSE_DISABLE,
+	SBI_EXT_SSE_COMPLETE,
+	SBI_EXT_SSE_INJECT,
+	SBI_EXT_SSE_HART_UNMASK,
+	SBI_EXT_SSE_HART_MASK,
+};
+
+/* SBI SSE Event Attributes. */
+enum sbi_sse_attr_id {
+	SBI_SSE_ATTR_STATUS = 0x00000000,
+	SBI_SSE_ATTR_PRIORITY = 0x00000001,
+	SBI_SSE_ATTR_CONFIG = 0x00000002,
+	SBI_SSE_ATTR_PREFERRED_HART = 0x00000003,
+	SBI_SSE_ATTR_ENTRY_PC = 0x00000004,
+	SBI_SSE_ATTR_ENTRY_ARG = 0x00000005,
+	SBI_SSE_ATTR_INTERRUPTED_SEPC = 0x00000006,
+	SBI_SSE_ATTR_INTERRUPTED_FLAGS = 0x00000007,
+	SBI_SSE_ATTR_INTERRUPTED_A6 = 0x00000008,
+	SBI_SSE_ATTR_INTERRUPTED_A7 = 0x00000009,
+};
+
+#define SBI_SSE_ATTR_STATUS_STATE_OFFSET	0
+#define SBI_SSE_ATTR_STATUS_STATE_MASK		0x3
+#define SBI_SSE_ATTR_STATUS_PENDING_OFFSET	2
+#define SBI_SSE_ATTR_STATUS_INJECT_OFFSET	3
+
+#define SBI_SSE_ATTR_CONFIG_ONESHOT	BIT(0)
+
+#define SBI_SSE_ATTR_INTERRUPTED_FLAGS_SSTATUS_SPP	BIT(0)
+#define SBI_SSE_ATTR_INTERRUPTED_FLAGS_SSTATUS_SPIE	BIT(1)
+#define SBI_SSE_ATTR_INTERRUPTED_FLAGS_HSTATUS_SPV	BIT(2)
+#define SBI_SSE_ATTR_INTERRUPTED_FLAGS_HSTATUS_SPVP	BIT(3)
+#define SBI_SSE_ATTR_INTERRUPTED_FLAGS_SSTATUS_SPELP	BIT(4)
+#define SBI_SSE_ATTR_INTERRUPTED_FLAGS_SSTATUS_SDT	BIT(5)
+
+enum sbi_sse_state {
+	SBI_SSE_STATE_UNUSED = 0,
+	SBI_SSE_STATE_REGISTERED = 1,
+	SBI_SSE_STATE_ENABLED = 2,
+	SBI_SSE_STATE_RUNNING = 3,
+};
+
+/* SBI SSE Event IDs. */
+/* Range 0x00000000 - 0x0000ffff */
+#define SBI_SSE_EVENT_LOCAL_HIGH_PRIO_RAS		0x00000000
+#define SBI_SSE_EVENT_LOCAL_DOUBLE_TRAP			0x00000001
+#define SBI_SSE_EVENT_LOCAL_RESERVED_0_START		0x00000002
+#define SBI_SSE_EVENT_LOCAL_RESERVED_0_END		0x00003fff
+#define SBI_SSE_EVENT_LOCAL_PLAT_0_START		0x00004000
+#define SBI_SSE_EVENT_LOCAL_PLAT_0_END			0x00007fff
+
+#define SBI_SSE_EVENT_GLOBAL_HIGH_PRIO_RAS		0x00008000
+#define SBI_SSE_EVENT_GLOBAL_RESERVED_0_START		0x00008001
+#define SBI_SSE_EVENT_GLOBAL_RESERVED_0_END		0x0000bfff
+#define SBI_SSE_EVENT_GLOBAL_PLAT_0_START		0x0000c000
+#define SBI_SSE_EVENT_GLOBAL_PLAT_0_END			0x0000ffff
+
+/* Range 0x00010000 - 0x0001ffff */
+#define SBI_SSE_EVENT_LOCAL_PMU_OVERFLOW		0x00010000
+#define SBI_SSE_EVENT_LOCAL_RESERVED_1_START		0x00010001
+#define SBI_SSE_EVENT_LOCAL_RESERVED_1_END		0x00013fff
+#define SBI_SSE_EVENT_LOCAL_PLAT_1_START		0x00014000
+#define SBI_SSE_EVENT_LOCAL_PLAT_1_END			0x00017fff
+
+#define SBI_SSE_EVENT_GLOBAL_RESERVED_1_START		0x00018000
+#define SBI_SSE_EVENT_GLOBAL_RESERVED_1_END		0x0001bfff
+#define SBI_SSE_EVENT_GLOBAL_PLAT_1_START		0x0001c000
+#define SBI_SSE_EVENT_GLOBAL_PLAT_1_END			0x0001ffff
+
+/* Range 0x00100000 - 0x0010ffff */
+#define SBI_SSE_EVENT_LOCAL_LOW_PRIO_RAS		0x00100000
+#define SBI_SSE_EVENT_LOCAL_RESERVED_2_START		0x00100001
+#define SBI_SSE_EVENT_LOCAL_RESERVED_2_END		0x00103fff
+#define SBI_SSE_EVENT_LOCAL_PLAT_2_START		0x00104000
+#define SBI_SSE_EVENT_LOCAL_PLAT_2_END			0x00107fff
+
+#define SBI_SSE_EVENT_GLOBAL_LOW_PRIO_RAS		0x00108000
+#define SBI_SSE_EVENT_GLOBAL_RESERVED_2_START		0x00108001
+#define SBI_SSE_EVENT_GLOBAL_RESERVED_2_END		0x0010bfff
+#define SBI_SSE_EVENT_GLOBAL_PLAT_2_START		0x0010c000
+#define SBI_SSE_EVENT_GLOBAL_PLAT_2_END			0x0010ffff
+
+/* Range 0xffff0000 - 0xffffffff */
+#define SBI_SSE_EVENT_LOCAL_SOFTWARE			0xffff0000
+#define SBI_SSE_EVENT_LOCAL_RESERVED_3_START		0xffff0001
+#define SBI_SSE_EVENT_LOCAL_RESERVED_3_END		0xffff3fff
+#define SBI_SSE_EVENT_LOCAL_PLAT_3_START		0xffff4000
+#define SBI_SSE_EVENT_LOCAL_PLAT_3_END			0xffff7fff
+
+#define SBI_SSE_EVENT_GLOBAL_SOFTWARE			0xffff8000
+#define SBI_SSE_EVENT_GLOBAL_RESERVED_3_START		0xffff8001
+#define SBI_SSE_EVENT_GLOBAL_RESERVED_3_END		0xffffbfff
+#define SBI_SSE_EVENT_GLOBAL_PLAT_3_START		0xffffc000
+#define SBI_SSE_EVENT_GLOBAL_PLAT_3_END			0xffffffff
+
+#define SBI_SSE_EVENT_PLATFORM_BIT	BIT(14)
+#define SBI_SSE_EVENT_GLOBAL_BIT	BIT(15)
+
 struct sbiret {
	long error;
	long value;
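[Editor's note: within each 16-bit range above the event ID encoding is
regular: bit 15 distinguishes global from local events and bit 14 marks the
platform-specific ranges, which is what the two BIT() defines capture. A
minimal sketch of how they can be used follows (the global check is added as
sbi_sse_event_is_global() in the next patch); the helpers assume the defines
above and BIT() from the library headers.]

	/* Illustrative helpers, not part of the patch. */
	static inline bool sse_event_is_global(uint32_t event_id)
	{
		/* bit 15 set -> global event, e.g. SBI_SSE_EVENT_GLOBAL_SOFTWARE */
		return !!(event_id & SBI_SSE_EVENT_GLOBAL_BIT);
	}

	static inline bool sse_event_is_platform(uint32_t event_id)
	{
		/* bit 14 set -> platform-specific event ID range */
		return !!(event_id & SBI_SSE_EVENT_PLATFORM_BIT);
	}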
From patchwork Fri Mar 7 16:15:47 2025
X-Patchwork-Submitter: Clément Léger
X-Patchwork-Id: 14006716
From: Clément Léger
To: kvm@vger.kernel.org, kvm-riscv@lists.infradead.org
Cc: Clément Léger, Andrew Jones, Anup Patel, Atish Patra, Andrew Jones
Subject: [kvm-unit-tests PATCH v8 5/6] lib: riscv: Add SBI SSE support
Date: Fri, 7 Mar 2025 17:15:47 +0100
Message-ID: <20250307161549.1873770-6-cleger@rivosinc.com>
In-Reply-To: <20250307161549.1873770-1-cleger@rivosinc.com>
References: <20250307161549.1873770-1-cleger@rivosinc.com>

Add support for registering and handling SSE events. This will be used
by the sbi test as well as upcoming double trap tests.

Signed-off-by: Clément Léger
Reviewed-by: Andrew Jones
---
 riscv/Makefile          |   1 +
 lib/riscv/asm/csr.h     |   1 +
 lib/riscv/asm/sbi.h     |  38 ++++++++++++++-
 lib/riscv/sbi-sse-asm.S | 103 ++++++++++++++++++++++++++++++++++++++++
 lib/riscv/asm-offsets.c |   9 ++++
 lib/riscv/sbi.c         |  76 +++++++++++++++++++++++++++++
 6 files changed, 227 insertions(+), 1 deletion(-)
 create mode 100644 lib/riscv/sbi-sse-asm.S

diff --git a/riscv/Makefile b/riscv/Makefile
index 02d2ac39..16fc125b 100644
--- a/riscv/Makefile
+++ b/riscv/Makefile
@@ -43,6 +43,7 @@ cflatobjs += lib/riscv/setup.o
 cflatobjs += lib/riscv/smp.o
 cflatobjs += lib/riscv/stack.o
 cflatobjs += lib/riscv/timer.o
+cflatobjs += lib/riscv/sbi-sse-asm.o
 ifeq ($(ARCH),riscv32)
 cflatobjs += lib/ldiv32.o
 endif
diff --git a/lib/riscv/asm/csr.h b/lib/riscv/asm/csr.h
index c7fc87a9..3e4b5fca 100644
--- a/lib/riscv/asm/csr.h
+++ b/lib/riscv/asm/csr.h
@@ -17,6 +17,7 @@
 #define CSR_TIME	0xc01
 
 #define SR_SIE		_AC(0x00000002, UL)
+#define SR_SPP		_AC(0x00000100, UL)
 
 /* Exception cause high bit - is an interrupt if set */
 #define CAUSE_IRQ_FLAG	(_AC(1, UL) << (__riscv_xlen - 1))
diff --git a/lib/riscv/asm/sbi.h b/lib/riscv/asm/sbi.h
index 780c9edd..acef8a5e 100644
--- a/lib/riscv/asm/sbi.h
+++ b/lib/riscv/asm/sbi.h
@@ -230,5 +230,41 @@ struct sbiret sbi_send_ipi_broadcast(void);
 struct sbiret sbi_set_timer(unsigned long stime_value);
 long sbi_probe(int ext);
 
-#endif /* !__ASSEMBLER__ */
+typedef void (*sbi_sse_handler_fn)(void *data, struct pt_regs *regs, unsigned int hartid);
+
+struct sbi_sse_handler_arg {
+	unsigned long reg_tmp;
+	sbi_sse_handler_fn handler;
+	void *handler_data;
+	void *stack;
+};
+
+extern void sbi_sse_entry(void);
+
+static inline bool sbi_sse_event_is_global(uint32_t event_id)
+{
+	return !!(event_id & SBI_SSE_EVENT_GLOBAL_BIT);
+}
+
+struct sbiret sbi_sse_read_attrs_raw(unsigned long event_id, unsigned long base_attr_id,
+				     unsigned long attr_count, unsigned long phys_lo,
+				     unsigned long phys_hi);
+struct sbiret sbi_sse_read_attrs(unsigned long event_id, unsigned long base_attr_id,
+				 unsigned long attr_count, unsigned long *values);
+struct sbiret sbi_sse_write_attrs_raw(unsigned long event_id, unsigned long base_attr_id,
+				      unsigned long attr_count, unsigned long phys_lo,
+				      unsigned long phys_hi);
+struct sbiret sbi_sse_write_attrs(unsigned long event_id, unsigned long base_attr_id,
+				  unsigned long attr_count, unsigned long *values);
+struct sbiret sbi_sse_register_raw(unsigned long event_id, unsigned long entry_pc,
+				   unsigned long entry_arg);
+struct sbiret sbi_sse_register(unsigned long
event_id, struct sbi_sse_handler_arg *arg); +struct sbiret sbi_sse_unregister(unsigned long event_id); +struct sbiret sbi_sse_enable(unsigned long event_id); +struct sbiret sbi_sse_disable(unsigned long event_id); +struct sbiret sbi_sse_hart_mask(void); +struct sbiret sbi_sse_hart_unmask(void); +struct sbiret sbi_sse_inject(unsigned long event_id, unsigned long hart_id); + +#endif /* !__ASSEMBLY__ */ #endif /* _ASMRISCV_SBI_H_ */ diff --git a/lib/riscv/sbi-sse-asm.S b/lib/riscv/sbi-sse-asm.S new file mode 100644 index 00000000..e4efd1ff --- /dev/null +++ b/lib/riscv/sbi-sse-asm.S @@ -0,0 +1,103 @@ +/* SPDX-License-Identifier: GPL-2.0 */ +/* + * RISC-V SSE events entry point. + * + * Copyright (C) 2025, Rivos Inc., Clément Léger + */ +#define __ASSEMBLY__ +#include +#include +#include +#include + +.section .text +.global sbi_sse_entry +sbi_sse_entry: + /* Save stack temporarily */ + REG_S sp, SBI_SSE_REG_TMP(a7) + /* Set entry stack */ + REG_L sp, SBI_SSE_HANDLER_STACK(a7) + + addi sp, sp, -(PT_SIZE) + REG_S ra, PT_RA(sp) + REG_S s0, PT_S0(sp) + REG_S s1, PT_S1(sp) + REG_S s2, PT_S2(sp) + REG_S s3, PT_S3(sp) + REG_S s4, PT_S4(sp) + REG_S s5, PT_S5(sp) + REG_S s6, PT_S6(sp) + REG_S s7, PT_S7(sp) + REG_S s8, PT_S8(sp) + REG_S s9, PT_S9(sp) + REG_S s10, PT_S10(sp) + REG_S s11, PT_S11(sp) + REG_S tp, PT_TP(sp) + REG_S t0, PT_T0(sp) + REG_S t1, PT_T1(sp) + REG_S t2, PT_T2(sp) + REG_S t3, PT_T3(sp) + REG_S t4, PT_T4(sp) + REG_S t5, PT_T5(sp) + REG_S t6, PT_T6(sp) + REG_S gp, PT_GP(sp) + REG_S a0, PT_A0(sp) + REG_S a1, PT_A1(sp) + REG_S a2, PT_A2(sp) + REG_S a3, PT_A3(sp) + REG_S a4, PT_A4(sp) + REG_S a5, PT_A5(sp) + csrr a1, CSR_SEPC + REG_S a1, PT_EPC(sp) + csrr a2, CSR_SSTATUS + REG_S a2, PT_STATUS(sp) + + REG_L a0, SBI_SSE_REG_TMP(a7) + REG_S a0, PT_SP(sp) + + REG_L t0, SBI_SSE_HANDLER(a7) + REG_L a0, SBI_SSE_HANDLER_DATA(a7) + mv a1, sp + mv a2, a6 + jalr t0 + + REG_L a1, PT_EPC(sp) + REG_L a2, PT_STATUS(sp) + csrw CSR_SEPC, a1 + csrw CSR_SSTATUS, a2 + + REG_L ra, PT_RA(sp) + REG_L s0, PT_S0(sp) + REG_L s1, PT_S1(sp) + REG_L s2, PT_S2(sp) + REG_L s3, PT_S3(sp) + REG_L s4, PT_S4(sp) + REG_L s5, PT_S5(sp) + REG_L s6, PT_S6(sp) + REG_L s7, PT_S7(sp) + REG_L s8, PT_S8(sp) + REG_L s9, PT_S9(sp) + REG_L s10, PT_S10(sp) + REG_L s11, PT_S11(sp) + REG_L tp, PT_TP(sp) + REG_L t0, PT_T0(sp) + REG_L t1, PT_T1(sp) + REG_L t2, PT_T2(sp) + REG_L t3, PT_T3(sp) + REG_L t4, PT_T4(sp) + REG_L t5, PT_T5(sp) + REG_L t6, PT_T6(sp) + REG_L gp, PT_GP(sp) + REG_L a0, PT_A0(sp) + REG_L a1, PT_A1(sp) + REG_L a2, PT_A2(sp) + REG_L a3, PT_A3(sp) + REG_L a4, PT_A4(sp) + REG_L a5, PT_A5(sp) + + REG_L sp, PT_SP(sp) + + li a7, ASM_SBI_EXT_SSE + li a6, ASM_SBI_EXT_SSE_COMPLETE + ecall + diff --git a/lib/riscv/asm-offsets.c b/lib/riscv/asm-offsets.c index 6c511c14..a96c6e97 100644 --- a/lib/riscv/asm-offsets.c +++ b/lib/riscv/asm-offsets.c @@ -3,6 +3,7 @@ #include #include #include +#include #include int main(void) @@ -63,5 +64,13 @@ int main(void) OFFSET(THREAD_INFO_HARTID, thread_info, hartid); DEFINE(THREAD_INFO_SIZE, sizeof(struct thread_info)); + DEFINE(ASM_SBI_EXT_SSE, SBI_EXT_SSE); + DEFINE(ASM_SBI_EXT_SSE_COMPLETE, SBI_EXT_SSE_COMPLETE); + + OFFSET(SBI_SSE_REG_TMP, sbi_sse_handler_arg, reg_tmp); + OFFSET(SBI_SSE_HANDLER, sbi_sse_handler_arg, handler); + OFFSET(SBI_SSE_HANDLER_DATA, sbi_sse_handler_arg, handler_data); + OFFSET(SBI_SSE_HANDLER_STACK, sbi_sse_handler_arg, stack); + return 0; } diff --git a/lib/riscv/sbi.c b/lib/riscv/sbi.c index 02dd338c..1752c916 100644 --- a/lib/riscv/sbi.c +++ b/lib/riscv/sbi.c @@ -2,6 
+2,7 @@
 #include
 #include
 #include
+#include
 #include
@@ -31,6 +32,81 @@ struct sbiret sbi_ecall(int ext, int fid, unsigned long arg0,
	return ret;
 }
 
+struct sbiret sbi_sse_read_attrs_raw(unsigned long event_id, unsigned long base_attr_id,
+				     unsigned long attr_count, unsigned long phys_lo,
+				     unsigned long phys_hi)
+{
+	return sbi_ecall(SBI_EXT_SSE, SBI_EXT_SSE_READ_ATTRS, event_id, base_attr_id, attr_count,
+			 phys_lo, phys_hi, 0);
+}
+
+struct sbiret sbi_sse_read_attrs(unsigned long event_id, unsigned long base_attr_id,
+				 unsigned long attr_count, unsigned long *values)
+{
+	phys_addr_t p = virt_to_phys(values);
+
+	return sbi_sse_read_attrs_raw(event_id, base_attr_id, attr_count, lower_32_bits(p),
+				      upper_32_bits(p));
+}
+
+struct sbiret sbi_sse_write_attrs_raw(unsigned long event_id, unsigned long base_attr_id,
+				      unsigned long attr_count, unsigned long phys_lo,
+				      unsigned long phys_hi)
+{
+	return sbi_ecall(SBI_EXT_SSE, SBI_EXT_SSE_WRITE_ATTRS, event_id, base_attr_id, attr_count,
+			 phys_lo, phys_hi, 0);
+}
+
+struct sbiret sbi_sse_write_attrs(unsigned long event_id, unsigned long base_attr_id,
+				  unsigned long attr_count, unsigned long *values)
+{
+	phys_addr_t p = virt_to_phys(values);
+
+	return sbi_sse_write_attrs_raw(event_id, base_attr_id, attr_count, lower_32_bits(p),
+				       upper_32_bits(p));
+}
+
+struct sbiret sbi_sse_register_raw(unsigned long event_id, unsigned long entry_pc,
+				   unsigned long entry_arg)
+{
+	return sbi_ecall(SBI_EXT_SSE, SBI_EXT_SSE_REGISTER, event_id, entry_pc, entry_arg, 0, 0, 0);
+}
+
+struct sbiret sbi_sse_register(unsigned long event_id, struct sbi_sse_handler_arg *arg)
+{
+	return sbi_sse_register_raw(event_id, (unsigned long) sbi_sse_entry, (unsigned long) arg);
+}
+
+struct sbiret sbi_sse_unregister(unsigned long event_id)
+{
+	return sbi_ecall(SBI_EXT_SSE, SBI_EXT_SSE_UNREGISTER, event_id, 0, 0, 0, 0, 0);
+}
+
+struct sbiret sbi_sse_enable(unsigned long event_id)
+{
+	return sbi_ecall(SBI_EXT_SSE, SBI_EXT_SSE_ENABLE, event_id, 0, 0, 0, 0, 0);
+}
+
+struct sbiret sbi_sse_disable(unsigned long event_id)
+{
+	return sbi_ecall(SBI_EXT_SSE, SBI_EXT_SSE_DISABLE, event_id, 0, 0, 0, 0, 0);
+}
+
+struct sbiret sbi_sse_hart_mask(void)
+{
+	return sbi_ecall(SBI_EXT_SSE, SBI_EXT_SSE_HART_MASK, 0, 0, 0, 0, 0, 0);
+}
+
+struct sbiret sbi_sse_hart_unmask(void)
+{
+	return sbi_ecall(SBI_EXT_SSE, SBI_EXT_SSE_HART_UNMASK, 0, 0, 0, 0, 0, 0);
+}
+
+struct sbiret sbi_sse_inject(unsigned long event_id, unsigned long hart_id)
+{
+	return sbi_ecall(SBI_EXT_SSE, SBI_EXT_SSE_INJECT, event_id, hart_id, 0, 0, 0, 0);
+}
+
 void sbi_shutdown(void)
 {
	sbi_ecall(SBI_EXT_SRST, 0, 0, 0, 0, 0, 0, 0);
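[Editor's note: a minimal usage sketch, not part of the patch, showing how the
new API fits together for a local event: fill a struct sbi_sse_handler_arg
with a handler and a dedicated stack, register and enable the event, then
inject it. Error handling is simplified and the stack_top parameter is assumed
to point at the top of a dedicated stack area; the real tests in the next
patch use sse_alloc_stack() and report()-based checks instead.]

	/* Handler runs from sbi_sse_entry on the dedicated stack; after it
	 * returns, sbi_sse_entry issues the SBI_EXT_SSE_COMPLETE ecall. */
	static void my_sse_handler(void *data, struct pt_regs *regs, unsigned int hartid)
	{
		WRITE_ONCE(*(bool *)data, true);
	}

	static bool sse_local_software_smoke_test(void *stack_top)
	{
		static bool fired;
		static struct sbi_sse_handler_arg arg;	/* must stay valid while registered */
		struct sbiret ret;

		arg.handler = my_sse_handler;
		arg.handler_data = &fired;
		arg.stack = stack_top;

		ret = sbi_sse_register(SBI_SSE_EVENT_LOCAL_SOFTWARE, &arg);
		if (ret.error)
			return false;

		ret = sbi_sse_enable(SBI_SSE_EVENT_LOCAL_SOFTWARE);
		if (ret.error)
			return false;

		/* Local events run on the hart they are injected on. */
		ret = sbi_sse_inject(SBI_SSE_EVENT_LOCAL_SOFTWARE,
				     current_thread_info()->hartid);
		if (ret.error)
			return false;

		while (!READ_ONCE(fired))
			;

		sbi_sse_disable(SBI_SSE_EVENT_LOCAL_SOFTWARE);
		sbi_sse_unregister(SBI_SSE_EVENT_LOCAL_SOFTWARE);
		return true;
	}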
From patchwork Fri Mar 7 16:15:48 2025
X-Patchwork-Submitter: Clément Léger
X-Patchwork-Id: 14006717
([2a01:e0a:e17:9700:16d2:7456:6634:9626]) by smtp.gmail.com with ESMTPSA id 5b1f17b1804b1-43bdd8daadbsm55496245e9.21.2025.03.07.08.15.57 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Fri, 07 Mar 2025 08:15:57 -0800 (PST) From: =?utf-8?b?Q2zDqW1lbnQgTMOpZ2Vy?= To: kvm@vger.kernel.org, kvm-riscv@lists.infradead.org Cc: =?utf-8?b?Q2zDqW1lbnQgTMOpZ2Vy?= , Andrew Jones , Anup Patel , Atish Patra Subject: [kvm-unit-tests PATCH v8 6/6] riscv: sbi: Add SSE extension tests Date: Fri, 7 Mar 2025 17:15:48 +0100 Message-ID: <20250307161549.1873770-7-cleger@rivosinc.com> X-Mailer: git-send-email 2.47.2 In-Reply-To: <20250307161549.1873770-1-cleger@rivosinc.com> References: <20250307161549.1873770-1-cleger@rivosinc.com> Precedence: bulk X-Mailing-List: kvm@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Add SBI SSE extension tests for the following features: - Test attributes errors (invalid values, RO, etc) - Registration errors - Simple events (register, enable, inject) - Events with different priorities - Global events dispatch on different harts - Local events on all harts - Hart mask/unmask events Signed-off-by: Clément Léger --- riscv/Makefile | 1 + riscv/sbi-tests.h | 1 + riscv/sbi-sse.c | 1215 +++++++++++++++++++++++++++++++++++++++++++++ riscv/sbi.c | 2 + 4 files changed, 1219 insertions(+) create mode 100644 riscv/sbi-sse.c diff --git a/riscv/Makefile b/riscv/Makefile index 16fc125b..4fe2f1bb 100644 --- a/riscv/Makefile +++ b/riscv/Makefile @@ -18,6 +18,7 @@ tests += $(TEST_DIR)/sieve.$(exe) all: $(tests) $(TEST_DIR)/sbi-deps = $(TEST_DIR)/sbi-asm.o $(TEST_DIR)/sbi-fwft.o +$(TEST_DIR)/sbi-deps += $(TEST_DIR)/sbi-sse.o # When built for EFI sieve needs extra memory, run with e.g. '-m 256' on QEMU $(TEST_DIR)/sieve.$(exe): AUXFLAGS = 0x1 diff --git a/riscv/sbi-tests.h b/riscv/sbi-tests.h index b081464d..a71da809 100644 --- a/riscv/sbi-tests.h +++ b/riscv/sbi-tests.h @@ -71,6 +71,7 @@ sbiret_report(ret, expected_error, expected_value, "check sbi.error and sbi.value") void sbi_bad_fid(int ext); +void check_sse(void); #endif /* __ASSEMBLER__ */ #endif /* _RISCV_SBI_TESTS_H_ */ diff --git a/riscv/sbi-sse.c b/riscv/sbi-sse.c new file mode 100644 index 00000000..7bd58b8b --- /dev/null +++ b/riscv/sbi-sse.c @@ -0,0 +1,1215 @@ +// SPDX-License-Identifier: GPL-2.0-only +/* + * SBI SSE testsuite + * + * Copyright (C) 2025, Rivos Inc., Clément Léger + */ +#include +#include +#include +#include +#include +#include +#include + +#include +#include +#include +#include +#include +#include +#include +#include + +#include "sbi-tests.h" + +#define SSE_STACK_SIZE PAGE_SIZE + +struct sse_event_info { + uint32_t event_id; + const char *name; + bool can_inject; +}; + +static struct sse_event_info sse_event_infos[] = { + { + .event_id = SBI_SSE_EVENT_LOCAL_HIGH_PRIO_RAS, + .name = "local_high_prio_ras", + }, + { + .event_id = SBI_SSE_EVENT_LOCAL_DOUBLE_TRAP, + .name = "double_trap", + }, + { + .event_id = SBI_SSE_EVENT_GLOBAL_HIGH_PRIO_RAS, + .name = "global_high_prio_ras", + }, + { + .event_id = SBI_SSE_EVENT_LOCAL_PMU_OVERFLOW, + .name = "local_pmu_overflow", + }, + { + .event_id = SBI_SSE_EVENT_LOCAL_LOW_PRIO_RAS, + .name = "local_low_prio_ras", + }, + { + .event_id = SBI_SSE_EVENT_GLOBAL_LOW_PRIO_RAS, + .name = "global_low_prio_ras", + }, + { + .event_id = SBI_SSE_EVENT_LOCAL_SOFTWARE, + .name = "local_software", + }, + { + .event_id = SBI_SSE_EVENT_GLOBAL_SOFTWARE, + .name = "global_software", + }, +}; + +static const char *const attr_names[] = { + [SBI_SSE_ATTR_STATUS] = 
"status", + [SBI_SSE_ATTR_PRIORITY] = "priority", + [SBI_SSE_ATTR_CONFIG] = "config", + [SBI_SSE_ATTR_PREFERRED_HART] = "preferred_hart", + [SBI_SSE_ATTR_ENTRY_PC] = "entry_pc", + [SBI_SSE_ATTR_ENTRY_ARG] = "entry_arg", + [SBI_SSE_ATTR_INTERRUPTED_SEPC] = "interrupted_sepc", + [SBI_SSE_ATTR_INTERRUPTED_FLAGS] = "interrupted_flags", + [SBI_SSE_ATTR_INTERRUPTED_A6] = "interrupted_a6", + [SBI_SSE_ATTR_INTERRUPTED_A7] = "interrupted_a7", +}; + +static const unsigned long ro_attrs[] = { + SBI_SSE_ATTR_STATUS, + SBI_SSE_ATTR_ENTRY_PC, + SBI_SSE_ATTR_ENTRY_ARG, +}; + +static const unsigned long interrupted_attrs[] = { + SBI_SSE_ATTR_INTERRUPTED_SEPC, + SBI_SSE_ATTR_INTERRUPTED_FLAGS, + SBI_SSE_ATTR_INTERRUPTED_A6, + SBI_SSE_ATTR_INTERRUPTED_A7, +}; + +static const unsigned long interrupted_flags[] = { + SBI_SSE_ATTR_INTERRUPTED_FLAGS_SSTATUS_SPP, + SBI_SSE_ATTR_INTERRUPTED_FLAGS_SSTATUS_SPIE, + SBI_SSE_ATTR_INTERRUPTED_FLAGS_SSTATUS_SPELP, + SBI_SSE_ATTR_INTERRUPTED_FLAGS_SSTATUS_SDT, + SBI_SSE_ATTR_INTERRUPTED_FLAGS_HSTATUS_SPV, + SBI_SSE_ATTR_INTERRUPTED_FLAGS_HSTATUS_SPVP, +}; + +static struct sse_event_info *sse_event_get_info(uint32_t event_id) +{ + int i; + + for (i = 0; i < ARRAY_SIZE(sse_event_infos); i++) { + if (sse_event_infos[i].event_id == event_id) + return &sse_event_infos[i]; + } + + assert_msg(false, "Invalid event id: %d", event_id); +} + +static const char *sse_event_name(uint32_t event_id) +{ + return sse_event_get_info(event_id)->name; +} + +static bool sse_event_can_inject(uint32_t event_id) +{ + return sse_event_get_info(event_id)->can_inject; +} + +static struct sbiret sse_get_event_status_field(uint32_t event_id, unsigned long mask, + unsigned long shift, unsigned long *value) +{ + struct sbiret ret; + unsigned long status; + + ret = sbi_sse_read_attrs(event_id, SBI_SSE_ATTR_STATUS, 1, &status); + if (ret.error) { + sbiret_report_error(&ret, SBI_SUCCESS, "Get event status"); + return ret; + } + + *value = (status & mask) >> shift; + + return ret; +} + +static struct sbiret sse_event_get_state(uint32_t event_id, enum sbi_sse_state *state) +{ + unsigned long status = 0; + struct sbiret ret; + + ret = sse_get_event_status_field(event_id, SBI_SSE_ATTR_STATUS_STATE_MASK, + SBI_SSE_ATTR_STATUS_STATE_OFFSET, &status); + *state = status; + + return ret; +} + +static unsigned long sse_global_event_set_current_hart(uint32_t event_id) +{ + struct sbiret ret; + unsigned long current_hart = current_thread_info()->hartid; + + if (!sbi_sse_event_is_global(event_id)) + return SBI_ERR_INVALID_PARAM; + + ret = sbi_sse_write_attrs(event_id, SBI_SSE_ATTR_PREFERRED_HART, 1, ¤t_hart); + if (sbiret_report_error(&ret, SBI_SUCCESS, "Set preferred hart")) + return ret.error; + + return 0; +} + +static bool sse_check_state(uint32_t event_id, unsigned long expected_state) +{ + struct sbiret ret; + enum sbi_sse_state state; + + ret = sse_event_get_state(event_id, &state); + if (ret.error) + return false; + + return report(state == expected_state, "event status == %ld", expected_state); +} + +static bool sse_event_pending(uint32_t event_id) +{ + bool pending = 0; + + sse_get_event_status_field(event_id, BIT(SBI_SSE_ATTR_STATUS_PENDING_OFFSET), + SBI_SSE_ATTR_STATUS_PENDING_OFFSET, (unsigned long*)&pending); + + return pending; +} + +static void *sse_alloc_stack(void) +{ + /* + * We assume that SSE_STACK_SIZE always fit in one page. This page will + * always be decremented before storing anything on it in sse-entry.S. 
+ */ + assert(SSE_STACK_SIZE <= PAGE_SIZE); + + return (alloc_page() + SSE_STACK_SIZE); +} + +static void sse_free_stack(void *stack) +{ + free_page(stack - SSE_STACK_SIZE); +} + +static void sse_read_write_test(uint32_t event_id, unsigned long attr, unsigned long attr_count, + unsigned long *value, long expected_error, const char *str) +{ + struct sbiret ret; + + ret = sbi_sse_read_attrs(event_id, attr, attr_count, value); + sbiret_report_error(&ret, expected_error, "Read %s error", str); + + ret = sbi_sse_write_attrs(event_id, attr, attr_count, value); + sbiret_report_error(&ret, expected_error, "Write %s error", str); +} + +#define ALL_ATTRS_COUNT (SBI_SSE_ATTR_INTERRUPTED_A7 + 1) + +static void sse_test_attrs(uint32_t event_id) +{ + unsigned long value = 0; + struct sbiret ret; + void *ptr; + unsigned long values[ALL_ATTRS_COUNT]; + unsigned int i; + const char *invalid_hart_str; + const char *attr_name; + + report_prefix_push("attrs"); + + for (i = 0; i < ARRAY_SIZE(ro_attrs); i++) { + ret = sbi_sse_write_attrs(event_id, ro_attrs[i], 1, &value); + sbiret_report_error(&ret, SBI_ERR_DENIED, "RO attribute %s not writable", + attr_names[ro_attrs[i]]); + } + + ret = sbi_sse_read_attrs(event_id, SBI_SSE_ATTR_STATUS, ALL_ATTRS_COUNT, values); + sbiret_report_error(&ret, SBI_SUCCESS, "Read multiple attributes"); + + for (i = SBI_SSE_ATTR_STATUS; i <= SBI_SSE_ATTR_INTERRUPTED_A7; i++) { + ret = sbi_sse_read_attrs(event_id, i, 1, &value); + attr_name = attr_names[i]; + + sbiret_report_error(&ret, SBI_SUCCESS, "Read single attribute %s", attr_name); + if (values[i] != value) + report_fail("Attribute 0x%x single value read (0x%lx) differs from the one read with multiple attributes (0x%lx)", + i, value, values[i]); + /* + * Preferred hart reset value is defined by SBI vendor + */ + if(i != SBI_SSE_ATTR_PREFERRED_HART) { + /* + * Specification states that injectable bit is implementation dependent + * but other bits are zero-initialized. 
+ */ + if (i == SBI_SSE_ATTR_STATUS) + value &= ~BIT(SBI_SSE_ATTR_STATUS_INJECT_OFFSET); + report(value == 0, "Attribute %s reset value is 0, found %lx", attr_name, + value); + } + } + +#if __riscv_xlen > 32 + value = BIT(32); + ret = sbi_sse_write_attrs(event_id, SBI_SSE_ATTR_PRIORITY, 1, &value); + sbiret_report_error(&ret, SBI_ERR_INVALID_PARAM, "Write invalid prio > 0xFFFFFFFF error"); +#endif + + value = ~SBI_SSE_ATTR_CONFIG_ONESHOT; + ret = sbi_sse_write_attrs(event_id, SBI_SSE_ATTR_CONFIG, 1, &value); + sbiret_report_error(&ret, SBI_ERR_INVALID_PARAM, "Write invalid config value error"); + + if (sbi_sse_event_is_global(event_id)) { + invalid_hart_str = getenv("INVALID_HART_ID"); + if (!invalid_hart_str) + value = 0xFFFFFFFFUL; + else + value = strtoul(invalid_hart_str, NULL, 0); + + ret = sbi_sse_write_attrs(event_id, SBI_SSE_ATTR_PREFERRED_HART, 1, &value); + sbiret_report_error(&ret, SBI_ERR_INVALID_PARAM, "Set invalid hart id error"); + } else { + /* Set Hart on local event -> RO */ + value = current_thread_info()->hartid; + ret = sbi_sse_write_attrs(event_id, SBI_SSE_ATTR_PREFERRED_HART, 1, &value); + sbiret_report_error(&ret, SBI_ERR_DENIED, + "Set hart id on local event error"); + } + + /* Set/get flags, sepc, a6, a7 */ + for (i = 0; i < ARRAY_SIZE(interrupted_attrs); i++) { + attr_name = attr_names[interrupted_attrs[i]]; + ret = sbi_sse_read_attrs(event_id, interrupted_attrs[i], 1, &value); + sbiret_report_error(&ret, SBI_SUCCESS, "Get interrupted %s", attr_name); + + value = ARRAY_SIZE(interrupted_attrs) - i; + ret = sbi_sse_write_attrs(event_id, interrupted_attrs[i], 1, &value); + sbiret_report_error(&ret, SBI_ERR_INVALID_STATE, + "Set attribute %s invalid state error", attr_name); + } + + sse_read_write_test(event_id, SBI_SSE_ATTR_STATUS, 0, &value, SBI_ERR_INVALID_PARAM, + "attribute attr_count == 0"); + sse_read_write_test(event_id, SBI_SSE_ATTR_INTERRUPTED_A7 + 1, 1, &value, SBI_ERR_BAD_RANGE, + "invalid attribute"); + + /* Misaligned pointer address */ + ptr = (void *)&value; + ptr += 1; + sse_read_write_test(event_id, SBI_SSE_ATTR_STATUS, 1, ptr, SBI_ERR_INVALID_ADDRESS, + "attribute with invalid address"); + + report_prefix_pop(); +} + +static void sse_test_register_error(uint32_t event_id) +{ + struct sbiret ret; + + report_prefix_push("register"); + + ret = sbi_sse_unregister(event_id); + sbiret_report_error(&ret, SBI_ERR_INVALID_STATE, "unregister non-registered event"); + + ret = sbi_sse_register_raw(event_id, 0x1, 0); + sbiret_report_error(&ret, SBI_ERR_INVALID_PARAM, "register misaligned entry"); + + ret = sbi_sse_register_raw(event_id, (unsigned long)sbi_sse_entry, 0); + sbiret_report_error(&ret, SBI_SUCCESS, "register"); + if (ret.error) + goto done; + + ret = sbi_sse_register_raw(event_id, (unsigned long)sbi_sse_entry, 0); + sbiret_report_error(&ret, SBI_ERR_INVALID_STATE, "register used event failure"); + + ret = sbi_sse_unregister(event_id); + sbiret_report_error(&ret, SBI_SUCCESS, "unregister"); + +done: + report_prefix_pop(); +} + +struct sse_simple_test_arg { + bool done; + unsigned long expected_a6; + uint32_t event_id; +}; + +#if __riscv_xlen > 32 + +struct alias_test_params { + unsigned long event_id; + unsigned long attr_id; + unsigned long attr_count; + const char *str; +}; + +static void test_alias(uint32_t event_id) +{ + struct alias_test_params *write, *read; + unsigned long write_value, read_value; + struct sbiret ret; + bool err = false; + int r, w; + struct alias_test_params params[] = { + {event_id, SBI_SSE_ATTR_INTERRUPTED_A6, 1, "non 
aliased"}, + {BIT(32) + event_id, SBI_SSE_ATTR_INTERRUPTED_A6, 1, "aliased event_id"}, + {event_id, BIT(32) + SBI_SSE_ATTR_INTERRUPTED_A6, 1, "aliased attr_id"}, + {event_id, SBI_SSE_ATTR_INTERRUPTED_A6, BIT(32) + 1, "aliased attr_count"}, + }; + + report_prefix_push("alias"); + for (w = 0; w < ARRAY_SIZE(params); w++) { + write = &params[w]; + + write_value = 0xDEADBEEF + w; + ret = sbi_sse_write_attrs(write->event_id, write->attr_id, write->attr_count, &write_value); + if (ret.error) + sbiret_report_error(&ret, SBI_SUCCESS, "Write %s, event 0x%lx attr 0x%lx, attr count 0x%lx", write->str, write->event_id, write->attr_id, write->attr_count); + + for (r = 0; r < ARRAY_SIZE(params); r++) { + read = &params[r]; + read_value = 0; + ret = sbi_sse_read_attrs(read->event_id, read->attr_id, read->attr_count, &read_value); + if (ret.error) + sbiret_report_error(&ret, SBI_SUCCESS, + "Read %s, event 0x%lx attr 0x%lx, attr count 0x%lx", read->str, read->event_id, read->attr_id, read->attr_count); + + /* Do not spam output with a lot of reports */ + if (write_value != read_value) { + err = true; + report_fail("Write %s, event 0x%lx attr 0x%lx, attr count 0x%lx value %lx == " + "Read %s, event 0x%lx attr 0x%lx, attr count 0x%lx value %lx", + write->str, write->event_id, write->attr_id, write->attr_count, + write_value, + read->str, read->event_id, read->attr_id, read->attr_count, + read_value); + } + } + } + + report(!err, "BIT(32) aliasing tests"); + report_prefix_pop(); +} +#endif + +static void sse_simple_handler(void *data, struct pt_regs *regs, unsigned int hartid) +{ + struct sse_simple_test_arg *arg = data; + int i; + struct sbiret ret; + const char *attr_name; + uint32_t event_id = READ_ONCE(arg->event_id), attr; + unsigned long value, prev_value, flags; + unsigned long interrupted_state[ARRAY_SIZE(interrupted_attrs)]; + unsigned long modified_state[ARRAY_SIZE(interrupted_attrs)] = {4, 3, 2, 1}; + unsigned long tmp_state[ARRAY_SIZE(interrupted_attrs)]; + + report((regs->status & SR_SPP) == SR_SPP, "Interrupted S-mode"); + report(hartid == current_thread_info()->hartid, "Hartid correctly passed"); + sse_check_state(event_id, SBI_SSE_STATE_RUNNING); + report(!sse_event_pending(event_id), "Event not pending"); + + /* Read full interrupted state */ + ret = sbi_sse_read_attrs(event_id, SBI_SSE_ATTR_INTERRUPTED_SEPC, + ARRAY_SIZE(interrupted_attrs), interrupted_state); + sbiret_report_error(&ret, SBI_SUCCESS, "Save full interrupted state from handler"); + + /* Write full modified state and read it */ + ret = sbi_sse_write_attrs(event_id, SBI_SSE_ATTR_INTERRUPTED_SEPC, + ARRAY_SIZE(modified_state), modified_state); + sbiret_report_error(&ret, SBI_SUCCESS, + "Write full interrupted state from handler"); + + ret = sbi_sse_read_attrs(event_id, SBI_SSE_ATTR_INTERRUPTED_SEPC, + ARRAY_SIZE(tmp_state), tmp_state); + sbiret_report_error(&ret, SBI_SUCCESS, "Read full modified state from handler"); + + report(memcmp(tmp_state, modified_state, sizeof(modified_state)) == 0, + "Full interrupted state successfully written"); + +#if __riscv_xlen > 32 + test_alias(event_id); +#endif + + /* Restore full saved state */ + ret = sbi_sse_write_attrs(event_id, SBI_SSE_ATTR_INTERRUPTED_SEPC, + ARRAY_SIZE(interrupted_attrs), interrupted_state); + sbiret_report_error(&ret, SBI_SUCCESS, + "Full interrupted state restored from handler"); + + /* We test SBI_SSE_ATTR_INTERRUPTED_FLAGS below with specific flag values */ + for (i = 0; i < ARRAY_SIZE(interrupted_attrs); i++) { + attr = interrupted_attrs[i]; + if (attr ==
SBI_SSE_ATTR_INTERRUPTED_FLAGS) + continue; + + attr_name = attr_names[attr]; + + ret = sbi_sse_read_attrs(event_id, attr, 1, &prev_value); + sbiret_report_error(&ret, SBI_SUCCESS, "Get attr %s", attr_name); + + value = 0xDEADBEEF + i; + ret = sbi_sse_write_attrs(event_id, attr, 1, &value); + sbiret_report_error(&ret, SBI_SUCCESS, "Set attr %s", attr_name); + + ret = sbi_sse_read_attrs(event_id, attr, 1, &value); + sbiret_report_error(&ret, SBI_SUCCESS, "Get attr %s", attr_name); + report(value == 0xDEADBEEF + i, "Get attr %s, value: 0x%lx", attr_name, + value); + + ret = sbi_sse_write_attrs(event_id, attr, 1, &prev_value); + sbiret_report_error(&ret, SBI_SUCCESS, "Restore attr %s value", attr_name); + } + + /* Test all flags allowed for SBI_SSE_ATTR_INTERRUPTED_FLAGS */ + attr = SBI_SSE_ATTR_INTERRUPTED_FLAGS; + ret = sbi_sse_read_attrs(event_id, attr, 1, &prev_value); + sbiret_report_error(&ret, SBI_SUCCESS, "Save interrupted flags"); + + for (i = 0; i < ARRAY_SIZE(interrupted_flags); i++) { + flags = interrupted_flags[i]; + ret = sbi_sse_write_attrs(event_id, attr, 1, &flags); + sbiret_report_error(&ret, SBI_SUCCESS, + "Set interrupted flags bit 0x%lx value", flags); + ret = sbi_sse_read_attrs(event_id, attr, 1, &value); + sbiret_report_error(&ret, SBI_SUCCESS, "Get interrupted flags after set"); + report(value == flags, "interrupted flags modified value: 0x%lx", value); + } + + /* Write invalid bit in flag register */ + flags = SBI_SSE_ATTR_INTERRUPTED_FLAGS_SSTATUS_SDT << 1; + ret = sbi_sse_write_attrs(event_id, attr, 1, &flags); + sbiret_report_error(&ret, SBI_ERR_INVALID_PARAM, "Set invalid flags bit 0x%lx value error", + flags); + + flags = BIT(SBI_SSE_ATTR_INTERRUPTED_FLAGS_SSTATUS_SDT + 1); + ret = sbi_sse_write_attrs(event_id, attr, 1, &flags); + sbiret_report_error(&ret, SBI_ERR_INVALID_PARAM, "Set invalid flags bit 0x%lx value error", + flags); + + ret = sbi_sse_write_attrs(event_id, attr, 1, &prev_value); + sbiret_report_error(&ret, SBI_SUCCESS, "Restore interrupted flags"); + + /* Try to change HARTID/Priority while running */ + if (sbi_sse_event_is_global(event_id)) { + value = current_thread_info()->hartid; + ret = sbi_sse_write_attrs(event_id, SBI_SSE_ATTR_PREFERRED_HART, 1, &value); + sbiret_report_error(&ret, SBI_ERR_INVALID_STATE, "Set hart id while running error"); + } + + value = 0; + ret = sbi_sse_write_attrs(event_id, SBI_SSE_ATTR_PRIORITY, 1, &value); + sbiret_report_error(&ret, SBI_ERR_INVALID_STATE, "Set priority while running error"); + + value = READ_ONCE(arg->expected_a6); + report(interrupted_state[2] == value, "Interrupted state a6, expected 0x%lx, got 0x%lx", + value, interrupted_state[2]); + + report(interrupted_state[3] == SBI_EXT_SSE, + "Interrupted state a7, expected 0x%x, got 0x%lx", SBI_EXT_SSE, + interrupted_state[3]); + + WRITE_ONCE(arg->done, true); +} + +static int sse_test_inject_simple(uint32_t event_id) +{ + unsigned long value, error; + struct sbiret ret; + int err_ret = 1; + struct sse_simple_test_arg test_arg = {.event_id = event_id}; + struct sbi_sse_handler_arg args = { + .handler = sse_simple_handler, + .handler_data = (void *)&test_arg, + .stack = sse_alloc_stack(), + }; + + report_prefix_push("simple"); + + if (!sse_check_state(event_id, SBI_SSE_STATE_UNUSED)) + goto err; + + ret = sbi_sse_register(event_id, &args); + if (!sbiret_report_error(&ret, SBI_SUCCESS, "register")) + goto err; + + if (!sse_check_state(event_id, SBI_SSE_STATE_REGISTERED)) + goto err; + + if (sbi_sse_event_is_global(event_id)) { + /* Be sure global events are 
targeting the current hart */ + error = sse_global_event_set_current_hart(event_id); + if (error) + goto err; + } + + ret = sbi_sse_enable(event_id); + if (!sbiret_report_error(&ret, SBI_SUCCESS, "enable")) + goto err; + + if (!sse_check_state(event_id, SBI_SSE_STATE_ENABLED)) + goto err; + + ret = sbi_sse_hart_mask(); + if (!sbiret_report_error(&ret, SBI_SUCCESS, "hart mask")) + goto err; + + ret = sbi_sse_inject(event_id, current_thread_info()->hartid); + if (!sbiret_report_error(&ret, SBI_SUCCESS, "injection masked")) { + sbi_sse_hart_unmask(); + goto err; + } + + report(READ_ONCE(test_arg.done) == 0, "masked event not handled"); + + /* + * When unmasking SSE events, we expect the pending event to be injected + * immediately, so a6 should be SBI_EXT_SSE_HART_UNMASK + */ + WRITE_ONCE(test_arg.expected_a6, SBI_EXT_SSE_HART_UNMASK); + ret = sbi_sse_hart_unmask(); + if (!sbiret_report_error(&ret, SBI_SUCCESS, "hart unmask")) { + goto err; + } + + report(READ_ONCE(test_arg.done) == 1, "unmasked event handled"); + WRITE_ONCE(test_arg.done, 0); + WRITE_ONCE(test_arg.expected_a6, SBI_EXT_SSE_INJECT); + + /* Set the event as ONESHOT and verify it is automatically disabled after injection */ + ret = sbi_sse_disable(event_id); + if (!sbiret_report_error(&ret, SBI_SUCCESS, "Disable event")) { + /* Nothing we can really do here, event cannot be disabled */ + goto err; + } + + value = SBI_SSE_ATTR_CONFIG_ONESHOT; + ret = sbi_sse_write_attrs(event_id, SBI_SSE_ATTR_CONFIG, 1, &value); + if (!sbiret_report_error(&ret, SBI_SUCCESS, "Set event attribute as ONESHOT")) + goto err; + + ret = sbi_sse_enable(event_id); + if (!sbiret_report_error(&ret, SBI_SUCCESS, "Enable event")) + goto err; + + ret = sbi_sse_inject(event_id, current_thread_info()->hartid); + if (!sbiret_report_error(&ret, SBI_SUCCESS, "second injection")) + goto err; + + report(READ_ONCE(test_arg.done) == 1, "event handled"); + WRITE_ONCE(test_arg.done, 0); + + if (!sse_check_state(event_id, SBI_SSE_STATE_REGISTERED)) + goto err; + + /* Clear ONESHOT FLAG */ + value = 0; + ret = sbi_sse_write_attrs(event_id, SBI_SSE_ATTR_CONFIG, 1, &value); + if (!sbiret_report_error(&ret, SBI_SUCCESS, "Clear CONFIG.ONESHOT flag")) + goto err; + + ret = sbi_sse_unregister(event_id); + if (!sbiret_report_error(&ret, SBI_SUCCESS, "unregister")) + goto err; + + sse_check_state(event_id, SBI_SSE_STATE_UNUSED); + + err_ret = 0; +err: + sse_free_stack(args.stack); + report_prefix_pop(); + + return err_ret; +} + +struct sse_foreign_cpu_test_arg { + bool done; + unsigned int expected_cpu; + uint32_t event_id; +}; + +static void sse_foreign_cpu_handler(void *data, struct pt_regs *regs, unsigned int hartid) +{ + struct sse_foreign_cpu_test_arg *arg = data; + unsigned int expected_cpu; + + /* For arg content to be visible */ + smp_rmb(); + expected_cpu = READ_ONCE(arg->expected_cpu); + report(expected_cpu == current_thread_info()->cpu, + "Received event on CPU (%d), expected CPU (%d)", current_thread_info()->cpu, + expected_cpu); + + WRITE_ONCE(arg->done, true); + /* For arg update to be visible for other CPUs */ + smp_wmb(); +} + +struct sse_local_per_cpu { + struct sbi_sse_handler_arg args; + struct sbiret ret; + struct sse_foreign_cpu_test_arg handler_arg; +}; + +static void sse_register_enable_local(void *data) +{ + struct sbiret ret; + struct sse_local_per_cpu *cpu_args = data; + struct sse_local_per_cpu *cpu_arg = &cpu_args[current_thread_info()->cpu]; + uint32_t event_id = cpu_arg->handler_arg.event_id; + + ret = sbi_sse_register(event_id, &cpu_arg->args); + WRITE_ONCE(cpu_arg->ret, ret); + if (ret.error) +
return; + + ret = sbi_sse_enable(event_id); + WRITE_ONCE(cpu_arg->ret, ret); +} + +static void sbi_sse_disable_unregister_local(void *data) +{ + struct sbiret ret; + struct sse_local_per_cpu *cpu_args = data; + struct sse_local_per_cpu *cpu_arg = &cpu_args[current_thread_info()->cpu]; + uint32_t event_id = cpu_arg->handler_arg.event_id; + + ret = sbi_sse_disable(event_id); + WRITE_ONCE(cpu_arg->ret, ret); + if (ret.error) + return; + + ret = sbi_sse_unregister(event_id); + WRITE_ONCE(cpu_arg->ret, ret); +} + +static int sse_test_inject_local(uint32_t event_id) +{ + int cpu; + int err_ret = 1; + struct sbiret ret; + struct sse_local_per_cpu *cpu_args, *cpu_arg; + struct sse_foreign_cpu_test_arg *handler_arg; + + cpu_args = calloc(NR_CPUS, sizeof(struct sse_local_per_cpu)); + + report_prefix_push("local_dispatch"); + for_each_online_cpu(cpu) { + cpu_arg = &cpu_args[cpu]; + cpu_arg->handler_arg.event_id = event_id; + cpu_arg->args.stack = sse_alloc_stack(); + cpu_arg->args.handler = sse_foreign_cpu_handler; + cpu_arg->args.handler_data = (void *)&cpu_arg->handler_arg; + } + + on_cpus(sse_register_enable_local, cpu_args); + for_each_online_cpu(cpu) { + cpu_arg = &cpu_args[cpu]; + ret = READ_ONCE(cpu_arg->ret); + if (ret.error) { + report_fail("CPU failed to register/enable event: %ld", ret.error); + goto err; + } + + handler_arg = &cpu_arg->handler_arg; + WRITE_ONCE(handler_arg->expected_cpu, cpu); + /* For handler_arg content to be visible for other CPUs */ + smp_wmb(); + ret = sbi_sse_inject(event_id, cpus[cpu].hartid); + if (ret.error) { + report_fail("CPU failed to inject event: %ld", ret.error); + goto err; + } + } + + for_each_online_cpu(cpu) { + handler_arg = &cpu_args[cpu].handler_arg; + smp_rmb(); + while (!READ_ONCE(handler_arg->done)) { + /* For handler_arg update to be visible */ + smp_rmb(); + cpu_relax(); + } + WRITE_ONCE(handler_arg->done, false); + } + + on_cpus(sbi_sse_disable_unregister_local, cpu_args); + for_each_online_cpu(cpu) { + cpu_arg = &cpu_args[cpu]; + ret = READ_ONCE(cpu_arg->ret); + if (ret.error) { + report_fail("CPU failed to disable/unregister event: %ld", ret.error); + goto err; + } + } + + err_ret = 0; +err: + for_each_online_cpu(cpu) { + cpu_arg = &cpu_args[cpu]; + sse_free_stack(cpu_arg->args.stack); + } + + if (!err_ret) + report_pass("local event dispatch on all CPUs"); + report_prefix_pop(); + + return err_ret; +} + +static int sse_test_inject_global(uint32_t event_id) +{ + unsigned long value; + struct sbiret ret; + unsigned int cpu; + uint64_t timeout; + int err_ret = 1; + struct sse_foreign_cpu_test_arg test_arg = {.event_id = event_id}; + struct sbi_sse_handler_arg args = { + .handler = sse_foreign_cpu_handler, + .handler_data = (void *)&test_arg, + .stack = sse_alloc_stack(), + }; + enum sbi_sse_state state; + + report_prefix_push("global_dispatch"); + + ret = sbi_sse_register(event_id, &args); + if (!sbiret_report_error(&ret, SBI_SUCCESS, "Register event")) + goto err; + + for_each_online_cpu(cpu) { + WRITE_ONCE(test_arg.expected_cpu, cpu); + /* For test_arg content to be visible for other CPUs */ + smp_wmb(); + value = cpus[cpu].hartid; + ret = sbi_sse_write_attrs(event_id, SBI_SSE_ATTR_PREFERRED_HART, 1, &value); + if (!sbiret_report_error(&ret, SBI_SUCCESS, "Set preferred hart")) + goto err; + + ret = sbi_sse_enable(event_id); + if (!sbiret_report_error(&ret, SBI_SUCCESS, "Enable event")) + goto err; + + ret = sbi_sse_inject(event_id, cpus[cpu].hartid); + if (!sbiret_report_error(&ret, SBI_SUCCESS, "Inject event")) + goto err; + + smp_rmb(); + while (!READ_ONCE(test_arg.done)) { + /* For shared
test_arg structure */ + smp_rmb(); + cpu_relax(); + } + + WRITE_ONCE(test_arg.done, false); + + timeout = timer_get_cycles() + usec_to_cycles((uint64_t)1000); + /* Wait for event to be back in ENABLED state */ + do { + ret = sse_event_get_state(event_id, &state); + if (ret.error) + goto err; + cpu_relax(); + } while (state != SBI_SSE_STATE_ENABLED && timer_get_cycles() < timeout); + + if (!report(state == SBI_SSE_STATE_ENABLED, + "wait for event to be back in ENABLED state")) + goto err; + + ret = sbi_sse_disable(event_id); + if (!sbiret_report_error(&ret, SBI_SUCCESS, "Disable event")) + goto err; + + report_pass("Global event on CPU %d", cpu); + } + + ret = sbi_sse_unregister(event_id); + sbiret_report_error(&ret, SBI_SUCCESS, "Unregister event"); + + err_ret = 0; + +err: + sse_free_stack(args.stack); + report_prefix_pop(); + + return err_ret; +} + +struct priority_test_arg { + uint32_t event_id; + bool called; + u32 prio; + struct priority_test_arg *next_event_arg; + void (*check_func)(struct priority_test_arg *arg); +}; + +static void sse_hi_priority_test_handler(void *arg, struct pt_regs *regs, + unsigned int hartid) +{ + struct priority_test_arg *targ = arg; + struct priority_test_arg *next = targ->next_event_arg; + + targ->called = true; + if (next) { + sbi_sse_inject(next->event_id, current_thread_info()->hartid); + + report(!sse_event_pending(next->event_id), "Higher priority event is not pending"); + report(next->called, "Higher priority event was handled"); + } +} + +static void sse_low_priority_test_handler(void *arg, struct pt_regs *regs, + unsigned int hartid) +{ + struct priority_test_arg *targ = arg; + struct priority_test_arg *next = targ->next_event_arg; + + targ->called = true; + + if (next) { + sbi_sse_inject(next->event_id, current_thread_info()->hartid); + + report(sse_event_pending(next->event_id), "Lower priority event is pending"); + report(!next->called, "Lower priority event %s was not handled before %s", + sse_event_name(next->event_id), sse_event_name(targ->event_id)); + } +} + +static void sse_test_injection_priority_arg(struct priority_test_arg *in_args, + unsigned int in_args_size, + sbi_sse_handler_fn handler, + const char *test_name) +{ + unsigned int i; + unsigned long value, uret; + struct sbiret ret; + uint32_t event_id; + struct priority_test_arg *arg; + unsigned int args_size = 0; + struct sbi_sse_handler_arg event_args[in_args_size]; + struct priority_test_arg *args[in_args_size]; + void *stack; + struct sbi_sse_handler_arg *event_arg; + + report_prefix_push(test_name); + + for (i = 0; i < in_args_size; i++) { + arg = &in_args[i]; + event_id = arg->event_id; + if (!sse_event_can_inject(event_id)) + continue; + + args[args_size] = arg; + args_size++; + event_args[args_size - 1].stack = 0; + } + + if (!args_size) { + report_skip("No injectable events"); + goto skip; + } + + for (i = 0; i < args_size; i++) { + arg = args[i]; + event_id = arg->event_id; + stack = sse_alloc_stack(); + + event_arg = &event_args[i]; + event_arg->handler = handler; + event_arg->handler_data = (void *)arg; + event_arg->stack = stack; + + if (i < (args_size - 1)) + arg->next_event_arg = args[i + 1]; + else + arg->next_event_arg = NULL; + + /* Be sure global events are targeting the current hart */ + if (sbi_sse_event_is_global(event_id)) { + uret = sse_global_event_set_current_hart(event_id); + if (uret) + goto err; + } + + ret = sbi_sse_register(event_id, event_arg); + if (ret.error) { + sbiret_report_error(&ret, SBI_SUCCESS, "register event 0x%x", event_id); + goto err; + } + + value =
arg->prio; + ret = sbi_sse_write_attrs(event_id, SBI_SSE_ATTR_PRIORITY, 1, &value); + if (ret.error) { + sbiret_report_error(&ret, SBI_SUCCESS, "set event 0x%x priority", event_id); + goto err; + } + ret = sbi_sse_enable(event_id); + if (ret.error) { + sbiret_report_error(&ret, SBI_SUCCESS, "enable event 0x%x", event_id); + goto err; + } + } + + /* Inject first event */ + ret = sbi_sse_inject(args[0]->event_id, current_thread_info()->hartid); + sbiret_report_error(&ret, SBI_SUCCESS, "injection"); + +err: + for (i = 0; i < args_size; i++) { + arg = args[i]; + event_id = arg->event_id; + + report(arg->called, "Event %s handler called", sse_event_name(arg->event_id)); + + ret = sbi_sse_disable(event_id); + if (ret.error) + sbiret_report_error(&ret, SBI_SUCCESS, "disable event 0x%x", event_id); + + ret = sbi_sse_unregister(event_id); + if (ret.error) + sbiret_report_error(&ret, SBI_SUCCESS, "unregister event 0x%x", event_id); + + event_arg = &event_args[i]; + if (event_arg->stack) + sse_free_stack(event_arg->stack); + } + +skip: + report_prefix_pop(); +} + +static struct priority_test_arg hi_prio_args[] = { + {.event_id = SBI_SSE_EVENT_GLOBAL_SOFTWARE}, + {.event_id = SBI_SSE_EVENT_LOCAL_SOFTWARE}, + {.event_id = SBI_SSE_EVENT_GLOBAL_LOW_PRIO_RAS}, + {.event_id = SBI_SSE_EVENT_LOCAL_LOW_PRIO_RAS}, + {.event_id = SBI_SSE_EVENT_LOCAL_PMU_OVERFLOW}, + {.event_id = SBI_SSE_EVENT_GLOBAL_HIGH_PRIO_RAS}, + {.event_id = SBI_SSE_EVENT_LOCAL_DOUBLE_TRAP}, + {.event_id = SBI_SSE_EVENT_LOCAL_HIGH_PRIO_RAS}, +}; + +static struct priority_test_arg low_prio_args[] = { + {.event_id = SBI_SSE_EVENT_LOCAL_HIGH_PRIO_RAS}, + {.event_id = SBI_SSE_EVENT_LOCAL_DOUBLE_TRAP}, + {.event_id = SBI_SSE_EVENT_GLOBAL_HIGH_PRIO_RAS}, + {.event_id = SBI_SSE_EVENT_LOCAL_PMU_OVERFLOW}, + {.event_id = SBI_SSE_EVENT_LOCAL_LOW_PRIO_RAS}, + {.event_id = SBI_SSE_EVENT_GLOBAL_LOW_PRIO_RAS}, + {.event_id = SBI_SSE_EVENT_LOCAL_SOFTWARE}, + {.event_id = SBI_SSE_EVENT_GLOBAL_SOFTWARE}, +}; + +static struct priority_test_arg prio_args[] = { + {.event_id = SBI_SSE_EVENT_GLOBAL_SOFTWARE, .prio = 5}, + {.event_id = SBI_SSE_EVENT_LOCAL_SOFTWARE, .prio = 10}, + {.event_id = SBI_SSE_EVENT_LOCAL_LOW_PRIO_RAS, .prio = 12}, + {.event_id = SBI_SSE_EVENT_LOCAL_PMU_OVERFLOW, .prio = 15}, + {.event_id = SBI_SSE_EVENT_GLOBAL_HIGH_PRIO_RAS, .prio = 20}, + {.event_id = SBI_SSE_EVENT_GLOBAL_LOW_PRIO_RAS, .prio = 22}, + {.event_id = SBI_SSE_EVENT_LOCAL_HIGH_PRIO_RAS, .prio = 25}, +}; + +static struct priority_test_arg same_prio_args[] = { + {.event_id = SBI_SSE_EVENT_LOCAL_PMU_OVERFLOW, .prio = 0}, + {.event_id = SBI_SSE_EVENT_GLOBAL_LOW_PRIO_RAS, .prio = 0}, + {.event_id = SBI_SSE_EVENT_LOCAL_HIGH_PRIO_RAS, .prio = 10}, + {.event_id = SBI_SSE_EVENT_LOCAL_SOFTWARE, .prio = 10}, + {.event_id = SBI_SSE_EVENT_GLOBAL_SOFTWARE, .prio = 10}, + {.event_id = SBI_SSE_EVENT_GLOBAL_HIGH_PRIO_RAS, .prio = 20}, + {.event_id = SBI_SSE_EVENT_LOCAL_LOW_PRIO_RAS, .prio = 20}, +}; + +static void sse_test_injection_priority(void) +{ + report_prefix_push("prio"); + + sse_test_injection_priority_arg(hi_prio_args, ARRAY_SIZE(hi_prio_args), + sse_hi_priority_test_handler, "high"); + + sse_test_injection_priority_arg(low_prio_args, ARRAY_SIZE(low_prio_args), + sse_low_priority_test_handler, "low"); + + sse_test_injection_priority_arg(prio_args, ARRAY_SIZE(prio_args), + sse_low_priority_test_handler, "changed"); + + sse_test_injection_priority_arg(same_prio_args, ARRAY_SIZE(same_prio_args), + sse_low_priority_test_handler, "same_prio"); + + report_prefix_pop(); +} + +static void
test_invalid_event_id(unsigned long event_id) +{ + struct sbiret ret; + unsigned long value = 0; + + ret = sbi_sse_register_raw(event_id, (unsigned long)sbi_sse_entry, 0); + sbiret_report_error(&ret, SBI_ERR_INVALID_PARAM, + "register event_id 0x%lx", event_id); + + ret = sbi_sse_unregister(event_id); + sbiret_report_error(&ret, SBI_ERR_INVALID_PARAM, + "unregister event_id 0x%lx", event_id); + + ret = sbi_sse_enable(event_id); + sbiret_report_error(&ret, SBI_ERR_INVALID_PARAM, + "enable event_id 0x%lx", event_id); + + ret = sbi_sse_disable(event_id); + sbiret_report_error(&ret, SBI_ERR_INVALID_PARAM, + "disable event_id 0x%lx", event_id); + + ret = sbi_sse_inject(event_id, 0); + sbiret_report_error(&ret, SBI_ERR_INVALID_PARAM, + "inject event_id 0x%lx", event_id); + + ret = sbi_sse_write_attrs(event_id, SBI_SSE_ATTR_PRIORITY, 1, &value); + sbiret_report_error(&ret, SBI_ERR_INVALID_PARAM, + "write attr event_id 0x%lx", event_id); + + ret = sbi_sse_read_attrs(event_id, SBI_SSE_ATTR_PRIORITY, 1, &value); + sbiret_report_error(&ret, SBI_ERR_INVALID_PARAM, + "read attr event_id 0x%lx", event_id); +} + +static void sse_test_invalid_event_id(void) +{ + report_prefix_push("event_id"); + + test_invalid_event_id(SBI_SSE_EVENT_LOCAL_RESERVED_0_START); + + report_prefix_pop(); +} + +static void sse_check_event_availability(uint32_t event_id, bool *can_inject, bool *supported) +{ + unsigned long status; + struct sbiret ret; + + *can_inject = false; + *supported = false; + + ret = sbi_sse_read_attrs(event_id, SBI_SSE_ATTR_STATUS, 1, &status); + if (ret.error != SBI_SUCCESS && ret.error != SBI_ERR_NOT_SUPPORTED) { + report_fail("Get event status != SBI_SUCCESS && != SBI_ERR_NOT_SUPPORTED: %ld", ret.error); + return; + } + if (ret.error == SBI_ERR_NOT_SUPPORTED) + return; + + if (!ret.error) + *supported = true; + + *can_inject = (status >> SBI_SSE_ATTR_STATUS_INJECT_OFFSET) & 1; +} + +static void sse_secondary_boot_and_unmask(void *data) +{ + sbi_sse_hart_unmask(); +} + +static void sse_check_mask(void) +{ + struct sbiret ret; + + /* Upon boot, SSE events are masked, check that */ + ret = sbi_sse_hart_mask(); + sbiret_report_error(&ret, SBI_ERR_ALREADY_STOPPED, "hart mask at boot time"); + + ret = sbi_sse_hart_unmask(); + sbiret_report_error(&ret, SBI_SUCCESS, "hart unmask"); + ret = sbi_sse_hart_unmask(); + sbiret_report_error(&ret, SBI_ERR_ALREADY_STARTED, "hart unmask twice error"); + + ret = sbi_sse_hart_mask(); + sbiret_report_error(&ret, SBI_SUCCESS, "hart mask"); + ret = sbi_sse_hart_mask(); + sbiret_report_error(&ret, SBI_ERR_ALREADY_STOPPED, "hart mask twice"); +} + +static int run_inject_test(struct sse_event_info *info) +{ + unsigned long event_id = info->event_id; + + if (!info->can_inject) { + report_skip("Event does not support injection, skipping injection tests"); + return 0; + } + + if (sse_test_inject_simple(event_id)) + return 1; + + if (sbi_sse_event_is_global(event_id)) + return sse_test_inject_global(event_id); + else + return sse_test_inject_local(event_id); +} + +void check_sse(void) +{ + struct sse_event_info *info; + unsigned long i, event_id; + bool supported; + + report_prefix_push("sse"); + + if (!sbi_probe(SBI_EXT_SSE)) { + report_skip("extension not available"); + report_prefix_pop(); + return; + } + + sse_check_mask(); + + /* + * Wake up all the processors so that they can be targeted by global + * events even if they were not started explicitly, and unmask SSE + * events on all harts. + */ + on_cpus(sse_secondary_boot_and_unmask, NULL); + +
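/* Check that all operations on a reserved (invalid) event id fail with SBI_ERR_INVALID_PARAM */ +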
sse_test_invalid_event_id(); + + for (i = 0; i < ARRAY_SIZE(sse_event_infos); i++) { + info = &sse_event_infos[i]; + event_id = info->event_id; + report_prefix_push(info->name); + sse_check_event_availability(event_id, &info->can_inject, &supported); + if (!supported) { + report_skip("Event is not supported, skipping tests"); + report_prefix_pop(); + continue; + } + + sse_test_attrs(event_id); + sse_test_register_error(event_id); + + if (run_inject_test(info)) { + report_skip("Event test failed, event state unreliable"); + info->can_inject = false; + } + + report_prefix_pop(); + } + + sse_test_injection_priority(); + + report_prefix_pop(); +} diff --git a/riscv/sbi.c b/riscv/sbi.c index 0404bb81..478cb35d 100644 --- a/riscv/sbi.c +++ b/riscv/sbi.c @@ -32,6 +32,7 @@ #define HIGH_ADDR_BOUNDARY ((phys_addr_t)1 << 32) +void check_sse(void); void check_fwft(void); static long __labs(long a) @@ -1567,6 +1568,7 @@ int main(int argc, char **argv) check_hsm(); check_dbcn(); check_susp(); + check_sse(); check_fwft(); return report_summary();