From patchwork Tue Oct 15 21:13:16 2024
X-Patchwork-Submitter: Mike Kowal
X-Patchwork-Id: 13837094
From: Michael Kowal
To: qemu-devel@nongnu.org
Cc: qemu-ppc@nongnu.org, clg@kaod.org, fbarrat@linux.ibm.com, npiggin@gmail.com, milesg@linux.ibm.com, danielhb413@gmail.com, david@gibson.dropbear.id.au, harshpb@linux.ibm.com, thuth@redhat.com, lvivier@redhat.com, pbonzini@redhat.com
Subject: [PATCH 01/14] ppc/xive2: Update NVP save/restore for group attributes
Date: Tue, 15 Oct 2024 16:13:16 -0500
Message-Id: <20241015211329.21113-2-kowal@linux.ibm.com>
In-Reply-To: <20241015211329.21113-1-kowal@linux.ibm.com>
References: <20241015211329.21113-1-kowal@linux.ibm.com>

From: Frederic Barrat

If the 'H' attribute is set on the NVP structure, the hardware
automatically saves and restores some attributes from the TIMA in the
NVP structure. The group-specific attributes LSMFB, LGS and T have an
extra flag to individually control what is saved/restored.
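[Editor's note: the L/G/T control bits introduced by this patch use the IBM
bit-numbering convention of QEMU's PPC_BIT32() macro, where bit 0 is the most
significant bit. The standalone sketch below is illustrative only (the
BIT32_MSB0 helper is a stand-in, assuming PPC_BIT32(b) == 0x80000000 >> b) and
shows the mask values the new flags resolve to and how they would be tested.]

```c
#include <stdint.h>
#include <stdio.h>

/* Stand-in for QEMU's PPC_BIT32(): IBM numbering, bit 0 is the MSB */
#define BIT32_MSB0(bit)    (0x80000000u >> (bit))

#define NVP2_W0_L          BIT32_MSB0(8)   /* 0x00800000: save/restore LSMFB */
#define NVP2_W0_G          BIT32_MSB0(9)   /* 0x00400000: save/restore LGS   */
#define NVP2_W0_T          BIT32_MSB0(10)  /* 0x00200000: save/restore T     */

int main(void)
{
    uint32_t w0 = NVP2_W0_L | NVP2_W0_T;   /* hypothetical NVP word 0 value */

    printf("L=%d G=%d T=%d\n",
           !!(w0 & NVP2_W0_L), !!(w0 & NVP2_W0_G), !!(w0 & NVP2_W0_T));
    /* prints: L=1 G=0 T=1 */
    return 0;
}
```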
Signed-off-by: Frederic Barrat
Signed-off-by: Michael Kowal
---
 include/hw/ppc/xive2_regs.h |  5 +++++
 hw/intc/xive2.c             | 18 ++++++++++++++++--
 2 files changed, 21 insertions(+), 2 deletions(-)

diff --git a/include/hw/ppc/xive2_regs.h b/include/hw/ppc/xive2_regs.h
index 1d00c8df64..30868e8e09 100644
--- a/include/hw/ppc/xive2_regs.h
+++ b/include/hw/ppc/xive2_regs.h
@@ -152,6 +152,9 @@ typedef struct Xive2Nvp {
     uint32_t w0;
 #define NVP2_W0_VALID       PPC_BIT32(0)
 #define NVP2_W0_HW          PPC_BIT32(7)
+#define NVP2_W0_L           PPC_BIT32(8)
+#define NVP2_W0_G           PPC_BIT32(9)
+#define NVP2_W0_T           PPC_BIT32(10)
 #define NVP2_W0_ESC_END     PPC_BIT32(25) /* 'N' bit 0:ESB 1:END */
 #define NVP2_W0_PGOFIRST    PPC_BITMASK32(26, 31)
     uint32_t w1;
@@ -163,6 +166,8 @@ typedef struct Xive2Nvp {
 #define NVP2_W2_CPPR        PPC_BITMASK32(0, 7)
 #define NVP2_W2_IPB         PPC_BITMASK32(8, 15)
 #define NVP2_W2_LSMFB       PPC_BITMASK32(16, 23)
+#define NVP2_W2_T           PPC_BIT32(27)
+#define NVP2_W2_LGS         PPC_BITMASK32(28, 31)
     uint32_t w3;
     uint32_t w4;
 #define NVP2_W4_ESC_ESB_BLOCK    PPC_BITMASK32(0, 3)  /* N:0 */
diff --git a/hw/intc/xive2.c b/hw/intc/xive2.c
index d1df35e9b3..4adc3b6950 100644
--- a/hw/intc/xive2.c
+++ b/hw/intc/xive2.c
@@ -313,7 +313,19 @@ static void xive2_tctx_save_ctx(Xive2Router *xrtr, XiveTCTX *tctx,
     nvp.w2 = xive_set_field32(NVP2_W2_IPB, nvp.w2, regs[TM_IPB]);
     nvp.w2 = xive_set_field32(NVP2_W2_CPPR, nvp.w2, regs[TM_CPPR]);
-    nvp.w2 = xive_set_field32(NVP2_W2_LSMFB, nvp.w2, regs[TM_LSMFB]);
+    if (nvp.w0 & NVP2_W0_L) {
+        /*
+         * Typically not used. If LSMFB is restored with 0, it will
+         * force a backlog rescan
+         */
+        nvp.w2 = xive_set_field32(NVP2_W2_LSMFB, nvp.w2, regs[TM_LSMFB]);
+    }
+    if (nvp.w0 & NVP2_W0_G) {
+        nvp.w2 = xive_set_field32(NVP2_W2_LGS, nvp.w2, regs[TM_LGS]);
+    }
+    if (nvp.w0 & NVP2_W0_T) {
+        nvp.w2 = xive_set_field32(NVP2_W2_T, nvp.w2, regs[TM_T]);
+    }
     xive2_router_write_nvp(xrtr, nvp_blk, nvp_idx, &nvp, 2);
 
     nvp.w1 = xive_set_field32(NVP2_W1_CO, nvp.w1, 0);
@@ -527,7 +539,9 @@ static uint8_t xive2_tctx_restore_os_ctx(Xive2Router *xrtr, XiveTCTX *tctx,
     xive2_router_write_nvp(xrtr, nvp_blk, nvp_idx, nvp, 2);
 
     tctx->regs[TM_QW1_OS + TM_CPPR] = cppr;
-    /* we don't model LSMFB */
+    tctx->regs[TM_QW1_OS + TM_LSMFB] = xive_get_field32(NVP2_W2_LSMFB, nvp->w2);
+    tctx->regs[TM_QW1_OS + TM_LGS] = xive_get_field32(NVP2_W2_LGS, nvp->w2);
+    tctx->regs[TM_QW1_OS + TM_T] = xive_get_field32(NVP2_W2_T, nvp->w2);
 
     nvp->w1 = xive_set_field32(NVP2_W1_CO, nvp->w1, 1);
     nvp->w1 = xive_set_field32(NVP2_W1_CO_THRID_VALID, nvp->w1, 1);
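[Editor's note: the restore path above pulls the LSMFB, LGS and T fields out of
NVP word 2 with xive_get_field32(). The sketch below is not QEMU code; it uses
a hypothetical get_field32() helper with the same observable behaviour (isolate
the mask, shift down) and the word-2 masks from the patch, just to make the
field layout concrete.]

```c
#include <stdint.h>
#include <stdio.h>

/* Hypothetical equivalent of xive_get_field32(): isolate 'mask', shift down */
static uint32_t get_field32(uint32_t mask, uint32_t word)
{
    return (word & mask) >> __builtin_ctz(mask);
}

#define NVP2_W2_LSMFB  0x0000ff00u  /* IBM bits 16-23 */
#define NVP2_W2_T      0x00000010u  /* IBM bit  27    */
#define NVP2_W2_LGS    0x0000000fu  /* IBM bits 28-31 */

int main(void)
{
    uint32_t w2 = 0x00000634u;  /* sample NVP word 2: LSMFB=0x06, T=1, LGS=0x4 */

    printf("LSMFB=0x%02x T=%u LGS=0x%x\n",
           (unsigned)get_field32(NVP2_W2_LSMFB, w2),
           (unsigned)get_field32(NVP2_W2_T, w2),
           (unsigned)get_field32(NVP2_W2_LGS, w2));
    return 0;
}
```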
From patchwork Tue Oct 15 21:13:17 2024
X-Patchwork-Submitter: Mike Kowal
X-Patchwork-Id: 13837096

From: Michael Kowal
To: qemu-devel@nongnu.org
Cc: qemu-ppc@nongnu.org, clg@kaod.org, fbarrat@linux.ibm.com, npiggin@gmail.com, milesg@linux.ibm.com, danielhb413@gmail.com, david@gibson.dropbear.id.au, harshpb@linux.ibm.com, thuth@redhat.com, lvivier@redhat.com, pbonzini@redhat.com
Subject: [PATCH 02/14] ppc/xive2: Add grouping level to notification
Date: Tue, 15 Oct 2024 16:13:17 -0500
Message-Id: <20241015211329.21113-3-kowal@linux.ibm.com>
In-Reply-To: <20241015211329.21113-1-kowal@linux.ibm.com>
References: <20241015211329.21113-1-kowal@linux.ibm.com>
From: Frederic Barrat

The NSR has a (so far unused) grouping level field. When an interrupt
is presented, that field tells the hypervisor or OS whether the
interrupt is for an individual VP or for a VP-group/crowd. This patch
reworks the presentation API to allow setting/unsetting the level when
raising/accepting an interrupt.

It also renames xive_tctx_ipb_update() to xive_tctx_pipr_update(), as
the IPB is only used for a VP-specific target, whereas the PIPR always
needs to be updated.

Signed-off-by: Frederic Barrat
Signed-off-by: Michael Kowal
---
 include/hw/ppc/xive.h      | 19 +++++++-
 include/hw/ppc/xive_regs.h | 20 +++++++--
 hw/intc/xive.c             | 90 +++++++++++++++++++++++---------------
 hw/intc/xive2.c            | 18 ++++----
 hw/intc/trace-events       |  2 +-
 5 files changed, 100 insertions(+), 49 deletions(-)

diff --git a/include/hw/ppc/xive.h b/include/hw/ppc/xive.h
index 31242f0406..27ef6c1a17 100644
--- a/include/hw/ppc/xive.h
+++ b/include/hw/ppc/xive.h
@@ -510,6 +510,21 @@ static inline uint8_t xive_priority_to_ipb(uint8_t priority)
                0 : 1 << (XIVE_PRIORITY_MAX - priority);
 }
 
+static inline uint8_t xive_priority_to_pipr(uint8_t priority)
+{
+    return priority > XIVE_PRIORITY_MAX ? 0xFF : priority;
+}
+
+/*
+ * Convert an Interrupt Pending Buffer (IPB) register to a Pending
+ * Interrupt Priority Register (PIPR), which contains the priority of
+ * the most favored pending notification.
+ */
+static inline uint8_t xive_ipb_to_pipr(uint8_t ibp)
+{
+    return ibp ? clz32((uint32_t)ibp << 24) : 0xff;
+}
+
 /*
  * XIVE Thread Interrupt Management Aera (TIMA)
  *
@@ -532,8 +547,10 @@ void xive_tctx_pic_print_info(XiveTCTX *tctx, GString *buf);
 Object *xive_tctx_create(Object *cpu, XivePresenter *xptr, Error **errp);
 void xive_tctx_reset(XiveTCTX *tctx);
 void xive_tctx_destroy(XiveTCTX *tctx);
-void xive_tctx_ipb_update(XiveTCTX *tctx, uint8_t ring, uint8_t ipb);
+void xive_tctx_pipr_update(XiveTCTX *tctx, uint8_t ring, uint8_t priority,
+                           uint8_t group_level);
 void xive_tctx_reset_signal(XiveTCTX *tctx, uint8_t ring);
+void xive_tctx_notify(XiveTCTX *tctx, uint8_t ring, uint8_t group_level);
 
 /*
  * KVM XIVE device helpers
diff --git a/include/hw/ppc/xive_regs.h b/include/hw/ppc/xive_regs.h
index 326327fc79..b455728c9c 100644
--- a/include/hw/ppc/xive_regs.h
+++ b/include/hw/ppc/xive_regs.h
@@ -146,7 +146,14 @@
 #define TM_SPC_PULL_PHYS_CTX_OL  0xc38 /* Pull phys ctx to odd cache line */
 /* XXX more... */
 
-/* NSR fields for the various QW ack types */
+/*
+ * NSR fields for the various QW ack types
+ *
+ * P10 has an extra bit in QW3 for the group level instead of the
+ * reserved 'i' bit. Since it is not used and we don't support group
+ * interrupts on P9, we use the P10 definition for the group level so
+ * that we can have common macros for the NSR
+ */
 #define TM_QW0_NSR_EB        PPC_BIT8(0)
 #define TM_QW1_NSR_EO        PPC_BIT8(0)
 #define TM_QW3_NSR_HE        PPC_BITMASK8(0, 1)
@@ -154,8 +161,15 @@
 #define TM_QW3_NSR_HE_POOL   1
 #define TM_QW3_NSR_HE_PHYS   2
 #define TM_QW3_NSR_HE_LSI    3
-#define TM_QW3_NSR_I         PPC_BIT8(2)
-#define TM_QW3_NSR_GRP_LVL   PPC_BIT8(3, 7)
+#define TM_NSR_GRP_LVL       PPC_BITMASK8(2, 7)
+/*
+ * On P10, the format of the 6-bit group level is: 2 bits for the
+ * crowd size and 4 bits for the group size. Since group/crowd size is
+ * always a power of 2, we encode the log. For example, group_level=4
+ * means crowd size = 0 and group size = 16 (2^4)
+ * Same encoding is used in the NVP and NVGC structures for
+ * PGoFirst and PGoNext fields
+ */
 
 /*
  * EAS (Event Assignment Structure)
diff --git a/hw/intc/xive.c b/hw/intc/xive.c
index efcb63e8aa..bacf518fa6 100644
--- a/hw/intc/xive.c
+++ b/hw/intc/xive.c
@@ -27,16 +27,6 @@
  * XIVE Thread Interrupt Management context
  */
 
-/*
- * Convert an Interrupt Pending Buffer (IPB) register to a Pending
- * Interrupt Priority Register (PIPR), which contains the priority of
- * the most favored pending notification.
- */
-static uint8_t ipb_to_pipr(uint8_t ibp)
-{
-    return ibp ? clz32((uint32_t)ibp << 24) : 0xff;
-}
-
 static uint8_t exception_mask(uint8_t ring)
 {
     switch (ring) {
@@ -87,10 +77,17 @@ static uint64_t xive_tctx_accept(XiveTCTX *tctx, uint8_t ring)
 
         regs[TM_CPPR] = cppr;
 
-        /* Reset the pending buffer bit */
-        alt_regs[TM_IPB] &= ~xive_priority_to_ipb(cppr);
+        /*
+         * If the interrupt was for a specific VP, reset the pending
+         * buffer bit, otherwise clear the logical server indicator
+         */
+        if (regs[TM_NSR] & TM_NSR_GRP_LVL) {
+            regs[TM_NSR] &= ~TM_NSR_GRP_LVL;
+        } else {
+            alt_regs[TM_IPB] &= ~xive_priority_to_ipb(cppr);
+        }
 
-        /* Drop Exception bit */
+        /* Drop the exception bit */
         regs[TM_NSR] &= ~mask;
 
         trace_xive_tctx_accept(tctx->cs->cpu_index, alt_ring,
@@ -101,7 +98,7 @@ static uint64_t xive_tctx_accept(XiveTCTX *tctx, uint8_t ring)
     return ((uint64_t)nsr << 8) | regs[TM_CPPR];
 }
 
-static void xive_tctx_notify(XiveTCTX *tctx, uint8_t ring)
+void xive_tctx_notify(XiveTCTX *tctx, uint8_t ring, uint8_t group_level)
 {
     /* HV_POOL ring uses HV_PHYS NSR, CPPR and PIPR registers */
     uint8_t alt_ring = (ring == TM_QW2_HV_POOL) ? TM_QW3_HV_PHYS : ring;
@@ -111,13 +108,13 @@ static void xive_tctx_notify(XiveTCTX *tctx, uint8_t ring)
     if (alt_regs[TM_PIPR] < alt_regs[TM_CPPR]) {
         switch (ring) {
         case TM_QW1_OS:
-            regs[TM_NSR] |= TM_QW1_NSR_EO;
+            regs[TM_NSR] = TM_QW1_NSR_EO | (group_level & 0x3F);
             break;
         case TM_QW2_HV_POOL:
-            alt_regs[TM_NSR] = (TM_QW3_NSR_HE_POOL << 6);
+            alt_regs[TM_NSR] = (TM_QW3_NSR_HE_POOL << 6) | (group_level & 0x3F);
             break;
         case TM_QW3_HV_PHYS:
-            regs[TM_NSR] |= (TM_QW3_NSR_HE_PHYS << 6);
+            regs[TM_NSR] = (TM_QW3_NSR_HE_PHYS << 6) | (group_level & 0x3F);
             break;
         default:
            g_assert_not_reached();
@@ -159,7 +156,7 @@ static void xive_tctx_set_cppr(XiveTCTX *tctx, uint8_t ring, uint8_t cppr)
      * Recompute the PIPR based on local pending interrupts. The PHYS
      * ring must take the minimum of both the PHYS and POOL PIPR values.
      */
-    pipr_min = ipb_to_pipr(regs[TM_IPB]);
+    pipr_min = xive_ipb_to_pipr(regs[TM_IPB]);
     ring_min = ring;
 
     /* PHYS updates also depend on POOL values */
@@ -169,7 +166,7 @@ static void xive_tctx_set_cppr(XiveTCTX *tctx, uint8_t ring, uint8_t cppr)
 
         /* POOL values only matter if POOL ctx is valid */
         if (pool_regs[TM_WORD2] & 0x80) {
-            uint8_t pool_pipr = ipb_to_pipr(pool_regs[TM_IPB]);
+            uint8_t pool_pipr = xive_ipb_to_pipr(pool_regs[TM_IPB]);
 
             /*
              * Determine highest priority interrupt and
@@ -185,17 +182,27 @@ static void xive_tctx_set_cppr(XiveTCTX *tctx, uint8_t ring, uint8_t cppr)
     regs[TM_PIPR] = pipr_min;
 
     /* CPPR has changed, check if we need to raise a pending exception */
-    xive_tctx_notify(tctx, ring_min);
+    xive_tctx_notify(tctx, ring_min, 0);
 }
 
-void xive_tctx_ipb_update(XiveTCTX *tctx, uint8_t ring, uint8_t ipb)
-{
+void xive_tctx_pipr_update(XiveTCTX *tctx, uint8_t ring, uint8_t priority,
+                           uint8_t group_level)
+{
+    /* HV_POOL ring uses HV_PHYS NSR, CPPR and PIPR registers */
+    uint8_t alt_ring = (ring == TM_QW2_HV_POOL) ? TM_QW3_HV_PHYS : ring;
+    uint8_t *alt_regs = &tctx->regs[alt_ring];
     uint8_t *regs = &tctx->regs[ring];
 
-    regs[TM_IPB] |= ipb;
-    regs[TM_PIPR] = ipb_to_pipr(regs[TM_IPB]);
-    xive_tctx_notify(tctx, ring);
-}
+    if (group_level == 0) {
+        /* VP-specific */
+        regs[TM_IPB] |= xive_priority_to_ipb(priority);
+        alt_regs[TM_PIPR] = xive_ipb_to_pipr(regs[TM_IPB]);
+    } else {
+        /* VP-group */
+        alt_regs[TM_PIPR] = xive_priority_to_pipr(priority);
+    }
+    xive_tctx_notify(tctx, ring, group_level);
+}
 
 /*
  * XIVE Thread Interrupt Management Area (TIMA)
@@ -411,13 +418,13 @@ static void xive_tm_set_os_lgs(XivePresenter *xptr, XiveTCTX *tctx,
 }
 
 /*
- * Adjust the IPB to allow a CPU to process event queues of other
+ * Adjust the PIPR to allow a CPU to process event queues of other
  * priorities during one physical interrupt cycle.
  */
 static void xive_tm_set_os_pending(XivePresenter *xptr, XiveTCTX *tctx,
                                    hwaddr offset, uint64_t value, unsigned size)
 {
-    xive_tctx_ipb_update(tctx, TM_QW1_OS, xive_priority_to_ipb(value & 0xff));
+    xive_tctx_pipr_update(tctx, TM_QW1_OS, value & 0xff, 0);
 }
 
 static void xive_os_cam_decode(uint32_t cam, uint8_t *nvt_blk,
@@ -495,16 +502,20 @@ static void xive_tctx_need_resend(XiveRouter *xrtr, XiveTCTX *tctx,
         /* Reset the NVT value */
         nvt.w4 = xive_set_field32(NVT_W4_IPB, nvt.w4, 0);
         xive_router_write_nvt(xrtr, nvt_blk, nvt_idx, &nvt, 4);
-    }
+
+        uint8_t *regs = &tctx->regs[TM_QW1_OS];
+        regs[TM_IPB] |= ipb;
+    }
+
     /*
-     * Always call xive_tctx_ipb_update(). Even if there were no
+     * Always call xive_tctx_pipr_update(). Even if there were no
      * escalation triggered, there could be a pending interrupt which
      * was saved when the context was pulled and that we need to take
      * into account by recalculating the PIPR (which is not
      * saved/restored).
      * It will also raise the External interrupt signal if needed.
      */
-    xive_tctx_ipb_update(tctx, TM_QW1_OS, ipb);
+    xive_tctx_pipr_update(tctx, TM_QW1_OS, 0xFF, 0); /* fxb */
 }
 
 /*
@@ -841,9 +852,9 @@ void xive_tctx_reset(XiveTCTX *tctx)
      * CPPR is first set.
      */
     tctx->regs[TM_QW1_OS + TM_PIPR] =
-        ipb_to_pipr(tctx->regs[TM_QW1_OS + TM_IPB]);
+        xive_ipb_to_pipr(tctx->regs[TM_QW1_OS + TM_IPB]);
     tctx->regs[TM_QW3_HV_PHYS + TM_PIPR] =
-        ipb_to_pipr(tctx->regs[TM_QW3_HV_PHYS + TM_IPB]);
+        xive_ipb_to_pipr(tctx->regs[TM_QW3_HV_PHYS + TM_IPB]);
 }
 
 static void xive_tctx_realize(DeviceState *dev, Error **errp)
@@ -1660,6 +1671,12 @@ static uint32_t xive_tctx_hw_cam_line(XivePresenter *xptr, XiveTCTX *tctx)
     return xive_nvt_cam_line(blk, 1 << 7 | (pir & 0x7f));
 }
 
+static uint8_t xive_get_group_level(uint32_t nvp_index)
+{
+    /* FIXME add crowd encoding */
+    return ctz32(~nvp_index) + 1;
+}
+
 /*
  * The thread context register words are in big-endian format.
  */
@@ -1745,6 +1762,7 @@ bool xive_presenter_notify(XiveFabric *xfb, uint8_t format,
 {
     XiveFabricClass *xfc = XIVE_FABRIC_GET_CLASS(xfb);
     XiveTCTXMatch match = { .tctx = NULL, .ring = 0 };
+    uint8_t group_level;
     int count;
 
     /*
@@ -1758,9 +1776,9 @@ bool xive_presenter_notify(XiveFabric *xfb, uint8_t format,
 
     /* handle CPU exception delivery */
     if (count) {
-        trace_xive_presenter_notify(nvt_blk, nvt_idx, match.ring);
-        xive_tctx_ipb_update(match.tctx, match.ring,
-                             xive_priority_to_ipb(priority));
+        group_level = cam_ignore ? xive_get_group_level(nvt_idx) : 0;
+        trace_xive_presenter_notify(nvt_blk, nvt_idx, match.ring, group_level);
+        xive_tctx_pipr_update(match.tctx, match.ring, priority, group_level);
     }
 
     return !!count;
diff --git a/hw/intc/xive2.c b/hw/intc/xive2.c
index 4adc3b6950..db372f4b30 100644
--- a/hw/intc/xive2.c
+++ b/hw/intc/xive2.c
@@ -564,8 +564,10 @@ static void xive2_tctx_need_resend(Xive2Router *xrtr, XiveTCTX *tctx,
                                    uint8_t nvp_blk, uint32_t nvp_idx,
                                    bool do_restore)
 {
+    uint8_t ipb, backlog_level;
+    uint8_t backlog_prio;
+    uint8_t *regs = &tctx->regs[TM_QW1_OS];
     Xive2Nvp nvp;
-    uint8_t ipb;
 
     /*
      * Grab the associated thread interrupt context registers in the
@@ -594,15 +596,15 @@ static void xive2_tctx_need_resend(Xive2Router *xrtr, XiveTCTX *tctx,
         nvp.w2 = xive_set_field32(NVP2_W2_IPB, nvp.w2, 0);
         xive2_router_write_nvp(xrtr, nvp_blk, nvp_idx, &nvp, 2);
     }
+    regs[TM_IPB] = ipb;
+    backlog_prio = xive_ipb_to_pipr(ipb);
+    backlog_level = 0;
+
     /*
-     * Always call xive_tctx_ipb_update(). Even if there were no
-     * escalation triggered, there could be a pending interrupt which
-     * was saved when the context was pulled and that we need to take
-     * into account by recalculating the PIPR (which is not
-     * saved/restored).
-     * It will also raise the External interrupt signal if needed.
+     * Compute the PIPR based on the restored state.
+     * It will raise the External interrupt signal if needed.
      */
-    xive_tctx_ipb_update(tctx, TM_QW1_OS, ipb);
+    xive_tctx_pipr_update(tctx, TM_QW1_OS, backlog_prio, backlog_level);
 }
 
 /*
diff --git a/hw/intc/trace-events b/hw/intc/trace-events
index 3dcf147198..7435728c51 100644
--- a/hw/intc/trace-events
+++ b/hw/intc/trace-events
@@ -282,7 +282,7 @@ xive_router_end_notify(uint8_t end_blk, uint32_t end_idx, uint32_t end_data) "EN
 xive_router_end_escalate(uint8_t end_blk, uint32_t end_idx, uint8_t esc_blk, uint32_t esc_idx, uint32_t end_data) "END 0x%02x/0x%04x -> escalate END 0x%02x/0x%04x data 0x%08x"
 xive_tctx_tm_write(uint32_t index, uint64_t offset, unsigned int size, uint64_t value) "target=%d @0x%"PRIx64" sz=%d val=0x%" PRIx64
 xive_tctx_tm_read(uint32_t index, uint64_t offset, unsigned int size, uint64_t value) "target=%d @0x%"PRIx64" sz=%d val=0x%" PRIx64
-xive_presenter_notify(uint8_t nvt_blk, uint32_t nvt_idx, uint8_t ring) "found NVT 0x%x/0x%x ring=0x%x"
+xive_presenter_notify(uint8_t nvt_blk, uint32_t nvt_idx, uint8_t ring, uint8_t group_level) "found NVT 0x%x/0x%x ring=0x%x group_level=%d"
 xive_end_source_read(uint8_t end_blk, uint32_t end_idx, uint64_t addr) "END 0x%x/0x%x @0x%"PRIx64
 
 # pnv_xive.c
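[Editor's note: a small standalone sketch (not QEMU code) of the conversion
helpers introduced by this patch, mirroring xive_priority_to_ipb() and
xive_ipb_to_pipr() with a compiler builtin instead of QEMU's clz32(). The
helper names here are illustrative. It shows that an IPB with priorities 2 and
5 pending yields a PIPR of 2, the most favored (lowest numbered) priority.]

```c
#include <stdint.h>
#include <stdio.h>

#define XIVE_PRIORITY_MAX  7

/* Pending bit for a priority: priority 0 -> 0x80 ... priority 7 -> 0x01 */
static uint8_t priority_to_ipb(uint8_t priority)
{
    return priority > XIVE_PRIORITY_MAX ? 0 : 1 << (XIVE_PRIORITY_MAX - priority);
}

/* Most favored pending priority, or 0xFF if nothing is pending */
static uint8_t ipb_to_pipr(uint8_t ipb)
{
    return ipb ? __builtin_clz((uint32_t)ipb << 24) : 0xFF;
}

int main(void)
{
    uint8_t ipb = priority_to_ipb(2) | priority_to_ipb(5);   /* 0x20 | 0x04 */

    printf("ipb=0x%02x pipr=%u\n", (unsigned)ipb, (unsigned)ipb_to_pipr(ipb));
    /* prints: ipb=0x24 pipr=2 */
    return 0;
}
```

In the patch itself, the computed group level is simply OR'ed into the low
6 bits of the NSR (group_level & 0x3F), with 0 meaning a VP-specific
notification.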
From patchwork Tue Oct 15 21:13:18 2024
X-Patchwork-Submitter: Mike Kowal
X-Patchwork-Id: 13837100

From: Michael Kowal
To: qemu-devel@nongnu.org
Cc: qemu-ppc@nongnu.org, clg@kaod.org, fbarrat@linux.ibm.com, npiggin@gmail.com, milesg@linux.ibm.com, danielhb413@gmail.com, david@gibson.dropbear.id.au, harshpb@linux.ibm.com, thuth@redhat.com, lvivier@redhat.com, pbonzini@redhat.com
Subject: [PATCH 03/14] ppc/xive2: Support group-matching when looking for target
Date: Tue, 15 Oct 2024 16:13:18 -0500
Message-Id: <20241015211329.21113-4-kowal@linux.ibm.com>
In-Reply-To: <20241015211329.21113-1-kowal@linux.ibm.com>
References: <20241015211329.21113-1-kowal@linux.ibm.com>

From: Frederic Barrat

If an END has the 'i' bit set (ignore), then it targets a group of VPs.
The size of the group depends on the VP index of the target (first 0
found when looking at the least significant bits of the index), so a
mask is applied on the VP index of a running thread to know if we have
a match.

Signed-off-by: Frederic Barrat
Signed-off-by: Michael Kowal
---
 include/hw/ppc/xive.h  |  5 +++-
 include/hw/ppc/xive2.h |  1 +
 hw/intc/pnv_xive2.c    | 33 ++++++++++++++-------
 hw/intc/xive.c         | 56 +++++++++++++++++++++++-----------
 hw/intc/xive2.c        | 65 ++++++++++++++++++++++++++++++------------
 5 files changed, 114 insertions(+), 46 deletions(-)

diff --git a/include/hw/ppc/xive.h b/include/hw/ppc/xive.h
index 27ef6c1a17..a177b75723 100644
--- a/include/hw/ppc/xive.h
+++ b/include/hw/ppc/xive.h
@@ -424,6 +424,7 @@ void xive_router_end_notify(XiveRouter *xrtr, XiveEAS *eas);
 typedef struct XiveTCTXMatch {
     XiveTCTX *tctx;
     uint8_t ring;
+    bool precluded;
 } XiveTCTXMatch;
 
 #define TYPE_XIVE_PRESENTER "xive-presenter"
@@ -452,7 +453,9 @@ int xive_presenter_tctx_match(XivePresenter *xptr, XiveTCTX *tctx,
 bool xive_presenter_notify(XiveFabric *xfb, uint8_t format,
                            uint8_t nvt_blk, uint32_t nvt_idx,
                            bool cam_ignore, uint8_t priority,
-                           uint32_t logic_serv);
+                           uint32_t logic_serv, bool *precluded);
+
+uint32_t xive_get_vpgroup_size(uint32_t nvp_index);
 
 /*
  * XIVE Fabric (Interface between Interrupt Controller and Machine)
diff --git a/include/hw/ppc/xive2.h b/include/hw/ppc/xive2.h
index 5bccf41159..17c31fcb4b 100644
--- a/include/hw/ppc/xive2.h
+++ b/include/hw/ppc/xive2.h
@@ -121,6 +121,7 @@ uint64_t xive2_tm_pull_os_ctx(XivePresenter *xptr, XiveTCTX *tctx,
                               hwaddr offset, unsigned size);
 void xive2_tm_pull_os_ctx_ol(XivePresenter *xptr, XiveTCTX *tctx,
                              hwaddr offset, uint64_t value, unsigned size);
+bool xive2_tm_irq_precluded(XiveTCTX *tctx, int ring, uint8_t priority);
 void xive2_tm_set_hv_target(XivePresenter *xptr, XiveTCTX *tctx,
                             hwaddr offset, uint64_t value, unsigned size);
 void xive2_tm_pull_phys_ctx_ol(XivePresenter *xptr, XiveTCTX *tctx,
diff --git a/hw/intc/pnv_xive2.c b/hw/intc/pnv_xive2.c
index 834d32287b..3fb466bb2c 100644
--- a/hw/intc/pnv_xive2.c
+++ b/hw/intc/pnv_xive2.c
@@ -660,21 +660,34 @@ static int pnv_xive2_match_nvt(XivePresenter *xptr, uint8_t format,
                                                  logic_serv);
             }
 
-            /*
-             * Save the context and follow on to catch duplicates,
-             * that we don't support yet.
-             */
             if (ring != -1) {
-                if (match->tctx) {
+                /*
+                 * For VP-specific match, finding more than one is a
+                 * problem. For group notification, it's possible.
+                 */
+                if (!cam_ignore && match->tctx) {
                     qemu_log_mask(LOG_GUEST_ERROR, "XIVE: already found a "
                                   "thread context NVT %x/%x\n",
                                   nvt_blk, nvt_idx);
-                    return false;
+                    /* Should set a FIR if we ever model it */
+                    return -1;
+                }
+                /*
+                 * For a group notification, we need to know if the
+                 * match is precluded first by checking the current
+                 * thread priority. If the interrupt can be delivered,
+                 * we always notify the first match (for now).
+                 */
+                if (cam_ignore &&
+                    xive2_tm_irq_precluded(tctx, ring, priority)) {
+                    match->precluded = true;
+                } else {
+                    if (!match->tctx) {
+                        match->ring = ring;
+                        match->tctx = tctx;
+                    }
+                    count++;
                 }
-
-                match->ring = ring;
-                match->tctx = tctx;
-                count++;
             }
         }
     }
diff --git a/hw/intc/xive.c b/hw/intc/xive.c
index bacf518fa6..8ffcac4f65 100644
--- a/hw/intc/xive.c
+++ b/hw/intc/xive.c
@@ -1671,6 +1671,16 @@ static uint32_t xive_tctx_hw_cam_line(XivePresenter *xptr, XiveTCTX *tctx)
     return xive_nvt_cam_line(blk, 1 << 7 | (pir & 0x7f));
 }
 
+uint32_t xive_get_vpgroup_size(uint32_t nvp_index)
+{
+    /*
+     * Group size is a power of 2. The position of the first 0
+     * (starting with the least significant bits) in the NVP index
+     * gives the size of the group.
+     */
+    return 1 << (ctz32(~nvp_index) + 1);
+}
+
 static uint8_t xive_get_group_level(uint32_t nvp_index)
 {
     /* FIXME add crowd encoding */
@@ -1743,30 +1753,39 @@ int xive_presenter_tctx_match(XivePresenter *xptr, XiveTCTX *tctx,
 /*
  * This is our simple Xive Presenter Engine model. It is merged in the
  * Router as it does not require an extra object.
- *
- * It receives notification requests sent by the IVRE to find one
- * matching NVT (or more) dispatched on the processor threads. In case
- * of a single NVT notification, the process is abbreviated and the
- * thread is signaled if a match is found. In case of a logical server
- * notification (bits ignored at the end of the NVT identifier), the
- * IVPE and IVRE select a winning thread using different filters. This
- * involves 2 or 3 exchanges on the PowerBus that the model does not
- * support.
- *
- * The parameters represent what is sent on the PowerBus
  */
 bool xive_presenter_notify(XiveFabric *xfb, uint8_t format,
                            uint8_t nvt_blk, uint32_t nvt_idx,
                            bool cam_ignore, uint8_t priority,
-                           uint32_t logic_serv)
+                           uint32_t logic_serv, bool *precluded)
 {
     XiveFabricClass *xfc = XIVE_FABRIC_GET_CLASS(xfb);
-    XiveTCTXMatch match = { .tctx = NULL, .ring = 0 };
+    XiveTCTXMatch match = { .tctx = NULL, .ring = 0, .precluded = false };
     uint8_t group_level;
     int count;
 
     /*
-     * Ask the machine to scan the interrupt controllers for a match
+     * Ask the machine to scan the interrupt controllers for a match.
+     *
+     * For VP-specific notification, we expect at most one match and
+     * one call to the presenters is all we need (abbreviated notify
+     * sequence documented by the architecture).
+     *
+     * For VP-group notification, match_nvt() is the equivalent of the
+     * "histogram" and "poll" commands sent to the power bus to the
+     * presenters. 'count' could be more than one, but we always
+     * select the first match for now. 'precluded' tells if (at least)
+     * one thread matches but can't take the interrupt now because
+     * it's running at a more favored priority. We return the
+     * information to the router so that it can take appropriate
+     * actions (backlog, escalation, broadcast, etc...)
+     *
+     * If we were to implement a better way of dispatching the
+     * interrupt in case of multiple matches (instead of the first
+     * match), we would need a heuristic to elect a thread (for
+     * example, the hardware keeps track of an 'age' in the TIMA) and
+     * a new command to the presenters (the equivalent of the "assign"
+     * power bus command in the documented full notify sequence.
      */
     count = xfc->match_nvt(xfb, format, nvt_blk, nvt_idx, cam_ignore,
                            priority, logic_serv, &match);
@@ -1779,6 +1798,8 @@ bool xive_presenter_notify(XiveFabric *xfb, uint8_t format,
         group_level = cam_ignore ? xive_get_group_level(nvt_idx) : 0;
         trace_xive_presenter_notify(nvt_blk, nvt_idx, match.ring, group_level);
         xive_tctx_pipr_update(match.tctx, match.ring, priority, group_level);
+    } else {
+        *precluded = match.precluded;
     }
 
     return !!count;
@@ -1818,7 +1839,7 @@ void xive_router_end_notify(XiveRouter *xrtr, XiveEAS *eas)
     uint8_t nvt_blk;
     uint32_t nvt_idx;
     XiveNVT nvt;
-    bool found;
+    bool found, precluded;
 
     uint8_t end_blk = xive_get_field64(EAS_END_BLOCK, eas->w);
     uint32_t end_idx = xive_get_field64(EAS_END_INDEX, eas->w);
@@ -1901,8 +1922,9 @@ void xive_router_end_notify(XiveRouter *xrtr, XiveEAS *eas)
     found = xive_presenter_notify(xrtr->xfb, format, nvt_blk, nvt_idx,
                           xive_get_field32(END_W7_F0_IGNORE, end.w7),
                           priority,
-                          xive_get_field32(END_W7_F1_LOG_SERVER_ID, end.w7));
-
+                          xive_get_field32(END_W7_F1_LOG_SERVER_ID, end.w7),
+                          &precluded);
+    /* we don't support VP-group notification on P9, so precluded is not used */
     /* TODO: Auto EOI. */
 
     if (found) {
diff --git a/hw/intc/xive2.c b/hw/intc/xive2.c
index db372f4b30..2cb03c758e 100644
--- a/hw/intc/xive2.c
+++ b/hw/intc/xive2.c
@@ -739,6 +739,12 @@ int xive2_router_write_nvgc(Xive2Router *xrtr, bool crowd,
     return xrc->write_nvgc(xrtr, crowd, nvgc_blk, nvgc_idx, nvgc);
 }
 
+static bool xive2_vp_match_mask(uint32_t cam1, uint32_t cam2,
+                                uint32_t vp_mask)
+{
+    return (cam1 & vp_mask) == (cam2 & vp_mask);
+}
+
 /*
  * The thread context register words are in big-endian format.
  */
@@ -753,44 +759,50 @@ int xive2_presenter_tctx_match(XivePresenter *xptr, XiveTCTX *tctx,
     uint32_t qw1w2 = xive_tctx_word2(&tctx->regs[TM_QW1_OS]);
     uint32_t qw0w2 = xive_tctx_word2(&tctx->regs[TM_QW0_USER]);
 
-    /*
-     * TODO (PowerNV): ignore mode. The low order bits of the NVT
-     * identifier are ignored in the "CAM" match.
-     */
+    uint32_t vp_mask = 0xFFFFFFFF;
 
     if (format == 0) {
-        if (cam_ignore == true) {
-            /*
-             * F=0 & i=1: Logical server notification (bits ignored at
-             * the end of the NVT identifier)
-             */
-            qemu_log_mask(LOG_UNIMP, "XIVE: no support for LS NVT %x/%x\n",
-                          nvt_blk, nvt_idx);
-            return -1;
+        /*
+         * i=0: Specific NVT notification
+         * i=1: VP-group notification (bits ignored at the end of the
+         *      NVT identifier)
+         */
+        if (cam_ignore) {
+            vp_mask = ~(xive_get_vpgroup_size(nvt_idx) - 1);
         }
 
-        /* F=0 & i=0: Specific NVT notification */
+        /* For VP-group notifications, threads with LGS=0 are excluded */
 
         /* PHYS ring */
         if ((be32_to_cpu(qw3w2) & TM2_QW3W2_VT) &&
-            cam == xive2_tctx_hw_cam_line(xptr, tctx)) {
+            !(cam_ignore && tctx->regs[TM_QW3_HV_PHYS + TM_LGS] == 0) &&
+            xive2_vp_match_mask(cam,
+                                xive2_tctx_hw_cam_line(xptr, tctx),
+                                vp_mask)) {
             return TM_QW3_HV_PHYS;
         }
 
         /* HV POOL ring */
         if ((be32_to_cpu(qw2w2) & TM2_QW2W2_VP) &&
-            cam == xive_get_field32(TM2_QW2W2_POOL_CAM, qw2w2)) {
+            !(cam_ignore && tctx->regs[TM_QW2_HV_POOL + TM_LGS] == 0) &&
+            xive2_vp_match_mask(cam,
+                                xive_get_field32(TM2_QW2W2_POOL_CAM, qw2w2),
+                                vp_mask)) {
             return TM_QW2_HV_POOL;
         }
 
         /* OS ring */
         if ((be32_to_cpu(qw1w2) & TM2_QW1W2_VO) &&
-            cam == xive_get_field32(TM2_QW1W2_OS_CAM, qw1w2)) {
+            !(cam_ignore && tctx->regs[TM_QW1_OS + TM_LGS] == 0) &&
+            xive2_vp_match_mask(cam,
+                                xive_get_field32(TM2_QW1W2_OS_CAM, qw1w2),
+                                vp_mask)) {
             return TM_QW1_OS;
         }
     } else {
         /* F=1 : User level Event-Based Branch (EBB) notification */
 
+        /* FIXME: what if cam_ignore and LGS = 0 ? */
+
         /* USER ring */
         if ((be32_to_cpu(qw1w2) & TM2_QW1W2_VO) &&
             (cam == xive_get_field32(TM2_QW1W2_OS_CAM, qw1w2)) &&
@@ -802,6 +814,22 @@ int xive2_presenter_tctx_match(XivePresenter *xptr, XiveTCTX *tctx,
     return -1;
 }
 
+bool xive2_tm_irq_precluded(XiveTCTX *tctx, int ring, uint8_t priority)
+{
+    uint8_t *regs = &tctx->regs[ring];
+
+    /*
+     * The xive2_presenter_tctx_match() above tells if there's a match
+     * but for VP-group notification, we still need to look at the
+     * priority to know if the thread can take the interrupt now or if
+     * it is precluded.
+     */
+    if (priority < regs[TM_CPPR]) {
+        return false;
+    }
+    return true;
+}
+
 static void xive2_router_realize(DeviceState *dev, Error **errp)
 {
     Xive2Router *xrtr = XIVE2_ROUTER(dev);
@@ -841,7 +869,7 @@ static void xive2_router_end_notify(Xive2Router *xrtr, uint8_t end_blk,
     Xive2End end;
     uint8_t priority;
     uint8_t format;
-    bool found;
+    bool found, precluded;
     Xive2Nvp nvp;
     uint8_t nvp_blk;
     uint32_t nvp_idx;
@@ -922,7 +950,8 @@ static void xive2_router_end_notify(Xive2Router *xrtr, uint8_t end_blk,
     found = xive_presenter_notify(xrtr->xfb, format, nvp_blk, nvp_idx,
                           xive2_end_is_ignore(&end),
                           priority,
-                          xive_get_field32(END2_W7_F1_LOG_SERVER_ID, end.w7));
+                          xive_get_field32(END2_W7_F1_LOG_SERVER_ID, end.w7),
+                          &precluded);
 
     /* TODO: Auto EOI. */
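[Editor's note: a standalone worked example (not QEMU code; the helper name is
illustrative) of the group-size and mask arithmetic above, mirroring
xive_get_vpgroup_size() and the vp_mask logic: the position of the first 0 bit
from the LSB of the NVP index determines the group size, and matching then
ignores that many low-order bits of the CAM line.]

```c
#include <stdint.h>
#include <stdio.h>

/* First 0 bit (from the LSB) at position k => group of 2^(k+1) VPs */
static uint32_t vpgroup_size(uint32_t nvp_index)
{
    return 1u << (__builtin_ctz(~nvp_index) + 1);
}

int main(void)
{
    uint32_t group_idx = 0x107;                 /* low bits ...0 0111 */
    uint32_t size = vpgroup_size(group_idx);    /* first 0 at bit 3 -> 16 */
    uint32_t vp_mask = ~(size - 1);             /* ignore the low 4 bits */
    uint32_t thread_cam = 0x10a;                /* a VP inside 0x100..0x10f */

    printf("size=%u mask=0x%08x match=%d\n",
           (unsigned)size, (unsigned)vp_mask,
           (thread_cam & vp_mask) == (group_idx & vp_mask));
    /* prints: size=16 mask=0xfffffff0 match=1 */
    return 0;
}
```

For the same index, the group level computed by patch 02's
xive_get_group_level() would be ctz(~index) + 1 = 4, i.e. log2 of the group
size, which is the value stored in the low bits of the NSR.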
From patchwork Tue Oct 15 21:13:19 2024
X-Patchwork-Submitter: Mike Kowal
X-Patchwork-Id: 13837098

From: Michael Kowal
To: qemu-devel@nongnu.org
Cc: qemu-ppc@nongnu.org, clg@kaod.org, fbarrat@linux.ibm.com, npiggin@gmail.com, milesg@linux.ibm.com, danielhb413@gmail.com, david@gibson.dropbear.id.au, harshpb@linux.ibm.com, thuth@redhat.com, lvivier@redhat.com, pbonzini@redhat.com
Subject: [PATCH 04/14] ppc/xive2: Add undelivered group interrupt to backlog
Date: Tue, 15 Oct 2024 16:13:19 -0500
Message-Id: <20241015211329.21113-5-kowal@linux.ibm.com>
In-Reply-To: <20241015211329.21113-1-kowal@linux.ibm.com>
References: <20241015211329.21113-1-kowal@linux.ibm.com>
From: Frederic Barrat

When a group interrupt cannot be delivered, we need to:
- increment the backlog counter for the group in the NVG table (if the
  END is configured to keep a backlog).
- start a broadcast operation to set the LSMFB field on matching CPUs
  which can't take the interrupt now because they're running at too
  high a priority.

Signed-off-by: Frederic Barrat
Signed-off-by: Michael Kowal
---
 include/hw/ppc/xive.h  |   5 ++
 include/hw/ppc/xive2.h |   1 +
 hw/intc/pnv_xive2.c    |  42 +++++++++++++++++
 hw/intc/xive2.c        | 105 +++++++++++++++++++++++++++++++++++------
 hw/ppc/pnv.c           |  18 +++++++
 5 files changed, 156 insertions(+), 15 deletions(-)

diff --git a/include/hw/ppc/xive.h b/include/hw/ppc/xive.h
index a177b75723..7660578b20 100644
--- a/include/hw/ppc/xive.h
+++ b/include/hw/ppc/xive.h
@@ -444,6 +444,9 @@ struct XivePresenterClass {
                      uint32_t logic_serv, XiveTCTXMatch *match);
     bool (*in_kernel)(const XivePresenter *xptr);
     uint32_t (*get_config)(XivePresenter *xptr);
+    int (*broadcast)(XivePresenter *xptr,
+                     uint8_t nvt_blk, uint32_t nvt_idx,
+                     uint8_t priority);
 };
 
 int xive_presenter_tctx_match(XivePresenter *xptr, XiveTCTX *tctx,
@@ -474,6 +477,8 @@ struct XiveFabricClass {
                      uint8_t nvt_blk, uint32_t nvt_idx,
                      bool cam_ignore, uint8_t priority,
                      uint32_t logic_serv, XiveTCTXMatch *match);
+    int (*broadcast)(XiveFabric *xfb, uint8_t nvt_blk, uint32_t nvt_idx,
+                     uint8_t priority);
 };
 
 /*
diff --git a/include/hw/ppc/xive2.h b/include/hw/ppc/xive2.h
index 17c31fcb4b..d88db05687 100644
--- a/include/hw/ppc/xive2.h
+++ b/include/hw/ppc/xive2.h
@@ -122,6 +122,7 @@ uint64_t xive2_tm_pull_os_ctx(XivePresenter *xptr, XiveTCTX *tctx,
 void xive2_tm_pull_os_ctx_ol(XivePresenter *xptr, XiveTCTX *tctx,
                              hwaddr offset, uint64_t value, unsigned size);
 bool xive2_tm_irq_precluded(XiveTCTX *tctx, int ring, uint8_t priority);
+void xive2_tm_set_lsmfb(XiveTCTX *tctx, int ring, uint8_t priority);
 void xive2_tm_set_hv_target(XivePresenter *xptr, XiveTCTX *tctx,
                             hwaddr offset, uint64_t value, unsigned size);
 void xive2_tm_pull_phys_ctx_ol(XivePresenter *xptr, XiveTCTX *tctx,
diff --git a/hw/intc/pnv_xive2.c b/hw/intc/pnv_xive2.c
index 3fb466bb2c..0482193fd7 100644
--- a/hw/intc/pnv_xive2.c
+++ b/hw/intc/pnv_xive2.c
@@ -706,6 +706,47 @@ static uint32_t pnv_xive2_presenter_get_config(XivePresenter *xptr)
     return cfg;
 }
 
+static int pnv_xive2_broadcast(XivePresenter *xptr,
+                               uint8_t nvt_blk, uint32_t nvt_idx,
+                               uint8_t priority)
+{
+    PnvXive2 *xive = PNV_XIVE2(xptr);
+    PnvChip *chip = xive->chip;
+    int i, j;
+    bool gen1_tima_os =
+        xive->cq_regs[CQ_XIVE_CFG >> 3] & CQ_XIVE_CFG_GEN1_TIMA_OS;
+
+    for (i = 0; i < chip->nr_cores; i++) {
+        PnvCore *pc = chip->cores[i];
+        CPUCore *cc = CPU_CORE(pc);
+
+        for (j = 0; j < cc->nr_threads; j++) {
+            PowerPCCPU *cpu = pc->threads[j];
+            XiveTCTX *tctx;
+            int ring;
+
+            if (!pnv_xive2_is_cpu_enabled(xive, cpu)) {
+                continue;
+            }
+
+            tctx = XIVE_TCTX(pnv_cpu_state(cpu)->intc);
+
+            if (gen1_tima_os) {
+                ring = xive_presenter_tctx_match(xptr, tctx, 0, nvt_blk,
+                                                 nvt_idx, true, 0);
+            } else {
+                ring = xive2_presenter_tctx_match(xptr, tctx, 0, nvt_blk,
+                                                  nvt_idx, true, 0);
+            }
+
+            if (ring != -1) {
+                xive2_tm_set_lsmfb(tctx, ring, priority);
+            }
+        }
+    }
+    return 0;
+}
+
 static uint8_t pnv_xive2_get_block_id(Xive2Router *xrtr)
 {
     return pnv_xive2_block_id(PNV_XIVE2(xrtr));
@@ -2446,6 +2487,7 @@ static void pnv_xive2_class_init(ObjectClass *klass, void *data)
 
     xpc->match_nvt = pnv_xive2_match_nvt;
     xpc->get_config = pnv_xive2_presenter_get_config;
+    xpc->broadcast = pnv_xive2_broadcast;
 };
 
 static const TypeInfo pnv_xive2_info = {
diff --git a/hw/intc/xive2.c b/hw/intc/xive2.c
index 2cb03c758e..a6dc6d553f 100644
--- a/hw/intc/xive2.c
+++ b/hw/intc/xive2.c
@@ -63,6 +63,30 @@ static uint32_t xive2_nvgc_get_backlog(Xive2Nvgc *nvgc, uint8_t priority)
     return val;
 }
 
+static void xive2_nvgc_set_backlog(Xive2Nvgc *nvgc, uint8_t priority,
+                                   uint32_t val)
+{
+    uint8_t *ptr, i;
+    uint32_t shift;
+
+    if (priority > 7) {
+        return;
+    }
+
+    if (val > 0xFFFFFF) {
+        val = 0xFFFFFF;
+    }
+    /*
+     * The per-priority backlog counters are 24-bit and the structure
+     * is stored in big endian
+     */
+    ptr = (uint8_t *)&nvgc->w2 + priority * 3;
+    for (i = 0; i < 3; i++, ptr++) {
+        shift = 8 * (2 - i);
+        *ptr = (val >> shift) & 0xFF;
+    }
+}
+
 void xive2_eas_pic_print_info(Xive2Eas *eas, uint32_t lisn, GString *buf)
 {
     if (!xive2_eas_is_valid(eas)) {
@@ -830,6 +854,19 @@ bool xive2_tm_irq_precluded(XiveTCTX *tctx, int ring, uint8_t priority)
     return true;
 }
 
+void xive2_tm_set_lsmfb(XiveTCTX *tctx, int ring, uint8_t priority)
+{
+    uint8_t *regs = &tctx->regs[ring];
+
+    /*
+     * Called by the router during a VP-group notification when the
+     * thread matches but can't take the interrupt because it's
+     * already running at a more favored priority. It then stores the
+     * new interrupt priority in the LSMFB field.
+     */
+    regs[TM_LSMFB] = priority;
+}
+
 static void xive2_router_realize(DeviceState *dev, Error **errp)
 {
     Xive2Router *xrtr = XIVE2_ROUTER(dev);
@@ -962,10 +999,9 @@ static void xive2_router_end_notify(Xive2Router *xrtr, uint8_t end_blk,
     /*
      * If no matching NVP is dispatched on a HW thread :
      * - specific VP: update the NVP structure if backlog is activated
-     * - logical server : forward request to IVPE (not supported)
+     * - VP-group: update the backlog counter for that priority in the NVG
      */
     if (xive2_end_is_backlog(&end)) {
-        uint8_t ipb;
 
         if (format == 1) {
             qemu_log_mask(LOG_GUEST_ERROR,
@@ -974,19 +1010,58 @@ static void xive2_router_end_notify(Xive2Router *xrtr, uint8_t end_blk,
             return;
         }
 
-        /*
-         * Record the IPB in the associated NVP structure for later
-         * use. The presenter will resend the interrupt when the vCPU
-         * is dispatched again on a HW thread.
-         */
-        ipb = xive_get_field32(NVP2_W2_IPB, nvp.w2) |
-            xive_priority_to_ipb(priority);
-        nvp.w2 = xive_set_field32(NVP2_W2_IPB, nvp.w2, ipb);
-        xive2_router_write_nvp(xrtr, nvp_blk, nvp_idx, &nvp, 2);
-
-        /*
-         * On HW, follows a "Broadcast Backlog" to IVPEs
-         */
+        if (!xive2_end_is_ignore(&end)) {
+            uint8_t ipb;
+            /*
+             * Record the IPB in the associated NVP structure for later
+             * use. The presenter will resend the interrupt when the vCPU
+             * is dispatched again on a HW thread.
+             */
+            ipb = xive_get_field32(NVP2_W2_IPB, nvp.w2) |
+                xive_priority_to_ipb(priority);
+            nvp.w2 = xive_set_field32(NVP2_W2_IPB, nvp.w2, ipb);
+            xive2_router_write_nvp(xrtr, nvp_blk, nvp_idx, &nvp, 2);
+        } else {
+            Xive2Nvgc nvg;
+            uint32_t backlog;
+
+            /* For groups, the per-priority backlog counters are in the NVG */
+            if (xive2_router_get_nvgc(xrtr, false, nvp_blk, nvp_idx, &nvg)) {
+                qemu_log_mask(LOG_GUEST_ERROR, "XIVE: no NVG %x/%x\n",
+                              nvp_blk, nvp_idx);
+                return;
+            }
+
+            if (!xive2_nvgc_is_valid(&nvg)) {
+                qemu_log_mask(LOG_GUEST_ERROR, "XIVE: NVG %x/%x is invalid\n",
+                              nvp_blk, nvp_idx);
+                return;
+            }
+
+            /*
+             * Increment the backlog counter for that priority.
+             * For the precluded case, we only call broadcast the
+             * first time the counter is incremented. broadcast will
+             * set the LSMFB field of the TIMA of relevant threads so
+             * that they know an interrupt is pending.
+             */
+            backlog = xive2_nvgc_get_backlog(&nvg, priority) + 1;
+            xive2_nvgc_set_backlog(&nvg, priority, backlog);
+            xive2_router_write_nvgc(xrtr, false, nvp_blk, nvp_idx, &nvg);
+
+            if (precluded && backlog == 1) {
+                XiveFabricClass *xfc = XIVE_FABRIC_GET_CLASS(xrtr->xfb);
+                xfc->broadcast(xrtr->xfb, nvp_blk, nvp_idx, priority);
+
+                if (!xive2_end_is_precluded_escalation(&end)) {
+                    /*
+                     * The interrupt will be picked up when the
+                     * matching thread lowers its priority level
+                     */
+                    return;
+                }
+            }
+        }
     }
 
 do_escalation:
diff --git a/hw/ppc/pnv.c b/hw/ppc/pnv.c
index 3526852685..9b42f47326 100644
--- a/hw/ppc/pnv.c
+++ b/hw/ppc/pnv.c
@@ -2610,6 +2610,23 @@ static int pnv10_xive_match_nvt(XiveFabric *xfb, uint8_t format,
     return total_count;
 }
 
+static int pnv10_xive_broadcast(XiveFabric *xfb,
+                                uint8_t nvt_blk, uint32_t nvt_idx,
+                                uint8_t priority)
+{
+    PnvMachineState *pnv = PNV_MACHINE(xfb);
+    int i;
+
+    for (i = 0; i < pnv->num_chips; i++) {
+        Pnv10Chip *chip10 = PNV10_CHIP(pnv->chips[i]);
+        XivePresenter *xptr = XIVE_PRESENTER(&chip10->xive);
+        XivePresenterClass *xpc = XIVE_PRESENTER_GET_CLASS(xptr);
+
+        xpc->broadcast(xptr, nvt_blk, nvt_idx, priority);
+    }
+    return 0;
+}
+
 static bool pnv_machine_get_big_core(Object *obj, Error **errp)
 {
     PnvMachineState *pnv = PNV_MACHINE(obj);
@@ -2743,6 +2760,7 @@ static void pnv_machine_p10_common_class_init(ObjectClass *oc, void *data)
     pmc->dt_power_mgt = pnv_dt_power_mgt;
 
     xfc->match_nvt = pnv10_xive_match_nvt;
+    xfc->broadcast = pnv10_xive_broadcast;
 
     machine_class_allow_dynamic_sysbus_dev(mc, TYPE_PNV_PHB);
 }
From: Michael Kowal
To: qemu-devel@nongnu.org
Cc: qemu-ppc@nongnu.org, clg@kaod.org, fbarrat@linux.ibm.com, npiggin@gmail.com, milesg@linux.ibm.com, danielhb413@gmail.com, david@gibson.dropbear.id.au, harshpb@linux.ibm.com, thuth@redhat.com, lvivier@redhat.com, pbonzini@redhat.com
Subject: [PATCH 05/14] ppc/xive2: Process group backlog when pushing an OS context
Date: Tue, 15 Oct 2024 16:13:20 -0500
Message-Id: <20241015211329.21113-6-kowal@linux.ibm.com>
In-Reply-To: <20241015211329.21113-1-kowal@linux.ibm.com>
References: <20241015211329.21113-1-kowal@linux.ibm.com>
RCVD_IN_VALIDITY_RPBL_BLOCKED=0.001, SPF_HELO_NONE=0.001, SPF_PASS=-0.001 autolearn=ham autolearn_force=no X-Spam_action: no action X-BeenThere: qemu-devel@nongnu.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: qemu-devel-bounces+qemu-devel=archiver.kernel.org@nongnu.org Sender: qemu-devel-bounces+qemu-devel=archiver.kernel.org@nongnu.org From: Frederic Barrat When pushing an OS context, we were already checking if there was a pending interrupt in the IPB and sending a notification if needed. We also need to check if there is a pending group interrupt stored in the NVG table. To avoid useless backlog scans, we only scan if the NVP belongs to a group. Signed-off-by: Frederic Barrat Signed-off-by: Michael Kowal --- hw/intc/xive2.c | 100 ++++++++++++++++++++++++++++++++++++++++++++++-- 1 file changed, 97 insertions(+), 3 deletions(-) diff --git a/hw/intc/xive2.c b/hw/intc/xive2.c index a6dc6d553f..7130892482 100644 --- a/hw/intc/xive2.c +++ b/hw/intc/xive2.c @@ -279,6 +279,85 @@ static void xive2_end_enqueue(Xive2End *end, uint32_t data) end->w1 = xive_set_field32(END2_W1_PAGE_OFF, end->w1, qindex); } +/* + * Scan the group chain and return the highest priority and group + * level of pending group interrupts. + */ +static uint8_t xive2_presenter_backlog_check(XivePresenter *xptr, + uint8_t nvp_blk, uint32_t nvp_idx, + uint8_t first_group, + uint8_t *out_level) +{ + Xive2Router *xrtr = XIVE2_ROUTER(xptr); + uint32_t nvgc_idx, mask; + uint32_t current_level, count; + uint8_t prio; + Xive2Nvgc nvgc; + + for (prio = 0; prio <= XIVE_PRIORITY_MAX; prio++) { + current_level = first_group & 0xF; + + while (current_level) { + mask = (1 << current_level) - 1; + nvgc_idx = nvp_idx & ~mask; + nvgc_idx |= mask >> 1; + qemu_log("fxb %s checking backlog for prio %d group idx %x\n", + __func__, prio, nvgc_idx); + + if (xive2_router_get_nvgc(xrtr, false, nvp_blk, nvgc_idx, &nvgc)) { + qemu_log_mask(LOG_GUEST_ERROR, "XIVE: No NVG %x/%x\n", + nvp_blk, nvgc_idx); + return 0xFF; + } + if (!xive2_nvgc_is_valid(&nvgc)) { + qemu_log_mask(LOG_GUEST_ERROR, "XIVE: Invalid NVG %x/%x\n", + nvp_blk, nvgc_idx); + return 0xFF; + } + + count = xive2_nvgc_get_backlog(&nvgc, prio); + if (count) { + *out_level = current_level; + return prio; + } + current_level = xive_get_field32(NVGC2_W0_PGONEXT, nvgc.w0) & 0xF; + } + } + return 0xFF; +} + +static void xive2_presenter_backlog_decr(XivePresenter *xptr, + uint8_t nvp_blk, uint32_t nvp_idx, + uint8_t group_prio, + uint8_t group_level) +{ + Xive2Router *xrtr = XIVE2_ROUTER(xptr); + uint32_t nvgc_idx, mask, count; + Xive2Nvgc nvgc; + + group_level &= 0xF; + mask = (1 << group_level) - 1; + nvgc_idx = nvp_idx & ~mask; + nvgc_idx |= mask >> 1; + + if (xive2_router_get_nvgc(xrtr, false, nvp_blk, nvgc_idx, &nvgc)) { + qemu_log_mask(LOG_GUEST_ERROR, "XIVE: No NVG %x/%x\n", + nvp_blk, nvgc_idx); + return; + } + if (!xive2_nvgc_is_valid(&nvgc)) { + qemu_log_mask(LOG_GUEST_ERROR, "XIVE: Invalid NVG %x/%x\n", + nvp_blk, nvgc_idx); + return; + } + count = xive2_nvgc_get_backlog(&nvgc, group_prio); + if (!count) { + return; + } + xive2_nvgc_set_backlog(&nvgc, group_prio, count - 1); + xive2_router_write_nvgc(xrtr, false, nvp_blk, nvgc_idx, &nvgc); +} + /* * XIVE Thread Interrupt Management Area (TIMA) - Gen2 mode * @@ -588,8 +667,9 @@ static void xive2_tctx_need_resend(Xive2Router *xrtr, XiveTCTX *tctx, uint8_t nvp_blk, uint32_t nvp_idx, bool do_restore) { - uint8_t ipb, backlog_level; - uint8_t 
backlog_prio; + XivePresenter *xptr = XIVE_PRESENTER(xrtr); + uint8_t ipb, backlog_level, group_level, first_group; + uint8_t backlog_prio, group_prio; uint8_t *regs = &tctx->regs[TM_QW1_OS]; Xive2Nvp nvp; @@ -624,8 +704,22 @@ static void xive2_tctx_need_resend(Xive2Router *xrtr, XiveTCTX *tctx, backlog_prio = xive_ipb_to_pipr(ipb); backlog_level = 0; + first_group = xive_get_field32(NVP2_W0_PGOFIRST, nvp.w0); + if (first_group && regs[TM_LSMFB] < backlog_prio) { + group_prio = xive2_presenter_backlog_check(xptr, nvp_blk, nvp_idx, + first_group, &group_level); + regs[TM_LSMFB] = group_prio; + if (regs[TM_LGS] && group_prio < backlog_prio) { + /* VP can take a group interrupt */ + xive2_presenter_backlog_decr(xptr, nvp_blk, nvp_idx, + group_prio, group_level); + backlog_prio = group_prio; + backlog_level = group_level; + } + } + /* - * Compute the PIPR based on the restored state. + * Compute the PIPR based on the restored state. * It will raise the External interrupt signal if needed. */ xive_tctx_pipr_update(tctx, TM_QW1_OS, backlog_prio, backlog_level);

From patchwork Tue Oct 15 21:13:21 2024
X-Patchwork-Submitter: Mike Kowal
X-Patchwork-Id: 13837101
From: Michael Kowal
To: qemu-devel@nongnu.org
Cc: qemu-ppc@nongnu.org, clg@kaod.org, fbarrat@linux.ibm.com, npiggin@gmail.com, milesg@linux.ibm.com, danielhb413@gmail.com, david@gibson.dropbear.id.au, harshpb@linux.ibm.com, thuth@redhat.com, lvivier@redhat.com, pbonzini@redhat.com
Subject: [PATCH 06/14] ppc/xive2: Process group backlog when updating the CPPR
Date: Tue, 15 Oct 2024 16:13:21 -0500
Message-Id: <20241015211329.21113-7-kowal@linux.ibm.com>
In-Reply-To: <20241015211329.21113-1-kowal@linux.ibm.com>
References: <20241015211329.21113-1-kowal@linux.ibm.com>

From: Frederic Barrat

When the hypervisor or OS pushes a new value to the CPPR, if the LSMFB value is lower than the new CPPR value, there could be a pending group interrupt in the backlog, so it needs to be scanned.
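The condition that triggers the scan can be summarized with a small, self-contained sketch (illustrative only, with hypothetical names; the actual logic lives in xive2_tctx_set_cppr() in the diff below):

    #include <stdbool.h>
    #include <stdint.h>

    /*
     * Illustrative sketch: decide whether a CPPR update requires
     * scanning the group backlog in the NVG table. Lower values are
     * more favored priorities. 'lsmfb' is the most favored pending
     * group priority seen by the thread, 'pipr' reflects the local
     * (IPB) pending interrupts.
     */
    static bool cppr_update_needs_backlog_scan(uint8_t new_cppr, uint8_t pipr,
                                               uint8_t lsmfb, bool group_enabled)
    {
        return group_enabled && lsmfb < new_cppr && lsmfb < pipr;
    }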
Signed-off-by: Frederic Barrat Signed-off-by: Michael Kowal --- include/hw/ppc/xive2.h | 4 + hw/intc/xive.c | 4 +- hw/intc/xive2.c | 173 ++++++++++++++++++++++++++++++++++++++++- 3 files changed, 177 insertions(+), 4 deletions(-) diff --git a/include/hw/ppc/xive2.h b/include/hw/ppc/xive2.h index d88db05687..e61b978f37 100644 --- a/include/hw/ppc/xive2.h +++ b/include/hw/ppc/xive2.h @@ -115,6 +115,10 @@ typedef struct Xive2EndSource { * XIVE2 Thread Interrupt Management Area (POWER10) */ +void xive2_tm_set_hv_cppr(XivePresenter *xptr, XiveTCTX *tctx, + hwaddr offset, uint64_t value, unsigned size); +void xive2_tm_set_os_cppr(XivePresenter *xptr, XiveTCTX *tctx, + hwaddr offset, uint64_t value, unsigned size); void xive2_tm_push_os_ctx(XivePresenter *xptr, XiveTCTX *tctx, hwaddr offset, uint64_t value, unsigned size); uint64_t xive2_tm_pull_os_ctx(XivePresenter *xptr, XiveTCTX *tctx, diff --git a/hw/intc/xive.c b/hw/intc/xive.c index 8ffcac4f65..2aa6e1fecc 100644 --- a/hw/intc/xive.c +++ b/hw/intc/xive.c @@ -603,7 +603,7 @@ static const XiveTmOp xive2_tm_operations[] = { * MMIOs below 2K : raw values and special operations without side * effects */ - { XIVE_TM_OS_PAGE, TM_QW1_OS + TM_CPPR, 1, xive_tm_set_os_cppr, + { XIVE_TM_OS_PAGE, TM_QW1_OS + TM_CPPR, 1, xive2_tm_set_os_cppr, NULL }, { XIVE_TM_HV_PAGE, TM_QW1_OS + TM_WORD2, 4, xive2_tm_push_os_ctx, NULL }, @@ -611,7 +611,7 @@ static const XiveTmOp xive2_tm_operations[] = { NULL }, { XIVE_TM_OS_PAGE, TM_QW1_OS + TM_LGS, 1, xive_tm_set_os_lgs, NULL }, - { XIVE_TM_HV_PAGE, TM_QW3_HV_PHYS + TM_CPPR, 1, xive_tm_set_hv_cppr, + { XIVE_TM_HV_PAGE, TM_QW3_HV_PHYS + TM_CPPR, 1, xive2_tm_set_hv_cppr, NULL }, { XIVE_TM_HV_PAGE, TM_QW3_HV_PHYS + TM_WORD2, 1, xive_tm_vt_push, NULL }, diff --git a/hw/intc/xive2.c b/hw/intc/xive2.c index 7130892482..0c53f71879 100644 --- a/hw/intc/xive2.c +++ b/hw/intc/xive2.c @@ -18,6 +18,7 @@ #include "hw/ppc/xive.h" #include "hw/ppc/xive2.h" #include "hw/ppc/xive2_regs.h" +#include "trace.h" uint32_t xive2_router_get_config(Xive2Router *xrtr) { @@ -764,6 +765,172 @@ void xive2_tm_push_os_ctx(XivePresenter *xptr, XiveTCTX *tctx, } } +static int xive2_tctx_get_nvp_indexes(XiveTCTX *tctx, uint8_t ring, + uint32_t *nvp_blk, uint32_t *nvp_idx) +{ + uint32_t w2, cam; + + w2 = xive_tctx_word2(&tctx->regs[ring]); + switch (ring) { + case TM_QW1_OS: + if (!(be32_to_cpu(w2) & TM2_QW1W2_VO)) { + return -1; + } + cam = xive_get_field32(TM2_QW1W2_OS_CAM, w2); + break; + case TM_QW2_HV_POOL: + if (!(be32_to_cpu(w2) & TM2_QW2W2_VP)) { + return -1; + } + cam = xive_get_field32(TM2_QW2W2_POOL_CAM, w2); + break; + case TM_QW3_HV_PHYS: + if (!(be32_to_cpu(w2) & TM2_QW3W2_VT)) { + return -1; + } + cam = xive2_tctx_hw_cam_line(tctx->xptr, tctx); + break; + default: + return -1; + } + *nvp_blk = xive2_nvp_blk(cam); + *nvp_idx = xive2_nvp_idx(cam); + return 0; +} + +static void xive2_tctx_set_cppr(XiveTCTX *tctx, uint8_t ring, uint8_t cppr) +{ + uint8_t *regs = &tctx->regs[ring]; + Xive2Router *xrtr = XIVE2_ROUTER(tctx->xptr); + uint8_t old_cppr, backlog_prio, first_group, group_level = 0; + uint8_t pipr_min, lsmfb_min, ring_min; + bool group_enabled; + uint32_t nvp_blk, nvp_idx; + Xive2Nvp nvp; + int rc; + + trace_xive_tctx_set_cppr(tctx->cs->cpu_index, ring, + regs[TM_IPB], regs[TM_PIPR], + cppr, regs[TM_NSR]); + + if (cppr > XIVE_PRIORITY_MAX) { + cppr = 0xff; + } + + old_cppr = regs[TM_CPPR]; + regs[TM_CPPR] = cppr; + + /* + * Recompute the PIPR based on local pending interrupts. 
It will + * be adjusted below if needed in case of pending group interrupts. + */ + pipr_min = xive_ipb_to_pipr(regs[TM_IPB]); + group_enabled = !!regs[TM_LGS]; + lsmfb_min = (group_enabled) ? regs[TM_LSMFB] : 0xff; + ring_min = ring; + + /* PHYS updates also depend on POOL values */ + if (ring == TM_QW3_HV_PHYS) { + uint8_t *pregs = &tctx->regs[TM_QW2_HV_POOL]; + + /* POOL values only matter if POOL ctx is valid */ + if (pregs[TM_WORD2] & 0x80) { + + uint8_t pool_pipr = xive_ipb_to_pipr(pregs[TM_IPB]); + uint8_t pool_lsmfb = pregs[TM_LSMFB]; + + /* + * Determine highest priority interrupt and + * remember which ring has it. + */ + if (pool_pipr < pipr_min) { + pipr_min = pool_pipr; + if (pool_pipr < lsmfb_min) { + ring_min = TM_QW2_HV_POOL; + } + } + + /* Values needed for group priority calculation */ + if (pregs[TM_LGS] && (pool_lsmfb < lsmfb_min)) { + group_enabled = true; + lsmfb_min = pool_lsmfb; + if (lsmfb_min < pipr_min) { + ring_min = TM_QW2_HV_POOL; + } + } + } + } + regs[TM_PIPR] = pipr_min; + + rc = xive2_tctx_get_nvp_indexes(tctx, ring_min, &nvp_blk, &nvp_idx); + if (rc) { + qemu_log_mask(LOG_GUEST_ERROR, "XIVE: set CPPR on invalid context\n"); + return; + } + + if (cppr < old_cppr) { + /* + * FIXME: check if there's a group interrupt being presented + * and if the new cppr prevents it. If so, then the group + * interrupt needs to be re-added to the backlog and + * re-triggered (see re-trigger END info in the NVGC + * structure) + */ + } + + if (group_enabled && + lsmfb_min < cppr && + lsmfb_min < regs[TM_PIPR]) { + /* + * Thread has seen a group interrupt with a higher priority + * than the new cppr or pending local interrupt. Check the + * backlog + */ + if (xive2_router_get_nvp(xrtr, nvp_blk, nvp_idx, &nvp)) { + qemu_log_mask(LOG_GUEST_ERROR, "XIVE: No NVP %x/%x\n", + nvp_blk, nvp_idx); + return; + } + + if (!xive2_nvp_is_valid(&nvp)) { + qemu_log_mask(LOG_GUEST_ERROR, "XIVE: invalid NVP %x/%x\n", + nvp_blk, nvp_idx); + return; + } + + first_group = xive_get_field32(NVP2_W0_PGOFIRST, nvp.w0); + if (!first_group) { + qemu_log_mask(LOG_GUEST_ERROR, "XIVE: invalid NVP %x/%x\n", + nvp_blk, nvp_idx); + return; + } + + backlog_prio = xive2_presenter_backlog_check(tctx->xptr, + nvp_blk, nvp_idx, + first_group, &group_level); + tctx->regs[ring_min + TM_LSMFB] = backlog_prio; + if (backlog_prio != 0xFF) { + xive2_presenter_backlog_decr(tctx->xptr, nvp_blk, nvp_idx, + backlog_prio, group_level); + regs[TM_PIPR] = backlog_prio; + } + } + /* CPPR has changed, check if we need to raise a pending exception */ + xive_tctx_notify(tctx, ring_min, group_level); +} + +void xive2_tm_set_hv_cppr(XivePresenter *xptr, XiveTCTX *tctx, + hwaddr offset, uint64_t value, unsigned size) +{ + xive2_tctx_set_cppr(tctx, TM_QW3_HV_PHYS, value & 0xff); +} + +void xive2_tm_set_os_cppr(XivePresenter *xptr, XiveTCTX *tctx, + hwaddr offset, uint64_t value, unsigned size) +{ + xive2_tctx_set_cppr(tctx, TM_QW1_OS, value & 0xff); +} + static void xive2_tctx_set_target(XiveTCTX *tctx, uint8_t ring, uint8_t target) { uint8_t *regs = &tctx->regs[ring]; @@ -934,7 +1101,9 @@ int xive2_presenter_tctx_match(XivePresenter *xptr, XiveTCTX *tctx, bool xive2_tm_irq_precluded(XiveTCTX *tctx, int ring, uint8_t priority) { - uint8_t *regs = &tctx->regs[ring]; + /* HV_POOL ring uses HV_PHYS NSR, CPPR and PIPR registers */ + uint8_t alt_ring = (ring == TM_QW2_HV_POOL) ? 
TM_QW3_HV_PHYS : ring; + uint8_t *alt_regs = &tctx->regs[alt_ring]; /* * The xive2_presenter_tctx_match() above tells if there's a match @@ -942,7 +1111,7 @@ bool xive2_tm_irq_precluded(XiveTCTX *tctx, int ring, uint8_t priority) * priority to know if the thread can take the interrupt now or if * it is precluded. */ - if (priority < regs[TM_CPPR]) { + if (priority < alt_regs[TM_CPPR]) { return false; } return true.

From patchwork Tue Oct 15 21:13:22 2024
X-Patchwork-Submitter: Mike Kowal
X-Patchwork-Id: 13837104
From: Michael Kowal
To: qemu-devel@nongnu.org
Cc: qemu-ppc@nongnu.org, clg@kaod.org, fbarrat@linux.ibm.com, npiggin@gmail.com, milesg@linux.ibm.com, danielhb413@gmail.com, david@gibson.dropbear.id.au, harshpb@linux.ibm.com, thuth@redhat.com, lvivier@redhat.com, pbonzini@redhat.com
Subject: [PATCH 07/14] qtest/xive: Add group-interrupt test
Date: Tue, 15 Oct 2024 16:13:22 -0500
Message-Id: <20241015211329.21113-8-kowal@linux.ibm.com>
In-Reply-To: <20241015211329.21113-1-kowal@linux.ibm.com>
References: <20241015211329.21113-1-kowal@linux.ibm.com>

From: Frederic Barrat

Add XIVE2 tests for group interrupts and group interrupts that have been backlogged.

Signed-off-by: Frederic Barrat Signed-off-by: Michael Kowal --- tests/qtest/pnv-xive2-test.c | 160 +++++++++++++++++++++++++++++++++++ 1 file changed, 160 insertions(+) diff --git a/tests/qtest/pnv-xive2-test.c b/tests/qtest/pnv-xive2-test.c index 4ec1cc1b0f..1705127da1 100644 --- a/tests/qtest/pnv-xive2-test.c +++ b/tests/qtest/pnv-xive2-test.c @@ -2,6 +2,8 @@ * QTest testcase for PowerNV 10 interrupt controller (xive2) * - Test irq to hardware thread * - Test 'Pull Thread Context to Odd Thread Reporting Line' + * - Test irq to hardware group + * - Test irq to hardware group going through backlog * * Copyright (c) 2024, IBM Corporation.
* @@ -316,6 +318,158 @@ static void test_pull_thread_ctx_to_odd_thread_cl(QTestState *qts) word2 = get_tima32(qts, target_pir, TM_QW3_HV_PHYS + TM_WORD2); g_assert_cmphex(xive_get_field32(TM_QW3W2_VT, word2), ==, 0); } + +static void test_hw_group_irq(QTestState *qts) +{ + uint32_t irq = 100; + uint32_t irq_data = 0xdeadbeef; + uint32_t end_index = 23; + uint32_t chosen_one; + uint32_t target_nvp = 0x81; /* group size = 4 */ + uint8_t priority = 6; + uint32_t reg32; + uint16_t reg16; + uint8_t pq, nsr, cppr; + + printf("# ============================================================\n"); + printf("# Testing irq %d to hardware group of size 4\n", irq); + + /* irq config */ + set_eas(qts, irq, end_index, irq_data); + set_end(qts, end_index, target_nvp, priority, true /* group */); + + /* enable and trigger irq */ + get_esb(qts, irq, XIVE_EOI_PAGE, XIVE_ESB_SET_PQ_00); + set_esb(qts, irq, XIVE_TRIGGER_PAGE, 0, 0); + + /* check irq is raised on cpu */ + pq = get_esb(qts, irq, XIVE_EOI_PAGE, XIVE_ESB_GET); + g_assert_cmpuint(pq, ==, XIVE_ESB_PENDING); + + /* find the targeted vCPU */ + for (chosen_one = 0; chosen_one < SMT; chosen_one++) { + reg32 = get_tima32(qts, chosen_one, TM_QW3_HV_PHYS + TM_WORD0); + nsr = reg32 >> 24; + if (nsr == 0x82) { + break; + } + } + g_assert_cmphex(chosen_one, <, SMT); + cppr = (reg32 >> 16) & 0xFF; + g_assert_cmphex(nsr, ==, 0x82); + g_assert_cmphex(cppr, ==, 0xFF); + + /* ack the irq */ + reg16 = get_tima16(qts, chosen_one, TM_SPC_ACK_HV_REG); + nsr = reg16 >> 8; + cppr = reg16 & 0xFF; + g_assert_cmphex(nsr, ==, 0x82); + g_assert_cmphex(cppr, ==, priority); + + /* check irq data is what was configured */ + reg32 = qtest_readl(qts, xive_get_queue_addr(end_index)); + g_assert_cmphex((reg32 & 0x7fffffff), ==, (irq_data & 0x7fffffff)); + + /* End Of Interrupt */ + set_esb(qts, irq, XIVE_EOI_PAGE, XIVE_ESB_STORE_EOI, 0); + pq = get_esb(qts, irq, XIVE_EOI_PAGE, XIVE_ESB_GET); + g_assert_cmpuint(pq, ==, XIVE_ESB_RESET); + + /* reset CPPR */ + set_tima8(qts, chosen_one, TM_QW3_HV_PHYS + TM_CPPR, 0xFF); + reg32 = get_tima32(qts, chosen_one, TM_QW3_HV_PHYS + TM_WORD0); + nsr = reg32 >> 24; + cppr = (reg32 >> 16) & 0xFF; + g_assert_cmphex(nsr, ==, 0x00); + g_assert_cmphex(cppr, ==, 0xFF); +} + +static void test_hw_group_irq_backlog(QTestState *qts) +{ + uint32_t irq = 31; + uint32_t irq_data = 0x01234567; + uint32_t end_index = 129; + uint32_t target_nvp = 0x81; /* group size = 4 */ + uint32_t chosen_one = 3; + uint8_t blocking_priority, priority = 3; + uint32_t reg32; + uint16_t reg16; + uint8_t pq, nsr, cppr, lsmfb, i; + + printf("# ============================================================\n"); + printf("# Testing irq %d to hardware group of size 4 going through " \ + "backlog\n", + irq); + + /* + * set current priority of all threads in the group to something + * higher than what we're about to trigger + */ + blocking_priority = priority - 1; + for (i = 0; i < SMT; i++) { + set_tima8(qts, i, TM_QW3_HV_PHYS + TM_CPPR, blocking_priority); + } + + /* irq config */ + set_eas(qts, irq, end_index, irq_data); + set_end(qts, end_index, target_nvp, priority, true /* group */); + + /* enable and trigger irq */ + get_esb(qts, irq, XIVE_EOI_PAGE, XIVE_ESB_SET_PQ_00); + set_esb(qts, irq, XIVE_TRIGGER_PAGE, 0, 0); + + /* check irq is raised on cpu */ + pq = get_esb(qts, irq, XIVE_EOI_PAGE, XIVE_ESB_GET); + g_assert_cmpuint(pq, ==, XIVE_ESB_PENDING); + + /* check no interrupt is pending on the 2 possible targets */ + for (i = 0; i < SMT; i++) { + reg32 = get_tima32(qts, i, 
TM_QW3_HV_PHYS + TM_WORD0); + nsr = reg32 >> 24; + cppr = (reg32 >> 16) & 0xFF; + lsmfb = reg32 & 0xFF; + g_assert_cmphex(nsr, ==, 0x0); + g_assert_cmphex(cppr, ==, blocking_priority); + g_assert_cmphex(lsmfb, ==, priority); + } + + /* lower priority of one thread */ + set_tima8(qts, chosen_one, TM_QW3_HV_PHYS + TM_CPPR, priority + 1); + + /* check backlogged interrupt is presented */ + reg32 = get_tima32(qts, chosen_one, TM_QW3_HV_PHYS + TM_WORD0); + nsr = reg32 >> 24; + cppr = (reg32 >> 16) & 0xFF; + g_assert_cmphex(nsr, ==, 0x82); + g_assert_cmphex(cppr, ==, priority + 1); + + /* ack the irq */ + reg16 = get_tima16(qts, chosen_one, TM_SPC_ACK_HV_REG); + nsr = reg16 >> 8; + cppr = reg16 & 0xFF; + g_assert_cmphex(nsr, ==, 0x82); + g_assert_cmphex(cppr, ==, priority); + + /* check irq data is what was configured */ + reg32 = qtest_readl(qts, xive_get_queue_addr(end_index)); + g_assert_cmphex((reg32 & 0x7fffffff), ==, (irq_data & 0x7fffffff)); + + /* End Of Interrupt */ + set_esb(qts, irq, XIVE_EOI_PAGE, XIVE_ESB_STORE_EOI, 0); + pq = get_esb(qts, irq, XIVE_EOI_PAGE, XIVE_ESB_GET); + g_assert_cmpuint(pq, ==, XIVE_ESB_RESET); + + /* reset CPPR */ + set_tima8(qts, chosen_one, TM_QW3_HV_PHYS + TM_CPPR, 0xFF); + reg32 = get_tima32(qts, chosen_one, TM_QW3_HV_PHYS + TM_WORD0); + nsr = reg32 >> 24; + cppr = (reg32 >> 16) & 0xFF; + lsmfb = reg32 & 0xFF; + g_assert_cmphex(nsr, ==, 0x00); + g_assert_cmphex(cppr, ==, 0xFF); + g_assert_cmphex(lsmfb, ==, 0xFF); +} + static void test_xive(void) { QTestState *qts; @@ -331,6 +485,12 @@ static void test_xive(void) /* omit reset_state here and use settings from test_hw_irq */ test_pull_thread_ctx_to_odd_thread_cl(qts); + reset_state(qts); + test_hw_group_irq(qts); + + reset_state(qts); + test_hw_group_irq_backlog(qts); + reset_state(qts); test_flush_sync_inject(qts); From patchwork Tue Oct 15 21:13:23 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Mike Kowal X-Patchwork-Id: 13837108 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from lists.gnu.org (lists.gnu.org [209.51.188.17]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id DCF53D1D876 for ; Tue, 15 Oct 2024 21:16:52 +0000 (UTC) Received: from localhost ([::1] helo=lists1p.gnu.org) by lists.gnu.org with esmtp (Exim 4.90_1) (envelope-from ) id 1t0os2-0004f4-6H; Tue, 15 Oct 2024 17:14:22 -0400 Received: from eggs.gnu.org ([2001:470:142:3::10]) by lists.gnu.org with esmtps (TLS1.2:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.90_1) (envelope-from ) id 1t0ory-0004UQ-0a; Tue, 15 Oct 2024 17:14:18 -0400 Received: from mx0a-001b2d01.pphosted.com ([148.163.156.1]) by eggs.gnu.org with esmtps (TLS1.2:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.90_1) (envelope-from ) id 1t0orv-0000QW-2C; Tue, 15 Oct 2024 17:14:17 -0400 Received: from pps.filterd (m0353729.ppops.net [127.0.0.1]) by mx0a-001b2d01.pphosted.com (8.18.1.2/8.18.1.2) with ESMTP id 49FKJaSP022495; Tue, 15 Oct 2024 21:14:02 GMT DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=ibm.com; h=cc :content-transfer-encoding:date:from:in-reply-to:message-id :mime-version:references:subject:to; s=pp1; bh=sQyX0uLyNbNulNrI3 lQIe1EvVwwnkJ3bxq0p7BN2trQ=; b=Q3Jz/++FUDt1kYI4yC0/zQ9ZdDVhFOHb1 HCtQZEezFas0t2KUQSeMO9UIYu3rIvcuDbN5aUv0t1x7JgFKL+tAC6JOSY9dqRfY QtOrEkO4bwJm+VX0A3ItQLTGdIVEQPuoGOUObW2jPoMISbTIOR4lOoH0cfvAuh8y 
From: Michael Kowal
To: qemu-devel@nongnu.org
Cc: qemu-ppc@nongnu.org, clg@kaod.org, fbarrat@linux.ibm.com, npiggin@gmail.com, milesg@linux.ibm.com, danielhb413@gmail.com, david@gibson.dropbear.id.au, harshpb@linux.ibm.com, thuth@redhat.com, lvivier@redhat.com, pbonzini@redhat.com
Subject: [PATCH 08/14] Add support for MMIO operations on the NVPG/NVC BAR
Date: Tue, 15 Oct 2024 16:13:23 -0500
Message-Id: <20241015211329.21113-9-kowal@linux.ibm.com>
In-Reply-To: <20241015211329.21113-1-kowal@linux.ibm.com>
References: <20241015211329.21113-1-kowal@linux.ibm.com>
From: Frederic Barrat

Add support for the NVPG and NVC BARs. Access to the BAR pages will cause backlog counter operations to either increment or decrement the counter. Also added qtests for the same.

Signed-off-by: Frederic Barrat Signed-off-by: Michael Kowal --- include/hw/ppc/xive2.h | 9 ++ include/hw/ppc/xive2_regs.h | 3 + tests/qtest/pnv-xive2-common.h | 1 + hw/intc/pnv_xive2.c | 80 +++++++++++++--- hw/intc/xive2.c | 87 +++++++++++++++++ tests/qtest/pnv-xive2-nvpg_bar.c | 154 +++++++++++++++++++++++++++++++ tests/qtest/pnv-xive2-test.c | 3 + hw/intc/trace-events | 4 + tests/qtest/meson.build | 3 +- 9 files changed, 329 insertions(+), 15 deletions(-) create mode 100644 tests/qtest/pnv-xive2-nvpg_bar.c diff --git a/include/hw/ppc/xive2.h b/include/hw/ppc/xive2.h index e61b978f37..049028d2c2 100644 --- a/include/hw/ppc/xive2.h +++ b/include/hw/ppc/xive2.h @@ -92,6 +92,15 @@ int xive2_presenter_tctx_match(XivePresenter *xptr, XiveTCTX *tctx, uint8_t nvt_blk, uint32_t nvt_idx, bool cam_ignore, uint32_t logic_serv); +uint64_t xive2_presenter_nvp_backlog_op(XivePresenter *xptr, + uint8_t blk, uint32_t idx, + uint16_t offset); + +uint64_t xive2_presenter_nvgc_backlog_op(XivePresenter *xptr, + bool crowd, + uint8_t blk, uint32_t idx, + uint16_t offset, uint16_t val); + /* * XIVE2 END ESBs (POWER10) */ diff --git a/include/hw/ppc/xive2_regs.h b/include/hw/ppc/xive2_regs.h index 30868e8e09..66a419441c 100644 --- a/include/hw/ppc/xive2_regs.h +++ b/include/hw/ppc/xive2_regs.h @@ -234,4 +234,7 @@ typedef struct Xive2Nvgc { void xive2_nvgc_pic_print_info(Xive2Nvgc *nvgc, uint32_t nvgc_idx, GString *buf); +#define NVx_BACKLOG_OP PPC_BITMASK(52, 53) +#define NVx_BACKLOG_PRIO PPC_BITMASK(57, 59) + #endif /* PPC_XIVE2_REGS_H */ diff --git a/tests/qtest/pnv-xive2-common.h b/tests/qtest/pnv-xive2-common.h index 2135b04d5b..910f0f512e 100644 --- a/tests/qtest/pnv-xive2-common.h +++ b/tests/qtest/pnv-xive2-common.h @@ -108,5 +108,6 @@ extern void set_end(QTestState *qts, uint32_t index, uint32_t nvp_index, void test_flush_sync_inject(QTestState *qts); +void test_nvpg_bar(QTestState *qts); #endif /* TEST_PNV_XIVE2_COMMON_H */ diff --git a/hw/intc/pnv_xive2.c b/hw/intc/pnv_xive2.c index 0482193fd7..9736b623ba 100644 --- a/hw/intc/pnv_xive2.c +++ b/hw/intc/pnv_xive2.c @@ -2203,21 +2203,40 @@ static const MemoryRegionOps pnv_xive2_tm_ops = { }, }; -static uint64_t pnv_xive2_nvc_read(void *opaque, hwaddr offset, +static uint64_t pnv_xive2_nvc_read(void *opaque, hwaddr addr, unsigned size) { PnvXive2 *xive = PNV_XIVE2(opaque); + XivePresenter *xptr = XIVE_PRESENTER(xive); + uint32_t page = addr >> xive->nvpg_shift; + uint16_t op = addr & 0xFFF; + uint8_t blk = pnv_xive2_block_id(xive); - xive2_error(xive, "NVC: invalid read @%"HWADDR_PRIx, offset); - return -1; + if (size != 2) { + qemu_log_mask(LOG_GUEST_ERROR, "XIVE: invalid nvc load size %d\n", + size); + return -1; + } + + return xive2_presenter_nvgc_backlog_op(xptr, true, blk, page, op, 1); } -static void
pnv_xive2_nvc_write(void *opaque, hwaddr offset, +static void pnv_xive2_nvc_write(void *opaque, hwaddr addr, uint64_t val, unsigned size) { PnvXive2 *xive = PNV_XIVE2(opaque); + XivePresenter *xptr = XIVE_PRESENTER(xive); + uint32_t page = addr >> xive->nvc_shift; + uint16_t op = addr & 0xFFF; + uint8_t blk = pnv_xive2_block_id(xive); - xive2_error(xive, "NVC: invalid write @%"HWADDR_PRIx, offset); + if (size != 1) { + qemu_log_mask(LOG_GUEST_ERROR, "XIVE: invalid nvc write size %d\n", + size); + return; + } + + (void)xive2_presenter_nvgc_backlog_op(xptr, true, blk, page, op, val); } static const MemoryRegionOps pnv_xive2_nvc_ops = { @@ -2225,30 +2244,63 @@ static const MemoryRegionOps pnv_xive2_nvc_ops = { .write = pnv_xive2_nvc_write, .endianness = DEVICE_BIG_ENDIAN, .valid = { - .min_access_size = 8, + .min_access_size = 1, .max_access_size = 8, }, .impl = { - .min_access_size = 8, + .min_access_size = 1, .max_access_size = 8, }, }; -static uint64_t pnv_xive2_nvpg_read(void *opaque, hwaddr offset, +static uint64_t pnv_xive2_nvpg_read(void *opaque, hwaddr addr, unsigned size) { PnvXive2 *xive = PNV_XIVE2(opaque); + XivePresenter *xptr = XIVE_PRESENTER(xive); + uint32_t page = addr >> xive->nvpg_shift; + uint16_t op = addr & 0xFFF; + uint32_t index = page >> 1; + uint8_t blk = pnv_xive2_block_id(xive); - xive2_error(xive, "NVPG: invalid read @%"HWADDR_PRIx, offset); - return -1; + if (size != 2) { + qemu_log_mask(LOG_GUEST_ERROR, "XIVE: invalid nvpg load size %d\n", + size); + return -1; + } + + if (page % 2) { + /* odd page - NVG */ + return xive2_presenter_nvgc_backlog_op(xptr, false, blk, index, op, 1); + } else { + /* even page - NVP */ + return xive2_presenter_nvp_backlog_op(xptr, blk, index, op); + } } -static void pnv_xive2_nvpg_write(void *opaque, hwaddr offset, +static void pnv_xive2_nvpg_write(void *opaque, hwaddr addr, uint64_t val, unsigned size) { PnvXive2 *xive = PNV_XIVE2(opaque); + XivePresenter *xptr = XIVE_PRESENTER(xive); + uint32_t page = addr >> xive->nvpg_shift; + uint16_t op = addr & 0xFFF; + uint32_t index = page >> 1; + uint8_t blk = pnv_xive2_block_id(xive); - xive2_error(xive, "NVPG: invalid write @%"HWADDR_PRIx, offset); + if (size != 1) { + qemu_log_mask(LOG_GUEST_ERROR, "XIVE: invalid nvpg write size %d\n", + size); + return; + } + + if (page % 2) { + /* odd page - NVG */ + (void)xive2_presenter_nvgc_backlog_op(xptr, false, blk, index, op, val); + } else { + /* even page - NVP */ + (void)xive2_presenter_nvp_backlog_op(xptr, blk, index, op); + } } static const MemoryRegionOps pnv_xive2_nvpg_ops = { @@ -2256,11 +2308,11 @@ static const MemoryRegionOps pnv_xive2_nvpg_ops = { .write = pnv_xive2_nvpg_write, .endianness = DEVICE_BIG_ENDIAN, .valid = { - .min_access_size = 8, + .min_access_size = 1, .max_access_size = 8, }, .impl = { - .min_access_size = 8, + .min_access_size = 1, .max_access_size = 8, }, }; diff --git a/hw/intc/xive2.c b/hw/intc/xive2.c index 0c53f71879..b6f279e6a3 100644 --- a/hw/intc/xive2.c +++ b/hw/intc/xive2.c @@ -88,6 +88,93 @@ static void xive2_nvgc_set_backlog(Xive2Nvgc *nvgc, uint8_t priority, } } +uint64_t xive2_presenter_nvgc_backlog_op(XivePresenter *xptr, + bool crowd, + uint8_t blk, uint32_t idx, + uint16_t offset, uint16_t val) +{ + Xive2Router *xrtr = XIVE2_ROUTER(xptr); + uint8_t priority = GETFIELD(NVx_BACKLOG_PRIO, offset); + uint8_t op = GETFIELD(NVx_BACKLOG_OP, offset); + Xive2Nvgc nvgc; + uint32_t count, old_count; + + if (xive2_router_get_nvgc(xrtr, crowd, blk, idx, &nvgc)) { + qemu_log_mask(LOG_GUEST_ERROR, "XIVE: No %s 
%x/%x\n", + crowd ? "NVC" : "NVG", blk, idx); + return -1; + } + if (!xive2_nvgc_is_valid(&nvgc)) { + qemu_log_mask(LOG_GUEST_ERROR, "XIVE: Invalid NVG %x/%x\n", blk, idx); + return -1; + } + + old_count = xive2_nvgc_get_backlog(&nvgc, priority); + count = old_count; + /* + * op: + * 0b00 => increment + * 0b01 => decrement + * 0b1- => read + */ + if (op == 0b00 || op == 0b01) { + if (op == 0b00) { + count += val; + } else { + if (count > val) { + count -= val; + } else { + count = 0; + } + } + xive2_nvgc_set_backlog(&nvgc, priority, count); + xive2_router_write_nvgc(xrtr, crowd, blk, idx, &nvgc); + } + trace_xive_nvgc_backlog_op(crowd, blk, idx, op, priority, old_count); + return old_count; +} + +uint64_t xive2_presenter_nvp_backlog_op(XivePresenter *xptr, + uint8_t blk, uint32_t idx, + uint16_t offset) +{ + Xive2Router *xrtr = XIVE2_ROUTER(xptr); + uint8_t priority = GETFIELD(NVx_BACKLOG_PRIO, offset); + uint8_t op = GETFIELD(NVx_BACKLOG_OP, offset); + Xive2Nvp nvp; + uint8_t ipb, old_ipb, rc; + + if (xive2_router_get_nvp(xrtr, blk, idx, &nvp)) { + qemu_log_mask(LOG_GUEST_ERROR, "XIVE: No NVP %x/%x\n", blk, idx); + return -1; + } + if (!xive2_nvp_is_valid(&nvp)) { + qemu_log_mask(LOG_GUEST_ERROR, "XIVE: Invalid NVP %x/%x\n", blk, idx); + return -1; + } + + old_ipb = xive_get_field32(NVP2_W2_IPB, nvp.w2); + ipb = old_ipb; + /* + * op: + * 0b00 => set priority bit + * 0b01 => reset priority bit + * 0b1- => read + */ + if (op == 0b00 || op == 0b01) { + if (op == 0b00) { + ipb |= xive_priority_to_ipb(priority); + } else { + ipb &= ~xive_priority_to_ipb(priority); + } + nvp.w2 = xive_set_field32(NVP2_W2_IPB, nvp.w2, ipb); + xive2_router_write_nvp(xrtr, blk, idx, &nvp, 2); + } + rc = !!(old_ipb & xive_priority_to_ipb(priority)); + trace_xive_nvp_backlog_op(blk, idx, op, priority, rc); + return rc; +} + void xive2_eas_pic_print_info(Xive2Eas *eas, uint32_t lisn, GString *buf) { if (!xive2_eas_is_valid(eas)) { diff --git a/tests/qtest/pnv-xive2-nvpg_bar.c b/tests/qtest/pnv-xive2-nvpg_bar.c new file mode 100644 index 0000000000..10d4962d1e --- /dev/null +++ b/tests/qtest/pnv-xive2-nvpg_bar.c @@ -0,0 +1,154 @@ +/* + * QTest testcase for PowerNV 10 interrupt controller (xive2) + * - Test NVPG BAR MMIO operations + * + * Copyright (c) 2024, IBM Corporation. + * + * This work is licensed under the terms of the GNU GPL, version 2 or + * later. See the COPYING file in the top-level directory. 
+ */ +#include "qemu/osdep.h" +#include "libqtest.h" + +#include "pnv-xive2-common.h" + +#define NVPG_BACKLOG_OP_SHIFT 10 +#define NVPG_BACKLOG_PRIO_SHIFT 4 + +#define XIVE_PRIORITY_MAX 7 + +enum NVx { + NVP, + NVG, + NVC +}; + +typedef enum { + INCR_STORE = 0b100, + INCR_LOAD = 0b000, + DECR_STORE = 0b101, + DECR_LOAD = 0b001, + READ_x = 0b010, + READ_y = 0b011, +} backlog_op; + +static uint32_t nvpg_backlog_op(QTestState *qts, backlog_op op, + enum NVx type, uint64_t index, + uint8_t priority, uint8_t delta) +{ + uint64_t addr, offset; + uint32_t count = 0; + + switch (type) { + case NVP: + addr = XIVE_NVPG_ADDR + (index << (XIVE_PAGE_SHIFT + 1)); + break; + case NVG: + addr = XIVE_NVPG_ADDR + (index << (XIVE_PAGE_SHIFT + 1)) + + (1 << XIVE_PAGE_SHIFT); + break; + case NVC: + addr = XIVE_NVC_ADDR + (index << XIVE_PAGE_SHIFT); + break; + default: + g_assert_not_reached(); + } + + offset = (op & 0b11) << NVPG_BACKLOG_OP_SHIFT; + offset |= priority << NVPG_BACKLOG_PRIO_SHIFT; + if (op >> 2) { + qtest_writeb(qts, addr + offset, delta); + } else { + count = qtest_readw(qts, addr + offset); + } + return count; +} + +void test_nvpg_bar(QTestState *qts) +{ + uint32_t nvp_target = 0x11; + uint32_t group_target = 0x17; /* size 16 */ + uint32_t vp_irq = 33, group_irq = 47; + uint32_t vp_end = 3, group_end = 97; + uint32_t vp_irq_data = 0x33333333; + uint32_t group_irq_data = 0x66666666; + uint8_t vp_priority = 0, group_priority = 5; + uint32_t vp_count[XIVE_PRIORITY_MAX + 1] = { 0 }; + uint32_t group_count[XIVE_PRIORITY_MAX + 1] = { 0 }; + uint32_t count, delta; + uint8_t i; + + printf("# ============================================================\n"); + printf("# Testing NVPG BAR operations\n"); + + set_nvg(qts, group_target, 0); + set_nvp(qts, nvp_target, 0x04); + set_nvp(qts, group_target, 0x04); + + /* + * Setup: trigger a VP-specific interrupt and a group interrupt + * so that the backlog counters are initialized to something else + * than 0 for at least one priority level + */ + set_eas(qts, vp_irq, vp_end, vp_irq_data); + set_end(qts, vp_end, nvp_target, vp_priority, false /* group */); + + set_eas(qts, group_irq, group_end, group_irq_data); + set_end(qts, group_end, group_target, group_priority, true /* group */); + + get_esb(qts, vp_irq, XIVE_EOI_PAGE, XIVE_ESB_SET_PQ_00); + set_esb(qts, vp_irq, XIVE_TRIGGER_PAGE, 0, 0); + vp_count[vp_priority]++; + + get_esb(qts, group_irq, XIVE_EOI_PAGE, XIVE_ESB_SET_PQ_00); + set_esb(qts, group_irq, XIVE_TRIGGER_PAGE, 0, 0); + group_count[group_priority]++; + + /* check the initial counters */ + for (i = 0; i <= XIVE_PRIORITY_MAX; i++) { + count = nvpg_backlog_op(qts, READ_x, NVP, nvp_target, i, 0); + g_assert_cmpuint(count, ==, vp_count[i]); + + count = nvpg_backlog_op(qts, READ_y, NVG, group_target, i, 0); + g_assert_cmpuint(count, ==, group_count[i]); + } + + /* do a few ops on the VP. 
Counter can only be 0 and 1 */ + vp_priority = 2; + delta = 7; + nvpg_backlog_op(qts, INCR_STORE, NVP, nvp_target, vp_priority, delta); + vp_count[vp_priority] = 1; + count = nvpg_backlog_op(qts, INCR_LOAD, NVP, nvp_target, vp_priority, 0); + g_assert_cmpuint(count, ==, vp_count[vp_priority]); + count = nvpg_backlog_op(qts, READ_y, NVP, nvp_target, vp_priority, 0); + g_assert_cmpuint(count, ==, vp_count[vp_priority]); + + count = nvpg_backlog_op(qts, DECR_LOAD, NVP, nvp_target, vp_priority, 0); + g_assert_cmpuint(count, ==, vp_count[vp_priority]); + vp_count[vp_priority] = 0; + nvpg_backlog_op(qts, DECR_STORE, NVP, nvp_target, vp_priority, delta); + count = nvpg_backlog_op(qts, READ_x, NVP, nvp_target, vp_priority, 0); + g_assert_cmpuint(count, ==, vp_count[vp_priority]); + + /* do a few ops on the group */ + group_priority = 2; + delta = 9; + /* can't go negative */ + nvpg_backlog_op(qts, DECR_STORE, NVG, group_target, group_priority, delta); + count = nvpg_backlog_op(qts, READ_y, NVG, group_target, group_priority, 0); + g_assert_cmpuint(count, ==, 0); + nvpg_backlog_op(qts, INCR_STORE, NVG, group_target, group_priority, delta); + group_count[group_priority] += delta; + count = nvpg_backlog_op(qts, INCR_LOAD, NVG, group_target, + group_priority, delta); + g_assert_cmpuint(count, ==, group_count[group_priority]); + group_count[group_priority]++; + + count = nvpg_backlog_op(qts, DECR_LOAD, NVG, group_target, + group_priority, delta); + g_assert_cmpuint(count, ==, group_count[group_priority]); + group_count[group_priority]--; + count = nvpg_backlog_op(qts, READ_x, NVG, group_target, group_priority, 0); + g_assert_cmpuint(count, ==, group_count[group_priority]); +} + diff --git a/tests/qtest/pnv-xive2-test.c b/tests/qtest/pnv-xive2-test.c index 1705127da1..a6008bc053 100644 --- a/tests/qtest/pnv-xive2-test.c +++ b/tests/qtest/pnv-xive2-test.c @@ -494,6 +494,9 @@ static void test_xive(void) reset_state(qts); test_flush_sync_inject(qts); + reset_state(qts); + test_nvpg_bar(qts); + qtest_quit(qts); } diff --git a/hw/intc/trace-events b/hw/intc/trace-events index 7435728c51..7f362c38b0 100644 --- a/hw/intc/trace-events +++ b/hw/intc/trace-events @@ -285,6 +285,10 @@ xive_tctx_tm_read(uint32_t index, uint64_t offset, unsigned int size, uint64_t v xive_presenter_notify(uint8_t nvt_blk, uint32_t nvt_idx, uint8_t ring, uint8_t group_level) "found NVT 0x%x/0x%x ring=0x%x group_level=%d" xive_end_source_read(uint8_t end_blk, uint32_t end_idx, uint64_t addr) "END 0x%x/0x%x @0x%"PRIx64 +# xive2.c +xive_nvp_backlog_op(uint8_t blk, uint32_t idx, uint8_t op, uint8_t priority, uint8_t rc) "NVP 0x%x/0x%x operation=%d priority=%d rc=%d" +xive_nvgc_backlog_op(bool c, uint8_t blk, uint32_t idx, uint8_t op, uint8_t priority, uint32_t rc) "NVGC crowd=%d 0x%x/0x%x operation=%d priority=%d rc=%d" + # pnv_xive.c pnv_xive_ic_hw_trigger(uint64_t addr, uint64_t val) "@0x%"PRIx64" val=0x%"PRIx64 diff --git a/tests/qtest/meson.build b/tests/qtest/meson.build index d2af58800d..9ef9819450 100644 --- a/tests/qtest/meson.build +++ b/tests/qtest/meson.build @@ -337,7 +337,8 @@ qtests = { 'ivshmem-test': [rt, '../../contrib/ivshmem-server/ivshmem-server.c'], 'migration-test': migration_files, 'pxe-test': files('boot-sector.c'), - 'pnv-xive2-test': files('pnv-xive2-common.c', 'pnv-xive2-flush-sync.c'), + 'pnv-xive2-test': files('pnv-xive2-common.c', 'pnv-xive2-flush-sync.c', + 'pnv-xive2-nvpg_bar.c'), 'qos-test': [chardev, io, qos_test_ss.apply({}).sources()], 'tpm-crb-swtpm-test': [io, tpmemu_files], 'tpm-crb-test': [io, 
tpmemu_files],
From patchwork Tue Oct 15 21:13:24 2024
From: Michael Kowal
To: qemu-devel@nongnu.org
Cc: qemu-ppc@nongnu.org, clg@kaod.org, fbarrat@linux.ibm.com, npiggin@gmail.com, milesg@linux.ibm.com, danielhb413@gmail.com, david@gibson.dropbear.id.au, harshpb@linux.ibm.com, thuth@redhat.com, lvivier@redhat.com, pbonzini@redhat.com
Subject: [PATCH 09/14] ppc/xive2: Support crowd-matching when looking for target
Date: Tue, 15 Oct 2024 16:13:24 -0500
Message-Id: <20241015211329.21113-10-kowal@linux.ibm.com>
In-Reply-To: <20241015211329.21113-1-kowal@linux.ibm.com>
References: <20241015211329.21113-1-kowal@linux.ibm.com>

From: Frederic Barrat

If an END is defined with the 'crowd' bit set, then the target can be running on different blocks. It means that some bits of the VP block number are masked when looking for a match. It is similar to groups, but the masking is done on the block number instead of the VP index.

Most of the changes are due to passing the extra 'crowd' argument all the way down to the function checking for matches.
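To illustrate the block masking described above, here is a small standalone sketch (an editorial example with invented helper names, not code from this patch; the real logic is added to xive2_presenter_tctx_match() in the diff below). A crowd masks the low bits of the block number the same way a group masks the low bits of the VP index, with the size encoded by the number of trailing 1s of the identifier:

#include <stdint.h>
#include <stdio.h>

/* Size encoded by the trailing 1s of the id, as xive_get_vpgroup_size() does */
static uint32_t vpgroup_size(uint32_t id)
{
    return 1u << (__builtin_ctz(~id) + 1);
}

/* Crowd match: ignore the low block bits covered by the crowd size */
static int crowd_block_match(uint8_t cam_blk, uint8_t tctx_blk, int crowd)
{
    uint8_t mask = 0b1111;                        /* 4-bit block number */

    if (crowd) {
        mask = ~(vpgroup_size(cam_blk) - 1) & 0b1111;
    }
    return (cam_blk & mask) == (tctx_blk & mask);
}

int main(void)
{
    /* Crowd id 0b0001 encodes a size of 4, i.e. blocks 0 to 3 */
    printf("%d\n", crowd_block_match(0b0001, 0b0011, 1));  /* 1: inside the crowd */
    printf("%d\n", crowd_block_match(0b0001, 0b0100, 1));  /* 0: outside the crowd */
    return 0;
}

Built with gcc, this prints 1 then 0: block 0b0011 falls inside the 4-block crowd anchored by id 0b0001, while block 0b0100 does not.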
Signed-off-by: Frederic Barrat Signed-off-by: Michael Kowal --- include/hw/ppc/xive.h | 10 +++--- include/hw/ppc/xive2.h | 3 +- hw/intc/pnv_xive.c | 5 +-- hw/intc/pnv_xive2.c | 12 +++---- hw/intc/spapr_xive.c | 3 +- hw/intc/xive.c | 21 ++++++++---- hw/intc/xive2.c | 78 +++++++++++++++++++++++++++++++++--------- hw/ppc/pnv.c | 15 ++++---- hw/ppc/spapr.c | 4 +-- 9 files changed, 105 insertions(+), 46 deletions(-) diff --git a/include/hw/ppc/xive.h b/include/hw/ppc/xive.h index 7660578b20..c9070792ec 100644 --- a/include/hw/ppc/xive.h +++ b/include/hw/ppc/xive.h @@ -440,13 +440,13 @@ struct XivePresenterClass { InterfaceClass parent; int (*match_nvt)(XivePresenter *xptr, uint8_t format, uint8_t nvt_blk, uint32_t nvt_idx, - bool cam_ignore, uint8_t priority, + bool crowd, bool cam_ignore, uint8_t priority, uint32_t logic_serv, XiveTCTXMatch *match); bool (*in_kernel)(const XivePresenter *xptr); uint32_t (*get_config)(XivePresenter *xptr); int (*broadcast)(XivePresenter *xptr, uint8_t nvt_blk, uint32_t nvt_idx, - uint8_t priority); + bool crowd, bool cam_ignore, uint8_t priority); }; int xive_presenter_tctx_match(XivePresenter *xptr, XiveTCTX *tctx, @@ -455,7 +455,7 @@ int xive_presenter_tctx_match(XivePresenter *xptr, XiveTCTX *tctx, bool cam_ignore, uint32_t logic_serv); bool xive_presenter_notify(XiveFabric *xfb, uint8_t format, uint8_t nvt_blk, uint32_t nvt_idx, - bool cam_ignore, uint8_t priority, + bool crowd, bool cam_ignore, uint8_t priority, uint32_t logic_serv, bool *precluded); uint32_t xive_get_vpgroup_size(uint32_t nvp_index); @@ -475,10 +475,10 @@ struct XiveFabricClass { InterfaceClass parent; int (*match_nvt)(XiveFabric *xfb, uint8_t format, uint8_t nvt_blk, uint32_t nvt_idx, - bool cam_ignore, uint8_t priority, + bool crowd, bool cam_ignore, uint8_t priority, uint32_t logic_serv, XiveTCTXMatch *match); int (*broadcast)(XiveFabric *xfb, uint8_t nvt_blk, uint32_t nvt_idx, - uint8_t priority); + bool crowd, bool cam_ignore, uint8_t priority); }; /* diff --git a/include/hw/ppc/xive2.h b/include/hw/ppc/xive2.h index 049028d2c2..37aca4d26a 100644 --- a/include/hw/ppc/xive2.h +++ b/include/hw/ppc/xive2.h @@ -90,7 +90,8 @@ void xive2_router_notify(XiveNotifier *xn, uint32_t lisn, bool pq_checked); int xive2_presenter_tctx_match(XivePresenter *xptr, XiveTCTX *tctx, uint8_t format, uint8_t nvt_blk, uint32_t nvt_idx, - bool cam_ignore, uint32_t logic_serv); + bool crowd, bool cam_ignore, + uint32_t logic_serv); uint64_t xive2_presenter_nvp_backlog_op(XivePresenter *xptr, uint8_t blk, uint32_t idx, diff --git a/hw/intc/pnv_xive.c b/hw/intc/pnv_xive.c index 5bacbce6a4..346549f32e 100644 --- a/hw/intc/pnv_xive.c +++ b/hw/intc/pnv_xive.c @@ -473,7 +473,7 @@ static bool pnv_xive_is_cpu_enabled(PnvXive *xive, PowerPCCPU *cpu) static int pnv_xive_match_nvt(XivePresenter *xptr, uint8_t format, uint8_t nvt_blk, uint32_t nvt_idx, - bool cam_ignore, uint8_t priority, + bool crowd, bool cam_ignore, uint8_t priority, uint32_t logic_serv, XiveTCTXMatch *match) { PnvXive *xive = PNV_XIVE(xptr); @@ -500,7 +500,8 @@ static int pnv_xive_match_nvt(XivePresenter *xptr, uint8_t format, * Check the thread context CAM lines and record matches. */ ring = xive_presenter_tctx_match(xptr, tctx, format, nvt_blk, - nvt_idx, cam_ignore, logic_serv); + nvt_idx, cam_ignore, + logic_serv); /* * Save the context and follow on to catch duplicates, that we * don't support yet. 
diff --git a/hw/intc/pnv_xive2.c b/hw/intc/pnv_xive2.c index 9736b623ba..236f9d7eb7 100644 --- a/hw/intc/pnv_xive2.c +++ b/hw/intc/pnv_xive2.c @@ -625,7 +625,7 @@ static bool pnv_xive2_is_cpu_enabled(PnvXive2 *xive, PowerPCCPU *cpu) static int pnv_xive2_match_nvt(XivePresenter *xptr, uint8_t format, uint8_t nvt_blk, uint32_t nvt_idx, - bool cam_ignore, uint8_t priority, + bool crowd, bool cam_ignore, uint8_t priority, uint32_t logic_serv, XiveTCTXMatch *match) { PnvXive2 *xive = PNV_XIVE2(xptr); @@ -656,8 +656,8 @@ static int pnv_xive2_match_nvt(XivePresenter *xptr, uint8_t format, logic_serv); } else { ring = xive2_presenter_tctx_match(xptr, tctx, format, nvt_blk, - nvt_idx, cam_ignore, - logic_serv); + nvt_idx, crowd, cam_ignore, + logic_serv); } if (ring != -1) { @@ -708,7 +708,7 @@ static uint32_t pnv_xive2_presenter_get_config(XivePresenter *xptr) static int pnv_xive2_broadcast(XivePresenter *xptr, uint8_t nvt_blk, uint32_t nvt_idx, - uint8_t priority) + bool crowd, bool ignore, uint8_t priority) { PnvXive2 *xive = PNV_XIVE2(xptr); PnvChip *chip = xive->chip; @@ -733,10 +733,10 @@ static int pnv_xive2_broadcast(XivePresenter *xptr, if (gen1_tima_os) { ring = xive_presenter_tctx_match(xptr, tctx, 0, nvt_blk, - nvt_idx, true, 0); + nvt_idx, ignore, 0); } else { ring = xive2_presenter_tctx_match(xptr, tctx, 0, nvt_blk, - nvt_idx, true, 0); + nvt_idx, crowd, ignore, 0); } if (ring != -1) { diff --git a/hw/intc/spapr_xive.c b/hw/intc/spapr_xive.c index 283a6b8fd2..41cfcab3b9 100644 --- a/hw/intc/spapr_xive.c +++ b/hw/intc/spapr_xive.c @@ -431,7 +431,8 @@ static int spapr_xive_write_nvt(XiveRouter *xrtr, uint8_t nvt_blk, static int spapr_xive_match_nvt(XivePresenter *xptr, uint8_t format, uint8_t nvt_blk, uint32_t nvt_idx, - bool cam_ignore, uint8_t priority, + bool crowd, bool cam_ignore, + uint8_t priority, uint32_t logic_serv, XiveTCTXMatch *match) { CPUState *cs; diff --git a/hw/intc/xive.c b/hw/intc/xive.c index 2aa6e1fecc..d5fbd9bbd8 100644 --- a/hw/intc/xive.c +++ b/hw/intc/xive.c @@ -1681,10 +1681,18 @@ uint32_t xive_get_vpgroup_size(uint32_t nvp_index) return 1 << (ctz32(~nvp_index) + 1); } -static uint8_t xive_get_group_level(uint32_t nvp_index) +static uint8_t xive_get_group_level(bool crowd, bool ignore, + uint32_t nvp_blk, uint32_t nvp_index) { - /* FIXME add crowd encoding */ - return ctz32(~nvp_index) + 1; + uint8_t level = 0; + + if (crowd) { + level = ((ctz32(~nvp_blk) + 1) & 0b11) << 4; + } + if (ignore) { + level |= (ctz32(~nvp_index) + 1) & 0b1111; + } + return level; } /* @@ -1756,7 +1764,7 @@ int xive_presenter_tctx_match(XivePresenter *xptr, XiveTCTX *tctx, */ bool xive_presenter_notify(XiveFabric *xfb, uint8_t format, uint8_t nvt_blk, uint32_t nvt_idx, - bool cam_ignore, uint8_t priority, + bool crowd, bool cam_ignore, uint8_t priority, uint32_t logic_serv, bool *precluded) { XiveFabricClass *xfc = XIVE_FABRIC_GET_CLASS(xfb); @@ -1787,7 +1795,7 @@ bool xive_presenter_notify(XiveFabric *xfb, uint8_t format, * a new command to the presenters (the equivalent of the "assign" * power bus command in the documented full notify sequence. */ - count = xfc->match_nvt(xfb, format, nvt_blk, nvt_idx, cam_ignore, + count = xfc->match_nvt(xfb, format, nvt_blk, nvt_idx, crowd, cam_ignore, priority, logic_serv, &match); if (count < 0) { return false; @@ -1795,7 +1803,7 @@ bool xive_presenter_notify(XiveFabric *xfb, uint8_t format, /* handle CPU exception delivery */ if (count) { - group_level = cam_ignore ? 
xive_get_group_level(nvt_idx) : 0; + group_level = xive_get_group_level(crowd, cam_ignore, nvt_blk, nvt_idx); trace_xive_presenter_notify(nvt_blk, nvt_idx, match.ring, group_level); xive_tctx_pipr_update(match.tctx, match.ring, priority, group_level); } else { @@ -1920,6 +1928,7 @@ void xive_router_end_notify(XiveRouter *xrtr, XiveEAS *eas) } found = xive_presenter_notify(xrtr->xfb, format, nvt_blk, nvt_idx, + false /* crowd */, xive_get_field32(END_W7_F0_IGNORE, end.w7), priority, xive_get_field32(END_W7_F1_LOG_SERVER_ID, end.w7), diff --git a/hw/intc/xive2.c b/hw/intc/xive2.c index b6f279e6a3..1f2837104c 100644 --- a/hw/intc/xive2.c +++ b/hw/intc/xive2.c @@ -1117,13 +1117,42 @@ static bool xive2_vp_match_mask(uint32_t cam1, uint32_t cam2, return (cam1 & vp_mask) == (cam2 & vp_mask); } +static uint8_t xive2_get_vp_block_mask(uint32_t nvt_blk, bool crowd) +{ + uint8_t size, block_mask = 0b1111; + + /* 3 supported crowd sizes: 2, 4, 16 */ + if (crowd) { + size = xive_get_vpgroup_size(nvt_blk); + if (size == 8) { + qemu_log_mask(LOG_GUEST_ERROR, "XIVE: Invalid crowd size of 8\n"); + return block_mask; + } + block_mask = ~(size - 1); + block_mask &= 0b1111; + } + return block_mask; +} + +static uint32_t xive2_get_vp_index_mask(uint32_t nvt_index, bool cam_ignore) +{ + uint32_t index_mask = 0xFFFFFF; /* 24 bits */ + + if (cam_ignore) { + index_mask = ~(xive_get_vpgroup_size(nvt_index) - 1); + index_mask &= 0xFFFFFF; + } + return index_mask; +} + /* * The thread context register words are in big-endian format. */ int xive2_presenter_tctx_match(XivePresenter *xptr, XiveTCTX *tctx, uint8_t format, uint8_t nvt_blk, uint32_t nvt_idx, - bool cam_ignore, uint32_t logic_serv) + bool crowd, bool cam_ignore, + uint32_t logic_serv) { uint32_t cam = xive2_nvp_cam_line(nvt_blk, nvt_idx); uint32_t qw3w2 = xive_tctx_word2(&tctx->regs[TM_QW3_HV_PHYS]); @@ -1131,7 +1160,8 @@ int xive2_presenter_tctx_match(XivePresenter *xptr, XiveTCTX *tctx, uint32_t qw1w2 = xive_tctx_word2(&tctx->regs[TM_QW1_OS]); uint32_t qw0w2 = xive_tctx_word2(&tctx->regs[TM_QW0_USER]); - uint32_t vp_mask = 0xFFFFFFFF; + uint32_t index_mask, vp_mask; + uint8_t block_mask; if (format == 0) { /* @@ -1139,9 +1169,9 @@ int xive2_presenter_tctx_match(XivePresenter *xptr, XiveTCTX *tctx, * i=1: VP-group notification (bits ignored at the end of the * NVT identifier) */ - if (cam_ignore) { - vp_mask = ~(xive_get_vpgroup_size(nvt_idx) - 1); - } + block_mask = xive2_get_vp_block_mask(nvt_blk, crowd); + index_mask = xive2_get_vp_index_mask(nvt_idx, cam_ignore); + vp_mask = xive2_nvp_cam_line(block_mask, index_mask); /* For VP-group notifications, threads with LGS=0 are excluded */ @@ -1274,6 +1304,12 @@ static void xive2_router_end_notify(Xive2Router *xrtr, uint8_t end_blk, return; } + if (xive2_end_is_crowd(&end) && !xive2_end_is_ignore(&end)) { + qemu_log_mask(LOG_GUEST_ERROR, + "XIVE: invalid END, 'crowd' bit requires 'ignore' bit\n"); + return; + } + if (xive2_end_is_enqueue(&end)) { xive2_end_enqueue(&end, end_data); /* Enqueuing event data modifies the EQ toggle and index */ @@ -1335,7 +1371,7 @@ static void xive2_router_end_notify(Xive2Router *xrtr, uint8_t end_blk, } found = xive_presenter_notify(xrtr->xfb, format, nvp_blk, nvp_idx, - xive2_end_is_ignore(&end), + xive2_end_is_crowd(&end), xive2_end_is_ignore(&end), priority, xive_get_field32(END2_W7_F1_LOG_SERVER_ID, end.w7), &precluded); @@ -1372,17 +1408,24 @@ static void xive2_router_end_notify(Xive2Router *xrtr, uint8_t end_blk, nvp.w2 = xive_set_field32(NVP2_W2_IPB, nvp.w2, ipb);
xive2_router_write_nvp(xrtr, nvp_blk, nvp_idx, &nvp, 2); } else { - Xive2Nvgc nvg; + Xive2Nvgc nvgc; uint32_t backlog; + bool crowd; - /* For groups, the per-priority backlog counters are in the NVG */ - if (xive2_router_get_nvgc(xrtr, false, nvp_blk, nvp_idx, &nvg)) { - qemu_log_mask(LOG_GUEST_ERROR, "XIVE: no NVG %x/%x\n", - nvp_blk, nvp_idx); + crowd = xive2_end_is_crowd(&end); + + /* + * For groups and crowds, the per-priority backlog + * counters are stored in the NVG/NVC structures + */ + if (xive2_router_get_nvgc(xrtr, crowd, + nvp_blk, nvp_idx, &nvgc)) { + qemu_log_mask(LOG_GUEST_ERROR, "XIVE: no %s %x/%x\n", + crowd ? "NVC" : "NVG", nvp_blk, nvp_idx); return; } - if (!xive2_nvgc_is_valid(&nvg)) { + if (!xive2_nvgc_is_valid(&nvgc)) { qemu_log_mask(LOG_GUEST_ERROR, "XIVE: NVG %x/%x is invalid\n", nvp_blk, nvp_idx); return; @@ -1395,13 +1438,16 @@ static void xive2_router_end_notify(Xive2Router *xrtr, uint8_t end_blk, * set the LSMFB field of the TIMA of relevant threads so * that they know an interrupt is pending. */ - backlog = xive2_nvgc_get_backlog(&nvg, priority) + 1; - xive2_nvgc_set_backlog(&nvg, priority, backlog); - xive2_router_write_nvgc(xrtr, false, nvp_blk, nvp_idx, &nvg); + backlog = xive2_nvgc_get_backlog(&nvgc, priority) + 1; + xive2_nvgc_set_backlog(&nvgc, priority, backlog); + xive2_router_write_nvgc(xrtr, crowd, nvp_blk, nvp_idx, &nvgc); if (precluded && backlog == 1) { XiveFabricClass *xfc = XIVE_FABRIC_GET_CLASS(xrtr->xfb); - xfc->broadcast(xrtr->xfb, nvp_blk, nvp_idx, priority); + xfc->broadcast(xrtr->xfb, nvp_blk, nvp_idx, + xive2_end_is_crowd(&end), + xive2_end_is_ignore(&end), + priority); if (!xive2_end_is_precluded_escalation(&end)) { /* diff --git a/hw/ppc/pnv.c b/hw/ppc/pnv.c index 9b42f47326..3a86a6edda 100644 --- a/hw/ppc/pnv.c +++ b/hw/ppc/pnv.c @@ -2554,7 +2554,7 @@ static void pnv_pic_print_info(InterruptStatsProvider *obj, GString *buf) static int pnv_match_nvt(XiveFabric *xfb, uint8_t format, uint8_t nvt_blk, uint32_t nvt_idx, - bool cam_ignore, uint8_t priority, + bool crowd, bool cam_ignore, uint8_t priority, uint32_t logic_serv, XiveTCTXMatch *match) { @@ -2568,8 +2568,8 @@ static int pnv_match_nvt(XiveFabric *xfb, uint8_t format, XivePresenterClass *xpc = XIVE_PRESENTER_GET_CLASS(xptr); int count; - count = xpc->match_nvt(xptr, format, nvt_blk, nvt_idx, cam_ignore, - priority, logic_serv, match); + count = xpc->match_nvt(xptr, format, nvt_blk, nvt_idx, crowd, + cam_ignore, priority, logic_serv, match); if (count < 0) { return count; @@ -2583,7 +2583,7 @@ static int pnv_match_nvt(XiveFabric *xfb, uint8_t format, static int pnv10_xive_match_nvt(XiveFabric *xfb, uint8_t format, uint8_t nvt_blk, uint32_t nvt_idx, - bool cam_ignore, uint8_t priority, + bool crowd, bool cam_ignore, uint8_t priority, uint32_t logic_serv, XiveTCTXMatch *match) { @@ -2597,8 +2597,8 @@ static int pnv10_xive_match_nvt(XiveFabric *xfb, uint8_t format, XivePresenterClass *xpc = XIVE_PRESENTER_GET_CLASS(xptr); int count; - count = xpc->match_nvt(xptr, format, nvt_blk, nvt_idx, cam_ignore, - priority, logic_serv, match); + count = xpc->match_nvt(xptr, format, nvt_blk, nvt_idx, crowd, + cam_ignore, priority, logic_serv, match); if (count < 0) { return count; @@ -2612,6 +2612,7 @@ static int pnv10_xive_match_nvt(XiveFabric *xfb, uint8_t format, static int pnv10_xive_broadcast(XiveFabric *xfb, uint8_t nvt_blk, uint32_t nvt_idx, + bool crowd, bool cam_ignore, uint8_t priority) { PnvMachineState *pnv = PNV_MACHINE(xfb); @@ -2622,7 +2623,7 @@ static int 
pnv10_xive_broadcast(XiveFabric *xfb, XivePresenter *xptr = XIVE_PRESENTER(&chip10->xive); XivePresenterClass *xpc = XIVE_PRESENTER_GET_CLASS(xptr); - xpc->broadcast(xptr, nvt_blk, nvt_idx, priority); + xpc->broadcast(xptr, nvt_blk, nvt_idx, crowd, cam_ignore, priority); } return 0; } diff --git a/hw/ppc/spapr.c b/hw/ppc/spapr.c index 8aa3ce7449..35a7bf8cce 100644 --- a/hw/ppc/spapr.c +++ b/hw/ppc/spapr.c @@ -4539,7 +4539,7 @@ static void spapr_pic_print_info(InterruptStatsProvider *obj, GString *buf) */ static int spapr_match_nvt(XiveFabric *xfb, uint8_t format, uint8_t nvt_blk, uint32_t nvt_idx, - bool cam_ignore, uint8_t priority, + bool crowd, bool cam_ignore, uint8_t priority, uint32_t logic_serv, XiveTCTXMatch *match) { SpaprMachineState *spapr = SPAPR_MACHINE(xfb); @@ -4547,7 +4547,7 @@ static int spapr_match_nvt(XiveFabric *xfb, uint8_t format, XivePresenterClass *xpc = XIVE_PRESENTER_GET_CLASS(xptr); int count; - count = xpc->match_nvt(xptr, format, nvt_blk, nvt_idx, cam_ignore, + count = xpc->match_nvt(xptr, format, nvt_blk, nvt_idx, crowd, cam_ignore, priority, logic_serv, match); if (count < 0) { return count;
From patchwork Tue Oct 15 21:13:25 2024
From: Michael Kowal
To: qemu-devel@nongnu.org
Cc: qemu-ppc@nongnu.org, clg@kaod.org, fbarrat@linux.ibm.com, npiggin@gmail.com, milesg@linux.ibm.com, danielhb413@gmail.com, david@gibson.dropbear.id.au, harshpb@linux.ibm.com, thuth@redhat.com, lvivier@redhat.com, pbonzini@redhat.com
Subject: [PATCH 10/14] ppc/xive2: Check crowd backlog when scanning group backlog
Date: Tue, 15 Oct 2024 16:13:25 -0500
Message-Id: <20241015211329.21113-11-kowal@linux.ibm.com>
In-Reply-To: <20241015211329.21113-1-kowal@linux.ibm.com>
References: <20241015211329.21113-1-kowal@linux.ibm.com>

From: Frederic Barrat

When processing a backlog scan for group interrupts, also take into account crowd interrupts.
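As context for the diff below, here is a small standalone sketch (an editorial example, not code from this patch) of how the combined 6-bit crowd/group level is split and used to derive the NVGC block/index. It mirrors the NVx_CROWD_LVL/NVx_GROUP_LVL macros and the xive2_pgofnext() helper introduced by this patch, with simplified names:

#include <stdint.h>
#include <stdio.h>

/* Same split as the NVx_CROWD_LVL/NVx_GROUP_LVL macros added by this patch */
#define CROWD_LVL(level)  (((level) >> 4) & 0b11)
#define GROUP_LVL(level)  ((level) & 0b1111)

/* Simplified version of the block/index adjustment done by xive2_pgofnext() */
static void pgofnext(uint8_t *blk, uint32_t *idx, uint8_t next_level)
{
    uint32_t mask;
    uint8_t crowd = CROWD_LVL(next_level);

    if (crowd == 3) {
        crowd = 4;                 /* 0b11 encodes a crowd of 16 blocks (2^4) */
    }
    mask = (1 << crowd) - 1;
    *blk = (*blk & ~mask) | (mask >> 1);

    mask = (1 << GROUP_LVL(next_level)) - 1;
    *idx = (*idx & ~mask) | (mask >> 1);
}

int main(void)
{
    uint8_t blk = 0x2;
    uint32_t idx = 0x55;

    /* Level 0x13: crowd level 1 (2 blocks), group level 3 (8 VPs) */
    pgofnext(&blk, &idx, 0x13);
    printf("NVGC %x/%x\n", blk, idx);   /* prints: NVGC 2/53 */
    return 0;
}

The backlog scan in xive2_presenter_backlog_check() repeats this adjustment while following the PGofNext chain, so the crowd bits of the level now also widen the block being checked, not just the VP index.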
Signed-off-by: Frederic Barrat Signed-off-by: Michael Kowal --- include/hw/ppc/xive2_regs.h | 4 ++ hw/intc/xive2.c | 82 +++++++++++++++++++++++++------------ 2 files changed, 60 insertions(+), 26 deletions(-) diff --git a/include/hw/ppc/xive2_regs.h b/include/hw/ppc/xive2_regs.h index 66a419441c..89236b9aaf 100644 --- a/include/hw/ppc/xive2_regs.h +++ b/include/hw/ppc/xive2_regs.h @@ -237,4 +237,8 @@ void xive2_nvgc_pic_print_info(Xive2Nvgc *nvgc, uint32_t nvgc_idx, #define NVx_BACKLOG_OP PPC_BITMASK(52, 53) #define NVx_BACKLOG_PRIO PPC_BITMASK(57, 59) +/* split the 6-bit crowd/group level */ +#define NVx_CROWD_LVL(level) ((level >> 4) & 0b11) +#define NVx_GROUP_LVL(level) (level & 0b1111) + #endif /* PPC_XIVE2_REGS_H */ diff --git a/hw/intc/xive2.c b/hw/intc/xive2.c index 1f2837104c..41d689eaab 100644 --- a/hw/intc/xive2.c +++ b/hw/intc/xive2.c @@ -367,6 +367,35 @@ static void xive2_end_enqueue(Xive2End *end, uint32_t data) end->w1 = xive_set_field32(END2_W1_PAGE_OFF, end->w1, qindex); } +static void xive2_pgofnext(uint8_t *nvgc_blk, uint32_t *nvgc_idx, + uint8_t next_level) +{ + uint32_t mask, next_idx; + uint8_t next_blk; + + /* + * Adjust the block and index of a VP for the next group/crowd + * size (PGofFirst/PGofNext field in the NVP and NVGC structures). + * + * The 6-bit group level is split into a 2-bit crowd and 4-bit + * group levels. Encoding is similar. However, we don't support + * crowd size of 8. So a crowd level of 0b11 is bumped to a crowd + * size of 16. + */ + next_blk = NVx_CROWD_LVL(next_level); + if (next_blk == 3) { + next_blk = 4; + } + mask = (1 << next_blk) - 1; + *nvgc_blk &= ~mask; + *nvgc_blk |= mask >> 1; + + next_idx = NVx_GROUP_LVL(next_level); + mask = (1 << next_idx) - 1; + *nvgc_idx &= ~mask; + *nvgc_idx |= mask >> 1; +} + /* * Scan the group chain and return the highest priority and group * level of pending group interrupts. 
@@ -377,29 +406,28 @@ static uint8_t xive2_presenter_backlog_check(XivePresenter *xptr, uint8_t *out_level) { Xive2Router *xrtr = XIVE2_ROUTER(xptr); - uint32_t nvgc_idx, mask; + uint32_t nvgc_idx; uint32_t current_level, count; - uint8_t prio; + uint8_t nvgc_blk, prio; Xive2Nvgc nvgc; for (prio = 0; prio <= XIVE_PRIORITY_MAX; prio++) { - current_level = first_group & 0xF; + current_level = first_group & 0x3F; + nvgc_blk = nvp_blk; + nvgc_idx = nvp_idx; while (current_level) { - mask = (1 << current_level) - 1; - nvgc_idx = nvp_idx & ~mask; - nvgc_idx |= mask >> 1; - qemu_log("fxb %s checking backlog for prio %d group idx %x\n", - __func__, prio, nvgc_idx); - - if (xive2_router_get_nvgc(xrtr, false, nvp_blk, nvgc_idx, &nvgc)) { - qemu_log_mask(LOG_GUEST_ERROR, "XIVE: No NVG %x/%x\n", - nvp_blk, nvgc_idx); + xive2_pgofnext(&nvgc_blk, &nvgc_idx, current_level); + + if (xive2_router_get_nvgc(xrtr, NVx_CROWD_LVL(current_level), + nvgc_blk, nvgc_idx, &nvgc)) { + qemu_log_mask(LOG_GUEST_ERROR, "XIVE: No NVGC %x/%x\n", + nvgc_blk, nvgc_idx); return 0xFF; } if (!xive2_nvgc_is_valid(&nvgc)) { - qemu_log_mask(LOG_GUEST_ERROR, "XIVE: Invalid NVG %x/%x\n", - nvp_blk, nvgc_idx); + qemu_log_mask(LOG_GUEST_ERROR, "XIVE: Invalid NVGC %x/%x\n", + nvgc_blk, nvgc_idx); return 0xFF; } @@ -408,7 +436,7 @@ static uint8_t xive2_presenter_backlog_check(XivePresenter *xptr, *out_level = current_level; return prio; } - current_level = xive_get_field32(NVGC2_W0_PGONEXT, nvgc.w0) & 0xF; + current_level = xive_get_field32(NVGC2_W0_PGONEXT, nvgc.w0) & 0x3F; } } return 0xFF; @@ -420,22 +448,23 @@ static void xive2_presenter_backlog_decr(XivePresenter *xptr, uint8_t group_level) { Xive2Router *xrtr = XIVE2_ROUTER(xptr); - uint32_t nvgc_idx, mask, count; + uint32_t nvgc_idx, count; + uint8_t nvgc_blk; Xive2Nvgc nvgc; - group_level &= 0xF; - mask = (1 << group_level) - 1; - nvgc_idx = nvp_idx & ~mask; - nvgc_idx |= mask >> 1; + nvgc_blk = nvp_blk; + nvgc_idx = nvp_idx; + xive2_pgofnext(&nvgc_blk, &nvgc_idx, group_level); - if (xive2_router_get_nvgc(xrtr, false, nvp_blk, nvgc_idx, &nvgc)) { - qemu_log_mask(LOG_GUEST_ERROR, "XIVE: No NVG %x/%x\n", - nvp_blk, nvgc_idx); + if (xive2_router_get_nvgc(xrtr, NVx_CROWD_LVL(group_level), + nvgc_blk, nvgc_idx, &nvgc)) { + qemu_log_mask(LOG_GUEST_ERROR, "XIVE: No NVGC %x/%x\n", + nvgc_blk, nvgc_idx); return; } if (!xive2_nvgc_is_valid(&nvgc)) { - qemu_log_mask(LOG_GUEST_ERROR, "XIVE: Invalid NVG %x/%x\n", - nvp_blk, nvgc_idx); + qemu_log_mask(LOG_GUEST_ERROR, "XIVE: Invalid NVGC %x/%x\n", + nvgc_blk, nvgc_idx); return; } count = xive2_nvgc_get_backlog(&nvgc, group_prio); @@ -443,7 +472,8 @@ static void xive2_presenter_backlog_decr(XivePresenter *xptr, return; } xive2_nvgc_set_backlog(&nvgc, group_prio, count - 1); - xive2_router_write_nvgc(xrtr, false, nvp_blk, nvgc_idx, &nvgc); + xive2_router_write_nvgc(xrtr, NVx_CROWD_LVL(group_level), + nvgc_blk, nvgc_idx, &nvgc); } /*
From patchwork Tue Oct 15 21:13:26 2024
From: Michael Kowal
To: qemu-devel@nongnu.org
Cc: qemu-ppc@nongnu.org, clg@kaod.org, fbarrat@linux.ibm.com, npiggin@gmail.com, milesg@linux.ibm.com, danielhb413@gmail.com, david@gibson.dropbear.id.au, harshpb@linux.ibm.com, thuth@redhat.com, lvivier@redhat.com, pbonzini@redhat.com
Subject: [PATCH 11/14] pnv/xive: Only support crowd size of 0, 2, 4 and 16
Date: Tue, 15 Oct 2024 16:13:26 -0500
Message-Id: <20241015211329.21113-12-kowal@linux.ibm.com>
In-Reply-To: <20241015211329.21113-1-kowal@linux.ibm.com>
References: <20241015211329.21113-1-kowal@linux.ibm.com>

From: Glenn Miles

XIVE crowd sizes are encoded into a 2-bit field as follows:

   0: 0b00
   2: 0b01
   4: 0b10
  16: 0b11

A crowd size of 8 is not supported.

Signed-off-by: Glenn Miles Signed-off-by: Michael Kowal --- hw/intc/xive.c | 21 ++++++++++++++++++++- 1 file changed, 20 insertions(+), 1 deletion(-) diff --git a/hw/intc/xive.c b/hw/intc/xive.c index d5fbd9bbd8..565f0243bd 100644 --- a/hw/intc/xive.c +++ b/hw/intc/xive.c @@ -1687,7 +1687,26 @@ static uint8_t xive_get_group_level(bool crowd, bool ignore, uint8_t level = 0; if (crowd) { - level = ((ctz32(~nvp_blk) + 1) & 0b11) << 4; + /* crowd level is bit position of first 0 from the right in nvp_blk */ + level = ctz32(~nvp_blk) + 1; + + /* + * Supported crowd sizes are 2^1, 2^2, and 2^4. 2^3 is not supported. + * HW will encode level 4 as the value 3. See xive2_pgofnext().
+ */ + switch (level) { + case 1: + case 2: + break; + case 4: + level = 3; + break; + default: + g_assert_not_reached(); + } + + /* Crowd level bits reside in upper 2 bits of the 6 bit group level */ + level <<= 4; } if (ignore) { level |= (ctz32(~nvp_index) + 1) & 0b1111;
From patchwork Tue Oct 15 21:13:27 2024
From: Michael Kowal
To: qemu-devel@nongnu.org
Cc: qemu-ppc@nongnu.org, clg@kaod.org, fbarrat@linux.ibm.com, npiggin@gmail.com, milesg@linux.ibm.com, danielhb413@gmail.com, david@gibson.dropbear.id.au, harshpb@linux.ibm.com, thuth@redhat.com, lvivier@redhat.com, pbonzini@redhat.com
Subject: [PATCH 12/14] pnv/xive: Support ESB Escalation
Date: Tue, 15 Oct 2024 16:13:27 -0500
Message-Id: <20241015211329.21113-13-kowal@linux.ibm.com>
In-Reply-To: <20241015211329.21113-1-kowal@linux.ibm.com>
References: <20241015211329.21113-1-kowal@linux.ibm.com>

From: Glenn Miles

END notification processing has an escalation path. The escalation is not always an END escalation but can be an ESB escalation.

Also added a check for 'resume' processing, which logs a message stating that it needs to be implemented. This is not needed at this time but is part of the END notification processing.
This change was taken from a patch provided by Michael Kowal Suggested-by: Michael Kowal Signed-off-by: Glenn Miles Signed-off-by: Michael Kowal --- include/hw/ppc/xive2.h | 1 + include/hw/ppc/xive2_regs.h | 13 +++++--- hw/intc/xive2.c | 61 +++++++++++++++++++++++++++++-------- 3 files changed, 58 insertions(+), 17 deletions(-) diff --git a/include/hw/ppc/xive2.h b/include/hw/ppc/xive2.h index 37aca4d26a..b17cc21ca6 100644 --- a/include/hw/ppc/xive2.h +++ b/include/hw/ppc/xive2.h @@ -82,6 +82,7 @@ int xive2_router_write_nvgc(Xive2Router *xrtr, bool crowd, uint32_t xive2_router_get_config(Xive2Router *xrtr); void xive2_router_notify(XiveNotifier *xn, uint32_t lisn, bool pq_checked); +void xive2_notify(Xive2Router *xrtr, uint32_t lisn, bool pq_checked); /* * XIVE2 Presenter (POWER10) diff --git a/include/hw/ppc/xive2_regs.h b/include/hw/ppc/xive2_regs.h index 89236b9aaf..42cdc91452 100644 --- a/include/hw/ppc/xive2_regs.h +++ b/include/hw/ppc/xive2_regs.h @@ -40,15 +40,18 @@ typedef struct Xive2Eas { uint64_t w; -#define EAS2_VALID PPC_BIT(0) -#define EAS2_END_BLOCK PPC_BITMASK(4, 7) /* Destination EQ block# */ -#define EAS2_END_INDEX PPC_BITMASK(8, 31) /* Destination EQ index */ -#define EAS2_MASKED PPC_BIT(32) /* Masked */ -#define EAS2_END_DATA PPC_BITMASK(33, 63) /* written to the EQ */ +#define EAS2_VALID PPC_BIT(0) +#define EAS2_QOS PPC_BITMASK(1, 2) /* Quality of Service (unimp) */ +#define EAS2_RESUME PPC_BIT(3) /* END Resume (unimp) */ +#define EAS2_END_BLOCK PPC_BITMASK(4, 7) /* Destination EQ block# */ +#define EAS2_END_INDEX PPC_BITMASK(8, 31) /* Destination EQ index */ +#define EAS2_MASKED PPC_BIT(32) /* Masked */ +#define EAS2_END_DATA PPC_BITMASK(33, 63) /* written to the EQ */ } Xive2Eas; #define xive2_eas_is_valid(eas) (be64_to_cpu((eas)->w) & EAS2_VALID) #define xive2_eas_is_masked(eas) (be64_to_cpu((eas)->w) & EAS2_MASKED) +#define xive2_eas_is_resume(eas) (be64_to_cpu((eas)->w) & EAS2_RESUME) void xive2_eas_pic_print_info(Xive2Eas *eas, uint32_t lisn, GString *buf); diff --git a/hw/intc/xive2.c b/hw/intc/xive2.c index 41d689eaab..f812ba9624 100644 --- a/hw/intc/xive2.c +++ b/hw/intc/xive2.c @@ -1511,18 +1511,39 @@ do_escalation: } } - /* - * The END trigger becomes an Escalation trigger - */ - xive2_router_end_notify(xrtr, - xive_get_field32(END2_W4_END_BLOCK, end.w4), - xive_get_field32(END2_W4_ESC_END_INDEX, end.w4), - xive_get_field32(END2_W5_ESC_END_DATA, end.w5)); + if (xive2_end_is_escalate_end(&end)) { + /* + * Perform END Adaptive escalation processing + * The END trigger becomes an Escalation trigger + */ + xive2_router_end_notify(xrtr, + xive_get_field32(END2_W4_END_BLOCK, end.w4), + xive_get_field32(END2_W4_ESC_END_INDEX, end.w4), + xive_get_field32(END2_W5_ESC_END_DATA, end.w5)); + } /* end END adaptive escalation */ + + else { + uint32_t lisn; /* Logical Interrupt Source Number */ + + /* + * Perform ESB escalation processing + * E[N] == 1 --> N + * Req[Block] <- E[ESB_Block] + * Req[Index] <- E[ESB_Index] + * Req[Offset] <- 0x000 + * Execute Req command + */ + lisn = XIVE_EAS(xive_get_field32(END2_W4_END_BLOCK, end.w4), + xive_get_field32(END2_W4_ESC_END_INDEX, end.w4)); + + xive2_notify(xrtr, lisn, true /* pq_checked */); + } + + return; } -void xive2_router_notify(XiveNotifier *xn, uint32_t lisn, bool pq_checked) +void xive2_notify(Xive2Router *xrtr, uint32_t lisn, bool pq_checked) { - Xive2Router *xrtr = XIVE2_ROUTER(xn); uint8_t eas_blk = XIVE_EAS_BLOCK(lisn); uint32_t eas_idx = XIVE_EAS_INDEX(lisn); Xive2Eas eas; @@ -1565,13 +1586,29 @@ void
xive2_router_notify(XiveNotifier *xn, uint32_t lisn, bool pq_checked) return; } + /* TODO: add support for EAS resume if ever needed */ + if (xive2_eas_is_resume(&eas)) { + qemu_log_mask(LOG_UNIMP, + "XIVE: EAS resume processing unimplemented - LISN %x\n", + lisn); + return; + } + /* * The event trigger becomes an END trigger */ xive2_router_end_notify(xrtr, - xive_get_field64(EAS2_END_BLOCK, eas.w), - xive_get_field64(EAS2_END_INDEX, eas.w), - xive_get_field64(EAS2_END_DATA, eas.w)); + xive_get_field64(EAS2_END_BLOCK, eas.w), + xive_get_field64(EAS2_END_INDEX, eas.w), + xive_get_field64(EAS2_END_DATA, eas.w)); +} + +void xive2_router_notify(XiveNotifier *xn, uint32_t lisn, bool pq_checked) +{ + Xive2Router *xrtr = XIVE2_ROUTER(xn); + + xive2_notify(xrtr, lisn, pq_checked); + return; } static Property xive2_router_properties[] = {
From patchwork Tue Oct 15 21:13:28 2024
From: Michael Kowal
To: qemu-devel@nongnu.org
Cc: qemu-ppc@nongnu.org, clg@kaod.org, fbarrat@linux.ibm.com, npiggin@gmail.com, milesg@linux.ibm.com, danielhb413@gmail.com, david@gibson.dropbear.id.au, harshpb@linux.ibm.com, thuth@redhat.com, lvivier@redhat.com, pbonzini@redhat.com
Subject: [PATCH 13/14] pnv/xive: Fix problem with treating NVGC as a NVP
Date: Tue, 15 Oct 2024 16:13:28 -0500
Message-Id: <20241015211329.21113-14-kowal@linux.ibm.com>
In-Reply-To: <20241015211329.21113-1-kowal@linux.ibm.com>
References: <20241015211329.21113-1-kowal@linux.ibm.com>

From: Glenn Miles

When booting with PHYP, the blk/index for an NVGC was being mistakenly treated as the blk/index for an NVP. Renamed nvp_blk/nvp_idx throughout the code to nvx_blk/nvx_idx to prevent confusion in the future, and we now delay loading the NVP until the point where we know that the block and index actually point to an NVP.
Suggested-by: Michael Kowal Fixes: 6d4c4f70262 ("ppc/xive2: Support crowd-matching when looking for target") Signed-off-by: Glenn Miles Signed-off-by: Michael Kowal --- hw/intc/xive2.c | 78 ++++++++++++++++++++++++------------------------- 1 file changed, 39 insertions(+), 39 deletions(-) diff --git a/hw/intc/xive2.c b/hw/intc/xive2.c index f812ba9624..8abccd2f4b 100644 --- a/hw/intc/xive2.c +++ b/hw/intc/xive2.c @@ -226,8 +226,8 @@ void xive2_end_pic_print_info(Xive2End *end, uint32_t end_idx, GString *buf) uint32_t qsize = xive_get_field32(END2_W3_QSIZE, end->w3); uint32_t qentries = 1 << (qsize + 10); - uint32_t nvp_blk = xive_get_field32(END2_W6_VP_BLOCK, end->w6); - uint32_t nvp_idx = xive_get_field32(END2_W6_VP_OFFSET, end->w6); + uint32_t nvx_blk = xive_get_field32(END2_W6_VP_BLOCK, end->w6); + uint32_t nvx_idx = xive_get_field32(END2_W6_VP_OFFSET, end->w6); uint8_t priority = xive_get_field32(END2_W7_F0_PRIORITY, end->w7); uint8_t pq; @@ -256,7 +256,7 @@ void xive2_end_pic_print_info(Xive2End *end, uint32_t end_idx, GString *buf) xive2_end_is_firmware2(end) ? 'F' : '-', xive2_end_is_ignore(end) ? 'i' : '-', xive2_end_is_crowd(end) ? 'c' : '-', - priority, nvp_blk, nvp_idx); + priority, nvx_blk, nvx_idx); if (qaddr_base) { g_string_append_printf(buf, " eq:@%08"PRIx64"% 6d/%5d ^%d", @@ -401,7 +401,7 @@ static void xive2_pgofnext(uint8_t *nvgc_blk, uint32_t *nvgc_idx, * level of pending group interrupts. */ static uint8_t xive2_presenter_backlog_check(XivePresenter *xptr, - uint8_t nvp_blk, uint32_t nvp_idx, + uint8_t nvx_blk, uint32_t nvx_idx, uint8_t first_group, uint8_t *out_level) { @@ -413,8 +413,8 @@ static uint8_t xive2_presenter_backlog_check(XivePresenter *xptr, for (prio = 0; prio <= XIVE_PRIORITY_MAX; prio++) { current_level = first_group & 0x3F; - nvgc_blk = nvp_blk; - nvgc_idx = nvp_idx; + nvgc_blk = nvx_blk; + nvgc_idx = nvx_idx; while (current_level) { xive2_pgofnext(&nvgc_blk, &nvgc_idx, current_level); @@ -443,7 +443,7 @@ static uint8_t xive2_presenter_backlog_check(XivePresenter *xptr, } static void xive2_presenter_backlog_decr(XivePresenter *xptr, - uint8_t nvp_blk, uint32_t nvp_idx, + uint8_t nvx_blk, uint32_t nvx_idx, uint8_t group_prio, uint8_t group_level) { @@ -452,8 +452,8 @@ static void xive2_presenter_backlog_decr(XivePresenter *xptr, uint8_t nvgc_blk; Xive2Nvgc nvgc; - nvgc_blk = nvp_blk; - nvgc_idx = nvp_idx; + nvgc_blk = nvx_blk; + nvgc_idx = nvx_idx; xive2_pgofnext(&nvgc_blk, &nvgc_idx, group_level); if (xive2_router_get_nvgc(xrtr, NVx_CROWD_LVL(group_level), @@ -1317,9 +1317,8 @@ static void xive2_router_end_notify(Xive2Router *xrtr, uint8_t end_blk, uint8_t priority; uint8_t format; bool found, precluded; - Xive2Nvp nvp; - uint8_t nvp_blk; - uint32_t nvp_idx; + uint8_t nvx_blk; + uint32_t nvx_idx; /* END cache lookup */ if (xive2_router_get_end(xrtr, end_blk, end_idx, &end)) { @@ -1384,23 +1383,10 @@ static void xive2_router_end_notify(Xive2Router *xrtr, uint8_t end_blk, /* * Follows IVPE notification */ - nvp_blk = xive_get_field32(END2_W6_VP_BLOCK, end.w6); - nvp_idx = xive_get_field32(END2_W6_VP_OFFSET, end.w6); - - /* NVP cache lookup */ - if (xive2_router_get_nvp(xrtr, nvp_blk, nvp_idx, &nvp)) { - qemu_log_mask(LOG_GUEST_ERROR, "XIVE: no NVP %x/%x\n", - nvp_blk, nvp_idx); - return; - } - - if (!xive2_nvp_is_valid(&nvp)) { - qemu_log_mask(LOG_GUEST_ERROR, "XIVE: NVP %x/%x is invalid\n", - nvp_blk, nvp_idx); - return; - } + nvx_blk = xive_get_field32(END2_W6_VP_BLOCK, end.w6); + nvx_idx = xive_get_field32(END2_W6_VP_OFFSET, end.w6); - found = 
xive_presenter_notify(xrtr->xfb, format, nvp_blk, nvp_idx, + found = xive_presenter_notify(xrtr->xfb, format, nvx_blk, nvx_idx, xive2_end_is_crowd(&end), xive2_end_is_ignore(&end), priority, xive_get_field32(END2_W7_F1_LOG_SERVER_ID, end.w7), @@ -1428,6 +1414,21 @@ static void xive2_router_end_notify(Xive2Router *xrtr, uint8_t end_blk, if (!xive2_end_is_ignore(&end)) { uint8_t ipb; + Xive2Nvp nvp; + + /* NVP cache lookup */ + if (xive2_router_get_nvp(xrtr, nvx_blk, nvx_idx, &nvp)) { + qemu_log_mask(LOG_GUEST_ERROR, "XIVE: no NVP %x/%x\n", + nvx_blk, nvx_idx); + return; + } + + if (!xive2_nvp_is_valid(&nvp)) { + qemu_log_mask(LOG_GUEST_ERROR, "XIVE: NVP %x/%x is invalid\n", + nvx_blk, nvx_idx); + return; + } + /* * Record the IPB in the associated NVP structure for later * use. The presenter will resend the interrupt when the vCPU @@ -1436,7 +1437,7 @@ static void xive2_router_end_notify(Xive2Router *xrtr, uint8_t end_blk, ipb = xive_get_field32(NVP2_W2_IPB, nvp.w2) | xive_priority_to_ipb(priority); nvp.w2 = xive_set_field32(NVP2_W2_IPB, nvp.w2, ipb); - xive2_router_write_nvp(xrtr, nvp_blk, nvp_idx, &nvp, 2); + xive2_router_write_nvp(xrtr, nvx_blk, nvx_idx, &nvp, 2); } else { Xive2Nvgc nvgc; uint32_t backlog; @@ -1449,32 +1450,31 @@ static void xive2_router_end_notify(Xive2Router *xrtr, uint8_t end_blk, * counters are stored in the NVG/NVC structures */ if (xive2_router_get_nvgc(xrtr, crowd, - nvp_blk, nvp_idx, &nvgc)) { + nvx_blk, nvx_idx, &nvgc)) { qemu_log_mask(LOG_GUEST_ERROR, "XIVE: no %s %x/%x\n", - crowd ? "NVC" : "NVG", nvp_blk, nvp_idx); + crowd ? "NVC" : "NVG", nvx_blk, nvx_idx); return; } if (!xive2_nvgc_is_valid(&nvgc)) { qemu_log_mask(LOG_GUEST_ERROR, "XIVE: NVG %x/%x is invalid\n", - nvp_blk, nvp_idx); + nvx_blk, nvx_idx); return; } /* * Increment the backlog counter for that priority. - * For the precluded case, we only call broadcast the - * first time the counter is incremented. broadcast will - * set the LSMFB field of the TIMA of relevant threads so - * that they know an interrupt is pending. + * We only call broadcast the first time the counter is + * incremented. broadcast will set the LSMFB field of the TIMA of + * relevant threads so that they know an interrupt is pending. 
*/ backlog = xive2_nvgc_get_backlog(&nvgc, priority) + 1; xive2_nvgc_set_backlog(&nvgc, priority, backlog); - xive2_router_write_nvgc(xrtr, crowd, nvp_blk, nvp_idx, &nvgc); + xive2_router_write_nvgc(xrtr, crowd, nvx_blk, nvx_idx, &nvgc); - if (precluded && backlog == 1) { + if (backlog == 1) { XiveFabricClass *xfc = XIVE_FABRIC_GET_CLASS(xrtr->xfb); - xfc->broadcast(xrtr->xfb, nvp_blk, nvp_idx, + xfc->broadcast(xrtr->xfb, nvx_blk, nvx_idx, xive2_end_is_crowd(&end), xive2_end_is_ignore(&end), priority);

From patchwork Tue Oct 15 21:13:29 2024
X-Patchwork-Submitter: Mike Kowal
X-Patchwork-Id: 13837103
From: Michael Kowal
To: qemu-devel@nongnu.org
Cc: qemu-ppc@nongnu.org, clg@kaod.org, fbarrat@linux.ibm.com, npiggin@gmail.com, milesg@linux.ibm.com, danielhb413@gmail.com, david@gibson.dropbear.id.au, harshpb@linux.ibm.com, thuth@redhat.com, lvivier@redhat.com, pbonzini@redhat.com
Subject: [PATCH 14/14] qtest/xive: Add test of pool interrupts
Date: Tue, 15 Oct 2024 16:13:29 -0500
Message-Id: <20241015211329.21113-15-kowal@linux.ibm.com>
In-Reply-To: <20241015211329.21113-1-kowal@linux.ibm.com>
References: <20241015211329.21113-1-kowal@linux.ibm.com>

From: Glenn Miles

Added a new test for pool interrupts.

Signed-off-by: Glenn Miles
Signed-off-by: Michael Kowal
---
tests/qtest/pnv-xive2-test.c | 77 ++++++++++++++++++++++++++++++++++++
1 file changed, 77 insertions(+)

diff --git a/tests/qtest/pnv-xive2-test.c b/tests/qtest/pnv-xive2-test.c
index a6008bc053..6e7e7f0d9b 100644
--- a/tests/qtest/pnv-xive2-test.c
+++ b/tests/qtest/pnv-xive2-test.c
@@ -4,6 +4,7 @@ * - Test 'Pull Thread Context to Odd Thread Reporting Line' * - Test irq to hardware group * - Test irq to hardware group going through backlog + * - Test irq to pool thread * * Copyright (c) 2024, IBM Corporation.
* @@ -267,6 +268,79 @@ static void test_hw_irq(QTestState *qts) g_assert_cmphex(cppr, ==, 0xFF); } +static void test_pool_irq(QTestState *qts) +{ + uint32_t irq = 2; + uint32_t irq_data = 0x600d0d06; + uint32_t end_index = 5; + uint32_t target_pir = 1; + uint32_t target_nvp = 0x100 + target_pir; + uint8_t priority = 5; + uint32_t reg32; + uint16_t reg16; + uint8_t pq, nsr, cppr, ipb; + + printf("# ============================================================\n"); + printf("# Testing irq %d to pool thread %d\n", irq, target_pir); + + /* irq config */ + set_eas(qts, irq, end_index, irq_data); + set_end(qts, end_index, target_nvp, priority, false /* group */); + + /* enable and trigger irq */ + get_esb(qts, irq, XIVE_EOI_PAGE, XIVE_ESB_SET_PQ_00); + set_esb(qts, irq, XIVE_TRIGGER_PAGE, 0, 0); + + /* check irq is raised on cpu */ + pq = get_esb(qts, irq, XIVE_EOI_PAGE, XIVE_ESB_GET); + g_assert_cmpuint(pq, ==, XIVE_ESB_PENDING); + + /* check TIMA values in the PHYS ring (shared by POOL ring) */ + reg32 = get_tima32(qts, target_pir, TM_QW3_HV_PHYS + TM_WORD0); + nsr = reg32 >> 24; + cppr = (reg32 >> 16) & 0xFF; + g_assert_cmphex(nsr, ==, 0x40); + g_assert_cmphex(cppr, ==, 0xFF); + + /* check TIMA values in the POOL ring */ + reg32 = get_tima32(qts, target_pir, TM_QW2_HV_POOL + TM_WORD0); + nsr = reg32 >> 24; + cppr = (reg32 >> 16) & 0xFF; + ipb = (reg32 >> 8) & 0xFF; + g_assert_cmphex(nsr, ==, 0); + g_assert_cmphex(cppr, ==, 0); + g_assert_cmphex(ipb, ==, 0x80 >> priority); + + /* ack the irq */ + reg16 = get_tima16(qts, target_pir, TM_SPC_ACK_HV_REG); + nsr = reg16 >> 8; + cppr = reg16 & 0xFF; + g_assert_cmphex(nsr, ==, 0x40); + g_assert_cmphex(cppr, ==, priority); + + /* check irq data is what was configured */ + reg32 = qtest_readl(qts, xive_get_queue_addr(end_index)); + g_assert_cmphex((reg32 & 0x7fffffff), ==, (irq_data & 0x7fffffff)); + + /* check IPB is cleared in the POOL ring */ + reg32 = get_tima32(qts, target_pir, TM_QW2_HV_POOL + TM_WORD0); + ipb = (reg32 >> 8) & 0xFF; + g_assert_cmphex(ipb, ==, 0); + + /* End Of Interrupt */ + set_esb(qts, irq, XIVE_EOI_PAGE, XIVE_ESB_STORE_EOI, 0); + pq = get_esb(qts, irq, XIVE_EOI_PAGE, XIVE_ESB_GET); + g_assert_cmpuint(pq, ==, XIVE_ESB_RESET); + + /* reset CPPR */ + set_tima8(qts, target_pir, TM_QW3_HV_PHYS + TM_CPPR, 0xFF); + reg32 = get_tima32(qts, target_pir, TM_QW3_HV_PHYS + TM_WORD0); + nsr = reg32 >> 24; + cppr = (reg32 >> 16) & 0xFF; + g_assert_cmphex(nsr, ==, 0x00); + g_assert_cmphex(cppr, ==, 0xFF); +} + #define XIVE_ODD_CL 0x80 static void test_pull_thread_ctx_to_odd_thread_cl(QTestState *qts) { @@ -485,6 +559,9 @@ static void test_xive(void) /* omit reset_state here and use settings from test_hw_irq */ test_pull_thread_ctx_to_odd_thread_cl(qts); + reset_state(qts); + test_pool_irq(qts); + reset_state(qts); test_hw_group_irq(qts);