From patchwork Wed Feb 5 04:03:31 2025
X-Patchwork-Submitter: Richard Henderson
X-Patchwork-Id: 13960548
From: Richard Henderson
To: qemu-devel@nongnu.org
Subject: [PATCH 01/11] tcg: Drop support for two address registers in gen_ldst
Date: Tue, 4 Feb 2025 20:03:31 -0800
Message-ID: <20250205040341.2056361-2-richard.henderson@linaro.org>
In-Reply-To: <20250205040341.2056361-1-richard.henderson@linaro.org>
References: <20250205040341.2056361-1-richard.henderson@linaro.org>

Signed-off-by: Richard Henderson
Reviewed-by: Philippe Mathieu-Daudé
---
 tcg/tcg-op-ldst.c | 22 ++++------------------
 1 file changed, 4 insertions(+), 18 deletions(-)

diff --git a/tcg/tcg-op-ldst.c b/tcg/tcg-op-ldst.c
index 77271e0193..c3e9bf992a 100644
--- a/tcg/tcg-op-ldst.c
+++ b/tcg/tcg-op-ldst.c
@@ -91,25 +91,11 @@ static MemOp tcg_canonicalize_memop(MemOp op, bool is64, bool st)
 
 static void gen_ldst(TCGOpcode opc, TCGType type, TCGTemp *vl, TCGTemp *vh,
                      TCGTemp *addr, MemOpIdx oi)
 {
-    if (TCG_TARGET_REG_BITS == 64 || tcg_ctx->addr_type == TCG_TYPE_I32) {
-        if (vh) {
-            tcg_gen_op4(opc, type, temp_arg(vl), temp_arg(vh),
-                        temp_arg(addr), oi);
-        } else {
-            tcg_gen_op3(opc, type, temp_arg(vl), temp_arg(addr), oi);
-        }
+    assert(tcg_ctx->addr_type <= TCG_TYPE_REG);
+    if (vh) {
+        tcg_gen_op4(opc, type, temp_arg(vl), temp_arg(vh), temp_arg(addr), oi);
     } else {
-        /* See TCGV_LOW/HIGH. */
-        TCGTemp *al = addr + HOST_BIG_ENDIAN;
-        TCGTemp *ah = addr + !HOST_BIG_ENDIAN;
-
-        if (vh) {
-            tcg_gen_op5(opc, type, temp_arg(vl), temp_arg(vh),
-                        temp_arg(al), temp_arg(ah), oi);
-        } else {
-            tcg_gen_op4(opc, type, temp_arg(vl),
-                        temp_arg(al), temp_arg(ah), oi);
-        }
+        tcg_gen_op3(opc, type, temp_arg(vl), temp_arg(addr), oi);
     }
 }
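[Editor's note] The branch deleted above implemented the old 64-bit-guest-address-on-32-bit-host convention, where the address occupied two adjacent temporaries and host endianness selected the low half, the same layout TCGV_LOW()/TCGV_HIGH() use for 64-bit data. The sketch below is illustration only, not part of the patch; the helper names and standalone scaffolding are invented for the example.

```c
/*
 * Illustration only -- not part of the patch.  Models the pair layout
 * the deleted branch relied on: on a 32-bit host a 64-bit quantity
 * lives in two adjacent TCGTemps, and host endianness picks which one
 * is the low half (compare TCGV_LOW/TCGV_HIGH).
 */
#include <assert.h>
#include <stdio.h>

#define HOST_BIG_ENDIAN 0          /* assume a little-endian host for the demo */

typedef struct TCGTemp { int id; } TCGTemp;

/* Hypothetical helpers mirroring the removed al/ah computation. */
static TCGTemp *addr_low(TCGTemp *pair)  { return pair + HOST_BIG_ENDIAN; }
static TCGTemp *addr_high(TCGTemp *pair) { return pair + !HOST_BIG_ENDIAN; }

int main(void)
{
    TCGTemp pair[2] = { { .id = 0 }, { .id = 1 } };

    /* On a little-endian host, pair[0] is the low half, pair[1] the high. */
    assert(addr_low(pair) == &pair[0]);
    assert(addr_high(pair) == &pair[1]);

    printf("low=%d high=%d\n", addr_low(pair)->id, addr_high(pair)->id);
    return 0;
}
```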
From patchwork Wed Feb 5 04:03:32 2025
X-Patchwork-Submitter: Richard Henderson
X-Patchwork-Id: 13960551
From: Richard Henderson
To: qemu-devel@nongnu.org
Subject: [PATCH 02/11] tcg: Merge INDEX_op_qemu_*_{a32,a64}_*
Date: Tue, 4 Feb 2025 20:03:32 -0800
Message-ID: <20250205040341.2056361-3-richard.henderson@linaro.org>
In-Reply-To: <20250205040341.2056361-1-richard.henderson@linaro.org>
References: <20250205040341.2056361-1-richard.henderson@linaro.org>

Since 64-on-32 is now unsupported, guest addresses always fit in one
host register.  Drop the replication of opcodes.
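[Editor's note] As orientation before the full diff, this is the shape of the change in a single emitter, condensed from the tcg/tcg-op-ldst.c hunks below; the i64, i128 and store paths follow the same pattern. This is an excerpt of the patch, not new code.

```c
/* Before: one opcode per guest address width, chosen at emit time. */
if (tcg_ctx->addr_type == TCG_TYPE_I32) {
    opc = INDEX_op_qemu_ld_a32_i32;
} else {
    opc = INDEX_op_qemu_ld_a64_i32;
}
gen_ldst(opc, TCG_TYPE_I32, tcgv_i32_temp(val), NULL, addr, oi);

/* After: the address always fits one host register, so one opcode suffices. */
gen_ldst(INDEX_op_qemu_ld_i32, TCG_TYPE_I32,
         tcgv_i32_temp(val), NULL, addr, oi);
```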
Signed-off-by: Richard Henderson
Reviewed-by: Philippe Mathieu-Daudé
---
 include/tcg/tcg-opc.h            |  28 ++------
 tcg/optimize.c                   |  21 ++----
 tcg/tcg-op-ldst.c                |  82 +++++----------------
 tcg/tcg.c                        |  42 ++++-------
 tcg/tci.c                        | 119 ++++++-------------------------
 tcg/aarch64/tcg-target.c.inc     |  36 ++++------
 tcg/arm/tcg-target.c.inc         |  40 +++--------
 tcg/i386/tcg-target.c.inc        |  69 ++++--------------
 tcg/loongarch64/tcg-target.c.inc |  36 ++++------
 tcg/mips/tcg-target.c.inc        |  51 +++----------
 tcg/ppc/tcg-target.c.inc         |  68 ++++--------------
 tcg/riscv/tcg-target.c.inc       |  24 +++----
 tcg/s390x/tcg-target.c.inc       |  36 ++++------
 tcg/sparc64/tcg-target.c.inc     |  24 +++----
 tcg/tci/tcg-target.c.inc         |  60 ++++------------
 15 files changed, 177 insertions(+), 559 deletions(-)

diff --git a/include/tcg/tcg-opc.h b/include/tcg/tcg-opc.h
index 9383e295f4..5bf78b0764 100644
--- a/include/tcg/tcg-opc.h
+++ b/include/tcg/tcg-opc.h
@@ -188,36 +188,22 @@ DEF(goto_ptr, 0, 1, 0, TCG_OPF_BB_EXIT | TCG_OPF_BB_END)
 DEF(plugin_cb, 0, 0, 1, TCG_OPF_NOT_PRESENT)
 DEF(plugin_mem_cb, 0, 1, 1, TCG_OPF_NOT_PRESENT)
 
-/* Replicate ld/st ops for 32 and 64-bit guest addresses.
*/ -DEF(qemu_ld_a32_i32, 1, 1, 1, +DEF(qemu_ld_i32, 1, 1, 1, TCG_OPF_CALL_CLOBBER | TCG_OPF_SIDE_EFFECTS) -DEF(qemu_st_a32_i32, 0, 1 + 1, 1, +DEF(qemu_st_i32, 0, 1 + 1, 1, TCG_OPF_CALL_CLOBBER | TCG_OPF_SIDE_EFFECTS) -DEF(qemu_ld_a32_i64, DATA64_ARGS, 1, 1, +DEF(qemu_ld_i64, DATA64_ARGS, 1, 1, TCG_OPF_CALL_CLOBBER | TCG_OPF_SIDE_EFFECTS) -DEF(qemu_st_a32_i64, 0, DATA64_ARGS + 1, 1, - TCG_OPF_CALL_CLOBBER | TCG_OPF_SIDE_EFFECTS) - -DEF(qemu_ld_a64_i32, 1, DATA64_ARGS, 1, - TCG_OPF_CALL_CLOBBER | TCG_OPF_SIDE_EFFECTS) -DEF(qemu_st_a64_i32, 0, 1 + DATA64_ARGS, 1, - TCG_OPF_CALL_CLOBBER | TCG_OPF_SIDE_EFFECTS) -DEF(qemu_ld_a64_i64, DATA64_ARGS, DATA64_ARGS, 1, - TCG_OPF_CALL_CLOBBER | TCG_OPF_SIDE_EFFECTS) -DEF(qemu_st_a64_i64, 0, DATA64_ARGS + DATA64_ARGS, 1, +DEF(qemu_st_i64, 0, DATA64_ARGS + 1, 1, TCG_OPF_CALL_CLOBBER | TCG_OPF_SIDE_EFFECTS) /* Only used by i386 to cope with stupid register constraints. */ -DEF(qemu_st8_a32_i32, 0, 1 + 1, 1, - TCG_OPF_CALL_CLOBBER | TCG_OPF_SIDE_EFFECTS) -DEF(qemu_st8_a64_i32, 0, 1 + DATA64_ARGS, 1, +DEF(qemu_st8_i32, 0, 1 + 1, 1, TCG_OPF_CALL_CLOBBER | TCG_OPF_SIDE_EFFECTS) /* Only for 64-bit hosts at the moment. */ -DEF(qemu_ld_a32_i128, 2, 1, 1, TCG_OPF_CALL_CLOBBER | TCG_OPF_SIDE_EFFECTS) -DEF(qemu_ld_a64_i128, 2, 1, 1, TCG_OPF_CALL_CLOBBER | TCG_OPF_SIDE_EFFECTS) -DEF(qemu_st_a32_i128, 0, 3, 1, TCG_OPF_CALL_CLOBBER | TCG_OPF_SIDE_EFFECTS) -DEF(qemu_st_a64_i128, 0, 3, 1, TCG_OPF_CALL_CLOBBER | TCG_OPF_SIDE_EFFECTS) +DEF(qemu_ld_i128, 2, 1, 1, TCG_OPF_CALL_CLOBBER | TCG_OPF_SIDE_EFFECTS) +DEF(qemu_st_i128, 0, 3, 1, TCG_OPF_CALL_CLOBBER | TCG_OPF_SIDE_EFFECTS) /* Host vector support. */ diff --git a/tcg/optimize.c b/tcg/optimize.c index 8c6303e3af..996448c8bc 100644 --- a/tcg/optimize.c +++ b/tcg/optimize.c @@ -3008,29 +3008,22 @@ void tcg_optimize(TCGContext *s) CASE_OP_32_64_VEC(orc): done = fold_orc(&ctx, op); break; - case INDEX_op_qemu_ld_a32_i32: - case INDEX_op_qemu_ld_a64_i32: + case INDEX_op_qemu_ld_i32: done = fold_qemu_ld_1reg(&ctx, op); break; - case INDEX_op_qemu_ld_a32_i64: - case INDEX_op_qemu_ld_a64_i64: + case INDEX_op_qemu_ld_i64: if (TCG_TARGET_REG_BITS == 64) { done = fold_qemu_ld_1reg(&ctx, op); break; } QEMU_FALLTHROUGH; - case INDEX_op_qemu_ld_a32_i128: - case INDEX_op_qemu_ld_a64_i128: + case INDEX_op_qemu_ld_i128: done = fold_qemu_ld_2reg(&ctx, op); break; - case INDEX_op_qemu_st8_a32_i32: - case INDEX_op_qemu_st8_a64_i32: - case INDEX_op_qemu_st_a32_i32: - case INDEX_op_qemu_st_a64_i32: - case INDEX_op_qemu_st_a32_i64: - case INDEX_op_qemu_st_a64_i64: - case INDEX_op_qemu_st_a32_i128: - case INDEX_op_qemu_st_a64_i128: + case INDEX_op_qemu_st8_i32: + case INDEX_op_qemu_st_i32: + case INDEX_op_qemu_st_i64: + case INDEX_op_qemu_st_i128: done = fold_qemu_st(&ctx, op); break; CASE_OP_32_64(rem): diff --git a/tcg/tcg-op-ldst.c b/tcg/tcg-op-ldst.c index c3e9bf992a..96553f3c3d 100644 --- a/tcg/tcg-op-ldst.c +++ b/tcg/tcg-op-ldst.c @@ -218,7 +218,6 @@ static void tcg_gen_qemu_ld_i32_int(TCGv_i32 val, TCGTemp *addr, MemOp orig_memop; MemOpIdx orig_oi, oi; TCGv_i64 copy_addr; - TCGOpcode opc; tcg_gen_req_mo(TCG_MO_LD_LD | TCG_MO_ST_LD); orig_memop = memop = tcg_canonicalize_memop(memop, 0, 0); @@ -234,12 +233,8 @@ static void tcg_gen_qemu_ld_i32_int(TCGv_i32 val, TCGTemp *addr, } copy_addr = plugin_maybe_preserve_addr(addr); - if (tcg_ctx->addr_type == TCG_TYPE_I32) { - opc = INDEX_op_qemu_ld_a32_i32; - } else { - opc = INDEX_op_qemu_ld_a64_i32; - } - gen_ldst(opc, TCG_TYPE_I32, tcgv_i32_temp(val), NULL, addr, oi); + 
gen_ldst(INDEX_op_qemu_ld_i32, TCG_TYPE_I32, + tcgv_i32_temp(val), NULL, addr, oi); plugin_gen_mem_callbacks_i32(val, copy_addr, addr, orig_oi, QEMU_PLUGIN_MEM_R); @@ -296,17 +291,9 @@ static void tcg_gen_qemu_st_i32_int(TCGv_i32 val, TCGTemp *addr, } if (TCG_TARGET_HAS_qemu_st8_i32 && (memop & MO_SIZE) == MO_8) { - if (tcg_ctx->addr_type == TCG_TYPE_I32) { - opc = INDEX_op_qemu_st8_a32_i32; - } else { - opc = INDEX_op_qemu_st8_a64_i32; - } + opc = INDEX_op_qemu_st8_i32; } else { - if (tcg_ctx->addr_type == TCG_TYPE_I32) { - opc = INDEX_op_qemu_st_a32_i32; - } else { - opc = INDEX_op_qemu_st_a64_i32; - } + opc = INDEX_op_qemu_st_i32; } gen_ldst(opc, TCG_TYPE_I32, tcgv_i32_temp(val), NULL, addr, oi); plugin_gen_mem_callbacks_i32(val, NULL, addr, orig_oi, QEMU_PLUGIN_MEM_W); @@ -330,7 +317,6 @@ static void tcg_gen_qemu_ld_i64_int(TCGv_i64 val, TCGTemp *addr, MemOp orig_memop; MemOpIdx orig_oi, oi; TCGv_i64 copy_addr; - TCGOpcode opc; if (TCG_TARGET_REG_BITS == 32 && (memop & MO_SIZE) < MO_64) { tcg_gen_qemu_ld_i32_int(TCGV_LOW(val), addr, idx, memop); @@ -356,12 +342,7 @@ static void tcg_gen_qemu_ld_i64_int(TCGv_i64 val, TCGTemp *addr, } copy_addr = plugin_maybe_preserve_addr(addr); - if (tcg_ctx->addr_type == TCG_TYPE_I32) { - opc = INDEX_op_qemu_ld_a32_i64; - } else { - opc = INDEX_op_qemu_ld_a64_i64; - } - gen_ldst_i64(opc, val, addr, oi); + gen_ldst_i64(INDEX_op_qemu_ld_i64, val, addr, oi); plugin_gen_mem_callbacks_i64(val, copy_addr, addr, orig_oi, QEMU_PLUGIN_MEM_R); @@ -398,7 +379,6 @@ static void tcg_gen_qemu_st_i64_int(TCGv_i64 val, TCGTemp *addr, { TCGv_i64 swap = NULL; MemOpIdx orig_oi, oi; - TCGOpcode opc; if (TCG_TARGET_REG_BITS == 32 && (memop & MO_SIZE) < MO_64) { tcg_gen_qemu_st_i32_int(TCGV_LOW(val), addr, idx, memop); @@ -429,12 +409,7 @@ static void tcg_gen_qemu_st_i64_int(TCGv_i64 val, TCGTemp *addr, oi = make_memop_idx(memop, idx); } - if (tcg_ctx->addr_type == TCG_TYPE_I32) { - opc = INDEX_op_qemu_st_a32_i64; - } else { - opc = INDEX_op_qemu_st_a64_i64; - } - gen_ldst_i64(opc, val, addr, oi); + gen_ldst_i64(INDEX_op_qemu_st_i64, val, addr, oi); plugin_gen_mem_callbacks_i64(val, NULL, addr, orig_oi, QEMU_PLUGIN_MEM_W); if (swap) { @@ -546,7 +521,6 @@ static void tcg_gen_qemu_ld_i128_int(TCGv_i128 val, TCGTemp *addr, { MemOpIdx orig_oi; TCGv_i64 ext_addr = NULL; - TCGOpcode opc; check_max_alignment(memop_alignment_bits(memop)); tcg_gen_req_mo(TCG_MO_LD_LD | TCG_MO_ST_LD); @@ -574,12 +548,7 @@ static void tcg_gen_qemu_ld_i128_int(TCGv_i128 val, TCGTemp *addr, hi = TCGV128_HIGH(val); } - if (tcg_ctx->addr_type == TCG_TYPE_I32) { - opc = INDEX_op_qemu_ld_a32_i128; - } else { - opc = INDEX_op_qemu_ld_a64_i128; - } - gen_ldst(opc, TCG_TYPE_I128, tcgv_i64_temp(lo), + gen_ldst(INDEX_op_qemu_ld_i128, TCG_TYPE_I128, tcgv_i64_temp(lo), tcgv_i64_temp(hi), addr, oi); if (need_bswap) { @@ -595,12 +564,6 @@ static void tcg_gen_qemu_ld_i128_int(TCGv_i128 val, TCGTemp *addr, canonicalize_memop_i128_as_i64(mop, memop); need_bswap = (mop[0] ^ memop) & MO_BSWAP; - if (tcg_ctx->addr_type == TCG_TYPE_I32) { - opc = INDEX_op_qemu_ld_a32_i64; - } else { - opc = INDEX_op_qemu_ld_a64_i64; - } - /* * Since there are no global TCGv_i128, there is no visible state * changed if the second load faults. 
Load directly into the two @@ -614,7 +577,8 @@ static void tcg_gen_qemu_ld_i128_int(TCGv_i128 val, TCGTemp *addr, y = TCGV128_LOW(val); } - gen_ldst_i64(opc, x, addr, make_memop_idx(mop[0], idx)); + gen_ldst_i64(INDEX_op_qemu_ld_i64, x, addr, + make_memop_idx(mop[0], idx)); if (need_bswap) { tcg_gen_bswap64_i64(x, x); @@ -630,7 +594,8 @@ static void tcg_gen_qemu_ld_i128_int(TCGv_i128 val, TCGTemp *addr, addr_p8 = tcgv_i64_temp(t); } - gen_ldst_i64(opc, y, addr_p8, make_memop_idx(mop[1], idx)); + gen_ldst_i64(INDEX_op_qemu_ld_i64, y, addr_p8, + make_memop_idx(mop[1], idx)); tcg_temp_free_internal(addr_p8); if (need_bswap) { @@ -664,7 +629,6 @@ static void tcg_gen_qemu_st_i128_int(TCGv_i128 val, TCGTemp *addr, { MemOpIdx orig_oi; TCGv_i64 ext_addr = NULL; - TCGOpcode opc; check_max_alignment(memop_alignment_bits(memop)); tcg_gen_req_mo(TCG_MO_ST_LD | TCG_MO_ST_ST); @@ -695,13 +659,8 @@ static void tcg_gen_qemu_st_i128_int(TCGv_i128 val, TCGTemp *addr, hi = TCGV128_HIGH(val); } - if (tcg_ctx->addr_type == TCG_TYPE_I32) { - opc = INDEX_op_qemu_st_a32_i128; - } else { - opc = INDEX_op_qemu_st_a64_i128; - } - gen_ldst(opc, TCG_TYPE_I128, tcgv_i64_temp(lo), - tcgv_i64_temp(hi), addr, oi); + gen_ldst(INDEX_op_qemu_st_i128, TCG_TYPE_I128, + tcgv_i64_temp(lo), tcgv_i64_temp(hi), addr, oi); if (need_bswap) { tcg_temp_free_i64(lo); @@ -714,12 +673,6 @@ static void tcg_gen_qemu_st_i128_int(TCGv_i128 val, TCGTemp *addr, canonicalize_memop_i128_as_i64(mop, memop); - if (tcg_ctx->addr_type == TCG_TYPE_I32) { - opc = INDEX_op_qemu_st_a32_i64; - } else { - opc = INDEX_op_qemu_st_a64_i64; - } - if ((memop & MO_BSWAP) == MO_LE) { x = TCGV128_LOW(val); y = TCGV128_HIGH(val); @@ -734,7 +687,8 @@ static void tcg_gen_qemu_st_i128_int(TCGv_i128 val, TCGTemp *addr, x = b; } - gen_ldst_i64(opc, x, addr, make_memop_idx(mop[0], idx)); + gen_ldst_i64(INDEX_op_qemu_st_i64, x, addr, + make_memop_idx(mop[0], idx)); if (tcg_ctx->addr_type == TCG_TYPE_I32) { TCGv_i32 t = tcg_temp_ebb_new_i32(); @@ -748,10 +702,12 @@ static void tcg_gen_qemu_st_i128_int(TCGv_i128 val, TCGTemp *addr, if (b) { tcg_gen_bswap64_i64(b, y); - gen_ldst_i64(opc, b, addr_p8, make_memop_idx(mop[1], idx)); + gen_ldst_i64(INDEX_op_qemu_st_i64, b, addr_p8, + make_memop_idx(mop[1], idx)); tcg_temp_free_i64(b); } else { - gen_ldst_i64(opc, y, addr_p8, make_memop_idx(mop[1], idx)); + gen_ldst_i64(INDEX_op_qemu_st_i64, y, addr_p8, + make_memop_idx(mop[1], idx)); } tcg_temp_free_internal(addr_p8); } else { diff --git a/tcg/tcg.c b/tcg/tcg.c index 43b6712286..295004b74f 100644 --- a/tcg/tcg.c +++ b/tcg/tcg.c @@ -2153,24 +2153,17 @@ bool tcg_op_supported(TCGOpcode op, TCGType type, unsigned flags) case INDEX_op_exit_tb: case INDEX_op_goto_tb: case INDEX_op_goto_ptr: - case INDEX_op_qemu_ld_a32_i32: - case INDEX_op_qemu_ld_a64_i32: - case INDEX_op_qemu_st_a32_i32: - case INDEX_op_qemu_st_a64_i32: - case INDEX_op_qemu_ld_a32_i64: - case INDEX_op_qemu_ld_a64_i64: - case INDEX_op_qemu_st_a32_i64: - case INDEX_op_qemu_st_a64_i64: + case INDEX_op_qemu_ld_i32: + case INDEX_op_qemu_st_i32: + case INDEX_op_qemu_ld_i64: + case INDEX_op_qemu_st_i64: return true; - case INDEX_op_qemu_st8_a32_i32: - case INDEX_op_qemu_st8_a64_i32: + case INDEX_op_qemu_st8_i32: return TCG_TARGET_HAS_qemu_st8_i32; - case INDEX_op_qemu_ld_a32_i128: - case INDEX_op_qemu_ld_a64_i128: - case INDEX_op_qemu_st_a32_i128: - case INDEX_op_qemu_st_a64_i128: + case INDEX_op_qemu_ld_i128: + case INDEX_op_qemu_st_i128: return TCG_TARGET_HAS_qemu_ldst_i128; case INDEX_op_mov_i32: @@ -2868,20 +2861,13 @@ 
void tcg_dump_ops(TCGContext *s, FILE *f, bool have_prefs) } i = 1; break; - case INDEX_op_qemu_ld_a32_i32: - case INDEX_op_qemu_ld_a64_i32: - case INDEX_op_qemu_st_a32_i32: - case INDEX_op_qemu_st_a64_i32: - case INDEX_op_qemu_st8_a32_i32: - case INDEX_op_qemu_st8_a64_i32: - case INDEX_op_qemu_ld_a32_i64: - case INDEX_op_qemu_ld_a64_i64: - case INDEX_op_qemu_st_a32_i64: - case INDEX_op_qemu_st_a64_i64: - case INDEX_op_qemu_ld_a32_i128: - case INDEX_op_qemu_ld_a64_i128: - case INDEX_op_qemu_st_a32_i128: - case INDEX_op_qemu_st_a64_i128: + case INDEX_op_qemu_ld_i32: + case INDEX_op_qemu_st_i32: + case INDEX_op_qemu_st8_i32: + case INDEX_op_qemu_ld_i64: + case INDEX_op_qemu_st_i64: + case INDEX_op_qemu_ld_i128: + case INDEX_op_qemu_st_i128: { const char *s_al, *s_op, *s_at; MemOpIdx oi = op->args[k++]; diff --git a/tcg/tci.c b/tcg/tci.c index 8c1c53424d..d223258efe 100644 --- a/tcg/tci.c +++ b/tcg/tci.c @@ -154,16 +154,6 @@ static void tci_args_rrrbb(uint32_t insn, TCGReg *r0, TCGReg *r1, *i4 = extract32(insn, 26, 6); } -static void tci_args_rrrrr(uint32_t insn, TCGReg *r0, TCGReg *r1, - TCGReg *r2, TCGReg *r3, TCGReg *r4) -{ - *r0 = extract32(insn, 8, 4); - *r1 = extract32(insn, 12, 4); - *r2 = extract32(insn, 16, 4); - *r3 = extract32(insn, 20, 4); - *r4 = extract32(insn, 24, 4); -} - static void tci_args_rrrr(uint32_t insn, TCGReg *r0, TCGReg *r1, TCGReg *r2, TCGReg *r3) { @@ -912,43 +902,21 @@ uintptr_t QEMU_DISABLE_CFI tcg_qemu_tb_exec(CPUArchState *env, tb_ptr = ptr; break; - case INDEX_op_qemu_ld_a32_i32: + case INDEX_op_qemu_ld_i32: tci_args_rrm(insn, &r0, &r1, &oi); - taddr = (uint32_t)regs[r1]; - goto do_ld_i32; - case INDEX_op_qemu_ld_a64_i32: - if (TCG_TARGET_REG_BITS == 64) { - tci_args_rrm(insn, &r0, &r1, &oi); - taddr = regs[r1]; - } else { - tci_args_rrrr(insn, &r0, &r1, &r2, &r3); - taddr = tci_uint64(regs[r2], regs[r1]); - oi = regs[r3]; - } - do_ld_i32: + taddr = regs[r1]; regs[r0] = tci_qemu_ld(env, taddr, oi, tb_ptr); break; - case INDEX_op_qemu_ld_a32_i64: - if (TCG_TARGET_REG_BITS == 64) { - tci_args_rrm(insn, &r0, &r1, &oi); - taddr = (uint32_t)regs[r1]; - } else { - tci_args_rrrr(insn, &r0, &r1, &r2, &r3); - taddr = (uint32_t)regs[r2]; - oi = regs[r3]; - } - goto do_ld_i64; - case INDEX_op_qemu_ld_a64_i64: + case INDEX_op_qemu_ld_i64: if (TCG_TARGET_REG_BITS == 64) { tci_args_rrm(insn, &r0, &r1, &oi); taddr = regs[r1]; } else { - tci_args_rrrrr(insn, &r0, &r1, &r2, &r3, &r4); - taddr = tci_uint64(regs[r3], regs[r2]); - oi = regs[r4]; + tci_args_rrrr(insn, &r0, &r1, &r2, &r3); + taddr = regs[r2]; + oi = regs[r3]; } - do_ld_i64: tmp64 = tci_qemu_ld(env, taddr, oi, tb_ptr); if (TCG_TARGET_REG_BITS == 32) { tci_write_reg64(regs, r1, r0, tmp64); @@ -957,47 +925,23 @@ uintptr_t QEMU_DISABLE_CFI tcg_qemu_tb_exec(CPUArchState *env, } break; - case INDEX_op_qemu_st_a32_i32: + case INDEX_op_qemu_st_i32: tci_args_rrm(insn, &r0, &r1, &oi); - taddr = (uint32_t)regs[r1]; - goto do_st_i32; - case INDEX_op_qemu_st_a64_i32: - if (TCG_TARGET_REG_BITS == 64) { - tci_args_rrm(insn, &r0, &r1, &oi); - taddr = regs[r1]; - } else { - tci_args_rrrr(insn, &r0, &r1, &r2, &r3); - taddr = tci_uint64(regs[r2], regs[r1]); - oi = regs[r3]; - } - do_st_i32: + taddr = regs[r1]; tci_qemu_st(env, taddr, regs[r0], oi, tb_ptr); break; - case INDEX_op_qemu_st_a32_i64: - if (TCG_TARGET_REG_BITS == 64) { - tci_args_rrm(insn, &r0, &r1, &oi); - tmp64 = regs[r0]; - taddr = (uint32_t)regs[r1]; - } else { - tci_args_rrrr(insn, &r0, &r1, &r2, &r3); - tmp64 = tci_uint64(regs[r1], regs[r0]); - taddr = 
(uint32_t)regs[r2]; - oi = regs[r3]; - } - goto do_st_i64; - case INDEX_op_qemu_st_a64_i64: + case INDEX_op_qemu_st_i64: if (TCG_TARGET_REG_BITS == 64) { tci_args_rrm(insn, &r0, &r1, &oi); tmp64 = regs[r0]; taddr = regs[r1]; } else { - tci_args_rrrrr(insn, &r0, &r1, &r2, &r3, &r4); + tci_args_rrrr(insn, &r0, &r1, &r2, &r3); tmp64 = tci_uint64(regs[r1], regs[r0]); - taddr = tci_uint64(regs[r3], regs[r2]); - oi = regs[r4]; + taddr = regs[r2]; + oi = regs[r3]; } - do_st_i64: tci_qemu_st(env, taddr, tmp64, oi, tb_ptr); break; @@ -1269,42 +1213,21 @@ int print_insn_tci(bfd_vma addr, disassemble_info *info) str_r(r3), str_r(r4), str_r(r5)); break; - case INDEX_op_qemu_ld_a32_i32: - case INDEX_op_qemu_st_a32_i32: - len = 1 + 1; - goto do_qemu_ldst; - case INDEX_op_qemu_ld_a32_i64: - case INDEX_op_qemu_st_a32_i64: - case INDEX_op_qemu_ld_a64_i32: - case INDEX_op_qemu_st_a64_i32: - len = 1 + DIV_ROUND_UP(64, TCG_TARGET_REG_BITS); - goto do_qemu_ldst; - case INDEX_op_qemu_ld_a64_i64: - case INDEX_op_qemu_st_a64_i64: - len = 2 * DIV_ROUND_UP(64, TCG_TARGET_REG_BITS); - goto do_qemu_ldst; - do_qemu_ldst: - switch (len) { - case 2: - tci_args_rrm(insn, &r0, &r1, &oi); - info->fprintf_func(info->stream, "%-12s %s, %s, %x", - op_name, str_r(r0), str_r(r1), oi); - break; - case 3: + case INDEX_op_qemu_ld_i64: + case INDEX_op_qemu_st_i64: + if (TCG_TARGET_REG_BITS == 32) { tci_args_rrrr(insn, &r0, &r1, &r2, &r3); info->fprintf_func(info->stream, "%-12s %s, %s, %s, %s", op_name, str_r(r0), str_r(r1), str_r(r2), str_r(r3)); break; - case 4: - tci_args_rrrrr(insn, &r0, &r1, &r2, &r3, &r4); - info->fprintf_func(info->stream, "%-12s %s, %s, %s, %s, %s", - op_name, str_r(r0), str_r(r1), - str_r(r2), str_r(r3), str_r(r4)); - break; - default: - g_assert_not_reached(); } + /* fall through */ + case INDEX_op_qemu_ld_i32: + case INDEX_op_qemu_st_i32: + tci_args_rrm(insn, &r0, &r1, &oi); + info->fprintf_func(info->stream, "%-12s %s, %s, %x", + op_name, str_r(r0), str_r(r1), oi); break; case 0: diff --git a/tcg/aarch64/tcg-target.c.inc b/tcg/aarch64/tcg-target.c.inc index 66eb4b73b5..45dc2c649b 100644 --- a/tcg/aarch64/tcg-target.c.inc +++ b/tcg/aarch64/tcg-target.c.inc @@ -2398,24 +2398,18 @@ static void tcg_out_op(TCGContext *s, TCGOpcode opc, TCGType ext, tcg_out_insn(s, 3506, CSEL, ext, a0, REG0(3), REG0(4), args[5]); break; - case INDEX_op_qemu_ld_a32_i32: - case INDEX_op_qemu_ld_a64_i32: - case INDEX_op_qemu_ld_a32_i64: - case INDEX_op_qemu_ld_a64_i64: + case INDEX_op_qemu_ld_i32: + case INDEX_op_qemu_ld_i64: tcg_out_qemu_ld(s, a0, a1, a2, ext); break; - case INDEX_op_qemu_st_a32_i32: - case INDEX_op_qemu_st_a64_i32: - case INDEX_op_qemu_st_a32_i64: - case INDEX_op_qemu_st_a64_i64: + case INDEX_op_qemu_st_i32: + case INDEX_op_qemu_st_i64: tcg_out_qemu_st(s, REG0(0), a1, a2, ext); break; - case INDEX_op_qemu_ld_a32_i128: - case INDEX_op_qemu_ld_a64_i128: + case INDEX_op_qemu_ld_i128: tcg_out_qemu_ldst_i128(s, a0, a1, a2, args[3], true); break; - case INDEX_op_qemu_st_a32_i128: - case INDEX_op_qemu_st_a64_i128: + case INDEX_op_qemu_st_i128: tcg_out_qemu_ldst_i128(s, REG0(0), REG0(1), a2, args[3], false); break; @@ -3084,21 +3078,15 @@ tcg_target_op_def(TCGOpcode op, TCGType type, unsigned flags) case INDEX_op_movcond_i64: return C_O1_I4(r, r, rC, rZ, rZ); - case INDEX_op_qemu_ld_a32_i32: - case INDEX_op_qemu_ld_a64_i32: - case INDEX_op_qemu_ld_a32_i64: - case INDEX_op_qemu_ld_a64_i64: + case INDEX_op_qemu_ld_i32: + case INDEX_op_qemu_ld_i64: return C_O1_I1(r, r); - case INDEX_op_qemu_ld_a32_i128: - case 
INDEX_op_qemu_ld_a64_i128: + case INDEX_op_qemu_ld_i128: return C_O2_I1(r, r, r); - case INDEX_op_qemu_st_a32_i32: - case INDEX_op_qemu_st_a64_i32: - case INDEX_op_qemu_st_a32_i64: - case INDEX_op_qemu_st_a64_i64: + case INDEX_op_qemu_st_i32: + case INDEX_op_qemu_st_i64: return C_O0_I2(rZ, r); - case INDEX_op_qemu_st_a32_i128: - case INDEX_op_qemu_st_a64_i128: + case INDEX_op_qemu_st_i128: return C_O0_I3(rZ, rZ, r); case INDEX_op_deposit_i32: diff --git a/tcg/arm/tcg-target.c.inc b/tcg/arm/tcg-target.c.inc index 12dad7307f..05bb367a39 100644 --- a/tcg/arm/tcg-target.c.inc +++ b/tcg/arm/tcg-target.c.inc @@ -2071,37 +2071,21 @@ static void tcg_out_op(TCGContext *s, TCGOpcode opc, TCGType type, ARITH_MOV, args[0], 0, 0); break; - case INDEX_op_qemu_ld_a32_i32: + case INDEX_op_qemu_ld_i32: tcg_out_qemu_ld(s, args[0], -1, args[1], -1, args[2], TCG_TYPE_I32); break; - case INDEX_op_qemu_ld_a64_i32: - tcg_out_qemu_ld(s, args[0], -1, args[1], args[2], - args[3], TCG_TYPE_I32); - break; - case INDEX_op_qemu_ld_a32_i64: + case INDEX_op_qemu_ld_i64: tcg_out_qemu_ld(s, args[0], args[1], args[2], -1, args[3], TCG_TYPE_I64); break; - case INDEX_op_qemu_ld_a64_i64: - tcg_out_qemu_ld(s, args[0], args[1], args[2], args[3], - args[4], TCG_TYPE_I64); - break; - case INDEX_op_qemu_st_a32_i32: + case INDEX_op_qemu_st_i32: tcg_out_qemu_st(s, args[0], -1, args[1], -1, args[2], TCG_TYPE_I32); break; - case INDEX_op_qemu_st_a64_i32: - tcg_out_qemu_st(s, args[0], -1, args[1], args[2], - args[3], TCG_TYPE_I32); - break; - case INDEX_op_qemu_st_a32_i64: + case INDEX_op_qemu_st_i64: tcg_out_qemu_st(s, args[0], args[1], args[2], -1, args[3], TCG_TYPE_I64); break; - case INDEX_op_qemu_st_a64_i64: - tcg_out_qemu_st(s, args[0], args[1], args[2], args[3], - args[4], TCG_TYPE_I64); - break; case INDEX_op_bswap16_i32: tcg_out_bswap16(s, COND_AL, args[0], args[1], args[2]); @@ -2243,22 +2227,14 @@ tcg_target_op_def(TCGOpcode op, TCGType type, unsigned flags) case INDEX_op_setcond2_i32: return C_O1_I4(r, r, r, rI, rI); - case INDEX_op_qemu_ld_a32_i32: + case INDEX_op_qemu_ld_i32: return C_O1_I1(r, q); - case INDEX_op_qemu_ld_a64_i32: - return C_O1_I2(r, q, q); - case INDEX_op_qemu_ld_a32_i64: + case INDEX_op_qemu_ld_i64: return C_O2_I1(e, p, q); - case INDEX_op_qemu_ld_a64_i64: - return C_O2_I2(e, p, q, q); - case INDEX_op_qemu_st_a32_i32: + case INDEX_op_qemu_st_i32: return C_O0_I2(q, q); - case INDEX_op_qemu_st_a64_i32: - return C_O0_I3(q, q, q); - case INDEX_op_qemu_st_a32_i64: + case INDEX_op_qemu_st_i64: return C_O0_I3(Q, p, q); - case INDEX_op_qemu_st_a64_i64: - return C_O0_I4(Q, p, q, q); case INDEX_op_st_vec: return C_O0_I2(w, r); diff --git a/tcg/i386/tcg-target.c.inc b/tcg/i386/tcg-target.c.inc index 2cac151331..ca6e8abc57 100644 --- a/tcg/i386/tcg-target.c.inc +++ b/tcg/i386/tcg-target.c.inc @@ -2879,62 +2879,33 @@ static void tcg_out_op(TCGContext *s, TCGOpcode opc, TCGType type, tcg_out_modrm(s, OPC_GRP3_Ev + rexw, EXT3_NOT, a0); break; - case INDEX_op_qemu_ld_a64_i32: - if (TCG_TARGET_REG_BITS == 32) { - tcg_out_qemu_ld(s, a0, -1, a1, a2, args[3], TCG_TYPE_I32); - break; - } - /* fall through */ - case INDEX_op_qemu_ld_a32_i32: + case INDEX_op_qemu_ld_i32: tcg_out_qemu_ld(s, a0, -1, a1, -1, a2, TCG_TYPE_I32); break; - case INDEX_op_qemu_ld_a32_i64: + case INDEX_op_qemu_ld_i64: if (TCG_TARGET_REG_BITS == 64) { tcg_out_qemu_ld(s, a0, -1, a1, -1, a2, TCG_TYPE_I64); } else { tcg_out_qemu_ld(s, a0, a1, a2, -1, args[3], TCG_TYPE_I64); } break; - case INDEX_op_qemu_ld_a64_i64: - if (TCG_TARGET_REG_BITS == 64) { - 
tcg_out_qemu_ld(s, a0, -1, a1, -1, a2, TCG_TYPE_I64); - } else { - tcg_out_qemu_ld(s, a0, a1, a2, args[3], args[4], TCG_TYPE_I64); - } - break; - case INDEX_op_qemu_ld_a32_i128: - case INDEX_op_qemu_ld_a64_i128: + case INDEX_op_qemu_ld_i128: tcg_debug_assert(TCG_TARGET_REG_BITS == 64); tcg_out_qemu_ld(s, a0, a1, a2, -1, args[3], TCG_TYPE_I128); break; - case INDEX_op_qemu_st_a64_i32: - case INDEX_op_qemu_st8_a64_i32: - if (TCG_TARGET_REG_BITS == 32) { - tcg_out_qemu_st(s, a0, -1, a1, a2, args[3], TCG_TYPE_I32); - break; - } - /* fall through */ - case INDEX_op_qemu_st_a32_i32: - case INDEX_op_qemu_st8_a32_i32: + case INDEX_op_qemu_st_i32: + case INDEX_op_qemu_st8_i32: tcg_out_qemu_st(s, a0, -1, a1, -1, a2, TCG_TYPE_I32); break; - case INDEX_op_qemu_st_a32_i64: + case INDEX_op_qemu_st_i64: if (TCG_TARGET_REG_BITS == 64) { tcg_out_qemu_st(s, a0, -1, a1, -1, a2, TCG_TYPE_I64); } else { tcg_out_qemu_st(s, a0, a1, a2, -1, args[3], TCG_TYPE_I64); } break; - case INDEX_op_qemu_st_a64_i64: - if (TCG_TARGET_REG_BITS == 64) { - tcg_out_qemu_st(s, a0, -1, a1, -1, a2, TCG_TYPE_I64); - } else { - tcg_out_qemu_st(s, a0, a1, a2, args[3], args[4], TCG_TYPE_I64); - } - break; - case INDEX_op_qemu_st_a32_i128: - case INDEX_op_qemu_st_a64_i128: + case INDEX_op_qemu_st_i128: tcg_debug_assert(TCG_TARGET_REG_BITS == 64); tcg_out_qemu_st(s, a0, a1, a2, -1, args[3], TCG_TYPE_I128); break; @@ -3824,36 +3795,24 @@ tcg_target_op_def(TCGOpcode op, TCGType type, unsigned flags) case INDEX_op_clz_i64: return have_lzcnt ? C_N1_I2(r, r, rW) : C_N1_I2(r, r, r); - case INDEX_op_qemu_ld_a32_i32: + case INDEX_op_qemu_ld_i32: return C_O1_I1(r, L); - case INDEX_op_qemu_ld_a64_i32: - return TCG_TARGET_REG_BITS == 64 ? C_O1_I1(r, L) : C_O1_I2(r, L, L); - case INDEX_op_qemu_st_a32_i32: + case INDEX_op_qemu_st_i32: return C_O0_I2(L, L); - case INDEX_op_qemu_st_a64_i32: - return TCG_TARGET_REG_BITS == 64 ? C_O0_I2(L, L) : C_O0_I3(L, L, L); - case INDEX_op_qemu_st8_a32_i32: + case INDEX_op_qemu_st8_i32: return C_O0_I2(s, L); - case INDEX_op_qemu_st8_a64_i32: - return TCG_TARGET_REG_BITS == 64 ? C_O0_I2(s, L) : C_O0_I3(s, L, L); - case INDEX_op_qemu_ld_a32_i64: + case INDEX_op_qemu_ld_i64: return TCG_TARGET_REG_BITS == 64 ? C_O1_I1(r, L) : C_O2_I1(r, r, L); - case INDEX_op_qemu_ld_a64_i64: - return TCG_TARGET_REG_BITS == 64 ? C_O1_I1(r, L) : C_O2_I2(r, r, L, L); - case INDEX_op_qemu_st_a32_i64: + case INDEX_op_qemu_st_i64: return TCG_TARGET_REG_BITS == 64 ? C_O0_I2(L, L) : C_O0_I3(L, L, L); - case INDEX_op_qemu_st_a64_i64: - return TCG_TARGET_REG_BITS == 64 ? 
C_O0_I2(L, L) : C_O0_I4(L, L, L, L); - case INDEX_op_qemu_ld_a32_i128: - case INDEX_op_qemu_ld_a64_i128: + case INDEX_op_qemu_ld_i128: tcg_debug_assert(TCG_TARGET_REG_BITS == 64); return C_O2_I1(r, r, L); - case INDEX_op_qemu_st_a32_i128: - case INDEX_op_qemu_st_a64_i128: + case INDEX_op_qemu_st_i128: tcg_debug_assert(TCG_TARGET_REG_BITS == 64); return C_O0_I3(L, L, L); diff --git a/tcg/loongarch64/tcg-target.c.inc b/tcg/loongarch64/tcg-target.c.inc index cebe8dd354..4f32bf3e97 100644 --- a/tcg/loongarch64/tcg-target.c.inc +++ b/tcg/loongarch64/tcg-target.c.inc @@ -1675,28 +1675,22 @@ static void tcg_out_op(TCGContext *s, TCGOpcode opc, TCGType type, tcg_out_ldst(s, OPC_ST_D, a0, a1, a2); break; - case INDEX_op_qemu_ld_a32_i32: - case INDEX_op_qemu_ld_a64_i32: + case INDEX_op_qemu_ld_i32: tcg_out_qemu_ld(s, a0, a1, a2, TCG_TYPE_I32); break; - case INDEX_op_qemu_ld_a32_i64: - case INDEX_op_qemu_ld_a64_i64: + case INDEX_op_qemu_ld_i64: tcg_out_qemu_ld(s, a0, a1, a2, TCG_TYPE_I64); break; - case INDEX_op_qemu_ld_a32_i128: - case INDEX_op_qemu_ld_a64_i128: + case INDEX_op_qemu_ld_i128: tcg_out_qemu_ldst_i128(s, a0, a1, a2, a3, true); break; - case INDEX_op_qemu_st_a32_i32: - case INDEX_op_qemu_st_a64_i32: + case INDEX_op_qemu_st_i32: tcg_out_qemu_st(s, a0, a1, a2, TCG_TYPE_I32); break; - case INDEX_op_qemu_st_a32_i64: - case INDEX_op_qemu_st_a64_i64: + case INDEX_op_qemu_st_i64: tcg_out_qemu_st(s, a0, a1, a2, TCG_TYPE_I64); break; - case INDEX_op_qemu_st_a32_i128: - case INDEX_op_qemu_st_a64_i128: + case INDEX_op_qemu_st_i128: tcg_out_qemu_ldst_i128(s, a0, a1, a2, a3, false); break; @@ -2233,18 +2227,14 @@ tcg_target_op_def(TCGOpcode op, TCGType type, unsigned flags) case INDEX_op_st32_i64: case INDEX_op_st_i32: case INDEX_op_st_i64: - case INDEX_op_qemu_st_a32_i32: - case INDEX_op_qemu_st_a64_i32: - case INDEX_op_qemu_st_a32_i64: - case INDEX_op_qemu_st_a64_i64: + case INDEX_op_qemu_st_i32: + case INDEX_op_qemu_st_i64: return C_O0_I2(rZ, r); - case INDEX_op_qemu_ld_a32_i128: - case INDEX_op_qemu_ld_a64_i128: + case INDEX_op_qemu_ld_i128: return C_N2_I1(r, r, r); - case INDEX_op_qemu_st_a32_i128: - case INDEX_op_qemu_st_a64_i128: + case INDEX_op_qemu_st_i128: return C_O0_I3(r, r, r); case INDEX_op_brcond_i32: @@ -2290,10 +2280,8 @@ tcg_target_op_def(TCGOpcode op, TCGType type, unsigned flags) case INDEX_op_ld32u_i64: case INDEX_op_ld_i32: case INDEX_op_ld_i64: - case INDEX_op_qemu_ld_a32_i32: - case INDEX_op_qemu_ld_a64_i32: - case INDEX_op_qemu_ld_a32_i64: - case INDEX_op_qemu_ld_a64_i64: + case INDEX_op_qemu_ld_i32: + case INDEX_op_qemu_ld_i64: return C_O1_I1(r, r); case INDEX_op_andc_i32: diff --git a/tcg/mips/tcg-target.c.inc b/tcg/mips/tcg-target.c.inc index 99f6ef6c76..b1d512ca2a 100644 --- a/tcg/mips/tcg-target.c.inc +++ b/tcg/mips/tcg-target.c.inc @@ -2095,53 +2095,27 @@ static void tcg_out_op(TCGContext *s, TCGOpcode opc, TCGType type, tcg_out_setcond2(s, args[5], a0, a1, a2, args[3], args[4]); break; - case INDEX_op_qemu_ld_a64_i32: - if (TCG_TARGET_REG_BITS == 32) { - tcg_out_qemu_ld(s, a0, 0, a1, a2, args[3], TCG_TYPE_I32); - break; - } - /* fall through */ - case INDEX_op_qemu_ld_a32_i32: + case INDEX_op_qemu_ld_i32: tcg_out_qemu_ld(s, a0, 0, a1, 0, a2, TCG_TYPE_I32); break; - case INDEX_op_qemu_ld_a32_i64: + case INDEX_op_qemu_ld_i64: if (TCG_TARGET_REG_BITS == 64) { tcg_out_qemu_ld(s, a0, 0, a1, 0, a2, TCG_TYPE_I64); } else { tcg_out_qemu_ld(s, a0, a1, a2, 0, args[3], TCG_TYPE_I64); } break; - case INDEX_op_qemu_ld_a64_i64: - if (TCG_TARGET_REG_BITS == 64) { - tcg_out_qemu_ld(s, 
a0, 0, a1, 0, a2, TCG_TYPE_I64); - } else { - tcg_out_qemu_ld(s, a0, a1, a2, args[3], args[4], TCG_TYPE_I64); - } - break; - case INDEX_op_qemu_st_a64_i32: - if (TCG_TARGET_REG_BITS == 32) { - tcg_out_qemu_st(s, a0, 0, a1, a2, args[3], TCG_TYPE_I32); - break; - } - /* fall through */ - case INDEX_op_qemu_st_a32_i32: + case INDEX_op_qemu_st_i32: tcg_out_qemu_st(s, a0, 0, a1, 0, a2, TCG_TYPE_I32); break; - case INDEX_op_qemu_st_a32_i64: + case INDEX_op_qemu_st_i64: if (TCG_TARGET_REG_BITS == 64) { tcg_out_qemu_st(s, a0, 0, a1, 0, a2, TCG_TYPE_I64); } else { tcg_out_qemu_st(s, a0, a1, a2, 0, args[3], TCG_TYPE_I64); } break; - case INDEX_op_qemu_st_a64_i64: - if (TCG_TARGET_REG_BITS == 64) { - tcg_out_qemu_st(s, a0, 0, a1, 0, a2, TCG_TYPE_I64); - } else { - tcg_out_qemu_st(s, a0, a1, a2, args[3], args[4], TCG_TYPE_I64); - } - break; case INDEX_op_add2_i32: tcg_out_addsub2(s, a0, a1, a2, args[3], args[4], args[5], @@ -2301,23 +2275,14 @@ tcg_target_op_def(TCGOpcode op, TCGType type, unsigned flags) case INDEX_op_brcond2_i32: return C_O0_I4(rZ, rZ, rZ, rZ); - case INDEX_op_qemu_ld_a32_i32: + case INDEX_op_qemu_ld_i32: return C_O1_I1(r, r); - case INDEX_op_qemu_ld_a64_i32: - return TCG_TARGET_REG_BITS == 64 ? C_O1_I1(r, r) : C_O1_I2(r, r, r); - case INDEX_op_qemu_st_a32_i32: + case INDEX_op_qemu_st_i32: return C_O0_I2(rZ, r); - case INDEX_op_qemu_st_a64_i32: - return TCG_TARGET_REG_BITS == 64 ? C_O0_I2(rZ, r) : C_O0_I3(rZ, r, r); - case INDEX_op_qemu_ld_a32_i64: + case INDEX_op_qemu_ld_i64: return TCG_TARGET_REG_BITS == 64 ? C_O1_I1(r, r) : C_O2_I1(r, r, r); - case INDEX_op_qemu_ld_a64_i64: - return TCG_TARGET_REG_BITS == 64 ? C_O1_I1(r, r) : C_O2_I2(r, r, r, r); - case INDEX_op_qemu_st_a32_i64: + case INDEX_op_qemu_st_i64: return TCG_TARGET_REG_BITS == 64 ? C_O0_I2(rZ, r) : C_O0_I3(rZ, rZ, r); - case INDEX_op_qemu_st_a64_i64: - return (TCG_TARGET_REG_BITS == 64 ? 
C_O0_I2(rZ, r) - : C_O0_I4(rZ, rZ, r, r)); default: return C_NotImplemented; diff --git a/tcg/ppc/tcg-target.c.inc b/tcg/ppc/tcg-target.c.inc index 6e711cd53f..801cb6f3cb 100644 --- a/tcg/ppc/tcg-target.c.inc +++ b/tcg/ppc/tcg-target.c.inc @@ -3308,17 +3308,10 @@ static void tcg_out_op(TCGContext *s, TCGOpcode opc, TCGType type, tcg_out32(s, MODUD | TAB(args[0], args[1], args[2])); break; - case INDEX_op_qemu_ld_a64_i32: - if (TCG_TARGET_REG_BITS == 32) { - tcg_out_qemu_ld(s, args[0], -1, args[1], args[2], - args[3], TCG_TYPE_I32); - break; - } - /* fall through */ - case INDEX_op_qemu_ld_a32_i32: + case INDEX_op_qemu_ld_i32: tcg_out_qemu_ld(s, args[0], -1, args[1], -1, args[2], TCG_TYPE_I32); break; - case INDEX_op_qemu_ld_a32_i64: + case INDEX_op_qemu_ld_i64: if (TCG_TARGET_REG_BITS == 64) { tcg_out_qemu_ld(s, args[0], -1, args[1], -1, args[2], TCG_TYPE_I64); @@ -3327,32 +3320,15 @@ static void tcg_out_op(TCGContext *s, TCGOpcode opc, TCGType type, args[3], TCG_TYPE_I64); } break; - case INDEX_op_qemu_ld_a64_i64: - if (TCG_TARGET_REG_BITS == 64) { - tcg_out_qemu_ld(s, args[0], -1, args[1], -1, - args[2], TCG_TYPE_I64); - } else { - tcg_out_qemu_ld(s, args[0], args[1], args[2], args[3], - args[4], TCG_TYPE_I64); - } - break; - case INDEX_op_qemu_ld_a32_i128: - case INDEX_op_qemu_ld_a64_i128: + case INDEX_op_qemu_ld_i128: tcg_debug_assert(TCG_TARGET_REG_BITS == 64); tcg_out_qemu_ldst_i128(s, args[0], args[1], args[2], args[3], true); break; - case INDEX_op_qemu_st_a64_i32: - if (TCG_TARGET_REG_BITS == 32) { - tcg_out_qemu_st(s, args[0], -1, args[1], args[2], - args[3], TCG_TYPE_I32); - break; - } - /* fall through */ - case INDEX_op_qemu_st_a32_i32: + case INDEX_op_qemu_st_i32: tcg_out_qemu_st(s, args[0], -1, args[1], -1, args[2], TCG_TYPE_I32); break; - case INDEX_op_qemu_st_a32_i64: + case INDEX_op_qemu_st_i64: if (TCG_TARGET_REG_BITS == 64) { tcg_out_qemu_st(s, args[0], -1, args[1], -1, args[2], TCG_TYPE_I64); @@ -3361,17 +3337,7 @@ static void tcg_out_op(TCGContext *s, TCGOpcode opc, TCGType type, args[3], TCG_TYPE_I64); } break; - case INDEX_op_qemu_st_a64_i64: - if (TCG_TARGET_REG_BITS == 64) { - tcg_out_qemu_st(s, args[0], -1, args[1], -1, - args[2], TCG_TYPE_I64); - } else { - tcg_out_qemu_st(s, args[0], args[1], args[2], args[3], - args[4], TCG_TYPE_I64); - } - break; - case INDEX_op_qemu_st_a32_i128: - case INDEX_op_qemu_st_a64_i128: + case INDEX_op_qemu_st_i128: tcg_debug_assert(TCG_TARGET_REG_BITS == 64); tcg_out_qemu_ldst_i128(s, args[0], args[1], args[2], args[3], false); break; @@ -4306,29 +4272,19 @@ tcg_target_op_def(TCGOpcode op, TCGType type, unsigned flags) case INDEX_op_sub2_i32: return C_O2_I4(r, r, rI, rZM, r, r); - case INDEX_op_qemu_ld_a32_i32: + case INDEX_op_qemu_ld_i32: return C_O1_I1(r, r); - case INDEX_op_qemu_ld_a64_i32: - return TCG_TARGET_REG_BITS == 64 ? C_O1_I1(r, r) : C_O1_I2(r, r, r); - case INDEX_op_qemu_ld_a32_i64: + case INDEX_op_qemu_ld_i64: return TCG_TARGET_REG_BITS == 64 ? C_O1_I1(r, r) : C_O2_I1(r, r, r); - case INDEX_op_qemu_ld_a64_i64: - return TCG_TARGET_REG_BITS == 64 ? C_O1_I1(r, r) : C_O2_I2(r, r, r, r); - case INDEX_op_qemu_st_a32_i32: + case INDEX_op_qemu_st_i32: return C_O0_I2(r, r); - case INDEX_op_qemu_st_a64_i32: + case INDEX_op_qemu_st_i64: return TCG_TARGET_REG_BITS == 64 ? C_O0_I2(r, r) : C_O0_I3(r, r, r); - case INDEX_op_qemu_st_a32_i64: - return TCG_TARGET_REG_BITS == 64 ? C_O0_I2(r, r) : C_O0_I3(r, r, r); - case INDEX_op_qemu_st_a64_i64: - return TCG_TARGET_REG_BITS == 64 ? 
C_O0_I2(r, r) : C_O0_I4(r, r, r, r); - case INDEX_op_qemu_ld_a32_i128: - case INDEX_op_qemu_ld_a64_i128: + case INDEX_op_qemu_ld_i128: return C_N1O1_I1(o, m, r); - case INDEX_op_qemu_st_a32_i128: - case INDEX_op_qemu_st_a64_i128: + case INDEX_op_qemu_st_i128: return C_O0_I3(o, m, r); case INDEX_op_add_vec: diff --git a/tcg/riscv/tcg-target.c.inc b/tcg/riscv/tcg-target.c.inc index 61dc310c1a..55a3398712 100644 --- a/tcg/riscv/tcg-target.c.inc +++ b/tcg/riscv/tcg-target.c.inc @@ -2309,20 +2309,16 @@ static void tcg_out_op(TCGContext *s, TCGOpcode opc, TCGType type, args[3], const_args[3], args[4], const_args[4]); break; - case INDEX_op_qemu_ld_a32_i32: - case INDEX_op_qemu_ld_a64_i32: + case INDEX_op_qemu_ld_i32: tcg_out_qemu_ld(s, a0, a1, a2, TCG_TYPE_I32); break; - case INDEX_op_qemu_ld_a32_i64: - case INDEX_op_qemu_ld_a64_i64: + case INDEX_op_qemu_ld_i64: tcg_out_qemu_ld(s, a0, a1, a2, TCG_TYPE_I64); break; - case INDEX_op_qemu_st_a32_i32: - case INDEX_op_qemu_st_a64_i32: + case INDEX_op_qemu_st_i32: tcg_out_qemu_st(s, a0, a1, a2, TCG_TYPE_I32); break; - case INDEX_op_qemu_st_a32_i64: - case INDEX_op_qemu_st_a64_i64: + case INDEX_op_qemu_st_i64: tcg_out_qemu_st(s, a0, a1, a2, TCG_TYPE_I64); break; @@ -2761,15 +2757,11 @@ tcg_target_op_def(TCGOpcode op, TCGType type, unsigned flags) case INDEX_op_sub2_i64: return C_O2_I4(r, r, rZ, rZ, rM, rM); - case INDEX_op_qemu_ld_a32_i32: - case INDEX_op_qemu_ld_a64_i32: - case INDEX_op_qemu_ld_a32_i64: - case INDEX_op_qemu_ld_a64_i64: + case INDEX_op_qemu_ld_i32: + case INDEX_op_qemu_ld_i64: return C_O1_I1(r, r); - case INDEX_op_qemu_st_a32_i32: - case INDEX_op_qemu_st_a64_i32: - case INDEX_op_qemu_st_a32_i64: - case INDEX_op_qemu_st_a64_i64: + case INDEX_op_qemu_st_i32: + case INDEX_op_qemu_st_i64: return C_O0_I2(rZ, r); case INDEX_op_st_vec: diff --git a/tcg/s390x/tcg-target.c.inc b/tcg/s390x/tcg-target.c.inc index dc7722dc31..6786e7b316 100644 --- a/tcg/s390x/tcg-target.c.inc +++ b/tcg/s390x/tcg-target.c.inc @@ -2455,28 +2455,22 @@ static void tcg_out_op(TCGContext *s, TCGOpcode opc, TCGType type, args[2], const_args[2], args[3], const_args[3], args[4]); break; - case INDEX_op_qemu_ld_a32_i32: - case INDEX_op_qemu_ld_a64_i32: + case INDEX_op_qemu_ld_i32: tcg_out_qemu_ld(s, args[0], args[1], args[2], TCG_TYPE_I32); break; - case INDEX_op_qemu_ld_a32_i64: - case INDEX_op_qemu_ld_a64_i64: + case INDEX_op_qemu_ld_i64: tcg_out_qemu_ld(s, args[0], args[1], args[2], TCG_TYPE_I64); break; - case INDEX_op_qemu_st_a32_i32: - case INDEX_op_qemu_st_a64_i32: + case INDEX_op_qemu_st_i32: tcg_out_qemu_st(s, args[0], args[1], args[2], TCG_TYPE_I32); break; - case INDEX_op_qemu_st_a32_i64: - case INDEX_op_qemu_st_a64_i64: + case INDEX_op_qemu_st_i64: tcg_out_qemu_st(s, args[0], args[1], args[2], TCG_TYPE_I64); break; - case INDEX_op_qemu_ld_a32_i128: - case INDEX_op_qemu_ld_a64_i128: + case INDEX_op_qemu_ld_i128: tcg_out_qemu_ldst_i128(s, args[0], args[1], args[2], args[3], true); break; - case INDEX_op_qemu_st_a32_i128: - case INDEX_op_qemu_st_a64_i128: + case INDEX_op_qemu_st_i128: tcg_out_qemu_ldst_i128(s, args[0], args[1], args[2], args[3], false); break; @@ -3366,21 +3360,15 @@ tcg_target_op_def(TCGOpcode op, TCGType type, unsigned flags) case INDEX_op_ctpop_i64: return C_O1_I1(r, r); - case INDEX_op_qemu_ld_a32_i32: - case INDEX_op_qemu_ld_a64_i32: - case INDEX_op_qemu_ld_a32_i64: - case INDEX_op_qemu_ld_a64_i64: + case INDEX_op_qemu_ld_i32: + case INDEX_op_qemu_ld_i64: return C_O1_I1(r, r); - case INDEX_op_qemu_st_a32_i64: - case INDEX_op_qemu_st_a64_i64: - 
case INDEX_op_qemu_st_a32_i32: - case INDEX_op_qemu_st_a64_i32: + case INDEX_op_qemu_st_i64: + case INDEX_op_qemu_st_i32: return C_O0_I2(r, r); - case INDEX_op_qemu_ld_a32_i128: - case INDEX_op_qemu_ld_a64_i128: + case INDEX_op_qemu_ld_i128: return C_O2_I1(o, m, r); - case INDEX_op_qemu_st_a32_i128: - case INDEX_op_qemu_st_a64_i128: + case INDEX_op_qemu_st_i128: return C_O0_I3(o, m, r); case INDEX_op_deposit_i32: diff --git a/tcg/sparc64/tcg-target.c.inc b/tcg/sparc64/tcg-target.c.inc index 733cb51651..ea0a3b8692 100644 --- a/tcg/sparc64/tcg-target.c.inc +++ b/tcg/sparc64/tcg-target.c.inc @@ -1426,20 +1426,16 @@ static void tcg_out_op(TCGContext *s, TCGOpcode opc, TCGType type, tcg_out_arithi(s, a1, a0, 32, SHIFT_SRLX); break; - case INDEX_op_qemu_ld_a32_i32: - case INDEX_op_qemu_ld_a64_i32: + case INDEX_op_qemu_ld_i32: tcg_out_qemu_ld(s, a0, a1, a2, TCG_TYPE_I32); break; - case INDEX_op_qemu_ld_a32_i64: - case INDEX_op_qemu_ld_a64_i64: + case INDEX_op_qemu_ld_i64: tcg_out_qemu_ld(s, a0, a1, a2, TCG_TYPE_I64); break; - case INDEX_op_qemu_st_a32_i32: - case INDEX_op_qemu_st_a64_i32: + case INDEX_op_qemu_st_i32: tcg_out_qemu_st(s, a0, a1, a2, TCG_TYPE_I32); break; - case INDEX_op_qemu_st_a32_i64: - case INDEX_op_qemu_st_a64_i64: + case INDEX_op_qemu_st_i64: tcg_out_qemu_st(s, a0, a1, a2, TCG_TYPE_I64); break; @@ -1570,10 +1566,8 @@ tcg_target_op_def(TCGOpcode op, TCGType type, unsigned flags) case INDEX_op_extu_i32_i64: case INDEX_op_extract_i64: case INDEX_op_sextract_i64: - case INDEX_op_qemu_ld_a32_i32: - case INDEX_op_qemu_ld_a64_i32: - case INDEX_op_qemu_ld_a32_i64: - case INDEX_op_qemu_ld_a64_i64: + case INDEX_op_qemu_ld_i32: + case INDEX_op_qemu_ld_i64: return C_O1_I1(r, r); case INDEX_op_st8_i32: @@ -1583,10 +1577,8 @@ tcg_target_op_def(TCGOpcode op, TCGType type, unsigned flags) case INDEX_op_st_i32: case INDEX_op_st32_i64: case INDEX_op_st_i64: - case INDEX_op_qemu_st_a32_i32: - case INDEX_op_qemu_st_a64_i32: - case INDEX_op_qemu_st_a32_i64: - case INDEX_op_qemu_st_a64_i64: + case INDEX_op_qemu_st_i32: + case INDEX_op_qemu_st_i64: return C_O0_I2(rZ, r); case INDEX_op_add_i32: diff --git a/tcg/tci/tcg-target.c.inc b/tcg/tci/tcg-target.c.inc index d6c77325a3..36e018dd19 100644 --- a/tcg/tci/tcg-target.c.inc +++ b/tcg/tci/tcg-target.c.inc @@ -169,22 +169,14 @@ tcg_target_op_def(TCGOpcode op, TCGType type, unsigned flags) case INDEX_op_setcond2_i32: return C_O1_I4(r, r, r, r, r); - case INDEX_op_qemu_ld_a32_i32: + case INDEX_op_qemu_ld_i32: return C_O1_I1(r, r); - case INDEX_op_qemu_ld_a64_i32: - return TCG_TARGET_REG_BITS == 64 ? C_O1_I1(r, r) : C_O1_I2(r, r, r); - case INDEX_op_qemu_ld_a32_i64: + case INDEX_op_qemu_ld_i64: return TCG_TARGET_REG_BITS == 64 ? C_O1_I1(r, r) : C_O2_I1(r, r, r); - case INDEX_op_qemu_ld_a64_i64: - return TCG_TARGET_REG_BITS == 64 ? C_O1_I1(r, r) : C_O2_I2(r, r, r, r); - case INDEX_op_qemu_st_a32_i32: + case INDEX_op_qemu_st_i32: return C_O0_I2(r, r); - case INDEX_op_qemu_st_a64_i32: + case INDEX_op_qemu_st_i64: return TCG_TARGET_REG_BITS == 64 ? C_O0_I2(r, r) : C_O0_I3(r, r, r); - case INDEX_op_qemu_st_a32_i64: - return TCG_TARGET_REG_BITS == 64 ? C_O0_I2(r, r) : C_O0_I3(r, r, r); - case INDEX_op_qemu_st_a64_i64: - return TCG_TARGET_REG_BITS == 64 ? 
C_O0_I2(r, r) : C_O0_I4(r, r, r, r); default: return C_NotImplemented; @@ -422,20 +414,6 @@ static void tcg_out_op_rrrbb(TCGContext *s, TCGOpcode op, TCGReg r0, tcg_out32(s, insn); } -static void tcg_out_op_rrrrr(TCGContext *s, TCGOpcode op, TCGReg r0, - TCGReg r1, TCGReg r2, TCGReg r3, TCGReg r4) -{ - tcg_insn_unit insn = 0; - - insn = deposit32(insn, 0, 8, op); - insn = deposit32(insn, 8, 4, r0); - insn = deposit32(insn, 12, 4, r1); - insn = deposit32(insn, 16, 4, r2); - insn = deposit32(insn, 20, 4, r3); - insn = deposit32(insn, 24, 4, r4); - tcg_out32(s, insn); -} - static void tcg_out_op_rrrr(TCGContext *s, TCGOpcode op, TCGReg r0, TCGReg r1, TCGReg r2, TCGReg r3) { @@ -833,29 +811,21 @@ static void tcg_out_op(TCGContext *s, TCGOpcode opc, TCGType type, tcg_out_op_rrrr(s, opc, args[0], args[1], args[2], args[3]); break; - case INDEX_op_qemu_ld_a32_i32: - case INDEX_op_qemu_st_a32_i32: - tcg_out_op_rrm(s, opc, args[0], args[1], args[2]); - break; - case INDEX_op_qemu_ld_a64_i32: - case INDEX_op_qemu_st_a64_i32: - case INDEX_op_qemu_ld_a32_i64: - case INDEX_op_qemu_st_a32_i64: - if (TCG_TARGET_REG_BITS == 64) { - tcg_out_op_rrm(s, opc, args[0], args[1], args[2]); - } else { + case INDEX_op_qemu_ld_i64: + case INDEX_op_qemu_st_i64: + if (TCG_TARGET_REG_BITS == 32) { tcg_out_movi(s, TCG_TYPE_I32, TCG_REG_TMP, args[3]); tcg_out_op_rrrr(s, opc, args[0], args[1], args[2], TCG_REG_TMP); + break; } - break; - case INDEX_op_qemu_ld_a64_i64: - case INDEX_op_qemu_st_a64_i64: - if (TCG_TARGET_REG_BITS == 64) { - tcg_out_op_rrm(s, opc, args[0], args[1], args[2]); + /* fall through */ + case INDEX_op_qemu_ld_i32: + case INDEX_op_qemu_st_i32: + if (TCG_TARGET_REG_BITS == 64 && s->addr_type == TCG_TYPE_I32) { + tcg_out_ext32u(s, TCG_REG_TMP, args[1]); + tcg_out_op_rrm(s, opc, args[0], TCG_REG_TMP, args[2]); } else { - tcg_out_movi(s, TCG_TYPE_I32, TCG_REG_TMP, args[4]); - tcg_out_op_rrrrr(s, opc, args[0], args[1], - args[2], args[3], TCG_REG_TMP); + tcg_out_op_rrm(s, opc, args[0], args[1], args[2]); } break; From patchwork Wed Feb 5 04:03:33 2025 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: Richard Henderson X-Patchwork-Id: 13960547 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from lists.gnu.org (lists.gnu.org [209.51.188.17]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id EDC02C02197 for ; Wed, 5 Feb 2025 04:04:41 +0000 (UTC) Received: from localhost ([::1] helo=lists1p.gnu.org) by lists.gnu.org with esmtp (Exim 4.90_1) (envelope-from ) id 1tfWdk-0008Hi-CN; Tue, 04 Feb 2025 23:03:52 -0500 Received: from eggs.gnu.org ([2001:470:142:3::10]) by lists.gnu.org with esmtps (TLS1.2:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.90_1) (envelope-from ) id 1tfWdj-0008HS-8y for qemu-devel@nongnu.org; Tue, 04 Feb 2025 23:03:51 -0500 Received: from mail-pl1-x634.google.com ([2607:f8b0:4864:20::634]) by eggs.gnu.org with esmtps (TLS1.2:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.90_1) (envelope-from ) id 1tfWdg-000797-73 for qemu-devel@nongnu.org; Tue, 04 Feb 2025 23:03:50 -0500 Received: by mail-pl1-x634.google.com with SMTP id d9443c01a7336-21634338cfdso56883395ad.2 for ; Tue, 04 Feb 2025 20:03:47 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linaro.org; s=google; t=1738728226; x=1739333026; darn=nongnu.org; 
From: Richard Henderson
To: qemu-devel@nongnu.org
Subject: [PATCH 03/11] tcg/arm: Drop addrhi from prepare_host_addr
Date: Tue, 4 Feb 2025 20:03:33 -0800
Message-ID: <20250205040341.2056361-4-richard.henderson@linaro.org>
In-Reply-To: <20250205040341.2056361-1-richard.henderson@linaro.org>
References: <20250205040341.2056361-1-richard.henderson@linaro.org>

The guest address will now always be TCG_TYPE_I32.
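[Editor's note] Condensed from the diff below, the interface change in tcg/arm/tcg-target.c.inc amounts to dropping the high-half register, since a 32-bit guest address now always fits in a single TCGReg. This is an excerpt for orientation, not new code.

```c
/* Before: address passed as a lo/hi pair (hi only meaningful for
 * 64-bit guest addresses, which 32-bit hosts no longer support). */
static TCGLabelQemuLdst *prepare_host_addr(TCGContext *s, HostAddress *h,
                                           TCGReg addrlo, TCGReg addrhi,
                                           MemOpIdx oi, bool is_ld);

/* After: a single address register is always sufficient on Arm hosts. */
static TCGLabelQemuLdst *prepare_host_addr(TCGContext *s, HostAddress *h,
                                           TCGReg addr, MemOpIdx oi, bool is_ld);
```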
Signed-off-by: Richard Henderson Reviewed-by: Philippe Mathieu-Daudé --- tcg/arm/tcg-target.c.inc | 63 ++++++++++++++-------------------------- 1 file changed, 21 insertions(+), 42 deletions(-) diff --git a/tcg/arm/tcg-target.c.inc b/tcg/arm/tcg-target.c.inc index 05bb367a39..252d9aa7e5 100644 --- a/tcg/arm/tcg-target.c.inc +++ b/tcg/arm/tcg-target.c.inc @@ -1455,8 +1455,7 @@ static bool tcg_out_qemu_st_slow_path(TCGContext *s, TCGLabelQemuLdst *lb) #define MIN_TLB_MASK_TABLE_OFS -256 static TCGLabelQemuLdst *prepare_host_addr(TCGContext *s, HostAddress *h, - TCGReg addrlo, TCGReg addrhi, - MemOpIdx oi, bool is_ld) + TCGReg addr, MemOpIdx oi, bool is_ld) { TCGLabelQemuLdst *ldst = NULL; MemOp opc = get_memop(oi); @@ -1465,14 +1464,14 @@ static TCGLabelQemuLdst *prepare_host_addr(TCGContext *s, HostAddress *h, if (tcg_use_softmmu) { *h = (HostAddress){ .cond = COND_AL, - .base = addrlo, + .base = addr, .index = TCG_REG_R1, .index_scratch = true, }; } else { *h = (HostAddress){ .cond = COND_AL, - .base = addrlo, + .base = addr, .index = guest_base ? TCG_REG_GUEST_BASE : -1, .index_scratch = false, }; @@ -1492,8 +1491,7 @@ static TCGLabelQemuLdst *prepare_host_addr(TCGContext *s, HostAddress *h, ldst = new_ldst_label(s); ldst->is_ld = is_ld; ldst->oi = oi; - ldst->addrlo_reg = addrlo; - ldst->addrhi_reg = addrhi; + ldst->addrlo_reg = addr; /* Load cpu->neg.tlb.f[mmu_idx].{mask,table} into {r0,r1}. */ QEMU_BUILD_BUG_ON(offsetof(CPUTLBDescFast, mask) != 0); @@ -1501,30 +1499,20 @@ static TCGLabelQemuLdst *prepare_host_addr(TCGContext *s, HostAddress *h, tcg_out_ldrd_8(s, COND_AL, TCG_REG_R0, TCG_AREG0, fast_off); /* Extract the tlb index from the address into R0. */ - tcg_out_dat_reg(s, COND_AL, ARITH_AND, TCG_REG_R0, TCG_REG_R0, addrlo, + tcg_out_dat_reg(s, COND_AL, ARITH_AND, TCG_REG_R0, TCG_REG_R0, addr, SHIFT_IMM_LSR(s->page_bits - CPU_TLB_ENTRY_BITS)); /* * Add the tlb_table pointer, creating the CPUTLBEntry address in R1. - * Load the tlb comparator into R2/R3 and the fast path addend into R1. + * Load the tlb comparator into R2 and the fast path addend into R1. */ QEMU_BUILD_BUG_ON(HOST_BIG_ENDIAN); if (cmp_off == 0) { - if (s->addr_type == TCG_TYPE_I32) { - tcg_out_ld32_rwb(s, COND_AL, TCG_REG_R2, - TCG_REG_R1, TCG_REG_R0); - } else { - tcg_out_ldrd_rwb(s, COND_AL, TCG_REG_R2, - TCG_REG_R1, TCG_REG_R0); - } + tcg_out_ld32_rwb(s, COND_AL, TCG_REG_R2, TCG_REG_R1, TCG_REG_R0); } else { tcg_out_dat_reg(s, COND_AL, ARITH_ADD, TCG_REG_R1, TCG_REG_R1, TCG_REG_R0, 0); - if (s->addr_type == TCG_TYPE_I32) { - tcg_out_ld32_12(s, COND_AL, TCG_REG_R2, TCG_REG_R1, cmp_off); - } else { - tcg_out_ldrd_8(s, COND_AL, TCG_REG_R2, TCG_REG_R1, cmp_off); - } + tcg_out_ld32_12(s, COND_AL, TCG_REG_R2, TCG_REG_R1, cmp_off); } /* Load the tlb addend. */ @@ -1543,11 +1531,11 @@ static TCGLabelQemuLdst *prepare_host_addr(TCGContext *s, HostAddress *h, * This leaves the least significant alignment bits unchanged, and of * course must be zero. 
*/ - t_addr = addrlo; + t_addr = addr; if (a_mask < s_mask) { t_addr = TCG_REG_R0; tcg_out_dat_imm(s, COND_AL, ARITH_ADD, t_addr, - addrlo, s_mask - a_mask); + addr, s_mask - a_mask); } if (use_armv7_instructions && s->page_bits <= 16) { tcg_out_movi32(s, COND_AL, TCG_REG_TMP, ~(s->page_mask | a_mask)); @@ -1558,7 +1546,7 @@ static TCGLabelQemuLdst *prepare_host_addr(TCGContext *s, HostAddress *h, } else { if (a_mask) { tcg_debug_assert(a_mask <= 0xff); - tcg_out_dat_imm(s, COND_AL, ARITH_TST, 0, addrlo, a_mask); + tcg_out_dat_imm(s, COND_AL, ARITH_TST, 0, addr, a_mask); } tcg_out_dat_reg(s, COND_AL, ARITH_MOV, TCG_REG_TMP, 0, t_addr, SHIFT_IMM_LSR(s->page_bits)); @@ -1566,21 +1554,16 @@ static TCGLabelQemuLdst *prepare_host_addr(TCGContext *s, HostAddress *h, 0, TCG_REG_R2, TCG_REG_TMP, SHIFT_IMM_LSL(s->page_bits)); } - - if (s->addr_type != TCG_TYPE_I32) { - tcg_out_dat_reg(s, COND_EQ, ARITH_CMP, 0, TCG_REG_R3, addrhi, 0); - } } else if (a_mask) { ldst = new_ldst_label(s); ldst->is_ld = is_ld; ldst->oi = oi; - ldst->addrlo_reg = addrlo; - ldst->addrhi_reg = addrhi; + ldst->addrlo_reg = addr; /* We are expecting alignment to max out at 7 */ tcg_debug_assert(a_mask <= 0xff); /* tst addr, #mask */ - tcg_out_dat_imm(s, COND_AL, ARITH_TST, 0, addrlo, a_mask); + tcg_out_dat_imm(s, COND_AL, ARITH_TST, 0, addr, a_mask); } return ldst; @@ -1678,14 +1661,13 @@ static void tcg_out_qemu_ld_direct(TCGContext *s, MemOp opc, TCGReg datalo, } static void tcg_out_qemu_ld(TCGContext *s, TCGReg datalo, TCGReg datahi, - TCGReg addrlo, TCGReg addrhi, - MemOpIdx oi, TCGType data_type) + TCGReg addr, MemOpIdx oi, TCGType data_type) { MemOp opc = get_memop(oi); TCGLabelQemuLdst *ldst; HostAddress h; - ldst = prepare_host_addr(s, &h, addrlo, addrhi, oi, true); + ldst = prepare_host_addr(s, &h, addr, oi, true); if (ldst) { ldst->type = data_type; ldst->datalo_reg = datalo; @@ -1764,14 +1746,13 @@ static void tcg_out_qemu_st_direct(TCGContext *s, MemOp opc, TCGReg datalo, } static void tcg_out_qemu_st(TCGContext *s, TCGReg datalo, TCGReg datahi, - TCGReg addrlo, TCGReg addrhi, - MemOpIdx oi, TCGType data_type) + TCGReg addr, MemOpIdx oi, TCGType data_type) { MemOp opc = get_memop(oi); TCGLabelQemuLdst *ldst; HostAddress h; - ldst = prepare_host_addr(s, &h, addrlo, addrhi, oi, false); + ldst = prepare_host_addr(s, &h, addr, oi, false); if (ldst) { ldst->type = data_type; ldst->datalo_reg = datalo; @@ -2072,19 +2053,17 @@ static void tcg_out_op(TCGContext *s, TCGOpcode opc, TCGType type, break; case INDEX_op_qemu_ld_i32: - tcg_out_qemu_ld(s, args[0], -1, args[1], -1, args[2], TCG_TYPE_I32); + tcg_out_qemu_ld(s, args[0], -1, args[1], args[2], TCG_TYPE_I32); break; case INDEX_op_qemu_ld_i64: - tcg_out_qemu_ld(s, args[0], args[1], args[2], -1, - args[3], TCG_TYPE_I64); + tcg_out_qemu_ld(s, args[0], args[1], args[2], args[3], TCG_TYPE_I64); break; case INDEX_op_qemu_st_i32: - tcg_out_qemu_st(s, args[0], -1, args[1], -1, args[2], TCG_TYPE_I32); + tcg_out_qemu_st(s, args[0], -1, args[1], args[2], TCG_TYPE_I32); break; case INDEX_op_qemu_st_i64: - tcg_out_qemu_st(s, args[0], args[1], args[2], -1, - args[3], TCG_TYPE_I64); + tcg_out_qemu_st(s, args[0], args[1], args[2], args[3], TCG_TYPE_I64); break; case INDEX_op_bswap16_i32: From patchwork Wed Feb 5 04:03:34 2025 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: Richard Henderson X-Patchwork-Id: 13960549 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on 
[71.212.39.66]) by smtp.gmail.com with ESMTPSA id d9443c01a7336-21f054eb89esm22380325ad.79.2025.02.04.20.03.46 for (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Tue, 04 Feb 2025 20:03:46 -0800 (PST) From: Richard Henderson To: qemu-devel@nongnu.org Subject: [PATCH 04/11] tcg/i386: Drop addrhi from prepare_host_addr Date: Tue, 4 Feb 2025 20:03:34 -0800 Message-ID: <20250205040341.2056361-5-richard.henderson@linaro.org> X-Mailer: git-send-email 2.43.0 In-Reply-To: <20250205040341.2056361-1-richard.henderson@linaro.org> References: <20250205040341.2056361-1-richard.henderson@linaro.org> MIME-Version: 1.0 Received-SPF: pass client-ip=2607:f8b0:4864:20::62c; envelope-from=richard.henderson@linaro.org; helo=mail-pl1-x62c.google.com X-Spam_score_int: -20 X-Spam_score: -2.1 X-Spam_bar: -- X-Spam_report: (-2.1 / 5.0 requ) BAYES_00=-1.9, DKIM_SIGNED=0.1, DKIM_VALID=-0.1, DKIM_VALID_AU=-0.1, DKIM_VALID_EF=-0.1, RCVD_IN_DNSWL_NONE=-0.0001, SPF_HELO_NONE=0.001, SPF_PASS=-0.001 autolearn=ham autolearn_force=no X-Spam_action: no action X-BeenThere: qemu-devel@nongnu.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: qemu-devel-bounces+qemu-devel=archiver.kernel.org@nongnu.org Sender: qemu-devel-bounces+qemu-devel=archiver.kernel.org@nongnu.org The guest address will now always fit in one register. Signed-off-by: Richard Henderson Reviewed-by: Philippe Mathieu-Daudé --- tcg/i386/tcg-target.c.inc | 56 ++++++++++++++------------------------- 1 file changed, 20 insertions(+), 36 deletions(-) diff --git a/tcg/i386/tcg-target.c.inc b/tcg/i386/tcg-target.c.inc index ca6e8abc57..b33fe7fe23 100644 --- a/tcg/i386/tcg-target.c.inc +++ b/tcg/i386/tcg-target.c.inc @@ -2169,8 +2169,7 @@ static inline int setup_guest_base_seg(void) * is required and fill in @h with the host address for the fast path. */ static TCGLabelQemuLdst *prepare_host_addr(TCGContext *s, HostAddress *h, - TCGReg addrlo, TCGReg addrhi, - MemOpIdx oi, bool is_ld) + TCGReg addr, MemOpIdx oi, bool is_ld) { TCGLabelQemuLdst *ldst = NULL; MemOp opc = get_memop(oi); @@ -2184,7 +2183,7 @@ static TCGLabelQemuLdst *prepare_host_addr(TCGContext *s, HostAddress *h, } else { *h = x86_guest_base; } - h->base = addrlo; + h->base = addr; h->aa = atom_and_align_for_opc(s, opc, MO_ATOM_IFALIGN, s_bits == MO_128); a_mask = (1 << h->aa.align) - 1; @@ -2202,8 +2201,7 @@ static TCGLabelQemuLdst *prepare_host_addr(TCGContext *s, HostAddress *h, ldst = new_ldst_label(s); ldst->is_ld = is_ld; ldst->oi = oi; - ldst->addrlo_reg = addrlo; - ldst->addrhi_reg = addrhi; + ldst->addrlo_reg = addr; if (TCG_TARGET_REG_BITS == 64) { ttype = s->addr_type; @@ -2217,7 +2215,7 @@ static TCGLabelQemuLdst *prepare_host_addr(TCGContext *s, HostAddress *h, } } - tcg_out_mov(s, tlbtype, TCG_REG_L0, addrlo); + tcg_out_mov(s, tlbtype, TCG_REG_L0, addr); tcg_out_shifti(s, SHIFT_SHR + tlbrexw, TCG_REG_L0, s->page_bits - CPU_TLB_ENTRY_BITS); @@ -2233,10 +2231,10 @@ static TCGLabelQemuLdst *prepare_host_addr(TCGContext *s, HostAddress *h, * check that we don't cross pages for the complete access. 
*/ if (a_mask >= s_mask) { - tcg_out_mov(s, ttype, TCG_REG_L1, addrlo); + tcg_out_mov(s, ttype, TCG_REG_L1, addr); } else { tcg_out_modrm_offset(s, OPC_LEA + trexw, TCG_REG_L1, - addrlo, s_mask - a_mask); + addr, s_mask - a_mask); } tlb_mask = s->page_mask | a_mask; tgen_arithi(s, ARITH_AND + trexw, TCG_REG_L1, tlb_mask, 0); @@ -2250,17 +2248,6 @@ static TCGLabelQemuLdst *prepare_host_addr(TCGContext *s, HostAddress *h, ldst->label_ptr[0] = s->code_ptr; s->code_ptr += 4; - if (TCG_TARGET_REG_BITS == 32 && s->addr_type == TCG_TYPE_I64) { - /* cmp 4(TCG_REG_L0), addrhi */ - tcg_out_modrm_offset(s, OPC_CMP_GvEv, addrhi, - TCG_REG_L0, cmp_ofs + 4); - - /* jne slow_path */ - tcg_out_opc(s, OPC_JCC_long + JCC_JNE, 0, 0, 0); - ldst->label_ptr[1] = s->code_ptr; - s->code_ptr += 4; - } - /* TLB Hit. */ tcg_out_ld(s, TCG_TYPE_PTR, TCG_REG_L0, TCG_REG_L0, offsetof(CPUTLBEntry, addend)); @@ -2270,11 +2257,10 @@ static TCGLabelQemuLdst *prepare_host_addr(TCGContext *s, HostAddress *h, ldst = new_ldst_label(s); ldst->is_ld = is_ld; ldst->oi = oi; - ldst->addrlo_reg = addrlo; - ldst->addrhi_reg = addrhi; + ldst->addrlo_reg = addr; /* jne slow_path */ - jcc = tcg_out_cmp(s, TCG_COND_TSTNE, addrlo, a_mask, true, false); + jcc = tcg_out_cmp(s, TCG_COND_TSTNE, addr, a_mask, true, false); tcg_out_opc(s, OPC_JCC_long + jcc, 0, 0, 0); ldst->label_ptr[0] = s->code_ptr; s->code_ptr += 4; @@ -2446,13 +2432,12 @@ static void tcg_out_qemu_ld_direct(TCGContext *s, TCGReg datalo, TCGReg datahi, } static void tcg_out_qemu_ld(TCGContext *s, TCGReg datalo, TCGReg datahi, - TCGReg addrlo, TCGReg addrhi, - MemOpIdx oi, TCGType data_type) + TCGReg addr, MemOpIdx oi, TCGType data_type) { TCGLabelQemuLdst *ldst; HostAddress h; - ldst = prepare_host_addr(s, &h, addrlo, addrhi, oi, true); + ldst = prepare_host_addr(s, &h, addr, oi, true); tcg_out_qemu_ld_direct(s, datalo, datahi, h, data_type, get_memop(oi)); if (ldst) { @@ -2574,13 +2559,12 @@ static void tcg_out_qemu_st_direct(TCGContext *s, TCGReg datalo, TCGReg datahi, } static void tcg_out_qemu_st(TCGContext *s, TCGReg datalo, TCGReg datahi, - TCGReg addrlo, TCGReg addrhi, - MemOpIdx oi, TCGType data_type) + TCGReg addr, MemOpIdx oi, TCGType data_type) { TCGLabelQemuLdst *ldst; HostAddress h; - ldst = prepare_host_addr(s, &h, addrlo, addrhi, oi, false); + ldst = prepare_host_addr(s, &h, addr, oi, false); tcg_out_qemu_st_direct(s, datalo, datahi, h, get_memop(oi)); if (ldst) { @@ -2880,34 +2864,34 @@ static void tcg_out_op(TCGContext *s, TCGOpcode opc, TCGType type, break; case INDEX_op_qemu_ld_i32: - tcg_out_qemu_ld(s, a0, -1, a1, -1, a2, TCG_TYPE_I32); + tcg_out_qemu_ld(s, a0, -1, a1, a2, TCG_TYPE_I32); break; case INDEX_op_qemu_ld_i64: if (TCG_TARGET_REG_BITS == 64) { - tcg_out_qemu_ld(s, a0, -1, a1, -1, a2, TCG_TYPE_I64); + tcg_out_qemu_ld(s, a0, -1, a1, a2, TCG_TYPE_I64); } else { - tcg_out_qemu_ld(s, a0, a1, a2, -1, args[3], TCG_TYPE_I64); + tcg_out_qemu_ld(s, a0, a1, a2, args[3], TCG_TYPE_I64); } break; case INDEX_op_qemu_ld_i128: tcg_debug_assert(TCG_TARGET_REG_BITS == 64); - tcg_out_qemu_ld(s, a0, a1, a2, -1, args[3], TCG_TYPE_I128); + tcg_out_qemu_ld(s, a0, a1, a2, args[3], TCG_TYPE_I128); break; case INDEX_op_qemu_st_i32: case INDEX_op_qemu_st8_i32: - tcg_out_qemu_st(s, a0, -1, a1, -1, a2, TCG_TYPE_I32); + tcg_out_qemu_st(s, a0, -1, a1, a2, TCG_TYPE_I32); break; case INDEX_op_qemu_st_i64: if (TCG_TARGET_REG_BITS == 64) { - tcg_out_qemu_st(s, a0, -1, a1, -1, a2, TCG_TYPE_I64); + tcg_out_qemu_st(s, a0, -1, a1, a2, TCG_TYPE_I64); } else { - tcg_out_qemu_st(s, a0, 
a1, a2, -1, args[3], TCG_TYPE_I64); + tcg_out_qemu_st(s, a0, a1, a2, args[3], TCG_TYPE_I64); } break; case INDEX_op_qemu_st_i128: tcg_debug_assert(TCG_TARGET_REG_BITS == 64); - tcg_out_qemu_st(s, a0, a1, a2, -1, args[3], TCG_TYPE_I128); + tcg_out_qemu_st(s, a0, a1, a2, args[3], TCG_TYPE_I128); break; OP_32_64(mulu2): From patchwork Wed Feb 5 04:03:35 2025 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: Richard Henderson X-Patchwork-Id: 13960557 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from lists.gnu.org (lists.gnu.org [209.51.188.17]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id 7EA85C02192 for ; Wed, 5 Feb 2025 04:06:43 +0000 (UTC) Received: from localhost ([::1] helo=lists1p.gnu.org) by lists.gnu.org with esmtp (Exim 4.90_1) (envelope-from ) id 1tfWdq-0008KS-37; Tue, 04 Feb 2025 23:03:58 -0500 Received: from eggs.gnu.org ([2001:470:142:3::10]) by lists.gnu.org with esmtps (TLS1.2:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.90_1) (envelope-from ) id 1tfWdm-0008IK-IU for qemu-devel@nongnu.org; Tue, 04 Feb 2025 23:03:54 -0500 Received: from mail-pl1-x633.google.com ([2607:f8b0:4864:20::633]) by eggs.gnu.org with esmtps (TLS1.2:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.90_1) (envelope-from ) id 1tfWdj-00079a-1p for qemu-devel@nongnu.org; Tue, 04 Feb 2025 23:03:52 -0500 Received: by mail-pl1-x633.google.com with SMTP id d9443c01a7336-21654fdd5daso110101595ad.1 for ; Tue, 04 Feb 2025 20:03:50 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linaro.org; s=google; t=1738728228; x=1739333028; darn=nongnu.org; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:to:from:from:to:cc:subject:date:message-id :reply-to; bh=gLYqET5p8j8VFKQbbiyA8QMXMpxuG1/ty4odhnTY0Ts=; b=rS+xUsWg3fqap0tyCo+MHR9oJZMAsDTzuY2oRsgqsryGDsC3RgxFQDWps3NLj6gruY csD9X8AHZeGgSBPaCdbgJvtZndJdKTU71Et8m2UKaKVPxgmCAu7Wuk+N3rixHXA0CrFH WfkUJxWCHhHfQsWCXcM4cZyVTwj6UAivvqFz463jTkdyFckAkl2kbzOgWxsHN6N27xGU S+dYoXmSOMmE7h4QHYKbnrf5MEMjrgupWCXe+tYHwcH1X16fvpJQgQ+KdR1iiHfyLAwV RbclZbH0xBBb+o12UWxvdIxlrICeRdJSQ2pdepsi492jvyWO7iRlOvW+51xP/GbT/GVY IeWA== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1738728228; x=1739333028; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=gLYqET5p8j8VFKQbbiyA8QMXMpxuG1/ty4odhnTY0Ts=; b=Erp2GEKTz5xNqD6XEEUdQvMdDSdl3S8bR1cd+ack1R+RjWwUBo9nRMWGZuU9JJl4BL Jik+Wd6A+Ju9wjHeepsJP73RfNslL94qD+PCeTEZ1jDSokJ4WeiuPH2amjhH1G2VFU6i KhXsP+WfZQwGcPgD5DiOYJldG0zbZmqyVc08cDe47EL9NbEBvw2LQVo+icdXWpbjDyvE cTyV7hFAyXe89knzgrv63o7hGu3/s8BqxcXi+PROkmdwwLN4r5fo/oyg3z/Ea2W4hKnM 5FRfuTj6RqhLNv3CduDefCA8Mg7p6+1snwuBloFIS1FdWgCrCnkAoYpP8xoCJgCjRQeL x4Kw== X-Gm-Message-State: AOJu0Yxjrve3KqfZFaFZHm+tCz7QMCabv/uGlIa6kg0EePLJslDFMcIg cQbUplKEJc97j66V12fTQDZaXbEmPb9pGJPmtWWzZchJVxN3RlFfm2pPZ52nqbPgx+s5qH9Xsvo L X-Gm-Gg: ASbGncutrxwL9CFDJ3NwMrXJ+fNAg1DS2OZYme5atS6zrglXXbJRMIbtjoikwlxwMJG 9PTaSByEF8+2+4opvmqkDNT6qRBOHOmvIHwxZ1/le9WEZoYefFtbdea3r6llWOSovwgEIyJc0gw koYTf7wY+M/6kf978CtOvw6nnqbZ4s/k7eTeQlLkjZ2GmnoYmrTC4jHrH4wFxloxGjS0MzuKx/w mpYQLsQBwbV/cm/Hr56uIS9z2W5tlnsjOjv9Bpun9Pnksa5x7p1lLYUHEU6ooRFP0EdlwR/Z0LT n7llessJhg2YCZ/8Ibm5Kp4xT0RQnN5l87J5MmkdF2iu1P0= 
X-Google-Smtp-Source: AGHT+IEfI5W0WCbpYCurZn20wG4F3iNCRox60/LeZHjYDtj4DfNvzvyvc57eZbefz8NiBvDBpQHF6g== X-Received: by 2002:a17:902:c94d:b0:215:b9a6:5cb9 with SMTP id d9443c01a7336-21f17e26df1mr24644095ad.5.1738728228179; Tue, 04 Feb 2025 20:03:48 -0800 (PST) Received: from stoup.. (71-212-39-66.tukw.qwest.net. [71.212.39.66]) by smtp.gmail.com with ESMTPSA id d9443c01a7336-21f054eb89esm22380325ad.79.2025.02.04.20.03.47 for (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Tue, 04 Feb 2025 20:03:47 -0800 (PST) From: Richard Henderson To: qemu-devel@nongnu.org Subject: [PATCH 05/11] tcg/mips: Drop addrhi from prepare_host_addr Date: Tue, 4 Feb 2025 20:03:35 -0800 Message-ID: <20250205040341.2056361-6-richard.henderson@linaro.org> X-Mailer: git-send-email 2.43.0 In-Reply-To: <20250205040341.2056361-1-richard.henderson@linaro.org> References: <20250205040341.2056361-1-richard.henderson@linaro.org> MIME-Version: 1.0 Received-SPF: pass client-ip=2607:f8b0:4864:20::633; envelope-from=richard.henderson@linaro.org; helo=mail-pl1-x633.google.com X-Spam_score_int: -20 X-Spam_score: -2.1 X-Spam_bar: -- X-Spam_report: (-2.1 / 5.0 requ) BAYES_00=-1.9, DKIM_SIGNED=0.1, DKIM_VALID=-0.1, DKIM_VALID_AU=-0.1, DKIM_VALID_EF=-0.1, RCVD_IN_DNSWL_NONE=-0.0001, SPF_HELO_NONE=0.001, SPF_PASS=-0.001 autolearn=ham autolearn_force=no X-Spam_action: no action X-BeenThere: qemu-devel@nongnu.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: qemu-devel-bounces+qemu-devel=archiver.kernel.org@nongnu.org Sender: qemu-devel-bounces+qemu-devel=archiver.kernel.org@nongnu.org The guest address will now always fit in one register. Signed-off-by: Richard Henderson Reviewed-by: Philippe Mathieu-Daudé --- tcg/mips/tcg-target.c.inc | 62 ++++++++++++++------------------------- 1 file changed, 22 insertions(+), 40 deletions(-) diff --git a/tcg/mips/tcg-target.c.inc b/tcg/mips/tcg-target.c.inc index b1d512ca2a..153ce1f3c3 100644 --- a/tcg/mips/tcg-target.c.inc +++ b/tcg/mips/tcg-target.c.inc @@ -1217,8 +1217,7 @@ bool tcg_target_has_memory_bswap(MemOp memop) * is required and fill in @h with the host address for the fast path. */ static TCGLabelQemuLdst *prepare_host_addr(TCGContext *s, HostAddress *h, - TCGReg addrlo, TCGReg addrhi, - MemOpIdx oi, bool is_ld) + TCGReg addr, MemOpIdx oi, bool is_ld) { TCGType addr_type = s->addr_type; TCGLabelQemuLdst *ldst = NULL; @@ -1245,8 +1244,7 @@ static TCGLabelQemuLdst *prepare_host_addr(TCGContext *s, HostAddress *h, ldst = new_ldst_label(s); ldst->is_ld = is_ld; ldst->oi = oi; - ldst->addrlo_reg = addrlo; - ldst->addrhi_reg = addrhi; + ldst->addrlo_reg = addr; /* Load tlb_mask[mmu_idx] and tlb_table[mmu_idx]. */ tcg_out_ld(s, TCG_TYPE_PTR, TCG_TMP0, TCG_AREG0, mask_off); @@ -1254,11 +1252,10 @@ static TCGLabelQemuLdst *prepare_host_addr(TCGContext *s, HostAddress *h, /* Extract the TLB index from the address into TMP3. */ if (TCG_TARGET_REG_BITS == 32 || addr_type == TCG_TYPE_I32) { - tcg_out_opc_sa(s, OPC_SRL, TCG_TMP3, addrlo, + tcg_out_opc_sa(s, OPC_SRL, TCG_TMP3, addr, s->page_bits - CPU_TLB_ENTRY_BITS); } else { - tcg_out_dsrl(s, TCG_TMP3, addrlo, - s->page_bits - CPU_TLB_ENTRY_BITS); + tcg_out_dsrl(s, TCG_TMP3, addr, s->page_bits - CPU_TLB_ENTRY_BITS); } tcg_out_opc_reg(s, OPC_AND, TCG_TMP3, TCG_TMP3, TCG_TMP0); @@ -1288,48 +1285,35 @@ static TCGLabelQemuLdst *prepare_host_addr(TCGContext *s, HostAddress *h, tcg_out_opc_imm(s, (TCG_TARGET_REG_BITS == 32 || addr_type == TCG_TYPE_I32 ? 
OPC_ADDIU : OPC_DADDIU), - TCG_TMP2, addrlo, s_mask - a_mask); + TCG_TMP2, addr, s_mask - a_mask); tcg_out_opc_reg(s, OPC_AND, TCG_TMP1, TCG_TMP1, TCG_TMP2); } else { - tcg_out_opc_reg(s, OPC_AND, TCG_TMP1, TCG_TMP1, addrlo); + tcg_out_opc_reg(s, OPC_AND, TCG_TMP1, TCG_TMP1, addr); } /* Zero extend a 32-bit guest address for a 64-bit host. */ if (TCG_TARGET_REG_BITS == 64 && addr_type == TCG_TYPE_I32) { - tcg_out_ext32u(s, TCG_TMP2, addrlo); - addrlo = TCG_TMP2; + tcg_out_ext32u(s, TCG_TMP2, addr); + addr = TCG_TMP2; } ldst->label_ptr[0] = s->code_ptr; tcg_out_opc_br(s, OPC_BNE, TCG_TMP1, TCG_TMP0); - /* Load and test the high half tlb comparator. */ - if (TCG_TARGET_REG_BITS == 32 && addr_type != TCG_TYPE_I32) { - /* delay slot */ - tcg_out_ldst(s, OPC_LW, TCG_TMP0, TCG_TMP3, cmp_off + HI_OFF); - - /* Load the tlb addend for the fast path. */ - tcg_out_ld(s, TCG_TYPE_PTR, TCG_TMP3, TCG_TMP3, add_off); - - ldst->label_ptr[1] = s->code_ptr; - tcg_out_opc_br(s, OPC_BNE, addrhi, TCG_TMP0); - } - /* delay slot */ base = TCG_TMP3; - tcg_out_opc_reg(s, ALIAS_PADD, base, TCG_TMP3, addrlo); + tcg_out_opc_reg(s, ALIAS_PADD, base, TCG_TMP3, addr); } else { if (a_mask && (use_mips32r6_instructions || a_bits != s_bits)) { ldst = new_ldst_label(s); ldst->is_ld = is_ld; ldst->oi = oi; - ldst->addrlo_reg = addrlo; - ldst->addrhi_reg = addrhi; + ldst->addrlo_reg = addr; /* We are expecting a_bits to max out at 7, much lower than ANDI. */ tcg_debug_assert(a_bits < 16); - tcg_out_opc_imm(s, OPC_ANDI, TCG_TMP0, addrlo, a_mask); + tcg_out_opc_imm(s, OPC_ANDI, TCG_TMP0, addr, a_mask); ldst->label_ptr[0] = s->code_ptr; if (use_mips32r6_instructions) { @@ -1340,7 +1324,7 @@ static TCGLabelQemuLdst *prepare_host_addr(TCGContext *s, HostAddress *h, } } - base = addrlo; + base = addr; if (TCG_TARGET_REG_BITS == 64 && addr_type == TCG_TYPE_I32) { tcg_out_ext32u(s, TCG_REG_A0, base); base = TCG_REG_A0; @@ -1460,14 +1444,13 @@ static void tcg_out_qemu_ld_unalign(TCGContext *s, TCGReg lo, TCGReg hi, } static void tcg_out_qemu_ld(TCGContext *s, TCGReg datalo, TCGReg datahi, - TCGReg addrlo, TCGReg addrhi, - MemOpIdx oi, TCGType data_type) + TCGReg addr, MemOpIdx oi, TCGType data_type) { MemOp opc = get_memop(oi); TCGLabelQemuLdst *ldst; HostAddress h; - ldst = prepare_host_addr(s, &h, addrlo, addrhi, oi, true); + ldst = prepare_host_addr(s, &h, addr, oi, true); if (use_mips32r6_instructions || h.aa.align >= (opc & MO_SIZE)) { tcg_out_qemu_ld_direct(s, datalo, datahi, h.base, opc, data_type); @@ -1547,14 +1530,13 @@ static void tcg_out_qemu_st_unalign(TCGContext *s, TCGReg lo, TCGReg hi, } static void tcg_out_qemu_st(TCGContext *s, TCGReg datalo, TCGReg datahi, - TCGReg addrlo, TCGReg addrhi, - MemOpIdx oi, TCGType data_type) + TCGReg addr, MemOpIdx oi, TCGType data_type) { MemOp opc = get_memop(oi); TCGLabelQemuLdst *ldst; HostAddress h; - ldst = prepare_host_addr(s, &h, addrlo, addrhi, oi, false); + ldst = prepare_host_addr(s, &h, addr, oi, false); if (use_mips32r6_instructions || h.aa.align >= (opc & MO_SIZE)) { tcg_out_qemu_st_direct(s, datalo, datahi, h.base, opc); @@ -2096,24 +2078,24 @@ static void tcg_out_op(TCGContext *s, TCGOpcode opc, TCGType type, break; case INDEX_op_qemu_ld_i32: - tcg_out_qemu_ld(s, a0, 0, a1, 0, a2, TCG_TYPE_I32); + tcg_out_qemu_ld(s, a0, 0, a1, a2, TCG_TYPE_I32); break; case INDEX_op_qemu_ld_i64: if (TCG_TARGET_REG_BITS == 64) { - tcg_out_qemu_ld(s, a0, 0, a1, 0, a2, TCG_TYPE_I64); + tcg_out_qemu_ld(s, a0, 0, a1, a2, TCG_TYPE_I64); } else { - tcg_out_qemu_ld(s, a0, a1, a2, 0, args[3], 
TCG_TYPE_I64); + tcg_out_qemu_ld(s, a0, a1, a2, args[3], TCG_TYPE_I64); } break; case INDEX_op_qemu_st_i32: - tcg_out_qemu_st(s, a0, 0, a1, 0, a2, TCG_TYPE_I32); + tcg_out_qemu_st(s, a0, 0, a1, a2, TCG_TYPE_I32); break; case INDEX_op_qemu_st_i64: if (TCG_TARGET_REG_BITS == 64) { - tcg_out_qemu_st(s, a0, 0, a1, 0, a2, TCG_TYPE_I64); + tcg_out_qemu_st(s, a0, 0, a1, a2, TCG_TYPE_I64); } else { - tcg_out_qemu_st(s, a0, a1, a2, 0, args[3], TCG_TYPE_I64); + tcg_out_qemu_st(s, a0, a1, a2, args[3], TCG_TYPE_I64); } break; From patchwork Wed Feb 5 04:03:36 2025 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: Richard Henderson X-Patchwork-Id: 13960556 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from lists.gnu.org (lists.gnu.org [209.51.188.17]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id 3F832C02197 for ; Wed, 5 Feb 2025 04:05:58 +0000 (UTC) Received: from localhost ([::1] helo=lists1p.gnu.org) by lists.gnu.org with esmtp (Exim 4.90_1) (envelope-from ) id 1tfWeB-0008OP-5N; Tue, 04 Feb 2025 23:04:19 -0500 Received: from eggs.gnu.org ([2001:470:142:3::10]) by lists.gnu.org with esmtps (TLS1.2:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.90_1) (envelope-from ) id 1tfWdm-0008IL-Iz for qemu-devel@nongnu.org; Tue, 04 Feb 2025 23:03:56 -0500 Received: from mail-pl1-x633.google.com ([2607:f8b0:4864:20::633]) by eggs.gnu.org with esmtps (TLS1.2:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.90_1) (envelope-from ) id 1tfWdk-0007BO-C4 for qemu-devel@nongnu.org; Tue, 04 Feb 2025 23:03:54 -0500 Received: by mail-pl1-x633.google.com with SMTP id d9443c01a7336-21f08b44937so17156245ad.1 for ; Tue, 04 Feb 2025 20:03:51 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linaro.org; s=google; t=1738728230; x=1739333030; darn=nongnu.org; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:to:from:from:to:cc:subject:date:message-id :reply-to; bh=JqoGgTuN0/9CJQXO5a4+Mfpk+MASA4tJ+KC10nQogg4=; b=VAlwqU18u93xcdvFz20qTzVmiUN45vkznhjS/ZvtQvw6oEoqcwZjJUzyHUvTZBKR+Y NFF5Uv8LbHWXjk2B6W6tj7AFRyItM2u1xw2sXLBpmwyUAWwMUWGCiihrLa2UXQ8WorgI 6H41egVdhMI+7GjbyMardlH9ktnUJ+1sgWHmkinj2C6MJ57Ck4dNxZLTIrFw5ihiDsi+ VA9s2GgjCcexHZ9R58ZGQLfieDoyi5cAG/pZH0LGaSP/K91RTFAfWNRrwZtHKouDIpN1 lajFaXwvtR4EBZ9MK41p5chvKk/5cFJmAZVEuMYWwy9r2sBPC1lxMUs2dWlm3VQO2oEt 40Gw== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1738728230; x=1739333030; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=JqoGgTuN0/9CJQXO5a4+Mfpk+MASA4tJ+KC10nQogg4=; b=E6jNcSMerUp+plEnhuZykWI5YKs9OLhGb2rMECQvpWj6PJnqFwZKuHukbSRLZUWC+Z y3I+n8SBCPA52hoLsGd9Y6vfXoSLs64rU9cck9pjCUYdbbOB+nJ7PjEBVVRaei5nUb6w V51a9IeLyOLzduNGrTWO1rLZiVHau2dW5yA6HRXLo7J2r32XuPEXfAGCG3u0vsMtTW2S CYsfzL+m77nBogM6iGOFoKG7F6MW76b8rkziYW3hHfUP22UC388oP5JtQjygS71xxQYd B4u7k18mcU6EiH7da1XgY2Umiq+umNJIxRYNONG21OAkhT7LPZmcSAfzjmxMJKuNSu6X xFog== X-Gm-Message-State: AOJu0YxCuq4cWjWoEhhUctR2HTftIKP8mJfT6KVbYuy3kzUwIffQ4M0x jvELgffNgjchgPu3Hqhr3bK14mpwiSG7ppy6thDdYads5rWbc0DdeqzvlVO8ivsp173+H+q4ded k X-Gm-Gg: ASbGncsSEGMYPWBWsgVLZMKW4ZzCAdgWTPznl8/0uJGdK9AY8fLRbTsZ1VLwGc/LFvD YzeCar1o8SYIm2TaYjmgb+qagHd71aprpcwQbbBWpFsE4xz5MSlagaVvCf6vcCEhIfQhST5qEJX 
0Fz3RvORh/vWv+FjbmmYPjhLtMsmThiWDNvcnJ+41dnb2vINJNfpKNzBOssSA5dEyWuwYZLG4ln tEiR+ZBxADHuaz8GZorHGZbusgbtNwFW0AM2tm7k+e7cesLqa7fbZkm0QxGKvSJaP7NSRA2z7P5 GqaAdQYnXuH0qZ7SC226vlO6rEbtyGOLbiuEebBdu0Xl52I= X-Google-Smtp-Source: AGHT+IGUxqpugrFz5F7emDHnbCnoGUI9Ibj7EPJSHmm4Uz0nTheYliiH8o6Rl/eRkcCh4p/hJ2SUSw== X-Received: by 2002:a17:903:252:b0:21c:fb6:7c52 with SMTP id d9443c01a7336-21f17f02051mr19746645ad.45.1738728229061; Tue, 04 Feb 2025 20:03:49 -0800 (PST) Received: from stoup.. (71-212-39-66.tukw.qwest.net. [71.212.39.66]) by smtp.gmail.com with ESMTPSA id d9443c01a7336-21f054eb89esm22380325ad.79.2025.02.04.20.03.48 for (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Tue, 04 Feb 2025 20:03:48 -0800 (PST) From: Richard Henderson To: qemu-devel@nongnu.org Subject: [PATCH 06/11] tcg/ppc: Drop addrhi from prepare_host_addr Date: Tue, 4 Feb 2025 20:03:36 -0800 Message-ID: <20250205040341.2056361-7-richard.henderson@linaro.org> X-Mailer: git-send-email 2.43.0 In-Reply-To: <20250205040341.2056361-1-richard.henderson@linaro.org> References: <20250205040341.2056361-1-richard.henderson@linaro.org> MIME-Version: 1.0 Received-SPF: pass client-ip=2607:f8b0:4864:20::633; envelope-from=richard.henderson@linaro.org; helo=mail-pl1-x633.google.com X-Spam_score_int: -20 X-Spam_score: -2.1 X-Spam_bar: -- X-Spam_report: (-2.1 / 5.0 requ) BAYES_00=-1.9, DKIM_SIGNED=0.1, DKIM_VALID=-0.1, DKIM_VALID_AU=-0.1, DKIM_VALID_EF=-0.1, RCVD_IN_DNSWL_NONE=-0.0001, SPF_HELO_NONE=0.001, SPF_PASS=-0.001 autolearn=ham autolearn_force=no X-Spam_action: no action X-BeenThere: qemu-devel@nongnu.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: qemu-devel-bounces+qemu-devel=archiver.kernel.org@nongnu.org Sender: qemu-devel-bounces+qemu-devel=archiver.kernel.org@nongnu.org The guest address will now always fit in one register. Signed-off-by: Richard Henderson Reviewed-by: Philippe Mathieu-Daudé --- tcg/ppc/tcg-target.c.inc | 75 ++++++++++++---------------------------- 1 file changed, 23 insertions(+), 52 deletions(-) diff --git a/tcg/ppc/tcg-target.c.inc b/tcg/ppc/tcg-target.c.inc index 801cb6f3cb..74b93f4b57 100644 --- a/tcg/ppc/tcg-target.c.inc +++ b/tcg/ppc/tcg-target.c.inc @@ -2438,8 +2438,7 @@ bool tcg_target_has_memory_bswap(MemOp memop) * is required and fill in @h with the host address for the fast path. */ static TCGLabelQemuLdst *prepare_host_addr(TCGContext *s, HostAddress *h, - TCGReg addrlo, TCGReg addrhi, - MemOpIdx oi, bool is_ld) + TCGReg addr, MemOpIdx oi, bool is_ld) { TCGType addr_type = s->addr_type; TCGLabelQemuLdst *ldst = NULL; @@ -2474,8 +2473,7 @@ static TCGLabelQemuLdst *prepare_host_addr(TCGContext *s, HostAddress *h, ldst = new_ldst_label(s); ldst->is_ld = is_ld; ldst->oi = oi; - ldst->addrlo_reg = addrlo; - ldst->addrhi_reg = addrhi; + ldst->addrlo_reg = addr; /* Load tlb_mask[mmu_idx] and tlb_table[mmu_idx]. */ tcg_out_ld(s, TCG_TYPE_PTR, TCG_REG_TMP1, TCG_AREG0, mask_off); @@ -2483,10 +2481,10 @@ static TCGLabelQemuLdst *prepare_host_addr(TCGContext *s, HostAddress *h, /* Extract the page index, shifted into place for tlb index. 
*/ if (TCG_TARGET_REG_BITS == 32) { - tcg_out_shri32(s, TCG_REG_R0, addrlo, + tcg_out_shri32(s, TCG_REG_R0, addr, s->page_bits - CPU_TLB_ENTRY_BITS); } else { - tcg_out_shri64(s, TCG_REG_R0, addrlo, + tcg_out_shri64(s, TCG_REG_R0, addr, s->page_bits - CPU_TLB_ENTRY_BITS); } tcg_out32(s, AND | SAB(TCG_REG_TMP1, TCG_REG_TMP1, TCG_REG_R0)); @@ -2534,10 +2532,10 @@ static TCGLabelQemuLdst *prepare_host_addr(TCGContext *s, HostAddress *h, if (a_bits < s_bits) { a_bits = s_bits; } - tcg_out_rlw(s, RLWINM, TCG_REG_R0, addrlo, 0, + tcg_out_rlw(s, RLWINM, TCG_REG_R0, addr, 0, (32 - a_bits) & 31, 31 - s->page_bits); } else { - TCGReg t = addrlo; + TCGReg t = addr; /* * If the access is unaligned, we need to make sure we fail if we @@ -2566,30 +2564,8 @@ static TCGLabelQemuLdst *prepare_host_addr(TCGContext *s, HostAddress *h, } } - if (TCG_TARGET_REG_BITS == 32 && addr_type != TCG_TYPE_I32) { - /* Low part comparison into cr7. */ - tcg_out_cmp(s, TCG_COND_EQ, TCG_REG_R0, TCG_REG_TMP2, - 0, 7, TCG_TYPE_I32); - - /* Load the high part TLB comparator into TMP2. */ - tcg_out_ld(s, TCG_TYPE_I32, TCG_REG_TMP2, TCG_REG_TMP1, - cmp_off + 4 * !HOST_BIG_ENDIAN); - - /* Load addend, deferred for this case. */ - tcg_out_ld(s, TCG_TYPE_PTR, TCG_REG_TMP1, TCG_REG_TMP1, - offsetof(CPUTLBEntry, addend)); - - /* High part comparison into cr6. */ - tcg_out_cmp(s, TCG_COND_EQ, addrhi, TCG_REG_TMP2, - 0, 6, TCG_TYPE_I32); - - /* Combine comparisons into cr0. */ - tcg_out32(s, CRAND | BT(0, CR_EQ) | BA(6, CR_EQ) | BB(7, CR_EQ)); - } else { - /* Full comparison into cr0. */ - tcg_out_cmp(s, TCG_COND_EQ, TCG_REG_R0, TCG_REG_TMP2, - 0, 0, addr_type); - } + /* Full comparison into cr0. */ + tcg_out_cmp(s, TCG_COND_EQ, TCG_REG_R0, TCG_REG_TMP2, 0, 0, addr_type); /* Load a pointer into the current opcode w/conditional branch-link. */ ldst->label_ptr[0] = s->code_ptr; @@ -2601,12 +2577,11 @@ static TCGLabelQemuLdst *prepare_host_addr(TCGContext *s, HostAddress *h, ldst = new_ldst_label(s); ldst->is_ld = is_ld; ldst->oi = oi; - ldst->addrlo_reg = addrlo; - ldst->addrhi_reg = addrhi; + ldst->addrlo_reg = addr; /* We are expecting a_bits to max out at 7, much lower than ANDI. */ tcg_debug_assert(a_bits < 16); - tcg_out32(s, ANDI | SAI(addrlo, TCG_REG_R0, (1 << a_bits) - 1)); + tcg_out32(s, ANDI | SAI(addr, TCG_REG_R0, (1 << a_bits) - 1)); ldst->label_ptr[0] = s->code_ptr; tcg_out32(s, BC | BI(0, CR_EQ) | BO_COND_FALSE | LK); @@ -2617,24 +2592,23 @@ static TCGLabelQemuLdst *prepare_host_addr(TCGContext *s, HostAddress *h, if (TCG_TARGET_REG_BITS == 64 && addr_type == TCG_TYPE_I32) { /* Zero-extend the guest address for use in the host address. 
*/ - tcg_out_ext32u(s, TCG_REG_TMP2, addrlo); + tcg_out_ext32u(s, TCG_REG_TMP2, addr); h->index = TCG_REG_TMP2; } else { - h->index = addrlo; + h->index = addr; } return ldst; } static void tcg_out_qemu_ld(TCGContext *s, TCGReg datalo, TCGReg datahi, - TCGReg addrlo, TCGReg addrhi, - MemOpIdx oi, TCGType data_type) + TCGReg addr, MemOpIdx oi, TCGType data_type) { MemOp opc = get_memop(oi); TCGLabelQemuLdst *ldst; HostAddress h; - ldst = prepare_host_addr(s, &h, addrlo, addrhi, oi, true); + ldst = prepare_host_addr(s, &h, addr, oi, true); if (TCG_TARGET_REG_BITS == 32 && (opc & MO_SIZE) == MO_64) { if (opc & MO_BSWAP) { @@ -2678,14 +2652,13 @@ static void tcg_out_qemu_ld(TCGContext *s, TCGReg datalo, TCGReg datahi, } static void tcg_out_qemu_st(TCGContext *s, TCGReg datalo, TCGReg datahi, - TCGReg addrlo, TCGReg addrhi, - MemOpIdx oi, TCGType data_type) + TCGReg addr, MemOpIdx oi, TCGType data_type) { MemOp opc = get_memop(oi); TCGLabelQemuLdst *ldst; HostAddress h; - ldst = prepare_host_addr(s, &h, addrlo, addrhi, oi, false); + ldst = prepare_host_addr(s, &h, addr, oi, false); if (TCG_TARGET_REG_BITS == 32 && (opc & MO_SIZE) == MO_64) { if (opc & MO_BSWAP) { @@ -2729,7 +2702,7 @@ static void tcg_out_qemu_ldst_i128(TCGContext *s, TCGReg datalo, TCGReg datahi, uint32_t insn; TCGReg index; - ldst = prepare_host_addr(s, &h, addr_reg, -1, oi, is_ld); + ldst = prepare_host_addr(s, &h, addr_reg, oi, is_ld); /* Compose the final address, as LQ/STQ have no indexing. */ index = h.index; @@ -3309,14 +3282,13 @@ static void tcg_out_op(TCGContext *s, TCGOpcode opc, TCGType type, break; case INDEX_op_qemu_ld_i32: - tcg_out_qemu_ld(s, args[0], -1, args[1], -1, args[2], TCG_TYPE_I32); + tcg_out_qemu_ld(s, args[0], -1, args[1], args[2], TCG_TYPE_I32); break; case INDEX_op_qemu_ld_i64: if (TCG_TARGET_REG_BITS == 64) { - tcg_out_qemu_ld(s, args[0], -1, args[1], -1, - args[2], TCG_TYPE_I64); + tcg_out_qemu_ld(s, args[0], -1, args[1], args[2], TCG_TYPE_I64); } else { - tcg_out_qemu_ld(s, args[0], args[1], args[2], -1, + tcg_out_qemu_ld(s, args[0], args[1], args[2], args[3], TCG_TYPE_I64); } break; @@ -3326,14 +3298,13 @@ static void tcg_out_op(TCGContext *s, TCGOpcode opc, TCGType type, break; case INDEX_op_qemu_st_i32: - tcg_out_qemu_st(s, args[0], -1, args[1], -1, args[2], TCG_TYPE_I32); + tcg_out_qemu_st(s, args[0], -1, args[1], args[2], TCG_TYPE_I32); break; case INDEX_op_qemu_st_i64: if (TCG_TARGET_REG_BITS == 64) { - tcg_out_qemu_st(s, args[0], -1, args[1], -1, - args[2], TCG_TYPE_I64); + tcg_out_qemu_st(s, args[0], -1, args[1], args[2], TCG_TYPE_I64); } else { - tcg_out_qemu_st(s, args[0], args[1], args[2], -1, + tcg_out_qemu_st(s, args[0], args[1], args[2], args[3], TCG_TYPE_I64); } break; From patchwork Wed Feb 5 04:03:37 2025 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: Richard Henderson X-Patchwork-Id: 13960555 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from lists.gnu.org (lists.gnu.org [209.51.188.17]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id 48472C02192 for ; Wed, 5 Feb 2025 04:05:53 +0000 (UTC) Received: from localhost ([::1] helo=lists1p.gnu.org) by lists.gnu.org with esmtp (Exim 4.90_1) (envelope-from ) id 1tfWdr-0008Kq-DM; Tue, 04 Feb 2025 23:03:59 -0500 Received: from eggs.gnu.org ([2001:470:142:3::10]) by lists.gnu.org with 
[71.212.39.66]) by smtp.gmail.com with ESMTPSA id d9443c01a7336-21f054eb89esm22380325ad.79.2025.02.04.20.03.49 for (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Tue, 04 Feb 2025 20:03:49 -0800 (PST) From: Richard Henderson To: qemu-devel@nongnu.org Subject: [PATCH 07/11] tcg: Replace addr{lo, hi}_reg with addr_reg in TCGLabelQemuLdst Date: Tue, 4 Feb 2025 20:03:37 -0800 Message-ID: <20250205040341.2056361-8-richard.henderson@linaro.org> X-Mailer: git-send-email 2.43.0 In-Reply-To: <20250205040341.2056361-1-richard.henderson@linaro.org> References: <20250205040341.2056361-1-richard.henderson@linaro.org> MIME-Version: 1.0 Received-SPF: pass client-ip=2607:f8b0:4864:20::62a; envelope-from=richard.henderson@linaro.org; helo=mail-pl1-x62a.google.com X-Spam_score_int: -20 X-Spam_score: -2.1 X-Spam_bar: -- X-Spam_report: (-2.1 / 5.0 requ) BAYES_00=-1.9, DKIM_SIGNED=0.1, DKIM_VALID=-0.1, DKIM_VALID_AU=-0.1, DKIM_VALID_EF=-0.1, RCVD_IN_DNSWL_NONE=-0.0001, SPF_HELO_NONE=0.001, SPF_PASS=-0.001 autolearn=ham autolearn_force=no X-Spam_action: no action X-BeenThere: qemu-devel@nongnu.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: qemu-devel-bounces+qemu-devel=archiver.kernel.org@nongnu.org Sender: qemu-devel-bounces+qemu-devel=archiver.kernel.org@nongnu.org There is now always only one guest address register. Signed-off-by: Richard Henderson Reviewed-by: Philippe Mathieu-Daudé --- tcg/tcg.c | 18 +++++++++--------- tcg/aarch64/tcg-target.c.inc | 4 ++-- tcg/arm/tcg-target.c.inc | 4 ++-- tcg/i386/tcg-target.c.inc | 4 ++-- tcg/loongarch64/tcg-target.c.inc | 4 ++-- tcg/mips/tcg-target.c.inc | 4 ++-- tcg/ppc/tcg-target.c.inc | 4 ++-- tcg/riscv/tcg-target.c.inc | 4 ++-- tcg/s390x/tcg-target.c.inc | 4 ++-- tcg/sparc64/tcg-target.c.inc | 4 ++-- 10 files changed, 27 insertions(+), 27 deletions(-) diff --git a/tcg/tcg.c b/tcg/tcg.c index 295004b74f..57f72b78d4 100644 --- a/tcg/tcg.c +++ b/tcg/tcg.c @@ -100,8 +100,7 @@ struct TCGLabelQemuLdst { bool is_ld; /* qemu_ld: true, qemu_st: false */ MemOpIdx oi; TCGType type; /* result type of a load */ - TCGReg addrlo_reg; /* reg index for low word of guest virtual addr */ - TCGReg addrhi_reg; /* reg index for high word of guest virtual addr */ + TCGReg addr_reg; /* reg index for guest virtual addr */ TCGReg datalo_reg; /* reg index for low word to be loaded or stored */ TCGReg datahi_reg; /* reg index for high word to be loaded or stored */ const tcg_insn_unit *raddr; /* addr of the next IR of qemu_ld/st IR */ @@ -6067,7 +6066,7 @@ static void tcg_out_ld_helper_args(TCGContext *s, const TCGLabelQemuLdst *ldst, */ tcg_out_helper_add_mov(mov, loc + HOST_BIG_ENDIAN, TCG_TYPE_I32, TCG_TYPE_I32, - ldst->addrlo_reg, -1); + ldst->addr_reg, -1); tcg_out_helper_load_slots(s, 1, mov, parm); tcg_out_helper_load_imm(s, loc[!HOST_BIG_ENDIAN].arg_slot, @@ -6075,7 +6074,7 @@ static void tcg_out_ld_helper_args(TCGContext *s, const TCGLabelQemuLdst *ldst, next_arg += 2; } else { nmov = tcg_out_helper_add_mov(mov, loc, TCG_TYPE_I64, s->addr_type, - ldst->addrlo_reg, ldst->addrhi_reg); + ldst->addr_reg, -1); tcg_out_helper_load_slots(s, nmov, mov, parm); next_arg += nmov; } @@ -6232,21 +6231,22 @@ static void tcg_out_st_helper_args(TCGContext *s, const TCGLabelQemuLdst *ldst, /* Handle addr argument. 
*/ loc = &info->in[next_arg]; - if (TCG_TARGET_REG_BITS == 32 && s->addr_type == TCG_TYPE_I32) { + tcg_debug_assert(s->addr_type <= TCG_TYPE_REG); + if (TCG_TARGET_REG_BITS == 32) { /* - * 32-bit host with 32-bit guest: zero-extend the guest address + * 32-bit host (and thus 32-bit guest): zero-extend the guest address * to 64-bits for the helper by storing the low part. Later, * after we have processed the register inputs, we will load a * zero for the high part. */ tcg_out_helper_add_mov(mov, loc + HOST_BIG_ENDIAN, TCG_TYPE_I32, TCG_TYPE_I32, - ldst->addrlo_reg, -1); + ldst->addr_reg, -1); next_arg += 2; nmov += 1; } else { n = tcg_out_helper_add_mov(mov, loc, TCG_TYPE_I64, s->addr_type, - ldst->addrlo_reg, ldst->addrhi_reg); + ldst->addr_reg, -1); next_arg += n; nmov += n; } @@ -6294,7 +6294,7 @@ static void tcg_out_st_helper_args(TCGContext *s, const TCGLabelQemuLdst *ldst, g_assert_not_reached(); } - if (TCG_TARGET_REG_BITS == 32 && s->addr_type == TCG_TYPE_I32) { + if (TCG_TARGET_REG_BITS == 32) { /* Zero extend the address by loading a zero for the high part. */ loc = &info->in[1 + !HOST_BIG_ENDIAN]; tcg_out_helper_load_imm(s, loc->arg_slot, TCG_TYPE_I32, 0, parm); diff --git a/tcg/aarch64/tcg-target.c.inc b/tcg/aarch64/tcg-target.c.inc index 45dc2c649b..6f383c1592 100644 --- a/tcg/aarch64/tcg-target.c.inc +++ b/tcg/aarch64/tcg-target.c.inc @@ -1775,7 +1775,7 @@ static TCGLabelQemuLdst *prepare_host_addr(TCGContext *s, HostAddress *h, ldst = new_ldst_label(s); ldst->is_ld = is_ld; ldst->oi = oi; - ldst->addrlo_reg = addr_reg; + ldst->addr_reg = addr_reg; mask_type = (s->page_bits + s->tlb_dyn_max_bits > 32 ? TCG_TYPE_I64 : TCG_TYPE_I32); @@ -1837,7 +1837,7 @@ static TCGLabelQemuLdst *prepare_host_addr(TCGContext *s, HostAddress *h, ldst->is_ld = is_ld; ldst->oi = oi; - ldst->addrlo_reg = addr_reg; + ldst->addr_reg = addr_reg; /* tst addr, #mask */ tcg_out_logicali(s, I3404_ANDSI, 0, TCG_REG_XZR, addr_reg, a_mask); diff --git a/tcg/arm/tcg-target.c.inc b/tcg/arm/tcg-target.c.inc index 252d9aa7e5..865aab0ccd 100644 --- a/tcg/arm/tcg-target.c.inc +++ b/tcg/arm/tcg-target.c.inc @@ -1491,7 +1491,7 @@ static TCGLabelQemuLdst *prepare_host_addr(TCGContext *s, HostAddress *h, ldst = new_ldst_label(s); ldst->is_ld = is_ld; ldst->oi = oi; - ldst->addrlo_reg = addr; + ldst->addr_reg = addr; /* Load cpu->neg.tlb.f[mmu_idx].{mask,table} into {r0,r1}. 
*/ QEMU_BUILD_BUG_ON(offsetof(CPUTLBDescFast, mask) != 0); @@ -1558,7 +1558,7 @@ static TCGLabelQemuLdst *prepare_host_addr(TCGContext *s, HostAddress *h, ldst = new_ldst_label(s); ldst->is_ld = is_ld; ldst->oi = oi; - ldst->addrlo_reg = addr; + ldst->addr_reg = addr; /* We are expecting alignment to max out at 7 */ tcg_debug_assert(a_mask <= 0xff); diff --git a/tcg/i386/tcg-target.c.inc b/tcg/i386/tcg-target.c.inc index b33fe7fe23..cfea4c496d 100644 --- a/tcg/i386/tcg-target.c.inc +++ b/tcg/i386/tcg-target.c.inc @@ -2201,7 +2201,7 @@ static TCGLabelQemuLdst *prepare_host_addr(TCGContext *s, HostAddress *h, ldst = new_ldst_label(s); ldst->is_ld = is_ld; ldst->oi = oi; - ldst->addrlo_reg = addr; + ldst->addr_reg = addr; if (TCG_TARGET_REG_BITS == 64) { ttype = s->addr_type; @@ -2257,7 +2257,7 @@ static TCGLabelQemuLdst *prepare_host_addr(TCGContext *s, HostAddress *h, ldst = new_ldst_label(s); ldst->is_ld = is_ld; ldst->oi = oi; - ldst->addrlo_reg = addr; + ldst->addr_reg = addr; /* jne slow_path */ jcc = tcg_out_cmp(s, TCG_COND_TSTNE, addr, a_mask, true, false); diff --git a/tcg/loongarch64/tcg-target.c.inc b/tcg/loongarch64/tcg-target.c.inc index 4f32bf3e97..dd67e8f6bc 100644 --- a/tcg/loongarch64/tcg-target.c.inc +++ b/tcg/loongarch64/tcg-target.c.inc @@ -1010,7 +1010,7 @@ static TCGLabelQemuLdst *prepare_host_addr(TCGContext *s, HostAddress *h, ldst = new_ldst_label(s); ldst->is_ld = is_ld; ldst->oi = oi; - ldst->addrlo_reg = addr_reg; + ldst->addr_reg = addr_reg; tcg_out_ld(s, TCG_TYPE_PTR, TCG_REG_TMP0, TCG_AREG0, mask_ofs); tcg_out_ld(s, TCG_TYPE_PTR, TCG_REG_TMP1, TCG_AREG0, table_ofs); @@ -1055,7 +1055,7 @@ static TCGLabelQemuLdst *prepare_host_addr(TCGContext *s, HostAddress *h, ldst->is_ld = is_ld; ldst->oi = oi; - ldst->addrlo_reg = addr_reg; + ldst->addr_reg = addr_reg; /* * Without micro-architecture details, we don't know which of diff --git a/tcg/mips/tcg-target.c.inc b/tcg/mips/tcg-target.c.inc index 153ce1f3c3..d744b853cd 100644 --- a/tcg/mips/tcg-target.c.inc +++ b/tcg/mips/tcg-target.c.inc @@ -1244,7 +1244,7 @@ static TCGLabelQemuLdst *prepare_host_addr(TCGContext *s, HostAddress *h, ldst = new_ldst_label(s); ldst->is_ld = is_ld; ldst->oi = oi; - ldst->addrlo_reg = addr; + ldst->addr_reg = addr; /* Load tlb_mask[mmu_idx] and tlb_table[mmu_idx]. */ tcg_out_ld(s, TCG_TYPE_PTR, TCG_TMP0, TCG_AREG0, mask_off); @@ -1309,7 +1309,7 @@ static TCGLabelQemuLdst *prepare_host_addr(TCGContext *s, HostAddress *h, ldst->is_ld = is_ld; ldst->oi = oi; - ldst->addrlo_reg = addr; + ldst->addr_reg = addr; /* We are expecting a_bits to max out at 7, much lower than ANDI. */ tcg_debug_assert(a_bits < 16); diff --git a/tcg/ppc/tcg-target.c.inc b/tcg/ppc/tcg-target.c.inc index 74b93f4b57..2d16807ec7 100644 --- a/tcg/ppc/tcg-target.c.inc +++ b/tcg/ppc/tcg-target.c.inc @@ -2473,7 +2473,7 @@ static TCGLabelQemuLdst *prepare_host_addr(TCGContext *s, HostAddress *h, ldst = new_ldst_label(s); ldst->is_ld = is_ld; ldst->oi = oi; - ldst->addrlo_reg = addr; + ldst->addr_reg = addr; /* Load tlb_mask[mmu_idx] and tlb_table[mmu_idx]. */ tcg_out_ld(s, TCG_TYPE_PTR, TCG_REG_TMP1, TCG_AREG0, mask_off); @@ -2577,7 +2577,7 @@ static TCGLabelQemuLdst *prepare_host_addr(TCGContext *s, HostAddress *h, ldst = new_ldst_label(s); ldst->is_ld = is_ld; ldst->oi = oi; - ldst->addrlo_reg = addr; + ldst->addr_reg = addr; /* We are expecting a_bits to max out at 7, much lower than ANDI. 
*/ tcg_debug_assert(a_bits < 16); diff --git a/tcg/riscv/tcg-target.c.inc b/tcg/riscv/tcg-target.c.inc index 55a3398712..689fbea0df 100644 --- a/tcg/riscv/tcg-target.c.inc +++ b/tcg/riscv/tcg-target.c.inc @@ -1727,7 +1727,7 @@ static TCGLabelQemuLdst *prepare_host_addr(TCGContext *s, TCGReg *pbase, ldst = new_ldst_label(s); ldst->is_ld = is_ld; ldst->oi = oi; - ldst->addrlo_reg = addr_reg; + ldst->addr_reg = addr_reg; init_setting_vtype(s); @@ -1790,7 +1790,7 @@ static TCGLabelQemuLdst *prepare_host_addr(TCGContext *s, TCGReg *pbase, ldst = new_ldst_label(s); ldst->is_ld = is_ld; ldst->oi = oi; - ldst->addrlo_reg = addr_reg; + ldst->addr_reg = addr_reg; init_setting_vtype(s); diff --git a/tcg/s390x/tcg-target.c.inc b/tcg/s390x/tcg-target.c.inc index 6786e7b316..b2e1cd60ff 100644 --- a/tcg/s390x/tcg-target.c.inc +++ b/tcg/s390x/tcg-target.c.inc @@ -1920,7 +1920,7 @@ static TCGLabelQemuLdst *prepare_host_addr(TCGContext *s, HostAddress *h, ldst = new_ldst_label(s); ldst->is_ld = is_ld; ldst->oi = oi; - ldst->addrlo_reg = addr_reg; + ldst->addr_reg = addr_reg; tcg_out_sh64(s, RSY_SRLG, TCG_TMP0, addr_reg, TCG_REG_NONE, s->page_bits - CPU_TLB_ENTRY_BITS); @@ -1974,7 +1974,7 @@ static TCGLabelQemuLdst *prepare_host_addr(TCGContext *s, HostAddress *h, ldst = new_ldst_label(s); ldst->is_ld = is_ld; ldst->oi = oi; - ldst->addrlo_reg = addr_reg; + ldst->addr_reg = addr_reg; tcg_debug_assert(a_mask <= 0xffff); tcg_out_insn(s, RI, TMLL, addr_reg, a_mask); diff --git a/tcg/sparc64/tcg-target.c.inc b/tcg/sparc64/tcg-target.c.inc index ea0a3b8692..527af5665d 100644 --- a/tcg/sparc64/tcg-target.c.inc +++ b/tcg/sparc64/tcg-target.c.inc @@ -1127,7 +1127,7 @@ static TCGLabelQemuLdst *prepare_host_addr(TCGContext *s, HostAddress *h, ldst = new_ldst_label(s); ldst->is_ld = is_ld; ldst->oi = oi; - ldst->addrlo_reg = addr_reg; + ldst->addr_reg = addr_reg; ldst->label_ptr[0] = s->code_ptr; /* bne,pn %[xi]cc, label0 */ @@ -1147,7 +1147,7 @@ static TCGLabelQemuLdst *prepare_host_addr(TCGContext *s, HostAddress *h, ldst = new_ldst_label(s); ldst->is_ld = is_ld; ldst->oi = oi; - ldst->addrlo_reg = addr_reg; + ldst->addr_reg = addr_reg; ldst->label_ptr[0] = s->code_ptr; /* bne,pn %icc, label0 */ From patchwork Wed Feb 5 04:03:38 2025 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: Richard Henderson X-Patchwork-Id: 13960550 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from lists.gnu.org (lists.gnu.org [209.51.188.17]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id F1750C02197 for ; Wed, 5 Feb 2025 04:04:56 +0000 (UTC) Received: from localhost ([::1] helo=lists1p.gnu.org) by lists.gnu.org with esmtp (Exim 4.90_1) (envelope-from ) id 1tfWdw-0008Lg-Ga; Tue, 04 Feb 2025 23:04:06 -0500 Received: from eggs.gnu.org ([2001:470:142:3::10]) by lists.gnu.org with esmtps (TLS1.2:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.90_1) (envelope-from ) id 1tfWdm-0008IM-Jg for qemu-devel@nongnu.org; Tue, 04 Feb 2025 23:03:56 -0500 Received: from mail-pl1-x633.google.com ([2607:f8b0:4864:20::633]) by eggs.gnu.org with esmtps (TLS1.2:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.90_1) (envelope-from ) id 1tfWdk-0007BS-CA for qemu-devel@nongnu.org; Tue, 04 Feb 2025 23:03:53 -0500 Received: by mail-pl1-x633.google.com with SMTP id d9443c01a7336-2162c0f6a39so7762195ad.0 for ; Tue, 04 Feb 2025 
20:03:51 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linaro.org; s=google; t=1738728231; x=1739333031; darn=nongnu.org; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:to:from:from:to:cc:subject:date:message-id :reply-to; bh=ytZ7noDAYkPUTObh2VvZQ/o6CBNJfl8UBbSNHzJglWk=; b=Uwufksxqw6ocSzACK8/tQh5ZoBg1MQNYoqmhXfj+j2k1+6cT9jE1vovWYtkFd5CUMq TMgjQm6bZ1HBrGzLqizV6nKS3S5dk9Ook/jaM7H216JLhLXOcCmFk0+gMC9qrkR76uGd AY3E/9hfgo+JDhVihbjLPUtVqE04U/ARA/HPAPfyH72FjVdCDTnqc0g4NNOrm6f7bXiV 752kdT5aSm1e0B9yAbNomwaU9S0ok0/xqzy1maFHJK+brqs8uAmLRwdUhV0YKboSb1Pv XHNwtU7Uh0Lri/cus+hNe/ZHfMffOTneiR6h51lFWQt8JGCeoCqdlM0tMgY9EL/6LaAB 3U4A== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1738728231; x=1739333031; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=ytZ7noDAYkPUTObh2VvZQ/o6CBNJfl8UBbSNHzJglWk=; b=YLxJbT9SZn6wkpwKBjjgiCgGk7J62QqYMfJm2Hsx1IBTl1acHiVyQ8SeBPb/sQQjo0 z4XS/HGx+1q7OPCVASdy9LdZnxBHpVQzRPUrcyDtsXj+7jjctYr2sWyTRh3YrifSYV4c hBJdCGXgabG+wiJ7CRpt+uWIKpzBMJ0i8I8oULIhbO2Ogm/5+NuGgzYJOU0KuAqWDJU5 FCbxrsZGi37G/eDWO4N5ykalzHSpKxj1BFiAmfCznOlKwVvQilSgA8slDeyd7ajnpG0F S+Ar1MxPMdB7r6EgBXQppTPnPyL4iiZbtcVjXmXFjJO08JgkA2jBfxP+tYLeW54nM8mO 3UvA== X-Gm-Message-State: AOJu0Yzq0ZGpEkV1H8Y86QnoZW9/uLbFxxYmvbreBfw2bGfHXucQtsDW 0pR2Gi4VRsTZHgPmFNw8BJVLVtHXixmOCvxtgSaSuAAaWNwNbENcnmXvnHL3RrocUqotUHNikKB p X-Gm-Gg: ASbGncthak4P84+7HCdHcBWjp2raRz+wvgVMmGwufqENdW+HxH6xyenDSRvTfTq0aFP h5Bo0oJXw4LsOCDY5iBRLDpRIWoSXkMqSrk3KgbNEuCI6fCGj31b1rfR6WH4WuNjpAdzsfd5/p+ KExUsgTa41j1Cjo2wZZGlH4ZgwCH42Sv9/1hwucSsSlBVb60vjkEkP6c49gOxlIT2q5oKQkTuSk W8Y01rn1faAltE5fvUQ6dECgYmFzcBRAqb7yyb68ghLM76mHBg4Ce/u1kEGmB7jDmUNeNy78kEk tec2WyL5K08JxDNxEN01R2LRNLihDhpgufs1pCYueCf241I= X-Google-Smtp-Source: AGHT+IFyaCx056mEUL0v33/F8ebkpVoyCERJK8WEcdHYgnZy4mezvEJojL2ng/1Bcl3BxnQ6w0jQeA== X-Received: by 2002:a17:902:f681:b0:215:742e:5cff with SMTP id d9443c01a7336-21f17ac8ce8mr20556575ad.16.1738728230780; Tue, 04 Feb 2025 20:03:50 -0800 (PST) Received: from stoup.. (71-212-39-66.tukw.qwest.net. 
From: Richard Henderson To: qemu-devel@nongnu.org Subject: [PATCH 08/11] plugins: Fix qemu_plugin_read_memory_vaddr parameters Date: Tue, 4 Feb 2025 20:03:38 -0800 Message-ID: <20250205040341.2056361-9-richard.henderson@linaro.org> In-Reply-To: <20250205040341.2056361-1-richard.henderson@linaro.org> References: <20250205040341.2056361-1-richard.henderson@linaro.org>
The declaration uses uint64_t for addr.
Signed-off-by: Richard Henderson Reviewed-by: Philippe Mathieu-Daudé --- plugins/api.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/plugins/api.c b/plugins/api.c index 4110cfaa23..cf8cdf076a 100644 --- a/plugins/api.c +++ b/plugins/api.c @@ -561,7 +561,7 @@ GArray *qemu_plugin_get_registers(void) return create_register_handles(regs); } -bool qemu_plugin_read_memory_vaddr(vaddr addr, GByteArray *data, size_t len) +bool qemu_plugin_read_memory_vaddr(uint64_t addr, GByteArray *data, size_t len) { g_assert(current_cpu);
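The hunk above makes the definition in plugins/api.c repeat the fixed-width type that the public plugin header already declares. A minimal sketch of the constraint follows, assuming glib for GByteArray; the function body and surrounding context are stand-ins, not code from the tree:

    #include <glib.h>
    #include <stdbool.h>
    #include <stdint.h>

    /* Declaration as the public plugin header spells it: a fixed-width
     * uint64_t, independent of the host pointer size. */
    bool qemu_plugin_read_memory_vaddr(uint64_t addr, GByteArray *data, size_t len);

    /* The definition must repeat those parameter types exactly.  The old
     * definition used the internal vaddr type, which only compiled because
     * vaddr happened to be uint64_t; once vaddr becomes uintptr_t (patch 10),
     * a 32-bit host would see conflicting prototypes, hence the change. */
    bool qemu_plugin_read_memory_vaddr(uint64_t addr, GByteArray *data, size_t len)
    {
        (void)addr;
        (void)len;
        return data != NULL;   /* stand-in body; the real one reads guest memory */
    }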
From patchwork Wed Feb 5 04:03:39 2025 X-Patchwork-Submitter: Richard Henderson X-Patchwork-Id: 13960553
From: Richard Henderson To: qemu-devel@nongnu.org Subject: [PATCH 09/11] accel/tcg: Fix tlb_set_page_with_attrs, tlb_set_page Date: Tue, 4 Feb 2025 20:03:39 -0800 Message-ID: <20250205040341.2056361-10-richard.henderson@linaro.org> In-Reply-To: <20250205040341.2056361-1-richard.henderson@linaro.org> References: <20250205040341.2056361-1-richard.henderson@linaro.org>
The declarations use vaddr for size.
Signed-off-by: Richard Henderson --- accel/tcg/cputlb.c | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/accel/tcg/cputlb.c b/accel/tcg/cputlb.c index 17e2251695..75d075d044 100644 --- a/accel/tcg/cputlb.c +++ b/accel/tcg/cputlb.c @@ -1193,7 +1193,7 @@ void tlb_set_page_full(CPUState *cpu, int mmu_idx, void tlb_set_page_with_attrs(CPUState *cpu, vaddr addr, hwaddr paddr, MemTxAttrs attrs, int prot, - int mmu_idx, uint64_t size) + int mmu_idx, vaddr size) { CPUTLBEntryFull full = { .phys_addr = paddr, @@ -1208,7 +1208,7 @@ void tlb_set_page_with_attrs(CPUState *cpu, vaddr addr, void tlb_set_page(CPUState *cpu, vaddr addr, hwaddr paddr, int prot, - int mmu_idx, uint64_t size) + int mmu_idx, vaddr size) { tlb_set_page_with_attrs(cpu, addr, paddr, MEMTXATTRS_UNSPECIFIED, prot, mmu_idx, size);
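Both wrappers keep forwarding to tlb_set_page_with_attrs(), which builds a CPUTLBEntryFull; only the size parameter type changes so the definitions agree with the vaddr-typed declarations in the headers. A rough sketch of the call shape from a target's TLB-fill path, assuming QEMU's internal headers; every name here except tlb_set_page itself is illustrative:

    /* Fragment, not a standalone program: it assumes QEMU's internal headers
     * that declare CPUState, hwaddr, vaddr, TARGET_PAGE_* and tlb_set_page().
     * The helper name and its arguments are made up for illustration. */
    static void demo_fill_one_page(CPUState *cs, vaddr addr, hwaddr paddr,
                                   int prot, int mmu_idx)
    {
        /* The size argument is now vaddr too: a mapping can never be wider
         * than the guest address space it belongs to. */
        tlb_set_page(cs, addr & TARGET_PAGE_MASK, paddr & TARGET_PAGE_MASK,
                     prot, mmu_idx, TARGET_PAGE_SIZE);
    }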
From patchwork Wed Feb 5 04:03:40 2025 X-Patchwork-Submitter: Richard Henderson X-Patchwork-Id: 13960554
From: Richard Henderson To: qemu-devel@nongnu.org Subject: [PATCH 10/11] include/exec: Change vaddr to uintptr_t Date: Tue, 4 Feb 2025 20:03:40 -0800 Message-ID: <20250205040341.2056361-11-richard.henderson@linaro.org> In-Reply-To: <20250205040341.2056361-1-richard.henderson@linaro.org> References: <20250205040341.2056361-1-richard.henderson@linaro.org>
Since we no longer support 64-bit guests on 32-bit hosts, we can use a 32-bit type on a 32-bit host.
Signed-off-by: Richard Henderson Reviewed-by: Philippe Mathieu-Daudé --- include/exec/vaddr.h | 16 +++++++++------- 1 file changed, 9 insertions(+), 7 deletions(-) diff --git a/include/exec/vaddr.h b/include/exec/vaddr.h index b9844afc77..28bec632fb 100644 --- a/include/exec/vaddr.h +++ b/include/exec/vaddr.h @@ -6,13 +6,15 @@ /** * vaddr: * Type wide enough to contain any #target_ulong virtual address. + * We do not support 64-bit guest on 32-host and detect at configure time. + * Therefore, a host pointer width will always fit a guest pointer.
*/ -typedef uint64_t vaddr; -#define VADDR_PRId PRId64 -#define VADDR_PRIu PRIu64 -#define VADDR_PRIo PRIo64 -#define VADDR_PRIx PRIx64 -#define VADDR_PRIX PRIX64 -#define VADDR_MAX UINT64_MAX +typedef uintptr_t vaddr; +#define VADDR_PRId PRIdPTR +#define VADDR_PRIu PRIuPTR +#define VADDR_PRIo PRIoPTR +#define VADDR_PRIx PRIxPTR +#define VADDR_PRIX PRIXPTR +#define VADDR_MAX UINTPTR_MAX #endif
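With vaddr tracking the host pointer width, the printf format macros must track it as well; keeping PRIx64 would pass a 32-bit value where the conversion expects 64 bits. Below is a small self-contained illustration of how the macros are meant to be used, mirroring (not including) the new vaddr.h definitions:

    #include <inttypes.h>
    #include <stdint.h>
    #include <stdio.h>

    /* Local mirror of the new definitions from include/exec/vaddr.h. */
    typedef uintptr_t vaddr;
    #define VADDR_PRIx PRIxPTR

    int main(void)
    {
        vaddr guest_pc = 0x8000;

        /* PRIxPTR expands to the right conversion for the host's uintptr_t,
         * so the same source line is correct on 32-bit and 64-bit hosts. */
        printf("guest pc = 0x%" VADDR_PRIx "\n", guest_pc);
        return 0;
    }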
From patchwork Wed Feb 5 04:03:41 2025 X-Patchwork-Submitter: Richard Henderson X-Patchwork-Id: 13960552
From: Richard Henderson To: qemu-devel@nongnu.org Subject: [PATCH 11/11] include/exec: Use uintptr_t in CPUTLBEntry Date: Tue, 4 Feb 2025 20:03:41 -0800 Message-ID: <20250205040341.2056361-12-richard.henderson@linaro.org> In-Reply-To: <20250205040341.2056361-1-richard.henderson@linaro.org> References: <20250205040341.2056361-1-richard.henderson@linaro.org>
Since we no longer support 64-bit guests on 32-bit hosts, we can use a 32-bit type on a 32-bit host. This shrinks the size of the structure to 16 bytes on a 32-bit host.
Signed-off-by: Richard Henderson Reviewed-by: Philippe Mathieu-Daudé --- include/exec/tlb-common.h | 10 +++++----- accel/tcg/cputlb.c | 21 ++++----------------- tcg/arm/tcg-target.c.inc | 1 - tcg/mips/tcg-target.c.inc | 9 ++------- tcg/ppc/tcg-target.c.inc | 21 +++++---------------- tcg/riscv/tcg-target.c.inc | 1 - 6 files changed, 16 insertions(+), 47 deletions(-) diff --git a/include/exec/tlb-common.h b/include/exec/tlb-common.h index dc5a5faa0b..03b5a8ffc7 100644 --- a/include/exec/tlb-common.h +++ b/include/exec/tlb-common.h @@ -19,14 +19,14 @@ #ifndef EXEC_TLB_COMMON_H #define EXEC_TLB_COMMON_H 1 -#define CPU_TLB_ENTRY_BITS 5 +#define CPU_TLB_ENTRY_BITS (HOST_LONG_BITS == 32 ? 4 : 5) /* Minimalized TLB entry for use by TCG fast path. */ typedef union CPUTLBEntry { struct { - uint64_t addr_read; - uint64_t addr_write; - uint64_t addr_code; + uintptr_t addr_read; + uintptr_t addr_write; + uintptr_t addr_code; /* * Addend to virtual address to get host address. IO accesses * use the corresponding iotlb value. @@ -37,7 +37,7 @@ typedef union CPUTLBEntry { * Padding to get a power of two size, as well as index * access to addr_{read,write,code}.
*/ - uint64_t addr_idx[(1 << CPU_TLB_ENTRY_BITS) / sizeof(uint64_t)]; + uintptr_t addr_idx[(1 << CPU_TLB_ENTRY_BITS) / sizeof(uintptr_t)]; } CPUTLBEntry; QEMU_BUILD_BUG_ON(sizeof(CPUTLBEntry) != (1 << CPU_TLB_ENTRY_BITS)); diff --git a/accel/tcg/cputlb.c b/accel/tcg/cputlb.c index 75d075d044..ad158050a1 100644 --- a/accel/tcg/cputlb.c +++ b/accel/tcg/cputlb.c @@ -104,22 +104,15 @@ static inline uint64_t tlb_read_idx(const CPUTLBEntry *entry, { /* Do not rearrange the CPUTLBEntry structure members. */ QEMU_BUILD_BUG_ON(offsetof(CPUTLBEntry, addr_read) != - MMU_DATA_LOAD * sizeof(uint64_t)); + MMU_DATA_LOAD * sizeof(uintptr_t)); QEMU_BUILD_BUG_ON(offsetof(CPUTLBEntry, addr_write) != - MMU_DATA_STORE * sizeof(uint64_t)); + MMU_DATA_STORE * sizeof(uintptr_t)); QEMU_BUILD_BUG_ON(offsetof(CPUTLBEntry, addr_code) != - MMU_INST_FETCH * sizeof(uint64_t)); + MMU_INST_FETCH * sizeof(uintptr_t)); -#if TARGET_LONG_BITS == 32 - /* Use qatomic_read, in case of addr_write; only care about low bits. */ - const uint32_t *ptr = (uint32_t *)&entry->addr_idx[access_type]; - ptr += HOST_BIG_ENDIAN; - return qatomic_read(ptr); -#else - const uint64_t *ptr = &entry->addr_idx[access_type]; + const uintptr_t *ptr = &entry->addr_idx[access_type]; /* ofs might correspond to .addr_write, so use qatomic_read */ return qatomic_read(ptr); -#endif } static inline uint64_t tlb_addr_write(const CPUTLBEntry *entry) @@ -899,14 +892,8 @@ static void tlb_reset_dirty_range_locked(CPUTLBEntry *tlb_entry, addr &= TARGET_PAGE_MASK; addr += tlb_entry->addend; if ((addr - start) < length) { -#if TARGET_LONG_BITS == 32 - uint32_t *ptr_write = (uint32_t *)&tlb_entry->addr_write; - ptr_write += HOST_BIG_ENDIAN; - qatomic_set(ptr_write, *ptr_write | TLB_NOTDIRTY); -#else qatomic_set(&tlb_entry->addr_write, tlb_entry->addr_write | TLB_NOTDIRTY); -#endif } } } diff --git a/tcg/arm/tcg-target.c.inc b/tcg/arm/tcg-target.c.inc index 865aab0ccd..f03bb76396 100644 --- a/tcg/arm/tcg-target.c.inc +++ b/tcg/arm/tcg-target.c.inc @@ -1506,7 +1506,6 @@ static TCGLabelQemuLdst *prepare_host_addr(TCGContext *s, HostAddress *h, * Add the tlb_table pointer, creating the CPUTLBEntry address in R1. * Load the tlb comparator into R2 and the fast path addend into R1. */ - QEMU_BUILD_BUG_ON(HOST_BIG_ENDIAN); if (cmp_off == 0) { tcg_out_ld32_rwb(s, COND_AL, TCG_REG_R2, TCG_REG_R1, TCG_REG_R0); } else { diff --git a/tcg/mips/tcg-target.c.inc b/tcg/mips/tcg-target.c.inc index d744b853cd..6fe7a77813 100644 --- a/tcg/mips/tcg-target.c.inc +++ b/tcg/mips/tcg-target.c.inc @@ -1262,13 +1262,8 @@ static TCGLabelQemuLdst *prepare_host_addr(TCGContext *s, HostAddress *h, /* Add the tlb_table pointer, creating the CPUTLBEntry address. */ tcg_out_opc_reg(s, ALIAS_PADD, TCG_TMP3, TCG_TMP3, TCG_TMP1); - if (TCG_TARGET_REG_BITS == 32 || addr_type == TCG_TYPE_I32) { - /* Load the (low half) tlb comparator. */ - tcg_out_ld(s, TCG_TYPE_I32, TCG_TMP0, TCG_TMP3, - cmp_off + HOST_BIG_ENDIAN * 4); - } else { - tcg_out_ld(s, TCG_TYPE_I64, TCG_TMP0, TCG_TMP3, cmp_off); - } + /* Load the tlb comparator. */ + tcg_out_ld(s, TCG_TYPE_PTR, TCG_TMP0, TCG_TMP3, cmp_off); if (TCG_TARGET_REG_BITS == 64 || addr_type == TCG_TYPE_I32) { /* Load the tlb addend for the fast path. 
*/ diff --git a/tcg/ppc/tcg-target.c.inc b/tcg/ppc/tcg-target.c.inc index 2d16807ec7..822925a19b 100644 --- a/tcg/ppc/tcg-target.c.inc +++ b/tcg/ppc/tcg-target.c.inc @@ -2490,27 +2490,16 @@ static TCGLabelQemuLdst *prepare_host_addr(TCGContext *s, HostAddress *h, tcg_out32(s, AND | SAB(TCG_REG_TMP1, TCG_REG_TMP1, TCG_REG_R0)); /* - * Load the (low part) TLB comparator into TMP2. + * Load the TLB comparator into TMP2. * For 64-bit host, always load the entire 64-bit slot for simplicity. * We will ignore the high bits with tcg_out_cmp(..., addr_type). */ - if (TCG_TARGET_REG_BITS == 64) { - if (cmp_off == 0) { - tcg_out32(s, LDUX | TAB(TCG_REG_TMP2, - TCG_REG_TMP1, TCG_REG_TMP2)); - } else { - tcg_out32(s, ADD | TAB(TCG_REG_TMP1, - TCG_REG_TMP1, TCG_REG_TMP2)); - tcg_out_ld(s, TCG_TYPE_I64, TCG_REG_TMP2, - TCG_REG_TMP1, cmp_off); - } - } else if (cmp_off == 0 && !HOST_BIG_ENDIAN) { - tcg_out32(s, LWZUX | TAB(TCG_REG_TMP2, - TCG_REG_TMP1, TCG_REG_TMP2)); + if (cmp_off == 0) { + tcg_out32(s, (TCG_TARGET_REG_BITS == 64 ? LDUX : LWZUX) + | TAB(TCG_REG_TMP2, TCG_REG_TMP1, TCG_REG_TMP2)); } else { tcg_out32(s, ADD | TAB(TCG_REG_TMP1, TCG_REG_TMP1, TCG_REG_TMP2)); - tcg_out_ld(s, TCG_TYPE_I32, TCG_REG_TMP2, TCG_REG_TMP1, - cmp_off + 4 * HOST_BIG_ENDIAN); + tcg_out_ld(s, TCG_TYPE_PTR, TCG_REG_TMP2, TCG_REG_TMP1, cmp_off); } /* diff --git a/tcg/riscv/tcg-target.c.inc b/tcg/riscv/tcg-target.c.inc index 689fbea0df..dae892437e 100644 --- a/tcg/riscv/tcg-target.c.inc +++ b/tcg/riscv/tcg-target.c.inc @@ -1760,7 +1760,6 @@ static TCGLabelQemuLdst *prepare_host_addr(TCGContext *s, TCGReg *pbase, } /* Load the tlb comparator and the addend. */ - QEMU_BUILD_BUG_ON(HOST_BIG_ENDIAN); tcg_out_ld(s, addr_type, TCG_REG_TMP0, TCG_REG_TMP2, is_ld ? offsetof(CPUTLBEntry, addr_read) : offsetof(CPUTLBEntry, addr_write));
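The 16-byte figure in the last commit message follows directly from the new layout: three uintptr_t comparators plus the pointer-sized addend are four 4-byte fields on a 32-bit host, which is why CPU_TLB_ENTRY_BITS drops from 5 to 4 there. A standalone model of the union that checks that arithmetic is sketched below; it mirrors tlb-common.h under stated assumptions and is not the QEMU header itself:

    #include <assert.h>
    #include <stdint.h>

    /* 4 on a 32-bit host, 5 on a 64-bit host, as in the patched tlb-common.h
     * (which keys off HOST_LONG_BITS; sizeof(void *) is used here instead). */
    #define DEMO_CPU_TLB_ENTRY_BITS (sizeof(void *) == 4 ? 4 : 5)

    typedef union DemoCPUTLBEntry {
        struct {
            uintptr_t addr_read;
            uintptr_t addr_write;
            uintptr_t addr_code;
            uintptr_t addend;   /* the host addend was already pointer-sized */
        };
        /* Power-of-two padding plus array-style access to the comparators. */
        uintptr_t addr_idx[(1 << DEMO_CPU_TLB_ENTRY_BITS) / sizeof(uintptr_t)];
    } DemoCPUTLBEntry;

    int main(void)
    {
        /* 16 bytes on a 32-bit host, 32 bytes on a 64-bit host. */
        static_assert(sizeof(DemoCPUTLBEntry) == (1u << DEMO_CPU_TLB_ENTRY_BITS),
                      "entry size must stay a power of two for fast-path masking");
        return 0;
    }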