From patchwork Fri Feb 9 18:07:47 2024
X-Patchwork-Submitter: Jiaxun Yang
X-Patchwork-Id: 13551692
From: Jiaxun Yang
Date: Fri, 09 Feb 2024 18:07:47 +0000
Subject: [PATCH 1/8] MIPS: Unify define of CP0 registers for uasm code
X-Mailing-List: linux-mips@vger.kernel.org
Message-Id: <20240209-regname-v1-1-2125efa016ef@flygoat.com>
References: <20240209-regname-v1-0-2125efa016ef@flygoat.com>
In-Reply-To: <20240209-regname-v1-0-2125efa016ef@flygoat.com>
To: Thomas Bogendoerfer
Cc: Gregory CLEMENT, linux-mips@vger.kernel.org, linux-kernel@vger.kernel.org, Jiaxun Yang
X-Mailer: b4 0.12.4

The uasm variants of the CP0 register definitions are unified into
mipsregs.h, so that they sit alongside the assembly variants of the
same registers.

Signed-off-by: Jiaxun Yang
---
checkpatch warning: "ERROR: Macros with complex values should be
enclosed in parentheses" is expected.
---
 arch/mips/include/asm/mipsregs.h | 249 +++++++++++++++++++++++++++++++--------
 arch/mips/kvm/entry.c            |  27 +----
 arch/mips/mm/tlbex.c             |  18 +--
 3 files changed, 200 insertions(+), 94 deletions(-)

diff --git a/arch/mips/include/asm/mipsregs.h b/arch/mips/include/asm/mipsregs.h
index ec58cb76d076..59ea637c5975 100644
--- a/arch/mips/include/asm/mipsregs.h
+++ b/arch/mips/include/asm/mipsregs.h
@@ -42,59 +42,198 @@
 /*
  * Coprocessor 0 register names
+ *
+ * CP0_REGISTER variant is meant to be used in assembly code, C0_REGISTER
+ * variant is meant to be used in C (uasm) code.
  */
-#define CP0_INDEX $0
-#define CP0_RANDOM $1
-#define CP0_ENTRYLO0 $2
-#define CP0_ENTRYLO1 $3
-#define CP0_CONF $3
-#define CP0_GLOBALNUMBER $3, 1
-#define CP0_CONTEXT $4
-#define CP0_PAGEMASK $5
-#define CP0_PAGEGRAIN $5, 1
-#define CP0_SEGCTL0 $5, 2
-#define CP0_SEGCTL1 $5, 3
-#define CP0_SEGCTL2 $5, 4
-#define CP0_WIRED $6
-#define CP0_INFO $7
-#define CP0_HWRENA $7
-#define CP0_BADVADDR $8
-#define CP0_BADINSTR $8, 1
-#define CP0_COUNT $9
-#define CP0_ENTRYHI $10
-#define CP0_GUESTCTL1 $10, 4
-#define CP0_GUESTCTL2 $10, 5
-#define CP0_GUESTCTL3 $10, 6
-#define CP0_COMPARE $11
-#define CP0_GUESTCTL0EXT $11, 4
-#define CP0_STATUS $12
-#define CP0_GUESTCTL0 $12, 6
-#define CP0_GTOFFSET $12, 7
-#define CP0_CAUSE $13
-#define CP0_EPC $14
-#define CP0_PRID $15
-#define CP0_EBASE $15, 1
-#define CP0_CMGCRBASE $15, 3
-#define CP0_CONFIG $16
-#define CP0_CONFIG3 $16, 3
-#define CP0_CONFIG5 $16, 5
-#define CP0_CONFIG6 $16, 6
-#define CP0_LLADDR $17
-#define CP0_WATCHLO $18
-#define CP0_WATCHHI $19
-#define CP0_XCONTEXT $20
-#define CP0_FRAMEMASK $21
-#define CP0_DIAGNOSTIC $22
-#define CP0_DIAGNOSTIC1 $22, 1
-#define CP0_DEBUG $23
-#define CP0_DEPC $24
-#define CP0_PERFORMANCE $25
-#define CP0_ECC $26
-#define CP0_CACHEERR $27
-#define CP0_TAGLO $28
-#define CP0_TAGHI $29
-#define CP0_ERROREPC $30
-#define CP0_DESAVE $31
+#define CP0_INDEX $0
+#define C0_INDEX 0, 0
+
+#define CP0_RANDOM $1
+#define C0_RANDOM 1, 0
+
+#define CP0_ENTRYLO0 $2
+#define C0_ENTRYLO0 2, 0
+
+#define CP0_ENTRYLO1 $3
+#define C0_ENTRYLO1 3, 0
+
+#define CP0_CONF $3
+#define C0_CONF 3, 0
+
+#define CP0_GLOBALNUMBER $3, 1
+#define C0_GLOBALNUMBER 3, 1
+
+#define CP0_CONTEXT $4
+#define C0_CONTEXT 4, 0
+
+#define CP0_PAGEMASK $5
+#define C0_PAGEMASK 5, 0
+
+#define CP0_PAGEGRAIN $5, 1
+#define C0_PAGEGRAIN 5, 1
+
+#define CP0_SEGCTL0 $5, 2
+#define C0_SEGCTL0 5, 2
+
+#define CP0_SEGCTL1 $5, 3
+#define C0_SEGCTL1 5, 3
+
+#define CP0_SEGCTL2 $5, 4
+#define C0_SEGCTL2 5, 4
+
+#define CP0_PWBASE $5, 5
+#define C0_PWBASE 5, 5
+
+#define CP0_PWFIELD $5, 6
+#define C0_PWFIELD 5, 6
+
+#define CP0_PWSIZE $5, 7
+#define C0_PWSIZE 5, 7
+
+#define CP0_WIRED $6
+#define C0_WIRED 6, 0
+
+#define CP0_PWCTL $6, 6
+#define C0_PWCTL 6, 6
+
+#define CP0_INFO $7
+#define C0_INFO 7, 0
+
+#define CP0_HWRENA $7
+#define C0_HWRENA 7, 0
+
+#define CP0_BADVADDR $8
+#define C0_BADVADDR 8, 0
+
+#define CP0_BADINSTR $8, 1
+#define C0_BADINSTR 8, 1
+
+#define CP0_BADINSTRP $8, 2
+#define C0_BADINSTRP 8, 2
+
+#define CP0_COUNT $9
+#define C0_COUNT 9, 0
+
+#define CP0_PGD $9, 7
+#define C0_PGD 9, 7
+
+#define CP0_ENTRYHI $10
+#define C0_ENTRYHI 10, 0
+
+#define CP0_GUESTCTL1 $10, 4
+#define C0_GUESTCTL1 10, 4
+
+#define CP0_GUESTCTL2 $10, 5
+#define C0_GUESTCTL2 10, 5
+
+#define CP0_GUESTCTL3 $10, 6
+#define C0_GUESTCTL3 10, 6
+
+#define CP0_COMPARE $11
+#define C0_COMPARE 11, 0
+
+#define CP0_GUESTCTL0EXT $11, 4
+#define C0_GUESTCTL0EXT 11, 4
+
+#define CP0_STATUS $12
+#define C0_STATUS 12, 0
+
+#define CP0_GUESTCTL0 $12, 6
+#define C0_GUESTCTL0 12, 6
+
+#define CP0_GTOFFSET $12, 7
+#define C0_GTOFFSET 12, 7
+
+#define CP0_CAUSE $13
+#define C0_CAUSE 13, 0
+
+#define CP0_EPC $14
+#define C0_EPC 14, 0
+
+#define CP0_PRID $15
+#define C0_PRID 15, 0
+
+#define CP0_EBASE $15, 1
+#define C0_EBASE 15, 1
+
+#define CP0_CMGCRBASE $15, 3
+#define C0_CMGCRBASE 15, 3
+
+#define CP0_CONFIG $16
+#define C0_CONFIG 16, 0
+
+#define CP0_CONFIG1 $16, 1
+#define C0_CONFIG1 16, 1
+
+#define CP0_CONFIG2 $16, 2
+#define C0_CONFIG2 16, 2
+
+#define CP0_CONFIG3 $16, 3
+#define C0_CONFIG3 16, 3
+
+#define CP0_CONFIG4 $16, 4
+#define C0_CONFIG4 16, 4
+
+#define CP0_CONFIG5 $16, 5
+#define C0_CONFIG5 16, 5
+
+#define CP0_CONFIG6 $16, 6
+#define C0_CONFIG6 16, 6
+
+#define CP0_LLADDR $17
+#define C0_LLADDR 17, 0
+
+#define CP0_WATCHLO $18
+#define C0_WATCHLO 18, 0
+
+#define CP0_WATCHHI $19
+#define C0_WATCHHI 19, 0
+
+#define CP0_XCONTEXT $20
+#define C0_XCONTEXT 20, 0
+
+#define CP0_FRAMEMASK $21
+#define C0_FRAMEMASK 21, 0
+
+#define CP0_DIAGNOSTIC $22
+#define C0_DIAGNOSTIC 22, 0
+
+#define CP0_DIAGNOSTIC1 $22, 1
+#define C0_DIAGNOSTIC1 22, 1
+
+#define CP0_DEBUG $23
+#define C0_DEBUG 23, 0
+
+#define CP0_DEPC $24
+#define C0_DEPC 24, 0
+
+#define CP0_PERFORMANCE $25
+#define C0_PERFORMANCE 25, 0
+
+#define CP0_ECC $26
+#define C0_ECC 26, 0
+
+#define CP0_CACHEERR $27
+#define C0_CACHEERR 27, 0
+
+#define CP0_TAGLO $28
+#define C0_TAGLO 28, 0
+
+#define CP0_DTAGLO $28, 2
+#define C0_DTAGLO 28, 2
+
+#define CP0_DDATALO $28, 3
+#define C0_DDATALO 28, 3
+
+#define CP0_STAGLO $28, 4
+#define C0_STAGLO 28, 4
+
+#define CP0_TAGHI $29
+#define C0_TAGHI 29, 0
+
+#define CP0_ERROREPC $30
+#define C0_ERROREPC 30, 0
+
+#define CP0_DESAVE $31
+#define C0_DESAVE 31, 0
 
 /*
  * R4640/R4650 cp0 register names.  These registers are listed
@@ -291,6 +430,12 @@
 #define ST0_DE 0x00010000
 #define ST0_CE 0x00020000
 
+#ifdef CONFIG_64BIT
+#define ST0_KX_IF_64 ST0_KX
+#else
+#define ST0_KX_IF_64 0
+#endif
+
 /*
  * Setting c0_status.co enables Hit_Writeback and Hit_Writeback_Invalidate
  * cacheops in userspace.  This bit exists only on RM7000 and RM9000
diff --git a/arch/mips/kvm/entry.c b/arch/mips/kvm/entry.c
index aceed14aa1f7..96f64a95f51b 100644
--- a/arch/mips/kvm/entry.c
+++ b/arch/mips/kvm/entry.c
@@ -13,6 +13,7 @@
 #include
 #include
+#include
 #include
 #include
 #include
@@ -50,33 +51,9 @@
 #define SP 29
 #define RA 31
 
-/* Some CP0 registers */
-#define C0_PWBASE 5, 5
-#define C0_HWRENA 7, 0
-#define C0_BADVADDR 8, 0
-#define C0_BADINSTR 8, 1
-#define C0_BADINSTRP 8, 2
-#define C0_PGD 9, 7
-#define C0_ENTRYHI 10, 0
-#define C0_GUESTCTL1 10, 4
-#define C0_STATUS 12, 0
-#define C0_GUESTCTL0 12, 6
-#define C0_CAUSE 13, 0
-#define C0_EPC 14, 0
-#define C0_EBASE 15, 1
-#define C0_CONFIG5 16, 5
-#define C0_DDATA_LO 28, 3
-#define C0_ERROREPC 30, 0
-
 #define CALLFRAME_SIZ 32
 
-#ifdef CONFIG_64BIT
-#define ST0_KX_IF_64 ST0_KX
-#else
-#define ST0_KX_IF_64 0
-#endif
-
-static unsigned int scratch_vcpu[2] = { C0_DDATA_LO };
+static unsigned int scratch_vcpu[2] = { C0_DDATALO };
 static unsigned int scratch_tmp[2] = { C0_ERROREPC };
 
 enum label_id {
diff --git a/arch/mips/mm/tlbex.c b/arch/mips/mm/tlbex.c
index 4017fa0e2f68..c9d00d9cb3c8 100644
--- a/arch/mips/mm/tlbex.c
+++ b/arch/mips/mm/tlbex.c
@@ -32,6 +32,7 @@
 #include
 #include
+#include
 #include
 #include
 #include
@@ -280,23 +281,6 @@ static inline void dump_handler(const char *symbol, const void *start, const void *end)
 #define K0 26
 #define K1 27
 
-/* Some CP0 registers */
-#define C0_INDEX 0, 0
-#define C0_ENTRYLO0 2, 0
-#define C0_TCBIND 2, 2
-#define C0_ENTRYLO1 3, 0
-#define C0_CONTEXT 4, 0
-#define C0_PAGEMASK 5, 0
-#define C0_PWBASE 5, 5
-#define C0_PWFIELD 5, 6
-#define C0_PWSIZE 5, 7
-#define C0_PWCTL 6, 6
-#define C0_BADVADDR 8, 0
-#define C0_PGD 9, 7
-#define C0_ENTRYHI 10, 0
-#define C0_EPC 14, 0
-#define C0_XCONTEXT 20, 0
-
 #ifdef CONFIG_64BIT
 # define GET_CONTEXT(buf, reg) UASM_i_MFC0(buf, reg, C0_XCONTEXT)
 #else

From patchwork Fri Feb 9 18:07:48 2024
X-Patchwork-Submitter: Jiaxun Yang
X-Patchwork-Id: 13551693
From: Jiaxun Yang
Date: Fri, 09 Feb 2024 18:07:48 +0000
Subject: [PATCH 2/8] MIPS: regdef.h: Guard all defines with __ASSEMBLY__
X-Mailing-List: linux-mips@vger.kernel.org
Message-Id: <20240209-regname-v1-2-2125efa016ef@flygoat.com>
References: <20240209-regname-v1-0-2125efa016ef@flygoat.com>
In-Reply-To: <20240209-regname-v1-0-2125efa016ef@flygoat.com>
To: Thomas Bogendoerfer
Cc: Gregory CLEMENT, linux-mips@vger.kernel.org, linux-kernel@vger.kernel.org, Jiaxun Yang
X-Mailer: b4 0.12.4

These definitions are only meant to be used in pure assembly code.
Signed-off-by: Jiaxun Yang
---
 arch/mips/include/asm/regdef.h | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/arch/mips/include/asm/regdef.h b/arch/mips/include/asm/regdef.h
index 3c687df1d515..87ba7be1a847 100644
--- a/arch/mips/include/asm/regdef.h
+++ b/arch/mips/include/asm/regdef.h
@@ -14,6 +14,7 @@
 
 #include
 
+#ifdef __ASSEMBLY__
 #if _MIPS_SIM == _MIPS_SIM_ABI32
 
 /*
@@ -102,5 +103,6 @@
 #define ra $31 /* return address */
 
 #endif /* _MIPS_SIM == _MIPS_SIM_ABI64 || _MIPS_SIM == _MIPS_SIM_NABI32 */
+#endif /* __ASSEMBLY__ */
 
 #endif /* _ASM_REGDEF_H */

From patchwork Fri Feb 9 18:07:49 2024
X-Patchwork-Submitter: Jiaxun Yang
X-Patchwork-Id: 13551694
From: Jiaxun Yang
Date: Fri, 09 Feb 2024 18:07:49 +0000
Subject: [PATCH 3/8] MIPS: regdef.h: Define a set of register numbers
X-Mailing-List: linux-mips@vger.kernel.org
Message-Id: <20240209-regname-v1-3-2125efa016ef@flygoat.com>
References: <20240209-regname-v1-0-2125efa016ef@flygoat.com>
In-Reply-To: <20240209-regname-v1-0-2125efa016ef@flygoat.com>
To: Thomas Bogendoerfer
Cc: Gregory CLEMENT, linux-mips@vger.kernel.org, linux-kernel@vger.kernel.org, Jiaxun Yang
X-Mailer: b4 0.12.4
Define a set of register numbers with their symbolic names to help
with uasm code. All names are prefixed with GPR_ to prevent naming
clashes.

Signed-off-by: Jiaxun Yang
---
 arch/mips/include/asm/regdef.h | 89 ++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 89 insertions(+)

diff --git a/arch/mips/include/asm/regdef.h b/arch/mips/include/asm/regdef.h
index 87ba7be1a847..236051364f78 100644
--- a/arch/mips/include/asm/regdef.h
+++ b/arch/mips/include/asm/regdef.h
@@ -14,6 +14,95 @@
 
 #include
 
+#if _MIPS_SIM == _MIPS_SIM_ABI32
+
+/*
+ * General purpose register numbers for 32 bit ABI
+ */
+#define GPR_ZERO 0 /* wired zero */
+#define GPR_AT 1 /* assembler temp */
+#define GPR_V0 2 /* return value */
+#define GPR_V1 3
+#define GPR_A0 4 /* argument registers */
+#define GPR_A1 5
+#define GPR_A2 6
+#define GPR_A3 7
+#define GPR_T0 8 /* caller saved */
+#define GPR_T1 9
+#define GPR_T2 10
+#define GPR_T3 11
+#define GPR_T4 12
+#define GPR_TA0 12
+#define GPR_T5 13
+#define GPR_TA1 13
+#define GPR_T6 14
+#define GPR_TA2 14
+#define GPR_T7 15
+#define GPR_TA3 15
+#define GPR_S0 16 /* callee saved */
+#define GPR_S1 17
+#define GPR_S2 18
+#define GPR_S3 19
+#define GPR_S4 20
+#define GPR_S5 21
+#define GPR_S6 22
+#define GPR_S7 23
+#define GPR_T8 24 /* caller saved */
+#define GPR_T9 25
+#define GPR_JP 25 /* PIC jump register */
+#define GPR_K0 26 /* kernel scratch */
+#define GPR_K1 27
+#define GPR_GP 28 /* global pointer */
+#define GPR_SP 29 /* stack pointer */
+#define GPR_FP 30 /* frame pointer */
+#define GPR_S8 30 /* same as fp! */
+#define GPR_RA 31 /* return address */
+
+#endif /* _MIPS_SIM == _MIPS_SIM_ABI32 */
+
+#if _MIPS_SIM == _MIPS_SIM_ABI64 || _MIPS_SIM == _MIPS_SIM_NABI32
+
+#define GPR_ZERO 0 /* wired zero */
+#define GPR_AT 1 /* assembler temp */
+#define GPR_V0 2 /* return value - caller saved */
+#define GPR_V1 3
+#define GPR_A0 4 /* argument registers */
+#define GPR_A1 5
+#define GPR_A2 6
+#define GPR_A3 7
+#define GPR_A4 8 /* arg reg 64 bit; caller saved in 32 bit */
+#define GPR_TA0 8
+#define GPR_A5 9
+#define GPR_TA1 9
+#define GPR_A6 10
+#define GPR_TA2 10
+#define GPR_A7 11
+#define GPR_TA3 11
+#define GPR_T0 12 /* caller saved */
+#define GPR_T1 13
+#define GPR_T2 14
+#define GPR_T3 15
+#define GPR_S0 16 /* callee saved */
+#define GPR_S1 17
+#define GPR_S2 18
+#define GPR_S3 19
+#define GPR_S4 20
+#define GPR_S5 21
+#define GPR_S6 22
+#define GPR_S7 23
+#define GPR_T8 24 /* caller saved */
+#define GPR_T9 25 /* callee address for PIC/temp */
+#define GPR_JP 25 /* PIC jump register */
+#define GPR_K0 26 /* kernel temporary */
+#define GPR_K1 27
+#define GPR_GP 28 /* global pointer - caller saved for PIC */
+#define GPR_SP 29 /* stack pointer */
+#define GPR_FP 30 /* frame pointer */
+#define GPR_S8 30 /* callee saved */
+#define GPR_RA 31 /* return address */
+
+#endif /* _MIPS_SIM == _MIPS_SIM_ABI64 || _MIPS_SIM == _MIPS_SIM_NABI32 */
+
 #ifdef __ASSEMBLY__
 #if _MIPS_SIM == _MIPS_SIM_ABI32

From patchwork Fri Feb 9 18:07:50 2024
X-Patchwork-Submitter: Jiaxun Yang
X-Patchwork-Id: 13551695
From: Jiaxun Yang
Date: Fri, 09 Feb 2024 18:07:50 +0000
Subject: [PATCH 4/8] MIPS: traps: Use GPR number macros
X-Mailing-List: linux-mips@vger.kernel.org
Message-Id: <20240209-regname-v1-4-2125efa016ef@flygoat.com>
References: <20240209-regname-v1-0-2125efa016ef@flygoat.com>
In-Reply-To: <20240209-regname-v1-0-2125efa016ef@flygoat.com>
To: Thomas Bogendoerfer
Cc: Gregory CLEMENT, linux-mips@vger.kernel.org, linux-kernel@vger.kernel.org, Jiaxun Yang
X-Mailer: b4 0.12.4

Use GPR number macros in uasm code generation parts to reduce code
duplication. No functional change.
Signed-off-by: Jiaxun Yang
---
 arch/mips/kernel/traps.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/arch/mips/kernel/traps.c b/arch/mips/kernel/traps.c
index a1c1cb5de913..2d95e9971a2d 100644
--- a/arch/mips/kernel/traps.c
+++ b/arch/mips/kernel/traps.c
@@ -58,6 +58,7 @@
 #include
 #include
 #include
+#include
 #include
 #include
 #include
@@ -2041,13 +2042,12 @@ void __init *set_except_vector(int n, void *addr)
 	unsigned long jump_mask = ~((1 << 28) - 1);
 #endif
 	u32 *buf = (u32 *)(ebase + 0x200);
-	unsigned int k0 = 26;

 	if ((handler & jump_mask) == ((ebase + 0x200) & jump_mask)) {
 		uasm_i_j(&buf, handler & ~jump_mask);
 		uasm_i_nop(&buf);
 	} else {
-		UASM_i_LA(&buf, k0, handler);
-		uasm_i_jr(&buf, k0);
+		UASM_i_LA(&buf, GPR_K0, handler);
+		uasm_i_jr(&buf, GPR_K0);
 		uasm_i_nop(&buf);
 	}
 	local_flush_icache_range(ebase + 0x200, (unsigned long)buf);

From patchwork Fri Feb 9 18:07:51 2024
From: Jiaxun Yang
Date: Fri, 09 Feb 2024 18:07:51 +0000
Subject: [PATCH 5/8] MIPS: page: Use GPR number macros
X-Mailing-List: linux-mips@vger.kernel.org
Message-Id: <20240209-regname-v1-5-2125efa016ef@flygoat.com>
References: <20240209-regname-v1-0-2125efa016ef@flygoat.com>
In-Reply-To: <20240209-regname-v1-0-2125efa016ef@flygoat.com>
To: Thomas Bogendoerfer
Cc: Gregory CLEMENT, linux-mips@vger.kernel.org, linux-kernel@vger.kernel.org, Jiaxun Yang
X-Mailer:
b4 0.12.4

Use GPR number macros in uasm code generation parts to reduce code
duplication. No functional change.

Signed-off-by: Jiaxun Yang
---
 arch/mips/mm/page.c | 202 ++++++++++++++++++++++++----------------------------
 1 file changed, 95 insertions(+), 107 deletions(-)

diff --git a/arch/mips/mm/page.c b/arch/mips/mm/page.c
index d3b4459d0fe8..1df237bd4a72 100644
--- a/arch/mips/mm/page.c
+++ b/arch/mips/mm/page.c
@@ -24,6 +24,7 @@
 #include
 #include
 #include
+#include
 #include

 #ifdef CONFIG_SIBYTE_DMA_PAGEOPS
@@ -34,19 +35,6 @@
 #include

-/* Registers used in the assembled routines. */
-#define ZERO 0
-#define AT 2
-#define A0 4
-#define A1 5
-#define A2 6
-#define T0 8
-#define T1 9
-#define T2 10
-#define T3 11
-#define T9 25
-#define RA 31
-
 /* Handle labels (which must be positive integers). */
 enum label_id {
 	label_clear_nopref = 1,
@@ -106,16 +94,16 @@ pg_addiu(u32 **buf, unsigned int reg1, unsigned int reg2, unsigned int off)
 	    IS_ENABLED(CONFIG_CPU_DADDI_WORKAROUNDS) &&
 	    r4k_daddiu_bug()) {
 		if (off > 0x7fff) {
-			uasm_i_lui(buf, T9, uasm_rel_hi(off));
-			uasm_i_addiu(buf, T9, T9, uasm_rel_lo(off));
+			uasm_i_lui(buf, GPR_T9, uasm_rel_hi(off));
+			uasm_i_addiu(buf, GPR_T9, GPR_T9, uasm_rel_lo(off));
 		} else
-			uasm_i_addiu(buf, T9, ZERO, off);
-		uasm_i_daddu(buf, reg1, reg2, T9);
+			uasm_i_addiu(buf, GPR_T9, GPR_ZERO, off);
+		uasm_i_daddu(buf, reg1, reg2, GPR_T9);
 	} else {
 		if (off > 0x7fff) {
-			uasm_i_lui(buf, T9, uasm_rel_hi(off));
-			uasm_i_addiu(buf, T9, T9, uasm_rel_lo(off));
-			UASM_i_ADDU(buf, reg1, reg2, T9);
+			uasm_i_lui(buf, GPR_T9, uasm_rel_hi(off));
+			uasm_i_addiu(buf, GPR_T9, GPR_T9, uasm_rel_lo(off));
+			UASM_i_ADDU(buf, reg1, reg2, GPR_T9);
 		} else
 			UASM_i_ADDIU(buf, reg1, reg2, off);
 	}
@@ -233,9 +221,9 @@ static void set_prefetch_parameters(void)
 static void build_clear_store(u32 **buf, int off)
 {
 	if (cpu_has_64bit_gp_regs || cpu_has_64bit_zero_reg) {
-		uasm_i_sd(buf, ZERO, off, A0);
+		uasm_i_sd(buf, GPR_ZERO, off, GPR_A0);
 	} else {
-		uasm_i_sw(buf, ZERO, off, A0);
+		uasm_i_sw(buf, GPR_ZERO, off, GPR_A0);
 	}
 }

@@ -246,10 +234,10 @@ static inline void build_clear_pref(u32 **buf, int off)
 	if (pref_bias_clear_store) {
 		_uasm_i_pref(buf, pref_dst_mode, pref_bias_clear_store + off,
-			    A0);
+			    GPR_A0);
 	} else if (cache_line_size == (half_clear_loop_size << 1)) {
 		if (cpu_has_cache_cdex_s) {
-			uasm_i_cache(buf, Create_Dirty_Excl_SD, off, A0);
+			uasm_i_cache(buf, Create_Dirty_Excl_SD, off, GPR_A0);
 		} else if (cpu_has_cache_cdex_p) {
 			if (IS_ENABLED(CONFIG_WAR_R4600_V1_HIT_CACHEOP) &&
 			    cpu_is_r4600_v1_x()) {
@@ -261,9 +249,9 @@ static inline void build_clear_pref(u32 **buf, int off)

 			if (IS_ENABLED(CONFIG_WAR_R4600_V2_HIT_CACHEOP) &&
 			    cpu_is_r4600_v2_x())
-				uasm_i_lw(buf, ZERO, ZERO, AT);
+				uasm_i_lw(buf, GPR_ZERO, GPR_ZERO, GPR_AT);

-			uasm_i_cache(buf, Create_Dirty_Excl_D, off, A0);
+			uasm_i_cache(buf, Create_Dirty_Excl_D, off, GPR_A0);
 		}
 	}
 }
@@ -301,12 +289,12 @@ void build_clear_page(void)
 	off = PAGE_SIZE - pref_bias_clear_store;
 	if (off > 0xffff || !pref_bias_clear_store)
-		pg_addiu(&buf, A2, A0, off);
+		pg_addiu(&buf, GPR_A2, GPR_A0, off);
 	else
-		uasm_i_ori(&buf, A2, A0, off);
+		uasm_i_ori(&buf, GPR_A2, GPR_A0, off);

 	if (IS_ENABLED(CONFIG_WAR_R4600_V2_HIT_CACHEOP) && cpu_is_r4600_v2_x())
-		uasm_i_lui(&buf, AT, uasm_rel_hi(0xa0000000));
+		uasm_i_lui(&buf, GPR_AT, uasm_rel_hi(0xa0000000));

 	off = cache_line_size ? min(8, pref_bias_clear_store / cache_line_size)
 				* cache_line_size : 0;
@@ -320,36 +308,36 @@ void build_clear_page(void)
 		build_clear_store(&buf, off);
 		off += clear_word_size;
 	} while (off < half_clear_loop_size);
-	pg_addiu(&buf, A0, A0, 2 * off);
+	pg_addiu(&buf, GPR_A0, GPR_A0, 2 * off);
 	off = -off;
 	do {
 		build_clear_pref(&buf, off);
 		if (off == -clear_word_size)
-			uasm_il_bne(&buf, &r, A0, A2, label_clear_pref);
+			uasm_il_bne(&buf, &r, GPR_A0, GPR_A2, label_clear_pref);
 		build_clear_store(&buf, off);
 		off += clear_word_size;
 	} while (off < 0);

 	if (pref_bias_clear_store) {
-		pg_addiu(&buf, A2, A0, pref_bias_clear_store);
+		pg_addiu(&buf, GPR_A2, GPR_A0, pref_bias_clear_store);
 		uasm_l_clear_nopref(&l, buf);
 		off = 0;
 		do {
 			build_clear_store(&buf, off);
 			off += clear_word_size;
 		} while (off < half_clear_loop_size);
-		pg_addiu(&buf, A0, A0, 2 * off);
+		pg_addiu(&buf, GPR_A0, GPR_A0, 2 * off);
 		off = -off;
 		do {
 			if (off == -clear_word_size)
-				uasm_il_bne(&buf, &r, A0, A2,
+				uasm_il_bne(&buf, &r, GPR_A0, GPR_A2,
 					    label_clear_nopref);
 			build_clear_store(&buf, off);
 			off += clear_word_size;
 		} while (off < 0);
 	}

-	uasm_i_jr(&buf, RA);
+	uasm_i_jr(&buf, GPR_RA);
 	uasm_i_nop(&buf);

 	BUG_ON(buf > &__clear_page_end);
@@ -369,18 +357,18 @@ void build_clear_page(void)
 static void build_copy_load(u32 **buf, int reg, int off)
 {
 	if (cpu_has_64bit_gp_regs) {
-		uasm_i_ld(buf, reg, off, A1);
+		uasm_i_ld(buf, reg, off, GPR_A1);
 	} else {
-		uasm_i_lw(buf, reg, off, A1);
+		uasm_i_lw(buf, reg, off, GPR_A1);
 	}
 }

 static void build_copy_store(u32 **buf, int reg, int off)
 {
 	if (cpu_has_64bit_gp_regs) {
-		uasm_i_sd(buf, reg, off, A0);
+		uasm_i_sd(buf, reg, off, GPR_A0);
 	} else {
-		uasm_i_sw(buf, reg, off, A0);
+		uasm_i_sw(buf, reg, off, GPR_A0);
 	}
 }

@@ -390,7 +378,7 @@ static inline void build_copy_load_pref(u32 **buf, int off)
 		return;

 	if (pref_bias_copy_load)
-		_uasm_i_pref(buf, pref_src_mode, pref_bias_copy_load + off, A1);
+		_uasm_i_pref(buf, pref_src_mode, pref_bias_copy_load + off, GPR_A1);
 }

 static inline void build_copy_store_pref(u32 **buf, int off)
@@ -400,10 +388,10 @@ static inline void build_copy_store_pref(u32 **buf, int off)
 	if (pref_bias_copy_store) {
 		_uasm_i_pref(buf, pref_dst_mode, pref_bias_copy_store + off,
-			    A0);
+			    GPR_A0);
 	} else if (cache_line_size == (half_copy_loop_size << 1)) {
 		if (cpu_has_cache_cdex_s) {
-			uasm_i_cache(buf, Create_Dirty_Excl_SD, off, A0);
+			uasm_i_cache(buf, Create_Dirty_Excl_SD, off, GPR_A0);
 		} else if (cpu_has_cache_cdex_p) {
 			if (IS_ENABLED(CONFIG_WAR_R4600_V1_HIT_CACHEOP) &&
 			    cpu_is_r4600_v1_x()) {
@@ -415,9 +403,9 @@ static inline void build_copy_store_pref(u32 **buf, int off)

 			if (IS_ENABLED(CONFIG_WAR_R4600_V2_HIT_CACHEOP) &&
 			    cpu_is_r4600_v2_x())
-				uasm_i_lw(buf, ZERO, ZERO, AT);
+				uasm_i_lw(buf, GPR_ZERO, GPR_ZERO, GPR_AT);

-			uasm_i_cache(buf, Create_Dirty_Excl_D, off, A0);
+			uasm_i_cache(buf, Create_Dirty_Excl_D, off, GPR_A0);
 		}
 	}
 }
@@ -454,12 +442,12 @@ void build_copy_page(void)
 	off = PAGE_SIZE - pref_bias_copy_load;
 	if (off > 0xffff || !pref_bias_copy_load)
-		pg_addiu(&buf, A2, A0, off);
+		pg_addiu(&buf, GPR_A2, GPR_A0, off);
 	else
-		uasm_i_ori(&buf, A2, A0, off);
+		uasm_i_ori(&buf, GPR_A2, GPR_A0, off);

 	if (IS_ENABLED(CONFIG_WAR_R4600_V2_HIT_CACHEOP) && cpu_is_r4600_v2_x())
-		uasm_i_lui(&buf, AT, uasm_rel_hi(0xa0000000));
+		uasm_i_lui(&buf, GPR_AT, uasm_rel_hi(0xa0000000));

 	off = cache_line_size ? min(8, pref_bias_copy_load / cache_line_size) *
 				cache_line_size : 0;
@@ -476,126 +464,126 @@ void build_copy_page(void)
 	uasm_l_copy_pref_both(&l, buf);
 	do {
 		build_copy_load_pref(&buf, off);
-		build_copy_load(&buf, T0, off);
+		build_copy_load(&buf, GPR_T0, off);
 		build_copy_load_pref(&buf, off + copy_word_size);
-		build_copy_load(&buf, T1, off + copy_word_size);
+		build_copy_load(&buf, GPR_T1, off + copy_word_size);
 		build_copy_load_pref(&buf, off + 2 * copy_word_size);
-		build_copy_load(&buf, T2, off + 2 * copy_word_size);
+		build_copy_load(&buf, GPR_T2, off + 2 * copy_word_size);
 		build_copy_load_pref(&buf, off + 3 * copy_word_size);
-		build_copy_load(&buf, T3, off + 3 * copy_word_size);
+		build_copy_load(&buf, GPR_T3, off + 3 * copy_word_size);
 		build_copy_store_pref(&buf, off);
-		build_copy_store(&buf, T0, off);
+		build_copy_store(&buf, GPR_T0, off);
 		build_copy_store_pref(&buf, off + copy_word_size);
-		build_copy_store(&buf, T1, off + copy_word_size);
+		build_copy_store(&buf, GPR_T1, off + copy_word_size);
 		build_copy_store_pref(&buf, off + 2 * copy_word_size);
-		build_copy_store(&buf, T2, off + 2 * copy_word_size);
+		build_copy_store(&buf, GPR_T2, off + 2 * copy_word_size);
 		build_copy_store_pref(&buf, off + 3 * copy_word_size);
-		build_copy_store(&buf, T3, off + 3 * copy_word_size);
+		build_copy_store(&buf, GPR_T3, off + 3 * copy_word_size);
 		off += 4 * copy_word_size;
 	} while (off < half_copy_loop_size);
-	pg_addiu(&buf, A1, A1, 2 * off);
-	pg_addiu(&buf, A0, A0, 2 * off);
+	pg_addiu(&buf, GPR_A1, GPR_A1, 2 * off);
+	pg_addiu(&buf, GPR_A0, GPR_A0, 2 * off);
 	off = -off;
 	do {
 		build_copy_load_pref(&buf, off);
-		build_copy_load(&buf, T0, off);
+		build_copy_load(&buf, GPR_T0, off);
 		build_copy_load_pref(&buf, off + copy_word_size);
-		build_copy_load(&buf, T1, off + copy_word_size);
+		build_copy_load(&buf, GPR_T1, off + copy_word_size);
 		build_copy_load_pref(&buf, off + 2 * copy_word_size);
-		build_copy_load(&buf, T2, off + 2 * copy_word_size);
+		build_copy_load(&buf, GPR_T2, off + 2 * copy_word_size);
 		build_copy_load_pref(&buf, off + 3 * copy_word_size);
-		build_copy_load(&buf, T3, off + 3 * copy_word_size);
+		build_copy_load(&buf, GPR_T3, off + 3 * copy_word_size);
 		build_copy_store_pref(&buf, off);
-		build_copy_store(&buf, T0, off);
+		build_copy_store(&buf, GPR_T0, off);
 		build_copy_store_pref(&buf, off + copy_word_size);
-		build_copy_store(&buf, T1, off + copy_word_size);
+		build_copy_store(&buf, GPR_T1, off + copy_word_size);
 		build_copy_store_pref(&buf, off + 2 * copy_word_size);
-		build_copy_store(&buf, T2, off + 2 * copy_word_size);
+		build_copy_store(&buf, GPR_T2, off + 2 * copy_word_size);
 		build_copy_store_pref(&buf, off + 3 * copy_word_size);
 		if (off == -(4 * copy_word_size))
-			uasm_il_bne(&buf, &r, A2, A0, label_copy_pref_both);
-		build_copy_store(&buf, T3, off + 3 * copy_word_size);
+			uasm_il_bne(&buf, &r, GPR_A2, GPR_A0, label_copy_pref_both);
+		build_copy_store(&buf, GPR_T3, off + 3 * copy_word_size);
 		off += 4 * copy_word_size;
 	} while (off < 0);

 	if (pref_bias_copy_load - pref_bias_copy_store) {
-		pg_addiu(&buf, A2, A0,
+		pg_addiu(&buf, GPR_A2, GPR_A0,
 			 pref_bias_copy_load - pref_bias_copy_store);
 		uasm_l_copy_pref_store(&l, buf);
 		off = 0;
 		do {
-			build_copy_load(&buf, T0, off);
-			build_copy_load(&buf, T1, off + copy_word_size);
-			build_copy_load(&buf, T2, off + 2 * copy_word_size);
-			build_copy_load(&buf, T3, off + 3 * copy_word_size);
+			build_copy_load(&buf, GPR_T0, off);
+			build_copy_load(&buf, GPR_T1, off + copy_word_size);
+			build_copy_load(&buf, GPR_T2, off + 2 * copy_word_size);
+			build_copy_load(&buf, GPR_T3, off + 3 * copy_word_size);
 			build_copy_store_pref(&buf, off);
-			build_copy_store(&buf, T0, off);
+			build_copy_store(&buf, GPR_T0, off);
 			build_copy_store_pref(&buf, off + copy_word_size);
-			build_copy_store(&buf, T1, off + copy_word_size);
+			build_copy_store(&buf, GPR_T1, off + copy_word_size);
 			build_copy_store_pref(&buf, off + 2 * copy_word_size);
-			build_copy_store(&buf, T2, off + 2 * copy_word_size);
+			build_copy_store(&buf, GPR_T2, off + 2 * copy_word_size);
 			build_copy_store_pref(&buf, off + 3 * copy_word_size);
-			build_copy_store(&buf, T3, off + 3 * copy_word_size);
+			build_copy_store(&buf, GPR_T3, off + 3 * copy_word_size);
 			off += 4 * copy_word_size;
 		} while (off < half_copy_loop_size);
-		pg_addiu(&buf, A1, A1, 2 * off);
-		pg_addiu(&buf, A0, A0, 2 * off);
+		pg_addiu(&buf, GPR_A1, GPR_A1, 2 * off);
+		pg_addiu(&buf, GPR_A0, GPR_A0, 2 * off);
 		off = -off;
 		do {
-			build_copy_load(&buf, T0, off);
-			build_copy_load(&buf, T1, off + copy_word_size);
-			build_copy_load(&buf, T2, off + 2 * copy_word_size);
-			build_copy_load(&buf, T3, off + 3 * copy_word_size);
+			build_copy_load(&buf, GPR_T0, off);
+			build_copy_load(&buf, GPR_T1, off + copy_word_size);
+			build_copy_load(&buf, GPR_T2, off + 2 * copy_word_size);
+			build_copy_load(&buf, GPR_T3, off + 3 * copy_word_size);
 			build_copy_store_pref(&buf, off);
-			build_copy_store(&buf, T0, off);
+			build_copy_store(&buf, GPR_T0, off);
 			build_copy_store_pref(&buf, off + copy_word_size);
-			build_copy_store(&buf, T1, off + copy_word_size);
+			build_copy_store(&buf, GPR_T1, off + copy_word_size);
 			build_copy_store_pref(&buf, off + 2 * copy_word_size);
-			build_copy_store(&buf, T2, off + 2 * copy_word_size);
+			build_copy_store(&buf, GPR_T2, off + 2 * copy_word_size);
 			build_copy_store_pref(&buf, off + 3 * copy_word_size);
 			if (off == -(4 * copy_word_size))
-				uasm_il_bne(&buf, &r, A2, A0,
+				uasm_il_bne(&buf, &r, GPR_A2, GPR_A0,
 					    label_copy_pref_store);
-			build_copy_store(&buf, T3, off + 3 * copy_word_size);
+			build_copy_store(&buf, GPR_T3, off + 3 * copy_word_size);
 			off += 4 * copy_word_size;
 		} while (off < 0);
 	}

 	if (pref_bias_copy_store) {
-		pg_addiu(&buf, A2, A0, pref_bias_copy_store);
+		pg_addiu(&buf, GPR_A2, GPR_A0, pref_bias_copy_store);
 		uasm_l_copy_nopref(&l, buf);
 		off = 0;
 		do {
-			build_copy_load(&buf, T0, off);
-			build_copy_load(&buf, T1, off + copy_word_size);
-			build_copy_load(&buf, T2, off + 2 * copy_word_size);
-			build_copy_load(&buf, T3, off + 3 * copy_word_size);
-			build_copy_store(&buf, T0, off);
-			build_copy_store(&buf, T1, off + copy_word_size);
-			build_copy_store(&buf, T2, off + 2 * copy_word_size);
-			build_copy_store(&buf, T3, off + 3 * copy_word_size);
+			build_copy_load(&buf, GPR_T0, off);
+			build_copy_load(&buf, GPR_T1, off + copy_word_size);
+			build_copy_load(&buf, GPR_T2, off + 2 * copy_word_size);
+			build_copy_load(&buf, GPR_T3, off + 3 * copy_word_size);
+			build_copy_store(&buf, GPR_T0, off);
+			build_copy_store(&buf, GPR_T1, off + copy_word_size);
+			build_copy_store(&buf, GPR_T2, off + 2 * copy_word_size);
+			build_copy_store(&buf, GPR_T3, off + 3 * copy_word_size);
 			off += 4 * copy_word_size;
 		} while (off < half_copy_loop_size);
-		pg_addiu(&buf, A1, A1, 2 * off);
-		pg_addiu(&buf, A0, A0, 2 * off);
+		pg_addiu(&buf, GPR_A1, GPR_A1, 2 * off);
+		pg_addiu(&buf, GPR_A0, GPR_A0, 2 * off);
 		off = -off;
 		do {
-			build_copy_load(&buf, T0, off);
-			build_copy_load(&buf, T1, off + copy_word_size);
-			build_copy_load(&buf, T2, off + 2 * copy_word_size);
-			build_copy_load(&buf, T3, off + 3 * copy_word_size);
-			build_copy_store(&buf, T0, off);
-			build_copy_store(&buf, T1, off + copy_word_size);
-			build_copy_store(&buf, T2, off + 2 * copy_word_size);
+			build_copy_load(&buf, GPR_T0, off);
+			build_copy_load(&buf, GPR_T1, off + copy_word_size);
+			build_copy_load(&buf, GPR_T2, off + 2 * copy_word_size);
+			build_copy_load(&buf, GPR_T3, off + 3 * copy_word_size);
+			build_copy_store(&buf, GPR_T0, off);
+			build_copy_store(&buf, GPR_T1, off + copy_word_size);
+			build_copy_store(&buf, GPR_T2, off + 2 * copy_word_size);
 			if (off == -(4 * copy_word_size))
-				uasm_il_bne(&buf, &r, A2, A0,
+				uasm_il_bne(&buf, &r, GPR_A2, GPR_A0,
 					    label_copy_nopref);
-			build_copy_store(&buf, T3, off + 3 * copy_word_size);
+			build_copy_store(&buf, GPR_T3, off + 3 * copy_word_size);
 			off += 4 * copy_word_size;
 		} while (off < 0);
 	}

-	uasm_i_jr(&buf, RA);
+	uasm_i_jr(&buf, GPR_RA);
 	uasm_i_nop(&buf);

 	BUG_ON(buf > &__copy_page_end);

From
patchwork Fri Feb 9 18:07:52 2024
From: Jiaxun Yang
Date: Fri, 09 Feb 2024 18:07:52 +0000
Subject: [PATCH 6/8] MIPS: tlbex: Use GPR number macros
X-Mailing-List: linux-mips@vger.kernel.org
Message-Id: <20240209-regname-v1-6-2125efa016ef@flygoat.com>
References: <20240209-regname-v1-0-2125efa016ef@flygoat.com>
In-Reply-To: <20240209-regname-v1-0-2125efa016ef@flygoat.com>
To: Thomas Bogendoerfer
Cc: Gregory CLEMENT, linux-mips@vger.kernel.org, linux-kernel@vger.kernel.org, Jiaxun Yang
X-Mailer: b4 0.12.4

Use GPR number macros in uasm code generation parts to reduce code
duplication. No functional change.
Signed-off-by: Jiaxun Yang
---
 arch/mips/mm/tlbex.c | 196 +++++++++++++++++++++++++--------------------------
 1 file changed, 97 insertions(+), 99 deletions(-)

diff --git a/arch/mips/mm/tlbex.c b/arch/mips/mm/tlbex.c
index c9d00d9cb3c8..69ea54bdc0c3 100644
--- a/arch/mips/mm/tlbex.c
+++ b/arch/mips/mm/tlbex.c
@@ -34,6 +34,7 @@
 #include
 #include
 #include
+#include
 #include
 #include
 #include
@@ -277,10 +278,6 @@ static inline void dump_handler(const char *symbol, const void *start, const voi
 	pr_debug("\tEND(%s)\n", symbol);
 }

-/* The only general purpose registers allowed in TLB handlers. */
-#define K0 26
-#define K1 27
-
 #ifdef CONFIG_64BIT
 # define GET_CONTEXT(buf, reg) UASM_i_MFC0(buf, reg, C0_XCONTEXT)
 #else
@@ -340,30 +337,30 @@ static struct work_registers build_get_work_registers(u32 **p)
 	if (scratch_reg >= 0) {
 		/* Save in CPU local C0_KScratch? */
 		UASM_i_MTC0(p, 1, c0_kscratch(), scratch_reg);
-		r.r1 = K0;
-		r.r2 = K1;
-		r.r3 = 1;
+		r.r1 = GPR_K0;
+		r.r2 = GPR_K1;
+		r.r3 = GPR_AT;
 		return r;
 	}

 	if (num_possible_cpus() > 1) {
 		/* Get smp_processor_id */
-		UASM_i_CPUID_MFC0(p, K0, SMP_CPUID_REG);
-		UASM_i_SRL_SAFE(p, K0, K0, SMP_CPUID_REGSHIFT);
+		UASM_i_CPUID_MFC0(p, GPR_K0, SMP_CPUID_REG);
+		UASM_i_SRL_SAFE(p, GPR_K0, GPR_K0, SMP_CPUID_REGSHIFT);

-		/* handler_reg_save index in K0 */
-		UASM_i_SLL(p, K0, K0, ilog2(sizeof(struct tlb_reg_save)));
+		/* handler_reg_save index in GPR_K0 */
+		UASM_i_SLL(p, GPR_K0, GPR_K0, ilog2(sizeof(struct tlb_reg_save)));

-		UASM_i_LA(p, K1, (long)&handler_reg_save);
-		UASM_i_ADDU(p, K0, K0, K1);
+		UASM_i_LA(p, GPR_K1, (long)&handler_reg_save);
+		UASM_i_ADDU(p, GPR_K0, GPR_K0, GPR_K1);
 	} else {
-		UASM_i_LA(p, K0, (long)&handler_reg_save);
+		UASM_i_LA(p, GPR_K0, (long)&handler_reg_save);
 	}
-	/* K0 now points to save area, save $1 and $2 */
-	UASM_i_SW(p, 1, offsetof(struct tlb_reg_save, a), K0);
-	UASM_i_SW(p, 2, offsetof(struct tlb_reg_save, b), K0);
+	/* GPR_K0 now points to save area, save $1 and $2 */
+	UASM_i_SW(p, 1, offsetof(struct tlb_reg_save, a), GPR_K0);
+	UASM_i_SW(p, 2, offsetof(struct tlb_reg_save, b), GPR_K0);

-	r.r1 = K1;
+	r.r1 = GPR_K1;
 	r.r2 = 1;
 	r.r3 = 2;
 	return r;
@@ -376,9 +373,9 @@ static void build_restore_work_registers(u32 **p)
 		UASM_i_MFC0(p, 1, c0_kscratch(), scratch_reg);
 		return;
 	}
-	/* K0 already points to save area, restore $1 and $2 */
-	UASM_i_LW(p, 1, offsetof(struct tlb_reg_save, a), K0);
-	UASM_i_LW(p, 2, offsetof(struct tlb_reg_save, b), K0);
+	/* GPR_K0 already points to save area, restore $1 and $2 */
+	UASM_i_LW(p, 1, offsetof(struct tlb_reg_save, a), GPR_K0);
+	UASM_i_LW(p, 2, offsetof(struct tlb_reg_save, b), GPR_K0);
 }

 #ifndef CONFIG_MIPS_PGD_C0_CONTEXT
@@ -397,22 +394,22 @@ static void build_r3000_tlb_refill_handler(void)
 	memset(tlb_handler, 0, sizeof(tlb_handler));
 	p = tlb_handler;

-	uasm_i_mfc0(&p, K0, C0_BADVADDR);
-	uasm_i_lui(&p, K1, uasm_rel_hi(pgdc)); /* cp0 delay */
-	uasm_i_lw(&p, K1, uasm_rel_lo(pgdc), K1);
-	uasm_i_srl(&p, K0, K0, 22); /* load delay */
-	uasm_i_sll(&p, K0, K0, 2);
-	uasm_i_addu(&p, K1, K1, K0);
-	uasm_i_mfc0(&p, K0, C0_CONTEXT);
-	uasm_i_lw(&p, K1, 0, K1); /* cp0 delay */
-	uasm_i_andi(&p, K0, K0, 0xffc); /* load delay */
-	uasm_i_addu(&p, K1, K1, K0);
-	uasm_i_lw(&p, K0, 0, K1);
+	uasm_i_mfc0(&p, GPR_K0, C0_BADVADDR);
+	uasm_i_lui(&p, GPR_K1, uasm_rel_hi(pgdc)); /* cp0 delay */
+	uasm_i_lw(&p, GPR_K1, uasm_rel_lo(pgdc), GPR_K1);
+	uasm_i_srl(&p, GPR_K0, GPR_K0, 22); /* load delay */
+	uasm_i_sll(&p, GPR_K0, GPR_K0, 2);
+	uasm_i_addu(&p, GPR_K1, GPR_K1, GPR_K0);
+	uasm_i_mfc0(&p, GPR_K0, C0_CONTEXT);
+	uasm_i_lw(&p, GPR_K1, 0, GPR_K1); /* cp0 delay */
+	uasm_i_andi(&p, GPR_K0, GPR_K0, 0xffc); /* load delay */
+	uasm_i_addu(&p, GPR_K1, GPR_K1, GPR_K0);
+	uasm_i_lw(&p, GPR_K0, 0, GPR_K1);
 	uasm_i_nop(&p); /* load delay */
-	uasm_i_mtc0(&p, K0, C0_ENTRYLO0);
-	uasm_i_mfc0(&p, K1, C0_EPC); /* cp0 delay */
+	uasm_i_mtc0(&p, GPR_K0, C0_ENTRYLO0);
+	uasm_i_mfc0(&p, GPR_K1, C0_EPC); /* cp0 delay */
 	uasm_i_tlbwr(&p); /* cp0 delay */
-	uasm_i_jr(&p, K1);
+	uasm_i_jr(&p, GPR_K1);
 	uasm_i_rfe(&p); /* branch delay */

 	if (p > tlb_handler + 32)
@@ -1260,11 +1257,11 @@ static void build_r4000_tlb_refill_handler(void)
 	memset(final_handler, 0, sizeof(final_handler));

 	if (IS_ENABLED(CONFIG_64BIT) && (scratch_reg >= 0 || scratchpad_available()) && use_bbit_insns()) {
-		htlb_info = build_fast_tlb_refill_handler(&p, &l, &r, K0, K1,
+		htlb_info = build_fast_tlb_refill_handler(&p, &l, &r, GPR_K0, GPR_K1,
 							  scratch_reg);
 		vmalloc_mode = refill_scratch;
 	} else {
-		htlb_info.huge_pte = K0;
+		htlb_info.huge_pte = GPR_K0;
 		htlb_info.restore_scratch = 0;
 		htlb_info.need_reload_pte = true;
 		vmalloc_mode = refill_noscratch;
@@ -1274,29 +1271,29 @@ static void build_r4000_tlb_refill_handler(void)
 		if (bcm1250_m3_war()) {
 			unsigned int segbits = 44;

-			uasm_i_dmfc0(&p, K0, C0_BADVADDR);
-			uasm_i_dmfc0(&p, K1, C0_ENTRYHI);
-			uasm_i_xor(&p, K0, K0, K1);
-			uasm_i_dsrl_safe(&p, K1, K0, 62);
-			uasm_i_dsrl_safe(&p, K0, K0, 12 + 1);
-			uasm_i_dsll_safe(&p, K0, K0, 64 + 12 + 1 - segbits);
-			uasm_i_or(&p, K0, K0, K1);
-			uasm_il_bnez(&p, &r, K0, label_leave);
+			uasm_i_dmfc0(&p, GPR_K0, C0_BADVADDR);
+			uasm_i_dmfc0(&p, GPR_K1, C0_ENTRYHI);
+			uasm_i_xor(&p, GPR_K0, GPR_K0, GPR_K1);
+			uasm_i_dsrl_safe(&p, GPR_K1, GPR_K0, 62);
+			uasm_i_dsrl_safe(&p, GPR_K0, GPR_K0, 12 + 1);
+			uasm_i_dsll_safe(&p, GPR_K0, GPR_K0, 64 + 12 + 1 - segbits);
+			uasm_i_or(&p, GPR_K0, GPR_K0, GPR_K1);
+			uasm_il_bnez(&p, &r, GPR_K0, label_leave);
 			/* No need for uasm_i_nop */
 		}

 #ifdef CONFIG_64BIT
-	build_get_pmde64(&p, &l, &r, K0, K1); /* get pmd in K1 */
+	build_get_pmde64(&p, &l, &r, GPR_K0, GPR_K1); /* get pmd in GPR_K1 */
 #else
-	build_get_pgde32(&p, K0, K1); /* get pgd in K1 */
+	build_get_pgde32(&p, GPR_K0, GPR_K1); /* get pgd in GPR_K1 */
 #endif

 #ifdef CONFIG_MIPS_HUGE_TLB_SUPPORT
-	build_is_huge_pte(&p, &r, K0, K1, label_tlb_huge_update);
+	build_is_huge_pte(&p, &r, GPR_K0, GPR_K1, label_tlb_huge_update);
 #endif

-	build_get_ptep(&p, K0, K1);
-	build_update_entries(&p, K0, K1);
+	build_get_ptep(&p, GPR_K0, GPR_K1);
+	build_update_entries(&p, GPR_K0, GPR_K1);
 	build_tlb_write_entry(&p, &l, &r, tlb_random);
 	uasm_l_leave(&l, p);
 	uasm_i_eret(&p); /* return from trap */
@@ -1304,14 +1301,14 @@ static void build_r4000_tlb_refill_handler(void)
 #ifdef CONFIG_MIPS_HUGE_TLB_SUPPORT
 	uasm_l_tlb_huge_update(&l, p);
 	if (htlb_info.need_reload_pte)
-		UASM_i_LW(&p, htlb_info.huge_pte, 0, K1);
-	build_huge_update_entries(&p, htlb_info.huge_pte, K1);
-	build_huge_tlb_write_entry(&p, &l, &r, K0, tlb_random,
+		UASM_i_LW(&p, htlb_info.huge_pte, 0, GPR_K1);
+	build_huge_update_entries(&p, htlb_info.huge_pte, GPR_K1);
+	build_huge_tlb_write_entry(&p, &l, &r, GPR_K0, tlb_random,
 				   htlb_info.restore_scratch);
 #endif

 #ifdef CONFIG_64BIT
-	build_get_pgd_vmalloc64(&p, &l, &r, K0, K1, vmalloc_mode);
+	build_get_pgd_vmalloc64(&p, &l, &r, GPR_K0, GPR_K1, vmalloc_mode);
 #endif

 	/*
@@ -1484,34 +1481,35 @@ static void build_loongson3_tlb_refill_handler(void)
 	memset(tlb_handler, 0, sizeof(tlb_handler));

 	if (check_for_high_segbits) {
-		uasm_i_dmfc0(&p, K0, C0_BADVADDR);
-		uasm_i_dsrl_safe(&p, K1, K0, PGDIR_SHIFT + PGD_TABLE_ORDER + PAGE_SHIFT - 3);
-		uasm_il_beqz(&p, &r, K1, label_vmalloc);
+		uasm_i_dmfc0(&p, GPR_K0, C0_BADVADDR);
+		uasm_i_dsrl_safe(&p, GPR_K1, GPR_K0,
+				 PGDIR_SHIFT + PGD_TABLE_ORDER + PAGE_SHIFT - 3);
+		uasm_il_beqz(&p, &r, GPR_K1, label_vmalloc);
 		uasm_i_nop(&p);

-		uasm_il_bgez(&p, &r, K0, label_large_segbits_fault);
+		uasm_il_bgez(&p, &r, GPR_K0, label_large_segbits_fault);
 		uasm_i_nop(&p);
 		uasm_l_vmalloc(&l, p);
 	}

-	uasm_i_dmfc0(&p, K1, C0_PGD);
+	uasm_i_dmfc0(&p, GPR_K1, C0_PGD);

-	uasm_i_lddir(&p, K0, K1, 3); /* global page dir */
+	uasm_i_lddir(&p, GPR_K0, GPR_K1, 3); /* global page dir */
 #ifndef __PAGETABLE_PMD_FOLDED
-	uasm_i_lddir(&p, K1, K0, 1); /* middle page dir */
+	uasm_i_lddir(&p, GPR_K1, GPR_K0, 1); /* middle page dir */
 #endif
-	uasm_i_ldpte(&p, K1, 0); /* even */
-	uasm_i_ldpte(&p, K1, 1); /* odd */
+
uasm_i_ldpte(&p, GPR_K1, 0); /* even */ + uasm_i_ldpte(&p, GPR_K1, 1); /* odd */ uasm_i_tlbwr(&p); /* restore page mask */ if (PM_DEFAULT_MASK >> 16) { - uasm_i_lui(&p, K0, PM_DEFAULT_MASK >> 16); - uasm_i_ori(&p, K0, K0, PM_DEFAULT_MASK & 0xffff); - uasm_i_mtc0(&p, K0, C0_PAGEMASK); + uasm_i_lui(&p, GPR_K0, PM_DEFAULT_MASK >> 16); + uasm_i_ori(&p, GPR_K0, GPR_K0, PM_DEFAULT_MASK & 0xffff); + uasm_i_mtc0(&p, GPR_K0, C0_PAGEMASK); } else if (PM_DEFAULT_MASK) { - uasm_i_ori(&p, K0, 0, PM_DEFAULT_MASK); - uasm_i_mtc0(&p, K0, C0_PAGEMASK); + uasm_i_ori(&p, GPR_K0, 0, PM_DEFAULT_MASK); + uasm_i_mtc0(&p, GPR_K0, C0_PAGEMASK); } else { uasm_i_mtc0(&p, 0, C0_PAGEMASK); } @@ -1520,8 +1518,8 @@ static void build_loongson3_tlb_refill_handler(void) if (check_for_high_segbits) { uasm_l_large_segbits_fault(&l, p); - UASM_i_LA(&p, K1, (unsigned long)tlb_do_page_fault_0); - uasm_i_jr(&p, K1); + UASM_i_LA(&p, GPR_K1, (unsigned long)tlb_do_page_fault_0); + uasm_i_jr(&p, GPR_K1); uasm_i_nop(&p); } @@ -1887,11 +1885,11 @@ static void build_r3000_tlb_load_handler(void) memset(labels, 0, sizeof(labels)); memset(relocs, 0, sizeof(relocs)); - build_r3000_tlbchange_handler_head(&p, K0, K1); - build_pte_present(&p, &r, K0, K1, -1, label_nopage_tlbl); + build_r3000_tlbchange_handler_head(&p, GPR_K0, GPR_K1); + build_pte_present(&p, &r, GPR_K0, GPR_K1, -1, label_nopage_tlbl); uasm_i_nop(&p); /* load delay */ - build_make_valid(&p, &r, K0, K1, -1); - build_r3000_tlb_reload_write(&p, &l, &r, K0, K1); + build_make_valid(&p, &r, GPR_K0, GPR_K1, -1); + build_r3000_tlb_reload_write(&p, &l, &r, GPR_K0, GPR_K1); uasm_l_nopage_tlbl(&l, p); uasm_i_j(&p, (unsigned long)tlb_do_page_fault_0 & 0x0fffffff); @@ -1917,11 +1915,11 @@ static void build_r3000_tlb_store_handler(void) memset(labels, 0, sizeof(labels)); memset(relocs, 0, sizeof(relocs)); - build_r3000_tlbchange_handler_head(&p, K0, K1); - build_pte_writable(&p, &r, K0, K1, -1, label_nopage_tlbs); + build_r3000_tlbchange_handler_head(&p, GPR_K0, 
GPR_K1); + build_pte_writable(&p, &r, GPR_K0, GPR_K1, -1, label_nopage_tlbs); uasm_i_nop(&p); /* load delay */ - build_make_write(&p, &r, K0, K1, -1); - build_r3000_tlb_reload_write(&p, &l, &r, K0, K1); + build_make_write(&p, &r, GPR_K0, GPR_K1, -1); + build_r3000_tlb_reload_write(&p, &l, &r, GPR_K0, GPR_K1); uasm_l_nopage_tlbs(&l, p); uasm_i_j(&p, (unsigned long)tlb_do_page_fault_1 & 0x0fffffff); @@ -1947,11 +1945,11 @@ static void build_r3000_tlb_modify_handler(void) memset(labels, 0, sizeof(labels)); memset(relocs, 0, sizeof(relocs)); - build_r3000_tlbchange_handler_head(&p, K0, K1); - build_pte_modifiable(&p, &r, K0, K1, -1, label_nopage_tlbm); + build_r3000_tlbchange_handler_head(&p, GPR_K0, GPR_K1); + build_pte_modifiable(&p, &r, GPR_K0, GPR_K1, -1, label_nopage_tlbm); uasm_i_nop(&p); /* load delay */ - build_make_write(&p, &r, K0, K1, -1); - build_r3000_pte_reload_tlbwi(&p, K0, K1); + build_make_write(&p, &r, GPR_K0, GPR_K1, -1); + build_r3000_pte_reload_tlbwi(&p, GPR_K0, GPR_K1); uasm_l_nopage_tlbm(&l, p); uasm_i_j(&p, (unsigned long)tlb_do_page_fault_1 & 0x0fffffff); @@ -2067,14 +2065,14 @@ static void build_r4000_tlb_load_handler(void) if (bcm1250_m3_war()) { unsigned int segbits = 44; - uasm_i_dmfc0(&p, K0, C0_BADVADDR); - uasm_i_dmfc0(&p, K1, C0_ENTRYHI); - uasm_i_xor(&p, K0, K0, K1); - uasm_i_dsrl_safe(&p, K1, K0, 62); - uasm_i_dsrl_safe(&p, K0, K0, 12 + 1); - uasm_i_dsll_safe(&p, K0, K0, 64 + 12 + 1 - segbits); - uasm_i_or(&p, K0, K0, K1); - uasm_il_bnez(&p, &r, K0, label_leave); + uasm_i_dmfc0(&p, GPR_K0, C0_BADVADDR); + uasm_i_dmfc0(&p, GPR_K1, C0_ENTRYHI); + uasm_i_xor(&p, GPR_K0, GPR_K0, GPR_K1); + uasm_i_dsrl_safe(&p, GPR_K1, GPR_K0, 62); + uasm_i_dsrl_safe(&p, GPR_K0, GPR_K0, 12 + 1); + uasm_i_dsll_safe(&p, GPR_K0, GPR_K0, 64 + 12 + 1 - segbits); + uasm_i_or(&p, GPR_K0, GPR_K0, GPR_K1); + uasm_il_bnez(&p, &r, GPR_K0, label_leave); /* No need for uasm_i_nop */ } @@ -2217,9 +2215,9 @@ static void build_r4000_tlb_load_handler(void) 
build_restore_work_registers(&p); #ifdef CONFIG_CPU_MICROMIPS if ((unsigned long)tlb_do_page_fault_0 & 1) { - uasm_i_lui(&p, K0, uasm_rel_hi((long)tlb_do_page_fault_0)); - uasm_i_addiu(&p, K0, K0, uasm_rel_lo((long)tlb_do_page_fault_0)); - uasm_i_jr(&p, K0); + uasm_i_lui(&p, GPR_K0, uasm_rel_hi((long)tlb_do_page_fault_0)); + uasm_i_addiu(&p, GPR_K0, GPR_K0, uasm_rel_lo((long)tlb_do_page_fault_0)); + uasm_i_jr(&p, GPR_K0); } else #endif uasm_i_j(&p, (unsigned long)tlb_do_page_fault_0 & 0x0fffffff); @@ -2273,9 +2271,9 @@ static void build_r4000_tlb_store_handler(void) build_restore_work_registers(&p); #ifdef CONFIG_CPU_MICROMIPS if ((unsigned long)tlb_do_page_fault_1 & 1) { - uasm_i_lui(&p, K0, uasm_rel_hi((long)tlb_do_page_fault_1)); - uasm_i_addiu(&p, K0, K0, uasm_rel_lo((long)tlb_do_page_fault_1)); - uasm_i_jr(&p, K0); + uasm_i_lui(&p, GPR_K0, uasm_rel_hi((long)tlb_do_page_fault_1)); + uasm_i_addiu(&p, GPR_K0, GPR_K0, uasm_rel_lo((long)tlb_do_page_fault_1)); + uasm_i_jr(&p, GPR_K0); } else #endif uasm_i_j(&p, (unsigned long)tlb_do_page_fault_1 & 0x0fffffff); @@ -2330,9 +2328,9 @@ static void build_r4000_tlb_modify_handler(void) build_restore_work_registers(&p); #ifdef CONFIG_CPU_MICROMIPS if ((unsigned long)tlb_do_page_fault_1 & 1) { - uasm_i_lui(&p, K0, uasm_rel_hi((long)tlb_do_page_fault_1)); - uasm_i_addiu(&p, K0, K0, uasm_rel_lo((long)tlb_do_page_fault_1)); - uasm_i_jr(&p, K0); + uasm_i_lui(&p, GPR_K0, uasm_rel_hi((long)tlb_do_page_fault_1)); + uasm_i_addiu(&p, GPR_K0, GPR_K0, uasm_rel_lo((long)tlb_do_page_fault_1)); + uasm_i_jr(&p, GPR_K0); } else #endif uasm_i_j(&p, (unsigned long)tlb_do_page_fault_1 & 0x0fffffff);
From patchwork Fri Feb 9 18:07:53 2024
X-Patchwork-Submitter: Jiaxun Yang
X-Patchwork-Id: 13551698
From: Jiaxun Yang
Date: Fri, 09 Feb 2024 18:07:53 +0000
Subject: [PATCH 7/8] MIPS: kvm/entry: Use GPR number macros
X-Mailing-List: linux-mips@vger.kernel.org
Message-Id: <20240209-regname-v1-7-2125efa016ef@flygoat.com>
References: <20240209-regname-v1-0-2125efa016ef@flygoat.com>
In-Reply-To: <20240209-regname-v1-0-2125efa016ef@flygoat.com>
To: Thomas Bogendoerfer
Cc: Gregory CLEMENT , linux-mips@vger.kernel.org, linux-kernel@vger.kernel.org, Jiaxun Yang
X-Mailer: b4 0.12.4

Use GPR number macros in uasm code generation parts to reduce code duplication. No functional change.
Signed-off-by: Jiaxun Yang --- arch/mips/kvm/entry.c | 404 +++++++++++++++++++++++--------------------------- 1 file changed, 187 insertions(+), 217 deletions(-) diff --git a/arch/mips/kvm/entry.c b/arch/mips/kvm/entry.c index 96f64a95f51b..ac8e074c6bb7 100644 --- a/arch/mips/kvm/entry.c +++ b/arch/mips/kvm/entry.c @@ -16,41 +16,11 @@ #include #include #include +#include #include #include #include -/* Register names */ -#define ZERO 0 -#define AT 1 -#define V0 2 -#define V1 3 -#define A0 4 -#define A1 5 - -#if _MIPS_SIM == _MIPS_SIM_ABI32 -#define T0 8 -#define T1 9 -#define T2 10 -#define T3 11 -#endif /* _MIPS_SIM == _MIPS_SIM_ABI32 */ - -#if _MIPS_SIM == _MIPS_SIM_ABI64 || _MIPS_SIM == _MIPS_SIM_NABI32 -#define T0 12 -#define T1 13 -#define T2 14 -#define T3 15 -#endif /* _MIPS_SIM == _MIPS_SIM_ABI64 || _MIPS_SIM == _MIPS_SIM_NABI32 */ - -#define S0 16 -#define S1 17 -#define T9 25 -#define K0 26 -#define K1 27 -#define GP 28 -#define SP 29 -#define RA 31 - #define CALLFRAME_SIZ 32 static unsigned int scratch_vcpu[2] = { C0_DDATALO }; @@ -189,60 +159,60 @@ void *kvm_mips_build_vcpu_run(void *addr) unsigned int i; /* - * A0: vcpu + * GPR_A0: vcpu */ /* k0/k1 not being used in host kernel context */ - UASM_i_ADDIU(&p, K1, SP, -(int)sizeof(struct pt_regs)); + UASM_i_ADDIU(&p, GPR_K1, GPR_SP, -(int)sizeof(struct pt_regs)); for (i = 16; i < 32; ++i) { if (i == 24) i = 28; - UASM_i_SW(&p, i, offsetof(struct pt_regs, regs[i]), K1); + UASM_i_SW(&p, i, offsetof(struct pt_regs, regs[i]), GPR_K1); } /* Save host status */ - uasm_i_mfc0(&p, V0, C0_STATUS); - UASM_i_SW(&p, V0, offsetof(struct pt_regs, cp0_status), K1); + uasm_i_mfc0(&p, GPR_V0, C0_STATUS); + UASM_i_SW(&p, GPR_V0, offsetof(struct pt_regs, cp0_status), GPR_K1); /* Save scratch registers, will be used to store pointer to vcpu etc */ - kvm_mips_build_save_scratch(&p, V1, K1); + kvm_mips_build_save_scratch(&p, GPR_V1, GPR_K1); /* VCPU scratch register has pointer to vcpu */ - UASM_i_MTC0(&p, A0, scratch_vcpu[0], 
scratch_vcpu[1]); + UASM_i_MTC0(&p, GPR_A0, scratch_vcpu[0], scratch_vcpu[1]); /* Offset into vcpu->arch */ - UASM_i_ADDIU(&p, K1, A0, offsetof(struct kvm_vcpu, arch)); + UASM_i_ADDIU(&p, GPR_K1, GPR_A0, offsetof(struct kvm_vcpu, arch)); /* * Save the host stack to VCPU, used for exception processing * when we exit from the Guest */ - UASM_i_SW(&p, SP, offsetof(struct kvm_vcpu_arch, host_stack), K1); + UASM_i_SW(&p, GPR_SP, offsetof(struct kvm_vcpu_arch, host_stack), GPR_K1); /* Save the kernel gp as well */ - UASM_i_SW(&p, GP, offsetof(struct kvm_vcpu_arch, host_gp), K1); + UASM_i_SW(&p, GPR_GP, offsetof(struct kvm_vcpu_arch, host_gp), GPR_K1); /* * Setup status register for running the guest in UM, interrupts * are disabled */ - UASM_i_LA(&p, K0, ST0_EXL | KSU_USER | ST0_BEV | ST0_KX_IF_64); - uasm_i_mtc0(&p, K0, C0_STATUS); + UASM_i_LA(&p, GPR_K0, ST0_EXL | KSU_USER | ST0_BEV | ST0_KX_IF_64); + uasm_i_mtc0(&p, GPR_K0, C0_STATUS); uasm_i_ehb(&p); /* load up the new EBASE */ - UASM_i_LW(&p, K0, offsetof(struct kvm_vcpu_arch, guest_ebase), K1); - build_set_exc_base(&p, K0); + UASM_i_LW(&p, GPR_K0, offsetof(struct kvm_vcpu_arch, guest_ebase), GPR_K1); + build_set_exc_base(&p, GPR_K0); /* * Now that the new EBASE has been loaded, unset BEV, set * interrupt mask as it was but make sure that timer interrupts * are enabled */ - uasm_i_addiu(&p, K0, ZERO, ST0_EXL | KSU_USER | ST0_IE | ST0_KX_IF_64); - uasm_i_andi(&p, V0, V0, ST0_IM); - uasm_i_or(&p, K0, K0, V0); - uasm_i_mtc0(&p, K0, C0_STATUS); + uasm_i_addiu(&p, GPR_K0, GPR_ZERO, ST0_EXL | KSU_USER | ST0_IE | ST0_KX_IF_64); + uasm_i_andi(&p, GPR_V0, GPR_V0, ST0_IM); + uasm_i_or(&p, GPR_K0, GPR_K0, GPR_V0); + uasm_i_mtc0(&p, GPR_K0, C0_STATUS); uasm_i_ehb(&p); p = kvm_mips_build_enter_guest(p); @@ -273,15 +243,15 @@ static void *kvm_mips_build_enter_guest(void *addr) memset(relocs, 0, sizeof(relocs)); /* Set Guest EPC */ - UASM_i_LW(&p, T0, offsetof(struct kvm_vcpu_arch, pc), K1); - UASM_i_MTC0(&p, T0, C0_EPC); + 
UASM_i_LW(&p, GPR_T0, offsetof(struct kvm_vcpu_arch, pc), GPR_K1); + UASM_i_MTC0(&p, GPR_T0, C0_EPC); /* Save normal linux process pgd (VZ guarantees pgd_reg is set) */ if (cpu_has_ldpte) - UASM_i_MFC0(&p, K0, C0_PWBASE); + UASM_i_MFC0(&p, GPR_K0, C0_PWBASE); else - UASM_i_MFC0(&p, K0, c0_kscratch(), pgd_reg); - UASM_i_SW(&p, K0, offsetof(struct kvm_vcpu_arch, host_pgd), K1); + UASM_i_MFC0(&p, GPR_K0, c0_kscratch(), pgd_reg); + UASM_i_SW(&p, GPR_K0, offsetof(struct kvm_vcpu_arch, host_pgd), GPR_K1); /* * Set up KVM GPA pgd. @@ -289,24 +259,24 @@ static void *kvm_mips_build_enter_guest(void *addr) * - call tlbmiss_handler_setup_pgd(mm->pgd) * - write mm->pgd into CP0_PWBase * - * We keep S0 pointing at struct kvm so we can load the ASID below. + * We keep GPR_S0 pointing at struct kvm so we can load the ASID below. */ - UASM_i_LW(&p, S0, (int)offsetof(struct kvm_vcpu, kvm) - - (int)offsetof(struct kvm_vcpu, arch), K1); - UASM_i_LW(&p, A0, offsetof(struct kvm, arch.gpa_mm.pgd), S0); - UASM_i_LA(&p, T9, (unsigned long)tlbmiss_handler_setup_pgd); - uasm_i_jalr(&p, RA, T9); + UASM_i_LW(&p, GPR_S0, (int)offsetof(struct kvm_vcpu, kvm) - + (int)offsetof(struct kvm_vcpu, arch), GPR_K1); + UASM_i_LW(&p, GPR_A0, offsetof(struct kvm, arch.gpa_mm.pgd), GPR_S0); + UASM_i_LA(&p, GPR_T9, (unsigned long)tlbmiss_handler_setup_pgd); + uasm_i_jalr(&p, GPR_RA, GPR_T9); /* delay slot */ if (cpu_has_htw) - UASM_i_MTC0(&p, A0, C0_PWBASE); + UASM_i_MTC0(&p, GPR_A0, C0_PWBASE); else uasm_i_nop(&p); /* Set GM bit to setup eret to VZ guest context */ - uasm_i_addiu(&p, V1, ZERO, 1); - uasm_i_mfc0(&p, K0, C0_GUESTCTL0); - uasm_i_ins(&p, K0, V1, MIPS_GCTL0_GM_SHIFT, 1); - uasm_i_mtc0(&p, K0, C0_GUESTCTL0); + uasm_i_addiu(&p, GPR_V1, GPR_ZERO, 1); + uasm_i_mfc0(&p, GPR_K0, C0_GUESTCTL0); + uasm_i_ins(&p, GPR_K0, GPR_V1, MIPS_GCTL0_GM_SHIFT, 1); + uasm_i_mtc0(&p, GPR_K0, C0_GUESTCTL0); if (cpu_has_guestid) { /* @@ -315,13 +285,13 @@ static void *kvm_mips_build_enter_guest(void *addr) */ /* Get 
current GuestID */ - uasm_i_mfc0(&p, T0, C0_GUESTCTL1); + uasm_i_mfc0(&p, GPR_T0, C0_GUESTCTL1); /* Set GuestCtl1.RID = GuestCtl1.ID */ - uasm_i_ext(&p, T1, T0, MIPS_GCTL1_ID_SHIFT, + uasm_i_ext(&p, GPR_T1, GPR_T0, MIPS_GCTL1_ID_SHIFT, MIPS_GCTL1_ID_WIDTH); - uasm_i_ins(&p, T0, T1, MIPS_GCTL1_RID_SHIFT, + uasm_i_ins(&p, GPR_T0, GPR_T1, MIPS_GCTL1_RID_SHIFT, MIPS_GCTL1_RID_WIDTH); - uasm_i_mtc0(&p, T0, C0_GUESTCTL1); + uasm_i_mtc0(&p, GPR_T0, C0_GUESTCTL1); /* GuestID handles dealiasing so we don't need to touch ASID */ goto skip_asid_restore; @@ -330,65 +300,65 @@ static void *kvm_mips_build_enter_guest(void *addr) /* Root ASID Dealias (RAD) */ /* Save host ASID */ - UASM_i_MFC0(&p, K0, C0_ENTRYHI); - UASM_i_SW(&p, K0, offsetof(struct kvm_vcpu_arch, host_entryhi), - K1); + UASM_i_MFC0(&p, GPR_K0, C0_ENTRYHI); + UASM_i_SW(&p, GPR_K0, offsetof(struct kvm_vcpu_arch, host_entryhi), + GPR_K1); /* Set the root ASID for the Guest */ - UASM_i_ADDIU(&p, T1, S0, + UASM_i_ADDIU(&p, GPR_T1, GPR_S0, offsetof(struct kvm, arch.gpa_mm.context.asid)); /* t1: contains the base of the ASID array, need to get the cpu id */ /* smp_processor_id */ - uasm_i_lw(&p, T2, offsetof(struct thread_info, cpu), GP); + uasm_i_lw(&p, GPR_T2, offsetof(struct thread_info, cpu), GPR_GP); /* index the ASID array */ - uasm_i_sll(&p, T2, T2, ilog2(sizeof(long))); - UASM_i_ADDU(&p, T3, T1, T2); - UASM_i_LW(&p, K0, 0, T3); + uasm_i_sll(&p, GPR_T2, GPR_T2, ilog2(sizeof(long))); + UASM_i_ADDU(&p, GPR_T3, GPR_T1, GPR_T2); + UASM_i_LW(&p, GPR_K0, 0, GPR_T3); #ifdef CONFIG_MIPS_ASID_BITS_VARIABLE /* * reuse ASID array offset * cpuinfo_mips is a multiple of sizeof(long) */ - uasm_i_addiu(&p, T3, ZERO, sizeof(struct cpuinfo_mips)/sizeof(long)); - uasm_i_mul(&p, T2, T2, T3); + uasm_i_addiu(&p, GPR_T3, GPR_ZERO, sizeof(struct cpuinfo_mips)/sizeof(long)); + uasm_i_mul(&p, GPR_T2, GPR_T2, GPR_T3); - UASM_i_LA_mostly(&p, AT, (long)&cpu_data[0].asid_mask); - UASM_i_ADDU(&p, AT, AT, T2); - UASM_i_LW(&p, T2, 
uasm_rel_lo((long)&cpu_data[0].asid_mask), AT); - uasm_i_and(&p, K0, K0, T2); + UASM_i_LA_mostly(&p, GPR_AT, (long)&cpu_data[0].asid_mask); + UASM_i_ADDU(&p, GPR_AT, GPR_AT, GPR_T2); + UASM_i_LW(&p, GPR_T2, uasm_rel_lo((long)&cpu_data[0].asid_mask), GPR_AT); + uasm_i_and(&p, GPR_K0, GPR_K0, GPR_T2); #else - uasm_i_andi(&p, K0, K0, MIPS_ENTRYHI_ASID); + uasm_i_andi(&p, GPR_K0, GPR_K0, MIPS_ENTRYHI_ASID); #endif /* Set up KVM VZ root ASID (!guestid) */ - uasm_i_mtc0(&p, K0, C0_ENTRYHI); + uasm_i_mtc0(&p, GPR_K0, C0_ENTRYHI); skip_asid_restore: uasm_i_ehb(&p); /* Disable RDHWR access */ - uasm_i_mtc0(&p, ZERO, C0_HWRENA); + uasm_i_mtc0(&p, GPR_ZERO, C0_HWRENA); /* load the guest context from VCPU and return */ for (i = 1; i < 32; ++i) { /* Guest k0/k1 loaded later */ - if (i == K0 || i == K1) + if (i == GPR_K0 || i == GPR_K1) continue; - UASM_i_LW(&p, i, offsetof(struct kvm_vcpu_arch, gprs[i]), K1); + UASM_i_LW(&p, i, offsetof(struct kvm_vcpu_arch, gprs[i]), GPR_K1); } #ifndef CONFIG_CPU_MIPSR6 /* Restore hi/lo */ - UASM_i_LW(&p, K0, offsetof(struct kvm_vcpu_arch, hi), K1); - uasm_i_mthi(&p, K0); + UASM_i_LW(&p, GPR_K0, offsetof(struct kvm_vcpu_arch, hi), GPR_K1); + uasm_i_mthi(&p, GPR_K0); - UASM_i_LW(&p, K0, offsetof(struct kvm_vcpu_arch, lo), K1); - uasm_i_mtlo(&p, K0); + UASM_i_LW(&p, GPR_K0, offsetof(struct kvm_vcpu_arch, lo), GPR_K1); + uasm_i_mtlo(&p, GPR_K0); #endif /* Restore the guest's k0/k1 registers */ - UASM_i_LW(&p, K0, offsetof(struct kvm_vcpu_arch, gprs[K0]), K1); - UASM_i_LW(&p, K1, offsetof(struct kvm_vcpu_arch, gprs[K1]), K1); + UASM_i_LW(&p, GPR_K0, offsetof(struct kvm_vcpu_arch, gprs[GPR_K0]), GPR_K1); + UASM_i_LW(&p, GPR_K1, offsetof(struct kvm_vcpu_arch, gprs[GPR_K1]), GPR_K1); /* Jump to guest */ uasm_i_eret(&p); @@ -421,13 +391,13 @@ void *kvm_mips_build_tlb_refill_exception(void *addr, void *handler) memset(relocs, 0, sizeof(relocs)); /* Save guest k1 into scratch register */ - UASM_i_MTC0(&p, K1, scratch_tmp[0], scratch_tmp[1]); + 
UASM_i_MTC0(&p, GPR_K1, scratch_tmp[0], scratch_tmp[1]); /* Get the VCPU pointer from the VCPU scratch register */ - UASM_i_MFC0(&p, K1, scratch_vcpu[0], scratch_vcpu[1]); + UASM_i_MFC0(&p, GPR_K1, scratch_vcpu[0], scratch_vcpu[1]); /* Save guest k0 into VCPU structure */ - UASM_i_SW(&p, K0, offsetof(struct kvm_vcpu, arch.gprs[K0]), K1); + UASM_i_SW(&p, GPR_K0, offsetof(struct kvm_vcpu, arch.gprs[GPR_K0]), GPR_K1); /* * Some of the common tlbex code uses current_cpu_type(). For KVM we @@ -436,13 +406,13 @@ void *kvm_mips_build_tlb_refill_exception(void *addr, void *handler) preempt_disable(); #ifdef CONFIG_CPU_LOONGSON64 - UASM_i_MFC0(&p, K1, C0_PGD); - uasm_i_lddir(&p, K0, K1, 3); /* global page dir */ + UASM_i_MFC0(&p, GPR_K1, C0_PGD); + uasm_i_lddir(&p, GPR_K0, GPR_K1, 3); /* global page dir */ #ifndef __PAGETABLE_PMD_FOLDED - uasm_i_lddir(&p, K1, K0, 1); /* middle page dir */ + uasm_i_lddir(&p, GPR_K1, GPR_K0, 1); /* middle page dir */ #endif - uasm_i_ldpte(&p, K1, 0); /* even */ - uasm_i_ldpte(&p, K1, 1); /* odd */ + uasm_i_ldpte(&p, GPR_K1, 0); /* even */ + uasm_i_ldpte(&p, GPR_K1, 1); /* odd */ uasm_i_tlbwr(&p); #else /* @@ -457,27 +427,27 @@ void *kvm_mips_build_tlb_refill_exception(void *addr, void *handler) */ #ifdef CONFIG_64BIT - build_get_pmde64(&p, &l, &r, K0, K1); /* get pmd in K1 */ + build_get_pmde64(&p, &l, &r, GPR_K0, GPR_K1); /* get pmd in GPR_K1 */ #else - build_get_pgde32(&p, K0, K1); /* get pgd in K1 */ + build_get_pgde32(&p, GPR_K0, GPR_K1); /* get pgd in GPR_K1 */ #endif /* we don't support huge pages yet */ - build_get_ptep(&p, K0, K1); - build_update_entries(&p, K0, K1); + build_get_ptep(&p, GPR_K0, GPR_K1); + build_update_entries(&p, GPR_K0, GPR_K1); build_tlb_write_entry(&p, &l, &r, tlb_random); #endif preempt_enable(); /* Get the VCPU pointer from the VCPU scratch register again */ - UASM_i_MFC0(&p, K1, scratch_vcpu[0], scratch_vcpu[1]); + UASM_i_MFC0(&p, GPR_K1, scratch_vcpu[0], scratch_vcpu[1]); /* Restore the guest's k0/k1 registers 
*/ - UASM_i_LW(&p, K0, offsetof(struct kvm_vcpu, arch.gprs[K0]), K1); + UASM_i_LW(&p, GPR_K0, offsetof(struct kvm_vcpu, arch.gprs[GPR_K0]), GPR_K1); uasm_i_ehb(&p); - UASM_i_MFC0(&p, K1, scratch_tmp[0], scratch_tmp[1]); + UASM_i_MFC0(&p, GPR_K1, scratch_tmp[0], scratch_tmp[1]); /* Jump to guest */ uasm_i_eret(&p); @@ -507,14 +477,14 @@ void *kvm_mips_build_exception(void *addr, void *handler) memset(relocs, 0, sizeof(relocs)); /* Save guest k1 into scratch register */ - UASM_i_MTC0(&p, K1, scratch_tmp[0], scratch_tmp[1]); + UASM_i_MTC0(&p, GPR_K1, scratch_tmp[0], scratch_tmp[1]); /* Get the VCPU pointer from the VCPU scratch register */ - UASM_i_MFC0(&p, K1, scratch_vcpu[0], scratch_vcpu[1]); - UASM_i_ADDIU(&p, K1, K1, offsetof(struct kvm_vcpu, arch)); + UASM_i_MFC0(&p, GPR_K1, scratch_vcpu[0], scratch_vcpu[1]); + UASM_i_ADDIU(&p, GPR_K1, GPR_K1, offsetof(struct kvm_vcpu, arch)); /* Save guest k0 into VCPU structure */ - UASM_i_SW(&p, K0, offsetof(struct kvm_vcpu_arch, gprs[K0]), K1); + UASM_i_SW(&p, GPR_K0, offsetof(struct kvm_vcpu_arch, gprs[GPR_K0]), GPR_K1); /* Branch to the common handler */ uasm_il_b(&p, &r, label_exit_common); @@ -562,85 +532,85 @@ void *kvm_mips_build_exit(void *addr) /* Start saving Guest context to VCPU */ for (i = 0; i < 32; ++i) { /* Guest k0/k1 saved later */ - if (i == K0 || i == K1) + if (i == GPR_K0 || i == GPR_K1) continue; - UASM_i_SW(&p, i, offsetof(struct kvm_vcpu_arch, gprs[i]), K1); + UASM_i_SW(&p, i, offsetof(struct kvm_vcpu_arch, gprs[i]), GPR_K1); } #ifndef CONFIG_CPU_MIPSR6 /* We need to save hi/lo and restore them on the way out */ - uasm_i_mfhi(&p, T0); - UASM_i_SW(&p, T0, offsetof(struct kvm_vcpu_arch, hi), K1); + uasm_i_mfhi(&p, GPR_T0); + UASM_i_SW(&p, GPR_T0, offsetof(struct kvm_vcpu_arch, hi), GPR_K1); - uasm_i_mflo(&p, T0); - UASM_i_SW(&p, T0, offsetof(struct kvm_vcpu_arch, lo), K1); + uasm_i_mflo(&p, GPR_T0); + UASM_i_SW(&p, GPR_T0, offsetof(struct kvm_vcpu_arch, lo), GPR_K1); #endif /* Finally save guest k1 to 
 VCPU */
 	uasm_i_ehb(&p);
-	UASM_i_MFC0(&p, T0, scratch_tmp[0], scratch_tmp[1]);
-	UASM_i_SW(&p, T0, offsetof(struct kvm_vcpu_arch, gprs[K1]), K1);
+	UASM_i_MFC0(&p, GPR_T0, scratch_tmp[0], scratch_tmp[1]);
+	UASM_i_SW(&p, GPR_T0, offsetof(struct kvm_vcpu_arch, gprs[GPR_K1]), GPR_K1);
 
 	/* Now that context has been saved, we can use other registers */
 
 	/* Restore vcpu */
-	UASM_i_MFC0(&p, S0, scratch_vcpu[0], scratch_vcpu[1]);
+	UASM_i_MFC0(&p, GPR_S0, scratch_vcpu[0], scratch_vcpu[1]);
 
 	/*
 	 * Save Host level EPC, BadVaddr and Cause to VCPU, useful to process
 	 * the exception
 	 */
-	UASM_i_MFC0(&p, K0, C0_EPC);
-	UASM_i_SW(&p, K0, offsetof(struct kvm_vcpu_arch, pc), K1);
+	UASM_i_MFC0(&p, GPR_K0, C0_EPC);
+	UASM_i_SW(&p, GPR_K0, offsetof(struct kvm_vcpu_arch, pc), GPR_K1);
 
-	UASM_i_MFC0(&p, K0, C0_BADVADDR);
-	UASM_i_SW(&p, K0, offsetof(struct kvm_vcpu_arch, host_cp0_badvaddr),
-		  K1);
+	UASM_i_MFC0(&p, GPR_K0, C0_BADVADDR);
+	UASM_i_SW(&p, GPR_K0, offsetof(struct kvm_vcpu_arch, host_cp0_badvaddr),
+		  GPR_K1);
 
-	uasm_i_mfc0(&p, K0, C0_CAUSE);
-	uasm_i_sw(&p, K0, offsetof(struct kvm_vcpu_arch, host_cp0_cause), K1);
+	uasm_i_mfc0(&p, GPR_K0, C0_CAUSE);
+	uasm_i_sw(&p, GPR_K0, offsetof(struct kvm_vcpu_arch, host_cp0_cause), GPR_K1);
 
 	if (cpu_has_badinstr) {
-		uasm_i_mfc0(&p, K0, C0_BADINSTR);
-		uasm_i_sw(&p, K0, offsetof(struct kvm_vcpu_arch,
-					   host_cp0_badinstr), K1);
+		uasm_i_mfc0(&p, GPR_K0, C0_BADINSTR);
+		uasm_i_sw(&p, GPR_K0, offsetof(struct kvm_vcpu_arch,
+					       host_cp0_badinstr), GPR_K1);
 	}
 
 	if (cpu_has_badinstrp) {
-		uasm_i_mfc0(&p, K0, C0_BADINSTRP);
-		uasm_i_sw(&p, K0, offsetof(struct kvm_vcpu_arch,
-					   host_cp0_badinstrp), K1);
+		uasm_i_mfc0(&p, GPR_K0, C0_BADINSTRP);
+		uasm_i_sw(&p, GPR_K0, offsetof(struct kvm_vcpu_arch,
+					       host_cp0_badinstrp), GPR_K1);
 	}
 
 	/* Now restore the host state just enough to run the handlers */
 
 	/* Switch EBASE to the one used by Linux */
 	/* load up the host EBASE */
-	uasm_i_mfc0(&p, V0, C0_STATUS);
+	uasm_i_mfc0(&p, GPR_V0, C0_STATUS);
 
-	uasm_i_lui(&p, AT, ST0_BEV >> 16);
-	uasm_i_or(&p, K0, V0, AT);
+	uasm_i_lui(&p, GPR_AT, ST0_BEV >> 16);
+	uasm_i_or(&p, GPR_K0, GPR_V0, GPR_AT);
 
-	uasm_i_mtc0(&p, K0, C0_STATUS);
+	uasm_i_mtc0(&p, GPR_K0, C0_STATUS);
 	uasm_i_ehb(&p);
 
-	UASM_i_LA_mostly(&p, K0, (long)&ebase);
-	UASM_i_LW(&p, K0, uasm_rel_lo((long)&ebase), K0);
-	build_set_exc_base(&p, K0);
+	UASM_i_LA_mostly(&p, GPR_K0, (long)&ebase);
+	UASM_i_LW(&p, GPR_K0, uasm_rel_lo((long)&ebase), GPR_K0);
+	build_set_exc_base(&p, GPR_K0);
 
 	if (raw_cpu_has_fpu) {
 		/*
 		 * If FPU is enabled, save FCR31 and clear it so that later
 		 * ctc1's don't trigger FPE for pending exceptions.
 		 */
-		uasm_i_lui(&p, AT, ST0_CU1 >> 16);
-		uasm_i_and(&p, V1, V0, AT);
-		uasm_il_beqz(&p, &r, V1, label_fpu_1);
+		uasm_i_lui(&p, GPR_AT, ST0_CU1 >> 16);
+		uasm_i_and(&p, GPR_V1, GPR_V0, GPR_AT);
+		uasm_il_beqz(&p, &r, GPR_V1, label_fpu_1);
 		 uasm_i_nop(&p);
-		uasm_i_cfc1(&p, T0, 31);
-		uasm_i_sw(&p, T0, offsetof(struct kvm_vcpu_arch, fpu.fcr31),
-			  K1);
-		uasm_i_ctc1(&p, ZERO, 31);
+		uasm_i_cfc1(&p, GPR_T0, 31);
+		uasm_i_sw(&p, GPR_T0, offsetof(struct kvm_vcpu_arch, fpu.fcr31),
+			  GPR_K1);
+		uasm_i_ctc1(&p, GPR_ZERO, 31);
 		uasm_l_fpu_1(&l, p);
 	}
 
@@ -649,22 +619,22 @@ void *kvm_mips_build_exit(void *addr)
 		 * If MSA is enabled, save MSACSR and clear it so that later
 		 * instructions don't trigger MSAFPE for pending exceptions.
 		 */
-		uasm_i_mfc0(&p, T0, C0_CONFIG5);
-		uasm_i_ext(&p, T0, T0, 27, 1); /* MIPS_CONF5_MSAEN */
-		uasm_il_beqz(&p, &r, T0, label_msa_1);
+		uasm_i_mfc0(&p, GPR_T0, C0_CONFIG5);
+		uasm_i_ext(&p, GPR_T0, GPR_T0, 27, 1); /* MIPS_CONF5_MSAEN */
+		uasm_il_beqz(&p, &r, GPR_T0, label_msa_1);
 		uasm_i_nop(&p);
-		uasm_i_cfcmsa(&p, T0, MSA_CSR);
-		uasm_i_sw(&p, T0, offsetof(struct kvm_vcpu_arch, fpu.msacsr),
-			  K1);
-		uasm_i_ctcmsa(&p, MSA_CSR, ZERO);
+		uasm_i_cfcmsa(&p, GPR_T0, MSA_CSR);
+		uasm_i_sw(&p, GPR_T0, offsetof(struct kvm_vcpu_arch, fpu.msacsr),
+			  GPR_K1);
+		uasm_i_ctcmsa(&p, MSA_CSR, GPR_ZERO);
 		uasm_l_msa_1(&l, p);
 	}
 
 	/* Restore host ASID */
 	if (!cpu_has_guestid) {
-		UASM_i_LW(&p, K0, offsetof(struct kvm_vcpu_arch, host_entryhi),
-			  K1);
-		UASM_i_MTC0(&p, K0, C0_ENTRYHI);
+		UASM_i_LW(&p, GPR_K0, offsetof(struct kvm_vcpu_arch, host_entryhi),
+			  GPR_K1);
+		UASM_i_MTC0(&p, GPR_K0, C0_ENTRYHI);
 	}
 
 	/*
@@ -673,56 +643,56 @@ void *kvm_mips_build_exit(void *addr)
 	 * - call tlbmiss_handler_setup_pgd(mm->pgd)
 	 * - write mm->pgd into CP0_PWBase
 	 */
-	UASM_i_LW(&p, A0,
-		  offsetof(struct kvm_vcpu_arch, host_pgd), K1);
-	UASM_i_LA(&p, T9, (unsigned long)tlbmiss_handler_setup_pgd);
-	uasm_i_jalr(&p, RA, T9);
+	UASM_i_LW(&p, GPR_A0,
+		  offsetof(struct kvm_vcpu_arch, host_pgd), GPR_K1);
+	UASM_i_LA(&p, GPR_T9, (unsigned long)tlbmiss_handler_setup_pgd);
+	uasm_i_jalr(&p, GPR_RA, GPR_T9);
 	/* delay slot */
 	if (cpu_has_htw)
-		UASM_i_MTC0(&p, A0, C0_PWBASE);
+		UASM_i_MTC0(&p, GPR_A0, C0_PWBASE);
 	else
 		uasm_i_nop(&p);
 
 	/* Clear GM bit so we don't enter guest mode when EXL is cleared */
-	uasm_i_mfc0(&p, K0, C0_GUESTCTL0);
-	uasm_i_ins(&p, K0, ZERO, MIPS_GCTL0_GM_SHIFT, 1);
-	uasm_i_mtc0(&p, K0, C0_GUESTCTL0);
+	uasm_i_mfc0(&p, GPR_K0, C0_GUESTCTL0);
+	uasm_i_ins(&p, GPR_K0, GPR_ZERO, MIPS_GCTL0_GM_SHIFT, 1);
+	uasm_i_mtc0(&p, GPR_K0, C0_GUESTCTL0);
 
 	/* Save GuestCtl0 so we can access GExcCode after CPU migration */
-	uasm_i_sw(&p, K0,
-		  offsetof(struct kvm_vcpu_arch, host_cp0_guestctl0), K1);
+	uasm_i_sw(&p, GPR_K0,
+		  offsetof(struct kvm_vcpu_arch, host_cp0_guestctl0), GPR_K1);
 
 	if (cpu_has_guestid) {
 		/*
 		 * Clear root mode GuestID, so that root TLB operations use the
 		 * root GuestID in the root TLB.
 		 */
-		uasm_i_mfc0(&p, T0, C0_GUESTCTL1);
+		uasm_i_mfc0(&p, GPR_T0, C0_GUESTCTL1);
 		/* Set GuestCtl1.RID = MIPS_GCTL1_ROOT_GUESTID (i.e. 0) */
-		uasm_i_ins(&p, T0, ZERO, MIPS_GCTL1_RID_SHIFT,
+		uasm_i_ins(&p, GPR_T0, GPR_ZERO, MIPS_GCTL1_RID_SHIFT,
 			   MIPS_GCTL1_RID_WIDTH);
-		uasm_i_mtc0(&p, T0, C0_GUESTCTL1);
+		uasm_i_mtc0(&p, GPR_T0, C0_GUESTCTL1);
 	}
 
 	/* Now that the new EBASE has been loaded, unset BEV and KSU_USER */
-	uasm_i_addiu(&p, AT, ZERO, ~(ST0_EXL | KSU_USER | ST0_IE));
-	uasm_i_and(&p, V0, V0, AT);
-	uasm_i_lui(&p, AT, ST0_CU0 >> 16);
-	uasm_i_or(&p, V0, V0, AT);
+	uasm_i_addiu(&p, GPR_AT, GPR_ZERO, ~(ST0_EXL | KSU_USER | ST0_IE));
+	uasm_i_and(&p, GPR_V0, GPR_V0, GPR_AT);
+	uasm_i_lui(&p, GPR_AT, ST0_CU0 >> 16);
+	uasm_i_or(&p, GPR_V0, GPR_V0, GPR_AT);
 #ifdef CONFIG_64BIT
-	uasm_i_ori(&p, V0, V0, ST0_SX | ST0_UX);
+	uasm_i_ori(&p, GPR_V0, GPR_V0, ST0_SX | ST0_UX);
 #endif
-	uasm_i_mtc0(&p, V0, C0_STATUS);
+	uasm_i_mtc0(&p, GPR_V0, C0_STATUS);
 	uasm_i_ehb(&p);
 
-	/* Load up host GP */
-	UASM_i_LW(&p, GP, offsetof(struct kvm_vcpu_arch, host_gp), K1);
+	/* Load up host GPR_GP */
+	UASM_i_LW(&p, GPR_GP, offsetof(struct kvm_vcpu_arch, host_gp), GPR_K1);
 
 	/* Need a stack before we can jump to "C" */
-	UASM_i_LW(&p, SP, offsetof(struct kvm_vcpu_arch, host_stack), K1);
+	UASM_i_LW(&p, GPR_SP, offsetof(struct kvm_vcpu_arch, host_stack), GPR_K1);
 
 	/* Saved host state */
-	UASM_i_ADDIU(&p, SP, SP, -(int)sizeof(struct pt_regs));
+	UASM_i_ADDIU(&p, GPR_SP, GPR_SP, -(int)sizeof(struct pt_regs));
 
 	/*
 	 * XXXKYMA do we need to load the host ASID, maybe not because the
@@ -730,12 +700,12 @@ void *kvm_mips_build_exit(void *addr)
 	 */
 
 	/* Restore host scratch registers, as we'll have clobbered them */
-	kvm_mips_build_restore_scratch(&p, K0, SP);
+	kvm_mips_build_restore_scratch(&p, GPR_K0, GPR_SP);
 
 	/* Restore RDHWR access */
-	UASM_i_LA_mostly(&p, K0, (long)&hwrena);
-	uasm_i_lw(&p, K0, uasm_rel_lo((long)&hwrena), K0);
-	uasm_i_mtc0(&p, K0, C0_HWRENA);
+	UASM_i_LA_mostly(&p, GPR_K0, (long)&hwrena);
+	uasm_i_lw(&p, GPR_K0, uasm_rel_lo((long)&hwrena), GPR_K0);
+	uasm_i_mtc0(&p, GPR_K0, C0_HWRENA);
 
 	/* Jump to handler */
 	/*
@@ -743,10 +713,10 @@ void *kvm_mips_build_exit(void *addr)
 	 * Now jump to the kvm_mips_handle_exit() to see if we can deal
 	 * with this in the kernel
 	 */
-	uasm_i_move(&p, A0, S0);
-	UASM_i_LA(&p, T9, (unsigned long)kvm_mips_handle_exit);
-	uasm_i_jalr(&p, RA, T9);
-	UASM_i_ADDIU(&p, SP, SP, -CALLFRAME_SIZ);
+	uasm_i_move(&p, GPR_A0, GPR_S0);
+	UASM_i_LA(&p, GPR_T9, (unsigned long)kvm_mips_handle_exit);
+	uasm_i_jalr(&p, GPR_RA, GPR_T9);
+	UASM_i_ADDIU(&p, GPR_SP, GPR_SP, -CALLFRAME_SIZ);
 
 	uasm_resolve_relocs(relocs, labels);
@@ -776,7 +746,7 @@ static void *kvm_mips_build_ret_from_exit(void *addr)
 	memset(relocs, 0, sizeof(relocs));
 
 	/* Return from handler Make sure interrupts are disabled */
-	uasm_i_di(&p, ZERO);
+	uasm_i_di(&p, GPR_ZERO);
 	uasm_i_ehb(&p);
 
 	/*
@@ -785,15 +755,15 @@ static void *kvm_mips_build_ret_from_exit(void *addr)
 	 * guest, reload k1
 	 */
 
-	uasm_i_move(&p, K1, S0);
-	UASM_i_ADDIU(&p, K1, K1, offsetof(struct kvm_vcpu, arch));
+	uasm_i_move(&p, GPR_K1, GPR_S0);
+	UASM_i_ADDIU(&p, GPR_K1, GPR_K1, offsetof(struct kvm_vcpu, arch));
 
 	/*
 	 * Check return value, should tell us if we are returning to the
 	 * host (handle I/O etc)or resuming the guest
 	 */
-	uasm_i_andi(&p, T0, V0, RESUME_HOST);
-	uasm_il_bnez(&p, &r, T0, label_return_to_host);
+	uasm_i_andi(&p, GPR_T0, GPR_V0, RESUME_HOST);
+	uasm_il_bnez(&p, &r, GPR_T0, label_return_to_host);
 	uasm_i_nop(&p);
 
 	p = kvm_mips_build_ret_to_guest(p);
@@ -820,24 +790,24 @@ static void *kvm_mips_build_ret_to_guest(void *addr)
 	u32 *p = addr;
 
 	/* Put the saved pointer to vcpu (s0) back into the scratch register */
-	UASM_i_MTC0(&p, S0, scratch_vcpu[0], scratch_vcpu[1]);
+	UASM_i_MTC0(&p, GPR_S0, scratch_vcpu[0], scratch_vcpu[1]);
 
 	/* Load up the Guest EBASE to minimize the window where BEV is set */
-	UASM_i_LW(&p, T0, offsetof(struct kvm_vcpu_arch, guest_ebase), K1);
+	UASM_i_LW(&p, GPR_T0, offsetof(struct kvm_vcpu_arch, guest_ebase), GPR_K1);
 
 	/* Switch EBASE back to the one used by KVM */
-	uasm_i_mfc0(&p, V1, C0_STATUS);
-	uasm_i_lui(&p, AT, ST0_BEV >> 16);
-	uasm_i_or(&p, K0, V1, AT);
-	uasm_i_mtc0(&p, K0, C0_STATUS);
+	uasm_i_mfc0(&p, GPR_V1, C0_STATUS);
+	uasm_i_lui(&p, GPR_AT, ST0_BEV >> 16);
+	uasm_i_or(&p, GPR_K0, GPR_V1, GPR_AT);
+	uasm_i_mtc0(&p, GPR_K0, C0_STATUS);
 	uasm_i_ehb(&p);
-	build_set_exc_base(&p, T0);
+	build_set_exc_base(&p, GPR_T0);
 
 	/* Setup status register for running guest in UM */
-	uasm_i_ori(&p, V1, V1, ST0_EXL | KSU_USER | ST0_IE);
-	UASM_i_LA(&p, AT, ~(ST0_CU0 | ST0_MX | ST0_SX | ST0_UX));
-	uasm_i_and(&p, V1, V1, AT);
-	uasm_i_mtc0(&p, V1, C0_STATUS);
+	uasm_i_ori(&p, GPR_V1, GPR_V1, ST0_EXL | KSU_USER | ST0_IE);
+	UASM_i_LA(&p, GPR_AT, ~(ST0_CU0 | ST0_MX | ST0_SX | ST0_UX));
+	uasm_i_and(&p, GPR_V1, GPR_V1, GPR_AT);
+	uasm_i_mtc0(&p, GPR_V1, C0_STATUS);
 	uasm_i_ehb(&p);
 
 	p = kvm_mips_build_enter_guest(p);
@@ -861,31 +831,31 @@ static void *kvm_mips_build_ret_to_host(void *addr)
 	unsigned int i;
 
 	/* EBASE is already pointing to Linux */
-	UASM_i_LW(&p, K1, offsetof(struct kvm_vcpu_arch, host_stack), K1);
-	UASM_i_ADDIU(&p, K1, K1, -(int)sizeof(struct pt_regs));
+	UASM_i_LW(&p, GPR_K1, offsetof(struct kvm_vcpu_arch, host_stack), GPR_K1);
+	UASM_i_ADDIU(&p, GPR_K1, GPR_K1, -(int)sizeof(struct pt_regs));
 
 	/*
 	 * r2/v0 is the return code, shift it down by 2 (arithmetic)
 	 * to recover the err code
 	 */
-	uasm_i_sra(&p, K0, V0, 2);
-	uasm_i_move(&p, V0, K0);
+	uasm_i_sra(&p, GPR_K0, GPR_V0, 2);
+	uasm_i_move(&p, GPR_V0, GPR_K0);
 
 	/* Load context saved on the host stack */
 	for (i = 16; i < 31; ++i) {
 		if (i == 24)
 			i = 28;
-		UASM_i_LW(&p, i, offsetof(struct pt_regs, regs[i]), K1);
+		UASM_i_LW(&p, i, offsetof(struct pt_regs, regs[i]), GPR_K1);
 	}
 
 	/* Restore RDHWR access */
-	UASM_i_LA_mostly(&p, K0, (long)&hwrena);
-	uasm_i_lw(&p, K0, uasm_rel_lo((long)&hwrena), K0);
-	uasm_i_mtc0(&p, K0, C0_HWRENA);
+	UASM_i_LA_mostly(&p, GPR_K0, (long)&hwrena);
+	uasm_i_lw(&p, GPR_K0, uasm_rel_lo((long)&hwrena), GPR_K0);
+	uasm_i_mtc0(&p, GPR_K0, C0_HWRENA);
 
-	/* Restore RA, which is the address we will return to */
-	UASM_i_LW(&p, RA, offsetof(struct pt_regs, regs[RA]), K1);
-	uasm_i_jr(&p, RA);
+	/* Restore GPR_RA, which is the address we will return to */
+	UASM_i_LW(&p, GPR_RA, offsetof(struct pt_regs, regs[GPR_RA]), GPR_K1);
+	uasm_i_jr(&p, GPR_RA);
 	uasm_i_nop(&p);
 
 	return p;

From patchwork Fri Feb 9 18:07:54 2024
X-Patchwork-Submitter: Jiaxun Yang
X-Patchwork-Id: 13551699
From: Jiaxun Yang
Date: Fri, 09 Feb 2024 18:07:54 +0000
Subject: [PATCH 8/8] MIPS: pm-cps: Use GPR number macros
X-Mailing-List: linux-mips@vger.kernel.org
Message-Id: <20240209-regname-v1-8-2125efa016ef@flygoat.com>
References: <20240209-regname-v1-0-2125efa016ef@flygoat.com>
In-Reply-To: <20240209-regname-v1-0-2125efa016ef@flygoat.com>
To: Thomas Bogendoerfer
Cc: Gregory CLEMENT, linux-mips@vger.kernel.org, linux-kernel@vger.kernel.org, Jiaxun Yang
X-Mailer: b4 0.12.4
Use GPR number macros in uasm code generation parts to reduce code
duplication.

There is a functional change due to the difference in register symbolic
names between OABI and NABI, as the existing code only used definitions
from OABI. Code pieces were carefully inspected to ensure register usage
is safe on NABI as well. The register allocation of r_pcohctl is changed
from T7 to T8, since T7 is not available on NABI and only a caller-saved
scratch register is needed here.

Signed-off-by: Jiaxun Yang

---
 arch/mips/kernel/pm-cps.c | 134 ++++++++++++++++++++++------------------------
 1 file changed, 64 insertions(+), 70 deletions(-)

diff --git a/arch/mips/kernel/pm-cps.c b/arch/mips/kernel/pm-cps.c
index 9bf60d7d44d3..d09ca77e624d 100644
--- a/arch/mips/kernel/pm-cps.c
+++ b/arch/mips/kernel/pm-cps.c
@@ -18,6 +18,7 @@
 #include
 #include
 #include
+#include
 #include
 #include
@@ -69,13 +70,6 @@ DEFINE_PER_CPU_ALIGNED(struct mips_static_suspend_state, cps_cpu_state);
 static struct uasm_label labels[32];
 static struct uasm_reloc relocs[32];
 
-enum mips_reg {
-	zero, at, v0, v1, a0, a1, a2, a3,
-	t0, t1, t2, t3, t4, t5, t6, t7,
-	s0, s1, s2, s3, s4, s5, s6, s7,
-	t8, t9, k0, k1, gp, sp, fp, ra,
-};
-
 bool cps_pm_support_state(enum cps_pm_state state)
 {
 	return test_bit(state, state_support);
@@ -203,13 +197,13 @@ static void cps_gen_cache_routine(u32 **pp, struct uasm_label **pl,
 		return;
 
 	/* Load base address */
-	UASM_i_LA(pp, t0, (long)CKSEG0);
+	UASM_i_LA(pp, GPR_T0, (long)CKSEG0);
 
 	/* Calculate end address */
 	if (cache_size < 0x8000)
-		uasm_i_addiu(pp, t1, t0, cache_size);
+		uasm_i_addiu(pp, GPR_T1, GPR_T0, cache_size);
 	else
-		UASM_i_LA(pp, t1, (long)(CKSEG0 + cache_size));
+		UASM_i_LA(pp, GPR_T1, (long)(CKSEG0 + cache_size));
 
 	/* Start of cache op loop */
 	uasm_build_label(pl, *pp, lbl);
@@ -217,19 +211,19 @@ static void cps_gen_cache_routine(u32 **pp, struct uasm_label **pl,
 	/* Generate the cache ops */
 	for (i = 0; i < unroll_lines; i++) {
 		if (cpu_has_mips_r6) {
-			uasm_i_cache(pp, op, 0, t0);
-			uasm_i_addiu(pp, t0, t0, cache->linesz);
+			uasm_i_cache(pp, op, 0, GPR_T0);
+			uasm_i_addiu(pp, GPR_T0, GPR_T0, cache->linesz);
 		} else {
-			uasm_i_cache(pp, op, i * cache->linesz, t0);
+			uasm_i_cache(pp, op, i * cache->linesz, GPR_T0);
 		}
 	}
 
 	if (!cpu_has_mips_r6)
 		/* Update the base address */
-		uasm_i_addiu(pp, t0, t0, unroll_lines * cache->linesz);
+		uasm_i_addiu(pp, GPR_T0, GPR_T0, unroll_lines * cache->linesz);
 
 	/* Loop if we haven't reached the end address yet */
-	uasm_il_bne(pp, pr, t0, t1, lbl);
+	uasm_il_bne(pp, pr, GPR_T0, GPR_T1, lbl);
 	uasm_i_nop(pp);
 }
 
@@ -275,25 +269,25 @@ static int cps_gen_flush_fsb(u32 **pp, struct uasm_label **pl,
 	 */
 
 	/* Preserve perf counter setup */
-	uasm_i_mfc0(pp, t2, 25, (perf_counter * 2) + 0);	/* PerfCtlN */
-	uasm_i_mfc0(pp, t3, 25, (perf_counter * 2) + 1);	/* PerfCntN */
+	uasm_i_mfc0(pp, GPR_T2, 25, (perf_counter * 2) + 0);	/* PerfCtlN */
+	uasm_i_mfc0(pp, GPR_T3, 25, (perf_counter * 2) + 1);	/* PerfCntN */
 
 	/* Setup perf counter to count FSB full pipeline stalls */
-	uasm_i_addiu(pp, t0, zero, (perf_event << 5) | 0xf);
-	uasm_i_mtc0(pp, t0, 25, (perf_counter * 2) + 0);	/* PerfCtlN */
+	uasm_i_addiu(pp, GPR_T0, GPR_ZERO, (perf_event << 5) | 0xf);
+	uasm_i_mtc0(pp, GPR_T0, 25, (perf_counter * 2) + 0);	/* PerfCtlN */
 	uasm_i_ehb(pp);
-	uasm_i_mtc0(pp, zero, 25, (perf_counter * 2) + 1);	/* PerfCntN */
+	uasm_i_mtc0(pp, GPR_ZERO, 25, (perf_counter * 2) + 1);	/* PerfCntN */
 	uasm_i_ehb(pp);
 
 	/* Base address for loads */
-	UASM_i_LA(pp, t0, (long)CKSEG0);
+	UASM_i_LA(pp, GPR_T0, (long)CKSEG0);
 
 	/* Start of clear loop */
 	uasm_build_label(pl, *pp, lbl);
 
 	/* Perform some loads to fill the FSB */
 	for (i = 0; i < num_loads; i++)
-		uasm_i_lw(pp, zero, i * line_size * line_stride, t0);
+		uasm_i_lw(pp, GPR_ZERO, i * line_size * line_stride, GPR_T0);
 
 	/*
 	 * Invalidate the new D-cache entries so that the cache will need
@@ -301,9 +295,9 @@ static int cps_gen_flush_fsb(u32 **pp, struct uasm_label **pl,
 	 */
 	for (i = 0; i < num_loads; i++) {
 		uasm_i_cache(pp, Hit_Invalidate_D,
-			     i * line_size * line_stride, t0);
+			     i * line_size * line_stride, GPR_T0);
 		uasm_i_cache(pp, Hit_Writeback_Inv_SD,
-			     i * line_size * line_stride, t0);
+			     i * line_size * line_stride, GPR_T0);
 	}
 
 	/* Barrier ensuring previous cache invalidates are complete */
@@ -311,16 +305,16 @@ static int cps_gen_flush_fsb(u32 **pp, struct uasm_label **pl,
 	uasm_i_ehb(pp);
 
 	/* Check whether the pipeline stalled due to the FSB being full */
-	uasm_i_mfc0(pp, t1, 25, (perf_counter * 2) + 1);	/* PerfCntN */
+	uasm_i_mfc0(pp, GPR_T1, 25, (perf_counter * 2) + 1);	/* PerfCntN */
 
 	/* Loop if it didn't */
-	uasm_il_beqz(pp, pr, t1, lbl);
+	uasm_il_beqz(pp, pr, GPR_T1, lbl);
 	uasm_i_nop(pp);
 
 	/* Restore perf counter 1. The count may well now be wrong... */
-	uasm_i_mtc0(pp, t2, 25, (perf_counter * 2) + 0);	/* PerfCtlN */
+	uasm_i_mtc0(pp, GPR_T2, 25, (perf_counter * 2) + 0);	/* PerfCtlN */
 	uasm_i_ehb(pp);
-	uasm_i_mtc0(pp, t3, 25, (perf_counter * 2) + 1);	/* PerfCntN */
+	uasm_i_mtc0(pp, GPR_T3, 25, (perf_counter * 2) + 1);	/* PerfCntN */
 	uasm_i_ehb(pp);
 
 	return 0;
@@ -330,12 +324,12 @@ static void cps_gen_set_top_bit(u32 **pp, struct uasm_label **pl,
 				struct uasm_reloc **pr,
 				unsigned r_addr, int lbl)
 {
-	uasm_i_lui(pp, t0, uasm_rel_hi(0x80000000));
+	uasm_i_lui(pp, GPR_T0, uasm_rel_hi(0x80000000));
 	uasm_build_label(pl, *pp, lbl);
-	uasm_i_ll(pp, t1, 0, r_addr);
-	uasm_i_or(pp, t1, t1, t0);
-	uasm_i_sc(pp, t1, 0, r_addr);
-	uasm_il_beqz(pp, pr, t1, lbl);
+	uasm_i_ll(pp, GPR_T1, 0, r_addr);
+	uasm_i_or(pp, GPR_T1, GPR_T1, GPR_T0);
+	uasm_i_sc(pp, GPR_T1, 0, r_addr);
+	uasm_il_beqz(pp, pr, GPR_T1, lbl);
 	uasm_i_nop(pp);
 }
 
@@ -344,9 +338,9 @@ static void *cps_gen_entry_code(unsigned cpu, enum cps_pm_state state)
 	struct uasm_label *l = labels;
 	struct uasm_reloc *r = relocs;
 	u32 *buf, *p;
-	const unsigned r_online = a0;
-	const unsigned r_nc_count = a1;
-	const unsigned r_pcohctl = t7;
+	const unsigned r_online = GPR_A0;
+	const unsigned r_nc_count = GPR_A1;
+	const unsigned r_pcohctl = GPR_T8;
 	const unsigned max_instrs = 256;
 	unsigned cpc_cmd;
 	int err;
@@ -383,8 +377,8 @@ static void *cps_gen_entry_code(unsigned cpu, enum cps_pm_state state)
 		 * with the return address placed in v0 to avoid clobbering
 		 * the ra register before it is saved.
 		 */
-		UASM_i_LA(&p, t0, (long)mips_cps_pm_save);
-		uasm_i_jalr(&p, v0, t0);
+		UASM_i_LA(&p, GPR_T0, (long)mips_cps_pm_save);
+		uasm_i_jalr(&p, GPR_V0, GPR_T0);
 		uasm_i_nop(&p);
 	}
 
@@ -399,11 +393,11 @@ static void *cps_gen_entry_code(unsigned cpu, enum cps_pm_state state)
 	/* Increment ready_count */
 	uasm_i_sync(&p, __SYNC_mb);
 	uasm_build_label(&l, p, lbl_incready);
-	uasm_i_ll(&p, t1, 0, r_nc_count);
-	uasm_i_addiu(&p, t2, t1, 1);
-	uasm_i_sc(&p, t2, 0, r_nc_count);
-	uasm_il_beqz(&p, &r, t2, lbl_incready);
-	uasm_i_addiu(&p, t1, t1, 1);
+	uasm_i_ll(&p, GPR_T1, 0, r_nc_count);
+	uasm_i_addiu(&p, GPR_T2, GPR_T1, 1);
+	uasm_i_sc(&p, GPR_T2, 0, r_nc_count);
+	uasm_il_beqz(&p, &r, GPR_T2, lbl_incready);
+	uasm_i_addiu(&p, GPR_T1, GPR_T1, 1);
 
 	/* Barrier ensuring all CPUs see the updated r_nc_count value */
 	uasm_i_sync(&p, __SYNC_mb);
@@ -412,7 +406,7 @@ static void *cps_gen_entry_code(unsigned cpu, enum cps_pm_state state)
 	 * If this is the last VPE to become ready for non-coherence
 	 * then it should branch below.
 	 */
-	uasm_il_beq(&p, &r, t1, r_online, lbl_disable_coherence);
+	uasm_il_beq(&p, &r, GPR_T1, r_online, lbl_disable_coherence);
 	uasm_i_nop(&p);
 
 	if (state < CPS_PM_POWER_GATED) {
@@ -422,13 +416,13 @@ static void *cps_gen_entry_code(unsigned cpu, enum cps_pm_state state)
 		 * has been disabled before proceeding, which it will do
 		 * by polling for the top bit of ready_count being set.
 		 */
-		uasm_i_addiu(&p, t1, zero, -1);
+		uasm_i_addiu(&p, GPR_T1, GPR_ZERO, -1);
 		uasm_build_label(&l, p, lbl_poll_cont);
-		uasm_i_lw(&p, t0, 0, r_nc_count);
-		uasm_il_bltz(&p, &r, t0, lbl_secondary_cont);
+		uasm_i_lw(&p, GPR_T0, 0, r_nc_count);
+		uasm_il_bltz(&p, &r, GPR_T0, lbl_secondary_cont);
 		uasm_i_ehb(&p);
 		if (cpu_has_mipsmt)
-			uasm_i_yield(&p, zero, t1);
+			uasm_i_yield(&p, GPR_ZERO, GPR_T1);
 		uasm_il_b(&p, &r, lbl_poll_cont);
 		uasm_i_nop(&p);
 	} else {
@@ -438,16 +432,16 @@ static void *cps_gen_entry_code(unsigned cpu, enum cps_pm_state state)
 		 */
 		if (cpu_has_mipsmt) {
 			/* Halt the VPE via C0 tchalt register */
-			uasm_i_addiu(&p, t0, zero, TCHALT_H);
-			uasm_i_mtc0(&p, t0, 2, 4);
+			uasm_i_addiu(&p, GPR_T0, GPR_ZERO, TCHALT_H);
+			uasm_i_mtc0(&p, GPR_T0, 2, 4);
 		} else if (cpu_has_vp) {
 			/* Halt the VP via the CPC VP_STOP register */
 			unsigned int vpe_id;
 
 			vpe_id = cpu_vpe_id(&cpu_data[cpu]);
-			uasm_i_addiu(&p, t0, zero, 1 << vpe_id);
-			UASM_i_LA(&p, t1, (long)addr_cpc_cl_vp_stop());
-			uasm_i_sw(&p, t0, 0, t1);
+			uasm_i_addiu(&p, GPR_T0, GPR_ZERO, 1 << vpe_id);
+			UASM_i_LA(&p, GPR_T1, (long)addr_cpc_cl_vp_stop());
+			uasm_i_sw(&p, GPR_T0, 0, GPR_T1);
 		} else {
 			BUG();
 		}
@@ -482,9 +476,9 @@ static void *cps_gen_entry_code(unsigned cpu, enum cps_pm_state state)
 	 * defined by the interAptiv & proAptiv SUMs as ensuring that the
 	 * operation resulting from the preceding store is complete.
 	 */
-	uasm_i_addiu(&p, t0, zero, 1 << cpu_core(&cpu_data[cpu]));
-	uasm_i_sw(&p, t0, 0, r_pcohctl);
-	uasm_i_lw(&p, t0, 0, r_pcohctl);
+	uasm_i_addiu(&p, GPR_T0, GPR_ZERO, 1 << cpu_core(&cpu_data[cpu]));
+	uasm_i_sw(&p, GPR_T0, 0, r_pcohctl);
+	uasm_i_lw(&p, GPR_T0, 0, r_pcohctl);
 
 	/* Barrier to ensure write to coherence control is complete */
 	uasm_i_sync(&p, __SYNC_full);
@@ -492,8 +486,8 @@ static void *cps_gen_entry_code(unsigned cpu, enum cps_pm_state state)
 	}
 
 	/* Disable coherence */
-	uasm_i_sw(&p, zero, 0, r_pcohctl);
-	uasm_i_lw(&p, t0, 0, r_pcohctl);
+	uasm_i_sw(&p, GPR_ZERO, 0, r_pcohctl);
+	uasm_i_lw(&p, GPR_T0, 0, r_pcohctl);
 
 	if (state >= CPS_PM_CLOCK_GATED) {
 		err = cps_gen_flush_fsb(&p, &l, &r, &cpu_data[cpu],
@@ -515,9 +509,9 @@ static void *cps_gen_entry_code(unsigned cpu, enum cps_pm_state state)
 		}
 
 		/* Issue the CPC command */
-		UASM_i_LA(&p, t0, (long)addr_cpc_cl_cmd());
-		uasm_i_addiu(&p, t1, zero, cpc_cmd);
-		uasm_i_sw(&p, t1, 0, t0);
+		UASM_i_LA(&p, GPR_T0, (long)addr_cpc_cl_cmd());
+		uasm_i_addiu(&p, GPR_T1, GPR_ZERO, cpc_cmd);
+		uasm_i_sw(&p, GPR_T1, 0, GPR_T0);
 
 		if (state == CPS_PM_POWER_GATED) {
 			/* If anything goes wrong just hang */
@@ -564,12 +558,12 @@ static void *cps_gen_entry_code(unsigned cpu, enum cps_pm_state state)
 	 * will run this. The first will actually re-enable coherence & the
 	 * rest will just be performing a rather unusual nop.
 	 */
-	uasm_i_addiu(&p, t0, zero, mips_cm_revision() < CM_REV_CM3
+	uasm_i_addiu(&p, GPR_T0, GPR_ZERO, mips_cm_revision() < CM_REV_CM3
 				? CM_GCR_Cx_COHERENCE_COHDOMAINEN
 				: CM3_GCR_Cx_COHERENCE_COHEN);
-	uasm_i_sw(&p, t0, 0, r_pcohctl);
-	uasm_i_lw(&p, t0, 0, r_pcohctl);
+	uasm_i_sw(&p, GPR_T0, 0, r_pcohctl);
+	uasm_i_lw(&p, GPR_T0, 0, r_pcohctl);
 
 	/* Barrier to ensure write to coherence control is complete */
 	uasm_i_sync(&p, __SYNC_full);
@@ -579,11 +573,11 @@ static void *cps_gen_entry_code(unsigned cpu, enum cps_pm_state state)
 	/* Decrement ready_count */
 	uasm_build_label(&l, p, lbl_decready);
 	uasm_i_sync(&p, __SYNC_mb);
-	uasm_i_ll(&p, t1, 0, r_nc_count);
-	uasm_i_addiu(&p, t2, t1, -1);
-	uasm_i_sc(&p, t2, 0, r_nc_count);
-	uasm_il_beqz(&p, &r, t2, lbl_decready);
-	uasm_i_andi(&p, v0, t1, (1 << fls(smp_num_siblings)) - 1);
+	uasm_i_ll(&p, GPR_T1, 0, r_nc_count);
+	uasm_i_addiu(&p, GPR_T2, GPR_T1, -1);
+	uasm_i_sc(&p, GPR_T2, 0, r_nc_count);
+	uasm_il_beqz(&p, &r, GPR_T2, lbl_decready);
+	uasm_i_andi(&p, GPR_V0, GPR_T1, (1 << fls(smp_num_siblings)) - 1);
 
 	/* Barrier ensuring all CPUs see the updated r_nc_count value */
 	uasm_i_sync(&p, __SYNC_mb);
@@ -612,7 +606,7 @@ static void *cps_gen_entry_code(unsigned cpu, enum cps_pm_state state)
 	}
 
 	/* The core is coherent, time to return to C code */
-	uasm_i_jr(&p, ra);
+	uasm_i_jr(&p, GPR_RA);
 	uasm_i_nop(&p);
 
 gen_done: