From patchwork Tue Jun 21 07:26:29 2022
X-Patchwork-Submitter: Tong Tiangen
X-Patchwork-Id: 12888745
From: Tong Tiangen
To: Mark Rutland, James Morse, Andrew Morton, Thomas Gleixner, Ingo Molnar, Borislav Petkov, Robin Murphy, Dave Hansen, Catalin Marinas, Will Deacon, Alexander Viro, Michael Ellerman, Benjamin Herrenschmidt, Paul Mackerras, "H. Peter Anvin"
Cc: Kefeng Wang, Xie XiuQi, Guohanjun, Tong Tiangen
Subject: [PATCH -next v6 01/10] arm64: extable: add new extable type EX_TYPE_KACCESS_ERR_ZERO support
Date: Tue, 21 Jun 2022 07:26:29 +0000
Message-ID: <20220621072638.1273594-2-tongtiangen@huawei.com>
In-Reply-To: <20220621072638.1273594-1-tongtiangen@huawei.com>
References: <20220621072638.1273594-1-tongtiangen@huawei.com>

Currently, the extable type EX_TYPE_UACCESS_ERR_ZERO is used by __get/put_kernel_nofault(), but those helpers are not uaccess helpers, so add a new extable type EX_TYPE_KACCESS_ERR_ZERO which can be used by __get/put_kernel_nofault(). This also prepares for distinguishing the two types in the machine check safe handling.
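For context, both the existing EX_TYPE_UACCESS_ERR_ZERO entries and the new EX_TYPE_KACCESS_ERR_ZERO entries are resolved by the same ERR_ZERO handler, which writes -EFAULT into the recorded error register, zeroes the recorded destination register, and resumes at the fixup address. A simplified sketch of that handler, paraphrased from arch/arm64/mm/extable.c (not part of this patch):

static bool ex_handler_uaccess_err_zero(const struct exception_table_entry *ex,
					struct pt_regs *regs)
{
	int reg_err = FIELD_GET(EX_DATA_REG_ERR, ex->data);
	int reg_zero = FIELD_GET(EX_DATA_REG_ZERO, ex->data);

	pt_regs_write_reg(regs, reg_err, -EFAULT);	/* report -EFAULT via the err register */
	pt_regs_write_reg(regs, reg_zero, 0);		/* zero the destination register */

	regs->pc = get_ex_fixup(ex);			/* resume at the fixup label */
	return true;
}

Splitting the kernel-access users out into their own type leaves this behaviour unchanged; it only lets later patches treat uaccess and kaccess faults differently.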
Suggested-by: Mark Rutland Signed-off-by: Tong Tiangen Acked-by: Mark Rutland --- arch/arm64/include/asm/asm-extable.h | 15 ++++- arch/arm64/include/asm/uaccess.h | 94 ++++++++++++++-------------- arch/arm64/mm/extable.c | 1 + 3 files changed, 62 insertions(+), 48 deletions(-) diff --git a/arch/arm64/include/asm/asm-extable.h b/arch/arm64/include/asm/asm-extable.h index c39f2437e08e..1717fc4cfeb5 100644 --- a/arch/arm64/include/asm/asm-extable.h +++ b/arch/arm64/include/asm/asm-extable.h @@ -6,7 +6,8 @@ #define EX_TYPE_FIXUP 1 #define EX_TYPE_BPF 2 #define EX_TYPE_UACCESS_ERR_ZERO 3 -#define EX_TYPE_LOAD_UNALIGNED_ZEROPAD 4 +#define EX_TYPE_KACCESS_ERR_ZERO 4 +#define EX_TYPE_LOAD_UNALIGNED_ZEROPAD 5 #ifdef __ASSEMBLY__ @@ -73,9 +74,21 @@ EX_DATA_REG(ZERO, zero) \ ")") +#define _ASM_EXTABLE_KACCESS_ERR_ZERO(insn, fixup, err, zero) \ + __DEFINE_ASM_GPR_NUMS \ + __ASM_EXTABLE_RAW(#insn, #fixup, \ + __stringify(EX_TYPE_KACCESS_ERR_ZERO), \ + "(" \ + EX_DATA_REG(ERR, err) " | " \ + EX_DATA_REG(ZERO, zero) \ + ")") + #define _ASM_EXTABLE_UACCESS_ERR(insn, fixup, err) \ _ASM_EXTABLE_UACCESS_ERR_ZERO(insn, fixup, err, wzr) +#define _ASM_EXTABLE_KACCESS_ERR(insn, fixup, err) \ + _ASM_EXTABLE_KACCESS_ERR_ZERO(insn, fixup, err, wzr) + #define EX_DATA_REG_DATA_SHIFT 0 #define EX_DATA_REG_DATA GENMASK(4, 0) #define EX_DATA_REG_ADDR_SHIFT 5 diff --git a/arch/arm64/include/asm/uaccess.h b/arch/arm64/include/asm/uaccess.h index 63f9c828f1a7..2fc9f0861769 100644 --- a/arch/arm64/include/asm/uaccess.h +++ b/arch/arm64/include/asm/uaccess.h @@ -232,34 +232,34 @@ static inline void __user *__uaccess_mask_ptr(const void __user *ptr) * The "__xxx_error" versions set the third argument to -EFAULT if an error * occurs, and leave it unchanged on success. */ -#define __get_mem_asm(load, reg, x, addr, err) \ +#define __get_mem_asm(load, reg, x, addr, err, type) \ asm volatile( \ "1: " load " " reg "1, [%2]\n" \ "2:\n" \ - _ASM_EXTABLE_UACCESS_ERR_ZERO(1b, 2b, %w0, %w1) \ + _ASM_EXTABLE_##type##ACCESS_ERR_ZERO(1b, 2b, %w0, %w1) \ : "+r" (err), "=&r" (x) \ : "r" (addr)) -#define __raw_get_mem(ldr, x, ptr, err) \ -do { \ - unsigned long __gu_val; \ - switch (sizeof(*(ptr))) { \ - case 1: \ - __get_mem_asm(ldr "b", "%w", __gu_val, (ptr), (err)); \ - break; \ - case 2: \ - __get_mem_asm(ldr "h", "%w", __gu_val, (ptr), (err)); \ - break; \ - case 4: \ - __get_mem_asm(ldr, "%w", __gu_val, (ptr), (err)); \ - break; \ - case 8: \ - __get_mem_asm(ldr, "%x", __gu_val, (ptr), (err)); \ - break; \ - default: \ - BUILD_BUG(); \ - } \ - (x) = (__force __typeof__(*(ptr)))__gu_val; \ +#define __raw_get_mem(ldr, x, ptr, err, type) \ +do { \ + unsigned long __gu_val; \ + switch (sizeof(*(ptr))) { \ + case 1: \ + __get_mem_asm(ldr "b", "%w", __gu_val, (ptr), (err), type); \ + break; \ + case 2: \ + __get_mem_asm(ldr "h", "%w", __gu_val, (ptr), (err), type); \ + break; \ + case 4: \ + __get_mem_asm(ldr, "%w", __gu_val, (ptr), (err), type); \ + break; \ + case 8: \ + __get_mem_asm(ldr, "%x", __gu_val, (ptr), (err), type); \ + break; \ + default: \ + BUILD_BUG(); \ + } \ + (x) = (__force __typeof__(*(ptr)))__gu_val; \ } while (0) /* @@ -274,7 +274,7 @@ do { \ __chk_user_ptr(ptr); \ \ uaccess_ttbr0_enable(); \ - __raw_get_mem("ldtr", __rgu_val, __rgu_ptr, err); \ + __raw_get_mem("ldtr", __rgu_val, __rgu_ptr, err, U); \ uaccess_ttbr0_disable(); \ \ (x) = __rgu_val; \ @@ -314,40 +314,40 @@ do { \ \ __uaccess_enable_tco_async(); \ __raw_get_mem("ldr", *((type *)(__gkn_dst)), \ - (__force type *)(__gkn_src), __gkn_err); \ + (__force type 
*)(__gkn_src), __gkn_err, K); \ __uaccess_disable_tco_async(); \ \ if (unlikely(__gkn_err)) \ goto err_label; \ } while (0) -#define __put_mem_asm(store, reg, x, addr, err) \ +#define __put_mem_asm(store, reg, x, addr, err, type) \ asm volatile( \ "1: " store " " reg "1, [%2]\n" \ "2:\n" \ - _ASM_EXTABLE_UACCESS_ERR(1b, 2b, %w0) \ + _ASM_EXTABLE_##type##ACCESS_ERR(1b, 2b, %w0) \ : "+r" (err) \ : "r" (x), "r" (addr)) -#define __raw_put_mem(str, x, ptr, err) \ -do { \ - __typeof__(*(ptr)) __pu_val = (x); \ - switch (sizeof(*(ptr))) { \ - case 1: \ - __put_mem_asm(str "b", "%w", __pu_val, (ptr), (err)); \ - break; \ - case 2: \ - __put_mem_asm(str "h", "%w", __pu_val, (ptr), (err)); \ - break; \ - case 4: \ - __put_mem_asm(str, "%w", __pu_val, (ptr), (err)); \ - break; \ - case 8: \ - __put_mem_asm(str, "%x", __pu_val, (ptr), (err)); \ - break; \ - default: \ - BUILD_BUG(); \ - } \ +#define __raw_put_mem(str, x, ptr, err, type) \ +do { \ + __typeof__(*(ptr)) __pu_val = (x); \ + switch (sizeof(*(ptr))) { \ + case 1: \ + __put_mem_asm(str "b", "%w", __pu_val, (ptr), (err), type); \ + break; \ + case 2: \ + __put_mem_asm(str "h", "%w", __pu_val, (ptr), (err), type); \ + break; \ + case 4: \ + __put_mem_asm(str, "%w", __pu_val, (ptr), (err), type); \ + break; \ + case 8: \ + __put_mem_asm(str, "%x", __pu_val, (ptr), (err), type); \ + break; \ + default: \ + BUILD_BUG(); \ + } \ } while (0) /* @@ -362,7 +362,7 @@ do { \ __chk_user_ptr(__rpu_ptr); \ \ uaccess_ttbr0_enable(); \ - __raw_put_mem("sttr", __rpu_val, __rpu_ptr, err); \ + __raw_put_mem("sttr", __rpu_val, __rpu_ptr, err, U); \ uaccess_ttbr0_disable(); \ } while (0) @@ -400,7 +400,7 @@ do { \ \ __uaccess_enable_tco_async(); \ __raw_put_mem("str", *((type *)(__pkn_src)), \ - (__force type *)(__pkn_dst), __pkn_err); \ + (__force type *)(__pkn_dst), __pkn_err, K); \ __uaccess_disable_tco_async(); \ \ if (unlikely(__pkn_err)) \ diff --git a/arch/arm64/mm/extable.c b/arch/arm64/mm/extable.c index 489455309695..056591e5ca80 100644 --- a/arch/arm64/mm/extable.c +++ b/arch/arm64/mm/extable.c @@ -77,6 +77,7 @@ bool fixup_exception(struct pt_regs *regs) case EX_TYPE_BPF: return ex_handler_bpf(ex, regs); case EX_TYPE_UACCESS_ERR_ZERO: + case EX_TYPE_KACCESS_ERR_ZERO: return ex_handler_uaccess_err_zero(ex, regs); case EX_TYPE_LOAD_UNALIGNED_ZEROPAD: return ex_handler_load_unaligned_zeropad(ex, regs); From patchwork Tue Jun 21 07:26:30 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Tong Tiangen X-Patchwork-Id: 12888746 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by smtp.lore.kernel.org (Postfix) with ESMTP id D0BE1C433EF for ; Tue, 21 Jun 2022 07:27:11 +0000 (UTC) Received: by kanga.kvack.org (Postfix) id 6B6296B007B; Tue, 21 Jun 2022 03:27:11 -0400 (EDT) Received: by kanga.kvack.org (Postfix, from userid 40) id 63F406B007D; Tue, 21 Jun 2022 03:27:11 -0400 (EDT) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 4B7FF8E0002; Tue, 21 Jun 2022 03:27:11 -0400 (EDT) X-Delivered-To: linux-mm@kvack.org Received: from relay.hostedemail.com (smtprelay0015.hostedemail.com [216.40.44.15]) by kanga.kvack.org (Postfix) with ESMTP id 38B476B007B for ; Tue, 21 Jun 2022 03:27:11 -0400 (EDT) Received: from smtpin26.hostedemail.com (a10.router.float.18 [10.200.18.1]) by unirelay06.hostedemail.com (Postfix) 
From: Tong Tiangen
To: Mark Rutland, James Morse, Andrew Morton, Thomas Gleixner, Ingo Molnar, Borislav Petkov, Robin Murphy, Dave Hansen, Catalin Marinas, Will Deacon, Alexander Viro, Michael Ellerman, Benjamin Herrenschmidt, Paul Mackerras, "H. Peter Anvin"
Cc: Kefeng Wang, Xie XiuQi, Guohanjun, Tong Tiangen
Subject: [PATCH -next v6 02/10] arm64: asm-extable: move data fields
Date: Tue, 21 Jun 2022 07:26:30 +0000
Message-ID: <20220621072638.1273594-3-tongtiangen@huawei.com>
In-Reply-To: <20220621072638.1273594-1-tongtiangen@huawei.com>
References: <20220621072638.1273594-1-tongtiangen@huawei.com>

In subsequent patches we'll need to fill in extable data
fields in regular assembly files. In preparation for this, move the definitions of the extable data fields earlier in asm-extable.h so that they are defined for both assembly and C files. There should be no functional change as a result of this patch. Signed-off-by: Mark Rutland Signed-off-by: Tong Tiangen --- arch/arm64/include/asm/asm-extable.h | 22 ++++++++++++---------- 1 file changed, 12 insertions(+), 10 deletions(-) diff --git a/arch/arm64/include/asm/asm-extable.h b/arch/arm64/include/asm/asm-extable.h index 1717fc4cfeb5..204b30bf78b3 100644 --- a/arch/arm64/include/asm/asm-extable.h +++ b/arch/arm64/include/asm/asm-extable.h @@ -9,6 +9,18 @@ #define EX_TYPE_KACCESS_ERR_ZERO 4 #define EX_TYPE_LOAD_UNALIGNED_ZEROPAD 5 +/* Data fields for EX_TYPE_UACCESS_ERR_ZERO */ +#define EX_DATA_REG_ERR_SHIFT 0 +#define EX_DATA_REG_ERR GENMASK(4, 0) +#define EX_DATA_REG_ZERO_SHIFT 5 +#define EX_DATA_REG_ZERO GENMASK(9, 5) + +/* Data fields for EX_TYPE_LOAD_UNALIGNED_ZEROPAD */ +#define EX_DATA_REG_DATA_SHIFT 0 +#define EX_DATA_REG_DATA GENMASK(4, 0) +#define EX_DATA_REG_ADDR_SHIFT 5 +#define EX_DATA_REG_ADDR GENMASK(9, 5) + #ifdef __ASSEMBLY__ #define __ASM_EXTABLE_RAW(insn, fixup, type, data) \ @@ -57,11 +69,6 @@ #define _ASM_EXTABLE(insn, fixup) \ __ASM_EXTABLE_RAW(#insn, #fixup, __stringify(EX_TYPE_FIXUP), "0") -#define EX_DATA_REG_ERR_SHIFT 0 -#define EX_DATA_REG_ERR GENMASK(4, 0) -#define EX_DATA_REG_ZERO_SHIFT 5 -#define EX_DATA_REG_ZERO GENMASK(9, 5) - #define EX_DATA_REG(reg, gpr) \ "((.L__gpr_num_" #gpr ") << " __stringify(EX_DATA_REG_##reg##_SHIFT) ")" @@ -89,11 +96,6 @@ #define _ASM_EXTABLE_KACCESS_ERR(insn, fixup, err) \ _ASM_EXTABLE_KACCESS_ERR_ZERO(insn, fixup, err, wzr) -#define EX_DATA_REG_DATA_SHIFT 0 -#define EX_DATA_REG_DATA GENMASK(4, 0) -#define EX_DATA_REG_ADDR_SHIFT 5 -#define EX_DATA_REG_ADDR GENMASK(9, 5) - #define _ASM_EXTABLE_LOAD_UNALIGNED_ZEROPAD(insn, fixup, data, addr) \ __DEFINE_ASM_GPR_NUMS \ __ASM_EXTABLE_RAW(#insn, #fixup, \ From patchwork Tue Jun 21 07:26:31 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Tong Tiangen X-Patchwork-Id: 12888749 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by smtp.lore.kernel.org (Postfix) with ESMTP id 52192C43334 for ; Tue, 21 Jun 2022 07:27:34 +0000 (UTC) Received: by kanga.kvack.org (Postfix) id E310B6B0080; Tue, 21 Jun 2022 03:27:33 -0400 (EDT) Received: by kanga.kvack.org (Postfix, from userid 40) id DB8F86B0081; Tue, 21 Jun 2022 03:27:33 -0400 (EDT) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id A35C38E0002; Tue, 21 Jun 2022 03:27:33 -0400 (EDT) X-Delivered-To: linux-mm@kvack.org Received: from relay.hostedemail.com (smtprelay0016.hostedemail.com [216.40.44.16]) by kanga.kvack.org (Postfix) with ESMTP id 89EB56B0080 for ; Tue, 21 Jun 2022 03:27:33 -0400 (EDT) Received: from smtpin20.hostedemail.com (a10.router.float.18 [10.200.18.1]) by unirelay02.hostedemail.com (Postfix) with ESMTP id 5B4C133A05 for ; Tue, 21 Jun 2022 07:27:33 +0000 (UTC) X-FDA: 79601412786.20.32FC289 Received: from szxga01-in.huawei.com (szxga01-in.huawei.com [45.249.212.187]) by imf08.hostedemail.com (Postfix) with ESMTP id 75DCC1600A8 for ; Tue, 21 Jun 2022 07:27:32 +0000 (UTC) Received: from dggemv703-chm.china.huawei.com (unknown [172.30.72.56]) by szxga01-in.huawei.com (SkyGuard) with 
From: Tong Tiangen
To: Mark Rutland, James Morse, Andrew Morton, Thomas Gleixner, Ingo Molnar, Borislav Petkov, Robin Murphy, Dave Hansen, Catalin Marinas, Will Deacon, Alexander Viro, Michael Ellerman, Benjamin Herrenschmidt, Paul Mackerras, "H. Peter Anvin"
Cc: Kefeng Wang, Xie XiuQi, Guohanjun, Tong Tiangen
Subject: [PATCH -next v6 03/10] arm64: asm-extable: add asm uaccess helpers
Date: Tue, 21 Jun 2022 07:26:31 +0000
Message-ID: <20220621072638.1273594-4-tongtiangen@huawei.com>
In-Reply-To: <20220621072638.1273594-1-tongtiangen@huawei.com>
References: <20220621072638.1273594-1-tongtiangen@huawei.com>

In subsequent patches we want to explicitly annotate uaccess fixups in assembly files. We have existing helpers for this for inline assembly, but due to differing stringification requirements it's not possible to have a single definition that we can use for both inline asm and plain asm files. So, as with other cases (e.g. gpr-num.h), we must provide separate helpers for plain asm and inline asm.
So that we can do so, this patch adds helpers to define EX_TYPE_UACCESS_ERR_ZERO fixups in plain assembly. These correspond 1-1 with the inline assembly versions except for the absence of stringification. No plain assmebly heleprs are added for EX_TYPE_LOAD_UNALIGNED_ZEROPAD fixups as these only exist for a single C function. For copy_{to,from}_user() we'll need fixups with regs and err, so I've added _ASM_EXTABLE_UACCESS(insn, fixup), where both the error and zero registers are WZR. For clarity, the existing `_asm_extable` assemgbly maco is now defined in terms of the _ASM_EXTABLE() CPP macro, making the CPP macros canonical in all cases. There should be no functional change as a result of this patch. Signed-off-by: Mark Rutland Signed-off-by: Tong Tiangen --- arch/arm64/include/asm/asm-extable.h | 31 ++++++++++++++++++++++++---- 1 file changed, 27 insertions(+), 4 deletions(-) diff --git a/arch/arm64/include/asm/asm-extable.h b/arch/arm64/include/asm/asm-extable.h index 204b30bf78b3..2e1e6bc33bcd 100644 --- a/arch/arm64/include/asm/asm-extable.h +++ b/arch/arm64/include/asm/asm-extable.h @@ -2,6 +2,9 @@ #ifndef __ASM_ASM_EXTABLE_H #define __ASM_ASM_EXTABLE_H +#include +#include + #define EX_TYPE_NONE 0 #define EX_TYPE_FIXUP 1 #define EX_TYPE_BPF 2 @@ -32,12 +35,32 @@ .short (data); \ .popsection; +#define _ASM_EXTABLE(insn, fixup) \ + __ASM_EXTABLE_RAW(insn, fixup, EX_TYPE_FIXUP, 0) + +#define EX_DATA_REG(reg, gpr) \ + (.L__gpr_num_##gpr << EX_DATA_REG_##reg##_SHIFT) + +#define _ASM_EXTABLE_UACCESS_ERR_ZERO(insn, fixup, err, zero) \ + __ASM_EXTABLE_RAW(insn, fixup, \ + EX_TYPE_UACCESS_ERR_ZERO, \ + ( \ + EX_DATA_REG(ERR, err) | \ + EX_DATA_REG(ZERO, zero) \ + )) + +#define _ASM_EXTABLE_UACCESS_ERR(insn, fixup, err) \ + _ASM_EXTABLE_UACCESS_ERR_ZERO(insn, fixup, err, wzr) + +#define _ASM_EXTABLE_UACCESS(insn, fixup) \ + _ASM_EXTABLE_UACCESS_ERR_ZERO(insn, fixup, wzr, wzr) + /* * Create an exception table entry for `insn`, which will branch to `fixup` * when an unhandled fault is taken. 
*/ .macro _asm_extable, insn, fixup - __ASM_EXTABLE_RAW(\insn, \fixup, EX_TYPE_FIXUP, 0) + _ASM_EXTABLE(\insn, \fixup) .endm /* @@ -52,11 +75,8 @@ #else /* __ASSEMBLY__ */ -#include #include -#include - #define __ASM_EXTABLE_RAW(insn, fixup, type, data) \ ".pushsection __ex_table, \"a\"\n" \ ".align 2\n" \ @@ -93,6 +113,9 @@ #define _ASM_EXTABLE_UACCESS_ERR(insn, fixup, err) \ _ASM_EXTABLE_UACCESS_ERR_ZERO(insn, fixup, err, wzr) +#define _ASM_EXTABLE_UACCESS(insn, fixup) \ + _ASM_EXTABLE_UACCESS_ERR_ZERO(insn, fixup, wzr, wzr) + #define _ASM_EXTABLE_KACCESS_ERR(insn, fixup, err) \ _ASM_EXTABLE_KACCESS_ERR_ZERO(insn, fixup, err, wzr) From patchwork Tue Jun 21 07:26:32 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Tong Tiangen X-Patchwork-Id: 12888752 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by smtp.lore.kernel.org (Postfix) with ESMTP id 4634CC433EF for ; Tue, 21 Jun 2022 07:27:54 +0000 (UTC) Received: by kanga.kvack.org (Postfix) id D6DB16B007B; Tue, 21 Jun 2022 03:27:53 -0400 (EDT) Received: by kanga.kvack.org (Postfix, from userid 40) id CF4CA6B0083; Tue, 21 Jun 2022 03:27:53 -0400 (EDT) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id B976E6B0085; Tue, 21 Jun 2022 03:27:53 -0400 (EDT) X-Delivered-To: linux-mm@kvack.org Received: from relay.hostedemail.com (smtprelay0017.hostedemail.com [216.40.44.17]) by kanga.kvack.org (Postfix) with ESMTP id A61D56B007B for ; Tue, 21 Jun 2022 03:27:53 -0400 (EDT) Received: from smtpin05.hostedemail.com (a10.router.float.18 [10.200.18.1]) by unirelay07.hostedemail.com (Postfix) with ESMTP id 78F5F2113D for ; Tue, 21 Jun 2022 07:27:53 +0000 (UTC) X-FDA: 79601413626.05.B8C13AA Received: from szxga02-in.huawei.com (szxga02-in.huawei.com [45.249.212.188]) by imf05.hostedemail.com (Postfix) with ESMTP id 91EE4100018 for ; Tue, 21 Jun 2022 07:27:52 +0000 (UTC) Received: from dggemv711-chm.china.huawei.com (unknown [172.30.72.54]) by szxga02-in.huawei.com (SkyGuard) with ESMTP id 4LRygp00QvzSh50; Tue, 21 Jun 2022 15:24:21 +0800 (CST) Received: from kwepemm600017.china.huawei.com (7.193.23.234) by dggemv711-chm.china.huawei.com (10.1.198.66) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.24; Tue, 21 Jun 2022 15:26:55 +0800 Received: from localhost.localdomain (10.175.112.125) by kwepemm600017.china.huawei.com (7.193.23.234) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.24; Tue, 21 Jun 2022 15:26:53 +0800 From: Tong Tiangen To: Mark Rutland , James Morse , Andrew Morton , Thomas Gleixner , Ingo Molnar , Borislav Petkov , Robin Murphy , Dave Hansen , Catalin Marinas , Will Deacon , Alexander Viro , Michael Ellerman , Benjamin Herrenschmidt , Paul Mackerras , , "H . 
Peter Anvin"
Cc: Kefeng Wang, Xie XiuQi, Guohanjun, Tong Tiangen
Subject: [PATCH -next v6 04/10] arm64: extable: make uaccess helper use extable type EX_TYPE_UACCESS_ERR_ZERO
Date: Tue, 21 Jun 2022 07:26:32 +0000
Message-ID: <20220621072638.1273594-5-tongtiangen@huawei.com>
In-Reply-To: <20220621072638.1273594-1-tongtiangen@huawei.com>
References: <20220621072638.1273594-1-tongtiangen@huawei.com>

Currently, the extable type used by __arch_copy_from/to_user() is EX_TYPE_FIXUP. In fact, it is clearer to use the meaningful EX_TYPE_UACCESS_*.

Suggested-by: Mark Rutland
Signed-off-by: Tong Tiangen
--- arch/arm64/include/asm/asm-extable.h | 8 ++++++++ arch/arm64/include/asm/asm-uaccess.h | 12 ++++++------ 2 files changed, 14 insertions(+), 6 deletions(-) diff --git a/arch/arm64/include/asm/asm-extable.h b/arch/arm64/include/asm/asm-extable.h index 2e1e6bc33bcd..73266553f8a2 100644 --- a/arch/arm64/include/asm/asm-extable.h +++ b/arch/arm64/include/asm/asm-extable.h @@ -63,6 +63,14 @@ _ASM_EXTABLE(\insn, \fixup) .endm +/* + * Create an exception table entry for uaccess `insn`, which will branch to `fixup` + * when an unhandled fault is taken. + */ + .macro _asm_extable_uaccess, insn, fixup + _ASM_EXTABLE_UACCESS(\insn, \fixup) + .endm + /* * Create an exception table entry for `insn` if `fixup` is provided. Otherwise * do nothing. diff --git a/arch/arm64/include/asm/asm-uaccess.h b/arch/arm64/include/asm/asm-uaccess.h index 0557af834e03..75b211c98dea 100644 --- a/arch/arm64/include/asm/asm-uaccess.h +++ b/arch/arm64/include/asm/asm-uaccess.h @@ -61,7 +61,7 @@ alternative_else_nop_endif #define USER(l, x...)
\ 9999: x; \ - _asm_extable 9999b, l + _asm_extable_uaccess 9999b, l /* * Generate the assembly for LDTR/STTR with exception table entries. @@ -73,8 +73,8 @@ alternative_else_nop_endif 8889: ldtr \reg2, [\addr, #8]; add \addr, \addr, \post_inc; - _asm_extable 8888b,\l; - _asm_extable 8889b,\l; + _asm_extable_uaccess 8888b, \l; + _asm_extable_uaccess 8889b, \l; .endm .macro user_stp l, reg1, reg2, addr, post_inc @@ -82,14 +82,14 @@ alternative_else_nop_endif 8889: sttr \reg2, [\addr, #8]; add \addr, \addr, \post_inc; - _asm_extable 8888b,\l; - _asm_extable 8889b,\l; + _asm_extable_uaccess 8888b,\l; + _asm_extable_uaccess 8889b,\l; .endm .macro user_ldst l, inst, reg, addr, post_inc 8888: \inst \reg, [\addr]; add \addr, \addr, \post_inc; - _asm_extable 8888b,\l; + _asm_extable_uaccess 8888b, \l; .endm #endif From patchwork Tue Jun 21 07:26:33 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Tong Tiangen X-Patchwork-Id: 12888747 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by smtp.lore.kernel.org (Postfix) with ESMTP id EADD2C433EF for ; Tue, 21 Jun 2022 07:27:22 +0000 (UTC) Received: by kanga.kvack.org (Postfix) id 862A16B007D; Tue, 21 Jun 2022 03:27:22 -0400 (EDT) Received: by kanga.kvack.org (Postfix, from userid 40) id 7EB976B007E; Tue, 21 Jun 2022 03:27:22 -0400 (EDT) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 6656F8E0002; Tue, 21 Jun 2022 03:27:22 -0400 (EDT) X-Delivered-To: linux-mm@kvack.org Received: from relay.hostedemail.com (smtprelay0017.hostedemail.com [216.40.44.17]) by kanga.kvack.org (Postfix) with ESMTP id 52A0B6B007D for ; Tue, 21 Jun 2022 03:27:22 -0400 (EDT) Received: from smtpin29.hostedemail.com (a10.router.float.18 [10.200.18.1]) by unirelay11.hostedemail.com (Postfix) with ESMTP id 2E92880E9F for ; Tue, 21 Jun 2022 07:27:22 +0000 (UTC) X-FDA: 79601412324.29.C014574 Received: from szxga02-in.huawei.com (szxga02-in.huawei.com [45.249.212.188]) by imf04.hostedemail.com (Postfix) with ESMTP id 93890400A2 for ; Tue, 21 Jun 2022 07:27:16 +0000 (UTC) Received: from dggemv704-chm.china.huawei.com (unknown [172.30.72.55]) by szxga02-in.huawei.com (SkyGuard) with ESMTP id 4LRygB2V2KzSh6N; Tue, 21 Jun 2022 15:23:50 +0800 (CST) Received: from kwepemm600017.china.huawei.com (7.193.23.234) by dggemv704-chm.china.huawei.com (10.3.19.47) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.24; Tue, 21 Jun 2022 15:26:56 +0800 Received: from localhost.localdomain (10.175.112.125) by kwepemm600017.china.huawei.com (7.193.23.234) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.24; Tue, 21 Jun 2022 15:26:55 +0800 From: Tong Tiangen To: Mark Rutland , James Morse , Andrew Morton , Thomas Gleixner , Ingo Molnar , Borislav Petkov , Robin Murphy , Dave Hansen , Catalin Marinas , Will Deacon , Alexander Viro , Michael Ellerman , Benjamin Herrenschmidt , Paul Mackerras , , "H . 
Peter Anvin"
Cc: Kefeng Wang, Xie XiuQi, Guohanjun, Tong Tiangen
Subject: [PATCH -next v6 05/10] arm64: extable: move _cond_extable to _cond_uaccess_extable
Date: Tue, 21 Jun 2022 07:26:33 +0000
Message-ID: <20220621072638.1273594-6-tongtiangen@huawei.com>
In-Reply-To: <20220621072638.1273594-1-tongtiangen@huawei.com>
References: <20220621072638.1273594-1-tongtiangen@huawei.com>

Currently, we use _cond_extable for the cache maintenance uaccess helper caches_clean_inval_user_pou(), so this should be moved over to EX_TYPE_UACCESS_ERR_ZERO, and _cond_extable renamed to _cond_uaccess_extable for clarity.

Suggested-by: Mark Rutland
Signed-off-by: Tong Tiangen
Acked-by: Mark Rutland
--- arch/arm64/include/asm/asm-extable.h | 6 +++--- arch/arm64/include/asm/assembler.h | 4 ++-- 2 files changed, 5 insertions(+), 5 deletions(-) diff --git a/arch/arm64/include/asm/asm-extable.h b/arch/arm64/include/asm/asm-extable.h index 73266553f8a2..b97213d292ce 100644 --- a/arch/arm64/include/asm/asm-extable.h +++ b/arch/arm64/include/asm/asm-extable.h @@ -75,9 +75,9 @@ * Create an exception table entry for `insn` if `fixup` is provided. Otherwise * do nothing.
*/ - .macro _cond_extable, insn, fixup - .ifnc \fixup, - _asm_extable \insn, \fixup + .macro _cond_uaccess_extable, insn, fixup + .ifnc \fixup, + _asm_extable_uaccess \insn, \fixup .endif .endm diff --git a/arch/arm64/include/asm/assembler.h b/arch/arm64/include/asm/assembler.h index 8c5a61aeaf8e..dc422fa437c2 100644 --- a/arch/arm64/include/asm/assembler.h +++ b/arch/arm64/include/asm/assembler.h @@ -423,7 +423,7 @@ alternative_endif b.lo .Ldcache_op\@ dsb \domain - _cond_extable .Ldcache_op\@, \fixup + _cond_uaccess_extable .Ldcache_op\@, \fixup .endm /* @@ -462,7 +462,7 @@ alternative_endif dsb ish isb - _cond_extable .Licache_op\@, \fixup + _cond_uaccess_extable .Licache_op\@, \fixup .endm /* From patchwork Tue Jun 21 07:26:34 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Tong Tiangen X-Patchwork-Id: 12888748 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by smtp.lore.kernel.org (Postfix) with ESMTP id 881DBC433EF for ; Tue, 21 Jun 2022 07:27:33 +0000 (UTC) Received: by kanga.kvack.org (Postfix) id 2E4B96B007E; Tue, 21 Jun 2022 03:27:33 -0400 (EDT) Received: by kanga.kvack.org (Postfix, from userid 40) id 26D596B0080; Tue, 21 Jun 2022 03:27:33 -0400 (EDT) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 10FA38E0002; Tue, 21 Jun 2022 03:27:33 -0400 (EDT) X-Delivered-To: linux-mm@kvack.org Received: from relay.hostedemail.com (smtprelay0014.hostedemail.com [216.40.44.14]) by kanga.kvack.org (Postfix) with ESMTP id F307E6B007E for ; Tue, 21 Jun 2022 03:27:32 -0400 (EDT) Received: from smtpin19.hostedemail.com (a10.router.float.18 [10.200.18.1]) by unirelay01.hostedemail.com (Postfix) with ESMTP id C859B61136 for ; Tue, 21 Jun 2022 07:27:32 +0000 (UTC) X-FDA: 79601412744.19.42FA81D Received: from szxga01-in.huawei.com (szxga01-in.huawei.com [45.249.212.187]) by imf22.hostedemail.com (Postfix) with ESMTP id BD812C00BC for ; Tue, 21 Jun 2022 07:27:31 +0000 (UTC) Received: from dggemv703-chm.china.huawei.com (unknown [172.30.72.56]) by szxga01-in.huawei.com (SkyGuard) with ESMTP id 4LRyk16LVSzkWj6; Tue, 21 Jun 2022 15:26:17 +0800 (CST) Received: from kwepemm600017.china.huawei.com (7.193.23.234) by dggemv703-chm.china.huawei.com (10.3.19.46) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.24; Tue, 21 Jun 2022 15:26:58 +0800 Received: from localhost.localdomain (10.175.112.125) by kwepemm600017.china.huawei.com (7.193.23.234) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.24; Tue, 21 Jun 2022 15:26:56 +0800 From: Tong Tiangen To: Mark Rutland , James Morse , Andrew Morton , Thomas Gleixner , Ingo Molnar , Borislav Petkov , Robin Murphy , Dave Hansen , Catalin Marinas , Will Deacon , Alexander Viro , Michael Ellerman , Benjamin Herrenschmidt , Paul Mackerras , , "H . 
Peter Anvin"
Cc: Kefeng Wang, Xie XiuQi, Guohanjun, Tong Tiangen
Subject: [PATCH -next v6 06/10] arm64: extable: cleanup redundant extable type EX_TYPE_FIXUP
Date: Tue, 21 Jun 2022 07:26:34 +0000
Message-ID: <20220621072638.1273594-7-tongtiangen@huawei.com>
In-Reply-To: <20220621072638.1273594-1-tongtiangen@huawei.com>
References: <20220621072638.1273594-1-tongtiangen@huawei.com>

Currently, the extable type EX_TYPE_FIXUP has no remaining users, so we can safely remove it.
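A brief note on why this is safe, assuming the helpers introduced earlier in this series: a plain EX_TYPE_FIXUP entry only redirected the PC to the fixup address, and the same effect is now obtained from an ERR_ZERO entry whose error and zero registers are both wzr (writes to wzr are discarded), e.g. via _ASM_EXTABLE_UACCESS(insn, fixup). A rough sketch of the equivalence (not part of the patch):

/*
 * Old plain fixup entry:            ERR_ZERO entry with wzr/wzr:
 *   regs->pc = fixup;                 regs[err]  = -EFAULT;  // err  == wzr -> discarded
 *                                     regs[zero] = 0;        // zero == wzr -> discarded
 *                                     regs->pc   = fixup;    // same redirect
 */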
Suggested-by: Mark Rutland Signed-off-by: Tong Tiangen Acked-by: Mark Rutland --- arch/arm64/include/asm/asm-extable.h | 23 ++++------------------- arch/arm64/mm/extable.c | 9 --------- 2 files changed, 4 insertions(+), 28 deletions(-) diff --git a/arch/arm64/include/asm/asm-extable.h b/arch/arm64/include/asm/asm-extable.h index b97213d292ce..980d1dd8e1a3 100644 --- a/arch/arm64/include/asm/asm-extable.h +++ b/arch/arm64/include/asm/asm-extable.h @@ -6,11 +6,10 @@ #include #define EX_TYPE_NONE 0 -#define EX_TYPE_FIXUP 1 -#define EX_TYPE_BPF 2 -#define EX_TYPE_UACCESS_ERR_ZERO 3 -#define EX_TYPE_KACCESS_ERR_ZERO 4 -#define EX_TYPE_LOAD_UNALIGNED_ZEROPAD 5 +#define EX_TYPE_BPF 1 +#define EX_TYPE_UACCESS_ERR_ZERO 2 +#define EX_TYPE_KACCESS_ERR_ZERO 3 +#define EX_TYPE_LOAD_UNALIGNED_ZEROPAD 4 /* Data fields for EX_TYPE_UACCESS_ERR_ZERO */ #define EX_DATA_REG_ERR_SHIFT 0 @@ -35,9 +34,6 @@ .short (data); \ .popsection; -#define _ASM_EXTABLE(insn, fixup) \ - __ASM_EXTABLE_RAW(insn, fixup, EX_TYPE_FIXUP, 0) - #define EX_DATA_REG(reg, gpr) \ (.L__gpr_num_##gpr << EX_DATA_REG_##reg##_SHIFT) @@ -55,14 +51,6 @@ #define _ASM_EXTABLE_UACCESS(insn, fixup) \ _ASM_EXTABLE_UACCESS_ERR_ZERO(insn, fixup, wzr, wzr) -/* - * Create an exception table entry for `insn`, which will branch to `fixup` - * when an unhandled fault is taken. - */ - .macro _asm_extable, insn, fixup - _ASM_EXTABLE(\insn, \fixup) - .endm - /* * Create an exception table entry for uaccess `insn`, which will branch to `fixup` * when an unhandled fault is taken. @@ -94,9 +82,6 @@ ".short (" data ")\n" \ ".popsection\n" -#define _ASM_EXTABLE(insn, fixup) \ - __ASM_EXTABLE_RAW(#insn, #fixup, __stringify(EX_TYPE_FIXUP), "0") - #define EX_DATA_REG(reg, gpr) \ "((.L__gpr_num_" #gpr ") << " __stringify(EX_DATA_REG_##reg##_SHIFT) ")" diff --git a/arch/arm64/mm/extable.c b/arch/arm64/mm/extable.c index 056591e5ca80..228d681a8715 100644 --- a/arch/arm64/mm/extable.c +++ b/arch/arm64/mm/extable.c @@ -16,13 +16,6 @@ get_ex_fixup(const struct exception_table_entry *ex) return ((unsigned long)&ex->fixup + ex->fixup); } -static bool ex_handler_fixup(const struct exception_table_entry *ex, - struct pt_regs *regs) -{ - regs->pc = get_ex_fixup(ex); - return true; -} - static bool ex_handler_uaccess_err_zero(const struct exception_table_entry *ex, struct pt_regs *regs) { @@ -72,8 +65,6 @@ bool fixup_exception(struct pt_regs *regs) return false; switch (ex->type) { - case EX_TYPE_FIXUP: - return ex_handler_fixup(ex, regs); case EX_TYPE_BPF: return ex_handler_bpf(ex, regs); case EX_TYPE_UACCESS_ERR_ZERO: From patchwork Tue Jun 21 07:26:35 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Tong Tiangen X-Patchwork-Id: 12888764 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by smtp.lore.kernel.org (Postfix) with ESMTP id 4A2EFCCA473 for ; Tue, 21 Jun 2022 07:28:37 +0000 (UTC) Received: by kanga.kvack.org (Postfix) id DBA596B007E; Tue, 21 Jun 2022 03:28:36 -0400 (EDT) Received: by kanga.kvack.org (Postfix, from userid 40) id D6AD98E0002; Tue, 21 Jun 2022 03:28:36 -0400 (EDT) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id C33176B0085; Tue, 21 Jun 2022 03:28:36 -0400 (EDT) X-Delivered-To: linux-mm@kvack.org Received: from relay.hostedemail.com (smtprelay0013.hostedemail.com [216.40.44.13]) by kanga.kvack.org 
From: Tong Tiangen
To: Mark Rutland, James Morse, Andrew Morton, Thomas Gleixner, Ingo Molnar, Borislav Petkov, Robin Murphy, Dave Hansen, Catalin Marinas, Will Deacon, Alexander Viro, Michael Ellerman, Benjamin Herrenschmidt, Paul Mackerras, "H. Peter Anvin"
Cc: Kefeng Wang, Xie XiuQi, Guohanjun, Tong Tiangen
Subject: [PATCH -next v6 07/10] Add generic fallback version of copy_mc_to_user()
Date: Tue, 21 Jun 2022 07:26:35 +0000
Message-ID: <20220621072638.1273594-8-tongtiangen@huawei.com>
In-Reply-To: <20220621072638.1273594-1-tongtiangen@huawei.com>
References: <20220621072638.1273594-1-tongtiangen@huawei.com>
x86 and powerpc have their own implementations of copy_mc_to_user(); add a generic fallback in include/linux/uaccess.h to prepare for other architectures to enable CONFIG_ARCH_HAS_COPY_MC.

Signed-off-by: Tong Tiangen
Acked-by: Michael Ellerman
--- arch/powerpc/include/asm/uaccess.h | 1 + arch/x86/include/asm/uaccess.h | 1 + include/linux/uaccess.h | 9 +++++++++ 3 files changed, 11 insertions(+) diff --git a/arch/powerpc/include/asm/uaccess.h b/arch/powerpc/include/asm/uaccess.h index 9b82b38ff867..58dbe8e2e318 100644 --- a/arch/powerpc/include/asm/uaccess.h +++ b/arch/powerpc/include/asm/uaccess.h @@ -358,6 +358,7 @@ copy_mc_to_user(void __user *to, const void *from, unsigned long n) return n; } +#define copy_mc_to_user copy_mc_to_user #endif extern long __copy_from_user_flushcache(void *dst, const void __user *src, diff --git a/arch/x86/include/asm/uaccess.h b/arch/x86/include/asm/uaccess.h index 913e593a3b45..64ba7f723ddf 100644 --- a/arch/x86/include/asm/uaccess.h +++ b/arch/x86/include/asm/uaccess.h @@ -512,6 +512,7 @@ copy_mc_to_kernel(void *to, const void *from, unsigned len); unsigned long __must_check copy_mc_to_user(void *to, const void *from, unsigned len); +#define copy_mc_to_user copy_mc_to_user #endif /* diff --git a/include/linux/uaccess.h b/include/linux/uaccess.h index 5a328cf02b75..07e9faeb14b5 100644 --- a/include/linux/uaccess.h +++ b/include/linux/uaccess.h @@ -174,6 +174,15 @@ copy_mc_to_kernel(void *dst, const void *src, size_t cnt) } #endif +#ifndef copy_mc_to_user +static inline unsigned long __must_check +copy_mc_to_user(void *dst, const void *src, size_t cnt) +{ + check_object_size(src, cnt, true); + return raw_copy_to_user(dst, src, cnt); +} +#endif + static __always_inline void pagefault_disabled_inc(void) { current->pagefault_disabled++;
From patchwork Tue Jun 21 07:26:36 2022
X-Patchwork-Submitter: Tong Tiangen
X-Patchwork-Id: 12888751
From: Tong Tiangen
To: Mark Rutland, James Morse, Andrew Morton, Thomas Gleixner, Ingo Molnar, Borislav Petkov, Robin Murphy, Dave Hansen, Catalin Marinas, Will Deacon, Alexander Viro, Michael Ellerman, Benjamin Herrenschmidt, Paul Mackerras, "H. Peter Anvin"
Cc: Kefeng Wang, Xie XiuQi, Guohanjun, Tong Tiangen
Subject: [PATCH -next v6 08/10] arm64: add support for machine check error safe
Date: Tue, 21 Jun 2022 07:26:36 +0000
Message-ID: <20220621072638.1273594-9-tongtiangen@huawei.com>
In-Reply-To: <20220621072638.1273594-1-tongtiangen@huawei.com>
References: <20220621072638.1273594-1-tongtiangen@huawei.com>

During the handling of arm64 kernel hardware memory errors (do_sea()), if the error is consumed in the kernel, the current behaviour is to panic. However, this is not optimal. Take uaccess for example: if a uaccess operation fails due to a memory error, only the user process is affected, so killing the user process and isolating the page with hardware memory errors is a better choice.
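Condensed into a sketch, the flow this patch adds looks roughly as follows (the real code is in the do_sea()/arm64_do_kernel_sea() hunks below; details such as the kernel-thread check are omitted here):

/*
 * do_sea() on a synchronous external abort taken in kernel mode, roughly:
 *
 *   if (!user_mode(regs) &&
 *       apei_claim_sea(regs) succeeds &&   // error claimed via firmware-first APEI
 *       fixup_exception_mc(regs))          // a machine-check-safe extable fixup exists
 *           force the fault signal on current;  // affect only the triggering user process
 *   else
 *           arm64_notify_die(...);         // previous behaviour: die/panic
 */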
This patch only enables the machine check safe framework: it adds an exception fixup before the kernel panic in do_sea(), and it is limited to hardware memory errors consumed in kernel mode that were triggered by user-mode processes. If the fixup succeeds, the panic can be avoided. Signed-off-by: Tong Tiangen --- arch/arm64/Kconfig | 1 + arch/arm64/include/asm/extable.h | 1 + arch/arm64/mm/extable.c | 16 ++++++++++++++++ arch/arm64/mm/fault.c | 29 ++++++++++++++++++++++++++++- 4 files changed, 46 insertions(+), 1 deletion(-) diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig index 9728103a13aa..a636e5ce02b5 100644 --- a/arch/arm64/Kconfig +++ b/arch/arm64/Kconfig @@ -19,6 +19,7 @@ config ARM64 select ARCH_ENABLE_SPLIT_PMD_PTLOCK if PGTABLE_LEVELS > 2 select ARCH_ENABLE_THP_MIGRATION if TRANSPARENT_HUGEPAGE select ARCH_HAS_CACHE_LINE_SIZE + select ARCH_HAS_COPY_MC if ACPI_APEI_GHES select ARCH_HAS_CURRENT_STACK_POINTER select ARCH_HAS_DEBUG_VIRTUAL select ARCH_HAS_DEBUG_VM_PGTABLE diff --git a/arch/arm64/include/asm/extable.h b/arch/arm64/include/asm/extable.h index 72b0e71cc3de..f80ebd0addfd 100644 --- a/arch/arm64/include/asm/extable.h +++ b/arch/arm64/include/asm/extable.h @@ -46,4 +46,5 @@ bool ex_handler_bpf(const struct exception_table_entry *ex, #endif /* !CONFIG_BPF_JIT */ bool fixup_exception(struct pt_regs *regs); +bool fixup_exception_mc(struct pt_regs *regs); #endif diff --git a/arch/arm64/mm/extable.c b/arch/arm64/mm/extable.c index 228d681a8715..478e639f8680 100644 --- a/arch/arm64/mm/extable.c +++ b/arch/arm64/mm/extable.c @@ -76,3 +76,19 @@ bool fixup_exception(struct pt_regs *regs) BUG(); } + +bool fixup_exception_mc(struct pt_regs *regs) +{ + const struct exception_table_entry *ex; + + ex = search_exception_tables(instruction_pointer(regs)); + if (!ex) + return false; + + /* + * This is not complete, More Machine check safe extable type can + * be processed here.
+ */ + + return false; +} diff --git a/arch/arm64/mm/fault.c b/arch/arm64/mm/fault.c index de166cdeb89a..bd6dd67c9ead 100644 --- a/arch/arm64/mm/fault.c +++ b/arch/arm64/mm/fault.c @@ -700,6 +700,31 @@ static int do_bad(unsigned long far, unsigned long esr, struct pt_regs *regs) return 1; /* "fault" */ } +static bool arm64_do_kernel_sea(unsigned long addr, unsigned int esr, + struct pt_regs *regs, int sig, int code) +{ + if (!IS_ENABLED(CONFIG_ARCH_HAS_COPY_MC)) + return false; + + if (user_mode(regs)) + return false; + + if (apei_claim_sea(regs) < 0) + return false; + + if (!fixup_exception_mc(regs)) + return false; + + if (current->flags & PF_KTHREAD) + return true; + + set_thread_esr(0, esr); + arm64_force_sig_fault(sig, code, addr, + "Uncorrected memory error on access to user memory\n"); + + return true; +} + static int do_sea(unsigned long far, unsigned long esr, struct pt_regs *regs) { const struct fault_info *inf; @@ -725,7 +750,9 @@ static int do_sea(unsigned long far, unsigned long esr, struct pt_regs *regs) */ siaddr = untagged_addr(far); } - arm64_notify_die(inf->name, regs, inf->sig, inf->code, siaddr, esr); + + if (!arm64_do_kernel_sea(siaddr, esr, regs, inf->sig, inf->code)) + arm64_notify_die(inf->name, regs, inf->sig, inf->code, siaddr, esr); return 0; } From patchwork Tue Jun 21 07:26:37 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Tong Tiangen X-Patchwork-Id: 12888750 From: Tong Tiangen To: Mark Rutland , James Morse , Andrew Morton , Thomas Gleixner , Ingo Molnar , Borislav Petkov , Robin Murphy , Dave Hansen
, Catalin Marinas , Will Deacon , Alexander Viro , Michael Ellerman , Benjamin Herrenschmidt , Paul Mackerras , , "H . Peter Anvin" CC: , , , , Kefeng Wang , Xie XiuQi , Guohanjun , Tong Tiangen Subject: [PATCH -next v6 09/10] arm64: add uaccess to machine check safe Date: Tue, 21 Jun 2022 07:26:37 +0000 Message-ID: <20220621072638.1273594-10-tongtiangen@huawei.com> In-Reply-To: <20220621072638.1273594-1-tongtiangen@huawei.com> References: <20220621072638.1273594-1-tongtiangen@huawei.com> If a user access fails due to a hardware memory error, only the relevant processes are affected, so killing the user process and isolating the error page with the hardware memory error is a more reasonable choice than a kernel panic. Signed-off-by: Tong Tiangen --- arch/arm64/mm/extable.c | 8 ++++---- 1 file changed, 4 insertions(+), 4 deletions(-) diff --git a/arch/arm64/mm/extable.c b/arch/arm64/mm/extable.c index 478e639f8680..28ec35e3d210 100644 --- a/arch/arm64/mm/extable.c +++ b/arch/arm64/mm/extable.c @@ -85,10 +85,10 @@ bool fixup_exception_mc(struct pt_regs *regs) if (!ex) return false; - /* - * This is not complete, More Machine check safe extable type can - * be processed here. - */ + switch (ex->type) { + case EX_TYPE_UACCESS_ERR_ZERO: + return ex_handler_uaccess_err_zero(ex, regs); + } return false; }
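To make the effect concrete, here is a hypothetical driver write handler (illustrative only, not part of the series). With this change, a uaccess routine such as copy_from_user() that hits an uncorrected memory error on the user source page can be fixed up through its extable entry: the copy reports a short transfer as usual, and the error handling added in the previous patch signals the offending task instead of panicking the whole machine.

#include <linux/fs.h>
#include <linux/uaccess.h>

/* Hypothetical example of a uaccess path that benefits from this change. */
static ssize_t demo_write(struct file *file, const char __user *ubuf,
			  size_t len, loff_t *ppos)
{
	char kbuf[128];

	if (len > sizeof(kbuf))
		len = sizeof(kbuf);
	if (copy_from_user(kbuf, ubuf, len))
		return -EFAULT;		/* fault or uncorrected memory error */
	/* ... consume kbuf ... */
	return len;
}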
From patchwork Tue Jun 21 07:26:38 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Tong Tiangen X-Patchwork-Id: 12888765 From: Tong Tiangen To: Mark Rutland , James Morse , Andrew Morton , Thomas Gleixner , Ingo Molnar , Borislav Petkov , Robin Murphy , Dave Hansen , Catalin Marinas , Will Deacon , Alexander Viro , Michael Ellerman , Benjamin Herrenschmidt , Paul Mackerras , , "H .
Peter Anvin" CC: , , , , Kefeng Wang , Xie XiuQi , Guohanjun , Tong Tiangen Subject: [PATCH -next v6 10/10] arm64: add cow to machine check safe Date: Tue, 21 Jun 2022 07:26:38 +0000 Message-ID: <20220621072638.1273594-11-tongtiangen@huawei.com> In-Reply-To: <20220621072638.1273594-1-tongtiangen@huawei.com> References: <20220621072638.1273594-1-tongtiangen@huawei.com> In COW (copy-on-write) processing, the data of the user process is copied. When a hardware memory error is encountered during the copy, only the relevant processes are affected, so killing the user process and isolating the user page with the hardware memory error is a more reasonable choice than a kernel panic. Add a new helper, copy_page_mc(), which provides a machine-check-safe page copy implementation. At present it is only used for COW; in the future it can be extended to more scenarios. As long as the consequences of a page copy failure are not fatal (e.g. only a user process is affected), this helper can be used. copy_page_mc() in copy_page_mc.S largely borrows from copy_page() in copy_page.S; the main difference is that copy_page_mc() adds an extable entry to every load/store instruction to support machine check safety, which keeps the patch simple. If needed, further optimizations can be folded in later. Also add the new extable type EX_TYPE_COPY_PAGE_MC, which is used by copy_page_mc().
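A note on the generic side, as a schematic illustration only (the function name below is made up; the behaviour follows from the include/linux/highmem.h hunk in the diff below): architectures that do not define __HAVE_ARCH_COPY_HIGHPAGE_MC / __HAVE_ARCH_COPY_USER_HIGHPAGE_MC simply get the _mc helpers aliased to the existing copy_highpage()/copy_user_highpage(), so the mm/memory.c change is behaviour-neutral for them.

#include <linux/highmem.h>

/*
 * Illustration only: on an architecture without the override, this is
 * exactly equivalent to calling copy_user_highpage() directly.
 */
static void demo_cow_copy(struct page *dst, struct page *src,
			  unsigned long addr, struct vm_area_struct *vma)
{
	copy_user_highpage_mc(dst, src, addr, vma);
}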
Signed-off-by: Tong Tiangen --- arch/arm64/include/asm/asm-extable.h | 5 ++ arch/arm64/include/asm/assembler.h | 4 ++ arch/arm64/include/asm/mte.h | 4 ++ arch/arm64/include/asm/page.h | 10 ++++ arch/arm64/lib/Makefile | 2 + arch/arm64/lib/copy_page_mc.S | 82 ++++++++++++++++++++++++++++ arch/arm64/lib/mte.S | 19 +++++++ arch/arm64/mm/copypage.c | 41 +++++++++++--- arch/arm64/mm/extable.c | 9 +++ include/linux/highmem.h | 8 +++ mm/memory.c | 2 +- 11 files changed, 178 insertions(+), 8 deletions(-) create mode 100644 arch/arm64/lib/copy_page_mc.S diff --git a/arch/arm64/include/asm/asm-extable.h b/arch/arm64/include/asm/asm-extable.h index 980d1dd8e1a3..969e2848ca13 100644 --- a/arch/arm64/include/asm/asm-extable.h +++ b/arch/arm64/include/asm/asm-extable.h @@ -10,6 +10,7 @@ #define EX_TYPE_UACCESS_ERR_ZERO 2 #define EX_TYPE_KACCESS_ERR_ZERO 3 #define EX_TYPE_LOAD_UNALIGNED_ZEROPAD 4 +#define EX_TYPE_COPY_PAGE_MC 5 /* Data fields for EX_TYPE_UACCESS_ERR_ZERO */ #define EX_DATA_REG_ERR_SHIFT 0 @@ -59,6 +60,10 @@ _ASM_EXTABLE_UACCESS(\insn, \fixup) .endm + .macro _asm_extable_copy_page_mc, insn, fixup + __ASM_EXTABLE_RAW(\insn, \fixup, EX_TYPE_COPY_PAGE_MC, 0) + .endm + /* * Create an exception table entry for `insn` if `fixup` is provided. Otherwise * do nothing. diff --git a/arch/arm64/include/asm/assembler.h b/arch/arm64/include/asm/assembler.h index dc422fa437c2..44927fa3a844 100644 --- a/arch/arm64/include/asm/assembler.h +++ b/arch/arm64/include/asm/assembler.h @@ -168,6 +168,10 @@ lr .req x30 // link register #define CPU_LE(code...) code #endif +#define CPY_MC(l, x...) \ +9999: x; \ + _asm_extable_copy_page_mc 9999b, l + /* * Define a macro that constructs a 64-bit value by concatenating two * 32-bit registers. Note that on big endian systems the order of the diff --git a/arch/arm64/include/asm/mte.h b/arch/arm64/include/asm/mte.h index aa523591a44e..b8129f64cfea 100644 --- a/arch/arm64/include/asm/mte.h +++ b/arch/arm64/include/asm/mte.h @@ -40,6 +40,7 @@ void mte_free_tag_storage(char *storage); void mte_zero_clear_page_tags(void *addr); void mte_sync_tags(pte_t old_pte, pte_t pte); void mte_copy_page_tags(void *kto, const void *kfrom); +void mte_copy_page_tags_mc(void *kto, const void *kfrom); void mte_thread_init_user(void); void mte_thread_switch(struct task_struct *next); void mte_suspend_enter(void); @@ -63,6 +64,9 @@ static inline void mte_sync_tags(pte_t old_pte, pte_t pte) static inline void mte_copy_page_tags(void *kto, const void *kfrom) { } +static inline void mte_copy_page_tags_mc(void *kto, const void *kfrom) +{ +} static inline void mte_thread_init_user(void) { } diff --git a/arch/arm64/include/asm/page.h b/arch/arm64/include/asm/page.h index 993a27ea6f54..832571a7dddb 100644 --- a/arch/arm64/include/asm/page.h +++ b/arch/arm64/include/asm/page.h @@ -29,6 +29,16 @@ void copy_user_highpage(struct page *to, struct page *from, void copy_highpage(struct page *to, struct page *from); #define __HAVE_ARCH_COPY_HIGHPAGE +#ifdef CONFIG_ARCH_HAS_COPY_MC +extern void copy_page_mc(void *to, const void *from); +void copy_highpage_mc(struct page *to, struct page *from); +#define __HAVE_ARCH_COPY_HIGHPAGE_MC + +void copy_user_highpage_mc(struct page *to, struct page *from, + unsigned long vaddr, struct vm_area_struct *vma); +#define __HAVE_ARCH_COPY_USER_HIGHPAGE_MC +#endif + struct page *alloc_zeroed_user_highpage_movable(struct vm_area_struct *vma, unsigned long vaddr); #define __HAVE_ARCH_ALLOC_ZEROED_USER_HIGHPAGE_MOVABLE diff --git a/arch/arm64/lib/Makefile 
b/arch/arm64/lib/Makefile index 29490be2546b..0d9f292ef68a 100644 --- a/arch/arm64/lib/Makefile +++ b/arch/arm64/lib/Makefile @@ -15,6 +15,8 @@ endif lib-$(CONFIG_ARCH_HAS_UACCESS_FLUSHCACHE) += uaccess_flushcache.o +lib-$(CONFIG_ARCH_HAS_COPY_MC) += copy_page_mc.o + obj-$(CONFIG_CRC32) += crc32.o obj-$(CONFIG_FUNCTION_ERROR_INJECTION) += error-inject.o diff --git a/arch/arm64/lib/copy_page_mc.S b/arch/arm64/lib/copy_page_mc.S new file mode 100644 index 000000000000..65fcad1dd7c8 --- /dev/null +++ b/arch/arm64/lib/copy_page_mc.S @@ -0,0 +1,82 @@ +/* SPDX-License-Identifier: GPL-2.0-only */ +/* + * Copyright (C) 2012 ARM Ltd. + */ + +#include +#include +#include +#include +#include +#include +#include + +/* + * Copy a page from src to dest (both are page aligned) with machine check + * + * Parameters: + * x0 - dest + * x1 - src + */ +SYM_FUNC_START(__pi_copy_page_mc) +alternative_if ARM64_HAS_NO_HW_PREFETCH + // Prefetch three cache lines ahead. + prfm pldl1strm, [x1, #128] + prfm pldl1strm, [x1, #256] + prfm pldl1strm, [x1, #384] +alternative_else_nop_endif + +CPY_MC(9998f, ldp x2, x3, [x1]) +CPY_MC(9998f, ldp x4, x5, [x1, #16]) +CPY_MC(9998f, ldp x6, x7, [x1, #32]) +CPY_MC(9998f, ldp x8, x9, [x1, #48]) +CPY_MC(9998f, ldp x10, x11, [x1, #64]) +CPY_MC(9998f, ldp x12, x13, [x1, #80]) +CPY_MC(9998f, ldp x14, x15, [x1, #96]) +CPY_MC(9998f, ldp x16, x17, [x1, #112]) + + add x0, x0, #256 + add x1, x1, #128 +1: + tst x0, #(PAGE_SIZE - 1) + +alternative_if ARM64_HAS_NO_HW_PREFETCH + prfm pldl1strm, [x1, #384] +alternative_else_nop_endif + +CPY_MC(9998f, stnp x2, x3, [x0, #-256]) +CPY_MC(9998f, ldp x2, x3, [x1]) +CPY_MC(9998f, stnp x4, x5, [x0, #16 - 256]) +CPY_MC(9998f, ldp x4, x5, [x1, #16]) +CPY_MC(9998f, stnp x6, x7, [x0, #32 - 256]) +CPY_MC(9998f, ldp x6, x7, [x1, #32]) +CPY_MC(9998f, stnp x8, x9, [x0, #48 - 256]) +CPY_MC(9998f, ldp x8, x9, [x1, #48]) +CPY_MC(9998f, stnp x10, x11, [x0, #64 - 256]) +CPY_MC(9998f, ldp x10, x11, [x1, #64]) +CPY_MC(9998f, stnp x12, x13, [x0, #80 - 256]) +CPY_MC(9998f, ldp x12, x13, [x1, #80]) +CPY_MC(9998f, stnp x14, x15, [x0, #96 - 256]) +CPY_MC(9998f, ldp x14, x15, [x1, #96]) +CPY_MC(9998f, stnp x16, x17, [x0, #112 - 256]) +CPY_MC(9998f, ldp x16, x17, [x1, #112]) + + add x0, x0, #128 + add x1, x1, #128 + + b.ne 1b + +CPY_MC(9998f, stnp x2, x3, [x0, #-256]) +CPY_MC(9998f, stnp x4, x5, [x0, #16 - 256]) +CPY_MC(9998f, stnp x6, x7, [x0, #32 - 256]) +CPY_MC(9998f, stnp x8, x9, [x0, #48 - 256]) +CPY_MC(9998f, stnp x10, x11, [x0, #64 - 256]) +CPY_MC(9998f, stnp x12, x13, [x0, #80 - 256]) +CPY_MC(9998f, stnp x14, x15, [x0, #96 - 256]) +CPY_MC(9998f, stnp x16, x17, [x0, #112 - 256]) + +9998: ret + +SYM_FUNC_END(__pi_copy_page_mc) +SYM_FUNC_ALIAS(copy_page_mc, __pi_copy_page_mc) +EXPORT_SYMBOL(copy_page_mc) diff --git a/arch/arm64/lib/mte.S b/arch/arm64/lib/mte.S index eeb9e45bcce8..cf728a9f39b5 100644 --- a/arch/arm64/lib/mte.S +++ b/arch/arm64/lib/mte.S @@ -80,6 +80,25 @@ SYM_FUNC_START(mte_copy_page_tags) ret SYM_FUNC_END(mte_copy_page_tags) +/* + * Copy the tags from the source page to the destination one wiht machine check safe + * x0 - address of the destination page + * x1 - address of the source page + */ +SYM_FUNC_START(mte_copy_page_tags_mc) + mov x2, x0 + mov x3, x1 + multitag_transfer_size x5, x6 +1: +CPY_MC(2f, ldgm x4, [x3]) + stgm x4, [x2] + add x2, x2, x5 + add x3, x3, x5 + tst x2, #(PAGE_SIZE - 1) + b.ne 1b +2: ret +SYM_FUNC_END(mte_copy_page_tags_mc) + /* * Read tags from a user buffer (one tag per byte) and set the corresponding * tags at the given 
kernel address. Used by PTRACE_POKEMTETAGS. diff --git a/arch/arm64/mm/copypage.c b/arch/arm64/mm/copypage.c index 0dea80bf6de4..d68c5fc753a2 100644 --- a/arch/arm64/mm/copypage.c +++ b/arch/arm64/mm/copypage.c @@ -14,13 +14,8 @@ #include #include -void copy_highpage(struct page *to, struct page *from) +static void do_mte(struct page *to, struct page *from, void *kto, void *kfrom, bool mc) { - void *kto = page_address(to); - void *kfrom = page_address(from); - - copy_page(kto, kfrom); - if (system_supports_mte() && test_bit(PG_mte_tagged, &from->flags)) { set_bit(PG_mte_tagged, &to->flags); page_kasan_tag_reset(to); @@ -32,9 +27,21 @@ void copy_highpage(struct page *to, struct page *from) * the new page->flags are visible before the tags were updated. */ smp_wmb(); - mte_copy_page_tags(kto, kfrom); + if (mc) + mte_copy_page_tags_mc(kto, kfrom); + else + mte_copy_page_tags(kto, kfrom); } } + +void copy_highpage(struct page *to, struct page *from) +{ + void *kto = page_address(to); + void *kfrom = page_address(from); + + copy_page(kto, kfrom); + do_mte(to, from, kto, kfrom, false); +} EXPORT_SYMBOL(copy_highpage); void copy_user_highpage(struct page *to, struct page *from, @@ -44,3 +51,23 @@ void copy_user_highpage(struct page *to, struct page *from, flush_dcache_page(to); } EXPORT_SYMBOL_GPL(copy_user_highpage); + +#ifdef CONFIG_ARCH_HAS_COPY_MC +void copy_highpage_mc(struct page *to, struct page *from) +{ + void *kto = page_address(to); + void *kfrom = page_address(from); + + copy_page_mc(kto, kfrom); + do_mte(to, from, kto, kfrom, true); +} +EXPORT_SYMBOL(copy_highpage_mc); + +void copy_user_highpage_mc(struct page *to, struct page *from, + unsigned long vaddr, struct vm_area_struct *vma) +{ + copy_highpage_mc(to, from); + flush_dcache_page(to); +} +EXPORT_SYMBOL_GPL(copy_user_highpage_mc); +#endif diff --git a/arch/arm64/mm/extable.c b/arch/arm64/mm/extable.c index 28ec35e3d210..b986333a3100 100644 --- a/arch/arm64/mm/extable.c +++ b/arch/arm64/mm/extable.c @@ -16,6 +16,13 @@ get_ex_fixup(const struct exception_table_entry *ex) return ((unsigned long)&ex->fixup + ex->fixup); } +static bool ex_handler_fixup(const struct exception_table_entry *ex, + struct pt_regs *regs) +{ + regs->pc = get_ex_fixup(ex); + return true; +} + static bool ex_handler_uaccess_err_zero(const struct exception_table_entry *ex, struct pt_regs *regs) { @@ -88,6 +95,8 @@ bool fixup_exception_mc(struct pt_regs *regs) switch (ex->type) { case EX_TYPE_UACCESS_ERR_ZERO: return ex_handler_uaccess_err_zero(ex, regs); + case EX_TYPE_COPY_PAGE_MC: + return ex_handler_fixup(ex, regs); } return false; diff --git a/include/linux/highmem.h b/include/linux/highmem.h index 22379a63e293..5ba234b89be5 100644 --- a/include/linux/highmem.h +++ b/include/linux/highmem.h @@ -318,6 +318,10 @@ static inline void copy_user_highpage(struct page *to, struct page *from, #endif +#ifndef __HAVE_ARCH_COPY_USER_HIGHPAGE_MC +#define copy_user_highpage_mc copy_user_highpage +#endif + #ifndef __HAVE_ARCH_COPY_HIGHPAGE static inline void copy_highpage(struct page *to, struct page *from) @@ -333,6 +337,10 @@ static inline void copy_highpage(struct page *to, struct page *from) #endif +#ifndef __HAVE_ARCH_COPY_HIGHPAGE_MC +#define copy_highpage_mc copy_highpage +#endif + static inline void memcpy_page(struct page *dst_page, size_t dst_off, struct page *src_page, size_t src_off, size_t len) diff --git a/mm/memory.c b/mm/memory.c index fee2884481f2..7decc792a02d 100644 --- a/mm/memory.c +++ b/mm/memory.c @@ -2868,7 +2868,7 @@ static inline bool 
__wp_page_copy_user(struct page *dst, struct page *src, unsigned long addr = vmf->address; if (likely(src)) { - copy_user_highpage(dst, src, addr, vma); + copy_user_highpage_mc(dst, src, addr, vma); return true; }