From patchwork Mon Dec 9 02:42:53 2024
X-Patchwork-Submitter: Tong Tiangen
X-Patchwork-Id: 13898748
From: Tong Tiangen <tongtiangen@huawei.com>
To: Mark Rutland, Jonathan Cameron, Mauro Carvalho Chehab, Catalin Marinas,
	Will Deacon, Andrew Morton, James Morse, Robin Murphy, Andrey Konovalov,
	Dmitry Vyukov, Vincenzo Frascino, Michael Ellerman, Nicholas Piggin,
	Andrey Ryabinin, Alexander Potapenko, Christophe Leroy,
	Aneesh Kumar K.V, "Naveen N. Rao", Thomas Gleixner, Ingo Molnar,
	Borislav Petkov, Dave Hansen, "H. Peter Anvin", Madhavan Srinivasan
Cc: Tong Tiangen, Guohanjun
Subject: [PATCH v13 1/5] uaccess: add generic fallback version of copy_mc_to_user()
Date: Mon, 9 Dec 2024 10:42:53 +0800
Message-ID: <20241209024257.3618492-2-tongtiangen@huawei.com>
In-Reply-To: <20241209024257.3618492-1-tongtiangen@huawei.com>
References: <20241209024257.3618492-1-tongtiangen@huawei.com>
x86 and powerpc each have their own implementation of copy_mc_to_user().
Add a generic fallback in include/linux/uaccess.h to prepare for other
architectures to enable CONFIG_ARCH_HAS_COPY_MC.

Signed-off-by: Tong Tiangen
Acked-by: Michael Ellerman
Reviewed-by: Mauro Carvalho Chehab
Reviewed-by: Jonathan Cameron
---
 arch/powerpc/include/asm/uaccess.h | 1 +
 arch/x86/include/asm/uaccess.h     | 1 +
 include/linux/uaccess.h            | 8 ++++++++
 3 files changed, 10 insertions(+)

diff --git a/arch/powerpc/include/asm/uaccess.h b/arch/powerpc/include/asm/uaccess.h
index 4f5a46a77fa2..44476d66ed13 100644
--- a/arch/powerpc/include/asm/uaccess.h
+++ b/arch/powerpc/include/asm/uaccess.h
@@ -403,6 +403,7 @@ copy_mc_to_user(void __user *to, const void *from, unsigned long n)
 	return n;
 }
+#define copy_mc_to_user copy_mc_to_user
 #endif
 
 extern long __copy_from_user_flushcache(void *dst, const void __user *src,
diff --git a/arch/x86/include/asm/uaccess.h b/arch/x86/include/asm/uaccess.h
index 3a7755c1a441..3db67f44063b 100644
--- a/arch/x86/include/asm/uaccess.h
+++ b/arch/x86/include/asm/uaccess.h
@@ -497,6 +497,7 @@ copy_mc_to_kernel(void *to, const void *from, unsigned len);
 unsigned long __must_check
 copy_mc_to_user(void __user *to, const void *from, unsigned len);
+#define copy_mc_to_user copy_mc_to_user
 #endif
 
 /*
diff --git a/include/linux/uaccess.h b/include/linux/uaccess.h
index e9c702c1908d..9d8c9f8082ff 100644
--- a/include/linux/uaccess.h
+++ b/include/linux/uaccess.h
@@ -239,6 +239,14 @@ copy_mc_to_kernel(void *dst, const void *src, size_t cnt)
 }
 #endif
 
+#ifndef copy_mc_to_user
+static inline unsigned long __must_check
+copy_mc_to_user(void *dst, const void *src, size_t cnt)
+{
+	return copy_to_user(dst, src, cnt);
+}
+#endif
+
 static __always_inline void pagefault_disabled_inc(void)
 {
 	current->pagefault_disabled++;

From patchwork Mon Dec 9 02:42:54 2024
X-Patchwork-Submitter: Tong Tiangen
X-Patchwork-Id: 13898749
From: Tong Tiangen <tongtiangen@huawei.com>
Subject: [PATCH v13 2/5] arm64: add support for ARCH_HAS_COPY_MC
Date: Mon, 9 Dec 2024 10:42:54 +0800
Message-ID: <20241209024257.3618492-3-tongtiangen@huawei.com>
In-Reply-To: <20241209024257.3618492-1-tongtiangen@huawei.com>
For the arm64 kernel, when it processes a hardware memory error delivered as
a synchronous notification (do_sea()) and the error is consumed within the
kernel, the current handling is to panic. That is not optimal. Take
copy_from/to_user() as an example: if an ld* instruction triggers a memory
error, then even in kernel mode only the associated process is affected.
Killing the user process and isolating the corrupt page is a better choice.

Add a new fixup type, EX_TYPE_KACCESS_ERR_ZERO_MEM_ERR, to identify
instructions that can recover from memory errors triggered by access to
kernel memory. Use this fixup type in __arch_copy_to_user(), so that the
regular copy_to_user() handles kernel memory errors.
Signed-off-by: Tong Tiangen
---
 arch/arm64/Kconfig                   |  1 +
 arch/arm64/include/asm/asm-extable.h | 31 +++++++++++++++++++++++-----
 arch/arm64/include/asm/asm-uaccess.h |  4 ++++
 arch/arm64/include/asm/extable.h     |  1 +
 arch/arm64/lib/copy_to_user.S        | 10 ++++-----
 arch/arm64/mm/extable.c              | 19 +++++++++++++++++
 arch/arm64/mm/fault.c                | 30 ++++++++++++++++++++-------
 7 files changed, 78 insertions(+), 18 deletions(-)

diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index 100570a048c5..5fa54d31162c 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -21,6 +21,7 @@ config ARM64
 	select ARCH_ENABLE_THP_MIGRATION if TRANSPARENT_HUGEPAGE
 	select ARCH_HAS_CACHE_LINE_SIZE
 	select ARCH_HAS_CC_PLATFORM
+	select ARCH_HAS_COPY_MC if ACPI_APEI_GHES
 	select ARCH_HAS_CURRENT_STACK_POINTER
 	select ARCH_HAS_DEBUG_VIRTUAL
 	select ARCH_HAS_DEBUG_VM_PGTABLE
diff --git a/arch/arm64/include/asm/asm-extable.h b/arch/arm64/include/asm/asm-extable.h
index b8a5861dc7b7..0f9123efca0a 100644
--- a/arch/arm64/include/asm/asm-extable.h
+++ b/arch/arm64/include/asm/asm-extable.h
@@ -5,11 +5,13 @@
 #include
 #include
 
-#define EX_TYPE_NONE			0
-#define EX_TYPE_BPF			1
-#define EX_TYPE_UACCESS_ERR_ZERO	2
-#define EX_TYPE_KACCESS_ERR_ZERO	3
-#define EX_TYPE_LOAD_UNALIGNED_ZEROPAD	4
+#define EX_TYPE_NONE				0
+#define EX_TYPE_BPF				1
+#define EX_TYPE_UACCESS_ERR_ZERO		2
+#define EX_TYPE_KACCESS_ERR_ZERO		3
+#define EX_TYPE_LOAD_UNALIGNED_ZEROPAD		4
+/* kernel access memory error safe */
+#define EX_TYPE_KACCESS_ERR_ZERO_MEM_ERR	5
 
 /* Data fields for EX_TYPE_UACCESS_ERR_ZERO */
 #define EX_DATA_REG_ERR_SHIFT	0
@@ -51,6 +53,17 @@
 #define _ASM_EXTABLE_UACCESS(insn, fixup)				\
 	_ASM_EXTABLE_UACCESS_ERR_ZERO(insn, fixup, wzr, wzr)
 
+#define _ASM_EXTABLE_KACCESS_ERR_ZERO_MEM_ERR(insn, fixup, err, zero)	\
+	__ASM_EXTABLE_RAW(insn, fixup,					\
+			  EX_TYPE_KACCESS_ERR_ZERO_MEM_ERR,		\
+			  (						\
+			    EX_DATA_REG(ERR, err) |			\
+			    EX_DATA_REG(ZERO, zero)			\
+			  ))
+
+#define _ASM_EXTABLE_KACCESS_MEM_ERR(insn, fixup)			\
+	_ASM_EXTABLE_KACCESS_ERR_ZERO_MEM_ERR(insn, fixup, wzr, wzr)
+
 /*
  * Create an exception table entry for uaccess `insn`, which will branch to `fixup`
  * when an unhandled fault is taken.
@@ -69,6 +82,14 @@
 	.endif
 	.endm
 
+/*
+ * Create an exception table entry for kaccess `insn`, which will branch to
+ * `fixup` when an unhandled fault is taken.
+ */
+	.macro		_asm_extable_kaccess_mem_err, insn, fixup
+	_ASM_EXTABLE_KACCESS_MEM_ERR(\insn, \fixup)
+	.endm
+
 #else /* __ASSEMBLY__ */
 
 #include
diff --git a/arch/arm64/include/asm/asm-uaccess.h b/arch/arm64/include/asm/asm-uaccess.h
index 5b6efe8abeeb..19aa0180f645 100644
--- a/arch/arm64/include/asm/asm-uaccess.h
+++ b/arch/arm64/include/asm/asm-uaccess.h
@@ -57,6 +57,10 @@ alternative_else_nop_endif
 	.endm
 #endif
 
+#define KERNEL_MEM_ERR(l, x...)			\
+9999:	x;					\
+	_asm_extable_kaccess_mem_err	9999b, l
+
 #define USER(l, x...)				\
 9999:	x;					\
 	_asm_extable_uaccess	9999b, l
diff --git a/arch/arm64/include/asm/extable.h b/arch/arm64/include/asm/extable.h
index 72b0e71cc3de..bc49443bc502 100644
--- a/arch/arm64/include/asm/extable.h
+++ b/arch/arm64/include/asm/extable.h
@@ -46,4 +46,5 @@ bool ex_handler_bpf(const struct exception_table_entry *ex,
 #endif /* !CONFIG_BPF_JIT */
 
 bool fixup_exception(struct pt_regs *regs);
+bool fixup_exception_me(struct pt_regs *regs);
 #endif
diff --git a/arch/arm64/lib/copy_to_user.S b/arch/arm64/lib/copy_to_user.S
index 802231772608..bedab1678431 100644
--- a/arch/arm64/lib/copy_to_user.S
+++ b/arch/arm64/lib/copy_to_user.S
@@ -20,7 +20,7 @@
  *	x0 - bytes not copied
  */
 	.macro ldrb1 reg, ptr, val
-	ldrb  \reg, [\ptr], \val
+	KERNEL_MEM_ERR(9998f, ldrb \reg, [\ptr], \val)
 	.endm
 
 	.macro strb1 reg, ptr, val
@@ -28,7 +28,7 @@
 	.endm
 
 	.macro ldrh1 reg, ptr, val
-	ldrh  \reg, [\ptr], \val
+	KERNEL_MEM_ERR(9998f, ldrh \reg, [\ptr], \val)
 	.endm
 
 	.macro strh1 reg, ptr, val
@@ -36,7 +36,7 @@
 	.endm
 
 	.macro ldr1 reg, ptr, val
-	ldr \reg, [\ptr], \val
+	KERNEL_MEM_ERR(9998f, ldr \reg, [\ptr], \val)
 	.endm
 
 	.macro str1 reg, ptr, val
@@ -44,7 +44,7 @@
 	.endm
 
 	.macro ldp1 reg1, reg2, ptr, val
-	ldp \reg1, \reg2, [\ptr], \val
+	KERNEL_MEM_ERR(9998f, ldp \reg1, \reg2, [\ptr], \val)
 	.endm
 
 	.macro stp1 reg1, reg2, ptr, val
@@ -64,7 +64,7 @@ SYM_FUNC_START(__arch_copy_to_user)
 9997:	cmp	dst, dstin
 	b.ne	9998f
 	// Before being absolutely sure we couldn't copy anything, try harder
-	ldrb	tmp1w, [srcin]
+KERNEL_MEM_ERR(9998f, ldrb tmp1w, [srcin])
 USER(9998f, sttrb tmp1w, [dst])
 	add	dst, dst, #1
 9998:	sub	x0, end, dst			// bytes not copied
diff --git a/arch/arm64/mm/extable.c b/arch/arm64/mm/extable.c
index 228d681a8715..9ad2b6473b60 100644
--- a/arch/arm64/mm/extable.c
+++ b/arch/arm64/mm/extable.c
@@ -72,7 +72,26 @@ bool fixup_exception(struct pt_regs *regs)
 		return ex_handler_uaccess_err_zero(ex, regs);
 	case EX_TYPE_LOAD_UNALIGNED_ZEROPAD:
 		return ex_handler_load_unaligned_zeropad(ex, regs);
+	case EX_TYPE_KACCESS_ERR_ZERO_MEM_ERR:
+		return false;
 	}
 
 	BUG();
 }
+
+bool fixup_exception_me(struct pt_regs *regs)
+{
+	const struct exception_table_entry *ex;
+
+	ex = search_exception_tables(instruction_pointer(regs));
+	if (!ex)
+		return false;
+
+	switch (ex->type) {
+	case EX_TYPE_UACCESS_ERR_ZERO:
+	case EX_TYPE_KACCESS_ERR_ZERO_MEM_ERR:
+		return ex_handler_uaccess_err_zero(ex, regs);
+	}
+
+	return false;
+}
diff --git a/arch/arm64/mm/fault.c b/arch/arm64/mm/fault.c
index ef63651099a9..278e67357f49 100644
--- a/arch/arm64/mm/fault.c
+++ b/arch/arm64/mm/fault.c
@@ -801,21 +801,35 @@ static int do_bad(unsigned long far, unsigned long esr, struct pt_regs *regs)
 	return 1; /* "fault" */
 }
 
+/*
+ * APEI claimed this as a firmware-first notification.
+ * Some processing deferred to task_work before ret_to_user().
+ */
+static int do_apei_claim_sea(struct pt_regs *regs)
+{
+	int ret;
+
+	ret = apei_claim_sea(regs);
+	if (ret)
+		return ret;
+
+	if (!user_mode(regs) && IS_ENABLED(CONFIG_ARCH_HAS_COPY_MC)) {
+		if (!fixup_exception_me(regs))
+			return -ENOENT;
+	}
+
+	return ret;
+}
+
 static int do_sea(unsigned long far, unsigned long esr, struct pt_regs *regs)
 {
 	const struct fault_info *inf;
 	unsigned long siaddr;
 
-	inf = esr_to_fault_info(esr);
-
-	if (user_mode(regs) && apei_claim_sea(regs) == 0) {
-		/*
-		 * APEI claimed this as a firmware-first notification.
-		 * Some processing deferred to task_work before ret_to_user().
-		 */
+	if (do_apei_claim_sea(regs) == 0)
 		return 0;
-	}
 
+	inf = esr_to_fault_info(esr);
 	if (esr & ESR_ELx_FnV) {
 		siaddr = 0;
 	} else {

From patchwork Mon Dec 9 02:42:55 2024
X-Patchwork-Submitter: Tong Tiangen
X-Patchwork-Id: 13898750
From: Tong Tiangen <tongtiangen@huawei.com>
Subject: [PATCH v13 3/5] mm/hwpoison: return -EFAULT when copy fail in copy_mc_[user]_highpage()
Date: Mon, 9 Dec 2024 10:42:55 +0800
Message-ID: <20241209024257.3618492-4-tongtiangen@huawei.com>
In-Reply-To: <20241209024257.3618492-1-tongtiangen@huawei.com>
Currently, copy_mc_[user]_highpage() returns zero on success or, on
failure, the number of bytes that were not copied. While tracking the
number of bytes not copied works fine for x86 and PPC, it is difficult to
do the same on arm64: there is no caller-saved register available in
copy_page() (lib/copy_page.S) in which to save the "bytes not copied"
count, and the upcoming copy_mc_page() runs into the same problem.

Since the callers of copy_mc_[user]_highpage() cannot do anything with the
remaining data (the page has hardware errors) and only check whether the
copy succeeded, make the interface more generic: return an error code
(-EFAULT) when the copy fails, and zero on success.

Signed-off-by: Tong Tiangen
Reviewed-by: Jonathan Cameron
Reviewed-by: Mauro Carvalho Chehab
---
 include/linux/highmem.h | 8 ++++----
 mm/khugepaged.c         | 4 ++--
 2 files changed, 6 insertions(+), 6 deletions(-)

diff --git a/include/linux/highmem.h b/include/linux/highmem.h
index 6e452bd8e7e3..0eb4b9b06837 100644
--- a/include/linux/highmem.h
+++ b/include/linux/highmem.h
@@ -329,8 +329,8 @@ static inline void copy_highpage(struct page *to, struct page *from)
 /*
  * If architecture supports machine check exception handling, define the
  * #MC versions of copy_user_highpage and copy_highpage. They copy a memory
- * page with #MC in source page (@from) handled, and return the number
- * of bytes not copied if there was a #MC, otherwise 0 for success.
+ * page with #MC in source page (@from) handled, and return -EFAULT if there
+ * was a #MC, otherwise 0 for success.
  */
 static inline int copy_mc_user_highpage(struct page *to, struct page *from,
 					unsigned long vaddr, struct vm_area_struct *vma)
@@ -349,7 +349,7 @@ static inline int copy_mc_user_highpage(struct page *to, struct page *from,
 	if (ret)
 		memory_failure_queue(page_to_pfn(from), 0);
 
-	return ret;
+	return ret ? -EFAULT : 0;
 }
 
 static inline int copy_mc_highpage(struct page *to, struct page *from)
@@ -368,7 +368,7 @@ static inline int copy_mc_highpage(struct page *to, struct page *from)
 	if (ret)
 		memory_failure_queue(page_to_pfn(from), 0);
 
-	return ret;
+	return ret ? -EFAULT : 0;
 }
 #else
 static inline int copy_mc_user_highpage(struct page *to, struct page *from,
diff --git a/mm/khugepaged.c b/mm/khugepaged.c
index 6f8d46d107b4..c3cdc0155dcd 100644
--- a/mm/khugepaged.c
+++ b/mm/khugepaged.c
@@ -820,7 +820,7 @@ static int __collapse_huge_page_copy(pte_t *pte, struct folio *folio,
 			continue;
 		}
 		src_page = pte_page(pteval);
-		if (copy_mc_user_highpage(page, src_page, src_addr, vma) > 0) {
+		if (copy_mc_user_highpage(page, src_page, src_addr, vma)) {
 			result = SCAN_COPY_MC;
 			break;
 		}
@@ -2081,7 +2081,7 @@ static int collapse_file(struct mm_struct *mm, unsigned long addr,
 	}
 
 	for (i = 0; i < nr_pages; i++) {
-		if (copy_mc_highpage(dst, folio_page(folio, i)) > 0) {
+		if (copy_mc_highpage(dst, folio_page(folio, i))) {
 			result = SCAN_COPY_MC;
 			goto rollback;
 		}

From patchwork Mon Dec 9 02:42:56 2024
X-Patchwork-Submitter: Tong Tiangen
X-Patchwork-Id: 13898751
Dec 2024 02:43:33 +0000 (UTC) X-FDA: 82873874244.05.05E1179 Received: from szxga01-in.huawei.com (szxga01-in.huawei.com [45.249.212.187]) by imf13.hostedemail.com (Postfix) with ESMTP id 90E1620005 for ; Mon, 9 Dec 2024 02:43:10 +0000 (UTC) Authentication-Results: imf13.hostedemail.com; dkim=none; dmarc=pass (policy=quarantine) header.from=huawei.com; spf=pass (imf13.hostedemail.com: domain of tongtiangen@huawei.com designates 45.249.212.187 as permitted sender) smtp.mailfrom=tongtiangen@huawei.com ARC-Seal: i=1; s=arc-20220608; d=hostedemail.com; t=1733712192; a=rsa-sha256; cv=none; b=wC8AD3btFL358tivcTX1CgnOthAKCoIwzqbAJhcRcR4o380ReCWaBiUpJlYoepWhb1rk0L cUCek9vV0uJ3OWesmBMZd1szZGI8StOJ0livrD8+lMuafdggmzTBn2eDqz/bxrwad9lZug /VMQr2AAKJnliFv+/tr06o8sLrovMfQ= ARC-Authentication-Results: i=1; imf13.hostedemail.com; dkim=none; dmarc=pass (policy=quarantine) header.from=huawei.com; spf=pass (imf13.hostedemail.com: domain of tongtiangen@huawei.com designates 45.249.212.187 as permitted sender) smtp.mailfrom=tongtiangen@huawei.com ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=hostedemail.com; s=arc-20220608; t=1733712192; h=from:from:sender:reply-to:subject:subject:date:date: message-id:message-id:to:to:cc:cc:mime-version:mime-version: content-type:content-type: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=OaOnBlY521kDCiKjaM8WSMz2EeZwjwYkMODwR6ewATM=; b=4nPvFO7opsEH6YAStZIrS5GR0Ao21Whrs0fxyPp6ShTMGVG2hZtpL/mzmk+UNGqelgx2GE EKuO+Rk/P9psRok/6Ue5UALDMCv7QFOxr0SizA0TK+Ve/Nmfab8MN2D+8Ss+9XuFe2wycZ ntAESBbEC5vGKJyk9lxsYF83EvK+/Fw= Received: from mail.maildlp.com (unknown [172.19.88.194]) by szxga01-in.huawei.com (SkyGuard) with ESMTP id 4Y65hc5DnFzhZTn; Mon, 9 Dec 2024 10:41:04 +0800 (CST) Received: from kwepemk500005.china.huawei.com (unknown [7.202.194.90]) by mail.maildlp.com (Postfix) with ESMTPS id CAE141400FF; Mon, 9 Dec 2024 10:43:26 +0800 (CST) Received: from localhost.localdomain 
From: Tong Tiangen <tongtiangen@huawei.com>
Subject: [PATCH v13 4/5] arm64: support copy_mc_[user]_highpage()
Date: Mon, 9 Dec 2024 10:42:56 +0800
Message-ID: <20241209024257.3618492-5-tongtiangen@huawei.com>
In-Reply-To: <20241209024257.3618492-1-tongtiangen@huawei.com>
References: <20241209024257.3618492-1-tongtiangen@huawei.com>
Currently, many scenarios in the kernel can tolerate memory errors when
copying a page [1]-[5], all of which are implemented via
copy_mc_[user]_highpage(). arm64 should also support this mechanism.

Due to MTE, arm64 needs its own architecture implementation of
copy_mc_[user]_highpage(); the macros __HAVE_ARCH_COPY_MC_HIGHPAGE and
__HAVE_ARCH_COPY_MC_USER_HIGHPAGE have been added to control it.

Add a new helper, copy_mc_page(), which provides a page copy
implementation that is safe against hardware memory errors.
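The caller-side contract described above (0 on success, -EFAULT when the copy hits an uncorrected memory error) can be sketched in plain C. This is a userspace model for illustration only, not kernel code; copy_mc_stub() and collapse_one() are hypothetical names standing in for copy_mc_user_highpage() and its khugepaged caller:

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

#define EFAULT		14	/* matches <uapi/asm-generic/errno-base.h> */
#define SCAN_SUCCEED	0
#define SCAN_COPY_MC	1

/* Hypothetical stand-in for the arch hook: copies one "page" and
 * returns 0 on success, or -EFAULT if a (simulated) memory error is
 * hit while reading the source. */
static int copy_mc_stub(char *to, const char *from, size_t len, int inject_error)
{
	if (inject_error)
		return -EFAULT;	/* error consumed, copy aborted */
	memcpy(to, from, len);
	return 0;
}

/* Caller-side pattern from the khugepaged hunks above: any nonzero
 * return aborts the collapse with SCAN_COPY_MC instead of crashing. */
static int collapse_one(char *dst, const char *src, size_t len, int inject_error)
{
	if (copy_mc_stub(dst, src, len, inject_error))
		return SCAN_COPY_MC;
	return SCAN_SUCCEED;
}
```

On real hardware the error path is reached via the extable fixup attached to the ldp instructions; here it is simulated with an inject_error flag.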
The code logic of copy_mc_page() is the same as that of copy_page(); the
main difference is that the ldp instructions in copy_mc_page() carry the
fixup type EX_TYPE_KACCESS_ERR_ZERO_MEM_ERR. The shared logic is
therefore extracted into copy_page_template.S. In addition, fixups for
the MOPS instructions are not considered at present.

[1] commit d302c2398ba2 ("mm, hwpoison: when copy-on-write hits poison, take page offline")
[2] commit 1cb9dc4b475c ("mm: hwpoison: support recovery from HugePage copy-on-write faults")
[3] commit 6b970599e807 ("mm: hwpoison: support recovery from ksm_might_need_to_copy()")
[4] commit 98c76c9f1ef7 ("mm/khugepaged: recover from poisoned anonymous memory")
[5] commit 12904d953364 ("mm/khugepaged: recover from poisoned file-backed memory")

Signed-off-by: Tong Tiangen <tongtiangen@huawei.com>
---
 arch/arm64/include/asm/mte.h        |  9 ++++
 arch/arm64/include/asm/page.h       | 10 ++++
 arch/arm64/lib/Makefile             |  2 +
 arch/arm64/lib/copy_mc_page.S       | 37 ++++++++++++++
 arch/arm64/lib/copy_page.S          | 62 ++----------------------
 arch/arm64/lib/copy_page_template.S | 70 +++++++++++++++++++++++++++
 arch/arm64/lib/mte.S                | 29 +++++++++++
 arch/arm64/mm/copypage.c            | 75 +++++++++++++++++++++++++++
 include/linux/highmem.h             |  8 +++
 9 files changed, 245 insertions(+), 57 deletions(-)
 create mode 100644 arch/arm64/lib/copy_mc_page.S
 create mode 100644 arch/arm64/lib/copy_page_template.S

diff --git a/arch/arm64/include/asm/mte.h b/arch/arm64/include/asm/mte.h
index 6567df8ec8ca..efcd850ea2f8 100644
--- a/arch/arm64/include/asm/mte.h
+++ b/arch/arm64/include/asm/mte.h
@@ -98,6 +98,11 @@ static inline bool try_page_mte_tagging(struct page *page)
 void mte_zero_clear_page_tags(void *addr);
 void mte_sync_tags(pte_t pte, unsigned int nr_pages);
 void mte_copy_page_tags(void *kto, const void *kfrom);
+
+#ifdef CONFIG_ARCH_HAS_COPY_MC
+int mte_copy_mc_page_tags(void *kto, const void *kfrom);
+#endif
+
 void mte_thread_init_user(void);
 void mte_thread_switch(struct task_struct *next);
 void mte_cpu_setup(void);
@@ -134,6 +139,10 @@
static inline void mte_sync_tags(pte_t pte, unsigned int nr_pages)
 static inline void mte_copy_page_tags(void *kto, const void *kfrom)
 {
 }
+static inline int mte_copy_mc_page_tags(void *kto, const void *kfrom)
+{
+	return 0;
+}
 static inline void mte_thread_init_user(void)
 {
 }
diff --git a/arch/arm64/include/asm/page.h b/arch/arm64/include/asm/page.h
index 2312e6ee595f..304cc86b8a10 100644
--- a/arch/arm64/include/asm/page.h
+++ b/arch/arm64/include/asm/page.h
@@ -29,6 +29,16 @@ void copy_user_highpage(struct page *to, struct page *from,
 void copy_highpage(struct page *to, struct page *from);
 #define __HAVE_ARCH_COPY_HIGHPAGE

+#ifdef CONFIG_ARCH_HAS_COPY_MC
+int copy_mc_page(void *to, const void *from);
+int copy_mc_highpage(struct page *to, struct page *from);
+#define __HAVE_ARCH_COPY_MC_HIGHPAGE
+
+int copy_mc_user_highpage(struct page *to, struct page *from,
+		unsigned long vaddr, struct vm_area_struct *vma);
+#define __HAVE_ARCH_COPY_MC_USER_HIGHPAGE
+#endif
+
 struct folio *vma_alloc_zeroed_movable_folio(struct vm_area_struct *vma,
 						unsigned long vaddr);
 #define vma_alloc_zeroed_movable_folio vma_alloc_zeroed_movable_folio
diff --git a/arch/arm64/lib/Makefile b/arch/arm64/lib/Makefile
index 8e882f479d98..78b0e9904689 100644
--- a/arch/arm64/lib/Makefile
+++ b/arch/arm64/lib/Makefile
@@ -13,6 +13,8 @@ endif
 lib-$(CONFIG_ARCH_HAS_UACCESS_FLUSHCACHE) += uaccess_flushcache.o

+lib-$(CONFIG_ARCH_HAS_COPY_MC) += copy_mc_page.o
+
 obj-$(CONFIG_CRC32) += crc32.o crc32-glue.o

 obj-$(CONFIG_FUNCTION_ERROR_INJECTION) += error-inject.o
diff --git a/arch/arm64/lib/copy_mc_page.S b/arch/arm64/lib/copy_mc_page.S
new file mode 100644
index 000000000000..51564828c30c
--- /dev/null
+++ b/arch/arm64/lib/copy_mc_page.S
@@ -0,0 +1,37 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+
+/*
+ * Copy a page from src to dest (both are page aligned) with memory error safe
+ *
+ * Parameters:
+ *	x0 - dest
+ *	x1 - src
+ * Returns:
+ *	x0 - Return 0 if copy success, or -EFAULT if anything goes wrong
+ *	     while copying.
+ */
+	.macro ldp1 reg1, reg2, ptr, val
+	KERNEL_MEM_ERR(9998f, ldp \reg1, \reg2, [\ptr, \val])
+	.endm
+
+SYM_FUNC_START(__pi_copy_mc_page)
+#include "copy_page_template.S"
+
+	mov x0, #0
+	ret
+
+9998:	mov x0, #-EFAULT
+	ret
+
+SYM_FUNC_END(__pi_copy_mc_page)
+SYM_FUNC_ALIAS(copy_mc_page, __pi_copy_mc_page)
+EXPORT_SYMBOL(copy_mc_page)
diff --git a/arch/arm64/lib/copy_page.S b/arch/arm64/lib/copy_page.S
index e6374e7e5511..d0186bbf99f1 100644
--- a/arch/arm64/lib/copy_page.S
+++ b/arch/arm64/lib/copy_page.S
@@ -17,65 +17,13 @@
  * x0 - dest
  * x1 - src
  */
-SYM_FUNC_START(__pi_copy_page)
-#ifdef CONFIG_AS_HAS_MOPS
-	.arch_extension mops
-alternative_if_not ARM64_HAS_MOPS
-	b	.Lno_mops
-alternative_else_nop_endif
-
-	mov	x2, #PAGE_SIZE
-	cpypwn	[x0]!, [x1]!, x2!
-	cpymwn	[x0]!, [x1]!, x2!
-	cpyewn	[x0]!, [x1]!, x2!
-	ret
-.Lno_mops:
-#endif
-	ldp	x2, x3, [x1]
-	ldp	x4, x5, [x1, #16]
-	ldp	x6, x7, [x1, #32]
-	ldp	x8, x9, [x1, #48]
-	ldp	x10, x11, [x1, #64]
-	ldp	x12, x13, [x1, #80]
-	ldp	x14, x15, [x1, #96]
-	ldp	x16, x17, [x1, #112]
-
-	add	x0, x0, #256
-	add	x1, x1, #128
-1:
-	tst	x0, #(PAGE_SIZE - 1)
-	stnp	x2, x3, [x0, #-256]
-	ldp	x2, x3, [x1]
-	stnp	x4, x5, [x0, #16 - 256]
-	ldp	x4, x5, [x1, #16]
-	stnp	x6, x7, [x0, #32 - 256]
-	ldp	x6, x7, [x1, #32]
-	stnp	x8, x9, [x0, #48 - 256]
-	ldp	x8, x9, [x1, #48]
-	stnp	x10, x11, [x0, #64 - 256]
-	ldp	x10, x11, [x1, #64]
-	stnp	x12, x13, [x0, #80 - 256]
-	ldp	x12, x13, [x1, #80]
-	stnp	x14, x15, [x0, #96 - 256]
-	ldp	x14, x15, [x1, #96]
-	stnp	x16, x17, [x0, #112 - 256]
-	ldp	x16, x17, [x1, #112]
-
-	add	x0, x0, #128
-	add	x1, x1, #128
-
-	b.ne	1b
-
-	stnp	x2, x3, [x0, #-256]
-	stnp	x4, x5, [x0, #16 - 256]
-	stnp	x6, x7, [x0, #32 - 256]
-	stnp	x8, x9, [x0, #48 - 256]
-	stnp	x10, x11, [x0, #64 - 256]
-	stnp	x12, x13, [x0, #80 - 256]
-	stnp	x14, x15, [x0, #96 - 256]
-	stnp	x16, x17, [x0, #112 - 256]
+	.macro ldp1 reg1, reg2, ptr, val
+	ldp \reg1, \reg2, [\ptr, \val]
+	.endm
+
+SYM_FUNC_START(__pi_copy_page)
+#include "copy_page_template.S"
 	ret
 SYM_FUNC_END(__pi_copy_page)
 SYM_FUNC_ALIAS(copy_page, __pi_copy_page)
diff --git a/arch/arm64/lib/copy_page_template.S b/arch/arm64/lib/copy_page_template.S
new file mode 100644
index 000000000000..f96c7988c93d
--- /dev/null
+++ b/arch/arm64/lib/copy_page_template.S
@@ -0,0 +1,70 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/*
+ * Copyright (C) 2012 ARM Ltd.
+ */
+
+/*
+ * Copy a page from src to dest (both are page aligned)
+ *
+ * Parameters:
+ *	x0 - dest
+ *	x1 - src
+ */
+
+#ifdef CONFIG_AS_HAS_MOPS
+	.arch_extension mops
+alternative_if_not ARM64_HAS_MOPS
+	b	.Lno_mops
+alternative_else_nop_endif
+
+	mov	x2, #PAGE_SIZE
+	cpypwn	[x0]!, [x1]!, x2!
+	cpymwn	[x0]!, [x1]!, x2!
+	cpyewn	[x0]!, [x1]!, x2!
+	ret
+.Lno_mops:
+#endif
+	ldp1	x2, x3, x1, #0
+	ldp1	x4, x5, x1, #16
+	ldp1	x6, x7, x1, #32
+	ldp1	x8, x9, x1, #48
+	ldp1	x10, x11, x1, #64
+	ldp1	x12, x13, x1, #80
+	ldp1	x14, x15, x1, #96
+	ldp1	x16, x17, x1, #112
+
+	add	x0, x0, #256
+	add	x1, x1, #128
+1:
+	tst	x0, #(PAGE_SIZE - 1)
+
+	stnp	x2, x3, [x0, #-256]
+	ldp1	x2, x3, x1, #0
+	stnp	x4, x5, [x0, #16 - 256]
+	ldp1	x4, x5, x1, #16
+	stnp	x6, x7, [x0, #32 - 256]
+	ldp1	x6, x7, x1, #32
+	stnp	x8, x9, [x0, #48 - 256]
+	ldp1	x8, x9, x1, #48
+	stnp	x10, x11, [x0, #64 - 256]
+	ldp1	x10, x11, x1, #64
+	stnp	x12, x13, [x0, #80 - 256]
+	ldp1	x12, x13, x1, #80
+	stnp	x14, x15, [x0, #96 - 256]
+	ldp1	x14, x15, x1, #96
+	stnp	x16, x17, [x0, #112 - 256]
+	ldp1	x16, x17, x1, #112
+
+	add	x0, x0, #128
+	add	x1, x1, #128
+
+	b.ne	1b
+
+	stnp	x2, x3, [x0, #-256]
+	stnp	x4, x5, [x0, #16 - 256]
+	stnp	x6, x7, [x0, #32 - 256]
+	stnp	x8, x9, [x0, #48 - 256]
+	stnp	x10, x11, [x0, #64 - 256]
+	stnp	x12, x13, [x0, #80 - 256]
+	stnp	x14, x15, [x0, #96 - 256]
+	stnp	x16, x17, [x0, #112 - 256]
diff --git a/arch/arm64/lib/mte.S b/arch/arm64/lib/mte.S
index 5018ac03b6bf..9d4eeb76a838 100644
--- a/arch/arm64/lib/mte.S
+++ b/arch/arm64/lib/mte.S
@@ -80,6 +80,35 @@ SYM_FUNC_START(mte_copy_page_tags)
 	ret
 SYM_FUNC_END(mte_copy_page_tags)

+#ifdef CONFIG_ARCH_HAS_COPY_MC
+/*
+ * Copy the tags from the source page to the destination one with memory error safe
+ * x0 - address of the destination page
+ * x1 - address of the source page
+ * Returns:
+ *	x0 - Return 0 if copy success, or
+ *	     -EFAULT if anything goes wrong while copying.
+ */
+SYM_FUNC_START(mte_copy_mc_page_tags)
+	mov	x2, x0
+	mov	x3, x1
+	multitag_transfer_size x5, x6
+1:
+KERNEL_MEM_ERR(2f, ldgm x4, [x3])
+	stgm	x4, [x2]
+	add	x2, x2, x5
+	add	x3, x3, x5
+	tst	x2, #(PAGE_SIZE - 1)
+	b.ne	1b
+
+	mov	x0, #0
+	ret
+
+2:	mov	x0, #-EFAULT
+	ret
+SYM_FUNC_END(mte_copy_mc_page_tags)
+#endif
+
 /*
  * Read tags from a user buffer (one tag per byte) and set the corresponding
  * tags at the given kernel address. Used by PTRACE_POKEMTETAGS.
diff --git a/arch/arm64/mm/copypage.c b/arch/arm64/mm/copypage.c
index a86c897017df..1a369f325ebb 100644
--- a/arch/arm64/mm/copypage.c
+++ b/arch/arm64/mm/copypage.c
@@ -67,3 +67,78 @@ void copy_user_highpage(struct page *to, struct page *from,
 	flush_dcache_page(to);
 }
 EXPORT_SYMBOL_GPL(copy_user_highpage);
+
+#ifdef CONFIG_ARCH_HAS_COPY_MC
+/*
+ * Return -EFAULT if anything goes wrong while copying page or mte.
+ */
+int copy_mc_highpage(struct page *to, struct page *from)
+{
+	void *kto = page_address(to);
+	void *kfrom = page_address(from);
+	struct folio *src = page_folio(from);
+	struct folio *dst = page_folio(to);
+	unsigned int i, nr_pages;
+	int ret;
+
+	ret = copy_mc_page(kto, kfrom);
+	if (ret)
+		return -EFAULT;
+
+	if (kasan_hw_tags_enabled())
+		page_kasan_tag_reset(to);
+
+	if (!system_supports_mte())
+		return 0;
+
+	if (folio_test_hugetlb(src)) {
+		if (!folio_test_hugetlb_mte_tagged(src) ||
+		    from != folio_page(src, 0))
+			return 0;
+
+		WARN_ON_ONCE(!folio_try_hugetlb_mte_tagging(dst));
+
+		/*
+		 * Populate tags for all subpages.
+		 *
+		 * Don't assume the first page is head page since
+		 * huge page copy may start from any subpage.
+		 */
+		nr_pages = folio_nr_pages(src);
+		for (i = 0; i < nr_pages; i++) {
+			kfrom = page_address(folio_page(src, i));
+			kto = page_address(folio_page(dst, i));
+			ret = mte_copy_mc_page_tags(kto, kfrom);
+			if (ret)
+				return -EFAULT;
+		}
+		folio_set_hugetlb_mte_tagged(dst);
+	} else if (page_mte_tagged(from)) {
+		/* It's a new page, shouldn't have been tagged yet */
+		WARN_ON_ONCE(!try_page_mte_tagging(to));
+
+		ret = mte_copy_mc_page_tags(kto, kfrom);
+		if (ret)
+			return -EFAULT;
+		set_page_mte_tagged(to);
+	}
+
+	return 0;
+}
+EXPORT_SYMBOL(copy_mc_highpage);
+
+int copy_mc_user_highpage(struct page *to, struct page *from,
+		unsigned long vaddr, struct vm_area_struct *vma)
+{
+	int ret;
+
+	ret = copy_mc_highpage(to, from);
+	if (ret)
+		return ret;
+
+	flush_dcache_page(to);
+
+	return 0;
+}
+EXPORT_SYMBOL_GPL(copy_mc_user_highpage);
+#endif
diff --git a/include/linux/highmem.h b/include/linux/highmem.h
index 0eb4b9b06837..89a6e0fd0b31 100644
--- a/include/linux/highmem.h
+++ b/include/linux/highmem.h
@@ -326,6 +326,7 @@ static inline void copy_highpage(struct page *to, struct page *from)
 #endif

 #ifdef copy_mc_to_kernel
+#ifndef __HAVE_ARCH_COPY_MC_USER_HIGHPAGE
 /*
  * If architecture supports machine check exception handling, define the
  * #MC versions of copy_user_highpage and copy_highpage. They copy a memory
@@ -351,7 +352,9 @@ static inline int copy_mc_user_highpage(struct page *to, struct page *from,

 	return ret ? -EFAULT : 0;
 }
+#endif

+#ifndef __HAVE_ARCH_COPY_MC_HIGHPAGE
 static inline int copy_mc_highpage(struct page *to, struct page *from)
 {
 	unsigned long ret;
@@ -370,20 +373,25 @@ static inline int copy_mc_highpage(struct page *to, struct page *from)

 	return ret ? -EFAULT : 0;
 }
+#endif
 #else
+#ifndef __HAVE_ARCH_COPY_MC_USER_HIGHPAGE
 static inline int copy_mc_user_highpage(struct page *to, struct page *from,
 		unsigned long vaddr, struct vm_area_struct *vma)
 {
 	copy_user_highpage(to, from, vaddr, vma);
 	return 0;
 }
+#endif

+#ifndef __HAVE_ARCH_COPY_MC_HIGHPAGE
 static inline int copy_mc_highpage(struct page *to, struct page *from)
 {
 	copy_highpage(to, from);
 	return 0;
 }
 #endif
+#endif

 static inline void memcpy_page(struct page *dst_page, size_t dst_off,
 			       struct page *src_page, size_t src_off,

From patchwork Mon Dec 9 02:42:57 2024
From: Tong Tiangen <tongtiangen@huawei.com>
Subject: [PATCH v13 5/5] arm64: introduce copy_mc_to_kernel() implementation
Date: Mon, 9 Dec 2024 10:42:57 +0800
Message-ID: <20241209024257.3618492-6-tongtiangen@huawei.com>
In-Reply-To: <20241209024257.3618492-1-tongtiangen@huawei.com>
References: <20241209024257.3618492-1-tongtiangen@huawei.com>
The copy_mc_to_kernel() helper is a memory copy implementation that
handles source exceptions. It can be used in memory copy scenarios that
tolerate hardware memory errors (e.g. pmem_read/dax_copy_to_iter).

Currently only x86 and ppc support this helper; add it for arm64 as
well, when ARCH_HAS_COPY_MC is defined, by implementing the
copy_mc_to_kernel() and memcpy_mc() functions.
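Note that the return convention differs from copy_mc_[user]_highpage(): copy_mc_to_kernel() returns 0 on success, or the number of bytes not copied. A minimal userspace sketch of that contract follows (illustrative only, not kernel code; copy_mc_to_kernel_stub() and read_with_recovery() are hypothetical stand-ins, with the fault position passed in explicitly instead of coming from hardware):

```c
#include <assert.h>
#include <string.h>

/* Hypothetical model of the copy_mc_to_kernel() contract described
 * above: returns 0 on success, or the number of bytes NOT copied when
 * a (simulated) uncorrected memory error stops the copy early. */
static unsigned long copy_mc_to_kernel_stub(void *to, const void *from,
					    unsigned long size,
					    unsigned long fault_at)
{
	unsigned long done = size < fault_at ? size : fault_at;

	memcpy(to, from, done);
	return size - done;	/* 0 means the whole buffer was copied */
}

/* pmem_read/dax-style caller: a short copy is reported to the caller
 * as -EIO instead of crashing the kernel. */
static int read_with_recovery(void *to, const void *from,
			      unsigned long size, unsigned long fault_at)
{
	return copy_mc_to_kernel_stub(to, from, size, fault_at) ? -5 /* -EIO */ : 0;
}
```

This mirrors how the pmem driver turns a partial machine-check copy into an I/O error visible to userspace.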
Because no caller-saved GPR is available to track "bytes not copied" in
memcpy(), memcpy_mc() is modelled on the implementation of
copy_from_user(). In addition, fixups for the MOPS instructions are not
considered at present.

Signed-off-by: Tong Tiangen <tongtiangen@huawei.com>
---
 arch/arm64/include/asm/string.h  |  5 ++
 arch/arm64/include/asm/uaccess.h | 18 ++++++
 arch/arm64/lib/Makefile          |  2 +-
 arch/arm64/lib/memcpy_mc.S       | 98 ++++++++++++++++++++++++++++++++
 mm/kasan/shadow.c                | 12 ++++
 5 files changed, 134 insertions(+), 1 deletion(-)
 create mode 100644 arch/arm64/lib/memcpy_mc.S

diff --git a/arch/arm64/include/asm/string.h b/arch/arm64/include/asm/string.h
index 3a3264ff47b9..23eca4fb24fa 100644
--- a/arch/arm64/include/asm/string.h
+++ b/arch/arm64/include/asm/string.h
@@ -35,6 +35,10 @@ extern void *memchr(const void *, int, __kernel_size_t);
 extern void *memcpy(void *, const void *, __kernel_size_t);
 extern void *__memcpy(void *, const void *, __kernel_size_t);

+#define __HAVE_ARCH_MEMCPY_MC
+extern int memcpy_mc(void *, const void *, __kernel_size_t);
+extern int __memcpy_mc(void *, const void *, __kernel_size_t);
+
 #define __HAVE_ARCH_MEMMOVE
 extern void *memmove(void *, const void *, __kernel_size_t);
 extern void *__memmove(void *, const void *, __kernel_size_t);
@@ -57,6 +61,7 @@ void memcpy_flushcache(void *dst, const void *src, size_t cnt);
  */
 #define memcpy(dst, src, len) __memcpy(dst, src, len)
+#define memcpy_mc(dst, src, len) __memcpy_mc(dst, src, len)
 #define memmove(dst, src, len) __memmove(dst, src, len)
 #define memset(s, c, n) __memset(s, c, n)
diff --git a/arch/arm64/include/asm/uaccess.h b/arch/arm64/include/asm/uaccess.h
index 5b91803201ef..2a14b732306a 100644
--- a/arch/arm64/include/asm/uaccess.h
+++ b/arch/arm64/include/asm/uaccess.h
@@ -542,4 +542,22 @@ static inline void put_user_gcs(unsigned long val, unsigned long __user *addr,

 #endif /* CONFIG_ARM64_GCS */

+#ifdef CONFIG_ARCH_HAS_COPY_MC
+/**
+ * copy_mc_to_kernel - memory copy that handles source exceptions
+ *
+ * @to: destination address
+ * @from: source address
+ * @size: number of bytes to copy
+ *
+ * Return 0 for success, or bytes not copied.
+ */
+static inline unsigned long __must_check
+copy_mc_to_kernel(void *to, const void *from, unsigned long size)
+{
+	return memcpy_mc(to, from, size);
+}
+#define copy_mc_to_kernel copy_mc_to_kernel
+#endif
+
 #endif /* __ASM_UACCESS_H */
diff --git a/arch/arm64/lib/Makefile b/arch/arm64/lib/Makefile
index 78b0e9904689..326d71ba0517 100644
--- a/arch/arm64/lib/Makefile
+++ b/arch/arm64/lib/Makefile
@@ -13,7 +13,7 @@ endif
 lib-$(CONFIG_ARCH_HAS_UACCESS_FLUSHCACHE) += uaccess_flushcache.o

-lib-$(CONFIG_ARCH_HAS_COPY_MC) += copy_mc_page.o
+lib-$(CONFIG_ARCH_HAS_COPY_MC) += copy_mc_page.o memcpy_mc.o

 obj-$(CONFIG_CRC32) += crc32.o crc32-glue.o
diff --git a/arch/arm64/lib/memcpy_mc.S b/arch/arm64/lib/memcpy_mc.S
new file mode 100644
index 000000000000..cb9caaa1ab0b
--- /dev/null
+++ b/arch/arm64/lib/memcpy_mc.S
@@ -0,0 +1,98 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/*
+ * Copyright (C) 2013 ARM Ltd.
+ * Copyright (C) 2013 Linaro.
+ *
+ * This code is based on glibc cortex strings work originally authored by Linaro
+ * be found @
+ *
+ * http://bazaar.launchpad.net/~linaro-toolchain-dev/cortex-strings/trunk/
+ * files/head:/src/aarch64/
+ */
+
+#include
+#include
+#include
+#include
+
+/*
+ * Copy a buffer from src to dest (alignment handled by the hardware)
+ *
+ * Parameters:
+ *	x0 - dest
+ *	x1 - src
+ *	x2 - n
+ * Returns:
+ *	x0 - bytes not copied
+ */
+	.macro ldrb1 reg, ptr, val
+	KERNEL_MEM_ERR(9997f, ldrb \reg, [\ptr], \val)
+	.endm
+
+	.macro strb1 reg, ptr, val
+	strb \reg, [\ptr], \val
+	.endm
+
+	.macro ldrh1 reg, ptr, val
+	KERNEL_MEM_ERR(9997f, ldrh \reg, [\ptr], \val)
+	.endm
+
+	.macro strh1 reg, ptr, val
+	strh \reg, [\ptr], \val
+	.endm
+
+	.macro ldr1 reg, ptr, val
+	KERNEL_MEM_ERR(9997f, ldr \reg, [\ptr], \val)
+	.endm
+
+	.macro str1 reg, ptr, val
+	str \reg, [\ptr], \val
+	.endm
+
+	.macro ldp1 reg1, reg2, ptr, val
+	KERNEL_MEM_ERR(9997f, ldp \reg1, \reg2, [\ptr], \val)
+	.endm
+
+	.macro stp1 reg1, reg2, ptr, val
+	stp \reg1, \reg2, [\ptr], \val
+	.endm
+
+end	.req	x5
+SYM_FUNC_START(__memcpy_mc_generic)
+	add	end, x0, x2
+#include "copy_template.S"
+	mov	x0, #0			// Nothing to copy
+	ret
+
+	// Exception fixups
+9997:	sub	x0, end, dst		// bytes not copied
+	ret
+SYM_FUNC_END(__memcpy_mc_generic)
+
+#ifdef CONFIG_AS_HAS_MOPS
+	.arch_extension mops
+SYM_FUNC_START(__memcpy_mc)
+alternative_if_not ARM64_HAS_MOPS
+	b	__memcpy_mc_generic
+alternative_else_nop_endif
+
+dstin	.req	x0
+src	.req	x1
+count	.req	x2
+dst	.req	x3
+
+	mov	dst, dstin
+	cpyp	[dst]!, [src]!, count!
+	cpym	[dst]!, [src]!, count!
+	cpye	[dst]!, [src]!, count!
+
+	mov	x0, #0			// Nothing to copy
+	ret
+SYM_FUNC_END(__memcpy_mc)
+#else
+SYM_FUNC_ALIAS(__memcpy_mc, __memcpy_mc_generic)
+#endif
+
+EXPORT_SYMBOL(__memcpy_mc)
+SYM_FUNC_ALIAS_WEAK(memcpy_mc, __memcpy_mc)
+EXPORT_SYMBOL(memcpy_mc)
diff --git a/mm/kasan/shadow.c b/mm/kasan/shadow.c
index 88d1c9dcb507..a12770fb2e9c 100644
--- a/mm/kasan/shadow.c
+++ b/mm/kasan/shadow.c
@@ -79,6 +79,18 @@ void *memcpy(void *dest, const void *src, size_t len)
 }
 #endif

+#ifdef __HAVE_ARCH_MEMCPY_MC
+#undef memcpy_mc
+int memcpy_mc(void *dest, const void *src, size_t len)
+{
+	if (!kasan_check_range(src, len, false, _RET_IP_) ||
+	    !kasan_check_range(dest, len, true, _RET_IP_))
+		return (int)len;
+
+	return __memcpy_mc(dest, src, len);
+}
+#endif
+
 void *__asan_memset(void *addr, int c, ssize_t len)
 {
 	if (!kasan_check_range(addr, len, true, _RET_IP_))