From patchwork Tue May 28 08:59:10 2024
X-Patchwork-Submitter: Tong Tiangen
X-Patchwork-Id: 13676340
From: Tong Tiangen <tongtiangen@huawei.com>
To: Mark Rutland, Catalin Marinas, Will Deacon, Andrew Morton, James Morse,
    Robin Murphy, Andrey Konovalov, Dmitry Vyukov, Vincenzo Frascino,
    Michael Ellerman, Nicholas Piggin, Andrey Ryabinin, Alexander Potapenko,
    Christophe Leroy, Aneesh Kumar K.V, "Naveen N. Rao", Thomas Gleixner,
    Ingo Molnar, Borislav Petkov, Dave Hansen, "H. Peter Anvin"
Cc: Tong Tiangen, Guohanjun
Subject: [PATCH v12 1/6] uaccess: add generic fallback version of copy_mc_to_user()
Date: Tue, 28 May 2024 16:59:10 +0800
Message-ID: <20240528085915.1955987-2-tongtiangen@huawei.com>
In-Reply-To: <20240528085915.1955987-1-tongtiangen@huawei.com>
References: <20240528085915.1955987-1-tongtiangen@huawei.com>

x86 and powerpc have their own implementations of copy_mc_to_user().
Add a generic fallback in include/linux/uaccess.h to prepare for other
architectures to enable CONFIG_ARCH_HAS_COPY_MC.
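To illustrate the calling convention, here is a minimal sketch of a consumer;
the function pmem_demo_read() and its context are hypothetical, and only
copy_mc_to_user() and its return convention come from this patch:

#include <linux/uaccess.h>

/*
 * Hypothetical read path copying kernel memory that may carry a hardware
 * memory error (e.g. pmem) out to user space.  On an architecture without
 * ARCH_HAS_COPY_MC this compiles down to a plain copy_to_user() via the
 * new generic fallback.
 */
static ssize_t pmem_demo_read(void __user *ubuf, const void *kbuf, size_t len)
{
	/* Returns the number of bytes NOT copied; 0 means full success. */
	unsigned long rem = copy_mc_to_user(ubuf, kbuf, len);

	if (rem)
		return -EFAULT;

	return len;
}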
Signed-off-by: Tong Tiangen
Acked-by: Michael Ellerman
Reviewed-by: Mauro Carvalho Chehab
Reviewed-by: Jonathan Cameron
---
 arch/powerpc/include/asm/uaccess.h | 1 +
 arch/x86/include/asm/uaccess.h     | 1 +
 include/linux/uaccess.h            | 8 ++++++++
 3 files changed, 10 insertions(+)

diff --git a/arch/powerpc/include/asm/uaccess.h b/arch/powerpc/include/asm/uaccess.h
index de10437fd206..df42e6ad647f 100644
--- a/arch/powerpc/include/asm/uaccess.h
+++ b/arch/powerpc/include/asm/uaccess.h
@@ -381,6 +381,7 @@ copy_mc_to_user(void __user *to, const void *from, unsigned long n)
 
 	return n;
 }
+#define copy_mc_to_user copy_mc_to_user
 #endif
 
 extern long __copy_from_user_flushcache(void *dst, const void __user *src,
diff --git a/arch/x86/include/asm/uaccess.h b/arch/x86/include/asm/uaccess.h
index 0f9bab92a43d..309f2439327e 100644
--- a/arch/x86/include/asm/uaccess.h
+++ b/arch/x86/include/asm/uaccess.h
@@ -497,6 +497,7 @@ copy_mc_to_kernel(void *to, const void *from, unsigned len);
 
 unsigned long __must_check
 copy_mc_to_user(void __user *to, const void *from, unsigned len);
+#define copy_mc_to_user copy_mc_to_user
 #endif
 
 /*
diff --git a/include/linux/uaccess.h b/include/linux/uaccess.h
index 3064314f4832..0dfa9241b6ee 100644
--- a/include/linux/uaccess.h
+++ b/include/linux/uaccess.h
@@ -205,6 +205,14 @@ copy_mc_to_kernel(void *dst, const void *src, size_t cnt)
 }
 #endif
 
+#ifndef copy_mc_to_user
+static inline unsigned long __must_check
+copy_mc_to_user(void *dst, const void *src, size_t cnt)
+{
+	return copy_to_user(dst, src, cnt);
+}
+#endif
+
 static __always_inline void pagefault_disabled_inc(void)
 {
 	current->pagefault_disabled++;
From patchwork Tue May 28 08:59:11 2024
X-Patchwork-Id: 13676339
From: Tong Tiangen <tongtiangen@huawei.com>
Subject: [PATCH v12 2/6] arm64: add support for ARCH_HAS_COPY_MC
Date: Tue, 28 May 2024 16:59:11 +0800
Message-ID: <20240528085915.1955987-3-tongtiangen@huawei.com>
In-Reply-To: <20240528085915.1955987-1-tongtiangen@huawei.com>
For the arm64 kernel, when it processes a hardware memory error delivered
as a synchronous notification (do_sea()), the current handling is to panic
if the error is consumed within the kernel. That is not optimal: take
copy_from/to_user as an example. If an ld* instruction triggers a memory
error, only the associated process is affected, even though the CPU was in
kernel mode. Killing the user process and isolating the corrupted page is
a better choice.

A new fixup type, EX_TYPE_KACCESS_ERR_ZERO_ME_SAFE, is added to identify
instructions that can recover from memory errors triggered by access to
kernel memory.
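To make the intended flow concrete, here is a condensed C restatement of
the do_sea() rework in the diff below; the helper name sea_can_recover()
is invented for this sketch (the patch itself uses do_apei_claim_sea()):

/*
 * Condensed restatement of the new SEA handling: a kernel-mode error is
 * only recoverable when the faulting instruction carries an ME-safe
 * extable entry, i.e. fixup_exception_me() finds and applies a fixup;
 * otherwise the handler falls through to the existing fatal path.
 */
static bool sea_can_recover(struct pt_regs *regs)
{
	if (user_mode(regs))
		return apei_claim_sea(regs) == 0;

	return IS_ENABLED(CONFIG_ARCH_HAS_COPY_MC) &&
	       fixup_exception_me(regs) &&
	       apei_claim_sea(regs) == 0;
}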
Signed-off-by: Tong Tiangen
---
 arch/arm64/Kconfig                   |  1 +
 arch/arm64/include/asm/asm-extable.h | 31 +++++++++++++++++++++++-----
 arch/arm64/include/asm/asm-uaccess.h |  4 ++++
 arch/arm64/include/asm/extable.h     |  1 +
 arch/arm64/lib/copy_to_user.S        | 10 ++++-----
 arch/arm64/mm/extable.c              | 19 +++++++++++++++++
 arch/arm64/mm/fault.c                | 27 +++++++++++++++++-------
 7 files changed, 75 insertions(+), 18 deletions(-)

diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index 5d91259ee7b5..13ca06ddf3dd 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -20,6 +20,7 @@ config ARM64
 	select ARCH_ENABLE_SPLIT_PMD_PTLOCK if PGTABLE_LEVELS > 2
 	select ARCH_ENABLE_THP_MIGRATION if TRANSPARENT_HUGEPAGE
 	select ARCH_HAS_CACHE_LINE_SIZE
+	select ARCH_HAS_COPY_MC if ACPI_APEI_GHES
 	select ARCH_HAS_CURRENT_STACK_POINTER
 	select ARCH_HAS_DEBUG_VIRTUAL
 	select ARCH_HAS_DEBUG_VM_PGTABLE
diff --git a/arch/arm64/include/asm/asm-extable.h b/arch/arm64/include/asm/asm-extable.h
index 980d1dd8e1a3..9c0664fe1eb1 100644
--- a/arch/arm64/include/asm/asm-extable.h
+++ b/arch/arm64/include/asm/asm-extable.h
@@ -5,11 +5,13 @@
 #include <linux/bits.h>
 #include <asm/gpr-num.h>
 
-#define EX_TYPE_NONE			0
-#define EX_TYPE_BPF			1
-#define EX_TYPE_UACCESS_ERR_ZERO	2
-#define EX_TYPE_KACCESS_ERR_ZERO	3
-#define EX_TYPE_LOAD_UNALIGNED_ZEROPAD	4
+#define EX_TYPE_NONE				0
+#define EX_TYPE_BPF				1
+#define EX_TYPE_UACCESS_ERR_ZERO		2
+#define EX_TYPE_KACCESS_ERR_ZERO		3
+#define EX_TYPE_LOAD_UNALIGNED_ZEROPAD		4
+/* kernel access memory error safe */
+#define EX_TYPE_KACCESS_ERR_ZERO_ME_SAFE	5
 
 /* Data fields for EX_TYPE_UACCESS_ERR_ZERO */
 #define EX_DATA_REG_ERR_SHIFT	0
@@ -51,6 +53,17 @@
 #define _ASM_EXTABLE_UACCESS(insn, fixup)				\
 	_ASM_EXTABLE_UACCESS_ERR_ZERO(insn, fixup, wzr, wzr)
 
+#define _ASM_EXTABLE_KACCESS_ERR_ZERO_ME_SAFE(insn, fixup, err, zero)	\
+	__ASM_EXTABLE_RAW(insn, fixup,					\
+			  EX_TYPE_KACCESS_ERR_ZERO_ME_SAFE,		\
+			  (						\
+			    EX_DATA_REG(ERR, err) |			\
+			    EX_DATA_REG(ZERO, zero)			\
+			  ))
+
+#define _ASM_EXTABLE_KACCESS_ME_SAFE(insn, fixup)			\
+	_ASM_EXTABLE_KACCESS_ERR_ZERO_ME_SAFE(insn, fixup, wzr, wzr)
+
 /*
  * Create an exception table entry for uaccess `insn`, which will branch to `fixup`
  * when an unhandled fault is taken.
@@ -69,6 +82,14 @@
 	.endif
 	.endm
 
+/*
+ * Create an exception table entry for kaccess me(memory error) safe `insn`, which
+ * will branch to `fixup` when an unhandled fault is taken.
+ */
+	.macro          _asm_extable_kaccess_me_safe, insn, fixup
+	_ASM_EXTABLE_KACCESS_ME_SAFE(\insn, \fixup)
+	.endm
+
 #else /* __ASSEMBLY__ */
 
 #include <linux/stringify.h>
diff --git a/arch/arm64/include/asm/asm-uaccess.h b/arch/arm64/include/asm/asm-uaccess.h
index 5b6efe8abeeb..7bbebfa5b710 100644
--- a/arch/arm64/include/asm/asm-uaccess.h
+++ b/arch/arm64/include/asm/asm-uaccess.h
@@ -57,6 +57,10 @@ alternative_else_nop_endif
 	.endm
 #endif
 
+#define KERNEL_ME_SAFE(l, x...)				\
+9999:	x;						\
+	_asm_extable_kaccess_me_safe	9999b, l
+
 #define USER(l, x...)					\
 9999:	x;						\
 	_asm_extable_uaccess	9999b, l
diff --git a/arch/arm64/include/asm/extable.h b/arch/arm64/include/asm/extable.h
index 72b0e71cc3de..bc49443bc502 100644
--- a/arch/arm64/include/asm/extable.h
+++ b/arch/arm64/include/asm/extable.h
@@ -46,4 +46,5 @@ bool ex_handler_bpf(const struct exception_table_entry *ex,
 #endif /* !CONFIG_BPF_JIT */
 
 bool fixup_exception(struct pt_regs *regs);
+bool fixup_exception_me(struct pt_regs *regs);
 #endif
diff --git a/arch/arm64/lib/copy_to_user.S b/arch/arm64/lib/copy_to_user.S
index 802231772608..2ac716c0d6d8 100644
--- a/arch/arm64/lib/copy_to_user.S
+++ b/arch/arm64/lib/copy_to_user.S
@@ -20,7 +20,7 @@
  *	x0 - bytes not copied
  */
 	.macro ldrb1 reg, ptr, val
-	ldrb  \reg, [\ptr], \val
+	KERNEL_ME_SAFE(9998f, ldrb  \reg, [\ptr], \val)
 	.endm
 
 	.macro strb1 reg, ptr, val
@@ -28,7 +28,7 @@
 	.endm
 
 	.macro ldrh1 reg, ptr, val
-	ldrh  \reg, [\ptr], \val
+	KERNEL_ME_SAFE(9998f, ldrh  \reg, [\ptr], \val)
 	.endm
 
 	.macro strh1 reg, ptr, val
@@ -36,7 +36,7 @@
 	.endm
 
 	.macro ldr1 reg, ptr, val
-	ldr \reg, [\ptr], \val
+	KERNEL_ME_SAFE(9998f, ldr \reg, [\ptr], \val)
 	.endm
 
 	.macro str1 reg, ptr, val
@@ -44,7 +44,7 @@
 	.endm
 
 	.macro ldp1 reg1, reg2, ptr, val
-	ldp \reg1, \reg2, [\ptr], \val
+	KERNEL_ME_SAFE(9998f, ldp \reg1, \reg2, [\ptr], \val)
 	.endm
 
 	.macro stp1 reg1, reg2, ptr, val
@@ -64,7 +64,7 @@ SYM_FUNC_START(__arch_copy_to_user)
 9997:	cmp	dst, dstin
 	b.ne	9998f
 	// Before being absolutely sure we couldn't copy anything, try harder
-	ldrb	tmp1w, [srcin]
+KERNEL_ME_SAFE(9998f, ldrb	tmp1w, [srcin])
 	USER(9998f, sttrb tmp1w, [dst])
 	add	dst, dst, #1
 9998:	sub	x0, end, dst			// bytes not copied
diff --git a/arch/arm64/mm/extable.c b/arch/arm64/mm/extable.c
index 228d681a8715..8c690ae61944 100644
--- a/arch/arm64/mm/extable.c
+++ b/arch/arm64/mm/extable.c
@@ -72,7 +72,26 @@ bool fixup_exception(struct pt_regs *regs)
 		return ex_handler_uaccess_err_zero(ex, regs);
 	case EX_TYPE_LOAD_UNALIGNED_ZEROPAD:
 		return ex_handler_load_unaligned_zeropad(ex, regs);
+	case EX_TYPE_KACCESS_ERR_ZERO_ME_SAFE:
+		return false;
 	}
 
 	BUG();
 }
+
+bool fixup_exception_me(struct pt_regs *regs)
+{
+	const struct exception_table_entry *ex;
+
+	ex = search_exception_tables(instruction_pointer(regs));
+	if (!ex)
+		return false;
+
+	switch (ex->type) {
+	case EX_TYPE_UACCESS_ERR_ZERO:
+	case EX_TYPE_KACCESS_ERR_ZERO_ME_SAFE:
+		return ex_handler_uaccess_err_zero(ex, regs);
+	}
+
+	return false;
+}
diff --git a/arch/arm64/mm/fault.c b/arch/arm64/mm/fault.c
index 451ba7cbd5ad..2dc65f99d389 100644
--- a/arch/arm64/mm/fault.c
+++ b/arch/arm64/mm/fault.c
@@ -708,21 +708,32 @@ static int do_bad(unsigned long far, unsigned long esr, struct pt_regs *regs)
 	return 1; /* "fault" */
 }
 
+/*
+ * APEI claimed this as a firmware-first notification.
+ * Some processing deferred to task_work before ret_to_user().
+ */
+static bool do_apei_claim_sea(struct pt_regs *regs)
+{
+	if (user_mode(regs)) {
+		if (!apei_claim_sea(regs))
+			return true;
+	} else if (IS_ENABLED(CONFIG_ARCH_HAS_COPY_MC)) {
+		if (fixup_exception_me(regs) && !apei_claim_sea(regs))
+			return true;
+	}
+
+	return false;
+}
+
 static int do_sea(unsigned long far, unsigned long esr, struct pt_regs *regs)
 {
 	const struct fault_info *inf;
 	unsigned long siaddr;
 
-	inf = esr_to_fault_info(esr);
-
-	if (user_mode(regs) && apei_claim_sea(regs) == 0) {
-		/*
-		 * APEI claimed this as a firmware-first notification.
-		 * Some processing deferred to task_work before ret_to_user().
-		 */
+	if (do_apei_claim_sea(regs))
 		return 0;
-	}
 
+	inf = esr_to_fault_info(esr);
 	if (esr & ESR_ELx_FnV) {
 		siaddr = 0;
 	} else {
From patchwork Tue May 28 08:59:12 2024
X-Patchwork-Id: 13676341
From: Tong Tiangen <tongtiangen@huawei.com>
Subject: [PATCH v12 3/6] mm/hwpoison: return -EFAULT when copy fail in copy_mc_[user]_highpage()
Date: Tue, 28 May 2024 16:59:12 +0800
Message-ID: <20240528085915.1955987-4-tongtiangen@huawei.com>
In-Reply-To: <20240528085915.1955987-1-tongtiangen@huawei.com>

If a hardware error is encountered while copying a page, returning the
number of bytes not copied is not meaningful: the caller cannot do any
processing on the remaining data. Returning -EFAULT, which indicates that
a hardware error was encountered during the copy, is more reasonable.
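A sketch of the calling convention this change establishes, modeled on the
khugepaged hunks below; the wrapper demo_collapse_copy() is hypothetical:

/*
 * After this patch the only information a caller gets is "copy
 * succeeded" (0) or "hardware error" (-EFAULT), so a zero test is all
 * that is needed.
 */
static int demo_collapse_copy(struct page *dst, struct page *src,
			      unsigned long addr, struct vm_area_struct *vma)
{
	int err = copy_mc_user_highpage(dst, src, addr, vma);

	if (err)		/* err is -EFAULT, never a byte count */
		return err;	/* caller isolates the poisoned source page */

	return 0;
}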
Signed-off-by: Tong Tiangen
Reviewed-by: Jonathan Cameron
---
 include/linux/highmem.h | 8 ++++----
 mm/khugepaged.c         | 4 ++--
 2 files changed, 6 insertions(+), 6 deletions(-)

diff --git a/include/linux/highmem.h b/include/linux/highmem.h
index 00341b56d291..64a567d5ad6f 100644
--- a/include/linux/highmem.h
+++ b/include/linux/highmem.h
@@ -335,8 +335,8 @@ static inline void copy_highpage(struct page *to, struct page *from)
 /*
  * If architecture supports machine check exception handling, define the
  * #MC versions of copy_user_highpage and copy_highpage. They copy a memory
- * page with #MC in source page (@from) handled, and return the number
- * of bytes not copied if there was a #MC, otherwise 0 for success.
+ * page with #MC in source page (@from) handled, and return -EFAULT if there
+ * was a #MC, otherwise 0 for success.
  */
 static inline int copy_mc_user_highpage(struct page *to, struct page *from,
 					unsigned long vaddr, struct vm_area_struct *vma)
@@ -352,7 +352,7 @@ static inline int copy_mc_user_highpage(struct page *to, struct page *from,
 	kunmap_local(vto);
 	kunmap_local(vfrom);
 
-	return ret;
+	return ret ? -EFAULT : 0;
 }
 
 static inline int copy_mc_highpage(struct page *to, struct page *from)
@@ -368,7 +368,7 @@ static inline int copy_mc_highpage(struct page *to, struct page *from)
 	kunmap_local(vto);
 	kunmap_local(vfrom);
 
-	return ret;
+	return ret ? -EFAULT : 0;
 }
 #else
 static inline int copy_mc_user_highpage(struct page *to, struct page *from,
diff --git a/mm/khugepaged.c b/mm/khugepaged.c
index 774a97e6e2da..cce838e85967 100644
--- a/mm/khugepaged.c
+++ b/mm/khugepaged.c
@@ -798,7 +798,7 @@ static int __collapse_huge_page_copy(pte_t *pte, struct folio *folio,
 			continue;
 		}
 		src_page = pte_page(pteval);
-		if (copy_mc_user_highpage(page, src_page, src_addr, vma) > 0) {
+		if (copy_mc_user_highpage(page, src_page, src_addr, vma)) {
 			result = SCAN_COPY_MC;
 			break;
 		}
@@ -2042,7 +2042,7 @@ static int collapse_file(struct mm_struct *mm, unsigned long addr,
 				index++;
 				dst++;
 			}
-			if (copy_mc_highpage(dst, folio_page(folio, 0)) > 0) {
+			if (copy_mc_highpage(dst, folio_page(folio, 0))) {
 				result = SCAN_COPY_MC;
 				goto rollback;
 			}
From patchwork Tue May 28 08:59:13 2024
X-Patchwork-Id: 13676342
From: Tong Tiangen <tongtiangen@huawei.com>
Subject: [PATCH v12 4/6] arm64: support copy_mc_[user]_highpage()
Date: Tue, 28 May 2024 16:59:13 +0800
Message-ID: <20240528085915.1955987-5-tongtiangen@huawei.com>
In-Reply-To: <20240528085915.1955987-1-tongtiangen@huawei.com>
Currently, many scenarios in the kernel can already tolerate memory errors
while copying a page [1]-[5], all of them implemented via
copy_mc_[user]_highpage(). arm64 should support this mechanism as well.

Because of MTE, arm64 needs its own architecture implementation of
copy_mc_[user]_highpage(); the macros __HAVE_ARCH_COPY_MC_HIGHPAGE and
__HAVE_ARCH_COPY_MC_USER_HIGHPAGE are added to control that.

Add a new helper, copy_mc_page(), which provides a page copy
implementation that is hardware memory error safe. The code logic of
copy_mc_page() is the same as copy_page(); the main difference is that
the ldp instructions of copy_mc_page() carry the fixup type
EX_TYPE_KACCESS_ERR_ZERO_ME_SAFE, so the shared logic is extracted into
copy_page_template.S.

[1] commit d302c2398ba2 ("mm, hwpoison: when copy-on-write hits poison, take page offline")
[2] commit 1cb9dc4b475c ("mm: hwpoison: support recovery from HugePage copy-on-write faults")
[3] commit 6b970599e807 ("mm: hwpoison: support recovery from ksm_might_need_to_copy()")
[4] commit 98c76c9f1ef7 ("mm/khugepaged: recover from poisoned anonymous memory")
[5] commit 12904d953364 ("mm/khugepaged: recover from poisoned file-backed memory")
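An illustrative usage sketch (the caller demo_migrate_page() is
hypothetical; the all-or-nothing behavior follows the copypage.c hunk
below, where either the data copy or the MTE tag copy may fail):

/*
 * Hypothetical migration-style caller: on arm64 a non-zero return may
 * mean either the page data or the MTE tags hit a memory error, so the
 * destination page must be discarded, never used partially.
 */
static int demo_migrate_page(struct page *dst, struct page *src)
{
	if (copy_mc_highpage(dst, src))
		return -EFAULT;	/* poisoned source: leave dst unused */

	return 0;
}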
Signed-off-by: Tong Tiangen
---
 arch/arm64/include/asm/mte.h        |  9 +++++
 arch/arm64/include/asm/page.h       | 10 ++++++
 arch/arm64/lib/Makefile             |  2 ++
 arch/arm64/lib/copy_mc_page.S       | 35 ++++++++++++++++++
 arch/arm64/lib/copy_page.S          | 50 +++----------------------
 arch/arm64/lib/copy_page_template.S | 56 +++++++++++++++++++++++++++++
 arch/arm64/lib/mte.S                | 29 +++++++++++++++
 arch/arm64/mm/copypage.c            | 45 +++++++++++++++++++++++
 include/linux/highmem.h             |  8 +++++
 9 files changed, 199 insertions(+), 45 deletions(-)
 create mode 100644 arch/arm64/lib/copy_mc_page.S
 create mode 100644 arch/arm64/lib/copy_page_template.S

diff --git a/arch/arm64/include/asm/mte.h b/arch/arm64/include/asm/mte.h
index 91fbd5c8a391..dc68337c2623 100644
--- a/arch/arm64/include/asm/mte.h
+++ b/arch/arm64/include/asm/mte.h
@@ -92,6 +92,11 @@ static inline bool try_page_mte_tagging(struct page *page)
 void mte_zero_clear_page_tags(void *addr);
 void mte_sync_tags(pte_t pte, unsigned int nr_pages);
 void mte_copy_page_tags(void *kto, const void *kfrom);
+
+#ifdef CONFIG_ARCH_HAS_COPY_MC
+int mte_copy_mc_page_tags(void *kto, const void *kfrom);
+#endif
+
 void mte_thread_init_user(void);
 void mte_thread_switch(struct task_struct *next);
 void mte_cpu_setup(void);
@@ -128,6 +133,10 @@ static inline void mte_sync_tags(pte_t pte, unsigned int nr_pages)
 static inline void mte_copy_page_tags(void *kto, const void *kfrom)
 {
 }
+static inline int mte_copy_mc_page_tags(void *kto, const void *kfrom)
+{
+	return 0;
+}
 static inline void mte_thread_init_user(void)
 {
 }
diff --git a/arch/arm64/include/asm/page.h b/arch/arm64/include/asm/page.h
index 2312e6ee595f..304cc86b8a10 100644
--- a/arch/arm64/include/asm/page.h
+++ b/arch/arm64/include/asm/page.h
@@ -29,6 +29,16 @@ void copy_user_highpage(struct page *to, struct page *from,
 void copy_highpage(struct page *to, struct page *from);
 #define __HAVE_ARCH_COPY_HIGHPAGE
 
+#ifdef CONFIG_ARCH_HAS_COPY_MC
+int copy_mc_page(void *to, const void *from);
+int copy_mc_highpage(struct page *to, struct page *from);
+#define __HAVE_ARCH_COPY_MC_HIGHPAGE
+
+int copy_mc_user_highpage(struct page *to, struct page *from,
+		unsigned long vaddr, struct vm_area_struct *vma);
+#define __HAVE_ARCH_COPY_MC_USER_HIGHPAGE
+#endif
+
 struct folio *vma_alloc_zeroed_movable_folio(struct vm_area_struct *vma,
 						unsigned long vaddr);
 #define vma_alloc_zeroed_movable_folio vma_alloc_zeroed_movable_folio
diff --git a/arch/arm64/lib/Makefile b/arch/arm64/lib/Makefile
index 13e6a2829116..65ec3d24d32d 100644
--- a/arch/arm64/lib/Makefile
+++ b/arch/arm64/lib/Makefile
@@ -13,6 +13,8 @@ endif
 
 lib-$(CONFIG_ARCH_HAS_UACCESS_FLUSHCACHE) += uaccess_flushcache.o
 
+lib-$(CONFIG_ARCH_HAS_COPY_MC) += copy_mc_page.o
+
 obj-$(CONFIG_CRC32) += crc32.o
 
 obj-$(CONFIG_FUNCTION_ERROR_INJECTION) += error-inject.o
diff --git a/arch/arm64/lib/copy_mc_page.S b/arch/arm64/lib/copy_mc_page.S
new file mode 100644
index 000000000000..7e633a6aa89f
--- /dev/null
+++ b/arch/arm64/lib/copy_mc_page.S
@@ -0,0 +1,35 @@
+#include <linux/linkage.h>
+#include <linux/const.h>
+#include <asm/assembler.h>
+#include <asm/page.h>
+#include <asm/cpufeature.h>
+#include <asm/alternative.h>
+#include <asm/asm-extable.h>
+#include <asm/asm-uaccess.h>
+
+/*
+ * Copy a page from src to dest (both are page aligned) with memory error safe
+ *
+ * Parameters:
+ *	x0 - dest
+ *	x1 - src
+ * Returns:
+ *	x0 - Return 0 if copy success, or -EFAULT if anything goes wrong
+ *	     while copying.
+ */
+	.macro ldp1 reg1, reg2, ptr, val
+	KERNEL_ME_SAFE(9998f, ldp \reg1, \reg2, [\ptr, \val])
+	.endm
+
+SYM_FUNC_START(__pi_copy_mc_page)
+#include "copy_page_template.S"
+
+	mov x0, #0
+	ret
+
+9998:	mov x0, #-EFAULT
+	ret
+
+SYM_FUNC_END(__pi_copy_mc_page)
+SYM_FUNC_ALIAS(copy_mc_page, __pi_copy_mc_page)
+EXPORT_SYMBOL(copy_mc_page)
diff --git a/arch/arm64/lib/copy_page.S b/arch/arm64/lib/copy_page.S
index 6a56d7cf309d..5499f507bb75 100644
--- a/arch/arm64/lib/copy_page.S
+++ b/arch/arm64/lib/copy_page.S
@@ -17,52 +17,12 @@
  *	x0 - dest
  *	x1 - src
  */
-SYM_FUNC_START(__pi_copy_page)
-	ldp	x2, x3, [x1]
-	ldp	x4, x5, [x1, #16]
-	ldp	x6, x7, [x1, #32]
-	ldp	x8, x9, [x1, #48]
-	ldp	x10, x11, [x1, #64]
-	ldp	x12, x13, [x1, #80]
-	ldp	x14, x15, [x1, #96]
-	ldp	x16, x17, [x1, #112]
-
-	add	x0, x0, #256
-	add	x1, x1, #128
-1:
-	tst	x0, #(PAGE_SIZE - 1)
-
-	stnp	x2, x3, [x0, #-256]
-	ldp	x2, x3, [x1]
-	stnp	x4, x5, [x0, #16 - 256]
-	ldp	x4, x5, [x1, #16]
-	stnp	x6, x7, [x0, #32 - 256]
-	ldp	x6, x7, [x1, #32]
-	stnp	x8, x9, [x0, #48 - 256]
-	ldp	x8, x9, [x1, #48]
-	stnp	x10, x11, [x0, #64 - 256]
-	ldp	x10, x11, [x1, #64]
-	stnp	x12, x13, [x0, #80 - 256]
-	ldp	x12, x13, [x1, #80]
-	stnp	x14, x15, [x0, #96 - 256]
-	ldp	x14, x15, [x1, #96]
-	stnp	x16, x17, [x0, #112 - 256]
-	ldp	x16, x17, [x1, #112]
-
-	add	x0, x0, #128
-	add	x1, x1, #128
-
-	b.ne	1b
-
-	stnp	x2, x3, [x0, #-256]
-	stnp	x4, x5, [x0, #16 - 256]
-	stnp	x6, x7, [x0, #32 - 256]
-	stnp	x8, x9, [x0, #48 - 256]
-	stnp	x10, x11, [x0, #64 - 256]
-	stnp	x12, x13, [x0, #80 - 256]
-	stnp	x14, x15, [x0, #96 - 256]
-	stnp	x16, x17, [x0, #112 - 256]
+	.macro ldp1 reg1, reg2, ptr, val
+	ldp \reg1, \reg2, [\ptr, \val]
+	.endm
 
+SYM_FUNC_START(__pi_copy_page)
+#include "copy_page_template.S"
 	ret
 SYM_FUNC_END(__pi_copy_page)
 SYM_FUNC_ALIAS(copy_page, __pi_copy_page)
diff --git a/arch/arm64/lib/copy_page_template.S b/arch/arm64/lib/copy_page_template.S
new file mode 100644
index 000000000000..b3ddec2c7a27
--- /dev/null
+++ b/arch/arm64/lib/copy_page_template.S
@@ -0,0 +1,56 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/*
+ * Copyright (C) 2012 ARM Ltd.
+ */
+
+/*
+ * Copy a page from src to dest (both are page aligned)
+ *
+ * Parameters:
+ *	x0 - dest
+ *	x1 - src
+ */
+	ldp1	x2, x3, x1, #0
+	ldp1	x4, x5, x1, #16
+	ldp1	x6, x7, x1, #32
+	ldp1	x8, x9, x1, #48
+	ldp1	x10, x11, x1, #64
+	ldp1	x12, x13, x1, #80
+	ldp1	x14, x15, x1, #96
+	ldp1	x16, x17, x1, #112
+
+	add	x0, x0, #256
+	add	x1, x1, #128
+1:
+	tst	x0, #(PAGE_SIZE - 1)
+
+	stnp	x2, x3, [x0, #-256]
+	ldp1	x2, x3, x1, #0
+	stnp	x4, x5, [x0, #16 - 256]
+	ldp1	x4, x5, x1, #16
+	stnp	x6, x7, [x0, #32 - 256]
+	ldp1	x6, x7, x1, #32
+	stnp	x8, x9, [x0, #48 - 256]
+	ldp1	x8, x9, x1, #48
+	stnp	x10, x11, [x0, #64 - 256]
+	ldp1	x10, x11, x1, #64
+	stnp	x12, x13, [x0, #80 - 256]
+	ldp1	x12, x13, x1, #80
+	stnp	x14, x15, [x0, #96 - 256]
+	ldp1	x14, x15, x1, #96
+	stnp	x16, x17, [x0, #112 - 256]
+	ldp1	x16, x17, x1, #112
+
+	add	x0, x0, #128
+	add	x1, x1, #128
+
+	b.ne	1b
+
+	stnp	x2, x3, [x0, #-256]
+	stnp	x4, x5, [x0, #16 - 256]
+	stnp	x6, x7, [x0, #32 - 256]
+	stnp	x8, x9, [x0, #48 - 256]
+	stnp	x10, x11, [x0, #64 - 256]
+	stnp	x12, x13, [x0, #80 - 256]
+	stnp	x14, x15, [x0, #96 - 256]
+	stnp	x16, x17, [x0, #112 - 256]
diff --git a/arch/arm64/lib/mte.S b/arch/arm64/lib/mte.S
index 5018ac03b6bf..50ef24318281 100644
--- a/arch/arm64/lib/mte.S
+++ b/arch/arm64/lib/mte.S
@@ -80,6 +80,35 @@ SYM_FUNC_START(mte_copy_page_tags)
 	ret
 SYM_FUNC_END(mte_copy_page_tags)
 
+#ifdef CONFIG_ARCH_HAS_COPY_MC
+/*
+ * Copy the tags from the source page to the destination one with machine check safe
+ * x0 - address of the destination page
+ * x1 - address of the source page
+ * Returns:
+ *	x0 - Return 0 if copy success, or
+ *	     -EFAULT if anything goes wrong while copying.
+ */
+SYM_FUNC_START(mte_copy_mc_page_tags)
+	mov	x2, x0
+	mov	x3, x1
+	multitag_transfer_size x5, x6
+1:
+KERNEL_ME_SAFE(2f, ldgm	x4, [x3])
+	stgm	x4, [x2]
+	add	x2, x2, x5
+	add	x3, x3, x5
+	tst	x2, #(PAGE_SIZE - 1)
+	b.ne	1b
+
+	mov	x0, #0
+	ret
+
+2:	mov	x0, #-EFAULT
+	ret
+SYM_FUNC_END(mte_copy_mc_page_tags)
+#endif
+
 /*
  * Read tags from a user buffer (one tag per byte) and set the corresponding
  * tags at the given kernel address. Used by PTRACE_POKEMTETAGS.
diff --git a/arch/arm64/mm/copypage.c b/arch/arm64/mm/copypage.c
index a7bb20055ce0..ff0d9ceea2a4 100644
--- a/arch/arm64/mm/copypage.c
+++ b/arch/arm64/mm/copypage.c
@@ -40,3 +40,48 @@ void copy_user_highpage(struct page *to, struct page *from,
 	flush_dcache_page(to);
 }
 EXPORT_SYMBOL_GPL(copy_user_highpage);
+
+#ifdef CONFIG_ARCH_HAS_COPY_MC
+/*
+ * Return -EFAULT if anything goes wrong while copying page or mte.
+ */
+int copy_mc_highpage(struct page *to, struct page *from)
+{
+	void *kto = page_address(to);
+	void *kfrom = page_address(from);
+	int ret;
+
+	ret = copy_mc_page(kto, kfrom);
+	if (ret)
+		return -EFAULT;
+
+	if (kasan_hw_tags_enabled())
+		page_kasan_tag_reset(to);
+
+	if (system_supports_mte() && page_mte_tagged(from)) {
+		/* It's a new page, shouldn't have been tagged yet */
+		WARN_ON_ONCE(!try_page_mte_tagging(to));
+		ret = mte_copy_mc_page_tags(kto, kfrom);
+		if (ret)
+			return -EFAULT;
+
+		set_page_mte_tagged(to);
+	}
+
+	return 0;
+}
+EXPORT_SYMBOL(copy_mc_highpage);
+
+int copy_mc_user_highpage(struct page *to, struct page *from,
+			unsigned long vaddr, struct vm_area_struct *vma)
+{
+	int ret;
+
+	ret = copy_mc_highpage(to, from);
+	if (!ret)
+		flush_dcache_page(to);
+
+	return ret;
+}
+EXPORT_SYMBOL_GPL(copy_mc_user_highpage);
+#endif
diff --git a/include/linux/highmem.h b/include/linux/highmem.h
index 64a567d5ad6f..7a8ddb2b67e6 100644
--- a/include/linux/highmem.h
+++ b/include/linux/highmem.h
@@ -332,6 +332,7 @@ static inline void copy_highpage(struct page *to, struct page *from)
 #endif
 
 #ifdef copy_mc_to_kernel
+#ifndef __HAVE_ARCH_COPY_MC_USER_HIGHPAGE
 /*
  * If architecture supports machine check exception handling, define the
  * #MC versions of copy_user_highpage and copy_highpage. They copy a memory
@@ -354,7 +355,9 @@ static inline int copy_mc_user_highpage(struct page *to, struct page *from,
 
 	return ret ? -EFAULT : 0;
 }
+#endif
 
+#ifndef __HAVE_ARCH_COPY_MC_HIGHPAGE
 static inline int copy_mc_highpage(struct page *to, struct page *from)
 {
 	unsigned long ret;
@@ -370,20 +373,25 @@ static inline int copy_mc_highpage(struct page *to, struct page *from)
 
 	return ret ? -EFAULT : 0;
 }
+#endif
 #else
+#ifndef __HAVE_ARCH_COPY_MC_USER_HIGHPAGE
 static inline int copy_mc_user_highpage(struct page *to, struct page *from,
 					unsigned long vaddr, struct vm_area_struct *vma)
 {
 	copy_user_highpage(to, from, vaddr, vma);
 	return 0;
 }
+#endif
 
+#ifndef __HAVE_ARCH_COPY_MC_HIGHPAGE
 static inline int copy_mc_highpage(struct page *to, struct page *from)
 {
 	copy_highpage(to, from);
 	return 0;
 }
 #endif
+#endif
 
 static inline void memcpy_page(struct page *dst_page, size_t dst_off,
 			       struct page *src_page, size_t src_off,
From patchwork Tue May 28 08:59:14 2024
X-Patchwork-Id: 13676343
From: Tong Tiangen <tongtiangen@huawei.com>
Subject: [PATCH v12 5/6] arm64: introduce copy_mc_to_kernel() implementation
Date: Tue, 28 May 2024 16:59:14 +0800
Message-ID: <20240528085915.1955987-6-tongtiangen@huawei.com>
In-Reply-To: <20240528085915.1955987-1-tongtiangen@huawei.com>
The copy_mc_to_kernel() helper is a memory copy implementation that
handles source exceptions. It can be used in memory copy scenarios that
tolerate hardware memory errors (e.g. pmem_read/dax_copy_to_iter).

Currently only x86 and ppc support this helper; now that arm64 supports
ARCH_HAS_COPY_MC, introduce a copy_mc_to_kernel() implementation for it.

Also add memcpy_mc() for memory copies that handle source exceptions.
Because no GPR is available for saving "bytes not copied" in memcpy(),
memcpy_mc() is modeled on the implementation of copy_from_user().
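A minimal sketch of the pmem-style scenario named above; everything except
copy_mc_to_kernel() and its return convention is illustrative:

/*
 * Illustrative pmem-style read: copy from media that may contain a
 * poisoned cache line.  A non-zero return is the number of bytes not
 * copied, which an I/O path would typically map to -EIO.
 */
static int demo_pmem_copy(void *dst, const void *media_addr, size_t len)
{
	unsigned long rem = copy_mc_to_kernel(dst, media_addr, len);

	return rem ? -EIO : 0;
}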
Signed-off-by: Tong Tiangen
---
 arch/arm64/include/asm/string.h  |  5 +++
 arch/arm64/include/asm/uaccess.h | 18 ++++++++
 arch/arm64/lib/Makefile          |  2 +-
 arch/arm64/lib/memcpy_mc.S       | 73 ++++++++++++++++++++++++++++++++
 mm/kasan/shadow.c                | 12 ++++++
 5 files changed, 109 insertions(+), 1 deletion(-)
 create mode 100644 arch/arm64/lib/memcpy_mc.S

diff --git a/arch/arm64/include/asm/string.h b/arch/arm64/include/asm/string.h
index 3a3264ff47b9..23eca4fb24fa 100644
--- a/arch/arm64/include/asm/string.h
+++ b/arch/arm64/include/asm/string.h
@@ -35,6 +35,10 @@ extern void *memchr(const void *, int, __kernel_size_t);
 extern void *memcpy(void *, const void *, __kernel_size_t);
 extern void *__memcpy(void *, const void *, __kernel_size_t);
 
+#define __HAVE_ARCH_MEMCPY_MC
+extern int memcpy_mc(void *, const void *, __kernel_size_t);
+extern int __memcpy_mc(void *, const void *, __kernel_size_t);
+
 #define __HAVE_ARCH_MEMMOVE
 extern void *memmove(void *, const void *, __kernel_size_t);
 extern void *__memmove(void *, const void *, __kernel_size_t);
@@ -57,6 +61,7 @@ void memcpy_flushcache(void *dst, const void *src, size_t cnt);
  */
 
 #define memcpy(dst, src, len) __memcpy(dst, src, len)
+#define memcpy_mc(dst, src, len) __memcpy_mc(dst, src, len)
 #define memmove(dst, src, len) __memmove(dst, src, len)
 #define memset(s, c, n) __memset(s, c, n)
 
diff --git a/arch/arm64/include/asm/uaccess.h b/arch/arm64/include/asm/uaccess.h
index 14be5000c5a0..07c1aeaeb094 100644
--- a/arch/arm64/include/asm/uaccess.h
+++ b/arch/arm64/include/asm/uaccess.h
@@ -425,4 +425,22 @@ static inline size_t probe_subpage_writeable(const char __user *uaddr,
 
 #endif /* CONFIG_ARCH_HAS_SUBPAGE_FAULTS */
 
+#ifdef CONFIG_ARCH_HAS_COPY_MC
+/**
+ * copy_mc_to_kernel - memory copy that handles source exceptions
+ *
+ * @to:   destination address
+ * @from: source address
+ * @size: number of bytes to copy
+ *
+ * Return 0 for success, or bytes not copied.
+ */
+static inline unsigned long __must_check
+copy_mc_to_kernel(void *to, const void *from, unsigned long size)
+{
+	return memcpy_mc(to, from, size);
+}
+#define copy_mc_to_kernel copy_mc_to_kernel
+#endif
+
 #endif /* __ASM_UACCESS_H */
diff --git a/arch/arm64/lib/Makefile b/arch/arm64/lib/Makefile
index 65ec3d24d32d..0d2fc251fbae 100644
--- a/arch/arm64/lib/Makefile
+++ b/arch/arm64/lib/Makefile
@@ -13,7 +13,7 @@ endif
 
 lib-$(CONFIG_ARCH_HAS_UACCESS_FLUSHCACHE) += uaccess_flushcache.o
 
-lib-$(CONFIG_ARCH_HAS_COPY_MC) += copy_mc_page.o
+lib-$(CONFIG_ARCH_HAS_COPY_MC) += copy_mc_page.o memcpy_mc.o
 
 obj-$(CONFIG_CRC32) += crc32.o
 
diff --git a/arch/arm64/lib/memcpy_mc.S b/arch/arm64/lib/memcpy_mc.S
new file mode 100644
index 000000000000..1798090eba06
--- /dev/null
+++ b/arch/arm64/lib/memcpy_mc.S
@@ -0,0 +1,73 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/*
+ * Copyright (C) 2013 ARM Ltd.
+ * Copyright (C) 2013 Linaro.
+ *
+ * This code is based on glibc cortex strings work originally authored by Linaro
+ * be found @
+ *
+ * http://bazaar.launchpad.net/~linaro-toolchain-dev/cortex-strings/trunk/
+ * files/head:/src/aarch64/
+ */
+
+#include <linux/linkage.h>
+#include <asm/assembler.h>
+#include <asm/asm-uaccess.h>
+#include <asm/cache.h>
+
+/*
+ * Copy a buffer from src to dest (alignment handled by the hardware)
+ *
+ * Parameters:
+ *	x0 - dest
+ *	x1 - src
+ *	x2 - n
+ * Returns:
+ *	x0 - dest
+ */
+	.macro ldrb1 reg, ptr, val
+	KERNEL_ME_SAFE(9997f, ldrb \reg, [\ptr], \val)
+	.endm
+
+	.macro strb1 reg, ptr, val
+	strb \reg, [\ptr], \val
+	.endm
+
+	.macro ldrh1 reg, ptr, val
+	KERNEL_ME_SAFE(9997f, ldrh \reg, [\ptr], \val)
+	.endm
+
+	.macro strh1 reg, ptr, val
+	strh \reg, [\ptr], \val
+	.endm
+
+	.macro ldr1 reg, ptr, val
+	KERNEL_ME_SAFE(9997f, ldr \reg, [\ptr], \val)
+	.endm
+
+	.macro str1 reg, ptr, val
+	str \reg, [\ptr], \val
+	.endm
+
+	.macro ldp1 reg1, reg2, ptr, val
+	KERNEL_ME_SAFE(9997f, ldp \reg1, \reg2, [\ptr], \val)
+	.endm
+
+	.macro stp1 reg1, reg2, ptr, val
+	stp \reg1, \reg2, [\ptr], \val
+	.endm
+
+end	.req	x5
+SYM_FUNC_START(__memcpy_mc)
+	add	end, x0, x2
+#include "copy_template.S"
+	mov	x0, #0			// Nothing to copy
+	ret
+
+	// Exception fixups
+9997:	sub	x0, end, dst		// bytes not copied
+	ret
+SYM_FUNC_END(__memcpy_mc)
+EXPORT_SYMBOL(__memcpy_mc)
+SYM_FUNC_ALIAS_WEAK(memcpy_mc, __memcpy_mc)
+EXPORT_SYMBOL(memcpy_mc)
diff --git a/mm/kasan/shadow.c b/mm/kasan/shadow.c
index d6210ca48dda..e23632391eac 100644
--- a/mm/kasan/shadow.c
+++ b/mm/kasan/shadow.c
@@ -79,6 +79,18 @@ void *memcpy(void *dest, const void *src, size_t len)
 }
 #endif
 
+#ifdef __HAVE_ARCH_MEMCPY_MC
+#undef memcpy_mc
+int memcpy_mc(void *dest, const void *src, size_t len)
+{
+	if (!kasan_check_range(src, len, false, _RET_IP_) ||
+	    !kasan_check_range(dest, len, true, _RET_IP_))
+		return (int)len;
+
+	return __memcpy_mc(dest, src, len);
+}
+#endif
+
 void *__asan_memset(void *addr, int c, ssize_t len)
 {
 	if (!kasan_check_range(addr, len, true, _RET_IP_))
From patchwork Tue May 28 08:59:15 2024
From: Tong Tiangen <tongtiangen@huawei.com>
Peter Anvin" CC: , , , , Tong Tiangen , , Guohanjun Subject: [PATCH v12 6/6] arm64: send SIGBUS to user process for SEA exception Date: Tue, 28 May 2024 16:59:15 +0800 Message-ID: <20240528085915.1955987-7-tongtiangen@huawei.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20240528085915.1955987-1-tongtiangen@huawei.com> References: <20240528085915.1955987-1-tongtiangen@huawei.com> MIME-Version: 1.0 X-Originating-IP: [10.175.112.125] X-ClientProxiedBy: dggems705-chm.china.huawei.com (10.3.19.182) To kwepemm600017.china.huawei.com (7.193.23.234) X-Rspam-User: X-Rspamd-Server: rspam09 X-Rspamd-Queue-Id: 8E57A40008 X-Stat-Signature: 9fn98i7w8x4i3d55n9f6cupsrxup58q5 X-HE-Tag: 1716886772-728741 X-HE-Meta: U2FsdGVkX19s3wYbNUMdzzTyetyFUqeFeUPTaM8WT5Z9jBdeg4PlbuhVcYnUwuKz4gKnLuDgVYym+nIz2Ng4RPcGmwR1MFRoFN3b3e1MFL64rqAnmjAFsWatQyLy9K+z++qW15c0JHwPORwYqw2aSCjMjZthEanEk2c03uk522k1QquYyhTkjCgYw4hCMnoHVMna6V8TBzuf/TYsQ4D7uhydhtgceETW786oXNcdgMwfPjPyJuOsWO5NcazBtzUMDuXB5VVwquM0UIq/XWqWc+CGJawFLm0OR3VVIq98Sj20ztpc1ZEFf2X8Ep1DiHxf0TKjRrOrsIYA3tl4Doziu9I5xJntu+tjao7jOrqRrIuXPmfUKEtSoWoHAWXJtQ/3HZYxKyw90mQ/ZhrDttP5c7hXnFGEtz18yfqsk9/xnot4YuExIbgCAc57N2GI31+w9ygM2JT4XqBSJwF6n6ml79t6MS6qQenh8EbNXRiMz2HdnHiNkelQxSQ31MFa3HTSO89iBiumZIlyXGvMnSudVZwBkXJY0VzwoCJCgL0YmDBv7MR6HRjXdKrv4o3NgifIMA4lgkw4iYXQlTnJTAWvIsvQeE74+iamA0kPXKH4D554a7t2X9mPKbI2lUJ/Xj9WtweUNzI6z+jHIfN+Ywt+KEQ12dIcuARsHIPNk/iFDhi6fMe4nQkp+mCVzHzBUiIVl2s56zh6p1OTyN7pw8QSDl4I265bS8Pgzt6GRs5Jn7nVkZ5vExlCxJmDsS9Pw8apYbvVloLt7nKRPvn2qCrVyZZbcC2TTkM0KM/Wt522ryePw+GDPbeuCJ4DNPT8DoQTmypOefcQKs6M9+kuETQWEJic/T2xXbBuqqpk6jgpiwHvsHi4f7zG385LAjfAX5qOmWdlEi9/yGIHY4nu3y1p6M0VSyILEkfgmcdxGJnV1k1csFkZK7l9wnpvX8csHKCDRz00AkyBNsQ7L4pfOxo 43BFxU75 k1q8q+zh/iX4Ap6GvbajGJLuPl+bB/cYduBq2mIzaaPQKITPI9jBJPUv3FZyWPundsQyToanY937sA8JTIvXhcWrFJ6uUrg1uP2WpAgTHCDcoEKr+ewiBovUMm2uUROALBSkQAVKrWT7eiE5K3nYsNzGyCreSZmq2at20W2OVstyQY3USggaKi6t5XfIG/DbQ/8Zhr1lUxAmNfWKrVrO6O2lLluYxzxX1h6xGrMMOF7L/Zp1nFWmZzvWhRXgNltClJaFGtnJwqksrp2Tpe0586s/n9ydnWvpb3rvLfwjmFd6zNpM= X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: List-Subscribe: List-Unsubscribe: For SEA exception, kernel require take some action to recover from memory error, such as isolate poison page adn kill failure thread, which are done in memory_failure(). During our test, the failure thread cannot be killed due to this issue[1], Here, I temporarily workaround this issue by sending signals to user processes in do_sea(). After [1] is merged, this patch can be rolled back or the SIGBUS will be sent repeated. 
[1] https://lore.kernel.org/lkml/20240204080144.7977-1-xueshuai@linux.alibaba.com/

Signed-off-by: Tong Tiangen <tongtiangen@huawei.com>
---
 arch/arm64/mm/fault.c | 14 +++++++++++---
 1 file changed, 11 insertions(+), 3 deletions(-)

diff --git a/arch/arm64/mm/fault.c b/arch/arm64/mm/fault.c
index 2dc65f99d389..37d7e74d9aee 100644
--- a/arch/arm64/mm/fault.c
+++ b/arch/arm64/mm/fault.c
@@ -730,9 +730,6 @@ static int do_sea(unsigned long far, unsigned long esr, struct pt_regs *regs)
 	const struct fault_info *inf;
 	unsigned long siaddr;
 
-	if (do_apei_claim_sea(regs))
-		return 0;
-
 	inf = esr_to_fault_info(esr);
 	if (esr & ESR_ELx_FnV) {
 		siaddr = 0;
@@ -744,6 +741,17 @@ static int do_sea(unsigned long far, unsigned long esr, struct pt_regs *regs)
 		 */
 		siaddr = untagged_addr(far);
 	}
+
+	if (do_apei_claim_sea(regs)) {
+		if (current->mm) {
+			set_thread_esr(0, esr);
+			arm64_force_sig_fault(inf->sig, inf->code, siaddr,
+					      "Uncorrected memory error on access "
+					      "to poison memory\n");
+		}
+		return 0;
+	}
+
 	arm64_notify_die(inf->name, regs, inf->sig, inf->code, siaddr, esr);
 
 	return 0;
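
[Editorial sketch, not part of the patch: a minimal hypothetical user-space
program showing how the SIGBUS delivered by do_sea() could be observed. The
exact si_code depends on the fault_info entry for the SEA, so the handler
does not assume a particular value; only async-signal-safe calls are used.]

#include <signal.h>
#include <stdlib.h>
#include <unistd.h>

static void sigbus_handler(int sig, siginfo_t *info, void *ucontext)
{
	/* info->si_addr carries the (untagged) faulting address from do_sea() */
	static const char msg[] = "SIGBUS: uncorrected memory error\n";

	(void)sig;
	(void)info;
	(void)ucontext;
	write(STDERR_FILENO, msg, sizeof(msg) - 1);
	_exit(EXIT_FAILURE);	/* async-signal-safe exit */
}

int main(void)
{
	struct sigaction sa = { 0 };

	sa.sa_flags = SA_SIGINFO;
	sa.sa_sigaction = sigbus_handler;
	sigemptyset(&sa.sa_mask);
	sigaction(SIGBUS, &sa, NULL);

	/* ... access memory that the kernel has marked as poisoned ... */
	return 0;
}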