From patchwork Mon Dec 9 02:42:57 2024
X-Patchwork-Submitter: Tong Tiangen <tongtiangen@huawei.com>
X-Patchwork-Id: 13898752
From: Tong Tiangen <tongtiangen@huawei.com>
To: Mark Rutland, Jonathan Cameron, Mauro Carvalho Chehab, Catalin Marinas, Will Deacon, Andrew Morton, James Morse, Robin Murphy,
    Andrey Konovalov, Dmitry Vyukov, Vincenzo Frascino, Michael Ellerman, Nicholas Piggin, Andrey Ryabinin, Alexander Potapenko, Christophe Leroy, Aneesh Kumar K.V, "Naveen N. Rao", Thomas Gleixner, Ingo Molnar, Borislav Petkov, Dave Hansen, "H. Peter Anvin", Madhavan Srinivasan
Cc: Tong Tiangen, Guohanjun
Subject: [PATCH v13 5/5] arm64: introduce copy_mc_to_kernel() implementation
Date: Mon, 9 Dec 2024 10:42:57 +0800
Message-ID: <20241209024257.3618492-6-tongtiangen@huawei.com>
In-Reply-To: <20241209024257.3618492-1-tongtiangen@huawei.com>
References: <20241209024257.3618492-1-tongtiangen@huawei.com>

The copy_mc_to_kernel() helper is a memory copy implementation that
handles source exceptions. It can be used in memory copy scenarios
that tolerate hardware memory errors (e.g. pmem_read/dax_copy_to_iter).

Currently only x86 and ppc support this helper. Add it for arm64 as
well, when ARCH_HAS_COPY_MC is defined, by implementing the
copy_mc_to_kernel() and memcpy_mc() functions.

Because no caller-saved GPR is available in memcpy() for recording the
"bytes not copied" result, memcpy_mc() is modeled on the implementation
of copy_from_user(). In addition, fixup of the MOPS instructions is not
handled at present.
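For context, a minimal caller sketch of the new helper follows. The
function name pmem_read_sketch() and its error policy are made up for
illustration; only copy_mc_to_kernel() and its 0-or-bytes-remaining
return convention come from this patch:

	/*
	 * Hypothetical caller, not part of this patch: a pmem-style read
	 * path that tolerates hardware memory errors on the source side.
	 */
	#include <linux/uaccess.h>
	#include <linux/errno.h>

	static int pmem_read_sketch(void *dst, const void *pmem_addr,
				    size_t len)
	{
		/* 0 on success, otherwise the number of bytes not copied. */
		unsigned long rem = copy_mc_to_kernel(dst, pmem_addr, len);

		return rem ? -EIO : 0;	/* partial copy treated as I/O error */
	}

With this pattern a consumer can fail a single read gracefully instead
of taking a fatal synchronous exception on a poisoned source page.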
Signed-off-by: Tong Tiangen <tongtiangen@huawei.com>
---
 arch/arm64/include/asm/string.h  |  5 ++
 arch/arm64/include/asm/uaccess.h | 18 ++++++
 arch/arm64/lib/Makefile          |  2 +-
 arch/arm64/lib/memcpy_mc.S       | 98 ++++++++++++++++++++++++++++++++
 mm/kasan/shadow.c                | 12 ++++
 5 files changed, 134 insertions(+), 1 deletion(-)
 create mode 100644 arch/arm64/lib/memcpy_mc.S

diff --git a/arch/arm64/include/asm/string.h b/arch/arm64/include/asm/string.h
index 3a3264ff47b9..23eca4fb24fa 100644
--- a/arch/arm64/include/asm/string.h
+++ b/arch/arm64/include/asm/string.h
@@ -35,6 +35,10 @@ extern void *memchr(const void *, int, __kernel_size_t);
 extern void *memcpy(void *, const void *, __kernel_size_t);
 extern void *__memcpy(void *, const void *, __kernel_size_t);
 
+#define __HAVE_ARCH_MEMCPY_MC
+extern int memcpy_mc(void *, const void *, __kernel_size_t);
+extern int __memcpy_mc(void *, const void *, __kernel_size_t);
+
 #define __HAVE_ARCH_MEMMOVE
 extern void *memmove(void *, const void *, __kernel_size_t);
 extern void *__memmove(void *, const void *, __kernel_size_t);
@@ -57,6 +61,7 @@ void memcpy_flushcache(void *dst, const void *src, size_t cnt);
  */
 
 #define memcpy(dst, src, len) __memcpy(dst, src, len)
+#define memcpy_mc(dst, src, len) __memcpy_mc(dst, src, len)
 #define memmove(dst, src, len) __memmove(dst, src, len)
 #define memset(s, c, n) __memset(s, c, n)
 
diff --git a/arch/arm64/include/asm/uaccess.h b/arch/arm64/include/asm/uaccess.h
index 5b91803201ef..2a14b732306a 100644
--- a/arch/arm64/include/asm/uaccess.h
+++ b/arch/arm64/include/asm/uaccess.h
@@ -542,4 +542,22 @@ static inline void put_user_gcs(unsigned long val, unsigned long __user *addr,
 
 #endif /* CONFIG_ARM64_GCS */
 
+#ifdef CONFIG_ARCH_HAS_COPY_MC
+/**
+ * copy_mc_to_kernel - memory copy that handles source exceptions
+ *
+ * @to: destination address
+ * @from: source address
+ * @size: number of bytes to copy
+ *
+ * Return 0 for success, or bytes not copied.
+ */
+static inline unsigned long __must_check
+copy_mc_to_kernel(void *to, const void *from, unsigned long size)
+{
+	return memcpy_mc(to, from, size);
+}
+#define copy_mc_to_kernel copy_mc_to_kernel
+#endif
+
 #endif /* __ASM_UACCESS_H */
diff --git a/arch/arm64/lib/Makefile b/arch/arm64/lib/Makefile
index 78b0e9904689..326d71ba0517 100644
--- a/arch/arm64/lib/Makefile
+++ b/arch/arm64/lib/Makefile
@@ -13,7 +13,7 @@ endif
 
 lib-$(CONFIG_ARCH_HAS_UACCESS_FLUSHCACHE) += uaccess_flushcache.o
 
-lib-$(CONFIG_ARCH_HAS_COPY_MC) += copy_mc_page.o
+lib-$(CONFIG_ARCH_HAS_COPY_MC) += copy_mc_page.o memcpy_mc.o
 
 obj-$(CONFIG_CRC32) += crc32.o crc32-glue.o
 
diff --git a/arch/arm64/lib/memcpy_mc.S b/arch/arm64/lib/memcpy_mc.S
new file mode 100644
index 000000000000..cb9caaa1ab0b
--- /dev/null
+++ b/arch/arm64/lib/memcpy_mc.S
@@ -0,0 +1,98 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/*
+ * Copyright (C) 2013 ARM Ltd.
+ * Copyright (C) 2013 Linaro.
+ *
+ * This code is based on glibc cortex strings work originally authored by Linaro
+ * be found @
+ *
+ * http://bazaar.launchpad.net/~linaro-toolchain-dev/cortex-strings/trunk/
+ * files/head:/src/aarch64/
+ */
+
+#include
+#include
+#include
+#include
+
+/*
+ * Copy a buffer from src to dest (alignment handled by the hardware)
+ *
+ * Parameters:
+ *	x0 - dest
+ *	x1 - src
+ *	x2 - n
+ * Returns:
+ *	x0 - bytes not copied
+ */
+	.macro ldrb1 reg, ptr, val
+	KERNEL_MEM_ERR(9997f, ldrb \reg, [\ptr], \val)
+	.endm
+
+	.macro strb1 reg, ptr, val
+	strb \reg, [\ptr], \val
+	.endm
+
+	.macro ldrh1 reg, ptr, val
+	KERNEL_MEM_ERR(9997f, ldrh \reg, [\ptr], \val)
+	.endm
+
+	.macro strh1 reg, ptr, val
+	strh \reg, [\ptr], \val
+	.endm
+
+	.macro ldr1 reg, ptr, val
+	KERNEL_MEM_ERR(9997f, ldr \reg, [\ptr], \val)
+	.endm
+
+	.macro str1 reg, ptr, val
+	str \reg, [\ptr], \val
+	.endm
+
+	.macro ldp1 reg1, reg2, ptr, val
+	KERNEL_MEM_ERR(9997f, ldp \reg1, \reg2, [\ptr], \val)
+	.endm
+
+	.macro stp1 reg1, reg2, ptr, val
+	stp \reg1, \reg2, [\ptr], \val
+	.endm
+
+end	.req	x5
+SYM_FUNC_START(__memcpy_mc_generic)
+	add	end, x0, x2
+#include "copy_template.S"
+	mov	x0, #0			// Nothing to copy
+	ret
+
+	// Exception fixups
+9997:	sub	x0, end, dst		// bytes not copied
+	ret
+SYM_FUNC_END(__memcpy_mc_generic)
+
+#ifdef CONFIG_AS_HAS_MOPS
+	.arch_extension mops
+SYM_FUNC_START(__memcpy_mc)
+alternative_if_not ARM64_HAS_MOPS
+	b	__memcpy_mc_generic
+alternative_else_nop_endif
+
+dstin	.req	x0
+src	.req	x1
+count	.req	x2
+dst	.req	x3
+
+	mov	dst, dstin
+	cpyp	[dst]!, [src]!, count!
+	cpym	[dst]!, [src]!, count!
+	cpye	[dst]!, [src]!, count!
+
+	mov	x0, #0			// Nothing to copy
+	ret
+SYM_FUNC_END(__memcpy_mc)
+#else
+SYM_FUNC_ALIAS(__memcpy_mc, __memcpy_mc_generic)
+#endif
+
+EXPORT_SYMBOL(__memcpy_mc)
+SYM_FUNC_ALIAS_WEAK(memcpy_mc, __memcpy_mc)
+EXPORT_SYMBOL(memcpy_mc)
diff --git a/mm/kasan/shadow.c b/mm/kasan/shadow.c
index 88d1c9dcb507..a12770fb2e9c 100644
--- a/mm/kasan/shadow.c
+++ b/mm/kasan/shadow.c
@@ -79,6 +79,18 @@ void *memcpy(void *dest, const void *src, size_t len)
 }
 #endif
 
+#ifdef __HAVE_ARCH_MEMCPY_MC
+#undef memcpy_mc
+int memcpy_mc(void *dest, const void *src, size_t len)
+{
+	if (!kasan_check_range(src, len, false, _RET_IP_) ||
+	    !kasan_check_range(dest, len, true, _RET_IP_))
+		return (int)len;
+
+	return __memcpy_mc(dest, src, len);
+}
+#endif
+
 void *__asan_memset(void *addr, int c, ssize_t len)
 {
 	if (!kasan_check_range(addr, len, true, _RET_IP_))
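A usage note on the KASAN wrapper above: when instrumentation is
enabled, memcpy_mc() reports the whole length as uncopied if either
range check fails, so callers take the same error path as for a
hardware fault. A minimal sketch of direct memcpy_mc() use follows;
the caller name copy_with_mc() is hypothetical and not part of this
patch:

	#include <linux/string.h>
	#include <linux/errno.h>

	static int copy_with_mc(void *dst, const void *src, size_t len)
	{
		int uncopied = memcpy_mc(dst, src, len); /* 0 == full copy */

		if (uncopied)
			return -EIO;	/* len - uncopied bytes reached dst */
		return 0;
	}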