From patchwork Mon Apr 17 04:53:23 2023
X-Patchwork-Submitter: Kefeng Wang
X-Patchwork-Id: 13213244
From: Kefeng Wang
To: Naoya Horiguchi, Alexander Viro, Christian Brauner, Andrew Morton
Cc: Miaohe Lin, Kefeng Wang, Tong Tiangen, Jens Axboe
Subject: [PATCH v2] mm: hwpoison: coredump: support recovery from dump_user_range()
Date: Mon, 17 Apr 2023 12:53:23 +0800
Message-ID: <20230417045323.11054-1-wangkefeng.wang@huawei.com>

dump_user_range() is used to copy user pages to a coredump file. If a
hardware memory error occurs during the copy, which is performed by
__kernel_write_iter() called from dump_user_range(), the kernel crashes:

CPU: 112 PID: 7014 Comm: mca-recover Not tainted 6.3.0-rc2 #425
pc : __memcpy+0x110/0x260
lr : _copy_from_iter+0x3bc/0x4c8
...
Call trace:
 __memcpy+0x110/0x260
 copy_page_from_iter+0xcc/0x130
 pipe_write+0x164/0x6d8
 __kernel_write_iter+0x9c/0x210
 dump_user_range+0xc8/0x1d8
 elf_core_dump+0x308/0x368
 do_coredump+0x2e8/0xa40
 get_signal+0x59c/0x788
 do_signal+0x118/0x1f8
 do_notify_resume+0xf0/0x280
 el0_da+0x130/0x138
 el0t_64_sync_handler+0x68/0xc0
 el0t_64_sync+0x188/0x190

Generally, a file's ->write_iter implementation uses copy_page_from_iter()
and copy_page_from_iter_atomic(). Change memcpy() to copy_mc_to_kernel()
in both of them to handle a #MC during the source read, which stops
coredump processing and kills the task instead of panicking the kernel.
However, the source address is not always a user address, so introduce a
new copy_mc flag in struct iov_iter to indicate that the iterator should
use a machine-check-safe copy, along with helpers to set/check the flag.
For now it is only used in coredump's dump_user_range(), but it could be
extended to any other scenario with a similar issue.
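
For illustration, the intended calling convention is small: build a source
iterator over the page and mark it machine-check aware before handing it to
a kernel write. The minimal sketch below mirrors the dump_emit_page() hunk
in this patch; write_one_page_mc() is a hypothetical caller used only for
illustration and is not part of the diff.

/* Illustrative only; assumes <linux/uio.h>, <linux/bvec.h>, <linux/fs.h>. */
static int write_one_page_mc(struct file *file, struct page *page, loff_t *pos)
{
	struct bio_vec bvec;
	struct iov_iter iter;
	ssize_t n;

	/* Build a source iterator over the page, as dump_emit_page() does. */
	bvec_set_page(&bvec, page, PAGE_SIZE, 0);
	iov_iter_bvec(&iter, ITER_SOURCE, &bvec, 1, PAGE_SIZE);

	/* Opt in to machine-check-safe copying for this iterator. */
	iov_iter_set_copy_mc(&iter);

	/*
	 * A #MC hit while reading the source page now results in a short
	 * copy (and thus a short write) instead of a kernel panic.
	 */
	n = __kernel_write_iter(file, &iter, pos);
	return n == PAGE_SIZE ? 0 : -EIO;
}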
Cc: Alexander Viro
Cc: Christian Brauner
Cc: Miaohe Lin
Cc: Naoya Horiguchi
Cc: Tong Tiangen
Cc: Jens Axboe
Signed-off-by: Kefeng Wang
---
v2:
- move the helper functions under pre-existing CONFIG_ARCH_HAS_COPY_MC
- reposition the copy_mc in struct iov_iter for easy merge, suggested by
  Andrew Morton
- drop unnecessary clear flag helper
- fix checkpatch warning

 fs/coredump.c       |  1 +
 include/linux/uio.h | 16 ++++++++++++++++
 lib/iov_iter.c      | 17 +++++++++++++++--
 3 files changed, 32 insertions(+), 2 deletions(-)

diff --git a/fs/coredump.c b/fs/coredump.c
index 5df1e6e1eb2b..ece7badf701b 100644
--- a/fs/coredump.c
+++ b/fs/coredump.c
@@ -882,6 +882,7 @@ static int dump_emit_page(struct coredump_params *cprm, struct page *page)
 	pos = file->f_pos;
 	bvec_set_page(&bvec, page, PAGE_SIZE, 0);
 	iov_iter_bvec(&iter, ITER_SOURCE, &bvec, 1, PAGE_SIZE);
+	iov_iter_set_copy_mc(&iter);
 	n = __kernel_write_iter(cprm->file, &iter, &pos);
 	if (n != PAGE_SIZE)
 		return 0;
diff --git a/include/linux/uio.h b/include/linux/uio.h
index c459e1d5772b..aa3a4c6ba585 100644
--- a/include/linux/uio.h
+++ b/include/linux/uio.h
@@ -40,6 +40,7 @@ struct iov_iter_state {
 
 struct iov_iter {
 	u8 iter_type;
+	bool copy_mc;
 	bool nofault;
 	bool data_source;
 	bool user_backed;
@@ -241,8 +242,22 @@ size_t _copy_from_iter_flushcache(void *addr, size_t bytes, struct iov_iter *i);
 
 #ifdef CONFIG_ARCH_HAS_COPY_MC
 size_t _copy_mc_to_iter(const void *addr, size_t bytes, struct iov_iter *i);
+static inline void iov_iter_set_copy_mc(struct iov_iter *i)
+{
+	i->copy_mc = true;
+}
+
+static inline bool iov_iter_is_copy_mc(const struct iov_iter *i)
+{
+	return i->copy_mc;
+}
 #else
 #define _copy_mc_to_iter _copy_to_iter
+static inline void iov_iter_set_copy_mc(struct iov_iter *i) { }
+static inline bool iov_iter_is_copy_mc(const struct iov_iter *i)
+{
+	return false;
+}
 #endif
 
 size_t iov_iter_zero(size_t bytes, struct iov_iter *);
@@ -357,6 +372,7 @@ static inline void iov_iter_ubuf(struct iov_iter *i, unsigned int direction,
 	WARN_ON(direction & ~(READ | WRITE));
 	*i = (struct iov_iter) {
 		.iter_type = ITER_UBUF,
+		.copy_mc = false,
 		.user_backed = true,
 		.data_source = direction,
 		.ubuf = buf,
diff --git a/lib/iov_iter.c b/lib/iov_iter.c
index 08587feb94cc..7b9d8419fee7 100644
--- a/lib/iov_iter.c
+++ b/lib/iov_iter.c
@@ -288,6 +288,7 @@ void iov_iter_init(struct iov_iter *i, unsigned int direction,
 	WARN_ON(direction & ~(READ | WRITE));
 	*i = (struct iov_iter) {
 		.iter_type = ITER_IOVEC,
+		.copy_mc = false,
 		.nofault = false,
 		.user_backed = true,
 		.data_source = direction,
@@ -371,6 +372,14 @@ size_t _copy_mc_to_iter(const void *addr, size_t bytes, struct iov_iter *i)
 EXPORT_SYMBOL_GPL(_copy_mc_to_iter);
 #endif /* CONFIG_ARCH_HAS_COPY_MC */
 
+static void *memcpy_from_iter(struct iov_iter *i, void *to, const void *from,
+			      size_t size)
+{
+	if (iov_iter_is_copy_mc(i))
+		return (void *)copy_mc_to_kernel(to, from, size);
+	return memcpy(to, from, size);
+}
+
 size_t _copy_from_iter(void *addr, size_t bytes, struct iov_iter *i)
 {
 	if (WARN_ON_ONCE(!i->data_source))
@@ -380,7 +389,7 @@ size_t _copy_from_iter(void *addr, size_t bytes, struct iov_iter *i)
 		might_fault();
 	iterate_and_advance(i, bytes, base, len, off,
 		copyin(addr + off, base, len),
-		memcpy(addr + off, base, len)
+		memcpy_from_iter(i, addr + off, base, len)
 	)
 
 	return bytes;
@@ -571,7 +580,7 @@ size_t copy_page_from_iter_atomic(struct page *page, unsigned offset, size_t byt
 	}
 	iterate_and_advance(i, bytes, base, len, off,
 		copyin(p + off, base, len),
-		memcpy(p + off, base, len)
+		memcpy_from_iter(i, p + off, base, len)
 	)
 	kunmap_atomic(kaddr);
 	return bytes;
@@ -704,6 +713,7 @@ void iov_iter_kvec(struct iov_iter *i, unsigned int direction,
 	WARN_ON(direction & ~(READ | WRITE));
 	*i = (struct iov_iter){
 		.iter_type = ITER_KVEC,
+		.copy_mc = false,
 		.data_source = direction,
 		.kvec = kvec,
 		.nr_segs = nr_segs,
@@ -720,6 +730,7 @@ void iov_iter_bvec(struct iov_iter *i, unsigned int direction,
 	WARN_ON(direction & ~(READ | WRITE));
 	*i = (struct iov_iter){
 		.iter_type = ITER_BVEC,
+		.copy_mc = false,
 		.data_source = direction,
 		.bvec = bvec,
 		.nr_segs = nr_segs,
@@ -748,6 +759,7 @@ void iov_iter_xarray(struct iov_iter *i, unsigned int direction,
 	BUG_ON(direction & ~1);
 	*i = (struct iov_iter) {
 		.iter_type = ITER_XARRAY,
+		.copy_mc = false,
 		.data_source = direction,
 		.xarray = xarray,
 		.xarray_start = start,
@@ -771,6 +783,7 @@ void iov_iter_discard(struct iov_iter *i, unsigned int direction, size_t count)
 	BUG_ON(direction != READ);
 	*i = (struct iov_iter){
 		.iter_type = ITER_DISCARD,
+		.copy_mc = false,
 		.data_source = false,
 		.count = count,
 		.iov_offset = 0