From patchwork Thu Aug 25 08:01:46 2022
X-Patchwork-Submitter: Zhihao Cheng
X-Patchwork-Id: 12954312
From: Zhihao Cheng
Subject: [PATCH] mm: migrate: buffer_migrate_folio_norefs() fallback migrate not uptodate pages
Date: Thu, 25 Aug 2022 16:01:46 +0800
Message-ID: <20220825080146.2021641-1-chengzhihao1@huawei.com>
X-Mailer: git-send-email 2.31.1
MIME-Version: 1.0
From: Zhang Yi

Recently we noticed that the ext4 filesystem occasionally fails to read
metadata from disk and reports an error message, although the disk and the
block layer look fine. After analysis, we traced the problem to commit
88dbcbb3a484 ("blkdev: avoid migration stalls for blkdev pages"). It provides
a migration method for the bdev, so we can now move pages that have buffers
but no extra users, but it locks the buffers on the page, which breaks many
of the filesystems' fragile metadata read operations, such as ll_rw_block()
for common usage and ext4_read_bh_lock() for ext4. These helpers only trylock
the buffer and skip submitting IO if the lock fails, and many callers simply
wait_on_buffer() and conclude an IO error if the buffer is not uptodate after
it is unlocked. The issue can easily be reproduced by adding some delay just
after buffer_migrate_lock_buffers() in __buffer_migrate_folio() and running
fsstress on an ext4 filesystem.

EXT4-fs error (device pmem1): __ext4_find_entry:1658: inode #73193: comm fsstress: reading directory lblock 0
EXT4-fs error (device pmem1): __ext4_find_entry:1658: inode #75334: comm fsstress: reading directory lblock 0

Something like ll_rw_block() should be used carefully and seems only safe for
the readahead case, so the proper long-term fix is to repair these read
operations in the filesystems. For now, avoid the issue here: fall back to
migrating pages that are not uptodate, as fallback_migrate_folio() does, since
pages that still have buffers will probably be read soon.
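For illustration, the following is a simplified, hypothetical sketch of the
fragile read pattern described above. It is modeled on the ll_rw_block() /
ext4_read_bh_lock() style of caller and assumes the current buffer_head API
(submit_bh(REQ_OP_READ, bh)); it is not the exact filesystem code:

#include <linux/blk_types.h>
#include <linux/buffer_head.h>

/* Hypothetical helper: read a metadata buffer the "fragile" way. */
static int fragile_read_bh(struct buffer_head *bh)
{
	if (buffer_uptodate(bh))
		return 0;

	/*
	 * Like ll_rw_block()/ext4_read_bh_lock(): only *try* to lock the
	 * buffer.  If migration currently holds the buffer lock, no read
	 * IO is submitted at all.
	 */
	if (trylock_buffer(bh)) {
		if (buffer_uptodate(bh)) {
			unlock_buffer(bh);
		} else {
			get_bh(bh);
			bh->b_end_io = end_buffer_read_sync;
			submit_bh(REQ_OP_READ, bh);
		}
	}

	/*
	 * The caller waits and treats !uptodate as an IO error.  If the
	 * trylock above failed because migration held the buffer lock,
	 * nothing was submitted and this becomes a false-positive -EIO.
	 */
	wait_on_buffer(bh);
	if (!buffer_uptodate(bh))
		return -EIO;
	return 0;
}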
Fixes: 88dbcbb3a484 ("blkdev: avoid migration stalls for blkdev pages")
Signed-off-by: Zhang Yi
Signed-off-by: Zhihao Cheng
---
 mm/migrate.c | 32 ++++++++++++++++++++++++++++++++
 1 file changed, 32 insertions(+)

diff --git a/mm/migrate.c b/mm/migrate.c
index 6a1597c92261..bded69867619 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -691,6 +691,38 @@ static int __buffer_migrate_folio(struct address_space *mapping,
 	if (!head)
 		return migrate_folio(mapping, dst, src, mode);
 
+	/*
+	 * If the mapped buffers on the page are not uptodate and have
+	 * refcounts, others may probably try to lock the buffer and
+	 * submit read IO through ll_rw_block(), but it will not submit
+	 * IO once it fails to lock the buffer, so try to fall back to
+	 * migrate_folio() to prevent a false-positive EIO.
+	 */
+	if (check_refs) {
+		bool uptodate = true;
+		bool invalidate = false;
+
+		bh = head;
+		do {
+			if (buffer_mapped(bh) && !buffer_uptodate(bh)) {
+				uptodate = false;
+				if (atomic_read(&bh->b_count)) {
+					invalidate = true;
+					break;
+				}
+			}
+			bh = bh->b_this_page;
+		} while (bh != head);
+
+		if (!uptodate) {
+			if (invalidate)
+				invalidate_bh_lrus();
+			if (filemap_release_folio(src, GFP_KERNEL))
+				return migrate_folio(mapping, dst, src, mode);
+			return -EAGAIN;
+		}
+	}
+
 	/* Check whether page does not have extra refs before we do more work */
 	expected_count = folio_expected_refs(mapping, src);
 	if (folio_ref_count(src) != expected_count)