From patchwork Tue Feb 6 11:21:28 2024
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Kefeng Wang <wangkefeng.wang@huawei.com>
X-Patchwork-Id: 13547060
From: Kefeng Wang <wangkefeng.wang@huawei.com>
To: Andrew Morton
CC: Tony Luck, Naoya Horiguchi, Miaohe Lin, Matthew Wilcox, David Hildenbrand, Muchun Song, Benjamin LaHaise, Kefeng Wang
Subject: [PATCH rfcv2 05/11] mm: remove MIGRATE_SYNC_NO_COPY mode
Date: Tue, 6 Feb 2024 19:21:28 +0800
Message-ID: <20240206112134.1479464-6-wangkefeng.wang@huawei.com>
In-Reply-To: <20240206112134.1479464-1-wangkefeng.wang@huawei.com>
References: <20240206112134.1479464-1-wangkefeng.wang@huawei.com>
Commit 2916ecc0f9d4 ("mm/migrate: new migrate mode MIGRATE_SYNC_NO_COPY")
introduced the MIGRATE_SYNC_NO_COPY mode to allow offloading the page copy
to a device DMA engine. It is only used in __migrate_device_pages() to
decide whether or not to copy the old page, and the mode is only set in
the hmm code. As the
MIGRATE_SYNC_NO_COPY setters were removed by the previous cleanup, remove
the now-unnecessary MIGRATE_SYNC_NO_COPY mode.

Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
---
 fs/aio.c                     | 12 +-----------
 fs/hugetlbfs/inode.c         |  5 +----
 include/linux/migrate_mode.h |  5 -----
 mm/balloon_compaction.c     |  8 --------
 mm/migrate.c                 |  8 +-------
 mm/zsmalloc.c                |  8 --------
 6 files changed, 3 insertions(+), 43 deletions(-)

diff --git a/fs/aio.c b/fs/aio.c
index bb2ff48991f3..1d0ca2a2776d 100644
--- a/fs/aio.c
+++ b/fs/aio.c
@@ -409,17 +409,7 @@ static int aio_migrate_folio(struct address_space *mapping, struct folio *dst,
 	struct kioctx *ctx;
 	unsigned long flags;
 	pgoff_t idx;
-	int rc;
-
-	/*
-	 * We cannot support the _NO_COPY case here, because copy needs to
-	 * happen under the ctx->completion_lock. That does not work with the
-	 * migration workflow of MIGRATE_SYNC_NO_COPY.
-	 */
-	if (mode == MIGRATE_SYNC_NO_COPY)
-		return -EINVAL;
-
-	rc = 0;
+	int rc = 0;
 
 	/* mapping->i_private_lock here protects against the kioctx teardown. */
 	spin_lock(&mapping->i_private_lock);

diff --git a/fs/hugetlbfs/inode.c b/fs/hugetlbfs/inode.c
index d746866ae3b6..4f2a423037b6 100644
--- a/fs/hugetlbfs/inode.c
+++ b/fs/hugetlbfs/inode.c
@@ -1129,10 +1129,7 @@ static int hugetlbfs_migrate_folio(struct address_space *mapping,
 		hugetlb_set_folio_subpool(src, NULL);
 	}
 
-	if (mode != MIGRATE_SYNC_NO_COPY)
-		folio_migrate_copy(dst, src);
-	else
-		folio_migrate_flags(dst, src);
+	folio_migrate_copy(dst, src);
 
 	return MIGRATEPAGE_SUCCESS;
 }

diff --git a/include/linux/migrate_mode.h b/include/linux/migrate_mode.h
index f37cc03f9369..9fb482bb7323 100644
--- a/include/linux/migrate_mode.h
+++ b/include/linux/migrate_mode.h
@@ -7,16 +7,11 @@
  *	on most operations but not ->writepage as the potential stall time
  *	is too significant
  * MIGRATE_SYNC will block when migrating pages
- * MIGRATE_SYNC_NO_COPY will block when migrating pages but will not copy pages
- *	with the CPU. Instead, page copy happens outside the migratepage()
- *	callback and is likely using a DMA engine. See migrate_vma() and HMM
- *	(mm/hmm.c) for users of this mode.
  */
 enum migrate_mode {
 	MIGRATE_ASYNC,
 	MIGRATE_SYNC_LIGHT,
 	MIGRATE_SYNC,
-	MIGRATE_SYNC_NO_COPY,
 };
 
 enum migrate_reason {

diff --git a/mm/balloon_compaction.c b/mm/balloon_compaction.c
index 22c96fed70b5..6597ebea8ae2 100644
--- a/mm/balloon_compaction.c
+++ b/mm/balloon_compaction.c
@@ -234,14 +234,6 @@ static int balloon_page_migrate(struct page *newpage, struct page *page,
 {
 	struct balloon_dev_info *balloon = balloon_page_device(page);
 
-	/*
-	 * We can not easily support the no copy case here so ignore it as it
-	 * is unlikely to be used with balloon pages. See include/linux/hmm.h
-	 * for a user of the MIGRATE_SYNC_NO_COPY mode.
-	 */
-	if (mode == MIGRATE_SYNC_NO_COPY)
-		return -EINVAL;
-
 	VM_BUG_ON_PAGE(!PageLocked(page), page);
 	VM_BUG_ON_PAGE(!PageLocked(newpage), newpage);

diff --git a/mm/migrate.c b/mm/migrate.c
index 461badf26eb2..2dcd0d422056 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -668,10 +668,7 @@ static int __migrate_folio(struct address_space *mapping, struct folio *dst,
 	if (src_private)
 		folio_attach_private(dst, folio_detach_private(src));
 
-	if (mode != MIGRATE_SYNC_NO_COPY)
-		folio_migrate_copy(dst, src);
-	else
-		folio_migrate_flags(dst, src);
+	folio_migrate_copy(dst, src);
 	return MIGRATEPAGE_SUCCESS;
 }
@@ -900,7 +897,6 @@ static int fallback_migrate_folio(struct address_space *mapping,
 	/* Only writeback folios in full synchronous migration */
 	switch (mode) {
 	case MIGRATE_SYNC:
-	case MIGRATE_SYNC_NO_COPY:
 		break;
 	default:
 		return -EBUSY;
@@ -1158,7 +1154,6 @@ static int migrate_folio_unmap(new_folio_t get_new_folio,
 	 */
 	switch (mode) {
 	case MIGRATE_SYNC:
-	case MIGRATE_SYNC_NO_COPY:
 		break;
 	default:
 		rc = -EBUSY;
@@ -1369,7 +1364,6 @@ static int unmap_and_move_huge_page(new_folio_t get_new_folio,
 		goto out;
 	switch (mode) {
 	case MIGRATE_SYNC:
-	case MIGRATE_SYNC_NO_COPY:
 		break;
 	default:
 		goto out;

diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
index c937635e0ad1..b9ffe1a041ca 100644
--- a/mm/zsmalloc.c
+++ b/mm/zsmalloc.c
@@ -1811,14 +1811,6 @@ static int zs_page_migrate(struct page *newpage, struct page *page,
 	unsigned long old_obj, new_obj;
 	unsigned int obj_idx;
 
-	/*
-	 * We cannot support the _NO_COPY case here, because copy needs to
-	 * happen under the zs lock, which does not work with
-	 * MIGRATE_SYNC_NO_COPY workflow.
-	 */
-	if (mode == MIGRATE_SYNC_NO_COPY)
-		return -EINVAL;
-
 	VM_BUG_ON_PAGE(!PageIsolated(page), page);
 
 	/* The page is locked, so this pointer must remain valid */