From patchwork Mon Nov 13 15:22:17 2023
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Kefeng Wang <wangkefeng.wang@huawei.com>
X-Patchwork-Id: 13454122
From: Kefeng Wang <wangkefeng.wang@huawei.com>
To: Andrew Morton
CC: linux-mm@kvack.org, Matthew Wilcox, David Hildenbrand,
	Sidhartha Kumar, Kefeng Wang
Subject: [PATCH v2 1/6] mm: ksm: use more folio api in
	ksm_might_need_to_copy()
Date: Mon, 13 Nov 2023 23:22:17 +0800
Message-ID: <20231113152222.3495908-2-wangkefeng.wang@huawei.com>
X-Mailer: git-send-email 2.27.0
In-Reply-To: <20231113152222.3495908-1-wangkefeng.wang@huawei.com>
References: <20231113152222.3495908-1-wangkefeng.wang@huawei.com>

Since KSM only supports normal pages, there is no swap-out/in of KSM
large folios either, so add a large folio check in
ksm_might_need_to_copy(). Also convert page->index to folio->index, as
page->index is going away. Then convert ksm_might_need_to_copy() to use
more of the folio API, which saves nine compound_head() calls, and
shorten 'address' to 'addr' to stay within the maximum line length.

Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
---
 include/linux/ksm.h |  4 ++--
 mm/ksm.c            | 39 +++++++++++++++++++++------------------
 2 files changed, 23 insertions(+), 20 deletions(-)

diff --git a/include/linux/ksm.h b/include/linux/ksm.h
index c2dd786a30e1..4643d5244e77 100644
--- a/include/linux/ksm.h
+++ b/include/linux/ksm.h
@@ -77,7 +77,7 @@ static inline void ksm_exit(struct mm_struct *mm)
  * but what if the vma was unmerged while the page was swapped out?
  */
 struct page *ksm_might_need_to_copy(struct page *page,
-		struct vm_area_struct *vma, unsigned long address);
+		struct vm_area_struct *vma, unsigned long addr);
 
 void rmap_walk_ksm(struct folio *folio, struct rmap_walk_control *rwc);
 void folio_migrate_ksm(struct folio *newfolio, struct folio *folio);
@@ -130,7 +130,7 @@ static inline int ksm_madvise(struct vm_area_struct *vma, unsigned long start,
 }
 
 static inline struct page *ksm_might_need_to_copy(struct page *page,
-		struct vm_area_struct *vma, unsigned long address)
+		struct vm_area_struct *vma, unsigned long addr)
 {
 	return page;
 }
diff --git a/mm/ksm.c b/mm/ksm.c
index 7efcc68ccc6e..e9d72254e66c 100644
--- a/mm/ksm.c
+++ b/mm/ksm.c
@@ -2876,48 +2876,51 @@ void __ksm_exit(struct mm_struct *mm)
 }
 
 struct page *ksm_might_need_to_copy(struct page *page,
-		struct vm_area_struct *vma, unsigned long address)
+		struct vm_area_struct *vma, unsigned long addr)
 {
 	struct folio *folio = page_folio(page);
 	struct anon_vma *anon_vma = folio_anon_vma(folio);
-	struct page *new_page;
+	struct folio *new_folio;
 
-	if (PageKsm(page)) {
-		if (page_stable_node(page) &&
+	if (folio_test_large(folio))
+		return page;
+
+	if (folio_test_ksm(folio)) {
+		if (folio_stable_node(folio) &&
 		    !(ksm_run & KSM_RUN_UNMERGE))
 			return page;	/* no need to copy it */
 	} else if (!anon_vma) {
 		return page;		/* no need to copy it */
-	} else if (page->index == linear_page_index(vma, address) &&
+	} else if (folio->index == linear_page_index(vma, addr) &&
 			anon_vma->root == vma->anon_vma->root) {
 		return page;		/* still no need to copy it */
 	}
 	if (PageHWPoison(page))
 		return ERR_PTR(-EHWPOISON);
-	if (!PageUptodate(page))
+	if (!folio_test_uptodate(folio))
 		return page;		/* let do_swap_page report the error */
 
-	new_page = alloc_page_vma(GFP_HIGHUSER_MOVABLE, vma, address);
-	if (new_page &&
-	    mem_cgroup_charge(page_folio(new_page), vma->vm_mm, GFP_KERNEL)) {
-		put_page(new_page);
-		new_page = NULL;
+	new_folio = vma_alloc_folio(GFP_HIGHUSER_MOVABLE, 0, vma, addr, false);
+	if (new_folio &&
+	    mem_cgroup_charge(new_folio, vma->vm_mm, GFP_KERNEL)) {
+		folio_put(new_folio);
+		new_folio = NULL;
 	}
-	if (new_page) {
-		if (copy_mc_user_highpage(new_page, page, address, vma)) {
-			put_page(new_page);
+	if (new_folio) {
+		if (copy_mc_user_highpage(&new_folio->page, page, addr, vma)) {
+			folio_put(new_folio);
 			memory_failure_queue(page_to_pfn(page), 0);
 			return ERR_PTR(-EHWPOISON);
 		}
-		SetPageDirty(new_page);
-		__SetPageUptodate(new_page);
-		__SetPageLocked(new_page);
+		folio_set_dirty(new_folio);
+		__folio_mark_uptodate(new_folio);
+		__folio_set_locked(new_folio);
 #ifdef CONFIG_SWAP
 		count_vm_event(KSM_SWPIN_COPY);
 #endif
 	}
 
-	return new_page;
+	return new_folio ? &new_folio->page : NULL;
 }
 
 void rmap_walk_ksm(struct folio *folio, struct rmap_walk_control *rwc)
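
For readers less familiar with the folio conversion work, the nine saved
compound_head() calls come from the page-flag helpers: each PageXxx()/
SetPageXxx() call resolves compound_head() internally before touching the
flags, while the folio_* variants operate on the folio directly. A minimal
sketch of the pattern this patch applies (the two helper functions below
are hypothetical, not part of the patch):

  /* Before: every page-flag helper repeats the compound_head() lookup. */
  static void mark_ready_page(struct page *page)
  {
  	SetPageDirty(page);		/* compound_head(page) inside */
  	__SetPageUptodate(page);	/* compound_head(page) again  */
  	__SetPageLocked(page);		/* and again                  */
  }

  /* After: resolve the folio once, then use the folio accessors. */
  static void mark_ready_folio(struct page *page)
  {
  	struct folio *folio = page_folio(page);	/* one lookup */

  	folio_set_dirty(folio);
  	__folio_mark_uptodate(folio);
  	__folio_set_locked(folio);
  }

The behavior is unchanged; only the repeated head-page lookups disappear.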
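Note also that the return contract is preserved: the function can still hand
back the original page, a freshly copied page, NULL when allocation or the
memcg charge fails, or ERR_PTR(-EHWPOISON). A simplified caller-side sketch,
modelled loosely on the do_swap_page() path (the error labels are
illustrative, not the exact mm/memory.c code):

  page = ksm_might_need_to_copy(page, vma, addr);
  if (unlikely(!page)) {
  	/* allocation or memcg charge of the private copy failed */
  	ret = VM_FAULT_OOM;
  	goto out;
  } else if (unlikely(PTR_ERR(page) == -EHWPOISON)) {
  	/* the source page was hardware-poisoned during the copy */
  	ret = VM_FAULT_HWPOISON;
  	goto out;
  }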