From patchwork Sat Nov 18 02:32:28 2023
From: Kefeng Wang <wangkefeng.wang@huawei.com>
To: Andrew Morton
Cc: Matthew Wilcox, David Hildenbrand, Sidhartha Kumar, Vishal Moola,
 Kefeng Wang
Subject: [PATCH v3 1/5] mm: ksm: use more folio api in ksm_might_need_to_copy()
Date: Sat, 18 Nov 2023 10:32:28 +0800
Message-ID: <20231118023232.1409103-2-wangkefeng.wang@huawei.com>
In-Reply-To: <20231118023232.1409103-1-wangkefeng.wang@huawei.com>
References: <20231118023232.1409103-1-wangkefeng.wang@huawei.com>

Since KSM only supports normal pages, and there is no swap-out or
swap-in of KSM large folios either, add a large folio check in
ksm_might_need_to_copy(). Also convert page->index to folio->index,
as page->index is going away.

Then convert ksm_might_need_to_copy() to use more of the folio API,
which saves nine compound_head() calls, and shorten 'address' to
'addr' so lines stay within the maximum line length.

Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
---
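An illustrative aside, not part of the patch: the nine compound_head()
calls saved here come from the page-flag helpers. Each PageXxx()/
SetPageXxx() call re-derives the head page internally, while the folio
helpers operate on an already-resolved folio. A minimal sketch of the
pattern (simplified; the real page-flag macros are more involved):

	/* Before: every page-flag helper hides a compound_head() call. */
	if (!PageUptodate(page))	/* compound_head(page) inside */
		return page;
	SetPageDirty(page);		/* compound_head(page) again */

	/* After: resolve the head page once, then reuse the folio. */
	struct folio *folio = page_folio(page);	/* the only lookup */

	if (!folio_test_uptodate(folio))	/* direct flag test */
		return page;
	folio_set_dirty(folio);			/* direct flag set */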
 include/linux/ksm.h |  4 ++--
 mm/ksm.c            | 39 +++++++++++++++++++++------------------
 2 files changed, 23 insertions(+), 20 deletions(-)

diff --git a/include/linux/ksm.h b/include/linux/ksm.h
index c2dd786a30e1..4643d5244e77 100644
--- a/include/linux/ksm.h
+++ b/include/linux/ksm.h
@@ -77,7 +77,7 @@ static inline void ksm_exit(struct mm_struct *mm)
  * but what if the vma was unmerged while the page was swapped out?
  */
 struct page *ksm_might_need_to_copy(struct page *page,
-			struct vm_area_struct *vma, unsigned long address);
+			struct vm_area_struct *vma, unsigned long addr);
 
 void rmap_walk_ksm(struct folio *folio, struct rmap_walk_control *rwc);
 void folio_migrate_ksm(struct folio *newfolio, struct folio *folio);
@@ -130,7 +130,7 @@ static inline int ksm_madvise(struct vm_area_struct *vma, unsigned long start,
 }
 
 static inline struct page *ksm_might_need_to_copy(struct page *page,
-			struct vm_area_struct *vma, unsigned long address)
+			struct vm_area_struct *vma, unsigned long addr)
 {
 	return page;
 }
diff --git a/mm/ksm.c b/mm/ksm.c
index 6a831009b4cb..6d841c22642b 100644
--- a/mm/ksm.c
+++ b/mm/ksm.c
@@ -2876,48 +2876,51 @@ void __ksm_exit(struct mm_struct *mm)
 }
 
 struct page *ksm_might_need_to_copy(struct page *page,
-			struct vm_area_struct *vma, unsigned long address)
+			struct vm_area_struct *vma, unsigned long addr)
 {
 	struct folio *folio = page_folio(page);
 	struct anon_vma *anon_vma = folio_anon_vma(folio);
-	struct page *new_page;
+	struct folio *new_folio;
 
-	if (PageKsm(page)) {
-		if (page_stable_node(page) &&
+	if (folio_test_large(folio))
+		return page;
+
+	if (folio_test_ksm(folio)) {
+		if (folio_stable_node(folio) &&
 		    !(ksm_run & KSM_RUN_UNMERGE))
 			return page;	/* no need to copy it */
 	} else if (!anon_vma) {
 		return page;		/* no need to copy it */
-	} else if (page->index == linear_page_index(vma, address) &&
+	} else if (folio->index == linear_page_index(vma, addr) &&
 			anon_vma->root == vma->anon_vma->root) {
 		return page;		/* still no need to copy it */
 	}
 	if (PageHWPoison(page))
 		return ERR_PTR(-EHWPOISON);
-	if (!PageUptodate(page))
+	if (!folio_test_uptodate(folio))
 		return page;		/* let do_swap_page report the error */
 
-	new_page = alloc_page_vma(GFP_HIGHUSER_MOVABLE, vma, address);
-	if (new_page &&
-	    mem_cgroup_charge(page_folio(new_page), vma->vm_mm, GFP_KERNEL)) {
-		put_page(new_page);
-		new_page = NULL;
+	new_folio = vma_alloc_folio(GFP_HIGHUSER_MOVABLE, 0, vma, addr, false);
+	if (new_folio &&
+	    mem_cgroup_charge(new_folio, vma->vm_mm, GFP_KERNEL)) {
+		folio_put(new_folio);
+		new_folio = NULL;
 	}
-	if (new_page) {
-		if (copy_mc_user_highpage(new_page, page, address, vma)) {
-			put_page(new_page);
+	if (new_folio) {
+		if (copy_mc_user_highpage(&new_folio->page, page, addr, vma)) {
+			folio_put(new_folio);
 			memory_failure_queue(page_to_pfn(page), 0);
 			return ERR_PTR(-EHWPOISON);
 		}
-		SetPageDirty(new_page);
-		__SetPageUptodate(new_page);
-		__SetPageLocked(new_page);
+		folio_set_dirty(new_folio);
+		__folio_mark_uptodate(new_folio);
+		__folio_set_locked(new_folio);
 #ifdef CONFIG_SWAP
 		count_vm_event(KSM_SWPIN_COPY);
 #endif
 	}
 
-	return new_page;
+	return new_folio ? &new_folio->page : NULL;
 }
 
 void rmap_walk_ksm(struct folio *folio, struct rmap_walk_control *rwc)
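
For context (again not part of the patch), the function's contract is
unchanged by the conversion. A caller must handle four outcomes: the
original page (no copy needed), ERR_PTR(-EHWPOISON) for a poisoned
source page, a freshly copied page, or NULL when the folio allocation
or memcg charge fails. A hedged sketch of a hypothetical caller,
modelled on, but not copied from, do_swap_page():

	page = ksm_might_need_to_copy(page, vma, addr);
	if (unlikely(!page)) {
		/* vma_alloc_folio() or mem_cgroup_charge() failed */
		ret = VM_FAULT_OOM;
		goto out;
	} else if (unlikely(page == ERR_PTR(-EHWPOISON))) {
		/* the source page was hardware-poisoned */
		ret = VM_FAULT_HWPOISON;
		goto out;
	}
	/* otherwise: the original page, or a private uptodate copy */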