From patchwork Tue Nov 7 13:52:11 2023
X-Patchwork-Submitter: Kefeng Wang <wangkefeng.wang@huawei.com>
X-Patchwork-Id: 13448857
From: Kefeng Wang <wangkefeng.wang@huawei.com>
To: Andrew Morton
Cc: Matthew Wilcox, David Hildenbrand, Kefeng Wang
Subject: [PATCH 1/6] mm: ksm: use more folio api in ksm_might_need_to_copy()
Date: Tue, 7 Nov 2023 21:52:11 +0800
Message-ID: <20231107135216.415926-2-wangkefeng.wang@huawei.com>
X-Mailer: git-send-email 2.27.0
In-Reply-To: <20231107135216.415926-1-wangkefeng.wang@huawei.com>
References: <20231107135216.415926-1-wangkefeng.wang@huawei.com>
MIME-Version: 1.0

Convert ksm_might_need_to_copy() to use more of the folio API, which
saves nine compound_head() calls, and shorten the 'address' parameter
to 'addr' to keep the affected lines within the maximum line length.

Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
---
 include/linux/ksm.h |  4 ++--
 mm/ksm.c            | 36 ++++++++++++++++++------------------
 2 files changed, 20 insertions(+), 20 deletions(-)

diff --git a/include/linux/ksm.h b/include/linux/ksm.h
index c2dd786a30e1..4643d5244e77 100644
--- a/include/linux/ksm.h
+++ b/include/linux/ksm.h
@@ -77,7 +77,7 @@ static inline void ksm_exit(struct mm_struct *mm)
  * but what if the vma was unmerged while the page was swapped out?
  */
 struct page *ksm_might_need_to_copy(struct page *page,
-			struct vm_area_struct *vma, unsigned long address);
+			struct vm_area_struct *vma, unsigned long addr);
 
 void rmap_walk_ksm(struct folio *folio, struct rmap_walk_control *rwc);
 void folio_migrate_ksm(struct folio *newfolio, struct folio *folio);
@@ -130,7 +130,7 @@ static inline int ksm_madvise(struct vm_area_struct *vma, unsigned long start,
 }
 
 static inline struct page *ksm_might_need_to_copy(struct page *page,
-			struct vm_area_struct *vma, unsigned long address)
+			struct vm_area_struct *vma, unsigned long addr)
 {
 	return page;
 }
diff --git a/mm/ksm.c b/mm/ksm.c
index 7efcc68ccc6e..e5b8b677e2de 100644
--- a/mm/ksm.c
+++ b/mm/ksm.c
@@ -2876,48 +2876,48 @@ void __ksm_exit(struct mm_struct *mm)
 }
 
 struct page *ksm_might_need_to_copy(struct page *page,
-			struct vm_area_struct *vma, unsigned long address)
+			struct vm_area_struct *vma, unsigned long addr)
 {
 	struct folio *folio = page_folio(page);
 	struct anon_vma *anon_vma = folio_anon_vma(folio);
-	struct page *new_page;
+	struct folio *new_folio;
 
-	if (PageKsm(page)) {
-		if (page_stable_node(page) &&
+	if (folio_test_ksm(folio)) {
+		if (folio_stable_node(folio) &&
 		    !(ksm_run & KSM_RUN_UNMERGE))
 			return page;	/* no need to copy it */
 	} else if (!anon_vma) {
 		return page;		/* no need to copy it */
-	} else if (page->index == linear_page_index(vma, address) &&
+	} else if (page->index == linear_page_index(vma, addr) &&
 			anon_vma->root == vma->anon_vma->root) {
 		return page;		/* still no need to copy it */
 	}
 	if (PageHWPoison(page))
 		return ERR_PTR(-EHWPOISON);
-	if (!PageUptodate(page))
+	if (!folio_test_uptodate(folio))
 		return page;		/* let do_swap_page report the error */
 
-	new_page = alloc_page_vma(GFP_HIGHUSER_MOVABLE, vma, address);
-	if (new_page &&
-	    mem_cgroup_charge(page_folio(new_page), vma->vm_mm, GFP_KERNEL)) {
-		put_page(new_page);
-		new_page = NULL;
+	new_folio = vma_alloc_folio(GFP_HIGHUSER_MOVABLE, 0, vma, addr, false);
+	if (new_folio &&
+	    mem_cgroup_charge(new_folio, vma->vm_mm, GFP_KERNEL)) {
+		folio_put(new_folio);
+		new_folio = NULL;
 	}
-	if (new_page) {
-		if (copy_mc_user_highpage(new_page, page, address, vma)) {
-			put_page(new_page);
+	if (new_folio) {
+		if (copy_mc_user_highpage(&new_folio->page, page, addr, vma)) {
+			folio_put(new_folio);
 			memory_failure_queue(page_to_pfn(page), 0);
 			return ERR_PTR(-EHWPOISON);
 		}
-		SetPageDirty(new_page);
-		__SetPageUptodate(new_page);
-		__SetPageLocked(new_page);
+		folio_set_dirty(new_folio);
+		__folio_mark_uptodate(new_folio);
+		__folio_set_locked(new_folio);
 #ifdef CONFIG_SWAP
 		count_vm_event(KSM_SWPIN_COPY);
 #endif
 	}
 
-	return new_page;
+	return new_folio ? &new_folio->page : NULL;
 }
 
 void rmap_walk_ksm(struct folio *folio, struct rmap_walk_control *rwc)
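
A note on where the saved compound_head() calls come from: every legacy
PageXxx()/SetPageXxx() helper first resolves a possible tail page to
its head page via compound_head(), while a struct folio can never be a
tail page, so the folio_test_*()/folio_set_*() helpers read and write
the flags directly. The userspace toy model below sketches that
difference; the types and helpers are simplified stand-ins, not the
real definitions from include/linux/page-flags.h:

#include <assert.h>
#include <stdbool.h>
#include <stdio.h>

/* Toy model of the page/folio relationship, not the real kernel types. */
struct page {
	unsigned long flags;
	struct page *head;		/* NULL when this is a head page */
};

struct folio {
	struct page page;		/* always a head page */
};

#define PG_uptodate 0x1UL

/* Tail pages defer to their head page: one extra lookup per call. */
static struct page *compound_head(struct page *page)
{
	return page->head ? page->head : page;
}

/* Legacy helper: pays the compound_head() cost on every call. */
static bool PageUptodate(struct page *page)
{
	return compound_head(page)->flags & PG_uptodate;
}

/* Resolve the head once... */
static struct folio *page_folio(struct page *page)
{
	return (struct folio *)compound_head(page);
}

/* ...then each folio test is a plain flag read, with no lookup. */
static bool folio_test_uptodate(struct folio *folio)
{
	return folio->page.flags & PG_uptodate;
}

int main(void)
{
	struct page head = { .flags = PG_uptodate, .head = NULL };
	struct page tail = { .flags = 0, .head = &head };
	struct folio *folio = page_folio(&tail);	/* one lookup */

	assert(PageUptodate(&tail));		/* hidden lookup again */
	assert(folio_test_uptodate(folio));	/* no lookup */
	printf("folio path skips the repeated compound_head()\n");
	return 0;
}

In the patched function, page_folio() performs that resolution once at
entry, so the later flag tests and updates on the folio all skip it.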
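
The return contract appears unchanged by the conversion: callers still
get the original page back when no copy is needed, a freshly copied
page, NULL when allocation fails, or an ERR_PTR for a poisoned page.
A sketch of the caller-side pattern, modelled on how a swap-in path
such as do_swap_page() is expected to consume the result
(handle_ksm_copy() is a hypothetical wrapper for illustration, not
code from mm/memory.c):

/* Hypothetical wrapper showing the four possible outcomes. */
static vm_fault_t handle_ksm_copy(struct page *page,
				  struct vm_area_struct *vma,
				  unsigned long addr, struct folio **out)
{
	page = ksm_might_need_to_copy(page, vma, addr);
	if (unlikely(!page))
		return VM_FAULT_OOM;		/* allocation failed */
	if (unlikely(page == ERR_PTR(-EHWPOISON)))
		return VM_FAULT_HWPOISON;	/* poisoned source page */

	/* Either the original page or its private copy. */
	*out = page_folio(page);
	return 0;
}

On the copy path the returned page comes back already dirty, uptodate,
and locked, per the folio flag calls at the end of the function.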