From patchwork Tue Dec 13 09:27:33 2022
X-Patchwork-Submitter: Kefeng Wang
X-Patchwork-Id: 13071853
From: Kefeng Wang <wangkefeng.wang@huawei.com>
To: Andrew Morton, David Hildenbrand, Oscar Salvador, SeongJae Park
CC: , , , , , Kefeng Wang
Subject: [PATCH -next 6/8] mm: damon: paddr: convert damon_pa_*() to use folios
Date: Tue, 13 Dec 2022 17:27:33 +0800
Message-ID: <20221213092735.187924-7-wangkefeng.wang@huawei.com>
X-Mailer: git-send-email 2.35.3
In-Reply-To: <20221213092735.187924-1-wangkefeng.wang@huawei.com>
References: <20221213092735.187924-1-wangkefeng.wang@huawei.com>
MIME-Version: 1.0
With damon_get_folio(), let's convert all the damon_pa_*() functions to use folios.
Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
---
 mm/damon/paddr.c | 44 +++++++++++++++++++-------------------------
 1 file changed, 19 insertions(+), 25 deletions(-)

diff --git a/mm/damon/paddr.c b/mm/damon/paddr.c
index 6b36de1396a4..95d4686611a5 100644
--- a/mm/damon/paddr.c
+++ b/mm/damon/paddr.c
@@ -33,17 +33,15 @@ static bool __damon_pa_mkold(struct folio *folio, struct vm_area_struct *vma,
 
 static void damon_pa_mkold(unsigned long paddr)
 {
-	struct folio *folio;
-	struct page *page = damon_get_page(PHYS_PFN(paddr));
+	struct folio *folio = damon_get_folio(PHYS_PFN(paddr));
 	struct rmap_walk_control rwc = {
 		.rmap_one = __damon_pa_mkold,
 		.anon_lock = folio_lock_anon_vma_read,
 	};
 	bool need_lock;
 
-	if (!page)
+	if (!folio)
 		return;
-	folio = page_folio(page);
 
 	if (!folio_mapped(folio) || !folio_raw_mapping(folio)) {
 		folio_set_idle(folio);
@@ -93,7 +91,7 @@ static bool __damon_pa_young(struct folio *folio, struct vm_area_struct *vma,
 	DEFINE_FOLIO_VMA_WALK(pvmw, folio, vma, addr, 0);
 
 	result->accessed = false;
-	result->page_sz = PAGE_SIZE;
+	result->page_sz = PAGE_SIZE * folio_nr_pages(folio);
 	while (page_vma_mapped_walk(&pvmw)) {
 		addr = pvmw.address;
 		if (pvmw.pte) {
@@ -122,8 +120,7 @@ static bool __damon_pa_young(struct folio *folio, struct vm_area_struct *vma,
 
 static bool damon_pa_young(unsigned long paddr, unsigned long *page_sz)
 {
-	struct folio *folio;
-	struct page *page = damon_get_page(PHYS_PFN(paddr));
+	struct folio *folio = damon_get_folio(PHYS_PFN(paddr));
 	struct damon_pa_access_chk_result result = {
 		.page_sz = PAGE_SIZE,
 		.accessed = false,
@@ -135,9 +132,8 @@ static bool damon_pa_young(unsigned long paddr, unsigned long *page_sz)
 	};
 	bool need_lock;
 
-	if (!page)
+	if (!folio)
 		return false;
-	folio = page_folio(page);
 
 	if (!folio_mapped(folio) || !folio_raw_mapping(folio)) {
 		if (folio_test_idle(folio))
@@ -205,28 +201,28 @@ static unsigned int damon_pa_check_accesses(struct damon_ctx *ctx)
 static unsigned long damon_pa_pageout(struct damon_region *r)
 {
 	unsigned long addr, applied;
-	LIST_HEAD(page_list);
+	LIST_HEAD(folio_list);
 
 	for (addr = r->ar.start; addr < r->ar.end; addr += PAGE_SIZE) {
-		struct page *page = damon_get_page(PHYS_PFN(addr));
+		struct folio *folio = damon_get_folio(PHYS_PFN(addr));
 
-		if (!page)
+		if (!folio)
 			continue;
 
-		ClearPageReferenced(page);
-		test_and_clear_page_young(page);
-		if (isolate_lru_page(page)) {
-			put_page(page);
+		folio_clear_referenced(folio);
+		folio_test_clear_young(folio);
+		if (folio_isolate_lru(folio)) {
+			folio_put(folio);
 			continue;
 		}
-		if (PageUnevictable(page)) {
-			putback_lru_page(page);
+		if (folio_test_unevictable(folio)) {
+			folio_putback_lru(folio);
 		} else {
-			list_add(&page->lru, &page_list);
-			put_page(page);
+			list_add(&folio->lru, &folio_list);
+			folio_put(folio);
 		}
 	}
-	applied = reclaim_pages(&page_list);
+	applied = reclaim_pages(&folio_list);
 	cond_resched();
 	return applied * PAGE_SIZE;
 }
@@ -237,12 +233,10 @@ static inline unsigned long damon_pa_mark_accessed_or_deactivate(
 	unsigned long addr, applied = 0;
 
 	for (addr = r->ar.start; addr < r->ar.end; addr += PAGE_SIZE) {
-		struct page *page = damon_get_page(PHYS_PFN(addr));
-		struct folio *folio;
+		struct folio *folio = damon_get_folio(PHYS_PFN(addr));
 
-		if (!page)
+		if (!folio)
 			continue;
-		folio = page_folio(page);
 
 		if (mark_accessed)
 			folio_mark_accessed(folio);