From patchwork Tue Sep 26 00:52:48 2023
X-Patchwork-Submitter: Kefeng Wang
X-Patchwork-Id: 13398591
From: Kefeng Wang <wangkefeng.wang@huawei.com>
To: Andrew Morton
CC: Mike Rapoport, Matthew Wilcox, David Hildenbrand, Zi Yan, Kefeng Wang
Subject: [PATCH -next 3/9] mm: huge_memory: use a folio in change_huge_pmd()
Date: Tue, 26 Sep 2023 08:52:48 +0800
Message-ID: <20230926005254.2861577-4-wangkefeng.wang@huawei.com>
X-Mailer: git-send-email 2.27.0
In-Reply-To: <20230926005254.2861577-1-wangkefeng.wang@huawei.com>
References: <20230926005254.2861577-1-wangkefeng.wang@huawei.com>

Use a folio in change_huge_pmd(); this is in preparation for the
conversion of xchg_page_access_time() to take a folio.

Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
---
 mm/huge_memory.c | 13 +++++++------
 1 file changed, 7 insertions(+), 6 deletions(-)

diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 0f93a73115f7..c7efa214add8 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -1849,7 +1849,7 @@ int change_huge_pmd(struct mmu_gather *tlb, struct vm_area_struct *vma,
 #ifdef CONFIG_ARCH_ENABLE_THP_MIGRATION
 	if (is_swap_pmd(*pmd)) {
 		swp_entry_t entry = pmd_to_swp_entry(*pmd);
-		struct page *page = pfn_swap_entry_to_page(entry);
+		struct folio *folio = page_folio(pfn_swap_entry_to_page(entry));
 		pmd_t newpmd;
 
 		VM_BUG_ON(!is_pmd_migration_entry(*pmd));
@@ -1858,7 +1858,7 @@ int change_huge_pmd(struct mmu_gather *tlb, struct vm_area_struct *vma,
 			 * A protection check is difficult so
 			 * just be safe and disable write
 			 */
-			if (PageAnon(page))
+			if (folio_test_anon(folio))
 				entry = make_readable_exclusive_migration_entry(swp_offset(entry));
 			else
 				entry = make_readable_migration_entry(swp_offset(entry));
@@ -1880,7 +1880,7 @@ int change_huge_pmd(struct mmu_gather *tlb, struct vm_area_struct *vma,
 #endif
 
 	if (prot_numa) {
-		struct page *page;
+		struct folio *folio;
 		bool toptier;
 		/*
 		 * Avoid trapping faults against the zero page. The read-only
@@ -1893,8 +1893,8 @@ int change_huge_pmd(struct mmu_gather *tlb, struct vm_area_struct *vma,
 		if (pmd_protnone(*pmd))
 			goto unlock;
 
-		page = pmd_page(*pmd);
-		toptier = node_is_toptier(page_to_nid(page));
+		folio = page_folio(pmd_page(*pmd));
+		toptier = node_is_toptier(folio_nid(folio));
 		/*
 		 * Skip scanning top tier node if normal numa
 		 * balancing is disabled
@@ -1905,7 +1905,8 @@ int change_huge_pmd(struct mmu_gather *tlb, struct vm_area_struct *vma,
 
 		if (sysctl_numa_balancing_mode & NUMA_BALANCING_MEMORY_TIERING &&
 		    !toptier)
-			xchg_page_access_time(page, jiffies_to_msecs(jiffies));
+			xchg_page_access_time(&folio->page,
+					      jiffies_to_msecs(jiffies));
 	}
 	/*
 	 * In case prot_numa, we are under mmap_read_lock(mm). It's critical
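
For reviewers following the series: the last hunk still passes &folio->page
because xchg_page_access_time() has not been converted yet. Below is a minimal
sketch of what a folio-taking variant could look like; the helper name
folio_xchg_access_time() and its placement are assumptions made for
illustration, not something this patch adds. It simply mirrors the existing
xchg_page_access_time() wrapper around page_cpupid_xchg_last().

/*
 * Illustrative sketch only, not part of this patch: a possible
 * folio-based counterpart to xchg_page_access_time(), assuming it
 * keeps delegating to page_cpupid_xchg_last() on the folio's head
 * page. The name folio_xchg_access_time() is a guess at what a
 * later patch in the series might introduce.
 */
static inline int folio_xchg_access_time(struct folio *folio, int time)
{
	int last_time;

	last_time = page_cpupid_xchg_last(&folio->page,
					  time >> PAGE_ACCESS_TIME_BUCKETS);
	return last_time << PAGE_ACCESS_TIME_BUCKETS;
}

With such a helper, the call site above would become
folio_xchg_access_time(folio, jiffies_to_msecs(jiffies)) and the
&folio->page round trip would disappear.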