From patchwork Fri Jul 12 02:44:55 2024
X-Patchwork-Submitter: Zi Yan
X-Patchwork-Id: 13731253
From: Zi Yan
To: David Hildenbrand, "Huang, Ying", linux-mm@kvack.org
Cc: Zi Yan, Andrew Morton, Baolin Wang, linux-kernel@vger.kernel.org
Subject: [RFC PATCH 3/3] mm/migrate: move common code to numa_migrate_check (was numa_migrate_prep)
Date: Thu, 11 Jul 2024 22:44:55 -0400
Message-ID: <20240712024455.163543-4-zi.yan@sent.com>
X-Mailer: git-send-email 2.43.0
In-Reply-To: <20240712024455.163543-1-zi.yan@sent.com>
References: <20240712024455.163543-1-zi.yan@sent.com>
From: Zi Yan

do_numa_page() and do_huge_pmd_numa_page() share a lot of common code.
To reduce redundancy, move the common code into numa_migrate_prep() and
rename the function to numa_migrate_check() to reflect its functionality.

There are some differences in behavior between do_numa_page() and
do_huge_pmd_numa_page() before the code move:

1. do_huge_pmd_numa_page() did not check shared folios to set TNF_SHARED.
2. do_huge_pmd_numa_page() did not check and skip zone device folios.

Signed-off-by: Zi Yan
---
A minimal userspace sketch of the consolidated call flow is appended
after the diff.

 mm/huge_memory.c | 28 ++++++-----------
 mm/internal.h    |  5 +--
 mm/memory.c      | 81 +++++++++++++++++++++++-------------------------
 3 files changed, 52 insertions(+), 62 deletions(-)

diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 8c11d6da4b36..66d67d13e0dc 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -1670,10 +1670,10 @@ vm_fault_t do_huge_pmd_numa_page(struct vm_fault *vmf)
 	pmd_t pmd;
 	struct folio *folio;
 	unsigned long haddr = vmf->address & HPAGE_PMD_MASK;
-	int nid = NUMA_NO_NODE;
-	int target_nid, last_cpupid = (-1 & LAST_CPUPID_MASK);
+	int target_nid = NUMA_NO_NODE;
+	int last_cpupid = (-1 & LAST_CPUPID_MASK);
 	bool writable = false;
-	int flags = 0;
+	int flags = 0, nr_pages;
 
 	vmf->ptl = pmd_lock(vma->vm_mm, vmf->pmd);
 	if (unlikely(!pmd_same(oldpmd, *vmf->pmd))) {
@@ -1693,21 +1693,13 @@ vm_fault_t do_huge_pmd_numa_page(struct vm_fault *vmf)
 		writable = true;
 
 	folio = vm_normal_folio_pmd(vma, haddr, pmd);
-	if (!folio)
+	if (!folio || folio_is_zone_device(folio))
 		goto out_map;
 
-	/* See similar comment in do_numa_page for explanation */
-	if (!writable)
-		flags |= TNF_NO_GROUP;
+	nr_pages = folio_nr_pages(folio);
 
-	nid = folio_nid(folio);
-	/*
-	 * For memory tiering mode, cpupid of slow memory page is used
-	 * to record page access time.
-	 */
-	if (folio_has_cpupid(folio))
-		last_cpupid = folio_last_cpupid(folio);
-	target_nid = numa_migrate_prep(folio, vmf, haddr, nid, &flags);
+	target_nid = numa_migrate_check(folio, vmf, haddr, writable,
+					&flags, &last_cpupid);
 	if (target_nid == NUMA_NO_NODE)
 		goto out_map;
 	if (migrate_misplaced_folio_prepare(folio, vma, target_nid)) {
@@ -1720,8 +1712,8 @@ vm_fault_t do_huge_pmd_numa_page(struct vm_fault *vmf)
 
 	if (!migrate_misplaced_folio(folio, vma, target_nid)) {
 		flags |= TNF_MIGRATED;
-		nid = target_nid;
 	} else {
+		target_nid = NUMA_NO_NODE;
 		flags |= TNF_MIGRATE_FAIL;
 		vmf->ptl = pmd_lock(vma->vm_mm, vmf->pmd);
 		if (unlikely(!pmd_same(oldpmd, *vmf->pmd))) {
@@ -1732,8 +1724,8 @@ vm_fault_t do_huge_pmd_numa_page(struct vm_fault *vmf)
 	}
 
 out:
-	if (nid != NUMA_NO_NODE)
-		task_numa_fault(last_cpupid, nid, HPAGE_PMD_NR, flags);
+	if (target_nid != NUMA_NO_NODE)
+		task_numa_fault(last_cpupid, target_nid, nr_pages, flags);
 
 	return 0;
diff --git a/mm/internal.h b/mm/internal.h
index b4d86436565b..7782b7bb3383 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -1217,8 +1217,9 @@ void vunmap_range_noflush(unsigned long start, unsigned long end);
 
 void __vunmap_range_noflush(unsigned long start, unsigned long end);
 
-int numa_migrate_prep(struct folio *folio, struct vm_fault *vmf,
-		      unsigned long addr, int page_nid, int *flags);
+int numa_migrate_check(struct folio *folio, struct vm_fault *vmf,
+		       unsigned long addr, bool writable,
+		       int *flags, int *last_cpupid);
 
 void free_zone_device_folio(struct folio *folio);
 int migrate_device_coherent_page(struct page *page);
diff --git a/mm/memory.c b/mm/memory.c
index 96c2f5b3d796..a252c0f13755 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -5209,16 +5209,42 @@ static vm_fault_t do_fault(struct vm_fault *vmf)
 	return ret;
 }
 
-int numa_migrate_prep(struct folio *folio, struct vm_fault *vmf,
-		      unsigned long addr, int page_nid, int *flags)
+int numa_migrate_check(struct folio *folio, struct vm_fault *vmf,
+		       unsigned long addr, bool writable,
+		       int *flags, int *last_cpupid)
 {
 	struct vm_area_struct *vma = vmf->vma;
 
+	/*
+	 * Avoid grouping on RO pages in general. RO pages shouldn't hurt as
+	 * much anyway since they can be in shared cache state. This misses
+	 * the case where a mapping is writable but the process never writes
+	 * to it but pte_write gets cleared during protection updates and
+	 * pte_dirty has unpredictable behaviour between PTE scan updates,
+	 * background writeback, dirty balancing and application behaviour.
+	 */
+	if (!writable)
+		*flags |= TNF_NO_GROUP;
+
+	/*
+	 * Flag if the folio is shared between multiple address spaces. This
+	 * is later used when determining whether to group tasks together
+	 */
+	if (folio_likely_mapped_shared(folio) && (vma->vm_flags & VM_SHARED))
+		*flags |= TNF_SHARED;
+
+	/*
+	 * For memory tiering mode, cpupid of slow memory page is used
+	 * to record page access time.
+	 */
+	if (folio_has_cpupid(folio))
+		*last_cpupid = folio_last_cpupid(folio);
+
 	/* Record the current PID accessing VMA */
 	vma_set_access_pid_bit(vma);
 
 	count_vm_numa_event(NUMA_HINT_FAULTS);
-	if (page_nid == numa_node_id()) {
+	if (folio_nid(folio) == numa_node_id()) {
 		count_vm_numa_event(NUMA_HINT_FAULTS_LOCAL);
 		*flags |= TNF_FAULT_LOCAL;
 	}
@@ -5284,12 +5310,11 @@ static vm_fault_t do_numa_page(struct vm_fault *vmf)
 {
 	struct vm_area_struct *vma = vmf->vma;
 	struct folio *folio = NULL;
-	int nid = NUMA_NO_NODE;
+	int target_nid = NUMA_NO_NODE;
 	bool writable = false, ignore_writable = false;
 	bool pte_write_upgrade = vma_wants_manual_pte_write_upgrade(vma);
-	int last_cpupid;
-	int target_nid;
-	pte_t pte, old_pte;
+	int last_cpupid = (-1 & LAST_CPUPID_MASK);
+	pte_t pte, old_pte = vmf->orig_pte;
 	int flags = 0, nr_pages;
 
 	/*
@@ -5297,10 +5322,7 @@ static vm_fault_t do_numa_page(struct vm_fault *vmf)
 	 * table lock, that its contents have not changed during fault handling.
 	 */
 	spin_lock(vmf->ptl);
-	/* Read the live PTE from the page tables: */
-	old_pte = ptep_get(vmf->pte);
-
-	if (unlikely(!pte_same(old_pte, vmf->orig_pte))) {
+	if (unlikely(!pte_same(old_pte, *vmf->pte))) {
 		pte_unmap_unlock(vmf->pte, vmf->ptl);
 		goto out;
 	}
@@ -5320,35 +5342,10 @@ static vm_fault_t do_numa_page(struct vm_fault *vmf)
 	if (!folio || folio_is_zone_device(folio))
 		goto out_map;
 
-	/*
-	 * Avoid grouping on RO pages in general. RO pages shouldn't hurt as
-	 * much anyway since they can be in shared cache state. This misses
-	 * the case where a mapping is writable but the process never writes
-	 * to it but pte_write gets cleared during protection updates and
-	 * pte_dirty has unpredictable behaviour between PTE scan updates,
-	 * background writeback, dirty balancing and application behaviour.
-	 */
-	if (!writable)
-		flags |= TNF_NO_GROUP;
-
-	/*
-	 * Flag if the folio is shared between multiple address spaces. This
-	 * is later used when determining whether to group tasks together
-	 */
-	if (folio_likely_mapped_shared(folio) && (vma->vm_flags & VM_SHARED))
-		flags |= TNF_SHARED;
-
-	nid = folio_nid(folio);
 	nr_pages = folio_nr_pages(folio);
-	/*
-	 * For memory tiering mode, cpupid of slow memory page is used
-	 * to record page access time. So use default value.
-	 */
-	if (!folio_has_cpupid(folio))
-		last_cpupid = (-1 & LAST_CPUPID_MASK);
-	else
-		last_cpupid = folio_last_cpupid(folio);
-	target_nid = numa_migrate_prep(folio, vmf, vmf->address, nid, &flags);
+
+	target_nid = numa_migrate_check(folio, vmf, vmf->address, writable,
+					&flags, &last_cpupid);
 	if (target_nid == NUMA_NO_NODE)
 		goto out_map;
 	if (migrate_misplaced_folio_prepare(folio, vma, target_nid)) {
@@ -5362,9 +5359,9 @@ static vm_fault_t do_numa_page(struct vm_fault *vmf)
 
 	/* Migrate to the requested node */
 	if (!migrate_misplaced_folio(folio, vma, target_nid)) {
-		nid = target_nid;
 		flags |= TNF_MIGRATED;
 	} else {
+		target_nid = NUMA_NO_NODE;
 		flags |= TNF_MIGRATE_FAIL;
 		vmf->pte = pte_offset_map_lock(vma->vm_mm, vmf->pmd,
 					       vmf->address, &vmf->ptl);
@@ -5378,8 +5375,8 @@ static vm_fault_t do_numa_page(struct vm_fault *vmf)
 	}
 
 out:
-	if (nid != NUMA_NO_NODE)
-		task_numa_fault(last_cpupid, nid, nr_pages, flags);
+	if (target_nid != NUMA_NO_NODE)
+		task_numa_fault(last_cpupid, target_nid, nr_pages, flags);
 	return 0;
 out_map:
 	/*
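
As promised above, here is a minimal, self-contained userspace mock of
the call flow both fault handlers share after this patch. It is a
sketch, not kernel code: struct mock_folio and every function body are
invented stand-ins, and the real numa_migrate_check() also takes the
vm_fault and address and consults mpol_misplaced(). Only the contract
is mirrored: numa_migrate_check() fills *flags and *last_cpupid and
returns a target node or NUMA_NO_NODE, and task_numa_fault() is keyed
off target_nid alone. TNF_* values follow
include/linux/sched/numa_balancing.h.

/*
 * Userspace mock of the consolidated NUMA hinting-fault flow.
 * Build with: cc -o mock mock.c
 */
#include <stdbool.h>
#include <stdio.h>

#define NUMA_NO_NODE		(-1)
#define TNF_MIGRATED		0x01
#define TNF_NO_GROUP		0x02
#define TNF_SHARED		0x04
#define TNF_MIGRATE_FAIL	0x10

struct mock_folio {
	int nid;		/* node the folio currently resides on */
	int nr_pages;		/* 512 models a PMD-sized THP on x86 */
	bool mapped_shared;
	int last_cpupid;
};

/* Stand-in for the renamed helper: fills *flags and *last_cpupid and
 * returns the migration target node, or NUMA_NO_NODE to skip. */
static int numa_migrate_check(struct mock_folio *folio, bool writable,
			      int *flags, int *last_cpupid)
{
	if (!writable)
		*flags |= TNF_NO_GROUP;	/* avoid grouping on RO mappings */
	if (folio->mapped_shared)
		*flags |= TNF_SHARED;	/* shared between address spaces */
	*last_cpupid = folio->last_cpupid;
	return folio->nid == 1 ? NUMA_NO_NODE : 1; /* pretend node 1 is best */
}

static bool migrate_misplaced_folio(struct mock_folio *folio, int nid)
{
	folio->nid = nid;		/* pretend migration always succeeds */
	return true;
}

int main(void)
{
	struct mock_folio folio = { .nid = 0, .nr_pages = 512,
				    .mapped_shared = false, .last_cpupid = 7 };
	int flags = 0, last_cpupid = -1;	/* caller-side defaults */
	int target_nid = numa_migrate_check(&folio, false, &flags,
					    &last_cpupid);

	if (target_nid != NUMA_NO_NODE) {
		if (migrate_misplaced_folio(&folio, target_nid)) {
			flags |= TNF_MIGRATED;
		} else {
			target_nid = NUMA_NO_NODE;	/* as in the patch */
			flags |= TNF_MIGRATE_FAIL;
		}
	}
	/* Both handlers now key the fault accounting off target_nid. */
	if (target_nid != NUMA_NO_NODE)
		printf("task_numa_fault(cpupid=%d, nid=%d, nr=%d, flags=%#x)\n",
		       last_cpupid, target_nid, folio.nr_pages, flags);
	return 0;
}

The single printf stands in for the task_numa_fault() call that both
do_numa_page() and do_huge_pmd_numa_page() now reach through identical
logic, instead of each duplicating the TNF_* flag and cpupid
bookkeeping.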