From patchwork Mon Sep 18 10:32:11 2023
X-Patchwork-Submitter: Kefeng Wang
X-Patchwork-Id: 13389425
From: Kefeng Wang <wangkefeng.wang@huawei.com>
To: Andrew Morton
Cc: Zi Yan, Mike Kravetz, Kefeng Wang
Subject: [PATCH 4/6] mm: memory: use a folio in do_numa_page()
Date: Mon, 18 Sep 2023 18:32:11 +0800
Message-ID: <20230918103213.4166210-5-wangkefeng.wang@huawei.com>
X-Mailer: git-send-email 2.27.0
In-Reply-To: <20230918103213.4166210-1-wangkefeng.wang@huawei.com>
References: <20230918103213.4166210-1-wangkefeng.wang@huawei.com>

NUMA balancing only tries to migrate non-compound pages in
do_numa_page(), so use a folio there to save several compound_head()
calls. Note that folio_estimated_sharers() is used: checking the
estimated sharer count is enough since only normal pages are handled
here; if large folio NUMA balancing is supported later, a precise
sharers check would be used instead. No functional change intended.

Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
---
 mm/memory.c | 33 ++++++++++++++++-----------------
 1 file changed, 16 insertions(+), 17 deletions(-)

diff --git a/mm/memory.c b/mm/memory.c
index ce7d9d9eddc4..ce3efe7255d2 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -4737,8 +4737,8 @@ int numa_migrate_prep(struct folio *folio, struct vm_area_struct *vma,
 static vm_fault_t do_numa_page(struct vm_fault *vmf)
 {
 	struct vm_area_struct *vma = vmf->vma;
-	struct page *page = NULL;
-	int page_nid = NUMA_NO_NODE;
+	struct folio *folio = NULL;
+	int nid = NUMA_NO_NODE;
 	bool writable = false;
 	int last_cpupid;
 	int target_nid;
@@ -4769,12 +4769,12 @@ static vm_fault_t do_numa_page(struct vm_fault *vmf)
 	    can_change_pte_writable(vma, vmf->address, pte))
 		writable = true;
 
-	page = vm_normal_page(vma, vmf->address, pte);
-	if (!page || is_zone_device_page(page))
+	folio = vm_normal_folio(vma, vmf->address, pte);
+	if (!folio || folio_is_zone_device(folio))
 		goto out_map;
 
 	/* TODO: handle PTE-mapped THP */
-	if (PageCompound(page))
+	if (folio_test_large(folio))
 		goto out_map;
 
 	/*
@@ -4789,34 +4789,33 @@ static vm_fault_t do_numa_page(struct vm_fault *vmf)
 		flags |= TNF_NO_GROUP;
 
 	/*
-	 * Flag if the page is shared between multiple address spaces. This
+	 * Flag if the folio is shared between multiple address spaces. This
 	 * is later used when determining whether to group tasks together
 	 */
-	if (page_mapcount(page) > 1 && (vma->vm_flags & VM_SHARED))
+	if (folio_estimated_sharers(folio) > 1 && (vma->vm_flags & VM_SHARED))
 		flags |= TNF_SHARED;
 
-	page_nid = page_to_nid(page);
+	nid = folio_nid(folio);
 	/*
 	 * For memory tiering mode, cpupid of slow memory page is used
 	 * to record page access time.  So use default value.
 	 */
 	if ((sysctl_numa_balancing_mode & NUMA_BALANCING_MEMORY_TIERING) &&
-	    !node_is_toptier(page_nid))
+	    !node_is_toptier(nid))
 		last_cpupid = (-1 & LAST_CPUPID_MASK);
 	else
-		last_cpupid = page_cpupid_last(page);
-	target_nid = numa_migrate_prep(page_folio(page), vma, vmf->address,
-				       page_nid, &flags);
+		last_cpupid = page_cpupid_last(&folio->page);
+	target_nid = numa_migrate_prep(folio, vma, vmf->address, nid, &flags);
 	if (target_nid == NUMA_NO_NODE) {
-		put_page(page);
+		folio_put(folio);
 		goto out_map;
 	}
 	pte_unmap_unlock(vmf->pte, vmf->ptl);
 	writable = false;
 
 	/* Migrate to the requested node */
-	if (migrate_misplaced_folio(page_folio(page), vma, target_nid)) {
-		page_nid = target_nid;
+	if (migrate_misplaced_folio(folio, vma, target_nid)) {
+		nid = target_nid;
 		flags |= TNF_MIGRATED;
 	} else {
 		flags |= TNF_MIGRATE_FAIL;
@@ -4832,8 +4831,8 @@ static vm_fault_t do_numa_page(struct vm_fault *vmf)
 	}
 
 out:
-	if (page_nid != NUMA_NO_NODE)
-		task_numa_fault(last_cpupid, page_nid, 1, flags);
+	if (nid != NUMA_NO_NODE)
+		task_numa_fault(last_cpupid, nid, 1, flags);
 	return 0;
 out_map:
 	/*
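
For context on why the estimated check is enough here: at the time of
this series, folio_estimated_sharers() simply reads the mapcount of the
folio's first page, so for the order-0 folios that do_numa_page() still
handles it gives exactly the same result as the old page_mapcount(page)
test. The sketch below is paraphrased from include/linux/mm.h of that
era and is not part of this patch; treat it as an illustration rather
than the authoritative definition.

/*
 * Illustrative sketch (not part of this patch): paraphrase of the
 * helper relied on above.  For an order-0 folio the "estimate" is the
 * exact sharer count, which is why it can stand in for
 * page_mapcount() in do_numa_page().
 */
static inline int folio_estimated_sharers(struct folio *folio)
{
	return page_mapcount(folio_page(folio, 0));
}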