From patchwork Wed Feb 16 08:30:37 2022
From: Xin Hao <xhao@linux.alibaba.com>
To: sj@kernel.org
Cc: xhao@linux.alibaba.com, rongwei.wang@linux.alibaba.com, akpm@linux-foundation.org, linux-mm@kvack.org,
linux-kernel@vger.kernel.org
Subject: [RFC PATCH V1 1/5] mm/damon: Add NUMA local and remote variables in 'damon_region'
Date: Wed, 16 Feb 2022 16:30:37 +0800
Message-Id: <2fb03665b39d7e3b222955ff690d73fe8e201c24.1645024354.git.xhao@linux.alibaba.com>

Add two counters, 'local' and 'remote', to struct 'damon_region' so that
the NUMA access status of each region can be recorded.

Signed-off-by: Xin Hao <xhao@linux.alibaba.com>
Signed-off-by: Rongwei Wang <rongwei.wang@linux.alibaba.com>
---
 include/linux/damon.h | 4 ++++
 mm/damon/core.c       | 6 ++++++
 2 files changed, 10 insertions(+)

diff --git a/include/linux/damon.h b/include/linux/damon.h
index 5e1e3a128b77..77d0937dcab5 100644
--- a/include/linux/damon.h
+++ b/include/linux/damon.h
@@ -41,6 +41,8 @@ struct damon_addr_range {
  * @nr_accesses: Access frequency of this region.
  * @list: List head for siblings.
  * @age: Age of this region.
+ * @local: Local numa node accesses.
+ * @remote: Remote numa node accesses.
  *
  * @age is initially zero, increased for each aggregation interval, and reset
  * to zero again if the access frequency is significantly changed. If two
@@ -56,6 +58,8 @@ struct damon_region {
 	unsigned int age;
/* private: Internal value for age calculation.
 */
 	unsigned int last_nr_accesses;
+	unsigned long local;
+	unsigned long remote;
 };

 /**

diff --git a/mm/damon/core.c b/mm/damon/core.c
index 1dd153c31c9e..933ef51afa71 100644
--- a/mm/damon/core.c
+++ b/mm/damon/core.c
@@ -45,6 +45,8 @@ struct damon_region *damon_new_region(unsigned long start, unsigned long end)
 	region->age = 0;
 	region->last_nr_accesses = 0;
+	region->local = 0;
+	region->remote = 0;

 	return region;
 }
@@ -740,6 +742,8 @@ static void damon_merge_two_regions(struct damon_target *t,
 	l->nr_accesses = (l->nr_accesses * sz_l + r->nr_accesses * sz_r) /
 			(sz_l + sz_r);
+	l->remote = (l->remote * sz_l + r->remote * sz_r) / (sz_l + sz_r);
+	l->local = (l->local * sz_l + r->local * sz_r) / (sz_l + sz_r);
 	l->age = (l->age * sz_l + r->age * sz_r) / (sz_l + sz_r);
 	l->ar.end = r->ar.end;
 	damon_destroy_region(r, t);
@@ -812,6 +816,8 @@ static void damon_split_region_at(struct damon_ctx *ctx,
 	new->age = r->age;
 	new->last_nr_accesses = r->last_nr_accesses;
+	new->local = r->local;
+	new->remote = r->remote;

 	damon_insert_region(new, r, damon_next_region(r), t);
 }

From patchwork Wed Feb 16 08:30:38 2022
From: Xin Hao <xhao@linux.alibaba.com>
To: sj@kernel.org
Cc: xhao@linux.alibaba.com, rongwei.wang@linux.alibaba.com, akpm@linux-foundation.org, linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: [RFC PATCH V1 2/5] mm/damon: Add 'damon_region' NUMA fault simulation support
Date: Wed, 16 Feb 2022 16:30:38 +0800
Message-Id: <35c8c45267c6f2f5b6ec3559592342685106d39e.1645024354.git.xhao@linux.alibaba.com>

The code here refers to the NUMA balancing code: the sampled PTE or PMD
is made PROT_NONE so that the next access raises a page fault; in
do_numa_page(), the 'damon_region' NUMA local and remote counters can
then be updated.

Signed-off-by: Xin Hao <xhao@linux.alibaba.com>
Signed-off-by: Rongwei Wang <rongwei.wang@linux.alibaba.com>
---
 mm/damon/paddr.c        | 23 +++++++++++++++++----
 mm/damon/prmtv-common.c | 44 +++++++++++++++++++++++++++++++++++++++++
 mm/damon/prmtv-common.h |  3 +++
 mm/damon/vaddr.c        | 32 +++++++++++++++++++++---------
 4 files changed, 89 insertions(+), 13 deletions(-)

diff --git a/mm/damon/paddr.c b/mm/damon/paddr.c
index 5e8244f65a1a..b8feacf15592 100644
--- a/mm/damon/paddr.c
+++ b/mm/damon/paddr.c
@@ -16,9 +16,10 @@
 #include "../internal.h"
 #include "prmtv-common.h"

-static bool __damon_pa_mkold(struct page *page, struct vm_area_struct *vma,
+static bool __damon_pa_mk_set(struct page *page, struct vm_area_struct *vma,
 		unsigned long addr, void *arg)
 {
+	bool result = false;
 	struct page_vma_mapped_walk pvmw = {
 		.page = page,
 		.vma = vma,
@@ -27,10 +28,24 @@ static bool __damon_pa_mkold(struct page *page, struct vm_area_struct *vma,

 	while (page_vma_mapped_walk(&pvmw)) {
 		addr = pvmw.address;
-		if (pvmw.pte)
+		if (pvmw.pte) {
 			damon_ptep_mkold(pvmw.pte, vma->vm_mm, addr);
-		else
+			if (nr_online_nodes > 1) {
+				result = damon_ptep_mknone(pvmw.pte, vma, addr);
+				if (result)
+					flush_tlb_page(vma, addr);
+			}
+		} else {
 			damon_pmdp_mkold(pvmw.pmd, vma->vm_mm, addr);
+			if (nr_online_nodes > 1) {
+				result = damon_pmdp_mknone(pvmw.pmd, vma, addr);
+				if (result) {
+					unsigned long haddr = addr & HPAGE_PMD_MASK;
+
+					flush_tlb_range(vma, haddr, haddr + HPAGE_PMD_SIZE);
+				}
+			}
+		}
 	}
 	return true;
 }
@@ -39,7 +54,7 @@ static void damon_pa_mkold(unsigned long paddr)
 {
 	struct page *page = damon_get_page(PHYS_PFN(paddr));
 	struct rmap_walk_control rwc = {
-		.rmap_one = __damon_pa_mkold,
+		.rmap_one = __damon_pa_mk_set,
 		.anon_lock = page_lock_anon_vma_read,
 	};
 	bool need_lock;

diff --git a/mm/damon/prmtv-common.c b/mm/damon/prmtv-common.c
index 92a04f5831d6..35ac50fdf7b6 100644
--- a/mm/damon/prmtv-common.c
+++ b/mm/damon/prmtv-common.c
@@ -12,6 +12,50 @@

 #include "prmtv-common.h"

+bool damon_ptep_mknone(pte_t *pte, struct vm_area_struct *vma, unsigned long addr)
+{
+	pte_t oldpte, ptent;
+	bool preserve_write;
+
+	oldpte = *pte;
+	if (pte_protnone(oldpte))
+		return false;
+
+	if (pte_present(oldpte)) {
+		preserve_write = pte_write(oldpte);
+		oldpte = ptep_modify_prot_start(vma, addr, pte);
+		ptent = pte_modify(oldpte, PAGE_NONE);
+
+		if (preserve_write)
+			ptent = pte_mk_savedwrite(ptent);
+
+		ptep_modify_prot_commit(vma, addr, pte, oldpte, ptent);
+		return true;
+	}
+	return false;
+}
+
+bool damon_pmdp_mknone(pmd_t *pmd, struct vm_area_struct *vma, unsigned long addr)
+{
+	bool preserve_write;
+	pmd_t entry = *pmd;
+
+	if (is_huge_zero_pmd(entry) || pmd_protnone(entry))
+		return false;
+
+	if (pmd_present(entry)) {
+		preserve_write = pmd_write(entry);
+		entry = pmdp_invalidate(vma, addr, pmd);
+		entry = pmd_modify(entry, PAGE_NONE);
+		if (preserve_write)
+			entry = pmd_mk_savedwrite(entry);
+
+		set_pmd_at(vma->vm_mm, addr, pmd, entry);
+		return true;
+	}
+	return false;
+}
+
 /*
  * Get an online page for a pfn if it's in the LRU list. Otherwise, returns
  * NULL.
diff --git a/mm/damon/prmtv-common.h b/mm/damon/prmtv-common.h
index e790cb5f8fe0..002a308facd0 100644
--- a/mm/damon/prmtv-common.h
+++ b/mm/damon/prmtv-common.h
@@ -7,6 +7,9 @@

 #include

+bool damon_ptep_mknone(pte_t *pte, struct vm_area_struct *vma, unsigned long addr);
+bool damon_pmdp_mknone(pmd_t *pmd, struct vm_area_struct *vma, unsigned long addr);
+
 struct page *damon_get_page(unsigned long pfn);

 void damon_ptep_mkold(pte_t *pte, struct mm_struct *mm, unsigned long addr);

diff --git a/mm/damon/vaddr.c b/mm/damon/vaddr.c
index 89b6468da2b9..732b41ed134c 100644
--- a/mm/damon/vaddr.c
+++ b/mm/damon/vaddr.c
@@ -367,9 +367,10 @@ static void damon_va_update(struct damon_ctx *ctx)
 	}
 }

-static int damon_mkold_pmd_entry(pmd_t *pmd, unsigned long addr,
+static int damon_va_pmd_entry(pmd_t *pmd, unsigned long addr,
 		unsigned long next, struct mm_walk *walk)
 {
+	bool result = false;
 	pte_t *pte;
 	spinlock_t *ptl;

@@ -377,7 +378,14 @@ static int damon_mkold_pmd_entry(pmd_t *pmd, unsigned long addr,
 	ptl = pmd_lock(walk->mm, pmd);
 	if (pmd_huge(*pmd)) {
 		damon_pmdp_mkold(pmd, walk->mm, addr);
+		if (nr_online_nodes > 1)
+			result = damon_pmdp_mknone(pmd, walk->vma, addr);
 		spin_unlock(ptl);
+		if (result) {
+			unsigned long haddr = addr & HPAGE_PMD_MASK;
+
+			flush_tlb_range(walk->vma, haddr, haddr + HPAGE_PMD_SIZE);
+		}
 		return 0;
 	}
 	spin_unlock(ptl);
@@ -386,11 +394,17 @@ static int damon_mkold_pmd_entry(pmd_t *pmd, unsigned long addr,
 	if (pmd_none(*pmd) || unlikely(pmd_bad(*pmd)))
 		return 0;
 	pte = pte_offset_map_lock(walk->mm, pmd, addr, &ptl);
-	if (!pte_present(*pte))
-		goto out;
+	if (!pte_present(*pte)) {
+		pte_unmap_unlock(pte, ptl);
+		return 0;
+	}
 	damon_ptep_mkold(pte, walk->mm, addr);
-out:
+	if (nr_online_nodes > 1)
+		result = damon_ptep_mknone(pte, walk->vma, addr);
 	pte_unmap_unlock(pte, ptl);
+	if (result)
+		flush_tlb_page(walk->vma, addr);
+
 	return 0;
 }

@@ -450,15 +464,15 @@ static int damon_mkold_hugetlb_entry(pte_t *pte, unsigned long hmask,
 #define damon_mkold_hugetlb_entry NULL
 #endif /* CONFIG_HUGETLB_PAGE */

-static const struct mm_walk_ops damon_mkold_ops = {
-	.pmd_entry = damon_mkold_pmd_entry,
+static const struct mm_walk_ops damon_va_ops = {
+	.pmd_entry = damon_va_pmd_entry,
 	.hugetlb_entry = damon_mkold_hugetlb_entry,
 };

-static void damon_va_mkold(struct mm_struct *mm, unsigned long addr)
+static void damon_va_check(struct mm_struct *mm, unsigned long addr)
 {
 	mmap_read_lock(mm);
-	walk_page_range(mm, addr, addr + 1, &damon_mkold_ops, NULL);
+	walk_page_range(mm, addr, addr + 1, &damon_va_ops, NULL);
 	mmap_read_unlock(mm);
 }

@@ -471,7 +485,7 @@ static void __damon_va_prepare_access_check(struct damon_ctx *ctx,
 {
 	r->sampling_addr = damon_rand(r->ar.start, r->ar.end);

-	damon_va_mkold(mm, r->sampling_addr);
+	damon_va_check(mm, r->sampling_addr);
 }

 static void damon_va_prepare_access_checks(struct damon_ctx *ctx)

From patchwork Wed Feb 16 08:30:39 2022
From: Xin Hao <xhao@linux.alibaba.com>
To: sj@kernel.org
Cc: xhao@linux.alibaba.com, rongwei.wang@linux.alibaba.com, akpm@linux-foundation.org, linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: [RFC PATCH V1 3/5] mm/damon: Add 'damon_region' NUMA access statistics core implementation
Date: Wed, 16 Feb 2022 16:30:39 +0800

After DAMON sets a PTE or PMD to none, the NUMA accesses of the
'damon_region' are counted in the page fault handler, provided the
current pid matches a pid that DAMON is monitoring.
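In userspace terms, the accounting this patch adds can be sketched as follows. This is a simplified, hypothetical model for illustration only: `struct region`, `find_region()` and `numa_fault()` are stand-ins for the kernel's damon_region lookup (done under 't->target_lock') and counter update; the real code also resolves the monitored target from the faulting task first.

```c
#include <stddef.h>

/* Minimal stand-in for a damon_region's address range and NUMA counters. */
struct region {
	unsigned long start, end;     /* [start, end] address range */
	unsigned long local, remote;  /* NUMA access counters */
};

/* Linear lookup of the region containing addr (kernel walks a region list). */
static struct region *find_region(struct region *r, size_t n, unsigned long addr)
{
	for (size_t i = 0; i < n; i++)
		if (r[i].start <= addr && addr <= r[i].end)
			return &r[i];
	return NULL;
}

/*
 * On a NUMA hint fault: if the page's node matches the node the faulting
 * task runs on, count a local access, otherwise a remote one.
 */
static void numa_fault(struct region *r, size_t n, unsigned long addr,
		       int page_nid, int node_id)
{
	struct region *hit = find_region(r, n, addr);

	if (!hit)
		return;
	if (page_nid == node_id)
		hit->local++;
	else
		hit->remote++;
}
```

Faults whose address falls outside every monitored region are simply ignored, mirroring the early return in the kernel code.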
Signed-off-by: Xin Hao <xhao@linux.alibaba.com>
Signed-off-by: Rongwei Wang <rongwei.wang@linux.alibaba.com>
---
 include/linux/damon.h | 18 ++++++++++
 mm/damon/core.c       | 80 +++++++++++++++++++++++++++++++++++++++++--
 mm/damon/dbgfs.c      | 18 +++++++---
 mm/damon/vaddr.c      | 11 ++----
 mm/huge_memory.c      |  5 +++
 mm/memory.c           |  5 +++
 6 files changed, 121 insertions(+), 16 deletions(-)

diff --git a/include/linux/damon.h b/include/linux/damon.h
index 77d0937dcab5..5bf1eb92584b 100644
--- a/include/linux/damon.h
+++ b/include/linux/damon.h
@@ -12,12 +12,16 @@
 #include
 #include
 #include
+#include

 /* Minimal region size. Every damon_region is aligned by this. */
 #define DAMON_MIN_REGION	PAGE_SIZE

 /* Max priority score for DAMON-based operation schemes */
 #define DAMOS_MAX_SCORE		(99)

+extern struct damon_ctx **dbgfs_ctxs;
+extern int dbgfs_nr_ctxs;
+
 /* Get a random number in [l, r) */
 static inline unsigned long damon_rand(unsigned long l, unsigned long r)
 {
@@ -68,6 +72,7 @@
  * @nr_regions: Number of monitoring target regions of this target.
  * @regions_list: Head of the monitoring target regions of this target.
  * @list: List head for siblings.
+ * @target_lock: Use damon_region lock to avoid race.
  *
  * Each monitoring context could have multiple targets. For example, a context
  * for virtual memory address spaces could have multiple target processes. The
@@ -80,6 +85,7 @@ struct damon_target {
 	unsigned int nr_regions;
 	struct list_head regions_list;
 	struct list_head list;
+	spinlock_t target_lock;
 };

 /**
@@ -503,8 +509,20 @@ int damon_stop(struct damon_ctx **ctxs, int nr_ctxs);
 #endif	/* CONFIG_DAMON */

 #ifdef CONFIG_DAMON_VADDR
+
+/*
+ * 't->id' should be the pointer to the relevant 'struct pid' having reference
+ * count. Caller must put the returned task, unless it is NULL.
+ */
+static inline struct task_struct *damon_get_task_struct(struct damon_target *t)
+{
+	return get_pid_task((struct pid *)t->id, PIDTYPE_PID);
+}
 bool damon_va_target_valid(void *t);
 void damon_va_set_primitives(struct damon_ctx *ctx);
+void damon_numa_fault(int page_nid, int node_id, struct vm_fault *vmf);
+#else
+static inline void damon_numa_fault(int page_nid, int node_id, struct vm_fault *vmf) { }
 #endif	/* CONFIG_DAMON_VADDR */

 #ifdef CONFIG_DAMON_PADDR

diff --git a/mm/damon/core.c b/mm/damon/core.c
index 933ef51afa71..970fc02abeba 100644
--- a/mm/damon/core.c
+++ b/mm/damon/core.c
@@ -157,6 +157,7 @@ struct damon_target *damon_new_target(unsigned long id)
 	t->id = id;
 	t->nr_regions = 0;
 	INIT_LIST_HEAD(&t->regions_list);
+	spin_lock_init(&t->target_lock);

 	return t;
 }
@@ -792,8 +793,11 @@ static void kdamond_merge_regions(struct damon_ctx *c, unsigned int threshold,
 {
 	struct damon_target *t;

-	damon_for_each_target(t, c)
+	damon_for_each_target(t, c) {
+		spin_lock(&t->target_lock);
 		damon_merge_regions_of(t, threshold, sz_limit);
+		spin_unlock(&t->target_lock);
+	}
 }

 /*
@@ -879,8 +883,11 @@ static void kdamond_split_regions(struct damon_ctx *ctx)
 		nr_regions < ctx->max_nr_regions / 3)
 		nr_subregions = 3;

-	damon_for_each_target(t, ctx)
+	damon_for_each_target(t, ctx) {
+		spin_lock(&t->target_lock);
 		damon_split_regions_of(ctx, t, nr_subregions);
+		spin_unlock(&t->target_lock);
+	}

 	last_nr_regions = nr_regions;
 }
@@ -1000,6 +1007,73 @@ static int kdamond_wait_activation(struct damon_ctx *ctx)
 	return -EBUSY;
 }

+static struct damon_target *get_damon_target(struct task_struct *task)
+{
+	int i;
+	unsigned long id1, id2;
+	struct damon_target *t;
+
+	rcu_read_lock();
+	for (i = 0; i < READ_ONCE(dbgfs_nr_ctxs); i++) {
+		struct damon_ctx *ctx = rcu_dereference(dbgfs_ctxs[i]);
+
+		if (!ctx || !ctx->kdamond)
+			continue;
+		damon_for_each_target(t, dbgfs_ctxs[i]) {
+			struct task_struct *ts = damon_get_task_struct(t);
+
+			if (ts) {
+				id1 = (unsigned long)pid_vnr((struct pid *)t->id);
+				id2 = (unsigned long)pid_vnr(get_task_pid(task, PIDTYPE_PID));
+				put_task_struct(ts);
+				if (id1 == id2)
+					return t;
+			}
+		}
+	}
+	rcu_read_unlock();
+
+	return NULL;
+}
+
+static struct damon_region *get_damon_region(struct damon_target *t, unsigned long addr)
+{
+	struct damon_region *r, *next;
+
+	if (!t || !addr)
+		return NULL;
+
+	spin_lock(&t->target_lock);
+	damon_for_each_region_safe(r, next, t) {
+		if (r->ar.start <= addr && r->ar.end >= addr) {
+			spin_unlock(&t->target_lock);
+			return r;
+		}
+	}
+	spin_unlock(&t->target_lock);
+
+	return NULL;
+}
+
+void damon_numa_fault(int page_nid, int node_id, struct vm_fault *vmf)
+{
+	struct damon_target *t;
+	struct damon_region *r;
+
+	if (nr_online_nodes > 1) {
+		t = get_damon_target(current);
+		if (!t)
+			return;
+		r = get_damon_region(t, vmf->address);
+		if (r) {
+			if (page_nid == node_id)
+				r->local++;
+			else
+				r->remote++;
+		}
+	}
+}
+
 /*
  * The monitoring daemon that runs as a kernel thread
  */
@@ -1057,8 +1131,10 @@ static int kdamond_fn(void *data)
 		}
 	}
 	damon_for_each_target(t, ctx) {
+		spin_lock(&t->target_lock);
 		damon_for_each_region_safe(r, next, t)
 			damon_destroy_region(r, t);
+		spin_unlock(&t->target_lock);
 	}

 	if (ctx->callback.before_terminate)

diff --git a/mm/damon/dbgfs.c b/mm/damon/dbgfs.c
index 5b899601e56c..c7f4e95abc14 100644
--- a/mm/damon/dbgfs.c
+++ b/mm/damon/dbgfs.c
@@ -15,11 +15,12 @@
 #include
 #include

-static struct damon_ctx **dbgfs_ctxs;
-static int dbgfs_nr_ctxs;
+struct damon_ctx **dbgfs_ctxs;
+int dbgfs_nr_ctxs;
 static struct dentry **dbgfs_dirs;
 static DEFINE_MUTEX(damon_dbgfs_lock);

+
 /*
  * Returns non-empty string on success, negative error code otherwise.
  */
@@ -808,10 +809,18 @@ static int dbgfs_rm_context(char *name)
 		return -ENOMEM;
 	}

-	for (i = 0, j = 0; i < dbgfs_nr_ctxs; i++) {
+	dbgfs_nr_ctxs--;
+	/* Prevent NUMA fault get the wrong value */
+	smp_mb();
+
+	for (i = 0, j = 0; i < dbgfs_nr_ctxs + 1; i++) {
 		if (dbgfs_dirs[i] == dir) {
+			struct damon_ctx *tmp_ctx = dbgfs_ctxs[i];
+
+			rcu_assign_pointer(dbgfs_ctxs[i], NULL);
+			synchronize_rcu();
 			debugfs_remove(dbgfs_dirs[i]);
-			dbgfs_destroy_ctx(dbgfs_ctxs[i]);
+			dbgfs_destroy_ctx(tmp_ctx);
 			continue;
 		}
 		new_dirs[j] = dbgfs_dirs[i];
@@ -823,7 +832,6 @@ static int dbgfs_rm_context(char *name)

 	dbgfs_dirs = new_dirs;
 	dbgfs_ctxs = new_ctxs;
-	dbgfs_nr_ctxs--;

 	return 0;
 }

diff --git a/mm/damon/vaddr.c b/mm/damon/vaddr.c
index 732b41ed134c..78b90972d171 100644
--- a/mm/damon/vaddr.c
+++ b/mm/damon/vaddr.c
@@ -22,15 +22,6 @@
 #define DAMON_MIN_REGION 1
 #endif

-/*
- * 't->id' should be the pointer to the relevant 'struct pid' having reference
- * count. Caller must put the returned task, unless it is NULL.
- */
-static inline struct task_struct *damon_get_task_struct(struct damon_target *t)
-{
-	return get_pid_task((struct pid *)t->id, PIDTYPE_PID);
-}
-
 /*
  * Get the mm_struct of the given target
  *
@@ -363,7 +354,9 @@ static void damon_va_update(struct damon_ctx *ctx)
 	damon_for_each_target(t, ctx) {
 		if (damon_va_three_regions(t, three_regions))
 			continue;
+		spin_lock(&t->target_lock);
 		damon_va_apply_three_regions(t, three_regions);
+		spin_unlock(&t->target_lock);
 	}
 }

diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 406a3c28c026..9cb413a8cd4a 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -34,6 +34,7 @@
 #include
 #include
 #include
+#include
 #include
 #include
@@ -1450,6 +1451,10 @@ vm_fault_t do_huge_pmd_numa_page(struct vm_fault *vmf)
 		flags |= TNF_NO_GROUP;

 	page_nid = page_to_nid(page);
+
+	/* Get the NUMA accesses of monitored processes by DAMON */
+	damon_numa_fault(page_nid, numa_node_id(), vmf);
+
 	last_cpupid = page_cpupid_last(page);
 	target_nid = numa_migrate_prep(page, vma, haddr, page_nid,
 				       &flags);

diff --git a/mm/memory.c b/mm/memory.c
index c125c4969913..fb55264f36af 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -74,6 +74,7 @@
 #include
 #include
 #include
+#include
 #include
@@ -4392,6 +4393,10 @@ static vm_fault_t do_numa_page(struct vm_fault *vmf)
 	last_cpupid = page_cpupid_last(page);
 	page_nid = page_to_nid(page);
+
+	/* Get the NUMA accesses of monitored processes by DAMON */
+	damon_numa_fault(page_nid, numa_node_id(), vmf);
+
 	target_nid = numa_migrate_prep(page, vma, vmf->address, page_nid,
 			&flags);
 	if (target_nid == NUMA_NO_NODE) {

From patchwork Wed Feb 16 08:30:40 2022
From: Xin Hao <xhao@linux.alibaba.com>
To: sj@kernel.org
Cc: xhao@linux.alibaba.com, rongwei.wang@linux.alibaba.com, akpm@linux-foundation.org, linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: [RFC PATCH V1 4/5] mm/damon/dbgfs: Add numa simulate switch
Date: Wed, 16 Feb 2022 16:30:40 +0800
Message-Id: <20b2e1e1a60431e7d0f47df3ef9619db3bda2946.1645024354.git.xhao@linux.alibaba.com>

For applications that access their memory frequently, the NUMA fault
simulation causes many page faults and TLB misses, which can regress
their performance. Therefore, add a switch that keeps the simulation
off by default; to turn it on, do the following:

    # cd /sys/kernel/debug/damon/
    # echo on > numa_stat
    # cat numa_stat
    on

Signed-off-by: Xin Hao <xhao@linux.alibaba.com>
Signed-off-by: Rongwei Wang <rongwei.wang@linux.alibaba.com>
---
 include/linux/damon.h |  3 +++
 mm/damon/core.c       | 10 ++++++++-
 mm/damon/dbgfs.c      | 52 +++++++++++++++++++++++++++++++++++++++++--
 mm/damon/paddr.c      |  6 +++--
 mm/damon/vaddr.c      |  6 +++--
 5 files changed, 70 insertions(+), 7 deletions(-)

diff --git a/include/linux/damon.h b/include/linux/damon.h
index 5bf1eb92584b..c7d7613e1a17 100644
--- a/include/linux/damon.h
+++ b/include/linux/damon.h
@@ -19,6 +19,9 @@
 /* Max priority score for DAMON-based operation schemes */
 #define DAMOS_MAX_SCORE		(99)

+/* Switch for NUMA fault */
+DECLARE_STATIC_KEY_FALSE(numa_stat_enabled_key);
+
 extern struct damon_ctx **dbgfs_ctxs;
 extern int dbgfs_nr_ctxs;

diff --git a/mm/damon/core.c b/mm/damon/core.c
index 970fc02abeba..4aa3c2d3895c 100644
--- a/mm/damon/core.c
+++ b/mm/damon/core.c
@@ -1060,7 +1060,8 @@ void damon_numa_fault(int page_nid, int node_id, struct vm_fault *vmf)
 	struct damon_target *t;
 	struct damon_region *r;

-	if (nr_online_nodes > 1) {
+	if (static_branch_unlikely(&numa_stat_enabled_key)
+			&& nr_online_nodes > 1) {
 		t = get_damon_target(current);
 		if (!t)
 			return;
@@ -1151,6 +1152,13 @@ static int kdamond_fn(void *data)
 	nr_running_ctxs--;
 	mutex_unlock(&damon_lock);

+	/*
+	 * when no kdamond threads are running, the
+	 * 'numa_stat_enabled_key' keeps default value.
+	 */
+	if (!nr_running_ctxs)
+		static_branch_disable(&numa_stat_enabled_key);
+
 	return 0;
 }

diff --git a/mm/damon/dbgfs.c b/mm/damon/dbgfs.c
index c7f4e95abc14..0ef35dbfda39 100644
--- a/mm/damon/dbgfs.c
+++ b/mm/damon/dbgfs.c
@@ -609,6 +609,49 @@ static ssize_t dbgfs_kdamond_pid_read(struct file *file,
 	return len;
 }

+DEFINE_STATIC_KEY_FALSE(numa_stat_enabled_key);
+
+static ssize_t dbgfs_numa_stat_read(struct file *file,
+		char __user *buf, size_t count, loff_t *ppos)
+{
+	char numa_on_buf[5];
+	bool enable = static_branch_unlikely(&numa_stat_enabled_key);
+	int len;
+
+	len = scnprintf(numa_on_buf, 5, enable ? "on\n" : "off\n");
+
+	return simple_read_from_buffer(buf, count, ppos, numa_on_buf, len);
+}
+
+static ssize_t dbgfs_numa_stat_write(struct file *file,
+		const char __user *buf, size_t count, loff_t *ppos)
+{
+	ssize_t ret = 0;
+	char *kbuf;
+
+	kbuf = user_input_str(buf, count, ppos);
+	if (IS_ERR(kbuf))
+		return PTR_ERR(kbuf);
+
+	/* Remove white space */
+	if (sscanf(kbuf, "%s", kbuf) != 1) {
+		kfree(kbuf);
+		return -EINVAL;
+	}
+
+	if (!strncmp(kbuf, "on", count))
+		static_branch_enable(&numa_stat_enabled_key);
+	else if (!strncmp(kbuf, "off", count))
+		static_branch_disable(&numa_stat_enabled_key);
+	else
+		ret = -EINVAL;
+
+	if (!ret)
+		ret = count;
+	kfree(kbuf);
+	return ret;
+}
+
 static int damon_dbgfs_open(struct inode *inode, struct file *file)
 {
 	file->private_data = inode->i_private;
@@ -645,12 +688,17 @@ static const struct file_operations kdamond_pid_fops = {
 	.read = dbgfs_kdamond_pid_read,
 };

+static const struct file_operations numa_stat_ops = {
+	.write = dbgfs_numa_stat_write,
+	.read = dbgfs_numa_stat_read,
+};
+
 static void dbgfs_fill_ctx_dir(struct dentry *dir, struct damon_ctx *ctx)
 {
 	const char * const file_names[] = {"attrs", "schemes", "target_ids",
-		"init_regions", "kdamond_pid"};
+		"init_regions", "kdamond_pid", "numa_stat"};
 	const struct file_operations *fops[] = {&attrs_fops, &schemes_fops,
-		&target_ids_fops, &init_regions_fops, &kdamond_pid_fops};
+		&target_ids_fops, &init_regions_fops, &kdamond_pid_fops, &numa_stat_ops};
 	int i;

 	for (i = 0; i < ARRAY_SIZE(file_names); i++)

diff --git a/mm/damon/paddr.c b/mm/damon/paddr.c
index b8feacf15592..9b9920784f22 100644
--- a/mm/damon/paddr.c
+++ b/mm/damon/paddr.c
@@ -30,14 +30,16 @@ static bool __damon_pa_mk_set(struct page *page, struct vm_area_struct *vma,
 		addr = pvmw.address;
 		if (pvmw.pte) {
 			damon_ptep_mkold(pvmw.pte, vma->vm_mm, addr);
-			if (nr_online_nodes > 1) {
+			if (static_branch_unlikely(&numa_stat_enabled_key) &&
+					nr_online_nodes > 1) {
 				result = damon_ptep_mknone(pvmw.pte, vma, addr);
 				if (result)
 					flush_tlb_page(vma, addr);
 			}
 		} else {
 			damon_pmdp_mkold(pvmw.pmd, vma->vm_mm, addr);
-			if (nr_online_nodes > 1) {
+			if (static_branch_unlikely(&numa_stat_enabled_key) &&
+					nr_online_nodes > 1) {
 				result = damon_pmdp_mknone(pvmw.pmd, vma, addr);
 				if (result) {
 					unsigned long haddr = addr & HPAGE_PMD_MASK;

diff --git a/mm/damon/vaddr.c b/mm/damon/vaddr.c
index 78b90972d171..5c2e2c2e29bb 100644
--- a/mm/damon/vaddr.c
+++ b/mm/damon/vaddr.c
@@ -371,7 +371,8 @@ static int damon_va_pmd_entry(pmd_t *pmd, unsigned long addr,
 	ptl = pmd_lock(walk->mm, pmd);
 	if (pmd_huge(*pmd)) {
 		damon_pmdp_mkold(pmd, walk->mm, addr);
-		if (nr_online_nodes > 1)
+		if (static_branch_unlikely(&numa_stat_enabled_key) &&
+				nr_online_nodes > 1)
 			result = damon_pmdp_mknone(pmd, walk->vma, addr);
 		spin_unlock(ptl);
 		if (result) {
@@ -392,7 +393,8 @@ static int damon_va_pmd_entry(pmd_t *pmd, unsigned long addr,
 		return 0;
 	}
 	damon_ptep_mkold(pte, walk->mm, addr);
-	if (nr_online_nodes > 1)
+	if (static_branch_unlikely(&numa_stat_enabled_key) &&
+			nr_online_nodes > 1)
 		result = damon_ptep_mknone(pte, walk->vma, addr);
 	pte_unmap_unlock(pte, ptl);
 	if (result)

From patchwork Wed Feb 16 08:30:41 2022
charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: haoxin
X-Patchwork-Id: 12748185
From: Xin Hao
To: sj@kernel.org
Cc: xhao@linux.alibaba.com, rongwei.wang@linux.alibaba.com, akpm@linux-foundation.org, linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: [RFC PATCH V1 5/5] mm/damon/tracepoint:
Add 'damon_region' NUMA access statistics support
Date: Wed, 16 Feb 2022 16:30:41 +0800
X-Mailer: git-send-email 2.31.0
MIME-Version: 1.0

This patch adds 'damon_region' NUMA access statistics to the
damon_aggregated tracepoint, so that users can obtain the NUMA access
status of a 'damon_region' through perf or the damo tool.

Signed-off-by: Xin Hao
Signed-off-by: Rongwei Wang
---
 include/trace/events/damon.h | 9 +++++++--
 1 file changed, 7 insertions(+), 2 deletions(-)

diff --git a/include/trace/events/damon.h b/include/trace/events/damon.h
index c79f1d4c39af..687b7aba751e 100644
--- a/include/trace/events/damon.h
+++ b/include/trace/events/damon.h
@@ -23,6 +23,8 @@ TRACE_EVENT(damon_aggregated,
 		__field(unsigned long, end)
 		__field(unsigned int, nr_accesses)
 		__field(unsigned int, age)
+		__field(unsigned long, local)
+		__field(unsigned long, remote)
 	),

 	TP_fast_assign(
@@ -32,12 +34,15 @@ TRACE_EVENT(damon_aggregated,
 		__entry->end = r->ar.end;
 		__entry->nr_accesses = r->nr_accesses;
 		__entry->age = r->age;
+		__entry->local = r->local;
+		__entry->remote = r->remote;
 	),

-	TP_printk("target_id=%lu nr_regions=%u %lu-%lu: %u %u",
+	TP_printk("target_id=%lu nr_regions=%u %lu-%lu: %u %u %lu %lu",
 			__entry->target_id, __entry->nr_regions,
 			__entry->start, __entry->end,
-			__entry->nr_accesses, __entry->age)
+			__entry->nr_accesses, __entry->age,
+			__entry->local, __entry->remote)
 );
#endif /* _TRACE_DAMON_H */