From patchwork Wed Dec 16 09:42:20 2020
From: SeongJae Park <sjpark@amazon.com>
Subject: [RFC v10 12/13] mm/damon/paddr: Separate commonly usable functions
Date: Wed, 16 Dec 2020 10:42:20 +0100
Message-ID: <20201216094221.11898-13-sjpark@amazon.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20201216094221.11898-1-sjpark@amazon.com>
References:
 <20201216094221.11898-1-sjpark@amazon.com>

From: SeongJae Park <sjpark@amazon.com>

This commit moves the functions of the default physical address space
monitoring primitives that are commonly usable from other use cases,
such as page granularity idleness monitoring, to prmtv-common.

Signed-off-by: SeongJae Park <sjpark@amazon.com>
---
 mm/damon/paddr.c        | 122 ----------------------------------------
 mm/damon/prmtv-common.c | 122 ++++++++++++++++++++++++++++++++++++++++
 mm/damon/prmtv-common.h |   4 ++
 3 files changed, 126 insertions(+), 122 deletions(-)

diff --git a/mm/damon/paddr.c b/mm/damon/paddr.c
index b120f672cc57..143ddc0e5917 100644
--- a/mm/damon/paddr.c
+++ b/mm/damon/paddr.c
@@ -19,69 +19,6 @@
  * of the primitives.
  */
 
-/*
- * Get a page by pfn if it is in the LRU list. Otherwise, returns NULL.
- *
- * The body of this function is stollen from the 'page_idle_get_page()'. We
- * steal rather than reuse it because the code is quite simple.
- */
-static struct page *damon_pa_get_page(unsigned long pfn)
-{
-	struct page *page = pfn_to_online_page(pfn);
-	pg_data_t *pgdat;
-
-	if (!page || !PageLRU(page) ||
-	    !get_page_unless_zero(page))
-		return NULL;
-
-	pgdat = page_pgdat(page);
-	spin_lock_irq(&pgdat->lru_lock);
-	if (unlikely(!PageLRU(page))) {
-		put_page(page);
-		page = NULL;
-	}
-	spin_unlock_irq(&pgdat->lru_lock);
-	return page;
-}
-
-static bool __damon_pa_mkold(struct page *page, struct vm_area_struct *vma,
-		unsigned long addr, void *arg)
-{
-	damon_va_mkold(vma->vm_mm, addr);
-	return true;
-}
-
-static void damon_pa_mkold(unsigned long paddr)
-{
-	struct page *page = damon_pa_get_page(PHYS_PFN(paddr));
-	struct rmap_walk_control rwc = {
-		.rmap_one = __damon_pa_mkold,
-		.anon_lock = page_lock_anon_vma_read,
-	};
-	bool need_lock;
-
-	if (!page)
-		return;
-
-	if (!page_mapped(page) || !page_rmapping(page)) {
-		set_page_idle(page);
-		put_page(page);
-		return;
-	}
-
-	need_lock = !PageAnon(page) || PageKsm(page);
-	if (need_lock && !trylock_page(page)) {
-		put_page(page);
-		return;
-	}
-
-	rmap_walk(page, &rwc);
-
-	if (need_lock)
-		unlock_page(page);
-	put_page(page);
-}
-
 static void __damon_pa_prepare_access_check(struct damon_ctx *ctx,
 		struct damon_region *r)
 {
@@ -101,65 +38,6 @@ void damon_pa_prepare_access_checks(struct damon_ctx *ctx)
 	}
 }
 
-struct damon_pa_access_chk_result {
-	unsigned long page_sz;
-	bool accessed;
-};
-
-static bool damon_pa_accessed(struct page *page, struct vm_area_struct *vma,
-		unsigned long addr, void *arg)
-{
-	struct damon_pa_access_chk_result *result = arg;
-
-	result->accessed = damon_va_young(vma->vm_mm, addr, &result->page_sz);
-
-	/* If accessed, stop walking */
-	return !result->accessed;
-}
-
-static bool damon_pa_young(unsigned long paddr, unsigned long *page_sz)
-{
-	struct page *page = damon_pa_get_page(PHYS_PFN(paddr));
-	struct damon_pa_access_chk_result result = {
-		.page_sz = PAGE_SIZE,
-		.accessed = false,
-	};
-	struct rmap_walk_control rwc = {
-		.arg = &result,
-		.rmap_one = damon_pa_accessed,
-		.anon_lock = page_lock_anon_vma_read,
-	};
-	bool need_lock;
-
-	if (!page)
-		return false;
-
-	if (!page_mapped(page) || !page_rmapping(page)) {
-		if (page_is_idle(page))
-			result.accessed = false;
-		else
-			result.accessed = true;
-		put_page(page);
-		goto out;
-	}
-
-	need_lock = !PageAnon(page) || PageKsm(page);
-	if (need_lock && !trylock_page(page)) {
-		put_page(page);
-		return NULL;
-	}
-
-	rmap_walk(page, &rwc);
-
-	if (need_lock)
-		unlock_page(page);
-	put_page(page);
-
-out:
-	*page_sz = result.page_sz;
-	return result.accessed;
-}
-
 /*
  * Check whether the region was accessed after the last preparation
  *
diff --git a/mm/damon/prmtv-common.c b/mm/damon/prmtv-common.c
index 6cdb96cbc9ef..6c2e760e086c 100644
--- a/mm/damon/prmtv-common.c
+++ b/mm/damon/prmtv-common.c
@@ -102,3 +102,125 @@ bool damon_va_young(struct mm_struct *mm, unsigned long addr,
 
 	return young;
 }
+
+/*
+ * Get a page by pfn if it is in the LRU list. Otherwise, returns NULL.
+ *
+ * The body of this function is stolen from 'page_idle_get_page()'. We
+ * steal rather than reuse it because the code is quite simple.
+ */
+static struct page *damon_pa_get_page(unsigned long pfn)
+{
+	struct page *page = pfn_to_online_page(pfn);
+	pg_data_t *pgdat;
+
+	if (!page || !PageLRU(page) ||
+	    !get_page_unless_zero(page))
+		return NULL;
+
+	pgdat = page_pgdat(page);
+	spin_lock_irq(&pgdat->lru_lock);
+	if (unlikely(!PageLRU(page))) {
+		put_page(page);
+		page = NULL;
+	}
+	spin_unlock_irq(&pgdat->lru_lock);
+	return page;
+}
+
+static bool __damon_pa_mkold(struct page *page, struct vm_area_struct *vma,
+		unsigned long addr, void *arg)
+{
+	damon_va_mkold(vma->vm_mm, addr);
+	return true;
+}
+
+void damon_pa_mkold(unsigned long paddr)
+{
+	struct page *page = damon_pa_get_page(PHYS_PFN(paddr));
+	struct rmap_walk_control rwc = {
+		.rmap_one = __damon_pa_mkold,
+		.anon_lock = page_lock_anon_vma_read,
+	};
+	bool need_lock;
+
+	if (!page)
+		return;
+
+	if (!page_mapped(page) || !page_rmapping(page)) {
+		set_page_idle(page);
+		put_page(page);
+		return;
+	}
+
+	need_lock = !PageAnon(page) || PageKsm(page);
+	if (need_lock && !trylock_page(page)) {
+		put_page(page);
+		return;
+	}
+
+	rmap_walk(page, &rwc);
+
+	if (need_lock)
+		unlock_page(page);
+	put_page(page);
+}
+
+struct damon_pa_access_chk_result {
+	unsigned long page_sz;
+	bool accessed;
+};
+
+static bool damon_pa_accessed(struct page *page, struct vm_area_struct *vma,
+		unsigned long addr, void *arg)
+{
+	struct damon_pa_access_chk_result *result = arg;
+
+	result->accessed = damon_va_young(vma->vm_mm, addr, &result->page_sz);
+
+	/* If accessed, stop walking */
+	return !result->accessed;
+}
+
+bool damon_pa_young(unsigned long paddr, unsigned long *page_sz)
+{
+	struct page *page = damon_pa_get_page(PHYS_PFN(paddr));
+	struct damon_pa_access_chk_result result = {
+		.page_sz = PAGE_SIZE,
+		.accessed = false,
+	};
+	struct rmap_walk_control rwc = {
+		.arg = &result,
+		.rmap_one = damon_pa_accessed,
+		.anon_lock = page_lock_anon_vma_read,
+	};
+	bool need_lock;
+
+	if (!page)
+		return false;
+
+	if (!page_mapped(page) || !page_rmapping(page)) {
+		if (page_is_idle(page))
+			result.accessed = false;
+		else
+			result.accessed = true;
+		put_page(page);
+		goto out;
+	}
+
+	need_lock = !PageAnon(page) || PageKsm(page);
+	if (need_lock && !trylock_page(page)) {
+		put_page(page);
+		return false;
+	}
+
+	rmap_walk(page, &rwc);
+
+	if (need_lock)
+		unlock_page(page);
+	put_page(page);
+
+out:
+	*page_sz = result.page_sz;
+	return result.accessed;
+}
diff --git a/mm/damon/prmtv-common.h b/mm/damon/prmtv-common.h
index a66a6139b4fc..fbe9452bd040 100644
--- a/mm/damon/prmtv-common.h
+++ b/mm/damon/prmtv-common.h
@@ -10,6 +10,7 @@
 #include
 #include
 #include
+#include
 #include
 #include
@@ -19,3 +20,6 @@
 void damon_va_mkold(struct mm_struct *mm, unsigned long addr);
 bool damon_va_young(struct mm_struct *mm, unsigned long addr,
 		unsigned long *page_sz);
+
+void damon_pa_mkold(unsigned long paddr);
+bool damon_pa_young(unsigned long paddr, unsigned long *page_sz);