From patchwork Mon Jun 15 16:19:14 2020
X-Patchwork-Submitter: SeongJae Park
X-Patchwork-Id: 11605399
From: SeongJae Park
Subject: [PATCH v16 01/14] mm/page_ext: Export lookup_page_ext() to GPL modules
Date: Mon, 15 Jun 2020 18:19:14 +0200
Message-ID: <20200615161927.12637-2-sjpark@amazon.com>
In-Reply-To: <20200615161927.12637-1-sjpark@amazon.com>
References: <20200615161927.12637-1-sjpark@amazon.com>

This commit exports 'lookup_page_ext()' to GPL modules.  It will be
used by DAMON in a following commit for the implementation of the
region based sampling.

Signed-off-by: SeongJae Park
Reviewed-by: Leonard Foerster
Reviewed-by: Varad Gautam
---
 mm/page_ext.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/mm/page_ext.c b/mm/page_ext.c
index a3616f7a0e9e..9d802d01fcb5 100644
--- a/mm/page_ext.c
+++ b/mm/page_ext.c
@@ -131,6 +131,7 @@ struct page_ext *lookup_page_ext(const struct page *page)
 					MAX_ORDER_NR_PAGES);
 	return get_entry(base, index);
 }
+EXPORT_SYMBOL_GPL(lookup_page_ext);
 
 static int __init alloc_node_page_ext(int nid)
 {
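For illustration, a GPL module could then call the newly exported
symbol roughly as below.  This is a minimal sketch, not part of the
patch: the demo module is invented, and note that lookup_page_ext()
may return NULL when page_ext is disabled.

    // SPDX-License-Identifier: GPL-2.0
    #include <linux/module.h>
    #include <linux/gfp.h>
    #include <linux/page_ext.h>

    static int __init page_ext_demo_init(void)
    {
            struct page *page = alloc_page(GFP_KERNEL);
            struct page_ext *ext;

            if (!page)
                    return -ENOMEM;

            /* Callable from a module only due to the EXPORT_SYMBOL_GPL()
             * above; requires MODULE_LICENSE("GPL") below. */
            ext = lookup_page_ext(page);
            pr_info("page_ext of the page: %p\n", ext);

            __free_page(page);
            return 0;
    }

    static void __exit page_ext_demo_exit(void)
    {
    }

    module_init(page_ext_demo_init);
    module_exit(page_ext_demo_exit);
    MODULE_LICENSE("GPL");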
From patchwork Mon Jun 15 16:19:15 2020
X-Patchwork-Submitter: SeongJae Park
X-Patchwork-Id: 11605401
From: SeongJae Park
Subject: [PATCH v16 02/14] mm: Introduce Data Access MONitor (DAMON)
Date: Mon, 15 Jun 2020 18:19:15 +0200
Message-ID: <20200615161927.12637-3-sjpark@amazon.com>
In-Reply-To: <20200615161927.12637-1-sjpark@amazon.com>
References: <20200615161927.12637-1-sjpark@amazon.com>

This commit introduces a kernel module named DAMON.
Note that this commit implements only the stub for the module
load/unload, the basic data structures, and simple manipulation
functions for those structures, to keep the size of the commit small.
The core mechanisms of DAMON will be implemented one by one by
following commits.

Signed-off-by: SeongJae Park
Reviewed-by: Leonard Foerster
Reviewed-by: Varad Gautam
---
 include/linux/damon.h |  63 ++++++++++++++
 mm/Kconfig            |  12 +++
 mm/Makefile           |   1 +
 mm/damon.c            | 188 ++++++++++++++++++++++++++++++++++++++++++
 4 files changed, 264 insertions(+)
 create mode 100644 include/linux/damon.h
 create mode 100644 mm/damon.c

diff --git a/include/linux/damon.h b/include/linux/damon.h
new file mode 100644
index 000000000000..c8f8c1c41a45
--- /dev/null
+++ b/include/linux/damon.h
@@ -0,0 +1,63 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * DAMON api
+ *
+ * Copyright 2019-2020 Amazon.com, Inc. or its affiliates.
+ *
+ * Author: SeongJae Park
+ */
+
+#ifndef _DAMON_H_
+#define _DAMON_H_
+
+#include
+#include
+
+/**
+ * struct damon_addr_range - Represents an address region of [@start, @end).
+ * @start:	Start address of the region (inclusive).
+ * @end:	End address of the region (exclusive).
+ */
+struct damon_addr_range {
+	unsigned long start;
+	unsigned long end;
+};
+
+/**
+ * struct damon_region - Represents a monitoring target region.
+ * @ar:			The address range of the region.
+ * @sampling_addr:	Address of the sample for the next access check.
+ * @nr_accesses:	Access frequency of this region.
+ * @list:		List head for siblings.
+ */
+struct damon_region {
+	struct damon_addr_range ar;
+	unsigned long sampling_addr;
+	unsigned int nr_accesses;
+	struct list_head list;
+};
+
+/**
+ * struct damon_task - Represents a monitoring target task.
+ * @pid:		Process id of the task.
+ * @regions_list:	Head of the monitoring target regions of this task.
+ * @list:		List head for siblings.
+ *
+ * If the monitoring target address space is task independent (e.g., physical
+ * memory address space monitoring), @pid should be '-1'.
+ */
+struct damon_task {
+	int pid;
+	struct list_head regions_list;
+	struct list_head list;
+};
+
+/**
+ * struct damon_ctx - Represents a context for each monitoring.
+ * @tasks_list:		Head of monitoring target tasks (&damon_task) list.
+ */
+struct damon_ctx {
+	struct list_head tasks_list;	/* 'damon_task' objects */
+};
+
+#endif
diff --git a/mm/Kconfig b/mm/Kconfig
index c1acc34c1c35..ecea0889ea35 100644
--- a/mm/Kconfig
+++ b/mm/Kconfig
@@ -867,4 +867,16 @@ config ARCH_HAS_HUGEPD
 config MAPPING_DIRTY_HELPERS
 	bool
 
+config DAMON
+	tristate "Data Access Monitor"
+	depends on MMU
+	help
+	  Provides data access monitoring.
+
+	  DAMON is a kernel module that allows users to monitor the actual
+	  memory access pattern of specific user-space processes.  It aims to
+	  be 1) accurate enough to be useful for performance-centric domains,
+	  and 2) sufficiently light-weight so that it can be applied online.
+
+	  If unsure, say N.
+
 endmenu
diff --git a/mm/Makefile b/mm/Makefile
index fccd3756b25f..230e545b6e07 100644
--- a/mm/Makefile
+++ b/mm/Makefile
@@ -112,3 +112,4 @@ obj-$(CONFIG_MEMFD_CREATE) += memfd.o
 obj-$(CONFIG_MAPPING_DIRTY_HELPERS) += mapping_dirty_helpers.o
 obj-$(CONFIG_PTDUMP_CORE) += ptdump.o
 obj-$(CONFIG_PAGE_REPORTING) += page_reporting.o
+obj-$(CONFIG_DAMON) += damon.o
diff --git a/mm/damon.c b/mm/damon.c
new file mode 100644
index 000000000000..2bf35bdc0470
--- /dev/null
+++ b/mm/damon.c
@@ -0,0 +1,188 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Data Access Monitor
+ *
+ * Copyright 2019-2020 Amazon.com, Inc. or its affiliates.
+ *
+ * Author: SeongJae Park
+ *
+ * This file is constructed in below parts.
+ *
+ * - Functions and macros for DAMON data structures
+ * - Functions for the module loading/unloading
+ *
+ * The core parts are not implemented yet.
+ */
+
+#define pr_fmt(fmt) "damon: " fmt
+
+#include
+#include
+#include
+#include
+
+/*
+ * Functions and macros for DAMON data structures
+ */
+
+#define damon_get_task_struct(t) \
+	(get_pid_task(find_vpid(t->pid), PIDTYPE_PID))
+
+#define damon_next_region(r) \
+	(container_of(r->list.next, struct damon_region, list))
+
+#define damon_prev_region(r) \
+	(container_of(r->list.prev, struct damon_region, list))
+
+#define damon_for_each_region(r, t) \
+	list_for_each_entry(r, &t->regions_list, list)
+
+#define damon_for_each_region_safe(r, next, t) \
+	list_for_each_entry_safe(r, next, &t->regions_list, list)
+
+#define damon_for_each_task(t, ctx) \
+	list_for_each_entry(t, &(ctx)->tasks_list, list)
+
+#define damon_for_each_task_safe(t, next, ctx) \
+	list_for_each_entry_safe(t, next, &(ctx)->tasks_list, list)
+
+/* Get a random number in [l, r) */
+#define damon_rand(l, r) (l + prandom_u32() % (r - l))
+
+/*
+ * Construct a damon_region struct
+ *
+ * Returns the pointer to the new struct if success, or NULL otherwise
+ */
+static struct damon_region *damon_new_region(struct damon_ctx *ctx,
+				unsigned long start, unsigned long end)
+{
+	struct damon_region *region;
+
+	region = kmalloc(sizeof(*region), GFP_KERNEL);
+	if (!region)
+		return NULL;
+
+	region->ar.start = start;
+	region->ar.end = end;
+	region->nr_accesses = 0;
+	INIT_LIST_HEAD(&region->list);
+
+	return region;
+}
+
+/*
+ * Add a region between two other regions
+ */
+static inline void damon_insert_region(struct damon_region *r,
+		struct damon_region *prev, struct damon_region *next)
+{
+	__list_add(&r->list, &prev->list, &next->list);
+}
+
+static void damon_add_region(struct damon_region *r, struct damon_task *t)
+{
+	list_add_tail(&r->list, &t->regions_list);
+}
+
+static void damon_del_region(struct damon_region *r)
+{
+	list_del(&r->list);
+}
+
+static void damon_free_region(struct damon_region *r)
+{
+	kfree(r);
+}
+
+static void damon_destroy_region(struct damon_region *r)
+{
+	damon_del_region(r);
+	damon_free_region(r);
+}
+
+/*
+ * Construct a damon_task struct
+ *
+ * Returns the pointer to the new struct if success, or NULL otherwise
+ */
+static struct damon_task *damon_new_task(int pid)
+{
+	struct damon_task *t;
+
+	t = kmalloc(sizeof(*t), GFP_KERNEL);
+	if (!t)
+		return NULL;
+
+	t->pid = pid;
+	INIT_LIST_HEAD(&t->regions_list);
+
+	return t;
+}
+
+static void damon_add_task(struct damon_ctx *ctx, struct damon_task *t)
+{
+	list_add_tail(&t->list, &ctx->tasks_list);
+}
+
+static void damon_del_task(struct damon_task *t)
+{
+	list_del(&t->list);
+}
+
+static void damon_free_task(struct damon_task *t)
+{
+	struct damon_region *r, *next;
+
+	damon_for_each_region_safe(r, next, t)
+		damon_free_region(r);
+	kfree(t);
+}
+
+static void damon_destroy_task(struct damon_task *t)
+{
+	damon_del_task(t);
+	damon_free_task(t);
+}
+
+static unsigned int nr_damon_tasks(struct damon_ctx *ctx)
+{
+	struct damon_task *t;
+	unsigned int nr_tasks = 0;
+
+	damon_for_each_task(t, ctx)
+		nr_tasks++;
+
+	return nr_tasks;
+}
+
+static unsigned int nr_damon_regions(struct damon_task *t)
+{
+	struct damon_region *r;
+	unsigned int nr_regions = 0;
+
+	damon_for_each_region(r, t)
+		nr_regions++;
+
+	return nr_regions;
+}
+
+/*
+ * Functions for the module loading/unloading
+ */
+
+static int __init damon_init(void)
+{
+	return 0;
+}
+
+static void __exit damon_exit(void)
+{
+}
+
+module_init(damon_init);
+module_exit(damon_exit);
+
+MODULE_LICENSE("GPL");
+MODULE_AUTHOR("SeongJae Park");
+MODULE_DESCRIPTION("DAMON: Data Access MONitor");
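As a quick illustration of how the structures above compose, a context
holds tasks and each task holds its regions.  The following is a sketch
only, not part of the patch: the helpers are static to mm/damon.c, and
the PID and the address range are invented.

    /* Illustrative sketch: build a context with one task, one region */
    static int build_example_ctx(struct damon_ctx *ctx)
    {
            struct damon_task *t;
            struct damon_region *r;

            INIT_LIST_HEAD(&ctx->tasks_list);

            t = damon_new_task(1234);               /* invented PID */
            if (!t)
                    return -ENOMEM;
            damon_add_task(ctx, t);

            /* one region covering an invented address range */
            r = damon_new_region(ctx, 0x7f0000000000, 0x7f0000200000);
            if (!r) {
                    damon_destroy_task(t);
                    return -ENOMEM;
            }
            damon_add_region(r, t);

            return 0;
    }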
From patchwork Mon Jun 15 16:19:16 2020
X-Patchwork-Submitter: SeongJae Park
X-Patchwork-Id: 11605395
From: SeongJae Park
Subject: [PATCH v16 03/14] mm/damon: Implement region based sampling
Date: Mon, 15 Jun 2020 18:19:16 +0200
Message-ID: <20200615161927.12637-4-sjpark@amazon.com>
In-Reply-To: <20200615161927.12637-1-sjpark@amazon.com>
References: <20200615161927.12637-1-sjpark@amazon.com>

This commit implements DAMON's target address space independent high
level logics for the basic access check and the region based sampling.
The target address space specific logics for constructing the
monitoring target regions and checking the accesses are still required,
though.  The following commits will provide reference implementations
of those for the general virtual address spaces and the physical
address space.  Users can also implement and use their own versions for
their specific use cases.

Basic Access Check
------------------

DAMON basically reports which pages are how frequently accessed.  The
frequency is not an absolute number of accesses, but a ratio.  For
this, DAMON first calls the target region construction callback
(``init_target_regions``) and then, for every ``sampling interval``,
the access check callbacks, which are assumed to check the access to
each page and aggregate the number of observed accesses per page.
Finally, DAMON resets the aggregated counts per ``aggregation
interval``.  This is thus similar to common periodic access check based
monitoring mechanisms, but additionally provides the access frequency.
The overhead, however, will increase as the size of the target process
grows.
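The per-page check described above can be pictured with the following
sketch.  It is illustrative only: page_accessed_since_last_check() is
an assumed helper, and the real checks arrive as address space specific
callbacks in later patches.  The loop makes the overhead scaling
visible: the work grows linearly with the number of target pages.

    /* Illustrative sketch of page-granularity access checking */
    static void check_accesses_per_page(unsigned int *nr_accesses,
                    unsigned long nr_pages)
    {
            unsigned long i;

            /* O(nr_pages) work for every sampling interval */
            for (i = 0; i < nr_pages; i++) {
                    if (page_accessed_since_last_check(i))  /* assumed */
                            nr_accesses[i]++;
            }
    }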
Region Based Sampling
---------------------

To avoid the unbounded increase of the overhead, DAMON groups a number
of adjacent pages that are assumed to have the same access frequency
into a region.  As long as the assumption (pages in a region have the
same access frequency) holds, only one page in the region needs to be
checked.  Therefore, the monitoring overhead is controllable by setting
the number of regions.  Nonetheless, this scheme cannot preserve the
quality of the output if the assumption does not hold.  A following
commit will introduce how the assumption can be kept in a best-effort
manner.

Signed-off-by: SeongJae Park
Reviewed-by: Leonard Foerster
---
 include/linux/damon.h |  80 ++++++++++++-
 mm/damon.c            | 258 +++++++++++++++++++++++++++++++++++++++++-
 2 files changed, 335 insertions(+), 3 deletions(-)

diff --git a/include/linux/damon.h b/include/linux/damon.h
index c8f8c1c41a45..649fb8b6209f 100644
--- a/include/linux/damon.h
+++ b/include/linux/damon.h
@@ -11,6 +11,8 @@
 #define _DAMON_H_
 
 #include
+#include
+#include
 #include
 
 /**
@@ -53,11 +55,87 @@ struct damon_task {
 };
 
 /**
- * struct damon_ctx - Represents a context for each monitoring.
+ * struct damon_ctx - Represents a context for each monitoring.  This is the
+ * main interface that allows users to set the attributes and get the results
+ * of the monitoring.
+ *
+ * For each monitoring request (damon_start()), a kernel thread for the
+ * monitoring is created.  The pointer to the thread is stored in @kdamond.
+ *
+ * @sample_interval:	The time between access samplings.
+ * @aggr_interval:	The time between monitor results aggregations.
+ * @min_nr_regions:	The number of initial monitoring regions.
+ *
+ * For each @sample_interval, DAMON checks whether each region is accessed or
+ * not.  It aggregates and keeps the access information (number of accesses to
+ * each region) for @aggr_interval time.  All time intervals are in
+ * micro-seconds.
+ *
+ * @kdamond:		Kernel thread who does the monitoring.
+ * @kdamond_stop:	Notifies whether kdamond should stop.
+ * @kdamond_lock:	Mutex for the synchronizations with @kdamond.
+ *
+ * The monitoring thread sets @kdamond to NULL when it terminates.  Therefore,
+ * users can know whether the monitoring is ongoing or terminated by reading
+ * @kdamond.  Also, users can ask @kdamond to be terminated by writing non-zero
+ * to @kdamond_stop.  Reads and writes to @kdamond and @kdamond_stop from
+ * outside of the monitoring thread must be protected by @kdamond_lock.
+ *
+ * Note that the monitoring thread protects only @kdamond and @kdamond_stop via
+ * @kdamond_lock.  Accesses to other fields must be protected by themselves.
+ *
 * @tasks_list:		Head of monitoring target tasks (&damon_task) list.
+ *
+ * @init_target_regions:	Constructs initial monitoring target regions.
+ * @prepare_access_checks:	Prepares next access check of target regions.
+ * @check_accesses:		Checks the access of target regions.
+ * @sample_cb:			Called for each sampling interval.
+ * @aggregate_cb:		Called for each aggregation interval.
+ *
+ * DAMON can be extended for various address spaces by users.  For this, users
+ * can register the target address space dependent low level functions for
+ * their usecases via the callback pointers of the context.  The monitoring
+ * thread calls @init_target_regions before starting the monitoring, and
+ * @prepare_access_checks and @check_accesses for each @sample_interval.
+ *
+ * @init_target_regions should construct proper monitoring target regions and
+ * link those to the DAMON context struct.
+ * @prepare_access_checks should manipulate the monitoring regions to be
+ * prepared for the next access check.
+ * @check_accesses should check the accesses to each region that were made
+ * after the last preparation and update the `->nr_accesses` of each region.
+ *
+ * @sample_cb and @aggregate_cb are called from @kdamond for each of the
+ * sampling intervals and aggregation intervals, respectively.  Therefore,
+ * users can safely access the monitoring results via @tasks_list without
+ * additional protection of @kdamond_lock.  For this reason, users are
+ * recommended to use these callbacks for accessing the results.
 */
 struct damon_ctx {
+	unsigned long sample_interval;
+	unsigned long aggr_interval;
+	unsigned long min_nr_regions;
+
+	struct timespec64 last_aggregation;
+
+	struct task_struct *kdamond;
+	bool kdamond_stop;
+	struct mutex kdamond_lock;
+
 	struct list_head tasks_list;	/* 'damon_task' objects */
+
+	/* callbacks */
+	void (*init_target_regions)(struct damon_ctx *context);
+	void (*prepare_access_checks)(struct damon_ctx *context);
+	unsigned int (*check_accesses)(struct damon_ctx *context);
+	void (*sample_cb)(struct damon_ctx *context);
+	void (*aggregate_cb)(struct damon_ctx *context);
 };
 
+int damon_set_pids(struct damon_ctx *ctx, int *pids, ssize_t nr_pids);
+int damon_set_attrs(struct damon_ctx *ctx, unsigned long sample_int,
+		unsigned long aggr_int, unsigned long min_nr_reg);
+int damon_start(struct damon_ctx *ctx);
+int damon_stop(struct damon_ctx *ctx);
+
 #endif
diff --git a/mm/damon.c b/mm/damon.c
index 2bf35bdc0470..aba02c652b51 100644
--- a/mm/damon.c
+++ b/mm/damon.c
@@ -9,18 +9,27 @@
  * This file is constructed in below parts.
  *
  * - Functions and macros for DAMON data structures
+ * - Functions for DAMON core logics and features
+ * - Functions for the DAMON programming interface
  * - Functions for the module loading/unloading
- *
- * The core parts are not implemented yet.
  */
 
 #define pr_fmt(fmt) "damon: " fmt
 
 #include
+#include
+#include
 #include
 #include
+#include
+#include
+#include
+#include
 #include
 
+/* Minimal region size.  Every damon_region is aligned by this. */
+#define MIN_REGION PAGE_SIZE
+
 /*
  * Functions and macros for DAMON data structures
  */
@@ -167,6 +176,251 @@ static unsigned int nr_damon_regions(struct damon_task *t)
 	return nr_regions;
 }
 
+/*
+ * Functions for DAMON core logics and features
+ */
+
+/*
+ * damon_check_reset_time_interval() - Check if a time interval is elapsed.
+ * @baseline:	the time to check whether the interval has elapsed since
+ * @interval:	the time interval (microseconds)
+ *
+ * See whether the given time interval has passed since the given baseline
+ * time.  If so, it also updates the baseline to the current time for the
+ * next check.
+ *
+ * Return:	true if the time interval has passed, or false otherwise.
+ */
+static bool damon_check_reset_time_interval(struct timespec64 *baseline,
+		unsigned long interval)
+{
+	struct timespec64 now;
+
+	ktime_get_coarse_ts64(&now);
+	if ((timespec64_to_ns(&now) - timespec64_to_ns(baseline)) <
+			interval * 1000)
+		return false;
+	*baseline = now;
+	return true;
+}
+
+/*
+ * Check whether it is time to flush the aggregated information
+ */
+static bool kdamond_aggregate_interval_passed(struct damon_ctx *ctx)
+{
+	return damon_check_reset_time_interval(&ctx->last_aggregation,
+			ctx->aggr_interval);
+}
+
+/*
+ * Reset the aggregated monitoring results
+ */
+static void kdamond_reset_aggregated(struct damon_ctx *c)
+{
+	struct damon_task *t;
+	struct damon_region *r;
+
+	damon_for_each_task(t, c) {
+		damon_for_each_region(r, t)
+			r->nr_accesses = 0;
+	}
+}
+
+/*
+ * Check whether current monitoring should be stopped
+ *
+ * The monitoring is stopped when either the user requested to stop, or all
+ * monitoring target tasks are dead.
+ *
+ * Returns true if need to stop current monitoring.
+ */
+static bool kdamond_need_stop(struct damon_ctx *ctx)
+{
+	struct damon_task *t;
+	struct task_struct *task;
+	bool stop;
+
+	mutex_lock(&ctx->kdamond_lock);
+	stop = ctx->kdamond_stop;
+	mutex_unlock(&ctx->kdamond_lock);
+	if (stop)
+		return true;
+
+	damon_for_each_task(t, ctx) {
+		/* -1 is reserved for non-process bounded monitoring */
+		if (t->pid == -1)
+			return false;
+
+		task = damon_get_task_struct(t);
+		if (task) {
+			put_task_struct(task);
+			return false;
+		}
+	}
+
+	return true;
+}
+
+/*
+ * The monitoring daemon that runs as a kernel thread
+ */
+static int kdamond_fn(void *data)
+{
+	struct damon_ctx *ctx = (struct damon_ctx *)data;
+	struct damon_task *t;
+	struct damon_region *r, *next;
+
+	pr_info("kdamond (%d) starts\n", ctx->kdamond->pid);
+	if (ctx->init_target_regions)
+		ctx->init_target_regions(ctx);
+	while (!kdamond_need_stop(ctx)) {
+		if (ctx->prepare_access_checks)
+			ctx->prepare_access_checks(ctx);
+		if (ctx->sample_cb)
+			ctx->sample_cb(ctx);
+
+		usleep_range(ctx->sample_interval, ctx->sample_interval + 1);
+
+		if (ctx->check_accesses)
+			ctx->check_accesses(ctx);
+
+		if (kdamond_aggregate_interval_passed(ctx)) {
+			if (ctx->aggregate_cb)
+				ctx->aggregate_cb(ctx);
+			kdamond_reset_aggregated(ctx);
+		}
+
+	}
+	damon_for_each_task(t, ctx) {
+		damon_for_each_region_safe(r, next, t)
+			damon_destroy_region(r);
+	}
+	pr_debug("kdamond (%d) finishes\n", ctx->kdamond->pid);
+	mutex_lock(&ctx->kdamond_lock);
+	ctx->kdamond = NULL;
+	mutex_unlock(&ctx->kdamond_lock);
+
+	do_exit(0);
+}
+
+/*
+ * Functions for the DAMON programming interface
+ */
+
+static bool damon_kdamond_running(struct damon_ctx *ctx)
+{
+	bool running;
+
+	mutex_lock(&ctx->kdamond_lock);
+	running = ctx->kdamond != NULL;
+	mutex_unlock(&ctx->kdamond_lock);
+
+	return running;
+}
+
+/**
+ * damon_start() - Starts monitoring with given context.
+ * @ctx:	monitoring context
+ *
+ * Return: 0 on success, negative error code otherwise.
+ */
+int damon_start(struct damon_ctx *ctx)
+{
+	int err = -EBUSY;
+
+	mutex_lock(&ctx->kdamond_lock);
+	if (!ctx->kdamond) {
+		err = 0;
+		ctx->kdamond_stop = false;
+		ctx->kdamond = kthread_run(kdamond_fn, ctx, "kdamond");
+		if (IS_ERR(ctx->kdamond))
+			err = PTR_ERR(ctx->kdamond);
+	}
+	mutex_unlock(&ctx->kdamond_lock);
+
+	return err;
+}
+
+/**
+ * damon_stop() - Stops monitoring of given context.
+ * @ctx:	monitoring context
+ *
+ * Return: 0 on success, negative error code otherwise.
+ */
+int damon_stop(struct damon_ctx *ctx)
+{
+	mutex_lock(&ctx->kdamond_lock);
+	if (ctx->kdamond) {
+		ctx->kdamond_stop = true;
+		mutex_unlock(&ctx->kdamond_lock);
+		while (damon_kdamond_running(ctx))
+			usleep_range(ctx->sample_interval,
+					ctx->sample_interval * 2);
+		return 0;
+	}
+	mutex_unlock(&ctx->kdamond_lock);
+
+	return -EPERM;
+}
+
+/**
+ * damon_set_pids() - Set monitoring target processes.
+ * @ctx:	monitoring context
+ * @pids:	array of target processes pids
+ * @nr_pids:	number of entries in @pids
+ *
+ * This function should not be called while the kdamond is running.
+ *
+ * Return: 0 on success, negative error code otherwise.
+ */
+int damon_set_pids(struct damon_ctx *ctx, int *pids, ssize_t nr_pids)
+{
+	ssize_t i;
+	struct damon_task *t, *next;
+
+	damon_for_each_task_safe(t, next, ctx)
+		damon_destroy_task(t);
+
+	for (i = 0; i < nr_pids; i++) {
+		t = damon_new_task(pids[i]);
+		if (!t) {
+			pr_err("Failed to alloc damon_task\n");
+			return -ENOMEM;
+		}
+		damon_add_task(ctx, t);
+	}
+
+	return 0;
+}
+
+/**
+ * damon_set_attrs() - Set attributes for the monitoring.
+ * @ctx:		monitoring context
+ * @sample_int:		time interval between samplings
+ * @aggr_int:		time interval between aggregations
+ * @min_nr_reg:		minimal number of regions
+ *
+ * This function should not be called while the kdamond is running.
+ * Every time interval is in micro-seconds.
+ *
+ * Return: 0 on success, negative error code otherwise.
+ */
+int damon_set_attrs(struct damon_ctx *ctx, unsigned long sample_int,
+		unsigned long aggr_int, unsigned long min_nr_reg)
+{
+	if (min_nr_reg < 3) {
+		pr_err("min_nr_regions (%lu) must be at least 3\n",
+				min_nr_reg);
+		return -EINVAL;
+	}
+
+	ctx->sample_interval = sample_int;
+	ctx->aggr_interval = aggr_int;
+	ctx->min_nr_regions = min_nr_reg;
+
+	return 0;
+}
+
 /*
  * Functions for the module loading/unloading
  */
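For orientation, an in-kernel user of the interface added by this patch
could look roughly like the following.  This is a sketch under
assumptions: the my_* callbacks stand in for the address space specific
implementations that later patches provide, and the values passed to
damon_set_attrs() are arbitrary examples.

    /* Illustrative sketch, not part of the patch */
    static struct damon_ctx example_ctx;

    static int example_monitor(void)
    {
            int pids[] = { 1234 };  /* invented target PID */
            int err;

            mutex_init(&example_ctx.kdamond_lock);
            INIT_LIST_HEAD(&example_ctx.tasks_list);
            example_ctx.init_target_regions = my_init_target_regions;
            example_ctx.prepare_access_checks = my_prepare_access_checks;
            example_ctx.check_accesses = my_check_accesses;

            /* 5 ms sampling, 100 ms aggregation, at least 10 regions */
            err = damon_set_attrs(&example_ctx, 5000, 100000, 10);
            if (err)
                    return err;
            err = damon_set_pids(&example_ctx, pids, 1);
            if (err)
                    return err;
            return damon_start(&example_ctx);
    }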
From patchwork Mon Jun 15 16:19:17 2020
X-Patchwork-Submitter: SeongJae Park
X-Patchwork-Id: 11605397
From: SeongJae Park
Subject: [PATCH v16 04/14] mm/damon: Adaptively adjust regions
Date: Mon, 15 Jun 2020 18:19:17 +0200
Message-ID: <20200615161927.12637-5-sjpark@amazon.com>
In-Reply-To: <20200615161927.12637-1-sjpark@amazon.com>
References: <20200615161927.12637-1-sjpark@amazon.com>

Even if the initial monitoring target regions are well constructed to
fulfill the assumption (pages in the same region have similar access
frequencies), the data access pattern can change dynamically.  This
will result in low monitoring quality.
To keep the assumption as much as possible, DAMON adaptively merges and
splits each region.  For each ``aggregation interval``, it compares the
access frequencies of adjacent regions and merges those if the
frequency difference is small.  Then, after it reports and clears the
aggregated access frequency of each region, it splits each region into
two or three regions of random size, if the total number of regions
after the splits would not exceed the user-specified maximum number of
regions.  In this way, DAMON provides its best-effort quality and
minimal overhead while keeping the overhead bounded.

Signed-off-by: SeongJae Park
Reviewed-by: Leonard Foerster
---
 include/linux/damon.h |   7 +-
 mm/damon.c            | 181 +++++++++++++++++++++++++++++++++++++++++-
 2 files changed, 182 insertions(+), 6 deletions(-)

diff --git a/include/linux/damon.h b/include/linux/damon.h
index 649fb8b6209f..9588bc162c3a 100644
--- a/include/linux/damon.h
+++ b/include/linux/damon.h
@@ -65,6 +65,7 @@ struct damon_task {
  * @sample_interval:	The time between access samplings.
  * @aggr_interval:	The time between monitor results aggregations.
  * @min_nr_regions:	The number of initial monitoring regions.
+ * @max_nr_regions:	The maximum number of monitoring regions.
  *
  * For each @sample_interval, DAMON checks whether each region is accessed or
  * not.  It aggregates and keeps the access information (number of accesses to
@@ -115,6 +116,7 @@ struct damon_ctx {
 	unsigned long sample_interval;
 	unsigned long aggr_interval;
 	unsigned long min_nr_regions;
+	unsigned long max_nr_regions;
 
 	struct timespec64 last_aggregation;
 
@@ -133,8 +135,9 @@ struct damon_ctx {
 };
 
 int damon_set_pids(struct damon_ctx *ctx, int *pids, ssize_t nr_pids);
-int damon_set_attrs(struct damon_ctx *ctx, unsigned long sample_int,
-		unsigned long aggr_int, unsigned long min_nr_reg);
+int damon_set_attrs(struct damon_ctx *ctx,
+		unsigned long sample_int, unsigned long aggr_int,
+		unsigned long min_nr_reg, unsigned long max_nr_reg);
 int damon_start(struct damon_ctx *ctx);
 int damon_stop(struct damon_ctx *ctx);
 
diff --git a/mm/damon.c b/mm/damon.c
index aba02c652b51..5cf39b3ad222 100644
--- a/mm/damon.c
+++ b/mm/damon.c
@@ -176,6 +176,26 @@ static unsigned int nr_damon_regions(struct damon_task *t)
 	return nr_regions;
 }
 
+/* Returns the size upper limit for each monitoring region */
+static unsigned long damon_region_sz_limit(struct damon_ctx *ctx)
+{
+	struct damon_task *t;
+	struct damon_region *r;
+	unsigned long sz = 0;
+
+	damon_for_each_task(t, ctx) {
+		damon_for_each_region(r, t)
+			sz += r->ar.end - r->ar.start;
+	}
+
+	if (ctx->min_nr_regions)
+		sz /= ctx->min_nr_regions;
+	if (sz < MIN_REGION)
+		sz = MIN_REGION;
+
+	return sz;
+}
+
 /*
  * Functions for DAMON core logics and features
  */
@@ -226,6 +246,145 @@ static void kdamond_reset_aggregated(struct damon_ctx *c)
 	}
 }
 
+#define sz_damon_region(r) (r->ar.end - r->ar.start)
+
+/*
+ * Merge two adjacent regions into one region
+ */
+static void damon_merge_two_regions(struct damon_region *l,
+				struct damon_region *r)
+{
+	l->nr_accesses = (l->nr_accesses * sz_damon_region(l) +
+			r->nr_accesses * sz_damon_region(r)) /
+			(sz_damon_region(l) + sz_damon_region(r));
+	l->ar.end = r->ar.end;
+	damon_destroy_region(r);
+}
+
+#define diff_of(a, b) (a > b ? a - b : b - a)
+
+/*
+ * Merge adjacent regions having similar access frequencies
+ *
+ * t		task affected by merge operation
+ * thres	'->nr_accesses' diff threshold for the merge
+ * sz_limit	size upper limit of each region
+ */
+static void damon_merge_regions_of(struct damon_task *t, unsigned int thres,
+				unsigned long sz_limit)
+{
+	struct damon_region *r, *prev = NULL, *next;
+
+	damon_for_each_region_safe(r, next, t) {
+		if (prev && prev->ar.end == r->ar.start &&
+			diff_of(prev->nr_accesses, r->nr_accesses) <= thres &&
+			sz_damon_region(prev) + sz_damon_region(r) <= sz_limit)
+			damon_merge_two_regions(prev, r);
+		else
+			prev = r;
+	}
+}
+
+/*
+ * Merge adjacent regions having similar access frequencies
+ *
+ * threshold	'->nr_accesses' diff threshold for the merge
+ * sz_limit	size upper limit of each region
+ *
+ * This function merges monitoring target regions which are adjacent and their
+ * access frequencies are similar.  This is for minimizing the monitoring
+ * overhead under the dynamically changeable access pattern.  If a merge was
+ * unnecessarily made, later 'kdamond_split_regions()' will revert it.
+ */
+static void kdamond_merge_regions(struct damon_ctx *c, unsigned int threshold,
+				unsigned long sz_limit)
+{
+	struct damon_task *t;
+
+	damon_for_each_task(t, c)
+		damon_merge_regions_of(t, threshold, sz_limit);
+}
+
+/*
+ * Split a region in two
+ *
+ * r		the region to be split
+ * sz_r		size of the first sub-region that will be made
+ */
+static void damon_split_region_at(struct damon_ctx *ctx,
+				struct damon_region *r, unsigned long sz_r)
+{
+	struct damon_region *new;
+
+	new = damon_new_region(ctx, r->ar.start + sz_r, r->ar.end);
+	r->ar.end = new->ar.start;
+
+	damon_insert_region(new, r, damon_next_region(r));
+}
+
+/* Split every region in the given task into 'nr_subs' regions */
+static void damon_split_regions_of(struct damon_ctx *ctx,
+				struct damon_task *t, int nr_subs)
+{
+	struct damon_region *r, *next;
+	unsigned long sz_region, sz_sub = 0;
+	int i;
+
+	damon_for_each_region_safe(r, next, t) {
+		sz_region = r->ar.end - r->ar.start;
+
+		for (i = 0; i < nr_subs - 1 &&
+				sz_region > 2 * MIN_REGION; i++) {
+			/*
+			 * Randomly select size of left sub-region to be at
+			 * least 10 percent and at most 90% of original region
+			 */
+			sz_sub = ALIGN_DOWN(damon_rand(1, 10) *
+					sz_region / 10, MIN_REGION);
+			/* Do not allow blank region */
+			if (sz_sub == 0 || sz_sub >= sz_region)
+				continue;
+
+			damon_split_region_at(ctx, r, sz_sub);
+			sz_region = sz_sub;
+		}
+	}
+}
+
+/*
+ * Split every target region into two randomly-sized regions
+ *
+ * This function splits every target region into two random-sized regions if
+ * current total number of the regions is equal or smaller than half of the
+ * user-specified maximum number of regions.  This is for maximizing the
+ * monitoring accuracy under the dynamically changeable access patterns.  If a
+ * split was unnecessarily made, later 'kdamond_merge_regions()' will revert
+ * it.
+ */
+static void kdamond_split_regions(struct damon_ctx *ctx)
+{
+	struct damon_task *t;
+	unsigned int nr_regions = 0;
+	static unsigned int last_nr_regions;
+	int nr_subregions = 2;
+
+	damon_for_each_task(t, ctx)
+		nr_regions += nr_damon_regions(t);
+
+	if (nr_regions > ctx->max_nr_regions / 2)
+		return;
+
+	/* If number of regions is not changed, we are maybe in corner case */
+	if (last_nr_regions == nr_regions &&
+			nr_regions < ctx->max_nr_regions / 3)
+		nr_subregions = 3;
+
+	damon_for_each_task(t, ctx)
+		damon_split_regions_of(ctx, t, nr_subregions);
+
+	last_nr_regions = nr_regions;
+}
+
 /*
  * Check whether current monitoring should be stopped
  *
@@ -269,10 +428,14 @@ static int kdamond_fn(void *data)
 	struct damon_ctx *ctx = (struct damon_ctx *)data;
 	struct damon_task *t;
 	struct damon_region *r, *next;
+	unsigned int max_nr_accesses = 0;
+	unsigned long sz_limit = 0;
 
 	pr_info("kdamond (%d) starts\n", ctx->kdamond->pid);
 	if (ctx->init_target_regions)
 		ctx->init_target_regions(ctx);
+	sz_limit = damon_region_sz_limit(ctx);
+
 	while (!kdamond_need_stop(ctx)) {
 		if (ctx->prepare_access_checks)
 			ctx->prepare_access_checks(ctx);
@@ -282,14 +445,16 @@ static int kdamond_fn(void *data)
 		usleep_range(ctx->sample_interval, ctx->sample_interval + 1);
 
 		if (ctx->check_accesses)
-			ctx->check_accesses(ctx);
+			max_nr_accesses = ctx->check_accesses(ctx);
 
 		if (kdamond_aggregate_interval_passed(ctx)) {
 			if (ctx->aggregate_cb)
 				ctx->aggregate_cb(ctx);
+			kdamond_merge_regions(ctx, max_nr_accesses / 10,
+					sz_limit);
 			kdamond_reset_aggregated(ctx);
+			kdamond_split_regions(ctx);
 		}
-
 	}
 	damon_for_each_task(t, ctx) {
 		damon_for_each_region_safe(r, next, t)
@@ -399,24 +564,32 @@ int damon_set_pids(struct damon_ctx *ctx, int *pids, ssize_t nr_pids)
  * @sample_int:		time interval between samplings
  * @aggr_int:		time interval between aggregations
  * @min_nr_reg:		minimal number of regions
+ * @max_nr_reg:		maximum number of regions
  *
  * This function should not be called while the kdamond is running.
  * Every time interval is in micro-seconds.
  *
  * Return: 0 on success, negative error code otherwise.
  */
-int damon_set_attrs(struct damon_ctx *ctx, unsigned long sample_int,
-		unsigned long aggr_int, unsigned long min_nr_reg)
+int damon_set_attrs(struct damon_ctx *ctx,
+		unsigned long sample_int, unsigned long aggr_int,
+		unsigned long min_nr_reg, unsigned long max_nr_reg)
 {
 	if (min_nr_reg < 3) {
 		pr_err("min_nr_regions (%lu) must be at least 3\n",
 				min_nr_reg);
 		return -EINVAL;
 	}
+	if (min_nr_reg > max_nr_reg) {
+		pr_err("invalid nr_regions.  min (%lu) > max (%lu)\n",
+				min_nr_reg, max_nr_reg);
+		return -EINVAL;
+	}
 
 	ctx->sample_interval = sample_int;
 	ctx->aggr_interval = aggr_int;
 	ctx->min_nr_regions = min_nr_reg;
+	ctx->max_nr_regions = max_nr_reg;
 
 	return 0;
 }
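To make the merge arithmetic concrete, consider damon_merge_two_regions()
above with invented numbers: merging a 12288-byte region with
nr_accesses == 10 into an adjacent 4096-byte region with
nr_accesses == 2 keeps a size-weighted access frequency:

    nr_accesses = (10 * 12288 + 2 * 4096) / (12288 + 4096) = 8

so the larger region dominates the merged count, as intended.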
From patchwork Mon Jun 15 16:19:18 2020
X-Patchwork-Submitter: SeongJae Park
X-Patchwork-Id: 11605403
From: SeongJae Park
Subject: [PATCH v16 05/14] mm/damon: Allow dynamic monitoring target regions update
Date: Mon, 15 Jun 2020 18:19:18 +0200
Message-ID: <20200615161927.12637-6-sjpark@amazon.com>
In-Reply-To: <20200615161927.12637-1-sjpark@amazon.com>
References: <20200615161927.12637-1-sjpark@amazon.com>

The monitoring target regions can be dynamically changed.  For example,
virtual memory mappings could be dynamically updated and physical
memory could be hot-plugged.  To handle such cases, this commit adds a
monitoring attribute, ``regions update interval``, and a callback,
``update_target_regions``, in the monitoring context.  If the two
fields are properly set, DAMON will call the ``update_target_regions()``
callback for every ``regions update interval``.  In the callback, users
can check the current memory mapping or the hot-plugged physical memory
sections and appropriately update the monitoring target regions of the
context.

Signed-off-by: SeongJae Park
Reviewed-by: Leonard Foerster
---
 include/linux/damon.h | 20 +++++++++++++++-----
 mm/damon.c            | 23 +++++++++++++++++++++--
 2 files changed, 36 insertions(+), 7 deletions(-)

diff --git a/include/linux/damon.h b/include/linux/damon.h
index 9588bc162c3a..aa14d4e910e5 100644
--- a/include/linux/damon.h
+++ b/include/linux/damon.h
@@ -64,13 +64,16 @@ struct damon_task {
  *
  * @sample_interval:	The time between access samplings.
  * @aggr_interval:	The time between monitor results aggregations.
+ * @regions_update_interval:	The time between monitor regions updates.
  * @min_nr_regions:	The number of initial monitoring regions.
  * @max_nr_regions:	The maximum number of monitoring regions.
  *
  * For each @sample_interval, DAMON checks whether each region is accessed or
  * not.  It aggregates and keeps the access information (number of accesses to
- * each region) for @aggr_interval time.  All time intervals are in
- * micro-seconds.
+ * each region) for @aggr_interval time.  DAMON also checks whether the target
+ * memory regions need update (e.g., by ``mmap()`` calls from the application,
+ * in case of virtual memory monitoring) and applies the changes for each
+ * All time intervals are in micro-seconds.
  *
  * @kdamond:		Kernel thread who does the monitoring.
  * @kdamond_stop:	Notifies whether kdamond should stop.
@@ -88,6 +91,7 @@ struct damon_task {
  * @tasks_list:		Head of monitoring target tasks (&damon_task) list.
  *
  * @init_target_regions:	Constructs initial monitoring target regions.
+ * @update_target_regions:	Updates monitoring target regions.
  * @prepare_access_checks:	Prepares next access check of target regions.
  * @check_accesses:		Checks the access of target regions.
  * @sample_cb:	Called for each sampling interval.
@@ -96,11 +100,14 @@ struct damon_task {
  * DAMON can be extended for various address spaces by users.  For this, users
  * can register the target address space dependent low level functions for
  * their usecases via the callback pointers of the context.  The monitoring
- * thread calls @init_target_regions before starting the monitoring, and
+ * thread calls @init_target_regions before starting the monitoring,
+ * @update_target_regions for each @regions_update_interval, and
  * @prepare_access_checks and @check_accesses for each @sample_interval.
  *
  * @init_target_regions should construct proper monitoring target regions and
  * link those to the DAMON context struct.
+ * @update_target_regions should update the monitoring target regions for
+ * the current status.
  * @prepare_access_checks should manipulate the monitoring regions to be
  * prepare for the next access check.
  * @check_accesses should check the accesses to each region that made after the
@@ -115,10 +122,12 @@ struct damon_task {
 struct damon_ctx {
 	unsigned long sample_interval;
 	unsigned long aggr_interval;
+	unsigned long regions_update_interval;
 	unsigned long min_nr_regions;
 	unsigned long max_nr_regions;
 
 	struct timespec64 last_aggregation;
+	struct timespec64 last_regions_update;
 
 	struct task_struct *kdamond;
 	bool kdamond_stop;
@@ -128,6 +137,7 @@ struct damon_ctx {
 
 	/* callbacks */
 	void (*init_target_regions)(struct damon_ctx *context);
+	void (*update_target_regions)(struct damon_ctx *context);
 	void (*prepare_access_checks)(struct damon_ctx *context);
 	unsigned int (*check_accesses)(struct damon_ctx *context);
 	void (*sample_cb)(struct damon_ctx *context);
@@ -135,8 +145,8 @@ struct damon_ctx {
 };
 
 int damon_set_pids(struct damon_ctx *ctx, int *pids, ssize_t nr_pids);
-int damon_set_attrs(struct damon_ctx *ctx,
-		unsigned long sample_int, unsigned long aggr_int,
+int damon_set_attrs(struct damon_ctx *ctx, unsigned long sample_int,
+		unsigned long aggr_int, unsigned long regions_update_int,
 		unsigned long min_nr_reg, unsigned long max_nr_reg);
 int damon_start(struct damon_ctx *ctx);
 int damon_stop(struct damon_ctx *ctx);

diff --git a/mm/damon.c b/mm/damon.c
index 5cf39b3ad222..a19ec17a35cb 100644
--- a/mm/damon.c
+++ b/mm/damon.c
@@ -385,6 +385,17 @@ static void kdamond_split_regions(struct damon_ctx *ctx)
 	last_nr_regions = nr_regions;
 }
 
+/*
+ * Check whether it is time to update the target monitoring regions
+ *
+ * Returns true if it is.
+ */
+static bool kdamond_need_update_regions(struct damon_ctx *ctx)
+{
+	return damon_check_reset_time_interval(&ctx->last_regions_update,
+			ctx->regions_update_interval);
+}
+
 /*
  * Check whether current monitoring should be stopped
  *
@@ -455,6 +466,12 @@ static int kdamond_fn(void *data)
 			kdamond_reset_aggregated(ctx);
 			kdamond_split_regions(ctx);
 		}
+
+		if (kdamond_need_update_regions(ctx)) {
+			if (ctx->update_target_regions)
+				ctx->update_target_regions(ctx);
+			sz_limit = damon_region_sz_limit(ctx);
+		}
 	}
 	damon_for_each_task(t, ctx) {
 		damon_for_each_region_safe(r, next, t)
@@ -562,6 +579,7 @@ int damon_set_pids(struct damon_ctx *ctx, int *pids, ssize_t nr_pids)
  * damon_set_attrs() - Set attributes for the monitoring.
  * @ctx:		monitoring context
  * @sample_int:		time interval between samplings
  * @aggr_int:		time interval between aggregations
+ * @regions_update_int:	time interval between target regions updates
  * @min_nr_reg:		minimal number of regions
  * @max_nr_reg:		maximum number of regions
@@ -571,8 +589,8 @@ int damon_set_pids(struct damon_ctx *ctx, int *pids, ssize_t nr_pids)
  *
  * Return: 0 on success, negative error code otherwise.
  */
-int damon_set_attrs(struct damon_ctx *ctx,
-		unsigned long sample_int, unsigned long aggr_int,
+int damon_set_attrs(struct damon_ctx *ctx, unsigned long sample_int,
+		unsigned long aggr_int, unsigned long regions_update_int,
 		unsigned long min_nr_reg, unsigned long max_nr_reg)
 {
 	if (min_nr_reg < 3) {
@@ -588,6 +606,7 @@ int damon_set_attrs(struct damon_ctx *ctx,
 
 	ctx->sample_interval = sample_int;
 	ctx->aggr_interval = aggr_int;
+	ctx->regions_update_interval = regions_update_int;
 	ctx->min_nr_regions = min_nr_reg;
 	ctx->max_nr_regions = max_nr_reg;
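To see how the pieces of this patch fit together, below is a hypothetical caller sketch (not part of the series; example_update_fn and the interval values are invented, while damon_set_attrs(), damon_start(), and the update_target_regions field are the interface shown above):

	/* Sketch: enable dynamic region updates once per second. */
	static void example_update_fn(struct damon_ctx *ctx)
	{
		/* e.g., re-check the target mappings and adjust regions */
	}

	static int example_start(struct damon_ctx *ctx)
	{
		/* 5ms sampling, 100ms aggregation, 1s regions update */
		int err = damon_set_attrs(ctx, 5000, 100000, 1000000,
				10, 1000);

		if (err)
			return err;
		ctx->update_target_regions = example_update_fn;
		return damon_start(ctx);
	}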
From patchwork Mon Jun 15 16:19:19 2020
From: SeongJae Park
Subject: [PATCH v16 06/14] mm/damon: Implement callbacks for the virtual memory address spaces
Date: Mon, 15 Jun 2020 18:19:19 +0200
Message-ID: <20200615161927.12637-7-sjpark@amazon.com>
In-Reply-To: <20200615161927.12637-1-sjpark@amazon.com>

This commit implements the four essential callbacks of DAMON,
'->init_target_regions', '->update_target_regions',
'->prepare_access_checks', and '->check_accesses', for virtual memory
address spaces.  These internally use the PTE Accessed bit.  Using the
callbacks, users can easily monitor the data accesses to the virtual
address spaces of specific processes.
Nonetheless, these are just reference implementations.  Users can implement
and use their own callbacks for their special use cases, if required.

Signed-off-by: SeongJae Park
Reviewed-by: Leonard Foerster
---
 include/linux/damon.h |   6 +
 mm/damon.c            | 474 ++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 480 insertions(+)

diff --git a/include/linux/damon.h b/include/linux/damon.h
index aa14d4e910e5..aad30c500964 100644
--- a/include/linux/damon.h
+++ b/include/linux/damon.h
@@ -144,6 +144,12 @@ struct damon_ctx {
 	void (*aggregate_cb)(struct damon_ctx *context);
 };
 
+/* Reference callback implementations for virtual memory */
+void kdamond_init_vm_regions(struct damon_ctx *ctx);
+void kdamond_update_vm_regions(struct damon_ctx *ctx);
+void kdamond_prepare_vm_access_checks(struct damon_ctx *ctx);
+unsigned int kdamond_check_vm_accesses(struct damon_ctx *ctx);
+
 int damon_set_pids(struct damon_ctx *ctx, int *pids, ssize_t nr_pids);
 int damon_set_attrs(struct damon_ctx *ctx, unsigned long sample_int,
 		unsigned long aggr_int, unsigned long regions_update_int,

diff --git a/mm/damon.c b/mm/damon.c
index a19ec17a35cb..973244a531b1 100644
--- a/mm/damon.c
+++ b/mm/damon.c
@@ -9,6 +9,9 @@
  * This file is constructed in below parts.
  *
  * - Functions and macros for DAMON data structures
+ * - Functions for the initial monitoring target regions construction
+ * - Functions for the dynamic monitoring target regions update
+ * - Functions for the access checking of the regions
  * - Functions for DAMON core logics and features
  * - Functions for the DAMON programming interface
  * - Functions for the module loading/unloading
@@ -196,6 +199,477 @@ static unsigned long damon_region_sz_limit(struct damon_ctx *ctx)
 	return sz;
 }
 
+/*
+ * Get the mm_struct of the given task
+ *
+ * Caller _must_ put the mm_struct after use, unless it is NULL.
+ *
+ * Returns the mm_struct of the task on success, NULL on failure
+ */
+static struct mm_struct *damon_get_mm(struct damon_task *t)
+{
+	struct task_struct *task;
+	struct mm_struct *mm;
+
+	task = damon_get_task_struct(t);
+	if (!task)
+		return NULL;
+
+	mm = get_task_mm(task);
+	put_task_struct(task);
+	return mm;
+}
+
+/*
+ * Functions for the initial monitoring target regions construction
+ */
+
+/*
+ * Size-evenly split a region into 'nr_pieces' small regions
+ *
+ * Returns 0 on success, or negative error code otherwise.
+ */
+static int damon_split_region_evenly(struct damon_ctx *ctx,
+		struct damon_region *r, unsigned int nr_pieces)
+{
+	unsigned long sz_orig, sz_piece, orig_end;
+	struct damon_region *n = NULL, *next;
+	unsigned long start;
+
+	if (!r || !nr_pieces)
+		return -EINVAL;
+
+	orig_end = r->ar.end;
+	sz_orig = r->ar.end - r->ar.start;
+	sz_piece = ALIGN_DOWN(sz_orig / nr_pieces, MIN_REGION);
+
+	if (!sz_piece)
+		return -EINVAL;
+
+	r->ar.end = r->ar.start + sz_piece;
+	next = damon_next_region(r);
+	for (start = r->ar.end; start + sz_piece <= orig_end;
+			start += sz_piece) {
+		n = damon_new_region(ctx, start, start + sz_piece);
+		if (!n)
+			return -ENOMEM;
+		damon_insert_region(n, r, next);
+		r = n;
+	}
+	/* complement last region for possible rounding error */
+	if (n)
+		n->ar.end = orig_end;
+
+	return 0;
+}
+
+static unsigned long sz_range(struct damon_addr_range *r)
+{
+	return r->end - r->start;
+}
+
+static void swap_ranges(struct damon_addr_range *r1,
+			struct damon_addr_range *r2)
+{
+	struct damon_addr_range tmp;
+
+	tmp = *r1;
+	*r1 = *r2;
+	*r2 = tmp;
+}
+
+/*
+ * Find three regions separated by the two biggest unmapped regions
+ *
+ * vma		the head vma of the target address space
+ * regions	an array of three address ranges where results will be saved
+ *
+ * This function receives an address space and finds three regions in it
+ * which are separated by the two biggest unmapped regions in the space.
+ * Please refer to the comments of the 'damon_init_vm_regions_of()' function
+ * below to know why this is necessary.
+ *
+ * Returns 0 on success, or negative error code otherwise.
+ */
+static int damon_three_regions_in_vmas(struct vm_area_struct *vma,
+		struct damon_addr_range regions[3])
+{
+	struct damon_addr_range gap = {0}, first_gap = {0}, second_gap = {0};
+	struct vm_area_struct *last_vma = NULL;
+	unsigned long start = 0;
+	struct rb_root rbroot;
+
+	/* Find two biggest gaps so that first_gap > second_gap > others */
+	for (; vma; vma = vma->vm_next) {
+		if (!last_vma) {
+			start = vma->vm_start;
+			goto next;
+		}
+
+		if (vma->rb_subtree_gap <= sz_range(&second_gap)) {
+			rbroot.rb_node = &vma->vm_rb;
+			vma = rb_entry(rb_last(&rbroot),
+					struct vm_area_struct, vm_rb);
+			goto next;
+		}
+
+		gap.start = last_vma->vm_end;
+		gap.end = vma->vm_start;
+		if (sz_range(&gap) > sz_range(&second_gap)) {
+			swap_ranges(&gap, &second_gap);
+			if (sz_range(&second_gap) > sz_range(&first_gap))
+				swap_ranges(&second_gap, &first_gap);
+		}
+next:
+		last_vma = vma;
+	}
+
+	if (!sz_range(&second_gap) || !sz_range(&first_gap))
+		return -EINVAL;
+
+	/* Sort the two biggest gaps by address */
+	if (first_gap.start > second_gap.start)
+		swap_ranges(&first_gap, &second_gap);
+
+	/* Store the result */
+	regions[0].start = ALIGN(start, MIN_REGION);
+	regions[0].end = ALIGN(first_gap.start, MIN_REGION);
+	regions[1].start = ALIGN(first_gap.end, MIN_REGION);
+	regions[1].end = ALIGN(second_gap.start, MIN_REGION);
+	regions[2].start = ALIGN(second_gap.end, MIN_REGION);
+	regions[2].end = ALIGN(last_vma->vm_end, MIN_REGION);
+
+	return 0;
+}
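+
+/*
+ * Illustration with hypothetical gap sizes (added for clarity, not from
+ * actual runs): if the gaps between the mapped areas have sizes 10, 300,
+ * 5, and 150 in address order, the loop above leaves first_gap on the
+ * size-300 gap and second_gap on the size-150 gap, so the three resulting
+ * regions span from the first vma to the size-300 gap, between the two
+ * big gaps, and from the size-150 gap to the end of the last vma.
+ */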
+
+/*
+ * Get the three regions in the given task
+ *
+ * Returns 0 on success, negative error code otherwise.
+ */
+static int damon_three_regions_of(struct damon_task *t,
+				struct damon_addr_range regions[3])
+{
+	struct mm_struct *mm;
+	int rc;
+
+	mm = damon_get_mm(t);
+	if (!mm)
+		return -EINVAL;
+
+	down_read(&mm->mmap_sem);
+	rc = damon_three_regions_in_vmas(mm->mmap, regions);
+	up_read(&mm->mmap_sem);
+
+	mmput(mm);
+	return rc;
+}
+
+/*
+ * Initialize the monitoring target regions for the given task
+ *
+ * t	the given target task
+ *
+ * Because only small portions of the entire address space are actually
+ * mapped to memory and accessed, monitoring the unmapped regions is
+ * wasteful.  At the same time, because we can deal with small noise,
+ * tracking every mapping is not strictly required, and it could even incur
+ * a high overhead if the mappings frequently change or the number of
+ * mappings is high.  The adaptive regions adjustment mechanism will further
+ * help to deal with the noise by simply identifying the unmapped areas as
+ * regions that have no access.  Moreover, applying the real mappings, which
+ * would have many unmapped areas inside, would make the adaptive mechanism
+ * quite complex.  Nevertheless, too huge unmapped areas inside the
+ * monitoring target should be removed so that they do not waste the time of
+ * the adaptive mechanism.
+ *
+ * For this reason, we convert the complex mappings to three distinct
+ * regions that together cover every mapped area of the address space.  The
+ * two gaps between the three regions are the two biggest unmapped areas in
+ * the given address space.  In detail, this function first identifies the
+ * start and the end of the mappings and the two biggest unmapped areas of
+ * the address space.  Then, it constructs the three regions as below:
+ *
+ *     [mappings[0]->start, big_two_unmapped_areas[0]->start)
+ *     [big_two_unmapped_areas[0]->end, big_two_unmapped_areas[1]->start)
+ *     [big_two_unmapped_areas[1]->end, mappings[nr_mappings - 1]->end)
+ *
+ * As the usual memory map of processes is as below, the gap between the
+ * heap and the uppermost mmap()-ed region, and the gap between the
+ * lowermost mmap()-ed region and the stack, will be the two biggest
+ * unmapped regions.  Because these gaps are exceptionally huge areas in a
+ * usual address space, excluding these two biggest unmapped regions will be
+ * sufficient to make a trade-off.
+ *
+ *   <heap>
+ *   <BIG UNMAPPED REGION 1>
+ *   <uppermost mmap()-ed region>
+ *   (other mmap()-ed regions and small unmapped regions)
+ *   <lowermost mmap()-ed region>
+ *   <BIG UNMAPPED REGION 2>
+ *   <stack>
+ */
+static void damon_init_vm_regions_of(struct damon_ctx *c, struct damon_task *t)
+{
+	struct damon_region *r;
+	struct damon_addr_range regions[3];
+	unsigned long sz = 0, nr_pieces;
+	int i;
+
+	if (damon_three_regions_of(t, regions)) {
+		pr_err("Failed to get three regions of task %d\n", t->pid);
+		return;
+	}
+
+	for (i = 0; i < 3; i++)
+		sz += regions[i].end - regions[i].start;
+	if (c->min_nr_regions)
+		sz /= c->min_nr_regions;
+	if (sz < MIN_REGION)
+		sz = MIN_REGION;
+
+	/* Set the initial three regions of the task */
+	for (i = 0; i < 3; i++) {
+		r = damon_new_region(c, regions[i].start, regions[i].end);
+		if (!r) {
+			pr_err("%d'th init region creation failed\n", i);
+			return;
+		}
+		damon_add_region(r, t);
+
+		nr_pieces = (regions[i].end - regions[i].start) / sz;
+		damon_split_region_evenly(c, r, nr_pieces);
+	}
+}
+
+/* Initialize '->regions_list' of every task */
+void kdamond_init_vm_regions(struct damon_ctx *ctx)
+{
+	struct damon_task *t;
+
+	damon_for_each_task(t, ctx) {
+		/* the user may set the target regions as they want */
+		if (!nr_damon_regions(t))
+			damon_init_vm_regions_of(ctx, t);
+	}
+}
+
+/*
+ * Functions for the dynamic monitoring target regions update
+ */
+
+/*
+ * Check whether a region is intersecting an address range
+ *
+ * Returns true if it is.
+ */
+static bool damon_intersect(struct damon_region *r, struct damon_addr_range *re)
+{
+	return !(r->ar.end <= re->start || re->end <= r->ar.start);
+}
+
+/*
+ * Update damon regions for the three big regions of the given task
+ *
+ * t		the given task
+ * bregions	the three big regions of the task
+ */
+static void damon_apply_three_regions(struct damon_ctx *ctx,
+		struct damon_task *t, struct damon_addr_range bregions[3])
+{
+	struct damon_region *r, *next;
+	unsigned int i = 0;
+
+	/* Remove regions which are not in the three big regions now */
+	damon_for_each_region_safe(r, next, t) {
+		for (i = 0; i < 3; i++) {
+			if (damon_intersect(r, &bregions[i]))
+				break;
+		}
+		if (i == 3)
+			damon_destroy_region(r);
+	}
+
+	/* Adjust intersecting regions to fit with the three big regions */
+	for (i = 0; i < 3; i++) {
+		struct damon_region *first = NULL, *last;
+		struct damon_region *newr;
+		struct damon_addr_range *br;
+
+		br = &bregions[i];
+		/* Get the first and last regions which intersect with br */
+		damon_for_each_region(r, t) {
+			if (damon_intersect(r, br)) {
+				if (!first)
+					first = r;
+				last = r;
+			}
+			if (r->ar.start >= br->end)
+				break;
+		}
+		if (!first) {
+			/* no damon_region intersects with this big region */
+			newr = damon_new_region(ctx,
+					ALIGN_DOWN(br->start, MIN_REGION),
+					ALIGN(br->end, MIN_REGION));
+			if (!newr)
+				continue;
+			damon_insert_region(newr, damon_prev_region(r), r);
+		} else {
+			first->ar.start = ALIGN_DOWN(br->start, MIN_REGION);
+			last->ar.end = ALIGN(br->end, MIN_REGION);
+		}
+	}
+}
+
+/*
+ * Update regions for current memory mappings
+ */
+void kdamond_update_vm_regions(struct damon_ctx *ctx)
+{
+	struct damon_addr_range three_regions[3];
+	struct damon_task *t;
+
+	damon_for_each_task(t, ctx) {
+		if (damon_three_regions_of(t, three_regions))
+			continue;
+		damon_apply_three_regions(ctx, t, three_regions);
+	}
+}
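+
+/*
+ * Illustration with hypothetical addresses (added for clarity): if the
+ * three big regions move from [0, 100), [200, 300), [500, 600) to
+ * [0, 80), [210, 320), [520, 600), damon_apply_three_regions() above
+ * first destroys the damon_regions that intersect none of the new big
+ * regions, then stretches the first and last intersecting damon_region
+ * of each big region to its new boundaries, so the access statistics of
+ * still-overlapping address ranges survive the update.
+ */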
+
+/*
+ * Functions for the access checking of the regions
+ */
+
+static void damon_mkold(struct mm_struct *mm, unsigned long addr)
+{
+	pte_t *pte = NULL;
+	pmd_t *pmd = NULL;
+	spinlock_t *ptl;
+
+	if (follow_pte_pmd(mm, addr, NULL, &pte, &pmd, &ptl))
+		return;
+
+	if (pte) {
+		if (pte_young(*pte)) {
+			clear_page_idle(pte_page(*pte));
+			set_page_young(pte_page(*pte));
+		}
+		*pte = pte_mkold(*pte);
+		pte_unmap_unlock(pte, ptl);
+		return;
+	}
+
+#ifdef CONFIG_TRANSPARENT_HUGEPAGE
+	if (pmd_young(*pmd)) {
+		clear_page_idle(pmd_page(*pmd));
+		set_page_young(pmd_page(*pmd));
+	}
+	*pmd = pmd_mkold(*pmd);
+	spin_unlock(ptl);
+#endif	/* CONFIG_TRANSPARENT_HUGEPAGE */
+}
+
+static void damon_prepare_vm_access_check(struct damon_ctx *ctx,
+			struct mm_struct *mm, struct damon_region *r)
+{
+	r->sampling_addr = damon_rand(r->ar.start, r->ar.end);
+
+	damon_mkold(mm, r->sampling_addr);
+}
+
+void kdamond_prepare_vm_access_checks(struct damon_ctx *ctx)
+{
+	struct damon_task *t;
+	struct mm_struct *mm;
+	struct damon_region *r;
+
+	damon_for_each_task(t, ctx) {
+		mm = damon_get_mm(t);
+		if (!mm)
+			continue;
+		damon_for_each_region(r, t)
+			damon_prepare_vm_access_check(ctx, mm, r);
+		mmput(mm);
+	}
+}
+
+static bool damon_young(struct mm_struct *mm, unsigned long addr,
+			unsigned long *page_sz)
+{
+	pte_t *pte = NULL;
+	pmd_t *pmd = NULL;
+	spinlock_t *ptl;
+	bool young = false;
+
+	if (follow_pte_pmd(mm, addr, NULL, &pte, &pmd, &ptl))
+		return false;
+
+	*page_sz = PAGE_SIZE;
+	if (pte) {
+		young = pte_young(*pte);
+		pte_unmap_unlock(pte, ptl);
+		return young;
+	}
+
+#ifdef CONFIG_TRANSPARENT_HUGEPAGE
+	young = pmd_young(*pmd);
+	spin_unlock(ptl);
+	*page_sz = ((1UL) << HPAGE_PMD_SHIFT);
+#endif	/* CONFIG_TRANSPARENT_HUGEPAGE */
+
+	return young;
+}
+
+/*
+ * Check whether the region was accessed after the last preparation
+ *
+ * mm	'mm_struct' for the given virtual address space
+ * r	the region to be checked
+ */
+static void damon_check_vm_access(struct damon_ctx *ctx,
+			struct mm_struct *mm, struct damon_region *r)
+{
+	static struct mm_struct *last_mm;
+	static unsigned long last_addr;
+	static unsigned long last_page_sz = PAGE_SIZE;
+	static bool last_accessed;
+
+	/* If the region is in the last checked page, reuse the result */
+	if (mm == last_mm && (ALIGN_DOWN(last_addr, last_page_sz) ==
+				ALIGN_DOWN(r->sampling_addr, last_page_sz))) {
+		if (last_accessed)
+			r->nr_accesses++;
+		return;
+	}
+
+	last_accessed = damon_young(mm, r->sampling_addr, &last_page_sz);
+	if (last_accessed)
+		r->nr_accesses++;
+
+	last_mm = mm;
+	last_addr = r->sampling_addr;
+}
+
+unsigned int kdamond_check_vm_accesses(struct damon_ctx *ctx)
+{
+	struct damon_task *t;
+	struct mm_struct *mm;
+	struct damon_region *r;
+	unsigned int max_nr_accesses = 0;
+
+	damon_for_each_task(t, ctx) {
+		mm = damon_get_mm(t);
+		if (!mm)
+			continue;
+		damon_for_each_region(r, t) {
+			damon_check_vm_access(ctx, mm, r);
+			max_nr_accesses = max(r->nr_accesses, max_nr_accesses);
+		}
+		mmput(mm);
+	}
+
+	return max_nr_accesses;
+}
+
 /*
  * Functions for DAMON core logics and features
  */
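Putting the reference implementations above to use, a context that monitors virtual address spaces could be wired up roughly as below (a hypothetical sketch; only the callback field and function names come from this patch):

	/* Sketch: plug the vm reference callbacks into a context. */
	static void example_set_vm_callbacks(struct damon_ctx *ctx)
	{
		ctx->init_target_regions = kdamond_init_vm_regions;
		ctx->update_target_regions = kdamond_update_vm_regions;
		ctx->prepare_access_checks = kdamond_prepare_vm_access_checks;
		ctx->check_accesses = kdamond_check_vm_accesses;
	}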
From patchwork Mon Jun 15 16:19:20 2020
From: SeongJae Park
Subject: [PATCH v16 07/14] mm/damon: Implement access pattern recording
Date: Mon, 15 Jun 2020 18:19:20 +0200
Message-ID: <20200615161927.12637-8-sjpark@amazon.com>
In-Reply-To: <20200615161927.12637-1-sjpark@amazon.com>

This commit implements the recording feature of DAMON.  If the feature is
enabled, DAMON writes the monitored access patterns in its binary format
into a file specified by the user.  Each user could already implement the
same thing via the callbacks, but because recording is expected to be
widely used, this commit implements the feature inside DAMON for more
convenience and efficiency.

Signed-off-by: SeongJae Park
Reviewed-by: Leonard Foerster
---
 include/linux/damon.h |  15 +++++
 mm/damon.c            | 130 +++++++++++++++++++++++++++++++++++++++++-
 2 files changed, 142 insertions(+), 3 deletions(-)

diff --git a/include/linux/damon.h b/include/linux/damon.h
index aad30c500964..030f34b5176f 100644
--- a/include/linux/damon.h
+++ b/include/linux/damon.h
@@ -75,6 +75,14 @@ struct damon_task {
  * in case of virtual memory monitoring) and applies the changes for each
  * @regions_update_interval.  All time intervals are in micro-seconds.
  *
+ * @rbuf:		In-memory buffer for monitoring result recording.
+ * @rbuf_len:		The length of @rbuf.
+ * @rbuf_offset:	The offset for next write to @rbuf.
+ * @rfile_path:		Record file path.
+ *
+ * If @rbuf, @rbuf_len, and @rfile_path are set, the monitored results are
+ * automatically stored in the @rfile_path file.
+ *
  * @kdamond:		Kernel thread who does the monitoring.
  * @kdamond_stop:	Notifies whether kdamond should stop.
  * @kdamond_lock:	Mutex for the synchronizations with @kdamond.
@@ -129,6 +137,11 @@ struct damon_ctx {
 	struct timespec64 last_aggregation;
 	struct timespec64 last_regions_update;
 
+	unsigned char *rbuf;
+	unsigned int rbuf_len;
+	unsigned int rbuf_offset;
+	char *rfile_path;
+
 	struct task_struct *kdamond;
 	bool kdamond_stop;
 	struct mutex kdamond_lock;
@@ -154,6 +167,8 @@ int damon_set_pids(struct damon_ctx *ctx, int *pids, ssize_t nr_pids);
 int damon_set_attrs(struct damon_ctx *ctx, unsigned long sample_int,
 		unsigned long aggr_int, unsigned long regions_update_int,
 		unsigned long min_nr_reg, unsigned long max_nr_reg);
+int damon_set_recording(struct damon_ctx *ctx,
+		unsigned int rbuf_len, char *rfile_path);
 int damon_start(struct damon_ctx *ctx);
 int damon_stop(struct damon_ctx *ctx);

diff --git a/mm/damon.c b/mm/damon.c
index 973244a531b1..006bb66c6cf6 100644
--- a/mm/damon.c
+++ b/mm/damon.c
@@ -58,6 +58,9 @@
 #define damon_for_each_task_safe(t, next, ctx) \
 	list_for_each_entry_safe(t, next, &(ctx)->tasks_list, list)
 
+#define MAX_RECORD_BUFFER_LEN	(4 * 1024 * 1024)
+#define MAX_RFILE_PATH_LEN	256
+
 /* Get a random number in [l, r) */
 #define damon_rand(l, r) (l + prandom_u32() % (r - l))
 
@@ -707,16 +710,80 @@ static bool kdamond_aggregate_interval_passed(struct damon_ctx *ctx)
 }
 
 /*
- * Reset the aggregated monitoring results
+ * Flush the content in the result buffer to the result file
+ */
+static void damon_flush_rbuffer(struct damon_ctx *ctx)
+{
+	ssize_t sz;
+	loff_t pos = 0;
+	struct file *rfile;
+
+	rfile = filp_open(ctx->rfile_path, O_CREAT | O_RDWR | O_APPEND, 0644);
+	if (IS_ERR(rfile)) {
+		pr_err("Cannot open the result file %s\n",
+				ctx->rfile_path);
+		return;
+	}
+
+	while (ctx->rbuf_offset) {
+		sz = kernel_write(rfile, ctx->rbuf, ctx->rbuf_offset, &pos);
+		if (sz < 0)
+			break;
+		ctx->rbuf_offset -= sz;
+	}
+	filp_close(rfile, NULL);
+}
+
+/*
+ * Write data into the result buffer
+ */
+static void damon_write_rbuf(struct damon_ctx *ctx, void *data, ssize_t size)
+{
+	if (!ctx->rbuf_len || !ctx->rbuf)
+		return;
+	if (ctx->rbuf_offset + size > ctx->rbuf_len)
+		damon_flush_rbuffer(ctx);
+
+	memcpy(&ctx->rbuf[ctx->rbuf_offset], data, size);
+	ctx->rbuf_offset += size;
+}
+
+/*
+ * Flush the aggregated monitoring results to the result buffer
+ *
+ * Stores current tracking results to the result buffer and resets the
+ * 'nr_accesses' of each region.  The format for the result buffer is as
+ * below:
+ *