From patchwork Tue Feb 4 06:23:02 2020
X-Patchwork-Submitter: SeongJae Park
X-Patchwork-Id: 11364071
From: sj38.park@gmail.com
To: akpm@linux-foundation.org
Cc: SeongJae Park <sjpark@amazon.de>, acme@kernel.org,
 alexander.shishkin@linux.intel.com, amit@kernel.org,
 brendan.d.gregg@gmail.com, brendanhiggins@google.com, cai@lca.pw,
 colin.king@canonical.com, corbet@lwn.net, dwmw@amazon.com,
 jolsa@redhat.com, kirill@shutemov.name, mark.rutland@arm.com,
 mgorman@suse.de, minchan@kernel.org, mingo@redhat.com,
 namhyung@kernel.org, peterz@infradead.org, rdunlap@infradead.org,
 rostedt@goodmis.org, sj38.park@gmail.com, vdavydov.dev@gmail.com,
 linux-mm@kvack.org, linux-doc@vger.kernel.org,
 linux-kernel@vger.kernel.org
Subject: [PATCH v3 01/11] Introduce Data Access MONitor (DAMON)
Date: Tue, 4 Feb 2020 06:23:02 +0000
Message-Id: <20200204062312.19913-2-sj38.park@gmail.com>
In-Reply-To: <20200204062312.19913-1-sj38.park@gmail.com>
References: <20200204062312.19913-1-sj38.park@gmail.com>

From: SeongJae Park <sjpark@amazon.de>

This commit introduces a kernel module named DAMON. Note that this
commit implements only the stub for module load/unload, the basic data
structures, and simple manipulation functions for those structures, to
keep the size of the commit small. The core mechanisms of DAMON will
be implemented one by one in the following commits.

Brief Introduction
==================

Memory management decisions can normally be more efficient if finer
data access information is available. However, because finer
information usually comes with higher overhead, most systems,
including Linux, have made a tradeoff: give up some otherwise-possible
smart decisions and rely on coarse-grained information and/or
lightweight heuristics.

A number of experimental, data-access-pattern-aware memory management
optimizations suggest that the cost of this tradeoff is significant.
However, none of them has been successfully merged into the mainline
Linux kernel, mainly due to the absence of a scalable and efficient
data access monitoring mechanism.

DAMON is a data access monitoring solution for this problem.
It is 1) accurate enough for DRAM-level memory management, 2)
lightweight enough to be applied online, and 3) keeps its overhead
under a predefined upper bound regardless of the size of the target
workloads (thus scalable).

DAMON is implemented as a standalone kernel module and provides
several simple interfaces. Owing to that, though it is mainly designed
for the kernel's memory management mechanisms, it can also be used by
a wide range of user space programs and users.

Frequently Asked Questions
==========================

Q: Why is this not integrated with perf?
A: From the perspective of perf-like profilers, DAMON can be thought
of as an in-kernel data source, like tracepoints, pressure stall
information (psi), or idle page tracking. Thus, it can easily be
integrated with those tools. However, this patchset doesn't provide a
fancy perf integration because the current stage of DAMON development
focuses on its core logic only. That said, DAMON already provides two
interfaces for user space programs, based on debugfs and a tracepoint,
respectively. Using the tracepoint interface, you can use DAMON with
perf. This patchset also provides a user space tool built on the
debugfs interface. It can be used to record, visualize, and analyze
the data access patterns of target processes in a convenient way.

Q: Why a new module, instead of extending perf or other tools?
A: First, DAMON aims to be used by other programs, including the
kernel itself. Therefore, a dependency on specific tools like perf is
not desirable. Second, because it needs to be as lightweight as
possible so that it can be used online, any unnecessary overhead, such
as the cost of kernel/user space context switching, should be avoided.
These are the two biggest reasons why DAMON is implemented in the
kernel space.

The idle page tracking subsystem is perhaps the existing kernel
feature most similar to DAMON. However, its interface is not
compatible with DAMON's, and its internal implementation has no common
parts that DAMON could reuse.

Q: Can 'perf mem' provide the data required for DAMON?
A: On systems supporting 'perf mem', yes. At the lowest level, DAMON
uses the Accessed bits of PTEs; other H/W or S/W features usable for
the same purpose could be adopted instead. However, as explained in
the answer to the previous question, DAMON needs to be implemented in
the kernel space.

Signed-off-by: SeongJae Park <sjpark@amazon.de>
---
 mm/Kconfig  |  12 +++
 mm/Makefile |   1 +
 mm/damon.c  | 226 ++++++++++++++++++++++++++++++++++++++++++++++++++++
 3 files changed, 239 insertions(+)
 create mode 100644 mm/damon.c

diff --git a/mm/Kconfig b/mm/Kconfig
index ab80933be65f..387d469f40ec 100644
--- a/mm/Kconfig
+++ b/mm/Kconfig
@@ -739,4 +739,16 @@ config ARCH_HAS_HUGEPD
 config MAPPING_DIRTY_HELPERS
 	bool
 
+config DAMON
+	tristate "Data Access Monitor"
+	depends on MMU
+	default n
+	help
+	  Provides data access monitoring.
+
+	  DAMON is a kernel module that allows users to monitor the actual
+	  memory access pattern of specific user-space processes. It aims to
+	  be 1) accurate enough to be useful for performance-centric domains,
+	  and 2) sufficiently light-weight so that it can be applied online.
+
 endmenu
diff --git a/mm/Makefile b/mm/Makefile
index 1937cc251883..2911b3832c90 100644
--- a/mm/Makefile
+++ b/mm/Makefile
@@ -108,3 +108,4 @@ obj-$(CONFIG_ZONE_DEVICE) += memremap.o
 obj-$(CONFIG_HMM_MIRROR) += hmm.o
 obj-$(CONFIG_MEMFD_CREATE) += memfd.o
 obj-$(CONFIG_MAPPING_DIRTY_HELPERS) += mapping_dirty_helpers.o
+obj-$(CONFIG_DAMON) += damon.o
diff --git a/mm/damon.c b/mm/damon.c
new file mode 100644
index 000000000000..0687d2b83bb6
--- /dev/null
+++ b/mm/damon.c
@@ -0,0 +1,226 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Data Access Monitor
+ *
+ * Copyright 2019 Amazon.com, Inc. or its affiliates. All rights reserved.
+ *
+ * Author: SeongJae Park <sjpark@amazon.de>
+ */
+
+#define pr_fmt(fmt) "damon: " fmt
+
+#include <linux/mm.h>
+#include <linux/module.h>
+#include <linux/random.h>
+#include <linux/slab.h>
+
+#define damon_get_task_struct(t) \
+	(get_pid_task(find_vpid(t->pid), PIDTYPE_PID))
+
+#define damon_next_region(r) \
+	(container_of(r->list.next, struct damon_region, list))
+
+#define damon_prev_region(r) \
+	(container_of(r->list.prev, struct damon_region, list))
+
+#define damon_for_each_region(r, t) \
+	list_for_each_entry(r, &t->regions_list, list)
+
+#define damon_for_each_region_safe(r, next, t) \
+	list_for_each_entry_safe(r, next, &t->regions_list, list)
+
+#define damon_for_each_task(ctx, t) \
+	list_for_each_entry(t, &(ctx)->tasks_list, list)
+
+#define damon_for_each_task_safe(ctx, t, next) \
+	list_for_each_entry_safe(t, next, &(ctx)->tasks_list, list)
+
+/* Represents a monitoring target region on the virtual address space */
+struct damon_region {
+	unsigned long vm_start;
+	unsigned long vm_end;
+	unsigned long sampling_addr;
+	unsigned int nr_accesses;
+	struct list_head list;
+};
+
+/* Represents a monitoring target task */
+struct damon_task {
+	unsigned long pid;
+	struct list_head regions_list;
+	struct list_head list;
+};
+
+struct damon_ctx {
+	struct rnd_state rndseed;
+
+	struct list_head tasks_list;	/* 'damon_task' objects */
+};
+
+#define LEN_RES_FILE_PATH	256
+
+/* Get a random number in [l, r) */
+#define damon_rand(ctx, l, r) \
+	((l) + prandom_u32_state(&(ctx)->rndseed) % ((r) - (l)))
+
+/*
+ * Construct a damon_region struct
+ *
+ * Returns the pointer to the new struct on success, or NULL otherwise
+ */
+static struct damon_region *damon_new_region(struct damon_ctx *ctx,
+		unsigned long vm_start, unsigned long vm_end)
+{
+	struct damon_region *ret;
+
+	ret = kmalloc(sizeof(struct damon_region), GFP_KERNEL);
+	if (!ret)
+		return NULL;
+	ret->vm_start = vm_start;
+	ret->vm_end = vm_end;
+	ret->nr_accesses = 0;
+	ret->sampling_addr = damon_rand(ctx, vm_start, vm_end);
+	INIT_LIST_HEAD(&ret->list);
+
+	return ret;
+}
+
+/*
+ * Add a region between two other regions
+ */
+static inline void damon_add_region(struct damon_region *r,
+		struct damon_region *prev, struct damon_region *next)
+{
+	__list_add(&r->list, &prev->list, &next->list);
+}
+
+/*
+ * Append a region to a task's list of regions
+ */
+static void damon_add_region_tail(struct damon_region *r, struct damon_task *t)
+{
+	list_add_tail(&r->list, &t->regions_list);
+}
+
+/*
+ * Delete a region from its list
+ */
+static void damon_del_region(struct damon_region *r)
+{
+	list_del(&r->list);
+}
+
+/*
+ * De-allocate a region
+ */
+static void damon_free_region(struct damon_region *r)
+{
+	kfree(r);
+}
+
+static void damon_destroy_region(struct damon_region *r)
+{
+	damon_del_region(r);
+	damon_free_region(r);
+}
+
+/*
+ * Construct a damon_task struct
+ *
+ * Returns the pointer to the new struct on success, or NULL otherwise
+ */
+static struct damon_task *damon_new_task(unsigned long pid)
+{
+	struct damon_task *t;
+
+	t = kmalloc(sizeof(struct damon_task), GFP_KERNEL);
+	if (!t)
+		return NULL;
+	t->pid = pid;
+	INIT_LIST_HEAD(&t->regions_list);
+
+	return t;
+}
+
+/* Returns the n-th damon_region of the given task */
+struct damon_region *damon_nth_region_of(struct damon_task *t, unsigned int n)
+{
+	struct damon_region *r;
+	unsigned int i;
+
+	i = 0;
+	damon_for_each_region(r, t) {
+		if (i++ == n)
+			return r;
+	}
+	return NULL;
+}
+
+static void damon_add_task_tail(struct damon_ctx *ctx, struct damon_task *t)
+{
+	list_add_tail(&t->list, &ctx->tasks_list);
+}
+
+static void damon_del_task(struct damon_task *t)
+{
+	list_del(&t->list);
+}
+
+static void damon_free_task(struct damon_task *t)
+{
+	struct damon_region *r, *next;
+
+	damon_for_each_region_safe(r, next, t)
+		damon_free_region(r);
+	kfree(t);
+}
+
+static void damon_destroy_task(struct damon_task *t)
+{
+	damon_del_task(t);
+	damon_free_task(t);
+}
+
+/*
+ * Returns the number of monitoring target tasks
+ */
+static unsigned int nr_damon_tasks(struct damon_ctx *ctx)
+{
+	struct damon_task *t;
+	unsigned int ret = 0;
+
+	damon_for_each_task(ctx, t)
+		ret++;
+	return ret;
+}
+
+/*
+ * Returns the number of target regions for a given target task
+ */
+static unsigned int nr_damon_regions(struct damon_task *t)
+{
+	struct damon_region *r;
+	unsigned int ret = 0;
+
+	damon_for_each_region(r, t)
+		ret++;
+	return ret;
+}
+
+static int __init damon_init(void)
+{
+	pr_info("init\n");
+
+	return 0;
+}
+
+static void __exit damon_exit(void)
+{
+	pr_info("exit\n");
+}
+
+module_init(damon_init);
+module_exit(damon_exit);
+
+MODULE_LICENSE("GPL");
+MODULE_AUTHOR("SeongJae Park <sjpark@amazon.de>");
+MODULE_DESCRIPTION("DAMON: Data Access MONitor");
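
For a quick sense of how the structures and helpers above compose,
here is a minimal, illustrative sketch; it is not part of the patch.
The PID and address range are made-up values, and it assumes a 'ctx'
whose 'tasks_list' and 'rndseed' were already initialized (e.g., via
INIT_LIST_HEAD() and prandom_seed_state()), since this commit does not
yet provide a context constructor:

	/* Illustrative only: monitor one task with a single region */
	static int damon_example(struct damon_ctx *ctx)
	{
		struct damon_task *t;
		struct damon_region *r;

		/* hypothetical target: PID 1234 */
		t = damon_new_task(1234);
		if (!t)
			return -ENOMEM;
		damon_add_task_tail(ctx, t);

		/* hypothetical 2MiB mapping to monitor */
		r = damon_new_region(ctx, 0x7f0000000000, 0x7f0000200000);
		if (!r) {
			damon_destroy_task(t);
			return -ENOMEM;
		}
		damon_add_region_tail(r, t);

		/* nr_damon_tasks(ctx) == 1, nr_damon_regions(t) == 1 here */

		/* destroying the task also frees its regions */
		damon_destroy_task(t);
		return 0;
	}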
From patchwork Tue Feb 4 06:23:03 2020
X-Patchwork-Submitter: SeongJae Park
X-Patchwork-Id: 11364073

From: sj38.park@gmail.com
To: akpm@linux-foundation.org
Cc: SeongJae Park <sjpark@amazon.de>, acme@kernel.org,
 alexander.shishkin@linux.intel.com, amit@kernel.org,
 brendan.d.gregg@gmail.com, brendanhiggins@google.com, cai@lca.pw,
 colin.king@canonical.com, corbet@lwn.net, dwmw@amazon.com,
 jolsa@redhat.com, kirill@shutemov.name, mark.rutland@arm.com,
 mgorman@suse.de, minchan@kernel.org, mingo@redhat.com,
 namhyung@kernel.org, peterz@infradead.org, rdunlap@infradead.org,
 rostedt@goodmis.org,
 sj38.park@gmail.com, vdavydov.dev@gmail.com, linux-mm@kvack.org,
 linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: [PATCH v3 02/11] mm/damon: Implement region based sampling
Date: Tue, 4 Feb 2020 06:23:03 +0000
Message-Id: <20200204062312.19913-3-sj38.park@gmail.com>
In-Reply-To: <20200204062312.19913-1-sj38.park@gmail.com>
References: <20200204062312.19913-1-sj38.park@gmail.com>

From: SeongJae Park <sjpark@amazon.de>

This commit implements DAMON's basic access check and region based
sampling mechanisms. On its own, this change might seem to make little
sense, as it implements only a part of DAMON's logic. The following
two commits will make more sense of it.

Basic Access Check
------------------

DAMON basically reports which pages are accessed how frequently. Note
that the frequency is not an absolute number of accesses, but a
relative frequency among the pages of the target workloads.

Users can control the resolution of the reports by setting two time
intervals, the ``sampling interval`` and the ``aggregation interval``.
In detail, DAMON checks accesses to each page once per ``sampling
interval``, aggregates the results (counts the number of accesses to
each page), and reports the aggregated results once per ``aggregation
interval``.

For the access check of each page, DAMON uses the Accessed bits of
PTEs. It is thus similar to common periodic access-check based access
tracking mechanisms, whose overhead increases as the size of the
target process grows.

Region Based Sampling
---------------------

To avoid an unbounded increase of the overhead, DAMON groups a number
of adjacent pages that are assumed to have the same access frequency
into a region. As long as the assumption (pages in a region have the
same access frequency) holds, only one page in the region needs to be
checked. Thus, for each ``sampling interval``, DAMON randomly picks
one page in each region and clears its Accessed bit. After one more
``sampling interval``, DAMON reads the Accessed bit of the page and
increases the access frequency of the region if the bit has been set
in the meantime. Therefore, the monitoring overhead is controllable by
setting the number of regions: the per-sample work is proportional to
the number of regions, not to the size of the target workloads.

Nonetheless, this scheme cannot preserve the quality of the output if
the assumption does not hold. A following commit will introduce how we
can keep this assumption with best effort.
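
In pseudocode, the check cycle described above amounts to the
following sketch. This is illustrative only and not part of the patch;
rand_in(), clear_accessed_bit(), accessed_bit_is_set(), and
wait_sampling_interval() are placeholder names for what the real code
implements with damon_rand() and the PTE Accessed bit helpers:

	/* Illustrative-only pseudocode of one sampling cycle for a region */
	static void sample_region(struct damon_region *r)
	{
		/* one randomly picked page represents the whole region */
		r->sampling_addr = rand_in(r->vm_start, r->vm_end);
		clear_accessed_bit(r->sampling_addr);

		wait_sampling_interval();

		/* if the page was accessed meanwhile, count one access
		 * for the whole region */
		if (accessed_bit_is_set(r->sampling_addr))
			r->nr_accesses++;
	}

For example, with 1,000 regions, each sampling interval costs at most
1,000 page-table checks, whether the workload maps gigabytes or
terabytes.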

Signed-off-by: SeongJae Park <sjpark@amazon.de>
---
 mm/damon.c | 642 +++++++++++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 642 insertions(+)

diff --git a/mm/damon.c b/mm/damon.c
index 0687d2b83bb6..5a98c1365ee9 100644
--- a/mm/damon.c
+++ b/mm/damon.c
@@ -9,9 +9,14 @@
 
 #define pr_fmt(fmt) "damon: " fmt
 
+#include <linux/delay.h>
+#include <linux/kthread.h>
 #include <linux/mm.h>
 #include <linux/module.h>
+#include <linux/page_idle.h>
 #include <linux/random.h>
+#include <linux/sched/mm.h>
+#include <linux/sched/task.h>
 #include <linux/slab.h>
 
 #define damon_get_task_struct(t) \
@@ -51,7 +56,30 @@ struct damon_task {
 	struct list_head list;
 };
 
+/*
+ * For each 'sample_interval', DAMON checks whether each region is accessed
+ * or not. It aggregates and keeps the access information (number of
+ * accesses to each region) for 'aggr_interval' and then flushes it to the
+ * result buffer once an 'aggr_interval' has passed.
+ *
+ * All time intervals are in micro-seconds.
+ */
 struct damon_ctx {
+	unsigned long sample_interval;
+	unsigned long aggr_interval;
+	unsigned long min_nr_regions;
+
+	struct timespec64 last_aggregation;
+
+	unsigned char *rbuf;
+	unsigned int rbuf_len;
+	unsigned int rbuf_offset;
+	char *rfile_path;
+
+	struct task_struct *kdamond;
+	bool kdamond_stop;
+	spinlock_t kdamond_lock;
+
 	struct rnd_state rndseed;
 
 	struct list_head tasks_list;	/* 'damon_task' objects */
@@ -206,6 +234,620 @@ static unsigned int nr_damon_regions(struct damon_task *t)
 	return ret;
 }
 
+/*
+ * Get the mm_struct of the given task
+ *
+ * Caller should put the mm_struct after use, unless it is NULL.
+ *
+ * Returns the mm_struct of the task on success, NULL on failure
+ */
+static struct mm_struct *damon_get_mm(struct damon_task *t)
+{
+	struct task_struct *task;
+	struct mm_struct *mm;
+
+	task = damon_get_task_struct(t);
+	if (!task)
+		return NULL;
+
+	mm = get_task_mm(task);
+	put_task_struct(task);
+	return mm;
+}
+
+/*
+ * Size-evenly split a region into 'nr_pieces' small regions
+ *
+ * Returns 0 on success, or negative error code otherwise.
+ */
+static int damon_split_region_evenly(struct damon_ctx *ctx,
+		struct damon_region *r, unsigned int nr_pieces)
+{
+	unsigned long sz_orig, sz_piece, orig_end;
+	struct damon_region *piece = NULL, *next;
+	unsigned long start;
+
+	if (!r || !nr_pieces)
+		return -EINVAL;
+
+	orig_end = r->vm_end;
+	sz_orig = r->vm_end - r->vm_start;
+	sz_piece = sz_orig / nr_pieces;
+
+	if (!sz_piece)
+		return -EINVAL;
+
+	r->vm_end = r->vm_start + sz_piece;
+	next = damon_next_region(r);
+	for (start = r->vm_end; start + sz_piece <= orig_end;
+			start += sz_piece) {
+		piece = damon_new_region(ctx, start, start + sz_piece);
+		damon_add_region(piece, r, next);
+		r = piece;
+	}
+	/* the last piece absorbs any remainder of the integer division */
+	if (piece)
+		piece->vm_end = orig_end;
+	return 0;
+}
+
+struct region {
+	unsigned long start;
+	unsigned long end;
+};
+
+static unsigned long sz_region(struct region *r)
+{
+	return r->end - r->start;
+}
+
+static void swap_regions(struct region *r1, struct region *r2)
+{
+	struct region tmp;
+
+	tmp = *r1;
+	*r1 = *r2;
+	*r2 = tmp;
+}
+
+/*
+ * Find the three regions in an address space
+ *
+ * vma		the head vma of the target address space
+ * regions	an array of three 'struct region's where the results are saved
+ *
+ * This function receives an address space and finds three regions in it
+ * which are separated by the two biggest unmapped areas in the space.
+ * Please refer to the comments of 'damon_init_regions_of()' below for why
+ * this is necessary.
+ *
+ * Returns 0 on success, or negative error code otherwise.
+ */
+static int damon_three_regions_in_vmas(struct vm_area_struct *vma,
+		struct region regions[3])
+{
+	struct region gap = {0,}, first_gap = {0,}, second_gap = {0,};
+	struct vm_area_struct *last_vma = NULL;
+	unsigned long start = 0;
+
+	/* Find two biggest gaps so that first_gap > second_gap > others */
+	for (; vma; vma = vma->vm_next) {
+		if (!last_vma) {
+			start = vma->vm_start;
+			last_vma = vma;
+			continue;
+		}
+		gap.start = last_vma->vm_end;
+		gap.end = vma->vm_start;
+		if (sz_region(&gap) > sz_region(&second_gap)) {
+			swap_regions(&gap, &second_gap);
+			if (sz_region(&second_gap) > sz_region(&first_gap))
+				swap_regions(&second_gap, &first_gap);
+		}
+		last_vma = vma;
+	}
+
+	if (!sz_region(&second_gap) || !sz_region(&first_gap))
+		return -EINVAL;
+
+	/* Sort the two biggest gaps by address */
+	if (first_gap.start > second_gap.start)
+		swap_regions(&first_gap, &second_gap);
+
+	/* Store the result */
+	regions[0].start = start;
+	regions[0].end = first_gap.start;
+	regions[1].start = first_gap.end;
+	regions[1].end = second_gap.start;
+	regions[2].start = second_gap.end;
+	regions[2].end = last_vma->vm_end;
+
+	return 0;
+}
+
+/*
+ * Get the three regions in the given task
+ *
+ * Returns 0 on success, or negative error code otherwise.
+ */
+static int damon_three_regions_of(struct damon_task *t,
+		struct region regions[3])
+{
+	struct mm_struct *mm;
+	int ret;
+
+	mm = damon_get_mm(t);
+	if (!mm)
+		return -EINVAL;
+
+	down_read(&mm->mmap_sem);
+	ret = damon_three_regions_in_vmas(mm->mmap, regions);
+	up_read(&mm->mmap_sem);
+
+	mmput(mm);
+	return ret;
+}
+
+/*
+ * Initialize the monitoring target regions for the given task
+ *
+ * t	the given target task
+ *
+ * Because usually only a small number of portions of the entire address
+ * space are actually mapped to memory and accessed, monitoring the
+ * unmapped regions is wasteful. On the other hand, because we can tolerate
+ * small noise, tracking every mapping is not strictly required; it could
+ * even incur high overhead if the mappings frequently change or the number
+ * of mappings is high. This may look odd on its own; DAMON's dynamic
+ * region adjustment mechanism, which will be implemented in a following
+ * commit, will make more sense of it.
+ *
+ * For these reasons, we convert the complex mappings to three distinct
+ * regions that together cover every mapped area of the address space. The
+ * two gaps between the three regions are the two biggest unmapped areas in
+ * the given address space. In detail, this function first identifies the
+ * start and the end of the mappings and the two biggest unmapped areas of
+ * the address space. Then, it constructs the three regions as below:
+ *
+ *     [mappings[0]->start, big_two_unmapped_areas[0]->start)
+ *     [big_two_unmapped_areas[0]->end, big_two_unmapped_areas[1]->start)
+ *     [big_two_unmapped_areas[1]->end, mappings[nr_mappings - 1]->end)
+ *
+ * As the usual memory map of a process is as below, the gap between the
+ * heap and the uppermost mmap()-ed region, and the gap between the
+ * lowermost mmap()-ed region and the stack, will be the two biggest
+ * unmapped regions. Because these gaps are exceptionally huge areas in a
+ * usual address space, excluding only these two biggest unmapped regions
+ * is a sufficient trade-off.
+ *
+ *     <heap>
+ *     <BIG UNMAPPED REGION 1>
+ *     <uppermost mmap()-ed region>
+ *     (other mmap()-ed regions and small unmapped regions)
+ *     <lowermost mmap()-ed region>
+ *     <BIG UNMAPPED REGION 2>
+ *     <stack>
+ */
+static void damon_init_regions_of(struct damon_ctx *c, struct damon_task *t)
+{
+	struct damon_region *r;
+	struct region regions[3];
+	int i;
+
+	if (damon_three_regions_of(t, regions)) {
+		pr_err("Failed to get three regions of task %lu\n", t->pid);
+		return;
+	}
+
+	/* Set the initial three regions of the task */
+	for (i = 0; i < 3; i++) {
+		r = damon_new_region(c, regions[i].start, regions[i].end);
+		damon_add_region_tail(r, t);
+	}
+
+	/* Split the middle region into 'min_nr_regions - 2' regions */
+	r = damon_nth_region_of(t, 1);
+	if (damon_split_region_evenly(c, r, c->min_nr_regions - 2))
+		pr_warn("Init middle region failed to be split\n");
+}
+
+/* Initialize '->regions_list' of every task */
+static void kdamond_init_regions(struct damon_ctx *ctx)
+{
+	struct damon_task *t;
+
+	damon_for_each_task(ctx, t)
+		damon_init_regions_of(ctx, t);
+}
+
+/*
+ * Check whether the given region has been accessed since the last check
+ *
+ * mm	'mm_struct' for the given virtual address space
+ * r	the region to be checked
+ */
+static void kdamond_check_access(struct damon_ctx *ctx,
+		struct mm_struct *mm, struct damon_region *r)
+{
+	pte_t *pte = NULL;
+	pmd_t *pmd = NULL;
+	spinlock_t *ptl;
+
+	if (follow_pte_pmd(mm, r->sampling_addr, NULL, &pte, &pmd, &ptl))
+		goto mkold;
+
+	/* Read the page table Accessed bit of the page */
+	if (pte && pte_young(*pte))
+		r->nr_accesses++;
+#ifdef CONFIG_TRANSPARENT_HUGEPAGE
+	else if (pmd && pmd_young(*pmd))
+		r->nr_accesses++;
+#endif	/* CONFIG_TRANSPARENT_HUGEPAGE */
+
+	spin_unlock(ptl);
+
+mkold:
+	/* mkold next target */
+	r->sampling_addr = damon_rand(ctx, r->vm_start, r->vm_end);
+
+	if (follow_pte_pmd(mm, r->sampling_addr, NULL, &pte, &pmd, &ptl))
+		return;
+
+	if (pte) {
+		if (pte_young(*pte)) {
+			clear_page_idle(pte_page(*pte));
+			set_page_young(pte_page(*pte));
+		}
+		*pte = pte_mkold(*pte);
+	}
+#ifdef CONFIG_TRANSPARENT_HUGEPAGE
+	else if (pmd) {
+		if (pmd_young(*pmd)) {
+			clear_page_idle(pmd_page(*pmd));
+			set_page_young(pmd_page(*pmd));
+		}
+		*pmd = pmd_mkold(*pmd);
+	}
+#endif
+
+	spin_unlock(ptl);
+}
+
+/*
+ * Check whether a time interval has elapsed
+ *
+ * baseline	the time to check whether the interval has elapsed since
+ * interval	the time interval (microseconds)
+ *
+ * See whether the given time interval has passed since the given baseline
+ * time. If so, it also updates the baseline to the current time for the
+ * next check.
+ *
+ * Returns true if the time interval has passed, or false otherwise.
+ */
+static bool damon_check_reset_time_interval(struct timespec64 *baseline,
+		unsigned long interval)
+{
+	struct timespec64 now;
+
+	ktime_get_coarse_ts64(&now);
+	if ((timespec64_to_ns(&now) - timespec64_to_ns(baseline)) / 1000 <
+			interval)
+		return false;
+	*baseline = now;
+	return true;
+}
+
+/*
+ * Check whether it is time to flush the aggregated information
+ */
+static bool kdamond_aggregate_interval_passed(struct damon_ctx *ctx)
+{
+	return damon_check_reset_time_interval(&ctx->last_aggregation,
+			ctx->aggr_interval);
+}
+
+/*
+ * Flush the content in the result buffer to the result file
+ */
+static void damon_flush_rbuffer(struct damon_ctx *ctx)
+{
+	ssize_t sz;
+	loff_t pos;
+	struct file *rfile;
+
+	while (ctx->rbuf_offset) {
+		pos = 0;
+		rfile = filp_open(ctx->rfile_path, O_CREAT | O_RDWR | O_APPEND,
+				0644);
+		if (IS_ERR(rfile)) {
+			pr_err("Cannot open the result file %s\n",
+					ctx->rfile_path);
+			return;
+		}
+
+		sz = kernel_write(rfile, ctx->rbuf, ctx->rbuf_offset, &pos);
+		filp_close(rfile, NULL);
+
+		ctx->rbuf_offset -= sz;
+	}
+}
+
+/*
+ * Write data into the result buffer
+ */
+static void damon_write_rbuf(struct damon_ctx *ctx, void *data, ssize_t size)
+{
+	if (!ctx->rbuf_len || !ctx->rbuf)
+		return;
+	if (ctx->rbuf_offset + size > ctx->rbuf_len)
+		damon_flush_rbuffer(ctx);
+
+	memcpy(&ctx->rbuf[ctx->rbuf_offset], data, size);
+	ctx->rbuf_offset += size;
+}
+
+/*
+ * Flush the aggregated monitoring results to the result buffer
+ *
+ * Stores the current tracking results to the result buffer and resets
+ * 'nr_accesses' of each region. The format for the result buffer is as
+ * below:
+ *