From patchwork Sun Sep 10 03:40:44 2023
X-Patchwork-Submitter: SeongJae Park <sj@kernel.org>
X-Patchwork-Id: 13378442
From: SeongJae Park <sj@kernel.org>
Cc: SeongJae Park <sj@kernel.org>, Andrew Morton, damon@lists.linux.dev,
	linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: [RFC 4/8] mm/damon/core: implement scheme-specific apply interval
Date: Sun, 10 Sep 2023 03:40:44 +0000
Message-Id: <20230910034048.59191-5-sj@kernel.org>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20230910034048.59191-1-sj@kernel.org>
References: <20230910034048.59191-1-sj@kernel.org>
MIME-Version: 1.0
DAMON-based operation schemes are applied at every aggregation interval.
That was mainly because schemes were using nr_accesses, which is complete
only at every aggregation interval.  However, the schemes are now using
nr_accesses_bp, which is updated at each sampling interval in a way that
makes it reasonable to use.  Therefore, there is no reason to apply
schemes only at each aggregation interval.

The unnecessary alignment with the aggregation interval was also making
some DAMOS use cases tricky.  Quota setting under a long aggregation
interval is one such example.  Suppose the aggregation interval is ten
seconds, and there is a scheme having a CPU quota of 100ms per 1s.  The
scheme will actually use 100ms per ten seconds, since it cannot be
applied before the next aggregation interval.  The feature is working as
intended, but the results might not be that intuitive for some users.
This could be fixed by updating the quota to 1s per 10s.  But in that
case, the CPU usage of DAMOS could look like spikes, and could actually
have a bad effect on other CPU-sensitive workloads.

Implement a dedicated timing interval for each DAMON-based operation
scheme, namely apply_interval.  The interval will be aligned to the
sampling interval, and each scheme will be applied once per its
apply_interval.  The interval is set to 0 by default, meaning the scheme
should use the aggregation interval instead.  This way, existing users
see no behavioral difference.
Signed-off-by: SeongJae Park <sj@kernel.org>
---
 include/linux/damon.h    | 17 +++++++--
 mm/damon/core.c          | 75 ++++++++++++++++++++++++++++++++++++----
 mm/damon/dbgfs.c         |  3 +-
 mm/damon/lru_sort.c      |  2 ++
 mm/damon/reclaim.c       |  2 ++
 mm/damon/sysfs-schemes.c |  2 +-
 6 files changed, 91 insertions(+), 10 deletions(-)

diff --git a/include/linux/damon.h b/include/linux/damon.h
index 491fdd3e4c76..27b995c22497 100644
--- a/include/linux/damon.h
+++ b/include/linux/damon.h
@@ -314,16 +314,19 @@ struct damos_access_pattern {
  * struct damos - Represents a Data Access Monitoring-based Operation Scheme.
  * @pattern:		Access pattern of target regions.
  * @action:		&damo_action to be applied to the target regions.
+ * @apply_interval_us:	The time between applying the @action.
  * @quota:		Control the aggressiveness of this scheme.
  * @wmarks:		Watermarks for automated (in)activation of this scheme.
  * @filters:		Additional set of &struct damos_filter for &action.
  * @stat:		Statistics of this scheme.
  * @list:		List head for siblings.
  *
- * For each aggregation interval, DAMON finds regions which fit in the
+ * For each @apply_interval_us, DAMON finds regions which fit in the
  * &pattern and applies &action to those. To avoid consuming too much
  * CPU time or IO resources for the &action, &quota is used.
  *
+ * If @apply_interval_us is zero, &damon_attrs->aggr_interval is used instead.
+ *
  * To do the work only when needed, schemes can be activated for specific
  * system situations using &wmarks. If all schemes that registered to the
  * monitoring context are inactive, DAMON stops monitoring either, and just
@@ -340,6 +343,14 @@ struct damos_access_pattern {
 struct damos {
 	struct damos_access_pattern pattern;
 	enum damos_action action;
+	unsigned long apply_interval_us;
+/* private: internal use only */
+	/*
+	 * number of sample intervals that should be passed before applying
+	 * @action
+	 */
+	unsigned long next_apply_sis;
+/* public: */
 	struct damos_quota quota;
 	struct damos_watermarks wmarks;
 	struct list_head filters;
@@ -641,7 +652,9 @@ void damos_add_filter(struct damos *s, struct damos_filter *f);
 void damos_destroy_filter(struct damos_filter *f);
 
 struct damos *damon_new_scheme(struct damos_access_pattern *pattern,
-		enum damos_action action, struct damos_quota *quota,
+		enum damos_action action,
+		unsigned long apply_interval_us,
+		struct damos_quota *quota,
 		struct damos_watermarks *wmarks);
 void damon_add_scheme(struct damon_ctx *ctx, struct damos *s);
 void damon_destroy_scheme(struct damos *s);
diff --git a/mm/damon/core.c b/mm/damon/core.c
index 3e0532c6896c..c2801656a32d 100644
--- a/mm/damon/core.c
+++ b/mm/damon/core.c
@@ -323,7 +323,9 @@ static struct damos_quota *damos_quota_init_priv(struct damos_quota *quota)
 }
 
 struct damos *damon_new_scheme(struct damos_access_pattern *pattern,
-		enum damos_action action, struct damos_quota *quota,
+		enum damos_action action,
+		unsigned long apply_interval_us,
+		struct damos_quota *quota,
 		struct damos_watermarks *wmarks)
 {
 	struct damos *scheme;
@@ -333,6 +335,13 @@ struct damos *damon_new_scheme(struct damos_access_pattern *pattern,
 		return NULL;
 	scheme->pattern = *pattern;
 	scheme->action = action;
+	scheme->apply_interval_us = apply_interval_us;
+	/*
+	 * next_apply_sis will be set when kdamond starts. While kdamond is
+	 * running, it will also be updated when it is added to the DAMON
+	 * context, or damon_attrs are updated.
+	 */
+	scheme->next_apply_sis = 0;
 	INIT_LIST_HEAD(&scheme->filters);
 	scheme->stat = (struct damos_stat){};
 	INIT_LIST_HEAD(&scheme->list);
@@ -345,9 +354,21 @@ struct damos *damon_new_scheme(struct damos_access_pattern *pattern,
 	return scheme;
 }
 
+static void damos_set_next_apply_sis(struct damos *s, struct damon_ctx *ctx)
+{
+	unsigned long sample_interval = ctx->attrs.sample_interval ?
+		ctx->attrs.sample_interval : 1;
+	unsigned long apply_interval = s->apply_interval_us ?
+		s->apply_interval_us : ctx->attrs.aggr_interval;
+
+	s->next_apply_sis = ctx->passed_sample_intervals +
+		apply_interval / sample_interval;
+}
+
 void damon_add_scheme(struct damon_ctx *ctx, struct damos *s)
 {
 	list_add_tail(&s->list, &ctx->schemes);
+	damos_set_next_apply_sis(s, ctx);
 }
 
 static void damon_del_scheme(struct damos *s)
@@ -586,6 +607,7 @@ static void damon_update_monitoring_results(struct damon_ctx *ctx,
 int damon_set_attrs(struct damon_ctx *ctx, struct damon_attrs *attrs)
 {
 	unsigned long sample_interval;
+	struct damos *s;
 
 	if (attrs->min_nr_regions < 3)
 		return -EINVAL;
@@ -602,6 +624,10 @@ int damon_set_attrs(struct damon_ctx *ctx, struct damon_attrs *attrs)
 
 	damon_update_monitoring_results(ctx, attrs);
 	ctx->attrs = *attrs;
+
+	damon_for_each_scheme(s, ctx)
+		damos_set_next_apply_sis(s, ctx);
+
 	return 0;
 }
 
@@ -1127,14 +1153,29 @@ static void kdamond_apply_schemes(struct damon_ctx *c)
 	struct damon_target *t;
 	struct damon_region *r, *next_r;
 	struct damos *s;
+	unsigned long sample_interval = c->attrs.sample_interval ?
+		c->attrs.sample_interval : 1;
+	bool has_schemes_to_apply = false;
 
 	damon_for_each_scheme(s, c) {
+		if (c->passed_sample_intervals != s->next_apply_sis)
+			continue;
+
+		s->next_apply_sis +=
+			(s->apply_interval_us ? s->apply_interval_us :
+			 c->attrs.aggr_interval) / sample_interval;
+
 		if (!s->wmarks.activated)
 			continue;
 
+		has_schemes_to_apply = true;
+
 		damos_adjust_quota(c, s);
 	}
 
+	if (!has_schemes_to_apply)
+		return;
+
 	damon_for_each_target(t, c) {
 		damon_for_each_region_safe(r, next_r, t)
 			damon_do_apply_schemes(c, t, r);
@@ -1419,11 +1460,19 @@ static void kdamond_init_intervals_sis(struct damon_ctx *ctx)
 {
 	unsigned long sample_interval = ctx->attrs.sample_interval ?
 		ctx->attrs.sample_interval : 1;
+	unsigned long apply_interval;
+	struct damos *scheme;
 
 	ctx->passed_sample_intervals = 0;
 	ctx->next_aggregation_sis = ctx->attrs.aggr_interval / sample_interval;
 	ctx->next_ops_update_sis = ctx->attrs.ops_update_interval /
 		sample_interval;
+
+	damon_for_each_scheme(scheme, ctx) {
+		apply_interval = scheme->apply_interval_us ?
+			scheme->apply_interval_us : ctx->attrs.aggr_interval;
+		scheme->next_apply_sis = apply_interval / sample_interval;
+	}
 }
 
 /*
@@ -1470,16 +1519,30 @@ static int kdamond_fn(void *data)
 			ctx->attrs.sample_interval : 1;
 
 		if (ctx->passed_sample_intervals == ctx->next_aggregation_sis) {
-			ctx->next_aggregation_sis +=
-				ctx->attrs.aggr_interval / sample_interval;
 			kdamond_merge_regions(ctx,
 					max_nr_accesses / 10,
 					sz_limit);
 			if (ctx->callback.after_aggregation &&
-					ctx->callback.after_aggregation(ctx))
+					ctx->callback.after_aggregation(ctx)) {
+				ctx->next_aggregation_sis +=
+					ctx->attrs.aggr_interval /
+					sample_interval;
 				break;
-			if (!list_empty(&ctx->schemes))
-				kdamond_apply_schemes(ctx);
+			}
+		}
+
+		/*
+		 * do kdamond_apply_schemes() after kdamond_merge_regions() if
+		 * possible, to reduce overhead
+		 */
+		if (!list_empty(&ctx->schemes))
+			kdamond_apply_schemes(ctx);
+
+		if (ctx->passed_sample_intervals ==
+				ctx->next_aggregation_sis) {
+			ctx->next_aggregation_sis +=
+				ctx->attrs.aggr_interval / sample_interval;
+
 			kdamond_reset_aggregated(ctx);
 			kdamond_split_regions(ctx);
 			if (ctx->ops.reset_aggregated)
diff --git a/mm/damon/dbgfs.c b/mm/damon/dbgfs.c
index 124f0f8c97b7..dc0ea1fc30ca 100644
--- a/mm/damon/dbgfs.c
+++ b/mm/damon/dbgfs.c
@@ -278,7 +278,8 @@ static struct damos **str_to_schemes(const char *str, ssize_t len,
 			goto fail;
 
 		pos += parsed;
-		scheme = damon_new_scheme(&pattern, action, &quota, &wmarks);
+		scheme = damon_new_scheme(&pattern, action, 0, &quota,
+				&wmarks);
 		if (!scheme)
 			goto fail;
diff --git a/mm/damon/lru_sort.c b/mm/damon/lru_sort.c
index 7b8fce2f67a8..3ecdcc029443 100644
--- a/mm/damon/lru_sort.c
+++ b/mm/damon/lru_sort.c
@@ -158,6 +158,8 @@ static struct damos *damon_lru_sort_new_scheme(
 			pattern,
 			/* (de)prioritize on LRU-lists */
 			action,
+			/* for each aggregation interval */
+			0,
 			/* under the quota. */
 			&quota,
 			/* (De)activate this according to the watermarks. */
diff --git a/mm/damon/reclaim.c b/mm/damon/reclaim.c
index 648d2a85523a..ab974e477d2f 100644
--- a/mm/damon/reclaim.c
+++ b/mm/damon/reclaim.c
@@ -142,6 +142,8 @@ static struct damos *damon_reclaim_new_scheme(void)
 			&pattern,
 			/* page out those, as soon as found */
 			DAMOS_PAGEOUT,
+			/* for each aggregation interval */
+			0,
 			/* under the quota. */
 			&damon_reclaim_quota,
 			/* (De)activate this according to the watermarks. */
diff --git a/mm/damon/sysfs-schemes.c b/mm/damon/sysfs-schemes.c
index 093700f50b18..3d30e85596b0 100644
--- a/mm/damon/sysfs-schemes.c
+++ b/mm/damon/sysfs-schemes.c
@@ -1610,7 +1610,7 @@ static struct damos *damon_sysfs_mk_scheme(
 		.low = sysfs_wmarks->low,
 	};
 
-	scheme = damon_new_scheme(&pattern, sysfs_scheme->action, &quota,
+	scheme = damon_new_scheme(&pattern, sysfs_scheme->action, 0, &quota,
 			&wmarks);
 	if (!scheme)
 		return NULL;