From patchwork Fri Dec 13 21:53:04 2024
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: SeongJae Park
X-Patchwork-Id: 13907896
From: SeongJae Park
To:
Cc: SeongJae Park , damon@lists.linux.dev, linux-mm@kvack.org,
 linux-kernel@vger.kernel.org
Subject: [RFC PATCH 7/9] mm/damon/core: implement damos_walk()
Date: Fri, 13 Dec 2024 13:53:04 -0800
Message-Id: <20241213215306.54778-8-sj@kernel.org>
X-Mailer: git-send-email 2.39.5
In-Reply-To: <20241213215306.54778-1-sj@kernel.org>
References: <20241213215306.54778-1-sj@kernel.org>
MIME-Version: 1.0

Introduce a new core layer interface, damos_walk().  It aims to replace
some damon_callback usages that need additional synchronization to
access the regions that DAMOS schemes of an ongoing kdamond are applied
to.  It receives a function pointer and asks the kdamond to invoke it
for each region that any DAMOS action will be applied to, within one
scheme apply interval for each of its schemes.  The function then waits
until the kdamond finishes the invocations for every scheme, or cancels
the request, and returns.

The kdamond invokes the function as requested from within its main
loop.  If it is deactivated by DAMOS watermarks or exits the main loop,
it marks the request as canceled, so that damos_walk() can wake up and
return.

Signed-off-by: SeongJae Park
---
 include/linux/damon.h |  33 ++++++++++-
 mm/damon/core.c       | 134 ++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 165 insertions(+), 2 deletions(-)
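A usage illustration for reviewers (not part of the patch): a caller sets
only the public @walk_fn and @data fields of &struct damos_walk_control
and passes it to damos_walk(); the kdamond then invokes the callback for
each region it is about to apply a scheme's action to.  The names
struct total_sz_args, total_sz_walk_fn() and walk_total_sz() below are
hypothetical, used only for this sketch.

#include <linux/damon.h>

/* accumulator passed to the walk via damos_walk_control->data */
struct total_sz_args {
	unsigned long total_sz;
};

/* called by the kdamond for each region a scheme is about to act on */
static void total_sz_walk_fn(void *data, struct damon_ctx *ctx,
		struct damon_target *t, struct damon_region *r,
		struct damos *s)
{
	struct total_sz_args *args = data;

	args->total_sz += r->ar.end - r->ar.start;
}

/* total the size of the regions that the kdamond will apply actions to */
static int walk_total_sz(struct damon_ctx *ctx, unsigned long *total_sz)
{
	struct total_sz_args args = { .total_sz = 0 };
	struct damos_walk_control control = {
		.walk_fn = total_sz_walk_fn,
		.data = &args,
	};
	int err;

	err = damos_walk(ctx, &control);
	if (err)
		return err;
	*total_sz = args.total_sz;
	return 0;
}

Note that the private completion and canceled fields need no setup by the
caller; damos_walk() initializes them itself.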
diff --git a/include/linux/damon.h b/include/linux/damon.h
index 529ea578f2d5..acedaab4dccf 100644
--- a/include/linux/damon.h
+++ b/include/linux/damon.h
@@ -368,6 +368,31 @@ struct damos_filter {
 	struct list_head list;
 };
 
+struct damon_ctx;
+struct damos;
+
+/**
+ * struct damos_walk_control - Control damos_walk().
+ *
+ * @walk_fn:	Function to be called back for each region.
+ * @data:	Data that will be passed to walk functions.
+ *
+ * Control damos_walk(), which requests a specific kdamond to invoke the given
+ * function on each region that is eligible for actions of the kdamond's
+ * schemes.  Refer to damos_walk() for more details.
+ */
+struct damos_walk_control {
+	void (*walk_fn)(void *data, struct damon_ctx *ctx,
+			struct damon_target *t, struct damon_region *r,
+			struct damos *s);
+	void *data;
+/* private: internal use only */
+	/* informs if the kdamond finished handling of the walk request */
+	struct completion completion;
+	/* informs if the walk is canceled. */
+	bool canceled;
+};
+
 /**
  * struct damos_access_pattern - Target access pattern of the given scheme.
  * @min_sz_region:	Minimum size of target regions.
@@ -453,6 +478,8 @@ struct damos {
 	 * @action
 	 */
 	unsigned long next_apply_sis;
+	/* informs if ongoing DAMOS walk for this scheme is finished */
+	bool walk_completed;
 /* public: */
 	struct damos_quota quota;
 	struct damos_watermarks wmarks;
@@ -480,8 +507,6 @@ enum damon_ops_id {
 	NR_DAMON_OPS,
 };
 
-struct damon_ctx;
-
 /**
  * struct damon_operations - Monitoring operations for given use cases.
  *
@@ -694,6 +719,9 @@ struct damon_ctx {
 	struct damon_call_control *call_control;
 	struct mutex call_control_lock;
 
+	struct damos_walk_control *walk_control;
+	struct mutex walk_control_lock;
+
 /* public: */
 	struct task_struct *kdamond;
 	struct mutex kdamond_lock;
@@ -842,6 +870,7 @@ int damon_start(struct damon_ctx **ctxs, int nr_ctxs, bool exclusive);
 int damon_stop(struct damon_ctx **ctxs, int nr_ctxs);
 
 int damon_call(struct damon_ctx *ctx, struct damon_call_control *control);
+int damos_walk(struct damon_ctx *ctx, struct damos_walk_control *control);
 
 int damon_set_region_biggest_system_ram_default(struct damon_target *t,
 		unsigned long *start, unsigned long *end);
diff --git a/mm/damon/core.c b/mm/damon/core.c
index 89a679c06e30..de923b3a1084 100644
--- a/mm/damon/core.c
+++ b/mm/damon/core.c
@@ -534,6 +534,7 @@ struct damon_ctx *damon_new_ctx(void)
 
 	mutex_init(&ctx->kdamond_lock);
 	mutex_init(&ctx->call_control_lock);
+	mutex_init(&ctx->walk_control_lock);
 
 	ctx->attrs.min_nr_regions = 10;
 	ctx->attrs.max_nr_regions = 1000;
@@ -1232,6 +1233,46 @@ int damon_call(struct damon_ctx *ctx, struct damon_call_control *control)
 	return 0;
 }
 
+/**
+ * damos_walk() - Invoke a given function while DAMOS walks regions.
+ * @ctx:	DAMON context to call the function for.
+ * @control:	Control variable of the walk request.
+ *
+ * Ask the DAMON worker thread (kdamond) of @ctx to call a function for each
+ * region that the kdamond will apply a DAMOS action to, and wait until the
+ * kdamond finishes handling of the request.
+ *
+ * The kdamond executes the given function in the main loop, for each region
+ * just before it applies any DAMOS actions of @ctx to it.  The invocation is
+ * made only within one &damos->apply_interval_us since damos_walk()
+ * invocation, for each scheme.  The given callback function can hence safely
+ * access the internal data of &struct damon_ctx and &struct damon_region that
+ * each of the schemes will apply the action to in the next interval, without
+ * additional synchronization against the kdamond.  Once every scheme of @ctx
+ * has passed at least one &damos->apply_interval_us, the kdamond marks the
+ * request as completed so that damos_walk() can wake up and return.
+ *
+ * Return: 0 on success, negative error code otherwise.
+ */
+int damos_walk(struct damon_ctx *ctx, struct damos_walk_control *control)
+{
+	init_completion(&control->completion);
+	control->canceled = false;
+	mutex_lock(&ctx->walk_control_lock);
+	if (ctx->walk_control) {
+		mutex_unlock(&ctx->walk_control_lock);
+		return -EBUSY;
+	}
+	ctx->walk_control = control;
+	mutex_unlock(&ctx->walk_control_lock);
+	if (!damon_is_running(ctx))
+		return -EINVAL;
+	wait_for_completion(&control->completion);
+	if (control->canceled)
+		return -ECANCELED;
+	return 0;
+}
+
 /*
  * Reset the aggregated monitoring results ('nr_accesses' of each region).
  */
@@ -1411,6 +1452,93 @@ static bool damos_filter_out(struct damon_ctx *ctx, struct damon_target *t,
 	return false;
 }
 
+/*
+ * damos_walk_call_walk() - Call &damos_walk_control->walk_fn.
+ * @ctx:	The context of &damon_ctx->walk_control.
+ * @t:	The monitoring target of @r that @s will be applied to.
+ * @r:	The region of @t that @s will be applied to.
+ * @s:	The scheme of @ctx that will be applied to @r.
+ *
+ * This function is called from the kdamond whenever it finds a region that is
+ * eligible for a DAMOS scheme's action.  If a DAMOS walk request is
+ * installed by damos_walk() and its &damos_walk_control->walk_fn has not been
+ * invoked for the region for the last &damos->apply_interval_us interval,
+ * invoke it.
+ */
+static void damos_walk_call_walk(struct damon_ctx *ctx, struct damon_target *t,
+		struct damon_region *r, struct damos *s)
+{
+	struct damos_walk_control *control;
+
+	mutex_lock(&ctx->walk_control_lock);
+	control = ctx->walk_control;
+	mutex_unlock(&ctx->walk_control_lock);
+	if (!control)
+		return;
+	control->walk_fn(control->data, ctx, t, r, s);
+}
+
+/*
+ * damos_walk_complete() - Complete DAMOS walk request if all walks are done.
+ * @ctx:	The context of &damon_ctx->walk_control.
+ * @s:	A scheme of @ctx that all walks are now done for.
+ *
+ * This function is called when the kdamond finished applying the action of a
+ * DAMOS scheme to the regions that are eligible for the given
+ * &damos->apply_interval_us.  If every scheme of @ctx including @s has now
+ * finished walking for at least one &damos->apply_interval_us, this function
+ * marks the handling of the given DAMOS walk request as done, so that
+ * damos_walk() can wake up and return.
+ */
+static void damos_walk_complete(struct damon_ctx *ctx, struct damos *s)
+{
+	struct damos *siter;
+	struct damos_walk_control *control;
+
+	mutex_lock(&ctx->walk_control_lock);
+	control = ctx->walk_control;
+	mutex_unlock(&ctx->walk_control_lock);
+	if (!control)
+		return;
+
+	s->walk_completed = true;
+	/* if all schemes completed, signal completion to walker */
+	damon_for_each_scheme(siter, ctx) {
+		if (!siter->walk_completed)
+			return;
+	}
+	complete(&control->completion);
+	mutex_lock(&ctx->walk_control_lock);
+	ctx->walk_control = NULL;
+	mutex_unlock(&ctx->walk_control_lock);
+}
+
+/*
+ * damos_walk_cancel() - Cancel the current DAMOS walk request.
+ * @ctx:	The context of &damon_ctx->walk_control.
+ *
+ * This function is called when @ctx is deactivated by DAMOS watermarks, a
+ * DAMOS walk is requested but there is no DAMOS scheme to walk for, or the
+ * kdamond is already out of the main loop and therefore going to be
+ * terminated, and hence cannot continue the walks.  This function therefore
+ * marks the walk request as canceled, so that damos_walk() can wake up and
+ * return.
+ */
+static void damos_walk_cancel(struct damon_ctx *ctx)
+{
+	struct damos_walk_control *control;
+
+	mutex_lock(&ctx->walk_control_lock);
+	control = ctx->walk_control;
+	mutex_unlock(&ctx->walk_control_lock);
+
+	if (!control)
+		return;
+	control->canceled = true;
+	complete(&control->completion);
+	mutex_lock(&ctx->walk_control_lock);
+	ctx->walk_control = NULL;
+	mutex_unlock(&ctx->walk_control_lock);
+}
+
 static void damos_apply_scheme(struct damon_ctx *c, struct damon_target *t,
 		struct damon_region *r, struct damos *s)
 {
@@ -1467,6 +1595,7 @@ static void damos_apply_scheme(struct damon_ctx *c, struct damon_target *t,
 	if (damos_filter_out(c, t, r, s))
 		return;
 	ktime_get_coarse_ts64(&begin);
+	damos_walk_call_walk(c, t, r, s);
 	if (c->callback.before_damos_apply)
 		err = c->callback.before_damos_apply(c, t, r, s);
 	if (!err) {
@@ -1745,6 +1874,7 @@ static void kdamond_apply_schemes(struct damon_ctx *c)
 	damon_for_each_scheme(s, c) {
 		if (c->passed_sample_intervals < s->next_apply_sis)
 			continue;
+		damos_walk_complete(c, s);
 		s->next_apply_sis = c->passed_sample_intervals +
 			(s->apply_interval_us ? s->apply_interval_us :
 			 c->attrs.aggr_interval) / sample_interval;
@@ -2077,6 +2207,7 @@ static int kdamond_wait_activation(struct damon_ctx *ctx)
 				ctx->callback.after_wmarks_check(ctx))
 			break;
 		kdamond_call(ctx, true);
+		damos_walk_cancel(ctx);
 	}
 	return -EBUSY;
 }
@@ -2171,6 +2302,8 @@ static int kdamond_fn(void *data)
 		 */
 		if (!list_empty(&ctx->schemes))
 			kdamond_apply_schemes(ctx);
+		else
+			damos_walk_cancel(ctx);
 
 		sample_interval = ctx->attrs.sample_interval ?
 			ctx->attrs.sample_interval : 1;
@@ -2211,6 +2344,7 @@ static int kdamond_fn(void *data)
 	mutex_unlock(&ctx->kdamond_lock);
 
 	kdamond_call(ctx, true);
+	damos_walk_cancel(ctx);
 
 	mutex_lock(&damon_lock);
 	nr_running_ctxs--;
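
A second reviewer note (not part of the patch): a sketch of how a caller
might interpret the return values of damos_walk() as implemented above:
0 once every scheme has been walked, -EBUSY when another walk request is
already installed, -EINVAL when the kdamond of the context is not running,
and -ECANCELED when the request was canceled (watermark deactivation, no
schemes to walk, or kdamond termination).  damos_walk_retry() and its
arbitrary 100ms backoff below are hypothetical, for illustration only.

#include <linux/damon.h>
#include <linux/delay.h>
#include <linux/errno.h>

/* hypothetical helper: retry damos_walk() while another walk is in flight */
static int damos_walk_retry(struct damon_ctx *ctx,
		struct damos_walk_control *control, int max_tries)
{
	int i, err = -EBUSY;

	for (i = 0; i < max_tries; i++) {
		err = damos_walk(ctx, control);
		if (err != -EBUSY)	/* -EBUSY: another walk is installed */
			break;
		/* back off briefly before installing a new request */
		msleep(100);
	}
	/*
	 * err is now 0 (all schemes walked), -EINVAL (kdamond not running),
	 * -ECANCELED (request canceled by the kdamond), or still -EBUSY.
	 */
	return err;
}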