From patchwork Tue Feb 18 03:14:42 2025
X-Patchwork-Submitter: JP Kobryn <inwardvessel@gmail.com>
X-Patchwork-Id: 13978854
From: JP Kobryn <inwardvessel@gmail.com>
To: shakeel.butt@linux.dev, tj@kernel.org, mhocko@kernel.org, hannes@cmpxchg.org, yosryahmed@google.com, akpm@linux-foundation.org
Cc: linux-mm@kvack.org, cgroups@vger.kernel.org, kernel-team@meta.com
Subject: [PATCH 05/11] cgroup: separate rstat for bpf cgroups
Date: Mon, 17 Feb 2025 19:14:42 -0800
Message-ID: <20250218031448.46951-6-inwardvessel@gmail.com>
In-Reply-To: <20250218031448.46951-1-inwardvessel@gmail.com>
References: <20250218031448.46951-1-inwardvessel@gmail.com>
The processing of bpf cgroup stats is currently tied to the rstat actions
of other subsystems. Make them updated and flushed independently: give the
cgroup_bpf struct its own cgroup_rstat instance and define a new
cgroup_rstat_ops instance specifically for cgroup_bpf. Drop the kfunc
status of the existing updated/flush API calls and, in their place, create
new updated/flush kfuncs dedicated to bpf cgroups. These new kfuncs use
the bpf-specific rstat ops to plumb back into the existing rstat routines.
Where applicable, guard the bpf rstat code with CONFIG_CGROUP_BPF
pre-processor conditionals.
Signed-off-by: JP Kobryn <inwardvessel@gmail.com>
---
 include/linux/bpf-cgroup-defs.h               |  3 +
 include/linux/cgroup.h                        |  3 +
 kernel/bpf/cgroup.c                           |  6 ++
 kernel/cgroup/cgroup-internal.h               |  5 +
 kernel/cgroup/rstat.c                         | 95 ++++++++++++++++---
 .../selftests/bpf/progs/btf_type_tag_percpu.c |  4 +-
 .../bpf/progs/cgroup_hierarchical_stats.c     |  8 +-
 7 files changed, 107 insertions(+), 17 deletions(-)

diff --git a/include/linux/bpf-cgroup-defs.h b/include/linux/bpf-cgroup-defs.h
index 0985221d5478..e68359f861fb 100644
--- a/include/linux/bpf-cgroup-defs.h
+++ b/include/linux/bpf-cgroup-defs.h
@@ -75,6 +75,9 @@ struct cgroup_bpf {
 
 	/* cgroup_bpf is released using a work queue */
 	struct work_struct release_work;
+
+	/* per-cpu recursive resource statistics */
+	struct cgroup_rstat rstat;
 };
 
 #else /* CONFIG_CGROUP_BPF */
diff --git a/include/linux/cgroup.h b/include/linux/cgroup.h
index eec970622419..253ce4bff576 100644
--- a/include/linux/cgroup.h
+++ b/include/linux/cgroup.h
@@ -836,6 +836,9 @@ static inline bool cgroup_task_frozen(struct task_struct *task)
 #endif /* !CONFIG_CGROUPS */
 
 #ifdef CONFIG_CGROUP_BPF
+void bpf_cgroup_rstat_updated(struct cgroup *cgrp, int cpu);
+void bpf_cgroup_rstat_flush(struct cgroup *cgrp);
+
 static inline void cgroup_bpf_get(struct cgroup *cgrp)
 {
 	percpu_ref_get(&cgrp->bpf.refcnt);
diff --git a/kernel/bpf/cgroup.c b/kernel/bpf/cgroup.c
index 46e5db65dbc8..72bcfdbda6b1 100644
--- a/kernel/bpf/cgroup.c
+++ b/kernel/bpf/cgroup.c
@@ -210,6 +210,7 @@ void cgroup_bpf_offline(struct cgroup *cgrp)
 {
 	cgroup_get(cgrp);
 	percpu_ref_kill(&cgrp->bpf.refcnt);
+	bpf_cgroup_rstat_exit(&cgrp->bpf);
 }
 
 static void bpf_cgroup_storages_free(struct bpf_cgroup_storage *storages[])
@@ -490,6 +491,10 @@ int cgroup_bpf_inherit(struct cgroup *cgrp)
 	if (ret)
 		return ret;
 
+	ret = bpf_cgroup_rstat_init(&cgrp->bpf);
+	if (ret)
+		goto cleanup_ref;
+
 	for (p = cgroup_parent(cgrp); p; p = cgroup_parent(p))
 		cgroup_bpf_get(p);
 
@@ -513,6 +518,7 @@ int cgroup_bpf_inherit(struct cgroup *cgrp)
 	for (p = cgroup_parent(cgrp); p; p = cgroup_parent(p))
 		cgroup_bpf_put(p);
 
+cleanup_ref:
 	percpu_ref_exit(&cgrp->bpf.refcnt);
 
 	return -ENOMEM;
diff --git a/kernel/cgroup/cgroup-internal.h b/kernel/cgroup/cgroup-internal.h
index 87d062baff90..bba1a1794de2 100644
--- a/kernel/cgroup/cgroup-internal.h
+++ b/kernel/cgroup/cgroup-internal.h
@@ -274,6 +274,11 @@ void cgroup_rstat_exit(struct cgroup_subsys_state *css);
 void cgroup_rstat_boot(void);
 void cgroup_base_stat_cputime_show(struct seq_file *seq);
 
+#ifdef CONFIG_CGROUP_BPF
+int bpf_cgroup_rstat_init(struct cgroup_bpf *bpf);
+void bpf_cgroup_rstat_exit(struct cgroup_bpf *bpf);
+#endif /* CONFIG_CGROUP_BPF */
+
 /*
  * namespace.c
  */
diff --git a/kernel/cgroup/rstat.c b/kernel/cgroup/rstat.c
index a8bb304e49c4..14dd8217db64 100644
--- a/kernel/cgroup/rstat.c
+++ b/kernel/cgroup/rstat.c
@@ -73,6 +73,47 @@ static struct cgroup_rstat_ops rstat_css_ops = {
 	.flush_fn = rstat_flush_via_css,
 };
 
+#ifdef CONFIG_CGROUP_BPF
+__weak noinline void bpf_rstat_flush(struct cgroup *cgrp,
+		struct cgroup *parent, int cpu);
+
+static struct cgroup *rstat_cgroup_via_bpf(struct cgroup_rstat *rstat)
+{
+	struct cgroup_bpf *bpf = container_of(rstat, typeof(*bpf), rstat);
+	struct cgroup *cgrp = container_of(bpf, typeof(*cgrp), bpf);
+
+	return cgrp;
+}
+
+static struct cgroup_rstat *rstat_parent_via_bpf(
+		struct cgroup_rstat *rstat)
+{
+	struct cgroup *cgrp, *cgrp_parent;
+
+	cgrp = rstat_cgroup_via_bpf(rstat);
+	cgrp_parent = cgroup_parent(cgrp);
+	if (!cgrp_parent)
+		return NULL;
+
+	return &(cgrp_parent->bpf.rstat);
+}
+
+static void rstat_flush_via_bpf(struct cgroup_rstat *rstat, int cpu)
+{
+	struct cgroup *cgrp, *cgrp_parent;
+
+	cgrp = rstat_cgroup_via_bpf(rstat);
+	cgrp_parent = cgroup_parent(cgrp);
+	bpf_rstat_flush(cgrp, cgrp_parent, cpu);
+}
+
+static struct cgroup_rstat_ops rstat_bpf_ops = {
+	.parent_fn = rstat_parent_via_bpf,
+	.cgroup_fn = rstat_cgroup_via_bpf,
+	.flush_fn = rstat_flush_via_bpf,
+};
+#endif /* CONFIG_CGROUP_BPF */
+
 /*
  * Helper functions for rstat per CPU lock (cgroup_rstat_cpu_lock).
  *
@@ -187,11 +228,18 @@ static void __cgroup_rstat_updated(struct cgroup_rstat *rstat, int cpu,
  * rstat_cpu->updated_children list. See the comment on top of
  * cgroup_rstat_cpu definition for details.
  */
-__bpf_kfunc void cgroup_rstat_updated(struct cgroup_subsys_state *css, int cpu)
+void cgroup_rstat_updated(struct cgroup_subsys_state *css, int cpu)
 {
 	__cgroup_rstat_updated(&css->rstat, cpu, &rstat_css_ops);
 }
 
+#ifdef CONFIG_CGROUP_BPF
+__bpf_kfunc void bpf_cgroup_rstat_updated(struct cgroup *cgroup, int cpu)
+{
+	__cgroup_rstat_updated(&(cgroup->bpf.rstat), cpu, &rstat_bpf_ops);
+}
+#endif /* CONFIG_CGROUP_BPF */
+
 /**
  * cgroup_rstat_push_children - push children cgroups into the given list
  * @head: current head of the list (= subtree root)
@@ -330,8 +378,7 @@ static struct cgroup_rstat *cgroup_rstat_updated_list(
 
 __bpf_hook_start();
 
-__weak noinline void bpf_rstat_flush(struct cgroup *cgrp,
-		struct cgroup *parent, int cpu)
+void bpf_rstat_flush(struct cgroup *cgrp, struct cgroup *parent, int cpu)
 {
 }
 
@@ -379,12 +426,8 @@ static void cgroup_rstat_flush_locked(struct cgroup_rstat *rstat,
 		struct cgroup_rstat *pos = cgroup_rstat_updated_list(
 				rstat, cpu, ops);
 
-		for (; pos; pos = pos->rstat_flush_next) {
-			struct cgroup *pos_cgroup = ops->cgroup_fn(pos);
-
+		for (; pos; pos = pos->rstat_flush_next)
 			ops->flush_fn(pos, cpu);
-			bpf_rstat_flush(pos_cgroup, cgroup_parent(pos_cgroup), cpu);
-		}
 
 		/* play nice and yield if necessary */
 		if (need_resched() || spin_needbreak(&cgroup_rstat_lock)) {
@@ -424,11 +467,18 @@ static void __cgroup_rstat_flush(struct cgroup_rstat *rstat,
  *
  * This function may block.
  */
-__bpf_kfunc void cgroup_rstat_flush(struct cgroup_subsys_state *css)
+void cgroup_rstat_flush(struct cgroup_subsys_state *css)
 {
 	__cgroup_rstat_flush(&css->rstat, &rstat_css_ops);
 }
 
+#ifdef CONFIG_CGROUP_BPF
+__bpf_kfunc void bpf_cgroup_rstat_flush(struct cgroup *cgroup)
+{
+	__cgroup_rstat_flush(&(cgroup->bpf.rstat), &rstat_bpf_ops);
+}
+#endif /* CONFIG_CGROUP_BPF */
+
 static void __cgroup_rstat_flush_hold(struct cgroup_rstat *rstat,
 		struct cgroup_rstat_ops *ops)
 	__acquires(&cgroup_rstat_lock)
@@ -532,6 +582,27 @@ void cgroup_rstat_exit(struct cgroup_subsys_state *css)
 	__cgroup_rstat_exit(rstat);
 }
 
+#ifdef CONFIG_CGROUP_BPF
+int bpf_cgroup_rstat_init(struct cgroup_bpf *bpf)
+{
+	struct cgroup_rstat *rstat = &bpf->rstat;
+
+	rstat->rstat_cpu = alloc_percpu(struct cgroup_rstat_cpu);
+	if (!rstat->rstat_cpu)
+		return -ENOMEM;
+
+	__cgroup_rstat_init(rstat);
+
+	return 0;
+}
+
+void bpf_cgroup_rstat_exit(struct cgroup_bpf *bpf)
+{
+	__cgroup_rstat_flush(&bpf->rstat, &rstat_bpf_ops);
+	__cgroup_rstat_exit(&bpf->rstat);
+}
+#endif /* CONFIG_CGROUP_BPF */
+
 void __init cgroup_rstat_boot(void)
 {
 	int cpu;
@@ -754,10 +825,11 @@ void cgroup_base_stat_cputime_show(struct seq_file *seq)
 		cgroup_force_idle_show(seq, &cgrp->bstat);
 }
 
+#ifdef CONFIG_CGROUP_BPF
 /* Add bpf kfuncs for cgroup_rstat_updated() and cgroup_rstat_flush() */
 BTF_KFUNCS_START(bpf_rstat_kfunc_ids)
-BTF_ID_FLAGS(func, cgroup_rstat_updated)
-BTF_ID_FLAGS(func, cgroup_rstat_flush, KF_SLEEPABLE)
+BTF_ID_FLAGS(func, bpf_cgroup_rstat_updated)
+BTF_ID_FLAGS(func, bpf_cgroup_rstat_flush, KF_SLEEPABLE)
 BTF_KFUNCS_END(bpf_rstat_kfunc_ids)
 
 static const struct btf_kfunc_id_set bpf_rstat_kfunc_set = {
@@ -771,3 +843,4 @@ static int __init bpf_rstat_kfunc_init(void)
 					&bpf_rstat_kfunc_set);
 }
 late_initcall(bpf_rstat_kfunc_init);
+#endif /* CONFIG_CGROUP_BPF */
diff --git a/tools/testing/selftests/bpf/progs/btf_type_tag_percpu.c b/tools/testing/selftests/bpf/progs/btf_type_tag_percpu.c
index 310cd51e12e8..da15ada56218 100644
--- a/tools/testing/selftests/bpf/progs/btf_type_tag_percpu.c
+++ b/tools/testing/selftests/bpf/progs/btf_type_tag_percpu.c
@@ -45,7 +45,7 @@ int BPF_PROG(test_percpu2, struct bpf_testmod_btf_type_tag_2 *arg)
 SEC("tp_btf/cgroup_mkdir")
 int BPF_PROG(test_percpu_load, struct cgroup *cgrp, const char *path)
 {
-	g = (__u64)cgrp->self.rstat.rstat_cpu->updated_children;
+	g = (__u64)cgrp->bpf.rstat.rstat_cpu->updated_children;
 	return 0;
 }
 
@@ -57,7 +57,7 @@ int BPF_PROG(test_percpu_helper, struct cgroup *cgrp, const char *path)
 	cpu = bpf_get_smp_processor_id();
 	rstat = (struct cgroup_rstat_cpu *)bpf_per_cpu_ptr(
-			cgrp->self.rstat.rstat_cpu, cpu);
+			cgrp->bpf.rstat.rstat_cpu, cpu);
 	if (rstat) {
 		/* READ_ONCE */
 		*(volatile int *)rstat;
diff --git a/tools/testing/selftests/bpf/progs/cgroup_hierarchical_stats.c b/tools/testing/selftests/bpf/progs/cgroup_hierarchical_stats.c
index 10c803c8dc70..24450dd4d3f3 100644
--- a/tools/testing/selftests/bpf/progs/cgroup_hierarchical_stats.c
+++ b/tools/testing/selftests/bpf/progs/cgroup_hierarchical_stats.c
@@ -37,8 +37,8 @@ struct {
 	__type(value, struct attach_counter);
 } attach_counters SEC(".maps");
 
-extern void cgroup_rstat_updated(struct cgroup_subsys_state *css, int cpu) __ksym;
-extern void cgroup_rstat_flush(struct cgroup_subsys_state *css) __ksym;
+extern void bpf_cgroup_rstat_updated(struct cgroup *cgrp, int cpu) __ksym;
+extern void bpf_cgroup_rstat_flush(struct cgroup *cgrp) __ksym;
 
 static uint64_t cgroup_id(struct cgroup *cgrp)
 {
@@ -75,7 +75,7 @@ int BPF_PROG(counter, struct cgroup *dst_cgrp, struct task_struct *leader,
 	else if (create_percpu_attach_counter(cg_id, 1))
 		return 0;
 
-	cgroup_rstat_updated(&dst_cgrp->self, bpf_get_smp_processor_id());
+	bpf_cgroup_rstat_updated(dst_cgrp, bpf_get_smp_processor_id());
 	return 0;
 }
 
@@ -141,7 +141,7 @@ int BPF_PROG(dumper, struct bpf_iter_meta *meta, struct cgroup *cgrp)
 		return 1;
 
 	/* Flush the stats to make sure we get the most updated numbers */
-	cgroup_rstat_flush(&cgrp->self);
+	bpf_cgroup_rstat_flush(cgrp);
 
 	total_counter = bpf_map_lookup_elem(&attach_counters, &cg_id);
 	if (!total_counter) {