From patchwork Fri Sep 22 11:28:39 2023
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Yafang Shao
X-Patchwork-Id: 13395566
X-Patchwork-Delegate: bpf@iogearbox.net
From: Yafang Shao
To: ast@kernel.org, daniel@iogearbox.net, john.fastabend@gmail.com,
 andrii@kernel.org, martin.lau@linux.dev, song@kernel.org,
 yonghong.song@linux.dev, kpsingh@kernel.org, sdf@google.com,
 haoluo@google.com, jolsa@kernel.org, tj@kernel.org,
 lizefan.x@bytedance.com, hannes@cmpxchg.org, yosryahmed@google.com,
 mkoutny@suse.com
Cc: cgroups@vger.kernel.org, bpf@vger.kernel.org, Yafang Shao, Feng Zhou
Subject: [RFC PATCH bpf-next 1/8] bpf: Fix missed rcu read lock in
 bpf_task_under_cgroup()
Date: Fri, 22 Sep 2023 11:28:39 +0000
Message-Id: <20230922112846.4265-2-laoar.shao@gmail.com>
X-Mailer: git-send-email 2.39.3
In-Reply-To: <20230922112846.4265-1-laoar.shao@gmail.com>
References:
 <20230922112846.4265-1-laoar.shao@gmail.com>
Precedence: bulk
X-Mailing-List: bpf@vger.kernel.org
X-Patchwork-State: RFC

When employed within a sleepable program not under RCU protection, the use of
'bpf_task_under_cgroup()' may trigger a warning in the kernel log when
CONFIG_PROVE_RCU is enabled:

[ 1259.662354] =============================
[ 1259.662357] WARNING: suspicious RCU usage
[ 1259.662358] 6.5.0+ #33 Not tainted
[ 1259.662360] -----------------------------
[ 1259.662361] include/linux/cgroup.h:423 suspicious rcu_dereference_check() usage!
[ 1259.662364] other info that might help us debug this:
[ 1259.662366] rcu_scheduler_active = 2, debug_locks = 1
[ 1259.662368] 1 lock held by trace/72954:
[ 1259.662369] #0: ffffffffb5e3eda0 (rcu_read_lock_trace){....}-{0:0}, at: __bpf_prog_enter_sleepable+0x0/0xb0
[ 1259.662383] stack backtrace:
[ 1259.662385] CPU: 50 PID: 72954 Comm: trace Kdump: loaded Not tainted 6.5.0+ #33
[ 1259.662391] Call Trace:
[ 1259.662393]
[ 1259.662395]  dump_stack_lvl+0x6e/0x90
[ 1259.662401]  dump_stack+0x10/0x20
[ 1259.662404]  lockdep_rcu_suspicious+0x163/0x1b0
[ 1259.662412]  task_css_set.part.0+0x23/0x30
[ 1259.662417]  bpf_task_under_cgroup+0xe7/0xf0
[ 1259.662422]  bpf_prog_7fffba481a3bcf88_lsm_run+0x5c/0x93
[ 1259.662431]  bpf_trampoline_6442505574+0x60/0x1000
[ 1259.662439]  bpf_lsm_bpf+0x5/0x20
[ 1259.662443]  ? security_bpf+0x32/0x50
[ 1259.662452]  __sys_bpf+0xe6/0xdd0
[ 1259.662463]  __x64_sys_bpf+0x1a/0x30
[ 1259.662467]  do_syscall_64+0x38/0x90
[ 1259.662472]  entry_SYSCALL_64_after_hwframe+0x6e/0xd8
[ 1259.662479] RIP: 0033:0x7f487baf8e29
...
[ 1259.662504]

This issue can be reproduced with a straightforward program, as demonstrated
below:

SEC("lsm.s/bpf")
int BPF_PROG(lsm_run, int cmd, union bpf_attr *attr, unsigned int size)
{
        struct cgroup *cgrp = NULL;
        struct task_struct *task;
        int ret = 0;

        if (cmd != BPF_LINK_CREATE)
                return 0;

        // The cgroup2 should be mounted first
        cgrp = bpf_cgroup_from_id(1);
        if (!cgrp)
                goto out;
        task = bpf_get_current_task_btf();
        if (bpf_task_under_cgroup(task, cgrp))
                ret = -1;
        bpf_cgroup_release(cgrp);

out:
        return ret;
}

After running the program, subsequently executing another BPF program will
trigger the warning. It is worth noting that task_under_cgroup_hierarchy() is
also used by bpf_current_task_under_cgroup(); however, that helper does not
exhibit this issue because it cannot be used in sleepable BPF programs.
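For reference, a minimal user-space loader sketch for the reproduction program
above. The names are illustrative: it assumes the program was compiled into
repro.bpf.o and a skeleton header repro.skel.h was generated with
"bpftool gen skeleton".

#include <stdio.h>
#include "repro.skel.h"

int main(void)
{
        struct repro *skel;
        int err;

        skel = repro__open_and_load();
        if (!skel) {
                fprintf(stderr, "failed to open and load BPF skeleton\n");
                return 1;
        }

        /* Attaches the lsm.s/bpf program shown above. */
        err = repro__attach(skel);
        if (err) {
                fprintf(stderr, "failed to attach: %d\n", err);
                goto cleanup;
        }

        /* While this stays attached, any BPF_LINK_CREATE command (e.g.
         * attaching another BPF program) runs the sleepable LSM program
         * and triggers the RCU warning on a CONFIG_PROVE_RCU kernel.
         */
        getchar();

cleanup:
        repro__destroy(skel);
        return err ? 1 : 0;
}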
Fixes: b5ad4cdc46c7 ("bpf: Add bpf_task_under_cgroup() kfunc") Signed-off-by: Yafang Shao Cc: Feng Zhou Signed-off-by: Yafang Shao --- kernel/bpf/helpers.c | 7 ++++++- 1 file changed, 6 insertions(+), 1 deletion(-) diff --git a/kernel/bpf/helpers.c b/kernel/bpf/helpers.c index dd1c69ee3375..bb521b181cc3 100644 --- a/kernel/bpf/helpers.c +++ b/kernel/bpf/helpers.c @@ -2212,7 +2212,12 @@ __bpf_kfunc struct cgroup *bpf_cgroup_from_id(u64 cgid) __bpf_kfunc long bpf_task_under_cgroup(struct task_struct *task, struct cgroup *ancestor) { - return task_under_cgroup_hierarchy(task, ancestor); + long ret; + + rcu_read_lock(); + ret = task_under_cgroup_hierarchy(task, ancestor); + rcu_read_unlock(); + return ret; } #endif /* CONFIG_CGROUPS */ From patchwork Fri Sep 22 11:28:40 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Yafang Shao X-Patchwork-Id: 13395581 X-Patchwork-Delegate: bpf@iogearbox.net Received: from lindbergh.monkeyblade.net (lindbergh.monkeyblade.net [23.128.96.19]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 37A131F18C for ; Fri, 22 Sep 2023 11:29:11 +0000 (UTC) Received: from mail-ot1-x331.google.com (mail-ot1-x331.google.com [IPv6:2607:f8b0:4864:20::331]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 578FC180; Fri, 22 Sep 2023 04:29:09 -0700 (PDT) Received: by mail-ot1-x331.google.com with SMTP id 46e09a7af769-6bd0a0a6766so1187299a34.2; Fri, 22 Sep 2023 04:29:09 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20230601; t=1695382148; x=1695986948; darn=vger.kernel.org; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=DtklZII5h1uoYCXRNe+Jd8s2AyutOaiGxtLZvkF5nNg=; b=clBXYtOSYXNPBpTEclQPQQ8+t6XK2x15ADB9Hsb8LeW8sNnP+2YVdHFGADMOuT6HJB NXiOG2WQRXMTA388pVk5YJxQdp/3XBIExf5Ycn9KcvdlM+WwBPI9JcsoesenTemAeb7b oEl7xz/eBMHtbVOeHLIOdN5MuuUCaIiaI+v+0TVx7z9H3C0Uyz5+163dglxeIxLtnA6r Vd0YbH66a/qHa1K/ofpA3fHfe755Ow4a4614XmtTA7Ul6U9Q1mZk5eyya/2NoY9uecjb UX55yMvvL4O8IW/sgRTTSh+xl2kJzccSD0G/qgAm2jF2B5eF9ioaLzoxspOgfdZiH4dY EO5g== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1695382148; x=1695986948; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=DtklZII5h1uoYCXRNe+Jd8s2AyutOaiGxtLZvkF5nNg=; b=VzgURzcvG7gwdaCmf/kyIh2foBjnWe3x9RP02NLBzSbANxVKXTIYF0pCuWUUL8AnCd WX2HVyc5C/a7oYu+sCIBss2zcyD4sIkGDTl23IoH3IzSRRu0xQco0KM/fr530iDLfkbH 6WeCI0F5vOqHvPq3vzytXzE3ceWNS7Dx4FO3Hh+2IB7ZCJUbpQ3oMPYr6uKVL4B38mtf jL4j+WpAFtwWqYpkMnkmJIQ5zvLNj2AOBZJc6vyKUtwYL9v9ByedgQXM8XbZnuqxzTlJ iU6jI0javArgfjNV8g3FPeyOGWlU5scEpQ3394P7I1CzZ7pDFzqI4Wh1+UZCdGbR8Xfg Wv8Q== X-Gm-Message-State: AOJu0YxRD8MC6wcow7CQfsyUheWc1vRnjpdS/sFjz1YcxDT+Rg/sF/wK r79Z1HVtfTtHx8PWG/fK+r4= X-Google-Smtp-Source: AGHT+IHJndWMEJEjR9wUNcrfPZmW/KKZIJLe/nfxeG/OgTO8j6z/eoHjsDAvjGMMlNQHRNZ5OI/uTg== X-Received: by 2002:a05:6358:7247:b0:135:99fa:5040 with SMTP id i7-20020a056358724700b0013599fa5040mr11290945rwa.12.1695382148452; Fri, 22 Sep 2023 04:29:08 -0700 (PDT) Received: from vultr.guest ([149.28.194.201]) by smtp.gmail.com with ESMTPSA id v16-20020aa78090000000b00690beda6987sm2973493pff.77.2023.09.22.04.29.04 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Fri, 22 
Sep 2023 04:29:05 -0700 (PDT) From: Yafang Shao To: ast@kernel.org, daniel@iogearbox.net, john.fastabend@gmail.com, andrii@kernel.org, martin.lau@linux.dev, song@kernel.org, yonghong.song@linux.dev, kpsingh@kernel.org, sdf@google.com, haoluo@google.com, jolsa@kernel.org, tj@kernel.org, lizefan.x@bytedance.com, hannes@cmpxchg.org, yosryahmed@google.com, mkoutny@suse.com Cc: cgroups@vger.kernel.org, bpf@vger.kernel.org, Yafang Shao Subject: [RFC PATCH bpf-next 2/8] cgroup: Enable task_under_cgroup_hierarchy() on cgroup1 Date: Fri, 22 Sep 2023 11:28:40 +0000 Message-Id: <20230922112846.4265-3-laoar.shao@gmail.com> X-Mailer: git-send-email 2.39.3 In-Reply-To: <20230922112846.4265-1-laoar.shao@gmail.com> References: <20230922112846.4265-1-laoar.shao@gmail.com> Precedence: bulk X-Mailing-List: bpf@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-Spam-Status: No, score=-2.1 required=5.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,DKIM_VALID_AU,DKIM_VALID_EF,FREEMAIL_FROM, RCVD_IN_DNSWL_BLOCKED,SPF_HELO_NONE,SPF_PASS autolearn=ham autolearn_force=no version=3.4.6 X-Spam-Checker-Version: SpamAssassin 3.4.6 (2021-04-09) on lindbergh.monkeyblade.net X-Patchwork-Delegate: bpf@iogearbox.net X-Patchwork-State: RFC At present, the task_under_cgroup_hierarchy() function serves the purpose of determining whether a task resides exclusively within a cgroup2 hierarchy. However, considering the ongoing prevalence of cgroup1 and the substantial effort and time required to migrate all cgroup1-based applications to the cgroup2 framework, it becomes beneficial to make a minor adjustment that expands its functionality to encompass cgroup1 as well. By implementing this modification, we will gain the capability to easily confirm a task's cgroup membership within BPF programs. For example, we can effortlessly verify if a task belongs to a cgroup1 directory, such as '/sys/fs/cgroup/cpu,cpuacct/kubepods/', or '/sys/fs/cgroup/cpu,cpuacct/system.slice/'. Signed-off-by: Yafang Shao --- include/linux/cgroup-defs.h | 20 ++++++++++++++++++++ include/linux/cgroup.h | 30 ++++++++++++++++++++++++++---- kernel/cgroup/cgroup-internal.h | 20 -------------------- 3 files changed, 46 insertions(+), 24 deletions(-) diff --git a/include/linux/cgroup-defs.h b/include/linux/cgroup-defs.h index f1b3151ac30b..5795825a04ff 100644 --- a/include/linux/cgroup-defs.h +++ b/include/linux/cgroup-defs.h @@ -299,6 +299,26 @@ struct css_set { struct rcu_head rcu_head; }; +/* + * A cgroup can be associated with multiple css_sets as different tasks may + * belong to different cgroups on different hierarchies. In the other + * direction, a css_set is naturally associated with multiple cgroups. + * This M:N relationship is represented by the following link structure + * which exists for each association and allows traversing the associations + * from both sides. 
+ */ +struct cgrp_cset_link { + /* the cgroup and css_set this link associates */ + struct cgroup *cgrp; + struct css_set *cset; + + /* list of cgrp_cset_links anchored at cgrp->cset_links */ + struct list_head cset_link; + + /* list of cgrp_cset_links anchored at css_set->cgrp_links */ + struct list_head cgrp_link; +}; + struct cgroup_base_stat { struct task_cputime cputime; diff --git a/include/linux/cgroup.h b/include/linux/cgroup.h index b307013b9c6c..e16cfb98b44c 100644 --- a/include/linux/cgroup.h +++ b/include/linux/cgroup.h @@ -387,8 +387,8 @@ static inline void cgroup_unlock(void) * The caller can also specify additional allowed conditions via @__c, such * as locks used during the cgroup_subsys::attach() methods. */ -#ifdef CONFIG_PROVE_RCU extern spinlock_t css_set_lock; +#ifdef CONFIG_PROVE_RCU #define task_css_set_check(task, __c) \ rcu_dereference_check((task)->cgroups, \ rcu_read_lock_sched_held() || \ @@ -543,15 +543,37 @@ static inline struct cgroup *cgroup_ancestor(struct cgroup *cgrp, * @ancestor: possible ancestor of @task's cgroup * * Tests whether @task's default cgroup hierarchy is a descendant of @ancestor. - * It follows all the same rules as cgroup_is_descendant, and only applies - * to the default hierarchy. + * It follows all the same rules as cgroup_is_descendant. */ static inline bool task_under_cgroup_hierarchy(struct task_struct *task, struct cgroup *ancestor) { struct css_set *cset = task_css_set(task); + struct cgrp_cset_link *link; + struct cgroup *cgrp = NULL; + bool ret = false; + + if (ancestor->root == &cgrp_dfl_root) + return cgroup_is_descendant(cset->dfl_cgrp, ancestor); + + if (cset == &init_css_set) + return ancestor == &ancestor->root->cgrp; + + spin_lock_irq(&css_set_lock); + list_for_each_entry(link, &cset->cgrp_links, cgrp_link) { + struct cgroup *c = link->cgrp; + + if (c->root == ancestor->root) { + cgrp = c; + break; + } + } + spin_unlock_irq(&css_set_lock); - return cgroup_is_descendant(cset->dfl_cgrp, ancestor); + WARN_ON_ONCE(!cgrp); + if (cgroup_is_descendant(cgrp, ancestor)) + ret = true; + return ret; } /* no synchronization, the result can only be used as a hint */ diff --git a/kernel/cgroup/cgroup-internal.h b/kernel/cgroup/cgroup-internal.h index c56071f150f2..620c60c9daa3 100644 --- a/kernel/cgroup/cgroup-internal.h +++ b/kernel/cgroup/cgroup-internal.h @@ -83,26 +83,6 @@ struct cgroup_file_ctx { } procs1; }; -/* - * A cgroup can be associated with multiple css_sets as different tasks may - * belong to different cgroups on different hierarchies. In the other - * direction, a css_set is naturally associated with multiple cgroups. - * This M:N relationship is represented by the following link structure - * which exists for each association and allows traversing the associations - * from both sides. 
- */ -struct cgrp_cset_link { - /* the cgroup and css_set this link associates */ - struct cgroup *cgrp; - struct css_set *cset; - - /* list of cgrp_cset_links anchored at cgrp->cset_links */ - struct list_head cset_link; - - /* list of cgrp_cset_links anchored at css_set->cgrp_links */ - struct list_head cgrp_link; -}; - /* used to track tasks and csets during migration */ struct cgroup_taskset { /* the src and dst cset list running through cset->mg_node */ From patchwork Fri Sep 22 11:28:41 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Yafang Shao X-Patchwork-Id: 13395582 X-Patchwork-Delegate: bpf@iogearbox.net Received: from lindbergh.monkeyblade.net (lindbergh.monkeyblade.net [23.128.96.19]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 1FF171F193 for ; Fri, 22 Sep 2023 11:29:15 +0000 (UTC) Received: from mail-pf1-x42f.google.com (mail-pf1-x42f.google.com [IPv6:2607:f8b0:4864:20::42f]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 048AE91; Fri, 22 Sep 2023 04:29:11 -0700 (PDT) Received: by mail-pf1-x42f.google.com with SMTP id d2e1a72fcca58-690b7cb71aeso1651276b3a.0; Fri, 22 Sep 2023 04:29:10 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20230601; t=1695382150; x=1695986950; darn=vger.kernel.org; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=IHx9zaz06OEXWU+TLyQGeUCpNolCvM+zvySDcLZVKEk=; b=e6XHujtU2Xmv56Jn0+m+2DYhpGsthOUu9Gte9bZzvYYRAIDR0u29uGpH//SACfBzLI MQrgx5QZnM8YL3OQxZvjXzYscm18DEr9t5cuO5dih6pUDkW11ejfi2nol1lrvKNWHpXK dkrvX5fCWdvVh68V2SZJhBoPunsf8YH2tpmxfFQBT+N6R9S1p1TqBdrCddg+9m0JT14J lqW1aBfBnS2iXB3y+/g0zDZSBdosQQlIDNLx6rcVP7MvIGEQ6oyuysubH9rhlMUGwB5T z7UxooOhIwl+Cs+/ZCc5VEUgUxAZmNls+tsHfLt2PARq5Tjp+GElRAZZbCUDawbdYtj7 zcuA== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1695382150; x=1695986950; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=IHx9zaz06OEXWU+TLyQGeUCpNolCvM+zvySDcLZVKEk=; b=WJkutIzUJwdk5ItVg0zT1Khnx5B6GLnYziRMzn4ZoBkHhx1Y5kMTONEF+nmPMAhU3U yHlW81QreVALCe8s2OGZPAF7lKhwYbUer7Aeq3cbCtcxbvPf2g4nICoOkkUWsHyoB3PE 2n70nNHpe5ghLCaFdSmjpui8RSugqT7rMqyL2xyNYSGeu5aSef+GAe4n3xiR/e51xyy0 7s827xVmolNob4x7zRHKZW6Bdt99nl6xbpA8RMjcJUFi7x8rgC6q6i3psmNiT8S0X9Sy 5iDrMwPyyAUlcf0SOpYSLQkci6m6Fyt3Scb4Qboc97TeEf+b16+5RAJ6vQspdnpocpgI bx5g== X-Gm-Message-State: AOJu0YxNmM171ngABFTfuxFeMqa7JbmXkhGyGn4kHLeUdQTR8077u2lz SCaDLFPSiSDXOttGtyYyKxc= X-Google-Smtp-Source: AGHT+IFmdHkwv++M6SDNQlmrcG9FGrpSN70P30Er1/YnrdybTEQ90BaY94ATOGJvWNJaZSPWlHimpQ== X-Received: by 2002:a05:6a20:258f:b0:134:73f6:5832 with SMTP id k15-20020a056a20258f00b0013473f65832mr3745501pzd.16.1695382150344; Fri, 22 Sep 2023 04:29:10 -0700 (PDT) Received: from vultr.guest ([149.28.194.201]) by smtp.gmail.com with ESMTPSA id v16-20020aa78090000000b00690beda6987sm2973493pff.77.2023.09.22.04.29.08 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Fri, 22 Sep 2023 04:29:09 -0700 (PDT) From: Yafang Shao To: ast@kernel.org, daniel@iogearbox.net, john.fastabend@gmail.com, andrii@kernel.org, martin.lau@linux.dev, song@kernel.org, yonghong.song@linux.dev, kpsingh@kernel.org, sdf@google.com, haoluo@google.com, jolsa@kernel.org, 
tj@kernel.org, lizefan.x@bytedance.com, hannes@cmpxchg.org, yosryahmed@google.com, mkoutny@suse.com Cc: cgroups@vger.kernel.org, bpf@vger.kernel.org, Yafang Shao Subject: [RFC PATCH bpf-next 3/8] cgroup: Add cgroup_get_from_id_within_subsys() Date: Fri, 22 Sep 2023 11:28:41 +0000 Message-Id: <20230922112846.4265-4-laoar.shao@gmail.com> X-Mailer: git-send-email 2.39.3 In-Reply-To: <20230922112846.4265-1-laoar.shao@gmail.com> References: <20230922112846.4265-1-laoar.shao@gmail.com> Precedence: bulk X-Mailing-List: bpf@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-Spam-Status: No, score=-2.1 required=5.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,DKIM_VALID_AU,DKIM_VALID_EF,FREEMAIL_FROM, RCVD_IN_DNSWL_BLOCKED,SPF_HELO_NONE,SPF_PASS autolearn=ham autolearn_force=no version=3.4.6 X-Spam-Checker-Version: SpamAssassin 3.4.6 (2021-04-09) on lindbergh.monkeyblade.net X-Patchwork-Delegate: bpf@iogearbox.net X-Patchwork-State: RFC Introduce a new helper function to retrieve the cgroup associated with a specific cgroup ID within a particular subsystem. Signed-off-by: Yafang Shao --- include/linux/cgroup.h | 1 + kernel/cgroup/cgroup.c | 32 +++++++++++++++++++++++++++----- 2 files changed, 28 insertions(+), 5 deletions(-) diff --git a/include/linux/cgroup.h b/include/linux/cgroup.h index e16cfb98b44c..9f7616cbf710 100644 --- a/include/linux/cgroup.h +++ b/include/linux/cgroup.h @@ -656,6 +656,7 @@ static inline void cgroup_kthread_ready(void) void cgroup_path_from_kernfs_id(u64 id, char *buf, size_t buflen); struct cgroup *cgroup_get_from_id(u64 id); +struct cgroup *cgroup_get_from_id_within_subsys(u64 cgid, int ssid); #else /* !CONFIG_CGROUPS */ struct cgroup_subsys_state; diff --git a/kernel/cgroup/cgroup.c b/kernel/cgroup/cgroup.c index 1fb7f562289d..d30a62eed14c 100644 --- a/kernel/cgroup/cgroup.c +++ b/kernel/cgroup/cgroup.c @@ -6195,17 +6195,28 @@ void cgroup_path_from_kernfs_id(u64 id, char *buf, size_t buflen) } /* - * cgroup_get_from_id : get the cgroup associated with cgroup id - * @id: cgroup id + * cgroup_get_from_id_within_subsys - get the cgroup associated with cgroup id + * within specific subsystem + * @cgid: cgroup id + * @ssid: cgroup subsystem id, -1 for cgroup default tree * On success return the cgrp or ERR_PTR on failure * Only cgroups within current task's cgroup NS are valid. 
*/ -struct cgroup *cgroup_get_from_id(u64 id) +struct cgroup *cgroup_get_from_id_within_subsys(u64 cgid, int ssid) { + struct cgroup_root *root; struct kernfs_node *kn; - struct cgroup *cgrp, *root_cgrp; + struct cgroup *cgrp; - kn = kernfs_find_and_get_node_by_id(cgrp_dfl_root.kf_root, id); + if (ssid == -1) { + root = &cgrp_dfl_root; + } else { + if (ssid >= CGROUP_SUBSYS_COUNT) + return ERR_PTR(-EINVAL); + root = cgroup_subsys[ssid]->root; + } + + kn = kernfs_find_and_get_node_by_id(root->kf_root, cgid); if (!kn) return ERR_PTR(-ENOENT); @@ -6226,6 +6237,17 @@ struct cgroup *cgroup_get_from_id(u64 id) if (!cgrp) return ERR_PTR(-ENOENT); + return cgrp; +} + +struct cgroup *cgroup_get_from_id(u64 id) +{ + struct cgroup *root_cgrp, *cgrp; + + cgrp = cgroup_get_from_id_within_subsys(id, -1); + if (IS_ERR(cgrp)) + return cgrp; + root_cgrp = current_cgns_cgroup_dfl(); if (!cgroup_is_descendant(cgrp, root_cgrp)) { cgroup_put(cgrp); From patchwork Fri Sep 22 11:28:42 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Yafang Shao X-Patchwork-Id: 13395583 X-Patchwork-Delegate: bpf@iogearbox.net Received: from lindbergh.monkeyblade.net (lindbergh.monkeyblade.net [23.128.96.19]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id CCAE31F196 for ; Fri, 22 Sep 2023 11:29:15 +0000 (UTC) Received: from mail-pf1-x433.google.com (mail-pf1-x433.google.com [IPv6:2607:f8b0:4864:20::433]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 1D655139; Fri, 22 Sep 2023 04:29:13 -0700 (PDT) Received: by mail-pf1-x433.google.com with SMTP id d2e1a72fcca58-690b7cb71aeso1651311b3a.0; Fri, 22 Sep 2023 04:29:13 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20230601; t=1695382152; x=1695986952; darn=vger.kernel.org; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=+XT7WoTwvT3Nj7+0W/TJCIkVvjE4LDdWtuhMgBEm65Q=; b=KqduS1WQ3tt/XGwC/26Nnm/PIg1S8yuPI7ISaGucHojbIEBJwigNaj1k7WHB7GWPGp k46upGiZet85LZaaHB6RRt0cipiPiaYNOQKBkUBqgsN+stPDDSfjGQAHGoOUuDiUIPAN NIyJ5wCdH/mN3EtsLhDmD0n4IReFd/psh3BsZn6o0xfzfKzfHGX0f1goutjOIS3A8UKb HB+DL0svNVJCC4ripMfbxH5jJiWDjlms/jvbu8nR3unU+Q5x+qV/tH4wxqtQMdABGrHd wXOiqLYWoFCsInAU/L86sSQL6EtzD8en9BQdneGWyIk0SCn/mMsUI+X/BofCHO94Pc47 A6FA== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1695382152; x=1695986952; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=+XT7WoTwvT3Nj7+0W/TJCIkVvjE4LDdWtuhMgBEm65Q=; b=s7wTdRwLBxxO/IgRpSPlLiu8rOPbnw3Nml0GocMsnNPz5Qkc5qMHLSjlEisUFpdla6 ClFNSFcE0Fa66fv2zcjLClih1oN/9YWMUMb+DAHzQK4NEwVSohm008IoEI+xYyvVgeTA IA5lc562wyTeMeXxNbP0at3S+jUrb7TIRYhmxDu1sQbdLBy26FtwZZOqLK7+anWaZhqp N1iDsiXH5BL8GwD9TZusuHTMLQflrn9zjrfXY4GcmgLK0arm08l0tDIDHYduH1MNdIs6 GdbqDVs7kuKM8tIcsPDvEMsqe3zz9eHbeXC2Yx63chTzn9TyMZNVV/2rzu0BM8GzGkmk 3hoQ== X-Gm-Message-State: AOJu0YyxiaiAhV2Btztdv3mannx6vpkfrlxqQyStBESfn25nYgzGzM1W uRPh6reRuNO7MNzMMgRAoHs= X-Google-Smtp-Source: AGHT+IEid2y4WfFDIqcd3beU088Pykw/qGeUvYEDMfDDP6X8AncVjx4AkGmGv+ViCPJ95sTYM6DVEQ== X-Received: by 2002:a05:6a20:4293:b0:125:3445:8af0 with SMTP id o19-20020a056a20429300b0012534458af0mr3442783pzj.7.1695382152506; Fri, 22 Sep 2023 04:29:12 -0700 (PDT) Received: from 
vultr.guest ([149.28.194.201]) by smtp.gmail.com with ESMTPSA id v16-20020aa78090000000b00690beda6987sm2973493pff.77.2023.09.22.04.29.10 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Fri, 22 Sep 2023 04:29:11 -0700 (PDT) From: Yafang Shao To: ast@kernel.org, daniel@iogearbox.net, john.fastabend@gmail.com, andrii@kernel.org, martin.lau@linux.dev, song@kernel.org, yonghong.song@linux.dev, kpsingh@kernel.org, sdf@google.com, haoluo@google.com, jolsa@kernel.org, tj@kernel.org, lizefan.x@bytedance.com, hannes@cmpxchg.org, yosryahmed@google.com, mkoutny@suse.com Cc: cgroups@vger.kernel.org, bpf@vger.kernel.org, Yafang Shao Subject: [RFC PATCH bpf-next 4/8] bpf: Add new kfuncs support for cgroup controller Date: Fri, 22 Sep 2023 11:28:42 +0000 Message-Id: <20230922112846.4265-5-laoar.shao@gmail.com> X-Mailer: git-send-email 2.39.3 In-Reply-To: <20230922112846.4265-1-laoar.shao@gmail.com> References: <20230922112846.4265-1-laoar.shao@gmail.com> Precedence: bulk X-Mailing-List: bpf@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-Spam-Status: No, score=-2.1 required=5.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,DKIM_VALID_AU,DKIM_VALID_EF,FREEMAIL_FROM, RCVD_IN_DNSWL_BLOCKED,SPF_HELO_NONE,SPF_PASS autolearn=ham autolearn_force=no version=3.4.6 X-Spam-Checker-Version: SpamAssassin 3.4.6 (2021-04-09) on lindbergh.monkeyblade.net X-Patchwork-Delegate: bpf@iogearbox.net X-Patchwork-State: RFC Introducing new kfuncs: - bpf_cgroup_id_from_task_within_controller Retrieves the cgroup ID from a task within a specific cgroup controller. - bpf_cgroup_acquire_from_id_within_controller Acquires the cgroup from a cgroup ID within a specific cgroup controller. - bpf_cgroup_ancestor_id_from_task_within_controller Retrieves the ancestor cgroup ID from a task within a specific cgroup controller. These functions eliminate the need to consider cgroup hierarchies, regardless of whether they involve cgroup1 or cgroup2. Signed-off-by: Yafang Shao --- kernel/bpf/helpers.c | 70 ++++++++++++++++++++++++++++++++++++++++++++ 1 file changed, 70 insertions(+) diff --git a/kernel/bpf/helpers.c b/kernel/bpf/helpers.c index bb521b181cc3..1316b5fda349 100644 --- a/kernel/bpf/helpers.c +++ b/kernel/bpf/helpers.c @@ -2219,6 +2219,73 @@ __bpf_kfunc long bpf_task_under_cgroup(struct task_struct *task, rcu_read_unlock(); return ret; } + +/** + * bpf_cgroup_id_from_task_within_controller - To get the associated cgroup_id from + * a task within a specific cgroup controller. + * @task: The target task + * @ssid: The id of cgroup controller, e.g. cpu_cgrp_id, memory_cgrp_id and etc. + */ +__bpf_kfunc u64 bpf_cgroup_id_from_task_within_controller(struct task_struct *task, int ssid) +{ + struct cgroup *cgroup; + int id = 0; + + rcu_read_lock(); + cgroup = task_cgroup(task, ssid); + if (!cgroup) + goto out; + id = cgroup_id(cgroup); + +out: + rcu_read_unlock(); + return id; +} + +/** + * bpf_cgroup_id_from_task_within_controller - To get the associated cgroup_id from + * a task within a specific cgroup controller. + * @task: The target task + * @ssid: The id of cgroup subsystem, e.g. cpu_cgrp_id, memory_cgrp_id and etc. 
+ * @level: The level of ancestor to look up + */ +__bpf_kfunc u64 bpf_cgroup_ancestor_id_from_task_within_controller(struct task_struct *task, + int ssid, int level) +{ + struct cgroup *cgrp, *ancestor; + int id = 0; + + rcu_read_lock(); + cgrp = task_cgroup(task, ssid); + if (!cgrp) + goto out; + ancestor = cgroup_ancestor(cgrp, level); + if (ancestor) + id = cgroup_id(ancestor); + +out: + rcu_read_unlock(); + return id; +} + +/** + * bpf_cgroup_acquire_from_id_within_controller - To acquire the cgroup from a + * cgroup id within specific cgroup controller. A cgroup acquired by this kfunc + * which is not stored in a map as a kptr, must be released by calling + * bpf_cgroup_release(). + * @cgid: cgroup id + * @ssid: The id of a cgroup controller, e.g. cpu_cgrp_id, memory_cgrp_id and etc. + */ +__bpf_kfunc struct cgroup *bpf_cgroup_acquire_from_id_within_controller(u64 cgid, int ssid) +{ + struct cgroup *cgrp; + + cgrp = cgroup_get_from_id_within_subsys(cgid, ssid); + if (IS_ERR(cgrp)) + return NULL; + return cgrp; +} + #endif /* CONFIG_CGROUPS */ /** @@ -2525,6 +2592,9 @@ BTF_ID_FLAGS(func, bpf_cgroup_release, KF_RELEASE) BTF_ID_FLAGS(func, bpf_cgroup_ancestor, KF_ACQUIRE | KF_RCU | KF_RET_NULL) BTF_ID_FLAGS(func, bpf_cgroup_from_id, KF_ACQUIRE | KF_RET_NULL) BTF_ID_FLAGS(func, bpf_task_under_cgroup, KF_RCU) +BTF_ID_FLAGS(func, bpf_cgroup_id_from_task_within_controller) +BTF_ID_FLAGS(func, bpf_cgroup_ancestor_id_from_task_within_controller) +BTF_ID_FLAGS(func, bpf_cgroup_acquire_from_id_within_controller, KF_ACQUIRE | KF_RET_NULL) #endif BTF_ID_FLAGS(func, bpf_task_from_pid, KF_ACQUIRE | KF_RET_NULL) BTF_ID_FLAGS(func, bpf_throw) From patchwork Fri Sep 22 11:28:43 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Yafang Shao X-Patchwork-Id: 13395584 X-Patchwork-Delegate: bpf@iogearbox.net Received: from lindbergh.monkeyblade.net (lindbergh.monkeyblade.net [23.128.96.19]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 3D5181F191 for ; Fri, 22 Sep 2023 11:29:16 +0000 (UTC) Received: from mail-pf1-x42d.google.com (mail-pf1-x42d.google.com [IPv6:2607:f8b0:4864:20::42d]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 5190D197; Fri, 22 Sep 2023 04:29:15 -0700 (PDT) Received: by mail-pf1-x42d.google.com with SMTP id d2e1a72fcca58-690d8fb3b7eso1852192b3a.1; Fri, 22 Sep 2023 04:29:15 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20230601; t=1695382155; x=1695986955; darn=vger.kernel.org; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=YxmghcOcb4Lv1mKG5g6exWWWkajf/SPl1xKizlQXNEE=; b=C1yvSKtcf8b8zMSnHslJ+/+kgpI+4RtJkLSPcVu3s31pgu2yciBb853YLfg8Lfb6rU MkWmrhENjSgJgfisqGTO73tewSBuTZTAfI8gtZ54IHXUuo7lhUoIykDbzUleUjAJ5Xd7 yMbPu2xXEF11kqSVobl1JcmVSd3dt+ubrbCSlCKL066r8hPFEA+fz4MpW8kFSYxs3jVV ZJZOlccuxf1zBb1t5E03PzjrwfMUE/j26fJr1Xt05lEVd1Mk25A18rjs5eraYQBQRIZN YDUujeAcVrW8NpsMp2uCnTfRKy+XTnQudb+96jgTI6b8ScJVklO5zbE4UBMj2b/dQRHM CQpQ== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1695382155; x=1695986955; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=YxmghcOcb4Lv1mKG5g6exWWWkajf/SPl1xKizlQXNEE=; 
b=BcObpHEL0IRoPN4qNbm4V0JUj9BrQsPpBPN56gQNsVs2haYu61VVvq4PWgHWUFpZCX wodMEFT+Hzio1halxIsEdJ8QfXu/5Rbpfm4Iuo9hzk3UIXuyfxb2tfFctBO/62FDZkl9 h0jlOVJsOLjuNRtUcSpvam0wVo9HFxRGjSga126TYrCj1LDXRRmqheZycHKzbKMqAZY8 ZRzU/DqA/sQSD7zyK8O7kA3vn0lf4MlawO6fvqRhpgBTll+c4MQM5xqJZB0lKEc+eYtR A8NFrcAWb4zuwa0eaHntmAENOlk/uUiuuIALvtFEHL8LUWnkus7i26YtSkhoJ3qycBeM xv5Q== X-Gm-Message-State: AOJu0Yznj6NWuqHCdtsPatY8g4fxUKSnAyJv3y4Mj3MKwUpexU3L/qc5 JxZJ8EkDjW3cSLyZrIuOOfo= X-Google-Smtp-Source: AGHT+IHukcKWZ5HdAMd6ckgrDTLtw6m6pdu5olT6UOSJLosEq36YENbyW3y6Pav9V5xIX06Om7jmRA== X-Received: by 2002:a05:6a20:3ca3:b0:154:a9bc:12d0 with SMTP id b35-20020a056a203ca300b00154a9bc12d0mr9562321pzj.13.1695382154720; Fri, 22 Sep 2023 04:29:14 -0700 (PDT) Received: from vultr.guest ([149.28.194.201]) by smtp.gmail.com with ESMTPSA id v16-20020aa78090000000b00690beda6987sm2973493pff.77.2023.09.22.04.29.12 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Fri, 22 Sep 2023 04:29:13 -0700 (PDT) From: Yafang Shao To: ast@kernel.org, daniel@iogearbox.net, john.fastabend@gmail.com, andrii@kernel.org, martin.lau@linux.dev, song@kernel.org, yonghong.song@linux.dev, kpsingh@kernel.org, sdf@google.com, haoluo@google.com, jolsa@kernel.org, tj@kernel.org, lizefan.x@bytedance.com, hannes@cmpxchg.org, yosryahmed@google.com, mkoutny@suse.com Cc: cgroups@vger.kernel.org, bpf@vger.kernel.org, Yafang Shao Subject: [RFC PATCH bpf-next 5/8] selftests/bpf: Fix issues in setup_classid_environment() Date: Fri, 22 Sep 2023 11:28:43 +0000 Message-Id: <20230922112846.4265-6-laoar.shao@gmail.com> X-Mailer: git-send-email 2.39.3 In-Reply-To: <20230922112846.4265-1-laoar.shao@gmail.com> References: <20230922112846.4265-1-laoar.shao@gmail.com> Precedence: bulk X-Mailing-List: bpf@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-Spam-Status: No, score=-2.1 required=5.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,DKIM_VALID_AU,DKIM_VALID_EF,FREEMAIL_FROM, RCVD_IN_DNSWL_BLOCKED,SPF_HELO_NONE,SPF_PASS autolearn=ham autolearn_force=no version=3.4.6 X-Spam-Checker-Version: SpamAssassin 3.4.6 (2021-04-09) on lindbergh.monkeyblade.net X-Patchwork-Delegate: bpf@iogearbox.net X-Patchwork-State: RFC If the net_cls subsystem is already mounted, attempting to mount it again in setup_classid_environment() will result in a failure with the error code EBUSY. Despite this, tmpfs will have been successfully mounted at /sys/fs/cgroup/net_cls. Consequently, the /sys/fs/cgroup/net_cls directory will be empty, causing subsequent setup operations to fail. 
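To make this recoverable, the change below treats EBUSY specially: it removes
the empty net_cls directory and unmounts the cgroup base mount the helper had
just set up, so the remaining setup can proceed against the pre-existing
net_cls hierarchy.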
Here's an error log excerpt illustrating the issue when net_cls has already been mounted at /sys/fs/cgroup/net_cls prior to running setup_classid_environment(): - Before that change $ tools/testing/selftests/bpf/test_progs --name=cgroup_v1v2 test_cgroup_v1v2:PASS:server_fd 0 nsec test_cgroup_v1v2:PASS:client_fd 0 nsec test_cgroup_v1v2:PASS:cgroup_fd 0 nsec test_cgroup_v1v2:PASS:server_fd 0 nsec run_test:PASS:skel_open 0 nsec run_test:PASS:prog_attach 0 nsec test_cgroup_v1v2:PASS:cgroup-v2-only 0 nsec (cgroup_helpers.c:248: errno: No such file or directory) Opening Cgroup Procs: /sys/fs/cgroup/net_cls/cgroup.procs (cgroup_helpers.c:540: errno: No such file or directory) Opening cgroup classid: /sys/fs/cgroup/net_cls/cgroup-test-work-dir/net_cls.classid run_test:PASS:skel_open 0 nsec run_test:PASS:prog_attach 0 nsec (cgroup_helpers.c:248: errno: No such file or directory) Opening Cgroup Procs: /sys/fs/cgroup/net_cls/cgroup-test-work-dir/cgroup.procs run_test:FAIL:join_classid unexpected error: 1 (errno 2) test_cgroup_v1v2:FAIL:cgroup-v1v2 unexpected error: -1 (errno 2) (cgroup_helpers.c:248: errno: No such file or directory) Opening Cgroup Procs: /sys/fs/cgroup/net_cls/cgroup.procs #44 cgroup_v1v2:FAIL Summary: 0/0 PASSED, 0 SKIPPED, 1 FAILED - After that change $ tools/testing/selftests/bpf/test_progs --name=cgroup_v1v2 #44 cgroup_v1v2:OK Summary: 1/0 PASSED, 0 SKIPPED, 0 FAILED Signed-off-by: Yafang Shao --- tools/testing/selftests/bpf/cgroup_helpers.c | 18 ++++++++++++++---- 1 file changed, 14 insertions(+), 4 deletions(-) diff --git a/tools/testing/selftests/bpf/cgroup_helpers.c b/tools/testing/selftests/bpf/cgroup_helpers.c index 24ba56d42f2d..9c36d1db9f94 100644 --- a/tools/testing/selftests/bpf/cgroup_helpers.c +++ b/tools/testing/selftests/bpf/cgroup_helpers.c @@ -518,10 +518,20 @@ int setup_classid_environment(void) return 1; } - if (mount("net_cls", NETCLS_MOUNT_PATH, "cgroup", 0, "net_cls") && - errno != EBUSY) { - log_err("mount cgroup net_cls"); - return 1; + if (mount("net_cls", NETCLS_MOUNT_PATH, "cgroup", 0, "net_cls")) { + if (errno != EBUSY) { + log_err("mount cgroup net_cls"); + return 1; + } + + if (rmdir(NETCLS_MOUNT_PATH)) { + log_err("rmdir cgroup net_cls"); + return 1; + } + if (umount(CGROUP_MOUNT_DFLT)) { + log_err("umount cgroup base"); + return 1; + } } cleanup_classid_environment(); From patchwork Fri Sep 22 11:28:44 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Yafang Shao X-Patchwork-Id: 13395585 X-Patchwork-Delegate: bpf@iogearbox.net Received: from lindbergh.monkeyblade.net (lindbergh.monkeyblade.net [23.128.96.19]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 4EF9B1F928 for ; Fri, 22 Sep 2023 11:29:19 +0000 (UTC) Received: from mail-oi1-x22d.google.com (mail-oi1-x22d.google.com [IPv6:2607:f8b0:4864:20::22d]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id AD579199; Fri, 22 Sep 2023 04:29:17 -0700 (PDT) Received: by mail-oi1-x22d.google.com with SMTP id 5614622812f47-3ae2896974bso68019b6e.0; Fri, 22 Sep 2023 04:29:17 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20230601; t=1695382157; x=1695986957; darn=vger.kernel.org; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=STXdhtnmTI3etV8zPaY3R9oC/tPAoDIEDC13dA2eLMM=; 
b=VJVgdirrilR5NCu94OPVPuUO+rsu6ytGZT1jYdL1t69d2JVdBMrn9jt1C9ESa+odsB 9FQBELLq8rW7mfaaAk5vToLzCZVABoJ5VnTAykiAEHL5IyX78VhOXQu7smTuNFpdYvLf v5tVVGI1pel85vXetGx6c3FBzgHj/0irasYJGWZg1mlZLgRmYze7XDBLcvMJcSf6293B r6AJKgq9dzFphzca9w/xfBX/PURN4CJs5nPEH1sgstEn5LLEvkGcHYpMdrgi+cbdfGRi KjNb0UTacpp0RrlTf+8IWwvCQI5rSLNp6hxur0rWgBxXj7TXABMxXyFaDyeNCUSAiYca jj3w== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1695382157; x=1695986957; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=STXdhtnmTI3etV8zPaY3R9oC/tPAoDIEDC13dA2eLMM=; b=aIlEJUK2HVVnXUALydw/lyA7iXFCFyNUx+flU+lVKKp8Kd9YpitL8ExxVoDRJC7LTm h/+BYmYFFBNycsFQTTBp/1xDPo1MDEgeK9eLX6ebRApSpoFZp6bxLSGnteXEw8JA3YRK qU8lenKxaoCIg8CvlLnUNF2sRZrELWAtB5M12C2WL9QYcc/k6a84IzOrfiW4IFYGxR6V BZV6IDLtAoYGVQPD1S3caUnUNDtDJpxSwPwsQGTNxyWmZOvgXKhAUTGeuR+jTFC8vBcE EEiPBfp4msbOfPRX3X/Gw3lzl8P53ob0QN3deBG/FMUsSNoGHIZaupANldGvZEHCJcv4 GhKg== X-Gm-Message-State: AOJu0YyySoDQucN4AF15YzfsYDhw6bUEXNV1tkvHSDsf6dxNB6I8Be64 fvgrMjmSxz4gOp4O2vWVRKY= X-Google-Smtp-Source: AGHT+IE9nreh70tEqpwmNq6jBs01TxsDmeDiMnloh1NpfA8gxIsoQ6hZs77V3I2AQqB1ByS5qLOdmw== X-Received: by 2002:a05:6808:1291:b0:3ad:f6ad:b9c5 with SMTP id a17-20020a056808129100b003adf6adb9c5mr7844656oiw.59.1695382156931; Fri, 22 Sep 2023 04:29:16 -0700 (PDT) Received: from vultr.guest ([149.28.194.201]) by smtp.gmail.com with ESMTPSA id v16-20020aa78090000000b00690beda6987sm2973493pff.77.2023.09.22.04.29.14 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Fri, 22 Sep 2023 04:29:15 -0700 (PDT) From: Yafang Shao To: ast@kernel.org, daniel@iogearbox.net, john.fastabend@gmail.com, andrii@kernel.org, martin.lau@linux.dev, song@kernel.org, yonghong.song@linux.dev, kpsingh@kernel.org, sdf@google.com, haoluo@google.com, jolsa@kernel.org, tj@kernel.org, lizefan.x@bytedance.com, hannes@cmpxchg.org, yosryahmed@google.com, mkoutny@suse.com Cc: cgroups@vger.kernel.org, bpf@vger.kernel.org, Yafang Shao Subject: [RFC PATCH bpf-next 6/8] selftests/bpf: Add parallel support for classid Date: Fri, 22 Sep 2023 11:28:44 +0000 Message-Id: <20230922112846.4265-7-laoar.shao@gmail.com> X-Mailer: git-send-email 2.39.3 In-Reply-To: <20230922112846.4265-1-laoar.shao@gmail.com> References: <20230922112846.4265-1-laoar.shao@gmail.com> Precedence: bulk X-Mailing-List: bpf@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-Spam-Status: No, score=-2.1 required=5.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,DKIM_VALID_AU,DKIM_VALID_EF,FREEMAIL_FROM, RCVD_IN_DNSWL_BLOCKED,SPF_HELO_NONE,SPF_PASS autolearn=ham autolearn_force=no version=3.4.6 X-Spam-Checker-Version: SpamAssassin 3.4.6 (2021-04-09) on lindbergh.monkeyblade.net X-Patchwork-Delegate: bpf@iogearbox.net X-Patchwork-State: RFC Include the current pid in the classid cgroup path. This way, different testers relying on classid-based configurations will have distinct classid cgroup directories, enabling them to run concurrently. Additionally, we leverage the current pid as the classid, ensuring unique identification. 
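For example, a tester running with pid 12345 will work in
/sys/fs/cgroup/net_cls/cgroup-test-work-dir12345 and write 12345 into that
directory's net_cls.classid, so concurrent testers never share a directory or
a classid value.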
Signed-off-by: Yafang Shao --- tools/testing/selftests/bpf/cgroup_helpers.c | 19 ++++++++++++------- tools/testing/selftests/bpf/cgroup_helpers.h | 2 +- .../selftests/bpf/prog_tests/cgroup_v1v2.c | 2 +- 3 files changed, 14 insertions(+), 9 deletions(-) diff --git a/tools/testing/selftests/bpf/cgroup_helpers.c b/tools/testing/selftests/bpf/cgroup_helpers.c index 9c36d1db9f94..e378fa057757 100644 --- a/tools/testing/selftests/bpf/cgroup_helpers.c +++ b/tools/testing/selftests/bpf/cgroup_helpers.c @@ -45,9 +45,12 @@ #define format_parent_cgroup_path(buf, path) \ format_cgroup_path_pid(buf, path, getppid()) -#define format_classid_path(buf) \ - snprintf(buf, sizeof(buf), "%s%s", NETCLS_MOUNT_PATH, \ - CGROUP_WORK_DIR) +#define format_classid_path_pid(buf, pid) \ + snprintf(buf, sizeof(buf), "%s%s%d", NETCLS_MOUNT_PATH, \ + CGROUP_WORK_DIR, pid) + +#define format_classid_path(buf) \ + format_classid_path_pid(buf, getpid()) static __thread bool cgroup_workdir_mounted; @@ -546,15 +549,17 @@ int setup_classid_environment(void) /** * set_classid() - Set a cgroupv1 net_cls classid - * @id: the numeric classid * - * Writes the passed classid into the cgroup work dir's net_cls.classid + * Writes the classid into the cgroup work dir's net_cls.classid * file in order to later on trigger socket tagging. * + * To make sure different classid testers have different classids, we use + * the current pid as the classid by default. + * * On success, it returns 0, otherwise on failure it returns 1. If there * is a failure, it prints the error to stderr. */ -int set_classid(unsigned int id) +int set_classid(void) { char cgroup_workdir[PATH_MAX - 42]; char cgroup_classid_path[PATH_MAX + 1]; @@ -570,7 +575,7 @@ int set_classid(unsigned int id) return 1; } - if (dprintf(fd, "%u\n", id) < 0) { + if (dprintf(fd, "%u\n", getpid()) < 0) { log_err("Setting cgroup classid"); rc = 1; } diff --git a/tools/testing/selftests/bpf/cgroup_helpers.h b/tools/testing/selftests/bpf/cgroup_helpers.h index 5c2cb9c8b546..92fc41daf4a4 100644 --- a/tools/testing/selftests/bpf/cgroup_helpers.h +++ b/tools/testing/selftests/bpf/cgroup_helpers.h @@ -29,7 +29,7 @@ int setup_cgroup_environment(void); void cleanup_cgroup_environment(void); /* cgroupv1 related */ -int set_classid(unsigned int id); +int set_classid(void); int join_classid(void); int setup_classid_environment(void); diff --git a/tools/testing/selftests/bpf/prog_tests/cgroup_v1v2.c b/tools/testing/selftests/bpf/prog_tests/cgroup_v1v2.c index 9026b42914d3..addf720428f7 100644 --- a/tools/testing/selftests/bpf/prog_tests/cgroup_v1v2.c +++ b/tools/testing/selftests/bpf/prog_tests/cgroup_v1v2.c @@ -71,7 +71,7 @@ void test_cgroup_v1v2(void) } ASSERT_OK(run_test(cgroup_fd, server_fd, false), "cgroup-v2-only"); setup_classid_environment(); - set_classid(42); + set_classid(); ASSERT_OK(run_test(cgroup_fd, server_fd, true), "cgroup-v1v2"); cleanup_classid_environment(); close(server_fd); From patchwork Fri Sep 22 11:28:45 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Yafang Shao X-Patchwork-Id: 13395586 X-Patchwork-Delegate: bpf@iogearbox.net Received: from lindbergh.monkeyblade.net (lindbergh.monkeyblade.net [23.128.96.19]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 5BEBA1F928 for ; Fri, 22 Sep 2023 11:29:21 +0000 (UTC) Received: from mail-oi1-x22b.google.com (mail-oi1-x22b.google.com 
[IPv6:2607:f8b0:4864:20::22b]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 945CECA; Fri, 22 Sep 2023 04:29:19 -0700 (PDT) Received: by mail-oi1-x22b.google.com with SMTP id 5614622812f47-3ade77970a9so1291683b6e.2; Fri, 22 Sep 2023 04:29:19 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20230601; t=1695382159; x=1695986959; darn=vger.kernel.org; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=Fo3tjZPZMMIDp299nbsdxVbUIehDi8P0BDIlhn8r2cE=; b=bqey7blFBsajX1oh/B3VPkSLskhlkVz2shtcVXekEe0x4z86TqTuFoeyJY1DOFYbIF /jXUAV/LVKOzq/fsN8yVNFzSg5oW0ci1oGAuIe5AalvDjsP2vXEKusx4CQAZdS+t8XfV d6QXVfuRy3f9HxYConVLRuoPlO1bkobEJENdX8WFde+8n1IOb+8LFZvFxF3J3RETq6eG RrjWUrjc6nIhatc0MLX2QSaK8+yPgg4bOPGSjSBZG9G8ZLiMhvq4yo9dwr/yy3OxoPYX aVop/U+a4Vq4gvADNZ9q0kRy/9ayRGyR5wlTHDyZyPtXDe+YV3fDFa/Ux7z4gvMSoXgf E+sA== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1695382159; x=1695986959; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=Fo3tjZPZMMIDp299nbsdxVbUIehDi8P0BDIlhn8r2cE=; b=iQ2zYSI1G6HAhWjdLfgz6pLsenbQQdw4PPbfNouN2NpRGP1UiFJ3rjhuIauG/3UjfG r4b4iFS0i5SC9iRHwDS85gJwqj+dTbpnbEee7qiqckpytX7fI5o5965dPpgO3qI5sT55 QbZr1jg31iPh6VwRV38UuuYJbJzl1cW9zgizlTw+7Rnpfgh9UD9cv5U1GqfFSpUxYDaH +IajOF997+gP+djEwSWP1XjwbfQ7zmqq+mQjLXdPKdv0UQjR4pyhREoIVNYx7GvZy/WG HccgahjccTJm+l8zUgRg4p8j/kh9hfado5gNpVvx2rcefEKweUdO96qF0RMVkWbDNq3B WuMA== X-Gm-Message-State: AOJu0YwgzN9BSMlgBbunpB8Y0QUvWdq8CRAI3JUdRCDiprg5FT0qCidm NTyo4Y17h8IQsmUmevI0oZXFkazoGR8T298oiEg= X-Google-Smtp-Source: AGHT+IG0Q660G5/8V7hq1GiOucPozd2atxCB8a3VEnE8GGbUbWNkWTZEC3o3BzoViU2+fA5TJepl9A== X-Received: by 2002:a05:6808:b37:b0:3a1:e85f:33c3 with SMTP id t23-20020a0568080b3700b003a1e85f33c3mr8121749oij.50.1695382158820; Fri, 22 Sep 2023 04:29:18 -0700 (PDT) Received: from vultr.guest ([149.28.194.201]) by smtp.gmail.com with ESMTPSA id v16-20020aa78090000000b00690beda6987sm2973493pff.77.2023.09.22.04.29.17 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Fri, 22 Sep 2023 04:29:18 -0700 (PDT) From: Yafang Shao To: ast@kernel.org, daniel@iogearbox.net, john.fastabend@gmail.com, andrii@kernel.org, martin.lau@linux.dev, song@kernel.org, yonghong.song@linux.dev, kpsingh@kernel.org, sdf@google.com, haoluo@google.com, jolsa@kernel.org, tj@kernel.org, lizefan.x@bytedance.com, hannes@cmpxchg.org, yosryahmed@google.com, mkoutny@suse.com Cc: cgroups@vger.kernel.org, bpf@vger.kernel.org, Yafang Shao Subject: [RFC PATCH bpf-next 7/8] selftests/bpf: Add new cgroup helper get_classid_cgroup_id() Date: Fri, 22 Sep 2023 11:28:45 +0000 Message-Id: <20230922112846.4265-8-laoar.shao@gmail.com> X-Mailer: git-send-email 2.39.3 In-Reply-To: <20230922112846.4265-1-laoar.shao@gmail.com> References: <20230922112846.4265-1-laoar.shao@gmail.com> Precedence: bulk X-Mailing-List: bpf@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-Spam-Status: No, score=-2.1 required=5.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,DKIM_VALID_AU,DKIM_VALID_EF,FREEMAIL_FROM, RCVD_IN_DNSWL_BLOCKED,SPF_HELO_NONE,SPF_PASS autolearn=ham autolearn_force=no version=3.4.6 X-Spam-Checker-Version: SpamAssassin 3.4.6 (2021-04-09) on lindbergh.monkeyblade.net X-Patchwork-Delegate: bpf@iogearbox.net X-Patchwork-State: RFC Introduce a new helper function to retrieve the cgroup ID from a net_cls 
cgroup directory. Signed-off-by: Yafang Shao --- tools/testing/selftests/bpf/cgroup_helpers.c | 28 +++++++++++++++----- tools/testing/selftests/bpf/cgroup_helpers.h | 1 + 2 files changed, 23 insertions(+), 6 deletions(-) diff --git a/tools/testing/selftests/bpf/cgroup_helpers.c b/tools/testing/selftests/bpf/cgroup_helpers.c index e378fa057757..7cb2c9597b8f 100644 --- a/tools/testing/selftests/bpf/cgroup_helpers.c +++ b/tools/testing/selftests/bpf/cgroup_helpers.c @@ -417,26 +417,23 @@ int create_and_get_cgroup(const char *relative_path) } /** - * get_cgroup_id() - Get cgroup id for a particular cgroup path - * @relative_path: The cgroup path, relative to the workdir, to join + * get_cgroup_id_from_path - Get cgroup id for a particular cgroup path + * @cgroup_workdir: The absolute cgroup path * * On success, it returns the cgroup id. On failure it returns 0, * which is an invalid cgroup id. * If there is a failure, it prints the error to stderr. */ -unsigned long long get_cgroup_id(const char *relative_path) +unsigned long long get_cgroup_id_from_path(const char *cgroup_workdir) { int dirfd, err, flags, mount_id, fhsize; union { unsigned long long cgid; unsigned char raw_bytes[8]; } id; - char cgroup_workdir[PATH_MAX + 1]; struct file_handle *fhp, *fhp2; unsigned long long ret = 0; - format_cgroup_path(cgroup_workdir, relative_path); - dirfd = AT_FDCWD; flags = 0; fhsize = sizeof(*fhp); @@ -472,6 +469,14 @@ unsigned long long get_cgroup_id(const char *relative_path) return ret; } +unsigned long long get_cgroup_id(const char *relative_path) +{ + char cgroup_workdir[PATH_MAX + 1]; + + format_cgroup_path(cgroup_workdir, relative_path); + return get_cgroup_id_from_path(cgroup_workdir); +} + int cgroup_setup_and_join(const char *path) { int cg_fd; @@ -617,3 +622,14 @@ void cleanup_classid_environment(void) join_cgroup_from_top(NETCLS_MOUNT_PATH); nftw(cgroup_workdir, nftwfunc, WALK_FD_LIMIT, FTW_DEPTH | FTW_MOUNT); } + +/** + * get_classid_cgroup_id - Get the cgroup id of a net_cls cgroup + */ +unsigned long long get_classid_cgroup_id(void) +{ + char cgroup_workdir[PATH_MAX + 1]; + + format_classid_path(cgroup_workdir); + return get_cgroup_id_from_path(cgroup_workdir); +} diff --git a/tools/testing/selftests/bpf/cgroup_helpers.h b/tools/testing/selftests/bpf/cgroup_helpers.h index 92fc41daf4a4..e71da4ef031b 100644 --- a/tools/testing/selftests/bpf/cgroup_helpers.h +++ b/tools/testing/selftests/bpf/cgroup_helpers.h @@ -31,6 +31,7 @@ void cleanup_cgroup_environment(void); /* cgroupv1 related */ int set_classid(void); int join_classid(void); +unsigned long long get_classid_cgroup_id(void); int setup_classid_environment(void); void cleanup_classid_environment(void); From patchwork Fri Sep 22 11:28:46 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Yafang Shao X-Patchwork-Id: 13395587 X-Patchwork-Delegate: bpf@iogearbox.net Received: from lindbergh.monkeyblade.net (lindbergh.monkeyblade.net [23.128.96.19]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id C23481F16D for ; Fri, 22 Sep 2023 11:29:22 +0000 (UTC) Received: from mail-pf1-x42b.google.com (mail-pf1-x42b.google.com [IPv6:2607:f8b0:4864:20::42b]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 16160192; Fri, 22 Sep 2023 04:29:21 -0700 (PDT) Received: by mail-pf1-x42b.google.com with SMTP id d2e1a72fcca58-691c05bc5aaso1561782b3a.2; Fri, 22 Sep 2023 04:29:21 
-0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20230601; t=1695382160; x=1695986960; darn=vger.kernel.org; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=Un+n8gtPu2H9LbD/gWmNkEsxWWkM8C1lnBzSSmQYUiM=; b=IJGudsSt17I0f0TkZxOSBzk8SJG/ttozDfBKztpBpbs0BiDCUrtBTyhckTrFQY1r2a 1SoCIlA46/vg4U5Ct+CTX5KTn9GnjzZF/hjIKb1DSS8sbJygU4MPjXiihMdbuAJEkcmb /fkIneBqcO0KBG+K2kN1cb5Hv8DHx/oAREm1+pMj0bsWAAl79g+6Cuq8RDdwTihe85lm 1cyiD7L76R+K8OVrNYGB6Bg0faOiSFjU0SPzENkfxjV1p4Akcm5GisIhlcTPuVosnoln egsrOVqOj5GyS/hbBzXsSuElU1iGAaiaJcWbOU1aZAAHfUCdCgPJvJ5y2OV/jbkMMBOn 3uMQ== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1695382160; x=1695986960; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=Un+n8gtPu2H9LbD/gWmNkEsxWWkM8C1lnBzSSmQYUiM=; b=NEfqnMRd/68oBEyLLSF7AjctNXnwpkMpXefZPzkBxdSxnZl68Jd+Bd+K0GaDQoOUky GMmzRsnjj7J9kg6pQW8PTUrMZxOqy8Me6HrO13rWB5fvkMb0vsa36m5vUvvPQpEpI8Su gqgU0hy0ITFtuJKUb8pZkV6KLDPQ5Cg+7SS52kKicnD98tOyEpTDhGinpAHr4cK5OHva SiVhiug6R/Bmdh4cnaZN6Dslfz2QAJN5D+5x0TZ9oKC58I5fxdaNz9ohxt7QnZi98IGW gzNqMDCI5VywFWQ8e1RrX2YuTnUy4IDhiO+0estHau+ih45Mg2twgXvVIBU0hk/zc+Nf rd6g== X-Gm-Message-State: AOJu0YzILDV9ne5VUGPqKEl+mttRG4AFSVIb6kHKp3j8JPPZU9dYMIc4 +ClB7v7RprMNl+TsKxuCzl4= X-Google-Smtp-Source: AGHT+IH3eZlhv6LvJjmd5UZXNQdXaGgqt+GUQPlIBT0xIFRh3ng3xKbjhu7DQ7c4+LzIBT0TWWGuDg== X-Received: by 2002:a05:6a00:2493:b0:682:4c1c:a0fc with SMTP id c19-20020a056a00249300b006824c1ca0fcmr9602277pfv.19.1695382160467; Fri, 22 Sep 2023 04:29:20 -0700 (PDT) Received: from vultr.guest ([149.28.194.201]) by smtp.gmail.com with ESMTPSA id v16-20020aa78090000000b00690beda6987sm2973493pff.77.2023.09.22.04.29.18 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Fri, 22 Sep 2023 04:29:20 -0700 (PDT) From: Yafang Shao To: ast@kernel.org, daniel@iogearbox.net, john.fastabend@gmail.com, andrii@kernel.org, martin.lau@linux.dev, song@kernel.org, yonghong.song@linux.dev, kpsingh@kernel.org, sdf@google.com, haoluo@google.com, jolsa@kernel.org, tj@kernel.org, lizefan.x@bytedance.com, hannes@cmpxchg.org, yosryahmed@google.com, mkoutny@suse.com Cc: cgroups@vger.kernel.org, bpf@vger.kernel.org, Yafang Shao Subject: [RFC PATCH bpf-next 8/8] selftests/bpf: Add selftests for cgroup controller Date: Fri, 22 Sep 2023 11:28:46 +0000 Message-Id: <20230922112846.4265-9-laoar.shao@gmail.com> X-Mailer: git-send-email 2.39.3 In-Reply-To: <20230922112846.4265-1-laoar.shao@gmail.com> References: <20230922112846.4265-1-laoar.shao@gmail.com> Precedence: bulk X-Mailing-List: bpf@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-Spam-Status: No, score=-2.1 required=5.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,DKIM_VALID_AU,DKIM_VALID_EF,FREEMAIL_FROM, RCVD_IN_DNSWL_BLOCKED,SPF_HELO_NONE,SPF_PASS autolearn=ham autolearn_force=no version=3.4.6 X-Spam-Checker-Version: SpamAssassin 3.4.6 (2021-04-09) on lindbergh.monkeyblade.net X-Patchwork-Delegate: bpf@iogearbox.net X-Patchwork-State: RFC Add selftests for cgroup controller on both cgroup1 and cgroup2. 
The result as follows, $ tools/testing/selftests/bpf/test_progs --name=cgroup_controller #40/1 cgroup_controller/test_cgroup1_controller:OK #40/2 cgroup_controller/test_invalid_cgroup_id:OK #40/3 cgroup_controller/test_sleepable_prog:OK #40/4 cgroup_controller/test_cgroup2_controller:OK #40 cgroup_controller:OK Signed-off-by: Yafang Shao --- .../bpf/prog_tests/cgroup_controller.c | 149 ++++++++++++++++++ .../bpf/progs/test_cgroup_controller.c | 80 ++++++++++ 2 files changed, 229 insertions(+) create mode 100644 tools/testing/selftests/bpf/prog_tests/cgroup_controller.c create mode 100644 tools/testing/selftests/bpf/progs/test_cgroup_controller.c diff --git a/tools/testing/selftests/bpf/prog_tests/cgroup_controller.c b/tools/testing/selftests/bpf/prog_tests/cgroup_controller.c new file mode 100644 index 000000000000..f76ec1e65b2a --- /dev/null +++ b/tools/testing/selftests/bpf/prog_tests/cgroup_controller.c @@ -0,0 +1,149 @@ +// SPDX-License-Identifier: GPL-2.0 +/* Copyright (C) 2023 Yafang Shao */ + +#include +#include +#include +#include "cgroup_helpers.h" +#include "test_cgroup_controller.skel.h" + +#define CGROUP2_DIR "/cgroup2_controller" + +static void bpf_cgroup1_controller(bool sleepable, __u64 cgrp_id) +{ + struct test_cgroup_controller *skel; + int err; + + skel = test_cgroup_controller__open(); + if (!ASSERT_OK_PTR(skel, "open")) + return; + + skel->bss->target_pid = getpid(); + skel->bss->ancestor_cgid = cgrp_id; + + err = bpf_program__set_attach_target(skel->progs.fentry_run, 0, "bpf_fentry_test1"); + if (!ASSERT_OK(err, "fentry_set_target")) + goto cleanup; + + err = test_cgroup_controller__load(skel); + if (!ASSERT_OK(err, "load")) + goto cleanup; + + /* Attach LSM prog first */ + if (!sleepable) { + skel->links.lsm_net_cls = bpf_program__attach_lsm(skel->progs.lsm_net_cls); + if (!ASSERT_OK_PTR(skel->links.lsm_net_cls, "lsm_attach")) + goto cleanup; + } else { + skel->links.lsm_s_net_cls = bpf_program__attach_lsm(skel->progs.lsm_s_net_cls); + if (!ASSERT_OK_PTR(skel->links.lsm_s_net_cls, "lsm_attach_sleepable")) + goto cleanup; + } + + /* LSM prog will be triggered when attaching fentry */ + skel->links.fentry_run = bpf_program__attach_trace(skel->progs.fentry_run); + if (cgrp_id) { + ASSERT_NULL(skel->links.fentry_run, "fentry_attach_fail"); + } else { + if (!ASSERT_OK_PTR(skel->links.fentry_run, "fentry_attach_success")) + goto cleanup; + } + +cleanup: + test_cgroup_controller__destroy(skel); +} + +static void cgroup_controller_on_cgroup1(bool sleepable, bool invalid_cgid) +{ + __u64 cgrp_id; + int err; + + /* Setup cgroup1 hierarchy */ + err = setup_classid_environment(); + if (!ASSERT_OK(err, "setup_classid_environment")) + return; + + err = join_classid(); + if (!ASSERT_OK(err, "join_cgroup1")) + goto cleanup; + + cgrp_id = get_classid_cgroup_id(); + if (invalid_cgid) + bpf_cgroup1_controller(sleepable, 0); + else + bpf_cgroup1_controller(sleepable, cgrp_id); + +cleanup: + /* Cleanup cgroup1 hierarchy */ + cleanup_classid_environment(); +} + +static void bpf_cgroup2_controller(__u64 cgrp_id) +{ + struct test_cgroup_controller *skel; + int err; + + skel = test_cgroup_controller__open(); + if (!ASSERT_OK_PTR(skel, "open")) + return; + + skel->bss->target_pid = getpid(); + skel->bss->ancestor_cgid = cgrp_id; + + err = bpf_program__set_attach_target(skel->progs.fentry_run, 0, "bpf_fentry_test1"); + if (!ASSERT_OK(err, "fentry_set_target")) + goto cleanup; + + err = test_cgroup_controller__load(skel); + if (!ASSERT_OK(err, "load")) + goto cleanup; + + skel->links.lsm_cpu = 
bpf_program__attach_lsm(skel->progs.lsm_cpu); + if (!ASSERT_OK_PTR(skel->links.lsm_cpu, "lsm_attach")) + goto cleanup; + + skel->links.fentry_run = bpf_program__attach_trace(skel->progs.fentry_run); + ASSERT_NULL(skel->links.fentry_run, "fentry_attach_fail"); + +cleanup: + test_cgroup_controller__destroy(skel); +} + +static void cgroup_controller_on_cgroup2(void) +{ + int cgrp_fd, cgrp_id, err; + + err = setup_cgroup_environment(); + if (!ASSERT_OK(err, "cgrp2_env_setup")) + goto cleanup; + + cgrp_fd = test__join_cgroup(CGROUP2_DIR); + if (!ASSERT_GE(cgrp_fd, 0, "cgroup_join_cgroup2")) + goto cleanup; + + err = enable_controllers(CGROUP2_DIR, "cpu"); + if (!ASSERT_OK(err, "cgrp2_env_setup")) + goto close_fd; + + cgrp_id = get_cgroup_id(CGROUP2_DIR); + if (!ASSERT_GE(cgrp_id, 0, "cgroup2_id")) + goto close_fd; + bpf_cgroup2_controller(cgrp_id); + +close_fd: + close(cgrp_fd); +cleanup: + cleanup_cgroup_environment(); +} + +void test_cgroup_controller(void) +{ + if (test__start_subtest("test_cgroup1_controller")) + cgroup_controller_on_cgroup1(false, false); + if (test__start_subtest("test_invalid_cgroup_id")) + cgroup_controller_on_cgroup1(false, true); + if (test__start_subtest("test_sleepable_prog")) + cgroup_controller_on_cgroup1(true, false); + if (test__start_subtest("test_cgroup2_controller")) + cgroup_controller_on_cgroup2(); +} diff --git a/tools/testing/selftests/bpf/progs/test_cgroup_controller.c b/tools/testing/selftests/bpf/progs/test_cgroup_controller.c new file mode 100644 index 000000000000..958804a34794 --- /dev/null +++ b/tools/testing/selftests/bpf/progs/test_cgroup_controller.c @@ -0,0 +1,80 @@ +// SPDX-License-Identifier: GPL-2.0 +//#endif +/* Copyright (C) 2023 Yafang Shao */ + +#include "vmlinux.h" +#include +#include +#include + +__u64 ancestor_cgid; +int target_pid; + +struct cgroup *bpf_cgroup_acquire_from_id_within_controller(u64 cgid, int ssid) __ksym; +u64 bpf_cgroup_id_from_task_within_controller(struct task_struct *task, int ssid) __ksym; +u64 bpf_cgroup_ancestor_id_from_task_within_controller(struct task_struct *task, + int ssid, int level) __ksym; +long bpf_task_under_cgroup(struct task_struct *task, struct cgroup *ancestor) __ksym; +void bpf_cgroup_release(struct cgroup *p) __ksym; + +static int bpf_link_create_verify(int cmd, union bpf_attr *attr, unsigned int size, int ssid) +{ + struct cgroup *cgrp = NULL; + struct task_struct *task; + __u64 cgid, root_cgid; + int ret = 0; + + if (cmd != BPF_LINK_CREATE) + return 0; + + task = bpf_get_current_task_btf(); + /* Then it can run in parallel */ + if (target_pid != BPF_CORE_READ(task, pid)) + return 0; + + cgrp = bpf_cgroup_acquire_from_id_within_controller(ancestor_cgid, ssid); + if (!cgrp) + goto out; + + if (bpf_task_under_cgroup(task, cgrp)) + ret = -1; + bpf_cgroup_release(cgrp); + + cgid = bpf_cgroup_id_from_task_within_controller(task, ssid); + if (cgid != ancestor_cgid) + ret = 0; + + /* The level of root cgroup is 0, and its id is always 1 */ + root_cgid = bpf_cgroup_ancestor_id_from_task_within_controller(task, ssid, 0); + if (root_cgid != 1) + ret = 0; + +out: + return ret; +} + +SEC("lsm/bpf") +int BPF_PROG(lsm_net_cls, int cmd, union bpf_attr *attr, unsigned int size) +{ + return bpf_link_create_verify(cmd, attr, size, net_cls_cgrp_id); +} + +SEC("lsm.s/bpf") +int BPF_PROG(lsm_s_net_cls, int cmd, union bpf_attr *attr, unsigned int size) +{ + return bpf_link_create_verify(cmd, attr, size, net_cls_cgrp_id); +} + +SEC("lsm/bpf") +int BPF_PROG(lsm_cpu, int cmd, union bpf_attr *attr, unsigned int size) 
+{ + return bpf_link_create_verify(cmd, attr, size, cpu_cgrp_id); +} + +SEC("fentry") +int BPF_PROG(fentry_run) +{ + return 0; +} + +char _license[] SEC("license") = "GPL";