From patchwork Mon Mar 28 17:50:16 2022
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Roberto Sassu
X-Patchwork-Id: 12794000
From: Roberto Sassu
Subject: [PATCH 01/18] bpf: Export bpf_link_inc()
Date: Mon, 28 Mar 2022 19:50:16 +0200
Message-ID: <20220328175033.2437312-2-roberto.sassu@huawei.com>
X-Mailer: git-send-email 2.32.0
In-Reply-To: <20220328175033.2437312-1-roberto.sassu@huawei.com>
References: <20220328175033.2437312-1-roberto.sassu@huawei.com>
X-Mailing-List: linux-integrity@vger.kernel.org

In the upcoming patches, populate_bpffs() will no longer have visibility
of the links and maps to be pinned (to avoid the limitation of the 'objs'
fixed-size array); instead, the eBPF-program-specific preload method will
do the pinning and adjust the reference counts itself. Since the preload
method can be implemented in a kernel module, bpf_link_inc(), so far
called only by populate_bpffs(), must also be exported.

Thus, export bpf_link_inc().

Signed-off-by: Roberto Sassu
---
 kernel/bpf/syscall.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/kernel/bpf/syscall.c b/kernel/bpf/syscall.c
index cdaa1152436a..8ffe342545c3 100644
--- a/kernel/bpf/syscall.c
+++ b/kernel/bpf/syscall.c
@@ -2459,6 +2459,7 @@ void bpf_link_inc(struct bpf_link *link)
 {
 	atomic64_inc(&link->refcnt);
 }
+EXPORT_SYMBOL_GPL(bpf_link_inc);
 
 /* bpf_link_free is guaranteed to be called from process context */
 static void bpf_link_free(struct bpf_link *link)
From patchwork Mon Mar 28 17:50:17 2022
X-Patchwork-Submitter: Roberto Sassu
X-Patchwork-Id: 12794002
From: Roberto Sassu
Subject: [PATCH 02/18] bpf-preload: Move bpf_preload.h to include/linux
Date: Mon, 28 Mar 2022 19:50:17 +0200
Message-ID: <20220328175033.2437312-3-roberto.sassu@huawei.com>
In-Reply-To: <20220328175033.2437312-1-roberto.sassu@huawei.com>
References: <20220328175033.2437312-1-roberto.sassu@huawei.com>

Move bpf_preload.h to include/linux, so that third-party providers can
develop out-of-tree kernel modules to preload eBPF programs. Declare the
bpf_preload_ops global variable only if CONFIG_BPF_SYSCALL is defined.

Signed-off-by: Roberto Sassu
---
 {kernel/bpf/preload => include/linux}/bpf_preload.h | 4 ++++
 kernel/bpf/inode.c                                  | 2 +-
 kernel/bpf/preload/bpf_preload_kern.c               | 2 +-
 3 files changed, 6 insertions(+), 2 deletions(-)
 rename {kernel/bpf/preload => include/linux}/bpf_preload.h (85%)

diff --git a/kernel/bpf/preload/bpf_preload.h b/include/linux/bpf_preload.h
similarity index 85%
rename from kernel/bpf/preload/bpf_preload.h
rename to include/linux/bpf_preload.h
index f065c91213a0..09d55d9f1131 100644
--- a/kernel/bpf/preload/bpf_preload.h
+++ b/include/linux/bpf_preload.h
@@ -11,6 +11,10 @@ struct bpf_preload_ops {
 	int (*preload)(struct bpf_preload_info *);
 	struct module *owner;
 };
+
+#ifdef CONFIG_BPF_SYSCALL
 extern struct bpf_preload_ops *bpf_preload_ops;
+#endif /*CONFIG_BPF_SYSCALL*/
+
 #define BPF_PRELOAD_LINKS 2
 #endif
diff --git a/kernel/bpf/inode.c b/kernel/bpf/inode.c
index 4f841e16779e..1f2d468abf58 100644
--- a/kernel/bpf/inode.c
+++ b/kernel/bpf/inode.c
@@ -20,7 +20,7 @@
 #include
 #include
 #include
-#include "preload/bpf_preload.h"
+#include <linux/bpf_preload.h>
 
 enum bpf_type {
 	BPF_TYPE_UNSPEC = 0,
diff --git a/kernel/bpf/preload/bpf_preload_kern.c b/kernel/bpf/preload/bpf_preload_kern.c
index 5106b5372f0c..f43391d1c49c 100644
--- a/kernel/bpf/preload/bpf_preload_kern.c
+++ b/kernel/bpf/preload/bpf_preload_kern.c
@@ -2,7 +2,7 @@
 #define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
 #include
 #include
-#include "bpf_preload.h"
+#include <linux/bpf_preload.h>
 #include "iterators/iterators.lskel.h"
 
 static struct bpf_link *maps_link, *progs_link;
Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S244734AbiC1RxM (ORCPT ); Mon, 28 Mar 2022 13:53:12 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:33340 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S244770AbiC1Rwz (ORCPT ); Mon, 28 Mar 2022 13:52:55 -0400 Received: from frasgout.his.huawei.com (frasgout.his.huawei.com [185.176.79.56]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 7C3B03CA5F; Mon, 28 Mar 2022 10:51:14 -0700 (PDT) Received: from fraeml714-chm.china.huawei.com (unknown [172.18.147.226]) by frasgout.his.huawei.com (SkyGuard) with ESMTP id 4KS0YQ3KRMz67Q7R; Tue, 29 Mar 2022 01:48:42 +0800 (CST) Received: from roberto-ThinkStation-P620.huawei.com (10.204.63.22) by fraeml714-chm.china.huawei.com (10.206.15.33) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.24; Mon, 28 Mar 2022 19:51:11 +0200 From: Roberto Sassu To: , , , , , , , , , CC: , , , , , , , , , , Roberto Sassu Subject: [PATCH 03/18] bpf-preload: Generalize object pinning from the kernel Date: Mon, 28 Mar 2022 19:50:18 +0200 Message-ID: <20220328175033.2437312-4-roberto.sassu@huawei.com> X-Mailer: git-send-email 2.32.0 In-Reply-To: <20220328175033.2437312-1-roberto.sassu@huawei.com> References: <20220328175033.2437312-1-roberto.sassu@huawei.com> MIME-Version: 1.0 X-Originating-IP: [10.204.63.22] X-ClientProxiedBy: lhreml754-chm.china.huawei.com (10.201.108.204) To fraeml714-chm.china.huawei.com (10.206.15.33) X-CFilter-Loop: Reflected Precedence: bulk List-ID: X-Mailing-List: linux-integrity@vger.kernel.org Rename bpf_iter_link_pin_kernel() to bpf_obj_do_pin_kernel(), to match the user space counterpart bpf_obj_do_pin() and similarly to the latter, accept a generic object pointer and its type, so that the preload method can pin not only links but also maps and progs. 
Signed-off-by: Roberto Sassu --- kernel/bpf/inode.c | 29 ++++++++++++++++++++++------- 1 file changed, 22 insertions(+), 7 deletions(-) diff --git a/kernel/bpf/inode.c b/kernel/bpf/inode.c index 1f2d468abf58..a9d725db4cf4 100644 --- a/kernel/bpf/inode.c +++ b/kernel/bpf/inode.c @@ -414,9 +414,10 @@ static const struct inode_operations bpf_dir_iops = { .unlink = simple_unlink, }; -/* pin iterator link into bpffs */ -static int bpf_iter_link_pin_kernel(struct dentry *parent, - const char *name, struct bpf_link *link) +/* pin object into bpffs */ +static int bpf_obj_do_pin_kernel(struct dentry *parent, + const char *name, void *raw, + enum bpf_type type) { umode_t mode = S_IFREG | S_IRUSR; struct dentry *dentry; @@ -428,8 +429,22 @@ static int bpf_iter_link_pin_kernel(struct dentry *parent, inode_unlock(parent->d_inode); return PTR_ERR(dentry); } - ret = bpf_mkobj_ops(dentry, mode, link, &bpf_link_iops, - &bpf_iter_fops); + + switch (type) { + case BPF_TYPE_PROG: + ret = bpf_mkprog(dentry, mode, raw); + break; + case BPF_TYPE_MAP: + ret = bpf_mkmap(dentry, mode, raw); + break; + case BPF_TYPE_LINK: + ret = bpf_mklink(dentry, mode, raw); + break; + default: + ret = -EOPNOTSUPP; + break; + } + dput(dentry); inode_unlock(parent->d_inode); return ret; @@ -726,8 +741,8 @@ static int populate_bpffs(struct dentry *parent) goto out_put; for (i = 0; i < BPF_PRELOAD_LINKS; i++) { bpf_link_inc(objs[i].link); - err = bpf_iter_link_pin_kernel(parent, - objs[i].link_name, objs[i].link); + err = bpf_obj_do_pin_kernel(parent, objs[i].link_name, + objs[i].link, BPF_TYPE_LINK); if (err) { bpf_link_put(objs[i].link); goto out_put; From patchwork Mon Mar 28 17:50:19 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Roberto Sassu X-Patchwork-Id: 12793998 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org 
From patchwork Mon Mar 28 17:50:19 2022
X-Patchwork-Submitter: Roberto Sassu
X-Patchwork-Id: 12793998
From: Roberto Sassu
Subject: [PATCH 04/18] bpf-preload: Export and call bpf_obj_do_pin_kernel()
Date: Mon, 28 Mar 2022 19:50:19 +0200
Message-ID: <20220328175033.2437312-5-roberto.sassu@huawei.com>
In-Reply-To: <20220328175033.2437312-1-roberto.sassu@huawei.com>
References: <20220328175033.2437312-1-roberto.sassu@huawei.com>

Export bpf_obj_do_pin_kernel(), so that the preload method of an eBPF
program (built-in or in a kernel module) can call it to pin links, maps
and progs.

This removes the need for the bpf_preload_info structure, which held the
links to be pinned by populate_bpffs(). Since after this patch the
preload method has everything it needs to pin objects, arbitrary objects
of eBPF programs can be pinned by generating the corresponding preload
method in the light skeleton.

Signed-off-by: Roberto Sassu
---
 include/linux/bpf_preload.h           | 20 +++++++++++++-----
 kernel/bpf/inode.c                    | 30 +++++----------------------
 kernel/bpf/preload/bpf_preload_kern.c | 25 +++++++++++++++++-----
 3 files changed, 40 insertions(+), 35 deletions(-)

diff --git a/include/linux/bpf_preload.h b/include/linux/bpf_preload.h
index 09d55d9f1131..e604933b3daa 100644
--- a/include/linux/bpf_preload.h
+++ b/include/linux/bpf_preload.h
@@ -2,19 +2,29 @@
 #ifndef _BPF_PRELOAD_H
 #define _BPF_PRELOAD_H
 
-struct bpf_preload_info {
-	char link_name[16];
-	struct bpf_link *link;
+enum bpf_type {
+	BPF_TYPE_UNSPEC = 0,
+	BPF_TYPE_PROG,
+	BPF_TYPE_MAP,
+	BPF_TYPE_LINK,
 };
 
 struct bpf_preload_ops {
-	int (*preload)(struct bpf_preload_info *);
+	int (*preload)(struct dentry *parent);
 	struct module *owner;
 };
 
 #ifdef CONFIG_BPF_SYSCALL
 extern struct bpf_preload_ops *bpf_preload_ops;
+
+int bpf_obj_do_pin_kernel(struct dentry *parent, const char *name, void *raw,
+			  enum bpf_type type);
+#else
+static inline int bpf_obj_do_pin_kernel(struct dentry *parent, const char *name,
+					void *raw, enum bpf_type type)
+{
+	return -EOPNOTSUPP;
+}
 #endif /*CONFIG_BPF_SYSCALL*/
 
-#define BPF_PRELOAD_LINKS 2
 #endif
diff --git a/kernel/bpf/inode.c b/kernel/bpf/inode.c
index a9d725db4cf4..bb8762abbf3d 100644
--- a/kernel/bpf/inode.c
+++ b/kernel/bpf/inode.c
@@ -22,13 +22,6 @@
 #include
 #include
 
-enum bpf_type {
-	BPF_TYPE_UNSPEC = 0,
-	BPF_TYPE_PROG,
-	BPF_TYPE_MAP,
-	BPF_TYPE_LINK,
-};
-
 static void *bpf_any_get(void *raw, enum bpf_type type)
 {
 	switch (type) {
@@ -415,9 +408,8 @@ static const struct inode_operations bpf_dir_iops = {
 };
 
 /* pin object into bpffs */
-static int bpf_obj_do_pin_kernel(struct dentry *parent,
-				 const char *name, void *raw,
-				 enum bpf_type type)
+int bpf_obj_do_pin_kernel(struct dentry *parent, const char *name, void *raw,
+			  enum bpf_type type)
 {
 	umode_t mode = S_IFREG | S_IRUSR;
 	struct dentry *dentry;
@@ -449,6 +441,7 @@ static int bpf_obj_do_pin_kernel(struct dentry *parent,
 	inode_unlock(parent->d_inode);
 	return ret;
 }
+EXPORT_SYMBOL(bpf_obj_do_pin_kernel);
 
 static int bpf_obj_do_pin(const char __user *pathname, void *raw,
 			  enum bpf_type type)
@@ -724,8 +717,7 @@ static DEFINE_MUTEX(bpf_preload_lock);
 
 static int populate_bpffs(struct dentry *parent)
 {
-	struct bpf_preload_info objs[BPF_PRELOAD_LINKS] = {};
-	int err = 0, i;
+	int err = 0;
 
 	/* grab the mutex to make sure the kernel interactions with bpf_preload
 	 * are serialized
@@ -736,19 +728,7 @@ static int populate_bpffs(struct dentry *parent)
 	if (!bpf_preload_mod_get())
 		goto out;
-	err = bpf_preload_ops->preload(objs);
-	if (err)
-		goto out_put;
-	for (i = 0; i < BPF_PRELOAD_LINKS; i++) {
-		bpf_link_inc(objs[i].link);
-		err = bpf_obj_do_pin_kernel(parent, objs[i].link_name,
-					    objs[i].link, BPF_TYPE_LINK);
-		if (err) {
-			bpf_link_put(objs[i].link);
-			goto out_put;
-		}
-	}
-out_put:
+	err = bpf_preload_ops->preload(parent);
 	bpf_preload_mod_put();
 out:
 	mutex_unlock(&bpf_preload_lock);
diff --git a/kernel/bpf/preload/bpf_preload_kern.c b/kernel/bpf/preload/bpf_preload_kern.c
index f43391d1c49c..d70047108bb3 100644
--- a/kernel/bpf/preload/bpf_preload_kern.c
+++ b/kernel/bpf/preload/bpf_preload_kern.c
@@ -17,13 +17,28 @@ static void free_links_and_skel(void)
 	iterators_bpf__destroy(skel);
 }
 
-static int preload(struct bpf_preload_info *obj)
+static int preload(struct dentry *parent)
 {
-	strlcpy(obj[0].link_name, "maps.debug", sizeof(obj[0].link_name));
-	obj[0].link = maps_link;
-	strlcpy(obj[1].link_name, "progs.debug", sizeof(obj[1].link_name));
-	obj[1].link = progs_link;
+	int err;
+
+	bpf_link_inc(maps_link);
+	bpf_link_inc(progs_link);
+
+	err = bpf_obj_do_pin_kernel(parent, "maps.debug", maps_link,
+				    BPF_TYPE_LINK);
+	if (err)
+		goto undo;
+
+	err = bpf_obj_do_pin_kernel(parent, "progs.debug", progs_link,
+				    BPF_TYPE_LINK);
+	if (err)
+		goto undo;
+
 	return 0;
+
+undo:
+	bpf_link_put(maps_link);
+	bpf_link_put(progs_link);
+	return err;
 }
 
 static struct bpf_preload_ops ops = {
From patchwork Mon Mar 28 17:50:20 2022
X-Patchwork-Submitter: Roberto Sassu
X-Patchwork-Id: 12794055
From: Roberto Sassu
Subject: [PATCH 05/18] bpf-preload: Generate static variables
Date: Mon, 28 Mar 2022 19:50:20 +0200
Message-ID:
<20220328175033.2437312-6-roberto.sassu@huawei.com>
In-Reply-To: <20220328175033.2437312-1-roberto.sassu@huawei.com>
References: <20220328175033.2437312-1-roberto.sassu@huawei.com>

The first part of the preload code generation consists of generating the
static variables to be used by the code itself: the links and maps to be
pinned, and the skeleton. Generation of the preload variables and methods
is enabled with the option -P added to 'bpftool gen skeleton'.

The existing variables maps_link and progs_link in bpf_preload_kern.c
have been renamed respectively to dump_bpf_map_link and
dump_bpf_prog_link, to match the names of the variables in the main
structure of the light skeleton.

Signed-off-by: Roberto Sassu
---
 kernel/bpf/preload/bpf_preload_kern.c           |  35 +-
 kernel/bpf/preload/iterators/Makefile           |   2 +-
 .../bpf/preload/iterators/iterators.lskel.h     | 378 +++++++++---------
 tools/bpf/bpftool/Documentation/bpftool-gen.rst |   5 +
 tools/bpf/bpftool/bash-completion/bpftool       |   2 +-
 tools/bpf/bpftool/gen.c                         |  27 ++
 tools/bpf/bpftool/main.c                        |   7 +-
 tools/bpf/bpftool/main.h                        |   1 +
 8 files changed, 254 insertions(+), 203 deletions(-)

diff --git a/kernel/bpf/preload/bpf_preload_kern.c b/kernel/bpf/preload/bpf_preload_kern.c
index d70047108bb3..485589e03bd2 100644
--- a/kernel/bpf/preload/bpf_preload_kern.c
+++ b/kernel/bpf/preload/bpf_preload_kern.c
@@ -5,15 +5,12 @@
 #include
 #include "iterators/iterators.lskel.h"
 
-static struct bpf_link *maps_link, *progs_link;
-static struct iterators_bpf *skel;
-
 static void free_links_and_skel(void)
 {
-	if (!IS_ERR_OR_NULL(maps_link))
-		bpf_link_put(maps_link);
-	if (!IS_ERR_OR_NULL(progs_link))
-		bpf_link_put(progs_link);
+	if (!IS_ERR_OR_NULL(dump_bpf_map_link))
+		bpf_link_put(dump_bpf_map_link);
+	if (!IS_ERR_OR_NULL(dump_bpf_prog_link))
+		bpf_link_put(dump_bpf_prog_link);
 	iterators_bpf__destroy(skel);
 }
 
@@ -21,23 +18,23 @@ static int preload(struct dentry *parent)
 {
 	int err;
 
-	bpf_link_inc(maps_link);
-	bpf_link_inc(progs_link);
+	bpf_link_inc(dump_bpf_map_link);
+	bpf_link_inc(dump_bpf_prog_link);
 
-	err = bpf_obj_do_pin_kernel(parent, "maps.debug", maps_link,
+	err = bpf_obj_do_pin_kernel(parent, "maps.debug", dump_bpf_map_link,
 				    BPF_TYPE_LINK);
 	if (err)
 		goto undo;
 
-	err = bpf_obj_do_pin_kernel(parent, "progs.debug", progs_link,
+	err = bpf_obj_do_pin_kernel(parent, "progs.debug", dump_bpf_prog_link,
 				    BPF_TYPE_LINK);
 	if (err)
 		goto undo;
 
 	return 0;
 
 undo:
-	bpf_link_put(maps_link);
-	bpf_link_put(progs_link);
+	bpf_link_put(dump_bpf_map_link);
+	bpf_link_put(dump_bpf_prog_link);
 	return err;
 }
 
@@ -59,14 +56,14 @@ static int load_skel(void)
 	err = iterators_bpf__attach(skel);
 	if (err)
 		goto out;
-	maps_link = bpf_link_get_from_fd(skel->links.dump_bpf_map_fd);
-	if (IS_ERR(maps_link)) {
-		err = PTR_ERR(maps_link);
+	dump_bpf_map_link = bpf_link_get_from_fd(skel->links.dump_bpf_map_fd);
+	if (IS_ERR(dump_bpf_map_link)) {
+		err = PTR_ERR(dump_bpf_map_link);
 		goto out;
 	}
-	progs_link = bpf_link_get_from_fd(skel->links.dump_bpf_prog_fd);
-	if (IS_ERR(progs_link)) {
-		err = PTR_ERR(progs_link);
+	dump_bpf_prog_link = bpf_link_get_from_fd(skel->links.dump_bpf_prog_fd);
+	if (IS_ERR(dump_bpf_prog_link)) {
+		err = PTR_ERR(dump_bpf_prog_link);
 		goto out;
 	}
 	/* Avoid taking over stdin/stdout/stderr of init process.
Zeroing out diff --git a/kernel/bpf/preload/iterators/Makefile b/kernel/bpf/preload/iterators/Makefile index bfe24f8c5a20..d36a822d3e16 100644 --- a/kernel/bpf/preload/iterators/Makefile +++ b/kernel/bpf/preload/iterators/Makefile @@ -43,7 +43,7 @@ clean: iterators.lskel.h: $(OUTPUT)/iterators.bpf.o | $(BPFTOOL) $(call msg,GEN-SKEL,$@) - $(Q)$(BPFTOOL) gen skeleton -L $< > $@ + $(Q)$(BPFTOOL) gen skeleton -L -P $< > $@ $(OUTPUT)/iterators.bpf.o: iterators.bpf.c $(BPFOBJ) | $(OUTPUT) diff --git a/kernel/bpf/preload/iterators/iterators.lskel.h b/kernel/bpf/preload/iterators/iterators.lskel.h index 70f236a82fe1..9794acdfacf9 100644 --- a/kernel/bpf/preload/iterators/iterators.lskel.h +++ b/kernel/bpf/preload/iterators/iterators.lskel.h @@ -103,7 +103,7 @@ iterators_bpf__load(struct iterators_bpf *skel) int err; opts.ctx = (struct bpf_loader_ctx *)skel; - opts.data_sz = 6056; + opts.data_sz = 6088; opts.data = (void *)"\ \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\ \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\ @@ -138,63 +138,64 @@ iterators_bpf__load(struct iterators_bpf *skel) \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\ \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\ \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\x9f\xeb\x01\0\ -\x18\0\0\0\0\0\0\0\x1c\x04\0\0\x1c\x04\0\0\xf9\x04\0\0\0\0\0\0\0\0\0\x02\x02\0\ +\x18\0\0\0\0\0\0\0\x1c\x04\0\0\x1c\x04\0\0\x0b\x05\0\0\0\0\0\0\0\0\0\x02\x02\0\ \0\0\x01\0\0\0\x02\0\0\x04\x10\0\0\0\x13\0\0\0\x03\0\0\0\0\0\0\0\x18\0\0\0\x04\ \0\0\0\x40\0\0\0\0\0\0\0\0\0\0\x02\x08\0\0\0\0\0\0\0\0\0\0\x02\x0d\0\0\0\0\0\0\ \0\x01\0\0\x0d\x06\0\0\0\x1c\0\0\0\x01\0\0\0\x20\0\0\0\0\0\0\x01\x04\0\0\0\x20\ -\0\0\x01\x24\0\0\0\x01\0\0\x0c\x05\0\0\0\xa3\0\0\0\x03\0\0\x04\x18\0\0\0\xb1\0\ -\0\0\x09\0\0\0\0\0\0\0\xb5\0\0\0\x0b\0\0\0\x40\0\0\0\xc0\0\0\0\x0b\0\0\0\x80\0\ 
-\0\0\0\0\0\0\0\0\0\x02\x0a\0\0\0\xc8\0\0\0\0\0\0\x07\0\0\0\0\xd1\0\0\0\0\0\0\ -\x08\x0c\0\0\0\xd7\0\0\0\0\0\0\x01\x08\0\0\0\x40\0\0\0\x94\x01\0\0\x03\0\0\x04\ -\x18\0\0\0\x9c\x01\0\0\x0e\0\0\0\0\0\0\0\x9f\x01\0\0\x11\0\0\0\x20\0\0\0\xa4\ -\x01\0\0\x0e\0\0\0\xa0\0\0\0\xb0\x01\0\0\0\0\0\x08\x0f\0\0\0\xb6\x01\0\0\0\0\0\ -\x01\x04\0\0\0\x20\0\0\0\xc3\x01\0\0\0\0\0\x01\x01\0\0\0\x08\0\0\x01\0\0\0\0\0\ -\0\0\x03\0\0\0\0\x10\0\0\0\x12\0\0\0\x10\0\0\0\xc8\x01\0\0\0\0\0\x01\x04\0\0\0\ -\x20\0\0\0\0\0\0\0\0\0\0\x02\x14\0\0\0\x2c\x02\0\0\x02\0\0\x04\x10\0\0\0\x13\0\ -\0\0\x03\0\0\0\0\0\0\0\x3f\x02\0\0\x15\0\0\0\x40\0\0\0\0\0\0\0\0\0\0\x02\x18\0\ -\0\0\0\0\0\0\x01\0\0\x0d\x06\0\0\0\x1c\0\0\0\x13\0\0\0\x44\x02\0\0\x01\0\0\x0c\ -\x16\0\0\0\x90\x02\0\0\x01\0\0\x04\x08\0\0\0\x99\x02\0\0\x19\0\0\0\0\0\0\0\0\0\ -\0\0\0\0\0\x02\x1a\0\0\0\xea\x02\0\0\x06\0\0\x04\x38\0\0\0\x9c\x01\0\0\x0e\0\0\ -\0\0\0\0\0\x9f\x01\0\0\x11\0\0\0\x20\0\0\0\xf7\x02\0\0\x1b\0\0\0\xc0\0\0\0\x08\ -\x03\0\0\x15\0\0\0\0\x01\0\0\x11\x03\0\0\x1d\0\0\0\x40\x01\0\0\x1b\x03\0\0\x1e\ +\0\0\x01\x24\0\0\0\x01\0\0\x0c\x05\0\0\0\xb1\0\0\0\x03\0\0\x04\x18\0\0\0\xbf\0\ +\0\0\x09\0\0\0\0\0\0\0\xc3\0\0\0\x0b\0\0\0\x40\0\0\0\xce\0\0\0\x0b\0\0\0\x80\0\ +\0\0\0\0\0\0\0\0\0\x02\x0a\0\0\0\xd6\0\0\0\0\0\0\x07\0\0\0\0\xdf\0\0\0\0\0\0\ +\x08\x0c\0\0\0\xe5\0\0\0\0\0\0\x01\x08\0\0\0\x40\0\0\0\xa6\x01\0\0\x03\0\0\x04\ +\x18\0\0\0\xae\x01\0\0\x0e\0\0\0\0\0\0\0\xb1\x01\0\0\x11\0\0\0\x20\0\0\0\xb6\ +\x01\0\0\x0e\0\0\0\xa0\0\0\0\xc2\x01\0\0\0\0\0\x08\x0f\0\0\0\xc8\x01\0\0\0\0\0\ +\x01\x04\0\0\0\x20\0\0\0\xd5\x01\0\0\0\0\0\x01\x01\0\0\0\x08\0\0\x01\0\0\0\0\0\ +\0\0\x03\0\0\0\0\x10\0\0\0\x12\0\0\0\x10\0\0\0\xda\x01\0\0\0\0\0\x01\x04\0\0\0\ +\x20\0\0\0\0\0\0\0\0\0\0\x02\x14\0\0\0\x3e\x02\0\0\x02\0\0\x04\x10\0\0\0\x13\0\ +\0\0\x03\0\0\0\0\0\0\0\x51\x02\0\0\x15\0\0\0\x40\0\0\0\0\0\0\0\0\0\0\x02\x18\0\ +\0\0\0\0\0\0\x01\0\0\x0d\x06\0\0\0\x1c\0\0\0\x13\0\0\0\x56\x02\0\0\x01\0\0\x0c\ 
+\x16\0\0\0\xa2\x02\0\0\x01\0\0\x04\x08\0\0\0\xab\x02\0\0\x19\0\0\0\0\0\0\0\0\0\ +\0\0\0\0\0\x02\x1a\0\0\0\xfc\x02\0\0\x06\0\0\x04\x38\0\0\0\xae\x01\0\0\x0e\0\0\ +\0\0\0\0\0\xb1\x01\0\0\x11\0\0\0\x20\0\0\0\x09\x03\0\0\x1b\0\0\0\xc0\0\0\0\x1a\ +\x03\0\0\x15\0\0\0\0\x01\0\0\x23\x03\0\0\x1d\0\0\0\x40\x01\0\0\x2d\x03\0\0\x1e\ \0\0\0\x80\x01\0\0\0\0\0\0\0\0\0\x02\x1c\0\0\0\0\0\0\0\0\0\0\x0a\x10\0\0\0\0\0\ -\0\0\0\0\0\x02\x1f\0\0\0\0\0\0\0\0\0\0\x02\x20\0\0\0\x65\x03\0\0\x02\0\0\x04\ -\x08\0\0\0\x73\x03\0\0\x0e\0\0\0\0\0\0\0\x7c\x03\0\0\x0e\0\0\0\x20\0\0\0\x1b\ -\x03\0\0\x03\0\0\x04\x18\0\0\0\x86\x03\0\0\x1b\0\0\0\0\0\0\0\x8e\x03\0\0\x21\0\ -\0\0\x40\0\0\0\x94\x03\0\0\x23\0\0\0\x80\0\0\0\0\0\0\0\0\0\0\x02\x22\0\0\0\0\0\ -\0\0\0\0\0\x02\x24\0\0\0\x98\x03\0\0\x01\0\0\x04\x04\0\0\0\xa3\x03\0\0\x0e\0\0\ -\0\0\0\0\0\x0c\x04\0\0\x01\0\0\x04\x04\0\0\0\x15\x04\0\0\x0e\0\0\0\0\0\0\0\0\0\ -\0\0\0\0\0\x03\0\0\0\0\x1c\0\0\0\x12\0\0\0\x23\0\0\0\x8b\x04\0\0\0\0\0\x0e\x25\ -\0\0\0\0\0\0\0\0\0\0\0\0\0\0\x03\0\0\0\0\x1c\0\0\0\x12\0\0\0\x0e\0\0\0\x9f\x04\ +\0\0\0\0\0\x02\x1f\0\0\0\0\0\0\0\0\0\0\x02\x20\0\0\0\x77\x03\0\0\x02\0\0\x04\ +\x08\0\0\0\x85\x03\0\0\x0e\0\0\0\0\0\0\0\x8e\x03\0\0\x0e\0\0\0\x20\0\0\0\x2d\ +\x03\0\0\x03\0\0\x04\x18\0\0\0\x98\x03\0\0\x1b\0\0\0\0\0\0\0\xa0\x03\0\0\x21\0\ +\0\0\x40\0\0\0\xa6\x03\0\0\x23\0\0\0\x80\0\0\0\0\0\0\0\0\0\0\x02\x22\0\0\0\0\0\ +\0\0\0\0\0\x02\x24\0\0\0\xaa\x03\0\0\x01\0\0\x04\x04\0\0\0\xb5\x03\0\0\x0e\0\0\ +\0\0\0\0\0\x1e\x04\0\0\x01\0\0\x04\x04\0\0\0\x27\x04\0\0\x0e\0\0\0\0\0\0\0\0\0\ +\0\0\0\0\0\x03\0\0\0\0\x1c\0\0\0\x12\0\0\0\x23\0\0\0\x9d\x04\0\0\0\0\0\x0e\x25\ +\0\0\0\0\0\0\0\0\0\0\0\0\0\0\x03\0\0\0\0\x1c\0\0\0\x12\0\0\0\x0e\0\0\0\xb1\x04\ \0\0\0\0\0\x0e\x27\0\0\0\0\0\0\0\0\0\0\0\0\0\0\x03\0\0\0\0\x1c\0\0\0\x12\0\0\0\ -\x20\0\0\0\xb5\x04\0\0\0\0\0\x0e\x29\0\0\0\0\0\0\0\0\0\0\0\0\0\0\x03\0\0\0\0\ -\x1c\0\0\0\x12\0\0\0\x11\0\0\0\xca\x04\0\0\0\0\0\x0e\x2b\0\0\0\0\0\0\0\0\0\0\0\ 
-\0\0\0\x03\0\0\0\0\x10\0\0\0\x12\0\0\0\x04\0\0\0\xe1\x04\0\0\0\0\0\x0e\x2d\0\0\ -\0\x01\0\0\0\xe9\x04\0\0\x04\0\0\x0f\x62\0\0\0\x26\0\0\0\0\0\0\0\x23\0\0\0\x28\ +\x20\0\0\0\xc7\x04\0\0\0\0\0\x0e\x29\0\0\0\0\0\0\0\0\0\0\0\0\0\0\x03\0\0\0\0\ +\x1c\0\0\0\x12\0\0\0\x11\0\0\0\xdc\x04\0\0\0\0\0\x0e\x2b\0\0\0\0\0\0\0\0\0\0\0\ +\0\0\0\x03\0\0\0\0\x10\0\0\0\x12\0\0\0\x04\0\0\0\xf3\x04\0\0\0\0\0\x0e\x2d\0\0\ +\0\x01\0\0\0\xfb\x04\0\0\x04\0\0\x0f\x62\0\0\0\x26\0\0\0\0\0\0\0\x23\0\0\0\x28\ \0\0\0\x23\0\0\0\x0e\0\0\0\x2a\0\0\0\x31\0\0\0\x20\0\0\0\x2c\0\0\0\x51\0\0\0\ -\x11\0\0\0\xf1\x04\0\0\x01\0\0\x0f\x04\0\0\0\x2e\0\0\0\0\0\0\0\x04\0\0\0\0\x62\ +\x11\0\0\0\x03\x05\0\0\x01\0\0\x0f\x04\0\0\0\x2e\0\0\0\0\0\0\0\x04\0\0\0\0\x62\ \x70\x66\x5f\x69\x74\x65\x72\x5f\x5f\x62\x70\x66\x5f\x6d\x61\x70\0\x6d\x65\x74\ \x61\0\x6d\x61\x70\0\x63\x74\x78\0\x69\x6e\x74\0\x64\x75\x6d\x70\x5f\x62\x70\ \x66\x5f\x6d\x61\x70\0\x69\x74\x65\x72\x2f\x62\x70\x66\x5f\x6d\x61\x70\0\x30\ -\x3a\x30\0\x2f\x77\x2f\x6e\x65\x74\x2d\x6e\x65\x78\x74\x2f\x6b\x65\x72\x6e\x65\ -\x6c\x2f\x62\x70\x66\x2f\x70\x72\x65\x6c\x6f\x61\x64\x2f\x69\x74\x65\x72\x61\ -\x74\x6f\x72\x73\x2f\x69\x74\x65\x72\x61\x74\x6f\x72\x73\x2e\x62\x70\x66\x2e\ -\x63\0\x09\x73\x74\x72\x75\x63\x74\x20\x73\x65\x71\x5f\x66\x69\x6c\x65\x20\x2a\ -\x73\x65\x71\x20\x3d\x20\x63\x74\x78\x2d\x3e\x6d\x65\x74\x61\x2d\x3e\x73\x65\ -\x71\x3b\0\x62\x70\x66\x5f\x69\x74\x65\x72\x5f\x6d\x65\x74\x61\0\x73\x65\x71\0\ -\x73\x65\x73\x73\x69\x6f\x6e\x5f\x69\x64\0\x73\x65\x71\x5f\x6e\x75\x6d\0\x73\ -\x65\x71\x5f\x66\x69\x6c\x65\0\x5f\x5f\x75\x36\x34\0\x75\x6e\x73\x69\x67\x6e\ -\x65\x64\x20\x6c\x6f\x6e\x67\x20\x6c\x6f\x6e\x67\0\x30\x3a\x31\0\x09\x73\x74\ -\x72\x75\x63\x74\x20\x62\x70\x66\x5f\x6d\x61\x70\x20\x2a\x6d\x61\x70\x20\x3d\ -\x20\x63\x74\x78\x2d\x3e\x6d\x61\x70\x3b\0\x09\x69\x66\x20\x28\x21\x6d\x61\x70\ -\x29\0\x09\x5f\x5f\x75\x36\x34\x20\x73\x65\x71\x5f\x6e\x75\x6d\x20\x3d\x20\x63\ -\x74\x78\x2d\x3e\x6d\x65\x74\x61\x2d\x3e\x73\x65\x71\x5f\x6e\x75\x6d\x3b\0\x30\ 
-\x3a\x32\0\x09\x69\x66\x20\x28\x73\x65\x71\x5f\x6e\x75\x6d\x20\x3d\x3d\x20\x30\ -\x29\0\x09\x09\x42\x50\x46\x5f\x53\x45\x51\x5f\x50\x52\x49\x4e\x54\x46\x28\x73\ -\x65\x71\x2c\x20\x22\x20\x20\x69\x64\x20\x6e\x61\x6d\x65\x20\x20\x20\x20\x20\ -\x20\x20\x20\x20\x20\x20\x20\x20\x6d\x61\x78\x5f\x65\x6e\x74\x72\x69\x65\x73\ -\x5c\x6e\x22\x29\x3b\0\x62\x70\x66\x5f\x6d\x61\x70\0\x69\x64\0\x6e\x61\x6d\x65\ -\0\x6d\x61\x78\x5f\x65\x6e\x74\x72\x69\x65\x73\0\x5f\x5f\x75\x33\x32\0\x75\x6e\ +\x3a\x30\0\x2f\x68\x6f\x6d\x65\x2f\x72\x6f\x62\x65\x72\x74\x6f\x2f\x72\x65\x70\ +\x6f\x73\x2f\x6c\x69\x6e\x75\x78\x2f\x6b\x65\x72\x6e\x65\x6c\x2f\x62\x70\x66\ +\x2f\x70\x72\x65\x6c\x6f\x61\x64\x2f\x69\x74\x65\x72\x61\x74\x6f\x72\x73\x2f\ +\x69\x74\x65\x72\x61\x74\x6f\x72\x73\x2e\x62\x70\x66\x2e\x63\0\x09\x73\x74\x72\ +\x75\x63\x74\x20\x73\x65\x71\x5f\x66\x69\x6c\x65\x20\x2a\x73\x65\x71\x20\x3d\ +\x20\x63\x74\x78\x2d\x3e\x6d\x65\x74\x61\x2d\x3e\x73\x65\x71\x3b\0\x62\x70\x66\ +\x5f\x69\x74\x65\x72\x5f\x6d\x65\x74\x61\0\x73\x65\x71\0\x73\x65\x73\x73\x69\ +\x6f\x6e\x5f\x69\x64\0\x73\x65\x71\x5f\x6e\x75\x6d\0\x73\x65\x71\x5f\x66\x69\ +\x6c\x65\0\x5f\x5f\x75\x36\x34\0\x6c\x6f\x6e\x67\x20\x6c\x6f\x6e\x67\x20\x75\ +\x6e\x73\x69\x67\x6e\x65\x64\x20\x69\x6e\x74\0\x30\x3a\x31\0\x09\x73\x74\x72\ +\x75\x63\x74\x20\x62\x70\x66\x5f\x6d\x61\x70\x20\x2a\x6d\x61\x70\x20\x3d\x20\ +\x63\x74\x78\x2d\x3e\x6d\x61\x70\x3b\0\x09\x69\x66\x20\x28\x21\x6d\x61\x70\x29\ +\0\x09\x5f\x5f\x75\x36\x34\x20\x73\x65\x71\x5f\x6e\x75\x6d\x20\x3d\x20\x63\x74\ +\x78\x2d\x3e\x6d\x65\x74\x61\x2d\x3e\x73\x65\x71\x5f\x6e\x75\x6d\x3b\0\x30\x3a\ +\x32\0\x09\x69\x66\x20\x28\x73\x65\x71\x5f\x6e\x75\x6d\x20\x3d\x3d\x20\x30\x29\ +\0\x09\x09\x42\x50\x46\x5f\x53\x45\x51\x5f\x50\x52\x49\x4e\x54\x46\x28\x73\x65\ +\x71\x2c\x20\x22\x20\x20\x69\x64\x20\x6e\x61\x6d\x65\x20\x20\x20\x20\x20\x20\ +\x20\x20\x20\x20\x20\x20\x20\x6d\x61\x78\x5f\x65\x6e\x74\x72\x69\x65\x73\x5c\ +\x6e\x22\x29\x3b\0\x62\x70\x66\x5f\x6d\x61\x70\0\x69\x64\0\x6e\x61\x6d\x65\0\ 
+\x6d\x61\x78\x5f\x65\x6e\x74\x72\x69\x65\x73\0\x5f\x5f\x75\x33\x32\0\x75\x6e\ \x73\x69\x67\x6e\x65\x64\x20\x69\x6e\x74\0\x63\x68\x61\x72\0\x5f\x5f\x41\x52\ \x52\x41\x59\x5f\x53\x49\x5a\x45\x5f\x54\x59\x50\x45\x5f\x5f\0\x09\x42\x50\x46\ \x5f\x53\x45\x51\x5f\x50\x52\x49\x4e\x54\x46\x28\x73\x65\x71\x2c\x20\x22\x25\ @@ -237,90 +238,90 @@ iterators_bpf__load(struct iterators_bpf *skel) \x5f\x70\x72\x6f\x67\x2e\x5f\x5f\x5f\x66\x6d\x74\0\x64\x75\x6d\x70\x5f\x62\x70\ \x66\x5f\x70\x72\x6f\x67\x2e\x5f\x5f\x5f\x66\x6d\x74\x2e\x32\0\x4c\x49\x43\x45\ \x4e\x53\x45\0\x2e\x72\x6f\x64\x61\x74\x61\0\x6c\x69\x63\x65\x6e\x73\x65\0\0\0\ -\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\x2d\x09\0\0\0\0\0\0\0\0\0\0\0\0\0\0\x02\0\0\ -\0\x04\0\0\0\x62\0\0\0\x01\0\0\0\x80\x04\0\0\0\0\0\0\0\0\0\0\x69\x74\x65\x72\ -\x61\x74\x6f\x72\x2e\x72\x6f\x64\x61\x74\x61\0\0\0\0\0\0\0\0\0\0\0\0\0\x2f\0\0\ -\0\0\0\0\0\0\0\0\0\0\0\0\0\x20\x20\x69\x64\x20\x6e\x61\x6d\x65\x20\x20\x20\x20\ -\x20\x20\x20\x20\x20\x20\x20\x20\x20\x6d\x61\x78\x5f\x65\x6e\x74\x72\x69\x65\ -\x73\x0a\0\x25\x34\x75\x20\x25\x2d\x31\x36\x73\x25\x36\x64\x0a\0\x20\x20\x69\ -\x64\x20\x6e\x61\x6d\x65\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\ -\x61\x74\x74\x61\x63\x68\x65\x64\x0a\0\x25\x34\x75\x20\x25\x2d\x31\x36\x73\x20\ -\x25\x73\x20\x25\x73\x0a\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\ -\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\x47\x50\x4c\0\0\0\0\0\ -\x79\x12\0\0\0\0\0\0\x79\x26\0\0\0\0\0\0\x79\x17\x08\0\0\0\0\0\x15\x07\x1b\0\0\ -\0\0\0\x79\x11\0\0\0\0\0\0\x79\x11\x10\0\0\0\0\0\x55\x01\x08\0\0\0\0\0\xbf\xa4\ -\0\0\0\0\0\0\x07\x04\0\0\xe8\xff\xff\xff\xbf\x61\0\0\0\0\0\0\x18\x62\0\0\0\0\0\ -\0\0\0\0\0\0\0\0\0\xb7\x03\0\0\x23\0\0\0\xb7\x05\0\0\0\0\0\0\x85\0\0\0\x7e\0\0\ -\0\x61\x71\0\0\0\0\0\0\x7b\x1a\xe8\xff\0\0\0\0\xb7\x01\0\0\x04\0\0\0\xbf\x72\0\ -\0\0\0\0\0\x0f\x12\0\0\0\0\0\0\x7b\x2a\xf0\xff\0\0\0\0\x61\x71\x14\0\0\0\0\0\ -\x7b\x1a\xf8\xff\0\0\0\0\xbf\xa4\0\0\0\0\0\0\x07\x04\0\0\xe8\xff\xff\xff\xbf\ 
-\x61\0\0\0\0\0\0\x18\x62\0\0\0\0\0\0\0\0\0\0\x23\0\0\0\xb7\x03\0\0\x0e\0\0\0\ -\xb7\x05\0\0\x18\0\0\0\x85\0\0\0\x7e\0\0\0\xb7\0\0\0\0\0\0\0\x95\0\0\0\0\0\0\0\ -\0\0\0\0\x07\0\0\0\0\0\0\0\x42\0\0\0\x7b\0\0\0\x1e\x3c\x01\0\x01\0\0\0\x42\0\0\ -\0\x7b\0\0\0\x24\x3c\x01\0\x02\0\0\0\x42\0\0\0\xee\0\0\0\x1d\x44\x01\0\x03\0\0\ -\0\x42\0\0\0\x0f\x01\0\0\x06\x4c\x01\0\x04\0\0\0\x42\0\0\0\x1a\x01\0\0\x17\x40\ -\x01\0\x05\0\0\0\x42\0\0\0\x1a\x01\0\0\x1d\x40\x01\0\x06\0\0\0\x42\0\0\0\x43\ -\x01\0\0\x06\x58\x01\0\x08\0\0\0\x42\0\0\0\x56\x01\0\0\x03\x5c\x01\0\x0f\0\0\0\ -\x42\0\0\0\xdc\x01\0\0\x02\x64\x01\0\x1f\0\0\0\x42\0\0\0\x2a\x02\0\0\x01\x6c\ -\x01\0\0\0\0\0\x02\0\0\0\x3e\0\0\0\0\0\0\0\x08\0\0\0\x08\0\0\0\x3e\0\0\0\0\0\0\ -\0\x10\0\0\0\x02\0\0\0\xea\0\0\0\0\0\0\0\x20\0\0\0\x02\0\0\0\x3e\0\0\0\0\0\0\0\ -\x28\0\0\0\x08\0\0\0\x3f\x01\0\0\0\0\0\0\x78\0\0\0\x0d\0\0\0\x3e\0\0\0\0\0\0\0\ -\x88\0\0\0\x0d\0\0\0\xea\0\0\0\0\0\0\0\xa8\0\0\0\x0d\0\0\0\x3f\x01\0\0\0\0\0\0\ -\x1a\0\0\0\x21\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\ -\0\0\0\0\0\0\0\0\0\0\0\x64\x75\x6d\x70\x5f\x62\x70\x66\x5f\x6d\x61\x70\0\0\0\0\ -\0\0\0\0\x1c\0\0\0\0\0\0\0\x08\0\0\0\0\0\0\0\0\0\0\0\x01\0\0\0\x10\0\0\0\0\0\0\ -\0\0\0\0\0\x0a\0\0\0\x01\0\0\0\0\0\0\0\x08\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\ -\0\x10\0\0\0\0\0\0\0\x62\x70\x66\x5f\x69\x74\x65\x72\x5f\x62\x70\x66\x5f\x6d\ -\x61\x70\0\0\0\0\0\0\0\0\x47\x50\x4c\0\0\0\0\0\x79\x12\0\0\0\0\0\0\x79\x26\0\0\ -\0\0\0\0\x79\x12\x08\0\0\0\0\0\x15\x02\x3c\0\0\0\0\0\x79\x11\0\0\0\0\0\0\x79\ -\x27\0\0\0\0\0\0\x79\x11\x10\0\0\0\0\0\x55\x01\x08\0\0\0\0\0\xbf\xa4\0\0\0\0\0\ -\0\x07\x04\0\0\xd0\xff\xff\xff\xbf\x61\0\0\0\0\0\0\x18\x62\0\0\0\0\0\0\0\0\0\0\ -\x31\0\0\0\xb7\x03\0\0\x20\0\0\0\xb7\x05\0\0\0\0\0\0\x85\0\0\0\x7e\0\0\0\x7b\ -\x6a\xc8\xff\0\0\0\0\x61\x71\0\0\0\0\0\0\x7b\x1a\xd0\xff\0\0\0\0\xb7\x03\0\0\ -\x04\0\0\0\xbf\x79\0\0\0\0\0\0\x0f\x39\0\0\0\0\0\0\x79\x71\x28\0\0\0\0\0\x79\ 
-\x78\x30\0\0\0\0\0\x15\x08\x18\0\0\0\0\0\xb7\x02\0\0\0\0\0\0\x0f\x21\0\0\0\0\0\ -\0\x61\x11\x04\0\0\0\0\0\x79\x83\x08\0\0\0\0\0\x67\x01\0\0\x03\0\0\0\x0f\x13\0\ -\0\0\0\0\0\x79\x86\0\0\0\0\0\0\xbf\xa1\0\0\0\0\0\0\x07\x01\0\0\xf8\xff\xff\xff\ -\xb7\x02\0\0\x08\0\0\0\x85\0\0\0\x71\0\0\0\xb7\x01\0\0\0\0\0\0\x79\xa3\xf8\xff\ -\0\0\0\0\x0f\x13\0\0\0\0\0\0\xbf\xa1\0\0\0\0\0\0\x07\x01\0\0\xf4\xff\xff\xff\ -\xb7\x02\0\0\x04\0\0\0\x85\0\0\0\x71\0\0\0\xb7\x03\0\0\x04\0\0\0\x61\xa1\xf4\ -\xff\0\0\0\0\x61\x82\x10\0\0\0\0\0\x3d\x21\x02\0\0\0\0\0\x0f\x16\0\0\0\0\0\0\ -\xbf\x69\0\0\0\0\0\0\x7b\x9a\xd8\xff\0\0\0\0\x79\x71\x18\0\0\0\0\0\x7b\x1a\xe0\ -\xff\0\0\0\0\x79\x71\x20\0\0\0\0\0\x79\x11\0\0\0\0\0\0\x0f\x31\0\0\0\0\0\0\x7b\ -\x1a\xe8\xff\0\0\0\0\xbf\xa4\0\0\0\0\0\0\x07\x04\0\0\xd0\xff\xff\xff\x79\xa1\ -\xc8\xff\0\0\0\0\x18\x62\0\0\0\0\0\0\0\0\0\0\x51\0\0\0\xb7\x03\0\0\x11\0\0\0\ -\xb7\x05\0\0\x20\0\0\0\x85\0\0\0\x7e\0\0\0\xb7\0\0\0\0\0\0\0\x95\0\0\0\0\0\0\0\ -\0\0\0\0\x17\0\0\0\0\0\0\0\x42\0\0\0\x7b\0\0\0\x1e\x80\x01\0\x01\0\0\0\x42\0\0\ -\0\x7b\0\0\0\x24\x80\x01\0\x02\0\0\0\x42\0\0\0\x60\x02\0\0\x1f\x88\x01\0\x03\0\ -\0\0\x42\0\0\0\x84\x02\0\0\x06\x94\x01\0\x04\0\0\0\x42\0\0\0\x1a\x01\0\0\x17\ -\x84\x01\0\x05\0\0\0\x42\0\0\0\x9d\x02\0\0\x0e\xa0\x01\0\x06\0\0\0\x42\0\0\0\ -\x1a\x01\0\0\x1d\x84\x01\0\x07\0\0\0\x42\0\0\0\x43\x01\0\0\x06\xa4\x01\0\x09\0\ -\0\0\x42\0\0\0\xaf\x02\0\0\x03\xa8\x01\0\x11\0\0\0\x42\0\0\0\x1f\x03\0\0\x02\ -\xb0\x01\0\x18\0\0\0\x42\0\0\0\x5a\x03\0\0\x06\x04\x01\0\x1b\0\0\0\x42\0\0\0\0\ -\0\0\0\0\0\0\0\x1c\0\0\0\x42\0\0\0\xab\x03\0\0\x0f\x10\x01\0\x1d\0\0\0\x42\0\0\ -\0\xc0\x03\0\0\x2d\x14\x01\0\x1f\0\0\0\x42\0\0\0\xf7\x03\0\0\x0d\x0c\x01\0\x21\ -\0\0\0\x42\0\0\0\0\0\0\0\0\0\0\0\x22\0\0\0\x42\0\0\0\xc0\x03\0\0\x02\x14\x01\0\ -\x25\0\0\0\x42\0\0\0\x1e\x04\0\0\x0d\x18\x01\0\x28\0\0\0\x42\0\0\0\0\0\0\0\0\0\ -\0\0\x29\0\0\0\x42\0\0\0\x1e\x04\0\0\x0d\x18\x01\0\x2c\0\0\0\x42\0\0\0\x1e\x04\ 
-\0\0\x0d\x18\x01\0\x2d\0\0\0\x42\0\0\0\x4c\x04\0\0\x1b\x1c\x01\0\x2e\0\0\0\x42\ -\0\0\0\x4c\x04\0\0\x06\x1c\x01\0\x2f\0\0\0\x42\0\0\0\x6f\x04\0\0\x0d\x24\x01\0\ -\x31\0\0\0\x42\0\0\0\x1f\x03\0\0\x02\xb0\x01\0\x40\0\0\0\x42\0\0\0\x2a\x02\0\0\ -\x01\xc0\x01\0\0\0\0\0\x14\0\0\0\x3e\0\0\0\0\0\0\0\x08\0\0\0\x08\0\0\0\x3e\0\0\ -\0\0\0\0\0\x10\0\0\0\x14\0\0\0\xea\0\0\0\0\0\0\0\x20\0\0\0\x14\0\0\0\x3e\0\0\0\ -\0\0\0\0\x28\0\0\0\x18\0\0\0\x3e\0\0\0\0\0\0\0\x30\0\0\0\x08\0\0\0\x3f\x01\0\0\ -\0\0\0\0\x88\0\0\0\x1a\0\0\0\x3e\0\0\0\0\0\0\0\x98\0\0\0\x1a\0\0\0\xea\0\0\0\0\ -\0\0\0\xb0\0\0\0\x1a\0\0\0\x52\x03\0\0\0\0\0\0\xb8\0\0\0\x1a\0\0\0\x56\x03\0\0\ -\0\0\0\0\xc8\0\0\0\x1f\0\0\0\x84\x03\0\0\0\0\0\0\xe0\0\0\0\x20\0\0\0\xea\0\0\0\ -\0\0\0\0\xf8\0\0\0\x20\0\0\0\x3e\0\0\0\0\0\0\0\x20\x01\0\0\x24\0\0\0\x3e\0\0\0\ -\0\0\0\0\x58\x01\0\0\x1a\0\0\0\xea\0\0\0\0\0\0\0\x68\x01\0\0\x20\0\0\0\x46\x04\ -\0\0\0\0\0\0\x90\x01\0\0\x1a\0\0\0\x3f\x01\0\0\0\0\0\0\xa0\x01\0\0\x1a\0\0\0\ -\x87\x04\0\0\0\0\0\0\xa8\x01\0\0\x18\0\0\0\x3e\0\0\0\0\0\0\0\x1a\0\0\0\x42\0\0\ -\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\ -\0\0\x64\x75\x6d\x70\x5f\x62\x70\x66\x5f\x70\x72\x6f\x67\0\0\0\0\0\0\0\x1c\0\0\ -\0\0\0\0\0\x08\0\0\0\0\0\0\0\0\0\0\0\x01\0\0\0\x10\0\0\0\0\0\0\0\0\0\0\0\x1a\0\ -\0\0\x01\0\0\0\0\0\0\0\x13\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\x10\0\0\0\0\0\ -\0\0\x62\x70\x66\x5f\x69\x74\x65\x72\x5f\x62\x70\x66\x5f\x70\x72\x6f\x67\0\0\0\ -\0\0\0\0"; +\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\x3f\x09\0\0\0\0\0\0\0\0\0\0\0\0\0\0\x02\0\0\0\ +\x04\0\0\0\x62\0\0\0\x01\0\0\0\x80\x04\0\0\0\0\0\0\0\0\0\0\x69\x74\x65\x72\x61\ +\x74\x6f\x72\x2e\x72\x6f\x64\x61\x74\x61\0\0\0\0\0\0\0\0\0\0\0\0\0\x2f\0\0\0\0\ +\0\0\0\0\0\0\0\0\0\0\0\x20\x20\x69\x64\x20\x6e\x61\x6d\x65\x20\x20\x20\x20\x20\ +\x20\x20\x20\x20\x20\x20\x20\x20\x6d\x61\x78\x5f\x65\x6e\x74\x72\x69\x65\x73\ +\x0a\0\x25\x34\x75\x20\x25\x2d\x31\x36\x73\x25\x36\x64\x0a\0\x20\x20\x69\x64\ 
+\x20\x6e\x61\x6d\x65\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x61\ +\x74\x74\x61\x63\x68\x65\x64\x0a\0\x25\x34\x75\x20\x25\x2d\x31\x36\x73\x20\x25\ +\x73\x20\x25\x73\x0a\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\ +\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\x47\x50\x4c\0\0\0\0\0\x79\ +\x12\0\0\0\0\0\0\x79\x26\0\0\0\0\0\0\x79\x17\x08\0\0\0\0\0\x15\x07\x1b\0\0\0\0\ +\0\x79\x11\0\0\0\0\0\0\x79\x11\x10\0\0\0\0\0\x55\x01\x08\0\0\0\0\0\xbf\xa4\0\0\ +\0\0\0\0\x07\x04\0\0\xe8\xff\xff\xff\xbf\x61\0\0\0\0\0\0\x18\x62\0\0\0\0\0\0\0\ +\0\0\0\0\0\0\0\xb7\x03\0\0\x23\0\0\0\xb7\x05\0\0\0\0\0\0\x85\0\0\0\x7e\0\0\0\ +\x61\x71\0\0\0\0\0\0\x7b\x1a\xe8\xff\0\0\0\0\xb7\x01\0\0\x04\0\0\0\xbf\x72\0\0\ +\0\0\0\0\x0f\x12\0\0\0\0\0\0\x7b\x2a\xf0\xff\0\0\0\0\x61\x71\x14\0\0\0\0\0\x7b\ +\x1a\xf8\xff\0\0\0\0\xbf\xa4\0\0\0\0\0\0\x07\x04\0\0\xe8\xff\xff\xff\xbf\x61\0\ +\0\0\0\0\0\x18\x62\0\0\0\0\0\0\0\0\0\0\x23\0\0\0\xb7\x03\0\0\x0e\0\0\0\xb7\x05\ +\0\0\x18\0\0\0\x85\0\0\0\x7e\0\0\0\xb7\0\0\0\0\0\0\0\x95\0\0\0\0\0\0\0\0\0\0\0\ +\x07\0\0\0\0\0\0\0\x42\0\0\0\x89\0\0\0\x1e\x3c\x01\0\x01\0\0\0\x42\0\0\0\x89\0\ +\0\0\x24\x3c\x01\0\x02\0\0\0\x42\0\0\0\0\x01\0\0\x1d\x44\x01\0\x03\0\0\0\x42\0\ +\0\0\x21\x01\0\0\x06\x4c\x01\0\x04\0\0\0\x42\0\0\0\x2c\x01\0\0\x17\x40\x01\0\ +\x05\0\0\0\x42\0\0\0\x2c\x01\0\0\x1d\x40\x01\0\x06\0\0\0\x42\0\0\0\x55\x01\0\0\ +\x06\x58\x01\0\x08\0\0\0\x42\0\0\0\x68\x01\0\0\x03\x5c\x01\0\x0f\0\0\0\x42\0\0\ +\0\xee\x01\0\0\x02\x64\x01\0\x1f\0\0\0\x42\0\0\0\x3c\x02\0\0\x01\x6c\x01\0\0\0\ +\0\0\x02\0\0\0\x3e\0\0\0\0\0\0\0\x08\0\0\0\x08\0\0\0\x3e\0\0\0\0\0\0\0\x10\0\0\ +\0\x02\0\0\0\xfc\0\0\0\0\0\0\0\x20\0\0\0\x02\0\0\0\x3e\0\0\0\0\0\0\0\x28\0\0\0\ +\x08\0\0\0\x51\x01\0\0\0\0\0\0\x78\0\0\0\x0d\0\0\0\x3e\0\0\0\0\0\0\0\x88\0\0\0\ +\x0d\0\0\0\xfc\0\0\0\0\0\0\0\xa8\0\0\0\x0d\0\0\0\x51\x01\0\0\0\0\0\0\x1a\0\0\0\ +\x21\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\ 
+\0\0\0\0\0\0\x64\x75\x6d\x70\x5f\x62\x70\x66\x5f\x6d\x61\x70\0\0\0\0\0\0\0\0\ +\x1c\0\0\0\0\0\0\0\x08\0\0\0\0\0\0\0\0\0\0\0\x01\0\0\0\x10\0\0\0\0\0\0\0\0\0\0\ +\0\x0a\0\0\0\x01\0\0\0\0\0\0\0\x08\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\x10\0\ +\0\0\0\0\0\0\x62\x70\x66\x5f\x69\x74\x65\x72\x5f\x62\x70\x66\x5f\x6d\x61\x70\0\ +\0\0\0\0\0\0\0\x47\x50\x4c\0\0\0\0\0\x79\x12\0\0\0\0\0\0\x79\x26\0\0\0\0\0\0\ +\x79\x12\x08\0\0\0\0\0\x15\x02\x3c\0\0\0\0\0\x79\x11\0\0\0\0\0\0\x79\x27\0\0\0\ +\0\0\0\x79\x11\x10\0\0\0\0\0\x55\x01\x08\0\0\0\0\0\xbf\xa4\0\0\0\0\0\0\x07\x04\ +\0\0\xd0\xff\xff\xff\xbf\x61\0\0\0\0\0\0\x18\x62\0\0\0\0\0\0\0\0\0\0\x31\0\0\0\ +\xb7\x03\0\0\x20\0\0\0\xb7\x05\0\0\0\0\0\0\x85\0\0\0\x7e\0\0\0\x7b\x6a\xc8\xff\ +\0\0\0\0\x61\x71\0\0\0\0\0\0\x7b\x1a\xd0\xff\0\0\0\0\xb7\x03\0\0\x04\0\0\0\xbf\ +\x79\0\0\0\0\0\0\x0f\x39\0\0\0\0\0\0\x79\x71\x28\0\0\0\0\0\x79\x78\x30\0\0\0\0\ +\0\x15\x08\x18\0\0\0\0\0\xb7\x02\0\0\0\0\0\0\x0f\x21\0\0\0\0\0\0\x61\x11\x04\0\ +\0\0\0\0\x79\x83\x08\0\0\0\0\0\x67\x01\0\0\x03\0\0\0\x0f\x13\0\0\0\0\0\0\x79\ +\x86\0\0\0\0\0\0\xbf\xa1\0\0\0\0\0\0\x07\x01\0\0\xf8\xff\xff\xff\xb7\x02\0\0\ +\x08\0\0\0\x85\0\0\0\x71\0\0\0\xb7\x01\0\0\0\0\0\0\x79\xa3\xf8\xff\0\0\0\0\x0f\ +\x13\0\0\0\0\0\0\xbf\xa1\0\0\0\0\0\0\x07\x01\0\0\xf4\xff\xff\xff\xb7\x02\0\0\ +\x04\0\0\0\x85\0\0\0\x71\0\0\0\xb7\x03\0\0\x04\0\0\0\x61\xa1\xf4\xff\0\0\0\0\ +\x61\x82\x10\0\0\0\0\0\x3d\x21\x02\0\0\0\0\0\x0f\x16\0\0\0\0\0\0\xbf\x69\0\0\0\ +\0\0\0\x7b\x9a\xd8\xff\0\0\0\0\x79\x71\x18\0\0\0\0\0\x7b\x1a\xe0\xff\0\0\0\0\ +\x79\x71\x20\0\0\0\0\0\x79\x11\0\0\0\0\0\0\x0f\x31\0\0\0\0\0\0\x7b\x1a\xe8\xff\ +\0\0\0\0\xbf\xa4\0\0\0\0\0\0\x07\x04\0\0\xd0\xff\xff\xff\x79\xa1\xc8\xff\0\0\0\ +\0\x18\x62\0\0\0\0\0\0\0\0\0\0\x51\0\0\0\xb7\x03\0\0\x11\0\0\0\xb7\x05\0\0\x20\ +\0\0\0\x85\0\0\0\x7e\0\0\0\xb7\0\0\0\0\0\0\0\x95\0\0\0\0\0\0\0\0\0\0\0\x17\0\0\ +\0\0\0\0\0\x42\0\0\0\x89\0\0\0\x1e\x80\x01\0\x01\0\0\0\x42\0\0\0\x89\0\0\0\x24\ 
+\x80\x01\0\x02\0\0\0\x42\0\0\0\x72\x02\0\0\x1f\x88\x01\0\x03\0\0\0\x42\0\0\0\ +\x96\x02\0\0\x06\x94\x01\0\x04\0\0\0\x42\0\0\0\x2c\x01\0\0\x17\x84\x01\0\x05\0\ +\0\0\x42\0\0\0\xaf\x02\0\0\x0e\xa0\x01\0\x06\0\0\0\x42\0\0\0\x2c\x01\0\0\x1d\ +\x84\x01\0\x07\0\0\0\x42\0\0\0\x55\x01\0\0\x06\xa4\x01\0\x09\0\0\0\x42\0\0\0\ +\xc1\x02\0\0\x03\xa8\x01\0\x11\0\0\0\x42\0\0\0\x31\x03\0\0\x02\xb0\x01\0\x18\0\ +\0\0\x42\0\0\0\x6c\x03\0\0\x06\x04\x01\0\x1b\0\0\0\x42\0\0\0\0\0\0\0\0\0\0\0\ +\x1c\0\0\0\x42\0\0\0\xbd\x03\0\0\x0f\x10\x01\0\x1d\0\0\0\x42\0\0\0\xd2\x03\0\0\ +\x2d\x14\x01\0\x1f\0\0\0\x42\0\0\0\x09\x04\0\0\x0d\x0c\x01\0\x21\0\0\0\x42\0\0\ +\0\0\0\0\0\0\0\0\0\x22\0\0\0\x42\0\0\0\xd2\x03\0\0\x02\x14\x01\0\x25\0\0\0\x42\ +\0\0\0\x30\x04\0\0\x0d\x18\x01\0\x28\0\0\0\x42\0\0\0\0\0\0\0\0\0\0\0\x29\0\0\0\ +\x42\0\0\0\x30\x04\0\0\x0d\x18\x01\0\x2c\0\0\0\x42\0\0\0\x30\x04\0\0\x0d\x18\ +\x01\0\x2d\0\0\0\x42\0\0\0\x5e\x04\0\0\x1b\x1c\x01\0\x2e\0\0\0\x42\0\0\0\x5e\ +\x04\0\0\x06\x1c\x01\0\x2f\0\0\0\x42\0\0\0\x81\x04\0\0\x0d\x24\x01\0\x30\0\0\0\ +\x42\0\0\0\0\0\0\0\0\0\0\0\x31\0\0\0\x42\0\0\0\x31\x03\0\0\x02\xb0\x01\0\x40\0\ +\0\0\x42\0\0\0\x3c\x02\0\0\x01\xc0\x01\0\0\0\0\0\x14\0\0\0\x3e\0\0\0\0\0\0\0\ +\x08\0\0\0\x08\0\0\0\x3e\0\0\0\0\0\0\0\x10\0\0\0\x14\0\0\0\xfc\0\0\0\0\0\0\0\ +\x20\0\0\0\x14\0\0\0\x3e\0\0\0\0\0\0\0\x28\0\0\0\x18\0\0\0\x3e\0\0\0\0\0\0\0\ +\x30\0\0\0\x08\0\0\0\x51\x01\0\0\0\0\0\0\x88\0\0\0\x1a\0\0\0\x3e\0\0\0\0\0\0\0\ +\x98\0\0\0\x1a\0\0\0\xfc\0\0\0\0\0\0\0\xb0\0\0\0\x1a\0\0\0\x64\x03\0\0\0\0\0\0\ +\xb8\0\0\0\x1a\0\0\0\x68\x03\0\0\0\0\0\0\xc8\0\0\0\x1f\0\0\0\x96\x03\0\0\0\0\0\ +\0\xe0\0\0\0\x20\0\0\0\xfc\0\0\0\0\0\0\0\xf8\0\0\0\x20\0\0\0\x3e\0\0\0\0\0\0\0\ +\x20\x01\0\0\x24\0\0\0\x3e\0\0\0\0\0\0\0\x58\x01\0\0\x1a\0\0\0\xfc\0\0\0\0\0\0\ +\0\x68\x01\0\0\x20\0\0\0\x58\x04\0\0\0\0\0\0\x90\x01\0\0\x1a\0\0\0\x51\x01\0\0\ +\0\0\0\0\xa0\x01\0\0\x1a\0\0\0\x99\x04\0\0\0\0\0\0\xa8\x01\0\0\x18\0\0\0\x3e\0\ 
+\0\0\0\0\0\0\x1a\0\0\0\x42\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\ +\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\x64\x75\x6d\x70\x5f\x62\x70\x66\x5f\x70\x72\ +\x6f\x67\0\0\0\0\0\0\0\x1c\0\0\0\0\0\0\0\x08\0\0\0\0\0\0\0\0\0\0\0\x01\0\0\0\ +\x10\0\0\0\0\0\0\0\0\0\0\0\x1b\0\0\0\x01\0\0\0\0\0\0\0\x13\0\0\0\0\0\0\0\0\0\0\ +\0\0\0\0\0\0\0\0\0\x10\0\0\0\0\0\0\0\x62\x70\x66\x5f\x69\x74\x65\x72\x5f\x62\ +\x70\x66\x5f\x70\x72\x6f\x67\0\0\0\0\0\0\0"; opts.insns_sz = 2216; opts.insns = (void *)"\ \xbf\x16\0\0\0\0\0\0\xbf\xa1\0\0\0\0\0\0\x07\x01\0\0\x78\xff\xff\xff\xb7\x02\0\ @@ -331,66 +332,66 @@ iterators_bpf__load(struct iterators_bpf *skel) \0\0\0\x85\0\0\0\xa8\0\0\0\x18\x60\0\0\0\0\0\0\0\0\0\0\0\0\0\0\x61\x01\0\0\0\0\ \0\0\xd5\x01\x02\0\0\0\0\0\xbf\x19\0\0\0\0\0\0\x85\0\0\0\xa8\0\0\0\xbf\x70\0\0\ \0\0\0\0\x95\0\0\0\0\0\0\0\x61\x60\x08\0\0\0\0\0\x18\x61\0\0\0\0\0\0\0\0\0\0\ -\x48\x0e\0\0\x63\x01\0\0\0\0\0\0\x61\x60\x0c\0\0\0\0\0\x18\x61\0\0\0\0\0\0\0\0\ -\0\0\x44\x0e\0\0\x63\x01\0\0\0\0\0\0\x79\x60\x10\0\0\0\0\0\x18\x61\0\0\0\0\0\0\ -\0\0\0\0\x38\x0e\0\0\x7b\x01\0\0\0\0\0\0\x18\x60\0\0\0\0\0\0\0\0\0\0\0\x05\0\0\ -\x18\x61\0\0\0\0\0\0\0\0\0\0\x30\x0e\0\0\x7b\x01\0\0\0\0\0\0\xb7\x01\0\0\x12\0\ -\0\0\x18\x62\0\0\0\0\0\0\0\0\0\0\x30\x0e\0\0\xb7\x03\0\0\x1c\0\0\0\x85\0\0\0\ +\x58\x0e\0\0\x63\x01\0\0\0\0\0\0\x61\x60\x0c\0\0\0\0\0\x18\x61\0\0\0\0\0\0\0\0\ +\0\0\x54\x0e\0\0\x63\x01\0\0\0\0\0\0\x79\x60\x10\0\0\0\0\0\x18\x61\0\0\0\0\0\0\ +\0\0\0\0\x48\x0e\0\0\x7b\x01\0\0\0\0\0\0\x18\x60\0\0\0\0\0\0\0\0\0\0\0\x05\0\0\ +\x18\x61\0\0\0\0\0\0\0\0\0\0\x40\x0e\0\0\x7b\x01\0\0\0\0\0\0\xb7\x01\0\0\x12\0\ +\0\0\x18\x62\0\0\0\0\0\0\0\0\0\0\x40\x0e\0\0\xb7\x03\0\0\x1c\0\0\0\x85\0\0\0\ \xa6\0\0\0\xbf\x07\0\0\0\0\0\0\xc5\x07\xd4\xff\0\0\0\0\x63\x7a\x78\xff\0\0\0\0\ -\x61\xa0\x78\xff\0\0\0\0\x18\x61\0\0\0\0\0\0\0\0\0\0\x80\x0e\0\0\x63\x01\0\0\0\ +\x61\xa0\x78\xff\0\0\0\0\x18\x61\0\0\0\0\0\0\0\0\0\0\x90\x0e\0\0\x63\x01\0\0\0\ 
\0\0\0\x61\x60\x1c\0\0\0\0\0\x15\0\x03\0\0\0\0\0\x18\x61\0\0\0\0\0\0\0\0\0\0\ -\x5c\x0e\0\0\x63\x01\0\0\0\0\0\0\xb7\x01\0\0\0\0\0\0\x18\x62\0\0\0\0\0\0\0\0\0\ -\0\x50\x0e\0\0\xb7\x03\0\0\x48\0\0\0\x85\0\0\0\xa6\0\0\0\xbf\x07\0\0\0\0\0\0\ +\x6c\x0e\0\0\x63\x01\0\0\0\0\0\0\xb7\x01\0\0\0\0\0\0\x18\x62\0\0\0\0\0\0\0\0\0\ +\0\x60\x0e\0\0\xb7\x03\0\0\x48\0\0\0\x85\0\0\0\xa6\0\0\0\xbf\x07\0\0\0\0\0\0\ \xc5\x07\xc3\xff\0\0\0\0\x18\x61\0\0\0\0\0\0\0\0\0\0\0\0\0\0\x63\x71\0\0\0\0\0\ -\0\x79\x63\x20\0\0\0\0\0\x15\x03\x08\0\0\0\0\0\x18\x61\0\0\0\0\0\0\0\0\0\0\x98\ +\0\x79\x63\x20\0\0\0\0\0\x15\x03\x08\0\0\0\0\0\x18\x61\0\0\0\0\0\0\0\0\0\0\xa8\ \x0e\0\0\xb7\x02\0\0\x62\0\0\0\x61\x60\x04\0\0\0\0\0\x45\0\x02\0\x01\0\0\0\x85\ \0\0\0\x94\0\0\0\x05\0\x01\0\0\0\0\0\x85\0\0\0\x71\0\0\0\x18\x62\0\0\0\0\0\0\0\ -\0\0\0\0\0\0\0\x61\x20\0\0\0\0\0\0\x18\x61\0\0\0\0\0\0\0\0\0\0\x08\x0f\0\0\x63\ -\x01\0\0\0\0\0\0\x18\x60\0\0\0\0\0\0\0\0\0\0\0\x0f\0\0\x18\x61\0\0\0\0\0\0\0\0\ -\0\0\x10\x0f\0\0\x7b\x01\0\0\0\0\0\0\x18\x60\0\0\0\0\0\0\0\0\0\0\x98\x0e\0\0\ -\x18\x61\0\0\0\0\0\0\0\0\0\0\x18\x0f\0\0\x7b\x01\0\0\0\0\0\0\xb7\x01\0\0\x02\0\ -\0\0\x18\x62\0\0\0\0\0\0\0\0\0\0\x08\x0f\0\0\xb7\x03\0\0\x20\0\0\0\x85\0\0\0\ +\0\0\0\0\0\0\0\x61\x20\0\0\0\0\0\0\x18\x61\0\0\0\0\0\0\0\0\0\0\x18\x0f\0\0\x63\ +\x01\0\0\0\0\0\0\x18\x60\0\0\0\0\0\0\0\0\0\0\x10\x0f\0\0\x18\x61\0\0\0\0\0\0\0\ +\0\0\0\x20\x0f\0\0\x7b\x01\0\0\0\0\0\0\x18\x60\0\0\0\0\0\0\0\0\0\0\xa8\x0e\0\0\ +\x18\x61\0\0\0\0\0\0\0\0\0\0\x28\x0f\0\0\x7b\x01\0\0\0\0\0\0\xb7\x01\0\0\x02\0\ +\0\0\x18\x62\0\0\0\0\0\0\0\0\0\0\x18\x0f\0\0\xb7\x03\0\0\x20\0\0\0\x85\0\0\0\ \xa6\0\0\0\xbf\x07\0\0\0\0\0\0\xc5\x07\x9f\xff\0\0\0\0\x18\x62\0\0\0\0\0\0\0\0\ -\0\0\0\0\0\0\x61\x20\0\0\0\0\0\0\x18\x61\0\0\0\0\0\0\0\0\0\0\x28\x0f\0\0\x63\ -\x01\0\0\0\0\0\0\xb7\x01\0\0\x16\0\0\0\x18\x62\0\0\0\0\0\0\0\0\0\0\x28\x0f\0\0\ +\0\0\0\0\0\0\x61\x20\0\0\0\0\0\0\x18\x61\0\0\0\0\0\0\0\0\0\0\x38\x0f\0\0\x63\ 
+\x01\0\0\0\0\0\0\xb7\x01\0\0\x16\0\0\0\x18\x62\0\0\0\0\0\0\0\0\0\0\x38\x0f\0\0\ \xb7\x03\0\0\x04\0\0\0\x85\0\0\0\xa6\0\0\0\xbf\x07\0\0\0\0\0\0\xc5\x07\x92\xff\ -\0\0\0\0\x18\x60\0\0\0\0\0\0\0\0\0\0\x30\x0f\0\0\x18\x61\0\0\0\0\0\0\0\0\0\0\ -\x78\x11\0\0\x7b\x01\0\0\0\0\0\0\x18\x60\0\0\0\0\0\0\0\0\0\0\x38\x0f\0\0\x18\ -\x61\0\0\0\0\0\0\0\0\0\0\x70\x11\0\0\x7b\x01\0\0\0\0\0\0\x18\x60\0\0\0\0\0\0\0\ -\0\0\0\x40\x10\0\0\x18\x61\0\0\0\0\0\0\0\0\0\0\xb8\x11\0\0\x7b\x01\0\0\0\0\0\0\ -\x18\x60\0\0\0\0\0\0\0\0\0\0\x48\x10\0\0\x18\x61\0\0\0\0\0\0\0\0\0\0\xc8\x11\0\ -\0\x7b\x01\0\0\0\0\0\0\x18\x60\0\0\0\0\0\0\0\0\0\0\xe8\x10\0\0\x18\x61\0\0\0\0\ -\0\0\0\0\0\0\xe8\x11\0\0\x7b\x01\0\0\0\0\0\0\x18\x60\0\0\0\0\0\0\0\0\0\0\0\0\0\ -\0\x18\x61\0\0\0\0\0\0\0\0\0\0\xe0\x11\0\0\x7b\x01\0\0\0\0\0\0\x61\x60\x08\0\0\ -\0\0\0\x18\x61\0\0\0\0\0\0\0\0\0\0\x80\x11\0\0\x63\x01\0\0\0\0\0\0\x61\x60\x0c\ -\0\0\0\0\0\x18\x61\0\0\0\0\0\0\0\0\0\0\x84\x11\0\0\x63\x01\0\0\0\0\0\0\x79\x60\ -\x10\0\0\0\0\0\x18\x61\0\0\0\0\0\0\0\0\0\0\x88\x11\0\0\x7b\x01\0\0\0\0\0\0\x61\ -\xa0\x78\xff\0\0\0\0\x18\x61\0\0\0\0\0\0\0\0\0\0\xb0\x11\0\0\x63\x01\0\0\0\0\0\ -\0\x18\x61\0\0\0\0\0\0\0\0\0\0\xf8\x11\0\0\xb7\x02\0\0\x11\0\0\0\xb7\x03\0\0\ +\0\0\0\0\x18\x60\0\0\0\0\0\0\0\0\0\0\x40\x0f\0\0\x18\x61\0\0\0\0\0\0\0\0\0\0\ +\x88\x11\0\0\x7b\x01\0\0\0\0\0\0\x18\x60\0\0\0\0\0\0\0\0\0\0\x48\x0f\0\0\x18\ +\x61\0\0\0\0\0\0\0\0\0\0\x80\x11\0\0\x7b\x01\0\0\0\0\0\0\x18\x60\0\0\0\0\0\0\0\ +\0\0\0\x50\x10\0\0\x18\x61\0\0\0\0\0\0\0\0\0\0\xc8\x11\0\0\x7b\x01\0\0\0\0\0\0\ +\x18\x60\0\0\0\0\0\0\0\0\0\0\x58\x10\0\0\x18\x61\0\0\0\0\0\0\0\0\0\0\xd8\x11\0\ +\0\x7b\x01\0\0\0\0\0\0\x18\x60\0\0\0\0\0\0\0\0\0\0\xf8\x10\0\0\x18\x61\0\0\0\0\ +\0\0\0\0\0\0\xf8\x11\0\0\x7b\x01\0\0\0\0\0\0\x18\x60\0\0\0\0\0\0\0\0\0\0\0\0\0\ +\0\x18\x61\0\0\0\0\0\0\0\0\0\0\xf0\x11\0\0\x7b\x01\0\0\0\0\0\0\x61\x60\x08\0\0\ +\0\0\0\x18\x61\0\0\0\0\0\0\0\0\0\0\x90\x11\0\0\x63\x01\0\0\0\0\0\0\x61\x60\x0c\ 
+\0\0\0\0\0\x18\x61\0\0\0\0\0\0\0\0\0\0\x94\x11\0\0\x63\x01\0\0\0\0\0\0\x79\x60\ +\x10\0\0\0\0\0\x18\x61\0\0\0\0\0\0\0\0\0\0\x98\x11\0\0\x7b\x01\0\0\0\0\0\0\x61\ +\xa0\x78\xff\0\0\0\0\x18\x61\0\0\0\0\0\0\0\0\0\0\xc0\x11\0\0\x63\x01\0\0\0\0\0\ +\0\x18\x61\0\0\0\0\0\0\0\0\0\0\x08\x12\0\0\xb7\x02\0\0\x11\0\0\0\xb7\x03\0\0\ \x0c\0\0\0\xb7\x04\0\0\0\0\0\0\x85\0\0\0\xa7\0\0\0\xbf\x07\0\0\0\0\0\0\xc5\x07\ -\x5c\xff\0\0\0\0\x18\x60\0\0\0\0\0\0\0\0\0\0\x68\x11\0\0\x63\x70\x6c\0\0\0\0\0\ +\x5c\xff\0\0\0\0\x18\x60\0\0\0\0\0\0\0\0\0\0\x78\x11\0\0\x63\x70\x6c\0\0\0\0\0\ \x77\x07\0\0\x20\0\0\0\x63\x70\x70\0\0\0\0\0\xb7\x01\0\0\x05\0\0\0\x18\x62\0\0\ -\0\0\0\0\0\0\0\0\x68\x11\0\0\xb7\x03\0\0\x8c\0\0\0\x85\0\0\0\xa6\0\0\0\xbf\x07\ -\0\0\0\0\0\0\x18\x60\0\0\0\0\0\0\0\0\0\0\xd8\x11\0\0\x61\x01\0\0\0\0\0\0\xd5\ +\0\0\0\0\0\0\0\0\x78\x11\0\0\xb7\x03\0\0\x8c\0\0\0\x85\0\0\0\xa6\0\0\0\xbf\x07\ +\0\0\0\0\0\0\x18\x60\0\0\0\0\0\0\0\0\0\0\xe8\x11\0\0\x61\x01\0\0\0\0\0\0\xd5\ \x01\x02\0\0\0\0\0\xbf\x19\0\0\0\0\0\0\x85\0\0\0\xa8\0\0\0\xc5\x07\x4a\xff\0\0\ -\0\0\x63\x7a\x80\xff\0\0\0\0\x18\x60\0\0\0\0\0\0\0\0\0\0\x10\x12\0\0\x18\x61\0\ -\0\0\0\0\0\0\0\0\0\x10\x17\0\0\x7b\x01\0\0\0\0\0\0\x18\x60\0\0\0\0\0\0\0\0\0\0\ -\x18\x12\0\0\x18\x61\0\0\0\0\0\0\0\0\0\0\x08\x17\0\0\x7b\x01\0\0\0\0\0\0\x18\ -\x60\0\0\0\0\0\0\0\0\0\0\x28\x14\0\0\x18\x61\0\0\0\0\0\0\0\0\0\0\x50\x17\0\0\ -\x7b\x01\0\0\0\0\0\0\x18\x60\0\0\0\0\0\0\0\0\0\0\x30\x14\0\0\x18\x61\0\0\0\0\0\ -\0\0\0\0\0\x60\x17\0\0\x7b\x01\0\0\0\0\0\0\x18\x60\0\0\0\0\0\0\0\0\0\0\xd0\x15\ -\0\0\x18\x61\0\0\0\0\0\0\0\0\0\0\x80\x17\0\0\x7b\x01\0\0\0\0\0\0\x18\x60\0\0\0\ -\0\0\0\0\0\0\0\0\0\0\0\x18\x61\0\0\0\0\0\0\0\0\0\0\x78\x17\0\0\x7b\x01\0\0\0\0\ -\0\0\x61\x60\x08\0\0\0\0\0\x18\x61\0\0\0\0\0\0\0\0\0\0\x18\x17\0\0\x63\x01\0\0\ -\0\0\0\0\x61\x60\x0c\0\0\0\0\0\x18\x61\0\0\0\0\0\0\0\0\0\0\x1c\x17\0\0\x63\x01\ -\0\0\0\0\0\0\x79\x60\x10\0\0\0\0\0\x18\x61\0\0\0\0\0\0\0\0\0\0\x20\x17\0\0\x7b\ 
-\x01\0\0\0\0\0\0\x61\xa0\x78\xff\0\0\0\0\x18\x61\0\0\0\0\0\0\0\0\0\0\x48\x17\0\ -\0\x63\x01\0\0\0\0\0\0\x18\x61\0\0\0\0\0\0\0\0\0\0\x90\x17\0\0\xb7\x02\0\0\x12\ +\0\0\x63\x7a\x80\xff\0\0\0\0\x18\x60\0\0\0\0\0\0\0\0\0\0\x20\x12\0\0\x18\x61\0\ +\0\0\0\0\0\0\0\0\0\x30\x17\0\0\x7b\x01\0\0\0\0\0\0\x18\x60\0\0\0\0\0\0\0\0\0\0\ +\x28\x12\0\0\x18\x61\0\0\0\0\0\0\0\0\0\0\x28\x17\0\0\x7b\x01\0\0\0\0\0\0\x18\ +\x60\0\0\0\0\0\0\0\0\0\0\x38\x14\0\0\x18\x61\0\0\0\0\0\0\0\0\0\0\x70\x17\0\0\ +\x7b\x01\0\0\0\0\0\0\x18\x60\0\0\0\0\0\0\0\0\0\0\x40\x14\0\0\x18\x61\0\0\0\0\0\ +\0\0\0\0\0\x80\x17\0\0\x7b\x01\0\0\0\0\0\0\x18\x60\0\0\0\0\0\0\0\0\0\0\xf0\x15\ +\0\0\x18\x61\0\0\0\0\0\0\0\0\0\0\xa0\x17\0\0\x7b\x01\0\0\0\0\0\0\x18\x60\0\0\0\ +\0\0\0\0\0\0\0\0\0\0\0\x18\x61\0\0\0\0\0\0\0\0\0\0\x98\x17\0\0\x7b\x01\0\0\0\0\ +\0\0\x61\x60\x08\0\0\0\0\0\x18\x61\0\0\0\0\0\0\0\0\0\0\x38\x17\0\0\x63\x01\0\0\ +\0\0\0\0\x61\x60\x0c\0\0\0\0\0\x18\x61\0\0\0\0\0\0\0\0\0\0\x3c\x17\0\0\x63\x01\ +\0\0\0\0\0\0\x79\x60\x10\0\0\0\0\0\x18\x61\0\0\0\0\0\0\0\0\0\0\x40\x17\0\0\x7b\ +\x01\0\0\0\0\0\0\x61\xa0\x78\xff\0\0\0\0\x18\x61\0\0\0\0\0\0\0\0\0\0\x68\x17\0\ +\0\x63\x01\0\0\0\0\0\0\x18\x61\0\0\0\0\0\0\0\0\0\0\xb0\x17\0\0\xb7\x02\0\0\x12\ \0\0\0\xb7\x03\0\0\x0c\0\0\0\xb7\x04\0\0\0\0\0\0\x85\0\0\0\xa7\0\0\0\xbf\x07\0\ -\0\0\0\0\0\xc5\x07\x13\xff\0\0\0\0\x18\x60\0\0\0\0\0\0\0\0\0\0\0\x17\0\0\x63\ +\0\0\0\0\0\xc5\x07\x13\xff\0\0\0\0\x18\x60\0\0\0\0\0\0\0\0\0\0\x20\x17\0\0\x63\ \x70\x6c\0\0\0\0\0\x77\x07\0\0\x20\0\0\0\x63\x70\x70\0\0\0\0\0\xb7\x01\0\0\x05\ -\0\0\0\x18\x62\0\0\0\0\0\0\0\0\0\0\0\x17\0\0\xb7\x03\0\0\x8c\0\0\0\x85\0\0\0\ -\xa6\0\0\0\xbf\x07\0\0\0\0\0\0\x18\x60\0\0\0\0\0\0\0\0\0\0\x70\x17\0\0\x61\x01\ +\0\0\0\x18\x62\0\0\0\0\0\0\0\0\0\0\x20\x17\0\0\xb7\x03\0\0\x8c\0\0\0\x85\0\0\0\ +\xa6\0\0\0\xbf\x07\0\0\0\0\0\0\x18\x60\0\0\0\0\0\0\0\0\0\0\x90\x17\0\0\x61\x01\ \0\0\0\0\0\0\xd5\x01\x02\0\0\0\0\0\xbf\x19\0\0\0\0\0\0\x85\0\0\0\xa8\0\0\0\xc5\ 
\x07\x01\xff\0\0\0\0\x63\x7a\x84\xff\0\0\0\0\x61\xa1\x78\xff\0\0\0\0\xd5\x01\ \x02\0\0\0\0\0\xbf\x19\0\0\0\0\0\0\x85\0\0\0\xa8\0\0\0\x61\xa0\x80\xff\0\0\0\0\ @@ -422,4 +423,19 @@ iterators_bpf__open_and_load(void) return skel; } +__attribute__((unused)) static void +iterators_bpf__assert(struct iterators_bpf *s) +{ +#ifdef __cplusplus +#define _Static_assert static_assert +#endif +#ifdef __cplusplus +#undef _Static_assert +#endif +} + +static struct bpf_link *dump_bpf_map_link; +static struct bpf_link *dump_bpf_prog_link; +static struct iterators_bpf *skel; + #endif /* __ITERATORS_BPF_SKEL_H__ */ diff --git a/tools/bpf/bpftool/Documentation/bpftool-gen.rst b/tools/bpf/bpftool/Documentation/bpftool-gen.rst index 68454ef28f58..74bbefa28212 100644 --- a/tools/bpf/bpftool/Documentation/bpftool-gen.rst +++ b/tools/bpf/bpftool/Documentation/bpftool-gen.rst @@ -208,6 +208,11 @@ OPTIONS not use the majority of the libbpf infrastructure, and does not need libelf. + -P, --gen-preload-methods + For light skeletons, generate the static variables and the methods + required to preload an eBPF program and pin its objects to the bpf + filesystem. 
+ EXAMPLES ======== **$ cat example1.bpf.c** diff --git a/tools/bpf/bpftool/bash-completion/bpftool b/tools/bpf/bpftool/bash-completion/bpftool index 5df8d72c5179..6e433e86fb26 100644 --- a/tools/bpf/bpftool/bash-completion/bpftool +++ b/tools/bpf/bpftool/bash-completion/bpftool @@ -261,7 +261,7 @@ _bpftool() # Deal with options if [[ ${words[cword]} == -* ]]; then local c='--version --json --pretty --bpffs --mapcompat --debug \ - --use-loader --base-btf --legacy' + --use-loader --gen-preload-methods --base-btf --legacy' COMPREPLY=( $( compgen -W "$c" -- "$cur" ) ) return 0 fi diff --git a/tools/bpf/bpftool/gen.c b/tools/bpf/bpftool/gen.c index 7ba7ff55d2ea..c62c4c65b631 100644 --- a/tools/bpf/bpftool/gen.c +++ b/tools/bpf/bpftool/gen.c @@ -652,6 +652,28 @@ static void codegen_destroy(struct bpf_object *obj, const char *obj_name) obj_name); } +static void codegen_preload_vars(struct bpf_object *obj, const char *obj_name) +{ + struct bpf_program *prog; + + codegen("\ + \n\ + \n\ + "); + + bpf_object__for_each_program(prog, obj) { + codegen("\ + \n\ + static struct bpf_link *%s_link; \n\ + ", bpf_program__name(prog)); + } + + codegen("\ + \n\ + static struct %s *skel; \n\ + ", obj_name); +} + static int gen_trace(struct bpf_object *obj, const char *obj_name, const char *header_guard) { DECLARE_LIBBPF_OPTS(gen_loader_opts, opts); @@ -800,6 +822,10 @@ static int gen_trace(struct bpf_object *obj, const char *obj_name, const char *h codegen_asserts(obj, obj_name); + if (gen_preload_methods) { + codegen_preload_vars(obj, obj_name); + } + codegen("\ \n\ \n\ @@ -1615,6 +1641,7 @@ static int do_help(int argc, char **argv) "\n" " " HELP_SPEC_OPTIONS " |\n" " {-L|--use-loader} }\n" + " {-P|--gen-preload-methods} }\n" "", bin_name, "gen"); diff --git a/tools/bpf/bpftool/main.c b/tools/bpf/bpftool/main.c index e81227761f5d..5d5dae6215a3 100644 --- a/tools/bpf/bpftool/main.c +++ b/tools/bpf/bpftool/main.c @@ -31,6 +31,7 @@ bool block_mount; bool verifier_logs; bool relaxed_maps; 
bool use_loader; +bool gen_preload_methods; bool legacy_libbpf; struct btf *base_btf; struct hashmap *refs_table; @@ -426,6 +427,7 @@ int main(int argc, char **argv) { "nomount", no_argument, NULL, 'n' }, { "debug", no_argument, NULL, 'd' }, { "use-loader", no_argument, NULL, 'L' }, + { "gen-preload-methods", no_argument, NULL, 'P' }, { "base-btf", required_argument, NULL, 'B' }, { "legacy", no_argument, NULL, 'l' }, { 0 } @@ -443,7 +445,7 @@ int main(int argc, char **argv) bin_name = argv[0]; opterr = 0; - while ((opt = getopt_long(argc, argv, "VhpjfLmndB:l", + while ((opt = getopt_long(argc, argv, "VhpjfLmndB:lP", options, NULL)) >= 0) { switch (opt) { case 'V': @@ -493,6 +495,9 @@ int main(int argc, char **argv) case 'l': legacy_libbpf = true; break; + case 'P': + gen_preload_methods = true; + break; default: p_err("unrecognized option '%s'", argv[optind - 1]); if (json_output) diff --git a/tools/bpf/bpftool/main.h b/tools/bpf/bpftool/main.h index 6e9277ffc68c..9485b354a084 100644 --- a/tools/bpf/bpftool/main.h +++ b/tools/bpf/bpftool/main.h @@ -90,6 +90,7 @@ extern bool block_mount; extern bool verifier_logs; extern bool relaxed_maps; extern bool use_loader; +extern bool gen_preload_methods; extern bool legacy_libbpf; extern struct btf *base_btf; extern struct hashmap *refs_table; From patchwork Mon Mar 28 17:50:21 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Roberto Sassu X-Patchwork-Id: 12794056 From: Roberto Sassu Subject: [PATCH 06/18] bpf-preload: Generate free_objs_and_skel() Date: Mon, 28 Mar 2022 19:50:21 +0200 Message-ID: <20220328175033.2437312-7-roberto.sassu@huawei.com> In-Reply-To: <20220328175033.2437312-1-roberto.sassu@huawei.com> References: <20220328175033.2437312-1-roberto.sassu@huawei.com> List-ID: X-Mailing-List: linux-integrity@vger.kernel.org Generate free_objs_and_skel() (renamed from free_links_and_skel()) to decrease the reference count of pinned objects, and to destroy the skeleton.
Signed-off-by: Roberto Sassu --- kernel/bpf/preload/bpf_preload_kern.c | 13 ++------- .../bpf/preload/iterators/iterators.lskel.h | 10 +++++++ tools/bpf/bpftool/gen.c | 28 +++++++++++++++++++ 3 files changed, 40 insertions(+), 11 deletions(-) diff --git a/kernel/bpf/preload/bpf_preload_kern.c b/kernel/bpf/preload/bpf_preload_kern.c index 485589e03bd2..8b49beb2b2d6 100644 --- a/kernel/bpf/preload/bpf_preload_kern.c +++ b/kernel/bpf/preload/bpf_preload_kern.c @@ -5,15 +5,6 @@ #include #include "iterators/iterators.lskel.h" -static void free_links_and_skel(void) -{ - if (!IS_ERR_OR_NULL(dump_bpf_map_link)) - bpf_link_put(dump_bpf_map_link); - if (!IS_ERR_OR_NULL(dump_bpf_prog_link)) - bpf_link_put(dump_bpf_prog_link); - iterators_bpf__destroy(skel); -} - static int preload(struct dentry *parent) { int err; @@ -75,7 +66,7 @@ static int load_skel(void) skel->links.dump_bpf_prog_fd = 0; return 0; out: - free_links_and_skel(); + free_objs_and_skel(); return err; } @@ -93,7 +84,7 @@ static int __init load(void) static void __exit fini(void) { bpf_preload_ops = NULL; - free_links_and_skel(); + free_objs_and_skel(); } late_initcall(load); module_exit(fini); diff --git a/kernel/bpf/preload/iterators/iterators.lskel.h b/kernel/bpf/preload/iterators/iterators.lskel.h index 9794acdfacf9..4afd983ad40e 100644 --- a/kernel/bpf/preload/iterators/iterators.lskel.h +++ b/kernel/bpf/preload/iterators/iterators.lskel.h @@ -438,4 +438,14 @@ static struct bpf_link *dump_bpf_map_link; static struct bpf_link *dump_bpf_prog_link; static struct iterators_bpf *skel; +static void free_objs_and_skel(void) +{ + if (!IS_ERR_OR_NULL(dump_bpf_map_link)) + bpf_link_put(dump_bpf_map_link); + if (!IS_ERR_OR_NULL(dump_bpf_prog_link)) + bpf_link_put(dump_bpf_prog_link); + + iterators_bpf__destroy(skel); +} + #endif /* __ITERATORS_BPF_SKEL_H__ */ diff --git a/tools/bpf/bpftool/gen.c b/tools/bpf/bpftool/gen.c index c62c4c65b631..e167aa092b7f 100644 --- a/tools/bpf/bpftool/gen.c +++ 
b/tools/bpf/bpftool/gen.c @@ -674,6 +674,33 @@ static void codegen_preload_vars(struct bpf_object *obj, const char *obj_name) ", obj_name); } +static void codegen_preload_free(struct bpf_object *obj, const char *obj_name) +{ + struct bpf_program *prog; + + codegen("\ + \n\ + \n\ + static void free_objs_and_skel(void) \n\ + { \n\ + "); + + bpf_object__for_each_program(prog, obj) { + codegen("\ + \n\ + if (!IS_ERR_OR_NULL(%1$s_link)) \n\ + bpf_link_put(%1$s_link); \n\ + ", bpf_program__name(prog)); + } + + codegen("\ + \n\ + \n\ + %s__destroy(skel); \n\ + } \n\ + ", obj_name); +} + static int gen_trace(struct bpf_object *obj, const char *obj_name, const char *header_guard) { DECLARE_LIBBPF_OPTS(gen_loader_opts, opts); @@ -824,6 +851,7 @@ static int gen_trace(struct bpf_object *obj, const char *obj_name, const char *h if (gen_preload_methods) { codegen_preload_vars(obj, obj_name); + codegen_preload_free(obj, obj_name); } codegen("\ From patchwork Mon Mar 28 17:50:22 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Roberto Sassu X-Patchwork-Id: 12794057 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 5C4B4C4321E for ; Mon, 28 Mar 2022 17:53:19 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S244817AbiC1Ry5 (ORCPT ); Mon, 28 Mar 2022 13:54:57 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:37060 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S244850AbiC1Ryj (ORCPT ); Mon, 28 Mar 2022 13:54:39 -0400 Received: from frasgout.his.huawei.com (frasgout.his.huawei.com [185.176.79.56]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id ACAB0657A5; Mon, 28 Mar 2022 10:52:30 -0700 (PDT) Received: from fraeml714-chm.china.huawei.com 
From: Roberto Sassu
Subject: [PATCH 07/18] bpf-preload: Generate preload()
Date: Mon, 28 Mar 2022 19:50:22 +0200
Message-ID: <20220328175033.2437312-8-roberto.sassu@huawei.com>
List-ID: X-Mailing-List: linux-integrity@vger.kernel.org

Generate preload() to pin the defined objects. For pinning, use the
just-exported bpf_obj_do_pin_kernel() function.
Signed-off-by: Roberto Sassu --- kernel/bpf/preload/bpf_preload_kern.c | 24 ------- .../bpf/preload/iterators/iterators.lskel.h | 26 ++++++++ tools/bpf/bpftool/gen.c | 64 +++++++++++++++++++ 3 files changed, 90 insertions(+), 24 deletions(-) diff --git a/kernel/bpf/preload/bpf_preload_kern.c b/kernel/bpf/preload/bpf_preload_kern.c index 8b49beb2b2d6..0869c889255c 100644 --- a/kernel/bpf/preload/bpf_preload_kern.c +++ b/kernel/bpf/preload/bpf_preload_kern.c @@ -5,30 +5,6 @@ #include #include "iterators/iterators.lskel.h" -static int preload(struct dentry *parent) -{ - int err; - - bpf_link_inc(dump_bpf_map_link); - bpf_link_inc(dump_bpf_prog_link); - - err = bpf_obj_do_pin_kernel(parent, "maps.debug", dump_bpf_map_link, - BPF_TYPE_LINK); - if (err) - goto undo; - - err = bpf_obj_do_pin_kernel(parent, "progs.debug", dump_bpf_prog_link, - BPF_TYPE_LINK); - if (err) - goto undo; - - return 0; -undo: - bpf_link_put(dump_bpf_map_link); - bpf_link_put(dump_bpf_prog_link); - return err; -} - static struct bpf_preload_ops ops = { .preload = preload, .owner = THIS_MODULE, diff --git a/kernel/bpf/preload/iterators/iterators.lskel.h b/kernel/bpf/preload/iterators/iterators.lskel.h index 4afd983ad40e..75b2e94b7547 100644 --- a/kernel/bpf/preload/iterators/iterators.lskel.h +++ b/kernel/bpf/preload/iterators/iterators.lskel.h @@ -448,4 +448,30 @@ static void free_objs_and_skel(void) iterators_bpf__destroy(skel); } +static int preload(struct dentry *parent) +{ + int err; + + bpf_link_inc(dump_bpf_map_link); + bpf_link_inc(dump_bpf_prog_link); + + err = bpf_obj_do_pin_kernel(parent, "maps.debug", + dump_bpf_map_link, + BPF_TYPE_LINK); + if (err) + goto undo; + + err = bpf_obj_do_pin_kernel(parent, "progs.debug", + dump_bpf_prog_link, + BPF_TYPE_LINK); + if (err) + goto undo; + + return 0; +undo: + bpf_link_put(dump_bpf_map_link); + bpf_link_put(dump_bpf_prog_link); + return err; +} + #endif /* __ITERATORS_BPF_SKEL_H__ */ diff --git a/tools/bpf/bpftool/gen.c 
b/tools/bpf/bpftool/gen.c index e167aa092b7f..fa2c6022b80d 100644 --- a/tools/bpf/bpftool/gen.c +++ b/tools/bpf/bpftool/gen.c @@ -701,6 +701,69 @@ static void codegen_preload_free(struct bpf_object *obj, const char *obj_name) ", obj_name); } +static void codegen_preload(struct bpf_object *obj, const char *obj_name) +{ + struct bpf_program *prog; + const char *link_name; + + codegen("\ + \n\ + \n\ + static int preload(struct dentry *parent) \n\ + { \n\ + int err; \n\ + \n\ + "); + + bpf_object__for_each_program(prog, obj) { + codegen("\ + \n\ + bpf_link_inc(%s_link); \n\ + ", bpf_program__name(prog)); + } + + bpf_object__for_each_program(prog, obj) { + link_name = bpf_program__name(prog); + /* These need to be hardcoded for compatibility reasons. */ + if (!strcmp(obj_name, "iterators_bpf")) { + if (!strcmp(link_name, "dump_bpf_map")) + link_name = "maps.debug"; + else if (!strcmp(link_name, "dump_bpf_prog")) + link_name = "progs.debug"; + } + + codegen("\ + \n\ + \n\ + err = bpf_obj_do_pin_kernel(parent, \"%s\", \n\ + %s_link, \n\ + BPF_TYPE_LINK); \n\ + if (err) \n\ + goto undo; \n\ + ", link_name, bpf_program__name(prog)); + } + + codegen("\ + \n\ + \n\ + return 0; \n\ + undo: \n\ + "); + + bpf_object__for_each_program(prog, obj) { + codegen("\ + \n\ + bpf_link_put(%s_link); \n\ + ", bpf_program__name(prog)); + } + + codegen("\ + \n\ + return err; \n\ + } \n\ + "); +} + static int gen_trace(struct bpf_object *obj, const char *obj_name, const char *header_guard) { DECLARE_LIBBPF_OPTS(gen_loader_opts, opts); @@ -852,6 +915,7 @@ static int gen_trace(struct bpf_object *obj, const char *obj_name, const char *h if (gen_preload_methods) { codegen_preload_vars(obj, obj_name); codegen_preload_free(obj, obj_name); + codegen_preload(obj, obj_name); } codegen("\ From patchwork Mon Mar 28 17:50:23 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Roberto Sassu X-Patchwork-Id: 12794058 Return-Path: 
From: Roberto Sassu
Subject: [PATCH 08/18] bpf-preload: Generate load_skel()
Date: Mon, 28 Mar 2022 19:50:23 +0200
Message-ID: <20220328175033.2437312-9-roberto.sassu@huawei.com>
List-ID: X-Mailing-List: linux-integrity@vger.kernel.org

Generate load_skel() to load and attach the eBPF program and to retrieve the objects to be
pinned. Signed-off-by: Roberto Sassu --- kernel/bpf/preload/bpf_preload_kern.c | 36 ----------- .../bpf/preload/iterators/iterators.lskel.h | 42 ++++++++++++ tools/bpf/bpftool/gen.c | 64 +++++++++++++++++++ 3 files changed, 106 insertions(+), 36 deletions(-) diff --git a/kernel/bpf/preload/bpf_preload_kern.c b/kernel/bpf/preload/bpf_preload_kern.c index 0869c889255c..35e9abd1a668 100644 --- a/kernel/bpf/preload/bpf_preload_kern.c +++ b/kernel/bpf/preload/bpf_preload_kern.c @@ -10,42 +10,6 @@ static struct bpf_preload_ops ops = { .owner = THIS_MODULE, }; -static int load_skel(void) -{ - int err; - - skel = iterators_bpf__open(); - if (!skel) - return -ENOMEM; - err = iterators_bpf__load(skel); - if (err) - goto out; - err = iterators_bpf__attach(skel); - if (err) - goto out; - dump_bpf_map_link = bpf_link_get_from_fd(skel->links.dump_bpf_map_fd); - if (IS_ERR(dump_bpf_map_link)) { - err = PTR_ERR(dump_bpf_map_link); - goto out; - } - dump_bpf_prog_link = bpf_link_get_from_fd(skel->links.dump_bpf_prog_fd); - if (IS_ERR(dump_bpf_prog_link)) { - err = PTR_ERR(dump_bpf_prog_link); - goto out; - } - /* Avoid taking over stdin/stdout/stderr of init process. Zeroing out - * makes skel_closenz() a no-op later in iterators_bpf__destroy(). 
- */ - close_fd(skel->links.dump_bpf_map_fd); - skel->links.dump_bpf_map_fd = 0; - close_fd(skel->links.dump_bpf_prog_fd); - skel->links.dump_bpf_prog_fd = 0; - return 0; -out: - free_objs_and_skel(); - return err; -} - static int __init load(void) { int err; diff --git a/kernel/bpf/preload/iterators/iterators.lskel.h b/kernel/bpf/preload/iterators/iterators.lskel.h index 75b2e94b7547..6faf3708be01 100644 --- a/kernel/bpf/preload/iterators/iterators.lskel.h +++ b/kernel/bpf/preload/iterators/iterators.lskel.h @@ -474,4 +474,46 @@ static int preload(struct dentry *parent) return err; } +static int load_skel(void) +{ + int err; + + skel = iterators_bpf__open(); + if (!skel) + return -ENOMEM; + + err = iterators_bpf__load(skel); + if (err) + goto out; + + err = iterators_bpf__attach(skel); + if (err) + goto out; + + dump_bpf_map_link = bpf_link_get_from_fd(skel->links.dump_bpf_map_fd); + if (IS_ERR(dump_bpf_map_link)) { + err = PTR_ERR(dump_bpf_map_link); + goto out; + } + + dump_bpf_prog_link = bpf_link_get_from_fd(skel->links.dump_bpf_prog_fd); + if (IS_ERR(dump_bpf_prog_link)) { + err = PTR_ERR(dump_bpf_prog_link); + goto out; + } + + /* Avoid taking over stdin/stdout/stderr of init process. Zeroing out + * makes skel_closenz() a no-op later in iterators_bpf__destroy(). 
+ */ + close_fd(skel->links.dump_bpf_map_fd); + skel->links.dump_bpf_map_fd = 0; + close_fd(skel->links.dump_bpf_prog_fd); + skel->links.dump_bpf_prog_fd = 0; + + return 0; +out: + free_objs_and_skel(); + return err; +} + #endif /* __ITERATORS_BPF_SKEL_H__ */ diff --git a/tools/bpf/bpftool/gen.c b/tools/bpf/bpftool/gen.c index fa2c6022b80d..ad948f1c90b5 100644 --- a/tools/bpf/bpftool/gen.c +++ b/tools/bpf/bpftool/gen.c @@ -764,6 +764,69 @@ static void codegen_preload(struct bpf_object *obj, const char *obj_name) "); } +static void codegen_preload_load(struct bpf_object *obj, const char *obj_name) +{ + struct bpf_program *prog; + + codegen("\ + \n\ + \n\ + static int load_skel(void) \n\ + { \n\ + int err; \n\ + \n\ + skel = %1$s__open(); \n\ + if (!skel) \n\ + return -ENOMEM; \n\ + \n\ + err = %1$s__load(skel); \n\ + if (err) \n\ + goto out; \n\ + \n\ + err = %1$s__attach(skel); \n\ + if (err) \n\ + goto out; \n\ + ", obj_name); + + bpf_object__for_each_program(prog, obj) { + codegen("\ + \n\ + \n\ + %1$s_link = bpf_link_get_from_fd(skel->links.%1$s_fd); \n\ + if (IS_ERR(%1$s_link)) { \n\ + err = PTR_ERR(%1$s_link); \n\ + goto out; \n\ + } \n\ + ", bpf_program__name(prog)); + } + + codegen("\ + \n\ + \n\ + /* Avoid taking over stdin/stdout/stderr of init process. Zeroing out \n\ + * makes skel_closenz() a no-op later in iterators_bpf__destroy(). 
\n\ + */ \n\ + "); + + bpf_object__for_each_program(prog, obj) { + codegen("\ + \n\ + close_fd(skel->links.%1$s_fd); \n\ + skel->links.%1$s_fd = 0; \n\ + ", bpf_program__name(prog)); + } + + codegen("\ + \n\ + \n\ + return 0; \n\ + out: \n\ + free_objs_and_skel(); \n\ + return err; \n\ + } \n\ + "); +} + static int gen_trace(struct bpf_object *obj, const char *obj_name, const char *header_guard) { DECLARE_LIBBPF_OPTS(gen_loader_opts, opts); @@ -916,6 +979,7 @@ static int gen_trace(struct bpf_object *obj, const char *obj_name, const char *h codegen_preload_vars(obj, obj_name); codegen_preload_free(obj, obj_name); codegen_preload(obj, obj_name); + codegen_preload_load(obj, obj_name); } codegen("\ From patchwork Mon Mar 28 17:50:24 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Roberto Sassu X-Patchwork-Id: 12794059 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id CB72AC4167E for ; Mon, 28 Mar 2022 17:53:21 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S244837AbiC1Ry7 (ORCPT ); Mon, 28 Mar 2022 13:54:59 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:37068 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S244855AbiC1Ryj (ORCPT ); Mon, 28 Mar 2022 13:54:39 -0400 Received: from frasgout.his.huawei.com (frasgout.his.huawei.com [185.176.79.56]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id E37A5657B9; Mon, 28 Mar 2022 10:52:32 -0700 (PDT) Received: from fraeml714-chm.china.huawei.com (unknown [172.18.147.206]) by frasgout.his.huawei.com (SkyGuard) with ESMTP id 4KS0Zw6Tljz67mcQ; Tue, 29 Mar 2022 01:50:00 +0800 (CST) Received: from roberto-ThinkStation-P620.huawei.com (10.204.63.22) by fraeml714-chm.china.huawei.com 
From: Roberto Sassu
Subject: [PATCH 09/18] bpf-preload: Generate code to pin non-internal maps
Date: Mon, 28 Mar 2022 19:50:24 +0200
Message-ID: <20220328175033.2437312-10-roberto.sassu@huawei.com>
List-ID: X-Mailing-List: linux-integrity@vger.kernel.org

Take the non-internal maps from the skeleton and generate the code for
each of them (static variable declaration, additional code in
free_objs_and_skel(), preload() and load_skel()).
Signed-off-by: Roberto Sassu --- tools/bpf/bpftool/gen.c | 97 +++++++++++++++++++++++++++++++++++++++++ 1 file changed, 97 insertions(+) diff --git a/tools/bpf/bpftool/gen.c b/tools/bpf/bpftool/gen.c index ad948f1c90b5..28b1fe718248 100644 --- a/tools/bpf/bpftool/gen.c +++ b/tools/bpf/bpftool/gen.c @@ -655,6 +655,8 @@ static void codegen_destroy(struct bpf_object *obj, const char *obj_name) static void codegen_preload_vars(struct bpf_object *obj, const char *obj_name) { struct bpf_program *prog; + struct bpf_map *map; + char ident[256]; codegen("\ \n\ @@ -668,6 +670,19 @@ static void codegen_preload_vars(struct bpf_object *obj, const char *obj_name) ", bpf_program__name(prog)); } + bpf_object__for_each_map(map, obj) { + if (!get_map_ident(map, ident, sizeof(ident))) + continue; + + if (bpf_map__is_internal(map)) + continue; + + codegen("\ + \n\ + static struct bpf_map *%s_map; \n\ + ", ident); + } + codegen("\ \n\ static struct %s *skel; \n\ @@ -677,6 +692,8 @@ static void codegen_preload_vars(struct bpf_object *obj, const char *obj_name) static void codegen_preload_free(struct bpf_object *obj, const char *obj_name) { struct bpf_program *prog; + struct bpf_map *map; + char ident[256]; codegen("\ \n\ @@ -693,6 +710,20 @@ static void codegen_preload_free(struct bpf_object *obj, const char *obj_name) ", bpf_program__name(prog)); } + bpf_object__for_each_map(map, obj) { + if (!get_map_ident(map, ident, sizeof(ident))) + continue; + + if (bpf_map__is_internal(map)) + continue; + + codegen("\ + \n\ + if (!IS_ERR_OR_NULL(%1$s_map)) \n\ + bpf_map_put(%1$s_map); \n\ + ", ident); + } + codegen("\ \n\ \n\ @@ -705,6 +736,8 @@ static void codegen_preload(struct bpf_object *obj, const char *obj_name) { struct bpf_program *prog; const char *link_name; + struct bpf_map *map; + char ident[256]; codegen("\ \n\ @@ -722,6 +755,19 @@ static void codegen_preload(struct bpf_object *obj, const char *obj_name) ", bpf_program__name(prog)); } + bpf_object__for_each_map(map, obj) { + if 
(!get_map_ident(map, ident, sizeof(ident))) + continue; + + if (bpf_map__is_internal(map)) + continue; + + codegen("\ + \n\ + bpf_map_inc(%s_map); \n\ + ", ident); + } + bpf_object__for_each_program(prog, obj) { link_name = bpf_program__name(prog); /* These need to be hardcoded for compatibility reasons. */ @@ -743,6 +789,24 @@ static void codegen_preload(struct bpf_object *obj, const char *obj_name) ", link_name, bpf_program__name(prog)); } + bpf_object__for_each_map(map, obj) { + if (!get_map_ident(map, ident, sizeof(ident))) + continue; + + if (bpf_map__is_internal(map)) + continue; + + codegen("\ + \n\ + \n\ + err = bpf_obj_do_pin_kernel(parent, \"%1$s\", \n\ + %1$s_map, \n\ + BPF_TYPE_MAP); \n\ + if (err) \n\ + goto undo; \n\ + ", ident); + } + codegen("\ \n\ \n\ @@ -757,6 +821,19 @@ static void codegen_preload(struct bpf_object *obj, const char *obj_name) ", bpf_program__name(prog)); } + bpf_object__for_each_map(map, obj) { + if (!get_map_ident(map, ident, sizeof(ident))) + continue; + + if (bpf_map__is_internal(map)) + continue; + + codegen("\ + \n\ + bpf_map_put(%s_map); \n\ + ", ident); + } + codegen("\ \n\ return err; \n\ @@ -767,6 +844,8 @@ static void codegen_preload(struct bpf_object *obj, const char *obj_name) static void codegen_preload_load(struct bpf_object *obj, const char *obj_name) { struct bpf_program *prog; + struct bpf_map *map; + char ident[256]; codegen("\ \n\ @@ -800,6 +879,24 @@ static void codegen_preload_load(struct bpf_object *obj, const char *obj_name) ", bpf_program__name(prog)); } + bpf_object__for_each_map(map, obj) { + if (!get_map_ident(map, ident, sizeof(ident))) + continue; + + if (bpf_map__is_internal(map)) + continue; + + codegen("\ + \n\ + \n\ + %1$s_map = bpf_map_get(skel->maps.%1$s.map_fd); \n\ + if (IS_ERR(%1$s_map)) { \n\ + err = PTR_ERR(%1$s_map); \n\ + goto out; \n\ + } \n\ + ", ident); + } + codegen("\ \n\ \n\ From patchwork Mon Mar 28 17:50:25 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 
X-Patchwork-Submitter: Roberto Sassu
X-Patchwork-Id: 12794063
From: Roberto Sassu
Subject: [PATCH 10/18] bpf-preload: Generate bpf_preload_ops
Date: Mon, 28 Mar 2022 19:50:25 +0200
Message-ID: <20220328175033.2437312-11-roberto.sassu@huawei.com>
List-ID: X-Mailing-List:
linux-integrity@vger.kernel.org Generate a bpf_preload_ops structure, to specify the kernel module and the preload method. Signed-off-by: Roberto Sassu --- kernel/bpf/preload/bpf_preload_kern.c | 5 ----- kernel/bpf/preload/iterators/iterators.lskel.h | 5 +++++ tools/bpf/bpftool/gen.c | 13 +++++++++++++ 3 files changed, 18 insertions(+), 5 deletions(-) diff --git a/kernel/bpf/preload/bpf_preload_kern.c b/kernel/bpf/preload/bpf_preload_kern.c index 35e9abd1a668..3839af367200 100644 --- a/kernel/bpf/preload/bpf_preload_kern.c +++ b/kernel/bpf/preload/bpf_preload_kern.c @@ -5,11 +5,6 @@ #include #include "iterators/iterators.lskel.h" -static struct bpf_preload_ops ops = { - .preload = preload, - .owner = THIS_MODULE, -}; - static int __init load(void) { int err; diff --git a/kernel/bpf/preload/iterators/iterators.lskel.h b/kernel/bpf/preload/iterators/iterators.lskel.h index 6faf3708be01..7595fc283a65 100644 --- a/kernel/bpf/preload/iterators/iterators.lskel.h +++ b/kernel/bpf/preload/iterators/iterators.lskel.h @@ -474,6 +474,11 @@ static int preload(struct dentry *parent) return err; } +static struct bpf_preload_ops ops = { + .preload = preload, + .owner = THIS_MODULE, +}; + static int load_skel(void) { int err; diff --git a/tools/bpf/bpftool/gen.c b/tools/bpf/bpftool/gen.c index 28b1fe718248..5593cbee1846 100644 --- a/tools/bpf/bpftool/gen.c +++ b/tools/bpf/bpftool/gen.c @@ -841,6 +841,18 @@ static void codegen_preload(struct bpf_object *obj, const char *obj_name) "); } +static void codegen_preload_ops(void) +{ + codegen("\ + \n\ + \n\ + static struct bpf_preload_ops ops = { \n\ + .preload = preload, \n\ + .owner = THIS_MODULE, \n\ + }; \n\ + "); +} + static void codegen_preload_load(struct bpf_object *obj, const char *obj_name) { struct bpf_program *prog; @@ -1076,6 +1088,7 @@ static int gen_trace(struct bpf_object *obj, const char *obj_name, const char *h codegen_preload_vars(obj, obj_name); codegen_preload_free(obj, obj_name); codegen_preload(obj, obj_name); + 
codegen_preload_ops(); codegen_preload_load(obj, obj_name); } codegen("\ From patchwork Mon Mar 28 17:50:26 2022
X-Patchwork-Submitter: Roberto Sassu
X-Patchwork-Id: 12794061
From: Roberto Sassu
Subject: [PATCH 11/18] bpf-preload: Store multiple bpf_preload_ops structures in a linked list
Date: Mon, 28 Mar 2022 19:50:26 +0200
Message-ID: <20220328175033.2437312-12-roberto.sassu@huawei.com>
X-ClientProxiedBy: lhreml754-chm.china.huawei.com (10.201.108.204) To fraeml714-chm.china.huawei.com (10.206.15.33) X-CFilter-Loop: Reflected Precedence: bulk List-ID: X-Mailing-List: linux-integrity@vger.kernel.org In preparation to support preloading multiple eBPF programs, define a linked list of bpf_preload_ops_item structures. The new structure contains the object name from the eBPF program to preload (except for iterators_bpf whose kernel module name is bpf_preload, the object name and the kernel module name should match). The new structure also contains a bpf_preload_ops structure declared in the light skeleton, with the preload method of the eBPF program. The list of eBPF programs that can be preloaded can be specified in a subsequent patch from the kernel configuration or with the new option bpf_preload_list= in the kernel command line. For now, bpf_preload is always preloaded, as it still relies on the old registration method consisting in setting the bpf_preload_ops global variable. That will change when bpf_preload will switch to the new registration method based on the linked list. Signed-off-by: Roberto Sassu --- kernel/bpf/inode.c | 89 +++++++++++++++++++++++++++++++++++++--------- 1 file changed, 73 insertions(+), 16 deletions(-) diff --git a/kernel/bpf/inode.c b/kernel/bpf/inode.c index bb8762abbf3d..0a6e83d32360 100644 --- a/kernel/bpf/inode.c +++ b/kernel/bpf/inode.c @@ -685,35 +685,91 @@ static int bpf_parse_param(struct fs_context *fc, struct fs_parameter *param) struct bpf_preload_ops *bpf_preload_ops; EXPORT_SYMBOL_GPL(bpf_preload_ops); -static bool bpf_preload_mod_get(void) +struct bpf_preload_ops_item { + struct list_head list; + struct bpf_preload_ops *ops; + char *obj_name; +}; + +static LIST_HEAD(preload_list); +static DEFINE_MUTEX(bpf_preload_lock); + +static bool bpf_preload_mod_get(const char *obj_name, + struct bpf_preload_ops **ops) { - /* If bpf_preload.ko wasn't loaded earlier then load it now. 
- * When bpf_preload is built into vmlinux the module's __init + /* If the kernel preload module wasn't loaded earlier then load it now. + * When the preload code is built into vmlinux the module's __init * function will populate it. */ - if (!bpf_preload_ops) { - request_module("bpf_preload"); - if (!bpf_preload_ops) + if (!*ops) { + mutex_unlock(&bpf_preload_lock); + request_module(obj_name); + mutex_lock(&bpf_preload_lock); + if (!*ops) return false; } /* And grab the reference, so the module doesn't disappear while the * kernel is interacting with the kernel module and its UMD. */ - if (!try_module_get(bpf_preload_ops->owner)) { + if (!try_module_get((*ops)->owner)) { pr_err("bpf_preload module get failed.\n"); return false; } return true; } -static void bpf_preload_mod_put(void) +static void bpf_preload_mod_put(struct bpf_preload_ops *ops) { - if (bpf_preload_ops) - /* now user can "rmmod bpf_preload" if necessary */ - module_put(bpf_preload_ops->owner); + if (ops) + /* now user can "rmmod " if necessary */ + module_put(ops->owner); } -static DEFINE_MUTEX(bpf_preload_lock); +static bool bpf_preload_list_mod_get(void) +{ + struct bpf_preload_ops_item *cur; + bool ret = false; + + ret |= bpf_preload_mod_get("bpf_preload", &bpf_preload_ops); + + list_for_each_entry(cur, &preload_list, list) + ret |= bpf_preload_mod_get(cur->obj_name, &cur->ops); + + return ret; +} + +static int bpf_preload_list(struct dentry *parent) +{ + struct bpf_preload_ops_item *cur; + int err; + + if (bpf_preload_ops) { + err = bpf_preload_ops->preload(parent); + if (err) + return err; + } + + list_for_each_entry(cur, &preload_list, list) { + if (!cur->ops) + continue; + + err = cur->ops->preload(parent); + if (err) + return err; + } + + return 0; +} + +static void bpf_preload_list_mod_put(void) +{ + struct bpf_preload_ops_item *cur; + + list_for_each_entry(cur, &preload_list, list) + bpf_preload_mod_put(cur->ops); + + bpf_preload_mod_put(bpf_preload_ops); +} static int 
populate_bpffs(struct dentry *parent) { @@ -724,12 +780,13 @@ static int populate_bpffs(struct dentry *parent) */ mutex_lock(&bpf_preload_lock); - /* if bpf_preload.ko wasn't built into vmlinux then load it */ - if (!bpf_preload_mod_get()) + /* if kernel preload mods weren't built into vmlinux then load them */ + if (!bpf_preload_list_mod_get()) goto out; - err = bpf_preload_ops->preload(parent); - bpf_preload_mod_put(); + err = bpf_preload_list(parent); + bpf_preload_list_mod_put(); + out: mutex_unlock(&bpf_preload_lock); return err; From patchwork Mon Mar 28 17:50:27 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Roberto Sassu X-Patchwork-Id: 12794062 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 3FD89C4167D for ; Mon, 28 Mar 2022 17:54:49 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S245029AbiC1R42 (ORCPT ); Mon, 28 Mar 2022 13:56:28 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:38268 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S244928AbiC1R4U (ORCPT ); Mon, 28 Mar 2022 13:56:20 -0400 Received: from frasgout.his.huawei.com (frasgout.his.huawei.com [185.176.79.56]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 5316B237E0; Mon, 28 Mar 2022 10:53:48 -0700 (PDT) Received: from fraeml714-chm.china.huawei.com (unknown [172.18.147.206]) by frasgout.his.huawei.com (SkyGuard) with ESMTP id 4KS0d716hCz67bVy; Tue, 29 Mar 2022 01:51:55 +0800 (CST) Received: from roberto-ThinkStation-P620.huawei.com (10.204.63.22) by fraeml714-chm.china.huawei.com (10.206.15.33) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.24; Mon, 28 Mar 2022 19:53:44 +0200 From: Roberto 
Sassu
Subject: [PATCH 12/18] bpf-preload: Implement new registration method for preloading eBPF programs
Date: Mon, 28 Mar 2022 19:50:27 +0200
Message-ID: <20220328175033.2437312-13-roberto.sassu@huawei.com>
List-ID: X-Mailing-List: linux-integrity@vger.kernel.org

The current registration method, setting the bpf_preload_ops global
variable, is not suitable for preloading multiple eBPF programs, as each
eBPF program would overwrite the global variable with its own method.

Implement a new registration method in two steps. First, introduce
bpf_init_preload_list() to populate, at kernel initialization time, the
new linked list with an element for each of the eBPF programs to
preload. Second, introduce bpf_preload_set_ops() to let an eBPF program
set its preload method in the corresponding item of the linked list.
Registration succeeds only if an item for the object name already exists
in the linked list; return a boolean value reporting whether it did.
Signed-off-by: Roberto Sassu --- include/linux/bpf_preload.h | 7 +++ kernel/bpf/inode.c | 107 +++++++++++++++++++++++++++++++++++- 2 files changed, 113 insertions(+), 1 deletion(-) diff --git a/include/linux/bpf_preload.h b/include/linux/bpf_preload.h index e604933b3daa..bdbe75c22fcb 100644 --- a/include/linux/bpf_preload.h +++ b/include/linux/bpf_preload.h @@ -19,12 +19,19 @@ extern struct bpf_preload_ops *bpf_preload_ops; int bpf_obj_do_pin_kernel(struct dentry *parent, const char *name, void *raw, enum bpf_type type); +bool bpf_preload_set_ops(const char *name, struct module *owner, + struct bpf_preload_ops *ops); #else static inline int bpf_obj_do_pin_kernel(struct dentry *parent, const char *name, void *raw, enum bpf_type type) { return -EOPNOTSUPP; } + +static inline bool bpf_preload_set_ops(const char *name, struct module *owner, + struct bpf_preload_ops *ops) +{ +} #endif /*CONFIG_BPF_SYSCALL*/ #endif diff --git a/kernel/bpf/inode.c b/kernel/bpf/inode.c index 0a6e83d32360..440ea517cc29 100644 --- a/kernel/bpf/inode.c +++ b/kernel/bpf/inode.c @@ -22,6 +22,8 @@ #include #include +static char *bpf_preload_list_str; + static void *bpf_any_get(void *raw, enum bpf_type type) { switch (type) { @@ -855,6 +857,100 @@ static struct file_system_type bpf_fs_type = { .kill_sb = kill_litter_super, }; +static struct bpf_preload_ops_item * +bpf_preload_list_lookup_entry(const char *obj_name) +{ + struct bpf_preload_ops_item *cur; + + list_for_each_entry(cur, &preload_list, list) + if (!strcmp(obj_name, cur->obj_name)) + return cur; + + return NULL; +} + +static int bpf_preload_list_add_entry(const char *obj_name, + struct bpf_preload_ops *ops) +{ + struct bpf_preload_ops_item *new; + + if (!*obj_name) + return 0; + + new = kzalloc(sizeof(*new), GFP_NOFS); + if (!new) + return -ENOMEM; + + new->obj_name = kstrdup(obj_name, GFP_NOFS); + if (!new->obj_name) { + kfree(new); + return -ENOMEM; + } + + new->ops = ops; + + list_add(&new->list, &preload_list); + return 0; +} + 
+bool bpf_preload_set_ops(const char *obj_name, struct module *owner, + struct bpf_preload_ops *ops) +{ + struct bpf_preload_ops_item *found_item; + bool set = false; + + mutex_lock(&bpf_preload_lock); + + found_item = bpf_preload_list_lookup_entry(obj_name); + if (found_item) { + if (!found_item->ops || + (found_item->ops && found_item->ops->owner == owner)) { + found_item->ops = ops; + set = true; + } + } + + mutex_unlock(&bpf_preload_lock); + return set; +} +EXPORT_SYMBOL_GPL(bpf_preload_set_ops); + +static int __init bpf_init_preload_list(void) +{ + char *str_ptr = bpf_preload_list_str, *str_end; + struct bpf_preload_ops_item *cur, *tmp; + char obj_name[NAME_MAX + 1]; + int ret; + + while (str_ptr && *str_ptr) { + str_end = strchrnul(str_ptr, ','); + + snprintf(obj_name, sizeof(obj_name), "%.*s", + (int)(str_end - str_ptr), str_ptr); + + if (!bpf_preload_list_lookup_entry(obj_name)) { + ret = bpf_preload_list_add_entry(obj_name, NULL); + if (ret) + goto out; + } + + if (!*str_end) + break; + + str_ptr = str_end + 1; + } + + return 0; +out: + list_for_each_entry_safe(cur, tmp, &preload_list, list) { + list_del(&cur->list); + kfree(cur->obj_name); + kfree(cur); + } + + return ret; +} + static int __init bpf_init(void) { int ret; @@ -864,8 +960,17 @@ static int __init bpf_init(void) return ret; ret = register_filesystem(&bpf_fs_type); - if (ret) + if (ret) { sysfs_remove_mount_point(fs_kobj, "bpf"); + return ret; + } + + ret = bpf_init_preload_list(); + if (ret) { + unregister_filesystem(&bpf_fs_type); + sysfs_remove_mount_point(fs_kobj, "bpf"); + return ret; + } return ret; } From patchwork Mon Mar 28 17:50:28 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Roberto Sassu X-Patchwork-Id: 12794064 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by 
smtp.lore.kernel.org (Postfix) with ESMTP id 628C9C4707F for ; Mon, 28 Mar 2022 17:54:56 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S245038AbiC1R43 (ORCPT ); Mon, 28 Mar 2022 13:56:29 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:42336 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S244929AbiC1R4U (ORCPT ); Mon, 28 Mar 2022 13:56:20 -0400 Received: from frasgout.his.huawei.com (frasgout.his.huawei.com [185.176.79.56]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 79B7823BCC; Mon, 28 Mar 2022 10:53:48 -0700 (PDT) Received: from fraeml714-chm.china.huawei.com (unknown [172.18.147.200]) by frasgout.his.huawei.com (SkyGuard) with ESMTP id 4KS0cN3jGnz67w73; Tue, 29 Mar 2022 01:51:16 +0800 (CST) Received: from roberto-ThinkStation-P620.huawei.com (10.204.63.22) by fraeml714-chm.china.huawei.com (10.206.15.33) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.24; Mon, 28 Mar 2022 19:53:45 +0200 From: Roberto Sassu To: , , , , , , , , , CC: , , , , , , , , , , Roberto Sassu Subject: [PATCH 13/18] bpf-preload: Move pinned links and maps to a dedicated directory in bpffs Date: Mon, 28 Mar 2022 19:50:28 +0200 Message-ID: <20220328175033.2437312-14-roberto.sassu@huawei.com> X-Mailer: git-send-email 2.32.0 In-Reply-To: <20220328175033.2437312-1-roberto.sassu@huawei.com> References: <20220328175033.2437312-1-roberto.sassu@huawei.com> MIME-Version: 1.0 X-Originating-IP: [10.204.63.22] X-ClientProxiedBy: lhreml754-chm.china.huawei.com (10.201.108.204) To fraeml714-chm.china.huawei.com (10.206.15.33) X-CFilter-Loop: Reflected Precedence: bulk List-ID: X-Mailing-List: linux-integrity@vger.kernel.org With support for preloading multiple eBPF programs, any map, link or prog will appear in the bpf filesystem. 
To identify which eBPF program a pinned object belongs to, create a subdir for each eBPF program preloaded and place the pinned object in the new subdir. Keep the pinned objects of iterators_bpf in the root directory of bpffs, for compatibility reasons. Signed-off-by: Roberto Sassu --- kernel/bpf/inode.c | 35 ++++++++++++++++++++++++++++++++++- 1 file changed, 34 insertions(+), 1 deletion(-) diff --git a/kernel/bpf/inode.c b/kernel/bpf/inode.c index 440ea517cc29..619cdef0ba54 100644 --- a/kernel/bpf/inode.c +++ b/kernel/bpf/inode.c @@ -740,9 +740,30 @@ static bool bpf_preload_list_mod_get(void) return ret; } +static struct dentry *create_subdir(struct dentry *parent, const char *name) +{ + struct dentry *dentry; + int err; + + inode_lock(parent->d_inode); + dentry = lookup_one_len(name, parent, strlen(name)); + if (IS_ERR(dentry)) + goto out; + + err = vfs_mkdir(&init_user_ns, parent->d_inode, dentry, 0755); + if (err) { + dput(dentry); + dentry = ERR_PTR(err); + } +out: + inode_unlock(parent->d_inode); + return dentry; +} + static int bpf_preload_list(struct dentry *parent) { struct bpf_preload_ops_item *cur; + struct dentry *cur_parent; int err; if (bpf_preload_ops) { @@ -755,7 +776,19 @@ static int bpf_preload_list(struct dentry *parent) if (!cur->ops) continue; - err = cur->ops->preload(parent); + cur_parent = parent; + + if (strcmp(cur->obj_name, "bpf_preload")) { + cur_parent = create_subdir(parent, cur->obj_name); + if (IS_ERR(cur_parent)) + cur_parent = parent; + } + + err = cur->ops->preload(cur_parent); + + if (cur_parent != parent) + dput(cur_parent); + if (err) return err; } From patchwork Mon Mar 28 17:50:29 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Roberto Sassu X-Patchwork-Id: 12794060 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by 
smtp.lore.kernel.org (Postfix) with ESMTP id 2A3C7C433FE for ; Mon, 28 Mar 2022 17:54:48 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S245001AbiC1R40 (ORCPT ); Mon, 28 Mar 2022 13:56:26 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:39058 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S244923AbiC1R4U (ORCPT ); Mon, 28 Mar 2022 13:56:20 -0400 Received: from frasgout.his.huawei.com (frasgout.his.huawei.com [185.176.79.56]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 8B44F24582; Mon, 28 Mar 2022 10:53:49 -0700 (PDT) Received: from fraeml714-chm.china.huawei.com (unknown [172.18.147.226]) by frasgout.his.huawei.com (SkyGuard) with ESMTP id 4KS0cP5LvYz67Q7R; Tue, 29 Mar 2022 01:51:17 +0800 (CST) Received: from roberto-ThinkStation-P620.huawei.com (10.204.63.22) by fraeml714-chm.china.huawei.com (10.206.15.33) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.24; Mon, 28 Mar 2022 19:53:46 +0200 From: Roberto Sassu To: , , , , , , , , , CC: , , , , , , , , , , Roberto Sassu Subject: [PATCH 14/18] bpf-preload: Switch to new preload registration method Date: Mon, 28 Mar 2022 19:50:29 +0200 Message-ID: <20220328175033.2437312-15-roberto.sassu@huawei.com> X-Mailer: git-send-email 2.32.0 In-Reply-To: <20220328175033.2437312-1-roberto.sassu@huawei.com> References: <20220328175033.2437312-1-roberto.sassu@huawei.com> MIME-Version: 1.0 X-Originating-IP: [10.204.63.22] X-ClientProxiedBy: lhreml754-chm.china.huawei.com (10.201.108.204) To fraeml714-chm.china.huawei.com (10.206.15.33) X-CFilter-Loop: Reflected Precedence: bulk List-ID: X-Mailing-List: linux-integrity@vger.kernel.org Modify the automatic generator of the light skeleton by adding three calls to bpf_preload_set_ops() for registering and unregistering a preload method, two in load_skel() (set and unset if there is an error) and one in free_objs_and_skel(). 
Regenerate the light skeleton of the already preloaded eBPF program iterators_bpf, which will now use the new registration method, and directly call load_skel() and free_objs_and_skel() in the init and fini module entrypoints. Finally, allow users to specify a customized list of eBPF programs to preload with the CONFIG_BPF_PRELOAD_LIST option in the kernel configuration, at build time, or with new kernel option bpf_preload_list=, at run-time. By default, set CONFIG_BPF_PRELOAD_LIST to 'bpf_preload', so that the current preloading behavior is kept unchanged. Signed-off-by: Roberto Sassu Reported-by: kernel test robot Reported-by: kernel test robot --- .../admin-guide/kernel-parameters.txt | 8 ++++++ kernel/bpf/inode.c | 16 ++++++++++-- kernel/bpf/preload/Kconfig | 25 +++++++++++++------ kernel/bpf/preload/bpf_preload_kern.c | 20 ++------------- .../bpf/preload/iterators/iterators.lskel.h | 9 +++++-- tools/bpf/bpftool/gen.c | 15 ++++++++--- 6 files changed, 60 insertions(+), 33 deletions(-) diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt index 9927564db88e..732d83764e6e 100644 --- a/Documentation/admin-guide/kernel-parameters.txt +++ b/Documentation/admin-guide/kernel-parameters.txt @@ -482,6 +482,14 @@ bgrt_disable [ACPI][X86] Disable BGRT to avoid flickering OEM logo. + bpf_preload_list= [BPF] + Specify a list of eBPF programs to preload. + Format: obj_name1,obj_name2,... + Default: bpf_preload + + Specify the list of eBPF programs to preload when the + bpf filesystem is mounted. + bttv.card= [HW,V4L] bttv (bt848 + bt878 based grabber cards) bttv.radio= Most important insmod options are available as kernel args too. 
diff --git a/kernel/bpf/inode.c b/kernel/bpf/inode.c index 619cdef0ba54..c1941c65ce95 100644 --- a/kernel/bpf/inode.c +++ b/kernel/bpf/inode.c @@ -22,7 +22,14 @@ #include #include -static char *bpf_preload_list_str; +static char *bpf_preload_list_str = CONFIG_BPF_PRELOAD_LIST; + +static int __init bpf_preload_list_setup(char *str) +{ + bpf_preload_list_str = str; + return 1; +} +__setup("bpf_preload_list=", bpf_preload_list_setup); static void *bpf_any_get(void *raw, enum bpf_type type) { @@ -732,7 +739,12 @@ static bool bpf_preload_list_mod_get(void) struct bpf_preload_ops_item *cur; bool ret = false; - ret |= bpf_preload_mod_get("bpf_preload", &bpf_preload_ops); + /* + * Keep the legacy registration method, but do not attempt to load + * bpf_preload.ko, as it switched to the new registration method. + */ + if (bpf_preload_ops) + ret |= bpf_preload_mod_get("bpf_preload", &bpf_preload_ops); list_for_each_entry(cur, &preload_list, list) ret |= bpf_preload_mod_get(cur->obj_name, &cur->ops); diff --git a/kernel/bpf/preload/Kconfig b/kernel/bpf/preload/Kconfig index c9d45c9d6918..f878e537b0ff 100644 --- a/kernel/bpf/preload/Kconfig +++ b/kernel/bpf/preload/Kconfig @@ -4,7 +4,7 @@ config USERMODE_DRIVER default n menuconfig BPF_PRELOAD - bool "Preload BPF file system with kernel specific program and map iterators" + bool "Preload eBPF programs" depends on BPF depends on BPF_SYSCALL # The dependency on !COMPILE_TEST prevents it from being enabled @@ -12,15 +12,26 @@ menuconfig BPF_PRELOAD depends on !COMPILE_TEST select USERMODE_DRIVER help - This builds kernel module with several embedded BPF programs that are - pinned into BPF FS mount point as human readable files that are - useful in debugging and introspection of BPF programs and maps. + This enables preloading eBPF programs chosen from the kernel + configuration or from the kernel option bpf_preload_list=. 
if BPF_PRELOAD config BPF_PRELOAD_UMD - tristate "bpf_preload kernel module" + tristate "Preload BPF file system with kernel specific program and map iterators" default m help - This builds bpf_preload kernel module with embedded BPF programs for - introspection in bpffs. + This builds bpf_preload kernel module with several embedded BPF + programs that are pinned into BPF FS mount point as human readable + files that are useful in debugging and introspection of BPF programs + and maps. + +config BPF_PRELOAD_LIST + string "Ordered list of eBPF programs to preload" + default "bpf_preload" + help + A comma-separated list of eBPF programs to preload. Any eBPF program + left off this list will be ignored. This can be controlled at boot + with the "bpf_preload_list=" parameter. + + If unsure, leave this as the default. endif diff --git a/kernel/bpf/preload/bpf_preload_kern.c b/kernel/bpf/preload/bpf_preload_kern.c index 3839af367200..c6d97872225b 100644 --- a/kernel/bpf/preload/bpf_preload_kern.c +++ b/kernel/bpf/preload/bpf_preload_kern.c @@ -5,22 +5,6 @@ #include #include "iterators/iterators.lskel.h" -static int __init load(void) -{ - int err; - - err = load_skel(); - if (err) - return err; - bpf_preload_ops = &ops; - return err; -} - -static void __exit fini(void) -{ - bpf_preload_ops = NULL; - free_objs_and_skel(); -} -late_initcall(load); -module_exit(fini); +late_initcall(load_skel); +module_exit(free_objs_and_skel); MODULE_LICENSE("GPL"); diff --git a/kernel/bpf/preload/iterators/iterators.lskel.h b/kernel/bpf/preload/iterators/iterators.lskel.h index 7595fc283a65..5e999564cc7a 100644 --- a/kernel/bpf/preload/iterators/iterators.lskel.h +++ b/kernel/bpf/preload/iterators/iterators.lskel.h @@ -440,6 +440,8 @@ static struct iterators_bpf *skel; static void free_objs_and_skel(void) { + bpf_preload_set_ops("bpf_preload", THIS_MODULE, NULL); + if (!IS_ERR_OR_NULL(dump_bpf_map_link)) bpf_link_put(dump_bpf_map_link); if (!IS_ERR_OR_NULL(dump_bpf_prog_link)) @@ -481,11 
+483,14 @@ static struct bpf_preload_ops ops = { static int load_skel(void) { - int err; + int err = -ENOMEM; + + if (!bpf_preload_set_ops("bpf_preload", THIS_MODULE, &ops)) + return 0; skel = iterators_bpf__open(); if (!skel) - return -ENOMEM; + goto out; err = iterators_bpf__load(skel); if (err) diff --git a/tools/bpf/bpftool/gen.c b/tools/bpf/bpftool/gen.c index 5593cbee1846..af939183f57a 100644 --- a/tools/bpf/bpftool/gen.c +++ b/tools/bpf/bpftool/gen.c @@ -700,7 +700,10 @@ static void codegen_preload_free(struct bpf_object *obj, const char *obj_name) \n\ static void free_objs_and_skel(void) \n\ { \n\ - "); + bpf_preload_set_ops(\"%s\", THIS_MODULE, NULL); \n\ + \n\ + ", !strcmp(obj_name, "iterators_bpf") ? + "bpf_preload" : obj_name); bpf_object__for_each_program(prog, obj) { codegen("\ @@ -864,11 +867,14 @@ static void codegen_preload_load(struct bpf_object *obj, const char *obj_name) \n\ static int load_skel(void) \n\ { \n\ - int err; \n\ + int err = -ENOMEM; \n\ + \n\ + if (!bpf_preload_set_ops(\"%2$s\", THIS_MODULE, &ops)) \n\ + return 0; \n\ \n\ skel = %1$s__open(); \n\ if (!skel) \n\ - return -ENOMEM; \n\ + goto out; \n\ \n\ err = %1$s__load(skel); \n\ if (err) \n\ @@ -877,7 +883,8 @@ static void codegen_preload_load(struct bpf_object *obj, const char *obj_name) err = %1$s__attach(skel); \n\ if (err) \n\ goto out; \n\ - ", obj_name); + ", obj_name, !strcmp(obj_name, "iterators_bpf") ? 
+ "bpf_preload" : obj_name); bpf_object__for_each_program(prog, obj) { codegen("\ From patchwork Mon Mar 28 17:50:30 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Roberto Sassu X-Patchwork-Id: 12794065 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 556D0C433FE for ; Mon, 28 Mar 2022 17:55:58 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S244920AbiC1R5g (ORCPT ); Mon, 28 Mar 2022 13:57:36 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:44910 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S245130AbiC1R4u (ORCPT ); Mon, 28 Mar 2022 13:56:50 -0400 Received: from frasgout.his.huawei.com (frasgout.his.huawei.com [185.176.79.56]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id C2E271F619; Mon, 28 Mar 2022 10:55:03 -0700 (PDT) Received: from fraeml714-chm.china.huawei.com (unknown [172.18.147.206]) by frasgout.his.huawei.com (SkyGuard) with ESMTP id 4KS0fb2wNQz67y8J; Tue, 29 Mar 2022 01:53:11 +0800 (CST) Received: from roberto-ThinkStation-P620.huawei.com (10.204.63.22) by fraeml714-chm.china.huawei.com (10.206.15.33) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.24; Mon, 28 Mar 2022 19:55:00 +0200 From: Roberto Sassu To: , , , , , , , , , CC: , , , , , , , , , , Roberto Sassu Subject: [PATCH 15/18] bpf-preload: Generate code of kernel module to preload Date: Mon, 28 Mar 2022 19:50:30 +0200 Message-ID: <20220328175033.2437312-16-roberto.sassu@huawei.com> X-Mailer: git-send-email 2.32.0 In-Reply-To: <20220328175033.2437312-1-roberto.sassu@huawei.com> References: <20220328175033.2437312-1-roberto.sassu@huawei.com> MIME-Version: 1.0 X-Originating-IP: [10.204.63.22] 
X-ClientProxiedBy: lhreml754-chm.china.huawei.com (10.201.108.204) To fraeml714-chm.china.huawei.com (10.206.15.33) X-CFilter-Loop: Reflected Precedence: bulk List-ID: X-Mailing-List: linux-integrity@vger.kernel.org Since every function is automatically generated and placed in the light skeleton, the kernel module for preloading an eBPF program is very small and has a well-defined structure. The only variable part is the path of the light skeleton. Introduce the new 'module' subcommand of the 'gen' bpftool command, which takes the path of the light skeleton to be included in the #include directive and generates the code of the kernel module to preload the eBPF program. Signed-off-by: Roberto Sassu --- kernel/bpf/preload/bpf_preload_kern.c | 1 + kernel/bpf/preload/iterators/Makefile | 7 +++-- .../bpf/bpftool/Documentation/bpftool-gen.rst | 8 +++++ tools/bpf/bpftool/bash-completion/bpftool | 4 +++ tools/bpf/bpftool/gen.c | 31 +++++++++++++++++++ 5 files changed, 49 insertions(+), 2 deletions(-) diff --git a/kernel/bpf/preload/bpf_preload_kern.c b/kernel/bpf/preload/bpf_preload_kern.c index c6d97872225b..048bca3ba499 100644 --- a/kernel/bpf/preload/bpf_preload_kern.c +++ b/kernel/bpf/preload/bpf_preload_kern.c @@ -1,4 +1,5 @@ // SPDX-License-Identifier: GPL-2.0 +/* THIS FILE IS AUTOGENERATED!
*/ #define pr_fmt(fmt) KBUILD_MODNAME ": " fmt #include #include diff --git a/kernel/bpf/preload/iterators/Makefile b/kernel/bpf/preload/iterators/Makefile index d36a822d3e16..9dcad1c5c44b 100644 --- a/kernel/bpf/preload/iterators/Makefile +++ b/kernel/bpf/preload/iterators/Makefile @@ -35,17 +35,20 @@ endif .PHONY: all clean -all: iterators.lskel.h +all: iterators.lskel.h bpf_preload_kern.c clean: $(call msg,CLEAN) $(Q)rm -rf $(OUTPUT) iterators +bpf_preload_kern.c: iterators.lskel.h $(BPFTOOL) + $(call msg,GEN-PRELOAD,$@) + $(Q)$(BPFTOOL) gen module iterators/iterators.lskel.h $< > ../$@ + iterators.lskel.h: $(OUTPUT)/iterators.bpf.o | $(BPFTOOL) $(call msg,GEN-SKEL,$@) $(Q)$(BPFTOOL) gen skeleton -L -P $< > $@ - $(OUTPUT)/iterators.bpf.o: iterators.bpf.c $(BPFOBJ) | $(OUTPUT) $(call msg,BPF,$@) $(Q)$(CLANG) -g -O2 -target bpf $(INCLUDES) \ diff --git a/tools/bpf/bpftool/Documentation/bpftool-gen.rst b/tools/bpf/bpftool/Documentation/bpftool-gen.rst index 74bbefa28212..6d29d2b1e4e2 100644 --- a/tools/bpf/bpftool/Documentation/bpftool-gen.rst +++ b/tools/bpf/bpftool/Documentation/bpftool-gen.rst @@ -27,6 +27,7 @@ GEN COMMANDS | **bpftool** **gen skeleton** *FILE* [**name** *OBJECT_NAME*] | **bpftool** **gen subskeleton** *FILE* [**name** *OBJECT_NAME*] | **bpftool** **gen min_core_btf** *INPUT* *OUTPUT* *OBJECT* [*OBJECT*...] +| **bpftool** **gen module** *FILE* | **bpftool** **gen help** DESCRIPTION @@ -195,6 +196,13 @@ DESCRIPTION Check examples bellow for more information how to use it. + **bpftool** **gen module** *FILE* + Generate the code of a kernel module including the light + skeleton of an eBPF program to preload. The only variable part + is the path of the light skeleton. All kernel modules call + load_skel() and free_objs_and_skel() respectively in the init + and fini module entrypoints. + **bpftool gen help** Print short help message. 
diff --git a/tools/bpf/bpftool/bash-completion/bpftool b/tools/bpf/bpftool/bash-completion/bpftool index 6e433e86fb26..82e8716fd3ad 100644 --- a/tools/bpf/bpftool/bash-completion/bpftool +++ b/tools/bpf/bpftool/bash-completion/bpftool @@ -1019,6 +1019,10 @@ _bpftool() _filedir return 0 ;; + module) + _filedir + return 0 + ;; *) [[ $prev == $object ]] && \ COMPREPLY=( $( compgen -W 'object skeleton subskeleton help min_core_btf' -- "$cur" ) ) diff --git a/tools/bpf/bpftool/gen.c b/tools/bpf/bpftool/gen.c index af939183f57a..77ab78884285 100644 --- a/tools/bpf/bpftool/gen.c +++ b/tools/bpf/bpftool/gen.c @@ -1898,6 +1898,35 @@ static int do_object(int argc, char **argv) return err; } +static int do_module(int argc, char **argv) +{ + const char *skeleton_file; + + if (!REQ_ARGS(1)) { + usage(); + return -1; + } + + skeleton_file = GET_ARG(); + + codegen("\ + \n\ + // SPDX-License-Identifier: GPL-2.0 \n\ + /* THIS FILE IS AUTOGENERATED! */ \n\ + #define pr_fmt(fmt) KBUILD_MODNAME \": \" fmt \n\ + #include \n\ + #include \n\ + #include \n\ + #include \"%s\" \n\ + \n\ + late_initcall(load_skel); \n\ + module_exit(free_objs_and_skel); \n\ + MODULE_LICENSE(\"GPL\"); \n\ + ", skeleton_file); + + return 0; +} + static int do_help(int argc, char **argv) { if (json_output) { @@ -1910,6 +1939,7 @@ static int do_help(int argc, char **argv) " %1$s %2$s skeleton FILE [name OBJECT_NAME]\n" " %1$s %2$s subskeleton FILE [name OBJECT_NAME]\n" " %1$s %2$s min_core_btf INPUT OUTPUT OBJECT [OBJECT...]\n" + " %1$s %2$s module SKELETON_FILE\n" " %1$s %2$s help\n" "\n" " " HELP_SPEC_OPTIONS " |\n" @@ -2508,6 +2538,7 @@ static const struct cmd cmds[] = { { "skeleton", do_skeleton }, { "subskeleton", do_subskeleton }, { "min_core_btf", do_min_core_btf}, + { "module", do_module}, { "help", do_help }, { 0 } }; From patchwork Mon Mar 28 17:50:31 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Roberto Sassu X-Patchwork-Id: 
12794066 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id E7545C433FE for ; Mon, 28 Mar 2022 17:56:00 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S244945AbiC1R5h (ORCPT ); Mon, 28 Mar 2022 13:57:37 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:45298 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S245074AbiC1R4v (ORCPT ); Mon, 28 Mar 2022 13:56:51 -0400 Received: from frasgout.his.huawei.com (frasgout.his.huawei.com [185.176.79.56]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id CA96621E1F; Mon, 28 Mar 2022 10:55:04 -0700 (PDT) Received: from fraeml714-chm.china.huawei.com (unknown [172.18.147.226]) by frasgout.his.huawei.com (SkyGuard) with ESMTP id 4KS0dr4kkWz67Q5R; Tue, 29 Mar 2022 01:52:32 +0800 (CST) Received: from roberto-ThinkStation-P620.huawei.com (10.204.63.22) by fraeml714-chm.china.huawei.com (10.206.15.33) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.24; Mon, 28 Mar 2022 19:55:01 +0200 From: Roberto Sassu To: , , , , , , , , , CC: , , , , , , , , , , Roberto Sassu Subject: [PATCH 16/18] bpf-preload: Do kernel mount to ensure that pinned objects don't disappear Date: Mon, 28 Mar 2022 19:50:31 +0200 Message-ID: <20220328175033.2437312-17-roberto.sassu@huawei.com> X-Mailer: git-send-email 2.32.0 In-Reply-To: <20220328175033.2437312-1-roberto.sassu@huawei.com> References: <20220328175033.2437312-1-roberto.sassu@huawei.com> MIME-Version: 1.0 X-Originating-IP: [10.204.63.22] X-ClientProxiedBy: lhreml754-chm.china.huawei.com (10.201.108.204) To fraeml714-chm.china.huawei.com (10.206.15.33) X-CFilter-Loop: Reflected Precedence: bulk List-ID: X-Mailing-List: linux-integrity@vger.kernel.org One of the differences between 
traditional LSMs in the security subsystem and LSMs implemented as eBPF programs is that, for the latter category, it cannot be guaranteed that they will not be stopped. If a pinned program is unpinned, its execution stops and it no longer enforces its policy. For traditional LSMs this problem does not arise: once they are invoked by the kernel, only the LSMs themselves decide whether they can be stopped. Solve this problem by mounting the bpf filesystem from the kernel, so that an object cannot be unpinned (a kernel mount is not accessible to user space). This ensures that the LSM will run until the very end of the kernel lifecycle. Delay the kernel mount until the security subsystem (e.g. IMA) is fully initialized (e.g. keys loaded), so that the security subsystem can evaluate kernel modules loaded by populate_bpffs(). Signed-off-by: Roberto Sassu Reported-by: kernel test robot Reported-by: kernel test robot --- fs/namespace.c | 1 + include/linux/bpf.h | 5 +++++ init/main.c | 2 ++ kernel/bpf/inode.c | 9 +++++++++ 4 files changed, 17 insertions(+) diff --git a/fs/namespace.c b/fs/namespace.c index 6e9844b8c6fb..3b69f96dc641 100644 --- a/fs/namespace.c +++ b/fs/namespace.c @@ -31,6 +31,7 @@ #include #include #include +#include #include #include "pnode.h" diff --git a/include/linux/bpf.h b/include/linux/bpf.h index bdb5298735ce..5f624310fda2 100644 --- a/include/linux/bpf.h +++ b/include/linux/bpf.h @@ -1103,6 +1103,8 @@ static inline void bpf_module_put(const void *data, struct module *owner) module_put(owner); } +void __init mount_bpffs(void); + #ifdef CONFIG_NET /* Define it here to avoid the use of forward declaration */ struct bpf_dummy_ops_state { @@ -1141,6 +1143,9 @@ static inline int bpf_struct_ops_map_sys_lookup_elem(struct bpf_map *map, { return -EINVAL; } +static inline void __init mount_bpffs(void) +{ +} #endif struct bpf_array { diff --git a/init/main.c b/init/main.c index 0c064c2c79fd..30dcd0dd9faa 100644 --- a/init/main.c +++ 
b/init/main.c @@ -99,6 +99,7 @@ #include #include #include +#include #include #include @@ -1638,4 +1639,5 @@ static noinline void __init kernel_init_freeable(void) */ integrity_load_keys(); + mount_bpffs(); } diff --git a/kernel/bpf/inode.c b/kernel/bpf/inode.c index c1941c65ce95..e8361d7679d0 100644 --- a/kernel/bpf/inode.c +++ b/kernel/bpf/inode.c @@ -1020,3 +1020,12 @@ static int __init bpf_init(void) return ret; } fs_initcall(bpf_init); + +static struct vfsmount *bpffs_mount __read_mostly; + +void __init mount_bpffs(void) +{ + bpffs_mount = kern_mount(&bpf_fs_type); + if (IS_ERR(bpffs_mount)) + pr_err("bpffs: could not mount!\n"); +} From patchwork Mon Mar 28 17:50:32 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Roberto Sassu X-Patchwork-Id: 12794068 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id E15FEC433EF for ; Mon, 28 Mar 2022 17:56:08 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S244791AbiC1R5r (ORCPT ); Mon, 28 Mar 2022 13:57:47 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:46616 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S244967AbiC1R4x (ORCPT ); Mon, 28 Mar 2022 13:56:53 -0400 Received: from frasgout.his.huawei.com (frasgout.his.huawei.com [185.176.79.56]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 17DDC23BC0; Mon, 28 Mar 2022 10:55:05 -0700 (PDT) Received: from fraeml714-chm.china.huawei.com (unknown [172.18.147.201]) by frasgout.his.huawei.com (SkyGuard) with ESMTP id 4KS0fd4t1mz67y8R; Tue, 29 Mar 2022 01:53:13 +0800 (CST) Received: from roberto-ThinkStation-P620.huawei.com (10.204.63.22) by fraeml714-chm.china.huawei.com (10.206.15.33) with Microsoft SMTP Server (version=TLS1_2, 
cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2375.24; Mon, 28 Mar 2022 19:55:02 +0200 From: Roberto Sassu To: , , , , , , , , , CC: , , , , , , , , , , Roberto Sassu Subject: [PATCH 17/18] bpf-preload/selftests: Add test for automatic generation of preload methods Date: Mon, 28 Mar 2022 19:50:32 +0200 Message-ID: <20220328175033.2437312-18-roberto.sassu@huawei.com> X-Mailer: git-send-email 2.32.0 In-Reply-To: <20220328175033.2437312-1-roberto.sassu@huawei.com> References: <20220328175033.2437312-1-roberto.sassu@huawei.com> MIME-Version: 1.0 X-Originating-IP: [10.204.63.22] X-ClientProxiedBy: lhreml754-chm.china.huawei.com (10.201.108.204) To fraeml714-chm.china.huawei.com (10.206.15.33) X-CFilter-Loop: Reflected Precedence: bulk List-ID: X-Mailing-List: linux-integrity@vger.kernel.org Add the test 'gen_preload_methods' to ensure that the preload methods are correctly generated. Introduce a sample eBPF program in progs/gen_preload_methods.c, generate the light skeleton without and with the preload methods, and finally compare the diff with the expected diff output in prog_tests/gen_preload_methods.expected.diff. 
Signed-off-by: Roberto Sassu --- tools/testing/selftests/bpf/Makefile | 15 ++- .../gen_preload_methods.expected.diff | 97 +++++++++++++++++++ .../bpf/prog_tests/test_gen_preload_methods.c | 27 ++++++ .../selftests/bpf/progs/gen_preload_methods.c | 23 +++++ 4 files changed, 160 insertions(+), 2 deletions(-) create mode 100644 tools/testing/selftests/bpf/prog_tests/gen_preload_methods.expected.diff create mode 100644 tools/testing/selftests/bpf/prog_tests/test_gen_preload_methods.c create mode 100644 tools/testing/selftests/bpf/progs/gen_preload_methods.c diff --git a/tools/testing/selftests/bpf/Makefile b/tools/testing/selftests/bpf/Makefile index 3820608faf57..de81779e90e3 100644 --- a/tools/testing/selftests/bpf/Makefile +++ b/tools/testing/selftests/bpf/Makefile @@ -337,10 +337,11 @@ test_subskeleton_lib.skel.h-deps := test_subskeleton_lib2.o test_subskeleton_lib LSKELS := kfunc_call_test.c fentry_test.c fexit_test.c fexit_sleep.c \ test_ringbuf.c atomics.c trace_printk.c trace_vprintk.c \ - map_ptr_kern.c core_kern.c core_kern_overflow.c + map_ptr_kern.c core_kern.c core_kern_overflow.c gen_preload_methods.c +LSKELSP := gen_preload_methods.c # Generate both light skeleton and libbpf skeleton for these LSKELS_EXTRA := test_ksyms_module.c test_ksyms_weak.c kfunc_call_test_subprog.c -SKEL_BLACKLIST += $$(LSKELS) +SKEL_BLACKLIST += $$(LSKELS) $$(LSKELSP) test_static_linked.skel.h-deps := test_static_linked1.o test_static_linked2.o linked_funcs.skel.h-deps := linked_funcs1.o linked_funcs2.o @@ -370,6 +371,7 @@ TRUNNER_BPF_SKELS := $$(patsubst %.c,$$(TRUNNER_OUTPUT)/%.skel.h, \ $$(filter-out $(SKEL_BLACKLIST) $(LINKED_BPF_SRCS),\ $$(TRUNNER_BPF_SRCS))) TRUNNER_BPF_LSKELS := $$(patsubst %.c,$$(TRUNNER_OUTPUT)/%.lskel.h, $$(LSKELS) $$(LSKELS_EXTRA)) +TRUNNER_BPF_LSKELSP := $$(patsubst %.c,$$(TRUNNER_OUTPUT)/%.preload.lskel.h, $$(LSKELSP)) TRUNNER_BPF_SKELS_LINKED := $$(addprefix $$(TRUNNER_OUTPUT)/,$(LINKED_SKELS)) TEST_GEN_FILES += $$(TRUNNER_BPF_OBJS) @@ -421,6 
+423,14 @@ $(TRUNNER_BPF_LSKELS): %.lskel.h: %.o $(BPFTOOL) | $(TRUNNER_OUTPUT)
 	$(Q)diff $$(<:.o=.linked2.o) $$(<:.o=.linked3.o)
 	$(Q)$$(BPFTOOL) gen skeleton -L $$(<:.o=.linked3.o) name $$(notdir $$(<:.o=_lskel)) > $$@

+$(TRUNNER_BPF_LSKELSP): %.preload.lskel.h: %.o $(BPFTOOL) | $(TRUNNER_OUTPUT)
+	$$(call msg,GEN-SKEL,$(TRUNNER_BINARY),$$@)
+	$(Q)$$(BPFTOOL) gen object $$(<:.o=.linked1.o) $$<
+	$(Q)$$(BPFTOOL) gen object $$(<:.o=.linked2.o) $$(<:.o=.linked1.o)
+	$(Q)$$(BPFTOOL) gen object $$(<:.o=.linked3.o) $$(<:.o=.linked2.o)
+	$(Q)diff $$(<:.o=.linked2.o) $$(<:.o=.linked3.o)
+	$(Q)$$(BPFTOOL) gen skeleton -L -P $$(<:.o=.linked3.o) name $$(notdir $$(<:.o=_lskel)) > $$@
+
 $(TRUNNER_BPF_SKELS_LINKED): $(TRUNNER_BPF_OBJS) $(BPFTOOL) | $(TRUNNER_OUTPUT)
 	$$(call msg,LINK-BPF,$(TRUNNER_BINARY),$$(@:.skel.h=.o))
 	$(Q)$$(BPFTOOL) gen object $$(@:.skel.h=.linked1.o) $$(addprefix $(TRUNNER_OUTPUT)/,$$($$(@F)-deps))
@@ -451,6 +461,7 @@ $(TRUNNER_TEST_OBJS): $(TRUNNER_OUTPUT)/%.test.o:	\
		      $(TRUNNER_BPF_OBJS)			\
		      $(TRUNNER_BPF_SKELS)			\
		      $(TRUNNER_BPF_LSKELS)			\
+		      $(TRUNNER_BPF_LSKELSP)			\
		      $(TRUNNER_BPF_SKELS_LINKED)		\
		      $$(BPFOBJ) | $(TRUNNER_OUTPUT)
	$$(call msg,TEST-OBJ,$(TRUNNER_BINARY),$$@)
diff --git a/tools/testing/selftests/bpf/prog_tests/gen_preload_methods.expected.diff b/tools/testing/selftests/bpf/prog_tests/gen_preload_methods.expected.diff
new file mode 100644
index 000000000000..5e010d380e50
--- /dev/null
+++ b/tools/testing/selftests/bpf/prog_tests/gen_preload_methods.expected.diff
@@ -0,0 +1,97 @@
+--- gen_preload_methods.lskel.h	2022-03-28 13:40:22.042715754 +0200
++++ gen_preload_methods.preload.lskel.h	2022-03-28 13:40:22.530715750 +0200
+@@ -221,4 +221,94 @@ gen_preload_methods_lskel__assert(struct
+ #endif
+ }
+
++static struct bpf_link *dump_bpf_map_link;
++static struct bpf_map *ringbuf_map;
++static struct gen_preload_methods_lskel *skel;
++
++static void free_objs_and_skel(void)
++{
++	bpf_preload_set_ops("gen_preload_methods_lskel", THIS_MODULE, NULL);
++
++	if (!IS_ERR_OR_NULL(dump_bpf_map_link))
++		bpf_link_put(dump_bpf_map_link);
++	if (!IS_ERR_OR_NULL(ringbuf_map))
++		bpf_map_put(ringbuf_map);
++
++	gen_preload_methods_lskel__destroy(skel);
++}
++
++static int preload(struct dentry *parent)
++{
++	int err;
++
++	bpf_link_inc(dump_bpf_map_link);
++	bpf_map_inc(ringbuf_map);
++
++	err = bpf_obj_do_pin_kernel(parent, "dump_bpf_map",
++				    dump_bpf_map_link,
++				    BPF_TYPE_LINK);
++	if (err)
++		goto undo;
++
++	err = bpf_obj_do_pin_kernel(parent, "ringbuf",
++				    ringbuf_map,
++				    BPF_TYPE_MAP);
++	if (err)
++		goto undo;
++
++	return 0;
++undo:
++	bpf_link_put(dump_bpf_map_link);
++	bpf_map_put(ringbuf_map);
++	return err;
++}
++
++static struct bpf_preload_ops ops = {
++	.preload = preload,
++	.owner = THIS_MODULE,
++};
++
++static int load_skel(void)
++{
++	int err = -ENOMEM;
++
++	if (!bpf_preload_set_ops("gen_preload_methods_lskel", THIS_MODULE, &ops))
++		return 0;
++
++	skel = gen_preload_methods_lskel__open();
++	if (!skel)
++		goto out;
++
++	err = gen_preload_methods_lskel__load(skel);
++	if (err)
++		goto out;
++
++	err = gen_preload_methods_lskel__attach(skel);
++	if (err)
++		goto out;
++
++	dump_bpf_map_link = bpf_link_get_from_fd(skel->links.dump_bpf_map_fd);
++	if (IS_ERR(dump_bpf_map_link)) {
++		err = PTR_ERR(dump_bpf_map_link);
++		goto out;
++	}
++
++	ringbuf_map = bpf_map_get(skel->maps.ringbuf.map_fd);
++	if (IS_ERR(ringbuf_map)) {
++		err = PTR_ERR(ringbuf_map);
++		goto out;
++	}
++
++	/* Avoid taking over stdin/stdout/stderr of init process. Zeroing out
++	 * makes skel_closenz() a no-op later in iterators_bpf__destroy().
++	 */
++	close_fd(skel->links.dump_bpf_map_fd);
++	skel->links.dump_bpf_map_fd = 0;
++
++	return 0;
++out:
++	free_objs_and_skel();
++	return err;
++}
++
+ #endif /* __GEN_PRELOAD_METHODS_LSKEL_SKEL_H__ */
diff --git a/tools/testing/selftests/bpf/prog_tests/test_gen_preload_methods.c b/tools/testing/selftests/bpf/prog_tests/test_gen_preload_methods.c
new file mode 100644
index 000000000000..937b3e606f53
--- /dev/null
+++ b/tools/testing/selftests/bpf/prog_tests/test_gen_preload_methods.c
@@ -0,0 +1,27 @@
+// SPDX-License-Identifier: GPL-2.0
+
+/*
+ * Copyright (C) 2022 Huawei Technologies Duesseldorf GmbH
+ */
+
+#include <test_progs.h>
+
+static int duration;
+
+void test_test_gen_preload_methods(void)
+{
+	char diff_cmd[1024];
+	int err;
+
+	snprintf(diff_cmd, sizeof(diff_cmd),
+		 "diff -up gen_preload_methods.lskel.h "
+		 "gen_preload_methods.preload.lskel.h "
+		 "| tail -n +4 | "
+		 "diff -u - "
+		 "<(tail -n +4 prog_tests/gen_preload_methods.expected.diff)");
+	err = system(diff_cmd);
+	if (CHECK(err, "diff",
+		  "differing test output, err=%d, diff cmd:\n%s\n",
+		  err, diff_cmd))
+		return;
+}
diff --git a/tools/testing/selftests/bpf/progs/gen_preload_methods.c b/tools/testing/selftests/bpf/progs/gen_preload_methods.c
new file mode 100644
index 000000000000..5b3ab27f945d
--- /dev/null
+++ b/tools/testing/selftests/bpf/progs/gen_preload_methods.c
@@ -0,0 +1,23 @@
+// SPDX-License-Identifier: GPL-2.0
+
+/*
+ * Copyright (C) 2022 Huawei Technologies Duesseldorf GmbH
+ */
+
+#include "vmlinux.h"
+#include <bpf/bpf_helpers.h>
+#include <bpf/bpf_tracing.h>
+#include <bpf/bpf_core_read.h>
+
+struct {
+	__uint(type, BPF_MAP_TYPE_RINGBUF);
+	__uint(max_entries, 1 << 12);
+} ringbuf SEC(".maps");
+
+char _license[] SEC("license") = "GPL";
+
+SEC("iter/bpf_map")
+int dump_bpf_map(struct bpf_iter__bpf_map *ctx)
+{
+	return 0;
+}

From patchwork Mon Mar 28 17:50:33 2022
X-Patchwork-Submitter: Roberto Sassu
X-Patchwork-Id: 12794067
From: Roberto Sassu
Subject: [PATCH 18/18] bpf-preload/selftests: Preload a test eBPF program and
 check pinned objects
Date: Mon, 28 Mar 2022 19:50:33 +0200
Message-ID: <20220328175033.2437312-19-roberto.sassu@huawei.com>
In-Reply-To: <20220328175033.2437312-1-roberto.sassu@huawei.com>
References: <20220328175033.2437312-1-roberto.sassu@huawei.com>
X-Mailing-List: linux-integrity@vger.kernel.org

Introduce the 'preload_methods' test, which loads the new kernel module
bpf_testmod_preload.ko (built with the light skeleton from
gen_preload_methods.c), mounts a new instance of the bpf filesystem, and
checks that the pinned objects exist.

The test requires 'gen_preload_methods_lskel' to be included in the list
of eBPF programs to preload.

Signed-off-by: Roberto Sassu
---
 tools/testing/selftests/bpf/Makefile              | 17 ++++-
 .../bpf/bpf_testmod_preload/.gitignore            |  7 ++
 .../bpf/bpf_testmod_preload/Makefile              | 20 ++++++
 .../bpf/prog_tests/test_preload_methods.c         | 69 +++++++++++++++++++
 4 files changed, 110 insertions(+), 3 deletions(-)
 create mode 100644 tools/testing/selftests/bpf/bpf_testmod_preload/.gitignore
 create mode 100644 tools/testing/selftests/bpf/bpf_testmod_preload/Makefile
 create mode 100644 tools/testing/selftests/bpf/prog_tests/test_preload_methods.c

diff --git a/tools/testing/selftests/bpf/Makefile b/tools/testing/selftests/bpf/Makefile
index de81779e90e3..ca419b0a083c 100644
--- a/tools/testing/selftests/bpf/Makefile
+++ b/tools/testing/selftests/bpf/Makefile
@@ -82,7 +82,7 @@ TEST_PROGS_EXTENDED := with_addr.sh \
 TEST_GEN_PROGS_EXTENDED = test_sock_addr test_skb_cgroup_id_user \
	flow_dissector_load test_flow_dissector test_tcp_check_syncookie_user \
	test_lirc_mode2_user xdping test_cpp runqslower bench bpf_testmod.ko \
-	xdpxceiver xdp_redirect_multi
+	xdpxceiver xdp_redirect_multi bpf_testmod_preload.ko

 TEST_CUSTOM_PROGS = $(OUTPUT)/urandom_read
@@ -110,6 +110,7 @@ override define CLEAN
	$(Q)$(RM) -r $(TEST_GEN_FILES)
	$(Q)$(RM) -r $(EXTRA_CLEAN)
	$(Q)$(MAKE) -C bpf_testmod clean
+	$(Q)$(MAKE) -C bpf_testmod_preload clean
	$(Q)$(MAKE) docs-clean
 endef
@@ -502,7 +503,7 @@ TRUNNER_EXTRA_SOURCES := test_progs.c cgroup_helpers.c trace_helpers.c \
			 btf_helpers.c flow_dissector_load.h \
			 cap_helpers.c
 TRUNNER_EXTRA_FILES := $(OUTPUT)/urandom_read $(OUTPUT)/bpf_testmod.ko \
-		       ima_setup.sh \
+		       ima_setup.sh $(OUTPUT)/bpf_testmod_preload.ko \
		       $(wildcard progs/btf_dump_test_case_*.c)
 TRUNNER_BPF_BUILD_RULE := CLANG_BPF_BUILD_RULE
 TRUNNER_BPF_CFLAGS := $(BPF_CFLAGS) $(CLANG_CFLAGS) -DENABLE_ATOMICS_TESTS
@@ -575,9 +576,19 @@ $(OUTPUT)/bench: $(OUTPUT)/bench.o \
	$(call msg,BINARY,,$@)
	$(Q)$(CC) $(CFLAGS) $(LDFLAGS) $(filter %.a %.o,$^) $(LDLIBS) -o $@

+bpf_testmod_preload/bpf_testmod_preload.c: $(OUTPUT)/gen_preload_methods.preload.lskel.h $(BPFTOOL) $(TRUNNER_BPF_LSKELSP)
+	$(call msg,GEN-MOD,,$@)
+	$(BPFTOOL) gen module $< > $@
+
+$(OUTPUT)/bpf_testmod_preload.ko: bpf_testmod_preload/bpf_testmod_preload.c
+	$(call msg,MOD,,$@)
+	$(Q)$(RM) bpf_testmod_preload/bpf_testmod_preload.ko # force re-compilation
+	$(Q)$(MAKE) $(submake_extras) -C bpf_testmod_preload
+	$(Q)cp bpf_testmod_preload/bpf_testmod_preload.ko $@
+
 EXTRA_CLEAN := $(TEST_CUSTOM_PROGS) $(SCRATCH_DIR) $(HOST_SCRATCH_DIR) \
	prog_tests/tests.h map_tests/tests.h verifier/tests.h \
	feature bpftool \
-	$(addprefix $(OUTPUT)/,*.o *.skel.h *.lskel.h *.subskel.h no_alu32 bpf_gcc bpf_testmod.ko)
+	$(addprefix $(OUTPUT)/,*.o *.skel.h *.lskel.h *.subskel.h no_alu32 bpf_gcc bpf_testmod.ko bpf_testmod_preload.ko)

 .PHONY: docs docs-clean
diff --git a/tools/testing/selftests/bpf/bpf_testmod_preload/.gitignore b/tools/testing/selftests/bpf/bpf_testmod_preload/.gitignore
new file mode 100644
index 000000000000..989530ffc79f
--- /dev/null
+++ b/tools/testing/selftests/bpf/bpf_testmod_preload/.gitignore
@@ -0,0 +1,7 @@
+*.mod
+*.mod.c
+*.o
+*.ko
+/Module.symvers
+/modules.order
+bpf_testmod_preload.c
diff --git a/tools/testing/selftests/bpf/bpf_testmod_preload/Makefile b/tools/testing/selftests/bpf/bpf_testmod_preload/Makefile
new file mode 100644
index 000000000000..d17ac6670974
--- /dev/null
+++ b/tools/testing/selftests/bpf/bpf_testmod_preload/Makefile
@@ -0,0 +1,20 @@
+BPF_TESTMOD_PRELOAD_DIR := $(realpath $(dir $(abspath $(lastword $(MAKEFILE_LIST)))))
+KDIR ?= $(abspath $(BPF_TESTMOD_PRELOAD_DIR)/../../../../..)
+
+ifeq ($(V),1)
+Q =
+else
+Q = @
+endif
+
+MODULES = bpf_testmod_preload.ko
+
+obj-m += bpf_testmod_preload.o
+CFLAGS_bpf_testmod_preload.o = -I$(BPF_TESTMOD_PRELOAD_DIR)/../tools/include
+
+all:
+	+$(Q)make -C $(KDIR) M=$(BPF_TESTMOD_PRELOAD_DIR) modules
+
+clean:
+	+$(Q)make -C $(KDIR) M=$(BPF_TESTMOD_PRELOAD_DIR) clean
+
diff --git a/tools/testing/selftests/bpf/prog_tests/test_preload_methods.c b/tools/testing/selftests/bpf/prog_tests/test_preload_methods.c
new file mode 100644
index 000000000000..bad3b187794b
--- /dev/null
+++ b/tools/testing/selftests/bpf/prog_tests/test_preload_methods.c
@@ -0,0 +1,69 @@
+// SPDX-License-Identifier: GPL-2.0
+
+/*
+ * Copyright (C) 2022 Huawei Technologies Duesseldorf GmbH
+ */
+
+#include <errno.h>
+#include <limits.h>
+#include <sys/mount.h>
+#include <sys/stat.h>
+#include <test_progs.h>
+
+#define MOUNT_FLAGS (MS_NOSUID | MS_NODEV | MS_NOEXEC | MS_RELATIME)
+
+static int duration;
+
+void test_test_preload_methods(void)
+{
+	char bpf_mntpoint[] = "/tmp/bpf_mntpointXXXXXX", *dir;
+	char path[PATH_MAX];
+	struct stat st;
+	int err;
+
+	system("rmmod bpf_testmod_preload 2> /dev/null");
+
+	err = system("insmod bpf_testmod_preload.ko");
+	if (CHECK(err, "insmod",
+		  "cannot load bpf_testmod_preload.ko, err=%d\n", err))
+		return;
+
+	dir = mkdtemp(bpf_mntpoint);
+	if (CHECK(!dir, "mkstemp", "cannot create temp file, err=%d\n",
+		  -errno))
+		goto out_rmmod;
+
+	err = mount(bpf_mntpoint, bpf_mntpoint, "bpf", MOUNT_FLAGS, NULL);
+	if (CHECK(err, "mount",
+		  "cannot mount bpf filesystem to %s, err=%d\n", bpf_mntpoint,
+		  err))
+		goto out_unlink;
+
+	snprintf(path, sizeof(path), "%s/gen_preload_methods_lskel",
+		 bpf_mntpoint);
+
+	err = stat(path, &st);
+	if (CHECK(err, "stat", "cannot find %s\n", path))
+		goto out_unmount;
+
+	snprintf(path, sizeof(path),
+		 "%s/gen_preload_methods_lskel/dump_bpf_map", bpf_mntpoint);
+
+	err = stat(path, &st);
+	if (CHECK(err, "stat", "cannot find %s\n", path))
+		goto out_unmount;
+
+	snprintf(path, sizeof(path), "%s/gen_preload_methods_lskel/ringbuf",
+		 bpf_mntpoint);
+
+	err = stat(path, &st);
+	if (CHECK(err, "stat", "cannot find %s\n", path))
+		goto out_unmount;
+
+out_unmount:
+	umount(bpf_mntpoint);
+out_unlink:
+	rmdir(bpf_mntpoint);
+out_rmmod:
+	system("rmmod bpf_testmod_preload");
+}
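[Editor's note] The header-stripping comparison that test_gen_preload_methods.c hands to system() — diff the two generated skeleton headers, drop the first three lines of the output, and compare against the stored expectation with its own first three lines dropped — can be reproduced standalone. The sketch below is illustrative only: the file names are made up, and a temporary file replaces bash's <(...) process substitution.

```shell
#!/bin/sh
# Standalone sketch of the header-stripping diff comparison used by
# test_gen_preload_methods.c. File names here are illustrative only.

workdir=$(mktemp -d) || exit 1
cd "$workdir" || exit 1

# Two versions of a generated skeleton: the second carries one extra line,
# standing in for the preload methods appended by `bpftool gen skeleton -L -P`.
printf 'old line\nshared line\n'           > base.lskel.h
printf 'old line\nshared line\nnew line\n' > preload.lskel.h

# Record the expectation once. `diff -u` emits ---/+++ lines with timestamps
# plus a @@ hunk header, so both sides are passed through `tail -n +4`
# before comparing, exactly as the selftest does.
diff -up base.lskel.h preload.lskel.h > expected.diff

diff -up base.lskel.h preload.lskel.h | tail -n +4 > got.txt
tail -n +4 expected.diff > want.txt

diff -u got.txt want.txt && echo "generated skeleton matches expectation"
```

One detail worth noting: system() runs the command via /bin/sh, so the in-tree pipeline's <(...) construct works only where /bin/sh is bash; writing the stripped expectation to a regular file, as above, avoids that dependency.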