From patchwork Wed Jun 21 23:37:56 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Andrii Nakryiko X-Patchwork-Id: 13288041 X-Patchwork-Delegate: paul@paul-moore.com Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 8EED6EB64DC for ; Wed, 21 Jun 2023 23:38:35 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S229675AbjFUXie convert rfc822-to-8bit (ORCPT ); Wed, 21 Jun 2023 19:38:34 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:33306 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S229920AbjFUXiZ (ORCPT ); Wed, 21 Jun 2023 19:38:25 -0400 Received: from mx0b-00082601.pphosted.com (mx0b-00082601.pphosted.com [67.231.153.30]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 974ED1BD1 for ; Wed, 21 Jun 2023 16:38:23 -0700 (PDT) Received: from pps.filterd (m0109332.ppops.net [127.0.0.1]) by mx0a-00082601.pphosted.com (8.17.1.19/8.17.1.19) with ESMTP id 35LIqrkR007418 for ; Wed, 21 Jun 2023 16:38:22 -0700 Received: from mail.thefacebook.com ([163.114.132.120]) by mx0a-00082601.pphosted.com (PPS) with ESMTPS id 3rc05hwt75-4 (version=TLSv1.2 cipher=ECDHE-RSA-AES128-GCM-SHA256 bits=128 verify=NOT) for ; Wed, 21 Jun 2023 16:38:22 -0700 Received: from twshared24695.38.frc1.facebook.com (2620:10d:c085:108::8) by mail.thefacebook.com (2620:10d:c085:21d::7) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2507.23; Wed, 21 Jun 2023 16:38:19 -0700 Received: by devbig019.vll3.facebook.com (Postfix, from userid 137359) id B17CB333E885B; Wed, 21 Jun 2023 16:38:11 -0700 (PDT) From: Andrii Nakryiko To: CC: , , , , , , , Subject: [PATCH v3 bpf-next 01/14] bpf: introduce BPF token object Date: Wed, 21 Jun 2023 16:37:56 -0700 Message-ID: <20230621233809.1941811-2-andrii@kernel.org> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20230621233809.1941811-1-andrii@kernel.org> References: <20230621233809.1941811-1-andrii@kernel.org> MIME-Version: 1.0 X-FB-Internal: Safe X-Proofpoint-GUID: iQwAF-nNT8iKzRPZ9vhBIJNsPTspg8n6 X-Proofpoint-ORIG-GUID: iQwAF-nNT8iKzRPZ9vhBIJNsPTspg8n6 X-Proofpoint-Virus-Version: vendor=baseguard engine=ICAP:2.0.254,Aquarius:18.0.957,Hydra:6.0.591,FMLib:17.11.176.26 definitions=2023-06-21_12,2023-06-16_01,2023-05-22_02 Precedence: bulk List-ID: Add new kind of BPF kernel object, BPF token. BPF token is meant to to allow delegating privileged BPF functionality, like loading a BPF program or creating a BPF map, from privileged process to a *trusted* unprivileged process, all while have a good amount of control over which privileged operations could be performed using provided BPF token. This patch adds new BPF_TOKEN_CREATE command to bpf() syscall, which allows to create a new BPF token object along with a set of allowed commands that such BPF token allows to unprivileged applications. Currently only BPF_TOKEN_CREATE command itself can be delegated, but other patches gradually add ability to delegate BPF_MAP_CREATE, BPF_BTF_LOAD, and BPF_PROG_LOAD commands. The above means that new BPF tokens can be created using existing BPF token, if original privileged creator allowed BPF_TOKEN_CREATE command. New derived BPF token cannot be more powerful than the original BPF token. 
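For orientation, here is a minimal sketch of what invoking the new command from a privileged process could look like. It assumes a uapi header that already carries the token_create fields and the BPF_TOKEN_CREATE value added by this patch (they may change in later revisions), the pin path is hypothetical, and the auto-pinning behavior it relies on is described in the next paragraph.

/* Hedged sketch only: token_create and BPF_TOKEN_CREATE exist only with the
 * uapi header from this series; the field layout may change in later revisions.
 */
#include <linux/bpf.h>
#include <string.h>
#include <sys/syscall.h>
#include <unistd.h>

static int token_create_example(void)
{
	union bpf_attr attr;
	const char *path = "/sys/fs/bpf/delegated-token"; /* hypothetical pin location */

	memset(&attr, 0, sizeof(attr));
	attr.token_create.pin_path_fd = 0; /* ignored for an absolute pathname */
	attr.token_create.pin_pathname = (__u64)(unsigned long)path;
	/* delegate derived-token creation and program loading, matching the
	 * example given in the uapi comment further below
	 */
	attr.token_create.allowed_cmds = (1ULL << BPF_TOKEN_CREATE) |
					 (1ULL << BPF_PROG_LOAD);

	return syscall(__NR_bpf, BPF_TOKEN_CREATE, &attr, sizeof(attr));
}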
Importantly, BPF token is automatically pinned at the specified location inside an instance of BPF FS and cannot be repinned using BPF_OBJ_PIN command, unlike BPF prog/map/btf/link. This provides more control over unintended sharing of BPF tokens through pinning it in another BPF FS instances. Signed-off-by: Andrii Nakryiko --- include/linux/bpf.h | 47 ++++++++++ include/uapi/linux/bpf.h | 38 ++++++++ kernel/bpf/Makefile | 2 +- kernel/bpf/inode.c | 46 +++++++-- kernel/bpf/syscall.c | 17 ++++ kernel/bpf/token.c | 167 +++++++++++++++++++++++++++++++++ tools/include/uapi/linux/bpf.h | 38 ++++++++ 7 files changed, 344 insertions(+), 11 deletions(-) create mode 100644 kernel/bpf/token.c diff --git a/include/linux/bpf.h b/include/linux/bpf.h index f58895830ada..c4f1684aa138 100644 --- a/include/linux/bpf.h +++ b/include/linux/bpf.h @@ -51,6 +51,7 @@ struct module; struct bpf_func_state; struct ftrace_ops; struct cgroup; +struct bpf_token; extern struct idr btf_idr; extern spinlock_t btf_idr_lock; @@ -1533,6 +1534,12 @@ struct bpf_link_primer { u32 id; }; +struct bpf_token { + struct work_struct work; + atomic64_t refcnt; + u64 allowed_cmds; +}; + struct bpf_struct_ops_value; struct btf_member; @@ -1916,6 +1923,11 @@ bpf_prog_run_array_sleepable(const struct bpf_prog_array __rcu *array_rcu, return ret; } +static inline bool bpf_token_capable(const struct bpf_token *token, int cap) +{ + return token || capable(cap) || (cap != CAP_SYS_ADMIN && capable(CAP_SYS_ADMIN)); +} + #ifdef CONFIG_BPF_SYSCALL DECLARE_PER_CPU(int, bpf_prog_active); extern struct mutex bpf_stats_enabled_mutex; @@ -2077,8 +2089,25 @@ struct file *bpf_link_new_file(struct bpf_link *link, int *reserved_fd); struct bpf_link *bpf_link_get_from_fd(u32 ufd); struct bpf_link *bpf_link_get_curr_or_next(u32 *id); +void bpf_token_inc(struct bpf_token *token); +void bpf_token_put(struct bpf_token *token); +int bpf_token_create(union bpf_attr *attr); +int bpf_token_new_fd(struct bpf_token *token); +struct bpf_token *bpf_token_get_from_fd(u32 ufd); + +bool bpf_token_allow_cmd(const struct bpf_token *token, enum bpf_cmd cmd); + +enum bpf_type { + BPF_TYPE_UNSPEC = 0, + BPF_TYPE_PROG, + BPF_TYPE_MAP, + BPF_TYPE_LINK, + BPF_TYPE_TOKEN, +}; + int bpf_obj_pin_user(u32 ufd, int path_fd, const char __user *pathname); int bpf_obj_get_user(int path_fd, const char __user *pathname, int flags); +int bpf_obj_pin_any(int path_fd, const char __user *pathname, void *raw, enum bpf_type type); #define BPF_ITER_FUNC_PREFIX "bpf_iter_" #define DEFINE_BPF_ITER_FUNC(target, args...) \ @@ -2436,6 +2465,24 @@ static inline int bpf_obj_get_user(const char __user *pathname, int flags) return -EOPNOTSUPP; } +static inline void bpf_token_inc(struct bpf_token *token) +{ +} + +static inline void bpf_token_put(struct bpf_token *token) +{ +} + +static inline int bpf_token_new_fd(struct bpf_token *token) +{ + return -EOPNOTSUPP; +} + +static inline struct bpf_token *bpf_token_get_from_fd(u32 ufd) +{ + return ERR_PTR(-EOPNOTSUPP); +} + static inline void __dev_flush(void) { } diff --git a/include/uapi/linux/bpf.h b/include/uapi/linux/bpf.h index a7b5e91dd768..3c201cfe6d5c 100644 --- a/include/uapi/linux/bpf.h +++ b/include/uapi/linux/bpf.h @@ -846,6 +846,24 @@ union bpf_iter_link_info { * Returns zero on success. On error, -1 is returned and *errno* * is set appropriately. * + * BPF_TOKEN_CREATE + * Description + * Create BPF token with embedded information about what + * BPF-related functionality it allows. 
This BPF token can be + * passed as an extra parameter to various bpf() syscall commands + * to grant BPF subsystem functionality to unprivileged processes. + * BPF token is automatically pinned at specified location in BPF + * FS. It can be retrieved (to get FD passed to bpf() syscall) + * using BPF_OBJ_GET command. It's not allowed to re-pin BPF + * token using BPF_OBJ_PIN command. Such restrictions ensure BPF + * token stays associated with originally intended BPF FS + * instance and cannot be intentionally or unintentionally pinned + * somewhere else. + * + * Return + * Returns zero on success. On error, -1 is returned and *errno* + * is set appropriately. + * * NOTES * eBPF objects (maps and programs) can be shared between processes. * @@ -900,6 +918,7 @@ enum bpf_cmd { BPF_ITER_CREATE, BPF_LINK_DETACH, BPF_PROG_BIND_MAP, + BPF_TOKEN_CREATE, }; enum bpf_map_type { @@ -1621,6 +1640,25 @@ union bpf_attr { __u32 flags; /* extra flags */ } prog_bind_map; + struct { /* struct used by BPF_TOKEN_CREATE command */ + /* optional, BPF token FD granting operation */ + __u32 token_fd; + __u32 token_flags; + __u32 pin_flags; + /* pin_{path_fd,pathname} specify location in BPF FS instance + * to pin BPF token at; + * path_fd + pathname have the same semantics as openat() syscall + */ + __u32 pin_path_fd; + __u64 pin_pathname; + /* a bit set of allowed bpf() syscall commands, + * e.g., (1ULL << BPF_TOKEN_CREATE) | (1ULL << BPF_PROG_LOAD) + * will allow creating derived BPF tokens and loading new BPF + * programs + */ + __u64 allowed_cmds; + } token_create; + } __attribute__((aligned(8))); /* The description below is an attempt at providing documentation to eBPF diff --git a/kernel/bpf/Makefile b/kernel/bpf/Makefile index 1d3892168d32..bbc17ea3878f 100644 --- a/kernel/bpf/Makefile +++ b/kernel/bpf/Makefile @@ -6,7 +6,7 @@ cflags-nogcse-$(CONFIG_X86)$(CONFIG_CC_IS_GCC) := -fno-gcse endif CFLAGS_core.o += $(call cc-disable-warning, override-init) $(cflags-nogcse-yy) -obj-$(CONFIG_BPF_SYSCALL) += syscall.o verifier.o inode.o helpers.o tnum.o log.o +obj-$(CONFIG_BPF_SYSCALL) += syscall.o verifier.o inode.o helpers.o tnum.o log.o token.o obj-$(CONFIG_BPF_SYSCALL) += bpf_iter.o map_iter.o task_iter.o prog_iter.o link_iter.o obj-$(CONFIG_BPF_SYSCALL) += hashtab.o arraymap.o percpu_freelist.o bpf_lru_list.o lpm_trie.o map_in_map.o bloom_filter.o obj-$(CONFIG_BPF_SYSCALL) += local_storage.o queue_stack_maps.o ringbuf.o diff --git a/kernel/bpf/inode.c b/kernel/bpf/inode.c index 4174f76133df..b9b93b81af9a 100644 --- a/kernel/bpf/inode.c +++ b/kernel/bpf/inode.c @@ -22,13 +22,6 @@ #include #include "preload/bpf_preload.h" -enum bpf_type { - BPF_TYPE_UNSPEC = 0, - BPF_TYPE_PROG, - BPF_TYPE_MAP, - BPF_TYPE_LINK, -}; - static void *bpf_any_get(void *raw, enum bpf_type type) { switch (type) { @@ -41,6 +34,9 @@ static void *bpf_any_get(void *raw, enum bpf_type type) case BPF_TYPE_LINK: bpf_link_inc(raw); break; + case BPF_TYPE_TOKEN: + bpf_token_inc(raw); + break; default: WARN_ON_ONCE(1); break; @@ -61,6 +57,9 @@ static void bpf_any_put(void *raw, enum bpf_type type) case BPF_TYPE_LINK: bpf_link_put(raw); break; + case BPF_TYPE_TOKEN: + bpf_token_put(raw); + break; default: WARN_ON_ONCE(1); break; @@ -89,6 +88,12 @@ static void *bpf_fd_probe_obj(u32 ufd, enum bpf_type *type) return raw; } + raw = bpf_token_get_from_fd(ufd); + if (!IS_ERR(raw)) { + *type = BPF_TYPE_TOKEN; + return raw; + } + return ERR_PTR(-EINVAL); } @@ -97,6 +102,7 @@ static const struct inode_operations bpf_dir_iops; static const struct 
inode_operations bpf_prog_iops = { }; static const struct inode_operations bpf_map_iops = { }; static const struct inode_operations bpf_link_iops = { }; +static const struct inode_operations bpf_token_iops = { }; static struct inode *bpf_get_inode(struct super_block *sb, const struct inode *dir, @@ -136,6 +142,8 @@ static int bpf_inode_type(const struct inode *inode, enum bpf_type *type) *type = BPF_TYPE_MAP; else if (inode->i_op == &bpf_link_iops) *type = BPF_TYPE_LINK; + else if (inode->i_op == &bpf_token_iops) + *type = BPF_TYPE_TOKEN; else return -EACCES; @@ -369,6 +377,11 @@ static int bpf_mklink(struct dentry *dentry, umode_t mode, void *arg) &bpf_iter_fops : &bpffs_obj_fops); } +static int bpf_mktoken(struct dentry *dentry, umode_t mode, void *arg) +{ + return bpf_mkobj_ops(dentry, mode, arg, &bpf_token_iops, &bpffs_obj_fops); +} + static struct dentry * bpf_lookup(struct inode *dir, struct dentry *dentry, unsigned flags) { @@ -435,8 +448,8 @@ static int bpf_iter_link_pin_kernel(struct dentry *parent, return ret; } -static int bpf_obj_do_pin(int path_fd, const char __user *pathname, void *raw, - enum bpf_type type) +int bpf_obj_pin_any(int path_fd, const char __user *pathname, void *raw, + enum bpf_type type) { struct dentry *dentry; struct inode *dir; @@ -469,6 +482,9 @@ static int bpf_obj_do_pin(int path_fd, const char __user *pathname, void *raw, case BPF_TYPE_LINK: ret = vfs_mkobj(dentry, mode, bpf_mklink, raw); break; + case BPF_TYPE_TOKEN: + ret = vfs_mkobj(dentry, mode, bpf_mktoken, raw); + break; default: ret = -EPERM; } @@ -487,7 +503,15 @@ int bpf_obj_pin_user(u32 ufd, int path_fd, const char __user *pathname) if (IS_ERR(raw)) return PTR_ERR(raw); - ret = bpf_obj_do_pin(path_fd, pathname, raw, type); + /* disallow BPF_OBJ_PIN command for BPF token; BPF token can only be + * auto-pinned during creation with BPF_TOKEN_CREATE + */ + if (type == BPF_TYPE_TOKEN) { + bpf_any_put(raw, type); + return -EOPNOTSUPP; + } + + ret = bpf_obj_pin_any(path_fd, pathname, raw, type); if (ret != 0) bpf_any_put(raw, type); @@ -547,6 +571,8 @@ int bpf_obj_get_user(int path_fd, const char __user *pathname, int flags) ret = bpf_map_new_fd(raw, f_flags); else if (type == BPF_TYPE_LINK) ret = (f_flags != O_RDWR) ? -EINVAL : bpf_link_new_fd(raw); + else if (type == BPF_TYPE_TOKEN) + ret = (f_flags != O_RDWR) ? 
-EINVAL : bpf_token_new_fd(raw); else return -ENOENT; diff --git a/kernel/bpf/syscall.c b/kernel/bpf/syscall.c index a75c54b6f8a3..c48e0e829b06 100644 --- a/kernel/bpf/syscall.c +++ b/kernel/bpf/syscall.c @@ -5081,6 +5081,20 @@ static int bpf_prog_bind_map(union bpf_attr *attr) return ret; } +#define BPF_TOKEN_CREATE_LAST_FIELD token_create.allowed_cmds + +static int token_create(union bpf_attr *attr) +{ + if (CHECK_ATTR(BPF_TOKEN_CREATE)) + return -EINVAL; + + /* no flags are supported yet */ + if (attr->token_create.token_flags || attr->token_create.pin_flags) + return -EINVAL; + + return bpf_token_create(attr); +} + static int __sys_bpf(int cmd, bpfptr_t uattr, unsigned int size) { union bpf_attr attr; @@ -5214,6 +5228,9 @@ static int __sys_bpf(int cmd, bpfptr_t uattr, unsigned int size) case BPF_PROG_BIND_MAP: err = bpf_prog_bind_map(&attr); break; + case BPF_TOKEN_CREATE: + err = token_create(&attr); + break; default: err = -EINVAL; break; diff --git a/kernel/bpf/token.c b/kernel/bpf/token.c new file mode 100644 index 000000000000..1ece52439701 --- /dev/null +++ b/kernel/bpf/token.c @@ -0,0 +1,167 @@ +#include +#include +#include +#include +#include +#include +#include +#include +#include + +DEFINE_IDR(token_idr); +DEFINE_SPINLOCK(token_idr_lock); + +void bpf_token_inc(struct bpf_token *token) +{ + atomic64_inc(&token->refcnt); +} + +static void bpf_token_put_deferred(struct work_struct *work) +{ + struct bpf_token *token = container_of(work, struct bpf_token, work); + + kvfree(token); +} + +void bpf_token_put(struct bpf_token *token) +{ + if (!token) + return; + + if (!atomic64_dec_and_test(&token->refcnt)) + return; + + INIT_WORK(&token->work, bpf_token_put_deferred); + schedule_work(&token->work); +} + +static int bpf_token_release(struct inode *inode, struct file *filp) +{ + struct bpf_token *token = filp->private_data; + + bpf_token_put(token); + return 0; +} + +static ssize_t bpf_dummy_read(struct file *filp, char __user *buf, size_t siz, + loff_t *ppos) +{ + /* We need this handler such that alloc_file() enables + * f_mode with FMODE_CAN_READ. + */ + return -EINVAL; +} + +static ssize_t bpf_dummy_write(struct file *filp, const char __user *buf, + size_t siz, loff_t *ppos) +{ + /* We need this handler such that alloc_file() enables + * f_mode with FMODE_CAN_WRITE. 
+ */ + return -EINVAL; +} + +static const struct file_operations bpf_token_fops = { + .release = bpf_token_release, + .read = bpf_dummy_read, + .write = bpf_dummy_write, +}; + +static struct bpf_token *bpf_token_alloc(void) +{ + struct bpf_token *token; + + token = kvzalloc(sizeof(*token), GFP_USER); + if (!token) + return NULL; + + atomic64_set(&token->refcnt, 1); + + return token; +} + +static bool is_bit_subset_of(u32 subset, u32 superset) +{ + return (superset & subset) == subset; +} + +int bpf_token_create(union bpf_attr *attr) +{ + struct bpf_token *new_token, *token = NULL; + int ret; + + if (attr->token_create.token_fd) { + token = bpf_token_get_from_fd(attr->token_create.token_fd); + if (IS_ERR(token)) + return PTR_ERR(token); + /* if provided BPF token doesn't allow creating new tokens, + * then use system-wide capability checks only + */ + if (!bpf_token_allow_cmd(token, BPF_TOKEN_CREATE)) { + bpf_token_put(token); + token = NULL; + } + } + + ret = -EPERM; + if (!bpf_token_capable(token, CAP_SYS_ADMIN)) + goto out; + + /* requested cmds should be a subset of associated token's set */ + if (token && !is_bit_subset_of(attr->token_create.allowed_cmds, token->allowed_cmds)) + goto out; + + new_token = bpf_token_alloc(); + if (!new_token) { + ret = -ENOMEM; + goto out; + } + + new_token->allowed_cmds = attr->token_create.allowed_cmds; + + ret = bpf_obj_pin_any(attr->token_create.pin_path_fd, + u64_to_user_ptr(attr->token_create.pin_pathname), + new_token, BPF_TYPE_TOKEN); + if (ret < 0) + bpf_token_put(new_token); +out: + bpf_token_put(token); + return ret; +} + +#define BPF_TOKEN_INODE_NAME "bpf-token" + +/* Alloc anon_inode and FD for prepared token. + * Returns fd >= 0 on success; negative error, otherwise. + */ +int bpf_token_new_fd(struct bpf_token *token) +{ + return anon_inode_getfd(BPF_TOKEN_INODE_NAME, &bpf_token_fops, token, O_CLOEXEC); +} + +struct bpf_token *bpf_token_get_from_fd(u32 ufd) +{ + struct fd f = fdget(ufd); + struct bpf_token *token; + + if (!f.file) + return ERR_PTR(-EBADF); + if (f.file->f_op != &bpf_token_fops) { + fdput(f); + return ERR_PTR(-EINVAL); + } + + token = f.file->private_data; + bpf_token_inc(token); + fdput(f); + + return token; +} + +bool bpf_token_allow_cmd(const struct bpf_token *token, enum bpf_cmd cmd) +{ + if (!token) + return false; + + return token->allowed_cmds & (1ULL << cmd); +} diff --git a/tools/include/uapi/linux/bpf.h b/tools/include/uapi/linux/bpf.h index a7b5e91dd768..3c201cfe6d5c 100644 --- a/tools/include/uapi/linux/bpf.h +++ b/tools/include/uapi/linux/bpf.h @@ -846,6 +846,24 @@ union bpf_iter_link_info { * Returns zero on success. On error, -1 is returned and *errno* * is set appropriately. * + * BPF_TOKEN_CREATE + * Description + * Create BPF token with embedded information about what + * BPF-related functionality it allows. This BPF token can be + * passed as an extra parameter to various bpf() syscall commands + * to grant BPF subsystem functionality to unprivileged processes. + * BPF token is automatically pinned at specified location in BPF + * FS. It can be retrieved (to get FD passed to bpf() syscall) + * using BPF_OBJ_GET command. It's not allowed to re-pin BPF + * token using BPF_OBJ_PIN command. Such restrictions ensure BPF + * token stays associated with originally intended BPF FS + * instance and cannot be intentionally or unintentionally pinned + * somewhere else. + * + * Return + * Returns zero on success. On error, -1 is returned and *errno* + * is set appropriately. 
+ * * NOTES * eBPF objects (maps and programs) can be shared between processes. * @@ -900,6 +918,7 @@ enum bpf_cmd { BPF_ITER_CREATE, BPF_LINK_DETACH, BPF_PROG_BIND_MAP, + BPF_TOKEN_CREATE, }; enum bpf_map_type { @@ -1621,6 +1640,25 @@ union bpf_attr { __u32 flags; /* extra flags */ } prog_bind_map; + struct { /* struct used by BPF_TOKEN_CREATE command */ + /* optional, BPF token FD granting operation */ + __u32 token_fd; + __u32 token_flags; + __u32 pin_flags; + /* pin_{path_fd,pathname} specify location in BPF FS instance + * to pin BPF token at; + * path_fd + pathname have the same semantics as openat() syscall + */ + __u32 pin_path_fd; + __u64 pin_pathname; + /* a bit set of allowed bpf() syscall commands, + * e.g., (1ULL << BPF_TOKEN_CREATE) | (1ULL << BPF_PROG_LOAD) + * will allow creating derived BPF tokens and loading new BPF + * programs + */ + __u64 allowed_cmds; + } token_create; + } __attribute__((aligned(8))); /* The description below is an attempt at providing documentation to eBPF From patchwork Wed Jun 21 23:37:57 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Andrii Nakryiko X-Patchwork-Id: 13288042 X-Patchwork-Delegate: paul@paul-moore.com Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id BBDEEEB64D8 for ; Wed, 21 Jun 2023 23:38:36 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S229912AbjFUXif convert rfc822-to-8bit (ORCPT ); Wed, 21 Jun 2023 19:38:35 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:33320 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S229810AbjFUXi2 (ORCPT ); Wed, 21 Jun 2023 19:38:28 -0400 Received: from mx0a-00082601.pphosted.com (mx0a-00082601.pphosted.com [67.231.145.42]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 6956F172C for ; Wed, 21 Jun 2023 16:38:27 -0700 (PDT) Received: from pps.filterd (m0044012.ppops.net [127.0.0.1]) by mx0a-00082601.pphosted.com (8.17.1.19/8.17.1.19) with ESMTP id 35LKuc17019368 for ; Wed, 21 Jun 2023 16:38:27 -0700 Received: from mail.thefacebook.com ([163.114.132.120]) by mx0a-00082601.pphosted.com (PPS) with ESMTPS id 3rc04w5qm5-2 (version=TLSv1.2 cipher=ECDHE-RSA-AES128-GCM-SHA256 bits=128 verify=NOT) for ; Wed, 21 Jun 2023 16:38:26 -0700 Received: from twshared44841.48.prn1.facebook.com (2620:10d:c085:208::f) by mail.thefacebook.com (2620:10d:c085:21d::6) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2507.23; Wed, 21 Jun 2023 16:38:26 -0700 Received: by devbig019.vll3.facebook.com (Postfix, from userid 137359) id BA230333E886A; Wed, 21 Jun 2023 16:38:13 -0700 (PDT) From: Andrii Nakryiko To: CC: , , , , , , , Subject: [PATCH v3 bpf-next 02/14] libbpf: add bpf_token_create() API Date: Wed, 21 Jun 2023 16:37:57 -0700 Message-ID: <20230621233809.1941811-3-andrii@kernel.org> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20230621233809.1941811-1-andrii@kernel.org> References: <20230621233809.1941811-1-andrii@kernel.org> MIME-Version: 1.0 X-FB-Internal: Safe X-Proofpoint-GUID: x834sXhRRda95FIW6SIvdnacj6oR8ec- X-Proofpoint-ORIG-GUID: x834sXhRRda95FIW6SIvdnacj6oR8ec- X-Proofpoint-Virus-Version: vendor=baseguard engine=ICAP:2.0.254,Aquarius:18.0.957,Hydra:6.0.591,FMLib:17.11.176.26 
definitions=2023-06-21_13,2023-06-16_01,2023-05-22_02 Precedence: bulk List-ID: Add low-level wrapper API for BPF_TOKEN_CREATE command in bpf() syscall. Signed-off-by: Andrii Nakryiko --- tools/lib/bpf/bpf.c | 21 +++++++++++++++++++++ tools/lib/bpf/bpf.h | 32 ++++++++++++++++++++++++++++++++ tools/lib/bpf/libbpf.map | 1 + 3 files changed, 54 insertions(+) diff --git a/tools/lib/bpf/bpf.c b/tools/lib/bpf/bpf.c index ed86b37d8024..a247a1612f29 100644 --- a/tools/lib/bpf/bpf.c +++ b/tools/lib/bpf/bpf.c @@ -1201,3 +1201,24 @@ int bpf_prog_bind_map(int prog_fd, int map_fd, ret = sys_bpf(BPF_PROG_BIND_MAP, &attr, attr_sz); return libbpf_err_errno(ret); } + +int bpf_token_create(int pin_path_fd, const char *pin_pathname, struct bpf_token_create_opts *opts) +{ + const size_t attr_sz = offsetofend(union bpf_attr, token_create); + union bpf_attr attr; + int ret; + + if (!OPTS_VALID(opts, bpf_token_create_opts)) + return libbpf_err(-EINVAL); + + memset(&attr, 0, attr_sz); + attr.token_create.pin_path_fd = pin_path_fd; + attr.token_create.pin_pathname = ptr_to_u64(pin_pathname); + attr.token_create.token_fd = OPTS_GET(opts, token_fd, 0); + attr.token_create.token_flags = OPTS_GET(opts, token_flags, 0); + attr.token_create.pin_flags = OPTS_GET(opts, pin_flags, 0); + attr.token_create.allowed_cmds = OPTS_GET(opts, allowed_cmds, 0); + + ret = sys_bpf(BPF_TOKEN_CREATE, &attr, attr_sz); + return libbpf_err_errno(ret); +} diff --git a/tools/lib/bpf/bpf.h b/tools/lib/bpf/bpf.h index 9aa0ee473754..ab0355d90a2c 100644 --- a/tools/lib/bpf/bpf.h +++ b/tools/lib/bpf/bpf.h @@ -551,6 +551,38 @@ struct bpf_test_run_opts { LIBBPF_API int bpf_prog_test_run_opts(int prog_fd, struct bpf_test_run_opts *opts); +struct bpf_token_create_opts { + size_t sz; /* size of this struct for forward/backward compatibility */ + __u32 token_fd; + __u32 token_flags; + __u32 pin_flags; + __u64 allowed_cmds; + size_t :0; +}; +#define bpf_token_create_opts__last_field allowed_cmds + +/** + * @brief **bpf_token_create()** creates a new instance of BPF token, pinning + * it at the specified location in BPF FS. + * + * BPF token created and pinned with this API can be subsequently opened using + * bpf_obj_get() API to obtain FD that can be passed to bpf() syscall for + * commands like BPF_PROG_LOAD, BPF_MAP_CREATE, etc. + * + * @param pin_path_fd O_PATH FD (see man 2 openat() for semantics) specifying, + * in combination with *pin_pathname*, target location in BPF FS at which to + * create and pin BPF token. + * @param pin_pathname absolute or relative path specifying, in combination + * with *pin_path_fd*, specifying in combination with *pin_path_fd*, target + * location in BPF FS at which to create and pin BPF token. 
+ * @param opts optional BPF token creation options, can be NULL + * + * @return 0, on success; negative error code, otherwise (errno is also set to + * the error code) + */ +LIBBPF_API int bpf_token_create(int pin_path_fd, const char *pin_pathname, + struct bpf_token_create_opts *opts); + #ifdef __cplusplus } /* extern "C" */ #endif diff --git a/tools/lib/bpf/libbpf.map b/tools/lib/bpf/libbpf.map index 7521a2fb7626..62cbe4775081 100644 --- a/tools/lib/bpf/libbpf.map +++ b/tools/lib/bpf/libbpf.map @@ -395,4 +395,5 @@ LIBBPF_1.2.0 { LIBBPF_1.3.0 { global: bpf_obj_pin_opts; + bpf_token_create; } LIBBPF_1.2.0; From patchwork Wed Jun 21 23:37:58 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Andrii Nakryiko X-Patchwork-Id: 13288040 X-Patchwork-Delegate: paul@paul-moore.com Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 1C66FEB64D7 for ; Wed, 21 Jun 2023 23:38:35 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S230037AbjFUXid convert rfc822-to-8bit (ORCPT ); Wed, 21 Jun 2023 19:38:33 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:33308 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S229912AbjFUXiZ (ORCPT ); Wed, 21 Jun 2023 19:38:25 -0400 Received: from mx0a-00082601.pphosted.com (mx0b-00082601.pphosted.com [67.231.153.30]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 171D71721 for ; Wed, 21 Jun 2023 16:38:25 -0700 (PDT) Received: from pps.filterd (m0089730.ppops.net [127.0.0.1]) by m0089730.ppops.net (8.17.1.19/8.17.1.19) with ESMTP id 35LG4stx018398 for ; Wed, 21 Jun 2023 16:38:24 -0700 Received: from mail.thefacebook.com ([163.114.132.120]) by m0089730.ppops.net (PPS) with ESMTPS id 3rbnmp1mw6-7 (version=TLSv1.2 cipher=ECDHE-RSA-AES128-GCM-SHA256 bits=128 verify=NOT) for ; Wed, 21 Jun 2023 16:38:24 -0700 Received: from twshared24695.38.frc1.facebook.com (2620:10d:c085:108::8) by mail.thefacebook.com (2620:10d:c085:21d::6) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2507.23; Wed, 21 Jun 2023 16:38:19 -0700 Received: by devbig019.vll3.facebook.com (Postfix, from userid 137359) id C894E333E8872; Wed, 21 Jun 2023 16:38:15 -0700 (PDT) From: Andrii Nakryiko To: CC: , , , , , , , Subject: [PATCH v3 bpf-next 03/14] selftests/bpf: add BPF_TOKEN_CREATE test Date: Wed, 21 Jun 2023 16:37:58 -0700 Message-ID: <20230621233809.1941811-4-andrii@kernel.org> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20230621233809.1941811-1-andrii@kernel.org> References: <20230621233809.1941811-1-andrii@kernel.org> MIME-Version: 1.0 X-FB-Internal: Safe X-Proofpoint-GUID: Cx1SYnuv5Vqo0isS4DEK4s5hHry-0_6F X-Proofpoint-ORIG-GUID: Cx1SYnuv5Vqo0isS4DEK4s5hHry-0_6F X-Proofpoint-Virus-Version: vendor=baseguard engine=ICAP:2.0.254,Aquarius:18.0.957,Hydra:6.0.591,FMLib:17.11.176.26 definitions=2023-06-21_13,2023-06-16_01,2023-05-22_02 Precedence: bulk List-ID: Add a subtest validating BPF_TOKEN_CREATE command, pinning/getting BPF token in/from BPF FS, and creating derived BPF tokens using token_fd parameter. 
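The flow that subtest exercises can be summarized with a short sketch using only the API added in the previous patch; this is not the selftest code itself, and the pin path and allowed_cmds value are illustrative.

#include <bpf/bpf.h>

static int create_and_reopen_token(void)
{
	LIBBPF_OPTS(bpf_token_create_opts, opts,
		.allowed_cmds = 1ULL << BPF_TOKEN_CREATE);
	int err, token_fd;

	/* privileged side: create the token and auto-pin it in BPF FS */
	err = bpf_token_create(-EBADF, "/sys/fs/bpf/test_token", &opts);
	if (err)
		return err;

	/* consumer side: re-open the pinned token to get an FD usable as
	 * token_fd in later bpf() commands (re-pinning it is rejected)
	 */
	token_fd = bpf_obj_get("/sys/fs/bpf/test_token");
	return token_fd;
}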
Signed-off-by: Andrii Nakryiko --- .../testing/selftests/bpf/prog_tests/token.c | 96 +++++++++++++++++++ 1 file changed, 96 insertions(+) create mode 100644 tools/testing/selftests/bpf/prog_tests/token.c diff --git a/tools/testing/selftests/bpf/prog_tests/token.c b/tools/testing/selftests/bpf/prog_tests/token.c new file mode 100644 index 000000000000..153c4e26ef6b --- /dev/null +++ b/tools/testing/selftests/bpf/prog_tests/token.c @@ -0,0 +1,96 @@ +// SPDX-License-Identifier: GPL-2.0 +/* Copyright (c) 2023 Meta Platforms, Inc. and affiliates. */ +#include "linux/bpf.h" +#include +#include +#include "cap_helpers.h" + +static int drop_priv_caps(__u64 *old_caps) +{ + return cap_disable_effective((1ULL << CAP_BPF) | + (1ULL << CAP_PERFMON) | + (1ULL << CAP_NET_ADMIN) | + (1ULL << CAP_SYS_ADMIN), old_caps); +} + +static int restore_priv_caps(__u64 old_caps) +{ + return cap_enable_effective(old_caps, NULL); +} + +#define BPFFS_PATH "/sys/fs/bpf" +#define TOKEN_PATH BPFFS_PATH "/test_token" + +static void subtest_token_create(void) +{ + LIBBPF_OPTS(bpf_token_create_opts, opts); + int token_fd = 0, limited_token_fd = 0, err; + __u64 old_caps = 0; + + /* check that any current and future cmd can be specified */ + opts.allowed_cmds = ~0ULL; + err = bpf_token_create(-EBADF, TOKEN_PATH, &opts); + if (!ASSERT_OK(err, "token_create_future_proof")) + return; + unlink(TOKEN_PATH); + + /* create BPF token which allows creating derived BPF tokens */ + opts.allowed_cmds = 1ULL << BPF_TOKEN_CREATE; + err = bpf_token_create(-EBADF, TOKEN_PATH, &opts); + if (!ASSERT_OK(err, "token_create")) + return; + + token_fd = bpf_obj_get(TOKEN_PATH); + if (!ASSERT_GT(token_fd, 0, "token_get")) + goto cleanup; + unlink(TOKEN_PATH); + + /* validate pinning and getting works as expected */ + err = bpf_obj_pin(token_fd, TOKEN_PATH); + if (!ASSERT_ERR(err, "token_pin_unexpected_success")) + goto cleanup; + + + /* drop privileges to test token_fd passing */ + if (!ASSERT_OK(drop_priv_caps(&old_caps), "drop_caps")) + goto cleanup; + + /* unprivileged BPF_TOKEN_CREATE should fail */ + err = bpf_token_create(-EBADF, TOKEN_PATH, NULL); + if (!ASSERT_ERR(err, "token_create_unpriv_fail")) + goto cleanup; + + /* unprivileged BPF_TOKEN_CREATE using granted BPF token succeeds */ + opts.allowed_cmds = 0; /* ask for BPF token which doesn't allow new tokens */ + opts.token_fd = token_fd; + err = bpf_token_create(-EBADF, TOKEN_PATH, &opts); + if (!ASSERT_OK(limited_token_fd, "token_create_limited")) + goto cleanup; + + limited_token_fd = bpf_obj_get(TOKEN_PATH); + if (!ASSERT_GT(limited_token_fd, 0, "token_get_limited")) + goto cleanup; + unlink(TOKEN_PATH); + + /* creating yet another token using "limited" BPF token should fail */ + opts.allowed_cmds = 0; + opts.token_fd = limited_token_fd; + err = bpf_token_create(-EBADF, TOKEN_PATH, &opts); + if (!ASSERT_ERR(err, "token_create_from_lim_fail")) + goto cleanup; + +cleanup: + if (token_fd) + close(token_fd); + if (limited_token_fd) + close(limited_token_fd); + unlink(TOKEN_PATH); + if (old_caps) + ASSERT_OK(restore_priv_caps(old_caps), "restore_caps"); +} + +void test_token(void) +{ + if (test__start_subtest("token_create")) + subtest_token_create(); +} From patchwork Wed Jun 21 23:37:59 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Andrii Nakryiko X-Patchwork-Id: 13288039 X-Patchwork-Delegate: paul@paul-moore.com Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on 
aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 2A052EB64D8 for ; Wed, 21 Jun 2023 23:38:26 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S229699AbjFUXiZ convert rfc822-to-8bit (ORCPT ); Wed, 21 Jun 2023 19:38:25 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:33282 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S229912AbjFUXiY (ORCPT ); Wed, 21 Jun 2023 19:38:24 -0400 Received: from mx0a-00082601.pphosted.com (mx0a-00082601.pphosted.com [67.231.145.42]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id B9AFE1995 for ; Wed, 21 Jun 2023 16:38:22 -0700 (PDT) Received: from pps.filterd (m0148461.ppops.net [127.0.0.1]) by mx0a-00082601.pphosted.com (8.17.1.19/8.17.1.19) with ESMTP id 35LKFbCU032359 for ; Wed, 21 Jun 2023 16:38:22 -0700 Received: from mail.thefacebook.com ([163.114.132.120]) by mx0a-00082601.pphosted.com (PPS) with ESMTPS id 3rc832s9m4-4 (version=TLSv1.2 cipher=ECDHE-RSA-AES128-GCM-SHA256 bits=128 verify=NOT) for ; Wed, 21 Jun 2023 16:38:22 -0700 Received: from twshared24695.38.frc1.facebook.com (2620:10d:c085:108::8) by mail.thefacebook.com (2620:10d:c085:11d::5) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2507.23; Wed, 21 Jun 2023 16:38:19 -0700 Received: by devbig019.vll3.facebook.com (Postfix, from userid 137359) id E32FC333E888D; Wed, 21 Jun 2023 16:38:17 -0700 (PDT) From: Andrii Nakryiko To: CC: , , , , , , , Subject: [PATCH v3 bpf-next 04/14] bpf: add BPF token support to BPF_MAP_CREATE command Date: Wed, 21 Jun 2023 16:37:59 -0700 Message-ID: <20230621233809.1941811-5-andrii@kernel.org> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20230621233809.1941811-1-andrii@kernel.org> References: <20230621233809.1941811-1-andrii@kernel.org> MIME-Version: 1.0 X-FB-Internal: Safe X-Proofpoint-ORIG-GUID: mypjpcShOevMRy1hXqXrqXA6goL8oZnB X-Proofpoint-GUID: mypjpcShOevMRy1hXqXrqXA6goL8oZnB X-Proofpoint-Virus-Version: vendor=baseguard engine=ICAP:2.0.254,Aquarius:18.0.957,Hydra:6.0.591,FMLib:17.11.176.26 definitions=2023-06-21_13,2023-06-16_01,2023-05-22_02 Precedence: bulk List-ID: Allow providing token_fd for BPF_MAP_CREATE command to allow controlled BPF map creation from unprivileged process through delegated BPF token. Further, add a filter of allowed BPF map types to BPF token, specified at BPF token creation time. This, in combination with allowed_cmds allows to create a narrowly-focused BPF token (controlled by privileged agent) with a restrictive set of BPF maps that application can attempt to create. 
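As a standalone restatement of the filter semantics this patch adds (mirroring the bpf_token_allow_map_type() helper in the diff below, and assuming the __MAX_BPF_MAP_TYPE marker introduced here), the check and an example mask could look like this; the mask value itself is hypothetical.

#include <linux/bpf.h>

/* e.g. a token restricted to array and ringbuf maps */
static const __u64 example_allowed_map_types =
	(1ULL << BPF_MAP_TYPE_ARRAY) | (1ULL << BPF_MAP_TYPE_RINGBUF);

static int map_type_allowed(__u64 allowed_map_types, enum bpf_map_type type)
{
	/* unknown/future types are rejected; known ones need their bit set */
	if (type >= __MAX_BPF_MAP_TYPE)
		return 0;
	return (allowed_map_types & (1ULL << type)) != 0;
}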
Signed-off-by: Andrii Nakryiko --- include/linux/bpf.h | 3 + include/uapi/linux/bpf.h | 6 ++ kernel/bpf/syscall.c | 56 +++++++++++++++---- kernel/bpf/token.c | 13 +++++ tools/include/uapi/linux/bpf.h | 6 ++ .../selftests/bpf/prog_tests/libbpf_probes.c | 2 + .../selftests/bpf/prog_tests/libbpf_str.c | 3 + 7 files changed, 77 insertions(+), 12 deletions(-) diff --git a/include/linux/bpf.h b/include/linux/bpf.h index c4f1684aa138..856a147c8ce8 100644 --- a/include/linux/bpf.h +++ b/include/linux/bpf.h @@ -251,6 +251,7 @@ struct bpf_map { u32 btf_value_type_id; u32 btf_vmlinux_value_type_id; struct btf *btf; + struct bpf_token *token; #ifdef CONFIG_MEMCG_KMEM struct obj_cgroup *objcg; #endif @@ -1538,6 +1539,7 @@ struct bpf_token { struct work_struct work; atomic64_t refcnt; u64 allowed_cmds; + u64 allowed_map_types; }; struct bpf_struct_ops_value; @@ -2096,6 +2098,7 @@ int bpf_token_new_fd(struct bpf_token *token); struct bpf_token *bpf_token_get_from_fd(u32 ufd); bool bpf_token_allow_cmd(const struct bpf_token *token, enum bpf_cmd cmd); +bool bpf_token_allow_map_type(const struct bpf_token *token, enum bpf_map_type type); enum bpf_type { BPF_TYPE_UNSPEC = 0, diff --git a/include/uapi/linux/bpf.h b/include/uapi/linux/bpf.h index 3c201cfe6d5c..81c88edceac5 100644 --- a/include/uapi/linux/bpf.h +++ b/include/uapi/linux/bpf.h @@ -962,6 +962,7 @@ enum bpf_map_type { BPF_MAP_TYPE_BLOOM_FILTER, BPF_MAP_TYPE_USER_RINGBUF, BPF_MAP_TYPE_CGRP_STORAGE, + __MAX_BPF_MAP_TYPE }; /* Note that tracing related programs such as @@ -1367,6 +1368,7 @@ union bpf_attr { * to using 5 hash functions). */ __u64 map_extra; + __u32 map_token_fd; }; struct { /* anonymous struct used by BPF_MAP_*_ELEM commands */ @@ -1657,6 +1659,10 @@ union bpf_attr { * programs */ __u64 allowed_cmds; + /* similarly to allowed_cmds, a bit set of BPF map types that + * are allowed to be created by requested BPF token; + */ + __u64 allowed_map_types; } token_create; } __attribute__((aligned(8))); diff --git a/kernel/bpf/syscall.c b/kernel/bpf/syscall.c index c48e0e829b06..0046bd579f13 100644 --- a/kernel/bpf/syscall.c +++ b/kernel/bpf/syscall.c @@ -691,6 +691,7 @@ static void bpf_map_free_deferred(struct work_struct *work) { struct bpf_map *map = container_of(work, struct bpf_map, work); struct btf_record *rec = map->record; + struct bpf_token *token = map->token; security_bpf_map_free(map); bpf_map_release_memcg(map); @@ -706,6 +707,7 @@ static void bpf_map_free_deferred(struct work_struct *work) * template bpf_map struct used during verification. 
*/ btf_record_free(rec); + bpf_token_put(token); } static void bpf_map_put_uref(struct bpf_map *map) @@ -1010,7 +1012,7 @@ static int map_check_btf(struct bpf_map *map, const struct btf *btf, if (!IS_ERR_OR_NULL(map->record)) { int i; - if (!bpf_capable()) { + if (!bpf_token_capable(map->token, CAP_BPF)) { ret = -EPERM; goto free_map_tab; } @@ -1092,11 +1094,12 @@ static int map_check_btf(struct bpf_map *map, const struct btf *btf, return ret; } -#define BPF_MAP_CREATE_LAST_FIELD map_extra +#define BPF_MAP_CREATE_LAST_FIELD map_token_fd /* called via syscall */ static int map_create(union bpf_attr *attr) { const struct bpf_map_ops *ops; + struct bpf_token *token = NULL; int numa_node = bpf_map_attr_numa_node(attr); u32 map_type = attr->map_type; struct bpf_map *map; @@ -1147,14 +1150,32 @@ static int map_create(union bpf_attr *attr) if (!ops->map_mem_usage) return -EINVAL; + if (attr->map_token_fd) { + token = bpf_token_get_from_fd(attr->map_token_fd); + if (IS_ERR(token)) + return PTR_ERR(token); + + /* if current token doesn't grant map creation permissions, + * then we can't use this token, so ignore it and rely on + * system-wide capabilities checks + */ + if (!bpf_token_allow_cmd(token, BPF_MAP_CREATE) || + !bpf_token_allow_map_type(token, attr->map_type)) { + bpf_token_put(token); + token = NULL; + } + } + + err = -EPERM; + /* Intent here is for unprivileged_bpf_disabled to block BPF map * creation for unprivileged users; other actions depend * on fd availability and access to bpffs, so are dependent on * object creation success. Even with unprivileged BPF disabled, * capability checks are still carried out. */ - if (sysctl_unprivileged_bpf_disabled && !bpf_capable()) - return -EPERM; + if (sysctl_unprivileged_bpf_disabled && !bpf_token_capable(token, CAP_BPF)) + goto put_token; /* check privileged map type permissions */ switch (map_type) { @@ -1187,28 +1208,36 @@ static int map_create(union bpf_attr *attr) case BPF_MAP_TYPE_LRU_PERCPU_HASH: case BPF_MAP_TYPE_STRUCT_OPS: case BPF_MAP_TYPE_CPUMAP: - if (!bpf_capable()) - return -EPERM; + if (!bpf_token_capable(token, CAP_BPF)) + goto put_token; break; case BPF_MAP_TYPE_SOCKMAP: case BPF_MAP_TYPE_SOCKHASH: case BPF_MAP_TYPE_DEVMAP: case BPF_MAP_TYPE_DEVMAP_HASH: case BPF_MAP_TYPE_XSKMAP: - if (!capable(CAP_NET_ADMIN)) - return -EPERM; + if (!bpf_token_capable(token, CAP_NET_ADMIN)) + goto put_token; break; default: WARN(1, "unsupported map type %d", map_type); - return -EPERM; + goto put_token; } map = ops->map_alloc(attr); - if (IS_ERR(map)) - return PTR_ERR(map); + if (IS_ERR(map)) { + err = PTR_ERR(map); + goto put_token; + } map->ops = ops; map->map_type = map_type; + if (token) { + /* move token reference into map->token, reuse our refcnt */ + map->token = token; + token = NULL; + } + err = bpf_obj_name_cpy(map->name, attr->map_name, sizeof(attr->map_name)); if (err < 0) @@ -1281,8 +1310,11 @@ static int map_create(union bpf_attr *attr) free_map_sec: security_bpf_map_free(map); free_map: + bpf_token_put(map->token); btf_put(map->btf); map->ops->map_free(map); +put_token: + bpf_token_put(token); return err; } @@ -5081,7 +5113,7 @@ static int bpf_prog_bind_map(union bpf_attr *attr) return ret; } -#define BPF_TOKEN_CREATE_LAST_FIELD token_create.allowed_cmds +#define BPF_TOKEN_CREATE_LAST_FIELD token_create.allowed_map_types static int token_create(union bpf_attr *attr) { diff --git a/kernel/bpf/token.c b/kernel/bpf/token.c index 1ece52439701..91d8d987faea 100644 --- a/kernel/bpf/token.c +++ b/kernel/bpf/token.c @@ -110,6 +110,10 @@ 
int bpf_token_create(union bpf_attr *attr) /* requested cmds should be a subset of associated token's set */ if (token && !is_bit_subset_of(attr->token_create.allowed_cmds, token->allowed_cmds)) goto out; + /* requested map types should be a subset of associated token's set */ + if (token && !is_bit_subset_of(attr->token_create.allowed_map_types, + token->allowed_map_types)) + goto out; new_token = bpf_token_alloc(); if (!new_token) { @@ -118,6 +122,7 @@ int bpf_token_create(union bpf_attr *attr) } new_token->allowed_cmds = attr->token_create.allowed_cmds; + new_token->allowed_map_types = attr->token_create.allowed_map_types; ret = bpf_obj_pin_any(attr->token_create.pin_path_fd, u64_to_user_ptr(attr->token_create.pin_pathname), @@ -165,3 +170,11 @@ bool bpf_token_allow_cmd(const struct bpf_token *token, enum bpf_cmd cmd) return token->allowed_cmds & (1ULL << cmd); } + +bool bpf_token_allow_map_type(const struct bpf_token *token, enum bpf_map_type type) +{ + if (!token || type >= __MAX_BPF_MAP_TYPE) + return false; + + return token->allowed_map_types & (1ULL << type); +} diff --git a/tools/include/uapi/linux/bpf.h b/tools/include/uapi/linux/bpf.h index 3c201cfe6d5c..81c88edceac5 100644 --- a/tools/include/uapi/linux/bpf.h +++ b/tools/include/uapi/linux/bpf.h @@ -962,6 +962,7 @@ enum bpf_map_type { BPF_MAP_TYPE_BLOOM_FILTER, BPF_MAP_TYPE_USER_RINGBUF, BPF_MAP_TYPE_CGRP_STORAGE, + __MAX_BPF_MAP_TYPE }; /* Note that tracing related programs such as @@ -1367,6 +1368,7 @@ union bpf_attr { * to using 5 hash functions). */ __u64 map_extra; + __u32 map_token_fd; }; struct { /* anonymous struct used by BPF_MAP_*_ELEM commands */ @@ -1657,6 +1659,10 @@ union bpf_attr { * programs */ __u64 allowed_cmds; + /* similarly to allowed_cmds, a bit set of BPF map types that + * are allowed to be created by requested BPF token; + */ + __u64 allowed_map_types; } token_create; } __attribute__((aligned(8))); diff --git a/tools/testing/selftests/bpf/prog_tests/libbpf_probes.c b/tools/testing/selftests/bpf/prog_tests/libbpf_probes.c index 9f766ddd946a..573249a2814d 100644 --- a/tools/testing/selftests/bpf/prog_tests/libbpf_probes.c +++ b/tools/testing/selftests/bpf/prog_tests/libbpf_probes.c @@ -68,6 +68,8 @@ void test_libbpf_probe_map_types(void) if (map_type == BPF_MAP_TYPE_UNSPEC) continue; + if (strcmp(map_type_name, "__MAX_BPF_MAP_TYPE") == 0) + continue; if (!test__start_subtest(map_type_name)) continue; diff --git a/tools/testing/selftests/bpf/prog_tests/libbpf_str.c b/tools/testing/selftests/bpf/prog_tests/libbpf_str.c index efb8bd43653c..e677c0435cec 100644 --- a/tools/testing/selftests/bpf/prog_tests/libbpf_str.c +++ b/tools/testing/selftests/bpf/prog_tests/libbpf_str.c @@ -132,6 +132,9 @@ static void test_libbpf_bpf_map_type_str(void) const char *map_type_str; char buf[256]; + if (map_type == __MAX_BPF_MAP_TYPE) + continue; + map_type_name = btf__str_by_offset(btf, e->name_off); map_type_str = libbpf_bpf_map_type_str(map_type); ASSERT_OK_PTR(map_type_str, map_type_name); From patchwork Wed Jun 21 23:38:00 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Andrii Nakryiko X-Patchwork-Id: 13288051 X-Patchwork-Delegate: paul@paul-moore.com Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 38E8FEB64D7 for ; Wed, 21 Jun 2023 23:41:13 +0000 (UTC) Received: 
(majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S229655AbjFUXlM convert rfc822-to-8bit (ORCPT ); Wed, 21 Jun 2023 19:41:12 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:34250 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S229593AbjFUXlL (ORCPT ); Wed, 21 Jun 2023 19:41:11 -0400 Received: from mx0a-00082601.pphosted.com (mx0b-00082601.pphosted.com [67.231.153.30]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id B2D83198 for ; Wed, 21 Jun 2023 16:41:10 -0700 (PDT) Received: from pps.filterd (m0089730.ppops.net [127.0.0.1]) by m0089730.ppops.net (8.17.1.19/8.17.1.19) with ESMTP id 35LG4xWJ019410 for ; Wed, 21 Jun 2023 16:41:10 -0700 Received: from mail.thefacebook.com ([163.114.132.120]) by m0089730.ppops.net (PPS) with ESMTPS id 3rbnmp1nhx-3 (version=TLSv1.2 cipher=ECDHE-RSA-AES128-GCM-SHA256 bits=128 verify=NOT) for ; Wed, 21 Jun 2023 16:41:09 -0700 Received: from twshared24695.38.frc1.facebook.com (2620:10d:c085:108::4) by mail.thefacebook.com (2620:10d:c085:21d::5) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2507.23; Wed, 21 Jun 2023 16:41:07 -0700 Received: by devbig019.vll3.facebook.com (Postfix, from userid 137359) id 241AB333E8896; Wed, 21 Jun 2023 16:38:19 -0700 (PDT) From: Andrii Nakryiko To: CC: , , , , , , , Subject: [PATCH v3 bpf-next 05/14] libbpf: add BPF token support to bpf_map_create() API Date: Wed, 21 Jun 2023 16:38:00 -0700 Message-ID: <20230621233809.1941811-6-andrii@kernel.org> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20230621233809.1941811-1-andrii@kernel.org> References: <20230621233809.1941811-1-andrii@kernel.org> MIME-Version: 1.0 X-FB-Internal: Safe X-Proofpoint-GUID: _kbSatPh66x8JbKIHM3fJijJKM47LMnj X-Proofpoint-ORIG-GUID: _kbSatPh66x8JbKIHM3fJijJKM47LMnj X-Proofpoint-Virus-Version: vendor=baseguard engine=ICAP:2.0.254,Aquarius:18.0.957,Hydra:6.0.591,FMLib:17.11.176.26 definitions=2023-06-21_13,2023-06-16_01,2023-05-22_02 Precedence: bulk List-ID: Add ability to provide token_fd for BPF_MAP_CREATE command through bpf_map_create() API. Also wire through token_create.allowed_map_types param for BPF_TOKEN_CREATE command. 
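A minimal usage sketch of the new option, mirroring the flow the selftest in the next patch exercises; the pin path and map parameters are illustrative.

#include <unistd.h>
#include <bpf/bpf.h>

static int create_map_with_token(void)
{
	LIBBPF_OPTS(bpf_map_create_opts, opts);
	int token_fd, map_fd;

	/* token pinned earlier by a privileged agent */
	token_fd = bpf_obj_get("/sys/fs/bpf/test_token");
	if (token_fd < 0)
		return token_fd;

	opts.token_fd = token_fd;
	/* privileged map type; succeeds without CAP_BPF as long as the token
	 * allows BPF_MAP_CREATE and BPF_MAP_TYPE_STACK
	 */
	map_fd = bpf_map_create(BPF_MAP_TYPE_STACK, "token_stack", 0, 8, 1, &opts);
	close(token_fd);
	return map_fd;
}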
Signed-off-by: Andrii Nakryiko --- tools/lib/bpf/bpf.c | 5 ++++- tools/lib/bpf/bpf.h | 7 +++++-- 2 files changed, 9 insertions(+), 3 deletions(-) diff --git a/tools/lib/bpf/bpf.c b/tools/lib/bpf/bpf.c index a247a1612f29..882297b1e136 100644 --- a/tools/lib/bpf/bpf.c +++ b/tools/lib/bpf/bpf.c @@ -169,7 +169,7 @@ int bpf_map_create(enum bpf_map_type map_type, __u32 max_entries, const struct bpf_map_create_opts *opts) { - const size_t attr_sz = offsetofend(union bpf_attr, map_extra); + const size_t attr_sz = offsetofend(union bpf_attr, map_token_fd); union bpf_attr attr; int fd; @@ -198,6 +198,8 @@ int bpf_map_create(enum bpf_map_type map_type, attr.numa_node = OPTS_GET(opts, numa_node, 0); attr.map_ifindex = OPTS_GET(opts, map_ifindex, 0); + attr.map_token_fd = OPTS_GET(opts, token_fd, 0); + fd = sys_bpf_fd(BPF_MAP_CREATE, &attr, attr_sz); return libbpf_err_errno(fd); } @@ -1218,6 +1220,7 @@ int bpf_token_create(int pin_path_fd, const char *pin_pathname, struct bpf_token attr.token_create.token_flags = OPTS_GET(opts, token_flags, 0); attr.token_create.pin_flags = OPTS_GET(opts, pin_flags, 0); attr.token_create.allowed_cmds = OPTS_GET(opts, allowed_cmds, 0); + attr.token_create.allowed_map_types = OPTS_GET(opts, allowed_map_types, 0); ret = sys_bpf(BPF_TOKEN_CREATE, &attr, attr_sz); return libbpf_err_errno(ret); diff --git a/tools/lib/bpf/bpf.h b/tools/lib/bpf/bpf.h index ab0355d90a2c..cd3fb5ce6fe2 100644 --- a/tools/lib/bpf/bpf.h +++ b/tools/lib/bpf/bpf.h @@ -51,8 +51,10 @@ struct bpf_map_create_opts { __u32 numa_node; __u32 map_ifindex; + + __u32 token_fd; }; -#define bpf_map_create_opts__last_field map_ifindex +#define bpf_map_create_opts__last_field token_fd LIBBPF_API int bpf_map_create(enum bpf_map_type map_type, const char *map_name, @@ -557,9 +559,10 @@ struct bpf_token_create_opts { __u32 token_flags; __u32 pin_flags; __u64 allowed_cmds; + __u64 allowed_map_types; size_t :0; }; -#define bpf_token_create_opts__last_field allowed_cmds +#define bpf_token_create_opts__last_field allowed_map_types /** * @brief **bpf_token_create()** creates a new instance of BPF token, pinning From patchwork Wed Jun 21 23:38:01 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Andrii Nakryiko X-Patchwork-Id: 13288054 X-Patchwork-Delegate: paul@paul-moore.com Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 12F44EB64D7 for ; Wed, 21 Jun 2023 23:41:17 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S229656AbjFUXlQ convert rfc822-to-8bit (ORCPT ); Wed, 21 Jun 2023 19:41:16 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:34264 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S229593AbjFUXlN (ORCPT ); Wed, 21 Jun 2023 19:41:13 -0400 Received: from mx0b-00082601.pphosted.com (mx0b-00082601.pphosted.com [67.231.153.30]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id ED871198 for ; Wed, 21 Jun 2023 16:41:12 -0700 (PDT) Received: from pps.filterd (m0148460.ppops.net [127.0.0.1]) by mx0a-00082601.pphosted.com (8.17.1.19/8.17.1.19) with ESMTP id 35LJeN2v010647 for ; Wed, 21 Jun 2023 16:41:12 -0700 Received: from mail.thefacebook.com ([163.114.132.120]) by mx0a-00082601.pphosted.com (PPS) with ESMTPS id 3rbw1y6qk4-1 (version=TLSv1.2 cipher=ECDHE-RSA-AES128-GCM-SHA256 
bits=128 verify=NOT) for ; Wed, 21 Jun 2023 16:41:12 -0700 Received: from twshared58712.02.prn6.facebook.com (2620:10d:c085:108::8) by mail.thefacebook.com (2620:10d:c085:21d::4) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2507.23; Wed, 21 Jun 2023 16:41:10 -0700 Received: by devbig019.vll3.facebook.com (Postfix, from userid 137359) id 4AE7B333E88A8; Wed, 21 Jun 2023 16:38:22 -0700 (PDT) From: Andrii Nakryiko To: CC: , , , , , , , Subject: [PATCH v3 bpf-next 06/14] selftests/bpf: add BPF token-enabled test for BPF_MAP_CREATE command Date: Wed, 21 Jun 2023 16:38:01 -0700 Message-ID: <20230621233809.1941811-7-andrii@kernel.org> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20230621233809.1941811-1-andrii@kernel.org> References: <20230621233809.1941811-1-andrii@kernel.org> MIME-Version: 1.0 X-FB-Internal: Safe X-Proofpoint-GUID: hm5-24SbGZ5gA3S_jyezq-v38mhcokla X-Proofpoint-ORIG-GUID: hm5-24SbGZ5gA3S_jyezq-v38mhcokla X-Proofpoint-Virus-Version: vendor=baseguard engine=ICAP:2.0.254,Aquarius:18.0.957,Hydra:6.0.591,FMLib:17.11.176.26 definitions=2023-06-21_13,2023-06-16_01,2023-05-22_02 Precedence: bulk List-ID: Add test for creating BPF token with support for BPF_MAP_CREATE delegation. And validate that its allowed_map_types filter works as expected and allows to create privileged BPF maps through delegated token, as long as they are allowed by privileged creator of a token. Signed-off-by: Andrii Nakryiko --- .../testing/selftests/bpf/prog_tests/token.c | 55 +++++++++++++++++++ 1 file changed, 55 insertions(+) diff --git a/tools/testing/selftests/bpf/prog_tests/token.c b/tools/testing/selftests/bpf/prog_tests/token.c index 153c4e26ef6b..0f832f9178a2 100644 --- a/tools/testing/selftests/bpf/prog_tests/token.c +++ b/tools/testing/selftests/bpf/prog_tests/token.c @@ -89,8 +89,63 @@ static void subtest_token_create(void) ASSERT_OK(restore_priv_caps(old_caps), "restore_caps"); } +static void subtest_map_token(void) +{ + LIBBPF_OPTS(bpf_token_create_opts, token_opts); + LIBBPF_OPTS(bpf_map_create_opts, map_opts); + int token_fd = 0, map_fd = 0, err; + __u64 old_caps = 0; + + /* check that it's ok to allow any map type */ + token_opts.allowed_map_types = ~0ULL; /* any current and future map types is allowed */ + err = bpf_token_create(-EBADF, TOKEN_PATH, &token_opts); + if (!ASSERT_OK(err, "token_create_future_proof")) + return; + unlink(TOKEN_PATH); + + /* create BPF token allowing STACK, but not QUEUE map */ + token_opts.allowed_cmds = 1ULL << BPF_MAP_CREATE; + token_opts.allowed_map_types = 1ULL << BPF_MAP_TYPE_STACK; /* but not QUEUE */ + err = bpf_token_create(-EBADF, TOKEN_PATH, &token_opts); + if (!ASSERT_OK(err, "token_create")) + return; + + /* drop privileges to test token_fd passing */ + if (!ASSERT_OK(drop_priv_caps(&old_caps), "drop_caps")) + goto cleanup; + + token_fd = bpf_obj_get(TOKEN_PATH); + if (!ASSERT_GT(token_fd, 0, "token_get")) + goto cleanup; + + /* BPF_MAP_TYPE_STACK is privileged, but with given token_fd should succeed */ + map_opts.token_fd = token_fd; + map_fd = bpf_map_create(BPF_MAP_TYPE_STACK, "token_stack", 0, 8, 1, &map_opts); + if (!ASSERT_GT(map_fd, 0, "stack_map_fd")) + goto cleanup; + close(map_fd); + map_fd = 0; + + /* BPF_MAP_TYPE_QUEUE is privileged, and token doesn't allow it, so should fail */ + map_opts.token_fd = token_fd; + map_fd = bpf_map_create(BPF_MAP_TYPE_QUEUE, "token_queue", 0, 8, 1, &map_opts); + if (!ASSERT_EQ(map_fd, -EPERM, "queue_map_fd")) + goto cleanup; + +cleanup: + if (map_fd > 0) + 
close(map_fd); + if (token_fd) + close(token_fd); + unlink(TOKEN_PATH); + if (old_caps) + ASSERT_OK(restore_priv_caps(old_caps), "restore_caps"); +} + void test_token(void) { if (test__start_subtest("token_create")) subtest_token_create(); + if (test__start_subtest("map_token")) + subtest_map_token(); } From patchwork Wed Jun 21 23:38:02 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Andrii Nakryiko X-Patchwork-Id: 13288046 X-Patchwork-Delegate: paul@paul-moore.com Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 136EEEB64D7 for ; Wed, 21 Jun 2023 23:39:09 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S229789AbjFUXjI convert rfc822-to-8bit (ORCPT ); Wed, 21 Jun 2023 19:39:08 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:33596 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S229463AbjFUXjH (ORCPT ); Wed, 21 Jun 2023 19:39:07 -0400 Received: from mx0b-00082601.pphosted.com (mx0b-00082601.pphosted.com [67.231.153.30]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 6BFFC172C for ; Wed, 21 Jun 2023 16:39:06 -0700 (PDT) Received: from pps.filterd (m0109331.ppops.net [127.0.0.1]) by mx0a-00082601.pphosted.com (8.17.1.19/8.17.1.19) with ESMTP id 35LHcvsS006275 for ; Wed, 21 Jun 2023 16:39:05 -0700 Received: from maileast.thefacebook.com ([163.114.130.16]) by mx0a-00082601.pphosted.com (PPS) with ESMTPS id 3rbnr9hmwq-1 (version=TLSv1.2 cipher=ECDHE-RSA-AES128-GCM-SHA256 bits=128 verify=NOT) for ; Wed, 21 Jun 2023 16:39:05 -0700 Received: from twshared52232.38.frc1.facebook.com (2620:10d:c0a8:1c::1b) by mail.thefacebook.com (2620:10d:c0a8:83::5) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2507.23; Wed, 21 Jun 2023 16:38:35 -0700 Received: by devbig019.vll3.facebook.com (Postfix, from userid 137359) id 55353333E88B8; Wed, 21 Jun 2023 16:38:24 -0700 (PDT) From: Andrii Nakryiko To: CC: , , , , , , , Subject: [PATCH v3 bpf-next 07/14] bpf: add BPF token support to BPF_BTF_LOAD command Date: Wed, 21 Jun 2023 16:38:02 -0700 Message-ID: <20230621233809.1941811-8-andrii@kernel.org> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20230621233809.1941811-1-andrii@kernel.org> References: <20230621233809.1941811-1-andrii@kernel.org> MIME-Version: 1.0 X-FB-Internal: Safe X-Proofpoint-GUID: cALdhj-GRPRPZSKbDMsSdCRKbQgwH_J0 X-Proofpoint-ORIG-GUID: cALdhj-GRPRPZSKbDMsSdCRKbQgwH_J0 X-Proofpoint-Virus-Version: vendor=baseguard engine=ICAP:2.0.254,Aquarius:18.0.957,Hydra:6.0.591,FMLib:17.11.176.26 definitions=2023-06-21_13,2023-06-16_01,2023-05-22_02 Precedence: bulk List-ID: Accept BPF token FD in BPF_BTF_LOAD command to allow BTF data loading through delegated BPF token. BTF loading is a pretty straightforward operation, so as long as BPF token is created with allow_cmds granting BPF_BTF_LOAD command, kernel proceeds to parsing BTF data and creating BTF object. 
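A sketch of the raw syscall usage with the new field; btf_data and btf_size are assumed to hold a valid BTF blob prepared elsewhere (e.g. by libbpf's BTF writer), and btf_token_fd exists only with the uapi header from this patch.

#include <linux/bpf.h>
#include <string.h>
#include <sys/syscall.h>
#include <unistd.h>

static int btf_load_with_token(const void *btf_data, __u32 btf_size, int token_fd)
{
	union bpf_attr attr;

	memset(&attr, 0, sizeof(attr));
	attr.btf = (__u64)(unsigned long)btf_data;
	attr.btf_size = btf_size;
	attr.btf_token_fd = token_fd; /* new in this patch */

	return syscall(__NR_bpf, BPF_BTF_LOAD, &attr, sizeof(attr));
}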
Signed-off-by: Andrii Nakryiko --- include/uapi/linux/bpf.h | 1 + kernel/bpf/syscall.c | 20 ++++++++++++++++++-- tools/include/uapi/linux/bpf.h | 1 + 3 files changed, 20 insertions(+), 2 deletions(-) diff --git a/include/uapi/linux/bpf.h b/include/uapi/linux/bpf.h index 81c88edceac5..f926f553e6eb 100644 --- a/include/uapi/linux/bpf.h +++ b/include/uapi/linux/bpf.h @@ -1535,6 +1535,7 @@ union bpf_attr { * truncated), or smaller (if log buffer wasn't filled completely). */ __u32 btf_log_true_size; + __u32 btf_token_fd; }; struct { diff --git a/kernel/bpf/syscall.c b/kernel/bpf/syscall.c index 0046bd579f13..36be42159c2c 100644 --- a/kernel/bpf/syscall.c +++ b/kernel/bpf/syscall.c @@ -4475,15 +4475,31 @@ static int bpf_obj_get_info_by_fd(const union bpf_attr *attr, return err; } -#define BPF_BTF_LOAD_LAST_FIELD btf_log_true_size +#define BPF_BTF_LOAD_LAST_FIELD btf_token_fd static int bpf_btf_load(const union bpf_attr *attr, bpfptr_t uattr, __u32 uattr_size) { + struct bpf_token *token = NULL; + if (CHECK_ATTR(BPF_BTF_LOAD)) return -EINVAL; - if (!bpf_capable()) + if (attr->btf_token_fd) { + token = bpf_token_get_from_fd(attr->btf_token_fd); + if (IS_ERR(token)) + return PTR_ERR(token); + if (!bpf_token_allow_cmd(token, BPF_BTF_LOAD)) { + bpf_token_put(token); + token = NULL; + } + } + + if (!bpf_token_capable(token, CAP_BPF)) { + bpf_token_put(token); return -EPERM; + } + + bpf_token_put(token); return btf_new_fd(attr, uattr, uattr_size); } diff --git a/tools/include/uapi/linux/bpf.h b/tools/include/uapi/linux/bpf.h index 81c88edceac5..f926f553e6eb 100644 --- a/tools/include/uapi/linux/bpf.h +++ b/tools/include/uapi/linux/bpf.h @@ -1535,6 +1535,7 @@ union bpf_attr { * truncated), or smaller (if log buffer wasn't filled completely). */ __u32 btf_log_true_size; + __u32 btf_token_fd; }; struct { From patchwork Wed Jun 21 23:38:03 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Andrii Nakryiko X-Patchwork-Id: 13288044 X-Patchwork-Delegate: paul@paul-moore.com Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 096EEEB64DC for ; Wed, 21 Jun 2023 23:38:47 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S229920AbjFUXiq convert rfc822-to-8bit (ORCPT ); Wed, 21 Jun 2023 19:38:46 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:33444 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S229988AbjFUXio (ORCPT ); Wed, 21 Jun 2023 19:38:44 -0400 Received: from mx0b-00082601.pphosted.com (mx0b-00082601.pphosted.com [67.231.153.30]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 2CDA01BD3 for ; Wed, 21 Jun 2023 16:38:41 -0700 (PDT) Received: from pps.filterd (m0148460.ppops.net [127.0.0.1]) by mx0a-00082601.pphosted.com (8.17.1.19/8.17.1.19) with ESMTP id 35LJeNPQ010602 for ; Wed, 21 Jun 2023 16:38:40 -0700 Received: from maileast.thefacebook.com ([163.114.130.16]) by mx0a-00082601.pphosted.com (PPS) with ESMTPS id 3rbw1y6py8-2 (version=TLSv1.2 cipher=ECDHE-RSA-AES128-GCM-SHA256 bits=128 verify=NOT) for ; Wed, 21 Jun 2023 16:38:40 -0700 Received: from twshared66906.03.prn6.facebook.com (2620:10d:c0a8:1c::1b) by mail.thefacebook.com (2620:10d:c0a8:82::c) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2507.23; Wed, 21 
Jun 2023 16:38:39 -0700 Received: by devbig019.vll3.facebook.com (Postfix, from userid 137359) id 69AB9333E88BE; Wed, 21 Jun 2023 16:38:26 -0700 (PDT) From: Andrii Nakryiko To: CC: , , , , , , , Subject: [PATCH v3 bpf-next 08/14] libbpf: add BPF token support to bpf_btf_load() API Date: Wed, 21 Jun 2023 16:38:03 -0700 Message-ID: <20230621233809.1941811-9-andrii@kernel.org> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20230621233809.1941811-1-andrii@kernel.org> References: <20230621233809.1941811-1-andrii@kernel.org> MIME-Version: 1.0 X-FB-Internal: Safe X-Proofpoint-GUID: mwApv2Wn7FH4ofz_3mLehw3IDpIRyhI2 X-Proofpoint-ORIG-GUID: mwApv2Wn7FH4ofz_3mLehw3IDpIRyhI2 X-Proofpoint-Virus-Version: vendor=baseguard engine=ICAP:2.0.254,Aquarius:18.0.957,Hydra:6.0.591,FMLib:17.11.176.26 definitions=2023-06-21_13,2023-06-16_01,2023-05-22_02 Precedence: bulk List-ID: Allow user to specify token_fd for bpf_btf_load() API that wraps kernel's BPF_BTF_LOAD command. This allows loading BTF from unprivileged process as long as it has BPF token allowing BPF_BTF_LOAD command, which can be created and delegated by privileged process. Signed-off-by: Andrii Nakryiko --- tools/lib/bpf/bpf.c | 4 +++- tools/lib/bpf/bpf.h | 3 ++- 2 files changed, 5 insertions(+), 2 deletions(-) diff --git a/tools/lib/bpf/bpf.c b/tools/lib/bpf/bpf.c index 882297b1e136..6fb915069be7 100644 --- a/tools/lib/bpf/bpf.c +++ b/tools/lib/bpf/bpf.c @@ -1098,7 +1098,7 @@ int bpf_raw_tracepoint_open(const char *name, int prog_fd) int bpf_btf_load(const void *btf_data, size_t btf_size, struct bpf_btf_load_opts *opts) { - const size_t attr_sz = offsetofend(union bpf_attr, btf_log_true_size); + const size_t attr_sz = offsetofend(union bpf_attr, btf_token_fd); union bpf_attr attr; char *log_buf; size_t log_size; @@ -1123,6 +1123,8 @@ int bpf_btf_load(const void *btf_data, size_t btf_size, struct bpf_btf_load_opts attr.btf = ptr_to_u64(btf_data); attr.btf_size = btf_size; + attr.btf_token_fd = OPTS_GET(opts, token_fd, 0); + /* log_level == 0 and log_buf != NULL means "try loading without * log_buf, but retry with log_buf and log_level=1 on error", which is * consistent across low-level and high-level BTF and program loading diff --git a/tools/lib/bpf/bpf.h b/tools/lib/bpf/bpf.h index cd3fb5ce6fe2..dc7c4af21ad9 100644 --- a/tools/lib/bpf/bpf.h +++ b/tools/lib/bpf/bpf.h @@ -132,9 +132,10 @@ struct bpf_btf_load_opts { * If kernel doesn't support this feature, log_size is left unchanged. 
*/ __u32 log_true_size; + __u32 token_fd; size_t :0; }; -#define bpf_btf_load_opts__last_field log_true_size +#define bpf_btf_load_opts__last_field token_fd LIBBPF_API int bpf_btf_load(const void *btf_data, size_t btf_size, struct bpf_btf_load_opts *opts); From patchwork Wed Jun 21 23:38:04 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Andrii Nakryiko X-Patchwork-Id: 13288043 X-Patchwork-Delegate: paul@paul-moore.com Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 8B5DFEB64D7 for ; Wed, 21 Jun 2023 23:38:46 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S229888AbjFUXip convert rfc822-to-8bit (ORCPT ); Wed, 21 Jun 2023 19:38:45 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:33398 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S229920AbjFUXij (ORCPT ); Wed, 21 Jun 2023 19:38:39 -0400 Received: from mx0a-00082601.pphosted.com (mx0a-00082601.pphosted.com [67.231.145.42]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 037481721 for ; Wed, 21 Jun 2023 16:38:39 -0700 (PDT) Received: from pps.filterd (m0044010.ppops.net [127.0.0.1]) by mx0a-00082601.pphosted.com (8.17.1.19/8.17.1.19) with ESMTP id 35LHr5kM002480 for ; Wed, 21 Jun 2023 16:38:38 -0700 Received: from maileast.thefacebook.com ([163.114.130.16]) by mx0a-00082601.pphosted.com (PPS) with ESMTPS id 3rbwks6cb4-3 (version=TLSv1.2 cipher=ECDHE-RSA-AES128-GCM-SHA256 bits=128 verify=NOT) for ; Wed, 21 Jun 2023 16:38:38 -0700 Received: from twshared52232.38.frc1.facebook.com (2620:10d:c0a8:1b::30) by mail.thefacebook.com (2620:10d:c0a8:83::6) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2507.23; Wed, 21 Jun 2023 16:38:35 -0700 Received: by devbig019.vll3.facebook.com (Postfix, from userid 137359) id 781F2333E88DB; Wed, 21 Jun 2023 16:38:28 -0700 (PDT) From: Andrii Nakryiko To: CC: , , , , , , , Subject: [PATCH v3 bpf-next 09/14] selftests/bpf: add BPF token-enabled BPF_BTF_LOAD selftest Date: Wed, 21 Jun 2023 16:38:04 -0700 Message-ID: <20230621233809.1941811-10-andrii@kernel.org> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20230621233809.1941811-1-andrii@kernel.org> References: <20230621233809.1941811-1-andrii@kernel.org> MIME-Version: 1.0 X-FB-Internal: Safe X-Proofpoint-GUID: nZb6WRO968KO_epF4POXEoxzbyk0h9nW X-Proofpoint-ORIG-GUID: nZb6WRO968KO_epF4POXEoxzbyk0h9nW X-Proofpoint-Virus-Version: vendor=baseguard engine=ICAP:2.0.254,Aquarius:18.0.957,Hydra:6.0.591,FMLib:17.11.176.26 definitions=2023-06-21_13,2023-06-16_01,2023-05-22_02 Precedence: bulk List-ID: Add a simple test validating that BTF loading can be done from unprivileged process through delegated BPF token. 
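For reference, the unprivileged side of the flow this test exercises, stripped of the selftest harness, looks roughly like the sketch below. This is illustration only, not part of the patch: the pin path is an example (it is whatever location the privileged creator chose), and raw_btf/raw_btf_sz stand in for BTF bytes produced e.g. via btf__raw_data().

#include <unistd.h>	/* close() */
#include <bpf/bpf.h>	/* bpf_obj_get(), bpf_btf_load(), LIBBPF_OPTS() */

static int load_btf_with_token(const void *raw_btf, size_t raw_btf_sz)
{
	LIBBPF_OPTS(bpf_btf_load_opts, opts);
	int token_fd, btf_fd;

	token_fd = bpf_obj_get("/sys/fs/bpf/token");	/* example pin location */
	if (token_fd < 0)
		return token_fd;

	opts.token_fd = token_fd;	/* 0 would mean "no token", i.e. old behavior */
	btf_fd = bpf_btf_load(raw_btf, raw_btf_sz, &opts);
	close(token_fd);
	return btf_fd;
}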
Signed-off-by: Andrii Nakryiko --- .../testing/selftests/bpf/prog_tests/token.c | 60 +++++++++++++++++++ 1 file changed, 60 insertions(+) diff --git a/tools/testing/selftests/bpf/prog_tests/token.c b/tools/testing/selftests/bpf/prog_tests/token.c index 0f832f9178a2..113cd4786a70 100644 --- a/tools/testing/selftests/bpf/prog_tests/token.c +++ b/tools/testing/selftests/bpf/prog_tests/token.c @@ -142,10 +142,70 @@ static void subtest_map_token(void) ASSERT_OK(restore_priv_caps(old_caps), "restore_caps"); } +static void subtest_btf_token(void) +{ + LIBBPF_OPTS(bpf_token_create_opts, token_opts); + LIBBPF_OPTS(bpf_btf_load_opts, btf_opts); + int token_fd = 0, btf_fd = 0, err; + const void *raw_btf_data; + struct btf *btf = NULL; + __u32 raw_btf_size; + __u64 old_caps = 0; + + /* create BPF token allowing BPF_BTF_LOAD command */ + token_opts.allowed_cmds = 1ULL << BPF_BTF_LOAD; + err = bpf_token_create(-EBADF, TOKEN_PATH, &token_opts); + if (!ASSERT_OK(err, "token_create")) + return; + + /* drop privileges to test token_fd passing */ + if (!ASSERT_OK(drop_priv_caps(&old_caps), "drop_caps")) + goto cleanup; + + token_fd = bpf_obj_get(TOKEN_PATH); + if (!ASSERT_GT(token_fd, 0, "token_get")) + goto cleanup; + + btf = btf__new_empty(); + if (!ASSERT_OK_PTR(btf, "empty_btf")) + goto cleanup; + + ASSERT_GT(btf__add_int(btf, "int", 4, 0), 0, "int_type"); + + raw_btf_data = btf__raw_data(btf, &raw_btf_size); + if (!ASSERT_OK_PTR(raw_btf_data, "raw_btf_data")) + goto cleanup; + + /* validate we can successfully load new BTF with token */ + btf_opts.token_fd = token_fd; + btf_fd = bpf_btf_load(raw_btf_data, raw_btf_size, &btf_opts); + if (!ASSERT_GT(btf_fd, 0, "btf_fd")) + goto cleanup; + close(btf_fd); + + /* now validate that we *cannot* load BTF without token */ + btf_opts.token_fd = 0; + btf_fd = bpf_btf_load(raw_btf_data, raw_btf_size, &btf_opts); + if (!ASSERT_EQ(btf_fd, -EPERM, "btf_fd_eperm")) + goto cleanup; + +cleanup: + btf__free(btf); + if (btf_fd > 0) + close(btf_fd); + if (token_fd) + close(token_fd); + unlink(TOKEN_PATH); + if (old_caps) + ASSERT_OK(restore_priv_caps(old_caps), "restore_caps"); +} + void test_token(void) { if (test__start_subtest("token_create")) subtest_token_create(); if (test__start_subtest("map_token")) subtest_map_token(); + if (test__start_subtest("btf_token")) + subtest_btf_token(); } From patchwork Wed Jun 21 23:38:05 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Andrii Nakryiko X-Patchwork-Id: 13288050 X-Patchwork-Delegate: paul@paul-moore.com Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 0BEE1EB64DC for ; Wed, 21 Jun 2023 23:39:20 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S229810AbjFUXjS convert rfc822-to-8bit (ORCPT ); Wed, 21 Jun 2023 19:39:18 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:33628 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S229903AbjFUXjL (ORCPT ); Wed, 21 Jun 2023 19:39:11 -0400 Received: from mx0b-00082601.pphosted.com (mx0b-00082601.pphosted.com [67.231.153.30]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 1B8021721 for ; Wed, 21 Jun 2023 16:39:10 -0700 (PDT) Received: from pps.filterd (m0109331.ppops.net [127.0.0.1]) by mx0a-00082601.pphosted.com (8.17.1.19/8.17.1.19) with 
ESMTP id 35LHcvsb006275 for ; Wed, 21 Jun 2023 16:39:09 -0700 Received: from maileast.thefacebook.com ([163.114.130.16]) by mx0a-00082601.pphosted.com (PPS) with ESMTPS id 3rbnr9hmwq-8 (version=TLSv1.2 cipher=ECDHE-RSA-AES128-GCM-SHA256 bits=128 verify=NOT) for ; Wed, 21 Jun 2023 16:39:09 -0700 Received: from ash-exhub204.TheFacebook.com (2620:10d:c0a8:83::4) by ash-exhub203.TheFacebook.com (2620:10d:c0a8:83::5) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2507.23; Wed, 21 Jun 2023 16:38:44 -0700 Received: from twshared34392.14.frc2.facebook.com (2620:10d:c0a8:1c::1b) by mail.thefacebook.com (2620:10d:c0a8:83::4) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2507.23; Wed, 21 Jun 2023 16:38:44 -0700 Received: by devbig019.vll3.facebook.com (Postfix, from userid 137359) id 8E9EB333E88E4; Wed, 21 Jun 2023 16:38:30 -0700 (PDT) From: Andrii Nakryiko To: CC: , , , , , , , Subject: [PATCH v3 bpf-next 10/14] bpf: add BPF token support to BPF_PROG_LOAD command Date: Wed, 21 Jun 2023 16:38:05 -0700 Message-ID: <20230621233809.1941811-11-andrii@kernel.org> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20230621233809.1941811-1-andrii@kernel.org> References: <20230621233809.1941811-1-andrii@kernel.org> MIME-Version: 1.0 X-FB-Internal: Safe X-Proofpoint-GUID: 2VG5hnmDf02WlC2iXmOggIJRJMlHcnRM X-Proofpoint-ORIG-GUID: 2VG5hnmDf02WlC2iXmOggIJRJMlHcnRM X-Proofpoint-Virus-Version: vendor=baseguard engine=ICAP:2.0.254,Aquarius:18.0.957,Hydra:6.0.591,FMLib:17.11.176.26 definitions=2023-06-21_13,2023-06-16_01,2023-05-22_02 Precedence: bulk List-ID: Add basic support of BPF token to BPF_PROG_LOAD. Extend BPF token to allow specifying BPF_PROG_LOAD as an allowed command, and also allow to specify bit sets of program type and attach type combination that would be allowed to be loaded by requested BPF token. 
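Concretely, the allowed sets are plain bitmasks indexed by enum value, and the kernel-side decision (added to kernel/bpf/token.c later in this patch) reduces to two bit tests. Illustration only, for a token meant to admit XDP programs:

	__u64 allowed_prog_types   = 1ULL << BPF_PROG_TYPE_XDP;
	__u64 allowed_attach_types = 1ULL << BPF_XDP;

	/* essence of the kernel-side check for a given prog_type/attach_type */
	bool ok = (allowed_prog_types   & (1ULL << prog_type)) &&
		  (allowed_attach_types & (1ULL << attach_type));

If the token does not grant BPF_PROG_LOAD or the requested program/attach type combination, it is simply ignored and the usual system-wide capability checks apply, as the syscall.c hunk below spells out.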
Signed-off-by: Andrii Nakryiko --- include/linux/bpf.h | 6 ++ include/uapi/linux/bpf.h | 8 ++ kernel/bpf/core.c | 1 + kernel/bpf/syscall.c | 89 +++++++++++++------ kernel/bpf/token.c | 21 +++++ tools/include/uapi/linux/bpf.h | 8 ++ .../selftests/bpf/prog_tests/libbpf_probes.c | 2 + .../selftests/bpf/prog_tests/libbpf_str.c | 3 + 8 files changed, 113 insertions(+), 25 deletions(-) diff --git a/include/linux/bpf.h b/include/linux/bpf.h index 856a147c8ce8..64dcdc18f09a 100644 --- a/include/linux/bpf.h +++ b/include/linux/bpf.h @@ -1411,6 +1411,7 @@ struct bpf_prog_aux { #ifdef CONFIG_SECURITY void *security; #endif + struct bpf_token *token; struct bpf_prog_offload *offload; struct btf *btf; struct bpf_func_info *func_info; @@ -1540,6 +1541,8 @@ struct bpf_token { atomic64_t refcnt; u64 allowed_cmds; u64 allowed_map_types; + u64 allowed_prog_types; + u64 allowed_attach_types; }; struct bpf_struct_ops_value; @@ -2099,6 +2102,9 @@ struct bpf_token *bpf_token_get_from_fd(u32 ufd); bool bpf_token_allow_cmd(const struct bpf_token *token, enum bpf_cmd cmd); bool bpf_token_allow_map_type(const struct bpf_token *token, enum bpf_map_type type); +bool bpf_token_allow_prog_type(const struct bpf_token *token, + enum bpf_prog_type prog_type, + enum bpf_attach_type attach_type); enum bpf_type { BPF_TYPE_UNSPEC = 0, diff --git a/include/uapi/linux/bpf.h b/include/uapi/linux/bpf.h index f926f553e6eb..091484cf2efc 100644 --- a/include/uapi/linux/bpf.h +++ b/include/uapi/linux/bpf.h @@ -1007,6 +1007,7 @@ enum bpf_prog_type { BPF_PROG_TYPE_SK_LOOKUP, BPF_PROG_TYPE_SYSCALL, /* a program that can execute syscalls */ BPF_PROG_TYPE_NETFILTER, + __MAX_BPF_PROG_TYPE }; enum bpf_attach_type { @@ -1438,6 +1439,7 @@ union bpf_attr { * truncated), or smaller (if log buffer wasn't filled completely). 
*/ __u32 log_true_size; + __u32 prog_token_fd; }; struct { /* anonymous struct used by BPF_OBJ_* commands */ @@ -1664,6 +1666,12 @@ union bpf_attr { * are allowed to be created by requested BPF token; */ __u64 allowed_map_types; + /* similarly to allowed_map_types, bit sets of BPF program + * types and BPF program attach types that are allowed to be + * loaded by requested BPF token + */ + __u64 allowed_prog_types; + __u64 allowed_attach_types; } token_create; } __attribute__((aligned(8))); diff --git a/kernel/bpf/core.c b/kernel/bpf/core.c index dc85240a0134..2ed54d1ed32a 100644 --- a/kernel/bpf/core.c +++ b/kernel/bpf/core.c @@ -2599,6 +2599,7 @@ void bpf_prog_free(struct bpf_prog *fp) if (aux->dst_prog) bpf_prog_put(aux->dst_prog); + bpf_token_put(aux->token); INIT_WORK(&aux->work, bpf_prog_free_deferred); schedule_work(&aux->work); } diff --git a/kernel/bpf/syscall.c b/kernel/bpf/syscall.c index 36be42159c2c..364b1efca301 100644 --- a/kernel/bpf/syscall.c +++ b/kernel/bpf/syscall.c @@ -2573,13 +2573,15 @@ static bool is_perfmon_prog_type(enum bpf_prog_type prog_type) } /* last field in 'union bpf_attr' used by this command */ -#define BPF_PROG_LOAD_LAST_FIELD log_true_size +#define BPF_PROG_LOAD_LAST_FIELD prog_token_fd static int bpf_prog_load(union bpf_attr *attr, bpfptr_t uattr, u32 uattr_size) { enum bpf_prog_type type = attr->prog_type; struct bpf_prog *prog, *dst_prog = NULL; struct btf *attach_btf = NULL; + struct bpf_token *token = NULL; + bool bpf_cap; int err; char license[128]; @@ -2595,10 +2597,31 @@ static int bpf_prog_load(union bpf_attr *attr, bpfptr_t uattr, u32 uattr_size) BPF_F_XDP_DEV_BOUND_ONLY)) return -EINVAL; + bpf_prog_load_fixup_attach_type(attr); + + if (attr->prog_token_fd) { + token = bpf_token_get_from_fd(attr->prog_token_fd); + if (IS_ERR(token)) + return PTR_ERR(token); + /* if current token doesn't grant prog loading permissions, + * then we can't use this token, so ignore it and rely on + * system-wide capabilities checks + */ + if (!bpf_token_allow_cmd(token, BPF_PROG_LOAD) || + !bpf_token_allow_prog_type(token, attr->prog_type, + attr->expected_attach_type)) { + bpf_token_put(token); + token = NULL; + } + } + + bpf_cap = bpf_token_capable(token, CAP_BPF); + err = -EPERM; + if (!IS_ENABLED(CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS) && (attr->prog_flags & BPF_F_ANY_ALIGNMENT) && - !bpf_capable()) - return -EPERM; + !bpf_cap) + goto put_token; /* Intent here is for unprivileged_bpf_disabled to block BPF program * creation for unprivileged users; other actions depend @@ -2607,21 +2630,23 @@ static int bpf_prog_load(union bpf_attr *attr, bpfptr_t uattr, u32 uattr_size) * capability checks are still carried out for these * and other operations. */ - if (sysctl_unprivileged_bpf_disabled && !bpf_capable()) - return -EPERM; + if (sysctl_unprivileged_bpf_disabled && !bpf_cap) + goto put_token; if (attr->insn_cnt == 0 || - attr->insn_cnt > (bpf_capable() ? BPF_COMPLEXITY_LIMIT_INSNS : BPF_MAXINSNS)) - return -E2BIG; + attr->insn_cnt > (bpf_cap ? 
BPF_COMPLEXITY_LIMIT_INSNS : BPF_MAXINSNS)) { + err = -E2BIG; + goto put_token; + } if (type != BPF_PROG_TYPE_SOCKET_FILTER && type != BPF_PROG_TYPE_CGROUP_SKB && - !bpf_capable()) - return -EPERM; + !bpf_cap) + goto put_token; - if (is_net_admin_prog_type(type) && !capable(CAP_NET_ADMIN) && !capable(CAP_SYS_ADMIN)) - return -EPERM; - if (is_perfmon_prog_type(type) && !perfmon_capable()) - return -EPERM; + if (is_net_admin_prog_type(type) && !bpf_token_capable(token, CAP_NET_ADMIN)) + goto put_token; + if (is_perfmon_prog_type(type) && !bpf_token_capable(token, CAP_PERFMON)) + goto put_token; /* attach_prog_fd/attach_btf_obj_fd can specify fd of either bpf_prog * or btf, we need to check which one it is @@ -2631,27 +2656,33 @@ static int bpf_prog_load(union bpf_attr *attr, bpfptr_t uattr, u32 uattr_size) if (IS_ERR(dst_prog)) { dst_prog = NULL; attach_btf = btf_get_by_fd(attr->attach_btf_obj_fd); - if (IS_ERR(attach_btf)) - return -EINVAL; + if (IS_ERR(attach_btf)) { + err = -EINVAL; + goto put_token; + } if (!btf_is_kernel(attach_btf)) { /* attaching through specifying bpf_prog's BTF * objects directly might be supported eventually */ btf_put(attach_btf); - return -ENOTSUPP; + err = -ENOTSUPP; + goto put_token; } } } else if (attr->attach_btf_id) { /* fall back to vmlinux BTF, if BTF type ID is specified */ attach_btf = bpf_get_btf_vmlinux(); - if (IS_ERR(attach_btf)) - return PTR_ERR(attach_btf); - if (!attach_btf) - return -EINVAL; + if (IS_ERR(attach_btf)) { + err = PTR_ERR(attach_btf); + goto put_token; + } + if (!attach_btf) { + err = -EINVAL; + goto put_token; + } btf_get(attach_btf); } - bpf_prog_load_fixup_attach_type(attr); if (bpf_prog_load_check_attach(type, attr->expected_attach_type, attach_btf, attr->attach_btf_id, dst_prog)) { @@ -2659,7 +2690,8 @@ static int bpf_prog_load(union bpf_attr *attr, bpfptr_t uattr, u32 uattr_size) bpf_prog_put(dst_prog); if (attach_btf) btf_put(attach_btf); - return -EINVAL; + err = -EINVAL; + goto put_token; } /* plain bpf_prog allocation */ @@ -2669,7 +2701,8 @@ static int bpf_prog_load(union bpf_attr *attr, bpfptr_t uattr, u32 uattr_size) bpf_prog_put(dst_prog); if (attach_btf) btf_put(attach_btf); - return -ENOMEM; + err = -EINVAL; + goto put_token; } prog->expected_attach_type = attr->expected_attach_type; @@ -2680,6 +2713,10 @@ static int bpf_prog_load(union bpf_attr *attr, bpfptr_t uattr, u32 uattr_size) prog->aux->sleepable = attr->prog_flags & BPF_F_SLEEPABLE; prog->aux->xdp_has_frags = attr->prog_flags & BPF_F_XDP_HAS_FRAGS; + /* move token into prog->aux, reuse taken refcnt */ + prog->aux->token = token; + token = NULL; + err = security_bpf_prog_alloc(prog->aux); if (err) goto free_prog; @@ -2781,6 +2818,8 @@ static int bpf_prog_load(union bpf_attr *attr, bpfptr_t uattr, u32 uattr_size) if (prog->aux->attach_btf) btf_put(prog->aux->attach_btf); bpf_prog_free(prog); +put_token: + bpf_token_put(token); return err; } @@ -3540,7 +3579,7 @@ static int bpf_prog_attach_check_attach_type(const struct bpf_prog *prog, case BPF_PROG_TYPE_SK_LOOKUP: return attach_type == prog->expected_attach_type ? 0 : -EINVAL; case BPF_PROG_TYPE_CGROUP_SKB: - if (!capable(CAP_NET_ADMIN)) + if (!bpf_token_capable(prog->aux->token, CAP_NET_ADMIN)) /* cg-skb progs can be loaded by unpriv user. * check permissions at attach time. 
*/ @@ -5129,7 +5168,7 @@ static int bpf_prog_bind_map(union bpf_attr *attr) return ret; } -#define BPF_TOKEN_CREATE_LAST_FIELD token_create.allowed_map_types +#define BPF_TOKEN_CREATE_LAST_FIELD token_create.allowed_attach_types static int token_create(union bpf_attr *attr) { diff --git a/kernel/bpf/token.c b/kernel/bpf/token.c index 91d8d987faea..22449a509048 100644 --- a/kernel/bpf/token.c +++ b/kernel/bpf/token.c @@ -114,6 +114,14 @@ int bpf_token_create(union bpf_attr *attr) if (token && !is_bit_subset_of(attr->token_create.allowed_map_types, token->allowed_map_types)) goto out; + /* requested prog types should be a subset of associated token's set */ + if (token && !is_bit_subset_of(attr->token_create.allowed_prog_types, + token->allowed_prog_types)) + goto out; + /* requested attach types should be a subset of associated token's set */ + if (token && !is_bit_subset_of(attr->token_create.allowed_attach_types, + token->allowed_attach_types)) + goto out; new_token = bpf_token_alloc(); if (!new_token) { @@ -123,6 +131,8 @@ int bpf_token_create(union bpf_attr *attr) new_token->allowed_cmds = attr->token_create.allowed_cmds; new_token->allowed_map_types = attr->token_create.allowed_map_types; + new_token->allowed_prog_types = attr->token_create.allowed_prog_types; + new_token->allowed_attach_types = attr->token_create.allowed_attach_types; ret = bpf_obj_pin_any(attr->token_create.pin_path_fd, u64_to_user_ptr(attr->token_create.pin_pathname), @@ -178,3 +188,14 @@ bool bpf_token_allow_map_type(const struct bpf_token *token, enum bpf_map_type t return token->allowed_map_types & (1ULL << type); } + +bool bpf_token_allow_prog_type(const struct bpf_token *token, + enum bpf_prog_type prog_type, + enum bpf_attach_type attach_type) +{ + if (!token || prog_type >= __MAX_BPF_PROG_TYPE || attach_type >= __MAX_BPF_ATTACH_TYPE) + return false; + + return (token->allowed_prog_types & (1ULL << prog_type)) && + (token->allowed_attach_types & (1ULL << attach_type)); +} diff --git a/tools/include/uapi/linux/bpf.h b/tools/include/uapi/linux/bpf.h index f926f553e6eb..091484cf2efc 100644 --- a/tools/include/uapi/linux/bpf.h +++ b/tools/include/uapi/linux/bpf.h @@ -1007,6 +1007,7 @@ enum bpf_prog_type { BPF_PROG_TYPE_SK_LOOKUP, BPF_PROG_TYPE_SYSCALL, /* a program that can execute syscalls */ BPF_PROG_TYPE_NETFILTER, + __MAX_BPF_PROG_TYPE }; enum bpf_attach_type { @@ -1438,6 +1439,7 @@ union bpf_attr { * truncated), or smaller (if log buffer wasn't filled completely). 
*/ __u32 log_true_size; + __u32 prog_token_fd; }; struct { /* anonymous struct used by BPF_OBJ_* commands */ @@ -1664,6 +1666,12 @@ union bpf_attr { * are allowed to be created by requested BPF token; */ __u64 allowed_map_types; + /* similarly to allowed_map_types, bit sets of BPF program + * types and BPF program attach types that are allowed to be + * loaded by requested BPF token + */ + __u64 allowed_prog_types; + __u64 allowed_attach_types; } token_create; } __attribute__((aligned(8))); diff --git a/tools/testing/selftests/bpf/prog_tests/libbpf_probes.c b/tools/testing/selftests/bpf/prog_tests/libbpf_probes.c index 573249a2814d..4ed46ed58a7b 100644 --- a/tools/testing/selftests/bpf/prog_tests/libbpf_probes.c +++ b/tools/testing/selftests/bpf/prog_tests/libbpf_probes.c @@ -30,6 +30,8 @@ void test_libbpf_probe_prog_types(void) if (prog_type == BPF_PROG_TYPE_UNSPEC) continue; + if (strcmp(prog_type_name, "__MAX_BPF_PROG_TYPE") == 0) + continue; if (!test__start_subtest(prog_type_name)) continue; diff --git a/tools/testing/selftests/bpf/prog_tests/libbpf_str.c b/tools/testing/selftests/bpf/prog_tests/libbpf_str.c index e677c0435cec..ea2a8c4063a8 100644 --- a/tools/testing/selftests/bpf/prog_tests/libbpf_str.c +++ b/tools/testing/selftests/bpf/prog_tests/libbpf_str.c @@ -185,6 +185,9 @@ static void test_libbpf_bpf_prog_type_str(void) const char *prog_type_str; char buf[256]; + if (prog_type == __MAX_BPF_PROG_TYPE) + continue; + prog_type_name = btf__str_by_offset(btf, e->name_off); prog_type_str = libbpf_bpf_prog_type_str(prog_type); ASSERT_OK_PTR(prog_type_str, prog_type_name); From patchwork Wed Jun 21 23:38:06 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Andrii Nakryiko X-Patchwork-Id: 13288048 X-Patchwork-Delegate: paul@paul-moore.com Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 80A95EB64D8 for ; Wed, 21 Jun 2023 23:39:18 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S230032AbjFUXjR convert rfc822-to-8bit (ORCPT ); Wed, 21 Jun 2023 19:39:17 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:33612 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S229463AbjFUXjJ (ORCPT ); Wed, 21 Jun 2023 19:39:09 -0400 Received: from mx0b-00082601.pphosted.com (mx0b-00082601.pphosted.com [67.231.153.30]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id CB18E198 for ; Wed, 21 Jun 2023 16:39:07 -0700 (PDT) Received: from pps.filterd (m0109331.ppops.net [127.0.0.1]) by mx0a-00082601.pphosted.com (8.17.1.19/8.17.1.19) with ESMTP id 35LHcvsX006275 for ; Wed, 21 Jun 2023 16:39:07 -0700 Received: from maileast.thefacebook.com ([163.114.130.16]) by mx0a-00082601.pphosted.com (PPS) with ESMTPS id 3rbnr9hmwq-4 (version=TLSv1.2 cipher=ECDHE-RSA-AES128-GCM-SHA256 bits=128 verify=NOT) for ; Wed, 21 Jun 2023 16:39:06 -0700 Received: from twshared34392.14.frc2.facebook.com (2620:10d:c0a8:1c::11) by mail.thefacebook.com (2620:10d:c0a8:83::5) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2507.23; Wed, 21 Jun 2023 16:38:44 -0700 Received: by devbig019.vll3.facebook.com (Postfix, from userid 137359) id A3238333E88F5; Wed, 21 Jun 2023 16:38:32 -0700 (PDT) From: Andrii Nakryiko To: CC: , , , , , , , Subject: [PATCH v3 
bpf-next 11/14] bpf: take into account BPF token when fetching helper protos Date: Wed, 21 Jun 2023 16:38:06 -0700 Message-ID: <20230621233809.1941811-12-andrii@kernel.org> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20230621233809.1941811-1-andrii@kernel.org> References: <20230621233809.1941811-1-andrii@kernel.org> MIME-Version: 1.0 X-FB-Internal: Safe X-Proofpoint-GUID: DNYuW_nr2Se6d9588f03qeetLqvqFl8W X-Proofpoint-ORIG-GUID: DNYuW_nr2Se6d9588f03qeetLqvqFl8W X-Proofpoint-Virus-Version: vendor=baseguard engine=ICAP:2.0.254,Aquarius:18.0.957,Hydra:6.0.591,FMLib:17.11.176.26 definitions=2023-06-21_13,2023-06-16_01,2023-05-22_02 Precedence: bulk List-ID: Instead of performing unconditional system-wide bpf_capable() and perfmon_capable() calls inside bpf_base_func_proto() function (and other similar ones) to determine eligibility of a given BPF helper for a given program, use previously recorded BPF token during BPF_PROG_LOAD command handling to inform the decision. Signed-off-by: Andrii Nakryiko --- drivers/media/rc/bpf-lirc.c | 2 +- include/linux/bpf.h | 5 +++-- kernel/bpf/cgroup.c | 6 +++--- kernel/bpf/helpers.c | 6 +++--- kernel/bpf/syscall.c | 5 +++-- kernel/trace/bpf_trace.c | 2 +- net/core/filter.c | 32 ++++++++++++++++---------------- net/ipv4/bpf_tcp_ca.c | 2 +- net/netfilter/nf_bpf_link.c | 2 +- 9 files changed, 32 insertions(+), 30 deletions(-) diff --git a/drivers/media/rc/bpf-lirc.c b/drivers/media/rc/bpf-lirc.c index fe17c7f98e81..6d07693c6b9f 100644 --- a/drivers/media/rc/bpf-lirc.c +++ b/drivers/media/rc/bpf-lirc.c @@ -110,7 +110,7 @@ lirc_mode2_func_proto(enum bpf_func_id func_id, const struct bpf_prog *prog) case BPF_FUNC_get_prandom_u32: return &bpf_get_prandom_u32_proto; case BPF_FUNC_trace_printk: - if (perfmon_capable()) + if (bpf_token_capable(prog->aux->token, CAP_PERFMON)) return bpf_get_trace_printk_proto(); fallthrough; default: diff --git a/include/linux/bpf.h b/include/linux/bpf.h index 64dcdc18f09a..0e8680e639cb 100644 --- a/include/linux/bpf.h +++ b/include/linux/bpf.h @@ -2358,7 +2358,8 @@ int btf_check_type_match(struct bpf_verifier_log *log, const struct bpf_prog *pr struct bpf_prog *bpf_prog_by_id(u32 id); struct bpf_link *bpf_link_by_id(u32 id); -const struct bpf_func_proto *bpf_base_func_proto(enum bpf_func_id func_id); +const struct bpf_func_proto *bpf_base_func_proto(enum bpf_func_id func_id, + const struct bpf_prog *prog); void bpf_task_storage_free(struct task_struct *task); void bpf_cgrp_storage_free(struct cgroup *cgroup); bool bpf_prog_has_kfunc_call(const struct bpf_prog *prog); @@ -2615,7 +2616,7 @@ static inline int btf_struct_access(struct bpf_verifier_log *log, } static inline const struct bpf_func_proto * -bpf_base_func_proto(enum bpf_func_id func_id) +bpf_base_func_proto(enum bpf_func_id func_id, const struct bpf_prog *prog) { return NULL; } diff --git a/kernel/bpf/cgroup.c b/kernel/bpf/cgroup.c index 5b2741aa0d9b..39d6cfb6f304 100644 --- a/kernel/bpf/cgroup.c +++ b/kernel/bpf/cgroup.c @@ -1615,7 +1615,7 @@ cgroup_dev_func_proto(enum bpf_func_id func_id, const struct bpf_prog *prog) case BPF_FUNC_perf_event_output: return &bpf_event_output_data_proto; default: - return bpf_base_func_proto(func_id); + return bpf_base_func_proto(func_id, prog); } } @@ -2173,7 +2173,7 @@ sysctl_func_proto(enum bpf_func_id func_id, const struct bpf_prog *prog) case BPF_FUNC_perf_event_output: return &bpf_event_output_data_proto; default: - return bpf_base_func_proto(func_id); + return bpf_base_func_proto(func_id, prog); } } @@ -2330,7 +2330,7 @@ 
cg_sockopt_func_proto(enum bpf_func_id func_id, const struct bpf_prog *prog) case BPF_FUNC_perf_event_output: return &bpf_event_output_data_proto; default: - return bpf_base_func_proto(func_id); + return bpf_base_func_proto(func_id, prog); } } diff --git a/kernel/bpf/helpers.c b/kernel/bpf/helpers.c index 9e80efa59a5d..6a740af48908 100644 --- a/kernel/bpf/helpers.c +++ b/kernel/bpf/helpers.c @@ -1663,7 +1663,7 @@ const struct bpf_func_proto bpf_probe_read_kernel_str_proto __weak; const struct bpf_func_proto bpf_task_pt_regs_proto __weak; const struct bpf_func_proto * -bpf_base_func_proto(enum bpf_func_id func_id) +bpf_base_func_proto(enum bpf_func_id func_id, const struct bpf_prog *prog) { switch (func_id) { case BPF_FUNC_map_lookup_elem: @@ -1714,7 +1714,7 @@ bpf_base_func_proto(enum bpf_func_id func_id) break; } - if (!bpf_capable()) + if (!bpf_token_capable(prog->aux->token, CAP_BPF)) return NULL; switch (func_id) { @@ -1772,7 +1772,7 @@ bpf_base_func_proto(enum bpf_func_id func_id) break; } - if (!perfmon_capable()) + if (!bpf_token_capable(prog->aux->token, CAP_PERFMON)) return NULL; switch (func_id) { diff --git a/kernel/bpf/syscall.c b/kernel/bpf/syscall.c index 364b1efca301..c6a40c90176a 100644 --- a/kernel/bpf/syscall.c +++ b/kernel/bpf/syscall.c @@ -5424,7 +5424,7 @@ static const struct bpf_func_proto bpf_sys_bpf_proto = { const struct bpf_func_proto * __weak tracing_prog_func_proto(enum bpf_func_id func_id, const struct bpf_prog *prog) { - return bpf_base_func_proto(func_id); + return bpf_base_func_proto(func_id, prog); } BPF_CALL_1(bpf_sys_close, u32, fd) @@ -5474,7 +5474,8 @@ syscall_prog_func_proto(enum bpf_func_id func_id, const struct bpf_prog *prog) { switch (func_id) { case BPF_FUNC_sys_bpf: - return !perfmon_capable() ? NULL : &bpf_sys_bpf_proto; + return !bpf_token_capable(prog->aux->token, CAP_PERFMON) + ? 
NULL : &bpf_sys_bpf_proto; case BPF_FUNC_btf_find_by_name_kind: return &bpf_btf_find_by_name_kind_proto; case BPF_FUNC_sys_close: diff --git a/kernel/trace/bpf_trace.c b/kernel/trace/bpf_trace.c index 2bc41e6ac9fe..f5382d8bb690 100644 --- a/kernel/trace/bpf_trace.c +++ b/kernel/trace/bpf_trace.c @@ -1511,7 +1511,7 @@ bpf_tracing_func_proto(enum bpf_func_id func_id, const struct bpf_prog *prog) case BPF_FUNC_trace_vprintk: return bpf_get_trace_vprintk_proto(); default: - return bpf_base_func_proto(func_id); + return bpf_base_func_proto(func_id, prog); } } diff --git a/net/core/filter.c b/net/core/filter.c index 428df050d021..59e5f41f2d5b 100644 --- a/net/core/filter.c +++ b/net/core/filter.c @@ -83,7 +83,7 @@ #include static const struct bpf_func_proto * -bpf_sk_base_func_proto(enum bpf_func_id func_id); +bpf_sk_base_func_proto(enum bpf_func_id func_id, const struct bpf_prog *prog); int copy_bpf_fprog_from_user(struct sock_fprog *dst, sockptr_t src, int len) { @@ -7739,7 +7739,7 @@ sock_filter_func_proto(enum bpf_func_id func_id, const struct bpf_prog *prog) case BPF_FUNC_ktime_get_coarse_ns: return &bpf_ktime_get_coarse_ns_proto; default: - return bpf_base_func_proto(func_id); + return bpf_base_func_proto(func_id, prog); } } @@ -7822,7 +7822,7 @@ sock_addr_func_proto(enum bpf_func_id func_id, const struct bpf_prog *prog) return NULL; } default: - return bpf_sk_base_func_proto(func_id); + return bpf_sk_base_func_proto(func_id, prog); } } @@ -7841,7 +7841,7 @@ sk_filter_func_proto(enum bpf_func_id func_id, const struct bpf_prog *prog) case BPF_FUNC_perf_event_output: return &bpf_skb_event_output_proto; default: - return bpf_sk_base_func_proto(func_id); + return bpf_sk_base_func_proto(func_id, prog); } } @@ -8028,7 +8028,7 @@ tc_cls_act_func_proto(enum bpf_func_id func_id, const struct bpf_prog *prog) #endif #endif default: - return bpf_sk_base_func_proto(func_id); + return bpf_sk_base_func_proto(func_id, prog); } } @@ -8087,7 +8087,7 @@ xdp_func_proto(enum bpf_func_id func_id, const struct bpf_prog *prog) #endif #endif default: - return bpf_sk_base_func_proto(func_id); + return bpf_sk_base_func_proto(func_id, prog); } #if IS_MODULE(CONFIG_NF_CONNTRACK) && IS_ENABLED(CONFIG_DEBUG_INFO_BTF_MODULES) @@ -8148,7 +8148,7 @@ sock_ops_func_proto(enum bpf_func_id func_id, const struct bpf_prog *prog) return &bpf_tcp_sock_proto; #endif /* CONFIG_INET */ default: - return bpf_sk_base_func_proto(func_id); + return bpf_sk_base_func_proto(func_id, prog); } } @@ -8190,7 +8190,7 @@ sk_msg_func_proto(enum bpf_func_id func_id, const struct bpf_prog *prog) return &bpf_get_cgroup_classid_curr_proto; #endif default: - return bpf_sk_base_func_proto(func_id); + return bpf_sk_base_func_proto(func_id, prog); } } @@ -8234,7 +8234,7 @@ sk_skb_func_proto(enum bpf_func_id func_id, const struct bpf_prog *prog) return &bpf_skc_lookup_tcp_proto; #endif default: - return bpf_sk_base_func_proto(func_id); + return bpf_sk_base_func_proto(func_id, prog); } } @@ -8245,7 +8245,7 @@ flow_dissector_func_proto(enum bpf_func_id func_id, const struct bpf_prog *prog) case BPF_FUNC_skb_load_bytes: return &bpf_flow_dissector_load_bytes_proto; default: - return bpf_sk_base_func_proto(func_id); + return bpf_sk_base_func_proto(func_id, prog); } } @@ -8272,7 +8272,7 @@ lwt_out_func_proto(enum bpf_func_id func_id, const struct bpf_prog *prog) case BPF_FUNC_skb_under_cgroup: return &bpf_skb_under_cgroup_proto; default: - return bpf_sk_base_func_proto(func_id); + return bpf_sk_base_func_proto(func_id, prog); } } @@ -11103,7 +11103,7 @@ 
sk_reuseport_func_proto(enum bpf_func_id func_id, case BPF_FUNC_ktime_get_coarse_ns: return &bpf_ktime_get_coarse_ns_proto; default: - return bpf_base_func_proto(func_id); + return bpf_base_func_proto(func_id, prog); } } @@ -11285,7 +11285,7 @@ sk_lookup_func_proto(enum bpf_func_id func_id, const struct bpf_prog *prog) case BPF_FUNC_sk_release: return &bpf_sk_release_proto; default: - return bpf_sk_base_func_proto(func_id); + return bpf_sk_base_func_proto(func_id, prog); } } @@ -11619,7 +11619,7 @@ const struct bpf_func_proto bpf_sock_from_file_proto = { }; static const struct bpf_func_proto * -bpf_sk_base_func_proto(enum bpf_func_id func_id) +bpf_sk_base_func_proto(enum bpf_func_id func_id, const struct bpf_prog *prog) { const struct bpf_func_proto *func; @@ -11648,10 +11648,10 @@ bpf_sk_base_func_proto(enum bpf_func_id func_id) case BPF_FUNC_ktime_get_coarse_ns: return &bpf_ktime_get_coarse_ns_proto; default: - return bpf_base_func_proto(func_id); + return bpf_base_func_proto(func_id, prog); } - if (!perfmon_capable()) + if (!bpf_token_capable(prog->aux->token, CAP_PERFMON)) return NULL; return func; diff --git a/net/ipv4/bpf_tcp_ca.c b/net/ipv4/bpf_tcp_ca.c index 4406d796cc2f..0a3a60e7c282 100644 --- a/net/ipv4/bpf_tcp_ca.c +++ b/net/ipv4/bpf_tcp_ca.c @@ -193,7 +193,7 @@ bpf_tcp_ca_get_func_proto(enum bpf_func_id func_id, case BPF_FUNC_ktime_get_coarse_ns: return &bpf_ktime_get_coarse_ns_proto; default: - return bpf_base_func_proto(func_id); + return bpf_base_func_proto(func_id, prog); } } diff --git a/net/netfilter/nf_bpf_link.c b/net/netfilter/nf_bpf_link.c index c36da56d756f..d7786ea9c01a 100644 --- a/net/netfilter/nf_bpf_link.c +++ b/net/netfilter/nf_bpf_link.c @@ -219,7 +219,7 @@ static bool nf_is_valid_access(int off, int size, enum bpf_access_type type, static const struct bpf_func_proto * bpf_nf_func_proto(enum bpf_func_id func_id, const struct bpf_prog *prog) { - return bpf_base_func_proto(func_id); + return bpf_base_func_proto(func_id, prog); } const struct bpf_verifier_ops netfilter_verifier_ops = { From patchwork Wed Jun 21 23:38:07 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Andrii Nakryiko X-Patchwork-Id: 13288047 X-Patchwork-Delegate: paul@paul-moore.com Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id EB026EB64D7 for ; Wed, 21 Jun 2023 23:39:18 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S229463AbjFUXjS convert rfc822-to-8bit (ORCPT ); Wed, 21 Jun 2023 19:39:18 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:33616 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S229810AbjFUXjK (ORCPT ); Wed, 21 Jun 2023 19:39:10 -0400 Received: from mx0b-00082601.pphosted.com (mx0b-00082601.pphosted.com [67.231.153.30]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id C36E81730 for ; Wed, 21 Jun 2023 16:39:08 -0700 (PDT) Received: from pps.filterd (m0109331.ppops.net [127.0.0.1]) by mx0a-00082601.pphosted.com (8.17.1.19/8.17.1.19) with ESMTP id 35LHcupt006014 for ; Wed, 21 Jun 2023 16:39:08 -0700 Received: from maileast.thefacebook.com ([163.114.130.16]) by mx0a-00082601.pphosted.com (PPS) with ESMTPS id 3rbnr9hmwk-8 (version=TLSv1.2 cipher=ECDHE-RSA-AES128-GCM-SHA256 bits=128 verify=NOT) for ; Wed, 21 Jun 2023 16:39:07 -0700 
Received: from twshared52232.38.frc1.facebook.com (2620:10d:c0a8:1c::11) by mail.thefacebook.com (2620:10d:c0a8:82::d) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2507.23; Wed, 21 Jun 2023 16:38:47 -0700 Received: by devbig019.vll3.facebook.com (Postfix, from userid 137359) id AF846333E8905; Wed, 21 Jun 2023 16:38:34 -0700 (PDT) From: Andrii Nakryiko To: CC: , , , , , , , Subject: [PATCH v3 bpf-next 12/14] bpf: consistenly use BPF token throughout BPF verifier logic Date: Wed, 21 Jun 2023 16:38:07 -0700 Message-ID: <20230621233809.1941811-13-andrii@kernel.org> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20230621233809.1941811-1-andrii@kernel.org> References: <20230621233809.1941811-1-andrii@kernel.org> MIME-Version: 1.0 X-FB-Internal: Safe X-Proofpoint-GUID: Xg8R-KeKPqJQ1TVkIAmmts6aPzDU2VHM X-Proofpoint-ORIG-GUID: Xg8R-KeKPqJQ1TVkIAmmts6aPzDU2VHM X-Proofpoint-Virus-Version: vendor=baseguard engine=ICAP:2.0.254,Aquarius:18.0.957,Hydra:6.0.591,FMLib:17.11.176.26 definitions=2023-06-21_13,2023-06-16_01,2023-05-22_02 Precedence: bulk List-ID: Remove remaining direct queries to perfmon_capable() and bpf_capable() in BPF verifier logic and instead use BPF token (if available) to make decisions about privileges. Signed-off-by: Andrii Nakryiko --- include/linux/bpf.h | 18 ++++++++++-------- include/linux/filter.h | 2 +- kernel/bpf/arraymap.c | 2 +- kernel/bpf/core.c | 2 +- kernel/bpf/verifier.c | 13 ++++++------- net/core/filter.c | 4 ++-- 6 files changed, 21 insertions(+), 20 deletions(-) diff --git a/include/linux/bpf.h b/include/linux/bpf.h index 0e8680e639cb..af9f7dc60f21 100644 --- a/include/linux/bpf.h +++ b/include/linux/bpf.h @@ -2059,24 +2059,26 @@ bpf_map_alloc_percpu(const struct bpf_map *map, size_t size, size_t align, extern int sysctl_unprivileged_bpf_disabled; -static inline bool bpf_allow_ptr_leaks(void) +bool bpf_token_capable(const struct bpf_token *token, int cap); + +static inline bool bpf_allow_ptr_leaks(const struct bpf_token *token) { - return perfmon_capable(); + return bpf_token_capable(token, CAP_PERFMON); } -static inline bool bpf_allow_uninit_stack(void) +static inline bool bpf_allow_uninit_stack(const struct bpf_token *token) { - return perfmon_capable(); + return bpf_token_capable(token, CAP_PERFMON); } -static inline bool bpf_bypass_spec_v1(void) +static inline bool bpf_bypass_spec_v1(const struct bpf_token *token) { - return perfmon_capable(); + return bpf_token_capable(token, CAP_PERFMON); } -static inline bool bpf_bypass_spec_v4(void) +static inline bool bpf_bypass_spec_v4(const struct bpf_token *token) { - return perfmon_capable(); + return bpf_token_capable(token, CAP_PERFMON); } int bpf_map_new_fd(struct bpf_map *map, int flags); diff --git a/include/linux/filter.h b/include/linux/filter.h index f69114083ec7..2391a9025ffd 100644 --- a/include/linux/filter.h +++ b/include/linux/filter.h @@ -1109,7 +1109,7 @@ static inline bool bpf_jit_blinding_enabled(struct bpf_prog *prog) return false; if (!bpf_jit_harden) return false; - if (bpf_jit_harden == 1 && bpf_capable()) + if (bpf_jit_harden == 1 && bpf_token_capable(prog->aux->token, CAP_BPF)) return false; return true; diff --git a/kernel/bpf/arraymap.c b/kernel/bpf/arraymap.c index 2058e89b5ddd..f0c64df6b6ff 100644 --- a/kernel/bpf/arraymap.c +++ b/kernel/bpf/arraymap.c @@ -82,7 +82,7 @@ static struct bpf_map *array_map_alloc(union bpf_attr *attr) bool percpu = attr->map_type == BPF_MAP_TYPE_PERCPU_ARRAY; int numa_node = bpf_map_attr_numa_node(attr); u32 
elem_size, index_mask, max_entries; - bool bypass_spec_v1 = bpf_bypass_spec_v1(); + bool bypass_spec_v1 = bpf_bypass_spec_v1(NULL); u64 array_size, mask64; struct bpf_array *array; diff --git a/kernel/bpf/core.c b/kernel/bpf/core.c index 2ed54d1ed32a..979c10b9399d 100644 --- a/kernel/bpf/core.c +++ b/kernel/bpf/core.c @@ -661,7 +661,7 @@ static bool bpf_prog_kallsyms_candidate(const struct bpf_prog *fp) void bpf_prog_kallsyms_add(struct bpf_prog *fp) { if (!bpf_prog_kallsyms_candidate(fp) || - !bpf_capable()) + !bpf_token_capable(fp->aux->token, CAP_BPF)) return; bpf_prog_ksym_set_addr(fp); diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c index fa43dc8e85b9..eedaf0e98d8f 100644 --- a/kernel/bpf/verifier.c +++ b/kernel/bpf/verifier.c @@ -19397,7 +19397,12 @@ int bpf_check(struct bpf_prog **prog, union bpf_attr *attr, bpfptr_t uattr, __u3 env->prog = *prog; env->ops = bpf_verifier_ops[env->prog->type]; env->fd_array = make_bpfptr(attr->fd_array, uattr.is_kernel); - is_priv = bpf_capable(); + + env->allow_ptr_leaks = bpf_allow_ptr_leaks(env->prog->aux->token); + env->allow_uninit_stack = bpf_allow_uninit_stack(env->prog->aux->token); + env->bypass_spec_v1 = bpf_bypass_spec_v1(env->prog->aux->token); + env->bypass_spec_v4 = bpf_bypass_spec_v4(env->prog->aux->token); + env->bpf_capable = is_priv = bpf_token_capable(env->prog->aux->token, CAP_BPF); bpf_get_btf_vmlinux(); @@ -19429,12 +19434,6 @@ int bpf_check(struct bpf_prog **prog, union bpf_attr *attr, bpfptr_t uattr, __u3 if (attr->prog_flags & BPF_F_ANY_ALIGNMENT) env->strict_alignment = false; - env->allow_ptr_leaks = bpf_allow_ptr_leaks(); - env->allow_uninit_stack = bpf_allow_uninit_stack(); - env->bypass_spec_v1 = bpf_bypass_spec_v1(); - env->bypass_spec_v4 = bpf_bypass_spec_v4(); - env->bpf_capable = bpf_capable(); - if (is_priv) env->test_state_freq = attr->prog_flags & BPF_F_TEST_STATE_FREQ; diff --git a/net/core/filter.c b/net/core/filter.c index 59e5f41f2d5b..0f2e5a15f1fd 100644 --- a/net/core/filter.c +++ b/net/core/filter.c @@ -8447,7 +8447,7 @@ static bool cg_skb_is_valid_access(int off, int size, return false; case bpf_ctx_range(struct __sk_buff, data): case bpf_ctx_range(struct __sk_buff, data_end): - if (!bpf_capable()) + if (!bpf_token_capable(prog->aux->token, CAP_BPF)) return false; break; } @@ -8459,7 +8459,7 @@ static bool cg_skb_is_valid_access(int off, int size, case bpf_ctx_range_till(struct __sk_buff, cb[0], cb[4]): break; case bpf_ctx_range(struct __sk_buff, tstamp): - if (!bpf_capable()) + if (!bpf_token_capable(prog->aux->token, CAP_BPF)) return false; break; default: From patchwork Wed Jun 21 23:38:08 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Andrii Nakryiko X-Patchwork-Id: 13288045 X-Patchwork-Delegate: paul@paul-moore.com Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id B3C75EB64DC for ; Wed, 21 Jun 2023 23:38:49 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S229988AbjFUXis convert rfc822-to-8bit (ORCPT ); Wed, 21 Jun 2023 19:38:48 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:33466 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S229810AbjFUXiq (ORCPT ); Wed, 21 Jun 2023 19:38:46 -0400 Received: from mx0b-00082601.pphosted.com 
(mx0b-00082601.pphosted.com [67.231.153.30]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id E02D61721 for ; Wed, 21 Jun 2023 16:38:45 -0700 (PDT) Received: from pps.filterd (m0109332.ppops.net [127.0.0.1]) by mx0a-00082601.pphosted.com (8.17.1.19/8.17.1.19) with ESMTP id 35LIe3R6008064 for ; Wed, 21 Jun 2023 16:38:45 -0700 Received: from maileast.thefacebook.com ([163.114.130.16]) by mx0a-00082601.pphosted.com (PPS) with ESMTPS id 3rc05hwta7-2 (version=TLSv1.2 cipher=ECDHE-RSA-AES128-GCM-SHA256 bits=128 verify=NOT) for ; Wed, 21 Jun 2023 16:38:45 -0700 Received: from twshared34392.14.frc2.facebook.com (2620:10d:c0a8:1c::1b) by mail.thefacebook.com (2620:10d:c0a8:83::7) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2507.23; Wed, 21 Jun 2023 16:38:44 -0700 Received: by devbig019.vll3.facebook.com (Postfix, from userid 137359) id BBDE2333E8915; Wed, 21 Jun 2023 16:38:36 -0700 (PDT) From: Andrii Nakryiko To: CC: , , , , , , , Subject: [PATCH v3 bpf-next 13/14] libbpf: add BPF token support to bpf_prog_load() API Date: Wed, 21 Jun 2023 16:38:08 -0700 Message-ID: <20230621233809.1941811-14-andrii@kernel.org> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20230621233809.1941811-1-andrii@kernel.org> References: <20230621233809.1941811-1-andrii@kernel.org> MIME-Version: 1.0 X-FB-Internal: Safe X-Proofpoint-GUID: eJI5if62UlEE8XTvYYQbyFO5t3q9Yq0_ X-Proofpoint-ORIG-GUID: eJI5if62UlEE8XTvYYQbyFO5t3q9Yq0_ X-Proofpoint-Virus-Version: vendor=baseguard engine=ICAP:2.0.254,Aquarius:18.0.957,Hydra:6.0.591,FMLib:17.11.176.26 definitions=2023-06-21_12,2023-06-16_01,2023-05-22_02 Precedence: bulk List-ID: Wire through token_fd into bpf_prog_load(). Also make sure to pass allowed_{prog,attach}_types to kernel in bpf_token_create(). 
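For illustration, the privileged side of this could look like the following sketch (not part of the patch; the pin path is an example and error handling is elided):

	LIBBPF_OPTS(bpf_token_create_opts, opts,
		.allowed_cmds = 1ULL << BPF_PROG_LOAD,
		.allowed_prog_types = 1ULL << BPF_PROG_TYPE_XDP,
		.allowed_attach_types = 1ULL << BPF_XDP,
	);
	int err;

	err = bpf_token_create(-EBADF, "/sys/fs/bpf/token", &opts);

The unprivileged loader then just sets token_fd (and, if needed, expected_attach_type) in bpf_prog_load_opts when calling bpf_prog_load(); the selftest in the next patch exercises exactly that.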
Signed-off-by: Andrii Nakryiko --- tools/lib/bpf/bpf.c | 5 ++++- tools/lib/bpf/bpf.h | 7 +++++-- 2 files changed, 9 insertions(+), 3 deletions(-) diff --git a/tools/lib/bpf/bpf.c b/tools/lib/bpf/bpf.c index 6fb915069be7..5f331bbf1ad2 100644 --- a/tools/lib/bpf/bpf.c +++ b/tools/lib/bpf/bpf.c @@ -234,7 +234,7 @@ int bpf_prog_load(enum bpf_prog_type prog_type, const struct bpf_insn *insns, size_t insn_cnt, struct bpf_prog_load_opts *opts) { - const size_t attr_sz = offsetofend(union bpf_attr, log_true_size); + const size_t attr_sz = offsetofend(union bpf_attr, prog_token_fd); void *finfo = NULL, *linfo = NULL; const char *func_info, *line_info; __u32 log_size, log_level, attach_prog_fd, attach_btf_obj_fd; @@ -263,6 +263,7 @@ int bpf_prog_load(enum bpf_prog_type prog_type, attr.prog_flags = OPTS_GET(opts, prog_flags, 0); attr.prog_ifindex = OPTS_GET(opts, prog_ifindex, 0); attr.kern_version = OPTS_GET(opts, kern_version, 0); + attr.prog_token_fd = OPTS_GET(opts, token_fd, 0); if (prog_name && kernel_supports(NULL, FEAT_PROG_NAME)) libbpf_strlcpy(attr.prog_name, prog_name, sizeof(attr.prog_name)); @@ -1223,6 +1224,8 @@ int bpf_token_create(int pin_path_fd, const char *pin_pathname, struct bpf_token attr.token_create.pin_flags = OPTS_GET(opts, pin_flags, 0); attr.token_create.allowed_cmds = OPTS_GET(opts, allowed_cmds, 0); attr.token_create.allowed_map_types = OPTS_GET(opts, allowed_map_types, 0); + attr.token_create.allowed_prog_types = OPTS_GET(opts, allowed_prog_types, 0); + attr.token_create.allowed_attach_types = OPTS_GET(opts, allowed_attach_types, 0); ret = sys_bpf(BPF_TOKEN_CREATE, &attr, attr_sz); return libbpf_err_errno(ret); diff --git a/tools/lib/bpf/bpf.h b/tools/lib/bpf/bpf.h index dc7c4af21ad9..2ac56fba6027 100644 --- a/tools/lib/bpf/bpf.h +++ b/tools/lib/bpf/bpf.h @@ -104,9 +104,10 @@ struct bpf_prog_load_opts { * If kernel doesn't support this feature, log_size is left unchanged. 
*/ __u32 log_true_size; + __u32 token_fd; size_t :0; }; -#define bpf_prog_load_opts__last_field log_true_size +#define bpf_prog_load_opts__last_field token_fd LIBBPF_API int bpf_prog_load(enum bpf_prog_type prog_type, const char *prog_name, const char *license, @@ -561,9 +562,11 @@ struct bpf_token_create_opts { __u32 pin_flags; __u64 allowed_cmds; __u64 allowed_map_types; + __u64 allowed_prog_types; + __u64 allowed_attach_types; size_t :0; }; -#define bpf_token_create_opts__last_field allowed_map_types +#define bpf_token_create_opts__last_field allowed_attach_types /** * @brief **bpf_token_create()** creates a new instance of BPF token, pinning From patchwork Wed Jun 21 23:38:09 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Andrii Nakryiko X-Patchwork-Id: 13288049 X-Patchwork-Delegate: paul@paul-moore.com Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 8EC37C0015E for ; Wed, 21 Jun 2023 23:39:20 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S229527AbjFUXjT convert rfc822-to-8bit (ORCPT ); Wed, 21 Jun 2023 19:39:19 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:33640 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S229934AbjFUXjN (ORCPT ); Wed, 21 Jun 2023 19:39:13 -0400 Received: from mx0b-00082601.pphosted.com (mx0b-00082601.pphosted.com [67.231.153.30]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 758811733 for ; Wed, 21 Jun 2023 16:39:12 -0700 (PDT) Received: from pps.filterd (m0109331.ppops.net [127.0.0.1]) by mx0a-00082601.pphosted.com (8.17.1.19/8.17.1.19) with ESMTP id 35LHcvsg006275 for ; Wed, 21 Jun 2023 16:39:11 -0700 Received: from maileast.thefacebook.com ([163.114.130.16]) by mx0a-00082601.pphosted.com (PPS) with ESMTPS id 3rbnr9hmwq-13 (version=TLSv1.2 cipher=ECDHE-RSA-AES128-GCM-SHA256 bits=128 verify=NOT) for ; Wed, 21 Jun 2023 16:39:11 -0700 Received: from ash-exhub204.TheFacebook.com (2620:10d:c0a8:83::4) by ash-exhub203.TheFacebook.com (2620:10d:c0a8:83::5) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2507.23; Wed, 21 Jun 2023 16:38:45 -0700 Received: from twshared34392.14.frc2.facebook.com (2620:10d:c0a8:1b::30) by mail.thefacebook.com (2620:10d:c0a8:83::4) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2507.23; Wed, 21 Jun 2023 16:38:44 -0700 Received: by devbig019.vll3.facebook.com (Postfix, from userid 137359) id C8375333E8937; Wed, 21 Jun 2023 16:38:38 -0700 (PDT) From: Andrii Nakryiko To: CC: , , , , , , , Subject: [PATCH v3 bpf-next 14/14] selftests/bpf: add BPF token-enabled BPF_PROG_LOAD tests Date: Wed, 21 Jun 2023 16:38:09 -0700 Message-ID: <20230621233809.1941811-15-andrii@kernel.org> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20230621233809.1941811-1-andrii@kernel.org> References: <20230621233809.1941811-1-andrii@kernel.org> MIME-Version: 1.0 X-FB-Internal: Safe X-Proofpoint-GUID: _d4prvxSoLRONUhRxA2Fo7OcNi_5mIFM X-Proofpoint-ORIG-GUID: _d4prvxSoLRONUhRxA2Fo7OcNi_5mIFM X-Proofpoint-Virus-Version: vendor=baseguard engine=ICAP:2.0.254,Aquarius:18.0.957,Hydra:6.0.591,FMLib:17.11.176.26 definitions=2023-06-21_13,2023-06-16_01,2023-05-22_02 Precedence: bulk List-ID: Add a test validating that BPF 
token can be used to load privileged BPF program using privileged BPF helpers through delegated BPF token created by privileged process. Signed-off-by: Andrii Nakryiko --- .../testing/selftests/bpf/prog_tests/token.c | 66 +++++++++++++++++++ 1 file changed, 66 insertions(+) diff --git a/tools/testing/selftests/bpf/prog_tests/token.c b/tools/testing/selftests/bpf/prog_tests/token.c index 113cd4786a70..415d49eacd4f 100644 --- a/tools/testing/selftests/bpf/prog_tests/token.c +++ b/tools/testing/selftests/bpf/prog_tests/token.c @@ -4,6 +4,7 @@ #include #include #include "cap_helpers.h" +#include static int drop_priv_caps(__u64 *old_caps) { @@ -200,6 +201,69 @@ static void subtest_btf_token(void) ASSERT_OK(restore_priv_caps(old_caps), "restore_caps"); } +static void subtest_prog_token(void) +{ + LIBBPF_OPTS(bpf_token_create_opts, token_opts); + LIBBPF_OPTS(bpf_prog_load_opts, prog_opts); + int token_fd = 0, prog_fd = 0, err; + __u64 old_caps = 0; + struct bpf_insn insns[] = { + /* bpf_jiffies64() requires CAP_BPF */ + BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_jiffies64), + /* bpf_get_current_task() requires CAP_PERFMON */ + BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_get_current_task), + /* r0 = 0; exit; */ + BPF_MOV64_IMM(BPF_REG_0, 0), + BPF_EXIT_INSN(), + }; + size_t insn_cnt = ARRAY_SIZE(insns); + + /* create BPF token allowing BPF_PROG_LOAD command */ + token_opts.allowed_cmds = 1ULL << BPF_PROG_LOAD; + token_opts.allowed_prog_types = 1ULL << BPF_PROG_TYPE_XDP; + token_opts.allowed_attach_types = 1ULL << BPF_XDP; + err = bpf_token_create(-EBADF, TOKEN_PATH, &token_opts); + if (!ASSERT_OK(err, "token_create")) + return; + + /* drop privileges to test token_fd passing */ + if (!ASSERT_OK(drop_priv_caps(&old_caps), "drop_caps")) + goto cleanup; + + token_fd = bpf_obj_get(TOKEN_PATH); + if (!ASSERT_GT(token_fd, 0, "token_get")) + goto cleanup; + + /* validate we can successfully load BPF program with token; this + * being XDP program (CAP_NET_ADMIN) using bpf_jiffies64() (CAP_BPF) + * and bpf_get_current_task() (CAP_PERFMON) helpers validates we have + * BPF token wired properly in a bunch of places in the kernel + */ + prog_opts.token_fd = token_fd; + prog_opts.expected_attach_type = BPF_XDP; + prog_fd = bpf_prog_load(BPF_PROG_TYPE_XDP, "token_prog", "GPL", + insns, insn_cnt, &prog_opts); + if (!ASSERT_GT(prog_fd, 0, "prog_fd")) + goto cleanup; + close(prog_fd); + + /* now validate that we *cannot* load BPF program without token */ + prog_opts.token_fd = 0; + prog_fd = bpf_prog_load(BPF_PROG_TYPE_XDP, "token_prog", "GPL", + insns, insn_cnt, &prog_opts); + if (!ASSERT_EQ(prog_fd, -EPERM, "prog_fd_eperm")) + goto cleanup; + +cleanup: + if (prog_fd > 0) + close(prog_fd); + if (token_fd) + close(token_fd); + unlink(TOKEN_PATH); + if (old_caps) + ASSERT_OK(restore_priv_caps(old_caps), "restore_caps"); +} + void test_token(void) { if (test__start_subtest("token_create")) @@ -208,4 +272,6 @@ void test_token(void) subtest_map_token(); if (test__start_subtest("btf_token")) subtest_btf_token(); + if (test__start_subtest("prog_token")) + subtest_prog_token(); }
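Putting the series together: a privileged service that wants to delegate the whole map/BTF/program loading path to a trusted but unprivileged workload would create and pin a single narrowly-scoped token, roughly as in the sketch below (illustration only; the pin path and the specific command/type choices are examples, not mandated by the series):

	LIBBPF_OPTS(bpf_token_create_opts, opts,
		.allowed_cmds = (1ULL << BPF_MAP_CREATE) |
				(1ULL << BPF_BTF_LOAD) |
				(1ULL << BPF_PROG_LOAD),
		.allowed_map_types = 1ULL << BPF_MAP_TYPE_HASH,
		.allowed_prog_types = 1ULL << BPF_PROG_TYPE_XDP,
		.allowed_attach_types = 1ULL << BPF_XDP,
	);
	int err;

	err = bpf_token_create(-EBADF, "/sys/fs/bpf/token", &opts);

The workload then opens the pinned token with bpf_obj_get() and passes the resulting fd through the token_fd field of the corresponding opts struct, as the subtests above do; leaving token_fd at 0 keeps today's behavior.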