From patchwork Tue Sep 28 02:52:24 2021
X-Patchwork-Submitter: Hou Tao
X-Patchwork-Id: 12521611
X-Patchwork-Delegate: bpf@iogearbox.net
From: Hou Tao
To: Martin KaFai Lau, Alexei Starovoitov
CC: Daniel Borkmann, Andrii Nakryiko
Subject: [PATCH bpf-next 1/5] bpf: add dummy BPF STRUCT_OPS for test purpose
Date: Tue, 28 Sep 2021 10:52:24 +0800
Message-ID: <20210928025228.88673-2-houtao1@huawei.com>
In-Reply-To: <20210928025228.88673-1-houtao1@huawei.com>
References: <20210928025228.88673-1-houtao1@huawei.com>
X-Mailing-List: bpf@vger.kernel.org

Currently the testing of BPF STRUCT_OPS depends on the specific bpf
implementation of tcp_congestion_ops, which cannot cover all of the basic
functionality (e.g., return value handling), so introduce a dummy BPF
STRUCT_OPS for testing purposes.

Loading a bpf_dummy_ops implementation from userspace is prohibited;
its only purpose is to run a BPF_PROG_TYPE_STRUCT_OPS program through
bpf(BPF_PROG_TEST_RUN).

Signed-off-by: Hou Tao
---
 include/linux/bpf_dummy_ops.h     | 14 ++++++++++
 kernel/bpf/bpf_struct_ops_types.h |  2 ++
 net/bpf/Makefile                  |  3 +++
 net/bpf/bpf_dummy_struct_ops.c    | 44 +++++++++++++++++++++++++++++++
 4 files changed, 63 insertions(+)
 create mode 100644 include/linux/bpf_dummy_ops.h
 create mode 100644 net/bpf/bpf_dummy_struct_ops.c

diff --git a/include/linux/bpf_dummy_ops.h b/include/linux/bpf_dummy_ops.h
new file mode 100644
index 000000000000..a594ae830eba
--- /dev/null
+++ b/include/linux/bpf_dummy_ops.h
@@ -0,0 +1,14 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/*
+ * Copyright (C) 2021. Huawei Technologies Co., Ltd
+ */
+#ifndef _BPF_DUMMY_OPS_H
+#define _BPF_DUMMY_OPS_H
+
+typedef int (*bpf_dummy_ops_init_t)(void);
+
+struct bpf_dummy_ops {
+	bpf_dummy_ops_init_t init;
+};
+
+#endif
diff --git a/kernel/bpf/bpf_struct_ops_types.h b/kernel/bpf/bpf_struct_ops_types.h
index 066d83ea1c99..02c86cf9c207 100644
--- a/kernel/bpf/bpf_struct_ops_types.h
+++ b/kernel/bpf/bpf_struct_ops_types.h
@@ -2,6 +2,8 @@
 /* internal file - do not include directly */
 #ifdef CONFIG_BPF_JIT
+#include
+BPF_STRUCT_OPS_TYPE(bpf_dummy_ops)
 #ifdef CONFIG_INET
 #include
 BPF_STRUCT_OPS_TYPE(tcp_congestion_ops)
diff --git a/net/bpf/Makefile b/net/bpf/Makefile
index 1c0a98d8c28f..1ebe270bde23 100644
--- a/net/bpf/Makefile
+++ b/net/bpf/Makefile
@@ -1,2 +1,5 @@
 # SPDX-License-Identifier: GPL-2.0-only
 obj-$(CONFIG_BPF_SYSCALL)	:= test_run.o
+ifeq ($(CONFIG_BPF_JIT),y)
+obj-$(CONFIG_BPF_SYSCALL)	+= bpf_dummy_struct_ops.o
+endif
diff --git a/net/bpf/bpf_dummy_struct_ops.c b/net/bpf/bpf_dummy_struct_ops.c
new file mode 100644
index 000000000000..1249e4bb4ccb
--- /dev/null
+++ b/net/bpf/bpf_dummy_struct_ops.c
@@ -0,0 +1,44 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Copyright (C) 2021. Huawei Technologies Co., Ltd
+ */
+#include
+#include
+#include
+#include
+#include
+
+extern struct bpf_struct_ops bpf_bpf_dummy_ops;
+
+static int bpf_dummy_init(struct btf *btf)
+{
+	return 0;
+}
+
+static const struct bpf_verifier_ops bpf_dummy_verifier_ops = {
+};
+
+static int bpf_dummy_init_member(const struct btf_type *t,
+				 const struct btf_member *member,
+				 void *kdata, const void *udata)
+{
+	return -EOPNOTSUPP;
+}
+
+static int bpf_dummy_reg(void *kdata)
+{
+	return -EOPNOTSUPP;
+}
+
+static void bpf_dummy_unreg(void *kdata)
+{
+}
+
+struct bpf_struct_ops bpf_bpf_dummy_ops = {
+	.verifier_ops = &bpf_dummy_verifier_ops,
+	.init = bpf_dummy_init,
+	.init_member = bpf_dummy_init_member,
+	.reg = bpf_dummy_reg,
+	.unreg = bpf_dummy_unreg,
+	.name = "bpf_dummy_ops",
+};

From patchwork Tue Sep 28 02:52:25 2021
X-Patchwork-Submitter: Hou Tao
X-Patchwork-Id: 12521613
X-Patchwork-Delegate: bpf@iogearbox.net
From: Hou Tao
To: Martin KaFai Lau, Alexei Starovoitov
CC: Daniel Borkmann, Andrii Nakryiko
Subject: [PATCH bpf-next 2/5] bpf: factor out a helper to prepare trampoline for struct_ops prog
Date: Tue, 28 Sep 2021 10:52:25 +0800
Message-ID: <20210928025228.88673-3-houtao1@huawei.com>
In-Reply-To: <20210928025228.88673-1-houtao1@huawei.com>
References: <20210928025228.88673-1-houtao1@huawei.com>
X-Mailing-List: bpf@vger.kernel.org

Factor out a helper, bpf_prepare_st_ops_prog(), to prepare the
trampoline for a BPF_PROG_TYPE_STRUCT_OPS prog. It will be used by
the .test_run callback in a following patch.
Signed-off-by: Hou Tao
---
 include/linux/bpf.h         |  5 +++++
 kernel/bpf/bpf_struct_ops.c | 26 +++++++++++++++++---------
 2 files changed, 22 insertions(+), 9 deletions(-)

diff --git a/include/linux/bpf.h b/include/linux/bpf.h
index 155dfcfb8923..002bbb2c8bc7 100644
--- a/include/linux/bpf.h
+++ b/include/linux/bpf.h
@@ -2224,4 +2224,9 @@ int bpf_bprintf_prepare(char *fmt, u32 fmt_size, const u64 *raw_args,
 			u32 **bin_buf, u32 num_args);
 void bpf_bprintf_cleanup(void);
 
+int bpf_prepare_st_ops_prog(struct bpf_tramp_progs *tprogs,
+			    struct bpf_prog *prog,
+			    const struct btf_func_model *model,
+			    void *image, void *image_end);
+
 #endif /* _LINUX_BPF_H */
diff --git a/kernel/bpf/bpf_struct_ops.c b/kernel/bpf/bpf_struct_ops.c
index 9abcc33f02cf..ec3c25174923 100644
--- a/kernel/bpf/bpf_struct_ops.c
+++ b/kernel/bpf/bpf_struct_ops.c
@@ -312,6 +312,20 @@ static int check_zero_holes(const struct btf_type *t, void *data)
 	return 0;
 }
 
+int bpf_prepare_st_ops_prog(struct bpf_tramp_progs *tprogs,
+			    struct bpf_prog *prog,
+			    const struct btf_func_model *model,
+			    void *image, void *image_end)
+{
+	u32 flags;
+
+	tprogs[BPF_TRAMP_FENTRY].progs[0] = prog;
+	tprogs[BPF_TRAMP_FENTRY].nr_progs = 1;
+	flags = model->ret_size > 0 ? BPF_TRAMP_F_RET_FENTRY_RET : 0;
+	return arch_prepare_bpf_trampoline(NULL, image, image_end,
+					   model, flags, tprogs, NULL);
+}
+
 static int bpf_struct_ops_map_update_elem(struct bpf_map *map, void *key,
 					  void *value, u64 flags)
 {
@@ -368,7 +382,6 @@ static int bpf_struct_ops_map_update_elem(struct bpf_map *map, void *key,
 		const struct btf_type *mtype, *ptype;
 		struct bpf_prog *prog;
 		u32 moff;
-		u32 flags;
 
 		moff = btf_member_bit_offset(t, member) / 8;
 		ptype = btf_type_resolve_ptr(btf_vmlinux, member->type, NULL);
@@ -430,14 +443,9 @@ static int bpf_struct_ops_map_update_elem(struct bpf_map *map, void *key,
 			goto reset_unlock;
 		}
 
-		tprogs[BPF_TRAMP_FENTRY].progs[0] = prog;
-		tprogs[BPF_TRAMP_FENTRY].nr_progs = 1;
-		flags = st_ops->func_models[i].ret_size > 0 ?
-			BPF_TRAMP_F_RET_FENTRY_RET : 0;
-		err = arch_prepare_bpf_trampoline(NULL, image,
-						  st_map->image + PAGE_SIZE,
-						  &st_ops->func_models[i],
-						  flags, tprogs, NULL);
+		err = bpf_prepare_st_ops_prog(tprogs, prog,
+					      &st_ops->func_models[i],
+					      image, st_map->image + PAGE_SIZE);
 		if (err < 0)
 			goto reset_unlock;

From patchwork Tue Sep 28 02:52:26 2021
X-Patchwork-Submitter: Hou Tao
X-Patchwork-Id: 12521615
X-Patchwork-Delegate: bpf@iogearbox.net
From: Hou Tao
To: Martin KaFai Lau, Alexei Starovoitov
CC: Daniel Borkmann, Andrii Nakryiko
Subject: [PATCH bpf-next 3/5] bpf: do .test_run in dummy BPF STRUCT_OPS
Date: Tue, 28 Sep 2021 10:52:26 +0800
Message-ID: <20210928025228.88673-4-houtao1@huawei.com>
In-Reply-To: <20210928025228.88673-1-houtao1@huawei.com>
References: <20210928025228.88673-1-houtao1@huawei.com>
X-Mailing-List: bpf@vger.kernel.org

Currently only a program for bpf_dummy_ops::init() is supported. The
following two cases are exercised in bpf_dummy_st_ops_test_run():

(1) test and check the value returned through the state arg in
    init(state)
    The content of the state is copied from data_in before init() is
    called and copied back to data_out afterwards, so the test program
    can use data_in to pass the input state and data_out to get the
    output state.

(2) test and check the return value of init(NULL)
    data_size_in is set to 0, so the state will be NULL and there will
    be no copy-in or copy-out.
Signed-off-by: Hou Tao
---
 include/linux/bpf_dummy_ops.h  |  13 ++-
 net/bpf/bpf_dummy_struct_ops.c | 176 +++++++++++++++++++++++++++++++++
 2 files changed, 188 insertions(+), 1 deletion(-)

diff --git a/include/linux/bpf_dummy_ops.h b/include/linux/bpf_dummy_ops.h
index a594ae830eba..5049484e6693 100644
--- a/include/linux/bpf_dummy_ops.h
+++ b/include/linux/bpf_dummy_ops.h
@@ -5,10 +5,21 @@
 #ifndef _BPF_DUMMY_OPS_H
 #define _BPF_DUMMY_OPS_H
 
-typedef int (*bpf_dummy_ops_init_t)(void);
+#include
+#include
+
+struct bpf_dummy_ops_state {
+	int val;
+};
+
+typedef int (*bpf_dummy_ops_init_t)(struct bpf_dummy_ops_state *cb);
 
 struct bpf_dummy_ops {
 	bpf_dummy_ops_init_t init;
 };
 
+extern int bpf_dummy_st_ops_test_run(struct bpf_prog *prog,
+				     const union bpf_attr *kattr,
+				     union bpf_attr __user *uattr);
+
 #endif
diff --git a/net/bpf/bpf_dummy_struct_ops.c b/net/bpf/bpf_dummy_struct_ops.c
index 1249e4bb4ccb..da77736cd093 100644
--- a/net/bpf/bpf_dummy_struct_ops.c
+++ b/net/bpf/bpf_dummy_struct_ops.c
@@ -10,12 +10,188 @@
 
 extern struct bpf_struct_ops bpf_bpf_dummy_ops;
 
+static const struct btf_type *dummy_ops_state;
+
+static struct bpf_dummy_ops_state *
+init_dummy_ops_state(const union bpf_attr *kattr)
+{
+	__u32 size_in;
+	struct bpf_dummy_ops_state *state;
+	void __user *data_in;
+
+	size_in = kattr->test.data_size_in;
+	if (!size_in)
+		return NULL;
+
+	if (size_in != sizeof(*state))
+		return ERR_PTR(-EINVAL);
+
+	state = kzalloc(sizeof(*state), GFP_KERNEL);
+	if (!state)
+		return ERR_PTR(-ENOMEM);
+
+	data_in = u64_to_user_ptr(kattr->test.data_in);
+	if (copy_from_user(state, data_in, size_in)) {
+		kfree(state);
+		return ERR_PTR(-EFAULT);
+	}
+
+	return state;
+}
+
+static int copy_dummy_ops_state(struct bpf_dummy_ops_state *state,
+				const union bpf_attr *kattr,
+				union bpf_attr __user *uattr)
+{
+	int err = 0;
+	void __user *data_out;
+
+	if (!state)
+		return 0;
+
+	data_out = u64_to_user_ptr(kattr->test.data_out);
+	if (copy_to_user(data_out, state, sizeof(*state))) {
+		err = -EFAULT;
+		goto out;
+	}
+	if (put_user(sizeof(*state), &uattr->test.data_size_out)) {
+		err = -EFAULT;
+		goto out;
+	}
+out:
+	return err;
+}
+
+static inline void exit_dummy_ops_state(struct bpf_dummy_ops_state *state)
+{
+	kfree(state);
+}
+
+int bpf_dummy_st_ops_test_run(struct bpf_prog *prog,
+			      const union bpf_attr *kattr,
+			      union bpf_attr __user *uattr)
+{
+	const struct bpf_struct_ops *st_ops = &bpf_bpf_dummy_ops;
+	struct bpf_dummy_ops_state *state = NULL;
+	struct bpf_tramp_progs *tprogs = NULL;
+	void *image = NULL;
+	int err;
+	int prog_ret;
+
+	/* Now only support to call init(...) */
+	if (prog->expected_attach_type != 0) {
+		err = -EOPNOTSUPP;
+		goto out;
+	}
+
+	/* state will be NULL when data_size_in == 0 */
+	state = init_dummy_ops_state(kattr);
+	if (IS_ERR(state)) {
+		err = PTR_ERR(state);
+		state = NULL;
+		goto out;
+	}
+
+	tprogs = kcalloc(BPF_TRAMP_MAX, sizeof(*tprogs), GFP_KERNEL);
+	if (!tprogs) {
+		err = -ENOMEM;
+		goto out;
+	}
+
+	image = bpf_jit_alloc_exec(PAGE_SIZE);
+	if (!image) {
+		err = -ENOMEM;
+		goto out;
+	}
+	set_vm_flush_reset_perms(image);
+
+	err = bpf_prepare_st_ops_prog(tprogs, prog, &st_ops->func_models[0],
+				      image, image + PAGE_SIZE);
+	if (err < 0)
+		goto out;
+
+	set_memory_ro((long)image, 1);
+	set_memory_x((long)image, 1);
+	prog_ret = ((bpf_dummy_ops_init_t)image)(state);
+
+	err = copy_dummy_ops_state(state, kattr, uattr);
+	if (err)
+		goto out;
+	if (put_user(prog_ret, &uattr->test.retval))
+		err = -EFAULT;
+out:
+	exit_dummy_ops_state(state);
+	bpf_jit_free_exec(image);
+	kfree(tprogs);
+	return err;
+}
+
 static int bpf_dummy_init(struct btf *btf)
 {
+	s32 type_id;
+
+	type_id = btf_find_by_name_kind(btf, "bpf_dummy_ops_state",
+					BTF_KIND_STRUCT);
+	if (type_id < 0)
+		return -EINVAL;
+
+	dummy_ops_state = btf_type_by_id(btf, type_id);
+
 	return 0;
 }
 
+static bool bpf_dummy_ops_is_valid_access(int off, int size,
+					  enum bpf_access_type type,
+					  const struct bpf_prog *prog,
+					  struct bpf_insn_access_aux *info)
+{
+	/* init(state) only has one argument */
+	if (off || type != BPF_READ)
+		return false;
+
+	return btf_ctx_access(off, size, type, prog, info);
+}
+
+static int bpf_dummy_ops_btf_struct_access(struct bpf_verifier_log *log,
+					   const struct btf *btf,
+					   const struct btf_type *t, int off,
+					   int size, enum bpf_access_type atype,
+					   u32 *next_btf_id)
+{
+	size_t end;
+
+	if (atype == BPF_READ)
+		return btf_struct_access(log, btf, t, off, size, atype,
+					 next_btf_id);
+
+	if (t != dummy_ops_state) {
+		bpf_log(log, "only read is supported\n");
+		return -EACCES;
+	}
+
+	switch (off) {
+	case offsetof(struct bpf_dummy_ops_state, val):
+		end = offsetofend(struct bpf_dummy_ops_state, val);
+		break;
+	default:
+		bpf_log(log, "no write support to bpf_dummy_ops_state at off %d\n",
+			off);
+		return -EACCES;
+	}
+
+	if (off + size > end) {
+		bpf_log(log,
+			"write access at off %d with size %d beyond the member of bpf_dummy_ops_state ended at %zu\n",
+			off, size, end);
+		return -EACCES;
+	}
+
+	return NOT_INIT;
+}
+
 static const struct bpf_verifier_ops bpf_dummy_verifier_ops = {
+	.is_valid_access = bpf_dummy_ops_is_valid_access,
+	.btf_struct_access = bpf_dummy_ops_btf_struct_access,
 };
 
 static int bpf_dummy_init_member(const struct btf_type *t,

From patchwork Tue Sep 28 02:52:27 2021
X-Patchwork-Submitter: Hou Tao
X-Patchwork-Id: 12521619
X-Patchwork-Delegate: bpf@iogearbox.net
From: Hou Tao
To: Martin KaFai Lau, Alexei Starovoitov
CC: Daniel Borkmann, Andrii Nakryiko
Subject: [PATCH bpf-next 4/5] bpf: hook .test_run for struct_ops program
Date: Tue, 28 Sep 2021 10:52:27 +0800
Message-ID: <20210928025228.88673-5-houtao1@huawei.com>
In-Reply-To: <20210928025228.88673-1-houtao1@huawei.com>
References: <20210928025228.88673-1-houtao1@huawei.com>
X-Mailing-List: bpf@vger.kernel.org

bpf_struct_ops_test_run() will be used to run a struct_ops program
from bpf_dummy_ops; for now its main purpose is to test the handling
of the return value.
Signed-off-by: Hou Tao
---
 kernel/bpf/bpf_struct_ops.c | 17 +++++++++++++++++
 1 file changed, 17 insertions(+)

diff --git a/kernel/bpf/bpf_struct_ops.c b/kernel/bpf/bpf_struct_ops.c
index ec3c25174923..3cedd2f489db 100644
--- a/kernel/bpf/bpf_struct_ops.c
+++ b/kernel/bpf/bpf_struct_ops.c
@@ -11,6 +11,9 @@
 #include
 #include
 
+static int bpf_struct_ops_test_run(struct bpf_prog *prog,
+				   const union bpf_attr *kattr,
+				   union bpf_attr __user *uattr);
 enum bpf_struct_ops_state {
 	BPF_STRUCT_OPS_STATE_INIT,
 	BPF_STRUCT_OPS_STATE_INUSE,
@@ -93,6 +96,7 @@ const struct bpf_verifier_ops bpf_struct_ops_verifier_ops = {
 };
 
 const struct bpf_prog_ops bpf_struct_ops_prog_ops = {
+	.test_run = bpf_struct_ops_test_run,
 };
 
 static const struct btf_type *module_type;
@@ -666,3 +670,16 @@ void bpf_struct_ops_put(const void *kdata)
 		call_rcu(&st_map->rcu, bpf_struct_ops_put_rcu);
 	}
 }
+
+static int bpf_struct_ops_test_run(struct bpf_prog *prog,
+				   const union bpf_attr *kattr,
+				   union bpf_attr __user *uattr)
+{
+	const struct bpf_struct_ops *st_ops;
+
+	st_ops = bpf_struct_ops_find(prog->aux->attach_btf_id);
+	if (st_ops != &bpf_bpf_dummy_ops)
+		return -EOPNOTSUPP;
+
+	return bpf_dummy_st_ops_test_run(prog, kattr, uattr);
+}

From patchwork Tue Sep 28 02:52:28 2021
X-Patchwork-Submitter: Hou Tao
X-Patchwork-Id: 12521617
X-Patchwork-Delegate: bpf@iogearbox.net
From: Hou Tao
To: Martin KaFai Lau, Alexei Starovoitov
CC: Daniel Borkmann, Andrii Nakryiko
Subject: [PATCH bpf-next 5/5] selftests/bpf: test return value handling for struct_ops prog
Date: Tue, 28 Sep 2021 10:52:28 +0800
Message-ID: <20210928025228.88673-6-houtao1@huawei.com>
In-Reply-To: <20210928025228.88673-1-houtao1@huawei.com>
References: <20210928025228.88673-1-houtao1@huawei.com>
X-Mailing-List: bpf@vger.kernel.org

Run a BPF_PROG_TYPE_STRUCT_OPS prog for dummy_st_ops::init() through
bpf_prog_test_run(). Three test cases are added:

(1) attaching dummy_st_ops should fail
(2) the return value of bpf_dummy_ops::init() is as expected
(3) the pointer argument of bpf_dummy_ops::init() works as expected

Signed-off-by: Hou Tao
---
 .../selftests/bpf/prog_tests/dummy_st_ops.c | 81 +++++++++++++++++++
 .../selftests/bpf/progs/dummy_st_ops.c      | 33 ++++++++
 2 files changed, 114 insertions(+)
 create mode 100644 tools/testing/selftests/bpf/prog_tests/dummy_st_ops.c
 create mode 100644 tools/testing/selftests/bpf/progs/dummy_st_ops.c

diff --git a/tools/testing/selftests/bpf/prog_tests/dummy_st_ops.c b/tools/testing/selftests/bpf/prog_tests/dummy_st_ops.c
new file mode 100644
index 000000000000..4b1b52b847e6
--- /dev/null
+++ b/tools/testing/selftests/bpf/prog_tests/dummy_st_ops.c
@@ -0,0 +1,81 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright (C) 2021. Huawei Technologies Co., Ltd */
+#include
+#include "dummy_st_ops.skel.h"
+
+/* Need to keep consistent with definitions in include/linux/bpf_dummy_ops.h */
+struct bpf_dummy_ops_state {
+	int val;
+};
+
+static void test_dummy_st_ops_attach(void)
+{
+	struct dummy_st_ops *skel;
+	struct bpf_link *link;
+
+	skel = dummy_st_ops__open_and_load();
+	if (!ASSERT_OK_PTR(skel, "dummy_st_ops_load"))
+		goto out;
+
+	link = bpf_map__attach_struct_ops(skel->maps.dummy_1);
+	if (!ASSERT_EQ(libbpf_get_error(link), -EOPNOTSUPP,
+		       "dummy_st_ops_attach"))
+		goto out;
+out:
+	dummy_st_ops__destroy(skel);
+}
+
+static void test_dummy_init_ret_value(void)
+{
+	struct dummy_st_ops *skel;
+	int err, fd;
+	__u32 duration = 0, retval = 0;
+
+	skel = dummy_st_ops__open_and_load();
+	if (!ASSERT_OK_PTR(skel, "dummy_st_ops_load"))
+		goto out;
+
+	fd = bpf_program__fd(skel->progs.init_1);
+	err = bpf_prog_test_run(fd, 1, NULL, 0,
+				NULL, NULL, &retval, &duration);
+	ASSERT_OK(err, "test_run");
+	ASSERT_EQ(retval, 0xf2f3f4f5, "test_ret");
+out:
+	dummy_st_ops__destroy(skel);
+}
+
+static void test_dummy_init_ptr_arg(void)
+{
+	struct dummy_st_ops *skel;
+	int err, fd;
+	__u32 duration = 0, retval = 0;
+	struct bpf_dummy_ops_state in_state, out_state;
+	__u32 state_size;
+
+	skel = dummy_st_ops__open_and_load();
+	if (!ASSERT_OK_PTR(skel, "dummy_st_ops_load"))
+		goto out;
+
+	fd = bpf_program__fd(skel->progs.init_1);
+	memset(&in_state, 0, sizeof(in_state));
+	in_state.val = 0xbeef;
+	memset(&out_state, 0, sizeof(out_state));
+	err = bpf_prog_test_run(fd, 1, &in_state, sizeof(in_state),
+				&out_state, &state_size, &retval, &duration);
+	ASSERT_OK(err, "test_run");
+	ASSERT_EQ(state_size, sizeof(out_state), "test_data_out");
+	ASSERT_EQ(out_state.val, 0x5a, "test_ptr_ret");
+	ASSERT_EQ(retval, in_state.val, "test_ret");
+out:
+	dummy_st_ops__destroy(skel);
+}
+
+void test_dummy_st_ops(void)
+{
+	if (test__start_subtest("dummy_st_ops_attach"))
+		test_dummy_st_ops_attach();
+	if (test__start_subtest("dummy_init_ret_value"))
+		test_dummy_init_ret_value();
+	if (test__start_subtest("dummy_init_ptr_arg"))
+		test_dummy_init_ptr_arg();
+}
diff --git a/tools/testing/selftests/bpf/progs/dummy_st_ops.c b/tools/testing/selftests/bpf/progs/dummy_st_ops.c
new file mode 100644
index 000000000000..133c328f082a
--- /dev/null
+++ b/tools/testing/selftests/bpf/progs/dummy_st_ops.c
@@ -0,0 +1,33 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright (C) 2021. Huawei Technologies Co., Ltd */
+#include
+#include
+#include
+
+struct bpf_dummy_ops_state {
+	int val;
+} __attribute__((preserve_access_index));
+
+struct bpf_dummy_ops {
+	int (*init)(struct bpf_dummy_ops_state *state);
+};
+
+char _license[] SEC("license") = "GPL";
+
+SEC("struct_ops/init_1")
+int BPF_PROG(init_1, struct bpf_dummy_ops_state *state)
+{
+	int ret;
+
+	if (!state)
+		return 0xf2f3f4f5;
+
+	ret = state->val;
+	state->val = 0x5a;
+	return ret;
+}
+
+SEC(".struct_ops")
+struct bpf_dummy_ops dummy_1 = {
+	.init = (void *)init_1,
+};