From patchwork Fri Oct 13 22:42:59 2023
X-Patchwork-Submitter: Kui-Feng Lee
X-Patchwork-Id: 13421807
X-Patchwork-Delegate: bpf@iogearbox.net
From: thinker.li@gmail.com
To: bpf@vger.kernel.org, ast@kernel.org, martin.lau@linux.dev, song@kernel.org,
    kernel-team@meta.com, andrii@kernel.org, drosen@google.com
Cc: sinquersw@gmail.com, kuifeng@meta.com, Kui-Feng Lee
Subject: [PATCH bpf-next v4 4/9] bpf: validate value_type
Date: Fri, 13 Oct 2023 15:42:59 -0700
Message-Id: <20231013224304.187218-5-thinker.li@gmail.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <20231013224304.187218-1-thinker.li@gmail.com>
References: <20231013224304.187218-1-thinker.li@gmail.com>
X-Mailing-List: bpf@vger.kernel.org

From: Kui-Feng Lee

A value_type should consist of three components: refcnt, state, and data.
refcnt and state have been moved into struct bpf_struct_ops_common_value to
make it easier to check the value type: a valid value type is now a struct
whose first member is struct bpf_struct_ops_common_value and whose second
member is the struct_ops type itself. (An illustrative sketch of this layout
follows the diff below.)

Signed-off-by: Kui-Feng Lee
---
 kernel/bpf/bpf_struct_ops.c | 88 +++++++++++++++++++++++++++----------
 1 file changed, 66 insertions(+), 22 deletions(-)

diff --git a/kernel/bpf/bpf_struct_ops.c b/kernel/bpf/bpf_struct_ops.c
index b561245fe235..69703584fa4a 100644
--- a/kernel/bpf/bpf_struct_ops.c
+++ b/kernel/bpf/bpf_struct_ops.c
@@ -20,9 +20,11 @@ enum bpf_struct_ops_state {
 	BPF_STRUCT_OPS_STATE_READY,
 };
 
-#define BPF_STRUCT_OPS_COMMON_VALUE			\
-	refcount_t refcnt;				\
-	enum bpf_struct_ops_state state
+struct bpf_struct_ops_common_value {
+	refcount_t refcnt;
+	enum bpf_struct_ops_state state;
+};
+#define BPF_STRUCT_OPS_COMMON_VALUE struct bpf_struct_ops_common_value common
 
 struct bpf_struct_ops_value {
 	BPF_STRUCT_OPS_COMMON_VALUE;
@@ -109,6 +111,38 @@ const struct bpf_prog_ops bpf_struct_ops_prog_ops = {
 };
 
 static const struct btf_type *module_type;
+static const struct btf_type *common_value_type;
+
+static bool is_valid_value_type(struct btf *btf, s32 value_id,
+				const struct btf_type *type,
+				const char *value_name)
+{
+	const struct btf_member *member;
+	const struct btf_type *vt, *mt;
+
+	vt = btf_type_by_id(btf, value_id);
+	if (btf_vlen(vt) != 2) {
+		pr_warn("The number of %s's members should be 2, but we get %d\n",
+			value_name, btf_vlen(vt));
+		return false;
+	}
+	member = btf_type_member(vt);
+	mt = btf_type_by_id(btf, member->type);
+	if (mt != common_value_type) {
+		pr_warn("The first member of %s should be bpf_struct_ops_common_value\n",
+			value_name);
+		return false;
+	}
+	member++;
+	mt = btf_type_by_id(btf, member->type);
+	if (mt != type) {
+		pr_warn("The second member of %s should be %s\n",
+			value_name, btf_name_by_offset(btf, type->name_off));
+		return false;
+	}
+
+	return true;
+}
 
 static void bpf_struct_ops_init_one(struct bpf_struct_ops *st_ops,
 				    struct btf *btf,
@@ -130,14 +164,6 @@ static void bpf_struct_ops_init_one(struct bpf_struct_ops *st_ops,
 	}
 	sprintf(value_name, "%s%s", VALUE_PREFIX, st_ops->name);
 
-	value_id = btf_find_by_name_kind(btf, value_name,
-					 BTF_KIND_STRUCT);
-	if (value_id < 0) {
-		pr_warn("Cannot find struct %s in btf_vmlinux\n",
-			value_name);
-		return;
-	}
-
 	type_id = btf_find_by_name_kind(btf, st_ops->name,
 					BTF_KIND_STRUCT);
 	if (type_id < 0) {
@@ -152,6 +178,16 @@ static void bpf_struct_ops_init_one(struct bpf_struct_ops *st_ops,
 		return;
 	}
 
+	value_id = btf_find_by_name_kind(btf, value_name,
+					 BTF_KIND_STRUCT);
+	if (value_id < 0) {
+		pr_warn("Cannot find struct %s in btf_vmlinux\n",
+			value_name);
+		return;
+	}
+	if (!is_valid_value_type(btf, value_id, t, value_name))
+		return;
+
 	for_each_member(i, t, member) {
 		const struct btf_type *func_proto;
 
@@ -201,7 +237,7 @@ static void bpf_struct_ops_init_one(struct bpf_struct_ops *st_ops,
 void bpf_struct_ops_init(struct btf *btf, struct bpf_verifier_log *log)
 {
 	struct bpf_struct_ops *st_ops;
-	s32 module_id;
+	s32 module_id, common_value_id;
 	u32 i;
 
 	/* Ensure BTF type is emitted for "struct bpf_struct_ops_##_name" */
@@ -215,6 +251,14 @@ void bpf_struct_ops_init(struct btf *btf, struct bpf_verifier_log *log)
 		return;
 	}
 	module_type = btf_type_by_id(btf, module_id);
+	common_value_id = btf_find_by_name_kind(btf,
+						"bpf_struct_ops_common_value",
+						BTF_KIND_STRUCT);
+	if (common_value_id < 0) {
+		pr_warn("Cannot find struct common_value in btf_vmlinux\n");
+		return;
+	}
+	common_value_type = btf_type_by_id(btf, common_value_id);
 
 	for (i = 0; i < ARRAY_SIZE(bpf_struct_ops); i++) {
 		st_ops = bpf_struct_ops[i];
@@ -278,7 +322,7 @@ int bpf_struct_ops_map_sys_lookup_elem(struct bpf_map *map, void *key,
 
 	kvalue = &st_map->kvalue;
 	/* Pair with smp_store_release() during map_update */
-	state = smp_load_acquire(&kvalue->state);
+	state = smp_load_acquire(&kvalue->common.state);
 	if (state == BPF_STRUCT_OPS_STATE_INIT) {
 		memset(value, 0, map->value_size);
 		return 0;
@@ -289,7 +333,7 @@ int bpf_struct_ops_map_sys_lookup_elem(struct bpf_map *map, void *key,
 	 */
 	uvalue = value;
 	memcpy(uvalue, st_map->uvalue, map->value_size);
-	uvalue->state = state;
+	uvalue->common.state = state;
 
 	/* This value offers the user space a general estimate of how
 	 * many sockets are still utilizing this struct_ops for TCP
@@ -297,7 +341,7 @@ int bpf_struct_ops_map_sys_lookup_elem(struct bpf_map *map, void *key,
 	 * should sufficiently meet our present goals.
 	 */
 	refcnt = atomic64_read(&map->refcnt) - atomic64_read(&map->usercnt);
-	refcount_set(&uvalue->refcnt, max_t(s64, refcnt, 0));
+	refcount_set(&uvalue->common.refcnt, max_t(s64, refcnt, 0));
 
 	return 0;
 }
@@ -408,7 +452,7 @@ static long bpf_struct_ops_map_update_elem(struct bpf_map *map, void *key,
 	if (err)
 		return err;
 
-	if (uvalue->state || refcount_read(&uvalue->refcnt))
+	if (uvalue->common.state || refcount_read(&uvalue->common.refcnt))
 		return -EINVAL;
 
 	tlinks = kcalloc(BPF_TRAMP_MAX, sizeof(*tlinks), GFP_KERNEL);
@@ -420,7 +464,7 @@ static long bpf_struct_ops_map_update_elem(struct bpf_map *map, void *key,
 
 	mutex_lock(&st_map->lock);
 
-	if (kvalue->state != BPF_STRUCT_OPS_STATE_INIT) {
+	if (kvalue->common.state != BPF_STRUCT_OPS_STATE_INIT) {
 		err = -EBUSY;
 		goto unlock;
 	}
@@ -533,7 +577,7 @@ static long bpf_struct_ops_map_update_elem(struct bpf_map *map, void *key,
 		 *
 		 * Pair with smp_load_acquire() during lookup_elem().
 		 */
-		smp_store_release(&kvalue->state, BPF_STRUCT_OPS_STATE_READY);
+		smp_store_release(&kvalue->common.state, BPF_STRUCT_OPS_STATE_READY);
 		goto unlock;
 	}
 
@@ -551,7 +595,7 @@ static long bpf_struct_ops_map_update_elem(struct bpf_map *map, void *key,
 		 * It ensures the above udata updates (e.g. prog->aux->id)
 		 * can be seen once BPF_STRUCT_OPS_STATE_INUSE is set.
 		 */
-		smp_store_release(&kvalue->state, BPF_STRUCT_OPS_STATE_INUSE);
+		smp_store_release(&kvalue->common.state, BPF_STRUCT_OPS_STATE_INUSE);
 		goto unlock;
 	}
 
@@ -582,7 +626,7 @@ static long bpf_struct_ops_map_delete_elem(struct bpf_map *map, void *key)
 	if (st_map->map.map_flags & BPF_F_LINK)
 		return -EOPNOTSUPP;
 
-	prev_state = cmpxchg(&st_map->kvalue.state,
+	prev_state = cmpxchg(&st_map->kvalue.common.state,
 			     BPF_STRUCT_OPS_STATE_INUSE,
 			     BPF_STRUCT_OPS_STATE_TOBEFREE);
 	switch (prev_state) {
@@ -676,7 +720,7 @@ static struct bpf_map *bpf_struct_ops_map_alloc(union bpf_attr *attr)
 	struct bpf_map *map;
 	int ret;
 
-	st_ops = bpf_struct_ops_find_value(attr->btf_vmlinux_value_type_id, btf_vmlinux);
+	st_ops = bpf_struct_ops_find_value(btf_vmlinux, attr->btf_vmlinux_value_type_id);
 	if (!st_ops)
 		return ERR_PTR(-ENOTSUPP);
 
@@ -805,7 +849,7 @@ static bool bpf_struct_ops_valid_to_reg(struct bpf_map *map)
 	return map->map_type == BPF_MAP_TYPE_STRUCT_OPS &&
 		map->map_flags & BPF_F_LINK &&
 		/* Pair with smp_store_release() during map_update */
-		smp_load_acquire(&st_map->kvalue.state) == BPF_STRUCT_OPS_STATE_READY;
+		smp_load_acquire(&st_map->kvalue.common.state) == BPF_STRUCT_OPS_STATE_READY;
 }
 
 static void bpf_struct_ops_map_link_dealloc(struct bpf_link *link)
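
---

For readers unfamiliar with the struct_ops value-type convention this patch
enforces, here is a minimal illustrative sketch. It is not part of the patch:
the tcp_congestion_ops example, the generated value-type name, and the
comments are my own additions, assuming VALUE_PREFIX expands to
"bpf_struct_ops_".

/* Sketch of the layout is_valid_value_type() checks for: every value
 * type must be a struct with exactly two members, the common header
 * followed by the struct_ops type itself.
 */
struct bpf_struct_ops_common_value {
	refcount_t refcnt;			/* map refcount reported to user space */
	enum bpf_struct_ops_state state;	/* INIT, INUSE, TOBEFREE or READY */
};

/* Hypothetical example instance for tcp_congestion_ops: */
struct bpf_struct_ops_tcp_congestion_ops {
	struct bpf_struct_ops_common_value common;	/* member 1: must match common_value_type */
	struct tcp_congestion_ops data;			/* member 2: must match the struct_ops type */
};

A value type with any other member count or ordering would now be rejected by
bpf_struct_ops_init_one() with one of the pr_warn() messages added above.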