From patchwork Mon Oct 23 22:00:27 2023
From: Dave Marchevsky <davemarchevsky@fb.com>
To: bpf@vger.kernel.org
Cc: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko, Martin KaFai Lau, Kernel Team
Subject: [PATCH v1 bpf-next 1/4] bpf: Fix btf_get_field_type to fail for multiple bpf_refcount fields
Date: Mon, 23 Oct 2023 15:00:27 -0700
Message-ID: <20231023220030.2556229-2-davemarchevsky@fb.com>
In-Reply-To: <20231023220030.2556229-1-davemarchevsky@fb.com>
References: <20231023220030.2556229-1-davemarchevsky@fb.com>

If a struct has a bpf_refcount field, the refcount controls the
lifetime of the entire struct. Currently there is no use case or
support for multiple bpf_refcount fields in a struct. bpf_spin_lock and
bpf_timer fields don't support multiples either, but they fail with
better error behavior.
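As an illustration, BTF parsing should reject a type like the following
(a hypothetical struct in the style of the refcounted rbtree selftests;
the field names are made up for this example):

  struct node_data {
          long key;
          struct bpf_rb_node node;
          struct bpf_refcount ref;
          struct bpf_refcount ref2; /* second refcount field: invalid */
  };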
Parsing BTF with a struct containing multiple {bpf_spin_lock,
bpf_timer} fields fails in btf_get_field_type, while multiple
bpf_refcount fields don't fail BTF parsing at all; instead they trigger
a WARN_ON_ONCE in btf_parse_fields, with the verifier using the last
bpf_refcount field to actually do refcounting. This patch changes
bpf_refcount handling in btf_get_field_type to use the same error logic
as bpf_spin_lock and bpf_timer. Since that logic is now used three
times and is boilerplate, it is factored out into the
field_mask_test_name_check_seen helper macro.

Signed-off-by: Dave Marchevsky
Fixes: d54730b50bae ("bpf: Introduce opaque bpf_refcount struct and add btf_record plumbing")
Acked-by: Yonghong Song
Acked-by: Andrii Nakryiko
---
 kernel/bpf/btf.c | 37 ++++++++++++++++---------------------
 1 file changed, 16 insertions(+), 21 deletions(-)

diff --git a/kernel/bpf/btf.c b/kernel/bpf/btf.c
index 15d71d2986d3..975ef8e73393 100644
--- a/kernel/bpf/btf.c
+++ b/kernel/bpf/btf.c
@@ -3374,8 +3374,17 @@ btf_find_graph_root(const struct btf *btf, const struct btf_type *pt,
 	return BTF_FIELD_FOUND;
 }
 
-#define field_mask_test_name(field_type, field_type_str) \
-	if (field_mask & field_type && !strcmp(name, field_type_str)) { \
+#define field_mask_test_name(field_type, field_type_str) \
+	if (field_mask & field_type && !strcmp(name, field_type_str)) { \
+		type = field_type; \
+		goto end; \
+	}
+
+#define field_mask_test_name_check_seen(field_type, field_type_str) \
+	if (field_mask & field_type && !strcmp(name, field_type_str)) { \
+		if (*seen_mask & field_type) \
+			return -E2BIG; \
+		*seen_mask |= field_type; \
 		type = field_type; \
 		goto end; \
 	}
@@ -3385,29 +3394,14 @@ static int btf_get_field_type(const char *name, u32 field_mask, u32 *seen_mask,
 {
 	int type = 0;
 
-	if (field_mask & BPF_SPIN_LOCK) {
-		if (!strcmp(name, "bpf_spin_lock")) {
-			if (*seen_mask & BPF_SPIN_LOCK)
-				return -E2BIG;
-			*seen_mask |= BPF_SPIN_LOCK;
-			type = BPF_SPIN_LOCK;
-			goto end;
-		}
-	}
-	if (field_mask & BPF_TIMER) {
-		if (!strcmp(name, "bpf_timer")) {
-			if (*seen_mask & BPF_TIMER)
-				return -E2BIG;
-			*seen_mask |= BPF_TIMER;
-			type = BPF_TIMER;
-			goto end;
-		}
-	}
+	field_mask_test_name_check_seen(BPF_SPIN_LOCK, "bpf_spin_lock");
+	field_mask_test_name_check_seen(BPF_TIMER, "bpf_timer");
+	field_mask_test_name_check_seen(BPF_REFCOUNT, "bpf_refcount");
+
 	field_mask_test_name(BPF_LIST_HEAD, "bpf_list_head");
 	field_mask_test_name(BPF_LIST_NODE, "bpf_list_node");
 	field_mask_test_name(BPF_RB_ROOT, "bpf_rb_root");
 	field_mask_test_name(BPF_RB_NODE, "bpf_rb_node");
-	field_mask_test_name(BPF_REFCOUNT, "bpf_refcount");
 
 	/* Only return BPF_KPTR when all other types with matchable names fail */
 	if (field_mask & BPF_KPTR) {
@@ -3421,6 +3415,7 @@ static int btf_get_field_type(const char *name, u32 field_mask, u32 *seen_mask,
 	return type;
 }
 
+#undef field_mask_test_name_check_seen
 #undef field_mask_test_name
 
 static int btf_find_struct_field(const struct btf *btf,
From patchwork Mon Oct 23 22:00:28 2023
From: Dave Marchevsky <davemarchevsky@fb.com>
To: bpf@vger.kernel.org
Cc: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko, Martin KaFai Lau, Kernel Team
Subject: [PATCH v1 bpf-next 2/4] bpf: Refactor btf_find_field with btf_field_info_search
Date: Mon, 23 Oct 2023 15:00:28 -0700
Message-ID: <20231023220030.2556229-3-davemarchevsky@fb.com>
In-Reply-To: <20231023220030.2556229-1-davemarchevsky@fb.com>
References: <20231023220030.2556229-1-davemarchevsky@fb.com>

btf_find_field takes (btf_type, special_field_types) and returns info
about the specific special fields in btf_type, in the form of an array
of struct btf_field_info. The meat of this 'search for special fields'
process happens in the btf_find_datasec_var and btf_find_struct_field
helpers: each member is examined and, if it's special, a struct
btf_field_info describing it is added to the return array. Indeed, any
function that might add to the output array also needs to look at
struct members or datasec vars.

Most of the parameters passed around between helpers doing the search
can be grouped into two categories: "info about the output array" and
"info about which fields to search for". This patch joins those
together in struct btf_field_info_search, simplifying the signatures of
most helpers involved in the search, including the array-flattening
helper that later patches in this series will add.
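Concretely, most helpers that used to thread (field_mask, seen_mask,
info, info_cnt) around separately now take a single search-state
pointer. Taking btf_find_struct_field's signature from the diff below
as an example:

  /* before */
  static int btf_find_struct_field(const struct btf *btf,
                                   const struct btf_type *t, u32 field_mask,
                                   struct btf_field_info *info, int info_cnt);

  /* after */
  static int btf_find_struct_field(const struct btf *btf,
                                   const struct btf_type *t,
                                   struct btf_field_info_search *srch);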
The aforementioned array flattening logic will greatly increase the
number of btf_field_infos needed to describe some structs, so this
patch also turns the statically-sized struct btf_field_info
info_arr[BTF_FIELDS_MAX] into a growable array with a larger max size.

Implementation notes:

* BTF_FIELDS_MAX is now the max size of the growable btf_field_info
  *infos array instead of the initial (and max) size of the static
  result array
  * The static array previously had 10 elems (+1 tmp btf_field_info)
  * The growable array starts with 16 elems and doubles every time it
    needs to grow, up to BTF_FIELDS_MAX of 256
* __next_field_infos is used with next_cnt > 1 later in the series
* btf_find_{datasec_var, struct_field} have special logic for an edge
  case where the result array is full but the field being examined gets
  a BTF_FIELD_IGNORE return from btf_find_{struct, kptr, graph_root}
  * If the result wasn't BTF_FIELD_IGNORE, a btf_field_info would have
    to be added to the array. Since it was ignored, we can move on to
    the next field.
  * Before this patch the logic handling this edge case was hard to
    follow and used a tmp btf_field_info. This patch moves the
    add-if-not-ignore logic down into btf_find_{struct, kptr,
    graph_root}, removing the need to opportunistically grab a
    btf_field_info to populate before knowing if it's actually
    necessary. Now a new one is grabbed only if the field shouldn't be
    ignored.
* Within btf_find_{datasec_var, struct_field}, each member is currently
  examined in two phases: first btf_get_field_type checks the member
  type name, then btf_find_{struct, graph_root, kptr} do additional
  sanity checking and populate struct btf_field_info. Kptr fields don't
  have a specific type name, though, so btf_get_field_type assumes that
  - if we're looking for kptrs - any member that fails the type name
  check could be a kptr field.
  * As a result btf_find_kptr effectively does all the pointer hopping,
    sanity checking, and info population. Instead of trying to fit kptr
    handling into this two-phase model, where it's unwieldy, handle it
    in a separate codepath when name matching fails.

Signed-off-by: Dave Marchevsky
---
 include/linux/bpf.h |   4 +-
 kernel/bpf/btf.c    | 331 +++++++++++++++++++++++++++++---------------
 2 files changed, 219 insertions(+), 116 deletions(-)

diff --git a/include/linux/bpf.h b/include/linux/bpf.h
index b4825d3cdb29..e07cac5cc3cf 100644
--- a/include/linux/bpf.h
+++ b/include/linux/bpf.h
@@ -171,8 +171,8 @@ struct bpf_map_ops {
 };
 
 enum {
-	/* Support at most 10 fields in a BTF type */
-	BTF_FIELDS_MAX = 10,
+	/* Support at most 256 fields in a BTF type */
+	BTF_FIELDS_MAX = 256,
 };
 
 enum btf_field_type {
diff --git a/kernel/bpf/btf.c b/kernel/bpf/btf.c
index 975ef8e73393..e999ba85c363 100644
--- a/kernel/bpf/btf.c
+++ b/kernel/bpf/btf.c
@@ -3257,25 +3257,94 @@ struct btf_field_info {
 	};
 };
 
+struct btf_field_info_search {
+	/* growable array. allocated in __next_field_infos,
+	 * free'd in btf_parse_fields
+	 */
+	struct btf_field_info *infos;
+	/* size of infos */
+	int info_cnt;
+	/* index of next btf_field_info to populate */
+	int idx;
+
+	/* btf_field_types to search for */
+	u32 field_mask;
+	/* btf_field_types found earlier */
+	u32 seen_mask;
+};
+
+/* Reserve next_cnt contiguous btf_field_info's for caller to populate
+ * Returns ptr to first reserved btf_field_info
+ */
+static struct btf_field_info *__next_field_infos(struct btf_field_info_search *srch,
+						 u32 next_cnt)
+{
+	struct btf_field_info *new_infos, *ret;
+
+	if (!next_cnt)
+		return ERR_PTR(-EINVAL);
+
+	if (srch->idx + next_cnt < srch->info_cnt)
+		goto nogrow_out;
+
+	/* Need to grow */
+	if (srch->idx + next_cnt > BTF_FIELDS_MAX)
+		return ERR_PTR(-E2BIG);
+
+	while (srch->idx + next_cnt >= srch->info_cnt)
+		srch->info_cnt = srch->infos ? srch->info_cnt * 2 : 16;
+
+	new_infos = krealloc(srch->infos,
+			     srch->info_cnt * sizeof(struct btf_field_info),
+			     GFP_KERNEL | __GFP_NOWARN);
+	if (!new_infos)
+		return ERR_PTR(-ENOMEM);
+	srch->infos = new_infos;
+
+nogrow_out:
+	ret = &srch->infos[srch->idx];
+	srch->idx += next_cnt;
+	return ret;
+}
+
+/* Request srch's next free btf_field_info to populate, possibly growing
+ * srch->infos
+ */
+static struct btf_field_info *__next_field_info(struct btf_field_info_search *srch)
+{
+	return __next_field_infos(srch, 1);
+}
+
 static int btf_find_struct(const struct btf *btf, const struct btf_type *t,
 			   u32 off, int sz, enum btf_field_type field_type,
-			   struct btf_field_info *info)
+			   struct btf_field_info_search *srch)
 {
+	struct btf_field_info *info;
+
 	if (!__btf_type_is_struct(t))
 		return BTF_FIELD_IGNORE;
 	if (t->size != sz)
 		return BTF_FIELD_IGNORE;
+
+	info = __next_field_info(srch);
+	if (IS_ERR_OR_NULL(info))
+		return PTR_ERR(info);
+
 	info->type = field_type;
 	info->off = off;
 	return BTF_FIELD_FOUND;
 }
 
-static int btf_find_kptr(const struct btf *btf, const struct btf_type *t,
-			 u32 off, int sz, struct btf_field_info *info)
+static int btf_maybe_find_kptr(const struct btf *btf, const struct btf_type *t,
+			       u32 off, struct btf_field_info_search *srch)
 {
+	struct btf_field_info *info;
 	enum btf_field_type type;
 	u32 res_id;
 
+	if (!(srch->field_mask & BPF_KPTR))
+		return BTF_FIELD_IGNORE;
+
 	/* Permit modifiers on the pointer itself */
 	if (btf_type_is_volatile(t))
 		t = btf_type_by_id(btf, t->type);
@@ -3304,6 +3373,10 @@ static int btf_find_kptr(const struct btf *btf, const struct btf_type *t,
 	if (!__btf_type_is_struct(t))
 		return -EINVAL;
 
+	info = __next_field_info(srch);
+	if (IS_ERR_OR_NULL(info))
+		return PTR_ERR(info);
+
 	info->type = type;
 	info->off = off;
 	info->kptr.type_id = res_id;
@@ -3340,9 +3413,10 @@ const char *btf_find_decl_tag_value(const struct btf *btf, const struct btf_type
 static int
 btf_find_graph_root(const struct btf *btf, const struct btf_type *pt,
 		    const struct btf_type *t, int comp_idx, u32 off,
-		    int sz, struct btf_field_info *info,
+		    int sz, struct btf_field_info_search *srch,
 		    enum btf_field_type head_type)
 {
+	struct btf_field_info *info;
 	const char *node_field_name;
 	const char *value_type;
 	s32 id;
@@ -3367,6 +3441,11 @@ btf_find_graph_root(const struct btf *btf, const struct btf_type *pt,
 	node_field_name++;
 	if (str_is_empty(node_field_name))
 		return -EINVAL;
+
+	info = __next_field_info(srch);
+	if (IS_ERR_OR_NULL(info))
+		return PTR_ERR(info);
+
 	info->type = head_type;
 	info->off = off;
 	info->graph_root.value_btf_id = id;
@@ -3374,25 +3453,24 @@ btf_find_graph_root(const struct btf *btf, const struct btf_type *pt,
 	return BTF_FIELD_FOUND;
 }
 
-#define field_mask_test_name(field_type, field_type_str) \
-	if (field_mask & field_type && !strcmp(name, field_type_str)) { \
-		type = field_type; \
-		goto end; \
+#define field_mask_test_name(field_type, field_type_str) \
+	if (srch->field_mask & field_type && !strcmp(name, field_type_str)) { \
+		return field_type; \
 	}
 
-#define field_mask_test_name_check_seen(field_type, field_type_str) \
-	if (field_mask & field_type && !strcmp(name, field_type_str)) { \
-		if (*seen_mask & field_type) \
-			return -E2BIG; \
-		*seen_mask |= field_type; \
-		type = field_type; \
-		goto end; \
+#define field_mask_test_name_check_seen(field_type, field_type_str) \
+	if (srch->field_mask & field_type && !strcmp(name, field_type_str)) { \
+		if (srch->seen_mask & field_type) \
+			return -E2BIG; \
+		srch->seen_mask |= field_type; \
+		return field_type; \
 	}
 
-static int btf_get_field_type(const char *name, u32 field_mask, u32 *seen_mask,
-			      int *align, int *sz)
+static int btf_get_field_type_by_name(const struct btf *btf,
+				      const struct btf_type *t,
+				      struct btf_field_info_search *srch)
 {
-	int type = 0;
+	const char *name = __btf_name_by_offset(btf, t->name_off);
 
 	field_mask_test_name_check_seen(BPF_SPIN_LOCK, "bpf_spin_lock");
 	field_mask_test_name_check_seen(BPF_TIMER, "bpf_timer");
@@ -3403,47 +3481,58 @@ static int btf_get_field_type(const char *name, u32 field_mask, u32 *seen_mask,
 	field_mask_test_name(BPF_RB_ROOT, "bpf_rb_root");
 	field_mask_test_name(BPF_RB_NODE, "bpf_rb_node");
 
-	/* Only return BPF_KPTR when all other types with matchable names fail */
-	if (field_mask & BPF_KPTR) {
-		type = BPF_KPTR_REF;
-		goto end;
-	}
 	return 0;
-end:
-	*sz = btf_field_type_size(type);
-	*align = btf_field_type_align(type);
-	return type;
 }
 
 #undef field_mask_test_name_check_seen
 #undef field_mask_test_name
 
+static int __struct_member_check_align(u32 off, enum btf_field_type field_type)
+{
+	u32 align = btf_field_type_align(field_type);
+
+	if (off % align)
+		return -EINVAL;
+	return 0;
+}
+
 static int btf_find_struct_field(const struct btf *btf,
-				 const struct btf_type *t, u32 field_mask,
-				 struct btf_field_info *info, int info_cnt)
+				 const struct btf_type *t,
+				 struct btf_field_info_search *srch)
 {
-	int ret, idx = 0, align, sz, field_type;
 	const struct btf_member *member;
-	struct btf_field_info tmp;
-	u32 i, off, seen_mask = 0;
+	int ret, field_type;
+	u32 i, off, sz;
 
 	for_each_member(i, t, member) {
 		const struct btf_type *member_type = btf_type_by_id(btf,
 								    member->type);
-
-		field_type = btf_get_field_type(__btf_name_by_offset(btf, member_type->name_off),
-						field_mask, &seen_mask, &align, &sz);
-		if (field_type == 0)
-			continue;
-		if (field_type < 0)
-			return field_type;
-
 		off = __btf_member_bit_offset(t, member);
 		if (off % 8)
 			/* valid C code cannot generate such BTF */
 			return -EINVAL;
 		off /= 8;
-		if (off % align)
+
+		field_type = btf_get_field_type_by_name(btf, member_type, srch);
+		if (field_type < 0)
+			return field_type;
+
+		if (field_type == 0) {
+			/* Maybe it's a kptr. Use BPF_KPTR_REF for align
+			 * checks, all ptrs have same align.
+			 * btf_maybe_find_kptr will find actual kptr type
+			 */
+			if (__struct_member_check_align(off, BPF_KPTR_REF))
+				continue;
+
+			ret = btf_maybe_find_kptr(btf, member_type, off, srch);
+			if (ret < 0)
+				return ret;
+			continue;
+		}
+
+		sz = btf_field_type_size(field_type);
+		if (__struct_member_check_align(off, field_type))
 			continue;
 
 		switch (field_type) {
@@ -3453,64 +3542,81 @@ static int btf_find_struct_field(const struct btf *btf,
 		case BPF_RB_NODE:
 		case BPF_REFCOUNT:
 			ret = btf_find_struct(btf, member_type, off, sz, field_type,
-					      idx < info_cnt ? &info[idx] : &tmp);
-			if (ret < 0)
-				return ret;
-			break;
-		case BPF_KPTR_UNREF:
-		case BPF_KPTR_REF:
-		case BPF_KPTR_PERCPU:
-			ret = btf_find_kptr(btf, member_type, off, sz,
-					    idx < info_cnt ? &info[idx] : &tmp);
+					      srch);
 			if (ret < 0)
 				return ret;
 			break;
 		case BPF_LIST_HEAD:
 		case BPF_RB_ROOT:
 			ret = btf_find_graph_root(btf, t, member_type,
-						  i, off, sz,
-						  idx < info_cnt ? &info[idx] : &tmp,
-						  field_type);
+						  i, off, sz, srch, field_type);
 			if (ret < 0)
 				return ret;
 			break;
+		/* kptr fields are not handled in this switch, see
+		 * btf_maybe_find_kptr above
+		 */
+		case BPF_KPTR_UNREF:
+		case BPF_KPTR_REF:
+		case BPF_KPTR_PERCPU:
 		default:
 			return -EFAULT;
 		}
-
-		if (ret == BTF_FIELD_IGNORE)
-			continue;
-		if (idx >= info_cnt)
-			return -E2BIG;
-		++idx;
 	}
-	return idx;
+	return srch->idx;
+}
+
+static int __datasec_vsi_check_align_sz(const struct btf_var_secinfo *vsi,
+					enum btf_field_type field_type,
+					u32 expected_sz)
+{
+	u32 off, align;
+
+	off = vsi->offset;
+	align = btf_field_type_align(field_type);
+
+	if (vsi->size != expected_sz)
+		return -EINVAL;
+	if (off % align)
+		return -EINVAL;
+
+	return 0;
 }
 
 static int btf_find_datasec_var(const struct btf *btf, const struct btf_type *t,
-				u32 field_mask, struct btf_field_info *info,
-				int info_cnt)
+				struct btf_field_info_search *srch)
 {
-	int ret, idx = 0, align, sz, field_type;
 	const struct btf_var_secinfo *vsi;
-	struct btf_field_info tmp;
-	u32 i, off, seen_mask = 0;
+	int ret, field_type;
+	u32 i, off, sz;
 
 	for_each_vsi(i, t, vsi) {
 		const struct btf_type *var = btf_type_by_id(btf, vsi->type);
 		const struct btf_type *var_type = btf_type_by_id(btf, var->type);
 
-		field_type = btf_get_field_type(__btf_name_by_offset(btf, var_type->name_off),
-						field_mask, &seen_mask, &align, &sz);
-		if (field_type == 0)
-			continue;
+		off = vsi->offset;
+		field_type = btf_get_field_type_by_name(btf, var_type, srch);
 		if (field_type < 0)
 			return field_type;
-		off = vsi->offset;
-		if (vsi->size != sz)
+
+		if (field_type == 0) {
+			/* Maybe it's a kptr. Use BPF_KPTR_REF for sz / align
+			 * checks, all ptrs have same sz / align.
+			 * btf_maybe_find_kptr will find actual kptr type
+			 */
+			sz = btf_field_type_size(BPF_KPTR_REF);
+			if (__datasec_vsi_check_align_sz(vsi, BPF_KPTR_REF, sz))
+				continue;
+
+			ret = btf_maybe_find_kptr(btf, var_type, off, srch);
+			if (ret < 0)
+				return ret;
 			continue;
-		if (off % align)
+		}
+
+		sz = btf_field_type_size(field_type);
+
+		if (__datasec_vsi_check_align_sz(vsi, field_type, sz))
 			continue;
 
 		switch (field_type) {
@@ -3520,48 +3626,38 @@ static int btf_find_datasec_var(const struct btf *btf, const struct btf_type *t,
 		case BPF_RB_NODE:
 		case BPF_REFCOUNT:
 			ret = btf_find_struct(btf, var_type, off, sz, field_type,
-					      idx < info_cnt ? &info[idx] : &tmp);
-			if (ret < 0)
-				return ret;
-			break;
-		case BPF_KPTR_UNREF:
-		case BPF_KPTR_REF:
-		case BPF_KPTR_PERCPU:
-			ret = btf_find_kptr(btf, var_type, off, sz,
-					    idx < info_cnt ? &info[idx] : &tmp);
+					      srch);
 			if (ret < 0)
 				return ret;
 			break;
 		case BPF_LIST_HEAD:
 		case BPF_RB_ROOT:
 			ret = btf_find_graph_root(btf, var, var_type,
-						  -1, off, sz,
-						  idx < info_cnt ? &info[idx] : &tmp,
+						  -1, off, sz, srch,
 						  field_type);
 			if (ret < 0)
 				return ret;
 			break;
+		/* kptr fields are not handled in this switch, see
+		 * btf_maybe_find_kptr above
+		 */
+		case BPF_KPTR_UNREF:
+		case BPF_KPTR_REF:
+		case BPF_KPTR_PERCPU:
 		default:
 			return -EFAULT;
 		}
-
-		if (ret == BTF_FIELD_IGNORE)
-			continue;
-		if (idx >= info_cnt)
-			return -E2BIG;
-		++idx;
 	}
-	return idx;
+	return srch->idx;
 }
 
 static int btf_find_field(const struct btf *btf, const struct btf_type *t,
-			  u32 field_mask, struct btf_field_info *info,
-			  int info_cnt)
+			  struct btf_field_info_search *srch)
 {
 	if (__btf_type_is_struct(t))
-		return btf_find_struct_field(btf, t, field_mask, info, info_cnt);
+		return btf_find_struct_field(btf, t, srch);
 	else if (btf_type_is_datasec(t))
-		return btf_find_datasec_var(btf, t, field_mask, info, info_cnt);
+		return btf_find_datasec_var(btf, t, srch);
 	return -EINVAL;
 }
 
@@ -3729,47 +3825,51 @@ static int btf_field_cmp(const void *_a, const void *_b, const void *priv)
 struct btf_record *btf_parse_fields(const struct btf *btf, const struct btf_type *t,
 				    u32 field_mask, u32 value_size)
 {
-	struct btf_field_info info_arr[BTF_FIELDS_MAX];
+	struct btf_field_info_search srch;
 	u32 next_off = 0, field_type_size;
+	struct btf_field_info *info;
 	struct btf_record *rec;
 	int ret, i, cnt;
 
-	ret = btf_find_field(btf, t, field_mask, info_arr, ARRAY_SIZE(info_arr));
-	if (ret < 0)
-		return ERR_PTR(ret);
-	if (!ret)
-		return NULL;
+	memset(&srch, 0, sizeof(srch));
+	srch.field_mask = field_mask;
+	ret = btf_find_field(btf, t, &srch);
+	if (ret <= 0)
+		goto end_srch;
 
 	cnt = ret;
 	/* This needs to be kzalloc to zero out padding and unused fields, see
 	 * comment in btf_record_equal.
 	 */
 	rec = kzalloc(offsetof(struct btf_record, fields[cnt]), GFP_KERNEL | __GFP_NOWARN);
-	if (!rec)
-		return ERR_PTR(-ENOMEM);
+	if (!rec) {
+		ret = -ENOMEM;
+		goto end_srch;
+	}
 
 	rec->spin_lock_off = -EINVAL;
 	rec->timer_off = -EINVAL;
 	rec->refcount_off = -EINVAL;
 	for (i = 0; i < cnt; i++) {
-		field_type_size = btf_field_type_size(info_arr[i].type);
-		if (info_arr[i].off + field_type_size > value_size) {
-			WARN_ONCE(1, "verifier bug off %d size %d", info_arr[i].off, value_size);
+		info = &srch.infos[i];
+		field_type_size = btf_field_type_size(info->type);
+		if (info->off + field_type_size > value_size) {
+			WARN_ONCE(1, "verifier bug off %d size %d", info->off, value_size);
 			ret = -EFAULT;
 			goto end;
 		}
-		if (info_arr[i].off < next_off) {
+		if (info->off < next_off) {
 			ret = -EEXIST;
 			goto end;
 		}
-		next_off = info_arr[i].off + field_type_size;
+		next_off = info->off + field_type_size;
 
-		rec->field_mask |= info_arr[i].type;
-		rec->fields[i].offset = info_arr[i].off;
-		rec->fields[i].type = info_arr[i].type;
+		rec->field_mask |= info->type;
+		rec->fields[i].offset = info->off;
+		rec->fields[i].type = info->type;
 		rec->fields[i].size = field_type_size;
 
-		switch (info_arr[i].type) {
+		switch (info->type) {
 		case BPF_SPIN_LOCK:
 			WARN_ON_ONCE(rec->spin_lock_off >= 0);
 			/* Cache offset for faster lookup at runtime */
@@ -3788,17 +3888,17 @@ struct btf_record *btf_parse_fields(const struct btf *btf, const struct btf_type
 		case BPF_KPTR_UNREF:
 		case BPF_KPTR_REF:
 		case BPF_KPTR_PERCPU:
-			ret = btf_parse_kptr(btf, &rec->fields[i], &info_arr[i]);
+			ret = btf_parse_kptr(btf, &rec->fields[i], info);
 			if (ret < 0)
 				goto end;
 			break;
 		case BPF_LIST_HEAD:
-			ret = btf_parse_list_head(btf, &rec->fields[i], &info_arr[i]);
+			ret = btf_parse_list_head(btf, &rec->fields[i], info);
 			if (ret < 0)
 				goto end;
 			break;
 		case BPF_RB_ROOT:
-			ret = btf_parse_rb_root(btf, &rec->fields[i], &info_arr[i]);
+			ret = btf_parse_rb_root(btf, &rec->fields[i], info);
 			if (ret < 0)
 				goto end;
 			break;
@@ -3828,10 +3928,13 @@ struct btf_record *btf_parse_fields(const struct btf *btf, const struct btf_type
 
 	sort_r(rec->fields, rec->cnt, sizeof(struct btf_field), btf_field_cmp,
 	       NULL, rec);
+	kfree(srch.infos);
 
 	return rec;
 end:
 	btf_record_free(rec);
+end_srch:
+	kfree(srch.infos);
 	return ERR_PTR(ret);
 }
From patchwork Mon Oct 23 22:00:29 2023
From: Dave Marchevsky <davemarchevsky@fb.com>
To: bpf@vger.kernel.org
Cc: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko, Martin KaFai Lau, Kernel Team
Subject: [PATCH v1 bpf-next 3/4] btf: Descend into structs and arrays during special field search
Date: Mon, 23 Oct 2023 15:00:29 -0700
Message-ID: <20231023220030.2556229-4-davemarchevsky@fb.com>
In-Reply-To: <20231023220030.2556229-1-davemarchevsky@fb.com>
References: <20231023220030.2556229-1-davemarchevsky@fb.com>

Structs and arrays are aggregate types which contain some inner type(s)
- members and elements - at various offsets. Currently, when examining
a struct or datasec for special fields, the verifier does not look into
the inner types of the structs or arrays it contains. This patch adds
logic to descend into struct and array types when searching for special
fields.

If we have struct x containing an array:

  struct x {
          int a;
          u64 b[10];
  };

we can construct some struct y with no array or struct members that has
the same types at the same offsets:

  struct y {
          int a;
          u64 b1;
          u64 b2;
          /* ... */
          u64 b10;
  };

Similarly for a struct containing a struct:

  struct x {
          char a;
          struct {
                  int b;
                  u64 c;
          } inner;
  };

there's a struct y with no aggregate members and same types / offsets:

  struct y {
          char a;
          int inner_b __attribute__ ((aligned (8))); /* See [0] */
          u64 inner_c __attribute__ ((aligned (8)));
  };

This patch takes advantage of this equivalence to 'flatten' the field
info found while descending into struct or array members into the
btf_field_info result array of the original type being examined. The
resultant btf_record of the original type being searched will have the
correct fields at the correct offsets, but without any differentiation
between "this field is one of my members" and "this field is actually
in some struct / array member of mine". For now this descendant search
logic looks for kptr fields only.

Implementation notes:

* Search starts at btf_find_field - we're either looking at a struct
  that's the type of some mapval (btf_find_struct_field), or a datasec
  representing a .bss or .data map (btf_find_datasec_var). Newly-added
  btf_find_aggregate_field is a "disambiguation helper" like
  btf_find_field, but is meant to be called from one of the starting
  points of the search - btf_find_{struct_field, datasec_var}.
* btf_find_aggregate_field may itself call btf_find_struct_field, so
  there's some recursive digging possible here
* Newly-added btf_flatten_array_field handles array fields by finding
  the type of their elements and continuing the dig based on the elem
  type

[0]: Structs have the alignment of their largest field, so the explicit
alignment is necessary here. Luckily this patch's changes don't need to
care about alignment / padding, since the BTF created during
compilation is being searched, and it already has the correct
information.

Signed-off-by: Dave Marchevsky
---
 kernel/bpf/btf.c | 151 ++++++++++++++++++++++++++++++++++++++++++++---
 1 file changed, 142 insertions(+), 9 deletions(-)

diff --git a/kernel/bpf/btf.c b/kernel/bpf/btf.c
index e999ba85c363..b982bf6fef9d 100644
--- a/kernel/bpf/btf.c
+++ b/kernel/bpf/btf.c
@@ -3496,9 +3496,41 @@ static int __struct_member_check_align(u32 off, enum btf_field_type field_type)
 	return 0;
 }
 
+/* Return number of elems and elem_type of a btf_array
+ *
+ * If the array is multi-dimensional, return elem count of
+ * equivalent single-dimensional array
+ * e.g. int x[10][10][10] has same layout as int x[1000]
+ */
+static u32 __multi_dim_elem_type_nelems(const struct btf *btf,
+					const struct btf_type *t,
+					const struct btf_type **elem_type)
+{
+	u32 nelems = btf_array(t)->nelems;
+
+	if (!nelems)
+		return 0;
+
+	*elem_type = btf_type_by_id(btf, btf_array(t)->type);
+
+	while (btf_type_is_array(*elem_type)) {
+		if (!btf_array(*elem_type)->nelems)
+			return 0;
+		nelems *= btf_array(*elem_type)->nelems;
+		*elem_type = btf_type_by_id(btf, btf_array(*elem_type)->type);
+	}
+	return nelems;
+}
+
+static int btf_find_aggregate_field(const struct btf *btf,
+				    const struct btf_type *t,
+				    struct btf_field_info_search *srch,
+				    int field_off, int rec);
+
 static int btf_find_struct_field(const struct btf *btf,
 				 const struct btf_type *t,
-				 struct btf_field_info_search *srch)
+				 struct btf_field_info_search *srch,
+				 int struct_field_off, int rec)
 {
 	const struct btf_member *member;
 	int ret, field_type;
@@ -3522,10 +3554,24 @@ static int btf_find_struct_field(const struct btf *btf,
 			 * checks, all ptrs have same align.
 			 * btf_maybe_find_kptr will find actual kptr type
 			 */
-			if (__struct_member_check_align(off, BPF_KPTR_REF))
+			if (srch->field_mask & BPF_KPTR &&
+			    !__struct_member_check_align(off, BPF_KPTR_REF)) {
+				ret = btf_maybe_find_kptr(btf, member_type,
+							  struct_field_off + off,
+							  srch);
+				if (ret < 0)
+					return ret;
+				if (ret == BTF_FIELD_FOUND)
+					continue;
+			}
+
+			if (!(btf_type_is_array(member_type) ||
+			      __btf_type_is_struct(member_type)))
 				continue;
 
-			ret = btf_maybe_find_kptr(btf, member_type, off, srch);
+			ret = btf_find_aggregate_field(btf, member_type, srch,
						       struct_field_off + off,
						       rec);
 			if (ret < 0)
 				return ret;
 			continue;
@@ -3541,15 +3587,17 @@ static int btf_find_struct_field(const struct btf *btf,
 		case BPF_LIST_NODE:
 		case BPF_RB_NODE:
 		case BPF_REFCOUNT:
-			ret = btf_find_struct(btf, member_type, off, sz, field_type,
-					      srch);
+			ret = btf_find_struct(btf, member_type,
+					      struct_field_off + off,
+					      sz, field_type, srch);
 			if (ret < 0)
 				return ret;
 			break;
 		case BPF_LIST_HEAD:
 		case BPF_RB_ROOT:
 			ret = btf_find_graph_root(btf, t, member_type,
-						  i, off, sz, srch, field_type);
+						  i, struct_field_off + off, sz,
+						  srch, field_type);
 			if (ret < 0)
 				return ret;
 			break;
@@ -3566,6 +3614,82 @@ static int btf_find_struct_field(const struct btf *btf,
 	return srch->idx;
 }
 
+static int btf_flatten_array_field(const struct btf *btf,
+				   const struct btf_type *t,
+				   struct btf_field_info_search *srch,
+				   int array_field_off, int rec)
+{
+	int ret, start_idx, elem_field_cnt;
+	const struct btf_type *elem_type;
+	struct btf_field_info *info;
+	u32 i, j, nelems;
+
+	if (!btf_type_is_array(t))
+		return -EINVAL;
+	nelems = __multi_dim_elem_type_nelems(btf, t, &elem_type);
+	if (!nelems || !__btf_type_is_struct(elem_type))
+		return srch->idx;
+
+	start_idx = srch->idx;
+	ret = btf_find_struct_field(btf, elem_type, srch, array_field_off, rec);
+	if (ret < 0)
+		return ret;
+
+	/* No btf_field_info's added */
+	if (srch->idx == start_idx)
+		return srch->idx;
+
+	elem_field_cnt = srch->idx - start_idx;
+	info = __next_field_infos(srch, elem_field_cnt * (nelems - 1));
+	if (IS_ERR_OR_NULL(info))
+		return PTR_ERR(info);
+
+	/* Array elems after the first can copy first elem's btf_field_infos
+	 * and adjust offset
+	 */
+	for (i = 1; i < nelems; i++) {
+		memcpy(info, &srch->infos[start_idx],
+		       elem_field_cnt * sizeof(struct btf_field_info));
+		for (j = 0; j < elem_field_cnt; j++) {
+			info->off += (i * elem_type->size);
+			info++;
+		}
+	}
+	return srch->idx;
+}
+
+static int btf_find_aggregate_field(const struct btf *btf,
+				    const struct btf_type *t,
+				    struct btf_field_info_search *srch,
+				    int field_off, int rec)
+{
+	u32 orig_field_mask;
+	int ret;
+
+	/* Dig up to 4 levels deep */
+	if (rec >= 4)
+		return -E2BIG;
+
+	orig_field_mask = srch->field_mask;
+	srch->field_mask &= BPF_KPTR;
+
+	if (!srch->field_mask) {
+		ret = 0;
+		goto reset_field_mask;
+	}
+
+	if (__btf_type_is_struct(t))
+		ret = btf_find_struct_field(btf, t, srch, field_off, rec + 1);
+	else if (btf_type_is_array(t))
+		ret = btf_flatten_array_field(btf, t, srch, field_off, rec + 1);
+	else
+		ret = -EINVAL;
+
+reset_field_mask:
+	srch->field_mask = orig_field_mask;
+	return ret;
+}
+
 static int __datasec_vsi_check_align_sz(const struct btf_var_secinfo *vsi,
 					enum btf_field_type field_type,
 					u32 expected_sz)
@@ -3605,10 +3729,19 @@ static int btf_find_datasec_var(const struct btf *btf, const struct btf_type *t,
 			 * btf_maybe_find_kptr will find actual kptr type
 			 */
 			sz = btf_field_type_size(BPF_KPTR_REF);
-			if (__datasec_vsi_check_align_sz(vsi, BPF_KPTR_REF, sz))
+			if (srch->field_mask & BPF_KPTR &&
+			    !__datasec_vsi_check_align_sz(vsi, BPF_KPTR_REF, sz)) {
+				ret = btf_maybe_find_kptr(btf, var_type, off, srch);
+				if (ret < 0)
+					return ret;
+				if (ret == BTF_FIELD_FOUND)
+					continue;
+			}
+
+			if (!(btf_type_is_array(var_type) || __btf_type_is_struct(var_type)))
 				continue;
 
-			ret = btf_maybe_find_kptr(btf, var_type, off, srch);
+			ret = btf_find_aggregate_field(btf, var_type, srch, off, 0);
 			if (ret < 0)
 				return ret;
 			continue;
@@ -3655,7 +3788,7 @@ static int btf_find_field(const struct btf *btf, const struct btf_type *t,
 			  struct btf_field_info_search *srch)
 {
 	if (__btf_type_is_struct(t))
-		return btf_find_struct_field(btf, t, srch);
+		return btf_find_struct_field(btf, t, srch, 0, 0);
 	else if (btf_type_is_datasec(t))
 		return btf_find_datasec_var(btf, t, srch);
 	return -EINVAL;
From patchwork Mon Oct 23 22:00:30 2023
From: Dave Marchevsky <davemarchevsky@fb.com>
To: bpf@vger.kernel.org
Cc: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko, Martin KaFai Lau, Kernel Team
Subject: [PATCH v1 bpf-next 4/4] selftests/bpf: Add tests exercising aggregate type BTF field search
Date: Mon, 23 Oct 2023 15:00:30 -0700
Message-ID: <20231023220030.2556229-5-davemarchevsky@fb.com>
In-Reply-To: <20231023220030.2556229-1-davemarchevsky@fb.com>
References: <20231023220030.2556229-1-davemarchevsky@fb.com>

The newly-added test file attempts to kptr_xchg a prog_test_ref_kfunc
kptr into a kptr field in a variety of nested aggregate types. If the
verifier recognizes that there's a kptr field where we're trying to
kptr_xchg, then the aggregate type digging logic works as expected.

Some of the refactoring changes in this series are tested as well.
Specifically:

* BTF_FIELDS_MAX is now higher and represents the max size of the
  growable array. Confirm that btf_parse_fields fails for a type which
  contains too many fields.
* If we've already seen BTF_FIELDS_MAX fields, we should continue
  looking for fields and fail if we find another one; otherwise the
  search should succeed and return BTF_FIELDS_MAX btf_field_infos.
  Confirm that this edge case works as expected.

Signed-off-by: Dave Marchevsky
---
 .../selftests/bpf/prog_tests/array_kptr.c     |  12 ++
 .../testing/selftests/bpf/progs/array_kptr.c  | 179 ++++++++++++++++++
 2 files changed, 191 insertions(+)
 create mode 100644 tools/testing/selftests/bpf/prog_tests/array_kptr.c
 create mode 100644 tools/testing/selftests/bpf/progs/array_kptr.c

diff --git a/tools/testing/selftests/bpf/prog_tests/array_kptr.c b/tools/testing/selftests/bpf/prog_tests/array_kptr.c
new file mode 100644
index 000000000000..9d088520bdfe
--- /dev/null
+++ b/tools/testing/selftests/bpf/prog_tests/array_kptr.c
@@ -0,0 +1,12 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright (c) 2023 Meta Platforms, Inc. and affiliates. */
+
+#include <test_progs.h>
+
+#include "array_kptr.skel.h"
+
+void test_array_kptr(void)
+{
+	if (env.has_testmod)
+		RUN_TESTS(array_kptr);
+}
diff --git a/tools/testing/selftests/bpf/progs/array_kptr.c b/tools/testing/selftests/bpf/progs/array_kptr.c
new file mode 100644
index 000000000000..f34872e74024
--- /dev/null
+++ b/tools/testing/selftests/bpf/progs/array_kptr.c
@@ -0,0 +1,179 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright (c) 2023 Meta Platforms, Inc. and affiliates. */
+
+#include <vmlinux.h>
+#include <bpf/bpf_tracing.h>
+#include <bpf/bpf_helpers.h>
+#include "../bpf_testmod/bpf_testmod_kfunc.h"
+#include "bpf_misc.h"
+
+struct val {
+	int d;
+	struct prog_test_ref_kfunc __kptr *ref_ptr;
+};
+
+struct val2 {
+	char c;
+	struct val v;
+};
+
+struct val_holder {
+	int e;
+	struct val2 first[2];
+	int f;
+	struct val second[2];
+};
+
+struct array_map {
+	__uint(type, BPF_MAP_TYPE_ARRAY);
+	__type(key, int);
+	__type(value, struct val);
+	__uint(max_entries, 10);
+} array_map SEC(".maps");
+
+struct array_map2 {
+	__uint(type, BPF_MAP_TYPE_ARRAY);
+	__type(key, int);
+	__type(value, struct val2);
+	__uint(max_entries, 10);
+} array_map2 SEC(".maps");
+
+__hidden struct val array[25];
+__hidden struct val double_array[5][5];
+__hidden struct val_holder double_holder_array[2][2];
+
+/* Some tests need their own section to force separate bss arraymap,
+ * otherwise above arrays wouldn't have btf_field_info either
+ */
+#define private(name) SEC(".bss." #name) __hidden __attribute__((aligned(8)))
+private(A) struct val array_too_big[300];
+
+private(B) struct val exactly_max_fields[256];
+private(B) int ints[50];
+
+SEC("tc")
+__success __retval(0)
+int test_arraymap(void *ctx)
+{
+	struct prog_test_ref_kfunc *p;
+	unsigned long dummy = 0;
+	struct val *v;
+	int idx = 0;
+
+	v = bpf_map_lookup_elem(&array_map, &idx);
+	if (!v)
+		return 1;
+
+	p = bpf_kfunc_call_test_acquire(&dummy);
+	if (!p)
+		return 2;
+
+	p = bpf_kptr_xchg(&v->ref_ptr, p);
+	if (p) {
+		bpf_kfunc_call_test_release(p);
+		return 3;
+	}
+
+	return 0;
+}
+
+SEC("tc")
+__success __retval(0)
+int test_arraymap2(void *ctx)
+{
+	struct prog_test_ref_kfunc *p;
+	unsigned long dummy = 0;
+	struct val2 *v;
+	int idx = 0;
+
+	v = bpf_map_lookup_elem(&array_map2, &idx);
+	if (!v)
+		return 1;
+
+	p = bpf_kfunc_call_test_acquire(&dummy);
+	if (!p)
+		return 2;
+
+	p = bpf_kptr_xchg(&v->v.ref_ptr, p);
+	if (p) {
+		bpf_kfunc_call_test_release(p);
+		return 3;
+	}
+
+	return 0;
+}
+
+/* elem must be contained within some mapval so it can be used as
+ * bpf_kptr_xchg's first param
+ */
+static __always_inline int test_array_xchg(struct val *elem)
+{
+	struct prog_test_ref_kfunc *p;
+	unsigned long dummy = 0;
+
+	p = bpf_kfunc_call_test_acquire(&dummy);
+	if (!p)
+		return 1;
+
+	p = bpf_kptr_xchg(&elem->ref_ptr, p);
+	if (p) {
+		bpf_kfunc_call_test_release(p);
+		return 2;
+	}
+
+	return 0;
+}
+
+SEC("tc")
+__success __retval(0)
+int test_array(void *ctx)
+{
+	return test_array_xchg(&array[10]);
+}
+
+SEC("tc")
+__success __retval(0)
+int test_double_array(void *ctx)
+{
+	/* array -> array -> struct -> kptr */
+	return test_array_xchg(&double_array[4][3]);
+}
+
+SEC("tc")
+__success __retval(0)
+int test_double_holder_array_first(void *ctx)
+{
+	/* array -> array -> struct -> array -> struct -> struct -> kptr */
+	return test_array_xchg(&double_holder_array[1][1].first[1].v);
+}
+
+SEC("tc")
+__success __retval(0)
+int test_double_holder_array_second(void *ctx)
+{
+	/* array -> array -> struct -> array -> struct -> kptr */
+	return test_array_xchg(&double_holder_array[1][1].second[1]);
+}
+
+SEC("tc")
+__success __retval(0)
+int test_exactly_max_fields(void *ctx)
+{
+	/* Edge case where verifier finds BTF_FIELDS_MAX fields. It should be
+	 * safe to examine .bss.B's other array, and .bss.B will have a valid
+	 * btf_record if no more fields are found
+	 */
+	return test_array_xchg(&exactly_max_fields[255]);
+}
+
+SEC("tc")
+__failure __msg("map '.bss.A' has no valid kptr")
+int test_array_fail__too_big(void *ctx)
+{
+	/* array_too_big's btf_record parsing will fail due to the
+	 * number of btf_field_infos being > BTF_FIELDS_MAX
+	 */
+	return test_array_xchg(&array_too_big[50]);
+}
+
+char _license[] SEC("license") = "GPL";
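For reference, these tests should be runnable with the usual BPF
selftests flow (assuming a kernel built with bpf_testmod available; the
test_progs filter name comes from the test_array_kptr runner above):

  cd tools/testing/selftests/bpf
  make
  sudo ./test_progs -t array_kptr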