From patchwork Thu Jul 4 10:23:53 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Eduard Zingerman X-Patchwork-Id: 13723569 X-Patchwork-Delegate: bpf@iogearbox.net Received: from mail-pj1-f45.google.com (mail-pj1-f45.google.com [209.85.216.45]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id D68391BC4E for ; Thu, 4 Jul 2024 10:24:20 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=209.85.216.45 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1720088662; cv=none; b=Jvsxw5LfK5t+QXrH66uOtAYxcao/JjavJClMvIV9ImYtkKfzH/TEgu3FtTfBaiQCPCr3aLiuPwn0ELHJNdVUwqkyJtCrkNvEPJc3miJFSKyP8VdgcEcLePvAVkQZhwKrIGAMaGo+KtGvJpkyK5XoDdYz9q8mTLKH2OIGlyCft4I= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1720088662; c=relaxed/simple; bh=MjIfO8uP/iqo7X+m/k1uKOlGdQI8ptMEjXgjANf3WUc=; h=From:To:Cc:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version; b=gUAyVhSiEDR5GNfwUxDgtDEanhxMXfg2wzVCEDgj460ZfIt6sxsjCyNKVb/sxy0cn0VsjpuIn24zJl3HD1VSVfMsT20YlGMmEY3uqP0yIiVEtVqyJa2KrBjbQwDDFNhtCdMB+UDpFmGNS79JK5y5tr/iU4otCmdyDaZtzsSNPro= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=gmail.com; spf=pass smtp.mailfrom=gmail.com; dkim=pass (2048-bit key) header.d=gmail.com header.i=@gmail.com header.b=LzPzzOtB; arc=none smtp.client-ip=209.85.216.45 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=gmail.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=gmail.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=gmail.com header.i=@gmail.com header.b="LzPzzOtB" Received: by mail-pj1-f45.google.com with SMTP id 98e67ed59e1d1-2c2d25b5432so342349a91.2 for ; Thu, 04 Jul 2024 03:24:20 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20230601; t=1720088660; x=1720693460; darn=vger.kernel.org; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=4xXAwELzpALDrMLBpWnJA7MUy/dCPT5AhcwukZUjp2Q=; b=LzPzzOtBk2JtvrVc4y1xQocKVN3teM5I51MnV/w3Y32zvl58ewWjjLy5eQrSy/XRLr R1Cb930/Hj70C1ZmsdVnS80NKdcGYpYbN/U/5Yn6eB3WyaiptcRnsrv+9qbqk/U3zosb bgucHCXc7QRrDRGfJYALzeojveWDPEwzLxqNIsoUWLjUoxc9jGBmCuZQuqG74bYma6Xq bDDHCgZS5aKAkIgkB6qxr9WPndgXoNqFwkiXc1LCS0EHUPpG+N3le3+SdrAewJixoDGK RjT/M3hAj1IdaFgaW6LS94lgducoGBbVHJ2VBGeIEK1Oau5bSrFjDS++PDWIErr/IHSA Macw== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1720088660; x=1720693460; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=4xXAwELzpALDrMLBpWnJA7MUy/dCPT5AhcwukZUjp2Q=; b=WD6y2wPMzgx9YabBTrWWrfY/0CLwaTQu+ID+g+vkB2nRncSWS4mGAvPDvht6NaYKzV 6GqVEBoP9OBhIBpSv5i/PEhKPjZssYKx5Bs/ccqyE6eg2j5QDE+Q1eX4OvvAm7nu4hfK Dlbss3XDdydO00riy8z0eUvnEMDhkWQshBJqtLiBn0ucI0Eno5+azCvQBxlCR/XCrr+v N1EXzQBXch81BzAJ0jHRHKAEsb8t0mJu4+SpoHgRZWfihWqFVCpKC+giqToMS9p52Og1 Amd04Xl1c6ASczGfaSeiFnYESP1QGNQaEKJUY6yt3/PMzWynx7Rp3VtP5O0eTOQeRSJU MjXg== X-Gm-Message-State: AOJu0YxRAI+Hbe1kARWeKIQCo/Xd0IsuH8fyWS3gH51W2TF2ZAS/ogMA 5N3YiuxFP3N3rAXR5feBfq+6o4jKNkfFSpBC09CXZfBHadSfBlWqtnkP9Q== X-Google-Smtp-Source: 
AGHT+IFfi61otj6tqEHL/209wfyCVMD19buRhcnSRUoMIIvXE68U0iW9lE4e1v3vywyJNiFuDdI/sQ== X-Received: by 2002:a17:90a:34ca:b0:2c3:7e3:6be0 with SMTP id 98e67ed59e1d1-2c99c6e49bdmr889804a91.31.1720088659850; Thu, 04 Jul 2024 03:24:19 -0700 (PDT) Received: from badger.. ([38.34.87.7]) by smtp.gmail.com with ESMTPSA id 98e67ed59e1d1-2c9a4c0fe8dsm216693a91.0.2024.07.04.03.24.18 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Thu, 04 Jul 2024 03:24:19 -0700 (PDT) From: Eduard Zingerman To: bpf@vger.kernel.org, ast@kernel.org Cc: andrii@kernel.org, daniel@iogearbox.net, martin.lau@linux.dev, kernel-team@fb.com, yonghong.song@linux.dev, puranjay@kernel.org, jose.marchesi@oracle.com, Eduard Zingerman Subject: [RFC bpf-next v2 1/9] bpf: add a get_helper_proto() utility function Date: Thu, 4 Jul 2024 03:23:53 -0700 Message-ID: <20240704102402.1644916-2-eddyz87@gmail.com> X-Mailer: git-send-email 2.45.2 In-Reply-To: <20240704102402.1644916-1-eddyz87@gmail.com> References: <20240704102402.1644916-1-eddyz87@gmail.com> Precedence: bulk X-Mailing-List: bpf@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-Patchwork-Delegate: bpf@iogearbox.net X-Patchwork-State: RFC Extract the part of check_helper_call() as a utility function allowing to query 'struct bpf_func_proto' for a specific helper function id. Signed-off-by: Eduard Zingerman Acked-by: Andrii Nakryiko --- kernel/bpf/verifier.c | 30 +++++++++++++++++++++++------- 1 file changed, 23 insertions(+), 7 deletions(-) diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c index d3927d819465..4869f1fb0a42 100644 --- a/kernel/bpf/verifier.c +++ b/kernel/bpf/verifier.c @@ -10261,6 +10261,24 @@ static void update_loop_inline_state(struct bpf_verifier_env *env, u32 subprogno state->callback_subprogno == subprogno); } +static int get_helper_proto(struct bpf_verifier_env *env, int func_id, + const struct bpf_func_proto **ptr) +{ + const struct bpf_func_proto *result = NULL; + + if (func_id < 0 || func_id >= __BPF_FUNC_MAX_ID) + return -ERANGE; + + if (env->ops->get_func_proto) + result = env->ops->get_func_proto(func_id, env->prog); + + if (!result) + return -EINVAL; + + *ptr = result; + return 0; +} + static int check_helper_call(struct bpf_verifier_env *env, struct bpf_insn *insn, int *insn_idx_p) { @@ -10277,18 +10295,16 @@ static int check_helper_call(struct bpf_verifier_env *env, struct bpf_insn *insn /* find function prototype */ func_id = insn->imm; - if (func_id < 0 || func_id >= __BPF_FUNC_MAX_ID) { - verbose(env, "invalid func %s#%d\n", func_id_name(func_id), - func_id); + err = get_helper_proto(env, insn->imm, &fn); + if (err == -ERANGE) { + verbose(env, "invalid func %s#%d\n", func_id_name(func_id), func_id); return -EINVAL; } - if (env->ops->get_func_proto) - fn = env->ops->get_func_proto(func_id, env->prog); - if (!fn) { + if (err) { verbose(env, "program of this type cannot use helper %s#%d\n", func_id_name(func_id), func_id); - return -EINVAL; + return err; } /* eBPF programs must be GPL compatible to use GPL-ed functions */ From patchwork Thu Jul 4 10:23:54 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Eduard Zingerman X-Patchwork-Id: 13723570 X-Patchwork-Delegate: bpf@iogearbox.net Received: from mail-pl1-f179.google.com (mail-pl1-f179.google.com [209.85.214.179]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 4719613C8FF for 
; Thu, 4 Jul 2024 10:24:22 +0000 (UTC) Received: from badger.. 
([38.34.87.7]) by smtp.gmail.com with ESMTPSA id 98e67ed59e1d1-2c9a4c0fe8dsm216693a91.0.2024.07.04.03.24.20 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Thu, 04 Jul 2024 03:24:20 -0700 (PDT) From: Eduard Zingerman To: bpf@vger.kernel.org, ast@kernel.org Cc: andrii@kernel.org, daniel@iogearbox.net, martin.lau@linux.dev, kernel-team@fb.com, yonghong.song@linux.dev, puranjay@kernel.org, jose.marchesi@oracle.com, Eduard Zingerman , Alexei Starovoitov Subject: [RFC bpf-next v2 2/9] bpf: no_caller_saved_registers attribute for helper calls Date: Thu, 4 Jul 2024 03:23:54 -0700 Message-ID: <20240704102402.1644916-3-eddyz87@gmail.com> X-Mailer: git-send-email 2.45.2 In-Reply-To: <20240704102402.1644916-1-eddyz87@gmail.com> References: <20240704102402.1644916-1-eddyz87@gmail.com> Precedence: bulk X-Mailing-List: bpf@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-Patchwork-Delegate: bpf@iogearbox.net X-Patchwork-State: RFC

GCC and LLVM define a no_caller_saved_registers function attribute. This attribute means that the function scratches only some of the caller-saved registers defined by the ABI. For BPF the set of such registers can be defined as follows:
- R0 is scratched only if the function is non-void;
- R1-R5 are scratched only if the corresponding parameter type is defined in the function prototype.

This commit introduces the flag bpf_func_proto->allow_nocsr. If this flag is set for a helper function, the verifier assumes that it follows the no_caller_saved_registers calling convention.

The contract between kernel and clang makes it possible to use such functions while maintaining backwards compatibility with old kernels that don't understand no_caller_saved_registers calls (nocsr for short):
- clang generates a simple pattern for nocsr calls, e.g.:

    r1 = 1;
    r2 = 2;
    *(u64 *)(r10 - 8) = r1;
    *(u64 *)(r10 - 16) = r2;
    call %[to_be_inlined]
    r2 = *(u64 *)(r10 - 16);
    r1 = *(u64 *)(r10 - 8);
    r0 = r1;
    r0 += r2;
    exit;

- the kernel removes the unnecessary spills and fills if the called function is inlined by the verifier or the current JIT (assuming that the patch inserted by the verifier or JIT honors the nocsr contract, e.g. does not scratch r3-r5 for the example above), so the code above would be transformed to:

    r1 = 1;
    r2 = 2;
    call %[to_be_inlined]
    r0 = r1;
    r0 += r2;
    exit;

Technically, the transformation is split into the following phases:
- function mark_nocsr_patterns(), called from bpf_check(), searches for and marks potential patterns in instruction auxiliary data;
- upon each stack read or write access, function check_nocsr_stack_contract() verifies that stack offsets presumably reserved for nocsr patterns are used only from those patterns;
- function do_misc_fixups(), called from bpf_check(), applies the rewrite for valid patterns.

See the comment in mark_nocsr_pattern_for_call() for more details. 
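As a source-level illustration (a sketch, not part of this patch: the function name mirrors the %[to_be_inlined] placeholder above, and only the attribute spelling comes from the GCC/LLVM attribute named above):

    /* Sketch: with this declaration the callee is assumed to clobber only
     * r0 (the result) and r1 (the single declared argument); clang may keep
     * other live values in r2-r5 across the call and, for compatibility with
     * older kernels, still wraps the call in the spill/fill pattern above.
     */
    __attribute__((no_caller_saved_registers))
    extern unsigned long to_be_inlined(unsigned long arg);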
Suggested-by: Alexei Starovoitov Signed-off-by: Eduard Zingerman --- include/linux/bpf.h | 6 + include/linux/bpf_verifier.h | 14 ++ kernel/bpf/verifier.c | 300 ++++++++++++++++++++++++++++++++++- 3 files changed, 314 insertions(+), 6 deletions(-) diff --git a/include/linux/bpf.h b/include/linux/bpf.h index 960780ef04e1..391e19c5cd68 100644 --- a/include/linux/bpf.h +++ b/include/linux/bpf.h @@ -807,6 +807,12 @@ struct bpf_func_proto { bool gpl_only; bool pkt_access; bool might_sleep; + /* set to true if helper follows contract for gcc/llvm + * attribute no_caller_saved_registers: + * - void functions do not scratch r0 + * - functions taking N arguments scratch only registers r1-rN + */ + bool allow_nocsr; enum bpf_return_type ret_type; union { struct { diff --git a/include/linux/bpf_verifier.h b/include/linux/bpf_verifier.h index 2b54e25d2364..735ae0901b3d 100644 --- a/include/linux/bpf_verifier.h +++ b/include/linux/bpf_verifier.h @@ -585,6 +585,15 @@ struct bpf_insn_aux_data { * accepts callback function as a parameter. */ bool calls_callback; + /* true if STX or LDX instruction is a part of a spill/fill + * pattern for a no_caller_saved_registers call. + */ + u8 nocsr_pattern:1; + /* for CALL instructions, a number of spill/fill pairs in the + * no_caller_saved_registers pattern. + */ + u8 nocsr_spills_num:3; + }; #define MAX_USED_MAPS 64 /* max number of maps accessed by one eBPF program */ @@ -641,6 +650,11 @@ struct bpf_subprog_info { u32 linfo_idx; /* The idx to the main_prog->aux->linfo */ u16 stack_depth; /* max. stack depth used by this function */ u16 stack_extra; + /* stack depth after which slots reserved for + * no_caller_saved_registers spills/fills start, + * value <= nocsr_stack_off belongs to the spill/fill area. + */ + s16 nocsr_stack_off; bool has_tail_call: 1; bool tail_call_reachable: 1; bool has_ld_abs: 1; diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c index 4869f1fb0a42..d16a249b59ad 100644 --- a/kernel/bpf/verifier.c +++ b/kernel/bpf/verifier.c @@ -2471,16 +2471,37 @@ static int cmp_subprogs(const void *a, const void *b) ((struct bpf_subprog_info *)b)->start; } -static int find_subprog(struct bpf_verifier_env *env, int off) +/* Find subprogram that contains instruction at 'off' */ +static int find_containing_subprog(struct bpf_verifier_env *env, int off) { - struct bpf_subprog_info *p; + struct bpf_subprog_info *vals = env->subprog_info; + int l, r, m; - p = bsearch(&off, env->subprog_info, env->subprog_cnt, - sizeof(env->subprog_info[0]), cmp_subprogs); - if (!p) + if (off >= env->prog->len || off < 0 || env->subprog_cnt == 0) return -ENOENT; - return p - env->subprog_info; + l = 0; + m = 0; + r = env->subprog_cnt - 1; + while (l < r) { + m = l + (r - l + 1) / 2; + if (vals[m].start <= off) + l = m; + else + r = m - 1; + } + return l; +} + +/* Find subprogram that starts exactly at 'off' */ +static int find_subprog(struct bpf_verifier_env *env, int off) +{ + int idx; + + idx = find_containing_subprog(env, off); + if (idx < 0 || env->subprog_info[idx].start != off) + return -ENOENT; + return idx; } static int add_subprog(struct bpf_verifier_env *env, int off) @@ -4501,6 +4522,23 @@ static int get_reg_width(struct bpf_reg_state *reg) return fls64(reg->umax_value); } +/* See comment for mark_nocsr_pattern_for_call() */ +static void check_nocsr_stack_contract(struct bpf_verifier_env *env, struct bpf_func_state *state, + int insn_idx, int off) +{ + struct bpf_subprog_info *subprog = &env->subprog_info[state->subprogno]; + struct bpf_insn_aux_data *aux = 
&env->insn_aux_data[insn_idx]; + + if (subprog->nocsr_stack_off <= off || aux->nocsr_pattern) + return; + /* access to the region [max_stack_depth .. nocsr_stack_off] + * from something that is not a part of the nocsr pattern, + * disable nocsr rewrites for current subprogram by setting + * nocsr_stack_off to a value smaller than any possible offset. + */ + subprog->nocsr_stack_off = S16_MIN; +} + /* check_stack_{read,write}_fixed_off functions track spill/fill of registers, * stack boundary and alignment are checked in check_mem_access() */ @@ -4549,6 +4587,7 @@ static int check_stack_write_fixed_off(struct bpf_verifier_env *env, if (err) return err; + check_nocsr_stack_contract(env, state, insn_idx, off); mark_stack_slot_scratched(env, spi); if (reg && !(off % BPF_REG_SIZE) && reg->type == SCALAR_VALUE && env->bpf_capable) { bool reg_value_fits; @@ -4682,6 +4721,7 @@ static int check_stack_write_var_off(struct bpf_verifier_env *env, return err; } + check_nocsr_stack_contract(env, state, insn_idx, min_off); /* Variable offset writes destroy any spilled pointers in range. */ for (i = min_off; i < max_off; i++) { u8 new_type, *stype; @@ -4820,6 +4860,7 @@ static int check_stack_read_fixed_off(struct bpf_verifier_env *env, reg = ®_state->stack[spi].spilled_ptr; mark_stack_slot_scratched(env, spi); + check_nocsr_stack_contract(env, state, env->insn_idx, off); if (is_spilled_reg(®_state->stack[spi])) { u8 spill_size = 1; @@ -4980,6 +5021,7 @@ static int check_stack_read_var_off(struct bpf_verifier_env *env, min_off = reg->smin_value + off; max_off = reg->smax_value + off; mark_reg_stack_read(env, ptr_state, min_off, max_off + size, dst_regno); + check_nocsr_stack_contract(env, ptr_state, env->insn_idx, min_off); return 0; } @@ -15951,6 +15993,206 @@ static int visit_func_call_insn(int t, struct bpf_insn *insns, return ret; } +/* Bitmask with 1s for all caller saved registers */ +#define ALL_CALLER_SAVED_REGS ((1u << CALLER_SAVED_REGS) - 1) + +/* Return a bitmask specifying which caller saved registers are + * modified by a call to a helper. + * (Either as a return value or as scratch registers). + * + * For normal helpers registers R0-R5 are scratched. + * For helpers marked as no_csr: + * - scratch R0 if function is non-void; + * - scratch R1-R5 if corresponding parameter type is set + * in the function prototype. + */ +static u8 get_helper_reg_mask(const struct bpf_func_proto *fn) +{ + u8 mask; + int i; + + if (!fn->allow_nocsr) + return ALL_CALLER_SAVED_REGS; + + mask = 0; + if (fn->ret_type != RET_VOID) + mask |= BIT(BPF_REG_0); + for (i = 0; i < ARRAY_SIZE(fn->arg_type); ++i) + if (fn->arg_type[i] != ARG_DONTCARE) + mask |= BIT(BPF_REG_1 + i); + return mask; +} + +/* True if do_misc_fixups() replaces calls to helper number 'imm', + * replacement patch is presumed to follow no_caller_saved_registers contract + * (see mark_nocsr_pattern_for_call() below). + */ +static bool verifier_inlines_helper_call(struct bpf_verifier_env *env, s32 imm) +{ + return false; +} + +/* If 'insn' is a call that follows no_caller_saved_registers contract + * and called function is inlined by current jit or verifier, + * return a mask with 1s corresponding to registers that are scratched + * by this call (depends on return type and number of return parameters). + * Otherwise return ALL_CALLER_SAVED_REGS mask. 
+ */ +static u32 call_csr_mask(struct bpf_verifier_env *env, struct bpf_insn *insn) +{ + const struct bpf_func_proto *fn; + + if (bpf_helper_call(insn) && + (verifier_inlines_helper_call(env, insn->imm) || bpf_jit_inlines_helper_call(insn->imm)) && + get_helper_proto(env, insn->imm, &fn) == 0 && + fn->allow_nocsr) + return ~get_helper_reg_mask(fn); + + return ALL_CALLER_SAVED_REGS; +} + +/* GCC and LLVM define a no_caller_saved_registers function attribute. + * This attribute means that function scratches only some of + * the caller saved registers defined by ABI. + * For BPF the set of such registers could be defined as follows: + * - R0 is scratched only if function is non-void; + * - R1-R5 are scratched only if corresponding parameter type is defined + * in the function prototype. + * + * The contract between kernel and clang allows to simultaneously use + * such functions and maintain backwards compatibility with old + * kernels that don't understand no_caller_saved_registers calls + * (nocsr for short): + * + * - for nocsr calls clang allocates registers as-if relevant r0-r5 + * registers are not scratched by the call; + * + * - as a post-processing step, clang visits each nocsr call and adds + * spill/fill for every live r0-r5; + * + * - stack offsets used for the spill/fill are allocated as minimal + * stack offsets in whole function and are not used for any other + * purposes; + * + * - when kernel loads a program, it looks for such patterns + * (nocsr function surrounded by spills/fills) and checks if + * spill/fill stack offsets are used exclusively in nocsr patterns; + * + * - if so, and if verifier or current JIT inlines the call to the + * nocsr function (e.g. a helper call), kernel removes unnecessary + * spill/fill pairs; + * + * - when old kernel loads a program, presence of spill/fill pairs + * keeps BPF program valid, albeit slightly less efficient. + * + * For example: + * + * r1 = 1; + * r2 = 2; + * *(u64 *)(r10 - 8) = r1; r1 = 1; + * *(u64 *)(r10 - 16) = r2; r2 = 2; + * call %[to_be_inlined] --> call %[to_be_inlined] + * r2 = *(u64 *)(r10 - 16); r0 = r1; + * r1 = *(u64 *)(r10 - 8); r0 += r2; + * r0 = r1; exit; + * r0 += r2; + * exit; + * + * The purpose of mark_nocsr_pattern_for_call is to: + * - look for such patterns; + * - mark spill and fill instructions in env->insn_aux_data[*].nocsr_pattern; + * - mark set env->insn_aux_data[*].nocsr_spills_num for call instruction; + * - update env->subprog_info[*]->nocsr_stack_off to find an offset + * at which nocsr spill/fill stack slots start. + * + * The .nocsr_pattern and .nocsr_stack_off are used by + * check_nocsr_stack_contract() to check if every stack access to + * nocsr spill/fill stack slot originates from spill/fill + * instructions, members of nocsr patterns. + * + * If such condition holds true for a subprogram, nocsr patterns could + * be rewritten by do_misc_fixups(). + * Otherwise nocsr patterns are not changed in the subprogram + * (code, presumably, generated by an older clang version). + * + * For example, it is *not* safe to remove spill/fill below: + * + * r1 = 1; + * *(u64 *)(r10 - 8) = r1; r1 = 1; + * call %[to_be_inlined] --> call %[to_be_inlined] + * r1 = *(u64 *)(r10 - 8); r0 = *(u64 *)(r10 - 8); <---- wrong !!! 
+ * r0 = *(u64 *)(r10 - 8); r0 += r1; + * r0 += r1; exit; + * exit; + */ +static void mark_nocsr_pattern_for_call(struct bpf_verifier_env *env, int t) +{ + struct bpf_insn *insns = env->prog->insnsi, *stx, *ldx; + struct bpf_subprog_info *subprog; + u32 csr_mask = call_csr_mask(env, &insns[t]); + u32 reg_mask = ~csr_mask | ~ALL_CALLER_SAVED_REGS; + int s, i; + s16 off; + + if (csr_mask == ALL_CALLER_SAVED_REGS) + return; + + for (i = 1, off = 0; i <= ARRAY_SIZE(caller_saved); ++i, off += BPF_REG_SIZE) { + if (t - i < 0 || t + i >= env->prog->len) + break; + stx = &insns[t - i]; + ldx = &insns[t + i]; + if (off == 0) { + off = stx->off; + if (off % BPF_REG_SIZE != 0) + break; + } + if (/* *(u64 *)(r10 - off) = r[0-5]? */ + stx->code != (BPF_STX | BPF_MEM | BPF_DW) || + stx->dst_reg != BPF_REG_10 || + /* r[0-5] = *(u64 *)(r10 - off)? */ + ldx->code != (BPF_LDX | BPF_MEM | BPF_DW) || + ldx->src_reg != BPF_REG_10 || + /* check spill/fill for the same reg and offset */ + stx->src_reg != ldx->dst_reg || + stx->off != ldx->off || + stx->off != off || + /* this should be a previously unseen register */ + BIT(stx->src_reg) & reg_mask) + break; + reg_mask |= BIT(stx->src_reg); + env->insn_aux_data[t - i].nocsr_pattern = 1; + env->insn_aux_data[t + i].nocsr_pattern = 1; + } + if (i == 1) + return; + env->insn_aux_data[t].nocsr_spills_num = i - 1; + s = find_containing_subprog(env, t); + /* can't happen */ + if (WARN_ON_ONCE(s < 0)) + return; + subprog = &env->subprog_info[s]; + subprog->nocsr_stack_off = min(subprog->nocsr_stack_off, off); +} + +/* Update the following fields when appropriate: + * - env->insn_aux_data[*].nocsr_pattern + * - env->insn_aux_data[*].spills_num and + * - env->subprog_info[*].nocsr_stack_off + * See mark_nocsr_pattern_for_call(). + */ +static int mark_nocsr_patterns(struct bpf_verifier_env *env) +{ + struct bpf_insn *insn = env->prog->insnsi; + int i, insn_cnt = env->prog->len; + + for (i = 0; i < insn_cnt; i++, insn++) + /* might be extended to handle kfuncs as well */ + if (bpf_helper_call(insn)) + mark_nocsr_pattern_for_call(env, i); + return 0; +} + /* Visits the instruction at index t and returns one of the following: * < 0 - an error occurred * DONE_EXPLORING - the instruction was fully explored @@ -20119,6 +20361,48 @@ static int do_misc_fixups(struct bpf_verifier_env *env) goto next_insn; if (insn->src_reg == BPF_PSEUDO_CALL) goto next_insn; + /* Remove unnecessary spill/fill pairs, members of nocsr pattern */ + if (env->insn_aux_data[i + delta].nocsr_spills_num > 0) { + u32 j, spills_num = env->insn_aux_data[i + delta].nocsr_spills_num; + int err; + + /* don't apply this on a second visit */ + env->insn_aux_data[i + delta].nocsr_spills_num = 0; + + /* check if spill/fill stack access is in expected offset range */ + for (j = 1; j <= spills_num; ++j) { + if ((insn - j)->off >= subprogs[cur_subprog].nocsr_stack_off || + (insn + j)->off >= subprogs[cur_subprog].nocsr_stack_off) { + /* do a second visit of this instruction, + * so that verifier can inline it + */ + i -= 1; + insn -= 1; + goto next_insn; + } + } + + /* apply the rewrite: + * *(u64 *)(r10 - X) = rY ; num-times + * call() -> call() + * rY = *(u64 *)(r10 - X) ; num-times + */ + err = verifier_remove_insns(env, i + delta - spills_num, spills_num); + if (err) + return err; + err = verifier_remove_insns(env, i + delta - spills_num + 1, spills_num); + if (err) + return err; + + i += spills_num - 1; + /* ^ ^ do a second visit of this instruction, + * | '-- so that verifier can inline it + * '--------------- 
jump over deleted fills + */ + delta -= 2 * spills_num; + insn = env->prog->insnsi + i + delta; + goto next_insn; + } if (insn->src_reg == BPF_PSEUDO_KFUNC_CALL) { ret = fixup_kfunc_call(env, insn, insn_buf, i + delta, &cnt); if (ret) @@ -21704,6 +21988,10 @@ int bpf_check(struct bpf_prog **prog, union bpf_attr *attr, bpfptr_t uattr, __u3 if (ret < 0) goto skip_full_check; + ret = mark_nocsr_patterns(env); + if (ret < 0) + goto skip_full_check; + ret = do_check_main(env); ret = ret ?: do_check_subprogs(env); From patchwork Thu Jul 4 10:23:55 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Eduard Zingerman X-Patchwork-Id: 13723571 X-Patchwork-Delegate: bpf@iogearbox.net Received: from mail-pj1-f42.google.com (mail-pj1-f42.google.com [209.85.216.42]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id D068E1AB90D for ; Thu, 4 Jul 2024 10:24:23 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=209.85.216.42 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1720088665; cv=none; b=gKswQPNEig5W++6KH96eekr7Pp5pXYY+dmMJeLa++CGWpf0rOzDzIjEpNiTtlroddGoX6FTIGwbOImZpdSBuEFzEApx8vQukhnr9MEQmcSwQMFOG5axpTsQooMkvaFT7/Lc+OTXuco9FZe4gGLCE3Gu/VF5S1AgLYD+aVXuwzu0= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1720088665; c=relaxed/simple; bh=AzvXWRFLXr7xIiY/rivzljP18btI67BmTenXFccTDA4=; h=From:To:Cc:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version; b=m1XiRC5+aJmYISZeXUl7RACdDPW/1ByqaFwgbwBPTI1sojMHPn41vIxGkMbjjnxqMFnLvr9BzSCS/K1I7D7kttDqaECHF2/iUPfR5ud5bSYpt6ULlHKHuBadLS6gg21LvanNh7VpDq9ZK4j3mx3vo5NQj9xBYQJspLllRh504As= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=gmail.com; spf=pass smtp.mailfrom=gmail.com; dkim=pass (2048-bit key) header.d=gmail.com header.i=@gmail.com header.b=cYBdov6U; arc=none smtp.client-ip=209.85.216.42 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=gmail.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=gmail.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=gmail.com header.i=@gmail.com header.b="cYBdov6U" Received: by mail-pj1-f42.google.com with SMTP id 98e67ed59e1d1-2c1a4192d55so358666a91.2 for ; Thu, 04 Jul 2024 03:24:23 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20230601; t=1720088663; x=1720693463; darn=vger.kernel.org; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=OrBDEFVo0cfjnqmOyEfJiub0G37zOhdIJoBPwbO1g1U=; b=cYBdov6UJX0Mu0Q0rX+oQan7MI6h+Yz2FyRsRp04GF3wSDtk6lexYEUSWT/Mh1xICL a/dSpDK7k7vj5FNhD1CrDhCAdPAsEPgwGCoYGJCZs19tERWRAO0djgc2B/lRlYnSi7Bm +jGFRhgwfU+UAdreeDFT8wrjpT7gCBjhXQN1vkt5hkf1CqZPwG3aCgb6ykbKksHYV52b Dzk1QBBCk8HVWsnzV6bnnCGJFJYyLyshwjlmk+osTuW38q31prZE3LZueglOigMPssRW i0munO6OoRzllVC2GXR/+RdeqK6Eq81GWfYh7lbDpQd99LS4dKR5MsKZ8ZlxkigkZfoP QILw== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1720088663; x=1720693463; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=OrBDEFVo0cfjnqmOyEfJiub0G37zOhdIJoBPwbO1g1U=; 
b=o6szb2+BZvh0BS9Ew5Ir0oJZXp0DtFswa+uR7bNrGxfNVfAHI/Ja7ag8OaZ9ggXNGF Y8PURgjKKzfNDZm/C1eC4yfTrolD2gpmPlx3JH41S+GLpRxPt0ExankJNF68JhJ7Gxl0 k8EiuEBW/FlqYoljNQ9LHou+PDkDG8tAZObQQuqO00r8CrtJJIW203cDH57ttsjQPMlV BH2H5rGw5eMbJNDpTlpzDB3IhMQJL+dtUlhxbY1mlxCgSAYdJ7Mtjjgl2GESdW34fHi1 gUL1a/cux3Ew9ZgC048b6DQeANhxkcYJ+furJLJbUl5KkftbkLjBw3sLKdMvtQ0Bsj7l kpLA== X-Gm-Message-State: AOJu0YwbTmv1ukwFXEJh5Nkg/oROeIT1qwcTG2Y0MeCGGu65oF9viy15 eyNqYnOPiVTd6k+Yd/lBxBfZxb52QeRS7ibILZ43PwM3bmGVqUUOk2EgYA== X-Google-Smtp-Source: AGHT+IFNUcdBD4+cPDS29qFQ2jxEe42xCFHivWE1llZUrdm+6s05Z/4hJ29drNKyUfIWqzhkvEvbrQ== X-Received: by 2002:a17:90a:6f82:b0:2c9:7a8d:43f7 with SMTP id 98e67ed59e1d1-2c99c5700d4mr886756a91.23.1720088661741; Thu, 04 Jul 2024 03:24:21 -0700 (PDT) Received: from badger.. ([38.34.87.7]) by smtp.gmail.com with ESMTPSA id 98e67ed59e1d1-2c9a4c0fe8dsm216693a91.0.2024.07.04.03.24.21 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Thu, 04 Jul 2024 03:24:21 -0700 (PDT) From: Eduard Zingerman To: bpf@vger.kernel.org, ast@kernel.org Cc: andrii@kernel.org, daniel@iogearbox.net, martin.lau@linux.dev, kernel-team@fb.com, yonghong.song@linux.dev, puranjay@kernel.org, jose.marchesi@oracle.com, Eduard Zingerman Subject: [RFC bpf-next v2 3/9] bpf, x86, riscv, arm: no_caller_saved_registers for bpf_get_smp_processor_id() Date: Thu, 4 Jul 2024 03:23:55 -0700 Message-ID: <20240704102402.1644916-4-eddyz87@gmail.com> X-Mailer: git-send-email 2.45.2 In-Reply-To: <20240704102402.1644916-1-eddyz87@gmail.com> References: <20240704102402.1644916-1-eddyz87@gmail.com> Precedence: bulk X-Mailing-List: bpf@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-Patchwork-Delegate: bpf@iogearbox.net X-Patchwork-State: RFC The function bpf_get_smp_processor_id() is processed in a different way, depending on the arch: - on x86 verifier replaces call to bpf_get_smp_processor_id() with a sequence of instructions that modify only r0; - on riscv64 jit replaces call to bpf_get_smp_processor_id() with a sequence of instructions that modify only r0; - on arm64 jit replaces call to bpf_get_smp_processor_id() with a sequence of instructions that modify only r0 and tmp registers. These rewrites satisfy attribute no_caller_saved_registers contract. Allow rewrite of no_caller_saved_registers patterns for bpf_get_smp_processor_id() in order to use this function as a canary for no_caller_saved_registers tests. 
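For context, a sketch of the kind of program such tests could use as a canary (not taken from this series; whether clang actually emits the nocsr pattern here depends on the helper declaration carrying the attribute):

    #include <linux/bpf.h>
    #include <bpf/bpf_helpers.h>

    /* Sketch of a canary program: 'ns' is live across the
     * bpf_get_smp_processor_id() call. With a clang that marks this helper
     * as no_caller_saved_registers, the call gets wrapped in the spill/fill
     * pattern, which the verifier may remove again after inlining the helper.
     */
    SEC("raw_tp")
    int canary_prog(void *ctx)
    {
    	__u64 ns = bpf_ktime_get_ns();
    	__u32 cpu = bpf_get_smp_processor_id();

    	return (int)(ns + cpu);
    }

    char _license[] SEC("license") = "GPL";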
Signed-off-by: Eduard Zingerman --- kernel/bpf/helpers.c | 1 + kernel/bpf/verifier.c | 11 +++++++++-- 2 files changed, 10 insertions(+), 2 deletions(-) diff --git a/kernel/bpf/helpers.c b/kernel/bpf/helpers.c index 229396172026..26863b162a88 100644 --- a/kernel/bpf/helpers.c +++ b/kernel/bpf/helpers.c @@ -158,6 +158,7 @@ const struct bpf_func_proto bpf_get_smp_processor_id_proto = { .func = bpf_get_smp_processor_id, .gpl_only = false, .ret_type = RET_INTEGER, + .allow_nocsr = true, }; BPF_CALL_0(bpf_get_numa_node_id) diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c index d16a249b59ad..99115c552e3b 100644 --- a/kernel/bpf/verifier.c +++ b/kernel/bpf/verifier.c @@ -16029,7 +16029,14 @@ static u8 get_helper_reg_mask(const struct bpf_func_proto *fn) */ static bool verifier_inlines_helper_call(struct bpf_verifier_env *env, s32 imm) { - return false; + switch (imm) { +#ifdef CONFIG_X86_64 + case BPF_FUNC_get_smp_processor_id: + return env->prog->jit_requested && bpf_jit_supports_percpu_insn(); +#endif + default: + return false; + } } /* If 'insn' is a call that follows no_caller_saved_registers contract @@ -20703,7 +20710,7 @@ static int do_misc_fixups(struct bpf_verifier_env *env) #ifdef CONFIG_X86_64 /* Implement bpf_get_smp_processor_id() inline. */ if (insn->imm == BPF_FUNC_get_smp_processor_id && - prog->jit_requested && bpf_jit_supports_percpu_insn()) { + verifier_inlines_helper_call(env, insn->imm)) { /* BPF_FUNC_get_smp_processor_id inlining is an * optimization, so if pcpu_hot.cpu_number is ever * changed in some incompatible and hard to support From patchwork Thu Jul 4 10:23:56 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Eduard Zingerman X-Patchwork-Id: 13723572 X-Patchwork-Delegate: bpf@iogearbox.net Received: from mail-pj1-f44.google.com (mail-pj1-f44.google.com [209.85.216.44]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 123A013C8FF for ; Thu, 4 Jul 2024 10:24:24 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=209.85.216.44 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1720088666; cv=none; b=KU81W79KRocK7t8OfkNcpFTj2Aqnw5DOz2t6zC8gEFK0mUqcMGMsSkgtde8fS+eQAHTv1X4cgXrFOhRgN0BEJm93L0qxETuGDYl5mfq8maRGmMsrtXn2zcw0fo+Ye6WB/BmIjaVAIcvEJlYRKi5WSXPWFLb6UQeLh6SJHn1OAYY= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1720088666; c=relaxed/simple; bh=xnASLsCOBryRuodIdDKREqMt8MKr9ttqgtV/XcxU2N4=; h=From:To:Cc:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version; b=Fe1QPRilLFHISINOxnm+e30h+Kqz3aRuIEIViScrMW/dWoj0bonMXijZ0p3VBToJ1++9CT1LLKwOwE79oNSzdJPJy2qzjchDB2A9N3DRDRlKNltfGlHI5w7GaY/F8lIFFxmMIzN67asBI2p7vSQvQfZWdnLQiEp8DR5zH20aS4A= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=gmail.com; spf=pass smtp.mailfrom=gmail.com; dkim=pass (2048-bit key) header.d=gmail.com header.i=@gmail.com header.b=PScnbChj; arc=none smtp.client-ip=209.85.216.44 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=gmail.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=gmail.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=gmail.com header.i=@gmail.com header.b="PScnbChj" Received: by mail-pj1-f44.google.com with SMTP id 98e67ed59e1d1-2c983d8bdc7so402212a91.0 
for ; Thu, 04 Jul 2024 03:24:24 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20230601; t=1720088664; x=1720693464; darn=vger.kernel.org; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=C24mtLz9Hjduk/1K2PNUEthFuyWN0AEt97R5T0ddlL0=; b=PScnbChjWPzToYd0XKlqiMEGi7J6eWu2vTF9i9Mv7iFSVhc/zAPaUV6aNfqzS68c9m GIc+IOIi/GmF/LzgD3IbCNzpmRfHQfAgPQyvu9sqPDltFIOfNZzi0FV2F/AqOd5zFuH9 3H0kXgDGJmrRaDd70cOamCIm6CnT4/u8aOWOjAjOA/mfGqKnXH0LJXaKiMVnriUU/SSi LsROxw7b4/m5cIMYoBVjeVXaWtTn9oic7dgCy8lw3g9oFkRpItL0c7rVSbwYMLpLFRa7 T3lSC1quqPuaG7rKawDYipSBoJ+pagWiRgtA9wLC5JdQ/r8Nnrf9m4hVp4pd+pU27LNS GZmQ== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1720088664; x=1720693464; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=C24mtLz9Hjduk/1K2PNUEthFuyWN0AEt97R5T0ddlL0=; b=MNKw/RorXNOFoE6mNHUI2jzPOhFfN2H4alIe8PjOr4Et2k3SlZvIOkXFTG5DIQ19oF jUD9ZXhF2azInknnigNDp1F+M02vssE/C0ITUmzDY8nbnfY4JUF/6Zme8g/PUSMzfufb 1fOyAEyyw4lO1Aa/WCpweEFxqKCF1TwlRQnzWz2ZWejZ5qd+Gu2rsK3MmwKZatjM00fw AqLLkvkoDmfgB7SDV20rwNkepDP6/qQMkaYabN328+WV617WbKmX+AxY2wha3fDcCpiQ 5xGFULJc0ch4DJw+571pgLCE6/z4JWx7onMRUHCGsu53eECpqerYJhY4TWZrxxHDv/xw 4I7w== X-Gm-Message-State: AOJu0YzxEAF+p+t4CDNXfwerS3Ni73x7Q8QPZlBZ3SmocIntMibE6SHM IM1ghtPRm4ZchVbasjeXZTAhm1oCorb48FdM+KqEvPyDncwERb/z7rc6fg== X-Google-Smtp-Source: AGHT+IFpconvVWGmswtTwjEfWe6fuzG6u+hl5/Z+y0jm3SByH1RkDdiwN0mdyxjBGRdP8gE5NNDFvg== X-Received: by 2002:a17:90a:d795:b0:2c9:8b23:15ba with SMTP id 98e67ed59e1d1-2c99c6c8eaemr913720a91.42.1720088663955; Thu, 04 Jul 2024 03:24:23 -0700 (PDT) Received: from badger.. ([38.34.87.7]) by smtp.gmail.com with ESMTPSA id 98e67ed59e1d1-2c9a4c0fe8dsm216693a91.0.2024.07.04.03.24.22 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Thu, 04 Jul 2024 03:24:23 -0700 (PDT) From: Eduard Zingerman To: bpf@vger.kernel.org, ast@kernel.org Cc: andrii@kernel.org, daniel@iogearbox.net, martin.lau@linux.dev, kernel-team@fb.com, yonghong.song@linux.dev, puranjay@kernel.org, jose.marchesi@oracle.com, Eduard Zingerman Subject: [RFC bpf-next v2 4/9] selftests/bpf: extract utility function for BPF disassembly Date: Thu, 4 Jul 2024 03:23:56 -0700 Message-ID: <20240704102402.1644916-5-eddyz87@gmail.com> X-Mailer: git-send-email 2.45.2 In-Reply-To: <20240704102402.1644916-1-eddyz87@gmail.com> References: <20240704102402.1644916-1-eddyz87@gmail.com> Precedence: bulk X-Mailing-List: bpf@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-Patchwork-Delegate: bpf@iogearbox.net X-Patchwork-State: RFC struct bpf_insn *disasm_insn(struct bpf_insn *insn, char *buf, size_t buf_sz); Disassembles instruction 'insn' to a text buffer 'buf'. Removes insn->code hex prefix added by kernel disassembly routine. Returns a pointer to the next instruction (increments insn by either 1 or 2). 
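A minimal usage sketch (function and variable names are illustrative; the loop mirrors the ctx_rewrite.c change below):

    #include <stdio.h>
    #include <linux/bpf.h>
    #include "disasm_helpers.h"

    /* Print one line of disassembly per instruction of an xlated program. */
    static void dump_prog(struct bpf_insn *insns, __u32 insn_cnt)
    {
    	struct bpf_insn *insn = insns, *end = insns + insn_cnt;
    	char buf[64];

    	while (insn < end) {
    		/* disasm_insn() returns a pointer to the next instruction,
    		 * advancing by 2 for ld_imm64, so wide instructions are
    		 * handled correctly.
    		 */
    		insn = disasm_insn(insn, buf, sizeof(buf));
    		printf("%s\n", buf);
    	}
    }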
Signed-off-by: Eduard Zingerman Acked-by: Andrii Nakryiko --- tools/testing/selftests/bpf/Makefile | 1 + tools/testing/selftests/bpf/disasm_helpers.c | 51 +++++++++++++ tools/testing/selftests/bpf/disasm_helpers.h | 12 +++ .../selftests/bpf/prog_tests/ctx_rewrite.c | 74 +++---------------- tools/testing/selftests/bpf/testing_helpers.c | 1 + 5 files changed, 75 insertions(+), 64 deletions(-) create mode 100644 tools/testing/selftests/bpf/disasm_helpers.c create mode 100644 tools/testing/selftests/bpf/disasm_helpers.h diff --git a/tools/testing/selftests/bpf/Makefile b/tools/testing/selftests/bpf/Makefile index e0b3887b3d2d..5eb7b5eb89d2 100644 --- a/tools/testing/selftests/bpf/Makefile +++ b/tools/testing/selftests/bpf/Makefile @@ -636,6 +636,7 @@ TRUNNER_EXTRA_SOURCES := test_progs.c \ test_loader.c \ xsk.c \ disasm.c \ + disasm_helpers.c \ json_writer.c \ flow_dissector_load.h \ ip_check_defrag_frags.h diff --git a/tools/testing/selftests/bpf/disasm_helpers.c b/tools/testing/selftests/bpf/disasm_helpers.c new file mode 100644 index 000000000000..96b1f2ffe438 --- /dev/null +++ b/tools/testing/selftests/bpf/disasm_helpers.c @@ -0,0 +1,51 @@ +// SPDX-License-Identifier: (LGPL-2.1 OR BSD-2-Clause) + +#include +#include "disasm.h" + +struct print_insn_context { + char *buf; + size_t sz; +}; + +static void print_insn_cb(void *private_data, const char *fmt, ...) +{ + struct print_insn_context *ctx = private_data; + va_list args; + + va_start(args, fmt); + vsnprintf(ctx->buf, ctx->sz, fmt, args); + va_end(args); +} + +struct bpf_insn *disasm_insn(struct bpf_insn *insn, char *buf, size_t buf_sz) +{ + struct print_insn_context ctx = { + .buf = buf, + .sz = buf_sz, + }; + struct bpf_insn_cbs cbs = { + .cb_print = print_insn_cb, + .private_data = &ctx, + }; + char *tmp, *pfx_end, *sfx_start; + bool double_insn; + int len; + + print_bpf_insn(&cbs, insn, true); + /* We share code with kernel BPF disassembler, it adds '(FF) ' prefix + * for each instruction (FF stands for instruction `code` byte). + * Remove the prefix inplace, and also simplify call instructions. + * E.g.: "(85) call foo#10" -> "call foo". + * Also remove newline in the end (the 'max(strlen(buf) - 1, 0)' thing). + */ + pfx_end = buf + 5; + sfx_start = buf + max((int)strlen(buf) - 1, 0); + if (strncmp(pfx_end, "call ", 5) == 0 && (tmp = strrchr(buf, '#'))) + sfx_start = tmp; + len = sfx_start - pfx_end; + memmove(buf, pfx_end, len); + buf[len] = 0; + double_insn = insn->code == (BPF_LD | BPF_IMM | BPF_DW); + return insn + (double_insn ? 
2 : 1); +} diff --git a/tools/testing/selftests/bpf/disasm_helpers.h b/tools/testing/selftests/bpf/disasm_helpers.h new file mode 100644 index 000000000000..7b26cab70099 --- /dev/null +++ b/tools/testing/selftests/bpf/disasm_helpers.h @@ -0,0 +1,12 @@ +/* SPDX-License-Identifier: (LGPL-2.1 OR BSD-2-Clause) */ + +#ifndef __DISASM_HELPERS_H +#define __DISASM_HELPERS_H + +#include + +struct bpf_insn; + +struct bpf_insn *disasm_insn(struct bpf_insn *insn, char *buf, size_t buf_sz); + +#endif /* __DISASM_HELPERS_H */ diff --git a/tools/testing/selftests/bpf/prog_tests/ctx_rewrite.c b/tools/testing/selftests/bpf/prog_tests/ctx_rewrite.c index 08b6391f2f56..dd75ccb03770 100644 --- a/tools/testing/selftests/bpf/prog_tests/ctx_rewrite.c +++ b/tools/testing/selftests/bpf/prog_tests/ctx_rewrite.c @@ -10,7 +10,8 @@ #include "bpf/btf.h" #include "bpf_util.h" #include "linux/filter.h" -#include "disasm.h" +#include "linux/kernel.h" +#include "disasm_helpers.h" #define MAX_PROG_TEXT_SZ (32 * 1024) @@ -628,63 +629,6 @@ static bool match_pattern(struct btf *btf, char *pattern, char *text, char *reg_ return false; } -static void print_insn(void *private_data, const char *fmt, ...) -{ - va_list args; - - va_start(args, fmt); - vfprintf((FILE *)private_data, fmt, args); - va_end(args); -} - -/* Disassemble instructions to a stream */ -static void print_xlated(FILE *out, struct bpf_insn *insn, __u32 len) -{ - const struct bpf_insn_cbs cbs = { - .cb_print = print_insn, - .cb_call = NULL, - .cb_imm = NULL, - .private_data = out, - }; - bool double_insn = false; - int i; - - for (i = 0; i < len; i++) { - if (double_insn) { - double_insn = false; - continue; - } - - double_insn = insn[i].code == (BPF_LD | BPF_IMM | BPF_DW); - print_bpf_insn(&cbs, insn + i, true); - } -} - -/* We share code with kernel BPF disassembler, it adds '(FF) ' prefix - * for each instruction (FF stands for instruction `code` byte). - * This function removes the prefix inplace for each line in `str`. - */ -static void remove_insn_prefix(char *str, int size) -{ - const int prefix_size = 5; - - int write_pos = 0, read_pos = prefix_size; - int len = strlen(str); - char c; - - size = min(size, len); - - while (read_pos < size) { - c = str[read_pos++]; - if (c == 0) - break; - str[write_pos++] = c; - if (c == '\n') - read_pos += prefix_size; - } - str[write_pos] = 0; -} - struct prog_info { char *prog_kind; enum bpf_prog_type prog_type; @@ -699,9 +643,10 @@ static void match_program(struct btf *btf, char *reg_map[][2], bool skip_first_insn) { - struct bpf_insn *buf = NULL; + struct bpf_insn *buf = NULL, *insn, *insn_end; int err = 0, prog_fd = 0; FILE *prog_out = NULL; + char insn_buf[64]; char *text = NULL; __u32 cnt = 0; @@ -739,12 +684,13 @@ static void match_program(struct btf *btf, PRINT_FAIL("Can't open memory stream\n"); goto out; } - if (skip_first_insn) - print_xlated(prog_out, buf + 1, cnt - 1); - else - print_xlated(prog_out, buf, cnt); + insn_end = buf + cnt; + insn = buf + (skip_first_insn ? 
1 : 0); + while (insn < insn_end) { + insn = disasm_insn(insn, insn_buf, sizeof(insn_buf)); + fprintf(prog_out, "%s\n", insn_buf); + } fclose(prog_out); - remove_insn_prefix(text, MAX_PROG_TEXT_SZ); ASSERT_TRUE(match_pattern(btf, pattern, text, reg_map), pinfo->prog_kind); diff --git a/tools/testing/selftests/bpf/testing_helpers.c b/tools/testing/selftests/bpf/testing_helpers.c index d5379a0e6da8..ac7c66f4fc7b 100644 --- a/tools/testing/selftests/bpf/testing_helpers.c +++ b/tools/testing/selftests/bpf/testing_helpers.c @@ -7,6 +7,7 @@ #include #include #include +#include "disasm.h" #include "test_progs.h" #include "testing_helpers.h" #include From patchwork Thu Jul 4 10:23:57 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Eduard Zingerman X-Patchwork-Id: 13723573 X-Patchwork-Delegate: bpf@iogearbox.net Received: from mail-pj1-f43.google.com (mail-pj1-f43.google.com [209.85.216.43]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id EFD461ABC25 for ; Thu, 4 Jul 2024 10:24:25 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=209.85.216.43 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1720088668; cv=none; b=Ruhi/Z8P2md6MT8bOQUXGet78JD3+KfXlR+tIXRUDPTtfFWJQijAAu+b3PFG6ise1HKepmSE3a3bktOKHK2CuuH0eT3T2ZL/DXLIRRj8WbXJveurX1DFcVWLaEOgDmpgDSMNM+pjSWQwx2wTBm6XmpbhnOL3yeu6fb3+2/y16ug= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1720088668; c=relaxed/simple; bh=3VHWvWIRoYm0OHxNSWyRtzLOCPr1yVWGX3xvOTYUc4Q=; h=From:To:Cc:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version; b=hKkTiVQD35jj4RXYv2PuLKkhcQPHSklkiKeS04I+gmVBnIjYPdOHKJrBA66zJZdm1Mnw3yaOZvGk/rsgXgduY787Ky965+NeuepAbhjiyPLWrane5FFJpexOukVtmNHVVFQdsjLDjpOchNksoTAQy7dRH7JAJvj273oCBOQANro= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=gmail.com; spf=pass smtp.mailfrom=gmail.com; dkim=pass (2048-bit key) header.d=gmail.com header.i=@gmail.com header.b=HCtX56oY; arc=none smtp.client-ip=209.85.216.43 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=gmail.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=gmail.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=gmail.com header.i=@gmail.com header.b="HCtX56oY" Received: by mail-pj1-f43.google.com with SMTP id 98e67ed59e1d1-2c9a1ea8cc3so188023a91.0 for ; Thu, 04 Jul 2024 03:24:25 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20230601; t=1720088665; x=1720693465; darn=vger.kernel.org; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=l3Bs+tT/4cSTOV0X1Cvoo1p89KZrjCaeAGPrKtOb4Fc=; b=HCtX56oYHP6GRExTMp2JkulrQ7tGrHjvWBJFOr3vHdvXdsz7ca3UUmPqiSDflOHrIL b3XYmvrbGPyu1g+jWAR9IgdDr03RcqHCtQzlfnUtwNaDloGxmFvQQEPTymlArbt7R+y2 zuyLB0V2IEXYZ2WoPg87+MbMZ92yYUi36XmCTF77iKtk11iEOuBPIM2X87emBoJCaPjz ferr6/cIeW5UL8YNeWszF9HZY1BwfBrkci/AvTqPR2BHuOO4s1lgSNb/l7nuJmkXuiyf aDGq6enTsK10dIdDX1Si34bjbaXZpldQptEjiuoiJ0GhQjJqw6LCic3TRSt/qXFm1Aw0 h19w== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1720088665; x=1720693465; h=content-transfer-encoding:mime-version:references:in-reply-to 
:message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=l3Bs+tT/4cSTOV0X1Cvoo1p89KZrjCaeAGPrKtOb4Fc=; b=OuyOMitSsXvbMRx6hbBHn1ELXHTu1rC419SkMNPvMLQryQm4tWdUdECC4MC79i6TGV 45czuh5Gtbtw9iPnPdjDxEThrOXF7+vzDna9ErEXQ/7l9se3NYi0Bq+QBtn/o2lf/J7y M37gkxt8DsLm8hCFQHWeOzAIK7N/pQI2oU9Xl6ZDCe8CvwBYxcWf6aLLsbL4nHQTNPXc P7CeA+i59eopc1KuAzTra7OqjVwijzz9FO6q9A78X5B76n4B65c92Ke57+gKV2eR2q6Y WpgU5hLVH3NwcZ1eQv1yAfSNhnWR6VviMFcSeuyIXlzQxaBJvrmbbVJrH4TtOR7bzIwU hfjg== X-Gm-Message-State: AOJu0Yzcte7gahputGDh70Dy6ujeeOqdGhgQLW12a9XYOACjMi00gFfz n8aIOMmcItnX288pG5zNT79Vub/lZwzw5jUOo95HHTF2SoI8fLi7wQWPzA== X-Google-Smtp-Source: AGHT+IFlL78F9EbBDZp9k1zRzHEukCzWVLGqxb4vZspE6hzfZ1IvFeM2YHoeJCDY+hJjzFvQFSA7rQ== X-Received: by 2002:a17:90a:d314:b0:2c9:6a0e:6e66 with SMTP id 98e67ed59e1d1-2c99f2fd0e9mr1493766a91.5.1720088665057; Thu, 04 Jul 2024 03:24:25 -0700 (PDT) Received: from badger.. ([38.34.87.7]) by smtp.gmail.com with ESMTPSA id 98e67ed59e1d1-2c9a4c0fe8dsm216693a91.0.2024.07.04.03.24.24 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Thu, 04 Jul 2024 03:24:24 -0700 (PDT) From: Eduard Zingerman To: bpf@vger.kernel.org, ast@kernel.org Cc: andrii@kernel.org, daniel@iogearbox.net, martin.lau@linux.dev, kernel-team@fb.com, yonghong.song@linux.dev, puranjay@kernel.org, jose.marchesi@oracle.com, Eduard Zingerman Subject: [RFC bpf-next v2 5/9] selftests/bpf: no need to track next_match_pos in struct test_loader Date: Thu, 4 Jul 2024 03:23:57 -0700 Message-ID: <20240704102402.1644916-6-eddyz87@gmail.com> X-Mailer: git-send-email 2.45.2 In-Reply-To: <20240704102402.1644916-1-eddyz87@gmail.com> References: <20240704102402.1644916-1-eddyz87@gmail.com> Precedence: bulk X-Mailing-List: bpf@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-Patchwork-Delegate: bpf@iogearbox.net X-Patchwork-State: RFC The call stack for validate_case() function looks as follows: - test_loader__run_subtests() - process_subtest() - run_subtest() - prepare_case(), which does 'tester->next_match_pos = 0'; - validate_case(), which increments tester->next_match_pos. Hence, each subtest is run with next_match_pos freshly set to zero. Meaning that there is no need to persist this variable in the struct test_loader, use local variable instead. 
Acked-by: Andrii Nakryiko Signed-off-by: Eduard Zingerman --- tools/testing/selftests/bpf/test_loader.c | 19 ++++++++----------- tools/testing/selftests/bpf/test_progs.h | 1 - 2 files changed, 8 insertions(+), 12 deletions(-) diff --git a/tools/testing/selftests/bpf/test_loader.c b/tools/testing/selftests/bpf/test_loader.c index f14e10b0de96..47508cf66e89 100644 --- a/tools/testing/selftests/bpf/test_loader.c +++ b/tools/testing/selftests/bpf/test_loader.c @@ -434,7 +434,6 @@ static void prepare_case(struct test_loader *tester, bpf_program__set_flags(prog, prog_flags | spec->prog_flags); tester->log_buf[0] = '\0'; - tester->next_match_pos = 0; } static void emit_verifier_log(const char *log_buf, bool force) @@ -450,25 +449,23 @@ static void validate_case(struct test_loader *tester, struct bpf_program *prog, int load_err) { - int i, j, err; - char *match; regmatch_t reg_match[1]; + const char *log = tester->log_buf; + int i, j, err; for (i = 0; i < subspec->expect_msg_cnt; i++) { struct expect_msg *msg = &subspec->expect_msgs[i]; + const char *match = NULL; if (msg->substr) { - match = strstr(tester->log_buf + tester->next_match_pos, msg->substr); + match = strstr(log, msg->substr); if (match) - tester->next_match_pos = match - tester->log_buf + strlen(msg->substr); + log += strlen(msg->substr); } else { - err = regexec(&msg->regex, - tester->log_buf + tester->next_match_pos, 1, reg_match, 0); + err = regexec(&msg->regex, log, 1, reg_match, 0); if (err == 0) { - match = tester->log_buf + tester->next_match_pos + reg_match[0].rm_so; - tester->next_match_pos += reg_match[0].rm_eo; - } else { - match = NULL; + match = log + reg_match[0].rm_so; + log += reg_match[0].rm_eo; } } diff --git a/tools/testing/selftests/bpf/test_progs.h b/tools/testing/selftests/bpf/test_progs.h index 0ba5a20b19ba..8e997de596db 100644 --- a/tools/testing/selftests/bpf/test_progs.h +++ b/tools/testing/selftests/bpf/test_progs.h @@ -438,7 +438,6 @@ typedef int (*pre_execution_cb)(struct bpf_object *obj); struct test_loader { char *log_buf; size_t log_buf_sz; - size_t next_match_pos; pre_execution_cb pre_execution_cb; struct bpf_object *obj; From patchwork Thu Jul 4 10:23:58 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Eduard Zingerman X-Patchwork-Id: 13723574 X-Patchwork-Delegate: bpf@iogearbox.net Received: from mail-pf1-f176.google.com (mail-pf1-f176.google.com [209.85.210.176]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 1E7961ABC28 for ; Thu, 4 Jul 2024 10:24:26 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=209.85.210.176 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1720088668; cv=none; b=HI2VFhlcCIGQuMkhB5tyT6poKIIyt8JiXEWfHJhIT+7rbR7fTIUg8Z4yfPvG6xexVfjxANJTU6xHZmKD9pd/Bm+ppgh5DRlViScwF0BTcfUJT2mygLkgjHYd0V8EzXzVyRHDfbxxYCRDzGuySt68DjojrMd3TahTVT7RR3F9beg= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1720088668; c=relaxed/simple; bh=zHssEFSew6/7/I5d3v96ywaAl8Z3NeUArp5+VNQy/C0=; h=From:To:Cc:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version; b=IHSaJaeG6f4Csy0okYB64YdOLWk9PHUl9pZ8Tu0AE43Ows1eIYuyevW4mN+4zyLA8khtBJsQ6s5MSwRE3/mctqKpnKVYtywsJRwYEskC081rJk9v8gLKVlTXHNKgK6km4T9ZtUhBOFpnCPt+6+LBpU3RvgzrA+BSRrHYpeC8CTw= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) 
header.from=gmail.com; spf=pass smtp.mailfrom=gmail.com; dkim=pass (2048-bit key) header.d=gmail.com header.i=@gmail.com header.b=InDW6LzQ; arc=none smtp.client-ip=209.85.210.176 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=gmail.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=gmail.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=gmail.com header.i=@gmail.com header.b="InDW6LzQ" Received: by mail-pf1-f176.google.com with SMTP id d2e1a72fcca58-70aff4e3f6dso361265b3a.3 for ; Thu, 04 Jul 2024 03:24:26 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20230601; t=1720088666; x=1720693466; darn=vger.kernel.org; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=Skpti20qxoE6XuXbBaxdLW/TvxJxIRzVhmXOURRx+sc=; b=InDW6LzQt5oc5e/YuYnaGUvUHcRJv/1pksaaTMytS9pcjWtThcP/w6+v4GJUKSl0vr qeID3dDcFDHn8Qe9oYPIBEV2IoGSPDAiI9y3snUQeO5WYyKaBmXrSVmiY8kVoQIHfmet Rqm+ESvrf6ZglmPntRIOcljc6HVP2I0NeeJOGXIogKWC0Ra8RP+q9WkqJ6MpNypwKOd6 zoZU6nQwHuga/pQTy4Ro37WqtR/cvCPV6SXdvZ0711GQ+xhqUl2gr4K9Fwx83j7lBlMf qkEQOf2YH/Hqfzx9xWZuDw63Iu5O3sSPDF1h3DqACepPZ2fsysBuhjmybgGjSKrmzowm wvKg== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1720088666; x=1720693466; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=Skpti20qxoE6XuXbBaxdLW/TvxJxIRzVhmXOURRx+sc=; b=THZwochfy0JoYKBriA5p6rs0Y7uLzhtUFD/st+hvE06jomEKPVjdmajdtqLilBfO9L rvTdVuo8wZhDOrVJfBf67cWAMPIntq+DeoOBfKOJYrb6RnFBxqwQxefFVIdD3CqbE4ii kEeuJ+zhaTpwdUpxm5x2mfFHnompmatr7BcnlXRlxWjVkX4MqBSYbFQCsKKKxSDU7XI+ LBIOEJvv9yz8bMxGhrRF+TC+v++VfQ4e2MT86L6ei+MEGFrKFuo1nCKdZfQCNzvSTZWx LG43K24BA5PnEWsy+zpI3gBRK2uQAWQMFiuj7xcPEg2RRYB12/00wktRDYEIgEXJzlOh 7e4g== X-Gm-Message-State: AOJu0YyZRxHlbCwCIwS0oCIPFoVshZQmzteBpVULGXRsgTjxu1TVeco1 JZv/eNYeK1kxVvBBmQV1oIGzobrvdMWuZmumGzGW12MS+kRXDU5so+L7dw== X-Google-Smtp-Source: AGHT+IEsRDbxqpFqfxa8iyBCfi3sPYVVtlJK1Ip7bHTb9Z9108Pvpul4mxx9WBy9bvdTfX8oC7LwqA== X-Received: by 2002:a05:6a20:c995:b0:1bd:23c7:ebfb with SMTP id adf61e73a8af0-1c0cc8db0f4mr1172934637.62.1720088665993; Thu, 04 Jul 2024 03:24:25 -0700 (PDT) Received: from badger.. ([38.34.87.7]) by smtp.gmail.com with ESMTPSA id 98e67ed59e1d1-2c9a4c0fe8dsm216693a91.0.2024.07.04.03.24.25 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Thu, 04 Jul 2024 03:24:25 -0700 (PDT) From: Eduard Zingerman To: bpf@vger.kernel.org, ast@kernel.org Cc: andrii@kernel.org, daniel@iogearbox.net, martin.lau@linux.dev, kernel-team@fb.com, yonghong.song@linux.dev, puranjay@kernel.org, jose.marchesi@oracle.com, Eduard Zingerman Subject: [RFC bpf-next v2 6/9] selftests/bpf: extract test_loader->expect_msgs as a data structure Date: Thu, 4 Jul 2024 03:23:58 -0700 Message-ID: <20240704102402.1644916-7-eddyz87@gmail.com> X-Mailer: git-send-email 2.45.2 In-Reply-To: <20240704102402.1644916-1-eddyz87@gmail.com> References: <20240704102402.1644916-1-eddyz87@gmail.com> Precedence: bulk X-Mailing-List: bpf@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-Patchwork-Delegate: bpf@iogearbox.net X-Patchwork-State: RFC Non-functional change: use a separate data structure to represented expected messages in test_loader. 
This would allow to use the same functionality for expected set of disassembled instructions in the follow-up commit. Acked-by: Andrii Nakryiko Signed-off-by: Eduard Zingerman --- tools/testing/selftests/bpf/test_loader.c | 81 ++++++++++++----------- 1 file changed, 41 insertions(+), 40 deletions(-) diff --git a/tools/testing/selftests/bpf/test_loader.c b/tools/testing/selftests/bpf/test_loader.c index 47508cf66e89..3f84903558dd 100644 --- a/tools/testing/selftests/bpf/test_loader.c +++ b/tools/testing/selftests/bpf/test_loader.c @@ -55,11 +55,15 @@ struct expect_msg { regex_t regex; }; +struct expected_msgs { + struct expect_msg *patterns; + size_t cnt; +}; + struct test_subspec { char *name; bool expect_failure; - struct expect_msg *expect_msgs; - size_t expect_msg_cnt; + struct expected_msgs expect_msgs; int retval; bool execute; }; @@ -96,44 +100,45 @@ void test_loader_fini(struct test_loader *tester) free(tester->log_buf); } -static void free_test_spec(struct test_spec *spec) +static void free_msgs(struct expected_msgs *msgs) { int i; + for (i = 0; i < msgs->cnt; i++) + if (msgs->patterns[i].regex_str) + regfree(&msgs->patterns[i].regex); + free(msgs->patterns); + msgs->patterns = NULL; + msgs->cnt = 0; +} + +static void free_test_spec(struct test_spec *spec) +{ /* Deallocate expect_msgs arrays. */ - for (i = 0; i < spec->priv.expect_msg_cnt; i++) - if (spec->priv.expect_msgs[i].regex_str) - regfree(&spec->priv.expect_msgs[i].regex); - for (i = 0; i < spec->unpriv.expect_msg_cnt; i++) - if (spec->unpriv.expect_msgs[i].regex_str) - regfree(&spec->unpriv.expect_msgs[i].regex); + free_msgs(&spec->priv.expect_msgs); + free_msgs(&spec->unpriv.expect_msgs); free(spec->priv.name); free(spec->unpriv.name); - free(spec->priv.expect_msgs); - free(spec->unpriv.expect_msgs); - spec->priv.name = NULL; spec->unpriv.name = NULL; - spec->priv.expect_msgs = NULL; - spec->unpriv.expect_msgs = NULL; } -static int push_msg(const char *substr, const char *regex_str, struct test_subspec *subspec) +static int push_msg(const char *substr, const char *regex_str, struct expected_msgs *msgs) { void *tmp; int regcomp_res; char error_msg[100]; struct expect_msg *msg; - tmp = realloc(subspec->expect_msgs, - (1 + subspec->expect_msg_cnt) * sizeof(struct expect_msg)); + tmp = realloc(msgs->patterns, + (1 + msgs->cnt) * sizeof(struct expect_msg)); if (!tmp) { ASSERT_FAIL("failed to realloc memory for messages\n"); return -ENOMEM; } - subspec->expect_msgs = tmp; - msg = &subspec->expect_msgs[subspec->expect_msg_cnt]; + msgs->patterns = tmp; + msg = &msgs->patterns[msgs->cnt]; if (substr) { msg->substr = substr; @@ -150,7 +155,7 @@ static int push_msg(const char *substr, const char *regex_str, struct test_subsp } } - subspec->expect_msg_cnt += 1; + msgs->cnt += 1; return 0; } @@ -272,25 +277,25 @@ static int parse_test_spec(struct test_loader *tester, spec->mode_mask |= UNPRIV; } else if (str_has_pfx(s, TEST_TAG_EXPECT_MSG_PFX)) { msg = s + sizeof(TEST_TAG_EXPECT_MSG_PFX) - 1; - err = push_msg(msg, NULL, &spec->priv); + err = push_msg(msg, NULL, &spec->priv.expect_msgs); if (err) goto cleanup; spec->mode_mask |= PRIV; } else if (str_has_pfx(s, TEST_TAG_EXPECT_MSG_PFX_UNPRIV)) { msg = s + sizeof(TEST_TAG_EXPECT_MSG_PFX_UNPRIV) - 1; - err = push_msg(msg, NULL, &spec->unpriv); + err = push_msg(msg, NULL, &spec->unpriv.expect_msgs); if (err) goto cleanup; spec->mode_mask |= UNPRIV; } else if (str_has_pfx(s, TEST_TAG_EXPECT_REGEX_PFX)) { msg = s + sizeof(TEST_TAG_EXPECT_REGEX_PFX) - 1; - err = push_msg(NULL, msg, 
&spec->priv); + err = push_msg(NULL, msg, &spec->priv.expect_msgs); if (err) goto cleanup; spec->mode_mask |= PRIV; } else if (str_has_pfx(s, TEST_TAG_EXPECT_REGEX_PFX_UNPRIV)) { msg = s + sizeof(TEST_TAG_EXPECT_REGEX_PFX_UNPRIV) - 1; - err = push_msg(NULL, msg, &spec->unpriv); + err = push_msg(NULL, msg, &spec->unpriv.expect_msgs); if (err) goto cleanup; spec->mode_mask |= UNPRIV; @@ -387,11 +392,12 @@ static int parse_test_spec(struct test_loader *tester, spec->unpriv.execute = spec->priv.execute; } - if (!spec->unpriv.expect_msgs) { - for (i = 0; i < spec->priv.expect_msg_cnt; i++) { - struct expect_msg *msg = &spec->priv.expect_msgs[i]; + if (spec->unpriv.expect_msgs.cnt == 0) { + for (i = 0; i < spec->priv.expect_msgs.cnt; i++) { + struct expect_msg *msg = &spec->priv.expect_msgs.patterns[i]; - err = push_msg(msg->substr, msg->regex_str, &spec->unpriv); + err = push_msg(msg->substr, msg->regex_str, + &spec->unpriv.expect_msgs); if (err) goto cleanup; } @@ -443,18 +449,14 @@ static void emit_verifier_log(const char *log_buf, bool force) fprintf(stdout, "VERIFIER LOG:\n=============\n%s=============\n", log_buf); } -static void validate_case(struct test_loader *tester, - struct test_subspec *subspec, - struct bpf_object *obj, - struct bpf_program *prog, - int load_err) +static void validate_msgs(char *log_buf, struct expected_msgs *msgs) { regmatch_t reg_match[1]; - const char *log = tester->log_buf; + const char *log = log_buf; int i, j, err; - for (i = 0; i < subspec->expect_msg_cnt; i++) { - struct expect_msg *msg = &subspec->expect_msgs[i]; + for (i = 0; i < msgs->cnt; i++) { + struct expect_msg *msg = &msgs->patterns[i]; const char *match = NULL; if (msg->substr) { @@ -471,9 +473,9 @@ static void validate_case(struct test_loader *tester, if (!ASSERT_OK_PTR(match, "expect_msg")) { if (env.verbosity == VERBOSE_NONE) - emit_verifier_log(tester->log_buf, true /*force*/); + emit_verifier_log(log_buf, true /*force*/); for (j = 0; j <= i; j++) { - msg = &subspec->expect_msgs[j]; + msg = &msgs->patterns[j]; fprintf(stderr, "%s %s: '%s'\n", j < i ? "MATCHED " : "EXPECTED", msg->substr ? 
"SUBSTR" : " REGEX", @@ -692,9 +694,8 @@ void run_subtest(struct test_loader *tester, goto tobj_cleanup; } } - emit_verifier_log(tester->log_buf, false /*force*/); - validate_case(tester, subspec, tobj, tprog, err); + validate_msgs(tester->log_buf, &subspec->expect_msgs); if (should_do_test_run(spec, subspec)) { /* For some reason test_verifier executes programs From patchwork Thu Jul 4 10:23:59 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Eduard Zingerman X-Patchwork-Id: 13723575 X-Patchwork-Delegate: bpf@iogearbox.net Received: from mail-pg1-f175.google.com (mail-pg1-f175.google.com [209.85.215.175]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 192721AB916 for ; Thu, 4 Jul 2024 10:24:27 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=209.85.215.175 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1720088669; cv=none; b=gYchWCD3TkW5a90IgfTh+K71l5i9C3y9fdU60hAFPhZDF4F5tKr9/VQJKJhQ80WZfII00Big9WRaRfWCK3istujbosp40OUS3I+TZkZpWEuXJSQ8oJjW6SSYY9wtpcQUwtwBel9rGmFuN2S1F0I2xU5J4d7PjXLsmE/KlhYWS1o= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1720088669; c=relaxed/simple; bh=NBnUPa5kXeybE0bH0innWyI3Q4aBa3Uz3jPNvmzZ6f0=; h=From:To:Cc:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version; b=PyI2R/g+x8iIhkEIiljtxV8ulj5ltmoX+S4XO6y4q90eJ6l75ZN3A4A0sZIQgJSu0zz8PVV5mO5Pa6yAK1GIBzJpy1/Dp0efdaDBlUlSfi8Td3YfR4i0WchuJOUzuu9ZsHySvCVI/WbvTHYtr+uJ6SnN0MiOXln9Yo6kZp6kXrQ= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=gmail.com; spf=pass smtp.mailfrom=gmail.com; dkim=pass (2048-bit key) header.d=gmail.com header.i=@gmail.com header.b=H3JHRRKF; arc=none smtp.client-ip=209.85.215.175 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=gmail.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=gmail.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=gmail.com header.i=@gmail.com header.b="H3JHRRKF" Received: by mail-pg1-f175.google.com with SMTP id 41be03b00d2f7-7201cb6cae1so286841a12.2 for ; Thu, 04 Jul 2024 03:24:27 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20230601; t=1720088667; x=1720693467; darn=vger.kernel.org; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=dhxKVSVlOw2OBGqZZRk30w4flqSDWzJlPRFfdg9zOrs=; b=H3JHRRKFM+Yp5fAi4kl/FAMaiK//BypoDZSMNGJyyiyF0E1/3+CYMDwYUc0S35nO+U by+hPYcVimrxcrexfdMm6jyL4dUEPug25tN+wE7EW+cIr06RIGgboYs4QbTeSDHtXAWQ 3azPtJgs+SSOhKMc3zG0gfsmIJ6O+ui5AhW/XXOosDWQny1QNpLNHuxZnxw8DUCoH0+w tqU3lpDm3Et1nslxrpzvt/C9VmX8WA0pA4VYXC58sIR+cBsBgh7xK41h1NcE6CsAovZu V5tLPApoZ/xCY7x1R80bB7CKIGSxTOCnhKohO00reqxuH9qE5gpndg/6F9Xl+S3ERFPg HKWw== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1720088667; x=1720693467; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=dhxKVSVlOw2OBGqZZRk30w4flqSDWzJlPRFfdg9zOrs=; b=PTrr49vcuE5pMNFCXrv7OUX2lLXHxXPcQF0mIScdib2z0MPBE11HwKF7c7HQfEuNW+ YMibwshGREr7q4ezA7uD3/EusbAvAVbtTr58lD3pm6vr8iN4vp/CDOO5OURgW+p6YJi7 
From: Eduard Zingerman To: bpf@vger.kernel.org, ast@kernel.org Cc: andrii@kernel.org, daniel@iogearbox.net, martin.lau@linux.dev, kernel-team@fb.com, yonghong.song@linux.dev, puranjay@kernel.org, jose.marchesi@oracle.com, Eduard Zingerman Subject: [RFC bpf-next v2 7/9] selftests/bpf: allow checking xlated programs in verifier_* tests Date: Thu, 4 Jul 2024 03:23:59 -0700 Message-ID: <20240704102402.1644916-8-eddyz87@gmail.com> In-Reply-To: <20240704102402.1644916-1-eddyz87@gmail.com> References: <20240704102402.1644916-1-eddyz87@gmail.com> X-Patchwork-Delegate: bpf@iogearbox.net X-Patchwork-State: RFC
Add a macro __xlated("...") for use with test_loader tests. When such annotations are present for the test case:
- bpf_prog_get_info_by_fd() is used to get the BPF program after all rewrites are applied by the verifier.
- the program is disassembled and patterns specified in __xlated are searched for in the disassembly text.
__xlated matching follows the same mechanics as __msg: each subsequent pattern is matched from the point where the previous pattern ended. This allows writing tests like the one below, where the goal is to verify the behavior of one of the transformations applied by the verifier:
SEC("raw_tp") __xlated("1: w0 = ") __xlated("2: r0 = &(void __percpu *)(r0)") __xlated("3: r0 = *(u32 *)(r0 +0)") __xlated("4: exit") __success __naked void simple(void) { asm volatile ( "call %[bpf_get_smp_processor_id];" "exit;" : : __imm(bpf_get_smp_processor_id) : __clobber_all); }
Acked-by: Andrii Nakryiko Signed-off-by: Eduard Zingerman --- tools/testing/selftests/bpf/progs/bpf_misc.h | 5 ++ tools/testing/selftests/bpf/test_loader.c | 82 +++++++++++++++++++- 2 files changed, 84 insertions(+), 3 deletions(-)
diff --git a/tools/testing/selftests/bpf/progs/bpf_misc.h b/tools/testing/selftests/bpf/progs/bpf_misc.h index 81097a3f15eb..a70939c7bc26 100644 --- a/tools/testing/selftests/bpf/progs/bpf_misc.h +++ b/tools/testing/selftests/bpf/progs/bpf_misc.h @@ -26,6 +26,9 @@ * * __regex Same as __msg, but using a regular expression. * __regex_unpriv Same as __msg_unpriv but using a regular expression. + * __xlated Expect a line in a disassembly log after verifier applies rewrites. + * Multiple __xlated attributes could be specified. + * __xlated_unpriv Same as __xlated but for unprivileged mode. * * __success Expect program load success in privileged mode. * __success_unpriv Expect program load success in unprivileged mode.
@@ -63,11 +66,13 @@ */ #define __msg(msg) __attribute__((btf_decl_tag("comment:test_expect_msg=" msg))) #define __regex(regex) __attribute__((btf_decl_tag("comment:test_expect_regex=" regex))) +#define __xlated(msg) __attribute__((btf_decl_tag("comment:test_expect_xlated=" msg))) #define __failure __attribute__((btf_decl_tag("comment:test_expect_failure"))) #define __success __attribute__((btf_decl_tag("comment:test_expect_success"))) #define __description(desc) __attribute__((btf_decl_tag("comment:test_description=" desc))) #define __msg_unpriv(msg) __attribute__((btf_decl_tag("comment:test_expect_msg_unpriv=" msg))) #define __regex_unpriv(regex) __attribute__((btf_decl_tag("comment:test_expect_regex_unpriv=" regex))) +#define __xlated_unpriv(msg) __attribute__((btf_decl_tag("comment:test_expect_xlated_unpriv=" msg))) #define __failure_unpriv __attribute__((btf_decl_tag("comment:test_expect_failure_unpriv"))) #define __success_unpriv __attribute__((btf_decl_tag("comment:test_expect_success_unpriv"))) #define __log_level(lvl) __attribute__((btf_decl_tag("comment:test_log_level="#lvl))) diff --git a/tools/testing/selftests/bpf/test_loader.c b/tools/testing/selftests/bpf/test_loader.c index 3f84903558dd..b44b6a2fc82c 100644 --- a/tools/testing/selftests/bpf/test_loader.c +++ b/tools/testing/selftests/bpf/test_loader.c @@ -7,6 +7,7 @@ #include #include "autoconf_helper.h" +#include "disasm_helpers.h" #include "unpriv_helpers.h" #include "cap_helpers.h" @@ -19,10 +20,12 @@ #define TEST_TAG_EXPECT_SUCCESS "comment:test_expect_success" #define TEST_TAG_EXPECT_MSG_PFX "comment:test_expect_msg=" #define TEST_TAG_EXPECT_REGEX_PFX "comment:test_expect_regex=" +#define TEST_TAG_EXPECT_XLATED_PFX "comment:test_expect_xlated=" #define TEST_TAG_EXPECT_FAILURE_UNPRIV "comment:test_expect_failure_unpriv" #define TEST_TAG_EXPECT_SUCCESS_UNPRIV "comment:test_expect_success_unpriv" #define TEST_TAG_EXPECT_MSG_PFX_UNPRIV "comment:test_expect_msg_unpriv=" #define TEST_TAG_EXPECT_REGEX_PFX_UNPRIV "comment:test_expect_regex_unpriv=" +#define TEST_TAG_EXPECT_XLATED_PFX_UNPRIV "comment:test_expect_xlated_unpriv=" #define TEST_TAG_LOG_LEVEL_PFX "comment:test_log_level=" #define TEST_TAG_PROG_FLAGS_PFX "comment:test_prog_flags=" #define TEST_TAG_DESCRIPTION_PFX "comment:test_description=" @@ -64,6 +67,7 @@ struct test_subspec { char *name; bool expect_failure; struct expected_msgs expect_msgs; + struct expected_msgs expect_xlated; int retval; bool execute; }; @@ -117,6 +121,8 @@ static void free_test_spec(struct test_spec *spec) /* Deallocate expect_msgs arrays. 
*/ free_msgs(&spec->priv.expect_msgs); free_msgs(&spec->unpriv.expect_msgs); + free_msgs(&spec->priv.expect_xlated); + free_msgs(&spec->unpriv.expect_xlated); free(spec->priv.name); free(spec->unpriv.name); @@ -299,6 +305,18 @@ static int parse_test_spec(struct test_loader *tester, if (err) goto cleanup; spec->mode_mask |= UNPRIV; + } else if (str_has_pfx(s, TEST_TAG_EXPECT_XLATED_PFX)) { + msg = s + sizeof(TEST_TAG_EXPECT_XLATED_PFX) - 1; + err = push_msg(msg, NULL, &spec->priv.expect_xlated); + if (err) + goto cleanup; + spec->mode_mask |= PRIV; + } else if (str_has_pfx(s, TEST_TAG_EXPECT_XLATED_PFX_UNPRIV)) { + msg = s + sizeof(TEST_TAG_EXPECT_XLATED_PFX_UNPRIV) - 1; + err = push_msg(msg, NULL, &spec->unpriv.expect_xlated); + if (err) + goto cleanup; + spec->mode_mask |= UNPRIV; } else if (str_has_pfx(s, TEST_TAG_RETVAL_PFX)) { val = s + sizeof(TEST_TAG_RETVAL_PFX) - 1; err = parse_retval(val, &spec->priv.retval, "__retval"); @@ -402,6 +420,16 @@ static int parse_test_spec(struct test_loader *tester, goto cleanup; } } + if (spec->unpriv.expect_xlated.cnt == 0) { + for (i = 0; i < spec->priv.expect_xlated.cnt; i++) { + struct expect_msg *msg = &spec->priv.expect_xlated.patterns[i]; + + err = push_msg(msg->substr, msg->regex_str, + &spec->unpriv.expect_xlated); + if (err) + goto cleanup; + } + } } spec->valid = true; @@ -449,7 +477,15 @@ static void emit_verifier_log(const char *log_buf, bool force) fprintf(stdout, "VERIFIER LOG:\n=============\n%s=============\n", log_buf); } -static void validate_msgs(char *log_buf, struct expected_msgs *msgs) +static void emit_xlated(const char *xlated, bool force) +{ + if (!force && env.verbosity == VERBOSE_NONE) + return; + fprintf(stdout, "XLATED:\n=============\n%s=============\n", xlated); +} + +static void validate_msgs(char *log_buf, struct expected_msgs *msgs, + void (*emit_fn)(const char *buf, bool force)) { regmatch_t reg_match[1]; const char *log = log_buf; @@ -473,7 +509,7 @@ static void validate_msgs(char *log_buf, struct expected_msgs *msgs) if (!ASSERT_OK_PTR(match, "expect_msg")) { if (env.verbosity == VERBOSE_NONE) - emit_verifier_log(log_buf, true /*force*/); + emit_fn(log_buf, true /*force*/); for (j = 0; j <= i; j++) { msg = &msgs->patterns[j]; fprintf(stderr, "%s %s: '%s'\n", @@ -610,6 +646,37 @@ static bool should_do_test_run(struct test_spec *spec, struct test_subspec *subs return true; } +/* Get a disassembly of BPF program after verifier applies all rewrites */ +static int get_xlated_program_text(int prog_fd, char *text, size_t text_sz) +{ + struct bpf_insn *insn_start = NULL, *insn, *insn_end; + __u32 insns_cnt = 0, i; + char buf[64]; + FILE *out = NULL; + int err; + + err = get_xlated_program(prog_fd, &insn_start, &insns_cnt); + if (!ASSERT_OK(err, "get_xlated_program")) + goto out; + out = fmemopen(text, text_sz, "w"); + if (!ASSERT_OK_PTR(out, "open_memstream")) + goto out; + insn_end = insn_start + insns_cnt; + insn = insn_start; + while (insn < insn_end) { + i = insn - insn_start; + insn = disasm_insn(insn, buf, sizeof(buf)); + fprintf(out, "%d: %s\n", i, buf); + } + fflush(out); + +out: + free(insn_start); + if (out) + fclose(out); + return err; +} + /* this function is forced noinline and has short generic name to look better * in test_progs output (in case of a failure) */ @@ -695,7 +762,16 @@ void run_subtest(struct test_loader *tester, } } emit_verifier_log(tester->log_buf, false /*force*/); - validate_msgs(tester->log_buf, &subspec->expect_msgs); + validate_msgs(tester->log_buf, &subspec->expect_msgs, 
emit_verifier_log); + + if (subspec->expect_xlated.cnt) { + err = get_xlated_program_text(bpf_program__fd(tprog), + tester->log_buf, tester->log_buf_sz); + if (err) + goto tobj_cleanup; + emit_xlated(tester->log_buf, false /*force*/); + validate_msgs(tester->log_buf, &subspec->expect_xlated, emit_xlated); + } if (should_do_test_run(spec, subspec)) { /* For some reason test_verifier executes programs
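For readers unfamiliar with the xlated dump interface used in the patch above: the instructions are obtained with the usual two-call bpf_prog_get_info_by_fd() pattern (first call to learn the size, second call to copy the instructions out). A rough, hypothetical sketch of that step is shown below; the function name and error handling are mine, the selftests use their own get_xlated_program() helper, and dumping xlated instructions requires sufficient privileges:

#include <stdlib.h>
#include <string.h>
#include <linux/bpf.h>
#include <bpf/bpf.h>

/* Hypothetical helper: fetch the post-verifier ("xlated") instructions of a
 * loaded program. Returns a malloc'ed array (caller frees), count in *cnt. */
static struct bpf_insn *fetch_xlated_insns(int prog_fd, __u32 *cnt)
{
	struct bpf_prog_info info = {};
	__u32 info_len = sizeof(info);
	__u32 xlated_len;
	struct bpf_insn *insns;

	/* First call: only sizes are reported. */
	if (bpf_prog_get_info_by_fd(prog_fd, &info, &info_len))
		return NULL;
	xlated_len = info.xlated_prog_len;
	if (!xlated_len)
		return NULL;

	insns = calloc(1, xlated_len);
	if (!insns)
		return NULL;

	/* Second call: the kernel copies the xlated image into our buffer. */
	memset(&info, 0, sizeof(info));
	info.xlated_prog_len = xlated_len;
	info.xlated_prog_insns = (__u64)(unsigned long)insns;
	if (bpf_prog_get_info_by_fd(prog_fd, &info, &info_len)) {
		free(insns);
		return NULL;
	}

	*cnt = xlated_len / sizeof(struct bpf_insn);
	return insns;
}

test_loader then walks such an instruction array with disasm_insn() and applies the same sequential pattern matching to the resulting text as it does to the verifier log.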
From patchwork Thu Jul 4 10:24:00 2024 X-Patchwork-Submitter: Eduard Zingerman X-Patchwork-Id: 13723576 X-Patchwork-Delegate: bpf@iogearbox.net From: Eduard Zingerman To: bpf@vger.kernel.org, ast@kernel.org Cc: andrii@kernel.org, daniel@iogearbox.net, martin.lau@linux.dev, kernel-team@fb.com, yonghong.song@linux.dev, puranjay@kernel.org, jose.marchesi@oracle.com, Eduard Zingerman Subject: [RFC bpf-next v2 8/9] selftests/bpf: __arch_* macro to limit test cases to specific archs Date: Thu, 4 Jul 2024 03:24:00 -0700 Message-ID: <20240704102402.1644916-9-eddyz87@gmail.com> In-Reply-To: <20240704102402.1644916-1-eddyz87@gmail.com> References: <20240704102402.1644916-1-eddyz87@gmail.com> X-Patchwork-State: RFC
Add annotations __arch_x86_64, __arch_arm64, __arch_riscv64 to specify on which architectures a test case should be tested. Several __arch_* annotations can be specified at once. When a test case is not run on the current arch, it is marked as skipped. For example, the following would be tested only on arm64 and riscv64:
SEC("raw_tp") __arch_arm64 __arch_riscv64 __xlated("1: *(u64 *)(r10 - 16) = r1") __xlated("2: call") __xlated("3: r1 = *(u64 *)(r10 - 16);") __success __naked void canary_arm64_riscv64(void) { asm volatile ( "r1 = 1;" "*(u64 *)(r10 - 16) = r1;" "call %[bpf_get_smp_processor_id];" "r1 = *(u64 *)(r10 - 16);" "exit;" : : __imm(bpf_get_smp_processor_id) : __clobber_all); }
On x86 it would be skipped: #467/2 verifier_nocsr/canary_arm64_riscv64:SKIP
Signed-off-by: Eduard Zingerman Acked-by: Andrii Nakryiko --- tools/testing/selftests/bpf/progs/bpf_misc.h | 8 ++++ tools/testing/selftests/bpf/test_loader.c | 43 ++++++++++++++++++++ 2 files changed, 51 insertions(+)
diff --git a/tools/testing/selftests/bpf/progs/bpf_misc.h b/tools/testing/selftests/bpf/progs/bpf_misc.h index a70939c7bc26..a225cd87897c 100644 --- a/tools/testing/selftests/bpf/progs/bpf_misc.h +++ b/tools/testing/selftests/bpf/progs/bpf_misc.h @@ -63,6 +63,10 @@ * __auxiliary Annotated program is not a separate test, but used as auxiliary * for some other test cases and should always be loaded. * __auxiliary_unpriv Same, but load program in unprivileged mode. + * + * __arch_* Specify on which architecture the test case should be tested. + * Several __arch_* annotations could be specified at once. + * When test case is not run on current arch it is marked as skipped.
*/ #define __msg(msg) __attribute__((btf_decl_tag("comment:test_expect_msg=" msg))) #define __regex(regex) __attribute__((btf_decl_tag("comment:test_expect_regex=" regex))) @@ -82,6 +86,10 @@ #define __auxiliary __attribute__((btf_decl_tag("comment:test_auxiliary"))) #define __auxiliary_unpriv __attribute__((btf_decl_tag("comment:test_auxiliary_unpriv"))) #define __btf_path(path) __attribute__((btf_decl_tag("comment:test_btf_path=" path))) +#define __arch(arch) __attribute__((btf_decl_tag("comment:test_arch=" arch))) +#define __arch_x86_64 __arch("X86_64") +#define __arch_arm64 __arch("ARM64") +#define __arch_riscv64 __arch("RISCV64") /* Convenience macro for use with 'asm volatile' blocks */ #define __naked __attribute__((naked)) diff --git a/tools/testing/selftests/bpf/test_loader.c b/tools/testing/selftests/bpf/test_loader.c index b44b6a2fc82c..97befd720541 100644 --- a/tools/testing/selftests/bpf/test_loader.c +++ b/tools/testing/selftests/bpf/test_loader.c @@ -34,6 +34,7 @@ #define TEST_TAG_AUXILIARY "comment:test_auxiliary" #define TEST_TAG_AUXILIARY_UNPRIV "comment:test_auxiliary_unpriv" #define TEST_BTF_PATH "comment:test_btf_path=" +#define TEST_TAG_ARCH "comment:test_arch=" /* Warning: duplicated in bpf_misc.h */ #define POINTER_VALUE 0xcafe4all @@ -80,6 +81,7 @@ struct test_spec { int log_level; int prog_flags; int mode_mask; + int arch_mask; bool auxiliary; bool valid; }; @@ -213,6 +215,12 @@ static void update_flags(int *flags, int flag, bool clear) *flags |= flag; } +enum arch { + ARCH_X86_64 = 0x1, + ARCH_ARM64 = 0x2, + ARCH_RISCV64 = 0x4, +}; + /* Uses btf_decl_tag attributes to describe the expected test * behavior, see bpf_misc.h for detailed description of each attribute * and attribute combinations. @@ -226,6 +234,7 @@ static int parse_test_spec(struct test_loader *tester, bool has_unpriv_result = false; bool has_unpriv_retval = false; int func_id, i, err = 0; + u32 arch_mask = 0; struct btf *btf; memset(spec, 0, sizeof(*spec)); @@ -364,11 +373,26 @@ static int parse_test_spec(struct test_loader *tester, goto cleanup; update_flags(&spec->prog_flags, flags, clear); } + } else if (str_has_pfx(s, TEST_TAG_ARCH)) { + val = s + sizeof(TEST_TAG_ARCH) - 1; + if (strcmp(val, "X86_64") == 0) { + arch_mask |= ARCH_X86_64; + } else if (strcmp(val, "ARM64") == 0) { + arch_mask |= ARCH_ARM64; + } else if (strcmp(val, "RISCV64") == 0) { + arch_mask |= ARCH_RISCV64; + } else { + PRINT_FAIL("bad arch spec: '%s'", val); + err = -EINVAL; + goto cleanup; + } } else if (str_has_pfx(s, TEST_BTF_PATH)) { spec->btf_custom_path = s + sizeof(TEST_BTF_PATH) - 1; } } + spec->arch_mask = arch_mask; + if (spec->mode_mask == 0) spec->mode_mask = PRIV; @@ -677,6 +701,20 @@ static int get_xlated_program_text(int prog_fd, char *text, size_t text_sz) return err; } +static bool run_on_current_arch(int arch_mask) +{ + if (arch_mask == 0) + return true; +#if defined(__x86_64__) + return !!(arch_mask & ARCH_X86_64); +#elif defined(__aarch64__) + return !!(arch_mask & ARCH_ARM64); +#elif defined(__riscv) && __riscv_xlen == 64 + return !!(arch_mask & ARCH_RISCV64); +#endif + return false; +} + /* this function is forced noinline and has short generic name to look better * in test_progs output (in case of a failure) */ @@ -701,6 +739,11 @@ void run_subtest(struct test_loader *tester, if (!test__start_subtest(subspec->name)) return; + if (!run_on_current_arch(spec->arch_mask)) { + test__skip(); + return; + } + if (unpriv) { if (!can_execute_unpriv(tester, spec)) { test__skip(); From patchwork Thu Jul 4 10:24:01 
2024 X-Patchwork-Submitter: Eduard Zingerman X-Patchwork-Id: 13723577 X-Patchwork-Delegate: bpf@iogearbox.net
AGHT+IHl1AAGR48SDQho4s3LSy/4yHAn6F5m5QzyKnLissj23RqznXsphl7wHbupZIaOzDD2DUPwVw== X-Received: by 2002:a17:90a:d314:b0:2c9:6a0e:6e66 with SMTP id 98e67ed59e1d1-2c99f2fd0e9mr1494017a91.5.1720088670368; Thu, 04 Jul 2024 03:24:30 -0700 (PDT) Received: from badger.. ([38.34.87.7]) by smtp.gmail.com with ESMTPSA id 98e67ed59e1d1-2c9a4c0fe8dsm216693a91.0.2024.07.04.03.24.29 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Thu, 04 Jul 2024 03:24:29 -0700 (PDT) From: Eduard Zingerman To: bpf@vger.kernel.org, ast@kernel.org Cc: andrii@kernel.org, daniel@iogearbox.net, martin.lau@linux.dev, kernel-team@fb.com, yonghong.song@linux.dev, puranjay@kernel.org, jose.marchesi@oracle.com, Eduard Zingerman Subject: [RFC bpf-next v2 9/9] selftests/bpf: test no_caller_saved_registers spill/fill removal Date: Thu, 4 Jul 2024 03:24:01 -0700 Message-ID: <20240704102402.1644916-10-eddyz87@gmail.com> X-Mailer: git-send-email 2.45.2 In-Reply-To: <20240704102402.1644916-1-eddyz87@gmail.com> References: <20240704102402.1644916-1-eddyz87@gmail.com> Precedence: bulk X-Mailing-List: bpf@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-Patchwork-Delegate: bpf@iogearbox.net X-Patchwork-State: RFC Tests for no_caller_saved_registers processing logic (see verifier.c:match_and_mark_nocsr_pattern()): - a canary positive test case; - a canary test case for arm64 and riscv64; - various tests with broken patterns; - tests with read/write fixed/varying stack access that violate nocsr stack access contract; - tests with multiple subprograms. Signed-off-by: Eduard Zingerman --- .../selftests/bpf/prog_tests/verifier.c | 2 + .../selftests/bpf/progs/verifier_nocsr.c | 521 ++++++++++++++++++ 2 files changed, 523 insertions(+) create mode 100644 tools/testing/selftests/bpf/progs/verifier_nocsr.c diff --git a/tools/testing/selftests/bpf/prog_tests/verifier.c b/tools/testing/selftests/bpf/prog_tests/verifier.c index 6816ff064516..8ca306c28e62 100644 --- a/tools/testing/selftests/bpf/prog_tests/verifier.c +++ b/tools/testing/selftests/bpf/prog_tests/verifier.c @@ -53,6 +53,7 @@ #include "verifier_movsx.skel.h" #include "verifier_netfilter_ctx.skel.h" #include "verifier_netfilter_retcode.skel.h" +#include "verifier_nocsr.skel.h" #include "verifier_precision.skel.h" #include "verifier_prevent_map_lookup.skel.h" #include "verifier_raw_stack.skel.h" @@ -171,6 +172,7 @@ void test_verifier_meta_access(void) { RUN(verifier_meta_access); } void test_verifier_movsx(void) { RUN(verifier_movsx); } void test_verifier_netfilter_ctx(void) { RUN(verifier_netfilter_ctx); } void test_verifier_netfilter_retcode(void) { RUN(verifier_netfilter_retcode); } +void test_verifier_nocsr(void) { RUN(verifier_nocsr); } void test_verifier_precision(void) { RUN(verifier_precision); } void test_verifier_prevent_map_lookup(void) { RUN(verifier_prevent_map_lookup); } void test_verifier_raw_stack(void) { RUN(verifier_raw_stack); } diff --git a/tools/testing/selftests/bpf/progs/verifier_nocsr.c b/tools/testing/selftests/bpf/progs/verifier_nocsr.c new file mode 100644 index 000000000000..4e767d768f1c --- /dev/null +++ b/tools/testing/selftests/bpf/progs/verifier_nocsr.c @@ -0,0 +1,521 @@ +// SPDX-License-Identifier: GPL-2.0 + +#include +#include +#include "bpf_misc.h" + +SEC("raw_tp") +__arch_x86_64 +__xlated("4: r5 = 5") +__xlated("5: w0 = ") +__xlated("6: r0 = &(void __percpu *)(r0)") +__xlated("7: r0 = *(u32 *)(r0 +0)") +__xlated("8: exit") +__success +__naked void simple(void) +{ + asm volatile ( + "r1 = 1;" + "r2 = 2;" + "r3 = 3;" 
+ "r4 = 4;" + "r5 = 5;" + "*(u64 *)(r10 - 16) = r1;" + "*(u64 *)(r10 - 24) = r2;" + "*(u64 *)(r10 - 32) = r3;" + "*(u64 *)(r10 - 40) = r4;" + "*(u64 *)(r10 - 48) = r5;" + "call %[bpf_get_smp_processor_id];" + "r5 = *(u64 *)(r10 - 48);" + "r4 = *(u64 *)(r10 - 40);" + "r3 = *(u64 *)(r10 - 32);" + "r2 = *(u64 *)(r10 - 24);" + "r1 = *(u64 *)(r10 - 16);" + "exit;" + : + : __imm(bpf_get_smp_processor_id) + : __clobber_all); +} + +/* The logic for detecting and verifying nocsr pattern is the same for + * any arch, however x86 differs from arm64 or riscv64 in a way + * bpf_get_smp_processor_id is rewritten: + * - on x86 it is done by verifier + * - on arm64 and riscv64 it is done by jit + * + * Which leads to different xlated patterns for different archs: + * - on x86 the call is expanded as 3 instructions + * - on arm64 and riscv64 the call remains as is + * (but spills/fills are still removed) + * + * It is really desirable to check instruction indexes in the xlated + * patterns, so add this canary test to check that function rewrite by + * jit is correctly processed by nocsr logic, keep the rest of the + * tests as x86. + */ +SEC("raw_tp") +__arch_arm64 +__arch_riscv64 +__xlated("0: r1 = 1") +__xlated("1: call bpf_get_smp_processor_id") +__xlated("2: exit") +__success +__naked void canary_arm64_riscv64(void) +{ + asm volatile ( + "r1 = 1;" + "*(u64 *)(r10 - 16) = r1;" + "call %[bpf_get_smp_processor_id];" + "r1 = *(u64 *)(r10 - 16);" + "exit;" + : + : __imm(bpf_get_smp_processor_id) + : __clobber_all); +} + +SEC("raw_tp") +__arch_x86_64 +__xlated("1: r0 = &(void __percpu *)(r0)") +__xlated("3: exit") +__success +__naked void canary_zero_spills(void) +{ + asm volatile ( + "call %[bpf_get_smp_processor_id];" + "exit;" + : + : __imm(bpf_get_smp_processor_id) + : __clobber_all); +} + +SEC("raw_tp") +__arch_x86_64 +__xlated("1: *(u64 *)(r10 -16) = r1") +__xlated("3: r0 = &(void __percpu *)(r0)") +__xlated("5: r2 = *(u64 *)(r10 -16)") +__success +__naked void wrong_reg_in_pattern1(void) +{ + asm volatile ( + "r1 = 1;" + "*(u64 *)(r10 - 16) = r1;" + "call %[bpf_get_smp_processor_id];" + "r2 = *(u64 *)(r10 - 16);" + "exit;" + : + : __imm(bpf_get_smp_processor_id) + : __clobber_all); +} + +SEC("raw_tp") +__arch_x86_64 +__xlated("1: *(u64 *)(r10 -16) = r6") +__xlated("3: r0 = &(void __percpu *)(r0)") +__xlated("5: r6 = *(u64 *)(r10 -16)") +__success +__naked void wrong_reg_in_pattern2(void) +{ + asm volatile ( + "r6 = 1;" + "*(u64 *)(r10 - 16) = r6;" + "call %[bpf_get_smp_processor_id];" + "r6 = *(u64 *)(r10 - 16);" + "exit;" + : + : __imm(bpf_get_smp_processor_id) + : __clobber_all); +} + +SEC("raw_tp") +__arch_x86_64 +__xlated("1: *(u64 *)(r10 -16) = r0") +__xlated("3: r0 = &(void __percpu *)(r0)") +__xlated("5: r0 = *(u64 *)(r10 -16)") +__success +__naked void wrong_reg_in_pattern3(void) +{ + asm volatile ( + "r0 = 1;" + "*(u64 *)(r10 - 16) = r0;" + "call %[bpf_get_smp_processor_id];" + "r0 = *(u64 *)(r10 - 16);" + "exit;" + : + : __imm(bpf_get_smp_processor_id) + : __clobber_all); +} + +SEC("raw_tp") +__arch_x86_64 +__xlated("2: *(u64 *)(r2 -16) = r1") +__xlated("4: r0 = &(void __percpu *)(r0)") +__xlated("6: r1 = *(u64 *)(r10 -16)") +__success +__naked void wrong_base_in_pattern(void) +{ + asm volatile ( + "r1 = 1;" + "r2 = r10;" + "*(u64 *)(r2 - 16) = r1;" + "call %[bpf_get_smp_processor_id];" + "r1 = *(u64 *)(r10 - 16);" + "exit;" + : + : __imm(bpf_get_smp_processor_id) + : __clobber_all); +} + +SEC("raw_tp") +__arch_x86_64 +__xlated("1: *(u64 *)(r10 -16) = r1") +__xlated("3: r0 = &(void __percpu 
*)(r0)") +__xlated("5: r2 = 1") +__success +__naked void wrong_insn_in_pattern(void) +{ + asm volatile ( + "r1 = 1;" + "*(u64 *)(r10 - 16) = r1;" + "call %[bpf_get_smp_processor_id];" + "r2 = 1;" + "r1 = *(u64 *)(r10 - 16);" + "exit;" + : + : __imm(bpf_get_smp_processor_id) + : __clobber_all); +} + +SEC("raw_tp") +__arch_x86_64 +__xlated("2: *(u64 *)(r10 -16) = r1") +__xlated("4: r0 = &(void __percpu *)(r0)") +__xlated("6: r1 = *(u64 *)(r10 -8)") +__success +__naked void wrong_off_in_pattern1(void) +{ + asm volatile ( + "r1 = 1;" + "*(u64 *)(r10 - 8) = r1;" + "*(u64 *)(r10 - 16) = r1;" + "call %[bpf_get_smp_processor_id];" + "r1 = *(u64 *)(r10 - 8);" + "exit;" + : + : __imm(bpf_get_smp_processor_id) + : __clobber_all); +} + +SEC("raw_tp") +__arch_x86_64 +__xlated("1: *(u32 *)(r10 -4) = r1") +__xlated("3: r0 = &(void __percpu *)(r0)") +__xlated("5: r1 = *(u32 *)(r10 -4)") +__success +__naked void wrong_off_in_pattern2(void) +{ + asm volatile ( + "r1 = 1;" + "*(u32 *)(r10 - 4) = r1;" + "call %[bpf_get_smp_processor_id];" + "r1 = *(u32 *)(r10 - 4);" + "exit;" + : + : __imm(bpf_get_smp_processor_id) + : __clobber_all); +} + +SEC("raw_tp") +__arch_x86_64 +__xlated("1: *(u32 *)(r10 -16) = r1") +__xlated("3: r0 = &(void __percpu *)(r0)") +__xlated("5: r1 = *(u32 *)(r10 -16)") +__success +__naked void wrong_size_in_pattern(void) +{ + asm volatile ( + "r1 = 1;" + "*(u32 *)(r10 - 16) = r1;" + "call %[bpf_get_smp_processor_id];" + "r1 = *(u32 *)(r10 - 16);" + "exit;" + : + : __imm(bpf_get_smp_processor_id) + : __clobber_all); +} + +SEC("raw_tp") +__arch_x86_64 +__xlated("2: *(u32 *)(r10 -8) = r1") +__xlated("4: r0 = &(void __percpu *)(r0)") +__xlated("6: r1 = *(u32 *)(r10 -8)") +__success +__naked void partial_pattern(void) +{ + asm volatile ( + "r1 = 1;" + "r2 = 2;" + "*(u32 *)(r10 - 8) = r1;" + "*(u64 *)(r10 - 16) = r2;" + "call %[bpf_get_smp_processor_id];" + "r2 = *(u64 *)(r10 - 16);" + "r1 = *(u32 *)(r10 - 8);" + "exit;" + : + : __imm(bpf_get_smp_processor_id) + : __clobber_all); +} + +SEC("raw_tp") +__arch_x86_64 +__xlated("0: r1 = 1") +__xlated("1: r2 = 2") +/* not patched, spills for -8, -16 not removed */ +__xlated("2: *(u64 *)(r10 -8) = r1") +__xlated("3: *(u64 *)(r10 -16) = r2") +__xlated("5: r0 = &(void __percpu *)(r0)") +__xlated("7: r2 = *(u64 *)(r10 -16)") +__xlated("8: r1 = *(u64 *)(r10 -8)") +/* patched, spills for -16, -24 removed */ +__xlated("10: r0 = &(void __percpu *)(r0)") +__xlated("12: exit") +__success +__naked void min_stack_offset(void) +{ + asm volatile ( + "r1 = 1;" + "r2 = 2;" + /* this call won't be patched */ + "*(u64 *)(r10 - 8) = r1;" + "*(u64 *)(r10 - 16) = r2;" + "call %[bpf_get_smp_processor_id];" + "r2 = *(u64 *)(r10 - 16);" + "r1 = *(u64 *)(r10 - 8);" + /* this call would be patched */ + "*(u64 *)(r10 - 16) = r1;" + "*(u64 *)(r10 - 24) = r2;" + "call %[bpf_get_smp_processor_id];" + "r2 = *(u64 *)(r10 - 24);" + "r1 = *(u64 *)(r10 - 16);" + "exit;" + : + : __imm(bpf_get_smp_processor_id) + : __clobber_all); +} + +SEC("raw_tp") +__arch_x86_64 +__xlated("1: *(u64 *)(r10 -8) = r1") +__xlated("3: r0 = &(void __percpu *)(r0)") +__xlated("5: r1 = *(u64 *)(r10 -8)") +__success +__naked void bad_fixed_read(void) +{ + asm volatile ( + "r1 = 1;" + "*(u64 *)(r10 - 8) = r1;" + "call %[bpf_get_smp_processor_id];" + "r1 = *(u64 *)(r10 - 8);" + "r1 = r10;" + "r1 += -8;" + "r1 = *(u64 *)(r1 - 0);" + "exit;" + : + : __imm(bpf_get_smp_processor_id) + : __clobber_all); +} + +SEC("raw_tp") +__arch_x86_64 +__xlated("1: *(u64 *)(r10 -8) = r1") +__xlated("3: r0 = &(void __percpu 
*)(r0)") +__xlated("5: r1 = *(u64 *)(r10 -8)") +__success +__naked void bad_fixed_write(void) +{ + asm volatile ( + "r1 = 1;" + "*(u64 *)(r10 - 8) = r1;" + "call %[bpf_get_smp_processor_id];" + "r1 = *(u64 *)(r10 - 8);" + "r1 = r10;" + "r1 += -8;" + "*(u64 *)(r1 - 0) = r1;" + "exit;" + : + : __imm(bpf_get_smp_processor_id) + : __clobber_all); +} + +SEC("raw_tp") +__arch_x86_64 +__xlated("6: *(u64 *)(r10 -16) = r1") +__xlated("8: r0 = &(void __percpu *)(r0)") +__xlated("10: r1 = *(u64 *)(r10 -16)") +__success +__naked void bad_varying_read(void) +{ + asm volatile ( + "r6 = *(u64 *)(r1 + 0);" /* random scalar value */ + "r6 &= 0x7;" /* r6 range [0..7] */ + "r6 += 0x2;" /* r6 range [2..9] */ + "r7 = 0;" + "r7 -= r6;" /* r7 range [-9..-2] */ + "r1 = 1;" + "*(u64 *)(r10 - 16) = r1;" + "call %[bpf_get_smp_processor_id];" + "r1 = *(u64 *)(r10 - 16);" + "r1 = r10;" + "r1 += r7;" + "r1 = *(u8 *)(r1 - 0);" /* touches slot [-16..-9] where spills are stored */ + "exit;" + : + : __imm(bpf_get_smp_processor_id) + : __clobber_all); +} + +SEC("raw_tp") +__arch_x86_64 +__xlated("6: *(u64 *)(r10 -16) = r1") +__xlated("8: r0 = &(void __percpu *)(r0)") +__xlated("10: r1 = *(u64 *)(r10 -16)") +__success +__naked void bad_varying_write(void) +{ + asm volatile ( + "r6 = *(u64 *)(r1 + 0);" /* random scalar value */ + "r6 &= 0x7;" /* r6 range [0..7] */ + "r6 += 0x2;" /* r6 range [2..9] */ + "r7 = 0;" + "r7 -= r6;" /* r7 range [-9..-2] */ + "r1 = 1;" + "*(u64 *)(r10 - 16) = r1;" + "call %[bpf_get_smp_processor_id];" + "r1 = *(u64 *)(r10 - 16);" + "r1 = r10;" + "r1 += r7;" + "*(u8 *)(r1 - 0) = r7;" /* touches slot [-16..-9] where spills are stored */ + "exit;" + : + : __imm(bpf_get_smp_processor_id) + : __clobber_all); +} + +SEC("raw_tp") +__arch_x86_64 +__xlated("1: *(u64 *)(r10 -8) = r1") +__xlated("3: r0 = &(void __percpu *)(r0)") +__xlated("5: r1 = *(u64 *)(r10 -8)") +__success +__naked void bad_write_in_subprog(void) +{ + asm volatile ( + "r1 = 1;" + "*(u64 *)(r10 - 8) = r1;" + "call %[bpf_get_smp_processor_id];" + "r1 = *(u64 *)(r10 - 8);" + "r1 = r10;" + "r1 += -8;" + "call bad_write_in_subprog_aux;" + "exit;" + : + : __imm(bpf_get_smp_processor_id) + : __clobber_all); +} + +__used +__naked static void bad_write_in_subprog_aux(void) +{ + asm volatile ( + "r0 = 1;" + "*(u64 *)(r1 - 0) = r0;" /* invalidates nocsr contract for caller: */ + "exit;" /* caller stack at -8 used outside of the pattern */ + ::: __clobber_all); +} + +SEC("raw_tp") +__arch_x86_64 +/* main, not patched */ +__xlated("1: *(u64 *)(r10 -8) = r1") +__xlated("3: r0 = &(void __percpu *)(r0)") +__xlated("5: r1 = *(u64 *)(r10 -8)") +__xlated("9: call pc+1") +__xlated("10: exit") +/* subprogram, patched */ +__xlated("11: r1 = 1") +__xlated("13: r0 = &(void __percpu *)(r0)") +__xlated("15: exit") +__success +__naked void invalidate_one_subprog(void) +{ + asm volatile ( + "r1 = 1;" + "*(u64 *)(r10 - 8) = r1;" + "call %[bpf_get_smp_processor_id];" + "r1 = *(u64 *)(r10 - 8);" + "r1 = r10;" + "r1 += -8;" + "r1 = *(u64 *)(r1 - 0);" + "call invalidate_one_subprog_aux;" + "exit;" + : + : __imm(bpf_get_smp_processor_id) + : __clobber_all); +} + +__used +__naked static void invalidate_one_subprog_aux(void) +{ + asm volatile ( + "r1 = 1;" + "*(u64 *)(r10 - 8) = r1;" + "call %[bpf_get_smp_processor_id];" + "r1 = *(u64 *)(r10 - 8);" + "exit;" + : + : __imm(bpf_get_smp_processor_id) + : __clobber_all); +} + +SEC("raw_tp") +__arch_x86_64 +/* main */ +__xlated("0: r1 = 1") +__xlated("2: r0 = &(void __percpu *)(r0)") +__xlated("4: call pc+1") +__xlated("5: exit") 
+/* subprogram */ +__xlated("6: r1 = 1") +__xlated("8: r0 = &(void __percpu *)(r0)") +__xlated("10: *(u64 *)(r10 -16) = r1") +__xlated("11: exit") +__success +__naked void subprogs_use_independent_offsets(void) +{ + asm volatile ( + "r1 = 1;" + "*(u64 *)(r10 - 16) = r1;" + "call %[bpf_get_smp_processor_id];" + "r1 = *(u64 *)(r10 - 16);" + "call subprogs_use_independent_offsets_aux;" + "exit;" + : + : __imm(bpf_get_smp_processor_id) + : __clobber_all); +} + +__used +__naked static void subprogs_use_independent_offsets_aux(void) +{ + asm volatile ( + "r1 = 1;" + "*(u64 *)(r10 - 24) = r1;" + "call %[bpf_get_smp_processor_id];" + "r1 = *(u64 *)(r10 - 24);" + "*(u64 *)(r10 - 16) = r1;" + "exit;" + : + : __imm(bpf_get_smp_processor_id) + : __clobber_all); +} + +char _license[] SEC("license") = "GPL";
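Read together with patch 8, the annotations used in this file compose naturally. As a purely hypothetical illustration (not part of this series), a case restricted to two architectures at once could look like the following: the __arch_* tags accumulate into a mask, so this program would run on x86_64 and arm64 and be skipped elsewhere. The body is deliberately trivial so that no __xlated expectations are needed.

SEC("raw_tp")
__arch_x86_64
__arch_arm64
__success
__naked void hypothetical_two_arch_case(void)
{
	asm volatile (
	"r0 = 0;"
	"exit;"
	::: __clobber_all);
}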