From patchwork Tue Feb 6 22:04:33 2024
X-Patchwork-Submitter: Alexei Starovoitov
X-Patchwork-Id: 13547843
From: Alexei Starovoitov
To: bpf@vger.kernel.org
Cc: daniel@iogearbox.net, andrii@kernel.org, martin.lau@kernel.org,
 memxor@gmail.com, eddyz87@gmail.com, tj@kernel.org, brho@google.com,
 hannes@cmpxchg.org, linux-mm@kvack.org, kernel-team@fb.com
Subject: [PATCH bpf-next 08/16] bpf: Recognize cast_kern/user instructions in the verifier.
Date: Tue, 6 Feb 2024 14:04:33 -0800
Message-Id: <20240206220441.38311-9-alexei.starovoitov@gmail.com>
In-Reply-To: <20240206220441.38311-1-alexei.starovoitov@gmail.com>
References: <20240206220441.38311-1-alexei.starovoitov@gmail.com>

From: Alexei Starovoitov

rX = bpf_cast_kern(rY, addr_space) tells the verifier that rX->type = PTR_TO_ARENA.
Any further operations on a PTR_TO_ARENA register have to stay in the 32-bit domain.

The verifier will mark loads/stores through PTR_TO_ARENA with PROBE_MEM32.
The JIT will generate them as kern_vm_start + 32bit_addr memory accesses.

rX = bpf_cast_user(rY, addr_space) tells the verifier that rX->type = unknown scalar.
If arena->map_flags has BPF_F_NO_USER_CONV set then cast_user is converted to mov32
as well. Otherwise the JIT will convert it to:
  rX = (u32)rY;
  if (rX)
     rX |= arena->user_vm_start & ~(u64)~0U;

Signed-off-by: Alexei Starovoitov
---
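As a purely illustrative aside (not part of the kernel change): the two
conversions described above can be modelled in ordinary user-space C. The
helper names cast_kern()/cast_user() and the kern_vm_start/user_vm_start/offset
values below are made-up stand-ins for the arena fields of the same name; the
real work is done by the JIT, not by C code like this.

  /* Sketch of the cast_kern/cast_user address math described above. */
  #include <assert.h>
  #include <stdint.h>
  #include <stdio.h>

  /* cast_kern: the access becomes kern_vm_start + lower 32 bits of the address */
  static uint64_t cast_kern(uint64_t kern_vm_start, uint64_t addr)
  {
          return kern_vm_start + (uint32_t)addr;
  }

  /* cast_user: keep the lower 32 bits, take the upper 32 bits from
   * user_vm_start; NULL stays NULL.
   */
  static uint64_t cast_user(uint64_t user_vm_start, uint64_t addr)
  {
          uint64_t res = (uint32_t)addr;

          if (res)
                  res |= user_vm_start & ~(uint64_t)~0U; /* upper 32 bits only */
          return res;
  }

  int main(void)
  {
          uint64_t kern_vm_start = 0xffffc90000000000ull; /* example value */
          uint64_t user_vm_start = 0x7f0a00000000ull;     /* example value */
          uint64_t off = 0x1234;                          /* example arena offset */

          uint64_t uaddr = cast_user(user_vm_start, off);
          uint64_t kaddr = cast_kern(kern_vm_start, uaddr);

          /* both views of the same object agree on the lower 32 bits */
          assert((uint32_t)uaddr == (uint32_t)kaddr);
          printf("user %#llx kern %#llx\n",
                 (unsigned long long)uaddr, (unsigned long long)kaddr);
          return 0;
  }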
 include/linux/bpf.h          |  1 +
 include/linux/bpf_verifier.h |  1 +
 kernel/bpf/log.c             |  3 ++
 kernel/bpf/verifier.c        | 94 +++++++++++++++++++++++++++++++++---
 4 files changed, 92 insertions(+), 7 deletions(-)

diff --git a/include/linux/bpf.h b/include/linux/bpf.h
index a0d737bb86d1..82f7727e434a 100644
--- a/include/linux/bpf.h
+++ b/include/linux/bpf.h
@@ -886,6 +886,7 @@ enum bpf_reg_type {
	 * an explicit null check is required for this struct.
	 */
	PTR_TO_MEM,		 /* reg points to valid memory region */
+	PTR_TO_ARENA,
	PTR_TO_BUF,		 /* reg points to a read/write buffer */
	PTR_TO_FUNC,		 /* reg points to a bpf program function */
	CONST_PTR_TO_DYNPTR,	 /* reg points to a const struct bpf_dynptr */
diff --git a/include/linux/bpf_verifier.h b/include/linux/bpf_verifier.h
index 84365e6dd85d..43c95e3e2a3c 100644
--- a/include/linux/bpf_verifier.h
+++ b/include/linux/bpf_verifier.h
@@ -547,6 +547,7 @@ struct bpf_insn_aux_data {
	u32 seen; /* this insn was processed by the verifier at env->pass_cnt */
	bool sanitize_stack_spill; /* subject to Spectre v4 sanitation */
	bool zext_dst; /* this insn zero extends dst reg */
+	bool needs_zext; /* alu op needs to clear upper bits */
	bool storage_get_func_atomic; /* bpf_*_storage_get() with atomic memory alloc */
	bool is_iter_next; /* bpf_iter__next() kfunc call */
	bool call_with_percpu_alloc_ptr; /* {this,per}_cpu_ptr() with prog percpu alloc */
diff --git a/kernel/bpf/log.c b/kernel/bpf/log.c
index 594a234f122b..677076c760ff 100644
--- a/kernel/bpf/log.c
+++ b/kernel/bpf/log.c
@@ -416,6 +416,7 @@ const char *reg_type_str(struct bpf_verifier_env *env, enum bpf_reg_type type)
		[PTR_TO_XDP_SOCK]	= "xdp_sock",
		[PTR_TO_BTF_ID]		= "ptr_",
		[PTR_TO_MEM]		= "mem",
+		[PTR_TO_ARENA]		= "arena",
		[PTR_TO_BUF]		= "buf",
		[PTR_TO_FUNC]		= "func",
		[PTR_TO_MAP_KEY]	= "map_key",
@@ -651,6 +652,8 @@ static void print_reg_state(struct bpf_verifier_env *env,
	}

	verbose(env, "%s", reg_type_str(env, t));
+	if (t == PTR_TO_ARENA)
+		return;
	if (t == PTR_TO_STACK) {
		if (state->frameno != reg->frameno)
			verbose(env, "[%d]", reg->frameno);
diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index 3c77a3ab1192..6bd5a0f30f72 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -4370,6 +4370,7 @@ static bool is_spillable_regtype(enum bpf_reg_type type)
	case PTR_TO_MEM:
	case PTR_TO_FUNC:
	case PTR_TO_MAP_KEY:
+	case PTR_TO_ARENA:
		return true;
	default:
		return false;
@@ -5805,6 +5806,8 @@ static int check_ptr_alignment(struct bpf_verifier_env *env,
	case PTR_TO_XDP_SOCK:
		pointer_desc = "xdp_sock ";
		break;
+	case PTR_TO_ARENA:
+		return 0;
	default:
		break;
	}
@@ -6906,6 +6909,9 @@ static int check_mem_access(struct bpf_verifier_env *env, int insn_idx, u32 regn

		if (!err && value_regno >= 0 && (rdonly_mem || t == BPF_READ))
			mark_reg_unknown(env, regs, value_regno);
+	} else if (reg->type == PTR_TO_ARENA) {
+		if (t == BPF_READ && value_regno >= 0)
+			mark_reg_unknown(env, regs, value_regno);
	} else {
		verbose(env, "R%d invalid mem access '%s'\n", regno,
			reg_type_str(env, reg->type));
@@ -8377,6 +8383,7 @@ static int check_func_arg_reg_off(struct bpf_verifier_env *env,
	case PTR_TO_MEM | MEM_RINGBUF:
	case PTR_TO_BUF:
	case PTR_TO_BUF | MEM_RDONLY:
+	case PTR_TO_ARENA:
	case SCALAR_VALUE:
		return 0;
	/* All the rest must be rejected, except PTR_TO_BTF_ID which allows
@@ -13837,6 +13844,21 @@ static int adjust_reg_min_max_vals(struct bpf_verifier_env *env,

	dst_reg = &regs[insn->dst_reg];
	src_reg = NULL;
+
+	if (dst_reg->type == PTR_TO_ARENA) {
+		struct bpf_insn_aux_data *aux = cur_aux(env);
+
+		if (BPF_CLASS(insn->code) == BPF_ALU64)
+			/*
+			 * 32-bit operations zero upper bits automatically.
+			 * 64-bit operations need to be converted to 32.
+			 */
+			aux->needs_zext = true;
+
+		/* Any arithmetic operations are allowed on arena pointers */
+		return 0;
+	}
+
	if (dst_reg->type != SCALAR_VALUE)
		ptr_reg = dst_reg;
	else
@@ -13954,16 +13976,17 @@ static int check_alu_op(struct bpf_verifier_env *env, struct bpf_insn *insn)

	} else if (opcode == BPF_MOV) {

		if (BPF_SRC(insn->code) == BPF_X) {
-			if (insn->imm != 0) {
-				verbose(env, "BPF_MOV uses reserved fields\n");
-				return -EINVAL;
-			}
-
			if (BPF_CLASS(insn->code) == BPF_ALU) {
-				if (insn->off != 0 && insn->off != 8 && insn->off != 16) {
+				if ((insn->off != 0 && insn->off != 8 && insn->off != 16) ||
+				    insn->imm) {
					verbose(env, "BPF_MOV uses reserved fields\n");
					return -EINVAL;
				}
+			} else if (insn->off == BPF_ARENA_CAST_KERN || insn->off == BPF_ARENA_CAST_USER) {
+				if (!insn->imm) {
+					verbose(env, "cast_kern/user insn must have non zero imm32\n");
+					return -EINVAL;
+				}
			} else {
				if (insn->off != 0 && insn->off != 8 && insn->off != 16 &&
				    insn->off != 32) {
@@ -13993,7 +14016,12 @@ static int check_alu_op(struct bpf_verifier_env *env, struct bpf_insn *insn)
		struct bpf_reg_state *dst_reg = regs + insn->dst_reg;

		if (BPF_CLASS(insn->code) == BPF_ALU64) {
-			if (insn->off == 0) {
+			if (insn->imm) {
+				/* off == BPF_ARENA_CAST_KERN || off == BPF_ARENA_CAST_USER */
+				mark_reg_unknown(env, regs, insn->dst_reg);
+				if (insn->off == BPF_ARENA_CAST_KERN)
+					dst_reg->type = PTR_TO_ARENA;
+			} else if (insn->off == 0) {
				/* case: R1 = R2
				 * copy register state to dest reg
				 */
@@ -14059,6 +14087,9 @@ static int check_alu_op(struct bpf_verifier_env *env, struct bpf_insn *insn)
					dst_reg->subreg_def = env->insn_idx + 1;
					coerce_subreg_to_size_sx(dst_reg, insn->off >> 3);
				}
+			} else if (src_reg->type == PTR_TO_ARENA) {
+				mark_reg_unknown(env, regs, insn->dst_reg);
+				dst_reg->type = PTR_TO_ARENA;
			} else {
				mark_reg_unknown(env, regs,
						 insn->dst_reg);
@@ -16519,6 +16550,8 @@ static bool regsafe(struct bpf_verifier_env *env, struct bpf_reg_state *rold,
		 * the same stack frame, since fp-8 in foo != fp-8 in bar
		 */
		return regs_exact(rold, rcur, idmap) && rold->frameno == rcur->frameno;
+	case PTR_TO_ARENA:
+		return true;
	default:
		return regs_exact(rold, rcur, idmap);
	}
@@ -18235,6 +18268,27 @@ static int resolve_pseudo_ldimm64(struct bpf_verifier_env *env)
				fdput(f);
				return -EBUSY;
			}
+			if (map->map_type == BPF_MAP_TYPE_ARENA) {
+				if (env->prog->aux->arena) {
+					verbose(env, "Only one arena per program\n");
+					fdput(f);
+					return -EBUSY;
+				}
+				if (!env->allow_ptr_leaks || !env->bpf_capable) {
+					verbose(env, "CAP_BPF and CAP_PERFMON are required to use arena\n");
+					fdput(f);
+					return -EPERM;
+				}
+				if (!env->prog->jit_requested) {
+					verbose(env, "JIT is required to use arena\n");
+					return -EOPNOTSUPP;
+				}
+				if (!bpf_jit_supports_arena()) {
+					verbose(env, "JIT doesn't support arena\n");
+					return -EOPNOTSUPP;
+				}
+				env->prog->aux->arena = (void *)map;
+			}

			fdput(f);
next_insn:
@@ -18799,6 +18853,18 @@ static int convert_ctx_accesses(struct bpf_verifier_env *env)
			   insn->code == (BPF_ST | BPF_MEM | BPF_W) ||
			   insn->code == (BPF_ST | BPF_MEM | BPF_DW)) {
			type = BPF_WRITE;
+		} else if (insn->code == (BPF_ALU64 | BPF_MOV | BPF_X) && insn->imm) {
+			if (insn->off == BPF_ARENA_CAST_KERN ||
+			    (((struct bpf_map *)env->prog->aux->arena)->map_flags & BPF_F_NO_USER_CONV)) {
+				/* convert to 32-bit mov that clears upper 32-bit */
+				insn->code = BPF_ALU | BPF_MOV | BPF_X;
+				/* clear off, so it's a normal 'wX = wY' from JIT pov */
+				insn->off = 0;
+			} /* else insn->off == BPF_ARENA_CAST_USER should be handled by JIT */
+			continue;
+		} else if (env->insn_aux_data[i + delta].needs_zext) {
+			/* Convert BPF_CLASS(insn->code) == BPF_ALU64 to 32-bit ALU */
+			insn->code = BPF_ALU | BPF_OP(insn->code) | BPF_SRC(insn->code);
		} else {
			continue;
		}
@@ -18856,6 +18922,14 @@ static int convert_ctx_accesses(struct bpf_verifier_env *env)
				env->prog->aux->num_exentries++;
			}
			continue;
+		case PTR_TO_ARENA:
+			if (BPF_MODE(insn->code) == BPF_MEMSX) {
+				verbose(env, "sign extending loads from arena are not supported yet\n");
+				return -EOPNOTSUPP;
+			}
+			insn->code = BPF_CLASS(insn->code) | BPF_PROBE_MEM32 | BPF_SIZE(insn->code);
+			env->prog->aux->num_exentries++;
+			continue;
		default:
			continue;
		}
@@ -19041,13 +19115,19 @@ static int jit_subprogs(struct bpf_verifier_env *env)
		func[i]->aux->nr_linfo = prog->aux->nr_linfo;
		func[i]->aux->jited_linfo = prog->aux->jited_linfo;
		func[i]->aux->linfo_idx = env->subprog_info[i].linfo_idx;
+		func[i]->aux->arena = prog->aux->arena;
		num_exentries = 0;
		insn = func[i]->insnsi;
		for (j = 0; j < func[i]->len; j++, insn++) {
			if (BPF_CLASS(insn->code) == BPF_LDX &&
			    (BPF_MODE(insn->code) == BPF_PROBE_MEM ||
+			     BPF_MODE(insn->code) == BPF_PROBE_MEM32 ||
			     BPF_MODE(insn->code) == BPF_PROBE_MEMSX))
				num_exentries++;
+			if ((BPF_CLASS(insn->code) == BPF_STX ||
+			     BPF_CLASS(insn->code) == BPF_ST) &&
+			     BPF_MODE(insn->code) == BPF_PROBE_MEM32)
+				num_exentries++;
		}
		func[i]->aux->num_exentries = num_exentries;
		func[i]->aux->tail_call_reachable = env->subprog_info[i].tail_call_reachable;
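One more illustrative aside (again not kernel code): the reason the verifier
can rewrite 64-bit ALU on PTR_TO_ARENA into 32-bit ALU (the needs_zext flag
above) is that an arena pointer only carries a 32-bit offset into the 4GB
arena region; the JIT later adds kern_vm_start to the 32-bit result. The
sketch below, with invented addresses, checks that doing the pointer
arithmetic purely in the 32-bit domain resolves to the same kernel address.

  /* Sketch: 64-bit vs 32-bit arithmetic on an arena pointer. Illustration
   * only; kern_vm_start and the pointer values are made up for the example.
   */
  #include <assert.h>
  #include <stdint.h>

  static uint64_t arena_kern_addr(uint64_t kern_vm_start, uint32_t off32)
  {
          /* what a PROBE_MEM32 access computes: base + 32-bit offset */
          return kern_vm_start + off32;
  }

  int main(void)
  {
          uint64_t kern_vm_start = 0xffffc90000000000ull; /* example value */
          uint64_t p = 0x7f0a00001000ull; /* user-view arena pointer */
          uint64_t delta = 0x2000;        /* offset added by the program */

          /* 64-bit add on the full pointer ... */
          uint64_t q64 = p + delta;
          /* ... and the same add done purely in the 32-bit domain */
          uint32_t q32 = (uint32_t)p + (uint32_t)delta;

          /* both resolve to the same kernel address */
          assert(arena_kern_addr(kern_vm_start, (uint32_t)q64) ==
                 arena_kern_addr(kern_vm_start, q32));
          return 0;
  }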