From patchwork Wed Oct 11 15:27:22 2023
X-Patchwork-Submitter: Leon Hwang <hffilwlqm@gmail.com>
X-Patchwork-Id: 13417552
X-Patchwork-Delegate: bpf@iogearbox.net
X-Patchwork-State: RFC
From: Leon Hwang <hffilwlqm@gmail.com>
To: bpf@vger.kernel.org
Cc: ast@kernel.org, daniel@iogearbox.net, andrii@kernel.org,
 maciej.fijalkowski@intel.com, jakub@cloudflare.com, iii@linux.ibm.com,
 hengqi.chen@gmail.com, hffilwlqm@gmail.com
Subject: [RFC PATCH bpf-next v2 1/4] bpf, x64: Emit nops for X86_PATCH
Date: Wed, 11 Oct 2023 23:27:22 +0800
Message-ID: <20231011152725.95895-2-hffilwlqm@gmail.com>
In-Reply-To: <20231011152725.95895-1-hffilwlqm@gmail.com>
References: <20231011152725.95895-1-hffilwlqm@gmail.com>

The next commit will reuse emit_nops() in emit_prologue(), so move
emit_nops() ahead of emit_prologue(). While at it, replace the
open-coded memcpy(prog, x86_nops[5], X86_PATCH_SIZE) with
emit_nops(&prog, X86_PATCH_SIZE).

Signed-off-by: Leon Hwang <hffilwlqm@gmail.com>
---
 arch/x86/net/bpf_jit_comp.c | 41 ++++++++++++++++++-------------------
 1 file changed, 20 insertions(+), 21 deletions(-)

diff --git a/arch/x86/net/bpf_jit_comp.c b/arch/x86/net/bpf_jit_comp.c
index 8c10d9abc2394..c2a0465d37da4 100644
--- a/arch/x86/net/bpf_jit_comp.c
+++ b/arch/x86/net/bpf_jit_comp.c
@@ -304,6 +304,25 @@ static void pop_callee_regs(u8 **pprog, bool *callee_regs_used)
         *pprog = prog;
 }
 
+static void emit_nops(u8 **pprog, int len)
+{
+        u8 *prog = *pprog;
+        int i, noplen;
+
+        while (len > 0) {
+                noplen = len;
+
+                if (noplen > ASM_NOP_MAX)
+                        noplen = ASM_NOP_MAX;
+
+                for (i = 0; i < noplen; i++)
+                        EMIT1(x86_nops[noplen][i]);
+                len -= noplen;
+        }
+
+        *pprog = prog;
+}
+
 /*
  * Emit x86-64 prologue code for BPF program.
  * bpf_tail_call helper will skip the first X86_TAIL_CALL_OFFSET bytes
@@ -319,8 +338,7 @@ static void emit_prologue(u8 **pprog, u32 stack_depth, bool ebpf_from_cbpf,
          * but let's waste 5 bytes for now and optimize later
          */
         EMIT_ENDBR();
-        memcpy(prog, x86_nops[5], X86_PATCH_SIZE);
-        prog += X86_PATCH_SIZE;
+        emit_nops(&prog, X86_PATCH_SIZE);
         if (!ebpf_from_cbpf) {
                 if (tail_call_reachable && !is_subprog)
                         /* When it's the entry of the whole tailcall context,
@@ -989,25 +1007,6 @@ static void detect_reg_usage(struct bpf_insn *insn, int insn_cnt,
         }
 }
 
-static void emit_nops(u8 **pprog, int len)
-{
-        u8 *prog = *pprog;
-        int i, noplen;
-
-        while (len > 0) {
-                noplen = len;
-
-                if (noplen > ASM_NOP_MAX)
-                        noplen = ASM_NOP_MAX;
-
-                for (i = 0; i < noplen; i++)
-                        EMIT1(x86_nops[noplen][i]);
-                len -= noplen;
-        }
-
-        *pprog = prog;
-}
-
 /* emit the 3-byte VEX prefix
  *
  * r: same as rex.r, extra bit for ModRM reg field
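
As an aside on what the moved helper produces: emit_nops() splits a
request greedily into nops of at most ASM_NOP_MAX bytes. A minimal
user-space model of the chunking loop (for illustration only; the
x86_nops table lookup is replaced by a printf, and ASM_NOP_MAX = 8
matches the x86-64 value):

    #include <stdio.h>

    #define ASM_NOP_MAX 8   /* longest entry in the kernel's x86_nops table */

    int main(void)
    {
            int len = 13;   /* e.g. the 13-byte region patch 2 will pad */

            /* same loop shape as emit_nops() */
            while (len > 0) {
                    int noplen = len > ASM_NOP_MAX ? ASM_NOP_MAX : len;

                    /* the JIT emits x86_nops[noplen] here */
                    printf("emit %d-byte nop\n", noplen);
                    len -= noplen;
            }
            return 0;
    }

For len = 13 this emits one 8-byte nop followed by one 5-byte nop.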
From patchwork Wed Oct 11 15:27:23 2023
X-Patchwork-Submitter: Leon Hwang <hffilwlqm@gmail.com>
X-Patchwork-Id: 13417553
X-Patchwork-Delegate: bpf@iogearbox.net
X-Patchwork-State: RFC
From: Leon Hwang <hffilwlqm@gmail.com>
To: bpf@vger.kernel.org
Cc: ast@kernel.org, daniel@iogearbox.net, andrii@kernel.org,
 maciej.fijalkowski@intel.com, jakub@cloudflare.com, iii@linux.ibm.com,
 hengqi.chen@gmail.com, hffilwlqm@gmail.com
Subject: [RFC PATCH bpf-next v2 2/4] bpf, x64: Fix tailcall hierarchy
Date: Wed, 11 Oct 2023 23:27:23 +0800
Message-ID: <20231011152725.95895-3-hffilwlqm@gmail.com>
In-Reply-To: <20231011152725.95895-1-hffilwlqm@gmail.com>
References: <20231011152725.95895-1-hffilwlqm@gmail.com>

Since commit ebf7d1f508a73871 ("bpf, x64: rework pro/epilogue and
tailcall handling in JIT"), tailcalls on x64 work better than before,
and since commit e411901c0b775a3a ("bpf: allow for tailcalls in BPF
subprograms for x64 JIT"), tailcalls can run in BPF subprograms on x64.

But what happens when:

1. A bpf program calls more than one subprogram.
2. The tailcalls in those subprograms call back into the bpf program
   itself.

Because tail_call_cnt is not propagated back from the callee, a
tailcall hierarchy comes up, and the MAX_TAIL_CALL_CNT limit does not
work for this case.

In the tail call context, tail_call_cnt is propagated between BPF
subprograms via the stack and the rax register. So, propagating a
pointer to tail_call_cnt via the stack and rax instead makes
tail_call_cnt behave like a single shared variable, which makes the
MAX_TAIL_CALL_CNT limit work for tailcall hierarchy cases. Before
jumping to another bpf prog, load tail_call_cnt through the pointer and
compare it with MAX_TAIL_CALL_CNT; finally, increment tail_call_cnt
through its pointer.

But where is tail_call_cnt stored? It is stored on the stack of the
bpf prog's caller:

        |  STACK  |
        |         |
        |   rip   |
     +->|   tcc   |
     |  |   rip   |
     |  |   rbp   |
     |  +---------+ RBP
     |  |         |
     |  |         |
     |  |         |
     +--| tcc_ptr |
        |   rbx   |
        +---------+ RSP

And tcc_ptr does not need to be popped from the stack in the epilogue
of the bpf prog, following the approach of commit d207929d97ea028f
("bpf, x64: Drop "pop %rcx" instruction on BPF JIT epilogue").

Why not back-propagate tail_call_cnt instead? Because back-propagation
is fragile. It cannot handle the following case:

    int prog1();
    int prog2();

where prog1 is a tail caller and prog2 is its tail callee. If prog2
back-propagates tail_call_cnt in its epilogue, prog2 can no longer run
standalone at the same time: the standalone path would clobber a
register, which would crash the kernel.
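
For concreteness, the problematic shape looks like the following sketch
(it mirrors the tailcall_bpf2bpf_hierarchy1 selftest added in patch 4
of this series; user space is assumed to have stored entry's own prog
fd at jmp_table[0]):

    #include <linux/bpf.h>
    #include <bpf/bpf_helpers.h>

    struct {
            __uint(type, BPF_MAP_TYPE_PROG_ARRAY);
            __uint(max_entries, 1);
            __uint(key_size, sizeof(__u32));
            __uint(value_size, sizeof(__u32));
    } jmp_table SEC(".maps");

    int count = 0;

    static __noinline int subprog_tail(struct __sk_buff *skb)
    {
            /* tailcalls back into entry() via jmp_table[0] */
            bpf_tail_call_static(skb, &jmp_table, 0);
            return 0;
    }

    SEC("tc")
    int entry(struct __sk_buff *skb)
    {
            count++;
            subprog_tail(skb);
            subprog_tail(skb);
            return 1;
    }

    char __license[] SEC("license") = "GPL";

With a single shared tail_call_cnt, count must end up as 34: the
initial run plus MAX_TAIL_CALL_CNT (33) tailcalls. Without the fix,
each level of the hierarchy restarts counting and the number of runs
explodes.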
Fixes: ebf7d1f508a7 ("bpf, x64: rework pro/epilogue and tailcall handling in JIT")
Fixes: e411901c0b77 ("bpf: allow for tailcalls in BPF subprograms for x64 JIT")
Signed-off-by: Leon Hwang <hffilwlqm@gmail.com>
Reviewed-by: Maciej Fijalkowski <maciej.fijalkowski@intel.com>
---
 arch/x86/net/bpf_jit_comp.c | 40 ++++++++++++++++++++++---------------
 1 file changed, 24 insertions(+), 16 deletions(-)

diff --git a/arch/x86/net/bpf_jit_comp.c b/arch/x86/net/bpf_jit_comp.c
index c2a0465d37da4..36631129cc800 100644
--- a/arch/x86/net/bpf_jit_comp.c
+++ b/arch/x86/net/bpf_jit_comp.c
@@ -256,7 +256,7 @@ struct jit_context {
 /* Number of bytes emit_patch() needs to generate instructions */
 #define X86_PATCH_SIZE          5
 /* Number of bytes that will be skipped on tailcall */
-#define X86_TAIL_CALL_OFFSET    (11 + ENDBR_INSN_SIZE)
+#define X86_TAIL_CALL_OFFSET    (22 + ENDBR_INSN_SIZE)
 
 static void push_r12(u8 **pprog)
 {
@@ -340,14 +340,21 @@ static void emit_prologue(u8 **pprog, u32 stack_depth, bool ebpf_from_cbpf,
         EMIT_ENDBR();
         emit_nops(&prog, X86_PATCH_SIZE);
         if (!ebpf_from_cbpf) {
-                if (tail_call_reachable && !is_subprog)
+                if (tail_call_reachable && !is_subprog) {
                         /* When it's the entry of the whole tailcall context,
                          * zeroing rax means initialising tail_call_cnt.
                          */
-                        EMIT2(0x31, 0xC0); /* xor eax, eax */
-                else
-                        /* Keep the same instruction layout. */
-                        EMIT2(0x66, 0x90); /* nop2 */
+                        EMIT2(0x31, 0xC0); /* xor eax, eax */
+                        EMIT1(0x50);       /* push rax */
+                        /* Make rax as ptr that points to tail_call_cnt. */
+                        EMIT3(0x48, 0x89, 0xE0); /* mov rax, rsp */
+                        EMIT1_off32(0xE8, 2);    /* call main prog */
+                        EMIT1(0x59);             /* pop rcx, get rid of tail_call_cnt */
+                        EMIT1(0xC3);             /* ret */
+                } else {
+                        /* Keep the same instruction size. */
+                        emit_nops(&prog, 13);
+                }
         }
 
         /* Exception callback receives FP as third parameter */
         if (is_exception_cb) {
@@ -373,6 +380,7 @@ static void emit_prologue(u8 **pprog, u32 stack_depth, bool ebpf_from_cbpf,
         if (stack_depth)
                 EMIT3_off32(0x48, 0x81, 0xEC, round_up(stack_depth, 8));
         if (tail_call_reachable)
+                /* Here, rax is tail_call_cnt_ptr. */
                 EMIT1(0x50); /* push rax */
         *pprog = prog;
 }
@@ -528,7 +536,7 @@ static void emit_bpf_tail_call_indirect(struct bpf_prog *bpf_prog,
                                         u32 stack_depth, u8 *ip,
                                         struct jit_context *ctx)
 {
-        int tcc_off = -4 - round_up(stack_depth, 8);
+        int tcc_ptr_off = -8 - round_up(stack_depth, 8);
         u8 *prog = *pprog, *start = *pprog;
         int offset;
 
@@ -553,13 +561,12 @@ static void emit_bpf_tail_call_indirect(struct bpf_prog *bpf_prog,
          * if (tail_call_cnt++ >= MAX_TAIL_CALL_CNT)
          *      goto out;
          */
-        EMIT2_off32(0x8B, 0x85, tcc_off);           /* mov eax, dword ptr [rbp - tcc_off] */
-        EMIT3(0x83, 0xF8, MAX_TAIL_CALL_CNT);       /* cmp eax, MAX_TAIL_CALL_CNT */
+        EMIT3_off32(0x48, 0x8B, 0x85, tcc_ptr_off); /* mov rax, qword ptr [rbp - tcc_ptr_off] */
+        EMIT3(0x83, 0x38, MAX_TAIL_CALL_CNT);       /* cmp dword ptr [rax], MAX_TAIL_CALL_CNT */
 
         offset = ctx->tail_call_indirect_label - (prog + 2 - start);
         EMIT2(X86_JAE, offset);                     /* jae out */
-        EMIT3(0x83, 0xC0, 0x01);                    /* add eax, 1 */
-        EMIT2_off32(0x89, 0x85, tcc_off);           /* mov dword ptr [rbp - tcc_off], eax */
+        EMIT3(0x83, 0x00, 0x01);                    /* add dword ptr [rax], 1 */
 
         /* prog = array->ptrs[index]; */
         EMIT4_off32(0x48, 0x8B, 0x8C, 0xD6,         /* mov rcx, [rsi + rdx * 8 + offsetof(...)] */
@@ -581,6 +588,7 @@ static void emit_bpf_tail_call_indirect(struct bpf_prog *bpf_prog,
                 pop_callee_regs(&prog, callee_regs_used);
         }
 
+        /* pop tail_call_cnt_ptr */
         EMIT1(0x58);                                /* pop rax */
         if (stack_depth)
                 EMIT3_off32(0x48, 0x81, 0xC4,       /* add rsp, sd */
@@ -609,7 +617,7 @@ static void emit_bpf_tail_call_direct(struct bpf_prog *bpf_prog,
                                       bool *callee_regs_used, u32 stack_depth,
                                       struct jit_context *ctx)
 {
-        int tcc_off = -4 - round_up(stack_depth, 8);
+        int tcc_ptr_off = -8 - round_up(stack_depth, 8);
         u8 *prog = *pprog, *start = *pprog;
         int offset;
 
@@ -617,13 +625,12 @@ static void emit_bpf_tail_call_direct(struct bpf_prog *bpf_prog,
          * if (tail_call_cnt++ >= MAX_TAIL_CALL_CNT)
          *      goto out;
          */
-        EMIT2_off32(0x8B, 0x85, tcc_off);           /* mov eax, dword ptr [rbp - tcc_off] */
-        EMIT3(0x83, 0xF8, MAX_TAIL_CALL_CNT);       /* cmp eax, MAX_TAIL_CALL_CNT */
+        EMIT3_off32(0x48, 0x8B, 0x85, tcc_ptr_off); /* mov rax, qword ptr [rbp - tcc_ptr_off] */
+        EMIT3(0x83, 0x38, MAX_TAIL_CALL_CNT);       /* cmp dword ptr [rax], MAX_TAIL_CALL_CNT */
 
         offset = ctx->tail_call_direct_label - (prog + 2 - start);
         EMIT2(X86_JAE, offset);                     /* jae out */
-        EMIT3(0x83, 0xC0, 0x01);                    /* add eax, 1 */
-        EMIT2_off32(0x89, 0x85, tcc_off);           /* mov dword ptr [rbp - tcc_off], eax */
+        EMIT3(0x83, 0x00, 0x01);                    /* add dword ptr [rax], 1 */
 
         poke->tailcall_bypass = ip + (prog - start);
         poke->adj_off = X86_TAIL_CALL_OFFSET;
@@ -640,6 +647,7 @@ static void emit_bpf_tail_call_direct(struct bpf_prog *bpf_prog,
                 pop_callee_regs(&prog, callee_regs_used);
         }
 
+        /* pop tail_call_cnt_ptr */
         EMIT1(0x58);                                /* pop rax */
         if (stack_depth)
                 EMIT3_off32(0x48, 0x81, 0xC4, round_up(stack_depth, 8));
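
To see where the new X86_TAIL_CALL_OFFSET value of 22 + ENDBR_INSN_SIZE
comes from, here is the resulting prologue layout, with byte counts
derived from the opcodes emitted in the hunk above (the trailing
push rbp / mov rbp, rsp pair is the frame setup that emit_prologue()
emits next, which is not visible in this diff):

    endbr64              /* ENDBR_INSN_SIZE bytes, if enabled       */
    nop5                 /* 5 bytes, the X86_PATCH_SIZE patch area  */
    xor eax, eax         /* 2 bytes: tail_call_cnt = 0              */
    push rax             /* 1 byte:  tcc lives on caller's stack    */
    mov rax, rsp         /* 3 bytes: rax = tcc_ptr                  */
    call <main prog>     /* 5 bytes: rel32 = 2 skips the pop + ret  */
    pop rcx              /* 1 byte:  discard tail_call_cnt          */
    ret                  /* 1 byte                                  */
    push rbp             /* 1 byte  \ frame setup; a tailcall       */
    mov rbp, rsp         /* 3 bytes / jumps past all of the above   */

The block between the nops and push rbp is 2 + 1 + 3 + 5 + 1 + 1 = 13
bytes, which is why subprograms pad with emit_nops(&prog, 13), and
5 + 13 + 1 + 3 = 22.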
From patchwork Wed Oct 11 15:27:24 2023
X-Patchwork-Submitter: Leon Hwang <hffilwlqm@gmail.com>
X-Patchwork-Id: 13417554
X-Patchwork-Delegate: bpf@iogearbox.net
X-Patchwork-State: RFC
From: Leon Hwang <hffilwlqm@gmail.com>
To: bpf@vger.kernel.org
Cc: ast@kernel.org, daniel@iogearbox.net, andrii@kernel.org,
 maciej.fijalkowski@intel.com, jakub@cloudflare.com, iii@linux.ibm.com,
 hengqi.chen@gmail.com, hffilwlqm@gmail.com
Subject: [RFC PATCH bpf-next v2 3/4] bpf, x64: Load tail_call_cnt pointer
Date: Wed, 11 Oct 2023 23:27:24 +0800
Message-ID: <20231011152725.95895-4-hffilwlqm@gmail.com>
In-Reply-To: <20231011152725.95895-1-hffilwlqm@gmail.com>
References: <20231011152725.95895-1-hffilwlqm@gmail.com>

Rename RESTORE_TAIL_CALL_CNT() to LOAD_TAIL_CALL_CNT_PTR(). Since the
previous commit, rax carries the pointer to tail_call_cnt rather than
tail_call_cnt itself, so LOAD_TAIL_CALL_CNT_PTR() is the better name.

Signed-off-by: Leon Hwang <hffilwlqm@gmail.com>
---
 arch/x86/net/bpf_jit_comp.c | 18 +++++++++---------
 1 file changed, 9 insertions(+), 9 deletions(-)

diff --git a/arch/x86/net/bpf_jit_comp.c b/arch/x86/net/bpf_jit_comp.c
index 36631129cc800..73da9a2125589 100644
--- a/arch/x86/net/bpf_jit_comp.c
+++ b/arch/x86/net/bpf_jit_comp.c
@@ -1077,7 +1077,7 @@ static void emit_shiftx(u8 **pprog, u32 dst_reg, u8 src_reg, bool is64, u8 op)
 #define INSN_SZ_DIFF (((addrs[i] - addrs[i - 1]) - (prog - temp)))
 
 /* mov rax, qword ptr [rbp - rounded_stack_depth - 8] */
-#define RESTORE_TAIL_CALL_CNT(stack)                            \
+#define LOAD_TAIL_CALL_CNT_PTR(stack)                           \
         EMIT3_off32(0x48, 0x8B, 0x85, -round_up(stack, 8) - 8)
 
 static int do_jit(struct bpf_prog *bpf_prog, int *addrs, u8 *image, u8 *rw_image,
@@ -1697,7 +1697,7 @@ st:                  if (is_imm8(insn->off))
 
                         func = (u8 *) __bpf_call_base + imm32;
                         if (tail_call_reachable) {
-                                RESTORE_TAIL_CALL_CNT(bpf_prog->aux->stack_depth);
+                                LOAD_TAIL_CALL_CNT_PTR(bpf_prog->aux->stack_depth);
                                 if (!imm32)
                                         return -EINVAL;
                                 offs = 7 + x86_call_depth_emit_accounting(&prog, func);
@@ -2479,7 +2479,7 @@ int arch_prepare_bpf_trampoline(struct bpf_tramp_image *im, void *image, void *i
          *                     [ ... ]
          *                     [ stack_arg2 ]
          * RBP - arg_stack_off [ stack_arg1 ]
-         * RSP                 [ tail_call_cnt ] BPF_TRAMP_F_TAIL_CALL_CTX
+         * RSP                 [ tail_call_cnt_ptr ] BPF_TRAMP_F_TAIL_CALL_CTX
          */
 
         /* room for return value of orig_call or fentry prog */
@@ -2599,10 +2599,10 @@ int arch_prepare_bpf_trampoline(struct bpf_tramp_image *im, void *image, void *i
                 save_args(m, &prog, arg_stack_off, true);
 
                 if (flags & BPF_TRAMP_F_TAIL_CALL_CTX)
-                        /* Before calling the original function, restore the
-                         * tail_call_cnt from stack to rax.
+                        /* Before calling the original function, load the
+                         * tail_call_cnt_ptr to rax.
                          */
-                        RESTORE_TAIL_CALL_CNT(stack_size);
+                        LOAD_TAIL_CALL_CNT_PTR(stack_size);
 
                 if (flags & BPF_TRAMP_F_ORIG_STACK) {
                         emit_ldx(&prog, BPF_DW, BPF_REG_6, BPF_REG_FP, 8);
@@ -2658,10 +2658,10 @@ int arch_prepare_bpf_trampoline(struct bpf_tramp_image *im, void *image, void *i
                         goto cleanup;
                 }
         } else if (flags & BPF_TRAMP_F_TAIL_CALL_CTX)
-                /* Before running the original function, restore the
-                 * tail_call_cnt from stack to rax.
+                /* Before running the original function, load the
+                 * tail_call_cnt_ptr to rax.
                  */
-                RESTORE_TAIL_CALL_CNT(stack_size);
+                LOAD_TAIL_CALL_CNT_PTR(stack_size);
 
         /* restore return value of orig_call or fentry prog back into RAX */
         if (save_ret)
From patchwork Wed Oct 11 15:27:25 2023
X-Patchwork-Submitter: Leon Hwang <hffilwlqm@gmail.com>
X-Patchwork-Id: 13417555
X-Patchwork-Delegate: bpf@iogearbox.net
X-Patchwork-State: RFC
From: Leon Hwang <hffilwlqm@gmail.com>
To: bpf@vger.kernel.org
Cc: ast@kernel.org, daniel@iogearbox.net, andrii@kernel.org,
 maciej.fijalkowski@intel.com, jakub@cloudflare.com, iii@linux.ibm.com,
 hengqi.chen@gmail.com, hffilwlqm@gmail.com
Subject: [RFC PATCH bpf-next v2 4/4] selftests/bpf: Add testcases for tailcall
 hierarchy fixing
Date: Wed, 11 Oct 2023 23:27:25 +0800
Message-ID: <20231011152725.95895-5-hffilwlqm@gmail.com>
In-Reply-To: <20231011152725.95895-1-hffilwlqm@gmail.com>
References: <20231011152725.95895-1-hffilwlqm@gmail.com>

Add some test cases to confirm that the tailcall hierarchy issue has
been fixed:

tools/testing/selftests/bpf/test_progs -t tailcalls
235/17  tailcalls/tailcall_bpf2bpf_hierarchy_1:OK
235/18  tailcalls/tailcall_bpf2bpf_hierarchy_fentry:OK
235/19  tailcalls/tailcall_bpf2bpf_hierarchy_fexit:OK
235/20  tailcalls/tailcall_bpf2bpf_hierarchy_fentry_fexit:OK
235/21  tailcalls/tailcall_bpf2bpf_hierarchy_2:OK
235/22  tailcalls/tailcall_bpf2bpf_hierarchy_3:OK
235     tailcalls:OK
Summary: 1/22 PASSED, 0 SKIPPED, 0 FAILED

Signed-off-by: Leon Hwang <hffilwlqm@gmail.com>
---
 .../selftests/bpf/prog_tests/tailcalls.c      | 418 ++++++++++++++++++
 .../bpf/progs/tailcall_bpf2bpf_hierarchy1.c   |  34 ++
 .../bpf/progs/tailcall_bpf2bpf_hierarchy2.c   |  55 +++
 .../bpf/progs/tailcall_bpf2bpf_hierarchy3.c   |  46 ++
 4 files changed, 553 insertions(+)
 create mode 100644 tools/testing/selftests/bpf/progs/tailcall_bpf2bpf_hierarchy1.c
 create mode 100644 tools/testing/selftests/bpf/progs/tailcall_bpf2bpf_hierarchy2.c
 create mode 100644 tools/testing/selftests/bpf/progs/tailcall_bpf2bpf_hierarchy3.c

diff --git a/tools/testing/selftests/bpf/prog_tests/tailcalls.c b/tools/testing/selftests/bpf/prog_tests/tailcalls.c
index fc6b2954e8f50..49d9b7ed41950 100644
--- a/tools/testing/selftests/bpf/prog_tests/tailcalls.c
+++ b/tools/testing/selftests/bpf/prog_tests/tailcalls.c
@@ -1105,6 +1105,412 @@ static void test_tailcall_bpf2bpf_fentry_entry(void)
         bpf_object__close(tgt_obj);
 }
 
+static void test_tailcall_hierarchy_count(const char *which, bool test_fentry,
+                                          bool test_fexit)
+{
+        int err, map_fd, prog_fd, main_data_fd, fentry_data_fd, fexit_data_fd, i, val;
+        struct bpf_object *obj = NULL, *fentry_obj = NULL, *fexit_obj = NULL;
+        struct bpf_link *fentry_link = NULL, *fexit_link = NULL;
+        struct bpf_map *prog_array, *data_map;
+        struct bpf_program *prog;
+        char buff[128] = {};
+
+        LIBBPF_OPTS(bpf_test_run_opts, topts,
+                    .data_in = buff,
+                    .data_size_in = sizeof(buff),
+                    .repeat = 1,
+        );
+
+        err = bpf_prog_test_load(which, BPF_PROG_TYPE_SCHED_CLS, &obj,
+                                 &prog_fd);
+        if (!ASSERT_OK(err, "load obj"))
+                return;
+
+        prog = bpf_object__find_program_by_name(obj, "entry");
+        if (!ASSERT_OK_PTR(prog, "find entry prog"))
+                goto out;
+
+        prog_fd = bpf_program__fd(prog);
+        if (!ASSERT_GE(prog_fd, 0, "prog_fd"))
+                goto out;
+
+        prog_array = bpf_object__find_map_by_name(obj, "jmp_table");
+        if (!ASSERT_OK_PTR(prog_array, "find jmp_table"))
+                goto out;
+
+        map_fd = bpf_map__fd(prog_array);
+        if (!ASSERT_GE(map_fd, 0, "map_fd"))
+                goto out;
+
+        i = 0;
+        err = bpf_map_update_elem(map_fd, &i, &prog_fd, BPF_ANY);
+        if (!ASSERT_OK(err, "update jmp_table"))
+                goto out;
+
+        if (test_fentry) {
+                fentry_obj = bpf_object__open_file("tailcall_bpf2bpf_fentry.bpf.o",
+                                                   NULL);
+                if (!ASSERT_OK_PTR(fentry_obj, "open fentry_obj file"))
+                        goto out;
+
+                prog = bpf_object__find_program_by_name(fentry_obj, "fentry");
+                if (!ASSERT_OK_PTR(prog, "find fentry prog"))
+                        goto out;
+
+                err = bpf_program__set_attach_target(prog, prog_fd,
+                                                     "subprog_tail");
+                if (!ASSERT_OK(err, "set_attach_target subprog_tail"))
+                        goto out;
+
+                err = bpf_object__load(fentry_obj);
+                if (!ASSERT_OK(err, "load fentry_obj"))
+                        goto out;
+
+                fentry_link = bpf_program__attach_trace(prog);
+                if (!ASSERT_OK_PTR(fentry_link, "attach_trace"))
+                        goto out;
+        }
+
+        if (test_fexit) {
+                fexit_obj = bpf_object__open_file("tailcall_bpf2bpf_fexit.bpf.o",
+                                                  NULL);
+                if (!ASSERT_OK_PTR(fexit_obj, "open fexit_obj file"))
+                        goto out;
+
+                prog = bpf_object__find_program_by_name(fexit_obj, "fexit");
+                if (!ASSERT_OK_PTR(prog, "find fexit prog"))
+                        goto out;
+
+                err = bpf_program__set_attach_target(prog, prog_fd,
+                                                     "subprog_tail");
+                if (!ASSERT_OK(err, "set_attach_target subprog_tail"))
+                        goto out;
+
+                err = bpf_object__load(fexit_obj);
+                if (!ASSERT_OK(err, "load fexit_obj"))
+                        goto out;
+
+                fexit_link = bpf_program__attach_trace(prog);
+                if (!ASSERT_OK_PTR(fexit_link, "attach_trace"))
+                        goto out;
+        }
+
+        err = bpf_prog_test_run_opts(prog_fd, &topts);
+        ASSERT_OK(err, "tailcall");
+        ASSERT_EQ(topts.retval, 1, "tailcall retval");
+
+        data_map = bpf_object__find_map_by_name(obj, ".bss");
+        if (!ASSERT_FALSE(!data_map || !bpf_map__is_internal(data_map),
+                          "find data_map"))
+                goto out;
+
+        main_data_fd = bpf_map__fd(data_map);
+        if (!ASSERT_GE(main_data_fd, 0, "main_data_fd"))
+                goto out;
+
+        i = 0;
+        err = bpf_map_lookup_elem(main_data_fd, &i, &val);
+        ASSERT_OK(err, "tailcall count");
+        ASSERT_EQ(val, 34, "tailcall count");
+
+        if (test_fentry) {
+                data_map = bpf_object__find_map_by_name(fentry_obj, ".bss");
+                if (!ASSERT_FALSE(!data_map || !bpf_map__is_internal(data_map),
+                                  "find tailcall_bpf2bpf_fentry.bss map"))
+                        goto out;
+
+                fentry_data_fd = bpf_map__fd(data_map);
+                if (!ASSERT_GE(fentry_data_fd, 0,
+                               "find tailcall_bpf2bpf_fentry.bss map fd"))
+                        goto out;
+
+                i = 0;
+                err = bpf_map_lookup_elem(fentry_data_fd, &i, &val);
+                ASSERT_OK(err, "fentry count");
+                ASSERT_EQ(val, 68, "fentry count");
+        }
+
+        if (test_fexit) {
+                data_map = bpf_object__find_map_by_name(fexit_obj, ".bss");
+                if (!ASSERT_FALSE(!data_map || !bpf_map__is_internal(data_map),
+                                  "find tailcall_bpf2bpf_fexit.bss map"))
+                        goto out;
+
+                fexit_data_fd = bpf_map__fd(data_map);
+                if (!ASSERT_GE(fexit_data_fd, 0,
+                               "find tailcall_bpf2bpf_fexit.bss map fd"))
+                        goto out;
+
+                i = 0;
+                err = bpf_map_lookup_elem(fexit_data_fd, &i, &val);
+                ASSERT_OK(err, "fexit count");
+                ASSERT_EQ(val, 68, "fexit count");
+        }
+
+        i = 0;
+        err = bpf_map_delete_elem(map_fd, &i);
+        if (!ASSERT_OK(err, "delete_elem from jmp_table"))
+                goto out;
+
+        err = bpf_prog_test_run_opts(prog_fd, &topts);
+        ASSERT_OK(err, "tailcall");
+        ASSERT_EQ(topts.retval, 1, "tailcall retval");
+
+        i = 0;
+        err = bpf_map_lookup_elem(main_data_fd, &i, &val);
+        ASSERT_OK(err, "tailcall count");
+        ASSERT_EQ(val, 35, "tailcall count");
+
+        if (test_fentry) {
+                i = 0;
+                err = bpf_map_lookup_elem(fentry_data_fd, &i, &val);
+                ASSERT_OK(err, "fentry count");
+                ASSERT_EQ(val, 70, "fentry count");
+        }
+
+        if (test_fexit) {
+                i = 0;
+                err = bpf_map_lookup_elem(fexit_data_fd, &i, &val);
+                ASSERT_OK(err, "fexit count");
+                ASSERT_EQ(val, 70, "fexit count");
+        }
+
+out:
+        bpf_link__destroy(fentry_link);
+        bpf_link__destroy(fexit_link);
+        bpf_object__close(fentry_obj);
+        bpf_object__close(fexit_obj);
+        bpf_object__close(obj);
+}
+
+/* test_tailcall_bpf2bpf_hierarchy_1 checks that the count value of the tail
+ * call limit enforcement matches with expectations when tailcalls are preceded
+ * with two bpf2bpf calls.
+ *
+ *         subprog --tailcall-> entry
+ * entry <
+ *         subprog --tailcall-> entry
+ */
+static void test_tailcall_bpf2bpf_hierarchy_1(void)
+{
+        test_tailcall_hierarchy_count("tailcall_bpf2bpf_hierarchy1.bpf.o",
+                                      false, false);
+}
+
+/* test_tailcall_bpf2bpf_hierarchy_fentry checks that the count value of the
+ * tail call limit enforcement matches with expectations when tailcalls are
+ * preceded with two bpf2bpf calls, and the two subprogs are traced by fentry.
+ */
+static void test_tailcall_bpf2bpf_hierarchy_fentry(void)
+{
+        test_tailcall_hierarchy_count("tailcall_bpf2bpf_hierarchy1.bpf.o",
+                                      true, false);
+}
+
+/* test_tailcall_bpf2bpf_hierarchy_fexit checks that the count value of the tail
+ * call limit enforcement matches with expectations when tailcalls are preceded
+ * with two bpf2bpf calls, and the two subprogs are traced by fexit.
+ */
+static void test_tailcall_bpf2bpf_hierarchy_fexit(void)
+{
+        test_tailcall_hierarchy_count("tailcall_bpf2bpf_hierarchy1.bpf.o",
+                                      false, true);
+}
+
+/* test_tailcall_bpf2bpf_hierarchy_fentry_fexit checks that the count value of
+ * the tail call limit enforcement matches with expectations when tailcalls are
+ * preceded with two bpf2bpf calls, and the two subprogs are traced by both
+ * fentry and fexit.
+ */
+static void test_tailcall_bpf2bpf_hierarchy_fentry_fexit(void)
+{
+        test_tailcall_hierarchy_count("tailcall_bpf2bpf_hierarchy1.bpf.o",
+                                      true, true);
+}
+
+/* test_tailcall_bpf2bpf_hierarchy_2 checks that the count value of the tail
+ * call limit enforcement matches with expectations:
+ *
+ *         subprog_tail0 --tailcall-> classifier_0 -> subprog_tail0
+ * entry <
+ *         subprog_tail1 --tailcall-> classifier_1 -> subprog_tail1
+ */
+static void test_tailcall_bpf2bpf_hierarchy_2(void)
+{
+        int err, map_fd, prog_fd, data_fd, main_fd, i, val[2];
+        struct bpf_map *prog_array, *data_map;
+        struct bpf_object *obj = NULL;
+        struct bpf_program *prog;
+        char buff[128] = {};
+
+        LIBBPF_OPTS(bpf_test_run_opts, topts,
+                    .data_in = buff,
+                    .data_size_in = sizeof(buff),
+                    .repeat = 1,
+        );
+
+        err = bpf_prog_test_load("tailcall_bpf2bpf_hierarchy2.bpf.o",
+                                 BPF_PROG_TYPE_SCHED_CLS,
+                                 &obj, &prog_fd);
+        if (!ASSERT_OK(err, "load obj"))
+                return;
+
+        prog = bpf_object__find_program_by_name(obj, "entry");
+        if (!ASSERT_OK_PTR(prog, "find entry prog"))
+                goto out;
+
+        main_fd = bpf_program__fd(prog);
+        if (!ASSERT_GE(main_fd, 0, "main_fd"))
+                goto out;
+
+        prog_array = bpf_object__find_map_by_name(obj, "jmp_table");
+        if (!ASSERT_OK_PTR(prog_array, "find jmp_table map"))
+                goto out;
+
+        map_fd = bpf_map__fd(prog_array);
+        if (!ASSERT_GE(map_fd, 0, "find jmp_table map fd"))
+                goto out;
+
+        prog = bpf_object__find_program_by_name(obj, "classifier_0");
+        if (!ASSERT_OK_PTR(prog, "find classifier_0 prog"))
+                goto out;
+
+        prog_fd = bpf_program__fd(prog);
+        if (!ASSERT_GE(prog_fd, 0, "find classifier_0 prog fd"))
+                goto out;
+
+        i = 0;
+        err = bpf_map_update_elem(map_fd, &i, &prog_fd, BPF_ANY);
+        if (!ASSERT_OK(err, "update jmp_table"))
+                goto out;
+
+        prog = bpf_object__find_program_by_name(obj, "classifier_1");
+        if (!ASSERT_OK_PTR(prog, "find classifier_1 prog"))
+                goto out;
+
+        prog_fd = bpf_program__fd(prog);
+        if (!ASSERT_GE(prog_fd, 0, "find classifier_1 prog fd"))
+                goto out;
+
+        i = 1;
+        err = bpf_map_update_elem(map_fd, &i, &prog_fd, BPF_ANY);
+        if (!ASSERT_OK(err, "update jmp_table"))
+                goto out;
+
+        err = bpf_prog_test_run_opts(main_fd, &topts);
+        ASSERT_OK(err, "tailcall");
+        ASSERT_EQ(topts.retval, 1, "tailcall retval");
+
+        data_map = bpf_object__find_map_by_name(obj, ".bss");
+        if (!ASSERT_FALSE(!data_map || !bpf_map__is_internal(data_map),
+                          "find .bss map"))
+                goto out;
+
+        data_fd = bpf_map__fd(data_map);
+        if (!ASSERT_GE(data_fd, 0, "find .bss map fd"))
+                goto out;
+
+        i = 0;
+        err = bpf_map_lookup_elem(data_fd, &i, &val);
+        ASSERT_OK(err, "tailcall counts");
+        ASSERT_EQ(val[0], 33, "tailcall count0");
+        ASSERT_EQ(val[1], 0, "tailcall count1");
+
+out:
+        bpf_object__close(obj);
+}
+
+/* test_tailcall_bpf2bpf_hierarchy_3 checks that the count value of the tail
+ * call limit enforcement matches with expectations:
+ *
+ *                                   subprog with jmp_table0 to classifier_0
+ * entry --tailcall-> classifier_0 <
+ *                                   subprog with jmp_table1 to classifier_0
+ */
+static void test_tailcall_bpf2bpf_hierarchy_3(void)
+{
+        int err, map_fd, prog_fd, data_fd, main_fd, i, val;
+        struct bpf_map *prog_array, *data_map;
+        struct bpf_object *obj = NULL;
+        struct bpf_program *prog;
+        char buff[128] = {};
+
+        LIBBPF_OPTS(bpf_test_run_opts, topts,
+                    .data_in = buff,
+                    .data_size_in = sizeof(buff),
+                    .repeat = 1,
+        );
+
+        err = bpf_prog_test_load("tailcall_bpf2bpf_hierarchy3.bpf.o",
+                                 BPF_PROG_TYPE_SCHED_CLS,
+                                 &obj, &prog_fd);
+        if (!ASSERT_OK(err, "load obj"))
+                return;
+
+        prog = bpf_object__find_program_by_name(obj, "entry");
+        if (!ASSERT_OK_PTR(prog, "find entry prog"))
+                goto out;
+
+        main_fd = bpf_program__fd(prog);
+        if (!ASSERT_GE(main_fd, 0, "main_fd"))
+                goto out;
+
+        prog_array = bpf_object__find_map_by_name(obj, "jmp_table0");
+        if (!ASSERT_OK_PTR(prog_array, "find jmp_table0 map"))
+                goto out;
+
+        map_fd = bpf_map__fd(prog_array);
+        if (!ASSERT_GE(map_fd, 0, "find jmp_table0 map fd"))
+                goto out;
+
+        prog = bpf_object__find_program_by_name(obj, "classifier_0");
+        if (!ASSERT_OK_PTR(prog, "find classifier_0 prog"))
+                goto out;
+
+        prog_fd = bpf_program__fd(prog);
+        if (!ASSERT_GE(prog_fd, 0, "find classifier_0 prog fd"))
+                goto out;
+
+        i = 0;
+        err = bpf_map_update_elem(map_fd, &i, &prog_fd, BPF_ANY);
+        if (!ASSERT_OK(err, "update jmp_table0"))
+                goto out;
+
+        prog_array = bpf_object__find_map_by_name(obj, "jmp_table1");
+        if (!ASSERT_OK_PTR(prog_array, "find jmp_table1 map"))
+                goto out;
+
+        map_fd = bpf_map__fd(prog_array);
+        if (!ASSERT_GE(map_fd, 0, "find jmp_table1 map fd"))
+                goto out;
+
+        i = 0;
+        err = bpf_map_update_elem(map_fd, &i, &prog_fd, BPF_ANY);
+        if (!ASSERT_OK(err, "update jmp_table1"))
+                goto out;
+
+        err = bpf_prog_test_run_opts(main_fd, &topts);
+        ASSERT_OK(err, "tailcall");
+        ASSERT_EQ(topts.retval, 1, "tailcall retval");
+
+        data_map = bpf_object__find_map_by_name(obj, ".bss");
+        if (!ASSERT_FALSE(!data_map || !bpf_map__is_internal(data_map),
+                          "find .bss map"))
+                goto out;
+
+        data_fd = bpf_map__fd(data_map);
+        if (!ASSERT_GE(data_fd, 0, "find .bss map fd"))
+                goto out;
+
+        i = 0;
+        err = bpf_map_lookup_elem(data_fd, &i, &val);
+        ASSERT_OK(err, "tailcall count");
+        ASSERT_EQ(val, 33, "tailcall count");
+
+out:
+        bpf_object__close(obj);
+}
+
 void test_tailcalls(void)
 {
         if (test__start_subtest("tailcall_1"))
@@ -1139,4 +1545,16 @@ void test_tailcalls(void)
                 test_tailcall_bpf2bpf_fentry_fexit();
         if (test__start_subtest("tailcall_bpf2bpf_fentry_entry"))
                 test_tailcall_bpf2bpf_fentry_entry();
+        if (test__start_subtest("tailcall_bpf2bpf_hierarchy_1"))
+                test_tailcall_bpf2bpf_hierarchy_1();
+        if (test__start_subtest("tailcall_bpf2bpf_hierarchy_fentry"))
+                test_tailcall_bpf2bpf_hierarchy_fentry();
+        if (test__start_subtest("tailcall_bpf2bpf_hierarchy_fexit"))
+                test_tailcall_bpf2bpf_hierarchy_fexit();
+        if (test__start_subtest("tailcall_bpf2bpf_hierarchy_fentry_fexit"))
+                test_tailcall_bpf2bpf_hierarchy_fentry_fexit();
+        if (test__start_subtest("tailcall_bpf2bpf_hierarchy_2"))
+                test_tailcall_bpf2bpf_hierarchy_2();
+        if (test__start_subtest("tailcall_bpf2bpf_hierarchy_3"))
+                test_tailcall_bpf2bpf_hierarchy_3();
 }
diff --git a/tools/testing/selftests/bpf/progs/tailcall_bpf2bpf_hierarchy1.c b/tools/testing/selftests/bpf/progs/tailcall_bpf2bpf_hierarchy1.c
new file mode 100644
index 0000000000000..0bfbb7c9637b7
--- /dev/null
+++ b/tools/testing/selftests/bpf/progs/tailcall_bpf2bpf_hierarchy1.c
@@ -0,0 +1,34 @@
+// SPDX-License-Identifier: GPL-2.0
+#include <linux/bpf.h>
+#include <bpf/bpf_helpers.h>
+#include "bpf_legacy.h"
+
+struct {
+        __uint(type, BPF_MAP_TYPE_PROG_ARRAY);
+        __uint(max_entries, 1);
+        __uint(key_size, sizeof(__u32));
+        __uint(value_size, sizeof(__u32));
+} jmp_table SEC(".maps");
+
+int count = 0;
+
+static __noinline
+int subprog_tail(struct __sk_buff *skb)
+{
+        bpf_tail_call_static(skb, &jmp_table, 0);
+        return 0;
+}
+
+SEC("tc")
+int entry(struct __sk_buff *skb)
+{
+        volatile int ret = 1;
+
+        count++;
+        subprog_tail(skb);
+        subprog_tail(skb);
+
+        return ret;
+}
+
+char __license[] SEC("license") = "GPL";
diff --git a/tools/testing/selftests/bpf/progs/tailcall_bpf2bpf_hierarchy2.c b/tools/testing/selftests/bpf/progs/tailcall_bpf2bpf_hierarchy2.c
new file mode 100644
index 0000000000000..b84541546082e
--- /dev/null
+++ b/tools/testing/selftests/bpf/progs/tailcall_bpf2bpf_hierarchy2.c
@@ -0,0 +1,55 @@
+// SPDX-License-Identifier: GPL-2.0
+#include <linux/bpf.h>
+#include <bpf/bpf_helpers.h>
+#include "bpf_legacy.h"
+
+struct {
+        __uint(type, BPF_MAP_TYPE_PROG_ARRAY);
+        __uint(max_entries, 2);
+        __uint(key_size, sizeof(__u32));
+        __uint(value_size, sizeof(__u32));
+} jmp_table SEC(".maps");
+
+int count0 = 0;
+int count1 = 0;
+
+static __noinline
+int subprog_tail0(struct __sk_buff *skb)
+{
+        bpf_tail_call_static(skb, &jmp_table, 0);
+        return 0;
+}
+
+SEC("tc")
+int classifier_0(struct __sk_buff *skb)
+{
+        count0++;
+        subprog_tail0(skb);
+        return 0;
+}
+
+static __noinline
+int subprog_tail1(struct __sk_buff *skb)
+{
+        bpf_tail_call_static(skb, &jmp_table, 1);
+        return 0;
+}
+
+SEC("tc")
+int classifier_1(struct __sk_buff *skb)
+{
+        count1++;
+        subprog_tail1(skb);
+        return 0;
+}
+
+SEC("tc")
+int entry(struct __sk_buff *skb)
+{
+        subprog_tail0(skb);
+        subprog_tail1(skb);
+
+        return 1;
+}
+
+char __license[] SEC("license") = "GPL";
diff --git a/tools/testing/selftests/bpf/progs/tailcall_bpf2bpf_hierarchy3.c b/tools/testing/selftests/bpf/progs/tailcall_bpf2bpf_hierarchy3.c
new file mode 100644
index 0000000000000..6398a1d277fc7
--- /dev/null
+++ b/tools/testing/selftests/bpf/progs/tailcall_bpf2bpf_hierarchy3.c
@@ -0,0 +1,46 @@
+// SPDX-License-Identifier: GPL-2.0
+#include <linux/bpf.h>
+#include <bpf/bpf_helpers.h>
+#include "bpf_legacy.h"
+
+struct {
+        __uint(type, BPF_MAP_TYPE_PROG_ARRAY);
+        __uint(max_entries, 1);
+        __uint(key_size, sizeof(__u32));
+        __uint(value_size, sizeof(__u32));
+} jmp_table0 SEC(".maps");
+
+struct {
+        __uint(type, BPF_MAP_TYPE_PROG_ARRAY);
+        __uint(max_entries, 1);
+        __uint(key_size, sizeof(__u32));
+        __uint(value_size, sizeof(__u32));
+} jmp_table1 SEC(".maps");
+
+int count = 0;
+
+static __noinline
+int subprog_tail(struct __sk_buff *skb, void *jmp_table)
+{
+        bpf_tail_call_static(skb, jmp_table, 0);
+        return 0;
+}
+
+SEC("tc")
+int classifier_0(struct __sk_buff *skb)
+{
+        count++;
+        subprog_tail(skb, &jmp_table0);
+        subprog_tail(skb, &jmp_table1);
+        return 1;
+}
+
+SEC("tc")
+int entry(struct __sk_buff *skb)
+{
+        bpf_tail_call_static(skb, &jmp_table0, 0);
+
+        return 0;
+}
+
+char __license[] SEC("license") = "GPL";