[v3,1/3] bpf,x64: pad NOPs to make images converge more easily

Message ID 20210114095411.20903-2-glin@suse.com (mailing list archive)
State Superseded
Delegated to: BPF
Series bpf,x64: implement jump padding in jit

Checks

netdev/tree_selection: success (Not a local patch)

Commit Message

Gary Lin Jan. 14, 2021, 9:54 a.m. UTC
The x64 bpf jit expects bpf images to converge within the given passes, but
it could fail to do so in some corner cases. For example:

      l0:     ja 40
      l1:     ja 40

        [... repeated ja 40 ]

      l39:    ja 40
      l40:    ret #0

This bpf program contains 40 "ja 40" instructions, which are effectively
NOPs and designed to be replaced with valid code dynamically. Ideally,
bpf jit should optimize those "ja 40" instructions out when translating
the bpf instructions into x64 machine code. However, do_jit() can only
remove one "ja 40" (the one whose offset is 0) per pass, so it takes at
least 40 runs to eliminate those JMPs, which exceeds the current limit
of 20 passes. In the end, the program gets rejected when BPF_JIT_ALWAYS_ON
is set, even though it is a legitimate classic socket filter.

To make bpf images more likely converge within 20 passes, this commit
pads some instructions with NOPs in the last 5 passes:

1. conditional jumps
  A possible size variance comes from the adoption of the imm8 JMP encoding.
  If the offset fits in imm8, we calculate the size difference of this BPF
  instruction between the previous and the current pass and fill the gap
  with NOPs. To avoid recalculating the jump offset, those NOPs are inserted
  before the JMP code, so we have to subtract the 2 bytes of the imm8 JMP
  when calculating the number of NOPs (see the sketch after this list).

2. BPF_JA
  There are two cases for BPF_JA:
  a.) nop jumps
    If this instruction was not optimized out in the previous pass,
    instead of removing it, we insert the equivalent size of NOPs.
  b.) label jumps
    Similar to conditional jumps, we prepend NOPs right before the JMP
    code.
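
For illustration, the padding of an imm8 conditional jump works roughly as
follows (a sketch mirroring the INSN_SZ_DIFF macro below, not the verbatim
patch code):

      /*
       * The previous pass emitted this insn with an imm32 jcc (6 bytes);
       * in this pass the offset fits in imm8, so the jcc shrinks to 2
       * bytes. (prog - temp) is what has already been emitted for this
       * insn in the current pass (e.g. the cmp preceding the jcc).
       */
      nops = (addrs[i] - addrs[i - 1]) - (prog - temp) - 2; /* == 4 here */
      emit_nops(&prog, nops);        /* pad before the jump ...           */
      EMIT2(jmp_cond, jmp_offset);   /* ... then emit the 2-byte imm8 jcc */

If the previous pass already used imm8, the difference is 0 and nothing is
padded.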

To make the code concise, emit_nops() is modified to take a signed len and
to return the number of inserted NOP bytes.

For bpf-to-bpf calls, we always enable padding for the extra pass, since
there is only one extra run and the jump padding doesn't affect images
that converge without padding.

After applying this patch, the corner-case program above was loaded with
the following jit code:

    flen=45 proglen=77 pass=17 image=ffffffffc03367d4 from=jump pid=10097
    JIT code: 00000000: 0f 1f 44 00 00 55 48 89 e5 53 41 55 31 c0 45 31
    JIT code: 00000010: ed 48 89 fb eb 30 eb 2e eb 2c eb 2a eb 28 eb 26
    JIT code: 00000020: eb 24 eb 22 eb 20 eb 1e eb 1c eb 1a eb 18 eb 16
    JIT code: 00000030: eb 14 eb 12 eb 10 eb 0e eb 0c eb 0a eb 08 eb 06
    JIT code: 00000040: eb 04 eb 02 66 90 31 c0 41 5d 5b c9 c3

     0: 0f 1f 44 00 00          nop    DWORD PTR [rax+rax*1+0x0]
     5: 55                      push   rbp
     6: 48 89 e5                mov    rbp,rsp
     9: 53                      push   rbx
     a: 41 55                   push   r13
     c: 31 c0                   xor    eax,eax
     e: 45 31 ed                xor    r13d,r13d
    11: 48 89 fb                mov    rbx,rdi
    14: eb 30                   jmp    0x46
    16: eb 2e                   jmp    0x46
        ...
    3e: eb 06                   jmp    0x46
    40: eb 04                   jmp    0x46
    42: eb 02                   jmp    0x46
    44: 66 90                   xchg   ax,ax
    46: 31 c0                   xor    eax,eax
    48: 41 5d                   pop    r13
    4a: 5b                      pop    rbx
    4b: c9                      leave
    4c: c3                      ret

At the 16th pass, 15 jumps had already been optimized out, one jump was
replaced with a 2-byte NOP at offset 0x44, and the image converged at the
17th pass.

v3:
  - Copy the prologue instructions separately; otherwise the size
    calculation of the first BPF instruction would include the prologue.
  - Replace WARN_ONCE() with pr_err() and EFAULT
  - Use MAX_PASSES in the for loop condition check
  - Remove the "padded" flag from x64_jit_data. For the extra pass of
    subprogs, padding is always enabled since it won't hurt the images
    that converge without padding.

v2:
  - Simplify the sample code in the description and provide the jit code
  - Check the expected padding bytes with WARN_ONCE
  - Move the 'padded' flag to 'struct x64_jit_data'

Signed-off-by: Gary Lin <glin@suse.com>
---
 arch/x86/net/bpf_jit_comp.c | 103 ++++++++++++++++++++++++++----------
 1 file changed, 75 insertions(+), 28 deletions(-)

Comments

Alexei Starovoitov Jan. 15, 2021, 6:37 a.m. UTC | #1
On Thu, Jan 14, 2021 at 1:54 AM Gary Lin <glin@suse.com> wrote:
>          * pass to emit the final image.
>          */
> -       for (pass = 0; pass < 20 || image; pass++) {
> -               proglen = do_jit(prog, addrs, image, oldproglen, &ctx);
> +       for (pass = 0; pass < MAX_PASSES || image; pass++) {
> +               if (!padding && pass >= PADDING_PASSES)
> +                       padding = true;
> +               proglen = do_jit(prog, addrs, image, oldproglen, &ctx, padding);

I'm struggling to reconcile the discussion we had before holidays with
the discussion you guys had in v2:

>> What is the rationale for the latter when JIT is called again for subprog to fill in relative
>> call locations?
>>
> Hmmmm, my thinking was that we only enable padding for those programs
> which are already padded before. But, you're right. For the programs
> converging without padding, enabling padding won't change the final
> image, so it's safe to always set "padding" to true for the extra pass.
>
> Will remove the "padded" flag in v3.

I'm not following why "enabling padding won't change the final image"
is correct.
Say the subprog image converges without padding.
Then for subprog we call JIT again.
Now extra_pass==true and padding==true.
The JITed image will be different.
The test in patch 3 should have caught it, but it didn't,
because it checks for a subprog that needed padding.
The extra_pass needs to emit insns exactly in the right spots.
Otherwise jump targets will be incorrect.
The saved addrs[] array is crucial.
If extra_pass emits different things the instruction starts won't align
to places where addrs[] expects them to be.

So I think the padded flag has to be part of x64_jit_data.
Please double check my analysis and see why your test keeps working.
And please add another test that crashes with this v3 and works when
'padding' is saved.
I expected at least some tests in test_progs to be crashing, but
I've applied patch 1 and run the tests manually and everything passed,
so I could be missing something or our test coverage for subprogs is too weak.
Gary Lin Jan. 15, 2021, 9:41 a.m. UTC | #2
On Thu, Jan 14, 2021 at 10:37:33PM -0800, Alexei Starovoitov wrote:
> On Thu, Jan 14, 2021 at 1:54 AM Gary Lin <glin@suse.com> wrote:
> >          * pass to emit the final image.
> >          */
> > -       for (pass = 0; pass < 20 || image; pass++) {
> > -               proglen = do_jit(prog, addrs, image, oldproglen, &ctx);
> > +       for (pass = 0; pass < MAX_PASSES || image; pass++) {
> > +               if (!padding && pass >= PADDING_PASSES)
> > +                       padding = true;
> > +               proglen = do_jit(prog, addrs, image, oldproglen, &ctx, padding);
> 
> I'm struggling to reconcile the discussion we had before holidays with
> the discussion you guys had in v2:
> 
> >> What is the rationale for the latter when JIT is called again for subprog to fill in relative
> >> call locations?
> >>
> > Hmmmm, my thinking was that we only enable padding for those programs
> > which are already padded before. But, you're right. For the programs
> > converging without padding, enabling padding won't change the final
> > image, so it's safe to always set "padding" to true for the extra pass.
> >
> > Will remove the "padded" flag in v3.
> 
> I'm not following why "enabling padding won't change the final image"
> is correct.
> Say the subprog image converges without padding.
> Then for subprog we call JIT again.
> Now extra_pass==true and padding==true.
> The JITed image will be different.
Actually no.

> The test in patch 3 should have caught it, but it didn't,
> because it checks for a subprog that needed padding.
> The extra_pass needs to emit insns exactly in the right spots.
> Otherwise jump targets will be incorrect.
> The saved addrs[] array is crucial.
> If extra_pass emits different things the instruction starts won't align
> to places where addrs[] expects them to be.
> 
When calculating padding bytes, if the image already converges, the
emitted instruction size just matches (addrs[i] - addrs[i-1]), so
emit_nops() emits 0 bytes, and the image doesn't change.
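
Concretely, for a label jump that was already imm8 in the previous pass
(a sketch with illustrative values):

    addrs[i] - addrs[i-1] == 2      /* size recorded by the previous pass */
    prog - temp == 0                /* nothing emitted yet for this insn  */
    nops = INSN_SZ_DIFF - 2 == 0    /* so emit_nops() pads nothing        */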

> So I think the padded flag has to be part of x64_jit_data.
> Please double check my analysis and see why your test keeps working.
> And please add another test that crashes with this v3 and works when
> 'padding' is saved.
> I expected at least some tests in test_progs to be crashing, but
> I've applied patch 1 and run the tests manually and everything passed,
> so I could be missing something or our test coverage for subprogs is too weak.
>
Alexei Starovoitov Jan. 15, 2021, 4:04 p.m. UTC | #3
On Fri, Jan 15, 2021 at 1:41 AM Gary Lin <glin@suse.com> wrote:
>
> On Thu, Jan 14, 2021 at 10:37:33PM -0800, Alexei Starovoitov wrote:
> > On Thu, Jan 14, 2021 at 1:54 AM Gary Lin <glin@suse.com> wrote:
> > >          * pass to emit the final image.
> > >          */
> > > -       for (pass = 0; pass < 20 || image; pass++) {
> > > -               proglen = do_jit(prog, addrs, image, oldproglen, &ctx);
> > > +       for (pass = 0; pass < MAX_PASSES || image; pass++) {
> > > +               if (!padding && pass >= PADDING_PASSES)
> > > +                       padding = true;
> > > +               proglen = do_jit(prog, addrs, image, oldproglen, &ctx, padding);
> >
> > I'm struggling to reconcile the discussion we had before holidays with
> > the discussion you guys had in v2:
> >
> > >> What is the rationale for the latter when JIT is called again for subprog to fill in relative
> > >> call locations?
> > >>
> > > Hmmmm, my thinking was that we only enable padding for those programs
> > > which are already padded before. But, you're right. For the programs
> > > converging without padding, enabling padding won't change the final
> > > image, so it's safe to always set "padding" to true for the extra pass.
> > >
> > > Will remove the "padded" flag in v3.
> >
> > I'm not following why "enabling padding won't change the final image"
> > is correct.
> > Say the subprog image converges without padding.
> > Then for subprog we call JIT again.
> > Now extra_pass==true and padding==true.
> > The JITed image will be different.
> Actually no.
>
> > The test in patch 3 should have caught it, but it didn't,
> > because it checks for a subprog that needed padding.
> > The extra_pass needs to emit insns exactly in the right spots.
> > Otherwise jump targets will be incorrect.
> > The saved addrs[] array is crucial.
> > If extra_pass emits different things the instruction starts won't align
> > to places where addrs[] expects them to be.
> >
> When calculating padding bytes, if the image already converges, the
> emitted instruction size just matches (addrs[i] - addrs[i-1]), so
> > emit_nops() emits 0 bytes, and the image doesn't change.

I see. You're right. That's very tricky.

The patch set doesn't apply cleanly.
Could you please rebase and add a detailed comment about this logic?

Also please add comments why we check:
nops != 0 && nops != 4
nops != 0 && nops != 2 && nops != 5
nops != 0 && nops != 3
None of it is obvious.

Does your single test cover all combinations of numbers?
Gary Lin Jan. 18, 2021, 1:51 a.m. UTC | #4
On Fri, Jan 15, 2021 at 08:04:06AM -0800, Alexei Starovoitov wrote:
> On Fri, Jan 15, 2021 at 1:41 AM Gary Lin <glin@suse.com> wrote:
> >
> > On Thu, Jan 14, 2021 at 10:37:33PM -0800, Alexei Starovoitov wrote:
> > > On Thu, Jan 14, 2021 at 1:54 AM Gary Lin <glin@suse.com> wrote:
> > > >          * pass to emit the final image.
> > > >          */
> > > > -       for (pass = 0; pass < 20 || image; pass++) {
> > > > -               proglen = do_jit(prog, addrs, image, oldproglen, &ctx);
> > > > +       for (pass = 0; pass < MAX_PASSES || image; pass++) {
> > > > +               if (!padding && pass >= PADDING_PASSES)
> > > > +                       padding = true;
> > > > +               proglen = do_jit(prog, addrs, image, oldproglen, &ctx, padding);
> > >
> > > I'm struggling to reconcile the discussion we had before holidays with
> > > the discussion you guys had in v2:
> > >
> > > >> What is the rationale for the latter when JIT is called again for subprog to fill in relative
> > > >> call locations?
> > > >>
> > > > Hmmmm, my thinking was that we only enable padding for those programs
> > > > which are already padded before. But, you're right. For the programs
> > > > converging without padding, enabling padding won't change the final
> > > > image, so it's safe to always set "padding" to true for the extra pass.
> > > >
> > > > Will remove the "padded" flag in v3.
> > >
> > > I'm not following why "enabling padding won't change the final image"
> > > is correct.
> > > Say the subprog image converges without padding.
> > > Then for subprog we call JIT again.
> > > Now extra_pass==true and padding==true.
> > > The JITed image will be different.
> > Actually no.
> >
> > > The test in patch 3 should have caught it, but it didn't,
> > > because it checks for a subprog that needed padding.
> > > The extra_pass needs to emit insns exactly in the right spots.
> > > Otherwise jump targets will be incorrect.
> > > The saved addrs[] array is crucial.
> > > If extra_pass emits different things the instruction starts won't align
> > > to places where addrs[] expects them to be.
> > >
> > When calculating padding bytes, if the image already converges, the
> > emitted instruction size just matches (addrs[i] - addrs[i-1]), so
> > emit_nops() emits 0 bytes, and the image doesn't change.
> 
> I see. You're right. That's very tricky.
> 
> The patch set doesn't apply cleanly.
> Could you please rebase and add a detailed comment about this logic?
> 
> Also please add comments why we check:
> nops != 0 && nops != 4
> nops != 0 && nops != 2 && nops != 5
> nops != 0 && nops != 3
> None of it is obvious.
Sure, I'll add comments for them.

> 
> Does your single test cover all combinations of numbers?
> 
The test case only covers the NOP JUMP for nops == 0 and nops == 2.
I have to figure out how to create a large enough program to trigger the
transition of imm32 jump to imm8 jump.

Gary Lin
Patch

diff --git a/arch/x86/net/bpf_jit_comp.c b/arch/x86/net/bpf_jit_comp.c
index 796506dcfc42..bb36f4117e9b 100644
--- a/arch/x86/net/bpf_jit_comp.c
+++ b/arch/x86/net/bpf_jit_comp.c
@@ -789,8 +789,31 @@  static void detect_reg_usage(struct bpf_insn *insn, int insn_cnt,
 	}
 }
 
+static int emit_nops(u8 **pprog, int len)
+{
+	u8 *prog = *pprog;
+	int i, noplen, cnt = 0;
+
+	while (len > 0) {
+		noplen = len;
+
+		if (noplen > ASM_NOP_MAX)
+			noplen = ASM_NOP_MAX;
+
+		for (i = 0; i < noplen; i++)
+			EMIT1(ideal_nops[noplen][i]);
+		len -= noplen;
+	}
+
+	*pprog = prog;
+
+	return cnt;
+}
+
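+/*
+ * Size of this BPF insn in the previous pass (recorded in addrs[]) minus the
+ * bytes already emitted for it in the current pass. Only meaningful inside
+ * the instruction loop of do_jit() below.
+ */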
+#define INSN_SZ_DIFF (((addrs[i] - addrs[i - 1]) - (prog - temp)))
+
 static int do_jit(struct bpf_prog *bpf_prog, int *addrs, u8 *image,
-		  int oldproglen, struct jit_context *ctx)
+		  int oldproglen, struct jit_context *ctx, bool jmp_padding)
 {
 	bool tail_call_reachable = bpf_prog->aux->tail_call_reachable;
 	struct bpf_insn *insn = bpf_prog->insnsi;
@@ -800,7 +823,7 @@  static int do_jit(struct bpf_prog *bpf_prog, int *addrs, u8 *image,
 	bool seen_exit = false;
 	u8 temp[BPF_MAX_INSN_SIZE + BPF_INSN_SAFETY];
 	int i, cnt = 0, excnt = 0;
-	int proglen = 0;
+	int ilen, proglen = 0;
 	u8 *prog = temp;
 
 	detect_reg_usage(insn, insn_cnt, callee_regs_used,
@@ -813,7 +836,13 @@  static int do_jit(struct bpf_prog *bpf_prog, int *addrs, u8 *image,
 		      bpf_prog_was_classic(bpf_prog), tail_call_reachable,
 		      bpf_prog->aux->func_idx != 0);
 	push_callee_regs(&prog, callee_regs_used);
-	addrs[0] = prog - temp;
+
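+	/*
+	 * Copy the prologue on its own; otherwise the size calculation of the
+	 * first BPF instruction would include the prologue bytes.
+	 */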
+	ilen = prog - temp;
+	if (image)
+		memcpy(image + proglen, temp, ilen);
+	proglen += ilen;
+	addrs[0] = proglen;
+	prog = temp;
 
 	for (i = 1; i <= insn_cnt; i++, insn++) {
 		const s32 imm32 = insn->imm;
@@ -822,8 +851,8 @@  static int do_jit(struct bpf_prog *bpf_prog, int *addrs, u8 *image,
 		u8 b2 = 0, b3 = 0;
 		s64 jmp_offset;
 		u8 jmp_cond;
-		int ilen;
 		u8 *func;
+		int nops;
 
 		switch (insn->code) {
 			/* ALU */
@@ -1409,6 +1438,15 @@  xadd:			if (is_imm8(insn->off))
 			}
 			jmp_offset = addrs[i + insn->off] - addrs[i];
 			if (is_imm8(jmp_offset)) {
+				if (jmp_padding) {
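+					/*
+					 * To keep jmp_offset valid, the NOPs are emitted
+					 * before the jump insn, so the 2 bytes of the imm8
+					 * jmp_cond itself are subtracted from INSN_SZ_DIFF.
+					 *
+					 * nops == 0: the previous pass already used an imm8
+					 *            jmp_cond here, nothing shrank.
+					 * nops == 4: the previous pass used a 6-byte imm32
+					 *            jmp_cond that now fits in imm8.
+					 */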
+					nops = INSN_SZ_DIFF - 2;
+					if (nops != 0 && nops != 4) {
+						pr_err("unexpected cond_jmp padding: %d bytes\n",
+						       nops);
+						return -EFAULT;
+					}
+					cnt += emit_nops(&prog, nops);
+				}
 				EMIT2(jmp_cond, jmp_offset);
 			} else if (is_simm32(jmp_offset)) {
 				EMIT2_off32(0x0F, jmp_cond + 0x10, jmp_offset);
@@ -1431,11 +1469,33 @@  xadd:			if (is_imm8(insn->off))
 			else
 				jmp_offset = addrs[i + insn->off] - addrs[i];
 
-			if (!jmp_offset)
-				/* Optimize out nop jumps */
+			if (!jmp_offset) {
+				/*
+				 * If jmp_padding is enabled, the extra nops will
+				 * be inserted. Otherwise, optimize out nop jumps.
+				 */
+				if (jmp_padding) {
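+					/*
+					 * In the previous pass this nop jump was either
+					 * optimized out (0 bytes), emitted as a 2-byte imm8
+					 * jmp, or emitted as a 5-byte imm32 jmp, so only
+					 * 0, 2 or 5 bytes of padding are expected here.
+					 */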
+					nops = INSN_SZ_DIFF;
+					if (nops != 0 && nops != 2 && nops != 5) {
+						pr_err("unexpected nop jump padding: %d bytes\n",
+						       nops);
+						return -EFAULT;
+					}
+					cnt += emit_nops(&prog, nops);
+				}
 				break;
+			}
 emit_jmp:
 			if (is_imm8(jmp_offset)) {
+				if (jmp_padding) {
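+					/*
+					 * As above, the NOPs go in front of the jump, so
+					 * the 2 bytes of the imm8 jmp are subtracted.
+					 *
+					 * nops == 0: the jump was already imm8 in the
+					 *            previous pass.
+					 * nops == 3: the previous 5-byte imm32 jmp now fits
+					 *            in the 2-byte imm8 form.
+					 */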
+					nops = INSN_SZ_DIFF - 2;
+					if (nops != 0 && nops != 3) {
+						pr_err("unexpected jump padding: %d bytes\n",
+						       nops);
+						return -EFAULT;
+					}
+					cnt += emit_nops(&prog, nops);
+				}
 				EMIT2(0xEB, jmp_offset);
 			} else if (is_simm32(jmp_offset)) {
 				EMIT1_off32(0xE9, jmp_offset);
@@ -1578,26 +1638,6 @@  static int invoke_bpf_prog(const struct btf_func_model *m, u8 **pprog,
 	return 0;
 }
 
-static void emit_nops(u8 **pprog, unsigned int len)
-{
-	unsigned int i, noplen;
-	u8 *prog = *pprog;
-	int cnt = 0;
-
-	while (len > 0) {
-		noplen = len;
-
-		if (noplen > ASM_NOP_MAX)
-			noplen = ASM_NOP_MAX;
-
-		for (i = 0; i < noplen; i++)
-			EMIT1(ideal_nops[noplen][i]);
-		len -= noplen;
-	}
-
-	*pprog = prog;
-}
-
 static void emit_align(u8 **pprog, u32 align)
 {
 	u8 *target, *prog = *pprog;
@@ -1972,6 +2012,9 @@  struct x64_jit_data {
 	struct jit_context ctx;
 };
 
+#define MAX_PASSES 20
+#define PADDING_PASSES (MAX_PASSES - 5)
+
 struct bpf_prog *bpf_int_jit_compile(struct bpf_prog *prog)
 {
 	struct bpf_binary_header *header = NULL;
@@ -1981,6 +2024,7 @@  struct bpf_prog *bpf_int_jit_compile(struct bpf_prog *prog)
 	struct jit_context ctx = {};
 	bool tmp_blinded = false;
 	bool extra_pass = false;
+	bool padding = false;
 	u8 *image = NULL;
 	int *addrs;
 	int pass;
@@ -2017,6 +2061,7 @@  struct bpf_prog *bpf_int_jit_compile(struct bpf_prog *prog)
 		image = jit_data->image;
 		header = jit_data->header;
 		extra_pass = true;
+		padding = true;
 		goto skip_init_addrs;
 	}
 	addrs = kmalloc_array(prog->len + 1, sizeof(*addrs), GFP_KERNEL);
@@ -2042,8 +2087,10 @@  struct bpf_prog *bpf_int_jit_compile(struct bpf_prog *prog)
 	 * may converge on the last pass. In such case do one more
 	 * pass to emit the final image.
 	 */
-	for (pass = 0; pass < 20 || image; pass++) {
-		proglen = do_jit(prog, addrs, image, oldproglen, &ctx);
+	for (pass = 0; pass < MAX_PASSES || image; pass++) {
+		if (!padding && pass >= PADDING_PASSES)
+			padding = true;
+		proglen = do_jit(prog, addrs, image, oldproglen, &ctx, padding);
 		if (proglen <= 0) {
 out_image:
 			image = NULL;