| Message ID | 20221110184303.393179-3-hbathini@linux.ibm.com (mailing list archive) |
|---|---|
| State | RFC |
| Delegated to: | BPF |
| Series | enable bpf_prog_pack allocator for powerpc |
On 10/11/2022 at 19:43, Hari Bathini wrote:
> Implement bpf_arch_text_invalidate and use it to fill unused part of
> the bpf_prog_pack with trap instructions when a BPF program is freed.

Same here, although patch_instruction() is nice for a first try, it is
not the solution in the long run.

Same as with the previous patch, it should just map the necessary pages by
allocating a vm area, then mapping the associated physical pages over it
using map_kernel_page(), then use bpf_jit_fill_ill_insns() over that page.

>
> Signed-off-by: Hari Bathini <hbathini@linux.ibm.com>
> ---
>  arch/powerpc/net/bpf_jit_comp.c | 32 ++++++++++++++++++++++++++++++++
>  1 file changed, 32 insertions(+)
>
> diff --git a/arch/powerpc/net/bpf_jit_comp.c b/arch/powerpc/net/bpf_jit_comp.c
> index 7383e0effad2..f925755cd249 100644
> --- a/arch/powerpc/net/bpf_jit_comp.c
> +++ b/arch/powerpc/net/bpf_jit_comp.c
> @@ -26,6 +26,33 @@ static void bpf_jit_fill_ill_insns(void *area, unsigned int size)
>  	memset32(area, BREAKPOINT_INSTRUCTION, size / 4);
>  }
>
> +/*
> + * Patch 'len' bytes with trap instruction at addr, one instruction
> + * at a time. Returns addr on success. ERR_PTR(-EINVAL), otherwise.
> + */
> +static void *bpf_patch_ill_insns(void *addr, size_t len)
> +{
> +	void *ret = ERR_PTR(-EINVAL);
> +	size_t patched = 0;
> +	u32 *start = addr;
> +
> +	if (WARN_ON_ONCE(core_kernel_text((unsigned long)addr)))
> +		return ret;
> +
> +	mutex_lock(&text_mutex);
> +	while (patched < len) {
> +		if (patch_instruction(start++, ppc_inst(PPC_RAW_TRAP())))

Use BREAKPOINT_INSTRUCTION instead of PPC_RAW_TRAP()

> +			goto error;
> +
> +		patched += 4;
> +	}
> +
> +	ret = addr;
> +error:
> +	mutex_unlock(&text_mutex);
> +	return ret;
> +}
> +
>  /*
>   * Patch 'len' bytes of instructions from opcode to addr, one instruction
>   * at a time. Returns addr on success. ERR_PTR(-EINVAL), otherwise.
> @@ -394,3 +421,8 @@ void *bpf_arch_text_copy(void *dst, void *src, size_t len)
>  {
>  	return bpf_patch_instructions(dst, src, len);
>  }
> +
> +int bpf_arch_text_invalidate(void *dst, size_t len)
> +{
> +	return IS_ERR(bpf_patch_ill_insns(dst, len));
> +}

The exact same split between bpf_arch_text_invalidate() and
bpf_patch_ill_insns() as in the previous patch could be done here.
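For illustration only, a minimal sketch of the approach the review suggests, not code from this series: the function name bpf_fill_ill_insns_via_alias() is made up, error handling and the MMU-specific teardown are simplified, and it assumes the prog pack memory is vmalloc-backed so vmalloc_to_page() is usable together with get_vm_area() and the powerpc map_kernel_page() helper.

```c
/*
 * Hypothetical sketch of the reviewer's suggestion, NOT the code from this
 * series: build a temporary writable alias of the prog-pack pages and fill
 * it with traps in one go, instead of one patch_instruction() per word.
 * TLB/hash teardown details (see arch/powerpc/lib/code-patching.c) are
 * deliberately glossed over.
 */
static int bpf_fill_ill_insns_via_alias(void *dst, size_t len)
{
	unsigned long base = (unsigned long)dst & PAGE_MASK;
	unsigned long offset = offset_in_page(dst);
	unsigned long size = PAGE_ALIGN(offset + len);
	struct vm_struct *area;
	unsigned long va, i;
	int err = 0;

	/* Reserve kernel virtual space for the temporary alias. */
	area = get_vm_area(size, VM_ALLOC);
	if (!area)
		return -ENOMEM;
	va = (unsigned long)area->addr;

	/* Map the physical pages backing the read-only prog pack, writable. */
	for (i = 0; i < size; i += PAGE_SIZE) {
		struct page *page = vmalloc_to_page((void *)(base + i));

		err = map_kernel_page(va + i, page_to_phys(page), PAGE_KERNEL);
		if (err)
			break;
	}

	if (!err) {
		/* Fill the whole range with traps through the writable alias. */
		bpf_jit_fill_ill_insns((void *)(va + offset), len);
		flush_icache_range((unsigned long)dst, (unsigned long)dst + len);
	}

	/* Drop the alias; a real implementation must also flush the TLB here. */
	free_vm_area(area);
	return err;
}
```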
diff --git a/arch/powerpc/net/bpf_jit_comp.c b/arch/powerpc/net/bpf_jit_comp.c
index 7383e0effad2..f925755cd249 100644
--- a/arch/powerpc/net/bpf_jit_comp.c
+++ b/arch/powerpc/net/bpf_jit_comp.c
@@ -26,6 +26,33 @@ static void bpf_jit_fill_ill_insns(void *area, unsigned int size)
 	memset32(area, BREAKPOINT_INSTRUCTION, size / 4);
 }
 
+/*
+ * Patch 'len' bytes with trap instruction at addr, one instruction
+ * at a time. Returns addr on success. ERR_PTR(-EINVAL), otherwise.
+ */
+static void *bpf_patch_ill_insns(void *addr, size_t len)
+{
+	void *ret = ERR_PTR(-EINVAL);
+	size_t patched = 0;
+	u32 *start = addr;
+
+	if (WARN_ON_ONCE(core_kernel_text((unsigned long)addr)))
+		return ret;
+
+	mutex_lock(&text_mutex);
+	while (patched < len) {
+		if (patch_instruction(start++, ppc_inst(PPC_RAW_TRAP())))
+			goto error;
+
+		patched += 4;
+	}
+
+	ret = addr;
+error:
+	mutex_unlock(&text_mutex);
+	return ret;
+}
+
 /*
  * Patch 'len' bytes of instructions from opcode to addr, one instruction
  * at a time. Returns addr on success. ERR_PTR(-EINVAL), otherwise.
@@ -394,3 +421,8 @@ void *bpf_arch_text_copy(void *dst, void *src, size_t len)
 {
 	return bpf_patch_instructions(dst, src, len);
 }
+
+int bpf_arch_text_invalidate(void *dst, size_t len)
+{
+	return IS_ERR(bpf_patch_ill_insns(dst, len));
+}
Implement bpf_arch_text_invalidate and use it to fill unused part of
the bpf_prog_pack with trap instructions when a BPF program is freed.

Signed-off-by: Hari Bathini <hbathini@linux.ibm.com>
---
 arch/powerpc/net/bpf_jit_comp.c | 32 ++++++++++++++++++++++++++++++++
 1 file changed, 32 insertions(+)
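For context (not part of this patch): the arch hook added here overrides a weak generic stub in kernel/bpf/core.c, which at the time of this series looks roughly like the snippet below; bpf_prog_pack_free() calls this hook when a program is freed, so an architecture enabling bpf_prog_pack needs a real implementation.

```c
/* Approximate generic fallback in kernel/bpf/core.c, shown for context;
 * architectures that enable bpf_prog_pack are expected to override it.
 */
int __weak bpf_arch_text_invalidate(void *dst, size_t len)
{
	return -ENOTSUPP;
}
```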