Message ID: 1409144552-12751-4-git-send-email-wangnan0@huawei.com (mailing list archive)
State: New, archived
(2014/08/27 22:02), Wang Nan wrote:
> +/*
> + * ARM can always optimize an instruction when using ARM ISA.
> + */

Hmm, this comment doesn't look correct anymore :)

> +int arch_prepared_optinsn(struct arch_optimized_insn *optinsn)
> +{
> +	return optinsn->prepared;
> +}

BTW, why don't you check optinsn->insn != NULL? If it is not prepared
for optimizing, optinsn->insn will always be NULL.

[...]

> +int arch_prepare_optimized_kprobe(struct optimized_kprobe *op)
> +{
> +	u8 *buf;
> +	unsigned long rel_chk;
> +	unsigned long val;
> +
> +	if (!can_optimize(op))
> +		return -EILSEQ;
> +
> +	op->optinsn.insn = get_optinsn_slot();
> +	if (!op->optinsn.insn)
> +		return -ENOMEM;
> +
> +	/*
> +	 * Verify if the address gap is in the 32MiB range, because this
> +	 * uses a relative jump.
> +	 *
> +	 * kprobe opt uses a 'b' instruction to branch to optinsn.insn.
> +	 * According to the ARM manual, the branch instruction is:
> +	 *
> +	 *   31  28 27           24 23             0
> +	 *  +------+---+---+---+---+----------------+
> +	 *  | cond | 1 | 0 | 1 | 0 |     imm24      |
> +	 *  +------+---+---+---+---+----------------+
> +	 *
> +	 * imm24 is a signed 24-bit integer. The real branch offset is
> +	 * computed by: imm32 = SignExtend(imm24:'00', 32);
> +	 *
> +	 * So the maximum forward branch should be:
> +	 *   (0x007fffff << 2) = 0x01fffffc = 0x1fffffc
> +	 * The maximum backward branch should be:
> +	 *   (0xff800000 << 2) = 0xfe000000 = -0x2000000
> +	 *
> +	 * We can simply check (rel & 0xfe000003):
> +	 *   if rel is positive, (rel & 0xfe000000) should be 0
> +	 *   if rel is negative, (rel & 0xfe000000) should be 0xfe000000
> +	 * the last '3' is used for alignment checking.
> +	 */
> +	rel_chk = (unsigned long)((long)op->optinsn.insn -
> +			(long)op->kp.addr + 8) & 0xfe000003;
> +
> +	if ((rel_chk != 0) && (rel_chk != 0xfe000000)) {
> +		__arch_remove_optimized_kprobe(op, 0);
> +		return -ERANGE;
> +	}
> +
> +	buf = (u8 *)op->optinsn.insn;
> +
> +	/* Copy arch-dep-instance from template */
> +	memcpy(buf, &optprobe_template_entry, TMPL_END_IDX);
> +
> +	/* Set probe information */
> +	val = (unsigned long)op;
> +	memcpy(buf + TMPL_VAL_IDX, &val, sizeof(val));
> +
> +	/* Set probe function call */
> +	val = (unsigned long)optimized_callback;
> +	memcpy(buf + TMPL_CALL_IDX, &val, sizeof(val));
> +
> +	flush_icache_range((unsigned long)buf,
> +			(unsigned long)buf + TMPL_END_IDX);
> +
> +	op->optinsn.prepared = true;
> +	return 0;
> +}
> +

Thank you,
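The 32MiB range check discussed above can be modeled outside the kernel. This is a hedged sketch of the same mask trick (`branch_displacement_ok` is a made-up name for illustration, not a kernel function):

```c
#include <stdint.h>

/*
 * Model of the range check in arch_prepare_optimized_kprobe():
 * an ARM 'b' instruction encodes a signed 24-bit word offset, so the
 * reachable displacement (after the +8 pipeline adjustment) lies in
 * [-0x2000000, 0x1fffffc] and must be 4-byte aligned.
 *
 * Masking with 0xfe000003 extracts the bits that must be all-zero
 * (positive offsets) or all-one in the top seven bits (negative
 * offsets), plus the two alignment bits.
 */
int branch_displacement_ok(long rel)
{
	unsigned long rel_chk = (unsigned long)rel & 0xfe000003UL;

	return rel_chk == 0 || rel_chk == 0xfe000000UL;
}
```

As a sanity check, 0x1fffffc (max forward) and -0x2000000 (max backward) pass, while 0x2000000 and any unaligned offset fail.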
I gave the patches a quick test and in doing so found a bug which stops
any probes actually being optimised, and the same bug should affect X86,
see comment below...

On Wed, 2014-08-27 at 21:02 +0800, Wang Nan wrote:
[...]
> +int arch_prepare_optimized_kprobe(struct optimized_kprobe *op)
> +{
[...]
> +	rel_chk = (unsigned long)((long)op->optinsn.insn -
> +			(long)op->kp.addr + 8) & 0xfe000003;

This check always fails because op->kp.addr is zero. Debugging this I
found that this function is called from alloc_aggr_kprobe() and that
copies the real kprobe into op->kp using copy_kprobe(), which doesn't
actually copy the 'addr' value...

static inline void copy_kprobe(struct kprobe *ap, struct kprobe *p)
{
	memcpy(&p->opcode, &ap->opcode, sizeof(kprobe_opcode_t));
	memcpy(&p->ainsn, &ap->ainsn, sizeof(struct arch_specific_insn));
}

Thing is, the new ARM code is a close copy of the existing X86 version,
so that would also suffer the same problem of kp.addr always being zero.
So either I've misunderstood something or this is a fundamental bug no
one has noticed before.

Throwing 'p->addr = ap->addr' into the copy_kprobe function fixed the
behaviour of arch_prepare_optimized_kprobe.

I was testing this by running the kprobes tests
(CONFIG_ARM_KPROBES_TEST=y) and putting a few printk's in strategic
places in kprobes-opt.c to check which code paths got executed, which is
how I discovered the problem.

Two things to note when running kprobes tests...

1. On SMP systems it's very slow because of kprobe's use of stop_machine
for applying and removing probes; this forces the system to idle and
wait for the next scheduler tick for each probe change.

2. It's a good idea to enable VERBOSE in kprobes-test.h to get output
for each test case instruction; this reassures you things are
progressing, and if things explode, lets you know what instruction type
triggered it.
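The fix described above can be sketched in isolation. The struct below is a simplified stand-in for the kernel's struct kprobe (only the fields relevant here), with the proposed 'p->addr = ap->addr' line added:

```c
#include <string.h>

/* Simplified stand-ins for the kernel's types, for illustration only. */
typedef unsigned int kprobe_opcode_t;
struct arch_specific_insn { kprobe_opcode_t *insn; };

struct kprobe {
	kprobe_opcode_t *addr;	/* probed address */
	kprobe_opcode_t opcode;	/* original instruction */
	struct arch_specific_insn ainsn;
};

/*
 * copy_kprobe() with the suggested fix: without the addr assignment,
 * op->kp.addr stays NULL and the optprobe range check always fails.
 */
static inline void copy_kprobe(struct kprobe *ap, struct kprobe *p)
{
	p->addr = ap->addr;	/* the missing line */
	memcpy(&p->opcode, &ap->opcode, sizeof(kprobe_opcode_t));
	memcpy(&p->ainsn, &ap->ainsn, sizeof(struct arch_specific_insn));
}
```

After copying, the aggregate kprobe carries the probed address as well as the opcode and arch-specific state, so the displacement computed from op->kp.addr is meaningful.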
(2014/09/02 22:49), Jon Medhurst (Tixy) wrote:
> I gave the patches a quick test and in doing so found a bug which stops
> any probes actually being optimised, and the same bug should affect X86,
> see comment below...
>
> On Wed, 2014-08-27 at 21:02 +0800, Wang Nan wrote:
> [...]
>> +	rel_chk = (unsigned long)((long)op->optinsn.insn -
>> +			(long)op->kp.addr + 8) & 0xfe000003;
>
> This check always fails because op->kp.addr is zero. Debugging this I
> found that this function is called from alloc_aggr_kprobe() and that
> copies the real kprobe into op->kp using copy_kprobe(), which doesn't
> actually copy the 'addr' value...

Right, I've already pointed that out :)

https://lkml.org/lkml/2014/8/28/114

[...]

> 1. On SMP systems it's very slow because of kprobe's use of stop_machine
> for applying and removing probes; this forces the system to idle and
> wait for the next scheduler tick for each probe change.

Hmm, agreed. It seems to be an arm32 limitation on self-modifying code on
SMP. I'm not sure how we can handle it, but I guess:

- for some processors which have better coherent caches for SMP, we can
  atomically replace the breakpoint code with the original code.

- even if we get an "undefined instruction" exception, its handler can
  ask kprobes whether the address is being modified, and if it is, we can
  just return from the exception to retry the execution.

Thank you,

> 2. It's a good idea to enable VERBOSE in kprobes-test.h to get output
> for each test case instruction; this reassures you things are
> progressing, and if things explode, lets you know what instruction type
> triggered it.
On Wed, Sep 03, 2014 at 11:18:04AM +0100, Masami Hiramatsu wrote:
> (2014/09/02 22:49), Jon Medhurst (Tixy) wrote:
> > 1. On SMP systems it's very slow because of kprobe's use of stop_machine
> > for applying and removing probes; this forces the system to idle and
> > wait for the next scheduler tick for each probe change.
>
> Hmm, agreed. It seems to be an arm32 limitation on self-modifying code
> on SMP. I'm not sure how we can handle it, but I guess:
> - for some processors which have better coherent caches for SMP, we can
>   atomically replace the breakpoint code with the original code.

Except that it's not an architected breakpoint instruction, as I mentioned
before. It's also not really a property of the cache.

> - even if we get an "undefined instruction" exception, its handler can
>   ask kprobes whether the address is being modified, and if it is, we
>   can just return from the exception to retry the execution.

It's not as simple as that -- you could potentially see an interleaving of
the two instructions. The architecture is even broader than that:

  Concurrent modification and execution of instructions can lead to the
  resulting instruction performing any behavior that can be achieved by
  executing any sequence of instructions that can be executed from the
  same Exception level.

There are additional guarantees for some instructions (like the architected
BKPT instruction).

Will
On Wed, 2014-09-03 at 11:30 +0100, Will Deacon wrote:
> On Wed, Sep 03, 2014 at 11:18:04AM +0100, Masami Hiramatsu wrote:
> [...]
> It's not as simple as that -- you could potentially see an interleaving of
> the two instructions. The architecture is even broader than that:
>
>   Concurrent modification and execution of instructions can lead to the
>   resulting instruction performing any behavior that can be achieved by
>   executing any sequence of instructions that can be executed from the
>   same Exception level.
>
> There are additional guarantees for some instructions (like the architected
> BKPT instruction).

I should point out that the current implementation of kprobes doesn't use
stop_machine because it's trying to meet the above architecture
restrictions, and that arming kprobes (changing the probed instruction to
an undefined instruction) isn't usually done under stop_machine, so other
CPUs could be executing the original instruction as it's being modified.

So, should we be making patch_text unconditionally use stop_machine and
remove all direct use of __patch_text? (E.g. by jump labels.)
On Thu, Sep 04, 2014 at 11:40:35AM +0100, Jon Medhurst (Tixy) wrote:
> On Wed, 2014-09-03 at 11:30 +0100, Will Deacon wrote:
> [...]
> I should point out that the current implementation of kprobes doesn't use
> stop_machine because it's trying to meet the above architecture
> restrictions, and that arming kprobes (changing the probed instruction to
> an undefined instruction) isn't usually done under stop_machine, so other
> CPUs could be executing the original instruction as it's being modified.
>
> So, should we be making patch_text unconditionally use stop_machine and
> remove all direct use of __patch_text? (E.g. by jump labels.)

You could take a look at what we do for arm64 (see
aarch64_insn_hotpatch_safe) for inspiration.

Will
diff --git a/arch/arm/Kconfig b/arch/arm/Kconfig
index c49a775..7106fba 100644
--- a/arch/arm/Kconfig
+++ b/arch/arm/Kconfig
@@ -57,6 +57,7 @@ config ARM
 	select HAVE_MEMBLOCK
 	select HAVE_MOD_ARCH_SPECIFIC if ARM_UNWIND
 	select HAVE_OPROFILE if (HAVE_PERF_EVENTS)
+	select HAVE_OPTPROBES if (!THUMB2_KERNEL)
 	select HAVE_PERF_EVENTS
 	select HAVE_PERF_REGS
 	select HAVE_PERF_USER_STACK_DUMP
diff --git a/arch/arm/include/asm/kprobes.h b/arch/arm/include/asm/kprobes.h
index 49fa0df..88a0345 100644
--- a/arch/arm/include/asm/kprobes.h
+++ b/arch/arm/include/asm/kprobes.h
@@ -51,5 +51,33 @@ int kprobe_fault_handler(struct pt_regs *regs, unsigned int fsr);
 int kprobe_exceptions_notify(struct notifier_block *self,
 			     unsigned long val, void *data);
 
+/* optinsn template addresses */
+extern __visible kprobe_opcode_t optprobe_template_entry;
+extern __visible kprobe_opcode_t optprobe_template_val;
+extern __visible kprobe_opcode_t optprobe_template_call;
+extern __visible kprobe_opcode_t optprobe_template_end;
+
+#define MAX_OPTIMIZED_LENGTH	(4)
+#define MAX_OPTINSN_SIZE \
+	(((unsigned long)&optprobe_template_end - \
+	  (unsigned long)&optprobe_template_entry))
+#define RELATIVEJUMP_SIZE	(4)
+
+struct arch_optimized_insn {
+	/*
+	 * copy of the original instructions.
+	 * Different from x86, ARM kprobe_opcode_t is u32.
+	 */
+#define MAX_COPIED_INSN	((RELATIVEJUMP_SIZE) / sizeof(kprobe_opcode_t))
+	kprobe_opcode_t copied_insn[MAX_COPIED_INSN];
+	/* detour code buffer */
+	kprobe_opcode_t *insn;
+	/*
+	 * we always copy one instruction on arm32,
+	 * so the size is always 4 and there is no size field.
+	 */
+	/* indicate whether this optimization is prepared */
+	bool prepared;
+};
 
 #endif /* _ARM_KPROBES_H */
diff --git a/arch/arm/kernel/Makefile b/arch/arm/kernel/Makefile
index 38ddd9f..6a38ec1 100644
--- a/arch/arm/kernel/Makefile
+++ b/arch/arm/kernel/Makefile
@@ -52,11 +52,12 @@ obj-$(CONFIG_FUNCTION_GRAPH_TRACER)	+= ftrace.o insn.o
 obj-$(CONFIG_JUMP_LABEL)	+= jump_label.o insn.o patch.o
 obj-$(CONFIG_KEXEC)		+= machine_kexec.o relocate_kernel.o
 obj-$(CONFIG_UPROBES)		+= probes.o probes-arm.o uprobes.o uprobes-arm.o
-obj-$(CONFIG_KPROBES)		+= probes.o kprobes.o kprobes-common.o patch.o
+obj-$(CONFIG_KPROBES)		+= probes.o kprobes.o kprobes-common.o patch.o insn.o
 ifdef CONFIG_THUMB2_KERNEL
 obj-$(CONFIG_KPROBES)		+= kprobes-thumb.o probes-thumb.o
 else
 obj-$(CONFIG_KPROBES)		+= kprobes-arm.o probes-arm.o
+obj-$(CONFIG_OPTPROBES)		+= kprobes-opt.o
 endif
 obj-$(CONFIG_ARM_KPROBES_TEST)	+= test-kprobes.o
 test-kprobes-objs		:= kprobes-test.o
diff --git a/arch/arm/kernel/kprobes-opt.c b/arch/arm/kernel/kprobes-opt.c
new file mode 100644
index 0000000..8407858
--- /dev/null
+++ b/arch/arm/kernel/kprobes-opt.c
@@ -0,0 +1,259 @@
+/*
+ * Kernel Probes Jump Optimization (Optprobes)
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, write to the Free Software
+ * Foundation, Inc., 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA.
+ *
+ * Copyright (C) IBM Corporation, 2002, 2004
+ * Copyright (C) Hitachi Ltd., 2012
+ * Copyright (C) Huawei Inc., 2014
+ */
+
+#include <linux/kprobes.h>
+#include <linux/jump_label.h>
+#include <asm/kprobes.h>
+#include <asm/cacheflush.h>
+/* for arm_gen_branch */
+#include "insn.h"
+/* for patch_text */
+#include "patch.h"
+
+asm (
+	".global optprobe_template_entry\n"
+	"optprobe_template_entry:\n"
+	"	sub	sp, sp, #80\n"
+	"	stmia	sp, {r0 - r14}\n"
+	"	add	r3, sp, #80\n"
+	"	str	r3, [sp, #52]\n"
+	"	mrs	r4, cpsr\n"
+	"	str	r4, [sp, #64]\n"
+	"	mov	r1, sp\n"
+	"	ldr	r0, 1f\n"
+	"	ldr	r2, 2f\n"
+	"	blx	r2\n"
+	"	ldr	r1, [sp, #64]\n"
+	"	msr	cpsr_fs, r1\n"
+	"	ldmia	sp, {r0 - r15}\n"
+	".global optprobe_template_val\n"
+	"optprobe_template_val:\n"
+	"1:	.long 0\n"
+	".global optprobe_template_call\n"
+	"optprobe_template_call:\n"
+	"2:	.long 0\n"
+	".global optprobe_template_end\n"
+	"optprobe_template_end:\n");
+
+#define TMPL_VAL_IDX \
+	((long)&optprobe_template_val - (long)&optprobe_template_entry)
+#define TMPL_CALL_IDX \
+	((long)&optprobe_template_call - (long)&optprobe_template_entry)
+#define TMPL_END_IDX \
+	((long)&optprobe_template_end - (long)&optprobe_template_entry)
+
+/*
+ * ARM can always optimize an instruction when using ARM ISA.
+ */
+int arch_prepared_optinsn(struct arch_optimized_insn *optinsn)
+{
+	return optinsn->prepared;
+}
+
+/*
+ * In ARM ISA, kprobe opt always replaces one instruction (4 bytes
+ * aligned and 4 bytes long). It is impossible to encounter another
+ * kprobe in the address range. So always return 0.
+ */
+int arch_check_optimized_kprobe(struct optimized_kprobe *op)
+{
+	return 0;
+}
+
+/* Caller must ensure addr & 3 == 0 */
+static int can_optimize(struct optimized_kprobe *op)
+{
+	if (op->kp.ainsn.is_stack_operation)
+		return 0;
+	return 1;
+}
+
+/* Free optimized instruction slot */
+static void
+__arch_remove_optimized_kprobe(struct optimized_kprobe *op, int dirty)
+{
+	if (op->optinsn.insn) {
+		free_optinsn_slot(op->optinsn.insn, dirty);
+		op->optinsn.insn = NULL;
+	}
+}
+
+extern void kprobe_handler(struct pt_regs *regs);
+
+static void
+optimized_callback(struct optimized_kprobe *op, struct pt_regs *regs)
+{
+	unsigned long flags;
+	struct kprobe *p = &op->kp;
+	struct kprobe_ctlblk *kcb = get_kprobe_ctlblk();
+
+	/* Save skipped registers */
+	regs->ARM_pc = (unsigned long)op->kp.addr;
+	regs->ARM_ORIG_r0 = ~0UL;
+
+	local_irq_save(flags);
+
+	if (kprobe_running()) {
+		kprobes_inc_nmissed_count(&op->kp);
+	} else {
+		__this_cpu_write(current_kprobe, &op->kp);
+		kcb->kprobe_status = KPROBE_HIT_ACTIVE;
+		opt_pre_handler(&op->kp, regs);
+		__this_cpu_write(current_kprobe, NULL);
+	}
+
+	/* In each case, we must singlestep the replaced instruction. */
+	op->kp.ainsn.insn_singlestep(p->opcode, &p->ainsn, regs);
+
+	local_irq_restore(flags);
+}
+
+int arch_prepare_optimized_kprobe(struct optimized_kprobe *op)
+{
+	u8 *buf;
+	unsigned long rel_chk;
+	unsigned long val;
+
+	if (!can_optimize(op))
+		return -EILSEQ;
+
+	op->optinsn.insn = get_optinsn_slot();
+	if (!op->optinsn.insn)
+		return -ENOMEM;
+
+	/*
+	 * Verify if the address gap is in the 32MiB range, because this
+	 * uses a relative jump.
+	 *
+	 * kprobe opt uses a 'b' instruction to branch to optinsn.insn.
+	 * According to the ARM manual, the branch instruction is:
+	 *
+	 *   31  28 27           24 23             0
+	 *  +------+---+---+---+---+----------------+
+	 *  | cond | 1 | 0 | 1 | 0 |     imm24      |
+	 *  +------+---+---+---+---+----------------+
+	 *
+	 * imm24 is a signed 24-bit integer. The real branch offset is
+	 * computed by: imm32 = SignExtend(imm24:'00', 32);
+	 *
+	 * So the maximum forward branch should be:
+	 *   (0x007fffff << 2) = 0x01fffffc = 0x1fffffc
+	 * The maximum backward branch should be:
+	 *   (0xff800000 << 2) = 0xfe000000 = -0x2000000
+	 *
+	 * We can simply check (rel & 0xfe000003):
+	 *   if rel is positive, (rel & 0xfe000000) should be 0
+	 *   if rel is negative, (rel & 0xfe000000) should be 0xfe000000
+	 * the last '3' is used for alignment checking.
+	 */
+	rel_chk = (unsigned long)((long)op->optinsn.insn -
+			(long)op->kp.addr + 8) & 0xfe000003;
+
+	if ((rel_chk != 0) && (rel_chk != 0xfe000000)) {
+		__arch_remove_optimized_kprobe(op, 0);
+		return -ERANGE;
+	}
+
+	buf = (u8 *)op->optinsn.insn;
+
+	/* Copy arch-dep-instance from template */
+	memcpy(buf, &optprobe_template_entry, TMPL_END_IDX);
+
+	/* Set probe information */
+	val = (unsigned long)op;
+	memcpy(buf + TMPL_VAL_IDX, &val, sizeof(val));
+
+	/* Set probe function call */
+	val = (unsigned long)optimized_callback;
+	memcpy(buf + TMPL_CALL_IDX, &val, sizeof(val));
+
+	flush_icache_range((unsigned long)buf,
+			(unsigned long)buf + TMPL_END_IDX);
+
+	op->optinsn.prepared = true;
+	return 0;
+}
+
+void arch_optimize_kprobes(struct list_head *oplist)
+{
+	struct optimized_kprobe *op, *tmp;
+
+	list_for_each_entry_safe(op, tmp, oplist, list) {
+		unsigned long insn;
+		WARN_ON(kprobe_disabled(&op->kp));
+
+		/*
+		 * Backup instructions which will be replaced
+		 * by jump address
+		 */
+		memcpy(op->optinsn.copied_insn, op->kp.addr,
+				RELATIVEJUMP_SIZE);
+
+		insn = arm_gen_branch((unsigned long)op->kp.addr,
+				(unsigned long)op->optinsn.insn);
+		BUG_ON(insn == 0);
+
+		/*
+		 * Make it a conditional branch if the replaced insn
+		 * is conditional
+		 */
+		insn = (__mem_to_opcode_arm(
+			op->optinsn.copied_insn[0]) & 0xf0000000) |
+			(insn & 0x0fffffff);
+
+		patch_text(op->kp.addr, insn);
+
+		list_del_init(&op->list);
+	}
+}
+
+void arch_unoptimize_kprobe(struct optimized_kprobe *op)
+{
+	arch_arm_kprobe(&op->kp);
+}
+
+/*
+ * Recover original instructions and breakpoints from relative jumps.
+ * Caller must hold kprobe_mutex.
+ */
+void arch_unoptimize_kprobes(struct list_head *oplist,
+		struct list_head *done_list)
+{
+	struct optimized_kprobe *op, *tmp;

+	list_for_each_entry_safe(op, tmp, oplist, list) {
+		arch_unoptimize_kprobe(op);
+		list_move(&op->list, done_list);
+	}
+}
+
+int arch_within_optimized_kprobe(struct optimized_kprobe *op,
+		unsigned long addr)
+{
+	return ((unsigned long)op->kp.addr <= addr &&
+		(unsigned long)op->kp.addr + RELATIVEJUMP_SIZE > addr);
+}
+
+void arch_remove_optimized_kprobe(struct optimized_kprobe *op)
+{
+	__arch_remove_optimized_kprobe(op, 1);
+}
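The cond-field transplant in arch_optimize_kprobes() above (keeping the replaced instruction's condition bits on the newly generated branch) can be checked in isolation. This is a sketch for illustration; `make_conditional_branch` is a made-up name, not a kernel helper:

```c
#include <stdint.h>

/*
 * Combine the condition field (bits 31:28) of the instruction being
 * replaced with the body (bits 27:0) of a freshly generated branch,
 * mirroring the bit manipulation in arch_optimize_kprobes(). If the
 * probed instruction was conditional (e.g. beq), the detour branch
 * inherits the same condition, so the detour is only taken when the
 * original instruction would have executed.
 */
uint32_t make_conditional_branch(uint32_t copied_insn, uint32_t branch)
{
	return (copied_insn & 0xf0000000u) | (branch & 0x0fffffffu);
}
```

For an unconditional probed instruction (cond = 0xe, "always") the generated branch is unchanged; for a conditional one, only the top four bits of the branch are replaced.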