
[v5,3/3] kprobes: arm: enable OPTPROBES for ARM 32

Message ID 1409144552-12751-4-git-send-email-wangnan0@huawei.com (mailing list archive)
State New, archived

Commit Message

Wang Nan Aug. 27, 2014, 1:02 p.m. UTC
This patch introduces kprobeopt for 32-bit ARM.

Limitations:
 - Currently only kernels compiled with the ARM ISA are supported.

 - The offset between the probe point and the optinsn slot must not be
   larger than 32MiB. Masami Hiramatsu suggested replacing 2 words
   instead of 1 to extend the range, but that would make things more
   complex; a further patch can add that optimization.

Kprobe opt on ARM is relatively simpler than kprobe opt on x86 because
an ARM instruction is always 4 bytes long and 4-byte aligned. This patch
replaces the probed instruction with a 'b' instruction which branches to
trampoline code that then calls optimized_callback(). optimized_callback()
calls opt_pre_handler() to execute the kprobe handler, and also
emulates/simulates the replaced instruction.

When unregistering a kprobe, the deferred manner of the unoptimizer may
leave the branch instruction in place until the optimizer is called.
Unlike x86_64, which copies the probed insn to the area after
optprobe_template_end and re-executes it there, this patch calls the
single-step logic to emulate/simulate the insn directly. A further patch
can optimize this behavior.

v1 -> v2:

 - Improvement: if the replaced instruction is conditional, generate a
   conditional branch instruction for it;

 - Introduces RELATIVEJUMP_OPCODES because ARM's kprobe_opcode_t is 4
   bytes;

 - Removes size field in struct arch_optimized_insn;

 - Uses arm_gen_branch() to generate the branch instruction;

 - Removes all recovery logic: ARM doesn't use a tail buffer, so there is
   no need to recover replaced instructions as on x86;

 - Removes incorrect CONFIG_THUMB checking;

 - can_optimize() always returns true if the address is well aligned;

 - Improves optimized_callback: uses opt_pre_handler();

 - Bugfix: corrects the range checking code and improves comments;

 - Fixes the commit message.

v2 -> v3:

 - Rename RELATIVEJUMP_OPCODES to MAX_COPIED_INSNS;

 - Remove unneeded checking:
      arch_check_optimized_kprobe(), can_optimize();

 - Add missing flush_icache_range() in arch_prepare_optimized_kprobe();

 - Remove unneeded 'return;'.

v3 -> v4:

 - Use __mem_to_opcode_arm() to translate copied_insn to ensure it
   works in a big-endian kernel;

 - Replace the 'nop' placeholder in the trampoline code template with
   '.long 0' to avoid confusion: a reader may take 'nop' for an
   instruction, but it is in fact a data value.

v4 -> v5:

 - Don't optimize stack store operations.

 - Introduce a 'prepared' field in arch_optimized_insn to indicate
   whether it is prepared, similar to the 'size' field on x86. See
   v1 -> v2.

Signed-off-by: Wang Nan <wangnan0@huawei.com>
Acked-by: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com>
Cc: Russell King <linux@arm.linux.org.uk>
Cc: "David A. Long" <dave.long@linaro.org> 
Cc: Jon Medhurst <tixy@linaro.org>
Cc: Taras Kondratiuk <taras.kondratiuk@linaro.org>
Cc: Ben Dooks <ben.dooks@codethink.co.uk>
Cc: Ananth N Mavinakayanahalli <ananth@in.ibm.com>
Cc: Anil S Keshavamurthy <anil.s.keshavamurthy@intel.com>
Cc: "David S. Miller" <davem@davemloft.net>
Cc: Will Deacon <will.deacon@arm.com>

---
 arch/arm/Kconfig               |   1 +
 arch/arm/include/asm/kprobes.h |  28 +++++
 arch/arm/kernel/Makefile       |   3 +-
 arch/arm/kernel/kprobes-opt.c  | 259 +++++++++++++++++++++++++++++++++++++++++
 4 files changed, 290 insertions(+), 1 deletion(-)
 create mode 100644 arch/arm/kernel/kprobes-opt.c

Comments

Masami Hiramatsu Aug. 28, 2014, 10:20 a.m. UTC | #1
(2014/08/27 22:02), Wang Nan wrote:
> +/*
> + * ARM can always optimize an instruction when using ARM ISA.
> + */

Hmm, this comment doesn't look correct anymore :)

> +int arch_prepared_optinsn(struct arch_optimized_insn *optinsn)
> +{
> +	return optinsn->prepared;
> +}

BTW, why don't you check optinsn->insn != NULL ?
If it is not prepared for optimizing, optinsn->insn will always be NULL.

[...]
> +int arch_prepare_optimized_kprobe(struct optimized_kprobe *op)
> +{
> +	u8 *buf;
> +	unsigned long rel_chk;
> +	unsigned long val;
> +
> +	if (!can_optimize(op))
> +		return -EILSEQ;
> +
> +	op->optinsn.insn = get_optinsn_slot();
> +	if (!op->optinsn.insn)
> +		return -ENOMEM;
> +
> +	/*
> +	 * Verify if the address gap is in 32MiB range, because this uses
> +	 * a relative jump.
> +	 *
> +	 * kprobe opt use a 'b' instruction to branch to optinsn.insn.
> +	 * According to ARM manual, branch instruction is:
> +	 *
> +	 *   31  28 27           24 23             0
> +	 *  +------+---+---+---+---+----------------+
> +	 *  | cond | 1 | 0 | 1 | 0 |      imm24     |
> +	 *  +------+---+---+---+---+----------------+
> +	 *
> +	 * imm24 is a signed 24-bit integer. The real branch offset is computed
> +	 * by: imm32 = SignExtend(imm24:'00', 32);
> +	 *
> +	 * So the maximum forward branch should be:
> +	 *   (0x007fffff << 2) = 0x01fffffc =  0x1fffffc
> +	 * The maximum backward branch should be:
> +	 *   (0xff800000 << 2) = 0xfe000000 = -0x2000000
> +	 *
> +	 * We can simply check (rel & 0xfe000003):
> +	 *  if rel is positive, (rel & 0xfe000000) should be 0
> +	 *  if rel is negative, (rel & 0xfe000000) should be 0xfe000000
> +	 *  the last '3' is used for alignment checking.
> +	 */
> +	rel_chk = (unsigned long)((long)op->optinsn.insn -
> +			(long)op->kp.addr + 8) & 0xfe000003;
> +
> +	if ((rel_chk != 0) && (rel_chk != 0xfe000000)) {
> +		__arch_remove_optimized_kprobe(op, 0);
> +		return -ERANGE;
> +	}
> +
> +	buf = (u8 *)op->optinsn.insn;
> +
> +	/* Copy arch-dep-instance from template */
> +	memcpy(buf, &optprobe_template_entry, TMPL_END_IDX);
> +
> +	/* Set probe information */
> +	val = (unsigned long)op;
> +	memcpy(buf + TMPL_VAL_IDX, &val, sizeof(val));
> +
> +	/* Set probe function call */
> +	val = (unsigned long)optimized_callback;
> +	memcpy(buf + TMPL_CALL_IDX, &val, sizeof(val));
> +
> +	flush_icache_range((unsigned long)buf,
> +			   (unsigned long)buf + TMPL_END_IDX);
> +
> +	op->optinsn.prepared = true;
> +	return 0;
> +}
> +

Thank you,
Jon Medhurst (Tixy) Sept. 2, 2014, 1:49 p.m. UTC | #2
I gave the patches a quick test and in doing so found a bug which stops
any probes actually being optimised, and the same bug should affect X86,
see comment below...

On Wed, 2014-08-27 at 21:02 +0800, Wang Nan wrote:
[...]
> +int arch_prepare_optimized_kprobe(struct optimized_kprobe *op)
> +{
> +	u8 *buf;
> +	unsigned long rel_chk;
> +	unsigned long val;
> +
> +	if (!can_optimize(op))
> +		return -EILSEQ;
> +
> +	op->optinsn.insn = get_optinsn_slot();
> +	if (!op->optinsn.insn)
> +		return -ENOMEM;
> +
> +	/*
> +	 * Verify if the address gap is in 32MiB range, because this uses
> +	 * a relative jump.
> +	 *
> +	 * kprobe opt use a 'b' instruction to branch to optinsn.insn.
> +	 * According to ARM manual, branch instruction is:
> +	 *
> +	 *   31  28 27           24 23             0
> +	 *  +------+---+---+---+---+----------------+
> +	 *  | cond | 1 | 0 | 1 | 0 |      imm24     |
> +	 *  +------+---+---+---+---+----------------+
> +	 *
> +	 * imm24 is a signed 24-bit integer. The real branch offset is computed
> +	 * by: imm32 = SignExtend(imm24:'00', 32);
> +	 *
> +	 * So the maximum forward branch should be:
> +	 *   (0x007fffff << 2) = 0x01fffffc =  0x1fffffc
> +	 * The maximum backward branch should be:
> +	 *   (0xff800000 << 2) = 0xfe000000 = -0x2000000
> +	 *
> +	 * We can simply check (rel & 0xfe000003):
> +	 *  if rel is positive, (rel & 0xfe000000) should be 0
> +	 *  if rel is negative, (rel & 0xfe000000) should be 0xfe000000
> +	 *  the last '3' is used for alignment checking.
> +	 */
> +	rel_chk = (unsigned long)((long)op->optinsn.insn -
> +			(long)op->kp.addr + 8) & 0xfe000003;

This check always fails because op->kp.addr is zero. Debugging this I
found that this function is called from alloc_aggr_kprobe() and that
copies the real kprobe into op->kp using copy_kprobe(), which doesn't
actually copy the 'addr' value...

        static inline void copy_kprobe(struct kprobe *ap, struct kprobe *p)
        {
        	memcpy(&p->opcode, &ap->opcode, sizeof(kprobe_opcode_t));
        	memcpy(&p->ainsn, &ap->ainsn, sizeof(struct arch_specific_insn));
        }

Thing is, the new ARM code is a close copy of the existing X86 version
so that would also suffer the same problem of kp.addr always being zero.
So either I've misunderstood something or this is a fundamental bug no
one has noticed before.

Throwing in 'p->addr = ap->addr' into the copy_kprobe function fixed the
behaviour of arch_prepare_optimized_kprobe.

I was testing this by running the kprobes tests
(CONFIG_ARM_KPROBES_TEST=y) and putting a few printk's in strategic
places in kprobes-opt.c to check to see what code paths got executed,
which is how I discovered the problem.

Two things to note when running kprobes tests...

1. On SMP systems it's very slow because of kprobe's use of stop_machine
for applying and removing probes; this forces the system to idle and
wait for the next scheduler tick for each probe change.

2. It's a good idea to enable VERBOSE in kprobes-test.h to get output
for each test case instruction; this reassures you that things are
progressing and, if things explode, lets you know what instruction type
triggered it.
Masami Hiramatsu Sept. 3, 2014, 10:18 a.m. UTC | #3
(2014/09/02 22:49), Jon Medhurst (Tixy) wrote:
> I gave the patches a quick test and in doing so found a bug which stops
> any probes actually being optimised, and the same bug should affect X86,
> see comment below...
> 
> On Wed, 2014-08-27 at 21:02 +0800, Wang Nan wrote:
> [...]
>> +int arch_prepare_optimized_kprobe(struct optimized_kprobe *op)
>> +{
>> +	u8 *buf;
>> +	unsigned long rel_chk;
>> +	unsigned long val;
>> +
>> +	if (!can_optimize(op))
>> +		return -EILSEQ;
>> +
>> +	op->optinsn.insn = get_optinsn_slot();
>> +	if (!op->optinsn.insn)
>> +		return -ENOMEM;
>> +
>> +	/*
>> +	 * Verify if the address gap is in 32MiB range, because this uses
>> +	 * a relative jump.
>> +	 *
>> +	 * kprobe opt use a 'b' instruction to branch to optinsn.insn.
>> +	 * According to ARM manual, branch instruction is:
>> +	 *
>> +	 *   31  28 27           24 23             0
>> +	 *  +------+---+---+---+---+----------------+
>> +	 *  | cond | 1 | 0 | 1 | 0 |      imm24     |
>> +	 *  +------+---+---+---+---+----------------+
>> +	 *
>> +	 * imm24 is a signed 24-bit integer. The real branch offset is computed
>> +	 * by: imm32 = SignExtend(imm24:'00', 32);
>> +	 *
>> +	 * So the maximum forward branch should be:
>> +	 *   (0x007fffff << 2) = 0x01fffffc =  0x1fffffc
>> +	 * The maximum backward branch should be:
>> +	 *   (0xff800000 << 2) = 0xfe000000 = -0x2000000
>> +	 *
>> +	 * We can simply check (rel & 0xfe000003):
>> +	 *  if rel is positive, (rel & 0xfe000000) should be 0
>> +	 *  if rel is negative, (rel & 0xfe000000) should be 0xfe000000
>> +	 *  the last '3' is used for alignment checking.
>> +	 */
>> +	rel_chk = (unsigned long)((long)op->optinsn.insn -
>> +			(long)op->kp.addr + 8) & 0xfe000003;
> 
> This check always fails because op->kp.addr is zero. Debugging this I
> found that this function is called from alloc_aggr_kprobe() and that
> copies the real kprobe into op->kp using copy_kprobe(), which doesn't
> actually copy the 'addr' value...

Right, I've already pointed that out :)

https://lkml.org/lkml/2014/8/28/114

> 
>         static inline void copy_kprobe(struct kprobe *ap, struct kprobe *p)
>         {
>         	memcpy(&p->opcode, &ap->opcode, sizeof(kprobe_opcode_t));
>         	memcpy(&p->ainsn, &ap->ainsn, sizeof(struct arch_specific_insn));
>         }
> 
> Thing is, the new ARM code is a close copy of the existing X86 version
> so that would also suffer the same problem of kp.addr always being zero.
> So either I've misunderstood something or this is a fundamental bug no
> one has noticed before.
> 
> Throwing in 'p->addr = ap->addr' into the copy_kprobe function fixed the
> behaviour of arch_prepare_optimized_kprobe.
> 
> I was testing this by running the kprobes tests
> (CONFIG_ARM_KPROBES_TEST=y) and putting a few printk's in strategic
> places in kprobes-opt.c to check to see what code paths got executed,
> which is how I discovered the problem.
> 
> Two things to note when running kprobes tests...
> 
> 1. On SMP systems it's very slow because of kprobe's use of stop_machine
> for applying and removing probes, this forces the system to idle and
> wait for the next scheduler tick for each probe change.

Hmm, agreed. It seems to be an arm32 limitation on self-modifying code
on SMP. I'm not sure how we can handle it, but I guess:
 - for some processors which have a better coherent cache for SMP, we can
   atomically replace the breakpoint code with the original code.
 - even if we get an "undefined instruction" exception, its handler can
   ask kprobes whether the address is being modified. If it is, we can
   just return from the exception to retry the execution.

Thank you,

> 2. It's a good idea to enable VERBOSE in kprobes-test.h to get output
> for each test case instruction, this reassures you things are
> progressing and if things explode lets you know what instruction type
> triggered it.
>
Will Deacon Sept. 3, 2014, 10:30 a.m. UTC | #4
On Wed, Sep 03, 2014 at 11:18:04AM +0100, Masami Hiramatsu wrote:
> (2014/09/02 22:49), Jon Medhurst (Tixy) wrote:
> > 1. On SMP systems it's very slow because of kprobe's use of stop_machine
> > for applying and removing probes, this forces the system to idle and
> > wait for the next scheduler tick for each probe change.
> 
> Hmm, agreed. It seems that arm32 limitation of self-modifying code on SMP.
> I'm not sure how we can handle it, but I guess;
>  - for some processors which have better coherent cache for SMP, we can
>    atomically replace the breakpoint code with original code.

Except that it's not an architected breakpoint instruction, as I mentioned
before. It's also not really a property of the cache.

>  - Even if we get an "undefined instruction" exception, its handler can
>    ask kprobes if the address is under modifying or not. And if it is,
>    we can just return from the exception to retry the execution.

It's not as simple as that -- you could potentially see an interleaving of
the two instructions. The architecture is even broader than that:

 Concurrent modification and execution of instructions can lead to the
 resulting instruction performing any behavior that can be achieved by
 executing any sequence of instructions that can be executed from the
 same Exception level,

There are additional guarantees for some instructions (like the architected
BKPT instruction).

Will
Jon Medhurst (Tixy) Sept. 4, 2014, 10:40 a.m. UTC | #5
On Wed, 2014-09-03 at 11:30 +0100, Will Deacon wrote:
> On Wed, Sep 03, 2014 at 11:18:04AM +0100, Masami Hiramatsu wrote:
> > (2014/09/02 22:49), Jon Medhurst (Tixy) wrote:
> > > 1. On SMP systems it's very slow because of kprobe's use of stop_machine
> > > for applying and removing probes, this forces the system to idle and
> > > wait for the next scheduler tick for each probe change.
> > 
> > Hmm, agreed. It seems that arm32 limitation of self-modifying code on SMP.
> > I'm not sure how we can handle it, but I guess;
> >  - for some processors which have better coherent cache for SMP, we can
> >    atomically replace the breakpoint code with original code.
> 
> Except that it's not an architected breakpoint instruction, as I mentioned
> before. It's also not really a property of the cache.
> 
> >  - Even if we get an "undefined instruction" exception, its handler can
> >    ask kprobes if the address is under modifying or not. And if it is,
> >    we can just return from the exception to retry the execution.
> 
> It's not as simple as that -- you could potentially see an interleaving of
> the two instructions. The architecture is even broader than that:
> 
>  Concurrent modification and execution of instructions can lead to the
>  resulting instruction performing any behavior that can be achieved by
>  executing any sequence of instructions that can be executed from the
>  same Exception level,
> 
> There are additional guarantees for some instructions (like the architected
> BKPT instruction).

I should point out that the current implementation of kprobes doesn't
use stop_machine because it's trying to meet the above architecture
restrictions, and that arming kprobes (changing probed instruction to an
undefined instruction) isn't usually done under stop_machine, so other
CPUs could be executing the original instruction as it's being modified.

So, should we be making patch_text unconditionally use stop_machine and
remove all direct use of __patch_text? (E.g. by jump labels.)
Will Deacon Sept. 4, 2014, 10:52 a.m. UTC | #6
On Thu, Sep 04, 2014 at 11:40:35AM +0100, Jon Medhurst (Tixy) wrote:
> On Wed, 2014-09-03 at 11:30 +0100, Will Deacon wrote:
> > On Wed, Sep 03, 2014 at 11:18:04AM +0100, Masami Hiramatsu wrote:
> > > (2014/09/02 22:49), Jon Medhurst (Tixy) wrote:
> > > > 1. On SMP systems it's very slow because of kprobe's use of stop_machine
> > > > for applying and removing probes, this forces the system to idle and
> > > > wait for the next scheduler tick for each probe change.
> > > 
> > > Hmm, agreed. It seems that arm32 limitation of self-modifying code on SMP.
> > > I'm not sure how we can handle it, but I guess;
> > >  - for some processors which have better coherent cache for SMP, we can
> > >    atomically replace the breakpoint code with original code.
> > 
> > Except that it's not an architected breakpoint instruction, as I mentioned
> > before. It's also not really a property of the cache.
> > 
> > >  - Even if we get an "undefined instruction" exception, its handler can
> > >    ask kprobes if the address is under modifying or not. And if it is,
> > >    we can just return from the exception to retry the execution.
> > 
> > It's not as simple as that -- you could potentially see an interleaving of
> > the two instructions. The architecture is even broader than that:
> > 
> >  Concurrent modification and execution of instructions can lead to the
> >  resulting instruction performing any behavior that can be achieved by
> >  executing any sequence of instructions that can be executed from the
> >  same Exception level,
> > 
> > There are additional guarantees for some instructions (like the architected
> > BKPT instruction).
> 
> I should point out that the current implementation of kprobes doesn't
> use stop_machine because it's trying to meet the above architecture
> restrictions, and that arming kprobes (changing probed instruction to an
> undefined instruction) isn't usually done under stop_machine, so other
> CPUs could be executing the original instruction as it's being modified.
> 
> So, should we be making patch_text unconditionally use stop machine and
> remove all direct use of __patch_text? (E.g. by jump labels.)

You could take a look at what we do for arm64 (see aarch64_insn_hotpatch_safe)
for inspiration.

Will

Patch

diff --git a/arch/arm/Kconfig b/arch/arm/Kconfig
index c49a775..7106fba 100644
--- a/arch/arm/Kconfig
+++ b/arch/arm/Kconfig
@@ -57,6 +57,7 @@  config ARM
 	select HAVE_MEMBLOCK
 	select HAVE_MOD_ARCH_SPECIFIC if ARM_UNWIND
 	select HAVE_OPROFILE if (HAVE_PERF_EVENTS)
+	select HAVE_OPTPROBES if (!THUMB2_KERNEL)
 	select HAVE_PERF_EVENTS
 	select HAVE_PERF_REGS
 	select HAVE_PERF_USER_STACK_DUMP
diff --git a/arch/arm/include/asm/kprobes.h b/arch/arm/include/asm/kprobes.h
index 49fa0df..88a0345 100644
--- a/arch/arm/include/asm/kprobes.h
+++ b/arch/arm/include/asm/kprobes.h
@@ -51,5 +51,33 @@  int kprobe_fault_handler(struct pt_regs *regs, unsigned int fsr);
 int kprobe_exceptions_notify(struct notifier_block *self,
 			     unsigned long val, void *data);
 
+/* optinsn template addresses */
+extern __visible kprobe_opcode_t optprobe_template_entry;
+extern __visible kprobe_opcode_t optprobe_template_val;
+extern __visible kprobe_opcode_t optprobe_template_call;
+extern __visible kprobe_opcode_t optprobe_template_end;
+
+#define MAX_OPTIMIZED_LENGTH	(4)
+#define MAX_OPTINSN_SIZE				\
+	(((unsigned long)&optprobe_template_end -	\
+	  (unsigned long)&optprobe_template_entry))
+#define RELATIVEJUMP_SIZE	(4)
+
+struct arch_optimized_insn {
+	/*
+	 * copy of the original instructions.
+	 * Different from x86, ARM kprobe_opcode_t is u32.
+	 */
+#define MAX_COPIED_INSN	((RELATIVEJUMP_SIZE) / sizeof(kprobe_opcode_t))
+	kprobe_opcode_t copied_insn[MAX_COPIED_INSN];
+	/* detour code buffer */
+	kprobe_opcode_t *insn;
+	/*
+	 * we always copy one instruction on arm32, so its size is
+	 * always 4 and no size field is needed.
+	 */
+	/* indicate whether this optimization is prepared */
+	bool prepared;
+};
 
 #endif /* _ARM_KPROBES_H */
diff --git a/arch/arm/kernel/Makefile b/arch/arm/kernel/Makefile
index 38ddd9f..6a38ec1 100644
--- a/arch/arm/kernel/Makefile
+++ b/arch/arm/kernel/Makefile
@@ -52,11 +52,12 @@  obj-$(CONFIG_FUNCTION_GRAPH_TRACER)	+= ftrace.o insn.o
 obj-$(CONFIG_JUMP_LABEL)	+= jump_label.o insn.o patch.o
 obj-$(CONFIG_KEXEC)		+= machine_kexec.o relocate_kernel.o
 obj-$(CONFIG_UPROBES)		+= probes.o probes-arm.o uprobes.o uprobes-arm.o
-obj-$(CONFIG_KPROBES)		+= probes.o kprobes.o kprobes-common.o patch.o
+obj-$(CONFIG_KPROBES)		+= probes.o kprobes.o kprobes-common.o patch.o insn.o
 ifdef CONFIG_THUMB2_KERNEL
 obj-$(CONFIG_KPROBES)		+= kprobes-thumb.o probes-thumb.o
 else
 obj-$(CONFIG_KPROBES)		+= kprobes-arm.o probes-arm.o
+obj-$(CONFIG_OPTPROBES)		+= kprobes-opt.o
 endif
 obj-$(CONFIG_ARM_KPROBES_TEST)	+= test-kprobes.o
 test-kprobes-objs		:= kprobes-test.o
diff --git a/arch/arm/kernel/kprobes-opt.c b/arch/arm/kernel/kprobes-opt.c
new file mode 100644
index 0000000..8407858
--- /dev/null
+++ b/arch/arm/kernel/kprobes-opt.c
@@ -0,0 +1,259 @@ 
+/*
+ *  Kernel Probes Jump Optimization (Optprobes)
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, write to the Free Software
+ * Foundation, Inc., 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA.
+ *
+ * Copyright (C) IBM Corporation, 2002, 2004
+ * Copyright (C) Hitachi Ltd., 2012
+ * Copyright (C) Huawei Inc., 2014
+ */
+
+#include <linux/kprobes.h>
+#include <linux/jump_label.h>
+#include <asm/kprobes.h>
+#include <asm/cacheflush.h>
+/* for arm_gen_branch */
+#include "insn.h"
+/* for patch_text */
+#include "patch.h"
+
+asm (
+			".global optprobe_template_entry\n"
+			"optprobe_template_entry:\n"
+			"	sub	sp, sp, #80\n"
+			"	stmia	sp, {r0 - r14} \n"
+			"	add	r3, sp, #80\n"
+			"	str	r3, [sp, #52]\n"
+			"	mrs	r4, cpsr\n"
+			"	str	r4, [sp, #64]\n"
+			"	mov	r1, sp\n"
+			"	ldr	r0, 1f\n"
+			"	ldr	r2, 2f\n"
+			"	blx	r2\n"
+			"	ldr	r1, [sp, #64]\n"
+			"	msr	cpsr_fs, r1\n"
+			"	ldmia	sp, {r0 - r15}\n"
+			".global optprobe_template_val\n"
+			"optprobe_template_val:\n"
+			"1:	.long 0\n"
+			".global optprobe_template_call\n"
+			"optprobe_template_call:\n"
+			"2:	.long 0\n"
+			".global optprobe_template_end\n"
+			"optprobe_template_end:\n");
+
+#define TMPL_VAL_IDX \
+	((long)&optprobe_template_val - (long)&optprobe_template_entry)
+#define TMPL_CALL_IDX \
+	((long)&optprobe_template_call - (long)&optprobe_template_entry)
+#define TMPL_END_IDX \
+	((long)&optprobe_template_end - (long)&optprobe_template_entry)
+
+/*
+ * ARM can always optimize an instruction when using ARM ISA.
+ */
+int arch_prepared_optinsn(struct arch_optimized_insn *optinsn)
+{
+	return optinsn->prepared;
+}
+
+/*
+ * In the ARM ISA, kprobe opt always replaces one instruction (4 bytes
+ * long and 4-byte aligned), so it is impossible to encounter another
+ * kprobe in the address range. Always return 0.
+ */
+int arch_check_optimized_kprobe(struct optimized_kprobe *op)
+{
+	return 0;
+}
+
+/* Caller must ensure addr & 3 == 0 */
+static int can_optimize(struct optimized_kprobe *op)
+{
+	if (op->kp.ainsn.is_stack_operation)
+		return 0;
+	return 1;
+}
+
+/* Free optimized instruction slot */
+static void
+__arch_remove_optimized_kprobe(struct optimized_kprobe *op, int dirty)
+{
+	if (op->optinsn.insn) {
+		free_optinsn_slot(op->optinsn.insn, dirty);
+		op->optinsn.insn = NULL;
+	}
+}
+
+extern void kprobe_handler(struct pt_regs *regs);
+
+static void
+optimized_callback(struct optimized_kprobe *op, struct pt_regs *regs)
+{
+	unsigned long flags;
+	struct kprobe *p = &op->kp;
+	struct kprobe_ctlblk *kcb = get_kprobe_ctlblk();
+
+	/* Save skipped registers */
+	regs->ARM_pc = (unsigned long)op->kp.addr;
+	regs->ARM_ORIG_r0 = ~0UL;
+
+	local_irq_save(flags);
+
+	if (kprobe_running()) {
+		kprobes_inc_nmissed_count(&op->kp);
+	} else {
+		__this_cpu_write(current_kprobe, &op->kp);
+		kcb->kprobe_status = KPROBE_HIT_ACTIVE;
+		opt_pre_handler(&op->kp, regs);
+		__this_cpu_write(current_kprobe, NULL);
+	}
+
+	/* In each case, we must singlestep the replaced instruction. */
+	op->kp.ainsn.insn_singlestep(p->opcode, &p->ainsn, regs);
+
+	local_irq_restore(flags);
+}
+
+int arch_prepare_optimized_kprobe(struct optimized_kprobe *op)
+{
+	u8 *buf;
+	unsigned long rel_chk;
+	unsigned long val;
+
+	if (!can_optimize(op))
+		return -EILSEQ;
+
+	op->optinsn.insn = get_optinsn_slot();
+	if (!op->optinsn.insn)
+		return -ENOMEM;
+
+	/*
+	 * Verify if the address gap is in 32MiB range, because this uses
+	 * a relative jump.
+	 *
+	 * kprobe opt use a 'b' instruction to branch to optinsn.insn.
+	 * According to ARM manual, branch instruction is:
+	 *
+	 *   31  28 27           24 23             0
+	 *  +------+---+---+---+---+----------------+
+	 *  | cond | 1 | 0 | 1 | 0 |      imm24     |
+	 *  +------+---+---+---+---+----------------+
+	 *
+	 * imm24 is a signed 24-bit integer. The real branch offset is computed
+	 * by: imm32 = SignExtend(imm24:'00', 32);
+	 *
+	 * So the maximum forward branch should be:
+	 *   (0x007fffff << 2) = 0x01fffffc =  0x1fffffc
+	 * The maximum backward branch should be:
+	 *   (0xff800000 << 2) = 0xfe000000 = -0x2000000
+	 *
+	 * We can simply check (rel & 0xfe000003):
+	 *  if rel is positive, (rel & 0xfe000000) should be 0
+	 *  if rel is negative, (rel & 0xfe000000) should be 0xfe000000
+	 *  the last '3' is used for alignment checking.
+	 */
+	rel_chk = (unsigned long)((long)op->optinsn.insn -
+			(long)op->kp.addr + 8) & 0xfe000003;
+
+	if ((rel_chk != 0) && (rel_chk != 0xfe000000)) {
+		__arch_remove_optimized_kprobe(op, 0);
+		return -ERANGE;
+	}
+
+	buf = (u8 *)op->optinsn.insn;
+
+	/* Copy arch-dep-instance from template */
+	memcpy(buf, &optprobe_template_entry, TMPL_END_IDX);
+
+	/* Set probe information */
+	val = (unsigned long)op;
+	memcpy(buf + TMPL_VAL_IDX, &val, sizeof(val));
+
+	/* Set probe function call */
+	val = (unsigned long)optimized_callback;
+	memcpy(buf + TMPL_CALL_IDX, &val, sizeof(val));
+
+	flush_icache_range((unsigned long)buf,
+			   (unsigned long)buf + TMPL_END_IDX);
+
+	op->optinsn.prepared = true;
+	return 0;
+}
+
+void arch_optimize_kprobes(struct list_head *oplist)
+{
+	struct optimized_kprobe *op, *tmp;
+
+	list_for_each_entry_safe(op, tmp, oplist, list) {
+		unsigned long insn;
+		WARN_ON(kprobe_disabled(&op->kp));
+
+		/*
+		 * Backup instructions which will be replaced
+		 * by jump address
+		 */
+		memcpy(op->optinsn.copied_insn, op->kp.addr,
+				RELATIVEJUMP_SIZE);
+
+		insn = arm_gen_branch((unsigned long)op->kp.addr,
+				(unsigned long)op->optinsn.insn);
+		BUG_ON(insn == 0);
+
+		/*
+		 * Make it a conditional branch if the replaced insn
+		 * is conditional
+		 */
+		insn = (__mem_to_opcode_arm(
+			  op->optinsn.copied_insn[0]) & 0xf0000000) |
+			(insn & 0x0fffffff);
+
+		patch_text(op->kp.addr, insn);
+
+		list_del_init(&op->list);
+	}
+}
+
+void arch_unoptimize_kprobe(struct optimized_kprobe *op)
+{
+	arch_arm_kprobe(&op->kp);
+}
+
+/*
+ * Recover original instructions and breakpoints from relative jumps.
+ * Caller must hold kprobe_mutex.
+ */
+void arch_unoptimize_kprobes(struct list_head *oplist,
+ 			    struct list_head *done_list)
+{
+	struct optimized_kprobe *op, *tmp;
+
+	list_for_each_entry_safe(op, tmp, oplist, list) {
+		arch_unoptimize_kprobe(op);
+		list_move(&op->list, done_list);
+	}
+}
+
+int arch_within_optimized_kprobe(struct optimized_kprobe *op,
+ 				unsigned long addr)
+{
+	return ((unsigned long)op->kp.addr <= addr &&
+		(unsigned long)op->kp.addr + RELATIVEJUMP_SIZE > addr);
+}
+
+void arch_remove_optimized_kprobe(struct optimized_kprobe *op)
+{
+	__arch_remove_optimized_kprobe(op, 1);
+}