| Message ID | 49D63327.8020009@redhat.com (mailing list archive) |
|---|---|
| State | Not Applicable |
On Fri, 2009-04-03 at 12:02 -0400, Masami Hiramatsu wrote:
> Ensure safeness of inserting kprobes by checking whether the specified
> address is at the first byte of a instruction. This is done by decoding
> probed function from its head to the probe point.
>
> changes from v4:
>  - change a comment according to Ananth's suggestion.
>
> Signed-off-by: Masami Hiramatsu <mhiramat@redhat.com>
> Cc: Ananth N Mavinakayanahalli <ananth@in.ibm.com>
> Cc: Jim Keniston <jkenisto@us.ibm.com>
> Cc: Ingo Molnar <mingo@elte.hu>
> ---
>
>  arch/x86/kernel/kprobes.c |   51 +++++++++++++++++++++++++++++++++++++++++++++
>  1 files changed, 51 insertions(+), 0 deletions(-)
>
>
> diff --git a/arch/x86/kernel/kprobes.c b/arch/x86/kernel/kprobes.c
...
>
> +/* Recover original instruction */

/* Recover the probed instruction at addr for further analysis. */
See below.

> +static int recover_probed_instruction(kprobe_opcode_t *buf, unsigned long addr)
> +{
> +	struct kprobe *kp;
> +	kp = get_kprobe((void *)addr);
> +	if (!kp)
> +		return -EINVAL;
> +
> +	/* Don't use p->ainsn.insn; which will be modified by fix_riprel */

fix_riprel doesn't affect the instruction's length, which is what
concerns this patch.  But we want this function to be useful for
unforeseen uses as well, so I like the code you have.  Just consider the
suggested comment changes.

/*
 * Don't use p->ainsn.insn, which could be modified -- e.g.,
 * by fix_riprel().
 */

> +	memcpy(buf, kp->addr, MAX_INSN_SIZE * sizeof(kprobe_opcode_t));
> +	buf[0] = kp->opcode;
> +	return 0;
> +}

Jim Keniston
Jim Keniston wrote:
> On Fri, 2009-04-03 at 12:02 -0400, Masami Hiramatsu wrote:
>> Ensure safeness of inserting kprobes by checking whether the specified
>> address is at the first byte of a instruction. This is done by decoding
>> probed function from its head to the probe point.
>>
>> changes from v4:
>>  - change a comment according to Ananth's suggestion.
>>
>> Signed-off-by: Masami Hiramatsu <mhiramat@redhat.com>
>> Cc: Ananth N Mavinakayanahalli <ananth@in.ibm.com>
>> Cc: Jim Keniston <jkenisto@us.ibm.com>
>> Cc: Ingo Molnar <mingo@elte.hu>
>> ---
>>
>>  arch/x86/kernel/kprobes.c |   51 +++++++++++++++++++++++++++++++++++++++++++++
>>  1 files changed, 51 insertions(+), 0 deletions(-)
>>
>>
>> diff --git a/arch/x86/kernel/kprobes.c b/arch/x86/kernel/kprobes.c
> ...
>>
>> +/* Recover original instruction */
>
> /* Recover the probed instruction at addr for further analysis. */
> See below.

Sure.

>
>> +static int recover_probed_instruction(kprobe_opcode_t *buf, unsigned long addr)
>> +{
>> +	struct kprobe *kp;
>> +	kp = get_kprobe((void *)addr);
>> +	if (!kp)
>> +		return -EINVAL;
>> +
>> +	/* Don't use p->ainsn.insn; which will be modified by fix_riprel */
>
> fix_riprel doesn't affect the instruction's length, which is what
> concerns this patch.  But we want this function to be useful for
> unforeseen uses as well, so I like the code you have.  Just consider the
> suggested comment changes.
>
> /*
>  * Don't use p->ainsn.insn, which could be modified -- e.g.,
>  * by fix_riprel().
>  */

Thanks, I'll update comments then!

>
>> +	memcpy(buf, kp->addr, MAX_INSN_SIZE * sizeof(kprobe_opcode_t));
>> +	buf[0] = kp->opcode;
>> +	return 0;
>> +}
>
> Jim Keniston
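To make the point under review concrete, here is a minimal standalone sketch of what "recovering" the probed instruction means. It is not the kernel implementation: `recover_insn_sketch`, `probed_addr`, `saved_opcode`, and `MAX_INSN_BYTES` are hypothetical stand-ins for the patch's recover_probed_instruction(), kp->addr, kp->opcode, and MAX_INSN_SIZE.

```c
#include <string.h>

#define MAX_INSN_BYTES 16	/* stand-in for the x86 MAX_INSN_SIZE limit */

/*
 * Once a kprobe is armed, the first byte at the probed address has been
 * replaced by the int3 breakpoint (0xcc), so decoding the live text would
 * mis-parse the instruction.  "Recovering" means copying the in-memory
 * bytes into a scratch buffer and restoring the saved original first byte.
 */
static void recover_insn_sketch(unsigned char *buf,
				const unsigned char *probed_addr,
				unsigned char saved_opcode)
{
	memcpy(buf, probed_addr, MAX_INSN_BYTES);	/* bytes as they sit in memory */
	buf[0] = saved_opcode;				/* undo the breakpoint byte */
	/* buf can now be decoded as the original instruction */
}
```

This is also why the patch decodes from kp->addr with buf[0] = kp->opcode rather than from p->ainsn.insn: the latter is a working copy that may have been adjusted (e.g. by fix_riprel()), as the review comments above discuss.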
diff --git a/arch/x86/kernel/kprobes.c b/arch/x86/kernel/kprobes.c
index 7b5169d..e4f36e7 100644
--- a/arch/x86/kernel/kprobes.c
+++ b/arch/x86/kernel/kprobes.c
@@ -48,12 +48,14 @@
 #include <linux/preempt.h>
 #include <linux/module.h>
 #include <linux/kdebug.h>
+#include <linux/kallsyms.h>
 
 #include <asm/cacheflush.h>
 #include <asm/desc.h>
 #include <asm/pgtable.h>
 #include <asm/uaccess.h>
 #include <asm/alternative.h>
+#include <asm/insn.h>
 
 void jprobe_return_end(void);
 
@@ -244,6 +246,53 @@ retry:
 	}
 }
 
+/* Recover original instruction */
+static int recover_probed_instruction(kprobe_opcode_t *buf, unsigned long addr)
+{
+	struct kprobe *kp;
+	kp = get_kprobe((void *)addr);
+	if (!kp)
+		return -EINVAL;
+
+	/* Don't use p->ainsn.insn; which will be modified by fix_riprel */
+	memcpy(buf, kp->addr, MAX_INSN_SIZE * sizeof(kprobe_opcode_t));
+	buf[0] = kp->opcode;
+	return 0;
+}
+
+/* Dummy buffers for lookup_symbol_attrs */
+static char __dummy_buf[KSYM_NAME_LEN];
+
+/* Check if paddr is at an instruction boundary */
+static int __kprobes can_probe(unsigned long paddr)
+{
+	int ret;
+	unsigned long addr, offset = 0;
+	struct insn insn;
+	kprobe_opcode_t buf[MAX_INSN_SIZE];
+
+	/* Lookup symbol including addr */
+	if (!kallsyms_lookup(paddr, NULL, &offset, NULL, __dummy_buf))
+		return 0;
+
+	/* Decode instructions */
+	addr = paddr - offset;
+	while (addr < paddr) {
+		insn_init_kernel(&insn, (void *)addr);
+		insn_get_opcode(&insn);
+		if (OPCODE1(&insn) == BREAKPOINT_INSTRUCTION) {
+			ret = recover_probed_instruction(buf, addr);
+			if (ret)
+				return 0;
+			insn_init_kernel(&insn, buf);
+		}
+		insn_get_length(&insn);
+		addr += insn.length;
+	}
+
+	return (addr == paddr);
+}
+
 /*
  * Returns non-zero if opcode modifies the interrupt flag.
  */
@@ -359,6 +408,8 @@ static void __kprobes arch_copy_kprobe(struct kprobe *p)
 
 int __kprobes arch_prepare_kprobe(struct kprobe *p)
 {
+	if (!can_probe((unsigned long)p->addr))
+		return -EILSEQ;
 	/* insn: must be on special executable page on x86. */
 	p->ainsn.insn = get_insn_slot();
 	if (!p->ainsn.insn)
Ensure safety of inserting kprobes by checking whether the specified
address is at the first byte of an instruction. This is done by decoding
the probed function from its beginning to the probe point.

Changes from v4:
 - change a comment according to Ananth's suggestion.

Signed-off-by: Masami Hiramatsu <mhiramat@redhat.com>
Cc: Ananth N Mavinakayanahalli <ananth@in.ibm.com>
Cc: Jim Keniston <jkenisto@us.ibm.com>
Cc: Ingo Molnar <mingo@elte.hu>
---

 arch/x86/kernel/kprobes.c |   51 +++++++++++++++++++++++++++++++++++++++++++++
 1 files changed, 51 insertions(+), 0 deletions(-)
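For readers skimming the patch, the core idea of can_probe() can be stated as a short, self-contained sketch. This is not the kernel code: the decoder is injected as a hypothetical callback (`insn_len`, standing in for the x86 instruction decoder the patch pulls in via <asm/insn.h>), and the real implementation additionally recovers bytes hidden behind already-installed kprobe breakpoints, as discussed in the thread above.

```c
#include <stddef.h>

/* Hypothetical decoder callback: returns the length in bytes of the
 * instruction starting at p. */
typedef size_t (*insn_len_fn)(const unsigned char *p);

/*
 * Walk the function's text from its first byte, one whole instruction at
 * a time.  The requested address is a legal probe point only if the walk
 * lands exactly on it, i.e. it is the first byte of an instruction.
 */
static int on_insn_boundary(const unsigned char *func_start,
			    const unsigned char *probe_addr,
			    insn_len_fn insn_len)
{
	const unsigned char *p = func_start;

	while (p < probe_addr)
		p += insn_len(p);

	return p == probe_addr;
}
```

In the patch this check runs in arch_prepare_kprobe(), which rejects a probe that falls mid-instruction with -EILSEQ before any breakpoint is written.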