From patchwork Sat Dec 27 07:36:00 2014
X-Patchwork-Submitter: Wang Nan
X-Patchwork-Id: 5544381
From: Wang Nan
Cc: lizefan@huawei.com, linux-kernel@vger.kernel.org, linux-arm-kernel@lists.infradead.org
Subject: [PATCH v17 11/11] ARM: optprobes: execute instruction during restoring if possible.
Date: Sat, 27 Dec 2014 15:36:00 +0800
Message-ID: <1419665760-13336-1-git-send-email-wangnan0@huawei.com>
In-Reply-To: <1419665637-12744-1-git-send-email-wangnan0@huawei.com>
References: <1419665637-12744-1-git-send-email-wangnan0@huawei.com>

This patch removes software emulation or simulation for most probed instructions.
If the probed instruction doesn't use PC-relative addressing, it is copied into the restore code of the code template, which then becomes:

	ldmia	sp, {r0 - r14}	@ restore all registers except PC
	<probed instruction>	@ the probed instruction, executed directly
	b	next_insn	@ branch to the next instruction

Signed-off-by: Wang Nan
---
 arch/arm/include/asm/kprobes.h    |  3 +++
 arch/arm/include/asm/probes.h     |  1 +
 arch/arm/probes/kprobes/opt-arm.c | 47 +++++++++++++++++++++++++++++++++++++--
 3 files changed, 49 insertions(+), 2 deletions(-)

diff --git a/arch/arm/include/asm/kprobes.h b/arch/arm/include/asm/kprobes.h
index 50ff3bc..3ea9be5 100644
--- a/arch/arm/include/asm/kprobes.h
+++ b/arch/arm/include/asm/kprobes.h
@@ -57,6 +57,9 @@ extern __visible kprobe_opcode_t optprobe_template_call;
 extern __visible kprobe_opcode_t optprobe_template_end;
 extern __visible kprobe_opcode_t optprobe_template_sub_sp;
 extern __visible kprobe_opcode_t optprobe_template_add_sp;
+extern __visible kprobe_opcode_t optprobe_template_restore_begin;
+extern __visible kprobe_opcode_t optprobe_template_restore_orig_insn;
+extern __visible kprobe_opcode_t optprobe_template_restore_end;
 
 #define MAX_OPTIMIZED_LENGTH	4
 #define MAX_OPTINSN_SIZE \
diff --git a/arch/arm/include/asm/probes.h b/arch/arm/include/asm/probes.h
index ee8725c..8ebbe83 100644
--- a/arch/arm/include/asm/probes.h
+++ b/arch/arm/include/asm/probes.h
@@ -50,6 +50,7 @@ struct arch_probes_insn {
 #define set_register_nouse(m, n)	__clear_register_flag(m, n, REG_NO_USE)
 #define set_register_use(m, n)		__set_register_flag(m, n, REG_USE)
 	int register_usage_mask;
+	bool kprobe_direct_exec;
 };
 
 #endif /* __ASSEMBLY__ */
diff --git a/arch/arm/probes/kprobes/opt-arm.c b/arch/arm/probes/kprobes/opt-arm.c
index 6a60df3..f3bd1cc 100644
--- a/arch/arm/probes/kprobes/opt-arm.c
+++ b/arch/arm/probes/kprobes/opt-arm.c
@@ -32,6 +32,13 @@
 #include "core.h"
 
 /*
+ * See register_usage_mask. If the probed instruction doesn't use PC,
+ * we can copy it into the template and have it executed directly,
+ * without simulation or emulation.
+ */
+#define can_kprobe_direct_exec(m)	(!((m) & 0xc0000000UL))
+
+/*
  * NOTE: the first sub and add instruction will be modified according
  * to the stack cost of the instruction.
  */
@@ -66,7 +73,15 @@ asm (
 			"	orrne	r2, #1\n"
 			"	strne	r2, [sp, #60]	@ set bit0 of PC for thumb\n"
 			"	msr	cpsr_cxsf, r1\n"
+			".global optprobe_template_restore_begin\n"
+			"optprobe_template_restore_begin:\n"
 			"	ldmia	sp, {r0 - r15}\n"
+			".global optprobe_template_restore_orig_insn\n"
+			"optprobe_template_restore_orig_insn:\n"
+			"	nop\n"
+			".global optprobe_template_restore_end\n"
+			"optprobe_template_restore_end:\n"
+			"	ldmia	sp, {r13 - r15}\n"
 			".global optprobe_template_val\n"
 			"optprobe_template_val:\n"
 			"1:	.long 0\n"
@@ -86,6 +101,12 @@ asm (
 	((unsigned long *)&optprobe_template_add_sp - (unsigned long *)&optprobe_template_entry)
 #define TMPL_SUB_SP \
 	((unsigned long *)&optprobe_template_sub_sp - (unsigned long *)&optprobe_template_entry)
+#define TMPL_RESTORE_BEGIN \
+	((unsigned long *)&optprobe_template_restore_begin - (unsigned long *)&optprobe_template_entry)
+#define TMPL_RESTORE_ORIGN_INSN \
+	((unsigned long *)&optprobe_template_restore_orig_insn - (unsigned long *)&optprobe_template_entry)
+#define TMPL_RESTORE_END \
+	((unsigned long *)&optprobe_template_restore_end - (unsigned long *)&optprobe_template_entry)
 
 /*
  * ARM can always optimize an instruction when using ARM ISA, except
@@ -155,8 +176,12 @@ optimized_callback(struct optimized_kprobe *op, struct pt_regs *regs)
 		__this_cpu_write(current_kprobe, NULL);
 	}
 
-	/* In each case, we must singlestep the replaced instruction. */
-	op->kp.ainsn.insn_singlestep(p->opcode, &p->ainsn, regs);
+	/*
+	 * We singlestep the replaced instruction only when it can't be
+	 * executed directly during restore.
+	 */
+	if (!p->ainsn.kprobe_direct_exec)
+		op->kp.ainsn.insn_singlestep(p->opcode, &p->ainsn, regs);
 
 	local_irq_restore(flags);
 }
@@ -238,6 +263,24 @@ int arch_prepare_optimized_kprobe(struct optimized_kprobe *op, struct kprobe *or
 	val = (unsigned long)optimized_callback;
 	code[TMPL_CALL_IDX] = val;
 
+	/* If possible, copy the insn and have it executed during restore. */
+	orig->ainsn.kprobe_direct_exec = false;
+	if (can_kprobe_direct_exec(orig->ainsn.register_usage_mask)) {
+		kprobe_opcode_t final_branch = arm_gen_branch(
+				(unsigned long)(&code[TMPL_RESTORE_END]),
+				(unsigned long)(op->kp.addr) + 4);
+		if (final_branch != 0) {
+			/*
+			 * Replace the original 'ldmia sp, {r0 - r15}' with
+			 * 'ldmia sp, {r0 - r14}', i.e. restore all registers
+			 * except PC.
+			 */
+			code[TMPL_RESTORE_BEGIN] = __opcode_to_mem_arm(0xe89d7fff);
+			code[TMPL_RESTORE_ORIGN_INSN] = __opcode_to_mem_arm(orig->opcode);
+			code[TMPL_RESTORE_END] = __opcode_to_mem_arm(final_branch);
+			orig->ainsn.kprobe_direct_exec = true;
+		}
+	}
+
 	flush_icache_range((unsigned long)code,
 			   (unsigned long)(&code[TMPL_END_IDX]));
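
A note on the constant 0xe89d7fff used above: the original template line 'ldmia sp, {r0 - r15}' assembles to 0xe89dffff, and clearing bit 15 of the register list (the PC bit) gives 0xe89d7fff, i.e. 'ldmia sp, {r0 - r14}'. The stand-alone user-space C sketch below is illustration only, not part of the patch; encode_ldmia() is a made-up helper that reproduces both encodings.

#include <assert.h>
#include <stdint.h>
#include <stdio.h>

/*
 * Illustrative only: encode an ARM "LDMIA Rn, {reglist}" instruction
 * (condition AL, no writeback).  Bits 31-28 are the condition, bits
 * 27-20 are 1000 1001 for LDMIA without writeback, bits 19-16 select
 * the base register and bits 15-0 are the register list.
 */
static uint32_t encode_ldmia(unsigned int rn, uint16_t reglist)
{
	return 0xe8900000u | (rn << 16) | reglist;
}

int main(void)
{
	uint32_t restore_all   = encode_ldmia(13, 0xffff); /* ldmia sp, {r0 - r15} */
	uint32_t restore_no_pc = encode_ldmia(13, 0x7fff); /* ldmia sp, {r0 - r14} */

	/* These match the encodings referenced in the patch. */
	assert(restore_all   == 0xe89dffff);
	assert(restore_no_pc == 0xe89d7fff);

	printf("ldmia sp, {r0 - r15} = 0x%08x\n", restore_all);
	printf("ldmia sp, {r0 - r14} = 0x%08x\n", restore_no_pc);
	return 0;
}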
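
The other moving part is arm_gen_branch(), which has to emit an unconditional branch from the end of the restore area back to op->kp.addr + 4 (the ARM instruction after the probe point); the patch treats a return value of 0 as failure and falls back to single-stepping. The sketch below is not the kernel helper, only the usual ARM B-encoding arithmetic, under the assumption that the offset is measured from the branch address plus 8 and must fit in the signed 26-bit range (about +/-32MB); gen_arm_branch() and the addresses in main() are hypothetical.

#include <stdint.h>
#include <stdio.h>

/*
 * Illustrative sketch of unconditional ARM branch generation:
 * B <target> is 0xea000000 | imm24, where the byte offset is
 * (target - insn_addr - 8) and imm24 is that offset divided by 4.
 * Returns 0 when the offset is misaligned or does not fit.
 */
static uint32_t gen_arm_branch(unsigned long insn_addr, unsigned long target)
{
	long offset = (long)target - (long)insn_addr - 8;

	if (offset < -0x02000000L || offset > 0x01fffffcL || (offset & 3))
		return 0;	/* out of range or not word aligned */

	return 0xea000000u | (((unsigned long)offset >> 2) & 0x00ffffff);
}

int main(void)
{
	/* Hypothetical addresses: end of the trampoline and probed address + 4. */
	unsigned long restore_end = 0xc0201000UL;
	unsigned long next_insn   = 0xc0100004UL;

	printf("b next_insn -> 0x%08x\n", gen_arm_branch(restore_end, next_insn));
	return 0;
}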