From patchwork Wed Apr 27 18:53:03 2016
X-Patchwork-Submitter: David Long
X-Patchwork-Id: 8962011
From: David Long
To: Catalin Marinas, Will Deacon, Sandeepa Prabhu, William Cohen,
	Pratyush Anand, Steve Capper, linux-arm-kernel@lists.infradead.org,
	linux-kernel@vger.kernel.org, Marc Zyngier
Subject: [PATCH v12 08/10] arm64: Add trampoline code for kretprobes
Date: Wed, 27 Apr 2016 14:53:03 -0400
Message-Id: <1461783185-9056-9-git-send-email-dave.long@linaro.org>
In-Reply-To: <1461783185-9056-1-git-send-email-dave.long@linaro.org>
References: <1461783185-9056-1-git-send-email-dave.long@linaro.org>
Cc: Mark Rutland, Petr Mladek, Viresh Kumar, John Blackwood, Feng Kan,
	Zi Shen Lim, Dave P Martin, Yang Shi, Vladimir Murzin, Kees Cook,
	"Suzuki K. Poulose", Mark Brown, Alex Bennée, Ard Biesheuvel,
	Greg Kroah-Hartman, Mark Salyzyn, James Morse, Christoffer Dall,
	Andrew Morton, Robin Murphy, Jens Wiklander, Balamurugan Shanmugam

From: William Cohen

The trampoline code is used by kretprobes to capture a return from a
probed function. This is done by saving the registers, calling the
handler, and restoring the registers. The code then returns to the
original saved caller return address. It is necessary to do this
directly instead of using a software breakpoint because the code used
in processing that breakpoint could itself be kprobe'd and cause a
problematic reentry into the debug exception handler.

Signed-off-by: William Cohen
Signed-off-by: David A. Long
---
 arch/arm64/include/asm/kprobes.h       |  2 +
 arch/arm64/kernel/Makefile             |  1 +
 arch/arm64/kernel/asm-offsets.c        | 11 +++++
 arch/arm64/kernel/kprobes.c            |  5 ++
 arch/arm64/kernel/kprobes_trampoline.S | 85 ++++++++++++++++++++++++++++++++++
 5 files changed, 104 insertions(+)
 create mode 100644 arch/arm64/kernel/kprobes_trampoline.S
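
Note (illustration only, not part of this patch): the trampoline added
below is what any kretprobe user ends up returning through on arm64. A
minimal sketch of such a user is shown here for context; the probed
symbol "_do_fork", the module/function names and the printed message are
arbitrary example choices, and the registration/handler API is the
generic kretprobes one, not something introduced by this series.

  #include <linux/kernel.h>
  #include <linux/module.h>
  #include <linux/kprobes.h>

  /* Called from trampoline_probe_handler() with the register state that
   * save_all_base_regs captured in the trampoline's pt_regs frame. */
  static int ret_handler(struct kretprobe_instance *ri, struct pt_regs *regs)
  {
          pr_info("probed function returned %ld\n", regs_return_value(regs));
          return 0;
  }

  static struct kretprobe example_kretprobe = {
          .handler        = ret_handler,
          .kp.symbol_name = "_do_fork",   /* example target */
          .maxactive      = 16,
  };

  static int __init example_init(void)
  {
          return register_kretprobe(&example_kretprobe);
  }

  static void __exit example_exit(void)
  {
          unregister_kretprobe(&example_kretprobe);
  }

  module_init(example_init);
  module_exit(example_exit);
  MODULE_LICENSE("GPL");
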
diff --git a/arch/arm64/include/asm/kprobes.h b/arch/arm64/include/asm/kprobes.h
index 79c9511..61b4915 100644
--- a/arch/arm64/include/asm/kprobes.h
+++ b/arch/arm64/include/asm/kprobes.h
@@ -56,5 +56,7 @@ int kprobe_exceptions_notify(struct notifier_block *self,
 			     unsigned long val, void *data);
 int kprobe_breakpoint_handler(struct pt_regs *regs, unsigned int esr);
 int kprobe_single_step_handler(struct pt_regs *regs, unsigned int esr);
+void kretprobe_trampoline(void);
+void __kprobes *trampoline_probe_handler(struct pt_regs *regs);
 
 #endif /* _ARM_KPROBES_H */
diff --git a/arch/arm64/kernel/Makefile b/arch/arm64/kernel/Makefile
index 43bf6cc..f4c64e2 100644
--- a/arch/arm64/kernel/Makefile
+++ b/arch/arm64/kernel/Makefile
@@ -38,6 +38,7 @@ arm64-obj-$(CONFIG_CPU_IDLE)		+= cpuidle.o
 arm64-obj-$(CONFIG_JUMP_LABEL)		+= jump_label.o
 arm64-obj-$(CONFIG_KGDB)		+= kgdb.o
 arm64-obj-$(CONFIG_KPROBES)		+= kprobes.o kprobes-arm64.o		\
+					   kprobes_trampoline.o			\
 					   probes-simulate-insn.o
 arm64-obj-$(CONFIG_EFI)			+= efi.o efi-entry.stub.o
 arm64-obj-$(CONFIG_PCI)			+= pci.o
diff --git a/arch/arm64/kernel/asm-offsets.c b/arch/arm64/kernel/asm-offsets.c
index 3ae6b31..ca6ad2d 100644
--- a/arch/arm64/kernel/asm-offsets.c
+++ b/arch/arm64/kernel/asm-offsets.c
@@ -50,6 +50,17 @@ int main(void)
   DEFINE(S_X5,			offsetof(struct pt_regs, regs[5]));
   DEFINE(S_X6,			offsetof(struct pt_regs, regs[6]));
   DEFINE(S_X7,			offsetof(struct pt_regs, regs[7]));
+  DEFINE(S_X8,			offsetof(struct pt_regs, regs[8]));
+  DEFINE(S_X10,			offsetof(struct pt_regs, regs[10]));
+  DEFINE(S_X12,			offsetof(struct pt_regs, regs[12]));
+  DEFINE(S_X14,			offsetof(struct pt_regs, regs[14]));
+  DEFINE(S_X16,			offsetof(struct pt_regs, regs[16]));
+  DEFINE(S_X18,			offsetof(struct pt_regs, regs[18]));
+  DEFINE(S_X20,			offsetof(struct pt_regs, regs[20]));
+  DEFINE(S_X22,			offsetof(struct pt_regs, regs[22]));
+  DEFINE(S_X24,			offsetof(struct pt_regs, regs[24]));
+  DEFINE(S_X26,			offsetof(struct pt_regs, regs[26]));
+  DEFINE(S_X28,			offsetof(struct pt_regs, regs[28]));
   DEFINE(S_LR,			offsetof(struct pt_regs, regs[30]));
   DEFINE(S_SP,			offsetof(struct pt_regs, sp));
 #ifdef CONFIG_COMPAT
diff --git a/arch/arm64/kernel/kprobes.c b/arch/arm64/kernel/kprobes.c
index 492c7d4..0a2e00e 100644
--- a/arch/arm64/kernel/kprobes.c
+++ b/arch/arm64/kernel/kprobes.c
@@ -550,6 +550,11 @@ bool arch_within_kprobe_blacklist(unsigned long addr)
 	    !!search_exception_tables(addr);
 }
 
+void __kprobes __used *trampoline_probe_handler(struct pt_regs *regs)
+{
+	return NULL;
+}
+
 int __init arch_init_kprobes(void)
 {
 	return 0;
diff --git a/arch/arm64/kernel/kprobes_trampoline.S b/arch/arm64/kernel/kprobes_trampoline.S
new file mode 100644
index 0000000..ba37d85
--- /dev/null
+++ b/arch/arm64/kernel/kprobes_trampoline.S
@@ -0,0 +1,85 @@
+/*
+ * trampoline entry and return code for kretprobes.
+ */
+
+#include <linux/linkage.h>
+#include <asm/asm-offsets.h>
+#include <asm/assembler.h>
+
+	.text
+
+.macro save_all_base_regs
+	stp x0, x1, [sp, #S_X0]
+	stp x2, x3, [sp, #S_X2]
+	stp x4, x5, [sp, #S_X4]
+	stp x6, x7, [sp, #S_X6]
+	stp x8, x9, [sp, #S_X8]
+	stp x10, x11, [sp, #S_X10]
+	stp x12, x13, [sp, #S_X12]
+	stp x14, x15, [sp, #S_X14]
+	stp x16, x17, [sp, #S_X16]
+	stp x18, x19, [sp, #S_X18]
+	stp x20, x21, [sp, #S_X20]
+	stp x22, x23, [sp, #S_X22]
+	stp x24, x25, [sp, #S_X24]
+	stp x26, x27, [sp, #S_X26]
+	stp x28, x29, [sp, #S_X28]
+	add x0, sp, #S_FRAME_SIZE
+	stp lr, x0, [sp, #S_LR]
+/*
+ * Construct a useful saved PSTATE
+ */
+	mrs x0, nzcv
+	and x0, x0, #(PSR_N_BIT | PSR_Z_BIT | PSR_C_BIT | PSR_V_BIT)
+	mrs x1, daif
+	and x1, x1, #(PSR_D_BIT | PSR_A_BIT | PSR_I_BIT | PSR_F_BIT)
+	orr x0, x0, x1
+	mrs x1, CurrentEL
+	and x1, x1, #(3 << 2)
+	orr x0, x1, x0
+	mrs x1, SPSel
+	and x1, x1, #1
+	orr x0, x1, x0
+	str x0, [sp, #S_PSTATE]
+.endm
+
+.macro restore_all_base_regs
+	ldr x0, [sp, #S_PSTATE]
+	and x0, x0, #(PSR_N_BIT | PSR_Z_BIT | PSR_C_BIT | PSR_V_BIT)
+	msr nzcv, x0
+	ldp x0, x1, [sp, #S_X0]
+	ldp x2, x3, [sp, #S_X2]
+	ldp x4, x5, [sp, #S_X4]
+	ldp x6, x7, [sp, #S_X6]
+	ldp x8, x9, [sp, #S_X8]
+	ldp x10, x11, [sp, #S_X10]
+	ldp x12, x13, [sp, #S_X12]
+	ldp x14, x15, [sp, #S_X14]
+	ldp x16, x17, [sp, #S_X16]
+	ldp x18, x19, [sp, #S_X18]
+	ldp x20, x21, [sp, #S_X20]
+	ldp x22, x23, [sp, #S_X22]
+	ldp x24, x25, [sp, #S_X24]
+	ldp x26, x27, [sp, #S_X26]
+	ldp x28, x29, [sp, #S_X28]
+.endm
+
+ENTRY(kretprobe_trampoline)
+
+	sub sp, sp, #S_FRAME_SIZE
+
+	save_all_base_regs
+
+	mov x0, sp
+	bl trampoline_probe_handler
+	/* Replace trampoline address in lr with actual
+	   orig_ret_addr return address. */
+	mov lr, x0
+
+	restore_all_base_regs
+
+	add sp, sp, #S_FRAME_SIZE
+
+	ret
+
+ENDPROC(kretprobe_trampoline)
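
(For context, not part of this patch: the trampoline above is only ever
entered because the probed function's return address has been replaced
with kretprobe_trampoline. That rewiring belongs to the kretprobes
support added elsewhere in this series; roughly, the arch hook looks
like the sketch below.)

  void __kprobes arch_prepare_kretprobe(struct kretprobe_instance *ri,
                                        struct pt_regs *regs)
  {
          /* Remember the real return address of the probed function ... */
          ri->ret_addr = (kprobe_opcode_t *)regs->regs[30];

          /* ... and make it "return" into the trampoline instead (x30/lr). */
          regs->regs[30] = (long)&kretprobe_trampoline;
  }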