From patchwork Tue Aug 2 05:30:09 2016
X-Patchwork-Submitter: Pratyush Anand
X-Patchwork-Id: 9255177
From: Pratyush Anand
To: linux-arm-kernel@lists.infradead.org, linux@arm.linux.org.uk,
	catalin.marinas@arm.com, will.deacon@arm.com
Subject: [PATCH 5/5] arm64: Add uprobe support
Date: Tue, 2 Aug 2016 11:00:09 +0530
X-Mailer: git-send-email 2.5.5
Cc: Mark Rutland, srikar@linux.vnet.ibm.com, oleg@redhat.com, Jungseok Lee,
	Pratyush Anand, vijaya.kumar@caviumnetworks.com,
	dave.long@linaro.org, Shi Yang, Vladimir Murzin, steve.capper@linaro.org,
	"Suzuki K. Poulose", Andre Przywara, Shaokun Zhang, Ashok Kumar,
	Sandeepa Prabhu, wcohen@redhat.com, Ard Biesheuvel,
	linux-kernel@vger.kernel.org, James Morse, Masami Hiramatsu,
	Robin Murphy, "Kirill A. Shutemov"

This patch adds support for uprobes on the ARM64 architecture.

Unit tests for the following have been done so far, and they have been
found to work:
1. Step-able instructions, like sub, ldr, add, etc.
2. Simulation-able instructions, like ret, cbnz, cbz, etc.
3. uretprobe
4. Reject-able instructions, like sev, wfe, etc.
5. Trapped and aborted XOL path
6. Probe at an unaligned user address
7. Longjump test cases

Currently it does not support AArch32 instruction probing.

Thanks to Shi Yang.

Cc: Shi Yang
---
 arch/arm64/Kconfig                      |   3 +
 arch/arm64/include/asm/debug-monitors.h |   3 +
 arch/arm64/include/asm/probes.h         |   4 +
 arch/arm64/include/asm/ptrace.h         |   8 ++
 arch/arm64/include/asm/thread_info.h    |   5 +-
 arch/arm64/include/asm/uprobes.h        |  37 ++++++
 arch/arm64/kernel/entry.S               |   6 +-
 arch/arm64/kernel/probes/Makefile       |   2 +
 arch/arm64/kernel/probes/uprobes.c      | 227 ++++++++++++++++++++++++++++++++
 arch/arm64/kernel/signal.c              |   4 +-
 arch/arm64/mm/flush.c                   |   6 +
 11 files changed, 301 insertions(+), 4 deletions(-)
 create mode 100644 arch/arm64/include/asm/uprobes.h
 create mode 100644 arch/arm64/kernel/probes/uprobes.c

diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index 9f8b99e20557..77be79cb123b 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -232,6 +232,9 @@ config PGTABLE_LEVELS
 	default 3 if ARM64_16K_PAGES && ARM64_VA_BITS_47
 	default 4 if !ARM64_64K_PAGES && ARM64_VA_BITS_48
 
+config ARCH_SUPPORTS_UPROBES
+	def_bool y
+
 source "init/Kconfig"
 
 source "kernel/Kconfig.freezer"
diff --git a/arch/arm64/include/asm/debug-monitors.h b/arch/arm64/include/asm/debug-monitors.h
index 4b6b3f72a215..d04248b749a8 100644
--- a/arch/arm64/include/asm/debug-monitors.h
+++ b/arch/arm64/include/asm/debug-monitors.h
@@ -70,6 +70,9 @@
 #define BRK64_ESR_MASK		0xFFFF
 #define BRK64_ESR_KPROBES	0x0004
 #define BRK64_OPCODE_KPROBES	(AARCH64_BREAK_MON | (BRK64_ESR_KPROBES << 5))
+/* uprobes BRK opcodes with ESR encoding */
+#define BRK64_ESR_UPROBES	0x0008
+#define BRK64_OPCODE_UPROBES	(AARCH64_BREAK_MON | (BRK64_ESR_UPROBES << 5))
 
 /* AArch32 */
 #define DBG_ESR_EVT_BKPT	0x4
diff --git a/arch/arm64/include/asm/probes.h b/arch/arm64/include/asm/probes.h
index e175a825b187..3d8fbca3f556 100644
--- a/arch/arm64/include/asm/probes.h
+++ b/arch/arm64/include/asm/probes.h
@@ -35,4 +35,8 @@ struct arch_specific_insn {
 };
 #endif
 
+#ifdef CONFIG_UPROBES
+typedef u32 uprobe_opcode_t;
+#endif
+
 #endif
diff --git a/arch/arm64/include/asm/ptrace.h b/arch/arm64/include/asm/ptrace.h
index ada08b5b036d..513daf050e84 100644
--- a/arch/arm64/include/asm/ptrace.h
+++ b/arch/arm64/include/asm/ptrace.h
@@ -217,6 +217,14 @@ int valid_user_regs(struct user_pt_regs *regs, struct task_struct *task);
 
 #include
 
+#define procedure_link_pointer(regs)	((regs)->regs[30])
+
+static inline void procedure_link_pointer_set(struct pt_regs *regs,
+					      unsigned long val)
+{
+	procedure_link_pointer(regs) = val;
+}
+
 #undef profile_pc
 extern unsigned long profile_pc(struct pt_regs *regs);
 
diff --git a/arch/arm64/include/asm/thread_info.h b/arch/arm64/include/asm/thread_info.h
index abd64bd1f6d9..d5ebf2ee5024 100644
--- a/arch/arm64/include/asm/thread_info.h
+++ b/arch/arm64/include/asm/thread_info.h
@@ -109,6 +109,7 @@ static inline struct thread_info *current_thread_info(void)
 #define TIF_NEED_RESCHED	1
 #define TIF_NOTIFY_RESUME	2	/* callback before returning to user */
 #define TIF_FOREIGN_FPSTATE	3	/* CPU's FP state is not current's */
+#define TIF_UPROBE		5	/* uprobe breakpoint or singlestep */
 #define TIF_NOHZ		7
 #define TIF_SYSCALL_TRACE	8
 #define TIF_SYSCALL_AUDIT	9
@@ -129,10 +130,12 @@ static inline struct thread_info *current_thread_info(void)
 #define _TIF_SYSCALL_AUDIT	(1 << TIF_SYSCALL_AUDIT)
 #define _TIF_SYSCALL_TRACEPOINT	(1 << TIF_SYSCALL_TRACEPOINT)
 #define _TIF_SECCOMP		(1 << TIF_SECCOMP)
+#define _TIF_UPROBE		(1 << TIF_UPROBE)
 #define _TIF_32BIT		(1 << TIF_32BIT)
 
 #define _TIF_WORK_MASK		(_TIF_NEED_RESCHED | _TIF_SIGPENDING | \
-				 _TIF_NOTIFY_RESUME | _TIF_FOREIGN_FPSTATE)
+				 _TIF_NOTIFY_RESUME | _TIF_FOREIGN_FPSTATE | \
+				 _TIF_UPROBE)
 
 #define _TIF_SYSCALL_WORK	(_TIF_SYSCALL_TRACE | _TIF_SYSCALL_AUDIT | \
 				 _TIF_SYSCALL_TRACEPOINT | _TIF_SECCOMP | \
diff --git a/arch/arm64/include/asm/uprobes.h b/arch/arm64/include/asm/uprobes.h
new file mode 100644
index 000000000000..434a3afebcd1
--- /dev/null
+++ b/arch/arm64/include/asm/uprobes.h
@@ -0,0 +1,37 @@
+/*
+ * Copyright (C) 2014-2015 Pratyush Anand
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ */
+
+#ifndef _ASM_UPROBES_H
+#define _ASM_UPROBES_H
+
+#include
+#include
+#include
+
+#define MAX_UINSN_BYTES		AARCH64_INSN_SIZE
+
+#define UPROBE_SWBP_INSN	BRK64_OPCODE_UPROBES
+#define UPROBE_SWBP_INSN_SIZE	4
+#define UPROBE_XOL_SLOT_BYTES	MAX_UINSN_BYTES
+
+struct arch_uprobe_task {
+	unsigned long saved_fault_code;
+};
+
+struct arch_uprobe {
+	union {
+		u8 insn[MAX_UINSN_BYTES];
+		u8 ixol[MAX_UINSN_BYTES];
+	};
+	struct arch_probe_insn api;
+	bool simulate;
+};
+
+extern void flush_uprobe_xol_access(struct page *page, unsigned long uaddr,
+				    void *kaddr, unsigned long len);
+#endif
diff --git a/arch/arm64/kernel/entry.S b/arch/arm64/kernel/entry.S
index 96e4a2b64cc1..801b9064b89a 100644
--- a/arch/arm64/kernel/entry.S
+++ b/arch/arm64/kernel/entry.S
@@ -688,7 +688,8 @@ ret_fast_syscall:
 	ldr	x1, [tsk, #TI_FLAGS]		// re-check for syscall tracing
 	and	x2, x1, #_TIF_SYSCALL_WORK
 	cbnz	x2, ret_fast_syscall_trace
-	and	x2, x1, #_TIF_WORK_MASK
+	mov	x2, #_TIF_WORK_MASK
+	and	x2, x1, x2
 	cbnz	x2, work_pending
 	enable_step_tsk x1, x2
 	kernel_exit 0
@@ -718,7 +719,8 @@ work_resched:
 ret_to_user:
 	disable_irq				// disable interrupts
 	ldr	x1, [tsk, #TI_FLAGS]
-	and	x2, x1, #_TIF_WORK_MASK
+	mov	x2, #_TIF_WORK_MASK
+	and	x2, x1, x2
 	cbnz	x2, work_pending
 	enable_step_tsk x1, x2
 	kernel_exit 0
diff --git a/arch/arm64/kernel/probes/Makefile b/arch/arm64/kernel/probes/Makefile
index ce06312e3d34..89b6df613dde 100644
--- a/arch/arm64/kernel/probes/Makefile
+++ b/arch/arm64/kernel/probes/Makefile
@@ -1,3 +1,5 @@
 obj-$(CONFIG_KPROBES)		+= kprobes.o decode-insn.o	\
 				   kprobes_trampoline.o		\
 				   simulate-insn.o
+obj-$(CONFIG_UPROBES)		+= uprobes.o decode-insn.o	\
+				   simulate-insn.o
diff --git a/arch/arm64/kernel/probes/uprobes.c b/arch/arm64/kernel/probes/uprobes.c
new file mode 100644
index 000000000000..4d9d21f6e22e
--- /dev/null
+++ b/arch/arm64/kernel/probes/uprobes.c
@@ -0,0 +1,227 @@
+/*
+ * Copyright (C) 2014-2015 Pratyush Anand
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ */
+#include
+#include
+#include
+
+#include "decode-insn.h"
+
+#define UPROBE_INV_FAULT_CODE	UINT_MAX
+
+void arch_uprobe_copy_ixol(struct page *page, unsigned long vaddr,
+		void *src, unsigned long len)
+{
+	void *xol_page_kaddr = kmap_atomic(page);
+	void *dst = xol_page_kaddr + (vaddr & ~PAGE_MASK);
+
+	preempt_disable();
+
+	/* Initialize the slot */
+	memcpy(dst, src, len);
+
+	/* flush caches (dcache/icache) */
+	flush_uprobe_xol_access(page, vaddr, dst, len);
+
+	preempt_enable();
+
+	kunmap_atomic(xol_page_kaddr);
+}
+
+unsigned long uprobe_get_swbp_addr(struct pt_regs *regs)
+{
+	return instruction_pointer(regs);
+}
+
+int arch_uprobe_analyze_insn(struct arch_uprobe *auprobe, struct mm_struct *mm,
+		unsigned long addr)
+{
+	probe_opcode_t insn;
+
+	/* TODO: Currently we do not support AARCH32 instruction probing */
+
+	if (!IS_ALIGNED(addr, AARCH64_INSN_SIZE))
+		return -EINVAL;
+
+	insn = *(probe_opcode_t *)(&auprobe->insn[0]);
+
+	switch (arm_probe_decode_insn(insn, &auprobe->api)) {
+	case INSN_REJECTED:
+		return -EINVAL;
+
+	case INSN_GOOD_NO_SLOT:
+		auprobe->simulate = true;
+		break;
+
+	case INSN_GOOD:
+	default:
+		break;
+	}
+
+	return 0;
+}
+
+int arch_uprobe_pre_xol(struct arch_uprobe *auprobe, struct pt_regs *regs)
+{
+	struct uprobe_task *utask = current->utask;
+
+	/* saved fault code is restored in post_xol */
+	utask->autask.saved_fault_code = current->thread.fault_code;
+
+	/* Set an invalid fault code between the pre/post XOL events */
+	current->thread.fault_code = UPROBE_INV_FAULT_CODE;
+
+	/* Point the instruction pointer at the XOL slot to execute out of line */
+	instruction_pointer_set(regs, utask->xol_vaddr);
+
+	user_enable_single_step(current);
+
+	return 0;
+}
+
+int arch_uprobe_post_xol(struct arch_uprobe *auprobe, struct pt_regs *regs)
+{
+	struct uprobe_task *utask = current->utask;
+
+	WARN_ON_ONCE(current->thread.fault_code != UPROBE_INV_FAULT_CODE);
+
+	/* restore fault code */
+	current->thread.fault_code = utask->autask.saved_fault_code;
+
+	/* Point the instruction pointer at the instruction following the breakpoint */
+	instruction_pointer_set(regs, utask->vaddr + 4);
+
+	user_disable_single_step(current);
+
+	return 0;
+}
+
+bool arch_uprobe_xol_was_trapped(struct task_struct *t)
+{
+	/*
+	 * If the XOL insn itself trapped between arch_uprobe_pre_xol and
+	 * arch_uprobe_post_xol, detect that case with the help of the
+	 * invalid fault code, which is set in arch_uprobe_pre_xol and
+	 * restored in arch_uprobe_post_xol.
+	 */
+	if (t->thread.fault_code != UPROBE_INV_FAULT_CODE)
+		return true;
+
+	return false;
+}
+
+bool arch_uprobe_skip_sstep(struct arch_uprobe *auprobe, struct pt_regs *regs)
+{
+	probe_opcode_t insn;
+	unsigned long addr;
+
+	if (!auprobe->simulate)
+		return false;
+
+	insn = *(probe_opcode_t *)(&auprobe->insn[0]);
+	addr = instruction_pointer(regs);
+
+	if (auprobe->api.handler)
+		auprobe->api.handler(insn, addr, regs);
+
+	return true;
+}
+
+void arch_uprobe_abort_xol(struct arch_uprobe *auprobe, struct pt_regs *regs)
+{
+	struct uprobe_task *utask = current->utask;
+
+	current->thread.fault_code = utask->autask.saved_fault_code;
+	/*
+	 * Task has received a fatal signal, so reset back to the probed
+	 * address.
+	 */
+	instruction_pointer_set(regs, utask->vaddr);
+
+	user_disable_single_step(current);
+}
+
+bool arch_uretprobe_is_alive(struct return_instance *ret, enum rp_check ctx,
+		struct pt_regs *regs)
+{
+	/*
+	 * If a simple branch instruction (B) was used to reach the retprobed
+	 * assembly label, then return true even when regs->sp and ret->stack
+	 * are the same. This ensures that cleanup and reporting of the return
+	 * instances corresponding to the callee label is done when
+	 * handle_trampoline for the called function is executed.
+	 */
+	if (ctx == RP_CHECK_CHAIN_CALL)
+		return regs->sp <= ret->stack;
+	else
+		return regs->sp < ret->stack;
+}
+
+unsigned long
+arch_uretprobe_hijack_return_addr(unsigned long trampoline_vaddr,
+				  struct pt_regs *regs)
+{
+	unsigned long orig_ret_vaddr;
+
+	orig_ret_vaddr = procedure_link_pointer(regs);
+	/* Replace the return addr with trampoline addr */
+	procedure_link_pointer_set(regs, trampoline_vaddr);
+
+	return orig_ret_vaddr;
+}
+
+int arch_uprobe_exception_notify(struct notifier_block *self,
+				 unsigned long val, void *data)
+{
+	return NOTIFY_DONE;
+}
+
+static int __kprobes uprobe_breakpoint_handler(struct pt_regs *regs,
+		unsigned int esr)
+{
+	if (user_mode(regs) && uprobe_pre_sstep_notifier(regs))
+		return DBG_HOOK_HANDLED;
+
+	return DBG_HOOK_ERROR;
+}
+
+static int __kprobes uprobe_single_step_handler(struct pt_regs *regs,
+		unsigned int esr)
+{
+	struct uprobe_task *utask = current->utask;
+
+	if (user_mode(regs)) {
+		WARN_ON(utask &&
+			(instruction_pointer(regs) != utask->xol_vaddr + 4));
+
+		if (uprobe_post_sstep_notifier(regs))
+			return DBG_HOOK_HANDLED;
+	}
+
+	return DBG_HOOK_ERROR;
+}
+
+/* uprobe breakpoint handler hook */
+static struct break_hook uprobes_break_hook = {
+	.esr_mask = BRK64_ESR_MASK,
+	.esr_val = BRK64_ESR_UPROBES,
+	.fn = uprobe_breakpoint_handler,
+};
+
+/* uprobe single step handler hook */
+static struct step_hook uprobes_step_hook = {
+	.fn = uprobe_single_step_handler,
+};
+
+static int __init arch_init_uprobes(void)
+{
+	register_break_hook(&uprobes_break_hook);
+	register_step_hook(&uprobes_step_hook);
+
+	return 0;
+}
+
+device_initcall(arch_init_uprobes);
diff --git a/arch/arm64/kernel/signal.c b/arch/arm64/kernel/signal.c
index a8eafdbc7cb8..0ff1208929c0 100644
--- a/arch/arm64/kernel/signal.c
+++ b/arch/arm64/kernel/signal.c
@@ -402,6 +402,9 @@ static void do_signal(struct pt_regs *regs)
 
 asmlinkage void do_notify_resume(struct pt_regs *regs,
 				 unsigned int thread_flags)
 {
+	if (thread_flags & _TIF_UPROBE)
+		uprobe_notify_resume(regs);
+
 	if (thread_flags & _TIF_SIGPENDING)
 		do_signal(regs);
@@ -412,5 +415,4 @@ asmlinkage void do_notify_resume(struct pt_regs *regs,
 
 	if (thread_flags & _TIF_FOREIGN_FPSTATE)
 		fpsimd_restore_current_state();
-
 }
diff --git a/arch/arm64/mm/flush.c b/arch/arm64/mm/flush.c
index 43a76b07eb32..179587614756 100644
--- a/arch/arm64/mm/flush.c
+++ b/arch/arm64/mm/flush.c
@@ -54,6 +54,12 @@ static void flush_ptrace_access(struct vm_area_struct *vma, struct page *page,
 		sync_icache_aliases(kaddr, len);
 }
 
+void flush_uprobe_xol_access(struct page *page, unsigned long uaddr,
+			     void *kaddr, unsigned long len)
+{
+	sync_icache_aliases(kaddr, len);
+}
+
 /*
  * Copy user data from/to a page which is mapped into a different processes
  * address space. Really, we want to allow our "user space" model to handle
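
For anyone wanting to exercise this support once applied, the sketch below is
one possible user-space target covering the "longjump test cases" mentioned in
the commit message. It is only an illustration and not part of the patch; the
file name (uprobe_target.c) and function name (probed_func) are hypothetical.

/* uprobe_target.c - hypothetical test target, not part of this patch */
#include <setjmp.h>
#include <stdio.h>

static jmp_buf env;

/* Leaf function on which a uprobe/uretprobe can be placed. */
static int __attribute__((noinline)) probed_func(int x)
{
	if (x < 0)
		longjmp(env, 1);	/* bypasses the normal return path */
	return x + 1;
}

int main(void)
{
	if (setjmp(env) == 0) {
		printf("normal return: %d\n", probed_func(41));
		probed_func(-1);	/* exercises the longjump path */
	} else {
		printf("returned via longjmp\n");
	}
	return 0;
}

A probe could then be placed with, e.g., "perf probe -x ./uprobe_target
probed_func" (and "probed_func%return" for a uretprobe) and hits recorded with
perf record, though the exact tooling used for the tests above is not stated
in this patch.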