From patchwork Fri Aug 17 10:27:24 2018
X-Patchwork-Submitter: Torsten Duwe
X-Patchwork-Id: 10568601
From: Torsten Duwe <duwe@lst.de>
Subject: [PATCH v2 1/3] arm64: implement ftrace with regs
Date: Fri, 17 Aug 2018 12:27:24 +0200 (CEST)
Message-Id: <20180817102724.022FB68EF4@newverein.lst.de>
In-Reply-To: <20180817102544.C628D68EF4@newverein.lst.de>
References: <20180817102544.C628D68EF4@newverein.lst.de>
To: Will Deacon, Catalin Marinas, Julien Thierry, Steven Rostedt,
    Josh Poimboeuf, Ingo Molnar, Ard Biesheuvel, Arnd Bergmann,
    AKASHI Takahiro
Cc: live-patching@vger.kernel.org, linux-kernel@vger.kernel.org,
    linux-arm-kernel@lists.infradead.org
"linux-arm-kernel" Errors-To: linux-arm-kernel-bounces+patchwork-linux-arm=patchwork.kernel.org@lists.infradead.org X-Virus-Scanned: ClamAV using ClamSMTP Check for compiler support of -fpatchable-function-entry and use it to intercept functions immediately on entry, saving the LR in x9. Disable ftracing in efi/libstub, because this triggers cross-section linker errors now (-pg is disabled already for those files). Add an ftrace_caller which can handle LR in x9, as well as an ftrace_regs_caller that additionally writes out a set of pt_regs for inspection. Signed-off-by: Torsten Duwe --- a/arch/arm64/Kconfig +++ b/arch/arm64/Kconfig @@ -110,6 +110,7 @@ config ARM64 select HAVE_DEBUG_KMEMLEAK select HAVE_DMA_CONTIGUOUS select HAVE_DYNAMIC_FTRACE + select HAVE_DYNAMIC_FTRACE_WITH_REGS select HAVE_EFFICIENT_UNALIGNED_ACCESS select HAVE_FTRACE_MCOUNT_RECORD select HAVE_FUNCTION_TRACER --- a/arch/arm64/Makefile +++ b/arch/arm64/Makefile @@ -78,6 +78,15 @@ ifeq ($(CONFIG_ARM64_MODULE_PLTS),y) KBUILD_LDFLAGS_MODULE += -T $(srctree)/arch/arm64/kernel/module.lds endif +ifdef CONFIG_DYNAMIC_FTRACE_WITH_REGS + CC_FLAGS_FTRACE := -fpatchable-function-entry=2 + KBUILD_CPPFLAGS += -DCC_USING_PATCHABLE_FUNCTION_ENTRY + ifeq ($(call cc-option,-fpatchable-function-entry=2),) + $(error Cannot use CONFIG_DYNAMIC_FTRACE_WITH_REGS: \ + -fpatchable-function-entry not supported by compiler) + endif +endif + # Default value head-y := arch/arm64/kernel/head.o --- a/arch/arm64/include/asm/ftrace.h +++ b/arch/arm64/include/asm/ftrace.h @@ -16,6 +16,13 @@ #define MCOUNT_ADDR ((unsigned long)_mcount) #define MCOUNT_INSN_SIZE AARCH64_INSN_SIZE +#ifdef CONFIG_DYNAMIC_FTRACE_WITH_REGS +#define ARCH_SUPPORTS_FTRACE_OPS 1 +#define REC_IP_BRANCH_OFFSET 4 +#else +#define REC_IP_BRANCH_OFFSET 0 +#endif + #ifndef __ASSEMBLY__ #include --- a/arch/arm64/kernel/Makefile +++ b/arch/arm64/kernel/Makefile @@ -7,9 +7,9 @@ CPPFLAGS_vmlinux.lds := -DTEXT_OFFSET=$( AFLAGS_head.o := -DTEXT_OFFSET=$(TEXT_OFFSET) CFLAGS_armv8_deprecated.o := -I$(src) -CFLAGS_REMOVE_ftrace.o = -pg -CFLAGS_REMOVE_insn.o = -pg -CFLAGS_REMOVE_return_address.o = -pg +CFLAGS_REMOVE_ftrace.o = -pg $(CC_FLAGS_FTRACE) +CFLAGS_REMOVE_insn.o = -pg $(CC_FLAGS_FTRACE) +CFLAGS_REMOVE_return_address.o = -pg $(CC_FLAGS_FTRACE) # Object file lists. 
 arm64-obj-y		:= debug-monitors.o entry.o irq.o fpsimd.o	\
--- a/drivers/firmware/efi/libstub/Makefile
+++ b/drivers/firmware/efi/libstub/Makefile
@@ -11,7 +11,8 @@ cflags-$(CONFIG_X86)		+= -m$(BITS) -D__K
 				   -fPIC -fno-strict-aliasing -mno-red-zone \
 				   -mno-mmx -mno-sse -fshort-wchar
 
-cflags-$(CONFIG_ARM64)		:= $(subst -pg,,$(KBUILD_CFLAGS)) -fpie
+cflags-$(CONFIG_ARM64)		:= $(filter-out -pg $(CC_FLAGS_FTRACE)\
+				   ,$(KBUILD_CFLAGS)) -fpie
 cflags-$(CONFIG_ARM)		:= $(subst -pg,,$(KBUILD_CFLAGS)) \
 				   -fno-builtin -fpic -mno-single-pic-base
--- a/arch/arm64/kernel/entry-ftrace.S
+++ b/arch/arm64/kernel/entry-ftrace.S
@@ -13,6 +13,8 @@
 #include <asm/assembler.h>
 #include <asm/ftrace.h>
 #include <asm/insn.h>
+#include <asm/asm-offsets.h>
+#include <asm/assembler.h>
 
 /*
  * Gcc with -pg will put the following code in the beginning of each function:
@@ -123,6 +125,7 @@ skip_ftrace_call:			// }
 ENDPROC(_mcount)
 
 #else /* CONFIG_DYNAMIC_FTRACE */
+#ifndef CC_USING_PATCHABLE_FUNCTION_ENTRY
 /*
  * _mcount() is used to build the kernel with -pg option, but all the branch
  * instructions to _mcount() are replaced to NOP initially at kernel start up,
@@ -162,6 +165,88 @@ ftrace_graph_call:			// ftrace_graph_cal
 
 	mcount_exit
 ENDPROC(ftrace_caller)
 
+#else /* CC_USING_PATCHABLE_FUNCTION_ENTRY */
+ENTRY(_mcount)
+	mov	x10, lr
+	mov	lr, x9
+	ret	x10
+ENDPROC(_mcount)
+
+ENTRY(ftrace_regs_caller)
+	stp	x29, x9, [sp, #-16]!
+	sub	sp, sp, #S_FRAME_SIZE
+
+	stp	x10, x11, [sp, #80]
+	stp	x12, x13, [sp, #96]
+	stp	x14, x15, [sp, #112]
+	stp	x16, x17, [sp, #128]
+	stp	x18, x19, [sp, #144]
+	stp	x20, x21, [sp, #160]
+	stp	x22, x23, [sp, #176]
+	stp	x24, x25, [sp, #192]
+	stp	x26, x27, [sp, #208]
+
+	b	ftrace_common
+ENDPROC(ftrace_regs_caller)
+
+ENTRY(ftrace_caller)
+	stp	x29, x9, [sp, #-16]!
+	sub	sp, sp, #S_FRAME_SIZE
+
+ftrace_common:
+	stp	x28, x29, [sp, #224]
+
+	/* save function arguments */
+	stp	x0, x1, [sp]
+	stp	x2, x3, [sp, #16]
+	stp	x4, x5, [sp, #32]
+	stp	x6, x7, [sp, #48]
+	stp	x8, x9, [sp, #64]
+
+	/* The link register at callee entry */
+	str	x9, [sp, #S_LR]
+	/* The program counter just after the ftrace call site */
+	str	lr, [sp, #S_PC]
+	/* The stack pointer as it was on ftrace_caller entry... */
+	add	x29, sp, #S_FRAME_SIZE+16	/* ...is also our new FP */
+	str	x29, [sp, #S_SP]
+
+	adrp	x0, function_trace_op
+	ldr	x2, [x0, #:lo12:function_trace_op]
+	mov	x1, x9		/* saved LR == parent IP */
+	sub	x0, lr, #8	/* function entry == IP */
+	mov	x3, sp		/* pt_regs are @sp */
+
+	.global ftrace_call
+ftrace_call:
+
+	bl	ftrace_stub
+
+#ifdef CONFIG_FUNCTION_GRAPH_TRACER
+	.global ftrace_graph_call
+ftrace_graph_call:			// ftrace_graph_caller();
+	nop				// If enabled, this will be replaced
+					// "b ftrace_graph_caller"
+#endif
+
+ftrace_common_return:
+	ldp	x0, x1, [sp]
+	ldp	x2, x3, [sp, #16]
+	ldp	x4, x5, [sp, #32]
+	ldp	x6, x7, [sp, #48]
+	ldp	x8, x9, [sp, #64]
+
+	ldp	x28, x29, [sp, #224]
+
+	ldr	x9, [sp, #S_PC]
+	ldr	lr, [sp, #S_LR]
+	add	sp, sp, #S_FRAME_SIZE+16
+
+	ret	x9
+
+ENDPROC(ftrace_caller)
+
+#endif /* CC_USING_PATCHABLE_FUNCTION_ENTRY */
 #endif /* CONFIG_DYNAMIC_FTRACE */
 
 ENTRY(ftrace_stub)
@@ -197,12 +284,20 @@ ENDPROC(ftrace_stub)
  * and run return_to_handler() later on its exit.
  */
 ENTRY(ftrace_graph_caller)
+#ifndef CC_USING_PATCHABLE_FUNCTION_ENTRY
 	mcount_get_lr_addr	  x0	//     pointer to function's saved lr
 	mcount_get_pc		  x1	//     function's pc
 	mcount_get_parent_fp	  x2	//     parent's fp
 	bl	prepare_ftrace_return	// prepare_ftrace_return(&lr, pc, fp)
 
 	mcount_exit
+#else
+	add	x0, sp, #S_LR	/* address of (LR pointing into caller) */
+	ldr	x1, [sp, #S_PC]
+	ldr	x2, [sp, #S_X28 + 8]	/* caller's frame pointer */
+	bl	prepare_ftrace_return
+	b	ftrace_common_return
+#endif
 ENDPROC(ftrace_graph_caller)
 
 /*
--- a/arch/arm64/kernel/ftrace.c
+++ b/arch/arm64/kernel/ftrace.c
@@ -70,7 +70,8 @@ int ftrace_update_ftrace_func(ftrace_fun
  */
 int ftrace_make_call(struct dyn_ftrace *rec, unsigned long addr)
 {
-	unsigned long pc = rec->ip;
+	unsigned long pc = rec->ip+REC_IP_BRANCH_OFFSET;
+	int ret;
 	u32 old, new;
 	long offset = (long)pc - (long)addr;
 
@@ -128,22 +129,54 @@ int ftrace_make_call(struct dyn_ftrace *
 	}
 
 	old = aarch64_insn_gen_nop();
+#ifdef CONFIG_DYNAMIC_FTRACE_WITH_REGS
+#define INSN_MOV_X9_X30 aarch64_insn_gen_add_sub_imm(AARCH64_INSN_REG_9, \
+			AARCH64_INSN_REG_30, 0, \
+			AARCH64_INSN_VARIANT_64BIT, AARCH64_INSN_ADSB_ADD)
+	new = INSN_MOV_X9_X30;
+	ret = ftrace_modify_code(pc-REC_IP_BRANCH_OFFSET, old, new, true);
+	if (ret)
+		return ret;
+	smp_wmb();	/* ensure LR saver is in place before ftrace call */
+#endif
 	new = aarch64_insn_gen_branch_imm(pc, addr, AARCH64_INSN_BRANCH_LINK);
 
 	return ftrace_modify_code(pc, old, new, true);
 }
 
+int ftrace_modify_call(struct dyn_ftrace *rec, unsigned long old_addr,
+			unsigned long addr)
+{
+	unsigned long pc = rec->ip+REC_IP_BRANCH_OFFSET;
+	u32 old, new;
+
+	old = aarch64_insn_gen_branch_imm(pc, old_addr, true);
+	new = aarch64_insn_gen_branch_imm(pc, addr, true);
+
+	return ftrace_modify_code(pc, old, new, true);
+}
+
+
 /*
  * Turn off the call to ftrace_caller() in instrumented function
  */
 int ftrace_make_nop(struct module *mod, struct dyn_ftrace *rec,
 		    unsigned long addr)
 {
-	unsigned long pc = rec->ip;
+	unsigned long pc = rec->ip+REC_IP_BRANCH_OFFSET;
 	bool validate = true;
+	int ret;
 	u32 old = 0, new;
 	long offset = (long)pc - (long)addr;
 
+#ifdef CC_USING_PATCHABLE_FUNCTION_ENTRY
+	/* -fpatchable-function-entry= does not generate a profiling call
+	 *  initially; the NOPs are already there.
+	 */
+	if (addr == MCOUNT_ADDR)
+		return 0;
+#endif
+
 	if (offset < -SZ_128M || offset >= SZ_128M) {
 #ifdef CONFIG_ARM64_MODULE_PLTS
 		u32 replaced;
@@ -188,7 +221,15 @@ int ftrace_make_nop(struct module *mod,
 
 	new = aarch64_insn_gen_nop();
 
-	return ftrace_modify_code(pc, old, new, validate);
+	ret = ftrace_modify_code(pc, old, new, validate);
+	if (ret)
+		return ret;
+#ifdef CONFIG_DYNAMIC_FTRACE_WITH_REGS
+	smp_wmb();	/* ftrace call must not remain without LR saver. */
+	old = INSN_MOV_X9_X30;
+	ret = ftrace_modify_code(pc-REC_IP_BRANCH_OFFSET, old, new, true);
+#endif
+	return ret;
 }
 
 void arch_ftrace_update_code(int command)
--- a/include/asm-generic/vmlinux.lds.h
+++ b/include/asm-generic/vmlinux.lds.h
@@ -115,6 +115,7 @@
 #define MCOUNT_REC()	. = ALIGN(8);				\
 			__start_mcount_loc = .;			\
 			KEEP(*(__mcount_loc))			\
+			KEEP(*(__patchable_function_entries))	\
 			__stop_mcount_loc = .;
 #else
 #define MCOUNT_REC()
--- a/include/linux/compiler_types.h
+++ b/include/linux/compiler_types.h
@@ -61,8 +61,12 @@ extern void __chk_io_ptr(const volatile
 #if defined(CC_USING_HOTPATCH) && !defined(__CHECKER__)
 #define notrace __attribute__((hotpatch(0,0)))
 #else
+#ifdef CC_USING_PATCHABLE_FUNCTION_ENTRY
+#define notrace __attribute__((patchable_function_entry(0)))
+#else
 #define notrace __attribute__((no_instrument_function))
 #endif
+#endif
 
 /* Intel compiler defines __GNUC__. So we will overwrite implementations
  * coming from above header files here
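
A quick illustration of the mechanism this patch relies on may help
review. The following is a sketch, not part of the series: the function
name "demo" is invented, and the disassembly comments show what a
-fpatchable-function-entry-capable GCC (8+) typically emits for arm64
at -O2, and how ftrace_make_call() above rewrites it.

	/* Illustration only -- not part of the series.
	 * Build on arm64 with:
	 *   gcc -O2 -fpatchable-function-entry=2 -S demo.c
	 */

	/* Per-function equivalent of the command-line flag used in
	 * arch/arm64/Makefile above: */
	__attribute__((patchable_function_entry(2)))
	int demo(int x)
	{
		return x + 1;
	}

	/* The compiler puts two NOPs at the top of demo() and records
	 * their location in the __patchable_function_entries section,
	 * which the vmlinux.lds.h hunk above merges into __mcount_loc:
	 *
	 *   demo:   nop                  // rec->ip
	 *           nop                  // rec->ip + REC_IP_BRANCH_OFFSET (4)
	 *           add  w0, w0, #1
	 *           ret
	 *
	 * ftrace_make_call() patches the LR saver first, then the branch:
	 *
	 *   demo:   mov  x9, x30         // save the caller's LR in x9
	 *           bl   ftrace_caller   // LR now points back into demo()
	 *           add  w0, w0, #1
	 *           ret
	 *
	 * This is why ftrace_caller/ftrace_regs_caller expect the traced
	 * function's LR in x9 and the address just past the bl in LR
	 * (hence "sub x0, lr, #8" to recover the function entry).
	 */
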
From patchwork Fri Aug 17 10:27:28 2018
X-Patchwork-Submitter: Torsten Duwe
X-Patchwork-Id: 10568599
From: Torsten Duwe <duwe@lst.de>
Subject: [PATCH v2 2/3] arm64: implement live patching
Date: Fri, 17 Aug 2018 12:27:28 +0200 (CEST)
Message-Id: <20180817102728.4E10C68EF5@newverein.lst.de>
In-Reply-To: <20180817102544.C628D68EF4@newverein.lst.de>
References: <20180817102544.C628D68EF4@newverein.lst.de>
To: Will Deacon, Catalin Marinas, Julien Thierry, Steven Rostedt,
    Josh Poimboeuf, Ingo Molnar, Ard Biesheuvel, Arnd Bergmann,
    AKASHI Takahiro
Cc: live-patching@vger.kernel.org, linux-kernel@vger.kernel.org,
    linux-arm-kernel@lists.infradead.org

Based on ftrace with regs, do the usual thing: select HAVE_LIVEPATCH
and provide the arch-specific klp_arch_set_pc(). Also allocate a task
flag (TIF_PATCH_PENDING) for whatever consistency handling will be
used. Watch out for interactions with the graph tracer.

Signed-off-by: Torsten Duwe

--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -119,6 +119,7 @@ config ARM64
 	select HAVE_GENERIC_DMA_COHERENT
 	select HAVE_HW_BREAKPOINT if PERF_EVENTS
 	select HAVE_IRQ_TIME_ACCOUNTING
+	select HAVE_LIVEPATCH
 	select HAVE_MEMBLOCK
 	select HAVE_MEMBLOCK_NODE_MAP if NUMA
 	select HAVE_NMI
@@ -1349,4 +1350,6 @@ if CRYPTO
 source "arch/arm64/crypto/Kconfig"
 endif
 
+source "kernel/livepatch/Kconfig"
+
 source "lib/Kconfig"
--- a/arch/arm64/include/asm/thread_info.h
+++ b/arch/arm64/include/asm/thread_info.h
@@ -76,6 +76,7 @@ void arch_release_task_struct(struct tas
 #define TIF_FOREIGN_FPSTATE	3	/* CPU's FP state is not current's */
 #define TIF_UPROBE		4	/* uprobe breakpoint or singlestep */
 #define TIF_FSCHECK		5	/* Check FS is USER_DS on return */
+#define TIF_PATCH_PENDING	6
 #define TIF_NOHZ		7
 #define TIF_SYSCALL_TRACE	8
 #define TIF_SYSCALL_AUDIT	9
@@ -94,6 +95,7 @@ void arch_release_task_struct(struct tas
 #define _TIF_NEED_RESCHED	(1 << TIF_NEED_RESCHED)
 #define _TIF_NOTIFY_RESUME	(1 << TIF_NOTIFY_RESUME)
 #define _TIF_FOREIGN_FPSTATE	(1 << TIF_FOREIGN_FPSTATE)
+#define _TIF_PATCH_PENDING	(1 << TIF_PATCH_PENDING)
 #define _TIF_NOHZ		(1 << TIF_NOHZ)
 #define _TIF_SYSCALL_TRACE	(1 << TIF_SYSCALL_TRACE)
 #define _TIF_SYSCALL_AUDIT	(1 << TIF_SYSCALL_AUDIT)
@@ -106,7 +108,8 @@ void arch_release_task_struct(struct tas
 
 #define _TIF_WORK_MASK		(_TIF_NEED_RESCHED | _TIF_SIGPENDING | \
 				 _TIF_NOTIFY_RESUME | _TIF_FOREIGN_FPSTATE | \
-				 _TIF_UPROBE | _TIF_FSCHECK)
+				 _TIF_UPROBE | _TIF_FSCHECK | \
+				 _TIF_PATCH_PENDING)
 
 #define _TIF_SYSCALL_WORK	(_TIF_SYSCALL_TRACE | _TIF_SYSCALL_AUDIT | \
 				 _TIF_SYSCALL_TRACEPOINT | _TIF_SECCOMP | \
--- /dev/null
+++ b/arch/arm64/include/asm/livepatch.h
@@ -0,0 +1,38 @@
+/* SPDX-License-Identifier: GPL-2.0
+ *
+ * livepatch.h - arm64-specific Kernel Live Patching Core
+ *
+ * Copyright (C) 2016,2018 SUSE
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License
+ * as published by the Free Software Foundation; either version 2
+ * of the License, or (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, see <http://www.gnu.org/licenses/>.
+ */
+#ifndef _ASM_ARM64_LIVEPATCH_H
+#define _ASM_ARM64_LIVEPATCH_H
+
+#include <linux/module.h>
+#include <linux/ftrace.h>
+
+#ifdef CONFIG_LIVEPATCH
+static inline int klp_check_compiler_support(void)
+{
+	return 0;
+}
+
+static inline void klp_arch_set_pc(struct pt_regs *regs, unsigned long ip)
+{
+	regs->pc = ip;
+}
+#endif /* CONFIG_LIVEPATCH */
+
+#endif /* _ASM_ARM64_LIVEPATCH_H */
--- a/arch/arm64/kernel/entry-ftrace.S
+++ b/arch/arm64/kernel/entry-ftrace.S
@@ -209,6 +209,9 @@ ftrace_common:
 	str	x9, [sp, #S_LR]
 	/* The program counter just after the ftrace call site */
 	str	lr, [sp, #S_PC]
+#if defined(CONFIG_LIVEPATCH) && defined(CONFIG_FUNCTION_GRAPH_TRACER)
+	mov	x28, lr		/* remember old return address */
+#endif
 	/* The stack pointer as it was on ftrace_caller entry... */
 	add	x29, sp, #S_FRAME_SIZE+16	/* ...is also our new FP */
 	str	x29, [sp, #S_SP]
@@ -224,6 +227,16 @@ ftrace_call:
 
 	bl	ftrace_stub
 
+#if defined(CONFIG_LIVEPATCH) && defined(CONFIG_FUNCTION_GRAPH_TRACER)
+	/* Is the trace function a live patcher, and has it messed with
+	 * the return address?
+	 */
+	ldr	x9, [sp, #S_PC]
+	cmp	x9, x28		/* compare with the value we remembered */
+	/* to not call the graph tracer's "call" mechanism twice! */
+	b.ne	ftrace_common_return
+#endif
+
 #ifdef CONFIG_FUNCTION_GRAPH_TRACER
 	.global ftrace_graph_call
 ftrace_graph_call:			// ftrace_graph_caller();
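
For the consumer side, here is a minimal livepatch module sketch,
modeled on samples/livepatch/livepatch-sample.c of this kernel
generation. The patched function cmdline_proc_show and the
register-then-enable sequence are taken from that sample, and the
registration API differs in other kernel versions, so treat this as an
illustration rather than a reference. With the hunks above, livepatch's
ftrace handler runs via ftrace_regs_caller and redirects execution by
changing regs->pc through klp_arch_set_pc(); the x28 comparison added
to ftrace_common then notices the modified S_PC slot and skips the
graph-tracer stub.

	#include <linux/module.h>
	#include <linux/kernel.h>
	#include <linux/livepatch.h>
	#include <linux/seq_file.h>

	/* Replacement body for a function compiled into vmlinux. */
	static int livepatch_cmdline_proc_show(struct seq_file *m, void *v)
	{
		seq_printf(m, "%s\n", "this has been live patched");
		return 0;
	}

	static struct klp_func funcs[] = {
		{
			.old_name = "cmdline_proc_show",
			.new_func = livepatch_cmdline_proc_show,
		}, { }
	};

	static struct klp_object objs[] = {
		{
			/* name == NULL patches vmlinux itself */
			.funcs = funcs,
		}, { }
	};

	static struct klp_patch patch = {
		.mod = THIS_MODULE,
		.objs = objs,
	};

	static int livepatch_init(void)
	{
		int ret;

		ret = klp_register_patch(&patch);
		if (ret)
			return ret;
		ret = klp_enable_patch(&patch);
		if (ret) {
			WARN_ON(klp_unregister_patch(&patch));
			return ret;
		}
		return 0;
	}

	static void livepatch_exit(void)
	{
		WARN_ON(klp_unregister_patch(&patch));
	}

	module_init(livepatch_init);
	module_exit(livepatch_exit);
	MODULE_LICENSE("GPL");
	MODULE_INFO(livepatch, "Y");
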
From patchwork Fri Aug 17 10:27:33 2018
X-Patchwork-Submitter: Torsten Duwe
X-Patchwork-Id: 10568597
From: Torsten Duwe <duwe@lst.de>
Subject: [PATCH v2 3/3] arm64: reliable stacktraces
Date: Fri, 17 Aug 2018 12:27:33 +0200 (CEST)
Message-Id: <20180817102733.B777568EF6@newverein.lst.de>
In-Reply-To: <20180817102544.C628D68EF4@newverein.lst.de>
References: <20180817102544.C628D68EF4@newverein.lst.de>
To: Will Deacon, Catalin Marinas, Julien Thierry, Steven Rostedt,
    Josh Poimboeuf, Ingo Molnar, Ard Biesheuvel, Arnd Bergmann,
    AKASHI Takahiro
Cc: live-patching@vger.kernel.org, linux-kernel@vger.kernel.org,
    linux-arm-kernel@lists.infradead.org

This is more an RFC in the original sense: is this basically the
correct approach? (I had to tweak the save_stack API a bit.)

In particular, the code does not yet detect interrupt and exception
frames, and does not yet check whether the code address is valid.
The latter check would also have to be omitted for the latest frame
on other tasks' stacks. This would require some more tweaking.

unwind_frame() now reports whether we had to stop normally or due to
an error condition; walk_stackframe() will pass that info on.
__save_stack_trace() is used for a start to check the validity of a
frame; maybe save_stack_trace_tsk_reliable() will need its own
callback.

The question is whether this change, once complete, is sufficient
(as on powerpc) or whether a port of objtool is needed, as on x86.
I can dig into this myself and draw conclusions, but I'd prefer to
have some input from ARM people here...
Signed-off-by: Torsten Duwe

--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -127,8 +127,9 @@ config ARM64
 	select HAVE_PERF_EVENTS
 	select HAVE_PERF_REGS
 	select HAVE_PERF_USER_STACK_DUMP
-	select HAVE_REGS_AND_STACK_ACCESS_API
 	select HAVE_RCU_TABLE_FREE
+	select HAVE_REGS_AND_STACK_ACCESS_API
+	select HAVE_RELIABLE_STACKTRACE
 	select HAVE_STACKPROTECTOR
 	select HAVE_SYSCALL_TRACEPOINTS
 	select HAVE_KPROBES
--- a/arch/arm64/include/asm/stacktrace.h
+++ b/arch/arm64/include/asm/stacktrace.h
@@ -33,7 +33,7 @@ struct stackframe {
 };
 
 extern int unwind_frame(struct task_struct *tsk, struct stackframe *frame);
-extern void walk_stackframe(struct task_struct *tsk, struct stackframe *frame,
+extern int walk_stackframe(struct task_struct *tsk, struct stackframe *frame,
 			    int (*fn)(struct stackframe *, void *), void *data);
 extern void dump_backtrace(struct pt_regs *regs, struct task_struct *tsk);
diff --git a/arch/arm64/kernel/stacktrace.c b/arch/arm64/kernel/stacktrace.c
index d5718a060672..fe0dd4745ff3 100644
--- a/arch/arm64/kernel/stacktrace.c
+++ b/arch/arm64/kernel/stacktrace.c
@@ -81,23 +81,27 @@ int notrace unwind_frame(struct task_struct *tsk, struct stackframe *frame)
 	 * both are NULL.
 	 */
 	if (!frame->fp && !frame->pc)
-		return -EINVAL;
+		return 1;
 
 	return 0;
 }
 
-void notrace walk_stackframe(struct task_struct *tsk, struct stackframe *frame,
+int notrace walk_stackframe(struct task_struct *tsk, struct stackframe *frame,
 		     int (*fn)(struct stackframe *, void *), void *data)
 {
 	while (1) {
 		int ret;
 
-		if (fn(frame, data))
-			break;
+		ret = fn(frame, data);
+		if (ret)
+			return ret;
 		ret = unwind_frame(tsk, frame);
 		if (ret < 0)
+			return ret;
+		if (ret > 0)
 			break;
 	}
+	return 0;
 }
 
 #ifdef CONFIG_STACKTRACE
@@ -145,14 +149,15 @@ void save_stack_trace_regs(struct pt_regs *regs, struct stack_trace *trace)
 		trace->entries[trace->nr_entries++] = ULONG_MAX;
 }
 
-static noinline void __save_stack_trace(struct task_struct *tsk,
+static noinline int __save_stack_trace(struct task_struct *tsk,
 	struct stack_trace *trace, unsigned int nosched)
 {
 	struct stack_trace_data data;
 	struct stackframe frame;
+	int ret;
 
 	if (!try_get_task_stack(tsk))
-		return;
+		return -EBUSY;
 
 	data.trace = trace;
 	data.skip = trace->skip;
@@ -171,11 +176,12 @@ static noinline void __save_stack_trace(struct task_struct *tsk,
 	frame.graph = tsk->curr_ret_stack;
 #endif
 
-	walk_stackframe(tsk, &frame, save_trace, &data);
+	ret = walk_stackframe(tsk, &frame, save_trace, &data);
 	if (trace->nr_entries < trace->max_entries)
 		trace->entries[trace->nr_entries++] = ULONG_MAX;
 
 	put_task_stack(tsk);
+	return ret;
 }
 
 EXPORT_SYMBOL_GPL(save_stack_trace_tsk);
@@ -190,4 +196,12 @@ void save_stack_trace(struct stack_trace *trace)
 {
 	__save_stack_trace(current, trace, 0);
 }
 EXPORT_SYMBOL_GPL(save_stack_trace);
 
+
+int save_stack_trace_tsk_reliable(struct task_struct *tsk,
+				  struct stack_trace *trace)
+{
+	return __save_stack_trace(tsk, trace, 1);
+}
+EXPORT_SYMBOL_GPL(save_stack_trace_tsk_reliable);
+
 #endif
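
To make the intended use of the new hook concrete, here is a sketch of
a consistency-model style caller, loosely following klp_check_stack()
in kernel/livepatch/transition.c; the function name check_task_stack
and the MAX_STACK_ENTRIES bound are illustrative, not part of this
patch.

	#include <linux/sched.h>
	#include <linux/stacktrace.h>

	#define MAX_STACK_ENTRIES 100	/* illustrative bound, as in transition.c */

	/*
	 * Can @task be switched over to the patched state right now?
	 * A negative return from save_stack_trace_tsk_reliable() means
	 * the unwinder could not vouch for the stack (e.g.
	 * __save_stack_trace() returned -EBUSY, or walk_stackframe()
	 * stopped on an error), so the caller must retry the
	 * transition later.
	 */
	static int check_task_stack(struct task_struct *task)
	{
		static unsigned long entries[MAX_STACK_ENTRIES];
		struct stack_trace trace = {
			.nr_entries	= 0,
			.max_entries	= MAX_STACK_ENTRIES,
			.entries	= entries,
			.skip		= 0,
		};
		int ret;

		ret = save_stack_trace_tsk_reliable(task, &trace);
		if (ret < 0)
			return ret;	/* unreliable stack: defer this task */

		/* ... reject the transition if any trace.entries[] address
		 * falls within a function that is being patched ... */
		return 0;
	}
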