From patchwork Wed Aug 18 07:33:35 2021
X-Patchwork-Submitter: "liuqi (BA)"
X-Patchwork-Id: 12443981
From: Qi Liu <liuqi115@huawei.com>
Subject: [PATCH v4 1/2] Make save_all_base_regs and restore_all_base_regs as common macro
Date: Wed, 18 Aug 2021 15:33:35 +0800
Message-ID: <20210818073336.59678-2-liuqi115@huawei.com>
In-Reply-To: <20210818073336.59678-1-liuqi115@huawei.com>
References: <20210818073336.59678-1-liuqi115@huawei.com>

Move save_all_base_regs and restore_all_base_regs to <asm/assembler.h>,
as these two macros can be reused in optprobe.
---
 arch/arm64/include/asm/assembler.h            | 52 +++++++++++++++++++
 arch/arm64/kernel/probes/kprobes_trampoline.S | 52 -------------------
 2 files changed, 52 insertions(+), 52 deletions(-)

diff --git a/arch/arm64/include/asm/assembler.h b/arch/arm64/include/asm/assembler.h
index 89faca0e740d..cd912810fc80 100644
--- a/arch/arm64/include/asm/assembler.h
+++ b/arch/arm64/include/asm/assembler.h
@@ -515,6 +515,58 @@ alternative_endif
 	.pushsection "_kprobe_blacklist", "aw";	\
 	.quad	x;				\
 	.popsection;
+
+	.macro	save_all_base_regs
+	stp x0, x1, [sp, #S_X0]
+	stp x2, x3, [sp, #S_X2]
+	stp x4, x5, [sp, #S_X4]
+	stp x6, x7, [sp, #S_X6]
+	stp x8, x9, [sp, #S_X8]
+	stp x10, x11, [sp, #S_X10]
+	stp x12, x13, [sp, #S_X12]
+	stp x14, x15, [sp, #S_X14]
+	stp x16, x17, [sp, #S_X16]
+	stp x18, x19, [sp, #S_X18]
+	stp x20, x21, [sp, #S_X20]
+	stp x22, x23, [sp, #S_X22]
+	stp x24, x25, [sp, #S_X24]
+	stp x26, x27, [sp, #S_X26]
+	stp x28, x29, [sp, #S_X28]
+	add x0, sp, #PT_REGS_SIZE
+	stp lr, x0, [sp, #S_LR]
+	/*
+	 * Construct a useful saved PSTATE
+	 */
+	mrs x0, nzcv
+	mrs x1, daif
+	orr x0, x0, x1
+	mrs x1, CurrentEL
+	orr x0, x0, x1
+	mrs x1, SPSel
+	orr x0, x0, x1
+	stp xzr, x0, [sp, #S_PC]
+	.endm
+
+	.macro	restore_all_base_regs
+	ldr x0, [sp, #S_PSTATE]
+	and x0, x0, #(PSR_N_BIT | PSR_Z_BIT | PSR_C_BIT | PSR_V_BIT)
+	msr nzcv, x0
+	ldp x0, x1, [sp, #S_X0]
+	ldp x2, x3, [sp, #S_X2]
+	ldp x4, x5, [sp, #S_X4]
+	ldp x6, x7, [sp, #S_X6]
+	ldp x8, x9, [sp, #S_X8]
+	ldp x10, x11, [sp, #S_X10]
+	ldp x12, x13, [sp, #S_X12]
+	ldp x14, x15, [sp, #S_X14]
+	ldp x16, x17, [sp, #S_X16]
+	ldp x18, x19, [sp, #S_X18]
+	ldp x20, x21, [sp, #S_X20]
+	ldp x22, x23, [sp, #S_X22]
+	ldp x24, x25, [sp, #S_X24]
+	ldp x26, x27, [sp, #S_X26]
+	ldp x28, x29, [sp, #S_X28]
+	.endm
 #else
 #define NOKPROBE(x)
 #endif
diff --git a/arch/arm64/kernel/probes/kprobes_trampoline.S b/arch/arm64/kernel/probes/kprobes_trampoline.S
index 288a84e253cc..2463d5d0e004 100644
--- a/arch/arm64/kernel/probes/kprobes_trampoline.S
+++ b/arch/arm64/kernel/probes/kprobes_trampoline.S
@@ -9,58 +9,6 @@
 
 	.text
 
-	.macro	save_all_base_regs
-	stp x0, x1, [sp, #S_X0]
-	stp x2, x3, [sp, #S_X2]
-	stp x4, x5, [sp, #S_X4]
-	stp x6, x7, [sp, #S_X6]
-	stp x8, x9, [sp, #S_X8]
-	stp x10, x11, [sp, #S_X10]
-	stp x12, x13, [sp, #S_X12]
-	stp x14, x15, [sp, #S_X14]
-	stp x16, x17, [sp, #S_X16]
-	stp x18, x19, [sp, #S_X18]
-	stp x20, x21, [sp, #S_X20]
-	stp x22, x23, [sp, #S_X22]
-	stp x24, x25, [sp, #S_X24]
-	stp x26, x27, [sp, #S_X26]
-	stp x28, x29, [sp, #S_X28]
-	add x0, sp, #PT_REGS_SIZE
-	stp lr, x0, [sp, #S_LR]
-	/*
-	 * Construct a useful saved PSTATE
-	 */
-	mrs x0, nzcv
-	mrs x1, daif
-	orr x0, x0, x1
-	mrs x1, CurrentEL
-	orr x0, x0, x1
-	mrs x1, SPSel
-	orr x0, x0, x1
-	stp xzr, x0, [sp, #S_PC]
-	.endm
-
-	.macro	restore_all_base_regs
-	ldr x0, [sp, #S_PSTATE]
-	and x0, x0, #(PSR_N_BIT | PSR_Z_BIT | PSR_C_BIT | PSR_V_BIT)
-	msr nzcv, x0
-	ldp x0, x1, [sp, #S_X0]
-	ldp x2, x3, [sp, #S_X2]
-	ldp x4, x5, [sp, #S_X4]
-	ldp x6, x7, [sp, #S_X6]
-	ldp x8, x9, [sp, #S_X8]
-	ldp x10, x11, [sp, #S_X10]
-	ldp x12, x13, [sp, #S_X12]
-	ldp x14, x15, [sp, #S_X14]
-	ldp x16, x17, [sp, #S_X16]
-	ldp x18, x19, [sp, #S_X18]
-	ldp x20, x21, [sp, #S_X20]
-	ldp x22, x23, [sp, #S_X22]
-	ldp x24, x25, [sp, #S_X24]
-	ldp x26, x27, [sp, #S_X26]
-	ldp x28, x29, [sp, #S_X28]
-	.endm
-
 SYM_CODE_START(kretprobe_trampoline)
 	sub sp, sp, #PT_REGS_SIZE
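
For context on the S_* offsets and PT_REGS_SIZE used by these macros (an
editorial illustration, not part of the patch): the constants are byte
offsets into struct pt_regs, generated at build time in the style of
arch/arm64/kernel/asm-offsets.c, so save_all_base_regs effectively spills
a struct pt_regs onto the stack:

  #include <linux/kbuild.h>	/* DEFINE() */
  #include <asm/ptrace.h>	/* struct pt_regs */

  int main(void)
  {
  	DEFINE(S_X0,		offsetof(struct pt_regs, regs[0]));
  	/* ... S_X2 through S_X28 follow the same pattern ... */
  	DEFINE(S_LR,		offsetof(struct pt_regs, regs[30]));
  	DEFINE(S_PC,		offsetof(struct pt_regs, pc));
  	DEFINE(S_PSTATE,	offsetof(struct pt_regs, pstate));
  	DEFINE(PT_REGS_SIZE,	sizeof(struct pt_regs));
  	return 0;
  }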
From patchwork Wed Aug 18 07:33:36 2021
X-Patchwork-Submitter: "liuqi (BA)"
X-Patchwork-Id: 12443985
From: Qi Liu <liuqi115@huawei.com>
Subject: [PATCH v4 2/2] arm64: kprobe: Enable OPTPROBE for arm64
Date: Wed, 18 Aug 2021 15:33:36 +0800
Message-ID: <20210818073336.59678-3-liuqi115@huawei.com>
In-Reply-To: <20210818073336.59678-1-liuqi115@huawei.com>
References: <20210818073336.59678-1-liuqi115@huawei.com>

This patch introduces optprobe support for arm64. With optprobe, the
probed instruction is replaced by a branch instruction to a detour
buffer. The detour buffer contains trampoline code and a call to
optimized_callback(), which in turn calls opt_pre_handler() to run the
kprobe handler.

The performance of optprobe on the Hip08 platform was tested with the
kprobe example module [1], measuring the latency of a kernel function.
Here are the results:

[1] https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/samples/kprobes/kretprobe_example.c

kprobe before optimization:
[280709.846380] do_empty returned 0 and took 1530 ns to execute
[280709.852057] do_empty returned 0 and took 550 ns to execute
[280709.857631] do_empty returned 0 and took 440 ns to execute
[280709.863215] do_empty returned 0 and took 380 ns to execute
[280709.868787] do_empty returned 0 and took 360 ns to execute
[280709.874362] do_empty returned 0 and took 340 ns to execute
[280709.879936] do_empty returned 0 and took 320 ns to execute
[280709.885505] do_empty returned 0 and took 300 ns to execute
[280709.891075] do_empty returned 0 and took 280 ns to execute
[280709.896646] do_empty returned 0 and took 290 ns to execute
[280709.902220] do_empty returned 0 and took 290 ns to execute
[280709.907807] do_empty returned 0 and took 290 ns to execute

optprobe:
[ 2965.964572] do_empty returned 0 and took 90 ns to execute
[ 2965.969952] do_empty returned 0 and took 80 ns to execute
[ 2965.975332] do_empty returned 0 and took 70 ns to execute
[ 2965.980714] do_empty returned 0 and took 60 ns to execute
[ 2965.986128] do_empty returned 0 and took 80 ns to execute
[ 2965.991507] do_empty returned 0 and took 70 ns to execute
[ 2965.996884] do_empty returned 0 and took 70 ns to execute
[ 2966.002262] do_empty returned 0 and took 80 ns to execute
[ 2966.007642] do_empty returned 0 and took 70 ns to execute
[ 2966.013020] do_empty returned 0 and took 70 ns to execute
[ 2966.018400] do_empty returned 0 and took 70 ns to execute
[ 2966.023779] do_empty returned 0 and took 70 ns to execute
[ 2966.029158] do_empty returned 0 and took 70 ns to execute
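
The numbers above come from the kretprobe sample module linked as [1]. A
minimal sketch of such a timing module, modeled on kretprobe_example.c
and assuming the probed function is named do_empty as in the log (an
illustration, not part of the patch):

  #include <linux/kernel.h>
  #include <linux/module.h>
  #include <linux/kprobes.h>
  #include <linux/ktime.h>

  struct my_data {
  	ktime_t entry_stamp;
  };

  /* Stamp function entry into the per-instance data area. */
  static int entry_handler(struct kretprobe_instance *ri, struct pt_regs *regs)
  {
  	struct my_data *data = (struct my_data *)ri->data;

  	data->entry_stamp = ktime_get();
  	return 0;
  }

  /* On return, report the return value and the elapsed time. */
  static int ret_handler(struct kretprobe_instance *ri, struct pt_regs *regs)
  {
  	struct my_data *data = (struct my_data *)ri->data;
  	s64 delta = ktime_to_ns(ktime_sub(ktime_get(), data->entry_stamp));

  	pr_info("do_empty returned %lu and took %lld ns to execute\n",
  		regs_return_value(regs), delta);
  	return 0;
  }

  static struct kretprobe my_kretprobe = {
  	.handler	= ret_handler,
  	.entry_handler	= entry_handler,
  	.data_size	= sizeof(struct my_data),
  	.maxactive	= 20,
  };

  static int __init timing_init(void)
  {
  	my_kretprobe.kp.symbol_name = "do_empty";
  	return register_kretprobe(&my_kretprobe);
  }

  static void __exit timing_exit(void)
  {
  	unregister_kretprobe(&my_kretprobe);
  }

  module_init(timing_init);
  module_exit(timing_exit);
  MODULE_LICENSE("GPL");

With this series applied, the kprobe backing the kretprobe's entry can be
optimized into a branch, which is presumably what the latency drop above
reflects.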

Signed-off-by: Qi Liu <liuqi115@huawei.com>

Note: To guarantee that the offset between the probe point and the
kprobe pre_handler stays below 128MiB, users should set
CONFIG_RANDOMIZE_MODULE_REGION_FULL=n or pass nokaslr on the kernel
command line; otherwise optprobe will not work and will fall back to a
normal kprobe.

Acked-by: Masami Hiramatsu
---
 arch/arm64/Kconfig                            |   1 +
 arch/arm64/include/asm/kprobes.h              |  24 ++
 arch/arm64/kernel/probes/Makefile             |   2 +
 arch/arm64/kernel/probes/kprobes.c            |  19 +-
 arch/arm64/kernel/probes/opt_arm64.c          | 276 ++++++++++++++++++
 .../arm64/kernel/probes/optprobe_trampoline.S |  37 +++
 6 files changed, 356 insertions(+), 3 deletions(-)
 create mode 100644 arch/arm64/kernel/probes/opt_arm64.c
 create mode 100644 arch/arm64/kernel/probes/optprobe_trampoline.S

diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index b5b13a932561..b05d1d275d87 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -200,6 +200,7 @@ config ARM64
 	select HAVE_STACKPROTECTOR
 	select HAVE_SYSCALL_TRACEPOINTS
 	select HAVE_KPROBES
+	select HAVE_OPTPROBES
 	select HAVE_KRETPROBES
 	select HAVE_GENERIC_VDSO
 	select IOMMU_DMA if IOMMU_SUPPORT
diff --git a/arch/arm64/include/asm/kprobes.h b/arch/arm64/include/asm/kprobes.h
index 5d38ff4a4806..6b2fdd2ad7d8 100644
--- a/arch/arm64/include/asm/kprobes.h
+++ b/arch/arm64/include/asm/kprobes.h
@@ -39,6 +39,30 @@ void arch_remove_kprobe(struct kprobe *);
 int kprobe_fault_handler(struct pt_regs *regs, unsigned int fsr);
 int kprobe_exceptions_notify(struct notifier_block *self,
 			     unsigned long val, void *data);
+
+#define RELATIVEJUMP_SIZE	(4)
+#define MAX_COPIED_INSN	DIV_ROUND_UP(RELATIVEJUMP_SIZE, sizeof(kprobe_opcode_t))
+struct arch_optimized_insn {
+	kprobe_opcode_t copied_insn[MAX_COPIED_INSN];
+	/* detour code buffer */
+	kprobe_opcode_t *insn;
+};
+
+/* optinsn template addresses */
+extern __visible kprobe_opcode_t optprobe_template_entry[];
+extern __visible kprobe_opcode_t optprobe_template_val[];
+extern __visible kprobe_opcode_t optprobe_template_call[];
+extern __visible kprobe_opcode_t optprobe_template_end[];
+extern __visible kprobe_opcode_t optprobe_template_restore_begin[];
+extern __visible kprobe_opcode_t optprobe_template_restore_orig_insn[];
+extern __visible kprobe_opcode_t optprobe_template_restore_end[];
+extern __visible kprobe_opcode_t optprobe_template_max_length[];
+
+#define MAX_OPTIMIZED_LENGTH	4
+#define MAX_OPTINSN_SIZE				\
+	((unsigned long)optprobe_template_end -	\
+	 (unsigned long)optprobe_template_entry)
+
 void kretprobe_trampoline(void);
 void __kprobes *trampoline_probe_handler(struct pt_regs *regs);
diff --git a/arch/arm64/kernel/probes/Makefile b/arch/arm64/kernel/probes/Makefile
index 8e4be92e25b1..07105fd3261d 100644
--- a/arch/arm64/kernel/probes/Makefile
+++ b/arch/arm64/kernel/probes/Makefile
@@ -4,3 +4,5 @@ obj-$(CONFIG_KPROBES)		+= kprobes.o decode-insn.o	\
 				   simulate-insn.o
 obj-$(CONFIG_UPROBES)		+= uprobes.o decode-insn.o	\
 				   simulate-insn.o
+obj-$(CONFIG_OPTPROBES)		+= opt_arm64.o			\
+				   optprobe_trampoline.o
diff --git a/arch/arm64/kernel/probes/kprobes.c b/arch/arm64/kernel/probes/kprobes.c
index 6dbcc89f6662..83755ad62abe 100644
--- a/arch/arm64/kernel/probes/kprobes.c
+++ b/arch/arm64/kernel/probes/kprobes.c
@@ -11,6 +11,7 @@
 #include <...>
 #include <...>
 #include <...>
+#include <linux/moduleloader.h>
 #include <...>
 #include <...>
 #include <...>
@@ -113,9 +114,21 @@ int __kprobes arch_prepare_kprobe(struct kprobe *p)
 
 void *alloc_insn_page(void)
 {
-	return __vmalloc_node_range(PAGE_SIZE, 1, VMALLOC_START, VMALLOC_END,
-			GFP_KERNEL, PAGE_KERNEL_ROX, VM_FLUSH_RESET_PERMS,
-			NUMA_NO_NODE, __builtin_return_address(0));
+	void *page;
+
+	page = module_alloc(PAGE_SIZE);
+	if (!page)
+		return NULL;
+
+	set_vm_flush_reset_perms(page);
+	/*
+	 * First make the page read-only, and only then make it executable to
+	 * prevent it from being W+X in between.
+	 */
+	set_memory_ro((unsigned long)page, 1);
+	set_memory_x((unsigned long)page, 1);
+
+	return page;
 }
 
 /* arm kprobe: install breakpoint in text */
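
The 128MiB limit from the note above is enforced by the
is_offset_in_range() helper in opt_arm64.c below: an AArch64 B/BL
instruction encodes a signed 26-bit word offset. A standalone
illustration of that arithmetic (plain userspace C, not part of the
patch):

  #include <stdio.h>

  /*
   * imm26 spans [-0x02000000, 0x01ffffff] words; shifted left by 2 this
   * gives byte offsets in [-0x08000000, 0x07fffffc], i.e. +/-128MiB,
   * in steps of 4.
   */
  static int offset_fits_imm26(long offset)
  {
  	return offset >= -0x8000000L && offset <= 0x7fffffcL &&
  	       !(offset & 0x3);
  }

  int main(void)
  {
  	printf("%d\n", offset_fits_imm26(0x7fffffcL));	/* 1: max forward */
  	printf("%d\n", offset_fits_imm26(0x8000000L));	/* 0: one word too far */
  	printf("%d\n", offset_fits_imm26(-0x8000000L));	/* 1: max backward */
  	printf("%d\n", offset_fits_imm26(6));		/* 0: not 4-byte aligned */
  	return 0;
  }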
diff --git a/arch/arm64/kernel/probes/opt_arm64.c b/arch/arm64/kernel/probes/opt_arm64.c
new file mode 100644
index 000000000000..4de535bee534
--- /dev/null
+++ b/arch/arm64/kernel/probes/opt_arm64.c
@@ -0,0 +1,276 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * Code for Kernel probes Jump optimization.
+ *
+ * Copyright (C) 2021 Hisilicon Limited
+ */
+
+#include <...>
+#include <...>
+
+#include <...>
+#include <...>
+#include <...>
+#include <...>
+
+#define TMPL_VAL_IDX \
+	(optprobe_template_val - optprobe_template_entry)
+#define TMPL_CALL_BACK \
+	(optprobe_template_call - optprobe_template_entry)
+#define TMPL_END_IDX \
+	(optprobe_template_end - optprobe_template_entry)
+#define TMPL_RESTORE_ORIGN_INSN \
+	(optprobe_template_restore_orig_insn - optprobe_template_entry)
+#define TMPL_RESTORE_END \
+	(optprobe_template_restore_end - optprobe_template_entry)
+#define TMPL_MAX_LENGTH \
+	(optprobe_template_max_length - optprobe_template_entry)
+#define OPTPROBE_BATCH_SIZE 64
+
+int arch_check_optimized_kprobe(struct optimized_kprobe *op)
+{
+	return 0;
+}
+
+int arch_prepared_optinsn(struct arch_optimized_insn *optinsn)
+{
+	return optinsn->insn != NULL;
+}
+
+int arch_within_optimized_kprobe(struct optimized_kprobe *op,
+				 unsigned long addr)
+{
+	return ((unsigned long)op->kp.addr <= addr &&
+		(unsigned long)op->kp.addr + RELATIVEJUMP_SIZE > addr);
+}
+
+static void
+optimized_callback(struct optimized_kprobe *op, struct pt_regs *regs)
+{
+	/* This is possible if op is under delayed unoptimizing */
+	if (kprobe_disabled(&op->kp))
+		return;
+
+	preempt_disable();
+
+	if (kprobe_running()) {
+		kprobes_inc_nmissed_count(&op->kp);
+	} else {
+		__this_cpu_write(current_kprobe, &op->kp);
+		regs->pc = (unsigned long)op->kp.addr;
+		get_kprobe_ctlblk()->kprobe_status = KPROBE_HIT_ACTIVE;
+		opt_pre_handler(&op->kp, regs);
+		__this_cpu_write(current_kprobe, NULL);
+	}
+
+	preempt_enable_no_resched();
+}
+NOKPROBE_SYMBOL(optimized_callback)
+
+static bool is_offset_in_range(unsigned long start, unsigned long end)
+{
+	long offset = end - start;
+
+	/*
+	 * Verify that the address gap is within the 128MiB range of a
+	 * relative jump.
+	 *
+	 * kprobe opt uses a 'b' instruction to branch to optinsn.insn.
+	 * According to the ARM manual, the branch instruction is:
+	 *
+	 *   31  30                  25              0
+	 *  +----+---+---+---+---+---+---------------+
+	 *  |cond| 0 | 0 | 1 | 0 | 1 |     imm26     |
+	 *  +----+---+---+---+---+---+---------------+
+	 *
+	 * imm26 is a signed 26-bit integer. The real branch offset is
+	 * computed by: imm64 = SignExtend(imm26:'00', 64);
+	 *
+	 * So the maximum forward branch should be:
+	 *   (0x01ffffff << 2) = 0x07fffffc
+	 * The maximum backward branch should be:
+	 *   (0xfe000000 << 2) = 0xFFFFFFFFF8000000 = -0x08000000
+	 *
+	 * We can simply check (rel & 0xf8000003):
+	 *   if rel is positive, (rel & 0xf8000003) should be 0
+	 *   if rel is negative, (rel & 0xf8000003) should be 0xf8000000
+	 *   the last '3' is used for alignment checking.
+	 */
+	return (offset >= -0x8000000 && offset <= 0x7fffffc && !(offset & 0x3));
+}
+
+int arch_prepare_optimized_kprobe(struct optimized_kprobe *op,
+				  struct kprobe *orig)
+{
+	kprobe_opcode_t *code, *buf;
+	void **addrs;
+	u32 insn;
+	int ret, i;
+
+	addrs = kcalloc(TMPL_MAX_LENGTH, sizeof(void *), GFP_KERNEL);
+	if (!addrs)
+		return -ENOMEM;
+
+	buf = kcalloc(TMPL_MAX_LENGTH, sizeof(kprobe_opcode_t), GFP_KERNEL);
+	if (!buf) {
+		kfree(addrs);
+		return -ENOMEM;
+	}
+
+	code = get_optinsn_slot();
+	if (!code) {
+		kfree(addrs);
+		kfree(buf);
+		return -ENOMEM;
+	}
+
+	if (!is_offset_in_range((unsigned long)code,
+				(unsigned long)orig->addr + 8)) {
+		ret = -ERANGE;
+		goto error;
+	}
+
+	if (!is_offset_in_range((unsigned long)code + TMPL_CALL_BACK,
+				(unsigned long)optimized_callback)) {
+		ret = -ERANGE;
+		goto error;
+	}
+
+	if (!is_offset_in_range((unsigned long)&code[TMPL_RESTORE_END],
+				(unsigned long)op->kp.addr + 4)) {
+		ret = -ERANGE;
+		goto error;
+	}
+
+	memcpy(buf, optprobe_template_entry,
+	       TMPL_END_IDX * sizeof(kprobe_opcode_t));
+
+	buf[TMPL_VAL_IDX] = FIELD_GET(GENMASK(31, 0), (unsigned long long)op);
+	buf[TMPL_VAL_IDX + 1] =
+		FIELD_GET(GENMASK(63, 32), (unsigned long long)op);
+	buf[TMPL_RESTORE_ORIGN_INSN] = orig->opcode;
+
+	insn = aarch64_insn_gen_branch_imm(
+			(unsigned long)(&code[TMPL_CALL_BACK]),
+			(unsigned long)optimized_callback,
+			AARCH64_INSN_BRANCH_LINK);
+	buf[TMPL_CALL_BACK] = insn;
+
+	insn = aarch64_insn_gen_branch_imm(
+			(unsigned long)(&code[TMPL_RESTORE_END]),
+			(unsigned long)(op->kp.addr) + 4,
+			AARCH64_INSN_BRANCH_NOLINK);
+	buf[TMPL_RESTORE_END] = insn;
+
+	/* Setup template */
+	for (i = 0; i < TMPL_MAX_LENGTH; i++)
+		addrs[i] = code + i;
+
+	ret = aarch64_insn_patch_text(addrs, buf, TMPL_MAX_LENGTH);
+	if (ret < 0)
+		goto error;
+
+	flush_icache_range((unsigned long)code,
+			   (unsigned long)(&code[TMPL_END_IDX]));
+
+	/* Setting op->optinsn.insn means the probe is prepared. */
+	op->optinsn.insn = code;
+
+out:
+	kfree(addrs);
+	kfree(buf);
+	return ret;
+
+error:
+	free_optinsn_slot(code, 0);
+	goto out;
+}
+
+void arch_optimize_kprobes(struct list_head *oplist)
+{
+	struct optimized_kprobe *op, *tmp;
+	kprobe_opcode_t *insns;
+	void **addrs;
+	int i = 0;
+
+	addrs = kcalloc(OPTPROBE_BATCH_SIZE, sizeof(void *), GFP_KERNEL);
+	if (!addrs)
+		return;
+
+	insns = kcalloc(OPTPROBE_BATCH_SIZE, sizeof(kprobe_opcode_t), GFP_KERNEL);
+	if (!insns) {
+		kfree(addrs);
+		return;
+	}
+
+	list_for_each_entry_safe(op, tmp, oplist, list) {
+		WARN_ON(kprobe_disabled(&op->kp));
+
+		/*
+		 * Back up the instruction that will be replaced
+		 * by the jump instruction.
+		 */
+		memcpy(op->optinsn.copied_insn, op->kp.addr,
+		       RELATIVEJUMP_SIZE);
+
+		addrs[i] = (void *)op->kp.addr;
+		insns[i] = aarch64_insn_gen_branch_imm((unsigned long)op->kp.addr,
+				(unsigned long)op->optinsn.insn,
+				AARCH64_INSN_BRANCH_NOLINK);
+
+		list_del_init(&op->list);
+		if (++i == OPTPROBE_BATCH_SIZE)
+			break;
+	}
+
+	aarch64_insn_patch_text(addrs, insns, i);
+	kfree(addrs);
+	kfree(insns);
+}
+
+void arch_unoptimize_kprobe(struct optimized_kprobe *op)
+{
+	arch_arm_kprobe(&op->kp);
+}
+
+/*
+ * Recover original instructions and breakpoints from relative jumps.
+ * Caller must hold kprobe_mutex.
+ */
+void arch_unoptimize_kprobes(struct list_head *oplist,
+			     struct list_head *done_list)
+{
+	struct optimized_kprobe *op, *tmp;
+	kprobe_opcode_t *insns;
+	void **addrs;
+	int i = 0;
+
+	addrs = kcalloc(OPTPROBE_BATCH_SIZE, sizeof(void *), GFP_KERNEL);
+	if (!addrs)
+		return;
+
+	insns = kcalloc(OPTPROBE_BATCH_SIZE, sizeof(kprobe_opcode_t), GFP_KERNEL);
+	if (!insns) {
+		kfree(addrs);
+		return;
+	}
+
+	list_for_each_entry_safe(op, tmp, oplist, list) {
+		addrs[i] = (void *)op->kp.addr;
+		insns[i] = BRK64_OPCODE_KPROBES;
+		list_move(&op->list, done_list);
+
+		if (++i == OPTPROBE_BATCH_SIZE)
+			break;
+	}
+
+	aarch64_insn_patch_text(addrs, insns, i);
+	kfree(addrs);
+	kfree(insns);
+}
+
+void arch_remove_optimized_kprobe(struct optimized_kprobe *op)
+{
+	if (op->optinsn.insn) {
+		free_optinsn_slot(op->optinsn.insn, 1);
+		op->optinsn.insn = NULL;
+	}
+}
diff --git a/arch/arm64/kernel/probes/optprobe_trampoline.S b/arch/arm64/kernel/probes/optprobe_trampoline.S
new file mode 100644
index 000000000000..24d713d400cd
--- /dev/null
+++ b/arch/arm64/kernel/probes/optprobe_trampoline.S
@@ -0,0 +1,37 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * trampoline entry and return code for optprobes.
+ */
+
+#include <...>
+#include <...>
+#include <asm/assembler.h>
+
+	.global optprobe_template_entry
+optprobe_template_entry:
+	sub sp, sp, #PT_REGS_SIZE
+	save_all_base_regs
+	/* Get parameters to optimized_callback() */
+	ldr x0, 1f
+	mov x1, sp
+	/* Branch to optimized_callback() */
+	.global optprobe_template_call
+optprobe_template_call:
+	nop
+	restore_all_base_regs
+	ldr lr, [sp, #S_LR]
+	add sp, sp, #PT_REGS_SIZE
+	.global optprobe_template_restore_orig_insn
+optprobe_template_restore_orig_insn:
+	nop
+	.global optprobe_template_restore_end
+optprobe_template_restore_end:
+	nop
+	.global optprobe_template_end
+optprobe_template_end:
+	.global optprobe_template_val
+optprobe_template_val:
+1:	.long 0
+	.long 0
+	.global optprobe_template_max_length
+optprobe_template_max_length:
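
To check that the optimization actually engages, one can register a
plain kprobe and inspect /sys/kernel/debug/kprobes/list, where optimized
probes are tagged [OPTIMIZED]. A minimal sketch of such a test module
(hypothetical, reusing the do_empty symbol from the sample above; not
part of the patch):

  #include <linux/module.h>
  #include <linux/kprobes.h>

  static int pre_handler(struct kprobe *p, struct pt_regs *regs)
  {
  	pr_info("pre_handler: hit at %pS\n",
  		(void *)instruction_pointer(regs));
  	return 0;
  }

  static struct kprobe kp = {
  	.symbol_name	= "do_empty",
  	.pre_handler	= pre_handler,
  };

  static int __init opt_test_init(void)
  {
  	/*
  	 * The core kprobes optimizer converts this probe to an optprobe
  	 * once arch_prepare_optimized_kprobe() succeeds, i.e. when the
  	 * detour buffer is within branch range of the probe point.
  	 */
  	return register_kprobe(&kp);
  }

  static void __exit opt_test_exit(void)
  {
  	unregister_kprobe(&kp);
  }

  module_init(opt_test_init);
  module_exit(opt_test_exit);
  MODULE_LICENSE("GPL");

With CONFIG_RANDOMIZE_MODULE_REGION_FULL=y and no nokaslr, the range
checks are expected to fail, so the probe should remain a regular
breakpoint-based kprobe, matching the note in the commit message.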