From: Wang Nan <wangnan0@huawei.com>
Subject: [RFC PATCH v2 24/26] early kprobes: core logic to support early kprobe on ftrace.
Date: Thu, 12 Feb 2015 20:21:25 +0800
Message-ID: <1423743685-13072-1-git-send-email-wangnan0@huawei.com>
In-Reply-To: <1423743476-11927-1-git-send-email-wangnan0@huawei.com>
References: <1423743476-11927-1-git-send-email-wangnan0@huawei.com>
Cc: lizefan@huawei.com, x86@kernel.org, linux-kernel@vger.kernel.org,
    linux-arm-kernel@lists.infradead.org

Utilize the previously introduced ftrace update notifier chain to support
early kprobes on ftrace.
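
For reference, the notifier added here follows the standard <linux/notifier.h>
return-value convention: NOTIFY_DONE means "not interested, keep walking the
chain", while NOTIFY_STOP claims the event and ends the walk. The sketch below
is illustrative only and not part of this patch: example_chain, example_call
and struct example_update_info are made-up stand-ins for the ftrace update
chain interface introduced earlier in this series.

  #include <linux/notifier.h>
  #include <linux/types.h>

  /*
   * Illustrative stand-in for the data passed along the ftrace update
   * chain; the real structure comes from the earlier patch in this series.
   */
  struct example_update_info {
  	unsigned long ip;	/* address ftrace is about to update */
  	bool enable;		/* ftrace wants to enable tracing here */
  	bool retry;		/* callee asks the caller to redo the update */
  };

  static RAW_NOTIFIER_HEAD(example_chain);

  static int example_call(struct notifier_block *nb, unsigned long val,
  			 void *param)
  {
  	struct example_update_info *info = param;

  	if (!info->enable)
  		return NOTIFY_DONE;	/* not interested; keep walking */

  	/* Claim the event, stop the walk, and ask the caller to retry. */
  	info->retry = true;
  	return NOTIFY_STOP;
  }

  static struct notifier_block example_nb = {
  	.notifier_call = example_call,
  };

  static int example_register(void)
  {
  	return raw_notifier_chain_register(&example_chain, &example_nb);
  }

In the patch itself the handler returns NOTIFY_STOP in exactly two cases: when
it fixes up ftrace's nop conversion via arch_fix_ftrace_early_kprobe(), and
when it has restored the original instruction and sets info->retry so that
ftrace redoes its update.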
Signed-off-by: Wang Nan <wangnan0@huawei.com>
---
 include/linux/kprobes.h |   1 +
 kernel/kprobes.c        | 213 ++++++++++++++++++++++++++++++++++++++++++++----
 2 files changed, 197 insertions(+), 17 deletions(-)

diff --git a/include/linux/kprobes.h b/include/linux/kprobes.h
index 92aafa7..1c211e8 100644
--- a/include/linux/kprobes.h
+++ b/include/linux/kprobes.h
@@ -131,6 +131,7 @@ struct kprobe {
  */
 #define KPROBE_FLAG_FTRACE	8 /* probe is using ftrace */
 #define KPROBE_FLAG_EARLY	16 /* early kprobe */
+#define KPROBE_FLAG_RESTORED	32 /* temporarily restored to its original insn */
 
 /* Has this kprobe gone ? */
 static inline int kprobe_gone(struct kprobe *p)
diff --git a/kernel/kprobes.c b/kernel/kprobes.c
index 0bbb510..c9cd46f 100644
--- a/kernel/kprobes.c
+++ b/kernel/kprobes.c
@@ -48,6 +48,7 @@
 #include 
 #include 
 #include 
+#include 
 #include 
 #include 
 
@@ -2540,11 +2541,127 @@ EXPORT_SYMBOL_GPL(jprobe_return);
 void __weak arch_fix_ftrace_early_kprobe(struct optimized_kprobe *p)
 {
 }
+
+static int restore_optimized_kprobe(struct optimized_kprobe *op)
+{
+	/* If it is already restored, let others handle it. */
+	if (op->kp.flags & KPROBE_FLAG_RESTORED)
+		return NOTIFY_DONE;
+
+	get_online_cpus();
+	mutex_lock(&text_mutex);
+	arch_restore_optimized_kprobe(op);
+	mutex_unlock(&text_mutex);
+	put_online_cpus();
+
+	op->kp.flags |= KPROBE_FLAG_RESTORED;
+	return NOTIFY_STOP;
+}
+
+static int ftrace_notifier_call(struct notifier_block *nb,
+		unsigned long val, void *param)
+{
+	struct ftrace_update_notifier_info *info = param;
+	struct optimized_kprobe *op;
+	struct dyn_ftrace *rec;
+	struct kprobe *kp;
+	int enable;
+	void *addr;
+	int ret = NOTIFY_DONE;
+
+	if (!info || !info->rec || !info->rec->ip)
+		return NOTIFY_DONE;
+
+	rec = info->rec;
+	enable = info->enable;
+	addr = (void *)rec->ip;
+
+	mutex_lock(&kprobe_mutex);
+	kp = get_kprobe(addr);
+	mutex_unlock(&kprobe_mutex);
+
+	if (!kp || !kprobe_aggrprobe(kp))
+		return NOTIFY_DONE;
+
+	op = container_of(kp, struct optimized_kprobe, kp);
+	/*
+	 * Ftrace is trying to convert ftrace entries to nop
+	 * instructions. This conversion should have already been done
+	 * at register_early_kprobe(). x86 needs fixing here.
+	 */
+	if (!(rec->flags & FTRACE_FL_ENABLED) && (!enable)) {
+		arch_fix_ftrace_early_kprobe(op);
+		return NOTIFY_STOP;
+	}
+
+	/*
+	 * Ftrace is trying to enable a trace entry. We temporarily
+	 * restore the probed instruction.
+	 * We can continue using this kprobe as a ftrace-based kprobe,
+	 * but events between this restoring and the early kprobe
+	 * conversion will be lost.
+	 */
+	if (!(rec->flags & FTRACE_FL_ENABLED) && enable) {
+		ret = restore_optimized_kprobe(op);
+
+		/* Let ftrace retry if the restore is successful. */
+		if (ret == NOTIFY_STOP)
+			info->retry = true;
+		return ret;
+	}
+
+	return NOTIFY_DONE;
+}
+
+static struct notifier_block ftrace_notifier_block = {
+	.notifier_call = ftrace_notifier_call,
+};
+static bool ftrace_notifier_registered = false;
+
+static int enable_early_kprobe_on_ftrace(struct kprobe *p)
+{
+	int err;
+
+	if (!ftrace_notifier_registered) {
+		err = register_ftrace_update_notifier(&ftrace_notifier_block);
+		if (err) {
+			pr_err("Failed to register ftrace update notifier\n");
+			return err;
+		}
+		ftrace_notifier_registered = true;
+	}
+
+	err = ftrace_process_loc_early((unsigned long)p->addr);
+	if (err)
+		pr_err("Failed to process ftrace entry at %p\n", p->addr);
+	return err;
+}
+
+/* Caller must ensure kprobe_aggrprobe(kp). */
+static void convert_early_ftrace_kprobe_top(struct optimized_kprobe *op)
+{
+	restore_optimized_kprobe(op);
+	arm_kprobe_ftrace(&op->kp);
+}
+
+#else
+static inline int enable_early_kprobe_on_ftrace(struct kprobe *__unused)
+{ return 0; }
+
+/*
+ * If CONFIG_KPROBES_ON_FTRACE is off this function should never get called,
+ * so let it trigger a warning.
+ */
+static inline void convert_early_ftrace_kprobe_top(struct optimized_kprobe *__unused)
+{
+	WARN_ON(1);
+}
 #endif
 
 static int register_early_kprobe(struct kprobe *p)
 {
 	struct early_kprobe_slot *slot;
+	struct module *probed_mod;
 	int err;
 
 	if (p->break_handler || p->post_handler)
@@ -2552,13 +2669,25 @@ static int register_early_kprobe(struct kprobe *p)
 	if (p->flags & KPROBE_FLAG_DISABLED)
 		return -EINVAL;
 
+	err = check_kprobe_address_safe(p, &probed_mod);
+	if (err)
+		return err;
+
+	BUG_ON(probed_mod);
+
+	if (kprobe_ftrace(p)) {
+		err = enable_early_kprobe_on_ftrace(p);
+		if (err)
+			return err;
+	}
+
 	slot = ek_alloc_early_kprobe();
 	if (!slot) {
 		pr_err("No enough early kprobe slots.\n");
 		return -ENOMEM;
 	}
 
-	p->flags &= KPROBE_FLAG_DISABLED;
+	p->flags &= KPROBE_FLAG_DISABLED | KPROBE_FLAG_FTRACE;
 	p->flags |= KPROBE_FLAG_EARLY;
 	p->nmissed = 0;
 
@@ -2599,43 +2728,93 @@ free_slot:
 }
 
 static void
-convert_early_kprobe(struct kprobe *kp)
+convert_early_kprobe_top(struct kprobe *kp)
 {
 	struct module *probed_mod;
+	struct optimized_kprobe *op;
 	int err;
 
 	BUG_ON(!kprobe_aggrprobe(kp));
+	op = container_of(kp, struct optimized_kprobe, kp);
 
 	err = check_kprobe_address_safe(kp, &probed_mod);
 	if (err)
 		panic("Insert kprobe at %p is not safe!", kp->addr);
+	BUG_ON(probed_mod);
 
-	/*
-	 * FIXME:
-	 *   convert kprobe to ftrace if CONFIG_KPROBES_ON_FTRACE is on
-	 *   and kp is on ftrace location.
-	 */
+	if (kprobe_ftrace(kp))
+		convert_early_ftrace_kprobe_top(op);
+}
 
-	mutex_lock(&kprobe_mutex);
-	hlist_del_rcu(&kp->hlist);
+static void
+convert_early_kprobes_top(void)
+{
+	struct kprobe *p;
+
+	hlist_for_each_entry(p, &early_kprobe_hlist, hlist)
+		convert_early_kprobe_top(p);
+}
+
+static LIST_HEAD(early_freeing_list);
+
+static void
+convert_early_kprobe_stop_machine(struct kprobe *kp)
+{
+	struct optimized_kprobe *op;
+
+	BUG_ON(!kprobe_aggrprobe(kp));
+	op = container_of(kp, struct optimized_kprobe, kp);
+
+	if ((kprobe_ftrace(kp)) && (list_is_singular(&op->kp.list))) {
+		/* Update kp */
+		kp = list_entry(op->kp.list.next, struct kprobe, list);
+
+		hlist_replace_rcu(&op->kp.hlist, &kp->hlist);
+		list_del_init(&kp->list);
+
+		op->kp.flags |= KPROBE_FLAG_DISABLED;
+		list_add(&op->list, &early_freeing_list);
+	}
+	hlist_del_rcu(&kp->hlist);
 	INIT_HLIST_NODE(&kp->hlist);
 	hlist_add_head_rcu(&kp->hlist,
-		       &kprobe_table[hash_ptr(kp->addr, KPROBE_HASH_BITS)]);
-	mutex_unlock(&kprobe_mutex);
-
-	if (probed_mod)
-		module_put(probed_mod);
+			&kprobe_table[hash_ptr(kp->addr, KPROBE_HASH_BITS)]);
 }
 
-static void
-convert_early_kprobes(void)
+static int
+convert_early_kprobes_stop_machine(void *__unused)
 {
 	struct kprobe *p;
 	struct hlist_node *tmp;
 
 	hlist_for_each_entry_safe(p, tmp, &early_kprobe_hlist, hlist)
-		convert_early_kprobe(p);
+		convert_early_kprobe_stop_machine(p);
+	return 0;
+}
+
+static void
+convert_early_kprobes(void)
+{
+	struct optimized_kprobe *op, *tmp;
+
+	mutex_lock(&kprobe_mutex);
+
+	convert_early_kprobes_top();
+
+	get_online_cpus();
+	mutex_lock(&text_mutex);
+
+	stop_machine(convert_early_kprobes_stop_machine, NULL, NULL);
+
+	mutex_unlock(&text_mutex);
+	put_online_cpus();
+	mutex_unlock(&kprobe_mutex);
+
+	list_for_each_entry_safe(op, tmp,
+			&early_freeing_list, list) {
+		list_del_init(&op->list);
+		free_aggr_kprobe(&op->kp);
+	}
 };
 
 #else
 static int register_early_kprobe(struct kprobe *p) { return -ENOSYS; }
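
A closing note on the conversion path above: convert_early_kprobes() re-links
probes from early_kprobe_hlist into kprobe_table and swaps an aggregated probe
for its single remaining kprobe, a multi-step hlist update that no other CPU
may observe half-done, which is why it runs inside stop_machine(). Below is a
minimal sketch of that pattern, assuming only the stock stop_machine() API;
flip_state() and example() are made-up names for illustration.

  #include <linux/stop_machine.h>

  /* Runs on one CPU while every other CPU spins with interrupts disabled. */
  static int flip_state(void *data)
  {
  	int *state = data;

  	/* Multi-step, non-atomic updates are safe here: nothing else runs. */
  	*state = !*state;
  	return 0;
  }

  static int example(void)
  {
  	static int state;

  	/* NULL cpumask: run flip_state() on any one online CPU. */
  	return stop_machine(flip_state, &state, NULL);
  }

As in the patch, callers typically also take the locks that protect the data
being modified (here kprobe_mutex and text_mutex) before entering
stop_machine().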