From patchwork Tue Aug 30 08:07:28 2016
X-Patchwork-Submitter: Chunyan Zhang
X-Patchwork-Id: 9305017
From: Chunyan Zhang <zhang.chunyan@linaro.org>
To: rostedt@goodmis.org, mathieu.poirier@linaro.org,
 alexander.shishkin@linux.intel.com, mingo@redhat.com
Subject: [PATCHV5 1/3] tracing: add a possibility of exporting function
 trace to other places instead of ring buffer only
Date: Tue, 30 Aug 2016 16:07:28 +0800
Message-Id: <1472544450-9915-2-git-send-email-zhang.chunyan@linaro.org>
X-Mailer: git-send-email 2.7.4
In-Reply-To: <1472544450-9915-1-git-send-email-zhang.chunyan@linaro.org>
References: <1472544450-9915-1-git-send-email-zhang.chunyan@linaro.org>
Cc: arnd@arndb.de, felipe.balbi@linux.intel.com, zhang.lyra@gmail.com,
 linux-kernel@vger.kernel.org, tor@ti.com, philippe.langlais@st.com,
 mike.leach@arm.com, nicolas.guion@st.com,
 linux-arm-kernel@lists.infradead.org

Currently, function traces can only be exported to the ring buffer. This
patch adds a trace_export concept which can process traces and export
them to a registered destination in addition to Ftrace's only current
output, the ring buffer.

With this in place, if we want function traces to be sent to a
destination other than the ring buffer, we only need to register a new
trace_export and either implement its own .commit() callback, or simply
use trace_generic_commit(), which this patch also adds, and hook up a
.write() function for writing traces to the storage.

With this patch, only the function trace (trace type TRACE_FN) is
supported.

Signed-off-by: Chunyan Zhang <zhang.chunyan@linaro.org>
---
 include/linux/trace.h |  35 ++++++++++++
 kernel/trace/trace.c  | 155 +++++++++++++++++++++++++++++++++++++++++++++++++-
 kernel/trace/trace.h  |   1 +
 3 files changed, 190 insertions(+), 1 deletion(-)
 create mode 100644 include/linux/trace.h

diff --git a/include/linux/trace.h b/include/linux/trace.h
new file mode 100644
index 0000000..30ded92
--- /dev/null
+++ b/include/linux/trace.h
@@ -0,0 +1,35 @@
+#ifndef _LINUX_TRACE_H
+#define _LINUX_TRACE_H
+
+#include <linux/ring_buffer.h>
+struct trace_array;
+
+#ifdef CONFIG_TRACING
+/*
+ * The trace export - an export of Ftrace. The trace_export can process
+ * traces and export them to a registered destination as an addition to
+ * the current only output of Ftrace - i.e. ring buffer.
+ *
+ * If you want traces to be sent to some other place rather than
+ * the ring buffer only, you just need to register a new trace_export
+ * and implement its own .commit() callback, or just directly use
+ * 'trace_generic_commit()' and hook up its own .write() function
+ * for writing traces to the storage.
+ *
+ * next - pointer to the next trace_export
+ * commit - commit the traces to the destination
+ * write - copy traces which have been dealt with by ->commit() to
+ * the destination
+ */
+struct trace_export {
+	struct trace_export __rcu *next;
+	void (*commit)(struct trace_array *, struct ring_buffer_event *);
+	void (*write)(const char *, unsigned int);
+};
+
+int register_ftrace_export(struct trace_export *export);
+int unregister_ftrace_export(struct trace_export *export);
+
+#endif /* CONFIG_TRACING */
+
+#endif /* _LINUX_TRACE_H */
diff --git a/kernel/trace/trace.c b/kernel/trace/trace.c
index dade4c9..3163fa6 100644
--- a/kernel/trace/trace.c
+++ b/kernel/trace/trace.c
@@ -40,6 +40,7 @@
 #include <linux/poll.h>
 #include <linux/nmi.h>
 #include <linux/fs.h>
+#include <linux/trace.h>
 #include <linux/sched/rt.h>
 
 #include "trace.h"
@@ -2128,6 +2129,155 @@ void trace_buffer_unlock_commit_regs(struct trace_array *tr,
 	ftrace_trace_userstack(buffer, flags, pc);
 }
 
+static DEFINE_STATIC_KEY_FALSE(ftrace_exports_enabled);
+
+static void ftrace_exports_enable(void)
+{
+	static_branch_enable(&ftrace_exports_enabled);
+}
+
+static void ftrace_exports_disable(void)
+{
+	static_branch_disable(&ftrace_exports_enabled);
+}
+
+static size_t trace_size[] = {
+	[TRACE_FN]		= sizeof(struct ftrace_entry),
+	[TRACE_CTX]		= sizeof(struct ctx_switch_entry),
+	[TRACE_WAKE]		= sizeof(struct ctx_switch_entry),
+	[TRACE_STACK]		= sizeof(struct stack_entry),
+	[TRACE_PRINT]		= sizeof(struct print_entry),
+	[TRACE_BPRINT]		= sizeof(struct bprint_entry),
+	[TRACE_MMIO_RW]		= sizeof(struct trace_mmiotrace_rw),
+	[TRACE_MMIO_MAP]	= sizeof(struct trace_mmiotrace_map),
+	[TRACE_BRANCH]		= sizeof(struct trace_branch),
+	[TRACE_GRAPH_RET]	= sizeof(struct ftrace_graph_ret_entry),
+	[TRACE_GRAPH_ENT]	= sizeof(struct ftrace_graph_ent_entry),
+	[TRACE_USER_STACK]	= sizeof(struct userstack_entry),
+	[TRACE_BPUTS]		= sizeof(struct bputs_entry),
+};
+
+static void
+trace_generic_commit(struct trace_array *tr,
+		     struct ring_buffer_event *event)
+{
+	struct trace_entry *entry;
+	struct trace_export *export = tr->export;
+	unsigned int size = 0;
+
+	entry = ring_buffer_event_data(event);
+
+	size = trace_size[entry->type];
+	if (!size)
+		return;
+
+	if (export && export->write)
+		export->write((char *)entry, size);
+}
+
+static DEFINE_MUTEX(ftrace_export_lock);
+
+static struct trace_export __rcu *ftrace_exports_list __read_mostly;
+
+static inline void
+ftrace_exports(struct trace_array *tr, struct ring_buffer_event *event)
+{
+	struct trace_export *export;
+
+	preempt_disable_notrace();
+
+	for (export = rcu_dereference_raw_notrace(ftrace_exports_list);
+	     export && export->commit;
+	     export = rcu_dereference_raw_notrace(export->next)) {
+		tr->export = export;
+		export->commit(tr, event);
+	}
+
+	preempt_enable_notrace();
+}
+
+static inline void
+add_trace_export(struct trace_export **list, struct trace_export *export)
+{
+	rcu_assign_pointer(export->next, *list);
+	/*
+	 * We are entering export into the list but another
+	 * CPU might be walking that list. We need to make sure
+	 * the export->next pointer is valid before another CPU sees
+	 * the export pointer included into the list.
+	 */
+	rcu_assign_pointer(*list, export);
+}
+
+static inline int
+rm_trace_export(struct trace_export **list, struct trace_export *export)
+{
+	struct trace_export **p;
+
+	for (p = list; *p != NULL; p = &(*p)->next)
+		if (*p == export)
+			break;
+
+	if (*p != export)
+		return -1;
+
+	rcu_assign_pointer(*p, (*p)->next);
+
+	return 0;
+}
+
+static inline void
+add_ftrace_export(struct trace_export **list, struct trace_export *export)
+{
+	if (*list == NULL)
+		ftrace_exports_enable();
+
+	add_trace_export(list, export);
+}
+
+static inline int
+rm_ftrace_export(struct trace_export **list, struct trace_export *export)
+{
+	int ret;
+
+	ret = rm_trace_export(list, export);
+	if (*list == NULL)
+		ftrace_exports_disable();
+
+	return ret;
+}
+
+int register_ftrace_export(struct trace_export *export)
+{
+	if (WARN_ON_ONCE(!export->write))
+		return -1;
+
+	mutex_lock(&ftrace_export_lock);
+
+	export->commit = trace_generic_commit;
+
+	add_ftrace_export(&ftrace_exports_list, export);
+
+	mutex_unlock(&ftrace_export_lock);
+
+	return 0;
+}
+EXPORT_SYMBOL_GPL(register_ftrace_export);
+
+int unregister_ftrace_export(struct trace_export *export)
+{
+	int ret;
+
+	mutex_lock(&ftrace_export_lock);
+
+	ret = rm_ftrace_export(&ftrace_exports_list, export);
+
+	mutex_unlock(&ftrace_export_lock);
+
+	return ret;
+}
+EXPORT_SYMBOL_GPL(unregister_ftrace_export);
+
 void
 trace_function(struct trace_array *tr,
 	       unsigned long ip, unsigned long parent_ip, unsigned long flags,
@@ -2146,8 +2296,11 @@ trace_function(struct trace_array *tr,
 	entry->ip = ip;
 	entry->parent_ip = parent_ip;
 
-	if (!call_filter_check_discard(call, entry, buffer, event))
+	if (!call_filter_check_discard(call, entry, buffer, event)) {
+		if (static_branch_unlikely(&ftrace_exports_enabled))
+			ftrace_exports(tr, event);
 		__buffer_unlock_commit(buffer, event);
+	}
 }
 
 #ifdef CONFIG_STACKTRACE
diff --git a/kernel/trace/trace.h b/kernel/trace/trace.h
index f783df4..26a3088
--- a/kernel/trace/trace.h
+++ b/kernel/trace/trace.h
@@ -260,6 +260,7 @@ struct trace_array {
 	/* function tracing enabled */
 	int function_enabled;
 #endif
+	struct trace_export *export;
 };
 
 enum {