From patchwork Mon Aug 15 11:50:01 2016
X-Patchwork-Submitter: Chunyan Zhang
X-Patchwork-Id: 9280837
From: Chunyan Zhang <zhang.chunyan@linaro.org>
To: rostedt@goodmis.org, mathieu.poirier@linaro.org, alexander.shishkin@linux.intel.com, mingo@redhat.com
Subject: [PATCH V4 1/3] tracing: add a possibility of exporting function trace to other places instead of ring buffer only
Date: Mon, 15 Aug 2016 19:50:01 +0800
Message-Id: <1471261803-1186-2-git-send-email-zhang.chunyan@linaro.org>
X-Mailer: git-send-email 2.7.4
In-Reply-To: <1471261803-1186-1-git-send-email-zhang.chunyan@linaro.org>
References: <1471261803-1186-1-git-send-email-zhang.chunyan@linaro.org>
Cc: arnd@arndb.de, felipe.balbi@linux.intel.com, zhang.lyra@gmail.com, linux-kernel@vger.kernel.org, tor@ti.com, philippe.langlais@st.com, mike.leach@arm.com, nicolas.guion@st.com, linux-arm-kernel@lists.infradead.org

Currently the ring buffer is the only output for function traces. This
patch adds a trace_export concept, which processes traces and exports
them to a registered destination; that destination can be the ring
buffer or some other storage. If we want function traces to go
somewhere other than the ring buffer, we only need to register a new
trace_export and either implement its own .commit() callback or simply
use trace_generic_commit(), which this patch also adds, and hook up the
export's own .write() function for writing traces to the storage.

Currently, only function traces (TRACE_FN) are supported.

Signed-off-by: Chunyan Zhang <zhang.chunyan@linaro.org>
---
 include/linux/trace.h |  33 ++++++++++++++
 kernel/trace/trace.c  | 124 +++++++++++++++++++++++++++++++++++++++++++++++++-
 kernel/trace/trace.h  |  31 +++++++++++++
 3 files changed, 187 insertions(+), 1 deletion(-)
 create mode 100644 include/linux/trace.h
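Usage note (not part of this patch): a minimal sketch of how a consumer
could use the new interface. It fills in only .name and .write and lets
register_trace_export() install trace_generic_commit() as the .commit()
callback. The names my_export, my_export_write, my_export_init and
my_export_exit are made up for illustration; the print_hex_dump() call
is only a placeholder for a real sink such as an STM channel or a
device FIFO.

#include <linux/module.h>
#include <linux/printk.h>
#include <linux/trace.h>

static void my_export_write(const char *buf, unsigned int len)
{
	/*
	 * buf points at a struct trace_entry followed by its payload;
	 * len is the size picked by trace_entry_size(). Here we just
	 * hex-dump the raw bytes; a real export would push them to its
	 * own storage instead.
	 */
	print_hex_dump(KERN_DEBUG, "trace: ", DUMP_PREFIX_OFFSET,
		       16, 1, buf, len, false);
}

static struct trace_export my_export = {
	.name	= "my_export",
	.write	= my_export_write,
};

static int __init my_export_init(void)
{
	/* .commit is set to trace_generic_commit() by the core */
	return register_trace_export(&my_export);
}

static void __exit my_export_exit(void)
{
	unregister_trace_export(&my_export);
}

module_init(my_export_init);
module_exit(my_export_exit);
MODULE_LICENSE("GPL");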
diff --git a/include/linux/trace.h b/include/linux/trace.h
new file mode 100644
index 0000000..4d4f0e1
--- /dev/null
+++ b/include/linux/trace.h
@@ -0,0 +1,33 @@
+#ifndef _LINUX_TRACE_H
+#define _LINUX_TRACE_H
+
+#include
+struct trace_array;
+
+#ifdef CONFIG_TRACING
+/*
+ * The trace export - an export of function traces. Every ftrace_ops
+ * has at least one export which would output function traces to ring
+ * buffer.
+ *
+ * name   - the name of this export
+ * next   - pointer to the next trace_export
+ * tr     - the trace_array this export belongs to
+ * commit - commit the traces to the ring buffer and/or some other places
+ * write  - copy traces which have been dealt with by ->commit() to
+ *          the destination
+ */
+struct trace_export {
+	char name[16];
+	struct trace_export *next;
+	struct trace_array *tr;
+	void (*commit)(struct trace_array *, struct ring_buffer_event *);
+	void (*write)(const char *, unsigned int);
+};
+
+int register_trace_export(struct trace_export *export);
+int unregister_trace_export(struct trace_export *export);
+
+#endif /* CONFIG_TRACING */
+
+#endif /* _LINUX_TRACE_H */
diff --git a/kernel/trace/trace.c b/kernel/trace/trace.c
index dade4c9..0247ac2 100644
--- a/kernel/trace/trace.c
+++ b/kernel/trace/trace.c
@@ -40,6 +40,7 @@
 #include
 #include
 #include
+#include <linux/trace.h>
 #include
 
 #include "trace.h"
@@ -2128,6 +2129,127 @@ void trace_buffer_unlock_commit_regs(struct trace_array *tr,
 	ftrace_trace_userstack(buffer, flags, pc);
 }
 
+static inline void
+trace_generic_commit(struct trace_array *tr,
+		     struct ring_buffer_event *event)
+{
+	struct trace_entry *entry;
+	struct trace_export *export = tr->export;
+	unsigned int size = 0;
+
+	entry = ring_buffer_event_data(event);
+
+	trace_entry_size(size, entry->type);
+	if (!size)
+		return;
+
+	if (export->write)
+		export->write((char *)entry, size);
+}
+
+static inline void
+trace_rb_commit(struct trace_array *tr,
+		struct ring_buffer_event *event)
+{
+	__buffer_unlock_commit(tr->trace_buffer.buffer, event);
+}
+
+static DEFINE_MUTEX(trace_export_lock);
+
+static struct trace_export trace_export_rb __read_mostly = {
+	.name	= "rb",
+	.commit	= trace_rb_commit,
+	.next	= NULL,
+};
+static struct trace_export *trace_exports_list __read_mostly = &trace_export_rb;
+
+inline void
+trace_exports(struct trace_array *tr, struct ring_buffer_event *event)
+{
+	struct trace_export *export;
+
+	preempt_disable_notrace();
+
+	for (export = rcu_dereference_raw_notrace(trace_exports_list);
+	     export && export->commit;
+	     export = rcu_dereference_raw_notrace(export->next)) {
+		tr->export = export;
+		export->commit(tr, event);
+	}
+
+	preempt_enable_notrace();
+}
+
+static void
+add_trace_export(struct trace_export **list, struct trace_export *export)
+{
+	export->next = *list;
+	/*
+	 * We are entering export into the list but another
+	 * CPU might be walking that list. We need to make sure
+	 * the export->next pointer is valid before another CPU sees
+	 * the export pointer included in the list.
+	 */
+	rcu_assign_pointer(*list, export);
+
+}
+
+static int
+rm_trace_export(struct trace_export **list, struct trace_export *export)
+{
+	struct trace_export **p;
+
+	for (p = list; *p != &trace_export_rb; p = &(*p)->next)
+		if (*p == export)
+			break;
+
+	if (*p != export)
+		return -1;
+
+	*p = (*p)->next;
+
+	return 0;
+}
+
+int register_trace_export(struct trace_export *export)
+{
+	if (!export->write) {
+		pr_warn("trace_export must have the write() callback.\n");
+		return -1;
+	}
+
+	if (!export->name[0]) {
+		pr_warn("trace_export must have a name.\n");
+		return -1;
+	}
+
+	mutex_lock(&trace_export_lock);
+
+	export->tr = trace_exports_list->tr;
+	export->commit = trace_generic_commit;
+
+	add_trace_export(&trace_exports_list, export);
+
+	mutex_unlock(&trace_export_lock);
+
+	return 0;
+}
+EXPORT_SYMBOL_GPL(register_trace_export);
+
+int unregister_trace_export(struct trace_export *export)
+{
+	int ret;
+
+	mutex_lock(&trace_export_lock);
+
+	ret = rm_trace_export(&trace_exports_list, export);
+
+	mutex_unlock(&trace_export_lock);
+
+	return ret;
+}
+EXPORT_SYMBOL_GPL(unregister_trace_export);
+
 void
 trace_function(struct trace_array *tr,
 	       unsigned long ip, unsigned long parent_ip, unsigned long flags,
@@ -2147,7 +2269,7 @@ trace_function(struct trace_array *tr,
 	entry->parent_ip		= parent_ip;
 
 	if (!call_filter_check_discard(call, entry, buffer, event))
-		__buffer_unlock_commit(buffer, event);
+		trace_exports(tr, event);
 }
 
 #ifdef CONFIG_STACKTRACE
diff --git a/kernel/trace/trace.h b/kernel/trace/trace.h
index f783df4..a40f07c 100644
--- a/kernel/trace/trace.h
+++ b/kernel/trace/trace.h
@@ -260,6 +260,7 @@ struct trace_array {
 	/* function tracing enabled */
 	int			function_enabled;
 #endif
+	struct trace_export	*export;
 };
 
 enum {
@@ -301,6 +302,13 @@ static inline struct trace_array *top_trace_array(void)
 		break;				\
 	}
 
+#undef IF_SIZE
+#define IF_SIZE(size, var, etype, id)		\
+	if (var == id) {			\
+		size = (sizeof(etype));		\
+		break;				\
+	}
+
 /* Will cause compile errors if type is not found. */
 extern void __ftrace_bad_type(void);
 
@@ -339,6 +347,29 @@ extern void __ftrace_bad_type(void);
 	} while (0)
 
 /*
+ * trace_entry_size() returns the size of the given trace type:
+ *
+ *   IF_SIZE(size, var);
+ *
+ * where "var" is just the given trace type.
+ */
+#define trace_entry_size(size, var)					\
+	do {								\
+		IF_SIZE(size, var, struct ftrace_entry, TRACE_FN);	\
+		IF_SIZE(size, var, struct stack_entry, TRACE_STACK);	\
+		IF_SIZE(size, var, struct userstack_entry,		\
+			TRACE_USER_STACK);				\
+		IF_SIZE(size, var, struct print_entry, TRACE_PRINT);	\
+		IF_SIZE(size, var, struct bprint_entry, TRACE_BPRINT);	\
+		IF_SIZE(size, var, struct bputs_entry, TRACE_BPUTS);	\
+		IF_SIZE(size, var, struct trace_branch, TRACE_BRANCH);	\
+		IF_SIZE(size, var, struct ftrace_graph_ent_entry,	\
+			TRACE_GRAPH_ENT);				\
+		IF_SIZE(size, var, struct ftrace_graph_ret_entry,	\
+			TRACE_GRAPH_RET);				\
+	} while (0)
+
+/*
 * An option specific to a tracer. This is a boolean value.
 * The bit is the bit index that sets its value on the
 * flags value in struct tracer_flags.
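For completeness, a sketch of what a registered .write() callback is
handed: trace_generic_commit() passes the raw ring-buffer event data,
i.e. the common struct trace_entry header followed by the type-specific
fields, with the length taken from trace_entry_size(). Since only
TRACE_FN is routed through trace_exports() by this patch, a callback
could decode the record roughly as below. This is an illustration only:
my_decode_write is a made-up name, and struct ftrace_entry / TRACE_FN
are internal to kernel/trace/trace.h, so such code would have to live
inside the tracing core rather than in an ordinary module.

/* Sketch: decoding a TRACE_FN record inside a .write() callback. */
static void my_decode_write(const char *buf, unsigned int len)
{
	const struct trace_entry *ent = (const struct trace_entry *)buf;

	if (ent->type == TRACE_FN && len >= sizeof(struct ftrace_entry)) {
		const struct ftrace_entry *field =
			(const struct ftrace_entry *)buf;

		/* the traced function and its caller */
		pr_debug("%ps <- %ps\n",
			 (void *)field->ip, (void *)field->parent_ip);
	}
}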