From patchwork Tue Feb 20 20:23:05 2024
X-Patchwork-Submitter: Vincent Donnefort
X-Patchwork-Id: 13564481
Message-ID: <20240220202310.2489614-2-vdonnefort@google.com>
In-Reply-To: <20240220202310.2489614-1-vdonnefort@google.com>
Date: Tue, 20 Feb 2024 20:23:05 +0000
Subject: [PATCH v18 1/6] ring-buffer: Zero ring-buffer sub-buffers
From: Vincent Donnefort
To: rostedt@goodmis.org, mhiramat@kernel.org, linux-kernel@vger.kernel.org, linux-trace-kernel@vger.kernel.org
Cc: mathieu.desnoyers@efficios.com, kernel-team@android.com, Vincent Donnefort

In preparation for the ring-buffer memory mapping where each subbuf will
be accessible to user-space, zero all the page allocations.

Signed-off-by: Vincent Donnefort
Reviewed-by: Masami Hiramatsu (Google)

diff --git a/kernel/trace/ring_buffer.c b/kernel/trace/ring_buffer.c
index fd4bfe3ecf01..ca796675c0a1 100644
--- a/kernel/trace/ring_buffer.c
+++ b/kernel/trace/ring_buffer.c
@@ -1472,7 +1472,8 @@ static int __rb_allocate_pages(struct ring_buffer_per_cpu *cpu_buffer,
 
 		list_add(&bpage->list, pages);
 
-		page = alloc_pages_node(cpu_to_node(cpu_buffer->cpu), mflags,
+		page = alloc_pages_node(cpu_to_node(cpu_buffer->cpu),
+					mflags | __GFP_ZERO,
 					cpu_buffer->buffer->subbuf_order);
 		if (!page)
 			goto free_pages;
@@ -1557,7 +1558,8 @@ rb_allocate_cpu_buffer(struct trace_buffer *buffer, long nr_pages, int cpu)
 
 	cpu_buffer->reader_page = bpage;
 
-	page = alloc_pages_node(cpu_to_node(cpu), GFP_KERNEL, cpu_buffer->buffer->subbuf_order);
+	page = alloc_pages_node(cpu_to_node(cpu), GFP_KERNEL | __GFP_ZERO,
+				cpu_buffer->buffer->subbuf_order);
 	if (!page)
 		goto fail_free_reader;
 	bpage->page = page_address(page);
@@ -5525,7 +5527,8 @@ ring_buffer_alloc_read_page(struct trace_buffer *buffer, int cpu)
 	if (bpage->data)
 		goto out;
 
-	page = alloc_pages_node(cpu_to_node(cpu), GFP_KERNEL | __GFP_NORETRY,
+	page = alloc_pages_node(cpu_to_node(cpu),
+				GFP_KERNEL | __GFP_NORETRY | __GFP_ZERO,
 				cpu_buffer->buffer->subbuf_order);
 	if (!page) {
 		kfree(bpage);
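[Editor's note: the same __GFP_ZERO addition lands at three separate call
sites. A hypothetical helper, not part of the patch, can express the
invariant the change enforces in one place; rb_alloc_subbuf_page is an
invented name.]

    /*
     * Sketch only (not in the patch): pages that may later be mapped into
     * user-space must leave the allocator already zeroed, so stale kernel
     * data can never leak through the mapping.
     */
    static struct page *rb_alloc_subbuf_page(int cpu, gfp_t mflags,
                                             unsigned int order)
    {
            return alloc_pages_node(cpu_to_node(cpu), mflags | __GFP_ZERO,
                                    order);
    }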
From patchwork Tue Feb 20 20:23:06 2024
X-Patchwork-Submitter: Vincent Donnefort
X-Patchwork-Id: 13564482
Message-ID: <20240220202310.2489614-3-vdonnefort@google.com>
In-Reply-To: <20240220202310.2489614-1-vdonnefort@google.com>
Date: Tue, 20 Feb 2024 20:23:06 +0000
Subject: [PATCH v18 2/6] ring-buffer: Introducing ring-buffer mapping functions
From: Vincent Donnefort
To: rostedt@goodmis.org, mhiramat@kernel.org, linux-kernel@vger.kernel.org, linux-trace-kernel@vger.kernel.org
Cc: mathieu.desnoyers@efficios.com, kernel-team@android.com, Vincent Donnefort

In preparation for allowing user-space to map a ring-buffer, add a set of
mapping functions:

  ring_buffer_{map,unmap}()
  ring_buffer_map_fault()

And controls on the ring-buffer:

  ring_buffer_map_get_reader()  /* swap reader and head */

Mapping the ring-buffer also involves:

  A unique ID for each subbuf of the ring-buffer, as subbufs are
  currently only identified through their in-kernel VA.

  A meta-page, where the ring-buffer statistics and a description of
  the current reader are stored.

The linear mapping exposes the meta-page and each subbuf of the
ring-buffer, ordered by the unique IDs assigned during the first mapping.

Once mapped, no subbuf can get in or out of the ring-buffer: the buffer
size will remain unmodified and the splice-enabling functions will in
reality simply memcpy the data instead of swapping subbufs.

Signed-off-by: Vincent Donnefort
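[Editor's note: to make the fixed layout concrete, here is a hedged
sketch of the address arithmetic it allows once a mapping is established.
subbuf_addr and map_start are illustrative names, not part of the patch;
struct trace_buffer_meta is the uapi structure introduced below.]

    /* Sketch only: [meta-page][subbuf 0][subbuf 1]... layout arithmetic. */
    static inline void *subbuf_addr(void *map_start,
                                    const struct trace_buffer_meta *meta,
                                    unsigned int id)
    {
            /* IDs range from 0 to meta->nr_subbufs - 1. */
            return (char *)map_start + meta->meta_page_size +
                   (size_t)id * meta->subbuf_size;
    }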
diff --git a/include/linux/ring_buffer.h b/include/linux/ring_buffer.h
index fa802db216f9..0841ba8bab14 100644
--- a/include/linux/ring_buffer.h
+++ b/include/linux/ring_buffer.h
@@ -6,6 +6,8 @@
 #include <linux/seq_file.h>
 #include <linux/poll.h>
 
+#include <uapi/linux/trace_mmap.h>
+
 struct trace_buffer;
 struct ring_buffer_iter;
 
@@ -221,4 +223,9 @@ int trace_rb_cpu_prepare(unsigned int cpu, struct hlist_node *node);
 #define trace_rb_cpu_prepare	NULL
 #endif
 
+int ring_buffer_map(struct trace_buffer *buffer, int cpu);
+int ring_buffer_unmap(struct trace_buffer *buffer, int cpu);
+struct page *ring_buffer_map_fault(struct trace_buffer *buffer, int cpu,
+				   unsigned long pgoff);
+int ring_buffer_map_get_reader(struct trace_buffer *buffer, int cpu);
 #endif /* _LINUX_RING_BUFFER_H */
diff --git a/include/uapi/linux/trace_mmap.h b/include/uapi/linux/trace_mmap.h
new file mode 100644
index 000000000000..ffcd8dfcaa4f
--- /dev/null
+++ b/include/uapi/linux/trace_mmap.h
@@ -0,0 +1,46 @@
+/* SPDX-License-Identifier: GPL-2.0 WITH Linux-syscall-note */
+#ifndef _TRACE_MMAP_H_
+#define _TRACE_MMAP_H_
+
+#include <linux/types.h>
+
+/**
+ * struct trace_buffer_meta - Ring-buffer Meta-page description
+ * @meta_page_size:	Size of this meta-page.
+ * @meta_struct_len:	Size of this structure.
+ * @subbuf_size:	Size of each sub-buffer.
+ * @nr_subbufs:		Number of subbufs in the ring-buffer, including the reader.
+ * @reader.lost_events:	Number of events lost at the time of the reader swap.
+ * @reader.id:		subbuf ID of the current reader. ID range [0 : @nr_subbufs - 1]
+ * @reader.read:	Number of bytes read on the reader subbuf.
+ * @flags:		Placeholder for now, 0 until new features are supported.
+ * @entries:		Number of entries in the ring-buffer.
+ * @overrun:		Number of entries lost in the ring-buffer.
+ * @read:		Number of entries that have been read.
+ * @Reserved1:		Reserved for future use.
+ * @Reserved2:		Reserved for future use.
+ */
+struct trace_buffer_meta {
+	__u32	meta_page_size;
+	__u32	meta_struct_len;
+
+	__u32	subbuf_size;
+	__u32	nr_subbufs;
+
+	struct {
+		__u64	lost_events;
+		__u32	id;
+		__u32	read;
+	} reader;
+
+	__u64	flags;
+
+	__u64	entries;
+	__u64	overrun;
+	__u64	read;
+
+	__u64	Reserved1;
+	__u64	Reserved2;
+};
+
+#endif /* _TRACE_MMAP_H_ */
diff --git a/kernel/trace/ring_buffer.c b/kernel/trace/ring_buffer.c
index ca796675c0a1..1d7d7a701867 100644
--- a/kernel/trace/ring_buffer.c
+++ b/kernel/trace/ring_buffer.c
@@ -9,6 +9,7 @@
 #include <linux/trace_events.h>
 #include <linux/ring_buffer.h>
 #include <linux/trace_clock.h>
+#include <linux/trace_mmap.h>
 #include <linux/sched/clock.h>
 #include <linux/trace_seq.h>
 #include <linux/spinlock.h>
@@ -338,6 +339,7 @@ struct buffer_page {
 	local_t		 entries;	/* entries on this page */
 	unsigned long	 real_end;	/* real end of data */
 	unsigned	 order;		/* order of the page */
+	u32		 id;		/* ID for external mapping */
 	struct buffer_data_page *page;	/* Actual data page */
 };
 
@@ -484,6 +486,12 @@ struct ring_buffer_per_cpu {
 	u64				read_stamp;
 	/* pages removed since last reset */
 	unsigned long			pages_removed;
+
+	unsigned int			mapped;
+	struct mutex			mapping_lock;
+	unsigned long			*subbuf_ids;	/* ID to subbuf addr */
+	struct trace_buffer_meta	*meta_page;
+
 	/* ring buffer pages to update, > 0 to add, < 0 to remove */
 	long				nr_pages_to_update;
 	struct list_head		new_pages; /* new pages to add */
@@ -1548,6 +1556,7 @@ rb_allocate_cpu_buffer(struct trace_buffer *buffer, long nr_pages, int cpu)
 	init_irq_work(&cpu_buffer->irq_work.work, rb_wake_up_waiters);
 	init_waitqueue_head(&cpu_buffer->irq_work.waiters);
 	init_waitqueue_head(&cpu_buffer->irq_work.full_waiters);
+	mutex_init(&cpu_buffer->mapping_lock);
 
 	bpage = kzalloc_node(ALIGN(sizeof(*bpage), cache_line_size()),
 			    GFP_KERNEL, cpu_to_node(cpu));
@@ -1738,8 +1747,6 @@ bool ring_buffer_time_stamp_abs(struct trace_buffer *buffer)
 	return buffer->time_stamp_abs;
 }
 
-static void rb_reset_cpu(struct ring_buffer_per_cpu *cpu_buffer);
-
 static inline unsigned long rb_page_entries(struct buffer_page *bpage)
 {
 	return local_read(&bpage->entries) & RB_WRITE_MASK;
@@ -5160,6 +5167,22 @@ static void rb_clear_buffer_page(struct buffer_page *page)
 	page->read = 0;
 }
 
+static void rb_update_meta_page(struct ring_buffer_per_cpu *cpu_buffer)
+{
+	struct trace_buffer_meta *meta = cpu_buffer->meta_page;
+
+	meta->reader.read = cpu_buffer->reader_page->read;
+	meta->reader.id = cpu_buffer->reader_page->id;
+	meta->reader.lost_events = cpu_buffer->lost_events;
+
+	meta->entries = local_read(&cpu_buffer->entries);
+	meta->overrun = local_read(&cpu_buffer->overrun);
+	meta->read = cpu_buffer->read;
+
+	/* Some archs do not have data cache coherency between kernel and user-space */
+	flush_dcache_folio(virt_to_folio(cpu_buffer->meta_page));
+}
+
 static void
 rb_reset_cpu(struct ring_buffer_per_cpu *cpu_buffer)
 {
@@ -5204,6 +5227,9 @@ rb_reset_cpu(struct ring_buffer_per_cpu *cpu_buffer)
 	cpu_buffer->lost_events = 0;
 	cpu_buffer->last_overrun = 0;
 
+	if (cpu_buffer->mapped)
+		rb_update_meta_page(cpu_buffer);
+
 	rb_head_page_activate(cpu_buffer);
 	cpu_buffer->pages_removed = 0;
 }
@@ -5418,6 +5444,12 @@ int ring_buffer_swap_cpu(struct trace_buffer *buffer_a,
 	cpu_buffer_a = buffer_a->buffers[cpu];
 	cpu_buffer_b = buffer_b->buffers[cpu];
 
+	/* It's up to the callers to not try to swap mapped buffers */
+	if (WARN_ON_ONCE(cpu_buffer_a->mapped || cpu_buffer_b->mapped)) {
+		ret = -EBUSY;
+		goto out;
+	}
+
 	/* At least make sure the two buffers are somewhat the same */
 	if (cpu_buffer_a->nr_pages != cpu_buffer_b->nr_pages)
 		goto out;
@@ -5682,7 +5714,8 @@ int ring_buffer_read_page(struct trace_buffer *buffer,
 	 * Otherwise, we can simply swap the page with the one passed in.
 	 */
 	if (read || (len < (commit - read)) ||
-	    cpu_buffer->reader_page == cpu_buffer->commit_page) {
+	    cpu_buffer->reader_page == cpu_buffer->commit_page ||
+	    cpu_buffer->mapped) {
 		struct buffer_data_page *rpage = cpu_buffer->reader_page->page;
 		unsigned int rpos = read;
 		unsigned int pos = 0;
@@ -5901,6 +5934,11 @@ int ring_buffer_subbuf_order_set(struct trace_buffer *buffer, int order)
 
 		cpu_buffer = buffer->buffers[cpu];
 
+		if (cpu_buffer->mapped) {
+			err = -EBUSY;
+			goto error;
+		}
+
 		/* Update the number of pages to match the new size */
 		nr_pages = old_size * buffer->buffers[cpu]->nr_pages;
 		nr_pages = DIV_ROUND_UP(nr_pages, buffer->subbuf_size);
@@ -6002,6 +6040,338 @@ int ring_buffer_subbuf_order_set(struct trace_buffer *buffer, int order)
 }
 EXPORT_SYMBOL_GPL(ring_buffer_subbuf_order_set);
 
+#define subbuf_page(off, start) \
+	virt_to_page((void *)(start + (off << PAGE_SHIFT)))
+
+#define foreach_subbuf_page(sub_order, start, page)		\
+	page = subbuf_page(0, (start));				\
+	for (int __off = 0; __off < (1 << (sub_order));		\
+	     __off++, page = subbuf_page(__off, (start)))
+
+static inline void subbuf_map_prepare(unsigned long subbuf_start, int order)
+{
+	struct page *page;
+
+	/*
+	 * When allocating order > 0 pages, only the first struct page has a
+	 * refcount > 1. Increasing the refcount here ensures none of the
+	 * struct pages composing the sub-buffer is freed when the mapping is
+	 * closed.
+	 */
+	foreach_subbuf_page(order, subbuf_start, page)
+		page_ref_inc(page);
+}
+
+static inline void subbuf_unmap(unsigned long subbuf_start, int order)
+{
+	struct page *page;
+
+	foreach_subbuf_page(order, subbuf_start, page) {
+		page_ref_dec(page);
+		page->mapping = NULL;
+	}
+}
+
+static void rb_free_subbuf_ids(struct ring_buffer_per_cpu *cpu_buffer)
+{
+	int sub_id;
+
+	for (sub_id = 0; sub_id < cpu_buffer->nr_pages + 1; sub_id++)
+		subbuf_unmap(cpu_buffer->subbuf_ids[sub_id],
+			     cpu_buffer->buffer->subbuf_order);
+
+	kfree(cpu_buffer->subbuf_ids);
+	cpu_buffer->subbuf_ids = NULL;
+}
+
+static int rb_alloc_meta_page(struct ring_buffer_per_cpu *cpu_buffer)
+{
+	struct page *page;
+
+	if (cpu_buffer->meta_page)
+		return 0;
+
+	page = alloc_page(GFP_USER | __GFP_ZERO);
+	if (!page)
+		return -ENOMEM;
+
+	cpu_buffer->meta_page = page_to_virt(page);
+
+	return 0;
+}
+
+static void rb_free_meta_page(struct ring_buffer_per_cpu *cpu_buffer)
+{
+	unsigned long addr = (unsigned long)cpu_buffer->meta_page;
+
+	if (!addr)
+		return;
+
+	virt_to_page((void *)addr)->mapping = NULL;
+	free_page(addr);
+	cpu_buffer->meta_page = NULL;
+}
+
+static void rb_setup_ids_meta_page(struct ring_buffer_per_cpu *cpu_buffer,
+				   unsigned long *subbuf_ids)
+{
+	struct trace_buffer_meta *meta = cpu_buffer->meta_page;
+	unsigned int nr_subbufs = cpu_buffer->nr_pages + 1;
+	struct buffer_page *first_subbuf, *subbuf;
+	int id = 0;
+
+	subbuf_ids[id] = (unsigned long)cpu_buffer->reader_page->page;
+	subbuf_map_prepare(subbuf_ids[id], cpu_buffer->buffer->subbuf_order);
+	cpu_buffer->reader_page->id = id++;
+
+	first_subbuf = subbuf = rb_set_head_page(cpu_buffer);
+	do {
+		if (WARN_ON(id >= nr_subbufs))
+			break;
+
+		subbuf_ids[id] = (unsigned long)subbuf->page;
+		subbuf->id = id;
+		subbuf_map_prepare(subbuf_ids[id], cpu_buffer->buffer->subbuf_order);
+
+		rb_inc_page(&subbuf);
+		id++;
+	} while (subbuf != first_subbuf);
+
+	/* install subbuf ID to kern VA translation */
+	cpu_buffer->subbuf_ids = subbuf_ids;
+
+	meta->meta_page_size = PAGE_SIZE;
+	meta->meta_struct_len = sizeof(*meta);
+	meta->nr_subbufs = nr_subbufs;
+	meta->subbuf_size = cpu_buffer->buffer->subbuf_size + BUF_PAGE_HDR_SIZE;
+
+	rb_update_meta_page(cpu_buffer);
+}
+
+static inline struct ring_buffer_per_cpu *
+rb_get_mapped_buffer(struct trace_buffer *buffer, int cpu)
+{
+	struct ring_buffer_per_cpu *cpu_buffer;
+
+	if (!cpumask_test_cpu(cpu, buffer->cpumask))
+		return ERR_PTR(-EINVAL);
+
+	cpu_buffer = buffer->buffers[cpu];
+
+	mutex_lock(&cpu_buffer->mapping_lock);
+
+	if (!cpu_buffer->mapped) {
+		mutex_unlock(&cpu_buffer->mapping_lock);
+		return ERR_PTR(-ENODEV);
+	}
+
+	return cpu_buffer;
+}
+
+static inline void rb_put_mapped_buffer(struct ring_buffer_per_cpu *cpu_buffer)
+{
+	mutex_unlock(&cpu_buffer->mapping_lock);
+}
+
+/*
+ * Fast-path for rb_buffer_(un)map(). Called whenever the meta-page doesn't need
+ * to be set-up or torn-down.
+ */
+static int __rb_inc_dec_mapped(struct trace_buffer *buffer,
+			       struct ring_buffer_per_cpu *cpu_buffer,
+			       bool inc)
+{
+	unsigned long flags;
+
+	lockdep_assert_held(&cpu_buffer->mapping_lock);
+
+	if (inc && cpu_buffer->mapped == UINT_MAX)
+		return -EBUSY;
+
+	if (WARN_ON(!inc && cpu_buffer->mapped == 0))
+		return -EINVAL;
+
+	mutex_lock(&buffer->mutex);
+	raw_spin_lock_irqsave(&cpu_buffer->reader_lock, flags);
+
+	if (inc)
+		cpu_buffer->mapped++;
+	else
+		cpu_buffer->mapped--;
+
+	raw_spin_unlock_irqrestore(&cpu_buffer->reader_lock, flags);
+	mutex_unlock(&buffer->mutex);
+
+	return 0;
+}
+
+int ring_buffer_map(struct trace_buffer *buffer, int cpu)
+{
+	struct ring_buffer_per_cpu *cpu_buffer;
+	unsigned long flags, *subbuf_ids;
+	int err = 0;
+
+	if (!cpumask_test_cpu(cpu, buffer->cpumask))
+		return -EINVAL;
+
+	cpu_buffer = buffer->buffers[cpu];
+
+	mutex_lock(&cpu_buffer->mapping_lock);
+
+	if (cpu_buffer->mapped) {
+		err = __rb_inc_dec_mapped(buffer, cpu_buffer, true);
+		mutex_unlock(&cpu_buffer->mapping_lock);
+		return err;
+	}
+
+	/* prevent another thread from changing buffer/sub-buffer sizes */
+	mutex_lock(&buffer->mutex);
+
+	err = rb_alloc_meta_page(cpu_buffer);
+	if (err)
+		goto unlock;
+
+	/* subbuf_ids include the reader while nr_pages does not */
+	subbuf_ids = kcalloc(cpu_buffer->nr_pages + 1, sizeof(*subbuf_ids), GFP_KERNEL);
+	if (!subbuf_ids) {
+		rb_free_meta_page(cpu_buffer);
+		err = -ENOMEM;
+		goto unlock;
+	}
+
+	atomic_inc(&cpu_buffer->resize_disabled);
+
+	/*
+	 * Lock all readers to block any subbuf swap until the subbuf IDs are
+	 * assigned.
+	 */
+	raw_spin_lock_irqsave(&cpu_buffer->reader_lock, flags);
+
+	rb_setup_ids_meta_page(cpu_buffer, subbuf_ids);
+	cpu_buffer->mapped = 1;
+
+	raw_spin_unlock_irqrestore(&cpu_buffer->reader_lock, flags);
+unlock:
+	mutex_unlock(&buffer->mutex);
+	mutex_unlock(&cpu_buffer->mapping_lock);
+
+	return err;
+}
+
+int ring_buffer_unmap(struct trace_buffer *buffer, int cpu)
+{
+	struct ring_buffer_per_cpu *cpu_buffer;
+	unsigned long flags;
+	int err = 0;
+
+	if (!cpumask_test_cpu(cpu, buffer->cpumask))
+		return -EINVAL;
+
+	cpu_buffer = buffer->buffers[cpu];
+
+	mutex_lock(&cpu_buffer->mapping_lock);
+
+	if (!cpu_buffer->mapped) {
+		err = -ENODEV;
+		goto out;
+	} else if (cpu_buffer->mapped > 1) {
+		__rb_inc_dec_mapped(buffer, cpu_buffer, false);
+		goto out;
+	}
+
+	mutex_lock(&buffer->mutex);
+	raw_spin_lock_irqsave(&cpu_buffer->reader_lock, flags);
+
+	cpu_buffer->mapped = 0;
+
+	raw_spin_unlock_irqrestore(&cpu_buffer->reader_lock, flags);
+
+	rb_free_subbuf_ids(cpu_buffer);
+	rb_free_meta_page(cpu_buffer);
+	atomic_dec(&cpu_buffer->resize_disabled);
+
+	mutex_unlock(&buffer->mutex);
+out:
+	mutex_unlock(&cpu_buffer->mapping_lock);
+
+	return err;
+}
+
+/*
+ *   +--------------+  pgoff == 0
+ *   |   meta page  |
+ *   +--------------+  pgoff == 1
+ *   | subbuffer 0  |
+ *   +--------------+  pgoff == 1 + (1 << subbuf_order)
+ *   | subbuffer 1  |
+ *         ...
+ */
+struct page *ring_buffer_map_fault(struct trace_buffer *buffer, int cpu,
+				   unsigned long pgoff)
+{
+	struct ring_buffer_per_cpu *cpu_buffer = buffer->buffers[cpu];
+	unsigned long subbuf_id, subbuf_offset, addr;
+	struct page *page;
+
+	if (!pgoff)
+		return virt_to_page((void *)cpu_buffer->meta_page);
+
+	pgoff--;
+
+	subbuf_id = pgoff >> buffer->subbuf_order;
+	if (subbuf_id > cpu_buffer->nr_pages)
+		return NULL;
+
+	subbuf_offset = pgoff & ((1UL << buffer->subbuf_order) - 1);
+	addr = cpu_buffer->subbuf_ids[subbuf_id] + (subbuf_offset * PAGE_SIZE);
+	page = virt_to_page((void *)addr);
+
+	return page;
+}
+
+int ring_buffer_map_get_reader(struct trace_buffer *buffer, int cpu)
+{
+	struct ring_buffer_per_cpu *cpu_buffer;
+	unsigned long reader_size;
+	unsigned long flags;
+
+	cpu_buffer = rb_get_mapped_buffer(buffer, cpu);
+	if (IS_ERR(cpu_buffer))
+		return (int)PTR_ERR(cpu_buffer);
+
+	raw_spin_lock_irqsave(&cpu_buffer->reader_lock, flags);
+consume:
+	if (rb_per_cpu_empty(cpu_buffer))
+		goto out;
+
+	reader_size = rb_page_size(cpu_buffer->reader_page);
+
+	/*
+	 * There is data to be read on the current reader page. We can return
+	 * to the caller, but before that, we assume the caller will read
+	 * everything. Let's update the kernel reader accordingly.
+	 */
+	if (cpu_buffer->reader_page->read < reader_size) {
+		while (cpu_buffer->reader_page->read < reader_size)
+			rb_advance_reader(cpu_buffer);
+		goto out;
+	}
+
+	if (WARN_ON(!rb_get_reader_page(cpu_buffer)))
+		goto out;
+
+	goto consume;
+out:
+	/* Some archs do not have data cache coherency between kernel and user-space */
+	flush_dcache_folio(virt_to_folio(cpu_buffer->reader_page->page));
+
+	rb_update_meta_page(cpu_buffer);
+
+	raw_spin_unlock_irqrestore(&cpu_buffer->reader_lock, flags);
+	rb_put_mapped_buffer(cpu_buffer);
+
+	return 0;
+}
+
 /*
  * We only allocate new buffers, never free them if the CPU goes down.
  * If we were to free the buffer, then the user would lose any trace that was in
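[Editor's note: for orientation, a hedged sketch of how a mapper is
expected to drive this API, roughly what the mmap() path added later in
this series does. example_map_and_consume is an invented name and error
handling is reduced to the minimum.]

    /* Sketch only: map one CPU's ring-buffer, advance the reader once, unmap. */
    static int example_map_and_consume(struct trace_buffer *buffer, int cpu)
    {
            struct page *meta_page;
            int err;

            err = ring_buffer_map(buffer, cpu);     /* first call sets up meta-page + IDs */
            if (err)
                    return err;

            meta_page = ring_buffer_map_fault(buffer, cpu, 0);  /* pgoff 0 is the meta-page */
            if (!meta_page) {
                    err = -EINVAL;
                    goto unmap;
            }

            err = ring_buffer_map_get_reader(buffer, cpu);  /* swap reader/head, update meta */
    unmap:
            ring_buffer_unmap(buffer, cpu);         /* drops the mapped refcount */
            return err;
    }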
From patchwork Tue Feb 20 20:23:07 2024
X-Patchwork-Submitter: Vincent Donnefort
X-Patchwork-Id: 13564484
Message-ID: <20240220202310.2489614-4-vdonnefort@google.com>
In-Reply-To: <20240220202310.2489614-1-vdonnefort@google.com>
Date: Tue, 20 Feb 2024 20:23:07 +0000
Subject: [PATCH v18 3/6] tracing: Add snapshot refcount
From: Vincent Donnefort
To: rostedt@goodmis.org, mhiramat@kernel.org, linux-kernel@vger.kernel.org, linux-trace-kernel@vger.kernel.org
Cc: mathieu.desnoyers@efficios.com, kernel-team@android.com, Vincent Donnefort

When a ring-buffer is memory mapped by user-space, no trace or
ring-buffer swap is possible. This means the snapshot feature is
mutually exclusive with the memory mapping. Having a refcount on
snapshot users will help to know if a mapping is possible or not.

Instead of relying on the global trace_types_lock, a new spinlock is
introduced to serialize accesses to trace_array->snapshot. This is
intended to allow accessing that variable in a context where the mmap
lock is already held.

Signed-off-by: Vincent Donnefort
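[Editor's note: in outline, every snapshot user now brackets its lifetime
with an arm/disarm pair instead of a bare allocation. A hedged sketch of
the calling convention the patch establishes; the functions below are
illustrative, not from the patch.]

    /* Sketch only: a feature that may swap to the snapshot buffer. */
    static int example_snapshot_user_enable(struct trace_array *tr)
    {
            int ret;

            ret = tracing_arm_snapshot(tr); /* allocates buffer, takes a reference */
            if (ret)
                    return ret;     /* e.g. -EBUSY if the refcount saturated */

            /* ... register the trigger/tracer that performs swaps ... */

            return 0;
    }

    static void example_snapshot_user_disable(struct trace_array *tr)
    {
            /* ... unregister first, then drop the reference ... */
            tracing_disarm_snapshot(tr);
    }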
diff --git a/kernel/trace/trace.c b/kernel/trace/trace.c
index 8198bfc54b58..8ae7c2cb63a0 100644
--- a/kernel/trace/trace.c
+++ b/kernel/trace/trace.c
@@ -1301,6 +1301,50 @@ static void free_snapshot(struct trace_array *tr)
 	tr->allocated_snapshot = false;
 }
 
+static int tracing_arm_snapshot_locked(struct trace_array *tr)
+{
+	int ret;
+
+	lockdep_assert_held(&trace_types_lock);
+
+	spin_lock(&tr->snapshot_trigger_lock);
+	if (tr->snapshot == UINT_MAX) {
+		spin_unlock(&tr->snapshot_trigger_lock);
+		return -EBUSY;
+	}
+
+	tr->snapshot++;
+	spin_unlock(&tr->snapshot_trigger_lock);
+
+	ret = tracing_alloc_snapshot_instance(tr);
+	if (ret) {
+		spin_lock(&tr->snapshot_trigger_lock);
+		tr->snapshot--;
+		spin_unlock(&tr->snapshot_trigger_lock);
+	}
+
+	return ret;
+}
+
+int tracing_arm_snapshot(struct trace_array *tr)
+{
+	int ret;
+
+	mutex_lock(&trace_types_lock);
+	ret = tracing_arm_snapshot_locked(tr);
+	mutex_unlock(&trace_types_lock);
+
+	return ret;
+}
+
+void tracing_disarm_snapshot(struct trace_array *tr)
+{
+	spin_lock(&tr->snapshot_trigger_lock);
+	if (!WARN_ON(!tr->snapshot))
+		tr->snapshot--;
+	spin_unlock(&tr->snapshot_trigger_lock);
+}
+
 /**
  * tracing_alloc_snapshot - allocate snapshot buffer.
  *
@@ -1374,10 +1418,6 @@ int tracing_snapshot_cond_enable(struct trace_array *tr, void *cond_data,
 
 	mutex_lock(&trace_types_lock);
 
-	ret = tracing_alloc_snapshot_instance(tr);
-	if (ret)
-		goto fail_unlock;
-
 	if (tr->current_trace->use_max_tr) {
 		ret = -EBUSY;
 		goto fail_unlock;
@@ -1396,6 +1436,10 @@ int tracing_snapshot_cond_enable(struct trace_array *tr, void *cond_data,
 		goto fail_unlock;
 	}
 
+	ret = tracing_arm_snapshot_locked(tr);
+	if (ret)
+		goto fail_unlock;
+
 	local_irq_disable();
 	arch_spin_lock(&tr->max_lock);
 	tr->cond_snapshot = cond_snapshot;
@@ -1440,6 +1484,8 @@ int tracing_snapshot_cond_disable(struct trace_array *tr)
 	arch_spin_unlock(&tr->max_lock);
 	local_irq_enable();
 
+	tracing_disarm_snapshot(tr);
+
 	return ret;
 }
 EXPORT_SYMBOL_GPL(tracing_snapshot_cond_disable);
@@ -1482,6 +1528,7 @@ int tracing_snapshot_cond_disable(struct trace_array *tr)
 }
 EXPORT_SYMBOL_GPL(tracing_snapshot_cond_disable);
 #define free_snapshot(tr)	do { } while (0)
+#define tracing_arm_snapshot_locked(tr) ({ -EBUSY; })
 #endif /* CONFIG_TRACER_SNAPSHOT */
 
 void tracer_tracing_off(struct trace_array *tr)
@@ -6595,11 +6642,12 @@ int tracing_set_tracer(struct trace_array *tr, const char *buf)
 		 */
 		synchronize_rcu();
 		free_snapshot(tr);
+		tracing_disarm_snapshot(tr);
 	}
 
-	if (t->use_max_tr && !tr->allocated_snapshot) {
-		ret = tracing_alloc_snapshot_instance(tr);
-		if (ret < 0)
+	if (t->use_max_tr) {
+		ret = tracing_arm_snapshot_locked(tr);
+		if (ret)
 			goto out;
 	}
 #else
@@ -6608,8 +6656,13 @@ int tracing_set_tracer(struct trace_array *tr, const char *buf)
 
 	if (t->init) {
 		ret = tracer_init(t, tr);
-		if (ret)
+		if (ret) {
+#ifdef CONFIG_TRACER_MAX_TRACE
+			if (t->use_max_tr)
+				tracing_disarm_snapshot(tr);
+#endif
 			goto out;
+		}
 	}
 
 	tr->current_trace = t;
@@ -7711,10 +7764,11 @@ tracing_snapshot_write(struct file *filp, const char __user *ubuf, size_t cnt,
 		if (tr->allocated_snapshot)
 			ret = resize_buffer_duplicate_size(&tr->max_buffer,
 					&tr->array_buffer, iter->cpu_file);
-		else
-			ret = tracing_alloc_snapshot_instance(tr);
-		if (ret < 0)
+
+		ret = tracing_arm_snapshot_locked(tr);
+		if (ret)
 			break;
+
 		/* Now, we're going to swap */
 		if (iter->cpu_file == RING_BUFFER_ALL_CPUS) {
 			local_irq_disable();
@@ -7724,6 +7778,7 @@ tracing_snapshot_write(struct file *filp, const char __user *ubuf, size_t cnt,
 			smp_call_function_single(iter->cpu_file, tracing_swap_cpu_buffer,
 						 (void *)tr, 1);
 		}
+		tracing_disarm_snapshot(tr);
 		break;
 	default:
 		if (tr->allocated_snapshot) {
@@ -8848,8 +8903,13 @@ ftrace_trace_snapshot_callback(struct trace_array *tr, struct ftrace_hash *hash,
 
 	ops = param ? &snapshot_count_probe_ops :  &snapshot_probe_ops;
 
-	if (glob[0] == '!')
-		return unregister_ftrace_function_probe_func(glob+1, tr, ops);
+	if (glob[0] == '!') {
+		ret = unregister_ftrace_function_probe_func(glob+1, tr, ops);
+		if (!ret)
+			tracing_disarm_snapshot(tr);
+
+		return ret;
+	}
 
 	if (!param)
 		goto out_reg;
@@ -8868,12 +8928,13 @@ ftrace_trace_snapshot_callback(struct trace_array *tr, struct ftrace_hash *hash,
 		return ret;
 
  out_reg:
-	ret = tracing_alloc_snapshot_instance(tr);
+	ret = tracing_arm_snapshot(tr);
 	if (ret < 0)
 		goto out;
 
 	ret = register_ftrace_function_probe(glob, tr, ops, count);
-
+	if (ret < 0)
+		tracing_disarm_snapshot(tr);
 out:
	return ret < 0 ? ret : 0;
 }
@@ -9680,7 +9741,9 @@ trace_array_create_systems(const char *name, const char *systems)
 	raw_spin_lock_init(&tr->start_lock);
 
 	tr->max_lock = (arch_spinlock_t)__ARCH_SPIN_LOCK_UNLOCKED;
-
+#ifdef CONFIG_TRACER_MAX_TRACE
+	spin_lock_init(&tr->snapshot_trigger_lock);
+#endif
 	tr->current_trace = &nop_trace;
 
 	INIT_LIST_HEAD(&tr->systems);
@@ -10650,7 +10713,9 @@ __init static int tracer_alloc_buffers(void)
 
 	global_trace.current_trace = &nop_trace;
 	global_trace.max_lock = (arch_spinlock_t)__ARCH_SPIN_LOCK_UNLOCKED;
-
+#ifdef CONFIG_TRACER_MAX_TRACE
+	spin_lock_init(&global_trace.snapshot_trigger_lock);
+#endif
 	ftrace_init_global_array_ops(&global_trace);
 
 	init_trace_flags_index(&global_trace);
diff --git a/kernel/trace/trace.h b/kernel/trace/trace.h
index 00f873910c5d..bd312e9afe25 100644
--- a/kernel/trace/trace.h
+++ b/kernel/trace/trace.h
@@ -334,8 +334,8 @@ struct trace_array {
 	 */
 	struct array_buffer	max_buffer;
 	bool			allocated_snapshot;
-#endif
-#ifdef CONFIG_TRACER_MAX_TRACE
+	spinlock_t		snapshot_trigger_lock;
+	unsigned int		snapshot;
 	unsigned long		max_latency;
 #ifdef CONFIG_FSNOTIFY
 	struct dentry		*d_max_latency;
@@ -1973,12 +1973,16 @@ static inline void trace_event_eval_update(struct trace_eval_map **map, int len)
 #ifdef CONFIG_TRACER_SNAPSHOT
 void tracing_snapshot_instance(struct trace_array *tr);
 int tracing_alloc_snapshot_instance(struct trace_array *tr);
+int tracing_arm_snapshot(struct trace_array *tr);
+void tracing_disarm_snapshot(struct trace_array *tr);
 #else
 static inline void tracing_snapshot_instance(struct trace_array *tr) { }
 static inline int tracing_alloc_snapshot_instance(struct trace_array *tr)
 {
 	return 0;
 }
+static inline int tracing_arm_snapshot(struct trace_array *tr) { return 0; }
+static inline void tracing_disarm_snapshot(struct trace_array *tr) { }
 #endif
 
 #ifdef CONFIG_PREEMPT_TRACER
diff --git a/kernel/trace/trace_events_trigger.c b/kernel/trace/trace_events_trigger.c
index b33c3861fbbb..62e4f58b8671 100644
--- a/kernel/trace/trace_events_trigger.c
+++ b/kernel/trace/trace_events_trigger.c
@@ -597,20 +597,12 @@ static int register_trigger(char *glob,
 	return ret;
 }
 
-/**
- * unregister_trigger - Generic event_command @unreg implementation
- * @glob: The raw string used to register the trigger
- * @test: Trigger-specific data used to find the trigger to remove
- * @file: The trace_event_file associated with the event
- *
- * Common implementation for event trigger unregistration.
- *
- * Usually used directly as the @unreg method in event command
- * implementations.
+/*
+ * True if the trigger was found and unregistered, else false.
  */
-static void unregister_trigger(char *glob,
-			       struct event_trigger_data *test,
-			       struct trace_event_file *file)
+static bool try_unregister_trigger(char *glob,
+				   struct event_trigger_data *test,
+				   struct trace_event_file *file)
 {
 	struct event_trigger_data *data = NULL, *iter;
 
@@ -626,8 +618,32 @@ static void unregister_trigger(char *glob,
 		}
 	}
 
-	if (data && data->ops->free)
-		data->ops->free(data);
+	if (data) {
+		if (data->ops->free)
+			data->ops->free(data);
+
+		return true;
+	}
+
+	return false;
+}
+
+/**
+ * unregister_trigger - Generic event_command @unreg implementation
+ * @glob: The raw string used to register the trigger
+ * @test: Trigger-specific data used to find the trigger to remove
+ * @file: The trace_event_file associated with the event
+ *
+ * Common implementation for event trigger unregistration.
+ *
+ * Usually used directly as the @unreg method in event command
+ * implementations.
+ */
+static void unregister_trigger(char *glob,
+			       struct event_trigger_data *test,
+			       struct trace_event_file *file)
+{
+	try_unregister_trigger(glob, test, file);
 }
 
 /*
@@ -1470,7 +1486,7 @@ register_snapshot_trigger(char *glob,
 			  struct event_trigger_data *data,
 			  struct trace_event_file *file)
 {
-	int ret = tracing_alloc_snapshot_instance(file->tr);
+	int ret = tracing_arm_snapshot(file->tr);
 
 	if (ret < 0)
 		return ret;
@@ -1478,6 +1494,14 @@ register_snapshot_trigger(char *glob,
 	return register_trigger(glob, data, file);
 }
 
+static void unregister_snapshot_trigger(char *glob,
+					struct event_trigger_data *data,
+					struct trace_event_file *file)
+{
+	if (try_unregister_trigger(glob, data, file))
+		tracing_disarm_snapshot(file->tr);
+}
+
 static int
 snapshot_trigger_print(struct seq_file *m, struct event_trigger_data *data)
 {
@@ -1510,7 +1534,7 @@ static struct event_command trigger_snapshot_cmd = {
 	.trigger_type		= ETT_SNAPSHOT,
 	.parse			= event_trigger_parse,
 	.reg			= register_snapshot_trigger,
-	.unreg			= unregister_trigger,
+	.unreg			= unregister_snapshot_trigger,
 	.get_trigger_ops	= snapshot_get_trigger_ops,
 	.set_filter		= set_trigger_filter,
 };
From patchwork Tue Feb 20 20:23:08 2024
X-Patchwork-Submitter: Vincent Donnefort
X-Patchwork-Id: 13564483
Message-ID: <20240220202310.2489614-5-vdonnefort@google.com>
In-Reply-To: <20240220202310.2489614-1-vdonnefort@google.com>
Date: Tue, 20 Feb 2024 20:23:08 +0000
Subject: [PATCH v18 4/6] tracing: Allow user-space mapping of the ring-buffer
From: Vincent Donnefort
To: rostedt@goodmis.org, mhiramat@kernel.org, linux-kernel@vger.kernel.org, linux-trace-kernel@vger.kernel.org
Cc: mathieu.desnoyers@efficios.com, kernel-team@android.com, Vincent Donnefort

Currently, user-space extracts data from the ring-buffer via splice,
which is handy for storage or network sharing. However, due to splice
limitations, it is impossible to do real-time analysis without a copy.

A solution for that problem is to let user-space map the ring-buffer
directly.

The mapping is exposed via the per-CPU file trace_pipe_raw. The first
element of the mapping is the meta-page. It is followed by each
subbuffer constituting the ring-buffer, ordered by their unique page ID:

  * Meta-page -- include/uapi/linux/trace_mmap.h for a description
  * Subbuf ID 0
  * Subbuf ID 1
    ...

It is therefore easy to translate a subbuf ID into an offset in the
mapping:

  reader_id = meta->reader.id;
  reader_offset = meta->meta_page_size + reader_id * meta->subbuf_size;

When new data is available, the mapper must call a newly introduced
ioctl: TRACE_MMAP_IOCTL_GET_READER. This will update the meta-page
reader ID to point to the next reader containing unread data.

Mapping will prevent snapshot and buffer size modifications.

Signed-off-by: Vincent Donnefort
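[Editor's note: a hedged sketch of the consumption loop this enables,
complementing the snippet above. It assumes the meta-page and the data
area are already mmap()ed, as shown in the Documentation patch later in
this series; stream and process are invented names and error handling is
elided.]

    #include <stddef.h>
    #include <sys/ioctl.h>
    #include <linux/trace_mmap.h>

    /* Sketch only: process() is a placeholder for the caller's analysis. */
    static void stream(int fd, struct trace_buffer_meta *meta, void *data,
                       void (*process)(const void *subbuf, size_t size))
    {
            /* Without O_NONBLOCK, the ioctl waits until unread data is available. */
            while (ioctl(fd, TRACE_MMAP_IOCTL_GET_READER) == 0) {
                    unsigned int id = meta->reader.id;

                    process((const char *)data + (size_t)id * meta->subbuf_size,
                            meta->subbuf_size);
            }
    }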
diff --git a/include/uapi/linux/trace_mmap.h b/include/uapi/linux/trace_mmap.h
index ffcd8dfcaa4f..d25b9d504a7c 100644
--- a/include/uapi/linux/trace_mmap.h
+++ b/include/uapi/linux/trace_mmap.h
@@ -43,4 +43,6 @@ struct trace_buffer_meta {
 	__u64	Reserved2;
 };
 
+#define TRACE_MMAP_IOCTL_GET_READER		_IO('T', 0x1)
+
 #endif /* _TRACE_MMAP_H_ */
diff --git a/kernel/trace/trace.c b/kernel/trace/trace.c
index 8ae7c2cb63a0..67ce7b367edb 100644
--- a/kernel/trace/trace.c
+++ b/kernel/trace/trace.c
@@ -1176,6 +1176,12 @@ static void tracing_snapshot_instance_cond(struct trace_array *tr,
 		return;
 	}
 
+	if (tr->mapped) {
+		trace_array_puts(tr, "*** BUFFER MEMORY MAPPED ***\n");
+		trace_array_puts(tr, "*** Can not use snapshot (sorry) ***\n");
+		return;
+	}
+
 	local_irq_save(flags);
 	update_max_tr(tr, current, smp_processor_id(), cond_data);
 	local_irq_restore(flags);
@@ -1308,7 +1314,7 @@ static int tracing_arm_snapshot_locked(struct trace_array *tr)
 	lockdep_assert_held(&trace_types_lock);
 
 	spin_lock(&tr->snapshot_trigger_lock);
-	if (tr->snapshot == UINT_MAX) {
+	if (tr->snapshot == UINT_MAX || tr->mapped) {
 		spin_unlock(&tr->snapshot_trigger_lock);
 		return -EBUSY;
 	}
@@ -6535,7 +6541,7 @@ static void tracing_set_nop(struct trace_array *tr)
 {
 	if (tr->current_trace == &nop_trace)
 		return;
-	
+
 	tr->current_trace->enabled--;
 
 	if (tr->current_trace->reset)
@@ -8654,15 +8660,31 @@ tracing_buffers_splice_read(struct file *file, loff_t *ppos,
 	return ret;
 }
 
-/* An ioctl call with cmd 0 to the ring buffer file will wake up all waiters */
 static long tracing_buffers_ioctl(struct file *file, unsigned int cmd, unsigned long arg)
 {
 	struct ftrace_buffer_info *info = file->private_data;
 	struct trace_iterator *iter = &info->iter;
+	int err;
 
-	if (cmd)
-		return -ENOIOCTLCMD;
+	if (cmd == TRACE_MMAP_IOCTL_GET_READER) {
+		if (!(file->f_flags & O_NONBLOCK)) {
+			err = ring_buffer_wait(iter->array_buffer->buffer,
+					       iter->cpu_file,
+					       iter->tr->buffer_percent);
+			if (err)
+				return err;
+		}
+
+		return ring_buffer_map_get_reader(iter->array_buffer->buffer,
+						  iter->cpu_file);
+	} else if (cmd) {
+		return -ENOTTY;
+	}
+
+	/*
+	 * An ioctl call with cmd 0 to the ring buffer file will wake up all
+	 * waiters
+	 */
 
 	mutex_lock(&trace_types_lock);
 
 	iter->wait_index++;
@@ -8675,6 +8697,110 @@ static long tracing_buffers_ioctl(struct file *file, unsigned int cmd, unsigned
 	return 0;
 }
 
+static vm_fault_t tracing_buffers_mmap_fault(struct vm_fault *vmf)
+{
+	struct ftrace_buffer_info *info = vmf->vma->vm_file->private_data;
+	struct trace_iterator *iter = &info->iter;
+	vm_fault_t ret = VM_FAULT_SIGBUS;
+	struct page *page;
+
+	page = ring_buffer_map_fault(iter->array_buffer->buffer, iter->cpu_file,
+				     vmf->pgoff);
+	if (!page)
+		return ret;
+
+	get_page(page);
+	vmf->page = page;
+	vmf->page->mapping = vmf->vma->vm_file->f_mapping;
+	vmf->page->index = vmf->pgoff;
+
+	return 0;
+}
+
+#ifdef CONFIG_TRACER_MAX_TRACE
+static int get_snapshot_map(struct trace_array *tr)
+{
+	int err = 0;
+
+	/*
+	 * Called with mmap_lock held. lockdep would be unhappy if we would now
+	 * take trace_types_lock. Instead use the specific
+	 * snapshot_trigger_lock.
+	 */
+	spin_lock(&tr->snapshot_trigger_lock);
+
+	if (tr->snapshot || tr->mapped == UINT_MAX)
+		err = -EBUSY;
+	else
+		tr->mapped++;
+
+	spin_unlock(&tr->snapshot_trigger_lock);
+
+	/* Wait for update_max_tr() to observe iter->tr->mapped */
+	if (tr->mapped == 1)
+		synchronize_rcu();
+
+	return err;
+
+}
+static void put_snapshot_map(struct trace_array *tr)
+{
+	spin_lock(&tr->snapshot_trigger_lock);
+	if (!WARN_ON(!tr->mapped))
+		tr->mapped--;
+	spin_unlock(&tr->snapshot_trigger_lock);
+}
+#else
+static inline int get_snapshot_map(struct trace_array *tr) { return 0; }
+static inline void put_snapshot_map(struct trace_array *tr) { }
+#endif
+
+static void tracing_buffers_mmap_close(struct vm_area_struct *vma)
+{
+	struct ftrace_buffer_info *info = vma->vm_file->private_data;
+	struct trace_iterator *iter = &info->iter;
+
+	ring_buffer_unmap(iter->array_buffer->buffer, iter->cpu_file);
+	put_snapshot_map(iter->tr);
+}
+
+static void tracing_buffers_mmap_open(struct vm_area_struct *vma)
+{
+	struct ftrace_buffer_info *info = vma->vm_file->private_data;
+	struct trace_iterator *iter = &info->iter;
+
+	WARN_ON(ring_buffer_map(iter->array_buffer->buffer, iter->cpu_file));
+}
+
+static const struct vm_operations_struct tracing_buffers_vmops = {
+	.open		= tracing_buffers_mmap_open,
+	.close		= tracing_buffers_mmap_close,
+	.fault		= tracing_buffers_mmap_fault,
+};
+
+static int tracing_buffers_mmap(struct file *filp, struct vm_area_struct *vma)
+{
+	struct ftrace_buffer_info *info = filp->private_data;
+	struct trace_iterator *iter = &info->iter;
+	int ret = 0;
+
+	if (vma->vm_flags & VM_WRITE || vma->vm_flags & VM_EXEC)
+		return -EPERM;
+
+	vm_flags_mod(vma, VM_DONTCOPY | VM_DONTDUMP, VM_MAYWRITE);
+	vma->vm_ops = &tracing_buffers_vmops;
+
+	ret = get_snapshot_map(iter->tr);
+	if (ret)
+		return ret;
+
+	ret = ring_buffer_map(iter->array_buffer->buffer, iter->cpu_file);
+	if (ret)
+		put_snapshot_map(iter->tr);
+
+	return ret;
+}
+
 static const struct file_operations tracing_buffers_fops = {
 	.open		= tracing_buffers_open,
 	.read		= tracing_buffers_read,
@@ -8683,6 +8809,7 @@ static const struct file_operations tracing_buffers_fops = {
 	.splice_read	= tracing_buffers_splice_read,
 	.unlocked_ioctl	= tracing_buffers_ioctl,
 	.llseek		= no_llseek,
+	.mmap		= tracing_buffers_mmap,
 };
 
 static ssize_t
diff --git a/kernel/trace/trace.h b/kernel/trace/trace.h
index bd312e9afe25..8a96e7a89e6b 100644
--- a/kernel/trace/trace.h
+++ b/kernel/trace/trace.h
@@ -336,6 +336,7 @@ struct trace_array {
 	bool			allocated_snapshot;
 	spinlock_t		snapshot_trigger_lock;
 	unsigned int		snapshot;
+	unsigned int		mapped;
 	unsigned long		max_latency;
 #ifdef CONFIG_FSNOTIFY
 	struct dentry		*d_max_latency;
From patchwork Tue Feb 20 20:23:09 2024
X-Patchwork-Submitter: Vincent Donnefort
X-Patchwork-Id: 13564485
Message-ID: <20240220202310.2489614-6-vdonnefort@google.com>
Subject: [PATCH v18 5/6] Documentation: tracing: Add ring-buffer mapping
From: Vincent Donnefort
To: rostedt@goodmis.org, mhiramat@kernel.org, linux-kernel@vger.kernel.org,
    linux-trace-kernel@vger.kernel.org
Cc: mathieu.desnoyers@efficios.com, kernel-team@android.com,
    Vincent Donnefort

It is now possible to mmap() a ring-buffer to stream its content. Add some
documentation and a code example.

Signed-off-by: Vincent Donnefort

diff --git a/Documentation/trace/index.rst b/Documentation/trace/index.rst
index 5092d6c13af5..0b300901fd75 100644
--- a/Documentation/trace/index.rst
+++ b/Documentation/trace/index.rst
@@ -29,6 +29,7 @@ Linux Tracing Technologies
    timerlat-tracer
    intel_th
    ring-buffer-design
+   ring-buffer-map
    stm
    sys-t
    coresight/index
diff --git a/Documentation/trace/ring-buffer-map.rst b/Documentation/trace/ring-buffer-map.rst
new file mode 100644
index 000000000000..0426ab4bcf3d
--- /dev/null
+++ b/Documentation/trace/ring-buffer-map.rst
@@ -0,0 +1,106 @@
+.. SPDX-License-Identifier: GPL-2.0
+
+==================================
+Tracefs ring-buffer memory mapping
+==================================
+
+:Author: Vincent Donnefort
+
+Overview
+========
+The Tracefs ring-buffer memory map provides an efficient way to stream data,
+as no memory copy is necessary. The application mapping the ring-buffer then
+becomes a consumer of that ring-buffer, in a similar fashion to trace_pipe.
+
+Memory mapping setup
+====================
+The mapping works with a mmap() of the trace_pipe_raw interface.
+
+The first system page of the mapping contains the ring-buffer statistics and
+description. It is referred to as the meta-page. One of the most important
+fields of the meta-page is the reader: it contains the sub-buffer ID which can
+be safely read by the mapper (see ring-buffer-design.rst).
+
+The meta-page is followed by all the sub-buffers, ordered by ascending ID. It
+is therefore effortless to know where the reader starts in the mapping:
+
+.. code-block:: c
+
+	reader_id = meta->reader.id;
+	reader_offset = meta->meta_page_size + reader_id * meta->subbuf_size;
+
+When the application is done with the current reader, it can get a new one
+using the trace_pipe_raw ioctl() TRACE_MMAP_IOCTL_GET_READER. This ioctl also
+updates the meta-page fields.
+
+Limitations
+===========
+While a mapping is in place on a Tracefs ring-buffer, it is not possible to
+resize it (neither the overall size of the ring-buffer nor the size of each
+sub-buffer). It is also not possible to use snapshots, and the mapping causes
+splice to copy the ring-buffer data instead of using the copy-less swap.
+
+Concurrent readers (either another application mapping that ring-buffer or the
+kernel with trace_pipe) are allowed but not recommended. They will compete for
+the ring-buffer and the output is unpredictable, just like concurrent readers
+on trace_pipe would be.
+
+Example
+=======
+
+.. code-block:: c
+
+	#include <fcntl.h>
+	#include <stdio.h>
+	#include <stdlib.h>
+	#include <unistd.h>
+
+	#include <linux/trace_mmap.h>
+
+	#include <sys/mman.h>
+	#include <sys/ioctl.h>
+
+	#define TRACE_PIPE_RAW "/sys/kernel/tracing/per_cpu/cpu0/trace_pipe_raw"
+
+	int main(void)
+	{
+		int page_size = getpagesize(), fd, reader_id;
+		unsigned long meta_len, data_len;
+		struct trace_buffer_meta *meta;
+		void *map, *reader, *data;
+
+		fd = open(TRACE_PIPE_RAW, O_RDONLY | O_NONBLOCK);
+		if (fd < 0)
+			exit(EXIT_FAILURE);
+
+		map = mmap(NULL, page_size, PROT_READ, MAP_SHARED, fd, 0);
+		if (map == MAP_FAILED)
+			exit(EXIT_FAILURE);
+
+		meta = (struct trace_buffer_meta *)map;
+		meta_len = meta->meta_page_size;
+
+		printf("entries:    %llu\n", meta->entries);
+		printf("overrun:    %llu\n", meta->overrun);
+		printf("read:       %llu\n", meta->read);
+		printf("nr_subbufs: %u\n", meta->nr_subbufs);
+
+		data_len = meta->subbuf_size * meta->nr_subbufs;
+		data = mmap(NULL, data_len, PROT_READ, MAP_SHARED, fd, meta_len);
+		if (data == MAP_FAILED)
+			exit(EXIT_FAILURE);
+
+		if (ioctl(fd, TRACE_MMAP_IOCTL_GET_READER) < 0)
+			exit(EXIT_FAILURE);
+
+		reader_id = meta->reader.id;
+		reader = data + meta->subbuf_size * reader_id;
+
+		printf("Current reader address: %p\n", reader);
+
+		munmap(data, data_len);
+		munmap(meta, meta_len);
+		close(fd);
+
+		return 0;
+	}
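[Editor's illustration, not part of the patch] Building on the example above,
a small polling consumer could look like the hedged sketch below. The path,
iteration count and sleep interval are arbitrary choices; the point is that
every TRACE_MMAP_IOCTL_GET_READER call swaps in a fresh reader sub-buffer and
refreshes the meta-page, so the delta of meta->read approximates how many
entries became readable since the previous swap.

    #include <fcntl.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>

    #include <linux/trace_mmap.h>

    #include <sys/ioctl.h>
    #include <sys/mman.h>

    #define TRACE_PIPE_RAW "/sys/kernel/tracing/per_cpu/cpu0/trace_pipe_raw"

    int main(void)
    {
            int page_size = getpagesize(), fd;
            struct trace_buffer_meta *meta;
            unsigned long data_len;
            __u64 prev_read = 0;
            void *data;

            fd = open(TRACE_PIPE_RAW, O_RDONLY | O_NONBLOCK);
            if (fd < 0)
                    exit(EXIT_FAILURE);

            meta = mmap(NULL, page_size, PROT_READ, MAP_SHARED, fd, 0);
            if (meta == MAP_FAILED)
                    exit(EXIT_FAILURE);

            data_len = meta->subbuf_size * meta->nr_subbufs;
            data = mmap(NULL, data_len, PROT_READ, MAP_SHARED, fd,
                        meta->meta_page_size);
            if (data == MAP_FAILED)
                    exit(EXIT_FAILURE);

            for (int i = 0; i < 10; i++) {
                    void *reader;

                    /* Swap the reader sub-buffer and refresh the meta-page. */
                    if (ioctl(fd, TRACE_MMAP_IOCTL_GET_READER) < 0)
                            break;

                    reader = data + meta->subbuf_size * meta->reader.id;
                    printf("reader id=%u, %llu new entries (sub-buffer at %p)\n",
                           meta->reader.id, meta->read - prev_read, reader);
                    prev_read = meta->read;

                    sleep(1);
            }

            munmap(data, data_len);
            munmap(meta, page_size);
            close(fd);

            return 0;
    }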
From patchwork Tue Feb 20 20:23:10 2024
X-Patchwork-Submitter: Vincent Donnefort
X-Patchwork-Id: 13564486
Date: Tue, 20 Feb 2024 20:23:10 +0000
In-Reply-To: <20240220202310.2489614-1-vdonnefort@google.com>
References: <20240220202310.2489614-1-vdonnefort@google.com>
Message-ID: <20240220202310.2489614-7-vdonnefort@google.com>
Subject: [PATCH v18 6/6] ring-buffer/selftest: Add ring-buffer mapping test
From: Vincent Donnefort
To: rostedt@goodmis.org, mhiramat@kernel.org, linux-kernel@vger.kernel.org,
    linux-trace-kernel@vger.kernel.org
Cc: mathieu.desnoyers@efficios.com, kernel-team@android.com,
    Vincent Donnefort, Shuah Khan, Shuah Khan, linux-kselftest@vger.kernel.org

This test maps a ring-buffer and validates the meta-page content after a
reset and after a few events are emitted.
Cc: Shuah Khan
Cc: Shuah Khan
Cc: linux-kselftest@vger.kernel.org
Signed-off-by: Vincent Donnefort

diff --git a/tools/testing/selftests/ring-buffer/Makefile b/tools/testing/selftests/ring-buffer/Makefile
new file mode 100644
index 000000000000..627c5fa6d1ab
--- /dev/null
+++ b/tools/testing/selftests/ring-buffer/Makefile
@@ -0,0 +1,8 @@
+# SPDX-License-Identifier: GPL-2.0
+CFLAGS += -Wl,-no-as-needed -Wall
+CFLAGS += $(KHDR_INCLUDES)
+CFLAGS += -D_GNU_SOURCE
+
+TEST_GEN_PROGS = map_test
+
+include ../lib.mk
diff --git a/tools/testing/selftests/ring-buffer/config b/tools/testing/selftests/ring-buffer/config
new file mode 100644
index 000000000000..d936f8f00e78
--- /dev/null
+++ b/tools/testing/selftests/ring-buffer/config
@@ -0,0 +1,2 @@
+CONFIG_FTRACE=y
+CONFIG_TRACER_SNAPSHOT=y
diff --git a/tools/testing/selftests/ring-buffer/map_test.c b/tools/testing/selftests/ring-buffer/map_test.c
new file mode 100644
index 000000000000..56c44b29d998
--- /dev/null
+++ b/tools/testing/selftests/ring-buffer/map_test.c
@@ -0,0 +1,273 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Ring-buffer memory mapping tests
+ *
+ * Copyright (c) 2024 Vincent Donnefort
+ */
+#include <fcntl.h>
+#include <sched.h>
+#include <stdbool.h>
+#include <stdio.h>
+#include <stdlib.h>
+#include <unistd.h>
+
+#include <linux/trace_mmap.h>
+
+#include <sys/mman.h>
+#include <sys/ioctl.h>
+
+#include "../user_events/user_events_selftests.h" /* share tracefs setup */
+#include "../kselftest_harness.h"
+
+#define TRACEFS_ROOT "/sys/kernel/tracing"
+
+static int __tracefs_write(const char *path, const char *value)
+{
+	int fd, ret;
+
+	fd = open(path, O_WRONLY | O_TRUNC);
+	if (fd < 0)
+		return fd;
+
+	ret = write(fd, value, strlen(value));
+
+	close(fd);
+
+	return ret == -1 ? -errno : 0;
+}
+
+static int __tracefs_write_int(const char *path, int value)
+{
+	char *str;
+	int ret;
+
+	if (asprintf(&str, "%d", value) < 0)
+		return -1;
+
+	ret = __tracefs_write(path, str);
+
+	free(str);
+
+	return ret;
+}
+
+#define tracefs_write_int(path, value) \
+	ASSERT_EQ(__tracefs_write_int((path), (value)), 0)
+
+#define tracefs_write(path, value) \
+	ASSERT_EQ(__tracefs_write((path), (value)), 0)
+
+static int tracefs_reset(void)
+{
+	if (__tracefs_write_int(TRACEFS_ROOT"/tracing_on", 0))
+		return -1;
+	if (__tracefs_write(TRACEFS_ROOT"/trace", ""))
+		return -1;
+	if (__tracefs_write(TRACEFS_ROOT"/set_event", ""))
+		return -1;
+	if (__tracefs_write(TRACEFS_ROOT"/current_tracer", "nop"))
+		return -1;
+
+	return 0;
+}
+
+struct tracefs_cpu_map_desc {
+	struct trace_buffer_meta *meta;
+	void *data;
+	int cpu_fd;
+};
+
+int tracefs_cpu_map(struct tracefs_cpu_map_desc *desc, int cpu)
+{
+	unsigned long meta_len, data_len;
+	int page_size = getpagesize();
+	char *cpu_path;
+	void *map;
+
+	if (asprintf(&cpu_path,
+		     TRACEFS_ROOT"/per_cpu/cpu%d/trace_pipe_raw",
+		     cpu) < 0)
+		return -ENOMEM;
+
+	desc->cpu_fd = open(cpu_path, O_RDONLY | O_NONBLOCK);
+	free(cpu_path);
+	if (desc->cpu_fd < 0)
+		return -ENODEV;
+
+	map = mmap(NULL, page_size, PROT_READ, MAP_SHARED, desc->cpu_fd, 0);
+	if (map == MAP_FAILED)
+		return -errno;
+
+	desc->meta = (struct trace_buffer_meta *)map;
+
+	meta_len = desc->meta->meta_page_size;
+	data_len = desc->meta->subbuf_size * desc->meta->nr_subbufs;
+
+	map = mmap(NULL, data_len, PROT_READ, MAP_SHARED, desc->cpu_fd, meta_len);
+	if (map == MAP_FAILED) {
+		munmap(desc->meta, desc->meta->meta_page_size);
+		return -EINVAL;
+	}
+
+	desc->data = map;
+
+	return 0;
+}
+
+void tracefs_cpu_unmap(struct tracefs_cpu_map_desc *desc)
+{
+	munmap(desc->data, desc->meta->subbuf_size * desc->meta->nr_subbufs);
+	munmap(desc->meta, desc->meta->meta_page_size);
+	close(desc->cpu_fd);
+}
+
+FIXTURE(map) {
+	struct tracefs_cpu_map_desc map_desc;
+	bool umount;
+};
+
+FIXTURE_VARIANT(map) {
+	int subbuf_size;
+};
+
+FIXTURE_VARIANT_ADD(map, subbuf_size_4k) {
+	.subbuf_size = 4,
+};
+
+FIXTURE_VARIANT_ADD(map, subbuf_size_8k) {
+	.subbuf_size = 8,
+};
+
+FIXTURE_SETUP(map)
+{
+	int cpu = sched_getcpu();
+	cpu_set_t cpu_mask;
+	bool fail, umount;
+	char *message;
+
+	if (!tracefs_enabled(&message, &fail, &umount)) {
+		if (fail) {
+			TH_LOG("Tracefs setup failed: %s", message);
+			ASSERT_FALSE(fail);
+		}
+		SKIP(return, "Skipping: %s", message);
+	}
+
+	self->umount = umount;
+
+	ASSERT_GE(cpu, 0);
+
+	ASSERT_EQ(tracefs_reset(), 0);
+
+	tracefs_write_int(TRACEFS_ROOT"/buffer_subbuf_size_kb", variant->subbuf_size);
+
+	ASSERT_EQ(tracefs_cpu_map(&self->map_desc, cpu), 0);
+
+	/*
+	 * Ensure generated events will be found on this very same ring-buffer.
+	 */
+	CPU_ZERO(&cpu_mask);
+	CPU_SET(cpu, &cpu_mask);
+	ASSERT_EQ(sched_setaffinity(0, sizeof(cpu_mask), &cpu_mask), 0);
+}
+
+FIXTURE_TEARDOWN(map)
+{
+	tracefs_reset();
+
+	if (self->umount)
+		tracefs_unmount();
+
+	tracefs_cpu_unmap(&self->map_desc);
+}
+
+TEST_F(map, meta_page_check)
+{
+	struct tracefs_cpu_map_desc *desc = &self->map_desc;
+	int cnt = 0;
+
+	ASSERT_EQ(desc->meta->entries, 0);
+	ASSERT_EQ(desc->meta->overrun, 0);
+	ASSERT_EQ(desc->meta->read, 0);
+
+	ASSERT_EQ(desc->meta->reader.id, 0);
+	ASSERT_EQ(desc->meta->reader.read, 0);
+
+	ASSERT_EQ(ioctl(desc->cpu_fd, TRACE_MMAP_IOCTL_GET_READER), 0);
+	ASSERT_EQ(desc->meta->reader.id, 0);
+
+	tracefs_write_int(TRACEFS_ROOT"/tracing_on", 1);
+	for (int i = 0; i < 16; i++)
+		tracefs_write_int(TRACEFS_ROOT"/trace_marker", i);
+again:
+	ASSERT_EQ(ioctl(desc->cpu_fd, TRACE_MMAP_IOCTL_GET_READER), 0);
+
+	ASSERT_EQ(desc->meta->entries, 16);
+	ASSERT_EQ(desc->meta->overrun, 0);
+	ASSERT_EQ(desc->meta->read, 16);
+
+	ASSERT_EQ(desc->meta->reader.id, 1);
+
+	if (!(cnt++))
+		goto again;
+}
+
+FIXTURE(snapshot) {
+	bool umount;
+};
+
+FIXTURE_SETUP(snapshot)
+{
+	bool fail, umount;
+	struct stat sb;
+	char *message;
+
+	if (stat(TRACEFS_ROOT"/snapshot", &sb))
+		SKIP(return, "Skipping: %s", "snapshot not available");
+
+	if (!tracefs_enabled(&message, &fail, &umount)) {
+		if (fail) {
+			TH_LOG("Tracefs setup failed: %s", message);
+			ASSERT_FALSE(fail);
+		}
+		SKIP(return, "Skipping: %s", message);
+	}
+
+	self->umount = umount;
+}
+
+FIXTURE_TEARDOWN(snapshot)
+{
+	__tracefs_write(TRACEFS_ROOT"/events/sched/sched_switch/trigger",
+			"!snapshot");
+	tracefs_reset();
+
+	if (self->umount)
+		tracefs_unmount();
+}
+
+TEST_F(snapshot, excludes_map)
+{
+	struct tracefs_cpu_map_desc map_desc;
+	int cpu = sched_getcpu();
+
+	ASSERT_GE(cpu, 0);
+	tracefs_write(TRACEFS_ROOT"/events/sched/sched_switch/trigger",
+		      "snapshot");
+	ASSERT_EQ(tracefs_cpu_map(&map_desc, cpu), -EBUSY);
+}
+
+TEST_F(snapshot, excluded_by_map)
+{
+	struct tracefs_cpu_map_desc map_desc;
+	int cpu = sched_getcpu();
+
+	ASSERT_EQ(tracefs_cpu_map(&map_desc, cpu), 0);
+
+	ASSERT_EQ(__tracefs_write(TRACEFS_ROOT"/events/sched/sched_switch/trigger",
+				  "snapshot"), -EBUSY);
+	ASSERT_EQ(__tracefs_write(TRACEFS_ROOT"/snapshot",
+				  "1"), -EBUSY);
+}
+
+TEST_HARNESS_MAIN
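[Editor's note] Like other kselftests, this one is presumably built and run
from the kernel tree with the standard harness entry points, e.g.
make -C tools/testing/selftests TARGETS=ring-buffer run_tests, assuming the
kernel under test carries this series and enables the two options listed in
the config fragment above.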