From patchwork Tue Apr 23 23:27:25 2024
X-Patchwork-Submitter: Vincent Donnefort <vdonnefort@google.com>
X-Patchwork-Id: 13640879
Date: Wed, 24 Apr 2024 00:27:25 +0100
In-Reply-To: <20240423232728.1492340-1-vdonnefort@google.com>
References: <20240423232728.1492340-1-vdonnefort@google.com>
Message-ID: <20240423232728.1492340-3-vdonnefort@google.com>
Subject: [PATCH v21 2/5] ring-buffer: Introducing ring-buffer mapping functions
From: Vincent Donnefort <vdonnefort@google.com>
To: rostedt@goodmis.org, mhiramat@kernel.org, linux-kernel@vger.kernel.org,
 linux-trace-kernel@vger.kernel.org
Cc: mathieu.desnoyers@efficios.com, kernel-team@android.com,
 rdunlap@infradead.org, rppt@kernel.org, david@redhat.com,
 Vincent Donnefort <vdonnefort@google.com>, linux-mm@kvack.org
In preparation for allowing user-space to map a ring-buffer, add a set of
mapping functions:

  ring_buffer_{map,unmap}()

And controls on the ring-buffer:

  ring_buffer_map_get_reader()  /* swap reader and head */

Mapping the ring-buffer also involves:

  A unique ID for each subbuf of the ring-buffer, as they are currently
  only identified through their in-kernel VA.

  A meta-page, where ring-buffer statistics and a description of the
  current reader are stored.

The linear mapping exposes the meta-page, followed by each subbuf of the
ring-buffer, ordered by their unique ID, which is assigned during the
first mapping.

Once mapped, no subbuf can get in or out of the ring-buffer: the buffer
size will remain unmodified and the splice enabling functions will in
reality simply memcpy the data instead of swapping subbufs.
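For illustration, a minimal user-space helper can resolve a sub-buffer from
its ID using only the meta-page fields. It assumes "base" points at an
existing mapping whose first page is the meta-page; subbuf_addr() is a
hypothetical name, not part of this series:

  #include <linux/trace_mmap.h>
  #include <stddef.h>
  #include <stdint.h>

  /* Sub-buffer `id` lives right after the meta-page, at a fixed stride. */
  static void *subbuf_addr(void *base, uint32_t id)
  {
  	struct trace_buffer_meta *meta = base;

  	return (char *)base + meta->meta_page_size +
  	       (size_t)id * meta->subbuf_size;
  }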
CC: <linux-mm@kvack.org>
Signed-off-by: Vincent Donnefort <vdonnefort@google.com>

diff --git a/include/linux/ring_buffer.h b/include/linux/ring_buffer.h
index dc5ae4e96aee..96d2140b471e 100644
--- a/include/linux/ring_buffer.h
+++ b/include/linux/ring_buffer.h
@@ -6,6 +6,8 @@
 #include <linux/seq_file.h>
 #include <linux/poll.h>
 
+#include <uapi/linux/trace_mmap.h>
+
 struct trace_buffer;
 struct ring_buffer_iter;
 
@@ -223,4 +225,8 @@ int trace_rb_cpu_prepare(unsigned int cpu, struct hlist_node *node);
 #define trace_rb_cpu_prepare	NULL
 #endif
 
+int ring_buffer_map(struct trace_buffer *buffer, int cpu,
+		    struct vm_area_struct *vma);
+int ring_buffer_unmap(struct trace_buffer *buffer, int cpu);
+int ring_buffer_map_get_reader(struct trace_buffer *buffer, int cpu);
 #endif /* _LINUX_RING_BUFFER_H */
diff --git a/include/uapi/linux/trace_mmap.h b/include/uapi/linux/trace_mmap.h
new file mode 100644
index 000000000000..b682e9925539
--- /dev/null
+++ b/include/uapi/linux/trace_mmap.h
@@ -0,0 +1,46 @@
+/* SPDX-License-Identifier: GPL-2.0 WITH Linux-syscall-note */
+#ifndef _TRACE_MMAP_H_
+#define _TRACE_MMAP_H_
+
+#include <linux/types.h>
+
+/**
+ * struct trace_buffer_meta - Ring-buffer Meta-page description
+ * @meta_page_size:	Size of this meta-page.
+ * @meta_struct_len:	Size of this structure.
+ * @subbuf_size:	Size of each sub-buffer.
+ * @nr_subbufs:		Number of subbufs in the ring-buffer, including the reader.
+ * @reader.lost_events:	Number of events lost at the time of the reader swap.
+ * @reader.id:		subbuf ID of the current reader. ID range [0 : @nr_subbufs - 1]
+ * @reader.read:	Number of bytes read on the reader subbuf.
+ * @flags:		Placeholder for now, 0 until new features are supported.
+ * @entries:		Number of entries in the ring-buffer.
+ * @overrun:		Number of entries lost in the ring-buffer.
+ * @read:		Number of entries that have been read.
+ * @Reserved1:		Internal use only.
+ * @Reserved2:		Internal use only.
+ */
+struct trace_buffer_meta {
+	__u32	meta_page_size;
+	__u32	meta_struct_len;
+
+	__u32	subbuf_size;
+	__u32	nr_subbufs;
+
+	struct {
+		__u64	lost_events;
+		__u32	id;
+		__u32	read;
+	} reader;
+
+	__u64	flags;
+
+	__u64	entries;
+	__u64	overrun;
+	__u64	read;
+
+	__u64	Reserved1;
+	__u64	Reserved2;
+};
+
+#endif /* _TRACE_MMAP_H_ */
diff --git a/kernel/trace/ring_buffer.c b/kernel/trace/ring_buffer.c
index cc9ebe593571..84f8744fa110 100644
--- a/kernel/trace/ring_buffer.c
+++ b/kernel/trace/ring_buffer.c
@@ -9,6 +9,7 @@
 #include <linux/ring_buffer.h>
 #include <linux/trace_clock.h>
 #include <linux/sched/clock.h>
+#include <linux/cacheflush.h>
 #include <linux/trace_seq.h>
 #include <linux/spinlock.h>
 #include <linux/irq_work.h>
@@ -26,6 +27,7 @@
 #include <linux/list.h>
 #include <linux/cpu.h>
 #include <linux/oom.h>
+#include <linux/mm.h>
 
 #include <asm/local64.h>
 #include <asm/local.h>
@@ -338,6 +340,7 @@ struct buffer_page {
 	local_t		 entries;	/* entries on this page */
 	unsigned long	 real_end;	/* real end of data */
 	unsigned	 order;		/* order of the page */
+	u32		 id;		/* ID for external mapping */
 	struct buffer_data_page *page;	/* Actual data page */
 };
 
@@ -484,6 +487,12 @@ struct ring_buffer_per_cpu {
 	u64				read_stamp;
 	/* pages removed since last reset */
 	unsigned long			pages_removed;
+
+	unsigned int			mapped;
+	struct mutex			mapping_lock;
+	unsigned long			*subbuf_ids;	/* ID to subbuf VA */
+	struct trace_buffer_meta	*meta_page;
+
 	/* ring buffer pages to update, > 0 to add, < 0 to remove */
 	long				nr_pages_to_update;
 	struct list_head		new_pages; /* new pages to add */
@@ -1599,6 +1608,7 @@ rb_allocate_cpu_buffer(struct trace_buffer *buffer, long nr_pages, int cpu)
 	init_irq_work(&cpu_buffer->irq_work.work, rb_wake_up_waiters);
 	init_waitqueue_head(&cpu_buffer->irq_work.waiters);
 	init_waitqueue_head(&cpu_buffer->irq_work.full_waiters);
+	mutex_init(&cpu_buffer->mapping_lock);
 
 	bpage = kzalloc_node(ALIGN(sizeof(*bpage), cache_line_size()),
 			    GFP_KERNEL, cpu_to_node(cpu));
@@ -1789,8 +1799,6 @@ bool ring_buffer_time_stamp_abs(struct trace_buffer *buffer)
 	return buffer->time_stamp_abs;
 }
 
-static void rb_reset_cpu(struct ring_buffer_per_cpu *cpu_buffer);
-
 static inline unsigned long rb_page_entries(struct buffer_page *bpage)
 {
 	return local_read(&bpage->entries) & RB_WRITE_MASK;
@@ -5211,6 +5219,22 @@ static void rb_clear_buffer_page(struct buffer_page *page)
 	page->read = 0;
 }
 
+static void rb_update_meta_page(struct ring_buffer_per_cpu *cpu_buffer)
+{
+	struct trace_buffer_meta *meta = cpu_buffer->meta_page;
+
+	meta->reader.read = cpu_buffer->reader_page->read;
+	meta->reader.id = cpu_buffer->reader_page->id;
+	meta->reader.lost_events = cpu_buffer->lost_events;
+
+	meta->entries = local_read(&cpu_buffer->entries);
+	meta->overrun = local_read(&cpu_buffer->overrun);
+	meta->read = cpu_buffer->read;
+
+	/* Some archs do not have data cache coherency between kernel and user-space */
+	flush_dcache_folio(virt_to_folio(cpu_buffer->meta_page));
+}
+
 static void
 rb_reset_cpu(struct ring_buffer_per_cpu *cpu_buffer)
 {
@@ -5255,6 +5279,9 @@ rb_reset_cpu(struct ring_buffer_per_cpu *cpu_buffer)
 	cpu_buffer->lost_events = 0;
 	cpu_buffer->last_overrun = 0;
 
+	if (cpu_buffer->mapped)
+		rb_update_meta_page(cpu_buffer);
+
 	rb_head_page_activate(cpu_buffer);
 	cpu_buffer->pages_removed = 0;
 }
@@ -5469,6 +5496,12 @@ int ring_buffer_swap_cpu(struct trace_buffer *buffer_a,
 	cpu_buffer_a = buffer_a->buffers[cpu];
 	cpu_buffer_b = buffer_b->buffers[cpu];
 
+	/* It's up to the callers to not try to swap mapped buffers */
+	if (WARN_ON_ONCE(cpu_buffer_a->mapped || cpu_buffer_b->mapped)) {
+		ret = -EBUSY;
+		goto out;
+	}
+
 	/* At least make sure the two buffers are somewhat the same */
 	if (cpu_buffer_a->nr_pages != cpu_buffer_b->nr_pages)
 		goto out;
@@ -5733,7 +5766,8 @@ int ring_buffer_read_page(struct trace_buffer *buffer,
 	 * Otherwise, we can simply swap the page with the one passed in.
 	 */
 	if (read || (len < (commit - read)) ||
-	    cpu_buffer->reader_page == cpu_buffer->commit_page) {
+	    cpu_buffer->reader_page == cpu_buffer->commit_page ||
+	    cpu_buffer->mapped) {
 		struct buffer_data_page *rpage = cpu_buffer->reader_page->page;
 		unsigned int rpos = read;
 		unsigned int pos = 0;
@@ -5956,6 +5990,11 @@ int ring_buffer_subbuf_order_set(struct trace_buffer *buffer, int order)
 
 		cpu_buffer = buffer->buffers[cpu];
 
+		if (cpu_buffer->mapped) {
+			err = -EBUSY;
+			goto error;
+		}
+
 		/* Update the number of pages to match the new size */
 		nr_pages = old_size * buffer->buffers[cpu]->nr_pages;
 		nr_pages = DIV_ROUND_UP(nr_pages, buffer->subbuf_size);
@@ -6057,6 +6096,367 @@ int ring_buffer_subbuf_order_set(struct trace_buffer *buffer, int order)
 }
 EXPORT_SYMBOL_GPL(ring_buffer_subbuf_order_set);
 
+static int rb_alloc_meta_page(struct ring_buffer_per_cpu *cpu_buffer)
+{
+	struct page *page;
+
+	if (cpu_buffer->meta_page)
+		return 0;
+
+	page = alloc_page(GFP_USER | __GFP_ZERO);
+	if (!page)
+		return -ENOMEM;
+
+	cpu_buffer->meta_page = page_to_virt(page);
+
+	return 0;
+}
+
+static void rb_free_meta_page(struct ring_buffer_per_cpu *cpu_buffer)
+{
+	unsigned long addr = (unsigned long)cpu_buffer->meta_page;
+
+	free_page(addr);
+	cpu_buffer->meta_page = NULL;
+}
+
+static void rb_setup_ids_meta_page(struct ring_buffer_per_cpu *cpu_buffer,
+				   unsigned long *subbuf_ids)
+{
+	struct trace_buffer_meta *meta = cpu_buffer->meta_page;
+	unsigned int nr_subbufs = cpu_buffer->nr_pages + 1;
+	struct buffer_page *first_subbuf, *subbuf;
+	int id = 0;
+
+	subbuf_ids[id] = (unsigned long)cpu_buffer->reader_page->page;
+	cpu_buffer->reader_page->id = id++;
+
+	first_subbuf = subbuf = rb_set_head_page(cpu_buffer);
+	do {
+		if (WARN_ON(id >= nr_subbufs))
+			break;
+
+		subbuf_ids[id] = (unsigned long)subbuf->page;
+		subbuf->id = id;
+
+		rb_inc_page(&subbuf);
+		id++;
+	} while (subbuf != first_subbuf);
+
+	/* install subbuf ID to kern VA translation */
+	cpu_buffer->subbuf_ids = subbuf_ids;
+
+	meta->meta_page_size = PAGE_SIZE;
+	meta->meta_struct_len = sizeof(*meta);
+	meta->nr_subbufs = nr_subbufs;
+	meta->subbuf_size = cpu_buffer->buffer->subbuf_size + BUF_PAGE_HDR_SIZE;
+
+	rb_update_meta_page(cpu_buffer);
+}
+
+static struct ring_buffer_per_cpu *
+rb_get_mapped_buffer(struct trace_buffer *buffer, int cpu)
+{
+	struct ring_buffer_per_cpu *cpu_buffer;
+
+	if (!cpumask_test_cpu(cpu, buffer->cpumask))
+		return ERR_PTR(-EINVAL);
+
+	cpu_buffer = buffer->buffers[cpu];
+
+	mutex_lock(&cpu_buffer->mapping_lock);
+
+	if (!cpu_buffer->mapped) {
+		mutex_unlock(&cpu_buffer->mapping_lock);
+		return ERR_PTR(-ENODEV);
+	}
+
+	return cpu_buffer;
+}
+
+static void rb_put_mapped_buffer(struct ring_buffer_per_cpu *cpu_buffer)
+{
+	mutex_unlock(&cpu_buffer->mapping_lock);
+}
+
+/*
+ * Fast-path for rb_buffer_(un)map(). Called whenever the meta-page doesn't need
+ * to be set-up or torn-down.
+ */
+static int __rb_inc_dec_mapped(struct ring_buffer_per_cpu *cpu_buffer,
+			       bool inc)
+{
+	unsigned long flags;
+
+	lockdep_assert_held(&cpu_buffer->mapping_lock);
+
+	if (inc && cpu_buffer->mapped == UINT_MAX)
+		return -EBUSY;
+
+	if (WARN_ON(!inc && cpu_buffer->mapped == 0))
+		return -EINVAL;
+
+	mutex_lock(&cpu_buffer->buffer->mutex);
+	raw_spin_lock_irqsave(&cpu_buffer->reader_lock, flags);
+
+	if (inc)
+		cpu_buffer->mapped++;
+	else
+		cpu_buffer->mapped--;
+
+	raw_spin_unlock_irqrestore(&cpu_buffer->reader_lock, flags);
+	mutex_unlock(&cpu_buffer->buffer->mutex);
+
+	return 0;
+}
+
+/*
+ *   +--------------+  pgoff == 0
+ *   |   meta page  |
+ *   +--------------+  pgoff == 1
+ *   | subbuffer 0  |
+ *   |              |
+ *   +--------------+  pgoff == (1 + (1 << subbuf_order))
+ *   | subbuffer 1  |
+ *   |              |
+ *         ...
+ */
+#ifdef CONFIG_MMU
+static int __rb_map_vma(struct ring_buffer_per_cpu *cpu_buffer,
+			struct vm_area_struct *vma)
+{
+	unsigned long nr_subbufs, nr_pages, vma_pages, pgoff = vma->vm_pgoff;
+	unsigned int subbuf_pages, subbuf_order;
+	struct page **pages;
+	int p = 0, s = 0;
+	int err;
+
+	if (vma->vm_flags & VM_WRITE || vma->vm_flags & VM_EXEC ||
+	    !(vma->vm_flags & VM_MAYSHARE))
+		return -EPERM;
+
+	vm_flags_mod(vma,
+		     VM_MIXEDMAP | VM_PFNMAP |
+		     VM_DONTCOPY | VM_DONTDUMP | VM_DONTEXPAND | VM_IO,
+		     VM_MAYWRITE);
+
+	lockdep_assert_held(&cpu_buffer->mapping_lock);
+
+	subbuf_order = cpu_buffer->buffer->subbuf_order;
+	subbuf_pages = 1 << subbuf_order;
+
+	nr_subbufs = cpu_buffer->nr_pages + 1; /* + reader-subbuf */
+	nr_pages = ((nr_subbufs) << subbuf_order) - pgoff + 1; /* + meta-page */
+
+	vma_pages = (vma->vm_end - vma->vm_start) >> PAGE_SHIFT;
+	if (!vma_pages || vma_pages > nr_pages)
+		return -EINVAL;
+
+	nr_pages = vma_pages;
+
+	pages = kcalloc(nr_pages, sizeof(*pages), GFP_KERNEL);
+	if (!pages)
+		return -ENOMEM;
+
+	if (!pgoff) {
+		pages[p++] = virt_to_page(cpu_buffer->meta_page);
+
+		/*
+		 * TODO: Align sub-buffers on their size, once
+		 * vm_insert_pages() supports the zero-page.
+		 */
+	} else {
+		/* Skip the meta-page */
+		pgoff--;
+
+		if (pgoff % subbuf_pages) {
+			err = -EINVAL;
+			goto out;
+		}
+
+		s += pgoff / subbuf_pages;
+	}
+
+	while (s < nr_subbufs && p < nr_pages) {
+		struct page *page = virt_to_page(cpu_buffer->subbuf_ids[s]);
+		int off = 0;
+
+		for (; off < (1 << (subbuf_order)); off++, page++) {
+			if (p >= nr_pages)
+				break;
+
+			pages[p++] = page;
+		}
+		s++;
+	}
+
+	err = vm_insert_pages(vma, vma->vm_start, pages, &nr_pages);
+
+out:
+	kfree(pages);
+
+	return err;
+}
+#else
+static int __rb_map_vma(struct ring_buffer_per_cpu *cpu_buffer,
+			struct vm_area_struct *vma)
+{
+	return -EOPNOTSUPP;
+}
+#endif
+
+int ring_buffer_map(struct trace_buffer *buffer, int cpu,
+		    struct vm_area_struct *vma)
+{
+	struct ring_buffer_per_cpu *cpu_buffer;
+	unsigned long flags, *subbuf_ids;
+	int err = 0;
+
+	if (!cpumask_test_cpu(cpu, buffer->cpumask))
+		return -EINVAL;
+
+	cpu_buffer = buffer->buffers[cpu];
+
+	mutex_lock(&cpu_buffer->mapping_lock);
+
+	if (cpu_buffer->mapped) {
+		err = __rb_map_vma(cpu_buffer, vma);
+		if (!err)
+			err = __rb_inc_dec_mapped(cpu_buffer, true);
+		mutex_unlock(&cpu_buffer->mapping_lock);
+		return err;
+	}
+
+	/* prevent another thread from changing buffer/sub-buffer sizes */
+	mutex_lock(&buffer->mutex);
+
+	err = rb_alloc_meta_page(cpu_buffer);
+	if (err)
+		goto unlock;
+
+	/* subbuf_ids include the reader while nr_pages does not */
+	subbuf_ids = kcalloc(cpu_buffer->nr_pages + 1, sizeof(*subbuf_ids), GFP_KERNEL);
+	if (!subbuf_ids) {
+		rb_free_meta_page(cpu_buffer);
+		err = -ENOMEM;
+		goto unlock;
+	}
+
+	atomic_inc(&cpu_buffer->resize_disabled);
+
+	/*
+	 * Lock all readers to block any subbuf swap until the subbuf IDs are
+	 * assigned.
+	 */
+	raw_spin_lock_irqsave(&cpu_buffer->reader_lock, flags);
+	rb_setup_ids_meta_page(cpu_buffer, subbuf_ids);
+	raw_spin_unlock_irqrestore(&cpu_buffer->reader_lock, flags);
+
+	err = __rb_map_vma(cpu_buffer, vma);
+	if (!err) {
+		raw_spin_lock_irqsave(&cpu_buffer->reader_lock, flags);
+		cpu_buffer->mapped = 1;
+		raw_spin_unlock_irqrestore(&cpu_buffer->reader_lock, flags);
+	} else {
+		kfree(cpu_buffer->subbuf_ids);
+		cpu_buffer->subbuf_ids = NULL;
+		rb_free_meta_page(cpu_buffer);
+	}
+
+unlock:
+	mutex_unlock(&buffer->mutex);
+	mutex_unlock(&cpu_buffer->mapping_lock);
+
+	return err;
+}
+
+int ring_buffer_unmap(struct trace_buffer *buffer, int cpu)
+{
+	struct ring_buffer_per_cpu *cpu_buffer;
+	unsigned long flags;
+	int err = 0;
+
+	if (!cpumask_test_cpu(cpu, buffer->cpumask))
+		return -EINVAL;
+
+	cpu_buffer = buffer->buffers[cpu];
+
+	mutex_lock(&cpu_buffer->mapping_lock);
+
+	if (!cpu_buffer->mapped) {
+		err = -ENODEV;
+		goto out;
+	} else if (cpu_buffer->mapped > 1) {
+		__rb_inc_dec_mapped(cpu_buffer, false);
+		goto out;
+	}
+
+	mutex_lock(&buffer->mutex);
+	raw_spin_lock_irqsave(&cpu_buffer->reader_lock, flags);
+
+	cpu_buffer->mapped = 0;
+
+	raw_spin_unlock_irqrestore(&cpu_buffer->reader_lock, flags);
+
+	kfree(cpu_buffer->subbuf_ids);
+	cpu_buffer->subbuf_ids = NULL;
+	rb_free_meta_page(cpu_buffer);
+	atomic_dec(&cpu_buffer->resize_disabled);
+
+	mutex_unlock(&buffer->mutex);
+
+out:
+	mutex_unlock(&cpu_buffer->mapping_lock);
+
+	return err;
+}
+
+int ring_buffer_map_get_reader(struct trace_buffer *buffer, int cpu)
+{
+	struct ring_buffer_per_cpu *cpu_buffer;
+	unsigned long reader_size;
+	unsigned long flags;
+
+	cpu_buffer = rb_get_mapped_buffer(buffer, cpu);
+	if (IS_ERR(cpu_buffer))
+		return (int)PTR_ERR(cpu_buffer);
+
+	raw_spin_lock_irqsave(&cpu_buffer->reader_lock, flags);
+
+consume:
+	if (rb_per_cpu_empty(cpu_buffer))
+		goto out;
+
+	reader_size = rb_page_size(cpu_buffer->reader_page);
+
+	/*
+	 * There are data to be read on the current reader page, we can
+	 * return to the caller. But before that, we assume the latter will read
+	 * everything. Let's update the kernel reader accordingly.
+	 */
+	if (cpu_buffer->reader_page->read < reader_size) {
+		while (cpu_buffer->reader_page->read < reader_size)
+			rb_advance_reader(cpu_buffer);
+		goto out;
+	}
+
+	if (WARN_ON(!rb_get_reader_page(cpu_buffer)))
+		goto out;
+
+	goto consume;
+
+out:
+	/* Some archs do not have data cache coherency between kernel and user-space */
+	flush_dcache_folio(virt_to_folio(cpu_buffer->reader_page->page));
+
+	rb_update_meta_page(cpu_buffer);
+
+	raw_spin_unlock_irqrestore(&cpu_buffer->reader_lock, flags);
+	rb_put_mapped_buffer(cpu_buffer);
+
+	return 0;
+}
+
 /*
  * We only allocate new buffers, never free them if the CPU goes down.
  * If we were to free the buffer, then the user would lose any trace that was in

From patchwork Tue Apr 23 23:27:26 2024
X-Patchwork-Submitter: Vincent Donnefort <vdonnefort@google.com>
X-Patchwork-Id: 13640880
Date: Wed, 24 Apr 2024 00:27:26 +0100
In-Reply-To: <20240423232728.1492340-1-vdonnefort@google.com>
References: <20240423232728.1492340-1-vdonnefort@google.com>
Message-ID: <20240423232728.1492340-4-vdonnefort@google.com>
Subject: [PATCH v21 3/5] tracing: Allow user-space mapping of the ring-buffer
From: Vincent Donnefort <vdonnefort@google.com>
To: rostedt@goodmis.org, mhiramat@kernel.org, linux-kernel@vger.kernel.org,
 linux-trace-kernel@vger.kernel.org
Cc: mathieu.desnoyers@efficios.com, kernel-team@android.com,
 rdunlap@infradead.org, rppt@kernel.org, david@redhat.com,
 Vincent Donnefort <vdonnefort@google.com>, linux-mm@kvack.org
Currently, user-space extracts data from the ring-buffer via splice,
which is handy for storage or network sharing. However, due to splice
limitations, it is impossible to do real-time analysis without a copy.

A solution for that problem is to let user-space map the ring-buffer
directly.

The mapping is exposed via the per-CPU file trace_pipe_raw. The first
element of the mapping is the meta-page. It is followed by each
subbuffer constituting the ring-buffer, ordered by their unique page ID:

  * Meta-page -- see include/uapi/linux/trace_mmap.h for a description
  * Subbuf ID 0
  * Subbuf ID 1
     ...

It is therefore easy to translate a subbuf ID into an offset in the
mapping:

  reader_id = meta->reader.id;
  reader_offset = meta->meta_page_size + reader_id * meta->subbuf_size;

When new data is available, the mapper must call a newly introduced
ioctl: TRACE_MMAP_IOCTL_GET_READER. This will update the meta-page
reader ID to point to the next reader containing unread data.

Mapping will prevent snapshot and buffer size modifications.
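To make the flow concrete, here is a minimal single-CPU consumer sketch
against the interface above. The trace_pipe_raw path, the two-step mapping,
and the use of O_NONBLOCK are illustrative choices, and error handling is
kept to a minimum:

  #include <fcntl.h>
  #include <stdio.h>
  #include <sys/ioctl.h>
  #include <sys/mman.h>
  #include <unistd.h>

  #include <linux/trace_mmap.h>

  int main(void)
  {
  	const char *path = "/sys/kernel/tracing/per_cpu/cpu0/trace_pipe_raw";
  	struct trace_buffer_meta *meta;
  	unsigned long len, offset;
  	void *map;
  	int fd;

  	fd = open(path, O_RDONLY | O_NONBLOCK);
  	if (fd < 0)
  		return 1;

  	/* Map the meta-page alone first, to learn the full mapping size. */
  	meta = mmap(NULL, getpagesize(), PROT_READ, MAP_SHARED, fd, 0);
  	if (meta == MAP_FAILED)
  		return 1;

  	len = meta->meta_page_size +
  	      (unsigned long)meta->nr_subbufs * meta->subbuf_size;
  	munmap(meta, getpagesize());

  	/* Now map the meta-page plus every sub-buffer. */
  	map = mmap(NULL, len, PROT_READ, MAP_SHARED, fd, 0);
  	if (map == MAP_FAILED)
  		return 1;
  	meta = map;

  	/* Swap the reader sub-buffer, then read it at its ID-derived offset. */
  	if (!ioctl(fd, TRACE_MMAP_IOCTL_GET_READER)) {
  		offset = meta->meta_page_size +
  			 (unsigned long)meta->reader.id * meta->subbuf_size;

  		printf("reader id=%u read=%u lost=%llu at offset %lu\n",
  		       meta->reader.id, meta->reader.read,
  		       (unsigned long long)meta->reader.lost_events, offset);
  	}

  	munmap(map, len);
  	close(fd);
  	return 0;
  }

With O_NONBLOCK the ioctl returns immediately; without it, the kernel waits
for data according to buffer_percent before swapping the reader.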
CC: <linux-mm@kvack.org>
Signed-off-by: Vincent Donnefort <vdonnefort@google.com>

diff --git a/include/uapi/linux/trace_mmap.h b/include/uapi/linux/trace_mmap.h
index b682e9925539..bd1066754220 100644
--- a/include/uapi/linux/trace_mmap.h
+++ b/include/uapi/linux/trace_mmap.h
@@ -43,4 +43,6 @@ struct trace_buffer_meta {
 	__u64	Reserved2;
 };
 
+#define TRACE_MMAP_IOCTL_GET_READER	_IO('T', 0x1)
+
 #endif /* _TRACE_MMAP_H_ */
diff --git a/kernel/trace/trace.c b/kernel/trace/trace.c
index 233d1af39fff..a35e7f598233 100644
--- a/kernel/trace/trace.c
+++ b/kernel/trace/trace.c
@@ -1191,6 +1191,12 @@ static void tracing_snapshot_instance_cond(struct trace_array *tr,
 		return;
 	}
 
+	if (tr->mapped) {
+		trace_array_puts(tr, "*** BUFFER MEMORY MAPPED ***\n");
+		trace_array_puts(tr, "*** Can not use snapshot (sorry) ***\n");
+		return;
+	}
+
 	local_irq_save(flags);
 	update_max_tr(tr, current, smp_processor_id(), cond_data);
 	local_irq_restore(flags);
@@ -1323,7 +1329,7 @@ static int tracing_arm_snapshot_locked(struct trace_array *tr)
 	lockdep_assert_held(&trace_types_lock);
 
 	spin_lock(&tr->snapshot_trigger_lock);
-	if (tr->snapshot == UINT_MAX) {
+	if (tr->snapshot == UINT_MAX || tr->mapped) {
 		spin_unlock(&tr->snapshot_trigger_lock);
 		return -EBUSY;
 	}
@@ -6068,7 +6074,7 @@ static void tracing_set_nop(struct trace_array *tr)
 {
 	if (tr->current_trace == &nop_trace)
 		return;
-	
+
 	tr->current_trace->enabled--;
 
 	if (tr->current_trace->reset)
@@ -8194,15 +8200,32 @@ tracing_buffers_splice_read(struct file *file, loff_t *ppos,
 	return ret;
 }
 
-/* An ioctl call with cmd 0 to the ring buffer file will wake up all waiters */
 static long tracing_buffers_ioctl(struct file *file, unsigned int cmd, unsigned long arg)
 {
 	struct ftrace_buffer_info *info = file->private_data;
 	struct trace_iterator *iter = &info->iter;
+	int err;
+
+	if (cmd == TRACE_MMAP_IOCTL_GET_READER) {
+		if (!(file->f_flags & O_NONBLOCK)) {
+			err = ring_buffer_wait(iter->array_buffer->buffer,
+					       iter->cpu_file,
+					       iter->tr->buffer_percent,
+					       NULL, NULL);
+			if (err)
+				return err;
+		}
 
-	if (cmd)
-		return -ENOIOCTLCMD;
+		return ring_buffer_map_get_reader(iter->array_buffer->buffer,
+						  iter->cpu_file);
+	} else if (cmd) {
+		return -ENOTTY;
+	}
 
+	/*
+	 * An ioctl call with cmd 0 to the ring buffer file will wake up all
+	 * waiters
+	 */
 	mutex_lock(&trace_types_lock);
 
 	/* Make sure the waiters see the new wait_index */
@@ -8214,6 +8237,76 @@ static long tracing_buffers_ioctl(struct file *file, unsigned int cmd, unsigned
 	return 0;
 }
 
+#ifdef CONFIG_TRACER_MAX_TRACE
+static int get_snapshot_map(struct trace_array *tr)
+{
+	int err = 0;
+
+	/*
+	 * Called with mmap_lock held. lockdep would be unhappy if we would now
+	 * take trace_types_lock. Instead use the specific
+	 * snapshot_trigger_lock.
+	 */
+	spin_lock(&tr->snapshot_trigger_lock);
+
+	if (tr->snapshot || tr->mapped == UINT_MAX)
+		err = -EBUSY;
+	else
+		tr->mapped++;
+
+	spin_unlock(&tr->snapshot_trigger_lock);
+
+	/* Wait for update_max_tr() to observe iter->tr->mapped */
+	if (tr->mapped == 1)
+		synchronize_rcu();
+
+	return err;
+
+}
+static void put_snapshot_map(struct trace_array *tr)
+{
+	spin_lock(&tr->snapshot_trigger_lock);
+	if (!WARN_ON(!tr->mapped))
+		tr->mapped--;
+	spin_unlock(&tr->snapshot_trigger_lock);
+}
+#else
+static inline int get_snapshot_map(struct trace_array *tr) { return 0; }
+static inline void put_snapshot_map(struct trace_array *tr) { }
+#endif
+
+static void tracing_buffers_mmap_close(struct vm_area_struct *vma)
+{
+	struct ftrace_buffer_info *info = vma->vm_file->private_data;
+	struct trace_iterator *iter = &info->iter;
+
+	WARN_ON(ring_buffer_unmap(iter->array_buffer->buffer, iter->cpu_file));
+	put_snapshot_map(iter->tr);
+}
+
+static const struct vm_operations_struct tracing_buffers_vmops = {
+	.close		= tracing_buffers_mmap_close,
+};
+
+static int tracing_buffers_mmap(struct file *filp, struct vm_area_struct *vma)
+{
+	struct ftrace_buffer_info *info = filp->private_data;
+	struct trace_iterator *iter = &info->iter;
+	int ret = 0;
+
+	ret = get_snapshot_map(iter->tr);
+	if (ret)
+		return ret;
+
+	ret = ring_buffer_map(iter->array_buffer->buffer, iter->cpu_file, vma);
+	if (ret)
+		put_snapshot_map(iter->tr);
+
+	vma->vm_ops = &tracing_buffers_vmops;
+
+	return ret;
+}
+
 static const struct file_operations tracing_buffers_fops = {
 	.open		= tracing_buffers_open,
 	.read		= tracing_buffers_read,
@@ -8223,6 +8316,7 @@ static const struct file_operations tracing_buffers_fops = {
 	.splice_read	= tracing_buffers_splice_read,
 	.unlocked_ioctl	= tracing_buffers_ioctl,
 	.llseek		= no_llseek,
+	.mmap		= tracing_buffers_mmap,
 };
 
 static ssize_t
diff --git a/kernel/trace/trace.h b/kernel/trace/trace.h
index 64450615ca0c..749a182dab48 100644
--- a/kernel/trace/trace.h
+++ b/kernel/trace/trace.h
@@ -336,6 +336,7 @@ struct trace_array {
 	bool			allocated_snapshot;
 	spinlock_t		snapshot_trigger_lock;
 	unsigned int		snapshot;
+	unsigned int		mapped;
 	unsigned long		max_latency;
 #ifdef CONFIG_FSNOTIFY
 	struct dentry		*d_max_latency;