From patchwork Tue Mar 26 10:08:27 2024
X-Patchwork-Submitter: Vincent Donnefort <vdonnefort@google.com>
X-Patchwork-Id: 13603821
Date: Tue, 26 Mar 2024 10:08:27 +0000
In-Reply-To: <20240326100830.1326610-1-vdonnefort@google.com>
Mime-Version: 1.0
References: <20240326100830.1326610-1-vdonnefort@google.com>
X-Mailer: git-send-email 2.44.0.396.g6e790dbe36-goog
Message-ID: <20240326100830.1326610-3-vdonnefort@google.com>
Subject: [PATCH v19 RESEND 2/5] ring-buffer: Introducing ring-buffer mapping functions
From: Vincent Donnefort <vdonnefort@google.com>
To: rostedt@goodmis.org, mhiramat@kernel.org, linux-kernel@vger.kernel.org,
 linux-trace-kernel@vger.kernel.org
Cc: mathieu.desnoyers@efficios.com, kernel-team@android.com,
 Vincent Donnefort <vdonnefort@google.com>, linux-mm@kvack.org

In preparation for allowing user-space to map a ring-buffer, add a set
of mapping functions:

  ring_buffer_{map,unmap}()

And controls on the ring-buffer:

  ring_buffer_map_get_reader()  /* swap reader and head */

Mapping the ring-buffer also involves:

  A unique ID for each subbuf of the ring-buffer, as subbufs are
  currently identified only through their in-kernel VA.

  A meta-page, where the ring-buffer statistics and a description of
  the current reader are stored.

The linear mapping exposes the meta-page, and each subbuf of the
ring-buffer, ordered by their unique IDs, assigned during the first
mapping.

Once mapped, no subbuf can get in or out of the ring-buffer: the buffer
size will remain unmodified and the splice-enabling functions will
simply memcpy the data instead of swapping subbufs.

CC: <linux-mm@kvack.org>
Signed-off-by: Vincent Donnefort <vdonnefort@google.com>

---
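For context, here is a rough sketch of how a user-space reader could
consume this mapping. It is an illustration only, not part of this
patch: the trace_pipe_raw mmap hookup and the TRACE_MMAP_IOCTL_GET_READER
ioctl used below are only wired up by a later patch of this series, so
the tracefs path and the ioctl number are assumptions here.

  /* Illustration only: assumes the tracefs hookup from a later patch. */
  #include <fcntl.h>
  #include <stdio.h>
  #include <sys/ioctl.h>
  #include <sys/mman.h>
  #include <unistd.h>

  #include <linux/trace_mmap.h>

  /* Added by a later patch of this series; value assumed here */
  #define TRACE_MMAP_IOCTL_GET_READER	_IO('T', 0x1)

  int main(void)
  {
	struct trace_buffer_meta *meta;
	unsigned long data_len;
	char *base;
	int fd;

	fd = open("/sys/kernel/tracing/per_cpu/cpu0/trace_pipe_raw",
		  O_RDONLY | O_NONBLOCK);
	if (fd < 0)
		return 1;

	/* pgoff == 0 is the meta-page, padded up to the sub-buffer size */
	meta = mmap(NULL, getpagesize(), PROT_READ, MAP_SHARED, fd, 0);
	if (meta == MAP_FAILED)
		return 1;

	/* The sub-buffers follow the meta-page, ordered by their IDs */
	data_len = (unsigned long)meta->subbuf_size * meta->nr_subbufs;
	base = mmap(NULL, data_len, PROT_READ, MAP_SHARED, fd,
		    meta->meta_page_size);
	if (base == MAP_FAILED)
		return 1;

	/* Swap the reader sub-buffer with the head */
	if (ioctl(fd, TRACE_MMAP_IOCTL_GET_READER) < 0)
		return 1;

	/* The meta-page tells us which sub-buffer to read and how far */
	printf("reader: id=%u read=%u lost_events=%llu data=%p\n",
	       meta->reader.id, meta->reader.read,
	       (unsigned long long)meta->reader.lost_events,
	       (void *)(base + (unsigned long)meta->subbuf_size * meta->reader.id));

	munmap(base, data_len);
	munmap(meta, getpagesize());
	close(fd);

	return 0;
  }

Because the meta-page is padded up to the sub-buffer alignment
(meta_page_size), the data can either be mapped separately at that
offset, as above, or together with the meta-page in a single, larger
mapping.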
diff --git a/include/linux/ring_buffer.h b/include/linux/ring_buffer.h
index dc5ae4e96aee..96d2140b471e 100644
--- a/include/linux/ring_buffer.h
+++ b/include/linux/ring_buffer.h
@@ -6,6 +6,8 @@
 #include <linux/seq_file.h>
 #include <linux/poll.h>
 
+#include <uapi/linux/trace_mmap.h>
+
 struct trace_buffer;
 struct ring_buffer_iter;
 
@@ -223,4 +225,8 @@ int trace_rb_cpu_prepare(unsigned int cpu, struct hlist_node *node);
 #define trace_rb_cpu_prepare	NULL
 #endif
 
+int ring_buffer_map(struct trace_buffer *buffer, int cpu,
+		    struct vm_area_struct *vma);
+int ring_buffer_unmap(struct trace_buffer *buffer, int cpu);
+int ring_buffer_map_get_reader(struct trace_buffer *buffer, int cpu);
 #endif /* _LINUX_RING_BUFFER_H */
diff --git a/include/uapi/linux/trace_mmap.h b/include/uapi/linux/trace_mmap.h
new file mode 100644
index 000000000000..ffcd8dfcaa4f
--- /dev/null
+++ b/include/uapi/linux/trace_mmap.h
@@ -0,0 +1,46 @@
+/* SPDX-License-Identifier: GPL-2.0 WITH Linux-syscall-note */
+#ifndef _TRACE_MMAP_H_
+#define _TRACE_MMAP_H_
+
+#include <linux/types.h>
+
+/**
+ * struct trace_buffer_meta - Ring-buffer Meta-page description
+ * @meta_page_size:	Size of this meta-page.
+ * @meta_struct_len:	Size of this structure.
+ * @subbuf_size:	Size of each sub-buffer.
+ * @nr_subbufs:		Number of sub-buffers in the ring-buffer, including the reader.
+ * @reader.lost_events:	Number of events lost at the time of the reader swap.
+ * @reader.id:		subbuf ID of the current reader. ID range [0 : @nr_subbufs - 1].
+ * @reader.read:	Number of bytes read on the reader subbuf.
+ * @flags:		Placeholder for now, 0 until new features are supported.
+ * @entries:		Number of entries in the ring-buffer.
+ * @overrun:		Number of entries lost in the ring-buffer.
+ * @read:		Number of entries that have been read.
+ * @Reserved1:		Reserved for future use.
+ * @Reserved2:		Reserved for future use.
+ */
+struct trace_buffer_meta {
+	__u32	meta_page_size;
+	__u32	meta_struct_len;
+
+	__u32	subbuf_size;
+	__u32	nr_subbufs;
+
+	struct {
+		__u64	lost_events;
+		__u32	id;
+		__u32	read;
+	} reader;
+
+	__u64	flags;
+
+	__u64	entries;
+	__u64	overrun;
+	__u64	read;
+
+	__u64	Reserved1;
+	__u64	Reserved2;
+};
+
+#endif /* _TRACE_MMAP_H_ */
diff --git a/kernel/trace/ring_buffer.c b/kernel/trace/ring_buffer.c
index cc9ebe593571..1dc932e7963c 100644
--- a/kernel/trace/ring_buffer.c
+++ b/kernel/trace/ring_buffer.c
@@ -9,6 +9,7 @@
 #include <linux/trace_events.h>
 #include <linux/ring_buffer.h>
 #include <linux/trace_clock.h>
+#include <linux/cacheflush.h>
 #include <linux/sched/clock.h>
 #include <linux/trace_seq.h>
 #include <linux/spinlock.h>
@@ -338,6 +339,7 @@ struct buffer_page {
 	local_t		 entries;	/* entries on this page */
 	unsigned long	 real_end;	/* real end of data */
 	unsigned	 order;		/* order of the page */
+	u32		 id;		/* ID for external mapping */
 	struct buffer_data_page *page;	/* Actual data page */
 };
 
@@ -484,6 +486,12 @@ struct ring_buffer_per_cpu {
 	u64				read_stamp;
 	/* pages removed since last reset */
 	unsigned long			pages_removed;
+
+	unsigned int			mapped;
+	struct mutex			mapping_lock;
+	unsigned long			*subbuf_ids;	/* ID to subbuf VA */
+	struct trace_buffer_meta	*meta_page;
+
 	/* ring buffer pages to update, > 0 to add, < 0 to remove */
 	long				nr_pages_to_update;
 	struct list_head		new_pages; /* new pages to add */
@@ -1599,6 +1607,7 @@ rb_allocate_cpu_buffer(struct trace_buffer *buffer, long nr_pages, int cpu)
 	init_irq_work(&cpu_buffer->irq_work.work, rb_wake_up_waiters);
 	init_waitqueue_head(&cpu_buffer->irq_work.waiters);
 	init_waitqueue_head(&cpu_buffer->irq_work.full_waiters);
+	mutex_init(&cpu_buffer->mapping_lock);
 
 	bpage = kzalloc_node(ALIGN(sizeof(*bpage), cache_line_size()),
 			    GFP_KERNEL, cpu_to_node(cpu));
@@ -1789,8 +1798,6 @@ bool ring_buffer_time_stamp_abs(struct trace_buffer *buffer)
 	return buffer->time_stamp_abs;
 }
 
-static void rb_reset_cpu(struct ring_buffer_per_cpu *cpu_buffer);
-
 static inline unsigned long rb_page_entries(struct buffer_page *bpage)
 {
 	return local_read(&bpage->entries) & RB_WRITE_MASK;
@@ -5211,6 +5218,22 @@ static void rb_clear_buffer_page(struct buffer_page *page)
 	page->read = 0;
 }
 
+static void rb_update_meta_page(struct ring_buffer_per_cpu *cpu_buffer)
+{
+	struct trace_buffer_meta *meta = cpu_buffer->meta_page;
+
+	meta->reader.read = cpu_buffer->reader_page->read;
+	meta->reader.id = cpu_buffer->reader_page->id;
+	meta->reader.lost_events = cpu_buffer->lost_events;
+
+	meta->entries = local_read(&cpu_buffer->entries);
+	meta->overrun = local_read(&cpu_buffer->overrun);
+	meta->read = cpu_buffer->read;
+
+	/* Some archs do not have data cache coherency between kernel and user-space */
+	flush_dcache_folio(virt_to_folio(cpu_buffer->meta_page));
+}
+
 static void
 rb_reset_cpu(struct ring_buffer_per_cpu *cpu_buffer)
 {
@@ -5255,6 +5278,9 @@ rb_reset_cpu(struct ring_buffer_per_cpu *cpu_buffer)
 	cpu_buffer->lost_events = 0;
 	cpu_buffer->last_overrun = 0;
 
+	if (cpu_buffer->mapped)
+		rb_update_meta_page(cpu_buffer);
+
 	rb_head_page_activate(cpu_buffer);
 	cpu_buffer->pages_removed = 0;
 }
@@ -5469,6 +5495,12 @@ int ring_buffer_swap_cpu(struct trace_buffer *buffer_a,
 	cpu_buffer_a = buffer_a->buffers[cpu];
 	cpu_buffer_b = buffer_b->buffers[cpu];
 
+	/* It's up to the callers to not try to swap mapped buffers */
+	if (WARN_ON_ONCE(cpu_buffer_a->mapped || cpu_buffer_b->mapped)) {
+		ret = -EBUSY;
+		goto out;
+	}
+
 	/* At least make sure the two buffers are somewhat the same */
 	if (cpu_buffer_a->nr_pages != cpu_buffer_b->nr_pages)
 		goto out;
@@ -5733,7 +5765,8 @@ int ring_buffer_read_page(struct trace_buffer *buffer,
 	 * Otherwise, we can simply swap the page with the one passed in.
 	 */
 	if (read || (len < (commit - read)) ||
-	    cpu_buffer->reader_page == cpu_buffer->commit_page) {
+	    cpu_buffer->reader_page == cpu_buffer->commit_page ||
+	    cpu_buffer->mapped) {
 		struct buffer_data_page *rpage = cpu_buffer->reader_page->page;
 		unsigned int rpos = read;
 		unsigned int pos = 0;
@@ -5956,6 +5989,11 @@ int ring_buffer_subbuf_order_set(struct trace_buffer *buffer, int order)
 
 		cpu_buffer = buffer->buffers[cpu];
 
+		if (cpu_buffer->mapped) {
+			err = -EBUSY;
+			goto error;
+		}
+
 		/* Update the number of pages to match the new size */
 		nr_pages = old_size * buffer->buffers[cpu]->nr_pages;
 		nr_pages = DIV_ROUND_UP(nr_pages, buffer->subbuf_size);
@@ -6057,6 +6095,358 @@ int ring_buffer_subbuf_order_set(struct trace_buffer *buffer, int order)
 }
 EXPORT_SYMBOL_GPL(ring_buffer_subbuf_order_set);
 
+static int rb_alloc_meta_page(struct ring_buffer_per_cpu *cpu_buffer)
+{
+	struct page *page;
+
+	if (cpu_buffer->meta_page)
+		return 0;
+
+	page = alloc_page(GFP_USER | __GFP_ZERO);
+	if (!page)
+		return -ENOMEM;
+
+	cpu_buffer->meta_page = page_to_virt(page);
+
+	return 0;
+}
+
+static void rb_free_meta_page(struct ring_buffer_per_cpu *cpu_buffer)
+{
+	unsigned long addr = (unsigned long)cpu_buffer->meta_page;
+
+	free_page(addr);
+	cpu_buffer->meta_page = NULL;
+}
+
+static void rb_setup_ids_meta_page(struct ring_buffer_per_cpu *cpu_buffer,
+				   unsigned long *subbuf_ids)
+{
+	struct trace_buffer_meta *meta = cpu_buffer->meta_page;
+	unsigned int nr_subbufs = cpu_buffer->nr_pages + 1;
+	struct buffer_page *first_subbuf, *subbuf;
+	int id = 0;
+
+	subbuf_ids[id] = (unsigned long)cpu_buffer->reader_page->page;
+	cpu_buffer->reader_page->id = id++;
+
+	first_subbuf = subbuf = rb_set_head_page(cpu_buffer);
+	do {
+		if (WARN_ON(id >= nr_subbufs))
+			break;
+
+		subbuf_ids[id] = (unsigned long)subbuf->page;
+		subbuf->id = id;
+
+		rb_inc_page(&subbuf);
+		id++;
+	} while (subbuf != first_subbuf);
+
+	/* install subbuf ID to kern VA translation */
+	cpu_buffer->subbuf_ids = subbuf_ids;
+
+	/* __rb_map_vma() pads the meta-page to align it with the sub-buffers */
+	meta->meta_page_size = PAGE_SIZE << cpu_buffer->buffer->subbuf_order;
+	meta->meta_struct_len = sizeof(*meta);
+	meta->nr_subbufs = nr_subbufs;
+	meta->subbuf_size = cpu_buffer->buffer->subbuf_size + BUF_PAGE_HDR_SIZE;
+
+	rb_update_meta_page(cpu_buffer);
+}
+
+static struct ring_buffer_per_cpu *
+rb_get_mapped_buffer(struct trace_buffer *buffer, int cpu)
+{
+	struct ring_buffer_per_cpu *cpu_buffer;
+
+	if (!cpumask_test_cpu(cpu, buffer->cpumask))
+		return ERR_PTR(-EINVAL);
+
+	cpu_buffer = buffer->buffers[cpu];
+
+	mutex_lock(&cpu_buffer->mapping_lock);
+
+	if (!cpu_buffer->mapped) {
+		mutex_unlock(&cpu_buffer->mapping_lock);
+		return ERR_PTR(-ENODEV);
+	}
+
+	return cpu_buffer;
+}
+
+static void rb_put_mapped_buffer(struct ring_buffer_per_cpu *cpu_buffer)
+{
+	mutex_unlock(&cpu_buffer->mapping_lock);
+}
+
+/*
+ * Fast-path for rb_buffer_(un)map(). Called whenever the meta-page doesn't need
+ * to be set-up or torn-down.
+ */
+static int __rb_inc_dec_mapped(struct ring_buffer_per_cpu *cpu_buffer,
+			       bool inc)
+{
+	unsigned long flags;
+
+	lockdep_assert_held(&cpu_buffer->mapping_lock);
+
+	if (inc && cpu_buffer->mapped == UINT_MAX)
+		return -EBUSY;
+
+	if (WARN_ON(!inc && cpu_buffer->mapped == 0))
+		return -EINVAL;
+
+	mutex_lock(&cpu_buffer->buffer->mutex);
+	raw_spin_lock_irqsave(&cpu_buffer->reader_lock, flags);
+
+	if (inc)
+		cpu_buffer->mapped++;
+	else
+		cpu_buffer->mapped--;
+
+	raw_spin_unlock_irqrestore(&cpu_buffer->reader_lock, flags);
+	mutex_unlock(&cpu_buffer->buffer->mutex);
+
+	return 0;
+}
+
+#define subbuf_page(off, start) \
+	virt_to_page((void *)((start) + ((off) << PAGE_SHIFT)))
+
+#define foreach_subbuf_page(sub_order, start, page)		\
+	page = subbuf_page(0, (start));				\
+	for (int __off = 0; __off < (1 << (sub_order));		\
+	     __off++, page = subbuf_page(__off, (start)))
+
+/*
+ *   +--------------+  pgoff == 0
+ *   |   meta page  |
+ *   +--------------+  pgoff == 1
+ *   |  000000000   |
+ *   +--------------+  pgoff == (1 << subbuf_order)
+ *   |  subbuffer 0 |
+ *   |              |
+ *   +--------------+  pgoff == (2 * (1 << subbuf_order))
+ *   |  subbuffer 1 |
+ *   |              |
+ *         ...
+ */
+static int __rb_map_vma(struct ring_buffer_per_cpu *cpu_buffer,
+			struct vm_area_struct *vma)
+{
+	unsigned long nr_subbufs, nr_pages, vma_pages, pgoff = vma->vm_pgoff;
+	unsigned int subbuf_pages, subbuf_order;
+	struct page **pages;
+	int p = 0, s = 0;
+	int err;
+
+	lockdep_assert_held(&cpu_buffer->mapping_lock);
+
+	subbuf_order = cpu_buffer->buffer->subbuf_order;
+	subbuf_pages = 1 << subbuf_order;
+
+	if (subbuf_order && pgoff % subbuf_pages)
+		return -EINVAL;
+
+	nr_subbufs = cpu_buffer->nr_pages + 1;
+	nr_pages = ((nr_subbufs + 1) << subbuf_order) - pgoff;
+
+	vma_pages = (vma->vm_end - vma->vm_start) >> PAGE_SHIFT;
+	if (!vma_pages || vma_pages > nr_pages)
+		return -EINVAL;
+
+	nr_pages = vma_pages;
+
+	pages = kcalloc(nr_pages, sizeof(*pages), GFP_KERNEL);
+	if (!pages)
+		return -ENOMEM;
+
+	if (!pgoff) {
+		unsigned long meta_page_padding;
+
+		pages[p++] = virt_to_page(cpu_buffer->meta_page);
+
+		/*
+		 * Pad with the zero-page to align the meta-page with the
+		 * sub-buffers.
+		 */
+		meta_page_padding = subbuf_pages - 1;
+		while (meta_page_padding-- && p < nr_pages)
+			pages[p++] = ZERO_PAGE(0);
+	} else {
+		/* Skip the meta-page */
+		pgoff -= subbuf_pages;
+
+		s += pgoff / subbuf_pages;
+	}
+
+	while (s < nr_subbufs && p < nr_pages) {
+		struct page *page;
+
+		foreach_subbuf_page(subbuf_order, cpu_buffer->subbuf_ids[s], page) {
+			if (p >= nr_pages)
+				break;
+
+			pages[p++] = page;
+		}
+		s++;
+	}
+
+	err = vm_insert_pages(vma, vma->vm_start, pages, &nr_pages);
+
+	kfree(pages);
+
+	return err;
+}
+
+int ring_buffer_map(struct trace_buffer *buffer, int cpu,
+		    struct vm_area_struct *vma)
+{
+	struct ring_buffer_per_cpu *cpu_buffer;
+	unsigned long flags, *subbuf_ids;
+	int err = 0;
+
+	if (!cpumask_test_cpu(cpu, buffer->cpumask))
+		return -EINVAL;
+
+	cpu_buffer = buffer->buffers[cpu];
+
+	mutex_lock(&cpu_buffer->mapping_lock);
+
+	if (cpu_buffer->mapped) {
+		err = __rb_map_vma(cpu_buffer, vma);
+		if (!err)
+			err = __rb_inc_dec_mapped(cpu_buffer, true);
+		mutex_unlock(&cpu_buffer->mapping_lock);
+		return err;
+	}
+
+	/* prevent another thread from changing buffer/sub-buffer sizes */
+	mutex_lock(&buffer->mutex);
+
+	err = rb_alloc_meta_page(cpu_buffer);
+	if (err)
+		goto unlock;
+
+	/* subbuf_ids include the reader while nr_pages does not */
+	subbuf_ids = kcalloc(cpu_buffer->nr_pages + 1, sizeof(*subbuf_ids), GFP_KERNEL);
+	if (!subbuf_ids) {
+		rb_free_meta_page(cpu_buffer);
+		err = -ENOMEM;
+		goto unlock;
+	}
+
+	atomic_inc(&cpu_buffer->resize_disabled);
+
+	/*
+	 * Lock all readers to block any subbuf swap until the subbuf IDs are
+	 * assigned.
+	 */
+	raw_spin_lock_irqsave(&cpu_buffer->reader_lock, flags);
+	rb_setup_ids_meta_page(cpu_buffer, subbuf_ids);
+	raw_spin_unlock_irqrestore(&cpu_buffer->reader_lock, flags);
+
+	err = __rb_map_vma(cpu_buffer, vma);
+	if (!err) {
+		raw_spin_lock_irqsave(&cpu_buffer->reader_lock, flags);
+		cpu_buffer->mapped = 1;
+		raw_spin_unlock_irqrestore(&cpu_buffer->reader_lock, flags);
+	} else {
+		kfree(cpu_buffer->subbuf_ids);
+		cpu_buffer->subbuf_ids = NULL;
+		rb_free_meta_page(cpu_buffer);
+	}
+unlock:
+	mutex_unlock(&buffer->mutex);
+	mutex_unlock(&cpu_buffer->mapping_lock);
+
+	return err;
+}
+
+int ring_buffer_unmap(struct trace_buffer *buffer, int cpu)
+{
+	struct ring_buffer_per_cpu *cpu_buffer;
+	unsigned long flags;
+	int err = 0;
+
+	if (!cpumask_test_cpu(cpu, buffer->cpumask))
+		return -EINVAL;
+
+	cpu_buffer = buffer->buffers[cpu];
+
+	mutex_lock(&cpu_buffer->mapping_lock);
+
+	if (!cpu_buffer->mapped) {
+		err = -ENODEV;
+		goto out;
+	} else if (cpu_buffer->mapped > 1) {
+		__rb_inc_dec_mapped(cpu_buffer, false);
+		goto out;
+	}
+
+	mutex_lock(&buffer->mutex);
+	raw_spin_lock_irqsave(&cpu_buffer->reader_lock, flags);
+
+	cpu_buffer->mapped = 0;
+
+	raw_spin_unlock_irqrestore(&cpu_buffer->reader_lock, flags);
+
+	kfree(cpu_buffer->subbuf_ids);
+	cpu_buffer->subbuf_ids = NULL;
+	rb_free_meta_page(cpu_buffer);
+	atomic_dec(&cpu_buffer->resize_disabled);
+
+	mutex_unlock(&buffer->mutex);
+out:
+	mutex_unlock(&cpu_buffer->mapping_lock);
+
+	return err;
+}
+
+int ring_buffer_map_get_reader(struct trace_buffer *buffer, int cpu)
+{
+	struct ring_buffer_per_cpu *cpu_buffer;
+	unsigned long reader_size;
+	unsigned long flags;
+
+	cpu_buffer = rb_get_mapped_buffer(buffer, cpu);
+	if (IS_ERR(cpu_buffer))
+		return (int)PTR_ERR(cpu_buffer);
+
+	raw_spin_lock_irqsave(&cpu_buffer->reader_lock, flags);
+consume:
+	if (rb_per_cpu_empty(cpu_buffer))
+		goto out;
+
+	reader_size = rb_page_size(cpu_buffer->reader_page);
+
+	/*
+	 * There is data to be read on the current reader page; we can
+	 * return to the caller. But before that, we assume the latter will read
+	 * everything. Let's update the kernel reader accordingly.
+	 */
+	if (cpu_buffer->reader_page->read < reader_size) {
+		while (cpu_buffer->reader_page->read < reader_size)
+			rb_advance_reader(cpu_buffer);
+		goto out;
+	}
+
+	if (WARN_ON(!rb_get_reader_page(cpu_buffer)))
+		goto out;
+
+	goto consume;
+out:
+	/* Some archs do not have data cache coherency between kernel and user-space */
+	flush_dcache_folio(virt_to_folio(cpu_buffer->reader_page->page));
+
+	rb_update_meta_page(cpu_buffer);
+
+	raw_spin_unlock_irqrestore(&cpu_buffer->reader_lock, flags);
+	rb_put_mapped_buffer(cpu_buffer);
+
+	return 0;
+}
+
 /*
  * We only allocate new buffers, never free them if the CPU goes down.
  * If we were to free the buffer, then the user would lose any trace that was in