
ring-buffer: Fix reader locking when changing the sub buffer order

Message ID: 20241014141554.10484-1-petr.pavlu@suse.com (mailing list archive)
State: Superseded
Series: ring-buffer: Fix reader locking when changing the sub buffer order

Commit Message

Petr Pavlu Oct. 14, 2024, 2:14 p.m. UTC
The function ring_buffer_subbuf_order_set() updates each
ring_buffer_per_cpu and installs new sub buffers that match the requested
page order. This operation may be invoked concurrently with readers that
rely on some of the modified data, such as the head bit (RB_PAGE_HEAD), or
the ring_buffer_per_cpu.pages and reader_page pointers. However, no
exclusive access is acquired by ring_buffer_subbuf_order_set(). Modifying
this data while a reader concurrently operates on it can then result in
incorrect memory accesses and various crashes.

Fix the problem by taking the reader_lock when updating a specific
ring_buffer_per_cpu in ring_buffer_subbuf_order_set().

Fixes: 8e7b58c27b3c ("ring-buffer: Just update the subbuffers when changing their allocation order")
Signed-off-by: Petr Pavlu <petr.pavlu@suse.com>
---
References:
https://lore.kernel.org/linux-trace-kernel/20240715145141.5528-1-petr.pavlu@suse.com/
https://lore.kernel.org/linux-trace-kernel/20241010195849.2f77cc3f@gandalf.local.home/
https://lore.kernel.org/linux-trace-kernel/20241011112850.17212b25@gandalf.local.home/

 kernel/trace/ring_buffer.c | 44 ++++++++++++++++++++++----------------
 1 file changed, 26 insertions(+), 18 deletions(-)


base-commit: 6485cf5ea253d40d507cd71253c9568c5470cd27
prerequisite-patch-id: 0aa81c18abaac4990d14c431e12b9e91696aa053
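
For context, the shape of the fix is to perform the whole per-CPU update under the same reader_lock that the read side takes, and to only collect the replaced pages inside the critical section, freeing them after the lock is dropped so that nothing is freed under a raw spinlock. A minimal sketch of that pattern (swap_sub_buffers() is a hypothetical name and the page-swapping logic is elided; the actual code is in the patch below):

static void swap_sub_buffers(struct ring_buffer_per_cpu *cpu_buffer)
{
	struct list_head old_pages;
	unsigned long flags;

	raw_spin_lock_irqsave(&cpu_buffer->reader_lock, flags);

	/*
	 * Update cpu_buffer->pages, cpu_buffer->reader_page and the head
	 * bit only while holding reader_lock, so a concurrent reader never
	 * observes the lists in a half-updated state. The replaced pages
	 * are collected on old_pages instead of being freed here.
	 */

	raw_spin_unlock_irqrestore(&cpu_buffer->reader_lock, flags);

	/* Now free everything on old_pages, outside the spinlock. */
}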

Comments

Steven Rostedt Oct. 14, 2024, 10:15 p.m. UTC | #1
On Mon, 14 Oct 2024 16:14:16 +0200
Petr Pavlu <petr.pavlu@suse.com> wrote:

> The function ring_buffer_subbuf_order_set() updates each
> ring_buffer_per_cpu and installs new sub buffers that match the requested
> page order. This operation may be invoked concurrently with readers that
> rely on some of the modified data, such as the head bit (RB_PAGE_HEAD), or
> the ring_buffer_per_cpu.pages and reader_page pointers. However, no
> exclusive access is acquired by ring_buffer_subbuf_order_set(). Modifying
> this data while a reader concurrently operates on it can then result in
> incorrect memory accesses and various crashes.
> 
> Fix the problem by taking the reader_lock when updating a specific
> ring_buffer_per_cpu in ring_buffer_subbuf_order_set().
> 
> Fixes: 8e7b58c27b3c ("ring-buffer: Just update the subbuffers when changing their allocation order")
> Signed-off-by: Petr Pavlu <petr.pavlu@suse.com>

I see you applied this on top of your cnt patch. Can you reverse the order
and add this patch first and then the cnt patch, please?

This is a fix that needs to go up to mainline during the rc release. The
cnt patch is going into the next merge window.

> ---
> References:
> https://lore.kernel.org/linux-trace-kernel/20240715145141.5528-1-petr.pavlu@suse.com/
> https://lore.kernel.org/linux-trace-kernel/20241010195849.2f77cc3f@gandalf.local.home/
> https://lore.kernel.org/linux-trace-kernel/20241011112850.17212b25@gandalf.local.home/

BTW, it's also OK to add these as Link: tags in the commit message.

-- Steve


> 
> base-commit: 6485cf5ea253d40d507cd71253c9568c5470cd27
> prerequisite-patch-id: 0aa81c18abaac4990d14c431e12b9e91696aa053
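
For reference, written as Link: tags those references would sit in the tag block of the commit message, next to Fixes: and Signed-off-by:, like this:

Link: https://lore.kernel.org/linux-trace-kernel/20240715145141.5528-1-petr.pavlu@suse.com/
Link: https://lore.kernel.org/linux-trace-kernel/20241010195849.2f77cc3f@gandalf.local.home/
Link: https://lore.kernel.org/linux-trace-kernel/20241011112850.17212b25@gandalf.local.home/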

Patch

diff --git a/kernel/trace/ring_buffer.c b/kernel/trace/ring_buffer.c
index 4c24191fa47d..adde95400ab4 100644
--- a/kernel/trace/ring_buffer.c
+++ b/kernel/trace/ring_buffer.c
@@ -6773,41 +6773,40 @@ int ring_buffer_subbuf_order_set(struct trace_buffer *buffer, int order)
 	}
 
 	for_each_buffer_cpu(buffer, cpu) {
+		struct buffer_data_page *old_free_data_page;
+		struct list_head old_pages;
+		unsigned long flags;
 
 		if (!cpumask_test_cpu(cpu, buffer->cpumask))
 			continue;
 
 		cpu_buffer = buffer->buffers[cpu];
 
+		raw_spin_lock_irqsave(&cpu_buffer->reader_lock, flags);
+
 		/* Clear the head bit to make the link list normal to read */
 		rb_head_page_deactivate(cpu_buffer);
 
-		/* Now walk the list and free all the old sub buffers */
-		list_for_each_entry_safe(bpage, tmp, cpu_buffer->pages, list) {
-			list_del_init(&bpage->list);
-			free_buffer_page(bpage);
-		}
-		/* The above loop stopped an the last page needing to be freed */
-		bpage = list_entry(cpu_buffer->pages, struct buffer_page, list);
-		free_buffer_page(bpage);
-
-		/* Free the current reader page */
-		free_buffer_page(cpu_buffer->reader_page);
+		/*
+		 * Collect buffers from the cpu_buffer pages list and the
+		 * reader_page on old_pages, so they can be freed later when not
+		 * under a spinlock. The pages list is a linked list with no
+		 * head, adding old_pages turns it into a regular list with
+		 * old_pages being the head.
+		 */
+		list_add(&old_pages, cpu_buffer->pages);
+		list_add(&cpu_buffer->reader_page->list, &old_pages);
 
 		/* One page was allocated for the reader page */
 		cpu_buffer->reader_page = list_entry(cpu_buffer->new_pages.next,
 						     struct buffer_page, list);
 		list_del_init(&cpu_buffer->reader_page->list);
 
-		/* The cpu_buffer pages are a link list with no head */
+		/* Install the new pages, remove the head from the list */
 		cpu_buffer->pages = cpu_buffer->new_pages.next;
-		cpu_buffer->new_pages.next->prev = cpu_buffer->new_pages.prev;
-		cpu_buffer->new_pages.prev->next = cpu_buffer->new_pages.next;
+		list_del_init(&cpu_buffer->new_pages);
 		cpu_buffer->cnt++;
 
-		/* Clear the new_pages list */
-		INIT_LIST_HEAD(&cpu_buffer->new_pages);
-
 		cpu_buffer->head_page
 			= list_entry(cpu_buffer->pages, struct buffer_page, list);
 		cpu_buffer->tail_page = cpu_buffer->commit_page = cpu_buffer->head_page;
@@ -6815,11 +6814,20 @@ int ring_buffer_subbuf_order_set(struct trace_buffer *buffer, int order)
 		cpu_buffer->nr_pages = cpu_buffer->nr_pages_to_update;
 		cpu_buffer->nr_pages_to_update = 0;
 
-		free_pages((unsigned long)cpu_buffer->free_page, old_order);
+		old_free_data_page = cpu_buffer->free_page;
 		cpu_buffer->free_page = NULL;
 
 		rb_head_page_activate(cpu_buffer);
 
+		raw_spin_unlock_irqrestore(&cpu_buffer->reader_lock, flags);
+
+		/* Free old sub buffers */
+		list_for_each_entry_safe(bpage, tmp, &old_pages, list) {
+			list_del_init(&bpage->list);
+			free_buffer_page(bpage);
+		}
+		free_pages((unsigned long)old_free_data_page, old_order);
+
 		rb_check_pages(cpu_buffer);
 	}
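
A note on the list handling above: cpu_buffer->pages points into a circular list with no dedicated head node, so it cannot be walked directly with list_for_each_entry_safe(). list_add(&old_pages, cpu_buffer->pages) splices old_pages in as a head, turning the circle into a regular headed list; conversely, list_del_init(&cpu_buffer->new_pages) removes the head from the new pages so they become the headless list the ring buffer expects. A standalone userspace sketch of the same trick (the struct and helpers are minimal local stand-ins, not the kernel's):

#include <stdio.h>
#include <stddef.h>

struct list_head { struct list_head *next, *prev; };

/* Insert 'new' right after 'head' (same semantics as the kernel helper). */
static void list_add(struct list_head *new, struct list_head *head)
{
	head->next->prev = new;
	new->next = head->next;
	new->prev = head;
	head->next = new;
}

struct page { struct list_head list; int id; };

int main(void)
{
	struct page a = { .id = 0 }, b = { .id = 1 }, c = { .id = 2 };
	struct list_head old_pages;
	struct list_head *p;

	/* Link a, b, c into a headless circular list, the way the ring
	 * buffer keeps its sub buffer pages. */
	a.list.next = &b.list; b.list.prev = &a.list;
	b.list.next = &c.list; c.list.prev = &b.list;
	c.list.next = &a.list; a.list.prev = &c.list;

	/* Splice in a head: the circle becomes an ordinary list that can
	 * be walked from old_pages.next around to &old_pages. */
	list_add(&old_pages, &a.list);

	for (p = old_pages.next; p != &old_pages; p = p->next) {
		struct page *bpage = (struct page *)
			((char *)p - offsetof(struct page, list));
		printf("freeing page %d\n", bpage->id);
	}
	return 0;
}

Collecting the old pages this way is also what lets the patch call free_buffer_page() and free_pages() only after reader_lock has been released, keeping the raw-spinlock critical section short.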