From patchwork Wed Apr 2 14:49:04 2025
X-Patchwork-Submitter: Steven Rostedt
X-Patchwork-Id: 14036112
Message-ID: <20250402144953.412882844@goodmis.org>
Date: Wed, 02 Apr 2025 10:49:04 -0400
From: Steven Rostedt
To: linux-kernel@vger.kernel.org, linux-trace-kernel@vger.kernel.org
Cc: Linus Torvalds, Masami Hiramatsu, Mark Rutland, Mathieu Desnoyers,
    Andrew Morton, Vincent Donnefort, Vlastimil Babka, Mike Rapoport,
    Jann Horn
Subject: [PATCH v6 1/4] tracing: Enforce the persistent ring buffer to be page aligned
References: <20250402144903.993276623@goodmis.org>

From: Steven Rostedt

Enforce that the address and the size of the memory used by the
persistent ring buffer are page aligned. Also update the documentation
to reflect this requirement.

Link: https://lore.kernel.org/all/CAHk-=whUOfVucfJRt7E0AH+GV41ELmS4wJqxHDnui6Giddfkzw@mail.gmail.com/

Suggested-by: Linus Torvalds
Signed-off-by: Steven Rostedt (Google)
---
Changes since v5: https://lore.kernel.org/linux-trace-kernel/20250401225842.261475465@goodmis.org/

- Use %pa instead of %lx for start and size (Mike Rapoport)

 Documentation/admin-guide/kernel-parameters.txt |  2 ++
 Documentation/trace/debugging.rst               |  2 ++
 kernel/trace/trace.c                            | 10 ++++++++++
 3 files changed, 14 insertions(+)

diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt
index fb8752b42ec8..71861643ef14 100644
--- a/Documentation/admin-guide/kernel-parameters.txt
+++ b/Documentation/admin-guide/kernel-parameters.txt
@@ -7241,6 +7241,8 @@
			This is just one of many ways that can clear memory. Make sure your system
			keeps the content of memory across reboots before relying on this option.

+			NB: Both the mapped address and size must be page aligned for the architecture.
+			See also Documentation/trace/debugging.rst
diff --git a/Documentation/trace/debugging.rst b/Documentation/trace/debugging.rst
index 54fb16239d70..d54bc500af80 100644
--- a/Documentation/trace/debugging.rst
+++ b/Documentation/trace/debugging.rst
@@ -136,6 +136,8 @@
 kernel, so only the same kernel is guaranteed to work if the mapping is
 preserved. Switching to a different kernel version may find a different
 layout and mark the buffer as invalid.

+NB: Both the mapped address and size must be page aligned for the architecture.
+
 Using trace_printk() in the boot instance
 -----------------------------------------
 By default, the content of trace_printk() goes into the top level tracing
diff --git a/kernel/trace/trace.c b/kernel/trace/trace.c
index 14c38fcd6f9e..96129985e81c 100644
--- a/kernel/trace/trace.c
+++ b/kernel/trace/trace.c
@@ -10774,6 +10774,16 @@ __init static void enable_instances(void)
 	}

 	if (start) {
+		/* Start and size must be page aligned */
+		if (start & ~PAGE_MASK) {
+			pr_warn("Tracing: mapping start addr %pa is not page aligned\n", &start);
+			continue;
+		}
+		if (size & ~PAGE_MASK) {
+			pr_warn("Tracing: mapping size %pa is not page aligned\n", &size);
+			continue;
+		}
+
 		addr = map_pages(start, size);
 		if (addr) {
 			pr_info("Tracing: mapped boot instance %s at physical memory %pa of size 0x%lx\n",
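
For illustration of the check added above: (x & ~PAGE_MASK) is non-zero
exactly when x has offset bits set within a page, so it is equivalent to
!IS_ALIGNED(x, PAGE_SIZE). A stand-alone sketch of the same test (not part
of the patch; the wrapper function and its name are hypothetical, while
IS_ALIGNED(), PAGE_SIZE and PAGE_MASK are standard kernel helpers):

	#include <linux/align.h>	/* IS_ALIGNED() */
	#include <linux/mm.h>		/* PAGE_SIZE, PAGE_MASK */
	#include <linux/types.h>	/* phys_addr_t */

	/*
	 * Sketch: both the physical start and the size must sit on page
	 * boundaries before the range may be mapped.
	 */
	static bool trace_range_page_aligned(phys_addr_t start, unsigned long size)
	{
		return IS_ALIGNED(start, PAGE_SIZE) && IS_ALIGNED(size, PAGE_SIZE);
	}

With 4K pages, a boot line such as "reserve_mem=12M:4096:trace
trace_instance=boot_map@trace" passes both tests, while a start address
with any of the low 12 bits set is now rejected via the pr_warn() paths
above.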
From patchwork Wed Apr 2 14:49:05 2025
X-Patchwork-Submitter: Steven Rostedt
X-Patchwork-Id: 14036114
Message-ID: <20250402144953.583750106@goodmis.org>
Date: Wed, 02 Apr 2025 10:49:05 -0400
From: Steven Rostedt
To: linux-kernel@vger.kernel.org, linux-trace-kernel@vger.kernel.org
Cc: Linus Torvalds, Masami Hiramatsu, Mark Rutland, Mathieu Desnoyers,
    Andrew Morton, Vincent Donnefort, Vlastimil Babka, Mike Rapoport,
    Jann Horn
Subject: [PATCH v6 2/4] tracing: Have reserve_mem use phys_to_virt() and separate from memmap buffer
References: <20250402144903.993276623@goodmis.org>

From: Steven Rostedt

The reserve_mem kernel command line option may pass back a physical
address, but the memory is still part of normal memory, just as it would
be if allocated with memblock_alloc(). This means that the physical
address returned by the reserve_mem command line option can be converted
directly to a virtual address by simply using phys_to_virt().

When freeing the buffer there's no need to call vunmap() anymore, as the
memory allocated by reserve_mem is freed by the call to
reserve_mem_release_by_name().

Because the persistent ring buffer can also be allocated via the memmap
option, which *is* different from normal memory in that it cannot be
added back to the buddy system, it must be treated differently. It still
needs to be virtually mapped to be accessed. It also cannot be freed,
nor can it ever be memory mapped to user space.

Create a new trace_array flag called TRACE_ARRAY_FL_MEMMAP, which gets
set if the buffer is created by the memmap option; this prevents the
buffer from being memory mapped by user space.

Also increment the ref count for memmap'ed buffers so that they can
never be freed.

Link: https://lore.kernel.org/all/Z-wFszhJ_9o4dc8O@kernel.org/

Suggested-by: Mike Rapoport
Signed-off-by: Steven Rostedt (Google)
---
Changes since v5: https://lore.kernel.org/linux-trace-kernel/20250401225842.429332654@goodmis.org/

- Updated change log to use memblock_alloc() instead of memblock_reserve() (Mike Rapoport)

 kernel/trace/trace.c | 23 ++++++++++++++++-------
 kernel/trace/trace.h |  1 +
 2 files changed, 17 insertions(+), 7 deletions(-)

diff --git a/kernel/trace/trace.c b/kernel/trace/trace.c
index 96129985e81c..e58b72e61573 100644
--- a/kernel/trace/trace.c
+++ b/kernel/trace/trace.c
@@ -8492,6 +8492,10 @@ static int tracing_buffers_mmap(struct file *filp, struct vm_area_struct *vma)
 	struct trace_iterator *iter = &info->iter;
 	int ret = 0;

+	/* A memmap'ed buffer is not supported for user space mmap */
+	if (iter->tr->flags & TRACE_ARRAY_FL_MEMMAP)
+		return -ENODEV;
+
 	/* Currently the boot mapped buffer is not supported for mmap */
 	if (iter->tr->flags & TRACE_ARRAY_FL_BOOT)
 		return -ENODEV;
@@ -9600,9 +9604,6 @@ static void free_trace_buffers(struct trace_array *tr)
 #ifdef CONFIG_TRACER_MAX_TRACE
 	free_trace_buffer(&tr->max_buffer);
 #endif
-
-	if (tr->range_addr_start)
-		vunmap((void *)tr->range_addr_start);
 }

 static void init_trace_flags_index(struct trace_array *tr)
@@ -10696,6 +10697,7 @@ static inline void do_allocate_snapshot(const char *name) { }
 __init static void enable_instances(void)
 {
 	struct trace_array *tr;
+	bool memmap_area = false;
 	char *curr_str;
 	char *name;
 	char *str;
@@ -10764,6 +10766,7 @@ __init static void enable_instances(void)
 				name);
 			continue;
 		}
+		memmap_area = true;
 	} else if (tok) {
 		if (!reserve_mem_find_by_name(tok, &start, &size)) {
 			start = 0;
@@ -10784,7 +10787,10 @@ __init static void enable_instances(void)
 			continue;
 		}

-		addr = map_pages(start, size);
+		if (memmap_area)
+			addr = map_pages(start, size);
+		else
+			addr = (unsigned long)phys_to_virt(start);
 		if (addr) {
 			pr_info("Tracing: mapped boot instance %s at physical memory %pa of size 0x%lx\n",
 				name, &start, (unsigned long)size);
@@ -10811,10 +10817,13 @@ __init static void enable_instances(void)
 		update_printk_trace(tr);

 		/*
-		 * If start is set, then this is a mapped buffer, and
-		 * cannot be deleted by user space, so keep the reference
-		 * to it.
+		 * memmap'd buffers can not be freed.
 		 */
+		if (memmap_area) {
+			tr->flags |= TRACE_ARRAY_FL_MEMMAP;
+			tr->ref++;
+		}
+
 		if (start) {
 			tr->flags |= TRACE_ARRAY_FL_BOOT | TRACE_ARRAY_FL_LAST_BOOT;
 			tr->range_name = no_free_ptr(rname);
diff --git a/kernel/trace/trace.h b/kernel/trace/trace.h
index ab7c7a1930cc..9d9dcfad6269 100644
--- a/kernel/trace/trace.h
+++ b/kernel/trace/trace.h
@@ -446,6 +446,7 @@ enum {
 	TRACE_ARRAY_FL_BOOT		= BIT(1),
 	TRACE_ARRAY_FL_LAST_BOOT	= BIT(2),
 	TRACE_ARRAY_FL_MOD_INIT		= BIT(3),
+	TRACE_ARRAY_FL_MEMMAP		= BIT(4),
 };

 #ifdef CONFIG_MODULES
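
To make the distinction concrete: memory handed back by reserve_mem is
ordinary memory with an existing linear-map alias, whereas a memmap
carve-out is hidden from the buddy allocator and has no such alias, so
it must be mapped explicitly. A minimal sketch of the resulting policy
(map_pages() is the helper used by this series; the wrapper function
itself is illustrative only):

	#include <linux/io.h>	/* phys_to_virt() */

	/*
	 * Sketch: choose the mapping method based on how the physical
	 * range was reserved at boot.
	 */
	static void *boot_buffer_vaddr(phys_addr_t start, unsigned long size,
				       bool memmap_area)
	{
		if (memmap_area)
			/* memmap carve-out: not in the linear map */
			return (void *)(unsigned long)map_pages(start, size);
		/* reserve_mem: already part of the kernel's linear map */
		return phys_to_virt(start);
	}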
From patchwork Wed Apr 2 14:49:06 2025
X-Patchwork-Submitter: Steven Rostedt
X-Patchwork-Id: 14036115
Message-ID: <20250402144953.754618481@goodmis.org>
Date: Wed, 02 Apr 2025 10:49:06 -0400
From: Steven Rostedt
To: linux-kernel@vger.kernel.org, linux-trace-kernel@vger.kernel.org
Cc: Linus Torvalds, Masami Hiramatsu, Mark Rutland, Mathieu Desnoyers,
    Andrew Morton, Vincent Donnefort, Vlastimil Babka, Mike Rapoport,
    Jann Horn
Subject: [PATCH v6 3/4] tracing: Use vmap_page_range() to map memmap ring buffer
References: <20250402144903.993276623@goodmis.org>

From: Steven Rostedt

The code that maps the physical memory retrieved by memmap currently
allocates an array of pages to cover the physical memory and then calls
vmap() to map it to a virtual address. Instead of using this temporary
array of struct page descriptors, simply use vmap_page_range(), which
can directly map the contiguous physical memory to a virtual address.

Link: https://lore.kernel.org/all/CAHk-=whUOfVucfJRt7E0AH+GV41ELmS4wJqxHDnui6Giddfkzw@mail.gmail.com/

Suggested-by: Linus Torvalds
Signed-off-by: Steven Rostedt (Google)
---
 kernel/trace/trace.c | 33 ++++++++++++++++-----------------
 1 file changed, 16 insertions(+), 17 deletions(-)

diff --git a/kernel/trace/trace.c b/kernel/trace/trace.c
index e58b72e61573..f6531cd3176b 100644
--- a/kernel/trace/trace.c
+++ b/kernel/trace/trace.c
@@ -50,6 +50,7 @@
 #include <linux/irq_work.h>
 #include <linux/workqueue.h>
 #include <linux/sort.h>
+#include <linux/io.h>		/* vmap_page_range() */

 #include <asm/setup.h>		/* COMMAND_LINE_SIZE */
@@ -9796,29 +9797,27 @@ static int instance_mkdir(const char *name)
 	return ret;
 }

-static u64 map_pages(u64 start, u64 size)
+static u64 map_pages(unsigned long start, unsigned long size)
 {
-	struct page **pages;
-	phys_addr_t page_start;
-	unsigned int page_count;
-	unsigned int i;
-	void *vaddr;
-
-	page_count = DIV_ROUND_UP(size, PAGE_SIZE);
+	unsigned long vmap_start, vmap_end;
+	struct vm_struct *area;
+	int ret;

-	page_start = start;
-	pages = kmalloc_array(page_count, sizeof(struct page *), GFP_KERNEL);
-	if (!pages)
+	area = get_vm_area(size, VM_IOREMAP);
+	if (!area)
 		return 0;

-	for (i = 0; i < page_count; i++) {
-		phys_addr_t addr = page_start + i * PAGE_SIZE;
-		pages[i] = pfn_to_page(addr >> PAGE_SHIFT);
+	vmap_start = (unsigned long) area->addr;
+	vmap_end = vmap_start + size;
+
+	ret = vmap_page_range(vmap_start, vmap_end,
+			      start, pgprot_nx(PAGE_KERNEL));
+	if (ret < 0) {
+		free_vm_area(area);
+		return 0;
 	}

-	vaddr = vmap(pages, page_count, VM_MAP, PAGE_KERNEL);
-	kfree(pages);
-	return (u64)(unsigned long)vaddr;
+	return (u64)vmap_start;
 }
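
Condensed, the new mapping path amounts to the following stand-alone
sketch (the same calls as in the patch, with the assumptions made
explicit: CONFIG_MMU, and a page-aligned start/size as patch 1 now
enforces; the function name here is hypothetical):

	#include <linux/io.h>		/* vmap_page_range() */
	#include <linux/vmalloc.h>	/* get_vm_area(), free_vm_area() */
	#include <linux/pgtable.h>	/* pgprot_nx() */

	static void *map_phys_contig(phys_addr_t start, unsigned long size)
	{
		struct vm_struct *area;
		unsigned long va;

		/* Reserve a hole in vmalloc space; no struct page array needed. */
		area = get_vm_area(size, VM_IOREMAP);
		if (!area)
			return NULL;

		va = (unsigned long)area->addr;
		/* Map the contiguous physical range, non-executable. */
		if (vmap_page_range(va, va + size, start, pgprot_nx(PAGE_KERNEL)) < 0) {
			free_vm_area(area);
			return NULL;
		}
		return area->addr;
	}

Compared with the old code, this drops the temporary kmalloc_array() of
page pointers (512 entries for every 2MB mapped, with 4K pages) and the
pfn_to_page() loop entirely.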
From patchwork Wed Apr 2 14:49:07 2025
X-Patchwork-Submitter: Steven Rostedt
X-Patchwork-Id: 14036116
Message-ID: <20250402144953.920792197@goodmis.org>
Date: Wed, 02 Apr 2025 10:49:07 -0400
From: Steven Rostedt
To: linux-kernel@vger.kernel.org, linux-trace-kernel@vger.kernel.org
Cc: Linus Torvalds, Masami Hiramatsu, Mark Rutland, Mathieu Desnoyers,
    Andrew Morton, Vincent Donnefort, Vlastimil Babka, Mike Rapoport,
    Jann Horn, stable@vger.kernel.org
Subject: [PATCH v6 4/4] ring-buffer: Use flush_kernel_vmap_range() over flush_dcache_folio()
References: <20250402144903.993276623@goodmis.org>

From: Steven Rostedt

Some architectures do not have data cache coherency between user and
kernel space. For these architectures, the cache needs to be flushed on
both the kernel and user addresses so that user space can see the
updates the kernel has made.

Instead of using flush_dcache_folio() and playing with virt_to_folio()
within the call to that function, use flush_kernel_vmap_range(), which
takes the virtual address and does the work for those architectures
that need it.

This also fixes a bug where the flush of the reader page only flushed
one page. If the sub-buffer order is 1 or more, so that the sub-buffer
size is greater than a page, the rest of the sub-buffer content was
missed, as the "reader page" is not just a page but the size of a
sub-buffer.

Link: https://lore.kernel.org/all/CAG48ez3w0my4Rwttbc5tEbNsme6tc0mrSN95thjXUFaJ3aQ6SA@mail.gmail.com/

Cc: stable@vger.kernel.org
Fixes: 117c39200d9d7 ("ring-buffer: Introducing ring-buffer mapping functions")
Suggested-by: Jann Horn
Signed-off-by: Steven Rostedt (Google)
---
 kernel/trace/ring_buffer.c | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)

diff --git a/kernel/trace/ring_buffer.c b/kernel/trace/ring_buffer.c
index f25966b3a1fc..d4b0f7b55cce 100644
--- a/kernel/trace/ring_buffer.c
+++ b/kernel/trace/ring_buffer.c
@@ -6016,7 +6016,7 @@ static void rb_update_meta_page(struct ring_buffer_per_cpu *cpu_buffer)
 	meta->read = cpu_buffer->read;

 	/* Some archs do not have data cache coherency between kernel and user-space */
-	flush_dcache_folio(virt_to_folio(cpu_buffer->meta_page));
+	flush_kernel_vmap_range(cpu_buffer->meta_page, PAGE_SIZE);
 }

 static void
@@ -7319,7 +7319,8 @@ int ring_buffer_map_get_reader(struct trace_buffer *buffer, int cpu)

  out:
 	/* Some archs do not have data cache coherency between kernel and user-space */
-	flush_dcache_folio(virt_to_folio(cpu_buffer->reader_page->page));
+	flush_kernel_vmap_range(cpu_buffer->reader_page->page,
+				buffer->subbuf_size + BUF_PAGE_HDR_SIZE);

 	rb_update_meta_page(cpu_buffer);
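
The size argument is the important part of the fix: with a sub-buffer
order of N, each sub-buffer spans 2^N pages, so a folio-at-a-time flush
left the tail of larger sub-buffers stale on non-coherent architectures.
A sketch of the corrected flush (subbuf_size and BUF_PAGE_HDR_SIZE are
the ring buffer's own fields; the wrapper function is illustrative):

	#include <linux/highmem.h>	/* flush_kernel_vmap_range() */

	/*
	 * Sketch: flush the whole reader sub-buffer, header included.
	 * On cache-coherent architectures this compiles to a no-op.
	 */
	static void flush_reader_subbuf(void *subbuf, unsigned int subbuf_size,
					unsigned int hdr_size)
	{
		flush_kernel_vmap_range(subbuf, subbuf_size + hdr_size);
	}

For example, with 4K pages and sub-buffer order 2, the flush now covers
the full 16K sub-buffer (data plus header) rather than only its first
page.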