From patchwork Wed Sep 13 17:14:35 2023
X-Patchwork-Submitter: andrey.konovalov@linux.dev
X-Patchwork-Id: 13383623
From: andrey.konovalov@linux.dev
To: Marco Elver, Alexander Potapenko
Cc: Andrey Konovalov, Dmitry Vyukov, Vlastimil Babka,
    kasan-dev@googlegroups.com, Evgenii Stepanov, Oscar Salvador,
    Andrew Morton, linux-mm@kvack.org, linux-kernel@vger.kernel.org,
    Andrey Konovalov
Subject: [PATCH v2 10/19] lib/stackdepot: store free stack records in a freelist
Date: Wed, 13 Sep 2023 19:14:35 +0200
From: Andrey Konovalov

Instead of using the global pool_offset variable to find a free slot
when storing a new stack record, maintain a freelist of free slots
within the allocated stack pools.

A global next_stack variable is used as the head of the freelist, and
the next field in the stack_record struct is reused as a freelist link
(when the record is not in the freelist, this field is used as a link
in the hash table).

This is a preparatory patch for implementing the eviction of stack
records from the stack depot.

Signed-off-by: Andrey Konovalov
Reviewed-by: Alexander Potapenko

---

Changes v1->v2:
- Fix out-of-bounds when initializing a pool.
---
 lib/stackdepot.c | 131 +++++++++++++++++++++++++++++------------------
 1 file changed, 82 insertions(+), 49 deletions(-)

diff --git a/lib/stackdepot.c b/lib/stackdepot.c
index 81d8733cdbed..ca8e6fee0cb4 100644
--- a/lib/stackdepot.c
+++ b/lib/stackdepot.c
@@ -54,8 +54,8 @@ union handle_parts {
 };
 
 struct stack_record {
-	struct stack_record *next;	/* Link in the hash table */
-	u32 hash;			/* Hash in the hash table */
+	struct stack_record *next;	/* Link in hash table or freelist */
+	u32 hash;			/* Hash in hash table */
 	u32 size;			/* Number of stored frames */
 	union handle_parts handle;
 	unsigned long entries[CONFIG_STACKDEPOT_MAX_FRAMES];	/* Frames */
@@ -87,10 +87,10 @@ static unsigned int stack_hash_mask;
 static void *stack_pools[DEPOT_MAX_POOLS];
 /* Newly allocated pool that is not yet added to stack_pools. */
 static void *new_pool;
-/* Currently used pool in stack_pools. */
-static int pool_index;
-/* Offset to the unused space in the currently used pool. */
-static size_t pool_offset;
+/* Number of pools in stack_pools. */
+static int pools_num;
+/* Next stack in the freelist of stack records within stack_pools. */
+static struct stack_record *next_stack;
 /* Lock that protects the variables above. */
 static DEFINE_RAW_SPINLOCK(pool_lock);
 /*
@@ -220,6 +220,42 @@ int stack_depot_init(void)
 }
 EXPORT_SYMBOL_GPL(stack_depot_init);
 
+/* Initializes a stack depot pool. */
+static void depot_init_pool(void *pool)
+{
+	const int records_in_pool = DEPOT_POOL_SIZE / DEPOT_STACK_RECORD_SIZE;
+	int i, offset;
+
+	/* Initialize handles and link stack records to each other. */
+	for (i = 0, offset = 0;
+	     offset <= DEPOT_POOL_SIZE - DEPOT_STACK_RECORD_SIZE;
+	     i++, offset += DEPOT_STACK_RECORD_SIZE) {
+		struct stack_record *stack = pool + offset;
+
+		stack->handle.pool_index = pools_num;
+		stack->handle.offset = offset >> DEPOT_STACK_ALIGN;
+		stack->handle.extra = 0;
+
+		if (i < records_in_pool - 1)
+			stack->next = (void *)stack + DEPOT_STACK_RECORD_SIZE;
+		else
+			stack->next = NULL;
+	}
+
+	/* Link stack records into the freelist. */
+	WARN_ON(next_stack);
+	next_stack = pool;
+
+	/* Save reference to the pool to be used by depot_fetch_stack. */
+	stack_pools[pools_num] = pool;
+
+	/*
+	 * WRITE_ONCE pairs with potential concurrent read in
+	 * depot_fetch_stack.
+	 */
+	WRITE_ONCE(pools_num, pools_num + 1);
+}
+
 /* Keeps the preallocated memory to be used for a new stack depot pool. */
 static void depot_keep_new_pool(void **prealloc)
 {
@@ -234,7 +270,7 @@ static void depot_keep_new_pool(void **prealloc)
 	 * Use the preallocated memory for the new pool
 	 * as long as we do not exceed the maximum number of pools.
 	 */
-	if (pool_index + 1 < DEPOT_MAX_POOLS) {
+	if (pools_num < DEPOT_MAX_POOLS) {
 		new_pool = *prealloc;
 		*prealloc = NULL;
 	}
@@ -249,45 +285,42 @@ static void depot_keep_new_pool(void **prealloc)
 }
 
 /* Updates refences to the current and the next stack depot pools. */
-static bool depot_update_pools(size_t required_size, void **prealloc)
+static bool depot_update_pools(void **prealloc)
 {
-	/* Check if there is not enough space in the current pool. */
-	if (unlikely(pool_offset + required_size > DEPOT_POOL_SIZE)) {
-		/* Bail out if we reached the pool limit. */
-		if (unlikely(pool_index + 1 >= DEPOT_MAX_POOLS)) {
-			WARN_ONCE(1, "Stack depot reached limit capacity");
-			return false;
-		}
+	/* Check if we still have objects in the freelist. */
+	if (next_stack)
+		goto out_keep_prealloc;
 
-		/*
-		 * Move on to the new pool.
-		 * WRITE_ONCE pairs with potential concurrent read in
-		 * stack_depot_fetch.
-		 */
-		WRITE_ONCE(pool_index, pool_index + 1);
-		stack_pools[pool_index] = new_pool;
+	/* Check if we have a new pool saved and use it. */
+	if (new_pool) {
+		depot_init_pool(new_pool);
 		new_pool = NULL;
-		pool_offset = 0;
 
-		/*
-		 * If the maximum number of pools is not reached, take note
-		 * that yet another new pool needs to be allocated.
-		 * smp_store_release pairs with smp_load_acquire in
-		 * stack_depot_save.
-		 */
-		if (pool_index + 1 < DEPOT_MAX_POOLS)
+		/* Take note that we might need a new new_pool. */
+		if (pools_num < DEPOT_MAX_POOLS)
 			smp_store_release(&new_pool_required, 1);
+
+		/* Try keeping the preallocated memory for new_pool. */
+		goto out_keep_prealloc;
+	}
+
+	/* Bail out if we reached the pool limit. */
+	if (unlikely(pools_num >= DEPOT_MAX_POOLS)) {
+		WARN_ONCE(1, "Stack depot reached limit capacity");
+		return false;
 	}
 
-	/* Check if the current pool is not yet allocated. */
-	if (*prealloc && stack_pools[pool_index] == NULL) {
-		/* Use the preallocated memory for the current pool. */
-		stack_pools[pool_index] = *prealloc;
+	/* Check if we have preallocated memory and use it. */
+	if (*prealloc) {
+		depot_init_pool(*prealloc);
 		*prealloc = NULL;
 		return true;
 	}
 
-	/* Otherwise, try using the preallocated memory for a new pool. */
+	return false;
+
+out_keep_prealloc:
+	/* Keep the preallocated memory for a new pool if required. */
 	if (*prealloc)
 		depot_keep_new_pool(prealloc);
 	return true;
@@ -298,35 +331,35 @@ static struct stack_record *
 depot_alloc_stack(unsigned long *entries, int size, u32 hash, void **prealloc)
 {
 	struct stack_record *stack;
-	size_t required_size = DEPOT_STACK_RECORD_SIZE;
 
 	/* Update current and new pools if required and possible. */
-	if (!depot_update_pools(required_size, prealloc))
+	if (!depot_update_pools(prealloc))
 		return NULL;
 
-	/* Check if we have a pool to save the stack trace. */
-	if (stack_pools[pool_index] == NULL)
+	/* Check if we have a stack record to save the stack trace. */
+	stack = next_stack;
+	if (!stack)
 		return NULL;
 
+	/* Advance the freelist. */
+	next_stack = stack->next;
+
 	/* Limit number of saved frames to CONFIG_STACKDEPOT_MAX_FRAMES. */
 	if (size > CONFIG_STACKDEPOT_MAX_FRAMES)
 		size = CONFIG_STACKDEPOT_MAX_FRAMES;
 
 	/* Save the stack trace. */
-	stack = stack_pools[pool_index] + pool_offset;
+	stack->next = NULL;
 	stack->hash = hash;
 	stack->size = size;
-	stack->handle.pool_index = pool_index;
-	stack->handle.offset = pool_offset >> DEPOT_STACK_ALIGN;
-	stack->handle.extra = 0;
+	/* stack->handle is already filled in by depot_init_pool. */
 	memcpy(stack->entries, entries, flex_array_size(stack, entries, size));
-	pool_offset += required_size;
 
 	/*
 	 * Let KMSAN know the stored stack record is initialized. This shall
 	 * prevent false positive reports if instrumented code accesses it.
	 */
-	kmsan_unpoison_memory(stack, required_size);
+	kmsan_unpoison_memory(stack, DEPOT_STACK_RECORD_SIZE);
 
 	return stack;
 }
@@ -336,16 +369,16 @@ static struct stack_record *depot_fetch_stack(depot_stack_handle_t handle)
 	union handle_parts parts = { .handle = handle };
 	/*
	 * READ_ONCE pairs with potential concurrent write in
-	 * depot_update_pools.
+	 * depot_init_pool.
	 */
-	int pool_index_cached = READ_ONCE(pool_index);
+	int pools_num_cached = READ_ONCE(pools_num);
 	void *pool;
 	size_t offset = parts.offset << DEPOT_STACK_ALIGN;
 	struct stack_record *stack;
 
-	if (parts.pool_index > pool_index_cached) {
+	if (parts.pool_index > pools_num_cached) {
 		WARN(1, "pool index %d out of bounds (%d) for stack id %08x\n",
-		     parts.pool_index, pool_index_cached, handle);
+		     parts.pool_index, pools_num_cached, handle);
 		return NULL;
 	}