From patchwork Tue Mar 29 12:39:31 2022
X-Patchwork-Submitter: Alexander Potapenko
X-Patchwork-Id: 12794752
Date: Tue, 29 Mar 2022 14:39:31 +0200
In-Reply-To: <20220329124017.737571-1-glider@google.com>
Message-Id: <20220329124017.737571-3-glider@google.com>
Mime-Version: 1.0
References: <20220329124017.737571-1-glider@google.com>
X-Mailer: git-send-email 2.35.1.1021.g381101b075-goog
Subject: [PATCH v2 02/48] stackdepot: reserve 5 extra bits in depot_stack_handle_t
From: Alexander Potapenko
To: glider@google.com
Cc: Alexander Viro, Andrew Morton, Andrey Konovalov, Andy Lutomirski,
    Arnd Bergmann, Borislav Petkov, Christoph Hellwig, Christoph Lameter,
    David Rientjes, Dmitry Vyukov, Eric Dumazet, Greg Kroah-Hartman,
    Herbert Xu, Ilya Leoshkevich, Ingo Molnar, Jens Axboe, Joonsoo Kim,
    Kees Cook, Marco Elver, Mark Rutland, Matthew Wilcox,
    "Michael S. Tsirkin", Pekka Enberg, Peter Zijlstra, Petr Mladek,
    Steven Rostedt, Thomas Gleixner, Vasily Gorbik, Vegard Nossum,
    Vlastimil Babka, linux-mm@kvack.org, linux-arch@vger.kernel.org,
    linux-kernel@vger.kernel.org

Some users (currently only KMSAN) may want to use spare bits in
depot_stack_handle_t. Let them do so by adding @extra_bits to
__stack_depot_save() to store arbitrary flags, and providing
stack_depot_get_extra_bits() to retrieve those flags.

Signed-off-by: Alexander Potapenko
---
Link: https://linux-review.googlesource.com/id/I0587f6c777667864768daf07821d594bce6d8ff9
---
 include/linux/stackdepot.h |  8 ++++++++
 lib/stackdepot.c           | 29 ++++++++++++++++++++++++-----
 2 files changed, 32 insertions(+), 5 deletions(-)

diff --git a/include/linux/stackdepot.h b/include/linux/stackdepot.h
index 17f992fe6355b..fd641d266bead 100644
--- a/include/linux/stackdepot.h
+++ b/include/linux/stackdepot.h
@@ -14,9 +14,15 @@
 #include <linux/gfp.h>
 
 typedef u32 depot_stack_handle_t;
+/*
+ * Number of bits in the handle that stack depot doesn't use. Users may store
+ * information in them.
+ */
+#define STACK_DEPOT_EXTRA_BITS 5
 
 depot_stack_handle_t __stack_depot_save(unsigned long *entries,
 					unsigned int nr_entries,
+					unsigned int extra_bits,
 					gfp_t gfp_flags, bool can_alloc);
 
 /*
@@ -41,6 +47,8 @@ depot_stack_handle_t stack_depot_save(unsigned long *entries,
 unsigned int stack_depot_fetch(depot_stack_handle_t handle,
 			       unsigned long **entries);
 
+unsigned int stack_depot_get_extra_bits(depot_stack_handle_t handle);
+
 int stack_depot_snprint(depot_stack_handle_t handle, char *buf, size_t size,
 			int spaces);
 
diff --git a/lib/stackdepot.c b/lib/stackdepot.c
index bf5ba9af05009..6dc11a3b7b88e 100644
--- a/lib/stackdepot.c
+++ b/lib/stackdepot.c
@@ -42,7 +42,8 @@
 #define STACK_ALLOC_OFFSET_BITS (STACK_ALLOC_ORDER + PAGE_SHIFT - \
 					STACK_ALLOC_ALIGN)
 #define STACK_ALLOC_INDEX_BITS (DEPOT_STACK_BITS - \
-		STACK_ALLOC_NULL_PROTECTION_BITS - STACK_ALLOC_OFFSET_BITS)
+		STACK_ALLOC_NULL_PROTECTION_BITS - \
+		STACK_ALLOC_OFFSET_BITS - STACK_DEPOT_EXTRA_BITS)
 #define STACK_ALLOC_SLABS_CAP 8192
 #define STACK_ALLOC_MAX_SLABS \
 	(((1LL << (STACK_ALLOC_INDEX_BITS)) < STACK_ALLOC_SLABS_CAP) ? \
@@ -55,6 +56,7 @@ union handle_parts {
 		u32 slabindex : STACK_ALLOC_INDEX_BITS;
 		u32 offset : STACK_ALLOC_OFFSET_BITS;
 		u32 valid : STACK_ALLOC_NULL_PROTECTION_BITS;
+		u32 extra : STACK_DEPOT_EXTRA_BITS;
 	};
 };
 
@@ -73,6 +75,14 @@ static int next_slab_inited;
 static size_t depot_offset;
 static DEFINE_RAW_SPINLOCK(depot_lock);
 
+unsigned int stack_depot_get_extra_bits(depot_stack_handle_t handle)
+{
+	union handle_parts parts = { .handle = handle };
+
+	return parts.extra;
+}
+EXPORT_SYMBOL(stack_depot_get_extra_bits);
+
 static bool init_stack_slab(void **prealloc)
 {
 	if (!*prealloc)
@@ -136,6 +146,7 @@ depot_alloc_stack(unsigned long *entries, int size, u32 hash, void **prealloc)
 	stack->handle.slabindex = depot_index;
 	stack->handle.offset = depot_offset >> STACK_ALLOC_ALIGN;
 	stack->handle.valid = 1;
+	stack->handle.extra = 0;
 	memcpy(stack->entries, entries, flex_array_size(stack, entries, size));
 	depot_offset += required_size;
 
@@ -320,6 +331,7 @@ EXPORT_SYMBOL_GPL(stack_depot_fetch);
  *
  * @entries: Pointer to storage array
  * @nr_entries: Size of the storage array
+ * @extra_bits: Flags to store in unused bits of depot_stack_handle_t
  * @alloc_flags: Allocation gfp flags
  * @can_alloc: Allocate stack slabs (increased chance of failure if false)
  *
@@ -331,6 +343,10 @@ EXPORT_SYMBOL_GPL(stack_depot_fetch);
  * If the stack trace in @entries is from an interrupt, only the portion up to
  * interrupt entry is saved.
  *
+ * Additional opaque flags can be passed in @extra_bits, stored in the unused
+ * bits of the stack handle, and retrieved using stack_depot_get_extra_bits()
+ * without calling stack_depot_fetch().
+ *
  * Context: Any context, but setting @can_alloc to %false is required if
  *          alloc_pages() cannot be used from the current context. Currently
  *          this is the case from contexts where neither %GFP_ATOMIC nor
@@ -340,10 +356,11 @@ EXPORT_SYMBOL_GPL(stack_depot_fetch);
  */
 depot_stack_handle_t __stack_depot_save(unsigned long *entries,
 					unsigned int nr_entries,
+					unsigned int extra_bits,
 					gfp_t alloc_flags, bool can_alloc)
 {
 	struct stack_record *found = NULL, **bucket;
-	depot_stack_handle_t retval = 0;
+	union handle_parts retval = { .handle = 0 };
 	struct page *page = NULL;
 	void *prealloc = NULL;
 	unsigned long flags;
@@ -427,9 +444,11 @@ depot_stack_handle_t __stack_depot_save(unsigned long *entries,
 		free_pages((unsigned long)prealloc, STACK_ALLOC_ORDER);
 	}
 	if (found)
-		retval = found->handle.handle;
+		retval.handle = found->handle.handle;
 fast_exit:
-	return retval;
+	retval.extra = extra_bits;
+
+	return retval.handle;
 }
 EXPORT_SYMBOL_GPL(__stack_depot_save);
 
@@ -449,6 +468,6 @@ depot_stack_handle_t stack_depot_save(unsigned long *entries,
 			unsigned int nr_entries,
 			gfp_t alloc_flags)
 {
-	return __stack_depot_save(entries, nr_entries, alloc_flags, true);
+	return __stack_depot_save(entries, nr_entries, 0, alloc_flags, true);
 }
 EXPORT_SYMBOL_GPL(stack_depot_save);
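For illustration, a caller might use the new interface roughly as sketched
below. This is not part of the patch: the DEPOT_TAG_* values and the two
helper functions are made up for the example; only __stack_depot_save(),
stack_depot_get_extra_bits() and STACK_DEPOT_EXTRA_BITS come from the patch
itself.

/* Illustrative sketch only -- not part of this patch. */
#include <linux/kernel.h>
#include <linux/stackdepot.h>
#include <linux/stacktrace.h>

/* Hypothetical tag values; they must fit into STACK_DEPOT_EXTRA_BITS (5) bits. */
#define DEPOT_TAG_ALLOC	0x1
#define DEPOT_TAG_FREE	0x2

static depot_stack_handle_t save_tagged_stack(unsigned int tag)
{
	unsigned long entries[64];
	unsigned int nr_entries;

	nr_entries = stack_trace_save(entries, ARRAY_SIZE(entries), 0);
	/* Store @tag in the otherwise unused bits of the returned handle. */
	return __stack_depot_save(entries, nr_entries, tag, GFP_ATOMIC, false);
}

static unsigned int stack_tag(depot_stack_handle_t handle)
{
	/* Read the tag back without fetching the stack trace itself. */
	return stack_depot_get_extra_bits(handle);
}

Because the extra bits travel inside the handle, they can be read back cheaply
via stack_depot_get_extra_bits() without a stack_depot_fetch(), as the
kernel-doc added by this patch points out.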