From patchwork Thu Sep 23 10:47:59 2021
X-Patchwork-Submitter: Marco Elver
X-Patchwork-Id: 12512395
Date: Thu, 23 Sep 2021 12:47:59 +0200
Message-Id: <20210923104803.2620285-1-elver@google.com>
Subject: [PATCH v3 1/5] stacktrace: move filter_irq_stacks() to kernel/stacktrace.c
From: Marco Elver
To: elver@google.com, Andrew Morton
Cc: Alexander Potapenko, Dmitry Vyukov, Jann Horn, Aleksandr Nogikh,
 Taras Madan, linux-kernel@vger.kernel.org, linux-mm@kvack.org,
 kasan-dev@googlegroups.com

filter_irq_stacks() has little to do with the stackdepot implementation,
except that stackdepot users (such as KASAN) typically call it to trim a
stack trace before storing it. However, filter_irq_stacks() itself is not
useful without a stack trace as obtained by stack_trace_save() and
friends.

Therefore, move filter_irq_stacks() to kernel/stacktrace.c, so that new
users of filter_irq_stacks() do not have to start depending on STACKDEPOT
for this one function.

Signed-off-by: Marco Elver
Acked-by: Dmitry Vyukov
Acked-by: Alexander Potapenko
---
v3:
* Rebase to -next due to conflicting stackdepot changes.

v2:
* New patch.
---
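Note (editorial illustration, not part of the patch): a typical stackdepot
user combines these interfaces roughly as in the hypothetical helper
below; the three APIs are the kernel's, the wrapper function itself is
invented. After this patch, only the last step requires STACKDEPOT.

	#include <linux/stacktrace.h>
	#include <linux/stackdepot.h>

	/*
	 * Hypothetical sketch of a stackdepot user (e.g. KASAN): save a
	 * trace, trim everything past the first IRQ-entry frame, then
	 * deduplicate the trace via stackdepot.
	 */
	static depot_stack_handle_t save_filtered_stack(gfp_t flags)
	{
		unsigned long entries[64];
		unsigned int nr_entries;

		nr_entries = stack_trace_save(entries, ARRAY_SIZE(entries), 0);
		nr_entries = filter_irq_stacks(entries, nr_entries);
		return stack_depot_save(entries, nr_entries, flags);
	}
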
 include/linux/stackdepot.h |  2 --
 include/linux/stacktrace.h |  1 +
 kernel/stacktrace.c        | 30 ++++++++++++++++++++++++++++++
 lib/stackdepot.c           | 24 ------------------------
 4 files changed, 31 insertions(+), 26 deletions(-)

diff --git a/include/linux/stackdepot.h b/include/linux/stackdepot.h
index ee03f11bb51a..c34b55a6e554 100644
--- a/include/linux/stackdepot.h
+++ b/include/linux/stackdepot.h
@@ -30,8 +30,6 @@ int stack_depot_snprint(depot_stack_handle_t handle, char *buf, size_t size,
 
 void stack_depot_print(depot_stack_handle_t stack);
 
-unsigned int filter_irq_stacks(unsigned long *entries, unsigned int nr_entries);
-
 #ifdef CONFIG_STACKDEPOT
 int stack_depot_init(void);
 #else
diff --git a/include/linux/stacktrace.h b/include/linux/stacktrace.h
index 9edecb494e9e..bef158815e83 100644
--- a/include/linux/stacktrace.h
+++ b/include/linux/stacktrace.h
@@ -21,6 +21,7 @@ unsigned int stack_trace_save_tsk(struct task_struct *task,
 unsigned int stack_trace_save_regs(struct pt_regs *regs, unsigned long *store,
 				   unsigned int size, unsigned int skipnr);
 unsigned int stack_trace_save_user(unsigned long *store, unsigned int size);
+unsigned int filter_irq_stacks(unsigned long *entries, unsigned int nr_entries);
 
 /* Internal interfaces. Do not use in generic code */
 #ifdef CONFIG_ARCH_STACKWALK
diff --git a/kernel/stacktrace.c b/kernel/stacktrace.c
index 9f8117c7cfdd..9c625257023d 100644
--- a/kernel/stacktrace.c
+++ b/kernel/stacktrace.c
@@ -13,6 +13,7 @@
 #include <linux/export.h>
 #include <linux/kallsyms.h>
 #include <linux/stacktrace.h>
+#include <linux/interrupt.h>
 
 /**
  * stack_trace_print - Print the entries in the stack trace
@@ -373,3 +374,32 @@ unsigned int stack_trace_save_user(unsigned long *store, unsigned int size)
 #endif /* CONFIG_USER_STACKTRACE_SUPPORT */
 
 #endif /* !CONFIG_ARCH_STACKWALK */
+
+static inline bool in_irqentry_text(unsigned long ptr)
+{
+	return (ptr >= (unsigned long)&__irqentry_text_start &&
+		ptr < (unsigned long)&__irqentry_text_end) ||
+	       (ptr >= (unsigned long)&__softirqentry_text_start &&
+		ptr < (unsigned long)&__softirqentry_text_end);
+}
+
+/**
+ * filter_irq_stacks - Find first IRQ stack entry in trace
+ * @entries:	Pointer to stack trace array
+ * @nr_entries:	Number of entries in the storage array
+ *
+ * Return: Number of trace entries until IRQ stack starts.
+ */
+unsigned int filter_irq_stacks(unsigned long *entries, unsigned int nr_entries)
+{
+	unsigned int i;
+
+	for (i = 0; i < nr_entries; i++) {
+		if (in_irqentry_text(entries[i])) {
+			/* Include the irqentry function into the stack. */
+			return i + 1;
+		}
+	}
+	return nr_entries;
+}
+EXPORT_SYMBOL_GPL(filter_irq_stacks);
diff --git a/lib/stackdepot.c b/lib/stackdepot.c
index 69c8c9b0d8d7..b437ae79aca1 100644
--- a/lib/stackdepot.c
+++ b/lib/stackdepot.c
@@ -20,7 +20,6 @@
  */
 
 #include <linux/gfp.h>
-#include <linux/interrupt.h>
 #include <linux/jhash.h>
 #include <linux/kernel.h>
 #include <linux/mm.h>
@@ -417,26 +416,3 @@ depot_stack_handle_t stack_depot_save(unsigned long *entries,
 	return __stack_depot_save(entries, nr_entries, alloc_flags, true);
 }
 EXPORT_SYMBOL_GPL(stack_depot_save);
-
-static inline int in_irqentry_text(unsigned long ptr)
-{
-	return (ptr >= (unsigned long)&__irqentry_text_start &&
-		ptr < (unsigned long)&__irqentry_text_end) ||
-	       (ptr >= (unsigned long)&__softirqentry_text_start &&
-		ptr < (unsigned long)&__softirqentry_text_end);
-}
-
-unsigned int filter_irq_stacks(unsigned long *entries,
-			       unsigned int nr_entries)
-{
-	unsigned int i;
-
-	for (i = 0; i < nr_entries; i++) {
-		if (in_irqentry_text(entries[i])) {
-			/* Include the irqentry function into the stack. */
-			return i + 1;
-		}
-	}
-	return nr_entries;
-}
-EXPORT_SYMBOL_GPL(filter_irq_stacks);

From patchwork Thu Sep 23 10:48:00 2021
X-Patchwork-Submitter: Marco Elver
X-Patchwork-Id: 12512397
Date: Thu, 23 Sep 2021 12:48:00 +0200
In-Reply-To: <20210923104803.2620285-1-elver@google.com>
Message-Id: <20210923104803.2620285-2-elver@google.com>
References: <20210923104803.2620285-1-elver@google.com>
Subject: [PATCH v3 2/5] kfence: count unexpectedly skipped allocations
From: Marco Elver
To: elver@google.com, Andrew Morton
Cc: Alexander Potapenko, Dmitry Vyukov, Jann Horn, Aleksandr Nogikh,
 Taras Madan, linux-kernel@vger.kernel.org, linux-mm@kvack.org,
 kasan-dev@googlegroups.com

Maintain counters for allocations that are skipped because they are
incompatible (oversized or incompatible GFP flags) or because there is no
capacity left. This allows computing the fraction of allocations that
could not be serviced by KFENCE, which we expect to be rare.

Signed-off-by: Marco Elver
Reviewed-by: Dmitry Vyukov
Acked-by: Alexander Potapenko
---
v2:
* Do not count deadlock-avoidance skips.
---
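Note (editorial illustration, not part of the patch): with these counters,
the fraction of allocation attempts KFENCE could not service can be
derived as sketched below. The helper is hypothetical; it assumes the
pre-existing KFENCE_COUNTER_ALLOCS counter of total successful
allocations in mm/kfence/core.c.

	/*
	 * Hypothetical helper: percentage of allocation attempts that
	 * KFENCE had to skip, combining the two new counters with the
	 * (assumed) existing total-allocations counter.
	 */
	static long kfence_skipped_percent(void)
	{
		long skipped = atomic_long_read(&counters[KFENCE_COUNTER_SKIP_INCOMPAT]) +
			       atomic_long_read(&counters[KFENCE_COUNTER_SKIP_CAPACITY]);
		long total = atomic_long_read(&counters[KFENCE_COUNTER_ALLOCS]) + skipped;

		return total ? (100 * skipped) / total : 0;
	}
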
 mm/kfence/core.c | 16 +++++++++++++---
 1 file changed, 13 insertions(+), 3 deletions(-)

diff --git a/mm/kfence/core.c b/mm/kfence/core.c
index 7a97db8bc8e7..249d75b7e5ee 100644
--- a/mm/kfence/core.c
+++ b/mm/kfence/core.c
@@ -112,6 +112,8 @@ enum kfence_counter_id {
 	KFENCE_COUNTER_FREES,
 	KFENCE_COUNTER_ZOMBIES,
 	KFENCE_COUNTER_BUGS,
+	KFENCE_COUNTER_SKIP_INCOMPAT,
+	KFENCE_COUNTER_SKIP_CAPACITY,
 	KFENCE_COUNTER_COUNT,
 };
 static atomic_long_t counters[KFENCE_COUNTER_COUNT];
@@ -121,6 +123,8 @@ static const char *const counter_names[] = {
 	[KFENCE_COUNTER_FREES]		= "total frees",
 	[KFENCE_COUNTER_ZOMBIES]	= "zombie allocations",
 	[KFENCE_COUNTER_BUGS]		= "total bugs",
+	[KFENCE_COUNTER_SKIP_INCOMPAT]	= "skipped allocations (incompatible)",
+	[KFENCE_COUNTER_SKIP_CAPACITY]	= "skipped allocations (capacity)",
 };
 static_assert(ARRAY_SIZE(counter_names) == KFENCE_COUNTER_COUNT);
 
@@ -271,8 +275,10 @@ static void *kfence_guarded_alloc(struct kmem_cache *cache, size_t size, gfp_t g
 		list_del_init(&meta->list);
 	}
 	raw_spin_unlock_irqrestore(&kfence_freelist_lock, flags);
-	if (!meta)
+	if (!meta) {
+		atomic_long_inc(&counters[KFENCE_COUNTER_SKIP_CAPACITY]);
 		return NULL;
+	}
 
 	if (unlikely(!raw_spin_trylock_irqsave(&meta->lock, flags))) {
 		/*
@@ -740,8 +746,10 @@ void *__kfence_alloc(struct kmem_cache *s, size_t size, gfp_t flags)
 	 * Perform size check before switching kfence_allocation_gate, so that
 	 * we don't disable KFENCE without making an allocation.
 	 */
-	if (size > PAGE_SIZE)
+	if (size > PAGE_SIZE) {
+		atomic_long_inc(&counters[KFENCE_COUNTER_SKIP_INCOMPAT]);
 		return NULL;
+	}
 
 	/*
 	 * Skip allocations from non-default zones, including DMA. We cannot
@@ -749,8 +757,10 @@
 	 * properties (e.g. reside in DMAable memory).
 	 */
 	if ((flags & GFP_ZONEMASK) ||
-	    (s->flags & (SLAB_CACHE_DMA | SLAB_CACHE_DMA32)))
+	    (s->flags & (SLAB_CACHE_DMA | SLAB_CACHE_DMA32))) {
+		atomic_long_inc(&counters[KFENCE_COUNTER_SKIP_INCOMPAT]);
 		return NULL;
+	}
 
 	/*
 	 * allocation_gate only needs to become non-zero, so it doesn't make

From patchwork Thu Sep 23 10:48:01 2021
X-Patchwork-Submitter: Marco Elver
X-Patchwork-Id: 12512399
Date: Thu, 23 Sep 2021 12:48:01 +0200
In-Reply-To: <20210923104803.2620285-1-elver@google.com>
Message-Id: <20210923104803.2620285-3-elver@google.com>
References: <20210923104803.2620285-1-elver@google.com>
Subject: [PATCH v3 3/5] kfence: move saving stack trace of allocations into __kfence_alloc()
From: Marco Elver
To: elver@google.com, Andrew Morton
Cc: Alexander Potapenko, Dmitry Vyukov, Jann Horn, Aleksandr Nogikh,
 Taras Madan, linux-kernel@vger.kernel.org, linux-mm@kvack.org,
 kasan-dev@googlegroups.com

Move saving the stack trace of allocations into __kfence_alloc(), so that
the stack entries array can be used outside of kfence_guarded_alloc() and
we avoid potentially unwinding the stack multiple times.

Signed-off-by: Marco Elver
Reviewed-by: Dmitry Vyukov
Acked-by: Alexander Potapenko
---
v2:
* New patch.
---
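Note (editorial reading aid, condensed from the diff below, not the full
kernel code): the extended metadata_update_state() now has two call
modes, so the stack is unwound at most once per allocation.

	/* Allocation path: __kfence_alloc() unwinds once, passes entries down. */
	metadata_update_state(meta, KFENCE_OBJECT_ALLOCATED, stack_entries, num_stack_entries);

	/* Free path: no saved trace exists; NULL makes it unwind the stack itself. */
	metadata_update_state(meta, KFENCE_OBJECT_FREED, NULL, 0);
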
 mm/kfence/core.c | 35 ++++++++++++++++++++++++-----------
 1 file changed, 24 insertions(+), 11 deletions(-)

diff --git a/mm/kfence/core.c b/mm/kfence/core.c
index 249d75b7e5ee..db01814f8ff0 100644
--- a/mm/kfence/core.c
+++ b/mm/kfence/core.c
@@ -187,19 +187,26 @@ static inline unsigned long metadata_to_pageaddr(const struct kfence_metadata *m
  * Update the object's metadata state, including updating the alloc/free stacks
  * depending on the state transition.
  */
-static noinline void metadata_update_state(struct kfence_metadata *meta,
-					   enum kfence_object_state next)
+static noinline void
+metadata_update_state(struct kfence_metadata *meta, enum kfence_object_state next,
+		      unsigned long *stack_entries, size_t num_stack_entries)
 {
 	struct kfence_track *track =
 		next == KFENCE_OBJECT_FREED ? &meta->free_track : &meta->alloc_track;
 
 	lockdep_assert_held(&meta->lock);
 
-	/*
-	 * Skip over 1 (this) functions; noinline ensures we do not accidentally
-	 * skip over the caller by never inlining.
-	 */
-	track->num_stack_entries = stack_trace_save(track->stack_entries, KFENCE_STACK_DEPTH, 1);
+	if (stack_entries) {
+		memcpy(track->stack_entries, stack_entries,
+		       num_stack_entries * sizeof(stack_entries[0]));
+	} else {
+		/*
+		 * Skip over 1 (this) function; noinline ensures we do not
+		 * accidentally skip over the caller by never inlining.
+		 */
+		num_stack_entries = stack_trace_save(track->stack_entries, KFENCE_STACK_DEPTH, 1);
+	}
+	track->num_stack_entries = num_stack_entries;
 
 	track->pid = task_pid_nr(current);
 	track->cpu = raw_smp_processor_id();
 	track->ts_nsec = local_clock(); /* Same source as printk timestamps. */
@@ -261,7 +268,8 @@ static __always_inline void for_each_canary(const struct kfence_metadata *meta,
 	}
 }
 
-static void *kfence_guarded_alloc(struct kmem_cache *cache, size_t size, gfp_t gfp)
+static void *kfence_guarded_alloc(struct kmem_cache *cache, size_t size, gfp_t gfp,
+				  unsigned long *stack_entries, size_t num_stack_entries)
 {
 	struct kfence_metadata *meta = NULL;
 	unsigned long flags;
@@ -320,7 +328,7 @@ static void *kfence_guarded_alloc(struct kmem_cache *cache, size_t size, gfp_t g
 	addr = (void *)meta->addr;
 
 	/* Update remaining metadata. */
-	metadata_update_state(meta, KFENCE_OBJECT_ALLOCATED);
+	metadata_update_state(meta, KFENCE_OBJECT_ALLOCATED, stack_entries, num_stack_entries);
 	/* Pairs with READ_ONCE() in kfence_shutdown_cache(). */
 	WRITE_ONCE(meta->cache, cache);
 	meta->size = size;
@@ -400,7 +408,7 @@ static void kfence_guarded_free(void *addr, struct kfence_metadata *meta, bool z
 		memzero_explicit(addr, meta->size);
 
 	/* Mark the object as freed. */
-	metadata_update_state(meta, KFENCE_OBJECT_FREED);
+	metadata_update_state(meta, KFENCE_OBJECT_FREED, NULL, 0);
 
 	raw_spin_unlock_irqrestore(&meta->lock, flags);
 
@@ -742,6 +750,9 @@ void kfence_shutdown_cache(struct kmem_cache *s)
 
 void *__kfence_alloc(struct kmem_cache *s, size_t size, gfp_t flags)
 {
+	unsigned long stack_entries[KFENCE_STACK_DEPTH];
+	size_t num_stack_entries;
+
 	/*
 	 * Perform size check before switching kfence_allocation_gate, so that
 	 * we don't disable KFENCE without making an allocation.
@@ -786,7 +797,9 @@ void *__kfence_alloc(struct kmem_cache *s, size_t size, gfp_t flags)
 	if (!READ_ONCE(kfence_enabled))
 		return NULL;
 
-	return kfence_guarded_alloc(s, size, flags);
+	num_stack_entries = stack_trace_save(stack_entries, KFENCE_STACK_DEPTH, 0);
+
+	return kfence_guarded_alloc(s, size, flags, stack_entries, num_stack_entries);
 }
 
 size_t kfence_ksize(const void *addr)

From patchwork Thu Sep 23 10:48:02 2021
X-Patchwork-Submitter: Marco Elver
X-Patchwork-Id: 12512401
Date: Thu, 23 Sep 2021 12:48:02 +0200
In-Reply-To: <20210923104803.2620285-1-elver@google.com>
Message-Id: <20210923104803.2620285-4-elver@google.com>
References: <20210923104803.2620285-1-elver@google.com>
Subject: [PATCH v3 4/5] kfence: limit currently covered allocations when pool nearly full
From: Marco Elver
To: elver@google.com, Andrew Morton
Cc: Alexander Potapenko, Dmitry Vyukov, Jann Horn, Aleksandr Nogikh,
 Taras Madan, linux-kernel@vger.kernel.org, linux-mm@kvack.org,
 kasan-dev@googlegroups.com

One of KFENCE's main design principles is that with increasing uptime,
allocation coverage increases sufficiently to detect previously
undetected bugs. We have observed that frequent long-lived allocations of
the same source (e.g. pagecache) tend to permanently fill up the KFENCE
pool with increasing system uptime, thus breaking the above requirement.

The workaround thus far had been increasing the sample interval and/or
the KFENCE pool size, but neither is a reliable solution.

To ensure diverse coverage of allocations, limit currently covered
allocations of the same source once pool utilization reaches 75%
(configurable via `kfence.skip_covered_thresh`) or above. The effect is
retaining reasonable allocation coverage when the pool is close to full.
A side-effect is that this also limits frequent long-lived allocations
of the same source filling up the pool permanently.

Uniqueness of an allocation for coverage purposes is based on its
(partial) allocation stack trace (the source). A Counting Bloom filter is
used to check if an allocation is covered; if so, the allocation is
skipped by KFENCE.

Testing was done using:

  (a) a synthetic workload that performs frequent long-lived allocations
      (default config values; sample_interval=1; num_objects=63), and

  (b) normal desktop workloads on an otherwise idle machine where the
      problem was first reported after a few days of uptime (default
      config values).
In both test cases the sampled allocation rate no longer drops to zero at
any point. For (b) we observe (after 2 days of uptime) 15% unique
allocations in the pool, 77% pool utilization, and 20% "skipped
allocations (covered)".

Signed-off-by: Marco Elver
Reviewed-by: Dmitry Vyukov
Acked-by: Alexander Potapenko
---
v3:
* Remove unneeded !alloc_stack_hash checks.
* Remove unneeded meta->alloc_stack_hash=0 in kfence_guarded_free().

v2:
* Switch to counting bloom filter to guarantee currently covered
  allocations being skipped.
* Use a module param for skip_covered threshold.
* Use kfence pool address as hash entropy.
* Use filter_irq_stacks().
---
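Note (editorial illustration, not part of the patch): for intuition, with
the default CONFIG_KFENCE_NUM_OBJECTS=255 the filter has
ALLOC_COVERED_SIZE = 1 << (ilog2(255) + 2) = 512 counters, so at 15%
unique allocations (about 38 traces) P = (1 - e^(-2*38/512))^2 is about
0.02, and at 85% (about 217 traces) about 0.33, which is the 0.02-0.33
range quoted in the comment below. A minimal userspace sketch of the same
Counting Bloom filter scheme follows; the HNUM/HNEXT/sizing constants
mirror the patch, while plain ints stand in for the kernel's atomics and
jhash() is not needed for the demonstration.

	#include <stdbool.h>
	#include <stdio.h>

	#define ALLOC_COVERED_HNUM	2
	#define ALLOC_COVERED_SIZE	512	/* 1 << (ilog2(255) + 2), as in the patch */
	#define ALLOC_COVERED_HNEXT(h)	(1664525 * (h) + 1013904223)	/* LCG step */
	#define ALLOC_COVERED_MASK	(ALLOC_COVERED_SIZE - 1)

	static int alloc_covered[ALLOC_COVERED_SIZE];

	/* Add (val=1) or remove (val=-1) a stack-trace hash from the filter. */
	static void covered_add(unsigned int hash, int val)
	{
		for (int i = 0; i < ALLOC_COVERED_HNUM; i++) {
			alloc_covered[hash & ALLOC_COVERED_MASK] += val;
			hash = ALLOC_COVERED_HNEXT(hash);
		}
	}

	/* True if all HNUM counters for this hash are non-zero. */
	static bool covered_contains(unsigned int hash)
	{
		for (int i = 0; i < ALLOC_COVERED_HNUM; i++) {
			if (!alloc_covered[hash & ALLOC_COVERED_MASK])
				return false;
			hash = ALLOC_COVERED_HNEXT(hash);
		}
		return true;
	}

	int main(void)
	{
		covered_add(0xdeadbeef, 1);				/* "allocate" */
		printf("%d\n", covered_contains(0xdeadbeef));	/* 1: covered */
		covered_add(0xdeadbeef, -1);				/* "free" */
		printf("%d\n", covered_contains(0xdeadbeef));	/* 0 (barring collisions) */
		return 0;
	}

Unlike a plain Bloom filter, the counters allow removal on free, so an
allocation source stops being "covered" once its objects are gone.
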
 mm/kfence/core.c   | 103 ++++++++++++++++++++++++++++++++++++++++++++-
 mm/kfence/kfence.h |   2 +
 2 files changed, 103 insertions(+), 2 deletions(-)

diff --git a/mm/kfence/core.c b/mm/kfence/core.c
index db01814f8ff0..58a0f6f1acc5 100644
--- a/mm/kfence/core.c
+++ b/mm/kfence/core.c
@@ -11,11 +11,13 @@
 #include <linux/bug.h>
 #include <linux/debugfs.h>
 #include <linux/irq_work.h>
+#include <linux/jhash.h>
 #include <linux/kcsan-checks.h>
 #include <linux/kfence.h>
 #include <linux/kmemleak.h>
 #include <linux/list.h>
 #include <linux/lockdep.h>
+#include <linux/log2.h>
 #include <linux/memblock.h>
 #include <linux/moduleparam.h>
 #include <linux/random.h>
@@ -82,6 +84,10 @@ static const struct kernel_param_ops sample_interval_param_ops = {
 };
 module_param_cb(sample_interval, &sample_interval_param_ops, &kfence_sample_interval, 0600);
 
+/* Pool usage% threshold when currently covered allocations are skipped. */
+static unsigned long kfence_skip_covered_thresh __read_mostly = 75;
+module_param_named(skip_covered_thresh, kfence_skip_covered_thresh, ulong, 0644);
+
 /* The pool of pages used for guard pages and objects. */
 char *__kfence_pool __ro_after_init;
 EXPORT_SYMBOL(__kfence_pool); /* Export for test modules. */
@@ -105,6 +111,25 @@ DEFINE_STATIC_KEY_FALSE(kfence_allocation_key);
 /* Gates the allocation, ensuring only one succeeds in a given period. */
 atomic_t kfence_allocation_gate = ATOMIC_INIT(1);
 
+/*
+ * A Counting Bloom filter of allocation coverage: limits currently covered
+ * allocations of the same source filling up the pool.
+ *
+ * Assuming a range of 15%-85% unique allocations in the pool at any point in
+ * time, the below parameters provide a probability of 0.02-0.33 for false
+ * positive hits respectively:
+ *
+ *	P(alloc_traces) = (1 - e^(-HNUM * (alloc_traces / SIZE)))^HNUM
+ */
+#define ALLOC_COVERED_HNUM	2
+#define ALLOC_COVERED_SIZE	(1 << (const_ilog2(CONFIG_KFENCE_NUM_OBJECTS) + 2))
+#define ALLOC_COVERED_HNEXT(h)	(1664525 * (h) + 1013904223)
+#define ALLOC_COVERED_MASK	(ALLOC_COVERED_SIZE - 1)
+static atomic_t alloc_covered[ALLOC_COVERED_SIZE];
+
+/* Stack depth used to determine uniqueness of an allocation. */
+#define UNIQUE_ALLOC_STACK_DEPTH 8UL
+
 /* Statistics counters for debugfs. */
 enum kfence_counter_id {
 	KFENCE_COUNTER_ALLOCATED,
@@ -114,6 +139,7 @@ enum kfence_counter_id {
 	KFENCE_COUNTER_BUGS,
 	KFENCE_COUNTER_SKIP_INCOMPAT,
 	KFENCE_COUNTER_SKIP_CAPACITY,
+	KFENCE_COUNTER_SKIP_COVERED,
 	KFENCE_COUNTER_COUNT,
 };
 static atomic_long_t counters[KFENCE_COUNTER_COUNT];
@@ -125,11 +151,60 @@ static const char *const counter_names[] = {
 	[KFENCE_COUNTER_BUGS]		= "total bugs",
 	[KFENCE_COUNTER_SKIP_INCOMPAT]	= "skipped allocations (incompatible)",
 	[KFENCE_COUNTER_SKIP_CAPACITY]	= "skipped allocations (capacity)",
+	[KFENCE_COUNTER_SKIP_COVERED]	= "skipped allocations (covered)",
 };
 static_assert(ARRAY_SIZE(counter_names) == KFENCE_COUNTER_COUNT);
 
 /* === Internals ============================================================ */
 
+static inline bool should_skip_covered(void)
+{
+	unsigned long thresh = (CONFIG_KFENCE_NUM_OBJECTS * kfence_skip_covered_thresh) / 100;
+
+	return atomic_long_read(&counters[KFENCE_COUNTER_ALLOCATED]) > thresh;
+}
+
+static u32 get_alloc_stack_hash(unsigned long *stack_entries, size_t num_entries)
+{
+	/* Some randomness across reboots / different machines. */
+	u32 seed = (u32)((unsigned long)__kfence_pool >> (BITS_PER_LONG - 32));
+
+	num_entries = min(num_entries, UNIQUE_ALLOC_STACK_DEPTH);
+	num_entries = filter_irq_stacks(stack_entries, num_entries);
+	return jhash(stack_entries, num_entries * sizeof(stack_entries[0]), seed);
+}
+
+/*
+ * Adds (or subtracts) count @val for allocation stack trace hash
+ * @alloc_stack_hash from Counting Bloom filter.
+ */
+static void alloc_covered_add(u32 alloc_stack_hash, int val)
+{
+	int i;
+
+	for (i = 0; i < ALLOC_COVERED_HNUM; i++) {
+		atomic_add(val, &alloc_covered[alloc_stack_hash & ALLOC_COVERED_MASK]);
+		alloc_stack_hash = ALLOC_COVERED_HNEXT(alloc_stack_hash);
+	}
+}
+
+/*
+ * Returns true if the allocation stack trace hash @alloc_stack_hash is
+ * currently contained (non-zero count) in Counting Bloom filter.
+ */
+static bool alloc_covered_contains(u32 alloc_stack_hash)
+{
+	int i;
+
+	for (i = 0; i < ALLOC_COVERED_HNUM; i++) {
+		if (!atomic_read(&alloc_covered[alloc_stack_hash & ALLOC_COVERED_MASK]))
+			return false;
+		alloc_stack_hash = ALLOC_COVERED_HNEXT(alloc_stack_hash);
+	}
+
+	return true;
+}
+
 static bool kfence_protect(unsigned long addr)
 {
 	return !KFENCE_WARN_ON(!kfence_protect_page(ALIGN_DOWN(addr, PAGE_SIZE), true));
@@ -269,7 +344,8 @@ static __always_inline void for_each_canary(const struct kfence_metadata *meta,
 }
 
 static void *kfence_guarded_alloc(struct kmem_cache *cache, size_t size, gfp_t gfp,
-				  unsigned long *stack_entries, size_t num_stack_entries)
+				  unsigned long *stack_entries, size_t num_stack_entries,
+				  u32 alloc_stack_hash)
 {
 	struct kfence_metadata *meta = NULL;
 	unsigned long flags;
@@ -332,6 +408,8 @@ static void *kfence_guarded_alloc(struct kmem_cache *cache, size_t size, gfp_t g
 	/* Pairs with READ_ONCE() in kfence_shutdown_cache(). */
 	WRITE_ONCE(meta->cache, cache);
 	meta->size = size;
+	meta->alloc_stack_hash = alloc_stack_hash;
+
 	for_each_canary(meta, set_canary_byte);
 
 	/* Set required struct page fields. */
@@ -344,6 +422,8 @@ static void *kfence_guarded_alloc(struct kmem_cache *cache, size_t size, gfp_t g
 
 	raw_spin_unlock_irqrestore(&meta->lock, flags);
 
+	alloc_covered_add(alloc_stack_hash, 1);
+
 	/* Memory initialization. */
 
 	/*
@@ -412,6 +492,8 @@ static void kfence_guarded_free(void *addr, struct kfence_metadata *meta, bool z
 
 	raw_spin_unlock_irqrestore(&meta->lock, flags);
 
+	alloc_covered_add(meta->alloc_stack_hash, -1);
+
 	/* Protect to detect use-after-frees. */
 	kfence_protect((unsigned long)addr);
 
@@ -752,6 +834,7 @@ void *__kfence_alloc(struct kmem_cache *s, size_t size, gfp_t flags)
 {
 	unsigned long stack_entries[KFENCE_STACK_DEPTH];
 	size_t num_stack_entries;
+	u32 alloc_stack_hash;
 
 	/*
 	 * Perform size check before switching kfence_allocation_gate, so that
@@ -799,7 +882,23 @@ void *__kfence_alloc(struct kmem_cache *s, size_t size, gfp_t flags)
 
 	num_stack_entries = stack_trace_save(stack_entries, KFENCE_STACK_DEPTH, 0);
 
-	return kfence_guarded_alloc(s, size, flags, stack_entries, num_stack_entries);
+	/*
+	 * Do expensive check for coverage of allocation in slow-path after
+	 * allocation_gate has already become non-zero, even though it might
+	 * mean not making any allocation within a given sample interval.
+	 *
+	 * This ensures reasonable allocation coverage when the pool is almost
+	 * full, including avoiding long-lived allocations of the same source
+	 * filling up the pool (e.g. pagecache allocations).
+	 */
+	alloc_stack_hash = get_alloc_stack_hash(stack_entries, num_stack_entries);
+	if (should_skip_covered() && alloc_covered_contains(alloc_stack_hash)) {
+		atomic_long_inc(&counters[KFENCE_COUNTER_SKIP_COVERED]);
+		return NULL;
+	}
+
+	return kfence_guarded_alloc(s, size, flags, stack_entries, num_stack_entries,
+				    alloc_stack_hash);
 }
 
 size_t kfence_ksize(const void *addr)
diff --git a/mm/kfence/kfence.h b/mm/kfence/kfence.h
index c1f23c61e5f9..2a2d5de9d379 100644
--- a/mm/kfence/kfence.h
+++ b/mm/kfence/kfence.h
@@ -87,6 +87,8 @@ struct kfence_metadata {
 	/* Allocation and free stack information. */
 	struct kfence_track alloc_track;
 	struct kfence_track free_track;
+	/* For updating alloc_covered on frees. */
+	u32 alloc_stack_hash;
 };
 
 extern struct kfence_metadata kfence_metadata[CONFIG_KFENCE_NUM_OBJECTS];

From patchwork Thu Sep 23 10:48:03 2021
X-Patchwork-Submitter: Marco Elver
X-Patchwork-Id: 12512403
Date: Thu, 23 Sep 2021 12:48:03 +0200
In-Reply-To: <20210923104803.2620285-1-elver@google.com>
Message-Id: <20210923104803.2620285-5-elver@google.com>
References: <20210923104803.2620285-1-elver@google.com>
Subject: [PATCH v3 5/5] kfence: add note to documentation about skipping covered allocations
From: Marco Elver
To: elver@google.com, Andrew Morton
Cc: Alexander Potapenko, Dmitry Vyukov, Jann Horn, Aleksandr Nogikh,
 Taras Madan, linux-kernel@vger.kernel.org, linux-mm@kvack.org,
 kasan-dev@googlegroups.com

Add a note briefly mentioning the new policy about "skipping currently
covered allocations if pool close to full."
Since this has a notable impact on KFENCE's bug-detection ability on
systems with long uptimes, the feature is worth pointing out.

Signed-off-by: Marco Elver
Reviewed-by: Dmitry Vyukov
Acked-by: Alexander Potapenko
---
v2:
* Rewrite.
---
 Documentation/dev-tools/kfence.rst | 11 +++++++++++
 1 file changed, 11 insertions(+)

diff --git a/Documentation/dev-tools/kfence.rst b/Documentation/dev-tools/kfence.rst
index 0fbe3308bf37..d45f952986ae 100644
--- a/Documentation/dev-tools/kfence.rst
+++ b/Documentation/dev-tools/kfence.rst
@@ -269,6 +269,17 @@ tail of KFENCE's freelist, so that the least recently freed objects are reused
 first, and the chances of detecting use-after-frees of recently freed objects
 is increased.
 
+If pool utilization reaches 75% (default) or above, to reduce the risk of the
+pool eventually being fully occupied by allocated objects yet ensure diverse
+coverage of allocations, KFENCE limits currently covered allocations of the
+same source from further filling up the pool. The "source" of an allocation is
+based on its partial allocation stack trace. A side-effect is that this also
+limits frequent long-lived allocations (e.g. pagecache) of the same source
+filling up the pool permanently, which is the most common risk for the pool
+becoming full and the sampled allocation rate dropping to zero. The threshold
+at which to start limiting currently covered allocations can be configured via
+the boot parameter ``kfence.skip_covered_thresh`` (pool usage%).
+
 Interface
 ---------