From patchwork Tue Oct 19 10:25:23 2021
X-Patchwork-Submitter: Marco Elver
X-Patchwork-Id: 12569407
Date: Tue, 19 Oct 2021 12:25:23 +0200
Message-Id: <20211019102524.2807208-1-elver@google.com>
Subject: [PATCH 1/2] kfence: always use static branches to guard kfence_alloc()
From: Marco Elver
To: elver@google.com, Andrew Morton
Cc: Alexander Potapenko, Dmitry Vyukov, Jann Horn,
    linux-kernel@vger.kernel.org, linux-mm@kvack.org,
    kasan-dev@googlegroups.com
Regardless of KFENCE mode (CONFIG_KFENCE_STATIC_KEYS: either using static
keys to gate allocations, or using a simple dynamic branch), always use a
static branch to avoid the dynamic branch in kfence_alloc() if KFENCE was
disabled at boot. For CONFIG_KFENCE_STATIC_KEYS=n, this now avoids the
dynamic branch if KFENCE was disabled at boot.

To simplify, this also unifies the location where kfence_allocation_gate
is read-checked to just be inline in kfence_alloc().
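[Editor's note: the following is an illustrative sketch, not part of the patch. A real static key patches the instruction stream at runtime and cannot be reproduced in userspace, so the plain bool below only stands in for the branch's semantics; all names (sketch_kfence_alloc_fastpath, allocation_gate_closed, etc.) are hypothetical.]

```c
/*
 * Userspace analogue of the unified kfence_alloc() fast path after this
 * patch: first a cheap enabled check (the static branch stand-in), then
 * the per-sample-period gate read, and only then the slow path.
 */
#include <stdatomic.h>
#include <stdbool.h>

/* Stand-in for kfence_allocation_key (a real static key in the kernel). */
static bool kfence_enabled_key;

/* Mirrors kfence_allocation_gate: non-zero = period already consumed. */
static atomic_int allocation_gate_closed = 1;	/* closed at "boot" */

/* Returns true if this call would reach __kfence_alloc(). */
static bool sketch_kfence_alloc_fastpath(void)
{
	if (!kfence_enabled_key)	/* static_branch_unlikely() analogue */
		return false;
	if (atomic_load(&allocation_gate_closed))	/* gate consumed */
		return false;
	return true;			/* would call __kfence_alloc() */
}

/* The kernel's timer worker would periodically reopen the gate. */
static void sketch_reset_gate(void)
{
	atomic_store(&allocation_gate_closed, 0);
}
```

With the static branch disabled, callers return immediately without ever touching the atomic gate, which is the point of the patch for the CONFIG_KFENCE_STATIC_KEYS=n case.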
Signed-off-by: Marco Elver
---
 include/linux/kfence.h | 21 +++++++++++----------
 mm/kfence/core.c       | 16 +++++++---------
 2 files changed, 18 insertions(+), 19 deletions(-)

diff --git a/include/linux/kfence.h b/include/linux/kfence.h
index 3fe6dd8a18c1..4b5e3679a72c 100644
--- a/include/linux/kfence.h
+++ b/include/linux/kfence.h
@@ -14,6 +14,9 @@
 
 #ifdef CONFIG_KFENCE
 
+#include <linux/atomic.h>
+#include <linux/static_key.h>
+
 /*
  * We allocate an even number of pages, as it simplifies calculations to map
  * address to metadata indices; effectively, the very first page serves as an
@@ -22,13 +25,8 @@
 #define KFENCE_POOL_SIZE ((CONFIG_KFENCE_NUM_OBJECTS + 1) * 2 * PAGE_SIZE)
 extern char *__kfence_pool;
 
-#ifdef CONFIG_KFENCE_STATIC_KEYS
-#include <linux/static_key.h>
 DECLARE_STATIC_KEY_FALSE(kfence_allocation_key);
-#else
-#include <linux/atomic.h>
 extern atomic_t kfence_allocation_gate;
-#endif
 
 /**
  * is_kfence_address() - check if an address belongs to KFENCE pool
@@ -116,13 +114,16 @@ void *__kfence_alloc(struct kmem_cache *s, size_t size, gfp_t flags);
  */
 static __always_inline void *kfence_alloc(struct kmem_cache *s, size_t size, gfp_t flags)
 {
-#ifdef CONFIG_KFENCE_STATIC_KEYS
-	if (static_branch_unlikely(&kfence_allocation_key))
+#if defined(CONFIG_KFENCE_STATIC_KEYS) || CONFIG_KFENCE_SAMPLE_INTERVAL == 0
+	if (!static_branch_unlikely(&kfence_allocation_key))
+		return NULL;
 #else
-	if (unlikely(!atomic_read(&kfence_allocation_gate)))
+	if (!static_branch_likely(&kfence_allocation_key))
+		return NULL;
 #endif
-		return __kfence_alloc(s, size, flags);
-	return NULL;
+	if (likely(atomic_read(&kfence_allocation_gate)))
+		return NULL;
+	return __kfence_alloc(s, size, flags);
 }
 
 /**
diff --git a/mm/kfence/core.c b/mm/kfence/core.c
index 802905b1c89b..09945784df9e 100644
--- a/mm/kfence/core.c
+++ b/mm/kfence/core.c
@@ -104,10 +104,11 @@ struct kfence_metadata kfence_metadata[CONFIG_KFENCE_NUM_OBJECTS];
 static struct list_head kfence_freelist = LIST_HEAD_INIT(kfence_freelist);
 static DEFINE_RAW_SPINLOCK(kfence_freelist_lock); /* Lock protecting freelist. */
 
-#ifdef CONFIG_KFENCE_STATIC_KEYS
-/* The static key to set up a KFENCE allocation. */
+/*
+ * The static key to set up a KFENCE allocation; or if static keys are not used
+ * to gate allocations, to avoid a load and compare if KFENCE is disabled.
+ */
 DEFINE_STATIC_KEY_FALSE(kfence_allocation_key);
-#endif
 
 /* Gates the allocation, ensuring only one succeeds in a given period. */
 atomic_t kfence_allocation_gate = ATOMIC_INIT(1);
@@ -774,6 +775,8 @@ void __init kfence_init(void)
 		return;
 	}
 
+	if (!IS_ENABLED(CONFIG_KFENCE_STATIC_KEYS))
+		static_branch_enable(&kfence_allocation_key);
 	WRITE_ONCE(kfence_enabled, true);
 	queue_delayed_work(system_unbound_wq, &kfence_timer, 0);
 	pr_info("initialized - using %lu bytes for %d objects at 0x%p-0x%p\n", KFENCE_POOL_SIZE,
@@ -866,12 +869,7 @@ void *__kfence_alloc(struct kmem_cache *s, size_t size, gfp_t flags)
 		return NULL;
 	}
 
-	/*
-	 * allocation_gate only needs to become non-zero, so it doesn't make
-	 * sense to continue writing to it and pay the associated contention
-	 * cost, in case we have a large number of concurrent allocations.
-	 */
-	if (atomic_read(&kfence_allocation_gate) || atomic_inc_return(&kfence_allocation_gate) > 1)
+	if (atomic_inc_return(&kfence_allocation_gate) > 1)
 		return NULL;
 #ifdef CONFIG_KFENCE_STATIC_KEYS
 	/*
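[Editor's note: illustrative sketch, not part of the patch. The simplified check in __kfence_alloc() relies on atomic_inc_return() electing exactly one winner per sample period: only the caller that moves the counter from 0 to 1 proceeds. A minimal userspace analogue using C11 atomics, with hypothetical names (gate_analogue, try_win_sample):]

```c
#include <stdatomic.h>
#include <stdbool.h>

/* Mirrors kfence_allocation_gate; 0 = open (sample available). */
static atomic_int gate_analogue;

/*
 * Analogue of "atomic_inc_return(&kfence_allocation_gate) > 1": every
 * caller increments, but only the one that observed the old value 0
 * (i.e. whose increment yields 1) wins the sample allocation.
 */
static bool try_win_sample(void)
{
	return atomic_fetch_add(&gate_analogue, 1) + 1 <= 1;
}
```

This loses the read-before-increment optimization the old code had, which the patch can afford because the static branch now keeps disabled-KFENCE callers from reaching this path at all.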