From patchwork Thu Sep 30 15:37:06 2021
X-Patchwork-Submitter: Marco Elver
X-Patchwork-Id: 12528735
Date: Thu, 30 Sep 2021 17:37:06 +0200
Message-Id: <20210930153706.2105471-1-elver@google.com>
X-Mailer: git-send-email 2.33.0.685.g46640cef36-goog
Subject: [PATCH] kfence: shorten critical sections of alloc/free
From: Marco Elver
To: elver@google.com, Andrew Morton
Cc: Alexander Potapenko, Dmitry Vyukov, Jann Horn,
 linux-kernel@vger.kernel.org, linux-mm@kvack.org,
 kasan-dev@googlegroups.com

Initializing memory and setting/checking the canary bytes is relatively
expensive, and doing so inside the meta->lock critical sections
unnecessarily extends the time spent with preemption and interrupts
disabled.

Any reads of meta->addr and meta->size in kfence_guarded_alloc() and
kfence_guarded_free() don't require holding meta->lock as long as the
object has been removed from the freelist: only kfence_guarded_alloc()
sets meta->addr and meta->size, and it does so after removing the
object from the freelist, which in turn requires a preceding
kfence_guarded_free() returning it to the list (or the initial state).

Therefore, move reads of meta->addr and meta->size, including the
expensive memory initialization that uses them, out of the meta->lock
critical sections.

Signed-off-by: Marco Elver
Acked-by: Alexander Potapenko
---
 mm/kfence/core.c | 38 +++++++++++++++++++++-----------------
 1 file changed, 21 insertions(+), 17 deletions(-)

diff --git a/mm/kfence/core.c b/mm/kfence/core.c
index b61ef93d9f98..802905b1c89b 100644
--- a/mm/kfence/core.c
+++ b/mm/kfence/core.c
@@ -309,12 +309,19 @@ static inline bool set_canary_byte(u8 *addr)
 /* Check canary byte at @addr. */
 static inline bool check_canary_byte(u8 *addr)
 {
+	struct kfence_metadata *meta;
+	unsigned long flags;
+
 	if (likely(*addr == KFENCE_CANARY_PATTERN(addr)))
 		return true;
 
 	atomic_long_inc(&counters[KFENCE_COUNTER_BUGS]);
-	kfence_report_error((unsigned long)addr, false, NULL, addr_to_metadata((unsigned long)addr),
-			    KFENCE_ERROR_CORRUPTION);
+
+	meta = addr_to_metadata((unsigned long)addr);
+	raw_spin_lock_irqsave(&meta->lock, flags);
+	kfence_report_error((unsigned long)addr, false, NULL, meta, KFENCE_ERROR_CORRUPTION);
+	raw_spin_unlock_irqrestore(&meta->lock, flags);
+
 	return false;
 }
 
@@ -324,8 +331,6 @@ static __always_inline void for_each_canary(const struct kfence_metadata *meta,
 	const unsigned long pageaddr = ALIGN_DOWN(meta->addr, PAGE_SIZE);
 	unsigned long addr;
 
-	lockdep_assert_held(&meta->lock);
-
 	/*
 	 * We'll iterate over each canary byte per-side until fn() returns
 	 * false. However, we'll still iterate over the canary bytes to the
@@ -414,8 +419,9 @@ static void *kfence_guarded_alloc(struct kmem_cache *cache, size_t size, gfp_t g
 	WRITE_ONCE(meta->cache, cache);
 	meta->size = size;
 	meta->alloc_stack_hash = alloc_stack_hash;
+	raw_spin_unlock_irqrestore(&meta->lock, flags);
 
-	for_each_canary(meta, set_canary_byte);
+	alloc_covered_add(alloc_stack_hash, 1);
 
 	/* Set required struct page fields. */
 	page = virt_to_page(meta->addr);
@@ -425,11 +431,8 @@ static void *kfence_guarded_alloc(struct kmem_cache *cache, size_t size, gfp_t g
 	if (IS_ENABLED(CONFIG_SLAB))
 		page->s_mem = addr;
 
-	raw_spin_unlock_irqrestore(&meta->lock, flags);
-
-	alloc_covered_add(alloc_stack_hash, 1);
-
 	/* Memory initialization. */
+	for_each_canary(meta, set_canary_byte);
 
 	/*
 	 * We check slab_want_init_on_alloc() ourselves, rather than letting
@@ -454,6 +457,7 @@ static void kfence_guarded_free(void *addr, struct kfence_metadata *meta, bool z
 {
 	struct kcsan_scoped_access assert_page_exclusive;
 	unsigned long flags;
+	bool init;
 
 	raw_spin_lock_irqsave(&meta->lock, flags);
 
@@ -481,6 +485,13 @@ static void kfence_guarded_free(void *addr, struct kfence_metadata *meta, bool z
 		meta->unprotected_page = 0;
 	}
 
+	/* Mark the object as freed. */
+	metadata_update_state(meta, KFENCE_OBJECT_FREED, NULL, 0);
+	init = slab_want_init_on_free(meta->cache);
+	raw_spin_unlock_irqrestore(&meta->lock, flags);
+
+	alloc_covered_add(meta->alloc_stack_hash, -1);
+
 	/* Check canary bytes for memory corruption. */
 	for_each_canary(meta, check_canary_byte);
 
@@ -489,16 +500,9 @@ static void kfence_guarded_free(void *addr, struct kfence_metadata *meta, bool z
 	 * data is still there, and after a use-after-free is detected, we
 	 * unprotect the page, so the data is still accessible.
 	 */
-	if (!zombie && unlikely(slab_want_init_on_free(meta->cache)))
+	if (!zombie && unlikely(init))
 		memzero_explicit(addr, meta->size);
 
-	/* Mark the object as freed. */
-	metadata_update_state(meta, KFENCE_OBJECT_FREED, NULL, 0);
-
-	raw_spin_unlock_irqrestore(&meta->lock, flags);
-
-	alloc_covered_add(meta->alloc_stack_hash, -1);
-
 	/* Protect to detect use-after-frees. */
 	kfence_protect((unsigned long)addr);
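
To make the locking rationale concrete, below is a minimal userspace
sketch of the same pattern; this is not KFENCE code: a pthread mutex
stands in for the raw meta->lock spinlock, and obj_meta, alloc_long_cs
and alloc_short_cs are made-up names for illustration only. The point
is the same as in the patch: publish the fields under the lock, then do
the expensive byte-wise work after dropping it, relying on ownership of
the object to keep addr/size stable.

	#include <pthread.h>
	#include <stdint.h>
	#include <stdio.h>
	#include <string.h>

	/*
	 * Hypothetical stand-in for struct kfence_metadata: addr and size
	 * are only rewritten by an allocation that has already taken the
	 * object off the freelist, so the current owner may read them
	 * without holding the lock.
	 */
	struct obj_meta {
		pthread_mutex_t lock;	/* stands in for meta->lock */
		uintptr_t addr;
		size_t size;
	};

	static char backing[256];

	/* Before: expensive memory initialization done with the lock held. */
	static void alloc_long_cs(struct obj_meta *m, size_t size)
	{
		pthread_mutex_lock(&m->lock);
		m->addr = (uintptr_t)backing;
		m->size = size;
		memset((void *)m->addr, 0, m->size);	/* expensive, lock held */
		pthread_mutex_unlock(&m->lock);
	}

	/* After: publish the fields under the lock, initialize outside it. */
	static void alloc_short_cs(struct obj_meta *m, size_t size)
	{
		pthread_mutex_lock(&m->lock);
		m->addr = (uintptr_t)backing;
		m->size = size;
		pthread_mutex_unlock(&m->lock);

		/*
		 * Unlocked on purpose: nobody else may write addr/size until
		 * this object is freed back to the list, so these reads are
		 * stable.
		 */
		memset((void *)m->addr, 0, m->size);
	}

	int main(void)
	{
		struct obj_meta m = { .lock = PTHREAD_MUTEX_INITIALIZER };

		alloc_long_cs(&m, sizeof(backing));
		alloc_short_cs(&m, sizeof(backing));
		printf("initialized %zu bytes at %p\n", m.size, (void *)m.addr);
		return 0;
	}

In the kernel the stakes are higher than in this sketch: meta->lock is
a raw_spinlock_t held with interrupts disabled, so moving the memset
and the per-byte canary loop out of the critical section directly
shrinks the window in which the CPU cannot be preempted or take
interrupts.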