From patchwork Thu Nov  5 00:02:29 2020
X-Patchwork-Submitter: Andrey Konovalov <andreyknvl@google.com>
X-Patchwork-Id: 11883101
Date: Thu, 5 Nov 2020 01:02:29 +0100
Message-Id: <17ecf27ee7b275869047bef91558bd263dd243f1.1604534322.git.andreyknvl@google.com>
X-Mailer: git-send-email 2.29.1.341.ge80a0c044ae-goog
Subject: [PATCH 19/20] kasan, mm: allow cache merging with no metadata
From: Andrey Konovalov <andreyknvl@google.com>
To: Catalin Marinas, Will Deacon, Vincenzo Frascino, Dmitry Vyukov,
    Alexander Potapenko, Marco Elver
Cc: Branislav Rankov, Andrey Konovalov, Kevin Brodsky,
    linux-kernel@vger.kernel.org, kasan-dev@googlegroups.com,
    linux-mm@kvack.org, linux-arm-kernel@lists.infradead.org,
    Andrey Ryabinin, Andrew Morton, Evgenii Stepanov

Cache merging is disabled with KASAN because KASAN puts its metadata
right after the allocated object. When merged caches have slightly
different sizes, the metadata ends up at different offsets, which KASAN
doesn't support.

It might be possible to adjust the metadata allocation algorithm and
make it friendly to the cache merging code. Instead, this change takes
the simpler approach of allowing caches to be merged when no metadata is
present, which is the case for hardware tag-based KASAN with
kasan.mode=prod.
Signed-off-by: Andrey Konovalov <andreyknvl@google.com>
Link: https://linux-review.googlesource.com/id/Ia114847dfb2244f297d2cb82d592bf6a07455dba
---
 include/linux/kasan.h | 26 ++++++++++++++++++++++++--
 mm/kasan/common.c     | 11 +++++++++++
 mm/slab_common.c      | 11 ++++++++---
 3 files changed, 43 insertions(+), 5 deletions(-)

diff --git a/include/linux/kasan.h b/include/linux/kasan.h
index d47601517dad..fb8ba4719e3b 100644
--- a/include/linux/kasan.h
+++ b/include/linux/kasan.h
@@ -79,17 +79,35 @@ struct kasan_cache {
 };
 
 #ifdef CONFIG_KASAN_HW_TAGS
+
 DECLARE_STATIC_KEY_FALSE(kasan_flag_enabled);
+
 static inline bool kasan_enabled(void)
 {
 	return static_branch_likely(&kasan_flag_enabled);
 }
-#else
+
+slab_flags_t __kasan_never_merge(slab_flags_t flags);
+static inline slab_flags_t kasan_never_merge(slab_flags_t flags)
+{
+	if (kasan_enabled())
+		return __kasan_never_merge(flags);
+	return flags;
+}
+
+#else /* CONFIG_KASAN_HW_TAGS */
+
 static inline bool kasan_enabled(void)
 {
 	return true;
 }
-#endif
+
+static inline slab_flags_t kasan_never_merge(slab_flags_t flags)
+{
+	return flags;
+}
+
+#endif /* CONFIG_KASAN_HW_TAGS */
 
 void __kasan_alloc_pages(struct page *page, unsigned int order);
 static inline void kasan_alloc_pages(struct page *page, unsigned int order)
@@ -238,6 +256,10 @@ static inline bool kasan_enabled(void)
 {
 	return false;
 }
+static inline slab_flags_t kasan_never_merge(slab_flags_t flags)
+{
+	return flags;
+}
 static inline void kasan_alloc_pages(struct page *page, unsigned int order) {}
 static inline void kasan_free_pages(struct page *page, unsigned int order) {}
 static inline void kasan_cache_create(struct kmem_cache *cache,

diff --git a/mm/kasan/common.c b/mm/kasan/common.c
index 940b42231069..25b18c145b06 100644
--- a/mm/kasan/common.c
+++ b/mm/kasan/common.c
@@ -81,6 +81,17 @@ asmlinkage void kasan_unpoison_task_stack_below(const void *watermark)
 }
 #endif /* CONFIG_KASAN_STACK */
 
+/*
+ * Only allow cache merging when stack collection is disabled and no metadata
+ * is present.
+ */
+slab_flags_t __kasan_never_merge(slab_flags_t flags)
+{
+	if (kasan_stack_collection_enabled())
+		return flags;
+	return flags & ~SLAB_KASAN;
+}
+
 void __kasan_alloc_pages(struct page *page, unsigned int order)
 {
 	u8 tag;

diff --git a/mm/slab_common.c b/mm/slab_common.c
index f1b0c4a22f08..3042ee8ea9ce 100644
--- a/mm/slab_common.c
+++ b/mm/slab_common.c
@@ -18,6 +18,7 @@
 #include <linux/seq_file.h>
 #include <linux/proc_fs.h>
 #include <linux/debugfs.h>
+#include <linux/kasan.h>
 #include <asm/cacheflush.h>
 #include <asm/tlbflush.h>
 #include <asm/page.h>
@@ -49,12 +50,16 @@ static DECLARE_WORK(slab_caches_to_rcu_destroy_work,
 		    slab_caches_to_rcu_destroy_workfn);
 
 /*
- * Set of flags that will prevent slab merging
+ * Set of flags that will prevent slab merging.
+ * Use slab_never_merge() instead.
  */
 #define SLAB_NEVER_MERGE (SLAB_RED_ZONE | SLAB_POISON | SLAB_STORE_USER | \
 		SLAB_TRACE | SLAB_TYPESAFE_BY_RCU | SLAB_NOLEAKTRACE | \
 		SLAB_FAILSLAB | SLAB_KASAN)
 
+/* KASAN allows merging in some configurations and will remove SLAB_KASAN. */
+#define slab_never_merge() (kasan_never_merge(SLAB_NEVER_MERGE))
+
 #define SLAB_MERGE_SAME (SLAB_RECLAIM_ACCOUNT | SLAB_CACHE_DMA | \
 			 SLAB_CACHE_DMA32 | SLAB_ACCOUNT)
 
@@ -164,7 +169,7 @@ static unsigned int calculate_alignment(slab_flags_t flags,
  */
 int slab_unmergeable(struct kmem_cache *s)
 {
-	if (slab_nomerge || (s->flags & SLAB_NEVER_MERGE))
+	if (slab_nomerge || (s->flags & slab_never_merge()))
 		return 1;
 
 	if (s->ctor)
@@ -198,7 +203,7 @@ struct kmem_cache *find_mergeable(unsigned int size, unsigned int align,
 	size = ALIGN(size, align);
 	flags = kmem_cache_flags(size, flags, name, NULL);
 
-	if (flags & slab_never_merge())
+	if (flags & slab_never_merge())
 		return NULL;
 
 	list_for_each_entry_reverse(s, &slab_caches, list) {