From patchwork Tue Nov 10 22:20:23 2020
X-Patchwork-Submitter: Andrey Konovalov
X-Patchwork-Id: 11895739
Date: Tue, 10 Nov 2020 23:20:23 +0100
Message-Id: <936c0c198145b663e031527c49a6895bd21ac3a0.1605046662.git.andreyknvl@google.com>
X-Mailer: git-send-email 2.29.2.222.g5d2a92d10f8-goog
Subject: [PATCH v2 19/20] kasan, mm: allow cache merging with no metadata
From: Andrey Konovalov
To: Dmitry Vyukov, Alexander Potapenko, Marco Elver
Cc: Branislav Rankov, Catalin Marinas, Kevin Brodsky, Will Deacon,
 linux-kernel@vger.kernel.org, kasan-dev@googlegroups.com, linux-mm@kvack.org,
 linux-arm-kernel@lists.infradead.org, Andrey Konovalov, Andrey Ryabinin,
 Andrew Morton, Vincenzo Frascino, Evgenii Stepanov

Cache merging is disabled with KASAN because KASAN puts its metadata
right after the allocated object. When the merged caches have slightly
different sizes, the metadata ends up in different places, which KASAN
doesn't support.
It might be possible to adjust the metadata allocation algorithm and
make it friendly to the cache merging code. Instead, this change takes
a simpler approach and allows merging caches when no metadata is
present, which is the case for hardware tag-based KASAN with
kasan.mode=prod.

Signed-off-by: Andrey Konovalov
Link: https://linux-review.googlesource.com/id/Ia114847dfb2244f297d2cb82d592bf6a07455dba
---
 include/linux/kasan.h | 26 ++++++++++++++++++++++++--
 mm/kasan/common.c     | 11 +++++++++++
 mm/slab_common.c      | 11 ++++++++---
 3 files changed, 43 insertions(+), 5 deletions(-)

diff --git a/include/linux/kasan.h b/include/linux/kasan.h
index 534ab3e2935a..c754eca356f7 100644
--- a/include/linux/kasan.h
+++ b/include/linux/kasan.h
@@ -81,17 +81,35 @@ struct kasan_cache {
 };
 
 #ifdef CONFIG_KASAN_HW_TAGS
+
 DECLARE_STATIC_KEY_FALSE(kasan_flag_enabled);
+
 static inline bool kasan_enabled(void)
 {
 	return static_branch_likely(&kasan_flag_enabled);
 }
-#else
+
+slab_flags_t __kasan_never_merge(slab_flags_t flags);
+static inline slab_flags_t kasan_never_merge(slab_flags_t flags)
+{
+	if (kasan_enabled())
+		return __kasan_never_merge(flags);
+	return flags;
+}
+
+#else /* CONFIG_KASAN_HW_TAGS */
+
 static inline bool kasan_enabled(void)
 {
 	return true;
 }
-#endif
+
+static inline slab_flags_t kasan_never_merge(slab_flags_t flags)
+{
+	return flags;
+}
+
+#endif /* CONFIG_KASAN_HW_TAGS */
 
 void __kasan_alloc_pages(struct page *page, unsigned int order);
 static inline void kasan_alloc_pages(struct page *page, unsigned int order)
@@ -240,6 +258,10 @@ static inline bool kasan_enabled(void)
 {
 	return false;
 }
+static inline slab_flags_t kasan_never_merge(slab_flags_t flags)
+{
+	return flags;
+}
 static inline void kasan_alloc_pages(struct page *page, unsigned int order) {}
 static inline void kasan_free_pages(struct page *page, unsigned int order) {}
 static inline void kasan_cache_create(struct kmem_cache *cache,
diff --git a/mm/kasan/common.c b/mm/kasan/common.c
index 940b42231069..25b18c145b06 100644
--- a/mm/kasan/common.c
+++ b/mm/kasan/common.c
@@ -81,6 +81,17 @@ asmlinkage void kasan_unpoison_task_stack_below(const void *watermark)
 }
 #endif /* CONFIG_KASAN_STACK */
 
+/*
+ * Only allow cache merging when stack collection is disabled and no metadata
+ * is present.
+ */
+slab_flags_t __kasan_never_merge(slab_flags_t flags)
+{
+	if (kasan_stack_collection_enabled())
+		return flags;
+	return flags & ~SLAB_KASAN;
+}
+
 void __kasan_alloc_pages(struct page *page, unsigned int order)
 {
 	u8 tag;
diff --git a/mm/slab_common.c b/mm/slab_common.c
index f1b0c4a22f08..3042ee8ea9ce 100644
--- a/mm/slab_common.c
+++ b/mm/slab_common.c
@@ -18,6 +18,7 @@
 #include <linux/seq_file.h>
 #include <linux/proc_fs.h>
 #include <linux/debugfs.h>
+#include <linux/kasan.h>
 #include <asm/cacheflush.h>
 #include <asm/tlbflush.h>
 #include <asm/page.h>
@@ -49,12 +50,16 @@ static DECLARE_WORK(slab_caches_to_rcu_destroy_work,
 		    slab_caches_to_rcu_destroy_workfn);
 
 /*
- * Set of flags that will prevent slab merging
+ * Set of flags that will prevent slab merging.
+ * Use slab_never_merge() instead.
  */
 #define SLAB_NEVER_MERGE (SLAB_RED_ZONE | SLAB_POISON | SLAB_STORE_USER | \
 		SLAB_TRACE | SLAB_TYPESAFE_BY_RCU | SLAB_NOLEAKTRACE | \
 		SLAB_FAILSLAB | SLAB_KASAN)
 
+/* KASAN allows merging in some configurations and will remove SLAB_KASAN. */
+#define slab_never_merge() (kasan_never_merge(SLAB_NEVER_MERGE))
+
 #define SLAB_MERGE_SAME (SLAB_RECLAIM_ACCOUNT | SLAB_CACHE_DMA | \
 			 SLAB_CACHE_DMA32 | SLAB_ACCOUNT)
 
@@ -164,7 +169,7 @@ static unsigned int calculate_alignment(slab_flags_t flags,
  */
 int slab_unmergeable(struct kmem_cache *s)
 {
-	if (slab_nomerge || (s->flags & SLAB_NEVER_MERGE))
+	if (slab_nomerge || (s->flags & slab_never_merge()))
 		return 1;
 
 	if (s->ctor)
@@ -198,7 +203,7 @@ struct kmem_cache *find_mergeable(unsigned int size, unsigned int align,
 	size = ALIGN(size, align);
 	flags = kmem_cache_flags(size, flags, name, NULL);
 
-	if (flags & SLAB_NEVER_MERGE)
+	if (flags & slab_never_merge())
 		return NULL;
 
 	list_for_each_entry_reverse(s, &slab_caches, list) {
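
[Editor's note] For readers following the flag-filtering idea outside the kernel tree, below is a minimal
stand-alone C sketch of what the patch does: SLAB_KASAN is dropped from the never-merge mask whenever
KASAN keeps no per-object metadata, so otherwise-identical caches become mergeable again. The flag
values and the kasan_stack_collection_enabled stand-in are illustrative assumptions, not the kernel's
definitions from <linux/slab.h>.

/*
 * User-space model of slab_never_merge()/kasan_never_merge().
 * Flag values are made up for illustration only.
 */
#include <stdbool.h>
#include <stdio.h>

typedef unsigned int slab_flags_t;

#define SLAB_RED_ZONE	0x01u
#define SLAB_POISON	0x02u
#define SLAB_STORE_USER	0x04u
#define SLAB_KASAN	0x08u

/* Simplified never-merge mask (the kernel's has more flags). */
#define SLAB_NEVER_MERGE \
	(SLAB_RED_ZONE | SLAB_POISON | SLAB_STORE_USER | SLAB_KASAN)

/* Stand-in for the kernel's mode check: true for kasan.mode=full, false for prod. */
static bool kasan_stack_collection_enabled = false;

/* Mirrors __kasan_never_merge(): drop SLAB_KASAN when no metadata is kept. */
static slab_flags_t kasan_never_merge(slab_flags_t flags)
{
	if (kasan_stack_collection_enabled)
		return flags;
	return flags & ~SLAB_KASAN;
}

/* Mirrors the new slab_never_merge() macro. */
static slab_flags_t slab_never_merge(void)
{
	return kasan_never_merge(SLAB_NEVER_MERGE);
}

int main(void)
{
	/* A cache whose only "never merge" flag is SLAB_KASAN. */
	slab_flags_t cache_flags = SLAB_KASAN;

	/* kasan.mode=prod: no metadata, so SLAB_KASAN no longer blocks merging. */
	printf("prod: unmergeable=%d\n", (cache_flags & slab_never_merge()) != 0);

	/* Stack collection enabled: SLAB_KASAN still prevents merging. */
	kasan_stack_collection_enabled = true;
	printf("full: unmergeable=%d\n", (cache_flags & slab_never_merge()) != 0);

	return 0;
}

Under this model, slab_unmergeable() and find_mergeable() behave exactly as before whenever metadata
is kept; SLAB_KASAN simply stops mattering when it is not, which is what re-enables cache merging for
hardware tag-based KASAN in kasan.mode=prod.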