From patchwork Tue Mar 9 13:24:39 2021
X-Patchwork-Submitter: Andrey Konovalov
X-Patchwork-Id: 12125291
Date: Tue, 9 Mar 2021 14:24:39 +0100
Message-Id: <190fd15c1886654afdec0d19ebebd5ade665b601.1615296150.git.andreyknvl@google.com>
X-Mailer: git-send-email 2.30.1.766.gb4fecdf3b7-goog
Subject: [PATCH v3 5/5] kasan, mm: integrate slab init_on_free with HW_TAGS
From: Andrey Konovalov
To: Andrew Morton, Christoph Lameter, Pekka Enberg, David Rientjes,
 Joonsoo Kim, Vlastimil Babka, Catalin Marinas
Cc: Will Deacon, Vincenzo Frascino, Dmitry Vyukov, Andrey Ryabinin,
 Alexander Potapenko, Marco Elver, Peter Collingbourne, Evgenii Stepanov,
 Branislav Rankov, Kevin Brodsky, kasan-dev@googlegroups.com,
 linux-arm-kernel@lists.infradead.org, linux-mm@kvack.org,
 linux-kernel@vger.kernel.org, Andrey Konovalov

This change uses the previously added memory initialization feature of
HW_TAGS KASAN routines for slab memory when init_on_free is enabled.

With this change, the memory initialization memset() is no longer called
when both HW_TAGS KASAN and init_on_free are enabled. Instead, memory is
initialized by the KASAN runtime.

For SLUB, the memory initialization memset() is moved into
slab_free_hook(), whose call currently directly follows the
initialization loop. A new argument is added to slab_free_hook() that
indicates whether to initialize the memory or not.

To avoid discrepancies in which memory gets initialized that future
changes could otherwise introduce, the KASAN hook and the initialization
memset() are kept together and a warning comment is added.

Combining setting allocation tags with memory initialization improves
HW_TAGS KASAN performance when init_on_free is enabled.
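For illustration, the free path this change establishes looks roughly like
the sketch below (a simplified, hypothetical wrapper that is not part of
the patch; the helpers it calls are the ones used in the hunks that
follow):

static bool free_path_sketch(struct kmem_cache *s, void *object)
{
	bool init = slab_want_init_on_free(s);

	/*
	 * With HW_TAGS KASAN, kasan_has_integrated_init() returns true and
	 * the KASAN runtime zeroes the object while setting memory tags, so
	 * the explicit memset() below is skipped.
	 */
	if (init && !kasan_has_integrated_init())
		memset(kasan_reset_tag(object), 0, s->object_size);

	/* KASAN might put the object into memory quarantine, delaying its reuse. */
	return kasan_slab_free(s, object, init);
}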
Reviewed-by: Marco Elver
Signed-off-by: Andrey Konovalov
---
 include/linux/kasan.h | 10 ++++++----
 mm/kasan/common.c     | 13 +++++++------
 mm/slab.c             | 15 +++++++++++----
 mm/slub.c             | 43 ++++++++++++++++++++++++-------------------
 4 files changed, 48 insertions(+), 33 deletions(-)

diff --git a/include/linux/kasan.h b/include/linux/kasan.h
index 85f2a8786606..ed08c419a687 100644
--- a/include/linux/kasan.h
+++ b/include/linux/kasan.h
@@ -203,11 +203,13 @@ static __always_inline void * __must_check kasan_init_slab_obj(
 	return (void *)object;
 }
 
-bool __kasan_slab_free(struct kmem_cache *s, void *object, unsigned long ip);
-static __always_inline bool kasan_slab_free(struct kmem_cache *s, void *object)
+bool __kasan_slab_free(struct kmem_cache *s, void *object,
+			unsigned long ip, bool init);
+static __always_inline bool kasan_slab_free(struct kmem_cache *s,
+						void *object, bool init)
 {
 	if (kasan_enabled())
-		return __kasan_slab_free(s, object, _RET_IP_);
+		return __kasan_slab_free(s, object, _RET_IP_, init);
 	return false;
 }
 
@@ -313,7 +315,7 @@ static inline void *kasan_init_slab_obj(struct kmem_cache *cache,
 {
 	return (void *)object;
 }
-static inline bool kasan_slab_free(struct kmem_cache *s, void *object)
+static inline bool kasan_slab_free(struct kmem_cache *s, void *object, bool init)
 {
 	return false;
 }
diff --git a/mm/kasan/common.c b/mm/kasan/common.c
index 7ea747b18c26..623cf94288a2 100644
--- a/mm/kasan/common.c
+++ b/mm/kasan/common.c
@@ -322,8 +322,8 @@ void * __must_check __kasan_init_slab_obj(struct kmem_cache *cache,
 	return (void *)object;
 }
 
-static inline bool ____kasan_slab_free(struct kmem_cache *cache,
-				void *object, unsigned long ip, bool quarantine)
+static inline bool ____kasan_slab_free(struct kmem_cache *cache, void *object,
+				unsigned long ip, bool quarantine, bool init)
 {
 	u8 tag;
 	void *tagged_object;
@@ -351,7 +351,7 @@ static inline bool ____kasan_slab_free(struct kmem_cache *cache,
 	}
 
 	kasan_poison(object, round_up(cache->object_size, KASAN_GRANULE_SIZE),
-			KASAN_KMALLOC_FREE, false);
+			KASAN_KMALLOC_FREE, init);
 
 	if ((IS_ENABLED(CONFIG_KASAN_GENERIC) && !quarantine))
 		return false;
@@ -362,9 +362,10 @@ static inline bool ____kasan_slab_free(struct kmem_cache *cache,
 	return kasan_quarantine_put(cache, object);
 }
 
-bool __kasan_slab_free(struct kmem_cache *cache, void *object, unsigned long ip)
+bool __kasan_slab_free(struct kmem_cache *cache, void *object,
+				unsigned long ip, bool init)
 {
-	return ____kasan_slab_free(cache, object, ip, true);
+	return ____kasan_slab_free(cache, object, ip, true, init);
 }
 
 static inline bool ____kasan_kfree_large(void *ptr, unsigned long ip)
@@ -409,7 +410,7 @@ void __kasan_slab_free_mempool(void *ptr, unsigned long ip)
 			return;
 		kasan_poison(ptr, page_size(page), KASAN_FREE_PAGE, false);
 	} else {
-		____kasan_slab_free(page->slab_cache, ptr, ip, false);
+		____kasan_slab_free(page->slab_cache, ptr, ip, false, false);
 	}
 }
diff --git a/mm/slab.c b/mm/slab.c
index 936dd686dec9..3adfe5bc3e2e 100644
--- a/mm/slab.c
+++ b/mm/slab.c
@@ -3425,17 +3425,24 @@ static void cache_flusharray(struct kmem_cache *cachep, struct array_cache *ac)
 static __always_inline void __cache_free(struct kmem_cache *cachep, void *objp,
 					 unsigned long caller)
 {
+	bool init;
+
 	if (is_kfence_address(objp)) {
 		kmemleak_free_recursive(objp, cachep->flags);
 		__kfence_free(objp);
 		return;
 	}
 
-	if (unlikely(slab_want_init_on_free(cachep)))
+	/*
+	 * As memory initialization might be integrated into KASAN,
+	 * kasan_slab_free and initialization memset must be
+	 * kept together to avoid discrepancies in behavior.
+	 */
+	init = slab_want_init_on_free(cachep);
+	if (init && !kasan_has_integrated_init())
 		memset(objp, 0, cachep->object_size);
-
-	/* Put the object into the quarantine, don't touch it for now. */
-	if (kasan_slab_free(cachep, objp))
+	/* KASAN might put objp into memory quarantine, delaying its reuse. */
+	if (kasan_slab_free(cachep, objp, init))
 		return;
 
 	/* Use KCSAN to help debug racy use-after-free. */
diff --git a/mm/slub.c b/mm/slub.c
index f53df23760e3..37afe6251bcc 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -1532,7 +1532,8 @@ static __always_inline void kfree_hook(void *x)
 	kasan_kfree_large(x);
 }
 
-static __always_inline bool slab_free_hook(struct kmem_cache *s, void *x)
+static __always_inline bool slab_free_hook(struct kmem_cache *s,
+						void *x, bool init)
 {
 	kmemleak_free_recursive(x, s->flags);
 
@@ -1558,8 +1559,25 @@ static __always_inline bool slab_free_hook(struct kmem_cache *s, void *x)
 		__kcsan_check_access(x, s->object_size,
 				     KCSAN_ACCESS_WRITE | KCSAN_ACCESS_ASSERT);
 
-	/* KASAN might put x into memory quarantine, delaying its reuse */
-	return kasan_slab_free(s, x);
+	/*
+	 * As memory initialization might be integrated into KASAN,
+	 * kasan_slab_free and initialization memset's must be
+	 * kept together to avoid discrepancies in behavior.
+	 *
+	 * The initialization memset's clear the object and the metadata,
+	 * but don't touch the SLAB redzone.
+	 */
+	if (init) {
+		int rsize;
+
+		if (!kasan_has_integrated_init())
+			memset(kasan_reset_tag(x), 0, s->object_size);
+		rsize = (s->flags & SLAB_RED_ZONE) ? s->red_left_pad : 0;
+		memset((char *)kasan_reset_tag(x) + s->inuse, 0,
+		       s->size - s->inuse - rsize);
+	}
+	/* KASAN might put x into memory quarantine, delaying its reuse. */
+	return kasan_slab_free(s, x, init);
 }
 
 static inline bool slab_free_freelist_hook(struct kmem_cache *s,
@@ -1569,10 +1587,9 @@ static inline bool slab_free_freelist_hook(struct kmem_cache *s,
 	void *object;
 	void *next = *head;
 	void *old_tail = *tail ? *tail : *head;
-	int rsize;
 
 	if (is_kfence_address(next)) {
-		slab_free_hook(s, next);
+		slab_free_hook(s, next, false);
 		return true;
 	}
 
@@ -1584,20 +1601,8 @@ static inline bool slab_free_freelist_hook(struct kmem_cache *s,
 		object = next;
 		next = get_freepointer(s, object);
 
-		if (slab_want_init_on_free(s)) {
-			/*
-			 * Clear the object and the metadata, but don't touch
-			 * the redzone.
-			 */
-			memset(kasan_reset_tag(object), 0, s->object_size);
-			rsize = (s->flags & SLAB_RED_ZONE) ? s->red_left_pad
-							   : 0;
-			memset((char *)kasan_reset_tag(object) + s->inuse, 0,
-			       s->size - s->inuse - rsize);
-
-		}
 		/* If object's reuse doesn't have to be delayed */
-		if (!slab_free_hook(s, object)) {
+		if (!slab_free_hook(s, object, slab_want_init_on_free(s))) {
 			/* Move object to the new freelist */
 			set_freepointer(s, object, *head);
 			*head = object;
@@ -3235,7 +3240,7 @@ int build_detached_freelist(struct kmem_cache *s, size_t size,
 	}
 
 	if (is_kfence_address(object)) {
-		slab_free_hook(df->s, object);
+		slab_free_hook(df->s, object, false);
 		__kfence_free(object);
 		p[size] = NULL; /* mark object processed */
 		return size;