From patchwork Thu Aug 9 19:20:53 2018
X-Patchwork-Submitter: Andrey Konovalov
X-Patchwork-Id: 10561699
From: Andrey Konovalov
To: Andrey Ryabinin, Alexander Potapenko, Dmitry Vyukov, Catalin Marinas,
	Will Deacon, Christoph Lameter,
	Andrew Morton, Mark Rutland, Nick Desaulniers, Marc Zyngier,
	Dave Martin, Ard Biesheuvel, "Eric W. Biederman", Ingo Molnar,
	Paul Lawrence, Geert Uytterhoeven, Arnd Bergmann,
	"Kirill A. Shutemov", Greg Kroah-Hartman, Kate Stewart, Mike Rapoport,
	kasan-dev@googlegroups.com, linux-doc@vger.kernel.org,
	linux-kernel@vger.kernel.org, linux-arm-kernel@lists.infradead.org,
	linux-sparse@vger.kernel.org, linux-mm@kvack.org,
	linux-kbuild@vger.kernel.org
Cc: Kostya Serebryany, Evgeniy Stepanov, Lee Smith, Ramana Radhakrishnan,
	Jacob Bramley, Ruben Ayrapetyan, Jann Horn, Mark Brand,
	Chintan Pandya, Vishwath Mohan, Andrey Konovalov
Subject: [PATCH v5 01/18] khwasan, mm: change kasan hooks signatures
Date: Thu, 9 Aug 2018 21:20:53 +0200
Message-Id: <327ede8363658620073d173c615a26ba9faf38d0.1533842385.git.andreyknvl@google.com>

KHWASAN will change the value of the top byte of pointers returned from
the kernel allocation functions (such as kmalloc). This patch updates
the KASAN hook signatures and their usage in SLAB and SLUB code to
reflect that.

Signed-off-by: Andrey Konovalov
---
(A short userspace sketch illustrating the top-byte tagging idea is
appended after the diff.)

 include/linux/kasan.h | 34 +++++++++++++++++++++++-----------
 mm/kasan/kasan.c      | 24 ++++++++++++++----------
 mm/slab.c             | 12 ++++++------
 mm/slab.h             |  2 +-
 mm/slab_common.c      |  4 ++--
 mm/slub.c             | 15 +++++++--------
 6 files changed, 53 insertions(+), 38 deletions(-)

diff --git a/include/linux/kasan.h b/include/linux/kasan.h
index de784fd11d12..cbdc54543803 100644
--- a/include/linux/kasan.h
+++ b/include/linux/kasan.h
@@ -53,14 +53,14 @@ void kasan_unpoison_object_data(struct kmem_cache *cache, void *object);
 void kasan_poison_object_data(struct kmem_cache *cache, void *object);
 void kasan_init_slab_obj(struct kmem_cache *cache, const void *object);
 
-void kasan_kmalloc_large(const void *ptr, size_t size, gfp_t flags);
+void *kasan_kmalloc_large(const void *ptr, size_t size, gfp_t flags);
 void kasan_kfree_large(void *ptr, unsigned long ip);
 void kasan_poison_kfree(void *ptr, unsigned long ip);
-void kasan_kmalloc(struct kmem_cache *s, const void *object, size_t size,
+void *kasan_kmalloc(struct kmem_cache *s, const void *object, size_t size,
 		gfp_t flags);
-void kasan_krealloc(const void *object, size_t new_size, gfp_t flags);
+void *kasan_krealloc(const void *object, size_t new_size, gfp_t flags);
 
-void kasan_slab_alloc(struct kmem_cache *s, void *object, gfp_t flags);
+void *kasan_slab_alloc(struct kmem_cache *s, void *object, gfp_t flags);
 bool kasan_slab_free(struct kmem_cache *s, void *object, unsigned long ip);
 
 struct kasan_cache {
@@ -105,16 +105,28 @@ static inline void kasan_poison_object_data(struct kmem_cache *cache,
 static inline void kasan_init_slab_obj(struct kmem_cache *cache,
 				const void *object) {}
 
-static inline void kasan_kmalloc_large(void *ptr, size_t size, gfp_t flags) {}
+static inline void *kasan_kmalloc_large(void *ptr, size_t size, gfp_t flags)
+{
+	return ptr;
+}
 static inline void kasan_kfree_large(void *ptr, unsigned long ip) {}
 static inline void kasan_poison_kfree(void *ptr, unsigned long ip) {}
-static inline void kasan_kmalloc(struct kmem_cache *s, const void *object,
-				size_t size, gfp_t flags) {}
-static inline void kasan_krealloc(const void *object, size_t new_size,
-				gfp_t flags) {}
+static inline void *kasan_kmalloc(struct kmem_cache *s, const void *object,
+				size_t size, gfp_t flags)
+{
+	return (void *)object;
+}
+static inline void *kasan_krealloc(const void *object, size_t new_size,
+				gfp_t flags)
+{
+	return (void *)object;
+}
 
-static inline void kasan_slab_alloc(struct kmem_cache *s, void *object,
-				gfp_t flags) {}
+static inline void *kasan_slab_alloc(struct kmem_cache *s, void *object,
+				gfp_t flags)
+{
+	return object;
+}
 static inline bool kasan_slab_free(struct kmem_cache *s, void *object,
 				unsigned long ip)
 {
diff --git a/mm/kasan/kasan.c b/mm/kasan/kasan.c
index c3bd5209da38..f696c7c143c2 100644
--- a/mm/kasan/kasan.c
+++ b/mm/kasan/kasan.c
@@ -485,9 +485,9 @@ void kasan_init_slab_obj(struct kmem_cache *cache, const void *object)
 	__memset(alloc_info, 0, sizeof(*alloc_info));
 }
 
-void kasan_slab_alloc(struct kmem_cache *cache, void *object, gfp_t flags)
+void *kasan_slab_alloc(struct kmem_cache *cache, void *object, gfp_t flags)
 {
-	kasan_kmalloc(cache, object, cache->object_size, flags);
+	return kasan_kmalloc(cache, object, cache->object_size, flags);
 }
 
 static bool __kasan_slab_free(struct kmem_cache *cache, void *object,
@@ -528,7 +528,7 @@ bool kasan_slab_free(struct kmem_cache *cache, void *object, unsigned long ip)
 	return __kasan_slab_free(cache, object, ip, true);
 }
 
-void kasan_kmalloc(struct kmem_cache *cache, const void *object, size_t size,
+void *kasan_kmalloc(struct kmem_cache *cache, const void *object, size_t size,
 		gfp_t flags)
 {
 	unsigned long redzone_start;
@@ -538,7 +538,7 @@ void kasan_kmalloc(struct kmem_cache *cache, const void *object, size_t size,
 		quarantine_reduce();
 
 	if (unlikely(object == NULL))
-		return;
+		return NULL;
 
 	redzone_start = round_up((unsigned long)(object + size),
 				KASAN_SHADOW_SCALE_SIZE);
@@ -551,10 +551,12 @@ void kasan_kmalloc(struct kmem_cache *cache, const void *object, size_t size,
 
 	if (cache->flags & SLAB_KASAN)
 		set_track(&get_alloc_info(cache, object)->alloc_track, flags);
+
+	return (void *)object;
 }
 EXPORT_SYMBOL(kasan_kmalloc);
 
-void kasan_kmalloc_large(const void *ptr, size_t size, gfp_t flags)
+void *kasan_kmalloc_large(const void *ptr, size_t size, gfp_t flags)
 {
 	struct page *page;
 	unsigned long redzone_start;
@@ -564,7 +566,7 @@ void kasan_kmalloc_large(const void *ptr, size_t size, gfp_t flags)
 		quarantine_reduce();
 
 	if (unlikely(ptr == NULL))
-		return;
+		return NULL;
 
 	page = virt_to_page(ptr);
 	redzone_start = round_up((unsigned long)(ptr + size),
@@ -574,21 +576,23 @@ void kasan_kmalloc_large(const void *ptr, size_t size, gfp_t flags)
 	kasan_unpoison_shadow(ptr, size);
 	kasan_poison_shadow((void *)redzone_start, redzone_end - redzone_start,
 		KASAN_PAGE_REDZONE);
+
+	return (void *)ptr;
 }
 
-void kasan_krealloc(const void *object, size_t size, gfp_t flags)
+void *kasan_krealloc(const void *object, size_t size, gfp_t flags)
 {
 	struct page *page;
 
 	if (unlikely(object == ZERO_SIZE_PTR))
-		return;
+		return ZERO_SIZE_PTR;
 
 	page = virt_to_head_page(object);
 
 	if (unlikely(!PageSlab(page)))
-		kasan_kmalloc_large(object, size, flags);
+		return kasan_kmalloc_large(object, size, flags);
 	else
-		kasan_kmalloc(page->slab_cache, object, size, flags);
+		return kasan_kmalloc(page->slab_cache, object, size, flags);
 }
 
 void kasan_poison_kfree(void *ptr, unsigned long ip)
diff --git a/mm/slab.c b/mm/slab.c
index aa76a70e087e..6fdca9ec2ea4 100644
--- a/mm/slab.c
+++ b/mm/slab.c
@@ -3551,7 +3551,7 @@ void *kmem_cache_alloc(struct kmem_cache *cachep, gfp_t flags)
 {
 	void *ret = slab_alloc(cachep, flags, _RET_IP_);
 
-	kasan_slab_alloc(cachep, ret, flags);
+	ret = kasan_slab_alloc(cachep, ret, flags);
 	trace_kmem_cache_alloc(_RET_IP_, ret,
 			       cachep->object_size, cachep->size, flags);
 
@@ -3617,7 +3617,7 @@ kmem_cache_alloc_trace(struct kmem_cache *cachep, gfp_t flags, size_t size)
 
 	ret = slab_alloc(cachep, flags, _RET_IP_);
 
-	kasan_kmalloc(cachep, ret, size, flags);
+	ret = kasan_kmalloc(cachep, ret, size, flags);
 	trace_kmalloc(_RET_IP_, ret,
 		      size, cachep->size, flags);
 	return ret;
@@ -3641,7 +3641,7 @@ void *kmem_cache_alloc_node(struct kmem_cache *cachep, gfp_t flags, int nodeid)
 {
 	void *ret = slab_alloc_node(cachep, flags, nodeid, _RET_IP_);
 
-	kasan_slab_alloc(cachep, ret, flags);
+	ret = kasan_slab_alloc(cachep, ret, flags);
 	trace_kmem_cache_alloc_node(_RET_IP_, ret,
 				    cachep->object_size, cachep->size,
 				    flags, nodeid);
@@ -3660,7 +3660,7 @@ void *kmem_cache_alloc_node_trace(struct kmem_cache *cachep,
 
 	ret = slab_alloc_node(cachep, flags, nodeid, _RET_IP_);
 
-	kasan_kmalloc(cachep, ret, size, flags);
+	ret = kasan_kmalloc(cachep, ret, size, flags);
 	trace_kmalloc_node(_RET_IP_, ret,
 			   size, cachep->size,
 			   flags, nodeid);
@@ -3679,7 +3679,7 @@ __do_kmalloc_node(size_t size, gfp_t flags, int node, unsigned long caller)
 	if (unlikely(ZERO_OR_NULL_PTR(cachep)))
 		return cachep;
 	ret = kmem_cache_alloc_node_trace(cachep, flags, node, size);
-	kasan_kmalloc(cachep, ret, size, flags);
+	ret = kasan_kmalloc(cachep, ret, size, flags);
 
 	return ret;
 }
@@ -3715,7 +3715,7 @@ static __always_inline void *__do_kmalloc(size_t size, gfp_t flags,
 		return cachep;
 	ret = slab_alloc(cachep, flags, caller);
 
-	kasan_kmalloc(cachep, ret, size, flags);
+	ret = kasan_kmalloc(cachep, ret, size, flags);
 	trace_kmalloc(caller, ret,
 		      size, cachep->size, flags);
 
diff --git a/mm/slab.h b/mm/slab.h
index 68bdf498da3b..15ef6a0d9c16 100644
--- a/mm/slab.h
+++ b/mm/slab.h
@@ -441,7 +441,7 @@ static inline void slab_post_alloc_hook(struct kmem_cache *s, gfp_t flags,
 
 		kmemleak_alloc_recursive(object, s->object_size, 1,
 					 s->flags, flags);
-		kasan_slab_alloc(s, object, flags);
+		p[i] = kasan_slab_alloc(s, object, flags);
 	}
 
 	if (memcg_kmem_enabled())
diff --git a/mm/slab_common.c b/mm/slab_common.c
index 2296caf87bfb..a99a1feb52c4 100644
--- a/mm/slab_common.c
+++ b/mm/slab_common.c
@@ -1183,7 +1183,7 @@ void *kmalloc_order(size_t size, gfp_t flags, unsigned int order)
 	page = alloc_pages(flags, order);
 	ret = page ? page_address(page) : NULL;
 	kmemleak_alloc(ret, size, 1, flags);
-	kasan_kmalloc_large(ret, size, flags);
+	ret = kasan_kmalloc_large(ret, size, flags);
 	return ret;
 }
 EXPORT_SYMBOL(kmalloc_order);
@@ -1461,7 +1461,7 @@ static __always_inline void *__do_krealloc(const void *p, size_t new_size,
 		ks = ksize(p);
 
 	if (ks >= new_size) {
-		kasan_krealloc((void *)p, new_size, flags);
+		p = kasan_krealloc((void *)p, new_size, flags);
 		return (void *)p;
 	}
 
diff --git a/mm/slub.c b/mm/slub.c
index 51258eff4178..382bc0fea498 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -1336,10 +1336,10 @@ static inline void dec_slabs_node(struct kmem_cache *s, int node,
  * Hooks for other subsystems that check memory allocations. In a typical
  * production configuration these hooks all should produce no code at all.
  */
-static inline void kmalloc_large_node_hook(void *ptr, size_t size, gfp_t flags)
+static inline void *kmalloc_large_node_hook(void *ptr, size_t size, gfp_t flags)
 {
 	kmemleak_alloc(ptr, size, 1, flags);
-	kasan_kmalloc_large(ptr, size, flags);
+	return kasan_kmalloc_large(ptr, size, flags);
 }
 
 static __always_inline void kfree_hook(void *x)
@@ -2732,7 +2732,7 @@ void *kmem_cache_alloc_trace(struct kmem_cache *s, gfp_t gfpflags, size_t size)
 {
 	void *ret = slab_alloc(s, gfpflags, _RET_IP_);
 	trace_kmalloc(_RET_IP_, ret, size, s->size, gfpflags);
-	kasan_kmalloc(s, ret, size, gfpflags);
+	ret = kasan_kmalloc(s, ret, size, gfpflags);
 	return ret;
 }
 EXPORT_SYMBOL(kmem_cache_alloc_trace);
@@ -2760,7 +2760,7 @@ void *kmem_cache_alloc_node_trace(struct kmem_cache *s,
 	trace_kmalloc_node(_RET_IP_, ret,
 			   size, s->size, gfpflags, node);
 
-	kasan_kmalloc(s, ret, size, gfpflags);
+	ret = kasan_kmalloc(s, ret, size, gfpflags);
 	return ret;
 }
 EXPORT_SYMBOL(kmem_cache_alloc_node_trace);
@@ -3750,7 +3750,7 @@ void *__kmalloc(size_t size, gfp_t flags)
 
 	trace_kmalloc(_RET_IP_, ret, size, s->size, flags);
 
-	kasan_kmalloc(s, ret, size, flags);
+	ret = kasan_kmalloc(s, ret, size, flags);
 
 	return ret;
 }
@@ -3767,8 +3767,7 @@ static void *kmalloc_large_node(size_t size, gfp_t flags, int node)
 	if (page)
 		ptr = page_address(page);
 
-	kmalloc_large_node_hook(ptr, size, flags);
-	return ptr;
+	return kmalloc_large_node_hook(ptr, size, flags);
 }
 
 void *__kmalloc_node(size_t size, gfp_t flags, int node)
@@ -3795,7 +3794,7 @@ void *__kmalloc_node(size_t size, gfp_t flags, int node)
 
 	trace_kmalloc_node(_RET_IP_, ret, size, s->size, flags, node);
 
-	kasan_kmalloc(s, ret, size, flags);
+	ret = kasan_kmalloc(s, ret, size, flags);
 
 	return ret;
 }
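
Why the hooks now return the pointer: with a tag-based KASAN mode the
allocator embeds a small tag in the (otherwise unused) top byte of each
returned pointer, so the pointer a hook hands back can differ from the
one it was given, and every call site has to adopt the returned value
(ret = kasan_kmalloc(...)). The userspace sketch below illustrates only
the top-byte idea; set_tag(), get_tag() and TAG_SHIFT are names invented
for this sketch (which assumes 64-bit pointers), not the kernel's actual
KHWASAN interface.

#include <stdint.h>
#include <stdio.h>

#define TAG_SHIFT 56			/* top byte of a 64-bit pointer */
#define TAG_MASK  (0xffULL << TAG_SHIFT)

/* Embed a tag in the top byte; the result differs from the input. */
static void *set_tag(void *ptr, uint8_t tag)
{
	return (void *)(((uintptr_t)ptr & ~TAG_MASK) |
			((uintptr_t)tag << TAG_SHIFT));
}

/* Recover the tag from a tagged pointer. */
static uint8_t get_tag(const void *ptr)
{
	return (uintptr_t)ptr >> TAG_SHIFT;
}

int main(void)
{
	long object = 42;
	void *tagged = set_tag(&object, 0xab);

	/*
	 * tagged != &object, which is exactly why a void-returning hook
	 * could not work: the caller must pick up the rewritten pointer.
	 */
	printf("tag = 0x%02x\n", get_tag(tagged));
	/* Assumes the top byte of userspace addresses is zero here. */
	printf("untagged matches: %d\n",
	       set_tag(tagged, 0) == (void *)&object);
	return 0;
}

On a typical 64-bit machine this should print "tag = 0xab" and
"untagged matches: 1", mirroring how each call site in the diff above
now assigns the hook's (possibly retagged) return value.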