From patchwork Tue Sep 29 13:38:08 2020
X-Patchwork-Submitter: Marco Elver <elver@google.com>
X-Patchwork-Id: 11806003
Date: Tue, 29 Sep 2020 15:38:08 +0200
In-Reply-To: <20200929133814.2834621-1-elver@google.com>
Message-Id: <20200929133814.2834621-6-elver@google.com>
References: <20200929133814.2834621-1-elver@google.com>
Subject: [PATCH v4 05/11] mm, kfence: insert KFENCE hooks for SLUB
From: Marco Elver <elver@google.com>
To: elver@google.com, akpm@linux-foundation.org, glider@google.com
Cc: mark.rutland@arm.com, hdanton@sina.com, linux-doc@vger.kernel.org,
 peterz@infradead.org,
 catalin.marinas@arm.com, dave.hansen@linux.intel.com, linux-mm@kvack.org,
 edumazet@google.com, hpa@zytor.com, cl@linux.com, will@kernel.org,
 sjpark@amazon.com, corbet@lwn.net, x86@kernel.org,
 kasan-dev@googlegroups.com, mingo@redhat.com, vbabka@suse.cz,
 rientjes@google.com, aryabinin@virtuozzo.com, keescook@chromium.org,
 paulmck@kernel.org, jannh@google.com, andreyknvl@google.com,
 bp@alien8.de, luto@kernel.org, Jonathan.Cameron@huawei.com,
 tglx@linutronix.de, dvyukov@google.com,
 linux-arm-kernel@lists.infradead.org, gregkh@linuxfoundation.org,
 linux-kernel@vger.kernel.org, penberg@kernel.org, iamjoonsoo.kim@lge.com

From: Alexander Potapenko <glider@google.com>

Inserts KFENCE hooks into the SLUB allocator.

To pass the originally requested size to KFENCE, add an argument
'orig_size' to slab_alloc*(). The additional argument is required to
preserve the originally requested size for kmalloc() allocations, which
use size classes (e.g. an allocation of 272 bytes will return an object
of size 512). Therefore, kmem_cache::size does not represent the
kmalloc-caller's requested size, and we must propagate the originally
requested size to KFENCE via 'orig_size'. Without it, we would be unable
to detect out-of-bounds accesses for objects placed at the end of a
KFENCE object page whenever the requested size does not match the
kmalloc size class the allocation was bucketed into.

When KFENCE is disabled, there is no additional overhead, since the
slab_alloc*() functions are __always_inline.

Reviewed-by: Dmitry Vyukov <dvyukov@google.com>
Co-developed-by: Marco Elver <elver@google.com>
Signed-off-by: Marco Elver <elver@google.com>
Signed-off-by: Alexander Potapenko <glider@google.com>
---
v3:
* Rewrite patch description to clarify need for 'orig_size' [reported by
  Christopher Lameter].
---
 mm/slub.c | 72 ++++++++++++++++++++++++++++++++++++++++---------------
 1 file changed, 53 insertions(+), 19 deletions(-)
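As a side note, the size-class problem motivating 'orig_size' can be
demonstrated with a minimal standalone C sketch. This is not kernel
code: size_class() below is a simplified power-of-two stand-in for the
real kmalloc size classes (which also include 96 and 192), and the
offset check only models what KFENCE can conclude once it knows the
originally requested size.

#include <stdio.h>

/* Hypothetical size-class rounding; real kmalloc buckets differ. */
static size_t size_class(size_t size)
{
	size_t class = 8;

	while (class < size)
		class *= 2;
	return class;
}

int main(void)
{
	size_t orig_size = 272;			/* caller's request */
	size_t bucket = size_class(orig_size);	/* -> kmalloc-512 */
	size_t offset = 300;			/* an out-of-bounds access */

	printf("kmalloc(%zu) is served from kmalloc-%zu\n", orig_size, bucket);
	/*
	 * Only the original size reveals the bug: offset 300 lies inside
	 * the 512-byte bucket, but past the 272 bytes actually requested.
	 */
	printf("access at offset %zu: %s\n", offset,
	       offset >= orig_size ? "out-of-bounds" : "in-bounds");
	return 0;
}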
diff --git a/mm/slub.c b/mm/slub.c
index d4177aecedf6..5c5a13a7857c 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -27,6 +27,7 @@
 #include <linux/ctype.h>
 #include <linux/debugobjects.h>
 #include <linux/kallsyms.h>
+#include <linux/kfence.h>
 #include <linux/memory.h>
 #include <linux/math64.h>
 #include <linux/fault-inject.h>
@@ -1557,6 +1558,11 @@ static inline bool slab_free_freelist_hook(struct kmem_cache *s,
 	void *old_tail = *tail ? *tail : *head;
 	int rsize;
 
+	if (is_kfence_address(next)) {
+		slab_free_hook(s, next);
+		return true;
+	}
+
 	/* Head and tail of the reconstructed freelist */
 	*head = NULL;
 	*tail = NULL;
@@ -2660,7 +2666,8 @@ static inline void *get_freelist(struct kmem_cache *s, struct page *page)
  * already disabled (which is the case for bulk allocation).
  */
 static void *___slab_alloc(struct kmem_cache *s, gfp_t gfpflags, int node,
-			  unsigned long addr, struct kmem_cache_cpu *c)
+			  unsigned long addr, struct kmem_cache_cpu *c,
+			  size_t orig_size)
 {
 	void *freelist;
 	struct page *page;
@@ -2763,7 +2770,8 @@ static void *___slab_alloc(struct kmem_cache *s, gfp_t gfpflags, int node,
  * cpu changes by refetching the per cpu area pointer.
  */
 static void *__slab_alloc(struct kmem_cache *s, gfp_t gfpflags, int node,
-			  unsigned long addr, struct kmem_cache_cpu *c)
+			  unsigned long addr, struct kmem_cache_cpu *c,
+			  size_t orig_size)
 {
 	void *p;
 	unsigned long flags;
@@ -2778,7 +2786,7 @@ static void *__slab_alloc(struct kmem_cache *s, gfp_t gfpflags, int node,
 	c = this_cpu_ptr(s->cpu_slab);
 #endif
 
-	p = ___slab_alloc(s, gfpflags, node, addr, c);
+	p = ___slab_alloc(s, gfpflags, node, addr, c, orig_size);
 	local_irq_restore(flags);
 	return p;
 }
@@ -2805,7 +2813,7 @@ static __always_inline void maybe_wipe_obj_freeptr(struct kmem_cache *s,
  * Otherwise we can simply pick the next object from the lockless free list.
  */
 static __always_inline void *slab_alloc_node(struct kmem_cache *s,
-		gfp_t gfpflags, int node, unsigned long addr)
+		gfp_t gfpflags, int node, unsigned long addr, size_t orig_size)
 {
 	void *object;
 	struct kmem_cache_cpu *c;
@@ -2816,6 +2824,11 @@ static __always_inline void *slab_alloc_node(struct kmem_cache *s,
 	s = slab_pre_alloc_hook(s, &objcg, 1, gfpflags);
 	if (!s)
 		return NULL;
+
+	object = kfence_alloc(s, orig_size, gfpflags);
+	if (unlikely(object))
+		goto out;
+
 redo:
 	/*
 	 * Must read kmem_cache cpu data via this cpu ptr. Preemption is
@@ -2853,7 +2866,7 @@ static __always_inline void *slab_alloc_node(struct kmem_cache *s,
 	object = c->freelist;
 	page = c->page;
 	if (unlikely(!object || !node_match(page, node))) {
-		object = __slab_alloc(s, gfpflags, node, addr, c);
+		object = __slab_alloc(s, gfpflags, node, addr, c, orig_size);
 		stat(s, ALLOC_SLOWPATH);
 	} else {
 		void *next_object = get_freepointer_safe(s, object);
@@ -2889,20 +2902,21 @@ static __always_inline void *slab_alloc_node(struct kmem_cache *s,
 	if (unlikely(slab_want_init_on_alloc(gfpflags, s)) && object)
 		memset(object, 0, s->object_size);
 
+out:
 	slab_post_alloc_hook(s, objcg, gfpflags, 1, &object);
 
 	return object;
 }
 
 static __always_inline void *slab_alloc(struct kmem_cache *s,
-		gfp_t gfpflags, unsigned long addr)
+		gfp_t gfpflags, unsigned long addr, size_t orig_size)
 {
-	return slab_alloc_node(s, gfpflags, NUMA_NO_NODE, addr);
+	return slab_alloc_node(s, gfpflags, NUMA_NO_NODE, addr, orig_size);
 }
 
 void *kmem_cache_alloc(struct kmem_cache *s, gfp_t gfpflags)
 {
-	void *ret = slab_alloc(s, gfpflags, _RET_IP_);
+	void *ret = slab_alloc(s, gfpflags, _RET_IP_, s->object_size);
 
 	trace_kmem_cache_alloc(_RET_IP_, ret, s->object_size,
 				s->size, gfpflags);
@@ -2914,7 +2928,7 @@ EXPORT_SYMBOL(kmem_cache_alloc);
 #ifdef CONFIG_TRACING
 void *kmem_cache_alloc_trace(struct kmem_cache *s, gfp_t gfpflags, size_t size)
 {
-	void *ret = slab_alloc(s, gfpflags, _RET_IP_);
+	void *ret = slab_alloc(s, gfpflags, _RET_IP_, size);
 	trace_kmalloc(_RET_IP_, ret, size, s->size, gfpflags);
 	ret = kasan_kmalloc(s, ret, size, gfpflags);
 	return ret;
@@ -2925,7 +2939,7 @@ EXPORT_SYMBOL(kmem_cache_alloc_trace);
 #ifdef CONFIG_NUMA
 void *kmem_cache_alloc_node(struct kmem_cache *s, gfp_t gfpflags, int node)
 {
-	void *ret = slab_alloc_node(s, gfpflags, node, _RET_IP_);
+	void *ret = slab_alloc_node(s, gfpflags, node, _RET_IP_, s->object_size);
 
 	trace_kmem_cache_alloc_node(_RET_IP_, ret,
 				    s->object_size, s->size, gfpflags, node);
@@ -2939,7 +2953,7 @@ void *kmem_cache_alloc_node_trace(struct kmem_cache *s,
 				    gfp_t gfpflags,
 				    int node, size_t size)
 {
-	void *ret = slab_alloc_node(s, gfpflags, node, _RET_IP_);
+	void *ret = slab_alloc_node(s, gfpflags, node, _RET_IP_, size);
 
 	trace_kmalloc_node(_RET_IP_, ret,
 			   size, s->size, gfpflags, node);
@@ -2973,6 +2987,9 @@ static void __slab_free(struct kmem_cache *s, struct page *page,
 
 	stat(s, FREE_SLOWPATH);
 
+	if (kfence_free(head))
+		return;
+
 	if (kmem_cache_debug(s) &&
 	    !free_debug_processing(s, page, head, tail, cnt, addr))
 		return;
@@ -3216,6 +3233,13 @@ int build_detached_freelist(struct kmem_cache *s, size_t size,
 		df->s = cache_from_obj(s, object); /* Support for memcg */
 	}
 
+	if (is_kfence_address(object)) {
+		slab_free_hook(df->s, object);
+		WARN_ON(!kfence_free(object));
+		p[size] = NULL; /* mark object processed */
+		return size;
+	}
+
 	/* Start new detached freelist */
 	df->page = page;
 	set_freepointer(df->s, object, NULL);
@@ -3290,8 +3314,14 @@ int kmem_cache_alloc_bulk(struct kmem_cache *s, gfp_t flags, size_t size,
 	c = this_cpu_ptr(s->cpu_slab);
 
 	for (i = 0; i < size; i++) {
-		void *object = c->freelist;
+		void *object = kfence_alloc(s, s->object_size, flags);
 
+		if (unlikely(object)) {
+			p[i] = object;
+			continue;
+		}
+
+		object = c->freelist;
 		if (unlikely(!object)) {
 			/*
 			 * We may have removed an object from c->freelist using
@@ -3307,7 +3337,7 @@ int kmem_cache_alloc_bulk(struct kmem_cache *s, gfp_t flags, size_t size,
 			 * of re-populating per CPU c->freelist
 			 */
 			p[i] = ___slab_alloc(s, flags, NUMA_NO_NODE,
-					    _RET_IP_, c);
+					    _RET_IP_, c, size);
 			if (unlikely(!p[i]))
 				goto error;
 
@@ -3962,7 +3992,7 @@ void *__kmalloc(size_t size, gfp_t flags)
 	if (unlikely(ZERO_OR_NULL_PTR(s)))
 		return s;
 
-	ret = slab_alloc(s, flags, _RET_IP_);
+	ret = slab_alloc(s, flags, _RET_IP_, size);
 
 	trace_kmalloc(_RET_IP_, ret, size, s->size, flags);
 
@@ -4010,7 +4040,7 @@ void *__kmalloc_node(size_t size, gfp_t flags, int node)
 	if (unlikely(ZERO_OR_NULL_PTR(s)))
 		return s;
 
-	ret = slab_alloc_node(s, flags, node, _RET_IP_);
+	ret = slab_alloc_node(s, flags, node, _RET_IP_, size);
 
 	trace_kmalloc_node(_RET_IP_, ret, size, s->size, flags, node);
 
@@ -4036,6 +4066,7 @@ void __check_heap_object(const void *ptr, unsigned long n, struct page *page,
 	struct kmem_cache *s;
 	unsigned int offset;
 	size_t object_size;
+	bool is_kfence = is_kfence_address(ptr);
 
 	ptr = kasan_reset_tag(ptr);
 
@@ -4048,10 +4079,13 @@ void __check_heap_object(const void *ptr, unsigned long n, struct page *page,
 			       to_user, 0, n);
 
 	/* Find offset within object. */
-	offset = (ptr - page_address(page)) % s->size;
+	if (is_kfence)
+		offset = ptr - kfence_object_start(ptr);
+	else
+		offset = (ptr - page_address(page)) % s->size;
 
 	/* Adjust for redzone and reject if within the redzone. */
-	if (kmem_cache_debug_flags(s, SLAB_RED_ZONE)) {
+	if (!is_kfence && kmem_cache_debug_flags(s, SLAB_RED_ZONE)) {
 		if (offset < s->red_left_pad)
 			usercopy_abort("SLUB object in left red zone",
 				       s->name, to_user, offset, n);
@@ -4460,7 +4494,7 @@ void *__kmalloc_track_caller(size_t size, gfp_t gfpflags, unsigned long caller)
 	if (unlikely(ZERO_OR_NULL_PTR(s)))
 		return s;
 
-	ret = slab_alloc(s, gfpflags, caller);
+	ret = slab_alloc(s, gfpflags, caller, size);
 
 	/* Honor the call site pointer we received. */
 	trace_kmalloc(caller, ret, size, s->size, gfpflags);
@@ -4491,7 +4525,7 @@ void *__kmalloc_node_track_caller(size_t size, gfp_t gfpflags,
 	if (unlikely(ZERO_OR_NULL_PTR(s)))
 		return s;
 
-	ret = slab_alloc_node(s, gfpflags, node, caller);
+	ret = slab_alloc_node(s, gfpflags, node, caller, size);
 
 	/* Honor the call site pointer we received. */
 	trace_kmalloc_node(caller, ret, size, s->size, gfpflags, node);
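As a closing side note, the shape of the allocation-path hook wired in
above can be modeled with a small standalone sketch. This is not the
kernel implementation: kfence_alloc_model() and slab_alloc_model() are
hypothetical stand-ins. The only behavior relied on is that
kfence_alloc() returns NULL whenever it declines an allocation, and that
the disabled-KFENCE stub is a constant-NULL inline, which is why the
__always_inline slab_alloc*() fast path sees no overhead when the
feature is compiled out.

#include <stddef.h>
#include <stdlib.h>

/*
 * Stand-in for the CONFIG_KFENCE=n stub: a constant-NULL inline lets
 * the compiler eliminate the hook branch entirely.
 */
static inline void *kfence_alloc_model(size_t orig_size)
{
	(void)orig_size;
	return NULL;	/* declined: fall through to the normal path */
}

/* Shape of the hook this patch inserts into slab_alloc_node(). */
static void *slab_alloc_model(size_t orig_size)
{
	void *object = kfence_alloc_model(orig_size);

	if (object)			/* serviced from the KFENCE pool */
		return object;

	return malloc(orig_size);	/* normal allocator fast path */
}

int main(void)
{
	void *p = slab_alloc_model(272);

	free(p);
	return 0;
}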