From patchwork Fri Sep 21 15:13:31 2018
X-Patchwork-Submitter: Andrey Konovalov
X-Patchwork-Id: 10610423
From: Andrey Konovalov
To: Andrey Ryabinin, Alexander Potapenko, Dmitry Vyukov, Catalin Marinas, Will Deacon,
    Christoph Lameter, Andrew Morton, Mark Rutland, Nick Desaulniers,
    Marc Zyngier, Dave Martin, Ard Biesheuvel, Eric W. Biederman,
    Ingo Molnar, Paul Lawrence, Geert Uytterhoeven, Arnd Bergmann,
    Kirill A. Shutemov, Greg Kroah-Hartman, Kate Stewart, Mike Rapoport,
    kasan-dev@googlegroups.com, linux-doc@vger.kernel.org,
    linux-kernel@vger.kernel.org, linux-arm-kernel@lists.infradead.org,
    linux-sparse@vger.kernel.org, linux-mm@kvack.org, linux-kbuild@vger.kernel.org
Cc: Kostya Serebryany, Evgeniy Stepanov, Lee Smith, Ramana Radhakrishnan,
    Jacob Bramley, Ruben Ayrapetyan, Jann Horn, Mark Brand, Chintan Pandya,
    Vishwath Mohan, Andrey Konovalov
Subject: [PATCH v9 09/20] kasan: preassign tags to objects with ctors or SLAB_TYPESAFE_BY_RCU
Date: Fri, 21 Sep 2018 17:13:31 +0200
Message-Id: <9ea379b38a763adeae0e43638a9769c96eea767f.1537542735.git.andreyknvl@google.com>

An object constructor can initialize pointers within this object based on
the address of the object. Since the object address might be tagged, we
need to assign a tag before calling the constructor.

The implemented approach is to assign tags to objects with constructors
when a slab is allocated and to call constructors once, as usual. The
downside is that such an object will always have the same tag when it is
reallocated, so we won't catch use-after-frees on it.

Also preassign tags for objects from SLAB_TYPESAFE_BY_RCU caches, since
they can be validly accessed after having been freed.

Signed-off-by: Andrey Konovalov
---
 mm/slab.c |  2 +-
 mm/slub.c | 24 ++++++++++++++----------
 2 files changed, 15 insertions(+), 11 deletions(-)

diff --git a/mm/slab.c b/mm/slab.c
index 6fdca9ec2ea4..fe0ddf08aa2c 100644
--- a/mm/slab.c
+++ b/mm/slab.c
@@ -2574,7 +2574,7 @@ static void cache_init_objs(struct kmem_cache *cachep,
 
 	for (i = 0; i < cachep->num; i++) {
 		objp = index_to_obj(cachep, page, i);
-		kasan_init_slab_obj(cachep, objp);
+		objp = kasan_init_slab_obj(cachep, objp);
 
 		/* constructor could break poison info */
 		if (DEBUG == 0 && cachep->ctor) {
diff --git a/mm/slub.c b/mm/slub.c
index c4d5f4442ff1..75fc76e42a1e 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -1413,16 +1413,17 @@ static inline bool slab_free_freelist_hook(struct kmem_cache *s,
 #endif
 }
 
-static void setup_object(struct kmem_cache *s, struct page *page,
+static void *setup_object(struct kmem_cache *s, struct page *page,
 				void *object)
 {
 	setup_object_debug(s, page, object);
-	kasan_init_slab_obj(s, object);
+	object = kasan_init_slab_obj(s, object);
 	if (unlikely(s->ctor)) {
 		kasan_unpoison_object_data(s, object);
 		s->ctor(object);
 		kasan_poison_object_data(s, object);
 	}
+	return object;
 }
 
 /*
@@ -1530,16 +1531,16 @@ static bool shuffle_freelist(struct kmem_cache *s, struct page *page)
 	/* First entry is used as the base of the freelist */
 	cur = next_freelist_entry(s, page, &pos, start, page_limit,
 				freelist_count);
+	cur = setup_object(s, page, cur);
 	page->freelist = cur;
 
 	for (idx = 1; idx < page->objects; idx++) {
-		setup_object(s, page, cur);
 		next = next_freelist_entry(s, page, &pos, start, page_limit,
 					freelist_count);
+		next = setup_object(s, page, next);
 		set_freepointer(s, cur, next);
 		cur = next;
 	}
-	setup_object(s, page, cur);
 	set_freepointer(s, cur, NULL);
 
 	return true;
@@ -1561,7 +1562,7 @@ static struct page *allocate_slab(struct kmem_cache *s, gfp_t flags, int node)
 	struct page *page;
 	struct kmem_cache_order_objects oo = s->oo;
 	gfp_t alloc_gfp;
-	void *start, *p;
+	void *start, *p, *next;
 	int idx, order;
 	bool shuffle;
 
@@ -1613,13 +1614,16 @@ static struct page *allocate_slab(struct kmem_cache *s, gfp_t flags, int node)
 
 	if (!shuffle) {
 		for_each_object_idx(p, idx, s, start, page->objects) {
-			setup_object(s, page, p);
-			if (likely(idx < page->objects))
-				set_freepointer(s, p, p + s->size);
-			else
+			if (likely(idx < page->objects)) {
+				next = p + s->size;
+				next = setup_object(s, page, next);
+				set_freepointer(s, p, next);
+			} else
 				set_freepointer(s, p, NULL);
 		}
-		page->freelist = fixup_red_left(s, start);
+		start = fixup_red_left(s, start);
+		start = setup_object(s, page, start);
+		page->freelist = start;
 	}
 
 	page->inuse = page->objects;
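
For readers unfamiliar with tag-based KASAN, the userspace sketch below
illustrates why the tag has to exist before the constructor runs and why
setup_object() now returns the pointer: a constructor may store the object's
own address inside the object, and that stored address only matches the
pointer later handed out if it already carries the tag. This is not kernel
code; set_tag(), untag(), node_ctor() and the fixed 0xab tag are made-up
helpers standing in for kasan_init_slab_obj() and a random tag. The kernel
dereferences tagged pointers directly via arm64 top-byte-ignore; the sketch
strips the tag before each access to stay portable.

/*
 * Illustrative sketch only: models the idea that the tag must be assigned
 * before the constructor runs.  All helpers here are invented for the
 * example, not kernel APIs.
 */
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

#define TAG_SHIFT 56
#define TAG_MASK  (0xffUL << TAG_SHIFT)

/* Pack a tag into the (otherwise unused) top byte of a pointer. */
static void *set_tag(const void *ptr, uint8_t tag)
{
	return (void *)(((uintptr_t)ptr & ~TAG_MASK) | ((uintptr_t)tag << TAG_SHIFT));
}

/* Strip the tag so the pointer can be dereferenced on any hardware. */
static void *untag(const void *ptr)
{
	return (void *)((uintptr_t)ptr & ~TAG_MASK);
}

struct node {
	struct node *self;	/* constructor stores the object's own address */
};

static void node_ctor(void *object)
{
	struct node *n = untag(object);

	n->self = object;	/* bakes the (tagged) address into the object */
}

/* Analogue of the patched setup_object(): tag first, run the ctor on the
 * tagged address, and return the new pointer so callers propagate it. */
static void *setup_object(void *object)
{
	object = set_tag(object, 0xab);
	node_ctor(object);
	return object;
}

int main(void)
{
	void *raw = malloc(sizeof(struct node));
	struct node *obj = setup_object(raw);

	/* The stored self-pointer matches the handed-out (tagged) pointer
	 * only because the tag was assigned before the constructor ran. */
	printf("self-pointer matches: %s\n",
	       ((struct node *)untag(obj))->self == obj ? "yes" : "no");

	free(untag(obj));
	return 0;
}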