From patchwork Wed Sep 19 18:54:49 2018
X-Patchwork-Submitter: Andrey Konovalov
X-Patchwork-Id: 10606315
From: Andrey Konovalov
To: Andrey Ryabinin, Alexander Potapenko, Dmitry Vyukov,
    Catalin Marinas, Will Deacon,
    Christoph Lameter, Andrew Morton, Mark Rutland, Nick Desaulniers,
    Marc Zyngier, Dave Martin, Ard Biesheuvel, Eric W. Biederman,
    Ingo Molnar, Paul Lawrence, Geert Uytterhoeven, Arnd Bergmann,
    Kirill A. Shutemov, Greg Kroah-Hartman, Kate Stewart, Mike Rapoport,
    kasan-dev@googlegroups.com, linux-doc@vger.kernel.org,
    linux-kernel@vger.kernel.org, linux-arm-kernel@lists.infradead.org,
    linux-sparse@vger.kernel.org, linux-mm@kvack.org,
    linux-kbuild@vger.kernel.org
Cc: Kostya Serebryany, Evgeniy Stepanov, Lee Smith,
    Ramana Radhakrishnan, Jacob Bramley, Ruben Ayrapetyan, Jann Horn,
    Mark Brand, Chintan Pandya, Vishwath Mohan, Andrey Konovalov
Subject: [PATCH v8 10/20] mm: move obj_to_index to include/linux/slab_def.h
Date: Wed, 19 Sep 2018 20:54:49 +0200
X-Mailer: git-send-email 2.19.0.397.gdd90340f6a-goog

With SLUB we can preassign tags for caches with constructors and store
them in the freelist pointers. SLAB does not allow that, since its
freelist is stored as an array of indexes, so there are no pointers in
which to store the tags.

Instead, we compute the tag twice: once when a slab is created, before
calling the constructor, and again each time an object is allocated
with kmalloc. The tag is computed simply by taking the lowest byte of
the index that corresponds to the object. However, in kasan_kmalloc we
only have access to the object's pointer, so we need a way to find out
which index the object corresponds to.

This patch moves obj_to_index from mm/slab.c to
include/linux/slab_def.h so it can be reused by KASAN.

Signed-off-by: Andrey Konovalov
---
 include/linux/slab_def.h | 13 +++++++++++++
 mm/slab.c                | 13 -------------
 2 files changed, 13 insertions(+), 13 deletions(-)

diff --git a/include/linux/slab_def.h b/include/linux/slab_def.h
index 3485c58cfd1c..9a5eafb7145b 100644
--- a/include/linux/slab_def.h
+++ b/include/linux/slab_def.h
@@ -104,4 +104,17 @@ static inline void *nearest_obj(struct kmem_cache *cache, struct page *page,
 	return object;
 }
 
+/*
+ * We want to avoid an expensive divide : (offset / cache->size)
+ * Using the fact that size is a constant for a particular cache,
+ * we can replace (offset / cache->size) by
+ * reciprocal_divide(offset, cache->reciprocal_buffer_size)
+ */
+static inline unsigned int obj_to_index(const struct kmem_cache *cache,
+					const struct page *page, void *obj)
+{
+	u32 offset = (obj - page->s_mem);
+	return reciprocal_divide(offset, cache->reciprocal_buffer_size);
+}
+
 #endif	/* _LINUX_SLAB_DEF_H */

diff --git a/mm/slab.c b/mm/slab.c
index fe0ddf08aa2c..6d8de7630944 100644
--- a/mm/slab.c
+++ b/mm/slab.c
@@ -406,19 +406,6 @@ static inline void *index_to_obj(struct kmem_cache *cache, struct page *page,
 	return page->s_mem + cache->size * idx;
 }
 
-/*
- * We want to avoid an expensive divide : (offset / cache->size)
- * Using the fact that size is a constant for a particular cache,
- * we can replace (offset / cache->size) by
- * reciprocal_divide(offset, cache->reciprocal_buffer_size)
- */
-static inline unsigned int obj_to_index(const struct kmem_cache *cache,
-					const struct page *page, void *obj)
-{
-	u32 offset = (obj - page->s_mem);
-	return reciprocal_divide(offset, cache->reciprocal_buffer_size);
-}
-
 #define BOOT_CPUCACHE_ENTRIES	1
 /* internal cache of cache description objs */
 static struct kmem_cache kmem_cache_boot = {
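
For context, a minimal sketch (not part of this patch; the helper name
slab_tag_for_obj is hypothetical) of how the relocated obj_to_index
lets KASAN turn an object pointer back into a tag, following the
lowest-byte-of-index rule the commit message describes:

#include <linux/types.h>
#include <linux/slab_def.h>

/*
 * Hypothetical helper, shown for illustration only: with obj_to_index
 * visible outside mm/slab.c, KASAN can map an object pointer back to
 * its index and take the lowest byte as the tag, so the tag assigned
 * before the constructor ran and the one recomputed in kasan_kmalloc
 * agree for the same object.
 */
static inline u8 slab_tag_for_obj(const struct kmem_cache *cache,
				  const struct page *page, void *obj)
{
	return (u8)obj_to_index(cache, page, obj);
}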
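
The comment carried along with the helper refers to the
reciprocal-divide trick: because the object size is fixed per cache, a
reciprocal is precomputed once and every per-object division becomes a
multiply and a shift. Below is a simplified userspace model of the
idea, exact whenever offset * size < 2^32 (which covers offsets within
a slab); the in-kernel <linux/reciprocal_div.h> implementation is a
more careful variant that is exact for all 32-bit inputs:

#include <stdint.h>
#include <stdio.h>
#include <assert.h>

/* Precompute R = ceil(2^32 / d) once for a fixed divisor d. */
static uint32_t model_reciprocal_value(uint32_t d)
{
	return (uint32_t)(((1ULL << 32) + d - 1) / d);
}

/* Then a / d becomes a 64-bit multiply and a shift. */
static uint32_t model_reciprocal_divide(uint32_t a, uint32_t R)
{
	return (uint32_t)(((uint64_t)a * R) >> 32);
}

int main(void)
{
	/* Model a cache of 192-byte objects: index = offset / size. */
	uint32_t size = 192;
	uint32_t R = model_reciprocal_value(size);
	uint32_t offset;

	for (offset = 0; offset < (1u << 16); offset++)
		assert(model_reciprocal_divide(offset, R) == offset / size);
	printf("reciprocal divide matches plain divide\n");
	return 0;
}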