From patchwork Mon Nov 19 17:26:34 2018
X-Patchwork-Submitter: Andrey Konovalov
X-Patchwork-Id: 10689095
From: Andrey Konovalov
To: Andrey Ryabinin, Alexander Potapenko, Dmitry Vyukov, Catalin Marinas,
 Will Deacon, Christoph Lameter, Andrew Morton, Mark Rutland,
 Nick Desaulniers, Marc Zyngier, Dave Martin, Ard Biesheuvel,
 Eric W. Biederman, Ingo Molnar, Paul Lawrence, Geert Uytterhoeven,
 Arnd Bergmann, Kirill A. Shutemov, Greg Kroah-Hartman, Kate Stewart,
 Mike Rapoport, kasan-dev@googlegroups.com, linux-doc@vger.kernel.org,
 linux-kernel@vger.kernel.org, linux-arm-kernel@lists.infradead.org,
 linux-sparse@vger.kernel.org, linux-mm@kvack.org,
 linux-kbuild@vger.kernel.org
Cc: Kostya Serebryany, Evgeniy Stepanov, Lee Smith, Ramana Radhakrishnan,
 Jacob Bramley, Ruben Ayrapetyan, Jann Horn, Mark Brand, Chintan Pandya,
 Vishwath Mohan, Andrey Konovalov
Subject: [PATCH v11 18/24] mm: move obj_to_index to include/linux/slab_def.h
Date: Mon, 19 Nov 2018 18:26:34 +0100
Message-Id: <10ca1e57fe2d5dc835e88fb9dc9ab6855ae618dc.1542648335.git.andreyknvl@google.com>
X-Mailing-List: linux-kbuild@vger.kernel.org

While with SLUB we can preassign tags for caches with constructors and
store them in the pointers kept on the freelist, SLAB doesn't allow
that, since its freelist is stored as an array of indexes, so there are
no pointers in which to store the tags. Instead we compute the tag
twice: once when a slab is created, before calling the constructor, and
then again each time an object is allocated with kmalloc. The tag is
computed simply by taking the lowest byte of the index that corresponds
to the object. However, in kasan_kmalloc we only have access to the
object's pointer, so we need a way to find out which index the object
corresponds to.

This patch moves obj_to_index from mm/slab.c to
include/linux/slab_def.h so that it can be reused by KASAN.
Acked-by: Christoph Lameter
Reviewed-by: Andrey Ryabinin
Reviewed-by: Dmitry Vyukov
Signed-off-by: Andrey Konovalov
---
 include/linux/slab_def.h | 13 +++++++++++++
 mm/slab.c                | 13 -------------
 2 files changed, 13 insertions(+), 13 deletions(-)

diff --git a/include/linux/slab_def.h b/include/linux/slab_def.h
index 3485c58cfd1c..9a5eafb7145b 100644
--- a/include/linux/slab_def.h
+++ b/include/linux/slab_def.h
@@ -104,4 +104,17 @@ static inline void *nearest_obj(struct kmem_cache *cache, struct page *page,
 	return object;
 }
 
+/*
+ * We want to avoid an expensive divide : (offset / cache->size)
+ * Using the fact that size is a constant for a particular cache,
+ * we can replace (offset / cache->size) by
+ * reciprocal_divide(offset, cache->reciprocal_buffer_size)
+ */
+static inline unsigned int obj_to_index(const struct kmem_cache *cache,
+					const struct page *page, void *obj)
+{
+	u32 offset = (obj - page->s_mem);
+	return reciprocal_divide(offset, cache->reciprocal_buffer_size);
+}
+
 #endif /* _LINUX_SLAB_DEF_H */

diff --git a/mm/slab.c b/mm/slab.c
index 27859fb39889..d2f827316dfc 100644
--- a/mm/slab.c
+++ b/mm/slab.c
@@ -406,19 +406,6 @@ static inline void *index_to_obj(struct kmem_cache *cache, struct page *page,
 	return page->s_mem + cache->size * idx;
 }
 
-/*
- * We want to avoid an expensive divide : (offset / cache->size)
- * Using the fact that size is a constant for a particular cache,
- * we can replace (offset / cache->size) by
- * reciprocal_divide(offset, cache->reciprocal_buffer_size)
- */
-static inline unsigned int obj_to_index(const struct kmem_cache *cache,
-					const struct page *page, void *obj)
-{
-	u32 offset = (obj - page->s_mem);
-	return reciprocal_divide(offset, cache->reciprocal_buffer_size);
-}
-
 #define BOOT_CPUCACHE_ENTRIES 1
 /* internal cache of cache description objs */
 static struct kmem_cache kmem_cache_boot = {