From patchwork Fri May 18 19:45:17 2018
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Matthew Wilcox
X-Patchwork-Id: 10412007
From: Matthew Wilcox
To: linux-mm@kvack.org
Cc: Matthew Wilcox, Andrew Morton, "Kirill A. Shutemov", Christoph Lameter,
 Lai Jiangshan, Pekka Enberg, Vlastimil Babka, Dave Hansen, Jérôme Glisse
Subject: [PATCH v6 15/17] slub: Remove kmem_cache->reserved
Date: Fri, 18 May 2018 12:45:17 -0700
Message-Id: <20180518194519.3820-16-willy@infradead.org>
X-Mailer: git-send-email 2.14.3
In-Reply-To: <20180518194519.3820-1-willy@infradead.org>
References: <20180518194519.3820-1-willy@infradead.org>

From: Matthew Wilcox

The reserved field was only used for embedding an rcu_head in the data
structure. With the previous commit, we no longer need it. That lets us
remove the 'reserved' argument to a lot of functions.
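To make the simplification concrete, here is a minimal user-space sketch
(not part of the patch; the 4 KiB page size, order-1 slab, 256-byte object
size and 16 reserved bytes are assumed values for illustration only) of
what dropping the reserved bytes does to the objects-per-slab calculation:

#include <stdio.h>

/* Before this patch: 'reserved' bytes at the end of the slab were
 * excluded from the space available for objects. */
static unsigned int order_objects_old(unsigned int order, unsigned int size,
				      unsigned int reserved)
{
	return ((4096u << order) - reserved) / size;
}

/* After this patch: the whole slab is available for objects. */
static unsigned int order_objects_new(unsigned int order, unsigned int size)
{
	return (4096u << order) / size;
}

int main(void)
{
	printf("old: %u objects\n", order_objects_old(1, 256, 16)); /* 31 */
	printf("new: %u objects\n", order_objects_new(1, 256));     /* 32 */
	return 0;
}

In this example the order-1 (8 KiB) slab gains one extra 256-byte object
once nothing is reserved at its end.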
Signed-off-by: Matthew Wilcox
Acked-by: Christoph Lameter
---
 include/linux/slub_def.h |  1 -
 mm/slub.c                | 41 ++++++++++++++++++++--------------------
 2 files changed, 20 insertions(+), 22 deletions(-)

diff --git a/include/linux/slub_def.h b/include/linux/slub_def.h
index 3773e26c08c1..09fa2c6f0e68 100644
--- a/include/linux/slub_def.h
+++ b/include/linux/slub_def.h
@@ -101,7 +101,6 @@ struct kmem_cache {
 	void (*ctor)(void *);
 	unsigned int inuse;		/* Offset to metadata */
 	unsigned int align;		/* Alignment */
-	unsigned int reserved;		/* Reserved bytes at the end of slabs */
 	unsigned int red_left_pad;	/* Left redzone padding size */
 	const char *name;	/* Name (only for display!) */
 	struct list_head list;	/* List of slab caches */
diff --git a/mm/slub.c b/mm/slub.c
index 8e2407f69855..33a811168fa9 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -316,16 +316,16 @@ static inline unsigned int slab_index(void *p, struct kmem_cache *s, void *addr)
 	return (p - addr) / s->size;
 }
 
-static inline unsigned int order_objects(unsigned int order, unsigned int size, unsigned int reserved)
+static inline unsigned int order_objects(unsigned int order, unsigned int size)
 {
-	return (((unsigned int)PAGE_SIZE << order) - reserved) / size;
+	return ((unsigned int)PAGE_SIZE << order) / size;
 }
 
 static inline struct kmem_cache_order_objects oo_make(unsigned int order,
-		unsigned int size, unsigned int reserved)
+		unsigned int size)
 {
 	struct kmem_cache_order_objects x = {
-		(order << OO_SHIFT) + order_objects(order, size, reserved)
+		(order << OO_SHIFT) + order_objects(order, size)
 	};
 
 	return x;
@@ -832,7 +832,7 @@ static int slab_pad_check(struct kmem_cache *s, struct page *page)
 		return 1;
 
 	start = page_address(page);
-	length = (PAGE_SIZE << compound_order(page)) - s->reserved;
+	length = PAGE_SIZE << compound_order(page);
 	end = start + length;
 	remainder = length % s->size;
 	if (!remainder)
@@ -921,7 +921,7 @@ static int check_slab(struct kmem_cache *s, struct page *page)
 		return 0;
 	}
 
-	maxobj = order_objects(compound_order(page), s->size, s->reserved);
+	maxobj = order_objects(compound_order(page), s->size);
 	if (page->objects > maxobj) {
 		slab_err(s, page, "objects %u > max %u",
 			page->objects, maxobj);
@@ -971,7 +971,7 @@ static int on_freelist(struct kmem_cache *s, struct page *page, void *search)
 		nr++;
 	}
 
-	max_objects = order_objects(compound_order(page), s->size, s->reserved);
+	max_objects = order_objects(compound_order(page), s->size);
 	if (max_objects > MAX_OBJS_PER_PAGE)
 		max_objects = MAX_OBJS_PER_PAGE;
 
@@ -3188,21 +3188,21 @@ static unsigned int slub_min_objects;
  */
 static inline unsigned int slab_order(unsigned int size,
 		unsigned int min_objects, unsigned int max_order,
-		unsigned int fract_leftover, unsigned int reserved)
+		unsigned int fract_leftover)
 {
 	unsigned int min_order = slub_min_order;
 	unsigned int order;
 
-	if (order_objects(min_order, size, reserved) > MAX_OBJS_PER_PAGE)
+	if (order_objects(min_order, size) > MAX_OBJS_PER_PAGE)
 		return get_order(size * MAX_OBJS_PER_PAGE) - 1;
 
-	for (order = max(min_order, (unsigned int)get_order(min_objects * size + reserved));
+	for (order = max(min_order, (unsigned int)get_order(min_objects * size));
 			order <= max_order; order++) {
 
 		unsigned int slab_size = (unsigned int)PAGE_SIZE << order;
 		unsigned int rem;
 
-		rem = (slab_size - reserved) % size;
+		rem = slab_size % size;
 
 		if (rem <= slab_size / fract_leftover)
 			break;
@@ -3211,7 +3211,7 @@ static inline unsigned int slab_order(unsigned int size,
 	return order;
 }
 
-static inline int calculate_order(unsigned int size, unsigned int reserved)
+static inline int calculate_order(unsigned int size)
 {
 	unsigned int order;
 	unsigned int min_objects;
@@ -3228,7 +3228,7 @@ static inline int calculate_order(unsigned int size, unsigned int reserved)
 	min_objects = slub_min_objects;
 	if (!min_objects)
 		min_objects = 4 * (fls(nr_cpu_ids) + 1);
-	max_objects = order_objects(slub_max_order, size, reserved);
+	max_objects = order_objects(slub_max_order, size);
 	min_objects = min(min_objects, max_objects);
 
 	while (min_objects > 1) {
@@ -3237,7 +3237,7 @@ static inline int calculate_order(unsigned int size, unsigned int reserved)
 		fraction = 16;
 		while (fraction >= 4) {
 			order = slab_order(size, min_objects,
-					slub_max_order, fraction, reserved);
+					slub_max_order, fraction);
 			if (order <= slub_max_order)
 				return order;
 			fraction /= 2;
@@ -3249,14 +3249,14 @@ static inline int calculate_order(unsigned int size, unsigned int reserved)
 	 * We were unable to place multiple objects in a slab. Now
 	 * lets see if we can place a single object there.
 	 */
-	order = slab_order(size, 1, slub_max_order, 1, reserved);
+	order = slab_order(size, 1, slub_max_order, 1);
 	if (order <= slub_max_order)
 		return order;
 
 	/*
 	 * Doh this slab cannot be placed using slub_max_order.
 	 */
-	order = slab_order(size, 1, MAX_ORDER, 1, reserved);
+	order = slab_order(size, 1, MAX_ORDER, 1);
 	if (order < MAX_ORDER)
 		return order;
 	return -ENOSYS;
@@ -3524,7 +3524,7 @@ static int calculate_sizes(struct kmem_cache *s, int forced_order)
 	if (forced_order >= 0)
 		order = forced_order;
 	else
-		order = calculate_order(size, s->reserved);
+		order = calculate_order(size);
 
 	if ((int)order < 0)
 		return 0;
@@ -3542,8 +3542,8 @@ static int calculate_sizes(struct kmem_cache *s, int forced_order)
 	/*
 	 * Determine the number of objects per slab
 	 */
-	s->oo = oo_make(order, size, s->reserved);
-	s->min = oo_make(get_order(size), size, s->reserved);
+	s->oo = oo_make(order, size);
+	s->min = oo_make(get_order(size), size);
 	if (oo_objects(s->oo) > oo_objects(s->max))
 		s->max = s->oo;
 
@@ -3553,7 +3553,6 @@ static int calculate_sizes(struct kmem_cache *s, int forced_order)
 static int kmem_cache_open(struct kmem_cache *s, slab_flags_t flags)
 {
 	s->flags = kmem_cache_flags(s->size, flags, s->name, s->ctor);
-	s->reserved = 0;
#ifdef CONFIG_SLAB_FREELIST_HARDENED
 	s->random = get_random_long();
 #endif
@@ -5097,7 +5096,7 @@ SLAB_ATTR_RO(destroy_by_rcu);
 
 static ssize_t reserved_show(struct kmem_cache *s, char *buf)
 {
-	return sprintf(buf, "%u\n", s->reserved);
+	return sprintf(buf, "0\n");
 }
 SLAB_ATTR_RO(reserved);
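For readers following the order-selection hunks above, here is a
self-contained sketch of how calculate_order() walks progressively looser
leftover fractions now that no reserved bytes enter the arithmetic. This is
an illustration, not the kernel code: the 4 KiB page size, slub_max_order
of 3, MAX_ORDER of 11 and the fixed min_objects of 8 (the kernel derives it
from the CPU count) are all assumed values.

#include <stdio.h>

#define PAGE_SIZE	4096u
#define MAX_ORDER	11u		/* assumed for illustration */

static unsigned int slub_max_order = 3;	/* assumed default */

/* Lowest order (up to max_order) that holds min_objects and whose wasted
 * remainder is at most slab_size / fract_leftover; nothing is reserved. */
static unsigned int slab_order(unsigned int size, unsigned int min_objects,
			       unsigned int max_order, unsigned int fract_leftover)
{
	unsigned int order;

	for (order = 0; order <= max_order; order++) {
		unsigned int slab_size = PAGE_SIZE << order;

		if (slab_size / size < min_objects)
			continue;	/* too small for min_objects */
		if (slab_size % size <= slab_size / fract_leftover)
			break;		/* acceptable waste: take it */
	}
	return order;	/* may exceed max_order; caller checks */
}

static int calculate_order(unsigned int size)
{
	unsigned int min_objects = 8;	/* assumed; see note above */

	while (min_objects > 1) {
		unsigned int fraction;

		for (fraction = 16; fraction >= 4; fraction /= 2) {
			unsigned int order = slab_order(size, min_objects,
							slub_max_order, fraction);
			if (order <= slub_max_order)
				return (int)order;
		}
		min_objects--;
	}
	/* Fall back to placing a single object per slab. */
	return (int)slab_order(size, 1, MAX_ORDER, 1);
}

int main(void)
{
	unsigned int size = 696;	/* example object size */

	printf("object size %u -> slab order %d\n", size, calculate_order(size));
	return 0;
}

Note also that the last hunk keeps the sysfs 'reserved' attribute but
hard-codes its output to "0", so existing tooling that reads
/sys/kernel/slab/<cache>/reserved continues to work unchanged.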