From patchwork Fri May 18 19:45:11 2018
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Matthew Wilcox
X-Patchwork-Id: 10411993
From: Matthew Wilcox
To: linux-mm@kvack.org
Cc: Matthew Wilcox, Andrew Morton, "Kirill A. Shutemov", Christoph Lameter,
	Lai Jiangshan, Pekka Enberg, Vlastimil Babka, Dave Hansen,
	Jérôme Glisse
Subject: [PATCH v6 09/17] mm: Move lru union within struct page
Date: Fri, 18 May 2018 12:45:11 -0700
Message-Id: <20180518194519.3820-10-willy@infradead.org>
X-Mailer: git-send-email 2.14.3
In-Reply-To: <20180518194519.3820-1-willy@infradead.org>
References: <20180518194519.3820-1-willy@infradead.org>

From: Matthew Wilcox

Since the LRU is two words, this does not affect the double-word
alignment of SLUB's freelist.

Signed-off-by: Matthew Wilcox
Acked-by: Vlastimil Babka
Acked-by: Kirill A. Shutemov
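(Aside, not part of the patch: the double-word argument above can be checked
with a stand-alone sketch. The structs below are simplified stand-ins with
made-up names, not the real struct page; they only show that inserting a
two-word union ahead of the freelist shifts it by exactly one double word, so
an already double-word-aligned freelist stays double-word aligned.)

/* Illustration only: simplified stand-ins, not the kernel's struct page. */
#include <stddef.h>

struct two_words { void *next, *prev; };  /* same size as struct list_head */

struct layout_before {            /* lru union after the SLUB fields */
        unsigned long flags;
        unsigned long other;
        void *freelist;           /* needs double-word alignment */
        unsigned long counters;
        struct two_words lru;
};

struct layout_after {             /* lru union moved up front */
        unsigned long flags;
        unsigned long other;
        struct two_words lru;
        void *freelist;
        unsigned long counters;
};

_Static_assert(offsetof(struct layout_before, freelist) % (2 * sizeof(void *)) == 0,
               "freelist double-word aligned before the move");
_Static_assert(offsetof(struct layout_after, freelist) % (2 * sizeof(void *)) == 0,
               "freelist still double-word aligned after the move");

Moving any union whose size is a multiple of 2 * sizeof(void *) leaves the
alignment of everything after it unchanged, which is the point of the remark
in the commit message.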
---
 include/linux/mm_types.h | 102 +++++++++++++++++++--------------------
 mm/slub.c                |   8 +--
 2 files changed, 55 insertions(+), 55 deletions(-)

diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
index 629a7b568ed7..b6a3948195d3 100644
--- a/include/linux/mm_types.h
+++ b/include/linux/mm_types.h
@@ -72,6 +72,57 @@ struct hmm;
 struct page {
 	unsigned long flags;		/* Atomic flags, some possibly
 					 * updated asynchronously */
+	/*
+	 * WARNING: bit 0 of the first word encode PageTail(). That means
+	 * the rest users of the storage space MUST NOT use the bit to
+	 * avoid collision and false-positive PageTail().
+	 */
+	union {
+		struct list_head lru;	/* Pageout list, eg. active_list
+					 * protected by zone_lru_lock !
+					 * Can be used as a generic list
+					 * by the page owner.
+					 */
+		struct dev_pagemap *pgmap; /* ZONE_DEVICE pages are never on an
+					 * lru or handled by a slab
+					 * allocator, this points to the
+					 * hosting device page map.
+					 */
+		struct {		/* slub per cpu partial pages */
+			struct page *next;	/* Next partial slab */
+#ifdef CONFIG_64BIT
+			int pages;	/* Nr of partial slabs left */
+			int pobjects;	/* Approximate # of objects */
+#else
+			short int pages;
+			short int pobjects;
+#endif
+		};
+
+		struct rcu_head rcu_head;	/* Used by SLAB
+						 * when destroying via RCU
+						 */
+		/* Tail pages of compound page */
+		struct {
+			unsigned long compound_head; /* If bit zero is set */
+
+			/* First tail page only */
+			unsigned char compound_dtor;
+			unsigned char compound_order;
+			/* two/six bytes available here */
+		};
+
+#if defined(CONFIG_TRANSPARENT_HUGEPAGE) && USE_SPLIT_PMD_PTLOCKS
+		struct {
+			unsigned long __pad;	/* do not overlay pmd_huge_pte
+						 * with compound_head to avoid
+						 * possible bit 0 collision.
+						 */
+			pgtable_t pmd_huge_pte; /* protected by page->ptl */
+		};
+#endif
+	};
+
 	/* Three words (12/24 bytes) are available in this union. */
 	union {
 		struct {	/* Page cache and anonymous pages */
@@ -135,57 +186,6 @@ struct page {
 	/* Usage count. *DO NOT USE DIRECTLY*. See page_ref.h */
 	atomic_t _refcount;
 
-	/*
-	 * WARNING: bit 0 of the first word encode PageTail(). That means
-	 * the rest users of the storage space MUST NOT use the bit to
-	 * avoid collision and false-positive PageTail().
-	 */
-	union {
-		struct list_head lru;	/* Pageout list, eg. active_list
-					 * protected by zone_lru_lock !
-					 * Can be used as a generic list
-					 * by the page owner.
-					 */
-		struct dev_pagemap *pgmap; /* ZONE_DEVICE pages are never on an
-					 * lru or handled by a slab
-					 * allocator, this points to the
-					 * hosting device page map.
-					 */
-		struct {		/* slub per cpu partial pages */
-			struct page *next;	/* Next partial slab */
-#ifdef CONFIG_64BIT
-			int pages;	/* Nr of partial slabs left */
-			int pobjects;	/* Approximate # of objects */
-#else
-			short int pages;
-			short int pobjects;
-#endif
-		};
-
-		struct rcu_head rcu_head;	/* Used by SLAB
-						 * when destroying via RCU
-						 */
-		/* Tail pages of compound page */
-		struct {
-			unsigned long compound_head; /* If bit zero is set */
-
-			/* First tail page only */
-			unsigned char compound_dtor;
-			unsigned char compound_order;
-			/* two/six bytes available here */
-		};
-
-#if defined(CONFIG_TRANSPARENT_HUGEPAGE) && USE_SPLIT_PMD_PTLOCKS
-		struct {
-			unsigned long __pad;	/* do not overlay pmd_huge_pte
-						 * with compound_head to avoid
-						 * possible bit 0 collision.
-						 */
-			pgtable_t pmd_huge_pte; /* protected by page->ptl */
-		};
-#endif
-	};
-
 #ifdef CONFIG_MEMCG
 	struct mem_cgroup *mem_cgroup;
 #endif
diff --git a/mm/slub.c b/mm/slub.c
index 05ca612a5fe6..57a20f995220 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -52,11 +52,11 @@
  * and to synchronize major metadata changes to slab cache structures.
  *
  * The slab_lock is only used for debugging and on arches that do not
- * have the ability to do a cmpxchg_double. It only protects the second
- * double word in the page struct. Meaning
+ * have the ability to do a cmpxchg_double. It only protects:
  *	A. page->freelist	-> List of object free in a page
- *	B. page->counters	-> Counters of objects
- *	C. page->frozen		-> frozen state
+ *	B. page->inuse		-> Number of objects in use
+ *	C. page->objects	-> Number of objects in page
+ *	D. page->frozen		-> frozen state
  *
  * If a slab is frozen then it is exempt from list management. It is not
  * on any list. The processor that froze the slab is the one who can
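(Aside, not part of the patch: the reworded mm/slub.c comment lists
page->inuse, page->objects and page->frozen separately because they are
bitfields packed into the single counters word; freelist plus counters form
the double word that cmpxchg_double updates atomically, and slab_lock is the
fallback where that instruction is unavailable. A rough sketch of that
pairing, with an illustrative struct name and field widths as I understand
struct page of this era, not the kernel's actual definition:)

/* Sketch only; not the kernel's definition. */
struct slub_freelist_pair {
        void *freelist;                 /* A. first free object in the page */
        union {
                unsigned long counters; /* updated as one word by cmpxchg_double */
                struct {                /* ... or viewed as packed counters */
                        unsigned inuse:16;      /* B. number of objects in use */
                        unsigned objects:15;    /* C. number of objects in the page */
                        unsigned frozen:1;      /* D. frozen state */
                };
        };
};

Where cmpxchg_double exists, freelist and counters are swapped as an atomic
pair; otherwise slab_lock serializes updates to exactly these fields, which is
what items A-D in the comment enumerate.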