From patchwork Wed Mar 13 05:20:26 2019
X-Patchwork-Submitter: "Tobin C. Harding"
X-Patchwork-Id: 10850621
From: "Tobin C. Harding"
To: Andrew Morton
Cc: "Tobin C. Harding", Roman Gushchin, Christoph Lameter, Pekka Enberg,
    David Rientjes, Joonsoo Kim, Matthew Wilcox,
    linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: [PATCH v2 1/5] slub: Add comments to endif pre-processor macros
Date: Wed, 13 Mar 2019 16:20:26 +1100
Message-Id: <20190313052030.13392-2-tobin@kernel.org>
In-Reply-To: <20190313052030.13392-1-tobin@kernel.org>
References: <20190313052030.13392-1-tobin@kernel.org>

SLUB allocator makes heavy use of ifdef/endif pre-processor macros.
The pairing of these statements is at times hard to follow e.g. if the
pair are further than a screen apart or if there are nested pairs.
We can reduce cognitive load by adding a comment to the endif statement
of form

	#ifdef CONFIG_FOO
	...
	#endif /* CONFIG_FOO */

Add comments to endif pre-processor macros if the ifdef/endif pair is
not immediately apparent.

Reviewed-by: Roman Gushchin
Signed-off-by: Tobin C. Harding
Acked-by: Christoph Lameter
---
 mm/slub.c | 20 ++++++++++----------
 1 file changed, 10 insertions(+), 10 deletions(-)

diff --git a/mm/slub.c b/mm/slub.c
index 1b08fbcb7e61..b282e22885cd 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -1951,7 +1951,7 @@ static void *get_any_partial(struct kmem_cache *s, gfp_t flags,
 			}
 		}
 	} while (read_mems_allowed_retry(cpuset_mems_cookie));
-#endif
+#endif	/* CONFIG_NUMA */
 	return NULL;
 }
@@ -2249,7 +2249,7 @@ static void unfreeze_partials(struct kmem_cache *s,
 		discard_slab(s, page);
 		stat(s, FREE_SLAB);
 	}
-#endif
+#endif	/* CONFIG_SLUB_CPU_PARTIAL */
 }
@@ -2308,7 +2308,7 @@ static void put_cpu_partial(struct kmem_cache *s, struct page *page, int drain)
 		local_irq_restore(flags);
 	}
 	preempt_enable();
-#endif
+#endif	/* CONFIG_SLUB_CPU_PARTIAL */
 }

 static inline void flush_slab(struct kmem_cache *s, struct kmem_cache_cpu *c)
@@ -2813,7 +2813,7 @@ void *kmem_cache_alloc_node_trace(struct kmem_cache *s,
 }
 EXPORT_SYMBOL(kmem_cache_alloc_node_trace);
 #endif
-#endif
+#endif	/* CONFIG_NUMA */

 /*
  * Slow path handling. This may still be called frequently since objects
@@ -3845,7 +3845,7 @@ void *__kmalloc_node(size_t size, gfp_t flags, int node)
 	return ret;
 }
 EXPORT_SYMBOL(__kmalloc_node);
-#endif
+#endif	/* CONFIG_NUMA */

 #ifdef CONFIG_HARDENED_USERCOPY
 /*
@@ -4063,7 +4063,7 @@ void __kmemcg_cache_deactivate(struct kmem_cache *s)
 	 */
 	slab_deactivate_memcg_cache_rcu_sched(s, kmemcg_cache_deact_after_rcu);
 }
-#endif
+#endif	/* CONFIG_MEMCG */

 static int slab_mem_going_offline_callback(void *arg)
 {
@@ -4696,7 +4696,7 @@ static int list_locations(struct kmem_cache *s, char *buf,
 		len += sprintf(buf, "No data\n");
 	return len;
 }
-#endif
+#endif	/* CONFIG_SLUB_DEBUG */

 #ifdef SLUB_RESILIENCY_TEST
 static void __init resiliency_test(void)
@@ -4756,7 +4756,7 @@ static void __init resiliency_test(void)
 #ifdef CONFIG_SYSFS
 static void resiliency_test(void) {};
 #endif
-#endif
+#endif	/* SLUB_RESILIENCY_TEST */

 #ifdef CONFIG_SYSFS
 enum slab_stat_type {
@@ -5413,7 +5413,7 @@ STAT_ATTR(CPU_PARTIAL_ALLOC, cpu_partial_alloc);
 STAT_ATTR(CPU_PARTIAL_FREE, cpu_partial_free);
 STAT_ATTR(CPU_PARTIAL_NODE, cpu_partial_node);
 STAT_ATTR(CPU_PARTIAL_DRAIN, cpu_partial_drain);
-#endif
+#endif	/* CONFIG_SLUB_STATS */

 static struct attribute *slab_attrs[] = {
 	&slab_size_attr.attr,
@@ -5614,7 +5614,7 @@ static void memcg_propagate_slab_attrs(struct kmem_cache *s)
 	if (buffer)
 		free_page((unsigned long)buffer);
-#endif
+#endif	/* CONFIG_MEMCG */
 }

 static void kmem_cache_release(struct kobject *k)

From patchwork Wed Mar 13 05:20:27 2019
X-Patchwork-Submitter: "Tobin C. Harding"
X-Patchwork-Id: 10850623
From: "Tobin C. Harding"
To: Andrew Morton
Cc: "Tobin C. Harding", Roman Gushchin, Christoph Lameter, Pekka Enberg,
    David Rientjes, Joonsoo Kim, Matthew Wilcox,
    linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: [PATCH v2 2/5] slub: Use slab_list instead of lru
Date: Wed, 13 Mar 2019 16:20:27 +1100
Message-Id: <20190313052030.13392-3-tobin@kernel.org>
In-Reply-To: <20190313052030.13392-1-tobin@kernel.org>
References: <20190313052030.13392-1-tobin@kernel.org>

Currently we use the page->lru list for maintaining lists of slabs. We
have a list_head in the page structure (slab_list) that can be used for
this purpose. Doing so makes the code cleaner since we are not
overloading the lru list.

The slab_list is part of a union within the page struct (included here
stripped down):

	union {
		struct {	/* Page cache and anonymous pages */
			struct list_head lru;
			...
		};
		struct {
			dma_addr_t dma_addr;
		};
		struct {	/* slab, slob and slub */
			union {
				struct list_head slab_list;
				struct {	/* Partial pages */
					struct page *next;
					int pages;	/* Nr of pages left */
					int pobjects;	/* Approximate count */
				};
			};
		...

Here we see that slab_list and lru are the same bits. We can verify
that this change is safe to do by examining the object file produced
from slub.c before and after this patch is applied.

Steps taken to verify:

 1. checkout current tip of Linus' tree

    commit a667cb7a94d4 ("Merge branch 'akpm' (patches from Andrew)")

 2. configure and build (defaults to SLUB allocator)

    CONFIG_SLUB=y
    CONFIG_SLUB_DEBUG=y
    CONFIG_SLUB_DEBUG_ON=y
    CONFIG_SLUB_STATS=y
    CONFIG_HAVE_DEBUG_KMEMLEAK=y
    CONFIG_SLAB_FREELIST_RANDOM=y
    CONFIG_SLAB_FREELIST_HARDENED=y

 3. disassemble object file: `objdump -dr mm/slub.o > before.s`
 4. apply patch
 5. build
 6. disassemble object file: `objdump -dr mm/slub.o > after.s`
 7. diff before.s after.s

Use slab_list list_head instead of the lru list_head for maintaining
lists of slabs.

Reviewed-by: Roman Gushchin
Signed-off-by: Tobin C. Harding
Acked-by: Christoph Lameter
---
 mm/slub.c | 40 ++++++++++++++++++++--------------------
 1 file changed, 20 insertions(+), 20 deletions(-)

diff --git a/mm/slub.c b/mm/slub.c
index b282e22885cd..d692b5e0163d 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -1023,7 +1023,7 @@ static void add_full(struct kmem_cache *s,
 		return;

 	lockdep_assert_held(&n->list_lock);
-	list_add(&page->lru, &n->full);
+	list_add(&page->slab_list, &n->full);
 }

 static void remove_full(struct kmem_cache *s, struct kmem_cache_node *n, struct page *page)
@@ -1032,7 +1032,7 @@ static void remove_full(struct kmem_cache *s, struct kmem_cache_node *n, struct
 		return;

 	lockdep_assert_held(&n->list_lock);
-	list_del(&page->lru);
+	list_del(&page->slab_list);
 }

 /* Tracking of the number of slabs for debugging purposes */
@@ -1773,9 +1773,9 @@ __add_partial(struct kmem_cache_node *n, struct page *page, int tail)
 {
 	n->nr_partial++;
 	if (tail == DEACTIVATE_TO_TAIL)
-		list_add_tail(&page->lru, &n->partial);
+		list_add_tail(&page->slab_list, &n->partial);
 	else
-		list_add(&page->lru, &n->partial);
+		list_add(&page->slab_list, &n->partial);
 }

 static inline void add_partial(struct kmem_cache_node *n,
@@ -1789,7 +1789,7 @@ static inline void
 remove_partial(struct kmem_cache_node *n, struct page *page)
 {
 	lockdep_assert_held(&n->list_lock);
-	list_del(&page->lru);
+	list_del(&page->slab_list);
 	n->nr_partial--;
 }
@@ -1863,7 +1863,7 @@ static void *get_partial_node(struct kmem_cache *s, struct kmem_cache_node *n,
 		return NULL;

 	spin_lock(&n->list_lock);
-	list_for_each_entry_safe(page, page2, &n->partial, lru) {
+	list_for_each_entry_safe(page, page2, &n->partial, slab_list) {
 		void *t;

 		if (!pfmemalloc_match(page, flags))
@@ -2407,7 +2407,7 @@ static unsigned long count_partial(struct kmem_cache_node *n,
 	struct page *page;

 	spin_lock_irqsave(&n->list_lock, flags);
-	list_for_each_entry(page, &n->partial, lru)
+	list_for_each_entry(page, &n->partial, slab_list)
 		x += get_count(page);
 	spin_unlock_irqrestore(&n->list_lock, flags);
 	return x;
@@ -3702,10 +3702,10 @@ static void free_partial(struct kmem_cache *s, struct kmem_cache_node *n)
 	BUG_ON(irqs_disabled());
 	spin_lock_irq(&n->list_lock);
-	list_for_each_entry_safe(page, h, &n->partial, lru) {
+	list_for_each_entry_safe(page, h, &n->partial, slab_list) {
 		if (!page->inuse) {
 			remove_partial(n, page);
-			list_add(&page->lru, &discard);
+			list_add(&page->slab_list, &discard);
 		} else {
 			list_slab_objects(s, page,
 			"Objects remaining in %s on __kmem_cache_shutdown()");
@@ -3713,7 +3713,7 @@ static void free_partial(struct kmem_cache *s, struct kmem_cache_node *n)
 	}
 	spin_unlock_irq(&n->list_lock);

-	list_for_each_entry_safe(page, h, &discard, lru)
+	list_for_each_entry_safe(page, h, &discard, slab_list)
 		discard_slab(s, page);
 }
@@ -3993,7 +3993,7 @@ int __kmem_cache_shrink(struct kmem_cache *s)
 		 * Note that concurrent frees may occur while we hold the
 		 * list_lock. page->inuse here is the upper limit.
 		 */
-		list_for_each_entry_safe(page, t, &n->partial, lru) {
+		list_for_each_entry_safe(page, t, &n->partial, slab_list) {
 			int free = page->objects - page->inuse;

 			/* Do not reread page->inuse */
@@ -4003,10 +4003,10 @@ int __kmem_cache_shrink(struct kmem_cache *s)
 			BUG_ON(free <= 0);

 			if (free == page->objects) {
-				list_move(&page->lru, &discard);
+				list_move(&page->slab_list, &discard);
 				n->nr_partial--;
 			} else if (free <= SHRINK_PROMOTE_MAX)
-				list_move(&page->lru, promote + free - 1);
+				list_move(&page->slab_list, promote + free - 1);
 		}

 		/*
@@ -4019,7 +4019,7 @@ int __kmem_cache_shrink(struct kmem_cache *s)
 		spin_unlock_irqrestore(&n->list_lock, flags);

 		/* Release empty slabs */
-		list_for_each_entry_safe(page, t, &discard, lru)
+		list_for_each_entry_safe(page, t, &discard, slab_list)
 			discard_slab(s, page);

 		if (slabs_node(s, node))
@@ -4211,11 +4211,11 @@ static struct kmem_cache * __init bootstrap(struct kmem_cache *static_cache)
 	for_each_kmem_cache_node(s, node, n) {
 		struct page *p;

-		list_for_each_entry(p, &n->partial, lru)
+		list_for_each_entry(p, &n->partial, slab_list)
 			p->slab_cache = s;
 #ifdef CONFIG_SLUB_DEBUG
-		list_for_each_entry(p, &n->full, lru)
+		list_for_each_entry(p, &n->full, slab_list)
 			p->slab_cache = s;
 #endif
 	}
@@ -4432,7 +4432,7 @@ static int validate_slab_node(struct kmem_cache *s,
 	spin_lock_irqsave(&n->list_lock, flags);

-	list_for_each_entry(page, &n->partial, lru) {
+	list_for_each_entry(page, &n->partial, slab_list) {
 		validate_slab_slab(s, page, map);
 		count++;
 	}
@@ -4443,7 +4443,7 @@ static int validate_slab_node(struct kmem_cache *s,
 	if (!(s->flags & SLAB_STORE_USER))
 		goto out;

-	list_for_each_entry(page, &n->full, lru) {
+	list_for_each_entry(page, &n->full, slab_list) {
 		validate_slab_slab(s, page, map);
 		count++;
 	}
@@ -4639,9 +4639,9 @@ static int list_locations(struct kmem_cache *s, char *buf,
 			continue;

 		spin_lock_irqsave(&n->list_lock, flags);
-		list_for_each_entry(page, &n->partial, lru)
+		list_for_each_entry(page, &n->partial, slab_list)
 			process_slab(&t, s, page, alloc, map);
-		list_for_each_entry(page, &n->full, lru)
+		list_for_each_entry(page, &n->full, slab_list)
 			process_slab(&t, s, page, alloc, map);
 		spin_unlock_irqrestore(&n->list_lock, flags);
 	}

From patchwork Wed Mar 13 05:20:28 2019
X-Patchwork-Submitter: "Tobin C. Harding"
X-Patchwork-Id: 10850625
From: "Tobin C. Harding"
To: Andrew Morton
Cc: "Tobin C. Harding", Roman Gushchin, Christoph Lameter, Pekka Enberg,
    David Rientjes, Joonsoo Kim, Matthew Wilcox,
    linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: [PATCH v2 3/5] slab: Use slab_list instead of lru
Date: Wed, 13 Mar 2019 16:20:28 +1100
Message-Id: <20190313052030.13392-4-tobin@kernel.org>
In-Reply-To: <20190313052030.13392-1-tobin@kernel.org>
References: <20190313052030.13392-1-tobin@kernel.org>

Currently we use the page->lru list for maintaining lists of slabs. We
have a list_head in the page structure (slab_list) that can be used for
this purpose. Doing so makes the code cleaner since we are not
overloading the lru list.

The slab_list is part of a union within the page struct (included here
stripped down):

	union {
		struct {	/* Page cache and anonymous pages */
			struct list_head lru;
			...
}; struct { dma_addr_t dma_addr; }; struct { /* slab, slob and slub */ union { struct list_head slab_list; struct { /* Partial pages */ struct page *next; int pages; /* Nr of pages left */ int pobjects; /* Approximate count */ }; }; ... Here we see that slab_list and lru are the same bits. We can verify that this change is safe to do by examining the object file produced from slab.c before and after this patch is applied. Steps taken to verify: 1. checkout current tip of Linus' tree commit a667cb7a94d4 ("Merge branch 'akpm' (patches from Andrew)") 2. configure and build (selecting SLAB allocator) CONFIG_SLAB=y CONFIG_SLAB_FREELIST_RANDOM=y CONFIG_DEBUG_SLAB=y CONFIG_DEBUG_SLAB_LEAK=y CONFIG_HAVE_DEBUG_KMEMLEAK=y 3. dissasemble object file `objdump -dr mm/slab.o > before.s 4. apply patch 5. build 6. dissasemble object file `objdump -dr mm/slab.o > after.s 7. diff before.s after.s Use slab_list list_head instead of the lru list_head for maintaining lists of slabs. Reviewed-by: Roman Gushchin Signed-off-by: Tobin C. 
Harding --- mm/slab.c | 49 +++++++++++++++++++++++++------------------------ 1 file changed, 25 insertions(+), 24 deletions(-) diff --git a/mm/slab.c b/mm/slab.c index 28652e4218e0..09cc64ef9613 100644 --- a/mm/slab.c +++ b/mm/slab.c @@ -1710,8 +1710,8 @@ static void slabs_destroy(struct kmem_cache *cachep, struct list_head *list) { struct page *page, *n; - list_for_each_entry_safe(page, n, list, lru) { - list_del(&page->lru); + list_for_each_entry_safe(page, n, list, slab_list) { + list_del(&page->slab_list); slab_destroy(cachep, page); } } @@ -2265,8 +2265,8 @@ static int drain_freelist(struct kmem_cache *cache, goto out; } - page = list_entry(p, struct page, lru); - list_del(&page->lru); + page = list_entry(p, struct page, slab_list); + list_del(&page->slab_list); n->free_slabs--; n->total_slabs--; /* @@ -2726,13 +2726,13 @@ static void cache_grow_end(struct kmem_cache *cachep, struct page *page) if (!page) return; - INIT_LIST_HEAD(&page->lru); + INIT_LIST_HEAD(&page->slab_list); n = get_node(cachep, page_to_nid(page)); spin_lock(&n->list_lock); n->total_slabs++; if (!page->active) { - list_add_tail(&page->lru, &(n->slabs_free)); + list_add_tail(&page->slab_list, &n->slabs_free); n->free_slabs++; } else fixup_slab_list(cachep, n, page, &list); @@ -2841,9 +2841,9 @@ static inline void fixup_slab_list(struct kmem_cache *cachep, void **list) { /* move slabp to correct slabp list: */ - list_del(&page->lru); + list_del(&page->slab_list); if (page->active == cachep->num) { - list_add(&page->lru, &n->slabs_full); + list_add(&page->slab_list, &n->slabs_full); if (OBJFREELIST_SLAB(cachep)) { #if DEBUG /* Poisoning will be done without holding the lock */ @@ -2857,7 +2857,7 @@ static inline void fixup_slab_list(struct kmem_cache *cachep, page->freelist = NULL; } } else - list_add(&page->lru, &n->slabs_partial); + list_add(&page->slab_list, &n->slabs_partial); } /* Try to find non-pfmemalloc slab if needed */ @@ -2880,20 +2880,20 @@ static noinline struct page 
*get_valid_first_slab(struct kmem_cache_node *n, } /* Move pfmemalloc slab to the end of list to speed up next search */ - list_del(&page->lru); + list_del(&page->slab_list); if (!page->active) { - list_add_tail(&page->lru, &n->slabs_free); + list_add_tail(&page->slab_list, &n->slabs_free); n->free_slabs++; } else - list_add_tail(&page->lru, &n->slabs_partial); + list_add_tail(&page->slab_list, &n->slabs_partial); - list_for_each_entry(page, &n->slabs_partial, lru) { + list_for_each_entry(page, &n->slabs_partial, slab_list) { if (!PageSlabPfmemalloc(page)) return page; } n->free_touched = 1; - list_for_each_entry(page, &n->slabs_free, lru) { + list_for_each_entry(page, &n->slabs_free, slab_list) { if (!PageSlabPfmemalloc(page)) { n->free_slabs--; return page; @@ -2908,11 +2908,12 @@ static struct page *get_first_slab(struct kmem_cache_node *n, bool pfmemalloc) struct page *page; assert_spin_locked(&n->list_lock); - page = list_first_entry_or_null(&n->slabs_partial, struct page, lru); + page = list_first_entry_or_null(&n->slabs_partial, struct page, + slab_list); if (!page) { n->free_touched = 1; page = list_first_entry_or_null(&n->slabs_free, struct page, - lru); + slab_list); if (page) n->free_slabs--; } @@ -3413,29 +3414,29 @@ static void free_block(struct kmem_cache *cachep, void **objpp, objp = objpp[i]; page = virt_to_head_page(objp); - list_del(&page->lru); + list_del(&page->slab_list); check_spinlock_acquired_node(cachep, node); slab_put_obj(cachep, page, objp); STATS_DEC_ACTIVE(cachep); /* fixup slab chains */ if (page->active == 0) { - list_add(&page->lru, &n->slabs_free); + list_add(&page->slab_list, &n->slabs_free); n->free_slabs++; } else { /* Unconditionally move a slab to the end of the * partial list on free - maximum time for the * other objects to be freed, too. 
*/ - list_add_tail(&page->lru, &n->slabs_partial); + list_add_tail(&page->slab_list, &n->slabs_partial); } } while (n->free_objects > n->free_limit && !list_empty(&n->slabs_free)) { n->free_objects -= cachep->num; - page = list_last_entry(&n->slabs_free, struct page, lru); - list_move(&page->lru, list); + page = list_last_entry(&n->slabs_free, struct page, slab_list); + list_move(&page->slab_list, list); n->free_slabs--; n->total_slabs--; } @@ -3473,7 +3474,7 @@ static void cache_flusharray(struct kmem_cache *cachep, struct array_cache *ac) int i = 0; struct page *page; - list_for_each_entry(page, &n->slabs_free, lru) { + list_for_each_entry(page, &n->slabs_free, slab_list) { BUG_ON(page->active); i++; @@ -4336,9 +4337,9 @@ static int leaks_show(struct seq_file *m, void *p) check_irq_on(); spin_lock_irq(&n->list_lock); - list_for_each_entry(page, &n->slabs_full, lru) + list_for_each_entry(page, &n->slabs_full, slab_list) handle_slab(x, cachep, page); - list_for_each_entry(page, &n->slabs_partial, lru) + list_for_each_entry(page, &n->slabs_partial, slab_list) handle_slab(x, cachep, page); spin_unlock_irq(&n->list_lock); } From patchwork Wed Mar 13 05:20:29 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Tobin C. 
Harding"
X-Patchwork-Id: 10850627

From: "Tobin C. Harding"
To: Andrew Morton
Cc: "Tobin C. Harding", Roman Gushchin, Christoph Lameter, Pekka Enberg,
 David Rientjes, Joonsoo Kim, Matthew Wilcox, linux-mm@kvack.org,
 linux-kernel@vger.kernel.org
Subject: [PATCH v2 4/5] slob: Use slab_list instead of lru
Date: Wed, 13 Mar 2019 16:20:29 +1100
Message-Id: <20190313052030.13392-5-tobin@kernel.org>
In-Reply-To: <20190313052030.13392-1-tobin@kernel.org>

Currently we use the page->lru list for maintaining lists of slabs.  We
have a list_head in the page structure (slab_list) that can be used for
this purpose.  Doing so makes the code cleaner since we are no longer
overloading the lru list.

The slab_list is part of a union within the page struct (included here
stripped down):

	union {
		struct {	/* Page cache and anonymous pages */
			struct list_head lru;
			...
		};
		struct {
			dma_addr_t dma_addr;
		};
		struct {	/* slab, slob and slub */
			union {
				struct list_head slab_list;
				struct {	/* Partial pages */
					struct page *next;
					int pages;	/* Nr of pages left */
					int pobjects;	/* Approximate count */
				};
			};
			...

Here we see that slab_list and lru occupy the same bits.  We can verify
that this change is safe by examining the object file produced from
slob.c before and after the patch is applied.  Steps taken to verify:

 1. Check out the current tip of Linus' tree:
    commit a667cb7a94d4 ("Merge branch 'akpm' (patches from Andrew)")
 2. Configure and build (selecting the SLOB allocator):
    CONFIG_SLOB=y
    CONFIG_SLAB_MERGE_DEFAULT=y
 3. Disassemble the object file: `objdump -dr mm/slob.o > before.s`
 4. Apply the patch.
 5. Build.
 6. Disassemble the object file: `objdump -dr mm/slob.o > after.s`
 7. `diff before.s after.s`

Use the slab_list list_head instead of the lru list_head for
maintaining lists of slabs.

Reviewed-by: Roman Gushchin
Signed-off-by: Tobin C. Harding
---
 mm/slob.c | 10 +++++-----
 1 file changed, 5 insertions(+), 5 deletions(-)

diff --git a/mm/slob.c b/mm/slob.c
index 307c2c9feb44..ee68ff2a2833 100644
--- a/mm/slob.c
+++ b/mm/slob.c
@@ -112,13 +112,13 @@ static inline int slob_page_free(struct page *sp)
 
 static void set_slob_page_free(struct page *sp, struct list_head *list)
 {
-	list_add(&sp->lru, list);
+	list_add(&sp->slab_list, list);
 	__SetPageSlobFree(sp);
 }
 
 static inline void clear_slob_page_free(struct page *sp)
 {
-	list_del(&sp->lru);
+	list_del(&sp->slab_list);
 	__ClearPageSlobFree(sp);
 }
 
@@ -283,7 +283,7 @@ static void *slob_alloc(size_t size, gfp_t gfp, int align, int node)
 	spin_lock_irqsave(&slob_lock, flags);
 	/* Iterate through each partially free page, try to find room */
-	list_for_each_entry(sp, slob_list, lru) {
+	list_for_each_entry(sp, slob_list, slab_list) {
 #ifdef CONFIG_NUMA
 		/*
 		 * If there's a node specification, search for a partial
@@ -297,7 +297,7 @@ static void *slob_alloc(size_t size, gfp_t gfp, int align, int node)
 			continue;
 
 		/* Attempt to alloc */
-		prev = sp->lru.prev;
+		prev = sp->slab_list.prev;
 		b = slob_page_alloc(sp, size, align);
 		if (!b)
 			continue;
@@ -323,7 +323,7 @@ static void *slob_alloc(size_t size, gfp_t gfp, int align, int node)
 	spin_lock_irqsave(&slob_lock, flags);
 	sp->units = SLOB_UNITS(PAGE_SIZE);
 	sp->freelist = b;
-	INIT_LIST_HEAD(&sp->lru);
+	INIT_LIST_HEAD(&sp->slab_list);
 	set_slob(b, SLOB_UNITS(PAGE_SIZE), b + SLOB_UNITS(PAGE_SIZE));
 	set_slob_page_free(sp, slob_list);
 	b = slob_page_alloc(sp, size, align);

From patchwork Wed Mar 13 05:20:30 2019
X-Patchwork-Submitter: "Tobin C. Harding"
X-Patchwork-Id: 10850629
From: "Tobin C. Harding"
To: Andrew Morton
Cc: "Tobin C. Harding", Roman Gushchin, Christoph Lameter, Pekka Enberg,
 David Rientjes, Joonsoo Kim, Matthew Wilcox, linux-mm@kvack.org,
 linux-kernel@vger.kernel.org
Subject: [PATCH v2 5/5] mm: Remove stale comment from page struct
Date: Wed, 13 Mar 2019 16:20:30 +1100
Message-Id: <20190313052030.13392-6-tobin@kernel.org>
In-Reply-To: <20190313052030.13392-1-tobin@kernel.org>

We now use the slab_list list_head instead of the lru list_head, so the
comment on the page struct's slab_list member referring to lru has
become stale.  Remove it.

Signed-off-by: Tobin C. Harding
Acked-by: Christoph Lameter
---
 include/linux/mm_types.h | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
index 7eade9132f02..63a34e3d7c29 100644
--- a/include/linux/mm_types.h
+++ b/include/linux/mm_types.h
@@ -103,7 +103,7 @@ struct page {
 		};
 		struct {	/* slab, slob and slub */
 			union {
-				struct list_head slab_list;	/* uses lru */
+				struct list_head slab_list;
 				struct {	/* Partial pages */
 					struct page *next;
 #ifdef CONFIG_64BIT