From patchwork Tue Apr 2 23:05:39 2019
X-Patchwork-Submitter: "Tobin C. Harding"
X-Patchwork-Id: 10882431
From: "Tobin C. Harding"
To: Andrew Morton
Cc: "Tobin C. Harding", Roman Gushchin, Christoph Lameter, Pekka Enberg,
 David Rientjes, Joonsoo Kim, Matthew Wilcox, linux-mm@kvack.org,
 linux-kernel@vger.kernel.org
Subject: [PATCH v5 1/7] list: Add function list_rotate_to_front()
Date: Wed, 3 Apr 2019 10:05:39 +1100
Message-Id: <20190402230545.2929-2-tobin@kernel.org>
In-Reply-To: <20190402230545.2929-1-tobin@kernel.org>
References: <20190402230545.2929-1-tobin@kernel.org>
Harding" , Roman Gushchin , Christoph Lameter , Pekka Enberg , David Rientjes , Joonsoo Kim , Matthew Wilcox , linux-mm@kvack.org, linux-kernel@vger.kernel.org Subject: [PATCH v5 1/7] list: Add function list_rotate_to_front() Date: Wed, 3 Apr 2019 10:05:39 +1100 Message-Id: <20190402230545.2929-2-tobin@kernel.org> X-Mailer: git-send-email 2.21.0 In-Reply-To: <20190402230545.2929-1-tobin@kernel.org> References: <20190402230545.2929-1-tobin@kernel.org> MIME-Version: 1.0 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: X-Virus-Scanned: ClamAV using ClamSMTP Currently if we wish to rotate a list until a specific item is at the front of the list we can call list_move_tail(head, list). Note that the arguments are the reverse way to the usual use of list_move_tail(list, head). This is a hack, it depends on the developer knowing how the list_head operates internally which violates the layer of abstraction offered by the list_head. Also, it is not intuitive so the next developer to come along must study list.h in order to fully understand what is meant by the call, while this is 'good for' the developer it makes reading the code harder. We should have an function appropriately named that does this if there are users for it intree. By grep'ing the tree for list_move_tail() and list_tail() and attempting to guess the argument order from the names it seems there is only one place currently in the tree that does this - the slob allocatator. Add function list_rotate_to_front() to rotate a list until the specified item is at the front of the list. Signed-off-by: Tobin C. Harding Reviewed-by: Christoph Lameter Reviewed-by: Roman Gushchin --- include/linux/list.h | 18 ++++++++++++++++++ 1 file changed, 18 insertions(+) diff --git a/include/linux/list.h b/include/linux/list.h index 58aa3adf94e6..9e9a6403dbe4 100644 --- a/include/linux/list.h +++ b/include/linux/list.h @@ -270,6 +270,24 @@ static inline void list_rotate_left(struct list_head *head) } } +/** + * list_rotate_to_front() - Rotate list to specific item. + * @list: The desired new front of the list. + * @head: The head of the list. + * + * Rotates list so that @list becomes the new front of the list. + */ +static inline void list_rotate_to_front(struct list_head *list, + struct list_head *head) +{ + /* + * Deletes the list head from the list denoted by @head and + * places it as the tail of @list, this effectively rotates the + * list so that @list is at the front. + */ + list_move_tail(head, list); +} + /** * list_is_singular - tests whether a list has just one entry. * @head: the list to test. From patchwork Tue Apr 2 23:05:40 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Tobin C. 
Harding" X-Patchwork-Id: 10882433 Return-Path: Received: from mail.wl.linuxfoundation.org (pdx-wl-mail.web.codeaurora.org [172.30.200.125]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id E9F3C1575 for ; Tue, 2 Apr 2019 23:06:38 +0000 (UTC) Received: from mail.wl.linuxfoundation.org (localhost [127.0.0.1]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id D565B2897A for ; Tue, 2 Apr 2019 23:06:38 +0000 (UTC) Received: by mail.wl.linuxfoundation.org (Postfix, from userid 486) id C9C4C28986; Tue, 2 Apr 2019 23:06:38 +0000 (UTC) X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on pdx-wl-mail.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-2.9 required=2.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,MAILING_LIST_MULTI,RCVD_IN_DNSWL_NONE autolearn=ham version=3.3.1 Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id B24792897A for ; Tue, 2 Apr 2019 23:06:36 +0000 (UTC) Received: by kanga.kvack.org (Postfix) id 362FA6B0272; Tue, 2 Apr 2019 19:06:35 -0400 (EDT) Delivered-To: linux-mm-outgoing@kvack.org Received: by kanga.kvack.org (Postfix, from userid 40) id 313CD6B0274; Tue, 2 Apr 2019 19:06:35 -0400 (EDT) X-Original-To: int-list-linux-mm@kvack.org X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 18F206B0275; Tue, 2 Apr 2019 19:06:35 -0400 (EDT) X-Original-To: linux-mm@kvack.org X-Delivered-To: linux-mm@kvack.org Received: from mail-qk1-f200.google.com (mail-qk1-f200.google.com [209.85.222.200]) by kanga.kvack.org (Postfix) with ESMTP id EC12E6B0272 for ; Tue, 2 Apr 2019 19:06:34 -0400 (EDT) Received: by mail-qk1-f200.google.com with SMTP id q127so13061923qkd.2 for ; Tue, 02 Apr 2019 16:06:34 -0700 (PDT) X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:dkim-signature:from:to:cc:subject:date :message-id:in-reply-to:references:mime-version :content-transfer-encoding; bh=DgIYnn/fssjAlV/t8WH3LwNCV6/gchODyjSyz2g47Ag=; b=Z9M/pasNnt2hqDhQAyIV+tZ06I1vNKNYuWUSIBVMsmK43dk5/x8N8UAFzIzSRR3A/B Agr8v7fPRjsROc8xp2JbfqtWn9tePmrEUWBqrFdeNEua0HSJO89grDRq9dSl1wpUcYZM itBpGi2syNs8IrOWSJUgQNPCU/2rfD0oF6kaL0qjtGH2Oz/0beQpPdNXP/vCkcyZSWLg E2God0zIAyvnMO1DEB1uRKLZCj5+P+6i/PSv0Ys6AVGAVQtNMGLrlL7ppxhUhdGB+sOI /krUufR/+YTUlM49qhTOg0ANaPJpvPN2Mkys6W6n5SUZZ1H7vCNogfsAAb3ORV9Dbdta 4vdQ== X-Gm-Message-State: APjAAAW2+azi2VcFeFPEvlJu/O7ZdOMZ72YQGMQ4O4Ei0yHxfhk0wuXN 46QGjpCWKlkS7TOIpCM/4E+UtrgGsOddLznFYlL11sdfH93414JWQk/sVresrjMA229KuIGI69h lpnRBRRljRk6+frG1BaK6e/AV94pKBDUOhD96NWAhHGwl2OlqsDfusMvRhQwoNaE= X-Received: by 2002:a37:5088:: with SMTP id e130mr33666571qkb.206.1554246394711; Tue, 02 Apr 2019 16:06:34 -0700 (PDT) X-Google-Smtp-Source: APXvYqw1Ge2MbfIe8DNX1MkJE1pP3JepQ9/KjZqkA+X6vtP26g1HDRqkrIOGMNXwJW45sCNRoXLM X-Received: by 2002:a37:5088:: with SMTP id e130mr33666521qkb.206.1554246393852; Tue, 02 Apr 2019 16:06:33 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1554246393; cv=none; d=google.com; s=arc-20160816; b=JFuniKwm9rUBkzR9muCOktQIlewPs7K83df9jB568xesDWnrZ5B0il9kfuHzetzJ0x GVv7FIlIKHf3l3s/ZeIYziYv9aHjm7BnM74XqpXHtitOIYEKo4tM+m3iEU7/QaPdkPWj oSmMN7R3u8LcPGBcaOyYK9PZDL6b0DZyfsy3QyizE0Km7N+E3By/dX1Jj9WGymGdep0Q sFY4l+qb2IdgreI3h4Tg64HzfISMwIfvTMPudweMYTvvSNDLNOyiD+WE590wFQ05B1VR nr3UhvpUZebFF+BC5pe7meM1TeuFiF6RuCHaP+4Lc37MuCTpC8lfc5tJOErNH10ONocv e40A== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; 
h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:dkim-signature; bh=DgIYnn/fssjAlV/t8WH3LwNCV6/gchODyjSyz2g47Ag=; b=cCKwosP6bnSdU7t/Uo74Q5YGxnCCNPMT2muL0Q9+YyDtbmz9uf281lFQ/loHYO42fb MP7F3jrYGzVLpeHQTXMAiFL4GZsPIvysGtZe2ZRJm7rKPeEGI59/uA1e/Z5a5/tTzkba SqMkDCC25MzSt1CWukZbePh++uskN5kw7SaMO2c6tpXdfj75//E8k4n94LVqpGTesXkJ WVIQJPvMipBShOTHV9D5//bF7d1vIAnt4dXv3vwOd6vzcJJgWfALVmfywzBd21XVx21M 7I7ixdFiYxeuN8b0nupluwn2enxznn0vNBSunyTIxMOzb8YCoNRC/koeepm3gNckQAJD kDTA== ARC-Authentication-Results: i=1; mx.google.com; dkim=pass header.i=@messagingengine.com header.s=fm2 header.b=fY5GcgBj; spf=softfail (google.com: domain of transitioning tobin@kernel.org does not designate 66.111.4.26 as permitted sender) smtp.mailfrom=tobin@kernel.org; dmarc=fail (p=NONE sp=NONE dis=NONE) header.from=kernel.org Received: from out2-smtp.messagingengine.com (out2-smtp.messagingengine.com. [66.111.4.26]) by mx.google.com with ESMTPS id q39si1648087qtc.321.2019.04.02.16.06.33 for (version=TLS1_2 cipher=ECDHE-RSA-AES128-GCM-SHA256 bits=128/128); Tue, 02 Apr 2019 16:06:33 -0700 (PDT) Received-SPF: softfail (google.com: domain of transitioning tobin@kernel.org does not designate 66.111.4.26 as permitted sender) client-ip=66.111.4.26; Authentication-Results: mx.google.com; dkim=pass header.i=@messagingengine.com header.s=fm2 header.b=fY5GcgBj; spf=softfail (google.com: domain of transitioning tobin@kernel.org does not designate 66.111.4.26 as permitted sender) smtp.mailfrom=tobin@kernel.org; dmarc=fail (p=NONE sp=NONE dis=NONE) header.from=kernel.org Received: from compute3.internal (compute3.nyi.internal [10.202.2.43]) by mailout.nyi.internal (Postfix) with ESMTP id 92E3F21F4F; Tue, 2 Apr 2019 19:06:33 -0400 (EDT) Received: from mailfrontend2 ([10.202.2.163]) by compute3.internal (MEProxy); Tue, 02 Apr 2019 19:06:33 -0400 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d= messagingengine.com; h=cc:content-transfer-encoding:date:from :in-reply-to:message-id:mime-version:references:subject:to :x-me-proxy:x-me-proxy:x-me-sender:x-me-sender:x-sasl-enc; s= fm2; bh=DgIYnn/fssjAlV/t8WH3LwNCV6/gchODyjSyz2g47Ag=; b=fY5GcgBj A5RmmSax0s7ajf0TNyjjdaOe6cLvx0pgrb3jofsE3nCice4TPLerLhcX5/2ABdgU 2K+GTTOPLBMXz8+bCLgI9peLM9xjggKKl6Qm78+9feW7pEFv5G7xiejTWshm22bQ d0m+b6cFQhg+Z1Muyhlk1dZdXfZUs3RC+DxmW6f9+hTW7h/Yr29bdYL4nha7POyB ZBvS6jZkanocSqY10uSiaHPH0fSUzWt8VF4ae4kUZu/C8GDdwlwocI8IZweV5N2K Wl8OhPY5BNQ7HMIvu9r+k6k7Kt/yrk/3kHNxQG0uZGRsHv0Ifc1xrzgdwrwc7Ifw +TOc3UJrPUmNuA== X-ME-Sender: X-ME-Proxy-Cause: gggruggvucftvghtrhhoucdtuddrgeduuddrtddugdduieculddtuddrgedutddrtddtmd cutefuodetggdotefrodftvfcurfhrohhfihhlvgemucfhrghsthforghilhdpqfgfvfdp uffrtefokffrpgfnqfghnecuuegrihhlohhuthemuceftddtnecusecvtfgvtghiphhivg hnthhsucdlqddutddtmdenucfjughrpefhvffufffkofgjfhgggfestdekredtredttden ucfhrhhomhepfdfvohgsihhnucevrdcujfgrrhguihhnghdfuceothhosghinheskhgvrh hnvghlrdhorhhgqeenucfkphepuddvgedrudeiledrvdejrddvtdeknecurfgrrhgrmhep mhgrihhlfhhrohhmpehtohgsihhnsehkvghrnhgvlhdrohhrghenucevlhhushhtvghruf hiiigvpedu X-ME-Proxy: Received: from eros.localdomain (124-169-27-208.dyn.iinet.net.au [124.169.27.208]) by mail.messagingengine.com (Postfix) with ESMTPA id 0362C10391; Tue, 2 Apr 2019 19:06:29 -0400 (EDT) From: "Tobin C. Harding" To: Andrew Morton Cc: "Tobin C. 
Harding" , Roman Gushchin , Christoph Lameter , Pekka Enberg , David Rientjes , Joonsoo Kim , Matthew Wilcox , linux-mm@kvack.org, linux-kernel@vger.kernel.org Subject: [PATCH v5 2/7] slob: Respect list_head abstraction layer Date: Wed, 3 Apr 2019 10:05:40 +1100 Message-Id: <20190402230545.2929-3-tobin@kernel.org> X-Mailer: git-send-email 2.21.0 In-Reply-To: <20190402230545.2929-1-tobin@kernel.org> References: <20190402230545.2929-1-tobin@kernel.org> MIME-Version: 1.0 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: X-Virus-Scanned: ClamAV using ClamSMTP Currently we reach inside the list_head. This is a violation of the layer of abstraction provided by the list_head. It makes the code fragile. More importantly it makes the code wicked hard to understand. The code reaches into the list_head structure to counteract the fact that the list _may_ have been changed during slob_page_alloc(). Instead of this we can add a return parameter to slob_page_alloc() to signal that the list was modified (list_del() called with page->lru to remove page from the freelist). This code is concerned with an optimisation that counters the tendency for first fit allocation algorithm to fragment memory into many small chunks at the front of the memory pool. Since the page is only removed from the list when an allocation uses _all_ the remaining memory in the page then in this special case fragmentation does not occur and we therefore do not need the optimisation. Add a return parameter to slob_page_alloc() to signal that the allocation used up the whole page and that the page was removed from the free list. After calling slob_page_alloc() check the return value just added and only attempt optimisation if the page is still on the list. Use list_head API instead of reaching into the list_head structure to check if sp is at the front of the list. Signed-off-by: Tobin C. Harding Acked-by: Christoph Lameter --- mm/slob.c | 51 +++++++++++++++++++++++++++++++++++++-------------- 1 file changed, 37 insertions(+), 14 deletions(-) diff --git a/mm/slob.c b/mm/slob.c index 307c2c9feb44..07356e9feaaa 100644 --- a/mm/slob.c +++ b/mm/slob.c @@ -213,13 +213,26 @@ static void slob_free_pages(void *b, int order) } /* - * Allocate a slob block within a given slob_page sp. + * slob_page_alloc() - Allocate a slob block within a given slob_page sp. + * @sp: Page to look in. + * @size: Size of the allocation. + * @align: Allocation alignment. + * @page_removed_from_list: Return parameter. + * + * Tries to find a chunk of memory at least @size bytes big within @page. + * + * Return: Pointer to memory if allocated, %NULL otherwise. If the + * allocation fills up @page then the page is removed from the + * freelist, in this case @page_removed_from_list will be set to + * true (set to false otherwise). 
*/ -static void *slob_page_alloc(struct page *sp, size_t size, int align) +static void *slob_page_alloc(struct page *sp, size_t size, int align, + bool *page_removed_from_list) { slob_t *prev, *cur, *aligned = NULL; int delta = 0, units = SLOB_UNITS(size); + *page_removed_from_list = false; for (prev = NULL, cur = sp->freelist; ; prev = cur, cur = slob_next(cur)) { slobidx_t avail = slob_units(cur); @@ -254,8 +267,10 @@ static void *slob_page_alloc(struct page *sp, size_t size, int align) } sp->units -= units; - if (!sp->units) + if (!sp->units) { clear_slob_page_free(sp); + *page_removed_from_list = true; + } return cur; } if (slob_last(cur)) @@ -269,10 +284,10 @@ static void *slob_page_alloc(struct page *sp, size_t size, int align) static void *slob_alloc(size_t size, gfp_t gfp, int align, int node) { struct page *sp; - struct list_head *prev; struct list_head *slob_list; slob_t *b = NULL; unsigned long flags; + bool _unused; if (size < SLOB_BREAK1) slob_list = &free_slob_small; @@ -284,6 +299,7 @@ static void *slob_alloc(size_t size, gfp_t gfp, int align, int node) spin_lock_irqsave(&slob_lock, flags); /* Iterate through each partially free page, try to find room */ list_for_each_entry(sp, slob_list, lru) { + bool page_removed_from_list = false; #ifdef CONFIG_NUMA /* * If there's a node specification, search for a partial @@ -296,18 +312,25 @@ static void *slob_alloc(size_t size, gfp_t gfp, int align, int node) if (sp->units < SLOB_UNITS(size)) continue; - /* Attempt to alloc */ - prev = sp->lru.prev; - b = slob_page_alloc(sp, size, align); + b = slob_page_alloc(sp, size, align, &page_removed_from_list); if (!b) continue; - /* Improve fragment distribution and reduce our average - * search time by starting our next search here. (see - * Knuth vol 1, sec 2.5, pg 449) */ - if (prev != slob_list->prev && - slob_list->next != prev->next) - list_move_tail(slob_list, prev->next); + /* + * If slob_page_alloc() removed sp from the list then we + * cannot call list functions on sp. If so allocation + * did not fragment the page anyway so optimisation is + * unnecessary. + */ + if (!page_removed_from_list) { + /* + * Improve fragment distribution and reduce our average + * search time by starting our next search here. (see + * Knuth vol 1, sec 2.5, pg 449) + */ + if (!list_is_first(&sp->lru, slob_list)) + list_rotate_to_front(&sp->lru, slob_list); + } break; } spin_unlock_irqrestore(&slob_lock, flags); @@ -326,7 +349,7 @@ static void *slob_alloc(size_t size, gfp_t gfp, int align, int node) INIT_LIST_HEAD(&sp->lru); set_slob(b, SLOB_UNITS(PAGE_SIZE), b + SLOB_UNITS(PAGE_SIZE)); set_slob_page_free(sp, slob_list); - b = slob_page_alloc(sp, size, align); + b = slob_page_alloc(sp, size, align, &_unused); BUG_ON(!b); spin_unlock_irqrestore(&slob_lock, flags); } From patchwork Tue Apr 2 23:05:41 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Tobin C. 
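The return-parameter pattern the patch introduces can be shown in isolation
with a small standalone C program (illustration only: the toy singly-linked
free list and every name below are invented for this example, this is not
slob code).  The callee reports via *removed whether it unlinked the node, so
the caller never has to peek at the node's links afterwards.

#include <stdbool.h>
#include <stdio.h>

struct node {
	struct node *next;
	int free_units;
};

/* Take @want units from @n; unlink it from @list if it becomes empty. */
static bool take_units(struct node **list, struct node *n, int want,
		       bool *removed)
{
	*removed = false;
	if (n->free_units < want)
		return false;

	n->free_units -= want;
	if (n->free_units == 0) {
		struct node **pp;

		for (pp = list; *pp; pp = &(*pp)->next) {
			if (*pp == n) {
				*pp = n->next;	/* node leaves the free list */
				*removed = true;
				break;
			}
		}
	}
	return true;
}

int main(void)
{
	struct node b = { NULL, 4 }, a = { &b, 2 };
	struct node *list = &a;
	bool removed;

	if (take_units(&list, &a, 2, &removed) && !removed) {
		/* Only here is it safe to keep using a's position in the list. */
	}
	printf("a removed: %s, head: %d units\n",
	       removed ? "yes" : "no", list->free_units);
	return 0;
}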
Harding" X-Patchwork-Id: 10882435 Return-Path: Received: from mail.wl.linuxfoundation.org (pdx-wl-mail.web.codeaurora.org [172.30.200.125]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 34B7A1575 for ; Tue, 2 Apr 2019 23:06:42 +0000 (UTC) Received: from mail.wl.linuxfoundation.org (localhost [127.0.0.1]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 20BAB28984 for ; Tue, 2 Apr 2019 23:06:42 +0000 (UTC) Received: by mail.wl.linuxfoundation.org (Postfix, from userid 486) id 14E2328988; Tue, 2 Apr 2019 23:06:42 +0000 (UTC) X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on pdx-wl-mail.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-2.9 required=2.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,MAILING_LIST_MULTI,RCVD_IN_DNSWL_NONE autolearn=ham version=3.3.1 Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 82D2728984 for ; Tue, 2 Apr 2019 23:06:41 +0000 (UTC) Received: by kanga.kvack.org (Postfix) id 511846B0274; Tue, 2 Apr 2019 19:06:40 -0400 (EDT) Delivered-To: linux-mm-outgoing@kvack.org Received: by kanga.kvack.org (Postfix, from userid 40) id 4C22B6B0275; Tue, 2 Apr 2019 19:06:40 -0400 (EDT) X-Original-To: int-list-linux-mm@kvack.org X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 38AFC6B0276; Tue, 2 Apr 2019 19:06:40 -0400 (EDT) X-Original-To: linux-mm@kvack.org X-Delivered-To: linux-mm@kvack.org Received: from mail-qk1-f199.google.com (mail-qk1-f199.google.com [209.85.222.199]) by kanga.kvack.org (Postfix) with ESMTP id 194246B0274 for ; Tue, 2 Apr 2019 19:06:40 -0400 (EDT) Received: by mail-qk1-f199.google.com with SMTP id 77so13071273qkd.9 for ; Tue, 02 Apr 2019 16:06:40 -0700 (PDT) X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:dkim-signature:from:to:cc:subject:date :message-id:in-reply-to:references:mime-version :content-transfer-encoding; bh=3K2yhT+sszNfHiVNPV2b/RN3GQNa8ZJypdeBAdHjZ+U=; b=JmPI2WjD9Em3KsP5Qi0I6phtFuTvv5upX/WVxO+f0XJ0ULRrGdBi1eDSqVpYBdphYH Hcoae+hU7DTpm2/oIWyiiEAPoYWCySsa3qLfY+nzpxg23uWTfmILIcABlUBCltr5nZ/f pH5t4Z1IfRTOtXYd0pCPCEZ+NiWtFZ0kAFrg4hxR0xuqQJ9GN4g/HoFmKj+QeB55/Nw0 KEYFgbaaiRHzERgRLhznEm6XEHD6DV/pqKQTsCSWmlbIX5jXvBqb5ApNsKopQFfFNC13 dpFK77O1bhiMGbAX9qs77CtzNb+c211808WqG+dhGFHSuwURMzcotEzv9DYfKjBYsOYk FuIg== X-Gm-Message-State: APjAAAUnBcgbeAs84U30wbHQTIXjqiwX8eQ5UCZv0clzA6CLcVD+X1jF i9rJXKPPbSIYJ3q3Z6P3yN/hvDOlBlPpP1oagDvRbrhOumO/QYRwja0pRhqlZgk3ab4V4zycAwm 8bDxIhzGCshcd/9dThTwDiFWKGmdDfEZj8pdSZyjO3V6fq3V/Mwhi37qHXOdwQrU= X-Received: by 2002:ac8:38b6:: with SMTP id f51mr39268301qtc.33.1554246399688; Tue, 02 Apr 2019 16:06:39 -0700 (PDT) X-Google-Smtp-Source: APXvYqwkvm3MngK4miOCkknlamRol9qFCsHAVxReyimXzg/976gGUVc7Ts+Shvg+RaBlacmNoMN/ X-Received: by 2002:ac8:38b6:: with SMTP id f51mr39268198qtc.33.1554246398067; Tue, 02 Apr 2019 16:06:38 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1554246398; cv=none; d=google.com; s=arc-20160816; b=VrKixVsuG1JJagqBD4jYglKvBcEoAYKR18lnEtNJA5EpS0XlYOFsiNAFFiP2oEirbN gqZFCLPEw74PDKvnxJzUYJKarQ7Axaj1QOGN+GV85ZNkMWWkflFJ+2EvR05IySRVOh+8 fCT81UXVRMF4Vuacwb5uCnkL1cdBLweMNtELxH9Dfjs7wIOAQh7N1UCpTif5aYQWOAFE N3GMIeHrYRPzu1V2Gli6In4UE0ahbOTGKzsTD3X1QHOLlJVSUCUGEviOMKlmKHPLbCZ1 FEAA092xWWQi6Bbh/R8Ux/ZKohJuQvgUY4mgCPKrUN9kyDELsl6+DVuxQN7scNk2JhW2 imUA== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=content-transfer-encoding:mime-version:references:in-reply-to 
:message-id:date:subject:cc:to:from:dkim-signature; bh=3K2yhT+sszNfHiVNPV2b/RN3GQNa8ZJypdeBAdHjZ+U=; b=GVpaxyz48HEw/0K0QywjsqSG6MxzEb4AsXb9PZUsNk4Y5MCKmRvAabpQqlbMWleo1i v5gqv+d03JgfikAvNE7hfheft5tzx1jKv8SwPmai7piEgZi+Z3Rk4B/mlzjzEYY4kDiT gy+KG4Bti+pJfeQzgeeLpiqFmuEAUN6ObL6MXRpUNB29KJKJ6Wirg1lrZrb0BvH7yDeK dL0C44xfx+SP2OPwdo7Eb6Ee5kQBQzdubefi9xFbt52BQXkc3sXQPAYkxeiWzcvfQ8BY yPn6AiH31jtcYb4lQq3cpa8YTVEiJduwRtutN0upokCdaXEj15q+2WYfSzmVpHeque5W nQ4Q== ARC-Authentication-Results: i=1; mx.google.com; dkim=pass header.i=@messagingengine.com header.s=fm2 header.b=J+ufpkhH; spf=softfail (google.com: domain of transitioning tobin@kernel.org does not designate 66.111.4.26 as permitted sender) smtp.mailfrom=tobin@kernel.org; dmarc=fail (p=NONE sp=NONE dis=NONE) header.from=kernel.org Received: from out2-smtp.messagingengine.com (out2-smtp.messagingengine.com. [66.111.4.26]) by mx.google.com with ESMTPS id a2si1124641qkl.123.2019.04.02.16.06.37 for (version=TLS1_2 cipher=ECDHE-RSA-AES128-GCM-SHA256 bits=128/128); Tue, 02 Apr 2019 16:06:38 -0700 (PDT) Received-SPF: softfail (google.com: domain of transitioning tobin@kernel.org does not designate 66.111.4.26 as permitted sender) client-ip=66.111.4.26; Authentication-Results: mx.google.com; dkim=pass header.i=@messagingengine.com header.s=fm2 header.b=J+ufpkhH; spf=softfail (google.com: domain of transitioning tobin@kernel.org does not designate 66.111.4.26 as permitted sender) smtp.mailfrom=tobin@kernel.org; dmarc=fail (p=NONE sp=NONE dis=NONE) header.from=kernel.org Received: from compute3.internal (compute3.nyi.internal [10.202.2.43]) by mailout.nyi.internal (Postfix) with ESMTP id ACCEC21EF6; Tue, 2 Apr 2019 19:06:37 -0400 (EDT) Received: from mailfrontend2 ([10.202.2.163]) by compute3.internal (MEProxy); Tue, 02 Apr 2019 19:06:37 -0400 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d= messagingengine.com; h=cc:content-transfer-encoding:date:from :in-reply-to:message-id:mime-version:references:subject:to :x-me-proxy:x-me-proxy:x-me-sender:x-me-sender:x-sasl-enc; s= fm2; bh=3K2yhT+sszNfHiVNPV2b/RN3GQNa8ZJypdeBAdHjZ+U=; b=J+ufpkhH vZEfzYhN14fxy08UgL7zicdFN3w19iPwhvtvHDyyFGhp+th9Y4H+bs59zZ/lCEYf sAuMzO0kEdX0d6MrM7j3VVadRO79mpKP0OeBSX9X+7XR3fTJ2yz68brwxGaHSN7/ D2xQ5Gzp5OpC+9JndF8XHSE37m6iEO2seUGEasvV+WqttsN3o24s6pa2VLPj8TSP Cpfyi7lPHhEFsybWnMCnfT/5akQtQ0uPXoTCed5EXmf51gCcEe07hzG7E6OEhOU2 3GN76kSDFAZda6mmChVz1dzZRZlbm5d40Y35rHyhWSbQtkAo37gq3JbRjTmTVTn0 Q+qfUXPIXyCO/Q== X-ME-Sender: X-ME-Proxy-Cause: gggruggvucftvghtrhhoucdtuddrgeduuddrtddugdduieculddtuddrgedutddrtddtmd cutefuodetggdotefrodftvfcurfhrohhfihhlvgemucfhrghsthforghilhdpqfgfvfdp uffrtefokffrpgfnqfghnecuuegrihhlohhuthemuceftddtnecusecvtfgvtghiphhivg hnthhsucdlqddutddtmdenucfjughrpefhvffufffkofgjfhgggfestdekredtredttden ucfhrhhomhepfdfvohgsihhnucevrdcujfgrrhguihhnghdfuceothhosghinheskhgvrh hnvghlrdhorhhgqeenucfkphepuddvgedrudeiledrvdejrddvtdeknecurfgrrhgrmhep mhgrihhlfhhrohhmpehtohgsihhnsehkvghrnhgvlhdrohhrghenucevlhhushhtvghruf hiiigvpedu X-ME-Proxy: Received: from eros.localdomain (124-169-27-208.dyn.iinet.net.au [124.169.27.208]) by mail.messagingengine.com (Postfix) with ESMTPA id 212961031A; Tue, 2 Apr 2019 19:06:33 -0400 (EDT) From: "Tobin C. Harding" To: Andrew Morton Cc: "Tobin C. 
Harding" , Roman Gushchin , Christoph Lameter , Pekka Enberg , David Rientjes , Joonsoo Kim , Matthew Wilcox , linux-mm@kvack.org, linux-kernel@vger.kernel.org Subject: [PATCH v5 3/7] slob: Use slab_list instead of lru Date: Wed, 3 Apr 2019 10:05:41 +1100 Message-Id: <20190402230545.2929-4-tobin@kernel.org> X-Mailer: git-send-email 2.21.0 In-Reply-To: <20190402230545.2929-1-tobin@kernel.org> References: <20190402230545.2929-1-tobin@kernel.org> MIME-Version: 1.0 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: X-Virus-Scanned: ClamAV using ClamSMTP Currently we use the page->lru list for maintaining lists of slabs. We have a list_head in the page structure (slab_list) that can be used for this purpose. Doing so makes the code cleaner since we are not overloading the lru list. The slab_list is part of a union within the page struct (included here stripped down): union { struct { /* Page cache and anonymous pages */ struct list_head lru; ... }; struct { dma_addr_t dma_addr; }; struct { /* slab, slob and slub */ union { struct list_head slab_list; struct { /* Partial pages */ struct page *next; int pages; /* Nr of pages left */ int pobjects; /* Approximate count */ }; }; ... Here we see that slab_list and lru are the same bits. We can verify that this change is safe to do by examining the object file produced from slob.c before and after this patch is applied. Steps taken to verify: 1. checkout current tip of Linus' tree commit a667cb7a94d4 ("Merge branch 'akpm' (patches from Andrew)") 2. configure and build (select SLOB allocator) CONFIG_SLOB=y CONFIG_SLAB_MERGE_DEFAULT=y 3. dissasemble object file `objdump -dr mm/slub.o > before.s 4. apply patch 5. build 6. dissasemble object file `objdump -dr mm/slub.o > after.s 7. diff before.s after.s Use slab_list list_head instead of the lru list_head for maintaining lists of slabs. Reviewed-by: Roman Gushchin Signed-off-by: Tobin C. Harding Acked-by: Christoph Lameter --- mm/slob.c | 12 ++++++------ 1 file changed, 6 insertions(+), 6 deletions(-) diff --git a/mm/slob.c b/mm/slob.c index 07356e9feaaa..84aefd9b91ee 100644 --- a/mm/slob.c +++ b/mm/slob.c @@ -112,13 +112,13 @@ static inline int slob_page_free(struct page *sp) static void set_slob_page_free(struct page *sp, struct list_head *list) { - list_add(&sp->lru, list); + list_add(&sp->slab_list, list); __SetPageSlobFree(sp); } static inline void clear_slob_page_free(struct page *sp) { - list_del(&sp->lru); + list_del(&sp->slab_list); __ClearPageSlobFree(sp); } @@ -298,7 +298,7 @@ static void *slob_alloc(size_t size, gfp_t gfp, int align, int node) spin_lock_irqsave(&slob_lock, flags); /* Iterate through each partially free page, try to find room */ - list_for_each_entry(sp, slob_list, lru) { + list_for_each_entry(sp, slob_list, slab_list) { bool page_removed_from_list = false; #ifdef CONFIG_NUMA /* @@ -328,8 +328,8 @@ static void *slob_alloc(size_t size, gfp_t gfp, int align, int node) * search time by starting our next search here. 
(see * Knuth vol 1, sec 2.5, pg 449) */ - if (!list_is_first(&sp->lru, slob_list)) - list_rotate_to_front(&sp->lru, slob_list); + if (!list_is_first(&sp->slab_list, slob_list)) + list_rotate_to_front(&sp->slab_list, slob_list); } break; } @@ -346,7 +346,7 @@ static void *slob_alloc(size_t size, gfp_t gfp, int align, int node) spin_lock_irqsave(&slob_lock, flags); sp->units = SLOB_UNITS(PAGE_SIZE); sp->freelist = b; - INIT_LIST_HEAD(&sp->lru); + INIT_LIST_HEAD(&sp->slab_list); set_slob(b, SLOB_UNITS(PAGE_SIZE), b + SLOB_UNITS(PAGE_SIZE)); set_slob_page_free(sp, slob_list); b = slob_page_alloc(sp, size, align, &_unused); From patchwork Tue Apr 2 23:05:42 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Tobin C. Harding" X-Patchwork-Id: 10882437 Return-Path: Received: from mail.wl.linuxfoundation.org (pdx-wl-mail.web.codeaurora.org [172.30.200.125]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id B07E31575 for ; Tue, 2 Apr 2019 23:06:45 +0000 (UTC) Received: from mail.wl.linuxfoundation.org (localhost [127.0.0.1]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 96E4B2897A for ; Tue, 2 Apr 2019 23:06:45 +0000 (UTC) Received: by mail.wl.linuxfoundation.org (Postfix, from userid 486) id 8AB6928986; Tue, 2 Apr 2019 23:06:45 +0000 (UTC) X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on pdx-wl-mail.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-2.9 required=2.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,MAILING_LIST_MULTI,RCVD_IN_DNSWL_NONE autolearn=ham version=3.3.1 Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id EA9022897A for ; Tue, 2 Apr 2019 23:06:44 +0000 (UTC) Received: by kanga.kvack.org (Postfix) id B53386B0275; Tue, 2 Apr 2019 19:06:43 -0400 (EDT) Delivered-To: linux-mm-outgoing@kvack.org Received: by kanga.kvack.org (Postfix, from userid 40) id B05226B0276; Tue, 2 Apr 2019 19:06:43 -0400 (EDT) X-Original-To: int-list-linux-mm@kvack.org X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 9CC9D6B0277; Tue, 2 Apr 2019 19:06:43 -0400 (EDT) X-Original-To: linux-mm@kvack.org X-Delivered-To: linux-mm@kvack.org Received: from mail-qt1-f198.google.com (mail-qt1-f198.google.com [209.85.160.198]) by kanga.kvack.org (Postfix) with ESMTP id 797806B0275 for ; Tue, 2 Apr 2019 19:06:43 -0400 (EDT) Received: by mail-qt1-f198.google.com with SMTP id x12so15087014qtk.2 for ; Tue, 02 Apr 2019 16:06:43 -0700 (PDT) X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:dkim-signature:from:to:cc:subject:date :message-id:in-reply-to:references:mime-version :content-transfer-encoding; bh=aP7x6ocNpn524UbFJtnYWiwqT9lY5T2tQUUdBKmIxoY=; b=sxOCA9rPxMp/p3TINGGVAT2TVYNqzfOUVgLaCTfwZ4HpSiHn+0mHWHhwZ3I1wVdnB8 AOr5ZfKKOt4ngVDn8kySKSn4/gijEMnOqLgnr67P31vvRRW+bcQsbz3VqtdMQr+8RIe0 kLdawBYXqVld61ffjK45k59K+IVVztJR2DBu9UFUBMbvsdAFzsh7DhAHsI6jVRaYLduC urvVOumIl+5HunRGutR2aMUW+o5F8XBd5K4LZ5s6JtlDSArxL6im1/iQfFsN7qn82A2a foFv0/hbDJ0ZozcaPjq1CDOHSTiGkXR659yd65wZ9W+8r5UI9+4bXH5wFhsbCDgt09fQ gutg== X-Gm-Message-State: APjAAAWDiAKyyblHI000g7yw/M4mhmQ0NujWFubgZoiqesND902zp2c6 TO3Rxj0tC875Bj3vhGlUbUQ+xbnZz3FLyaK9skh/whhViuwQ30+aw25XbYCZEOrOAWeAiBZyqid +EdqhOWmCgetKFDuilEMqdr5p0hJ23ugmp81E48t/3rhcYWe5HD1uqCrMbLeaIr8= X-Received: by 2002:a0c:f989:: with SMTP id t9mr4975658qvn.74.1554246403252; Tue, 02 Apr 2019 16:06:43 -0700 (PDT) 
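The "same bits" claim can also be checked outside the kernel.  The standalone
C program below is an illustration only: struct fake_page is invented for the
example and merely mimics the layout idea, it is not the real struct page.
It shows that two members of one anonymous union share an offset, which is why
renaming the member the code uses cannot change the generated object code.

#include <stdio.h>
#include <stddef.h>

struct list_head {
	struct list_head *next, *prev;
};

struct fake_page {
	unsigned long flags;
	union {
		struct {
			struct list_head lru;		/* page cache / anon */
		};
		struct {
			struct list_head slab_list;	/* slab, slob, slub */
		};
	};
};

int main(void)
{
	/* Both lines print the same offset: the members alias the same bytes. */
	printf("offsetof lru       = %zu\n", offsetof(struct fake_page, lru));
	printf("offsetof slab_list = %zu\n", offsetof(struct fake_page, slab_list));
	return 0;
}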
From patchwork Tue Apr 2 23:05:42 2019
X-Patchwork-Submitter: "Tobin C. Harding"
X-Patchwork-Id: 10882437
From: "Tobin C. Harding"
To: Andrew Morton
Cc: "Tobin C. Harding", Roman Gushchin, Christoph Lameter, Pekka Enberg,
 David Rientjes, Joonsoo Kim, Matthew Wilcox, linux-mm@kvack.org,
 linux-kernel@vger.kernel.org
Subject: [PATCH v5 4/7] slub: Add comments to endif pre-processor macros
Date: Wed, 3 Apr 2019 10:05:42 +1100
Message-Id: <20190402230545.2929-5-tobin@kernel.org>
In-Reply-To: <20190402230545.2929-1-tobin@kernel.org>
References: <20190402230545.2929-1-tobin@kernel.org>

The SLUB allocator makes heavy use of ifdef/endif pre-processor macros.  The
pairing of these statements is at times hard to follow, e.g. if the pair is
further than a screen apart or if the pairs are nested.  We can reduce the
cognitive load by adding a comment to the endif statement of the form

	#ifdef CONFIG_FOO
	...
	#endif /* CONFIG_FOO */

Add comments to endif pre-processor macros if the ifdef/endif pair is not
immediately apparent.

Acked-by: Christoph Lameter
Signed-off-by: Tobin C. Harding
Reviewed-by: Roman Gushchin
---
 mm/slub.c | 20 ++++++++++----------
 1 file changed, 10 insertions(+), 10 deletions(-)

diff --git a/mm/slub.c b/mm/slub.c
index d30ede89f4a6..8fbba4ff6c67 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -1951,7 +1951,7 @@ static void *get_any_partial(struct kmem_cache *s, gfp_t flags,
 			}
 		}
 	} while (read_mems_allowed_retry(cpuset_mems_cookie));
-#endif
+#endif	/* CONFIG_NUMA */
 	return NULL;
 }
 
@@ -2249,7 +2249,7 @@ static void unfreeze_partials(struct kmem_cache *s,
 		discard_slab(s, page);
 		stat(s, FREE_SLAB);
 	}
-#endif
+#endif	/* CONFIG_SLUB_CPU_PARTIAL */
 }
 
 /*
@@ -2308,7 +2308,7 @@ static void put_cpu_partial(struct kmem_cache *s, struct page *page, int drain)
 			local_irq_restore(flags);
 	}
 	preempt_enable();
-#endif
+#endif	/* CONFIG_SLUB_CPU_PARTIAL */
 }
 
 static inline void flush_slab(struct kmem_cache *s, struct kmem_cache_cpu *c)
@@ -2813,7 +2813,7 @@ void *kmem_cache_alloc_node_trace(struct kmem_cache *s,
 }
 EXPORT_SYMBOL(kmem_cache_alloc_node_trace);
 #endif
-#endif
+#endif	/* CONFIG_NUMA */
 
 /*
  * Slow path handling. This may still be called frequently since objects
@@ -3848,7 +3848,7 @@ void *__kmalloc_node(size_t size, gfp_t flags, int node)
 	return ret;
 }
 EXPORT_SYMBOL(__kmalloc_node);
-#endif
+#endif	/* CONFIG_NUMA */
 
 #ifdef CONFIG_HARDENED_USERCOPY
 /*
@@ -4066,7 +4066,7 @@ void __kmemcg_cache_deactivate(struct kmem_cache *s)
 	 */
 	slab_deactivate_memcg_cache_rcu_sched(s, kmemcg_cache_deact_after_rcu);
 }
-#endif
+#endif	/* CONFIG_MEMCG */
 
 static int slab_mem_going_offline_callback(void *arg)
 {
@@ -4699,7 +4699,7 @@ static int list_locations(struct kmem_cache *s, char *buf,
 		len += sprintf(buf, "No data\n");
 	return len;
 }
-#endif
+#endif	/* CONFIG_SLUB_DEBUG */
 
 #ifdef SLUB_RESILIENCY_TEST
 static void __init resiliency_test(void)
@@ -4759,7 +4759,7 @@ static void __init resiliency_test(void)
 #ifdef CONFIG_SYSFS
 static void resiliency_test(void) {};
 #endif
-#endif
+#endif	/* SLUB_RESILIENCY_TEST */
 
 #ifdef CONFIG_SYSFS
 enum slab_stat_type {
@@ -5416,7 +5416,7 @@ STAT_ATTR(CPU_PARTIAL_ALLOC, cpu_partial_alloc);
 STAT_ATTR(CPU_PARTIAL_FREE, cpu_partial_free);
 STAT_ATTR(CPU_PARTIAL_NODE, cpu_partial_node);
 STAT_ATTR(CPU_PARTIAL_DRAIN, cpu_partial_drain);
-#endif
+#endif	/* CONFIG_SLUB_STATS */
 
 static struct attribute *slab_attrs[] = {
 	&slab_size_attr.attr,
@@ -5617,7 +5617,7 @@ static void memcg_propagate_slab_attrs(struct kmem_cache *s)
 
 	if (buffer)
 		free_page((unsigned long)buffer);
-#endif
+#endif	/* CONFIG_MEMCG */
 }
 
 static void kmem_cache_release(struct kobject *k)
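For reference, the convention looks like this in a tiny standalone example
(the CONFIG_FOO macros are invented for the illustration and are not from
slub.c): each endif names the condition it closes, which is what keeps nested
or far-apart pairs readable.

#include <stdio.h>

#define CONFIG_FOO 1
#define CONFIG_FOO_DEBUG 1

int main(void)
{
#ifdef CONFIG_FOO
	puts("foo enabled");
#ifdef CONFIG_FOO_DEBUG
	puts("foo debug enabled");
#endif /* CONFIG_FOO_DEBUG */
#endif /* CONFIG_FOO */
	return 0;
}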
From patchwork Tue Apr 2 23:05:43 2019
X-Patchwork-Submitter: "Tobin C. Harding"
X-Patchwork-Id: 10882439
From: "Tobin C. Harding"
To: Andrew Morton
Cc: "Tobin C. Harding", Roman Gushchin, Christoph Lameter, Pekka Enberg,
 David Rientjes, Joonsoo Kim, Matthew Wilcox, linux-mm@kvack.org,
 linux-kernel@vger.kernel.org
Subject: [PATCH v5 5/7] slub: Use slab_list instead of lru
Date: Wed, 3 Apr 2019 10:05:43 +1100
Message-Id: <20190402230545.2929-6-tobin@kernel.org>
In-Reply-To: <20190402230545.2929-1-tobin@kernel.org>
References: <20190402230545.2929-1-tobin@kernel.org>

Currently we use the page->lru list for maintaining lists of slabs.  We have a
list in the page structure (slab_list) that can be used for this purpose.
Doing so makes the code cleaner since we are not overloading the lru list.

Use the slab_list instead of the lru list for maintaining lists of slabs.
Acked-by: Christoph Lameter
Signed-off-by: Tobin C. Harding
Reviewed-by: Roman Gushchin
---
 mm/slub.c | 40 ++++++++++++++++++++--------------------
 1 file changed, 20 insertions(+), 20 deletions(-)

diff --git a/mm/slub.c b/mm/slub.c
index 8fbba4ff6c67..d17f117830a9 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -1023,7 +1023,7 @@ static void add_full(struct kmem_cache *s,
 		return;
 
 	lockdep_assert_held(&n->list_lock);
-	list_add(&page->lru, &n->full);
+	list_add(&page->slab_list, &n->full);
 }
 
 static void remove_full(struct kmem_cache *s, struct kmem_cache_node *n, struct page *page)
@@ -1032,7 +1032,7 @@ static void remove_full(struct kmem_cache *s, struct kmem_cache_node *n, struct
 		return;
 
 	lockdep_assert_held(&n->list_lock);
-	list_del(&page->lru);
+	list_del(&page->slab_list);
 }
 
 /* Tracking of the number of slabs for debugging purposes */
@@ -1773,9 +1773,9 @@ __add_partial(struct kmem_cache_node *n, struct page *page, int tail)
 {
 	n->nr_partial++;
 	if (tail == DEACTIVATE_TO_TAIL)
-		list_add_tail(&page->lru, &n->partial);
+		list_add_tail(&page->slab_list, &n->partial);
 	else
-		list_add(&page->lru, &n->partial);
+		list_add(&page->slab_list, &n->partial);
 }
 
 static inline void add_partial(struct kmem_cache_node *n,
@@ -1789,7 +1789,7 @@ static inline void remove_partial(struct kmem_cache_node *n,
 					struct page *page)
 {
 	lockdep_assert_held(&n->list_lock);
-	list_del(&page->lru);
+	list_del(&page->slab_list);
 	n->nr_partial--;
 }
 
@@ -1863,7 +1863,7 @@ static void *get_partial_node(struct kmem_cache *s, struct kmem_cache_node *n,
 		return NULL;
 
 	spin_lock(&n->list_lock);
-	list_for_each_entry_safe(page, page2, &n->partial, lru) {
+	list_for_each_entry_safe(page, page2, &n->partial, slab_list) {
 		void *t;
 
 		if (!pfmemalloc_match(page, flags))
@@ -2407,7 +2407,7 @@ static unsigned long count_partial(struct kmem_cache_node *n,
 	struct page *page;
 
 	spin_lock_irqsave(&n->list_lock, flags);
-	list_for_each_entry(page, &n->partial, lru)
+	list_for_each_entry(page, &n->partial, slab_list)
 		x += get_count(page);
 	spin_unlock_irqrestore(&n->list_lock, flags);
 	return x;
@@ -3705,10 +3705,10 @@ static void free_partial(struct kmem_cache *s, struct kmem_cache_node *n)
 
 	BUG_ON(irqs_disabled());
 	spin_lock_irq(&n->list_lock);
-	list_for_each_entry_safe(page, h, &n->partial, lru) {
+	list_for_each_entry_safe(page, h, &n->partial, slab_list) {
 		if (!page->inuse) {
 			remove_partial(n, page);
-			list_add(&page->lru, &discard);
+			list_add(&page->slab_list, &discard);
 		} else {
 			list_slab_objects(s, page,
 			"Objects remaining in %s on __kmem_cache_shutdown()");
@@ -3716,7 +3716,7 @@ static void free_partial(struct kmem_cache *s, struct kmem_cache_node *n)
 	}
 	spin_unlock_irq(&n->list_lock);
 
-	list_for_each_entry_safe(page, h, &discard, lru)
+	list_for_each_entry_safe(page, h, &discard, slab_list)
 		discard_slab(s, page);
 }
 
@@ -3996,7 +3996,7 @@ int __kmem_cache_shrink(struct kmem_cache *s)
 		 * Note that concurrent frees may occur while we hold the
 		 * list_lock. page->inuse here is the upper limit.
 		 */
-		list_for_each_entry_safe(page, t, &n->partial, lru) {
+		list_for_each_entry_safe(page, t, &n->partial, slab_list) {
 			int free = page->objects - page->inuse;
 
 			/* Do not reread page->inuse */
@@ -4006,10 +4006,10 @@ int __kmem_cache_shrink(struct kmem_cache *s)
 			BUG_ON(free <= 0);
 
 			if (free == page->objects) {
-				list_move(&page->lru, &discard);
+				list_move(&page->slab_list, &discard);
 				n->nr_partial--;
 			} else if (free <= SHRINK_PROMOTE_MAX)
-				list_move(&page->lru, promote + free - 1);
+				list_move(&page->slab_list, promote + free - 1);
 		}
 
 		/*
@@ -4022,7 +4022,7 @@ int __kmem_cache_shrink(struct kmem_cache *s)
 		spin_unlock_irqrestore(&n->list_lock, flags);
 
 		/* Release empty slabs */
-		list_for_each_entry_safe(page, t, &discard, lru)
+		list_for_each_entry_safe(page, t, &discard, slab_list)
 			discard_slab(s, page);
 
 		if (slabs_node(s, node))
@@ -4214,11 +4214,11 @@ static struct kmem_cache * __init bootstrap(struct kmem_cache *static_cache)
 	for_each_kmem_cache_node(s, node, n) {
 		struct page *p;
 
-		list_for_each_entry(p, &n->partial, lru)
+		list_for_each_entry(p, &n->partial, slab_list)
 			p->slab_cache = s;
 
 #ifdef CONFIG_SLUB_DEBUG
-		list_for_each_entry(p, &n->full, lru)
+		list_for_each_entry(p, &n->full, slab_list)
 			p->slab_cache = s;
 #endif
 	}
@@ -4435,7 +4435,7 @@ static int validate_slab_node(struct kmem_cache *s,
 
 	spin_lock_irqsave(&n->list_lock, flags);
 
-	list_for_each_entry(page, &n->partial, lru) {
+	list_for_each_entry(page, &n->partial, slab_list) {
 		validate_slab_slab(s, page, map);
 		count++;
 	}
@@ -4446,7 +4446,7 @@ static int validate_slab_node(struct kmem_cache *s,
 	if (!(s->flags & SLAB_STORE_USER))
 		goto out;
 
-	list_for_each_entry(page, &n->full, lru) {
+	list_for_each_entry(page, &n->full, slab_list) {
 		validate_slab_slab(s, page, map);
 		count++;
 	}
@@ -4642,9 +4642,9 @@ static int list_locations(struct kmem_cache *s, char *buf,
 			continue;
 
 		spin_lock_irqsave(&n->list_lock, flags);
-		list_for_each_entry(page, &n->partial, lru)
+		list_for_each_entry(page, &n->partial, slab_list)
 			process_slab(&t, s, page, alloc, map);
-		list_for_each_entry(page, &n->full, lru)
+		list_for_each_entry(page, &n->full, slab_list)
 			process_slab(&t, s, page, alloc, map);
 		spin_unlock_irqrestore(&n->list_lock, flags);
 	}
Harding" X-Patchwork-Id: 10882441 Return-Path: Received: from mail.wl.linuxfoundation.org (pdx-wl-mail.web.codeaurora.org [172.30.200.125]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id EB5D71708 for ; Tue, 2 Apr 2019 23:06:53 +0000 (UTC) Received: from mail.wl.linuxfoundation.org (localhost [127.0.0.1]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id D5E182897A for ; Tue, 2 Apr 2019 23:06:53 +0000 (UTC) Received: by mail.wl.linuxfoundation.org (Postfix, from userid 486) id C941028984; Tue, 2 Apr 2019 23:06:53 +0000 (UTC) X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on pdx-wl-mail.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-2.9 required=2.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,MAILING_LIST_MULTI,RCVD_IN_DNSWL_NONE autolearn=ham version=3.3.1 Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 207AA2897A for ; Tue, 2 Apr 2019 23:06:53 +0000 (UTC) Received: by kanga.kvack.org (Postfix) id CB0F96B0277; Tue, 2 Apr 2019 19:06:51 -0400 (EDT) Delivered-To: linux-mm-outgoing@kvack.org Received: by kanga.kvack.org (Postfix, from userid 40) id C640F6B0278; Tue, 2 Apr 2019 19:06:51 -0400 (EDT) X-Original-To: int-list-linux-mm@kvack.org X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id B52A26B0279; Tue, 2 Apr 2019 19:06:51 -0400 (EDT) X-Original-To: linux-mm@kvack.org X-Delivered-To: linux-mm@kvack.org Received: from mail-qt1-f197.google.com (mail-qt1-f197.google.com [209.85.160.197]) by kanga.kvack.org (Postfix) with ESMTP id 90E206B0277 for ; Tue, 2 Apr 2019 19:06:51 -0400 (EDT) Received: by mail-qt1-f197.google.com with SMTP id 18so15041904qtw.20 for ; Tue, 02 Apr 2019 16:06:51 -0700 (PDT) X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:dkim-signature:from:to:cc:subject:date :message-id:in-reply-to:references:mime-version :content-transfer-encoding; bh=gJ4aFp/Hf0v/GP1gn4YkY0hhZE2uSZfGmW1/JObfUr4=; b=G7n2FEWo978u9iZZw4ZtOlmo5RT+HDAxCSRZuYAVPeqQz9Vf7q7qNiJjNRqh6kQAL4 7EIehuKCjabUBZlOLBnssRDKHN/ZuYkhDmtcRDdK9+r8nj2NJn1MxDx+JV9Nsi9wEXCf 8OMONILLRMVeJ11ysgUZfgR03yj/iFtkSOU/CSmc0NUnvCSxArUhbWOAX7Qmo6nJO3YP D170c+hoqBN8T7B0htp8h02HAXwfXAbeH+3S7vRDGxDrHZReFsMb+GKOWlLISBpqmFmt kzU05dm+DVjS+DQmfsztrWHMTWoBkyLc/OJWXDFgrNoSkoCwRWXQFieY2/wNacPYtY+D dwxA== X-Gm-Message-State: APjAAAUvkO9pojkgqECd1nvGfe//ZBu5xmR1s/WawluO7zgH6gStAJzp DSDmOzxkX59dmyaTk1w6OdrpUbhtndKD/o22IvBTUF+6U0L9y3kflJGHjNuxZdNhv4GHqjqokT5 11C5LN+SHMI+MKn/8MACKunFoNmYszYELvxTbdJ2TESv5AFZ/rZxQPJfg9d19wWU= X-Received: by 2002:a0c:d25a:: with SMTP id o26mr61259475qvh.78.1554246411354; Tue, 02 Apr 2019 16:06:51 -0700 (PDT) X-Google-Smtp-Source: APXvYqwfPYSlakVER7dtdh9WPnsJT1QpWSosXxnUlBpTnndP23+mhIqTffq7bboM4rTqXRuCd8s0 X-Received: by 2002:a0c:d25a:: with SMTP id o26mr61259434qvh.78.1554246410603; Tue, 02 Apr 2019 16:06:50 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1554246410; cv=none; d=google.com; s=arc-20160816; b=y8vqbgjZWh8ILrIYcuFDBRkXIuEyeSfoI3o8UFEoGKDWQ3vXkoGRi4xVJbWRP/BjU5 KFiGlHUCI8WJfSR+zbLY8hueZeufALNXXDKOVkJqSqhO8KR4MZ7OTn7arg+An5eQw3PI 59NPu1KLGIKSgYxT4pEiUfdEGV4QvUGrSmdKEwLZ5BdrH3Xm9sUyC42pttkJ611CKWsK ZMuervNOePoehMjoh3A37qHOkrQtT1wJ+KsWAbRif7PzHCGx2VlzBsnp+iIItXv+d619 38cpf+MtpkGwBNEcotPDry4Ts4Q5OWM5Z/iUY68uprxfHZHmofN+4Y9she37QzxrkspY VgFA== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=content-transfer-encoding:mime-version:references:in-reply-to 
:message-id:date:subject:cc:to:from:dkim-signature; bh=gJ4aFp/Hf0v/GP1gn4YkY0hhZE2uSZfGmW1/JObfUr4=; b=eVkN3NN9dLyUHlU2xSyqZD/GrDljlNKDmMRVDs9fHzrKTScyQyc5JyfG9UcMg4R4IU Apdayw3sKUwX0mMmmb76yb0+HB6T/U839Y0aCGX5wMj4pJuH6q4HR8LccyhO2Mj/LdjP 00t0eTBFptdZ3bTn8TsG0t/aHxuOs9q4rAC+AuIigSKjhCxrS0KtKCo2Y0pz8LD2QEGx MvXF+UYvRlfC1k1ea6pglUl3CdpTIIDnW1r664lBwYggaVZuQBcF3/un9NOon7Gcy4/4 GmnQ0tTgy4Dk1Ki35cC3hIvriOslkTyJxIRdbYNxsvrp6idfGOh82UHp6wSnLxrX5O26 FT1g== ARC-Authentication-Results: i=1; mx.google.com; dkim=pass header.i=@messagingengine.com header.s=fm2 header.b=CIjqBdlg; spf=softfail (google.com: domain of transitioning tobin@kernel.org does not designate 66.111.4.26 as permitted sender) smtp.mailfrom=tobin@kernel.org; dmarc=fail (p=NONE sp=NONE dis=NONE) header.from=kernel.org Received: from out2-smtp.messagingengine.com (out2-smtp.messagingengine.com. [66.111.4.26]) by mx.google.com with ESMTPS id l25si2125153qtj.228.2019.04.02.16.06.50 for (version=TLS1_2 cipher=ECDHE-RSA-AES128-GCM-SHA256 bits=128/128); Tue, 02 Apr 2019 16:06:50 -0700 (PDT) Received-SPF: softfail (google.com: domain of transitioning tobin@kernel.org does not designate 66.111.4.26 as permitted sender) client-ip=66.111.4.26; Authentication-Results: mx.google.com; dkim=pass header.i=@messagingengine.com header.s=fm2 header.b=CIjqBdlg; spf=softfail (google.com: domain of transitioning tobin@kernel.org does not designate 66.111.4.26 as permitted sender) smtp.mailfrom=tobin@kernel.org; dmarc=fail (p=NONE sp=NONE dis=NONE) header.from=kernel.org Received: from compute3.internal (compute3.nyi.internal [10.202.2.43]) by mailout.nyi.internal (Postfix) with ESMTP id 545232208C; Tue, 2 Apr 2019 19:06:50 -0400 (EDT) Received: from mailfrontend2 ([10.202.2.163]) by compute3.internal (MEProxy); Tue, 02 Apr 2019 19:06:50 -0400 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d= messagingengine.com; h=cc:content-transfer-encoding:date:from :in-reply-to:message-id:mime-version:references:subject:to :x-me-proxy:x-me-proxy:x-me-sender:x-me-sender:x-sasl-enc; s= fm2; bh=gJ4aFp/Hf0v/GP1gn4YkY0hhZE2uSZfGmW1/JObfUr4=; b=CIjqBdlg 4pZi6byHB2UOnB7lSDMnuzKYQtIxxkQFbQacRaQVuQ3COHrquyuDyQyrIPElSA5j yAbr6GDGctSOU3N9wHRj4GEGxa/QHIgio+rRGfIvhf2EAqP10uxD5BdddpWg5q71 4PWX27lwB6Lp+zoVH+B5b6tgekX+zOVnHiCQgzm68W2uy2E47LNctUnSiqq6WDgs GxoBKz2KBbFxmSfnlNAliVJOI51HEY+jA3xpD8lvWA44E5DFEXSxosPOqYvvkdRy dO9keOl418HZFw/DoeJ2giYIxyOux/lUOE8XtVFAgMmp3UxsnO+0KJFjsgk9NqNY cPTJEQCmBfzrgQ== X-ME-Sender: X-ME-Proxy-Cause: gggruggvucftvghtrhhoucdtuddrgeduuddrtddugdduieculddtuddrgedutddrtddtmd cutefuodetggdotefrodftvfcurfhrohhfihhlvgemucfhrghsthforghilhdpqfgfvfdp uffrtefokffrpgfnqfghnecuuegrihhlohhuthemuceftddtnecusecvtfgvtghiphhivg hnthhsucdlqddutddtmdenucfjughrpefhvffufffkofgjfhgggfestdekredtredttden ucfhrhhomhepfdfvohgsihhnucevrdcujfgrrhguihhnghdfuceothhosghinheskhgvrh hnvghlrdhorhhgqeenucfkphepuddvgedrudeiledrvdejrddvtdeknecurfgrrhgrmhep mhgrihhlfhhrohhmpehtohgsihhnsehkvghrnhgvlhdrohhrghenucevlhhushhtvghruf hiiigvpeeg X-ME-Proxy: Received: from eros.localdomain (124-169-27-208.dyn.iinet.net.au [124.169.27.208]) by mail.messagingengine.com (Postfix) with ESMTPA id B549E100E5; Tue, 2 Apr 2019 19:06:46 -0400 (EDT) From: "Tobin C. Harding" To: Andrew Morton Cc: "Tobin C. 
Harding" , Roman Gushchin , Christoph Lameter , Pekka Enberg , David Rientjes , Joonsoo Kim , Matthew Wilcox , linux-mm@kvack.org, linux-kernel@vger.kernel.org Subject: [PATCH v5 6/7] slab: Use slab_list instead of lru Date: Wed, 3 Apr 2019 10:05:44 +1100 Message-Id: <20190402230545.2929-7-tobin@kernel.org> X-Mailer: git-send-email 2.21.0 In-Reply-To: <20190402230545.2929-1-tobin@kernel.org> References: <20190402230545.2929-1-tobin@kernel.org> MIME-Version: 1.0 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: X-Virus-Scanned: ClamAV using ClamSMTP Currently we use the page->lru list for maintaining lists of slabs. We have a list in the page structure (slab_list) that can be used for this purpose. Doing so makes the code cleaner since we are not overloading the lru list. Use the slab_list instead of the lru list for maintaining lists of slabs. Signed-off-by: Tobin C. Harding Acked-by: Christoph Lameter Reviewed-by: Roman Gushchin --- mm/slab.c | 49 +++++++++++++++++++++++++------------------------ 1 file changed, 25 insertions(+), 24 deletions(-) diff --git a/mm/slab.c b/mm/slab.c index 329bfe67f2ca..09e2a0131338 100644 --- a/mm/slab.c +++ b/mm/slab.c @@ -1710,8 +1710,8 @@ static void slabs_destroy(struct kmem_cache *cachep, struct list_head *list) { struct page *page, *n; - list_for_each_entry_safe(page, n, list, lru) { - list_del(&page->lru); + list_for_each_entry_safe(page, n, list, slab_list) { + list_del(&page->slab_list); slab_destroy(cachep, page); } } @@ -2267,8 +2267,8 @@ static int drain_freelist(struct kmem_cache *cache, goto out; } - page = list_entry(p, struct page, lru); - list_del(&page->lru); + page = list_entry(p, struct page, slab_list); + list_del(&page->slab_list); n->free_slabs--; n->total_slabs--; /* @@ -2728,13 +2728,13 @@ static void cache_grow_end(struct kmem_cache *cachep, struct page *page) if (!page) return; - INIT_LIST_HEAD(&page->lru); + INIT_LIST_HEAD(&page->slab_list); n = get_node(cachep, page_to_nid(page)); spin_lock(&n->list_lock); n->total_slabs++; if (!page->active) { - list_add_tail(&page->lru, &(n->slabs_free)); + list_add_tail(&page->slab_list, &n->slabs_free); n->free_slabs++; } else fixup_slab_list(cachep, n, page, &list); @@ -2843,9 +2843,9 @@ static inline void fixup_slab_list(struct kmem_cache *cachep, void **list) { /* move slabp to correct slabp list: */ - list_del(&page->lru); + list_del(&page->slab_list); if (page->active == cachep->num) { - list_add(&page->lru, &n->slabs_full); + list_add(&page->slab_list, &n->slabs_full); if (OBJFREELIST_SLAB(cachep)) { #if DEBUG /* Poisoning will be done without holding the lock */ @@ -2859,7 +2859,7 @@ static inline void fixup_slab_list(struct kmem_cache *cachep, page->freelist = NULL; } } else - list_add(&page->lru, &n->slabs_partial); + list_add(&page->slab_list, &n->slabs_partial); } /* Try to find non-pfmemalloc slab if needed */ @@ -2882,20 +2882,20 @@ static noinline struct page *get_valid_first_slab(struct kmem_cache_node *n, } /* Move pfmemalloc slab to the end of list to speed up next search */ - list_del(&page->lru); + list_del(&page->slab_list); if (!page->active) { - list_add_tail(&page->lru, &n->slabs_free); + list_add_tail(&page->slab_list, &n->slabs_free); n->free_slabs++; } else - list_add_tail(&page->lru, &n->slabs_partial); + list_add_tail(&page->slab_list, &n->slabs_partial); - list_for_each_entry(page, &n->slabs_partial, lru) { + list_for_each_entry(page, &n->slabs_partial, 
slab_list) { if (!PageSlabPfmemalloc(page)) return page; } n->free_touched = 1; - list_for_each_entry(page, &n->slabs_free, lru) { + list_for_each_entry(page, &n->slabs_free, slab_list) { if (!PageSlabPfmemalloc(page)) { n->free_slabs--; return page; @@ -2910,11 +2910,12 @@ static struct page *get_first_slab(struct kmem_cache_node *n, bool pfmemalloc) struct page *page; assert_spin_locked(&n->list_lock); - page = list_first_entry_or_null(&n->slabs_partial, struct page, lru); + page = list_first_entry_or_null(&n->slabs_partial, struct page, + slab_list); if (!page) { n->free_touched = 1; page = list_first_entry_or_null(&n->slabs_free, struct page, - lru); + slab_list); if (page) n->free_slabs--; } @@ -3415,29 +3416,29 @@ static void free_block(struct kmem_cache *cachep, void **objpp, objp = objpp[i]; page = virt_to_head_page(objp); - list_del(&page->lru); + list_del(&page->slab_list); check_spinlock_acquired_node(cachep, node); slab_put_obj(cachep, page, objp); STATS_DEC_ACTIVE(cachep); /* fixup slab chains */ if (page->active == 0) { - list_add(&page->lru, &n->slabs_free); + list_add(&page->slab_list, &n->slabs_free); n->free_slabs++; } else { /* Unconditionally move a slab to the end of the * partial list on free - maximum time for the * other objects to be freed, too. */ - list_add_tail(&page->lru, &n->slabs_partial); + list_add_tail(&page->slab_list, &n->slabs_partial); } } while (n->free_objects > n->free_limit && !list_empty(&n->slabs_free)) { n->free_objects -= cachep->num; - page = list_last_entry(&n->slabs_free, struct page, lru); - list_move(&page->lru, list); + page = list_last_entry(&n->slabs_free, struct page, slab_list); + list_move(&page->slab_list, list); n->free_slabs--; n->total_slabs--; } @@ -3475,7 +3476,7 @@ static void cache_flusharray(struct kmem_cache *cachep, struct array_cache *ac) int i = 0; struct page *page; - list_for_each_entry(page, &n->slabs_free, lru) { + list_for_each_entry(page, &n->slabs_free, slab_list) { BUG_ON(page->active); i++; @@ -4338,9 +4339,9 @@ static int leaks_show(struct seq_file *m, void *p) check_irq_on(); spin_lock_irq(&n->list_lock); - list_for_each_entry(page, &n->slabs_full, lru) + list_for_each_entry(page, &n->slabs_full, slab_list) handle_slab(x, cachep, page); - list_for_each_entry(page, &n->slabs_partial, lru) + list_for_each_entry(page, &n->slabs_partial, slab_list) handle_slab(x, cachep, page); spin_unlock_irq(&n->list_lock); } From patchwork Tue Apr 2 23:05:45 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Tobin C. 
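As with the slub patch, the iterator changes here only swap the member name
handed to list_entry()/list_for_each_entry(); that member is used solely to
compute the byte offset back to the containing struct page, which is why the
conversion cannot change behaviour while the two names alias each other. Below
is a rough userspace rendering of that mechanism; the simplified container_of
and list_entry macros and struct fake_page are illustrative assumptions, not
kernel code.

#include <stdio.h>
#include <stddef.h>

struct list_head { struct list_head *next, *prev; };

/* Userspace renderings of the kernel macros used throughout this patch. */
#define container_of(ptr, type, member) \
	((type *)((char *)(ptr) - offsetof(type, member)))
#define list_entry(ptr, type, member) container_of(ptr, type, member)

struct fake_page {
	int id;
	union {				/* the two names alias, as in struct page */
		struct list_head lru;
		struct list_head slab_list;
	};
};

int main(void)
{
	struct fake_page page = { .id = 7 };
	struct list_head *node = &page.slab_list;

	/* Whichever member name is used, the same containing page is found. */
	printf("%d\n", list_entry(node, struct fake_page, slab_list)->id);
	printf("%d\n", list_entry(node, struct fake_page, lru)->id);
	return 0;
}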
Harding" X-Patchwork-Id: 10882443 Return-Path: Received: from mail.wl.linuxfoundation.org (pdx-wl-mail.web.codeaurora.org [172.30.200.125]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id CBE1C1708 for ; Tue, 2 Apr 2019 23:06:58 +0000 (UTC) Received: from mail.wl.linuxfoundation.org (localhost [127.0.0.1]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id B896128990 for ; Tue, 2 Apr 2019 23:06:58 +0000 (UTC) Received: by mail.wl.linuxfoundation.org (Postfix, from userid 486) id A75A128992; Tue, 2 Apr 2019 23:06:58 +0000 (UTC) X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on pdx-wl-mail.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-2.9 required=2.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,MAILING_LIST_MULTI,RCVD_IN_DNSWL_NONE autolearn=ham version=3.3.1 Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 71A2A2898B for ; Tue, 2 Apr 2019 23:06:57 +0000 (UTC) Received: by kanga.kvack.org (Postfix) id 44E586B0278; Tue, 2 Apr 2019 19:06:56 -0400 (EDT) Delivered-To: linux-mm-outgoing@kvack.org Received: by kanga.kvack.org (Postfix, from userid 40) id 3FD5B6B0279; Tue, 2 Apr 2019 19:06:56 -0400 (EDT) X-Original-To: int-list-linux-mm@kvack.org X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 2C63A6B027A; Tue, 2 Apr 2019 19:06:56 -0400 (EDT) X-Original-To: linux-mm@kvack.org X-Delivered-To: linux-mm@kvack.org Received: from mail-qk1-f200.google.com (mail-qk1-f200.google.com [209.85.222.200]) by kanga.kvack.org (Postfix) with ESMTP id 0F4916B0278 for ; Tue, 2 Apr 2019 19:06:56 -0400 (EDT) Received: by mail-qk1-f200.google.com with SMTP id d8so12970705qkk.17 for ; Tue, 02 Apr 2019 16:06:56 -0700 (PDT) X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:dkim-signature:from:to:cc:subject:date :message-id:in-reply-to:references:mime-version :content-transfer-encoding; bh=BFIyNbFQfvFKfyZUHijpqSaVm8ggK3M3/XYK2FaoXEA=; b=LMtndD71l3MLIyIUv8lr9quui+3LwW1Jh3DjSjLZGfrE74C6g85SbsMUuJYNBPgVsw J8nqc1TzEJS5mbX/wtjXr0hgzloLD8XebU0fiw2rHuotXjSRU81hSt1DNJXF8nCMI1im rDK7LhSAGb95Eo3wSEv9+uA+nDUpk1Oz6bb6/1LrGVN/uSOARTlaE4FJIH7SMPHQc90K r0m281FfbxEz7zNEouUMRYT1BcjAIdEGipc5abGNEDA3QOSL0gqy8hv8XrdhsBlNYvxD C6lQftHaHLUdduBcA996j/HkBXp09C850rWkqCWubSpLSHy2bZhbmkWtox/wnVA753LW BlDg== X-Gm-Message-State: APjAAAVPlcgBYr5OLRaR6fUK5lSGplXb2HcYHpBNBzmRZCrPJRxI9D8E bS2Ul+0rYWbCFRrwe1oZUv84uWd9ZjYDUAPRR0BRr+U9lj//hymXTXodTaiCnNBj2hapXC6RW2p 9GSGE6NUPQWbdYY/d5SLt80dWFZUs/7Em38JFFDr/6VV0YSp1+eB8QlTxGRftQsk= X-Received: by 2002:a37:4b03:: with SMTP id y3mr16868856qka.260.1554246415845; Tue, 02 Apr 2019 16:06:55 -0700 (PDT) X-Google-Smtp-Source: APXvYqwoxM6lVwjwIlLFrrnY+3L46DXHUtWw+k6ioBDFNy3fxhj/ExUkWqR0+WC+Ve41b7cC8Hir X-Received: by 2002:a37:4b03:: with SMTP id y3mr16868807qka.260.1554246415123; Tue, 02 Apr 2019 16:06:55 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1554246415; cv=none; d=google.com; s=arc-20160816; b=LrqzsB2l7DrJ13mQDt56Khzk+pdiTNe7N2DSP5Jm6V/DIs7QQM59/ZlhLXbDATpMnv MpFGyHVWzd+6uEPQl7/VSq26+iE9rIZQ6Aw7NE/ZruRrCMnGtpACDPyt9IryXqjPozYw p7rG6e11u2QoU66ue+yFt0/VMAFQ2FsJU9f3sRghEUG78sZBiEpQ8lcAZWu/v+izeV3c JT5qo2ryYtjh2dm9qU7jy4p0p8kXKty7jgzE/Myx52F0M11O4GXW45MtwaDPVUi2RrUH mTpeZl7I1h8EtgH1zMLTVyyAokCdUeH1HebxyfK2cTM/8fiFVOXMq29jLvI5lIopnVn1 oY0A== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=content-transfer-encoding:mime-version:references:in-reply-to 
:message-id:date:subject:cc:to:from:dkim-signature; bh=BFIyNbFQfvFKfyZUHijpqSaVm8ggK3M3/XYK2FaoXEA=; b=0qLbLUB5j5kMri26Q26+LOb5w557iDFpgq4B2V5yDUDsHLV7Z8gHfD3TJunSHBuG+M jYnwP1NnAamxTybxbKmSxQjq689V9i5b9w1z8d8xwYXOiYyW3mC568PcVxBlJydJAPBU da44VK/DBsThxjDg42aHolzPhHPXqvALyhC9NffEPAenRZXD0w6RbeIHiEQx3fICEvYH tLrWcAVvZGuTy0oiky+rw5udaUbwd5wST5VVnDkbt0Q4n/Mz+xUH6+gtTkjArwehVGj+ SN4S8HjuuReUQjCXqIMqPphNlA6WhjjkaAOgk6ZPY0EQu1G05fsr9kU5SYI0ARBwh2we +dzg== ARC-Authentication-Results: i=1; mx.google.com; dkim=pass header.i=@messagingengine.com header.s=fm2 header.b=jFUgvtAn; spf=softfail (google.com: domain of transitioning tobin@kernel.org does not designate 66.111.4.26 as permitted sender) smtp.mailfrom=tobin@kernel.org; dmarc=fail (p=NONE sp=NONE dis=NONE) header.from=kernel.org Received: from out2-smtp.messagingengine.com (out2-smtp.messagingengine.com. [66.111.4.26]) by mx.google.com with ESMTPS id x18si5344430qtp.368.2019.04.02.16.06.55 for (version=TLS1_2 cipher=ECDHE-RSA-AES128-GCM-SHA256 bits=128/128); Tue, 02 Apr 2019 16:06:55 -0700 (PDT) Received-SPF: softfail (google.com: domain of transitioning tobin@kernel.org does not designate 66.111.4.26 as permitted sender) client-ip=66.111.4.26; Authentication-Results: mx.google.com; dkim=pass header.i=@messagingengine.com header.s=fm2 header.b=jFUgvtAn; spf=softfail (google.com: domain of transitioning tobin@kernel.org does not designate 66.111.4.26 as permitted sender) smtp.mailfrom=tobin@kernel.org; dmarc=fail (p=NONE sp=NONE dis=NONE) header.from=kernel.org Received: from compute3.internal (compute3.nyi.internal [10.202.2.43]) by mailout.nyi.internal (Postfix) with ESMTP id D7A872201F; Tue, 2 Apr 2019 19:06:54 -0400 (EDT) Received: from mailfrontend2 ([10.202.2.163]) by compute3.internal (MEProxy); Tue, 02 Apr 2019 19:06:54 -0400 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d= messagingengine.com; h=cc:content-transfer-encoding:date:from :in-reply-to:message-id:mime-version:references:subject:to :x-me-proxy:x-me-proxy:x-me-sender:x-me-sender:x-sasl-enc; s= fm2; bh=BFIyNbFQfvFKfyZUHijpqSaVm8ggK3M3/XYK2FaoXEA=; b=jFUgvtAn ZD44OM9rhEKh8z7rEJ3IOjLZxODs7BrvCHA6A9nZFfJ4LTx2tMN6jsJa3K5fGXV7 JG0CEUTQLUc0zkavWdhwXnLP2jBJXznd0sZrSJ9DzJCwWfyTaIhawW8j+Fc/aOwL dnDekI7NVWb5zJVIM1HuXvsEG3cbBqh4/ikkVFEfWvpcQnmEQhh/lLUuvuhRvm9x DRsgG+bDTSQrq0SHLppcw/lsiLLFQ6qjJwI10FN7dae58qTzwu85vyXr7XVlEEuY dBeaZvSJtT442FqLK+Rbux04dPrtWeASpm1s+EYiWl7yp6UzRvfPs6tEUwFELSh1 JxHWOwHuipcqZQ== X-ME-Sender: X-ME-Proxy-Cause: gggruggvucftvghtrhhoucdtuddrgeduuddrtddugdduieculddtuddrgedutddrtddtmd cutefuodetggdotefrodftvfcurfhrohhfihhlvgemucfhrghsthforghilhdpqfgfvfdp uffrtefokffrpgfnqfghnecuuegrihhlohhuthemuceftddtnecusecvtfgvtghiphhivg hnthhsucdlqddutddtmdenucfjughrpefhvffufffkofgjfhgggfestdekredtredttden ucfhrhhomhepfdfvohgsihhnucevrdcujfgrrhguihhnghdfuceothhosghinheskhgvrh hnvghlrdhorhhgqeenucfkphepuddvgedrudeiledrvdejrddvtdeknecurfgrrhgrmhep mhgrihhlfhhrohhmpehtohgsihhnsehkvghrnhgvlhdrohhrghenucevlhhushhtvghruf hiiigvpeei X-ME-Proxy: Received: from eros.localdomain (124-169-27-208.dyn.iinet.net.au [124.169.27.208]) by mail.messagingengine.com (Postfix) with ESMTPA id E3F3010390; Tue, 2 Apr 2019 19:06:50 -0400 (EDT) From: "Tobin C. Harding" To: Andrew Morton Cc: "Tobin C. 
Harding" , Roman Gushchin , Christoph Lameter , Pekka Enberg , David Rientjes , Joonsoo Kim , Matthew Wilcox , linux-mm@kvack.org, linux-kernel@vger.kernel.org Subject: [PATCH v5 7/7] mm: Remove stale comment from page struct Date: Wed, 3 Apr 2019 10:05:45 +1100 Message-Id: <20190402230545.2929-8-tobin@kernel.org> X-Mailer: git-send-email 2.21.0 In-Reply-To: <20190402230545.2929-1-tobin@kernel.org> References: <20190402230545.2929-1-tobin@kernel.org> MIME-Version: 1.0 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: X-Virus-Scanned: ClamAV using ClamSMTP We now use the slab_list list_head instead of the lru list_head. This comment has become stale. Remove stale comment from page struct slab_list list_head. Acked-by: Christoph Lameter Signed-off-by: Tobin C. Harding Reviewed-by: Roman Gushchin --- include/linux/mm_types.h | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h index 7eade9132f02..63a34e3d7c29 100644 --- a/include/linux/mm_types.h +++ b/include/linux/mm_types.h @@ -103,7 +103,7 @@ struct page { }; struct { /* slab, slob and slub */ union { - struct list_head slab_list; /* uses lru */ + struct list_head slab_list; struct { /* Partial pages */ struct page *next; #ifdef CONFIG_64BIT