From patchwork Wed Oct 28 05:50:30 2020
X-Patchwork-Submitter: Bharata B Rao
X-Patchwork-Id: 11862495
Date: Wed, 28 Oct 2020 11:20:30 +0530
From: Bharata B Rao
To: linux-kernel@vger.kernel.org
Cc: linux-mm@kvack.org, cl@linux.com, rientjes@google.com, iamjoonsoo.kim@lge.com,
    akpm@linux-foundation.org, guro@fb.com, vbabka@suse.cz, shakeelb@google.com,
    hannes@cmpxchg.org, aneesh.kumar@linux.ibm.com
Subject: Higher slub memory consumption on 64K page-size systems?
Message-ID: <20201028055030.GA362097@in.ibm.com>
Reply-To: bharata@linux.ibm.com

Hi,

On POWER systems, where 64K PAGE_SIZE is the default, I see that slub
consumes a higher amount of memory compared to an equivalently configured
4K page-size system.
While slub is obviously going to consume more memory on 64K page-size
systems compared to 4K, since slabs are allocated at page-size granularity,
I want to check if there is any obvious tuning (via existing tunables or
via some code change) that we can do to reduce the amount of memory
consumed by slub.

Here is a comparison of the slab memory consumption between a 4K and a 64K
page-size pseries hash KVM guest with 16 cores and 16G memory, captured
immediately after boot:

	64K	209280 kB
	4K	 67636 kB

A 64K configuration may never be able to consume as little memory as a 4K
configuration, but this certainly shows that slub can be better optimized
for 64K page-size.

slub_max_order
--------------
The most promising tunable that shows a consistent reduction in slab memory
is slub_max_order. Here is a table that shows the number of slabs that end
up with different orders and the total slab consumption at boot, for
different values of slub_max_order:

	-------------------------------------------
	slub_max_order	Order	NrSlabs	Slab memory
	-------------------------------------------
			0	276
	3		1	 16	207488 kB
	(default)	2	  4
			3	 11
	-------------------------------------------
			0	276
	2		1	 16	166656 kB
			2	  4
	-------------------------------------------
			0	276	144128 kB
	1		1	 31
	-------------------------------------------

Though only a few bigger sized caches fall into order-2 or order-3, they
seem to make a considerable difference to the overall slab consumption.
If we take the task_struct cache as an example, this is how it ends up
when slub_max_order is varied:

	task_struct, objsize=9856
	--------------------------------------------
	slub_max_order	objperslab	pagesperslab
	--------------------------------------------
	3		53		8
	2		26		4
	1		13		2
	--------------------------------------------

The slab page-order, and hence the number of objects in a slab, has a
bearing on performance, but I wonder if some caches like task_struct above
can be auto-tuned to fall into a more conservative order and do well wrt
both memory and performance?

mm/slub.c:calculate_order() has the logic which determines the page-order
for the slab. It starts with min_objects and attempts to arrive at the best
configuration for the slab. min_objects itself starts out like this:

	min_objects = 4 * (fls(nr_cpu_ids) + 1);

Here nr_cpu_ids depends on maxcpus, and hence this can have a significant
effect on systems which define maxcpus. Slab numbers post-boot for a KVM
pseries guest that has 16 boot-time CPUs and a varying number of maxcpus
look like this:

	-------------------------------
	maxcpus		Slab memory(kB)
	-------------------------------
	 64		209280
	256		253824
	512		293824
	-------------------------------

Page-order is a one-time setting and obviously can't be tweaked dynamically
on CPU hotplug, but I just wanted to bring out its effect. That constant
multiplicative factor of 4 was in fact added by commit 9b2cd506e5f2
("slub: Calculate min_objects based on number of processors"). Reducing it
to, say, 2 does give some reduction in slab memory while retaining the same
hackbench performance, but I am not sure if that can be assumed to be
beneficial for all scenarios.
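FWIW, to make the above a bit more concrete, here is a small standalone
sketch of how the starting min_objects and the resulting slab order scale
with nr_cpu_ids and page size. This is only an approximation for
illustration, not the actual mm/slub.c code: fls_approx(), MAX_ORDER_CAP
and the nr_cpu_ids/size values are mine, and the real calculate_order()
additionally relaxes min_objects and accounts for wasted space. With
nr_cpu_ids=64, size=9856 (task_struct above) and 64K pages it arrives at
order 3 with 53 objects per slab, matching the task_struct table above:

	#include <stdio.h>

	#define PAGE_SHIFT	16	/* 64K pages; use 12 for a 4K page-size system */
	#define MAX_ORDER_CAP	3	/* mirrors the default slub_max_order */

	/* crude userspace stand-in for the kernel's fls() */
	static unsigned int fls_approx(unsigned int x)
	{
		unsigned int r = 0;

		while (x) {
			r++;
			x >>= 1;
		}
		return r;
	}

	int main(void)
	{
		unsigned int nr_cpu_ids = 64;	/* e.g. maxcpus=64 */
		unsigned int size = 9856;	/* task_struct objsize from the table above */
		unsigned int min_objects, order;

		min_objects = 4 * (fls_approx(nr_cpu_ids) + 1);

		/* smallest order whose slab holds at least min_objects, capped at MAX_ORDER_CAP */
		for (order = 0; order < MAX_ORDER_CAP; order++)
			if (((1U << (PAGE_SHIFT + order)) / size) >= min_objects)
				break;

		printf("min_objects=%u order=%u objects per slab=%u\n",
		       min_objects, order, (1U << (PAGE_SHIFT + order)) / size);
		return 0;
	}

Bumping nr_cpu_ids in the sketch shows directly how the
4 * (fls(nr_cpu_ids) + 1) starting point pushes caches towards higher
orders as maxcpus grows.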
MIN_PARTIAL
-----------
This determines the number of slabs left on the partial list even if they
are empty. My initial thought was that the default MIN_PARTIAL value of 5
is on the higher side and that we are accumulating MIN_PARTIAL number of
empty slabs in all caches without freeing them. However, I hardly find any
case where an empty slab is retained during freeing on account of the
partial slabs being fewer than MIN_PARTIAL.

What I do find in practice is that we are accumulating a lot of partial
slabs with just one in-use object in the whole slab. A high number of such
partial slabs is indeed contributing to the increased slab memory
consumption. For example, after a hackbench run, I find the distribution
of objects like this for the kmalloc-2k cache:

	total_objects					3168
	objects						1611
	Nr partial slabs				  54
	Nr partial slabs with just 1 inuse object	  38

With 64K page-size, so many partial slabs with just 1 inuse object can
result in high memory usage. Is there any possible workaround to prevent
this kind of situation?

cpu_partial
-----------
Here is how the slab consumption post-boot varies when all the slab caches
are forced to a fixed cpu_partial value:

	---------------------------
	cpu_partial	Slab Memory
	---------------------------
	0		175872 kB
	2		187136 kB
	4		191616 kB
	default		204864 kB
	---------------------------

It has been suggested earlier that reducing cpu_partial and/or making
cpu_partial 64K page-size aware would help. In set_cpu_partial(),
cpu_partial is already set to 2 for bigger sized slabs (size > PAGE_SIZE).
A bit of tweaking there to introduce cpu_partial=1 for certain slabs does
give some benefit. With the change appended at the end of this mail, the
slab consumption post-boot reduces to 186048 kB. Also, here are the
hackbench numbers with and without this change:

	Average of 10 runs of 'hackbench -s 1024 -l 200 -g 200 -f 25 -P'
	Slab consumption captured at the end of each run
	--------------------------------------------------------------
			Time		Slab memory
	--------------------------------------------------------------
	Default		11.124s		645580 kB
	Patched		11.032s		584352 kB
	--------------------------------------------------------------

I have mostly looked at reducing the slab memory consumption here, but I
do understand that the default tunable values have been arrived at based
on some benchmark numbers. What I would like to understand and explore is
whether there are ways to reduce the slub memory consumption while
retaining the existing level of performance.

Regards,
Bharata.

diff --git a/mm/slub.c b/mm/slub.c
index a28ed9b8fc61..e09eff1199bf 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -3626,7 +3626,9 @@ static void set_cpu_partial(struct kmem_cache *s)
 	 */
 	if (!kmem_cache_has_cpu_partial(s))
 		slub_set_cpu_partial(s, 0);
-	else if (s->size >= PAGE_SIZE)
+	else if (s->size >= 8192)
+		slub_set_cpu_partial(s, 1);
+	else if (s->size >= 4096)
 		slub_set_cpu_partial(s, 2);
 	else if (s->size >= 1024)
 		slub_set_cpu_partial(s, 6);
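As an aside, to spell out why keying the thresholds off fixed byte sizes
rather than PAGE_SIZE matters on 64K, here is a toy comparison. Both
helpers are hypothetical illustrative stand-ins, not kernel code: the
unpatched tiers are my reading of set_cpu_partial() around v5.9 (the 13
and 30 tiers are from memory), the patched tiers are copied from the hunk
above, and the object sizes are rough:

	#include <stdio.h>

	/* cpu_partial per the current PAGE_SIZE-relative tiers, parameterised by page size */
	static unsigned int cpu_partial_unpatched(unsigned int size, unsigned int page_size)
	{
		if (size >= page_size)
			return 2;
		else if (size >= 1024)
			return 6;
		else if (size >= 256)
			return 13;
		return 30;
	}

	/* cpu_partial per the fixed byte thresholds in the hunk above */
	static unsigned int cpu_partial_patched(unsigned int size)
	{
		if (size >= 8192)
			return 1;
		else if (size >= 4096)
			return 2;
		else if (size >= 1024)
			return 6;
		else if (size >= 256)
			return 13;
		return 30;
	}

	int main(void)
	{
		/* rough sizes: kmalloc-2k, kmalloc-4k, task_struct */
		unsigned int sizes[] = { 2048, 4096, 9856 };
		unsigned int i;

		for (i = 0; i < sizeof(sizes) / sizeof(sizes[0]); i++)
			printf("size=%5u  4K-unpatched=%2u  64K-unpatched=%2u  patched=%2u\n",
			       sizes[i],
			       cpu_partial_unpatched(sizes[i], 4096),
			       cpu_partial_unpatched(sizes[i], 65536),
			       cpu_partial_patched(sizes[i]));
		return 0;
	}

With 64K pages, the s->size >= PAGE_SIZE branch practically never fires,
so even ~10K objects keep cpu_partial=6; the fixed thresholds bring them
back in line with what a 4K system would choose, plus the new
cpu_partial=1 tier for the biggest slabs.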