From patchwork Thu Jul 20 10:23:37 2023
X-Patchwork-Submitter: Jay Patel
X-Patchwork-Id: 13320282
From: Jay Patel <jaypatel@linux.ibm.com>
To: linux-mm@kvack.org
Cc: cl@linux.com, penberg@kernel.org, rientjes@google.com,
    iamjoonsoo.kim@lge.com, akpm@linux-foundation.org, vbabka@suse.cz,
    aneesh.kumar@linux.ibm.com, tsahu@linux.ibm.com, piyushs@linux.ibm.com,
    jaypatel@linux.ibm.com
Subject: [RFC PATCH v4] mm/slub: Optimize slub memory usage
Date: Thu, 20 Jul 2023 15:53:37 +0530
Message-Id: <20230720102337.2069722-1-jaypatel@linux.ibm.com>
In the current implementation of the SLUB memory allocator, the slab order
selection process follows these criteria:

1) Determine the minimum order required to serve the minimum number of
   objects (min_objects). This calculation is based on the formula
   (order = min_objects * object_size / PAGE_SIZE).

2) If the minimum order is greater than the maximum allowed order
   (slub_max_order), use slub_max_order as the order for this slab.

3) If the minimum order is less than slub_max_order, iterate from the
   minimum order up to slub_max_order and check whether the condition
   (rem <= slab_size / fract_leftover) holds. Here, slab_size is
   (PAGE_SIZE << order), rem is (slab_size % object_size), and
   fract_leftover can take the values 16, 8, or 4. If the condition
   holds, select that order for the slab.

However, in step 3, the tolerated leftover (slab_size / fract_leftover)
can span a large range of values (256 bytes to 1 KB with a 4K page size,
and 4 KB to 16 KB with a 64K page size, at order 0, and growing with
higher orders) when compared to the remainder (rem). This can lead to
the selection of an order that results in more memory wastage.

To mitigate such wastage, this patch modifies step 3 to scale the value
of fract_leftover with the page size, while retaining the current value
as the default for a 4K page size.
Test results are as follows:

1) On 160 CPUs with 64K Page size

+-----------------+----------------+----------------+
|           Total wastage in slub memory            |
+-----------------+----------------+----------------+
|                 | After Boot     | After Hackbench|
| Normal          | 932 KB         | 1812 KB        |
| With Patch      | 729 KB         | 1636 KB        |
| Wastage reduce  | ~22%           | ~10%           |
+-----------------+----------------+----------------+

+-----------------+----------------+----------------+
|                Total slub memory                  |
+-----------------+----------------+----------------+
|                 | After Boot     | After Hackbench|
| Normal          | 1855296        | 2944576        |
| With Patch      | 1544576        | 2692032        |
| Memory reduce   | ~17%           | ~9%            |
+-----------------+----------------+----------------+

hackbench-process-sockets
+-------+-----+----------+----------+-----------+
|       | Grp | Normal   | Patched  |   Delta   |
+-------+-----+----------+----------+-----------+
| Amean | 1   | 1.2727   | 1.2450   | ( 2.22%)  |
| Amean | 4   | 1.6063   | 1.5810   | ( 1.60%)  |
| Amean | 7   | 2.4190   | 2.3983   | ( 0.86%)  |
| Amean | 12  | 3.9730   | 3.9347   | ( 0.97%)  |
| Amean | 21  | 6.9823   | 6.8957   | ( 1.26%)  |
| Amean | 30  | 10.1867  | 10.0600  | ( 1.26%)  |
| Amean | 48  | 16.7490  | 16.4853  | ( 1.60%)  |
| Amean | 79  | 28.1870  | 27.8673  | ( 1.15%)  |
| Amean | 110 | 39.8363  | 39.3793  | ( 1.16%)  |
| Amean | 141 | 51.5277  | 51.4907  | ( 0.07%)  |
| Amean | 172 | 62.9700  | 62.7300  | ( 0.38%)  |
| Amean | 203 | 74.5037  | 74.0630  | ( 0.59%)  |
| Amean | 234 | 85.6560  | 85.3587  | ( 0.35%)  |
| Amean | 265 | 96.9883  | 96.3770  | ( 0.63%)  |
| Amean | 296 | 108.6893 | 108.0870 | ( 0.56%)  |
+-------+-----+----------+----------+-----------+

2) On 16 CPUs with 64K Page size

+----------------+----------------+----------------+
|           Total wastage in slub memory           |
+----------------+----------------+----------------+
|                | After Boot     | After Hackbench|
| Normal         | 273 KB         | 544 KB         |
| With Patch     | 260 KB         | 500 KB         |
| Wastage reduce | ~5%            | ~9%            |
+----------------+----------------+----------------+

+-----------------+----------------+----------------+
|                Total slub memory                  |
+-----------------+----------------+----------------+
|                 | After Boot     | After Hackbench|
| Normal          | 275840         | 412480         |
| With Patch      | 272768         | 406208         |
| Memory reduce   | ~1%            | ~2%            |
+-----------------+----------------+----------------+

hackbench-process-sockets
+-------+----+---------+---------+-----------+
|       | Grp| Normal  | Patched |   Delta   |
+-------+----+---------+---------+-----------+
| Amean | 1  | 0.9513  | 0.9250  | ( 2.77%)  |
| Amean | 4  | 2.9630  | 2.9570  | ( 0.20%)  |
| Amean | 7  | 5.1780  | 5.1763  | ( 0.03%)  |
| Amean | 12 | 8.8833  | 8.8817  | ( 0.02%)  |
| Amean | 21 | 15.7577 | 15.6883 | ( 0.44%)  |
| Amean | 30 | 22.2063 | 22.2843 | ( -0.35%) |
| Amean | 48 | 36.0587 | 36.1390 | ( -0.22%) |
| Amean | 64 | 49.7803 | 49.3457 | ( 0.87%)  |
+-------+----+---------+---------+-----------+

Signed-off-by: Jay Patel <jaypatel@linux.ibm.com>
---
Changes from V3
1) Resolved error and optimized the logic for all architectures.

Changes from V2
1) Removed all page order selection logic for slab caches based on wastage.
2) Increased the fraction size based on page size (keeping the current
   value as the default for a 4K page).

Changes from V1
1) If min_objects * object_size > PAGE_ALLOC_COSTLY_ORDER, then it will
   return with PAGE_ALLOC_COSTLY_ORDER.
2) Similarly, if min_objects * object_size < PAGE_SIZE, then it will
   return with slub_min_order.
3) Additionally, changed slub_max_order to 2. There is no specific
   reason for using the value 2, but it provided the best results in
   terms of performance without any noticeable impact.
 mm/slub.c | 17 +++++++----------
 1 file changed, 7 insertions(+), 10 deletions(-)

diff --git a/mm/slub.c b/mm/slub.c
index c87628cd8a9a..8f6f38083b94 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -287,6 +287,7 @@ static inline bool kmem_cache_has_cpu_partial(struct kmem_cache *s)
 #define OO_SHIFT	16
 #define OO_MASK		((1 << OO_SHIFT) - 1)
 #define MAX_OBJS_PER_PAGE	32767 /* since slab.objects is u15 */
+#define SLUB_PAGE_FRAC_SHIFT 12

 /* Internal SLUB flags */
 /* Poison object */
@@ -4117,6 +4118,7 @@ static inline int calculate_order(unsigned int size)
 	unsigned int min_objects;
 	unsigned int max_objects;
 	unsigned int nr_cpus;
+	unsigned int page_size_frac;

 	/*
 	 * Attempt to find best configuration for a slab. This
@@ -4145,10 +4147,13 @@ static inline int calculate_order(unsigned int size)
 	max_objects = order_objects(slub_max_order, size);
 	min_objects = min(min_objects, max_objects);

-	while (min_objects > 1) {
+	page_size_frac = ((PAGE_SIZE >> SLUB_PAGE_FRAC_SHIFT) == 1) ? 0
+		: PAGE_SIZE >> SLUB_PAGE_FRAC_SHIFT;
+
+	while (min_objects >= 1) {
 		unsigned int fraction;

-		fraction = 16;
+		fraction = 16 + page_size_frac;
 		while (fraction >= 4) {
 			order = calc_slab_order(size, min_objects,
 						slub_max_order, fraction);
@@ -4159,14 +4164,6 @@ static inline int calculate_order(unsigned int size)
 		min_objects--;
 	}

-	/*
-	 * We were unable to place multiple objects in a slab. Now
-	 * lets see if we can place a single object there.
-	 */
-	order = calc_slab_order(size, 1, slub_max_order, 1);
-	if (order <= slub_max_order)
-		return order;
-
 	/*
 	 * Doh this slab cannot be placed using slub_max_order.
 	 */