From patchwork Wed Nov 29 09:53:33 2023
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Vlastimil Babka
X-Patchwork-Id: 13472566
From: Vlastimil Babka
Date: Wed, 29 Nov 2023 10:53:33 +0100
Subject: [PATCH RFC v3 8/9] maple_tree: Remove MA_STATE_PREALLOC
MIME-Version: 1.0
Message-Id: <20231129-slub-percpu-caches-v3-8-6bcf536772bc@suse.cz>
References: <20231129-slub-percpu-caches-v3-0-6bcf536772bc@suse.cz>
In-Reply-To: <20231129-slub-percpu-caches-v3-0-6bcf536772bc@suse.cz>
To: Christoph Lameter, Pekka Enberg, David Rientjes, Joonsoo Kim,
    Matthew Wilcox, "Liam R. Howlett"
Cc: Andrew Morton, Roman Gushchin, Hyeonggon Yoo <42.hyeyoo@gmail.com>,
    Alexander Potapenko, Marco Elver, Dmitry Vyukov,
    linux-mm@kvack.org, linux-kernel@vger.kernel.org,
    maple-tree@lists.infradead.org, kasan-dev@googlegroups.com,
    Vlastimil Babka
X-Mailer: b4 0.12.4

From: "Liam R. Howlett"

MA_STATE_PREALLOC was added to catch any writes that try to allocate
when the maple state is being used in preallocation mode. This can
safely be removed in favour of the percpu array of nodes.

Note that mas_expected_entries() still expects no allocations during
operation, so MA_STATE_BULK can be used in place of preallocations for
this case, which is primarily used for forking.

Signed-off-by: Liam R. Howlett
Signed-off-by: Vlastimil Babka
---
 lib/maple_tree.c | 20 ++++++--------------
 1 file changed, 6 insertions(+), 14 deletions(-)

diff --git a/lib/maple_tree.c b/lib/maple_tree.c
index d9e7088fd9a7..f5c0bca2c5d7 100644
--- a/lib/maple_tree.c
+++ b/lib/maple_tree.c
@@ -68,11 +68,9 @@
  * Maple state flags
  * * MA_STATE_BULK		- Bulk insert mode
  * * MA_STATE_REBALANCE	- Indicate a rebalance during bulk insert
- * * MA_STATE_PREALLOC	- Preallocated nodes, WARN_ON allocation
  */
 #define MA_STATE_BULK		1
 #define MA_STATE_REBALANCE	2
-#define MA_STATE_PREALLOC	4
 
 #define ma_parent_ptr(x) ((struct maple_pnode *)(x))
 #define mas_tree_parent(x) ((unsigned long)(x->tree) | MA_ROOT_PARENT)
@@ -1255,11 +1253,8 @@ static inline void mas_alloc_nodes(struct ma_state *mas, gfp_t gfp)
 		return;
 
 	mas_set_alloc_req(mas, 0);
-	if (mas->mas_flags & MA_STATE_PREALLOC) {
-		if (allocated)
-			return;
-		WARN_ON(!allocated);
-	}
+	if (mas->mas_flags & MA_STATE_BULK)
+		return;
 
 	if (!allocated || mas->alloc->node_count == MAPLE_ALLOC_SLOTS) {
 		node = (struct maple_alloc *)mt_alloc_one(gfp);
@@ -5518,7 +5513,6 @@ int mas_preallocate(struct ma_state *mas, void *entry, gfp_t gfp)
 	/* node store, slot store needs one node */
 ask_now:
 	mas_node_count_gfp(mas, request, gfp);
-	mas->mas_flags |= MA_STATE_PREALLOC;
 	if (likely(!mas_is_err(mas)))
 		return 0;
 
@@ -5561,7 +5555,7 @@ void mas_destroy(struct ma_state *mas)
 		mas->mas_flags &= ~MA_STATE_REBALANCE;
 	}
-	mas->mas_flags &= ~(MA_STATE_BULK|MA_STATE_PREALLOC);
+	mas->mas_flags &= ~MA_STATE_BULK;
 
 	total = mas_allocated(mas);
 	while (total) {
@@ -5610,9 +5604,6 @@ int mas_expected_entries(struct ma_state *mas, unsigned long nr_entries)
 	 * of nodes during the operation.
 	 */
 
-	/* Optimize splitting for bulk insert in-order */
-	mas->mas_flags |= MA_STATE_BULK;
-
 	/*
 	 * Avoid overflow, assume a gap between each entry and a trailing null.
 	 * If this is wrong, it just means allocation can happen during
@@ -5629,8 +5620,9 @@ int mas_expected_entries(struct ma_state *mas, unsigned long nr_entries)
 
 	/* Add working room for split (2 nodes) + new parents */
 	mas_node_count_gfp(mas, nr_nodes + 3, GFP_KERNEL);
 
-	/* Detect if allocations run out */
-	mas->mas_flags |= MA_STATE_PREALLOC;
+	/* Optimize splitting for bulk insert in-order */
+	mas->mas_flags |= MA_STATE_BULK;
+
 	if (!mas_is_err(mas))
 		return 0;