From patchwork Mon Jan 9 20:52:56 2023
X-Patchwork-Submitter: Suren Baghdasaryan
X-Patchwork-Id: 13094267
Date: Mon, 9 Jan 2023 12:52:56 -0800
In-Reply-To: <20230109205336.3665937-1-surenb@google.com>
References: <20230109205336.3665937-1-surenb@google.com>
Message-ID: <20230109205336.3665937-2-surenb@google.com>
Subject: [PATCH 01/41] maple_tree: Be more cautious about dead nodes
From: Suren Baghdasaryan
To: akpm@linux-foundation.org
Cc: michel@lespinasse.org, jglisse@google.com, mhocko@suse.com,
    vbabka@suse.cz, hannes@cmpxchg.org, mgorman@techsingularity.net,
    dave@stgolabs.net, willy@infradead.org, liam.howlett@oracle.com,
    peterz@infradead.org, ldufour@linux.ibm.com, laurent.dufour@fr.ibm.com,
    paulmck@kernel.org, luto@kernel.org, songliubraving@fb.com,
    peterx@redhat.com, david@redhat.com, dhowells@redhat.com,
    hughd@google.com, bigeasy@linutronix.de, kent.overstreet@linux.dev,
    punit.agrawal@bytedance.com, lstoakes@gmail.com, peterjung1337@gmail.com,
    rientjes@google.com, axelrasmussen@google.com, joelaf@google.com,
    minchan@google.com, jannh@google.com, shakeelb@google.com,
    tatashin@google.com, edumazet@google.com, gthelen@google.com,
    gurua@google.com, arjunroy@google.com, soheil@google.com,
    hughlynch@google.com, leewalsh@google.com, posk@google.com,
    linux-mm@kvack.org, linux-arm-kernel@lists.infradead.org,
    linuxppc-dev@lists.ozlabs.org, x86@kernel.org,
    linux-kernel@vger.kernel.org, kernel-team@android.com,
    surenb@google.com, Liam Howlett

From: Liam Howlett

ma_pivots() and ma_data_end() may be called with a dead node.  Ensure
that the node isn't dead before using the returned values.

This is necessary for RCU mode of the maple tree.

Fixes: 54a611b60590 ("Maple Tree: add new data structure")
Signed-off-by: Liam Howlett
Signed-off-by: Suren Baghdasaryan
---
 lib/maple_tree.c | 53 +++++++++++++++++++++++++++++++++++++++---------
 1 file changed, 43 insertions(+), 10 deletions(-)

diff --git a/lib/maple_tree.c b/lib/maple_tree.c
index 26e2045d3cda..ff9f04e0150d 100644
--- a/lib/maple_tree.c
+++ b/lib/maple_tree.c
@@ -540,6 +540,7 @@ static inline bool ma_dead_node(const struct maple_node *node)
 
 	return (parent == node);
 }
+
 /*
  * mte_dead_node() - check if the @enode is dead.
  * @enode: The encoded maple node
@@ -621,6 +622,8 @@ static inline unsigned int mas_alloc_req(const struct ma_state *mas)
  * @node - the maple node
  * @type - the node type
  *
+ * In the event of a dead node, this array may be %NULL
+ *
  * Return: A pointer to the maple node pivots
  */
 static inline unsigned long *ma_pivots(struct maple_node *node,
@@ -1091,8 +1094,11 @@ static int mas_ascend(struct ma_state *mas)
 	a_type = mas_parent_enum(mas, p_enode);
 	a_node = mte_parent(p_enode);
 	a_slot = mte_parent_slot(p_enode);
-	pivots = ma_pivots(a_node, a_type);
 	a_enode = mt_mk_node(a_node, a_type);
+	pivots = ma_pivots(a_node, a_type);
+
+	if (unlikely(ma_dead_node(a_node)))
+		return 1;
 
 	if (!set_min && a_slot) {
 		set_min = true;
@@ -1398,6 +1404,9 @@ static inline unsigned char ma_data_end(struct maple_node *node,
 {
 	unsigned char offset;
 
+	if (!pivots)
+		return 0;
+
 	if (type == maple_arange_64)
 		return ma_meta_end(node, type);
 
@@ -1433,6 +1442,9 @@ static inline unsigned char mas_data_end(struct ma_state *mas)
 		return ma_meta_end(node, type);
 
 	pivots = ma_pivots(node, type);
+	if (unlikely(ma_dead_node(node)))
+		return 0;
+
 	offset = mt_pivots[type] - 1;
 	if (likely(!pivots[offset]))
 		return ma_meta_end(node, type);
@@ -4504,6 +4516,9 @@ static inline int mas_prev_node(struct ma_state *mas, unsigned long min)
 	node = mas_mn(mas);
 	slots = ma_slots(node, mt);
 	pivots = ma_pivots(node, mt);
+	if (unlikely(ma_dead_node(node)))
+		return 1;
+
 	mas->max = pivots[offset];
 	if (offset)
 		mas->min = pivots[offset - 1] + 1;
@@ -4525,6 +4540,9 @@ static inline int mas_prev_node(struct ma_state *mas, unsigned long min)
 		slots = ma_slots(node, mt);
 		pivots = ma_pivots(node, mt);
 		offset = ma_data_end(node, mt, pivots, mas->max);
+		if (unlikely(ma_dead_node(node)))
+			return 1;
+
 		if (offset)
 			mas->min = pivots[offset - 1] + 1;
 
@@ -4573,6 +4591,7 @@ static inline int mas_next_node(struct ma_state *mas, struct maple_node *node,
 	struct maple_enode *enode;
 	int level = 0;
 	unsigned char offset;
+	unsigned char node_end;
 	enum maple_type mt;
 	void __rcu **slots;
 
@@ -4596,7 +4615,11 @@ static inline int mas_next_node(struct ma_state *mas, struct maple_node *node,
 		node = mas_mn(mas);
 		mt = mte_node_type(mas->node);
 		pivots = ma_pivots(node, mt);
-	} while (unlikely(offset == ma_data_end(node, mt, pivots, mas->max)));
+		node_end = ma_data_end(node, mt, pivots, mas->max);
+		if (unlikely(ma_dead_node(node)))
+			return 1;
+
+	} while (unlikely(offset == node_end));
 
 	slots = ma_slots(node, mt);
 	pivot = mas_safe_pivot(mas, pivots, ++offset, mt);
@@ -4612,6 +4635,9 @@ static inline int mas_next_node(struct ma_state *mas, struct maple_node *node,
 		mt = mte_node_type(mas->node);
 		slots = ma_slots(node, mt);
 		pivots = ma_pivots(node, mt);
+		if (unlikely(ma_dead_node(node)))
+			return 1;
+
 		offset = 0;
 		pivot = pivots[0];
 	}
@@ -4658,16 +4684,18 @@ static inline void *mas_next_nentry(struct ma_state *mas,
 		return NULL;
 	}
 
-	pivots = ma_pivots(node, type);
 	slots = ma_slots(node, type);
-	mas->index = mas_safe_min(mas, pivots, mas->offset);
-	if (ma_dead_node(node))
+	pivots = ma_pivots(node, type);
+	count = ma_data_end(node, type, pivots, mas->max);
+	if (unlikely(ma_dead_node(node)))
 		return NULL;
 
+	mas->index = mas_safe_min(mas, pivots, mas->offset);
+	if (unlikely(ma_dead_node(node)))
+		return NULL;
 	if (mas->index > max)
 		return NULL;
 
-	count = ma_data_end(node, type, pivots, mas->max);
 	if (mas->offset > count)
 		return NULL;
 
@@ -4815,6 +4843,11 @@ static inline void *mas_prev_nentry(struct ma_state *mas, unsigned long limit,
 
 	slots = ma_slots(mn, mt);
 	pivots = ma_pivots(mn, mt);
+	if (unlikely(ma_dead_node(mn))) {
+		mas_rewalk(mas, index);
+		goto retry;
+	}
+
 	if (offset == mt_pivots[mt])
 		pivot = mas->max;
 	else
@@ -6613,11 +6646,11 @@ static inline void *mas_first_entry(struct ma_state *mas, struct maple_node *mn,
 	while (likely(!ma_is_leaf(mt))) {
 		MT_BUG_ON(mas->tree, mte_dead_node(mas->node));
 		slots = ma_slots(mn, mt);
-		pivots = ma_pivots(mn, mt);
-		max = pivots[0];
 		entry = mas_slot(mas, slots, 0);
+		pivots = ma_pivots(mn, mt);
 		if (unlikely(ma_dead_node(mn)))
 			return NULL;
+		max = pivots[0];
 		mas->node = entry;
 		mn = mas_mn(mas);
 		mt = mte_node_type(mas->node);
@@ -6637,13 +6670,13 @@ static inline void *mas_first_entry(struct ma_state *mas, struct maple_node *mn,
 	if (likely(entry))
 		return entry;
 
-	pivots = ma_pivots(mn, mt);
-	mas->index = pivots[0] + 1;
 	mas->offset = 1;
 	entry = mas_slot(mas, slots, 1);
+	pivots = ma_pivots(mn, mt);
 	if (unlikely(ma_dead_node(mn)))
 		return NULL;
 
+	mas->index = pivots[0] + 1;
 	if (mas->index > limit)
 		goto none;
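
For reference, every hunk above applies the same lockless-reader pattern:
load the pivot array, then re-check ma_dead_node() before trusting anything
derived from it, since under RCU the node may be freed and reused between the
two reads.  A minimal sketch of that ordering, using the helpers touched by
this patch, is below; the wrapper name mas_safe_data_end() is invented here
purely for illustration and is not part of the patch:

/* Illustrative sketch only, not part of the patch. */
static inline unsigned char mas_safe_data_end(struct ma_state *mas)
{
	struct maple_node *node = mas_mn(mas);
	enum maple_type type = mte_node_type(mas->node);
	unsigned long *pivots;

	pivots = ma_pivots(node, type);		/* may be %NULL if the node died */
	if (unlikely(ma_dead_node(node)))	/* validate after the read */
		return 0;			/* caller must re-walk the tree */

	return ma_data_end(node, type, pivots, mas->max);
}

ma_data_end() itself now tolerates a NULL pivot array (returning 0), and
callers such as mas_prev_nentry() re-walk the tree when the dead-node check
fires.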