From patchwork Tue Jun 7 09:34:45 2022
X-Patchwork-Submitter: Uladzislau Rezki
X-Patchwork-Id: 12871634
From: "Uladzislau Rezki (Sony)"
To: Andrew Morton
Cc: linux-mm@kvack.org, LKML, Christoph Hellwig, Matthew Wilcox, Nicholas Piggin, Uladzislau Rezki, Oleksiy Avramchenko
Subject: [PATCH 1/5] mm/vmalloc: Make link_va()/unlink_va() common to different rb_root
Date: Tue, 7 Jun 2022 11:34:45 +0200
Message-Id: <20220607093449.3100-2-urezki@gmail.com>
In-Reply-To: <20220607093449.3100-1-urezki@gmail.com>
References: <20220607093449.3100-1-urezki@gmail.com>
Currently, in order to figure out the tree type, link_va() and unlink_va() compare the passed root with the global free_vmap_area_root variable to distinguish the augmented rb-tree from a regular one. This is hard-coded: the functions can only manipulate the specific "free_vmap_area_root" tree that represents the global free vmap space.

Make them common by introducing "_augment" versions of both internal functions, so it becomes possible to deal with different trees.

There is no functional change as a result of this patch.
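The refactoring pattern used here, where one internal helper takes an explicit boolean and thin always-inline wrappers pass a compile-time constant, can be sketched outside the kernel in plain C. The `node` type and field names below are illustrative stand-ins, not the kernel's `vmap_area`/`rb_root`:

```c
#include <stdbool.h>

/* Illustrative stand-in for a tree node; not the kernel's vmap_area. */
struct node {
	int value;
	int subtree_max;	/* bookkeeping kept only by the augmented tree */
};

/*
 * One internal helper takes an explicit flag, mirroring __link_va():
 * the caller states the tree type instead of the helper comparing the
 * root pointer against a global variable.
 */
static inline void __insert(struct node *n, int value, bool augment)
{
	n->value = value;
	if (augment)
		n->subtree_max = value;
}

/*
 * Thin wrappers pass a constant, so after inlining the compiler can
 * fold the branch away, as with link_va()/link_va_augment().
 */
static inline void insert(struct node *n, int value)
{
	__insert(n, value, false);
}

static inline void insert_augment(struct node *n, int value)
{
	__insert(n, value, true);
}
```

After inlining, each wrapper compiles down to the same code two separate hard-coded functions would, while the source keeps a single copy of the shared logic.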
Signed-off-by: Uladzislau Rezki (Sony)
Reviewed-by: Baoquan He
---
 mm/vmalloc.c | 60 +++++++++++++++++++++++++++++++++++++++++-----------
 1 file changed, 48 insertions(+), 12 deletions(-)

diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index be8ed06804a5..0102d6d5fcdf 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -911,8 +911,9 @@ get_va_next_sibling(struct rb_node *parent, struct rb_node **link)
 }
 
 static __always_inline void
-link_va(struct vmap_area *va, struct rb_root *root,
-	struct rb_node *parent, struct rb_node **link, struct list_head *head)
+__link_va(struct vmap_area *va, struct rb_root *root,
+	struct rb_node *parent, struct rb_node **link,
+	struct list_head *head, bool augment)
 {
 	/*
 	 * VA is still not in the list, but we can
@@ -926,7 +927,7 @@ link_va(struct vmap_area *va, struct rb_root *root,
 
 	/* Insert to the rb-tree */
 	rb_link_node(&va->rb_node, parent, link);
-	if (root == &free_vmap_area_root) {
+	if (augment) {
 		/*
 		 * Some explanation here. Just perform simple insertion
 		 * to the tree. We do not set va->subtree_max_size to
@@ -950,12 +951,28 @@ link_va(struct vmap_area *va, struct rb_root *root,
 }
 
 static __always_inline void
-unlink_va(struct vmap_area *va, struct rb_root *root)
+link_va(struct vmap_area *va, struct rb_root *root,
+	struct rb_node *parent, struct rb_node **link,
+	struct list_head *head)
+{
+	__link_va(va, root, parent, link, head, false);
+}
+
+static __always_inline void
+link_va_augment(struct vmap_area *va, struct rb_root *root,
+	struct rb_node *parent, struct rb_node **link,
+	struct list_head *head)
+{
+	__link_va(va, root, parent, link, head, true);
+}
+
+static __always_inline void
+__unlink_va(struct vmap_area *va, struct rb_root *root, bool augment)
 {
 	if (WARN_ON(RB_EMPTY_NODE(&va->rb_node)))
 		return;
 
-	if (root == &free_vmap_area_root)
+	if (augment)
 		rb_erase_augmented(&va->rb_node,
 			root, &free_vmap_area_rb_augment_cb);
 	else
@@ -965,6 +982,18 @@ unlink_va(struct vmap_area *va, struct rb_root *root)
 	RB_CLEAR_NODE(&va->rb_node);
 }
 
+static __always_inline void
+unlink_va(struct vmap_area *va, struct rb_root *root)
+{
+	__unlink_va(va, root, false);
+}
+
+static __always_inline void
+unlink_va_augment(struct vmap_area *va, struct rb_root *root)
+{
+	__unlink_va(va, root, true);
+}
+
 #if DEBUG_AUGMENT_PROPAGATE_CHECK
 /*
  * Gets called when remove the node and rotate.
@@ -1060,7 +1089,7 @@ insert_vmap_area_augment(struct vmap_area *va,
 
 	link = find_va_links(va, root, NULL, &parent);
 	if (link) {
-		link_va(va, root, parent, link, head);
+		link_va_augment(va, root, parent, link, head);
 		augment_tree_propagate_from(va);
 	}
 }
@@ -1077,8 +1106,8 @@ insert_vmap_area_augment(struct vmap_area *va,
  * ongoing.
  */
 static __always_inline struct vmap_area *
-merge_or_add_vmap_area(struct vmap_area *va,
-	struct rb_root *root, struct list_head *head)
+__merge_or_add_vmap_area(struct vmap_area *va,
+	struct rb_root *root, struct list_head *head, bool augment)
 {
 	struct vmap_area *sibling;
 	struct list_head *next;
@@ -1140,7 +1169,7 @@ merge_or_add_vmap_area(struct vmap_area *va,
 		 * "normalized" because of rotation operations.
 		 */
 		if (merged)
-			unlink_va(va, root);
+			__unlink_va(va, root, augment);
 
 		sibling->va_end = va->va_end;
 
@@ -1155,16 +1184,23 @@ merge_or_add_vmap_area(struct vmap_area *va,
 
 insert:
 	if (!merged)
-		link_va(va, root, parent, link, head);
+		__link_va(va, root, parent, link, head, augment);
 
 	return va;
 }
 
+static __always_inline struct vmap_area *
+merge_or_add_vmap_area(struct vmap_area *va,
+	struct rb_root *root, struct list_head *head)
+{
+	return __merge_or_add_vmap_area(va, root, head, false);
+}
+
 static __always_inline struct vmap_area *
 merge_or_add_vmap_area_augment(struct vmap_area *va,
 	struct rb_root *root, struct list_head *head)
 {
-	va = merge_or_add_vmap_area(va, root, head);
+	va = __merge_or_add_vmap_area(va, root, head, true);
 
 	if (va)
 		augment_tree_propagate_from(va);
@@ -1348,7 +1384,7 @@ adjust_va_to_fit_type(struct vmap_area *va,
 		 * V      NVA      V
 		 * |---------------|
 		 */
-		unlink_va(va, &free_vmap_area_root);
+		unlink_va_augment(va, &free_vmap_area_root);
 		kmem_cache_free(vmap_area_cachep, va);
 	} else if (type == LE_FIT_TYPE) {
 		/*