From patchwork Tue Jun 7 09:34:46 2022
X-Patchwork-Submitter: Uladzislau Rezki
X-Patchwork-Id: 12871635
From: "Uladzislau Rezki (Sony)" <urezki@gmail.com>
To: Andrew Morton
Cc: linux-mm@kvack.org, LKML, Christoph Hellwig, Matthew Wilcox,
    Nicholas Piggin, Uladzislau Rezki, Oleksiy Avramchenko
Subject: [PATCH 2/5] mm/vmalloc: Extend __alloc_vmap_area() with extra arguments
Date: Tue, 7 Jun 2022 11:34:46 +0200
Message-Id: <20220607093449.3100-3-urezki@gmail.com>
X-Mailer: git-send-email 2.30.2
In-Reply-To: <20220607093449.3100-1-urezki@gmail.com>
References: <20220607093449.3100-1-urezki@gmail.com>

Currently __alloc_vmap_area() allocates only from the global vmap space,
so the rb-tree and list head that represent the free vmap space are not
passed to it as parameters; it accesses the global variables directly.
Extend __alloc_vmap_area() and the functions it depends on to take the
tree and list as arguments, so that allocations can be served from
different trees. This makes the interface generic rather than tied to
the global state.

There is no functional change as a result of this patch.

Signed-off-by: Uladzislau Rezki (Sony)
Reviewed-by: Baoquan He
---
 mm/vmalloc.c | 30 ++++++++++++++++--------------
 1 file changed, 16 insertions(+), 14 deletions(-)

diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index 0102d6d5fcdf..745e89eb6ca1 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -1234,15 +1234,15 @@ is_within_this_va(struct vmap_area *va, unsigned long size,
  * overhead.
  */
 static __always_inline struct vmap_area *
-find_vmap_lowest_match(unsigned long size, unsigned long align,
-	unsigned long vstart, bool adjust_search_size)
+find_vmap_lowest_match(struct rb_root *root, unsigned long size,
+	unsigned long align, unsigned long vstart, bool adjust_search_size)
 {
 	struct vmap_area *va;
 	struct rb_node *node;
 	unsigned long length;
 
 	/* Start from the root. */
-	node = free_vmap_area_root.rb_node;
+	node = root->rb_node;
 
 	/* Adjust the search size for alignment overhead. */
 	length = adjust_search_size ? size + align - 1 : size;
@@ -1370,9 +1370,9 @@ classify_va_fit_type(struct vmap_area *va,
 }
 
 static __always_inline int
-adjust_va_to_fit_type(struct vmap_area *va,
-	unsigned long nva_start_addr, unsigned long size,
-	enum fit_type type)
+adjust_va_to_fit_type(struct rb_root *root, struct list_head *head,
+	struct vmap_area *va, unsigned long nva_start_addr,
+	unsigned long size, enum fit_type type)
 {
 	struct vmap_area *lva = NULL;
 
@@ -1384,7 +1384,7 @@ adjust_va_to_fit_type(struct vmap_area *va,
 		 * V      NVA      V
 		 * |---------------|
 		 */
-		unlink_va_augment(va, &free_vmap_area_root);
+		unlink_va_augment(va, root);
 		kmem_cache_free(vmap_area_cachep, va);
 	} else if (type == LE_FIT_TYPE) {
 		/*
@@ -1462,8 +1462,7 @@ adjust_va_to_fit_type(struct vmap_area *va,
 		augment_tree_propagate_from(va);
 
 		if (lva)	/* type == NE_FIT_TYPE */
-			insert_vmap_area_augment(lva, &va->rb_node,
-				&free_vmap_area_root, &free_vmap_area_list);
+			insert_vmap_area_augment(lva, &va->rb_node, root, head);
 	}
 
 	return 0;
@@ -1474,7 +1473,8 @@
  * Otherwise a vend is returned that indicates failure.
  */
 static __always_inline unsigned long
-__alloc_vmap_area(unsigned long size, unsigned long align,
+__alloc_vmap_area(struct rb_root *root, struct list_head *head,
+	unsigned long size, unsigned long align,
 	unsigned long vstart, unsigned long vend)
 {
 	bool adjust_search_size = true;
@@ -1495,7 +1495,7 @@ __alloc_vmap_area(unsigned long size, unsigned long align,
 	if (align <= PAGE_SIZE || (align > PAGE_SIZE && (vend - vstart) == size))
 		adjust_search_size = false;
 
-	va = find_vmap_lowest_match(size, align, vstart, adjust_search_size);
+	va = find_vmap_lowest_match(root, size, align, vstart, adjust_search_size);
 	if (unlikely(!va))
 		return vend;
 
@@ -1514,7 +1514,7 @@ __alloc_vmap_area(unsigned long size, unsigned long align,
 		return vend;
 
 	/* Update the free vmap_area. */
-	ret = adjust_va_to_fit_type(va, nva_start_addr, size, type);
+	ret = adjust_va_to_fit_type(root, head, va, nva_start_addr, size, type);
 	if (ret)
 		return vend;
 
@@ -1605,7 +1605,8 @@ static struct vmap_area *alloc_vmap_area(unsigned long size,
 
 retry:
 	preload_this_cpu_lock(&free_vmap_area_lock, gfp_mask, node);
-	addr = __alloc_vmap_area(size, align, vstart, vend);
+	addr = __alloc_vmap_area(&free_vmap_area_root, &free_vmap_area_list,
+		size, align, vstart, vend);
 	spin_unlock(&free_vmap_area_lock);
 
 	/*
@@ -3886,7 +3887,8 @@ struct vm_struct **pcpu_get_vm_areas(const unsigned long *offsets,
 			/* It is a BUG(), but trigger recovery instead. */
 			goto recovery;
 
-		ret = adjust_va_to_fit_type(va, start, size, type);
+		ret = adjust_va_to_fit_type(&free_vmap_area_root,
+				&free_vmap_area_list, va, start, size, type);
 		if (unlikely(ret))
 			goto recovery;
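
An illustrative aside, not part of the patch itself: with the extended
interface, a caller that maintains its own free-space rb-tree and sorted
list could allocate from it using the same helper that alloc_vmap_area()
now feeds &free_vmap_area_root and &free_vmap_area_list. A minimal sketch
follows; the pool structure and function names are hypothetical, and since
__alloc_vmap_area() is static, such a caller would have to live in
mm/vmalloc.c:

/*
 * Hypothetical pool, for illustration only. Assumed to have been
 * populated with free vmap_area nodes via insert_vmap_area_augment().
 */
struct my_vmap_pool {
	struct rb_root root;	/* private free-space rb-tree (RB_ROOT) */
	struct list_head head;	/* private sorted list of free areas */
	spinlock_t lock;	/* protects root and head */
};

static unsigned long
my_pool_alloc(struct my_vmap_pool *pool, unsigned long size,
	unsigned long align, unsigned long vstart, unsigned long vend)
{
	unsigned long addr;

	spin_lock(&pool->lock);
	/* Same helper, fed a private tree and list instead of the globals. */
	addr = __alloc_vmap_area(&pool->root, &pool->head,
			size, align, vstart, vend);
	spin_unlock(&pool->lock);

	/* As with the global path, a return value of vend means failure. */
	return addr;
}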