From patchwork Tue Aug 29 08:11:34 2023
X-Patchwork-Submitter: Uladzislau Rezki
X-Patchwork-Id: 13368658
From: "Uladzislau Rezki (Sony)" <urezki@gmail.com>
To: linux-mm@kvack.org, Andrew Morton
Cc: LKML, Baoquan He, Lorenzo Stoakes, Christoph Hellwig, Matthew Wilcox,
    "Liam R. Howlett", Dave Chinner, "Paul E. McKenney", Joel Fernandes,
    Uladzislau Rezki, Oleksiy Avramchenko, Christoph Hellwig
Subject: [PATCH v2 1/9] mm: vmalloc: Add va_alloc() helper
Date: Tue, 29 Aug 2023 10:11:34 +0200
Message-Id: <20230829081142.3619-2-urezki@gmail.com>
X-Mailer: git-send-email 2.30.2
In-Reply-To: <20230829081142.3619-1-urezki@gmail.com>
References: <20230829081142.3619-1-urezki@gmail.com>

Currently the __alloc_vmap_area() function contains open-coded logic that
finds and adjusts a VA based on the allocation request. Introduce a
va_alloc() helper that only adjusts an already found VA. It will be used
later in at least two places.

There is no functional change as a result of this patch.

Reviewed-by: Christoph Hellwig
Reviewed-by: Lorenzo Stoakes
Signed-off-by: Uladzislau Rezki (Sony)
Reviewed-by: Baoquan He
---
 mm/vmalloc.c | 41 ++++++++++++++++++++++++++++-------------
 1 file changed, 28 insertions(+), 13 deletions(-)

diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index 93cf99aba335..00afc1ee4756 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -1481,6 +1481,32 @@ adjust_va_to_fit_type(struct rb_root *root, struct list_head *head,
 	return 0;
 }
 
+static unsigned long
+va_alloc(struct vmap_area *va,
+		struct rb_root *root, struct list_head *head,
+		unsigned long size, unsigned long align,
+		unsigned long vstart, unsigned long vend)
+{
+	unsigned long nva_start_addr;
+	int ret;
+
+	if (va->va_start > vstart)
+		nva_start_addr = ALIGN(va->va_start, align);
+	else
+		nva_start_addr = ALIGN(vstart, align);
+
+	/* Check the "vend" restriction. */
+	if (nva_start_addr + size > vend)
+		return vend;
+
+	/* Update the free vmap_area. */
+	ret = adjust_va_to_fit_type(root, head, va, nva_start_addr, size);
+	if (WARN_ON_ONCE(ret))
+		return vend;
+
+	return nva_start_addr;
+}
+
 /*
  * Returns a start address of the newly allocated area, if success.
  * Otherwise a vend is returned that indicates failure.
@@ -1493,7 +1519,6 @@ __alloc_vmap_area(struct rb_root *root, struct list_head *head,
 	bool adjust_search_size = true;
 	unsigned long nva_start_addr;
 	struct vmap_area *va;
-	int ret;
 
 	/*
 	 * Do not adjust when:
@@ -1511,18 +1536,8 @@ __alloc_vmap_area(struct rb_root *root, struct list_head *head,
 	if (unlikely(!va))
 		return vend;
 
-	if (va->va_start > vstart)
-		nva_start_addr = ALIGN(va->va_start, align);
-	else
-		nva_start_addr = ALIGN(vstart, align);
-
-	/* Check the "vend" restriction. */
-	if (nva_start_addr + size > vend)
-		return vend;
-
-	/* Update the free vmap_area. */
-	ret = adjust_va_to_fit_type(root, head, va, nva_start_addr, size);
-	if (WARN_ON_ONCE(ret))
+	nva_start_addr = va_alloc(va, root, head, size, align, vstart, vend);
+	if (nva_start_addr == vend)
 		return vend;
 
 #if DEBUG_AUGMENT_LOWEST_MATCH_CHECK
From patchwork Tue Aug 29 08:11:35 2023
X-Patchwork-Submitter: Uladzislau Rezki
X-Patchwork-Id: 13368659
From: "Uladzislau Rezki (Sony)" <urezki@gmail.com>
To: linux-mm@kvack.org, Andrew Morton
Cc: LKML, Baoquan He, Lorenzo Stoakes, Christoph Hellwig, Matthew Wilcox,
    "Liam R. Howlett", Dave Chinner, "Paul E. McKenney", Joel Fernandes,
    Uladzislau Rezki, Oleksiy Avramchenko, Christoph Hellwig
Subject: [PATCH v2 2/9] mm: vmalloc: Rename adjust_va_to_fit_type() function
Date: Tue, 29 Aug 2023 10:11:35 +0200
Message-Id: <20230829081142.3619-3-urezki@gmail.com>
X-Mailer: git-send-email 2.30.2
In-Reply-To: <20230829081142.3619-1-urezki@gmail.com>
References: <20230829081142.3619-1-urezki@gmail.com>

This patch renames the adjust_va_to_fit_type() function to va_clip(),
which is shorter and more expressive.

There is no functional change as a result of this patch.

Reviewed-by: Christoph Hellwig
Reviewed-by: Lorenzo Stoakes
Signed-off-by: Uladzislau Rezki (Sony)
Reviewed-by: Baoquan He
---
 mm/vmalloc.c | 13 ++++++-------
 1 file changed, 6 insertions(+), 7 deletions(-)

diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index 00afc1ee4756..09e315f8ea34 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -1382,9 +1382,9 @@ classify_va_fit_type(struct vmap_area *va,
 }
 
 static __always_inline int
-adjust_va_to_fit_type(struct rb_root *root, struct list_head *head,
-		struct vmap_area *va, unsigned long nva_start_addr,
-		unsigned long size)
+va_clip(struct rb_root *root, struct list_head *head,
+		struct vmap_area *va, unsigned long nva_start_addr,
+		unsigned long size)
 {
 	struct vmap_area *lva = NULL;
 	enum fit_type type = classify_va_fit_type(va, nva_start_addr, size);
@@ -1500,7 +1500,7 @@ va_alloc(struct vmap_area *va,
 		return vend;
 
 	/* Update the free vmap_area. */
-	ret = adjust_va_to_fit_type(root, head, va, nva_start_addr, size);
+	ret = va_clip(root, head, va, nva_start_addr, size);
 	if (WARN_ON_ONCE(ret))
 		return vend;
 
@@ -4151,9 +4151,8 @@ struct vm_struct **pcpu_get_vm_areas(const unsigned long *offsets,
 			/* It is a BUG(), but trigger recovery instead. */
 			goto recovery;
 
-		ret = adjust_va_to_fit_type(&free_vmap_area_root,
-				&free_vmap_area_list,
-				va, start, size);
+		ret = va_clip(&free_vmap_area_root,
+				&free_vmap_area_list, va, start, size);
 		if (WARN_ON_ONCE(unlikely(ret)))
 			/* It is a BUG(), but trigger recovery instead. */
 			goto recovery;
From patchwork Tue Aug 29 08:11:36 2023
X-Patchwork-Submitter: Uladzislau Rezki
X-Patchwork-Id: 13368660
From: "Uladzislau Rezki (Sony)" <urezki@gmail.com>
To: linux-mm@kvack.org, Andrew Morton
Cc: LKML, Baoquan He, Lorenzo Stoakes, Christoph Hellwig, Matthew Wilcox,
    "Liam R. Howlett", Dave Chinner, "Paul E. McKenney", Joel Fernandes,
    Uladzislau Rezki, Oleksiy Avramchenko, Christoph Hellwig
Subject: [PATCH v2 3/9] mm: vmalloc: Move vmap_init_free_space() down in vmalloc.c
Date: Tue, 29 Aug 2023 10:11:36 +0200
Message-Id: <20230829081142.3619-4-urezki@gmail.com>
X-Mailer: git-send-email 2.30.2
In-Reply-To: <20230829081142.3619-1-urezki@gmail.com>
References: <20230829081142.3619-1-urezki@gmail.com>

vmap_init_free_space() is a function that sets up the free vmap space and
is considered part of the initialization phase. Since the main entry
point, vmalloc_init(), has been moved down in vmalloc.c, it makes sense
for this helper to follow the same pattern.

There is no functional change as a result of this patch.

Reviewed-by: Christoph Hellwig
Reviewed-by: Lorenzo Stoakes
Signed-off-by: Uladzislau Rezki (Sony)
Reviewed-by: Baoquan He
---
 mm/vmalloc.c | 82 ++++++++++++++++++++++++++--------------------------
 1 file changed, 41 insertions(+), 41 deletions(-)

diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index 09e315f8ea34..b7deacca1483 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -2512,47 +2512,6 @@ void __init vm_area_register_early(struct vm_struct *vm, size_t align)
 	kasan_populate_early_vm_area_shadow(vm->addr, vm->size);
 }
 
-static void vmap_init_free_space(void)
-{
-	unsigned long vmap_start = 1;
-	const unsigned long vmap_end = ULONG_MAX;
-	struct vmap_area *busy, *free;
-
-	/*
-	 *     B     F     B     B     B     F
-	 * -|-----|.....|-----|-----|-----|.....|-
-	 *  |           The KVA space           |
-	 *  |<--------------------------------->|
-	 */
-	list_for_each_entry(busy, &vmap_area_list, list) {
-		if (busy->va_start - vmap_start > 0) {
-			free = kmem_cache_zalloc(vmap_area_cachep, GFP_NOWAIT);
-			if (!WARN_ON_ONCE(!free)) {
-				free->va_start = vmap_start;
-				free->va_end = busy->va_start;
-
-				insert_vmap_area_augment(free, NULL,
-					&free_vmap_area_root,
-						&free_vmap_area_list);
-			}
-		}
-
-		vmap_start = busy->va_end;
-	}
-
-	if (vmap_end - vmap_start > 0) {
-		free = kmem_cache_zalloc(vmap_area_cachep, GFP_NOWAIT);
-		if (!WARN_ON_ONCE(!free)) {
-			free->va_start = vmap_start;
-			free->va_end = vmap_end;
-
-			insert_vmap_area_augment(free, NULL,
-				&free_vmap_area_root,
-					&free_vmap_area_list);
-		}
-	}
-}
-
 static inline void setup_vmalloc_vm_locked(struct vm_struct *vm,
 	struct vmap_area *va, unsigned long flags, const void *caller)
 {
@@ -4443,6 +4402,47 @@ module_init(proc_vmalloc_init);
 
 #endif
 
+static void vmap_init_free_space(void)
+{
+	unsigned long vmap_start = 1;
+	const unsigned long vmap_end = ULONG_MAX;
+	struct vmap_area *busy, *free;
+
+	/*
+	 *     B     F     B     B     B     F
+	 * -|-----|.....|-----|-----|-----|.....|-
+	 *  |           The KVA space           |
+	 *  |<--------------------------------->|
+	 */
+	list_for_each_entry(busy, &vmap_area_list, list) {
+		if (busy->va_start - vmap_start > 0) {
+			free = kmem_cache_zalloc(vmap_area_cachep, GFP_NOWAIT);
+			if (!WARN_ON_ONCE(!free)) {
+				free->va_start = vmap_start;
+				free->va_end = busy->va_start;
+
+				insert_vmap_area_augment(free, NULL,
+					&free_vmap_area_root,
+						&free_vmap_area_list);
+			}
+		}
+
+		vmap_start = busy->va_end;
+	}
+
+	if (vmap_end - vmap_start > 0) {
+		free = kmem_cache_zalloc(vmap_area_cachep, GFP_NOWAIT);
+		if (!WARN_ON_ONCE(!free)) {
+			free->va_start = vmap_start;
+			free->va_end = vmap_end;
+
+			insert_vmap_area_augment(free, NULL,
+				&free_vmap_area_root,
+					&free_vmap_area_list);
+		}
+	}
+}
+
 void __init vmalloc_init(void)
 {
 	struct vmap_area *va;
From: "Uladzislau Rezki (Sony)" <urezki@gmail.com>
To: linux-mm@kvack.org, Andrew Morton
Cc: LKML, Baoquan He, Lorenzo Stoakes, Christoph Hellwig,
    Matthew Wilcox, "Liam R. Howlett", Dave Chinner,
    "Paul E. McKenney", Joel Fernandes, Uladzislau Rezki,
    Oleksiy Avramchenko
Subject: [PATCH v2 4/9] mm: vmalloc: Remove global vmap_area_root rb-tree
Date: Tue, 29 Aug 2023 10:11:37 +0200
Message-Id: <20230829081142.3619-5-urezki@gmail.com>
In-Reply-To: <20230829081142.3619-1-urezki@gmail.com>
References: <20230829081142.3619-1-urezki@gmail.com>
Store allocated objects in separate nodes. A va->va_start address is
converted into the node where the VA should be placed and reside. The
addr_to_node() function performs this address conversion to determine
the node that contains a given VA. Such an approach balances VAs across
the nodes; as a result, access becomes scalable.
The number of nodes in a system depends on the number of CPUs divided
by two; the density factor in this case is 1/2.

Please note:

1. As of now allocated VAs are bound to node-0, so this patch does not
   change behavior compared with the current code;
2. The global vmap_area_lock and vmap_area_root are removed as there is
   no need for them anymore. The vmap_area_list is still kept and is
   _empty_; it is exported for kexec only;
3. vmallocinfo and vread() have to be reworked to be able to handle
   multiple nodes.

Signed-off-by: Uladzislau Rezki (Sony)
Reviewed-by: Baoquan He
Signed-off-by: Baoquan He
---
 mm/vmalloc.c | 209 +++++++++++++++++++++++++++++++++++++++------------
 1 file changed, 161 insertions(+), 48 deletions(-)

diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index b7deacca1483..ae0368c314ff 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -728,11 +728,9 @@ EXPORT_SYMBOL(vmalloc_to_pfn);
 
 #define DEBUG_AUGMENT_LOWEST_MATCH_CHECK 0
 
-static DEFINE_SPINLOCK(vmap_area_lock);
 static DEFINE_SPINLOCK(free_vmap_area_lock);
 /* Export for kexec only */
 LIST_HEAD(vmap_area_list);
-static struct rb_root vmap_area_root = RB_ROOT;
 static bool vmap_initialized __read_mostly;
 
 static struct rb_root purge_vmap_area_root = RB_ROOT;
@@ -772,6 +770,38 @@ static struct rb_root free_vmap_area_root = RB_ROOT;
  */
 static DEFINE_PER_CPU(struct vmap_area *, ne_fit_preload_node);
 
+/*
+ * An effective vmap-node logic. Users make use of nodes instead
+ * of a global heap. It allows to balance an access and mitigate
+ * contention.
+ */
+struct rb_list {
+	struct rb_root root;
+	struct list_head head;
+	spinlock_t lock;
+};
+
+struct vmap_node {
+	/* Bookkeeping data of this node. */
+	struct rb_list busy;
+};
+
+static struct vmap_node *nodes, snode;
+static __read_mostly unsigned int nr_nodes = 1;
+static __read_mostly unsigned int node_size = 1;
+
+static inline unsigned int
+addr_to_node_id(unsigned long addr)
+{
+	return (addr / node_size) % nr_nodes;
+}
+
+static inline struct vmap_node *
+addr_to_node(unsigned long addr)
+{
+	return &nodes[addr_to_node_id(addr)];
+}
+
 static __always_inline unsigned long
 va_size(struct vmap_area *va)
 {
@@ -803,10 +833,11 @@ unsigned long vmalloc_nr_pages(void)
 }
 
 /* Look up the first VA which satisfies addr < va_end, NULL if none. */
-static struct vmap_area *find_vmap_area_exceed_addr(unsigned long addr)
+static struct vmap_area *
+find_vmap_area_exceed_addr(unsigned long addr, struct rb_root *root)
 {
 	struct vmap_area *va = NULL;
-	struct rb_node *n = vmap_area_root.rb_node;
+	struct rb_node *n = root->rb_node;
 
 	addr = (unsigned long)kasan_reset_tag((void *)addr);
 
@@ -1552,12 +1583,14 @@ __alloc_vmap_area(struct rb_root *root, struct list_head *head,
  */
 static void free_vmap_area(struct vmap_area *va)
 {
+	struct vmap_node *vn = addr_to_node(va->va_start);
+
 	/*
 	 * Remove from the busy tree/list.
 	 */
-	spin_lock(&vmap_area_lock);
-	unlink_va(va, &vmap_area_root);
-	spin_unlock(&vmap_area_lock);
+	spin_lock(&vn->busy.lock);
+	unlink_va(va, &vn->busy.root);
+	spin_unlock(&vn->busy.lock);
 
 	/*
 	 * Insert/Merge it back to the free tree/list.
@@ -1600,6 +1633,7 @@ static struct vmap_area *alloc_vmap_area(unsigned long size,
 				int node, gfp_t gfp_mask,
 				unsigned long va_flags)
 {
+	struct vmap_node *vn;
 	struct vmap_area *va;
 	unsigned long freed;
 	unsigned long addr;
@@ -1645,9 +1679,11 @@ static struct vmap_area *alloc_vmap_area(unsigned long size,
 	va->vm = NULL;
 	va->flags = va_flags;
 
-	spin_lock(&vmap_area_lock);
-	insert_vmap_area(va, &vmap_area_root, &vmap_area_list);
-	spin_unlock(&vmap_area_lock);
+	vn = addr_to_node(va->va_start);
+
+	spin_lock(&vn->busy.lock);
+	insert_vmap_area(va, &vn->busy.root, &vn->busy.head);
+	spin_unlock(&vn->busy.lock);
 
 	BUG_ON(!IS_ALIGNED(va->va_start, align));
 	BUG_ON(va->va_start < vstart);
@@ -1871,26 +1907,61 @@ static void free_unmap_vmap_area(struct vmap_area *va)
 
 struct vmap_area *find_vmap_area(unsigned long addr)
 {
+	struct vmap_node *vn;
 	struct vmap_area *va;
+	int i, j;
+
+	/*
+	 * An addr_to_node_id(addr) converts an address to a node index
+	 * where a VA is located. If VA spans several zones and passed
+	 * addr is not the same as va->va_start, what is not common, we
+	 * may need to scan an extra nodes. See an example:
+	 *
+	 *      <--va-->
+	 * -|-----|-----|-----|-----|-
+	 *     1     2     0     1
+	 *
+	 * VA resides in node 1 whereas it spans 1 and 2. If passed
+	 * addr is within a second node we should do extra work. We
+	 * should mention that it is rare and is a corner case from
+	 * the other hand it has to be covered.
+	 */
+	i = j = addr_to_node_id(addr);
+	do {
+		vn = &nodes[i];
 
-	spin_lock(&vmap_area_lock);
-	va = __find_vmap_area(addr, &vmap_area_root);
-	spin_unlock(&vmap_area_lock);
+		spin_lock(&vn->busy.lock);
+		va = __find_vmap_area(addr, &vn->busy.root);
+		spin_unlock(&vn->busy.lock);
 
-	return va;
+		if (va)
+			return va;
+	} while ((i = (i + 1) % nr_nodes) != j);
+
+	return NULL;
 }
 
 static struct vmap_area *find_unlink_vmap_area(unsigned long addr)
 {
+	struct vmap_node *vn;
 	struct vmap_area *va;
+	int i, j;
 
-	spin_lock(&vmap_area_lock);
-	va = __find_vmap_area(addr, &vmap_area_root);
-	if (va)
-		unlink_va(va, &vmap_area_root);
-	spin_unlock(&vmap_area_lock);
+	i = j = addr_to_node_id(addr);
+	do {
+		vn = &nodes[i];
 
-	return va;
+		spin_lock(&vn->busy.lock);
+		va = __find_vmap_area(addr, &vn->busy.root);
+		if (va)
+			unlink_va(va, &vn->busy.root);
+		spin_unlock(&vn->busy.lock);
+
+		if (va)
+			return va;
+	} while ((i = (i + 1) % nr_nodes) != j);
+
+	return NULL;
 }
 
 /*** Per cpu kva allocator ***/
@@ -2092,6 +2163,7 @@ static void *new_vmap_block(unsigned int order, gfp_t gfp_mask)
 
 static void free_vmap_block(struct vmap_block *vb)
 {
+	struct vmap_node *vn;
 	struct vmap_block *tmp;
 	struct xarray *xa;
 
@@ -2099,9 +2171,10 @@ static void free_vmap_block(struct vmap_block *vb)
 	tmp = xa_erase(xa, addr_to_vb_idx(vb->va->va_start));
 	BUG_ON(tmp != vb);
 
-	spin_lock(&vmap_area_lock);
-	unlink_va(vb->va, &vmap_area_root);
-	spin_unlock(&vmap_area_lock);
+	vn = addr_to_node(vb->va->va_start);
+	spin_lock(&vn->busy.lock);
+	unlink_va(vb->va, &vn->busy.root);
+	spin_unlock(&vn->busy.lock);
 
 	free_vmap_area_noflush(vb->va);
 	kfree_rcu(vb, rcu_head);
@@ -2525,9 +2598,11 @@ static inline void setup_vmalloc_vm_locked(struct vm_struct *vm,
 static void setup_vmalloc_vm(struct vm_struct *vm, struct vmap_area *va,
 			      unsigned long flags, const void *caller)
 {
-	spin_lock(&vmap_area_lock);
+	struct vmap_node *vn = addr_to_node(va->va_start);
+
+	spin_lock(&vn->busy.lock);
 	setup_vmalloc_vm_locked(vm, va, flags, caller);
-	spin_unlock(&vmap_area_lock);
+	spin_unlock(&vn->busy.lock);
 }
 
 static void clear_vm_uninitialized_flag(struct vm_struct *vm)
@@ -3711,6 +3786,7 @@ static size_t vmap_ram_vread_iter(struct iov_iter *iter, const char *addr,
  */
 long vread_iter(struct iov_iter *iter, const char *addr, size_t count)
 {
+	struct vmap_node *vn;
 	struct vmap_area *va;
 	struct vm_struct *vm;
 	char *vaddr;
@@ -3724,8 +3800,11 @@ long vread_iter(struct iov_iter *iter, const char *addr, size_t count)
 
 	remains = count;
 
-	spin_lock(&vmap_area_lock);
-	va = find_vmap_area_exceed_addr((unsigned long)addr);
+	/* Hooked to node_0 so far. */
+	vn = addr_to_node(0);
+	spin_lock(&vn->busy.lock);
+
+	va = find_vmap_area_exceed_addr((unsigned long)addr, &vn->busy.root);
 	if (!va)
 		goto finished_zero;
 
@@ -3733,7 +3812,7 @@ long vread_iter(struct iov_iter *iter, const char *addr, size_t count)
 	if ((unsigned long)addr + remains <= va->va_start)
 		goto finished_zero;
 
-	list_for_each_entry_from(va, &vmap_area_list, list) {
+	list_for_each_entry_from(va, &vn->busy.head, list) {
 		size_t copied;
 
 		if (remains == 0)
@@ -3792,12 +3871,12 @@ long vread_iter(struct iov_iter *iter, const char *addr, size_t count)
 	}
 
 finished_zero:
-	spin_unlock(&vmap_area_lock);
+	spin_unlock(&vn->busy.lock);
 	/* zero-fill memory holes */
 	return count - remains + zero_iter(iter, remains);
 finished:
 	/* Nothing remains, or We couldn't copy/zero everything. */
-	spin_unlock(&vmap_area_lock);
+	spin_unlock(&vn->busy.lock);
 
 	return count - remains;
 }
@@ -4131,14 +4210,15 @@ struct vm_struct **pcpu_get_vm_areas(const unsigned long *offsets,
 	}
 
 	/* insert all vm's */
-	spin_lock(&vmap_area_lock);
 	for (area = 0; area < nr_vms; area++) {
-		insert_vmap_area(vas[area], &vmap_area_root, &vmap_area_list);
+		struct vmap_node *vn = addr_to_node(vas[area]->va_start);
 
+		spin_lock(&vn->busy.lock);
+		insert_vmap_area(vas[area], &vn->busy.root, &vn->busy.head);
 		setup_vmalloc_vm_locked(vms[area], vas[area], VM_ALLOC,
 				 pcpu_get_vm_areas);
+		spin_unlock(&vn->busy.lock);
 	}
-	spin_unlock(&vmap_area_lock);
 
 	/*
	 * Mark allocated areas as accessible. Do it now as a best-effort
@@ -4261,25 +4341,26 @@ bool vmalloc_dump_obj(void *object)
 #ifdef CONFIG_PROC_FS
 
 static void *s_start(struct seq_file *m, loff_t *pos)
-	__acquires(&vmap_purge_lock)
-	__acquires(&vmap_area_lock)
 {
+	struct vmap_node *vn = addr_to_node(0);
+
 	mutex_lock(&vmap_purge_lock);
-	spin_lock(&vmap_area_lock);
+	spin_lock(&vn->busy.lock);
 
-	return seq_list_start(&vmap_area_list, *pos);
+	return seq_list_start(&vn->busy.head, *pos);
 }
 
 static void *s_next(struct seq_file *m, void *p, loff_t *pos)
 {
-	return seq_list_next(p, &vmap_area_list, pos);
+	struct vmap_node *vn = addr_to_node(0);
+	return seq_list_next(p, &vn->busy.head, pos);
 }
 
 static void s_stop(struct seq_file *m, void *p)
-	__releases(&vmap_area_lock)
-	__releases(&vmap_purge_lock)
 {
-	spin_unlock(&vmap_area_lock);
+	struct vmap_node *vn = addr_to_node(0);
+
+	spin_unlock(&vn->busy.lock);
 	mutex_unlock(&vmap_purge_lock);
 }
 
@@ -4322,9 +4403,11 @@ static void show_purge_info(struct seq_file *m)
 
 static int s_show(struct seq_file *m, void *p)
 {
+	struct vmap_node *vn;
 	struct vmap_area *va;
 	struct vm_struct *v;
 
+	vn = addr_to_node(0);
 	va = list_entry(p, struct vmap_area, list);
 
 	if (!va->vm) {
@@ -4375,7 +4458,7 @@ static int s_show(struct seq_file *m, void *p)
	 * As a final step, dump "unpurged" areas.
	 */
final:
-	if (list_is_last(&va->list, &vmap_area_list))
+	if (list_is_last(&va->list, &vn->busy.head))
		show_purge_info(m);
 
	return 0;
@@ -4406,7 +4489,8 @@ static void vmap_init_free_space(void)
 {
 	unsigned long vmap_start = 1;
 	const unsigned long vmap_end = ULONG_MAX;
-	struct vmap_area *busy, *free;
+	struct vmap_area *free;
+	struct vm_struct *busy;
 
 	/*
	 *     B     F     B     B     B     F
@@ -4414,12 +4498,12 @@ static void vmap_init_free_space(void)
	 *  |           The KVA space           |
	 *  |<--------------------------------->|
	 */
-	list_for_each_entry(busy, &vmap_area_list, list) {
-		if (busy->va_start - vmap_start > 0) {
+	for (busy = vmlist; busy; busy = busy->next) {
+		if (busy->addr - vmap_start > 0) {
 			free = kmem_cache_zalloc(vmap_area_cachep, GFP_NOWAIT);
 			if (!WARN_ON_ONCE(!free)) {
 				free->va_start = vmap_start;
-				free->va_end = busy->va_start;
+				free->va_end = (unsigned long) busy->addr;
 
 				insert_vmap_area_augment(free, NULL,
 					&free_vmap_area_root,
@@ -4427,7 +4511,7 @@ static void vmap_init_free_space(void)
 			}
 		}
 
-		vmap_start = busy->va_end;
+		vmap_start = (unsigned long) busy->addr + busy->size;
 	}
 
 	if (vmap_end - vmap_start > 0) {
@@ -4443,9 +4527,31 @@ static void vmap_init_free_space(void)
 	}
 }
 
+static void vmap_init_nodes(void)
+{
+	struct vmap_node *vn;
+	int i;
+
+	nodes = &snode;
+
+	if (nr_nodes > 1) {
+		vn = kmalloc_array(nr_nodes, sizeof(*vn), GFP_NOWAIT);
+		if (vn)
+			nodes = vn;
+	}
+
+	for (i = 0; i < nr_nodes; i++) {
+		vn = &nodes[i];
+		vn->busy.root = RB_ROOT;
+		INIT_LIST_HEAD(&vn->busy.head);
+		spin_lock_init(&vn->busy.lock);
+	}
+}
+
 void __init vmalloc_init(void)
 {
 	struct vmap_area *va;
+	struct vmap_node *vn;
 	struct vm_struct *tmp;
 	int i;
 
@@ -4467,6 +4573,11 @@ void __init vmalloc_init(void)
 		xa_init(&vbq->vmap_blocks);
 	}
 
+	/*
+	 * Setup nodes before importing vmlist.
+	 */
+	vmap_init_nodes();
+
 	/* Import existing vmlist entries.
	 */
 	for (tmp = vmlist; tmp; tmp = tmp->next) {
 		va = kmem_cache_zalloc(vmap_area_cachep, GFP_NOWAIT);
@@ -4476,7 +4587,9 @@ void __init vmalloc_init(void)
 		va->va_start = (unsigned long)tmp->addr;
 		va->va_end = va->va_start + tmp->size;
 		va->vm = tmp;
-		insert_vmap_area(va, &vmap_area_root, &vmap_area_list);
+
+		vn = addr_to_node(va->va_start);
+		insert_vmap_area(va, &vn->busy.root, &vn->busy.head);
 	}
 
 	/*

From patchwork Tue Aug 29 08:11:38 2023
From: "Uladzislau Rezki (Sony)" <urezki@gmail.com>
To: linux-mm@kvack.org, Andrew Morton
Cc: LKML, Baoquan He, Lorenzo Stoakes, Christoph Hellwig,
    Matthew Wilcox, "Liam R. Howlett", Dave Chinner,
    "Paul E. McKenney", Joel Fernandes, Uladzislau Rezki,
    Oleksiy Avramchenko
Subject: [PATCH v2 5/9] mm: vmalloc: Remove global purge_vmap_area_root rb-tree
Date: Tue, 29 Aug 2023 10:11:38 +0200
Message-Id: <20230829081142.3619-6-urezki@gmail.com>
In-Reply-To: <20230829081142.3619-1-urezki@gmail.com>
References: <20230829081142.3619-1-urezki@gmail.com>
Similar to busy VAs, a lazily-freed area is stored in the node it
belongs to. Such an approach does not require any global locking
primitive; instead, access becomes scalable, which mitigates
contention.

This patch removes the global purge lock, the global purge tree and
the global purge list.

Signed-off-by: Uladzislau Rezki (Sony)
Reviewed-by: Baoquan He
---
 mm/vmalloc.c | 135 +++++++++++++++++++++++++++++++--------------------
 1 file changed, 82 insertions(+), 53 deletions(-)

diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index ae0368c314ff..5a8a9c1370b6 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -733,10 +733,6 @@ static DEFINE_SPINLOCK(free_vmap_area_lock);
 LIST_HEAD(vmap_area_list);
 static bool vmap_initialized __read_mostly;
 
-static struct rb_root purge_vmap_area_root = RB_ROOT;
-static LIST_HEAD(purge_vmap_area_list);
-static DEFINE_SPINLOCK(purge_vmap_area_lock);
-
 /*
  * This kmem_cache is used for vmap_area objects. Instead of
  * allocating from slab we reuse an object from this cache to
@@ -784,6 +780,12 @@ struct rb_list {
 struct vmap_node {
 	/* Bookkeeping data of this node. */
 	struct rb_list busy;
+	struct rb_list lazy;
+
+	/*
+	 * Ready-to-free areas.
+	 */
+	struct list_head purge_list;
 };
 
 static struct vmap_node *nodes, snode;
@@ -1768,40 +1770,22 @@ static DEFINE_MUTEX(vmap_purge_lock);
 
 /* for per-CPU blocks */
 static void purge_fragmented_blocks_allcpus(void);
+static cpumask_t purge_nodes;
 
 /*
  * Purges all lazily-freed vmap areas.
  */
-static bool __purge_vmap_area_lazy(unsigned long start, unsigned long end)
+static unsigned long
+purge_vmap_node(struct vmap_node *vn)
 {
-	unsigned long resched_threshold;
-	unsigned int num_purged_areas = 0;
-	struct list_head local_purge_list;
+	unsigned long num_purged_areas = 0;
 	struct vmap_area *va, *n_va;
 
-	lockdep_assert_held(&vmap_purge_lock);
-
-	spin_lock(&purge_vmap_area_lock);
-	purge_vmap_area_root = RB_ROOT;
-	list_replace_init(&purge_vmap_area_list, &local_purge_list);
-	spin_unlock(&purge_vmap_area_lock);
-
-	if (unlikely(list_empty(&local_purge_list)))
-		goto out;
-
-	start = min(start,
-		list_first_entry(&local_purge_list,
-			struct vmap_area, list)->va_start);
-
-	end = max(end,
-		list_last_entry(&local_purge_list,
-			struct vmap_area, list)->va_end);
-
-	flush_tlb_kernel_range(start, end);
-	resched_threshold = lazy_max_pages() << 1;
+	if (list_empty(&vn->purge_list))
+		return 0;
 
 	spin_lock(&free_vmap_area_lock);
-	list_for_each_entry_safe(va, n_va, &local_purge_list, list) {
+	list_for_each_entry_safe(va, n_va, &vn->purge_list, list) {
 		unsigned long nr = (va->va_end - va->va_start) >> PAGE_SHIFT;
 		unsigned long orig_start = va->va_start;
 		unsigned long orig_end = va->va_end;
@@ -1823,13 +1807,55 @@ static bool __purge_vmap_area_lazy(unsigned long start, unsigned long end)
 
 		atomic_long_sub(nr, &vmap_lazy_nr);
 		num_purged_areas++;
-
-		if (atomic_long_read(&vmap_lazy_nr) < resched_threshold)
-			cond_resched_lock(&free_vmap_area_lock);
 	}
 	spin_unlock(&free_vmap_area_lock);
 
-out:
+	return num_purged_areas;
+}
+
+/*
+ * Purges all lazily-freed vmap areas.
+ */
+static bool __purge_vmap_area_lazy(unsigned long start, unsigned long end)
+{
+	unsigned long num_purged_areas = 0;
+	struct vmap_node *vn;
+	int i;
+
+	lockdep_assert_held(&vmap_purge_lock);
+	purge_nodes = CPU_MASK_NONE;
+
+	for (i = 0; i < nr_nodes; i++) {
+		vn = &nodes[i];
+
+		INIT_LIST_HEAD(&vn->purge_list);
+
+		if (RB_EMPTY_ROOT(&vn->lazy.root))
+			continue;
+
+		spin_lock(&vn->lazy.lock);
+		WRITE_ONCE(vn->lazy.root.rb_node, NULL);
+		list_replace_init(&vn->lazy.head, &vn->purge_list);
+		spin_unlock(&vn->lazy.lock);
+
+		start = min(start, list_first_entry(&vn->purge_list,
+			struct vmap_area, list)->va_start);
+
+		end = max(end, list_last_entry(&vn->purge_list,
+			struct vmap_area, list)->va_end);
+
+		cpumask_set_cpu(i, &purge_nodes);
+	}
+
+	if (cpumask_weight(&purge_nodes) > 0) {
+		flush_tlb_kernel_range(start, end);
+
+		for_each_cpu(i, &purge_nodes) {
+			vn = &nodes[i];
+			num_purged_areas += purge_vmap_node(vn);
+		}
+	}
+
 	trace_purge_vmap_area_lazy(start, end, num_purged_areas);
 	return num_purged_areas > 0;
 }
@@ -1848,16 +1874,9 @@ static void reclaim_and_purge_vmap_areas(void)
 
 static void drain_vmap_area_work(struct work_struct *work)
 {
-	unsigned long nr_lazy;
-
-	do {
-		mutex_lock(&vmap_purge_lock);
-		__purge_vmap_area_lazy(ULONG_MAX, 0);
-		mutex_unlock(&vmap_purge_lock);
-
-		/* Recheck if further work is required. */
-		nr_lazy = atomic_long_read(&vmap_lazy_nr);
-	} while (nr_lazy > lazy_max_pages());
+	mutex_lock(&vmap_purge_lock);
+	__purge_vmap_area_lazy(ULONG_MAX, 0);
+	mutex_unlock(&vmap_purge_lock);
 }
 
 /*
@@ -1867,6 +1886,7 @@ static void drain_vmap_area_work(struct work_struct *work)
  */
 static void free_vmap_area_noflush(struct vmap_area *va)
 {
+	struct vmap_node *vn = addr_to_node(va->va_start);
 	unsigned long nr_lazy_max = lazy_max_pages();
 	unsigned long va_start = va->va_start;
 	unsigned long nr_lazy;
@@ -1880,10 +1900,9 @@ static void free_vmap_area_noflush(struct vmap_area *va)
 
 	/*
	 * Merge or place it to the purge tree/list.
	 */
-	spin_lock(&purge_vmap_area_lock);
-	merge_or_add_vmap_area(va,
-		&purge_vmap_area_root, &purge_vmap_area_list);
-	spin_unlock(&purge_vmap_area_lock);
+	spin_lock(&vn->lazy.lock);
+	merge_or_add_vmap_area(va, &vn->lazy.root, &vn->lazy.head);
+	spin_unlock(&vn->lazy.lock);
 
 	trace_free_vmap_area_noflush(va_start, nr_lazy, nr_lazy_max);
 
@@ -4390,15 +4409,21 @@ static void show_numa_info(struct seq_file *m, struct vm_struct *v)
 static void show_purge_info(struct seq_file *m)
 {
+	struct vmap_node *vn;
 	struct vmap_area *va;
+	int i;
 
-	spin_lock(&purge_vmap_area_lock);
-	list_for_each_entry(va, &purge_vmap_area_list, list) {
-		seq_printf(m, "0x%pK-0x%pK %7ld unpurged vm_area\n",
-			(void *)va->va_start, (void *)va->va_end,
-			va->va_end - va->va_start);
+	for (i = 0; i < nr_nodes; i++) {
+		vn = &nodes[i];
+
+		spin_lock(&vn->lazy.lock);
+		list_for_each_entry(va, &vn->lazy.head, list) {
+			seq_printf(m, "0x%pK-0x%pK %7ld unpurged vm_area\n",
+				(void *)va->va_start, (void *)va->va_end,
+				va->va_end - va->va_start);
+		}
+		spin_unlock(&vn->lazy.lock);
 	}
-	spin_unlock(&purge_vmap_area_lock);
 }
 
 static int s_show(struct seq_file *m, void *p)
@@ -4545,6 +4570,10 @@ static void vmap_init_nodes(void)
 		vn->busy.root = RB_ROOT;
 		INIT_LIST_HEAD(&vn->busy.head);
 		spin_lock_init(&vn->busy.lock);
+
+		vn->lazy.root = RB_ROOT;
+		INIT_LIST_HEAD(&vn->lazy.head);
+		spin_lock_init(&vn->lazy.lock);
 	}
 }

From patchwork Tue Aug 29 08:11:39 2023
(Postfix, from userid 40) id 9A3E0280037; Tue, 29 Aug 2023 04:11:52 -0400 (EDT) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 81C6328003D; Tue, 29 Aug 2023 04:11:52 -0400 (EDT) X-Delivered-To: linux-mm@kvack.org Received: from relay.hostedemail.com (smtprelay0011.hostedemail.com [216.40.44.11]) by kanga.kvack.org (Postfix) with ESMTP id 72668280037 for ; Tue, 29 Aug 2023 04:11:52 -0400 (EDT) Received: from smtpin01.hostedemail.com (a10.router.float.18 [10.200.18.1]) by unirelay07.hostedemail.com (Postfix) with ESMTP id 4D2DA16067B for ; Tue, 29 Aug 2023 08:11:52 +0000 (UTC) X-FDA: 81176423664.01.235DF38 Received: from mail-lf1-f47.google.com (mail-lf1-f47.google.com [209.85.167.47]) by imf27.hostedemail.com (Postfix) with ESMTP id 58A554000E for ; Tue, 29 Aug 2023 08:11:50 +0000 (UTC) Authentication-Results: imf27.hostedemail.com; dkim=pass header.d=gmail.com header.s=20221208 header.b=bJ6I4bux; dmarc=pass (policy=none) header.from=gmail.com; spf=pass (imf27.hostedemail.com: domain of urezki@gmail.com designates 209.85.167.47 as permitted sender) smtp.mailfrom=urezki@gmail.com ARC-Seal: i=1; s=arc-20220608; d=hostedemail.com; t=1693296710; a=rsa-sha256; cv=none; b=2sqi4t3vrb+uojgf2uVEdV4wmu9ziJ2xWRz1bji4v/j+RL+PGGNpHMKVZhuX/iRvUQmnL3 Q7iFezRhlZbvmJ5OoAh0pUTyORRFDhXLK7qC3xjNFmVzFnsQng9YKJFlbfjSPW8MQIj36W aiRFUdMwevBN6Lgk3YrbzCAfI5yaUkE= ARC-Authentication-Results: i=1; imf27.hostedemail.com; dkim=pass header.d=gmail.com header.s=20221208 header.b=bJ6I4bux; dmarc=pass (policy=none) header.from=gmail.com; spf=pass (imf27.hostedemail.com: domain of urezki@gmail.com designates 209.85.167.47 as permitted sender) smtp.mailfrom=urezki@gmail.com ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=hostedemail.com; s=arc-20220608; t=1693296710; h=from:from:sender:reply-to:subject:subject:date:date: message-id:message-id:to:to:cc:cc:mime-version:mime-version: 
content-type:content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references:dkim-signature; bh=3Ov9GwojXyegJrClaNh+dNhWRQYcZI0mb3uhXBk2808=; b=eDXkaDM136Uj0QWlo81GxtX2Bs7f1HJb8+7OV4BEHk/1AF5rvqAvacwg4nreE3Qqg2+oDk vA7O3oGu835ta8TBc4aC5GPtuPmN3uXHMFoCjfAHBjfh4n3UagmFxcnBcGRkl/kBJWaAHf TSrLSP1goqaCkGHzNgc3VnVMc2uZI3M= Received: by mail-lf1-f47.google.com with SMTP id 2adb3069b0e04-50098cc8967so6466740e87.1 for ; Tue, 29 Aug 2023 01:11:49 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20221208; t=1693296709; x=1693901509; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=3Ov9GwojXyegJrClaNh+dNhWRQYcZI0mb3uhXBk2808=; b=bJ6I4buxddwJbnV5khklHoNr8shneI1m2Pfz7Cn7lVn3Ct2TGx4NVrKFHMw0JCJm8K BMxCwi6WzwDxhS7jisbwydWfPFF2RvZ9E57hhAlUgeG9s7dO57fHKynARQd58sn5xkOu YU0ixCbxOslMZOn2Ct4ZXQKkUsSykzQLISy/dmgxa7AINLLz/rHk0FOsAmwAflAGW9NQ v56thAg/0Xxos/mutifkNdfXyC7i18IwGX5o1DFx7V18+tzrK/cckhszynKC6QzAYsBQ jA7iiREStn7vJ048+ZFJEwrzQz+pz+MPg0dv1ZXI+XOTiVY6jT9Isa7Aty3vQzXJb1JY Fevw== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20221208; t=1693296709; x=1693901509; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=3Ov9GwojXyegJrClaNh+dNhWRQYcZI0mb3uhXBk2808=; b=MwM2Zr5DlAAgstk5qaCZ3RP6QgFe9zQOu6WA4juoSBNKMjz2TAuRtZOGVowezWFR7m xaWz4yZukjJMAkRieV2NocfNO53rTa8UBXJhKHHCAgGCDibLKkEkSTzojpXHM2wescYJ ijLPobrqXTsbquUJOrfzwtrWDOGBFaxGPmgJjYBaV1v3OSpB1ud5MBn3R3h9epD4MYtE Pj9hV5VQ+B3FkI+WKFm0WhuF9SDcFBAlrQrR84ciIu8zewi70nXwY9clAmW3HcqlnLzu AbrJetVJfTciN73urHvDdxwNVSy0QgcG3Y1MUaWUYWhN+gTxtu8BuLwz0WmFTrnZaf4T Ef0A== X-Gm-Message-State: AOJu0YzBrgaoWvz835LDL+Q4cQGScgLkqLhzx8GI8lPRBPCrNOFjbRxs 2x6gDh/6LcQhKs/KkPm7wzFznP4iK7q/pw== X-Google-Smtp-Source: 
From: "Uladzislau Rezki (Sony)"
To: linux-mm@kvack.org, Andrew Morton
Cc: LKML, Baoquan He, Lorenzo Stoakes, Christoph Hellwig, Matthew Wilcox,
 "Liam R. Howlett", Dave Chinner, "Paul E. McKenney", Joel Fernandes,
 Uladzislau Rezki, Oleksiy Avramchenko
Subject: [PATCH v2 6/9] mm: vmalloc: Offload free_vmap_area_lock lock
Date: Tue, 29 Aug 2023 10:11:39 +0200
Message-Id: <20230829081142.3619-7-urezki@gmail.com>
X-Mailer: git-send-email 2.30.2
In-Reply-To: <20230829081142.3619-1-urezki@gmail.com>
References: <20230829081142.3619-1-urezki@gmail.com>
MIME-Version: 1.0
Concurrent access to a global vmap space is a bottleneck. High contention can
be simulated by running the vmalloc test suite. To address it, introduce an
effective vmap node logic. Each node behaves as an independent entity.
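The idea above, splitting one globally locked structure into independently locked nodes selected by address, can be sketched in plain userspace C. All names here (`NR_NODES`, `NODE_SHIFT`, `track_alloc`) are illustrative stand-ins, not the kernel's actual symbols:

```c
#include <assert.h>
#include <pthread.h>
#include <stddef.h>

#define NR_NODES 4
#define NODE_SHIFT 12 /* illustrative: one node per 4 KB region */

/* Each node owns its own lock and bookkeeping, so threads that touch
 * different address regions never contend on a single global lock. */
struct node {
        pthread_mutex_t lock;
        unsigned long nr_busy;
};

static struct node nodes[NR_NODES] = {
        [0 ... NR_NODES - 1] = { PTHREAD_MUTEX_INITIALIZER, 0 }
};

/* Hash an address to the node that is responsible for it. */
static struct node *addr_to_node(unsigned long addr)
{
        return &nodes[(addr >> NODE_SHIFT) % NR_NODES];
}

/* Record an allocation under the owning node's lock only. */
static void track_alloc(unsigned long addr)
{
        struct node *n = addr_to_node(addr);

        pthread_mutex_lock(&n->lock);
        n->nr_busy++;
        pthread_mutex_unlock(&n->lock);
}
```

The point of the sketch is only the locking topology: a request takes exactly one per-node lock, so contention drops roughly by the node count when accesses are spread across the address space.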
When a node is accessed, it serves a request directly (if possible); it can
also fetch a new block from the global heap into its internals if no space or
only low capacity is left. This technique reduces pressure on the global vmap
lock.

Signed-off-by: Uladzislau Rezki (Sony)
---
 mm/vmalloc.c | 316 +++++++++++++++++++++++++++++++++++++++++++++------
 1 file changed, 279 insertions(+), 37 deletions(-)

diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index 5a8a9c1370b6..4fd4915c532d 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -779,6 +779,7 @@ struct rb_list {
 
 struct vmap_node {
         /* Bookkeeping data of this node. */
+        struct rb_list free;
         struct rb_list busy;
         struct rb_list lazy;
 
@@ -786,6 +787,13 @@ struct vmap_node {
          * Ready-to-free areas.
          */
         struct list_head purge_list;
+        struct work_struct purge_work;
+        unsigned long nr_purged;
+
+        /*
+         * Control that only one user can pre-fetch this node.
+         */
+        atomic_t fill_in_progress;
 };
 
 static struct vmap_node *nodes, snode;
@@ -804,6 +812,32 @@ addr_to_node(unsigned long addr)
         return &nodes[addr_to_node_id(addr)];
 }
 
+static inline struct vmap_node *
+id_to_node(int id)
+{
+        return &nodes[id % nr_nodes];
+}
+
+static inline int
+this_node_id(void)
+{
+        return raw_smp_processor_id() % nr_nodes;
+}
+
+static inline unsigned long
+encode_vn_id(int node_id)
+{
+        /* Can store U8_MAX [0:254] nodes. */
+        return (node_id + 1) << BITS_PER_BYTE;
+}
+
+static inline int
+decode_vn_id(unsigned long val)
+{
+        /* Can store U8_MAX [0:254] nodes. */
+        return (val >> BITS_PER_BYTE) - 1;
+}
+
 static __always_inline unsigned long
 va_size(struct vmap_area *va)
 {
@@ -1586,6 +1620,7 @@ __alloc_vmap_area(struct rb_root *root, struct list_head *head,
 static void free_vmap_area(struct vmap_area *va)
 {
         struct vmap_node *vn = addr_to_node(va->va_start);
+        int vn_id = decode_vn_id(va->flags);
 
         /*
          * Remove from the busy tree/list.
@@ -1594,12 +1629,19 @@ static void free_vmap_area(struct vmap_area *va)
         unlink_va(va, &vn->busy.root);
         spin_unlock(&vn->busy.lock);
 
-        /*
-         * Insert/Merge it back to the free tree/list.
-         */
-        spin_lock(&free_vmap_area_lock);
-        merge_or_add_vmap_area_augment(va, &free_vmap_area_root, &free_vmap_area_list);
-        spin_unlock(&free_vmap_area_lock);
+        if (vn_id >= 0) {
+                vn = id_to_node(vn_id);
+
+                /* Belongs to this node. */
+                spin_lock(&vn->free.lock);
+                merge_or_add_vmap_area_augment(va, &vn->free.root, &vn->free.head);
+                spin_unlock(&vn->free.lock);
+        } else {
+                /* Goes to global. */
+                spin_lock(&free_vmap_area_lock);
+                merge_or_add_vmap_area_augment(va, &free_vmap_area_root, &free_vmap_area_list);
+                spin_unlock(&free_vmap_area_lock);
+        }
 }
 
 static inline void
@@ -1625,6 +1667,134 @@ preload_this_cpu_lock(spinlock_t *lock, gfp_t gfp_mask, int node)
         kmem_cache_free(vmap_area_cachep, va);
 }
 
+static unsigned long
+node_alloc_fill(struct vmap_node *vn,
+                unsigned long size, unsigned long align,
+                gfp_t gfp_mask, int node)
+{
+        struct vmap_area *va;
+        unsigned long addr;
+
+        va = kmem_cache_alloc_node(vmap_area_cachep, gfp_mask, node);
+        if (unlikely(!va))
+                return VMALLOC_END;
+
+        /*
+         * Please note, an allocated block is not aligned to its size.
+         * Therefore it can span several zones, which means addr_to_node()
+         * can point to two different nodes:
+         *      <----->
+         * -|-----|-----|-----|-----|-
+         *     1     2     0     1
+         *
+         * An alignment would just increase fragmentation and thus heap
+         * consumption, which we would like to avoid.
+         */
+        spin_lock(&free_vmap_area_lock);
+        addr = __alloc_vmap_area(&free_vmap_area_root, &free_vmap_area_list,
+                node_size, 1, VMALLOC_START, VMALLOC_END);
+        spin_unlock(&free_vmap_area_lock);
+
+        if (addr == VMALLOC_END) {
+                kmem_cache_free(vmap_area_cachep, va);
+                return VMALLOC_END;
+        }
+
+        /*
+         * Statement and conditions of the problem:
+         *
+         * a) where to free areas allocated from a node:
+         *    - directly to the global heap;
+         *    - to the node we got a VA from;
+         *    - and under what condition allocated areas are
+         *      returned to the global heap;
+         * b) how to properly handle small free fragments left
+         *    over in a node, in order to mitigate fragmentation.
+         *
+         * How the described points are addressed:
+         * When a new block is allocated (from the global heap) we shrink
+         * it deliberately by one page from both sides and place it on
+         * this node to serve a request.
+         *
+         * Why we shrink: we would like to distinguish VAs which were
+         * obtained from a node from those obtained from the global heap.
+         * This is for the free path. va->flags contains the node id a
+         * VA belongs to. No merging between VAs is possible unless they
+         * are part of the same block.
+         *
+         * The free path, in its turn, can then detect the correct node
+         * a VA has to be returned to. Thus, once a block is freed
+         * entirely, its merged size becomes node_size - (2 * PAGE_SIZE);
+         * it recovers its edges and is released to the global heap for
+         * reuse elsewhere. In the partly-freed case, VAs go back to the
+         * node without bothering the global vmap space.
+         *
+         *  1              2              3
+         * |<------------>|<------------>|<------------>|
+         * |..<-------->..|..<-------->..|..<-------->..|
+         */
+        va->va_start = addr + PAGE_SIZE;
+        va->va_end = (addr + node_size) - PAGE_SIZE;
+
+        spin_lock(&vn->free.lock);
+        /* Never merges. See explanation above.
+         */
+        insert_vmap_area_augment(va, NULL, &vn->free.root, &vn->free.head);
+        addr = va_alloc(va, &vn->free.root, &vn->free.head,
+                size, align, VMALLOC_START, VMALLOC_END);
+        spin_unlock(&vn->free.lock);
+
+        return addr;
+}
+
+static unsigned long
+node_alloc(int vn_id, unsigned long size, unsigned long align,
+                unsigned long vstart, unsigned long vend,
+                gfp_t gfp_mask, int node)
+{
+        struct vmap_node *vn = id_to_node(vn_id);
+        unsigned long extra = align > PAGE_SIZE ? align : 0;
+        bool do_alloc_fill = false;
+        unsigned long addr;
+
+        /*
+         * Fall back to the global heap if not vmalloc.
+         */
+        if (vstart != VMALLOC_START || vend != VMALLOC_END)
+                return vend;
+
+        /*
+         * The maximum allocation limit is 1/4 of the capacity. This
+         * is done in order to prevent a fast depletion of the zone
+         * by a few requests.
+         */
+        if (size + extra > (node_size >> 2))
+                return vend;
+
+        spin_lock(&vn->free.lock);
+        addr = __alloc_vmap_area(&vn->free.root, &vn->free.head,
+                size, align, vstart, vend);
+
+        if (addr == vend) {
+                /*
+                 * Set the fetch flag under the critical section.
+                 * This guarantees that only one user is eligible
+                 * to perform a pre-fetch. A reset operation can
+                 * be concurrent.
+                 */
+                if (!atomic_xchg(&vn->fill_in_progress, 1))
+                        do_alloc_fill = true;
+        }
+        spin_unlock(&vn->free.lock);
+
+        /* Only if a previous attempt failed. */
+        if (do_alloc_fill) {
+                addr = node_alloc_fill(vn, size, align, gfp_mask, node);
+                atomic_set(&vn->fill_in_progress, 0);
+        }
+
+        return addr;
+}
+
 /*
  * Allocate a region of KVA of the specified size and alignment, within the
  * vstart and vend.
@@ -1640,7 +1810,7 @@ static struct vmap_area *alloc_vmap_area(unsigned long size,
         unsigned long freed;
         unsigned long addr;
         int purged = 0;
-        int ret;
+        int ret, vn_id;
 
         if (unlikely(!size || offset_in_page(size) || !is_power_of_2(align)))
                 return ERR_PTR(-EINVAL);
@@ -1661,11 +1831,17 @@ static struct vmap_area *alloc_vmap_area(unsigned long size,
          */
         kmemleak_scan_area(&va->rb_node, SIZE_MAX, gfp_mask);
 
+        vn_id = this_node_id();
+        addr = node_alloc(vn_id, size, align, vstart, vend, gfp_mask, node);
+        va->flags = (addr != vend) ? encode_vn_id(vn_id) : 0;
+
 retry:
-        preload_this_cpu_lock(&free_vmap_area_lock, gfp_mask, node);
-        addr = __alloc_vmap_area(&free_vmap_area_root, &free_vmap_area_list,
-                size, align, vstart, vend);
-        spin_unlock(&free_vmap_area_lock);
+        if (addr == vend) {
+                preload_this_cpu_lock(&free_vmap_area_lock, gfp_mask, node);
+                addr = __alloc_vmap_area(&free_vmap_area_root, &free_vmap_area_list,
+                        size, align, vstart, vend);
+                spin_unlock(&free_vmap_area_lock);
+        }
 
         trace_alloc_vmap_area(addr, size, align, vstart, vend, addr == vend);
 
@@ -1679,7 +1855,7 @@ static struct vmap_area *alloc_vmap_area(unsigned long size,
         va->va_start = addr;
         va->va_end = addr + size;
         va->vm = NULL;
-        va->flags = va_flags;
+        va->flags |= va_flags;
 
         vn = addr_to_node(va->va_start);
 
@@ -1772,31 +1948,58 @@ static DEFINE_MUTEX(vmap_purge_lock);
 static void purge_fragmented_blocks_allcpus(void);
 static cpumask_t purge_nodes;
 
-/*
- * Purges all lazily-freed vmap areas.
- */
-static unsigned long
-purge_vmap_node(struct vmap_node *vn)
+static void
+reclaim_list_global(struct list_head *head)
+{
+        struct vmap_area *va, *n;
+
+        if (list_empty(head))
+                return;
+
+        spin_lock(&free_vmap_area_lock);
+        list_for_each_entry_safe(va, n, head, list)
+                merge_or_add_vmap_area_augment(va,
+                        &free_vmap_area_root, &free_vmap_area_list);
+        spin_unlock(&free_vmap_area_lock);
+}
+
+static void purge_vmap_node(struct work_struct *work)
 {
-        unsigned long num_purged_areas = 0;
+        struct vmap_node *vn = container_of(work,
+                struct vmap_node, purge_work);
         struct vmap_area *va, *n_va;
+        LIST_HEAD(global);
+
+        vn->nr_purged = 0;
 
         if (list_empty(&vn->purge_list))
-                return 0;
+                return;
 
-        spin_lock(&free_vmap_area_lock);
+        spin_lock(&vn->free.lock);
         list_for_each_entry_safe(va, n_va, &vn->purge_list, list) {
                 unsigned long nr = (va->va_end - va->va_start) >> PAGE_SHIFT;
                 unsigned long orig_start = va->va_start;
                 unsigned long orig_end = va->va_end;
+                int vn_id = decode_vn_id(va->flags);
 
-                /*
-                 * Finally insert or merge lazily-freed area. It is
-                 * detached and there is no need to "unlink" it from
-                 * anything.
-                 */
-                va = merge_or_add_vmap_area_augment(va, &free_vmap_area_root,
-                        &free_vmap_area_list);
+                list_del_init(&va->list);
+
+                if (vn_id >= 0) {
+                        if (va_size(va) != node_size - (2 * PAGE_SIZE))
+                                va = merge_or_add_vmap_area_augment(va, &vn->free.root, &vn->free.head);
+
+                        if (va_size(va) == node_size - (2 * PAGE_SIZE)) {
+                                if (!list_empty(&va->list))
+                                        unlink_va_augment(va, &vn->free.root);
+
+                                /* Restore the block size.
+                                 */
+                                va->va_start -= PAGE_SIZE;
+                                va->va_end += PAGE_SIZE;
+                                list_add(&va->list, &global);
+                        }
+                } else {
+                        list_add(&va->list, &global);
+                }
 
                 if (!va)
                         continue;
@@ -1806,11 +2009,10 @@ purge_vmap_node(struct vmap_node *vn)
                                 va->va_start, va->va_end);
 
                 atomic_long_sub(nr, &vmap_lazy_nr);
-                num_purged_areas++;
+                vn->nr_purged++;
         }
-        spin_unlock(&free_vmap_area_lock);
-
-        return num_purged_areas;
+        spin_unlock(&vn->free.lock);
+        reclaim_list_global(&global);
 }
 
 /*
@@ -1818,11 +2020,17 @@ purge_vmap_node(struct vmap_node *vn)
  */
 static bool __purge_vmap_area_lazy(unsigned long start, unsigned long end)
 {
-        unsigned long num_purged_areas = 0;
+        unsigned long nr_purged_areas = 0;
+        unsigned int nr_purge_helpers;
+        unsigned int nr_purge_nodes;
         struct vmap_node *vn;
         int i;
 
         lockdep_assert_held(&vmap_purge_lock);
+
+        /*
+         * Use cpumask to mark which node has to be processed.
+         */
         purge_nodes = CPU_MASK_NONE;
 
         for (i = 0; i < nr_nodes; i++) {
@@ -1847,17 +2055,45 @@ static bool __purge_vmap_area_lazy(unsigned long start, unsigned long end)
                 cpumask_set_cpu(i, &purge_nodes);
         }
 
-        if (cpumask_weight(&purge_nodes) > 0) {
+        nr_purge_nodes = cpumask_weight(&purge_nodes);
+        if (nr_purge_nodes > 0) {
                 flush_tlb_kernel_range(start, end);
 
+                /* One extra worker per each full lazy_max_pages() set, minus one.
+                 */
+                nr_purge_helpers = atomic_long_read(&vmap_lazy_nr) / lazy_max_pages();
+                nr_purge_helpers = clamp(nr_purge_helpers, 1U, nr_purge_nodes) - 1;
+
+                for_each_cpu(i, &purge_nodes) {
+                        vn = &nodes[i];
+
+                        if (nr_purge_helpers > 0) {
+                                INIT_WORK(&vn->purge_work, purge_vmap_node);
+
+                                if (cpumask_test_cpu(i, cpu_online_mask))
+                                        schedule_work_on(i, &vn->purge_work);
+                                else
+                                        schedule_work(&vn->purge_work);
+
+                                nr_purge_helpers--;
+                        } else {
+                                vn->purge_work.func = NULL;
+                                purge_vmap_node(&vn->purge_work);
+                                nr_purged_areas += vn->nr_purged;
+                        }
+                }
+
                 for_each_cpu(i, &purge_nodes) {
                         vn = &nodes[i];
-                        num_purged_areas += purge_vmap_node(vn);
+
+                        if (vn->purge_work.func) {
+                                flush_work(&vn->purge_work);
+                                nr_purged_areas += vn->nr_purged;
+                        }
                 }
         }
 
-        trace_purge_vmap_area_lazy(start, end, num_purged_areas);
-        return num_purged_areas > 0;
+        trace_purge_vmap_area_lazy(start, end, nr_purged_areas);
+        return nr_purged_areas > 0;
 }
 
 /*
@@ -1886,9 +2122,11 @@ static void drain_vmap_area_work(struct work_struct *work)
  */
 static void free_vmap_area_noflush(struct vmap_area *va)
 {
-        struct vmap_node *vn = addr_to_node(va->va_start);
         unsigned long nr_lazy_max = lazy_max_pages();
         unsigned long va_start = va->va_start;
+        int vn_id = decode_vn_id(va->flags);
+        struct vmap_node *vn = vn_id >= 0 ?
+                id_to_node(vn_id) : addr_to_node(va->va_start);
         unsigned long nr_lazy;
 
         if (WARN_ON_ONCE(!list_empty(&va->list)))
@@ -4574,6 +4812,10 @@ static void vmap_init_nodes(void)
                 vn->lazy.root = RB_ROOT;
                 INIT_LIST_HEAD(&vn->lazy.head);
                 spin_lock_init(&vn->lazy.lock);
+
+                vn->free.root = RB_ROOT;
+                INIT_LIST_HEAD(&vn->free.head);
+                spin_lock_init(&vn->free.lock);
         }
 }

From patchwork Tue Aug 29 08:11:40 2023
X-Patchwork-Submitter: Uladzislau Rezki
X-Patchwork-Id: 13368664
From: "Uladzislau Rezki (Sony)"
To: linux-mm@kvack.org, Andrew Morton
Cc: LKML, Baoquan He, Lorenzo Stoakes, Christoph Hellwig, Matthew Wilcox,
 "Liam R. Howlett", Dave Chinner, "Paul E. McKenney", Joel Fernandes,
 Uladzislau Rezki, Oleksiy Avramchenko
Subject: [PATCH v2 7/9] mm: vmalloc: Support multiple nodes in vread_iter
Date: Tue, 29 Aug 2023 10:11:40 +0200
Message-Id: <20230829081142.3619-8-urezki@gmail.com>
X-Mailer: git-send-email 2.30.2
In-Reply-To: <20230829081142.3619-1-urezki@gmail.com>
References: <20230829081142.3619-1-urezki@gmail.com>
MIME-Version: 1.0
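The subject above concerns reading VAs spread across multiple nodes: the reader must repeatedly find, over all nodes, the lowest VA whose end exceeds the current address. A minimal userspace sketch of that cross-node lookup, with sorted arrays standing in for the kernel's per-node rb-trees (all names and data here are illustrative):

```c
#include <assert.h>
#include <stddef.h>

struct va { unsigned long start, end; };

#define NR_NODES 2

/* Per-node sorted arrays stand in for per-node busy rb-trees. */
static const struct va node_vas[NR_NODES][2] = {
        { { 0x1000, 0x2000 }, { 0x5000, 0x6000 } }, /* node 0 */
        { { 0x3000, 0x4000 }, { 0x7000, 0x8000 } }, /* node 1 */
};

/* Return the VA with the lowest start among all nodes that still
 * satisfies addr < va->end, or NULL if no node holds one. */
static const struct va *find_exceed_addr(unsigned long addr)
{
        const struct va *best = NULL;

        for (int n = 0; n < NR_NODES; n++) {
                for (int i = 0; i < 2; i++) {
                        const struct va *v = &node_vas[n][i];

                        /* Arrays are sorted: the first entry with
                         * addr < end is this node's lowest candidate. */
                        if (addr < v->end) {
                                if (!best || v->start < best->start)
                                        best = v;
                                break;
                        }
                }
        }
        return best;
}
```

Because each node is consulted independently, a sequential read simply repeats this lookup with the end of the previous VA as the new address, which is the looping shape the patch gives vread_iter().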
Extend vread_iter() to be able to perform a sequential read of VAs which are
spread among multiple nodes, so that a data read over /dev/kmem correctly
reflects the vmalloc memory layout.

Signed-off-by: Uladzislau Rezki (Sony)
Reviewed-by: Baoquan He
---
 mm/vmalloc.c | 67 +++++++++++++++++++++++++++++++++++++++++-----------
 1 file changed, 53 insertions(+), 14 deletions(-)

diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index 4fd4915c532d..968144c16237 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -870,7 +870,7 @@ unsigned long vmalloc_nr_pages(void)
 
 /* Look up the first VA which satisfies addr < va_end, NULL if none. */
 static struct vmap_area *
-find_vmap_area_exceed_addr(unsigned long addr, struct rb_root *root)
+__find_vmap_area_exceed_addr(unsigned long addr, struct rb_root *root)
 {
         struct vmap_area *va = NULL;
         struct rb_node *n = root->rb_node;
@@ -894,6 +894,41 @@ find_vmap_area_exceed_addr(unsigned long addr, struct rb_root *root)
         return va;
 }
 
+/*
+ * Returns the node where the first VA that satisfies addr < va_end resides.
+ * On success, the node is locked. The caller is responsible for unlocking
+ * it once the VA no longer needs to be accessed.
+ *
+ * Returns NULL if nothing is found.
+ */
+static struct vmap_node *
+find_vmap_area_exceed_addr_lock(unsigned long addr, struct vmap_area **va)
+{
+        struct vmap_node *vn, *va_node = NULL;
+        struct vmap_area *va_lowest;
+        int i;
+
+        for (i = 0; i < nr_nodes; i++) {
+                vn = &nodes[i];
+
+                spin_lock(&vn->busy.lock);
+                va_lowest = __find_vmap_area_exceed_addr(addr, &vn->busy.root);
+                if (va_lowest) {
+                        if (!va_node || va_lowest->va_start < (*va)->va_start) {
+                                if (va_node)
+                                        spin_unlock(&va_node->busy.lock);
+
+                                *va = va_lowest;
+                                va_node = vn;
+                                continue;
+                        }
+                }
+                spin_unlock(&vn->busy.lock);
+        }
+
+        return va_node;
+}
+
 static struct vmap_area *__find_vmap_area(unsigned long addr, struct rb_root *root)
 {
         struct rb_node *n = root->rb_node;
@@ -4048,6 +4083,7 @@ long vread_iter(struct iov_iter *iter, const char *addr, size_t count)
         struct vm_struct *vm;
         char *vaddr;
         size_t n, size, flags, remains;
+        unsigned long next;
 
         addr = kasan_reset_tag(addr);
 
@@ -4057,19 +4093,15 @@ long vread_iter(struct iov_iter *iter, const char *addr, size_t count)
 
         remains = count;
 
-        /* Hooked to node_0 so far. */
-        vn = addr_to_node(0);
-        spin_lock(&vn->busy.lock);
-
-        va = find_vmap_area_exceed_addr((unsigned long)addr, &vn->busy.root);
-        if (!va)
+        vn = find_vmap_area_exceed_addr_lock((unsigned long) addr, &va);
+        if (!vn)
                 goto finished_zero;
 
         /* no intersects with alive vmap_area */
         if ((unsigned long)addr + remains <= va->va_start)
                 goto finished_zero;
 
-        list_for_each_entry_from(va, &vn->busy.head, list) {
+        do {
                 size_t copied;
 
                 if (remains == 0)
@@ -4084,10 +4116,10 @@ long vread_iter(struct iov_iter *iter, const char *addr, size_t count)
                 WARN_ON(flags == VMAP_BLOCK);
 
                 if (!vm && !flags)
-                        continue;
+                        goto next_va;
 
                 if (vm && (vm->flags & VM_UNINITIALIZED))
-                        continue;
+                        goto next_va;
 
                 /* Pair with smp_wmb() in clear_vm_uninitialized_flag() */
                 smp_rmb();
 
@@ -4096,7 +4128,7 @@ long vread_iter(struct iov_iter *iter, const char *addr, size_t count)
                 size = vm ?
                        get_vm_area_size(vm) : va_size(va);
 
                 if (addr >= vaddr + size)
-                        continue;
+                        goto next_va;
 
                 if (addr < vaddr) {
                         size_t to_zero = min_t(size_t, vaddr - addr, remains);
@@ -4125,15 +4157,22 @@ long vread_iter(struct iov_iter *iter, const char *addr, size_t count)
 
                 if (copied != n)
                         goto finished;
-        }
+
+        next_va:
+                next = va->va_end;
+                spin_unlock(&vn->busy.lock);
+        } while ((vn = find_vmap_area_exceed_addr_lock(next, &va)));
 
 finished_zero:
-        spin_unlock(&vn->busy.lock);
+        if (vn)
+                spin_unlock(&vn->busy.lock);
+
         /* zero-fill memory holes */
         return count - remains + zero_iter(iter, remains);
 finished:
         /* Nothing remains, or we couldn't copy/zero everything. */
-        spin_unlock(&vn->busy.lock);
+        if (vn)
+                spin_unlock(&vn->busy.lock);
 
         return count - remains;
 }

From patchwork Tue Aug 29 08:11:41 2023
X-Patchwork-Submitter: Uladzislau Rezki
X-Patchwork-Id: 13368665
From: "Uladzislau Rezki (Sony)"
To: linux-mm@kvack.org, Andrew Morton
Cc: LKML, Baoquan He, Lorenzo Stoakes, Christoph Hellwig, Matthew Wilcox,
 "Liam R. Howlett", Dave Chinner, "Paul E. McKenney", Joel Fernandes,
 Uladzislau Rezki, Oleksiy Avramchenko
Subject: [PATCH v2 8/9] mm: vmalloc: Support multiple nodes in vmallocinfo
Date: Tue, 29 Aug 2023 10:11:41 +0200
Message-Id: <20230829081142.3619-9-urezki@gmail.com>
X-Mailer: git-send-email 2.30.2
In-Reply-To: <20230829081142.3619-1-urezki@gmail.com>
References: <20230829081142.3619-1-urezki@gmail.com>
MIME-Version: 1.0
Allocated areas are spread among the nodes, which implies that the
scanning has to be performed on each node individually in order to
dump all existing VAs.

Signed-off-by: Uladzislau Rezki (Sony)
Reviewed-by: Baoquan He
---
 mm/vmalloc.c | 120 ++++++++++++++++++++-------------------------
 1 file changed, 47 insertions(+), 73 deletions(-)

diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index 968144c16237..9cce012aecdb 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -4636,30 +4636,6 @@ bool vmalloc_dump_obj(void *object)
 #endif
 
 #ifdef CONFIG_PROC_FS
-static void *s_start(struct seq_file *m, loff_t *pos)
-{
-	struct vmap_node *vn = addr_to_node(0);
-
-	mutex_lock(&vmap_purge_lock);
-	spin_lock(&vn->busy.lock);
-
-	return seq_list_start(&vn->busy.head, *pos);
-}
-
-static void *s_next(struct seq_file *m, void *p, loff_t *pos)
-{
-	struct vmap_node *vn = addr_to_node(0);
-	return seq_list_next(p, &vn->busy.head, pos);
-}
-
-static void s_stop(struct seq_file *m, void *p)
-{
-	struct vmap_node *vn = addr_to_node(0);
-
-	spin_unlock(&vn->busy.lock);
-	mutex_unlock(&vmap_purge_lock);
-}
-
 static void show_numa_info(struct seq_file *m, struct vm_struct *v)
 {
 	if (IS_ENABLED(CONFIG_NUMA)) {
@@ -4703,84 +4679,82 @@ static void show_purge_info(struct seq_file *m)
 	}
 }
 
-static int s_show(struct seq_file *m, void *p)
+static int vmalloc_info_show(struct seq_file *m, void *p)
 {
 	struct vmap_node *vn;
 	struct vmap_area *va;
 	struct vm_struct *v;
+	int i;
 
-	vn = addr_to_node(0);
-	va = list_entry(p, struct vmap_area, list);
+	for (i = 0; i < nr_nodes; i++) {
+		vn = &nodes[i];
 
-	if (!va->vm) {
-		if (va->flags & VMAP_RAM)
-			seq_printf(m, "0x%pK-0x%pK %7ld vm_map_ram\n",
-				(void *)va->va_start, (void *)va->va_end,
-				va->va_end - va->va_start);
+		spin_lock(&vn->busy.lock);
+		list_for_each_entry(va, &vn->busy.head, list) {
+			if (!va->vm) {
+				if (va->flags & VMAP_RAM)
+					seq_printf(m, "0x%pK-0x%pK %7ld vm_map_ram\n",
+						(void *)va->va_start, (void *)va->va_end,
+						va->va_end - va->va_start);
 
-		goto final;
-	}
+				continue;
+			}
 
-	v = va->vm;
+			v = va->vm;
 
-	seq_printf(m, "0x%pK-0x%pK %7ld",
-		v->addr, v->addr + v->size, v->size);
+			seq_printf(m, "0x%pK-0x%pK %7ld",
+				v->addr, v->addr + v->size, v->size);
 
-	if (v->caller)
-		seq_printf(m, " %pS", v->caller);
+			if (v->caller)
+				seq_printf(m, " %pS", v->caller);
 
-	if (v->nr_pages)
-		seq_printf(m, " pages=%d", v->nr_pages);
+			if (v->nr_pages)
+				seq_printf(m, " pages=%d", v->nr_pages);
 
-	if (v->phys_addr)
-		seq_printf(m, " phys=%pa", &v->phys_addr);
+			if (v->phys_addr)
+				seq_printf(m, " phys=%pa", &v->phys_addr);
 
-	if (v->flags & VM_IOREMAP)
-		seq_puts(m, " ioremap");
+			if (v->flags & VM_IOREMAP)
+				seq_puts(m, " ioremap");
 
-	if (v->flags & VM_ALLOC)
-		seq_puts(m, " vmalloc");
+			if (v->flags & VM_ALLOC)
+				seq_puts(m, " vmalloc");
 
-	if (v->flags & VM_MAP)
-		seq_puts(m, " vmap");
+			if (v->flags & VM_MAP)
+				seq_puts(m, " vmap");
 
-	if (v->flags & VM_USERMAP)
-		seq_puts(m, " user");
+			if (v->flags & VM_USERMAP)
+				seq_puts(m, " user");
 
-	if (v->flags & VM_DMA_COHERENT)
-		seq_puts(m, " dma-coherent");
+			if (v->flags & VM_DMA_COHERENT)
+				seq_puts(m, " dma-coherent");
 
-	if (is_vmalloc_addr(v->pages))
-		seq_puts(m, " vpages");
+			if (is_vmalloc_addr(v->pages))
+				seq_puts(m, " vpages");
 
-	show_numa_info(m, v);
-	seq_putc(m, '\n');
+			show_numa_info(m, v);
+			seq_putc(m, '\n');
+		}
+		spin_unlock(&vn->busy.lock);
+	}
 
 	/*
 	 * As a final step, dump "unpurged" areas.
 	 */
-final:
-	if (list_is_last(&va->list, &vn->busy.head))
-		show_purge_info(m);
-
+	show_purge_info(m);
 	return 0;
 }
 
-static const struct seq_operations vmalloc_op = {
-	.start = s_start,
-	.next = s_next,
-	.stop = s_stop,
-	.show = s_show,
-};
-
 static int __init proc_vmalloc_init(void)
 {
+	void *priv_data = NULL;
+
 	if (IS_ENABLED(CONFIG_NUMA))
-		proc_create_seq_private("vmallocinfo", 0400, NULL,
-			&vmalloc_op,
-			nr_node_ids * sizeof(unsigned int), NULL);
-	else
-		proc_create_seq("vmallocinfo", 0400, NULL, &vmalloc_op);
+		priv_data = kmalloc(nr_node_ids * sizeof(unsigned int), GFP_KERNEL);
+
+	proc_create_single_data("vmallocinfo",
+		0400, NULL, vmalloc_info_show, priv_data);
+
 	return 0;
 }
 module_init(proc_vmalloc_init);

From patchwork Tue Aug 29 08:11:42 2023
X-Patchwork-Submitter: Uladzislau Rezki
X-Patchwork-Id: 13368666
From: "Uladzislau Rezki (Sony)" <urezki@gmail.com>
To: linux-mm@kvack.org, Andrew Morton
Cc: LKML, Baoquan He, Lorenzo Stoakes, Christoph Hellwig, Matthew Wilcox, "Liam R. Howlett", Dave Chinner, "Paul E. McKenney", Joel Fernandes, Uladzislau Rezki, Oleksiy Avramchenko
Subject: [PATCH v2 9/9] mm: vmalloc: Set nr_nodes/node_size based on CPU-cores
Date: Tue, 29 Aug 2023 10:11:42 +0200
Message-Id: <20230829081142.3619-10-urezki@gmail.com>
In-Reply-To: <20230829081142.3619-1-urezki@gmail.com>
References: <20230829081142.3619-1-urezki@gmail.com>

The density ratio is set to 2, i.e. two users per one node. For
example, if there are 6 cores in a system, "nr_nodes" is 3. The
"node_size" also depends on the number of physical cores; a
high-threshold limit is hard-coded to SZ_4M.

For 32-bit and single/dual-core systems, access to the global vmap
heap is not balanced across nodes. Such small systems do not suffer
from lock contention because of their limited number of CPU cores.
Test on AMD Ryzen Threadripper 3970X 32-Core Processor:

sudo ./test_vmalloc.sh run_test_mask=127 nr_threads=64

 94.17%  0.90%  [kernel]   [k] _raw_spin_lock
 93.27% 93.05%  [kernel]   [k] native_queued_spin_lock_slowpath
 74.69%  0.25%  [kernel]   [k] __vmalloc_node_range
 72.64%  0.01%  [kernel]   [k] __get_vm_area_node
 72.04%  0.89%  [kernel]   [k] alloc_vmap_area
 42.17%  0.00%  [kernel]   [k] vmalloc
 32.53%  0.00%  [kernel]   [k] __vmalloc_node
 24.91%  0.25%  [kernel]   [k] vfree
 24.32%  0.01%  [kernel]   [k] remove_vm_area
 22.63%  0.21%  [kernel]   [k] find_unlink_vmap_area
 15.51%  0.00%  [unknown]  [k] 0xffffffffc09a74ac
 14.35%  0.00%  [kernel]   [k] ret_from_fork_asm
 14.35%  0.00%  [kernel]   [k] ret_from_fork
 14.35%  0.00%  [kernel]   [k] kthread

vs

 74.32%  2.42%  [kernel]   [k] __vmalloc_node_range
 69.58%  0.01%  [kernel]   [k] vmalloc
 54.21%  1.17%  [kernel]   [k] __alloc_pages_bulk
 48.13% 47.91%  [kernel]   [k] clear_page_orig
 43.60%  0.01%  [unknown]  [k] 0xffffffffc082f16f
 32.06%  0.00%  [kernel]   [k] ret_from_fork_asm
 32.06%  0.00%  [kernel]   [k] ret_from_fork
 32.06%  0.00%  [kernel]   [k] kthread
 31.30%  0.00%  [unknown]  [k] 0xffffffffc082f889
 22.98%  4.16%  [kernel]   [k] vfree
 14.36%  0.28%  [kernel]   [k] __get_vm_area_node
 13.43%  3.35%  [kernel]   [k] alloc_vmap_area
 10.86%  0.04%  [kernel]   [k] remove_vm_area
  8.89%  2.75%  [kernel]   [k] _raw_spin_lock
  7.19%  0.00%  [unknown]  [k] 0xffffffffc082fba3
  6.65%  1.37%  [kernel]   [k] free_unref_page
  6.13%  6.11%  [kernel]   [k] native_queued_spin_lock_slowpath

This confirms that the native_queued_spin_lock_slowpath bottleneck is
negligible in the patch-series version. The throughput is ~15x higher:

urezki@pc638:~$ time sudo ./test_vmalloc.sh run_test_mask=127 nr_threads=64
Run the test with following parameters: run_test_mask=127 nr_threads=64
Done.
Check the kernel ring buffer to see the summary.

real    24m3.305s
user    0m0.361s
sys     0m0.013s
urezki@pc638:~$

urezki@pc638:~$ time sudo ./test_vmalloc.sh run_test_mask=127 nr_threads=64
Run the test with following parameters: run_test_mask=127 nr_threads=64
Done.
Check the kernel ring buffer to see the summary.

real    1m28.382s
user    0m0.014s
sys     0m0.026s
urezki@pc638:~$

Signed-off-by: Uladzislau Rezki (Sony)
Reviewed-by: Baoquan He
---
 mm/vmalloc.c | 26 ++++++++++++++++++++++++++
 1 file changed, 26 insertions(+)

diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index 9cce012aecdb..08990f630c21 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -796,6 +796,9 @@ struct vmap_node {
 	atomic_t fill_in_progress;
 };
 
+#define MAX_NODES	U8_MAX
+#define MAX_NODE_SIZE	SZ_4M
+
 static struct vmap_node *nodes, snode;
 static __read_mostly unsigned int nr_nodes = 1;
 static __read_mostly unsigned int node_size = 1;
@@ -4803,11 +4806,24 @@ static void vmap_init_free_space(void)
 	}
 }
 
+static unsigned int calculate_nr_nodes(void)
+{
+	unsigned int nr_cpus;
+
+	nr_cpus = num_present_cpus();
+	if (nr_cpus <= 1)
+		nr_cpus = num_possible_cpus();
+
+	/* Density factor. Two users per a node. */
+	return clamp_t(unsigned int, nr_cpus >> 1, 1, MAX_NODES);
+}
+
 static void vmap_init_nodes(void)
 {
 	struct vmap_node *vn;
 	int i;
 
+	nr_nodes = calculate_nr_nodes();
 	nodes = &snode;
 
 	if (nr_nodes > 1) {
@@ -4830,6 +4846,16 @@ static void vmap_init_nodes(void)
 		INIT_LIST_HEAD(&vn->free.head);
 		spin_lock_init(&vn->free.lock);
 	}
+
+	/*
+	 * Scale a node size to number of CPUs. Each power of two
+	 * value doubles a node size. A high-threshold limit is set
+	 * to 4M.
+	 */
+#if BITS_PER_LONG == 64
+	if (nr_nodes > 1)
+		node_size = min(SZ_64K << fls(num_possible_cpus()), SZ_4M);
+#endif
 }
 
 void __init vmalloc_init(void)