From patchwork Tue Jan 2 18:46:23 2024
X-Patchwork-Submitter: Uladzislau Rezki
X-Patchwork-Id: 13509273
From: "Uladzislau Rezki (Sony)"
To: linux-mm@kvack.org, Andrew Morton
Cc: LKML, Baoquan He, Lorenzo Stoakes, Christoph Hellwig, Matthew Wilcox,
    "Liam R. Howlett", Dave Chinner, "Paul E. McKenney", Joel Fernandes,
    Uladzislau Rezki, Oleksiy Avramchenko, Christoph Hellwig
Subject: [PATCH v3 01/11] mm: vmalloc: Add va_alloc() helper
Date: Tue, 2 Jan 2024 19:46:23 +0100
Message-Id: <20240102184633.748113-2-urezki@gmail.com>
In-Reply-To: <20240102184633.748113-1-urezki@gmail.com>
References: <20240102184633.748113-1-urezki@gmail.com>
Currently the __alloc_vmap_area() function contains open-coded logic that
finds and adjusts a VA based on the allocation request. Introduce a
va_alloc() helper that only adjusts the found VA. There is no functional
change as a result of this patch.

Reviewed-by: Baoquan He
Reviewed-by: Christoph Hellwig
Reviewed-by: Lorenzo Stoakes
Signed-off-by: Uladzislau Rezki (Sony)
---
 mm/vmalloc.c | 41 ++++++++++++++++++++++++++++-------------
 1 file changed, 28 insertions(+), 13 deletions(-)

diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index d12a17fc0c17..739401a9eafc 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -1481,6 +1481,32 @@ adjust_va_to_fit_type(struct rb_root *root, struct list_head *head,
 	return 0;
 }
 
+static unsigned long
+va_alloc(struct vmap_area *va,
+		struct rb_root *root, struct list_head *head,
+		unsigned long size, unsigned long align,
+		unsigned long vstart, unsigned long vend)
+{
+	unsigned long nva_start_addr;
+	int ret;
+
+	if (va->va_start > vstart)
+		nva_start_addr = ALIGN(va->va_start, align);
+	else
+		nva_start_addr = ALIGN(vstart, align);
+
+	/* Check the "vend" restriction. */
+	if (nva_start_addr + size > vend)
+		return vend;
+
+	/* Update the free vmap_area. */
+	ret = adjust_va_to_fit_type(root, head, va, nva_start_addr, size);
+	if (WARN_ON_ONCE(ret))
+		return vend;
+
+	return nva_start_addr;
+}
+
 /*
  * Returns a start address of the newly allocated area, if success.
  * Otherwise a vend is returned that indicates failure.
  */
@@ -1493,7 +1519,6 @@ __alloc_vmap_area(struct rb_root *root, struct list_head *head,
 	bool adjust_search_size = true;
 	unsigned long nva_start_addr;
 	struct vmap_area *va;
-	int ret;
 
 	/*
 	 * Do not adjust when:
@@ -1511,18 +1536,8 @@ __alloc_vmap_area(struct rb_root *root, struct list_head *head,
 	if (unlikely(!va))
 		return vend;
 
-	if (va->va_start > vstart)
-		nva_start_addr = ALIGN(va->va_start, align);
-	else
-		nva_start_addr = ALIGN(vstart, align);
-
-	/* Check the "vend" restriction. */
-	if (nva_start_addr + size > vend)
-		return vend;
-
-	/* Update the free vmap_area. */
-	ret = adjust_va_to_fit_type(root, head, va, nva_start_addr, size);
-	if (WARN_ON_ONCE(ret))
+	nva_start_addr = va_alloc(va, root, head, size, align, vstart, vend);
+	if (nva_start_addr == vend)
 		return vend;
 
 #if DEBUG_AUGMENT_LOWEST_MATCH_CHECK
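The address math va_alloc() performs can be checked in isolation. The
following is a stand-alone user-space sketch, not kernel code; the concrete
numbers are made up purely to illustrate the align-then-bounds-check steps of
the helper above.

#include <stdio.h>

#define ALIGN(x, a)	(((x) + ((a) - 1)) & ~((a) - 1))

int main(void)
{
	/* Hypothetical free block and allocation request, for illustration only. */
	unsigned long va_start = 0x100123;	/* start of the found free area */
	unsigned long vstart   = 0x100000;	/* lower bound of the request   */
	unsigned long vend     = 0x200000;	/* upper bound of the request   */
	unsigned long align    = 0x1000;	/* requested alignment          */
	unsigned long size     = 0x2000;	/* requested size               */
	unsigned long nva_start_addr;

	/* Same decision va_alloc() makes: align up from whichever is higher. */
	if (va_start > vstart)
		nva_start_addr = ALIGN(va_start, align);
	else
		nva_start_addr = ALIGN(vstart, align);

	/* The "vend" restriction: returning vend signals failure. */
	if (nva_start_addr + size > vend) {
		printf("request does not fit, would return vend\n");
		return 1;
	}

	printf("allocation would start at 0x%lx\n", nva_start_addr);
	return 0;
}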
From patchwork Tue Jan 2 18:46:24 2024
X-Patchwork-Submitter: Uladzislau Rezki
X-Patchwork-Id: 13509272
From: "Uladzislau Rezki (Sony)"
To: linux-mm@kvack.org, Andrew Morton
Cc: LKML, Baoquan He, Lorenzo Stoakes, Christoph Hellwig, Matthew Wilcox,
    "Liam R. Howlett", Dave Chinner, "Paul E. McKenney", Joel Fernandes,
    Uladzislau Rezki, Oleksiy Avramchenko, Christoph Hellwig
Subject: [PATCH v3 02/11] mm: vmalloc: Rename adjust_va_to_fit_type() function
Date: Tue, 2 Jan 2024 19:46:24 +0100
Message-Id: <20240102184633.748113-3-urezki@gmail.com>
In-Reply-To: <20240102184633.748113-1-urezki@gmail.com>
References: <20240102184633.748113-1-urezki@gmail.com>

This patch renames the adjust_va_to_fit_type() function to va_clip(), which
is shorter and more expressive. There is no functional change as a result of
this patch.

Reviewed-by: Baoquan He
Reviewed-by: Christoph Hellwig
Reviewed-by: Lorenzo Stoakes
Signed-off-by: Uladzislau Rezki (Sony)
---
 mm/vmalloc.c | 13 ++++++-------
 1 file changed, 6 insertions(+), 7 deletions(-)

diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index 739401a9eafc..10f289e86512 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -1382,9 +1382,9 @@ classify_va_fit_type(struct vmap_area *va,
 }
 
 static __always_inline int
-adjust_va_to_fit_type(struct rb_root *root, struct list_head *head,
-		struct vmap_area *va, unsigned long nva_start_addr,
-		unsigned long size)
+va_clip(struct rb_root *root, struct list_head *head,
+		struct vmap_area *va, unsigned long nva_start_addr,
+		unsigned long size)
 {
 	struct vmap_area *lva = NULL;
 	enum fit_type type = classify_va_fit_type(va, nva_start_addr, size);
@@ -1500,7 +1500,7 @@ va_alloc(struct vmap_area *va,
 		return vend;
 
 	/* Update the free vmap_area. */
-	ret = adjust_va_to_fit_type(root, head, va, nva_start_addr, size);
+	ret = va_clip(root, head, va, nva_start_addr, size);
 	if (WARN_ON_ONCE(ret))
 		return vend;
 
@@ -4155,9 +4155,8 @@ struct vm_struct **pcpu_get_vm_areas(const unsigned long *offsets,
 			/* It is a BUG(), but trigger recovery instead. */
 			goto recovery;
 
-		ret = adjust_va_to_fit_type(&free_vmap_area_root,
-				&free_vmap_area_list,
-				va, start, size);
+		ret = va_clip(&free_vmap_area_root,
+				&free_vmap_area_list, va, start, size);
 		if (WARN_ON_ONCE(unlikely(ret)))
 			/* It is a BUG(), but trigger recovery instead. */
 			goto recovery;

From patchwork Tue Jan 2 18:46:25 2024
X-Patchwork-Submitter: Uladzislau Rezki
X-Patchwork-Id: 13509274
From: "Uladzislau Rezki (Sony)"
To: linux-mm@kvack.org, Andrew Morton
Cc: LKML, Baoquan He, Lorenzo Stoakes, Christoph Hellwig, Matthew Wilcox,
    "Liam R. Howlett", Dave Chinner, "Paul E. McKenney", Joel Fernandes,
    Uladzislau Rezki, Oleksiy Avramchenko, Christoph Hellwig
Subject: [PATCH v3 03/11] mm: vmalloc: Move vmap_init_free_space() down in vmalloc.c
Date: Tue, 2 Jan 2024 19:46:25 +0100
Message-Id: <20240102184633.748113-4-urezki@gmail.com>
In-Reply-To: <20240102184633.748113-1-urezki@gmail.com>
References: <20240102184633.748113-1-urezki@gmail.com>

vmap_init_free_space() is a function that sets up the free vmap space and is
considered part of the initialization phase. Since the main entry point,
vmalloc_init(), has been moved down in vmalloc.c, it makes sense to follow
the same pattern. There is no functional change as a result of this patch.
Reviewed-by: Baoquan He Reviewed-by: Christoph Hellwig Reviewed-by: Lorenzo Stoakes Signed-off-by: Uladzislau Rezki (Sony) --- mm/vmalloc.c | 82 ++++++++++++++++++++++++++-------------------------- 1 file changed, 41 insertions(+), 41 deletions(-) diff --git a/mm/vmalloc.c b/mm/vmalloc.c index 10f289e86512..06bd843d18ae 100644 --- a/mm/vmalloc.c +++ b/mm/vmalloc.c @@ -2512,47 +2512,6 @@ void __init vm_area_register_early(struct vm_struct *vm, size_t align) kasan_populate_early_vm_area_shadow(vm->addr, vm->size); } -static void vmap_init_free_space(void) -{ - unsigned long vmap_start = 1; - const unsigned long vmap_end = ULONG_MAX; - struct vmap_area *busy, *free; - - /* - * B F B B B F - * -|-----|.....|-----|-----|-----|.....|- - * | The KVA space | - * |<--------------------------------->| - */ - list_for_each_entry(busy, &vmap_area_list, list) { - if (busy->va_start - vmap_start > 0) { - free = kmem_cache_zalloc(vmap_area_cachep, GFP_NOWAIT); - if (!WARN_ON_ONCE(!free)) { - free->va_start = vmap_start; - free->va_end = busy->va_start; - - insert_vmap_area_augment(free, NULL, - &free_vmap_area_root, - &free_vmap_area_list); - } - } - - vmap_start = busy->va_end; - } - - if (vmap_end - vmap_start > 0) { - free = kmem_cache_zalloc(vmap_area_cachep, GFP_NOWAIT); - if (!WARN_ON_ONCE(!free)) { - free->va_start = vmap_start; - free->va_end = vmap_end; - - insert_vmap_area_augment(free, NULL, - &free_vmap_area_root, - &free_vmap_area_list); - } - } -} - static inline void setup_vmalloc_vm_locked(struct vm_struct *vm, struct vmap_area *va, unsigned long flags, const void *caller) { @@ -4465,6 +4424,47 @@ module_init(proc_vmalloc_init); #endif +static void vmap_init_free_space(void) +{ + unsigned long vmap_start = 1; + const unsigned long vmap_end = ULONG_MAX; + struct vmap_area *busy, *free; + + /* + * B F B B B F + * -|-----|.....|-----|-----|-----|.....|- + * | The KVA space | + * |<--------------------------------->| + */ + list_for_each_entry(busy, &vmap_area_list, list) { + if (busy->va_start - vmap_start > 0) { + free = kmem_cache_zalloc(vmap_area_cachep, GFP_NOWAIT); + if (!WARN_ON_ONCE(!free)) { + free->va_start = vmap_start; + free->va_end = busy->va_start; + + insert_vmap_area_augment(free, NULL, + &free_vmap_area_root, + &free_vmap_area_list); + } + } + + vmap_start = busy->va_end; + } + + if (vmap_end - vmap_start > 0) { + free = kmem_cache_zalloc(vmap_area_cachep, GFP_NOWAIT); + if (!WARN_ON_ONCE(!free)) { + free->va_start = vmap_start; + free->va_end = vmap_end; + + insert_vmap_area_augment(free, NULL, + &free_vmap_area_root, + &free_vmap_area_list); + } + } +} + void __init vmalloc_init(void) { struct vmap_area *va; From patchwork Tue Jan 2 18:46:26 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Uladzislau Rezki X-Patchwork-Id: 13509275 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by smtp.lore.kernel.org (Postfix) with ESMTP id F35C7C47074 for ; Tue, 2 Jan 2024 18:46:48 +0000 (UTC) Received: by kanga.kvack.org (Postfix) id 865DE6B01FA; Tue, 2 Jan 2024 13:46:44 -0500 (EST) Received: by kanga.kvack.org (Postfix, from userid 40) id 8180E6B01FE; Tue, 2 Jan 2024 13:46:44 -0500 (EST) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 4911A6B01FF; Tue, 2 Jan 2024 13:46:44 -0500 (EST) X-Delivered-To: linux-mm@kvack.org Received: 
From: "Uladzislau Rezki (Sony)"
To: linux-mm@kvack.org, Andrew Morton
Cc: LKML, Baoquan He, Lorenzo Stoakes, Christoph Hellwig, Matthew Wilcox,
    "Liam R. Howlett", Dave Chinner, "Paul E. McKenney", Joel Fernandes,
    Uladzislau Rezki, Oleksiy Avramchenko
Subject: [PATCH v3 04/11] mm: vmalloc: Remove global vmap_area_root rb-tree
Date: Tue, 2 Jan 2024 19:46:26 +0100
Message-Id: <20240102184633.748113-5-urezki@gmail.com>
In-Reply-To: <20240102184633.748113-1-urezki@gmail.com>
References: <20240102184633.748113-1-urezki@gmail.com>

Store allocated objects in separate nodes. A va->va_start address is
converted into the correct node where the VA should be placed and reside.
The addr_to_node() function does this address conversion to determine the
node that contains a VA. Such an approach balances VAs across the nodes,
and as a result access becomes scalable. The number of nodes in a system
depends on the number of CPUs.

Please note:

1. As of now allocated VAs are bound to node 0, so this patch does not
   change the current behaviour;

2. The global vmap_area_lock and vmap_area_root are removed, as there is
   no need for them anymore. The vmap_area_list is still kept and is
   _empty_; it is exported for kexec only;

3. The vmallocinfo output and vread() have to be reworked to be able to
   handle multiple nodes.
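A minimal stand-alone sketch of the node-lookup scheme described above
(user-space C, illustrative sizes only): an address is mapped to its home
node, and a lookup that misses there walks the remaining nodes once, wrapping
around, which covers the rare case of a VA spanning a node boundary as noted
in the patch.

#include <stdio.h>

/* Illustrative sizes; in the patch these depend on the number of CPUs. */
#define NR_NODES	4UL
#define ZONE_SIZE	0x100000UL	/* address space handled per node */

static unsigned int addr_to_node_id(unsigned long addr)
{
	return (addr / ZONE_SIZE) % NR_NODES;
}

/* Stub lookup: pretend only node 2 owns the area we are looking for. */
static int node_contains(unsigned int node, unsigned long addr)
{
	(void)addr;
	return node == 2;
}

static int find_node_of(unsigned long addr)
{
	unsigned int i, j;

	/* Start at the home node, then wrap around all others once. */
	i = j = addr_to_node_id(addr);
	do {
		if (node_contains(i, addr))
			return i;
	} while ((i = (i + 1) % NR_NODES) != j);

	return -1;
}

int main(void)
{
	unsigned long addr = 0x340000;	/* hypothetical address */

	printf("home node: %u\n", addr_to_node_id(addr));
	printf("found in node: %d\n", find_node_of(addr));
	return 0;
}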
Reviewed-by: Baoquan He Signed-off-by: Uladzislau Rezki (Sony) Reviewed-by: Lorenzo Stoakes --- mm/vmalloc.c | 240 +++++++++++++++++++++++++++++++++++++-------------- 1 file changed, 173 insertions(+), 67 deletions(-) diff --git a/mm/vmalloc.c b/mm/vmalloc.c index 06bd843d18ae..786ecb18ae22 100644 --- a/mm/vmalloc.c +++ b/mm/vmalloc.c @@ -728,11 +728,9 @@ EXPORT_SYMBOL(vmalloc_to_pfn); #define DEBUG_AUGMENT_LOWEST_MATCH_CHECK 0 -static DEFINE_SPINLOCK(vmap_area_lock); static DEFINE_SPINLOCK(free_vmap_area_lock); /* Export for kexec only */ LIST_HEAD(vmap_area_list); -static struct rb_root vmap_area_root = RB_ROOT; static bool vmap_initialized __read_mostly; static struct rb_root purge_vmap_area_root = RB_ROOT; @@ -772,6 +770,38 @@ static struct rb_root free_vmap_area_root = RB_ROOT; */ static DEFINE_PER_CPU(struct vmap_area *, ne_fit_preload_node); +/* + * An effective vmap-node logic. Users make use of nodes instead + * of a global heap. It allows to balance an access and mitigate + * contention. + */ +struct rb_list { + struct rb_root root; + struct list_head head; + spinlock_t lock; +}; + +static struct vmap_node { + /* Bookkeeping data of this node. */ + struct rb_list busy; +} single; + +static struct vmap_node *vmap_nodes = &single; +static __read_mostly unsigned int nr_vmap_nodes = 1; +static __read_mostly unsigned int vmap_zone_size = 1; + +static inline unsigned int +addr_to_node_id(unsigned long addr) +{ + return (addr / vmap_zone_size) % nr_vmap_nodes; +} + +static inline struct vmap_node * +addr_to_node(unsigned long addr) +{ + return &vmap_nodes[addr_to_node_id(addr)]; +} + static __always_inline unsigned long va_size(struct vmap_area *va) { @@ -803,10 +833,11 @@ unsigned long vmalloc_nr_pages(void) } /* Look up the first VA which satisfies addr < va_end, NULL if none. */ -static struct vmap_area *find_vmap_area_exceed_addr(unsigned long addr) +static struct vmap_area * +find_vmap_area_exceed_addr(unsigned long addr, struct rb_root *root) { struct vmap_area *va = NULL; - struct rb_node *n = vmap_area_root.rb_node; + struct rb_node *n = root->rb_node; addr = (unsigned long)kasan_reset_tag((void *)addr); @@ -1552,12 +1583,14 @@ __alloc_vmap_area(struct rb_root *root, struct list_head *head, */ static void free_vmap_area(struct vmap_area *va) { + struct vmap_node *vn = addr_to_node(va->va_start); + /* * Remove from the busy tree/list. */ - spin_lock(&vmap_area_lock); - unlink_va(va, &vmap_area_root); - spin_unlock(&vmap_area_lock); + spin_lock(&vn->busy.lock); + unlink_va(va, &vn->busy.root); + spin_unlock(&vn->busy.lock); /* * Insert/Merge it back to the free tree/list. 
@@ -1600,6 +1633,7 @@ static struct vmap_area *alloc_vmap_area(unsigned long size, int node, gfp_t gfp_mask, unsigned long va_flags) { + struct vmap_node *vn; struct vmap_area *va; unsigned long freed; unsigned long addr; @@ -1645,9 +1679,11 @@ static struct vmap_area *alloc_vmap_area(unsigned long size, va->vm = NULL; va->flags = va_flags; - spin_lock(&vmap_area_lock); - insert_vmap_area(va, &vmap_area_root, &vmap_area_list); - spin_unlock(&vmap_area_lock); + vn = addr_to_node(va->va_start); + + spin_lock(&vn->busy.lock); + insert_vmap_area(va, &vn->busy.root, &vn->busy.head); + spin_unlock(&vn->busy.lock); BUG_ON(!IS_ALIGNED(va->va_start, align)); BUG_ON(va->va_start < vstart); @@ -1871,26 +1907,61 @@ static void free_unmap_vmap_area(struct vmap_area *va) struct vmap_area *find_vmap_area(unsigned long addr) { + struct vmap_node *vn; struct vmap_area *va; + int i, j; - spin_lock(&vmap_area_lock); - va = __find_vmap_area(addr, &vmap_area_root); - spin_unlock(&vmap_area_lock); + /* + * An addr_to_node_id(addr) converts an address to a node index + * where a VA is located. If VA spans several zones and passed + * addr is not the same as va->va_start, what is not common, we + * may need to scan an extra nodes. See an example: + * + * <--va--> + * -|-----|-----|-----|-----|- + * 1 2 0 1 + * + * VA resides in node 1 whereas it spans 1 and 2. If passed + * addr is within a second node we should do extra work. We + * should mention that it is rare and is a corner case from + * the other hand it has to be covered. + */ + i = j = addr_to_node_id(addr); + do { + vn = &vmap_nodes[i]; - return va; + spin_lock(&vn->busy.lock); + va = __find_vmap_area(addr, &vn->busy.root); + spin_unlock(&vn->busy.lock); + + if (va) + return va; + } while ((i = (i + 1) % nr_vmap_nodes) != j); + + return NULL; } static struct vmap_area *find_unlink_vmap_area(unsigned long addr) { + struct vmap_node *vn; struct vmap_area *va; + int i, j; - spin_lock(&vmap_area_lock); - va = __find_vmap_area(addr, &vmap_area_root); - if (va) - unlink_va(va, &vmap_area_root); - spin_unlock(&vmap_area_lock); + i = j = addr_to_node_id(addr); + do { + vn = &vmap_nodes[i]; - return va; + spin_lock(&vn->busy.lock); + va = __find_vmap_area(addr, &vn->busy.root); + if (va) + unlink_va(va, &vn->busy.root); + spin_unlock(&vn->busy.lock); + + if (va) + return va; + } while ((i = (i + 1) % nr_vmap_nodes) != j); + + return NULL; } /*** Per cpu kva allocator ***/ @@ -2092,6 +2163,7 @@ static void *new_vmap_block(unsigned int order, gfp_t gfp_mask) static void free_vmap_block(struct vmap_block *vb) { + struct vmap_node *vn; struct vmap_block *tmp; struct xarray *xa; @@ -2099,9 +2171,10 @@ static void free_vmap_block(struct vmap_block *vb) tmp = xa_erase(xa, addr_to_vb_idx(vb->va->va_start)); BUG_ON(tmp != vb); - spin_lock(&vmap_area_lock); - unlink_va(vb->va, &vmap_area_root); - spin_unlock(&vmap_area_lock); + vn = addr_to_node(vb->va->va_start); + spin_lock(&vn->busy.lock); + unlink_va(vb->va, &vn->busy.root); + spin_unlock(&vn->busy.lock); free_vmap_area_noflush(vb->va); kfree_rcu(vb, rcu_head); @@ -2525,9 +2598,11 @@ static inline void setup_vmalloc_vm_locked(struct vm_struct *vm, static void setup_vmalloc_vm(struct vm_struct *vm, struct vmap_area *va, unsigned long flags, const void *caller) { - spin_lock(&vmap_area_lock); + struct vmap_node *vn = addr_to_node(va->va_start); + + spin_lock(&vn->busy.lock); setup_vmalloc_vm_locked(vm, va, flags, caller); - spin_unlock(&vmap_area_lock); + spin_unlock(&vn->busy.lock); } static void 
clear_vm_uninitialized_flag(struct vm_struct *vm) @@ -3715,6 +3790,7 @@ static size_t vmap_ram_vread_iter(struct iov_iter *iter, const char *addr, */ long vread_iter(struct iov_iter *iter, const char *addr, size_t count) { + struct vmap_node *vn; struct vmap_area *va; struct vm_struct *vm; char *vaddr; @@ -3728,8 +3804,11 @@ long vread_iter(struct iov_iter *iter, const char *addr, size_t count) remains = count; - spin_lock(&vmap_area_lock); - va = find_vmap_area_exceed_addr((unsigned long)addr); + /* Hooked to node_0 so far. */ + vn = addr_to_node(0); + spin_lock(&vn->busy.lock); + + va = find_vmap_area_exceed_addr((unsigned long)addr, &vn->busy.root); if (!va) goto finished_zero; @@ -3737,7 +3816,7 @@ long vread_iter(struct iov_iter *iter, const char *addr, size_t count) if ((unsigned long)addr + remains <= va->va_start) goto finished_zero; - list_for_each_entry_from(va, &vmap_area_list, list) { + list_for_each_entry_from(va, &vn->busy.head, list) { size_t copied; if (remains == 0) @@ -3796,12 +3875,12 @@ long vread_iter(struct iov_iter *iter, const char *addr, size_t count) } finished_zero: - spin_unlock(&vmap_area_lock); + spin_unlock(&vn->busy.lock); /* zero-fill memory holes */ return count - remains + zero_iter(iter, remains); finished: /* Nothing remains, or We couldn't copy/zero everything. */ - spin_unlock(&vmap_area_lock); + spin_unlock(&vn->busy.lock); return count - remains; } @@ -4135,14 +4214,15 @@ struct vm_struct **pcpu_get_vm_areas(const unsigned long *offsets, } /* insert all vm's */ - spin_lock(&vmap_area_lock); for (area = 0; area < nr_vms; area++) { - insert_vmap_area(vas[area], &vmap_area_root, &vmap_area_list); + struct vmap_node *vn = addr_to_node(vas[area]->va_start); + spin_lock(&vn->busy.lock); + insert_vmap_area(vas[area], &vn->busy.root, &vn->busy.head); setup_vmalloc_vm_locked(vms[area], vas[area], VM_ALLOC, pcpu_get_vm_areas); + spin_unlock(&vn->busy.lock); } - spin_unlock(&vmap_area_lock); /* * Mark allocated areas as accessible. 
Do it now as a best-effort @@ -4253,55 +4333,57 @@ bool vmalloc_dump_obj(void *object) { void *objp = (void *)PAGE_ALIGN((unsigned long)object); const void *caller; - struct vm_struct *vm; struct vmap_area *va; + struct vmap_node *vn; unsigned long addr; unsigned int nr_pages; + bool success = false; - if (!spin_trylock(&vmap_area_lock)) - return false; - va = __find_vmap_area((unsigned long)objp, &vmap_area_root); - if (!va) { - spin_unlock(&vmap_area_lock); - return false; - } + vn = addr_to_node((unsigned long)objp); - vm = va->vm; - if (!vm) { - spin_unlock(&vmap_area_lock); - return false; + if (spin_trylock(&vn->busy.lock)) { + va = __find_vmap_area(addr, &vn->busy.root); + + if (va && va->vm) { + addr = (unsigned long)va->vm->addr; + caller = va->vm->caller; + nr_pages = va->vm->nr_pages; + success = true; + } + + spin_unlock(&vn->busy.lock); } - addr = (unsigned long)vm->addr; - caller = vm->caller; - nr_pages = vm->nr_pages; - spin_unlock(&vmap_area_lock); - pr_cont(" %u-page vmalloc region starting at %#lx allocated at %pS\n", - nr_pages, addr, caller); - return true; + + if (success) + pr_cont(" %u-page vmalloc region starting at %#lx allocated at %pS\n", + nr_pages, addr, caller); + + return success; } #endif #ifdef CONFIG_PROC_FS static void *s_start(struct seq_file *m, loff_t *pos) - __acquires(&vmap_purge_lock) - __acquires(&vmap_area_lock) { + struct vmap_node *vn = addr_to_node(0); + mutex_lock(&vmap_purge_lock); - spin_lock(&vmap_area_lock); + spin_lock(&vn->busy.lock); - return seq_list_start(&vmap_area_list, *pos); + return seq_list_start(&vn->busy.head, *pos); } static void *s_next(struct seq_file *m, void *p, loff_t *pos) { - return seq_list_next(p, &vmap_area_list, pos); + struct vmap_node *vn = addr_to_node(0); + return seq_list_next(p, &vn->busy.head, pos); } static void s_stop(struct seq_file *m, void *p) - __releases(&vmap_area_lock) - __releases(&vmap_purge_lock) { - spin_unlock(&vmap_area_lock); + struct vmap_node *vn = addr_to_node(0); + + spin_unlock(&vn->busy.lock); mutex_unlock(&vmap_purge_lock); } @@ -4344,9 +4426,11 @@ static void show_purge_info(struct seq_file *m) static int s_show(struct seq_file *m, void *p) { + struct vmap_node *vn; struct vmap_area *va; struct vm_struct *v; + vn = addr_to_node(0); va = list_entry(p, struct vmap_area, list); if (!va->vm) { @@ -4397,7 +4481,7 @@ static int s_show(struct seq_file *m, void *p) * As a final step, dump "unpurged" areas. 
*/ final: - if (list_is_last(&va->list, &vmap_area_list)) + if (list_is_last(&va->list, &vn->busy.head)) show_purge_info(m); return 0; @@ -4428,7 +4512,8 @@ static void vmap_init_free_space(void) { unsigned long vmap_start = 1; const unsigned long vmap_end = ULONG_MAX; - struct vmap_area *busy, *free; + struct vmap_area *free; + struct vm_struct *busy; /* * B F B B B F @@ -4436,12 +4521,12 @@ static void vmap_init_free_space(void) * | The KVA space | * |<--------------------------------->| */ - list_for_each_entry(busy, &vmap_area_list, list) { - if (busy->va_start - vmap_start > 0) { + for (busy = vmlist; busy; busy = busy->next) { + if ((unsigned long) busy->addr - vmap_start > 0) { free = kmem_cache_zalloc(vmap_area_cachep, GFP_NOWAIT); if (!WARN_ON_ONCE(!free)) { free->va_start = vmap_start; - free->va_end = busy->va_start; + free->va_end = (unsigned long) busy->addr; insert_vmap_area_augment(free, NULL, &free_vmap_area_root, @@ -4449,7 +4534,7 @@ static void vmap_init_free_space(void) } } - vmap_start = busy->va_end; + vmap_start = (unsigned long) busy->addr + busy->size; } if (vmap_end - vmap_start > 0) { @@ -4465,9 +4550,23 @@ static void vmap_init_free_space(void) } } +static void vmap_init_nodes(void) +{ + struct vmap_node *vn; + int i; + + for (i = 0; i < nr_vmap_nodes; i++) { + vn = &vmap_nodes[i]; + vn->busy.root = RB_ROOT; + INIT_LIST_HEAD(&vn->busy.head); + spin_lock_init(&vn->busy.lock); + } +} + void __init vmalloc_init(void) { struct vmap_area *va; + struct vmap_node *vn; struct vm_struct *tmp; int i; @@ -4489,6 +4588,11 @@ void __init vmalloc_init(void) xa_init(&vbq->vmap_blocks); } + /* + * Setup nodes before importing vmlist. + */ + vmap_init_nodes(); + /* Import existing vmlist entries. */ for (tmp = vmlist; tmp; tmp = tmp->next) { va = kmem_cache_zalloc(vmap_area_cachep, GFP_NOWAIT); @@ -4498,7 +4602,9 @@ void __init vmalloc_init(void) va->va_start = (unsigned long)tmp->addr; va->va_end = va->va_start + tmp->size; va->vm = tmp; - insert_vmap_area(va, &vmap_area_root, &vmap_area_list); + + vn = addr_to_node(va->va_start); + insert_vmap_area(va, &vn->busy.root, &vn->busy.head); } /* From patchwork Tue Jan 2 18:46:27 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Uladzislau Rezki X-Patchwork-Id: 13509276 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by smtp.lore.kernel.org (Postfix) with ESMTP id 818E9C47073 for ; Tue, 2 Jan 2024 18:46:51 +0000 (UTC) Received: by kanga.kvack.org (Postfix) id 866B76B007D; Tue, 2 Jan 2024 13:46:45 -0500 (EST) Received: by kanga.kvack.org (Postfix, from userid 40) id 7ECDC6B0085; Tue, 2 Jan 2024 13:46:45 -0500 (EST) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 68DDE6B0089; Tue, 2 Jan 2024 13:46:45 -0500 (EST) X-Delivered-To: linux-mm@kvack.org Received: from relay.hostedemail.com (smtprelay0013.hostedemail.com [216.40.44.13]) by kanga.kvack.org (Postfix) with ESMTP id 580346B007D for ; Tue, 2 Jan 2024 13:46:45 -0500 (EST) Received: from smtpin07.hostedemail.com (a10.router.float.18 [10.200.18.1]) by unirelay07.hostedemail.com (Postfix) with ESMTP id 35159160175 for ; Tue, 2 Jan 2024 18:46:45 +0000 (UTC) X-FDA: 81635252370.07.9CBADC1 Received: from mail-lf1-f49.google.com (mail-lf1-f49.google.com [209.85.167.49]) by imf21.hostedemail.com (Postfix) with ESMTP id 4D4E21C000A for 
From: "Uladzislau Rezki (Sony)"
To: linux-mm@kvack.org, Andrew Morton
Cc: LKML, Baoquan He, Lorenzo Stoakes, Christoph Hellwig, Matthew Wilcox,
    "Liam R. Howlett", Dave Chinner, "Paul E. McKenney", Joel Fernandes,
    Uladzislau Rezki, Oleksiy Avramchenko
Subject: [PATCH v3 05/11] mm/vmalloc: remove vmap_area_list
Date: Tue, 2 Jan 2024 19:46:27 +0100
Message-Id: <20240102184633.748113-6-urezki@gmail.com>
In-Reply-To: <20240102184633.748113-1-urezki@gmail.com>
References: <20240102184633.748113-1-urezki@gmail.com>

From: Baoquan He

Earlier, vmap_area_list was exported to vmcoreinfo so that makedumpfile could
get the base address of the vmalloc area. Now vmap_area_list is empty, so
export VMALLOC_START to vmcoreinfo instead, and remove vmap_area_list.
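The vmcoreinfo entry added by this patch is a plain text line of the form
NUMBER(VMALLOC_START)=0x.... A consumer only needs simple string parsing to
recover the value; the snippet below is an illustrative user-space sketch and
is not taken from makedumpfile.

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Parse a "NUMBER(NAME)=0x..." vmcoreinfo line for a given NAME. */
static int vmcoreinfo_number(const char *line, const char *name,
			     unsigned long *val)
{
	char key[64];

	snprintf(key, sizeof(key), "NUMBER(%s)=", name);
	if (strncmp(line, key, strlen(key)) != 0)
		return -1;

	*val = strtoul(line + strlen(key), NULL, 0);
	return 0;
}

int main(void)
{
	/* Example line, in the format emitted by vmcoreinfo_append_str(). */
	const char *line = "NUMBER(VMALLOC_START)=0xffff800008000000";
	unsigned long vmalloc_start;

	if (vmcoreinfo_number(line, "VMALLOC_START", &vmalloc_start) == 0)
		printf("VMALLOC_START = 0x%lx\n", vmalloc_start);

	return 0;
}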
Signed-off-by: Baoquan He Signed-off-by: Uladzislau Rezki (Sony) Acked-by: Lorenzo Stoakes --- Documentation/admin-guide/kdump/vmcoreinfo.rst | 8 ++++---- arch/arm64/kernel/crash_core.c | 1 - arch/riscv/kernel/crash_core.c | 1 - include/linux/vmalloc.h | 1 - kernel/crash_core.c | 4 +--- kernel/kallsyms_selftest.c | 1 - mm/nommu.c | 2 -- mm/vmalloc.c | 2 -- 8 files changed, 5 insertions(+), 15 deletions(-) diff --git a/Documentation/admin-guide/kdump/vmcoreinfo.rst b/Documentation/admin-guide/kdump/vmcoreinfo.rst index 78e4d2e7ba14..df54fbeaaa16 100644 --- a/Documentation/admin-guide/kdump/vmcoreinfo.rst +++ b/Documentation/admin-guide/kdump/vmcoreinfo.rst @@ -65,11 +65,11 @@ Defines the beginning of the text section. In general, _stext indicates the kernel start address. Used to convert a virtual address from the direct kernel map to a physical address. -vmap_area_list --------------- +VMALLOC_START +------------- -Stores the virtual area list. makedumpfile gets the vmalloc start value -from this variable and its value is necessary for vmalloc translation. +Stores the base address of vmalloc area. makedumpfile gets this value +since is necessary for vmalloc translation. mem_map ------- diff --git a/arch/arm64/kernel/crash_core.c b/arch/arm64/kernel/crash_core.c index 66cde752cd74..2a24199a9b81 100644 --- a/arch/arm64/kernel/crash_core.c +++ b/arch/arm64/kernel/crash_core.c @@ -23,7 +23,6 @@ void arch_crash_save_vmcoreinfo(void) /* Please note VMCOREINFO_NUMBER() uses "%d", not "%x" */ vmcoreinfo_append_str("NUMBER(MODULES_VADDR)=0x%lx\n", MODULES_VADDR); vmcoreinfo_append_str("NUMBER(MODULES_END)=0x%lx\n", MODULES_END); - vmcoreinfo_append_str("NUMBER(VMALLOC_START)=0x%lx\n", VMALLOC_START); vmcoreinfo_append_str("NUMBER(VMALLOC_END)=0x%lx\n", VMALLOC_END); vmcoreinfo_append_str("NUMBER(VMEMMAP_START)=0x%lx\n", VMEMMAP_START); vmcoreinfo_append_str("NUMBER(VMEMMAP_END)=0x%lx\n", VMEMMAP_END); diff --git a/arch/riscv/kernel/crash_core.c b/arch/riscv/kernel/crash_core.c index 8706736fd4e2..d18d529fd9b9 100644 --- a/arch/riscv/kernel/crash_core.c +++ b/arch/riscv/kernel/crash_core.c @@ -8,7 +8,6 @@ void arch_crash_save_vmcoreinfo(void) VMCOREINFO_NUMBER(phys_ram_base); vmcoreinfo_append_str("NUMBER(PAGE_OFFSET)=0x%lx\n", PAGE_OFFSET); - vmcoreinfo_append_str("NUMBER(VMALLOC_START)=0x%lx\n", VMALLOC_START); vmcoreinfo_append_str("NUMBER(VMALLOC_END)=0x%lx\n", VMALLOC_END); #ifdef CONFIG_MMU VMCOREINFO_NUMBER(VA_BITS); diff --git a/include/linux/vmalloc.h b/include/linux/vmalloc.h index c720be70c8dd..91810b4e9510 100644 --- a/include/linux/vmalloc.h +++ b/include/linux/vmalloc.h @@ -253,7 +253,6 @@ extern long vread_iter(struct iov_iter *iter, const char *addr, size_t count); /* * Internals. Don't use.. 
*/ -extern struct list_head vmap_area_list; extern __init void vm_area_add_early(struct vm_struct *vm); extern __init void vm_area_register_early(struct vm_struct *vm, size_t align); diff --git a/kernel/crash_core.c b/kernel/crash_core.c index d4313b53837e..b427f4a3b156 100644 --- a/kernel/crash_core.c +++ b/kernel/crash_core.c @@ -759,7 +759,7 @@ static int __init crash_save_vmcoreinfo_init(void) VMCOREINFO_SYMBOL_ARRAY(swapper_pg_dir); #endif VMCOREINFO_SYMBOL(_stext); - VMCOREINFO_SYMBOL(vmap_area_list); + vmcoreinfo_append_str("NUMBER(VMALLOC_START)=0x%lx\n", VMALLOC_START); #ifndef CONFIG_NUMA VMCOREINFO_SYMBOL(mem_map); @@ -800,8 +800,6 @@ static int __init crash_save_vmcoreinfo_init(void) VMCOREINFO_OFFSET(free_area, free_list); VMCOREINFO_OFFSET(list_head, next); VMCOREINFO_OFFSET(list_head, prev); - VMCOREINFO_OFFSET(vmap_area, va_start); - VMCOREINFO_OFFSET(vmap_area, list); VMCOREINFO_LENGTH(zone.free_area, MAX_ORDER + 1); log_buf_vmcoreinfo_setup(); VMCOREINFO_LENGTH(free_area.free_list, MIGRATE_TYPES); diff --git a/kernel/kallsyms_selftest.c b/kernel/kallsyms_selftest.c index b4cac76ea5e9..8a689b4ff4f9 100644 --- a/kernel/kallsyms_selftest.c +++ b/kernel/kallsyms_selftest.c @@ -89,7 +89,6 @@ static struct test_item test_items[] = { ITEM_DATA(kallsyms_test_var_data_static), ITEM_DATA(kallsyms_test_var_bss), ITEM_DATA(kallsyms_test_var_data), - ITEM_DATA(vmap_area_list), #endif }; diff --git a/mm/nommu.c b/mm/nommu.c index b6dc558d3144..5ec8f44e7ce9 100644 --- a/mm/nommu.c +++ b/mm/nommu.c @@ -131,8 +131,6 @@ int follow_pfn(struct vm_area_struct *vma, unsigned long address, } EXPORT_SYMBOL(follow_pfn); -LIST_HEAD(vmap_area_list); - void vfree(const void *addr) { kfree(addr); diff --git a/mm/vmalloc.c b/mm/vmalloc.c index 786ecb18ae22..8c01f2225ef7 100644 --- a/mm/vmalloc.c +++ b/mm/vmalloc.c @@ -729,8 +729,6 @@ EXPORT_SYMBOL(vmalloc_to_pfn); static DEFINE_SPINLOCK(free_vmap_area_lock); -/* Export for kexec only */ -LIST_HEAD(vmap_area_list); static bool vmap_initialized __read_mostly; static struct rb_root purge_vmap_area_root = RB_ROOT; From patchwork Tue Jan 2 18:46:28 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Uladzislau Rezki X-Patchwork-Id: 13509277 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by smtp.lore.kernel.org (Postfix) with ESMTP id 2E454C4707B for ; Tue, 2 Jan 2024 18:46:54 +0000 (UTC) Received: by kanga.kvack.org (Postfix) id 7CB686B0204; Tue, 2 Jan 2024 13:46:46 -0500 (EST) Received: by kanga.kvack.org (Postfix, from userid 40) id 72E946B0206; Tue, 2 Jan 2024 13:46:46 -0500 (EST) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 57CD56B0205; Tue, 2 Jan 2024 13:46:46 -0500 (EST) X-Delivered-To: linux-mm@kvack.org Received: from relay.hostedemail.com (smtprelay0017.hostedemail.com [216.40.44.17]) by kanga.kvack.org (Postfix) with ESMTP id 366BD6B0201 for ; Tue, 2 Jan 2024 13:46:46 -0500 (EST) Received: from smtpin02.hostedemail.com (a10.router.float.18 [10.200.18.1]) by unirelay02.hostedemail.com (Postfix) with ESMTP id EF001120149 for ; Tue, 2 Jan 2024 18:46:45 +0000 (UTC) X-FDA: 81635252370.02.2AB98A5 Received: from mail-lf1-f53.google.com (mail-lf1-f53.google.com [209.85.167.53]) by imf15.hostedemail.com (Postfix) with ESMTP id 12DFEA0020 for ; Tue, 2 Jan 2024 18:46:43 +0000 (UTC) 
Authentication-Results: imf15.hostedemail.com; dkim=pass header.d=gmail.com header.s=20230601 header.b="M/lG4gbD"; spf=pass (imf15.hostedemail.com: domain of urezki@gmail.com designates 209.85.167.53 as permitted sender) smtp.mailfrom=urezki@gmail.com; dmarc=pass (policy=none) header.from=gmail.com ARC-Seal: i=1; s=arc-20220608; d=hostedemail.com; t=1704221204; a=rsa-sha256; cv=none; b=WzsEpv4b9MaBG89k+ixKqJDnYeHGmxdlCPo3jFDBicbdatQhxY5uNXNN8M1K7mfQvfF4Pp QIZtEuAaNVb9XgbeIzt/oCEEy/XXUXH5jMrpMbt47jFuIMB+Xv9udADC3+Va7/rlqmUAPm fCiyFFiWsTw9HyPvTifc+VKM4NHTEB4= ARC-Authentication-Results: i=1; imf15.hostedemail.com; dkim=pass header.d=gmail.com header.s=20230601 header.b="M/lG4gbD"; spf=pass (imf15.hostedemail.com: domain of urezki@gmail.com designates 209.85.167.53 as permitted sender) smtp.mailfrom=urezki@gmail.com; dmarc=pass (policy=none) header.from=gmail.com ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=hostedemail.com; s=arc-20220608; t=1704221204; h=from:from:sender:reply-to:subject:subject:date:date: message-id:message-id:to:to:cc:cc:mime-version:mime-version: content-type:content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references:dkim-signature; bh=Y7TPXqOPL1FAObBiwLOLjtzw+Q8CAhI3CaP8/RlkErk=; b=DaHASxdG3WAoZZmXha26JlLLvY3qsyuKSxImrZEvAjMBFoSNNcpk2PObjbxjbaPhHstYby 9vtMzDOmJnwkVnlRAqZOB5CVaDSQ5lYpfKNL6zw5Ikf+O9ehXJWQBoHaCbHqjAjrZs16ux x51kRhJIYzIj57At4b+RrPiLncPRLyU= Received: by mail-lf1-f53.google.com with SMTP id 2adb3069b0e04-50ea226bda8so574442e87.2 for ; Tue, 02 Jan 2024 10:46:43 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20230601; t=1704221202; x=1704826002; darn=kvack.org; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=Y7TPXqOPL1FAObBiwLOLjtzw+Q8CAhI3CaP8/RlkErk=; b=M/lG4gbDZOIlO+3ZrFkmayfn+mZDzT9WKMNQLdimnaJLHewRcQU1c15Q8ZF4cx2iWP KTWN5vfxhqgWz3V8UaaJZkSdE7ngcwkwcVa6qSXG9EAaiE5xYft0Mo3xEFDO3uHj1zga Rzy9iRJxo1hY9/nlGcJRdwCSPrDJr0T9oPjdj1iDBWCqJGrjWil8Iz2XowPQgGy8Tw4i xqBWh7jxk4yd/qmgrK2gNEFx1JJSDy9Ggtq+/W0kjLoAfF5IUtmnUvHLZVxamsSX1qL/ afFqUENWIryuodd1FDmknnaDfmXk7gXfIupT1cp6jA8eruRTxjz8Qa4A6CQGCnwdjENl E8bg== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1704221202; x=1704826002; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=Y7TPXqOPL1FAObBiwLOLjtzw+Q8CAhI3CaP8/RlkErk=; b=gWkvFoBZluwcJsuvNIq8OCedDLZSjEp4/yLx7hdY3NH3CLbL47nSu3Wm+JXX5vCgD7 W5Z/QPqzoRW+xIrjWNyowzzGmXULN4hCmIBLQdgbUXOe7W1jKqH782dCRJpWv0EAoBGH g+jrHzxa8AptPZHaWKQU6Ucjb57h3rk95Za8bTl9t7Ikr7d9Uo4na9k6eX4Nm9+yqLAc R92YfXUWvKf5nK98XlEsHPDzSahT6Wd6MWOQebJZhkL366pFjjNp7u2Ij0qp/OggRcmn Qj9xnx3i0K0A1rd95At0aDcIwQnONeOQYW6X80R2iuphC94orWwwYM27aSux09svajwe N2Nw== X-Gm-Message-State: AOJu0YzZrUsMiDeii41FdLgJrdQpOb+BE/VHtJIRYaCpBzXkX5Cis1A8 cukaHRexiV7kGjjxGu35IuZu92pkVtGCpw== X-Google-Smtp-Source: AGHT+IEqmNEBRY+67JLEN5/lqBW0cHGCcx7d8OHSSEsbnprrKpdCdH45LWYpuY7EtQcmCYTMyqhKag== X-Received: by 2002:a05:6512:1581:b0:50e:902d:b48 with SMTP id bp1-20020a056512158100b0050e902d0b48mr3325679lfb.64.1704221202399; Tue, 02 Jan 2024 10:46:42 -0800 (PST) Received: from pc638.lan (host-185-121-47-193.sydskane.nu. 
[185.121.47.193]) by smtp.gmail.com with ESMTPSA id q1-20020ac246e1000000b0050e7be886d9sm2592656lfo.56.2024.01.02.10.46.41 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Tue, 02 Jan 2024 10:46:42 -0800 (PST) From: "Uladzislau Rezki (Sony)" To: linux-mm@kvack.org, Andrew Morton Cc: LKML , Baoquan He , Lorenzo Stoakes , Christoph Hellwig , Matthew Wilcox , "Liam R . Howlett" , Dave Chinner , "Paul E . McKenney" , Joel Fernandes , Uladzislau Rezki , Oleksiy Avramchenko Subject: [PATCH v3 06/11] mm: vmalloc: Remove global purge_vmap_area_root rb-tree Date: Tue, 2 Jan 2024 19:46:28 +0100 Message-Id: <20240102184633.748113-7-urezki@gmail.com> X-Mailer: git-send-email 2.39.2 In-Reply-To: <20240102184633.748113-1-urezki@gmail.com> References: <20240102184633.748113-1-urezki@gmail.com> MIME-Version: 1.0 X-Rspamd-Server: rspam08 X-Rspamd-Queue-Id: 12DFEA0020 X-Stat-Signature: s9cjai7za5j84ad41n1fxa9wsxf1nasc X-Rspam-User: X-HE-Tag: 1704221203-668783 X-HE-Meta: U2FsdGVkX1+uZeqHtVpuuDB4xNJYP1cEW9fwZKbsUu76ttQ7ZgmDhK9p1wG0AwSpRpH6I20KQMRFtaq4M/89t5TL6Wh0Pm5c4rZHSq8Q68kwmtmAkSA9wontUrSYGgOV1lcQAJMOd6Ex6ZUiMbg/7USTdGZbRXSXi87m1Zj0eWLoOi2bLtiUu9fhOUieX/Via2mIEvoSRdTFrE828bYXpFjgC0FC0icagqAfh9jzbmejdBhXEFOY9bS5oGIo9Ih5jzTkf7nrirrFjcLFCbX5RtvHMP2KkgdVb2945K9XbQMiSGwmawTchT0ER1nbHaHSxCbLeCQey7OvE7LRKslzyJXLCvQqiiKxfGiEJBbfar1NZjOEK3NKYce6IqSA4lbrlEmWbtSUfoHdvpHxt5ybrzSbPA48uEsY3KX4Vb9bjgYCCT9Tw82zTPzYX0qBukSnjoVAc77qyZVDJG+Kd/XR+j6dkR7oxOmLu5jWkusqFj+3gPgUPvQGtPjvww2mmMTyikK8wIZMg7zXQApwuJ4JXq9a62kpFL8kdcVGTAaaNYFY+RB1HlUL+5QlQrD5o4UywsYThzKyoafW2NMEK1Adm6/sY0C9F76dLoOxZCb+SS65+nqkYXCzps1zvn3r2PQPXbHG5CZxzD3hUADM9Hyj+Fmx8y1hkOYofQVNNeHdeISmZwwVBBw++srZxl3Wu8HaqxeGW/iO3p97kct5HZR2cqOCtSMIGPffZJNmPD6hrUNnaXvJirZ1g7iXDD3z2aJ/qEkcs91f2opqjyBAJaKSEBvAHWjQcmSCbVTit1TsHVBQrWllXNCrDkMAf6kHmprB+ploZnHFMSWNHT2UB7CvaVEmNpJhLAOAe0BwEcHnN6ii98QsK28mDiuz4C30SQKcORpWZoRSo8xTgaSGl+N5HnqvAQdz3jMDrXz9361jFtBpUZqx7mrMOZ1dH/2mQzGiwAFp93ZfSOq39zCZeRe uyGY6wSt +3HlWx5N+zVpUIuj4HMilzLUTL3omN74+F1Omts5asOdh5BjEpcS12QSYwaIMd+35ZiQBSALC713/tSW+N0vPLexLBdQ5HbqZhTN5iZ8tnQQE6PmyexwyXfAQbW/2UUARUyjX6ic9HmaQT3ljD9bNWZGES7UbayOhlrdWSKyM5/+lWnu+8DdvCdNIqwHLTRE45twHmyvP/g5b58KLbJWoaJFIOzighjleyEc/0TnAkHWAmOjzmtWDhFpFAeY+bhzM71sdiQKSdJsyGWWBvvzMw0vm6XkVb6M+4mfiCveyswZ1m5Nf8l7Fm8toc6eV9jcwfg7u+f+/PPvBd4XlBxorE4vr2Wdnj8XCdg0PHFBsKwCGotB/a9o6D2QxHocF1Gkr36NSkisw86Sml5Egx1UtV4oAv0SG7JyP6BBLlYEh0ItP6RVAJFLgipOI+0MhXd8vpvM9PVMQ3xSXUWkUalWqga3ZdjMJu0nuiACQXiUmEB8txAqz4jtx33DcrLaFbQITjtkC X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: List-Subscribe: List-Unsubscribe: Similar to busy VA, lazily-freed area is stored to a node it belongs to. Such approach does not require any global locking primitive, instead an access becomes scalable what mitigates a contention. This patch removes a global purge-lock, global purge-tree and global purge list. 
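To picture the idea, a rough user-space sketch of the sharding scheme, not the kernel code itself: every node carries its own lazily-freed list under its own lock, so frees that hit different nodes never serialize on one global purge lock. The names ending in _sim, the node count and the address hashing are invented for the sketch; the kernel uses spinlocks, an rb-tree plus list_head per node rather than the plain mutex and singly-linked list shown here:

#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>

#define NR_NODES_SIM 4           /* made-up node count for the sketch */

struct va_sim {                   /* stand-in for struct vmap_area */
	unsigned long va_start, va_end;
	struct va_sim *next;
};

struct vmap_node_sim {            /* stand-in for struct vmap_node */
	pthread_mutex_t lazy_lock;    /* the kernel uses a spinlock here */
	struct va_sim *lazy_head;     /* this node's lazily-freed areas */
};

static struct vmap_node_sim nodes_sim[NR_NODES_SIM];

/* Hash an address to its node, in the spirit of addr_to_node(). */
static struct vmap_node_sim *addr_to_node_sim(unsigned long addr)
{
	return &nodes_sim[(addr >> 20) % NR_NODES_SIM];
}

/* Queue a "freed" area on its own node; no global lock is taken. */
static void free_area_noflush_sim(struct va_sim *va)
{
	struct vmap_node_sim *vn = addr_to_node_sim(va->va_start);

	pthread_mutex_lock(&vn->lazy_lock);
	va->next = vn->lazy_head;
	vn->lazy_head = va;
	pthread_mutex_unlock(&vn->lazy_lock);
}

int main(void)
{
	struct va_sim *va = calloc(1, sizeof(*va));
	int i;

	if (!va)
		return 1;

	for (i = 0; i < NR_NODES_SIM; i++)
		pthread_mutex_init(&nodes_sim[i].lazy_lock, NULL);

	va->va_start = 0x300000;
	va->va_end = 0x302000;
	free_area_noflush_sim(va);

	printf("area queued on node %ld\n",
	       (long)(addr_to_node_sim(va->va_start) - nodes_sim));
	return 0;
}

A later purge then only needs to detach one node's list at a time, which is what the per-node purge_list handling in the diff below does.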
Reviewed-by: Baoquan He Signed-off-by: Uladzislau Rezki (Sony) --- mm/vmalloc.c | 135 +++++++++++++++++++++++++++++++-------------------- 1 file changed, 82 insertions(+), 53 deletions(-) diff --git a/mm/vmalloc.c b/mm/vmalloc.c index 8c01f2225ef7..9b2f1b0cac9d 100644 --- a/mm/vmalloc.c +++ b/mm/vmalloc.c @@ -731,10 +731,6 @@ EXPORT_SYMBOL(vmalloc_to_pfn); static DEFINE_SPINLOCK(free_vmap_area_lock); static bool vmap_initialized __read_mostly; -static struct rb_root purge_vmap_area_root = RB_ROOT; -static LIST_HEAD(purge_vmap_area_list); -static DEFINE_SPINLOCK(purge_vmap_area_lock); - /* * This kmem_cache is used for vmap_area objects. Instead of * allocating from slab we reuse an object from this cache to @@ -782,6 +778,12 @@ struct rb_list { static struct vmap_node { /* Bookkeeping data of this node. */ struct rb_list busy; + struct rb_list lazy; + + /* + * Ready-to-free areas. + */ + struct list_head purge_list; } single; static struct vmap_node *vmap_nodes = &single; @@ -1766,40 +1768,22 @@ static DEFINE_MUTEX(vmap_purge_lock); /* for per-CPU blocks */ static void purge_fragmented_blocks_allcpus(void); +static cpumask_t purge_nodes; /* * Purges all lazily-freed vmap areas. */ -static bool __purge_vmap_area_lazy(unsigned long start, unsigned long end) +static unsigned long +purge_vmap_node(struct vmap_node *vn) { - unsigned long resched_threshold; - unsigned int num_purged_areas = 0; - struct list_head local_purge_list; + unsigned long num_purged_areas = 0; struct vmap_area *va, *n_va; - lockdep_assert_held(&vmap_purge_lock); - - spin_lock(&purge_vmap_area_lock); - purge_vmap_area_root = RB_ROOT; - list_replace_init(&purge_vmap_area_list, &local_purge_list); - spin_unlock(&purge_vmap_area_lock); - - if (unlikely(list_empty(&local_purge_list))) - goto out; - - start = min(start, - list_first_entry(&local_purge_list, - struct vmap_area, list)->va_start); - - end = max(end, - list_last_entry(&local_purge_list, - struct vmap_area, list)->va_end); - - flush_tlb_kernel_range(start, end); - resched_threshold = lazy_max_pages() << 1; + if (list_empty(&vn->purge_list)) + return 0; spin_lock(&free_vmap_area_lock); - list_for_each_entry_safe(va, n_va, &local_purge_list, list) { + list_for_each_entry_safe(va, n_va, &vn->purge_list, list) { unsigned long nr = (va->va_end - va->va_start) >> PAGE_SHIFT; unsigned long orig_start = va->va_start; unsigned long orig_end = va->va_end; @@ -1821,13 +1805,55 @@ static bool __purge_vmap_area_lazy(unsigned long start, unsigned long end) atomic_long_sub(nr, &vmap_lazy_nr); num_purged_areas++; - - if (atomic_long_read(&vmap_lazy_nr) < resched_threshold) - cond_resched_lock(&free_vmap_area_lock); } spin_unlock(&free_vmap_area_lock); -out: + return num_purged_areas; +} + +/* + * Purges all lazily-freed vmap areas. 
+ */ +static bool __purge_vmap_area_lazy(unsigned long start, unsigned long end) +{ + unsigned long num_purged_areas = 0; + struct vmap_node *vn; + int i; + + lockdep_assert_held(&vmap_purge_lock); + purge_nodes = CPU_MASK_NONE; + + for (i = 0; i < nr_vmap_nodes; i++) { + vn = &vmap_nodes[i]; + + INIT_LIST_HEAD(&vn->purge_list); + + if (RB_EMPTY_ROOT(&vn->lazy.root)) + continue; + + spin_lock(&vn->lazy.lock); + WRITE_ONCE(vn->lazy.root.rb_node, NULL); + list_replace_init(&vn->lazy.head, &vn->purge_list); + spin_unlock(&vn->lazy.lock); + + start = min(start, list_first_entry(&vn->purge_list, + struct vmap_area, list)->va_start); + + end = max(end, list_last_entry(&vn->purge_list, + struct vmap_area, list)->va_end); + + cpumask_set_cpu(i, &purge_nodes); + } + + if (cpumask_weight(&purge_nodes) > 0) { + flush_tlb_kernel_range(start, end); + + for_each_cpu(i, &purge_nodes) { + vn = &nodes[i]; + num_purged_areas += purge_vmap_node(vn); + } + } + trace_purge_vmap_area_lazy(start, end, num_purged_areas); return num_purged_areas > 0; } @@ -1846,16 +1872,9 @@ static void reclaim_and_purge_vmap_areas(void) static void drain_vmap_area_work(struct work_struct *work) { - unsigned long nr_lazy; - - do { - mutex_lock(&vmap_purge_lock); - __purge_vmap_area_lazy(ULONG_MAX, 0); - mutex_unlock(&vmap_purge_lock); - - /* Recheck if further work is required. */ - nr_lazy = atomic_long_read(&vmap_lazy_nr); - } while (nr_lazy > lazy_max_pages()); + mutex_lock(&vmap_purge_lock); + __purge_vmap_area_lazy(ULONG_MAX, 0); + mutex_unlock(&vmap_purge_lock); } /* @@ -1865,6 +1884,7 @@ static void drain_vmap_area_work(struct work_struct *work) */ static void free_vmap_area_noflush(struct vmap_area *va) { + struct vmap_node *vn = addr_to_node(va->va_start); unsigned long nr_lazy_max = lazy_max_pages(); unsigned long va_start = va->va_start; unsigned long nr_lazy; @@ -1878,10 +1898,9 @@ static void free_vmap_area_noflush(struct vmap_area *va) /* * Merge or place it to the purge tree/list. 
*/ - spin_lock(&purge_vmap_area_lock); - merge_or_add_vmap_area(va, - &purge_vmap_area_root, &purge_vmap_area_list); - spin_unlock(&purge_vmap_area_lock); + spin_lock(&vn->lazy.lock); + merge_or_add_vmap_area(va, &vn->lazy.root, &vn->lazy.head); + spin_unlock(&vn->lazy.lock); trace_free_vmap_area_noflush(va_start, nr_lazy, nr_lazy_max); @@ -4411,15 +4430,21 @@ static void show_numa_info(struct seq_file *m, struct vm_struct *v) static void show_purge_info(struct seq_file *m) { + struct vmap_node *vn; struct vmap_area *va; + int i; - spin_lock(&purge_vmap_area_lock); - list_for_each_entry(va, &purge_vmap_area_list, list) { - seq_printf(m, "0x%pK-0x%pK %7ld unpurged vm_area\n", - (void *)va->va_start, (void *)va->va_end, - va->va_end - va->va_start); + for (i = 0; i < nr_vmap_nodes; i++) { + vn = &vmap_nodes[i]; + + spin_lock(&vn->lazy.lock); + list_for_each_entry(va, &vn->lazy.head, list) { + seq_printf(m, "0x%pK-0x%pK %7ld unpurged vm_area\n", + (void *)va->va_start, (void *)va->va_end, + va->va_end - va->va_start); + } + spin_unlock(&vn->lazy.lock); } - spin_unlock(&purge_vmap_area_lock); } static int s_show(struct seq_file *m, void *p) @@ -4558,6 +4583,10 @@ static void vmap_init_nodes(void) vn->busy.root = RB_ROOT; INIT_LIST_HEAD(&vn->busy.head); spin_lock_init(&vn->busy.lock); + + vn->lazy.root = RB_ROOT; + INIT_LIST_HEAD(&vn->lazy.head); + spin_lock_init(&vn->lazy.lock); } } From patchwork Tue Jan 2 18:46:29 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Uladzislau Rezki X-Patchwork-Id: 13509278 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by smtp.lore.kernel.org (Postfix) with ESMTP id 30ED5C46CD2 for ; Tue, 2 Jan 2024 18:47:18 +0000 (UTC) Received: by kanga.kvack.org (Postfix) id B72596B00B0; Tue, 2 Jan 2024 13:47:17 -0500 (EST) Received: by kanga.kvack.org (Postfix, from userid 40) id B2D466B02D0; Tue, 2 Jan 2024 13:47:17 -0500 (EST) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 973E76B0160; Tue, 2 Jan 2024 13:47:17 -0500 (EST) X-Delivered-To: linux-mm@kvack.org Received: from relay.hostedemail.com (smtprelay0012.hostedemail.com [216.40.44.12]) by kanga.kvack.org (Postfix) with ESMTP id 8286E6B00AF for ; Tue, 2 Jan 2024 13:47:17 -0500 (EST) Received: from smtpin07.hostedemail.com (a10.router.float.18 [10.200.18.1]) by unirelay07.hostedemail.com (Postfix) with ESMTP id 5ACBF1608BB for ; Tue, 2 Jan 2024 18:47:17 +0000 (UTC) X-FDA: 81635253714.07.115B106 Received: from mail-lf1-f52.google.com (mail-lf1-f52.google.com [209.85.167.52]) by imf27.hostedemail.com (Postfix) with ESMTP id 58F2B40007 for ; Tue, 2 Jan 2024 18:47:15 +0000 (UTC) Authentication-Results: imf27.hostedemail.com; dkim=pass header.d=gmail.com header.s=20230601 header.b=ejkPkRlf; spf=pass (imf27.hostedemail.com: domain of urezki@gmail.com designates 209.85.167.52 as permitted sender) smtp.mailfrom=urezki@gmail.com; dmarc=pass (policy=none) header.from=gmail.com ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=hostedemail.com; s=arc-20220608; t=1704221235; h=from:from:sender:reply-to:subject:subject:date:date: message-id:message-id:to:to:cc:cc:mime-version:mime-version: content-type:content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references:dkim-signature; bh=h07asMUPPPIr10RgfLD/fjXiYN2b5qLhuka70E2DoO0=; 
b=cvHMRc5pMDuNhMCAS+Gqtb7Aq+Mxs6Itlmw3fCrDO+Aw3unQh75j8+Ezp7B+stYfa3cUJ7 sjE8t6wrYFveSc29w4bLDJRgx0G5Of0mNWb6O9ZT/gDBRHbJAISSGZ883U1/zQXHV3Iay3 QgWAZH2/IFOVenZFSYDpbHLILM8VxLc= ARC-Seal: i=1; s=arc-20220608; d=hostedemail.com; t=1704221235; a=rsa-sha256; cv=none; b=NzwOu4WJSPDWkwbwTMGRedpXk9PRLqctvFeiJVZEScSdpLAgExYfji+5Rg0B8thMHIKiQW S30sh6xAcmisCrLj9BPgmP1oNXdHsSSs7fjtW7aGrrFZMSa6b4ckrOUsLAurGFtNGxM1lW 33eeXA1gFl+uxycI4hoVxgfT5ZdM7uI= ARC-Authentication-Results: i=1; imf27.hostedemail.com; dkim=pass header.d=gmail.com header.s=20230601 header.b=ejkPkRlf; spf=pass (imf27.hostedemail.com: domain of urezki@gmail.com designates 209.85.167.52 as permitted sender) smtp.mailfrom=urezki@gmail.com; dmarc=pass (policy=none) header.from=gmail.com Received: by mail-lf1-f52.google.com with SMTP id 2adb3069b0e04-50e7b51b0ceso6116887e87.1 for ; Tue, 02 Jan 2024 10:47:14 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20230601; t=1704221234; x=1704826034; darn=kvack.org; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=h07asMUPPPIr10RgfLD/fjXiYN2b5qLhuka70E2DoO0=; b=ejkPkRlfOZsPMnN654lbSncMBr9zMaqgL2xDwnHPiZ3CgpLd0dRsybgO5fR1vadryB F57QaNukA16HJIKR4VIohF0kbIvPLGfRevLmT1pS6nJVKxDK/lUkrtspnoxl7F6BpU+Q OYDJO2Edpv0Xlx2zhfBSgRPu1skP2XXepiPecYFJu5HUjqCCnxwAZkNBMZHpCPl44jX9 S0wzeR3lWtOrkPzkefCJKs/xchrN4Hx6HxNOCnPlld1DjVO/35zBswxKJXqXldgO7MVr utn7GfM2RHEHxQw/Hui7ByojAmpxxr3oWX/p5eArj+nze/9/vZ1G88lWUcWzVkrE5mNF /U9g== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1704221234; x=1704826034; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=h07asMUPPPIr10RgfLD/fjXiYN2b5qLhuka70E2DoO0=; b=XDGDhZ/DR/goH8tMCdqLsMXhJV/Di3tnswXGYii9UCAbyA7DVPeJaaaMS1nsVDlqat +QbBPtJTW2utfXxEJ4DobfDh7uXHO/syXv8tTTAtnFv2Mej/g2U08hBcJHIntELYutXi 8qQz+g6hQJ6ZPW6xyTvLk1FMaxiKD7EkGqfFMN64cKtQzwxkOwzQjqdIrxoczrt+TcL+ DhtGIWnwDb+4/NU2S/nvD/paTaQ5bLbndbwHPdd1NKsekp32xOes6kf3y2FOmdRDvUFk 3Q7YDQlvMhtfTroLbAPnh+V4hTYHP0gNjtohk0aRnCK5ZfvlU/fHPu4NGo+iDt+RZVbA wm3g== X-Gm-Message-State: AOJu0Yw2erV16ZE9KiUtjW/r/8oWzV0Sx/lO+CfF0PpoRleqS79cHUWs PTd+WiIp/E+u24eHYLhNHsZtlHykh793aw== X-Google-Smtp-Source: AGHT+IECQYn33SzuSoJtEpL4Qq9+JrKnIGLpOJF6cxZlu1nvYsQ8EYyOApu4rMADQk0y798LtEw2JQ== X-Received: by 2002:ac2:5975:0:b0:50e:765b:1ea3 with SMTP id h21-20020ac25975000000b0050e765b1ea3mr6543728lfp.22.1704221203595; Tue, 02 Jan 2024 10:46:43 -0800 (PST) Received: from pc638.lan (host-185-121-47-193.sydskane.nu. [185.121.47.193]) by smtp.gmail.com with ESMTPSA id q1-20020ac246e1000000b0050e7be886d9sm2592656lfo.56.2024.01.02.10.46.42 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Tue, 02 Jan 2024 10:46:43 -0800 (PST) From: "Uladzislau Rezki (Sony)" To: linux-mm@kvack.org, Andrew Morton Cc: LKML , Baoquan He , Lorenzo Stoakes , Christoph Hellwig , Matthew Wilcox , "Liam R . Howlett" , Dave Chinner , "Paul E . 
McKenney" , Joel Fernandes , Uladzislau Rezki , Oleksiy Avramchenko Subject: [PATCH v3 07/11] mm: vmalloc: Offload free_vmap_area_lock lock Date: Tue, 2 Jan 2024 19:46:29 +0100 Message-Id: <20240102184633.748113-8-urezki@gmail.com> X-Mailer: git-send-email 2.39.2 In-Reply-To: <20240102184633.748113-1-urezki@gmail.com> References: <20240102184633.748113-1-urezki@gmail.com> MIME-Version: 1.0 X-Rspamd-Queue-Id: 58F2B40007 X-Rspam-User: X-Rspamd-Server: rspam11 X-Stat-Signature: 1ccenhdkh38aoyery4p6kx3qoo7yfoib X-HE-Tag: 1704221235-591247 X-HE-Meta: U2FsdGVkX1/XQn2MSSlytk4A1mQ/bKiC/tmAiSN0i/wgZKPnd8WKHhqo3rCiQAQ9rq2yAiyr4QtKEaUMIecQHKjLuyT57Zo6XU1tQUk+uwjOqoNP2bWuiw+ksAysl+J9c2wgasCa2II4zbbxQe0CW0FYIAJS7w+H5Ur9OXNCS2AE24r9fVXW93oWyX+m4RWsDW7DQMuJ1EPLnhH1RfZ6t1+apVqiManfok92D/w4f/yVgkznCV9RG05CltVynmcsgEwCVceg00hakViGlrs2rnhIghdylH7KcClhhdIQXTR9JD7Cnyrz/pBuC270ENDEPw2o5PzCNE+qsF7IrRII3tXIL+VjWjzWjHnVH/LcXG2pztrMJX0/qqbZyVaCLshr8mEuwoOEKADj3eO+9IDrvCiMbCciNP2DltS6KDpKMnN1uPImPzkvjVo0KLC4Z8w6l2sNKZaSi5pHgWpFGQT0Py5jRa5O7Gp49nzut7NxK9D5Lmj7OTjjgmWrDvUt0HvtzJcX7I4SibzDdYWe3gDpGvGjp0M+hhA+q//QfYil0UkXc8HRwaiiK2vM97bUnmYxXVTaiB8fRSYeGqWWeZvEB6dQsT7vc9Xcq08/Vyx94OtGOJnDE0vDJNn0Ft7bWW/jitbze346K+hr2BtF1Dqvr9vDkWXUMzYySNB0cH82SZKUyLVFGiY4Ka2EcsNp3GdNX2c/eR+jiRNx/MrMVh+Hz/xn54HeEJqpQ7UVLv5wu6Zq9bcjHHczroaN4Gg9pdPbi6YY7vsd4Nd112hzJI14WbXkHPZbzZgnGB9XFtf9ChVHzbjb0mhADiJy68Vq9Aknsit++GD5YisO8D5dbQYELrlUmcerhJTn00nX9KPxZGbRxDZF06wv9FeJZkAouhFpJIRnHxzOA8j8azKUTom+hAI1Bno5Uo2YBeGzzA4gqjvoORyVW4UfLMkgdflPS7SoDj46551HqAS5kdoC+jz PmDLmh/w jnSWzwk5RVK6sALvPlQWxgD3+qEBwTjRGyJb2eOYGZ7kDEA/M6FA/iws7HtWNNM3TXUQLxjwVTOtQq5jtJIM4ElT/j4O63BfOw33k2TLO66B7DFudpmdVutnmPbReeQX1PQhhtiNmuBrRt9m2rE2t9k5AMhCAT3k1kVHo0Q++xyY1wyF5y/Wrz5VVEmH4LfEcd1ji545ThlZrldjtqxAhrJrZ+LczGP4Fx/Tz7CASzXLXFQOyqxiC2eVINgOApDbUkXW2E7h5j5zvylGy21FO9dSYzIYAMapReo/iMdkB+MqjDlcqcgmOLE0GZgIut7DOy2zhlqsp9M+1R7iuOt12YOPSZsToJDGkqj3FKQNpr+xG1cyEgTxpWvjwdVeStGqn7LsdeQIeIV7MApv5BqCkRH4JTAbejAqGXp7dbJNOi+lBXqWHe1llasKIGF5aDwLhSUzFLB547D9Iipg9obYKBBMAoN5cWKoG5qz7APckjMMZ0OI= X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: List-Subscribe: List-Unsubscribe: Concurrent access to a global vmap space is a bottle-neck. We can simulate a high contention by running a vmalloc test suite. To address it, introduce an effective vmap node logic. Each node behaves as independent entity. When a node is accessed it serves a request directly(if possible) from its pool. This model has a size based pool for requests, i.e. pools are serialized and populated based on object size and real demand. A maximum object size that pool can handle is set to 256 pages. This technique reduces a pressure on the global vmap lock. Signed-off-by: Uladzislau Rezki (Sony) --- mm/vmalloc.c | 387 +++++++++++++++++++++++++++++++++++++++++++++------ 1 file changed, 342 insertions(+), 45 deletions(-) diff --git a/mm/vmalloc.c b/mm/vmalloc.c index 9b2f1b0cac9d..fa4ab2bbbc5b 100644 --- a/mm/vmalloc.c +++ b/mm/vmalloc.c @@ -775,7 +775,22 @@ struct rb_list { spinlock_t lock; }; +struct vmap_pool { + struct list_head head; + unsigned long len; +}; + +/* + * A fast size storage contains VAs up to 1M size. + */ +#define MAX_VA_SIZE_PAGES 256 + static struct vmap_node { + /* Simple size segregated storage. */ + struct vmap_pool pool[MAX_VA_SIZE_PAGES]; + spinlock_t pool_lock; + bool skip_populate; + /* Bookkeeping data of this node. 
*/ struct rb_list busy; struct rb_list lazy; @@ -784,6 +799,8 @@ static struct vmap_node { * Ready-to-free areas. */ struct list_head purge_list; + struct work_struct purge_work; + unsigned long nr_purged; } single; static struct vmap_node *vmap_nodes = &single; @@ -802,6 +819,61 @@ addr_to_node(unsigned long addr) return &vmap_nodes[addr_to_node_id(addr)]; } +static inline struct vmap_node * +id_to_node(unsigned int id) +{ + return &vmap_nodes[id % nr_vmap_nodes]; +} + +/* + * We use the value 0 to represent "no node", that is why + * an encoded value will be the node-id incremented by 1. + * It is always greater then 0. A valid node_id which can + * be encoded is [0:nr_vmap_nodes - 1]. If a passed node_id + * is not valid 0 is returned. + */ +static unsigned int +encode_vn_id(unsigned int node_id) +{ + /* Can store U8_MAX [0:254] nodes. */ + if (node_id < nr_vmap_nodes) + return (node_id + 1) << BITS_PER_BYTE; + + /* Warn and no node encoded. */ + WARN_ONCE(1, "Encode wrong node id (%u)\n", node_id); + return 0; +} + +/* + * Returns an encoded node-id, the valid range is within + * [0:nr_vmap_nodes-1] values. Otherwise nr_vmap_nodes is + * returned if extracted data is wrong. + */ +static unsigned int +decode_vn_id(unsigned int val) +{ + unsigned int node_id = (val >> BITS_PER_BYTE) - 1; + + /* Can store U8_MAX [0:254] nodes. */ + if (node_id < nr_vmap_nodes) + return node_id; + + /* If it was _not_ zero, warn. */ + WARN_ONCE(node_id != UINT_MAX, + "Decode wrong node id (%d)\n", node_id); + + return nr_vmap_nodes; +} + +static bool +is_vn_id_valid(unsigned int node_id) +{ + if (node_id < nr_vmap_nodes) + return true; + + return false; +} + static __always_inline unsigned long va_size(struct vmap_area *va) { @@ -1623,6 +1695,104 @@ preload_this_cpu_lock(spinlock_t *lock, gfp_t gfp_mask, int node) kmem_cache_free(vmap_area_cachep, va); } +static struct vmap_pool * +size_to_va_pool(struct vmap_node *vn, unsigned long size) +{ + unsigned int idx = (size - 1) / PAGE_SIZE; + + if (idx < MAX_VA_SIZE_PAGES) + return &vn->pool[idx]; + + return NULL; +} + +static bool +node_pool_add_va(struct vmap_node *n, struct vmap_area *va) +{ + struct vmap_pool *vp; + + vp = size_to_va_pool(n, va_size(va)); + if (!vp) + return false; + + spin_lock(&n->pool_lock); + list_add(&va->list, &vp->head); + WRITE_ONCE(vp->len, vp->len + 1); + spin_unlock(&n->pool_lock); + + return true; +} + +static struct vmap_area * +node_pool_del_va(struct vmap_node *vn, unsigned long size, + unsigned long align, unsigned long vstart, + unsigned long vend) +{ + struct vmap_area *va = NULL; + struct vmap_pool *vp; + int err = 0; + + vp = size_to_va_pool(vn, size); + if (!vp || list_empty(&vp->head)) + return NULL; + + spin_lock(&vn->pool_lock); + if (!list_empty(&vp->head)) { + va = list_first_entry(&vp->head, struct vmap_area, list); + + if (IS_ALIGNED(va->va_start, align)) { + /* + * Do some sanity check and emit a warning + * if one of below checks detects an error. 
+ */ + err |= (va_size(va) != size); + err |= (va->va_start < vstart); + err |= (va->va_end > vend); + + if (!WARN_ON_ONCE(err)) { + list_del_init(&va->list); + WRITE_ONCE(vp->len, vp->len - 1); + } else { + va = NULL; + } + } else { + list_move_tail(&va->list, &vp->head); + va = NULL; + } + } + spin_unlock(&vn->pool_lock); + + return va; +} + +static struct vmap_area * +node_alloc(unsigned long size, unsigned long align, + unsigned long vstart, unsigned long vend, + unsigned long *addr, unsigned int *vn_id) +{ + struct vmap_area *va; + + *vn_id = 0; + *addr = vend; + + /* + * Fallback to a global heap if not vmalloc or there + * is only one node. + */ + if (vstart != VMALLOC_START || vend != VMALLOC_END || + nr_vmap_nodes == 1) + return NULL; + + *vn_id = raw_smp_processor_id() % nr_vmap_nodes; + va = node_pool_del_va(id_to_node(*vn_id), size, align, vstart, vend); + *vn_id = encode_vn_id(*vn_id); + + if (va) + *addr = va->va_start; + + return va; +} + /* * Allocate a region of KVA of the specified size and alignment, within the * vstart and vend. @@ -1637,6 +1807,7 @@ static struct vmap_area *alloc_vmap_area(unsigned long size, struct vmap_area *va; unsigned long freed; unsigned long addr; + unsigned int vn_id; int purged = 0; int ret; @@ -1647,11 +1818,23 @@ static struct vmap_area *alloc_vmap_area(unsigned long size, return ERR_PTR(-EBUSY); might_sleep(); - gfp_mask = gfp_mask & GFP_RECLAIM_MASK; - va = kmem_cache_alloc_node(vmap_area_cachep, gfp_mask, node); - if (unlikely(!va)) - return ERR_PTR(-ENOMEM); + /* + * If a VA is obtained from a global heap(if it fails here) + * it is anyway marked with this "vn_id" so it is returned + * to this pool's node later. Such way gives a possibility + * to populate pools based on users demand. + * + * On success a ready to go VA is returned. + */ + va = node_alloc(size, align, vstart, vend, &addr, &vn_id); + if (!va) { + gfp_mask = gfp_mask & GFP_RECLAIM_MASK; + + va = kmem_cache_alloc_node(vmap_area_cachep, gfp_mask, node); + if (unlikely(!va)) + return ERR_PTR(-ENOMEM); + } /* * Only scan the relevant parts containing pointers to other objects @@ -1660,10 +1843,12 @@ static struct vmap_area *alloc_vmap_area(unsigned long size, kmemleak_scan_area(&va->rb_node, SIZE_MAX, gfp_mask); retry: - preload_this_cpu_lock(&free_vmap_area_lock, gfp_mask, node); - addr = __alloc_vmap_area(&free_vmap_area_root, &free_vmap_area_list, - size, align, vstart, vend); - spin_unlock(&free_vmap_area_lock); + if (addr == vend) { + preload_this_cpu_lock(&free_vmap_area_lock, gfp_mask, node); + addr = __alloc_vmap_area(&free_vmap_area_root, &free_vmap_area_list, + size, align, vstart, vend); + spin_unlock(&free_vmap_area_lock); + } trace_alloc_vmap_area(addr, size, align, vstart, vend, addr == vend); @@ -1677,7 +1862,7 @@ static struct vmap_area *alloc_vmap_area(unsigned long size, va->va_start = addr; va->va_end = addr + size; va->vm = NULL; - va->flags = va_flags; + va->flags = (va_flags | vn_id); vn = addr_to_node(va->va_start); @@ -1770,63 +1955,135 @@ static DEFINE_MUTEX(vmap_purge_lock); static void purge_fragmented_blocks_allcpus(void); static cpumask_t purge_nodes; -/* - * Purges all lazily-freed vmap areas. 
- */ -static unsigned long -purge_vmap_node(struct vmap_node *vn) +static void +reclaim_list_global(struct list_head *head) { - unsigned long num_purged_areas = 0; - struct vmap_area *va, *n_va; + struct vmap_area *va, *n; - if (list_empty(&vn->purge_list)) - return 0; + if (list_empty(head)) + return; spin_lock(&free_vmap_area_lock); + list_for_each_entry_safe(va, n, head, list) + merge_or_add_vmap_area_augment(va, + &free_vmap_area_root, &free_vmap_area_list); + spin_unlock(&free_vmap_area_lock); +} + +static void +decay_va_pool_node(struct vmap_node *vn, bool full_decay) +{ + struct vmap_area *va, *nva; + struct list_head decay_list; + struct rb_root decay_root; + unsigned long n_decay; + int i; + + decay_root = RB_ROOT; + INIT_LIST_HEAD(&decay_list); + + for (i = 0; i < MAX_VA_SIZE_PAGES; i++) { + struct list_head tmp_list; + + if (list_empty(&vn->pool[i].head)) + continue; + + INIT_LIST_HEAD(&tmp_list); + + /* Detach the pool, so no-one can access it. */ + spin_lock(&vn->pool_lock); + list_replace_init(&vn->pool[i].head, &tmp_list); + spin_unlock(&vn->pool_lock); + + if (full_decay) + WRITE_ONCE(vn->pool[i].len, 0); + + /* Decay a pool by ~25% out of left objects. */ + n_decay = vn->pool[i].len >> 2; + + list_for_each_entry_safe(va, nva, &tmp_list, list) { + list_del_init(&va->list); + merge_or_add_vmap_area(va, &decay_root, &decay_list); + + if (!full_decay) { + WRITE_ONCE(vn->pool[i].len, vn->pool[i].len - 1); + + if (!--n_decay) + break; + } + } + + /* Attach the pool back if it has been partly decayed. */ + if (!full_decay && !list_empty(&tmp_list)) { + spin_lock(&vn->pool_lock); + list_replace_init(&tmp_list, &vn->pool[i].head); + spin_unlock(&vn->pool_lock); + } + } + + reclaim_list_global(&decay_list); +} + +static void purge_vmap_node(struct work_struct *work) +{ + struct vmap_node *vn = container_of(work, + struct vmap_node, purge_work); + struct vmap_area *va, *n_va; + LIST_HEAD(local_list); + + vn->nr_purged = 0; + list_for_each_entry_safe(va, n_va, &vn->purge_list, list) { unsigned long nr = (va->va_end - va->va_start) >> PAGE_SHIFT; unsigned long orig_start = va->va_start; unsigned long orig_end = va->va_end; + unsigned int vn_id = decode_vn_id(va->flags); - /* - * Finally insert or merge lazily-freed area. It is - * detached and there is no need to "unlink" it from - * anything. - */ - va = merge_or_add_vmap_area_augment(va, &free_vmap_area_root, - &free_vmap_area_list); - - if (!va) - continue; + list_del_init(&va->list); if (is_vmalloc_or_module_addr((void *)orig_start)) kasan_release_vmalloc(orig_start, orig_end, va->va_start, va->va_end); atomic_long_sub(nr, &vmap_lazy_nr); - num_purged_areas++; + vn->nr_purged++; + + if (is_vn_id_valid(vn_id) && !vn->skip_populate) + if (node_pool_add_va(vn, va)) + continue; + + /* Go back to global. */ + list_add(&va->list, &local_list); } - spin_unlock(&free_vmap_area_lock); - return num_purged_areas; + reclaim_list_global(&local_list); } /* * Purges all lazily-freed vmap areas. */ -static bool __purge_vmap_area_lazy(unsigned long start, unsigned long end) +static bool __purge_vmap_area_lazy(unsigned long start, unsigned long end, + bool full_pool_decay) { - unsigned long num_purged_areas = 0; + unsigned long nr_purged_areas = 0; + unsigned int nr_purge_helpers; + unsigned int nr_purge_nodes; struct vmap_node *vn; int i; lockdep_assert_held(&vmap_purge_lock); + + /* + * Use cpumask to mark which node has to be processed. 
+ */ purge_nodes = CPU_MASK_NONE; for (i = 0; i < nr_vmap_nodes; i++) { vn = &vmap_nodes[i]; INIT_LIST_HEAD(&vn->purge_list); + vn->skip_populate = full_pool_decay; + decay_va_pool_node(vn, full_pool_decay); if (RB_EMPTY_ROOT(&vn->lazy.root)) continue; @@ -1845,17 +2102,45 @@ static bool __purge_vmap_area_lazy(unsigned long start, unsigned long end) cpumask_set_cpu(i, &purge_nodes); } - if (cpumask_weight(&purge_nodes) > 0) { + nr_purge_nodes = cpumask_weight(&purge_nodes); + if (nr_purge_nodes > 0) { flush_tlb_kernel_range(start, end); + /* One extra worker is per a lazy_max_pages() full set minus one. */ + nr_purge_helpers = atomic_long_read(&vmap_lazy_nr) / lazy_max_pages(); + nr_purge_helpers = clamp(nr_purge_helpers, 1U, nr_purge_nodes) - 1; + for_each_cpu(i, &purge_nodes) { - vn = &nodes[i]; - num_purged_areas += purge_vmap_node(vn); + vn = &vmap_nodes[i]; + + if (nr_purge_helpers > 0) { + INIT_WORK(&vn->purge_work, purge_vmap_node); + + if (cpumask_test_cpu(i, cpu_online_mask)) + schedule_work_on(i, &vn->purge_work); + else + schedule_work(&vn->purge_work); + + nr_purge_helpers--; + } else { + vn->purge_work.func = NULL; + purge_vmap_node(&vn->purge_work); + nr_purged_areas += vn->nr_purged; + } + } + + for_each_cpu(i, &purge_nodes) { + vn = &vmap_nodes[i]; + + if (vn->purge_work.func) { + flush_work(&vn->purge_work); + nr_purged_areas += vn->nr_purged; + } } } - trace_purge_vmap_area_lazy(start, end, num_purged_areas); - return num_purged_areas > 0; + trace_purge_vmap_area_lazy(start, end, nr_purged_areas); + return nr_purged_areas > 0; } /* @@ -1866,14 +2151,14 @@ static void reclaim_and_purge_vmap_areas(void) { mutex_lock(&vmap_purge_lock); purge_fragmented_blocks_allcpus(); - __purge_vmap_area_lazy(ULONG_MAX, 0); + __purge_vmap_area_lazy(ULONG_MAX, 0, true); mutex_unlock(&vmap_purge_lock); } static void drain_vmap_area_work(struct work_struct *work) { mutex_lock(&vmap_purge_lock); - __purge_vmap_area_lazy(ULONG_MAX, 0); + __purge_vmap_area_lazy(ULONG_MAX, 0, false); mutex_unlock(&vmap_purge_lock); } @@ -1884,9 +2169,10 @@ static void drain_vmap_area_work(struct work_struct *work) */ static void free_vmap_area_noflush(struct vmap_area *va) { - struct vmap_node *vn = addr_to_node(va->va_start); unsigned long nr_lazy_max = lazy_max_pages(); unsigned long va_start = va->va_start; + unsigned int vn_id = decode_vn_id(va->flags); + struct vmap_node *vn; unsigned long nr_lazy; if (WARN_ON_ONCE(!list_empty(&va->list))) @@ -1896,10 +2182,14 @@ static void free_vmap_area_noflush(struct vmap_area *va) PAGE_SHIFT, &vmap_lazy_nr); /* - * Merge or place it to the purge tree/list. + * If it was request by a certain node we would like to + * return it to that node, i.e. its pool for later reuse. */ + vn = is_vn_id_valid(vn_id) ? 
+ id_to_node(vn_id):addr_to_node(va->va_start); + spin_lock(&vn->lazy.lock); - merge_or_add_vmap_area(va, &vn->lazy.root, &vn->lazy.head); + insert_vmap_area(va, &vn->lazy.root, &vn->lazy.head); spin_unlock(&vn->lazy.lock); trace_free_vmap_area_noflush(va_start, nr_lazy, nr_lazy_max); @@ -2408,7 +2698,7 @@ static void _vm_unmap_aliases(unsigned long start, unsigned long end, int flush) } free_purged_blocks(&purge_list); - if (!__purge_vmap_area_lazy(start, end) && flush) + if (!__purge_vmap_area_lazy(start, end, false) && flush) flush_tlb_kernel_range(start, end); mutex_unlock(&vmap_purge_lock); } @@ -4576,7 +4866,7 @@ static void vmap_init_free_space(void) static void vmap_init_nodes(void) { struct vmap_node *vn; - int i; + int i, j; for (i = 0; i < nr_vmap_nodes; i++) { vn = &vmap_nodes[i]; @@ -4587,6 +4877,13 @@ static void vmap_init_nodes(void) vn->lazy.root = RB_ROOT; INIT_LIST_HEAD(&vn->lazy.head); spin_lock_init(&vn->lazy.lock); + + for (j = 0; j < MAX_VA_SIZE_PAGES; j++) { + INIT_LIST_HEAD(&vn->pool[j].head); + WRITE_ONCE(vn->pool[j].len, 0); + } + + spin_lock_init(&vn->pool_lock); } } From patchwork Tue Jan 2 18:46:30 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Uladzislau Rezki X-Patchwork-Id: 13509279 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by smtp.lore.kernel.org (Postfix) with ESMTP id A9F50C47074 for ; Tue, 2 Jan 2024 18:47:19 +0000 (UTC) Received: by kanga.kvack.org (Postfix) id 4E41D6B02D4; Tue, 2 Jan 2024 13:47:18 -0500 (EST) Received: by kanga.kvack.org (Postfix, from userid 40) id 493D26B02D5; Tue, 2 Jan 2024 13:47:18 -0500 (EST) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 30F996B02D6; Tue, 2 Jan 2024 13:47:18 -0500 (EST) X-Delivered-To: linux-mm@kvack.org Received: from relay.hostedemail.com (smtprelay0012.hostedemail.com [216.40.44.12]) by kanga.kvack.org (Postfix) with ESMTP id 1D9EB6B02D4 for ; Tue, 2 Jan 2024 13:47:18 -0500 (EST) Received: from smtpin04.hostedemail.com (a10.router.float.18 [10.200.18.1]) by unirelay06.hostedemail.com (Postfix) with ESMTP id F2236A1BE1 for ; Tue, 2 Jan 2024 18:47:17 +0000 (UTC) X-FDA: 81635253714.04.7CE84AB Received: from mail-lf1-f51.google.com (mail-lf1-f51.google.com [209.85.167.51]) by imf10.hostedemail.com (Postfix) with ESMTP id 495B2C0010 for ; Tue, 2 Jan 2024 18:47:16 +0000 (UTC) Authentication-Results: imf10.hostedemail.com; dkim=pass header.d=gmail.com header.s=20230601 header.b="cah3/AyH"; dmarc=pass (policy=none) header.from=gmail.com; spf=pass (imf10.hostedemail.com: domain of urezki@gmail.com designates 209.85.167.51 as permitted sender) smtp.mailfrom=urezki@gmail.com ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=hostedemail.com; s=arc-20220608; t=1704221236; h=from:from:sender:reply-to:subject:subject:date:date: message-id:message-id:to:to:cc:cc:mime-version:mime-version: content-type:content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references:dkim-signature; bh=QZAvt7CkxqIe5cgK0fSSW+F0xApr58BOh4APGf1P6Lc=; b=t5u9t0qYHALI3fDVBLPgASWXcLmET2GVm18FK3Pp7/1dvNMff8wS49XwaYWhpYXHqbfsVx i0X5WsITix9e77sKWwJfu8gEyA0Q+YB36/B7vKicMzif0cRWJA5vaAVIW1zTBjKlU++aOk ucITBE361qjXCSSuMd/wopLLn0TEQG8= ARC-Authentication-Results: i=1; imf10.hostedemail.com; dkim=pass header.d=gmail.com header.s=20230601 
header.b="cah3/AyH"; dmarc=pass (policy=none) header.from=gmail.com; spf=pass (imf10.hostedemail.com: domain of urezki@gmail.com designates 209.85.167.51 as permitted sender) smtp.mailfrom=urezki@gmail.com ARC-Seal: i=1; s=arc-20220608; d=hostedemail.com; t=1704221236; a=rsa-sha256; cv=none; b=rWwKrP4KL4SCjPSdcZyQAiiGKwcnHL2UzRC7yxTQedW541fYvjVE3mg5mdqaCm/UWaR2Dv lA9iBD3iQI0zCsIeOmodBTaFqWk6ts3FJcYUSIM8gYx8UsQkdP0CjWINyUBqt7idV7ZSjq ajwhW/u/5xVdv56qtpbOjtQ1mxQe/FE= Received: by mail-lf1-f51.google.com with SMTP id 2adb3069b0e04-50e7e6283bdso6452576e87.1 for ; Tue, 02 Jan 2024 10:47:15 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20230601; t=1704221235; x=1704826035; darn=kvack.org; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=QZAvt7CkxqIe5cgK0fSSW+F0xApr58BOh4APGf1P6Lc=; b=cah3/AyHIFnsoNTxXALgHeU0UIGBr9XYXlC8xyHkQ2eRrhC0yyqjplqFfTMvG2xD8d BggCQS3QVTE8wNRVS1BJ/ncS1hi6zRm/Jv4SWktJyx0fvgtqO+XuC6289LD75qj4E/QI fQrKFskDHYAMFyxA85a3eXHODJE3aFqfEfH9w3vxh1GewsAmDPaqmw9gySQTqxSRTp9i H/tltcyxBAuupglrn+KtW7I4NtUwysMc+WyIMzk2zW8Apnf9vJZdNfrjUgvv4z7XBaQw 465x3fwn3Ql531IkycMyrEp+fyCACQF+E/TVJA3wviY5S3awbGVWNt8xaZX2E984dHv9 6MIw== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1704221235; x=1704826035; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=QZAvt7CkxqIe5cgK0fSSW+F0xApr58BOh4APGf1P6Lc=; b=v6NDuzjgvSKT7usXy0iBpgX6SEhsXsz6EJhsx55V4lxLGEgxN5KKhWMdqdB59tNZ9l is+R+GdXzBUMR+tAoZlyGqJ4gsMmjvePnMhSpJrzBkvPA2ODsEPzM5a04Xp3FlVYkpv4 ULSaSnS9o0clCXX5jxjP1BxOZgGPjCqR3r2tCzw5XaSGGXS8H9OYDJSYkn9qVvnXuN9Q y365YWJzvt9hbK2Hj8tcTotO3wGcMRJgD4Hjo6GQfZcciyv8nU3oyahk6E+fsxCDBzHA UhsyOt64YmPr3NaweVA9Ook9sBX1bIYupJlmoKlYrgjuRSGW+92+TxARAUEyb07XGjJY gEZA== X-Gm-Message-State: AOJu0Yxa0/P3dYxK9mKqPKcNsfvUDBXfSUJBlnOVKPcLYC5TzyAjusWp RlrSjkLjwixYAXqk7iZgAnVtKo7kXWNAVA== X-Google-Smtp-Source: AGHT+IG2+hyU4AcsC7fH/iKMYGpwIo7u9xULgZSVpY4T2CGFn4/x55AJtPvIKYTpeIpe7us9O7g2tw== X-Received: by 2002:ac2:520b:0:b0:50e:9a53:c22c with SMTP id a11-20020ac2520b000000b0050e9a53c22cmr794413lfl.126.1704221234559; Tue, 02 Jan 2024 10:47:14 -0800 (PST) Received: from pc638.lan (host-185-121-47-193.sydskane.nu. [185.121.47.193]) by smtp.gmail.com with ESMTPSA id q1-20020ac246e1000000b0050e7be886d9sm2592656lfo.56.2024.01.02.10.47.13 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Tue, 02 Jan 2024 10:47:14 -0800 (PST) From: "Uladzislau Rezki (Sony)" To: linux-mm@kvack.org, Andrew Morton Cc: LKML , Baoquan He , Lorenzo Stoakes , Christoph Hellwig , Matthew Wilcox , "Liam R . Howlett" , Dave Chinner , "Paul E . 
McKenney" , Joel Fernandes , Uladzislau Rezki , Oleksiy Avramchenko Subject: [PATCH v3 08/11] mm: vmalloc: Support multiple nodes in vread_iter Date: Tue, 2 Jan 2024 19:46:30 +0100 Message-Id: <20240102184633.748113-9-urezki@gmail.com> X-Mailer: git-send-email 2.39.2 In-Reply-To: <20240102184633.748113-1-urezki@gmail.com> References: <20240102184633.748113-1-urezki@gmail.com> MIME-Version: 1.0 X-Rspam-User: X-Rspamd-Server: rspam12 X-Rspamd-Queue-Id: 495B2C0010 X-Stat-Signature: gtrjgjsyjnouh3j31geh8mm9b6s96fp3 X-HE-Tag: 1704221236-825974 X-HE-Meta: U2FsdGVkX199CPxJDLYHuYhLY+7J/VTfbFwJGSPmhqCNZibyrLV5CBw5umZq8Kb4K8u6aVtqR6fMHibuyQ074ll2bEor2taKsopHBpDEgxnFoB9cunNjYX02/f1GNTcslOSQm7fQM9Qc7+UNpQzNgmXVl/YFCCQFyhxS4rcARusS/ejGCe9TdLzP+uYSX9FHRMy+2NRLpdRqzvMcXQ+KWsWPcvmDDyC2dZBEuCbxhClNJ9nUzFxl0uwwVnOlydRj/Q4Fn9Y6byjYk7MT0bXjclS0fdcmjU5HkMC/JhVFIXG/r/76NNM9RKccolZUv5F/j4GryFkYhOB2ANNBxN1Z1kLpOwIZ5G7r9ZqX8MUfFnlH6+LiFhZGR7gh/jwx7ezIJNC1jCJIV5+LeGCnwdjb/EF0ULgu8Uc4CACENu2wx3W90Zp+OpKT2LTWBbbI6ESd+Sbv0/OWLsD4A84pnGRMfXEnAEzvTQd5g5QTaUVYsLIn9tPmTwsiQHQzGcvMnDsPEezoLuv0g9ofVjELjv2jgtEmxIPp2oLCLjg1m37kKsWmf2Col7rVWjAX7h7e+ykzg44Klxw9aEsHpkrSOHzGf0/JiwJsN48nlUz2bhdimYSZ2JLEWmuWgyO8O24Gx237ZC7ah/cy5gP8UkvKT6ac5PB/Nbax0sA6qlyLtxVh7Wtcs4/5uq2yT+XKKy9N3PK3TzD65tRleDFoolfTHjVJvJgTKULubT4nsc+C7Mf4u/lRZw00A36HA7Wv22yPqfV8/XfmA+T1YEl3KR9UKWHPrBGRiC2Ag28Pzut6KWe0QWj9BMameif+sQwrPZ3fI1lfRogm6haOCjvPdilfM5P5AQvJ/E72Y/qGwt8IREQSXXxdefjNy3zpky1ODbHGjCRnfA5HuMjgXVC27+TUkRdrwEgGsx9SBTTA4KtmKunw1LrfnKkcOtIBdlOhSliSNjmm4E8Ht4cu5nNvhs37bo7 7knZQHMu XcmgskVMUpRJpUwVjoqe47UNjQ5lenX+R6VXRIdz+HKm+u5Dee2ikdni2TD/v0HFG2IJJE7QOlwblEPfiKTCbNpWZSQpG6ColG+LDt0AaShuxHUKTMmBAqm/HJ3/FawaZ8mcypm2/UL+Gq3L3tb7gkmnftIRGRSfd31/z X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: List-Subscribe: List-Unsubscribe: Extend the vread_iter() to be able to perform a sequential reading of VAs which are spread among multiple nodes. So a data read over the /dev/kmem correctly reflects a vmalloc memory layout. Reviewed-by: Baoquan He Signed-off-by: Uladzislau Rezki (Sony) --- mm/vmalloc.c | 67 +++++++++++++++++++++++++++++++++++++++++----------- 1 file changed, 53 insertions(+), 14 deletions(-) diff --git a/mm/vmalloc.c b/mm/vmalloc.c index fa4ab2bbbc5b..594ed003d44d 100644 --- a/mm/vmalloc.c +++ b/mm/vmalloc.c @@ -906,7 +906,7 @@ unsigned long vmalloc_nr_pages(void) /* Look up the first VA which satisfies addr < va_end, NULL if none. */ static struct vmap_area * -find_vmap_area_exceed_addr(unsigned long addr, struct rb_root *root) +__find_vmap_area_exceed_addr(unsigned long addr, struct rb_root *root) { struct vmap_area *va = NULL; struct rb_node *n = root->rb_node; @@ -930,6 +930,41 @@ find_vmap_area_exceed_addr(unsigned long addr, struct rb_root *root) return va; } +/* + * Returns a node where a first VA, that satisfies addr < va_end, resides. + * If success, a node is locked. A user is responsible to unlock it when a + * VA is no longer needed to be accessed. + * + * Returns NULL if nothing found. 
+ */ +static struct vmap_node * +find_vmap_area_exceed_addr_lock(unsigned long addr, struct vmap_area **va) +{ + struct vmap_node *vn, *va_node = NULL; + struct vmap_area *va_lowest; + int i; + + for (i = 0; i < nr_vmap_nodes; i++) { + vn = &vmap_nodes[i]; + + spin_lock(&vn->busy.lock); + va_lowest = __find_vmap_area_exceed_addr(addr, &vn->busy.root); + if (va_lowest) { + if (!va_node || va_lowest->va_start < (*va)->va_start) { + if (va_node) + spin_unlock(&va_node->busy.lock); + + *va = va_lowest; + va_node = vn; + continue; + } + } + spin_unlock(&vn->busy.lock); + } + + return va_node; +} + static struct vmap_area *__find_vmap_area(unsigned long addr, struct rb_root *root) { struct rb_node *n = root->rb_node; @@ -4102,6 +4137,7 @@ long vread_iter(struct iov_iter *iter, const char *addr, size_t count) struct vm_struct *vm; char *vaddr; size_t n, size, flags, remains; + unsigned long next; addr = kasan_reset_tag(addr); @@ -4111,19 +4147,15 @@ long vread_iter(struct iov_iter *iter, const char *addr, size_t count) remains = count; - /* Hooked to node_0 so far. */ - vn = addr_to_node(0); - spin_lock(&vn->busy.lock); - - va = find_vmap_area_exceed_addr((unsigned long)addr, &vn->busy.root); - if (!va) + vn = find_vmap_area_exceed_addr_lock((unsigned long) addr, &va); + if (!vn) goto finished_zero; /* no intersects with alive vmap_area */ if ((unsigned long)addr + remains <= va->va_start) goto finished_zero; - list_for_each_entry_from(va, &vn->busy.head, list) { + do { size_t copied; if (remains == 0) @@ -4138,10 +4170,10 @@ long vread_iter(struct iov_iter *iter, const char *addr, size_t count) WARN_ON(flags == VMAP_BLOCK); if (!vm && !flags) - continue; + goto next_va; if (vm && (vm->flags & VM_UNINITIALIZED)) - continue; + goto next_va; /* Pair with smp_wmb() in clear_vm_uninitialized_flag() */ smp_rmb(); @@ -4150,7 +4182,7 @@ long vread_iter(struct iov_iter *iter, const char *addr, size_t count) size = vm ? get_vm_area_size(vm) : va_size(va); if (addr >= vaddr + size) - continue; + goto next_va; if (addr < vaddr) { size_t to_zero = min_t(size_t, vaddr - addr, remains); @@ -4179,15 +4211,22 @@ long vread_iter(struct iov_iter *iter, const char *addr, size_t count) if (copied != n) goto finished; - } + + next_va: + next = va->va_end; + spin_unlock(&vn->busy.lock); + } while ((vn = find_vmap_area_exceed_addr_lock(next, &va))); finished_zero: - spin_unlock(&vn->busy.lock); + if (vn) + spin_unlock(&vn->busy.lock); + /* zero-fill memory holes */ return count - remains + zero_iter(iter, remains); finished: /* Nothing remains, or We couldn't copy/zero everything. 
*/ - spin_unlock(&vn->busy.lock); + if (vn) + spin_unlock(&vn->busy.lock); return count - remains; } From patchwork Tue Jan 2 18:46:31 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Uladzislau Rezki X-Patchwork-Id: 13509280 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by smtp.lore.kernel.org (Postfix) with ESMTP id 10990C4707B for ; Tue, 2 Jan 2024 18:47:22 +0000 (UTC) Received: by kanga.kvack.org (Postfix) id 6B77F6B02D7; Tue, 2 Jan 2024 13:47:19 -0500 (EST) Received: by kanga.kvack.org (Postfix, from userid 40) id 63F556B02D8; Tue, 2 Jan 2024 13:47:19 -0500 (EST) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 4446A6B02D9; Tue, 2 Jan 2024 13:47:19 -0500 (EST) X-Delivered-To: linux-mm@kvack.org Received: from relay.hostedemail.com (smtprelay0015.hostedemail.com [216.40.44.15]) by kanga.kvack.org (Postfix) with ESMTP id 2F2FB6B02D7 for ; Tue, 2 Jan 2024 13:47:19 -0500 (EST) Received: from smtpin26.hostedemail.com (a10.router.float.18 [10.200.18.1]) by unirelay01.hostedemail.com (Postfix) with ESMTP id 0843B1C0A70 for ; Tue, 2 Jan 2024 18:47:19 +0000 (UTC) X-FDA: 81635253798.26.C616DA0 Received: from mail-lf1-f48.google.com (mail-lf1-f48.google.com [209.85.167.48]) by imf22.hostedemail.com (Postfix) with ESMTP id 21C5AC0017 for ; Tue, 2 Jan 2024 18:47:16 +0000 (UTC) Authentication-Results: imf22.hostedemail.com; dkim=pass header.d=gmail.com header.s=20230601 header.b=Zx0NPT3l; spf=pass (imf22.hostedemail.com: domain of urezki@gmail.com designates 209.85.167.48 as permitted sender) smtp.mailfrom=urezki@gmail.com; dmarc=pass (policy=none) header.from=gmail.com ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=hostedemail.com; s=arc-20220608; t=1704221237; h=from:from:sender:reply-to:subject:subject:date:date: message-id:message-id:to:to:cc:cc:mime-version:mime-version: content-type:content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references:dkim-signature; bh=cMaatDyVMtexg7cRa9eaEuAm/eowYR09uugeWui4AYQ=; b=V4m8JuMoseV099NC/l+4Yq9aUtEknpbmjK3YXCfEMCcjfO35VQ9TH+FZsPk8ARRbw30zk3 sBJ5UosNOL3ZJB4lbjI94zXQEF9VtoC0SFlICSEJkK5oOqpJDiIehtNcjjC1lQFF2tNGG6 9RwMEpCOiStiQc797Fyj4AIS4eQAvm0= ARC-Seal: i=1; s=arc-20220608; d=hostedemail.com; t=1704221237; a=rsa-sha256; cv=none; b=b4+fC4C2f/t+id1DahXMeDzW1xogx1RHGanzHG5XdMEyaz0j2PWcdWjlBt2Hoazx/M+Z84 7J3TcotZ0hQu0dHA2SPV0fc7tGIXzK4aQTL4ptPt6TM9pYWD/1UWNt2HtNQAM2iw/aoWux 0XEEeBv7MG2XmMeH6FdGdPAbUF1p7wY= ARC-Authentication-Results: i=1; imf22.hostedemail.com; dkim=pass header.d=gmail.com header.s=20230601 header.b=Zx0NPT3l; spf=pass (imf22.hostedemail.com: domain of urezki@gmail.com designates 209.85.167.48 as permitted sender) smtp.mailfrom=urezki@gmail.com; dmarc=pass (policy=none) header.from=gmail.com Received: by mail-lf1-f48.google.com with SMTP id 2adb3069b0e04-50e759ece35so7434349e87.3 for ; Tue, 02 Jan 2024 10:47:16 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20230601; t=1704221235; x=1704826035; darn=kvack.org; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=cMaatDyVMtexg7cRa9eaEuAm/eowYR09uugeWui4AYQ=; b=Zx0NPT3lmNpDOcQs3C+896kgwIJEm5oefJPMx+aiERQoMCBWAXa1GVOsfqOsLG/4kR 
dAdhkbzD22PPPkZtAK6qkn0D3Pwn9z5u6z0hW7BVkVV61s+tlPfny30tXxYKD7HYxgzJ KzoRMLN0O1C8Z2DutbdQ7KGOHKxyWyLcC7p4wloYgTzqmJN07WZRljxvYB34GTTQXSf8 PxTIX2UiFaYaXzLliYYGhpyZCprpw2zEHgk0GKx21jk6n5JWUcfAOuDZEEzaDYEwjlAE CFh3R+lnmGRcEGt94/2tGVxZyrGRt5L4fdrVpU4qrDavAgqWc1EZHSlGK/qHZAbgK99Z KLHQ== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1704221235; x=1704826035; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=cMaatDyVMtexg7cRa9eaEuAm/eowYR09uugeWui4AYQ=; b=tQrjOQp8uR2vMMzNFP8Euu75PX9m+baobhNLijQ/LLt9MMGVzdjNNPDpjfDGk2VZls QXscOobzztl/M04TzCa0Uh/8cBjbKM2m/8GaKlhwKjN0DxSJc03yi5Mfgg/Yyuwkd3RS sSgHP5ZbUrhSkOIgrdx6vHov+OSH9d8iSt0XLu9KACLamV2n84LjSpQhvFkG80FtMsFI KD/TsrH1zLVTm2os5ov6grE7cPNRkYnq8WBzIdSYe9g8+lKFikqZL4WMWaXFrDp4jS5t IYN2adAKrrIrvHkR2yugP7AthnegEDzkMXpDAALoYJeVbe8b+gvzJJ58yxWEHEJNynuo Cafw== X-Gm-Message-State: AOJu0YzJUF9fYLwZxGLhyvbhCFuGJQ9LUW4zyxBJ+wcIQ6nnBQGDMwor BJRKiFx1oD4lQkJCdQmGRuIAQJMOYN0JWQ== X-Google-Smtp-Source: AGHT+IHL4Wmy2w8EjWfxM2Ec6BVxhpJGK0/FEzXHtHxP7HAyn5/e0Ve12sc1nQgdeQE352aAG5a4ag== X-Received: by 2002:a05:6512:3d07:b0:50e:73ac:a179 with SMTP id d7-20020a0565123d0700b0050e73aca179mr7225341lfv.91.1704221235497; Tue, 02 Jan 2024 10:47:15 -0800 (PST) Received: from pc638.lan (host-185-121-47-193.sydskane.nu. [185.121.47.193]) by smtp.gmail.com with ESMTPSA id q1-20020ac246e1000000b0050e7be886d9sm2592656lfo.56.2024.01.02.10.47.14 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Tue, 02 Jan 2024 10:47:15 -0800 (PST) From: "Uladzislau Rezki (Sony)" To: linux-mm@kvack.org, Andrew Morton Cc: LKML , Baoquan He , Lorenzo Stoakes , Christoph Hellwig , Matthew Wilcox , "Liam R . Howlett" , Dave Chinner , "Paul E . 
McKenney" , Joel Fernandes , Uladzislau Rezki , Oleksiy Avramchenko Subject: [PATCH v3 09/11] mm: vmalloc: Support multiple nodes in vmallocinfo Date: Tue, 2 Jan 2024 19:46:31 +0100 Message-Id: <20240102184633.748113-10-urezki@gmail.com> X-Mailer: git-send-email 2.39.2 In-Reply-To: <20240102184633.748113-1-urezki@gmail.com> References: <20240102184633.748113-1-urezki@gmail.com> MIME-Version: 1.0 X-Rspamd-Queue-Id: 21C5AC0017 X-Rspam-User: X-Rspamd-Server: rspam11 X-Stat-Signature: coe9nfmtj8t8uieqh5pbyxk5bzn9hdt6 X-HE-Tag: 1704221236-539955 X-HE-Meta: U2FsdGVkX18nWX1R0pd8lm2WBfKqQk+ZlH9iQUwA++jVHrVqW+f76RimAWGkG4csdN8R+tBaK/MTNlisRH5AfRLVEg5WDAS0O4oo5nr8qdDGSekKitIVfWSfIK9eVN691zwIbbsAZPeLJe6vqha1lQw6wmk5KWQJDez1OZh0gcWGZiR/jTEwcAXL5nOgGzj1H9dz/gvW2iLh4cbPTc8HDxhckp8s84H/E4pN0eEagiW49g9Z40V/soaa1TmYynLVXrmJr8EGuauXgjoVUk7AoCKUUnrM/CzP+o6RUk9x2I4OnTHP13n1g9MZH+8z8PeDSBn7TLzT6U/dGyLaJxqc2KROS3liSNSMBZeHVA2451kL8klz/aX8jL6A/ugisCDEAsw3f8ou7Ysyh167bzfTzrA4BFlnvuI5YraG0w9nL/CWD82Kzz/XwfILrk0UBXqGZumbBVmifE1xXlw2OyTVnadhwv4JmDymFw5by+5ykRAd7C22AjK/GiJRgfK7YOFDx9tGQjKQvPEroNp1m29CttNw0HKNGIht0Oubx7DOl+fc4c3TI4UCKCFmcvchizfXkXO6m9crwuQIIEDLyZQU7SHzuIi+H5czOnNbKhuGZet1YRyMCtLOgT56Ht30O/22BnXBKGSmBskQtLT/WX1QOfgmVWG+Dy58tQ22B+BkIammtkKlSejXqbQE28BsfR/MploG+YaRpWvIyYJEMegDsV0EOOOnusskpDPgh+/OSiu3Dxs/dDl5yrRRpzdXW5NqQuvkpZXSfSYhsr0DRBdG6nwS5xpzdKFgOLW8azOj0KW/8ES0gpBU98hr423Su4W6wPXuExLuTX9xqz5UsQ+Sp6tuBEKMWCYLCrXN3HgMYh0KO6yiNPuEwpk7qh77WUEqNChXt2/mMp+DwEF10v1IoglHxnqhGSyI6VxQbCY7brc5G/DNqNHUeuUQSv3lk510QYN/UCzaXajN6s4sN8e 26Ave6WV mwmh5fyKQmF/USeFud02C6KmuGmFwYI+vSdW44rG7u0Xwe5rUPDm9Eu1HxOU50hg+1mcTghy56U5zwicROZxhSZrUsQMd/BV15M+Zpeq0XKWsu73XKoLr4qgE7w0DGrS49msxN2loZD7L3yDNWQlNZn6fIHp+DHd8RusqY8DJ0mttODwIKGPiLI1ONoeOMeKuEPpbTCBqXhat/C2L04jUoF9DAstKbj1V5zI64kfC2DVFOTamftShxoSH1+qRIr6gMjPZKj0AdWZ4Cil3m+ApwQOnZwoPg1NrF9BNpP7nsXXAqqbIo1T0XrNFA0K8icdXLw0PG/V1skdVmwN4jm96kJ/46eep8PgpZ7nh8twdodhso03D226/KPR1BdtRVPdx7FyqIqzky9JEiONCZrNEL4N3UhguPLnox29t+GLaSDiie6txQgXfqjHOH7TxuIUUfToO X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: List-Subscribe: List-Unsubscribe: Allocated areas are spread among nodes, it implies that the scanning has to be performed individually of each node in order to dump all existing VAs. 
Signed-off-by: Uladzislau Rezki (Sony)
---
 mm/vmalloc.c | 120 ++++++++++++++++++++-------------------------
 1 file changed, 47 insertions(+), 73 deletions(-)

diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index 594ed003d44d..0c671cb96151 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -4709,30 +4709,6 @@ bool vmalloc_dump_obj(void *object)
 #endif
 
 #ifdef CONFIG_PROC_FS
-static void *s_start(struct seq_file *m, loff_t *pos)
-{
-	struct vmap_node *vn = addr_to_node(0);
-
-	mutex_lock(&vmap_purge_lock);
-	spin_lock(&vn->busy.lock);
-
-	return seq_list_start(&vn->busy.head, *pos);
-}
-
-static void *s_next(struct seq_file *m, void *p, loff_t *pos)
-{
-	struct vmap_node *vn = addr_to_node(0);
-
-	return seq_list_next(p, &vn->busy.head, pos);
-}
-
-static void s_stop(struct seq_file *m, void *p)
-{
-	struct vmap_node *vn = addr_to_node(0);
-
-	spin_unlock(&vn->busy.lock);
-	mutex_unlock(&vmap_purge_lock);
-}
-
 static void show_numa_info(struct seq_file *m, struct vm_struct *v)
 {
 	if (IS_ENABLED(CONFIG_NUMA)) {
@@ -4776,84 +4752,82 @@ static void show_purge_info(struct seq_file *m)
 	}
 }
 
-static int s_show(struct seq_file *m, void *p)
+static int vmalloc_info_show(struct seq_file *m, void *p)
 {
 	struct vmap_node *vn;
 	struct vmap_area *va;
 	struct vm_struct *v;
+	int i;
 
-	vn = addr_to_node(0);
-	va = list_entry(p, struct vmap_area, list);
+	for (i = 0; i < nr_vmap_nodes; i++) {
+		vn = &vmap_nodes[i];
 
-	if (!va->vm) {
-		if (va->flags & VMAP_RAM)
-			seq_printf(m, "0x%pK-0x%pK %7ld vm_map_ram\n",
-				(void *)va->va_start, (void *)va->va_end,
-				va->va_end - va->va_start);
+		spin_lock(&vn->busy.lock);
+		list_for_each_entry(va, &vn->busy.head, list) {
+			if (!va->vm) {
+				if (va->flags & VMAP_RAM)
+					seq_printf(m, "0x%pK-0x%pK %7ld vm_map_ram\n",
+						(void *)va->va_start, (void *)va->va_end,
+						va->va_end - va->va_start);
 
-		goto final;
-	}
+				continue;
+			}
 
-	v = va->vm;
+			v = va->vm;
 
-	seq_printf(m, "0x%pK-0x%pK %7ld",
-		v->addr, v->addr + v->size, v->size);
+			seq_printf(m, "0x%pK-0x%pK %7ld",
+				v->addr, v->addr + v->size, v->size);
 
-	if (v->caller)
-		seq_printf(m, " %pS", v->caller);
+			if (v->caller)
+				seq_printf(m, " %pS", v->caller);
 
-	if (v->nr_pages)
-		seq_printf(m, " pages=%d", v->nr_pages);
+			if (v->nr_pages)
+				seq_printf(m, " pages=%d", v->nr_pages);
 
-	if (v->phys_addr)
-		seq_printf(m, " phys=%pa", &v->phys_addr);
+			if (v->phys_addr)
+				seq_printf(m, " phys=%pa", &v->phys_addr);
 
-	if (v->flags & VM_IOREMAP)
-		seq_puts(m, " ioremap");
+			if (v->flags & VM_IOREMAP)
+				seq_puts(m, " ioremap");
 
-	if (v->flags & VM_ALLOC)
-		seq_puts(m, " vmalloc");
+			if (v->flags & VM_ALLOC)
+				seq_puts(m, " vmalloc");
 
-	if (v->flags & VM_MAP)
-		seq_puts(m, " vmap");
+			if (v->flags & VM_MAP)
+				seq_puts(m, " vmap");
 
-	if (v->flags & VM_USERMAP)
-		seq_puts(m, " user");
+			if (v->flags & VM_USERMAP)
+				seq_puts(m, " user");
 
-	if (v->flags & VM_DMA_COHERENT)
-		seq_puts(m, " dma-coherent");
+			if (v->flags & VM_DMA_COHERENT)
+				seq_puts(m, " dma-coherent");
 
-	if (is_vmalloc_addr(v->pages))
-		seq_puts(m, " vpages");
+			if (is_vmalloc_addr(v->pages))
+				seq_puts(m, " vpages");
 
-	show_numa_info(m, v);
-	seq_putc(m, '\n');
+			show_numa_info(m, v);
+			seq_putc(m, '\n');
+		}
+		spin_unlock(&vn->busy.lock);
+	}
 
 	/*
 	 * As a final step, dump "unpurged" areas.
 	 */
-final:
-	if (list_is_last(&va->list, &vn->busy.head))
-		show_purge_info(m);
-
+	show_purge_info(m);
 	return 0;
 }
 
-static const struct seq_operations vmalloc_op = {
-	.start = s_start,
-	.next = s_next,
-	.stop = s_stop,
-	.show = s_show,
-};
-
 static int __init proc_vmalloc_init(void)
 {
+	void *priv_data = NULL;
+
 	if (IS_ENABLED(CONFIG_NUMA))
-		proc_create_seq_private("vmallocinfo", 0400, NULL,
-				&vmalloc_op,
-				nr_node_ids * sizeof(unsigned int), NULL);
-	else
-		proc_create_seq("vmallocinfo", 0400, NULL, &vmalloc_op);
+		priv_data = kmalloc(nr_node_ids * sizeof(unsigned int), GFP_KERNEL);
+
+	proc_create_single_data("vmallocinfo",
+		0400, NULL, vmalloc_info_show, priv_data);
 
 	return 0;
 }
 module_init(proc_vmalloc_init);
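For context, a minimal, self-contained sketch of the proc_create_single_data()
pattern the init path switches to: one show callback replaces the old
start/next/stop/show seq_operations. The "demo" names are made up for
illustration and are not part of the patch:

        #include <linux/proc_fs.h>
        #include <linux/seq_file.h>

        static int demo_show(struct seq_file *m, void *p)
        {
                /* Everything is emitted from this single callback. */
                seq_puts(m, "hello from a single-shot seq file\n");
                return 0;
        }

        static int __init demo_init(void)
        {
                /* 0400: root-readable, like vmallocinfo. */
                proc_create_single_data("demo_info", 0400, NULL, demo_show, NULL);
                return 0;
        }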
From patchwork Tue Jan 2 18:46:32 2024
From: "Uladzislau Rezki (Sony)"
To: linux-mm@kvack.org, Andrew Morton
Cc: LKML, Baoquan He, Lorenzo Stoakes, Christoph Hellwig, Matthew Wilcox,
 "Liam R. Howlett", Dave Chinner, "Paul E. McKenney", Joel Fernandes,
 Uladzislau Rezki, Oleksiy Avramchenko
McKenney" , Joel Fernandes , Uladzislau Rezki , Oleksiy Avramchenko Subject: [PATCH v3 10/11] mm: vmalloc: Set nr_nodes based on CPUs in a system Date: Tue, 2 Jan 2024 19:46:32 +0100 Message-Id: <20240102184633.748113-11-urezki@gmail.com> X-Mailer: git-send-email 2.39.2 In-Reply-To: <20240102184633.748113-1-urezki@gmail.com> References: <20240102184633.748113-1-urezki@gmail.com> MIME-Version: 1.0 X-Rspamd-Queue-Id: 4C29C1C0004 X-Rspam-User: X-Rspamd-Server: rspam11 X-Stat-Signature: ojhsgc6qy8s7s37po1bperfwp9ms4ghp X-HE-Tag: 1704221238-579956 X-HE-Meta: U2FsdGVkX191tBI7Sfox4ouh/3WPtJXnLyodGk+4qornTgjthe4opdevfoceyw/wzhf0ewTRbV76IsliMhNGvZ6IY9bhIR7WFabQ5kz16QcscjzefS2sLI4RfW3S7wcFVrELAXpV4hFRUnCPpGafTLeVeJQTdA93adE77Jgrr/OlA8RwYU5Fnb3uoCnng9X5rc+raHryqx42flcEFYL1ojLHFl32BFhP5gIylTvpceNleXt9qy7f/eTeMcYR2UxbJccdYpqVOSRXXtOUPKAdQ6cLp5j1G2M8jjJWFMiI0JySvV316mHsquWo4wK5IVzLUXVfOOn4arWxAGtUxs7K2KBSSwAru6rsO888IVovcmeGwHetZNUc7KEH51SEWj8BS6ulf2SX4K+kuzjUlU6EgnfoviPi25yoGRA3sMRMhOlaNdIS/o1GdJgY8g3yjjwlNarzrRbU8fAJ6BcHHxl7y5A4AdeibD6R114WQC/wtK0HUvSI6U5xy0Qx7/WsAyzASlzsUYk1wL78BR6V4TN1E6UfVcCAxr1WZMP/FUy/IVrBXD0cmfcgKIZdUcYerB/9UM96WvG8lZgF2RyBhiWYIEX5+06lohf/+782DxMptyD6ilbwEK6Zm2YtAS8g8rDEo/Db67r5TEbdp2v2OuUZCefTgXMKDTRagDzxMGc9+c3ImRJ1bxNvJqkW3LftvYd6CWXdLKvgyTzbJZr8J7C69dy0WPMrKBjw0hKE2JrobTDyOiQbDlGlrmygZzhKo5AyYNKhzIf3hdR6aMzyU5neFzulJPUp5Dbolh92XgoHrGwd/m/ntkwbZILpZvHW0bv0WKVKq2UWQdL6FTRBTDL1dWvF5/JFCQxUbWgwlHLDqL1v/CInrS3zgi5IcwHcS5vRIiWQQb1hJ3RSPZxzUIygOll0cTGT60I9zxQI5nc1n+Ot9324h2RsNPged+CCAAInvMqcglfnIjg/xhrLzio 3Z9R6SN1 mY3TPlJO5uOFBehsPGVCAgmbZADnPMy3qoZNgPb+5FMQ8uvmZeTMFoxYx2it1qBVs52UH+l/BMUWbGr3MHD24G1kdZH38AWwwaYKrFNC9fselTXBp81i2rjN236RHA9eWtqJH9jCi3PNx5Mz156Xi8HrPhgCua9VCXviNkcRT+1kKQ5OQIQXWElC0cbJYDI/z72MS9yzEqJUBRPCuDO3GZOGJhij0WDSKD14T7YdJEj1paF4IjI6RiK2JPMR1gUeFNvlXUwRoIcorPPZa0lYPJwZSTFTz+WVmKH2hIIAJ6yVY/WI6IH+OuW/dKlCOGxS08WR7u899a/BZNSzZUDebe/LKWdVmAaMUiH90kMnyayWNNDFPbch6d+qTfsvPI5tn5KNPEb9L0DyG8E7CzUDuydMRKvVztgu7Qpf0PGWpVVCR2Q153Mg9SkI/Rgbnkljwv0sMaB8V0YAI3z2vy0RgPF2tstxjffmdjTsz24CTY2DQtE0= X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: List-Subscribe: List-Unsubscribe: A number of nodes which are used in the alloc/free paths is set based on num_possible_cpus() in a system. Please note a high limit threshold though is fixed and corresponds to 128 nodes. For 32-bit or single core systems an access to a global vmap heap is not balanced. Such small systems do not suffer from lock contentions due to low number of CPUs. In such case the nr_nodes is equal to 1. 
Test on AMD Ryzen Threadripper 3970X 32-Core Processor:
sudo ./test_vmalloc.sh run_test_mask=7 nr_threads=64

94.41%  0.89%  [kernel]        [k] _raw_spin_lock
93.35% 93.07%  [kernel]        [k] native_queued_spin_lock_slowpath
76.13%  0.28%  [kernel]        [k] __vmalloc_node_range
72.96%  0.81%  [kernel]        [k] alloc_vmap_area
56.94%  0.00%  [kernel]        [k] __get_vm_area_node
41.95%  0.00%  [kernel]        [k] vmalloc
37.15%  0.01%  [test_vmalloc]  [k] full_fit_alloc_test
35.17%  0.00%  [kernel]        [k] ret_from_fork_asm
35.17%  0.00%  [kernel]        [k] ret_from_fork
35.17%  0.00%  [kernel]        [k] kthread
35.08%  0.00%  [test_vmalloc]  [k] test_func
34.45%  0.00%  [test_vmalloc]  [k] fix_size_alloc_test
28.09%  0.01%  [test_vmalloc]  [k] long_busy_list_alloc_test
23.53%  0.25%  [kernel]        [k] vfree.part.0
21.72%  0.00%  [kernel]        [k] remove_vm_area
20.08%  0.21%  [kernel]        [k] find_unlink_vmap_area
 2.34%  0.61%  [kernel]        [k] free_vmap_area_noflush

vs

82.32%  0.22%  [test_vmalloc]  [k] long_busy_list_alloc_test
63.36%  0.02%  [kernel]        [k] vmalloc
63.34%  2.64%  [kernel]        [k] __vmalloc_node_range
30.42%  4.46%  [kernel]        [k] vfree.part.0
28.98%  2.51%  [kernel]        [k] __alloc_pages_bulk
27.28%  0.19%  [kernel]        [k] __get_vm_area_node
26.13%  1.50%  [kernel]        [k] alloc_vmap_area
21.72% 21.67%  [kernel]        [k] clear_page_rep
19.51%  2.43%  [kernel]        [k] _raw_spin_lock
16.61% 16.51%  [kernel]        [k] native_queued_spin_lock_slowpath
13.40%  2.07%  [kernel]        [k] free_unref_page
10.62%  0.01%  [kernel]        [k] remove_vm_area
 9.02%  8.73%  [kernel]        [k] insert_vmap_area
 8.94%  0.00%  [kernel]        [k] ret_from_fork_asm
 8.94%  0.00%  [kernel]        [k] ret_from_fork
 8.94%  0.00%  [kernel]        [k] kthread
 8.29%  0.00%  [test_vmalloc]  [k] test_func
 7.81%  0.05%  [test_vmalloc]  [k] full_fit_alloc_test
 5.30%  4.73%  [kernel]        [k] purge_vmap_node
 4.47%  2.65%  [kernel]        [k] free_vmap_area_noflush

This confirms that native_queued_spin_lock_slowpath drops from 93.07% to
16.51%. The throughput is ~12x higher:

urezki@pc638:~$ time sudo ./test_vmalloc.sh run_test_mask=7 nr_threads=64
Run the test with following parameters: run_test_mask=7 nr_threads=64
Done.
Check the kernel ring buffer to see the summary.

real    10m51.271s
user    0m0.013s
sys     0m0.187s
urezki@pc638:~$

urezki@pc638:~$ time sudo ./test_vmalloc.sh run_test_mask=7 nr_threads=64
Run the test with following parameters: run_test_mask=7 nr_threads=64
Done.
Check the kernel ring buffer to see the summary.

real    0m51.301s
user    0m0.015s
sys     0m0.040s
urezki@pc638:~$
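For the record, the ~12x figure follows directly from the two wall-clock
times above: 10m51.271s is roughly 651.3 seconds, and 651.3 / 51.3 is
approximately 12.7.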
Signed-off-by: Uladzislau Rezki (Sony)
---
 mm/vmalloc.c | 29 +++++++++++++++++++++++------
 1 file changed, 23 insertions(+), 6 deletions(-)

diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index 0c671cb96151..ef534c76daef 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -4879,10 +4879,27 @@ static void vmap_init_free_space(void)
 static void vmap_init_nodes(void)
 {
 	struct vmap_node *vn;
-	int i, j;
+	int i, n;
+
+#if BITS_PER_LONG == 64
+	/* A high threshold of max nodes is fixed and bound to 128. */
+	n = clamp_t(unsigned int, num_possible_cpus(), 1, 128);
+
+	if (n > 1) {
+		vn = kmalloc_array(n, sizeof(*vn), GFP_NOWAIT | __GFP_NOWARN);
+		if (vn) {
+			/* Node partition is 16 pages. */
+			vmap_zone_size = (1 << 4) * PAGE_SIZE;
+			nr_vmap_nodes = n;
+			vmap_nodes = vn;
+		} else {
+			pr_err("Failed to allocate an array. Disable a node layer\n");
+		}
+	}
+#endif
 
-	for (i = 0; i < nr_vmap_nodes; i++) {
-		vn = &vmap_nodes[i];
+	for (n = 0; n < nr_vmap_nodes; n++) {
+		vn = &vmap_nodes[n];
 		vn->busy.root = RB_ROOT;
 		INIT_LIST_HEAD(&vn->busy.head);
 		spin_lock_init(&vn->busy.lock);
@@ -4891,9 +4908,9 @@ static void vmap_init_nodes(void)
 		INIT_LIST_HEAD(&vn->lazy.head);
 		spin_lock_init(&vn->lazy.lock);
 
-		for (j = 0; j < MAX_VA_SIZE_PAGES; j++) {
-			INIT_LIST_HEAD(&vn->pool[j].head);
-			WRITE_ONCE(vn->pool[j].len, 0);
+		for (i = 0; i < MAX_VA_SIZE_PAGES; i++) {
+			INIT_LIST_HEAD(&vn->pool[i].head);
+			WRITE_ONCE(vn->pool[i].len, 0);
 		}
 
 		spin_lock_init(&vn->pool_lock);
From patchwork Tue Jan 2 18:46:33 2024
From: "Uladzislau Rezki (Sony)"
To: linux-mm@kvack.org, Andrew Morton
Cc: LKML, Baoquan He, Lorenzo Stoakes, Christoph Hellwig, Matthew Wilcox,
 "Liam R. Howlett", Dave Chinner, "Paul E. McKenney", Joel Fernandes,
 Uladzislau Rezki, Oleksiy Avramchenko
McKenney" , Joel Fernandes , Uladzislau Rezki , Oleksiy Avramchenko Subject: [PATCH v3 11/11] mm: vmalloc: Add a shrinker to drain vmap pools Date: Tue, 2 Jan 2024 19:46:33 +0100 Message-Id: <20240102184633.748113-12-urezki@gmail.com> X-Mailer: git-send-email 2.39.2 In-Reply-To: <20240102184633.748113-1-urezki@gmail.com> References: <20240102184633.748113-1-urezki@gmail.com> MIME-Version: 1.0 X-Rspamd-Queue-Id: 5E5D2140006 X-Rspam-User: X-Rspamd-Server: rspam04 X-Stat-Signature: xtea5jgzgjzmombx346erhagjmgdi8rg X-HE-Tag: 1704221239-855672 X-HE-Meta: U2FsdGVkX1/kY7cM2soO+A1nY1oL4RvRf9sd575b7cb4QzsITFKXo6RzdfnDRYh1mo0zRgWjkTymURdW66IKc5vj9P9pi1usuVNF5t/3+gHzt4n9wVbHgFOY0N6GaO1ReHadZw7owgn6pLu9Qlt4DebiVyAnzCOhCNzrvPSgtkdQsPoN7rYbxqtCAJpI9o+jph0FdMF/pgzBCrJ14EF3RimUHfRqdFUnO4OL0ZEBoTwc+k5vnBB3As2oide4lAQxQngyblfWOnEhFm/LXVSeHvMMYBfIWtabQFpiUWmx/W6sY/sPsr/Hqd114ryfsGG98tXt0v3Vmv6yCxwVDV6Z8FPVB5GIckcDj0pH3IW/sO48R3COCwu53UdZtdtnZpjeBSZ37vWVNu/WAfVybgJwG35nD43M9uHRnPVlQd1peeku6eTgnU5zYk/FZIzvrHURnf+lhCI0DsyssjufZtWl6TUDjmgw483cusYsWM2/VJn/xSi1ViNtIVCAxdsbpYzwfW1AM+GoeWnSEdAi23UmtzzCeWHJvHQXA8ZixU3OW6CYfsLYQ0HWVjTQ+zcmcRdPmFZuIswQ8r2+RiJPwawqg9edm8BMcnhyEZulCUk8POYwkX3YSB0tndIKyBmvTCk8GTDpZVWG22oJ57vjGYeg3jv0xAkeCbxVkz89IQfrXnhJs4wQ/oQyH2HniB3ZjtSX8rpJsKJK8VVKD+DVy+zpYczWaBtnNCCjvXbaBV13SAMvgXFqOba5NzVOIPoM6xPLYfYrrg/ZBkFCz7MqfPzpRnlKUVKa6oQrZ43j5sLkJiQikB8t1sZKvuJZHny3U5u9C/EyaeLbPCyRUp+wJMQ01BU5ISybQ0/eFeqVCxSlb1RNMDk7XciswxtWLMA5y54zJkYvbi3NZTMV74lYE9pSP0wxmbe7LeH/kldUDVEOzXullV3+Isy6k9YwnSqaTR0lsriTrQBkPxU4fa0D8f9 j2aN6Qas gyqQkUOYwNQ/l7esorrVpHUOiTDK4FEKNX06nGmoztWAtiePwIXCOMAXhy7jlQb4qHRYtIKq/6KRWn8QNGIvv3vBHYraAyhjMkpR0eytptFhDFM4eXvwR5kB3XG1Zffy3MHmTz2DdqdJOntDDBYsiFAoP0QXJfqNHolfQ3QFzsR1Y6QIBPpDzyYmW5+4rnU1yB+l7NxjN4ZkM8BS0TQoQLsMWQUBaKYrEFUFjJNV03ojIA2oFhWVfK5sAHkU2z/LLGu7tx0MGHyppCtVfbJ7mU1cP+D7Jyej2pHfvDDSaIgdR+bA6qpFzToLw4uWrRoFMahtEoh603eFl+7P1lsqm8GHetr38MNOBEG6n+ZxBVxEyqMZLcBu6pjGhLVtH6++h6yI1uxi5fg2fPJBrCuPorJ27HrzQQ7++LTZuv1Atsx1M9MGpaWRrqs1WDwECHr6/qs6t X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: List-Subscribe: List-Unsubscribe: The added shrinker is used to return back current cached VAs into a global vmap space, when a system enters into a low memory mode. Signed-off-by: Uladzislau Rezki (Sony) --- mm/vmalloc.c | 39 +++++++++++++++++++++++++++++++++++++++ 1 file changed, 39 insertions(+) diff --git a/mm/vmalloc.c b/mm/vmalloc.c index ef534c76daef..e30dabf68263 100644 --- a/mm/vmalloc.c +++ b/mm/vmalloc.c @@ -4917,8 +4917,37 @@ static void vmap_init_nodes(void) } } +static unsigned long +vmap_node_shrink_count(struct shrinker *shrink, struct shrink_control *sc) +{ + unsigned long count; + struct vmap_node *vn; + int i, j; + + for (count = 0, i = 0; i < nr_vmap_nodes; i++) { + vn = &vmap_nodes[i]; + + for (j = 0; j < MAX_VA_SIZE_PAGES; j++) + count += READ_ONCE(vn->pool[j].len); + } + + return count ? 
Signed-off-by: Uladzislau Rezki (Sony)
---
 mm/vmalloc.c | 39 +++++++++++++++++++++++++++++++++++++++
 1 file changed, 39 insertions(+)

diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index ef534c76daef..e30dabf68263 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -4917,8 +4917,37 @@ static void vmap_init_nodes(void)
 	}
 }
 
+static unsigned long
+vmap_node_shrink_count(struct shrinker *shrink, struct shrink_control *sc)
+{
+	unsigned long count;
+	struct vmap_node *vn;
+	int i, j;
+
+	for (count = 0, i = 0; i < nr_vmap_nodes; i++) {
+		vn = &vmap_nodes[i];
+
+		for (j = 0; j < MAX_VA_SIZE_PAGES; j++)
+			count += READ_ONCE(vn->pool[j].len);
+	}
+
+	return count ? count : SHRINK_EMPTY;
+}
+
+static unsigned long
+vmap_node_shrink_scan(struct shrinker *shrink, struct shrink_control *sc)
+{
+	int i;
+
+	for (i = 0; i < nr_vmap_nodes; i++)
+		decay_va_pool_node(&vmap_nodes[i], true);
+
+	return SHRINK_STOP;
+}
+
 void __init vmalloc_init(void)
 {
+	struct shrinker *vmap_node_shrinker;
 	struct vmap_area *va;
 	struct vmap_node *vn;
 	struct vm_struct *tmp;
@@ -4966,4 +4995,14 @@ void __init vmalloc_init(void)
 	 */
 	vmap_init_free_space();
 	vmap_initialized = true;
+
+	vmap_node_shrinker = shrinker_alloc(0, "vmap-node");
+	if (!vmap_node_shrinker) {
+		pr_err("Failed to allocate vmap-node shrinker!\n");
+		return;
+	}
+
+	vmap_node_shrinker->count_objects = vmap_node_shrink_count;
+	vmap_node_shrinker->scan_objects = vmap_node_shrink_scan;
+	shrinker_register(vmap_node_shrinker);
 }