From patchwork Sat Jan 21 07:10:45 2023
X-Patchwork-Submitter: Christoph Hellwig
X-Patchwork-Id: 13111010
From: Christoph Hellwig
To: Andrew Morton, Uladzislau Rezki
Cc: Andrey Ryabinin, Alexander Potapenko, Andrey Konovalov, Dmitry Vyukov,
	Vincenzo Frascino, kasan-dev@googlegroups.com, linux-mm@kvack.org
Subject: [PATCH 04/10] mm: move vmalloc_init and free_work down in vmalloc.c
Date: Sat, 21 Jan 2023 08:10:45 +0100
Message-Id: <20230121071051.1143058-5-hch@lst.de>
In-Reply-To: <20230121071051.1143058-1-hch@lst.de>
References: <20230121071051.1143058-1-hch@lst.de>

Move these two functions around a bit to avoid forward declarations.
Signed-off-by: Christoph Hellwig
Reviewed-by: Uladzislau Rezki (Sony)
Reviewed-by: David Hildenbrand
---
 mm/vmalloc.c | 105 +++++++++++++++++++++++++--------------------------
 1 file changed, 52 insertions(+), 53 deletions(-)

diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index fafb6227f4428f..daeb28b54663d5 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -89,17 +89,6 @@ struct vfree_deferred {
 };
 static DEFINE_PER_CPU(struct vfree_deferred, vfree_deferred);
 
-static void __vunmap(const void *, int);
-
-static void free_work(struct work_struct *w)
-{
-	struct vfree_deferred *p = container_of(w, struct vfree_deferred, wq);
-	struct llist_node *t, *llnode;
-
-	llist_for_each_safe(llnode, t, llist_del_all(&p->list))
-		__vunmap((void *)llnode, 1);
-}
-
 /*** Page table manipulation functions ***/
 static int vmap_pte_range(pmd_t *pmd, unsigned long addr, unsigned long end,
 			phys_addr_t phys_addr, pgprot_t prot,
@@ -2449,48 +2438,6 @@ static void vmap_init_free_space(void)
 	}
 }
 
-void __init vmalloc_init(void)
-{
-	struct vmap_area *va;
-	struct vm_struct *tmp;
-	int i;
-
-	/*
-	 * Create the cache for vmap_area objects.
-	 */
-	vmap_area_cachep = KMEM_CACHE(vmap_area, SLAB_PANIC);
-
-	for_each_possible_cpu(i) {
-		struct vmap_block_queue *vbq;
-		struct vfree_deferred *p;
-
-		vbq = &per_cpu(vmap_block_queue, i);
-		spin_lock_init(&vbq->lock);
-		INIT_LIST_HEAD(&vbq->free);
-		p = &per_cpu(vfree_deferred, i);
-		init_llist_head(&p->list);
-		INIT_WORK(&p->wq, free_work);
-	}
-
-	/* Import existing vmlist entries. */
-	for (tmp = vmlist; tmp; tmp = tmp->next) {
-		va = kmem_cache_zalloc(vmap_area_cachep, GFP_NOWAIT);
-		if (WARN_ON_ONCE(!va))
-			continue;
-
-		va->va_start = (unsigned long)tmp->addr;
-		va->va_end = va->va_start + tmp->size;
-		va->vm = tmp;
-		insert_vmap_area(va, &vmap_area_root, &vmap_area_list);
-	}
-
-	/*
-	 * Now we can initialize a free vmap space.
-	 */
-	vmap_init_free_space();
-	vmap_initialized = true;
-}
-
 static inline void setup_vmalloc_vm_locked(struct vm_struct *vm,
 	struct vmap_area *va, unsigned long flags, const void *caller)
 {
@@ -2769,6 +2716,15 @@ static void __vunmap(const void *addr, int deallocate_pages)
 	kfree(area);
 }
 
+static void delayed_vfree_work(struct work_struct *w)
+{
+	struct vfree_deferred *p = container_of(w, struct vfree_deferred, wq);
+	struct llist_node *t, *llnode;
+
+	llist_for_each_safe(llnode, t, llist_del_all(&p->list))
+		__vunmap((void *)llnode, 1);
+}
+
 /**
  * vfree_atomic - release memory allocated by vmalloc()
  * @addr:	  memory base address
@@ -4315,3 +4271,46 @@ static int __init proc_vmalloc_init(void)
 module_init(proc_vmalloc_init);
 
 #endif
+
+void __init vmalloc_init(void)
+{
+	struct vmap_area *va;
+	struct vm_struct *tmp;
+	int i;
+
+	/*
+	 * Create the cache for vmap_area objects.
+	 */
+	vmap_area_cachep = KMEM_CACHE(vmap_area, SLAB_PANIC);
+
+	for_each_possible_cpu(i) {
+		struct vmap_block_queue *vbq;
+		struct vfree_deferred *p;
+
+		vbq = &per_cpu(vmap_block_queue, i);
+		spin_lock_init(&vbq->lock);
+		INIT_LIST_HEAD(&vbq->free);
+		p = &per_cpu(vfree_deferred, i);
+		init_llist_head(&p->list);
+		INIT_WORK(&p->wq, delayed_vfree_work);
+	}
+
+	/* Import existing vmlist entries. */
+	for (tmp = vmlist; tmp; tmp = tmp->next) {
+		va = kmem_cache_zalloc(vmap_area_cachep, GFP_NOWAIT);
+		if (WARN_ON_ONCE(!va))
+			continue;
+
+		va->va_start = (unsigned long)tmp->addr;
+		va->va_end = va->va_start + tmp->size;
+		va->vm = tmp;
+		insert_vmap_area(va, &vmap_area_root, &vmap_area_list);
+	}
+
+	/*
+	 * Now we can initialize a free vmap space.
+	 */
+	vmap_init_free_space();
+	vmap_initialized = true;
+}
+
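As background for the one-line commit message: C requires a function to be
declared before it is used, so keeping free_work() above __vunmap() forced the
`static void __vunmap(const void *, int);` forward declaration that this patch
deletes. A minimal user-space sketch of the same ordering pattern, using
hypothetical helper()/worker() names rather than anything from vmalloc.c:

/* forward_decl_demo.c - hypothetical names, not taken from mm/vmalloc.c */
#include <stdio.h>

/* The callee is defined first ... */
static void helper(const void *addr, int deallocate)
{
	printf("free %p (deallocate=%d)\n", (void *)addr, deallocate);
}

/* ... so the caller needs no "static void helper(const void *, int);" line. */
static void worker(const void *addr)
{
	helper(addr, 1);
}

int main(void)
{
	int obj;

	worker(&obj);
	return 0;
}

After the move, the only change besides dropping the forward declaration is
the rename of free_work() to delayed_vfree_work(), as visible in the hunks
above.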