From patchwork Mon May 29 18:00:23 2023
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-Patchwork-Submitter: Björn Töpel
X-Patchwork-Id: 13258853
From: Björn Töpel
To: Paul Walmsley, Palmer Dabbelt, Albert Ou, linux-riscv@lists.infradead.org
Cc: Björn Töpel, linux-kernel@vger.kernel.org, linux-mm@kvack.org, linux@rivosinc.com, Alexandre Ghiti, Joerg Roedel
Subject: [PATCH] riscv: mm: Pre-allocate PGD entries for vmalloc/modules area
Date: Mon, 29 May 2023 20:00:23 +0200
Message-Id: <20230529180023.289904-1-bjorn@kernel.org>
X-Mailer: git-send-email 2.39.2
From: Björn Töpel

The RISC-V port requires that kernel PGD entries be synchronized
between MMs. This is done via the vmalloc_fault() function, which
simply copies the PGD entries from init_mm to the faulting MM.
Historically, faulting in PGD entries has been a source of both bugs
[1] and poor performance.

One way to get rid of vmalloc faults is to pre-allocate the PGD
entries. Pre-allocating the entries potentially wastes 64 * 4K (65 on
SV39). The pre-allocation function is pulled from Jörg Rödel's x86
work, with the addition of 3-level page tables (PMD allocations). The
pmd_alloc() function needs the ptlock cache to be initialized (when
split page-table locks are enabled), so the pre-allocation is done in
a RISC-V-specific pgtable_cache_init() implementation.

Pre-allocate the kernel PGD entries for the vmalloc/modules area, but
only for 64-bit platforms.
Link: https://lore.kernel.org/lkml/20200508144043.13893-1-joro@8bytes.org/ # [1]
Signed-off-by: Björn Töpel
Reviewed-by: Palmer Dabbelt
---
 arch/riscv/mm/fault.c | 20 +++------------
 arch/riscv/mm/init.c  | 58 +++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 62 insertions(+), 16 deletions(-)

base-commit: ac9a78681b921877518763ba0e89202254349d1b

diff --git a/arch/riscv/mm/fault.c b/arch/riscv/mm/fault.c
index 8685f85a7474..6b0b5e517e12 100644
--- a/arch/riscv/mm/fault.c
+++ b/arch/riscv/mm/fault.c
@@ -230,32 +230,20 @@ void handle_page_fault(struct pt_regs *regs)
 		return;
 
 	/*
-	 * Fault-in kernel-space virtual memory on-demand.
-	 * The 'reference' page table is init_mm.pgd.
+	 * Fault-in kernel-space virtual memory on-demand, for 32-bit
+	 * architectures. The 'reference' page table is init_mm.pgd.
 	 *
 	 * NOTE! We MUST NOT take any locks for this case. We may
 	 * be in an interrupt or a critical region, and should
 	 * only copy the information from the master page table,
 	 * nothing more.
 	 */
-	if (unlikely((addr >= VMALLOC_START) && (addr < VMALLOC_END))) {
+	if (!IS_ENABLED(CONFIG_64BIT) &&
+	    unlikely(addr >= VMALLOC_START && addr < VMALLOC_END)) {
 		vmalloc_fault(regs, code, addr);
 		return;
 	}
 
-#ifdef CONFIG_64BIT
-	/*
-	 * Modules in 64bit kernels lie in their own virtual region which is not
-	 * in the vmalloc region, but dealing with page faults in this region
-	 * or the vmalloc region amounts to doing the same thing: checking that
-	 * the mapping exists in init_mm.pgd and updating user page table, so
-	 * just use vmalloc_fault.
-	 */
-	if (unlikely(addr >= MODULES_VADDR && addr < MODULES_END)) {
-		vmalloc_fault(regs, code, addr);
-		return;
-	}
-#endif
 	/* Enable interrupts if they were enabled in the parent context. */
 	if (!regs_irqs_disabled(regs))
 		local_irq_enable();

diff --git a/arch/riscv/mm/init.c b/arch/riscv/mm/init.c
index 747e5b1ef02d..38bd4dd95276 100644
--- a/arch/riscv/mm/init.c
+++ b/arch/riscv/mm/init.c
@@ -1363,3 +1363,61 @@ int __meminit vmemmap_populate(unsigned long start, unsigned long end, int node,
 	return vmemmap_populate_basepages(start, end, node, NULL);
 }
 #endif
+
+#ifdef CONFIG_64BIT
+/*
+ * Pre-allocates page-table pages for a specific area in the kernel
+ * page-table. Only the level which needs to be synchronized between
+ * all page-tables is allocated because the synchronization can be
+ * expensive.
+ */
+static void __init preallocate_pgd_pages_range(unsigned long start, unsigned long end,
+					       const char *area)
+{
+	unsigned long addr;
+	const char *lvl;
+
+	for (addr = start; addr < end && addr >= start; addr = ALIGN(addr + 1, PGDIR_SIZE)) {
+		pgd_t *pgd = pgd_offset_k(addr);
+		p4d_t *p4d;
+		pud_t *pud;
+		pmd_t *pmd;
+
+		lvl = "p4d";
+		p4d = p4d_alloc(&init_mm, pgd, addr);
+		if (!p4d)
+			goto failed;
+
+		if (pgtable_l5_enabled)
+			continue;
+
+		lvl = "pud";
+		pud = pud_alloc(&init_mm, p4d, addr);
+		if (!pud)
+			goto failed;
+
+		if (pgtable_l4_enabled)
+			continue;
+
+		lvl = "pmd";
+		pmd = pmd_alloc(&init_mm, pud, addr);
+		if (!pmd)
+			goto failed;
+	}
+	return;
+
+failed:
+	/*
+	 * The pages have to be there now or they will be missing in
+	 * process page-tables later.
+	 */
+	panic("Failed to pre-allocate %s pages for %s area\n", lvl, area);
+}
+
+void __init pgtable_cache_init(void)
+{
+	preallocate_pgd_pages_range(VMALLOC_START, VMALLOC_END, "vmalloc");
+	if (IS_ENABLED(CONFIG_MODULES))
+		preallocate_pgd_pages_range(MODULES_VADDR, MODULES_END, "bpf/modules");
+}
+#endif