From patchwork Fri May 12 14:57:34 2023
From: Björn Töpel <bjorn@kernel.org>
To: Paul Walmsley, Palmer Dabbelt, Albert Ou, linux-riscv@lists.infradead.org
Cc: Björn Töpel, linux-kernel@vger.kernel.org, linux-mm@kvack.org, David Hildenbrand, Oscar Salvador, virtualization@lists.linux-foundation.org, linux@rivosinc.com, Alexandre Ghiti
Subject: [PATCH 4/7] riscv: mm: Add memory hot add/remove support
Date: Fri, 12 May 2023 16:57:34 +0200
Message-Id: <20230512145737.985671-5-bjorn@kernel.org>
In-Reply-To: <20230512145737.985671-1-bjorn@kernel.org>
References: <20230512145737.985671-1-bjorn@kernel.org>
From: Björn Töpel

From an arch perspective, a couple of callbacks need to be implemented
to support hotplugging:

arch_add_memory()
This callback is responsible for updating the linear/direct map, and
for calling into the memory hotplugging generic code via
__add_pages().

arch_remove_memory()
In this callback the linear/direct map is torn down.

vmemmap_free()
The function tears down the vmemmap mappings (if
CONFIG_SPARSEMEM_VMEMMAP is in use), and also deallocates the backing
vmemmap pages. Note that for persistent memory, an alternative
allocator for the backing pages can be used -- the vmem_altmap. This
means that when the backing pages are cleared, extra care is needed so
that the correct deallocation method is used.

Note that RISC-V populates the vmemmap using
vmemmap_populate_basepages(), so currently no hugepages are used for
the backing store.

The page table unmap/teardown functions are heavily based on (copied
from!) the x86 tree. The same remove_pgd_mapping() is used in both
vmemmap_free() and arch_remove_memory(), but in the latter function
the backing pages are not removed.
Signed-off-by: Björn Töpel
---
 arch/riscv/mm/init.c | 233 +++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 233 insertions(+)

diff --git a/arch/riscv/mm/init.c b/arch/riscv/mm/init.c
index aea8ccb3f4ae..a468708d1e1c 100644
--- a/arch/riscv/mm/init.c
+++ b/arch/riscv/mm/init.c
@@ -1444,3 +1444,236 @@ int __meminit vmemmap_populate(unsigned long start, unsigned long end, int node,
 	return vmemmap_populate_basepages(start, end, node, NULL);
 }
 #endif
+
+#ifdef CONFIG_MEMORY_HOTPLUG
+static void __meminit free_pte_table(pte_t *pte_start, pmd_t *pmd)
+{
+	pte_t *pte;
+	int i;
+
+	for (i = 0; i < PTRS_PER_PTE; i++) {
+		pte = pte_start + i;
+		if (!pte_none(*pte))
+			return;
+	}
+
+	free_pages((unsigned long)page_address(pmd_page(*pmd)), 0);
+	pmd_clear(pmd);
+}
+
+static void __meminit free_pmd_table(pmd_t *pmd_start, pud_t *pud)
+{
+	pmd_t *pmd;
+	int i;
+
+	for (i = 0; i < PTRS_PER_PMD; i++) {
+		pmd = pmd_start + i;
+		if (!pmd_none(*pmd))
+			return;
+	}
+
+	free_pages((unsigned long)page_address(pud_page(*pud)), 0);
+	pud_clear(pud);
+}
+
+static void __meminit free_pud_table(pud_t *pud_start, p4d_t *p4d)
+{
+	pud_t *pud;
+	int i;
+
+	for (i = 0; i < PTRS_PER_PUD; i++) {
+		pud = pud_start + i;
+		if (!pud_none(*pud))
+			return;
+	}
+
+	free_pages((unsigned long)page_address(p4d_page(*p4d)), 0);
+	p4d_clear(p4d);
+}
+
+static void __meminit free_vmemmap_storage(struct page *page, size_t size,
+					   struct vmem_altmap *altmap)
+{
+	if (altmap)
+		vmem_altmap_free(altmap, size >> PAGE_SHIFT);
+	else
+		free_pages((unsigned long)page_address(page), get_order(size));
+}
+
+static void __meminit remove_pte_mapping(pte_t *pte_base, unsigned long addr, unsigned long end,
+					 bool is_vmemmap, struct vmem_altmap *altmap)
+{
+	unsigned long next;
+	pte_t *ptep, pte;
+
+	for (; addr < end; addr = next) {
+		next = (addr + PAGE_SIZE) & PAGE_MASK;
+		if (next > end)
+			next = end;
+
+		ptep = pte_base + pte_index(addr);
+		pte = READ_ONCE(*ptep);
+
+		if (!pte_present(*ptep))
+			continue;
+
+		pte_clear(&init_mm, addr, ptep);
+		if (is_vmemmap)
+			free_vmemmap_storage(pte_page(pte), PAGE_SIZE, altmap);
+	}
+}
+
+static void __meminit remove_pmd_mapping(pmd_t *pmd_base, unsigned long addr, unsigned long end,
+					 bool is_vmemmap, struct vmem_altmap *altmap)
+{
+	unsigned long next;
+	pte_t *pte_base;
+	pmd_t *pmdp, pmd;
+
+	for (; addr < end; addr = next) {
+		next = pmd_addr_end(addr, end);
+		pmdp = pmd_base + pmd_index(addr);
+		pmd = READ_ONCE(*pmdp);
+
+		if (!pmd_present(pmd))
+			continue;
+
+		if (pmd_leaf(pmd)) {
+			pmd_clear(pmdp);
+			if (is_vmemmap)
+				free_vmemmap_storage(pmd_page(pmd), PMD_SIZE, altmap);
+			continue;
+		}
+
+		pte_base = (pte_t *)pmd_page_vaddr(*pmdp);
+		remove_pte_mapping(pte_base, addr, next, is_vmemmap, altmap);
+		free_pte_table(pte_base, pmdp);
+	}
+}
+
+static void __meminit remove_pud_mapping(pud_t *pud_base, unsigned long addr, unsigned long end,
+					 bool is_vmemmap, struct vmem_altmap *altmap)
+{
+	unsigned long next;
+	pud_t *pudp, pud;
+	pmd_t *pmd_base;
+
+	for (; addr < end; addr = next) {
+		next = pud_addr_end(addr, end);
+		pudp = pud_base + pud_index(addr);
+		pud = READ_ONCE(*pudp);
+
+		if (!pud_present(pud))
+			continue;
+
+		if (pud_leaf(pud)) {
+			if (pgtable_l4_enabled) {
+				pud_clear(pudp);
+				if (is_vmemmap)
+					free_vmemmap_storage(pud_page(pud), PUD_SIZE, altmap);
+			}
+			continue;
+		}
+
+		pmd_base = pmd_offset(pudp, 0);
+		remove_pmd_mapping(pmd_base, addr, next, is_vmemmap, altmap);
+
+		if (pgtable_l4_enabled)
+			free_pmd_table(pmd_base, pudp);
+	}
+}
+
+static void __meminit remove_p4d_mapping(p4d_t *p4d_base, unsigned long addr, unsigned long end,
+					 bool is_vmemmap, struct vmem_altmap *altmap)
+{
+	unsigned long next;
+	p4d_t *p4dp, p4d;
+	pud_t *pud_base;
+
+	for (; addr < end; addr = next) {
+		next = p4d_addr_end(addr, end);
+		p4dp = p4d_base + p4d_index(addr);
+		p4d = READ_ONCE(*p4dp);
+
+		if (!p4d_present(p4d))
+			continue;
+
+		if (p4d_leaf(p4d)) {
+			if (pgtable_l5_enabled) {
+				p4d_clear(p4dp);
+				if (is_vmemmap)
+					free_vmemmap_storage(p4d_page(p4d), P4D_SIZE, altmap);
+			}
+			continue;
+		}
+
+		pud_base = pud_offset(p4dp, 0);
+		remove_pud_mapping(pud_base, addr, next, is_vmemmap, altmap);
+
+		if (pgtable_l5_enabled)
+			free_pud_table(pud_base, p4dp);
+	}
+}
+
+static void __meminit remove_pgd_mapping(unsigned long va, unsigned long end, bool is_vmemmap,
+					 struct vmem_altmap *altmap)
+{
+	unsigned long addr, next;
+	p4d_t *p4d_base;
+	pgd_t *pgd;
+
+	for (addr = va; addr < end; addr = next) {
+		next = pgd_addr_end(addr, end);
+		pgd = pgd_offset_k(addr);
+
+		if (!pgd_present(*pgd))
+			continue;
+
+		if (pgd_leaf(*pgd))
+			continue;
+
+		p4d_base = p4d_offset(pgd, 0);
+		remove_p4d_mapping(p4d_base, addr, next, is_vmemmap, altmap);
+	}
+
+	flush_tlb_all();
+}
+
+static void __meminit remove_linear_mapping(phys_addr_t start, u64 size)
+{
+	unsigned long va = (unsigned long)__va(start);
+	unsigned long end = (unsigned long)__va(start + size);
+
+	remove_pgd_mapping(va, end, false, NULL);
+}
+
+int __ref arch_add_memory(int nid, u64 start, u64 size, struct mhp_params *params)
+{
+	int ret;
+
+	create_linear_mapping_range(start, start + size, params);
+	flush_tlb_all();
+	ret = __add_pages(nid, start >> PAGE_SHIFT, size >> PAGE_SHIFT, params);
+	if (ret) {
+		remove_linear_mapping(start, size);
+		return ret;
+	}
+
+	max_pfn = PFN_UP(start + size);
+	max_low_pfn = max_pfn;
+	return 0;
+}
+
+void __ref arch_remove_memory(u64 start, u64 size, struct vmem_altmap *altmap)
+{
+	__remove_pages(start >> PAGE_SHIFT, size >> PAGE_SHIFT, altmap);
+	remove_linear_mapping(start, size);
+}
+
+#ifdef CONFIG_SPARSEMEM_VMEMMAP
+void __ref vmemmap_free(unsigned long start, unsigned long end, struct vmem_altmap *altmap)
+{
+	remove_pgd_mapping(start, end, true, altmap);
+}
+#endif /* CONFIG_SPARSEMEM_VMEMMAP */
+#endif /* CONFIG_MEMORY_HOTPLUG */