From patchwork Fri Feb 28 18:29:15 2025
Date: Fri, 28 Feb 2025 18:29:15 +0000
In-Reply-To: <20250228182928.2645936-1-fvdl@google.com>
References: <20250228182928.2645936-1-fvdl@google.com>
Message-ID: <20250228182928.2645936-15-fvdl@google.com>
Subject: [PATCH v5 14/27] mm/sparse: add vmemmap_*_hvo functions
From: Frank van der Linden
To: akpm@linux-foundation.org, muchun.song@linux.dev, linux-mm@kvack.org,
 linux-kernel@vger.kernel.org
Cc: yuzhao@google.com, usamaarif642@gmail.com, joao.m.martins@oracle.com,
 roman.gushchin@linux.dev, ziy@nvidia.com, david@redhat.com,
 Frank van der Linden
Add a few functions to enable early HVO:

	vmemmap_populate_hvo
	vmemmap_undo_hvo
	vmemmap_wrprotect_hvo

The populate and undo functions are expected to be used in early init,
from the sparse_init_nid_early() function. The wrprotect function is to
be used, potentially, later.

To implement these functions, mostly re-use the existing compound pages
vmemmap logic used by DAX. vmemmap_populate_address has its argument
changed a bit in this commit: the page structure passed in to be reused
in the mapping is replaced by a PFN and a flag. The flag indicates
whether an extra ref should be taken on the vmemmap page containing the
head page structure. Taking the ref is appropriate for DAX /
ZONE_DEVICE, but not for HugeTLB HVO.

The HugeTLB vmemmap optimization maps tail page structure pages
read-only. The vmemmap_wrprotect_hvo function that does this is
implemented separately, because it cannot be guaranteed that reserved
page structures will not be write accessed during memory
initialization. Even with CONFIG_DEFERRED_STRUCT_PAGE_INIT, they might
still be written to (if they are at the bottom of a zone). So,
vmemmap_populate_hvo leaves the tail page structure pages RW initially,
and then later during initialization, after memmap init is fully done,
vmemmap_wrprotect_hvo must be called to finish the job.

Subsequent commits will use these functions for early HugeTLB HVO.
Signed-off-by: Frank van der Linden
---
 include/linux/mm.h  |   9 ++-
 mm/sparse-vmemmap.c | 141 +++++++++++++++++++++++++++++++++++++++-----
 2 files changed, 135 insertions(+), 15 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index df83653ed6e3..0463c062fd7a 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -3837,7 +3837,8 @@ p4d_t *vmemmap_p4d_populate(pgd_t *pgd, unsigned long addr, int node);
 pud_t *vmemmap_pud_populate(p4d_t *p4d, unsigned long addr, int node);
 pmd_t *vmemmap_pmd_populate(pud_t *pud, unsigned long addr, int node);
 pte_t *vmemmap_pte_populate(pmd_t *pmd, unsigned long addr, int node,
-			    struct vmem_altmap *altmap, struct page *reuse);
+			    struct vmem_altmap *altmap, unsigned long ptpfn,
+			    unsigned long flags);
 void *vmemmap_alloc_block(unsigned long size, int node);
 struct vmem_altmap;
 void *vmemmap_alloc_block_buf(unsigned long size, int node,
@@ -3853,6 +3854,12 @@ int vmemmap_populate_hugepages(unsigned long start, unsigned long end,
 			       int node, struct vmem_altmap *altmap);
 int vmemmap_populate(unsigned long start, unsigned long end, int node,
 		     struct vmem_altmap *altmap);
+int vmemmap_populate_hvo(unsigned long start, unsigned long end, int node,
+			 unsigned long headsize);
+int vmemmap_undo_hvo(unsigned long start, unsigned long end, int node,
+		     unsigned long headsize);
+void vmemmap_wrprotect_hvo(unsigned long start, unsigned long end, int node,
+			   unsigned long headsize);
 void vmemmap_populate_print_last(void);
 #ifdef CONFIG_MEMORY_HOTPLUG
 void vmemmap_free(unsigned long start, unsigned long end,
diff --git a/mm/sparse-vmemmap.c b/mm/sparse-vmemmap.c
index 8751c46c35e4..8cc848c4b17c 100644
--- a/mm/sparse-vmemmap.c
+++ b/mm/sparse-vmemmap.c
@@ -30,6 +30,13 @@
 #include 
 #include 
+#include 
+
+/*
+ * Flags for vmemmap_populate_range and friends.
+ */
+/* Get a ref on the head page struct page, for ZONE_DEVICE compound pages */
+#define VMEMMAP_POPULATE_PAGEREF	0x0001
 
 #include "internal.h"
 
@@ -144,17 +151,18 @@ void __meminit vmemmap_verify(pte_t *pte, int node,
 
 pte_t * __meminit vmemmap_pte_populate(pmd_t *pmd, unsigned long addr, int node,
 				       struct vmem_altmap *altmap,
-				       struct page *reuse)
+				       unsigned long ptpfn, unsigned long flags)
 {
 	pte_t *pte = pte_offset_kernel(pmd, addr);
 	if (pte_none(ptep_get(pte))) {
 		pte_t entry;
 		void *p;
 
-		if (!reuse) {
+		if (ptpfn == (unsigned long)-1) {
 			p = vmemmap_alloc_block_buf(PAGE_SIZE, node, altmap);
 			if (!p)
 				return NULL;
+			ptpfn = PHYS_PFN(__pa(p));
 		} else {
 			/*
 			 * When a PTE/PMD entry is freed from the init_mm
@@ -165,10 +173,10 @@ pte_t * __meminit vmemmap_pte_populate(pmd_t *pmd, unsigned long addr, int node,
 			 * and through vmemmap_populate_compound_pages() when
 			 * slab is available.
 			 */
-			get_page(reuse);
-			p = page_to_virt(reuse);
+			if (flags & VMEMMAP_POPULATE_PAGEREF)
+				get_page(pfn_to_page(ptpfn));
 		}
-		entry = pfn_pte(__pa(p) >> PAGE_SHIFT, PAGE_KERNEL);
+		entry = pfn_pte(ptpfn, PAGE_KERNEL);
 		set_pte_at(&init_mm, addr, pte, entry);
 	}
 	return pte;
@@ -238,7 +246,8 @@ pgd_t * __meminit vmemmap_pgd_populate(unsigned long addr, int node)
 
 static pte_t * __meminit vmemmap_populate_address(unsigned long addr, int node,
 						 struct vmem_altmap *altmap,
-						 struct page *reuse)
+						 unsigned long ptpfn,
+						 unsigned long flags)
 {
 	pgd_t *pgd;
 	p4d_t *p4d;
@@ -258,7 +267,7 @@ static pte_t * __meminit vmemmap_populate_address(unsigned long addr, int node,
 	pmd = vmemmap_pmd_populate(pud, addr, node);
 	if (!pmd)
 		return NULL;
-	pte = vmemmap_pte_populate(pmd, addr, node, altmap, reuse);
+	pte = vmemmap_pte_populate(pmd, addr, node, altmap, ptpfn, flags);
 	if (!pte)
 		return NULL;
 	vmemmap_verify(pte, node, addr, addr + PAGE_SIZE);
@@ -269,13 +278,15 @@ static pte_t * __meminit vmemmap_populate_address(unsigned long addr, int node,
 
 static int __meminit vmemmap_populate_range(unsigned long start,
 					    unsigned long end, int node,
 					    struct vmem_altmap *altmap,
-					    struct page *reuse)
+					    unsigned long ptpfn,
+					    unsigned long flags)
 {
 	unsigned long addr = start;
 	pte_t *pte;
 
 	for (; addr < end; addr += PAGE_SIZE) {
-		pte = vmemmap_populate_address(addr, node, altmap, reuse);
+		pte = vmemmap_populate_address(addr, node, altmap,
+					       ptpfn, flags);
 		if (!pte)
 			return -ENOMEM;
 	}
@@ -286,7 +297,107 @@ static int __meminit vmemmap_populate_range(unsigned long start,
 int __meminit vmemmap_populate_basepages(unsigned long start, unsigned long end,
 					 int node, struct vmem_altmap *altmap)
 {
-	return vmemmap_populate_range(start, end, node, altmap, NULL);
+	return vmemmap_populate_range(start, end, node, altmap, -1, 0);
+}
+
+/*
+ * Undo populate_hvo, and replace it with a normal base page mapping.
+ * Used in memory init in case a HVO mapping needs to be undone.
+ *
+ * This can happen when it is discovered that a memblock allocated
+ * hugetlb page spans multiple zones, which can only be verified
+ * after zones have been initialized.
+ *
+ * We know that:
+ * 1) The first @headsize / PAGE_SIZE vmemmap pages were individually
+ *    allocated through memblock, and mapped.
+ *
+ * 2) The rest of the vmemmap pages are mirrors of the last head page.
+ */
+int __meminit vmemmap_undo_hvo(unsigned long addr, unsigned long end,
+			       int node, unsigned long headsize)
+{
+	unsigned long maddr, pfn;
+	pte_t *pte;
+	int headpages;
+
+	/*
+	 * Should only be called early in boot, so nothing will
+	 * be accessing these page structures.
+	 */
+	WARN_ON(!early_boot_irqs_disabled);
+
+	headpages = headsize >> PAGE_SHIFT;
+
+	/*
+	 * Clear mirrored mappings for tail page structs.
+	 */
+	for (maddr = addr + headsize; maddr < end; maddr += PAGE_SIZE) {
+		pte = virt_to_kpte(maddr);
+		pte_clear(&init_mm, maddr, pte);
+	}
+
+	/*
+	 * Clear and free mappings for head page and first tail page
+	 * structs.
+	 */
+	for (maddr = addr; headpages-- > 0; maddr += PAGE_SIZE) {
+		pte = virt_to_kpte(maddr);
+		pfn = pte_pfn(ptep_get(pte));
+		pte_clear(&init_mm, maddr, pte);
+		memblock_phys_free(PFN_PHYS(pfn), PAGE_SIZE);
+	}
+
+	flush_tlb_kernel_range(addr, end);
+
+	return vmemmap_populate(addr, end, node, NULL);
+}
+
+/*
+ * Write protect the mirrored tail page structs for HVO. This will be
+ * called from the hugetlb code when gathering and initializing the
+ * memblock allocated gigantic pages. The write protect can't be
+ * done earlier, since it can't be guaranteed that the reserved
+ * page structures will not be written to during initialization,
+ * even if CONFIG_DEFERRED_STRUCT_PAGE_INIT is enabled.
+ *
+ * The PTEs are known to exist, and nothing else should be touching
+ * these pages. The caller is responsible for any TLB flushing.
+ */
+void vmemmap_wrprotect_hvo(unsigned long addr, unsigned long end,
+			   int node, unsigned long headsize)
+{
+	unsigned long maddr;
+	pte_t *pte;
+
+	for (maddr = addr + headsize; maddr < end; maddr += PAGE_SIZE) {
+		pte = virt_to_kpte(maddr);
+		ptep_set_wrprotect(&init_mm, maddr, pte);
+	}
+}
+
+/*
+ * Populate vmemmap pages HVO-style. The first page contains the head
+ * page and needed tail pages, the other ones are mirrors of the first
+ * page.
+ */
+int __meminit vmemmap_populate_hvo(unsigned long addr, unsigned long end,
+				   int node, unsigned long headsize)
+{
+	pte_t *pte;
+	unsigned long maddr;
+
+	for (maddr = addr; maddr < addr + headsize; maddr += PAGE_SIZE) {
+		pte = vmemmap_populate_address(maddr, node, NULL, -1, 0);
+		if (!pte)
+			return -ENOMEM;
+	}
+
+	/*
+	 * Reuse the last page struct page mapped above for the rest.
+	 */
+	return vmemmap_populate_range(maddr, end, node, NULL,
+				      pte_pfn(ptep_get(pte)), 0);
 }
 
 void __weak __meminit vmemmap_set_pmd(pmd_t *pmd, void *p, int node,
@@ -409,7 +520,8 @@ static int __meminit vmemmap_populate_compound_pages(unsigned long start_pfn,
 		 * with just tail struct pages.
 		 */
 		return vmemmap_populate_range(start, end, node, NULL,
-					      pte_page(ptep_get(pte)));
+					      pte_pfn(ptep_get(pte)),
+					      VMEMMAP_POPULATE_PAGEREF);
 	}
 
 	size = min(end - start, pgmap_vmemmap_nr(pgmap) * sizeof(struct page));
@@ -417,13 +529,13 @@ static int __meminit vmemmap_populate_compound_pages(unsigned long start_pfn,
 		unsigned long next, last = addr + size;
 
 		/* Populate the head page vmemmap page */
-		pte = vmemmap_populate_address(addr, node, NULL, NULL);
+		pte = vmemmap_populate_address(addr, node, NULL, -1, 0);
 		if (!pte)
 			return -ENOMEM;
 
 		/* Populate the tail pages vmemmap page */
 		next = addr + PAGE_SIZE;
-		pte = vmemmap_populate_address(next, node, NULL, NULL);
+		pte = vmemmap_populate_address(next, node, NULL, -1, 0);
 		if (!pte)
 			return -ENOMEM;
 
@@ -433,7 +545,8 @@ static int __meminit vmemmap_populate_compound_pages(unsigned long start_pfn,
 		 */
 		next += PAGE_SIZE;
 		rc = vmemmap_populate_range(next, last, node, NULL,
-					    pte_page(ptep_get(pte)),
+					    pte_pfn(ptep_get(pte)),
+					    VMEMMAP_POPULATE_PAGEREF);
 		if (rc)
 			return -ENOMEM;
 	}