Message ID | 20250304072700.3405036-1-quic_zhenhuah@quicinc.com (mailing list archive) |
---|---|
State | New |
Series | [v9] arm64: mm: Populate vmemmap at the page level if not section aligned |
On Tue, Mar 04, 2025 at 03:27:00PM +0800, Zhenhua Huang wrote:
> On the arm64 platform with 4K base page config, SECTION_SIZE_BITS is set
> to 27, making one section 128M. The vmemmap area backing one section's
> page structs is then 2M.
> Commit c1cc1552616d ("arm64: MMU initialisation") optimized vmemmap
> population to the PMD section level, which was suitable initially since
> the hot plug granule was always one section (128M). However,
> commit ba72b4c8cf60 ("mm/sparsemem: support sub-section hotplug")
> introduced a 2M (SUBSECTION_SIZE) hot plug granule, which disrupted the
> existing arm64 assumptions.
>
> The first problem is that if start or end is not aligned to a section
> boundary, such as when a subsection is hot added, populating the entire
> section is wasteful.
>
> The next problem is that if we hotplug something that spans part of a
> 128 MiB section (subsections; let's call it memblock1), then hotplug
> something that spans another part of the same 128 MiB section
> (subsections; let's call it memblock2), and subsequently unplug
> memblock1, vmemmap_free() will clear the entire PMD entry that also
> backs memblock2, even though memblock2 is still active.
>
> Assuming hotplug/unplug sizes are guaranteed to be symmetric, fix this
> the way x86-64 does: populate at the page level if start/end is not
> aligned with a section boundary.
>
> Cc: <stable@vger.kernel.org> # v5.4+
> Fixes: ba72b4c8cf60 ("mm/sparsemem: support sub-section hotplug")
> Acked-by: David Hildenbrand <david@redhat.com>
> Signed-off-by: Zhenhua Huang <quic_zhenhuah@quicinc.com>

Reviewed-by: Oscar Salvador <osalvador@suse.de>
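To make the size relationships in the commit message concrete, here is a minimal user-space sketch of the arithmetic, assuming 4K base pages, SECTION_SIZE_BITS = 27, and a 64-byte struct page (the common arm64 configuration; the macro values below are restated assumptions, not the kernel's headers). It shows why a full section's vmemmap is exactly one 2M PMD while a single subsection's vmemmap is only 32K:

```c
/*
 * Sketch of the vmemmap sizes involved, under the assumptions above.
 * Not kernel code; constants are illustrative.
 */
#include <stdio.h>

#define PAGE_SHIFT		12UL	/* 4K base pages */
#define SECTION_SIZE_BITS	27UL	/* one section = 128M */
#define SUBSECTION_SHIFT	21UL	/* one subsection = 2M */
#define STRUCT_PAGE_SIZE	64UL	/* typical sizeof(struct page) */

int main(void)
{
	unsigned long pages_per_section = 1UL << (SECTION_SIZE_BITS - PAGE_SHIFT);
	unsigned long pages_per_subsection = 1UL << (SUBSECTION_SHIFT - PAGE_SHIFT);

	/* vmemmap for a full section: 32768 * 64 = 2M, exactly one PMD */
	printf("section vmemmap:    %lu KiB\n",
	       pages_per_section * STRUCT_PAGE_SIZE / 1024);

	/* vmemmap for one 2M subsection: 512 * 64 = 32K, far below a PMD */
	printf("subsection vmemmap: %lu KiB\n",
	       pages_per_subsection * STRUCT_PAGE_SIZE / 1024);
	return 0;
}
```

Because several subsections' 32K vmemmap chunks share one 2M PMD mapping, freeing that PMD on behalf of one memblock tears down the vmemmap of its neighbours, which is exactly the bug described above.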
```diff
diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
index b4df5bc5b1b8..1dfe1a8efdbe 100644
--- a/arch/arm64/mm/mmu.c
+++ b/arch/arm64/mm/mmu.c
@@ -1177,8 +1177,11 @@ int __meminit vmemmap_populate(unsigned long start, unsigned long end, int node,
 		struct vmem_altmap *altmap)
 {
 	WARN_ON((start < VMEMMAP_START) || (end > VMEMMAP_END));
+	/* [start, end] should be within one section */
+	WARN_ON_ONCE(end - start > PAGES_PER_SECTION * sizeof(struct page));
 
-	if (!IS_ENABLED(CONFIG_ARM64_4K_PAGES))
+	if (!IS_ENABLED(CONFIG_ARM64_4K_PAGES) ||
+	    (end - start < PAGES_PER_SECTION * sizeof(struct page)))
 		return vmemmap_populate_basepages(start, end, node, altmap);
 	else
 		return vmemmap_populate_hugepages(start, end, node, altmap);
```
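As a stand-alone illustration of the routing the patched check performs, here is a hedged C sketch (not kernel code; `use_basepages` and the literal sizes are hypothetical, assuming CONFIG_ARM64_4K_PAGES and the 2M-per-section vmemmap size computed earlier):

```c
/*
 * Sketch of the decision in the patched vmemmap_populate():
 * any request smaller than one section's worth of vmemmap (one PMD)
 * falls back to page-level population.
 */
#include <stdbool.h>
#include <stdio.h>

#define SECTION_VMEMMAP_SIZE	(2UL << 20)	/* 2M, see arithmetic above */

/* true -> vmemmap_populate_basepages(), false -> vmemmap_populate_hugepages() */
static bool use_basepages(unsigned long start, unsigned long end)
{
	return (end - start) < SECTION_VMEMMAP_SIZE;
}

int main(void)
{
	/* a 2M subsection hot-add maps only 32K of vmemmap: page level */
	printf("subsection -> basepages? %d\n", use_basepages(0, 32UL << 10));
	/* a full 128M section maps a whole 2M PMD: section level */
	printf("section    -> basepages? %d\n", use_basepages(0, 2UL << 20));
	return 0;
}
```

Note that the added WARN_ON_ONCE enforces the stated assumption that each call covers at most one section, so the strict less-than comparison is enough to send every sub-section-sized request down the base-page path.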