Message ID | 20240103091423.400294-1-peterx@redhat.com (mailing list archive)
---|---
Series | mm/gup: Unify hugetlb, part 2
On 03/01/2024 at 10:14, peterx@redhat.com wrote:
> From: Peter Xu <peterx@redhat.com>
>
> Test Done
> =========
>
> This v1 went through the normal GUP smoke tests over different memory
> types on archs (using VM instances): x86_64, aarch64, ppc64le. For
> aarch64, tested over 64KB cont_pte huge pages. For ppc64le, tested over
> 16MB hugepd entries (Power8 hash MMU on 4K base page size).

Can you tell how you test? I'm willing to test this series on powerpc
8xx (PPC32).

Christophe
Hi, Christophe,

On Wed, Jan 03, 2024 at 11:14:54AM +0000, Christophe Leroy wrote:
> > Test Done
> > =========
> >
> > This v1 went through the normal GUP smoke tests over different memory
> > types on archs (using VM instances): x86_64, aarch64, ppc64le. For
> > aarch64, tested over 64KB cont_pte huge pages. For ppc64le, tested over
> > 16MB hugepd entries (Power8 hash MMU on 4K base page size).
>
> Can you tell how you test?
>
> I'm willing to test this series on powerpc 8xx (PPC32).

My apologies, for some reason I totally overlooked this email.  I only
tested using run_vmtests.sh, with:

  $ bash ./run_vmtests.sh -t gup_test -a

That should cover a good number of GUP tests via the gup_test program.
I think the ones that matter here are "-H" combined with either "-U" or
"-b".

For ppc8xx, even though kernel mappings use hugepd, I don't expect
anything to change before/after this series: the code I touched (slow
gup only) affects user pages only, so it shouldn't change anything for
kernel mappings.  That said, please feel free to smoke-test whatever
kind of kernel hugepd mappings you like; I trust you're the expert on
how to trigger those paths.

Since I have your attention: while working on this series I talked to
David Gibson and learned that hugepd is actually a pure software idea.
IIUC that means no PPC hardware really understands the hugepd format at
all; it is only a "this is a huge page" hint for Linux.  Considering
that it _seems_ to play a role similar to cont_pXX here: do you think
hugepd could be implemented similarly to cont_pXX, or somehow share its
code?

For example, if hugepd is recognized only by the Linux kernel itself,
maybe some special pgtable hint could be attached to cont_* entries,
showing whether an entry is a "real cont_*" entry or a "hugepd" entry?
IIUC this can be quite flexible, because if hugepd only works with the
hash MMU, no hardware will ever walk that radix table.
But I may be overlooking important things here.  It would definitely be
great if hugepd could be merged into some existing generic pgtable form
(IMHO cont_* is such a case: to software it behaves the same as
non-cont_* entries, while hardware can accelerate it with TLB hits over
larger ranges).  Then again, this may be a very silly question, as I can
easily be missing something important.

Thanks,
From: Peter Xu <peterx@redhat.com>

v2:
- Collect acks
- Patch 9:
  - Use READ_ONCE() to fetch pud entry [James]

rfc: https://lore.kernel.org/r/20231116012908.392077-1-peterx@redhat.com
v1:  https://lore.kernel.org/r/20231219075538.414708-1-peterx@redhat.com

This is v2 of the series, based on the latest mm-unstable (856325d361df).

The series removes the hugetlb slow gup path after a previous
refactoring [1], so that slow gup now uses the exact same path to
process all kinds of memory, including hugetlb.

For the long term, we may want to remove most, if not all, call sites of
huge_pte_offset().  Ideally that API could be dropped from the arch
hugetlb API completely.  This series is one small step towards merging
hugetlb-specific code into the generic mm paths; from that POV, it
removes one reference to huge_pte_offset() out of many others.

One goal of such a route is that we can reconsider merging hugetlb
features like High Granularity Mapping (HGM).  It was not accepted in
the past because it would add a lot of hugetlb-specific code and make
the mm code even harder to maintain.  With a merged code base, features
like HGM can hopefully share some code with THP, whether legacy (PMD+)
or modern (contiguous PTEs).

To make this work, the generic slow gup code needs to at least
understand hugepd, which fast gup already does.  Fortunately, that seems
to be the only major thing slow gup needs to learn to share the common
path for now, besides normal huge PxD entries.  Non-gup paths can be
more challenging, but that's a question for later.

There is one major difference for slow gup in cont_pte / cont_pmd
handling, currently supported on three architectures (aarch64, riscv,
ppc).  Before the series, slow gup could recognize e.g. cont_pte entries
with the help of huge_pte_offset() when the hstate was around.  Now that
is gone, but things keep working by looking up pgtable entries one by
one.  It is not ideal, but hopefully this change does not yet affect any
major workload.
There is some more information in the commit message of the last patch.
If this turns out to be a concern, we can consider teaching slow gup to
recognize cont pte/pmd entries, which should recover the lost
performance.  But I doubt its necessity for now, so I kept it as simple
as possible.

Test Done
=========

This v1 went through the normal GUP smoke tests over different memory
types on archs (using VM instances): x86_64, aarch64, ppc64le.  For
aarch64, tested over 64KB cont_pte huge pages.  For ppc64le, tested over
16MB hugepd entries (Power8 hash MMU on 4K base page size).

Patch layout
=============

Patch 1-7:  Preparation works, or cleanups in relevant code paths
Patch 8-12: Teach slow gup with all kinds of huge entries (pXd, hugepd)
Patch 13:   Drop hugetlb_follow_page_mask()

More information can be found in the commit message of each patch.  Any
comments are welcome.

Thanks.

[1] https://lore.kernel.org/all/20230628215310.73782-1-peterx@redhat.com

Peter Xu (13):
  mm/Kconfig: CONFIG_PGTABLE_HAS_HUGE_LEAVES
  mm/hugetlb: Declare hugetlbfs_pagecache_present() non-static
  mm: Provide generic pmd_thp_or_huge()
  mm: Make HPAGE_PXD_* macros even if !THP
  mm: Introduce vma_pgtable_walk_{begin|end}()
  mm/gup: Drop folio_fast_pin_allowed() in hugepd processing
  mm/gup: Refactor record_subpages() to find 1st small page
  mm/gup: Handle hugetlb for no_page_table()
  mm/gup: Cache *pudp in follow_pud_mask()
  mm/gup: Handle huge pud for follow_pud_mask()
  mm/gup: Handle huge pmd for follow_pmd_mask()
  mm/gup: Handle hugepd for follow_page()
  mm/gup: Handle hugetlb in the generic follow_page_mask code

 include/linux/huge_mm.h |  25 +--
 include/linux/hugetlb.h |  16 +-
 include/linux/mm.h      |   3 +
 include/linux/pgtable.h |   4 +
 mm/Kconfig              |   3 +
 mm/gup.c                | 362 ++++++++++++++++++++++++++++++++--------
 mm/huge_memory.c        | 133 +--------------
 mm/hugetlb.c            |  75 +-------
 mm/internal.h           |   7 +-
 mm/memory.c             |  12 ++
 10 files changed, 342 insertions(+), 298 deletions(-)