Message ID | 1561699231-20991-1-git-send-email-anshuman.khandual@arm.com (mailing list archive)
---|---
State | New, archived
Series | [V2] mm/ioremap: Probe platform for p4d huge map support
On Fri, 28 Jun 2019 10:50:31 +0530 Anshuman Khandual <anshuman.khandual@arm.com> wrote:

> Finishing up what the commit c2febafc67734a ("mm: convert generic code to
> 5-level paging") started out while levelling up P4D huge mapping support
> at par with PUD and PMD. A new arch call back arch_ioremap_p4d_supported()
> is being added which just maintains status quo (P4D huge map not supported)
> on x86, arm64 and powerpc.

Does this have any runtime effects? If so, what are they and why? If
not, what's the actual point?
On 07/03/2019 04:36 AM, Andrew Morton wrote:
> On Fri, 28 Jun 2019 10:50:31 +0530 Anshuman Khandual <anshuman.khandual@arm.com> wrote:
>
>> Finishing up what the commit c2febafc67734a ("mm: convert generic code to
>> 5-level paging") started out while levelling up P4D huge mapping support
>> at par with PUD and PMD. A new arch call back arch_ioremap_p4d_supported()
>> is being added which just maintains status quo (P4D huge map not supported)
>> on x86, arm64 and powerpc.
>
> Does this have any runtime effects? If so, what are they and why? If
> not, what's the actual point?

It just finishes up what the previous commit c2febafc67734a ("mm: convert
generic code to 5-level paging") left off with respect to p4d based huge page
enablement for ioremap. When HAVE_ARCH_HUGE_VMAP is enabled, it is just a simple
check from the arch about the support, hence the runtime effects are minimal.
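[Editor's note: for context on the HAVE_ARCH_HUGE_VMAP gating mentioned above, the arch probe prototypes only exist when that config option is enabled, and the init hook collapses to a no-op stub otherwise. The sketch below is paraphrased from memory of include/linux/io.h at the time, so treat the exact lines as an assumption rather than a quote.]

/* Paraphrased sketch of the CONFIG_HAVE_ARCH_HUGE_VMAP gating in include/linux/io.h */
#ifdef CONFIG_HAVE_ARCH_HUGE_VMAP
void __init ioremap_huge_init(void);
int arch_ioremap_pud_supported(void);
int arch_ioremap_pmd_supported(void);
#else
static inline void ioremap_huge_init(void) { }	/* huge ioremap never considered */
#endif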
Anshuman Khandual <anshuman.khandual@arm.com> writes:
> On 07/03/2019 04:36 AM, Andrew Morton wrote:
>> On Fri, 28 Jun 2019 10:50:31 +0530 Anshuman Khandual <anshuman.khandual@arm.com> wrote:
>>
>>> Finishing up what the commit c2febafc67734a ("mm: convert generic code to
>>> 5-level paging") started out while levelling up P4D huge mapping support
>>> at par with PUD and PMD. A new arch call back arch_ioremap_p4d_supported()
>>> is being added which just maintains status quo (P4D huge map not supported)
>>> on x86, arm64 and powerpc.
>>
>> Does this have any runtime effects? If so, what are they and why? If
>> not, what's the actual point?
>
> It just finishes up what the previous commit c2febafc67734a ("mm: convert
> generic code to 5-level paging") left off with respect to p4d based huge page
> enablement for ioremap. When HAVE_ARCH_HUGE_VMAP is enabled, it is just a simple
> check from the arch about the support, hence the runtime effects are minimal.

The return value of arch_ioremap_p4d_supported() is stored in the
variable ioremap_p4d_capable which is then returned by
ioremap_p4d_enabled().

That is used by ioremap_try_huge_p4d() called from ioremap_p4d_range()
from ioremap_page_range().

The runtime effect is that it prevents ioremap_page_range() from trying
to create huge mappings at the p4d level on arches that don't support
it.

cheers
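[Editor's note: to make the chain described above concrete, here is a condensed sketch of the relevant lib/ioremap.c pieces from that era. It is a paraphrase with some checks trimmed (for example the p4d_free_pud_page() handling), not a verbatim copy of the file.]

static int __read_mostly ioremap_p4d_capable;	/* zero until ioremap_huge_init() sets it */

static inline int ioremap_p4d_enabled(void)
{
	return ioremap_p4d_capable;
}

static int ioremap_try_huge_p4d(p4d_t *p4d, unsigned long addr,
				unsigned long end, phys_addr_t phys_addr,
				pgprot_t prot)
{
	/* arch said no via arch_ioremap_p4d_supported(): fall back to smaller mappings */
	if (!ioremap_p4d_enabled())
		return 0;

	/* only a fully aligned range spanning exactly P4D_SIZE can become a huge entry */
	if ((end - addr) != P4D_SIZE ||
	    !IS_ALIGNED(addr, P4D_SIZE) || !IS_ALIGNED(phys_addr, P4D_SIZE))
		return 0;

	return p4d_set_huge(p4d, phys_addr, prot);
}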
Hi all,

On Fri, 12 Jul 2019 17:07:48 +1000 Michael Ellerman <mpe@ellerman.id.au> wrote:
>
> The return value of arch_ioremap_p4d_supported() is stored in the
> variable ioremap_p4d_capable which is then returned by
> ioremap_p4d_enabled().
>
> That is used by ioremap_try_huge_p4d() called from ioremap_p4d_range()
> from ioremap_page_range().

When I first saw this, I wondered if we expect
arch_ioremap_p4d_supported() to ever return something that is not
computable at compile time. If not, why do we have this level of
indirection? Why not just make it a static inline function defined in
an arch specific include file (or even just a CONFIG_ option)?

In particular, ioremap_p4d_enabled() either returns ioremap_p4d_capable
or 0, and is static to one file and has one call site ... The same is
true of ioremap_pud_enabled() and ioremap_pmd_enabled().
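[Editor's note: a hypothetical sketch of the compile-time alternative being suggested here. The static-inline form and its placement in an arch header are assumptions for illustration, not anything actually posted in this series.]

/* Hypothetical: answer the question at compile time instead of probing at boot. */
static inline int arch_ioremap_p4d_supported(void)
{
	return 0;	/* no architecture enables p4d huge ioremap in this series */
}

Whether a compile-time answer is enough depends on the arch: at least the existing x86 pud probe consults a CPU feature at boot (see the note after the patch below), which is one reason the runtime probe form may have been kept.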
On 07/12/2019 12:37 PM, Michael Ellerman wrote:
> Anshuman Khandual <anshuman.khandual@arm.com> writes:
>> On 07/03/2019 04:36 AM, Andrew Morton wrote:
>>> On Fri, 28 Jun 2019 10:50:31 +0530 Anshuman Khandual <anshuman.khandual@arm.com> wrote:
>>>
>>>> Finishing up what the commit c2febafc67734a ("mm: convert generic code to
>>>> 5-level paging") started out while levelling up P4D huge mapping support
>>>> at par with PUD and PMD. A new arch call back arch_ioremap_p4d_supported()
>>>> is being added which just maintains status quo (P4D huge map not supported)
>>>> on x86, arm64 and powerpc.
>>>
>>> Does this have any runtime effects? If so, what are they and why? If
>>> not, what's the actual point?
>>
>> It just finishes up what the previous commit c2febafc67734a ("mm: convert
>> generic code to 5-level paging") left off with respect to p4d based huge page
>> enablement for ioremap. When HAVE_ARCH_HUGE_VMAP is enabled, it is just a simple
>> check from the arch about the support, hence the runtime effects are minimal.
>
> The return value of arch_ioremap_p4d_supported() is stored in the
> variable ioremap_p4d_capable which is then returned by
> ioremap_p4d_enabled().
>
> That is used by ioremap_try_huge_p4d() called from ioremap_p4d_range()
> from ioremap_page_range().

That is right.

> The runtime effect is that it prevents ioremap_page_range() from trying
> to create huge mappings at the p4d level on arches that don't support
> it.

Right, but it now does so only after first checking with an arch callback.
Previously p4d huge mappings were disabled on all platforms, because
ioremap_p4d_capable is a static variable that was never set and so remained
clear throughout.
diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
index 97ff0341..750a69d 100644
--- a/arch/arm64/mm/mmu.c
+++ b/arch/arm64/mm/mmu.c
@@ -942,6 +942,11 @@ void *__init fixmap_remap_fdt(phys_addr_t dt_phys)
 	return dt_virt;
 }
 
+int __init arch_ioremap_p4d_supported(void)
+{
+	return 0;
+}
+
 int __init arch_ioremap_pud_supported(void)
 {
 	/*
diff --git a/arch/powerpc/mm/book3s64/radix_pgtable.c b/arch/powerpc/mm/book3s64/radix_pgtable.c
index 22c0637..60c8fca 100644
--- a/arch/powerpc/mm/book3s64/radix_pgtable.c
+++ b/arch/powerpc/mm/book3s64/radix_pgtable.c
@@ -1120,6 +1120,11 @@ void radix__ptep_modify_prot_commit(struct vm_area_struct *vma,
 	set_pte_at(mm, addr, ptep, pte);
 }
 
+int __init arch_ioremap_p4d_supported(void)
+{
+	return 0;
+}
+
 int __init arch_ioremap_pud_supported(void)
 {
 	/* HPT does not cope with large pages in the vmalloc area */
diff --git a/arch/x86/mm/ioremap.c b/arch/x86/mm/ioremap.c
index e500f1d..63e99f1 100644
--- a/arch/x86/mm/ioremap.c
+++ b/arch/x86/mm/ioremap.c
@@ -459,6 +459,11 @@ void iounmap(volatile void __iomem *addr)
 }
 EXPORT_SYMBOL(iounmap);
 
+int __init arch_ioremap_p4d_supported(void)
+{
+	return 0;
+}
+
 int __init arch_ioremap_pud_supported(void)
 {
 #ifdef CONFIG_X86_64
diff --git a/include/linux/io.h b/include/linux/io.h
index 9876e58..accac82 100644
--- a/include/linux/io.h
+++ b/include/linux/io.h
@@ -33,6 +33,7 @@ static inline int ioremap_page_range(unsigned long addr, unsigned long end,
 
 #ifdef CONFIG_HAVE_ARCH_HUGE_VMAP
 void __init ioremap_huge_init(void);
+int arch_ioremap_p4d_supported(void);
 int arch_ioremap_pud_supported(void);
 int arch_ioremap_pmd_supported(void);
 #else
diff --git a/lib/ioremap.c b/lib/ioremap.c
index a95161d..0a2ffad 100644
--- a/lib/ioremap.c
+++ b/lib/ioremap.c
@@ -30,6 +30,8 @@ early_param("nohugeiomap", set_nohugeiomap);
 void __init ioremap_huge_init(void)
 {
 	if (!ioremap_huge_disabled) {
+		if (arch_ioremap_p4d_supported())
+			ioremap_p4d_capable = 1;
 		if (arch_ioremap_pud_supported())
 			ioremap_pud_capable = 1;
 		if (arch_ioremap_pmd_supported())
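[Editor's note: for contrast with the unconditional return 0 stubs added above, the existing pud-level probe on x86 is a genuine runtime check, which is presumably why these hooks are probed at boot rather than fixed at compile time. The snippet below is recalled from arch/x86/mm/ioremap.c of the same era and should be read as a paraphrase rather than the exact source.]

int __init arch_ioremap_pud_supported(void)
{
#ifdef CONFIG_X86_64
	/* 1GiB leaf mappings are only usable when the CPU advertises gbpages */
	return boot_cpu_has(X86_FEATURE_GBPAGES);
#else
	return 0;
#endif
}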