| Message ID | 20240129124649.189745-9-david@redhat.com (mailing list archive) |
|---|---|
| State | New |
| Series | mm/memory: optimize fork() with PTE-mapped THP |
```
On Mon, Jan 29, 2024 at 01:46:42PM +0100, David Hildenbrand wrote:
> Let's provide pte_next_pfn(), independently of set_ptes(). This allows for
> using the generic pte_next_pfn() version in some arch-specific set_ptes()
> implementations, and prepares for reusing pte_next_pfn() in other context.
>
> Reviewed-by: Christophe Leroy <christophe.leroy@csgroup.eu>
> Signed-off-by: David Hildenbrand <david@redhat.com>

Reviewed-by: Mike Rapoport (IBM) <rppt@kernel.org>

> ---
>  include/linux/pgtable.h | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/include/linux/pgtable.h b/include/linux/pgtable.h
> index f6d0e3513948..351cd9dc7194 100644
> --- a/include/linux/pgtable.h
> +++ b/include/linux/pgtable.h
> @@ -212,7 +212,6 @@ static inline int pmd_dirty(pmd_t pmd)
>  #define arch_flush_lazy_mmu_mode() do {} while (0)
>  #endif
>
> -#ifndef set_ptes
>
>  #ifndef pte_next_pfn
>  static inline pte_t pte_next_pfn(pte_t pte)
> @@ -221,6 +220,7 @@ static inline pte_t pte_next_pfn(pte_t pte)
>  }
>  #endif
>
> +#ifndef set_ptes
>  /**
>   * set_ptes - Map consecutive pages to a contiguous range of addresses.
>   * @mm: Address space to map the pages into.
> --
> 2.43.0
```
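For readers skimming the context: the helper being moved out of the `#ifndef set_ptes` block is tiny. The sketch below shows what the generic `pte_next_pfn()` boils down to, namely advancing the PFN encoded in a PTE by one page via `PFN_PTE_SHIFT`; it mirrors the generic definition in include/linux/pgtable.h at the time, but is reproduced here for illustration only, not as an authoritative copy.

```c
/*
 * Illustrative sketch of the generic pte_next_pfn(): produce a PTE whose
 * physical frame number is one page further along, leaving the bits below
 * the PFN field (permission/flag bits) unchanged. PFN_PTE_SHIFT is the
 * architecture's shift of the PFN field within a PTE.
 */
static inline pte_t pte_next_pfn(pte_t pte)
{
	return __pte(pte_val(pte) + (1UL << PFN_PTE_SHIFT));
}
```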
```diff
diff --git a/include/linux/pgtable.h b/include/linux/pgtable.h
index f6d0e3513948..351cd9dc7194 100644
--- a/include/linux/pgtable.h
+++ b/include/linux/pgtable.h
@@ -212,7 +212,6 @@ static inline int pmd_dirty(pmd_t pmd)
 #define arch_flush_lazy_mmu_mode() do {} while (0)
 #endif
 
-#ifndef set_ptes
 
 #ifndef pte_next_pfn
 static inline pte_t pte_next_pfn(pte_t pte)
@@ -221,6 +220,7 @@ static inline pte_t pte_next_pfn(pte_t pte)
 }
 #endif
 
+#ifndef set_ptes
 /**
  * set_ptes - Map consecutive pages to a contiguous range of addresses.
  * @mm: Address space to map the pages into.
```
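To make the motivation concrete, here is a simplified, hypothetical sketch of a `set_ptes()`-style loop that consumes `pte_next_pfn()` to map `nr` consecutive pages. It only illustrates the control flow: the real generic `set_ptes()` in include/linux/pgtable.h additionally handles lazy MMU mode and page-table checking, and arch-specific implementations differ. The function name `set_ptes_sketch` is made up for this example.

```c
/*
 * Hypothetical, simplified set_ptes()-style loop: write 'nr' PTEs starting
 * at 'ptep', using pte_next_pfn() to advance the target PFN for each
 * subsequent entry. The real generic set_ptes() also wraps this loop in
 * arch_enter/leave_lazy_mmu_mode() and calls page_table_check_ptes_set();
 * 'mm' and 'addr' are kept only to mirror the real signature.
 */
static inline void set_ptes_sketch(struct mm_struct *mm, unsigned long addr,
				   pte_t *ptep, pte_t pte, unsigned int nr)
{
	for (;;) {
		set_pte(ptep, pte);		/* install one entry */
		if (--nr == 0)
			break;
		ptep++;				/* next PTE slot ... */
		pte = pte_next_pfn(pte);	/* ... maps the next PFN */
	}
}
```

Because a loop like this lives under `#ifndef set_ptes`, an architecture that overrides `set_ptes()` previously got no generic `pte_next_pfn()` at all; moving the helper out of the guard lets such overrides, and other future callers such as the fork() batching later in this series, reuse it.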