Message ID | f302ef92c48d1f08a0459aaee1c568ca11213814.1612345700.git.christophe.leroy@csgroup.eu (mailing list archive)
---|---
State | Not Applicable
Series | mm/memory.c: Remove pte_sw_mkyoung()
On Wed, 3 Feb 2021 10:19:44 +0000 (UTC) Christophe Leroy <christophe.leroy@csgroup.eu> wrote:

> Commit 83d116c53058 ("mm: fix double page fault on arm64 if PTE_AF
> is cleared") introduced arch_faults_on_old_pte() helper to identify
> platforms that don't set page access bit in HW and require a page
> fault to set it.
>
> Commit 44bf431b47b4 ("mm/memory.c: Add memory read privilege on page
> fault handling") added pte_sw_mkyoung() which is yet another way to
> manage platforms that don't set page access bit in HW and require a
> page fault to set it.
>
> Remove that pte_sw_mkyoung() helper and use the already existing
> arch_faults_on_old_pte() helper together with pte_mkyoung() instead.

This conflicts with mm/memory.c changes in linux-next. In
do_set_pte(). Please check my efforts:

--- a/arch/mips/include/asm/pgtable.h~mm-memoryc-remove-pte_sw_mkyoung
+++ a/arch/mips/include/asm/pgtable.h
@@ -406,8 +406,6 @@ static inline pte_t pte_mkyoung(pte_t pt
 	return pte;
 }
 
-#define pte_sw_mkyoung	pte_mkyoung
-
 #ifdef CONFIG_MIPS_HUGE_TLB_SUPPORT
 static inline int pte_huge(pte_t pte)	{ return pte_val(pte) & _PAGE_HUGE; }
 
--- a/include/linux/pgtable.h~mm-memoryc-remove-pte_sw_mkyoung
+++ a/include/linux/pgtable.h
@@ -424,22 +424,6 @@ static inline void ptep_set_wrprotect(st
 }
 #endif
 
-/*
- * On some architectures hardware does not set page access bit when accessing
- * memory page, it is responsibilty of software setting this bit. It brings
- * out extra page fault penalty to track page access bit. For optimization page
- * access bit can be set during all page fault flow on these arches.
- * To be differentiate with macro pte_mkyoung, this macro is used on platforms
- * where software maintains page access bit.
- */
-#ifndef pte_sw_mkyoung
-static inline pte_t pte_sw_mkyoung(pte_t pte)
-{
-	return pte;
-}
-#define pte_sw_mkyoung	pte_sw_mkyoung
-#endif
-
 #ifndef pte_savedwrite
 #define pte_savedwrite pte_write
 #endif
--- a/mm/memory.c~mm-memoryc-remove-pte_sw_mkyoung
+++ a/mm/memory.c
@@ -2902,7 +2902,8 @@ static vm_fault_t wp_page_copy(struct vm
 	}
 	flush_cache_page(vma, vmf->address, pte_pfn(vmf->orig_pte));
 	entry = mk_pte(new_page, vma->vm_page_prot);
-	entry = pte_sw_mkyoung(entry);
+	if (arch_faults_on_old_pte())
+		entry = pte_mkyoung(entry);
 	entry = maybe_mkwrite(pte_mkdirty(entry), vma);
 
 	/*
@@ -3560,7 +3561,8 @@ static vm_fault_t do_anonymous_page(stru
 	__SetPageUptodate(page);
 
 	entry = mk_pte(page, vma->vm_page_prot);
-	entry = pte_sw_mkyoung(entry);
+	if (arch_faults_on_old_pte())
+		entry = pte_mkyoung(entry);
 	if (vma->vm_flags & VM_WRITE)
 		entry = pte_mkwrite(pte_mkdirty(entry));
 
@@ -3745,8 +3747,8 @@ void do_set_pte(struct vm_fault *vmf, st
 
 	if (prefault && arch_wants_old_prefaulted_pte())
 		entry = pte_mkold(entry);
-	else
-		entry = pte_sw_mkyoung(entry);
+	else if (arch_faults_on_old_pte())
+		entry = pte_mkyoung(entry);
 
 	if (write)
 		entry = maybe_mkwrite(pte_mkdirty(entry), vma);
Excerpts from Andrew Morton's message of February 4, 2021 10:46 am:
> On Wed, 3 Feb 2021 10:19:44 +0000 (UTC) Christophe Leroy <christophe.leroy@csgroup.eu> wrote:
> 
>> Commit 83d116c53058 ("mm: fix double page fault on arm64 if PTE_AF
>> is cleared") introduced arch_faults_on_old_pte() helper to identify
>> platforms that don't set page access bit in HW and require a page
>> fault to set it.
>>
>> Commit 44bf431b47b4 ("mm/memory.c: Add memory read privilege on page
>> fault handling") added pte_sw_mkyoung() which is yet another way to
>> manage platforms that don't set page access bit in HW and require a
>> page fault to set it.
>>
>> Remove that pte_sw_mkyoung() helper and use the already existing
>> arch_faults_on_old_pte() helper together with pte_mkyoung() instead.
> 
> This conflicts with mm/memory.c changes in linux-next. In
> do_set_pte(). Please check my efforts:

I wanted to just get rid of it completely --

https://marc.info/?l=linux-mm&m=160860750115163&w=2

Waiting for MIPs to get that patch mentioned merged or nacked but as
yet seems to be no response from maintainers.

https://lore.kernel.org/linux-arch/20201019081257.32127-1-huangpei@loongson.cn/

Thanks,
Nick

> --- a/arch/mips/include/asm/pgtable.h~mm-memoryc-remove-pte_sw_mkyoung
> +++ a/arch/mips/include/asm/pgtable.h
> @@ -406,8 +406,6 @@ static inline pte_t pte_mkyoung(pte_t pt
>  	return pte;
>  }
>  
> -#define pte_sw_mkyoung	pte_mkyoung
> -
>  #ifdef CONFIG_MIPS_HUGE_TLB_SUPPORT
>  static inline int pte_huge(pte_t pte)	{ return pte_val(pte) & _PAGE_HUGE; }
>  
> --- a/include/linux/pgtable.h~mm-memoryc-remove-pte_sw_mkyoung
> +++ a/include/linux/pgtable.h
> @@ -424,22 +424,6 @@ static inline void ptep_set_wrprotect(st
>  }
>  #endif
>  
> -/*
> - * On some architectures hardware does not set page access bit when accessing
> - * memory page, it is responsibilty of software setting this bit. It brings
> - * out extra page fault penalty to track page access bit. For optimization page
> - * access bit can be set during all page fault flow on these arches.
> - * To be differentiate with macro pte_mkyoung, this macro is used on platforms
> - * where software maintains page access bit.
> - */
> -#ifndef pte_sw_mkyoung
> -static inline pte_t pte_sw_mkyoung(pte_t pte)
> -{
> -	return pte;
> -}
> -#define pte_sw_mkyoung	pte_sw_mkyoung
> -#endif
> -
>  #ifndef pte_savedwrite
>  #define pte_savedwrite pte_write
>  #endif
> --- a/mm/memory.c~mm-memoryc-remove-pte_sw_mkyoung
> +++ a/mm/memory.c
> @@ -2902,7 +2902,8 @@ static vm_fault_t wp_page_copy(struct vm
>  	}
>  	flush_cache_page(vma, vmf->address, pte_pfn(vmf->orig_pte));
>  	entry = mk_pte(new_page, vma->vm_page_prot);
> -	entry = pte_sw_mkyoung(entry);
> +	if (arch_faults_on_old_pte())
> +		entry = pte_mkyoung(entry);
>  	entry = maybe_mkwrite(pte_mkdirty(entry), vma);
>  
>  	/*
> @@ -3560,7 +3561,8 @@ static vm_fault_t do_anonymous_page(stru
>  	__SetPageUptodate(page);
>  
>  	entry = mk_pte(page, vma->vm_page_prot);
> -	entry = pte_sw_mkyoung(entry);
> +	if (arch_faults_on_old_pte())
> +		entry = pte_mkyoung(entry);
>  	if (vma->vm_flags & VM_WRITE)
>  		entry = pte_mkwrite(pte_mkdirty(entry));
>  
> @@ -3745,8 +3747,8 @@ void do_set_pte(struct vm_fault *vmf, st
>  
>  	if (prefault && arch_wants_old_prefaulted_pte())
>  		entry = pte_mkold(entry);
> -	else
> -		entry = pte_sw_mkyoung(entry);
> +	else if (arch_faults_on_old_pte())
> +		entry = pte_mkyoung(entry);
>  
>  	if (write)
>  		entry = maybe_mkwrite(pte_mkdirty(entry), vma);
> _
diff --git a/arch/mips/include/asm/pgtable.h b/arch/mips/include/asm/pgtable.h
index 4f9c37616d42..3275495adccb 100644
--- a/arch/mips/include/asm/pgtable.h
+++ b/arch/mips/include/asm/pgtable.h
@@ -406,8 +406,6 @@ static inline pte_t pte_mkyoung(pte_t pte)
 	return pte;
 }
 
-#define pte_sw_mkyoung	pte_mkyoung
-
 #ifdef CONFIG_MIPS_HUGE_TLB_SUPPORT
 static inline int pte_huge(pte_t pte)	{ return pte_val(pte) & _PAGE_HUGE; }
 
diff --git a/include/linux/pgtable.h b/include/linux/pgtable.h
index 8fcdfa52eb4b..70d04931dff4 100644
--- a/include/linux/pgtable.h
+++ b/include/linux/pgtable.h
@@ -424,22 +424,6 @@ static inline void ptep_set_wrprotect(struct mm_struct *mm, unsigned long addres
 }
 #endif
 
-/*
- * On some architectures hardware does not set page access bit when accessing
- * memory page, it is responsibilty of software setting this bit. It brings
- * out extra page fault penalty to track page access bit. For optimization page
- * access bit can be set during all page fault flow on these arches.
- * To be differentiate with macro pte_mkyoung, this macro is used on platforms
- * where software maintains page access bit.
- */
-#ifndef pte_sw_mkyoung
-static inline pte_t pte_sw_mkyoung(pte_t pte)
-{
-	return pte;
-}
-#define pte_sw_mkyoung	pte_sw_mkyoung
-#endif
-
 #ifndef pte_savedwrite
 #define pte_savedwrite pte_write
 #endif
diff --git a/mm/memory.c b/mm/memory.c
index feff48e1465a..46fab785f7b3 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -2890,7 +2890,8 @@ static vm_fault_t wp_page_copy(struct vm_fault *vmf)
 	}
 	flush_cache_page(vma, vmf->address, pte_pfn(vmf->orig_pte));
 	entry = mk_pte(new_page, vma->vm_page_prot);
-	entry = pte_sw_mkyoung(entry);
+	if (arch_faults_on_old_pte())
+		entry = pte_mkyoung(entry);
 	entry = maybe_mkwrite(pte_mkdirty(entry), vma);
 
 	/*
@@ -3548,7 +3549,8 @@ static vm_fault_t do_anonymous_page(struct vm_fault *vmf)
 	__SetPageUptodate(page);
 
 	entry = mk_pte(page, vma->vm_page_prot);
-	entry = pte_sw_mkyoung(entry);
+	if (arch_faults_on_old_pte())
+		entry = pte_mkyoung(entry);
 	if (vma->vm_flags & VM_WRITE)
 		entry = pte_mkwrite(pte_mkdirty(entry));
 
@@ -3824,7 +3826,8 @@ vm_fault_t alloc_set_pte(struct vm_fault *vmf, struct page *page)
 
 	flush_icache_page(vma, page);
 	entry = mk_pte(page, vma->vm_page_prot);
-	entry = pte_sw_mkyoung(entry);
+	if (arch_faults_on_old_pte())
+		entry = pte_mkyoung(entry);
 	if (write)
 		entry = maybe_mkwrite(pte_mkdirty(entry), vma);
 	/* copy-on-write page */
Commit 83d116c53058 ("mm: fix double page fault on arm64 if PTE_AF
is cleared") introduced arch_faults_on_old_pte() helper to identify
platforms that don't set page access bit in HW and require a page
fault to set it.

Commit 44bf431b47b4 ("mm/memory.c: Add memory read privilege on page
fault handling") added pte_sw_mkyoung() which is yet another way to
manage platforms that don't set page access bit in HW and require a
page fault to set it.

Remove that pte_sw_mkyoung() helper and use the already existing
arch_faults_on_old_pte() helper together with pte_mkyoung() instead.

Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
---
 arch/mips/include/asm/pgtable.h |  2 --
 include/linux/pgtable.h         | 16 ----------------
 mm/memory.c                     |  9 ++++++---
 3 files changed, 6 insertions(+), 21 deletions(-)
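The substitution the commit message describes can be sketched with a toy model. This is illustrative C, not kernel code: pte_t is reduced to a bare flags word, _PAGE_ACCESSED stands in for the arch's accessed bit, and the boolean parameters stand in for the per-arch macro definitions; only the overall shape of the before/after logic is taken from the patch.

```c
#include <assert.h>
#include <stdbool.h>

/* Simplified stand-in for the real, arch-specific pte_t. */
typedef struct { unsigned long flags; } pte_t;
#define _PAGE_ACCESSED 0x1UL

static pte_t pte_mkyoung(pte_t pte)
{
	pte.flags |= _PAGE_ACCESSED;	/* mark the page as recently accessed */
	return pte;
}

/* Before the patch: an arch that maintains the access bit in software
 * (MIPS) aliased pte_sw_mkyoung to pte_mkyoung, while the generic
 * fallback in include/linux/pgtable.h was an identity function. The
 * bool models which definition a given arch picked up. */
static pte_t old_style_mkyoung(pte_t pte, bool sw_maintained_access_bit)
{
	return sw_maintained_access_bit ? pte_mkyoung(pte) : pte;
}

/* After the patch: the fault paths test the already-existing
 * arch_faults_on_old_pte() predicate instead (modeled as a bool). */
static pte_t new_style_mkyoung(pte_t pte, bool faults_on_old_pte)
{
	if (faults_on_old_pte)
		pte = pte_mkyoung(pte);
	return pte;
}
```

When the boolean captures "this arch takes a fault just to set the access bit", both styles pre-set the bit at fault time and save that extra fault; on arches where hardware manages the bit, both leave the PTE untouched.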