Message ID: ZP8q05EiU3jCs8al@casper.infradead.org
State: New
Series: [willy@infradead.org: Re: [syzbot mm?] BUG: Bad page map (7)]
> #ifndef set_ptes
> /**
>  * set_ptes - Map consecutive pages to a contiguous range of addresses.
> @@ -231,7 +235,10 @@ static inline void set_ptes(struct mm_struct *mm, unsigned long addr,
>  		if (--nr == 0)
>  			break;
>  		ptep++;
> -		pte = __pte(pte_val(pte) + (1UL << PFN_PTE_SHIFT));
> +		if (__pte_needs_invert(pte_val(pte)))
> +			pte = __pte(pte_val(pte) - (1UL << PFN_PTE_SHIFT));
> +		else
> +			pte = __pte(pte_val(pte) + (1UL << PFN_PTE_SHIFT));
>  	}

Maybe we want some pte_advance() [or similar, you get the spirit] instead?

Leaking this inverted-pte logic into common code really does look nasty.
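For context: with the L1TF mitigation, x86 stores not-present (e.g. PROT_NONE) PTEs with the PFN bits inverted, so advancing such an entry to the next page means *subtracting* one PFN increment rather than adding it. A minimal sketch of what the suggested helper could look like (pte_advance() is only the name floated above, not an existing kernel API): a generic fallback in include/linux/pgtable.h that architectures can override, so the inversion detail stays in x86 code:

	/* Sketch only: generic fallback (include/linux/pgtable.h). */
	#ifndef pte_advance
	static inline pte_t pte_advance(pte_t pte)
	{
		/* Common case: the next page is one PFN increment up. */
		return __pte(pte_val(pte) + (1UL << PFN_PTE_SHIFT));
	}
	#endif

	/*
	 * Sketch only: x86 override (arch/x86/include/asm/pgtable.h),
	 * keeping the L1TF PFN-inversion handling private to the arch.
	 */
	static inline pte_t pte_advance(pte_t pte)
	{
		if (__pte_needs_invert(pte_val(pte)))
			return __pte(pte_val(pte) - (1UL << PFN_PTE_SHIFT));
		return __pte(pte_val(pte) + (1UL << PFN_PTE_SHIFT));
	}
	#define pte_advance pte_advance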
On Mon, Sep 11 2023 at 19:50, David Hildenbrand wrote:
>> #ifndef set_ptes
>> /**
>>  * set_ptes - Map consecutive pages to a contiguous range of addresses.
>> @@ -231,7 +235,10 @@ static inline void set_ptes(struct mm_struct *mm, unsigned long addr,
>>  		if (--nr == 0)
>>  			break;
>>  		ptep++;
>> -		pte = __pte(pte_val(pte) + (1UL << PFN_PTE_SHIFT));
>> +		if (__pte_needs_invert(pte_val(pte)))
>> +			pte = __pte(pte_val(pte) - (1UL << PFN_PTE_SHIFT));
>> +		else
>> +			pte = __pte(pte_val(pte) + (1UL << PFN_PTE_SHIFT));
>>  	}
>
> Maybe we want some pte_advance() [or similar, you get the spirit] instead?
>
> Leaking this inverted-pte logic into common code really does look nasty.

Yes please
diff --git a/arch/x86/include/asm/pgtable-2level.h b/arch/x86/include/asm/pgtable-2level.h
index e9482a11ac52..a89be3e9b032 100644
--- a/arch/x86/include/asm/pgtable-2level.h
+++ b/arch/x86/include/asm/pgtable-2level.h
@@ -123,9 +123,6 @@ static inline u64 flip_protnone_guard(u64 oldval, u64 val, u64 mask)
 	return val;
 }
 
-static inline bool __pte_needs_invert(u64 val)
-{
-	return false;
-}
+#define __pte_needs_invert(val) false
 
 #endif /* _ASM_X86_PGTABLE_2LEVEL_H */
diff --git a/arch/x86/include/asm/pgtable-invert.h b/arch/x86/include/asm/pgtable-invert.h
index a0c1525f1b6f..f21726add655 100644
--- a/arch/x86/include/asm/pgtable-invert.h
+++ b/arch/x86/include/asm/pgtable-invert.h
@@ -17,6 +17,7 @@ static inline bool __pte_needs_invert(u64 val)
 {
 	return val && !(val & _PAGE_PRESENT);
 }
+#define __pte_needs_invert __pte_needs_invert
 
 /* Get a mask to xor with the page table entry to get the correct pfn. */
 static inline u64 protnone_mask(u64 val)
diff --git a/include/linux/pgtable.h b/include/linux/pgtable.h
index 1fba072b3dac..34b12e94b850 100644
--- a/include/linux/pgtable.h
+++ b/include/linux/pgtable.h
@@ -205,6 +205,10 @@ static inline int pmd_young(pmd_t pmd)
 #define arch_flush_lazy_mmu_mode()	do {} while (0)
 #endif
 
+#ifndef __pte_needs_invert
+#define __pte_needs_invert(pte) false
+#endif
+
 #ifndef set_ptes
 /**
  * set_ptes - Map consecutive pages to a contiguous range of addresses.
@@ -231,7 +235,10 @@ static inline void set_ptes(struct mm_struct *mm, unsigned long addr,
 		if (--nr == 0)
 			break;
 		ptep++;
-		pte = __pte(pte_val(pte) + (1UL << PFN_PTE_SHIFT));
+		if (__pte_needs_invert(pte_val(pte)))
+			pte = __pte(pte_val(pte) - (1UL << PFN_PTE_SHIFT));
+		else
+			pte = __pte(pte_val(pte) + (1UL << PFN_PTE_SHIFT));
 	}
 	arch_leave_lazy_mmu_mode();
 }
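With a helper along the lines sketched earlier, the arch-specific branch this patch adds to set_ptes() would collapse back to a single line, and common code would never see the inversion (again a sketch, assuming the hypothetical pte_advance() from above):

		if (--nr == 0)
			break;
		ptep++;
		pte = pte_advance(pte);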