| Message ID | 20230920040958.866520-1-willy@infradead.org |
|---|---|
| State | New |
| Series | None |
On 9/20/23 12:09, Matthew Wilcox (Oracle) wrote:
> In order to fix the L1TF vulnerability, x86 can invert the PTE bits for
> PROT_NONE VMAs, which means we cannot move from one PTE to the next by
> adding 1 to the PFN field of the PTE.  Abstract advancing the PTE to
> the next PFN through a pte_next_pfn() function/macro.
>
> Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
> Fixes: bcc6cc832573 ("mm: add default definition of set_ptes()")
> Reported-by: syzbot+55cc72f8cc3a549119df@syzkaller.appspotmail.com

Reviewed-by: Yin Fengwei <fengwei.yin@intel.com>

Thanks a lot for taking care of this.

Regards
Yin, Fengwei

> ---
>  arch/x86/include/asm/pgtable.h | 8 ++++++++
>  include/linux/pgtable.h        | 10 +++++++++-
>  2 files changed, 17 insertions(+), 1 deletion(-)
>
> diff --git a/arch/x86/include/asm/pgtable.h b/arch/x86/include/asm/pgtable.h
> index d6ad98ca1288..e02b179ec659 100644
> --- a/arch/x86/include/asm/pgtable.h
> +++ b/arch/x86/include/asm/pgtable.h
> @@ -955,6 +955,14 @@ static inline int pte_same(pte_t a, pte_t b)
>  	return a.pte == b.pte;
>  }
>
> +static inline pte_t pte_next_pfn(pte_t pte)
> +{
> +	if (__pte_needs_invert(pte_val(pte)))
> +		return __pte(pte_val(pte) - (1UL << PFN_PTE_SHIFT));
> +	return __pte(pte_val(pte) + (1UL << PFN_PTE_SHIFT));
> +}
> +#define pte_next_pfn	pte_next_pfn
> +
>  static inline int pte_present(pte_t a)
>  {
>  	return pte_flags(a) & (_PAGE_PRESENT | _PAGE_PROTNONE);
> diff --git a/include/linux/pgtable.h b/include/linux/pgtable.h
> index 1fba072b3dac..af7639c3b0a3 100644
> --- a/include/linux/pgtable.h
> +++ b/include/linux/pgtable.h
> @@ -206,6 +206,14 @@ static inline int pmd_young(pmd_t pmd)
>  #endif
>
>  #ifndef set_ptes
> +
> +#ifndef pte_next_pfn
> +static inline pte_t pte_next_pfn(pte_t pte)
> +{
> +	return __pte(pte_val(pte) + (1UL << PFN_PTE_SHIFT));
> +}
> +#endif
> +
>  /**
>   * set_ptes - Map consecutive pages to a contiguous range of addresses.
>   * @mm: Address space to map the pages into.
> @@ -231,7 +239,7 @@ static inline void set_ptes(struct mm_struct *mm, unsigned long addr,
>  		if (--nr == 0)
>  			break;
>  		ptep++;
> -		pte = __pte(pte_val(pte) + (1UL << PFN_PTE_SHIFT));
> +		pte = pte_next_pfn(pte);
>  	}
>  	arch_leave_lazy_mmu_mode();
>  }
On Wed, 20 Sep 2023 05:09:58 +0100 "Matthew Wilcox (Oracle)" <willy@infradead.org> wrote:

> In order to fix the L1TF vulnerability, x86 can invert the PTE bits for
> PROT_NONE VMAs, which means we cannot move from one PTE to the next by
> adding 1 to the PFN field of the PTE.  Abstract advancing the PTE to
> the next PFN through a pte_next_pfn() function/macro.
>
> Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
> Fixes: bcc6cc832573 ("mm: add default definition of set_ptes()")
> Reported-by: syzbot+55cc72f8cc3a549119df@syzkaller.appspotmail.com

Is it just me, or is it a pain hunting down things via message IDs?

I tweaked the changelog thusly, pointing out that this fixes a BUG.

: In order to fix the L1TF vulnerability, x86 can invert the PTE bits for
: PROT_NONE VMAs, which means we cannot move from one PTE to the next by
: adding 1 to the PFN field of the PTE.  This results in the BUG reported at
: [1].
:
: Abstract advancing the PTE to the next PFN through a pte_next_pfn()
: function/macro.
:
: Link: https://lkml.kernel.org/r/20230920040958.866520-1-willy@infradead.org
: Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
: Fixes: bcc6cc832573 ("mm: add default definition of set_ptes()")
: Reported-by: syzbot+55cc72f8cc3a549119df@syzkaller.appspotmail.com
: Closes: https://lkml.kernel.org/r/000000000000d099fa0604f03351@google.com [1]
diff --git a/arch/x86/include/asm/pgtable.h b/arch/x86/include/asm/pgtable.h
index d6ad98ca1288..e02b179ec659 100644
--- a/arch/x86/include/asm/pgtable.h
+++ b/arch/x86/include/asm/pgtable.h
@@ -955,6 +955,14 @@ static inline int pte_same(pte_t a, pte_t b)
 	return a.pte == b.pte;
 }
 
+static inline pte_t pte_next_pfn(pte_t pte)
+{
+	if (__pte_needs_invert(pte_val(pte)))
+		return __pte(pte_val(pte) - (1UL << PFN_PTE_SHIFT));
+	return __pte(pte_val(pte) + (1UL << PFN_PTE_SHIFT));
+}
+#define pte_next_pfn	pte_next_pfn
+
 static inline int pte_present(pte_t a)
 {
 	return pte_flags(a) & (_PAGE_PRESENT | _PAGE_PROTNONE);
diff --git a/include/linux/pgtable.h b/include/linux/pgtable.h
index 1fba072b3dac..af7639c3b0a3 100644
--- a/include/linux/pgtable.h
+++ b/include/linux/pgtable.h
@@ -206,6 +206,14 @@ static inline int pmd_young(pmd_t pmd)
 #endif
 
 #ifndef set_ptes
+
+#ifndef pte_next_pfn
+static inline pte_t pte_next_pfn(pte_t pte)
+{
+	return __pte(pte_val(pte) + (1UL << PFN_PTE_SHIFT));
+}
+#endif
+
 /**
  * set_ptes - Map consecutive pages to a contiguous range of addresses.
  * @mm: Address space to map the pages into.
@@ -231,7 +239,7 @@ static inline void set_ptes(struct mm_struct *mm, unsigned long addr,
 		if (--nr == 0)
 			break;
 		ptep++;
-		pte = __pte(pte_val(pte) + (1UL << PFN_PTE_SHIFT));
+		pte = pte_next_pfn(pte);
 	}
 	arch_leave_lazy_mmu_mode();
 }
In order to fix the L1TF vulnerability, x86 can invert the PTE bits for
PROT_NONE VMAs, which means we cannot move from one PTE to the next by
adding 1 to the PFN field of the PTE.  Abstract advancing the PTE to
the next PFN through a pte_next_pfn() function/macro.

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Fixes: bcc6cc832573 ("mm: add default definition of set_ptes()")
Reported-by: syzbot+55cc72f8cc3a549119df@syzkaller.appspotmail.com
---
 arch/x86/include/asm/pgtable.h | 8 ++++++++
 include/linux/pgtable.h        | 10 +++++++++-
 2 files changed, 17 insertions(+), 1 deletion(-)