Message ID | 20220130211838.8382-12-rick.p.edgecombe@intel.com |
---|---|
State | New |
Series | Shadow stacks for userspace |
On 1/30/22 13:18, Rick Edgecombe wrote:
> From: Yu-cheng Yu <yu-cheng.yu@intel.com>
>
> The read-only and Dirty PTE has been used to indicate copy-on-write pages.

Nit: This is another opportunity to use consistent terminology for these
Write=0,Dirty=1 PTEs.

> However, newer x86 processors also regard a read-only and Dirty PTE as a
> shadow stack page. In order to separate the two, the software-defined
> _PAGE_COW is created to replace _PAGE_DIRTY for the copy-on-write case, and
> pte_*() are updated.

The tense here is weird. "_PAGE_COW is created" is present tense, but it
refers to something that happened earlier in the series.

> Pte_modify() changes a PTE to 'newprot', but it doesn't use the pte_*().

I'm not seeing a clear problem statement in there. It looks something like
this to me:

	pte_modify() takes a "raw" pgprot_t which was not necessarily
	created with any of the existing PTE bit helpers. That means that
	it can return a pte_t with Write=0,Dirty=1: a shadow stack PTE,
	when it did not intend to create one.

But, this kinda looks like a hack to me. It all boils down to
_PAGE_CHG_MASK. If pte_modify() can change the bit's value, it is not
included in _PAGE_CHG_MASK. But, pte_modify() *CAN* change the _PAGE_DIRTY
value now. Another way of saying it is that _PAGE_DIRTY is now a
permission bit (part-time, at least).

> diff --git a/arch/x86/include/asm/pgtable.h b/arch/x86/include/asm/pgtable.h
> index a4a75e78a934..5c3886f6ccda 100644
> --- a/arch/x86/include/asm/pgtable.h
> +++ b/arch/x86/include/asm/pgtable.h
> @@ -773,6 +773,23 @@ static inline pmd_t pmd_mkinvalid(pmd_t pmd)
>
>  static inline u64 flip_protnone_guard(u64 oldval, u64 val, u64 mask);
>
> +static inline pteval_t fixup_dirty_pte(pteval_t pteval)
> +{
> +	pte_t pte = __pte(pteval);
> +
> +	/*
> +	 * Fix up potential shadow stack page flags because the RO, Dirty
> +	 * PTE is special.
> +	 */
> +	if (cpu_feature_enabled(X86_FEATURE_SHSTK)) {
> +		if (pte_dirty(pte)) {
> +			pte = pte_mkclean(pte);
> +			pte = pte_mkdirty(pte);
> +		}
> +	}
> +	return pte_val(pte);
> +}
> +
>  static inline pte_t pte_modify(pte_t pte, pgprot_t newprot)
>  {
>  	pteval_t val = pte_val(pte), oldval = val;
> @@ -783,16 +800,36 @@ static inline pte_t pte_modify(pte_t pte, pgprot_t newprot)
>  	 */
>  	val &= _PAGE_CHG_MASK;
>  	val |= check_pgprot(newprot) & ~_PAGE_CHG_MASK;
> +	val = fixup_dirty_pte(val);
>  	val = flip_protnone_guard(oldval, val, PTE_PFN_MASK);
>  	return __pte(val);
>  }

Maybe something like this? We can take _PAGE_DIRTY out of _PAGE_CHG_MASK,
then the p*_modify() functions look like this:

 static inline pte_t pte_modify(pte_t pte, pgprot_t newprot)
 {
 	pteval_t val = pte_val(pte), oldval = val;
+	pte_t pte_result;

 	/* Chop off any bits that might change with 'newprot': */
 	val &= _PAGE_CHG_MASK;
 	val |= check_pgprot(newprot) & ~_PAGE_CHG_MASK;
 	val = flip_protnone_guard(oldval, val, PTE_PFN_MASK);
+	pte_result = __pte(val);
+
+	if (pte_dirty(__pte(oldval)))
+		pte_result = pte_mkdirty(pte_result);
+
+	return pte_result;
 }

This:
1. Makes logical sense: the dirty bit *IS* special in that it has to be
   logically preserved across permission changes.
2. Would work with or without shadow stacks. That exact code would even
   work on a non-shadow-stack kernel.
3. Doesn't introduce *any* new shadow-stack conditional code; the one
   already hidden in pte_mkdirty() is sufficient.
4. Avoids silly things like setting a bit and then immediately clearing it
   in a "fixup".
5. Removes the opaque "fixup" abstraction function.

That's way better if I do say so myself.
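To make the failure mode above concrete, here is a small standalone model of
the _PAGE_CHG_MASK arithmetic. This is not kernel code: _PAGE_WRITE,
_PAGE_DIRTY, _PAGE_CHG_MASK and model_pte_modify() below are stand-in values
and names chosen only for illustration. It shows how the raw mask math can
turn an ordinary writable, dirty PTE into the Write=0,Dirty=1 encoding when
'newprot' drops write permission:

#include <stdio.h>
#include <stdint.h>

#define _PAGE_WRITE	(1u << 1)	/* stand-in for the real bit */
#define _PAGE_DIRTY	(1u << 6)	/* stand-in for the real bit */
/* Dirty kept in the "preserved" mask, as in the current pte_modify(): */
#define _PAGE_CHG_MASK	(_PAGE_DIRTY)

static uint32_t model_pte_modify(uint32_t pte, uint32_t newprot)
{
	uint32_t val = pte;

	val &= _PAGE_CHG_MASK;			/* keep preserved bits from old PTE */
	val |= newprot & ~_PAGE_CHG_MASK;	/* take the rest from newprot */
	return val;
}

int main(void)
{
	/* An ordinary writable, dirty PTE ... */
	uint32_t pte = _PAGE_WRITE | _PAGE_DIRTY;
	/* ... having its protection changed to read-only (no write bit) */
	uint32_t ro_prot = 0;

	uint32_t result = model_pte_modify(pte, ro_prot);

	printf("Write=%u Dirty=%u\n",
	       !!(result & _PAGE_WRITE), !!(result & _PAGE_DIRTY));
	return 0;
}

On this toy model the result is Write=0,Dirty=1, i.e. the shadow stack
encoding that both the patch's fixup and the pte_mkdirty()-based suggestion
above are trying to keep out of the page tables.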
diff --git a/arch/x86/include/asm/pgtable.h b/arch/x86/include/asm/pgtable.h
index a4a75e78a934..5c3886f6ccda 100644
--- a/arch/x86/include/asm/pgtable.h
+++ b/arch/x86/include/asm/pgtable.h
@@ -773,6 +773,23 @@ static inline pmd_t pmd_mkinvalid(pmd_t pmd)
 
 static inline u64 flip_protnone_guard(u64 oldval, u64 val, u64 mask);
 
+static inline pteval_t fixup_dirty_pte(pteval_t pteval)
+{
+	pte_t pte = __pte(pteval);
+
+	/*
+	 * Fix up potential shadow stack page flags because the RO, Dirty
+	 * PTE is special.
+	 */
+	if (cpu_feature_enabled(X86_FEATURE_SHSTK)) {
+		if (pte_dirty(pte)) {
+			pte = pte_mkclean(pte);
+			pte = pte_mkdirty(pte);
+		}
+	}
+	return pte_val(pte);
+}
+
 static inline pte_t pte_modify(pte_t pte, pgprot_t newprot)
 {
 	pteval_t val = pte_val(pte), oldval = val;
@@ -783,16 +800,36 @@ static inline pte_t pte_modify(pte_t pte, pgprot_t newprot)
 	 */
 	val &= _PAGE_CHG_MASK;
 	val |= check_pgprot(newprot) & ~_PAGE_CHG_MASK;
+	val = fixup_dirty_pte(val);
 	val = flip_protnone_guard(oldval, val, PTE_PFN_MASK);
 	return __pte(val);
 }
 
+static inline int pmd_write(pmd_t pmd);
+static inline pmdval_t fixup_dirty_pmd(pmdval_t pmdval)
+{
+	pmd_t pmd = __pmd(pmdval);
+
+	/*
+	 * Fix up potential shadow stack page flags because the RO, Dirty
+	 * PMD is special.
+	 */
+	if (cpu_feature_enabled(X86_FEATURE_SHSTK)) {
+		if (pmd_dirty(pmd)) {
+			pmd = pmd_mkclean(pmd);
+			pmd = pmd_mkdirty(pmd);
+		}
+	}
+	return pmd_val(pmd);
+}
+
 static inline pmd_t pmd_modify(pmd_t pmd, pgprot_t newprot)
 {
 	pmdval_t val = pmd_val(pmd), oldval = val;
 
 	val &= _HPAGE_CHG_MASK;
 	val |= check_pgprot(newprot) & ~_HPAGE_CHG_MASK;
+	val = fixup_dirty_pmd(val);
 	val = flip_protnone_guard(oldval, val, PHYSICAL_PMD_PAGE_MASK);
 	return __pmd(val);
 }
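For illustration only, here is how the patch's fixup composes with the
pte_mkdirty() behavior introduced earlier in the series (for a read-only PTE
it sets the software _PAGE_COW bit rather than the hardware Dirty bit).
Again this is a standalone model with stand-in bit values and names
(model_mkdirty(), model_fixup_dirty(), _PAGE_COW are not the real kernel
definitions), not the actual implementation:

#include <stdio.h>
#include <stdint.h>

#define _PAGE_WRITE	(1u << 1)	/* stand-in for the real bit */
#define _PAGE_DIRTY	(1u << 6)	/* stand-in for the real bit */
#define _PAGE_COW	(1u << 10)	/* stand-in software COW bit */

/* Model of the series' pte_mkdirty(): read-only PTEs get COW, not Dirty */
static uint32_t model_mkdirty(uint32_t pte)
{
	if (!(pte & _PAGE_WRITE))
		return pte | _PAGE_COW;
	return pte | _PAGE_DIRTY;
}

/* Model of fixup_dirty_pte(): pte_mkclean() followed by pte_mkdirty() */
static uint32_t model_fixup_dirty(uint32_t pte)
{
	if (pte & _PAGE_DIRTY) {
		pte &= ~_PAGE_DIRTY;
		pte = model_mkdirty(pte);
	}
	return pte;
}

int main(void)
{
	/* The problematic pte_modify() output: read-only but Dirty */
	uint32_t pte = _PAGE_DIRTY;
	uint32_t fixed = model_fixup_dirty(pte);

	printf("Write=%u Dirty=%u COW=%u\n",
	       !!(fixed & _PAGE_WRITE), !!(fixed & _PAGE_DIRTY),
	       !!(fixed & _PAGE_COW));
	return 0;
}

In this model the fixup turns Write=0,Dirty=1 into Write=0,COW=1. The
pte_mkdirty()-based alternative in the review reaches the same end state,
but by taking _PAGE_DIRTY out of _PAGE_CHG_MASK it avoids ever constructing
the intermediate Write=0,Dirty=1 value in the first place.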