Message ID: 20221104223604.29615-21-rick.p.edgecombe@intel.com (mailing list archive)
State: New
Series: Shadow stacks for userspace
On Fri, Nov 04, 2022 at 03:35:47PM -0700, Rick Edgecombe wrote:
> diff --git a/mm/huge_memory.c b/mm/huge_memory.c
> index 73b9b78f8cf4..7643a4db1b50 100644
> --- a/mm/huge_memory.c
> +++ b/mm/huge_memory.c
> @@ -1803,6 +1803,13 @@ int change_huge_pmd(struct mmu_gather *tlb, struct vm_area_struct *vma,
>  		return 0;
>  
>  	preserve_write = prot_numa && pmd_write(*pmd);
> +
> +	/*
> +	 * Preserve only normal writable huge PMD, but not shadow
> +	 * stack (RW=0, Dirty=1).
> +	 */
> +	if (vma->vm_flags & VM_SHADOW_STACK)
> +		preserve_write = false;
>  	ret = 1;
>  
>  #ifdef CONFIG_ARCH_ENABLE_THP_MIGRATION
> diff --git a/mm/mprotect.c b/mm/mprotect.c
> index 668bfaa6ed2a..ea82ce5f38fe 100644
> --- a/mm/mprotect.c
> +++ b/mm/mprotect.c
> @@ -115,6 +115,13 @@ static unsigned long change_pte_range(struct mmu_gather *tlb,
>  			pte_t ptent;
>  			bool preserve_write = prot_numa && pte_write(oldpte);
>  
> +			/*
> +			 * Preserve only normal writable PTE, but not shadow
> +			 * stack (RW=0, Dirty=1).
> +			 */
> +			if (vma->vm_flags & VM_SHADOW_STACK)
> +				preserve_write = false;
> +
>  			/*
>  			 * Avoid trapping faults against the zero or KSM
>  			 * pages. See similar comment in change_huge_pmd.

These comments lack a why component; someone is going to wonder wtf this
code is doing in the near future -- that someone might be you.
On Tue, 2022-11-15 at 13:05 +0100, Peter Zijlstra wrote:
> On Fri, Nov 04, 2022 at 03:35:47PM -0700, Rick Edgecombe wrote:
> > diff --git a/mm/huge_memory.c b/mm/huge_memory.c
> > index 73b9b78f8cf4..7643a4db1b50 100644
> > --- a/mm/huge_memory.c
> > +++ b/mm/huge_memory.c
> > @@ -1803,6 +1803,13 @@ int change_huge_pmd(struct mmu_gather *tlb, struct vm_area_struct *vma,
> >  		return 0;
> >  
> >  	preserve_write = prot_numa && pmd_write(*pmd);
> > +
> > +	/*
> > +	 * Preserve only normal writable huge PMD, but not shadow
> > +	 * stack (RW=0, Dirty=1).
> > +	 */
> > +	if (vma->vm_flags & VM_SHADOW_STACK)
> > +		preserve_write = false;
> >  	ret = 1;
> >  
> >  #ifdef CONFIG_ARCH_ENABLE_THP_MIGRATION
> > diff --git a/mm/mprotect.c b/mm/mprotect.c
> > index 668bfaa6ed2a..ea82ce5f38fe 100644
> > --- a/mm/mprotect.c
> > +++ b/mm/mprotect.c
> > @@ -115,6 +115,13 @@ static unsigned long change_pte_range(struct mmu_gather *tlb,
> >  			pte_t ptent;
> >  			bool preserve_write = prot_numa && pte_write(oldpte);
> >  
> > +			/*
> > +			 * Preserve only normal writable PTE, but not shadow
> > +			 * stack (RW=0, Dirty=1).
> > +			 */
> > +			if (vma->vm_flags & VM_SHADOW_STACK)
> > +				preserve_write = false;
> > +
> >  			/*
> >  			 * Avoid trapping faults against the zero or KSM
> >  			 * pages. See similar comment in change_huge_pmd.
> 
> These comments lack a why component; someone is going to wonder wtf this
> code is doing in the near future -- that someone might be you.

Good point, I'll expand it.
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 73b9b78f8cf4..7643a4db1b50 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -1803,6 +1803,13 @@ int change_huge_pmd(struct mmu_gather *tlb, struct vm_area_struct *vma,
 		return 0;
 
 	preserve_write = prot_numa && pmd_write(*pmd);
+
+	/*
+	 * Preserve only normal writable huge PMD, but not shadow
+	 * stack (RW=0, Dirty=1).
+	 */
+	if (vma->vm_flags & VM_SHADOW_STACK)
+		preserve_write = false;
 	ret = 1;
 
 #ifdef CONFIG_ARCH_ENABLE_THP_MIGRATION
diff --git a/mm/mprotect.c b/mm/mprotect.c
index 668bfaa6ed2a..ea82ce5f38fe 100644
--- a/mm/mprotect.c
+++ b/mm/mprotect.c
@@ -115,6 +115,13 @@ static unsigned long change_pte_range(struct mmu_gather *tlb,
 			pte_t ptent;
 			bool preserve_write = prot_numa && pte_write(oldpte);
 
+			/*
+			 * Preserve only normal writable PTE, but not shadow
+			 * stack (RW=0, Dirty=1).
+			 */
+			if (vma->vm_flags & VM_SHADOW_STACK)
+				preserve_write = false;
+
			/*
 			 * Avoid trapping faults against the zero or KSM
 			 * pages. See similar comment in change_huge_pmd.