[20/35] mm: Update can_follow_write_pte() for shadow stack

Message ID 20220130211838.8382-21-rick.p.edgecombe@intel.com (mailing list archive)
State: New
Series: Shadow stacks for userspace

Commit Message

Rick Edgecombe Jan. 30, 2022, 9:18 p.m. UTC
From: Yu-cheng Yu <yu-cheng.yu@intel.com>

Can_follow_write_pte() ensures a read-only page is COWed by checking the
FOLL_COW flag, and uses pte_dirty() to confirm that the flag is still valid.

Like a writable data page, a shadow stack page is writable, and becomes
read-only during copy-on-write, but it is always dirty.  Thus, in the
can_follow_write_pte() check, it belongs to the writable page case and
should be excluded from the read-only page pte_dirty() check.  Apply
the same changes to can_follow_write_pmd().

While at it, also split the long line into smaller ones.
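
For quick reference, the net logic change condensed in one place (a sketch
of the mm/gup.c hunk below; the mm/huge_memory.c hunk mirrors it with the
pmd_*() helpers):

	/* Before: a dirty but read-only shadow stack PTE passes the
	 * FOLL_FORCE/FOLL_COW + pte_dirty() test as if it had been COWed.
	 */
	return pte_write(pte) ||
		((flags & FOLL_FORCE) && (flags & FOLL_COW) && pte_dirty(pte));

	/* After: shadow stack VMAs are excluded, since their PTEs are
	 * always dirty and pte_dirty() cannot prove a COW cycle there.
	 */
	if (pte_write(pte))
		return true;
	if ((flags & (FOLL_FORCE | FOLL_COW)) != (FOLL_FORCE | FOLL_COW))
		return false;
	if (!pte_dirty(pte))
		return false;
	if (is_shadow_stack_mapping(vma->vm_flags))
		return false;
	return true;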

Signed-off-by: Yu-cheng Yu <yu-cheng.yu@intel.com>
Reviewed-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Signed-off-by: Rick Edgecombe <rick.p.edgecombe@intel.com>
Cc: Kees Cook <keescook@chromium.org>
---

Yu-cheng v26:
 - Instead of passing vm_flags, pass down vma pointer to can_follow_write_*().

Yu-cheng v25:
 - Split long line into smaller ones.

Yu-cheng v24:
 - Change arch_shadow_stack_mapping() to is_shadow_stack_mapping().

 mm/gup.c         | 16 ++++++++++++----
 mm/huge_memory.c | 16 ++++++++++++----
 2 files changed, 24 insertions(+), 8 deletions(-)

Comments

Dave Hansen Feb. 9, 2022, 10:50 p.m. UTC | #1
On 1/30/22 13:18, Rick Edgecombe wrote:
> From: Yu-cheng Yu <yu-cheng.yu@intel.com>
> 
> Can_follow_write_pte() ensures a read-only page is COWed by checking the
> FOLL_COW flag, and uses pte_dirty() to confirm that the flag is still valid.
> 
> Like a writable data page, a shadow stack page is writable, and becomes
> read-only during copy-on-write,

I thought we could not have read-only shadow stack pages.  What does a
read-only shadow stack PTE look like? ;)

> but it is always dirty.  Thus, in the
> can_follow_write_pte() check, it belongs to the writable page case and
> should be excluded from the read-only page pte_dirty() check.  Apply
> the same changes to can_follow_write_pmd().
> 
> While at it, also split the long line into smaller ones.

FWIW, I probably would have had a preparatory patch for this part.  The
advantage is that if you break existing code, it's a lot easier to
figure it out if you have a separate refactoring patch.  Also, for a
patch like this, the refactoring might result in the same exact binary.
It's a pretty good sign that your patch won't cause regressions if it
results in the same binary.

> diff --git a/mm/gup.c b/mm/gup.c
> index f0af462ac1e2..95b7d1084c44 100644
> --- a/mm/gup.c
> +++ b/mm/gup.c
> @@ -464,10 +464,18 @@ static int follow_pfn_pte(struct vm_area_struct *vma, unsigned long address,
>   * FOLL_FORCE can write to even unwritable pte's, but only
>   * after we've gone through a COW cycle and they are dirty.
>   */
> -static inline bool can_follow_write_pte(pte_t pte, unsigned int flags)
> +static inline bool can_follow_write_pte(pte_t pte, unsigned int flags,
> +					struct vm_area_struct *vma)
>  {
> -	return pte_write(pte) ||
> -		((flags & FOLL_FORCE) && (flags & FOLL_COW) && pte_dirty(pte));
> +	if (pte_write(pte))
> +		return true;
> +	if ((flags & (FOLL_FORCE | FOLL_COW)) != (FOLL_FORCE | FOLL_COW))
> +		return false;
> +	if (!pte_dirty(pte))
> +		return false;
> +	if (is_shadow_stack_mapping(vma->vm_flags))
> +		return false;

You had me up until this is_shadow_stack_mapping().  It wasn't mentioned
at all in the changelog.  Logically, I think it's trying to say that a
shadow stack VMA never allows a FOLL_FORCE override.

That makes some sense, but it's a pretty big point not to mention in the
changelog.

> +	return true;
>  }
>  
>  static struct page *follow_page_pte(struct vm_area_struct *vma,
> @@ -510,7 +518,7 @@ static struct page *follow_page_pte(struct vm_area_struct *vma,
>  	}
>  	if ((flags & FOLL_NUMA) && pte_protnone(pte))
>  		goto no_page;
> -	if ((flags & FOLL_WRITE) && !can_follow_write_pte(pte, flags)) {
> +	if ((flags & FOLL_WRITE) && !can_follow_write_pte(pte, flags, vma)) {
>  		pte_unmap_unlock(ptep, ptl);
>  		return NULL;
>  	}
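
(is_shadow_stack_mapping() is introduced earlier in the series; the v24 note
above says it was renamed from arch_shadow_stack_mapping().  Its definition
is not part of this patch, but it is presumably a simple VMA-flag test along
these lines, where the flag name is an assumption, not taken from the patch:

	/* sketch only; the real helper lives in an earlier patch */
	static inline bool is_shadow_stack_mapping(vm_flags_t vm_flags)
	{
		return vm_flags & VM_SHADOW_STACK;	/* assumed flag name */
	}

On that reading, Dave's interpretation is exact: FOLL_FORCE write access is
refused for every PTE in a shadow stack VMA.)
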
Dave Hansen Feb. 9, 2022, 10:52 p.m. UTC | #2
On 1/30/22 13:18, Rick Edgecombe wrote:
> Like a writable data page, a shadow stack page is writable, and becomes
> read-only during copy-on-write, but it is always dirty.

One other thing...

The language in these changelogs is a bit sloppy.  For instance, what
does "always dirty" mean here?  pte_dirty()?  Or strictly _PAGE_DIRTY?

In other words, logically dirty, or literally "has *the* dirty bit set"?
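
(Background for the question: per the CET spec, a valid shadow stack PTE is
Write=0,Dirty=1, and earlier patches in this series add a software COW bit so
that "dirty" can be tracked separately from the hardware bit.  A sketch of
the two possible readings, with the software bit's name and position assumed
rather than taken from this patch:

	#define _PAGE_DIRTY	(1UL << 6)	/* the x86 hardware dirty bit */
	#define _PAGE_COW	(1UL << 57)	/* software bit; name/position assumed */

	/* literally "has *the* dirty bit set" */
	hw_dirty = pte_flags & _PAGE_DIRTY;

	/* logically dirty, as pte_dirty() might report it */
	logically_dirty = pte_flags & (_PAGE_DIRTY | _PAGE_COW);

Which of the two the changelog's "always dirty" means is exactly the open
question.)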
David Laight Feb. 10, 2022, 10:45 p.m. UTC | #3
From: Dave Hansen
> Sent: 09 February 2022 22:52
> 
> On 1/30/22 13:18, Rick Edgecombe wrote:
> > Like a writable data page, a shadow stack page is writable, and becomes
> > read-only during copy-on-write, but it is always dirty.
> 
> One other thing...
> 
> The language in these changelogs is a bit sloppy.  For instance, what
> does "always dirty" mean here?  pte_dirty()?  Or strictly _PAGE_DIRTY?
> 
> In other words, logically dirty, or literally "has *the* dirty bit set"?

Doesn't COW have to set it readonly - so that the access faults?
And then the fault code sets it readonly+dirty (without write)
to allow the shadow stack accesses to not fault.

Or am I mis-guessing what the docs actually say?

	David
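
(A state sketch of the cycle David is describing, assuming the
Write=0,Dirty=1 encoding the series uses for shadow stack pages:

	/*
	 * normal shadow stack PTE:   Write=0, Dirty=1  (CALL/RET work)
	 * armed for COW, e.g. fork:  Write=0, Dirty=0  (shadow stack pushes fault)
	 * after the COW fault:       Write=0, Dirty=1  (on the private copy)
	 */

If that matches the docs, the transient Write=0,Dirty=0 state is the
"read-only" page the changelog refers to.)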


Patch

diff --git a/mm/gup.c b/mm/gup.c
index f0af462ac1e2..95b7d1084c44 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -464,10 +464,18 @@  static int follow_pfn_pte(struct vm_area_struct *vma, unsigned long address,
  * FOLL_FORCE can write to even unwritable pte's, but only
  * after we've gone through a COW cycle and they are dirty.
  */
-static inline bool can_follow_write_pte(pte_t pte, unsigned int flags)
+static inline bool can_follow_write_pte(pte_t pte, unsigned int flags,
+					struct vm_area_struct *vma)
 {
-	return pte_write(pte) ||
-		((flags & FOLL_FORCE) && (flags & FOLL_COW) && pte_dirty(pte));
+	if (pte_write(pte))
+		return true;
+	if ((flags & (FOLL_FORCE | FOLL_COW)) != (FOLL_FORCE | FOLL_COW))
+		return false;
+	if (!pte_dirty(pte))
+		return false;
+	if (is_shadow_stack_mapping(vma->vm_flags))
+		return false;
+	return true;
 }
 
 static struct page *follow_page_pte(struct vm_area_struct *vma,
@@ -510,7 +518,7 @@  static struct page *follow_page_pte(struct vm_area_struct *vma,
 	}
 	if ((flags & FOLL_NUMA) && pte_protnone(pte))
 		goto no_page;
-	if ((flags & FOLL_WRITE) && !can_follow_write_pte(pte, flags)) {
+	if ((flags & FOLL_WRITE) && !can_follow_write_pte(pte, flags, vma)) {
 		pte_unmap_unlock(ptep, ptl);
 		return NULL;
 	}
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 3588e9fefbe0..1c7167e6f223 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -1346,10 +1346,18 @@  vm_fault_t do_huge_pmd_wp_page(struct vm_fault *vmf)
  * FOLL_FORCE can write to even unwritable pmd's, but only
  * after we've gone through a COW cycle and they are dirty.
  */
-static inline bool can_follow_write_pmd(pmd_t pmd, unsigned int flags)
+static inline bool can_follow_write_pmd(pmd_t pmd, unsigned int flags,
+					struct vm_area_struct *vma)
 {
-	return pmd_write(pmd) ||
-	       ((flags & FOLL_FORCE) && (flags & FOLL_COW) && pmd_dirty(pmd));
+	if (pmd_write(pmd))
+		return true;
+	if ((flags & (FOLL_FORCE | FOLL_COW)) != (FOLL_FORCE | FOLL_COW))
+		return false;
+	if (!pmd_dirty(pmd))
+		return false;
+	if (is_shadow_stack_mapping(vma->vm_flags))
+		return false;
+	return true;
 }
 
 struct page *follow_trans_huge_pmd(struct vm_area_struct *vma,
@@ -1362,7 +1370,7 @@  struct page *follow_trans_huge_pmd(struct vm_area_struct *vma,
 
 	assert_spin_locked(pmd_lockptr(mm, pmd));
 
-	if (flags & FOLL_WRITE && !can_follow_write_pmd(*pmd, flags))
+	if (flags & FOLL_WRITE && !can_follow_write_pmd(*pmd, flags, vma))
 		goto out;
 
 	/* Avoid dumping huge zero page */