[v4,1/1] KVM: s390: pv: fix race when making a page secure

Message ID 20250304182304.178746-2-imbrenda@linux.ibm.com (mailing list archive)
State New
Series KVM: s390: fix a newly introduced bug

Commit Message

Claudio Imbrenda March 4, 2025, 6:23 p.m. UTC
Holding the pte lock for the page that is being converted to secure is
needed to avoid races. A previous commit removed that locking, which made
the conversion racy again. Fix it by taking the pte lock again.
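A minimal sketch of the restored pattern, for illustration only
(sketch_make_hva_secure() is a hypothetical wrapper name; the real logic
lives in make_hva_secure() in this patch):

    static int sketch_make_hva_secure(struct mm_struct *mm, unsigned long hva,
                                      struct uv_cb_header *uvcb)
    {
            spinlock_t *ptl;
            pte_t *ptep;
            int rc;

            /* get_locked_pte() maps the pte and takes the pte lock */
            ptep = get_locked_pte(mm, hva, &ptl);
            if (!ptep)
                    return -ENXIO;
            /* the mapping cannot change while the pte lock is held */
            rc = make_folio_secure(mm, page_folio(pte_page(*ptep)), uvcb);
            pte_unmap_unlock(ptep, ptl);
            return rc;
    }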

Fixes: 5cbe24350b7d ("KVM: s390: move pv gmap functions into kvm")
Reported-by: David Hildenbrand <david@redhat.com>
Signed-off-by: Claudio Imbrenda <imbrenda@linux.ibm.com>
Tested-by: David Hildenbrand <david@redhat.com>
Reviewed-by: David Hildenbrand <david@redhat.com>
---
 arch/s390/include/asm/gmap.h |   1 -
 arch/s390/include/asm/uv.h   |   3 +-
 arch/s390/kernel/uv.c        | 135 +++++++++++++++++++++++++++++++++--
 arch/s390/kvm/gmap.c         | 101 ++------------------------
 arch/s390/kvm/kvm-s390.c     |  25 ++++---
 arch/s390/mm/gmap.c          |  28 --------
 6 files changed, 153 insertions(+), 140 deletions(-)

Comments

David Hildenbrand March 6, 2025, 10:23 a.m. UTC | #1
>   /**
> - * make_folio_secure() - make a folio secure
> + * __make_folio_secure() - make a folio secure
>    * @folio: the folio to make secure
>    * @uvcb: the uvcb that describes the UVC to be used
>    *
> @@ -243,14 +276,13 @@ static int expected_folio_refs(struct folio *folio)
>    *         -EINVAL if the UVC failed for other reasons.
>    *
>    * Context: The caller must hold exactly one extra reference on the folio
> - *          (it's the same logic as split_folio())
> + *          (it's the same logic as split_folio()), and the folio must be
> + *          locked.
>    */
> -int make_folio_secure(struct folio *folio, struct uv_cb_header *uvcb)
> +static int __make_folio_secure(struct folio *folio, struct uv_cb_header *uvcb)

One more nit: -EBUSY can no longer be returned from this function, so you 
might just remove it from the doc above.


While chasing a very weird folio split bug that seems to result in late 
validation issues (:/), I was wondering if __gmap_destroy_page could 
similarly be problematic.

We're now no longer holding the PTL while performing the operation.

(not that that would explain the issue I am chasing, because 
gmap_destroy_page() is never called in my setup)
David Hildenbrand March 6, 2025, 10:07 p.m. UTC | #2
On 06.03.25 11:23, David Hildenbrand wrote:
>>    /**
>> - * make_folio_secure() - make a folio secure
>> + * __make_folio_secure() - make a folio secure
>>     * @folio: the folio to make secure
>>     * @uvcb: the uvcb that describes the UVC to be used
>>     *
>> @@ -243,14 +276,13 @@ static int expected_folio_refs(struct folio *folio)
>>     *         -EINVAL if the UVC failed for other reasons.
>>     *
>>     * Context: The caller must hold exactly one extra reference on the folio
>> - *          (it's the same logic as split_folio())
>> + *          (it's the same logic as split_folio()), and the folio must be
>> + *          locked.
>>     */
>> -int make_folio_secure(struct folio *folio, struct uv_cb_header *uvcb)
>> +static int __make_folio_secure(struct folio *folio, struct uv_cb_header *uvcb)
> 
> One more nit: -EBUSY can no longer be returned from this function, so you
> might just remove it from the doc above.
> 
> 
> While chasing a very weird folio split bug that seems to result in late
> validation issues (:/), I was wondering if __gmap_destroy_page could
> similarly be problematic.
> 
> We're now no longer holding the PTL while performing the operation.
> 
> (not that that would explain the issue I am chasing, because
> gmap_destroy_page() is never called in my setup)
> 

Okay, I've been debugging the weird issue I am seeing for way too long, and I
did not find the root cause yet. But the following things are problematic:

1) To walk the page tables, we need the mmap lock in read mode.

2) To walk the page tables, we must know that a VMA exists

3) get_locked_pte() must not be used on hugetlb areas.

Further, the following things should be cleaned up:

4) s390_wiggle_split_folio() is only used in that file

5) gmap_make_secure() likely should be returning -EFAULT


See below, I went with a folio_walk (which also checks for pte_present()
like the old code did, but that should not matter here) so we can get rid of the
get_locked_pte() usage completely.
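
The pattern, in short (sketch only; the full version with all the checks
is in the patch below):

    mmap_read_lock(mm);
    vma = vma_lookup(mm, hva);
    if (!vma) {
            mmap_read_unlock(mm);
            return -EFAULT;
    }
    /* folio_walk_start() takes the PTL for us */
    folio = folio_walk_start(&fw, vma, hva, 0);
    if (!folio) {
            mmap_read_unlock(mm);
            return -ENXIO;
    }
    /* ... sanity checks + make_folio_secure() ... */
    folio_walk_end(&fw, vma);       /* drops the PTL */
    mmap_read_unlock(mm);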


 From 1b9a4306b79a352daf80708252d166114e7335de Mon Sep 17 00:00:00 2001
From: David Hildenbrand <david@redhat.com>
Date: Thu, 6 Mar 2025 22:43:43 +0100
Subject: [PATCH] merge

Signed-off-by: David Hildenbrand <david@redhat.com>
---
  arch/s390/include/asm/uv.h |  1 -
  arch/s390/kernel/uv.c      | 41 ++++++++++++++++++--------------------
  arch/s390/kvm/gmap.c       |  2 +-
  3 files changed, 20 insertions(+), 24 deletions(-)

diff --git a/arch/s390/include/asm/uv.h b/arch/s390/include/asm/uv.h
index fa33a6ff2fabf..46fb0ef6f9847 100644
--- a/arch/s390/include/asm/uv.h
+++ b/arch/s390/include/asm/uv.h
@@ -634,7 +634,6 @@ int uv_convert_from_secure_pte(pte_t pte);
  int make_hva_secure(struct mm_struct *mm, unsigned long hva, struct uv_cb_header *uvcb);
  int uv_convert_from_secure(unsigned long paddr);
  int uv_convert_from_secure_folio(struct folio *folio);
-int s390_wiggle_split_folio(struct mm_struct *mm, struct folio *folio, bool split);
  
  void setup_uv(void);
  
diff --git a/arch/s390/kernel/uv.c b/arch/s390/kernel/uv.c
index 63420a5f3ee57..11a1894e63405 100644
--- a/arch/s390/kernel/uv.c
+++ b/arch/s390/kernel/uv.c
@@ -270,7 +270,6 @@ static int expected_folio_refs(struct folio *folio)
   *
   * Return: 0 on success;
   *         -EBUSY if the folio is in writeback or has too many references;
- *         -E2BIG if the folio is large;
   *         -EAGAIN if the UVC needs to be attempted again;
   *         -ENXIO if the address is not mapped;
   *         -EINVAL if the UVC failed for other reasons.
@@ -324,17 +323,6 @@ static int make_folio_secure(struct mm_struct *mm, struct folio *folio, struct u
  	return rc;
  }
  
-static pte_t *get_locked_valid_pte(struct mm_struct *mm, unsigned long hva, spinlock_t **ptl)
-{
-	pte_t *ptep = get_locked_pte(mm, hva, ptl);
-
-	if (ptep && (pte_val(*ptep) & _PAGE_INVALID)) {
-		pte_unmap_unlock(ptep, *ptl);
-		ptep = NULL;
-	}
-	return ptep;
-}
-
  /**
   * s390_wiggle_split_folio() - try to drain extra references to a folio and optionally split
   * @mm:    the mm containing the folio to work on
@@ -344,7 +332,7 @@ static pte_t *get_locked_valid_pte(struct mm_struct *mm, unsigned long hva, spin
   * Context: Must be called while holding an extra reference to the folio;
   *          the mm lock should not be held.
   */
-int s390_wiggle_split_folio(struct mm_struct *mm, struct folio *folio, bool split)
+static int s390_wiggle_split_folio(struct mm_struct *mm, struct folio *folio, bool split)
  {
  	int rc;
  
@@ -361,20 +349,28 @@ int s390_wiggle_split_folio(struct mm_struct *mm, struct folio *folio, bool spli
  	}
  	return -EAGAIN;
  }
-EXPORT_SYMBOL_GPL(s390_wiggle_split_folio);
  
  int make_hva_secure(struct mm_struct *mm, unsigned long hva, struct uv_cb_header *uvcb)
  {
+	struct vm_area_struct *vma;
+	struct folio_walk fw;
  	struct folio *folio;
-	spinlock_t *ptelock;
-	pte_t *ptep;
  	int rc;
  
-	ptep = get_locked_valid_pte(mm, hva, &ptelock);
-	if (!ptep)
+	mmap_read_lock(mm);
+
+	vma = vma_lookup(mm, hva);
+	if (!vma) {
+		mmap_read_unlock(mm);
+		return -EFAULT;
+	}
+
+	folio = folio_walk_start(&fw, vma, hva, 0);
+	if (!folio) {
+		mmap_read_unlock(mm);
  		return -ENXIO;
+	}
  
-	folio = page_folio(pte_page(*ptep));
  	folio_get(folio);
  	/*
  	 * Secure pages cannot be huge and userspace should not combine both.
@@ -385,14 +381,15 @@ int make_hva_secure(struct mm_struct *mm, unsigned long hva, struct uv_cb_header
  	 * KVM_RUN will return -EFAULT.
  	 */
  	if (folio_test_hugetlb(folio))
-		rc =  -EFAULT;
+		rc = -EFAULT;
  	else if (folio_test_large(folio))
  		rc = -E2BIG;
-	else if (!pte_write(*ptep))
+	else if (!pte_write(fw.pte) || (pte_val(fw.pte) & _PAGE_INVALID))
  		rc = -ENXIO;
  	else
  		rc = make_folio_secure(mm, folio, uvcb);
-	pte_unmap_unlock(ptep, ptelock);
+	folio_walk_end(&fw, vma);
+	mmap_read_unlock(mm);
  
  	if (rc == -E2BIG || rc == -EBUSY)
  		rc = s390_wiggle_split_folio(mm, folio, rc == -E2BIG);
diff --git a/arch/s390/kvm/gmap.c b/arch/s390/kvm/gmap.c
index 21580cfecc6ac..1a88b32e7c134 100644
--- a/arch/s390/kvm/gmap.c
+++ b/arch/s390/kvm/gmap.c
@@ -41,7 +41,7 @@ int gmap_make_secure(struct gmap *gmap, unsigned long gaddr, void *uvcb)
  
  	vmaddr = gfn_to_hva(kvm, gpa_to_gfn(gaddr));
  	if (kvm_is_error_hva(vmaddr))
-		rc = -ENXIO;
+		rc = -EFAULT;
  	else
  		rc = make_hva_secure(gmap->mm, vmaddr, uvcb);
David Hildenbrand March 6, 2025, 10:17 p.m. UTC | #3
On 06.03.25 23:07, David Hildenbrand wrote:
> On 06.03.25 11:23, David Hildenbrand wrote:
>>>     /**
>>> - * make_folio_secure() - make a folio secure
>>> + * __make_folio_secure() - make a folio secure
>>>      * @folio: the folio to make secure
>>>      * @uvcb: the uvcb that describes the UVC to be used
>>>      *
>>> @@ -243,14 +276,13 @@ static int expected_folio_refs(struct folio *folio)
>>>      *         -EINVAL if the UVC failed for other reasons.
>>>      *
>>>      * Context: The caller must hold exactly one extra reference on the folio
>>> - *          (it's the same logic as split_folio())
>>> + *          (it's the same logic as split_folio()), and the folio must be
>>> + *          locked.
>>>      */
>>> -int make_folio_secure(struct folio *folio, struct uv_cb_header *uvcb)
>>> +static int __make_folio_secure(struct folio *folio, struct uv_cb_header *uvcb)
>>
>> One more nit: -EBUSY can no longer be returned from this function, so you
>> might just remove it from the doc above.
>>
>>
>> While chasing a very weird folio split bug that seems to result in late
>> validation issues (:/), I was wondering if __gmap_destroy_page could
>> similarly be problematic.
>>
>> We're now no longer holding the PTL while performing the operation.
>>
>> (not that that would explain the issue I am chasing, because
>> gmap_destroy_page() is never called in my setup)
>>
> 
> Okay, I've been debugging the weird issue I am seeing for way too long, and I
> did not find the root cause yet. But the following things are problematic:
> 
> 1) To walk the page tables, we need the mmap lock in read mode.
> 
> 2) To walk the page tables, we must know that a VMA exists
> 
> 3) get_locked_pte() must not be used on hugetlb areas.
> 
> Further, the following things should be cleaned up:
> 
> 4) s390_wiggle_split_folio() is only used in that file
> 
> 5) gmap_make_secure() likely should be returning -EFAULT
> 
> 
> See below, I went with a folio_walk (which also checks for pte_present()
> like the old code did, but that should not matter here) so we can get rid of the
> get_locked_pte() usage completely.
> 
> 
>   From 1b9a4306b79a352daf80708252d166114e7335de Mon Sep 17 00:00:00 2001
> From: David Hildenbrand <david@redhat.com>
> Date: Thu, 6 Mar 2025 22:43:43 +0100
> Subject: [PATCH] merge
> 
> Signed-off-by: David Hildenbrand <david@redhat.com>
> ---
>    arch/s390/include/asm/uv.h |  1 -
>    arch/s390/kernel/uv.c      | 41 ++++++++++++++++++--------------------
>    arch/s390/kvm/gmap.c       |  2 +-
>    3 files changed, 20 insertions(+), 24 deletions(-)
> 
> diff --git a/arch/s390/include/asm/uv.h b/arch/s390/include/asm/uv.h
> index fa33a6ff2fabf..46fb0ef6f9847 100644
> --- a/arch/s390/include/asm/uv.h
> +++ b/arch/s390/include/asm/uv.h
> @@ -634,7 +634,6 @@ int uv_convert_from_secure_pte(pte_t pte);
>    int make_hva_secure(struct mm_struct *mm, unsigned long hva, struct uv_cb_header *uvcb);
>    int uv_convert_from_secure(unsigned long paddr);
>    int uv_convert_from_secure_folio(struct folio *folio);
> -int s390_wiggle_split_folio(struct mm_struct *mm, struct folio *folio, bool split);
>    
>    void setup_uv(void);
>    
> diff --git a/arch/s390/kernel/uv.c b/arch/s390/kernel/uv.c
> index 63420a5f3ee57..11a1894e63405 100644
> --- a/arch/s390/kernel/uv.c
> +++ b/arch/s390/kernel/uv.c
> @@ -270,7 +270,6 @@ static int expected_folio_refs(struct folio *folio)
>     *
>     * Return: 0 on success;
>     *         -EBUSY if the folio is in writeback or has too many references;
> - *         -E2BIG if the folio is large;
>     *         -EAGAIN if the UVC needs to be attempted again;
>     *         -ENXIO if the address is not mapped;
>     *         -EINVAL if the UVC failed for other reasons.
> @@ -324,17 +323,6 @@ static int make_folio_secure(struct mm_struct *mm, struct folio *folio, struct u
>    	return rc;
>    }
>    
> -static pte_t *get_locked_valid_pte(struct mm_struct *mm, unsigned long hva, spinlock_t **ptl)
> -{
> -	pte_t *ptep = get_locked_pte(mm, hva, ptl);
> -
> -	if (ptep && (pte_val(*ptep) & _PAGE_INVALID)) {
> -		pte_unmap_unlock(ptep, *ptl);
> -		ptep = NULL;
> -	}
> -	return ptep;
> -}
> -
>    /**
>     * s390_wiggle_split_folio() - try to drain extra references to a folio and optionally split
>     * @mm:    the mm containing the folio to work on
> @@ -344,7 +332,7 @@ static pte_t *get_locked_valid_pte(struct mm_struct *mm, unsigned long hva, spin
>     * Context: Must be called while holding an extra reference to the folio;
>     *          the mm lock should not be held.
>     */
> -int s390_wiggle_split_folio(struct mm_struct *mm, struct folio *folio, bool split)
> +static int s390_wiggle_split_folio(struct mm_struct *mm, struct folio *folio, bool split)
>    {
>    	int rc;
>    
> @@ -361,20 +349,28 @@ int s390_wiggle_split_folio(struct mm_struct *mm, struct folio *folio, bool spli
>    	}
>    	return -EAGAIN;
>    }
> -EXPORT_SYMBOL_GPL(s390_wiggle_split_folio);
>    
>    int make_hva_secure(struct mm_struct *mm, unsigned long hva, struct uv_cb_header *uvcb)
>    {
> +	struct vm_area_struct *vma;
> +	struct folio_walk fw;
>    	struct folio *folio;
> -	spinlock_t *ptelock;
> -	pte_t *ptep;
>    	int rc;
>    
> -	ptep = get_locked_valid_pte(mm, hva, &ptelock);
> -	if (!ptep)
> +	mmap_read_lock(mm);
> +
> +	vma = vma_lookup(mm, hva);
> +	if (!vma) {
> +		mmap_read_unlock(mm);
> +		return -EFAULT;
> +	}
> +
> +	folio = folio_walk_start(&fw, vma, hva, 0);
> +	if (!folio) {
> +		mmap_read_unlock(mm);
>    		return -ENXIO;
> +	}
>    
> -	folio = page_folio(pte_page(*ptep));
>    	folio_get(folio);
>    	/*
>    	 * Secure pages cannot be huge and userspace should not combine both.
> @@ -385,14 +381,15 @@ int make_hva_secure(struct mm_struct *mm, unsigned long hva, struct uv_cb_header
>    	 * KVM_RUN will return -EFAULT.
>    	 */
>    	if (folio_test_hugetlb(folio))
> -		rc =  -EFAULT;
> +		rc = -EFAULT;
>    	else if (folio_test_large(folio))
>    		rc = -E2BIG;
> -	else if (!pte_write(*ptep))
> +	else if (!pte_write(fw.pte) || (pte_val(fw.pte) & _PAGE_INVALID))
>    		rc = -ENXIO;
>    	else
>    		rc = make_folio_secure(mm, folio, uvcb);
> -	pte_unmap_unlock(ptep, ptelock);
> +	folio_walk_end(&fw, vma);
> +	mmap_read_unlock(mm);
>    
>    	if (rc == -E2BIG || rc == -EBUSY)
>    		rc = s390_wiggle_split_folio(mm, folio, rc == -E2BIG);
> diff --git a/arch/s390/kvm/gmap.c b/arch/s390/kvm/gmap.c
> index 21580cfecc6ac..1a88b32e7c134 100644
> --- a/arch/s390/kvm/gmap.c
> +++ b/arch/s390/kvm/gmap.c
> @@ -41,7 +41,7 @@ int gmap_make_secure(struct gmap *gmap, unsigned long gaddr, void *uvcb)
>    
>    	vmaddr = gfn_to_hva(kvm, gpa_to_gfn(gaddr));
>    	if (kvm_is_error_hva(vmaddr))
> -		rc = -ENXIO;
> +		rc = -EFAULT;
>    	else
>    		rc = make_hva_secure(gmap->mm, vmaddr, uvcb);
>    

The following should likely go on top as a cleanup as well:
diff --git a/arch/s390/kvm/gmap.c b/arch/s390/kvm/gmap.c
index 1a88b32e7c134..6d8944d1b4a0c 100644
--- a/arch/s390/kvm/gmap.c
+++ b/arch/s390/kvm/gmap.c
@@ -35,17 +35,13 @@ int gmap_make_secure(struct gmap *gmap, unsigned long gaddr, void *uvcb)
  {
         struct kvm *kvm = gmap->private;
         unsigned long vmaddr;
-       int rc = 0;
  
         lockdep_assert_held(&kvm->srcu);
  
         vmaddr = gfn_to_hva(kvm, gpa_to_gfn(gaddr));
         if (kvm_is_error_hva(vmaddr))
-               rc = -EFAULT;
-       else
-               rc = make_hva_secure(gmap->mm, vmaddr, uvcb);
-
-       return rc;
+               return -EFAULT;
+       return make_hva_secure(gmap->mm, vmaddr, uvcb);
  }
  
  int gmap_convert_to_secure(struct gmap *gmap, unsigned long gaddr)
Claudio Imbrenda March 7, 2025, 1:18 p.m. UTC | #4
On Thu, 6 Mar 2025 23:07:04 +0100
David Hildenbrand <david@redhat.com> wrote:

> On 06.03.25 11:23, David Hildenbrand wrote:
> >>    /**
> >> - * make_folio_secure() - make a folio secure
> >> + * __make_folio_secure() - make a folio secure
> >>     * @folio: the folio to make secure
> >>     * @uvcb: the uvcb that describes the UVC to be used
> >>     *
> >> @@ -243,14 +276,13 @@ static int expected_folio_refs(struct folio *folio)
> >>     *         -EINVAL if the UVC failed for other reasons.
> >>     *
> >>     * Context: The caller must hold exactly one extra reference on the folio
> >> - *          (it's the same logic as split_folio())
> >> + *          (it's the same logic as split_folio()), and the folio must be
> >> + *          locked.
> >>     */
> >> -int make_folio_secure(struct folio *folio, struct uv_cb_header *uvcb)
> >> +static int __make_folio_secure(struct folio *folio, struct uv_cb_header *uvcb)  
> > 
> > One more nit: -EBUSY can no longer be returned from this function, so you
> > might just remove it from the doc above.
> > 
> > 
> > While chasing a very weird folio split bug that seems to result in late
> > validation issues (:/), I was wondering if __gmap_destroy_page could
> > similarly be problematic.
> > 
> > We're now no longer holding the PTL while performing the operation.
> > 
> > (not that that would explain the issue I am chasing, because
> > gmap_destroy_page() is never called in my setup)
> >   
> 
> Okay, I've been debugging the weird issue I am seeing for way too long, and I
> did not find the root cause yet. But the following things are problematic:
> 
> 1) To walk the page tables, we need the mmap lock in read mode.
> 
> 2) To walk the page tables, we must know that a VMA exists
> 
> 3) get_locked_pte() must not be used on hugetlb areas.
> 
> Further, the following things should be cleaned up:
> 
> 4) s390_wiggle_split_folio() is only used in that file
> 
> 5) gmap_make_secure() likely should be returning -EFAULT
> 
> 
> See below, I went with a folio_walk (which also checks for pte_present()
> like the old code did, but that should not matter here) so we can get rid of the
> get_locked_pte() usage completely.

I shall merge this into my patch, thanks a lot!

> 
> 
>  From 1b9a4306b79a352daf80708252d166114e7335de Mon Sep 17 00:00:00 2001
> From: David Hildenbrand <david@redhat.com>
> Date: Thu, 6 Mar 2025 22:43:43 +0100
> Subject: [PATCH] merge
> 
> Signed-off-by: David Hildenbrand <david@redhat.com>
> ---
>   arch/s390/include/asm/uv.h |  1 -
>   arch/s390/kernel/uv.c      | 41 ++++++++++++++++++--------------------
>   arch/s390/kvm/gmap.c       |  2 +-
>   3 files changed, 20 insertions(+), 24 deletions(-)
> 
> diff --git a/arch/s390/include/asm/uv.h b/arch/s390/include/asm/uv.h
> index fa33a6ff2fabf..46fb0ef6f9847 100644
> --- a/arch/s390/include/asm/uv.h
> +++ b/arch/s390/include/asm/uv.h
> @@ -634,7 +634,6 @@ int uv_convert_from_secure_pte(pte_t pte);
>   int make_hva_secure(struct mm_struct *mm, unsigned long hva, struct uv_cb_header *uvcb);
>   int uv_convert_from_secure(unsigned long paddr);
>   int uv_convert_from_secure_folio(struct folio *folio);
> -int s390_wiggle_split_folio(struct mm_struct *mm, struct folio *folio, bool split);
>   
>   void setup_uv(void);
>   
> diff --git a/arch/s390/kernel/uv.c b/arch/s390/kernel/uv.c
> index 63420a5f3ee57..11a1894e63405 100644
> --- a/arch/s390/kernel/uv.c
> +++ b/arch/s390/kernel/uv.c
> @@ -270,7 +270,6 @@ static int expected_folio_refs(struct folio *folio)
>    *
>    * Return: 0 on success;
>    *         -EBUSY if the folio is in writeback or has too many references;
> - *         -E2BIG if the folio is large;
>    *         -EAGAIN if the UVC needs to be attempted again;
>    *         -ENXIO if the address is not mapped;
>    *         -EINVAL if the UVC failed for other reasons.
> @@ -324,17 +323,6 @@ static int make_folio_secure(struct mm_struct *mm, struct folio *folio, struct u
>   	return rc;
>   }
>   
> -static pte_t *get_locked_valid_pte(struct mm_struct *mm, unsigned long hva, spinlock_t **ptl)
> -{
> -	pte_t *ptep = get_locked_pte(mm, hva, ptl);
> -
> -	if (ptep && (pte_val(*ptep) & _PAGE_INVALID)) {
> -		pte_unmap_unlock(ptep, *ptl);
> -		ptep = NULL;
> -	}
> -	return ptep;
> -}
> -
>   /**
>    * s390_wiggle_split_folio() - try to drain extra references to a folio and optionally split
>    * @mm:    the mm containing the folio to work on
> @@ -344,7 +332,7 @@ static pte_t *get_locked_valid_pte(struct mm_struct *mm, unsigned long hva, spin
>    * Context: Must be called while holding an extra reference to the folio;
>    *          the mm lock should not be held.
>    */
> -int s390_wiggle_split_folio(struct mm_struct *mm, struct folio *folio, bool split)
> +static int s390_wiggle_split_folio(struct mm_struct *mm, struct folio *folio, bool split)
>   {
>   	int rc;
>   
> @@ -361,20 +349,28 @@ int s390_wiggle_split_folio(struct mm_struct *mm, struct folio *folio, bool spli
>   	}
>   	return -EAGAIN;
>   }
> -EXPORT_SYMBOL_GPL(s390_wiggle_split_folio);
>   
>   int make_hva_secure(struct mm_struct *mm, unsigned long hva, struct uv_cb_header *uvcb)
>   {
> +	struct vm_area_struct *vma;
> +	struct folio_walk fw;
>   	struct folio *folio;
> -	spinlock_t *ptelock;
> -	pte_t *ptep;
>   	int rc;
>   
> -	ptep = get_locked_valid_pte(mm, hva, &ptelock);
> -	if (!ptep)
> +	mmap_read_lock(mm);
> +
> +	vma = vma_lookup(mm, hva);
> +	if (!vma) {
> +		mmap_read_unlock(mm);
> +		return -EFAULT;
> +	}
> +
> +	folio = folio_walk_start(&fw, vma, hva, 0);
> +	if (!folio) {
> +		mmap_read_unlock(mm);
>   		return -ENXIO;
> +	}
>   
> -	folio = page_folio(pte_page(*ptep));
>   	folio_get(folio);
>   	/*
>   	 * Secure pages cannot be huge and userspace should not combine both.
> @@ -385,14 +381,15 @@ int make_hva_secure(struct mm_struct *mm, unsigned long hva, struct uv_cb_header
>   	 * KVM_RUN will return -EFAULT.
>   	 */
>   	if (folio_test_hugetlb(folio))
> -		rc =  -EFAULT;
> +		rc = -EFAULT;
>   	else if (folio_test_large(folio))
>   		rc = -E2BIG;
> -	else if (!pte_write(*ptep))
> +	else if (!pte_write(fw.pte) || (pte_val(fw.pte) & _PAGE_INVALID))
>   		rc = -ENXIO;
>   	else
>   		rc = make_folio_secure(mm, folio, uvcb);
> -	pte_unmap_unlock(ptep, ptelock);
> +	folio_walk_end(&fw, vma);
> +	mmap_read_unlock(mm);
>   
>   	if (rc == -E2BIG || rc == -EBUSY)
>   		rc = s390_wiggle_split_folio(mm, folio, rc == -E2BIG);
> diff --git a/arch/s390/kvm/gmap.c b/arch/s390/kvm/gmap.c
> index 21580cfecc6ac..1a88b32e7c134 100644
> --- a/arch/s390/kvm/gmap.c
> +++ b/arch/s390/kvm/gmap.c
> @@ -41,7 +41,7 @@ int gmap_make_secure(struct gmap *gmap, unsigned long gaddr, void *uvcb)
>   
>   	vmaddr = gfn_to_hva(kvm, gpa_to_gfn(gaddr));
>   	if (kvm_is_error_hva(vmaddr))
> -		rc = -ENXIO;
> +		rc = -EFAULT;
>   	else
>   		rc = make_hva_secure(gmap->mm, vmaddr, uvcb);
>

Patch

diff --git a/arch/s390/include/asm/gmap.h b/arch/s390/include/asm/gmap.h
index 4e73ef46d4b2..9f2814d0e1e9 100644
--- a/arch/s390/include/asm/gmap.h
+++ b/arch/s390/include/asm/gmap.h
@@ -139,7 +139,6 @@  int s390_replace_asce(struct gmap *gmap);
 void s390_uv_destroy_pfns(unsigned long count, unsigned long *pfns);
 int __s390_uv_destroy_range(struct mm_struct *mm, unsigned long start,
 			    unsigned long end, bool interruptible);
-int kvm_s390_wiggle_split_folio(struct mm_struct *mm, struct folio *folio, bool split);
 unsigned long *gmap_table_walk(struct gmap *gmap, unsigned long gaddr, int level);
 
 /**
diff --git a/arch/s390/include/asm/uv.h b/arch/s390/include/asm/uv.h
index b11f5b6d0bd1..fa33a6ff2fab 100644
--- a/arch/s390/include/asm/uv.h
+++ b/arch/s390/include/asm/uv.h
@@ -631,9 +631,10 @@  int uv_pin_shared(unsigned long paddr);
 int uv_destroy_folio(struct folio *folio);
 int uv_destroy_pte(pte_t pte);
 int uv_convert_from_secure_pte(pte_t pte);
-int make_folio_secure(struct folio *folio, struct uv_cb_header *uvcb);
+int make_hva_secure(struct mm_struct *mm, unsigned long hva, struct uv_cb_header *uvcb);
 int uv_convert_from_secure(unsigned long paddr);
 int uv_convert_from_secure_folio(struct folio *folio);
+int s390_wiggle_split_folio(struct mm_struct *mm, struct folio *folio, bool split);
 
 void setup_uv(void);
 
diff --git a/arch/s390/kernel/uv.c b/arch/s390/kernel/uv.c
index 9f05df2da2f7..63420a5f3ee5 100644
--- a/arch/s390/kernel/uv.c
+++ b/arch/s390/kernel/uv.c
@@ -206,6 +206,39 @@  int uv_convert_from_secure_pte(pte_t pte)
 	return uv_convert_from_secure_folio(pfn_folio(pte_pfn(pte)));
 }
 
+/**
+ * should_export_before_import - Determine whether an export is needed
+ * before an import-like operation
+ * @uvcb: the Ultravisor control block of the UVC to be performed
+ * @mm: the mm of the process
+ *
+ * Returns whether an export is needed before every import-like operation.
+ * This is needed for shared pages, which don't trigger a secure storage
+ * exception when accessed from a different guest.
+ *
+ * Although considered as one, the Unpin Page UVC is not an actual import,
+ * so it is not affected.
+ *
+ * No export is needed also when there is only one protected VM, because the
+ * page cannot belong to the wrong VM in that case (there is no "other VM"
+ * it can belong to).
+ *
+ * Return: true if an export is needed before every import, otherwise false.
+ */
+static bool should_export_before_import(struct uv_cb_header *uvcb, struct mm_struct *mm)
+{
+	/*
+	 * The misc feature indicates, among other things, that importing a
+	 * shared page from a different protected VM will automatically also
+	 * transfer its ownership.
+	 */
+	if (uv_has_feature(BIT_UV_FEAT_MISC))
+		return false;
+	if (uvcb->cmd == UVC_CMD_UNPIN_PAGE_SHARED)
+		return false;
+	return atomic_read(&mm->context.protected_count) > 1;
+}
+
 /*
  * Calculate the expected ref_count for a folio that would otherwise have no
  * further pins. This was cribbed from similar functions in other places in
@@ -228,7 +261,7 @@  static int expected_folio_refs(struct folio *folio)
 }
 
 /**
- * make_folio_secure() - make a folio secure
+ * __make_folio_secure() - make a folio secure
  * @folio: the folio to make secure
  * @uvcb: the uvcb that describes the UVC to be used
  *
@@ -243,14 +276,13 @@  static int expected_folio_refs(struct folio *folio)
  *         -EINVAL if the UVC failed for other reasons.
  *
  * Context: The caller must hold exactly one extra reference on the folio
- *          (it's the same logic as split_folio())
+ *          (it's the same logic as split_folio()), and the folio must be
+ *          locked.
  */
-int make_folio_secure(struct folio *folio, struct uv_cb_header *uvcb)
+static int __make_folio_secure(struct folio *folio, struct uv_cb_header *uvcb)
 {
 	int expected, cc = 0;
 
-	if (folio_test_large(folio))
-		return -E2BIG;
 	if (folio_test_writeback(folio))
 		return -EBUSY;
 	expected = expected_folio_refs(folio) + 1;
@@ -277,7 +309,98 @@  int make_folio_secure(struct folio *folio, struct uv_cb_header *uvcb)
 		return -EAGAIN;
 	return uvcb->rc == 0x10a ? -ENXIO : -EINVAL;
 }
-EXPORT_SYMBOL_GPL(make_folio_secure);
+
+static int make_folio_secure(struct mm_struct *mm, struct folio *folio, struct uv_cb_header *uvcb)
+{
+	int rc;
+
+	if (!folio_trylock(folio))
+		return -EAGAIN;
+	if (should_export_before_import(uvcb, mm))
+		uv_convert_from_secure(folio_to_phys(folio));
+	rc = __make_folio_secure(folio, uvcb);
+	folio_unlock(folio);
+
+	return rc;
+}
+
+static pte_t *get_locked_valid_pte(struct mm_struct *mm, unsigned long hva, spinlock_t **ptl)
+{
+	pte_t *ptep = get_locked_pte(mm, hva, ptl);
+
+	if (ptep && (pte_val(*ptep) & _PAGE_INVALID)) {
+		pte_unmap_unlock(ptep, *ptl);
+		ptep = NULL;
+	}
+	return ptep;
+}
+
+/**
+ * s390_wiggle_split_folio() - try to drain extra references to a folio and optionally split
+ * @mm:    the mm containing the folio to work on
+ * @folio: the folio
+ * @split: whether to split a large folio
+ *
+ * Context: Must be called while holding an extra reference to the folio;
+ *          the mm lock should not be held.
+ */
+int s390_wiggle_split_folio(struct mm_struct *mm, struct folio *folio, bool split)
+{
+	int rc;
+
+	lockdep_assert_not_held(&mm->mmap_lock);
+	folio_wait_writeback(folio);
+	lru_add_drain_all();
+	if (split) {
+		folio_lock(folio);
+		rc = split_folio(folio);
+		folio_unlock(folio);
+
+		if (rc != -EBUSY)
+			return rc;
+	}
+	return -EAGAIN;
+}
+EXPORT_SYMBOL_GPL(s390_wiggle_split_folio);
+
+int make_hva_secure(struct mm_struct *mm, unsigned long hva, struct uv_cb_header *uvcb)
+{
+	struct folio *folio;
+	spinlock_t *ptelock;
+	pte_t *ptep;
+	int rc;
+
+	ptep = get_locked_valid_pte(mm, hva, &ptelock);
+	if (!ptep)
+		return -ENXIO;
+
+	folio = page_folio(pte_page(*ptep));
+	folio_get(folio);
+	/*
+	 * Secure pages cannot be huge and userspace should not combine both.
+	 * In case userspace does it anyway this will result in an -EFAULT for
+	 * the unpack. The guest is thus never reaching secure mode.
+	 * If userspace plays dirty tricks and decides to map huge pages at a
+	 * later point in time, it will receive a segmentation fault or
+	 * KVM_RUN will return -EFAULT.
+	 */
+	if (folio_test_hugetlb(folio))
+		rc =  -EFAULT;
+	else if (folio_test_large(folio))
+		rc = -E2BIG;
+	else if (!pte_write(*ptep))
+		rc = -ENXIO;
+	else
+		rc = make_folio_secure(mm, folio, uvcb);
+	pte_unmap_unlock(ptep, ptelock);
+
+	if (rc == -E2BIG || rc == -EBUSY)
+		rc = s390_wiggle_split_folio(mm, folio, rc == -E2BIG);
+	folio_put(folio);
+
+	return rc;
+}
+EXPORT_SYMBOL_GPL(make_hva_secure);
 
 /*
  * To be called with the folio locked or with an extra reference! This will
diff --git a/arch/s390/kvm/gmap.c b/arch/s390/kvm/gmap.c
index 02adf151d4de..21580cfecc6a 100644
--- a/arch/s390/kvm/gmap.c
+++ b/arch/s390/kvm/gmap.c
@@ -22,92 +22,6 @@ 
 
 #include "gmap.h"
 
-/**
- * should_export_before_import - Determine whether an export is needed
- * before an import-like operation
- * @uvcb: the Ultravisor control block of the UVC to be performed
- * @mm: the mm of the process
- *
- * Returns whether an export is needed before every import-like operation.
- * This is needed for shared pages, which don't trigger a secure storage
- * exception when accessed from a different guest.
- *
- * Although considered as one, the Unpin Page UVC is not an actual import,
- * so it is not affected.
- *
- * No export is needed also when there is only one protected VM, because the
- * page cannot belong to the wrong VM in that case (there is no "other VM"
- * it can belong to).
- *
- * Return: true if an export is needed before every import, otherwise false.
- */
-static bool should_export_before_import(struct uv_cb_header *uvcb, struct mm_struct *mm)
-{
-	/*
-	 * The misc feature indicates, among other things, that importing a
-	 * shared page from a different protected VM will automatically also
-	 * transfer its ownership.
-	 */
-	if (uv_has_feature(BIT_UV_FEAT_MISC))
-		return false;
-	if (uvcb->cmd == UVC_CMD_UNPIN_PAGE_SHARED)
-		return false;
-	return atomic_read(&mm->context.protected_count) > 1;
-}
-
-static int __gmap_make_secure(struct gmap *gmap, struct page *page, void *uvcb)
-{
-	struct folio *folio = page_folio(page);
-	int rc;
-
-	/*
-	 * Secure pages cannot be huge and userspace should not combine both.
-	 * In case userspace does it anyway this will result in an -EFAULT for
-	 * the unpack. The guest is thus never reaching secure mode.
-	 * If userspace plays dirty tricks and decides to map huge pages at a
-	 * later point in time, it will receive a segmentation fault or
-	 * KVM_RUN will return -EFAULT.
-	 */
-	if (folio_test_hugetlb(folio))
-		return -EFAULT;
-	if (folio_test_large(folio)) {
-		mmap_read_unlock(gmap->mm);
-		rc = kvm_s390_wiggle_split_folio(gmap->mm, folio, true);
-		mmap_read_lock(gmap->mm);
-		if (rc)
-			return rc;
-		folio = page_folio(page);
-	}
-
-	if (!folio_trylock(folio))
-		return -EAGAIN;
-	if (should_export_before_import(uvcb, gmap->mm))
-		uv_convert_from_secure(folio_to_phys(folio));
-	rc = make_folio_secure(folio, uvcb);
-	folio_unlock(folio);
-
-	/*
-	 * In theory a race is possible and the folio might have become
-	 * large again before the folio_trylock() above. In that case, no
-	 * action is performed and -EAGAIN is returned; the callers will
-	 * have to try again later.
-	 * In most cases this implies running the VM again, getting the same
-	 * exception again, and make another attempt in this function.
-	 * This is expected to happen extremely rarely.
-	 */
-	if (rc == -E2BIG)
-		return -EAGAIN;
-	/* The folio has too many references, try to shake some off */
-	if (rc == -EBUSY) {
-		mmap_read_unlock(gmap->mm);
-		kvm_s390_wiggle_split_folio(gmap->mm, folio, false);
-		mmap_read_lock(gmap->mm);
-		return -EAGAIN;
-	}
-
-	return rc;
-}
-
 /**
  * gmap_make_secure() - make one guest page secure
  * @gmap: the guest gmap
@@ -115,22 +29,21 @@  static int __gmap_make_secure(struct gmap *gmap, struct page *page, void *uvcb)
  * @uvcb: the UVCB specifying which operation needs to be performed
  *
  * Context: needs to be called with kvm->srcu held.
- * Return: 0 on success, < 0 in case of error (see __gmap_make_secure()).
+ * Return: 0 on success, < 0 in case of error.
  */
 int gmap_make_secure(struct gmap *gmap, unsigned long gaddr, void *uvcb)
 {
 	struct kvm *kvm = gmap->private;
-	struct page *page;
+	unsigned long vmaddr;
 	int rc = 0;
 
 	lockdep_assert_held(&kvm->srcu);
 
-	page = gfn_to_page(kvm, gpa_to_gfn(gaddr));
-	mmap_read_lock(gmap->mm);
-	if (page)
-		rc = __gmap_make_secure(gmap, page, uvcb);
-	kvm_release_page_clean(page);
-	mmap_read_unlock(gmap->mm);
+	vmaddr = gfn_to_hva(kvm, gpa_to_gfn(gaddr));
+	if (kvm_is_error_hva(vmaddr))
+		rc = -ENXIO;
+	else
+		rc = make_hva_secure(gmap->mm, vmaddr, uvcb);
 
 	return rc;
 }
diff --git a/arch/s390/kvm/kvm-s390.c b/arch/s390/kvm/kvm-s390.c
index ebecb96bacce..020502af7dc9 100644
--- a/arch/s390/kvm/kvm-s390.c
+++ b/arch/s390/kvm/kvm-s390.c
@@ -4952,6 +4952,7 @@  static int vcpu_post_run_handle_fault(struct kvm_vcpu *vcpu)
 {
 	unsigned int flags = 0;
 	unsigned long gaddr;
+	int rc;
 
 	gaddr = current->thread.gmap_teid.addr * PAGE_SIZE;
 	if (kvm_s390_cur_gmap_fault_is_write())
@@ -4961,16 +4962,6 @@  static int vcpu_post_run_handle_fault(struct kvm_vcpu *vcpu)
 	case 0:
 		vcpu->stat.exit_null++;
 		break;
-	case PGM_NON_SECURE_STORAGE_ACCESS:
-		kvm_s390_assert_primary_as(vcpu);
-		/*
-		 * This is normal operation; a page belonging to a protected
-		 * guest has not been imported yet. Try to import the page into
-		 * the protected guest.
-		 */
-		if (gmap_convert_to_secure(vcpu->arch.gmap, gaddr) == -EINVAL)
-			send_sig(SIGSEGV, current, 0);
-		break;
 	case PGM_SECURE_STORAGE_ACCESS:
 	case PGM_SECURE_STORAGE_VIOLATION:
 		kvm_s390_assert_primary_as(vcpu);
@@ -4995,6 +4986,20 @@  static int vcpu_post_run_handle_fault(struct kvm_vcpu *vcpu)
 			send_sig(SIGSEGV, current, 0);
 		}
 		break;
+	case PGM_NON_SECURE_STORAGE_ACCESS:
+		kvm_s390_assert_primary_as(vcpu);
+		/*
+		 * This is normal operation; a page belonging to a protected
+		 * guest has not been imported yet. Try to import the page into
+		 * the protected guest.
+		 */
+		rc = gmap_convert_to_secure(vcpu->arch.gmap, gaddr);
+		if (rc == -EINVAL)
+			send_sig(SIGSEGV, current, 0);
+		if (rc != -ENXIO)
+			break;
+		flags = FAULT_FLAG_WRITE;
+		fallthrough;
 	case PGM_PROTECTION:
 	case PGM_SEGMENT_TRANSLATION:
 	case PGM_PAGE_TRANSLATION:
diff --git a/arch/s390/mm/gmap.c b/arch/s390/mm/gmap.c
index 94d927785800..d14b488e7a1f 100644
--- a/arch/s390/mm/gmap.c
+++ b/arch/s390/mm/gmap.c
@@ -2626,31 +2626,3 @@  int s390_replace_asce(struct gmap *gmap)
 	return 0;
 }
 EXPORT_SYMBOL_GPL(s390_replace_asce);
-
-/**
- * kvm_s390_wiggle_split_folio() - try to drain extra references to a folio and optionally split
- * @mm:    the mm containing the folio to work on
- * @folio: the folio
- * @split: whether to split a large folio
- *
- * Context: Must be called while holding an extra reference to the folio;
- *          the mm lock should not be held.
- */
-int kvm_s390_wiggle_split_folio(struct mm_struct *mm, struct folio *folio, bool split)
-{
-	int rc;
-
-	lockdep_assert_not_held(&mm->mmap_lock);
-	folio_wait_writeback(folio);
-	lru_add_drain_all();
-	if (split) {
-		folio_lock(folio);
-		rc = split_folio(folio);
-		folio_unlock(folio);
-
-		if (rc != -EBUSY)
-			return rc;
-	}
-	return -EAGAIN;
-}
-EXPORT_SYMBOL_GPL(kvm_s390_wiggle_split_folio);