Message ID | 20240327171737.919590-2-david@redhat.com (mailing list archive)
---|---
State | New
Series | s390/mm: shared zeropage + KVM fixes
On Wed, Mar 27, 2024 at 06:17:36PM +0100, David Hildenbrand wrote:

Hi David,

...
> static int mfill_atomic_pte_zeropage(pmd_t *dst_pmd,
> 				     struct vm_area_struct *dst_vma,
> 				     unsigned long dst_addr)
> @@ -324,6 +355,9 @@ static int mfill_atomic_pte_zeropage(pmd_t *dst_pmd,
> 	spinlock_t *ptl;
> 	int ret;
> 
> +	if (mm_forbids_zeropage(dst_vma->mm))

I assume, you were going to pass dst_vma->vm_mm here?
This patch does not compile otherwise.

...

Thanks!
On 11.04.24 14:26, Alexander Gordeev wrote:
> On Wed, Mar 27, 2024 at 06:17:36PM +0100, David Hildenbrand wrote:
> 
> Hi David,
> ...
>> static int mfill_atomic_pte_zeropage(pmd_t *dst_pmd,
>> 				     struct vm_area_struct *dst_vma,
>> 				     unsigned long dst_addr)
>> @@ -324,6 +355,9 @@ static int mfill_atomic_pte_zeropage(pmd_t *dst_pmd,
>> 	spinlock_t *ptl;
>> 	int ret;
>> 
>> +	if (mm_forbids_zeropage(dst_vma->mm))
> 
> I assume, you were going to pass dst_vma->vm_mm here?
> This patch does not compile otherwise.

Ah, I compiled it only on x86, where the parameter is ignored ... and
for testing the code path I forced mm_forbids_zeropage to be 1 on x86.

Yes, this must be dst_vma->vm_mm.

Thanks!
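[For readers outside the thread: the compile error exists because the back-pointer from a VMA to its address space is named `vm_mm`, not `mm`. A minimal userspace sketch, using simplified stand-ins for the kernel structs and a hypothetical `vma_forbids_zeropage()` helper (not a real kernel function), illustrates the corrected call site:]

```c
#include <stddef.h>

/* Simplified stand-ins for the kernel structs. In the real
 * <linux/mm_types.h>, struct vm_area_struct reaches its address
 * space through the member vm_mm, so dst_vma->mm does not compile. */
struct mm_struct {
	int forbids_zeropage;		/* stand-in for mm_forbids_zeropage() state */
};

struct vm_area_struct {
	struct mm_struct *vm_mm;	/* the member is vm_mm, not mm */
};

/* Hypothetical helper mirroring the fixed call site: the check takes
 * the mm, which must be reached via dst_vma->vm_mm. */
static int vma_forbids_zeropage(const struct vm_area_struct *dst_vma)
{
	return dst_vma->vm_mm->forbids_zeropage;
}
```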
On 11.04.24 14:30, David Hildenbrand wrote:
> On 11.04.24 14:26, Alexander Gordeev wrote:
>> On Wed, Mar 27, 2024 at 06:17:36PM +0100, David Hildenbrand wrote:
>> 
>> Hi David,
>> ...
>>> static int mfill_atomic_pte_zeropage(pmd_t *dst_pmd,
>>> 				     struct vm_area_struct *dst_vma,
>>> 				     unsigned long dst_addr)
>>> @@ -324,6 +355,9 @@ static int mfill_atomic_pte_zeropage(pmd_t *dst_pmd,
>>> 	spinlock_t *ptl;
>>> 	int ret;
>>> 
>>> +	if (mm_forbids_zeropage(dst_vma->mm))
>> 
>> I assume, you were going to pass dst_vma->vm_mm here?
>> This patch does not compile otherwise.
> 
> Ah, I compiled it only on x86, where the parameter is ignored ... and
> for testing the code path I forced mm_forbids_zeropage to be 1 on x86.

Now I get it, I compiled it all on s390x, but not the individual patches,
so patch #2 hid the issue in patch #1. Sneaky. :)
diff --git a/mm/userfaultfd.c b/mm/userfaultfd.c
index 712160cd41ec..9d385696fb89 100644
--- a/mm/userfaultfd.c
+++ b/mm/userfaultfd.c
@@ -316,6 +316,37 @@ static int mfill_atomic_pte_copy(pmd_t *dst_pmd,
 	goto out;
 }
 
+static int mfill_atomic_pte_zeroed_folio(pmd_t *dst_pmd,
+		struct vm_area_struct *dst_vma, unsigned long dst_addr)
+{
+	struct folio *folio;
+	int ret = -ENOMEM;
+
+	folio = vma_alloc_zeroed_movable_folio(dst_vma, dst_addr);
+	if (!folio)
+		return ret;
+
+	if (mem_cgroup_charge(folio, dst_vma->vm_mm, GFP_KERNEL))
+		goto out_put;
+
+	/*
+	 * The memory barrier inside __folio_mark_uptodate makes sure that
+	 * zeroing out the folio become visible before mapping the page
+	 * using set_pte_at(). See do_anonymous_page().
+	 */
+	__folio_mark_uptodate(folio);
+
+	ret = mfill_atomic_install_pte(dst_pmd, dst_vma, dst_addr,
+				       &folio->page, true, 0);
+	if (ret)
+		goto out_put;
+
+	return 0;
+out_put:
+	folio_put(folio);
+	return ret;
+}
+
 static int mfill_atomic_pte_zeropage(pmd_t *dst_pmd,
 				     struct vm_area_struct *dst_vma,
 				     unsigned long dst_addr)
@@ -324,6 +355,9 @@ static int mfill_atomic_pte_zeropage(pmd_t *dst_pmd,
 	spinlock_t *ptl;
 	int ret;
 
+	if (mm_forbids_zeropage(dst_vma->mm))
+		return mfill_atomic_pte_zeroed_folio(dst_pmd, dst_vma, dst_addr);
+
 	_dst_pte = pte_mkspecial(pfn_pte(my_zero_pfn(dst_addr),
 					 dst_vma->vm_page_prot));
 	ret = -EAGAIN;
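[The ordering requirement described in the patch's comment — folio contents must be visible before the PTE that maps them — can be sketched in userspace with C11 atomics. This is an illustrative analogy, not kernel code: `fake_folio`, `zero_and_map()`, and `lookup_mapping()` are invented names, with the release store playing the role of the barrier in `__folio_mark_uptodate()` and the published pointer the role of `set_pte_at()`:]

```c
#include <stdatomic.h>
#include <string.h>

/* Sketch of publish-after-initialize ordering: zero the data first,
 * then publish with release semantics so that any reader who observes
 * the "mapping" with acquire semantics also observes the zeroing. */
struct fake_folio {
	char data[64];
	atomic_int uptodate;
};

static _Atomic(struct fake_folio *) pte;	/* stands in for the PTE */

/* Producer: zero the folio, mark it uptodate, then publish it. */
static void zero_and_map(struct fake_folio *folio)
{
	memset(folio->data, 0, sizeof(folio->data));
	atomic_store_explicit(&folio->uptodate, 1, memory_order_release);
	atomic_store_explicit(&pte, folio, memory_order_release);
}

/* Consumer: an acquire load of the "PTE" pairs with the release store,
 * so a non-NULL result guarantees the zeroed contents are visible. */
static struct fake_folio *lookup_mapping(void)
{
	return atomic_load_explicit(&pte, memory_order_acquire);
}
```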