Message ID | 20230330012519.804116-1-apopple@nvidia.com (mailing list archive)
---|---
State | New
Series | [v2] mm: Take a page reference when removing device exclusive entries
On 3/29/23 18:25, Alistair Popple wrote:
> Device exclusive page table entries are used to prevent CPU access to
> a page whilst it is being accessed from a device. Typically this is
> used to implement atomic operations when the underlying bus does not
> support atomic access. When a CPU thread encounters a device exclusive
> entry it locks the page and restores the original entry after calling
> mmu notifiers to signal drivers that exclusive access is no longer
> available.
>
> The device exclusive entry holds a reference to the page making it
> safe to access the struct page whilst the entry is present. However
> the fault handling code does not hold the PTL when taking the page
> lock. This means if there are multiple threads faulting concurrently
> on the device exclusive entry one will remove the entry whilst others
> will wait on the page lock without holding a reference.
>
> This can lead to threads locking or waiting on a folio with a zero
> refcount. Whilst mmap_lock prevents the pages getting freed via
> munmap() they may still be freed by a migration. This leads to
> warnings such as PAGE_FLAGS_CHECK_AT_FREE due to the page being locked
> when the refcount drops to zero.
>
> Fix this by trying to take a reference on the folio before locking
> it. The code already checks the PTE under the PTL and aborts if the
> entry is no longer there. It is also possible the folio has been
> unmapped, freed and re-allocated allowing a reference to be taken on
> an unrelated folio. This case is also detected by the PTE check and
> the folio is unlocked without further changes.
>
> Signed-off-by: Alistair Popple <apopple@nvidia.com>
> Reviewed-by: Ralph Campbell <rcampbell@nvidia.com>
> Reviewed-by: John Hubbard <jhubbard@nvidia.com>
> Fixes: b756a3b5e7ea ("mm: device exclusive memory access")
> Cc: stable@vger.kernel.org
>
> ---
>
> Changes for v2:
>
> - Rebased to Linus master
> - Reworded commit message
> - Switched to using folios (thanks Matthew!)
> - Added Reviewed-by's

v2 looks correct to me.

thanks,
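To make the race concrete, here is an illustrative interleaving
reconstructed from the commit message (a sketch only, not part of the
patch):

	/*
	 * CPU thread A                     CPU thread B
	 * ------------                     ------------
	 * faults on device-exclusive PTE
	 *                                  faults on the same PTE
	 * locks the folio
	 *                                  blocks on the folio lock,
	 *                                  holding no folio reference
	 * restores the original PTE and
	 * drops the reference the entry
	 * held
	 * folio is freed, e.g. by
	 * migration (mmap_lock only
	 * rules out freeing via munmap())
	 *                                  wakes up waiting on / locking
	 *                                  a folio whose refcount is
	 *                                  zero -> PAGE_FLAGS_CHECK_AT_FREE
	 */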
s/page/folio/ in the entire commit log?
Christoph Hellwig <hch@infradead.org> writes:
> s/page/folio/ in the entire commit log?
I debated that but settled on leaving it as-is, because device exclusive
entries only deal with non-compound pages for now and I didn't want to
give any other impression. Happy to change it though if people think
that would be better/clearer.
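Since the entries only cover order-0 pages today, the folio conversion
in the patch is effectively a change of type rather than behaviour: for
a non-compound page the folio is a typed view of the same struct page.
A minimal sketch of that equivalence (kernel-internal API, for
illustration only; not code from the patch):

	/*
	 * For an order-0 page, page_folio() yields a folio referring
	 * to the very same struct page, so page-based and folio-based
	 * code manipulate one object.
	 */
	static void folio_view_example(struct page *page)
	{
		struct folio *folio = page_folio(page);

		VM_BUG_ON_PAGE(folio_page(folio, 0) != page, page);
	}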
On 30.03.23 03:25, Alistair Popple wrote:
> Device exclusive page table entries are used to prevent CPU access to
> a page whilst it is being accessed from a device.
>
[...]
>
> Fixes: b756a3b5e7ea ("mm: device exclusive memory access")
> Cc: stable@vger.kernel.org

Acked-by: David Hildenbrand <david@redhat.com>
diff --git a/mm/memory.c b/mm/memory.c
index f456f3b5049c..01a23ad48a04 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -3563,8 +3563,21 @@ static vm_fault_t remove_device_exclusive_entry(struct vm_fault *vmf)
 	struct vm_area_struct *vma = vmf->vma;
 	struct mmu_notifier_range range;
 
-	if (!folio_lock_or_retry(folio, vma->vm_mm, vmf->flags))
+	/*
+	 * We need a reference to lock the folio because we don't hold
+	 * the PTL so a racing thread can remove the device-exclusive
+	 * entry and unmap it. If the folio is free the entry must
+	 * have been removed already. If it happens to have already
+	 * been re-allocated after being freed all we do is lock and
+	 * unlock it.
+	 */
+	if (!folio_try_get(folio))
+		return 0;
+
+	if (!folio_lock_or_retry(folio, vma->vm_mm, vmf->flags)) {
+		folio_put(folio);
 		return VM_FAULT_RETRY;
+	}
 	mmu_notifier_range_init_owner(&range, MMU_NOTIFY_EXCLUSIVE, 0,
 				vma->vm_mm, vmf->address & PAGE_MASK,
 				(vmf->address & PAGE_MASK) + PAGE_SIZE, NULL);
@@ -3577,6 +3590,7 @@ static vm_fault_t remove_device_exclusive_entry(struct vm_fault *vmf)
 
 	pte_unmap_unlock(vmf->pte, vmf->ptl);
 	folio_unlock(folio);
+	folio_put(folio);
 
 	mmu_notifier_invalidate_range_end(&range);
 	return 0;
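The fix hinges on folio_try_get(), which only takes a reference if the
refcount has not already dropped to zero. A minimal userspace analogue
of that get-unless-zero pattern (hypothetical object type, for
illustration; not the kernel implementation):

	#include <stdatomic.h>
	#include <stdbool.h>

	struct object {
		atomic_int refcount;
	};

	/*
	 * Take a reference only if the object is not already being
	 * freed (refcount == 0), mirroring folio_try_get().
	 */
	static bool object_try_get(struct object *obj)
	{
		int ref = atomic_load(&obj->refcount);

		while (ref != 0) {
			/* On failure, 'ref' is reloaded with the current value. */
			if (atomic_compare_exchange_weak(&obj->refcount,
							 &ref, ref + 1))
				return true;
		}
		return false;	/* refcount hit zero: hands off */
	}

When it returns false the folio was already on its way to being freed,
which is why remove_device_exclusive_entry() can simply return 0 and let
the retried fault re-examine the PTE.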