| Message ID | 20230711202047.3818697-8-willy@infradead.org (mailing list archive) |
| --- | --- |
| State | New |
| Series | Avoid the mmap lock for fault-around |
On Tue, Jul 11, 2023 at 1:20 PM Matthew Wilcox (Oracle) <willy@infradead.org> wrote:
>
> The map_pages fs method should be safe to run under the VMA lock instead
> of the mmap lock. This should have a measurable reduction in contention
> on the mmap lock.
>
> Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>

I'll trust your claim that vmf->vma->vm_ops->map_pages() never relies on
mmap_lock. I think it makes sense but I did not check every case :)

Reviewed-by: Suren Baghdasaryan <surenb@google.com>

> ---
>  mm/memory.c | 10 +++++-----
>  1 file changed, 5 insertions(+), 5 deletions(-)
>
> diff --git a/mm/memory.c b/mm/memory.c
> index 709bffee8aa2..0a4e363b0605 100644
> --- a/mm/memory.c
> +++ b/mm/memory.c
> @@ -4547,11 +4547,6 @@ static vm_fault_t do_read_fault(struct vm_fault *vmf)
>  	vm_fault_t ret = 0;
>  	struct folio *folio;
>
> -	if (vmf->flags & FAULT_FLAG_VMA_LOCK) {
> -		vma_end_read(vmf->vma);
> -		return VM_FAULT_RETRY;
> -	}
> -
>  	/*
>  	 * Let's call ->map_pages() first and use ->fault() as fallback
>  	 * if page by the offset is not ready to be mapped (cold cache or
> @@ -4563,6 +4558,11 @@ static vm_fault_t do_read_fault(struct vm_fault *vmf)
>  		return ret;
>  	}
>
> +	if (vmf->flags & FAULT_FLAG_VMA_LOCK) {
> +		vma_end_read(vmf->vma);
> +		return VM_FAULT_RETRY;
> +	}
> +
>  	ret = __do_fault(vmf);
>  	if (unlikely(ret & (VM_FAULT_ERROR | VM_FAULT_NOPAGE | VM_FAULT_RETRY)))
>  		return ret;
> --
> 2.39.2
>
On Thu, Jul 13, 2023 at 08:32:27PM -0700, Suren Baghdasaryan wrote:
> On Tue, Jul 11, 2023 at 1:20 PM Matthew Wilcox (Oracle)
> <willy@infradead.org> wrote:
> >
> > The map_pages fs method should be safe to run under the VMA lock instead
> > of the mmap lock. This should have a measurable reduction in contention
> > on the mmap lock.
> >
> > Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
>
> I'll trust your claim that vmf->vma->vm_ops->map_pages() never relies on
> mmap_lock. I think it makes sense but I did not check every case :)

Fortunately, there's really only one implementation of ->map_pages(),
and it's filemap_map_pages(). afs_vm_map_pages() is a thin wrapper
around it.
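For the curious, the AFS wrapper really is minimal: it only checks that the page cache is still trustworthy before deferring to the generic helper. The sketch below is paraphrased from memory of fs/afs/file.c of this era, not copied verbatim; in particular, treat the afs_pagecache_valid() helper name as an assumption.

```c
/*
 * Paraphrased sketch of AFS's ->map_pages() wrapper, not verbatim;
 * the afs_pagecache_valid() name is an assumption.
 */
static vm_fault_t afs_vm_map_pages(struct vm_fault *vmf,
				   pgoff_t start_pgoff, pgoff_t end_pgoff)
{
	struct afs_vnode *vnode = AFS_FS_I(file_inode(vmf->vma->vm_file));

	/* Only map straight from the page cache while the server's
	 * callback promise says our cached data is still valid. */
	if (afs_pagecache_valid(vnode))
		return filemap_map_pages(vmf, start_pgoff, end_pgoff);
	return 0;
}
```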
```diff
diff --git a/mm/memory.c b/mm/memory.c
index 709bffee8aa2..0a4e363b0605 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -4547,11 +4547,6 @@ static vm_fault_t do_read_fault(struct vm_fault *vmf)
 	vm_fault_t ret = 0;
 	struct folio *folio;
 
-	if (vmf->flags & FAULT_FLAG_VMA_LOCK) {
-		vma_end_read(vmf->vma);
-		return VM_FAULT_RETRY;
-	}
-
 	/*
 	 * Let's call ->map_pages() first and use ->fault() as fallback
 	 * if page by the offset is not ready to be mapped (cold cache or
@@ -4563,6 +4558,11 @@ static vm_fault_t do_read_fault(struct vm_fault *vmf)
 		return ret;
 	}
 
+	if (vmf->flags & FAULT_FLAG_VMA_LOCK) {
+		vma_end_read(vmf->vma);
+		return VM_FAULT_RETRY;
+	}
+
 	ret = __do_fault(vmf);
 	if (unlikely(ret & (VM_FAULT_ERROR | VM_FAULT_NOPAGE | VM_FAULT_RETRY)))
 		return ret;
```
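Putting the two hunks together, the top of do_read_fault() after this patch reads roughly as follows. This is a sketch assembled from the hunks above; the fault-around call sandwiched between them (should_fault_around()/do_fault_around()) is filled in from memory of mm/memory.c and is not itself part of the diff.

```c
static vm_fault_t do_read_fault(struct vm_fault *vmf)
{
	vm_fault_t ret = 0;
	struct folio *folio;

	/*
	 * Let's call ->map_pages() first and use ->fault() as fallback
	 * if page by the offset is not ready to be mapped (cold cache or
	 * something).
	 */
	if (should_fault_around(vmf)) {
		/* With this patch, fault-around may run under the VMA lock. */
		ret = do_fault_around(vmf);
		if (ret)
			return ret;
	}

	/*
	 * Everything past fault-around still assumes mmap_lock: drop the
	 * VMA lock and tell the caller to retry the slow way.
	 */
	if (vmf->flags & FAULT_FLAG_VMA_LOCK) {
		vma_end_read(vmf->vma);
		return VM_FAULT_RETRY;
	}

	ret = __do_fault(vmf);
	if (unlikely(ret & (VM_FAULT_ERROR | VM_FAULT_NOPAGE | VM_FAULT_RETRY)))
		return ret;

	/* ... the rest of the function is unchanged by this patch ... */
}
```

The effect of the reordering is visible here: fault-around, which only maps pages already present in the page cache, now runs before the VMA-lock bail-out, so a read fault that fault-around can satisfy completes without ever touching the mmap lock.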
The map_pages fs method should be safe to run under the VMA lock instead
of the mmap lock. This should have a measurable reduction in contention
on the mmap lock.

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
 mm/memory.c | 10 +++++-----
 1 file changed, 5 insertions(+), 5 deletions(-)
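As background on why "vma_end_read() then VM_FAULT_RETRY" is a complete fallback story: arch fault handlers that take the per-VMA lock treat VM_FAULT_RETRY as "redo this fault under mmap_lock". Below is a minimal, hedged sketch of that caller-side pattern, loosely modeled on the x86 do_user_addr_fault() of this period; error handling, accounting, and the VM_FAULT_COMPLETED case are omitted.

```c
/*
 * Hedged sketch of the caller side, loosely modeled on x86's
 * do_user_addr_fault(); not a verbatim copy.
 */
vma = lock_vma_under_rcu(mm, address);	/* per-VMA read lock, no mmap_lock */
if (vma) {
	fault = handle_mm_fault(vma, address,
				flags | FAULT_FLAG_VMA_LOCK, regs);
	if (!(fault & VM_FAULT_RETRY)) {
		vma_end_read(vma);	/* handled entirely under the VMA lock */
		return;			/* fast path succeeded */
	}
	/*
	 * VM_FAULT_RETRY: the fault handler (e.g. do_read_fault() above)
	 * already dropped the VMA lock for us; fall back to mmap_lock.
	 */
}

mmap_read_lock(mm);
vma = find_vma(mm, address);
/* ... validity checks elided ... */
fault = handle_mm_fault(vma, address, flags, regs);
mmap_read_unlock(mm);
```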