Message ID | 20230327174515.1811532-2-willy@infradead.org (mailing list archive) |
---|---|
State | Deferred, archived |
Series | Prevent ->map_pages from sleeping |
On Mon, Mar 27, 2023 at 06:45:13PM +0100, Matthew Wilcox (Oracle) wrote:
> XFS doesn't actually need to be holding the XFS_MMAPLOCK_SHARED to do
> this. filemap_map_pages() cannot bring new folios into the page cache
> and the folio lock is taken during filemap_map_pages() which provides
> sufficient protection against a truncation or hole punch.
>
> Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
> ---
>  fs/xfs/xfs_file.c | 17 +----------------
>  1 file changed, 1 insertion(+), 16 deletions(-)
>
> diff --git a/fs/xfs/xfs_file.c b/fs/xfs/xfs_file.c
> index 863289aaa441..aede746541f8 100644
> --- a/fs/xfs/xfs_file.c
> +++ b/fs/xfs/xfs_file.c
> @@ -1389,25 +1389,10 @@ xfs_filemap_pfn_mkwrite(
>  	return __xfs_filemap_fault(vmf, PE_SIZE_PTE, true);
>  }
>
> -static vm_fault_t
> -xfs_filemap_map_pages(
> -	struct vm_fault		*vmf,
> -	pgoff_t			start_pgoff,
> -	pgoff_t			end_pgoff)
> -{
> -	struct inode		*inode = file_inode(vmf->vma->vm_file);
> -	vm_fault_t		ret;
> -
> -	xfs_ilock(XFS_I(inode), XFS_MMAPLOCK_SHARED);
> -	ret = filemap_map_pages(vmf, start_pgoff, end_pgoff);
> -	xfs_iunlock(XFS_I(inode), XFS_MMAPLOCK_SHARED);
> -	return ret;
> -}
> -
>  static const struct vm_operations_struct xfs_file_vm_ops = {
>  	.fault		= xfs_filemap_fault,
>  	.huge_fault	= xfs_filemap_huge_fault,
> -	.map_pages	= xfs_filemap_map_pages,
> +	.map_pages	= filemap_map_pages,
>  	.page_mkwrite	= xfs_filemap_page_mkwrite,
>  	.pfn_mkwrite	= xfs_filemap_pfn_mkwrite,
>  };
> --
> 2.39.2

Looks fine.

Reviewed-by: Dave Chinner <dchinner@redhat.com>
```diff
diff --git a/fs/xfs/xfs_file.c b/fs/xfs/xfs_file.c
index 863289aaa441..aede746541f8 100644
--- a/fs/xfs/xfs_file.c
+++ b/fs/xfs/xfs_file.c
@@ -1389,25 +1389,10 @@ xfs_filemap_pfn_mkwrite(
 	return __xfs_filemap_fault(vmf, PE_SIZE_PTE, true);
 }
 
-static vm_fault_t
-xfs_filemap_map_pages(
-	struct vm_fault		*vmf,
-	pgoff_t			start_pgoff,
-	pgoff_t			end_pgoff)
-{
-	struct inode		*inode = file_inode(vmf->vma->vm_file);
-	vm_fault_t		ret;
-
-	xfs_ilock(XFS_I(inode), XFS_MMAPLOCK_SHARED);
-	ret = filemap_map_pages(vmf, start_pgoff, end_pgoff);
-	xfs_iunlock(XFS_I(inode), XFS_MMAPLOCK_SHARED);
-	return ret;
-}
-
 static const struct vm_operations_struct xfs_file_vm_ops = {
 	.fault		= xfs_filemap_fault,
 	.huge_fault	= xfs_filemap_huge_fault,
-	.map_pages	= xfs_filemap_map_pages,
+	.map_pages	= filemap_map_pages,
 	.page_mkwrite	= xfs_filemap_page_mkwrite,
 	.pfn_mkwrite	= xfs_filemap_pfn_mkwrite,
 };
```
XFS doesn't actually need to be holding the XFS_MMAPLOCK_SHARED to do
this. filemap_map_pages() cannot bring new folios into the page cache
and the folio lock is taken during filemap_map_pages() which provides
sufficient protection against a truncation or hole punch.

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
 fs/xfs/xfs_file.c | 17 +----------------
 1 file changed, 1 insertion(+), 16 deletions(-)
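For readers less familiar with the page cache side, here is a minimal sketch of why the folio lock alone is sufficient. This is not the actual mm/filemap.c code; the helper name `folio_usable_for_map()` is invented for illustration, and the real filemap_map_pages() folds these checks into its mapping loop. The point it demonstrates: truncation and hole punching also take the folio lock and clear folio->mapping before removing a folio from the page cache, so a locked folio whose ->mapping still matches the file cannot be concurrently truncated or punched out.

```c
#include <linux/pagemap.h>

/*
 * Hypothetical helper (name invented for this sketch) showing the
 * kind of validation filemap_map_pages() performs under the folio
 * lock before mapping PTEs. Because truncate/hole-punch must lock
 * each folio and clear folio->mapping before freeing it, a locked
 * folio with a matching ->mapping is stable while the lock is held.
 */
static bool folio_usable_for_map(struct folio *folio,
				 struct address_space *mapping)
{
	if (!folio_trylock(folio))
		return false;		/* contended; skip this folio */
	if (folio->mapping != mapping ||
	    !folio_test_uptodate(folio)) {
		/* Truncated/punched out, or not yet read in. */
		folio_unlock(folio);
		return false;
	}
	return true;			/* caller maps PTEs, then unlocks */
}
```

Since only already-present folios are mapped and each is validated under its lock, the xfs_ilock()/xfs_iunlock() pair in the deleted wrapper added no protection beyond what the folio lock already provides.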