Message ID | 1452230879-18117-3-git-send-email-ross.zwisler@linux.intel.com (mailing list archive)
---|---
State | New, archived
On Thu 07-01-16 22:27:52, Ross Zwisler wrote:
> When we get a DAX PMD fault for a write it is possible that there could be
> some number of 4k zero pages already present for the same range that were
> inserted to service reads from a hole. These 4k zero pages need to be
> unmapped from the VMAs and removed from the struct address_space radix tree
> before the real DAX PMD entry can be inserted.
> 
> For PTE faults this same use case also exists and is handled by a
> combination of unmap_mapping_range() to unmap the VMAs and
> delete_from_page_cache() to remove the page from the address_space radix
> tree.
> 
> For PMD faults we do have a call to unmap_mapping_range() (protected by a
> buffer_new() check), but nothing clears out the radix tree entry. The
> buffer_new() check is also incorrect as the current ext4 and XFS filesystem
> code will never return a buffer_head with BH_New set, even when allocating
> new blocks over a hole. Instead the filesystem will zero the blocks
> manually and return a buffer_head with only BH_Mapped set.
> 
> Fix this situation by removing the buffer_new() check and adding a call to
> truncate_inode_pages_range() to clear out the radix tree entries before we
> insert the DAX PMD.
> 
> Signed-off-by: Ross Zwisler <ross.zwisler@linux.intel.com>
> Reported-by: Dan Williams <dan.j.williams@intel.com>
> Tested-by: Dan Williams <dan.j.williams@intel.com>

Just two nits below. Nothing serious so you can add:

Reviewed-by: Jan Kara <jack@suse.cz>

> ---
>  fs/dax.c | 20 ++++++++++----------
>  1 file changed, 10 insertions(+), 10 deletions(-)
> 
> diff --git a/fs/dax.c b/fs/dax.c
> index 513bba5..5b84a46 100644
> --- a/fs/dax.c
> +++ b/fs/dax.c
> @@ -589,6 +589,7 @@ int __dax_pmd_fault(struct vm_area_struct *vma, unsigned long address,
>  	bool write = flags & FAULT_FLAG_WRITE;
>  	struct block_device *bdev;
>  	pgoff_t size, pgoff;
> +	loff_t lstart, lend;
>  	sector_t block;
>  	int result = 0;
> 
> @@ -643,15 +644,13 @@ int __dax_pmd_fault(struct vm_area_struct *vma, unsigned long address,
>  		goto fallback;
>  	}
> 
> -	/*
> -	 * If we allocated new storage, make sure no process has any
> -	 * zero pages covering this hole
> -	 */
> -	if (buffer_new(&bh)) {
> -		i_mmap_unlock_read(mapping);
> -		unmap_mapping_range(mapping, pgoff << PAGE_SHIFT, PMD_SIZE, 0);
> -		i_mmap_lock_read(mapping);
> -	}
> +	/* make sure no process has any zero pages covering this hole */
> +	lstart = pgoff << PAGE_SHIFT;
> +	lend = lstart + PMD_SIZE - 1; /* inclusive */
> +	i_mmap_unlock_read(mapping);

Just a nit but is there reason why we grab i_mmap_lock_read(mapping) only
to release it a few lines below? The bh checks inside the locked region
don't seem to rely on i_mmap_lock...

> +	unmap_mapping_range(mapping, lstart, PMD_SIZE, 0);
> +	truncate_inode_pages_range(mapping, lstart, lend);

These two calls can be shortened as:

truncate_pagecache_range(inode, lstart, lend);

								Honza
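For reference, a minimal sketch of the hunk with Jan's suggestion applied.
truncate_pagecache_range() performs both steps internally: it calls
unmap_mapping_range() on the page-aligned portion of the range and then
truncate_inode_pages_range() on the whole of it, so for this PMD-aligned,
page-bounded range it is equivalent to the two open-coded calls. The sketch
assumes __dax_pmd_fault() already has the inode at hand (e.g. as
inode = mapping->host, which the excerpt above does not show):

	/* make sure no process has any zero pages covering this hole */
	lstart = pgoff << PAGE_SHIFT;
	lend = lstart + PMD_SIZE - 1;	/* inclusive */
	i_mmap_unlock_read(mapping);

	/*
	 * Unmaps the zero pages from all VMAs and clears their radix
	 * tree entries in one call.
	 */
	truncate_pagecache_range(inode, lstart, lend);

	i_mmap_lock_read(mapping);

Because lstart is PMD-aligned and lend ends on a page boundary, the internal
page-alignment rounding in truncate_pagecache_range() changes nothing here.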
On Tue, Jan 12, 2016 at 10:44:51AM +0100, Jan Kara wrote:
> On Thu 07-01-16 22:27:52, Ross Zwisler wrote:
> > When we get a DAX PMD fault for a write it is possible that there could be
> > some number of 4k zero pages already present for the same range that were
> > inserted to service reads from a hole. These 4k zero pages need to be
> > unmapped from the VMAs and removed from the struct address_space radix tree
> > before the real DAX PMD entry can be inserted.
> > 
> > For PTE faults this same use case also exists and is handled by a
> > combination of unmap_mapping_range() to unmap the VMAs and
> > delete_from_page_cache() to remove the page from the address_space radix
> > tree.
> > 
> > For PMD faults we do have a call to unmap_mapping_range() (protected by a
> > buffer_new() check), but nothing clears out the radix tree entry. The
> > buffer_new() check is also incorrect as the current ext4 and XFS filesystem
> > code will never return a buffer_head with BH_New set, even when allocating
> > new blocks over a hole. Instead the filesystem will zero the blocks
> > manually and return a buffer_head with only BH_Mapped set.
> > 
> > Fix this situation by removing the buffer_new() check and adding a call to
> > truncate_inode_pages_range() to clear out the radix tree entries before we
> > insert the DAX PMD.
> > 
> > Signed-off-by: Ross Zwisler <ross.zwisler@linux.intel.com>
> > Reported-by: Dan Williams <dan.j.williams@intel.com>
> > Tested-by: Dan Williams <dan.j.williams@intel.com>
> 
> Just two nits below. Nothing serious so you can add:
> 
> Reviewed-by: Jan Kara <jack@suse.cz>

Cool, thank you for the review!

> > ---
> >  fs/dax.c | 20 ++++++++++----------
> >  1 file changed, 10 insertions(+), 10 deletions(-)
> > 
> > diff --git a/fs/dax.c b/fs/dax.c
> > index 513bba5..5b84a46 100644
> > --- a/fs/dax.c
> > +++ b/fs/dax.c
> > @@ -589,6 +589,7 @@ int __dax_pmd_fault(struct vm_area_struct *vma, unsigned long address,
> >  	bool write = flags & FAULT_FLAG_WRITE;
> >  	struct block_device *bdev;
> >  	pgoff_t size, pgoff;
> > +	loff_t lstart, lend;
> >  	sector_t block;
> >  	int result = 0;
> > 
> > @@ -643,15 +644,13 @@ int __dax_pmd_fault(struct vm_area_struct *vma, unsigned long address,
> >  		goto fallback;
> >  	}
> > 
> > -	/*
> > -	 * If we allocated new storage, make sure no process has any
> > -	 * zero pages covering this hole
> > -	 */
> > -	if (buffer_new(&bh)) {
> > -		i_mmap_unlock_read(mapping);
> > -		unmap_mapping_range(mapping, pgoff << PAGE_SHIFT, PMD_SIZE, 0);
> > -		i_mmap_lock_read(mapping);
> > -	}
> > +	/* make sure no process has any zero pages covering this hole */
> > +	lstart = pgoff << PAGE_SHIFT;
> > +	lend = lstart + PMD_SIZE - 1; /* inclusive */
> > +	i_mmap_unlock_read(mapping);
> 
> Just a nit but is there reason why we grab i_mmap_lock_read(mapping) only
> to release it a few lines below? The bh checks inside the locked region
> don't seem to rely on i_mmap_lock...

I think we can probably just take it when we're done with the truncate() -
I'll fix for v9.

> > +	unmap_mapping_range(mapping, lstart, PMD_SIZE, 0);
> > +	truncate_inode_pages_range(mapping, lstart, lend);
> 
> These two calls can be shortened as:
> 
> truncate_pagecache_range(inode, lstart, lend);

Nice. I'll change it for v9.
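A rough sketch of how the hunk could look in v9 with both nits folded in.
This is a hypothetical reconstruction, not the actual v9 patch; it assumes
the earlier i_mmap_lock_read(mapping) acquisition has been moved down past
the truncate, as Ross describes, so there is no unlock/relock dance left:

	/* make sure no process has any zero pages covering this hole */
	lstart = pgoff << PAGE_SHIFT;
	lend = lstart + PMD_SIZE - 1;	/* inclusive */

	/* unmap the 4k zero pages and clear their radix tree entries */
	truncate_pagecache_range(inode, lstart, lend);

	/* only take the lock once the truncate is done */
	i_mmap_lock_read(mapping);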
diff --git a/fs/dax.c b/fs/dax.c
index 513bba5..5b84a46 100644
--- a/fs/dax.c
+++ b/fs/dax.c
@@ -589,6 +589,7 @@ int __dax_pmd_fault(struct vm_area_struct *vma, unsigned long address,
 	bool write = flags & FAULT_FLAG_WRITE;
 	struct block_device *bdev;
 	pgoff_t size, pgoff;
+	loff_t lstart, lend;
 	sector_t block;
 	int result = 0;
 
@@ -643,15 +644,13 @@ int __dax_pmd_fault(struct vm_area_struct *vma, unsigned long address,
 		goto fallback;
 	}
 
-	/*
-	 * If we allocated new storage, make sure no process has any
-	 * zero pages covering this hole
-	 */
-	if (buffer_new(&bh)) {
-		i_mmap_unlock_read(mapping);
-		unmap_mapping_range(mapping, pgoff << PAGE_SHIFT, PMD_SIZE, 0);
-		i_mmap_lock_read(mapping);
-	}
+	/* make sure no process has any zero pages covering this hole */
+	lstart = pgoff << PAGE_SHIFT;
+	lend = lstart + PMD_SIZE - 1; /* inclusive */
+	i_mmap_unlock_read(mapping);
+	unmap_mapping_range(mapping, lstart, PMD_SIZE, 0);
+	truncate_inode_pages_range(mapping, lstart, lend);
+	i_mmap_lock_read(mapping);
 
 	/*
 	 * If a truncate happened while we were allocating blocks, we may
@@ -665,7 +664,8 @@ int __dax_pmd_fault(struct vm_area_struct *vma, unsigned long address,
 		goto out;
 	}
 	if ((pgoff | PG_PMD_COLOUR) >= size) {
-		dax_pmd_dbg(&bh, address, "pgoff unaligned");
+		dax_pmd_dbg(&bh, address,
+				"offset + huge page size > file size");
 		goto fallback;
 	}
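The debug message change in the last hunk matches what the test actually
computes. Assuming the usual definition of PG_PMD_COLOUR in fs/dax.c of this
era, a short worked example:

	/* low bits of a page offset within one PMD */
	#define PG_PMD_COLOUR	((PMD_SIZE >> PAGE_SHIFT) - 1)
	/* with 4k pages and 2MB PMDs: 512 - 1 = 511 */

	/*
	 * (pgoff | PG_PMD_COLOUR) is the index of the last 4k page the
	 * huge page would cover.  If that index is >= the file size in
	 * pages, part of the PMD would map past EOF; that is a size
	 * problem, not an alignment problem, hence the message change
	 * from "pgoff unaligned" to "offset + huge page size > file size".
	 */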