Message ID | 20151022064154.12700.90545.stgit@dwillia2-desk3.amr.corp.intel.com (mailing list archive)
---|---
State | Superseded
On Thu 22-10-15 02:41:54, Dan Williams wrote:
> dax_clear_blocks is currently performing a cond_resched() after every
> PAGE_SIZE memset. We need not check so frequently, for example md-raid
> only calls cond_resched() at stripe granularity. Also, in preparation
> for introducing a dax_map_atomic() operation that temporarily pins a dax
> mapping move the call to cond_resched() to the outer loop.
>
> Signed-off-by: Dan Williams <dan.j.williams@intel.com>

The patch looks good to me. You can add:

Reviewed-by: Jan Kara <jack@suse.com>

								Honza

> ---
>  fs/dax.c |   27 ++++++++++++---------------
>  1 file changed, 12 insertions(+), 15 deletions(-)
>
> diff --git a/fs/dax.c b/fs/dax.c
> index 5dc33d788d50..f8e543839e5c 100644
> --- a/fs/dax.c
> +++ b/fs/dax.c
> @@ -28,6 +28,7 @@
>  #include <linux/sched.h>
>  #include <linux/uio.h>
>  #include <linux/vmstat.h>
> +#include <linux/sizes.h>
>
>  int dax_clear_blocks(struct inode *inode, sector_t block, long size)
>  {
> @@ -38,24 +39,20 @@ int dax_clear_blocks(struct inode *inode, sector_t block, long size)
>  	do {
>  		void __pmem *addr;
>  		unsigned long pfn;
> -		long count;
> +		long count, sz;
>
> -		count = bdev_direct_access(bdev, sector, &addr, &pfn, size);
> +		sz = min_t(long, size, SZ_1M);
> +		count = bdev_direct_access(bdev, sector, &addr, &pfn, sz);
>  		if (count < 0)
>  			return count;
> -		BUG_ON(size < count);
> -		while (count > 0) {
> -			unsigned pgsz = PAGE_SIZE - offset_in_page(addr);
> -			if (pgsz > count)
> -				pgsz = count;
> -			clear_pmem(addr, pgsz);
> -			addr += pgsz;
> -			size -= pgsz;
> -			count -= pgsz;
> -			BUG_ON(pgsz & 511);
> -			sector += pgsz / 512;
> -			cond_resched();
> -		}
> +		if (count < sz)
> +			sz = count;
> +		clear_pmem(addr, sz);
> +		addr += sz;
> +		size -= sz;
> +		BUG_ON(sz & 511);
> +		sector += sz / 512;
> +		cond_resched();
>  	} while (size);
>
>  	wmb_pmem();