Message ID | 1439484671-15718-8-git-send-email-ross.zwisler@linux.intel.com (mailing list archive)
---|---
State | Changes Requested
Delegated to | Ross Zwisler
On Thu, Aug 13, 2015 at 9:51 AM, Ross Zwisler <ross.zwisler@linux.intel.com> wrote:
> Update the annotation for the kaddr pointer returned by direct_access()
> so that it is a __pmem pointer.  This is consistent with the PMEM driver
> and with how this direct_access() pointer is used in the DAX code.
>
> Signed-off-by: Ross Zwisler <ross.zwisler@linux.intel.com>
> ---
>  Documentation/filesystems/Locking |  3 ++-
>  arch/powerpc/sysdev/axonram.c     |  7 ++++---
>  drivers/block/brd.c               |  4 ++--
>  drivers/nvdimm/pmem.c             |  4 ++--
>  drivers/s390/block/dcssblk.c      | 10 +++++----
>  fs/block_dev.c                    |  2 +-
>  fs/dax.c                          | 44 +++++++++++++++++++++------------------
>  include/linux/blkdev.h            |  8 +++----
>  8 files changed, 45 insertions(+), 37 deletions(-)
>
> diff --git a/Documentation/filesystems/Locking b/Documentation/filesystems/Locking
> index 6a34a0f..06d4434 100644
> --- a/Documentation/filesystems/Locking
> +++ b/Documentation/filesystems/Locking
> @@ -397,7 +397,8 @@ prototypes:
>  	int (*release) (struct gendisk *, fmode_t);
>  	int (*ioctl) (struct block_device *, fmode_t, unsigned, unsigned long);
>  	int (*compat_ioctl) (struct block_device *, fmode_t, unsigned, unsigned long);
> -	int (*direct_access) (struct block_device *, sector_t, void **, unsigned long *);
> +	int (*direct_access) (struct block_device *, sector_t, void __pmem **,
> +				unsigned long *);

So this collides with the __pfn_t work.  I think we have a reasonable chance
of getting that into 4.3, so I'd wait to see if we hit any major roadblocks
with that set [1] before merging these.

[1]: https://lists.01.org/pipermail/linux-nvdimm/2015-August/001803.html
On Thu, 2015-08-13 at 14:20 -0700, Dan Williams wrote:
> On Thu, Aug 13, 2015 at 9:51 AM, Ross Zwisler
> <ross.zwisler@linux.intel.com> wrote:
> > Update the annotation for the kaddr pointer returned by direct_access()
> > so that it is a __pmem pointer.  This is consistent with the PMEM driver
> > and with how this direct_access() pointer is used in the DAX code.
> >
> > Signed-off-by: Ross Zwisler <ross.zwisler@linux.intel.com>
> > ---
> > [..]
>
> So this collides with the __pfn_t work.  I think we have a reasonable chance
> of getting that into 4.3, so I'd wait to see if we hit any major roadblocks
> with that set [1] before merging these.
>
> [1]: https://lists.01.org/pipermail/linux-nvdimm/2015-August/001803.html

Fair enough.  Yea, I hadn't merged with that series yet a) because I didn't
know when its review cycle would settle down and b) because that series hadn't
pulled in changes from Matthew for PMD support, which I was originally using
as a baseline.

I'll merge with your code for v3.
On Fri, Aug 14, 2015 at 9:55 AM, Ross Zwisler <ross.zwisler@linux.intel.com> wrote:
> On Thu, 2015-08-13 at 14:20 -0700, Dan Williams wrote:
>> On Thu, Aug 13, 2015 at 9:51 AM, Ross Zwisler
>> <ross.zwisler@linux.intel.com> wrote:
>> > Update the annotation for the kaddr pointer returned by direct_access()
>> > so that it is a __pmem pointer.  This is consistent with the PMEM driver
>> > and with how this direct_access() pointer is used in the DAX code.
>> >
>> > [..]
>>
>> So this collides with the __pfn_t work.  I think we have a reasonable chance
>> of getting that into 4.3, so I'd wait to see if we hit any major roadblocks
>> with that set [1] before merging these.
>>
>> [1]: https://lists.01.org/pipermail/linux-nvdimm/2015-August/001803.html
>
> Fair enough.  Yea, I hadn't merged with that series yet a) because I didn't
> know when its review cycle would settle down and b) because that series hadn't
> pulled in changes from Matthew for PMD support, which I was originally using
> as a baseline.
>
> I'll merge with your code for v3.

Sounds good, let me go rebase the __pfn_t patches on -mm so we're all lined up
and collision free.
On Fri, Aug 14, 2015 at 09:58:16AM -0700, Dan Williams wrote:
> > I'll merge with your code for v3.
>
> Sounds good, let me go rebase the __pfn_t patches on -mm so we're all lined
> up and collision free.

I doubt that we'll have PFN mapping ready for 4.3.  I'd rather see Ross'
series go first, and move the patch to remove the size argument from
->direct_access [1] over to this series as well.

[1] https://git.kernel.org/cgit/linux/kernel/git/djbw/nvdimm.git/commit/?h=pfn&id=8e15e69fb9e61ac563c5a7ffd9dd9a7b545cced3
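For orientation: the nvdimm.git commit linked above drops the size argument
from the ->direct_access method.  Combined with the annotation added by this
patch, the callback would end up looking roughly like the sketch below (a
sketch based on that commit's description, not code copied from that tree):

    long (*direct_access)(struct block_device *bdev, sector_t sector,
                          void __pmem **kaddr, unsigned long *pfn);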
On Thu, Aug 13, 2015 at 10:51:11AM -0600, Ross Zwisler wrote:
> Update the annotation for the kaddr pointer returned by direct_access()
> so that it is a __pmem pointer.  This is consistent with the PMEM driver
> and with how this direct_access() pointer is used in the DAX code.

IFF we stick to the __pmem annotations this looks good.

That being said, I'm starting to really dislike them.  We don't need special
accessors to read/write from pmem, we just need to explicitly commit it if we
want to make it persistent.  So I really don't see the need to treat it as
special and require all the force casts to and from the attribute.
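A minimal sketch of the model described above: plain loads and stores, plus an
explicit write-back and commit when the data must be durable.  The helper
names are the pmem API used elsewhere in this thread, but they are shown here
with hypothetical un-annotated (plain void *) signatures to match the argument
being made:

    #include <linux/string.h>
    #include <linux/pmem.h>

    static void pmem_example_write(void *dst, const void *src, size_t n)
    {
            memcpy(dst, src, n);    /* ordinary stores into the pmem mapping */
            wb_cache_pmem(dst, n);  /* write back any cache lines we dirtied */
            wmb_pmem();             /* commit the stores to the persistence domain */
    }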
On Sat, Aug 15, 2015 at 2:19 AM, Christoph Hellwig <hch@lst.de> wrote:
> On Thu, Aug 13, 2015 at 10:51:11AM -0600, Ross Zwisler wrote:
>> Update the annotation for the kaddr pointer returned by direct_access()
>> so that it is a __pmem pointer.  This is consistent with the PMEM driver
>> and with how this direct_access() pointer is used in the DAX code.
>
> IFF we stick to the __pmem annotations this looks good.
>
> That being said, I'm starting to really dislike them.  We don't need special
> accessors to read/write from pmem, we just need to explicitly commit it if we
> want to make it persistent.  So I really don't see the need to treat it as
> special and require all the force casts to and from the attribute.

I'm not going to put up much of a fight if it's really getting in the way...

That said, while we don't need special accessors we do need guarantees that
anything that has written to a persistent memory address has done so in a way
that wmb_pmem() is able to flush it.  It's more of a "I've audited this code
path for wmb_pmem() compatibility, so use this API to write to pmem."

Perhaps a better way to statically check for missed flushes might be to have
acquire_pmem_for_write() + release() annotations, where the final release does
a wmb_pmem(), but as far as I can tell the sparse acquire/release annotations
don't stack.
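A minimal sketch of the discipline described above: writes reach a __pmem
pointer only through helpers that have been audited to be flushable by
wmb_pmem().  The record structure and function are hypothetical;
memcpy_to_pmem() and wmb_pmem() are the kernel helpers referenced in this
thread:

    #include <linux/types.h>
    #include <linux/pmem.h>

    struct example_record {
            u64 seq;
            u64 payload;
    };

    static void example_commit_record(void __pmem *slot,
                                      const struct example_record *rec)
    {
            memcpy_to_pmem(slot, rec, sizeof(*rec));  /* audited write path */
            wmb_pmem();                               /* make it durable */
    }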
On Sat, Aug 15, 2015 at 2:11 AM, Christoph Hellwig <hch@lst.de> wrote:
> On Fri, Aug 14, 2015 at 09:58:16AM -0700, Dan Williams wrote:
>> > I'll merge with your code for v3.
>>
>> Sounds good, let me go rebase the __pfn_t patches on -mm so we're all lined
>> up and collision free.
>
> I doubt that we'll have PFN mapping ready for 4.3.  I'd rather see Ross'
> series go first, and move the patch to remove the size argument from
> ->direct_access [1] over to this series as well.
>
> [1] https://git.kernel.org/cgit/linux/kernel/git/djbw/nvdimm.git/commit/?h=pfn&id=8e15e69fb9e61ac563c5a7ffd9dd9a7b545cced3

Yes, let's do it.  The need for DAX persistence guarantees is a higher
priority than solving the DAX vs PMEM unbind bug.  Especially if we can defer
__pfn_t and get some basic struct page mapping into 4.3 as well.
On Sat, Aug 15, 2015 at 08:44:27AM -0700, Dan Williams wrote:
> That said, while we don't need special accessors we do need guarantees that
> anything that has written to a persistent memory address has done so in a
> way that wmb_pmem() is able to flush it.  It's more of a "I've audited this
> code path for wmb_pmem() compatibility, so use this API to write to pmem."

I'm more worried about things where we don't just do plain loads and stores
to a pmem region but DMA, which will end up as a nightmare of casts.

But we can wait and see how that evolves in the end.
On Sat, Aug 15, 2015 at 9:00 AM, Christoph Hellwig <hch@lst.de> wrote:
> On Sat, Aug 15, 2015 at 08:44:27AM -0700, Dan Williams wrote:
>> That said, while we don't need special accessors we do need guarantees that
>> anything that has written to a persistent memory address has done so in a
>> way that wmb_pmem() is able to flush it.  It's more of a "I've audited this
>> code path for wmb_pmem() compatibility, so use this API to write to pmem."
>
> I'm more worried about things where we don't just do plain loads and stores
> to a pmem region but DMA, which will end up as a nightmare of casts.
>
> But we can wait and see how that evolves in the end.

It's already not possible to do something like dma_map_single() on an
ioremap()'d address, so there currently aren't any __iomem/DMA collisions.
As long as DMA setup is relative to a physical address resource I think we're
ok.  Making sure a DMA is both complete and persistent, though, is a different
problem.
On Sat, 2015-08-15 at 08:44 -0700, Dan Williams wrote:
> On Sat, Aug 15, 2015 at 2:19 AM, Christoph Hellwig <hch@lst.de> wrote:
> > On Thu, Aug 13, 2015 at 10:51:11AM -0600, Ross Zwisler wrote:
> > > [..]
> >
> > IFF we stick to the __pmem annotations this looks good.
> >
> > That being said, I'm starting to really dislike them.  We don't need
> > special accessors to read/write from pmem, we just need to explicitly
> > commit it if we want to make it persistent.  So I really don't see the
> > need to treat it as special and require all the force casts to and from
> > the attribute.
>
> I'm not going to put up much of a fight if it's really getting in the way...
>
> That said, while we don't need special accessors we do need guarantees that
> anything that has written to a persistent memory address has done so in a
> way that wmb_pmem() is able to flush it.  It's more of a "I've audited this
> code path for wmb_pmem() compatibility, so use this API to write to pmem."
>
> Perhaps a better way to statically check for missed flushes might be to have
> acquire_pmem_for_write() + release() annotations, where the final release
> does a wmb_pmem(), but as far as I can tell the sparse acquire/release
> annotations don't stack.

FWIW I've been on the fence about the __pmem annotations, but my current
thought is that we really do need a way of saying that stores to these
pointers need special care for wmb_pmem() to do its thing, and that __pmem
does a reasonably good job of that.

If we can figure out a cooler way, such as the write() + release() flow Dan is
talking about, great.  But I think we need something to keep us from making
errors by storing to PMEM pointers and leaving data in the processor cache.
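Concretely, what the annotation buys as a static check (the function below is
hypothetical and the sparse warning text is paraphrased):

    /*
     * 4.2-era definition in include/linux/compiler.h (under __CHECKER__):
     *
     *      # define __pmem __attribute__((noderef, address_space(5)))
     *
     * With that, sparse flags plain stores through a __pmem pointer:
     */
    static void example_unflushed_write(void __pmem *addr)
    {
            *(unsigned long *)addr = 0;   /* sparse: cast removes address space */
            /* nothing here calls wb_cache_pmem()/wmb_pmem(), so the data may
             * still be sitting in the CPU cache rather than on the media */
    }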
diff --git a/Documentation/filesystems/Locking b/Documentation/filesystems/Locking
index 6a34a0f..06d4434 100644
--- a/Documentation/filesystems/Locking
+++ b/Documentation/filesystems/Locking
@@ -397,7 +397,8 @@ prototypes:
 	int (*release) (struct gendisk *, fmode_t);
 	int (*ioctl) (struct block_device *, fmode_t, unsigned, unsigned long);
 	int (*compat_ioctl) (struct block_device *, fmode_t, unsigned, unsigned long);
-	int (*direct_access) (struct block_device *, sector_t, void **, unsigned long *);
+	int (*direct_access) (struct block_device *, sector_t, void __pmem **,
+				unsigned long *);
 	int (*media_changed) (struct gendisk *);
 	void (*unlock_native_capacity) (struct gendisk *);
 	int (*revalidate_disk) (struct gendisk *);
diff --git a/arch/powerpc/sysdev/axonram.c b/arch/powerpc/sysdev/axonram.c
index ee90db1..a2be2a6 100644
--- a/arch/powerpc/sysdev/axonram.c
+++ b/arch/powerpc/sysdev/axonram.c
@@ -141,13 +141,14 @@ axon_ram_make_request(struct request_queue *queue, struct bio *bio)
  */
 static long
 axon_ram_direct_access(struct block_device *device, sector_t sector,
-		       void **kaddr, unsigned long *pfn, long size)
+		       void __pmem **kaddr, unsigned long *pfn, long size)
 {
 	struct axon_ram_bank *bank = device->bd_disk->private_data;
 	loff_t offset = (loff_t)sector << AXON_RAM_SECTOR_SHIFT;
+	void *addr = (void *)(bank->ph_addr + offset);
 
-	*kaddr = (void *)(bank->ph_addr + offset);
-	*pfn = virt_to_phys(*kaddr) >> PAGE_SHIFT;
+	*kaddr = (void __pmem *)addr;
+	*pfn = virt_to_phys(addr) >> PAGE_SHIFT;
 
 	return bank->size - offset;
 }
diff --git a/drivers/block/brd.c b/drivers/block/brd.c
index 5750b39..2691bb6 100644
--- a/drivers/block/brd.c
+++ b/drivers/block/brd.c
@@ -371,7 +371,7 @@ static int brd_rw_page(struct block_device *bdev, sector_t sector,
 
 #ifdef CONFIG_BLK_DEV_RAM_DAX
 static long brd_direct_access(struct block_device *bdev, sector_t sector,
-			void **kaddr, unsigned long *pfn, long size)
+			void __pmem **kaddr, unsigned long *pfn, long size)
 {
 	struct brd_device *brd = bdev->bd_disk->private_data;
 	struct page *page;
@@ -381,7 +381,7 @@ static long brd_direct_access(struct block_device *bdev, sector_t sector,
 	page = brd_insert_page(brd, sector);
 	if (!page)
 		return -ENOSPC;
-	*kaddr = page_address(page);
+	*kaddr = (void __pmem *)page_address(page);
 	*pfn = page_to_pfn(page);
 
 	/*
diff --git a/drivers/nvdimm/pmem.c b/drivers/nvdimm/pmem.c
index ade9eb9..68f6a6a 100644
--- a/drivers/nvdimm/pmem.c
+++ b/drivers/nvdimm/pmem.c
@@ -92,7 +92,7 @@ static int pmem_rw_page(struct block_device *bdev, sector_t sector,
 }
 
 static long pmem_direct_access(struct block_device *bdev, sector_t sector,
-			void **kaddr, unsigned long *pfn, long size)
+			void __pmem **kaddr, unsigned long *pfn, long size)
 {
 	struct pmem_device *pmem = bdev->bd_disk->private_data;
 	size_t offset = sector << 9;
@@ -101,7 +101,7 @@ static long pmem_direct_access(struct block_device *bdev, sector_t sector,
 		return -ENODEV;
 
 	/* FIXME convert DAX to comprehend that this mapping has a lifetime */
-	*kaddr = (void __force *) pmem->virt_addr + offset;
+	*kaddr = pmem->virt_addr + offset;
 	*pfn = (pmem->phys_addr + offset) >> PAGE_SHIFT;
 
 	return pmem->size - offset;
diff --git a/drivers/s390/block/dcssblk.c b/drivers/s390/block/dcssblk.c
index da21281..2c5a397 100644
--- a/drivers/s390/block/dcssblk.c
+++ b/drivers/s390/block/dcssblk.c
@@ -29,7 +29,7 @@ static int dcssblk_open(struct block_device *bdev, fmode_t mode);
 static void dcssblk_release(struct gendisk *disk, fmode_t mode);
 static void dcssblk_make_request(struct request_queue *q, struct bio *bio);
 static long dcssblk_direct_access(struct block_device *bdev, sector_t secnum,
-				 void **kaddr, unsigned long *pfn, long size);
+				 void __pmem **kaddr, unsigned long *pfn, long size);
 
 static char dcssblk_segments[DCSSBLK_PARM_LEN] = "\0";
 
@@ -879,18 +879,20 @@ fail:
 
 static long
 dcssblk_direct_access (struct block_device *bdev, sector_t secnum,
-			void **kaddr, unsigned long *pfn, long size)
+			void __pmem **kaddr, unsigned long *pfn, long size)
 {
 	struct dcssblk_dev_info *dev_info;
 	unsigned long offset, dev_sz;
+	void *addr;
 
 	dev_info = bdev->bd_disk->private_data;
 	if (!dev_info)
 		return -ENODEV;
 	dev_sz = dev_info->end - dev_info->start;
 	offset = secnum * 512;
-	*kaddr = (void *) (dev_info->start + offset);
-	*pfn = virt_to_phys(*kaddr) >> PAGE_SHIFT;
+	addr = (void *) (dev_info->start + offset);
+	*pfn = virt_to_phys(addr) >> PAGE_SHIFT;
+	*kaddr = (void __pmem *) addr;
 
 	return dev_sz - offset;
 }
diff --git a/fs/block_dev.c b/fs/block_dev.c
index 9be2d7e..4ab366d 100644
--- a/fs/block_dev.c
+++ b/fs/block_dev.c
@@ -442,7 +442,7 @@ EXPORT_SYMBOL_GPL(bdev_write_page);
  * accessible at this address.
  */
 long bdev_direct_access(struct block_device *bdev, sector_t sector,
-			void **addr, unsigned long *pfn, long size)
+			void __pmem **addr, unsigned long *pfn, long size)
 {
 	long avail;
 	const struct block_device_operations *ops = bdev->bd_disk->fops;
diff --git a/fs/dax.c b/fs/dax.c
index ea1b2c8..633a1ba 100644
--- a/fs/dax.c
+++ b/fs/dax.c
@@ -36,7 +36,7 @@ int dax_clear_blocks(struct inode *inode, sector_t block, long size)
 
 	might_sleep();
 	do {
-		void *addr;
+		void __pmem *addr;
 		unsigned long pfn;
 		long count;
 
@@ -48,7 +48,7 @@ int dax_clear_blocks(struct inode *inode, sector_t block, long size)
 			unsigned pgsz = PAGE_SIZE - offset_in_page(addr);
 			if (pgsz > count)
 				pgsz = count;
-			clear_pmem((void __pmem *)addr, pgsz);
+			clear_pmem(addr, pgsz);
 			addr += pgsz;
 			size -= pgsz;
 			count -= pgsz;
@@ -63,7 +63,8 @@ int dax_clear_blocks(struct inode *inode, sector_t block, long size)
 }
 EXPORT_SYMBOL_GPL(dax_clear_blocks);
 
-static long dax_get_addr(struct buffer_head *bh, void **addr, unsigned blkbits)
+static long dax_get_addr(struct buffer_head *bh, void __pmem **addr,
+		unsigned blkbits)
 {
 	unsigned long pfn;
 	sector_t sector = bh->b_blocknr << (blkbits - 9);
@@ -71,15 +72,15 @@ static long dax_get_addr(struct buffer_head *bh, void **addr, unsigned blkbits)
 }
 
 /* the clear_pmem() calls are ordered by a wmb_pmem() in the caller */
-static void dax_new_buf(void *addr, unsigned size, unsigned first, loff_t pos,
-		loff_t end)
+static void dax_new_buf(void __pmem *addr, unsigned size, unsigned first,
+		loff_t pos, loff_t end)
 {
 	loff_t final = end - pos + first; /* The final byte of the buffer */
 
 	if (first > 0)
-		clear_pmem((void __pmem *)addr, first);
+		clear_pmem(addr, first);
 	if (final < size)
-		clear_pmem((void __pmem *)addr + final, size - final);
+		clear_pmem(addr + final, size - final);
 }
 
 static bool buffer_written(struct buffer_head *bh)
@@ -107,7 +108,7 @@ static ssize_t dax_io(struct inode *inode, struct iov_iter *iter,
 	loff_t pos = start;
 	loff_t max = start;
 	loff_t bh_max = start;
-	void *addr;
+	void __pmem *addr;
 	bool hole = false;
 	bool need_wmb = false;
 
@@ -159,16 +160,18 @@ static ssize_t dax_io(struct inode *inode, struct iov_iter *iter,
 		}
 
 		if (iov_iter_rw(iter) == WRITE) {
-			len = copy_from_iter_nocache(addr, max - pos, iter);
+			len = copy_from_iter_nocache((void __force *)addr,
+					max - pos, iter);
 			/*
 			 * copy_from_iter_nocache() uses non-temporal stores
 			 * for iovec iterators so we can skip the write back.
 			 */
 			if (!iter_is_iovec(iter))
-				wb_cache_pmem((void __pmem *)addr, max - pos);
+				wb_cache_pmem(addr, max - pos);
 			need_wmb = true;
 		} else if (!hole)
-			len = copy_to_iter(addr, max - pos, iter);
+			len = copy_to_iter((void __force *)addr, max - pos,
+					iter);
 		else
 			len = iov_iter_zero(max - pos, iter);
 
@@ -274,11 +277,13 @@ static int dax_load_hole(struct address_space *mapping, struct page *page,
 static int copy_user_bh(struct page *to, struct buffer_head *bh,
 			unsigned blkbits, unsigned long vaddr)
 {
-	void *vfrom, *vto;
+	void __pmem *vfrom;
+	void *vto;
+
 	if (dax_get_addr(bh, &vfrom, blkbits) < 0)
 		return -EIO;
 	vto = kmap_atomic(to);
-	copy_user_page(vto, vfrom, vaddr, to);
+	copy_user_page(vto, (void __force *)vfrom, vaddr, to);
 	kunmap_atomic(vto);
 	return 0;
 }
@@ -288,7 +293,7 @@ static int dax_insert_mapping(struct inode *inode, struct buffer_head *bh,
 {
 	sector_t sector = bh->b_blocknr << (inode->i_blkbits - 9);
 	unsigned long vaddr = (unsigned long)vmf->virtual_address;
-	void *addr;
+	void __pmem *addr;
 	unsigned long pfn;
 	pgoff_t size;
 	int error;
@@ -315,7 +320,7 @@ static int dax_insert_mapping(struct inode *inode, struct buffer_head *bh,
 	}
 
 	if (buffer_unwritten(bh) || buffer_new(bh)) {
-		clear_pmem((void __pmem *)addr, PAGE_SIZE);
+		clear_pmem(addr, PAGE_SIZE);
 		wmb_pmem();
 	}
 
@@ -530,7 +535,7 @@ int __dax_pmd_fault(struct vm_area_struct *vma, unsigned long address,
 	unsigned long pmd_addr = address & PMD_MASK;
 	bool write = flags & FAULT_FLAG_WRITE;
 	long length;
-	void *kaddr;
+	void __pmem *kaddr;
 	pgoff_t size, pgoff;
 	sector_t block, sector;
 	unsigned long pfn;
@@ -624,8 +629,7 @@ int __dax_pmd_fault(struct vm_area_struct *vma, unsigned long address,
 		if (buffer_unwritten(&bh) || buffer_new(&bh)) {
 			int i;
 			for (i = 0; i < PTRS_PER_PMD; i++)
-				clear_pmem((void __pmem *)kaddr + i*PAGE_SIZE,
-						PAGE_SIZE);
+				clear_pmem(kaddr + i * PAGE_SIZE, PAGE_SIZE);
 			wmb_pmem();
 			count_vm_event(PGMAJFAULT);
 			mem_cgroup_count_vm_event(vma->vm_mm, PGMAJFAULT);
@@ -734,11 +738,11 @@ int dax_zero_page_range(struct inode *inode, loff_t from, unsigned length,
 	if (err < 0)
 		return err;
 	if (buffer_written(&bh)) {
-		void *addr;
+		void __pmem *addr;
 		err = dax_get_addr(&bh, &addr, inode->i_blkbits);
 		if (err < 0)
 			return err;
-		clear_pmem((void __pmem *)addr + offset, length);
+		clear_pmem(addr + offset, length);
 		wmb_pmem();
 	}
 
diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
index d4068c1..c401ecd 100644
--- a/include/linux/blkdev.h
+++ b/include/linux/blkdev.h
@@ -1555,8 +1555,8 @@ struct block_device_operations {
 	int (*rw_page)(struct block_device *, sector_t, struct page *, int rw);
 	int (*ioctl) (struct block_device *, fmode_t, unsigned, unsigned long);
 	int (*compat_ioctl) (struct block_device *, fmode_t, unsigned, unsigned long);
-	long (*direct_access)(struct block_device *, sector_t,
-					void **, unsigned long *pfn, long size);
+	long (*direct_access)(struct block_device *, sector_t, void __pmem **,
+					unsigned long *pfn, long size);
 	unsigned int (*check_events) (struct gendisk *disk,
 				      unsigned int clearing);
 	/* ->media_changed() is DEPRECATED, use ->check_events() instead */
@@ -1574,8 +1574,8 @@ extern int __blkdev_driver_ioctl(struct block_device *, fmode_t, unsigned int,
 extern int bdev_read_page(struct block_device *, sector_t, struct page *);
 extern int bdev_write_page(struct block_device *, sector_t, struct page *,
 						struct writeback_control *);
-extern long bdev_direct_access(struct block_device *, sector_t, void **addr,
-						unsigned long *pfn, long size);
+extern long bdev_direct_access(struct block_device *, sector_t,
+			void __pmem **addr, unsigned long *pfn, long size);
 #else /* CONFIG_BLOCK */
 
 struct block_device;
Update the annotation for the kaddr pointer returned by direct_access()
so that it is a __pmem pointer.  This is consistent with the PMEM driver
and with how this direct_access() pointer is used in the DAX code.

Signed-off-by: Ross Zwisler <ross.zwisler@linux.intel.com>
---
 Documentation/filesystems/Locking |  3 ++-
 arch/powerpc/sysdev/axonram.c     |  7 ++++---
 drivers/block/brd.c               |  4 ++--
 drivers/nvdimm/pmem.c             |  4 ++--
 drivers/s390/block/dcssblk.c      | 10 +++++----
 fs/block_dev.c                    |  2 +-
 fs/dax.c                          | 44 +++++++++++++++++++++------------------
 include/linux/blkdev.h            |  8 +++----
 8 files changed, 45 insertions(+), 37 deletions(-)
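A minimal caller sketch against the prototype as changed by this patch,
modelled on the fs/dax.c hunks above; the function name is made up for
illustration, and the helpers are the ones the patch already uses:

    #include <linux/blkdev.h>
    #include <linux/mm.h>
    #include <linux/pmem.h>

    static int example_zero_one_page(struct block_device *bdev, sector_t sector)
    {
            void __pmem *addr;
            unsigned long pfn;
            long avail;

            avail = bdev_direct_access(bdev, sector, &addr, &pfn, PAGE_SIZE);
            if (avail < 0)
                    return avail;
            if (avail < PAGE_SIZE)
                    return -ENXIO;
            clear_pmem(addr, PAGE_SIZE);  /* no __force casts needed: addr is __pmem */
            wmb_pmem();
            return 0;
    }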