Message ID | 20190109144736.17452-6-pagupta@redhat.com (mailing list archive) |
---|---|
State | New, archived |
Series | kvm "virtio pmem" device |
On Wed, Jan 09, 2019 at 08:17:36PM +0530, Pankaj Gupta wrote:
> Virtio pmem provides asynchronous host page cache flush
> mechanism. we don't support 'MAP_SYNC' with virtio pmem
> and xfs.
>
> Signed-off-by: Pankaj Gupta <pagupta@redhat.com>
> ---
>  fs/xfs/xfs_file.c | 8 ++++++++
>  1 file changed, 8 insertions(+)
>
> diff --git a/fs/xfs/xfs_file.c b/fs/xfs/xfs_file.c
> index e474250..eae4aa4 100644
> --- a/fs/xfs/xfs_file.c
> +++ b/fs/xfs/xfs_file.c
> @@ -1190,6 +1190,14 @@ xfs_file_mmap(
>  	if (!IS_DAX(file_inode(filp)) && (vma->vm_flags & VM_SYNC))
>  		return -EOPNOTSUPP;
>
> +	/* We don't support synchronous mappings with guest direct access
> +	 * and virtio based host page cache mechanism.
> +	 */
> +	if (IS_DAX(file_inode(filp)) && virtio_pmem_host_cache_enabled(

Echoing what Jan said, this ought to be some sort of generic function
that tells us whether or not memory mapped from the dax device will
always still be accessible even after a crash (i.e. supports MAP_SYNC).

What if the underlying file on the host is itself on pmem and can be
MAP_SYNC'd?  Shouldn't the guest be able to use MAP_SYNC as well?

--D

> +	    xfs_find_daxdev_for_inode(file_inode(filp))) &&
> +	    (vma->vm_flags & VM_SYNC))
> +		return -EOPNOTSUPP;
> +
>  	file_accessed(filp);
>  	vma->vm_ops = &xfs_file_vm_ops;
>  	if (IS_DAX(file_inode(filp)))
> --
> 2.9.3
>
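For illustration, the generic check being asked for here could take a shape
like the following sketch. The names daxdev_mapping_supported() and
dax_synchronous() are assumptions for this example, not functions posted in
the series: the dax core would answer "are writes through this dax_device
synchronously persistent?", and the filesystem mmap path would reject
MAP_SYNC whenever the answer is no.

#include <linux/dax.h>
#include <linux/fs.h>
#include <linux/mm.h>

/*
 * Sketch of a generic MAP_SYNC capability check. dax_synchronous() is an
 * assumed dax-core predicate that is true only when writes reach
 * persistent media without a host-side flush (real pmem, including host
 * pmem passed through to a guest), and false for virtio-pmem style
 * devices that rely on an asynchronous host page cache flush.
 */
static inline bool daxdev_mapping_supported(struct vm_area_struct *vma,
					    struct dax_device *dax_dev)
{
	if (!(vma->vm_flags & VM_SYNC))
		return true;	/* non-MAP_SYNC mappings are always fine */
	if (!IS_DAX(file_inode(vma->vm_file)))
		return false;	/* MAP_SYNC requires a DAX inode */
	return dax_synchronous(dax_dev);
}

With such a helper, xfs_file_mmap() could replace both open-coded checks
with a single daxdev_mapping_supported() call, and a virtio device backed
by real host pmem could simply advertise itself as synchronous to let the
guest use MAP_SYNC.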
> > Virtio pmem provides asynchronous host page cache flush
> > mechanism. we don't support 'MAP_SYNC' with virtio pmem
> > and xfs.
> >
> > Signed-off-by: Pankaj Gupta <pagupta@redhat.com>
> > ---
> >  fs/xfs/xfs_file.c | 8 ++++++++
> >  1 file changed, 8 insertions(+)
> >
> > diff --git a/fs/xfs/xfs_file.c b/fs/xfs/xfs_file.c
> > index e474250..eae4aa4 100644
> > --- a/fs/xfs/xfs_file.c
> > +++ b/fs/xfs/xfs_file.c
> > @@ -1190,6 +1190,14 @@ xfs_file_mmap(
> >  	if (!IS_DAX(file_inode(filp)) && (vma->vm_flags & VM_SYNC))
> >  		return -EOPNOTSUPP;
> >
> > +	/* We don't support synchronous mappings with guest direct access
> > +	 * and virtio based host page cache mechanism.
> > +	 */
> > +	if (IS_DAX(file_inode(filp)) && virtio_pmem_host_cache_enabled(
>
> Echoing what Jan said, this ought to be some sort of generic function
> that tells us whether or not memory mapped from the dax device will
> always still be accessible even after a crash (i.e. supports MAP_SYNC).

o.k

> What if the underlying file on the host is itself on pmem and can be
> MAP_SYNC'd?  Shouldn't the guest be able to use MAP_SYNC as well?

Guest MAP_SYNC on actual host pmem will sync guest metadata, as guest
writes are persistent on actual pmem. Host-side QEMU MAP_SYNC enabling
work for pmem is in progress; it will make sure host-side metadata is in
a consistent state after a crash or any other metadata-corrupting
operation.

For virtio-pmem, we are emulating a pmem device over regular storage on
the host side. The guest needs to call fsync after a write to make sure
guest metadata is in a consistent state (or journalled). Host backing
file data and metadata will also be persistent after the guest-to-host
fsync.

Thanks,
Pankaj
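To make the write-then-fsync requirement concrete, here is a minimal
userspace sketch (the file path is hypothetical): the application asks for
MAP_SYNC and, when the filesystem rejects it, as xfs does on virtio-pmem
with this patch, falls back to an ordinary shared mapping whose writes are
made durable with msync()/fsync(), which in turn triggers the host-side
page cache flush.

#define _GNU_SOURCE
#include <fcntl.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

#ifndef MAP_SYNC
#include <linux/mman.h>	/* older libcs: take MAP_SYNC/MAP_SHARED_VALIDATE from uapi */
#endif

int main(void)
{
	size_t len = 4096;
	int fd = open("/mnt/pmem0/data", O_RDWR);	/* hypothetical DAX-mounted file */
	if (fd < 0)
		return 1;

	/* MAP_SYNC is only valid together with MAP_SHARED_VALIDATE. */
	void *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
		       MAP_SHARED_VALIDATE | MAP_SYNC, fd, 0);
	int have_map_sync = (p != MAP_FAILED);

	if (!have_map_sync) {
		/* EOPNOTSUPP on virtio-pmem: fall back to a plain shared mapping. */
		p = mmap(NULL, len, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
		if (p == MAP_FAILED)
			return 1;
	}

	memcpy(p, "hello", 6);

	/*
	 * With MAP_SYNC the filesystem guarantees the metadata needed to
	 * reach this data survives a crash; only CPU cache flushes (not
	 * shown) are needed for the data itself.
	 */
	if (!have_map_sync) {
		/*
		 * virtio-pmem case: the store above only reached the host
		 * page cache, so follow the write with msync/fsync to flush
		 * the host cache and journal the guest metadata.
		 */
		msync(p, len, MS_SYNC);
		fsync(fd);
	}

	munmap(p, len);
	close(fd);
	return 0;
}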
diff --git a/fs/xfs/xfs_file.c b/fs/xfs/xfs_file.c
index e474250..eae4aa4 100644
--- a/fs/xfs/xfs_file.c
+++ b/fs/xfs/xfs_file.c
@@ -1190,6 +1190,14 @@ xfs_file_mmap(
 	if (!IS_DAX(file_inode(filp)) && (vma->vm_flags & VM_SYNC))
 		return -EOPNOTSUPP;

+	/* We don't support synchronous mappings with guest direct access
+	 * and virtio based host page cache mechanism.
+	 */
+	if (IS_DAX(file_inode(filp)) && virtio_pmem_host_cache_enabled(
+	    xfs_find_daxdev_for_inode(file_inode(filp))) &&
+	    (vma->vm_flags & VM_SYNC))
+		return -EOPNOTSUPP;
+
 	file_accessed(filp);
 	vma->vm_ops = &xfs_file_vm_ops;
 	if (IS_DAX(file_inode(filp)))
Virtio pmem provides an asynchronous host page cache flush
mechanism. We don't support 'MAP_SYNC' with virtio pmem
and xfs.

Signed-off-by: Pankaj Gupta <pagupta@redhat.com>
---
 fs/xfs/xfs_file.c | 8 ++++++++
 1 file changed, 8 insertions(+)