| Message ID | 149758932908.10006.10765671892098302463.stgit@noble (mailing list archive) |
|---|---|
| State | New, archived |
Looks good,
Reviewed-by: Christoph Hellwig <hch@lst.de>
On Fri, Jun 16, 2017 at 03:02:09PM +1000, NeilBrown wrote:
> When a loop device is being shutdown the backing file is
> closed with fput(). This is different from how close(2)
> closes files - it uses filp_close().
>
> The difference is important for filesystems which provide a ->flush
> file operation such as NFS. NFS assumes a flush will always
> be called on last close, and gets confused otherwise.

Huh?  You do realize that mmap() + close() + modify + msync() + munmap()
will have IO done *after* the last flush, right?
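(To make the sequence Al describes concrete, here is a minimal userspace sketch; the path and mapping size are illustrative, and error checking is omitted for brevity.)

```c
/* Illustration of Al's point: close(2) triggers ->flush, but the
 * shared mapping stays live afterwards, so msync() can submit
 * writeback with no further flush call.  Path and size are
 * illustrative; error checking omitted.
 */
#include <fcntl.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
	int fd = open("/mnt/nfs/testfile", O_RDWR);
	char *p = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
		       MAP_SHARED, fd, 0);

	close(fd);               /* last close: ->flush runs here */
	memset(p, 'x', 4096);    /* dirty the still-live mapping */
	msync(p, 4096, MS_SYNC); /* IO issued *after* the last flush */
	munmap(p, 4096);
	return 0;
}
```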
On Sat, Jun 17 2017, Al Viro wrote:
> On Fri, Jun 16, 2017 at 03:02:09PM +1000, NeilBrown wrote:
>> When a loop device is being shutdown the backing file is
>> closed with fput(). This is different from how close(2)
>> closes files - it uses filp_close().
>>
>> The difference is important for filesystems which provide a ->flush
>> file operation such as NFS. NFS assumes a flush will always
>> be called on last close, and gets confused otherwise.
>
> Huh? You do realize that mmap() + close() + modify + msync() + munmap()
> will have IO done *after* the last flush, right?

Yes I do ... or rather I did.  I didn't make that connection this time.
The sequence you describe causes exactly the same sort of problem.

I sent a patch to Trond to add a vfs_fsync() call to nfs_file_release(),
but he claims the current behaviour is "working as expected".  I didn't
quite know what to make of that.
https://www.spinics.net/lists/linux-nfs/msg62603.html

To provide the full picture: when an NFS file has dirty pages, they
(indirectly) hold extra references on the superblock, using
nfs_sb_active().  This means that when the filesystem is unmounted, the
superblock remains active until all the writes complete.  This contrasts
with every other filesystem, where all writes complete before the umount
returns.

When you open/write/close, there will be no dirty pages at umount time
(because close() flushes), so this doesn't cause a problem.  But when
you mmap, or use a loop device, dirty pages can still be around to keep
the superblock alive.

The observable symptom that brought this to my attention was that

  umount -a -t nfs
  disable network
  sync

can hang in the sync, because the NFS filesystems can still be waiting
to write out data.

If nfs_file_release() adds vfs_fsync(), or maybe if __fput() calls
filp->f_op->flush(), then loop.c wouldn't need to use filp_close().
Which would you prefer?

Thanks,
NeilBrown
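(A sketch of the first option Neil mentions, adding a vfs_fsync() call to nfs_file_release(), might look like the following. This is hypothetical: the patch actually sent to Trond is not shown in this thread, and the existing body of the function is elided.)

```c
/*
 * Hypothetical sketch only; not the patch that was actually sent.
 * The idea: make NFS's ->release write back dirty data, so dirty
 * pages cannot outlive the last close even when no ->flush ran.
 * The rest of nfs_file_release()'s body (fs/nfs/file.c) is elided.
 */
static int nfs_file_release(struct inode *inode, struct file *filp)
{
	/* Assumed addition: flush and wait on last release */
	int ret = vfs_fsync(filp, 0);

	/* ... existing release logic ... */
	return ret;
}
```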
```diff
diff --git a/drivers/block/loop.c b/drivers/block/loop.c
index ebbd0c3fe0ed..9c457ca6c55e 100644
--- a/drivers/block/loop.c
+++ b/drivers/block/loop.c
@@ -682,7 +682,7 @@ static int loop_change_fd(struct loop_device *lo, struct block_device *bdev,
 	if (error)
 		goto out_putf;
 
-	fput(old_file);
+	filp_close(old_file, NULL);
 	if (lo->lo_flags & LO_FLAGS_PARTSCAN)
 		loop_reread_partitions(lo, bdev);
 	return 0;
@@ -1071,12 +1071,12 @@ static int loop_clr_fd(struct loop_device *lo)
 	loop_unprepare_queue(lo);
 	mutex_unlock(&lo->lo_ctl_mutex);
 	/*
-	 * Need not hold lo_ctl_mutex to fput backing file.
+	 * Need not hold lo_ctl_mutex to close backing file.
 	 * Calling fput holding lo_ctl_mutex triggers a circular
 	 * lock dependency possibility warning as fput can take
 	 * bd_mutex which is usually taken before lo_ctl_mutex.
 	 */
-	fput(filp);
+	filp_close(filp, NULL);
 	return 0;
 }
```
When a loop device is being shutdown the backing file is closed with
fput().  This is different from how close(2) closes files - it uses
filp_close().

The difference is important for filesystems which provide a ->flush
file operation such as NFS.  NFS assumes a flush will always be called
on last close, and gets confused otherwise.

Signed-off-by: NeilBrown <neilb@suse.com>
---
 drivers/block/loop.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)
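(For context on why this substitution matters: filp_close() invokes the filesystem's ->flush method before dropping the reference, while a bare fput() does not. A condensed view of filp_close() from fs/open.c of that era, with the file-count sanity check, dnotify, and POSIX-lock cleanup trimmed:)

```c
/* Condensed from fs/open.c (sanity check, dnotify and POSIX-lock
 * cleanup trimmed): the ->flush call is the behaviour NFS depends
 * on at last close, and the one a bare fput() skips.
 */
int filp_close(struct file *filp, fl_owner_t id)
{
	int retval = 0;

	if (filp->f_op->flush)
		retval = filp->f_op->flush(filp, id);

	fput(filp);
	return retval;
}
```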