Message ID | 1317958.1729096113@warthog.procyon.org.uk (mailing list archive) |
---|---|
State | New |
Series | netfs: Downgrade i_rwsem for a buffered write |
```diff
diff --git a/fs/netfs/locking.c b/fs/netfs/locking.c
index 21eab56ee2f9..2249ecd09d0a 100644
--- a/fs/netfs/locking.c
+++ b/fs/netfs/locking.c
@@ -109,6 +109,7 @@ int netfs_start_io_write(struct inode *inode)
 		up_write(&inode->i_rwsem);
 		return -ERESTARTSYS;
 	}
+	downgrade_write(&inode->i_rwsem);
 	return 0;
 }
 EXPORT_SYMBOL(netfs_start_io_write);
@@ -123,7 +124,7 @@ EXPORT_SYMBOL(netfs_start_io_write);
 void netfs_end_io_write(struct inode *inode)
 	__releases(inode->i_rwsem)
 {
-	up_write(&inode->i_rwsem);
+	up_read(&inode->i_rwsem);
 }
 EXPORT_SYMBOL(netfs_end_io_write);
```
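Taken together, the two hunks change the write-side bracketing functions so that the lock is acquired exclusively, downgraded, and later released in shared mode. A minimal sketch of the resulting shape follows; `wait_for_io_setup()` is a hypothetical stand-in for whatever interruptible setup step the real `netfs_start_io_write()` performs, since the hunk only shows that step's failure branch:

```c
#include <linux/fs.h>
#include <linux/rwsem.h>
#include <linux/errno.h>

/* Hypothetical stand-in for the interruptible setup step whose failure
 * branch appears as context in the first hunk. */
static int wait_for_io_setup(struct inode *inode);

int netfs_start_io_write_sketch(struct inode *inode)
{
	down_write(&inode->i_rwsem);		/* exclude readers and writers */
	if (wait_for_io_setup(inode) < 0) {
		up_write(&inode->i_rwsem);
		return -ERESTARTSYS;
	}
	/* New with this patch: keep excluding other writers, DIO, truncate
	 * and setattr, but let shared holders (buffered reads) back in. */
	downgrade_write(&inode->i_rwsem);
	return 0;
}

void netfs_end_io_write_sketch(struct inode *inode)
{
	/* The semaphore is now held shared, so it must be released with
	 * up_read() rather than up_write(). */
	up_read(&inode->i_rwsem);
}
```

The key point is that `downgrade_write()` converts the exclusive hold to a shared hold atomically, so there is no window in which another writer could slip in between the setup and the write itself.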
In the I/O locking code borrowed from NFS into netfslib, i_rwsem is held locked across a buffered write - but this causes a performance regression in cifs as it excludes buffered reads for the duration (cifs didn't use any locking for buffered reads).

Mitigate this somewhat by downgrading the i_rwsem to a read lock across the buffered write. This at least allows parallel reads to occur whilst excluding other writes, DIO, truncate and setattr.

Note that this shouldn't be a problem for a buffered write as a read through an mmap can circumvent i_rwsem anyway.

Note also that we might want to make the same change in NFS.

Signed-off-by: David Howells <dhowells@redhat.com>
cc: Steve French <sfrench@samba.org>
cc: Paulo Alcantara <pc@manguebit.com>
cc: Trond Myklebust <trondmy@kernel.org>
cc: Jeff Layton <jlayton@kernel.org>
cc: netfs@lists.linux.dev
cc: linux-cifs@vger.kernel.org
cc: linux-nfs@vger.kernel.org
cc: linux-fsdevel@vger.kernel.org
---
 fs/netfs/locking.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)
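From a caller's point of view, the effect is that the buffered write path holds i_rwsem shared rather than exclusive for the bulk of the operation. A hedged sketch of such a caller is below; `example_buffered_write()` and `do_buffered_write()` are illustrative names rather than netfslib API, and the include list assumes `netfs_start_io_write()`/`netfs_end_io_write()` are declared in linux/netfs.h:

```c
#include <linux/fs.h>
#include <linux/uio.h>
#include <linux/netfs.h>

/* Illustrative write helper; not part of netfslib. */
static ssize_t do_buffered_write(struct file *file, struct iov_iter *from);

static ssize_t example_buffered_write(struct file *file, struct iov_iter *from)
{
	struct inode *inode = file_inode(file);
	ssize_t ret;

	ret = netfs_start_io_write(inode);	/* exclusive, then downgraded */
	if (ret < 0)
		return ret;

	/* i_rwsem is now held shared: concurrent buffered reads that take
	 * the lock shared can proceed, while other writers, DIO, truncate
	 * and setattr (all of which need the lock exclusively) still wait. */
	ret = do_buffered_write(file, from);

	netfs_end_io_write(inode);		/* drops the shared hold */
	return ret;
}
```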