
cifs: add .splice_write

Message ID 20171228132308.5709-1-kai670@gmail.com (mailing list archive)
State New, archived

Commit Message

Andrés Souto Dec. 28, 2017, 1:23 p.m. UTC
add splice_write support in cifs vfs using iter_file_splice_write

Signed-off-by: Andrés Souto <kai670@gmail.com>
---
 fs/cifs/cifsfs.c | 6 ++++++
 1 file changed, 6 insertions(+)

Comments

Andrés Souto Jan. 26, 2018, 6:13 p.m. UTC | #1
I want to explain a bit what problem this patch solves.

Without it, I always obtain a transfer rate below 9 MB/s when writing to a cifs share using the splice syscall, independently of the buffer size used. However, when reading from the cifs share, I obtain around 105 MB/s.

After applying the patch, the transfer rate I obtain when writing to the cifs share, depending on the buffer size, is:

blocksize (kB)| transfer rate (MB/s)
--------------+----------------------
           64 | 51.52
          128 | 65.41
          256 | 76.64
          512 | 85.72
         1024 | 90.21
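
For reference, a splice()-based copy loop of the kind that exercises this
path looks roughly like the sketch below (the exact benchmark program isn't
included here; the 64 kB chunk size and the command-line paths are only
illustrative). splice() requires a pipe on one side, so the data flows
source -> pipe -> destination, and the pipe-to-file step is what ends up
going through .splice_write:

/* sketch: copy argv[1] to argv[2] (e.g. a file on the cifs mount) via splice() */
#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(int argc, char **argv)
{
        int in, out, pipefd[2];

        if (argc != 3) {
                fprintf(stderr, "usage: %s <src> <dst>\n", argv[0]);
                return 1;
        }

        in = open(argv[1], O_RDONLY);
        out = open(argv[2], O_WRONLY | O_CREAT | O_TRUNC, 0644);
        if (in < 0 || out < 0 || pipe(pipefd) < 0) {
                perror("setup");
                return 1;
        }

        for (;;) {
                /* file -> pipe (at most 64 kB, the default pipe capacity) */
                ssize_t n = splice(in, NULL, pipefd[1], NULL,
                                   64 * 1024, SPLICE_F_MOVE);
                if (n <= 0)
                        break;
                /* pipe -> file; this is the side handled by .splice_write */
                while (n > 0) {
                        ssize_t w = splice(pipefd[0], NULL, out, NULL,
                                           n, SPLICE_F_MOVE);
                        if (w <= 0) {
                                perror("splice");
                                return 1;
                        }
                        n -= w;
                }
        }
        return 0;
}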
Steve French Jan. 26, 2018, 9:25 p.m. UTC | #2
The patch did look like it improves the performance of file copy from
Nautilus when I tried it, but I would prefer a test tool with which I
could measure the improvement more precisely.  Is there a good command
line tool that generates splice_write indirectly?  As expected, cp and
dd don't seem to be affected by this patch.

Looking back at the commit history, I noticed Jens's much older commit
from 2007 (5ffc4ef45b3b0a57872f631b4e4ceb8ace0d7496), which added the
callouts to splice_read, and more recently Al's commit
8d0207652cbe27d1f962050737848e5ad4671958, which introduced
iter_file_splice_write a couple of years ago (for cifs.ko he added
write_iter support via commit
3dae8750c368f8ac11c3c8c2a28f56dcee865c01).  Interesting that splice
write wasn't added to cifs.ko, but it does seem to help in the case you
describe (Nautilus or other apps that use splice).

To localhost (Samba server, cifs.ko on the client with default mount
options, current kernel rc9) on my laptop, my quick tests showed more
than 500 MB/s with dd (1M block size), so I will be curious what the
splice syscall results in.

Jens Axboe Jan. 27, 2018, 2:31 p.m. UTC | #3
On 1/26/18 2:25 PM, Steve French wrote:
> The patch did look like it improves the performance of file copy from
> Nautilus when I tried it, but I would prefer a test tool with which I
> could measure the improvement more precisely.  Is there a good command
> line tool that generates splice_write indirectly?  As expected, cp and
> dd don't seem to be affected by this patch.

You can use fio with ioengine=splice. Or if you have any sendfile()
based tools, that should also do the trick, as sendfile() is just
a wrapper around splice.
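
For example, a minimal sendfile()-based copy along these lines should
exercise the same path (just a sketch, with the error handling kept to a
minimum; the destination would be a file on the cifs mount):

/* sketch: copy argv[1] to argv[2] with sendfile(); since sendfile() sits
 * on top of the splice machinery, writing to a file on the cifs mount this
 * way should exercise .splice_write as well */
#include <sys/sendfile.h>
#include <sys/stat.h>
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(int argc, char **argv)
{
        int in, out;
        struct stat st;
        off_t left;

        if (argc != 3) {
                fprintf(stderr, "usage: %s <src> <dst>\n", argv[0]);
                return 1;
        }

        in = open(argv[1], O_RDONLY);
        out = open(argv[2], O_WRONLY | O_CREAT | O_TRUNC, 0644);
        if (in < 0 || out < 0 || fstat(in, &st) < 0) {
                perror("setup");
                return 1;
        }

        for (left = st.st_size; left > 0; ) {
                /* NULL offset: the kernel advances the input file position */
                ssize_t n = sendfile(out, in, NULL, left);
                if (n <= 0) {
                        perror("sendfile");
                        return 1;
                }
                left -= n;
        }
        return 0;
}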
Andrés Souto Jan. 29, 2018, 10:47 a.m. UTC | #4
I have been trying to reproduce it with a localhost Samba server and I
wasn't able to.  I also tried adding delays and limiting bandwidth using
tc, with the idea of replicating the network delays of a gigabit
ethernet network, but I didn't see a significant difference either.

So I investigated what the difference is between writing to the NAS
(where I discovered the problem) and writing to the Samba server at
localhost, and I see a completely different behaviour in the amount of
data carried by each SMB message.

 writing to NAS      | data per SMB message
 --------------------+----------------------
 read/write 64kB     |          64kB
 read/write 1MB      |        1024kB
 splice 64kB         |         4096B
 splice 64kB (PATCH) |          64kB
 splice 1MB (PATCH)  |        1024kB


 writing to localhost samba | data per SMB message
 ---------------------------+----------------------
 read/write 64kB            |           1MB
 splice 64kB                |           1MB
 splice 64kB (PATCH)        |           1MB

In every case, the server seems to send one ack SMB message for each
block of data, and the client doesn't send new data until the ack is
received.  I suppose this explains why the throughput is ridiculously
low in the situations where an SMB message carries so little data.

Patch

diff --git a/fs/cifs/cifsfs.c b/fs/cifs/cifsfs.c
index 8c8b75d3..ba2986bd 100644
--- a/fs/cifs/cifsfs.c
+++ b/fs/cifs/cifsfs.c
@@ -1068,6 +1068,7 @@  const struct file_operations cifs_file_ops = {
 	.flush = cifs_flush,
 	.mmap  = cifs_file_mmap,
 	.splice_read = generic_file_splice_read,
+	.splice_write = iter_file_splice_write,
 	.llseek = cifs_llseek,
 	.unlocked_ioctl	= cifs_ioctl,
 	.copy_file_range = cifs_copy_file_range,
@@ -1086,6 +1087,7 @@  const struct file_operations cifs_file_strict_ops = {
 	.flush = cifs_flush,
 	.mmap = cifs_file_strict_mmap,
 	.splice_read = generic_file_splice_read,
+	.splice_write = iter_file_splice_write,
 	.llseek = cifs_llseek,
 	.unlocked_ioctl	= cifs_ioctl,
 	.copy_file_range = cifs_copy_file_range,
@@ -1105,6 +1107,7 @@  const struct file_operations cifs_file_direct_ops = {
 	.flush = cifs_flush,
 	.mmap = cifs_file_mmap,
 	.splice_read = generic_file_splice_read,
+	.splice_write = iter_file_splice_write,
 	.unlocked_ioctl  = cifs_ioctl,
 	.copy_file_range = cifs_copy_file_range,
 	.clone_file_range = cifs_clone_file_range,
@@ -1122,6 +1125,7 @@  const struct file_operations cifs_file_nobrl_ops = {
 	.flush = cifs_flush,
 	.mmap  = cifs_file_mmap,
 	.splice_read = generic_file_splice_read,
+	.splice_write = iter_file_splice_write,
 	.llseek = cifs_llseek,
 	.unlocked_ioctl	= cifs_ioctl,
 	.copy_file_range = cifs_copy_file_range,
@@ -1139,6 +1143,7 @@  const struct file_operations cifs_file_strict_nobrl_ops = {
 	.flush = cifs_flush,
 	.mmap = cifs_file_strict_mmap,
 	.splice_read = generic_file_splice_read,
+	.splice_write = iter_file_splice_write,
 	.llseek = cifs_llseek,
 	.unlocked_ioctl	= cifs_ioctl,
 	.copy_file_range = cifs_copy_file_range,
@@ -1157,6 +1162,7 @@  const struct file_operations cifs_file_direct_nobrl_ops = {
 	.flush = cifs_flush,
 	.mmap = cifs_file_mmap,
 	.splice_read = generic_file_splice_read,
+	.splice_write = iter_file_splice_write,
 	.unlocked_ioctl  = cifs_ioctl,
 	.copy_file_range = cifs_copy_file_range,
 	.clone_file_range = cifs_clone_file_range,