
[2/2] ksmbd: smbd: change the default maximum read/write, receive size

Message ID 20220107054531.619487-2-hyc.lee@gmail.com (mailing list archive)
State New, archived
Series [1/2] ksmbd: smbd: create MR pool

Commit Message

Hyunchul Lee Jan. 7, 2022, 5:45 a.m. UTC
Due to the restriction that multiple buffer descriptor
structures cannot be handled, decrease the maximum
read/write size for Windows clients.

And set the maximum fragmented receive size
in consideration of the receive queue size.

Signed-off-by: Hyunchul Lee <hyc.lee@gmail.com>
---
 fs/ksmbd/transport_rdma.c | 6 ++++--
 1 file changed, 4 insertions(+), 2 deletions(-)

Comments

Namjae Jeon Jan. 9, 2022, 2:43 a.m. UTC | #1
2022-01-07 14:45 GMT+09:00, Hyunchul Lee <hyc.lee@gmail.com>:
> Due to restriction that cannot handle multiple
> buffer descriptor structures, decrease the maximum
> read/write size for Windows clients.
>
> And set the maximum fragmented receive size
> in consideration of the receive queue size.
>
> Signed-off-by: Hyunchul Lee <hyc.lee@gmail.com>
Acked-by: Namjae Jeon <linkinjeon@kernel.org>
Steve French Jan. 9, 2022, 6:44 a.m. UTC | #2
Do you have more detail on what the negotiated readsize/writesize
would be for Windows clients with this size? for Linux clients?

It looked like it would still be 4MB at first glance (although in
theory some Windows could do 8MB) ... I may have missed something

On Sat, Jan 8, 2022 at 8:43 PM Namjae Jeon <linkinjeon@kernel.org> wrote:
>
> 2022-01-07 14:45 GMT+09:00, Hyunchul Lee <hyc.lee@gmail.com>:
> > Due to restriction that cannot handle multiple
> > buffer descriptor structures, decrease the maximum
> > read/write size for Windows clients.
> >
> > And set the maximum fragmented receive size
> > in consideration of the receive queue size.
> >
> > Signed-off-by: Hyunchul Lee <hyc.lee@gmail.com>
> Acked-by: Namjae Jeon <linkinjeon@kernel.org>
Namjae Jeon Jan. 9, 2022, 12:56 p.m. UTC | #3
2022-01-09 15:44 GMT+09:00, Steve French <smfrench@gmail.com>:
> Do you have more detail on what the negotiated readsize/writesize
> would be for Windows clients with this size? for Linux clients?
Hyunchul, Please answer.

>
> It looked like it would still be 4MB at first glance (although in
> theory some Windows could do 8MB) ... I may have missed something
My understanding is that multiple-buffer-descriptor support is required to
set a read/write size of 1MB or more. As far as I know, Hyunchul is
currently working on it.
The negotiated size seems to be set to the smaller of the max read/write
size in SMB Direct negotiate and the max read/write size in SMB2 negotiate.
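
(As a rough illustration of that negotiation, assuming the effective size
really is just the minimum of the two advertised limits; the helper below
is made up for this thread and is not actual ksmbd/cifs code:)

/* Hypothetical sketch: the read/write size actually used is the
 * smaller of the SMB Direct negotiated limit and the SMB2
 * negotiated limit. */
static unsigned int effective_rw_size(unsigned int smbd_max_rw_size,
				      unsigned int smb2_max_rw_size)
{
	return min(smbd_max_rw_size, smb2_max_rw_size);
}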

Hyunchul, I have one more question: how did you get the 1048512 setting value?
>
> On Sat, Jan 8, 2022 at 8:43 PM Namjae Jeon <linkinjeon@kernel.org> wrote:
>>
>> 2022-01-07 14:45 GMT+09:00, Hyunchul Lee <hyc.lee@gmail.com>:
>> > Due to restriction that cannot handle multiple
>> > buffer descriptor structures, decrease the maximum
>> > read/write size for Windows clients.
>> >
>> > And set the maximum fragmented receive size
>> > in consideration of the receive queue size.
>> >
>> > Signed-off-by: Hyunchul Lee <hyc.lee@gmail.com>
>> Acked-by: Namjae Jeon <linkinjeon@kernel.org>
>
>
>
> --
> Thanks,
>
> Steve
>
Hyunchul Lee Jan. 10, 2022, 1:37 a.m. UTC | #4
On Sun, Jan 9, 2022 at 9:56 PM, Namjae Jeon <linkinjeon@kernel.org> wrote:
>
> 2022-01-09 15:44 GMT+09:00, Steve French <smfrench@gmail.com>:
> > Do you have more detail on what the negotiated readsize/writesize
> > would be for Windows clients with this size? for Linux clients?
> Hyunchul, Please answer.
>

For a Linux client connected using SMB Direct,
the size will be 1048512. But when connected with multichannel,
the size will be 4MB instead of 1048512, and this causes
problems because the read/write size is bigger than 1048512.
It looks like a bug; I have to limit ksmbd's SMB2 maximum
read/write size for testing.

For Windows clients, the actual read/write size is less than
1048512.

> >
> > It looked like it would still be 4MB at first glance (although in
> > theory some Windows could do 8MB) ... I may have missed something
> I understood that multiple-buffer descriptor support was required to
> set a read/write size of 1MB or more. As I know, Hyunchul is currently
> working on it.
> It seems to be set to the smaller of max read/write size in smb-direct
> negotiate and max read/write size in smb2 negotiate.
>
> Hyunchul, I have one question more, How did you get 1048512 setting value ?
> >

I remember that when the size was 1MB, Windows clients issued read/write
requests of 1048512 and 64 bytes.
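
(A hedged reading of where that constant comes from, based only on the
arithmetic above — 1048512 + 64 = 1048576 = 1 MiB; the macro name is
made up for illustration:)

/* 1048512 is 1 MiB minus 64 bytes, matching the 1048512 + 64 split
 * observed from Windows clients when the limit was 1MB. */
#define SMB_DIRECT_DEFAULT_RW_SIZE	(1024 * 1024 - 64)	/* = 1048512 */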

> > On Sat, Jan 8, 2022 at 8:43 PM Namjae Jeon <linkinjeon@kernel.org> wrote:
> >>
> >> 2022-01-07 14:45 GMT+09:00, Hyunchul Lee <hyc.lee@gmail.com>:
> >> > Due to restriction that cannot handle multiple
> >> > buffer descriptor structures, decrease the maximum
> >> > read/write size for Windows clients.
> >> >
> >> > And set the maximum fragmented receive size
> >> > in consideration of the receive queue size.
> >> >
> >> > Signed-off-by: Hyunchul Lee <hyc.lee@gmail.com>
> >> Acked-by: Namjae Jeon <linkinjeon@kernel.org>
> >
> >
> >
> > --
> > Thanks,
> >
> > Steve
> >



--
Thanks,
Hyunchul
Steve French Jan. 10, 2022, 1:43 a.m. UTC | #5
I was concerned because I saw significant improvements in large I/O
(file copy to or from the server) to Windows and Azure when going to 1MB
(negotiated max read/write size), then slightly better at 2MB and
slightly better still at 4MB (it was hard to show a gain with 8MB in my
earlier tests, though).

On Sun, Jan 9, 2022 at 7:37 PM Hyunchul Lee <hyc.lee@gmail.com> wrote:
>
> 2022년 1월 9일 (일) 오후 9:56, Namjae Jeon <linkinjeon@kernel.org>님이 작성:
> >
> > 2022-01-09 15:44 GMT+09:00, Steve French <smfrench@gmail.com>:
> > > Do you have more detail on what the negotiated readsize/writesize
> > > would be for Windows clients with this size? for Linux clients?
> > Hyunchul, Please answer.
> >
>
> For a Linux client, if connected using smb-direct,
> the size will be 1048512. But connected with multichannel,
> the size will be 4MB instead of 1048512. And this causes
> problems because the read/write size is bigger than 1048512.
> It looks like a bug. I have to limit the ksmbd's SMB2 maximum
> read/write size for a test.
>
> For Windows clients, the actual read/write size is less than
> 1048512.
>
> > >
> > > It looked like it would still be 4MB at first glance (although in
> > > theory some Windows could do 8MB) ... I may have missed something
> > I understood that multiple-buffer descriptor support was required to
> > set a read/write size of 1MB or more. As I know, Hyunchul is currently
> > working on it.
> > It seems to be set to the smaller of max read/write size in smb-direct
> > negotiate and max read/write size in smb2 negotiate.
> >
> > Hyunchul, I have one question more, How did you get 1048512 setting value ?
> > >
>
> I remember when the size was 1MB, Windows clients requested read/write with
> 1048512 and 64.
>
> > > On Sat, Jan 8, 2022 at 8:43 PM Namjae Jeon <linkinjeon@kernel.org> wrote:
> > >>
> > >> 2022-01-07 14:45 GMT+09:00, Hyunchul Lee <hyc.lee@gmail.com>:
> > >> > Due to restriction that cannot handle multiple
> > >> > buffer descriptor structures, decrease the maximum
> > >> > read/write size for Windows clients.
> > >> >
> > >> > And set the maximum fragmented receive size
> > >> > in consideration of the receive queue size.
> > >> >
> > >> > Signed-off-by: Hyunchul Lee <hyc.lee@gmail.com>
> > >> Acked-by: Namjae Jeon <linkinjeon@kernel.org>
> > >
> > >
> > >
> > > --
> > > Thanks,
> > >
> > > Steve
> > >
>
>
>
> --
> Thanks,
> Hyunchul
Hyunchul Lee Jan. 10, 2022, 4:03 a.m. UTC | #6
On Mon, Jan 10, 2022 at 10:43 AM, Steve French <smfrench@gmail.com> wrote:
>
> I was concerned because I saw significant improvements in large i/o
> (file copy to or from the server) to Windows and Azure going to 1MB
> (negotiated max read/write size), then slightly better to 2MB and
> slightly better still to 4MB (was hard to show gain with 8MB in my
> earlier tests though)
>

This patch limits the size only when the SMB Direct protocol is used.
If handling multiple buffer descriptors is implemented, we can increase
the size.
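
(A minimal sketch of what "only when the SMB Direct protocol is used"
could look like; the helper name is invented, and the 4MB non-RDMA value
comes from the 4MB figure mentioned earlier in this thread, not from the
patch:)

/* Illustration only: the lowered limit applies to the RDMA transport,
 * while a plain TCP connection keeps the larger SMB2 maximum. */
static unsigned int max_rw_size_for_transport(bool is_smb_direct)
{
	return is_smb_direct ? 1048512 : 4 * 1024 * 1024;
}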

> On Sun, Jan 9, 2022 at 7:37 PM Hyunchul Lee <hyc.lee@gmail.com> wrote:
> >
> > 2022년 1월 9일 (일) 오후 9:56, Namjae Jeon <linkinjeon@kernel.org>님이 작성:
> > >
> > > 2022-01-09 15:44 GMT+09:00, Steve French <smfrench@gmail.com>:
> > > > Do you have more detail on what the negotiated readsize/writesize
> > > > would be for Windows clients with this size? for Linux clients?
> > > Hyunchul, Please answer.
> > >
> >
> > For a Linux client, if connected using smb-direct,
> > the size will be 1048512. But connected with multichannel,
> > the size will be 4MB instead of 1048512. And this causes
> > problems because the read/write size is bigger than 1048512.
> > It looks like a bug. I have to limit the ksmbd's SMB2 maximum
> > read/write size for a test.
> >
> > For Windows clients, the actual read/write size is less than
> > 1048512.
> >
> > > >
> > > > It looked like it would still be 4MB at first glance (although in
> > > > theory some Windows could do 8MB) ... I may have missed something
> > > I understood that multiple-buffer descriptor support was required to
> > > set a read/write size of 1MB or more. As I know, Hyunchul is currently
> > > working on it.
> > > It seems to be set to the smaller of max read/write size in smb-direct
> > > negotiate and max read/write size in smb2 negotiate.
> > >
> > > Hyunchul, I have one question more, How did you get 1048512 setting value ?
> > > >
> >
> > I remember when the size was 1MB, Windows clients requested read/write with
> > 1048512 and 64.
> >
> > > > On Sat, Jan 8, 2022 at 8:43 PM Namjae Jeon <linkinjeon@kernel.org> wrote:
> > > >>
> > > >> 2022-01-07 14:45 GMT+09:00, Hyunchul Lee <hyc.lee@gmail.com>:
> > > >> > Due to restriction that cannot handle multiple
> > > >> > buffer descriptor structures, decrease the maximum
> > > >> > read/write size for Windows clients.
> > > >> >
> > > >> > And set the maximum fragmented receive size
> > > >> > in consideration of the receive queue size.
> > > >> >
> > > >> > Signed-off-by: Hyunchul Lee <hyc.lee@gmail.com>
> > > >> Acked-by: Namjae Jeon <linkinjeon@kernel.org>
> > > >
> > > >
> > > >
> > > > --
> > > > Thanks,
> > > >
> > > > Steve
> > > >
> >
> >
> >
> > --
> > Thanks,
> > Hyunchul
>
>
>
> --
> Thanks,
>
> Steve
Namjae Jeon Jan. 17, 2022, 11:33 p.m. UTC | #7
2022-01-10 10:37 GMT+09:00, Hyunchul Lee <hyc.lee@gmail.com>:
> 2022년 1월 9일 (일) 오후 9:56, Namjae Jeon <linkinjeon@kernel.org>님이 작성:
>>
>> 2022-01-09 15:44 GMT+09:00, Steve French <smfrench@gmail.com>:
>> > Do you have more detail on what the negotiated readsize/writesize
>> > would be for Windows clients with this size? for Linux clients?
>> Hyunchul, Please answer.
>>
>
> For a Linux client, if connected using smb-direct,
> the size will be 1048512. But connected with multichannel,
> the size will be 4MB instead of 1048512. And this causes
> problems because the read/write size is bigger than 1048512.
> It looks like a bug. I have to limit the ksmbd's SMB2 maximum
> read/write size for a test.
>
> For Windows clients, the actual read/write size is less than
> 1048512.
In the case of my Chelsio device, I need to set it to about
512K (512*1024 - 64) for it to work.
The 1048512 value seems insufficient to cover all devices. Is there
any other way to set the minimum read/write value, e.g. calibrating
this minimum value from the device information, for example the
variables in ib_dev->attrs?
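
(One possible shape for that, as a sketch only: clamp the configured value
to what a single fast-registration MR on the device can cover.
max_fast_reg_page_list_len is a real struct ib_device_attr field, but the
clamping policy and the helper name are illustrative, not ksmbd code.)

#include <rdma/ib_verbs.h>

/* Sketch: derive a per-device ceiling from ib_dev->attrs instead of
 * relying on a single fixed constant such as 1048512. */
static unsigned int smbd_clamp_rw_size(struct ib_device *ib_dev,
				       unsigned int configured)
{
	unsigned int dev_max =
		ib_dev->attrs.max_fast_reg_page_list_len * PAGE_SIZE;

	return min(configured, dev_max);
}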

>
>> >
>> > It looked like it would still be 4MB at first glance (although in
>> > theory some Windows could do 8MB) ... I may have missed something
>> I understood that multiple-buffer descriptor support was required to
>> set a read/write size of 1MB or more. As I know, Hyunchul is currently
>> working on it.
>> It seems to be set to the smaller of max read/write size in smb-direct
>> negotiate and max read/write size in smb2 negotiate.
>>
>> Hyunchul, I have one question more, How did you get 1048512 setting value
>> ?
>> >
>
> I remember when the size was 1MB, Windows clients requested read/write with
> 1048512 and 64.
>
>> > On Sat, Jan 8, 2022 at 8:43 PM Namjae Jeon <linkinjeon@kernel.org>
>> > wrote:
>> >>
>> >> 2022-01-07 14:45 GMT+09:00, Hyunchul Lee <hyc.lee@gmail.com>:
>> >> > Due to restriction that cannot handle multiple
>> >> > buffer descriptor structures, decrease the maximum
>> >> > read/write size for Windows clients.
>> >> >
>> >> > And set the maximum fragmented receive size
>> >> > in consideration of the receive queue size.
>> >> >
>> >> > Signed-off-by: Hyunchul Lee <hyc.lee@gmail.com>
>> >> Acked-by: Namjae Jeon <linkinjeon@kernel.org>
>> >
>> >
>> >
>> > --
>> > Thanks,
>> >
>> > Steve
>> >
>
>
>
> --
> Thanks,
> Hyunchul
>
Hyunchul Lee Jan. 18, 2022, 6:40 a.m. UTC | #8
On Tue, Jan 18, 2022 at 8:33 AM, Namjae Jeon <linkinjeon@kernel.org> wrote:
>
> 2022-01-10 10:37 GMT+09:00, Hyunchul Lee <hyc.lee@gmail.com>:
> > 2022년 1월 9일 (일) 오후 9:56, Namjae Jeon <linkinjeon@kernel.org>님이 작성:
> >>
> >> 2022-01-09 15:44 GMT+09:00, Steve French <smfrench@gmail.com>:
> >> > Do you have more detail on what the negotiated readsize/writesize
> >> > would be for Windows clients with this size? for Linux clients?
> >> Hyunchul, Please answer.
> >>
> >
> > For a Linux client, if connected using smb-direct,
> > the size will be 1048512. But connected with multichannel,
> > the size will be 4MB instead of 1048512. And this causes
> > problems because the read/write size is bigger than 1048512.
> > It looks like a bug. I have to limit the ksmbd's SMB2 maximum
> > read/write size for a test.
> >
> > For Windows clients, the actual read/write size is less than
> > 1048512.
> In the case of my Chelsio device, Need to set it to about
> 512K(512*1024 - 64) for it to work.
> The 1048512 value seems insufficient to cover all devices. Is there
> any other way to set the minimum read/write value? Calibrate this
> minimum value by looking at
> the device information? For example variables in ib_dev->attrs.
>

Let me check it. But I think multiple buffer descriptors cause
this problem: because of a client-side device limitation, the client
seems to send a read/write request with multiple buffer descriptors
in order to transfer 1048512 bytes.

To check this assumption, can you tell me the buffer descriptors'
contents when the default read/write size is 1048512?
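
(For reference, this is the Buffer Descriptor V1 layout from MS-SMBD that
"multiple buffer descriptors" refers to; the struct name follows the cifs
client's definition and is shown only for illustration:)

/* MS-SMBD Buffer Descriptor V1: describes one remotely accessible
 * region. A read/write larger than a single registration can cover
 * carries several of these in the SMB2 READ/WRITE channel info. */
struct smbd_buffer_descriptor_v1 {
	__le64 offset;	/* remote virtual address of the region */
	__le32 token;	/* remote key (steering tag) */
	__le32 length;	/* length of the region in bytes */
} __packed;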

> >
> >> >
> >> > It looked like it would still be 4MB at first glance (although in
> >> > theory some Windows could do 8MB) ... I may have missed something
> >> I understood that multiple-buffer descriptor support was required to
> >> set a read/write size of 1MB or more. As I know, Hyunchul is currently
> >> working on it.
> >> It seems to be set to the smaller of max read/write size in smb-direct
> >> negotiate and max read/write size in smb2 negotiate.
> >>
> >> Hyunchul, I have one question more, How did you get 1048512 setting value
> >> ?
> >> >
> >
> > I remember when the size was 1MB, Windows clients requested read/write with
> > 1048512 and 64.
> >
> >> > On Sat, Jan 8, 2022 at 8:43 PM Namjae Jeon <linkinjeon@kernel.org>
> >> > wrote:
> >> >>
> >> >> 2022-01-07 14:45 GMT+09:00, Hyunchul Lee <hyc.lee@gmail.com>:
> >> >> > Due to restriction that cannot handle multiple
> >> >> > buffer descriptor structures, decrease the maximum
> >> >> > read/write size for Windows clients.
> >> >> >
> >> >> > And set the maximum fragmented receive size
> >> >> > in consideration of the receive queue size.
> >> >> >
> >> >> > Signed-off-by: Hyunchul Lee <hyc.lee@gmail.com>
> >> >> Acked-by: Namjae Jeon <linkinjeon@kernel.org>
> >> >
> >> >
> >> >
> >> > --
> >> > Thanks,
> >> >
> >> > Steve
> >> >
> >
> >
> >
> > --
> > Thanks,
> > Hyunchul
> >

Patch

diff --git a/fs/ksmbd/transport_rdma.c b/fs/ksmbd/transport_rdma.c
index f0b17da1cac2..86fd64511512 100644
--- a/fs/ksmbd/transport_rdma.c
+++ b/fs/ksmbd/transport_rdma.c
@@ -80,7 +80,7 @@  static int smb_direct_max_fragmented_recv_size = 1024 * 1024;
 /*  The maximum single-message size which can be received */
 static int smb_direct_max_receive_size = 8192;
 
-static int smb_direct_max_read_write_size = 1024 * 1024;
+static int smb_direct_max_read_write_size = 1048512;
 
 static int smb_direct_max_outstanding_rw_ops = 8;
 
@@ -1908,7 +1908,9 @@  static int smb_direct_prepare(struct ksmbd_transport *t)
 	st->max_send_size = min_t(int, st->max_send_size,
 				  le32_to_cpu(req->max_receive_size));
 	st->max_fragmented_send_size =
-			le32_to_cpu(req->max_fragmented_size);
+		le32_to_cpu(req->max_fragmented_size);
+	st->max_fragmented_recv_size =
+		(st->recv_credit_max * st->max_recv_size) / 2;
 
 	ret = smb_direct_send_negotiate_response(st, ret);
 out: