Message ID: 20230329141354.516864-41-dhowells@redhat.com (mailing list archive)
State: New
Series: splice, net: Replace sendpage with sendmsg(MSG_SPLICE_PAGES)
> On Mar 29, 2023, at 10:13 AM, David Howells <dhowells@redhat.com> wrote: > > When transmitting data, call down into TCP using a single sendmsg with > MSG_SPLICE_PAGES to indicate that content should be spliced rather than > performing several sendmsg and sendpage calls to transmit header, data > pages and trailer. > > To make this work, the data is assembled in a bio_vec array and attached to > a BVEC-type iterator. The header and trailer are copied into page > fragments so that they can be freed with put_page and attached to iterators > of their own. An iterator-of-iterators is then created to bridge all three > iterators (headers, data, trailer) and that is passed to sendmsg to pass > the entire message in a single call. > > Signed-off-by: David Howells <dhowells@redhat.com> > cc: Trond Myklebust <trond.myklebust@hammerspace.com> > cc: Anna Schumaker <anna@kernel.org> > cc: Chuck Lever <chuck.lever@oracle.com> > cc: Jeff Layton <jlayton@kernel.org> > cc: "David S. Miller" <davem@davemloft.net> > cc: Eric Dumazet <edumazet@google.com> > cc: Jakub Kicinski <kuba@kernel.org> > cc: Paolo Abeni <pabeni@redhat.com> > cc: Jens Axboe <axboe@kernel.dk> > cc: Matthew Wilcox <willy@infradead.org> > cc: linux-nfs@vger.kernel.org > cc: netdev@vger.kernel.org > --- > include/linux/sunrpc/svc.h | 11 +++-- > net/sunrpc/svcsock.c | 89 +++++++++++++++----------------------- > 2 files changed, 40 insertions(+), 60 deletions(-) > > diff --git a/include/linux/sunrpc/svc.h b/include/linux/sunrpc/svc.h > index 877891536c2f..456ae554aa11 100644 > --- a/include/linux/sunrpc/svc.h > +++ b/include/linux/sunrpc/svc.h > @@ -161,16 +161,15 @@ static inline bool svc_put_not_last(struct svc_serv *serv) > extern u32 svc_max_payload(const struct svc_rqst *rqstp); > > /* > - * RPC Requsts and replies are stored in one or more pages. > + * RPC Requests and replies are stored in one or more pages. > * We maintain an array of pages for each server thread. 
> * Requests are copied into these pages as they arrive. Remaining > * pages are available to write the reply into. > * > - * Pages are sent using ->sendpage so each server thread needs to > - * allocate more to replace those used in sending. To help keep track > - * of these pages we have a receive list where all pages initialy live, > - * and a send list where pages are moved to when there are to be part > - * of a reply. > + * Pages are sent using ->sendmsg with MSG_SPLICE_PAGES so each server thread > + * needs to allocate more to replace those used in sending. To help keep track > + * of these pages we have a receive list where all pages initialy live, and a > + * send list where pages are moved to when there are to be part of a reply. > * > * We use xdr_buf for holding responses as it fits well with NFS > * read responses (that have a header, and some data pages, and possibly > diff --git a/net/sunrpc/svcsock.c b/net/sunrpc/svcsock.c > index 03a4f5615086..f1cc53aad6e0 100644 > --- a/net/sunrpc/svcsock.c > +++ b/net/sunrpc/svcsock.c > @@ -1060,16 +1060,8 @@ static int svc_tcp_recvfrom(struct svc_rqst *rqstp) > return 0; /* record not complete */ > } > > -static int svc_tcp_send_kvec(struct socket *sock, const struct kvec *vec, > - int flags) > -{ > - return kernel_sendpage(sock, virt_to_page(vec->iov_base), > - offset_in_page(vec->iov_base), > - vec->iov_len, flags); > -} > - > /* > - * kernel_sendpage() is used exclusively to reduce the number of > + * MSG_SPLICE_PAGES is used exclusively to reduce the number of > * copy operations in this path. Therefore the caller must ensure > * that the pages backing @xdr are unchanging. 
> * > @@ -1081,65 +1073,54 @@ static int svc_tcp_sendmsg(struct socket *sock, struct xdr_buf *xdr, > { > const struct kvec *head = xdr->head; > const struct kvec *tail = xdr->tail; > - struct kvec rm = { > - .iov_base = &marker, > - .iov_len = sizeof(marker), > - }; > + struct iov_iter iters[3]; > + struct bio_vec head_bv, tail_bv; > struct msghdr msg = { > - .msg_flags = 0, > + .msg_flags = MSG_SPLICE_PAGES, > }; > - int ret; > + void *m, *t; > + int ret, n = 2, size; > > *sentp = 0; > ret = xdr_alloc_bvec(xdr, GFP_KERNEL); > if (ret < 0) > return ret; > > - ret = kernel_sendmsg(sock, &msg, &rm, 1, rm.iov_len); > - if (ret < 0) > - return ret; > - *sentp += ret; > - if (ret != rm.iov_len) > - return -EAGAIN; > + m = page_frag_alloc(NULL, sizeof(marker) + head->iov_len + tail->iov_len, > + GFP_KERNEL); > + if (!m) > + return -ENOMEM; I'm not excited about adding another memory allocation for this very common case. It seems to me that you could eliminate the kernel_sendpage() consumer here in svc_tcp_sendmsg() without also replacing the kernel_sendmsg() calls. That would be a conservative step-wise approach which would carry less risk, and would accomplish your stated goal without more radical surgery. Later maybe we can find a way to deal with the head, tail, and record marker without additional memory allocations. I believe on the server side, head and tail are already in pages, for example, not in kmalloc'd memory. That would need some code auditing, but I'm OK with combining these into a single sock_sendmsg() call once we've worked out the disposition of the xdr_buf components outside of the bvec. That seems a bit outside your stated goal. Simply replacing the kernel_sendpage() loop would be a straightforward change and easy to evaluate and test, and I'd welcome that without hesitation. 
> - ret = svc_tcp_send_kvec(sock, head, 0); > - if (ret < 0) > - return ret; > - *sentp += ret; > - if (ret != head->iov_len) > - goto out; > + memcpy(m, &marker, sizeof(marker)); > + if (head->iov_len) > + memcpy(m + sizeof(marker), head->iov_base, head->iov_len); > + bvec_set_virt(&head_bv, m, sizeof(marker) + head->iov_len); > + iov_iter_bvec(&iters[0], ITER_SOURCE, &head_bv, 1, > + sizeof(marker) + head->iov_len); > > - if (xdr->page_len) { > - unsigned int offset, len, remaining; > - struct bio_vec *bvec; > - > - bvec = xdr->bvec + (xdr->page_base >> PAGE_SHIFT); > - offset = offset_in_page(xdr->page_base); > - remaining = xdr->page_len; > - while (remaining > 0) { > - len = min(remaining, bvec->bv_len - offset); > - ret = kernel_sendpage(sock, bvec->bv_page, > - bvec->bv_offset + offset, > - len, 0); > - if (ret < 0) > - return ret; > - *sentp += ret; > - if (ret != len) > - goto out; > - remaining -= len; > - offset = 0; > - bvec++; > - } > - } > + iov_iter_bvec(&iters[1], ITER_SOURCE, xdr->bvec, > + xdr_buf_pagecount(xdr), xdr->page_len); > > if (tail->iov_len) { > - ret = svc_tcp_send_kvec(sock, tail, 0); > - if (ret < 0) > - return ret; > - *sentp += ret; > + t = page_frag_alloc(NULL, tail->iov_len, GFP_KERNEL); > + if (!t) > + return -ENOMEM; > + memcpy(t, tail->iov_base, tail->iov_len); > + bvec_set_virt(&tail_bv, t, tail->iov_len); > + iov_iter_bvec(&iters[2], ITER_SOURCE, &tail_bv, 1, tail->iov_len); > + n++; > } > > -out: > + size = sizeof(marker) + head->iov_len + xdr->page_len + tail->iov_len; size = sizeof(marker) + xdr->len; If xdr->len != head->iov_len + xdr->page_len + tail->iov_len, that is a bug these days. > + iov_iter_iterlist(&msg.msg_iter, ITER_SOURCE, iters, n, size); > + > + ret = sock_sendmsg(sock, &msg); > + if (ret < 0) > + return ret; > + if (ret > 0) > + *sentp = ret; > + if (ret != size) > + return -EAGAIN; > return 0; > } > > -- Chuck Lever
Hi Chuck,

Do you have a simple AF_TLS test to hand?

David
Chuck Lever III <chuck.lever@oracle.com> wrote:

> It seems to me that you could eliminate the kernel_sendpage()
> consumer here in svc_tcp_sendmsg() without also replacing the
> kernel_sendmsg() calls. That would be a conservative step-wise
> approach which would carry less risk, and would accomplish
> your stated goal without more radical surgery.

Note that only the marker is sent with kernel_sendmsg() in the unmodified
code; the head and tail are sent with svc_tcp_send_kvec()... which uses
kernel_sendpage() which needs to be changed in my patchset.

I can make it do individual sendmsg calls for all those for now.

David
Chuck Lever III <chuck.lever@oracle.com> wrote:

> > + if (ret > 0)
> > + *sentp = ret;

Should that be:

	*sentp = ret - sizeof(marker);

David
Chuck Lever III <chuck.lever@oracle.com> wrote: > Simply replacing the kernel_sendpage() loop would be a > straightforward change and easy to evaluate and test, and > I'd welcome that without hesitation. How about the attached for a first phase? It does three sendmsgs, one for the marker + header, one for the body and one for the tail. David --- sunrpc: Use sendmsg(MSG_SPLICE_PAGES) rather then sendpage When transmitting data, call down into TCP using sendmsg with MSG_SPLICE_PAGES to indicate that content should be spliced rather than performing sendpage calls to transmit header, data pages and trailer. The marker and the header are passed in an array of kvecs. The marker will get copied and the header will get spliced. Signed-off-by: David Howells <dhowells@redhat.com> cc: Trond Myklebust <trond.myklebust@hammerspace.com> cc: Anna Schumaker <anna@kernel.org> cc: Chuck Lever <chuck.lever@oracle.com> cc: Jeff Layton <jlayton@kernel.org> cc: "David S. Miller" <davem@davemloft.net> cc: Eric Dumazet <edumazet@google.com> cc: Jakub Kicinski <kuba@kernel.org> cc: Paolo Abeni <pabeni@redhat.com> cc: Jens Axboe <axboe@kernel.dk> cc: Matthew Wilcox <willy@infradead.org> cc: linux-nfs@vger.kernel.org cc: netdev@vger.kernel.org --- include/linux/sunrpc/svc.h | 11 +++--- net/sunrpc/svcsock.c | 75 ++++++++++++++------------------------------- 2 files changed, 29 insertions(+), 57 deletions(-) diff --git a/include/linux/sunrpc/svc.h b/include/linux/sunrpc/svc.h index 877891536c2f..456ae554aa11 100644 --- a/include/linux/sunrpc/svc.h +++ b/include/linux/sunrpc/svc.h @@ -161,16 +161,15 @@ static inline bool svc_put_not_last(struct svc_serv *serv) extern u32 svc_max_payload(const struct svc_rqst *rqstp); /* - * RPC Requsts and replies are stored in one or more pages. + * RPC Requests and replies are stored in one or more pages. * We maintain an array of pages for each server thread. * Requests are copied into these pages as they arrive. 
Remaining * pages are available to write the reply into. * - * Pages are sent using ->sendpage so each server thread needs to - * allocate more to replace those used in sending. To help keep track - * of these pages we have a receive list where all pages initialy live, - * and a send list where pages are moved to when there are to be part - * of a reply. + * Pages are sent using ->sendmsg with MSG_SPLICE_PAGES so each server thread + * needs to allocate more to replace those used in sending. To help keep track + * of these pages we have a receive list where all pages initialy live, and a + * send list where pages are moved to when there are to be part of a reply. * * We use xdr_buf for holding responses as it fits well with NFS * read responses (that have a header, and some data pages, and possibly diff --git a/net/sunrpc/svcsock.c b/net/sunrpc/svcsock.c index 03a4f5615086..14efcc08c6f8 100644 --- a/net/sunrpc/svcsock.c +++ b/net/sunrpc/svcsock.c @@ -1060,16 +1060,8 @@ static int svc_tcp_recvfrom(struct svc_rqst *rqstp) return 0; /* record not complete */ } -static int svc_tcp_send_kvec(struct socket *sock, const struct kvec *vec, - int flags) -{ - return kernel_sendpage(sock, virt_to_page(vec->iov_base), - offset_in_page(vec->iov_base), - vec->iov_len, flags); -} - /* - * kernel_sendpage() is used exclusively to reduce the number of + * MSG_SPLICE_PAGES is used exclusively to reduce the number of * copy operations in this path. Therefore the caller must ensure * that the pages backing @xdr are unchanging. 
* @@ -1081,13 +1073,9 @@ static int svc_tcp_sendmsg(struct socket *sock, struct xdr_buf *xdr, { const struct kvec *head = xdr->head; const struct kvec *tail = xdr->tail; - struct kvec rm = { - .iov_base = &marker, - .iov_len = sizeof(marker), - }; - struct msghdr msg = { - .msg_flags = 0, - }; + struct kvec kv[2]; + struct msghdr msg = { .msg_flags = MSG_SPLICE_PAGES | MSG_MORE, }; + size_t sent; int ret; *sentp = 0; @@ -1095,51 +1083,36 @@ static int svc_tcp_sendmsg(struct socket *sock, struct xdr_buf *xdr, if (ret < 0) return ret; - ret = kernel_sendmsg(sock, &msg, &rm, 1, rm.iov_len); + kv[0].iov_base = &marker; + kv[0].iov_len = sizeof(marker); + kv[1] = *head; + iov_iter_kvec(&msg.msg_iter, ITER_SOURCE, kv, 2, sizeof(marker) + head->iov_len); + ret = sock_sendmsg(sock, &msg); if (ret < 0) return ret; - *sentp += ret; - if (ret != rm.iov_len) - return -EAGAIN; + sent = ret; - ret = svc_tcp_send_kvec(sock, head, 0); + if (!tail->iov_len) + msg.msg_flags &= ~MSG_MORE; + iov_iter_bvec(&msg.msg_iter, ITER_SOURCE, xdr->bvec, + xdr_buf_pagecount(xdr), xdr->page_len); + ret = sock_sendmsg(sock, &msg); if (ret < 0) return ret; - *sentp += ret; - if (ret != head->iov_len) - goto out; - - if (xdr->page_len) { - unsigned int offset, len, remaining; - struct bio_vec *bvec; - - bvec = xdr->bvec + (xdr->page_base >> PAGE_SHIFT); - offset = offset_in_page(xdr->page_base); - remaining = xdr->page_len; - while (remaining > 0) { - len = min(remaining, bvec->bv_len - offset); - ret = kernel_sendpage(sock, bvec->bv_page, - bvec->bv_offset + offset, - len, 0); - if (ret < 0) - return ret; - *sentp += ret; - if (ret != len) - goto out; - remaining -= len; - offset = 0; - bvec++; - } - } + sent += ret; if (tail->iov_len) { - ret = svc_tcp_send_kvec(sock, tail, 0); + msg.msg_flags &= ~MSG_MORE; + iov_iter_kvec(&msg.msg_iter, ITER_SOURCE, tail, 1, tail->iov_len); + ret = sock_sendmsg(sock, &msg); if (ret < 0) return ret; - *sentp += ret; + sent += ret; } - -out: + if (sent > 0) + *sentp =
sent; + if (sent != sizeof(marker) + xdr->len) + return -EAGAIN; return 0; }
> On Mar 30, 2023, at 5:41 AM, David Howells <dhowells@redhat.com> wrote:
>
> Chuck Lever III <chuck.lever@oracle.com> wrote:
>
> >> + if (ret > 0)
> >> + *sentp = ret;
>
> Should that be:
>
> *sentp = ret - sizeof(marker);
>
> David

That's a bit out of context, but ...

The return value of ->xpo_sendto is effectively ignored. There is no
caller of svc_process that checks its return code. svc_rdma_sendto(),
for example, returns zero or a negative errno. That should be cleaned
up one day.

-- Chuck Lever
David Howells <dhowells@redhat.com> wrote: > Chuck Lever III <chuck.lever@oracle.com> wrote: > > > Simply replacing the kernel_sendpage() loop would be a > > straightforward change and easy to evaluate and test, and > > I'd welcome that without hesitation. > > How about the attached for a first phase? > > It does three sendmsgs, one for the marker + header, one for the body and one > for the tail. ... And this as a second phase. David --- sunrpc: Allow xdr->bvec[] to be extended to do a single sendmsg Allow xdr->bvec[] to be extended and insert the marker, the header and the tail into it so that a single sendmsg() can be used to transmit the message. I wonder if it would be possible to insert the marker at the beginning of the head buffer. Signed-off-by: David Howells <dhowells@redhat.com> cc: Trond Myklebust <trond.myklebust@hammerspace.com> cc: Anna Schumaker <anna@kernel.org> cc: Chuck Lever <chuck.lever@oracle.com> cc: Jeff Layton <jlayton@kernel.org> cc: "David S. Miller" <davem@davemloft.net> cc: Eric Dumazet <edumazet@google.com> cc: Jakub Kicinski <kuba@kernel.org> cc: Paolo Abeni <pabeni@redhat.com> cc: Jens Axboe <axboe@kernel.dk> cc: Matthew Wilcox <willy@infradead.org> cc: linux-nfs@vger.kernel.org cc: netdev@vger.kernel.org --- include/linux/sunrpc/xdr.h | 2 - net/sunrpc/svcsock.c | 46 ++++++++++++++------------------------------- net/sunrpc/xdr.c | 19 ++++++++++-------- net/sunrpc/xprtsock.c | 6 ++--- 4 files changed, 30 insertions(+), 43 deletions(-) diff --git a/include/linux/sunrpc/xdr.h b/include/linux/sunrpc/xdr.h index 72014c9216fc..c74ea483228b 100644 --- a/include/linux/sunrpc/xdr.h +++ b/include/linux/sunrpc/xdr.h @@ -137,7 +137,7 @@ void xdr_inline_pages(struct xdr_buf *, unsigned int, struct page **, unsigned int, unsigned int); void xdr_terminate_string(const struct xdr_buf *, const u32); size_t xdr_buf_pagecount(const struct xdr_buf *buf); -int xdr_alloc_bvec(struct xdr_buf *buf, gfp_t gfp); +int xdr_alloc_bvec(struct xdr_buf *buf, gfp_t 
gfp, unsigned int head, unsigned int tail); void xdr_free_bvec(struct xdr_buf *buf); static inline __be32 *xdr_encode_array(__be32 *p, const void *s, unsigned int len) diff --git a/net/sunrpc/svcsock.c b/net/sunrpc/svcsock.c index 14efcc08c6f8..e55761fe1ccf 100644 --- a/net/sunrpc/svcsock.c +++ b/net/sunrpc/svcsock.c @@ -569,7 +569,7 @@ static int svc_udp_sendto(struct svc_rqst *rqstp) if (svc_xprt_is_dead(xprt)) goto out_notconn; - err = xdr_alloc_bvec(xdr, GFP_KERNEL); + err = xdr_alloc_bvec(xdr, GFP_KERNEL, 0, 0); if (err < 0) goto out_unlock; @@ -1073,45 +1073,29 @@ static int svc_tcp_sendmsg(struct socket *sock, struct xdr_buf *xdr, { const struct kvec *head = xdr->head; const struct kvec *tail = xdr->tail; - struct kvec kv[2]; - struct msghdr msg = { .msg_flags = MSG_SPLICE_PAGES | MSG_MORE, }; - size_t sent; + struct msghdr msg = { .msg_flags = MSG_SPLICE_PAGES, }; + size_t n; int ret; *sentp = 0; - ret = xdr_alloc_bvec(xdr, GFP_KERNEL); + ret = xdr_alloc_bvec(xdr, GFP_KERNEL, 2, 1); if (ret < 0) return ret; - kv[0].iov_base = &marker; - kv[0].iov_len = sizeof(marker); - kv[1] = *head; - iov_iter_kvec(&msg.msg_iter, ITER_SOURCE, kv, 2, sizeof(marker) + head->iov_len); + n = 2 + xdr_buf_pagecount(xdr); + bvec_set_virt(&xdr->bvec[0], &marker, sizeof(marker)); + bvec_set_virt(&xdr->bvec[1], head->iov_base, head->iov_len); + bvec_set_virt(&xdr->bvec[n], tail->iov_base, tail->iov_len); + if (tail->iov_len) + n++; + iov_iter_bvec(&msg.msg_iter, ITER_SOURCE, xdr->bvec, n, + sizeof(marker) + xdr->len); ret = sock_sendmsg(sock, &msg); if (ret < 0) return ret; - sent = ret; - - if (!tail->iov_len) - msg.msg_flags &= ~MSG_MORE; - iov_iter_bvec(&msg.msg_iter, ITER_SOURCE, xdr->bvec, - xdr_buf_pagecount(xdr), xdr->page_len); - ret = sock_sendmsg(sock, &msg); - if (ret < 0) -
return ret; - sent += ret; - } - if (sent > 0) - *sentp = sent; - if (sent != sizeof(marker) + xdr->len) + if (ret > 0) + *sentp = ret; + if (ret != sizeof(marker) + xdr->len) return -EAGAIN; return 0; } diff --git a/net/sunrpc/xdr.c b/net/sunrpc/xdr.c index 36835b2f5446..695821963849 100644 --- a/net/sunrpc/xdr.c +++ b/net/sunrpc/xdr.c @@ -141,18 +141,21 @@ size_t xdr_buf_pagecount(const struct xdr_buf *buf) } int -xdr_alloc_bvec(struct xdr_buf *buf, gfp_t gfp) +xdr_alloc_bvec(struct xdr_buf *buf, gfp_t gfp, unsigned int head, unsigned int tail) { - size_t i, n = xdr_buf_pagecount(buf); + size_t i, j = 0, n = xdr_buf_pagecount(buf); - if (n != 0 && buf->bvec == NULL) { - buf->bvec = kmalloc_array(n, sizeof(buf->bvec[0]), gfp); + if (head + n + tail != 0 && buf->bvec == NULL) { + buf->bvec = kmalloc_array(head + n + tail, + sizeof(buf->bvec[0]), gfp); if (!buf->bvec) return -ENOMEM; - for (i = 0; i < n; i++) { - bvec_set_page(&buf->bvec[i], buf->pages[i], PAGE_SIZE, - 0); - } + for (i = 0; i < head; i++) + bvec_set_page(&buf->bvec[j++], NULL, 0, 0); + for (i = 0; i < n; i++) + bvec_set_page(&buf->bvec[j++], buf->pages[i], PAGE_SIZE, 0); + for (i = 0; i < tail; i++) + bvec_set_page(&buf->bvec[j++], NULL, 0, 0); } return 0; } diff --git a/net/sunrpc/xprtsock.c b/net/sunrpc/xprtsock.c index adcbedc244d6..fdf67e84b1c7 100644 --- a/net/sunrpc/xprtsock.c +++ b/net/sunrpc/xprtsock.c @@ -825,7 +825,7 @@ static int xs_stream_nospace(struct rpc_rqst *req, bool vm_wait) static int xs_stream_prepare_request(struct rpc_rqst *req, struct xdr_buf *buf) { - return xdr_alloc_bvec(buf, rpc_task_gfp_mask()); + return xdr_alloc_bvec(buf, rpc_task_gfp_mask(), 0, 0); } /* @@ -954,7 +954,7 @@ static int xs_udp_send_request(struct rpc_rqst *req) if (!xprt_request_get_cong(xprt, req)) return -EBADSLT; - status = xdr_alloc_bvec(xdr, rpc_task_gfp_mask()); + status = xdr_alloc_bvec(xdr, rpc_task_gfp_mask(), 0, 0); if (status < 0) return status; req->rq_xtime = ktime_get(); @@ -2591,7 +2591,7 
@@ static int bc_sendto(struct rpc_rqst *req) int err; req->rq_xtime = ktime_get(); - err = xdr_alloc_bvec(xdr, rpc_task_gfp_mask()); + err = xdr_alloc_bvec(xdr, rpc_task_gfp_mask(), 0, 0); if (err < 0) return err; err = xprt_sock_sendmsg(transport->sock, &msg, xdr, 0, marker, &sent);
> On Mar 30, 2023, at 9:16 AM, David Howells <dhowells@redhat.com> wrote: > > David Howells <dhowells@redhat.com> wrote: > >> Chuck Lever III <chuck.lever@oracle.com> wrote: >> >>> Simply replacing the kernel_sendpage() loop would be a >>> straightforward change and easy to evaluate and test, and >>> I'd welcome that without hesitation. >> >> How about the attached for a first phase? >> >> It does three sendmsgs, one for the marker + header, one for the body and one >> for the tail. > > ... And this as a second phase. > > David > --- > sunrpc: Allow xdr->bvec[] to be extended to do a single sendmsg > > Allow xdr->bvec[] to be extended and insert the marker, the header and the > tail into it so that a single sendmsg() can be used to transmit the message. Don't. Just change svc_tcp_send_kvec() to use sock_sendmsg, and leave the marker alone for now, please. Let's focus on replacing kernel_sendpage() in this series and leave the deeper clean-ups for another time. > I wonder if it would be possible to insert the marker at the beginning of the > head buffer. That's the way it used to work. The reason we don't do that is because each transport has its own record marking mechanism. UDP has nothing, since each RPC message is encapsulated in a single datagram. RDMA has a full XDR-encoded header which contains the location of data chunks to be moved via RDMA. > Signed-off-by: David Howells <dhowells@redhat.com> > cc: Trond Myklebust <trond.myklebust@hammerspace.com> > cc: Anna Schumaker <anna@kernel.org> > cc: Chuck Lever <chuck.lever@oracle.com> > cc: Jeff Layton <jlayton@kernel.org> > cc: "David S. 
Miller" <davem@davemloft.net> > cc: Eric Dumazet <edumazet@google.com> > cc: Jakub Kicinski <kuba@kernel.org> > cc: Paolo Abeni <pabeni@redhat.com> > cc: Jens Axboe <axboe@kernel.dk> > cc: Matthew Wilcox <willy@infradead.org> > cc: linux-nfs@vger.kernel.org > cc: netdev@vger.kernel.org > --- > include/linux/sunrpc/xdr.h | 2 - > net/sunrpc/svcsock.c | 46 ++++++++++++++------------------------------- > net/sunrpc/xdr.c | 19 ++++++++++-------- > net/sunrpc/xprtsock.c | 6 ++--- > 4 files changed, 30 insertions(+), 43 deletions(-) > > diff --git a/include/linux/sunrpc/xdr.h b/include/linux/sunrpc/xdr.h > index 72014c9216fc..c74ea483228b 100644 > --- a/include/linux/sunrpc/xdr.h > +++ b/include/linux/sunrpc/xdr.h > @@ -137,7 +137,7 @@ void xdr_inline_pages(struct xdr_buf *, unsigned int, > struct page **, unsigned int, unsigned int); > void xdr_terminate_string(const struct xdr_buf *, const u32); > size_t xdr_buf_pagecount(const struct xdr_buf *buf); > -int xdr_alloc_bvec(struct xdr_buf *buf, gfp_t gfp); > +int xdr_alloc_bvec(struct xdr_buf *buf, gfp_t gfp, unsigned int head, unsigned int tail); > void xdr_free_bvec(struct xdr_buf *buf); > > static inline __be32 *xdr_encode_array(__be32 *p, const void *s, unsigned int len) > diff --git a/net/sunrpc/svcsock.c b/net/sunrpc/svcsock.c > index 14efcc08c6f8..e55761fe1ccf 100644 > --- a/net/sunrpc/svcsock.c > +++ b/net/sunrpc/svcsock.c > @@ -569,7 +569,7 @@ static int svc_udp_sendto(struct svc_rqst *rqstp) > if (svc_xprt_is_dead(xprt)) > goto out_notconn; > > - err = xdr_alloc_bvec(xdr, GFP_KERNEL); > + err = xdr_alloc_bvec(xdr, GFP_KERNEL, 0, 0); > if (err < 0) > goto out_unlock; > > @@ -1073,45 +1073,29 @@ static int svc_tcp_sendmsg(struct socket *sock, struct xdr_buf *xdr, > { > const struct kvec *head = xdr->head; > const struct kvec *tail = xdr->tail; > - struct kvec kv[2]; > - struct msghdr msg = { .msg_flags = MSG_SPLICE_PAGES | MSG_MORE, }; > - size_t sent; > + struct msghdr msg = { .msg_flags = MSG_SPLICE_PAGES, }; > + 
size_t n; > int ret; > > *sentp = 0; > - ret = xdr_alloc_bvec(xdr, GFP_KERNEL); > + ret = xdr_alloc_bvec(xdr, GFP_KERNEL, 2, 1); > if (ret < 0) > return ret; > > - kv[0].iov_base = &marker; > - kv[0].iov_len = sizeof(marker); > - kv[1] = *head; > - iov_iter_kvec(&msg.msg_iter, ITER_SOURCE, kv, 2, sizeof(marker) + head->iov_len); > + n = 2 + xdr_buf_pagecount(xdr); > + bvec_set_virt(&xdr->bvec[0], &marker, sizeof(marker)); > + bvec_set_virt(&xdr->bvec[1], head->iov_base, head->iov_len); > + bvec_set_virt(&xdr->bvec[n], tail->iov_base, tail->iov_len); > + if (tail->iov_len) > + n++; > + iov_iter_bvec(&msg.msg_iter, ITER_SOURCE, xdr->bvec, n, > + sizeof(marker) + xdr->len); > ret = sock_sendmsg(sock, &msg); > if (ret < 0) > return ret; > - sent = ret; > - > - if (!tail->iov_len) > - msg.msg_flags &= ~MSG_MORE; > - iov_iter_bvec(&msg.msg_iter, ITER_SOURCE, xdr->bvec, > - xdr_buf_pagecount(xdr), xdr->page_len); > - ret = sock_sendmsg(sock, &msg); > - if (ret < 0) > - return ret; > - sent += ret; > - > - if (tail->iov_len) { > - msg.msg_flags &= ~MSG_MORE; > - iov_iter_kvec(&msg.msg_iter, ITER_SOURCE, tail, 1, tail->iov_len); > - ret = sock_sendmsg(sock, &msg); > - if (ret < 0) > - return ret; > - sent += ret; > - } > - if (sent > 0) > - *sentp = sent; > - if (sent != sizeof(marker) + xdr->len) > + if (ret > 0) > + *sentp = ret; > + if (ret != sizeof(marker) + xdr->len) > return -EAGAIN; > return 0; > } > diff --git a/net/sunrpc/xdr.c b/net/sunrpc/xdr.c > index 36835b2f5446..695821963849 100644 > --- a/net/sunrpc/xdr.c > +++ b/net/sunrpc/xdr.c > @@ -141,18 +141,21 @@ size_t xdr_buf_pagecount(const struct xdr_buf *buf) > } > > int > -xdr_alloc_bvec(struct xdr_buf *buf, gfp_t gfp) > +xdr_alloc_bvec(struct xdr_buf *buf, gfp_t gfp, unsigned int head, unsigned int tail) > { > - size_t i, n = xdr_buf_pagecount(buf); > + size_t i, j = 0, n = xdr_buf_pagecount(buf); > > - if (n != 0 && buf->bvec == NULL) { > - buf->bvec = kmalloc_array(n, sizeof(buf->bvec[0]), gfp); > + if (head + n +
tail != 0 && buf->bvec == NULL) { > + buf->bvec = kmalloc_array(head + n + tail, > + sizeof(buf->bvec[0]), gfp); > if (!buf->bvec) > return -ENOMEM; > - for (i = 0; i < n; i++) { > - bvec_set_page(&buf->bvec[i], buf->pages[i], PAGE_SIZE, > - 0); > - } > + for (i = 0; i < head; i++) > + bvec_set_page(&buf->bvec[j++], NULL, 0, 0); > + for (i = 0; i < n; i++) > + bvec_set_page(&buf->bvec[j++], buf->pages[i], PAGE_SIZE, 0); > + for (i = 0; i < tail; i++) > + bvec_set_page(&buf->bvec[j++], NULL, 0, 0); > } > return 0; > } > diff --git a/net/sunrpc/xprtsock.c b/net/sunrpc/xprtsock.c > index adcbedc244d6..fdf67e84b1c7 100644 > --- a/net/sunrpc/xprtsock.c > +++ b/net/sunrpc/xprtsock.c > @@ -825,7 +825,7 @@ static int xs_stream_nospace(struct rpc_rqst *req, bool vm_wait) > > static int xs_stream_prepare_request(struct rpc_rqst *req, struct xdr_buf *buf) > { > - return xdr_alloc_bvec(buf, rpc_task_gfp_mask()); > + return xdr_alloc_bvec(buf, rpc_task_gfp_mask(), 0, 0); > } > > /* > @@ -954,7 +954,7 @@ static int xs_udp_send_request(struct rpc_rqst *req) > if (!xprt_request_get_cong(xprt, req)) > return -EBADSLT; > > - status = xdr_alloc_bvec(xdr, rpc_task_gfp_mask()); > + status = xdr_alloc_bvec(xdr, rpc_task_gfp_mask(), 0, 0); > if (status < 0) > return status; > req->rq_xtime = ktime_get(); > @@ -2591,7 +2591,7 @@ static int bc_sendto(struct rpc_rqst *req) > int err; > > req->rq_xtime = ktime_get(); > - err = xdr_alloc_bvec(xdr, rpc_task_gfp_mask()); > + err = xdr_alloc_bvec(xdr, rpc_task_gfp_mask(), 0, 0); > if (err < 0) > return err; > err = xprt_sock_sendmsg(transport->sock, &msg, xdr, 0, marker, &sent); > -- Chuck Lever
Chuck Lever III <chuck.lever@oracle.com> wrote: > Don't. Just change svc_tcp_send_kvec() to use sock_sendmsg, and > leave the marker alone for now, please. If you insist. See attached. David --- sunrpc: Use sendmsg(MSG_SPLICE_PAGES) rather then sendpage When transmitting data, call down into TCP using sendmsg with MSG_SPLICE_PAGES to indicate that content should be spliced rather than performing sendpage calls to transmit header, data pages and trailer. Signed-off-by: David Howells <dhowells@redhat.com> cc: Trond Myklebust <trond.myklebust@hammerspace.com> cc: Anna Schumaker <anna@kernel.org> cc: Chuck Lever <chuck.lever@oracle.com> cc: Jeff Layton <jlayton@kernel.org> cc: "David S. Miller" <davem@davemloft.net> cc: Eric Dumazet <edumazet@google.com> cc: Jakub Kicinski <kuba@kernel.org> cc: Paolo Abeni <pabeni@redhat.com> cc: Jens Axboe <axboe@kernel.dk> cc: Matthew Wilcox <willy@infradead.org> cc: linux-nfs@vger.kernel.org cc: netdev@vger.kernel.org --- include/linux/sunrpc/svc.h | 11 +++++------ net/sunrpc/svcsock.c | 40 +++++++++++++--------------------------- 2 files changed, 18 insertions(+), 33 deletions(-) diff --git a/include/linux/sunrpc/svc.h b/include/linux/sunrpc/svc.h index 877891536c2f..456ae554aa11 100644 --- a/include/linux/sunrpc/svc.h +++ b/include/linux/sunrpc/svc.h @@ -161,16 +161,15 @@ static inline bool svc_put_not_last(struct svc_serv *serv) extern u32 svc_max_payload(const struct svc_rqst *rqstp); /* - * RPC Requsts and replies are stored in one or more pages. + * RPC Requests and replies are stored in one or more pages. * We maintain an array of pages for each server thread. * Requests are copied into these pages as they arrive. Remaining * pages are available to write the reply into. * - * Pages are sent using ->sendpage so each server thread needs to - * allocate more to replace those used in sending. 
To help keep track - * of these pages we have a receive list where all pages initialy live, - * and a send list where pages are moved to when there are to be part - * of a reply. + * Pages are sent using ->sendmsg with MSG_SPLICE_PAGES so each server thread + * needs to allocate more to replace those used in sending. To help keep track + * of these pages we have a receive list where all pages initialy live, and a + * send list where pages are moved to when there are to be part of a reply. * * We use xdr_buf for holding responses as it fits well with NFS * read responses (that have a header, and some data pages, and possibly diff --git a/net/sunrpc/svcsock.c b/net/sunrpc/svcsock.c index 03a4f5615086..af146e053dfc 100644 --- a/net/sunrpc/svcsock.c +++ b/net/sunrpc/svcsock.c @@ -1059,17 +1059,18 @@ static int svc_tcp_recvfrom(struct svc_rqst *rqstp) svc_xprt_received(rqstp->rq_xprt); return 0; /* record not complete */ } - + static int svc_tcp_send_kvec(struct socket *sock, const struct kvec *vec, int flags) { - return kernel_sendpage(sock, virt_to_page(vec->iov_base), - offset_in_page(vec->iov_base), - vec->iov_len, flags); + struct msghdr msg = { .msg_flags = MSG_SPLICE_PAGES | flags, }; + + iov_iter_kvec(&msg.msg_iter, ITER_SOURCE, vec, 1, vec->iov_len); + return sock_sendmsg(sock, &msg); } /* - * kernel_sendpage() is used exclusively to reduce the number of + * MSG_SPLICE_PAGES is used exclusively to reduce the number of * copy operations in this path. Therefore the caller must ensure * that the pages backing @xdr are unchanging. 
* @@ -1109,28 +1110,13 @@ static int svc_tcp_sendmsg(struct socket *sock, struct xdr_buf *xdr, if (ret != head->iov_len) goto out; - if (xdr->page_len) { - unsigned int offset, len, remaining; - struct bio_vec *bvec; - - bvec = xdr->bvec + (xdr->page_base >> PAGE_SHIFT); - offset = offset_in_page(xdr->page_base); - remaining = xdr->page_len; - while (remaining > 0) { - len = min(remaining, bvec->bv_len - offset); - ret = kernel_sendpage(sock, bvec->bv_page, - bvec->bv_offset + offset, - len, 0); - if (ret < 0) - return ret; - *sentp += ret; - if (ret != len) - goto out; - remaining -= len; - offset = 0; - bvec++; - } - } + msg.msg_flags = MSG_SPLICE_PAGES; + iov_iter_bvec(&msg.msg_iter, ITER_SOURCE, xdr->bvec, + xdr_buf_pagecount(xdr), xdr->page_len); + ret = sock_sendmsg(sock, &msg); + if (ret < 0) + return ret; + *sentp += ret; if (tail->iov_len) { ret = svc_tcp_send_kvec(sock, tail, 0);
> On Mar 30, 2023, at 10:26 AM, David Howells <dhowells@redhat.com> wrote:
> 
> Chuck Lever III <chuck.lever@oracle.com> wrote:
> 
>> Don't. Just change svc_tcp_send_kvec() to use sock_sendmsg, and
>> leave the marker alone for now, please.
> 
> If you insist. See attached.

Very good, thank you for accommodating my regression paranoia.

Acked-by: Chuck Lever <chuck.lever@oracle.com>

> 
> David
> ---
> sunrpc: Use sendmsg(MSG_SPLICE_PAGES) rather then sendpage
> 
> When transmitting data, call down into TCP using sendmsg with
> MSG_SPLICE_PAGES to indicate that content should be spliced rather than
> performing sendpage calls to transmit header, data pages and trailer.
> 
> Signed-off-by: David Howells <dhowells@redhat.com>
> cc: Trond Myklebust <trond.myklebust@hammerspace.com>
> cc: Anna Schumaker <anna@kernel.org>
> cc: Chuck Lever <chuck.lever@oracle.com>
> cc: Jeff Layton <jlayton@kernel.org>
> cc: "David S. Miller" <davem@davemloft.net>
> cc: Eric Dumazet <edumazet@google.com>
> cc: Jakub Kicinski <kuba@kernel.org>
> cc: Paolo Abeni <pabeni@redhat.com>
> cc: Jens Axboe <axboe@kernel.dk>
> cc: Matthew Wilcox <willy@infradead.org>
> cc: linux-nfs@vger.kernel.org
> cc: netdev@vger.kernel.org
> ---
>  include/linux/sunrpc/svc.h | 11 +++++------
>  net/sunrpc/svcsock.c       | 40 +++++++++++++---------------------------
>  2 files changed, 18 insertions(+), 33 deletions(-)
> 
> diff --git a/include/linux/sunrpc/svc.h b/include/linux/sunrpc/svc.h
> index 877891536c2f..456ae554aa11 100644
> --- a/include/linux/sunrpc/svc.h
> +++ b/include/linux/sunrpc/svc.h
> @@ -161,16 +161,15 @@ static inline bool svc_put_not_last(struct svc_serv *serv)
>  extern u32 svc_max_payload(const struct svc_rqst *rqstp);
> 
>  /*
> - * RPC Requsts and replies are stored in one or more pages.
> + * RPC Requests and replies are stored in one or more pages.
>   * We maintain an array of pages for each server thread.
>   * Requests are copied into these pages as they arrive.  Remaining
>   * pages are available to write the reply into.
>   *
> - * Pages are sent using ->sendpage so each server thread needs to
> - * allocate more to replace those used in sending.  To help keep track
> - * of these pages we have a receive list where all pages initialy live,
> - * and a send list where pages are moved to when there are to be part
> - * of a reply.
> + * Pages are sent using ->sendmsg with MSG_SPLICE_PAGES so each server thread
> + * needs to allocate more to replace those used in sending.  To help keep track
> + * of these pages we have a receive list where all pages initialy live, and a
> + * send list where pages are moved to when there are to be part of a reply.
>   *
>   * We use xdr_buf for holding responses as it fits well with NFS
>   * read responses (that have a header, and some data pages, and possibly
> diff --git a/net/sunrpc/svcsock.c b/net/sunrpc/svcsock.c
> index 03a4f5615086..af146e053dfc 100644
> --- a/net/sunrpc/svcsock.c
> +++ b/net/sunrpc/svcsock.c
> @@ -1059,17 +1059,18 @@ static int svc_tcp_recvfrom(struct svc_rqst *rqstp)
>  	svc_xprt_received(rqstp->rq_xprt);
>  	return 0;	/* record not complete */
>  }
> -
> +
>  static int svc_tcp_send_kvec(struct socket *sock, const struct kvec *vec,
>  			     int flags)
>  {
> -	return kernel_sendpage(sock, virt_to_page(vec->iov_base),
> -			       offset_in_page(vec->iov_base),
> -			       vec->iov_len, flags);
> +	struct msghdr msg = { .msg_flags = MSG_SPLICE_PAGES | flags, };
> +
> +	iov_iter_kvec(&msg.msg_iter, ITER_SOURCE, vec, 1, vec->iov_len);
> +	return sock_sendmsg(sock, &msg);
>  }
> 
>  /*
> - * kernel_sendpage() is used exclusively to reduce the number of
> + * MSG_SPLICE_PAGES is used exclusively to reduce the number of
>   * copy operations in this path. Therefore the caller must ensure
>   * that the pages backing @xdr are unchanging.
>   *
> @@ -1109,28 +1110,13 @@ static int svc_tcp_sendmsg(struct socket *sock, struct xdr_buf *xdr,
>  	if (ret != head->iov_len)
>  		goto out;
> 
> -	if (xdr->page_len) {
> -		unsigned int offset, len, remaining;
> -		struct bio_vec *bvec;
> -
> -		bvec = xdr->bvec + (xdr->page_base >> PAGE_SHIFT);
> -		offset = offset_in_page(xdr->page_base);
> -		remaining = xdr->page_len;
> -		while (remaining > 0) {
> -			len = min(remaining, bvec->bv_len - offset);
> -			ret = kernel_sendpage(sock, bvec->bv_page,
> -					      bvec->bv_offset + offset,
> -					      len, 0);
> -			if (ret < 0)
> -				return ret;
> -			*sentp += ret;
> -			if (ret != len)
> -				goto out;
> -			remaining -= len;
> -			offset = 0;
> -			bvec++;
> -		}
> -	}
> +	msg.msg_flags = MSG_SPLICE_PAGES;
> +	iov_iter_bvec(&msg.msg_iter, ITER_SOURCE, xdr->bvec,
> +		      xdr_buf_pagecount(xdr), xdr->page_len);
> +	ret = sock_sendmsg(sock, &msg);
> +	if (ret < 0)
> +		return ret;
> +	*sentp += ret;
> 
>  	if (tail->iov_len) {
>  		ret = svc_tcp_send_kvec(sock, tail, 0);

--
Chuck Lever
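The "marker" Chuck asks to leave alone is the RPC-over-TCP record marker (RFC 5531, section 11): a 4-byte big-endian word sent ahead of each fragment, whose top bit flags the last fragment and whose low 31 bits carry the fragment length. A small illustration of how that word is built; the helper name is invented, and it returns the host-order value, which the kernel would store big-endian (htonl) before putting it on the wire:

```c
#include <assert.h>
#include <stdint.h>

/* RPC over TCP record marking (RFC 5531, section 11): every fragment
 * is preceded by a 4-byte word; the top bit marks the last fragment
 * and the low 31 bits hold the fragment length.  rpc_frag_header() is
 * an invented name for illustration only. */
static uint32_t rpc_frag_header(uint32_t frag_len, int last)
{
	return (last ? UINT32_C(0x80000000) : 0) |
	       (frag_len & UINT32_C(0x7fffffff));
}
```

Because this word frames every record on the stream, sending it with the wrong length or flag would desynchronize the connection, which is why the v2 patch keeps the marker send untouched.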
I gave this a spin because I had noticed a previous regression around
the 5.7 time frame in sendpage/sendmsg code changes:

https://bugzilla.kernel.org/show_bug.cgi?id=209439

In that case there was a noticeable regression in performance for high
performance servers (100gbit). I see no such performance problems with
David's iov-sendpage branch and it all looks good to me with simple
benchmarks (100gbit server, 100 x 1gbit clients reading data).

Tested-by: Daire Byrne <daire@dneg.com>

Cheers,

Daire

On Thu, 30 Mar 2023 at 17:37, Chuck Lever III <chuck.lever@oracle.com> wrote:
>
> > On Mar 30, 2023, at 10:26 AM, David Howells <dhowells@redhat.com> wrote:
> >
> > Chuck Lever III <chuck.lever@oracle.com> wrote:
> >
> >> Don't. Just change svc_tcp_send_kvec() to use sock_sendmsg, and
> >> leave the marker alone for now, please.
> >
> > If you insist. See attached.
>
> Very good, thank you for accommodating my regression paranoia.
>
> Acked-by: Chuck Lever <chuck.lever@oracle.com>
>
> --
> Chuck Lever
diff --git a/include/linux/sunrpc/svc.h b/include/linux/sunrpc/svc.h
index 877891536c2f..456ae554aa11 100644
--- a/include/linux/sunrpc/svc.h
+++ b/include/linux/sunrpc/svc.h
@@ -161,16 +161,15 @@ static inline bool svc_put_not_last(struct svc_serv *serv)
 extern u32 svc_max_payload(const struct svc_rqst *rqstp);
 
 /*
- * RPC Requsts and replies are stored in one or more pages.
+ * RPC Requests and replies are stored in one or more pages.
  * We maintain an array of pages for each server thread.
  * Requests are copied into these pages as they arrive.  Remaining
  * pages are available to write the reply into.
  *
- * Pages are sent using ->sendpage so each server thread needs to
- * allocate more to replace those used in sending.  To help keep track
- * of these pages we have a receive list where all pages initialy live,
- * and a send list where pages are moved to when there are to be part
- * of a reply.
+ * Pages are sent using ->sendmsg with MSG_SPLICE_PAGES so each server thread
+ * needs to allocate more to replace those used in sending.  To help keep track
+ * of these pages we have a receive list where all pages initialy live, and a
+ * send list where pages are moved to when there are to be part of a reply.
  *
  * We use xdr_buf for holding responses as it fits well with NFS
  * read responses (that have a header, and some data pages, and possibly
diff --git a/net/sunrpc/svcsock.c b/net/sunrpc/svcsock.c
index 03a4f5615086..f1cc53aad6e0 100644
--- a/net/sunrpc/svcsock.c
+++ b/net/sunrpc/svcsock.c
@@ -1060,16 +1060,8 @@ static int svc_tcp_recvfrom(struct svc_rqst *rqstp)
 	return 0;	/* record not complete */
 }
 
-static int svc_tcp_send_kvec(struct socket *sock, const struct kvec *vec,
-			     int flags)
-{
-	return kernel_sendpage(sock, virt_to_page(vec->iov_base),
-			       offset_in_page(vec->iov_base),
-			       vec->iov_len, flags);
-}
-
 /*
- * kernel_sendpage() is used exclusively to reduce the number of
+ * MSG_SPLICE_PAGES is used exclusively to reduce the number of
  * copy operations in this path. Therefore the caller must ensure
  * that the pages backing @xdr are unchanging.
  *
@@ -1081,65 +1073,54 @@ static int svc_tcp_sendmsg(struct socket *sock, struct xdr_buf *xdr,
 {
 	const struct kvec *head = xdr->head;
 	const struct kvec *tail = xdr->tail;
-	struct kvec rm = {
-		.iov_base	= &marker,
-		.iov_len	= sizeof(marker),
-	};
+	struct iov_iter iters[3];
+	struct bio_vec head_bv, tail_bv;
 	struct msghdr msg = {
-		.msg_flags	= 0,
+		.msg_flags	= MSG_SPLICE_PAGES,
 	};
-	int ret;
+	void *m, *t;
+	int ret, n = 2, size;
 
 	*sentp = 0;
 
 	ret = xdr_alloc_bvec(xdr, GFP_KERNEL);
 	if (ret < 0)
 		return ret;
 
-	ret = kernel_sendmsg(sock, &msg, &rm, 1, rm.iov_len);
-	if (ret < 0)
-		return ret;
-	*sentp += ret;
-	if (ret != rm.iov_len)
-		return -EAGAIN;
+	m = page_frag_alloc(NULL, sizeof(marker) + head->iov_len + tail->iov_len,
+			    GFP_KERNEL);
+	if (!m)
+		return -ENOMEM;
 
-	ret = svc_tcp_send_kvec(sock, head, 0);
-	if (ret < 0)
-		return ret;
-	*sentp += ret;
-	if (ret != head->iov_len)
-		goto out;
+	memcpy(m, &marker, sizeof(marker));
+	if (head->iov_len)
+		memcpy(m + sizeof(marker), head->iov_base, head->iov_len);
+	bvec_set_virt(&head_bv, m, sizeof(marker) + head->iov_len);
+	iov_iter_bvec(&iters[0], ITER_SOURCE, &head_bv, 1,
+		      sizeof(marker) + head->iov_len);
 
-	if (xdr->page_len) {
-		unsigned int offset, len, remaining;
-		struct bio_vec *bvec;
-
-		bvec = xdr->bvec + (xdr->page_base >> PAGE_SHIFT);
-		offset = offset_in_page(xdr->page_base);
-		remaining = xdr->page_len;
-		while (remaining > 0) {
-			len = min(remaining, bvec->bv_len - offset);
-			ret = kernel_sendpage(sock, bvec->bv_page,
-					      bvec->bv_offset + offset,
-					      len, 0);
-			if (ret < 0)
-				return ret;
-			*sentp += ret;
-			if (ret != len)
-				goto out;
-			remaining -= len;
-			offset = 0;
-			bvec++;
-		}
-	}
+	iov_iter_bvec(&iters[1], ITER_SOURCE, xdr->bvec,
+		      xdr_buf_pagecount(xdr), xdr->page_len);
 
 	if (tail->iov_len) {
-		ret = svc_tcp_send_kvec(sock, tail, 0);
-		if (ret < 0)
-			return ret;
-		*sentp += ret;
+		t = page_frag_alloc(NULL, tail->iov_len, GFP_KERNEL);
+		if (!t)
+			return -ENOMEM;
+		memcpy(t, tail->iov_base, tail->iov_len);
+		bvec_set_virt(&tail_bv, t, tail->iov_len);
+		iov_iter_bvec(&iters[2], ITER_SOURCE, &tail_bv, 1, tail->iov_len);
+		n++;
 	}
 
-out:
+	size = sizeof(marker) + head->iov_len + xdr->page_len + tail->iov_len;
+	iov_iter_iterlist(&msg.msg_iter, ITER_SOURCE, iters, n, size);
+
+	ret = sock_sendmsg(sock, &msg);
+	if (ret < 0)
+		return ret;
+	if (ret > 0)
+		*sentp = ret;
+	if (ret != size)
+		return -EAGAIN;
 	return 0;
 }
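The iters[3] array in the patch is the "iterator of iterators": the header fragment, the data pages and the tail fragment each get their own iterator, and iov_iter_iterlist() chains them so sock_sendmsg() consumes all three as one stream. As a rough userspace model of that chaining (the span and chain_iter types below are invented for illustration; they are not kernel APIs):

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Rough userspace model of chaining several buffers and consuming
 * them as one stream, the way iov_iter_iterlist() lets sock_sendmsg()
 * walk the header, page and tail iterators in a single call.  The
 * span and chain_iter types are invented for illustration. */
struct span {
	const char *base;
	size_t len;
};

struct chain_iter {
	const struct span *spans;
	int nspans;
	int cur;	/* index of the span being consumed */
	size_t off;	/* offset within that span */
};

/* Copy up to n bytes from the chained spans; returns bytes copied. */
static size_t chain_copy(struct chain_iter *it, char *out, size_t n)
{
	size_t done = 0;

	while (done < n && it->cur < it->nspans) {
		const struct span *s = &it->spans[it->cur];
		size_t avail = s->len - it->off;
		size_t take = avail < n - done ? avail : n - done;

		memcpy(out + done, s->base + it->off, take);
		done += take;
		it->off += take;
		if (it->off == s->len) {	/* advance to the next span */
			it->cur++;
			it->off = 0;
		}
	}
	return done;
}
```

A consumer that asks for fewer bytes than one span holds simply resumes from the recorded (cur, off) position on the next call, which mirrors how a partial sendmsg() leaves the iterator mid-stream.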