From patchwork Wed Mar 29 14:13:36 2023
X-Patchwork-Submitter: David Howells
X-Patchwork-Id: 13192538
From: David Howells
To: Matthew Wilcox, "David S. Miller", Eric Dumazet, Jakub Kicinski,
    Paolo Abeni
Cc: David Howells, Al Viro, Christoph Hellwig, Jens Axboe, Jeff Layton,
    Christian Brauner, Chuck Lever III, Linus Torvalds,
    netdev@vger.kernel.org, linux-fsdevel@vger.kernel.org,
    linux-kernel@vger.kernel.org, linux-mm@kvack.org, Bernard Metzler,
    Tom Talpey, linux-rdma@vger.kernel.org
Subject: [RFC PATCH v2 30/48] siw: Use sendmsg(MSG_SPLICE_PAGES) rather than
 sendpage to transmit
Date: Wed, 29 Mar 2023 15:13:36 +0100
Message-Id: <20230329141354.516864-31-dhowells@redhat.com>
In-Reply-To: <20230329141354.516864-1-dhowells@redhat.com>
References: <20230329141354.516864-1-dhowells@redhat.com>
MIME-Version: 1.0
When transmitting data, call down into TCP using a single sendmsg with
MSG_SPLICE_PAGES to indicate that content should be spliced, rather than
performing several sendmsg and sendpage calls to transmit the header, the
data pages and the trailer.
To make this work, the data is assembled in a bio_vec array and attached to
a BVEC-type iterator. The header and trailer (if present) are copied into
page fragments that can be freed with put_page().

Signed-off-by: David Howells
cc: Bernard Metzler
cc: Tom Talpey
cc: "David S. Miller"
cc: Eric Dumazet
cc: Jakub Kicinski
cc: Paolo Abeni
cc: Jens Axboe
cc: Matthew Wilcox
cc: linux-rdma@vger.kernel.org
cc: netdev@vger.kernel.org
---
 drivers/infiniband/sw/siw/siw_qp_tx.c | 234 ++++++--------------------
 1 file changed, 48 insertions(+), 186 deletions(-)

diff --git a/drivers/infiniband/sw/siw/siw_qp_tx.c b/drivers/infiniband/sw/siw/siw_qp_tx.c
index fa5de40d85d5..fbe80c06d0ca 100644
--- a/drivers/infiniband/sw/siw/siw_qp_tx.c
+++ b/drivers/infiniband/sw/siw/siw_qp_tx.c
@@ -312,114 +312,8 @@ static int siw_tx_ctrl(struct siw_iwarp_tx *c_tx, struct socket *s,
 	return rv;
 }
 
-/*
- * 0copy TCP transmit interface: Use MSG_SPLICE_PAGES.
- *
- * Using sendpage to push page by page appears to be less efficient
- * than using sendmsg, even if data are copied.
- *
- * A general performance limitation might be the extra four bytes
- * trailer checksum segment to be pushed after user data.
- */
-static int siw_tcp_sendpages(struct socket *s, struct page **page, int offset,
-			     size_t size)
-{
-	struct bio_vec bvec;
-	struct msghdr msg = {
-		.msg_flags = (MSG_MORE | MSG_DONTWAIT | MSG_SENDPAGE_NOTLAST |
-			      MSG_SPLICE_PAGES),
-	};
-	struct sock *sk = s->sk;
-	int i = 0, rv = 0, sent = 0;
-
-	while (size) {
-		size_t bytes = min_t(size_t, PAGE_SIZE - offset, size);
-
-		if (size + offset <= PAGE_SIZE)
-			msg.msg_flags = MSG_MORE | MSG_DONTWAIT;
-
-		tcp_rate_check_app_limited(sk);
-		bvec_set_page(&bvec, page[i], bytes, offset);
-		iov_iter_bvec(&msg.msg_iter, ITER_SOURCE, &bvec, 1, size);
-
-try_page_again:
-		lock_sock(sk);
-		rv = tcp_sendmsg_locked(sk, &msg, size);
-		release_sock(sk);
-
-		if (rv > 0) {
-			size -= rv;
-			sent += rv;
-			if (rv != bytes) {
-				offset += rv;
-				bytes -= rv;
-				goto try_page_again;
-			}
-			offset = 0;
-		} else {
-			if (rv == -EAGAIN || rv == 0)
-				break;
-			return rv;
-		}
-		i++;
-	}
-	return sent;
-}
-
-/*
- * siw_0copy_tx()
- *
- * Pushes list of pages to TCP socket. If pages from multiple
- * SGE's, all referenced pages of each SGE are pushed in one
- * shot.
- */
-static int siw_0copy_tx(struct socket *s, struct page **page,
-			struct siw_sge *sge, unsigned int offset,
-			unsigned int size)
-{
-	int i = 0, sent = 0, rv;
-	int sge_bytes = min(sge->length - offset, size);
-
-	offset = (sge->laddr + offset) & ~PAGE_MASK;
-
-	while (sent != size) {
-		rv = siw_tcp_sendpages(s, &page[i], offset, sge_bytes);
-		if (rv >= 0) {
-			sent += rv;
-			if (size == sent || sge_bytes > rv)
-				break;
-
-			i += PAGE_ALIGN(sge_bytes + offset) >> PAGE_SHIFT;
-			sge++;
-			sge_bytes = min(sge->length, size - sent);
-			offset = sge->laddr & ~PAGE_MASK;
-		} else {
-			sent = rv;
-			break;
-		}
-	}
-	return sent;
-}
-
 #define MAX_TRAILER (MPA_CRC_SIZE + 4)
 
-static void siw_unmap_pages(struct kvec *iov, unsigned long kmap_mask, int len)
-{
-	int i;
-
-	/*
-	 * Work backwards through the array to honor the kmap_local_page()
-	 * ordering requirements.
-	 */
-	for (i = (len-1); i >= 0; i--) {
-		if (kmap_mask & BIT(i)) {
-			unsigned long addr = (unsigned long)iov[i].iov_base;
-
-			kunmap_local((void *)(addr & PAGE_MASK));
-		}
-	}
-}
-
 /*
  * siw_tx_hdt() tries to push a complete packet to TCP where all
  * packet fragments are referenced by the elements of one iovec.
@@ -439,15 +333,14 @@ static int siw_tx_hdt(struct siw_iwarp_tx *c_tx, struct socket *s)
 {
 	struct siw_wqe *wqe = &c_tx->wqe_active;
 	struct siw_sge *sge = &wqe->sqe.sge[c_tx->sge_idx];
-	struct kvec iov[MAX_ARRAY];
-	struct page *page_array[MAX_ARRAY];
+	struct bio_vec bvec[MAX_ARRAY];
 	struct msghdr msg = { .msg_flags = MSG_DONTWAIT | MSG_EOR };
+	void *trl, *t;
 
 	int seg = 0, do_crc = c_tx->do_crc, is_kva = 0, rv;
 	unsigned int data_len = c_tx->bytes_unsent, hdr_len = 0, trl_len = 0,
 		     sge_off = c_tx->sge_off, sge_idx = c_tx->sge_idx,
 		     pbl_idx = c_tx->pbl_idx;
-	unsigned long kmap_mask = 0L;
 
 	if (c_tx->state == SIW_SEND_HDR) {
 		if (c_tx->use_sendpage) {
@@ -457,10 +350,15 @@ static int siw_tx_hdt(struct siw_iwarp_tx *c_tx, struct socket *s)
 			c_tx->state = SIW_SEND_DATA;
 		} else {
-			iov[0].iov_base =
-				(char *)&c_tx->pkt.ctrl + c_tx->ctrl_sent;
-			iov[0].iov_len = hdr_len =
-				c_tx->ctrl_len - c_tx->ctrl_sent;
+			const void *hdr = &c_tx->pkt.ctrl + c_tx->ctrl_sent;
+			void *h;
+
+			rv = -ENOMEM;
+			hdr_len = c_tx->ctrl_len - c_tx->ctrl_sent;
+			h = page_frag_memdup(NULL, hdr, hdr_len, GFP_NOFS, ULONG_MAX);
+			if (!h)
+				goto done;
+			bvec_set_virt(&bvec[0], h, hdr_len);
 			seg = 1;
 		}
 	}
@@ -478,28 +376,9 @@ static int siw_tx_hdt(struct siw_iwarp_tx *c_tx, struct socket *s)
 		} else {
 			is_kva = 1;
 		}
-		if (is_kva && !c_tx->use_sendpage) {
-			/*
-			 * tx from kernel virtual address: either inline data
-			 * or memory region with assigned kernel buffer
-			 */
-			iov[seg].iov_base =
-				(void *)(uintptr_t)(sge->laddr + sge_off);
-			iov[seg].iov_len = sge_len;
-
-			if (do_crc)
-				crypto_shash_update(c_tx->mpa_crc_hd,
-						    iov[seg].iov_base,
-						    sge_len);
-			sge_off += sge_len;
-			data_len -= sge_len;
-			seg++;
-			goto sge_done;
-		}
 
 		while (sge_len) {
 			size_t plen = min((int)PAGE_SIZE - fp_off, sge_len);
-			void *kaddr;
 
 			if (!is_kva) {
 				struct page *p;
@@ -512,33 +391,12 @@ static int siw_tx_hdt(struct siw_iwarp_tx *c_tx, struct socket *s)
 				p = siw_get_upage(mem->umem,
 						  sge->laddr + sge_off);
 				if (unlikely(!p)) {
-					siw_unmap_pages(iov, kmap_mask, seg);
 					wqe->processed -= c_tx->bytes_unsent;
 					rv = -EFAULT;
 					goto done_crc;
 				}
-				page_array[seg] = p;
-
-				if (!c_tx->use_sendpage) {
-					void *kaddr = kmap_local_page(p);
-
-					/* Remember for later kunmap() */
-					kmap_mask |= BIT(seg);
-					iov[seg].iov_base = kaddr + fp_off;
-					iov[seg].iov_len = plen;
-
-					if (do_crc)
-						crypto_shash_update(
-							c_tx->mpa_crc_hd,
-							iov[seg].iov_base,
-							plen);
-				} else if (do_crc) {
-					kaddr = kmap_local_page(p);
-					crypto_shash_update(c_tx->mpa_crc_hd,
-							    kaddr + fp_off,
-							    plen);
-					kunmap_local(kaddr);
-				}
+
+				bvec_set_page(&bvec[seg], p, plen, fp_off);
 			} else {
 				/*
 				 * Cast to an uintptr_t to preserve all 64 bits
@@ -552,12 +410,15 @@ static int siw_tx_hdt(struct siw_iwarp_tx *c_tx, struct socket *s)
 				 * bits on a 64 bit platform and 32 bits on a
 				 * 32 bit platform.
 				 */
-				page_array[seg] = virt_to_page((void *)(va & PAGE_MASK));
-				if (do_crc)
-					crypto_shash_update(
-						c_tx->mpa_crc_hd,
-						(void *)va,
-						plen);
+				bvec_set_virt(&bvec[seg], (void *)va, plen);
+			}
+
+			if (do_crc) {
+				void *kaddr = kmap_local_page(bvec[seg].bv_page);
+				crypto_shash_update(c_tx->mpa_crc_hd,
+						    kaddr + bvec[seg].bv_offset,
+						    bvec[seg].bv_len);
+				kunmap_local(kaddr);
 			}
 
 			sge_len -= plen;
@@ -567,13 +428,12 @@ static int siw_tx_hdt(struct siw_iwarp_tx *c_tx, struct socket *s)
 
 			if (++seg > (int)MAX_ARRAY) {
 				siw_dbg_qp(tx_qp(c_tx), "to many fragments\n");
-				siw_unmap_pages(iov, kmap_mask, seg-1);
 				wqe->processed -= c_tx->bytes_unsent;
 				rv = -EMSGSIZE;
 				goto done_crc;
 			}
 		}
-sge_done:
+
 		/* Update SGE variables at end of SGE */
 		if (sge_off == sge->length &&
 		    (data_len != 0 || wqe->processed < wqe->bytes)) {
@@ -582,15 +442,8 @@ static int siw_tx_hdt(struct siw_iwarp_tx *c_tx, struct socket *s)
 			sge_off = 0;
 		}
 	}
-	/* trailer */
-	if (likely(c_tx->state != SIW_SEND_TRAILER)) {
-		iov[seg].iov_base = &c_tx->trailer.pad[4 - c_tx->pad];
-		iov[seg].iov_len = trl_len = MAX_TRAILER - (4 - c_tx->pad);
-	} else {
-		iov[seg].iov_base = &c_tx->trailer.pad[c_tx->ctrl_sent];
-		iov[seg].iov_len = trl_len = MAX_TRAILER - c_tx->ctrl_sent;
-	}
 
+	/* Set the CRC in the trailer */
 	if (c_tx->pad) {
 		*(u32 *)c_tx->trailer.pad = 0;
 		if (do_crc)
@@ -603,23 +456,29 @@ static int siw_tx_hdt(struct siw_iwarp_tx *c_tx, struct socket *s)
 	else if (do_crc)
 		crypto_shash_final(c_tx->mpa_crc_hd, (u8 *)&c_tx->trailer.crc);
 
-	data_len = c_tx->bytes_unsent;
-
-	if (c_tx->use_sendpage) {
-		rv = siw_0copy_tx(s, page_array, &wqe->sqe.sge[c_tx->sge_idx],
-				  c_tx->sge_off, data_len);
-		if (rv == data_len) {
-			rv = kernel_sendmsg(s, &msg, &iov[seg], 1, trl_len);
-			if (rv > 0)
-				rv += data_len;
-			else
-				rv = data_len;
-		}
+	/* Copy the trailer and add it to the output list */
+	if (likely(c_tx->state != SIW_SEND_TRAILER)) {
+		trl = &c_tx->trailer.pad[4 - c_tx->pad];
+		trl_len = MAX_TRAILER - (4 - c_tx->pad);
 	} else {
-		rv = kernel_sendmsg(s, &msg, iov, seg + 1,
-				    hdr_len + data_len + trl_len);
-		siw_unmap_pages(iov, kmap_mask, seg);
+		trl = &c_tx->trailer.pad[c_tx->ctrl_sent];
+		trl_len = MAX_TRAILER - c_tx->ctrl_sent;
 	}
+
+	rv = -ENOMEM;
+	t = page_frag_memdup(NULL, trl, trl_len, GFP_NOFS, ULONG_MAX);
+	if (!t)
+		goto done_crc;
+	bvec_set_virt(&bvec[seg], t, trl_len);
+
+	data_len = c_tx->bytes_unsent;
+
+	if (c_tx->use_sendpage)
+		msg.msg_flags |= MSG_SPLICE_PAGES;
+	iov_iter_bvec(&msg.msg_iter, ITER_SOURCE, bvec, seg + 1,
+		      hdr_len + data_len + trl_len);
+	rv = sock_sendmsg(s, &msg);
+
 	if (rv < (int)hdr_len) {
 		/* Not even complete hdr pushed or negative rv */
 		wqe->processed -= data_len;
@@ -680,6 +539,9 @@ static int siw_tx_hdt(struct siw_iwarp_tx *c_tx, struct socket *s)
 	}
 done_crc:
 	c_tx->do_crc = 0;
+	if (c_tx->state == SIW_SEND_HDR)
+		folio_put(page_folio(bvec[0].bv_page));
+	folio_put(page_folio(bvec[seg].bv_page));
 done:
 	return rv;
 }