From patchwork Wed Jun 7 18:19:18 2023
From: David Howells
To: netdev@vger.kernel.org, Linus Torvalds
Cc: David Howells, Chuck Lever, Boris Pismenny, John Fastabend,
 Jakub Kicinski, "David S. Miller", Eric Dumazet, Paolo Abeni,
 Willem de Bruijn, David Ahern, Matthew Wilcox, Jens Axboe,
 linux-mm@kvack.org, linux-kernel@vger.kernel.org, bpf@vger.kernel.org
Subject: [PATCH net-next v6 12/14] tls/sw: Convert tls_sw_sendpage() to use
 MSG_SPLICE_PAGES
Date: Wed, 7 Jun 2023 19:19:18 +0100
Message-ID: <20230607181920.2294972-13-dhowells@redhat.com>
In-Reply-To: <20230607181920.2294972-1-dhowells@redhat.com>
References: <20230607181920.2294972-1-dhowells@redhat.com>
MIME-Version: 1.0
Convert tls_sw_sendpage() and tls_sw_sendpage_locked() to use sendmsg() with
MSG_SPLICE_PAGES rather than directly splicing in the pages itself.

[!] Note that tls_sw_sendpage_locked() appears to have the wrong locking
    upstream.
    I think the caller will only hold the socket lock, but it should
    hold tls_ctx->tx_lock too.

This allows ->sendpage() to be replaced by something that can handle
multiple multipage folios in a single transaction.

Signed-off-by: David Howells
Reviewed-by: Jakub Kicinski
cc: Chuck Lever
cc: Boris Pismenny
cc: John Fastabend
cc: Eric Dumazet
cc: "David S. Miller"
cc: Paolo Abeni
cc: Jens Axboe
cc: Matthew Wilcox
cc: netdev@vger.kernel.org
cc: bpf@vger.kernel.org
---
 net/tls/tls_sw.c | 173 ++++++++++------------------------------------
 1 file changed, 35 insertions(+), 138 deletions(-)

diff --git a/net/tls/tls_sw.c b/net/tls/tls_sw.c
index 2d2bb933d2a6..319f61590d2c 100644
--- a/net/tls/tls_sw.c
+++ b/net/tls/tls_sw.c
@@ -960,7 +960,8 @@ static int tls_sw_sendmsg_splice(struct sock *sk, struct msghdr *msg,
 	return 0;
 }
 
-int tls_sw_sendmsg(struct sock *sk, struct msghdr *msg, size_t size)
+static int tls_sw_sendmsg_locked(struct sock *sk, struct msghdr *msg,
+				 size_t size)
 {
 	long timeo = sock_sndtimeo(sk, msg->msg_flags & MSG_DONTWAIT);
 	struct tls_context *tls_ctx = tls_get_ctx(sk);
@@ -983,15 +984,6 @@ int tls_sw_sendmsg(struct sock *sk, struct msghdr *msg, size_t size)
 	int ret = 0;
 	int pending;
 
-	if (msg->msg_flags & ~(MSG_MORE | MSG_DONTWAIT | MSG_NOSIGNAL |
-			       MSG_CMSG_COMPAT | MSG_SPLICE_PAGES))
-		return -EOPNOTSUPP;
-
-	ret = mutex_lock_interruptible(&tls_ctx->tx_lock);
-	if (ret)
-		return ret;
-	lock_sock(sk);
-
 	if (unlikely(msg->msg_controllen)) {
 		ret = tls_process_cmsg(sk, msg, &record_type);
 		if (ret) {
@@ -1192,10 +1184,27 @@ int tls_sw_sendmsg(struct sock *sk, struct msghdr *msg, size_t size)
 
 send_end:
 	ret = sk_stream_error(sk, msg->msg_flags, ret);
+	return copied > 0 ? copied : ret;
+}
 
+int tls_sw_sendmsg(struct sock *sk, struct msghdr *msg, size_t size)
+{
+	struct tls_context *tls_ctx = tls_get_ctx(sk);
+	int ret;
+
+	if (msg->msg_flags & ~(MSG_MORE | MSG_DONTWAIT | MSG_NOSIGNAL |
+			       MSG_CMSG_COMPAT | MSG_SPLICE_PAGES |
+			       MSG_SENDPAGE_NOTLAST | MSG_SENDPAGE_NOPOLICY))
+		return -EOPNOTSUPP;
+
+	ret = mutex_lock_interruptible(&tls_ctx->tx_lock);
+	if (ret)
+		return ret;
+	lock_sock(sk);
+	ret = tls_sw_sendmsg_locked(sk, msg, size);
 	release_sock(sk);
 	mutex_unlock(&tls_ctx->tx_lock);
-	return copied > 0 ? copied : ret;
+	return ret;
 }
 
 /*
@@ -1272,151 +1281,39 @@ void tls_sw_splice_eof(struct socket *sock)
 	mutex_unlock(&tls_ctx->tx_lock);
 }
 
-static int tls_sw_do_sendpage(struct sock *sk, struct page *page,
-			      int offset, size_t size, int flags)
-{
-	long timeo = sock_sndtimeo(sk, flags & MSG_DONTWAIT);
-	struct tls_context *tls_ctx = tls_get_ctx(sk);
-	struct tls_sw_context_tx *ctx = tls_sw_ctx_tx(tls_ctx);
-	struct tls_prot_info *prot = &tls_ctx->prot_info;
-	unsigned char record_type = TLS_RECORD_TYPE_DATA;
-	struct sk_msg *msg_pl;
-	struct tls_rec *rec;
-	int num_async = 0;
-	ssize_t copied = 0;
-	bool full_record;
-	int record_room;
-	int ret = 0;
-	bool eor;
-
-	eor = !(flags & MSG_SENDPAGE_NOTLAST);
-	sk_clear_bit(SOCKWQ_ASYNC_NOSPACE, sk);
-
-	/* Call the sk_stream functions to manage the sndbuf mem. */
-	while (size > 0) {
-		size_t copy, required_size;
-
-		if (sk->sk_err) {
-			ret = -sk->sk_err;
-			goto sendpage_end;
-		}
-
-		if (ctx->open_rec)
-			rec = ctx->open_rec;
-		else
-			rec = ctx->open_rec = tls_get_rec(sk);
-		if (!rec) {
-			ret = -ENOMEM;
-			goto sendpage_end;
-		}
-
-		msg_pl = &rec->msg_plaintext;
-
-		full_record = false;
-		record_room = TLS_MAX_PAYLOAD_SIZE - msg_pl->sg.size;
-		copy = size;
-		if (copy >= record_room) {
-			copy = record_room;
-			full_record = true;
-		}
-
-		required_size = msg_pl->sg.size + copy + prot->overhead_size;
-
-		if (!sk_stream_memory_free(sk))
-			goto wait_for_sndbuf;
-alloc_payload:
-		ret = tls_alloc_encrypted_msg(sk, required_size);
-		if (ret) {
-			if (ret != -ENOSPC)
-				goto wait_for_memory;
-
-			/* Adjust copy according to the amount that was
-			 * actually allocated. The difference is due
-			 * to max sg elements limit
-			 */
-			copy -= required_size - msg_pl->sg.size;
-			full_record = true;
-		}
-
-		sk_msg_page_add(msg_pl, page, copy, offset);
-		sk_mem_charge(sk, copy);
-
-		offset += copy;
-		size -= copy;
-		copied += copy;
-
-		tls_ctx->pending_open_record_frags = true;
-		if (full_record || eor || sk_msg_full(msg_pl)) {
-			ret = bpf_exec_tx_verdict(msg_pl, sk, full_record,
-						  record_type, &copied, flags);
-			if (ret) {
-				if (ret == -EINPROGRESS)
-					num_async++;
-				else if (ret == -ENOMEM)
-					goto wait_for_memory;
-				else if (ret != -EAGAIN) {
-					if (ret == -ENOSPC)
-						ret = 0;
-					goto sendpage_end;
-				}
-			}
-		}
-		continue;
-wait_for_sndbuf:
-		set_bit(SOCK_NOSPACE, &sk->sk_socket->flags);
-wait_for_memory:
-		ret = sk_stream_wait_memory(sk, &timeo);
-		if (ret) {
-			if (ctx->open_rec)
-				tls_trim_both_msgs(sk, msg_pl->sg.size);
-			goto sendpage_end;
-		}
-
-		if (ctx->open_rec)
-			goto alloc_payload;
-	}
-
-	if (num_async) {
-		/* Transmit if any encryptions have completed */
-		if (test_and_clear_bit(BIT_TX_SCHEDULED, &ctx->tx_bitmask)) {
-			cancel_delayed_work(&ctx->tx_work.work);
-			tls_tx_records(sk, flags);
-		}
-	}
-sendpage_end:
-	ret = sk_stream_error(sk, flags, ret);
-	return copied > 0 ? copied : ret;
-}
-
 int tls_sw_sendpage_locked(struct sock *sk, struct page *page,
 			   int offset, size_t size, int flags)
 {
+	struct bio_vec bvec;
+	struct msghdr msg = { .msg_flags = flags | MSG_SPLICE_PAGES, };
+
 	if (flags & ~(MSG_MORE | MSG_DONTWAIT | MSG_NOSIGNAL |
 		      MSG_SENDPAGE_NOTLAST | MSG_SENDPAGE_NOPOLICY |
 		      MSG_NO_SHARED_FRAGS))
 		return -EOPNOTSUPP;
+	if (flags & MSG_SENDPAGE_NOTLAST)
+		msg.msg_flags |= MSG_MORE;
 
-	return tls_sw_do_sendpage(sk, page, offset, size, flags);
+	bvec_set_page(&bvec, page, size, offset);
+	iov_iter_bvec(&msg.msg_iter, ITER_SOURCE, &bvec, 1, size);
+	return tls_sw_sendmsg_locked(sk, &msg, size);
 }
 
 int tls_sw_sendpage(struct sock *sk, struct page *page,
 		    int offset, size_t size, int flags)
 {
-	struct tls_context *tls_ctx = tls_get_ctx(sk);
-	int ret;
+	struct bio_vec bvec;
+	struct msghdr msg = { .msg_flags = flags | MSG_SPLICE_PAGES, };
 
 	if (flags & ~(MSG_MORE | MSG_DONTWAIT | MSG_NOSIGNAL |
 		      MSG_SENDPAGE_NOTLAST | MSG_SENDPAGE_NOPOLICY))
 		return -EOPNOTSUPP;
+	if (flags & MSG_SENDPAGE_NOTLAST)
+		msg.msg_flags |= MSG_MORE;
 
-	ret = mutex_lock_interruptible(&tls_ctx->tx_lock);
-	if (ret)
-		return ret;
-	lock_sock(sk);
-	ret = tls_sw_do_sendpage(sk, page, offset, size, flags);
-	release_sock(sk);
-	mutex_unlock(&tls_ctx->tx_lock);
-	return ret;
+	bvec_set_page(&bvec, page, size, offset);
+	iov_iter_bvec(&msg.msg_iter, ITER_SOURCE, &bvec, 1, size);
+	return tls_sw_sendmsg(sk, &msg, size);
 }
 
 static int