From patchwork Mon Feb 6 07:00:35 2023
X-Patchwork-Submitter: Arseniy Krasnov
X-Patchwork-Id: 13129379
From: Arseniy Krasnov
To: Stefan Hajnoczi, Stefano Garzarella, "Michael S. Tsirkin",
    Jason Wang, "David S. Miller", Eric Dumazet, Jakub Kicinski,
    Paolo Abeni, Arseniy Krasnov, "Krasnov Arseniy"
CC: linux-kernel@vger.kernel.org, kvm@vger.kernel.org,
    virtualization@lists.linux-foundation.org, netdev@vger.kernel.org,
    kernel
Subject: [RFC PATCH v1 07/12] vsock/virtio: MSG_ZEROCOPY flag support
Date: Mon, 6 Feb 2023 07:00:35 +0000
Message-ID: <716333a1-d6d1-3dde-d04a-365d4a361bfe@sberdevices.ru>
In-Reply-To: <0e7c6fc4-b4a6-a27b-36e9-359597bba2b5@sberdevices.ru>
X-Mailing-List: netdev@vger.kernel.org
X-Patchwork-State: RFC

This patch adds the main logic of MSG_ZEROCOPY flag processing for
packet creation. When the flag is set and the user's iov iterator is
suitable for zerocopy transmission, 'get_user_pages()' is called and
the returned pages are added to the newly created skb.
Signed-off-by: Arseniy Krasnov
---
 net/vmw_vsock/virtio_transport_common.c | 212 ++++++++++++++++++++++--
 1 file changed, 195 insertions(+), 17 deletions(-)

diff --git a/net/vmw_vsock/virtio_transport_common.c b/net/vmw_vsock/virtio_transport_common.c
index 05ce97b967ad..69e37f8a68a6 100644
--- a/net/vmw_vsock/virtio_transport_common.c
+++ b/net/vmw_vsock/virtio_transport_common.c
@@ -37,6 +37,169 @@ virtio_transport_get_ops(struct vsock_sock *vsk)
 	return container_of(t, struct virtio_transport, transport);
 }
 
+static int virtio_transport_can_zcopy(struct iov_iter *iov_iter,
+				      size_t free_space)
+{
+	size_t pages;
+	int i;
+
+	if (!iter_is_iovec(iov_iter))
+		return -1;
+
+	if (iov_iter->iov_offset)
+		return -1;
+
+	/* We can't send whole iov. */
+	if (free_space < iov_iter->count)
+		return -1;
+
+	for (pages = 0, i = 0; i < iov_iter->nr_segs; i++) {
+		const struct iovec *iovec;
+		int pages_in_elem;
+
+		iovec = &iov_iter->iov[i];
+
+		/* Base must be page aligned. */
+		if (offset_in_page(iovec->iov_base))
+			return -1;
+
+		/* Only last element could have not page aligned size. */
+		if (i != (iov_iter->nr_segs - 1)) {
+			if (offset_in_page(iovec->iov_len))
+				return -1;
+
+			pages_in_elem = iovec->iov_len >> PAGE_SHIFT;
+		} else {
+			pages_in_elem = round_up(iovec->iov_len, PAGE_SIZE);
+			pages_in_elem >>= PAGE_SHIFT;
+		}
+
+		/* In case of user's pages - one page is one frag. */
+		if (pages + pages_in_elem > MAX_SKB_FRAGS)
+			return -1;
+
+		pages += pages_in_elem;
+	}
+
+	return 0;
+}
+
+static int virtio_transport_init_zcopy_skb(struct vsock_sock *vsk,
+					   struct sk_buff *skb,
+					   struct iov_iter *iter,
+					   bool zerocopy)
+{
+	struct ubuf_info_msgzc *uarg_zc;
+	struct ubuf_info *uarg;
+
+	uarg = msg_zerocopy_realloc(sk_vsock(vsk),
+				    iov_length(iter->iov, iter->nr_segs),
+				    NULL);
+
+	if (!uarg)
+		return -1;
+
+	uarg_zc = uarg_to_msgzc(uarg);
+	uarg_zc->zerocopy = zerocopy ? 1 : 0;
+
+	skb_zcopy_init(skb, uarg);
+
+	return 0;
+}
+
+static int virtio_transport_fill_nonlinear_skb(struct sk_buff *skb,
+					       struct vsock_sock *vsk,
+					       struct virtio_vsock_pkt_info *info)
+{
+	struct iov_iter *iter;
+	int frag_idx;
+	int seg_idx;
+
+	iter = &info->msg->msg_iter;
+	frag_idx = 0;
+	VIRTIO_VSOCK_SKB_CB(skb)->curr_frag = 0;
+	VIRTIO_VSOCK_SKB_CB(skb)->frag_off = 0;
+
+	/* At this moment:
+	 * 1) 'iov_offset' is zero.
+	 * 2) Every 'iov_base' and 'iov_len' are also page aligned
+	 *    (except length of the last element).
+	 * 3) Number of pages in this iov <= MAX_SKB_FRAGS.
+	 * 4) Length of the data fits in current credit space.
+	 */
+	for (seg_idx = 0; seg_idx < iter->nr_segs; seg_idx++) {
+		struct page *user_pages[MAX_SKB_FRAGS];
+		const struct iovec *iovec;
+		size_t last_frag_len;
+		size_t pages_in_seg;
+		int page_idx;
+
+		iovec = &iter->iov[seg_idx];
+		pages_in_seg = iovec->iov_len >> PAGE_SHIFT;
+
+		if (iovec->iov_len % PAGE_SIZE) {
+			last_frag_len = iovec->iov_len % PAGE_SIZE;
+			pages_in_seg++;
+		} else {
+			last_frag_len = PAGE_SIZE;
+		}
+
+		if (get_user_pages((unsigned long)iovec->iov_base,
+				   pages_in_seg, FOLL_GET, user_pages,
+				   NULL) != pages_in_seg)
+			return -1;
+
+		for (page_idx = 0; page_idx < pages_in_seg; page_idx++) {
+			int frag_len = PAGE_SIZE;
+
+			if (page_idx == (pages_in_seg - 1))
+				frag_len = last_frag_len;
+
+			skb_fill_page_desc(skb, frag_idx++,
+					   user_pages[page_idx], 0,
+					   frag_len);
+			skb_len_add(skb, frag_len);
+		}
+	}
+
+	return virtio_transport_init_zcopy_skb(vsk, skb, iter, true);
+}
+
+static int virtio_transport_copy_payload(struct sk_buff *skb,
+					 struct vsock_sock *vsk,
+					 struct virtio_vsock_pkt_info *info,
+					 size_t len)
+{
+	void *payload;
+	int err;
+
+	payload = skb_put(skb, len);
+	err = memcpy_from_msg(payload, info->msg, len);
+	if (err)
+		return -1;
+
+	if (msg_data_left(info->msg))
+		return 0;
+
+	if (info->type == VIRTIO_VSOCK_TYPE_SEQPACKET) {
+		struct virtio_vsock_hdr *hdr;
+
+		hdr = virtio_vsock_hdr(skb);
+
+		hdr->flags |= cpu_to_le32(VIRTIO_VSOCK_SEQ_EOM);
+
+		if (info->msg->msg_flags & MSG_EOR)
+			hdr->flags |= cpu_to_le32(VIRTIO_VSOCK_SEQ_EOR);
+	}
+
+	if (info->flags & MSG_ZEROCOPY)
+		return virtio_transport_init_zcopy_skb(vsk, skb,
+						       &info->msg->msg_iter,
+						       false);
+
+	return 0;
+}
+
 /* Returns a new packet on success, otherwise returns NULL.
  *
  * If NULL is returned, errp is set to a negative errno.
@@ -47,15 +210,31 @@ virtio_transport_alloc_skb(struct virtio_vsock_pkt_info *info,
 			   u32 src_cid,
 			   u32 src_port,
 			   u32 dst_cid,
-			   u32 dst_port)
+			   u32 dst_port,
+			   struct vsock_sock *vsk)
 {
-	const size_t skb_len = VIRTIO_VSOCK_SKB_HEADROOM + len;
+	const size_t skb_len = VIRTIO_VSOCK_SKB_HEADROOM;
 	struct virtio_vsock_hdr *hdr;
 	struct sk_buff *skb;
-	void *payload;
-	int err;
+	bool use_zcopy = false;
+
+	if (info->msg) {
+		/* If SOCK_ZEROCOPY is not enabled, ignore MSG_ZEROCOPY
+		 * flag later and continue in classic way (e.g. without
+		 * completion).
+		 */
+		if (!sock_flag(sk_vsock(vsk), SOCK_ZEROCOPY)) {
+			info->flags &= ~MSG_ZEROCOPY;
+		} else {
+			if ((info->flags & MSG_ZEROCOPY) &&
+			    !virtio_transport_can_zcopy(&info->msg->msg_iter, len)) {
+				use_zcopy = true;
+			}
+		}
+	}
 
-	skb = virtio_vsock_alloc_skb(skb_len, GFP_KERNEL);
+	/* For MSG_ZEROCOPY length will be added later. */
+	skb = virtio_vsock_alloc_skb(skb_len + (use_zcopy ? 0 : len), GFP_KERNEL);
 	if (!skb)
 		return NULL;
 
@@ -70,18 +249,15 @@ virtio_transport_alloc_skb(struct virtio_vsock_pkt_info *info,
 	hdr->len	= cpu_to_le32(len);
 
 	if (info->msg && len > 0) {
-		payload = skb_put(skb, len);
-		err = memcpy_from_msg(payload, info->msg, len);
-		if (err)
-			goto out;
+		int err;
 
-		if (msg_data_left(info->msg) == 0 &&
-		    info->type == VIRTIO_VSOCK_TYPE_SEQPACKET) {
-			hdr->flags |= cpu_to_le32(VIRTIO_VSOCK_SEQ_EOM);
+		if (use_zcopy)
+			err = virtio_transport_fill_nonlinear_skb(skb, vsk, info);
+		else
+			err = virtio_transport_copy_payload(skb, vsk, info, len);
 
-			if (info->msg->msg_flags & MSG_EOR)
-				hdr->flags |= cpu_to_le32(VIRTIO_VSOCK_SEQ_EOR);
-		}
+		if (err)
+			goto out;
 	}
 
 	if (info->reply)
@@ -266,7 +442,8 @@ static int virtio_transport_send_pkt_info(struct vsock_sock *vsk,
 
 		skb = virtio_transport_alloc_skb(info, pkt_len,
 						 src_cid, src_port,
-						 dst_cid, dst_port);
+						 dst_cid, dst_port,
+						 vsk);
 		if (!skb) {
 			virtio_transport_put_credit(vvs, pkt_len);
 			return -ENOMEM;
@@ -842,6 +1019,7 @@ virtio_transport_stream_enqueue(struct vsock_sock *vsk,
 		.msg = msg,
 		.pkt_len = len,
 		.vsk = vsk,
+		.flags = msg->msg_flags,
 	};
 
 	return virtio_transport_send_pkt_info(vsk, &info);
@@ -894,7 +1072,7 @@ static int virtio_transport_reset_no_sock(const struct virtio_transport *t,
 					   le64_to_cpu(hdr->dst_cid),
 					   le32_to_cpu(hdr->dst_port),
 					   le64_to_cpu(hdr->src_cid),
-					   le32_to_cpu(hdr->src_port));
+					   le32_to_cpu(hdr->src_port), NULL);
 
 	if (!reply)
 		return -ENOMEM;