From patchwork Fri Jun 3 05:33:04 2022
X-Patchwork-Submitter: Arseniy Krasnov
X-Patchwork-Id: 12868535
X-Patchwork-State: RFC
From: Arseniy Krasnov
To: Stefano Garzarella, Stefan Hajnoczi, "Michael S. Tsirkin",
 Jason Wang, "David S. Miller", Jakub Kicinski, Paolo Abeni
CC: linux-kernel@vger.kernel.org, kvm@vger.kernel.org,
 virtualization@lists.linux-foundation.org, netdev@vger.kernel.org,
 kernel, Krasnov Arseniy
Subject: [RFC PATCH v2 2/8] vhost/vsock: rework packet allocation logic
Date: Fri, 3 Jun 2022 05:33:04 +0000
Message-ID: <72ae7f76-ffee-3e64-d445-7a0f4261d891@sberdevices.ru>

For packets received from the virtio RX queue, use the buddy allocator
instead of 'kmalloc()', so that the resulting pages can later be
inserted into a user provided vma. The single call to 'copy_from_iter()'
is replaced with a per-page copy loop.
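The vma insertion itself lands later in this series. As a rough,
hypothetical sketch of why buddy pages are required (the function and
variable names below are illustrative, not code from this series):
'vm_insert_page()' only accepts refcounted, non-slab pages, which rules
out 'kmalloc()' memory for a buffer that must be mapped into userspace.

#include <linux/mm.h>

/* Hypothetical illustration: back a user vma with order-0 pages from
 * the buddy allocator, one page per PAGE_SIZE of buffer. Meant to be
 * called from an mmap handler, i.e. with mmap_lock already held.
 */
static int vsock_zc_back_vma(struct vm_area_struct *vma, size_t len)
{
	unsigned long uaddr = vma->vm_start;
	size_t off;

	for (off = 0; off < len; off += PAGE_SIZE) {
		struct page *page = alloc_page(GFP_KERNEL);
		int err;

		if (!page)
			return -ENOMEM;

		/* vm_insert_page() takes its own reference on success,
		 * so drop the allocation reference in both cases.
		 */
		err = vm_insert_page(vma, uaddr + off, page);
		put_page(page);
		if (err)
			return err;
	}

	return 0;
}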
Signed-off-by: Arseniy Krasnov
---
 drivers/vhost/vsock.c | 81 ++++++++++++++++++++++++++++++++++++-------
 1 file changed, 69 insertions(+), 12 deletions(-)

diff --git a/drivers/vhost/vsock.c b/drivers/vhost/vsock.c
index e6c9d41db1de..0dc2229f18f7 100644
--- a/drivers/vhost/vsock.c
+++ b/drivers/vhost/vsock.c
@@ -58,6 +58,7 @@ struct vhost_vsock {
 
 	u32 guest_cid;
 	bool seqpacket_allow;
+	bool zerocopy_rx_on;
 };
 
 static u32 vhost_transport_get_local_cid(void)
@@ -357,6 +358,7 @@ vhost_vsock_alloc_pkt(struct vhost_virtqueue *vq,
 		      unsigned int out, unsigned int in)
 {
 	struct virtio_vsock_pkt *pkt;
+	struct vhost_vsock *vsock;
 	struct iov_iter iov_iter;
 	size_t nbytes;
 	size_t len;
@@ -393,20 +395,75 @@ vhost_vsock_alloc_pkt(struct vhost_virtqueue *vq,
 		return NULL;
 	}
 
-	pkt->buf = kmalloc(pkt->len, GFP_KERNEL);
-	if (!pkt->buf) {
-		kfree(pkt);
-		return NULL;
-	}
-
 	pkt->buf_len = pkt->len;
+	vsock = container_of(vq->dev, struct vhost_vsock, dev);
 
-	nbytes = copy_from_iter(pkt->buf, pkt->len, &iov_iter);
-	if (nbytes != pkt->len) {
-		vq_err(vq, "Expected %u byte payload, got %zu bytes\n",
-		       pkt->len, nbytes);
-		virtio_transport_free_pkt(pkt);
-		return NULL;
+	if (!vsock->zerocopy_rx_on) {
+		pkt->buf = kmalloc(pkt->len, GFP_KERNEL);
+
+		if (!pkt->buf) {
+			kfree(pkt);
+			return NULL;
+		}
+
+		pkt->slab_buf = true;
+		nbytes = copy_from_iter(pkt->buf, pkt->len, &iov_iter);
+		if (nbytes != pkt->len) {
+			vq_err(vq, "Expected %u byte payload, got %zu bytes\n",
+			       pkt->len, nbytes);
+			virtio_transport_free_pkt(pkt);
+			return NULL;
+		}
+	} else {
+		struct page *buf_page;
+		ssize_t pkt_len;
+		int page_idx;
+
+		/* This creates memory overrun, as we allocate
+		 * at least one page for each packet.
+		 */
+		buf_page = alloc_pages(GFP_KERNEL, get_order(pkt->len));
+
+		if (buf_page == NULL) {
+			kfree(pkt);
+			return NULL;
+		}
+
+		pkt->buf = page_to_virt(buf_page);
+
+		page_idx = 0;
+		pkt_len = pkt->len;
+
+		/* As allocated pages are not mapped, process
+		 * pages one by one.
+		 */
+		while (pkt_len > 0) {
+			void *mapped;
+			size_t to_copy;
+
+			mapped = kmap(buf_page + page_idx);
+
+			if (mapped == NULL) {
+				virtio_transport_free_pkt(pkt);
+				return NULL;
+			}
+
+			to_copy = min(pkt_len, ((ssize_t)PAGE_SIZE));
+
+			nbytes = copy_from_iter(mapped, to_copy, &iov_iter);
+			if (nbytes != to_copy) {
+				vq_err(vq, "Expected %zu byte payload, got %zu bytes\n",
+				       to_copy, nbytes);
+				kunmap(buf_page + page_idx);
+				virtio_transport_free_pkt(pkt);
+				return NULL;
+			}
+
+			kunmap(buf_page + page_idx);
+
+			pkt_len -= to_copy;
+			page_idx++;
+		}
 	}
 
 	return pkt;
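A note on the "memory overrun" comment in the zerocopy branch:
'get_order()' rounds the buffer size up to a power-of-two number of
pages, and 'alloc_pages()' then returns 2^order contiguous pages. With
4 KiB pages, for example, a 5 KiB packet occupies an order-1 (8 KiB)
allocation, and even a 100 byte packet occupies a full 4 KiB page,
whereas 'kmalloc()' would round up only to the next slab object size.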