From patchwork Thu May 12 05:09:19 2022
X-Patchwork-Submitter: Arseniy Krasnov
X-Patchwork-Id: 12847038
X-Patchwork-State: RFC
From: Arseniy Krasnov
To: Stefan Hajnoczi, Stefano Garzarella, "Michael S. Tsirkin",
    Jason Wang, "David S. Miller", Jakub Kicinski, Paolo Abeni
CC: linux-kernel@vger.kernel.org, kvm@vger.kernel.org,
    virtualization@lists.linux-foundation.org, netdev@vger.kernel.org,
    kernel
Subject: [RFC PATCH v1 2/8] vhost/vsock: rework packet allocation logic
Date: Thu, 12 May 2022 05:09:19 +0000
Message-ID: <988e9e3c-7993-d6e2-626d-deb46248ed9f@sberdevices.ru>
In-Reply-To: <7cdcb1e1-7c97-c054-19cf-5caeacae981d@sberdevices.ru>
X-Mailing-List: netdev@vger.kernel.org

For packets received from the virtio RX queue, use the buddy allocator
instead of kmalloc() so that the resulting pages can later be inserted
into a user-provided vma. Since the allocated pages are not guaranteed
to be mapped, the single copy_from_iter() call is replaced with a
per-page loop.
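To make the motivation concrete: pages obtained from alloc_pages() can be
mapped into a user vma with vm_insert_page(), whereas kmalloc() memory
typically comes from slab pages, which vm_insert_page() refuses. Below is
a minimal, hypothetical sketch of that insertion step; vsock_pkt_to_vma(),
its parameters, and the surrounding context are illustrative only, since
the actual receive-to-vma path is added by later patches in this series:

#include <linux/mm.h>

/* Hypothetical sketch, not part of this patch: map the pages backing
 * a received packet into a user vma. Assumes 'vma' and 'uaddr' come
 * from an mmap()-style handler and that every inserted page is
 * individually refcounted.
 */
static int vsock_pkt_to_vma(struct vm_area_struct *vma, unsigned long uaddr,
                            struct page *buf_page, size_t len)
{
        unsigned int i;

        for (i = 0; i < (1U << get_order(len)); i++) {
                /* vm_insert_page() only accepts refcounted, non-slab
                 * pages, which alloc_pages() provides and kmalloc()
                 * does not guarantee.
                 */
                int err = vm_insert_page(vma, uaddr + i * PAGE_SIZE,
                                         buf_page + i);
                if (err)
                        return err;
        }

        return 0;
}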
Signed-off-by: Arseniy Krasnov
---
 drivers/vhost/vsock.c | 50 ++++++++++++++++++++++++++++++++++++++++++--------
 1 file changed, 42 insertions(+), 8 deletions(-)

diff --git a/drivers/vhost/vsock.c b/drivers/vhost/vsock.c
index 37f0b4274113..157798985389 100644
--- a/drivers/vhost/vsock.c
+++ b/drivers/vhost/vsock.c
@@ -360,6 +360,9 @@ vhost_vsock_alloc_pkt(struct vhost_virtqueue *vq,
         struct iov_iter iov_iter;
         size_t nbytes;
         size_t len;
+        struct page *buf_page;
+        ssize_t pkt_len;
+        int page_idx;
 
         if (in != 0) {
                 vq_err(vq, "Expected 0 input buffers, got %u\n", in);
@@ -393,20 +396,51 @@ vhost_vsock_alloc_pkt(struct vhost_virtqueue *vq,
                 return NULL;
         }
 
-        pkt->buf = kmalloc(pkt->len, GFP_KERNEL);
-        if (!pkt->buf) {
+        /* This overallocates for small packets, as at least one
+         * full page is used for each packet.
+         */
+        buf_page = alloc_pages(GFP_KERNEL, get_order(pkt->len));
+
+        if (buf_page == NULL) {
                 kfree(pkt);
                 return NULL;
         }
 
+        pkt->buf = page_to_virt(buf_page);
         pkt->buf_len = pkt->len;
 
-        nbytes = copy_from_iter(pkt->buf, pkt->len, &iov_iter);
-        if (nbytes != pkt->len) {
-                vq_err(vq, "Expected %u byte payload, got %zu bytes\n",
-                       pkt->len, nbytes);
-                virtio_transport_free_pkt(pkt);
-                return NULL;
+        page_idx = 0;
+        pkt_len = pkt->len;
+
+        /* Since the allocated pages are not mapped, process
+         * them one by one.
+         */
+        while (pkt_len > 0) {
+                void *mapped;
+                size_t to_copy;
+
+                mapped = kmap(buf_page + page_idx);
+
+                if (mapped == NULL) {
+                        virtio_transport_free_pkt(pkt);
+                        return NULL;
+                }
+
+                to_copy = min(pkt_len, (ssize_t)PAGE_SIZE);
+
+                nbytes = copy_from_iter(mapped, to_copy, &iov_iter);
+                if (nbytes != to_copy) {
+                        vq_err(vq, "Expected %zu byte payload, got %zu bytes\n",
+                               to_copy, nbytes);
+                        kunmap(buf_page + page_idx);
+                        virtio_transport_free_pkt(pkt);
+                        return NULL;
+                }
+
+                kunmap(buf_page + page_idx);
+
+                pkt_len -= to_copy;
+                page_idx++;
         }
 
         return pkt;
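
A note on the trade-off called out in the comment above: get_order()
rounds the allocation up to the next power-of-two number of pages, so
every packet now consumes at least one full page. Assuming 4 KiB pages:

        pkt->len = 100   -> order 0 -> 1 page  (4096 bytes)
        pkt->len = 5000  -> order 1 -> 2 pages (8192 bytes)
        pkt->len = 9000  -> order 2 -> 4 pages (16384 bytes)

kmalloc(), by contrast, serves sub-page sizes from slab caches, so this
change trades memory efficiency on small packets for pages that can be
handed to userspace. Also worth noting: on 64-bit kernels kmap() simply
returns the page's direct-map address, so the per-page loop only pays a
real mapping cost on 32-bit HIGHMEM configurations.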