From patchwork Sun Nov 6 19:36:02 2022
X-Patchwork-Submitter: Arseniy Krasnov
X-Patchwork-Id: 13033533
From: Arseniy Krasnov
To: Stefano Garzarella, Stefan Hajnoczi, "Michael S. Tsirkin", Jason Wang,
 "David S. Miller", "edumazet@google.com", Jakub Kicinski, Paolo Abeni,
 Krasnov Arseniy
CC: "linux-kernel@vger.kernel.org", "kvm@vger.kernel.org",
 "virtualization@lists.linux-foundation.org", "netdev@vger.kernel.org", kernel
Subject: [RFC PATCH v3 01/11] virtio/vsock: rework packet allocation logic
Date: Sun, 6 Nov 2022 19:36:02 +0000
X-Mailing-List: netdev@vger.kernel.org
X-Patchwork-State: RFC

To support zerocopy receive, the packet buffer allocation is reworked: buffers
that may be mapped into a user's vma can no longer be allocated with
'kmalloc()', because the kernel does not allow slab pages to be mapped into
user space, so such buffers are now taken directly from the buddy allocator.
Tx packets, which are never mapped to user space, keep using 'kmalloc()', and
a new flag in the packet structure records which allocator was used so that
the buffer can be freed correctly.
Signed-off-by: Arseniy Krasnov
---
 drivers/vhost/vsock.c                   |  1 +
 include/linux/virtio_vsock.h            |  1 +
 net/vmw_vsock/virtio_transport.c        |  8 ++++++--
 net/vmw_vsock/virtio_transport_common.c | 10 +++++++++-
 4 files changed, 17 insertions(+), 3 deletions(-)

diff --git a/drivers/vhost/vsock.c b/drivers/vhost/vsock.c
index 5703775af129..65475d128a1d 100644
--- a/drivers/vhost/vsock.c
+++ b/drivers/vhost/vsock.c
@@ -399,6 +399,7 @@ vhost_vsock_alloc_pkt(struct vhost_virtqueue *vq,
 		return NULL;
 	}
 
+	pkt->slab_buf = true;
 	pkt->buf_len = pkt->len;
 
 	nbytes = copy_from_iter(pkt->buf, pkt->len, &iov_iter);
diff --git a/include/linux/virtio_vsock.h b/include/linux/virtio_vsock.h
index 35d7eedb5e8e..d02cb7aa922f 100644
--- a/include/linux/virtio_vsock.h
+++ b/include/linux/virtio_vsock.h
@@ -50,6 +50,7 @@ struct virtio_vsock_pkt {
 	u32 off;
 	bool reply;
 	bool tap_delivered;
+	bool slab_buf;
 };
 
 struct virtio_vsock_pkt_info {
diff --git a/net/vmw_vsock/virtio_transport.c b/net/vmw_vsock/virtio_transport.c
index ad64f403536a..19909c1e9ba3 100644
--- a/net/vmw_vsock/virtio_transport.c
+++ b/net/vmw_vsock/virtio_transport.c
@@ -255,16 +255,20 @@ static void virtio_vsock_rx_fill(struct virtio_vsock *vsock)
 	vq = vsock->vqs[VSOCK_VQ_RX];
 
 	do {
+		struct page *buf_page;
+
 		pkt = kzalloc(sizeof(*pkt), GFP_KERNEL);
 		if (!pkt)
 			break;
 
-		pkt->buf = kmalloc(buf_len, GFP_KERNEL);
-		if (!pkt->buf) {
+		buf_page = alloc_page(GFP_KERNEL);
+
+		if (!buf_page) {
 			virtio_transport_free_pkt(pkt);
 			break;
 		}
 
+		pkt->buf = page_to_virt(buf_page);
 		pkt->buf_len = buf_len;
 		pkt->len = buf_len;
 
diff --git a/net/vmw_vsock/virtio_transport_common.c b/net/vmw_vsock/virtio_transport_common.c
index a9980e9b9304..37e8dbfe2f5d 100644
--- a/net/vmw_vsock/virtio_transport_common.c
+++ b/net/vmw_vsock/virtio_transport_common.c
@@ -69,6 +69,7 @@ virtio_transport_alloc_pkt(struct virtio_vsock_pkt_info *info,
 	if (!pkt->buf)
 		goto out_pkt;
 
+	pkt->slab_buf = true;
 	pkt->buf_len = len;
 
 	err = memcpy_from_msg(pkt->buf, info->msg, len);
@@ -1339,7 +1340,14 @@ EXPORT_SYMBOL_GPL(virtio_transport_recv_pkt);
 
 void virtio_transport_free_pkt(struct virtio_vsock_pkt *pkt)
 {
-	kvfree(pkt->buf);
+	if (pkt->buf_len) {
+		if (pkt->slab_buf)
+			kvfree(pkt->buf);
+		else
+			free_pages((unsigned long)pkt->buf,
+				   get_order(pkt->buf_len));
+	}
+
 	kfree(pkt);
 }
 EXPORT_SYMBOL_GPL(virtio_transport_free_pkt);
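
Purely as an illustration of the allocation/free split this patch describes (not
part of the patch itself), the idea boils down to the sketch below; the
demo_pkt structure and demo_* helper names are made up here, only the kernel
APIs (alloc_page(), page_to_virt(), kmalloc(), kvfree(), free_pages(),
get_order()) are real:

/*
 * RX buffers come straight from the buddy allocator so the pages can later
 * be mapped into a user vma; TX buffers keep using kmalloc() and are marked
 * with slab_buf so the free path picks the matching release call.
 */
#include <linux/errno.h>
#include <linux/gfp.h>
#include <linux/mm.h>
#include <linux/slab.h>
#include <linux/types.h>

struct demo_pkt {
	void *buf;
	u32 buf_len;
	bool slab_buf;		/* true: kmalloc()'d, false: buddy pages */
};

static int demo_alloc_rx_buf(struct demo_pkt *pkt, u32 len)
{
	struct page *page = alloc_page(GFP_KERNEL);	/* mappable to user vma */

	if (!page)
		return -ENOMEM;

	pkt->buf = page_to_virt(page);
	pkt->buf_len = len;
	pkt->slab_buf = false;
	return 0;
}

static int demo_alloc_tx_buf(struct demo_pkt *pkt, u32 len)
{
	pkt->buf = kmalloc(len, GFP_KERNEL);	/* never mapped to user space */
	if (!pkt->buf)
		return -ENOMEM;

	pkt->buf_len = len;
	pkt->slab_buf = true;
	return 0;
}

static void demo_free_buf(struct demo_pkt *pkt)
{
	if (!pkt->buf_len)
		return;

	if (pkt->slab_buf)
		kvfree(pkt->buf);
	else
		free_pages((unsigned long)pkt->buf, get_order(pkt->buf_len));
}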